[
  {
    "path": ".asf.yaml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# This file controls the settings of this repository\n#\n# See more details at\n# https://cwiki.apache.org/confluence/display/INFRA/Git+-+.asf.yaml+features\n\nnotifications:\n  commits: commits@datafusion.apache.org\n  issues: github@datafusion.apache.org\n  pullrequests: github@datafusion.apache.org\n  discussions: github@datafusion.apache.org\n  jira_options: link label worklog\ngithub:\n  description: \"Apache DataFusion Comet Spark Accelerator\"\n  homepage: https://datafusion.apache.org/comet\n  labels:\n    - arrow\n    - datafusion\n    - rust\n    - spark\n  enabled_merge_buttons:\n    squash: true\n    merge: false\n    rebase: false\n  features:\n    issues: true\n    discussions: true\n  protected_branches:\n    main:\n      required_pull_request_reviews:\n        required_approving_review_count: 1\n  pull_requests:\n    allow_update_branch: true\n# publishes the content of the `asf-site` branch to\n# https://datafusion.apache.org/comet/\npublish:\n  whoami: asf-site\n  subdir: comet\n"
  },
  {
    "path": ".claude/skills/audit-comet-expression/SKILL.md",
    "content": "---\nname: audit-comet-expression\ndescription: Audit an existing Comet expression for correctness and test coverage. Studies the Spark implementation across versions 3.4.3, 3.5.8, and 4.0.1, reviews the Comet and DataFusion implementations, identifies missing test coverage, and offers to implement additional tests.\nargument-hint: <expression-name>\n---\n\nAudit the Comet implementation of the `$ARGUMENTS` expression for correctness and test coverage.\n\n## Overview\n\nThis audit covers:\n\n1. Spark implementation across versions 3.4.3, 3.5.8, and 4.0.1\n2. Comet Scala serde implementation\n3. Comet Rust / DataFusion implementation\n4. Existing test coverage (SQL file tests and Scala tests)\n5. Gap analysis and test recommendations\n\n---\n\n## Step 1: Locate the Spark Implementations\n\nClone specific Spark version tags (use shallow clones to avoid polluting the workspace). Only clone a version if it is not already present.\n\n```bash\nset -eu -o pipefail\nfor tag in v3.4.3 v3.5.8 v4.0.1; do\n  dir=\"/tmp/spark-${tag}\"\n  if [ ! -d \"$dir\" ]; then\n    git clone --depth 1 --branch \"$tag\" https://github.com/apache/spark.git \"$dir\"\n  fi\ndone\n```\n\n### Find the expression class in each Spark version\n\nSearch the Catalyst SQL expressions source:\n\n```bash\nfor tag in v3.4.3 v3.5.8 v4.0.1; do\n  dir=\"/tmp/spark-${tag}\"\n  echo \"=== $tag ===\"\n  find \"$dir/sql/catalyst/src/main/scala\" -name \"*.scala\" | \\\n    xargs grep -l \"case class $ARGUMENTS\\b\\|object $ARGUMENTS\\b\" 2>/dev/null\ndone\n```\n\nIf the expression is not found in catalyst, also check core:\n\n```bash\nfor tag in v3.4.3 v3.5.8 v4.0.1; do\n  dir=\"/tmp/spark-${tag}\"\n  echo \"=== $tag ===\"\n  find \"$dir/sql\" -name \"*.scala\" | \\\n    xargs grep -l \"case class $ARGUMENTS\\b\\|object $ARGUMENTS\\b\" 2>/dev/null\ndone\n```\n\n### Read the Spark source for each version\n\nFor each Spark version, read the expression file and note:\n\n- The `eval`, `nullSafeEval`, and `doGenCode` / `doGenCodeSafe` methods\n- The `inputTypes` and `dataType` fields (accepted input types, return type)\n- Null handling strategy (`nullable`, `nullSafeEval`)\n- ANSI mode behavior (`ansiEnabled`, `failOnError`)\n- Special cases, guards, `require` assertions, and runtime exceptions\n- Any constants or configuration the expression reads\n\n### Compare across Spark versions\n\nProduce a concise diff summary of what changed between:\n\n- 3.4.3 → 3.5.8\n- 3.5.8 → 4.0.1\n\nPay attention to:\n\n- New input types added or removed\n- Behavior changes for edge cases (null, overflow, empty, boundary)\n- New ANSI mode branches\n- New parameters or configuration\n- Breaking API changes that Comet must shim\n\n---\n\n## Step 2: Locate the Spark Tests\n\n```bash\nfor tag in v3.4.3 v3.5.8 v4.0.1; do\n  dir=\"/tmp/spark-${tag}\"\n  echo \"=== $tag ===\"\n  find \"$dir/sql\" -name \"*.scala\" -path \"*/test/*\" | \\\n    xargs grep -l \"$ARGUMENTS\" 2>/dev/null\ndone\n```\n\nRead the relevant Spark test files and produce a list of:\n\n- Input types covered\n- Edge cases exercised (null, empty, overflow, negative, boundary values, special characters, etc.)\n- ANSI mode tests\n- Error cases\n\nThis list will be the reference for the coverage gap analysis in Step 5.\n\n---\n\n## Step 3: Locate the Comet Implementation\n\n### Scala serde\n\n```bash\n# Find the serde object\ngrep -r \"$ARGUMENTS\" spark/src/main/scala/org/apache/comet/serde/ --include=\"*.scala\" -l\ngrep -r \"$ARGUMENTS\" spark/src/main/scala/org/apache/comet/ 
--include=\"*.scala\" -l\n```\n\nRead the serde implementation and check:\n\n- Which Spark versions the serde handles\n- Whether `getSupportLevel` is implemented and accurate\n- Whether all input types are handled\n- Whether any types are explicitly marked `Unsupported`\n\n### Shims\n\n```bash\nfind spark/src/main -name \"CometExprShim.scala\" | xargs grep -l \"$ARGUMENTS\" 2>/dev/null\n```\n\nIf shims exist, read them and note any version-specific handling.\n\n### Rust / DataFusion implementation\n\n```bash\n# Search for the function in native/spark-expr\ngrep -r \"$ARGUMENTS\" native/spark-expr/src/ --include=\"*.rs\" -l\ngrep -r \"$ARGUMENTS\" native/core/src/ --include=\"*.rs\" -l\n```\n\nIf the expression delegates to DataFusion, find it there too. Set `$DATAFUSION_SRC` to a local DataFusion checkout, or fall back to searching the cargo registry:\n\n```bash\nif [ -n \"${DATAFUSION_SRC:-}\" ]; then\n  grep -r \"$ARGUMENTS\" \"$DATAFUSION_SRC\" --include=\"*.rs\" -l 2>/dev/null | head -10\nelse\n  # Fall back to cargo registry (may include unrelated crates)\n  grep -r \"$ARGUMENTS\" ~/.cargo/registry/src/*/datafusion* --include=\"*.rs\" -l 2>/dev/null | head -10\nfi\n```\n\nRead the Rust implementation and check:\n\n- Null handling (does it propagate nulls correctly?)\n- Overflow and underflow handling (returns `Err` vs panics)\n- Type dispatch (does it handle all types that Spark supports?)\n- ANSI / fail-on-error mode\n\n---\n\n## Step 4: Locate Existing Comet Tests\n\n### SQL file tests\n\n```bash\n# Find SQL test files for this expression\nfind spark/src/test/resources/sql-tests/expressions/ -name \"*.sql\" | \\\n  xargs grep -l \"$ARGUMENTS\" 2>/dev/null\n\n# Also check if there's a dedicated file\nfind spark/src/test/resources/sql-tests/expressions/ -name \"*$(echo $ARGUMENTS | tr '[:upper:]' '[:lower:]')*\"\n```\n\nRead every SQL test file found and list:\n\n- Table schemas and data values used\n- Queries exercised\n- Query modes used (`query`, `spark_answer_only`, `tolerance`, `ignore`, `expect_error`)\n- Any ConfigMatrix directives\n\n### Scala tests\n\n```bash\ngrep -r \"$ARGUMENTS\" spark/src/test/scala/ --include=\"*.scala\" -l\n```\n\nRead the relevant Scala test files and list:\n\n- Input types covered\n- Edge cases exercised\n- Whether constant folding is disabled for literal tests\n\n---\n\n## Step 5: Gap Analysis\n\nCompare the Spark test coverage (Step 2) against the Comet test coverage (Step 4). Produce a structured gap report:\n\n### Coverage matrix\n\nFor each of the following dimensions, note whether it is covered in Comet tests or missing:\n\n| Dimension                                                                                              | Spark tests it | Comet SQL test | Comet Scala test | Gap? 
|\n| ------------------------------------------------------------------------------------------------------ | -------------- | -------------- | ---------------- | ---- |\n| Column reference argument(s)                                                                           |                |                |                  |      |\n| Literal argument(s)                                                                                    |                |                |                  |      |\n| NULL input                                                                                             |                |                |                  |      |\n| Empty string / empty array / empty map                                                                 |                |                |                  |      |\n| Array/map with NULL elements                                                                           |                |                |                  |      |\n| Zero, negative zero, negative values (numeric)                                                         |                |                |                  |      |\n| Underflow, overflow                                                                                    |                |                |                  |      |\n| Boundary values (INT_MIN, INT_MAX, Long.MinValue, minimum positive, etc.)                              |                |                |                  |      |\n| NaN, Infinity, -Infinity, subnormal (float/double)                                                     |                |                |                  |      |\n| Multibyte / special UTF-8 (composed vs decomposed, e.g. `é` U+00E9 vs `e` + U+0301, non-Latin scripts) |                |                |                  |      |\n| ANSI mode (failOnError=true)                                                                           |                |                |                  |      |\n| Non-ANSI mode (failOnError=false)                                                                      |                |                |                  |      |\n| All supported input types                                                                              |                |                |                  |      |\n| Parquet dictionary encoding (ConfigMatrix)                                                             |                |                |                  |      |\n| Cross-version behavior differences                                                                     |                |                |                  |      |\n\n### Implementation gaps\n\nAlso review the Comet implementation (Step 3) against the Spark behavior (Step 1):\n\n- Are there input types that Spark supports but `getSupportLevel` returns `Unsupported` without comment?\n- Are there behavioral differences that are NOT marked `Incompatible` but should be?\n- Are there behavioral differences between Spark versions that the Comet implementation does not account for (missing shim)?\n- Does the Rust implementation match the Spark behavior for all edge cases?\n\n---\n\n## Step 6: Recommendations\n\nSummarize findings as a prioritized list.\n\n### High priority\n\nIssues where Comet may silently produce wrong results compared to Spark.\n\n### Medium priority\n\nMissing test coverage for edge cases that could expose bugs.\n\n### Low priority\n\nMinor gaps, cosmetic improvements, or nice-to-have 
tests.\n\n---\n\n## Step 7: Offer to Implement Missing Tests\n\nAfter presenting the gap analysis, ask the user:\n\n> I found the following missing test cases. Would you like me to implement them?\n>\n> - [list each missing test case]\n>\n> I can add them as SQL file tests in `spark/src/test/resources/sql-tests/expressions/<category>/$ARGUMENTS.sql`\n> (or as Scala tests in `CometExpressionSuite` for cases that require programmatic setup).\n\nIf the user says yes, implement the missing tests following the SQL file test format described in\n`docs/source/contributor-guide/sql-file-tests.md`. Prefer SQL file tests over Scala tests.\n\n### SQL file test template\n\n```sql\n-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ConfigMatrix: parquet.enable.dictionary=false,true\n\nstatement\nCREATE TABLE test_$ARGUMENTS(...) USING parquet\n\nstatement\nINSERT INTO test_$ARGUMENTS VALUES\n  (...),\n  (NULL)\n\n-- column argument\nquery\nSELECT $ARGUMENTS(col) FROM test_$ARGUMENTS\n\n-- literal arguments\nquery\nSELECT $ARGUMENTS('value'), $ARGUMENTS(''), $ARGUMENTS(NULL)\n```\n\n### Verify the tests pass\n\nAfter implementing tests, tell the user how to run them:\n\n```bash\n./mvnw test -Dsuites=\"org.apache.comet.CometSqlFileTestSuite $ARGUMENTS\" -Dtest=none\n```\n\n---\n\n## Step 8: Update the Expression Audit Log\n\nAfter completing the audit (whether or not tests were added), append a row to the audit log at\n`docs/source/contributor-guide/expression-audit-log.md`.\n\nThe row should include:\n\n- Expression name\n- Spark versions checked (e.g. 3.4.3, 3.5.8, 4.0.1)\n- Today's date\n- A brief summary of findings (behavioral differences, bugs found/fixed, tests added, known incompatibilities)\n
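\nFor example, assuming the log is a markdown table with one column for each item above, a new row might look like this (the values are illustrative):\n\n| Expression | Spark versions | Date | Findings |\n| ---------- | -------------- | ---- | -------- |\n| Hex | 3.4.3, 3.5.8, 4.0.1 | 2025-06-01 | No behavioral differences found. Added SQL file tests for NULL and negative inputs. |\n\n---\n\n## Output Format\n\nPresent the audit as:\n\n1. **Expression Summary** - Brief description of what `$ARGUMENTS` does, its input/output types, and null behavior\n2. **Spark Version Differences** - Summary of any behavioral or API differences across Spark 3.4.3, 3.5.8, and 4.0.1\n3. **Comet Implementation Notes** - Summary of how Comet implements this expression and any concerns\n4. **Coverage Gap Analysis** - The gap table from Step 5, plus implementation gaps\n5. **Recommendations** - Prioritized list from Step 6\n6. **Offer to add tests** - The prompt from Step 7\n\n## Tone and Style\n\n- Write in clear, concise prose\n- Use backticks around code references (function names, file paths, class names, types, config keys)\n- Avoid robotic or formulaic language\n- Be constructive and acknowledge what is already well-covered before raising gaps\n- Avoid em dashes and semicolons. Use separate sentences instead.\n"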
  },
  {
    "path": ".claude/skills/review-comet-pr/SKILL.md",
    "content": "---\nname: review-comet-pr\ndescription: Review a DataFusion Comet pull request for Spark compatibility and implementation correctness. Provides guidance to a reviewer rather than posting comments directly.\nargument-hint: <pr-number>\n---\n\nReview Comet PR #$ARGUMENTS\n\n## Before You Start\n\n### Gather PR Metadata\n\nFetch the PR details to understand the scope:\n\n```bash\ngh pr view $ARGUMENTS --repo apache/datafusion-comet --json title,body,author,isDraft,state,files\n```\n\n### Review Existing Comments First\n\nBefore forming your review:\n\n1. **Read all existing review comments** on the PR\n2. **Check the conversation tab** for any discussion\n3. **Avoid duplicating feedback** that others have already provided\n4. **Build on existing discussions** rather than starting new threads on the same topic\n5. **If you have no additional concerns beyond what's already discussed, say so**\n6. **Ignore Copilot reviews** - do not reference or build upon comments from GitHub Copilot\n\n```bash\n# View existing comments on a PR\ngh pr view $ARGUMENTS --repo apache/datafusion-comet --comments\n```\n\n---\n\n## Review Workflow\n\n### 1. Gather Context\n\nRead the changed files and understand the area of the codebase being modified:\n\n```bash\n# View the diff\ngh pr diff $ARGUMENTS --repo apache/datafusion-comet\n```\n\nFor expression PRs, check how similar expressions are implemented in the codebase. Look at the serde files in `spark/src/main/scala/org/apache/comet/serde/` and Rust implementations in `native/spark-expr/src/`.\n\n### 2. Read Spark Source (Expression PRs)\n\n**For any PR that adds or modifies an expression, you must read the Spark source code to understand the canonical behavior.** This is the authoritative reference for what Comet must match.\n\n1. **Clone or update the Spark repo:**\n\n   ```bash\n   # Clone if not already present (use /tmp to avoid polluting the workspace)\n   if [ ! -d /tmp/spark ]; then\n     git clone --depth 1 https://github.com/apache/spark.git /tmp/spark\n   fi\n   ```\n\n2. **Find the expression implementation in Spark:**\n\n   ```bash\n   # Search for the expression class (e.g., for \"Conv\", \"Hex\", \"Substring\")\n   find /tmp/spark/sql/catalyst/src/main/scala -name \"*.scala\" | xargs grep -l \"case class <ExpressionName>\"\n   ```\n\n3. **Read the Spark implementation carefully.** Pay attention to:\n   - The `eval` and `doGenEval`/`nullSafeEval` methods. These define the exact behavior.\n   - The `inputTypes` and `dataType` fields. These define which types Spark accepts and what it returns.\n   - Null handling. Does it use `nullable = true`? Does `nullSafeEval` handle nulls implicitly?\n   - Special cases, guards, and `require` assertions.\n   - ANSI mode branches (look for `SQLConf.get.ansiEnabled` or `failOnError`).\n\n4. **Read the Spark tests for the expression:**\n\n   ```bash\n   # Find test files\n   find /tmp/spark/sql -name \"*.scala\" -path \"*/test/*\" | xargs grep -l \"<ExpressionName>\"\n   ```\n\n5. **Compare the Spark behavior against the Comet implementation in the PR.** Identify:\n   - Edge cases tested in Spark but not in the PR\n   - Data types supported in Spark but not handled in the PR\n   - Behavioral differences that should be marked `Incompatible`\n\n6. **Suggest additional tests** for any edge cases or type combinations covered in Spark's tests that are missing from the PR's tests.\n\n### 3. 
Spark Compatibility Check\n\n**This is the most critical aspect of Comet reviews.** Comet must produce identical results to Spark.\n\nFor expression PRs, verify against the Spark source you read in step 2:\n\n1. **Check edge cases**\n   - Null handling\n   - Overflow behavior\n   - Empty input behavior\n   - Type-specific behavior\n\n2. **Verify all data types are handled**\n   - Does Spark support this type? (Check `inputTypes` in Spark source)\n   - Does the PR handle all Spark-supported types?\n\n3. **Check for ANSI mode differences**\n   - Spark behavior may differ between legacy and ANSI modes\n   - PR should handle both or mark as `Incompatible`\n\n### 4. Check Against Implementation Guidelines\n\n**Always verify PRs follow the implementation guidelines.**\n\n#### Scala Serde (`spark/src/main/scala/org/apache/comet/serde/`)\n\n- [ ] Expression class correctly identified\n- [ ] All child expressions converted via `exprToProtoInternal`\n- [ ] Return type correctly serialized\n- [ ] `getSupportLevel` reflects true compatibility (see the sketch after this list):\n  - `Compatible()` - matches Spark exactly\n  - `Incompatible(Some(\"reason\"))` - differs in documented ways\n  - `Unsupported(Some(\"reason\"))` - cannot be implemented\n- [ ] Serde in appropriate file (`datetime.scala`, `strings.scala`, `arithmetic.scala`, etc.)\n
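\nA rough sketch of the shape this usually takes is below. It is illustrative only. The trait name, type parameter, and pattern cases are assumptions, so check the existing serde files for the real signatures:\n\n```scala\n// Illustrative sketch, not the actual Comet API. Assumes a serde trait and\n// SupportLevel helpers shaped like those described in the checklist above.\nobject CometMyExpr extends CometExpressionSerde[MyExpr] {\n  override def getSupportLevel(expr: MyExpr): SupportLevel = expr.child.dataType match {\n    // Matches Spark exactly for these input types\n    case IntegerType | LongType => Compatible()\n    // Differs from Spark in a documented way\n    case _: DecimalType => Incompatible(Some(\"rounding differs from Spark for decimals\"))\n    // Cannot be implemented natively\n    case _ => Unsupported(Some(\"unsupported input type\"))\n  }\n}\n```\n\n#### Registration (`QueryPlanSerde.scala`)\n\n- [ ] Added to correct map (temporal, string, arithmetic, etc.)\n- [ ] No duplicate registrations\n- [ ] Import statement added\n\n#### Rust Implementation (if applicable)\n\nLocation: `native/spark-expr/src/`\n\n- [ ] Matches DataFusion and Arrow conventions\n- [ ] Null handling is correct\n- [ ] No panics. Use `Result` types.\n- [ ] Efficient array operations (avoid row-by-row)\n\n#### Tests - Prefer SQL File-Based Framework\n\n**Expression tests should use the SQL file-based framework (`CometSqlFileTestSuite`) where possible.** This framework automatically runs each query through both Spark and Comet and compares results. No Scala code is needed. Only fall back to Scala tests in `CometExpressionSuite` when the SQL framework cannot express the test. 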
Examples include complex `DataFrame` setup, programmatic data generation, or non-expression tests.\n\n**SQL file test location:** `spark/src/test/resources/sql-tests/expressions/<category>/`\n\nCategories include: `aggregate/`, `array/`, `string/`, `math/`, `struct/`, `map/`, `datetime/`, `hash/`, etc.\n\n**SQL file structure:**\n\n```sql\n-- Create test data\nstatement\nCREATE TABLE test_crc32(col string, a int, b float) USING parquet\n\nstatement\nINSERT INTO test_crc32 VALUES ('Spark', 10, 1.5), (NULL, NULL, NULL), ('', 0, 0.0)\n\n-- Default mode: verifies native Comet execution + result matches Spark\nquery\nSELECT crc32(col) FROM test_crc32\n\n-- spark_answer_only: compares results without requiring native execution\nquery spark_answer_only\nSELECT crc32(cast(a as string)) FROM test_crc32\n\n-- tolerance: allows numeric variance for floating-point results\nquery tolerance=0.0001\nSELECT cos(v) FROM test_trig\n\n-- expect_fallback: asserts fallback to Spark occurs\nquery expect_fallback(unsupported expression)\nSELECT unsupported_func(v) FROM test_table\n\n-- expect_error: verifies both engines throw matching exceptions\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT 2147483647 + 1\n\n-- ignore: skip queries with known bugs (include GitHub issue link)\nquery ignore(https://github.com/apache/datafusion-comet/issues/NNNN)\nSELECT known_buggy_expr(v) FROM test_table\n```\n\n**Running SQL file tests:**\n\n```bash\n# All SQL file tests\n./mvnw test -Dsuites=\"org.apache.comet.CometSqlFileTestSuite\" -Dtest=none\n\n# Specific test file (substring match)\n./mvnw test -Dsuites=\"org.apache.comet.CometSqlFileTestSuite crc32\" -Dtest=none\n```\n\n**CRITICAL: Verify all test requirements (regardless of framework):**\n\n- [ ] Basic functionality tested (column data, not just literals)\n- [ ] Null handling tested (`SELECT expression(NULL)`)\n- [ ] Edge cases tested (empty input, overflow, boundary values)\n- [ ] Both literal values and column references tested (they use different code paths)\n- [ ] For timestamp/datetime expressions, timezone handling is tested (e.g., UTC, non-UTC session timezone, timestamps with and without timezone)\n- [ ] One expression per SQL file for easier debugging\n- [ ] If using Scala tests instead, literal tests MUST disable constant folding:\n  ```scala\n  withSQLConf(SQLConf.OPTIMIZER_EXCLUDED_RULES.key ->\n      \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n    checkSparkAnswerAndOperator(\"SELECT func(literal)\")\n  }\n  ```\n\n### 5. Performance Review (Expression PRs)\n\n**For PRs that add new expressions, performance is not optional.** The whole point of Comet is to be faster than Spark. If a new expression is not faster, it may not be worth adding.\n\n1. **Check that the PR includes microbenchmark results.** The PR description should contain benchmark numbers comparing Comet vs Spark for the new expression. If benchmark results are missing, flag this as a required addition.\n\n2. **Look for a microbenchmark implementation.** Expression benchmarks live in `spark/src/test/scala/org/apache/spark/sql/benchmark/`. Check whether the PR adds a benchmark for the new expression.\n\n3. **Review the benchmark results if provided:**\n   - Is Comet actually faster than Spark for this expression?\n   - Are the benchmarks representative? They should test with realistic data sizes, not just trivial inputs.\n   - Are different data types benchmarked if the expression supports multiple types?\n\n4. 
**Review the Rust implementation for performance concerns:**\n   - Unnecessary allocations or copies\n   - Row-by-row processing where batch/array operations are possible\n   - Redundant type conversions\n   - Inefficient string handling (e.g., repeated UTF-8 validation)\n   - Missing use of Arrow compute kernels where they exist\n\n5. **If benchmark results show Comet is slower than Spark**, flag this clearly. The PR should explain why the regression is acceptable or include a plan to optimize.\n\n### 6. Check CI Test Failures\n\n**Always check the CI status and summarize any test failures in your review.**\n\n```bash\n# View CI check status\ngh pr checks $ARGUMENTS --repo apache/datafusion-comet\n\n# View failed check details\ngh pr checks $ARGUMENTS --repo apache/datafusion-comet --failed\n```\n\n### 7. Documentation Check\n\nCheck whether the PR requires updates to user-facing documentation in `docs/`:\n\n- **Compatibility guide** (`docs/source/user-guide/compatibility.md`): New expressions or operators should be listed. Incompatible behaviors should be documented.\n- **Configuration guide** (`docs/source/user-guide/configs.md`): New config options should be documented.\n- **Expressions list** (`docs/source/user-guide/expressions.md`): New expressions should be added.\n\nIf the PR adds a new expression or operator but does not update the relevant docs, flag this as something that needs to be addressed.\n\n### 8. Common Comet Review Issues\n\n1. **Incomplete type support**: Spark expression supports types not handled in PR\n2. **Missing edge cases**: Null, overflow, empty string, negative values\n3. **Wrong return type**: Return type must match Spark exactly\n4. **Tests in wrong framework**: Expression tests should use the SQL file-based framework (`CometSqlFileTestSuite`) rather than adding to Scala test suites like `CometExpressionSuite`. Suggest migration if the PR adds Scala tests for expressions that could use SQL files instead.\n5. **Stale native code**: PR might need `./mvnw install -pl common -DskipTests`\n6. **Missing `getSupportLevel`**: Edge cases should be marked as `Incompatible`\n\n---\n\n## Output Format\n\nPresent your review as guidance for the reviewer. Structure your output as:\n\n1. **PR Summary** - Brief description of what the PR does\n2. **CI Status** - Summary of CI check results\n3. **Findings** - Your analysis organized by area (Spark compatibility, implementation, tests, etc.)\n4. **Suggested Review Comments** - Specific comments the reviewer could leave on the PR, with file and line references where applicable\n\n## Review Tone and Style\n\nWrite reviews that sound human and conversational. Avoid:\n\n- Robotic or formulaic language\n- Em dashes. Use separate sentences instead.\n- Semicolons. 
Use separate sentences instead.\n\nInstead:\n\n- Write in flowing paragraphs using simple grammar\n- Keep sentences short and separate rather than joining them with punctuation\n- Be kind and constructive, even when raising concerns\n- Use backticks around any code references (function names, file paths, class names, types, config keys, etc.)\n- **Suggest** adding tests rather than stating tests are missing (e.g., \"It might be worth adding a test for X\" not \"Tests are missing for X\")\n- **Ask questions** about edge cases rather than asserting they aren't handled (e.g., \"Does this handle the case where X is null?\" not \"This doesn't handle null\")\n- Frame concerns as questions or suggestions when possible\n- Acknowledge what the PR does well before raising concerns\n\n## Do Not Post Comments\n\n**IMPORTANT: Never post comments or reviews on the PR directly.** This skill is for providing guidance to a human reviewer. Present all findings and suggested comments to the user. The user will decide what to post.\n"
  },
  {
    "path": ".dockerignore",
    "content": ".git\n.github\n.idea\nbin\nconf\ndocs/build\ndocs/temp\ndocs/venv\nmetastore_db\ntarget\ncommon/target\nspark-integration/target\nfuzz-testing/target\nspark/target\nnative/target\ncore/target\nspark-warehouse\nvenv\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/bug_report.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Bug report\ndescription: Create a bug report\nlabels: bug\nbody:\n  - type: textarea\n    attributes:\n      label: Describe the bug\n      description: Describe the bug.\n      placeholder: >\n        A clear and concise description of what the bug is.\n    validations:\n      required: true\n  - type: textarea\n    attributes:\n      label: Steps to reproduce\n      placeholder: >\n        Describe steps to reproduce the bug:\n  - type: textarea\n    attributes:\n      label: Expected behavior\n      placeholder: >\n        A clear and concise description of what you expected to happen.\n  - type: textarea\n    attributes:\n      label: Additional context\n      placeholder: >\n        Add any other context about the problem here.\n"
  },
  {
    "path": ".github/ISSUE_TEMPLATE/feature_request.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Feature request\ndescription: Suggest an idea for this project\nlabels: enhancement\nbody:\n  - type: textarea\n    attributes:\n      label: What is the problem the feature request solves?\n      description: Please describe how feature request improves the Comet.\n      placeholder: >\n        A clear and concise description of what the improvement is. Ex. I'm always frustrated when [...]\n        (This section helps Comet developers understand the context and *why* for this feature, in addition to the *what*)\n  - type: textarea\n    attributes:\n      label: Describe the potential solution\n      placeholder: >\n        A clear and concise description of what you want to happen.\n  - type: textarea\n    attributes:\n      label: Additional context\n      placeholder: >\n        Add any other context or screenshots about the feature request here.\n"
  },
  {
    "path": ".github/actions/java-test/action.yaml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: \"Java Test\"\ndescription: \"Run Java tests\"\ninputs:\n  artifact_name:\n    description: \"Unique name for uploaded artifacts for this run\"\n    required: true\n  suites:\n    description: 'Which Scalatest test suites to run'\n    required: false\n    default: ''\n  maven_opts:\n    description: 'Maven options passed to the mvn command'\n    required: false\n    default: ''\n  scan_impl:\n    description: 'The default Parquet scan implementation'\n    required: false\n    default: 'auto'\n  upload-test-reports:\n    description: 'Whether to upload test results including coverage to GitHub'\n    required: false\n    default: 'false'\n  skip-native-build:\n    description: 'Skip native build (when using pre-built artifact)'\n    required: false\n    default: 'false'\n\nruns:\n  using: \"composite\"\n  steps:\n    - name: Run Cargo release build\n      if: ${{ inputs.skip-native-build != 'true' }}\n      shell: bash\n      # it is important that we run the Scala tests against a release build rather than a debug build\n      # to make sure that no tests are relying on overflow checks that are present only in debug builds\n      run: |\n        cd native\n        cargo build --release\n\n    - name: Cache Maven dependencies\n      # TODO: remove next line after working again\n      #  temporarily work around https://github.com/actions/runner-images/issues/13341\n      #  by disabling caching for macOS\n      if: ${{ runner.os != 'macOS' }}\n      uses: actions/cache@v5\n      with:\n        path: |\n          ~/.m2/repository\n          /root/.m2/repository\n        key: ${{ runner.os }}-java-maven-${{ hashFiles('**/pom.xml') }}\n        restore-keys: |\n          ${{ runner.os }}-java-maven-\n\n    - name: Run all tests\n      shell: bash\n      if: ${{ inputs.suites == '' }}\n      env:\n        COMET_PARQUET_SCAN_IMPL: ${{ inputs.scan_impl }}\n        SPARK_LOCAL_HOSTNAME: \"localhost\"\n        SPARK_LOCAL_IP: \"127.0.0.1\"\n      run: |\n        MAVEN_OPTS=\"-Xmx4G -Xms2G -XX:+UnlockDiagnosticVMOptions -XX:+ShowMessageBoxOnError -XX:+HeapDumpOnOutOfMemoryError -XX:ErrorFile=./hs_err_pid%p.log\" SPARK_HOME=`pwd` ./mvnw -B -Prelease install ${{ inputs.maven_opts }}\n    - name: Run specified tests\n      shell: bash\n      if: ${{ inputs.suites != '' }}\n      env:\n        COMET_PARQUET_SCAN_IMPL: ${{ inputs.scan_impl }}\n        SPARK_LOCAL_HOSTNAME: \"localhost\"\n        SPARK_LOCAL_IP: \"127.0.0.1\"\n      run: |\n        MAVEN_SUITES=\"$(echo \"${{ inputs.suites }}\" | paste -sd, -)\"\n        echo \"Running with MAVEN_SUITES=$MAVEN_SUITES\"\n        MAVEN_OPTS=\"-Xmx4G -Xms2G -DwildcardSuites=$MAVEN_SUITES -XX:+UnlockDiagnosticVMOptions -XX:+ShowMessageBoxOnError 
-XX:+HeapDumpOnOutOfMemoryError -XX:ErrorFile=./hs_err_pid%p.log\" SPARK_HOME=`pwd` ./mvnw -B -Prelease install ${{ inputs.maven_opts }}\n    - name: Upload crash logs\n      if: failure()\n      uses: actions/upload-artifact@v6\n      with:\n        name: crash-logs-${{ inputs.artifact_name }}\n        path: \"**/hs_err_pid*.log\"\n    - name: Debug listing\n      if: failure()\n      shell: bash\n      run: |  \n        echo \"CWD: $(pwd)\"\n        ls -lah .\n        ls -lah target\n        find . -name 'unit-tests.log'\n    - name: Upload unit-tests.log\n      if: failure()\n      uses: actions/upload-artifact@v6\n      with:\n        name: unit-tests-${{ inputs.artifact_name }}\n        path: \"**/target/unit-tests.log\"\n    - name: Upload test results\n      if: ${{ inputs.upload-test-reports == 'true' }}\n      uses: actions/upload-artifact@v6\n      with:\n         name: java-test-reports-${{ inputs.artifact_name }}\n         path: \"**/target/surefire-reports/*.txt\"\n         retention-days: 7 # 1 week for test reports\n         overwrite: true\n"
  },
  {
    "path": ".github/actions/rust-test/action.yaml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: \"Rust Test\"\ndescription: \"Run Rust tests\"\n\nruns:\n  using: \"composite\"\n  steps:\n    # Note: cargo fmt check is now handled by the lint job that gates this workflow\n\n    - name: Check Cargo clippy\n      shell: bash\n      run: |\n        cd native\n        cargo clippy --color=never --all-targets --workspace -- -D warnings\n\n    - name: Check compilation\n      shell: bash\n      run: |\n        cd native\n        cargo check --benches\n\n    - name: Check unused dependencies\n      shell: bash\n      run: |\n        cd native\n        cargo install cargo-machete --version 0.7.0 && cargo machete\n\n    - name: Cache Maven dependencies\n      uses: actions/cache@v4\n      with:\n        path: |\n          ~/.m2/repository\n          /root/.m2/repository\n        key: ${{ runner.os }}-rust-maven-${{ hashFiles('**/pom.xml') }}\n        restore-keys: |\n          ${{ runner.os }}-rust-maven-\n\n    - name: Build common module (pre-requisite for Rust tests)\n      shell: bash\n      run: |\n        cd common\n        ../mvnw -B clean compile -DskipTests\n\n    - name: Install nextest\n      shell: bash\n      run: |\n        cargo install cargo-nextest --locked\n\n    - name: Run Cargo test\n      shell: bash\n      run: |\n        cd native\n        # Set LD_LIBRARY_PATH to include JVM library path for tests that use JNI\n        export LD_LIBRARY_PATH=${JAVA_HOME}/lib/server:${LD_LIBRARY_PATH}\n        RUST_BACKTRACE=1 cargo nextest run\n\n"
  },
  {
    "path": ".github/actions/setup-builder/action.yaml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Prepare Builder\ndescription: 'Prepare Build Environment'\ninputs:\n  rust-version:\n    description: 'version of rust to install (e.g. nightly)'\n    required: true\n    default: 'stable'\n  jdk-version:\n    description: 'jdk version to install (e.g., 17)'\n    required: true\n    default: '17'\nruns:\n  using: \"composite\"\n  steps:\n    - name: Install Build Dependencies\n      shell: bash\n      run: |\n        apt-get update\n        apt-get install -y protobuf-compiler\n        apt-get install -y clang\n\n    - name: Install JDK ${{inputs.jdk-version}}\n      uses: actions/setup-java@v4\n      with:\n        # distribution is chosen to be zulu as it still offers JDK 8 with Silicon support, which\n        # is not available in the adopt distribution\n        distribution: 'zulu'\n        java-version: ${{inputs.jdk-version}}\n\n    - name: Set JAVA_HOME\n      shell: bash\n      run: echo \"JAVA_HOME=$(echo ${JAVA_HOME})\" >> $GITHUB_ENV\n\n    - name: Setup Rust toolchain\n      shell: bash\n      # rustfmt is needed for the substrait build script\n      run: |\n        echo \"Installing ${{inputs.rust-version}}\"\n        rustup toolchain install ${{inputs.rust-version}}\n        rustup default ${{inputs.rust-version}}\n        rustup component add rustfmt clippy\n"
  },
  {
    "path": ".github/actions/setup-iceberg-builder/action.yaml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Setup Iceberg Builder\ndescription: 'Setup Apache Iceberg to run Spark SQL tests'\ninputs:\n  iceberg-version:\n    description: 'The Apache Iceberg version (e.g., 1.8.1) to build'\n    required: true\nruns:\n  using: \"composite\"\n  steps:\n    - name: Clone Iceberg repo\n      uses: actions/checkout@v6\n      with:\n        repository: apache/iceberg\n        path: apache-iceberg\n        ref: apache-iceberg-${{inputs.iceberg-version}}\n        fetch-depth: 1\n\n    - name: Setup Iceberg for Comet\n      shell: bash\n      run: |\n        cd apache-iceberg\n        git apply ../dev/diffs/iceberg/${{inputs.iceberg-version}}.diff\n"
  },
  {
    "path": ".github/actions/setup-macos-builder/action.yaml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Prepare Builder for MacOS\ndescription: 'Prepare Build Environment'\ninputs:\n  rust-version:\n    description: 'version of rust to install (e.g. nightly)'\n    required: true\n    default: 'stable'\n  jdk-version:\n    description: 'jdk version to install (e.g., 17)'\n    required: true\n    default: '17'\n  jdk-architecture:\n    description: 'OS architecture for the JDK'\n    required: true\n    default: 'x64'\n  protoc-architecture:\n    description: 'OS architecture for protobuf compiler'\n    required: true\n    default: 'x86_64'\n\nruns:\n  using: \"composite\"\n  steps:\n    - name: Install Build Dependencies\n      shell: bash\n      run: |\n        # Install protobuf\n        mkdir -p $HOME/d/protoc\n        cd $HOME/d/protoc\n        export PROTO_ZIP=\"protoc-21.4-osx-${{inputs.protoc-architecture}}.zip\"\n        curl -LO https://github.com/protocolbuffers/protobuf/releases/download/v21.4/$PROTO_ZIP\n        unzip $PROTO_ZIP\n        echo \"$HOME/d/protoc/bin\" >> $GITHUB_PATH\n        export PATH=$PATH:$HOME/d/protoc/bin\n        # install openssl and setup DYLD_LIBRARY_PATH\n        brew install openssl\n        OPENSSL_LIB_PATH=`brew --prefix openssl`/lib\n        echo \"openssl lib path is: ${OPENSSL_LIB_PATH}\"\n        echo \"DYLD_LIBRARY_PATH=$OPENSSL_LIB_PATH:$DYLD_LIBRARY_PATH\" >> $GITHUB_ENV\n        # output the current status of SIP for later debugging\n        csrutil status || true\n\n    - name: Install JDK ${{inputs.jdk-version}}\n      uses: actions/setup-java@v4\n      with:\n        # distribution is chosen to be zulu as it still offers JDK 8 with Silicon support, which\n        # is not available in the adopt distribution\n        distribution: 'zulu'\n        java-version: ${{inputs.jdk-version}}\n        architecture: ${{inputs.jdk-architecture}}\n\n    - name: Set JAVA_HOME\n      shell: bash\n      run: echo \"JAVA_HOME=$(echo ${JAVA_HOME})\" >> $GITHUB_ENV\n\n    - name: Setup Rust toolchain\n      shell: bash\n      # rustfmt is needed for the substrait build script\n      run: |\n        echo \"Installing ${{inputs.rust-version}}\"\n        rustup toolchain install ${{inputs.rust-version}}\n        rustup default ${{inputs.rust-version}}\n        rustup component add rustfmt clippy\n"
  },
  {
    "path": ".github/actions/setup-spark-builder/action.yaml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Setup Spark Builder\ndescription: 'Setup Apache Spark to run SQL tests'\ninputs:\n  spark-short-version:\n    description: 'The Apache Spark short version (e.g., 3.5) to build'\n    required: true\n  spark-version:\n    description: 'The Apache Spark version (e.g., 3.5.8) to build'\n    required: true\n  skip-native-build:\n    description: 'Skip native build (when using pre-built artifact)'\n    required: false\n    default: 'false'\nruns:\n  using: \"composite\"\n  steps:\n    - name: Clone Spark repo\n      uses: actions/checkout@v6\n      with:\n        repository: apache/spark\n        path: apache-spark\n        ref: v${{inputs.spark-version}}\n        fetch-depth: 1\n\n    - name: Setup Spark for Comet\n      shell: bash\n      run: |\n        cd apache-spark\n        git apply ../dev/diffs/${{inputs.spark-version}}.diff\n\n    - name: Cache Maven dependencies\n      uses: actions/cache@v4\n      with:\n        path: |\n          ~/.m2/repository\n          /root/.m2/repository\n        key: ${{ runner.os }}-spark-sql-${{ hashFiles('spark/**/pom.xml', 'common/**/pom.xml') }}\n        restore-keys: |\n          ${{ runner.os }}-spark-sql-\n\n    - name: Build Comet (with native)\n      if: ${{ inputs.skip-native-build != 'true' }}\n      shell: bash\n      run: |\n        PROFILES=\"-Pspark-${{inputs.spark-short-version}}\" make release\n\n    - name: Build Comet (Maven only, skip native)\n      if: ${{ inputs.skip-native-build == 'true' }}\n      shell: bash\n      run: |\n        # Native library should already be in native/target/release/\n        ./mvnw install -Prelease -DskipTests -Pspark-${{inputs.spark-short-version}}\n"
  },
  {
    "path": ".github/dependabot.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nversion: 2\nupdates:\n  - package-ecosystem: cargo\n    directory: \"/native\"\n    schedule:\n      interval: weekly\n    target-branch: main\n    labels: [dependencies]\n    ignore:\n      # major version bumps of datafusion*, arrow*, parquet are handled manually\n      - dependency-name: \"arrow*\"\n        update-types: [\"version-update:semver-major\"]\n      - dependency-name: \"parquet\"\n        update-types: [\"version-update:semver-major\"]\n      - dependency-name: \"datafusion*\"\n        update-types: [\"version-update:semver-major\"]\n    groups:\n      proto:\n        applies-to: version-updates\n        patterns:\n          - \"prost*\"\n          - \"pbjson*\"\n      # Catch-all: group only minor/patch into a single PR,\n      # excluding deps we want always separate (and excluding arrow/parquet which have their own group)\n      all-other-cargo-deps:\n        applies-to: version-updates\n        patterns:\n          - \"*\"\n        exclude-patterns:\n          - \"arrow*\"\n          - \"parquet\"\n          - \"object_store\"\n          - \"sqlparser\"\n          - \"prost*\"\n          - \"pbjson*\"\n        update-types:\n          - \"minor\"\n          - \"patch\"\n  - package-ecosystem: \"github-actions\"\n    directory: \"/\"\n    schedule:\n      interval: \"weekly\"\n    open-pull-requests-limit: 10\n    labels: [dependencies]\n"
  },
  {
    "path": ".github/pull_request_template.md",
    "content": "## Which issue does this PR close?\n\n<!--\nWe generally require a GitHub issue to be filed for all bug fixes and enhancements and this helps us generate change logs for our releases. You can link an issue to this PR using the GitHub syntax. For example `Closes #123` indicates that this PR will close issue #123.\n-->\n\nCloses #.\n\n## Rationale for this change\n\n<!--\n Why are you proposing this change? If this is already explained clearly in the issue then this section is not needed.\n Explaining clearly why changes are proposed helps reviewers understand your changes and offer better suggestions for fixes.\n-->\n\n## What changes are included in this PR?\n\n<!--\nThere is no need to duplicate the description in the issue here but it is sometimes worth providing a summary of the individual changes in this PR.\n-->\n\n## How are these changes tested?\n\n<!--\nWe typically require tests for all PRs in order to:\n1. Prevent the code from being accidentally broken by subsequent changes\n2. Serve as another way to document the expected behavior of the code\n\nIf tests are not included in your PR, please explain why (for example, are they covered by existing tests)?\n-->\n"
  },
  {
    "path": ".github/workflows/codeql.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\nname: \"CodeQL\"\n\non:\n  push:\n    branches: [ \"main\" ]\n  pull_request:\n    branches: [ \"main\" ]\n  schedule:\n    - cron: '16 4 * * 1'\n\npermissions:\n  contents: read\n\njobs:\n  analyze:\n    name: Analyze Actions\n    runs-on: ubuntu-latest\n    permissions:\n      contents: read\n      security-events: write\n      packages: read\n\n    steps:\n    - name: Checkout repository\n      uses: actions/checkout@de0fac2e4500dabe0009e67214ff5f5447ce83dd # v6\n      with:\n        persist-credentials: false\n\n    - name: Initialize CodeQL\n      uses: github/codeql-action/init@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4\n      with:\n        languages: actions\n\n    - name: Perform CodeQL Analysis\n      uses: github/codeql-action/analyze@95e58e9a2cdfd71adc6e0353d5c52f41a045d225 # v4\n      with:\n        category: \"/language:actions\"\n"
  },
  {
    "path": ".github/workflows/docker-publish.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Publish Docker images\n\nconcurrency:\n  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}\n  cancel-in-progress: true\n\non:\n  push:\n    tags:\n      - '*.*.*'\n      - '*.*.*-rc*'\n      - 'test-docker-publish-*'\n\njobs:\n  docker:\n    name: Docker\n    if: ${{ startsWith(github.repository, 'apache/') }}\n    runs-on: ubuntu-22.04\n    permissions:\n      contents: read\n      packages: write\n    steps:\n      - name: Remove unnecessary files\n        run: |\n          echo \"Disk space before cleanup:\"\n          df -h\n          docker system prune -af\n          sudo rm -rf /tmp/*\n          sudo rm -rf /opt/hostedtoolcache\n          sudo rm -rf \"$AGENT_TOOLSDIRECTORY\"\n          sudo apt-get clean\n          echo \"Disk space after cleanup:\"\n          df -h\n      - name: Set up Java\n        uses: actions/setup-java@v5\n        with:\n          java-version: '17'\n          distribution: 'temurin'\n          cache: 'maven'\n      - name: Extract Comet version\n        id: extract_version\n        run: |\n          # use the tag that triggered this workflow as the Comet version e.g. 0.4.0-rc1\n          echo \"COMET_VERSION=${GITHUB_REF##*/}\" >> $GITHUB_ENV\n      - name: Echo Comet version\n        run: echo \"The current Comet version is ${{ env.COMET_VERSION }}\"\n      - name: Set up Docker Buildx\n        uses: docker/setup-buildx-action@v4\n      - name: Login to GitHub Container Registry\n        uses: docker/login-action@v4\n        with:\n          registry: ghcr.io\n          username: ${{ github.actor }}\n          password: ${{ secrets.GITHUB_TOKEN }}\n      - name: Build and push\n        uses: docker/build-push-action@v7\n        with:\n          platforms: linux/amd64,linux/arm64\n          push: true\n          tags: ghcr.io/apache/datafusion-comet:spark-3.5-scala-2.12-${{ env.COMET_VERSION }}\n          file: kube/Dockerfile\n          no-cache: true\n"
  },
  {
    "path": ".github/workflows/docs.yaml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\non:\n  push:\n    branches:\n      - main\n    paths:\n      - .asf.yaml\n      - .github/workflows/docs.yaml\n      - docs/**\n\nname: Deploy DataFusion Comet site\n\njobs:\n  build-docs:\n    name: Build docs\n    if: ${{ startsWith(github.repository, 'apache/') }}\n    runs-on: ubuntu-latest\n    steps:\n      - name: Checkout docs sources\n        uses: actions/checkout@v6\n\n      - name: Checkout asf-site branch\n        uses: actions/checkout@v6\n        with:\n          ref: asf-site\n          path: asf-site\n\n      - name: Setup Python\n        uses: actions/setup-python@v6\n        with:\n          python-version: \"3.10\"\n\n      - name: Setup Java\n        uses: actions/setup-java@v5\n        with:\n          distribution: 'temurin'\n          java-version: '17'\n          cache: 'maven'\n\n      - name: Install dependencies\n        run: |\n          set -x\n          python3 -m venv venv\n          source venv/bin/activate\n          pip install -r docs/requirements.txt\n\n      - name: Build docs\n        run: |\n          set -x\n          source venv/bin/activate\n          cd docs\n          ./build.sh\n\n      - name: Copy & push the generated HTML\n        run: |\n          set -x\n          cd asf-site/\n          rsync \\\n            -a \\\n            --delete \\\n            --exclude '/.git/' \\\n            ../docs/build/html/ \\\n            ./\n          cp ../.asf.yaml .\n          touch .nojekyll\n          git status --porcelain\n          if [ \"$(git status --porcelain)\" != \"\" ]; then\n            git config user.name \"github-actions[bot]\"\n            git config user.email \"github-actions[bot]@users.noreply.github.com\"\n            git add --all\n            git commit -m 'Publish built docs triggered by ${{ github.sha }}'\n            git push || git push --force\n          fi"
  },
  {
    "path": ".github/workflows/iceberg_spark_test.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Iceberg Spark SQL Tests\n\nconcurrency:\n  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}\n  cancel-in-progress: true\n\non:\n  push:\n    branches:\n      - main\n    paths-ignore:\n      - \"benchmarks/**\"\n      - \"doc/**\"\n      - \"docs/**\"\n      - \"**.md\"\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  pull_request:\n    paths-ignore:\n      - \"benchmarks/**\"\n      - \"doc/**\"\n      - \"docs/**\"\n      - \"**.md\"\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  # manual trigger\n  # https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow\n  workflow_dispatch:\n\nenv:\n  RUST_VERSION: stable\n  RUST_BACKTRACE: 1\n\njobs:\n  # Build native library once and share with all test jobs\n  build-native:\n    name: Build Native Library\n    runs-on: ubuntu-24.04\n    container:\n      image: amd64/rust\n    steps:\n      - uses: actions/checkout@v6\n\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: 17\n\n      - name: Restore Cargo cache\n        uses: actions/cache/restore@v5\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n            native/target\n          key: ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-${{ hashFiles('native/**/*.rs') }}\n          restore-keys: |\n            ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-\n\n      - name: Build native library\n        # Use CI profile for faster builds (no LTO) and to share cache with pr_build_linux.yml.\n        run: |\n          cd native && cargo build --profile ci\n        env:\n          RUSTFLAGS: \"-Ctarget-cpu=x86-64-v3\"\n\n      - name: Save Cargo cache\n        uses: actions/cache/save@v5\n        if: github.ref == 'refs/heads/main'\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n            native/target\n          key: ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-${{ hashFiles('native/**/*.rs') }}\n\n      - name: Upload native library\n        uses: actions/upload-artifact@v7\n        with:\n          name: native-lib-iceberg\n          path: native/target/ci/libcomet.so\n          retention-days: 1\n\n  iceberg-spark:\n    needs: build-native\n    strategy:\n      matrix:\n    
    os: [ubuntu-24.04]\n        java-version: [11, 17]\n        iceberg-version: [{short: '1.8', full: '1.8.1'}, {short: '1.9', full: '1.9.1'}, {short: '1.10', full: '1.10.0'}]\n        spark-version: [{short: '3.4', full: '3.4.3'}, {short: '3.5', full: '3.5.8'}]\n        scala-version: ['2.13']\n      fail-fast: false\n    name: iceberg-spark/${{ matrix.os }}/iceberg-${{ matrix.iceberg-version.full }}/spark-${{ matrix.spark-version.full }}/scala-${{ matrix.scala-version }}/java-${{ matrix.java-version }}\n    runs-on: ${{ matrix.os }}\n    container:\n      image: amd64/rust\n    env:\n      SPARK_LOCAL_IP: localhost\n    steps:\n      - uses: actions/checkout@v6\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{env.RUST_VERSION}}\n          jdk-version: ${{ matrix.java-version }}\n      - name: Download native library\n        uses: actions/download-artifact@v8\n        with:\n          name: native-lib-iceberg\n          path: native/target/release/\n      - name: Build Comet\n        run: |\n          ./mvnw install -Prelease -DskipTests -Pspark-${{ matrix.spark-version.short }} -Pscala-${{ matrix.scala-version }}\n      - name: Setup Iceberg\n        uses: ./.github/actions/setup-iceberg-builder\n        with:\n          iceberg-version: ${{ matrix.iceberg-version.full }}\n      - name: Run Iceberg Spark tests\n        run: |\n          cd apache-iceberg\n          rm -rf /root/.m2/repository/org/apache/parquet # somehow parquet cache requires cleanups\n          ENABLE_COMET=true ENABLE_COMET_ONHEAP=true ./gradlew -DsparkVersions=${{ matrix.spark-version.short }} -DscalaVersion=${{ matrix.scala-version }} -DflinkVersions= -DkafkaVersions= \\\n            :iceberg-spark:iceberg-spark-${{ matrix.spark-version.short }}_${{ matrix.scala-version }}:test \\\n            -Pquick=true -x javadoc\n\n  iceberg-spark-extensions:\n    needs: build-native\n    strategy:\n      matrix:\n        os: [ubuntu-24.04]\n        java-version: [11, 17]\n        iceberg-version: [{short: '1.8', full: '1.8.1'}, {short: '1.9', full: '1.9.1'}, {short: '1.10', full: '1.10.0'}]\n        spark-version: [{short: '3.4', full: '3.4.3'}, {short: '3.5', full: '3.5.8'}]\n        scala-version: ['2.13']\n      fail-fast: false\n    name: iceberg-spark-extensions/${{ matrix.os }}/iceberg-${{ matrix.iceberg-version.full }}/spark-${{ matrix.spark-version.full }}/scala-${{ matrix.scala-version }}/java-${{ matrix.java-version }}\n    runs-on: ${{ matrix.os }}\n    container:\n      image: amd64/rust\n    env:\n      SPARK_LOCAL_IP: localhost\n    steps:\n      - uses: actions/checkout@v6\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{env.RUST_VERSION}}\n          jdk-version: ${{ matrix.java-version }}\n      - name: Download native library\n        uses: actions/download-artifact@v8\n        with:\n          name: native-lib-iceberg\n          path: native/target/release/\n      - name: Build Comet\n        run: |\n          ./mvnw install -Prelease -DskipTests -Pspark-${{ matrix.spark-version.short }} -Pscala-${{ matrix.scala-version }}\n      - name: Setup Iceberg\n        uses: ./.github/actions/setup-iceberg-builder\n        with:\n          iceberg-version: ${{ matrix.iceberg-version.full }}\n      - name: Run Iceberg Spark extensions tests\n        run: |\n          cd apache-iceberg\n          rm -rf /root/.m2/repository/org/apache/parquet # somehow 
parquet cache requires cleanups\n          ENABLE_COMET=true ENABLE_COMET_ONHEAP=true ./gradlew -DsparkVersions=${{ matrix.spark-version.short }} -DscalaVersion=${{ matrix.scala-version }} -DflinkVersions= -DkafkaVersions= \\\n            :iceberg-spark:iceberg-spark-extensions-${{ matrix.spark-version.short }}_${{ matrix.scala-version }}:test \\\n            -Pquick=true -x javadoc\n\n  iceberg-spark-runtime:\n    needs: build-native\n    strategy:\n      matrix:\n        os: [ubuntu-24.04]\n        java-version: [11, 17]\n        iceberg-version: [{short: '1.8', full: '1.8.1'}, {short: '1.9', full: '1.9.1'}, {short: '1.10', full: '1.10.0'}]\n        spark-version: [{short: '3.4', full: '3.4.3'}, {short: '3.5', full: '3.5.8'}]\n        scala-version: ['2.13']\n      fail-fast: false\n    name: iceberg-spark-runtime/${{ matrix.os }}/iceberg-${{ matrix.iceberg-version.full }}/spark-${{ matrix.spark-version.full }}/scala-${{ matrix.scala-version }}/java-${{ matrix.java-version }}\n    runs-on: ${{ matrix.os }}\n    container:\n      image: amd64/rust\n    env:\n      SPARK_LOCAL_IP: localhost\n    steps:\n      - uses: actions/checkout@v6\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{env.RUST_VERSION}}\n          jdk-version: ${{ matrix.java-version }}\n      - name: Download native library\n        uses: actions/download-artifact@v8\n        with:\n          name: native-lib-iceberg\n          path: native/target/release/\n      - name: Build Comet\n        run: |\n          ./mvnw install -Prelease -DskipTests -Pspark-${{ matrix.spark-version.short }} -Pscala-${{ matrix.scala-version }}\n      - name: Setup Iceberg\n        uses: ./.github/actions/setup-iceberg-builder\n        with:\n          iceberg-version: ${{ matrix.iceberg-version.full }}\n      - name: Run Iceberg Spark runtime tests\n        run: |\n          cd apache-iceberg\n          rm -rf /root/.m2/repository/org/apache/parquet # somehow parquet cache requires cleanups\n          ENABLE_COMET=true ENABLE_COMET_ONHEAP=true ./gradlew -DsparkVersions=${{ matrix.spark-version.short }} -DscalaVersion=${{ matrix.scala-version }} -DflinkVersions= -DkafkaVersions= \\\n            :iceberg-spark:iceberg-spark-runtime-${{ matrix.spark-version.short }}_${{ matrix.scala-version }}:integrationTest \\\n            -Pquick=true -x javadoc\n"
  },
  {
    "path": ".github/workflows/label_new_issues.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Label new issues with requires-triage\n\non:\n  issues:\n    types: [opened]\n\npermissions:\n  issues: write\n\njobs:\n  add-triage-label:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/github-script@v9\n        with:\n          script: |\n            await github.rest.issues.addLabels({\n              owner: context.repo.owner,\n              repo: context.repo.repo,\n              issue_number: context.issue.number,\n              labels: ['requires-triage']\n            })\n"
  },
  {
    "path": ".github/workflows/miri.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Run Miri Safety Checks\n\non:\n  push:\n    branches:\n      - main\n    paths-ignore:\n      - \"doc/**\"\n      - \"docs/**\"\n      - \"**.md\"\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  pull_request:\n    paths-ignore:\n      - \"doc/**\"\n      - \"docs/**\"\n      - \"**.md\"\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  # manual trigger\n  # https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow\n  workflow_dispatch:\n\njobs:\n  miri:\n    name: \"Miri\"\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v6\n      - name: Install Build Dependencies\n        shell: bash\n        run: |\n          sudo apt-get update\n          sudo apt-get install -y protobuf-compiler\n          sudo apt-get install -y clang\n      - name: Install Miri\n        run: |\n          rustup toolchain install nightly --component miri\n          rustup override set nightly\n          cargo miri setup\n      - name: Test with Miri\n        run: |\n          cd native\n          MIRIFLAGS=\"-Zmiri-disable-isolation\" cargo miri test --lib --bins --tests --examples\n"
  },
  {
    "path": ".github/workflows/pr_benchmark_check.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Lightweight CI for benchmark-only changes - verifies compilation and linting\n# without running full test suites\n\nname: PR Benchmark Check\n\nconcurrency:\n  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}\n  cancel-in-progress: true\n\non:\n  push:\n    branches:\n      - main\n    paths:\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  pull_request:\n    paths:\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  workflow_dispatch:\n\nenv:\n  RUST_VERSION: stable\n  RUST_BACKTRACE: 1\n\njobs:\n  benchmark-check:\n    name: Benchmark Compile & Lint Check\n    runs-on: ubuntu-latest\n    container:\n      image: amd64/rust\n    steps:\n      - uses: actions/checkout@v6\n\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: 17\n\n      - name: Check Cargo fmt\n        run: |\n          cd native\n          cargo fmt --all -- --check --color=never\n\n      - name: Check Cargo clippy\n        run: |\n          cd native\n          cargo clippy --color=never --all-targets --workspace -- -D warnings\n\n      - name: Check benchmark compilation\n        run: |\n          cd native\n          cargo check --benches\n\n      - name: Cache Maven dependencies\n        uses: actions/cache@v5\n        with:\n          path: |\n            ~/.m2/repository\n            /root/.m2/repository\n          key: ${{ runner.os }}-benchmark-maven-${{ hashFiles('**/pom.xml') }}\n          restore-keys: |\n            ${{ runner.os }}-benchmark-maven-\n\n      - name: Check Scala compilation and linting\n        run: |\n          ./mvnw -B compile test-compile scalafix:scalafix -Dscalafix.mode=CHECK -Psemanticdb -DskipTests\n"
  },
  {
    "path": ".github/workflows/pr_build_linux.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: PR Build (Linux)\n\nconcurrency:\n  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}\n  cancel-in-progress: true\n\non:\n  push:\n    branches:\n      - main\n    paths-ignore:\n      - \"benchmarks/**\"\n      - \"doc/**\"\n      - \"docs/**\"\n      - \"**.md\"\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  pull_request:\n    paths-ignore:\n      - \"benchmarks/**\"\n      - \"doc/**\"\n      - \"docs/**\"\n      - \"**.md\"\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  # manual trigger\n  # https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow\n  workflow_dispatch:\n\nenv:\n  RUST_VERSION: stable\n  RUST_BACKTRACE: 1\n\njobs:\n\n  # Fast lint check - gates all other jobs\n  lint:\n    name: Lint\n    runs-on: ubuntu-latest\n    container:\n      image: amd64/rust\n    steps:\n      - uses: actions/checkout@v6\n\n      - name: Check Rust formatting\n        run: |\n          rustup component add rustfmt\n          cd native && cargo fmt --all -- --check\n\n  lint-java:\n    needs: lint\n    name: Lint Java (${{ matrix.profile.name }})\n    runs-on: ubuntu-latest\n    container:\n      image: amd64/rust\n      env:\n        JAVA_TOOL_OPTIONS: ${{ matrix.profile.java_version == '17' && '--add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-exports=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED' || '' }}\n    strategy:\n      matrix:\n        profile:\n          - name: \"Spark 3.4, JDK 11, Scala 2.12\"\n            java_version: \"11\"\n            maven_opts: \"-Pspark-3.4 -Pscala-2.12\"\n          - name: \"Spark 3.5, JDK 17, Scala 2.12\"\n            java_version: \"17\"\n            maven_opts: \"-Pspark-3.5 -Pscala-2.12\"\n          - name: \"Spark 4.0, JDK 17\"\n            java_version: \"17\"\n            maven_opts: \"-Pspark-4.0\"\n      fail-fast: false\n    steps:\n      - uses: runs-on/action@742bf56072eb4845a0f94b3394673e4903c90ff0  # v2.1.0\n      - uses: actions/checkout@v6\n\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: ${{ matrix.profile.java_version }}\n\n      - name: Cache Maven dependencies\n        uses: actions/cache@v5\n        with:\n          path: |\n            ~/.m2/repository\n            /root/.m2/repository\n          key: ${{ runner.os }}-java-maven-${{ 
hashFiles('**/pom.xml') }}-lint\n          restore-keys: |\n            ${{ runner.os }}-java-maven-\n\n      - name: Run scalafix check\n        run: |\n          ./mvnw -B package -DskipTests scalafix:scalafix -Dscalafix.mode=CHECK -Psemanticdb ${{ matrix.profile.maven_opts }}\n\n      - name: Setup Node.js\n        uses: actions/setup-node@v6\n        with:\n          node-version: '24'\n\n      - name: Install prettier\n        run: |\n          npm install -g prettier\n\n      - name: Run prettier\n        run: |\n          npx prettier \"**/*.md\" --write\n\n      - name: Mark workspace as safe for git\n        run: |\n          git config --global --add safe.directory \"$GITHUB_WORKSPACE\"\n\n      - name: Check for any local git changes (such as generated docs)\n        run: |\n          ./dev/ci/check-working-tree-clean.sh\n\n  # Build native library once and share with all test jobs\n  build-native:\n    needs: lint\n    name: Build Native Library\n    runs-on: ${{ github.repository_owner == 'apache' && format('runs-on={0},family=m8a+m7a+c8a,cpu=8,image=ubuntu24-full-x64,extras=s3-cache,disk=large,tag=datafusion-comet', github.run_id) || 'ubuntu-latest' }}\n    container:\n      image: amd64/rust\n    steps:\n      - uses: runs-on/action@742bf56072eb4845a0f94b3394673e4903c90ff0  # v2.1.0\n      - uses: actions/checkout@v6\n      - name: Setup Rust toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: 17  # JDK only needed for common module proto generation\n\n      - name: Restore Cargo cache\n        uses: actions/cache/restore@v5\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n            native/target\n          key: ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-${{ hashFiles('native/**/*.rs') }}\n          restore-keys: |\n            ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-\n\n      - name: Build native library (CI profile)\n        run: |\n          cd native\n          # CI profile: same overflow behavior as release, but faster compilation\n          # (no LTO, parallel codegen)\n          cargo build --profile ci\n        env:\n          RUSTFLAGS: \"-Ctarget-cpu=x86-64-v3\"\n\n      - name: Upload native library\n        uses: actions/upload-artifact@v7\n        with:\n          name: native-lib-linux\n          path: native/target/ci/libcomet.so\n          retention-days: 1\n\n      - name: Save Cargo cache\n        uses: actions/cache/save@v5\n        if: github.ref == 'refs/heads/main'\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n            native/target\n          key: ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-${{ hashFiles('native/**/*.rs') }}\n\n  # Run Rust tests (runs in parallel with build-native, uses debug builds)\n  linux-test-rust:\n    needs: lint\n    name: ubuntu-latest/rust-test\n    runs-on: ${{ github.repository_owner == 'apache' && format('runs-on={0},family=m8a+m7a+c8a,cpu=16,image=ubuntu24-full-x64,extras=s3-cache,disk=large,tag=datafusion-comet', github.run_id) || 'ubuntu-latest' }}\n    container:\n      image: amd64/rust\n    steps:\n      - uses: runs-on/action@742bf56072eb4845a0f94b3394673e4903c90ff0  # v2.1.0\n\n      - uses: actions/checkout@v6\n\n      - name: Setup Rust & Java toolchain\n        uses: 
./.github/actions/setup-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: 17\n\n      - name: Restore Cargo cache\n        uses: actions/cache/restore@v5\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n            native/target\n          # Note: Java version intentionally excluded - Rust target is JDK-independent\n          key: ${{ runner.os }}-cargo-debug-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-${{ hashFiles('native/**/*.rs') }}\n          restore-keys: |\n            ${{ runner.os }}-cargo-debug-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-\n\n      - name: Rust test steps\n        uses: ./.github/actions/rust-test\n\n      - name: Save Cargo cache\n        uses: actions/cache/save@v5\n        if: github.ref == 'refs/heads/main'\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n            native/target\n          key: ${{ runner.os }}-cargo-debug-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-${{ hashFiles('native/**/*.rs') }}\n\n  linux-test:\n    needs: build-native\n    strategy:\n      matrix:\n        # the goal with these profiles is to get coverage of all Java, Scala, and Spark\n        # versions without testing all possible combinations, which would be overkill\n        profile:\n          - name: \"Spark 3.4, JDK 11, Scala 2.12\"\n            java_version: \"11\"\n            maven_opts: \"-Pspark-3.4 -Pscala-2.12\"\n            scan_impl: \"auto\"\n\n          - name: \"Spark 3.5.5, JDK 17, Scala 2.13\"\n            java_version: \"17\"\n            maven_opts: \"-Pspark-3.5 -Dspark.version=3.5.5 -Pscala-2.13\"\n            scan_impl: \"auto\"\n\n          - name: \"Spark 3.5.6, JDK 17, Scala 2.13\"\n            java_version: \"17\"\n            maven_opts: \"-Pspark-3.5 -Dspark.version=3.5.6 -Pscala-2.13\"\n            scan_impl: \"auto\"\n\n          - name: \"Spark 3.5, JDK 17, Scala 2.12\"\n            java_version: \"17\"\n            maven_opts: \"-Pspark-3.5 -Pscala-2.12\"\n            scan_impl: \"native_iceberg_compat\"\n\n          - name: \"Spark 4.0, JDK 17\"\n            java_version: \"17\"\n            maven_opts: \"-Pspark-4.0\"\n            scan_impl: \"auto\"\n        suite:\n          - name: \"fuzz\"\n            value: |\n              org.apache.comet.CometFuzzTestSuite\n              org.apache.comet.CometFuzzAggregateSuite\n              org.apache.comet.CometFuzzIcebergSuite\n              org.apache.comet.CometFuzzMathSuite\n              org.apache.comet.DataGeneratorSuite\n          - name: \"shuffle\"\n            value: |\n              org.apache.comet.exec.CometShuffleSuite\n              org.apache.comet.exec.CometShuffle4_0Suite\n              org.apache.comet.exec.CometNativeColumnarToRowSuite\n              org.apache.comet.exec.CometNativeShuffleSuite\n              org.apache.comet.exec.CometShuffleEncryptionSuite\n              org.apache.comet.exec.CometShuffleManagerSuite\n              org.apache.comet.exec.CometAsyncShuffleSuite\n              org.apache.comet.exec.DisableAQECometShuffleSuite\n              org.apache.comet.exec.DisableAQECometAsyncShuffleSuite\n              org.apache.spark.shuffle.sort.SpillSorterSuite\n          - name: \"parquet\"\n            value: |\n              org.apache.comet.parquet.CometParquetWriterSuite\n              org.apache.comet.parquet.ParquetReadV1Suite\n              
org.apache.comet.parquet.ParquetReadV2Suite\n              org.apache.comet.parquet.ParquetReadFromFakeHadoopFsSuite\n              org.apache.spark.sql.comet.ParquetDatetimeRebaseV1Suite\n              org.apache.spark.sql.comet.ParquetDatetimeRebaseV2Suite\n              org.apache.spark.sql.comet.ParquetEncryptionITCase\n              org.apache.comet.exec.CometNativeReaderSuite\n              org.apache.comet.CometIcebergNativeSuite\n          - name: \"csv\"\n            value: |\n              org.apache.comet.csv.CometCsvNativeReadSuite\n          - name: \"exec\"\n            value: |\n              org.apache.comet.exec.CometAggregateSuite\n              org.apache.comet.exec.CometExec3_4PlusSuite\n              org.apache.comet.exec.CometExecSuite\n              org.apache.comet.exec.CometGenerateExecSuite\n              org.apache.comet.exec.CometWindowExecSuite\n              org.apache.comet.exec.CometJoinSuite\n              org.apache.comet.CometNativeSuite\n              org.apache.comet.CometSparkSessionExtensionsSuite\n              org.apache.spark.CometPluginsSuite\n              org.apache.spark.CometPluginsDefaultSuite\n              org.apache.spark.CometPluginsNonOverrideSuite\n              org.apache.spark.CometPluginsUnifiedModeOverrideSuite\n              org.apache.comet.rules.CometScanRuleSuite\n              org.apache.comet.rules.CometExecRuleSuite\n              org.apache.spark.sql.CometTPCDSQuerySuite\n              org.apache.spark.sql.CometTPCDSQueryTestSuite\n              org.apache.spark.sql.CometTPCHQuerySuite\n              org.apache.spark.sql.comet.CometTPCDSV1_4_PlanStabilitySuite\n              org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\n              org.apache.spark.sql.comet.CometTaskMetricsSuite\n              org.apache.spark.sql.comet.CometDppFallbackRepro3949Suite\n              org.apache.spark.sql.comet.CometShuffleFallbackStickinessSuite\n              org.apache.comet.objectstore.NativeConfigSuite\n          - name: \"expressions\"\n            value: |\n              org.apache.comet.CometExpressionSuite\n              org.apache.comet.CometSqlFileTestSuite\n              org.apache.comet.CometExpressionCoverageSuite\n              org.apache.comet.CometHashExpressionSuite\n              org.apache.comet.CometTemporalExpressionSuite\n              org.apache.comet.CometArrayExpressionSuite\n              org.apache.comet.CometCastSuite\n              org.apache.comet.CometDateTimeUtilsSuite\n              org.apache.comet.CometMathExpressionSuite\n              org.apache.comet.CometStringExpressionSuite\n              org.apache.comet.CometBitwiseExpressionSuite\n              org.apache.comet.CometMapExpressionSuite\n              org.apache.comet.CometCsvExpressionSuite\n              org.apache.comet.CometJsonExpressionSuite\n              org.apache.comet.CometDateTimeUtilsSuite\n              org.apache.comet.SparkErrorConverterSuite\n              org.apache.comet.expressions.conditional.CometIfSuite\n              org.apache.comet.expressions.conditional.CometCoalesceSuite\n              org.apache.comet.expressions.conditional.CometCaseWhenSuite\n          - name: \"sql\"\n            value: |\n              org.apache.spark.sql.CometToPrettyStringSuite\n      fail-fast: false\n    name: ${{ matrix.profile.name }}/${{ matrix.profile.scan_impl }} [${{ matrix.suite.name }}]\n    runs-on: ${{ github.repository_owner == 'apache' && 
format('runs-on={0},family=m8a+m7a+c8a,cpu=16,image=ubuntu24-full-x64,extras=s3-cache,disk=large,tag=datafusion-comet', github.run_id) || 'ubuntu-latest' }}\n    container:\n      image: amd64/rust\n      env:\n        JAVA_TOOL_OPTIONS: ${{ matrix.profile.java_version == '17' && '--add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-exports=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.lang=ALL-UNNAMED' || '' }}\n\n    steps:\n      - uses: runs-on/action@742bf56072eb4845a0f94b3394673e4903c90ff0  # v2.1.0\n      - uses: actions/checkout@v6\n\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: ${{ matrix.profile.java_version }}\n\n      - name: Download native library\n        uses: actions/download-artifact@v8\n        with:\n          name: native-lib-linux\n          # Download to release/ since Maven's -Prelease expects libcomet.so there\n          path: native/target/release/\n\n      # Restore cargo registry cache (for any cargo commands that might run)\n      - name: Cache Cargo registry\n        uses: actions/cache@v5\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n          key: ${{ runner.os }}-cargo-registry-${{ hashFiles('native/**/Cargo.lock') }}\n          restore-keys: |\n            ${{ runner.os }}-cargo-registry-\n\n      - name: Java test steps\n        uses: ./.github/actions/java-test\n        with:\n          artifact_name: ${{ matrix.profile.name }}-${{ matrix.suite.name }}-${{ github.run_id }}-${{ github.run_number }}-${{ github.run_attempt }}-${{ matrix.profile.scan_impl }}\n          suites: ${{ matrix.suite.name == 'sql' && matrix.profile.name == 'Spark 3.4, JDK 11, Scala 2.12' && '' || matrix.suite.value }}\n          maven_opts: ${{ matrix.profile.maven_opts }}\n          scan_impl: ${{ matrix.profile.scan_impl }}\n          upload-test-reports: true\n          skip-native-build: true\n\n  # TPC-H correctness test - verifies benchmark queries produce correct results\n  verify-benchmark-results-tpch:\n    needs: build-native\n    name: Verify TPC-H Results\n    runs-on: ${{ github.repository_owner == 'apache' && format('runs-on={0},family=m8a+m7a+c8a,cpu=16,image=ubuntu24-full-x64,extras=s3-cache,disk=large,tag=datafusion-comet', github.run_id) || 'ubuntu-latest' }}\n    container:\n      image: amd64/rust\n    steps:\n      - uses: runs-on/action@742bf56072eb4845a0f94b3394673e4903c90ff0  # v2.1.0\n\n      - uses: actions/checkout@v6\n\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: 11\n\n      - name: Download native library\n        uses: actions/download-artifact@v8\n        with:\n          name: native-lib-linux\n          path: native/target/release/\n\n      - name: Cache Maven dependencies\n        uses: actions/cache@v5\n        with:\n          path: |\n            ~/.m2/repository\n            /root/.m2/repository\n          key: ${{ runner.os }}-java-maven-${{ hashFiles('**/pom.xml') }}\n          restore-keys: |\n            ${{ runner.os }}-java-maven-\n\n      - name: Cache TPC-H data\n        id: cache-tpch\n        uses: actions/cache@v5\n        with:\n          path: ./tpch\n          key: tpch-${{ hashFiles('.github/workflows/pr_build_linux.yml') }}\n\n      - name: Build project\n      
  run: |\n          ./mvnw -B -Prelease install -DskipTests\n\n      - name: Generate TPC-H data (SF=1)\n        if: steps.cache-tpch.outputs.cache-hit != 'true'\n        run: |\n          cd spark && MAVEN_OPTS='-Xmx20g' ../mvnw -B -Prelease exec:java -Dexec.mainClass=\"org.apache.spark.sql.GenTPCHData\" -Dexec.classpathScope=\"test\" -Dexec.cleanupDaemonThreads=\"false\" -Dexec.args=\"--location `pwd`/.. --scaleFactor 1 --numPartitions 1 --overwrite\"\n\n      - name: Run TPC-H queries\n        run: |\n          SPARK_HOME=`pwd` SPARK_TPCH_DATA=`pwd`/tpch/sf1_parquet ./mvnw -B -Prelease -Dsuites=org.apache.spark.sql.CometTPCHQuerySuite test\n\n  # TPC-DS correctness tests - verifies benchmark queries produce correct results\n  verify-benchmark-results-tpcds:\n    needs: build-native\n    name: Verify TPC-DS Results (${{ matrix.join }})\n    runs-on: ${{ github.repository_owner == 'apache' && format('runs-on={0},family=m8a+m7a+c8a,cpu=16,image=ubuntu24-full-x64,extras=s3-cache,disk=large,tag=datafusion-comet', github.run_id) || 'ubuntu-latest' }}\n    container:\n      image: amd64/rust\n    strategy:\n      matrix:\n        join: [sort_merge, broadcast, hash]\n      fail-fast: false\n    steps:\n      - uses: runs-on/action@742bf56072eb4845a0f94b3394673e4903c90ff0  # v2.1.0\n\n      - uses: actions/checkout@v6\n\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: 11\n\n      - name: Download native library\n        uses: actions/download-artifact@v8\n        with:\n          name: native-lib-linux\n          path: native/target/release/\n\n      - name: Cache Maven dependencies\n        uses: actions/cache@v5\n        with:\n          path: |\n            ~/.m2/repository\n            /root/.m2/repository\n          key: ${{ runner.os }}-java-maven-${{ hashFiles('**/pom.xml') }}\n          restore-keys: |\n            ${{ runner.os }}-java-maven-\n\n      - name: Cache TPC-DS data\n        id: cache-tpcds\n        uses: actions/cache@v5\n        with:\n          path: ./tpcds-sf-1\n          key: tpcds-${{ hashFiles('.github/workflows/pr_build_linux.yml') }}\n\n      - name: Build project\n        run: |\n          ./mvnw -B -Prelease install -DskipTests\n\n      - name: Checkout tpcds-kit\n        if: steps.cache-tpcds.outputs.cache-hit != 'true'\n        uses: actions/checkout@v6\n        with:\n          repository: databricks/tpcds-kit\n          path: ./tpcds-kit\n\n      - name: Build tpcds-kit\n        if: steps.cache-tpcds.outputs.cache-hit != 'true'\n        run: |\n          apt-get update && apt-get install -y yacc bison flex gcc-12 g++-12\n          update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-12 120 --slave /usr/bin/g++ g++ /usr/bin/g++-12\n          cd tpcds-kit/tools && make OS=LINUX\n\n      - name: Generate TPC-DS data (SF=1)\n        if: steps.cache-tpcds.outputs.cache-hit != 'true'\n        run: |\n          cd spark && MAVEN_OPTS='-Xmx20g' ../mvnw -B -Prelease exec:java -Dexec.mainClass=\"org.apache.spark.sql.GenTPCDSData\" -Dexec.classpathScope=\"test\" -Dexec.cleanupDaemonThreads=\"false\" -Dexec.args=\"--dsdgenDir `pwd`/../tpcds-kit/tools --location `pwd`/../tpcds-sf-1 --scaleFactor 1 --numPartitions 1\"\n\n      - name: Run TPC-DS queries (Sort merge join)\n        if: matrix.join == 'sort_merge'\n        run: |\n          SPARK_HOME=`pwd` SPARK_TPCDS_DATA=`pwd`/tpcds-sf-1 ./mvnw -B -Prelease 
-Dsuites=org.apache.spark.sql.CometTPCDSQuerySuite test\n        env:\n          SPARK_TPCDS_JOIN_CONF: |\n            spark.sql.autoBroadcastJoinThreshold=-1\n            spark.sql.join.preferSortMergeJoin=true\n\n      - name: Run TPC-DS queries (Broadcast hash join)\n        if: matrix.join == 'broadcast'\n        run: |\n          SPARK_HOME=`pwd` SPARK_TPCDS_DATA=`pwd`/tpcds-sf-1 ./mvnw -B -Prelease -Dsuites=org.apache.spark.sql.CometTPCDSQuerySuite test\n        env:\n          SPARK_TPCDS_JOIN_CONF: |\n            spark.sql.autoBroadcastJoinThreshold=10485760\n\n      - name: Run TPC-DS queries (Shuffled hash join)\n        if: matrix.join == 'hash'\n        run: |\n          SPARK_HOME=`pwd` SPARK_TPCDS_DATA=`pwd`/tpcds-sf-1 ./mvnw -B -Prelease -Dsuites=org.apache.spark.sql.CometTPCDSQuerySuite test\n        env:\n          SPARK_TPCDS_JOIN_CONF: |\n            spark.sql.autoBroadcastJoinThreshold=-1\n            spark.sql.join.forceApplyShuffledHashJoin=true\n"
  },
  {
    "path": ".github/workflows/pr_build_macos.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: PR Build (macOS)\n\nconcurrency:\n  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}\n  cancel-in-progress: true\n\non:\n  push:\n    branches:\n      - main\n    paths-ignore:\n      - \"benchmarks/**\"\n      - \"doc/**\"\n      - \"docs/**\"\n      - \"**.md\"\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  pull_request:\n    paths-ignore:\n      - \"benchmarks/**\"\n      - \"doc/**\"\n      - \"docs/**\"\n      - \"**.md\"\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  # manual trigger\n  # https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow\n  workflow_dispatch:\n\nenv:\n  RUST_VERSION: stable\n  RUST_BACKTRACE: 1\n\njobs:\n\n  # Fast lint check - gates all other jobs (runs on Linux for cost efficiency)\n  lint:\n    name: Lint\n    runs-on: ubuntu-latest\n    container:\n      image: amd64/rust\n    steps:\n      - uses: actions/checkout@v6\n\n      - name: Check Rust formatting\n        run: |\n          rustup component add rustfmt\n          cd native && cargo fmt --all -- --check\n\n  # Build native library once and share with all test jobs\n  build-native:\n    needs: lint\n    name: Build Native Library (macOS)\n    runs-on: macos-14\n    steps:\n      - uses: actions/checkout@v6\n\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-macos-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: 17\n          jdk-architecture: aarch64\n          protoc-architecture: aarch_64\n\n      - name: Restore Cargo cache\n        uses: actions/cache/restore@v5\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n            native/target\n          key: ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-${{ hashFiles('native/**/*.rs') }}\n          restore-keys: |\n            ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-\n\n      - name: Build native library (CI profile)\n        run: |\n          cd native\n          # CI profile: same overflow behavior as release, but faster compilation\n          # (no LTO, parallel codegen)\n          cargo build --profile ci\n        env:\n          RUSTFLAGS: \"-Ctarget-cpu=apple-m1\"\n\n      - name: Upload native library\n        uses: actions/upload-artifact@v7\n        with:\n          name: native-lib-macos\n          path: native/target/ci/libcomet.dylib\n         
 retention-days: 1\n\n      - name: Save Cargo cache\n        uses: actions/cache/save@v5\n        if: github.ref == 'refs/heads/main'\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n            native/target\n          key: ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-${{ hashFiles('native/**/*.rs') }}\n\n  macos-aarch64-test:\n    needs: build-native\n    strategy:\n      matrix:\n        os: [macos-14]\n\n        # the goal with these profiles is to get coverage of all Java, Scala, and Spark\n        # versions without testing all possible combinations, which would be overkill\n        profile:\n          - name: \"Spark 3.4, JDK 11, Scala 2.12\"\n            java_version: \"11\"\n            maven_opts: \"-Pspark-3.4 -Pscala-2.12\"\n\n          - name: \"Spark 3.5, JDK 17, Scala 2.13\"\n            java_version: \"17\"\n            maven_opts: \"-Pspark-3.5 -Pscala-2.13\"\n\n          - name: \"Spark 4.0, JDK 17, Scala 2.13\"\n            java_version: \"17\"\n            maven_opts: \"-Pspark-4.0 -Pscala-2.13\"\n\n        suite:\n          - name: \"fuzz\"\n            value: |\n              org.apache.comet.CometFuzzTestSuite\n              org.apache.comet.CometFuzzAggregateSuite\n              org.apache.comet.CometFuzzIcebergSuite\n              org.apache.comet.CometFuzzMathSuite\n              org.apache.comet.DataGeneratorSuite\n          - name: \"shuffle\"\n            value: |\n              org.apache.comet.exec.CometShuffleSuite\n              org.apache.comet.exec.CometShuffle4_0Suite\n              org.apache.comet.exec.CometNativeColumnarToRowSuite\n              org.apache.comet.exec.CometNativeShuffleSuite\n              org.apache.comet.exec.CometShuffleEncryptionSuite\n              org.apache.comet.exec.CometShuffleManagerSuite\n              org.apache.comet.exec.CometAsyncShuffleSuite\n              org.apache.comet.exec.DisableAQECometShuffleSuite\n              org.apache.comet.exec.DisableAQECometAsyncShuffleSuite\n              org.apache.spark.shuffle.sort.SpillSorterSuite\n          - name: \"parquet\"\n            value: |\n              org.apache.comet.parquet.CometParquetWriterSuite\n              org.apache.comet.parquet.ParquetReadV1Suite\n              org.apache.comet.parquet.ParquetReadV2Suite\n              org.apache.comet.parquet.ParquetReadFromFakeHadoopFsSuite\n              org.apache.spark.sql.comet.ParquetDatetimeRebaseV1Suite\n              org.apache.spark.sql.comet.ParquetDatetimeRebaseV2Suite\n              org.apache.spark.sql.comet.ParquetEncryptionITCase\n              org.apache.comet.exec.CometNativeReaderSuite\n              org.apache.comet.CometIcebergNativeSuite\n          - name: \"csv\"\n            value: |\n              org.apache.comet.csv.CometCsvNativeReadSuite\n          - name: \"exec\"\n            value: |\n              org.apache.comet.exec.CometAggregateSuite\n              org.apache.comet.exec.CometExec3_4PlusSuite\n              org.apache.comet.exec.CometExecSuite\n              org.apache.comet.exec.CometGenerateExecSuite\n              org.apache.comet.exec.CometWindowExecSuite\n              org.apache.comet.exec.CometJoinSuite\n              org.apache.comet.CometNativeSuite\n              org.apache.comet.CometSparkSessionExtensionsSuite\n              org.apache.spark.CometPluginsSuite\n              org.apache.spark.CometPluginsDefaultSuite\n              org.apache.spark.CometPluginsNonOverrideSuite\n            
  org.apache.spark.CometPluginsUnifiedModeOverrideSuite\n              org.apache.comet.rules.CometScanRuleSuite\n              org.apache.comet.rules.CometExecRuleSuite\n              org.apache.spark.sql.CometTPCDSQuerySuite\n              org.apache.spark.sql.CometTPCDSQueryTestSuite\n              org.apache.spark.sql.CometTPCHQuerySuite\n              org.apache.spark.sql.comet.CometTPCDSV1_4_PlanStabilitySuite\n              org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\n              org.apache.spark.sql.comet.CometTaskMetricsSuite\n              org.apache.spark.sql.comet.CometDppFallbackRepro3949Suite\n              org.apache.spark.sql.comet.CometShuffleFallbackStickinessSuite\n              org.apache.comet.objectstore.NativeConfigSuite\n          - name: \"expressions\"\n            value: |\n              org.apache.comet.CometExpressionSuite\n              org.apache.comet.CometSqlFileTestSuite\n              org.apache.comet.CometExpressionCoverageSuite\n              org.apache.comet.CometHashExpressionSuite\n              org.apache.comet.CometTemporalExpressionSuite\n              org.apache.comet.CometArrayExpressionSuite\n              org.apache.comet.CometCastSuite\n              org.apache.comet.CometMathExpressionSuite\n              org.apache.comet.CometStringExpressionSuite\n              org.apache.comet.CometBitwiseExpressionSuite\n              org.apache.comet.CometMapExpressionSuite\n              org.apache.comet.CometJsonExpressionSuite\n              org.apache.comet.CometCsvExpressionSuite\n              org.apache.comet.CometDateTimeUtilsSuite\n              org.apache.comet.SparkErrorConverterSuite\n              org.apache.comet.expressions.conditional.CometIfSuite\n              org.apache.comet.expressions.conditional.CometCoalesceSuite\n              org.apache.comet.expressions.conditional.CometCaseWhenSuite\n          - name: \"sql\"\n            value: |\n              org.apache.spark.sql.CometToPrettyStringSuite\n\n      fail-fast: false\n    name: ${{ matrix.os }}/${{ matrix.profile.name }} [${{ matrix.suite.name }}]\n    runs-on: ${{ matrix.os }}\n    steps:\n      - uses: actions/checkout@v6\n\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-macos-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: ${{ matrix.profile.java_version }}\n          jdk-architecture: aarch64\n          protoc-architecture: aarch_64\n\n      - name: Download native library\n        uses: actions/download-artifact@v8\n        with:\n          name: native-lib-macos\n          # Download to release/ since Maven's -Prelease expects libcomet.dylib there\n          path: native/target/release/\n\n      # Restore cargo registry cache (for any cargo commands that might run)\n      - name: Cache Cargo registry\n        uses: actions/cache@v5\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n          key: ${{ runner.os }}-cargo-registry-${{ hashFiles('native/**/Cargo.lock') }}\n          restore-keys: |\n            ${{ runner.os }}-cargo-registry-\n\n      - name: Set thread thresholds envs for spark test on macOS\n        # see: https://github.com/apache/datafusion-comet/issues/2965\n        shell: bash\n        run: |\n          echo \"SPARK_TEST_SQL_SHUFFLE_EXCHANGE_MAX_THREAD_THRESHOLD=256\" >> $GITHUB_ENV\n          echo \"SPARK_TEST_SQL_RESULT_QUERY_STAGE_MAX_THREAD_THRESHOLD=256\" >> $GITHUB_ENV\n          echo 
\"SPARK_TEST_HIVE_SHUFFLE_EXCHANGE_MAX_THREAD_THRESHOLD=48\" >> $GITHUB_ENV\n          echo \"SPARK_TEST_HIVE_RESULT_QUERY_STAGE_MAX_THREAD_THRESHOLD=48\" >> $GITHUB_ENV\n\n      - name: Java test steps\n        uses: ./.github/actions/java-test\n        with:\n          artifact_name: ${{ matrix.os }}-${{ matrix.profile.name }}-${{ matrix.suite.name }}-${{ github.run_id }}-${{ github.run_number }}-${{ github.run_attempt }}\n          suites: ${{ matrix.suite.name == 'sql' && matrix.profile.name == 'Spark 3.4, JDK 11, Scala 2.12' && '' || matrix.suite.value }}\n          maven_opts: ${{ matrix.profile.maven_opts }}\n          skip-native-build: true\n"
  },
  {
    "path": ".github/workflows/pr_markdown_format.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Check Markdown Formatting\n\nconcurrency:\n  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}\n  cancel-in-progress: true\n\non:\n  pull_request:\n    paths:\n      - '**.md'\n\njobs:\n  prettier-check:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v6\n\n      - name: Setup Node.js\n        uses: actions/setup-node@v6\n        with:\n          node-version: '24'\n\n      - name: Install prettier\n        run: npm install -g prettier\n\n      - name: Check markdown formatting\n        run: |\n          # if you encounter error, run prettier locally and commit changes using instructions at:\n          #\n          # https://datafusion.apache.org/comet/contributor-guide/development.html#how-to-format-md-document\n          #\n          prettier --check \"**/*.md\"\n"
  },
  {
    "path": ".github/workflows/pr_missing_suites.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Check that all test suites are added to PR workflows\n\non:\n  push:\n    branches:\n      - main\n  pull_request:\n    types: [opened, synchronize, reopened]\n\njobs:\n  check-missing-suites:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v6\n      - name: Check Missing Suites\n        run: python3 dev/ci/check-suites.py\n"
  },
  {
    "path": ".github/workflows/pr_rat_check.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: RAT License Check\n\nconcurrency:\n  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}\n  cancel-in-progress: true\n\npermissions:\n  contents: read\n\n# No paths-ignore: this workflow must run for ALL changes including docs\non:\n  push:\n    branches:\n      - main\n  pull_request:\n  workflow_dispatch:\n\njobs:\n  rat-check:\n    name: RAT License Check\n    runs-on: ubuntu-slim\n    steps:\n      - uses: actions/checkout@v6\n      - name: Set up Java\n        uses: actions/setup-java@v5\n        with:\n          distribution: temurin\n          java-version: 11\n      - name: Run RAT check\n        run: ./mvnw -B -N apache-rat:check\n"
  },
  {
    "path": ".github/workflows/pr_title_check.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Check PR Title\n\nconcurrency:\n  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}\n  cancel-in-progress: true\n\non:\n  pull_request:\n    types: [opened, edited, reopened]\n\njobs:\n  check-pr-title:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v6\n      - name: Check PR title\n        env:\n          PR_TITLE: ${{ github.event.pull_request.title }}\n        run: |\n          if ! echo $PR_TITLE | grep -Eq '^(\\w+)(\\(.+\\))?: .+$'; then\n            echo \"PR title does not follow conventional commit style.\"\n            echo \"Please use a title in the format: type: message, or type(scope): message\"\n            echo \"Example: feat: Add support for sort-merge join\"\n            exit 1\n          fi\n\n"
  },
  {
    "path": ".github/workflows/spark_sql_test.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Spark SQL Tests\n\nconcurrency:\n  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}\n  cancel-in-progress: true\n\non:\n  push:\n    branches:\n      - main\n    paths-ignore:\n      - \"benchmarks/**\"\n      - \"doc/**\"\n      - \"docs/**\"\n      - \"**.md\"\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  pull_request:\n    paths-ignore:\n      - \"benchmarks/**\"\n      - \"doc/**\"\n      - \"docs/**\"\n      - \"**.md\"\n      - \"native/core/benches/**\"\n      - \"native/spark-expr/benches/**\"\n      - \"spark/src/test/scala/org/apache/spark/sql/benchmark/**\"\n  # manual trigger\n  # https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow\n  workflow_dispatch:\n    inputs:\n      collect-fallback-logs:\n        description: 'Whether to collect Comet fallback reasons from spark sql unit test logs'\n        required: false\n        default: 'false'\n        type: boolean\n\nenv:\n  RUST_VERSION: stable\n  RUST_BACKTRACE: 1\n\njobs:\n\n  # Build native library once and share with all test jobs\n  build-native:\n    name: Build Native Library\n    runs-on: ubuntu-24.04\n    container:\n      image: amd64/rust\n    steps:\n      - uses: actions/checkout@v6\n\n      - name: Setup Rust toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{ env.RUST_VERSION }}\n          jdk-version: 17\n\n      - name: Restore Cargo cache\n        uses: actions/cache/restore@v5\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n            native/target\n          key: ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-${{ hashFiles('native/**/*.rs') }}\n          restore-keys: |\n            ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 'native/**/Cargo.toml') }}-\n\n      - name: Build native library (CI profile)\n        run: |\n          cd native\n          cargo build --profile ci\n        env:\n          RUSTFLAGS: \"-Ctarget-cpu=x86-64-v3\"\n\n      - name: Upload native library\n        uses: actions/upload-artifact@v7\n        with:\n          name: native-lib-linux\n          path: native/target/ci/libcomet.so\n          retention-days: 1\n\n      - name: Save Cargo cache\n        uses: actions/cache/save@v5\n        if: github.ref == 'refs/heads/main'\n        with:\n          path: |\n            ~/.cargo/registry\n            ~/.cargo/git\n            native/target\n          key: ${{ runner.os }}-cargo-ci-${{ hashFiles('native/**/Cargo.lock', 
'native/**/Cargo.toml') }}-${{ hashFiles('native/**/*.rs') }}\n\n  spark-sql-test:\n    needs: build-native\n    strategy:\n      matrix:\n        os: [ubuntu-24.04]\n        module:\n          - {name: \"catalyst\", args1: \"catalyst/test\", args2: \"\"}\n          - {name: \"sql_core-1\", args1: \"\", args2: sql/testOnly * -- -l org.apache.spark.tags.ExtendedSQLTest -l org.apache.spark.tags.SlowSQLTest}\n          - {name: \"sql_core-2\", args1: \"\", args2: \"sql/testOnly * -- -n org.apache.spark.tags.ExtendedSQLTest\"}\n          - {name: \"sql_core-3\", args1: \"\", args2: \"sql/testOnly * -- -n org.apache.spark.tags.SlowSQLTest\"}\n          - {name: \"sql_hive-1\", args1: \"\", args2: \"hive/testOnly * -- -l org.apache.spark.tags.ExtendedHiveTest -l org.apache.spark.tags.SlowHiveTest\"}\n          - {name: \"sql_hive-2\", args1: \"\", args2: \"hive/testOnly * -- -n org.apache.spark.tags.ExtendedHiveTest\"}\n          - {name: \"sql_hive-3\", args1: \"\", args2: \"hive/testOnly * -- -n org.apache.spark.tags.SlowHiveTest\"}\n        # Since 4f5eaf0, auto mode uses native_datafusion for V1 scans,\n        # so we only need to test with auto.\n        config:\n          - {spark-short: '3.4', spark-full: '3.4.3', java: 11, scan-impl: 'auto'}\n          - {spark-short: '3.5', spark-full: '3.5.8', java: 11, scan-impl: 'auto'}\n          - {spark-short: '4.0', spark-full: '4.0.1', java: 17, scan-impl: 'auto'}\n        # Skip sql_hive-1 for Spark 4.0 due to https://github.com/apache/datafusion-comet/issues/2946\n        exclude:\n          - config: {spark-short: '4.0', spark-full: '4.0.1', java: 17, scan-impl: 'auto'}\n            module: {name: \"sql_hive-1\", args1: \"\", args2: \"hive/testOnly * -- -l org.apache.spark.tags.ExtendedHiveTest -l org.apache.spark.tags.SlowHiveTest\"}\n      fail-fast: false\n    name: spark-sql-${{ matrix.config.scan-impl }}-${{ matrix.module.name }}/spark-${{ matrix.config.spark-full }}\n    runs-on: ${{ matrix.os }}\n    container:\n      image: amd64/rust\n    steps:\n      - uses: actions/checkout@v6\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{env.RUST_VERSION}}\n          jdk-version: ${{ matrix.config.java }}\n      - name: Download native library\n        uses: actions/download-artifact@v8\n        with:\n          name: native-lib-linux\n          path: native/target/release/\n      - name: Setup Spark\n        uses: ./.github/actions/setup-spark-builder\n        with:\n          spark-version: ${{ matrix.config.spark-full }}\n          spark-short-version: ${{ matrix.config.spark-short }}\n          skip-native-build: true\n      - name: Run Spark tests\n        run: |\n          cd apache-spark\n          rm -rf /root/.m2/repository/org/apache/parquet # somehow parquet cache requires cleanups\n          NOLINT_ON_COMPILE=true ENABLE_COMET=true ENABLE_COMET_ONHEAP=true COMET_PARQUET_SCAN_IMPL=${{ matrix.config.scan-impl }} ENABLE_COMET_LOG_FALLBACK_REASONS=${{ github.event.inputs.collect-fallback-logs || 'false' }} \\\n            build/sbt -Dsbt.log.noformat=true ${{ matrix.module.args1 }} \"${{ matrix.module.args2 }}\"\n          if [ \"${{ github.event.inputs.collect-fallback-logs }}\" = \"true\" ]; then\n            find . 
-type f -name \"unit-tests.log\" -print0 | xargs -0 grep -h \"Comet cannot accelerate\" | sed 's/.*Comet cannot accelerate/Comet cannot accelerate/' | sort -u > fallback.log\n          fi\n        env:\n          LC_ALL: \"C.UTF-8\"\n      - name: Upload fallback log\n        if: ${{ github.event.inputs.collect-fallback-logs == 'true' }}\n        uses: actions/upload-artifact@v7\n        with:\n          name: fallback-log-spark-sql-${{ matrix.config.scan-impl }}-${{ matrix.module.name }}-spark-${{ matrix.config.spark-full }}\n          path: \"**/fallback.log\"\n\n  merge-fallback-logs:\n    if: ${{ github.event.inputs.collect-fallback-logs == 'true' }}\n    name: merge-fallback-logs\n    needs: [spark-sql-test]\n    runs-on: ubuntu-24.04\n    steps:\n      - name: Download fallback log artifacts\n        uses: actions/download-artifact@v8\n        with:\n          path: fallback-logs/\n      - name: Merge fallback logs\n        run: |\n          find ./fallback-logs/ -type f -name \"fallback.log\" -print0 | xargs -0 cat | sort -u > all_fallback.log\n      - name: Upload merged fallback log\n        uses: actions/upload-artifact@v7\n        with:\n          name: all-fallback-log\n          path: all_fallback.log\n"
  },
  {
    "path": ".github/workflows/spark_sql_test_native_iceberg_compat.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Spark SQL Tests (native_iceberg_compat)\n\nconcurrency:\n  group: ${{ github.repository }}-${{ github.head_ref || github.sha }}-${{ github.workflow }}\n  cancel-in-progress: true\n\non:\n  # manual trigger\n  # https://docs.github.com/en/actions/managing-workflow-runs/manually-running-a-workflow\n  workflow_dispatch:\n\nenv:\n  RUST_VERSION: stable\n  RUST_BACKTRACE: 1\n\njobs:\n  spark-sql-catalyst-native-iceberg-compat:\n    strategy:\n      matrix:\n        os: [ubuntu-24.04]\n        java-version: [11]\n        spark-version: [{short: '3.4', full: '3.4.3'}, {short: '3.5', full: '3.5.8'}]\n        module:\n          - {name: \"catalyst\", args1: \"catalyst/test\", args2: \"\"}\n          - {name: \"sql/core-1\", args1: \"\", args2: sql/testOnly * -- -l org.apache.spark.tags.ExtendedSQLTest -l org.apache.spark.tags.SlowSQLTest}\n          - {name: \"sql/core-2\", args1: \"\", args2: \"sql/testOnly * -- -n org.apache.spark.tags.ExtendedSQLTest\"}\n          - {name: \"sql/core-3\", args1: \"\", args2: \"sql/testOnly * -- -n org.apache.spark.tags.SlowSQLTest\"}\n          - {name: \"sql/hive-1\", args1: \"\", args2: \"hive/testOnly * -- -l org.apache.spark.tags.ExtendedHiveTest -l org.apache.spark.tags.SlowHiveTest\"}\n          - {name: \"sql/hive-2\", args1: \"\", args2: \"hive/testOnly * -- -n org.apache.spark.tags.ExtendedHiveTest\"}\n          - {name: \"sql/hive-3\", args1: \"\", args2: \"hive/testOnly * -- -n org.apache.spark.tags.SlowHiveTest\"}\n      fail-fast: false\n    name: spark-sql-native-iceberg-compat-${{ matrix.module.name }}/${{ matrix.os }}/spark-${{ matrix.spark-version.full }}/java-${{ matrix.java-version }}\n    runs-on: ${{ matrix.os }}\n    container:\n      image: amd64/rust\n    steps:\n      - uses: actions/checkout@v6\n      - name: Setup Rust & Java toolchain\n        uses: ./.github/actions/setup-builder\n        with:\n          rust-version: ${{env.RUST_VERSION}}\n          jdk-version: ${{ matrix.java-version }}\n      - name: Setup Spark\n        uses: ./.github/actions/setup-spark-builder\n        with:\n          spark-version: ${{ matrix.spark-version.full }}\n          spark-short-version: ${{ matrix.spark-version.short }}\n      - name: Run Spark tests\n        run: |\n          cd apache-spark\n          rm -rf /root/.m2/repository/org/apache/parquet # somehow parquet cache requires cleanups\n          ENABLE_COMET=true ENABLE_COMET_ONHEAP=true COMET_PARQUET_SCAN_IMPL=native_iceberg_compat build/sbt -Dsbt.log.noformat=true ${{ matrix.module.args1 }} \"${{ matrix.module.args2 }}\"\n        env:\n          LC_ALL: \"C.UTF-8\"\n\n"
  },
  {
    "path": ".github/workflows/stale.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: \"Close stale PRs\"\non:\n  schedule:\n    - cron: \"30 1 * * *\"\n\njobs:\n  close-stale-prs:\n    runs-on: ubuntu-latest\n    permissions:\n      issues: write\n      pull-requests: write\n    steps:\n      - uses: actions/stale@b5d41d4e1d5dceea10e7104786b73624c18a190f  # v10.2.0\n        with:\n          stale-pr-message: \"Thank you for your contribution. Unfortunately, this pull request is stale because it has been open 60 days with no activity. Please remove the stale label or comment or this will be closed in 7 days.\"\n          days-before-pr-stale: 60\n          days-before-pr-close: 7\n          # do not close stale issues\n          days-before-issue-stale: -1\n          days-before-issue-close: -1\n          repo-token: ${{ secrets.GITHUB_TOKEN }}"
  },
  {
    "path": ".github/workflows/take.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Assign/unassign the issue via `take` or `untake` comment\non:\n  issue_comment:\n    types: created\n\npermissions:\n  issues: write\n\njobs:\n  issue_assign:\n    runs-on: ubuntu-latest\n    if: (!github.event.issue.pull_request) && (github.event.comment.body == 'take' || github.event.comment.body == 'untake')\n    concurrency:\n      group: ${{ github.actor }}-issue-assign\n    steps:\n      - name: Take or untake issue\n        env:\n          COMMENT_BODY: ${{ github.event.comment.body }}\n          ISSUE_NUMBER: ${{ github.event.issue.number }}\n          USER_LOGIN: ${{ github.event.comment.user.login }}\n          REPO: ${{ github.repository }}\n          TOKEN: ${{ secrets.GITHUB_TOKEN }}\n        run: |\n          if [ \"$COMMENT_BODY\" == \"take\" ]\n          then\n            CODE=$(curl -H \"Authorization: token $TOKEN\" -LI https://api.github.com/repos/$REPO/issues/$ISSUE_NUMBER/assignees/$USER_LOGIN -o /dev/null -w '%{http_code}\\n' -s)\n            if [ \"$CODE\" -eq \"204\" ]\n            then\n              echo \"Assigning issue $ISSUE_NUMBER to $USER_LOGIN\"\n              curl -X POST -H \"Authorization: token $TOKEN\" -H \"Content-Type: application/json\" -d \"{\\\"assignees\\\": [\\\"$USER_LOGIN\\\"]}\" https://api.github.com/repos/$REPO/issues/$ISSUE_NUMBER/assignees\n            else\n              echo \"Cannot assign issue $ISSUE_NUMBER to $USER_LOGIN\"\n            fi\n          elif [ \"$COMMENT_BODY\" == \"untake\" ]\n          then\n            echo \"Unassigning issue $ISSUE_NUMBER from $USER_LOGIN\"\n            curl -X DELETE -H \"Authorization: token $TOKEN\" -H \"Content-Type: application/json\" -d \"{\\\"assignees\\\": [\\\"$USER_LOGIN\\\"]}\" https://api.github.com/repos/$REPO/issues/$ISSUE_NUMBER/assignees\n          fi\n"
  },
  {
    "path": ".github/workflows/validate_workflows.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nname: Validate Github Workflows\n\non:\n  pull_request:\n    paths:\n      - \".github/workflows/*.yml\"\n      - \".github/workflows/*.yaml\"\n  push:\n    branches:\n      - main\n\njobs:\n  validate:\n    runs-on: ubuntu-latest\n    steps:\n      - uses: actions/checkout@v6\n\n      - name: Install actionlint\n        run: |\n          curl -sSfL https://raw.githubusercontent.com/rhysd/actionlint/main/scripts/download-actionlint.bash | bash\n          echo \"$PWD\" >> $GITHUB_PATH\n\n      - name: Lint GitHub Actions workflows\n        run: actionlint -color --shellcheck=off\n"
  },
  {
    "path": ".gitignore",
    "content": "CLAUDE.md\ntarget\n.idea\n*.iml\n.vscode/\n.bloop/\n.metals/\nderby.log\nmetastore_db/\nspark-warehouse/\ndependency-reduced-pom.xml\nnative/proto/src/generated\nprebuild\n.flattened-pom.xml\nrat.txt\nfiltered_rat.txt\ndev/dist\napache-rat-*.jar\nvenv\n.venv\ndev/release/comet-rm/workdir\nspark/benchmarks\n.DS_Store\ncomet-event-trace.json\n__pycache__\noutput\n"
  },
  {
    "path": ".mvn/wrapper/maven-wrapper.properties",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\ndistributionUrl=https://repo.maven.apache.org/maven2/org/apache/maven/apache-maven/3.9.6/apache-maven-3.9.6-bin.zip\nwrapperUrl=https://repo.maven.apache.org/maven2/org/apache/maven/wrapper/maven-wrapper/3.2.0/maven-wrapper-3.2.0.jar\n"
  },
  {
    "path": ".scalafix.conf",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\nrules = [\n  ExplicitResultTypes,\n  NoAutoTupling,\n  RemoveUnused,\n\n  DisableSyntax,\n  LeakingImplicitClassVal,\n  NoValInForComprehension,\n  ProcedureSyntax,\n  RedundantSyntax\n]\n"
  },
  {
    "path": "CHANGELOG.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Apache DataFusion Comet Changelog\n\nComprehensive changelogs for each release are available [here](dev/changelog).\n"
  },
  {
    "path": "LICENSE.txt",
    "content": "\n                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. 
We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n\n--------------------------------------------------------------------------------\n\nThis project includes code from Apache Aurora.\n\n* dev/release/{release,changelog,release-candidate} are based on the scripts from\n  Apache Aurora\n\nCopyright: 2016 The Apache Software Foundation.\nHome page: https://aurora.apache.org/\nLicense: http://www.apache.org/licenses/LICENSE-2.0"
  },
  {
    "path": "Makefile",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n.PHONY: all core jvm test clean release-linux release bench\n\ndefine spark_jvm_17_extra_args\n$(shell ./mvnw help:evaluate -q -DforceStdout -Dexpression=extraJavaTestArgs)\nendef\n\n# Build optional Comet native features (like hdfs e.g)\nFEATURES_ARG := $(shell ! [ -z $(COMET_FEATURES) ] && echo '--features=$(COMET_FEATURES)')\n\nall: core jvm\n\ncore:\n\tcd native && cargo build $(FEATURES_ARG)\ntest-rust:\n\t# We need to compile CometException so that the cargo test can pass\n\t./mvnw compile -pl common -DskipTests $(PROFILES)\n\tcd native && cargo build $(FEATURES_ARG) && \\\n\tRUST_BACKTRACE=1 cargo test $(FEATURES_ARG)\njvm:\n\t./mvnw clean package -DskipTests $(PROFILES)\ntest-jvm: core\n\tSPARK_HOME=`pwd` COMET_CONF_DIR=$(shell pwd)/conf RUST_BACKTRACE=1 ./mvnw verify $(PROFILES)\ntest: test-rust test-jvm\nclean:\n\tcd native && cargo clean\n\t./mvnw clean $(PROFILES)\n\trm -rf .dist\nbench:\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=native\" cargo bench $(FEATURES_ARG) $(filter-out $@,$(MAKECMDGOALS))\nformat:\n\tcd native && cargo fmt\n\t./mvnw compile test-compile scalafix:scalafix -Psemanticdb $(PROFILES)\n\t./mvnw spotless:apply $(PROFILES)\n\n# build native libs for amd64 architecture Linux/MacOS on a Linux/amd64 machine/container\ncore-amd64-libs:\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=x86-64-v3\" cargo build -j 2 --release $(FEATURES_ARG)\nifdef HAS_OSXCROSS\n\trustup target add x86_64-apple-darwin\n\tcd native && cargo build -j 2 --target x86_64-apple-darwin --release $(FEATURES_ARG)\nendif\n\n# build native libs for arm64 architecture Linux/MacOS on a Linux/arm64 machine/container\ncore-arm64-libs:\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=neoverse-n1\" cargo build -j 2 --release $(FEATURES_ARG)\nifdef HAS_OSXCROSS\n\trustup target add aarch64-apple-darwin\n\tcd native && cargo build -j 2 --target aarch64-apple-darwin --release $(FEATURES_ARG)\nendif\n\ncore-amd64:\n\trustup target add x86_64-apple-darwin\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=skylake\" CC=o64-clang CXX=o64-clang++ cargo build --target x86_64-apple-darwin --release $(FEATURES_ARG)\n\tmkdir -p common/target/classes/org/apache/comet/darwin/x86_64\n\tcp native/target/x86_64-apple-darwin/release/libcomet.dylib common/target/classes/org/apache/comet/darwin/x86_64\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=x86-64-v3\" cargo build --release $(FEATURES_ARG)\n\tmkdir -p common/target/classes/org/apache/comet/linux/amd64\n\tcp native/target/release/libcomet.so common/target/classes/org/apache/comet/linux/amd64\n\tjar -cf common/target/comet-native-x86_64.jar \\\n\t\t-C common/target/classes/org/apache/comet darwin \\\n\t\t-C common/target/classes/org/apache/comet linux\n\t./dev/deploy-file 
common/target/comet-native-x86_64.jar comet-native-x86_64${COMET_CLASSIFIER} jar\n\ncore-arm64:\n\trustup target add aarch64-apple-darwin\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=apple-m1\" CC=arm64-apple-darwin21.4-clang CXX=arm64-apple-darwin21.4-clang++ CARGO_FEATURE_NEON=1 cargo build --target aarch64-apple-darwin --release $(FEATURES_ARG)\n\tmkdir -p common/target/classes/org/apache/comet/darwin/aarch64\n\tcp native/target/aarch64-apple-darwin/release/libcomet.dylib common/target/classes/org/apache/comet/darwin/aarch64\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=neoverse-n1\" cargo build --release $(FEATURES_ARG)\n\tmkdir -p common/target/classes/org/apache/comet/linux/aarch64\n\tcp native/target/release/libcomet.so common/target/classes/org/apache/comet/linux/aarch64\n\tjar -cf common/target/comet-native-aarch64.jar \\\n\t\t-C common/target/classes/org/apache/comet darwin \\\n\t\t-C common/target/classes/org/apache/comet linux\n\t./dev/deploy-file common/target/comet-native-aarch64.jar comet-native-aarch64${COMET_CLASSIFIER} jar\n\nrelease-linux: clean\n\trustup target add aarch64-apple-darwin x86_64-apple-darwin\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=apple-m1\" CC=arm64-apple-darwin21.4-clang CXX=arm64-apple-darwin21.4-clang++ CARGO_FEATURE_NEON=1 cargo build --target aarch64-apple-darwin --release $(FEATURES_ARG)\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=skylake\" CC=o64-clang CXX=o64-clang++ cargo build --target x86_64-apple-darwin --release $(FEATURES_ARG)\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=native\" cargo build --release $(FEATURES_ARG)\n\t./mvnw install -Prelease -DskipTests $(PROFILES)\nrelease:\n\tcd native && RUSTFLAGS=\"$(RUSTFLAGS) -Ctarget-cpu=native\" cargo build --release $(FEATURES_ARG)\n\t./mvnw install -Prelease -DskipTests $(PROFILES)\nrelease-nogit:\n\tcd native && RUSTFLAGS=\"-Ctarget-cpu=native\" cargo build --release\n\t./mvnw install -Prelease -DskipTests $(PROFILES) -Dmaven.gitcommitid.skip=true\nbenchmark-%: release\n\tcd spark && COMET_CONF_DIR=$(shell pwd)/conf MAVEN_OPTS='-Xmx20g ${call spark_jvm_17_extra_args}' ../mvnw exec:java -Dexec.mainClass=\"$*\" -Dexec.classpathScope=\"test\" -Dexec.cleanupDaemonThreads=\"false\" -Dexec.args=\"$(filter-out $@,$(MAKECMDGOALS))\" $(PROFILES)\n.DEFAULT:\n\t@: # ignore arguments provided to benchmarks e.g. \"make benchmark-foo -- --bar\", we do not want to treat \"--bar\" as target\n"
  },
  {
    "path": "NOTICE.txt",
    "content": "Apache DataFusion Comet\nCopyright 2024 The Apache Software Foundation\n\nThis product includes software developed at\nThe Apache Software Foundation (http://www.apache.org/).\n\nThis product includes software developed at\nApache Gluten (https://github.com/apache/incubator-gluten/)\nSpecifically:\n- Optimizer rule to replace SortMergeJoin with ShuffleHashJoin\n\nThis product includes software developed at\nDataFusion HDFS ObjectStore Contrib Package(https://github.com/datafusion-contrib/datafusion-objectstore-hdfs)\n\nThis product includes software developed at\nDataFusion fs-hdfs3 Contrib Package(https://github.com/datafusion-contrib/fs-hdfs)\n"
  },
  {
    "path": "README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Apache DataFusion Comet\n\n[![Apache licensed][license-badge]][license-url]\n[![Discord chat][discord-badge]][discord-url]\n[![Pending PRs][pending-pr-badge]][pending-pr-url]\n[![Maven Central][maven-badge]][maven-url]\n\n[license-badge]: https://img.shields.io/badge/license-Apache%20v2-blue.svg\n[license-url]: https://github.com/apache/datafusion-comet/blob/main/LICENSE.txt\n[discord-badge]: https://img.shields.io/discord/885562378132000778.svg?logo=discord&style=flat-square\n[discord-url]: https://discord.gg/3EAr4ZX6JK\n[pending-pr-badge]: https://img.shields.io/github/issues-search/apache/datafusion-comet?query=is%3Apr+is%3Aopen+draft%3Afalse+review%3Arequired+status%3Asuccess&label=Pending%20PRs&logo=github\n[pending-pr-url]: https://github.com/apache/datafusion-comet/pulls?q=is%3Apr+is%3Aopen+draft%3Afalse+review%3Arequired+status%3Asuccess+sort%3Aupdated-desc\n[maven-badge]: https://img.shields.io/maven-central/v/org.apache.datafusion/comet-spark-spark4.0_2.13\n[maven-url]: https://search.maven.org/search?q=g:org.apache.datafusion%20AND%20comet-spark\n\n<img src=\"docs/source/_static/images/DataFusionComet-Logo-Light.png\" width=\"512\" alt=\"logo\"/>\n\nApache DataFusion Comet is a high-performance accelerator for Apache Spark, built on top of the powerful\n[Apache DataFusion] query engine. Comet is designed to significantly enhance the\nperformance of Apache Spark workloads while leveraging commodity hardware and seamlessly integrating with the\nSpark ecosystem without requiring any code changes.\n\nComet also accelerates Apache Iceberg, when performing Parquet scans from Spark.\n\n[Apache DataFusion]: https://datafusion.apache.org\n\n# Benefits of Using Comet\n\n## Run Spark Queries at DataFusion Speeds\n\nComet delivers a performance speedup for many queries, enabling faster data processing and shorter time-to-insights.\n\nThe following chart shows the time it takes to run the 22 TPC-H queries against 100 GB of data in Parquet format\nusing a single executor with 8 cores. 
See the [Comet Benchmarking Guide](https://datafusion.apache.org/comet/contributor-guide/benchmarking.html)\nfor details of the environment used for these benchmarks.\n\nWhen using Comet, the overall run time is reduced from 687 seconds to 302 seconds, a 2.2x speedup.\n\n![](docs/source/_static/images/benchmark-results/0.11.0/tpch_allqueries.png)\n\nHere is a breakdown showing relative performance of Spark and Comet for each TPC-H query.\n\n![](docs/source/_static/images/benchmark-results/0.11.0/tpch_queries_compare.png)\n\nThe following charts shows how much Comet currently accelerates each query from the benchmark.\n\n### Relative speedup\n\n![](docs/source/_static/images/benchmark-results/0.11.0/tpch_queries_speedup_rel.png)\n\n### Absolute speedup\n\n![](docs/source/_static/images/benchmark-results/0.11.0/tpch_queries_speedup_abs.png)\n\nThese benchmarks can be reproduced in any environment using the documentation in the\n[Comet Benchmarking Guide](https://datafusion.apache.org/comet/contributor-guide/benchmarking.html). We encourage\nyou to run your own benchmarks.\n\nResults for our benchmark derived from TPC-DS are available in the [benchmarking guide](https://datafusion.apache.org/comet/contributor-guide/benchmark-results/tpc-ds.html).\n\n## Use Commodity Hardware\n\nComet leverages commodity hardware, eliminating the need for costly hardware upgrades or\nspecialized hardware accelerators, such as GPUs or FPGA. By maximizing the utilization of commodity hardware, Comet\nensures cost-effectiveness and scalability for your Spark deployments.\n\n## Spark Compatibility\n\nComet aims for 100% compatibility with all supported versions of Apache Spark, allowing you to integrate Comet into\nyour existing Spark deployments and workflows seamlessly. With no code changes required, you can immediately harness\nthe benefits of Comet's acceleration capabilities without disrupting your Spark applications.\n\n## Tight Integration with Apache DataFusion\n\nComet tightly integrates with the core Apache DataFusion project, leveraging its powerful execution engine. With\nseamless interoperability between Comet and DataFusion, you can achieve optimal performance and efficiency in your\nSpark workloads.\n\n## Active Community\n\nComet boasts a vibrant and active community of developers, contributors, and users dedicated to advancing the\ncapabilities of Apache DataFusion and accelerating the performance of Apache Spark.\n\n## Getting Started\n\nTo get started with Apache DataFusion Comet, follow the\n[installation instructions](https://datafusion.apache.org/comet/user-guide/installation.html). Join the\n[DataFusion Slack and Discord channels](https://datafusion.apache.org/contributor-guide/communication.html) to connect\nwith other users, ask questions, and share your experiences with Comet.\n\nFollow [Apache DataFusion Comet Overview](https://datafusion.apache.org/comet/about/index.html#comet-overview) to get more detailed information\n\n## Contributing\n\nWe welcome contributions from the community to help improve and enhance Apache DataFusion Comet. Whether it's fixing\nbugs, adding new features, writing documentation, or optimizing performance, your contributions are invaluable in\nshaping the future of Comet. Check out our\n[contributor guide](https://datafusion.apache.org/comet/contributor-guide/contributing.html) to get started.\n\n## License\n\nApache DataFusion Comet is licensed under the Apache License 2.0. 
See the [LICENSE.txt](LICENSE.txt) file for details.\n\n## Acknowledgments\n\nWe would like to express our gratitude to the Apache DataFusion community for their support and contributions to\nComet. Together, we're building a faster, more efficient future for big data processing with Apache Spark.\n"
  },
  {
    "path": "benchmarks/Dockerfile",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nFROM apache/datafusion-comet:0.7.0-spark3.5.5-scala2.12-java11\n\nRUN apt update \\\n    && apt install -y git python3 python3-pip \\\n    && apt clean\n\nRUN cd /opt \\\n    && git clone https://github.com/apache/datafusion-benchmarks.git\n"
  },
  {
    "path": "benchmarks/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Running Comet Benchmarks in Microk8s\n\nThis guide explains how to run benchmarks derived from TPC-H and TPC-DS in Apache DataFusion Comet deployed in a\nlocal Microk8s cluster.\n\n## Use Microk8s locally\n\nInstall Micro8s following the instructions at https://microk8s.io/docs/getting-started and then perform these\nadditional steps, ensuring that any existing kube config is backed up first.\n\n```shell\nmkdir -p ~/.kube\nmicrok8s config > ~/.kube/config\n\nmicrok8s enable dns\nmicrok8s enable registry\n\nmicrok8s kubectl create serviceaccount spark\n```\n\n## Build Comet Docker Image\n\nRun the following command from the root of this repository to build the Comet Docker image, or use a published\nDocker image from https://github.com/orgs/apache/packages?repo_name=datafusion-comet\n\n```shell\ndocker build -t apache/datafusion-comet -f kube/Dockerfile .\n```\n\n## Build Comet Benchmark Docker Image\n\nBuild the benchmark Docker image and push to the Microk8s Docker registry.\n\n```shell\ndocker build -t apache/datafusion-comet-tpcbench  .\ndocker tag apache/datafusion-comet-tpcbench localhost:32000/apache/datafusion-comet-tpcbench:latest\ndocker push localhost:32000/apache/datafusion-comet-tpcbench:latest\n```\n\n## Run benchmarks\n\n```shell\nexport SPARK_MASTER=k8s://https://127.0.0.1:16443\nexport COMET_DOCKER_IMAGE=localhost:32000/apache/datafusion-comet-tpcbench:latest\n# Location of Comet JAR within the Docker image\nexport COMET_JAR=/opt/spark/jars/comet-spark-spark3.4_2.12-0.5.0-SNAPSHOT.jar\n\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --deploy-mode cluster  \\\n    --name comet-tpcbench \\\n    --driver-memory 8G \\\n    --conf spark.driver.memory=8G \\\n    --conf spark.executor.instances=1 \\\n    --conf spark.executor.memory=32G \\\n    --conf spark.executor.cores=8 \\\n    --conf spark.cores.max=8 \\\n    --conf spark.task.cpus=1 \\\n    --conf spark.executor.memoryOverhead=3G \\\n    --jars local://$COMET_JAR \\\n    --conf spark.executor.extraClassPath=$COMET_JAR \\\n    --conf spark.driver.extraClassPath=$COMET_JAR \\\n    --conf spark.plugins=org.apache.spark.CometPlugin \\\n    --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \\\n    --conf spark.comet.enabled=true \\\n    --conf spark.comet.exec.enabled=true \\\n    --conf spark.comet.exec.all.enabled=true \\\n    --conf spark.comet.cast.allowIncompatible=true \\\n    --conf spark.comet.exec.shuffle.enabled=true \\\n    --conf spark.comet.exec.shuffle.mode=auto \\\n    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n    --conf spark.kubernetes.namespace=default \\\n    --conf spark.kubernetes.driver.pod.name=tpcbench  \\\n    
--conf spark.kubernetes.container.image=$COMET_DOCKER_IMAGE \\\n    --conf spark.kubernetes.driver.volumes.hostPath.tpcdata.mount.path=/mnt/bigdata/tpcds/sf100/ \\\n    --conf spark.kubernetes.driver.volumes.hostPath.tpcdata.options.path=/mnt/bigdata/tpcds/sf100/ \\\n    --conf spark.kubernetes.executor.volumes.hostPath.tpcdata.mount.path=/mnt/bigdata/tpcds/sf100/ \\\n    --conf spark.kubernetes.executor.volumes.hostPath.tpcdata.options.path=/mnt/bigdata/tpcds/sf100/ \\\n    --conf spark.kubernetes.authenticate.caCertFile=/var/snap/microk8s/current/certs/ca.crt \\\n    local:///opt/datafusion-benchmarks/runners/datafusion-comet/tpcbench.py \\\n    --benchmark tpcds \\\n    --data /mnt/bigdata/tpcds/sf100/ \\\n    --queries /opt/datafusion-benchmarks/tpcds/queries-spark \\\n    --iterations 1\n```\n"
  },
  {
    "path": "benchmarks/pyspark/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# PySpark Benchmarks\n\nA suite of PySpark benchmarks for comparing performance between Spark, Comet JVM, and Comet Native implementations.\n\n## Available Benchmarks\n\nRun `python run_benchmark.py --list-benchmarks` to see all available benchmarks:\n\n- **shuffle-hash** - Shuffle all columns using hash partitioning on group_key\n- **shuffle-roundrobin** - Shuffle all columns using round-robin partitioning\n\n## Prerequisites\n\n- Apache Spark cluster (standalone, YARN, or Kubernetes)\n- PySpark installed\n- Comet JAR built\n\n## Build Comet JAR\n\n```bash\ncd /path/to/datafusion-comet\nmake release\n```\n\n## Step 1: Generate Test Data\n\nGenerate test data with realistic 50-column schema (nested structs, arrays, maps):\n\n```bash\nspark-submit \\\n  --master spark://master:7077 \\\n  --executor-memory 16g \\\n  generate_data.py \\\n  --output /tmp/shuffle-benchmark-data \\\n  --rows 10000000 \\\n  --partitions 200\n```\n\n### Data Generation Options\n\n| Option               | Default    | Description                  |\n| -------------------- | ---------- | ---------------------------- |\n| `--output`, `-o`     | (required) | Output path for parquet data |\n| `--rows`, `-r`       | 10000000   | Number of rows               |\n| `--partitions`, `-p` | 200        | Number of output partitions  |\n\n## Step 2: Run Benchmarks\n\n### List Available Benchmarks\n\n```bash\npython run_benchmark.py --list-benchmarks\n```\n\n### Run Individual Benchmarks\n\nYou can run specific benchmarks by name:\n\n```bash\n# Hash partitioning shuffle - Spark baseline\nspark-submit --master spark://master:7077 \\\n  run_benchmark.py --data /tmp/shuffle-benchmark-data --mode spark --benchmark shuffle-hash\n\n# Round-robin shuffle - Spark baseline\nspark-submit --master spark://master:7077 \\\n  run_benchmark.py --data /tmp/shuffle-benchmark-data --mode spark --benchmark shuffle-roundrobin\n\n# Hash partitioning - Comet JVM shuffle\nspark-submit --master spark://master:7077 \\\n  --jars /path/to/comet.jar \\\n  --conf spark.comet.enabled=true \\\n  --conf spark.comet.exec.shuffle.enabled=true \\\n  --conf spark.comet.shuffle.mode=jvm \\\n  --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n  run_benchmark.py --data /tmp/shuffle-benchmark-data --mode jvm --benchmark shuffle-hash\n\n# Round-robin - Comet Native shuffle\nspark-submit --master spark://master:7077 \\\n  --jars /path/to/comet.jar \\\n  --conf spark.comet.enabled=true \\\n  --conf spark.comet.exec.shuffle.enabled=true \\\n  --conf spark.comet.exec.shuffle.mode=native \\\n  --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n  run_benchmark.py --data 
/tmp/shuffle-benchmark-data --mode native --benchmark shuffle-roundrobin\n```\n\n### Run All Benchmarks\n\nUse the provided script to run all benchmarks across all modes:\n\n```bash\nSPARK_MASTER=spark://master:7077 \\\nEXECUTOR_MEMORY=16g \\\n./run_all_benchmarks.sh /tmp/shuffle-benchmark-data\n```\n\n## Checking Results\n\nOpen the Spark UI (default: http://localhost:4040) during each benchmark run to compare shuffle write sizes in the Stages tab.\n\n## Adding New Benchmarks\n\nThe benchmark framework makes it easy to add new benchmarks:\n\n1. **Create a benchmark class** in `benchmarks/` directory (or add to existing file):\n\n```python\nfrom benchmarks.base import Benchmark\n\nclass MyBenchmark(Benchmark):\n    @classmethod\n    def name(cls) -> str:\n        return \"my-benchmark\"\n\n    @classmethod\n    def description(cls) -> str:\n        return \"Description of what this benchmark does\"\n\n    def run(self) -> Dict[str, Any]:\n        # Read data\n        df = self.spark.read.parquet(self.data_path)\n\n        # Run your benchmark operation\n        def benchmark_operation():\n            result = df.filter(...).groupBy(...).agg(...)\n            result.write.mode(\"overwrite\").parquet(\"/tmp/output\")\n\n        # Time it\n        duration_ms = self._time_operation(benchmark_operation)\n\n        return {\n            'duration_ms': duration_ms,\n            # Add any other metrics you want to track\n        }\n```\n\n2. **Register the benchmark** in `benchmarks/__init__.py`:\n\n```python\nfrom .my_module import MyBenchmark\n\n_BENCHMARK_REGISTRY = {\n    # ... existing benchmarks\n    MyBenchmark.name(): MyBenchmark,\n}\n```\n\n3. **Run your new benchmark**:\n\n```bash\npython run_benchmark.py --data /path/to/data --mode spark --benchmark my-benchmark\n```\n\nThe base `Benchmark` class provides:\n\n- Automatic timing via `_time_operation()`\n- Standard output formatting via `execute_timed()`\n- Access to SparkSession, data path, and mode\n- Spark configuration printing\n"
  },
  {
    "path": "benchmarks/pyspark/benchmarks/__init__.py",
    "content": "#!/usr/bin/env python3\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"\nBenchmark registry for PySpark benchmarks.\n\nThis module provides a central registry for discovering and running benchmarks.\n\"\"\"\n\nfrom typing import Dict, Type, List\n\nfrom .base import Benchmark\nfrom .shuffle import ShuffleHashBenchmark, ShuffleRoundRobinBenchmark\n\n\n# Registry of all available benchmarks\n_BENCHMARK_REGISTRY: Dict[str, Type[Benchmark]] = {\n    ShuffleHashBenchmark.name(): ShuffleHashBenchmark,\n    ShuffleRoundRobinBenchmark.name(): ShuffleRoundRobinBenchmark,\n}\n\n\ndef get_benchmark(name: str) -> Type[Benchmark]:\n    \"\"\"\n    Get a benchmark class by name.\n\n    Args:\n        name: Benchmark name\n\n    Returns:\n        Benchmark class\n\n    Raises:\n        KeyError: If benchmark name is not found\n    \"\"\"\n    if name not in _BENCHMARK_REGISTRY:\n        available = \", \".join(sorted(_BENCHMARK_REGISTRY.keys()))\n        raise KeyError(\n            f\"Unknown benchmark: {name}. Available benchmarks: {available}\"\n        )\n    return _BENCHMARK_REGISTRY[name]\n\n\ndef list_benchmarks() -> List[tuple[str, str]]:\n    \"\"\"\n    List all available benchmarks.\n\n    Returns:\n        List of (name, description) tuples\n    \"\"\"\n    benchmarks = []\n    for name in sorted(_BENCHMARK_REGISTRY.keys()):\n        benchmark_cls = _BENCHMARK_REGISTRY[name]\n        benchmarks.append((name, benchmark_cls.description()))\n    return benchmarks\n\n\n__all__ = [\n    'Benchmark',\n    'get_benchmark',\n    'list_benchmarks',\n    'ShuffleHashBenchmark',\n    'ShuffleRoundRobinBenchmark',\n]\n"
  },
  {
    "path": "benchmarks/pyspark/benchmarks/base.py",
    "content": "#!/usr/bin/env python3\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"\nBase benchmark class providing common functionality for all benchmarks.\n\"\"\"\n\nimport time\nfrom abc import ABC, abstractmethod\nfrom typing import Dict, Any\n\nfrom pyspark.sql import SparkSession\n\n\nclass Benchmark(ABC):\n    \"\"\"Base class for all PySpark benchmarks.\"\"\"\n\n    def __init__(self, spark: SparkSession, data_path: str, mode: str):\n        \"\"\"\n        Initialize benchmark.\n\n        Args:\n            spark: SparkSession instance\n            data_path: Path to input data\n            mode: Execution mode (spark, jvm, native)\n        \"\"\"\n        self.spark = spark\n        self.data_path = data_path\n        self.mode = mode\n\n    @classmethod\n    @abstractmethod\n    def name(cls) -> str:\n        \"\"\"Return the benchmark name (used for CLI).\"\"\"\n        pass\n\n    @classmethod\n    @abstractmethod\n    def description(cls) -> str:\n        \"\"\"Return a short description of the benchmark.\"\"\"\n        pass\n\n    @abstractmethod\n    def run(self) -> Dict[str, Any]:\n        \"\"\"\n        Run the benchmark and return results.\n\n        Returns:\n            Dictionary containing benchmark results (must include 'duration_ms')\n        \"\"\"\n        pass\n\n    def execute_timed(self) -> Dict[str, Any]:\n        \"\"\"\n        Execute the benchmark with timing and standard output.\n\n        Returns:\n            Dictionary containing benchmark results\n        \"\"\"\n        print(f\"\\n{'=' * 80}\")\n        print(f\"Benchmark: {self.name()}\")\n        print(f\"Mode: {self.mode.upper()}\")\n        print(f\"{'=' * 80}\")\n        print(f\"Data path: {self.data_path}\")\n\n        # Print relevant Spark configuration\n        self._print_spark_config()\n\n        # Clear cache before running\n        self.spark.catalog.clearCache()\n\n        # Run the benchmark\n        print(f\"\\nRunning benchmark...\")\n        results = self.run()\n\n        # Print results\n        print(f\"\\nDuration: {results['duration_ms']:,} ms\")\n        if 'row_count' in results:\n            print(f\"Rows processed: {results['row_count']:,}\")\n\n        # Print any additional metrics\n        for key, value in results.items():\n            if key not in ['duration_ms', 'row_count']:\n                print(f\"{key}: {value}\")\n\n        print(f\"{'=' * 80}\\n\")\n\n        return results\n\n    def _print_spark_config(self):\n        \"\"\"Print relevant Spark configuration.\"\"\"\n        conf = self.spark.sparkContext.getConf()\n        print(f\"Shuffle manager: {conf.get('spark.shuffle.manager', 'default')}\")\n        print(f\"Comet enabled: {conf.get('spark.comet.enabled', 'false')}\")\n        print(f\"Comet 
shuffle enabled: {conf.get('spark.comet.exec.shuffle.enabled', 'false')}\")\n        print(f\"Comet shuffle mode: {conf.get('spark.comet.exec.shuffle.mode', 'not set')}\")\n        print(f\"Spark UI: {self.spark.sparkContext.uiWebUrl}\")\n\n    def _time_operation(self, operation_fn):\n        \"\"\"\n        Time an operation and return duration in milliseconds.\n\n        Args:\n            operation_fn: Function to time (takes no arguments)\n\n        Returns:\n            Duration in milliseconds\n        \"\"\"\n        # Use a monotonic clock so elapsed time is unaffected by wall-clock adjustments\n        start_time = time.monotonic()\n        operation_fn()\n        duration_ms = int((time.monotonic() - start_time) * 1000)\n        return duration_ms\n"
  },
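The `Benchmark` ABC above fixes the contract for every benchmark: two classmethod identifiers for the CLI plus a `run()` that returns a dict containing `duration_ms`. A minimal sketch of a conforming subclass follows; `RowCountBenchmark` is illustrative only and not part of the suite, though it assumes the same `benchmarks` package layout that `run_benchmark.py` uses.

```python
# Illustrative only: a minimal Benchmark subclass, not part of the suite.
from typing import Any, Dict

from benchmarks.base import Benchmark


class RowCountBenchmark(Benchmark):
    """Count rows in the input data -- the simplest conforming benchmark."""

    @classmethod
    def name(cls) -> str:
        return "row-count"

    @classmethod
    def description(cls) -> str:
        return "Count rows in the input parquet data"

    def run(self) -> Dict[str, Any]:
        df = self.spark.read.parquet(self.data_path)
        row_count = 0

        def operation():
            nonlocal row_count
            row_count = df.count()

        # _time_operation returns milliseconds; execute_timed() requires
        # the result dict to include 'duration_ms'
        duration_ms = self._time_operation(operation)
        return {"duration_ms": duration_ms, "row_count": row_count}
```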
  {
    "path": "benchmarks/pyspark/benchmarks/shuffle.py",
    "content": "#!/usr/bin/env python3\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"\nShuffle benchmarks for comparing shuffle file sizes and performance.\n\nThese benchmarks test different partitioning strategies (hash, round-robin)\nacross Spark, Comet JVM, and Comet Native shuffle implementations.\n\"\"\"\n\nfrom typing import Dict, Any\nfrom pyspark.sql import DataFrame\n\nfrom .base import Benchmark\n\n\nclass ShuffleBenchmark(Benchmark):\n    \"\"\"Base class for shuffle benchmarks with common repartitioning logic.\"\"\"\n\n    def __init__(self, spark, data_path: str, mode: str, num_partitions: int = 200):\n        \"\"\"\n        Initialize shuffle benchmark.\n\n        Args:\n            spark: SparkSession instance\n            data_path: Path to input parquet data\n            mode: Execution mode (spark, jvm, native)\n            num_partitions: Number of partitions to shuffle to\n        \"\"\"\n        super().__init__(spark, data_path, mode)\n        self.num_partitions = num_partitions\n\n    def _read_and_count(self) -> tuple[DataFrame, int]:\n        \"\"\"Read input data and count rows.\"\"\"\n        df = self.spark.read.parquet(self.data_path)\n        row_count = df.count()\n        return df, row_count\n\n    def _repartition(self, df: DataFrame) -> DataFrame:\n        \"\"\"\n        Repartition dataframe using the strategy defined by subclass.\n\n        Args:\n            df: Input dataframe\n\n        Returns:\n            Repartitioned dataframe\n        \"\"\"\n        raise NotImplementedError(\"Subclasses must implement _repartition\")\n\n    def _write_output(self, df: DataFrame, output_path: str):\n        \"\"\"Write repartitioned data to parquet.\"\"\"\n        df.write.mode(\"overwrite\").parquet(output_path)\n\n    def run(self) -> Dict[str, Any]:\n        \"\"\"\n        Run the shuffle benchmark.\n\n        Returns:\n            Dictionary with duration_ms and row_count\n        \"\"\"\n        # Read input data\n        df, row_count = self._read_and_count()\n        print(f\"Number of rows: {row_count:,}\")\n\n        # Define the benchmark operation\n        def benchmark_operation():\n            # Repartition using the specific strategy\n            repartitioned = self._repartition(df)\n\n            # Write to parquet to force materialization\n            output_path = f\"/tmp/shuffle-benchmark-output-{self.mode}-{self.name()}\"\n            self._write_output(repartitioned, output_path)\n            print(f\"Wrote repartitioned data to: {output_path}\")\n\n        # Time the operation\n        duration_ms = self._time_operation(benchmark_operation)\n\n        return {\n            'duration_ms': duration_ms,\n            'row_count': row_count,\n            'num_partitions': 
self.num_partitions,\n        }\n\n\nclass ShuffleHashBenchmark(ShuffleBenchmark):\n    \"\"\"Shuffle benchmark using hash partitioning on a key column.\"\"\"\n\n    @classmethod\n    def name(cls) -> str:\n        return \"shuffle-hash\"\n\n    @classmethod\n    def description(cls) -> str:\n        return \"Shuffle all columns using hash partitioning on group_key\"\n\n    def _repartition(self, df: DataFrame) -> DataFrame:\n        \"\"\"Repartition using hash partitioning on group_key.\"\"\"\n        return df.repartition(self.num_partitions, \"group_key\")\n\n\nclass ShuffleRoundRobinBenchmark(ShuffleBenchmark):\n    \"\"\"Shuffle benchmark using round-robin partitioning.\"\"\"\n\n    @classmethod\n    def name(cls) -> str:\n        return \"shuffle-roundrobin\"\n\n    @classmethod\n    def description(cls) -> str:\n        return \"Shuffle all columns using round-robin partitioning\"\n\n    def _repartition(self, df: DataFrame) -> DataFrame:\n        \"\"\"Repartition using round-robin (no partition columns specified).\"\"\"\n        return df.repartition(self.num_partitions)\n"
  },
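The two concrete classes differ only in `_repartition`, so a new partitioning strategy is one small subclass. A sketch of a hypothetical range-partitioning variant (`ShuffleRangeBenchmark` is not in the suite, and it would also need registering wherever the existing benchmarks are registered):

```python
# Hypothetical third strategy following the same subclass pattern.
from pyspark.sql import DataFrame

from benchmarks.shuffle import ShuffleBenchmark


class ShuffleRangeBenchmark(ShuffleBenchmark):
    """Shuffle benchmark using range partitioning on a key column."""

    @classmethod
    def name(cls) -> str:
        return "shuffle-range"

    @classmethod
    def description(cls) -> str:
        return "Shuffle all columns using range partitioning on group_key"

    def _repartition(self, df: DataFrame) -> DataFrame:
        # repartitionByRange samples group_key to compute partition bounds,
        # so it runs an extra sampling job before the shuffle itself
        return df.repartitionByRange(self.num_partitions, "group_key")
```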
  {
    "path": "benchmarks/pyspark/generate_data.py",
    "content": "#!/usr/bin/env python3\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"\nGenerate test data for shuffle size comparison benchmark.\n\nThis script generates a parquet dataset with a realistic schema (100 columns\nincluding deeply nested structs, arrays, and maps) for benchmarking shuffle\noperations across Spark, Comet JVM, and Comet Native shuffle modes.\n\"\"\"\n\nimport argparse\nfrom pyspark.sql import SparkSession\nfrom pyspark.sql import functions as F\nfrom pyspark.sql.types import (\n    StructType, StructField, IntegerType, LongType, DoubleType,\n    StringType, BooleanType, DateType, TimestampType, ArrayType,\n    MapType, DecimalType\n)\n\n\ndef generate_data(output_path: str, num_rows: int, num_partitions: int):\n    \"\"\"Generate test data with realistic schema and write to parquet.\"\"\"\n\n    spark = SparkSession.builder \\\n        .appName(\"ShuffleBenchmark-DataGen\") \\\n        .getOrCreate()\n\n    print(f\"Generating {num_rows:,} rows with {num_partitions} partitions\")\n    print(f\"Output path: {output_path}\")\n    print(\"Schema: 100 columns including deeply nested structs, arrays, and maps\")\n\n    # Start with a range and build up the columns\n    df = spark.range(0, num_rows, numPartitions=num_partitions)\n\n    # Add columns using selectExpr for better performance\n    df = df.selectExpr(\n        # Key columns for grouping/partitioning (1-3)\n        \"cast(id % 1000 as int) as partition_key\",\n        \"cast(id % 100 as int) as group_key\",\n        \"id as row_id\",\n\n        # Integer columns (4-15)\n        \"cast(id % 10000 as int) as category_id\",\n        \"cast(id % 500 as int) as region_id\",\n        \"cast(id % 50 as int) as department_id\",\n        \"cast((id * 7) % 1000000 as int) as customer_id\",\n        \"cast((id * 13) % 100000 as int) as product_id\",\n        \"cast(id % 12 + 1 as int) as month\",\n        \"cast(id % 28 + 1 as int) as day\",\n        \"cast(2020 + (id % 5) as int) as year\",\n        \"cast((id * 17) % 256 as int) as priority\",\n        \"cast((id * 19) % 1000 as int) as rank\",\n        \"cast((id * 23) % 10000 as int) as score_int\",\n        \"cast((id * 29) % 500 as int) as level\",\n\n        # Long columns (16-22)\n        \"id * 1000 as transaction_id\",\n        \"(id * 17) % 10000000000 as account_number\",\n        \"(id * 31) % 1000000000 as reference_id\",\n        \"(id * 37) % 10000000000 as external_id\",\n        \"(id * 41) % 1000000000 as correlation_id\",\n        \"(id * 43) % 10000000000 as trace_id\",\n        \"(id * 47) % 1000000000 as span_id\",\n\n        # Double columns (23-35)\n        \"cast(id % 10000 as double) / 100.0 as amount\",\n        \"cast((id * 3) % 10000 as double) / 100.0 as price\",\n        \"cast(id % 100 
as double) / 100.0 as discount\",\n        \"cast((id * 7) % 500 as double) / 10.0 as weight\",\n        \"cast((id * 11) % 1000 as double) / 10.0 as height\",\n        \"cast(id % 360 as double) as latitude\",\n        \"cast((id * 2) % 360 as double) as longitude\",\n        \"cast((id * 13) % 10000 as double) / 1000.0 as rate\",\n        \"cast((id * 17) % 100 as double) / 100.0 as percentage\",\n        \"cast((id * 19) % 1000 as double) as velocity\",\n        \"cast((id * 23) % 500 as double) / 10.0 as acceleration\",\n        \"cast((id * 29) % 10000 as double) / 100.0 as temperature\",\n        \"cast((id * 31) % 1000 as double) / 10.0 as pressure\",\n\n        # String columns (36-50)\n        \"concat('user_', cast(id % 100000 as string)) as user_name\",\n        \"concat('email_', cast(id % 50000 as string), '@example.com') as email\",\n        \"concat('SKU-', lpad(cast(id % 10000 as string), 6, '0')) as sku\",\n        \"concat('ORD-', cast(id as string)) as order_id\",\n        \"array('pending', 'processing', 'shipped', 'delivered', 'cancelled')[cast(id % 5 as int)] as status\",\n        \"array('USD', 'EUR', 'GBP', 'JPY', 'CAD')[cast(id % 5 as int)] as currency\",\n        \"concat('Description for item ', cast(id % 1000 as string), ' with additional details') as description\",\n        \"concat('REF-', lpad(cast(id % 100000 as string), 8, '0')) as reference_code\",\n        \"concat('TXN-', cast(id as string), '-', cast(id % 1000 as string)) as transaction_code\",\n        \"array('A', 'B', 'C', 'D', 'E')[cast(id % 5 as int)] as grade\",\n        \"concat('Note: Record ', cast(id as string), ' processed successfully') as notes\",\n        \"concat('Session-', lpad(cast(id % 10000 as string), 6, '0')) as session_id\",\n        \"concat('Device-', cast(id % 1000 as string)) as device_id\",\n        \"array('chrome', 'firefox', 'safari', 'edge')[cast(id % 4 as int)] as browser\",\n        \"array('windows', 'macos', 'linux', 'ios', 'android')[cast(id % 5 as int)] as os\",\n\n        # Boolean columns (51-56)\n        \"id % 2 = 0 as is_active\",\n        \"id % 3 = 0 as is_verified\",\n        \"id % 7 = 0 as is_premium\",\n        \"id % 5 = 0 as is_deleted\",\n        \"id % 11 = 0 as is_featured\",\n        \"id % 13 = 0 as is_archived\",\n\n        # Date and timestamp columns (57-60)\n        \"date_add(to_date('2020-01-01'), cast(id % 1500 as int)) as created_date\",\n        \"date_add(to_date('2020-01-01'), cast((id + 30) % 1500 as int)) as updated_date\",\n        \"date_add(to_date('2020-01-01'), cast((id + 60) % 1500 as int)) as expires_date\",\n        \"to_timestamp(concat('2020-01-01 ', lpad(cast(id % 24 as string), 2, '0'), ':00:00')) as created_at\",\n\n        # Simple arrays (61-65)\n        \"array(cast(id % 100 as int), cast((id + 1) % 100 as int), cast((id + 2) % 100 as int), cast((id + 3) % 100 as int), cast((id + 4) % 100 as int)) as tag_ids\",\n        \"array(cast(id % 1000 as double) / 10.0, cast((id * 2) % 1000 as double) / 10.0, cast((id * 3) % 1000 as double) / 10.0) as scores\",\n        \"array(concat('tag_', cast(id % 20 as string)), concat('tag_', cast((id + 5) % 20 as string)), concat('tag_', cast((id + 10) % 20 as string))) as tags\",\n        \"array(id % 2 = 0, id % 3 = 0, id % 5 = 0, id % 7 = 0) as flag_array\",\n        \"array(id * 1000, id * 2000, id * 3000) as long_array\",\n\n        # Simple maps (66-68)\n        \"map('key1', cast(id % 100 as string), 'key2', cast((id * 2) % 100 as string), 'key3', cast((id * 3) % 100 as string)) 
as str_attributes\",\n        \"map('score1', cast(id % 100 as double), 'score2', cast((id * 2) % 100 as double)) as double_attributes\",\n        \"map(cast(id % 10 as int), concat('val_', cast(id % 100 as string)), cast((id + 1) % 10 as int), concat('val_', cast((id + 1) % 100 as string))) as int_key_map\",\n\n        # Level 2 nested struct: address with nested geo (69)\n        \"named_struct(\"\n        \"  'street', concat(cast(id % 9999 as string), ' Main St'),\"\n        \"  'city', array('New York', 'Los Angeles', 'Chicago', 'Houston', 'Phoenix')[cast(id % 5 as int)],\"\n        \"  'state', array('NY', 'CA', 'IL', 'TX', 'AZ')[cast(id % 5 as int)],\"\n        \"  'zip', lpad(cast(id % 99999 as string), 5, '0'),\"\n        \"  'country', 'USA',\"\n        \"  'geo', named_struct(\"\n        \"    'lat', cast(id % 180 as double) - 90.0,\"\n        \"    'lng', cast(id % 360 as double) - 180.0,\"\n        \"    'accuracy', cast(id % 100 as double)\"\n        \"  )\"\n        \") as address\",\n\n        # Level 3 nested struct: organization hierarchy (70)\n        \"named_struct(\"\n        \"  'company', named_struct(\"\n        \"    'name', concat('Company_', cast(id % 1000 as string)),\"\n        \"    'industry', array('tech', 'finance', 'healthcare', 'retail')[cast(id % 4 as int)],\"\n        \"    'headquarters', named_struct(\"\n        \"      'city', array('NYC', 'SF', 'LA', 'CHI')[cast(id % 4 as int)],\"\n        \"      'country', 'USA',\"\n        \"      'timezone', array('EST', 'PST', 'PST', 'CST')[cast(id % 4 as int)]\"\n        \"    )\"\n        \"  ),\"\n        \"  'department', named_struct(\"\n        \"    'name', array('Engineering', 'Sales', 'Marketing', 'HR')[cast(id % 4 as int)],\"\n        \"    'code', concat('DEPT-', cast(id % 100 as string)),\"\n        \"    'budget', cast(id % 1000000 as double)\"\n        \"  )\"\n        \") as organization\",\n\n        # Level 4 nested struct: deep config (71)\n        \"named_struct(\"\n        \"  'level1', named_struct(\"\n        \"    'level2a', named_struct(\"\n        \"      'level3a', named_struct(\"\n        \"        'value_int', cast(id % 1000 as int),\"\n        \"        'value_str', concat('deep_', cast(id % 100 as string)),\"\n        \"        'value_bool', id % 2 = 0\"\n        \"      ),\"\n        \"      'level3b', named_struct(\"\n        \"        'metric1', cast(id % 100 as double),\"\n        \"        'metric2', cast((id * 2) % 100 as double)\"\n        \"      )\"\n        \"    ),\"\n        \"    'level2b', named_struct(\"\n        \"      'setting1', concat('setting_', cast(id % 50 as string)),\"\n        \"      'setting2', id % 3 = 0,\"\n        \"      'values', array(cast(id % 10 as int), cast((id + 1) % 10 as int), cast((id + 2) % 10 as int))\"\n        \"    )\"\n        \"  ),\"\n        \"  'metadata', named_struct(\"\n        \"    'version', concat('v', cast(id % 10 as string)),\"\n        \"    'timestamp', id * 1000\"\n        \"  )\"\n        \") as deep_config\",\n\n        # Array of structs with nested structs (72)\n        \"array(\"\n        \"  named_struct(\"\n        \"    'item_id', cast(id % 1000 as int),\"\n        \"    'details', named_struct(\"\n        \"      'name', concat('Item_', cast(id % 100 as string)),\"\n        \"      'category', array('electronics', 'clothing', 'food', 'books')[cast(id % 4 as int)],\"\n        \"      'pricing', named_struct(\"\n        \"        'base', cast(id % 100 as double) + 0.99,\"\n        \"        'discount', cast(id % 
20 as double) / 100.0,\"\n        \"        'tax_rate', 0.08\"\n        \"      )\"\n        \"    ),\"\n        \"    'quantity', cast(id % 10 + 1 as int)\"\n        \"  ),\"\n        \"  named_struct(\"\n        \"    'item_id', cast((id + 100) % 1000 as int),\"\n        \"    'details', named_struct(\"\n        \"      'name', concat('Item_', cast((id + 100) % 100 as string)),\"\n        \"      'category', array('electronics', 'clothing', 'food', 'books')[cast((id + 1) % 4 as int)],\"\n        \"      'pricing', named_struct(\"\n        \"        'base', cast((id + 50) % 100 as double) + 0.99,\"\n        \"        'discount', cast((id + 5) % 20 as double) / 100.0,\"\n        \"        'tax_rate', 0.08\"\n        \"      )\"\n        \"    ),\"\n        \"    'quantity', cast((id + 1) % 10 + 1 as int)\"\n        \"  )\"\n        \") as line_items\",\n\n        # Map with struct values (73)\n        \"map(\"\n        \"  'primary', named_struct('name', concat('Primary_', cast(id % 100 as string)), 'score', cast(id % 100 as double), 'active', true),\"\n        \"  'secondary', named_struct('name', concat('Secondary_', cast(id % 100 as string)), 'score', cast((id * 2) % 100 as double), 'active', id % 2 = 0)\"\n        \") as contact_map\",\n\n        # Struct with map containing arrays (74)\n        \"named_struct(\"\n        \"  'config_name', concat('Config_', cast(id % 100 as string)),\"\n        \"  'settings', map(\"\n        \"    'integers', array(cast(id % 10 as int), cast((id + 1) % 10 as int), cast((id + 2) % 10 as int)),\"\n        \"    'strings', array(concat('s1_', cast(id % 10 as string)), concat('s2_', cast(id % 10 as string)))\"\n        \"  ),\"\n        \"  'enabled', id % 2 = 0\"\n        \") as config_with_map\",\n\n        # Array of arrays (75)\n        \"array(\"\n        \"  array(cast(id % 10 as int), cast((id + 1) % 10 as int), cast((id + 2) % 10 as int)),\"\n        \"  array(cast((id * 2) % 10 as int), cast((id * 2 + 1) % 10 as int)),\"\n        \"  array(cast((id * 3) % 10 as int), cast((id * 3 + 1) % 10 as int), cast((id * 3 + 2) % 10 as int), cast((id * 3 + 3) % 10 as int))\"\n        \") as nested_int_arrays\",\n\n        # Array of maps (76)\n        \"array(\"\n        \"  map('a', cast(id % 100 as string), 'b', cast((id + 1) % 100 as string)),\"\n        \"  map('x', cast((id * 2) % 100 as string), 'y', cast((id * 2 + 1) % 100 as string), 'z', cast((id * 2 + 2) % 100 as string))\"\n        \") as array_of_maps\",\n\n        # Map with array values (77)\n        \"map(\"\n        \"  'scores', array(cast(id % 100 as double), cast((id * 2) % 100 as double), cast((id * 3) % 100 as double)),\"\n        \"  'ranks', array(cast(id % 10 as double), cast((id + 1) % 10 as double))\"\n        \") as map_with_arrays\",\n\n        # Complex event structure (78)\n        \"named_struct(\"\n        \"  'event_id', concat('EVT-', cast(id as string)),\"\n        \"  'event_type', array('click', 'view', 'purchase', 'signup')[cast(id % 4 as int)],\"\n        \"  'timestamp', id * 1000,\"\n        \"  'properties', map(\"\n        \"    'source', array('web', 'mobile', 'api')[cast(id % 3 as int)],\"\n        \"    'campaign', concat('camp_', cast(id % 50 as string))\"\n        \"  ),\"\n        \"  'user', named_struct(\"\n        \"    'id', cast(id % 100000 as int),\"\n        \"    'segment', array('new', 'returning', 'premium')[cast(id % 3 as int)],\"\n        \"    'attributes', named_struct(\"\n        \"      'age_group', array('18-24', '25-34', '35-44', 
'45+')[cast(id % 4 as int)],\"\n        \"      'interests', array(concat('int_', cast(id % 10 as string)), concat('int_', cast((id + 1) % 10 as string)))\"\n        \"    )\"\n        \"  )\"\n        \") as event_data\",\n\n        # Financial transaction with deep nesting (79)\n        \"named_struct(\"\n        \"  'txn_id', concat('TXN-', cast(id as string)),\"\n        \"  'amount', named_struct(\"\n        \"    'value', cast(id % 10000 as double) / 100.0,\"\n        \"    'currency', array('USD', 'EUR', 'GBP')[cast(id % 3 as int)],\"\n        \"    'exchange', named_struct(\"\n        \"      'rate', 1.0 + cast(id % 100 as double) / 1000.0,\"\n        \"      'source', 'market',\"\n        \"      'timestamp', id * 1000\"\n        \"    )\"\n        \"  ),\"\n        \"  'parties', named_struct(\"\n        \"    'sender', named_struct(\"\n        \"      'account', concat('ACC-', lpad(cast(id % 100000 as string), 8, '0')),\"\n        \"      'bank', named_struct(\"\n        \"        'code', concat('BNK-', cast(id % 100 as string)),\"\n        \"        'country', array('US', 'UK', 'DE', 'JP')[cast(id % 4 as int)]\"\n        \"      )\"\n        \"    ),\"\n        \"    'receiver', named_struct(\"\n        \"      'account', concat('ACC-', lpad(cast((id + 50000) % 100000 as string), 8, '0')),\"\n        \"      'bank', named_struct(\"\n        \"        'code', concat('BNK-', cast((id + 50) % 100 as string)),\"\n        \"        'country', array('US', 'UK', 'DE', 'JP')[cast((id + 1) % 4 as int)]\"\n        \"      )\"\n        \"    )\"\n        \"  )\"\n        \") as financial_txn\",\n\n        # Product catalog entry (80)\n        \"named_struct(\"\n        \"  'product_id', concat('PROD-', lpad(cast(id % 10000 as string), 6, '0')),\"\n        \"  'variants', array(\"\n        \"    named_struct(\"\n        \"      'sku', concat('VAR-', cast(id % 1000 as string), '-A'),\"\n        \"      'attributes', map('color', 'red', 'size', 'S'),\"\n        \"      'inventory', named_struct('quantity', cast(id % 100 as int), 'warehouse', 'WH-1')\"\n        \"    ),\"\n        \"    named_struct(\"\n        \"      'sku', concat('VAR-', cast(id % 1000 as string), '-B'),\"\n        \"      'attributes', map('color', 'blue', 'size', 'M'),\"\n        \"      'inventory', named_struct('quantity', cast((id + 10) % 100 as int), 'warehouse', 'WH-2')\"\n        \"    )\"\n        \"  ),\"\n        \"  'pricing', named_struct(\"\n        \"    'list_price', cast(id % 1000 as double) + 0.99,\"\n        \"    'tiers', array(\"\n        \"      named_struct('min_qty', 1, 'price', cast(id % 1000 as double) + 0.99),\"\n        \"      named_struct('min_qty', 10, 'price', cast(id % 1000 as double) * 0.9 + 0.99),\"\n        \"      named_struct('min_qty', 100, 'price', cast(id % 1000 as double) * 0.8 + 0.99)\"\n        \"    )\"\n        \"  )\"\n        \") as product_catalog\",\n\n        # Additional scalar columns (81-90)\n        \"cast((id * 53) % 10000 as int) as metric_1\",\n        \"cast((id * 59) % 10000 as int) as metric_2\",\n        \"cast((id * 61) % 10000 as int) as metric_3\",\n        \"cast((id * 67) % 1000000 as long) as counter_1\",\n        \"cast((id * 71) % 1000000 as long) as counter_2\",\n        \"cast((id * 73) % 10000 as double) / 100.0 as measure_1\",\n        \"cast((id * 79) % 10000 as double) / 100.0 as measure_2\",\n        \"concat('label_', cast(id % 500 as string)) as label_1\",\n        \"concat('category_', cast(id % 200 as string)) as label_2\",\n        \"id % 17 
= 0 as flag_1\",\n\n        # Additional complex columns (91-95)\n        \"array(\"\n        \"  named_struct('ts', id * 1000, 'value', cast(id % 100 as double)),\"\n        \"  named_struct('ts', id * 1000 + 1000, 'value', cast((id + 1) % 100 as double)),\"\n        \"  named_struct('ts', id * 1000 + 2000, 'value', cast((id + 2) % 100 as double))\"\n        \") as time_series\",\n\n        \"map(\"\n        \"  'en', concat('English text ', cast(id % 100 as string)),\"\n        \"  'es', concat('Spanish texto ', cast(id % 100 as string)),\"\n        \"  'fr', concat('French texte ', cast(id % 100 as string))\"\n        \") as translations\",\n\n        \"named_struct(\"\n        \"  'rules', array(\"\n        \"    named_struct('id', cast(id % 100 as int), 'condition', concat('cond_', cast(id % 10 as string)), 'action', concat('act_', cast(id % 5 as string))),\"\n        \"    named_struct('id', cast((id + 1) % 100 as int), 'condition', concat('cond_', cast((id + 1) % 10 as string)), 'action', concat('act_', cast((id + 1) % 5 as string)))\"\n        \"  ),\"\n        \"  'default_action', 'none',\"\n        \"  'priority', cast(id % 10 as int)\"\n        \") as rule_engine\",\n\n        \"array(\"\n        \"  map('metric', 'cpu', 'value', cast(id % 100 as double), 'unit', 'percent'),\"\n        \"  map('metric', 'memory', 'value', cast((id * 2) % 100 as double), 'unit', 'percent'),\"\n        \"  map('metric', 'disk', 'value', cast((id * 3) % 100 as double), 'unit', 'percent')\"\n        \") as system_metrics\",\n\n        \"named_struct(\"\n        \"  'permissions', map(\"\n        \"    'read', array('user', 'admin'),\"\n        \"    'write', array('admin'),\"\n        \"    'delete', array('admin')\"\n        \"  ),\"\n        \"  'roles', array(\"\n        \"    named_struct('name', 'viewer', 'level', 1),\"\n        \"    named_struct('name', 'editor', 'level', 2),\"\n        \"    named_struct('name', 'admin', 'level', 3)\"\n        \"  )\"\n        \") as access_control\",\n\n        # Final columns (96-100)\n        \"cast((id * 83) % 1000 as int) as final_metric_1\",\n        \"cast((id * 89) % 10000 as double) / 100.0 as final_measure_1\",\n        \"concat('final_', cast(id % 1000 as string)) as final_label\",\n        \"array(cast(id % 5 as int), cast((id + 1) % 5 as int), cast((id + 2) % 5 as int), cast((id + 3) % 5 as int), cast((id + 4) % 5 as int)) as final_array\",\n        \"named_struct(\"\n        \"  'summary', concat('Summary for record ', cast(id as string)),\"\n        \"  'checksum', concat(cast(id % 256 as string), '-', cast((id * 7) % 256 as string), '-', cast((id * 13) % 256 as string)),\"\n        \"  'version', cast(id % 100 as int)\"\n        \") as final_metadata\"\n    )\n\n    print(f\"Generated schema with {len(df.columns)} columns\")\n    df.printSchema()\n\n    # Write as parquet\n    df.write.mode(\"overwrite\").parquet(output_path)\n\n    # Verify the data\n    written_df = spark.read.parquet(output_path)\n    actual_count = written_df.count()\n    print(f\"Wrote {actual_count:,} rows to {output_path}\")\n\n    spark.stop()\n\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description=\"Generate test data for shuffle benchmark\"\n    )\n    parser.add_argument(\n        \"--output\", \"-o\",\n        required=True,\n        help=\"Output path for parquet data (local path or hdfs://...)\"\n    )\n    parser.add_argument(\n        \"--rows\", \"-r\",\n        type=int,\n        default=10_000_000,\n        help=\"Number of rows to 
generate (default: 10000000 for ~1GB with wide schema)\"\n    )\n    parser.add_argument(\n        \"--partitions\", \"-p\",\n        type=int,\n        default=None,\n        help=\"Number of output partitions (default: 200)\"\n    )\n\n    args = parser.parse_args()\n\n    # Default partitions to a reasonable number if not specified\n    num_partitions = args.partitions if args.partitions else 200\n\n    generate_data(args.output, args.rows, num_partitions)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
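Before benchmarking against a generated dataset, it is worth reading it back and probing a couple of the deeply nested columns the generator defines. A quick sanity-check sketch (the path is an example):

```python
# Sketch: verify a dataset written by generate_data.py (example path).
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("inspect-shuffle-data").getOrCreate()

df = spark.read.parquet("/tmp/shuffle-benchmark-data")
print(f"Columns: {len(df.columns)}")  # expected: 100
print(f"Rows: {df.count():,}")

# Probe the level-2 and level-4 nesting defined by the generator
df.select(
    "address.geo.lat",
    "deep_config.level1.level2a.level3a.value_int",
).show(5)

spark.stop()
```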
  {
    "path": "benchmarks/pyspark/run_all_benchmarks.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\n# Run all shuffle benchmarks (Spark, Comet JVM, Comet Native)\n# Check the Spark UI during each run to compare shuffle sizes\n\nset -e\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" && pwd)\"\nDATA_PATH=\"${1:-/tmp/shuffle-benchmark-data}\"\nCOMET_JAR=\"${COMET_JAR:-$SCRIPT_DIR/../../spark/target/comet-spark-spark3.5_2.12-0.14.0-SNAPSHOT.jar}\"\nSPARK_MASTER=\"${SPARK_MASTER:-local[*]}\"\nEXECUTOR_MEMORY=\"${EXECUTOR_MEMORY:-16g}\"\nEVENT_LOG_DIR=\"${EVENT_LOG_DIR:-/tmp/spark-events}\"\n\n# Create event log directory\nmkdir -p \"$EVENT_LOG_DIR\"\n\necho \"========================================\"\necho \"Shuffle Size Comparison Benchmark\"\necho \"========================================\"\necho \"Data path:       $DATA_PATH\"\necho \"Comet JAR:       $COMET_JAR\"\necho \"Spark master:    $SPARK_MASTER\"\necho \"Executor memory: $EXECUTOR_MEMORY\"\necho \"Event log dir:   $EVENT_LOG_DIR\"\necho \"========================================\"\n\n# Run Spark baseline (no Comet)\necho \"\"\necho \">>> Running SPARK shuffle benchmark...\"\n$SPARK_HOME/bin/spark-submit \\\n  --master \"$SPARK_MASTER\" \\\n  --executor-memory \"$EXECUTOR_MEMORY\" \\\n  --conf spark.eventLog.enabled=true \\\n  --conf spark.eventLog.dir=\"$EVENT_LOG_DIR\" \\\n  --conf spark.comet.enabled=false \\\n  --conf spark.comet.exec.shuffle.enabled=false \\\n  \"$SCRIPT_DIR/run_benchmark.py\" \\\n  --data \"$DATA_PATH\" \\\n  --mode spark\n\n# Run Comet JVM shuffle\necho \"\"\necho \">>> Running COMET JVM shuffle benchmark...\"\n$SPARK_HOME/bin/spark-submit \\\n  --master \"$SPARK_MASTER\" \\\n  --executor-memory \"$EXECUTOR_MEMORY\" \\\n  --jars \"$COMET_JAR\" \\\n  --driver-class-path \"$COMET_JAR\" \\\n  --conf spark.executor.extraClassPath=\"$COMET_JAR\" \\\n  --conf spark.eventLog.enabled=true \\\n  --conf spark.eventLog.dir=\"$EVENT_LOG_DIR\" \\\n  --conf spark.memory.offHeap.enabled=true \\\n  --conf spark.memory.offHeap.size=16g \\\n  --conf spark.comet.enabled=true \\\n  --conf spark.comet.operator.DataWritingCommandExec.allowIncompatible=true \\\n  --conf spark.comet.parquet.write.enabled=true \\\n  --conf spark.comet.logFallbackReasons.enabled=true \\\n  --conf spark.comet.explainFallback.enabled=true \\\n  --conf spark.comet.shuffle.mode=jvm \\\n  --conf spark.comet.exec.shuffle.mode=jvm \\\n  --conf spark.comet.exec.replaceSortMergeJoin=true \\\n  --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n  --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \\\n  --conf spark.comet.cast.allowIncompatible=true \\\n  \"$SCRIPT_DIR/run_benchmark.py\" \\\n  --data \"$DATA_PATH\" \\\n  --mode jvm\n\n# Run Comet Native 
shuffle\necho \"\"\necho \">>> Running COMET NATIVE shuffle benchmark...\"\n$SPARK_HOME/bin/spark-submit \\\n  --master \"$SPARK_MASTER\" \\\n  --executor-memory \"$EXECUTOR_MEMORY\" \\\n  --jars \"$COMET_JAR\" \\\n  --driver-class-path \"$COMET_JAR\" \\\n  --conf spark.executor.extraClassPath=\"$COMET_JAR\" \\\n  --conf spark.eventLog.enabled=true \\\n  --conf spark.eventLog.dir=\"$EVENT_LOG_DIR\" \\\n  --conf spark.memory.offHeap.enabled=true \\\n  --conf spark.memory.offHeap.size=16g \\\n  --conf spark.comet.enabled=true \\\n  --conf spark.comet.operator.DataWritingCommandExec.allowIncompatible=true \\\n  --conf spark.comet.parquet.write.enabled=true \\\n  --conf spark.comet.logFallbackReasons.enabled=true \\\n  --conf spark.comet.explainFallback.enabled=true \\\n  --conf spark.comet.exec.shuffle.mode=native \\\n  --conf spark.comet.exec.replaceSortMergeJoin=true \\\n  --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n  --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \\\n  --conf spark.comet.cast.allowIncompatible=true \\\n  \"$SCRIPT_DIR/run_benchmark.py\" \\\n  --data \"$DATA_PATH\" \\\n  --mode native\n\necho \"\"\necho \"========================================\"\necho \"BENCHMARK COMPLETE\"\necho \"========================================\"\necho \"Event logs written to: $EVENT_LOG_DIR\"\necho \"\"\n"
  },
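The script directs you to the Spark UI to compare shuffle sizes; the same numbers can also be scraped from Spark's monitoring REST API while an application is running. A sketch, assuming the default application UI port (4040), a single running application, and the third-party `requests` package:

```python
# Sketch: pull aggregate shuffle-write bytes from the Spark REST API while a
# benchmark is running. Assumes port 4040 and one active application.
import requests

base = "http://localhost:4040/api/v1"
app_id = requests.get(f"{base}/applications").json()[0]["id"]
stages = requests.get(f"{base}/applications/{app_id}/stages").json()

total_write = sum(stage.get("shuffleWriteBytes", 0) for stage in stages)
print(f"Total shuffle write so far: {total_write / (1024 ** 2):.1f} MiB")
```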
  {
    "path": "benchmarks/pyspark/run_benchmark.py",
    "content": "#!/usr/bin/env python3\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"\nRun PySpark benchmarks.\n\nRun benchmarks by name with appropriate spark-submit configs for different modes\n(spark, jvm, native). Check the Spark UI to compare results between modes.\n\"\"\"\n\nimport argparse\nimport sys\n\nfrom pyspark.sql import SparkSession\n\nfrom benchmarks import get_benchmark, list_benchmarks\n\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description=\"Run PySpark benchmarks\",\n        formatter_class=argparse.RawDescriptionHelpFormatter,\n        epilog=\"\"\"\nExamples:\n  # Run hash partitioning shuffle benchmark in Spark mode\n  python run_benchmark.py --data /path/to/data --mode spark --benchmark shuffle-hash\n\n  # Run round-robin shuffle benchmark in Comet native mode\n  python run_benchmark.py --data /path/to/data --mode native --benchmark shuffle-roundrobin\n\n  # List all available benchmarks\n  python run_benchmark.py --list-benchmarks\n        \"\"\"\n    )\n    parser.add_argument(\n        \"--data\", \"-d\",\n        help=\"Path to input parquet data\"\n    )\n    parser.add_argument(\n        \"--mode\", \"-m\",\n        choices=[\"spark\", \"jvm\", \"native\"],\n        help=\"Shuffle mode being tested\"\n    )\n    parser.add_argument(\n        \"--benchmark\", \"-b\",\n        default=\"shuffle-hash\",\n        help=\"Benchmark to run (default: shuffle-hash)\"\n    )\n    parser.add_argument(\n        \"--list-benchmarks\",\n        action=\"store_true\",\n        help=\"List all available benchmarks and exit\"\n    )\n\n    args = parser.parse_args()\n\n    # Handle --list-benchmarks\n    if args.list_benchmarks:\n        print(\"Available benchmarks:\\n\")\n        for name, description in list_benchmarks():\n            print(f\"  {name:25s} - {description}\")\n        return 0\n\n    # Validate required arguments\n    if not args.data:\n        parser.error(\"--data is required when running a benchmark\")\n    if not args.mode:\n        parser.error(\"--mode is required when running a benchmark\")\n\n    # Get the benchmark class\n    try:\n        benchmark_cls = get_benchmark(args.benchmark)\n    except KeyError as e:\n        print(f\"Error: {e}\", file=sys.stderr)\n        print(\"\\nUse --list-benchmarks to see available benchmarks\", file=sys.stderr)\n        return 1\n\n    # Create Spark session\n    spark = SparkSession.builder \\\n        .appName(f\"{benchmark_cls.name()}-{args.mode.upper()}\") \\\n        .getOrCreate()\n\n    try:\n        # Create and run the benchmark\n        benchmark = benchmark_cls(spark, args.data, args.mode)\n        results = benchmark.execute_timed()\n\n        print(\"\\nCheck Spark UI for shuffle sizes and detailed metrics\")\n        return 
0\n\n    finally:\n        spark.stop()\n\n\nif __name__ == \"__main__\":\n    sys.exit(main())\n"
  },
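`run_benchmark.py` imports `get_benchmark` and `list_benchmarks` from the `benchmarks` package, whose registry module is not shown here. A minimal implementation consistent with how those functions are used (a lookup by CLI name that raises `KeyError`, and `(name, description)` pairs for listing) might look like this sketch; the real module may differ:

```python
# Sketch of benchmarks/__init__.py, inferred from run_benchmark.py's usage.
from typing import List, Tuple, Type

from benchmarks.base import Benchmark
from benchmarks.shuffle import ShuffleHashBenchmark, ShuffleRoundRobinBenchmark

_REGISTRY = {
    cls.name(): cls for cls in (ShuffleHashBenchmark, ShuffleRoundRobinBenchmark)
}


def get_benchmark(name: str) -> Type[Benchmark]:
    """Look up a benchmark class by CLI name; raises KeyError if unknown."""
    if name not in _REGISTRY:
        raise KeyError(f"unknown benchmark '{name}'")
    return _REGISTRY[name]


def list_benchmarks() -> List[Tuple[str, str]]:
    """Return (name, description) pairs for every registered benchmark."""
    return [(name, cls.description()) for name, cls in sorted(_REGISTRY.items())]
```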
  {
    "path": "benchmarks/tpc/.gitignore",
    "content": "*.json\n*.png"
  },
  {
    "path": "benchmarks/tpc/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Benchmarking Scripts\n\nThis directory contains scripts used for generating benchmark results that are published in this repository and in\nthe Comet documentation.\n\nFor full instructions on running these benchmarks on an EC2 instance, see the [Comet Benchmarking on EC2 Guide].\n\n[Comet Benchmarking on EC2 Guide]: https://datafusion.apache.org/comet/contributor-guide/benchmarking_aws_ec2.html\n\n## Setup\n\nTPC queries are bundled in `benchmarks/tpc/queries/` (derived from TPC-H/DS under the TPC Fair Use Policy).\n\n## Usage\n\nAll benchmarks are run via `run.py`:\n\n```\npython3 run.py --engine <engine> --benchmark <tpch|tpcds> [options]\n```\n\n| Option                    | Description                                                                     |\n| ------------------------- | ------------------------------------------------------------------------------- |\n| `--engine`                | Engine name (matches a TOML file in `engines/`)                                 |\n| `--benchmark`             | `tpch` or `tpcds`                                                               |\n| `--iterations`            | Number of iterations (default: 1)                                               |\n| `--output`                | Output directory (default: `.`)                                                 |\n| `--query`                 | Run a single query number                                                       |\n| `--no-restart`            | Skip Spark master/worker restart                                                |\n| `--dry-run`               | Print the spark-submit command without executing                                |\n| `--jfr`                   | Enable Java Flight Recorder profiling                                           |\n| `--jfr-dir`               | Directory for JFR output files (default: `/results/jfr`)                        |\n| `--async-profiler`        | Enable async-profiler (profiles Java + native code)                             |\n| `--async-profiler-dir`    | Directory for async-profiler output (default: `/results/async-profiler`)        |\n| `--async-profiler-event`  | Event type: `cpu`, `wall`, `alloc`, `lock`, etc. 
(default: `cpu`)               |\n| `--async-profiler-format` | Output format: `flamegraph`, `jfr`, `collapsed`, `text` (default: `flamegraph`) |\n\nAvailable engines: `spark`, `comet`, `comet-iceberg`, `gluten`\n\n## Example usage\n\nSet Spark environment variables:\n\n```shell\nexport SPARK_HOME=/opt/spark-3.5.3-bin-hadoop3/\nexport SPARK_MASTER=spark://yourhostname:7077\n```\n\nSet path to data (TPC queries are bundled in `benchmarks/tpc/queries/`):\n\n```shell\nexport TPCH_DATA=/mnt/bigdata/tpch/sf100/\n```\n\nRun Spark benchmark:\n\n```shell\nexport JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64\nsudo ./drop-caches.sh\npython3 run.py --engine spark --benchmark tpch\n```\n\nRun Comet benchmark:\n\n```shell\nexport JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64\nexport COMET_JAR=/opt/comet/comet-spark-spark3.5_2.12-0.10.0.jar\nsudo ./drop-caches.sh\npython3 run.py --engine comet --benchmark tpch\n```\n\nRun Gluten benchmark:\n\n```shell\nexport JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64\nexport GLUTEN_JAR=/opt/gluten/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\nsudo ./drop-caches.sh\npython3 run.py --engine gluten --benchmark tpch\n```\n\nPreview a command without running it:\n\n```shell\npython3 run.py --engine comet --benchmark tpch --dry-run\n```\n\nGenerating charts:\n\n```shell\npython3 generate-comparison.py --benchmark tpch --labels \"Spark 3.5.3\" \"Comet 0.9.0\" \"Gluten 1.4.0\" --title \"TPC-H @ 100 GB (single executor, 8 cores, local Parquet files)\" spark-tpch-1752338506381.json comet-tpch-1752337818039.json gluten-tpch-1752337474344.json\n```\n\n## Engine Configuration\n\nEach engine is defined by a TOML file in `engines/`. The config specifies JARs, Spark conf overrides,\nrequired environment variables, and optional defaults/exports. See existing files for examples.\n\n## Iceberg Benchmarking\n\nComet includes native Iceberg support via iceberg-rust integration. 
This enables benchmarking TPC-H queries\nagainst Iceberg tables with native scan acceleration.\n\n### Prerequisites\n\nDownload the Iceberg Spark runtime JAR (required for running the benchmark):\n\n```shell\nwget https://repo1.maven.org/maven2/org/apache/iceberg/iceberg-spark-runtime-3.5_2.12/1.8.1/iceberg-spark-runtime-3.5_2.12-1.8.1.jar\nexport ICEBERG_JAR=/path/to/iceberg-spark-runtime-3.5_2.12-1.8.1.jar\n```\n\nNote: Table creation uses `--packages` which auto-downloads the dependency.\n\n### Create Iceberg tables\n\nConvert existing Parquet data to Iceberg format using `create-iceberg-tables.py`.\nThe script configures the Iceberg catalog automatically -- no `--conf` flags needed.\n\n```shell\nexport ICEBERG_WAREHOUSE=/mnt/bigdata/iceberg-warehouse\nmkdir -p $ICEBERG_WAREHOUSE\n\n# TPC-H\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.8.1 \\\n    --conf spark.driver.memory=8G \\\n    --conf spark.executor.instances=2 \\\n    --conf spark.executor.cores=8 \\\n    --conf spark.cores.max=16 \\\n    --conf spark.executor.memory=16g \\\n    create-iceberg-tables.py \\\n    --benchmark tpch \\\n    --parquet-path $TPCH_DATA \\\n    --warehouse $ICEBERG_WAREHOUSE\n\n# TPC-DS\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.8.1 \\\n    --conf spark.driver.memory=8G \\\n    --conf spark.executor.instances=2 \\\n    --conf spark.executor.cores=8 \\\n    --conf spark.cores.max=16 \\\n    --conf spark.executor.memory=16g \\\n    create-iceberg-tables.py \\\n    --benchmark tpcds \\\n    --parquet-path $TPCDS_DATA \\\n    --warehouse $ICEBERG_WAREHOUSE\n```\n\n### Run Iceberg benchmark\n\n```shell\nexport JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64\nexport COMET_JAR=/opt/comet/comet-spark-spark3.5_2.12-0.10.0.jar\nexport ICEBERG_JAR=/path/to/iceberg-spark-runtime-3.5_2.12-1.8.1.jar\nexport ICEBERG_WAREHOUSE=/mnt/bigdata/iceberg-warehouse\nsudo ./drop-caches.sh\npython3 run.py --engine comet-iceberg --benchmark tpch\n```\n\nThe benchmark uses Comet's native iceberg-rust integration, which is enabled by default.\nVerify native scanning is active by checking for `CometIcebergNativeScanExec` in the\nphysical plan output.\n\n### create-iceberg-tables.py options\n\n| Option           | Required | Default        | Description                         |\n| ---------------- | -------- | -------------- | ----------------------------------- |\n| `--benchmark`    | Yes      |                | `tpch` or `tpcds`                   |\n| `--parquet-path` | Yes      |                | Path to source Parquet data         |\n| `--warehouse`    | Yes      |                | Path to Iceberg warehouse directory |\n| `--catalog`      | No       | `local`        | Iceberg catalog name                |\n| `--database`     | No       | benchmark name | Database name for the tables        |\n\n## Running with Docker\n\nA Docker Compose setup is provided in `infra/docker/` for running benchmarks in an isolated\nSpark standalone cluster. The Docker image supports both **Linux (amd64)** and **macOS (arm64)**\nvia architecture-agnostic Java symlinks created at build time.\n\n### Build the image\n\nThe image must be built for the correct platform to match the native libraries in the\nengine JARs (e.g. 
Comet bundles `libcomet.so` for a specific OS/arch).\n\n```shell\ndocker build -t comet-bench -f benchmarks/tpc/infra/docker/Dockerfile .\n```\n\n### Building a compatible Comet JAR\n\nThe Comet JAR contains platform-specific native libraries (`libcomet.so` / `libcomet.dylib`).\nA JAR built on the host may not work inside the Docker container due to OS, architecture,\nor glibc version mismatches. Use `Dockerfile.build-comet` to build a JAR with compatible\nnative libraries:\n\n- **macOS (Apple Silicon):** The host JAR contains `darwin/aarch64` libraries which\n  won't work in Linux containers. You **must** use the build Dockerfile.\n- **Linux:** If your host glibc version differs from the container's, the native library\n  will fail to load with a `GLIBC_x.xx not found` error. The build Dockerfile uses\n  Ubuntu 20.04 (glibc 2.31) for broad compatibility. Use it if you see\n  `UnsatisfiedLinkError` mentioning glibc when running benchmarks.\n\n```shell\nmkdir -p output\ndocker build -t comet-builder \\\n    -f benchmarks/tpc/infra/docker/Dockerfile.build-comet .\ndocker run --rm -v $(pwd)/output:/output comet-builder\nexport COMET_JAR=$(pwd)/output/comet-spark-spark3.5_2.12-*.jar\n```\n\n### Platform notes\n\n**macOS (Apple Silicon):** Docker Desktop is required.\n\n- **Memory:** Docker Desktop defaults to a small memory allocation (often 8 GB) which\n  is not enough for Spark benchmarks. Go to **Docker Desktop > Settings > Resources >\n  Memory** and increase it to at least 48 GB (each worker requests 16 GB for its executor\n  plus overhead, and the driver needs 8 GB). Without enough memory, executors will be\n  OOM-killed (exit code 137).\n- **File Sharing:** You may need to add your data directory (e.g. `/opt`) to\n  **Docker Desktop > Settings > Resources > File Sharing** before mounting host volumes.\n\n**Linux (amd64):** Docker uses cgroup memory limits directly without a VM layer. No\nspecial Docker configuration is needed, but you may still need to build the Comet JAR\nusing `Dockerfile.build-comet` (see above) if your host glibc version doesn't match\nthe container's.\n\nThe Docker image auto-detects the container architecture (amd64/arm64) and sets up\narch-agnostic Java symlinks. The compose file uses `BENCH_JAVA_HOME` (not `JAVA_HOME`)\nto avoid inheriting the host's Java path into the container.\n\n### Start the cluster\n\nSet environment variables pointing to your host paths, then start the Spark master and\ntwo workers:\n\n```shell\nexport DATA_DIR=/mnt/bigdata/tpch/sf100\nexport RESULTS_DIR=/tmp/bench-results\nexport COMET_JAR=/opt/comet/comet-spark-spark3.5_2.12-0.10.0.jar\n\nmkdir -p $RESULTS_DIR/spark-events\ndocker compose -f benchmarks/tpc/infra/docker/docker-compose.yml up -d\n```\n\nSet `COMET_JAR`, `GLUTEN_JAR`, or `ICEBERG_JAR` to the host path of the engine JAR you\nwant to use. Each JAR is mounted individually into the container, so you can easily switch\nbetween versions by changing the path and restarting.\n\n### Run benchmarks\n\nUse `docker compose run --rm` to execute benchmarks. The `--rm` flag removes the\ncontainer when it exits, preventing port conflicts on subsequent runs. 
Pass\n`--no-restart` since the cluster is already managed by Compose, and `--output /results`\nso that output files land in the mounted results directory:\n\n```shell\ndocker compose -f benchmarks/tpc/infra/docker/docker-compose.yml \\\n    run --rm -p 4040:4040 bench \\\n    python3 /opt/benchmarks/run.py \\\n    --engine comet --benchmark tpch --output /results --no-restart\n```\n\nThe `-p 4040:4040` flag exposes the Spark Application UI on the host. The following\nUIs are available during a benchmark run:\n\n| UI                | URL                    |\n| ----------------- | ---------------------- |\n| Spark Master      | http://localhost:8080  |\n| Worker 1          | http://localhost:8081  |\n| Worker 2          | http://localhost:8082  |\n| Spark Application | http://localhost:4040  |\n| History Server    | http://localhost:18080 |\n\n> **Note:** The Master UI links to the Application UI using the container's internal\n> hostname, which is not reachable from the host. Use `http://localhost:4040` directly\n> to access the Application UI.\n\nThe Spark Application UI is only available while a benchmark is running. To inspect\ncompleted runs, uncomment the `history-server` service in `docker-compose.yml` and\nrestart the cluster. The History Server reads event logs from `$RESULTS_DIR/spark-events`.\n\nFor Gluten (requires Java 8), you must restart the **entire cluster** with `JAVA_HOME`\nset so that all services (master, workers, and bench) use Java 8:\n\n```shell\nexport BENCH_JAVA_HOME=/usr/lib/jvm/java-8-openjdk\ndocker compose -f benchmarks/tpc/infra/docker/docker-compose.yml down\ndocker compose -f benchmarks/tpc/infra/docker/docker-compose.yml up -d\n\ndocker compose -f benchmarks/tpc/infra/docker/docker-compose.yml \\\n    run --rm bench \\\n    python3 /opt/benchmarks/run.py \\\n    --engine gluten --benchmark tpch --output /results --no-restart\n```\n\n> **Important:** Only passing `-e JAVA_HOME=...` to the `bench` container is not\n> sufficient -- the workers also need Java 8 or Gluten will fail at runtime with\n> `sun.misc.Unsafe` errors. Unset `BENCH_JAVA_HOME` (or switch it back to Java 17)\n> and restart the cluster before running Comet or Spark benchmarks.\n\n### Memory limits\n\nTwo compose files are provided for different hardware profiles:\n\n| File                        | Workers | Total memory | Use case                       |\n| --------------------------- | ------- | ------------ | ------------------------------ |\n| `docker-compose.yml`        | 2       | ~74 GB       | SF100+ on a workstation/server |\n| `docker-compose-laptop.yml` | 1       | ~12 GB       | SF1–SF10 on a laptop           |\n\n**`docker-compose.yml`** (workstation default):\n\n| Container      | Container limit (`mem_limit`) | Spark JVM allocation      |\n| -------------- | ----------------------------- | ------------------------- |\n| spark-worker-1 | 32 GB                         | 16 GB executor + overhead |\n| spark-worker-2 | 32 GB                         | 16 GB executor + overhead |\n| bench (driver) | 10 GB                         | 8 GB driver               |\n| **Total**      | **74 GB**                     |                           |\n\nConfigure via environment variables: `WORKER_MEM_LIMIT` (default: 32g per worker),\n`BENCH_MEM_LIMIT` (default: 10g), `WORKER_MEMORY` (default: 16g, Spark executor memory),\n`WORKER_CORES` (default: 8).\n\n### Running on a laptop with small scale factors\n\nFor local development or testing with small scale factors (e.g. 
SF1 or SF10), use the\nlaptop compose file which runs a single worker with reduced memory:\n\n```shell\ndocker compose -f benchmarks/tpc/infra/docker/docker-compose-laptop.yml up -d\n```\n\nThis starts one worker (4 GB executor inside an 8 GB container) and a 4 GB bench\ncontainer, totaling approximately **12 GB** of memory.\n\nThe benchmark scripts request 2 executor instances and 16 max cores by default\n(`run.py`). Spark will simply use whatever resources are available on the single worker,\nso no script changes are needed.\n\n### Comparing Parquet vs Iceberg performance\n\nRun both benchmarks and compare:\n\n```shell\npython3 generate-comparison.py --benchmark tpch \\\n    --labels \"Comet (Parquet)\" \"Comet (Iceberg)\" \\\n    --title \"TPC-H @ 100 GB: Parquet vs Iceberg\" \\\n    comet-tpch-*.json comet-iceberg-tpch-*.json\n```\n\n## Java Flight Recorder Profiling\n\nUse the `--jfr` flag to capture JFR profiles from the Spark driver and executors.\nJFR is built into JDK 11+ so no additional dependencies are needed.\n\n```shell\npython3 run.py --engine comet --benchmark tpch --jfr\n```\n\nJFR recordings are written to `/results/jfr/` by default (configurable with\n`--jfr-dir`). The driver writes `driver.jfr` and each executor writes\n`executor.jfr` (JFR appends the PID when multiple executors share a path).\n\nWith Docker Compose, the `/results` volume is shared across all containers,\nso JFR files from both driver and executors are collected in\n`$RESULTS_DIR/jfr/` on the host:\n\n```shell\ndocker compose -f benchmarks/tpc/infra/docker/docker-compose.yml \\\n    run --rm bench \\\n    python3 /opt/benchmarks/run.py \\\n    --engine comet --benchmark tpch --output /results --no-restart --jfr\n```\n\nOpen the `.jfr` files with [JDK Mission Control](https://jdk.java.net/jmc/),\nIntelliJ IDEA's profiler, or `jfr` CLI tool (`jfr summary driver.jfr`).\n\n## async-profiler Profiling\n\nUse the `--async-profiler` flag to capture profiles with\n[async-profiler](https://github.com/async-profiler/async-profiler). 
Unlike JFR,\nasync-profiler can profile **both Java and native (Rust/C++) code** in the same\nflame graph, making it especially useful for profiling Comet workloads.\n\n### Prerequisites\n\nasync-profiler must be installed on every node where the driver or executors run.\nSet `ASYNC_PROFILER_HOME` to the installation directory:\n\n```shell\n# Download and extract (Linux x64 example)\nwget https://github.com/async-profiler/async-profiler/releases/download/v3.0/async-profiler-3.0-linux-x64.tar.gz\ntar xzf async-profiler-3.0-linux-x64.tar.gz -C /opt/async-profiler --strip-components=1\nexport ASYNC_PROFILER_HOME=/opt/async-profiler\n```\n\nOn Linux, `perf_event_paranoid` must be set to allow profiling:\n\n```shell\nsudo sysctl kernel.perf_event_paranoid=1   # or 0 / -1 for full access\nsudo sysctl kernel.kptr_restrict=0          # optional: enable kernel symbols\n```\n\n### Basic usage\n\n```shell\npython3 run.py --engine comet --benchmark tpch --async-profiler\n```\n\nThis produces HTML flame graphs in `/results/async-profiler/` by default\n(`driver.html` and `executor.html`).\n\n### Choosing events and output format\n\n```shell\n# Wall-clock profiling (includes time spent waiting/sleeping)\npython3 run.py --engine comet --benchmark tpch \\\n    --async-profiler --async-profiler-event wall\n\n# Allocation profiling with JFR output\npython3 run.py --engine comet --benchmark tpch \\\n    --async-profiler --async-profiler-event alloc --async-profiler-format jfr\n\n# Lock contention profiling\npython3 run.py --engine comet --benchmark tpch \\\n    --async-profiler --async-profiler-event lock\n```\n\n| Event   | Description                                         |\n| ------- | --------------------------------------------------- |\n| `cpu`   | On-CPU time (default). Shows where CPU cycles go.   |\n| `wall`  | Wall-clock time. Includes threads that are blocked. |\n| `alloc` | Heap allocation profiling.                          |\n| `lock`  | Lock contention profiling.                          |\n\n| Format       | Extension | Description                              |\n| ------------ | --------- | ---------------------------------------- |\n| `flamegraph` | `.html`   | Interactive HTML flame graph (default).  |\n| `jfr`        | `.jfr`    | JFR format, viewable in JMC or IntelliJ. |\n| `collapsed`  | `.txt`    | Collapsed stacks for FlameGraph scripts. |\n| `text`       | `.txt`    | Flat text summary of hot methods.        |\n\n### Docker usage\n\nThe Docker image includes async-profiler pre-installed at\n`/opt/async-profiler`. The `ASYNC_PROFILER_HOME` environment variable is\nalready set in the compose files, so no extra configuration is needed:\n\n```shell\ndocker compose -f benchmarks/tpc/infra/docker/docker-compose.yml \\\n    run --rm bench \\\n    python3 /opt/benchmarks/run.py \\\n    --engine comet --benchmark tpch --output /results --no-restart --async-profiler\n```\n\nOutput files are collected in `$RESULTS_DIR/async-profiler/` on the host.\n\n**Note:** On Linux, the Docker container needs `--privileged` or\n`SYS_PTRACE` capability and `perf_event_paranoid <= 1` on the host for\n`cpu`/`wall` events. Allocation (`alloc`) and lock (`lock`) events work\nwithout special privileges.\n"
  },
  {
    "path": "benchmarks/tpc/create-iceberg-tables.py",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"\nConvert TPC-H or TPC-DS Parquet data to Iceberg tables.\n\nUsage:\n    spark-submit \\\n        --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.8.1 \\\n        create-iceberg-tables.py \\\n        --benchmark tpch \\\n        --parquet-path /path/to/tpch/parquet \\\n        --warehouse /path/to/iceberg-warehouse\n\n    spark-submit \\\n        --packages org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.8.1 \\\n        create-iceberg-tables.py \\\n        --benchmark tpcds \\\n        --parquet-path /path/to/tpcds/parquet \\\n        --warehouse /path/to/iceberg-warehouse\n\"\"\"\n\nimport argparse\nimport os\nimport sys\nfrom pyspark.sql import SparkSession\nimport time\n\nTPCH_TABLES = [\n    \"customer\",\n    \"lineitem\",\n    \"nation\",\n    \"orders\",\n    \"part\",\n    \"partsupp\",\n    \"region\",\n    \"supplier\",\n]\n\nTPCDS_TABLES = [\n    \"call_center\",\n    \"catalog_page\",\n    \"catalog_returns\",\n    \"catalog_sales\",\n    \"customer\",\n    \"customer_address\",\n    \"customer_demographics\",\n    \"date_dim\",\n    \"time_dim\",\n    \"household_demographics\",\n    \"income_band\",\n    \"inventory\",\n    \"item\",\n    \"promotion\",\n    \"reason\",\n    \"ship_mode\",\n    \"store\",\n    \"store_returns\",\n    \"store_sales\",\n    \"warehouse\",\n    \"web_page\",\n    \"web_returns\",\n    \"web_sales\",\n    \"web_site\",\n]\n\nBENCHMARK_TABLES = {\n    \"tpch\": TPCH_TABLES,\n    \"tpcds\": TPCDS_TABLES,\n}\n\n\ndef main(benchmark: str, parquet_path: str, warehouse: str, catalog: str, database: str):\n    table_names = BENCHMARK_TABLES[benchmark]\n\n    # Validate paths before starting Spark\n    errors = []\n    if not os.path.isdir(parquet_path):\n        errors.append(f\"Error: --parquet-path '{parquet_path}' does not exist or is not a directory\")\n    if not os.path.isdir(warehouse):\n        errors.append(f\"Error: --warehouse '{warehouse}' does not exist or is not a directory. 
\"\n                       \"Create it with: mkdir -p \" + warehouse)\n    if errors:\n        for e in errors:\n            print(e, file=sys.stderr)\n        sys.exit(1)\n\n    spark = SparkSession.builder \\\n        .appName(f\"Create Iceberg {benchmark.upper()} Tables\") \\\n        .config(f\"spark.sql.catalog.{catalog}\", \"org.apache.iceberg.spark.SparkCatalog\") \\\n        .config(f\"spark.sql.catalog.{catalog}.type\", \"hadoop\") \\\n        .config(f\"spark.sql.catalog.{catalog}.warehouse\", warehouse) \\\n        .getOrCreate()\n\n    # Set the Iceberg catalog as the current catalog so that\n    # namespace operations are routed correctly\n    spark.sql(f\"USE {catalog}\")\n\n    # Create namespace if it doesn't exist\n    try:\n        spark.sql(f\"CREATE NAMESPACE IF NOT EXISTS {database}\")\n    except Exception:\n        # Namespace may already exist\n        pass\n\n    for table in table_names:\n        parquet_table_path = f\"{parquet_path}/{table}.parquet\"\n        iceberg_table = f\"{catalog}.{database}.{table}\"\n\n        print(f\"Converting {parquet_table_path} -> {iceberg_table}\")\n        start_time = time.time()\n\n        # Drop table if exists to allow re-running\n        spark.sql(f\"DROP TABLE IF EXISTS {iceberg_table}\")\n\n        # Read parquet and write as Iceberg\n        df = spark.read.parquet(parquet_table_path)\n        df.writeTo(iceberg_table).using(\"iceberg\").create()\n\n        row_count = spark.table(iceberg_table).count()\n        elapsed = time.time() - start_time\n        print(f\"  Created {iceberg_table} with {row_count} rows in {elapsed:.2f}s\")\n\n    print(f\"\\nAll {benchmark.upper()} tables created successfully!\")\n    print(f\"Tables available at: {catalog}.{database}.*\")\n\n    spark.stop()\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(\n        description=\"Convert TPC-H or TPC-DS Parquet data to Iceberg tables\"\n    )\n    parser.add_argument(\n        \"--benchmark\", required=True, choices=[\"tpch\", \"tpcds\"],\n        help=\"Benchmark whose tables to convert (tpch or tpcds)\"\n    )\n    parser.add_argument(\n        \"--parquet-path\", required=True,\n        help=\"Path to Parquet data directory\"\n    )\n    parser.add_argument(\n        \"--warehouse\", required=True,\n        help=\"Path to Iceberg warehouse directory\"\n    )\n    parser.add_argument(\n        \"--catalog\", default=\"local\",\n        help=\"Iceberg catalog name (default: 'local')\"\n    )\n    parser.add_argument(\n        \"--database\", default=None,\n        help=\"Database name to create tables in (defaults to benchmark name)\"\n    )\n    args = parser.parse_args()\n\n    database = args.database if args.database else args.benchmark\n    main(args.benchmark, args.parquet_path, args.warehouse, args.catalog, database)\n"
  },
  {
    "path": "benchmarks/tpc/drop-caches.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\necho 1 > /proc/sys/vm/drop_caches\n"
  },
  {
    "path": "benchmarks/tpc/engines/comet-hashjoin.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[engine]\nname = \"comet-hashjoin\"\n\n[env]\nrequired = [\"COMET_JAR\"]\n\n[spark_submit]\njars = [\"$COMET_JAR\"]\ndriver_class_path = [\"$COMET_JAR\"]\n\n[spark_conf]\n\"spark.driver.extraClassPath\" = \"$COMET_JAR\"\n\"spark.executor.extraClassPath\" = \"$COMET_JAR\"\n\"spark.plugins\" = \"org.apache.spark.CometPlugin\"\n\"spark.shuffle.manager\" = \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\"\n\"spark.comet.scan.impl\" = \"native_datafusion\"\n\"spark.comet.exec.replaceSortMergeJoin\" = \"true\"\n\"spark.comet.expression.Cast.allowIncompatible\" = \"true\"\n"
  },
  {
    "path": "benchmarks/tpc/engines/comet-iceberg-hashjoin.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[engine]\nname = \"comet-iceberg-hashjoin\"\n\n[env]\nrequired = [\"COMET_JAR\", \"ICEBERG_JAR\", \"ICEBERG_WAREHOUSE\"]\n\n[env.defaults]\nICEBERG_CATALOG = \"local\"\n\n[spark_submit]\njars = [\"$COMET_JAR\", \"$ICEBERG_JAR\"]\ndriver_class_path = [\"$COMET_JAR\", \"$ICEBERG_JAR\"]\n\n[spark_conf]\n\"spark.driver.extraClassPath\" = \"$COMET_JAR:$ICEBERG_JAR\"\n\"spark.executor.extraClassPath\" = \"$COMET_JAR:$ICEBERG_JAR\"\n\"spark.plugins\" = \"org.apache.spark.CometPlugin\"\n\"spark.shuffle.manager\" = \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\"\n\"spark.comet.exec.replaceSortMergeJoin\" = \"true\"\n\"spark.comet.expression.Cast.allowIncompatible\" = \"true\"\n\"spark.comet.enabled\" = \"true\"\n\"spark.comet.exec.enabled\" = \"true\"\n\"spark.comet.scan.icebergNative.enabled\" = \"true\"\n\"spark.comet.explainFallback.enabled\" = \"true\"\n\"spark.sql.catalog.${ICEBERG_CATALOG}\" = \"org.apache.iceberg.spark.SparkCatalog\"\n\"spark.sql.catalog.${ICEBERG_CATALOG}.type\" = \"hadoop\"\n\"spark.sql.catalog.${ICEBERG_CATALOG}.warehouse\" = \"$ICEBERG_WAREHOUSE\"\n\"spark.sql.defaultCatalog\" = \"${ICEBERG_CATALOG}\"\n\n[tpcbench_args]\nuse_iceberg = true\n"
  },
  {
    "path": "benchmarks/tpc/engines/comet-iceberg.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[engine]\nname = \"comet-iceberg\"\n\n[env]\nrequired = [\"COMET_JAR\", \"ICEBERG_JAR\", \"ICEBERG_WAREHOUSE\"]\n\n[env.defaults]\nICEBERG_CATALOG = \"local\"\n\n[spark_submit]\njars = [\"$COMET_JAR\", \"$ICEBERG_JAR\"]\ndriver_class_path = [\"$COMET_JAR\", \"$ICEBERG_JAR\"]\n\n[spark_conf]\n\"spark.driver.extraClassPath\" = \"$COMET_JAR:$ICEBERG_JAR\"\n\"spark.executor.extraClassPath\" = \"$COMET_JAR:$ICEBERG_JAR\"\n\"spark.plugins\" = \"org.apache.spark.CometPlugin\"\n\"spark.shuffle.manager\" = \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\"\n\"spark.comet.expression.Cast.allowIncompatible\" = \"true\"\n\"spark.comet.enabled\" = \"true\"\n\"spark.comet.exec.enabled\" = \"true\"\n\"spark.comet.scan.icebergNative.enabled\" = \"true\"\n\"spark.comet.explainFallback.enabled\" = \"true\"\n\"spark.sql.catalog.${ICEBERG_CATALOG}\" = \"org.apache.iceberg.spark.SparkCatalog\"\n\"spark.sql.catalog.${ICEBERG_CATALOG}.type\" = \"hadoop\"\n\"spark.sql.catalog.${ICEBERG_CATALOG}.warehouse\" = \"$ICEBERG_WAREHOUSE\"\n\"spark.sql.defaultCatalog\" = \"${ICEBERG_CATALOG}\"\n\n[tpcbench_args]\nuse_iceberg = true\n"
  },
  {
    "path": "benchmarks/tpc/engines/comet.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[engine]\nname = \"comet\"\n\n[env]\nrequired = [\"COMET_JAR\"]\n\n[spark_submit]\njars = [\"$COMET_JAR\"]\ndriver_class_path = [\"$COMET_JAR\"]\n\n[spark_conf]\n\"spark.driver.extraClassPath\" = \"$COMET_JAR\"\n\"spark.executor.extraClassPath\" = \"$COMET_JAR\"\n\"spark.plugins\" = \"org.apache.spark.CometPlugin\"\n\"spark.shuffle.manager\" = \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\"\n\"spark.comet.scan.impl\" = \"native_datafusion\"\n\"spark.comet.expression.Cast.allowIncompatible\" = \"true\"\n"
  },
  {
    "path": "benchmarks/tpc/engines/gluten.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[engine]\nname = \"gluten\"\n\n[env]\nrequired = [\"GLUTEN_JAR\"]\nexports = { TZ = \"UTC\" }\n\n[spark_submit]\njars = [\"$GLUTEN_JAR\"]\n\n[spark_conf]\n\"spark.plugins\" = \"org.apache.gluten.GlutenPlugin\"\n\"spark.driver.extraClassPath\" = \"${GLUTEN_JAR}\"\n\"spark.executor.extraClassPath\" = \"${GLUTEN_JAR}\"\n\"spark.gluten.sql.columnar.forceShuffledHashJoin\" = \"true\"\n\"spark.shuffle.manager\" = \"org.apache.spark.shuffle.sort.ColumnarShuffleManager\"\n\"spark.sql.session.timeZone\" = \"UTC\"\n"
  },
  {
    "path": "benchmarks/tpc/engines/spark.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[engine]\nname = \"spark\"\n"
  },
  {
    "path": "benchmarks/tpc/generate-comparison.py",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nimport argparse\nimport json\nimport logging\nimport matplotlib.pyplot as plt\nimport numpy as np\n\nlogging.basicConfig(level=logging.INFO)\nlogger = logging.getLogger(__name__)\n\ndef geomean(data):\n    return np.prod(data) ** (1 / len(data))\n\ndef get_durations(result, query_key):\n    \"\"\"Extract durations from a query result, supporting both old (list) and new (dict) formats.\"\"\"\n    value = result[query_key]\n    if isinstance(value, dict):\n        return value[\"durations\"]\n    return value\n\ndef get_all_queries(results):\n    \"\"\"Return the sorted union of all query keys across all result sets.\"\"\"\n    all_keys = set()\n    for result in results:\n        all_keys.update(result.keys())\n    # Filter to numeric query keys and sort numerically\n    numeric_keys = []\n    for k in all_keys:\n        try:\n            numeric_keys.append(int(k))\n        except ValueError:\n            pass\n    return sorted(numeric_keys)\n\ndef get_common_queries(results, labels):\n    \"\"\"Return queries present in ALL result sets, warning about queries missing from some files.\"\"\"\n    all_queries = get_all_queries(results)\n    common = []\n    for query in all_queries:\n        key = str(query)\n        present = [labels[i] for i, r in enumerate(results) if key in r]\n        missing = [labels[i] for i, r in enumerate(results) if key not in r]\n        if missing:\n            logger.warning(f\"Query {query}: present in [{', '.join(present)}] but missing from [{', '.join(missing)}]\")\n        if not missing:\n            common.append(query)\n    return common\n\ndef check_result_consistency(results, labels, benchmark):\n    \"\"\"Log warnings if row counts or result hashes differ across result sets.\"\"\"\n    all_queries = get_all_queries(results)\n    for query in all_queries:\n        key = str(query)\n        row_counts = []\n        hashes = []\n        for i, result in enumerate(results):\n            if key not in result:\n                continue\n            value = result[key]\n            if not isinstance(value, dict):\n                continue\n            if \"row_count\" in value:\n                row_counts.append((labels[i], value[\"row_count\"]))\n            if \"result_hash\" in value:\n                hashes.append((labels[i], value[\"result_hash\"]))\n\n        if len(row_counts) > 1:\n            counts = set(rc for _, rc in row_counts)\n            if len(counts) > 1:\n                details = \", \".join(f\"{label}={rc}\" for label, rc in row_counts)\n                logger.warning(f\"Query {query}: row count mismatch: {details}\")\n\n        if len(hashes) > 1:\n            hash_values = set(h for _, h in hashes)\n            if 
len(hash_values) > 1:\n                details = \", \".join(f\"{label}={h}\" for label, h in hashes)\n                logger.warning(f\"Query {query}: result hash mismatch: {details}\")\n\ndef generate_query_rel_speedup_chart(baseline, comparison, label1: str, label2: str, benchmark: str, title: str, common_queries=None):\n    if common_queries is None:\n        common_queries = range(1, query_count(benchmark)+1)\n    results = []\n    for query in common_queries:\n        a = np.median(np.array(get_durations(baseline, str(query))))\n        b = np.median(np.array(get_durations(comparison, str(query))))\n        if a > b:\n            speedup = a/b-1\n        else:\n            speedup = -(1/(a/b)-1)\n        results.append((\"q\" + str(query), round(speedup*100, 0)))\n\n    results = sorted(results, key=lambda x: -x[1])\n\n    queries, speedups = zip(*results)\n\n    # Create figure and axis\n    if benchmark == \"tpch\":\n        fig, ax = plt.subplots(figsize=(10, 6))\n    else:\n        fig, ax = plt.subplots(figsize=(35, 10))\n\n    # Create bar chart\n    bars = ax.bar(queries, speedups, color='skyblue')\n\n    # Add text annotations\n    for bar, speedup in zip(bars, speedups):\n        yval = bar.get_height()\n        if yval >= 0:\n            ax.text(bar.get_x() + bar.get_width() / 2.0, min(800, yval+5), f'{yval:.0f}%', va='bottom', ha='center', fontsize=8,\n                    color='blue', rotation=90)\n        else:\n            ax.text(bar.get_x() + bar.get_width() / 2.0, yval, f'{yval:.0f}%', va='top', ha='center', fontsize=8,\n                    color='blue', rotation=90)\n\n    # Add title and labels\n    ax.set_title(label2 + \" speedup over \" + label1 + \" (\" + title + \")\")\n    ax.set_ylabel('Speedup Percentage (100% speedup = 2x faster)')\n    ax.set_xlabel('Query')\n\n    # Customize the y-axis to handle both positive and negative values better\n    ax.axhline(0, color='black', linewidth=0.8)\n    min_value = (min(speedups) // 100) * 100\n    max_value = ((max(speedups) // 100) + 1) * 100 + 50\n    if benchmark == \"tpch\":\n        ax.set_ylim(min_value, max_value)\n    else:\n        # TODO improve this\n        ax.set_ylim(-250, 300)\n\n    # Show grid for better readability\n    ax.yaxis.grid(True)\n\n    # Save the plot as an image file\n    plt.savefig(f'{benchmark}_queries_speedup_rel.png', format='png')\n\ndef generate_query_abs_speedup_chart(baseline, comparison, label1: str, label2: str, benchmark: str, title: str, common_queries=None):\n    if common_queries is None:\n        common_queries = range(1, query_count(benchmark)+1)\n    results = []\n    for query in common_queries:\n        a = np.median(np.array(get_durations(baseline, str(query))))\n        b = np.median(np.array(get_durations(comparison, str(query))))\n        speedup = a-b\n        results.append((\"q\" + str(query), round(speedup, 1)))\n\n    results = sorted(results, key=lambda x: -x[1])\n\n    queries, speedups = zip(*results)\n\n    # Create figure and axis\n    if benchmark == \"tpch\":\n        fig, ax = plt.subplots(figsize=(10, 6))\n    else:\n        fig, ax = plt.subplots(figsize=(35, 10))\n\n    # Create bar chart\n    bars = ax.bar(queries, speedups, color='skyblue')\n\n    # Add text annotations\n    for bar, speedup in zip(bars, speedups):\n        yval = bar.get_height()\n        if yval >= 0:\n            ax.text(bar.get_x() + bar.get_width() / 2.0, min(800, yval+5), f'{yval:.1f}', va='bottom', ha='center', fontsize=8,\n                    color='blue', rotation=90)\n  
      else:\n            ax.text(bar.get_x() + bar.get_width() / 2.0, yval, f'{yval:.1f}', va='top', ha='center', fontsize=8,\n                    color='blue', rotation=90)\n\n    # Add title and labels\n    ax.set_title(label2 + \" speedup over \" + label1 + \" (\" + title + \")\")\n    ax.set_ylabel('Speedup (in seconds)')\n    ax.set_xlabel('Query')\n\n    # Customize the y-axis to handle both positive and negative values better\n    ax.axhline(0, color='black', linewidth=0.8)\n    min_value = min(speedups) * 2 - 20\n    max_value = max(speedups) * 1.5\n    ax.set_ylim(min_value, max_value)\n\n    # Show grid for better readability\n    ax.yaxis.grid(True)\n\n    # Save the plot as an image file\n    plt.savefig(f'{benchmark}_queries_speedup_abs.png', format='png')\n\ndef generate_query_comparison_chart(results, labels, benchmark: str, title: str, common_queries=None):\n    if common_queries is None:\n        common_queries = range(1, query_count(benchmark)+1)\n    queries = []\n    benches = []\n    for _ in results:\n        benches.append([])\n    for query in common_queries:\n        queries.append(\"q\" + str(query))\n        for i in range(0, len(results)):\n            benches[i].append(np.median(np.array(get_durations(results[i], str(query)))))\n\n    # Define the width of the bars\n    bar_width = 0.3\n\n    # Define the positions of the bars on the x-axis\n    index = np.arange(len(queries)) * 1.5\n\n    # Create a bar chart\n    if benchmark == \"tpch\":\n        fig, ax = plt.subplots(figsize=(15, 6))\n    else:\n        fig, ax = plt.subplots(figsize=(35, 6))\n\n    for i in range(0, len(results)):\n        bar = ax.bar(index + i * bar_width, benches[i], bar_width, label=labels[i])\n\n    # Add labels, title, and legend\n    ax.set_title(title)\n    ax.set_xlabel('Queries')\n    ax.set_ylabel('Query Time (seconds)')\n    ax.set_xticks(index + bar_width / 2)\n    ax.set_xticklabels(queries)\n    ax.legend()\n\n    # Save the plot as an image file\n    plt.savefig(f'{benchmark}_queries_compare.png', format='png')\n\ndef generate_summary(results, labels, benchmark: str, title: str, common_queries=None):\n    if common_queries is None:\n        common_queries = range(1, query_count(benchmark)+1)\n    timings = []\n    for _ in results:\n        timings.append(0)\n\n    num_queries = len([q for q in common_queries])\n    for query in common_queries:\n        for i in range(0, len(results)):\n            timings[i] += np.median(np.array(get_durations(results[i], str(query))))\n\n    # Create figure and axis\n    fig, ax = plt.subplots()\n    fig.set_size_inches(10, 6)\n\n    # Add title and labels\n    ax.set_title(title)\n    ax.set_ylabel(f'Time in seconds to run {num_queries} {benchmark} queries (lower is better)')\n\n    times = [round(x,0) for x in timings]\n\n    # Create bar chart\n    bars = ax.bar(labels, times, color='skyblue', width=0.8)\n\n    # Add text annotations\n    for bar in bars:\n        yval = bar.get_height()\n        ax.text(bar.get_x() + bar.get_width() / 2.0, yval, f'{yval}', va='bottom')  # va: vertical alignment\n\n    plt.savefig(f'{benchmark}_allqueries.png', format='png')\n\ndef query_count(benchmark: str):\n    if benchmark == \"tpch\":\n        return 22\n    elif benchmark == \"tpcds\":\n        return 99\n    else:\n        raise ValueError(\"invalid benchmark name\")\n\ndef main(files, labels, benchmark: str, title: str):\n    results = []\n    for filename in files:\n        with open(filename) as f:\n            results.append(json.load(f))\n    
check_result_consistency(results, labels, benchmark)\n    common_queries = get_common_queries(results, labels)\n    if not common_queries:\n        logger.error(\"No queries found in common across all result files\")\n        return\n    generate_summary(results, labels, benchmark, title, common_queries)\n    generate_query_comparison_chart(results, labels, benchmark, title, common_queries)\n    if len(files) == 2:\n        generate_query_abs_speedup_chart(results[0], results[1], labels[0], labels[1], benchmark, title, common_queries)\n        generate_query_rel_speedup_chart(results[0], results[1], labels[0], labels[1], benchmark, title, common_queries)\n\nif __name__ == '__main__':\n    parser = argparse.ArgumentParser(description='Generate comparison')\n    parser.add_argument('filenames', nargs='+', type=str, help='JSON result files')\n    parser.add_argument('--labels', nargs='+', type=str, help='Labels')\n    parser.add_argument('--benchmark', type=str, help='Benchmark name (tpch or tpcds)')\n    parser.add_argument('--title', type=str, help='Chart title')\n    args = parser.parse_args()\n    main(args.filenames, args.labels, args.benchmark, args.title)\n"
  },
  {
    "path": "benchmarks/tpc/infra/docker/Dockerfile",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Benchmark image for running TPC-H and TPC-DS benchmarks across engines\n# (Spark, Comet, Gluten).\n#\n# Build (from repository root):\n#   docker build -t comet-bench -f benchmarks/tpc/infra/docker/Dockerfile .\n\nARG SPARK_IMAGE=apache/spark:3.5.2-python3\nFROM ${SPARK_IMAGE}\n\nUSER root\n\n# Install Java 8 (Gluten) and Java 17 (Comet) plus Python 3.\nRUN apt-get update \\\n    && apt-get install -y --no-install-recommends \\\n       openjdk-8-jdk-headless \\\n       openjdk-17-jdk-headless \\\n       python3 python3-pip procps wget \\\n    && apt-get clean \\\n    && rm -rf /var/lib/apt/lists/*\n\n# Install async-profiler for profiling Java + native (Rust/C++) code.\nARG ASYNC_PROFILER_VERSION=3.0\nRUN ARCH=$(uname -m) && \\\n    if [ \"$ARCH\" = \"x86_64\" ]; then AP_ARCH=\"linux-x64\"; \\\n    elif [ \"$ARCH\" = \"aarch64\" ]; then AP_ARCH=\"linux-aarch64\"; \\\n    else echo \"Unsupported architecture: $ARCH\" && exit 1; fi && \\\n    wget -q \"https://github.com/async-profiler/async-profiler/releases/download/v${ASYNC_PROFILER_VERSION}/async-profiler-${ASYNC_PROFILER_VERSION}-${AP_ARCH}.tar.gz\" \\\n         -O /tmp/async-profiler.tar.gz && \\\n    mkdir -p /opt/async-profiler && \\\n    tar xzf /tmp/async-profiler.tar.gz -C /opt/async-profiler --strip-components=1 && \\\n    rm /tmp/async-profiler.tar.gz\nENV ASYNC_PROFILER_HOME=/opt/async-profiler\n\n# Default to Java 17 (override with JAVA_HOME at runtime for Gluten).\n# Detect architecture (amd64 or arm64) so the image works on both Linux and macOS.\nARG TARGETARCH\nRUN ln -s /usr/lib/jvm/java-17-openjdk-${TARGETARCH} /usr/lib/jvm/java-17-openjdk && \\\n    ln -s /usr/lib/jvm/java-8-openjdk-${TARGETARCH} /usr/lib/jvm/java-8-openjdk\nENV JAVA_HOME=/usr/lib/jvm/java-17-openjdk\n\n# Copy the benchmark scripts into the image.\nCOPY benchmarks/tpc/run.py              /opt/benchmarks/run.py\nCOPY benchmarks/tpc/tpcbench.py         /opt/benchmarks/tpcbench.py\nCOPY benchmarks/tpc/engines             /opt/benchmarks/engines\nCOPY benchmarks/tpc/queries             /opt/benchmarks/queries\nCOPY benchmarks/tpc/create-iceberg-tables.py /opt/benchmarks/create-iceberg-tables.py\nCOPY benchmarks/tpc/generate-comparison.py   /opt/benchmarks/generate-comparison.py\n\n# Engine JARs are bind-mounted or copied in at runtime via --jars.\n# Data and query paths are also bind-mounted.\n\nWORKDIR /opt/benchmarks\n\n# Defined in the base apache/spark image.\nARG spark_uid\nUSER ${spark_uid}\n"
  },
  {
    "path": "benchmarks/tpc/infra/docker/Dockerfile.build-comet",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Build a Comet JAR with native libraries for the current platform.\n#\n# This is useful on macOS (Apple Silicon) where the host-built JAR contains\n# darwin/aarch64 native libraries but Docker containers need linux/aarch64.\n#\n# Usage (from repository root):\n#   docker build -t comet-builder -f benchmarks/tpc/infra/docker/Dockerfile.build-comet .\n#   docker run --rm -v $(pwd)/output:/output comet-builder\n#\n# The JAR is copied to ./output/ on the host.\n\n# Use Ubuntu 20.04 to match the GLIBC version (2.31) in apache/spark images.\nFROM ubuntu:20.04 AS builder\n\nARG TARGETARCH\nENV DEBIAN_FRONTEND=noninteractive\n\n# Install build dependencies: Java 17, Maven wrapper prerequisites, GCC 11.\n# Ubuntu 20.04's default GCC 9 has a memcmp bug (GCC #95189) that breaks aws-lc-sys.\nRUN apt-get update && apt-get install -y --no-install-recommends \\\n        openjdk-17-jdk-headless \\\n        curl ca-certificates git pkg-config \\\n        libssl-dev unzip software-properties-common \\\n    && add-apt-repository -y ppa:ubuntu-toolchain-r/test \\\n    && apt-get update \\\n    && apt-get install -y --no-install-recommends gcc-11 g++-11 make \\\n    && update-alternatives --install /usr/bin/gcc gcc /usr/bin/gcc-11 110 \\\n    && update-alternatives --install /usr/bin/g++ g++ /usr/bin/g++-11 110 \\\n    && update-alternatives --install /usr/bin/cc cc /usr/bin/gcc-11 110 \\\n    && apt-get clean && rm -rf /var/lib/apt/lists/*\n\n# Install protoc 25.x (Ubuntu 22.04's protoc is too old for proto3 optional fields).\nARG PROTOC_VERSION=25.6\nRUN ARCH=$(uname -m) && \\\n    if [ \"$ARCH\" = \"aarch64\" ]; then PROTOC_ARCH=\"linux-aarch_64\"; \\\n    else PROTOC_ARCH=\"linux-x86_64\"; fi && \\\n    curl -sLO \"https://github.com/protocolbuffers/protobuf/releases/download/v${PROTOC_VERSION}/protoc-${PROTOC_VERSION}-${PROTOC_ARCH}.zip\" && \\\n    unzip -o \"protoc-${PROTOC_VERSION}-${PROTOC_ARCH}.zip\" -d /usr/local bin/protoc && \\\n    rm \"protoc-${PROTOC_VERSION}-${PROTOC_ARCH}.zip\" && \\\n    protoc --version\n\n# Set JAVA_HOME and LD_LIBRARY_PATH so the Rust build can find libjvm.\nRUN ln -s /usr/lib/jvm/java-17-openjdk-${TARGETARCH} /usr/lib/jvm/java-17-openjdk\nENV JAVA_HOME=/usr/lib/jvm/java-17-openjdk\nENV LD_LIBRARY_PATH=${JAVA_HOME}/lib/server\n\n# Install Rust.\nRUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y\nENV PATH=\"/root/.cargo/bin:${PATH}\"\n\nWORKDIR /build\n\n# Copy the full source tree.\nCOPY . 
.\n\n# Build native code + package the JAR (skip tests).\nRUN make release-nogit\n\n# The entrypoint copies the built JAR to /output (bind-mounted from host).\nRUN mkdir -p /output\nCMD [\"sh\", \"-c\", \"cp spark/target/comet-spark-spark3.5_2.12-*-SNAPSHOT.jar /output/ && echo 'Comet JAR copied to /output/' && ls -lh /output/*.jar\"]\n"
  },
  {
    "path": "benchmarks/tpc/infra/docker/docker-compose-laptop.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Lightweight Spark standalone cluster for TPC benchmarks on a laptop.\n#\n# Single worker, ~12 GB total memory. Suitable for SF1-SF10 testing.\n#\n# Usage:\n#   export COMET_JAR=/path/to/comet-spark-0.10.0.jar\n#   docker compose -f benchmarks/tpc/infra/docker/docker-compose-laptop.yml up -d\n#\n# Environment variables (set in .env or export before running):\n#   BENCH_IMAGE        - Docker image to use (default: comet-bench)\n#   DATA_DIR           - Host path to TPC data (default: /tmp/tpc-data)\n#   RESULTS_DIR        - Host path for results output (default: /tmp/bench-results)\n#   COMET_JAR          - Host path to Comet JAR\n#   GLUTEN_JAR         - Host path to Gluten JAR\n#   ICEBERG_JAR        - Host path to Iceberg Spark runtime JAR\n#   BENCH_JAVA_HOME    - Java home inside container (default: /usr/lib/jvm/java-17-openjdk)\n#                        Set to /usr/lib/jvm/java-8-openjdk for Gluten\n#   ASYNC_PROFILER_HOME - async-profiler install path (default: /opt/async-profiler)\n\nx-volumes: &volumes\n  - ${DATA_DIR:-/tmp/tpc-data}:/data:ro\n  - ${RESULTS_DIR:-/tmp/bench-results}:/results\n  - ${COMET_JAR:-/dev/null}:/jars/comet.jar:ro\n  - ${GLUTEN_JAR:-/dev/null}:/jars/gluten.jar:ro\n  - ${ICEBERG_JAR:-/dev/null}:/jars/iceberg.jar:ro\n  - ${RESULTS_DIR:-/tmp/bench-results}/logs:/opt/spark/logs\n  - ${RESULTS_DIR:-/tmp/bench-results}/work:/opt/spark/work\n\nservices:\n  spark-master:\n    image: ${BENCH_IMAGE:-comet-bench}\n    container_name: spark-master\n    hostname: spark-master\n    command: /opt/spark/sbin/start-master.sh --host spark-master\n    ports:\n      - \"7077:7077\"\n      - \"8080:8080\"\n    volumes: *volumes\n    environment:\n      - JAVA_HOME=${BENCH_JAVA_HOME:-/usr/lib/jvm/java-17-openjdk}\n      - SPARK_MASTER_HOST=spark-master\n      - SPARK_NO_DAEMONIZE=true\n\n  spark-worker-1:\n    image: ${BENCH_IMAGE:-comet-bench}\n    container_name: spark-worker-1\n    hostname: spark-worker-1\n    depends_on:\n      - spark-master\n    command: /opt/spark/sbin/start-worker.sh spark://spark-master:7077\n    ports:\n      - \"8081:8081\"\n    volumes: *volumes\n    environment:\n      - JAVA_HOME=${BENCH_JAVA_HOME:-/usr/lib/jvm/java-17-openjdk}\n      - SPARK_WORKER_CORES=4\n      - SPARK_WORKER_MEMORY=4g\n      - SPARK_NO_DAEMONIZE=true\n    mem_limit: 8g\n    memswap_limit: 8g\n    stop_grace_period: 30s\n\n  bench:\n    image: ${BENCH_IMAGE:-comet-bench}\n    container_name: bench-runner\n    depends_on:\n      - spark-master\n      - spark-worker-1\n    # Override 'command' to run a specific benchmark, e.g.:\n    #   docker compose run bench python3 /opt/benchmarks/run.py \\\n    #       --engine comet --benchmark tpch --no-restart\n    command: [\"echo\", 
\"Use 'docker compose run bench python3 /opt/benchmarks/run.py ...' to run benchmarks\"]\n    volumes: *volumes\n    environment:\n      - JAVA_HOME=${BENCH_JAVA_HOME:-/usr/lib/jvm/java-17-openjdk}\n      - SPARK_HOME=/opt/spark\n      - SPARK_MASTER=spark://spark-master:7077\n      - COMET_JAR=/jars/comet.jar\n      - GLUTEN_JAR=/jars/gluten.jar\n      - ICEBERG_JAR=/jars/iceberg.jar\n      - TPCH_DATA=/data\n      - TPCDS_DATA=/data\n      - SPARK_EVENT_LOG_DIR=/results/spark-events\n      - ASYNC_PROFILER_HOME=/opt/async-profiler\n    mem_limit: 4g\n    memswap_limit: 4g\n"
  },
  {
    "path": "benchmarks/tpc/infra/docker/docker-compose.yml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\n# Spark standalone cluster for TPC benchmarks.\n#\n# Two workers are used so that shuffles go through the network stack,\n# which better reflects real cluster behavior.\n#\n# Usage:\n#   export COMET_JAR=/path/to/comet-spark-0.10.0.jar\n#   docker compose -f benchmarks/tpc/infra/docker/docker-compose.yml up -d\n#\n# Environment variables (set in .env or export before running):\n#   BENCH_IMAGE        - Docker image to use (default: comet-bench)\n#   DATA_DIR           - Host path to TPC data (default: /tmp/tpc-data)\n#   RESULTS_DIR        - Host path for results output (default: /tmp/bench-results)\n#   COMET_JAR          - Host path to Comet JAR\n#   GLUTEN_JAR         - Host path to Gluten JAR\n#   ICEBERG_JAR        - Host path to Iceberg Spark runtime JAR\n#   WORKER_MEM_LIMIT   - Hard memory limit per worker container (default: 32g)\n#   BENCH_MEM_LIMIT    - Hard memory limit for the bench runner (default: 10g)\n#   BENCH_JAVA_HOME    - Java home inside container (default: /usr/lib/jvm/java-17-openjdk)\n#                        Set to /usr/lib/jvm/java-8-openjdk for Gluten\n#   ASYNC_PROFILER_HOME - async-profiler install path (default: /opt/async-profiler)\n\nx-volumes: &volumes\n  - ${DATA_DIR:-/tmp/tpc-data}:/data:ro\n  - ${RESULTS_DIR:-/tmp/bench-results}:/results\n  - ${COMET_JAR:-/dev/null}:/jars/comet.jar:ro\n  - ${GLUTEN_JAR:-/dev/null}:/jars/gluten.jar:ro\n  - ${ICEBERG_JAR:-/dev/null}:/jars/iceberg.jar:ro\n  - ${RESULTS_DIR:-/tmp/bench-results}/logs:/opt/spark/logs\n  - ${RESULTS_DIR:-/tmp/bench-results}/work:/opt/spark/work\n\nx-worker: &worker\n  image: ${BENCH_IMAGE:-comet-bench}\n  depends_on:\n    - spark-master\n  command: /opt/spark/sbin/start-worker.sh spark://spark-master:7077\n  volumes: *volumes\n  environment:\n    - JAVA_HOME=${BENCH_JAVA_HOME:-/usr/lib/jvm/java-17-openjdk}\n    - SPARK_WORKER_CORES=${WORKER_CORES:-8}\n    - SPARK_WORKER_MEMORY=${WORKER_MEMORY:-16g}\n    - SPARK_NO_DAEMONIZE=true\n  mem_limit: ${WORKER_MEM_LIMIT:-32g}\n  memswap_limit: ${WORKER_MEM_LIMIT:-32g}\n  stop_grace_period: 30s\n\nservices:\n  spark-master:\n    image: ${BENCH_IMAGE:-comet-bench}\n    container_name: spark-master\n    hostname: spark-master\n    command: /opt/spark/sbin/start-master.sh --host spark-master\n    ports:\n      - \"7077:7077\"\n      - \"8080:8080\"\n    volumes: *volumes\n    environment:\n      - JAVA_HOME=${BENCH_JAVA_HOME:-/usr/lib/jvm/java-17-openjdk}\n      - SPARK_MASTER_HOST=spark-master\n      - SPARK_NO_DAEMONIZE=true\n\n  spark-worker-1:\n    <<: *worker\n    container_name: spark-worker-1\n    hostname: spark-worker-1\n    ports:\n      - \"8081:8081\"\n\n  spark-worker-2:\n    <<: *worker\n    container_name: spark-worker-2\n    hostname: 
spark-worker-2\n    ports:\n      - \"8082:8081\"\n\n  bench:\n    image: ${BENCH_IMAGE:-comet-bench}\n    container_name: bench-runner\n    depends_on:\n      - spark-master\n      - spark-worker-1\n      - spark-worker-2\n    # Override 'command' to run a specific benchmark, e.g.:\n    #   docker compose run bench python3 /opt/benchmarks/run.py \\\n    #       --engine comet --benchmark tpch --no-restart\n    command: [\"echo\", \"Use 'docker compose run bench python3 /opt/benchmarks/run.py ...' to run benchmarks\"]\n    volumes: *volumes\n    environment:\n      - JAVA_HOME=${BENCH_JAVA_HOME:-/usr/lib/jvm/java-17-openjdk}\n      - SPARK_HOME=/opt/spark\n      - SPARK_MASTER=spark://spark-master:7077\n      - COMET_JAR=/jars/comet.jar\n      - GLUTEN_JAR=/jars/gluten.jar\n      - ICEBERG_JAR=/jars/iceberg.jar\n      - TPCH_DATA=/data\n      - TPCDS_DATA=/data\n      - SPARK_EVENT_LOG_DIR=/results/spark-events\n      - ASYNC_PROFILER_HOME=/opt/async-profiler\n    mem_limit: ${BENCH_MEM_LIMIT:-10g}\n    memswap_limit: ${BENCH_MEM_LIMIT:-10g}\n\n  # Uncomment to enable the Spark History Server for inspecting completed\n  # benchmark runs at http://localhost:18080.  Requires event logs in\n  # $RESULTS_DIR/spark-events (created by `mkdir -p $RESULTS_DIR/spark-events`\n  # before starting the cluster).\n  #\n  # history-server:\n  #   image: ${BENCH_IMAGE:-comet-bench}\n  #   container_name: spark-history\n  #   hostname: spark-history\n  #   command: /opt/spark/sbin/start-history-server.sh\n  #   ports:\n  #     - \"18080:18080\"\n  #   volumes:\n  #     - ${RESULTS_DIR:-/tmp/bench-results}:/results:ro\n  #   environment:\n  #     - JAVA_HOME=${BENCH_JAVA_HOME:-/usr/lib/jvm/java-17-openjdk}\n  #     - SPARK_HISTORY_OPTS=-Dspark.history.fs.logDirectory=/results/spark-events\n  #     - SPARK_NO_DAEMONIZE=true\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q1.sql",
    "content": "-- CometBench-DS query 1 derived from TPC-DS query 1 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith customer_total_return as\n(select sr_customer_sk as ctr_customer_sk\n,sr_store_sk as ctr_store_sk\n,sum(SR_RETURN_AMT_INC_TAX) as ctr_total_return\nfrom store_returns\n,date_dim\nwhere sr_returned_date_sk = d_date_sk\nand d_year =1999\ngroup by sr_customer_sk\n,sr_store_sk)\n select  c_customer_id\nfrom customer_total_return ctr1\n,store\n,customer\nwhere ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2\nfrom customer_total_return ctr2\nwhere ctr1.ctr_store_sk = ctr2.ctr_store_sk)\nand s_store_sk = ctr1.ctr_store_sk\nand s_state = 'TN'\nand ctr1.ctr_customer_sk = c_customer_sk\norder by c_customer_id\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q10.sql",
    "content": "-- CometBench-DS query 10 derived from TPC-DS query 10 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n  cd_gender,\n  cd_marital_status,\n  cd_education_status,\n  count(*) cnt1,\n  cd_purchase_estimate,\n  count(*) cnt2,\n  cd_credit_rating,\n  count(*) cnt3,\n  cd_dep_count,\n  count(*) cnt4,\n  cd_dep_employed_count,\n  count(*) cnt5,\n  cd_dep_college_count,\n  count(*) cnt6\n from\n  customer c,customer_address ca,customer_demographics\n where\n  c.c_current_addr_sk = ca.ca_address_sk and\n  ca_county in ('Clinton County','Platte County','Franklin County','Louisa County','Harmon County') and\n  cd_demo_sk = c.c_current_cdemo_sk and \n  exists (select *\n          from store_sales,date_dim\n          where c.c_customer_sk = ss_customer_sk and\n                ss_sold_date_sk = d_date_sk and\n                d_year = 2002 and\n                d_moy between 3 and 3+3) and\n   (exists (select *\n            from web_sales,date_dim\n            where c.c_customer_sk = ws_bill_customer_sk and\n                  ws_sold_date_sk = d_date_sk and\n                  d_year = 2002 and\n                  d_moy between 3 ANd 3+3) or \n    exists (select * \n            from catalog_sales,date_dim\n            where c.c_customer_sk = cs_ship_customer_sk and\n                  cs_sold_date_sk = d_date_sk and\n                  d_year = 2002 and\n                  d_moy between 3 and 3+3))\n group by cd_gender,\n          cd_marital_status,\n          cd_education_status,\n          cd_purchase_estimate,\n          cd_credit_rating,\n          cd_dep_count,\n          cd_dep_employed_count,\n          cd_dep_college_count\n order by cd_gender,\n          cd_marital_status,\n          cd_education_status,\n          cd_purchase_estimate,\n          cd_credit_rating,\n          cd_dep_count,\n          cd_dep_employed_count,\n          cd_dep_college_count\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q11.sql",
    "content": "-- CometBench-DS query 11 derived from TPC-DS query 11 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith year_total as (\n select c_customer_id customer_id\n       ,c_first_name customer_first_name\n       ,c_last_name customer_last_name\n       ,c_preferred_cust_flag customer_preferred_cust_flag\n       ,c_birth_country customer_birth_country\n       ,c_login customer_login\n       ,c_email_address customer_email_address\n       ,d_year dyear\n       ,sum(ss_ext_list_price-ss_ext_discount_amt) year_total\n       ,'s' sale_type\n from customer\n     ,store_sales\n     ,date_dim\n where c_customer_sk = ss_customer_sk\n   and ss_sold_date_sk = d_date_sk\n group by c_customer_id\n         ,c_first_name\n         ,c_last_name\n         ,c_preferred_cust_flag \n         ,c_birth_country\n         ,c_login\n         ,c_email_address\n         ,d_year \n union all\n select c_customer_id customer_id\n       ,c_first_name customer_first_name\n       ,c_last_name customer_last_name\n       ,c_preferred_cust_flag customer_preferred_cust_flag\n       ,c_birth_country customer_birth_country\n       ,c_login customer_login\n       ,c_email_address customer_email_address\n       ,d_year dyear\n       ,sum(ws_ext_list_price-ws_ext_discount_amt) year_total\n       ,'w' sale_type\n from customer\n     ,web_sales\n     ,date_dim\n where c_customer_sk = ws_bill_customer_sk\n   and ws_sold_date_sk = d_date_sk\n group by c_customer_id\n         ,c_first_name\n         ,c_last_name\n         ,c_preferred_cust_flag \n         ,c_birth_country\n         ,c_login\n         ,c_email_address\n         ,d_year\n         )\n  select  \n                  t_s_secyear.customer_id\n                 ,t_s_secyear.customer_first_name\n                 ,t_s_secyear.customer_last_name\n                 ,t_s_secyear.customer_email_address\n from year_total t_s_firstyear\n     ,year_total t_s_secyear\n     ,year_total t_w_firstyear\n     ,year_total t_w_secyear\n where t_s_secyear.customer_id = t_s_firstyear.customer_id\n         and t_s_firstyear.customer_id = t_w_secyear.customer_id\n         and t_s_firstyear.customer_id = t_w_firstyear.customer_id\n         and t_s_firstyear.sale_type = 's'\n         and t_w_firstyear.sale_type = 'w'\n         and t_s_secyear.sale_type = 's'\n         and t_w_secyear.sale_type = 'w'\n         and t_s_firstyear.dyear = 1999\n         and t_s_secyear.dyear = 1999+1\n         and t_w_firstyear.dyear = 1999\n         and t_w_secyear.dyear = 1999+1\n         and t_s_firstyear.year_total > 0\n         and t_w_firstyear.year_total > 0\n         and case when t_w_firstyear.year_total > 0 then t_w_secyear.year_total / t_w_firstyear.year_total else 0.0 end\n             > case when t_s_firstyear.year_total > 0 then t_s_secyear.year_total / t_s_firstyear.year_total else 0.0 end\n order by t_s_secyear.customer_id\n         ,t_s_secyear.customer_first_name\n         ,t_s_secyear.customer_last_name\n         ,t_s_secyear.customer_email_address\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q12.sql",
    "content": "-- CometBench-DS query 12 derived from TPC-DS query 12 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_item_id\n      ,i_item_desc \n      ,i_category \n      ,i_class \n      ,i_current_price\n      ,sum(ws_ext_sales_price) as itemrevenue \n      ,sum(ws_ext_sales_price)*100/sum(sum(ws_ext_sales_price)) over\n          (partition by i_class) as revenueratio\nfrom\t\n\tweb_sales\n    \t,item \n    \t,date_dim\nwhere \n\tws_item_sk = i_item_sk \n  \tand i_category in ('Jewelry', 'Books', 'Women')\n  \tand ws_sold_date_sk = d_date_sk\n\tand d_date between cast('2002-03-22' as date) \n\t\t\t\tand (cast('2002-03-22' as date) + INTERVAL '30 DAYS')\ngroup by \n\ti_item_id\n        ,i_item_desc \n        ,i_category\n        ,i_class\n        ,i_current_price\norder by \n\ti_category\n        ,i_class\n        ,i_item_id\n        ,i_item_desc\n        ,revenueratio\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q13.sql",
    "content": "-- CometBench-DS query 13 derived from TPC-DS query 13 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect avg(ss_quantity)\n       ,avg(ss_ext_sales_price)\n       ,avg(ss_ext_wholesale_cost)\n       ,sum(ss_ext_wholesale_cost)\n from store_sales\n     ,store\n     ,customer_demographics\n     ,household_demographics\n     ,customer_address\n     ,date_dim\n where s_store_sk = ss_store_sk\n and  ss_sold_date_sk = d_date_sk and d_year = 2001\n and((ss_hdemo_sk=hd_demo_sk\n  and cd_demo_sk = ss_cdemo_sk\n  and cd_marital_status = 'U'\n  and cd_education_status = '4 yr Degree'\n  and ss_sales_price between 100.00 and 150.00\n  and hd_dep_count = 3   \n     )or\n     (ss_hdemo_sk=hd_demo_sk\n  and cd_demo_sk = ss_cdemo_sk\n  and cd_marital_status = 'S'\n  and cd_education_status = 'Unknown'\n  and ss_sales_price between 50.00 and 100.00   \n  and hd_dep_count = 1\n     ) or \n     (ss_hdemo_sk=hd_demo_sk\n  and cd_demo_sk = ss_cdemo_sk\n  and cd_marital_status = 'D'\n  and cd_education_status = '2 yr Degree'\n  and ss_sales_price between 150.00 and 200.00 \n  and hd_dep_count = 1  \n     ))\n and((ss_addr_sk = ca_address_sk\n  and ca_country = 'United States'\n  and ca_state in ('CO', 'MI', 'MN')\n  and ss_net_profit between 100 and 200  \n     ) or\n     (ss_addr_sk = ca_address_sk\n  and ca_country = 'United States'\n  and ca_state in ('NC', 'NY', 'TX')\n  and ss_net_profit between 150 and 300  \n     ) or\n     (ss_addr_sk = ca_address_sk\n  and ca_country = 'United States'\n  and ca_state in ('CA', 'NE', 'TN')\n  and ss_net_profit between 50 and 250  \n     ))\n;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q14.sql",
    "content": "-- CometBench-DS query 14 derived from TPC-DS query 14 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith  cross_items as\n (select i_item_sk ss_item_sk\n from item,\n (select iss.i_brand_id brand_id\n     ,iss.i_class_id class_id\n     ,iss.i_category_id category_id\n from store_sales\n     ,item iss\n     ,date_dim d1\n where ss_item_sk = iss.i_item_sk\n   and ss_sold_date_sk = d1.d_date_sk\n   and d1.d_year between 1999 AND 1999 + 2\n intersect \n select ics.i_brand_id\n     ,ics.i_class_id\n     ,ics.i_category_id\n from catalog_sales\n     ,item ics\n     ,date_dim d2\n where cs_item_sk = ics.i_item_sk\n   and cs_sold_date_sk = d2.d_date_sk\n   and d2.d_year between 1999 AND 1999 + 2\n intersect\n select iws.i_brand_id\n     ,iws.i_class_id\n     ,iws.i_category_id\n from web_sales\n     ,item iws\n     ,date_dim d3\n where ws_item_sk = iws.i_item_sk\n   and ws_sold_date_sk = d3.d_date_sk\n   and d3.d_year between 1999 AND 1999 + 2)\n where i_brand_id = brand_id\n      and i_class_id = class_id\n      and i_category_id = category_id\n),\n avg_sales as\n (select avg(quantity*list_price) average_sales\n  from (select ss_quantity quantity\n             ,ss_list_price list_price\n       from store_sales\n           ,date_dim\n       where ss_sold_date_sk = d_date_sk\n         and d_year between 1999 and 1999 + 2\n       union all \n       select cs_quantity quantity \n             ,cs_list_price list_price\n       from catalog_sales\n           ,date_dim\n       where cs_sold_date_sk = d_date_sk\n         and d_year between 1999 and 1999 + 2 \n       union all\n       select ws_quantity quantity\n             ,ws_list_price list_price\n       from web_sales\n           ,date_dim\n       where ws_sold_date_sk = d_date_sk\n         and d_year between 1999 and 1999 + 2) x)\n  select  channel, i_brand_id,i_class_id,i_category_id,sum(sales), sum(number_sales)\n from(\n       select 'store' channel, i_brand_id,i_class_id\n             ,i_category_id,sum(ss_quantity*ss_list_price) sales\n             , count(*) number_sales\n       from store_sales\n           ,item\n           ,date_dim\n       where ss_item_sk in (select ss_item_sk from cross_items)\n         and ss_item_sk = i_item_sk\n         and ss_sold_date_sk = d_date_sk\n         and d_year = 1999+2 \n         and d_moy = 11\n       group by i_brand_id,i_class_id,i_category_id\n       having sum(ss_quantity*ss_list_price) > (select average_sales from avg_sales)\n       union all\n       select 'catalog' channel, i_brand_id,i_class_id,i_category_id, sum(cs_quantity*cs_list_price) sales, count(*) number_sales\n       from catalog_sales\n           ,item\n           ,date_dim\n       where cs_item_sk in (select ss_item_sk from cross_items)\n         and cs_item_sk = i_item_sk\n         and cs_sold_date_sk = d_date_sk\n         and d_year = 1999+2 \n         and d_moy = 11\n       group by i_brand_id,i_class_id,i_category_id\n       having sum(cs_quantity*cs_list_price) > (select average_sales from avg_sales)\n       union all\n       select 'web' channel, i_brand_id,i_class_id,i_category_id, sum(ws_quantity*ws_list_price) sales , count(*) number_sales\n       from web_sales\n           ,item\n           ,date_dim\n       where ws_item_sk in (select ss_item_sk from cross_items)\n         and ws_item_sk = i_item_sk\n         and ws_sold_date_sk = d_date_sk\n         and d_year = 1999+2\n         and 
d_moy = 11\n       group by i_brand_id,i_class_id,i_category_id\n       having sum(ws_quantity*ws_list_price) > (select average_sales from avg_sales)\n ) y\n group by rollup (channel, i_brand_id,i_class_id,i_category_id)\n order by channel,i_brand_id,i_class_id,i_category_id\n  LIMIT 100;\nwith  cross_items as\n (select i_item_sk ss_item_sk\n from item,\n (select iss.i_brand_id brand_id\n     ,iss.i_class_id class_id\n     ,iss.i_category_id category_id\n from store_sales\n     ,item iss\n     ,date_dim d1\n where ss_item_sk = iss.i_item_sk\n   and ss_sold_date_sk = d1.d_date_sk\n   and d1.d_year between 1999 AND 1999 + 2\n intersect\n select ics.i_brand_id\n     ,ics.i_class_id\n     ,ics.i_category_id\n from catalog_sales\n     ,item ics\n     ,date_dim d2\n where cs_item_sk = ics.i_item_sk\n   and cs_sold_date_sk = d2.d_date_sk\n   and d2.d_year between 1999 AND 1999 + 2\n intersect\n select iws.i_brand_id\n     ,iws.i_class_id\n     ,iws.i_category_id\n from web_sales\n     ,item iws\n     ,date_dim d3\n where ws_item_sk = iws.i_item_sk\n   and ws_sold_date_sk = d3.d_date_sk\n   and d3.d_year between 1999 AND 1999 + 2) x\n where i_brand_id = brand_id\n      and i_class_id = class_id\n      and i_category_id = category_id\n),\n avg_sales as\n(select avg(quantity*list_price) average_sales\n  from (select ss_quantity quantity\n             ,ss_list_price list_price\n       from store_sales\n           ,date_dim\n       where ss_sold_date_sk = d_date_sk\n         and d_year between 1999 and 1999 + 2\n       union all\n       select cs_quantity quantity\n             ,cs_list_price list_price\n       from catalog_sales\n           ,date_dim\n       where cs_sold_date_sk = d_date_sk\n         and d_year between 1999 and 1999 + 2\n       union all\n       select ws_quantity quantity\n             ,ws_list_price list_price\n       from web_sales\n           ,date_dim\n       where ws_sold_date_sk = d_date_sk\n         and d_year between 1999 and 1999 + 2) x)\n  select  this_year.channel ty_channel\n                           ,this_year.i_brand_id ty_brand\n                           ,this_year.i_class_id ty_class\n                           ,this_year.i_category_id ty_category\n                           ,this_year.sales ty_sales\n                           ,this_year.number_sales ty_number_sales\n                           ,last_year.channel ly_channel\n                           ,last_year.i_brand_id ly_brand\n                           ,last_year.i_class_id ly_class\n                           ,last_year.i_category_id ly_category\n                           ,last_year.sales ly_sales\n                           ,last_year.number_sales ly_number_sales \n from\n (select 'store' channel, i_brand_id,i_class_id,i_category_id\n        ,sum(ss_quantity*ss_list_price) sales, count(*) number_sales\n from store_sales \n     ,item\n     ,date_dim\n where ss_item_sk in (select ss_item_sk from cross_items)\n   and ss_item_sk = i_item_sk\n   and ss_sold_date_sk = d_date_sk\n   and d_week_seq = (select d_week_seq\n                     from date_dim\n                     where d_year = 1999 + 1\n                       and d_moy = 12\n                       and d_dom = 14)\n group by i_brand_id,i_class_id,i_category_id\n having sum(ss_quantity*ss_list_price) > (select average_sales from avg_sales)) this_year,\n (select 'store' channel, i_brand_id,i_class_id\n        ,i_category_id, sum(ss_quantity*ss_list_price) sales, count(*) number_sales\n from store_sales\n     ,item\n     ,date_dim\n where ss_item_sk in 
(select ss_item_sk from cross_items)\n   and ss_item_sk = i_item_sk\n   and ss_sold_date_sk = d_date_sk\n   and d_week_seq = (select d_week_seq\n                     from date_dim\n                     where d_year = 1999\n                       and d_moy = 12\n                       and d_dom = 14)\n group by i_brand_id,i_class_id,i_category_id\n having sum(ss_quantity*ss_list_price) > (select average_sales from avg_sales)) last_year\n where this_year.i_brand_id= last_year.i_brand_id\n   and this_year.i_class_id = last_year.i_class_id\n   and this_year.i_category_id = last_year.i_category_id\n order by this_year.channel, this_year.i_brand_id, this_year.i_class_id, this_year.i_category_id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q15.sql",
    "content": "-- CometBench-DS query 15 derived from TPC-DS query 15 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  ca_zip\n       ,sum(cs_sales_price)\n from catalog_sales\n     ,customer\n     ,customer_address\n     ,date_dim\n where cs_bill_customer_sk = c_customer_sk\n \tand c_current_addr_sk = ca_address_sk \n \tand ( substr(ca_zip,1,5) in ('85669', '86197','88274','83405','86475',\n                                   '85392', '85460', '80348', '81792')\n \t      or ca_state in ('CA','WA','GA')\n \t      or cs_sales_price > 500)\n \tand cs_sold_date_sk = d_date_sk\n \tand d_qoy = 2 and d_year = 2002\n group by ca_zip\n order by ca_zip\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q16.sql",
    "content": "-- CometBench-DS query 16 derived from TPC-DS query 16 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n   count(distinct cs_order_number) as `order count`\n  ,sum(cs_ext_ship_cost) as `total shipping cost`\n  ,sum(cs_net_profit) as `total net profit`\nfrom\n   catalog_sales cs1\n  ,date_dim\n  ,customer_address\n  ,call_center\nwhere\n    d_date between '1999-5-01' and \n           (cast('1999-5-01' as date) + INTERVAL '60 DAYS')\nand cs1.cs_ship_date_sk = d_date_sk\nand cs1.cs_ship_addr_sk = ca_address_sk\nand ca_state = 'ID'\nand cs1.cs_call_center_sk = cc_call_center_sk\nand cc_county in ('Williamson County','Williamson County','Williamson County','Williamson County',\n                  'Williamson County'\n)\nand exists (select *\n            from catalog_sales cs2\n            where cs1.cs_order_number = cs2.cs_order_number\n              and cs1.cs_warehouse_sk <> cs2.cs_warehouse_sk)\nand not exists(select *\n               from catalog_returns cr1\n               where cs1.cs_order_number = cr1.cr_order_number)\norder by count(distinct cs_order_number)\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q17.sql",
    "content": "-- CometBench-DS query 17 derived from TPC-DS query 17 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_item_id\n       ,i_item_desc\n       ,s_state\n       ,count(ss_quantity) as store_sales_quantitycount\n       ,avg(ss_quantity) as store_sales_quantityave\n       ,stddev_samp(ss_quantity) as store_sales_quantitystdev\n       ,stddev_samp(ss_quantity)/avg(ss_quantity) as store_sales_quantitycov\n       ,count(sr_return_quantity) as store_returns_quantitycount\n       ,avg(sr_return_quantity) as store_returns_quantityave\n       ,stddev_samp(sr_return_quantity) as store_returns_quantitystdev\n       ,stddev_samp(sr_return_quantity)/avg(sr_return_quantity) as store_returns_quantitycov\n       ,count(cs_quantity) as catalog_sales_quantitycount ,avg(cs_quantity) as catalog_sales_quantityave\n       ,stddev_samp(cs_quantity) as catalog_sales_quantitystdev\n       ,stddev_samp(cs_quantity)/avg(cs_quantity) as catalog_sales_quantitycov\n from store_sales\n     ,store_returns\n     ,catalog_sales\n     ,date_dim d1\n     ,date_dim d2\n     ,date_dim d3\n     ,store\n     ,item\n where d1.d_quarter_name = '1999Q1'\n   and d1.d_date_sk = ss_sold_date_sk\n   and i_item_sk = ss_item_sk\n   and s_store_sk = ss_store_sk\n   and ss_customer_sk = sr_customer_sk\n   and ss_item_sk = sr_item_sk\n   and ss_ticket_number = sr_ticket_number\n   and sr_returned_date_sk = d2.d_date_sk\n   and d2.d_quarter_name in ('1999Q1','1999Q2','1999Q3')\n   and sr_customer_sk = cs_bill_customer_sk\n   and sr_item_sk = cs_item_sk\n   and cs_sold_date_sk = d3.d_date_sk\n   and d3.d_quarter_name in ('1999Q1','1999Q2','1999Q3')\n group by i_item_id\n         ,i_item_desc\n         ,s_state\n order by i_item_id\n         ,i_item_desc\n         ,s_state\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q18.sql",
    "content": "-- CometBench-DS query 18 derived from TPC-DS query 18 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_item_id,\n        ca_country,\n        ca_state, \n        ca_county,\n        avg( cast(cs_quantity as decimal(12,2))) agg1,\n        avg( cast(cs_list_price as decimal(12,2))) agg2,\n        avg( cast(cs_coupon_amt as decimal(12,2))) agg3,\n        avg( cast(cs_sales_price as decimal(12,2))) agg4,\n        avg( cast(cs_net_profit as decimal(12,2))) agg5,\n        avg( cast(c_birth_year as decimal(12,2))) agg6,\n        avg( cast(cd1.cd_dep_count as decimal(12,2))) agg7\n from catalog_sales, customer_demographics cd1, \n      customer_demographics cd2, customer, customer_address, date_dim, item\n where cs_sold_date_sk = d_date_sk and\n       cs_item_sk = i_item_sk and\n       cs_bill_cdemo_sk = cd1.cd_demo_sk and\n       cs_bill_customer_sk = c_customer_sk and\n       cd1.cd_gender = 'M' and \n       cd1.cd_education_status = 'Primary' and\n       c_current_cdemo_sk = cd2.cd_demo_sk and\n       c_current_addr_sk = ca_address_sk and\n       c_birth_month in (1,2,9,5,11,3) and\n       d_year = 1998 and\n       ca_state in ('MS','NE','IA'\n                   ,'MI','GA','NY','CO')\n group by rollup (i_item_id, ca_country, ca_state, ca_county)\n order by ca_country,\n        ca_state, \n        ca_county,\n\ti_item_id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q19.sql",
    "content": "-- CometBench-DS query 19 derived from TPC-DS query 19 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_brand_id brand_id, i_brand brand, i_manufact_id, i_manufact,\n \tsum(ss_ext_sales_price) ext_price\n from date_dim, store_sales, item,customer,customer_address,store\n where d_date_sk = ss_sold_date_sk\n   and ss_item_sk = i_item_sk\n   and i_manager_id=8\n   and d_moy=11\n   and d_year=1999\n   and ss_customer_sk = c_customer_sk \n   and c_current_addr_sk = ca_address_sk\n   and substr(ca_zip,1,5) <> substr(s_zip,1,5) \n   and ss_store_sk = s_store_sk \n group by i_brand\n      ,i_brand_id\n      ,i_manufact_id\n      ,i_manufact\n order by ext_price desc\n         ,i_brand\n         ,i_brand_id\n         ,i_manufact_id\n         ,i_manufact\n LIMIT 100 ;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q2.sql",
    "content": "-- CometBench-DS query 2 derived from TPC-DS query 2 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith wscs as\n (select sold_date_sk\n        ,sales_price\n  from (select ws_sold_date_sk sold_date_sk\n              ,ws_ext_sales_price sales_price\n        from web_sales \n        union all\n        select cs_sold_date_sk sold_date_sk\n              ,cs_ext_sales_price sales_price\n        from catalog_sales)),\n wswscs as \n (select d_week_seq,\n        sum(case when (d_day_name='Sunday') then sales_price else null end) sun_sales,\n        sum(case when (d_day_name='Monday') then sales_price else null end) mon_sales,\n        sum(case when (d_day_name='Tuesday') then sales_price else  null end) tue_sales,\n        sum(case when (d_day_name='Wednesday') then sales_price else null end) wed_sales,\n        sum(case when (d_day_name='Thursday') then sales_price else null end) thu_sales,\n        sum(case when (d_day_name='Friday') then sales_price else null end) fri_sales,\n        sum(case when (d_day_name='Saturday') then sales_price else null end) sat_sales\n from wscs\n     ,date_dim\n where d_date_sk = sold_date_sk\n group by d_week_seq)\n select d_week_seq1\n       ,round(sun_sales1/sun_sales2,2)\n       ,round(mon_sales1/mon_sales2,2)\n       ,round(tue_sales1/tue_sales2,2)\n       ,round(wed_sales1/wed_sales2,2)\n       ,round(thu_sales1/thu_sales2,2)\n       ,round(fri_sales1/fri_sales2,2)\n       ,round(sat_sales1/sat_sales2,2)\n from\n (select wswscs.d_week_seq d_week_seq1\n        ,sun_sales sun_sales1\n        ,mon_sales mon_sales1\n        ,tue_sales tue_sales1\n        ,wed_sales wed_sales1\n        ,thu_sales thu_sales1\n        ,fri_sales fri_sales1\n        ,sat_sales sat_sales1\n  from wswscs,date_dim \n  where date_dim.d_week_seq = wswscs.d_week_seq and\n        d_year = 2000) y,\n (select wswscs.d_week_seq d_week_seq2\n        ,sun_sales sun_sales2\n        ,mon_sales mon_sales2\n        ,tue_sales tue_sales2\n        ,wed_sales wed_sales2\n        ,thu_sales thu_sales2\n        ,fri_sales fri_sales2\n        ,sat_sales sat_sales2\n  from wswscs\n      ,date_dim \n  where date_dim.d_week_seq = wswscs.d_week_seq and\n        d_year = 2000+1) z\n where d_week_seq1=d_week_seq2-53\n order by d_week_seq1;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q20.sql",
    "content": "-- CometBench-DS query 20 derived from TPC-DS query 20 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_item_id\n       ,i_item_desc \n       ,i_category \n       ,i_class \n       ,i_current_price\n       ,sum(cs_ext_sales_price) as itemrevenue \n       ,sum(cs_ext_sales_price)*100/sum(sum(cs_ext_sales_price)) over\n           (partition by i_class) as revenueratio\n from\tcatalog_sales\n     ,item \n     ,date_dim\n where cs_item_sk = i_item_sk \n   and i_category in ('Children', 'Sports', 'Music')\n   and cs_sold_date_sk = d_date_sk\n and d_date between cast('2002-04-01' as date) \n \t\t\t\tand (cast('2002-04-01' as date) + INTERVAL '30 DAYS')\n group by i_item_id\n         ,i_item_desc \n         ,i_category\n         ,i_class\n         ,i_current_price\n order by i_category\n         ,i_class\n         ,i_item_id\n         ,i_item_desc\n         ,revenueratio\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q21.sql",
    "content": "-- CometBench-DS query 21 derived from TPC-DS query 21 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  *\n from(select w_warehouse_name\n            ,i_item_id\n            ,sum(case when (cast(d_date as date) < cast ('2000-05-19' as date))\n\t                then inv_quantity_on_hand \n                      else 0 end) as inv_before\n            ,sum(case when (cast(d_date as date) >= cast ('2000-05-19' as date))\n                      then inv_quantity_on_hand \n                      else 0 end) as inv_after\n   from inventory\n       ,warehouse\n       ,item\n       ,date_dim\n   where i_current_price between 0.99 and 1.49\n     and i_item_sk          = inv_item_sk\n     and inv_warehouse_sk   = w_warehouse_sk\n     and inv_date_sk    = d_date_sk\n     and d_date between (cast ('2000-05-19' as date) - INTERVAL '30 DAYS')\n                    and (cast ('2000-05-19' as date) + INTERVAL '30 DAYS')\n   group by w_warehouse_name, i_item_id) x\n where (case when inv_before > 0 \n             then inv_after / inv_before \n             else null\n             end) between 2.0/3.0 and 3.0/2.0\n order by w_warehouse_name\n         ,i_item_id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q22.sql",
    "content": "-- CometBench-DS query 22 derived from TPC-DS query 22 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_product_name\n             ,i_brand\n             ,i_class\n             ,i_category\n             ,avg(inv_quantity_on_hand) qoh\n       from inventory\n           ,date_dim\n           ,item\n       where inv_date_sk=d_date_sk\n              and inv_item_sk=i_item_sk\n              and d_month_seq between 1201 and 1201 + 11\n       group by rollup(i_product_name\n                       ,i_brand\n                       ,i_class\n                       ,i_category)\norder by qoh, i_product_name, i_brand, i_class, i_category\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q23.sql",
    "content": "-- CometBench-DS query 23 derived from TPC-DS query 23 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith frequent_ss_items as \n (select substr(i_item_desc,1,30) itemdesc,i_item_sk item_sk,d_date solddate,count(*) cnt\n  from store_sales\n      ,date_dim \n      ,item\n  where ss_sold_date_sk = d_date_sk\n    and ss_item_sk = i_item_sk \n    and d_year in (2000,2000+1,2000+2,2000+3)\n  group by substr(i_item_desc,1,30),i_item_sk,d_date\n  having count(*) >4),\n max_store_sales as\n (select max(csales) tpcds_cmax \n  from (select c_customer_sk,sum(ss_quantity*ss_sales_price) csales\n        from store_sales\n            ,customer\n            ,date_dim \n        where ss_customer_sk = c_customer_sk\n         and ss_sold_date_sk = d_date_sk\n         and d_year in (2000,2000+1,2000+2,2000+3) \n        group by c_customer_sk)),\n best_ss_customer as\n (select c_customer_sk,sum(ss_quantity*ss_sales_price) ssales\n  from store_sales\n      ,customer\n  where ss_customer_sk = c_customer_sk\n  group by c_customer_sk\n  having sum(ss_quantity*ss_sales_price) > (95/100.0) * (select\n  *\nfrom\n max_store_sales))\n  select  sum(sales)\n from (select cs_quantity*cs_list_price sales\n       from catalog_sales\n           ,date_dim \n       where d_year = 2000 \n         and d_moy = 3 \n         and cs_sold_date_sk = d_date_sk \n         and cs_item_sk in (select item_sk from frequent_ss_items)\n         and cs_bill_customer_sk in (select c_customer_sk from best_ss_customer)\n      union all\n      select ws_quantity*ws_list_price sales\n       from web_sales \n           ,date_dim \n       where d_year = 2000 \n         and d_moy = 3 \n         and ws_sold_date_sk = d_date_sk \n         and ws_item_sk in (select item_sk from frequent_ss_items)\n         and ws_bill_customer_sk in (select c_customer_sk from best_ss_customer)) \n  LIMIT 100;\nwith frequent_ss_items as\n (select substr(i_item_desc,1,30) itemdesc,i_item_sk item_sk,d_date solddate,count(*) cnt\n  from store_sales\n      ,date_dim\n      ,item\n  where ss_sold_date_sk = d_date_sk\n    and ss_item_sk = i_item_sk\n    and d_year in (2000,2000 + 1,2000 + 2,2000 + 3)\n  group by substr(i_item_desc,1,30),i_item_sk,d_date\n  having count(*) >4),\n max_store_sales as\n (select max(csales) tpcds_cmax\n  from (select c_customer_sk,sum(ss_quantity*ss_sales_price) csales\n        from store_sales\n            ,customer\n            ,date_dim \n        where ss_customer_sk = c_customer_sk\n         and ss_sold_date_sk = d_date_sk\n         and d_year in (2000,2000+1,2000+2,2000+3)\n        group by c_customer_sk)),\n best_ss_customer as\n (select c_customer_sk,sum(ss_quantity*ss_sales_price) ssales\n  from store_sales\n      ,customer\n  where ss_customer_sk = c_customer_sk\n  group by c_customer_sk\n  having sum(ss_quantity*ss_sales_price) > (95/100.0) * (select\n  *\n from max_store_sales))\n  select  c_last_name,c_first_name,sales\n from (select c_last_name,c_first_name,sum(cs_quantity*cs_list_price) sales\n        from catalog_sales\n            ,customer\n            ,date_dim \n        where d_year = 2000 \n         and d_moy = 3 \n         and cs_sold_date_sk = d_date_sk \n         and cs_item_sk in (select item_sk from frequent_ss_items)\n         and cs_bill_customer_sk in (select c_customer_sk from best_ss_customer)\n         and cs_bill_customer_sk = c_customer_sk \n       group by 
c_last_name,c_first_name\n      union all\n      select c_last_name,c_first_name,sum(ws_quantity*ws_list_price) sales\n       from web_sales\n           ,customer\n           ,date_dim \n       where d_year = 2000 \n         and d_moy = 3 \n         and ws_sold_date_sk = d_date_sk \n         and ws_item_sk in (select item_sk from frequent_ss_items)\n         and ws_bill_customer_sk in (select c_customer_sk from best_ss_customer)\n         and ws_bill_customer_sk = c_customer_sk\n       group by c_last_name,c_first_name) \n     order by c_last_name,c_first_name,sales\n   LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q24.sql",
    "content": "-- CometBench-DS query 24 derived from TPC-DS query 24 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ssales as\n(select c_last_name\n      ,c_first_name\n      ,s_store_name\n      ,ca_state\n      ,s_state\n      ,i_color\n      ,i_current_price\n      ,i_manager_id\n      ,i_units\n      ,i_size\n      ,sum(ss_net_profit) netpaid\nfrom store_sales\n    ,store_returns\n    ,store\n    ,item\n    ,customer\n    ,customer_address\nwhere ss_ticket_number = sr_ticket_number\n  and ss_item_sk = sr_item_sk\n  and ss_customer_sk = c_customer_sk\n  and ss_item_sk = i_item_sk\n  and ss_store_sk = s_store_sk\n  and c_current_addr_sk = ca_address_sk\n  and c_birth_country <> upper(ca_country)\n  and s_zip = ca_zip\nand s_market_id=10\ngroup by c_last_name\n        ,c_first_name\n        ,s_store_name\n        ,ca_state\n        ,s_state\n        ,i_color\n        ,i_current_price\n        ,i_manager_id\n        ,i_units\n        ,i_size)\nselect c_last_name\n      ,c_first_name\n      ,s_store_name\n      ,sum(netpaid) paid\nfrom ssales\nwhere i_color = 'orchid'\ngroup by c_last_name\n        ,c_first_name\n        ,s_store_name\nhaving sum(netpaid) > (select 0.05*avg(netpaid)\n                                 from ssales)\norder by c_last_name\n        ,c_first_name\n        ,s_store_name\n;\nwith ssales as\n(select c_last_name\n      ,c_first_name\n      ,s_store_name\n      ,ca_state\n      ,s_state\n      ,i_color\n      ,i_current_price\n      ,i_manager_id\n      ,i_units\n      ,i_size\n      ,sum(ss_net_profit) netpaid\nfrom store_sales\n    ,store_returns\n    ,store\n    ,item\n    ,customer\n    ,customer_address\nwhere ss_ticket_number = sr_ticket_number\n  and ss_item_sk = sr_item_sk\n  and ss_customer_sk = c_customer_sk\n  and ss_item_sk = i_item_sk\n  and ss_store_sk = s_store_sk\n  and c_current_addr_sk = ca_address_sk\n  and c_birth_country <> upper(ca_country)\n  and s_zip = ca_zip\n  and s_market_id = 10\ngroup by c_last_name\n        ,c_first_name\n        ,s_store_name\n        ,ca_state\n        ,s_state\n        ,i_color\n        ,i_current_price\n        ,i_manager_id\n        ,i_units\n        ,i_size)\nselect c_last_name\n      ,c_first_name\n      ,s_store_name\n      ,sum(netpaid) paid\nfrom ssales\nwhere i_color = 'green'\ngroup by c_last_name\n        ,c_first_name\n        ,s_store_name\nhaving sum(netpaid) > (select 0.05*avg(netpaid)\n                           from ssales)\norder by c_last_name\n        ,c_first_name\n        ,s_store_name\n;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q25.sql",
    "content": "-- CometBench-DS query 25 derived from TPC-DS query 25 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n i_item_id\n ,i_item_desc\n ,s_store_id\n ,s_store_name\n ,min(ss_net_profit) as store_sales_profit\n ,min(sr_net_loss) as store_returns_loss\n ,min(cs_net_profit) as catalog_sales_profit\n from\n store_sales\n ,store_returns\n ,catalog_sales\n ,date_dim d1\n ,date_dim d2\n ,date_dim d3\n ,store\n ,item\n where\n d1.d_moy = 4\n and d1.d_year = 2002\n and d1.d_date_sk = ss_sold_date_sk\n and i_item_sk = ss_item_sk\n and s_store_sk = ss_store_sk\n and ss_customer_sk = sr_customer_sk\n and ss_item_sk = sr_item_sk\n and ss_ticket_number = sr_ticket_number\n and sr_returned_date_sk = d2.d_date_sk\n and d2.d_moy               between 4 and  10\n and d2.d_year              = 2002\n and sr_customer_sk = cs_bill_customer_sk\n and sr_item_sk = cs_item_sk\n and cs_sold_date_sk = d3.d_date_sk\n and d3.d_moy               between 4 and  10 \n and d3.d_year              = 2002\n group by\n i_item_id\n ,i_item_desc\n ,s_store_id\n ,s_store_name\n order by\n i_item_id\n ,i_item_desc\n ,s_store_id\n ,s_store_name\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q26.sql",
    "content": "-- CometBench-DS query 26 derived from TPC-DS query 26 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_item_id, \n        avg(cs_quantity) agg1,\n        avg(cs_list_price) agg2,\n        avg(cs_coupon_amt) agg3,\n        avg(cs_sales_price) agg4 \n from catalog_sales, customer_demographics, date_dim, item, promotion\n where cs_sold_date_sk = d_date_sk and\n       cs_item_sk = i_item_sk and\n       cs_bill_cdemo_sk = cd_demo_sk and\n       cs_promo_sk = p_promo_sk and\n       cd_gender = 'F' and \n       cd_marital_status = 'M' and\n       cd_education_status = '4 yr Degree' and\n       (p_channel_email = 'N' or p_channel_event = 'N') and\n       d_year = 2000 \n group by i_item_id\n order by i_item_id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q27.sql",
    "content": "-- CometBench-DS query 27 derived from TPC-DS query 27 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_item_id,\n        s_state, grouping(s_state) g_state,\n        avg(ss_quantity) agg1,\n        avg(ss_list_price) agg2,\n        avg(ss_coupon_amt) agg3,\n        avg(ss_sales_price) agg4\n from store_sales, customer_demographics, date_dim, store, item\n where ss_sold_date_sk = d_date_sk and\n       ss_item_sk = i_item_sk and\n       ss_store_sk = s_store_sk and\n       ss_cdemo_sk = cd_demo_sk and\n       cd_gender = 'M' and\n       cd_marital_status = 'U' and\n       cd_education_status = 'Secondary' and\n       d_year = 2000 and\n       s_state in ('TN','TN', 'TN', 'TN', 'TN', 'TN')\n group by rollup (i_item_id, s_state)\n order by i_item_id\n         ,s_state\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q28.sql",
    "content": "-- CometBench-DS query 28 derived from TPC-DS query 28 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  *\nfrom (select avg(ss_list_price) B1_LP\n            ,count(ss_list_price) B1_CNT\n            ,count(distinct ss_list_price) B1_CNTD\n      from store_sales\n      where ss_quantity between 0 and 5\n        and (ss_list_price between 28 and 28+10 \n             or ss_coupon_amt between 12573 and 12573+1000\n             or ss_wholesale_cost between 33 and 33+20)) B1,\n     (select avg(ss_list_price) B2_LP\n            ,count(ss_list_price) B2_CNT\n            ,count(distinct ss_list_price) B2_CNTD\n      from store_sales\n      where ss_quantity between 6 and 10\n        and (ss_list_price between 143 and 143+10\n          or ss_coupon_amt between 5562 and 5562+1000\n          or ss_wholesale_cost between 45 and 45+20)) B2,\n     (select avg(ss_list_price) B3_LP\n            ,count(ss_list_price) B3_CNT\n            ,count(distinct ss_list_price) B3_CNTD\n      from store_sales\n      where ss_quantity between 11 and 15\n        and (ss_list_price between 159 and 159+10\n          or ss_coupon_amt between 2807 and 2807+1000\n          or ss_wholesale_cost between 24 and 24+20)) B3,\n     (select avg(ss_list_price) B4_LP\n            ,count(ss_list_price) B4_CNT\n            ,count(distinct ss_list_price) B4_CNTD\n      from store_sales\n      where ss_quantity between 16 and 20\n        and (ss_list_price between 24 and 24+10\n          or ss_coupon_amt between 3706 and 3706+1000\n          or ss_wholesale_cost between 46 and 46+20)) B4,\n     (select avg(ss_list_price) B5_LP\n            ,count(ss_list_price) B5_CNT\n            ,count(distinct ss_list_price) B5_CNTD\n      from store_sales\n      where ss_quantity between 21 and 25\n        and (ss_list_price between 76 and 76+10\n          or ss_coupon_amt between 2096 and 2096+1000\n          or ss_wholesale_cost between 50 and 50+20)) B5,\n     (select avg(ss_list_price) B6_LP\n            ,count(ss_list_price) B6_CNT\n            ,count(distinct ss_list_price) B6_CNTD\n      from store_sales\n      where ss_quantity between 26 and 30\n        and (ss_list_price between 169 and 169+10\n          or ss_coupon_amt between 10672 and 10672+1000\n          or ss_wholesale_cost between 58 and 58+20)) B6\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q29.sql",
    "content": "-- CometBench-DS query 29 derived from TPC-DS query 29 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect   \n     i_item_id\n    ,i_item_desc\n    ,s_store_id\n    ,s_store_name\n    ,stddev_samp(ss_quantity)        as store_sales_quantity\n    ,stddev_samp(sr_return_quantity) as store_returns_quantity\n    ,stddev_samp(cs_quantity)        as catalog_sales_quantity\n from\n    store_sales\n   ,store_returns\n   ,catalog_sales\n   ,date_dim             d1\n   ,date_dim             d2\n   ,date_dim             d3\n   ,store\n   ,item\n where\n     d1.d_moy               = 4 \n and d1.d_year              = 1999\n and d1.d_date_sk           = ss_sold_date_sk\n and i_item_sk              = ss_item_sk\n and s_store_sk             = ss_store_sk\n and ss_customer_sk         = sr_customer_sk\n and ss_item_sk             = sr_item_sk\n and ss_ticket_number       = sr_ticket_number\n and sr_returned_date_sk    = d2.d_date_sk\n and d2.d_moy               between 4 and  4 + 3 \n and d2.d_year              = 1999\n and sr_customer_sk         = cs_bill_customer_sk\n and sr_item_sk             = cs_item_sk\n and cs_sold_date_sk        = d3.d_date_sk     \n and d3.d_year              in (1999,1999+1,1999+2)\n group by\n    i_item_id\n   ,i_item_desc\n   ,s_store_id\n   ,s_store_name\n order by\n    i_item_id \n   ,i_item_desc\n   ,s_store_id\n   ,s_store_name\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q3.sql",
    "content": "-- CometBench-DS query 3 derived from TPC-DS query 3 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  dt.d_year \n       ,item.i_brand_id brand_id \n       ,item.i_brand brand\n       ,sum(ss_net_profit) sum_agg\n from  date_dim dt \n      ,store_sales\n      ,item\n where dt.d_date_sk = store_sales.ss_sold_date_sk\n   and store_sales.ss_item_sk = item.i_item_sk\n   and item.i_manufact_id = 445\n   and dt.d_moy=12\n group by dt.d_year\n      ,item.i_brand\n      ,item.i_brand_id\n order by dt.d_year\n         ,sum_agg desc\n         ,brand_id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q30.sql",
    "content": "-- CometBench-DS query 30 derived from TPC-DS query 30 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith customer_total_return as\n (select wr_returning_customer_sk as ctr_customer_sk\n        ,ca_state as ctr_state, \n \tsum(wr_return_amt) as ctr_total_return\n from web_returns\n     ,date_dim\n     ,customer_address\n where wr_returned_date_sk = d_date_sk \n   and d_year =2000\n   and wr_returning_addr_sk = ca_address_sk \n group by wr_returning_customer_sk\n         ,ca_state)\n  select  c_customer_id,c_salutation,c_first_name,c_last_name,c_preferred_cust_flag\n       ,c_birth_day,c_birth_month,c_birth_year,c_birth_country,c_login,c_email_address\n       ,c_last_review_date_sk,ctr_total_return\n from customer_total_return ctr1\n     ,customer_address\n     ,customer\n where ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2\n \t\t\t  from customer_total_return ctr2 \n                  \t  where ctr1.ctr_state = ctr2.ctr_state)\n       and ca_address_sk = c_current_addr_sk\n       and ca_state = 'KS'\n       and ctr1.ctr_customer_sk = c_customer_sk\n order by c_customer_id,c_salutation,c_first_name,c_last_name,c_preferred_cust_flag\n                  ,c_birth_day,c_birth_month,c_birth_year,c_birth_country,c_login,c_email_address\n                  ,c_last_review_date_sk,ctr_total_return\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q31.sql",
    "content": "-- CometBench-DS query 31 derived from TPC-DS query 31 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ss as\n (select ca_county,d_qoy, d_year,sum(ss_ext_sales_price) as store_sales\n from store_sales,date_dim,customer_address\n where ss_sold_date_sk = d_date_sk\n  and ss_addr_sk=ca_address_sk\n group by ca_county,d_qoy, d_year),\n ws as\n (select ca_county,d_qoy, d_year,sum(ws_ext_sales_price) as web_sales\n from web_sales,date_dim,customer_address\n where ws_sold_date_sk = d_date_sk\n  and ws_bill_addr_sk=ca_address_sk\n group by ca_county,d_qoy, d_year)\n select \n        ss1.ca_county\n       ,ss1.d_year\n       ,ws2.web_sales/ws1.web_sales web_q1_q2_increase\n       ,ss2.store_sales/ss1.store_sales store_q1_q2_increase\n       ,ws3.web_sales/ws2.web_sales web_q2_q3_increase\n       ,ss3.store_sales/ss2.store_sales store_q2_q3_increase\n from\n        ss ss1\n       ,ss ss2\n       ,ss ss3\n       ,ws ws1\n       ,ws ws2\n       ,ws ws3\n where\n    ss1.d_qoy = 1\n    and ss1.d_year = 1999\n    and ss1.ca_county = ss2.ca_county\n    and ss2.d_qoy = 2\n    and ss2.d_year = 1999\n and ss2.ca_county = ss3.ca_county\n    and ss3.d_qoy = 3\n    and ss3.d_year = 1999\n    and ss1.ca_county = ws1.ca_county\n    and ws1.d_qoy = 1\n    and ws1.d_year = 1999\n    and ws1.ca_county = ws2.ca_county\n    and ws2.d_qoy = 2\n    and ws2.d_year = 1999\n    and ws1.ca_county = ws3.ca_county\n    and ws3.d_qoy = 3\n    and ws3.d_year =1999\n    and case when ws1.web_sales > 0 then ws2.web_sales/ws1.web_sales else null end \n       > case when ss1.store_sales > 0 then ss2.store_sales/ss1.store_sales else null end\n    and case when ws2.web_sales > 0 then ws3.web_sales/ws2.web_sales else null end\n       > case when ss2.store_sales > 0 then ss3.store_sales/ss2.store_sales else null end\n order by ss1.ca_county;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q32.sql",
    "content": "-- CometBench-DS query 32 derived from TPC-DS query 32 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  sum(cs_ext_discount_amt)  as `excess discount amount` \nfrom \n   catalog_sales \n   ,item \n   ,date_dim\nwhere\ni_manufact_id = 283\nand i_item_sk = cs_item_sk \nand d_date between '1999-02-22' and \n        (cast('1999-02-22' as date) + INTERVAL '90 DAYS')\nand d_date_sk = cs_sold_date_sk \nand cs_ext_discount_amt  \n     > ( \n         select \n            1.3 * avg(cs_ext_discount_amt) \n         from \n            catalog_sales \n           ,date_dim\n         where \n              cs_item_sk = i_item_sk \n          and d_date between '1999-02-22' and\n                             (cast('1999-02-22' as date) + INTERVAL '90 DAYS')\n          and d_date_sk = cs_sold_date_sk \n      ) \n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q33.sql",
    "content": "-- CometBench-DS query 33 derived from TPC-DS query 33 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ss as (\n select\n          i_manufact_id,sum(ss_ext_sales_price) total_sales\n from\n \tstore_sales,\n \tdate_dim,\n         customer_address,\n         item\n where\n         i_manufact_id in (select\n  i_manufact_id\nfrom\n item\nwhere i_category in ('Books'))\n and     ss_item_sk              = i_item_sk\n and     ss_sold_date_sk         = d_date_sk\n and     d_year                  = 1999\n and     d_moy                   = 4\n and     ss_addr_sk              = ca_address_sk\n and     ca_gmt_offset           = -5 \n group by i_manufact_id),\n cs as (\n select\n          i_manufact_id,sum(cs_ext_sales_price) total_sales\n from\n \tcatalog_sales,\n \tdate_dim,\n         customer_address,\n         item\n where\n         i_manufact_id               in (select\n  i_manufact_id\nfrom\n item\nwhere i_category in ('Books'))\n and     cs_item_sk              = i_item_sk\n and     cs_sold_date_sk         = d_date_sk\n and     d_year                  = 1999\n and     d_moy                   = 4\n and     cs_bill_addr_sk         = ca_address_sk\n and     ca_gmt_offset           = -5 \n group by i_manufact_id),\n ws as (\n select\n          i_manufact_id,sum(ws_ext_sales_price) total_sales\n from\n \tweb_sales,\n \tdate_dim,\n         customer_address,\n         item\n where\n         i_manufact_id               in (select\n  i_manufact_id\nfrom\n item\nwhere i_category in ('Books'))\n and     ws_item_sk              = i_item_sk\n and     ws_sold_date_sk         = d_date_sk\n and     d_year                  = 1999\n and     d_moy                   = 4\n and     ws_bill_addr_sk         = ca_address_sk\n and     ca_gmt_offset           = -5\n group by i_manufact_id)\n  select  i_manufact_id ,sum(total_sales) total_sales\n from  (select * from ss \n        union all\n        select * from cs \n        union all\n        select * from ws) tmp1\n group by i_manufact_id\n order by total_sales\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q34.sql",
    "content": "-- CometBench-DS query 34 derived from TPC-DS query 34 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect c_last_name\n       ,c_first_name\n       ,c_salutation\n       ,c_preferred_cust_flag\n       ,ss_ticket_number\n       ,cnt from\n   (select ss_ticket_number\n          ,ss_customer_sk\n          ,count(*) cnt\n    from store_sales,date_dim,store,household_demographics\n    where store_sales.ss_sold_date_sk = date_dim.d_date_sk\n    and store_sales.ss_store_sk = store.s_store_sk  \n    and store_sales.ss_hdemo_sk = household_demographics.hd_demo_sk\n    and (date_dim.d_dom between 1 and 3 or date_dim.d_dom between 25 and 28)\n    and (household_demographics.hd_buy_potential = '501-1000' or\n         household_demographics.hd_buy_potential = 'Unknown')\n    and household_demographics.hd_vehicle_count > 0\n    and (case when household_demographics.hd_vehicle_count > 0 \n\tthen household_demographics.hd_dep_count/ household_demographics.hd_vehicle_count \n\telse null \n\tend)  > 1.2\n    and date_dim.d_year in (2000,2000+1,2000+2)\n    and store.s_county in ('Williamson County','Williamson County','Williamson County','Williamson County',\n                           'Williamson County','Williamson County','Williamson County','Williamson County')\n    group by ss_ticket_number,ss_customer_sk) dn,customer\n    where ss_customer_sk = c_customer_sk\n      and cnt between 15 and 20\n    order by c_last_name,c_first_name,c_salutation,c_preferred_cust_flag desc, ss_ticket_number;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q35.sql",
    "content": "-- CometBench-DS query 35 derived from TPC-DS query 35 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect   \n  ca_state,\n  cd_gender,\n  cd_marital_status,\n  cd_dep_count,\n  count(*) cnt1,\n  max(cd_dep_count),\n  stddev_samp(cd_dep_count),\n  stddev_samp(cd_dep_count),\n  cd_dep_employed_count,\n  count(*) cnt2,\n  max(cd_dep_employed_count),\n  stddev_samp(cd_dep_employed_count),\n  stddev_samp(cd_dep_employed_count),\n  cd_dep_college_count,\n  count(*) cnt3,\n  max(cd_dep_college_count),\n  stddev_samp(cd_dep_college_count),\n  stddev_samp(cd_dep_college_count)\n from\n  customer c,customer_address ca,customer_demographics\n where\n  c.c_current_addr_sk = ca.ca_address_sk and\n  cd_demo_sk = c.c_current_cdemo_sk and \n  exists (select *\n          from store_sales,date_dim\n          where c.c_customer_sk = ss_customer_sk and\n                ss_sold_date_sk = d_date_sk and\n                d_year = 2000 and\n                d_qoy < 4) and\n   (exists (select *\n            from web_sales,date_dim\n            where c.c_customer_sk = ws_bill_customer_sk and\n                  ws_sold_date_sk = d_date_sk and\n                  d_year = 2000 and\n                  d_qoy < 4) or \n    exists (select * \n            from catalog_sales,date_dim\n            where c.c_customer_sk = cs_ship_customer_sk and\n                  cs_sold_date_sk = d_date_sk and\n                  d_year = 2000 and\n                  d_qoy < 4))\n group by ca_state,\n          cd_gender,\n          cd_marital_status,\n          cd_dep_count,\n          cd_dep_employed_count,\n          cd_dep_college_count\n order by ca_state,\n          cd_gender,\n          cd_marital_status,\n          cd_dep_count,\n          cd_dep_employed_count,\n          cd_dep_college_count\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q36.sql",
    "content": "-- CometBench-DS query 36 derived from TPC-DS query 36 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n    sum(ss_net_profit)/sum(ss_ext_sales_price) as gross_margin\n   ,i_category\n   ,i_class\n   ,grouping(i_category)+grouping(i_class) as lochierarchy\n   ,rank() over (\n \tpartition by grouping(i_category)+grouping(i_class),\n \tcase when grouping(i_class) = 0 then i_category end \n \torder by sum(ss_net_profit)/sum(ss_ext_sales_price) asc) as rank_within_parent\n from\n    store_sales\n   ,date_dim       d1\n   ,item\n   ,store\n where\n    d1.d_year = 2001 \n and d1.d_date_sk = ss_sold_date_sk\n and i_item_sk  = ss_item_sk \n and s_store_sk  = ss_store_sk\n and s_state in ('TN','TN','TN','TN',\n                 'TN','TN','TN','TN')\n group by rollup(i_category,i_class)\n order by\n   lochierarchy desc\n  ,case when lochierarchy = 0 then i_category end\n  ,rank_within_parent\n   LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q37.sql",
    "content": "-- CometBench-DS query 37 derived from TPC-DS query 37 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_item_id\n       ,i_item_desc\n       ,i_current_price\n from item, inventory, date_dim, catalog_sales\n where i_current_price between 26 and 26 + 30\n and inv_item_sk = i_item_sk\n and d_date_sk=inv_date_sk\n and d_date between cast('2001-06-09' as date) and (cast('2001-06-09' as date) +  INTERVAL '60 DAYS')\n and i_manufact_id in (744,884,722,693)\n and inv_quantity_on_hand between 100 and 500\n and cs_item_sk = i_item_sk\n group by i_item_id,i_item_desc,i_current_price\n order by i_item_id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q38.sql",
    "content": "-- CometBench-DS query 38 derived from TPC-DS query 38 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  count(*) from (\n    select distinct c_last_name, c_first_name, d_date\n    from store_sales, date_dim, customer\n          where store_sales.ss_sold_date_sk = date_dim.d_date_sk\n      and store_sales.ss_customer_sk = customer.c_customer_sk\n      and d_month_seq between 1190 and 1190 + 11\n  intersect\n    select distinct c_last_name, c_first_name, d_date\n    from catalog_sales, date_dim, customer\n          where catalog_sales.cs_sold_date_sk = date_dim.d_date_sk\n      and catalog_sales.cs_bill_customer_sk = customer.c_customer_sk\n      and d_month_seq between 1190 and 1190 + 11\n  intersect\n    select distinct c_last_name, c_first_name, d_date\n    from web_sales, date_dim, customer\n          where web_sales.ws_sold_date_sk = date_dim.d_date_sk\n      and web_sales.ws_bill_customer_sk = customer.c_customer_sk\n      and d_month_seq between 1190 and 1190 + 11\n) hot_cust\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q39.sql",
    "content": "-- CometBench-DS query 39 derived from TPC-DS query 39 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith inv as\n(select w_warehouse_name,w_warehouse_sk,i_item_sk,d_moy\n       ,stdev,mean, case mean when 0 then null else stdev/mean end cov\n from(select w_warehouse_name,w_warehouse_sk,i_item_sk,d_moy\n            ,stddev_samp(inv_quantity_on_hand) stdev,avg(inv_quantity_on_hand) mean\n      from inventory\n          ,item\n          ,warehouse\n          ,date_dim\n      where inv_item_sk = i_item_sk\n        and inv_warehouse_sk = w_warehouse_sk\n        and inv_date_sk = d_date_sk\n        and d_year =2001\n      group by w_warehouse_name,w_warehouse_sk,i_item_sk,d_moy) foo\n where case mean when 0 then 0 else stdev/mean end > 1)\nselect inv1.w_warehouse_sk,inv1.i_item_sk,inv1.d_moy,inv1.mean, inv1.cov\n        ,inv2.w_warehouse_sk,inv2.i_item_sk,inv2.d_moy,inv2.mean, inv2.cov\nfrom inv inv1,inv inv2\nwhere inv1.i_item_sk = inv2.i_item_sk\n  and inv1.w_warehouse_sk =  inv2.w_warehouse_sk\n  and inv1.d_moy=1\n  and inv2.d_moy=1+1\norder by inv1.w_warehouse_sk,inv1.i_item_sk,inv1.d_moy,inv1.mean,inv1.cov\n        ,inv2.d_moy,inv2.mean, inv2.cov\n;\nwith inv as\n(select w_warehouse_name,w_warehouse_sk,i_item_sk,d_moy\n       ,stdev,mean, case mean when 0 then null else stdev/mean end cov\n from(select w_warehouse_name,w_warehouse_sk,i_item_sk,d_moy\n            ,stddev_samp(inv_quantity_on_hand) stdev,avg(inv_quantity_on_hand) mean\n      from inventory\n          ,item\n          ,warehouse\n          ,date_dim\n      where inv_item_sk = i_item_sk\n        and inv_warehouse_sk = w_warehouse_sk\n        and inv_date_sk = d_date_sk\n        and d_year =2001\n      group by w_warehouse_name,w_warehouse_sk,i_item_sk,d_moy) foo\n where case mean when 0 then 0 else stdev/mean end > 1)\nselect inv1.w_warehouse_sk,inv1.i_item_sk,inv1.d_moy,inv1.mean, inv1.cov\n        ,inv2.w_warehouse_sk,inv2.i_item_sk,inv2.d_moy,inv2.mean, inv2.cov\nfrom inv inv1,inv inv2\nwhere inv1.i_item_sk = inv2.i_item_sk\n  and inv1.w_warehouse_sk =  inv2.w_warehouse_sk\n  and inv1.d_moy=1\n  and inv2.d_moy=1+1\n  and inv1.cov > 1.5\norder by inv1.w_warehouse_sk,inv1.i_item_sk,inv1.d_moy,inv1.mean,inv1.cov\n        ,inv2.d_moy,inv2.mean, inv2.cov\n;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q4.sql",
    "content": "-- CometBench-DS query 4 derived from TPC-DS query 4 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith year_total as (\n select c_customer_id customer_id\n       ,c_first_name customer_first_name\n       ,c_last_name customer_last_name\n       ,c_preferred_cust_flag customer_preferred_cust_flag\n       ,c_birth_country customer_birth_country\n       ,c_login customer_login\n       ,c_email_address customer_email_address\n       ,d_year dyear\n       ,sum(((ss_ext_list_price-ss_ext_wholesale_cost-ss_ext_discount_amt)+ss_ext_sales_price)/2) year_total\n       ,'s' sale_type\n from customer\n     ,store_sales\n     ,date_dim\n where c_customer_sk = ss_customer_sk\n   and ss_sold_date_sk = d_date_sk\n group by c_customer_id\n         ,c_first_name\n         ,c_last_name\n         ,c_preferred_cust_flag\n         ,c_birth_country\n         ,c_login\n         ,c_email_address\n         ,d_year\n union all\n select c_customer_id customer_id\n       ,c_first_name customer_first_name\n       ,c_last_name customer_last_name\n       ,c_preferred_cust_flag customer_preferred_cust_flag\n       ,c_birth_country customer_birth_country\n       ,c_login customer_login\n       ,c_email_address customer_email_address\n       ,d_year dyear\n       ,sum((((cs_ext_list_price-cs_ext_wholesale_cost-cs_ext_discount_amt)+cs_ext_sales_price)/2) ) year_total\n       ,'c' sale_type\n from customer\n     ,catalog_sales\n     ,date_dim\n where c_customer_sk = cs_bill_customer_sk\n   and cs_sold_date_sk = d_date_sk\n group by c_customer_id\n         ,c_first_name\n         ,c_last_name\n         ,c_preferred_cust_flag\n         ,c_birth_country\n         ,c_login\n         ,c_email_address\n         ,d_year\nunion all\n select c_customer_id customer_id\n       ,c_first_name customer_first_name\n       ,c_last_name customer_last_name\n       ,c_preferred_cust_flag customer_preferred_cust_flag\n       ,c_birth_country customer_birth_country\n       ,c_login customer_login\n       ,c_email_address customer_email_address\n       ,d_year dyear\n       ,sum((((ws_ext_list_price-ws_ext_wholesale_cost-ws_ext_discount_amt)+ws_ext_sales_price)/2) ) year_total\n       ,'w' sale_type\n from customer\n     ,web_sales\n     ,date_dim\n where c_customer_sk = ws_bill_customer_sk\n   and ws_sold_date_sk = d_date_sk\n group by c_customer_id\n         ,c_first_name\n         ,c_last_name\n         ,c_preferred_cust_flag\n         ,c_birth_country\n         ,c_login\n         ,c_email_address\n         ,d_year\n         )\n  select  \n                  t_s_secyear.customer_id\n                 ,t_s_secyear.customer_first_name\n                 ,t_s_secyear.customer_last_name\n                 ,t_s_secyear.customer_email_address\n from year_total t_s_firstyear\n     ,year_total t_s_secyear\n     ,year_total t_c_firstyear\n     ,year_total t_c_secyear\n     ,year_total t_w_firstyear\n     ,year_total t_w_secyear\n where t_s_secyear.customer_id = t_s_firstyear.customer_id\n   and t_s_firstyear.customer_id = t_c_secyear.customer_id\n   and t_s_firstyear.customer_id = t_c_firstyear.customer_id\n   and t_s_firstyear.customer_id = t_w_firstyear.customer_id\n   and t_s_firstyear.customer_id = t_w_secyear.customer_id\n   and t_s_firstyear.sale_type = 's'\n   and t_c_firstyear.sale_type = 'c'\n   and t_w_firstyear.sale_type = 'w'\n   and t_s_secyear.sale_type = 's'\n   and t_c_secyear.sale_type = 'c'\n   
and t_w_secyear.sale_type = 'w'\n   and t_s_firstyear.dyear =  2001\n   and t_s_secyear.dyear = 2001+1\n   and t_c_firstyear.dyear =  2001\n   and t_c_secyear.dyear =  2001+1\n   and t_w_firstyear.dyear = 2001\n   and t_w_secyear.dyear = 2001+1\n   and t_s_firstyear.year_total > 0\n   and t_c_firstyear.year_total > 0\n   and t_w_firstyear.year_total > 0\n   and case when t_c_firstyear.year_total > 0 then t_c_secyear.year_total / t_c_firstyear.year_total else null end\n           > case when t_s_firstyear.year_total > 0 then t_s_secyear.year_total / t_s_firstyear.year_total else null end\n   and case when t_c_firstyear.year_total > 0 then t_c_secyear.year_total / t_c_firstyear.year_total else null end\n           > case when t_w_firstyear.year_total > 0 then t_w_secyear.year_total / t_w_firstyear.year_total else null end\n order by t_s_secyear.customer_id\n         ,t_s_secyear.customer_first_name\n         ,t_s_secyear.customer_last_name\n         ,t_s_secyear.customer_email_address\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q40.sql",
    "content": "-- CometBench-DS query 40 derived from TPC-DS query 40 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n   w_state\n  ,i_item_id\n  ,sum(case when (cast(d_date as date) < cast ('2002-05-18' as date)) \n \t\tthen cs_sales_price - coalesce(cr_refunded_cash,0) else 0 end) as sales_before\n  ,sum(case when (cast(d_date as date) >= cast ('2002-05-18' as date)) \n \t\tthen cs_sales_price - coalesce(cr_refunded_cash,0) else 0 end) as sales_after\n from\n   catalog_sales left outer join catalog_returns on\n       (cs_order_number = cr_order_number \n        and cs_item_sk = cr_item_sk)\n  ,warehouse \n  ,item\n  ,date_dim\n where\n     i_current_price between 0.99 and 1.49\n and i_item_sk          = cs_item_sk\n and cs_warehouse_sk    = w_warehouse_sk \n and cs_sold_date_sk    = d_date_sk\n and d_date between (cast ('2002-05-18' as date) - INTERVAL '30 DAYS')\n                and (cast ('2002-05-18' as date) + INTERVAL '30 DAYS') \n group by\n    w_state,i_item_id\n order by w_state,i_item_id\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q41.sql",
    "content": "-- CometBench-DS query 41 derived from TPC-DS query 41 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  distinct(i_product_name)\n from item i1\n where i_manufact_id between 668 and 668+40 \n   and (select count(*) as item_cnt\n        from item\n        where (i_manufact = i1.i_manufact and\n        ((i_category = 'Women' and \n        (i_color = 'cream' or i_color = 'ghost') and \n        (i_units = 'Ton' or i_units = 'Gross') and\n        (i_size = 'economy' or i_size = 'small')\n        ) or\n        (i_category = 'Women' and\n        (i_color = 'midnight' or i_color = 'burlywood') and\n        (i_units = 'Tsp' or i_units = 'Bundle') and\n        (i_size = 'medium' or i_size = 'extra large')\n        ) or\n        (i_category = 'Men' and\n        (i_color = 'lavender' or i_color = 'azure') and\n        (i_units = 'Each' or i_units = 'Lb') and\n        (i_size = 'large' or i_size = 'N/A')\n        ) or\n        (i_category = 'Men' and\n        (i_color = 'chocolate' or i_color = 'steel') and\n        (i_units = 'N/A' or i_units = 'Dozen') and\n        (i_size = 'economy' or i_size = 'small')\n        ))) or\n       (i_manufact = i1.i_manufact and\n        ((i_category = 'Women' and \n        (i_color = 'floral' or i_color = 'royal') and \n        (i_units = 'Unknown' or i_units = 'Tbl') and\n        (i_size = 'economy' or i_size = 'small')\n        ) or\n        (i_category = 'Women' and\n        (i_color = 'navy' or i_color = 'forest') and\n        (i_units = 'Bunch' or i_units = 'Dram') and\n        (i_size = 'medium' or i_size = 'extra large')\n        ) or\n        (i_category = 'Men' and\n        (i_color = 'cyan' or i_color = 'indian') and\n        (i_units = 'Carton' or i_units = 'Cup') and\n        (i_size = 'large' or i_size = 'N/A')\n        ) or\n        (i_category = 'Men' and\n        (i_color = 'coral' or i_color = 'pale') and\n        (i_units = 'Pallet' or i_units = 'Gram') and\n        (i_size = 'economy' or i_size = 'small')\n        )))) > 0\n order by i_product_name\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q42.sql",
    "content": "-- CometBench-DS query 42 derived from TPC-DS query 42 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  dt.d_year\n \t,item.i_category_id\n \t,item.i_category\n \t,sum(ss_ext_sales_price)\n from \tdate_dim dt\n \t,store_sales\n \t,item\n where dt.d_date_sk = store_sales.ss_sold_date_sk\n \tand store_sales.ss_item_sk = item.i_item_sk\n \tand item.i_manager_id = 1  \t\n \tand dt.d_moy=11\n \tand dt.d_year=1998\n group by \tdt.d_year\n \t\t,item.i_category_id\n \t\t,item.i_category\n order by       sum(ss_ext_sales_price) desc,dt.d_year\n \t\t,item.i_category_id\n \t\t,item.i_category\n LIMIT 100 ;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q43.sql",
    "content": "-- CometBench-DS query 43 derived from TPC-DS query 43 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  s_store_name, s_store_id,\n        sum(case when (d_day_name='Sunday') then ss_sales_price else null end) sun_sales,\n        sum(case when (d_day_name='Monday') then ss_sales_price else null end) mon_sales,\n        sum(case when (d_day_name='Tuesday') then ss_sales_price else  null end) tue_sales,\n        sum(case when (d_day_name='Wednesday') then ss_sales_price else null end) wed_sales,\n        sum(case when (d_day_name='Thursday') then ss_sales_price else null end) thu_sales,\n        sum(case when (d_day_name='Friday') then ss_sales_price else null end) fri_sales,\n        sum(case when (d_day_name='Saturday') then ss_sales_price else null end) sat_sales\n from date_dim, store_sales, store\n where d_date_sk = ss_sold_date_sk and\n       s_store_sk = ss_store_sk and\n       s_gmt_offset = -5 and\n       d_year = 2000 \n group by s_store_name, s_store_id\n order by s_store_name, s_store_id,sun_sales,mon_sales,tue_sales,wed_sales,thu_sales,fri_sales,sat_sales\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q44.sql",
    "content": "-- CometBench-DS query 44 derived from TPC-DS query 44 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  asceding.rnk, i1.i_product_name best_performing, i2.i_product_name worst_performing\nfrom(select *\n     from (select item_sk,rank() over (order by rank_col asc) rnk\n           from (select ss_item_sk item_sk,avg(ss_net_profit) rank_col \n                 from store_sales ss1\n                 where ss_store_sk = 6\n                 group by ss_item_sk\n                 having avg(ss_net_profit) > 0.9*(select avg(ss_net_profit) rank_col\n                                                  from store_sales\n                                                  where ss_store_sk = 6\n                                                    and ss_hdemo_sk is null\n                                                  group by ss_store_sk))V1)V11\n     where rnk  < 11) asceding,\n    (select *\n     from (select item_sk,rank() over (order by rank_col desc) rnk\n           from (select ss_item_sk item_sk,avg(ss_net_profit) rank_col\n                 from store_sales ss1\n                 where ss_store_sk = 6\n                 group by ss_item_sk\n                 having avg(ss_net_profit) > 0.9*(select avg(ss_net_profit) rank_col\n                                                  from store_sales\n                                                  where ss_store_sk = 6\n                                                    and ss_hdemo_sk is null\n                                                  group by ss_store_sk))V2)V21\n     where rnk  < 11) descending,\nitem i1,\nitem i2\nwhere asceding.rnk = descending.rnk \n  and i1.i_item_sk=asceding.item_sk\n  and i2.i_item_sk=descending.item_sk\norder by asceding.rnk\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q45.sql",
    "content": "-- CometBench-DS query 45 derived from TPC-DS query 45 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  ca_zip, ca_city, sum(ws_sales_price)\n from web_sales, customer, customer_address, date_dim, item\n where ws_bill_customer_sk = c_customer_sk\n \tand c_current_addr_sk = ca_address_sk \n \tand ws_item_sk = i_item_sk \n \tand ( substr(ca_zip,1,5) in ('85669', '86197','88274','83405','86475', '85392', '85460', '80348', '81792')\n \t      or \n \t      i_item_id in (select i_item_id\n                             from item\n                             where i_item_sk in (2, 3, 5, 7, 11, 13, 17, 19, 23, 29)\n                             )\n \t    )\n \tand ws_sold_date_sk = d_date_sk\n \tand d_qoy = 2 and d_year = 2000\n group by ca_zip, ca_city\n order by ca_zip, ca_city\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q46.sql",
    "content": "-- CometBench-DS query 46 derived from TPC-DS query 46 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  c_last_name\n       ,c_first_name\n       ,ca_city\n       ,bought_city\n       ,ss_ticket_number\n       ,amt,profit \n from\n   (select ss_ticket_number\n          ,ss_customer_sk\n          ,ca_city bought_city\n          ,sum(ss_coupon_amt) amt\n          ,sum(ss_net_profit) profit\n    from store_sales,date_dim,store,household_demographics,customer_address \n    where store_sales.ss_sold_date_sk = date_dim.d_date_sk\n    and store_sales.ss_store_sk = store.s_store_sk  \n    and store_sales.ss_hdemo_sk = household_demographics.hd_demo_sk\n    and store_sales.ss_addr_sk = customer_address.ca_address_sk\n    and (household_demographics.hd_dep_count = 3 or\n         household_demographics.hd_vehicle_count= 1)\n    and date_dim.d_dow in (6,0)\n    and date_dim.d_year in (1999,1999+1,1999+2) \n    and store.s_city in ('Midway','Fairview','Fairview','Midway','Fairview') \n    group by ss_ticket_number,ss_customer_sk,ss_addr_sk,ca_city) dn,customer,customer_address current_addr\n    where ss_customer_sk = c_customer_sk\n      and customer.c_current_addr_sk = current_addr.ca_address_sk\n      and current_addr.ca_city <> bought_city\n  order by c_last_name\n          ,c_first_name\n          ,ca_city\n          ,bought_city\n          ,ss_ticket_number\n   LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q47.sql",
    "content": "-- CometBench-DS query 47 derived from TPC-DS query 47 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith v1 as(\n select i_category, i_brand,\n        s_store_name, s_company_name,\n        d_year, d_moy,\n        sum(ss_sales_price) sum_sales,\n        avg(sum(ss_sales_price)) over\n          (partition by i_category, i_brand,\n                     s_store_name, s_company_name, d_year)\n          avg_monthly_sales,\n        rank() over\n          (partition by i_category, i_brand,\n                     s_store_name, s_company_name\n           order by d_year, d_moy) rn\n from item, store_sales, date_dim, store\n where ss_item_sk = i_item_sk and\n       ss_sold_date_sk = d_date_sk and\n       ss_store_sk = s_store_sk and\n       (\n         d_year = 2001 or\n         ( d_year = 2001-1 and d_moy =12) or\n         ( d_year = 2001+1 and d_moy =1)\n       )\n group by i_category, i_brand,\n          s_store_name, s_company_name,\n          d_year, d_moy),\n v2 as(\n select v1.i_category, v1.i_brand, v1.s_store_name, v1.s_company_name\n        ,v1.d_year\n        ,v1.avg_monthly_sales\n        ,v1.sum_sales, v1_lag.sum_sales psum, v1_lead.sum_sales nsum\n from v1, v1 v1_lag, v1 v1_lead\n where v1.i_category = v1_lag.i_category and\n       v1.i_category = v1_lead.i_category and\n       v1.i_brand = v1_lag.i_brand and\n       v1.i_brand = v1_lead.i_brand and\n       v1.s_store_name = v1_lag.s_store_name and\n       v1.s_store_name = v1_lead.s_store_name and\n       v1.s_company_name = v1_lag.s_company_name and\n       v1.s_company_name = v1_lead.s_company_name and\n       v1.rn = v1_lag.rn + 1 and\n       v1.rn = v1_lead.rn - 1)\n  select  *\n from v2\n where  d_year = 2001 and    \n        avg_monthly_sales > 0 and\n        case when avg_monthly_sales > 0 then abs(sum_sales - avg_monthly_sales) / avg_monthly_sales else null end > 0.1\n order by sum_sales - avg_monthly_sales, nsum\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q48.sql",
    "content": "-- CometBench-DS query 48 derived from TPC-DS query 48 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect sum (ss_quantity)\n from store_sales, store, customer_demographics, customer_address, date_dim\n where s_store_sk = ss_store_sk\n and  ss_sold_date_sk = d_date_sk and d_year = 2001\n and  \n (\n  (\n   cd_demo_sk = ss_cdemo_sk\n   and \n   cd_marital_status = 'W'\n   and \n   cd_education_status = '2 yr Degree'\n   and \n   ss_sales_price between 100.00 and 150.00  \n   )\n or\n  (\n  cd_demo_sk = ss_cdemo_sk\n   and \n   cd_marital_status = 'S'\n   and \n   cd_education_status = 'Advanced Degree'\n   and \n   ss_sales_price between 50.00 and 100.00   \n  )\n or \n (\n  cd_demo_sk = ss_cdemo_sk\n  and \n   cd_marital_status = 'D'\n   and \n   cd_education_status = 'Primary'\n   and \n   ss_sales_price between 150.00 and 200.00  \n )\n )\n and\n (\n  (\n  ss_addr_sk = ca_address_sk\n  and\n  ca_country = 'United States'\n  and\n  ca_state in ('IL', 'KY', 'OR')\n  and ss_net_profit between 0 and 2000  \n  )\n or\n  (ss_addr_sk = ca_address_sk\n  and\n  ca_country = 'United States'\n  and\n  ca_state in ('VA', 'FL', 'AL')\n  and ss_net_profit between 150 and 3000 \n  )\n or\n  (ss_addr_sk = ca_address_sk\n  and\n  ca_country = 'United States'\n  and\n  ca_state in ('OK', 'IA', 'TX')\n  and ss_net_profit between 50 and 25000 \n  )\n )\n;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q49.sql",
    "content": "-- CometBench-DS query 49 derived from TPC-DS query 49 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  channel, item, return_ratio, return_rank, currency_rank from\n (select\n 'web' as channel\n ,web.item\n ,web.return_ratio\n ,web.return_rank\n ,web.currency_rank\n from (\n \tselect \n \t item\n \t,return_ratio\n \t,currency_ratio\n \t,rank() over (order by return_ratio) as return_rank\n \t,rank() over (order by currency_ratio) as currency_rank\n \tfrom\n \t(\tselect ws.ws_item_sk as item\n \t\t,(cast(sum(coalesce(wr.wr_return_quantity,0)) as decimal(15,4))/\n \t\tcast(sum(coalesce(ws.ws_quantity,0)) as decimal(15,4) )) as return_ratio\n \t\t,(cast(sum(coalesce(wr.wr_return_amt,0)) as decimal(15,4))/\n \t\tcast(sum(coalesce(ws.ws_net_paid,0)) as decimal(15,4) )) as currency_ratio\n \t\tfrom \n \t\t web_sales ws left outer join web_returns wr \n \t\t\ton (ws.ws_order_number = wr.wr_order_number and \n \t\t\tws.ws_item_sk = wr.wr_item_sk)\n                 ,date_dim\n \t\twhere \n \t\t\twr.wr_return_amt > 10000 \n \t\t\tand ws.ws_net_profit > 1\n                         and ws.ws_net_paid > 0\n                         and ws.ws_quantity > 0\n                         and ws_sold_date_sk = d_date_sk\n                         and d_year = 2000\n                         and d_moy = 12\n \t\tgroup by ws.ws_item_sk\n \t) in_web\n ) web\n where \n (\n web.return_rank <= 10\n or\n web.currency_rank <= 10\n )\n union\n select \n 'catalog' as channel\n ,catalog.item\n ,catalog.return_ratio\n ,catalog.return_rank\n ,catalog.currency_rank\n from (\n \tselect \n \t item\n \t,return_ratio\n \t,currency_ratio\n \t,rank() over (order by return_ratio) as return_rank\n \t,rank() over (order by currency_ratio) as currency_rank\n \tfrom\n \t(\tselect \n \t\tcs.cs_item_sk as item\n \t\t,(cast(sum(coalesce(cr.cr_return_quantity,0)) as decimal(15,4))/\n \t\tcast(sum(coalesce(cs.cs_quantity,0)) as decimal(15,4) )) as return_ratio\n \t\t,(cast(sum(coalesce(cr.cr_return_amount,0)) as decimal(15,4))/\n \t\tcast(sum(coalesce(cs.cs_net_paid,0)) as decimal(15,4) )) as currency_ratio\n \t\tfrom \n \t\tcatalog_sales cs left outer join catalog_returns cr\n \t\t\ton (cs.cs_order_number = cr.cr_order_number and \n \t\t\tcs.cs_item_sk = cr.cr_item_sk)\n                ,date_dim\n \t\twhere \n \t\t\tcr.cr_return_amount > 10000 \n \t\t\tand cs.cs_net_profit > 1\n                         and cs.cs_net_paid > 0\n                         and cs.cs_quantity > 0\n                         and cs_sold_date_sk = d_date_sk\n                         and d_year = 2000\n                         and d_moy = 12\n                 group by cs.cs_item_sk\n \t) in_cat\n ) catalog\n where \n (\n catalog.return_rank <= 10\n or\n catalog.currency_rank <=10\n )\n union\n select \n 'store' as channel\n ,store.item\n ,store.return_ratio\n ,store.return_rank\n ,store.currency_rank\n from (\n \tselect \n \t item\n \t,return_ratio\n \t,currency_ratio\n \t,rank() over (order by return_ratio) as return_rank\n \t,rank() over (order by currency_ratio) as currency_rank\n \tfrom\n \t(\tselect sts.ss_item_sk as item\n \t\t,(cast(sum(coalesce(sr.sr_return_quantity,0)) as decimal(15,4))/cast(sum(coalesce(sts.ss_quantity,0)) as decimal(15,4) )) as return_ratio\n \t\t,(cast(sum(coalesce(sr.sr_return_amt,0)) as decimal(15,4))/cast(sum(coalesce(sts.ss_net_paid,0)) as decimal(15,4) )) as currency_ratio\n \t\tfrom \n 
\t\tstore_sales sts left outer join store_returns sr\n \t\t\ton (sts.ss_ticket_number = sr.sr_ticket_number and sts.ss_item_sk = sr.sr_item_sk)\n                ,date_dim\n \t\twhere \n \t\t\tsr.sr_return_amt > 10000 \n \t\t\tand sts.ss_net_profit > 1\n                         and sts.ss_net_paid > 0 \n                         and sts.ss_quantity > 0\n                         and ss_sold_date_sk = d_date_sk\n                         and d_year = 2000\n                         and d_moy = 12\n \t\tgroup by sts.ss_item_sk\n \t) in_store\n ) store\n where  (\n store.return_rank <= 10\n or \n store.currency_rank <= 10\n )\n )\n order by 1,4,5,2\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q5.sql",
    "content": "-- CometBench-DS query 5 derived from TPC-DS query 5 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ssr as\n (select s_store_id,\n        sum(sales_price) as sales,\n        sum(profit) as profit,\n        sum(return_amt) as returns,\n        sum(net_loss) as profit_loss\n from\n  ( select  ss_store_sk as store_sk,\n            ss_sold_date_sk  as date_sk,\n            ss_ext_sales_price as sales_price,\n            ss_net_profit as profit,\n            cast(0 as decimal(7,2)) as return_amt,\n            cast(0 as decimal(7,2)) as net_loss\n    from store_sales\n    union all\n    select sr_store_sk as store_sk,\n           sr_returned_date_sk as date_sk,\n           cast(0 as decimal(7,2)) as sales_price,\n           cast(0 as decimal(7,2)) as profit,\n           sr_return_amt as return_amt,\n           sr_net_loss as net_loss\n    from store_returns\n   ) salesreturns,\n     date_dim,\n     store\n where date_sk = d_date_sk\n       and d_date between cast('2001-08-04' as date) \n                  and (cast('2001-08-04' as date) +  INTERVAL '14 DAYS')\n       and store_sk = s_store_sk\n group by s_store_id)\n ,\n csr as\n (select cp_catalog_page_id,\n        sum(sales_price) as sales,\n        sum(profit) as profit,\n        sum(return_amt) as returns,\n        sum(net_loss) as profit_loss\n from\n  ( select  cs_catalog_page_sk as page_sk,\n            cs_sold_date_sk  as date_sk,\n            cs_ext_sales_price as sales_price,\n            cs_net_profit as profit,\n            cast(0 as decimal(7,2)) as return_amt,\n            cast(0 as decimal(7,2)) as net_loss\n    from catalog_sales\n    union all\n    select cr_catalog_page_sk as page_sk,\n           cr_returned_date_sk as date_sk,\n           cast(0 as decimal(7,2)) as sales_price,\n           cast(0 as decimal(7,2)) as profit,\n           cr_return_amount as return_amt,\n           cr_net_loss as net_loss\n    from catalog_returns\n   ) salesreturns,\n     date_dim,\n     catalog_page\n where date_sk = d_date_sk\n       and d_date between cast('2001-08-04' as date)\n                  and (cast('2001-08-04' as date) +  INTERVAL '14 DAYS')\n       and page_sk = cp_catalog_page_sk\n group by cp_catalog_page_id)\n ,\n wsr as\n (select web_site_id,\n        sum(sales_price) as sales,\n        sum(profit) as profit,\n        sum(return_amt) as returns,\n        sum(net_loss) as profit_loss\n from\n  ( select  ws_web_site_sk as wsr_web_site_sk,\n            ws_sold_date_sk  as date_sk,\n            ws_ext_sales_price as sales_price,\n            ws_net_profit as profit,\n            cast(0 as decimal(7,2)) as return_amt,\n            cast(0 as decimal(7,2)) as net_loss\n    from web_sales\n    union all\n    select ws_web_site_sk as wsr_web_site_sk,\n           wr_returned_date_sk as date_sk,\n           cast(0 as decimal(7,2)) as sales_price,\n           cast(0 as decimal(7,2)) as profit,\n           wr_return_amt as return_amt,\n           wr_net_loss as net_loss\n    from web_returns left outer join web_sales on\n         ( wr_item_sk = ws_item_sk\n           and wr_order_number = ws_order_number)\n   ) salesreturns,\n     date_dim,\n     web_site\n where date_sk = d_date_sk\n       and d_date between cast('2001-08-04' as date)\n                  and (cast('2001-08-04' as date) +  INTERVAL '14 DAYS')\n       and wsr_web_site_sk = web_site_sk\n group by web_site_id)\n  select  channel\n    
    , id\n        , sum(sales) as sales\n        , sum(returns) as returns\n        , sum(profit) as profit\n from \n (select 'store channel' as channel\n        , 'store' || s_store_id as id\n        , sales\n        , returns\n        , (profit - profit_loss) as profit\n from   ssr\n union all\n select 'catalog channel' as channel\n        , 'catalog_page' || cp_catalog_page_id as id\n        , sales\n        , returns\n        , (profit - profit_loss) as profit\n from  csr\n union all\n select 'web channel' as channel\n        , 'web_site' || web_site_id as id\n        , sales\n        , returns\n        , (profit - profit_loss) as profit\n from   wsr\n ) x\n group by rollup (channel, id)\n order by channel\n         ,id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q50.sql",
    "content": "-- CometBench-DS query 50 derived from TPC-DS query 50 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n   s_store_name\n  ,s_company_id\n  ,s_street_number\n  ,s_street_name\n  ,s_street_type\n  ,s_suite_number\n  ,s_city\n  ,s_county\n  ,s_state\n  ,s_zip\n  ,sum(case when (sr_returned_date_sk - ss_sold_date_sk <= 30 ) then 1 else 0 end)  as `30 days` \n  ,sum(case when (sr_returned_date_sk - ss_sold_date_sk > 30) and \n                 (sr_returned_date_sk - ss_sold_date_sk <= 60) then 1 else 0 end )  as `31-60 days` \n  ,sum(case when (sr_returned_date_sk - ss_sold_date_sk > 60) and \n                 (sr_returned_date_sk - ss_sold_date_sk <= 90) then 1 else 0 end)  as `61-90 days` \n  ,sum(case when (sr_returned_date_sk - ss_sold_date_sk > 90) and\n                 (sr_returned_date_sk - ss_sold_date_sk <= 120) then 1 else 0 end)  as `91-120 days` \n  ,sum(case when (sr_returned_date_sk - ss_sold_date_sk  > 120) then 1 else 0 end)  as `>120 days` \nfrom\n   store_sales\n  ,store_returns\n  ,store\n  ,date_dim d1\n  ,date_dim d2\nwhere\n    d2.d_year = 2002\nand d2.d_moy  = 8\nand ss_ticket_number = sr_ticket_number\nand ss_item_sk = sr_item_sk\nand ss_sold_date_sk   = d1.d_date_sk\nand sr_returned_date_sk   = d2.d_date_sk\nand ss_customer_sk = sr_customer_sk\nand ss_store_sk = s_store_sk\ngroup by\n   s_store_name\n  ,s_company_id\n  ,s_street_number\n  ,s_street_name\n  ,s_street_type\n  ,s_suite_number\n  ,s_city\n  ,s_county\n  ,s_state\n  ,s_zip\norder by s_store_name\n        ,s_company_id\n        ,s_street_number\n        ,s_street_name\n        ,s_street_type\n        ,s_suite_number\n        ,s_city\n        ,s_county\n        ,s_state\n        ,s_zip\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q51.sql",
    "content": "-- CometBench-DS query 51 derived from TPC-DS query 51 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nWITH web_v1 as (\nselect\n  ws_item_sk item_sk, d_date,\n  sum(sum(ws_sales_price))\n      over (partition by ws_item_sk order by d_date rows between unbounded preceding and current row) cume_sales\nfrom web_sales\n    ,date_dim\nwhere ws_sold_date_sk=d_date_sk\n  and d_month_seq between 1215 and 1215+11\n  and ws_item_sk is not NULL\ngroup by ws_item_sk, d_date),\nstore_v1 as (\nselect\n  ss_item_sk item_sk, d_date,\n  sum(sum(ss_sales_price))\n      over (partition by ss_item_sk order by d_date rows between unbounded preceding and current row) cume_sales\nfrom store_sales\n    ,date_dim\nwhere ss_sold_date_sk=d_date_sk\n  and d_month_seq between 1215 and 1215+11\n  and ss_item_sk is not NULL\ngroup by ss_item_sk, d_date)\n select  *\nfrom (select item_sk\n     ,d_date\n     ,web_sales\n     ,store_sales\n     ,max(web_sales)\n         over (partition by item_sk order by d_date rows between unbounded preceding and current row) web_cumulative\n     ,max(store_sales)\n         over (partition by item_sk order by d_date rows between unbounded preceding and current row) store_cumulative\n     from (select case when web.item_sk is not null then web.item_sk else store.item_sk end item_sk\n                 ,case when web.d_date is not null then web.d_date else store.d_date end d_date\n                 ,web.cume_sales web_sales\n                 ,store.cume_sales store_sales\n           from web_v1 web full outer join store_v1 store on (web.item_sk = store.item_sk\n                                                          and web.d_date = store.d_date)\n          )x )y\nwhere web_cumulative > store_cumulative\norder by item_sk\n        ,d_date\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q52.sql",
    "content": "-- CometBench-DS query 52 derived from TPC-DS query 52 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  dt.d_year\n \t,item.i_brand_id brand_id\n \t,item.i_brand brand\n \t,sum(ss_ext_sales_price) ext_price\n from date_dim dt\n     ,store_sales\n     ,item\n where dt.d_date_sk = store_sales.ss_sold_date_sk\n    and store_sales.ss_item_sk = item.i_item_sk\n    and item.i_manager_id = 1\n    and dt.d_moy=11\n    and dt.d_year=2000\n group by dt.d_year\n \t,item.i_brand\n \t,item.i_brand_id\n order by dt.d_year\n \t,ext_price desc\n \t,brand_id\n LIMIT 100 ;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q53.sql",
    "content": "-- CometBench-DS query 53 derived from TPC-DS query 53 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  * from \n(select i_manufact_id,\nsum(ss_sales_price) sum_sales,\navg(sum(ss_sales_price)) over (partition by i_manufact_id) avg_quarterly_sales\nfrom item, store_sales, date_dim, store\nwhere ss_item_sk = i_item_sk and\nss_sold_date_sk = d_date_sk and\nss_store_sk = s_store_sk and\nd_month_seq in (1197,1197+1,1197+2,1197+3,1197+4,1197+5,1197+6,1197+7,1197+8,1197+9,1197+10,1197+11) and\n((i_category in ('Books','Children','Electronics') and\ni_class in ('personal','portable','reference','self-help') and\ni_brand in ('scholaramalgamalg #14','scholaramalgamalg #7',\n\t\t'exportiunivamalg #9','scholaramalgamalg #9'))\nor(i_category in ('Women','Music','Men') and\ni_class in ('accessories','classical','fragrances','pants') and\ni_brand in ('amalgimporto #1','edu packscholar #1','exportiimporto #1',\n\t\t'importoamalg #1')))\ngroup by i_manufact_id, d_qoy ) tmp1\nwhere case when avg_quarterly_sales > 0 \n\tthen abs (sum_sales - avg_quarterly_sales)/ avg_quarterly_sales \n\telse null end > 0.1\norder by avg_quarterly_sales,\n\t sum_sales,\n\t i_manufact_id\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q54.sql",
    "content": "-- CometBench-DS query 54 derived from TPC-DS query 54 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith my_customers as (\n select distinct c_customer_sk\n        , c_current_addr_sk\n from   \n        ( select cs_sold_date_sk sold_date_sk,\n                 cs_bill_customer_sk customer_sk,\n                 cs_item_sk item_sk\n          from   catalog_sales\n          union all\n          select ws_sold_date_sk sold_date_sk,\n                 ws_bill_customer_sk customer_sk,\n                 ws_item_sk item_sk\n          from   web_sales\n         ) cs_or_ws_sales,\n         item,\n         date_dim,\n         customer\n where   sold_date_sk = d_date_sk\n         and item_sk = i_item_sk\n         and i_category = 'Men'\n         and i_class = 'shirts'\n         and c_customer_sk = cs_or_ws_sales.customer_sk\n         and d_moy = 4\n         and d_year = 1998\n )\n , my_revenue as (\n select c_customer_sk,\n        sum(ss_ext_sales_price) as revenue\n from   my_customers,\n        store_sales,\n        customer_address,\n        store,\n        date_dim\n where  c_current_addr_sk = ca_address_sk\n        and ca_county = s_county\n        and ca_state = s_state\n        and ss_sold_date_sk = d_date_sk\n        and c_customer_sk = ss_customer_sk\n        and d_month_seq between (select distinct d_month_seq+1\n                                 from   date_dim where d_year = 1998 and d_moy = 4)\n                           and  (select distinct d_month_seq+3\n                                 from   date_dim where d_year = 1998 and d_moy = 4)\n group by c_customer_sk\n )\n , segments as\n (select cast((revenue/50) as int) as segment\n  from   my_revenue\n )\n  select  segment, count(*) as num_customers, segment*50 as segment_base\n from segments\n group by segment\n order by segment, num_customers\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q55.sql",
    "content": "-- CometBench-DS query 55 derived from TPC-DS query 55 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_brand_id brand_id, i_brand brand,\n \tsum(ss_ext_sales_price) ext_price\n from date_dim, store_sales, item\n where d_date_sk = ss_sold_date_sk\n \tand ss_item_sk = i_item_sk\n \tand i_manager_id=20\n \tand d_moy=12\n \tand d_year=1998\n group by i_brand, i_brand_id\n order by ext_price desc, i_brand_id\n LIMIT 100 ;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q56.sql",
    "content": "-- CometBench-DS query 56 derived from TPC-DS query 56 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ss as (\n select i_item_id,sum(ss_ext_sales_price) total_sales\n from\n \tstore_sales,\n \tdate_dim,\n         customer_address,\n         item\n where i_item_id in (select\n     i_item_id\nfrom item\nwhere i_color in ('powder','goldenrod','bisque'))\n and     ss_item_sk              = i_item_sk\n and     ss_sold_date_sk         = d_date_sk\n and     d_year                  = 1998\n and     d_moy                   = 5\n and     ss_addr_sk              = ca_address_sk\n and     ca_gmt_offset           = -5 \n group by i_item_id),\n cs as (\n select i_item_id,sum(cs_ext_sales_price) total_sales\n from\n \tcatalog_sales,\n \tdate_dim,\n         customer_address,\n         item\n where\n         i_item_id               in (select\n  i_item_id\nfrom item\nwhere i_color in ('powder','goldenrod','bisque'))\n and     cs_item_sk              = i_item_sk\n and     cs_sold_date_sk         = d_date_sk\n and     d_year                  = 1998\n and     d_moy                   = 5\n and     cs_bill_addr_sk         = ca_address_sk\n and     ca_gmt_offset           = -5 \n group by i_item_id),\n ws as (\n select i_item_id,sum(ws_ext_sales_price) total_sales\n from\n \tweb_sales,\n \tdate_dim,\n         customer_address,\n         item\n where\n         i_item_id               in (select\n  i_item_id\nfrom item\nwhere i_color in ('powder','goldenrod','bisque'))\n and     ws_item_sk              = i_item_sk\n and     ws_sold_date_sk         = d_date_sk\n and     d_year                  = 1998\n and     d_moy                   = 5\n and     ws_bill_addr_sk         = ca_address_sk\n and     ca_gmt_offset           = -5\n group by i_item_id)\n  select  i_item_id ,sum(total_sales) total_sales\n from  (select * from ss \n        union all\n        select * from cs \n        union all\n        select * from ws) tmp1\n group by i_item_id\n order by total_sales,\n          i_item_id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q57.sql",
    "content": "-- CometBench-DS query 57 derived from TPC-DS query 57 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith v1 as(\n select i_category, i_brand,\n        cc_name,\n        d_year, d_moy,\n        sum(cs_sales_price) sum_sales,\n        avg(sum(cs_sales_price)) over\n          (partition by i_category, i_brand,\n                     cc_name, d_year)\n          avg_monthly_sales,\n        rank() over\n          (partition by i_category, i_brand,\n                     cc_name\n           order by d_year, d_moy) rn\n from item, catalog_sales, date_dim, call_center\n where cs_item_sk = i_item_sk and\n       cs_sold_date_sk = d_date_sk and\n       cc_call_center_sk= cs_call_center_sk and\n       (\n         d_year = 2000 or\n         ( d_year = 2000-1 and d_moy =12) or\n         ( d_year = 2000+1 and d_moy =1)\n       )\n group by i_category, i_brand,\n          cc_name , d_year, d_moy),\n v2 as(\n select v1.cc_name\n        ,v1.d_year, v1.d_moy\n        ,v1.avg_monthly_sales\n        ,v1.sum_sales, v1_lag.sum_sales psum, v1_lead.sum_sales nsum\n from v1, v1 v1_lag, v1 v1_lead\n where v1.i_category = v1_lag.i_category and\n       v1.i_category = v1_lead.i_category and\n       v1.i_brand = v1_lag.i_brand and\n       v1.i_brand = v1_lead.i_brand and\n       v1. cc_name = v1_lag. cc_name and\n       v1. cc_name = v1_lead. cc_name and\n       v1.rn = v1_lag.rn + 1 and\n       v1.rn = v1_lead.rn - 1)\n  select  *\n from v2\n where  d_year = 2000 and\n        avg_monthly_sales > 0 and\n        case when avg_monthly_sales > 0 then abs(sum_sales - avg_monthly_sales) / avg_monthly_sales else null end > 0.1\n order by sum_sales - avg_monthly_sales, psum\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q58.sql",
    "content": "-- CometBench-DS query 58 derived from TPC-DS query 58 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ss_items as\n (select i_item_id item_id\n        ,sum(ss_ext_sales_price) ss_item_rev \n from store_sales\n     ,item\n     ,date_dim\n where ss_item_sk = i_item_sk\n   and d_date in (select d_date\n                  from date_dim\n                  where d_week_seq = (select d_week_seq \n                                      from date_dim\n                                      where d_date = '2000-02-12'))\n   and ss_sold_date_sk   = d_date_sk\n group by i_item_id),\n cs_items as\n (select i_item_id item_id\n        ,sum(cs_ext_sales_price) cs_item_rev\n  from catalog_sales\n      ,item\n      ,date_dim\n where cs_item_sk = i_item_sk\n  and  d_date in (select d_date\n                  from date_dim\n                  where d_week_seq = (select d_week_seq \n                                      from date_dim\n                                      where d_date = '2000-02-12'))\n  and  cs_sold_date_sk = d_date_sk\n group by i_item_id),\n ws_items as\n (select i_item_id item_id\n        ,sum(ws_ext_sales_price) ws_item_rev\n  from web_sales\n      ,item\n      ,date_dim\n where ws_item_sk = i_item_sk\n  and  d_date in (select d_date\n                  from date_dim\n                  where d_week_seq =(select d_week_seq \n                                     from date_dim\n                                     where d_date = '2000-02-12'))\n  and ws_sold_date_sk   = d_date_sk\n group by i_item_id)\n  select  ss_items.item_id\n       ,ss_item_rev\n       ,ss_item_rev/((ss_item_rev+cs_item_rev+ws_item_rev)/3) * 100 ss_dev\n       ,cs_item_rev\n       ,cs_item_rev/((ss_item_rev+cs_item_rev+ws_item_rev)/3) * 100 cs_dev\n       ,ws_item_rev\n       ,ws_item_rev/((ss_item_rev+cs_item_rev+ws_item_rev)/3) * 100 ws_dev\n       ,(ss_item_rev+cs_item_rev+ws_item_rev)/3 average\n from ss_items,cs_items,ws_items\n where ss_items.item_id=cs_items.item_id\n   and ss_items.item_id=ws_items.item_id \n   and ss_item_rev between 0.9 * cs_item_rev and 1.1 * cs_item_rev\n   and ss_item_rev between 0.9 * ws_item_rev and 1.1 * ws_item_rev\n   and cs_item_rev between 0.9 * ss_item_rev and 1.1 * ss_item_rev\n   and cs_item_rev between 0.9 * ws_item_rev and 1.1 * ws_item_rev\n   and ws_item_rev between 0.9 * ss_item_rev and 1.1 * ss_item_rev\n   and ws_item_rev between 0.9 * cs_item_rev and 1.1 * cs_item_rev\n order by item_id\n         ,ss_item_rev\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q59.sql",
    "content": "-- CometBench-DS query 59 derived from TPC-DS query 59 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith wss as \n (select d_week_seq,\n        ss_store_sk,\n        sum(case when (d_day_name='Sunday') then ss_sales_price else null end) sun_sales,\n        sum(case when (d_day_name='Monday') then ss_sales_price else null end) mon_sales,\n        sum(case when (d_day_name='Tuesday') then ss_sales_price else  null end) tue_sales,\n        sum(case when (d_day_name='Wednesday') then ss_sales_price else null end) wed_sales,\n        sum(case when (d_day_name='Thursday') then ss_sales_price else null end) thu_sales,\n        sum(case when (d_day_name='Friday') then ss_sales_price else null end) fri_sales,\n        sum(case when (d_day_name='Saturday') then ss_sales_price else null end) sat_sales\n from store_sales,date_dim\n where d_date_sk = ss_sold_date_sk\n group by d_week_seq,ss_store_sk\n )\n  select  s_store_name1,s_store_id1,d_week_seq1\n       ,sun_sales1/sun_sales2,mon_sales1/mon_sales2\n       ,tue_sales1/tue_sales2,wed_sales1/wed_sales2,thu_sales1/thu_sales2\n       ,fri_sales1/fri_sales2,sat_sales1/sat_sales2\n from\n (select s_store_name s_store_name1,wss.d_week_seq d_week_seq1\n        ,s_store_id s_store_id1,sun_sales sun_sales1\n        ,mon_sales mon_sales1,tue_sales tue_sales1\n        ,wed_sales wed_sales1,thu_sales thu_sales1\n        ,fri_sales fri_sales1,sat_sales sat_sales1\n  from wss,store,date_dim d\n  where d.d_week_seq = wss.d_week_seq and\n        ss_store_sk = s_store_sk and \n        d_month_seq between 1206 and 1206 + 11) y,\n (select s_store_name s_store_name2,wss.d_week_seq d_week_seq2\n        ,s_store_id s_store_id2,sun_sales sun_sales2\n        ,mon_sales mon_sales2,tue_sales tue_sales2\n        ,wed_sales wed_sales2,thu_sales thu_sales2\n        ,fri_sales fri_sales2,sat_sales sat_sales2\n  from wss,store,date_dim d\n  where d.d_week_seq = wss.d_week_seq and\n        ss_store_sk = s_store_sk and \n        d_month_seq between 1206+ 12 and 1206 + 23) x\n where s_store_id1=s_store_id2\n   and d_week_seq1=d_week_seq2-52\n order by s_store_name1,s_store_id1,d_week_seq1\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q6.sql",
    "content": "-- CometBench-DS query 6 derived from TPC-DS query 6 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  a.ca_state state, count(*) cnt\n from customer_address a\n     ,customer c\n     ,store_sales s\n     ,date_dim d\n     ,item i\n where       a.ca_address_sk = c.c_current_addr_sk\n \tand c.c_customer_sk = s.ss_customer_sk\n \tand s.ss_sold_date_sk = d.d_date_sk\n \tand s.ss_item_sk = i.i_item_sk\n \tand d.d_month_seq = \n \t     (select distinct (d_month_seq)\n \t      from date_dim\n               where d_year = 1998\n \t        and d_moy = 3 )\n \tand i.i_current_price > 1.2 * \n             (select avg(j.i_current_price) \n \t     from item j \n \t     where j.i_category = i.i_category)\n group by a.ca_state\n having count(*) >= 10\n order by cnt, a.ca_state \n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q60.sql",
    "content": "-- CometBench-DS query 60 derived from TPC-DS query 60 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ss as (\n select\n          i_item_id,sum(ss_ext_sales_price) total_sales\n from\n \tstore_sales,\n \tdate_dim,\n         customer_address,\n         item\n where\n         i_item_id in (select\n  i_item_id\nfrom\n item\nwhere i_category in ('Shoes'))\n and     ss_item_sk              = i_item_sk\n and     ss_sold_date_sk         = d_date_sk\n and     d_year                  = 2001\n and     d_moy                   = 10\n and     ss_addr_sk              = ca_address_sk\n and     ca_gmt_offset           = -6 \n group by i_item_id),\n cs as (\n select\n          i_item_id,sum(cs_ext_sales_price) total_sales\n from\n \tcatalog_sales,\n \tdate_dim,\n         customer_address,\n         item\n where\n         i_item_id               in (select\n  i_item_id\nfrom\n item\nwhere i_category in ('Shoes'))\n and     cs_item_sk              = i_item_sk\n and     cs_sold_date_sk         = d_date_sk\n and     d_year                  = 2001\n and     d_moy                   = 10\n and     cs_bill_addr_sk         = ca_address_sk\n and     ca_gmt_offset           = -6 \n group by i_item_id),\n ws as (\n select\n          i_item_id,sum(ws_ext_sales_price) total_sales\n from\n \tweb_sales,\n \tdate_dim,\n         customer_address,\n         item\n where\n         i_item_id               in (select\n  i_item_id\nfrom\n item\nwhere i_category in ('Shoes'))\n and     ws_item_sk              = i_item_sk\n and     ws_sold_date_sk         = d_date_sk\n and     d_year                  = 2001\n and     d_moy                   = 10\n and     ws_bill_addr_sk         = ca_address_sk\n and     ca_gmt_offset           = -6\n group by i_item_id)\n  select   \n  i_item_id\n,sum(total_sales) total_sales\n from  (select * from ss \n        union all\n        select * from cs \n        union all\n        select * from ws) tmp1\n group by i_item_id\n order by i_item_id\n      ,total_sales\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q61.sql",
    "content": "-- CometBench-DS query 61 derived from TPC-DS query 61 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  promotions,total,cast(promotions as decimal(15,4))/cast(total as decimal(15,4))*100\nfrom\n  (select sum(ss_ext_sales_price) promotions\n   from  store_sales\n        ,store\n        ,promotion\n        ,date_dim\n        ,customer\n        ,customer_address \n        ,item\n   where ss_sold_date_sk = d_date_sk\n   and   ss_store_sk = s_store_sk\n   and   ss_promo_sk = p_promo_sk\n   and   ss_customer_sk= c_customer_sk\n   and   ca_address_sk = c_current_addr_sk\n   and   ss_item_sk = i_item_sk \n   and   ca_gmt_offset = -6\n   and   i_category = 'Sports'\n   and   (p_channel_dmail = 'Y' or p_channel_email = 'Y' or p_channel_tv = 'Y')\n   and   s_gmt_offset = -6\n   and   d_year = 2002\n   and   d_moy  = 11) promotional_sales,\n  (select sum(ss_ext_sales_price) total\n   from  store_sales\n        ,store\n        ,date_dim\n        ,customer\n        ,customer_address\n        ,item\n   where ss_sold_date_sk = d_date_sk\n   and   ss_store_sk = s_store_sk\n   and   ss_customer_sk= c_customer_sk\n   and   ca_address_sk = c_current_addr_sk\n   and   ss_item_sk = i_item_sk\n   and   ca_gmt_offset = -6\n   and   i_category = 'Sports'\n   and   s_gmt_offset = -6\n   and   d_year = 2002\n   and   d_moy  = 11) all_sales\norder by promotions, total\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q62.sql",
    "content": "-- CometBench-DS query 62 derived from TPC-DS query 62 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n   substr(w_warehouse_name,1,20)\n  ,sm_type\n  ,web_name\n  ,sum(case when (ws_ship_date_sk - ws_sold_date_sk <= 30 ) then 1 else 0 end)  as `30 days` \n  ,sum(case when (ws_ship_date_sk - ws_sold_date_sk > 30) and \n                 (ws_ship_date_sk - ws_sold_date_sk <= 60) then 1 else 0 end )  as `31-60 days` \n  ,sum(case when (ws_ship_date_sk - ws_sold_date_sk > 60) and \n                 (ws_ship_date_sk - ws_sold_date_sk <= 90) then 1 else 0 end)  as `61-90 days` \n  ,sum(case when (ws_ship_date_sk - ws_sold_date_sk > 90) and\n                 (ws_ship_date_sk - ws_sold_date_sk <= 120) then 1 else 0 end)  as `91-120 days` \n  ,sum(case when (ws_ship_date_sk - ws_sold_date_sk  > 120) then 1 else 0 end)  as `>120 days` \nfrom\n   web_sales\n  ,warehouse\n  ,ship_mode\n  ,web_site\n  ,date_dim\nwhere\n    d_month_seq between 1217 and 1217 + 11\nand ws_ship_date_sk   = d_date_sk\nand ws_warehouse_sk   = w_warehouse_sk\nand ws_ship_mode_sk   = sm_ship_mode_sk\nand ws_web_site_sk    = web_site_sk\ngroup by\n   substr(w_warehouse_name,1,20)\n  ,sm_type\n  ,web_name\norder by substr(w_warehouse_name,1,20)\n        ,sm_type\n       ,web_name\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q63.sql",
    "content": "-- CometBench-DS query 63 derived from TPC-DS query 63 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  * \nfrom (select i_manager_id\n             ,sum(ss_sales_price) sum_sales\n             ,avg(sum(ss_sales_price)) over (partition by i_manager_id) avg_monthly_sales\n      from item\n          ,store_sales\n          ,date_dim\n          ,store\n      where ss_item_sk = i_item_sk\n        and ss_sold_date_sk = d_date_sk\n        and ss_store_sk = s_store_sk\n        and d_month_seq in (1181,1181+1,1181+2,1181+3,1181+4,1181+5,1181+6,1181+7,1181+8,1181+9,1181+10,1181+11)\n        and ((    i_category in ('Books','Children','Electronics')\n              and i_class in ('personal','portable','reference','self-help')\n              and i_brand in ('scholaramalgamalg #14','scholaramalgamalg #7',\n\t\t                  'exportiunivamalg #9','scholaramalgamalg #9'))\n           or(    i_category in ('Women','Music','Men')\n              and i_class in ('accessories','classical','fragrances','pants')\n              and i_brand in ('amalgimporto #1','edu packscholar #1','exportiimporto #1',\n\t\t                 'importoamalg #1')))\ngroup by i_manager_id, d_moy) tmp1\nwhere case when avg_monthly_sales > 0 then abs (sum_sales - avg_monthly_sales) / avg_monthly_sales else null end > 0.1\norder by i_manager_id\n        ,avg_monthly_sales\n        ,sum_sales\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q64.sql",
    "content": "-- CometBench-DS query 64 derived from TPC-DS query 64 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith cs_ui as\n (select cs_item_sk\n        ,sum(cs_ext_list_price) as sale,sum(cr_refunded_cash+cr_reversed_charge+cr_store_credit) as refund\n  from catalog_sales\n      ,catalog_returns\n  where cs_item_sk = cr_item_sk\n    and cs_order_number = cr_order_number\n  group by cs_item_sk\n  having sum(cs_ext_list_price)>2*sum(cr_refunded_cash+cr_reversed_charge+cr_store_credit)),\ncross_sales as\n (select i_product_name product_name\n     ,i_item_sk item_sk\n     ,s_store_name store_name\n     ,s_zip store_zip\n     ,ad1.ca_street_number b_street_number\n     ,ad1.ca_street_name b_street_name\n     ,ad1.ca_city b_city\n     ,ad1.ca_zip b_zip\n     ,ad2.ca_street_number c_street_number\n     ,ad2.ca_street_name c_street_name\n     ,ad2.ca_city c_city\n     ,ad2.ca_zip c_zip\n     ,d1.d_year as syear\n     ,d2.d_year as fsyear\n     ,d3.d_year s2year\n     ,count(*) cnt\n     ,sum(ss_wholesale_cost) s1\n     ,sum(ss_list_price) s2\n     ,sum(ss_coupon_amt) s3\n  FROM   store_sales\n        ,store_returns\n        ,cs_ui\n        ,date_dim d1\n        ,date_dim d2\n        ,date_dim d3\n        ,store\n        ,customer\n        ,customer_demographics cd1\n        ,customer_demographics cd2\n        ,promotion\n        ,household_demographics hd1\n        ,household_demographics hd2\n        ,customer_address ad1\n        ,customer_address ad2\n        ,income_band ib1\n        ,income_band ib2\n        ,item\n  WHERE  ss_store_sk = s_store_sk AND\n         ss_sold_date_sk = d1.d_date_sk AND\n         ss_customer_sk = c_customer_sk AND\n         ss_cdemo_sk= cd1.cd_demo_sk AND\n         ss_hdemo_sk = hd1.hd_demo_sk AND\n         ss_addr_sk = ad1.ca_address_sk and\n         ss_item_sk = i_item_sk and\n         ss_item_sk = sr_item_sk and\n         ss_ticket_number = sr_ticket_number and\n         ss_item_sk = cs_ui.cs_item_sk and\n         c_current_cdemo_sk = cd2.cd_demo_sk AND\n         c_current_hdemo_sk = hd2.hd_demo_sk AND\n         c_current_addr_sk = ad2.ca_address_sk and\n         c_first_sales_date_sk = d2.d_date_sk and\n         c_first_shipto_date_sk = d3.d_date_sk and\n         ss_promo_sk = p_promo_sk and\n         hd1.hd_income_band_sk = ib1.ib_income_band_sk and\n         hd2.hd_income_band_sk = ib2.ib_income_band_sk and\n         cd1.cd_marital_status <> cd2.cd_marital_status and\n         i_color in ('light','cyan','burnished','green','almond','smoke') and\n         i_current_price between 22 and 22 + 10 and\n         i_current_price between 22 + 1 and 22 + 15\ngroup by i_product_name\n       ,i_item_sk\n       ,s_store_name\n       ,s_zip\n       ,ad1.ca_street_number\n       ,ad1.ca_street_name\n       ,ad1.ca_city\n       ,ad1.ca_zip\n       ,ad2.ca_street_number\n       ,ad2.ca_street_name\n       ,ad2.ca_city\n       ,ad2.ca_zip\n       ,d1.d_year\n       ,d2.d_year\n       ,d3.d_year\n)\nselect cs1.product_name\n     ,cs1.store_name\n     ,cs1.store_zip\n     ,cs1.b_street_number\n     ,cs1.b_street_name\n     ,cs1.b_city\n     ,cs1.b_zip\n     ,cs1.c_street_number\n     ,cs1.c_street_name\n     ,cs1.c_city\n     ,cs1.c_zip\n     ,cs1.syear\n     ,cs1.cnt\n     ,cs1.s1 as s11\n     ,cs1.s2 as s21\n     ,cs1.s3 as s31\n     ,cs2.s1 as s12\n     ,cs2.s2 as s22\n     ,cs2.s3 as s32\n     ,cs2.syear\n     ,cs2.cnt\nfrom cross_sales 
cs1,cross_sales cs2\nwhere cs1.item_sk=cs2.item_sk and\n     cs1.syear = 2001 and\n     cs2.syear = 2001 + 1 and\n     cs2.cnt <= cs1.cnt and\n     cs1.store_name = cs2.store_name and\n     cs1.store_zip = cs2.store_zip\norder by cs1.product_name\n       ,cs1.store_name\n       ,cs2.cnt\n       ,cs1.s1\n       ,cs2.s1;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q65.sql",
    "content": "-- CometBench-DS query 65 derived from TPC-DS query 65 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect \n\ts_store_name,\n\ti_item_desc,\n\tsc.revenue,\n\ti_current_price,\n\ti_wholesale_cost,\n\ti_brand\n from store, item,\n     (select ss_store_sk, avg(revenue) as ave\n \tfrom\n \t    (select  ss_store_sk, ss_item_sk, \n \t\t     sum(ss_sales_price) as revenue\n \t\tfrom store_sales, date_dim\n \t\twhere ss_sold_date_sk = d_date_sk and d_month_seq between 1186 and 1186+11\n \t\tgroup by ss_store_sk, ss_item_sk) sa\n \tgroup by ss_store_sk) sb,\n     (select  ss_store_sk, ss_item_sk, sum(ss_sales_price) as revenue\n \tfrom store_sales, date_dim\n \twhere ss_sold_date_sk = d_date_sk and d_month_seq between 1186 and 1186+11\n \tgroup by ss_store_sk, ss_item_sk) sc\n where sb.ss_store_sk = sc.ss_store_sk and \n       sc.revenue <= 0.1 * sb.ave and\n       s_store_sk = sc.ss_store_sk and\n       i_item_sk = sc.ss_item_sk\n order by s_store_name, i_item_desc\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q66.sql",
    "content": "-- CometBench-DS query 66 derived from TPC-DS query 66 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect   \n         w_warehouse_name\n \t,w_warehouse_sq_ft\n \t,w_city\n \t,w_county\n \t,w_state\n \t,w_country\n        ,ship_carriers\n        ,year\n \t,sum(jan_sales) as jan_sales\n \t,sum(feb_sales) as feb_sales\n \t,sum(mar_sales) as mar_sales\n \t,sum(apr_sales) as apr_sales\n \t,sum(may_sales) as may_sales\n \t,sum(jun_sales) as jun_sales\n \t,sum(jul_sales) as jul_sales\n \t,sum(aug_sales) as aug_sales\n \t,sum(sep_sales) as sep_sales\n \t,sum(oct_sales) as oct_sales\n \t,sum(nov_sales) as nov_sales\n \t,sum(dec_sales) as dec_sales\n \t,sum(jan_sales/w_warehouse_sq_ft) as jan_sales_per_sq_foot\n \t,sum(feb_sales/w_warehouse_sq_ft) as feb_sales_per_sq_foot\n \t,sum(mar_sales/w_warehouse_sq_ft) as mar_sales_per_sq_foot\n \t,sum(apr_sales/w_warehouse_sq_ft) as apr_sales_per_sq_foot\n \t,sum(may_sales/w_warehouse_sq_ft) as may_sales_per_sq_foot\n \t,sum(jun_sales/w_warehouse_sq_ft) as jun_sales_per_sq_foot\n \t,sum(jul_sales/w_warehouse_sq_ft) as jul_sales_per_sq_foot\n \t,sum(aug_sales/w_warehouse_sq_ft) as aug_sales_per_sq_foot\n \t,sum(sep_sales/w_warehouse_sq_ft) as sep_sales_per_sq_foot\n \t,sum(oct_sales/w_warehouse_sq_ft) as oct_sales_per_sq_foot\n \t,sum(nov_sales/w_warehouse_sq_ft) as nov_sales_per_sq_foot\n \t,sum(dec_sales/w_warehouse_sq_ft) as dec_sales_per_sq_foot\n \t,sum(jan_net) as jan_net\n \t,sum(feb_net) as feb_net\n \t,sum(mar_net) as mar_net\n \t,sum(apr_net) as apr_net\n \t,sum(may_net) as may_net\n \t,sum(jun_net) as jun_net\n \t,sum(jul_net) as jul_net\n \t,sum(aug_net) as aug_net\n \t,sum(sep_net) as sep_net\n \t,sum(oct_net) as oct_net\n \t,sum(nov_net) as nov_net\n \t,sum(dec_net) as dec_net\n from (\n     select \n \tw_warehouse_name\n \t,w_warehouse_sq_ft\n \t,w_city\n \t,w_county\n \t,w_state\n \t,w_country\n \t,'FEDEX' || ',' || 'GERMA' as ship_carriers\n       ,d_year as year\n \t,sum(case when d_moy = 1 \n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as jan_sales\n \t,sum(case when d_moy = 2 \n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as feb_sales\n \t,sum(case when d_moy = 3 \n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as mar_sales\n \t,sum(case when d_moy = 4 \n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as apr_sales\n \t,sum(case when d_moy = 5 \n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as may_sales\n \t,sum(case when d_moy = 6 \n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as jun_sales\n \t,sum(case when d_moy = 7 \n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as jul_sales\n \t,sum(case when d_moy = 8 \n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as aug_sales\n \t,sum(case when d_moy = 9 \n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as sep_sales\n \t,sum(case when d_moy = 10 \n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as oct_sales\n \t,sum(case when d_moy = 11\n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as nov_sales\n \t,sum(case when d_moy = 12\n \t\tthen ws_ext_list_price* ws_quantity else 0 end) as dec_sales\n \t,sum(case when d_moy = 1 \n \t\tthen ws_net_profit * ws_quantity else 0 end) as jan_net\n \t,sum(case when d_moy = 2\n \t\tthen ws_net_profit * ws_quantity else 0 end) as feb_net\n \t,sum(case when d_moy = 3 \n \t\tthen ws_net_profit * ws_quantity else 0 end) as mar_net\n \t,sum(case when 
d_moy = 4 \n \t\tthen ws_net_profit * ws_quantity else 0 end) as apr_net\n \t,sum(case when d_moy = 5 \n \t\tthen ws_net_profit * ws_quantity else 0 end) as may_net\n \t,sum(case when d_moy = 6 \n \t\tthen ws_net_profit * ws_quantity else 0 end) as jun_net\n \t,sum(case when d_moy = 7 \n \t\tthen ws_net_profit * ws_quantity else 0 end) as jul_net\n \t,sum(case when d_moy = 8 \n \t\tthen ws_net_profit * ws_quantity else 0 end) as aug_net\n \t,sum(case when d_moy = 9 \n \t\tthen ws_net_profit * ws_quantity else 0 end) as sep_net\n \t,sum(case when d_moy = 10 \n \t\tthen ws_net_profit * ws_quantity else 0 end) as oct_net\n \t,sum(case when d_moy = 11\n \t\tthen ws_net_profit * ws_quantity else 0 end) as nov_net\n \t,sum(case when d_moy = 12\n \t\tthen ws_net_profit * ws_quantity else 0 end) as dec_net\n     from\n          web_sales\n         ,warehouse\n         ,date_dim\n         ,time_dim\n \t  ,ship_mode\n     where\n            ws_warehouse_sk =  w_warehouse_sk\n        and ws_sold_date_sk = d_date_sk\n        and ws_sold_time_sk = t_time_sk\n \tand ws_ship_mode_sk = sm_ship_mode_sk\n        and d_year = 2001\n \tand t_time between 19072 and 19072+28800 \n \tand sm_carrier in ('FEDEX','GERMA')\n     group by \n        w_warehouse_name\n \t,w_warehouse_sq_ft\n \t,w_city\n \t,w_county\n \t,w_state\n \t,w_country\n       ,d_year\n union all\n     select \n \tw_warehouse_name\n \t,w_warehouse_sq_ft\n \t,w_city\n \t,w_county\n \t,w_state\n \t,w_country\n \t,'FEDEX' || ',' || 'GERMA' as ship_carriers\n       ,d_year as year\n \t,sum(case when d_moy = 1 \n \t\tthen cs_sales_price* cs_quantity else 0 end) as jan_sales\n \t,sum(case when d_moy = 2 \n \t\tthen cs_sales_price* cs_quantity else 0 end) as feb_sales\n \t,sum(case when d_moy = 3 \n \t\tthen cs_sales_price* cs_quantity else 0 end) as mar_sales\n \t,sum(case when d_moy = 4 \n \t\tthen cs_sales_price* cs_quantity else 0 end) as apr_sales\n \t,sum(case when d_moy = 5 \n \t\tthen cs_sales_price* cs_quantity else 0 end) as may_sales\n \t,sum(case when d_moy = 6 \n \t\tthen cs_sales_price* cs_quantity else 0 end) as jun_sales\n \t,sum(case when d_moy = 7 \n \t\tthen cs_sales_price* cs_quantity else 0 end) as jul_sales\n \t,sum(case when d_moy = 8 \n \t\tthen cs_sales_price* cs_quantity else 0 end) as aug_sales\n \t,sum(case when d_moy = 9 \n \t\tthen cs_sales_price* cs_quantity else 0 end) as sep_sales\n \t,sum(case when d_moy = 10 \n \t\tthen cs_sales_price* cs_quantity else 0 end) as oct_sales\n \t,sum(case when d_moy = 11\n \t\tthen cs_sales_price* cs_quantity else 0 end) as nov_sales\n \t,sum(case when d_moy = 12\n \t\tthen cs_sales_price* cs_quantity else 0 end) as dec_sales\n \t,sum(case when d_moy = 1 \n \t\tthen cs_net_paid * cs_quantity else 0 end) as jan_net\n \t,sum(case when d_moy = 2 \n \t\tthen cs_net_paid * cs_quantity else 0 end) as feb_net\n \t,sum(case when d_moy = 3 \n \t\tthen cs_net_paid * cs_quantity else 0 end) as mar_net\n \t,sum(case when d_moy = 4 \n \t\tthen cs_net_paid * cs_quantity else 0 end) as apr_net\n \t,sum(case when d_moy = 5 \n \t\tthen cs_net_paid * cs_quantity else 0 end) as may_net\n \t,sum(case when d_moy = 6 \n \t\tthen cs_net_paid * cs_quantity else 0 end) as jun_net\n \t,sum(case when d_moy = 7 \n \t\tthen cs_net_paid * cs_quantity else 0 end) as jul_net\n \t,sum(case when d_moy = 8 \n \t\tthen cs_net_paid * cs_quantity else 0 end) as aug_net\n \t,sum(case when d_moy = 9 \n \t\tthen cs_net_paid * cs_quantity else 0 end) as sep_net\n \t,sum(case when d_moy = 10 \n \t\tthen cs_net_paid * cs_quantity 
else 0 end) as oct_net\n \t,sum(case when d_moy = 11\n \t\tthen cs_net_paid * cs_quantity else 0 end) as nov_net\n \t,sum(case when d_moy = 12\n \t\tthen cs_net_paid * cs_quantity else 0 end) as dec_net\n     from\n          catalog_sales\n         ,warehouse\n         ,date_dim\n         ,time_dim\n \t ,ship_mode\n     where\n            cs_warehouse_sk =  w_warehouse_sk\n        and cs_sold_date_sk = d_date_sk\n        and cs_sold_time_sk = t_time_sk\n \tand cs_ship_mode_sk = sm_ship_mode_sk\n        and d_year = 2001\n \tand t_time between 19072 AND 19072+28800 \n \tand sm_carrier in ('FEDEX','GERMA')\n     group by \n        w_warehouse_name\n \t,w_warehouse_sq_ft\n \t,w_city\n \t,w_county\n \t,w_state\n \t,w_country\n       ,d_year\n ) x\n group by \n        w_warehouse_name\n \t,w_warehouse_sq_ft\n \t,w_city\n \t,w_county\n \t,w_state\n \t,w_country\n \t,ship_carriers\n       ,year\n order by w_warehouse_name\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q67.sql",
    "content": "-- CometBench-DS query 67 derived from TPC-DS query 67 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  *\nfrom (select i_category\n            ,i_class\n            ,i_brand\n            ,i_product_name\n            ,d_year\n            ,d_qoy\n            ,d_moy\n            ,s_store_id\n            ,sumsales\n            ,rank() over (partition by i_category order by sumsales desc) rk\n      from (select i_category\n                  ,i_class\n                  ,i_brand\n                  ,i_product_name\n                  ,d_year\n                  ,d_qoy\n                  ,d_moy\n                  ,s_store_id\n                  ,sum(coalesce(ss_sales_price*ss_quantity,0)) sumsales\n            from store_sales\n                ,date_dim\n                ,store\n                ,item\n       where  ss_sold_date_sk=d_date_sk\n          and ss_item_sk=i_item_sk\n          and ss_store_sk = s_store_sk\n          and d_month_seq between 1194 and 1194+11\n       group by  rollup(i_category, i_class, i_brand, i_product_name, d_year, d_qoy, d_moy,s_store_id))dw1) dw2\nwhere rk <= 100\norder by i_category\n        ,i_class\n        ,i_brand\n        ,i_product_name\n        ,d_year\n        ,d_qoy\n        ,d_moy\n        ,s_store_id\n        ,sumsales\n        ,rk\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q68.sql",
    "content": "-- CometBench-DS query 68 derived from TPC-DS query 68 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  c_last_name\n       ,c_first_name\n       ,ca_city\n       ,bought_city\n       ,ss_ticket_number\n       ,extended_price\n       ,extended_tax\n       ,list_price\n from (select ss_ticket_number\n             ,ss_customer_sk\n             ,ca_city bought_city\n             ,sum(ss_ext_sales_price) extended_price \n             ,sum(ss_ext_list_price) list_price\n             ,sum(ss_ext_tax) extended_tax \n       from store_sales\n           ,date_dim\n           ,store\n           ,household_demographics\n           ,customer_address \n       where store_sales.ss_sold_date_sk = date_dim.d_date_sk\n         and store_sales.ss_store_sk = store.s_store_sk  \n        and store_sales.ss_hdemo_sk = household_demographics.hd_demo_sk\n        and store_sales.ss_addr_sk = customer_address.ca_address_sk\n        and date_dim.d_dom between 1 and 2 \n        and (household_demographics.hd_dep_count = 8 or\n             household_demographics.hd_vehicle_count= 3)\n        and date_dim.d_year in (2000,2000+1,2000+2)\n        and store.s_city in ('Midway','Fairview')\n       group by ss_ticket_number\n               ,ss_customer_sk\n               ,ss_addr_sk,ca_city) dn\n      ,customer\n      ,customer_address current_addr\n where ss_customer_sk = c_customer_sk\n   and customer.c_current_addr_sk = current_addr.ca_address_sk\n   and current_addr.ca_city <> bought_city\n order by c_last_name\n         ,ss_ticket_number\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q69.sql",
    "content": "-- CometBench-DS query 69 derived from TPC-DS query 69 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n  cd_gender,\n  cd_marital_status,\n  cd_education_status,\n  count(*) cnt1,\n  cd_purchase_estimate,\n  count(*) cnt2,\n  cd_credit_rating,\n  count(*) cnt3\n from\n  customer c,customer_address ca,customer_demographics\n where\n  c.c_current_addr_sk = ca.ca_address_sk and\n  ca_state in ('IN','VA','MS') and\n  cd_demo_sk = c.c_current_cdemo_sk and \n  exists (select *\n          from store_sales,date_dim\n          where c.c_customer_sk = ss_customer_sk and\n                ss_sold_date_sk = d_date_sk and\n                d_year = 2002 and\n                d_moy between 2 and 2+2) and\n   (not exists (select *\n            from web_sales,date_dim\n            where c.c_customer_sk = ws_bill_customer_sk and\n                  ws_sold_date_sk = d_date_sk and\n                  d_year = 2002 and\n                  d_moy between 2 and 2+2) and\n    not exists (select * \n            from catalog_sales,date_dim\n            where c.c_customer_sk = cs_ship_customer_sk and\n                  cs_sold_date_sk = d_date_sk and\n                  d_year = 2002 and\n                  d_moy between 2 and 2+2))\n group by cd_gender,\n          cd_marital_status,\n          cd_education_status,\n          cd_purchase_estimate,\n          cd_credit_rating\n order by cd_gender,\n          cd_marital_status,\n          cd_education_status,\n          cd_purchase_estimate,\n          cd_credit_rating\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q7.sql",
    "content": "-- CometBench-DS query 7 derived from TPC-DS query 7 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_item_id, \n        avg(ss_quantity) agg1,\n        avg(ss_list_price) agg2,\n        avg(ss_coupon_amt) agg3,\n        avg(ss_sales_price) agg4 \n from store_sales, customer_demographics, date_dim, item, promotion\n where ss_sold_date_sk = d_date_sk and\n       ss_item_sk = i_item_sk and\n       ss_cdemo_sk = cd_demo_sk and\n       ss_promo_sk = p_promo_sk and\n       cd_gender = 'M' and \n       cd_marital_status = 'M' and\n       cd_education_status = '4 yr Degree' and\n       (p_channel_email = 'N' or p_channel_event = 'N') and\n       d_year = 2001 \n group by i_item_id\n order by i_item_id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q70.sql",
    "content": "-- CometBench-DS query 70 derived from TPC-DS query 70 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n    sum(ss_net_profit) as total_sum\n   ,s_state\n   ,s_county\n   ,grouping(s_state)+grouping(s_county) as lochierarchy\n   ,rank() over (\n \tpartition by grouping(s_state)+grouping(s_county),\n \tcase when grouping(s_county) = 0 then s_state end \n \torder by sum(ss_net_profit) desc) as rank_within_parent\n from\n    store_sales\n   ,date_dim       d1\n   ,store\n where\n    d1.d_month_seq between 1180 and 1180+11\n and d1.d_date_sk = ss_sold_date_sk\n and s_store_sk  = ss_store_sk\n and s_state in\n             ( select s_state\n               from  (select s_state as s_state,\n \t\t\t    rank() over ( partition by s_state order by sum(ss_net_profit) desc) as ranking\n                      from   store_sales, store, date_dim\n                      where  d_month_seq between 1180 and 1180+11\n \t\t\t    and d_date_sk = ss_sold_date_sk\n \t\t\t    and s_store_sk  = ss_store_sk\n                      group by s_state\n                     ) tmp1 \n               where ranking <= 5\n             )\n group by rollup(s_state,s_county)\n order by\n   lochierarchy desc\n  ,case when lochierarchy = 0 then s_state end\n  ,rank_within_parent\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q71.sql",
    "content": "-- CometBench-DS query 71 derived from TPC-DS query 71 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect i_brand_id brand_id, i_brand brand,t_hour,t_minute,\n \tsum(ext_price) ext_price\n from item, (select ws_ext_sales_price as ext_price, \n                        ws_sold_date_sk as sold_date_sk,\n                        ws_item_sk as sold_item_sk,\n                        ws_sold_time_sk as time_sk  \n                 from web_sales,date_dim\n                 where d_date_sk = ws_sold_date_sk\n                   and d_moy=11\n                   and d_year=2001\n                 union all\n                 select cs_ext_sales_price as ext_price,\n                        cs_sold_date_sk as sold_date_sk,\n                        cs_item_sk as sold_item_sk,\n                        cs_sold_time_sk as time_sk\n                 from catalog_sales,date_dim\n                 where d_date_sk = cs_sold_date_sk\n                   and d_moy=11\n                   and d_year=2001\n                 union all\n                 select ss_ext_sales_price as ext_price,\n                        ss_sold_date_sk as sold_date_sk,\n                        ss_item_sk as sold_item_sk,\n                        ss_sold_time_sk as time_sk\n                 from store_sales,date_dim\n                 where d_date_sk = ss_sold_date_sk\n                   and d_moy=11\n                   and d_year=2001\n                 ) tmp,time_dim\n where\n   sold_item_sk = i_item_sk\n   and i_manager_id=1\n   and time_sk = t_time_sk\n   and (t_meal_time = 'breakfast' or t_meal_time = 'dinner')\n group by i_brand, i_brand_id,t_hour,t_minute\n order by ext_price desc, i_brand_id\n ;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q72.sql",
    "content": "-- CometBench-DS query 72 derived from TPC-DS query 72 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_item_desc\n      ,w_warehouse_name\n      ,d1.d_week_seq\n      ,sum(case when p_promo_sk is null then 1 else 0 end) no_promo\n      ,sum(case when p_promo_sk is not null then 1 else 0 end) promo\n      ,count(*) total_cnt\nfrom catalog_sales\njoin inventory on (cs_item_sk = inv_item_sk)\njoin warehouse on (w_warehouse_sk=inv_warehouse_sk)\njoin item on (i_item_sk = cs_item_sk)\njoin customer_demographics on (cs_bill_cdemo_sk = cd_demo_sk)\njoin household_demographics on (cs_bill_hdemo_sk = hd_demo_sk)\njoin date_dim d1 on (cs_sold_date_sk = d1.d_date_sk)\njoin date_dim d2 on (inv_date_sk = d2.d_date_sk)\njoin date_dim d3 on (cs_ship_date_sk = d3.d_date_sk)\nleft outer join promotion on (cs_promo_sk=p_promo_sk)\nleft outer join catalog_returns on (cr_item_sk = cs_item_sk and cr_order_number = cs_order_number)\nwhere d1.d_week_seq = d2.d_week_seq\n  and inv_quantity_on_hand < cs_quantity \n  and d3.d_date > d1.d_date + 5\n  and hd_buy_potential = '501-1000'\n  and d1.d_year = 1999\n  and cd_marital_status = 'S'\ngroup by i_item_desc,w_warehouse_name,d1.d_week_seq\norder by total_cnt desc, i_item_desc, w_warehouse_name, d_week_seq\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q73.sql",
    "content": "-- CometBench-DS query 73 derived from TPC-DS query 73 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect c_last_name\n       ,c_first_name\n       ,c_salutation\n       ,c_preferred_cust_flag \n       ,ss_ticket_number\n       ,cnt from\n   (select ss_ticket_number\n          ,ss_customer_sk\n          ,count(*) cnt\n    from store_sales,date_dim,store,household_demographics\n    where store_sales.ss_sold_date_sk = date_dim.d_date_sk\n    and store_sales.ss_store_sk = store.s_store_sk  \n    and store_sales.ss_hdemo_sk = household_demographics.hd_demo_sk\n    and date_dim.d_dom between 1 and 2 \n    and (household_demographics.hd_buy_potential = '1001-5000' or\n         household_demographics.hd_buy_potential = '5001-10000')\n    and household_demographics.hd_vehicle_count > 0\n    and case when household_demographics.hd_vehicle_count > 0 then \n             household_demographics.hd_dep_count/ household_demographics.hd_vehicle_count else null end > 1\n    and date_dim.d_year in (1999,1999+1,1999+2)\n    and store.s_county in ('Williamson County','Williamson County','Williamson County','Williamson County')\n    group by ss_ticket_number,ss_customer_sk) dj,customer\n    where ss_customer_sk = c_customer_sk\n      and cnt between 1 and 5\n    order by cnt desc, c_last_name asc;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q74.sql",
    "content": "-- CometBench-DS query 74 derived from TPC-DS query 74 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith year_total as (\n select c_customer_id customer_id\n       ,c_first_name customer_first_name\n       ,c_last_name customer_last_name\n       ,d_year as year\n       ,stddev_samp(ss_net_paid) year_total\n       ,'s' sale_type\n from customer\n     ,store_sales\n     ,date_dim\n where c_customer_sk = ss_customer_sk\n   and ss_sold_date_sk = d_date_sk\n   and d_year in (2001,2001+1)\n group by c_customer_id\n         ,c_first_name\n         ,c_last_name\n         ,d_year\n union all\n select c_customer_id customer_id\n       ,c_first_name customer_first_name\n       ,c_last_name customer_last_name\n       ,d_year as year\n       ,stddev_samp(ws_net_paid) year_total\n       ,'w' sale_type\n from customer\n     ,web_sales\n     ,date_dim\n where c_customer_sk = ws_bill_customer_sk\n   and ws_sold_date_sk = d_date_sk\n   and d_year in (2001,2001+1)\n group by c_customer_id\n         ,c_first_name\n         ,c_last_name\n         ,d_year\n         )\n  select \n        t_s_secyear.customer_id, t_s_secyear.customer_first_name, t_s_secyear.customer_last_name\n from year_total t_s_firstyear\n     ,year_total t_s_secyear\n     ,year_total t_w_firstyear\n     ,year_total t_w_secyear\n where t_s_secyear.customer_id = t_s_firstyear.customer_id\n         and t_s_firstyear.customer_id = t_w_secyear.customer_id\n         and t_s_firstyear.customer_id = t_w_firstyear.customer_id\n         and t_s_firstyear.sale_type = 's'\n         and t_w_firstyear.sale_type = 'w'\n         and t_s_secyear.sale_type = 's'\n         and t_w_secyear.sale_type = 'w'\n         and t_s_firstyear.year = 2001\n         and t_s_secyear.year = 2001+1\n         and t_w_firstyear.year = 2001\n         and t_w_secyear.year = 2001+1\n         and t_s_firstyear.year_total > 0\n         and t_w_firstyear.year_total > 0\n         and case when t_w_firstyear.year_total > 0 then t_w_secyear.year_total / t_w_firstyear.year_total else null end\n           > case when t_s_firstyear.year_total > 0 then t_s_secyear.year_total / t_s_firstyear.year_total else null end\n order by 3,2,1\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q75.sql",
    "content": "-- CometBench-DS query 75 derived from TPC-DS query 75 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nWITH all_sales AS (\n SELECT d_year\n       ,i_brand_id\n       ,i_class_id\n       ,i_category_id\n       ,i_manufact_id\n       ,SUM(sales_cnt) AS sales_cnt\n       ,SUM(sales_amt) AS sales_amt\n FROM (SELECT d_year\n             ,i_brand_id\n             ,i_class_id\n             ,i_category_id\n             ,i_manufact_id\n             ,cs_quantity - COALESCE(cr_return_quantity,0) AS sales_cnt\n             ,cs_ext_sales_price - COALESCE(cr_return_amount,0.0) AS sales_amt\n       FROM catalog_sales JOIN item ON i_item_sk=cs_item_sk\n                          JOIN date_dim ON d_date_sk=cs_sold_date_sk\n                          LEFT JOIN catalog_returns ON (cs_order_number=cr_order_number \n                                                    AND cs_item_sk=cr_item_sk)\n       WHERE i_category='Shoes'\n       UNION\n       SELECT d_year\n             ,i_brand_id\n             ,i_class_id\n             ,i_category_id\n             ,i_manufact_id\n             ,ss_quantity - COALESCE(sr_return_quantity,0) AS sales_cnt\n             ,ss_ext_sales_price - COALESCE(sr_return_amt,0.0) AS sales_amt\n       FROM store_sales JOIN item ON i_item_sk=ss_item_sk\n                        JOIN date_dim ON d_date_sk=ss_sold_date_sk\n                        LEFT JOIN store_returns ON (ss_ticket_number=sr_ticket_number \n                                                AND ss_item_sk=sr_item_sk)\n       WHERE i_category='Shoes'\n       UNION\n       SELECT d_year\n             ,i_brand_id\n             ,i_class_id\n             ,i_category_id\n             ,i_manufact_id\n             ,ws_quantity - COALESCE(wr_return_quantity,0) AS sales_cnt\n             ,ws_ext_sales_price - COALESCE(wr_return_amt,0.0) AS sales_amt\n       FROM web_sales JOIN item ON i_item_sk=ws_item_sk\n                      JOIN date_dim ON d_date_sk=ws_sold_date_sk\n                      LEFT JOIN web_returns ON (ws_order_number=wr_order_number \n                                            AND ws_item_sk=wr_item_sk)\n       WHERE i_category='Shoes') sales_detail\n GROUP BY d_year, i_brand_id, i_class_id, i_category_id, i_manufact_id)\n SELECT  prev_yr.d_year AS prev_year\n                          ,curr_yr.d_year AS year\n                          ,curr_yr.i_brand_id\n                          ,curr_yr.i_class_id\n                          ,curr_yr.i_category_id\n                          ,curr_yr.i_manufact_id\n                          ,prev_yr.sales_cnt AS prev_yr_cnt\n                          ,curr_yr.sales_cnt AS curr_yr_cnt\n                          ,curr_yr.sales_cnt-prev_yr.sales_cnt AS sales_cnt_diff\n                          ,curr_yr.sales_amt-prev_yr.sales_amt AS sales_amt_diff\n FROM all_sales curr_yr, all_sales prev_yr\n WHERE curr_yr.i_brand_id=prev_yr.i_brand_id\n   AND curr_yr.i_class_id=prev_yr.i_class_id\n   AND curr_yr.i_category_id=prev_yr.i_category_id\n   AND curr_yr.i_manufact_id=prev_yr.i_manufact_id\n   AND curr_yr.d_year=2000\n   AND prev_yr.d_year=2000-1\n   AND CAST(curr_yr.sales_cnt AS DECIMAL(17,2))/CAST(prev_yr.sales_cnt AS DECIMAL(17,2))<0.9\n ORDER BY sales_cnt_diff,sales_amt_diff\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q76.sql",
    "content": "-- CometBench-DS query 76 derived from TPC-DS query 76 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  channel, col_name, d_year, d_qoy, i_category, COUNT(*) sales_cnt, SUM(ext_sales_price) sales_amt FROM (\n        SELECT 'store' as channel, 'ss_customer_sk' col_name, d_year, d_qoy, i_category, ss_ext_sales_price ext_sales_price\n         FROM store_sales, item, date_dim\n         WHERE ss_customer_sk IS NULL\n           AND ss_sold_date_sk=d_date_sk\n           AND ss_item_sk=i_item_sk\n        UNION ALL\n        SELECT 'web' as channel, 'ws_ship_hdemo_sk' col_name, d_year, d_qoy, i_category, ws_ext_sales_price ext_sales_price\n         FROM web_sales, item, date_dim\n         WHERE ws_ship_hdemo_sk IS NULL\n           AND ws_sold_date_sk=d_date_sk\n           AND ws_item_sk=i_item_sk\n        UNION ALL\n        SELECT 'catalog' as channel, 'cs_bill_customer_sk' col_name, d_year, d_qoy, i_category, cs_ext_sales_price ext_sales_price\n         FROM catalog_sales, item, date_dim\n         WHERE cs_bill_customer_sk IS NULL\n           AND cs_sold_date_sk=d_date_sk\n           AND cs_item_sk=i_item_sk) foo\nGROUP BY channel, col_name, d_year, d_qoy, i_category\nORDER BY channel, col_name, d_year, d_qoy, i_category\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q77.sql",
    "content": "-- CometBench-DS query 77 derived from TPC-DS query 77 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ss as\n (select s_store_sk,\n         sum(ss_ext_sales_price) as sales,\n         sum(ss_net_profit) as profit\n from store_sales,\n      date_dim,\n      store\n where ss_sold_date_sk = d_date_sk\n       and d_date between cast('2001-08-11' as date) \n                  and (cast('2001-08-11' as date) +  INTERVAL '30 DAYS') \n       and ss_store_sk = s_store_sk\n group by s_store_sk)\n ,\n sr as\n (select s_store_sk,\n         sum(sr_return_amt) as returns,\n         sum(sr_net_loss) as profit_loss\n from store_returns,\n      date_dim,\n      store\n where sr_returned_date_sk = d_date_sk\n       and d_date between cast('2001-08-11' as date)\n                  and (cast('2001-08-11' as date) +  INTERVAL '30 DAYS')\n       and sr_store_sk = s_store_sk\n group by s_store_sk), \n cs as\n (select cs_call_center_sk,\n        sum(cs_ext_sales_price) as sales,\n        sum(cs_net_profit) as profit\n from catalog_sales,\n      date_dim\n where cs_sold_date_sk = d_date_sk\n       and d_date between cast('2001-08-11' as date)\n                  and (cast('2001-08-11' as date) +  INTERVAL '30 DAYS')\n group by cs_call_center_sk \n ), \n cr as\n (select cr_call_center_sk,\n         sum(cr_return_amount) as returns,\n         sum(cr_net_loss) as profit_loss\n from catalog_returns,\n      date_dim\n where cr_returned_date_sk = d_date_sk\n       and d_date between cast('2001-08-11' as date)\n                  and (cast('2001-08-11' as date) +  INTERVAL '30 DAYS')\n group by cr_call_center_sk\n ), \n ws as\n ( select wp_web_page_sk,\n        sum(ws_ext_sales_price) as sales,\n        sum(ws_net_profit) as profit\n from web_sales,\n      date_dim,\n      web_page\n where ws_sold_date_sk = d_date_sk\n       and d_date between cast('2001-08-11' as date)\n                  and (cast('2001-08-11' as date) +  INTERVAL '30 DAYS')\n       and ws_web_page_sk = wp_web_page_sk\n group by wp_web_page_sk), \n wr as\n (select wp_web_page_sk,\n        sum(wr_return_amt) as returns,\n        sum(wr_net_loss) as profit_loss\n from web_returns,\n      date_dim,\n      web_page\n where wr_returned_date_sk = d_date_sk\n       and d_date between cast('2001-08-11' as date)\n                  and (cast('2001-08-11' as date) +  INTERVAL '30 DAYS')\n       and wr_web_page_sk = wp_web_page_sk\n group by wp_web_page_sk)\n  select  channel\n        , id\n        , sum(sales) as sales\n        , sum(returns) as returns\n        , sum(profit) as profit\n from \n (select 'store channel' as channel\n        , ss.s_store_sk as id\n        , sales\n        , coalesce(returns, 0) as returns\n        , (profit - coalesce(profit_loss,0)) as profit\n from   ss left join sr\n        on  ss.s_store_sk = sr.s_store_sk\n union all\n select 'catalog channel' as channel\n        , cs_call_center_sk as id\n        , sales\n        , returns\n        , (profit - profit_loss) as profit\n from  cs\n       , cr\n union all\n select 'web channel' as channel\n        , ws.wp_web_page_sk as id\n        , sales\n        , coalesce(returns, 0) returns\n        , (profit - coalesce(profit_loss,0)) as profit\n from   ws left join wr\n        on  ws.wp_web_page_sk = wr.wp_web_page_sk\n ) x\n group by rollup (channel, id)\n order by channel\n         ,id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q78.sql",
    "content": "-- CometBench-DS query 78 derived from TPC-DS query 78 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ws as\n  (select d_year AS ws_sold_year, ws_item_sk,\n    ws_bill_customer_sk ws_customer_sk,\n    sum(ws_quantity) ws_qty,\n    sum(ws_wholesale_cost) ws_wc,\n    sum(ws_sales_price) ws_sp\n   from web_sales\n   left join web_returns on wr_order_number=ws_order_number and ws_item_sk=wr_item_sk\n   join date_dim on ws_sold_date_sk = d_date_sk\n   where wr_order_number is null\n   group by d_year, ws_item_sk, ws_bill_customer_sk\n   ),\ncs as\n  (select d_year AS cs_sold_year, cs_item_sk,\n    cs_bill_customer_sk cs_customer_sk,\n    sum(cs_quantity) cs_qty,\n    sum(cs_wholesale_cost) cs_wc,\n    sum(cs_sales_price) cs_sp\n   from catalog_sales\n   left join catalog_returns on cr_order_number=cs_order_number and cs_item_sk=cr_item_sk\n   join date_dim on cs_sold_date_sk = d_date_sk\n   where cr_order_number is null\n   group by d_year, cs_item_sk, cs_bill_customer_sk\n   ),\nss as\n  (select d_year AS ss_sold_year, ss_item_sk,\n    ss_customer_sk,\n    sum(ss_quantity) ss_qty,\n    sum(ss_wholesale_cost) ss_wc,\n    sum(ss_sales_price) ss_sp\n   from store_sales\n   left join store_returns on sr_ticket_number=ss_ticket_number and ss_item_sk=sr_item_sk\n   join date_dim on ss_sold_date_sk = d_date_sk\n   where sr_ticket_number is null\n   group by d_year, ss_item_sk, ss_customer_sk\n   )\n select \nss_customer_sk,\nround(ss_qty/(coalesce(ws_qty,0)+coalesce(cs_qty,0)),2) ratio,\nss_qty store_qty, ss_wc store_wholesale_cost, ss_sp store_sales_price,\ncoalesce(ws_qty,0)+coalesce(cs_qty,0) other_chan_qty,\ncoalesce(ws_wc,0)+coalesce(cs_wc,0) other_chan_wholesale_cost,\ncoalesce(ws_sp,0)+coalesce(cs_sp,0) other_chan_sales_price\nfrom ss\nleft join ws on (ws_sold_year=ss_sold_year and ws_item_sk=ss_item_sk and ws_customer_sk=ss_customer_sk)\nleft join cs on (cs_sold_year=ss_sold_year and cs_item_sk=ss_item_sk and cs_customer_sk=ss_customer_sk)\nwhere (coalesce(ws_qty,0)>0 or coalesce(cs_qty, 0)>0) and ss_sold_year=2001\norder by \n  ss_customer_sk,\n  ss_qty desc, ss_wc desc, ss_sp desc,\n  other_chan_qty,\n  other_chan_wholesale_cost,\n  other_chan_sales_price,\n  ratio\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q79.sql",
    "content": "-- CometBench-DS query 79 derived from TPC-DS query 79 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect \n  c_last_name,c_first_name,substr(s_city,1,30),ss_ticket_number,amt,profit\n  from\n   (select ss_ticket_number\n          ,ss_customer_sk\n          ,store.s_city\n          ,sum(ss_coupon_amt) amt\n          ,sum(ss_net_profit) profit\n    from store_sales,date_dim,store,household_demographics\n    where store_sales.ss_sold_date_sk = date_dim.d_date_sk\n    and store_sales.ss_store_sk = store.s_store_sk  \n    and store_sales.ss_hdemo_sk = household_demographics.hd_demo_sk\n    and (household_demographics.hd_dep_count = 0 or household_demographics.hd_vehicle_count > 4)\n    and date_dim.d_dow = 1\n    and date_dim.d_year in (1999,1999+1,1999+2) \n    and store.s_number_employees between 200 and 295\n    group by ss_ticket_number,ss_customer_sk,ss_addr_sk,store.s_city) ms,customer\n    where ss_customer_sk = c_customer_sk\n order by c_last_name,c_first_name,substr(s_city,1,30), profit\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q8.sql",
    "content": "-- CometBench-DS query 8 derived from TPC-DS query 8 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  s_store_name\n      ,sum(ss_net_profit)\n from store_sales\n     ,date_dim\n     ,store,\n     (select ca_zip\n     from (\n      SELECT substr(ca_zip,1,5) ca_zip\n      FROM customer_address\n      WHERE substr(ca_zip,1,5) IN (\n                          '19100','41548','51640','49699','88329','55986',\n                          '85119','19510','61020','95452','26235',\n                          '51102','16733','42819','27823','90192',\n                          '31905','28865','62197','23750','81398',\n                          '95288','45114','82060','12313','25218',\n                          '64386','46400','77230','69271','43672',\n                          '36521','34217','13017','27936','42766',\n                          '59233','26060','27477','39981','93402',\n                          '74270','13932','51731','71642','17710',\n                          '85156','21679','70840','67191','39214',\n                          '35273','27293','17128','15458','31615',\n                          '60706','67657','54092','32775','14683',\n                          '32206','62543','43053','11297','58216',\n                          '49410','14710','24501','79057','77038',\n                          '91286','32334','46298','18326','67213',\n                          '65382','40315','56115','80162','55956',\n                          '81583','73588','32513','62880','12201',\n                          '11592','17014','83832','61796','57872',\n                          '78829','69912','48524','22016','26905',\n                          '48511','92168','63051','25748','89786',\n                          '98827','86404','53029','37524','14039',\n                          '50078','34487','70142','18697','40129',\n                          '60642','42810','62667','57183','46414',\n                          '58463','71211','46364','34851','54884',\n                          '25382','25239','74126','21568','84204',\n                          '13607','82518','32982','36953','86001',\n                          '79278','21745','64444','35199','83181',\n                          '73255','86177','98043','90392','13882',\n                          '47084','17859','89526','42072','20233',\n                          '52745','75000','22044','77013','24182',\n                          '52554','56138','43440','86100','48791',\n                          '21883','17096','15965','31196','74903',\n                          '19810','35763','92020','55176','54433',\n                          '68063','71919','44384','16612','32109',\n                          '28207','14762','89933','10930','27616',\n                          '56809','14244','22733','33177','29784',\n                          '74968','37887','11299','34692','85843',\n                          '83663','95421','19323','17406','69264',\n                          '28341','50150','79121','73974','92917',\n                          '21229','32254','97408','46011','37169',\n                          '18146','27296','62927','68812','47734',\n                          '86572','12620','80252','50173','27261',\n                          '29534','23488','42184','23695','45868',\n                          '12910','23429','29052','63228','30731',\n                          '15747','25827','22332','62349','56661',\n     
                     '44652','51862','57007','22773','40361',\n                          '65238','19327','17282','44708','35484',\n                          '34064','11148','92729','22995','18833',\n                          '77528','48917','17256','93166','68576',\n                          '71096','56499','35096','80551','82424',\n                          '17700','32748','78969','46820','57725',\n                          '46179','54677','98097','62869','83959',\n                          '66728','19716','48326','27420','53458',\n                          '69056','84216','36688','63957','41469',\n                          '66843','18024','81950','21911','58387',\n                          '58103','19813','34581','55347','17171',\n                          '35914','75043','75088','80541','26802',\n                          '28849','22356','57721','77084','46385',\n                          '59255','29308','65885','70673','13306',\n                          '68788','87335','40987','31654','67560',\n                          '92309','78116','65961','45018','16548',\n                          '67092','21818','33716','49449','86150',\n                          '12156','27574','43201','50977','52839',\n                          '33234','86611','71494','17823','57172',\n                          '59869','34086','51052','11320','39717',\n                          '79604','24672','70555','38378','91135',\n                          '15567','21606','74994','77168','38607',\n                          '27384','68328','88944','40203','37893',\n                          '42726','83549','48739','55652','27543',\n                          '23109','98908','28831','45011','47525',\n                          '43870','79404','35780','42136','49317',\n                          '14574','99586','21107','14302','83882',\n                          '81272','92552','14916','87533','86518',\n                          '17862','30741','96288','57886','30304',\n                          '24201','79457','36728','49833','35182',\n                          '20108','39858','10804','47042','20439',\n                          '54708','59027','82499','75311','26548',\n                          '53406','92060','41152','60446','33129',\n                          '43979','16903','60319','35550','33887',\n                          '25463','40343','20726','44429')\n     intersect\n      select ca_zip\n      from (SELECT substr(ca_zip,1,5) ca_zip,count(*) cnt\n            FROM customer_address, customer\n            WHERE ca_address_sk = c_current_addr_sk and\n                  c_preferred_cust_flag='Y'\n            group by ca_zip\n            having count(*) > 10)A1)A2) V1\n where ss_store_sk = s_store_sk\n  and ss_sold_date_sk = d_date_sk\n  and d_qoy = 1 and d_year = 2000\n  and (substr(s_zip,1,2) = substr(V1.ca_zip,1,2))\n group by s_store_name\n order by s_store_name\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q80.sql",
    "content": "-- CometBench-DS query 80 derived from TPC-DS query 80 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ssr as\n (select  s_store_id as store_id,\n          sum(ss_ext_sales_price) as sales,\n          sum(coalesce(sr_return_amt, 0)) as returns,\n          sum(ss_net_profit - coalesce(sr_net_loss, 0)) as profit\n  from store_sales left outer join store_returns on\n         (ss_item_sk = sr_item_sk and ss_ticket_number = sr_ticket_number),\n     date_dim,\n     store,\n     item,\n     promotion\n where ss_sold_date_sk = d_date_sk\n       and d_date between cast('2002-08-04' as date) \n                  and (cast('2002-08-04' as date) +  INTERVAL '30 DAYS')\n       and ss_store_sk = s_store_sk\n       and ss_item_sk = i_item_sk\n       and i_current_price > 50\n       and ss_promo_sk = p_promo_sk\n       and p_channel_tv = 'N'\n group by s_store_id)\n ,\n csr as\n (select  cp_catalog_page_id as catalog_page_id,\n          sum(cs_ext_sales_price) as sales,\n          sum(coalesce(cr_return_amount, 0)) as returns,\n          sum(cs_net_profit - coalesce(cr_net_loss, 0)) as profit\n  from catalog_sales left outer join catalog_returns on\n         (cs_item_sk = cr_item_sk and cs_order_number = cr_order_number),\n     date_dim,\n     catalog_page,\n     item,\n     promotion\n where cs_sold_date_sk = d_date_sk\n       and d_date between cast('2002-08-04' as date)\n                  and (cast('2002-08-04' as date) +  INTERVAL '30 DAYS')\n        and cs_catalog_page_sk = cp_catalog_page_sk\n       and cs_item_sk = i_item_sk\n       and i_current_price > 50\n       and cs_promo_sk = p_promo_sk\n       and p_channel_tv = 'N'\ngroup by cp_catalog_page_id)\n ,\n wsr as\n (select  web_site_id,\n          sum(ws_ext_sales_price) as sales,\n          sum(coalesce(wr_return_amt, 0)) as returns,\n          sum(ws_net_profit - coalesce(wr_net_loss, 0)) as profit\n  from web_sales left outer join web_returns on\n         (ws_item_sk = wr_item_sk and ws_order_number = wr_order_number),\n     date_dim,\n     web_site,\n     item,\n     promotion\n where ws_sold_date_sk = d_date_sk\n       and d_date between cast('2002-08-04' as date)\n                  and (cast('2002-08-04' as date) +  INTERVAL '30 DAYS')\n        and ws_web_site_sk = web_site_sk\n       and ws_item_sk = i_item_sk\n       and i_current_price > 50\n       and ws_promo_sk = p_promo_sk\n       and p_channel_tv = 'N'\ngroup by web_site_id)\n  select  channel\n        , id\n        , sum(sales) as sales\n        , sum(returns) as returns\n        , sum(profit) as profit\n from \n (select 'store channel' as channel\n        , 'store' || store_id as id\n        , sales\n        , returns\n        , profit\n from   ssr\n union all\n select 'catalog channel' as channel\n        , 'catalog_page' || catalog_page_id as id\n        , sales\n        , returns\n        , profit\n from  csr\n union all\n select 'web channel' as channel\n        , 'web_site' || web_site_id as id\n        , sales\n        , returns\n        , profit\n from   wsr\n ) x\n group by rollup (channel, id)\n order by channel\n         ,id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q81.sql",
    "content": "-- CometBench-DS query 81 derived from TPC-DS query 81 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith customer_total_return as\n (select cr_returning_customer_sk as ctr_customer_sk\n        ,ca_state as ctr_state, \n \tsum(cr_return_amt_inc_tax) as ctr_total_return\n from catalog_returns\n     ,date_dim\n     ,customer_address\n where cr_returned_date_sk = d_date_sk \n   and d_year =1998\n   and cr_returning_addr_sk = ca_address_sk \n group by cr_returning_customer_sk\n         ,ca_state )\n  select  c_customer_id,c_salutation,c_first_name,c_last_name,ca_street_number,ca_street_name\n                   ,ca_street_type,ca_suite_number,ca_city,ca_county,ca_state,ca_zip,ca_country,ca_gmt_offset\n                  ,ca_location_type,ctr_total_return\n from customer_total_return ctr1\n     ,customer_address\n     ,customer\n where ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2\n \t\t\t  from customer_total_return ctr2 \n                  \t  where ctr1.ctr_state = ctr2.ctr_state)\n       and ca_address_sk = c_current_addr_sk\n       and ca_state = 'TX'\n       and ctr1.ctr_customer_sk = c_customer_sk\n order by c_customer_id,c_salutation,c_first_name,c_last_name,ca_street_number,ca_street_name\n                   ,ca_street_type,ca_suite_number,ca_city,ca_county,ca_state,ca_zip,ca_country,ca_gmt_offset\n                  ,ca_location_type,ctr_total_return\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q82.sql",
    "content": "-- CometBench-DS query 82 derived from TPC-DS query 82 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  i_item_id\n       ,i_item_desc\n       ,i_current_price\n from item, inventory, date_dim, store_sales\n where i_current_price between 69 and 69+30\n and inv_item_sk = i_item_sk\n and d_date_sk=inv_date_sk\n and d_date between cast('1998-06-06' as date) and (cast('1998-06-06' as date) +  INTERVAL '60 DAYS')\n and i_manufact_id in (105,513,180,137)\n and inv_quantity_on_hand between 100 and 500\n and ss_item_sk = i_item_sk\n group by i_item_id,i_item_desc,i_current_price\n order by i_item_id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q83.sql",
    "content": "-- CometBench-DS query 83 derived from TPC-DS query 83 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith sr_items as\n (select i_item_id item_id,\n        sum(sr_return_quantity) sr_item_qty\n from store_returns,\n      item,\n      date_dim\n where sr_item_sk = i_item_sk\n and   d_date    in \n\t(select d_date\n\tfrom date_dim\n\twhere d_week_seq in \n\t\t(select d_week_seq\n\t\tfrom date_dim\n\t  where d_date in ('2000-04-29','2000-09-09','2000-11-02')))\n and   sr_returned_date_sk   = d_date_sk\n group by i_item_id),\n cr_items as\n (select i_item_id item_id,\n        sum(cr_return_quantity) cr_item_qty\n from catalog_returns,\n      item,\n      date_dim\n where cr_item_sk = i_item_sk\n and   d_date    in \n\t(select d_date\n\tfrom date_dim\n\twhere d_week_seq in \n\t\t(select d_week_seq\n\t\tfrom date_dim\n\t  where d_date in ('2000-04-29','2000-09-09','2000-11-02')))\n and   cr_returned_date_sk   = d_date_sk\n group by i_item_id),\n wr_items as\n (select i_item_id item_id,\n        sum(wr_return_quantity) wr_item_qty\n from web_returns,\n      item,\n      date_dim\n where wr_item_sk = i_item_sk\n and   d_date    in \n\t(select d_date\n\tfrom date_dim\n\twhere d_week_seq in \n\t\t(select d_week_seq\n\t\tfrom date_dim\n\t\twhere d_date in ('2000-04-29','2000-09-09','2000-11-02')))\n and   wr_returned_date_sk   = d_date_sk\n group by i_item_id)\n  select  sr_items.item_id\n       ,sr_item_qty\n       ,sr_item_qty/(sr_item_qty+cr_item_qty+wr_item_qty)/3.0 * 100 sr_dev\n       ,cr_item_qty\n       ,cr_item_qty/(sr_item_qty+cr_item_qty+wr_item_qty)/3.0 * 100 cr_dev\n       ,wr_item_qty\n       ,wr_item_qty/(sr_item_qty+cr_item_qty+wr_item_qty)/3.0 * 100 wr_dev\n       ,(sr_item_qty+cr_item_qty+wr_item_qty)/3.0 average\n from sr_items\n     ,cr_items\n     ,wr_items\n where sr_items.item_id=cr_items.item_id\n   and sr_items.item_id=wr_items.item_id \n order by sr_items.item_id\n         ,sr_item_qty\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q84.sql",
    "content": "-- CometBench-DS query 84 derived from TPC-DS query 84 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  c_customer_id as customer_id\n       , coalesce(c_last_name,'') || ', ' || coalesce(c_first_name,'') as customername\n from customer\n     ,customer_address\n     ,customer_demographics\n     ,household_demographics\n     ,income_band\n     ,store_returns\n where ca_city\t        =  'White Oak'\n   and c_current_addr_sk = ca_address_sk\n   and ib_lower_bound   >=  45626\n   and ib_upper_bound   <=  45626 + 50000\n   and ib_income_band_sk = hd_income_band_sk\n   and cd_demo_sk = c_current_cdemo_sk\n   and hd_demo_sk = c_current_hdemo_sk\n   and sr_cdemo_sk = cd_demo_sk\n order by c_customer_id\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q85.sql",
    "content": "-- CometBench-DS query 85 derived from TPC-DS query 85 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  substr(r_reason_desc,1,20)\n       ,avg(ws_quantity)\n       ,avg(wr_refunded_cash)\n       ,avg(wr_fee)\n from web_sales, web_returns, web_page, customer_demographics cd1,\n      customer_demographics cd2, customer_address, date_dim, reason \n where ws_web_page_sk = wp_web_page_sk\n   and ws_item_sk = wr_item_sk\n   and ws_order_number = wr_order_number\n   and ws_sold_date_sk = d_date_sk and d_year = 2001\n   and cd1.cd_demo_sk = wr_refunded_cdemo_sk \n   and cd2.cd_demo_sk = wr_returning_cdemo_sk\n   and ca_address_sk = wr_refunded_addr_sk\n   and r_reason_sk = wr_reason_sk\n   and\n   (\n    (\n     cd1.cd_marital_status = 'D'\n     and\n     cd1.cd_marital_status = cd2.cd_marital_status\n     and\n     cd1.cd_education_status = 'Primary'\n     and \n     cd1.cd_education_status = cd2.cd_education_status\n     and\n     ws_sales_price between 100.00 and 150.00\n    )\n   or\n    (\n     cd1.cd_marital_status = 'U'\n     and\n     cd1.cd_marital_status = cd2.cd_marital_status\n     and\n     cd1.cd_education_status = 'Unknown' \n     and\n     cd1.cd_education_status = cd2.cd_education_status\n     and\n     ws_sales_price between 50.00 and 100.00\n    )\n   or\n    (\n     cd1.cd_marital_status = 'M'\n     and\n     cd1.cd_marital_status = cd2.cd_marital_status\n     and\n     cd1.cd_education_status = 'Advanced Degree'\n     and\n     cd1.cd_education_status = cd2.cd_education_status\n     and\n     ws_sales_price between 150.00 and 200.00\n    )\n   )\n   and\n   (\n    (\n     ca_country = 'United States'\n     and\n     ca_state in ('SC', 'IN', 'VA')\n     and ws_net_profit between 100 and 200  \n    )\n    or\n    (\n     ca_country = 'United States'\n     and\n     ca_state in ('WA', 'KS', 'KY')\n     and ws_net_profit between 150 and 300  \n    )\n    or\n    (\n     ca_country = 'United States'\n     and\n     ca_state in ('SD', 'WI', 'NE')\n     and ws_net_profit between 50 and 250  \n    )\n   )\ngroup by r_reason_desc\norder by substr(r_reason_desc,1,20)\n        ,avg(ws_quantity)\n        ,avg(wr_refunded_cash)\n        ,avg(wr_fee)\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q86.sql",
    "content": "-- CometBench-DS query 86 derived from TPC-DS query 86 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect   \n    sum(ws_net_paid) as total_sum\n   ,i_category\n   ,i_class\n   ,grouping(i_category)+grouping(i_class) as lochierarchy\n   ,rank() over (\n \tpartition by grouping(i_category)+grouping(i_class),\n \tcase when grouping(i_class) = 0 then i_category end \n \torder by sum(ws_net_paid) desc) as rank_within_parent\n from\n    web_sales\n   ,date_dim       d1\n   ,item\n where\n    d1.d_month_seq between 1205 and 1205+11\n and d1.d_date_sk = ws_sold_date_sk\n and i_item_sk  = ws_item_sk\n group by rollup(i_category,i_class)\n order by\n   lochierarchy desc,\n   case when lochierarchy = 0 then i_category end,\n   rank_within_parent\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q87.sql",
    "content": "-- CometBench-DS query 87 derived from TPC-DS query 87 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect count(*) \nfrom ((select distinct c_last_name, c_first_name, d_date\n       from store_sales, date_dim, customer\n       where store_sales.ss_sold_date_sk = date_dim.d_date_sk\n         and store_sales.ss_customer_sk = customer.c_customer_sk\n         and d_month_seq between 1189 and 1189+11)\n       except\n      (select distinct c_last_name, c_first_name, d_date\n       from catalog_sales, date_dim, customer\n       where catalog_sales.cs_sold_date_sk = date_dim.d_date_sk\n         and catalog_sales.cs_bill_customer_sk = customer.c_customer_sk\n         and d_month_seq between 1189 and 1189+11)\n       except\n      (select distinct c_last_name, c_first_name, d_date\n       from web_sales, date_dim, customer\n       where web_sales.ws_sold_date_sk = date_dim.d_date_sk\n         and web_sales.ws_bill_customer_sk = customer.c_customer_sk\n         and d_month_seq between 1189 and 1189+11)\n) cool_cust\n;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q88.sql",
    "content": "-- CometBench-DS query 88 derived from TPC-DS query 88 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  *\nfrom\n (select count(*) h8_30_to_9\n from store_sales, household_demographics , time_dim, store\n where ss_sold_time_sk = time_dim.t_time_sk   \n     and ss_hdemo_sk = household_demographics.hd_demo_sk \n     and ss_store_sk = s_store_sk\n     and time_dim.t_hour = 8\n     and time_dim.t_minute >= 30\n     and ((household_demographics.hd_dep_count = 2 and household_demographics.hd_vehicle_count<=2+2) or\n          (household_demographics.hd_dep_count = 1 and household_demographics.hd_vehicle_count<=1+2) or\n          (household_demographics.hd_dep_count = 4 and household_demographics.hd_vehicle_count<=4+2)) \n     and store.s_store_name = 'ese') s1,\n (select count(*) h9_to_9_30 \n from store_sales, household_demographics , time_dim, store\n where ss_sold_time_sk = time_dim.t_time_sk\n     and ss_hdemo_sk = household_demographics.hd_demo_sk\n     and ss_store_sk = s_store_sk \n     and time_dim.t_hour = 9 \n     and time_dim.t_minute < 30\n     and ((household_demographics.hd_dep_count = 2 and household_demographics.hd_vehicle_count<=2+2) or\n          (household_demographics.hd_dep_count = 1 and household_demographics.hd_vehicle_count<=1+2) or\n          (household_demographics.hd_dep_count = 4 and household_demographics.hd_vehicle_count<=4+2))\n     and store.s_store_name = 'ese') s2,\n (select count(*) h9_30_to_10 \n from store_sales, household_demographics , time_dim, store\n where ss_sold_time_sk = time_dim.t_time_sk\n     and ss_hdemo_sk = household_demographics.hd_demo_sk\n     and ss_store_sk = s_store_sk\n     and time_dim.t_hour = 9\n     and time_dim.t_minute >= 30\n     and ((household_demographics.hd_dep_count = 2 and household_demographics.hd_vehicle_count<=2+2) or\n          (household_demographics.hd_dep_count = 1 and household_demographics.hd_vehicle_count<=1+2) or\n          (household_demographics.hd_dep_count = 4 and household_demographics.hd_vehicle_count<=4+2))\n     and store.s_store_name = 'ese') s3,\n (select count(*) h10_to_10_30\n from store_sales, household_demographics , time_dim, store\n where ss_sold_time_sk = time_dim.t_time_sk\n     and ss_hdemo_sk = household_demographics.hd_demo_sk\n     and ss_store_sk = s_store_sk\n     and time_dim.t_hour = 10 \n     and time_dim.t_minute < 30\n     and ((household_demographics.hd_dep_count = 2 and household_demographics.hd_vehicle_count<=2+2) or\n          (household_demographics.hd_dep_count = 1 and household_demographics.hd_vehicle_count<=1+2) or\n          (household_demographics.hd_dep_count = 4 and household_demographics.hd_vehicle_count<=4+2))\n     and store.s_store_name = 'ese') s4,\n (select count(*) h10_30_to_11\n from store_sales, household_demographics , time_dim, store\n where ss_sold_time_sk = time_dim.t_time_sk\n     and ss_hdemo_sk = household_demographics.hd_demo_sk\n     and ss_store_sk = s_store_sk\n     and time_dim.t_hour = 10 \n     and time_dim.t_minute >= 30\n     and ((household_demographics.hd_dep_count = 2 and household_demographics.hd_vehicle_count<=2+2) or\n          (household_demographics.hd_dep_count = 1 and household_demographics.hd_vehicle_count<=1+2) or\n          (household_demographics.hd_dep_count = 4 and household_demographics.hd_vehicle_count<=4+2))\n     and store.s_store_name = 'ese') s5,\n (select count(*) h11_to_11_30\n 
from store_sales, household_demographics , time_dim, store\n where ss_sold_time_sk = time_dim.t_time_sk\n     and ss_hdemo_sk = household_demographics.hd_demo_sk\n     and ss_store_sk = s_store_sk \n     and time_dim.t_hour = 11\n     and time_dim.t_minute < 30\n     and ((household_demographics.hd_dep_count = 2 and household_demographics.hd_vehicle_count<=2+2) or\n          (household_demographics.hd_dep_count = 1 and household_demographics.hd_vehicle_count<=1+2) or\n          (household_demographics.hd_dep_count = 4 and household_demographics.hd_vehicle_count<=4+2))\n     and store.s_store_name = 'ese') s6,\n (select count(*) h11_30_to_12\n from store_sales, household_demographics , time_dim, store\n where ss_sold_time_sk = time_dim.t_time_sk\n     and ss_hdemo_sk = household_demographics.hd_demo_sk\n     and ss_store_sk = s_store_sk\n     and time_dim.t_hour = 11\n     and time_dim.t_minute >= 30\n     and ((household_demographics.hd_dep_count = 2 and household_demographics.hd_vehicle_count<=2+2) or\n          (household_demographics.hd_dep_count = 1 and household_demographics.hd_vehicle_count<=1+2) or\n          (household_demographics.hd_dep_count = 4 and household_demographics.hd_vehicle_count<=4+2))\n     and store.s_store_name = 'ese') s7,\n (select count(*) h12_to_12_30\n from store_sales, household_demographics , time_dim, store\n where ss_sold_time_sk = time_dim.t_time_sk\n     and ss_hdemo_sk = household_demographics.hd_demo_sk\n     and ss_store_sk = s_store_sk\n     and time_dim.t_hour = 12\n     and time_dim.t_minute < 30\n     and ((household_demographics.hd_dep_count = 2 and household_demographics.hd_vehicle_count<=2+2) or\n          (household_demographics.hd_dep_count = 1 and household_demographics.hd_vehicle_count<=1+2) or\n          (household_demographics.hd_dep_count = 4 and household_demographics.hd_vehicle_count<=4+2))\n     and store.s_store_name = 'ese') s8\n;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q89.sql",
    "content": "-- CometBench-DS query 89 derived from TPC-DS query 89 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  *\nfrom(\nselect i_category, i_class, i_brand,\n       s_store_name, s_company_name,\n       d_moy,\n       sum(ss_sales_price) sum_sales,\n       avg(sum(ss_sales_price)) over\n         (partition by i_category, i_brand, s_store_name, s_company_name)\n         avg_monthly_sales\nfrom item, store_sales, date_dim, store\nwhere ss_item_sk = i_item_sk and\n      ss_sold_date_sk = d_date_sk and\n      ss_store_sk = s_store_sk and\n      d_year in (2001) and\n        ((i_category in ('Children','Jewelry','Home') and\n          i_class in ('infants','birdal','flatware')\n         )\n      or (i_category in ('Electronics','Music','Books') and\n          i_class in ('audio','classical','science') \n        ))\ngroup by i_category, i_class, i_brand,\n         s_store_name, s_company_name, d_moy) tmp1\nwhere case when (avg_monthly_sales <> 0) then (abs(sum_sales - avg_monthly_sales) / avg_monthly_sales) else null end > 0.1\norder by sum_sales - avg_monthly_sales, s_store_name\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q9.sql",
    "content": "-- CometBench-DS query 9 derived from TPC-DS query 9 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect case when (select count(*) \n                  from store_sales \n                  where ss_quantity between 1 and 20) > 31002\n            then (select avg(ss_ext_discount_amt) \n                  from store_sales \n                  where ss_quantity between 1 and 20) \n            else (select avg(ss_net_profit)\n                  from store_sales\n                  where ss_quantity between 1 and 20) end bucket1 ,\n       case when (select count(*)\n                  from store_sales\n                  where ss_quantity between 21 and 40) > 588\n            then (select avg(ss_ext_discount_amt)\n                  from store_sales\n                  where ss_quantity between 21 and 40) \n            else (select avg(ss_net_profit)\n                  from store_sales\n                  where ss_quantity between 21 and 40) end bucket2,\n       case when (select count(*)\n                  from store_sales\n                  where ss_quantity between 41 and 60) > 2456\n            then (select avg(ss_ext_discount_amt)\n                  from store_sales\n                  where ss_quantity between 41 and 60)\n            else (select avg(ss_net_profit)\n                  from store_sales\n                  where ss_quantity between 41 and 60) end bucket3,\n       case when (select count(*)\n                  from store_sales\n                  where ss_quantity between 61 and 80) > 21645\n            then (select avg(ss_ext_discount_amt)\n                  from store_sales\n                  where ss_quantity between 61 and 80)\n            else (select avg(ss_net_profit)\n                  from store_sales\n                  where ss_quantity between 61 and 80) end bucket4,\n       case when (select count(*)\n                  from store_sales\n                  where ss_quantity between 81 and 100) > 20553\n            then (select avg(ss_ext_discount_amt)\n                  from store_sales\n                  where ss_quantity between 81 and 100)\n            else (select avg(ss_net_profit)\n                  from store_sales\n                  where ss_quantity between 81 and 100) end bucket5\nfrom reason\nwhere r_reason_sk = 1\n;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q90.sql",
    "content": "-- CometBench-DS query 90 derived from TPC-DS query 90 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  cast(amc as decimal(15,4))/cast(pmc as decimal(15,4)) am_pm_ratio\n from ( select count(*) amc\n       from web_sales, household_demographics , time_dim, web_page\n       where ws_sold_time_sk = time_dim.t_time_sk\n         and ws_ship_hdemo_sk = household_demographics.hd_demo_sk\n         and ws_web_page_sk = web_page.wp_web_page_sk\n         and time_dim.t_hour between 9 and 9+1\n         and household_demographics.hd_dep_count = 2\n         and web_page.wp_char_count between 5000 and 5200) at,\n      ( select count(*) pmc\n       from web_sales, household_demographics , time_dim, web_page\n       where ws_sold_time_sk = time_dim.t_time_sk\n         and ws_ship_hdemo_sk = household_demographics.hd_demo_sk\n         and ws_web_page_sk = web_page.wp_web_page_sk\n         and time_dim.t_hour between 15 and 15+1\n         and household_demographics.hd_dep_count = 2\n         and web_page.wp_char_count between 5000 and 5200) pt\n order by am_pm_ratio\n  LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q91.sql",
    "content": "-- CometBench-DS query 91 derived from TPC-DS query 91 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n        cc_call_center_id Call_Center,\n        cc_name Call_Center_Name,\n        cc_manager Manager,\n        sum(cr_net_loss) Returns_Loss\nfrom\n        call_center,\n        catalog_returns,\n        date_dim,\n        customer,\n        customer_address,\n        customer_demographics,\n        household_demographics\nwhere\n        cr_call_center_sk       = cc_call_center_sk\nand     cr_returned_date_sk     = d_date_sk\nand     cr_returning_customer_sk= c_customer_sk\nand     cd_demo_sk              = c_current_cdemo_sk\nand     hd_demo_sk              = c_current_hdemo_sk\nand     ca_address_sk           = c_current_addr_sk\nand     d_year                  = 2002 \nand     d_moy                   = 11\nand     ( (cd_marital_status       = 'M' and cd_education_status     = 'Unknown')\n        or(cd_marital_status       = 'W' and cd_education_status     = 'Advanced Degree'))\nand     hd_buy_potential like 'Unknown%'\nand     ca_gmt_offset           = -6\ngroup by cc_call_center_id,cc_name,cc_manager,cd_marital_status,cd_education_status\norder by sum(cr_net_loss) desc;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q92.sql",
    "content": "-- CometBench-DS query 92 derived from TPC-DS query 92 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n   sum(ws_ext_discount_amt)  as `Excess Discount Amount` \nfrom \n    web_sales \n   ,item \n   ,date_dim\nwhere\ni_manufact_id = 914\nand i_item_sk = ws_item_sk \nand d_date between '2001-01-25' and \n        (cast('2001-01-25' as date) + INTERVAL '90 DAYS')\nand d_date_sk = ws_sold_date_sk \nand ws_ext_discount_amt  \n     > ( \n         SELECT \n            1.3 * avg(ws_ext_discount_amt) \n         FROM \n            web_sales \n           ,date_dim\n         WHERE \n              ws_item_sk = i_item_sk \n          and d_date between '2001-01-25' and\n                             (cast('2001-01-25' as date) + INTERVAL '90 DAYS')\n          and d_date_sk = ws_sold_date_sk \n      ) \norder by sum(ws_ext_discount_amt)\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q93.sql",
    "content": "-- CometBench-DS query 93 derived from TPC-DS query 93 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  ss_customer_sk\n            ,sum(act_sales) sumsales\n      from (select ss_item_sk\n                  ,ss_ticket_number\n                  ,ss_customer_sk\n                  ,case when sr_return_quantity is not null then (ss_quantity-sr_return_quantity)*ss_sales_price\n                                                            else (ss_quantity*ss_sales_price) end act_sales\n            from store_sales left outer join store_returns on (sr_item_sk = ss_item_sk\n                                                               and sr_ticket_number = ss_ticket_number)\n                ,reason\n            where sr_reason_sk = r_reason_sk\n              and r_reason_desc = 'Did not get it on time') t\n      group by ss_customer_sk\n      order by sumsales, ss_customer_sk\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q94.sql",
    "content": "-- CometBench-DS query 94 derived from TPC-DS query 94 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n   count(distinct ws_order_number) as `order count`\n  ,sum(ws_ext_ship_cost) as `total shipping cost`\n  ,sum(ws_net_profit) as `total net profit`\nfrom\n   web_sales ws1\n  ,date_dim\n  ,customer_address\n  ,web_site\nwhere\n    d_date between '1999-4-01' and \n           (cast('1999-4-01' as date) + INTERVAL '60 DAYS')\nand ws1.ws_ship_date_sk = d_date_sk\nand ws1.ws_ship_addr_sk = ca_address_sk\nand ca_state = 'WI'\nand ws1.ws_web_site_sk = web_site_sk\nand web_company_name = 'pri'\nand exists (select *\n            from web_sales ws2\n            where ws1.ws_order_number = ws2.ws_order_number\n              and ws1.ws_warehouse_sk <> ws2.ws_warehouse_sk)\nand not exists(select *\n               from web_returns wr1\n               where ws1.ws_order_number = wr1.wr_order_number)\norder by count(distinct ws_order_number)\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q95.sql",
    "content": "-- CometBench-DS query 95 derived from TPC-DS query 95 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ws_wh as\n(select ws1.ws_order_number,ws1.ws_warehouse_sk wh1,ws2.ws_warehouse_sk wh2\n from web_sales ws1,web_sales ws2\n where ws1.ws_order_number = ws2.ws_order_number\n   and ws1.ws_warehouse_sk <> ws2.ws_warehouse_sk)\n select  \n   count(distinct ws_order_number) as `order count`\n  ,sum(ws_ext_ship_cost) as `total shipping cost`\n  ,sum(ws_net_profit) as `total net profit`\nfrom\n   web_sales ws1\n  ,date_dim\n  ,customer_address\n  ,web_site\nwhere\n    d_date between '2002-5-01' and \n           (cast('2002-5-01' as date) + INTERVAL '60 DAYS')\nand ws1.ws_ship_date_sk = d_date_sk\nand ws1.ws_ship_addr_sk = ca_address_sk\nand ca_state = 'MA'\nand ws1.ws_web_site_sk = web_site_sk\nand web_company_name = 'pri'\nand ws1.ws_order_number in (select ws_order_number\n                            from ws_wh)\nand ws1.ws_order_number in (select wr_order_number\n                            from web_returns,ws_wh\n                            where wr_order_number = ws_wh.ws_order_number)\norder by count(distinct ws_order_number)\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q96.sql",
    "content": "-- CometBench-DS query 96 derived from TPC-DS query 96 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  count(*) \nfrom store_sales\n    ,household_demographics \n    ,time_dim, store\nwhere ss_sold_time_sk = time_dim.t_time_sk   \n    and ss_hdemo_sk = household_demographics.hd_demo_sk \n    and ss_store_sk = s_store_sk\n    and time_dim.t_hour = 8\n    and time_dim.t_minute >= 30\n    and household_demographics.hd_dep_count = 5\n    and store.s_store_name = 'ese'\norder by count(*)\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q97.sql",
    "content": "-- CometBench-DS query 97 derived from TPC-DS query 97 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nwith ssci as (\nselect ss_customer_sk customer_sk\n      ,ss_item_sk item_sk\nfrom store_sales,date_dim\nwhere ss_sold_date_sk = d_date_sk\n  and d_month_seq between 1211 and 1211 + 11\ngroup by ss_customer_sk\n        ,ss_item_sk),\ncsci as(\n select cs_bill_customer_sk customer_sk\n      ,cs_item_sk item_sk\nfrom catalog_sales,date_dim\nwhere cs_sold_date_sk = d_date_sk\n  and d_month_seq between 1211 and 1211 + 11\ngroup by cs_bill_customer_sk\n        ,cs_item_sk)\n select  sum(case when ssci.customer_sk is not null and csci.customer_sk is null then 1 else 0 end) store_only\n      ,sum(case when ssci.customer_sk is null and csci.customer_sk is not null then 1 else 0 end) catalog_only\n      ,sum(case when ssci.customer_sk is not null and csci.customer_sk is not null then 1 else 0 end) store_and_catalog\nfrom ssci full outer join csci on (ssci.customer_sk=csci.customer_sk\n                               and ssci.item_sk = csci.item_sk)\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q98.sql",
    "content": "-- CometBench-DS query 98 derived from TPC-DS query 98 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect i_item_id\n      ,i_item_desc \n      ,i_category \n      ,i_class \n      ,i_current_price\n      ,sum(ss_ext_sales_price) as itemrevenue \n      ,sum(ss_ext_sales_price)*100/sum(sum(ss_ext_sales_price)) over\n          (partition by i_class) as revenueratio\nfrom\t\n\tstore_sales\n    \t,item \n    \t,date_dim\nwhere \n\tss_item_sk = i_item_sk \n  \tand i_category in ('Shoes', 'Music', 'Men')\n  \tand ss_sold_date_sk = d_date_sk\n\tand d_date between cast('2000-01-05' as date) \n\t\t\t\tand (cast('2000-01-05' as date) + INTERVAL '30 DAYS')\ngroup by \n\ti_item_id\n        ,i_item_desc \n        ,i_category\n        ,i_class\n        ,i_current_price\norder by \n\ti_category\n        ,i_class\n        ,i_item_id\n        ,i_item_desc\n        ,revenueratio;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpcds/q99.sql",
    "content": "-- CometBench-DS query 99 derived from TPC-DS query 99 under the terms of the TPC Fair Use Policy.\n-- TPC-DS queries are Copyright 2021 Transaction Processing Performance Council.\n-- This query was generated at scale factor 1.\nselect  \n   substr(w_warehouse_name,1,20)\n  ,sm_type\n  ,cc_name\n  ,sum(case when (cs_ship_date_sk - cs_sold_date_sk <= 30 ) then 1 else 0 end)  as `30 days` \n  ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 30) and \n                 (cs_ship_date_sk - cs_sold_date_sk <= 60) then 1 else 0 end )  as `31-60 days` \n  ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 60) and \n                 (cs_ship_date_sk - cs_sold_date_sk <= 90) then 1 else 0 end)  as `61-90 days` \n  ,sum(case when (cs_ship_date_sk - cs_sold_date_sk > 90) and\n                 (cs_ship_date_sk - cs_sold_date_sk <= 120) then 1 else 0 end)  as `91-120 days` \n  ,sum(case when (cs_ship_date_sk - cs_sold_date_sk  > 120) then 1 else 0 end)  as `>120 days` \nfrom\n   catalog_sales\n  ,warehouse\n  ,ship_mode\n  ,call_center\n  ,date_dim\nwhere\n    d_month_seq between 1188 and 1188 + 11\nand cs_ship_date_sk   = d_date_sk\nand cs_warehouse_sk   = w_warehouse_sk\nand cs_ship_mode_sk   = sm_ship_mode_sk\nand cs_call_center_sk = cc_call_center_sk\ngroup by\n   substr(w_warehouse_name,1,20)\n  ,sm_type\n  ,cc_name\norder by substr(w_warehouse_name,1,20)\n        ,sm_type\n        ,cc_name\n LIMIT 100;\n\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q1.sql",
    "content": "-- CometBench-H query 1 derived from TPC-H query 1 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tl_returnflag,\n\tl_linestatus,\n\tsum(l_quantity) as sum_qty,\n\tsum(l_extendedprice) as sum_base_price,\n\tsum(l_extendedprice * (1 - l_discount)) as sum_disc_price,\n\tsum(l_extendedprice * (1 - l_discount) * (1 + l_tax)) as sum_charge,\n\tavg(l_quantity) as avg_qty,\n\tavg(l_extendedprice) as avg_price,\n\tavg(l_discount) as avg_disc,\n\tcount(*) as count_order\nfrom\n\tlineitem\nwhere\n\tl_shipdate <= date '1998-12-01' - interval '68 days'\ngroup by\n\tl_returnflag,\n\tl_linestatus\norder by\n\tl_returnflag,\n\tl_linestatus;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q10.sql",
    "content": "-- CometBench-H query 10 derived from TPC-H query 10 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tc_custkey,\n\tc_name,\n\tsum(l_extendedprice * (1 - l_discount)) as revenue,\n\tc_acctbal,\n\tn_name,\n\tc_address,\n\tc_phone,\n\tc_comment\nfrom\n\tcustomer,\n\torders,\n\tlineitem,\n\tnation\nwhere\n\tc_custkey = o_custkey\n\tand l_orderkey = o_orderkey\n\tand o_orderdate >= date '1993-07-01'\n\tand o_orderdate < date '1993-07-01' + interval '3' month\n\tand l_returnflag = 'R'\n\tand c_nationkey = n_nationkey\ngroup by\n\tc_custkey,\n\tc_name,\n\tc_acctbal,\n\tc_phone,\n\tn_name,\n\tc_address,\n\tc_comment\norder by\n\trevenue desc limit 20;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q11.sql",
    "content": "-- CometBench-H query 11 derived from TPC-H query 11 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tps_partkey,\n\tsum(ps_supplycost * ps_availqty) as value\nfrom\n\tpartsupp,\n\tsupplier,\n\tnation\nwhere\n\tps_suppkey = s_suppkey\n\tand s_nationkey = n_nationkey\n\tand n_name = 'ALGERIA'\ngroup by\n\tps_partkey having\n\t\tsum(ps_supplycost * ps_availqty) > (\n\t\t\tselect\n\t\t\t\tsum(ps_supplycost * ps_availqty) * 0.0001000000\n\t\t\tfrom\n\t\t\t\tpartsupp,\n\t\t\t\tsupplier,\n\t\t\t\tnation\n\t\t\twhere\n\t\t\t\tps_suppkey = s_suppkey\n\t\t\t\tand s_nationkey = n_nationkey\n\t\t\t\tand n_name = 'ALGERIA'\n\t\t)\norder by\n\tvalue desc;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q12.sql",
    "content": "-- CometBench-H query 12 derived from TPC-H query 12 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tl_shipmode,\n\tsum(case\n\t\twhen o_orderpriority = '1-URGENT'\n\t\t\tor o_orderpriority = '2-HIGH'\n\t\t\tthen 1\n\t\telse 0\n\tend) as high_line_count,\n\tsum(case\n\t\twhen o_orderpriority <> '1-URGENT'\n\t\t\tand o_orderpriority <> '2-HIGH'\n\t\t\tthen 1\n\t\telse 0\n\tend) as low_line_count\nfrom\n\torders,\n\tlineitem\nwhere\n\to_orderkey = l_orderkey\n\tand l_shipmode in ('FOB', 'SHIP')\n\tand l_commitdate < l_receiptdate\n\tand l_shipdate < l_commitdate\n\tand l_receiptdate >= date '1995-01-01'\n\tand l_receiptdate < date '1995-01-01' + interval '1' year\ngroup by\n\tl_shipmode\norder by\n\tl_shipmode;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q13.sql",
    "content": "-- CometBench-H query 13 derived from TPC-H query 13 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tc_count,\n\tcount(*) as custdist\nfrom\n\t(\n\t\tselect\n\t\t\tc_custkey,\n\t\t\tcount(o_orderkey)\n\t\tfrom\n\t\t\tcustomer left outer join orders on\n\t\t\t\tc_custkey = o_custkey\n\t\t\t\tand o_comment not like '%express%requests%'\n\t\tgroup by\n\t\t\tc_custkey\n\t) as c_orders (c_custkey, c_count)\ngroup by\n\tc_count\norder by\n\tcustdist desc,\n\tc_count desc;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q14.sql",
    "content": "-- CometBench-H query 14 derived from TPC-H query 14 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\t100.00 * sum(case\n\t\twhen p_type like 'PROMO%'\n\t\t\tthen l_extendedprice * (1 - l_discount)\n\t\telse 0\n\tend) / sum(l_extendedprice * (1 - l_discount)) as promo_revenue\nfrom\n\tlineitem,\n\tpart\nwhere\n\tl_partkey = p_partkey\n\tand l_shipdate >= date '1995-02-01'\n\tand l_shipdate < date '1995-02-01' + interval '1' month;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q15.sql",
    "content": "-- CometBench-H query 15 derived from TPC-H query 15 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\ncreate view revenue0 (supplier_no, total_revenue) as\n\tselect\n\t\tl_suppkey,\n\t\tsum(l_extendedprice * (1 - l_discount))\n\tfrom\n\t\tlineitem\n\twhere\n\t\tl_shipdate >= date '1996-08-01'\n\t\tand l_shipdate < date '1996-08-01' + interval '3' month\n\tgroup by\n\t\tl_suppkey;\nselect\n\ts_suppkey,\n\ts_name,\n\ts_address,\n\ts_phone,\n\ttotal_revenue\nfrom\n\tsupplier,\n\trevenue0\nwhere\n\ts_suppkey = supplier_no\n\tand total_revenue = (\n\t\tselect\n\t\t\tmax(total_revenue)\n\t\tfrom\n\t\t\trevenue0\n\t)\norder by\n\ts_suppkey;\ndrop view revenue0;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q16.sql",
    "content": "-- CometBench-H query 16 derived from TPC-H query 16 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tp_brand,\n\tp_type,\n\tp_size,\n\tcount(distinct ps_suppkey) as supplier_cnt\nfrom\n\tpartsupp,\n\tpart\nwhere\n\tp_partkey = ps_partkey\n\tand p_brand <> 'Brand#14'\n\tand p_type not like 'SMALL PLATED%'\n\tand p_size in (14, 6, 5, 31, 49, 15, 41, 47)\n\tand ps_suppkey not in (\n\t\tselect\n\t\t\ts_suppkey\n\t\tfrom\n\t\t\tsupplier\n\t\twhere\n\t\t\ts_comment like '%Customer%Complaints%'\n\t)\ngroup by\n\tp_brand,\n\tp_type,\n\tp_size\norder by\n\tsupplier_cnt desc,\n\tp_brand,\n\tp_type,\n\tp_size;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q17.sql",
    "content": "-- CometBench-H query 17 derived from TPC-H query 17 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tsum(l_extendedprice) / 7.0 as avg_yearly\nfrom\n\tlineitem,\n\tpart\nwhere\n\tp_partkey = l_partkey\n\tand p_brand = 'Brand#42'\n\tand p_container = 'LG BAG'\n\tand l_quantity < (\n\t\tselect\n\t\t\t0.2 * avg(l_quantity)\n\t\tfrom\n\t\t\tlineitem\n\t\twhere\n\t\t\tl_partkey = p_partkey\n\t);\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q18.sql",
    "content": "-- CometBench-H query 18 derived from TPC-H query 18 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tc_name,\n\tc_custkey,\n\to_orderkey,\n\to_orderdate,\n\to_totalprice,\n\tsum(l_quantity)\nfrom\n\tcustomer,\n\torders,\n\tlineitem\nwhere\n\to_orderkey in (\n\t\tselect\n\t\t\tl_orderkey\n\t\tfrom\n\t\t\tlineitem\n\t\tgroup by\n\t\t\tl_orderkey having\n\t\t\t\tsum(l_quantity) > 313\n\t)\n\tand c_custkey = o_custkey\n\tand o_orderkey = l_orderkey\ngroup by\n\tc_name,\n\tc_custkey,\n\to_orderkey,\n\to_orderdate,\n\to_totalprice\norder by\n\to_totalprice desc,\n\to_orderdate limit 100;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q19.sql",
    "content": "-- CometBench-H query 19 derived from TPC-H query 19 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tsum(l_extendedprice* (1 - l_discount)) as revenue\nfrom\n\tlineitem,\n\tpart\nwhere\n\t(\n\t\tp_partkey = l_partkey\n\t\tand p_brand = 'Brand#21'\n\t\tand p_container in ('SM CASE', 'SM BOX', 'SM PACK', 'SM PKG')\n\t\tand l_quantity >= 8 and l_quantity <= 8 + 10\n\t\tand p_size between 1 and 5\n\t\tand l_shipmode in ('AIR', 'AIR REG')\n\t\tand l_shipinstruct = 'DELIVER IN PERSON'\n\t)\n\tor\n\t(\n\t\tp_partkey = l_partkey\n\t\tand p_brand = 'Brand#13'\n\t\tand p_container in ('MED BAG', 'MED BOX', 'MED PKG', 'MED PACK')\n\t\tand l_quantity >= 20 and l_quantity <= 20 + 10\n\t\tand p_size between 1 and 10\n\t\tand l_shipmode in ('AIR', 'AIR REG')\n\t\tand l_shipinstruct = 'DELIVER IN PERSON'\n\t)\n\tor\n\t(\n\t\tp_partkey = l_partkey\n\t\tand p_brand = 'Brand#52'\n\t\tand p_container in ('LG CASE', 'LG BOX', 'LG PACK', 'LG PKG')\n\t\tand l_quantity >= 30 and l_quantity <= 30 + 10\n\t\tand p_size between 1 and 15\n\t\tand l_shipmode in ('AIR', 'AIR REG')\n\t\tand l_shipinstruct = 'DELIVER IN PERSON'\n\t);\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q2.sql",
    "content": "-- CometBench-H query 2 derived from TPC-H query 2 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\ts_acctbal,\n\ts_name,\n\tn_name,\n\tp_partkey,\n\tp_mfgr,\n\ts_address,\n\ts_phone,\n\ts_comment\nfrom\n\tpart,\n\tsupplier,\n\tpartsupp,\n\tnation,\n\tregion\nwhere\n\tp_partkey = ps_partkey\n\tand s_suppkey = ps_suppkey\n\tand p_size = 48\n\tand p_type like '%TIN'\n\tand s_nationkey = n_nationkey\n\tand n_regionkey = r_regionkey\n\tand r_name = 'ASIA'\n\tand ps_supplycost = (\n\t\tselect\n\t\t\tmin(ps_supplycost)\n\t\tfrom\n\t\t\tpartsupp,\n\t\t\tsupplier,\n\t\t\tnation,\n\t\t\tregion\n\t\twhere\n\t\t\tp_partkey = ps_partkey\n\t\t\tand s_suppkey = ps_suppkey\n\t\t\tand s_nationkey = n_nationkey\n\t\t\tand n_regionkey = r_regionkey\n\t\t\tand r_name = 'ASIA'\n\t)\norder by\n\ts_acctbal desc,\n\tn_name,\n\ts_name,\n\tp_partkey limit 100;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q20.sql",
    "content": "-- CometBench-H query 20 derived from TPC-H query 20 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\ts_name,\n\ts_address\nfrom\n\tsupplier,\n\tnation\nwhere\n\ts_suppkey in (\n\t\tselect\n\t\t\tps_suppkey\n\t\tfrom\n\t\t\tpartsupp\n\t\twhere\n\t\t\tps_partkey in (\n\t\t\t\tselect\n\t\t\t\t\tp_partkey\n\t\t\t\tfrom\n\t\t\t\t\tpart\n\t\t\t\twhere\n\t\t\t\t\tp_name like 'blanched%'\n\t\t\t)\n\t\t\tand ps_availqty > (\n\t\t\t\tselect\n\t\t\t\t\t0.5 * sum(l_quantity)\n\t\t\t\tfrom\n\t\t\t\t\tlineitem\n\t\t\t\twhere\n\t\t\t\t\tl_partkey = ps_partkey\n\t\t\t\t\tand l_suppkey = ps_suppkey\n\t\t\t\t\tand l_shipdate >= date '1993-01-01'\n\t\t\t\t\tand l_shipdate < date '1993-01-01' + interval '1' year\n\t\t\t)\n\t)\n\tand s_nationkey = n_nationkey\n\tand n_name = 'KENYA'\norder by\n\ts_name;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q21.sql",
    "content": "-- CometBench-H query 21 derived from TPC-H query 21 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\ts_name,\n\tcount(*) as numwait\nfrom\n\tsupplier,\n\tlineitem l1,\n\torders,\n\tnation\nwhere\n\ts_suppkey = l1.l_suppkey\n\tand o_orderkey = l1.l_orderkey\n\tand o_orderstatus = 'F'\n\tand l1.l_receiptdate > l1.l_commitdate\n\tand exists (\n\t\tselect\n\t\t\t*\n\t\tfrom\n\t\t\tlineitem l2\n\t\twhere\n\t\t\tl2.l_orderkey = l1.l_orderkey\n\t\t\tand l2.l_suppkey <> l1.l_suppkey\n\t)\n\tand not exists (\n\t\tselect\n\t\t\t*\n\t\tfrom\n\t\t\tlineitem l3\n\t\twhere\n\t\t\tl3.l_orderkey = l1.l_orderkey\n\t\t\tand l3.l_suppkey <> l1.l_suppkey\n\t\t\tand l3.l_receiptdate > l3.l_commitdate\n\t)\n\tand s_nationkey = n_nationkey\n\tand n_name = 'ARGENTINA'\ngroup by\n\ts_name\norder by\n\tnumwait desc,\n\ts_name limit 100;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q22.sql",
    "content": "-- CometBench-H query 22 derived from TPC-H query 22 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tcntrycode,\n\tcount(*) as numcust,\n\tsum(c_acctbal) as totacctbal\nfrom\n\t(\n\t\tselect\n\t\t\tsubstring(c_phone from 1 for 2) as cntrycode,\n\t\t\tc_acctbal\n\t\tfrom\n\t\t\tcustomer\n\t\twhere\n\t\t\tsubstring(c_phone from 1 for 2) in\n\t\t\t\t('24', '34', '16', '30', '33', '14', '13')\n\t\t\tand c_acctbal > (\n\t\t\t\tselect\n\t\t\t\t\tavg(c_acctbal)\n\t\t\t\tfrom\n\t\t\t\t\tcustomer\n\t\t\t\twhere\n\t\t\t\t\tc_acctbal > 0.00\n\t\t\t\t\tand substring(c_phone from 1 for 2) in\n\t\t\t\t\t\t('24', '34', '16', '30', '33', '14', '13')\n\t\t\t)\n\t\t\tand not exists (\n\t\t\t\tselect\n\t\t\t\t\t*\n\t\t\t\tfrom\n\t\t\t\t\torders\n\t\t\t\twhere\n\t\t\t\t\to_custkey = c_custkey\n\t\t\t)\n\t) as custsale\ngroup by\n\tcntrycode\norder by\n\tcntrycode;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q3.sql",
    "content": "-- CometBench-H query 3 derived from TPC-H query 3 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tl_orderkey,\n\tsum(l_extendedprice * (1 - l_discount)) as revenue,\n\to_orderdate,\n\to_shippriority\nfrom\n\tcustomer,\n\torders,\n\tlineitem\nwhere\n\tc_mktsegment = 'BUILDING'\n\tand c_custkey = o_custkey\n\tand l_orderkey = o_orderkey\n\tand o_orderdate < date '1995-03-15'\n\tand l_shipdate > date '1995-03-15'\ngroup by\n\tl_orderkey,\n\to_orderdate,\n\to_shippriority\norder by\n\trevenue desc,\n\to_orderdate limit 10;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q4.sql",
    "content": "-- CometBench-H query 4 derived from TPC-H query 4 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\to_orderpriority,\n\tcount(*) as order_count\nfrom\n\torders\nwhere\n\to_orderdate >= date '1995-04-01'\n\tand o_orderdate < date '1995-04-01' + interval '3' month\n\tand exists (\n\t\tselect\n\t\t\t*\n\t\tfrom\n\t\t\tlineitem\n\t\twhere\n\t\t\tl_orderkey = o_orderkey\n\t\t\tand l_commitdate < l_receiptdate\n\t)\ngroup by\n\to_orderpriority\norder by\n\to_orderpriority;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q5.sql",
    "content": "-- CometBench-H query 5 derived from TPC-H query 5 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tn_name,\n\tsum(l_extendedprice * (1 - l_discount)) as revenue\nfrom\n\tcustomer,\n\torders,\n\tlineitem,\n\tsupplier,\n\tnation,\n\tregion\nwhere\n\tc_custkey = o_custkey\n\tand l_orderkey = o_orderkey\n\tand l_suppkey = s_suppkey\n\tand c_nationkey = s_nationkey\n\tand s_nationkey = n_nationkey\n\tand n_regionkey = r_regionkey\n\tand r_name = 'AFRICA'\n\tand o_orderdate >= date '1994-01-01'\n\tand o_orderdate < date '1994-01-01' + interval '1' year\ngroup by\n\tn_name\norder by\n\trevenue desc;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q6.sql",
    "content": "-- CometBench-H query 6 derived from TPC-H query 6 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tsum(l_extendedprice * l_discount) as revenue\nfrom\n\tlineitem\nwhere\n\tl_shipdate >= date '1994-01-01'\n\tand l_shipdate < date '1994-01-01' + interval '1' year\n\tand l_discount between 0.04 - 0.01 and 0.04 + 0.01\n\tand l_quantity < 24;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q7.sql",
    "content": "-- CometBench-H query 7 derived from TPC-H query 7 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tsupp_nation,\n\tcust_nation,\n\tl_year,\n\tsum(volume) as revenue\nfrom\n\t(\n\t\tselect\n\t\t\tn1.n_name as supp_nation,\n\t\t\tn2.n_name as cust_nation,\n\t\t\textract(year from l_shipdate) as l_year,\n\t\t\tl_extendedprice * (1 - l_discount) as volume\n\t\tfrom\n\t\t\tsupplier,\n\t\t\tlineitem,\n\t\t\torders,\n\t\t\tcustomer,\n\t\t\tnation n1,\n\t\t\tnation n2\n\t\twhere\n\t\t\ts_suppkey = l_suppkey\n\t\t\tand o_orderkey = l_orderkey\n\t\t\tand c_custkey = o_custkey\n\t\t\tand s_nationkey = n1.n_nationkey\n\t\t\tand c_nationkey = n2.n_nationkey\n\t\t\tand (\n\t\t\t\t(n1.n_name = 'GERMANY' and n2.n_name = 'IRAQ')\n\t\t\t\tor (n1.n_name = 'IRAQ' and n2.n_name = 'GERMANY')\n\t\t\t)\n\t\t\tand l_shipdate between date '1995-01-01' and date '1996-12-31'\n\t) as shipping\ngroup by\n\tsupp_nation,\n\tcust_nation,\n\tl_year\norder by\n\tsupp_nation,\n\tcust_nation,\n\tl_year;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q8.sql",
    "content": "-- CometBench-H query 8 derived from TPC-H query 8 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\to_year,\n\tsum(case\n\t\twhen nation = 'IRAQ' then volume\n\t\telse 0\n\tend) / sum(volume) as mkt_share\nfrom\n\t(\n\t\tselect\n\t\t\textract(year from o_orderdate) as o_year,\n\t\t\tl_extendedprice * (1 - l_discount) as volume,\n\t\t\tn2.n_name as nation\n\t\tfrom\n\t\t\tpart,\n\t\t\tsupplier,\n\t\t\tlineitem,\n\t\t\torders,\n\t\t\tcustomer,\n\t\t\tnation n1,\n\t\t\tnation n2,\n\t\t\tregion\n\t\twhere\n\t\t\tp_partkey = l_partkey\n\t\t\tand s_suppkey = l_suppkey\n\t\t\tand l_orderkey = o_orderkey\n\t\t\tand o_custkey = c_custkey\n\t\t\tand c_nationkey = n1.n_nationkey\n\t\t\tand n1.n_regionkey = r_regionkey\n\t\t\tand r_name = 'MIDDLE EAST'\n\t\t\tand s_nationkey = n2.n_nationkey\n\t\t\tand o_orderdate between date '1995-01-01' and date '1996-12-31'\n\t\t\tand p_type = 'LARGE PLATED STEEL'\n\t) as all_nations\ngroup by\n\to_year\norder by\n\to_year;\n"
  },
  {
    "path": "benchmarks/tpc/queries/tpch/q9.sql",
    "content": "-- CometBench-H query 9 derived from TPC-H query 9 under the terms of the TPC Fair Use Policy.\n-- TPC-H queries are Copyright 1993-2022 Transaction Processing Performance Council.\nselect\n\tnation,\n\to_year,\n\tsum(amount) as sum_profit\nfrom\n\t(\n\t\tselect\n\t\t\tn_name as nation,\n\t\t\textract(year from o_orderdate) as o_year,\n\t\t\tl_extendedprice * (1 - l_discount) - ps_supplycost * l_quantity as amount\n\t\tfrom\n\t\t\tpart,\n\t\t\tsupplier,\n\t\t\tlineitem,\n\t\t\tpartsupp,\n\t\t\torders,\n\t\t\tnation\n\t\twhere\n\t\t\ts_suppkey = l_suppkey\n\t\t\tand ps_suppkey = l_suppkey\n\t\t\tand ps_partkey = l_partkey\n\t\t\tand p_partkey = l_partkey\n\t\t\tand o_orderkey = l_orderkey\n\t\t\tand s_nationkey = n_nationkey\n\t\t\tand p_name like '%moccasin%'\n\t) as profit\ngroup by\n\tnation,\n\to_year\norder by\n\tnation,\n\to_year desc;\n"
  },
  {
    "path": "benchmarks/tpc/run.py",
    "content": "#!/usr/bin/env python3\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"Consolidated TPC benchmark runner for Spark-based engines.\n\nUsage:\n    python3 run.py --engine comet --benchmark tpch\n    python3 run.py --engine comet --benchmark tpcds --iterations 3\n    python3 run.py --engine comet-iceberg --benchmark tpch\n    python3 run.py --engine comet --benchmark tpch --dry-run\n    python3 run.py --engine spark --benchmark tpch --no-restart\n\"\"\"\n\nimport argparse\nimport os\nimport re\nimport subprocess\nimport sys\n\n# ---------------------------------------------------------------------------\n# TOML loading – prefer stdlib tomllib (3.11+), else minimal fallback\n# ---------------------------------------------------------------------------\n\ntry:\n    import tomllib  # Python 3.11+\n\n    def load_toml(path):\n        with open(path, \"rb\") as f:\n            return tomllib.load(f)\n\nexcept ModuleNotFoundError:\n\n    def _parse_toml(text):\n        \"\"\"Minimal TOML parser supporting tables, quoted-key strings, plain\n        strings, arrays of strings, booleans, and comments.  Sufficient for\n        the engine config files used by this runner.\"\"\"\n        root = {}\n        current = root\n        for line in text.splitlines():\n            line = line.strip()\n            if not line or line.startswith(\"#\"):\n                continue\n            # Table header: [env.defaults] or [spark_conf]\n            m = re.match(r\"^\\[([^\\]]+)\\]$\", line)\n            if m:\n                keys = m.group(1).split(\".\")\n                current = root\n                for k in keys:\n                    current = current.setdefault(k, {})\n                continue\n            # Key = value\n            m = re.match(r'^(\"(?:[^\"\\\\]|\\\\.)*\"|[A-Za-z0-9_.]+)\\s*=\\s*(.+)$', line)\n            if not m:\n                continue\n            raw_key, raw_val = m.group(1), m.group(2).strip()\n            key = raw_key.strip('\"')\n            val = _parse_value(raw_val)\n            current[key] = val\n        return root\n\n    def _parse_value(raw):\n        if raw == \"true\":\n            return True\n        if raw == \"false\":\n            return False\n        if raw.startswith('\"') and raw.endswith('\"'):\n            return raw[1:-1]\n        if raw.startswith(\"[\"):\n            # Simple array of strings\n            items = []\n            for m in re.finditer(r'\"((?:[^\"\\\\]|\\\\.)*)\"', raw):\n                items.append(m.group(1))\n            return items\n        if raw.startswith(\"{\"):\n            # Inline table: { KEY = \"VAL\", ... 
}\n            tbl = {}\n            for m in re.finditer(r'([A-Za-z0-9_]+)\\s*=\\s*\"((?:[^\"\\\\]|\\\\.)*)\"', raw):\n                tbl[m.group(1)] = m.group(2)\n            return tbl\n        return raw\n\n    def load_toml(path):\n        with open(path, \"r\") as f:\n            return _parse_toml(f.read())\n\n\n# ---------------------------------------------------------------------------\n# Common Spark configuration (shared across all engines)\n# ---------------------------------------------------------------------------\n\nCOMMON_SPARK_CONF = {\n    \"spark.driver.memory\": \"8G\",\n    \"spark.executor.memory\": \"16g\",\n    \"spark.memory.offHeap.enabled\": \"true\",\n    \"spark.memory.offHeap.size\": \"16g\",\n    \"spark.eventLog.enabled\": \"true\",\n    \"spark.eventLog.dir\": os.environ.get(\"SPARK_EVENT_LOG_DIR\", \"/tmp/spark-events\"),\n    \"spark.hadoop.fs.s3a.impl\": \"org.apache.hadoop.fs.s3a.S3AFileSystem\",\n    \"spark.hadoop.fs.s3a.aws.credentials.provider\": \"com.amazonaws.auth.DefaultAWSCredentialsProviderChain\",\n}\n\n# ---------------------------------------------------------------------------\n# Benchmark profiles\n# ---------------------------------------------------------------------------\n\nBENCHMARK_PROFILES = {\n    \"tpch\": {\n        \"executor_instances\": \"2\",\n        \"executor_cores\": \"8\",\n        \"max_cores\": \"16\",\n        \"data_env\": \"TPCH_DATA\",\n        \"format\": \"parquet\",\n    },\n    \"tpcds\": {\n        \"executor_instances\": \"2\",\n        \"executor_cores\": \"8\",\n        \"max_cores\": \"16\",\n        \"data_env\": \"TPCDS_DATA\",\n        \"format\": None,  # omit --format for TPC-DS\n    },\n}\n\n# ---------------------------------------------------------------------------\n# Helpers\n# ---------------------------------------------------------------------------\n\n\ndef resolve_env(value):\n    \"\"\"Expand $VAR and ${VAR} references using os.environ.\"\"\"\n    if not isinstance(value, str):\n        return value\n    return re.sub(\n        r\"\\$\\{([^}]+)\\}|\\$([A-Za-z_][A-Za-z0-9_]*)\",\n        lambda m: os.environ.get(m.group(1) or m.group(2), \"\"),\n        value,\n    )\n\n\ndef resolve_env_in_list(lst):\n    return [resolve_env(v) for v in lst]\n\n\ndef load_engine_config(engine_name):\n    \"\"\"Load and return the TOML config for the given engine.\"\"\"\n    script_dir = os.path.dirname(os.path.abspath(__file__))\n    config_path = os.path.join(script_dir, \"engines\", f\"{engine_name}.toml\")\n    if not os.path.exists(config_path):\n        available = sorted(\n            f.removesuffix(\".toml\")\n            for f in os.listdir(os.path.join(script_dir, \"engines\"))\n            if f.endswith(\".toml\")\n        )\n        print(f\"Error: Unknown engine '{engine_name}'\", file=sys.stderr)\n        print(f\"Available engines: {', '.join(available)}\", file=sys.stderr)\n        sys.exit(1)\n    return load_toml(config_path)\n\n\ndef apply_env_defaults(config):\n    \"\"\"Set environment variable defaults from [env.defaults].\"\"\"\n    defaults = config.get(\"env\", {}).get(\"defaults\", {})\n    for key, val in defaults.items():\n        if key not in os.environ:\n            os.environ[key] = val\n\n\ndef apply_env_exports(config):\n    \"\"\"Export environment variables from [env.exports].\"\"\"\n    exports = config.get(\"env\", {}).get(\"exports\", {})\n    for key, val in exports.items():\n        os.environ[key] = val\n\n\ndef check_required_env(config):\n    \"\"\"Validate 
that required environment variables are set.\"\"\"\n    required = config.get(\"env\", {}).get(\"required\", [])\n    missing = [v for v in required if not os.environ.get(v)]\n    if missing:\n        print(\n            f\"Error: Required environment variable(s) not set: {', '.join(missing)}\",\n            file=sys.stderr,\n        )\n        sys.exit(1)\n\n\ndef check_common_env():\n    \"\"\"Validate SPARK_HOME and SPARK_MASTER are set.\"\"\"\n    for var in (\"SPARK_HOME\", \"SPARK_MASTER\"):\n        if not os.environ.get(var):\n            print(f\"Error: {var} is not set\", file=sys.stderr)\n            sys.exit(1)\n\n\ndef check_benchmark_env(config, benchmark):\n    \"\"\"Validate benchmark-specific environment variables.\"\"\"\n    profile = BENCHMARK_PROFILES[benchmark]\n    use_iceberg = config.get(\"tpcbench_args\", {}).get(\"use_iceberg\", False)\n\n    required = []\n    if not use_iceberg:\n        required.append(profile[\"data_env\"])\n\n    missing = [v for v in required if not os.environ.get(v)]\n    if missing:\n        print(\n            f\"Error: Required environment variable(s) not set for \"\n            f\"{benchmark}: {', '.join(missing)}\",\n            file=sys.stderr,\n        )\n        sys.exit(1)\n\n    # Default ICEBERG_DATABASE to the benchmark name if not already set\n    if use_iceberg and not os.environ.get(\"ICEBERG_DATABASE\"):\n        os.environ[\"ICEBERG_DATABASE\"] = benchmark\n\n\ndef build_spark_submit_cmd(config, benchmark, args):\n    \"\"\"Build the spark-submit command list.\"\"\"\n    spark_home = os.environ[\"SPARK_HOME\"]\n    spark_master = os.environ[\"SPARK_MASTER\"]\n    profile = BENCHMARK_PROFILES[benchmark]\n\n    cmd = [os.path.join(spark_home, \"bin\", \"spark-submit\")]\n    cmd += [\"--master\", spark_master]\n\n    # --jars\n    jars = config.get(\"spark_submit\", {}).get(\"jars\", [])\n    if jars:\n        cmd += [\"--jars\", \",\".join(resolve_env_in_list(jars))]\n\n    # --driver-class-path\n    driver_cp = config.get(\"spark_submit\", {}).get(\"driver_class_path\", [])\n    if driver_cp:\n        cmd += [\"--driver-class-path\", \":\".join(resolve_env_in_list(driver_cp))]\n\n    # Merge spark confs: common + benchmark profile + engine overrides\n    conf = dict(COMMON_SPARK_CONF)\n    conf[\"spark.executor.instances\"] = profile[\"executor_instances\"]\n    conf[\"spark.executor.cores\"] = profile[\"executor_cores\"]\n    conf[\"spark.cores.max\"] = profile[\"max_cores\"]\n\n    engine_conf = config.get(\"spark_conf\", {})\n    for key, val in engine_conf.items():\n        if isinstance(val, bool):\n            val = \"true\" if val else \"false\"\n        conf[resolve_env(key)] = resolve_env(str(val))\n\n    # JFR profiling: append to extraJavaOptions (preserving any existing values)\n    if args.jfr:\n        jfr_dir = args.jfr_dir\n        driver_jfr = (\n            f\"-XX:StartFlightRecording=disk=true,dumponexit=true,\"\n            f\"filename={jfr_dir}/driver.jfr,settings=profile\"\n        )\n        executor_jfr = (\n            f\"-XX:StartFlightRecording=disk=true,dumponexit=true,\"\n            f\"filename={jfr_dir}/executor.jfr,settings=profile\"\n        )\n        for spark_key, jfr_opts in [\n            (\"spark.driver.extraJavaOptions\", driver_jfr),\n            (\"spark.executor.extraJavaOptions\", executor_jfr),\n        ]:\n            existing = conf.get(spark_key, \"\")\n            conf[spark_key] = f\"{existing} {jfr_opts}\".strip()\n\n    # async-profiler: attach as a Java agent via 
-agentpath\n    if args.async_profiler:\n        ap_home = os.environ.get(\"ASYNC_PROFILER_HOME\", \"\")\n        if not ap_home:\n            print(\n                \"Error: ASYNC_PROFILER_HOME is not set. \"\n                \"Set it to the async-profiler installation directory.\",\n                file=sys.stderr,\n            )\n            sys.exit(1)\n        lib_ext = \"dylib\" if sys.platform == \"darwin\" else \"so\"\n        ap_lib = os.path.join(ap_home, \"lib\", f\"libasyncProfiler.{lib_ext}\")\n        ap_dir = args.async_profiler_dir\n        ap_event = args.async_profiler_event\n        ap_fmt = args.async_profiler_format\n        ext = {\"flamegraph\": \"html\", \"jfr\": \"jfr\", \"collapsed\": \"txt\", \"text\": \"txt\"}[ap_fmt]\n\n        driver_ap = (\n            f\"-agentpath:{ap_lib}=start,event={ap_event},\"\n            f\"{ap_fmt},file={ap_dir}/driver.{ext}\"\n        )\n        executor_ap = (\n            f\"-agentpath:{ap_lib}=start,event={ap_event},\"\n            f\"{ap_fmt},file={ap_dir}/executor.{ext}\"\n        )\n        for spark_key, ap_opts in [\n            (\"spark.driver.extraJavaOptions\", driver_ap),\n            (\"spark.executor.extraJavaOptions\", executor_ap),\n        ]:\n            existing = conf.get(spark_key, \"\")\n            conf[spark_key] = f\"{existing} {ap_opts}\".strip()\n\n    for key, val in sorted(conf.items()):\n        cmd += [\"--conf\", f\"{key}={val}\"]\n\n    # tpcbench.py path\n    cmd.append(\"tpcbench.py\")\n\n    # tpcbench args\n    engine_name = config.get(\"engine\", {}).get(\"name\", args.engine)\n    cmd += [\"--name\", engine_name]\n    cmd += [\"--benchmark\", benchmark]\n\n    use_iceberg = config.get(\"tpcbench_args\", {}).get(\"use_iceberg\", False)\n    if use_iceberg:\n        cmd += [\"--catalog\", resolve_env(\"${ICEBERG_CATALOG}\")]\n        cmd += [\"--database\", resolve_env(\"${ICEBERG_DATABASE}\")]\n    else:\n        data_var = profile[\"data_env\"]\n        data_val = os.environ.get(data_var, \"\")\n        cmd += [\"--data\", data_val]\n\n    cmd += [\"--output\", args.output]\n    cmd += [\"--iterations\", str(args.iterations)]\n\n    if args.query is not None:\n        cmd += [\"--query\", str(args.query)]\n\n    if profile[\"format\"] and not use_iceberg:\n        cmd += [\"--format\", profile[\"format\"]]\n\n    return cmd\n\n\ndef restart_spark():\n    \"\"\"Stop and start Spark master and worker.\"\"\"\n    spark_home = os.environ[\"SPARK_HOME\"]\n    sbin = os.path.join(spark_home, \"sbin\")\n    spark_master = os.environ[\"SPARK_MASTER\"]\n\n    # Stop (ignore errors)\n    subprocess.run(\n        [os.path.join(sbin, \"stop-master.sh\")],\n        stdout=subprocess.DEVNULL,\n        stderr=subprocess.DEVNULL,\n    )\n    subprocess.run(\n        [os.path.join(sbin, \"stop-worker.sh\")],\n        stdout=subprocess.DEVNULL,\n        stderr=subprocess.DEVNULL,\n    )\n\n    # Start (check errors)\n    r = subprocess.run([os.path.join(sbin, \"start-master.sh\")])\n    if r.returncode != 0:\n        print(\"Error: Failed to start Spark master\", file=sys.stderr)\n        sys.exit(1)\n\n    r = subprocess.run([os.path.join(sbin, \"start-worker.sh\"), spark_master])\n    if r.returncode != 0:\n        print(\"Error: Failed to start Spark worker\", file=sys.stderr)\n        sys.exit(1)\n\n\ndef main():\n    parser = argparse.ArgumentParser(\n        description=\"Consolidated TPC benchmark runner for Spark-based engines.\"\n    )\n    parser.add_argument(\n        \"--engine\",\n        
required=True,\n        help=\"Engine name (matches a TOML file in engines/)\",\n    )\n    parser.add_argument(\n        \"--benchmark\",\n        required=True,\n        choices=[\"tpch\", \"tpcds\"],\n        help=\"Benchmark to run\",\n    )\n    parser.add_argument(\n        \"--iterations\", type=int, default=1, help=\"Number of iterations (default: 1)\"\n    )\n    parser.add_argument(\n        \"--output\", default=\".\", help=\"Output directory (default: .)\"\n    )\n    parser.add_argument(\n        \"--query\", type=int, default=None, help=\"Run a single query number\"\n    )\n    parser.add_argument(\n        \"--no-restart\",\n        action=\"store_true\",\n        help=\"Skip Spark master/worker restart\",\n    )\n    parser.add_argument(\n        \"--dry-run\",\n        action=\"store_true\",\n        help=\"Print the spark-submit command without executing\",\n    )\n    parser.add_argument(\n        \"--jfr\",\n        action=\"store_true\",\n        help=\"Enable Java Flight Recorder profiling for driver and executors\",\n    )\n    parser.add_argument(\n        \"--jfr-dir\",\n        default=\"/results/jfr\",\n        help=\"Directory for JFR output files (default: /results/jfr)\",\n    )\n    parser.add_argument(\n        \"--async-profiler\",\n        action=\"store_true\",\n        help=\"Enable async-profiler for driver and executors (profiles Java + native code)\",\n    )\n    parser.add_argument(\n        \"--async-profiler-dir\",\n        default=\"/results/async-profiler\",\n        help=\"Directory for async-profiler output files (default: /results/async-profiler)\",\n    )\n    parser.add_argument(\n        \"--async-profiler-event\",\n        default=\"cpu\",\n        help=\"async-profiler event type: cpu, wall, alloc, lock, etc. (default: cpu)\",\n    )\n    parser.add_argument(\n        \"--async-profiler-format\",\n        default=\"flamegraph\",\n        choices=[\"flamegraph\", \"jfr\", \"collapsed\", \"text\"],\n        help=\"async-profiler output format (default: flamegraph)\",\n    )\n    args = parser.parse_args()\n\n    config = load_engine_config(args.engine)\n\n    # Apply env defaults and exports before validation\n    apply_env_defaults(config)\n    apply_env_exports(config)\n\n    check_common_env()\n    check_required_env(config)\n    check_benchmark_env(config, args.benchmark)\n\n    # Restart Spark unless --no-restart or --dry-run\n    if not args.no_restart and not args.dry_run:\n        restart_spark()\n\n    # Create profiling output directories (skip for dry-run)\n    if not args.dry_run:\n        if args.jfr:\n            os.makedirs(args.jfr_dir, exist_ok=True)\n        if args.async_profiler:\n            os.makedirs(args.async_profiler_dir, exist_ok=True)\n\n    cmd = build_spark_submit_cmd(config, args.benchmark, args)\n\n    if args.dry_run:\n        # Group paired arguments (e.g. --conf key=value) on one line\n        parts = []\n        i = 0\n        while i < len(cmd):\n            token = cmd[i]\n            if token.startswith(\"--\") and i + 1 < len(cmd) and not cmd[i + 1].startswith(\"--\"):\n                parts.append(f\"{token} {cmd[i + 1]}\")\n                i += 2\n            else:\n                parts.append(token)\n                i += 1\n        print(\" \\\\\\n    \".join(parts))\n    else:\n        r = subprocess.run(cmd)\n        sys.exit(r.returncode)\n\n\nif __name__ == \"__main__\":\n    main()\n"
  },
  {
    "path": "benchmarks/tpc/tpcbench.py",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n\"\"\"\nTPC-H / TPC-DS benchmark runner.\n\nSupports two data sources:\n  - Files: use --data with --format (parquet, csv, json) and optional --options\n  - Iceberg tables: use --catalog and --database to specify the catalog location\n\"\"\"\n\nimport argparse\nfrom datetime import datetime\nimport hashlib\nimport json\nimport os\nfrom pyspark.sql import SparkSession\nimport time\nfrom typing import Dict\n\n\ndef dedup_columns(df):\n    \"\"\"Rename duplicate column aliases: a, a, b, b -> a, a_1, b, b_1\"\"\"\n    counts = {}\n    new_cols = []\n    for c in df.columns:\n        if c not in counts:\n            counts[c] = 0\n            new_cols.append(c)\n        else:\n            counts[c] += 1\n            new_cols.append(f\"{c}_{counts[c]}\")\n    return df.toDF(*new_cols)\n\n\ndef result_hash(rows):\n    \"\"\"Compute a deterministic MD5 hash from collected rows.\"\"\"\n    sorted_rows = sorted(rows, key=lambda r: str(r))\n    h = hashlib.md5()\n    for row in sorted_rows:\n        h.update(str(row).encode(\"utf-8\"))\n    return h.hexdigest()\n\n\ndef main(\n    benchmark: str,\n    data_path: str,\n    catalog: str,\n    database: str,\n    iterations: int,\n    output: str,\n    name: str,\n    format: str,\n    query_num: int = None,\n    write_path: str = None,\n    options: Dict[str, str] = None,\n):\n    if options is None:\n        options = {}\n\n    query_path = os.path.join(\n        os.path.dirname(os.path.abspath(__file__)), \"queries\", benchmark\n    )\n\n    spark = SparkSession.builder \\\n        .appName(f\"{name} benchmark derived from {benchmark}\") \\\n        .getOrCreate()\n\n    # Define tables for each benchmark\n    if benchmark == \"tpch\":\n        num_queries = 22\n        table_names = [\n            \"customer\", \"lineitem\", \"nation\", \"orders\",\n            \"part\", \"partsupp\", \"region\", \"supplier\"\n        ]\n    elif benchmark == \"tpcds\":\n        num_queries = 99\n        table_names = [\n            \"call_center\", \"catalog_page\", \"catalog_returns\", \"catalog_sales\",\n            \"customer\", \"customer_address\", \"customer_demographics\", \"date_dim\",\n            \"time_dim\", \"household_demographics\", \"income_band\", \"inventory\",\n            \"item\", \"promotion\", \"reason\", \"ship_mode\", \"store\", \"store_returns\",\n            \"store_sales\", \"warehouse\", \"web_page\", \"web_returns\", \"web_sales\",\n            \"web_site\"\n        ]\n    else:\n        raise ValueError(f\"Invalid benchmark: {benchmark}\")\n\n    # Register tables from either files or Iceberg catalog\n    using_iceberg = catalog is not None\n    for table in table_names:\n        if using_iceberg:\n            source = 
f\"{catalog}.{database}.{table}\"\n            print(f\"Registering table {table} from {source}\")\n            df = spark.table(source)\n        else:\n            # Support both \"customer/\" and \"customer.parquet/\" layouts\n            source = f\"{data_path}/{table}.{format}\"\n            if not os.path.exists(source):\n                source = f\"{data_path}/{table}\"\n            print(f\"Registering table {table} from {source}\")\n            df = spark.read.format(format).options(**options).load(source)\n        df.createOrReplaceTempView(table)\n\n    conf_dict = {k: v for k, v in spark.sparkContext.getConf().getAll()}\n\n    results = {\n        'engine': 'datafusion-comet',\n        'benchmark': benchmark,\n        'spark_conf': conf_dict,\n    }\n    if using_iceberg:\n        results['catalog'] = catalog\n        results['database'] = database\n    else:\n        results['data_path'] = data_path\n\n    for iteration in range(iterations):\n        print(f\"\\n{'='*60}\")\n        print(f\"Starting iteration {iteration + 1} of {iterations}\")\n        print(f\"{'='*60}\")\n        iter_start_time = time.time()\n\n        # Determine which queries to run\n        if query_num is not None:\n            if query_num < 1 or query_num > num_queries:\n                raise ValueError(\n                    f\"Query number {query_num} out of range. \"\n                    f\"Valid: 1-{num_queries} for {benchmark}\"\n                )\n            queries_to_run = [query_num]\n        else:\n            queries_to_run = range(1, num_queries + 1)\n\n        for query in queries_to_run:\n            spark.sparkContext.setJobDescription(f\"{benchmark} q{query}\")\n\n            path = f\"{query_path}/q{query}.sql\"\n            print(f\"\\nRunning query {query} from {path}\")\n\n            with open(path, \"r\") as f:\n                text = f.read()\n                queries = text.split(\";\")\n\n                start_time = time.time()\n                for sql in queries:\n                    sql = sql.strip().replace(\"create view\", \"create temp view\")\n                    if len(sql) > 0:\n                        print(f\"Executing: {sql[:100]}...\")\n                        df = spark.sql(sql)\n                        df.explain(\"formatted\")\n\n                        if write_path is not None:\n                            if len(df.columns) > 0:\n                                output_path = f\"{write_path}/q{query}\"\n                                deduped = dedup_columns(df)\n                                deduped.orderBy(*deduped.columns).coalesce(1).write.mode(\"overwrite\").parquet(output_path)\n                                print(f\"Results written to {output_path}\")\n                        else:\n                            rows = df.collect()\n                            row_count = len(rows)\n                            row_hash = result_hash(rows)\n                            print(f\"Query {query} returned {row_count} rows, hash={row_hash}\")\n\n                end_time = time.time()\n                elapsed = end_time - start_time\n                print(f\"Query {query} took {elapsed:.2f} seconds\")\n\n                query_result = results.setdefault(query, {\"durations\": []})\n                query_result[\"durations\"].append(round(elapsed, 3))\n                if \"row_count\" not in query_result and not write_path:\n                    query_result[\"row_count\"] = row_count\n                    query_result[\"result_hash\"] = row_hash\n\n        
iter_end_time = time.time()\n        print(f\"\\nIteration {iteration + 1} took {iter_end_time - iter_start_time:.2f} seconds\")\n\n    # Write results\n    result_str = json.dumps(results, indent=4)\n    current_time_millis = int(datetime.now().timestamp() * 1000)\n    results_path = f\"{output}/{name}-{benchmark}-{current_time_millis}.json\"\n    print(f\"\\nWriting results to {results_path}\")\n    with open(results_path, \"w\") as f:\n        f.write(result_str)\n\n    spark.stop()\n\n\nif __name__ == \"__main__\":\n    parser = argparse.ArgumentParser(\n        description=\"TPC-H/TPC-DS benchmark runner for files or Iceberg tables\"\n    )\n    parser.add_argument(\n        \"--benchmark\", required=True,\n        help=\"Benchmark to run (tpch or tpcds)\"\n    )\n\n    # Data source - mutually exclusive: either file path or Iceberg catalog\n    source_group = parser.add_mutually_exclusive_group(required=True)\n    source_group.add_argument(\n        \"--data\",\n        help=\"Path to data files\"\n    )\n    source_group.add_argument(\n        \"--catalog\",\n        help=\"Iceberg catalog name\"\n    )\n\n    # Options for file-based reading\n    parser.add_argument(\n        \"--format\", default=\"parquet\",\n        help=\"Input file format: parquet, csv, json (only used with --data)\"\n    )\n    parser.add_argument(\n        \"--options\", type=json.loads, default={},\n        help='Spark reader options as JSON string, e.g., \\'{\"header\": \"true\"}\\' (only used with --data)'\n    )\n\n    # Options for Iceberg\n    parser.add_argument(\n        \"--database\", default=\"tpch\",\n        help=\"Database containing TPC tables (only used with --catalog)\"\n    )\n\n    parser.add_argument(\n        \"--iterations\", type=int, default=1,\n        help=\"Number of iterations\"\n    )\n    parser.add_argument(\n        \"--output\", required=True,\n        help=\"Path to write results JSON\"\n    )\n    parser.add_argument(\n        \"--name\", required=True,\n        help=\"Prefix for result file\"\n    )\n    parser.add_argument(\n        \"--query\", type=int,\n        help=\"Specific query number (1-based). If omitted, run all.\"\n    )\n    parser.add_argument(\n        \"--write\",\n        help=\"Path to save query results as Parquet\"\n    )\n    args = parser.parse_args()\n\n    main(\n        args.benchmark,\n        args.data,\n        args.catalog,\n        args.database,\n        args.iterations,\n        args.output,\n        args.name,\n        args.format,\n        args.query,\n        args.write,\n        args.options,\n    )\n"
  },
  {
    "path": "common/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>org.apache.datafusion</groupId>\n    <artifactId>comet-parent-spark${spark.version.short}_${scala.binary.version}</artifactId>\n    <version>0.15.0-SNAPSHOT</version>\n    <relativePath>../pom.xml</relativePath>\n  </parent>\n\n  <artifactId>comet-common-spark${spark.version.short}_${scala.binary.version}</artifactId>\n  <name>comet-common</name>\n\n  <properties>\n    <!-- Reverse default (skip installation), and then enable only for child modules -->\n    <maven.deploy.skip>false</maven.deploy.skip>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.apache.spark</groupId>\n      <artifactId>spark-sql_${scala.binary.version}</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.parquet</groupId>\n      <artifactId>parquet-column</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.parquet</groupId>\n      <artifactId>parquet-hadoop</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.parquet</groupId>\n      <artifactId>parquet-format-structures</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.arrow</groupId>\n      <artifactId>arrow-vector</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.arrow</groupId>\n      <artifactId>arrow-memory-unsafe</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.arrow</groupId>\n      <artifactId>arrow-c-data</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.scala-lang.modules</groupId>\n      <artifactId>scala-collection-compat_${scala.binary.version}</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.assertj</groupId>\n      <artifactId>assertj-core</artifactId>\n      <scope>test</scope>\n    </dependency>\n  </dependencies>\n\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>io.github.git-commit-id</groupId>\n        <artifactId>git-commit-id-maven-plugin</artifactId>\n        <version>${git-commit-id-maven-plugin.version}</version>\n        <executions>\n          <execution>\n            <id>get-the-git-infos</id>\n            <goals>\n              <goal>revision</goal>\n            </goals>\n            <phase>initialize</phase>\n          </execution>\n        </executions>\n        
<configuration>\n          <generateGitPropertiesFile>true</generateGitPropertiesFile>\n          <generateGitPropertiesFilename>${project.build.outputDirectory}/comet-git-info.properties</generateGitPropertiesFilename>\n          <commitIdGenerationMode>full</commitIdGenerationMode>\n          <includeOnlyProperties>\n            <includeOnlyProperty>^git.branch$</includeOnlyProperty>\n            <includeOnlyProperty>^git.build.*$</includeOnlyProperty>\n            <includeOnlyProperty>^git.commit.id.(abbrev|full)$</includeOnlyProperty>\n            <includeOnlyProperty>^git.remote.*$</includeOnlyProperty>\n          </includeOnlyProperties>\n        </configuration>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-shade-plugin</artifactId>\n        <executions>\n          <execution>\n            <phase>package</phase>\n            <goals>\n              <goal>shade</goal>\n            </goals>\n            <configuration>\n              <createSourcesJar>true</createSourcesJar>\n              <shadeSourcesContent>true</shadeSourcesContent>\n              <shadedArtifactAttached>false</shadedArtifactAttached>\n              <createDependencyReducedPom>true</createDependencyReducedPom>\n              <artifactSet>\n                <includes>\n                  <!-- We shade & relocation most of the Arrow classes, to prevent them from\n                    conflicting with those in Spark -->\n                  <include>org.apache.arrow:*</include>\n                </includes>\n              </artifactSet>\n              <filters>\n                <filter>\n                  <artifact>*:*</artifact>\n                  <excludes>\n                    <exclude>**/*.thrift</exclude>\n                    <exclude>git.properties</exclude>\n                    <exclude>log4j.properties</exclude>\n                    <exclude>log4j2.properties</exclude>\n                    <exclude>arrow-git.properties</exclude>\n                  </excludes>\n                </filter>\n                <filter>\n                  <artifact>org.apache.arrow:arrow-vector</artifact>\n                  <excludes>\n                    <!-- Comet doesn't need codegen templates on Arrow -->\n                    <exclude>codegen/**</exclude>\n                  </excludes>\n                </filter>\n              </filters>\n              <relocations>\n                <relocation>\n                  <pattern>org.apache.arrow</pattern>\n                  <shadedPattern>${comet.shade.packageName}.arrow</shadedPattern>\n                  <excludes>\n                    <!-- We can't allocate Jni classes. 
These classes have no extra dependencies,\n                       so it should be OK to exclude them -->\n                    <exclude>org/apache/arrow/c/jni/JniWrapper</exclude>\n                    <exclude>org/apache/arrow/c/jni/PrivateData</exclude>\n                    <exclude>org/apache/arrow/c/jni/CDataJniException</exclude>\n                    <!-- Also used by JNI: https://github.com/apache/arrow/blob/apache-arrow-11.0.0/java/c/src/main/cpp/jni_wrapper.cc#L341\n                       Note this class is not used by us, but is required when loading the native lib -->\n                    <exclude>org/apache/arrow/c/ArrayStreamExporter$ExportedArrayStreamPrivateData\n                    </exclude>\n                  </excludes>\n                </relocation>\n              </relocations>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>net.alchim31.maven</groupId>\n        <artifactId>scala-maven-plugin</artifactId>\n      </plugin>\n      <plugin>\n        <groupId>org.codehaus.mojo</groupId>\n        <artifactId>build-helper-maven-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>add-shim-source</id>\n            <phase>generate-sources</phase>\n            <goals>\n              <goal>add-source</goal>\n            </goals>\n            <configuration>\n              <sources>\n                <source>src/main/${shims.majorVerSrc}</source>\n                <source>src/main/${shims.minorVerSrc}</source>\n              </sources>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n    <resources>\n      <resource>\n        <directory>${project.basedir}/src/main/resources</directory>\n      </resource>\n      <resource>\n        <directory>${project.basedir}/../native/target/x86_64-apple-darwin/release</directory>\n        <includes>\n          <include>libcomet.dylib</include>\n        </includes>\n        <targetPath>org/apache/comet/darwin/x86_64</targetPath>\n      </resource>\n      <resource>\n        <directory>${project.basedir}/../native/target/aarch64-apple-darwin/release</directory>\n        <includes>\n          <include>libcomet.dylib</include>\n        </includes>\n        <targetPath>org/apache/comet/darwin/aarch64</targetPath>\n      </resource>\n      <resource>\n        <directory>${jni.dir}</directory>\n        <includes>\n          <include>libcomet.dylib</include>\n          <include>libcomet.so</include>\n          <include>comet.dll</include>\n        </includes>\n        <targetPath>org/apache/comet/${platform}/${arch}</targetPath>\n      </resource>\n    </resources>\n  </build>\n\n</project>\n"
  },
  {
    "path": "common/src/main/java/org/apache/arrow/c/AbstractCometSchemaImporter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.arrow.c;\n\nimport org.apache.arrow.memory.BufferAllocator;\nimport org.apache.arrow.vector.FieldVector;\nimport org.apache.arrow.vector.types.pojo.Field;\n\nimport org.apache.comet.IcebergApi;\n\n/** This is a simple wrapper around SchemaImporter to make it accessible from Java Arrow. */\npublic abstract class AbstractCometSchemaImporter {\n  private final BufferAllocator allocator;\n  private final SchemaImporter importer;\n  private final CDataDictionaryProvider provider = new CDataDictionaryProvider();\n\n  public AbstractCometSchemaImporter(BufferAllocator allocator) {\n    this.allocator = allocator;\n    this.importer = new SchemaImporter(allocator);\n  }\n\n  public BufferAllocator getAllocator() {\n    return allocator;\n  }\n\n  public CDataDictionaryProvider getProvider() {\n    return provider;\n  }\n\n  public Field importField(ArrowSchema schema) {\n    try {\n      return importer.importField(schema, provider);\n    } finally {\n      schema.release();\n      schema.close();\n    }\n  }\n\n  /**\n   * Imports data from ArrowArray/ArrowSchema into a FieldVector. This is basically the same as Java\n   * Arrow `Data.importVector`. `Data.importVector` initiates `SchemaImporter` internally which is\n   * used to fill dictionary ids for dictionary encoded vectors. Every call to `importVector` will\n   * begin with dictionary ids starting from 0. So, separate calls to `importVector` will overwrite\n   * dictionary ids. To avoid this, we need to use the same `SchemaImporter` instance for all calls\n   * to `importVector`.\n   */\n  public FieldVector importVector(ArrowArray array, ArrowSchema schema) {\n    Field field = importField(schema);\n    FieldVector vector = field.createVector(allocator);\n    Data.importIntoVector(allocator, array, vector, provider);\n\n    return vector;\n  }\n\n  @IcebergApi\n  public void close() {\n    provider.close();\n  }\n}\n"
  },
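The dictionary-id bookkeeping described in the `importVector` javadoc above is easiest to see from the caller's side. Below is a minimal sketch (not from the repository) that imports two dictionary-encoded columns through a single importer so each column receives a distinct dictionary id; `arrayA`/`schemaA` and `arrayB`/`schemaB` are assumed to be C Data Interface structs exported by the native side.

```java
import org.apache.arrow.c.ArrowArray;
import org.apache.arrow.c.ArrowSchema;
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.FieldVector;

import org.apache.comet.CometSchemaImporter;

class SharedImporterSketch {
  // Import two dictionary-encoded columns through ONE importer so their
  // dictionary ids stay distinct (0, 1, ...). A fresh importer per column
  // would assign id 0 to both, and the second import would overwrite the
  // first column's dictionary.
  static FieldVector[] importTwoColumns(
      ArrowArray arrayA, ArrowSchema schemaA, ArrowArray arrayB, ArrowSchema schemaB) {
    BufferAllocator allocator = new RootAllocator();
    CometSchemaImporter importer = new CometSchemaImporter(allocator);
    FieldVector colA = importer.importVector(arrayA, schemaA);
    FieldVector colB = importer.importVector(arrayB, schemaB);
    // The caller closes the vectors first, then the importer, which
    // releases the shared dictionary provider.
    return new FieldVector[] {colA, colB};
  }
}
```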
  {
    "path": "common/src/main/java/org/apache/arrow/c/ArrowImporter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.arrow.c;\n\nimport org.apache.arrow.memory.BufferAllocator;\nimport org.apache.arrow.vector.FieldVector;\nimport org.apache.arrow.vector.types.pojo.Field;\n\n/**\n * This class is used to import Arrow schema and array from native execution/shuffle. We cannot use\n * Arrow's Java API to import schema and array directly because Arrow's Java API `Data.importField`\n * initiates a new `SchemaImporter` for each field. Each `SchemaImporter` maintains an internal\n * dictionary id counter. So the dictionary ids for multiple dictionary columns will conflict with\n * each other and cause data corruption.\n */\npublic class ArrowImporter {\n  private final SchemaImporter importer;\n  private final BufferAllocator allocator;\n\n  public ArrowImporter(BufferAllocator allocator) {\n    this.allocator = allocator;\n    this.importer = new SchemaImporter(allocator);\n  }\n\n  Field importField(ArrowSchema schema, CDataDictionaryProvider provider) {\n    try {\n      return importer.importField(schema, provider);\n    } finally {\n      schema.release();\n      schema.close();\n    }\n  }\n\n  public FieldVector importVector(\n      ArrowArray array, ArrowSchema schema, CDataDictionaryProvider provider) {\n    Field field = importField(schema, provider);\n    FieldVector vector = field.createVector(allocator);\n    ArrayImporter importer = new ArrayImporter(allocator, vector, provider);\n    importer.importArray(array);\n    return vector;\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/CometNativeException.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\n/** Parent class for all exceptions thrown from Comet native side. */\npublic class CometNativeException extends CometRuntimeException {\n  public CometNativeException(String message) {\n    super(message);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/CometOutOfMemoryError.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\n/** OOM error specific for Comet memory management */\npublic class CometOutOfMemoryError extends OutOfMemoryError {\n  public CometOutOfMemoryError(String msg) {\n    super(msg);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/CometRuntimeException.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\n/** The parent class for all Comet runtime exceptions */\npublic class CometRuntimeException extends RuntimeException {\n  public CometRuntimeException(String message) {\n    super(message);\n  }\n\n  public CometRuntimeException(String message, Throwable cause) {\n    super(message, cause);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/CometSchemaImporter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\nimport org.apache.arrow.c.*;\nimport org.apache.arrow.memory.BufferAllocator;\n\n/** This is a simple wrapper around SchemaImporter to make it accessible from Java Arrow. */\n@IcebergApi\npublic class CometSchemaImporter extends AbstractCometSchemaImporter {\n  @IcebergApi\n  public CometSchemaImporter(BufferAllocator allocator) {\n    super(allocator);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/IcebergApi.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\nimport java.lang.annotation.Documented;\nimport java.lang.annotation.ElementType;\nimport java.lang.annotation.Retention;\nimport java.lang.annotation.RetentionPolicy;\nimport java.lang.annotation.Target;\n\n/**\n * Indicates that the annotated element is part of the public API used by Apache Iceberg.\n *\n * <p>This annotation marks classes, methods, constructors, and fields that form the contract\n * between Comet and Iceberg. Changes to these APIs may break Iceberg's Comet integration, so\n * contributors should exercise caution and consider backward compatibility when modifying annotated\n * elements.\n *\n * <p>The Iceberg integration uses Comet's native Parquet reader for accelerated vectorized reads.\n * See the contributor guide documentation for details on how Iceberg uses these APIs.\n *\n * @see <a href=\"https://iceberg.apache.org/\">Apache Iceberg</a>\n */\n@Documented\n@Retention(RetentionPolicy.RUNTIME)\n@Target({ElementType.TYPE, ElementType.METHOD, ElementType.CONSTRUCTOR, ElementType.FIELD})\npublic @interface IcebergApi {}\n"
  },
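Because the annotation has RUNTIME retention, tooling can discover the Iceberg-facing surface reflectively. A hypothetical audit helper (not part of the repository) might look like this:

```java
import java.lang.reflect.Method;

import org.apache.comet.IcebergApi;

class IcebergApiAudit {
  // Print the members of a class that form the Comet/Iceberg contract,
  // so a reviewer can double-check compatibility before changing them.
  static void printIcebergApi(Class<?> cls) {
    if (cls.isAnnotationPresent(IcebergApi.class)) {
      System.out.println(cls.getName() + " (type-level)");
    }
    for (Method m : cls.getDeclaredMethods()) {
      if (m.isAnnotationPresent(IcebergApi.class)) {
        System.out.println(cls.getName() + "#" + m.getName());
      }
    }
  }
}
```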
  {
    "path": "common/src/main/java/org/apache/comet/NativeBase.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\nimport java.io.BufferedReader;\nimport java.io.File;\nimport java.io.FilenameFilter;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.InputStreamReader;\nimport java.nio.file.Files;\nimport java.nio.file.StandardCopyOption;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.spark.sql.comet.util.Utils;\n\nimport static org.apache.comet.Constants.LOG_CONF_NAME;\nimport static org.apache.comet.Constants.LOG_CONF_PATH;\nimport static org.apache.comet.Constants.LOG_LEVEL_ENV;\n\n/** Base class for JNI bindings. MUST be inherited by all classes that introduce JNI APIs. */\npublic abstract class NativeBase {\n  static final String ARROW_UNSAFE_MEMORY_ACCESS = \"arrow.enable_unsafe_memory_access\";\n  static final String ARROW_NULL_CHECK_FOR_GET = \"arrow.enable_null_check_for_get\";\n\n  private static final Logger LOG = LoggerFactory.getLogger(NativeBase.class);\n  private static final String NATIVE_LIB_NAME = \"comet\";\n\n  private static final String libraryToLoad = System.mapLibraryName(NATIVE_LIB_NAME);\n  private static boolean loaded = false;\n  private static volatile Throwable loadErr = null;\n  private static final String searchPattern = \"libcomet-\";\n\n  static {\n    try {\n      load();\n    } catch (Throwable th) {\n      LOG.warn(\"Failed to load comet library\", th);\n      // logging may not be initialized yet, so also write to stderr\n      System.err.println(\"Failed to load comet library: \" + th.getMessage());\n      loadErr = th;\n    }\n  }\n\n  public static synchronized boolean isLoaded() throws Throwable {\n    if (loadErr != null) {\n      throw loadErr;\n    }\n    return loaded;\n  }\n\n  // Only for testing\n  static synchronized void setLoaded(boolean b) {\n    loaded = b;\n  }\n\n  static synchronized void load() {\n    if (loaded) {\n      return;\n    }\n\n    cleanupOldTempLibs();\n\n    // Check if the arch used by JDK is the same as arch on the host machine, in particular,\n    // whether x86_64 JDK is used in arm64 Mac\n    if (!checkArch()) {\n      LOG.warn(\n          \"Comet is disabled. JDK compiled for x86_64 is used in a Mac based on Apple Silicon. 
\"\n              + \"In order to use Comet, Please install a JDK version for ARM64 architecture\");\n      return;\n    }\n\n    // Try to load Comet library from the java.library.path.\n    try {\n      System.loadLibrary(NATIVE_LIB_NAME);\n      loaded = true;\n    } catch (UnsatisfiedLinkError ex) {\n      // Doesn't exist, so proceed to loading bundled library.\n      bundleLoadLibrary();\n    }\n\n    initWithLogConf();\n    // Only set the Arrow properties when debugging mode is off\n    if (!(boolean) CometConf.COMET_DEBUG_ENABLED().get()) {\n      setArrowProperties();\n    }\n  }\n\n  /**\n   * Use the bundled native libraries. Functionally equivalent to <code>System.loadLibrary</code>.\n   */\n  private static void bundleLoadLibrary() {\n    String resourceName = resourceName();\n    InputStream is = NativeBase.class.getResourceAsStream(resourceName);\n    if (is == null) {\n      throw new UnsupportedOperationException(\n          \"Unsupported OS/arch, cannot find \"\n              + resourceName\n              + \". Please try building from source.\");\n    }\n\n    File tempLib = null;\n    File tempLibLock = null;\n    try {\n      // Create the .lck file first to avoid a race condition\n      // with other concurrently running Java processes using Comet.\n      tempLibLock = File.createTempFile(searchPattern, \".\" + os().libExtension + \".lck\");\n      tempLib = new File(tempLibLock.getAbsolutePath().replaceFirst(\".lck$\", \"\"));\n      // copy to tempLib\n      Files.copy(is, tempLib.toPath(), StandardCopyOption.REPLACE_EXISTING);\n      System.load(tempLib.getAbsolutePath());\n      loaded = true;\n    } catch (IOException e) {\n      throw new IllegalStateException(\"Cannot unpack libcomet: \" + e);\n    } finally {\n      if (!loaded) {\n        if (tempLib != null && tempLib.exists()) {\n          if (!tempLib.delete()) {\n            LOG.error(\n                \"Cannot unpack libcomet / cannot delete a temporary native library \" + tempLib);\n          }\n        }\n        if (tempLibLock != null && tempLibLock.exists()) {\n          if (!tempLibLock.delete()) {\n            LOG.error(\n                \"Cannot unpack libcomet / cannot delete a temporary lock file \" + tempLibLock);\n          }\n        }\n      } else {\n        tempLib.deleteOnExit();\n        tempLibLock.deleteOnExit();\n      }\n    }\n  }\n\n  private static void initWithLogConf() {\n    String logConfPath = System.getProperty(LOG_CONF_PATH(), Utils.getConfPath(LOG_CONF_NAME()));\n    String logLevel = System.getenv(LOG_LEVEL_ENV());\n\n    // If both the system property and the environmental variable failed to find a log\n    // configuration, then fall back to using the deployed default\n    if (logConfPath == null) {\n      LOG.info(\n          \"Couldn't locate log file from either COMET_CONF_DIR or comet.log.file.path. \"\n              + \"Using default log configuration with {} log level which emits to stderr\",\n          logLevel == null ? 
\"INFO\" : logLevel);\n      logConfPath = \"\";\n    } else {\n      // Ignore log level if a log configuration file is specified\n      if (logLevel != null) {\n        LOG.warn(\"Ignoring log level {} because a log configuration file is specified\", logLevel);\n      }\n\n      LOG.info(\"Using {} for native library logging\", logConfPath);\n    }\n    init(logConfPath, logLevel);\n  }\n\n  private static void cleanupOldTempLibs() {\n    String tempFolder = new File(System.getProperty(\"java.io.tmpdir\")).getAbsolutePath();\n    File dir = new File(tempFolder);\n\n    File[] tempLibFiles =\n        dir.listFiles(\n            new FilenameFilter() {\n              public boolean accept(File dir, String name) {\n                return name.startsWith(searchPattern) && !name.endsWith(\".lck\");\n              }\n            });\n\n    if (tempLibFiles != null) {\n      for (File tempLibFile : tempLibFiles) {\n        File lckFile = new File(tempLibFile.getAbsolutePath() + \".lck\");\n        if (!lckFile.exists()) {\n          try {\n            tempLibFile.delete();\n          } catch (SecurityException e) {\n            LOG.error(\"Failed to delete old temp lib\", e);\n          }\n        }\n      }\n    }\n  }\n\n  // Set Arrow related properties upon initializing native, such as enabling unsafe memory access\n  // as well as disabling null check for get, for performance reasons.\n  private static void setArrowProperties() {\n    setPropertyIfNull(ARROW_UNSAFE_MEMORY_ACCESS, \"true\");\n    setPropertyIfNull(ARROW_NULL_CHECK_FOR_GET, \"false\");\n  }\n\n  private static void setPropertyIfNull(String key, String value) {\n    if (System.getProperty(key) == null) {\n      LOG.info(\"Setting system property {} to {}\", key, value);\n      System.setProperty(key, value);\n    } else {\n      LOG.info(\n          \"Skip setting system property {} to {}, because it is already set to {}\",\n          key,\n          value,\n          System.getProperty(key));\n    }\n  }\n\n  private enum OS {\n    // Even on Windows, the default compiler from cpptasks (gcc) uses .so as a shared lib extension\n    WINDOWS(\"win32\", \"so\"),\n    LINUX(\"linux\", \"so\"),\n    MAC(\"darwin\", \"dylib\"),\n    SOLARIS(\"solaris\", \"so\");\n    public final String name, libExtension;\n\n    OS(String name, String libExtension) {\n      this.name = name;\n      this.libExtension = libExtension;\n    }\n  }\n\n  private static String arch() {\n    return System.getProperty(\"os.arch\");\n  }\n\n  private static OS os() {\n    String osName = System.getProperty(\"os.name\");\n    if (osName.contains(\"Linux\")) {\n      return OS.LINUX;\n    } else if (osName.contains(\"Mac\")) {\n      return OS.MAC;\n    } else if (osName.contains(\"Windows\")) {\n      return OS.WINDOWS;\n    } else if (osName.contains(\"Solaris\") || osName.contains(\"SunOS\")) {\n      return OS.SOLARIS;\n    } else {\n      throw new UnsupportedOperationException(\"Unsupported operating system: \" + osName);\n    }\n  }\n\n  // For some reason users will get JVM crash when running Comet that is compiled for `aarch64`\n  // using a JVM that is compiled against `amd64`. 
Here we check if that is the case and fall back\n  // to Spark accordingly.\n  private static boolean checkArch() {\n    if (os() == OS.MAC) {\n      try {\n        String javaArch = arch();\n        Process process = Runtime.getRuntime().exec(\"uname -a\");\n        if (process.waitFor() == 0) {\n          BufferedReader in = new BufferedReader(new InputStreamReader(process.getInputStream()));\n          String line;\n          while ((line = in.readLine()) != null) {\n            if (javaArch.equals(\"x86_64\") && line.contains(\"ARM64\")) {\n              return false;\n            }\n          }\n        }\n      } catch (IOException | InterruptedException e) {\n        LOG.warn(\"Error parsing host architecture\", e);\n      }\n    }\n\n    return true;\n  }\n\n  private static String resourceName() {\n    OS os = os();\n    String packagePrefix = NativeBase.class.getPackage().getName().replace('.', '/');\n\n    return \"/\" + packagePrefix + \"/\" + os.name + \"/\" + arch() + \"/\" + libraryToLoad;\n  }\n\n  /**\n   * Initialize the native library through JNI.\n   *\n   * @param logConfPath location of the native log configuration file\n   * @param logLevel log level to use when no log configuration file is available\n   */\n  static native void init(String logConfPath, String logLevel);\n\n  /**\n   * Check if a specific feature is enabled in the native library.\n   *\n   * @param featureName The name of the feature to check (e.g., \"hdfs\", \"jemalloc\", \"hdfs-opendal\")\n   * @return true if the feature is enabled, false otherwise\n   */\n  public static native boolean isFeatureEnabled(String featureName);\n}\n"
  },
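`load()` first tries `System.loadLibrary` (honoring `java.library.path`) and only then falls back to extracting the bundled library, while `isLoaded()` rethrows whatever error the static initializer captured. A minimal caller-side guard, assuming the caller prefers falling back to plain Spark over propagating the load failure, could look like:

```java
import org.apache.comet.NativeBase;

class NativeAvailability {
  // Returns false instead of propagating the original load error, so
  // callers can silently fall back to a non-native code path.
  static boolean cometNativeAvailable() {
    try {
      return NativeBase.isLoaded();
    } catch (Throwable t) {
      // isLoaded() rethrows the Throwable captured during static init.
      return false;
    }
  }
}
```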
  {
    "path": "common/src/main/java/org/apache/comet/ParquetRuntimeException.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\n/** The parent class for the subset of Comet runtime exceptions related to Parquet. */\npublic class ParquetRuntimeException extends CometRuntimeException {\n  public ParquetRuntimeException(String message) {\n    super(message);\n  }\n\n  public ParquetRuntimeException(String message, Throwable cause) {\n    super(message, cause);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/exceptions/CometQueryExecutionException.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exceptions;\n\nimport org.apache.comet.CometNativeException;\n\n/**\n * Exception thrown from Comet native execution containing JSON-encoded error information. The\n * message contains a JSON object with the following structure:\n *\n * <pre>\n * {\n *   \"errorType\": \"DivideByZero\",\n *   \"errorClass\": \"DIVIDE_BY_ZERO\",\n *   \"params\": { ... },\n *   \"context\": { \"sqlText\": \"...\", \"startOffset\": 0, \"stopOffset\": 10 },\n *   \"hint\": \"Use `try_divide` to tolerate divisor being 0\"\n * }\n * </pre>\n *\n * CometExecIterator parses this JSON and converts it to the appropriate Spark exception by calling\n * the corresponding QueryExecutionErrors.* method.\n */\npublic final class CometQueryExecutionException extends CometNativeException {\n\n  /**\n   * Creates a new CometQueryExecutionException with a JSON-encoded error message.\n   *\n   * @param jsonMessage JSON string containing error information\n   */\n  public CometQueryExecutionException(String jsonMessage) {\n    super(jsonMessage);\n  }\n\n  /**\n   * Returns true if the message appears to be JSON-formatted. This is used to distinguish between\n   * JSON-encoded errors and legacy error messages.\n   *\n   * @return true if message starts with '{' and ends with '}'\n   */\n  public boolean isJsonMessage() {\n    String msg = getMessage();\n    if (msg == null) return false;\n    String trimmed = msg.trim();\n    return trimmed.startsWith(\"{\") && trimmed.endsWith(\"}\");\n  }\n}\n"
  },
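A sketch of consuming the JSON envelope described above, assuming Jackson is on the classpath; the repository's actual decoding lives in CometExecIterator, which maps `errorClass` to the matching `QueryExecutionErrors` method:

```java
import com.fasterxml.jackson.databind.JsonNode;
import com.fasterxml.jackson.databind.ObjectMapper;

import org.apache.comet.exceptions.CometQueryExecutionException;

class NativeErrorDecoder {
  private static final ObjectMapper MAPPER = new ObjectMapper();

  // Extract the Spark error class (e.g. "DIVIDE_BY_ZERO") from a native
  // error, falling back to the raw message for legacy non-JSON errors.
  static String errorClassOf(CometQueryExecutionException e) throws Exception {
    if (!e.isJsonMessage()) {
      return e.getMessage();
    }
    JsonNode root = MAPPER.readTree(e.getMessage());
    return root.path("errorClass").asText();
  }
}
```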
  {
    "path": "common/src/main/java/org/apache/comet/parquet/AbstractColumnReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.schema.Type;\nimport org.apache.spark.sql.types.DataType;\nimport org.apache.spark.sql.types.TimestampNTZType$;\n\nimport org.apache.comet.CometConf;\nimport org.apache.comet.IcebergApi;\nimport org.apache.comet.vector.CometVector;\n\n/** Base class for Comet Parquet column reader implementations. */\n@IcebergApi\npublic abstract class AbstractColumnReader implements AutoCloseable {\n  protected static final Logger LOG = LoggerFactory.getLogger(AbstractColumnReader.class);\n\n  /** The Spark data type. */\n  protected final DataType type;\n\n  /** The Spark data type. */\n  protected final Type fieldType;\n\n  /** Parquet column descriptor. */\n  protected final ColumnDescriptor descriptor;\n\n  /**\n   * Whether to always return 128 bit decimals, regardless of its precision. If false, this will\n   * return 32, 64 or 128 bit decimals depending on the precision.\n   */\n  protected final boolean useDecimal128;\n\n  /**\n   * Whether to return dates/timestamps that were written with legacy hybrid (Julian + Gregorian)\n   * calendar as it is. If this is true, Comet will return them as it is, instead of rebasing them\n   * to the new Proleptic Gregorian calendar. If this is false, Comet will throw exceptions when\n   * seeing these dates/timestamps.\n   */\n  protected final boolean useLegacyDateTimestamp;\n\n  /** The size of one batch, gets updated by 'readBatch' */\n  protected int batchSize;\n\n  /** A pointer to the native implementation of ColumnReader. */\n  @IcebergApi protected long nativeHandle;\n\n  AbstractColumnReader(\n      DataType type,\n      Type fieldType,\n      ColumnDescriptor descriptor,\n      boolean useDecimal128,\n      boolean useLegacyDateTimestamp) {\n    this.type = type;\n    this.fieldType = fieldType;\n    this.descriptor = descriptor;\n    this.useDecimal128 = useDecimal128;\n    this.useLegacyDateTimestamp = useLegacyDateTimestamp;\n  }\n\n  AbstractColumnReader(\n      DataType type,\n      ColumnDescriptor descriptor,\n      boolean useDecimal128,\n      boolean useLegacyDateTimestamp) {\n    this(type, null, descriptor, useDecimal128, useLegacyDateTimestamp);\n    TypeUtil.checkParquetType(descriptor, type);\n  }\n\n  ColumnDescriptor getDescriptor() {\n    return descriptor;\n  }\n\n  String getPath() {\n    return String.join(\".\", this.descriptor.getPath());\n  }\n\n  /**\n   * Set the batch size of this reader to be 'batchSize'. 
Also initializes the native column reader.\n   */\n  @IcebergApi\n  public void setBatchSize(int batchSize) {\n    assert nativeHandle == 0\n        : \"Native column reader shouldn't be initialized before \" + \"'setBatchSize' is called\";\n    this.batchSize = batchSize;\n    initNative();\n  }\n\n  /**\n   * Reads a batch of 'total' new rows.\n   *\n   * @param total the total number of rows to read\n   */\n  public abstract void readBatch(int total);\n\n  /** Returns the {@link CometVector} read by this reader. */\n  public abstract CometVector currentBatch();\n\n  @IcebergApi\n  @Override\n  public void close() {\n    if (nativeHandle != 0) {\n      LOG.debug(\"Closing the column reader\");\n      Native.closeColumnReader(nativeHandle);\n      nativeHandle = 0;\n    }\n  }\n\n  protected void initNative() {\n    LOG.debug(\"initializing the native column reader\");\n    DataType readType = (boolean) CometConf.COMET_SCHEMA_EVOLUTION_ENABLED().get() ? type : null;\n    boolean useLegacyDateTimestampOrNTZ =\n        useLegacyDateTimestamp || type == TimestampNTZType$.MODULE$;\n    nativeHandle =\n        Utils.initColumnReader(\n            descriptor, readType, batchSize, useDecimal128, useLegacyDateTimestampOrNTZ);\n  }\n}\n"
  },
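The contract implied by the assertion in `setBatchSize` is: call it exactly once (it initializes the native reader), then alternate `readBatch`/`currentBatch`, and always `close` to free the native handle. A minimal sketch of that lifecycle, with the reader instance assumed to come from elsewhere:

```java
import org.apache.comet.parquet.AbstractColumnReader;
import org.apache.comet.vector.CometVector;

class ColumnReaderLifecycle {
  static void drain(AbstractColumnReader reader, int batchSize, long totalRows) {
    reader.setBatchSize(batchSize); // must precede the first readBatch
    try {
      long remaining = totalRows;
      while (remaining > 0) {
        int n = (int) Math.min(batchSize, remaining);
        reader.readBatch(n);
        CometVector batch = reader.currentBatch();
        // ... hand `batch` to the downstream operator ...
        remaining -= n;
      }
    } finally {
      reader.close(); // releases the native handle
    }
  }
}
```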
  {
    "path": "common/src/main/java/org/apache/comet/parquet/ArrowConstantColumnReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.math.BigDecimal;\n\nimport org.apache.arrow.memory.BufferAllocator;\nimport org.apache.arrow.memory.RootAllocator;\nimport org.apache.arrow.vector.*;\nimport org.apache.spark.sql.catalyst.InternalRow;\nimport org.apache.spark.sql.catalyst.util.ResolveDefaultColumns;\nimport org.apache.spark.sql.types.*;\nimport org.apache.spark.unsafe.types.UTF8String;\n\nimport org.apache.comet.vector.CometPlainVector;\nimport org.apache.comet.vector.CometVector;\n\n/**\n * A column reader that returns constant vectors using Arrow Java vectors directly (no native\n * mutable buffers). Used for partition columns and missing columns in the native_iceberg_compat\n * scan path.\n *\n * <p>The vector is filled with the constant value repeated for every row in the batch. This is\n * necessary because the underlying Arrow vector's buffers must be large enough to match the\n * reported value count — otherwise variable-width types (strings, binary) would have undersized\n * offset buffers, causing out-of-bounds reads on the native side.\n */\npublic class ArrowConstantColumnReader extends AbstractColumnReader {\n  private final BufferAllocator allocator = new RootAllocator();\n\n  private boolean isNull;\n  private Object value;\n  private FieldVector fieldVector;\n  private CometPlainVector vector;\n  private int currentSize;\n\n  /** Constructor for missing columns (default values from schema). */\n  ArrowConstantColumnReader(StructField field, int batchSize, boolean useDecimal128) {\n    super(field.dataType(), TypeUtil.convertToParquet(field), useDecimal128, false);\n    this.batchSize = batchSize;\n    this.value =\n        ResolveDefaultColumns.getExistenceDefaultValues(new StructType(new StructField[] {field}))[\n            0];\n    initVector(value, batchSize);\n  }\n\n  /** Constructor for partition columns with values from a row. 
*/\n  ArrowConstantColumnReader(\n      StructField field, int batchSize, InternalRow values, int index, boolean useDecimal128) {\n    super(field.dataType(), TypeUtil.convertToParquet(field), useDecimal128, false);\n    this.batchSize = batchSize;\n    Object v = values.get(index, field.dataType());\n    this.value = v;\n    initVector(v, batchSize);\n  }\n\n  @Override\n  public void setBatchSize(int batchSize) {\n    close();\n    this.batchSize = batchSize;\n    initVector(value, batchSize);\n  }\n\n  @Override\n  public void readBatch(int total) {\n    if (total != currentSize) {\n      close();\n      initVector(value, total);\n    }\n  }\n\n  @Override\n  public CometVector currentBatch() {\n    return vector;\n  }\n\n  @Override\n  public void close() {\n    if (vector != null) {\n      vector.close();\n      vector = null;\n    }\n    if (fieldVector != null) {\n      fieldVector.close();\n      fieldVector = null;\n    }\n  }\n\n  private void initVector(Object value, int count) {\n    currentSize = count;\n    if (value == null) {\n      isNull = true;\n      fieldVector = createNullVector(count);\n    } else {\n      isNull = false;\n      fieldVector = createFilledVector(value, count);\n    }\n    vector = new CometPlainVector(fieldVector, useDecimal128, false, true);\n  }\n\n  /** Creates a vector of the correct type with {@code count} null values. */\n  private FieldVector createNullVector(int count) {\n    String name = \"constant\";\n    FieldVector v;\n    if (type == DataTypes.BooleanType) {\n      v = new BitVector(name, allocator);\n    } else if (type == DataTypes.ByteType) {\n      v = new TinyIntVector(name, allocator);\n    } else if (type == DataTypes.ShortType) {\n      v = new SmallIntVector(name, allocator);\n    } else if (type == DataTypes.IntegerType || type == DataTypes.DateType) {\n      v = new IntVector(name, allocator);\n    } else if (type == DataTypes.LongType\n        || type == DataTypes.TimestampType\n        || type == TimestampNTZType$.MODULE$) {\n      v = new BigIntVector(name, allocator);\n    } else if (type == DataTypes.FloatType) {\n      v = new Float4Vector(name, allocator);\n    } else if (type == DataTypes.DoubleType) {\n      v = new Float8Vector(name, allocator);\n    } else if (type == DataTypes.BinaryType) {\n      v = new VarBinaryVector(name, allocator);\n    } else if (type == DataTypes.StringType) {\n      v = new VarCharVector(name, allocator);\n    } else if (type instanceof DecimalType) {\n      DecimalType dt = (DecimalType) type;\n      if (!useDecimal128 && dt.precision() <= Decimal.MAX_INT_DIGITS()) {\n        v = new IntVector(name, allocator);\n      } else if (!useDecimal128 && dt.precision() <= Decimal.MAX_LONG_DIGITS()) {\n        v = new BigIntVector(name, allocator);\n      } else {\n        v = new DecimalVector(name, allocator, dt.precision(), dt.scale());\n      }\n    } else {\n      throw new UnsupportedOperationException(\"Unsupported Spark type: \" + type);\n    }\n    v.setValueCount(count);\n    return v;\n  }\n\n  /** Creates a vector filled with {@code count} copies of the given value. */\n  private FieldVector createFilledVector(Object value, int count) {\n    String name = \"constant\";\n    if (type == DataTypes.BooleanType) {\n      BitVector v = new BitVector(name, allocator);\n      v.allocateNew(count);\n      int bit = (boolean) value ? 
1 : 0;\n      for (int i = 0; i < count; i++) v.setSafe(i, bit);\n      v.setValueCount(count);\n      return v;\n    } else if (type == DataTypes.ByteType) {\n      TinyIntVector v = new TinyIntVector(name, allocator);\n      v.allocateNew(count);\n      byte val = (byte) value;\n      for (int i = 0; i < count; i++) v.setSafe(i, val);\n      v.setValueCount(count);\n      return v;\n    } else if (type == DataTypes.ShortType) {\n      SmallIntVector v = new SmallIntVector(name, allocator);\n      v.allocateNew(count);\n      short val = (short) value;\n      for (int i = 0; i < count; i++) v.setSafe(i, val);\n      v.setValueCount(count);\n      return v;\n    } else if (type == DataTypes.IntegerType || type == DataTypes.DateType) {\n      IntVector v = new IntVector(name, allocator);\n      v.allocateNew(count);\n      int val = (int) value;\n      for (int i = 0; i < count; i++) v.setSafe(i, val);\n      v.setValueCount(count);\n      return v;\n    } else if (type == DataTypes.LongType\n        || type == DataTypes.TimestampType\n        || type == TimestampNTZType$.MODULE$) {\n      BigIntVector v = new BigIntVector(name, allocator);\n      v.allocateNew(count);\n      long val = (long) value;\n      for (int i = 0; i < count; i++) v.setSafe(i, val);\n      v.setValueCount(count);\n      return v;\n    } else if (type == DataTypes.FloatType) {\n      Float4Vector v = new Float4Vector(name, allocator);\n      v.allocateNew(count);\n      float val = (float) value;\n      for (int i = 0; i < count; i++) v.setSafe(i, val);\n      v.setValueCount(count);\n      return v;\n    } else if (type == DataTypes.DoubleType) {\n      Float8Vector v = new Float8Vector(name, allocator);\n      v.allocateNew(count);\n      double val = (double) value;\n      for (int i = 0; i < count; i++) v.setSafe(i, val);\n      v.setValueCount(count);\n      return v;\n    } else if (type == DataTypes.BinaryType) {\n      VarBinaryVector v = new VarBinaryVector(name, allocator);\n      v.allocateNew(count);\n      byte[] bytes = (byte[]) value;\n      for (int i = 0; i < count; i++) v.setSafe(i, bytes, 0, bytes.length);\n      v.setValueCount(count);\n      return v;\n    } else if (type == DataTypes.StringType) {\n      VarCharVector v = new VarCharVector(name, allocator);\n      v.allocateNew(count);\n      byte[] bytes = ((UTF8String) value).getBytes();\n      for (int i = 0; i < count; i++) v.setSafe(i, bytes, 0, bytes.length);\n      v.setValueCount(count);\n      return v;\n    } else if (type instanceof DecimalType) {\n      DecimalType dt = (DecimalType) type;\n      Decimal d = (Decimal) value;\n      if (!useDecimal128 && dt.precision() <= Decimal.MAX_INT_DIGITS()) {\n        IntVector v = new IntVector(name, allocator);\n        v.allocateNew(count);\n        int val = (int) d.toUnscaledLong();\n        for (int i = 0; i < count; i++) v.setSafe(i, val);\n        v.setValueCount(count);\n        return v;\n      } else if (!useDecimal128 && dt.precision() <= Decimal.MAX_LONG_DIGITS()) {\n        BigIntVector v = new BigIntVector(name, allocator);\n        v.allocateNew(count);\n        long val = d.toUnscaledLong();\n        for (int i = 0; i < count; i++) v.setSafe(i, val);\n        v.setValueCount(count);\n        return v;\n      } else {\n        DecimalVector v = new DecimalVector(name, allocator, dt.precision(), dt.scale());\n        v.allocateNew(count);\n        BigDecimal bd = d.toJavaBigDecimal();\n        for (int i = 0; i < count; i++) v.setSafe(i, bd);\n        v.setValueCount(count);\n    
    return v;\n      }\n    } else {\n      throw new UnsupportedOperationException(\"Unsupported Spark type: \" + type);\n    }\n  }\n}\n"
  },
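The javadoc's point about offset buffers is worth seeing in isolation: for variable-width vectors, every slot up to the reported value count must be written, or the offset buffer ends up shorter than consumers expect. A standalone sketch of the fill pattern `createFilledVector` uses for strings:

```java
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;
import org.apache.arrow.vector.VarCharVector;

class ConstantVectorSketch {
  // Fill a variable-width vector with `count` copies of one value. Writing
  // every slot keeps the offset buffer sized to count + 1 entries, matching
  // the value count reported to readers.
  static VarCharVector constantUtf8(byte[] bytes, int count) {
    BufferAllocator allocator = new RootAllocator();
    VarCharVector v = new VarCharVector("constant", allocator);
    v.allocateNew(count);
    for (int i = 0; i < count; i++) {
      v.setSafe(i, bytes, 0, bytes.length);
    }
    v.setValueCount(count);
    return v;
  }
}
```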
  {
    "path": "common/src/main/java/org/apache/comet/parquet/ArrowRowIndexColumnReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport org.apache.arrow.memory.BufferAllocator;\nimport org.apache.arrow.memory.RootAllocator;\nimport org.apache.arrow.vector.BigIntVector;\nimport org.apache.spark.sql.types.*;\n\nimport org.apache.comet.vector.CometPlainVector;\nimport org.apache.comet.vector.CometVector;\n\n/**\n * A column reader that computes row indices in Java and creates Arrow BigIntVectors directly (no\n * native mutable buffers). Used for the row index metadata column in the native_iceberg_compat scan\n * path.\n *\n * <p>The {@code indices} array contains alternating pairs of (start_index, count) representing\n * ranges of sequential row indices within each row group.\n */\npublic class ArrowRowIndexColumnReader extends AbstractColumnReader {\n  private final BufferAllocator allocator = new RootAllocator();\n\n  /** Alternating (start_index, count) pairs from row groups. */\n  private final long[] indices;\n\n  /** Number of row indices consumed so far across batches. 
*/\n  private long offset;\n\n  private BigIntVector fieldVector;\n  private CometPlainVector vector;\n\n  public ArrowRowIndexColumnReader(StructField field, int batchSize, long[] indices) {\n    super(field.dataType(), TypeUtil.convertToParquet(field), false, false);\n    this.indices = indices;\n    this.batchSize = batchSize;\n  }\n\n  @Override\n  public void setBatchSize(int batchSize) {\n    close();\n    this.batchSize = batchSize;\n  }\n\n  @Override\n  public void readBatch(int total) {\n    close();\n\n    fieldVector = new BigIntVector(\"row_index\", allocator);\n    fieldVector.allocateNew(total);\n\n    // Port of Rust set_indices: iterate (start, count) pairs, skip offset rows, fill up to total.\n    long skipped = 0;\n    int filled = 0;\n    for (int i = 0; i < indices.length && filled < total; i += 2) {\n      long index = indices[i];\n      long count = indices[i + 1];\n      long skip = Math.min(count, offset - skipped);\n      skipped += skip;\n      if (count == skip) {\n        continue;\n      }\n      long remaining = Math.min(count - skip, total - filled);\n      for (long j = 0; j < remaining; j++) {\n        fieldVector.setSafe(filled, index + skip + j);\n        filled++;\n      }\n    }\n    offset += filled;\n\n    fieldVector.setValueCount(filled);\n    vector = new CometPlainVector(fieldVector, false, false, false);\n    vector.setNumValues(filled);\n  }\n\n  @Override\n  public CometVector currentBatch() {\n    return vector;\n  }\n\n  @Override\n  public void close() {\n    if (vector != null) {\n      vector.close();\n      vector = null;\n    }\n    if (fieldVector != null) {\n      fieldVector.close();\n      fieldVector = null;\n    }\n  }\n}\n"
  },
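A worked example of the (start_index, count) encoding and the offset bookkeeping in `readBatch`, written as a hypothetical caller (the field name and batch sizes are illustrative):

```java
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;

import org.apache.comet.parquet.ArrowRowIndexColumnReader;

class RowIndexDemo {
  static void demo() {
    // Two ranges: rows 100..104 (count 5) and rows 500..502 (count 3).
    long[] indices = {100, 5, 500, 3};
    StructField field =
        new StructField("row_index", DataTypes.LongType, false, Metadata.empty());
    ArrowRowIndexColumnReader reader = new ArrowRowIndexColumnReader(field, 4, indices);

    reader.readBatch(4); // batch 1: 100, 101, 102, 103
    // readBatch closes the previous batch's vector, so consume
    // currentBatch() before requesting the next batch.
    reader.readBatch(4); // batch 2 (offset = 4): 104, 500, 501, 502
    reader.close();
  }
}
```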
  {
    "path": "common/src/main/java/org/apache/comet/parquet/BloomFilterReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.IOException;\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.parquet.column.values.bloomfilter.BlockSplitBloomFilter;\nimport org.apache.parquet.column.values.bloomfilter.BloomFilter;\nimport org.apache.parquet.crypto.AesCipher;\nimport org.apache.parquet.crypto.InternalColumnDecryptionSetup;\nimport org.apache.parquet.crypto.InternalFileDecryptor;\nimport org.apache.parquet.crypto.ModuleCipherFactory;\nimport org.apache.parquet.crypto.ParquetCryptoRuntimeException;\nimport org.apache.parquet.filter2.predicate.FilterPredicate;\nimport org.apache.parquet.filter2.predicate.Operators;\nimport org.apache.parquet.filter2.predicate.UserDefinedPredicate;\nimport org.apache.parquet.format.BlockCipher;\nimport org.apache.parquet.format.BloomFilterHeader;\nimport org.apache.parquet.format.Util;\nimport org.apache.parquet.hadoop.metadata.BlockMetaData;\nimport org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\nimport org.apache.parquet.hadoop.metadata.ColumnPath;\nimport org.apache.parquet.io.SeekableInputStream;\n\npublic class BloomFilterReader implements FilterPredicate.Visitor<Boolean> {\n  private static final Logger LOG = LoggerFactory.getLogger(BloomFilterReader.class);\n  private static final boolean BLOCK_MIGHT_MATCH = false;\n  private static final boolean BLOCK_CANNOT_MATCH = true;\n\n  private final Map<ColumnPath, ColumnChunkMetaData> columns;\n  private final Map<ColumnPath, BloomFilter> cache = new HashMap<>();\n  private final InternalFileDecryptor fileDecryptor;\n  private final SeekableInputStream inputStream;\n\n  BloomFilterReader(\n      BlockMetaData block, InternalFileDecryptor fileDecryptor, SeekableInputStream inputStream) {\n    this.columns = new HashMap<>();\n    for (ColumnChunkMetaData column : block.getColumns()) {\n      columns.put(column.getPath(), column);\n    }\n    this.fileDecryptor = fileDecryptor;\n    this.inputStream = inputStream;\n  }\n\n  @Override\n  public <T extends Comparable<T>> Boolean visit(Operators.Eq<T> eq) {\n    T value = eq.getValue();\n\n    if (value == null) {\n      // the bloom filter bitset contains only non-null values so isn't helpful. 
this\n      // could check the column stats, but the StatisticsFilter is responsible\n      return BLOCK_MIGHT_MATCH;\n    }\n\n    Operators.Column<T> filterColumn = eq.getColumn();\n    ColumnChunkMetaData meta = columns.get(filterColumn.getColumnPath());\n    if (meta == null) {\n      // the column isn't in this file so all values are null, but the value\n      // must be non-null because of the above check.\n      return BLOCK_CANNOT_MATCH;\n    }\n\n    try {\n      BloomFilter bloomFilter = readBloomFilter(meta);\n      if (bloomFilter != null && !bloomFilter.findHash(bloomFilter.hash(value))) {\n        return BLOCK_CANNOT_MATCH;\n      }\n    } catch (RuntimeException e) {\n      LOG.warn(e.getMessage());\n      return BLOCK_MIGHT_MATCH;\n    }\n\n    return BLOCK_MIGHT_MATCH;\n  }\n\n  @Override\n  public <T extends Comparable<T>> Boolean visit(Operators.NotEq<T> notEq) {\n    return BLOCK_MIGHT_MATCH;\n  }\n\n  @Override\n  public <T extends Comparable<T>> Boolean visit(Operators.Lt<T> lt) {\n    return BLOCK_MIGHT_MATCH;\n  }\n\n  @Override\n  public <T extends Comparable<T>> Boolean visit(Operators.LtEq<T> ltEq) {\n    return BLOCK_MIGHT_MATCH;\n  }\n\n  @Override\n  public <T extends Comparable<T>> Boolean visit(Operators.Gt<T> gt) {\n    return BLOCK_MIGHT_MATCH;\n  }\n\n  @Override\n  public <T extends Comparable<T>> Boolean visit(Operators.GtEq<T> gtEq) {\n    return BLOCK_MIGHT_MATCH;\n  }\n\n  @Override\n  public Boolean visit(Operators.And and) {\n    return and.getLeft().accept(this) || and.getRight().accept(this);\n  }\n\n  @Override\n  public Boolean visit(Operators.Or or) {\n    return or.getLeft().accept(this) && or.getRight().accept(this);\n  }\n\n  @Override\n  public Boolean visit(Operators.Not not) {\n    throw new IllegalArgumentException(\n        \"This predicate \"\n            + not\n            + \" contains a not! 
Did you forget\"\n            + \" to run this predicate through LogicalInverseRewriter?\");\n  }\n\n  @Override\n  public <T extends Comparable<T>, U extends UserDefinedPredicate<T>> Boolean visit(\n      Operators.UserDefined<T, U> udp) {\n    return visit(udp, false);\n  }\n\n  @Override\n  public <T extends Comparable<T>, U extends UserDefinedPredicate<T>> Boolean visit(\n      Operators.LogicalNotUserDefined<T, U> udp) {\n    return visit(udp.getUserDefined(), true);\n  }\n\n  private <T extends Comparable<T>, U extends UserDefinedPredicate<T>> Boolean visit(\n      Operators.UserDefined<T, U> ud, boolean inverted) {\n    return BLOCK_MIGHT_MATCH;\n  }\n\n  BloomFilter readBloomFilter(ColumnChunkMetaData meta) {\n    if (cache.containsKey(meta.getPath())) {\n      return cache.get(meta.getPath());\n    }\n    try {\n      if (!cache.containsKey(meta.getPath())) {\n        BloomFilter bloomFilter = readBloomFilterInternal(meta);\n        if (bloomFilter == null) {\n          return null;\n        }\n\n        cache.put(meta.getPath(), bloomFilter);\n      }\n      return cache.get(meta.getPath());\n    } catch (IOException e) {\n      LOG.error(\"Failed to read Bloom filter data\", e);\n    }\n\n    return null;\n  }\n\n  private BloomFilter readBloomFilterInternal(ColumnChunkMetaData meta) throws IOException {\n    long bloomFilterOffset = meta.getBloomFilterOffset();\n    if (bloomFilterOffset < 0) {\n      return null;\n    }\n\n    // Prepare to decrypt Bloom filter (for encrypted columns)\n    BlockCipher.Decryptor bloomFilterDecryptor = null;\n    byte[] bloomFilterHeaderAAD = null;\n    byte[] bloomFilterBitsetAAD = null;\n    if (null != fileDecryptor && !fileDecryptor.plaintextFile()) {\n      InternalColumnDecryptionSetup columnDecryptionSetup =\n          fileDecryptor.getColumnSetup(meta.getPath());\n      if (columnDecryptionSetup.isEncrypted()) {\n        bloomFilterDecryptor = columnDecryptionSetup.getMetaDataDecryptor();\n        bloomFilterHeaderAAD =\n            AesCipher.createModuleAAD(\n                fileDecryptor.getFileAAD(),\n                ModuleCipherFactory.ModuleType.BloomFilterHeader,\n                meta.getRowGroupOrdinal(),\n                columnDecryptionSetup.getOrdinal(),\n                -1);\n        bloomFilterBitsetAAD =\n            AesCipher.createModuleAAD(\n                fileDecryptor.getFileAAD(),\n                ModuleCipherFactory.ModuleType.BloomFilterBitset,\n                meta.getRowGroupOrdinal(),\n                columnDecryptionSetup.getOrdinal(),\n                -1);\n      }\n    }\n\n    // Read Bloom filter data header.\n    inputStream.seek(bloomFilterOffset);\n    BloomFilterHeader bloomFilterHeader;\n    try {\n      bloomFilterHeader =\n          Util.readBloomFilterHeader(inputStream, bloomFilterDecryptor, bloomFilterHeaderAAD);\n    } catch (IOException e) {\n      LOG.warn(\"read no bloom filter\");\n      return null;\n    }\n\n    int numBytes = bloomFilterHeader.getNumBytes();\n    if (numBytes <= 0 || numBytes > BlockSplitBloomFilter.UPPER_BOUND_BYTES) {\n      LOG.warn(\"the read bloom filter size is wrong, size is {}\", bloomFilterHeader.getNumBytes());\n      return null;\n    }\n\n    if (!bloomFilterHeader.getHash().isSetXXHASH()\n        || !bloomFilterHeader.getAlgorithm().isSetBLOCK()\n        || !bloomFilterHeader.getCompression().isSetUNCOMPRESSED()) {\n      LOG.warn(\n          \"the read bloom filter is not supported yet,  algorithm = {}, hash = {}, \"\n              + \"compression = {}\",\n    
      bloomFilterHeader.getAlgorithm(),\n          bloomFilterHeader.getHash(),\n          bloomFilterHeader.getCompression());\n      return null;\n    }\n\n    byte[] bitset;\n    if (null == bloomFilterDecryptor) {\n      bitset = new byte[numBytes];\n      inputStream.readFully(bitset);\n    } else {\n      bitset = bloomFilterDecryptor.decrypt(inputStream, bloomFilterBitsetAAD);\n      if (bitset.length != numBytes) {\n        throw new ParquetCryptoRuntimeException(\"Wrong length of decrypted bloom filter bitset\");\n      }\n    }\n    return new BlockSplitBloomFilter(bitset);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/ColumnIndexReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.IOException;\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Set;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.parquet.crypto.AesCipher;\nimport org.apache.parquet.crypto.InternalColumnDecryptionSetup;\nimport org.apache.parquet.crypto.InternalFileDecryptor;\nimport org.apache.parquet.crypto.ModuleCipherFactory;\nimport org.apache.parquet.format.BlockCipher;\nimport org.apache.parquet.format.Util;\nimport org.apache.parquet.format.converter.ParquetMetadataConverter;\nimport org.apache.parquet.hadoop.metadata.BlockMetaData;\nimport org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\nimport org.apache.parquet.hadoop.metadata.ColumnPath;\nimport org.apache.parquet.internal.column.columnindex.ColumnIndex;\nimport org.apache.parquet.internal.column.columnindex.OffsetIndex;\nimport org.apache.parquet.internal.filter2.columnindex.ColumnIndexStore;\nimport org.apache.parquet.internal.hadoop.metadata.IndexReference;\nimport org.apache.parquet.io.SeekableInputStream;\n\nclass ColumnIndexReader implements ColumnIndexStore {\n  private static final Logger LOG = LoggerFactory.getLogger(ColumnIndexReader.class);\n\n  // Used for columns are not in this parquet file\n  private static final IndexStore MISSING_INDEX_STORE =\n      new IndexStore() {\n        @Override\n        public ColumnIndex getColumnIndex() {\n          return null;\n        }\n\n        @Override\n        public OffsetIndex getOffsetIndex() {\n          return null;\n        }\n      };\n\n  private static final ColumnIndexReader EMPTY =\n      new ColumnIndexReader(new BlockMetaData(), Collections.emptySet(), null, null) {\n        @Override\n        public ColumnIndex getColumnIndex(ColumnPath column) {\n          return null;\n        }\n\n        @Override\n        public OffsetIndex getOffsetIndex(ColumnPath column) {\n          throw new MissingOffsetIndexException(column);\n        }\n      };\n\n  private final InternalFileDecryptor fileDecryptor;\n  private final SeekableInputStream inputStream;\n  private final Map<ColumnPath, IndexStore> store;\n\n  /**\n   * Creates a column index store which lazily reads column/offset indexes for the columns in paths.\n   * Paths are the set of columns used for the projection.\n   */\n  static ColumnIndexReader create(\n      BlockMetaData block,\n      Set<ColumnPath> paths,\n      InternalFileDecryptor fileDecryptor,\n      SeekableInputStream inputStream) {\n    try {\n      return new ColumnIndexReader(block, paths, fileDecryptor, inputStream);\n    } catch (MissingOffsetIndexException e) {\n      return 
EMPTY;\n    }\n  }\n\n  private ColumnIndexReader(\n      BlockMetaData block,\n      Set<ColumnPath> paths,\n      InternalFileDecryptor fileDecryptor,\n      SeekableInputStream inputStream) {\n    this.fileDecryptor = fileDecryptor;\n    this.inputStream = inputStream;\n    Map<ColumnPath, IndexStore> store = new HashMap<>();\n    for (ColumnChunkMetaData column : block.getColumns()) {\n      ColumnPath path = column.getPath();\n      if (paths.contains(path)) {\n        store.put(path, new IndexStoreImpl(column));\n      }\n    }\n    this.store = store;\n  }\n\n  @Override\n  public ColumnIndex getColumnIndex(ColumnPath column) {\n    return store.getOrDefault(column, MISSING_INDEX_STORE).getColumnIndex();\n  }\n\n  @Override\n  public OffsetIndex getOffsetIndex(ColumnPath column) {\n    return store.getOrDefault(column, MISSING_INDEX_STORE).getOffsetIndex();\n  }\n\n  private interface IndexStore {\n    ColumnIndex getColumnIndex();\n\n    OffsetIndex getOffsetIndex();\n  }\n\n  private class IndexStoreImpl implements IndexStore {\n    private final ColumnChunkMetaData meta;\n    private ColumnIndex columnIndex;\n    private boolean columnIndexRead;\n    private final OffsetIndex offsetIndex;\n\n    IndexStoreImpl(ColumnChunkMetaData meta) {\n      this.meta = meta;\n      OffsetIndex oi;\n      try {\n        oi = readOffsetIndex(meta);\n      } catch (IOException e) {\n        // If the I/O issue still stands it will fail the reading later;\n        // otherwise we fail the filtering only with a missing offset index.\n        LOG.warn(\"Unable to read offset index for column {}\", meta.getPath(), e);\n        oi = null;\n      }\n      if (oi == null) {\n        throw new MissingOffsetIndexException(meta.getPath());\n      }\n      offsetIndex = oi;\n    }\n\n    @Override\n    public ColumnIndex getColumnIndex() {\n      if (!columnIndexRead) {\n        try {\n          columnIndex = readColumnIndex(meta);\n        } catch (IOException e) {\n          // If the I/O issue still stands it will fail the reading later;\n          // otherwise we fail the filtering only with a missing column index.\n          LOG.warn(\"Unable to read column index for column {}\", meta.getPath(), e);\n        }\n        columnIndexRead = true;\n      }\n      return columnIndex;\n    }\n\n    @Override\n    public OffsetIndex getOffsetIndex() {\n      return offsetIndex;\n    }\n  }\n\n  // Visible for testing\n  ColumnIndex readColumnIndex(ColumnChunkMetaData column) throws IOException {\n    IndexReference ref = column.getColumnIndexReference();\n    if (ref == null) {\n      return null;\n    }\n    inputStream.seek(ref.getOffset());\n\n    BlockCipher.Decryptor columnIndexDecryptor = null;\n    byte[] columnIndexAAD = null;\n    if (null != fileDecryptor && !fileDecryptor.plaintextFile()) {\n      InternalColumnDecryptionSetup columnDecryptionSetup =\n          fileDecryptor.getColumnSetup(column.getPath());\n      if (columnDecryptionSetup.isEncrypted()) {\n        columnIndexDecryptor = columnDecryptionSetup.getMetaDataDecryptor();\n        columnIndexAAD =\n            AesCipher.createModuleAAD(\n                fileDecryptor.getFileAAD(),\n                ModuleCipherFactory.ModuleType.ColumnIndex,\n                column.getRowGroupOrdinal(),\n                columnDecryptionSetup.getOrdinal(),\n                -1);\n      }\n    }\n    return ParquetMetadataConverter.fromParquetColumnIndex(\n        column.getPrimitiveType(),\n        Util.readColumnIndex(inputStream, columnIndexDecryptor, 
columnIndexAAD));\n  }\n\n  // Visible for testing\n  OffsetIndex readOffsetIndex(ColumnChunkMetaData column) throws IOException {\n    IndexReference ref = column.getOffsetIndexReference();\n    if (ref == null) {\n      return null;\n    }\n    inputStream.seek(ref.getOffset());\n\n    BlockCipher.Decryptor offsetIndexDecryptor = null;\n    byte[] offsetIndexAAD = null;\n    if (null != fileDecryptor && !fileDecryptor.plaintextFile()) {\n      InternalColumnDecryptionSetup columnDecryptionSetup =\n          fileDecryptor.getColumnSetup(column.getPath());\n      if (columnDecryptionSetup.isEncrypted()) {\n        offsetIndexDecryptor = columnDecryptionSetup.getMetaDataDecryptor();\n        offsetIndexAAD =\n            AesCipher.createModuleAAD(\n                fileDecryptor.getFileAAD(),\n                ModuleCipherFactory.ModuleType.OffsetIndex,\n                column.getRowGroupOrdinal(),\n                columnDecryptionSetup.getOrdinal(),\n                -1);\n      }\n    }\n    return ParquetMetadataConverter.fromParquetOffsetIndex(\n        Util.readOffsetIndex(inputStream, offsetIndexDecryptor, offsetIndexAAD));\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/ColumnPageReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.IOException;\nimport java.util.ArrayDeque;\nimport java.util.List;\nimport java.util.Queue;\n\nimport org.apache.parquet.bytes.BytesInput;\nimport org.apache.parquet.column.page.DataPage;\nimport org.apache.parquet.column.page.DataPageV1;\nimport org.apache.parquet.column.page.DataPageV2;\nimport org.apache.parquet.column.page.DictionaryPage;\nimport org.apache.parquet.column.page.PageReader;\nimport org.apache.parquet.compression.CompressionCodecFactory;\nimport org.apache.parquet.crypto.AesCipher;\nimport org.apache.parquet.crypto.ModuleCipherFactory;\nimport org.apache.parquet.format.BlockCipher;\nimport org.apache.parquet.internal.column.columnindex.OffsetIndex;\nimport org.apache.parquet.io.ParquetDecodingException;\n\npublic class ColumnPageReader implements PageReader {\n  private final CompressionCodecFactory.BytesInputDecompressor decompressor;\n  private final long valueCount;\n  private final Queue<DataPage> compressedPages;\n  private final DictionaryPage compressedDictionaryPage;\n\n  private final OffsetIndex offsetIndex;\n  private final long rowCount;\n  private int pageIndex = 0;\n\n  private final BlockCipher.Decryptor blockDecryptor;\n  private final byte[] dataPageAAD;\n  private final byte[] dictionaryPageAAD;\n\n  ColumnPageReader(\n      CompressionCodecFactory.BytesInputDecompressor decompressor,\n      List<DataPage> compressedPages,\n      DictionaryPage compressedDictionaryPage,\n      OffsetIndex offsetIndex,\n      long rowCount,\n      BlockCipher.Decryptor blockDecryptor,\n      byte[] fileAAD,\n      int rowGroupOrdinal,\n      int columnOrdinal) {\n    this.decompressor = decompressor;\n    this.compressedPages = new ArrayDeque<>(compressedPages);\n    this.compressedDictionaryPage = compressedDictionaryPage;\n    long count = 0;\n    for (DataPage p : compressedPages) {\n      count += p.getValueCount();\n    }\n    this.valueCount = count;\n    this.offsetIndex = offsetIndex;\n    this.rowCount = rowCount;\n    this.blockDecryptor = blockDecryptor;\n\n    if (blockDecryptor != null) {\n      dataPageAAD =\n          AesCipher.createModuleAAD(\n              fileAAD, ModuleCipherFactory.ModuleType.DataPage, rowGroupOrdinal, columnOrdinal, 0);\n      dictionaryPageAAD =\n          AesCipher.createModuleAAD(\n              fileAAD,\n              ModuleCipherFactory.ModuleType.DictionaryPage,\n              rowGroupOrdinal,\n              columnOrdinal,\n              -1);\n    } else {\n      dataPageAAD = null;\n      dictionaryPageAAD = null;\n    }\n  }\n\n  @Override\n  public long getTotalValueCount() {\n    return valueCount;\n  }\n\n  /** Returns the total value count 
of the current page. */\n  public int getPageValueCount() {\n    return compressedPages.element().getValueCount();\n  }\n\n  /** Skips the current page so it won't be returned by {@link #readPage()} */\n  public void skipPage() {\n    compressedPages.poll();\n    pageIndex++;\n  }\n\n  @Override\n  public DataPage readPage() {\n    final DataPage compressedPage = compressedPages.poll();\n    if (compressedPage == null) {\n      return null;\n    }\n    final int currentPageIndex = pageIndex++;\n\n    if (null != blockDecryptor) {\n      AesCipher.quickUpdatePageAAD(dataPageAAD, getPageOrdinal(currentPageIndex));\n    }\n\n    return compressedPage.accept(\n        new DataPage.Visitor<DataPage>() {\n          @Override\n          public DataPage visit(DataPageV1 dataPageV1) {\n            try {\n              BytesInput bytes = dataPageV1.getBytes();\n              if (null != blockDecryptor) {\n                bytes = BytesInput.from(blockDecryptor.decrypt(bytes.toByteArray(), dataPageAAD));\n              }\n              BytesInput decompressed =\n                  decompressor.decompress(bytes, dataPageV1.getUncompressedSize());\n\n              final DataPageV1 decompressedPage;\n              if (offsetIndex == null) {\n                decompressedPage =\n                    new DataPageV1(\n                        decompressed,\n                        dataPageV1.getValueCount(),\n                        dataPageV1.getUncompressedSize(),\n                        dataPageV1.getStatistics(),\n                        dataPageV1.getRlEncoding(),\n                        dataPageV1.getDlEncoding(),\n                        dataPageV1.getValueEncoding());\n              } else {\n                long firstRowIndex = offsetIndex.getFirstRowIndex(currentPageIndex);\n                decompressedPage =\n                    new DataPageV1(\n                        decompressed,\n                        dataPageV1.getValueCount(),\n                        dataPageV1.getUncompressedSize(),\n                        firstRowIndex,\n                        Math.toIntExact(\n                            offsetIndex.getLastRowIndex(currentPageIndex, rowCount)\n                                - firstRowIndex\n                                + 1),\n                        dataPageV1.getStatistics(),\n                        dataPageV1.getRlEncoding(),\n                        dataPageV1.getDlEncoding(),\n                        dataPageV1.getValueEncoding());\n              }\n              if (dataPageV1.getCrc().isPresent()) {\n                decompressedPage.setCrc(dataPageV1.getCrc().getAsInt());\n              }\n              return decompressedPage;\n            } catch (IOException e) {\n              throw new ParquetDecodingException(\"could not decompress page\", e);\n            }\n          }\n\n          @Override\n          public DataPage visit(DataPageV2 dataPageV2) {\n            if (!dataPageV2.isCompressed() && offsetIndex == null && null == blockDecryptor) {\n              return dataPageV2;\n            }\n            BytesInput pageBytes = dataPageV2.getData();\n\n            if (null != blockDecryptor) {\n              try {\n                pageBytes =\n                    BytesInput.from(blockDecryptor.decrypt(pageBytes.toByteArray(), dataPageAAD));\n              } catch (IOException e) {\n                throw new ParquetDecodingException(\n                    \"could not convert page ByteInput to byte array\", e);\n              }\n            }\n            if 
(dataPageV2.isCompressed()) {\n              int uncompressedSize =\n                  Math.toIntExact(\n                      dataPageV2.getUncompressedSize()\n                          - dataPageV2.getDefinitionLevels().size()\n                          - dataPageV2.getRepetitionLevels().size());\n              try {\n                pageBytes = decompressor.decompress(pageBytes, uncompressedSize);\n              } catch (IOException e) {\n                throw new ParquetDecodingException(\"could not decompress page\", e);\n              }\n            }\n\n            if (offsetIndex == null) {\n              return DataPageV2.uncompressed(\n                  dataPageV2.getRowCount(),\n                  dataPageV2.getNullCount(),\n                  dataPageV2.getValueCount(),\n                  dataPageV2.getRepetitionLevels(),\n                  dataPageV2.getDefinitionLevels(),\n                  dataPageV2.getDataEncoding(),\n                  pageBytes,\n                  dataPageV2.getStatistics());\n            } else {\n              return DataPageV2.uncompressed(\n                  dataPageV2.getRowCount(),\n                  dataPageV2.getNullCount(),\n                  dataPageV2.getValueCount(),\n                  offsetIndex.getFirstRowIndex(currentPageIndex),\n                  dataPageV2.getRepetitionLevels(),\n                  dataPageV2.getDefinitionLevels(),\n                  dataPageV2.getDataEncoding(),\n                  pageBytes,\n                  dataPageV2.getStatistics());\n            }\n          }\n        });\n  }\n\n  @Override\n  public DictionaryPage readDictionaryPage() {\n    if (compressedDictionaryPage == null) {\n      return null;\n    }\n    try {\n      BytesInput bytes = compressedDictionaryPage.getBytes();\n      if (null != blockDecryptor) {\n        bytes = BytesInput.from(blockDecryptor.decrypt(bytes.toByteArray(), dictionaryPageAAD));\n      }\n      DictionaryPage decompressedPage =\n          new DictionaryPage(\n              decompressor.decompress(bytes, compressedDictionaryPage.getUncompressedSize()),\n              compressedDictionaryPage.getDictionarySize(),\n              compressedDictionaryPage.getEncoding());\n      if (compressedDictionaryPage.getCrc().isPresent()) {\n        decompressedPage.setCrc(compressedDictionaryPage.getCrc().getAsInt());\n      }\n      return decompressedPage;\n    } catch (IOException e) {\n      throw new ParquetDecodingException(\"Could not decompress dictionary page\", e);\n    }\n  }\n\n  private int getPageOrdinal(int currentPageIndex) {\n    return offsetIndex == null ? currentPageIndex : offsetIndex.getPageOrdinal(currentPageIndex);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/ColumnReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.IOException;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.arrow.c.ArrowArray;\nimport org.apache.arrow.c.ArrowSchema;\nimport org.apache.arrow.memory.BufferAllocator;\nimport org.apache.arrow.memory.RootAllocator;\nimport org.apache.arrow.vector.FieldVector;\nimport org.apache.arrow.vector.dictionary.Dictionary;\nimport org.apache.arrow.vector.types.pojo.DictionaryEncoding;\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.column.Encoding;\nimport org.apache.parquet.column.page.DataPage;\nimport org.apache.parquet.column.page.DataPageV1;\nimport org.apache.parquet.column.page.DataPageV2;\nimport org.apache.parquet.column.page.DictionaryPage;\nimport org.apache.parquet.column.page.PageReader;\nimport org.apache.parquet.schema.LogicalTypeAnnotation;\nimport org.apache.spark.sql.types.DataType;\n\nimport org.apache.comet.CometSchemaImporter;\nimport org.apache.comet.IcebergApi;\nimport org.apache.comet.vector.CometDecodedVector;\nimport org.apache.comet.vector.CometDictionary;\nimport org.apache.comet.vector.CometDictionaryVector;\nimport org.apache.comet.vector.CometPlainVector;\nimport org.apache.comet.vector.CometVector;\n\n@IcebergApi\npublic class ColumnReader extends AbstractColumnReader {\n  protected static final Logger LOG = LoggerFactory.getLogger(ColumnReader.class);\n  protected final BufferAllocator ALLOCATOR = new RootAllocator();\n\n  /**\n   * The current Comet vector holding all the values read by this column reader. Owned by this\n   * reader and MUST be closed after use.\n   */\n  private CometDecodedVector currentVector;\n\n  /** Dictionary values for this column. Only set if the column is using dictionary encoding. */\n  protected CometDictionary dictionary;\n\n  /** Reader for dictionary & data pages in the current column chunk. */\n  protected PageReader pageReader;\n\n  /** Whether the first data page has been loaded. */\n  private boolean firstPageLoaded = false;\n\n  /**\n   * The number of nulls in the current batch, used when we are skipping importing of Arrow vectors,\n   * in which case we'll simply update the null count of the existing vectors.\n   */\n  int currentNumNulls;\n\n  /**\n   * The number of values in the current batch, used when we are skipping importing of Arrow\n   * vectors, in which case we'll simply update the null count of the existing vectors.\n   */\n  int currentNumValues;\n\n  /**\n   * Whether the last loaded vector contains any null value. This is used to determine if we can\n   * skip vector reloading. 
If the flag is false, the Arrow C API will skip importing the validity\n   * buffer, and therefore we cannot skip vector reloading.\n   */\n  boolean hadNull;\n\n  private final CometSchemaImporter importer;\n\n  private ArrowArray array = null;\n  private ArrowSchema schema = null;\n\n  ColumnReader(\n      DataType type,\n      ColumnDescriptor descriptor,\n      CometSchemaImporter importer,\n      int batchSize,\n      boolean useDecimal128,\n      boolean useLegacyDateTimestamp) {\n    super(type, descriptor, useDecimal128, useLegacyDateTimestamp);\n    assert batchSize > 0 : \"Batch size must be positive, found \" + batchSize;\n    this.batchSize = batchSize;\n    this.importer = importer;\n    initNative();\n  }\n\n  /**\n   * Sets the page reader for a new column chunk to read. Expects `readBatch` to be called after\n   * this.\n   *\n   * @param pageReader the page reader for the new column chunk\n   * @see <a href=\"https://github.com/apache/datafusion-comet/issues/2079\">Comet Issue #2079</a>\n   */\n  @IcebergApi\n  public void setPageReader(PageReader pageReader) throws IOException {\n    this.pageReader = pageReader;\n\n    DictionaryPage dictionaryPage = pageReader.readDictionaryPage();\n    if (dictionaryPage != null) {\n      LOG.debug(\"dictionary page encoding = {}\", dictionaryPage.getEncoding());\n      Native.setDictionaryPage(\n          nativeHandle,\n          dictionaryPage.getDictionarySize(),\n          dictionaryPage.getBytes().toByteArray(),\n          dictionaryPage.getEncoding().ordinal());\n    }\n  }\n\n  /** This method is called from Apache Iceberg. */\n  @IcebergApi\n  public void setRowGroupReader(RowGroupReader rowGroupReader, ParquetColumnSpec columnSpec)\n      throws IOException {\n    ColumnDescriptor descriptor = Utils.buildColumnDescriptor(columnSpec);\n    setPageReader(rowGroupReader.getPageReader(descriptor));\n  }\n\n  @Override\n  public void readBatch(int total) {\n    LOG.debug(\"Start to read batch of size = {}\", total);\n\n    if (!firstPageLoaded) {\n      readPage();\n      firstPageLoaded = true;\n    }\n\n    // Now first reset the current columnar batch so that it can be used to fill in a new batch\n    // of values. Then, keep reading more data pages (via 'readBatch') until the current batch is\n    // full, or we have read 'total' number of values.\n    Native.resetBatch(nativeHandle);\n\n    int left = total, nullsRead = 0;\n    while (left > 0) {\n      int[] array = Native.readBatch(nativeHandle, left);\n      int valuesRead = array[0];\n      nullsRead += array[1];\n      if (valuesRead < left) {\n        readPage();\n      }\n      left -= valuesRead;\n    }\n\n    this.currentNumValues = total;\n    this.currentNumNulls = nullsRead;\n  }\n\n  /** Returns the {@link CometVector} read by this reader. */\n  @Override\n  public CometVector currentBatch() {\n    return loadVector();\n  }\n\n  @Override\n  public void close() {\n    if (currentVector != null) {\n      currentVector.close();\n      currentVector = null;\n    }\n    super.close();\n  }\n\n  /** Returns a decoded {@link CometDecodedVector Comet vector}. 
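The vector is imported from\n   * the native reader through the Arrow C data interface, and may be either plain or\n   * dictionary-encoded. 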
*/\n  public CometDecodedVector loadVector() {\n    LOG.debug(\"Reloading vector\");\n\n    // Close the previous vector first to release struct memory allocated to import Arrow array &\n    // schema from native side, through the C data interface\n    if (currentVector != null) {\n      currentVector.close();\n    }\n\n    LogicalTypeAnnotation logicalTypeAnnotation =\n        descriptor.getPrimitiveType().getLogicalTypeAnnotation();\n    boolean isUuid =\n        logicalTypeAnnotation instanceof LogicalTypeAnnotation.UUIDLogicalTypeAnnotation;\n\n    array = ArrowArray.allocateNew(ALLOCATOR);\n    schema = ArrowSchema.allocateNew(ALLOCATOR);\n\n    long arrayAddr = array.memoryAddress();\n    long schemaAddr = schema.memoryAddress();\n\n    Native.currentBatch(nativeHandle, arrayAddr, schemaAddr);\n\n    FieldVector vector = importer.importVector(array, schema);\n\n    DictionaryEncoding dictionaryEncoding = vector.getField().getDictionary();\n\n    CometPlainVector cometVector = new CometPlainVector(vector, useDecimal128);\n\n    // Update whether the current vector contains any null values. This is used in the following\n    // batch(es) to determine whether we can skip loading the native vector.\n    hadNull = cometVector.hasNull();\n\n    if (dictionaryEncoding == null) {\n      if (dictionary != null) {\n        // This means the column was using dictionary encoding but has now fallen back to plain\n        // encoding on the native side. Set 'dictionary' to null here so we can use it as\n        // a condition to check whether we can re-use the vector later.\n        dictionary = null;\n      }\n      // Either the column is not dictionary encoded, or it was using dictionary encoding but\n      // a new data page has switched back to use plain encoding. 
In both cases we should\n      // return a plain vector.\n      currentVector = cometVector;\n      return currentVector;\n    }\n\n    // We need to re-initialize `CometDictionary` here because the `Data.importVector` API will\n    // release the previous dictionary vector and create a new one.\n    Dictionary arrowDictionary = importer.getProvider().lookup(dictionaryEncoding.getId());\n    CometPlainVector dictionaryVector =\n        new CometPlainVector(arrowDictionary.getVector(), useDecimal128, isUuid);\n    if (dictionary != null) {\n      dictionary.setDictionaryVector(dictionaryVector);\n    } else {\n      dictionary = new CometDictionary(dictionaryVector);\n    }\n\n    currentVector =\n        new CometDictionaryVector(\n            cometVector, dictionary, importer.getProvider(), useDecimal128, false, isUuid);\n    return currentVector;\n  }\n\n  protected void readPage() {\n    DataPage page = pageReader.readPage();\n    if (page == null) {\n      throw new RuntimeException(\"overreading: returned DataPage is null\");\n    }\n    int pageValueCount = page.getValueCount();\n    page.accept(\n        new DataPage.Visitor<Void>() {\n          @Override\n          public Void visit(DataPageV1 dataPageV1) {\n            LOG.debug(\"data page encoding = {}\", dataPageV1.getValueEncoding());\n            if (dataPageV1.getDlEncoding() != Encoding.RLE\n                && descriptor.getMaxDefinitionLevel() != 0) {\n              throw new UnsupportedOperationException(\n                  \"Unsupported encoding: \" + dataPageV1.getDlEncoding());\n            }\n            if (!isValidValueEncoding(dataPageV1.getValueEncoding())) {\n              throw new UnsupportedOperationException(\n                  \"Unsupported value encoding: \" + dataPageV1.getValueEncoding());\n            }\n            try {\n              byte[] array = dataPageV1.getBytes().toByteArray();\n              Native.setPageV1(\n                  nativeHandle, pageValueCount, array, dataPageV1.getValueEncoding().ordinal());\n            } catch (IOException e) {\n              throw new RuntimeException(e);\n            }\n            return null;\n          }\n\n          @Override\n          public Void visit(DataPageV2 dataPageV2) {\n            if (!isValidValueEncoding(dataPageV2.getDataEncoding())) {\n              throw new UnsupportedOperationException(\n                  \"Unsupported encoding: \" + dataPageV2.getDataEncoding());\n            }\n            try {\n              Native.setPageV2(\n                  nativeHandle,\n                  pageValueCount,\n                  dataPageV2.getDefinitionLevels().toByteArray(),\n                  dataPageV2.getRepetitionLevels().toByteArray(),\n                  dataPageV2.getData().toByteArray(),\n                  dataPageV2.getDataEncoding().ordinal());\n            } catch (IOException e) {\n              throw new RuntimeException(e);\n            }\n            return null;\n          }\n        });\n  }\n\n  @SuppressWarnings(\"deprecation\")\n  private boolean isValidValueEncoding(Encoding encoding) {\n    switch (encoding) {\n      case PLAIN:\n      case RLE_DICTIONARY:\n      case PLAIN_DICTIONARY:\n        return true;\n      default:\n        return false;\n    }\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/CometFileKeyUnwrapper.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.util.concurrent.ConcurrentHashMap;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.fs.Path;\nimport org.apache.parquet.crypto.DecryptionKeyRetriever;\nimport org.apache.parquet.crypto.DecryptionPropertiesFactory;\nimport org.apache.parquet.crypto.FileDecryptionProperties;\nimport org.apache.parquet.crypto.ParquetCryptoRuntimeException;\n\n// spotless:off\n/*\n * Architecture Overview:\n *\n *          JVM Side                           |                     Native Side\n *   ┌─────────────────────────────────────┐   |   ┌─────────────────────────────────────┐\n *   │     CometFileKeyUnwrapper           │   |   │       Parquet File Reading          │\n *   │                                     │   |   │                                     │\n *   │  ┌─────────────────────────────┐    │   |   │  ┌─────────────────────────────┐    │\n *   │  │      hadoopConf             │    │   |   │  │     file1.parquet           │    │\n *   │  │   (Configuration)           │    │   |   │  │     file2.parquet           │    │\n *   │  └─────────────────────────────┘    │   |   │  │     file3.parquet           │    │\n *   │              │                      │   |   │  └─────────────────────────────┘    │\n *   │              ▼                      │   |   │              │                      │\n *   │  ┌─────────────────────────────┐    │   |   │              │                      │\n *   │  │      factoryCache           │    │   |   │              ▼                      │\n *   │  │   (many-to-one mapping)     │    │   |   │  ┌─────────────────────────────┐    │\n *   │  │                             │    │   |   │  │  Parse file metadata &      │    │\n *   │  │ file1 ──┐                   │    │   |   │  │  extract keyMetadata        │    │\n *   │  │ file2 ──┼─► DecryptionProps │    │   |   │  └─────────────────────────────┘    │\n *   │  │ file3 ──┘      Factory      │    │   |   │              │                      │\n *   │  └─────────────────────────────┘    │   |   │              │                      │\n *   │              │                      │   |   │              ▼                      │\n *   │              ▼                      │   |   │  ╔═════════════════════════════╗    │\n *   │  ┌─────────────────────────────┐    │   |   │  ║        JNI CALL:            ║    │\n *   │  │      retrieverCache         │    │   |   │  ║       getKey(filePath,      ║    │\n *   │  │  filePath -> KeyRetriever   │◄───┼───┼───┼──║        keyMetadata)         ║    │\n *   │  └─────────────────────────────┘    │   |   │  ╚═════════════════════════════╝    │\n *   │              │                    
  │   |   │                                     │\n *   │              ▼                      │   |   │                                     │\n *   │  ┌─────────────────────────────┐    │   |   │                                     │\n *   │  │  DecryptionKeyRetriever     │    │   |   │                                     │\n *   │  │     .getKey(keyMetadata)    │    │   |   │                                     │\n *   │  └─────────────────────────────┘    │   |   │                                     │\n *   │              │                      │   |   │                                     │\n *   │              ▼                      │   |   │                                     │\n *   │  ┌─────────────────────────────┐    │   |   │  ┌─────────────────────────────┐    │\n *   │  │      return key bytes       │────┼───┼───┼─►│   Use key for decryption    │    │\n *   │  └─────────────────────────────┘    │   |   │  │    of parquet data          │    │\n *   └─────────────────────────────────────┘   |   │  └─────────────────────────────┘    │\n *                                             |   └─────────────────────────────────────┘\n *                                             |\n *                                    JNI Boundary\n *\n * Setup Phase (storeDecryptionKeyRetriever):\n * 1. hadoopConf → DecryptionPropertiesFactory (cached in factoryCache)\n * 2. Factory + filePath → DecryptionKeyRetriever (cached in retrieverCache)\n *\n * Runtime Phase (getKey):\n * 3. Native code calls getKey(filePath, keyMetadata) ──► JVM\n * 4. Retrieve cached DecryptionKeyRetriever for filePath\n * 5. KeyRetriever.getKey(keyMetadata) → decrypted key bytes\n * 6. Return key bytes ──► Native code for parquet decryption\n */\n// spotless:on\n\n/**\n * Helper class to access DecryptionKeyRetriever.getKey from native code via JNI. This class handles\n * the complexity of creating and caching properly configured DecryptionKeyRetriever instances using\n * DecryptionPropertiesFactory. The lifetime of this object is meant to map to a single Comet plan,\n * so it is associated with a CometExecIterator.\n */\npublic class CometFileKeyUnwrapper {\n\n  // Each file path gets a unique DecryptionKeyRetriever\n  private final ConcurrentHashMap<String, DecryptionKeyRetriever> retrieverCache =\n      new ConcurrentHashMap<>();\n\n  // Cache the factory since we should be using the same hadoopConf for every file in this scan.\n  private DecryptionPropertiesFactory factory = null;\n  // Cache the hadoopConf just to assert the assumption above.\n  private Configuration conf = null;\n\n  /**\n   * Normalizes S3 URI schemes to a canonical form. S3 can be accessed via multiple schemes (s3://,\n   * s3a://, s3n://) that refer to the same logical filesystem. 
This method ensures consistent cache\n   * lookups regardless of which scheme is used.\n   *\n   * @param filePath The file path that may contain an S3 URI\n   * @return The file path with normalized S3 scheme (s3a://)\n   */\n  private String normalizeS3Scheme(final String filePath) {\n    // Normalize s3:// and s3n:// to s3a:// for consistent cache lookups\n    // This handles the case where ObjectStoreUrl uses s3:// but Spark uses s3a://\n    String s3Prefix = \"s3://\";\n    String s3nPrefix = \"s3n://\";\n    if (filePath.startsWith(s3Prefix)) {\n      return \"s3a://\" + filePath.substring(s3Prefix.length());\n    } else if (filePath.startsWith(s3nPrefix)) {\n      return \"s3a://\" + filePath.substring(s3nPrefix.length());\n    }\n    return filePath;\n  }\n\n  /**\n   * Creates and stores a DecryptionKeyRetriever instance for the given file path.\n   *\n   * @param filePath The path to the Parquet file\n   * @param hadoopConf The Hadoop Configuration to use for this file path\n   */\n  public void storeDecryptionKeyRetriever(final String filePath, final Configuration hadoopConf) {\n    final String normalizedPath = normalizeS3Scheme(filePath);\n    // Use DecryptionPropertiesFactory.loadFactory to get the factory and then call\n    // getFileDecryptionProperties\n    if (factory == null) {\n      factory = DecryptionPropertiesFactory.loadFactory(hadoopConf);\n      conf = hadoopConf;\n    } else {\n      // Check the assumption that all files have the same hadoopConf and thus same Factory\n      assert (conf == hadoopConf);\n    }\n    Path path = new Path(filePath);\n    FileDecryptionProperties decryptionProperties =\n        factory.getFileDecryptionProperties(hadoopConf, path);\n\n    DecryptionKeyRetriever keyRetriever = decryptionProperties.getKeyRetriever();\n    retrieverCache.put(normalizedPath, keyRetriever);\n  }\n\n  /**\n   * Gets the decryption key for the given key metadata using the cached DecryptionKeyRetriever for\n   * the specified file path.\n   *\n   * @param filePath The path to the Parquet file\n   * @param keyMetadata The key metadata bytes from the Parquet file\n   * @return The decrypted key bytes\n   * @throws ParquetCryptoRuntimeException if key unwrapping fails\n   */\n  public byte[] getKey(final String filePath, final byte[] keyMetadata)\n      throws ParquetCryptoRuntimeException {\n    final String normalizedPath = normalizeS3Scheme(filePath);\n    DecryptionKeyRetriever keyRetriever = retrieverCache.get(normalizedPath);\n    if (keyRetriever == null) {\n      throw new ParquetCryptoRuntimeException(\n          \"Failed to find DecryptionKeyRetriever for path: \" + filePath);\n    }\n    return keyRetriever.getKey(keyMetadata);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/CometInputFile.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.IOException;\nimport java.util.regex.Matcher;\nimport java.util.regex.Pattern;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.fs.FSDataInputStream;\nimport org.apache.hadoop.fs.FileStatus;\nimport org.apache.hadoop.fs.FileSystem;\nimport org.apache.hadoop.fs.FutureDataInputStreamBuilder;\nimport org.apache.hadoop.fs.Path;\nimport org.apache.hadoop.util.VersionInfo;\nimport org.apache.parquet.hadoop.util.HadoopStreams;\nimport org.apache.parquet.io.InputFile;\nimport org.apache.parquet.io.SeekableInputStream;\n\n/**\n * A Parquet {@link InputFile} implementation that's similar to {@link\n * org.apache.parquet.hadoop.util.HadoopInputFile}, but with optimizations introduced in Hadoop 3.x,\n * for S3 specifically.\n */\npublic class CometInputFile implements InputFile {\n  private static final String MAJOR_MINOR_REGEX = \"^(\\\\d+)\\\\.(\\\\d+)(\\\\..*)?$\";\n  private static final Pattern VERSION_MATCHER = Pattern.compile(MAJOR_MINOR_REGEX);\n\n  private final FileSystem fs;\n  private final FileStatus stat;\n  private final Configuration conf;\n\n  public static CometInputFile fromPath(Path path, Configuration conf) throws IOException {\n    FileSystem fs = path.getFileSystem(conf);\n    return new CometInputFile(fs, fs.getFileStatus(path), conf);\n  }\n\n  private CometInputFile(FileSystem fs, FileStatus stat, Configuration conf) {\n    this.fs = fs;\n    this.stat = stat;\n    this.conf = conf;\n  }\n\n  @Override\n  public long getLength() {\n    return stat.getLen();\n  }\n\n  public Configuration getConf() {\n    return this.conf;\n  }\n\n  public FileSystem getFileSystem() {\n    return this.fs;\n  }\n\n  public Path getPath() {\n    return stat.getPath();\n  }\n\n  @Override\n  public SeekableInputStream newStream() throws IOException {\n    FSDataInputStream stream;\n    try {\n      if (isAtLeastHadoop33()) {\n        // If Hadoop version is >= 3.3.x, we'll use the 'openFile' API which can save a\n        // HEAD request from cloud storages like S3\n        FutureDataInputStreamBuilder inputStreamBuilder =\n            fs.openFile(stat.getPath()).withFileStatus(stat);\n\n        if (stat.getPath().toString().startsWith(\"s3a\")) {\n          // Switch to random S3 input policy so that we don't do sequential read on the entire\n          // S3 object. 
By default, the policy is normal, which does sequential reads until a backward\n          // seek happens, which in our case will never happen.\n          inputStreamBuilder =\n              inputStreamBuilder.opt(\"fs.s3a.experimental.input.fadvise\", \"random\");\n        }\n        stream = inputStreamBuilder.build().get();\n      } else {\n        stream = fs.open(stat.getPath());\n      }\n    } catch (Exception e) {\n      throw new IOException(\"Error when opening file \" + stat.getPath(), e);\n    }\n    return HadoopStreams.wrap(stream);\n  }\n\n  public SeekableInputStream newStream(long offset, long length) throws IOException {\n    try {\n      FSDataInputStream stream;\n      if (isAtLeastHadoop33()) {\n        FutureDataInputStreamBuilder inputStreamBuilder =\n            fs.openFile(stat.getPath()).withFileStatus(stat);\n\n        if (stat.getPath().toString().startsWith(\"s3a\")) {\n          // Switch to random S3 input policy so that we don't do sequential read on the entire\n          // S3 object. By default, the policy is normal, which does sequential reads until a\n          // backward seek happens, which in our case will never happen.\n          //\n          // Also set the readahead length equal to the column chunk length so we don't have to\n          // open multiple S3 HTTP connections.\n          inputStreamBuilder =\n              inputStreamBuilder\n                  .opt(\"fs.s3a.experimental.input.fadvise\", \"random\")\n                  .opt(\"fs.s3a.readahead.range\", Long.toString(length));\n        }\n\n        stream = inputStreamBuilder.build().get();\n      } else {\n        stream = fs.open(stat.getPath());\n      }\n      return HadoopStreams.wrap(stream);\n    } catch (Exception e) {\n      throw new IOException(\n          \"Error when opening file \" + stat.getPath() + \", offset=\" + offset + \", length=\" + length,\n          e);\n    }\n  }\n\n  @Override\n  public String toString() {\n    return stat.getPath().toString();\n  }\n\n  private static boolean isAtLeastHadoop33() {\n    String version = VersionInfo.getVersion();\n    return CometInputFile.isAtLeastHadoop33(version);\n  }\n\n  static boolean isAtLeastHadoop33(String version) {\n    Matcher matcher = VERSION_MATCHER.matcher(version);\n    if (matcher.matches()) {\n      if (matcher.group(1).equals(\"3\")) {\n        int minorVersion = Integer.parseInt(matcher.group(2));\n        return minorVersion >= 3;\n      }\n    }\n    return false;\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/DictionaryPageReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.IOException;\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Optional;\nimport java.util.concurrent.ConcurrentHashMap;\n\nimport org.apache.parquet.ParquetReadOptions;\nimport org.apache.parquet.bytes.BytesInput;\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.column.page.DictionaryPage;\nimport org.apache.parquet.column.page.DictionaryPageReadStore;\nimport org.apache.parquet.compression.CompressionCodecFactory;\nimport org.apache.parquet.crypto.AesCipher;\nimport org.apache.parquet.crypto.InternalColumnDecryptionSetup;\nimport org.apache.parquet.crypto.InternalFileDecryptor;\nimport org.apache.parquet.crypto.ModuleCipherFactory;\nimport org.apache.parquet.format.BlockCipher;\nimport org.apache.parquet.format.DictionaryPageHeader;\nimport org.apache.parquet.format.PageHeader;\nimport org.apache.parquet.format.Util;\nimport org.apache.parquet.hadoop.metadata.BlockMetaData;\nimport org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\nimport org.apache.parquet.io.ParquetDecodingException;\nimport org.apache.parquet.io.SeekableInputStream;\n\npublic class DictionaryPageReader implements DictionaryPageReadStore {\n  private final Map<String, Optional<DictionaryPage>> cache;\n  private final InternalFileDecryptor fileDecryptor;\n  private final SeekableInputStream inputStream;\n  private final ParquetReadOptions options;\n  private final Map<String, ColumnChunkMetaData> columns;\n\n  DictionaryPageReader(\n      BlockMetaData block,\n      InternalFileDecryptor fileDecryptor,\n      SeekableInputStream inputStream,\n      ParquetReadOptions options) {\n    this.columns = new HashMap<>();\n    this.cache = new ConcurrentHashMap<>();\n    this.fileDecryptor = fileDecryptor;\n    this.inputStream = inputStream;\n    this.options = options;\n\n    for (ColumnChunkMetaData column : block.getColumns()) {\n      columns.put(column.getPath().toDotString(), column);\n    }\n  }\n\n  @Override\n  public DictionaryPage readDictionaryPage(ColumnDescriptor descriptor) {\n    String dotPath = String.join(\".\", descriptor.getPath());\n    ColumnChunkMetaData column = columns.get(dotPath);\n\n    if (column == null) {\n      throw new ParquetDecodingException(\"Failed to load dictionary, unknown column: \" + dotPath);\n    }\n\n    return cache\n        .computeIfAbsent(\n            dotPath,\n            key -> {\n              try {\n                final DictionaryPage dict =\n                    column.hasDictionaryPage() ? 
readDictionary(column) : null;\n\n                // Copy the dictionary to ensure it can be reused if it is returned\n                // more than once. This can happen when a DictionaryFilter has two or\n                // more predicates for the same column. Cache misses are stored as well,\n                // as empty Optionals.\n                return (dict != null) ? Optional.of(reusableCopy(dict)) : Optional.empty();\n              } catch (IOException e) {\n                throw new ParquetDecodingException(\"Failed to read dictionary\", e);\n              }\n            })\n        .orElse(null);\n  }\n\n  DictionaryPage readDictionary(ColumnChunkMetaData meta) throws IOException {\n    if (!meta.hasDictionaryPage()) {\n      return null;\n    }\n\n    if (inputStream.getPos() != meta.getStartingPos()) {\n      inputStream.seek(meta.getStartingPos());\n    }\n\n    boolean encryptedColumn = false;\n    InternalColumnDecryptionSetup columnDecryptionSetup = null;\n    byte[] dictionaryPageAAD = null;\n    BlockCipher.Decryptor pageDecryptor = null;\n    if (null != fileDecryptor && !fileDecryptor.plaintextFile()) {\n      columnDecryptionSetup = fileDecryptor.getColumnSetup(meta.getPath());\n      if (columnDecryptionSetup.isEncrypted()) {\n        encryptedColumn = true;\n      }\n    }\n\n    PageHeader pageHeader;\n    if (!encryptedColumn) {\n      pageHeader = Util.readPageHeader(inputStream);\n    } else {\n      byte[] dictionaryPageHeaderAAD =\n          AesCipher.createModuleAAD(\n              fileDecryptor.getFileAAD(),\n              ModuleCipherFactory.ModuleType.DictionaryPageHeader,\n              meta.getRowGroupOrdinal(),\n              columnDecryptionSetup.getOrdinal(),\n              -1);\n      pageHeader =\n          Util.readPageHeader(\n              inputStream, columnDecryptionSetup.getMetaDataDecryptor(), dictionaryPageHeaderAAD);\n      dictionaryPageAAD =\n          AesCipher.createModuleAAD(\n              fileDecryptor.getFileAAD(),\n              ModuleCipherFactory.ModuleType.DictionaryPage,\n              meta.getRowGroupOrdinal(),\n              columnDecryptionSetup.getOrdinal(),\n              -1);\n      pageDecryptor = columnDecryptionSetup.getDataDecryptor();\n    }\n\n    if (!pageHeader.isSetDictionary_page_header()) {\n      return null;\n    }\n\n    DictionaryPage compressedPage =\n        readCompressedDictionary(pageHeader, inputStream, pageDecryptor, dictionaryPageAAD);\n    CompressionCodecFactory.BytesInputDecompressor decompressor =\n        options.getCodecFactory().getDecompressor(meta.getCodec());\n\n    return new DictionaryPage(\n        decompressor.decompress(compressedPage.getBytes(), compressedPage.getUncompressedSize()),\n        compressedPage.getDictionarySize(),\n        compressedPage.getEncoding());\n  }\n\n  private DictionaryPage readCompressedDictionary(\n      PageHeader pageHeader,\n      SeekableInputStream fin,\n      BlockCipher.Decryptor pageDecryptor,\n      byte[] dictionaryPageAAD)\n      throws IOException {\n    DictionaryPageHeader dictHeader = pageHeader.getDictionary_page_header();\n\n    int uncompressedPageSize = pageHeader.getUncompressed_page_size();\n    int compressedPageSize = pageHeader.getCompressed_page_size();\n\n    byte[] dictPageBytes = new byte[compressedPageSize];\n    fin.readFully(dictPageBytes);\n\n    BytesInput bin = BytesInput.from(dictPageBytes);\n\n    if (null != pageDecryptor) {\n      bin = BytesInput.from(pageDecryptor.decrypt(bin.toByteArray(), dictionaryPageAAD));\n    }\n\n    return new DictionaryPage(\n        bin,\n   
     uncompressedPageSize,\n        dictHeader.getNum_values(),\n        org.apache.parquet.column.Encoding.valueOf(dictHeader.getEncoding().name()));\n  }\n\n  private static DictionaryPage reusableCopy(DictionaryPage dict) throws IOException {\n    return new DictionaryPage(\n        BytesInput.from(dict.getBytes().toByteArray()),\n        dict.getDictionarySize(),\n        dict.getEncoding());\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/FileReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.Closeable;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.lang.reflect.Method;\nimport java.net.URI;\nimport java.nio.ByteBuffer;\nimport java.util.ArrayList;\nimport java.util.Arrays;\nimport java.util.Collection;\nimport java.util.HashMap;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.Set;\nimport java.util.concurrent.ExecutionException;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Future;\nimport java.util.stream.Collectors;\nimport java.util.stream.Stream;\nimport java.util.zip.CRC32;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.parquet.HadoopReadOptions;\nimport org.apache.parquet.ParquetReadOptions;\nimport org.apache.parquet.Preconditions;\nimport org.apache.parquet.bytes.ByteBufferInputStream;\nimport org.apache.parquet.bytes.BytesInput;\nimport org.apache.parquet.bytes.BytesUtils;\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.column.page.DataPage;\nimport org.apache.parquet.column.page.DataPageV1;\nimport org.apache.parquet.column.page.DataPageV2;\nimport org.apache.parquet.column.page.DictionaryPage;\nimport org.apache.parquet.column.page.PageReadStore;\nimport org.apache.parquet.compression.CompressionCodecFactory;\nimport org.apache.parquet.crypto.AesCipher;\nimport org.apache.parquet.crypto.EncryptionPropertiesFactory;\nimport org.apache.parquet.crypto.FileDecryptionProperties;\nimport org.apache.parquet.crypto.InternalColumnDecryptionSetup;\nimport org.apache.parquet.crypto.InternalFileDecryptor;\nimport org.apache.parquet.crypto.ModuleCipherFactory;\nimport org.apache.parquet.crypto.ParquetCryptoRuntimeException;\nimport org.apache.parquet.filter2.compat.FilterCompat;\nimport org.apache.parquet.format.BlockCipher;\nimport org.apache.parquet.format.DataPageHeader;\nimport org.apache.parquet.format.DataPageHeaderV2;\nimport org.apache.parquet.format.DictionaryPageHeader;\nimport org.apache.parquet.format.FileCryptoMetaData;\nimport org.apache.parquet.format.PageHeader;\nimport org.apache.parquet.format.Util;\nimport org.apache.parquet.format.converter.ParquetMetadataConverter;\nimport org.apache.parquet.hadoop.ParquetInputFormat;\nimport org.apache.parquet.hadoop.metadata.BlockMetaData;\nimport org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\nimport org.apache.parquet.hadoop.metadata.ColumnPath;\nimport org.apache.parquet.hadoop.metadata.FileMetaData;\nimport org.apache.parquet.hadoop.metadata.ParquetMetadata;\nimport org.apache.parquet.hadoop.util.counters.BenchmarkCounter;\nimport 
org.apache.parquet.internal.column.columnindex.OffsetIndex;\nimport org.apache.parquet.internal.filter2.columnindex.ColumnIndexFilter;\nimport org.apache.parquet.internal.filter2.columnindex.ColumnIndexStore;\nimport org.apache.parquet.internal.filter2.columnindex.RowRanges;\nimport org.apache.parquet.io.InputFile;\nimport org.apache.parquet.io.ParquetDecodingException;\nimport org.apache.parquet.io.SeekableInputStream;\nimport org.apache.parquet.schema.MessageType;\nimport org.apache.parquet.schema.PrimitiveType;\nimport org.apache.spark.sql.execution.metric.SQLMetric;\n\nimport org.apache.comet.IcebergApi;\n\nimport static org.apache.parquet.hadoop.ParquetFileWriter.EFMAGIC;\nimport static org.apache.parquet.hadoop.ParquetFileWriter.MAGIC;\n\nimport static org.apache.comet.parquet.RowGroupFilter.FilterLevel.BLOOMFILTER;\nimport static org.apache.comet.parquet.RowGroupFilter.FilterLevel.DICTIONARY;\nimport static org.apache.comet.parquet.RowGroupFilter.FilterLevel.STATISTICS;\n\n/**\n * A Parquet file reader. Mostly follows {@code ParquetFileReader} in {@code parquet-mr}, but with\n * customizations & optimizations for Comet.\n */\n@IcebergApi\npublic class FileReader implements Closeable {\n  private static final Logger LOG = LoggerFactory.getLogger(FileReader.class);\n\n  private final ParquetMetadataConverter converter;\n  private final SeekableInputStream f;\n  private final InputFile file;\n  private final Map<String, SQLMetric> metrics;\n  private final Map<ColumnPath, ColumnDescriptor> paths = new HashMap<>();\n  private final FileMetaData fileMetaData; // may be null\n  private final List<BlockMetaData> blocks;\n  private final List<ColumnIndexReader> blockIndexStores;\n  private final List<RowRanges> blockRowRanges;\n  private final CRC32 crc;\n  private final ParquetMetadata footer;\n\n  /**\n   * Read configurations come from two sources:\n   *\n   * <ul>\n   *   <li>options: options defined & specified by the 'parquet-mr' library\n   *   <li>cometOptions: Comet-specific options, for the features introduced in Comet's Parquet\n   *       implementation\n   * </ul>\n   */\n  private final ParquetReadOptions options;\n\n  private final ReadOptions cometOptions;\n\n  private int currentBlock = 0;\n  private RowGroupReader currentRowGroup = null;\n  private InternalFileDecryptor fileDecryptor;\n\n  FileReader(InputFile file, ParquetReadOptions options, ReadOptions cometOptions)\n      throws IOException {\n    this(file, null, options, cometOptions, null);\n  }\n\n  /** This constructor is called from Apache Iceberg. */\n  @IcebergApi\n  public FileReader(\n      WrappedInputFile file,\n      ReadOptions cometOptions,\n      Map<String, String> properties,\n      Long start,\n      Long length,\n      byte[] fileEncryptionKey,\n      byte[] fileAADPrefix)\n      throws IOException {\n    ParquetReadOptions options =\n        buildParquetReadOptions(\n            new Configuration(), properties, start, length, fileEncryptionKey, fileAADPrefix);\n    this.converter = new ParquetMetadataConverter(options);\n    this.file = file;\n    this.f = file.newStream();\n    this.options = options;\n    this.cometOptions = cometOptions;\n    this.metrics = null;\n    try {\n      this.footer = readFooter(file, options, f, converter);\n    } catch (Exception e) {\n      // If reading the footer throws an exception in the constructor, the new stream\n      // should be closed. 
Otherwise, there is no way to close it from outside.\n      f.close();\n      throw e;\n    }\n    this.fileMetaData = footer.getFileMetaData();\n    this.fileDecryptor = fileMetaData.getFileDecryptor(); // must be called before filterRowGroups!\n    if (null != fileDecryptor && fileDecryptor.plaintextFile()) {\n      this.fileDecryptor = null; // Plaintext file. No decryptor needed\n    }\n\n    this.blocks = footer.getBlocks(); // row group filtering is performed by Iceberg\n    this.blockIndexStores = listWithNulls(this.blocks.size());\n    this.blockRowRanges = listWithNulls(this.blocks.size());\n    for (ColumnDescriptor col : footer.getFileMetaData().getSchema().getColumns()) {\n      paths.put(ColumnPath.get(col.getPath()), col);\n    }\n    this.crc = options.usePageChecksumVerification() ? new CRC32() : null;\n  }\n\n  FileReader(\n      InputFile file,\n      ParquetReadOptions options,\n      ReadOptions cometOptions,\n      Map<String, SQLMetric> metrics)\n      throws IOException {\n    this(file, null, options, cometOptions, metrics);\n  }\n\n  FileReader(\n      InputFile file,\n      ParquetMetadata footer,\n      ParquetReadOptions options,\n      ReadOptions cometOptions,\n      Map<String, SQLMetric> metrics)\n      throws IOException {\n    this.converter = new ParquetMetadataConverter(options);\n    this.file = file;\n    this.f = file.newStream();\n    this.options = options;\n    this.cometOptions = cometOptions;\n    this.metrics = metrics;\n    if (footer == null) {\n      try {\n        footer = readFooter(file, options, f, converter);\n      } catch (Exception e) {\n        // If reading the footer throws an exception in the constructor, the new stream\n        // should be closed. Otherwise, there is no way to close it from outside.\n        f.close();\n        throw e;\n      }\n    }\n    this.footer = footer;\n    this.fileMetaData = footer.getFileMetaData();\n    this.fileDecryptor = fileMetaData.getFileDecryptor(); // must be called before filterRowGroups!\n    if (null != fileDecryptor && fileDecryptor.plaintextFile()) {\n      this.fileDecryptor = null; // Plaintext file. No decryptor needed\n    }\n\n    this.blocks = filterRowGroups(footer.getBlocks());\n    this.blockIndexStores = listWithNulls(this.blocks.size());\n    this.blockRowRanges = listWithNulls(this.blocks.size());\n    for (ColumnDescriptor col : footer.getFileMetaData().getSchema().getColumns()) {\n      paths.put(ColumnPath.get(col.getPath()), col);\n    }\n    this.crc = options.usePageChecksumVerification() ? new CRC32() : null;\n  }\n\n  /** Returns the footer of the Parquet file being read. */\n  ParquetMetadata getFooter() {\n    return this.footer;\n  }\n\n  /** Returns the metadata of the Parquet file being read. */\n  FileMetaData getFileMetaData() {\n    return this.fileMetaData;\n  }\n\n  /** Returns the input stream of the Parquet file being read. */\n  public SeekableInputStream getInputStream() {\n    return this.f;\n  }\n\n  /** Returns the Parquet options for reading the file. */\n  public ParquetReadOptions getOptions() {\n    return this.options;\n  }\n\n  /** Returns all the row groups of this reader (after applying row group filtering). 
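Note that the Iceberg constructor keeps all row groups from the footer and relies on Iceberg to filter them. 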
*/\n  public List<BlockMetaData> getRowGroups() {\n    return blocks;\n  }\n\n  /** Sets the projected columns to be read later via {@link #readNextRowGroup()} */\n  public void setRequestedSchema(List<ColumnDescriptor> projection) {\n    paths.clear();\n    for (ColumnDescriptor col : projection) {\n      paths.put(ColumnPath.get(col.getPath()), col);\n    }\n  }\n\n  /** This method is called from Apache Iceberg. */\n  @IcebergApi\n  public void setRequestedSchemaFromSpecs(List<ParquetColumnSpec> specList) {\n    paths.clear();\n    for (ParquetColumnSpec colSpec : specList) {\n      ColumnDescriptor descriptor = Utils.buildColumnDescriptor(colSpec);\n      paths.put(ColumnPath.get(colSpec.getPath()), descriptor);\n    }\n  }\n\n  private static ParquetReadOptions buildParquetReadOptions(\n      Configuration conf,\n      Map<String, String> properties,\n      Long start,\n      Long length,\n      byte[] fileEncryptionKey,\n      byte[] fileAADPrefix) {\n\n    // Iceberg removes these read properties when building its ParquetReadOptions.\n    // We want to build exactly the same ParquetReadOptions as Iceberg does.\n    Collection<String> readPropertiesToRemove =\n        Set.of(\n            ParquetInputFormat.UNBOUND_RECORD_FILTER,\n            ParquetInputFormat.FILTER_PREDICATE,\n            ParquetInputFormat.READ_SUPPORT_CLASS,\n            EncryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME);\n\n    for (String property : readPropertiesToRemove) {\n      conf.unset(property);\n    }\n\n    ParquetReadOptions.Builder optionsBuilder = HadoopReadOptions.builder(conf);\n    for (Map.Entry<String, String> entry : properties.entrySet()) {\n      optionsBuilder.set(entry.getKey(), entry.getValue());\n    }\n\n    if (start != null && length != null) {\n      optionsBuilder.withRange(start, start + length);\n    }\n\n    if (fileEncryptionKey != null) {\n      FileDecryptionProperties fileDecryptionProperties =\n          FileDecryptionProperties.builder()\n              .withFooterKey(fileEncryptionKey)\n              .withAADPrefix(fileAADPrefix)\n              .build();\n      optionsBuilder.withDecryption(fileDecryptionProperties);\n    }\n\n    return optionsBuilder.build();\n  }\n\n  /**\n   * Gets the total number of records across all row groups (after applying row group filtering).\n   */\n  public long getRecordCount() {\n    long total = 0;\n    for (BlockMetaData block : blocks) {\n      total += block.getRowCount();\n    }\n    return total;\n  }\n\n  /**\n   * Gets the total number of records across all row groups (after applying both row group filtering\n   * and page-level column index filtering).\n   */\n  public long getFilteredRecordCount() {\n    if (!options.useColumnIndexFilter()\n        || !FilterCompat.isFilteringRequired(options.getRecordFilter())) {\n      return getRecordCount();\n    }\n    long total = 0;\n    for (int i = 0, n = blocks.size(); i < n; ++i) {\n      total += getRowRanges(i).rowCount();\n    }\n    return total;\n  }\n\n  /** Skips the next row group. Returns false if there is no row group to skip, true otherwise. 
*/\n  @IcebergApi\n  public boolean skipNextRowGroup() {\n    return advanceToNextBlock();\n  }\n\n  /**\n   * Returns the next row group to read (after applying row group filtering), or null if there are\n   * no more row groups.\n   */\n  @IcebergApi\n  public RowGroupReader readNextRowGroup() throws IOException {\n    if (currentBlock == blocks.size()) {\n      return null;\n    }\n    BlockMetaData block = blocks.get(currentBlock);\n    if (block.getRowCount() == 0) {\n      throw new RuntimeException(\"Illegal row group of 0 rows\");\n    }\n    this.currentRowGroup = new RowGroupReader(block.getRowCount(), block.getRowIndexOffset());\n    // prepare the list of consecutive parts to read them in one scan\n    List<ConsecutivePartList> allParts = new ArrayList<>();\n    ConsecutivePartList currentParts = null;\n    for (ColumnChunkMetaData mc : block.getColumns()) {\n      ColumnPath pathKey = mc.getPath();\n      ColumnDescriptor columnDescriptor = paths.get(pathKey);\n      if (columnDescriptor != null) {\n        BenchmarkCounter.incrementTotalBytes(mc.getTotalSize());\n        long startingPos = mc.getStartingPos();\n        boolean mergeRanges = cometOptions.isIOMergeRangesEnabled();\n        int mergeRangeDelta = cometOptions.getIOMergeRangesDelta();\n\n        // start a new list if -\n        //   it is the first part, or\n        //   merging is disabled and the part is not consecutive, or\n        //   merging is enabled and the gap to the previous part exceeds the merge range delta\n        if (currentParts == null\n            || (!mergeRanges && currentParts.endPos() != startingPos)\n            || (mergeRanges && startingPos - currentParts.endPos() > mergeRangeDelta)) {\n          currentParts = new ConsecutivePartList(startingPos);\n          allParts.add(currentParts);\n        }\n        // if we are in a consecutive part list and there is a gap in between the parts,\n        // we treat the gap as a skippable chunk\n        long delta = startingPos - currentParts.endPos();\n        if (mergeRanges && delta > 0 && delta <= mergeRangeDelta) {\n          // add a chunk that will be skipped because it has no column descriptor\n          currentParts.addChunk(new ChunkDescriptor(null, null, startingPos, delta));\n        }\n        currentParts.addChunk(\n            new ChunkDescriptor(columnDescriptor, mc, startingPos, mc.getTotalSize()));\n      }\n    }\n    // actually read all the chunks\n    return readChunks(block, allParts, new ChunkListBuilder());\n  }\n\n  /**\n   * Returns the next row group to read (after applying both row group filtering and page level\n   * column index filtering), or null if there are no more row groups.\n   */\n  public PageReadStore readNextFilteredRowGroup() throws IOException {\n    if (currentBlock == blocks.size()) {\n      return null;\n    }\n    if (!options.useColumnIndexFilter()\n        || !FilterCompat.isFilteringRequired(options.getRecordFilter())) {\n      return readNextRowGroup();\n    }\n    BlockMetaData block = blocks.get(currentBlock);\n    if (block.getRowCount() == 0) {\n      throw new RuntimeException(\"Illegal row group of 0 rows\");\n    }\n    ColumnIndexStore ciStore = getColumnIndexReader(currentBlock);\n    RowRanges rowRanges = getRowRanges(currentBlock);\n    long rowCount = rowRanges.rowCount();\n    if (rowCount == 0) {\n      // There are no matching rows -> skipping this row-group\n      advanceToNextBlock();\n      return readNextFilteredRowGroup();\n    }\n    if (rowCount == block.getRowCount()) {\n      // All rows are matching -> fall back to the non-filtering 
path\n      return readNextRowGroup();\n    }\n\n    this.currentRowGroup = new RowGroupReader(rowRanges);\n    // prepare the list of consecutive parts to read them in one scan\n    ChunkListBuilder builder = new ChunkListBuilder();\n    List<ConsecutivePartList> allParts = new ArrayList<>();\n    ConsecutivePartList currentParts = null;\n    for (ColumnChunkMetaData mc : block.getColumns()) {\n      ColumnPath pathKey = mc.getPath();\n      ColumnDescriptor columnDescriptor = paths.get(pathKey);\n      if (columnDescriptor != null) {\n        OffsetIndex offsetIndex = ciStore.getOffsetIndex(mc.getPath());\n        IndexFilter indexFilter = new IndexFilter(rowRanges, offsetIndex, block.getRowCount());\n        OffsetIndex filteredOffsetIndex = indexFilter.filterOffsetIndex();\n        for (IndexFilter.OffsetRange range :\n            indexFilter.calculateOffsetRanges(filteredOffsetIndex, mc)) {\n          BenchmarkCounter.incrementTotalBytes(range.length);\n          long startingPos = range.offset;\n          // first part or not consecutive => new list\n          if (currentParts == null || currentParts.endPos() != startingPos) {\n            currentParts = new ConsecutivePartList(startingPos);\n            allParts.add(currentParts);\n          }\n          ChunkDescriptor chunkDescriptor =\n              new ChunkDescriptor(columnDescriptor, mc, startingPos, range.length);\n          currentParts.addChunk(chunkDescriptor);\n          builder.setOffsetIndex(chunkDescriptor, filteredOffsetIndex);\n        }\n      }\n    }\n    // actually read all the chunks\n    return readChunks(block, allParts, builder);\n  }\n\n  // Visible for testing\n  ColumnIndexReader getColumnIndexReader(int blockIndex) {\n    ColumnIndexReader ciStore = blockIndexStores.get(blockIndex);\n    if (ciStore == null) {\n      ciStore = ColumnIndexReader.create(blocks.get(blockIndex), paths.keySet(), fileDecryptor, f);\n      blockIndexStores.set(blockIndex, ciStore);\n    }\n    return ciStore;\n  }\n\n  private RowGroupReader readChunks(\n      BlockMetaData block, List<ConsecutivePartList> allParts, ChunkListBuilder builder)\n      throws IOException {\n    if (shouldReadParallel()) {\n      readAllPartsParallel(allParts, builder);\n    } else {\n      for (ConsecutivePartList consecutiveChunks : allParts) {\n        consecutiveChunks.readAll(f, builder);\n      }\n    }\n    for (Chunk chunk : builder.build()) {\n      readChunkPages(chunk, block);\n    }\n\n    advanceToNextBlock();\n\n    return currentRowGroup;\n  }\n\n  private boolean shouldReadParallel() {\n    if (file instanceof CometInputFile) {\n      URI uri = ((CometInputFile) file).getPath().toUri();\n      return shouldReadParallel(cometOptions, uri.getScheme());\n    }\n\n    return false;\n  }\n\n  static boolean shouldReadParallel(ReadOptions options, String scheme) {\n    return options.isParallelIOEnabled() && shouldReadParallelForScheme(scheme);\n  }\n\n  private static boolean shouldReadParallelForScheme(String scheme) {\n    if (scheme == null) {\n      return false;\n    }\n\n    switch (scheme) {\n      case \"s3a\":\n        // Only enable parallel read for S3, so far.\n        return true;\n      default:\n        return false;\n    }\n  }\n\n  static class ReadRange {\n\n    long offset = 0;\n    long length = 0;\n    List<ByteBuffer> buffers = new ArrayList<>();\n\n    @Override\n    public String toString() {\n      return \"ReadRange{\"\n          + \"offset=\"\n          + offset\n          + \", length=\"\n          + length\n   
       + \", numBuffers=\"\n          + buffers.size()\n          + '}';\n    }\n  }\n\n  List<ReadRange> getReadRanges(List<ConsecutivePartList> allParts, int nBuffers) {\n    int nThreads = cometOptions.parallelIOThreadPoolSize();\n    long buffersPerThread = nBuffers / nThreads + 1;\n    boolean adjustSkew = cometOptions.adjustReadRangesSkew();\n    List<ReadRange> allRanges = new ArrayList<>();\n    for (ConsecutivePartList consecutiveChunk : allParts) {\n      ReadRange readRange = null;\n      long offset = consecutiveChunk.offset;\n      for (int i = 0; i < consecutiveChunk.buffers.size(); i++) {\n        if ((adjustSkew && (i % buffersPerThread == 0)) || i == 0) {\n          readRange = new ReadRange();\n          allRanges.add(readRange);\n          readRange.offset = offset;\n        }\n        ByteBuffer b = consecutiveChunk.buffers.get(i);\n        readRange.length += b.capacity();\n        readRange.buffers.add(b);\n        offset += b.capacity();\n      }\n    }\n    if (LOG.isDebugEnabled()) {\n      StringBuilder sb = new StringBuilder();\n      for (int i = 0; i < allRanges.size(); i++) {\n        sb.append(allRanges.get(i).toString());\n        if (i < allRanges.size() - 1) {\n          sb.append(\",\");\n        }\n      }\n      LOG.debug(\"Read Ranges: {}\", sb);\n    }\n    return allRanges;\n  }\n\n  private void readAllRangesParallel(List<ReadRange> allRanges) {\n    int nThreads = cometOptions.parallelIOThreadPoolSize();\n    ExecutorService threadPool = CometFileReaderThreadPool.getOrCreateThreadPool(nThreads);\n    List<Future<Void>> futures = new ArrayList<>();\n\n    for (ReadRange readRange : allRanges) {\n      futures.add(\n          threadPool.submit(\n              () -> {\n                SeekableInputStream inputStream = null;\n                try {\n                  if (file instanceof CometInputFile) {\n                    // limit the max read ahead to length of the range\n                    inputStream =\n                        (((CometInputFile) file).newStream(readRange.offset, readRange.length));\n                    LOG.debug(\n                        \"Opened new input file: {}, at offset: {}\",\n                        ((CometInputFile) file).getPath().getName(),\n                        readRange.offset);\n                  } else {\n                    inputStream = file.newStream();\n                  }\n                  long curPos = readRange.offset;\n                  for (ByteBuffer buffer : readRange.buffers) {\n                    inputStream.seek(curPos);\n                    LOG.debug(\n                        \"Thread: {} Offset: {} Size: {}\",\n                        Thread.currentThread().getId(),\n                        curPos,\n                        buffer.capacity());\n                    inputStream.readFully(buffer);\n                    buffer.flip();\n                    curPos += buffer.capacity();\n                  } // for\n                } finally {\n                  if (inputStream != null) {\n                    inputStream.close();\n                  }\n                }\n\n                return null;\n              }));\n    }\n    for (Future<Void> future : futures) {\n      try {\n        future.get();\n      } catch (InterruptedException | ExecutionException e) {\n        throw new RuntimeException(e);\n      }\n    }\n  }\n\n  /**\n   * Read all the consecutive part list objects in parallel.\n   *\n   * @param allParts all consecutive parts\n   * @param builder chunk list builder\n   */\n  public 
void readAllPartsParallel(List<ConsecutivePartList> allParts, ChunkListBuilder builder)\n      throws IOException {\n    int nBuffers = 0;\n    for (ConsecutivePartList consecutiveChunks : allParts) {\n      consecutiveChunks.allocateReadBuffers();\n      nBuffers += consecutiveChunks.buffers.size();\n    }\n    List<ReadRange> allRanges = getReadRanges(allParts, nBuffers);\n\n    long startNs = System.nanoTime();\n    readAllRangesParallel(allRanges);\n\n    for (ConsecutivePartList consecutiveChunks : allParts) {\n      consecutiveChunks.setReadMetrics(startNs);\n      ByteBufferInputStream stream;\n      stream = ByteBufferInputStream.wrap(consecutiveChunks.buffers);\n      // report in a counter the data we just scanned\n      BenchmarkCounter.incrementBytesRead(consecutiveChunks.length);\n      for (int i = 0; i < consecutiveChunks.chunks.size(); i++) {\n        ChunkDescriptor descriptor = consecutiveChunks.chunks.get(i);\n        if (descriptor.col != null) {\n          builder.add(descriptor, stream.sliceBuffers(descriptor.size));\n        } else {\n          stream.skipFully(descriptor.size);\n        }\n      }\n    }\n  }\n\n  private void readChunkPages(Chunk chunk, BlockMetaData block) throws IOException {\n    if (fileDecryptor == null || fileDecryptor.plaintextFile()) {\n      currentRowGroup.addColumn(chunk.descriptor.col, chunk.readAllPages());\n      return;\n    }\n    // Encrypted file\n    ColumnPath columnPath = ColumnPath.get(chunk.descriptor.col.getPath());\n    InternalColumnDecryptionSetup columnDecryptionSetup = fileDecryptor.getColumnSetup(columnPath);\n    if (!columnDecryptionSetup.isEncrypted()) { // plaintext column\n      currentRowGroup.addColumn(chunk.descriptor.col, chunk.readAllPages());\n    } else { // encrypted column\n      currentRowGroup.addColumn(\n          chunk.descriptor.col,\n          chunk.readAllPages(\n              columnDecryptionSetup.getMetaDataDecryptor(),\n              columnDecryptionSetup.getDataDecryptor(),\n              fileDecryptor.getFileAAD(),\n              block.getOrdinal(),\n              columnDecryptionSetup.getOrdinal()));\n    }\n  }\n\n  private boolean advanceToNextBlock() {\n    if (currentBlock == blocks.size()) {\n      return false;\n    }\n    // advance to the next block\n    ++currentBlock;\n    return true;\n  }\n\n  public long[] getRowIndices() {\n    return getRowIndices(blocks);\n  }\n\n  public static long[] getRowIndices(List<BlockMetaData> blocks) {\n    long[] rowIndices = new long[blocks.size() * 2];\n    for (int i = 0, n = blocks.size(); i < n; i++) {\n      BlockMetaData block = blocks.get(i);\n      rowIndices[i * 2] = getRowIndexOffset(block);\n      rowIndices[i * 2 + 1] = block.getRowCount();\n    }\n    return rowIndices;\n  }\n\n  // Uses reflection to get the row index offset from Parquet block metadata.\n  //\n  // The reason reflection is used here is that some Spark versions still depend on a\n  // Parquet version where the method `getRowIndexOffset` is not public. `getDeclaredMethod`\n  // finds the method regardless of its visibility.\n  public static long getRowIndexOffset(BlockMetaData metaData) {\n    try {\n      Method method = BlockMetaData.class.getDeclaredMethod(\"getRowIndexOffset\");\n      method.setAccessible(true);\n      return (long) method.invoke(metaData);\n    } catch (Exception e) {\n      throw new RuntimeException(\"Error when calling getRowIndexOffset\", e);\n    }\n  }\n\n  private RowRanges getRowRanges(int blockIndex) {\n    Preconditions.checkState(\n        
FilterCompat.isFilteringRequired(options.getRecordFilter()),\n        \"Should not be invoked if filter is null or NOOP\");\n    RowRanges rowRanges = blockRowRanges.get(blockIndex);\n    if (rowRanges == null) {\n      rowRanges =\n          ColumnIndexFilter.calculateRowRanges(\n              options.getRecordFilter(),\n              getColumnIndexReader(blockIndex),\n              paths.keySet(),\n              blocks.get(blockIndex).getRowCount());\n      blockRowRanges.set(blockIndex, rowRanges);\n    }\n    return rowRanges;\n  }\n\n  private static ParquetMetadata readFooter(\n      InputFile file,\n      ParquetReadOptions options,\n      SeekableInputStream f,\n      ParquetMetadataConverter converter)\n      throws IOException {\n    long fileLen = file.getLength();\n    String filePath = file.toString();\n    LOG.debug(\"File length {}\", fileLen);\n\n    int FOOTER_LENGTH_SIZE = 4;\n\n    // MAGIC + data + footer + footerIndex + MAGIC\n    if (fileLen < MAGIC.length + FOOTER_LENGTH_SIZE + MAGIC.length) {\n      throw new RuntimeException(\n          filePath + \" is not a Parquet file (length is too low: \" + fileLen + \")\");\n    }\n\n    // Read footer length and magic string - with a single seek\n    byte[] magic = new byte[MAGIC.length];\n    long fileMetadataLengthIndex = fileLen - magic.length - FOOTER_LENGTH_SIZE;\n    LOG.debug(\"reading footer index at {}\", fileMetadataLengthIndex);\n    f.seek(fileMetadataLengthIndex);\n    int fileMetadataLength = BytesUtils.readIntLittleEndian(f);\n    f.readFully(magic);\n\n    boolean encryptedFooterMode;\n    if (Arrays.equals(MAGIC, magic)) {\n      encryptedFooterMode = false;\n    } else if (Arrays.equals(EFMAGIC, magic)) {\n      encryptedFooterMode = true;\n    } else {\n      throw new RuntimeException(\n          filePath\n              + \" is not a Parquet file. 
Expected magic number \"\n              + \"at tail, but found \"\n              + Arrays.toString(magic));\n    }\n\n    long fileMetadataIndex = fileMetadataLengthIndex - fileMetadataLength;\n    LOG.debug(\"read footer length: {}, footer index: {}\", fileMetadataLength, fileMetadataIndex);\n    if (fileMetadataIndex < magic.length || fileMetadataIndex >= fileMetadataLengthIndex) {\n      throw new RuntimeException(\n          \"corrupted file: the footer index is not within the file: \" + fileMetadataIndex);\n    }\n    f.seek(fileMetadataIndex);\n\n    FileDecryptionProperties fileDecryptionProperties = options.getDecryptionProperties();\n    InternalFileDecryptor fileDecryptor = null;\n    if (null != fileDecryptionProperties) {\n      fileDecryptor = new InternalFileDecryptor(fileDecryptionProperties);\n    }\n\n    // Read all the footer bytes at once to avoid multiple read operations,\n    // since a single read operation can be pretty time-consuming in HDFS.\n    byte[] footerBytes = new byte[fileMetadataLength];\n    f.readFully(footerBytes);\n    ByteBuffer footerBytesBuffer = ByteBuffer.wrap(footerBytes);\n    LOG.debug(\"Finished reading all footer bytes.\");\n    InputStream footerBytesStream = ByteBufferInputStream.wrap(footerBytesBuffer);\n\n    // Regular file, or encrypted file with plaintext footer\n    if (!encryptedFooterMode) {\n      return converter.readParquetMetadata(\n          footerBytesStream, options.getMetadataFilter(), fileDecryptor, false, fileMetadataLength);\n    }\n\n    // Encrypted file with encrypted footer\n    if (fileDecryptor == null) {\n      throw new ParquetCryptoRuntimeException(\n          \"Trying to read file with encrypted footer. \" + \"No keys available\");\n    }\n    FileCryptoMetaData fileCryptoMetaData = Util.readFileCryptoMetaData(footerBytesStream);\n    fileDecryptor.setFileCryptoMetaData(\n        fileCryptoMetaData.getEncryption_algorithm(), true, fileCryptoMetaData.getKey_metadata());\n    // footer length is required only for signed plaintext footers\n    return converter.readParquetMetadata(\n        footerBytesStream, options.getMetadataFilter(), fileDecryptor, true, 0);\n  }\n\n  private List<BlockMetaData> filterRowGroups(List<BlockMetaData> blocks) {\n    return filterRowGroups(options, blocks, this);\n  }\n\n  public static List<BlockMetaData> filterRowGroups(\n      ParquetReadOptions options, List<BlockMetaData> blocks, FileReader fileReader) {\n    FilterCompat.Filter recordFilter = options.getRecordFilter();\n    if (FilterCompat.isFilteringRequired(recordFilter)) {\n      // set up data filters based on configured levels\n      List<RowGroupFilter.FilterLevel> levels = new ArrayList<>();\n\n      if (options.useStatsFilter()) {\n        levels.add(STATISTICS);\n      }\n\n      if (options.useDictionaryFilter()) {\n        levels.add(DICTIONARY);\n      }\n\n      if (options.useBloomFilter()) {\n        levels.add(BLOOMFILTER);\n      }\n      return RowGroupFilter.filterRowGroups(levels, recordFilter, blocks, fileReader);\n    }\n\n    return blocks;\n  }\n\n  public static List<BlockMetaData> filterRowGroups(\n      ParquetReadOptions options, List<BlockMetaData> blocks, MessageType schema) {\n    FilterCompat.Filter recordFilter = options.getRecordFilter();\n    if (FilterCompat.isFilteringRequired(recordFilter)) {\n      // set up data filters based on configured levels\n      List<RowGroupFilter.FilterLevel> levels = new ArrayList<>();\n\n      if (options.useStatsFilter()) {\n        
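// prune row groups using column chunk min/max statistics\n        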
levels.add(STATISTICS);\n      }\n\n      if (options.useDictionaryFilter()) {\n        levels.add(DICTIONARY);\n      }\n\n      if (options.useBloomFilter()) {\n        levels.add(BLOOMFILTER);\n      }\n      return RowGroupFilter.filterRowGroups(levels, recordFilter, blocks, schema);\n    }\n\n    return blocks;\n  }\n\n  private static <T> List<T> listWithNulls(int size) {\n    return Stream.generate(() -> (T) null).limit(size).collect(Collectors.toList());\n  }\n\n  public void closeStream() throws IOException {\n    if (f != null) {\n      f.close();\n    }\n  }\n\n  @IcebergApi\n  @Override\n  public void close() throws IOException {\n    try {\n      if (f != null) {\n        f.close();\n      }\n    } finally {\n      options.getCodecFactory().release();\n    }\n  }\n\n  /**\n   * Builder to concatenate the buffers of the discontinuous parts for the same column. These parts\n   * are generated as a result of column-index-based filtering, when some pages are skipped during\n   * reading.\n   */\n  private class ChunkListBuilder {\n    private class ChunkData {\n      final List<ByteBuffer> buffers = new ArrayList<>();\n      OffsetIndex offsetIndex;\n    }\n\n    private final Map<ChunkDescriptor, ChunkData> map = new HashMap<>();\n\n    void add(ChunkDescriptor descriptor, List<ByteBuffer> buffers) {\n      ChunkListBuilder.ChunkData data = map.get(descriptor);\n      if (data == null) {\n        data = new ChunkData();\n        map.put(descriptor, data);\n      }\n      data.buffers.addAll(buffers);\n    }\n\n    void setOffsetIndex(ChunkDescriptor descriptor, OffsetIndex offsetIndex) {\n      ChunkData data = map.get(descriptor);\n      if (data == null) {\n        data = new ChunkData();\n        map.put(descriptor, data);\n      }\n      data.offsetIndex = offsetIndex;\n    }\n\n    List<Chunk> build() {\n      List<Chunk> chunks = new ArrayList<>();\n      for (Map.Entry<ChunkDescriptor, ChunkListBuilder.ChunkData> entry : map.entrySet()) {\n        ChunkDescriptor descriptor = entry.getKey();\n        ChunkData data = entry.getValue();\n        chunks.add(new Chunk(descriptor, data.buffers, data.offsetIndex));\n      }\n      return chunks;\n    }\n  }\n\n  /** The data for a column chunk */\n  private class Chunk {\n    private final ChunkDescriptor descriptor;\n    private final ByteBufferInputStream stream;\n    final OffsetIndex offsetIndex;\n\n    /**\n     * @param descriptor descriptor for the chunk\n     * @param buffers ByteBuffers that contain the chunk\n     * @param offsetIndex the offset index for this column; might be null\n     */\n    Chunk(ChunkDescriptor descriptor, List<ByteBuffer> buffers, OffsetIndex offsetIndex) {\n      this.descriptor = descriptor;\n      this.stream = ByteBufferInputStream.wrap(buffers);\n      this.offsetIndex = offsetIndex;\n    }\n\n    protected PageHeader readPageHeader(BlockCipher.Decryptor blockDecryptor, byte[] pageHeaderAAD)\n        throws IOException {\n      return Util.readPageHeader(stream, blockDecryptor, pageHeaderAAD);\n    }\n\n    /**\n     * Calculate checksum of input bytes, throw decoding exception if it does not match the provided\n     * reference crc\n     */\n    private void verifyCrc(int referenceCrc, byte[] bytes, String exceptionMsg) {\n      crc.reset();\n      crc.update(bytes);\n      if (crc.getValue() != ((long) referenceCrc & 0xffffffffL)) {\n        throw new ParquetDecodingException(exceptionMsg);\n      }\n    }\n\n    private ColumnPageReader readAllPages() throws IOException {\n      
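// unencrypted path: no decryptors or AAD prefix; -1 for the row group and column ordinals\n      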
return readAllPages(null, null, null, -1, -1);\n    }\n\n    private ColumnPageReader readAllPages(\n        BlockCipher.Decryptor headerBlockDecryptor,\n        BlockCipher.Decryptor pageBlockDecryptor,\n        byte[] aadPrefix,\n        int rowGroupOrdinal,\n        int columnOrdinal)\n        throws IOException {\n      List<DataPage> pagesInChunk = new ArrayList<>();\n      DictionaryPage dictionaryPage = null;\n      PrimitiveType type =\n          fileMetaData.getSchema().getType(descriptor.col.getPath()).asPrimitiveType();\n\n      long valuesCountReadSoFar = 0;\n      int dataPageCountReadSoFar = 0;\n      byte[] dataPageHeaderAAD = null;\n      if (null != headerBlockDecryptor) {\n        dataPageHeaderAAD =\n            AesCipher.createModuleAAD(\n                aadPrefix,\n                ModuleCipherFactory.ModuleType.DataPageHeader,\n                rowGroupOrdinal,\n                columnOrdinal,\n                getPageOrdinal(dataPageCountReadSoFar));\n      }\n      while (hasMorePages(valuesCountReadSoFar, dataPageCountReadSoFar)) {\n        byte[] pageHeaderAAD = dataPageHeaderAAD;\n        if (null != headerBlockDecryptor) {\n          // Important: this verifies file integrity (makes sure dictionary page had not been\n          // removed)\n          if (null == dictionaryPage && descriptor.metadata.hasDictionaryPage()) {\n            pageHeaderAAD =\n                AesCipher.createModuleAAD(\n                    aadPrefix,\n                    ModuleCipherFactory.ModuleType.DictionaryPageHeader,\n                    rowGroupOrdinal,\n                    columnOrdinal,\n                    -1);\n          } else {\n            int pageOrdinal = getPageOrdinal(dataPageCountReadSoFar);\n            AesCipher.quickUpdatePageAAD(dataPageHeaderAAD, pageOrdinal);\n          }\n        }\n\n        PageHeader pageHeader = readPageHeader(headerBlockDecryptor, pageHeaderAAD);\n        int uncompressedPageSize = pageHeader.getUncompressed_page_size();\n        int compressedPageSize = pageHeader.getCompressed_page_size();\n        final BytesInput pageBytes;\n        switch (pageHeader.type) {\n          case DICTIONARY_PAGE:\n            // there is only one dictionary page per column chunk\n            if (dictionaryPage != null) {\n              throw new ParquetDecodingException(\n                  \"more than one dictionary page in column \" + descriptor.col);\n            }\n            pageBytes = this.readAsBytesInput(compressedPageSize);\n            if (options.usePageChecksumVerification() && pageHeader.isSetCrc()) {\n              verifyCrc(\n                  pageHeader.getCrc(),\n                  pageBytes.toByteArray(),\n                  \"could not verify dictionary page integrity, CRC checksum verification failed\");\n            }\n            DictionaryPageHeader dicHeader = pageHeader.getDictionary_page_header();\n            dictionaryPage =\n                new DictionaryPage(\n                    pageBytes,\n                    uncompressedPageSize,\n                    dicHeader.getNum_values(),\n                    converter.getEncoding(dicHeader.getEncoding()));\n            // Copy crc to new page, used for testing\n            if (pageHeader.isSetCrc()) {\n              dictionaryPage.setCrc(pageHeader.getCrc());\n            }\n            break;\n\n          case DATA_PAGE:\n            DataPageHeader dataHeaderV1 = pageHeader.getData_page_header();\n            pageBytes = this.readAsBytesInput(compressedPageSize);\n            if 
(options.usePageChecksumVerification() && pageHeader.isSetCrc()) {\n              verifyCrc(\n                  pageHeader.getCrc(),\n                  pageBytes.toByteArray(),\n                  \"could not verify page integrity, CRC checksum verification failed\");\n            }\n            DataPageV1 dataPageV1 =\n                new DataPageV1(\n                    pageBytes,\n                    dataHeaderV1.getNum_values(),\n                    uncompressedPageSize,\n                    converter.fromParquetStatistics(\n                        getFileMetaData().getCreatedBy(), dataHeaderV1.getStatistics(), type),\n                    converter.getEncoding(dataHeaderV1.getRepetition_level_encoding()),\n                    converter.getEncoding(dataHeaderV1.getDefinition_level_encoding()),\n                    converter.getEncoding(dataHeaderV1.getEncoding()));\n            // Copy crc to new page, used for testing\n            if (pageHeader.isSetCrc()) {\n              dataPageV1.setCrc(pageHeader.getCrc());\n            }\n            pagesInChunk.add(dataPageV1);\n            valuesCountReadSoFar += dataHeaderV1.getNum_values();\n            ++dataPageCountReadSoFar;\n            break;\n\n          case DATA_PAGE_V2:\n            DataPageHeaderV2 dataHeaderV2 = pageHeader.getData_page_header_v2();\n            int dataSize =\n                compressedPageSize\n                    - dataHeaderV2.getRepetition_levels_byte_length()\n                    - dataHeaderV2.getDefinition_levels_byte_length();\n            pagesInChunk.add(\n                new DataPageV2(\n                    dataHeaderV2.getNum_rows(),\n                    dataHeaderV2.getNum_nulls(),\n                    dataHeaderV2.getNum_values(),\n                    this.readAsBytesInput(dataHeaderV2.getRepetition_levels_byte_length()),\n                    this.readAsBytesInput(dataHeaderV2.getDefinition_levels_byte_length()),\n                    converter.getEncoding(dataHeaderV2.getEncoding()),\n                    this.readAsBytesInput(dataSize),\n                    uncompressedPageSize,\n                    converter.fromParquetStatistics(\n                        getFileMetaData().getCreatedBy(), dataHeaderV2.getStatistics(), type),\n                    dataHeaderV2.isIs_compressed()));\n            valuesCountReadSoFar += dataHeaderV2.getNum_values();\n            ++dataPageCountReadSoFar;\n            break;\n\n          default:\n            LOG.debug(\n                \"skipping page of type {} of size {}\", pageHeader.getType(), compressedPageSize);\n            stream.skipFully(compressedPageSize);\n            break;\n        }\n      }\n      if (offsetIndex == null && valuesCountReadSoFar != descriptor.metadata.getValueCount()) {\n        // Would be nice to have a CorruptParquetFileException or something as a subclass?\n        throw new IOException(\n            \"Expected \"\n                + descriptor.metadata.getValueCount()\n                + \" values in column chunk at \"\n                + file\n                + \" offset \"\n                + descriptor.metadata.getFirstDataPageOffset()\n                + \" but got \"\n                + valuesCountReadSoFar\n                + \" values instead over \"\n                + pagesInChunk.size()\n                + \" pages ending at file offset \"\n                + (descriptor.fileOffset + stream.position()));\n      }\n      CompressionCodecFactory.BytesInputDecompressor decompressor =\n          
options.getCodecFactory().getDecompressor(descriptor.metadata.getCodec());\n      return new ColumnPageReader(\n          decompressor,\n          pagesInChunk,\n          dictionaryPage,\n          offsetIndex,\n          blocks.get(currentBlock).getRowCount(),\n          pageBlockDecryptor,\n          aadPrefix,\n          rowGroupOrdinal,\n          columnOrdinal);\n    }\n\n    private boolean hasMorePages(long valuesCountReadSoFar, int dataPageCountReadSoFar) {\n      return offsetIndex == null\n          ? valuesCountReadSoFar < descriptor.metadata.getValueCount()\n          : dataPageCountReadSoFar < offsetIndex.getPageCount();\n    }\n\n    private int getPageOrdinal(int dataPageCountReadSoFar) {\n      if (null == offsetIndex) {\n        return dataPageCountReadSoFar;\n      }\n\n      return offsetIndex.getPageOrdinal(dataPageCountReadSoFar);\n    }\n\n    /**\n     * @param size the size of the page\n     * @return the page\n     * @throws IOException if there is an error while reading from the file stream\n     */\n    public BytesInput readAsBytesInput(int size) throws IOException {\n      return BytesInput.from(stream.sliceBuffers(size));\n    }\n  }\n\n  /**\n   * Describes a list of consecutive parts to be read at once. A consecutive part may contain whole\n   * column chunks or only parts of them (some pages).\n   */\n  private class ConsecutivePartList {\n    private final long offset;\n    private final List<ChunkDescriptor> chunks = new ArrayList<>();\n    private long length;\n    private final SQLMetric fileReadTimeMetric;\n    private final SQLMetric fileReadSizeMetric;\n    private final SQLMetric readThroughput;\n    List<ByteBuffer> buffers;\n\n    /**\n     * Constructor\n     *\n     * @param offset where the first chunk starts\n     */\n    ConsecutivePartList(long offset) {\n      if (metrics != null) {\n        this.fileReadTimeMetric = metrics.get(\"ParquetInputFileReadTime\");\n        this.fileReadSizeMetric = metrics.get(\"ParquetInputFileReadSize\");\n        this.readThroughput = metrics.get(\"ParquetInputFileReadThroughput\");\n      } else {\n        this.fileReadTimeMetric = null;\n        this.fileReadSizeMetric = null;\n        this.readThroughput = null;\n      }\n      this.offset = offset;\n    }\n\n    /**\n     * Adds a chunk to the list. It must be consecutive to the previous chunk.\n     *\n     * @param descriptor a chunk descriptor\n     */\n    public void addChunk(ChunkDescriptor descriptor) {\n      chunks.add(descriptor);\n      length += descriptor.size;\n    }\n\n    private void allocateReadBuffers() {\n      int fullAllocations = Math.toIntExact(length / options.getMaxAllocationSize());\n      int lastAllocationSize = Math.toIntExact(length % options.getMaxAllocationSize());\n\n      int numAllocations = fullAllocations + (lastAllocationSize > 0 ? 
1 : 0);\n      this.buffers = new ArrayList<>(numAllocations);\n\n      for (int i = 0; i < fullAllocations; i += 1) {\n        this.buffers.add(options.getAllocator().allocate(options.getMaxAllocationSize()));\n      }\n\n      if (lastAllocationSize > 0) {\n        this.buffers.add(options.getAllocator().allocate(lastAllocationSize));\n      }\n    }\n\n    /**\n     * @param f file to read the chunks from\n     * @param builder used to build chunk list to read the pages for the different columns\n     * @throws IOException if there is an error while reading from the stream\n     */\n    public void readAll(SeekableInputStream f, ChunkListBuilder builder) throws IOException {\n      f.seek(offset);\n\n      allocateReadBuffers();\n      long startNs = System.nanoTime();\n\n      for (ByteBuffer buffer : buffers) {\n        f.readFully(buffer);\n        buffer.flip();\n      }\n      setReadMetrics(startNs);\n\n      // report in a counter the data we just scanned\n      BenchmarkCounter.incrementBytesRead(length);\n      ByteBufferInputStream stream = ByteBufferInputStream.wrap(buffers);\n      for (int i = 0; i < chunks.size(); i++) {\n        ChunkDescriptor descriptor = chunks.get(i);\n        if (descriptor.col != null) {\n          builder.add(descriptor, stream.sliceBuffers(descriptor.size));\n        } else {\n          stream.skipFully(descriptor.size);\n        }\n      }\n    }\n\n    private void setReadMetrics(long startNs) {\n      long totalFileReadTimeNs = System.nanoTime() - startNs;\n      double sizeInMb = ((double) length) / (1024 * 1024);\n      double timeInSec = ((double) totalFileReadTimeNs) / 1_000_000_000L; // convert nanoseconds to seconds\n      double throughput = sizeInMb / timeInSec;\n      LOG.debug(\n          \"Comet: File Read stats: Length: {} MB, Time: {} secs, throughput: {} MB/sec\",\n          sizeInMb,\n          timeInSec,\n          throughput);\n      if (fileReadTimeMetric != null) {\n        fileReadTimeMetric.add(totalFileReadTimeNs);\n      }\n      if (fileReadSizeMetric != null) {\n        fileReadSizeMetric.add(length);\n      }\n      if (readThroughput != null) {\n        readThroughput.set(throughput);\n      }\n    }\n\n    /**\n     * End position of these chunks\n     *\n     * @return the position following the last byte of these chunks\n     */\n    public long endPos() {\n      return offset + length;\n    }\n  }\n\n  /** Information needed to read a column chunk or a part of it. */\n  private static class ChunkDescriptor {\n\n    private final ColumnDescriptor col;\n    private final ColumnChunkMetaData metadata;\n    private final long fileOffset;\n    private final long size;\n\n    /**\n     * @param col column this chunk is part of\n     * @param metadata metadata for the column\n     * @param fileOffset offset in the file where this chunk starts\n     * @param size size of the chunk\n     */\n    ChunkDescriptor(\n        ColumnDescriptor col, ColumnChunkMetaData metadata, long fileOffset, long size) {\n      this.col = col;\n      this.metadata = metadata;\n      this.fileOffset = fileOffset;\n      this.size = size;\n    }\n\n    @Override\n    public int hashCode() {\n      return col.hashCode();\n    }\n\n    @Override\n    public boolean equals(Object obj) {\n      if (this == obj) {\n        return true;\n      } else if (obj instanceof ChunkDescriptor) {\n        return col.equals(((ChunkDescriptor) obj).col);\n      } else {\n        return false;\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/FooterReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.IOException;\nimport java.net.URI;\nimport java.net.URISyntaxException;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.fs.Path;\nimport org.apache.parquet.HadoopReadOptions;\nimport org.apache.parquet.ParquetReadOptions;\nimport org.apache.parquet.hadoop.metadata.ParquetMetadata;\nimport org.apache.spark.sql.execution.datasources.PartitionedFile;\n\n/**\n * Copied from Spark's `ParquetFooterReader` in order to avoid shading issue around Parquet.\n *\n * <p>`FooterReader` is a util class which encapsulates the helper methods of reading parquet file\n * footer.\n */\npublic class FooterReader {\n  public static ParquetMetadata readFooter(Configuration configuration, PartitionedFile file)\n      throws IOException, URISyntaxException {\n    long start = file.start();\n    long length = file.length();\n    Path filePath = new Path(new URI(file.filePath().toString()));\n    CometInputFile inputFile = CometInputFile.fromPath(filePath, configuration);\n    ParquetReadOptions readOptions =\n        HadoopReadOptions.builder(inputFile.getConf(), inputFile.getPath())\n            .withRange(start, start + length)\n            .build();\n    ReadOptions cometReadOptions = ReadOptions.builder(configuration).build();\n    // Use try-with-resources to ensure fd is closed.\n    try (FileReader fileReader = new FileReader(inputFile, readOptions, cometReadOptions)) {\n      return fileReader.getFooter();\n    }\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/IcebergCometNativeBatchReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.util.Map;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.spark.sql.catalyst.InternalRow;\nimport org.apache.spark.sql.execution.metric.SQLMetric;\nimport org.apache.spark.sql.types.StructType;\n\n/**\n * A specialized NativeBatchReader for Iceberg that accepts ParquetMetadata as a Thrift encoded byte\n * array . This allows Iceberg to pass metadata in serialized form with a two-step initialization\n * pattern.\n */\npublic class IcebergCometNativeBatchReader extends NativeBatchReader {\n\n  public IcebergCometNativeBatchReader(StructType requiredSchema) {\n    super();\n    this.sparkSchema = requiredSchema;\n  }\n\n  /** Initialize the reader using FileInfo instead of PartitionedFile. */\n  public void init(\n      Configuration conf,\n      FileInfo fileInfo,\n      byte[] parquetMetadataBytes,\n      byte[] nativeFilter,\n      int capacity,\n      StructType dataSchema,\n      boolean isCaseSensitive,\n      boolean useFieldId,\n      boolean ignoreMissingIds,\n      boolean useLegacyDateTimestamp,\n      StructType partitionSchema,\n      InternalRow partitionValues,\n      AbstractColumnReader[] preInitializedReaders,\n      Map<String, SQLMetric> metrics)\n      throws Throwable {\n\n    // Set parent fields\n    this.conf = conf;\n    this.fileInfo = fileInfo;\n    this.footer = new ParquetMetadataSerializer().deserialize(parquetMetadataBytes);\n    this.nativeFilter = nativeFilter;\n    this.capacity = capacity;\n    this.dataSchema = dataSchema;\n    this.isCaseSensitive = isCaseSensitive;\n    this.useFieldId = useFieldId;\n    this.ignoreMissingIds = ignoreMissingIds;\n    this.useLegacyDateTimestamp = useLegacyDateTimestamp;\n    this.partitionSchema = partitionSchema;\n    this.partitionValues = partitionValues;\n    this.preInitializedReaders = preInitializedReaders;\n    this.metrics.clear();\n    if (metrics != null) {\n      this.metrics.putAll(metrics);\n    }\n\n    // Call parent init method\n    super.init();\n  }\n\n  public StructType getSparkSchema() {\n    return this.sparkSchema;\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/IndexFilter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\nimport org.apache.parquet.internal.column.columnindex.OffsetIndex;\nimport org.apache.parquet.internal.filter2.columnindex.RowRanges;\n\npublic class IndexFilter {\n  private final RowRanges rowRanges;\n  private final OffsetIndex offsetIndex;\n  private final long totalRowCount;\n\n  public IndexFilter(RowRanges rowRanges, OffsetIndex offsetIndex, long totalRowCount) {\n    this.rowRanges = rowRanges;\n    this.offsetIndex = offsetIndex;\n    this.totalRowCount = totalRowCount;\n  }\n\n  OffsetIndex filterOffsetIndex() {\n    List<Integer> indexMap = new ArrayList<>();\n    for (int i = 0, n = offsetIndex.getPageCount(); i < n; ++i) {\n      long from = offsetIndex.getFirstRowIndex(i);\n      if (rowRanges.isOverlapping(from, offsetIndex.getLastRowIndex(i, totalRowCount))) {\n        indexMap.add(i);\n      }\n    }\n\n    int[] indexArray = new int[indexMap.size()];\n    for (int i = 0; i < indexArray.length; i++) {\n      indexArray[i] = indexMap.get(i);\n    }\n    return new FilteredOffsetIndex(offsetIndex, indexArray);\n  }\n\n  List<OffsetRange> calculateOffsetRanges(OffsetIndex filteredOffsetIndex, ColumnChunkMetaData cm) {\n    List<OffsetRange> ranges = new ArrayList<>();\n    long firstPageOffset = offsetIndex.getOffset(0);\n    int n = filteredOffsetIndex.getPageCount();\n\n    if (n > 0) {\n      OffsetRange currentRange = null;\n\n      // Add a range for the dictionary page if required\n      long rowGroupOffset = cm.getStartingPos();\n      if (rowGroupOffset < firstPageOffset) {\n        currentRange = new OffsetRange(rowGroupOffset, (int) (firstPageOffset - rowGroupOffset));\n        ranges.add(currentRange);\n      }\n\n      for (int i = 0; i < n; ++i) {\n        long offset = filteredOffsetIndex.getOffset(i);\n        int length = filteredOffsetIndex.getCompressedPageSize(i);\n        if (currentRange == null || !currentRange.extend(offset, length)) {\n          currentRange = new OffsetRange(offset, length);\n          ranges.add(currentRange);\n        }\n      }\n    }\n    return ranges;\n  }\n\n  private static class FilteredOffsetIndex implements OffsetIndex {\n    private final OffsetIndex offsetIndex;\n    private final int[] indexMap;\n\n    private FilteredOffsetIndex(OffsetIndex offsetIndex, int[] indexMap) {\n      this.offsetIndex = offsetIndex;\n      this.indexMap = indexMap;\n    }\n\n    @Override\n    public int getPageOrdinal(int pageIndex) {\n      return indexMap[pageIndex];\n    }\n\n    @Override\n    public int getPageCount() {\n      return indexMap.length;\n  
  }\n\n    @Override\n    public long getOffset(int pageIndex) {\n      return offsetIndex.getOffset(indexMap[pageIndex]);\n    }\n\n    @Override\n    public int getCompressedPageSize(int pageIndex) {\n      return offsetIndex.getCompressedPageSize(indexMap[pageIndex]);\n    }\n\n    @Override\n    public long getFirstRowIndex(int pageIndex) {\n      return offsetIndex.getFirstRowIndex(indexMap[pageIndex]);\n    }\n\n    @Override\n    public long getLastRowIndex(int pageIndex, long totalRowCount) {\n      int nextIndex = indexMap[pageIndex] + 1;\n      return (nextIndex >= offsetIndex.getPageCount()\n              ? totalRowCount\n              : offsetIndex.getFirstRowIndex(nextIndex))\n          - 1;\n    }\n  }\n\n  static class OffsetRange {\n    final long offset;\n    long length;\n\n    private OffsetRange(long offset, int length) {\n      this.offset = offset;\n      this.length = length;\n    }\n\n    private boolean extend(long offset, int length) {\n      if (this.offset + this.length == offset) {\n        this.length += length;\n        return true;\n      } else {\n        return false;\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/LazyColumnReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.IOException;\n\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.column.page.PageReader;\nimport org.apache.spark.sql.types.DataType;\n\nimport org.apache.comet.CometSchemaImporter;\nimport org.apache.comet.vector.CometLazyVector;\nimport org.apache.comet.vector.CometVector;\n\npublic class LazyColumnReader extends ColumnReader {\n\n  // Remember the largest skipped index for sanity checking.\n  private int lastSkippedRowId = Integer.MAX_VALUE;\n\n  // Track whether the underlying page is drained.\n  private boolean isPageDrained = true;\n\n  // Leftover number of rows that did not skip in the previous batch.\n  private int numRowsToSkipFromPrevBatch;\n\n  // The lazy vector being updated.\n  private final CometLazyVector vector;\n\n  LazyColumnReader(\n      DataType sparkReadType,\n      ColumnDescriptor descriptor,\n      CometSchemaImporter importer,\n      int batchSize,\n      boolean useDecimal128,\n      boolean useLegacyDateTimestamp) {\n    super(sparkReadType, descriptor, importer, batchSize, useDecimal128, useLegacyDateTimestamp);\n    this.batchSize = 0; // the batch size is set later in `readBatch`\n    this.vector = new CometLazyVector(sparkReadType, this, useDecimal128);\n  }\n\n  @Override\n  public void setPageReader(PageReader pageReader) throws IOException {\n    super.setPageReader(pageReader);\n    lastSkippedRowId = Integer.MAX_VALUE;\n    isPageDrained = true;\n    numRowsToSkipFromPrevBatch = 0;\n    currentNumValues = batchSize;\n  }\n\n  /**\n   * Lazily read a batch of 'total' rows for this column. The includes: 1) Skip any unused rows from\n   * the previous batch 2) Reset the native columnar batch 3) Reset tracking variables\n   *\n   * @param total the number of rows in the batch. MUST be <= the number of rows available in this\n   *     column chunk.\n   */\n  @Override\n  public void readBatch(int total) {\n    // Before starting a new batch, take care of the remaining rows to skip from the previous batch.\n    tryPageSkip(batchSize);\n    numRowsToSkipFromPrevBatch += batchSize - currentNumValues;\n\n    // Now first reset the current columnar batch so that it can be used to fill in a new batch\n    // of values. Then, keep reading more data pages (via 'readBatch') until the current batch is\n    // full, or we have read 'total' number of values.\n    Native.resetBatch(nativeHandle);\n\n    batchSize = total;\n    currentNumValues = 0;\n    lastSkippedRowId = -1;\n  }\n\n  @Override\n  public CometVector currentBatch() {\n    return vector;\n  }\n\n  /** Read all rows up to the `batchSize`. Expects no rows are skipped so far. 
*/\n  public void readAllBatch() {\n    // All rows should be read without any skips so far\n    assert (lastSkippedRowId == -1);\n\n    readBatch(batchSize - 1, 0);\n  }\n\n  /**\n   * Read at least up to `rowId`. It may read beyond `rowId` if enough rows are available in the\n   * page. It may skip reading rows before `rowId`. If `rowId` has already been read, it returns\n   * immediately.\n   *\n   * @param rowId the row index in the batch to read.\n   * @return true if `rowId` is newly materialized, or false if `rowId` is already materialized.\n   */\n  public boolean materializeUpToIfNecessary(int rowId) {\n    // Reading rowId is not allowed if it may have been skipped previously.\n    assert (rowId > lastSkippedRowId);\n\n    // If `rowId` is already materialized, return immediately.\n    if (rowId < currentNumValues) return false;\n\n    int numRowsWholePageSkipped = tryPageSkip(rowId);\n    readBatch(rowId, numRowsWholePageSkipped);\n    return true;\n  }\n\n  /**\n   * Read up to `rowId` (inclusive). If whole pages were previously skipped in `tryPageSkip()`,\n   * pad the underlying vector with that many nulls before reading.\n   *\n   * @param rowId the row index in the batch to read.\n   * @param numNullRowsToPad the number of nulls to pad before reading.\n   */\n  private void readBatch(int rowId, int numNullRowsToPad) {\n    if (numRowsToSkipFromPrevBatch > 0) {\n      // Reaches here only when starting a new batch and the page was previously drained\n      readPage();\n      isPageDrained = false;\n      Native.skipBatch(nativeHandle, numRowsToSkipFromPrevBatch, true);\n      numRowsToSkipFromPrevBatch = 0;\n    }\n    while (rowId >= currentNumValues) {\n      int numRowsToRead = batchSize - currentNumValues;\n      if (isPageDrained) {\n        readPage();\n      }\n      int[] array = Native.readBatch(nativeHandle, numRowsToRead, numNullRowsToPad);\n      int read = array[0];\n      isPageDrained = read < numRowsToRead;\n      currentNumValues += read;\n      currentNumNulls += array[1];\n      // No need to update numNullRowsToPad. numNullRowsToPad > 0 means there were whole page skips.\n      // That guarantees that the Native.readBatch can read up to rowId in the current page.\n    }\n  }\n\n  /**\n   * Try to skip until `rowId` (exclusive). If possible, it skips whole underlying pages without\n   * decompressing. In that case, it returns early at the page end, so that the next iteration can\n   * lazily decide to `readPage()` or `tryPageSkip()` again.\n   *\n   * @param rowId the row index in the batch that it tries to skip up until (exclusive).\n   * @return the number of rows skipped via whole-page skips.\n   */\n  private int tryPageSkip(int rowId) {\n    int total = rowId - currentNumValues;\n    int wholePageSkipped = 0;\n    if (total > 0) {\n      // First try to skip from the non-drained underlying page.\n      int skipped = isPageDrained ? 
0 : Native.skipBatch(nativeHandle, total);\n      total -= skipped;\n      isPageDrained = total > 0;\n      if (isPageDrained) {\n        ColumnPageReader columnPageReader = (ColumnPageReader) pageReader;\n        // It is always `columnPageReader.getPageValueCount() > numRowsToSkipFromPrevBatch`\n        int pageValueCount = columnPageReader.getPageValueCount() - numRowsToSkipFromPrevBatch;\n        while (pageValueCount <= total) {\n          // Skip the entire page if it fits within the remaining number of rows to skip\n          columnPageReader.skipPage();\n          numRowsToSkipFromPrevBatch = 0;\n          total -= pageValueCount;\n          wholePageSkipped += pageValueCount;\n          pageValueCount = columnPageReader.getPageValueCount();\n        }\n      }\n\n      currentNumValues += skipped + wholePageSkipped;\n      currentNumNulls += skipped;\n      lastSkippedRowId = currentNumValues - 1;\n    }\n    return wholePageSkipped;\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/Native.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.util.Map;\n\nimport org.apache.comet.IcebergApi;\nimport org.apache.comet.NativeBase;\n\npublic final class Native extends NativeBase {\n  public static int[] readBatch(long handle, int batchSize) {\n    return readBatch(handle, batchSize, 0);\n  }\n\n  public static int skipBatch(long handle, int batchSize) {\n    return skipBatch(handle, batchSize, false);\n  }\n\n  /** Native APIs * */\n\n  /**\n   * Creates a reader for a primitive Parquet column.\n   *\n   * @param physicalTypeId id for Parquet physical type\n   * @param logicalTypeId id for Parquet logical type\n   * @param expectedPhysicalTypeId id for Parquet physical type, converted from Spark read type.\n   *     This is used for type promotion.\n   * @param path the path from the root schema to the column, derived from the method\n   *     'ColumnDescriptor#getPath()'.\n   * @param maxDl the maximum definition level of the primitive column\n   * @param maxRl the maximum repetition level of the primitive column\n   * @param bitWidth (only set when logical type is INT) the bit width for the integer type (INT8,\n   *     INT16, INT32, etc)\n   * @param isSigned (only set when logical type is INT) whether it is signed or unsigned int.\n   * @param typeLength number of bytes required to store a value of the type, only set when the\n   *     physical type is FIXED_LEN_BYTE_ARRAY, otherwise it's 0.\n   * @param precision (only set when logical type is DECIMAL) precision of the decimal type\n   * @param expectedPrecision (only set when logical type is DECIMAL) precision of the decimal type\n   *     from Spark read schema. 
This is used for type promotion.\n   * @param scale (only set when logical type is DECIMAL) scale of the decimal type\n   * @param expectedScale (only set when logical type is DECIMAL) scale of the decimal type from\n   *     Spark read schema. This is used for type promotion.\n   * @param tu (only set when logical type is TIMESTAMP) unit for the timestamp\n   * @param isAdjustedUtc (only set when logical type is TIMESTAMP) whether the timestamp is\n   *     adjusted to UTC or not\n   * @param batchSize the batch size for the columnar read\n   * @param useDecimal128 whether to always return 128 bit decimal regardless of precision\n   * @param useLegacyDateTimestampOrNTZ whether to read legacy dates/timestamps as is\n   * @return a pointer to the native Parquet column reader created\n   */\n  public static native long initColumnReader(\n      int physicalTypeId,\n      int logicalTypeId,\n      int expectedPhysicalTypeId,\n      String[] path,\n      int maxDl,\n      int maxRl,\n      int bitWidth,\n      int expectedBitWidth,\n      boolean isSigned,\n      int typeLength,\n      int precision,\n      int expectedPrecision,\n      int scale,\n      int expectedScale,\n      int tu,\n      boolean isAdjustedUtc,\n      int batchSize,\n      boolean useDecimal128,\n      boolean useLegacyDateTimestampOrNTZ);\n\n  /**\n   * Pass a Parquet dictionary page to the native column reader. Note this should only be called\n   * once per Parquet column chunk. Otherwise it'll panic.\n   *\n   * @param handle the handle to the native Parquet column reader\n   * @param dictionaryValueCount the number of values in this dictionary\n   * @param dictionaryData the actual dictionary page data, including repetition/definition levels\n   *     as well as values\n   * @param encoding the encoding used by the dictionary\n   */\n  public static native void setDictionaryPage(\n      long handle, int dictionaryValueCount, byte[] dictionaryData, int encoding);\n\n  /**\n   * Passes a Parquet data page V1 to the native column reader.\n   *\n   * @param handle the handle to the native Parquet column reader\n   * @param pageValueCount the number of values in this data page\n   * @param pageData the actual page data, which should only contain PLAIN-encoded values.\n   * @param valueEncoding the encoding used by the values\n   */\n  public static native void setPageV1(\n      long handle, int pageValueCount, byte[] pageData, int valueEncoding);\n\n  /**\n   * Passes a Parquet data page V2 to the native column reader.\n   *\n   * @param handle the handle to the native Parquet column reader\n   * @param pageValueCount the number of values in this data page\n   * @param defLevelData the data for definition levels\n   * @param repLevelData the data for repetition levels\n   * @param valueData the data for values\n   * @param valueEncoding the encoding used by the values\n   */\n  public static native void setPageV2(\n      long handle,\n      int pageValueCount,\n      byte[] defLevelData,\n      byte[] repLevelData,\n      byte[] valueData,\n      int valueEncoding);\n\n  /**\n   * Reset the current columnar batch. This will clear all the content of the batch as well as any\n   * internal state such as the current offset.\n   *\n   * @param handle the handle to the native Parquet column reader\n   */\n  @IcebergApi\n  public static native void resetBatch(long handle);\n\n  /**\n   * Reads at most 'batchSize' number of rows from the native Parquet column reader. 
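A typical\n   * drain-aware read loop looks roughly like the following sketch ('n', 'pageValueCount',\n   * 'pageData' and 'encoding' are hypothetical locals; the page-feeding contract is described\n   * below):\n   *\n   * <pre>\n   *   int[] res = Native.readBatch(handle, n, 0);\n   *   if (res[0] < n) {\n   *     // current page drained: feed the next page before reading again\n   *     Native.setPageV1(handle, pageValueCount, pageData, encoding);\n   *   }\n   * </pre>\n   *\n   * <p>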
Returns a tuple\n   * where the first element is the actual number of rows read (including both nulls and non-nulls),\n   * and the second element is the number of nulls read.\n   *\n   * <p>If the returned row count is < 'batchSize' then it means the current page has been\n   * completely drained. In this case, the caller should call {@link Native#setPageV1} or {@link\n   * Native#setPageV2} before the next 'readBatch' call.\n   *\n   * <p>Note that the current page could also be drained if the returned row count = 'batchSize',\n   * i.e., the remaining number of rows in the page is exactly equal to 'batchSize'. In this case,\n   * the next 'readBatch' call will return 0 and the caller should call {@link Native#setPageV1} or\n   * {@link Native#setPageV2} next.\n   *\n   * <p>If `nullPadSize` > 0, it pads that many nulls into the underlying vector before the values\n   * are read in.\n   *\n   * @param handle the handle to the native Parquet column reader\n   * @param batchSize the number of rows to be read\n   * @param nullPadSize the number of nulls to pad before reading.\n   * @return a tuple: (the actual number of rows read, the number of nulls read)\n   */\n  public static native int[] readBatch(long handle, int batchSize, int nullPadSize);\n\n  /**\n   * Skips at most 'batchSize' number of rows from the native Parquet column reader, and returns the\n   * actual number of rows skipped.\n   *\n   * <p>If the returned value is < 'batchSize' then it means the current page has been completely\n   * drained. In this case, the caller should call {@link Native#setPageV1} or {@link\n   * Native#setPageV2} before the next 'skipBatch' call.\n   *\n   * <p>Note that the current page could also be drained if the returned value = 'batchSize', i.e.,\n   * the remaining number of rows in the page is exactly equal to 'batchSize'. In this case, the\n   * next 'skipBatch' call will return 0 and the caller should call {@link Native#setPageV1} or\n   * {@link Native#setPageV2} next.\n   *\n   * @param handle the handle to the native Parquet column reader\n   * @param batchSize the number of rows to skip in the current page\n   * @param discard if true, discard read rows without padding nulls into the underlying vector\n   * @return the actual number of rows skipped\n   */\n  public static native int skipBatch(long handle, int batchSize, boolean discard);\n\n  /**\n   * Returns the current batch constructed via 'readBatch'.\n   *\n   * @param handle the handle to the native Parquet column reader\n   * @param arrayAddr the memory address of the ArrowArray struct\n   * @param schemaAddr the memory address of the ArrowSchema struct\n   */\n  public static native void currentBatch(long handle, long arrayAddr, long schemaAddr);\n\n  /**\n   * Closes the native Parquet column reader and releases all resources associated with it.\n   *\n   * @param handle the handle to the native Parquet column reader\n   */\n  public static native void closeColumnReader(long handle);\n\n  ///////////// Arrow Native Parquet Reader APIs\n  // TODO: Add partitionValues(?), improve requiredColumns to use a projection mask that corresponds\n  // to arrow.\n  //      Add batch size, datetimeRebaseModeSpec, metrics(how?)...\n\n  /**\n   * Verify that object store options are valid. 
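For example, a\n   * sketch with a hypothetical bucket URI and an empty options map:\n   *\n   * <pre>\n   *   Map&lt;String, String&gt; opts = new HashMap&lt;&gt;();\n   *   Native.validateObjectStoreConfig(\"s3a://bucket/data.parquet\", opts);\n   * </pre>\n   *\n   * <p>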
An exception will be thrown if the provided options\n   * are not valid.\n   */\n  public static native void validateObjectStoreConfig(\n      String filePath, Map<String, String> objectStoreOptions);\n\n  /**\n   * Initialize a record batch reader for a PartitionedFile.\n   *\n   * @param filePath the path to the Parquet file\n   * @param starts the starting byte offsets of the row groups to read\n   * @param lengths the compressed sizes, in bytes, of the row groups to read\n   * @return a handle to the record batch reader, used in subsequent calls.\n   */\n  public static native long initRecordBatchReader(\n      String filePath,\n      long fileSize,\n      long[] starts,\n      long[] lengths,\n      byte[] filter,\n      byte[] requiredSchema,\n      byte[] dataSchema,\n      String sessionTimezone,\n      int batchSize,\n      boolean caseSensitive,\n      Map<String, String> objectStoreOptions,\n      CometFileKeyUnwrapper keyUnwrapper,\n      Object metricsNode);\n\n  // arrow native version of read batch\n\n  /**\n   * Read the next batch of data into memory on the native side.\n   *\n   * @param handle the handle to the record batch reader\n   * @return the number of rows read\n   */\n  public static native int readNextRecordBatch(long handle);\n\n  // arrow native equivalent of currentBatch. 'columnNum' is number of the column in the record\n  // batch\n\n  /**\n   * Load the column corresponding to columnNum in the currently loaded record batch into the JVM.\n   *\n   * @param handle the handle to the record batch reader\n   * @param columnNum the index of the column in the record batch\n   * @param arrayAddr the memory address of the ArrowArray struct\n   * @param schemaAddr the memory address of the ArrowSchema struct\n   */\n  public static native void currentColumnBatch(\n      long handle, int columnNum, long arrayAddr, long schemaAddr);\n\n  // arrow native version to close record batch reader\n\n  /**\n   * Close the record batch reader and free its resources.\n   *\n   * @param handle the handle to the record batch reader\n   */\n  public static native void closeRecordBatchReader(long handle);\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/NativeBatchReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.ByteArrayOutputStream;\nimport java.io.Closeable;\nimport java.io.IOException;\nimport java.lang.reflect.InvocationTargetException;\nimport java.lang.reflect.Method;\nimport java.net.URI;\nimport java.net.URISyntaxException;\nimport java.nio.channels.Channels;\nimport java.util.*;\nimport java.util.stream.Collectors;\n\nimport scala.Option;\nimport scala.collection.Seq;\nimport scala.collection.mutable.Buffer;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.arrow.memory.BufferAllocator;\nimport org.apache.arrow.memory.RootAllocator;\nimport org.apache.arrow.vector.ipc.WriteChannel;\nimport org.apache.arrow.vector.ipc.message.MessageSerializer;\nimport org.apache.arrow.vector.types.pojo.Schema;\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.fs.Path;\nimport org.apache.hadoop.mapreduce.InputSplit;\nimport org.apache.hadoop.mapreduce.RecordReader;\nimport org.apache.hadoop.mapreduce.TaskAttemptContext;\nimport org.apache.parquet.HadoopReadOptions;\nimport org.apache.parquet.ParquetReadOptions;\nimport org.apache.parquet.Preconditions;\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.hadoop.metadata.BlockMetaData;\nimport org.apache.parquet.hadoop.metadata.ParquetMetadata;\nimport org.apache.parquet.schema.GroupType;\nimport org.apache.parquet.schema.MessageType;\nimport org.apache.parquet.schema.Type;\nimport org.apache.spark.TaskContext;\nimport org.apache.spark.TaskContext$;\nimport org.apache.spark.executor.TaskMetrics;\nimport org.apache.spark.sql.catalyst.InternalRow;\nimport org.apache.spark.sql.comet.parquet.CometParquetReadSupport;\nimport org.apache.spark.sql.comet.util.Utils$;\nimport org.apache.spark.sql.errors.QueryExecutionErrors;\nimport org.apache.spark.sql.execution.datasources.PartitionedFile;\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetColumn;\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetToSparkSchemaConverter;\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetUtils;\nimport org.apache.spark.sql.execution.metric.SQLMetric;\nimport org.apache.spark.sql.internal.SQLConf;\nimport org.apache.spark.sql.types.*;\nimport org.apache.spark.sql.vectorized.ColumnarBatch;\nimport org.apache.spark.util.AccumulatorV2;\n\nimport org.apache.comet.CometConf;\nimport org.apache.comet.CometSchemaImporter;\nimport org.apache.comet.objectstore.NativeConfig;\nimport org.apache.comet.shims.ShimBatchReader;\nimport org.apache.comet.shims.ShimFileFormat;\nimport org.apache.comet.vector.CometVector;\nimport org.apache.comet.vector.NativeUtil;\n\nimport static 
scala.jdk.javaapi.CollectionConverters.asJava;\n\n/**\n * A vectorized Parquet reader that reads a Parquet file in a batched fashion.\n *\n * <p>Example of how to use this:\n *\n * <pre>\n *   NativeBatchReader reader = new NativeBatchReader(parquetFile, batchSize);\n *   try {\n *     reader.init();\n *     while (reader.readBatch()) {\n *       ColumnarBatch batch = reader.currentBatch();\n *       // consume the batch\n *     }\n *   } finally { // resources associated with the reader should be released\n *     reader.close();\n *   }\n * </pre>\n */\npublic class NativeBatchReader extends RecordReader<Void, ColumnarBatch> implements Closeable {\n\n  /**\n   * A class that contains the necessary file information for reading a Parquet file. This class\n   * provides an abstraction over PartitionedFile properties.\n   */\n  public static class FileInfo {\n    private final long start;\n    private final long length;\n    private final String filePath;\n    private final long fileSize;\n\n    public FileInfo(long start, long length, String filePath, long fileSize)\n        throws URISyntaxException {\n      this.start = start;\n      this.length = length;\n      URI uri = new Path(filePath).toUri();\n      if (uri.getScheme() == null) {\n        uri = new Path(\"file://\" + filePath).toUri();\n      }\n      this.filePath = uri.toString();\n      this.fileSize = fileSize;\n    }\n\n    public static FileInfo fromPartitionedFile(PartitionedFile file) throws URISyntaxException {\n      return new FileInfo(file.start(), file.length(), file.filePath().toString(), file.fileSize());\n    }\n\n    public long start() {\n      return start;\n    }\n\n    public long length() {\n      return length;\n    }\n\n    public String filePath() {\n      return filePath;\n    }\n\n    public long fileSize() {\n      return fileSize;\n    }\n\n    public URI pathUri() throws URISyntaxException {\n      return new URI(filePath);\n    }\n  }\n\n  private static final Logger LOG = LoggerFactory.getLogger(NativeBatchReader.class);\n  protected static final BufferAllocator ALLOCATOR = new RootAllocator();\n  private NativeUtil nativeUtil = new NativeUtil();\n\n  protected Configuration conf;\n  protected int capacity;\n  protected boolean isCaseSensitive;\n  protected boolean useFieldId;\n  protected boolean ignoreMissingIds;\n  protected StructType partitionSchema;\n  protected InternalRow partitionValues;\n  protected PartitionedFile file;\n  protected FileInfo fileInfo;\n  protected final Map<String, SQLMetric> metrics;\n  // Unfortunately CometMetricNode is from the \"spark\" package and cannot be used directly here\n  // TODO: Move it to common package?\n  protected Object metricsNode = null;\n\n  protected StructType sparkSchema;\n  protected StructType dataSchema;\n  MessageType fileSchema;\n  protected MessageType requestedSchema;\n  protected CometVector[] vectors;\n  protected AbstractColumnReader[] columnReaders;\n  protected CometSchemaImporter importer;\n  protected ColumnarBatch currentBatch;\n  //  private FileReader fileReader;\n  protected boolean[] missingColumns;\n  protected boolean isInitialized;\n  protected ParquetMetadata footer;\n  protected byte[] nativeFilter;\n  protected AbstractColumnReader[] preInitializedReaders;\n\n  private ParquetColumn parquetColumn;\n\n  /**\n   * Map from field name to spark schema index for efficient lookups during batch loading. 
Built\n   * once during initialization and reused across all batch loads.\n   */\n  private Map<String, Integer> sparkFieldIndexMap;\n\n  /**\n   * Whether the native scan should always return decimals represented by 128 bits, regardless of\n   * their precision. Normally, this should be true if native execution is enabled, since Arrow\n   * compute kernels don't support 32 and 64 bit decimals yet.\n   */\n  // TODO: (ARROW NATIVE)\n  private boolean useDecimal128;\n\n  /**\n   * Whether to return dates/timestamps that were written with legacy hybrid (Julian + Gregorian)\n   * calendar as is. If this is true, Comet will return them as is, instead of rebasing them\n   * to the new Proleptic Gregorian calendar. If this is false, Comet will throw exceptions when\n   * seeing these dates/timestamps.\n   */\n  // TODO: (ARROW NATIVE)\n  protected boolean useLegacyDateTimestamp;\n\n  /** The TaskContext object for executing this task. */\n  private final TaskContext taskContext;\n\n  private long totalRowCount = 0;\n  private long handle;\n\n  // Protected no-arg constructor for subclasses\n  protected NativeBatchReader() {\n    this.taskContext = TaskContext$.MODULE$.get();\n    this.metrics = new HashMap<>();\n  }\n\n  // Only for testing\n  public NativeBatchReader(String file, int capacity) {\n    this(file, capacity, null, null);\n  }\n\n  // Only for testing\n  public NativeBatchReader(\n      String file, int capacity, StructType partitionSchema, InternalRow partitionValues) {\n    this(new Configuration(), file, capacity, partitionSchema, partitionValues);\n  }\n\n  // Only for testing\n  public NativeBatchReader(\n      Configuration conf,\n      String file,\n      int capacity,\n      StructType partitionSchema,\n      InternalRow partitionValues) {\n\n    this.conf = conf;\n    this.capacity = capacity;\n    this.isCaseSensitive = false;\n    this.useFieldId = false;\n    this.ignoreMissingIds = false;\n    this.partitionSchema = partitionSchema;\n    this.partitionValues = partitionValues;\n\n    this.file = ShimBatchReader.newPartitionedFile(partitionValues, file);\n    this.metrics = new HashMap<>();\n\n    this.taskContext = TaskContext$.MODULE$.get();\n  }\n\n  private NativeBatchReader(AbstractColumnReader[] columnReaders) {\n    // Todo: set useDecimal128 and useLazyMaterialization\n    int numColumns = columnReaders.length;\n    this.columnReaders = new AbstractColumnReader[numColumns];\n    vectors = new CometVector[numColumns];\n    currentBatch = new ColumnarBatch(vectors);\n    // This constructor is used by Iceberg only. 
The columnReaders are\n    // initialized in Iceberg, so there is no need to call init()\n    isInitialized = true;\n    this.taskContext = TaskContext$.MODULE$.get();\n    this.metrics = new HashMap<>();\n  }\n\n  NativeBatchReader(\n      Configuration conf,\n      PartitionedFile inputSplit,\n      ParquetMetadata footer,\n      byte[] nativeFilter,\n      int capacity,\n      StructType sparkSchema,\n      StructType dataSchema,\n      boolean isCaseSensitive,\n      boolean useFieldId,\n      boolean ignoreMissingIds,\n      boolean useLegacyDateTimestamp,\n      StructType partitionSchema,\n      InternalRow partitionValues,\n      Map<String, SQLMetric> metrics,\n      Object metricsNode) {\n    this.conf = conf;\n    this.capacity = capacity;\n    this.sparkSchema = sparkSchema;\n    this.dataSchema = dataSchema;\n    this.isCaseSensitive = isCaseSensitive;\n    this.useFieldId = useFieldId;\n    this.ignoreMissingIds = ignoreMissingIds;\n    this.useLegacyDateTimestamp = useLegacyDateTimestamp;\n    this.partitionSchema = partitionSchema;\n    this.partitionValues = partitionValues;\n    this.file = inputSplit;\n    this.footer = footer;\n    this.nativeFilter = nativeFilter;\n    this.metrics = metrics;\n    this.metricsNode = metricsNode;\n    this.taskContext = TaskContext$.MODULE$.get();\n  }\n\n  /** Alternate constructor that accepts FileInfo instead of PartitionedFile. */\n  NativeBatchReader(\n      Configuration conf,\n      FileInfo fileInfo,\n      ParquetMetadata footer,\n      byte[] nativeFilter,\n      int capacity,\n      StructType sparkSchema,\n      StructType dataSchema,\n      boolean isCaseSensitive,\n      boolean useFieldId,\n      boolean ignoreMissingIds,\n      boolean useLegacyDateTimestamp,\n      StructType partitionSchema,\n      InternalRow partitionValues,\n      Map<String, SQLMetric> metrics,\n      Object metricsNode) {\n    this.conf = conf;\n    this.capacity = capacity;\n    this.sparkSchema = sparkSchema;\n    this.dataSchema = dataSchema;\n    this.isCaseSensitive = isCaseSensitive;\n    this.useFieldId = useFieldId;\n    this.ignoreMissingIds = ignoreMissingIds;\n    this.useLegacyDateTimestamp = useLegacyDateTimestamp;\n    this.partitionSchema = partitionSchema;\n    this.partitionValues = partitionValues;\n    this.fileInfo = fileInfo;\n    this.footer = footer;\n    this.nativeFilter = nativeFilter;\n    this.metrics = metrics;\n    this.metricsNode = metricsNode;\n    this.taskContext = TaskContext$.MODULE$.get();\n  }\n\n  /**\n   * Initialize this reader. The reason we don't do it in the constructor is that we want to close\n   * any resources held by this reader if an error happens during initialization.\n   */\n  public void init() throws Throwable {\n\n    useDecimal128 =\n        conf.getBoolean(\n            CometConf.COMET_USE_DECIMAL_128().key(),\n            (Boolean) CometConf.COMET_USE_DECIMAL_128().defaultValue().get());\n\n    // Use fileInfo if available, otherwise fall back to file\n    long start = fileInfo != null ? fileInfo.start() : file.start();\n    long length = fileInfo != null ? fileInfo.length() : file.length();\n    String filePath = fileInfo != null ? fileInfo.filePath() : file.filePath().toString();\n    long fileSize = fileInfo != null ? fileInfo.fileSize() : file.fileSize();\n    URI pathUri = fileInfo != null ? 
fileInfo.pathUri() : file.pathUri();\n\n    ParquetReadOptions.Builder builder = HadoopReadOptions.builder(conf, new Path(filePath));\n\n    if (start >= 0 && length >= 0) {\n      builder = builder.withRange(start, start + length);\n    }\n    ParquetReadOptions readOptions = builder.build();\n\n    Map<String, String> objectStoreOptions =\n        asJava(NativeConfig.extractObjectStoreOptions(conf, pathUri));\n\n    // TODO: enable off-heap buffers when they are ready\n    ReadOptions cometReadOptions = ReadOptions.builder(conf).build();\n\n    Path path = new Path(new URI(filePath));\n    try (FileReader fileReader =\n        new FileReader(\n            CometInputFile.fromPath(path, conf), footer, readOptions, cometReadOptions, metrics)) {\n\n      requestedSchema = footer.getFileMetaData().getSchema();\n      fileSchema = requestedSchema;\n\n      if (sparkSchema == null) {\n        ParquetToSparkSchemaConverter converter = new ParquetToSparkSchemaConverter(conf);\n        sparkSchema = converter.convert(requestedSchema);\n      } else {\n        requestedSchema =\n            CometParquetReadSupport.clipParquetSchema(\n                requestedSchema, sparkSchema, isCaseSensitive, useFieldId, ignoreMissingIds);\n        if (requestedSchema.getFieldCount() != sparkSchema.size()) {\n          throw new IllegalArgumentException(\n              String.format(\n                  \"Spark schema has %d columns while \" + \"Parquet schema has %d columns\",\n                  sparkSchema.size(), requestedSchema.getFieldCount()));\n        }\n      }\n\n      boolean caseSensitive =\n          conf.getBoolean(\n              SQLConf.CASE_SENSITIVE().key(),\n              (boolean) SQLConf.CASE_SENSITIVE().defaultValue().get());\n      // Rename spark fields based on field_id so that the name of the spark schema field matches\n      // the parquet field name\n      if (useFieldId && ParquetUtils.hasFieldIds(sparkSchema)) {\n        sparkSchema =\n            getSparkSchemaByFieldId(sparkSchema, requestedSchema.asGroupType(), caseSensitive);\n      }\n\n      this.parquetColumn = getParquetColumn(requestedSchema, this.sparkSchema);\n\n      // Create Column readers\n      List<Type> fields = requestedSchema.getFields();\n      List<Type> fileFields = fileSchema.getFields();\n      ParquetColumn[] parquetFields =\n          asJava(parquetColumn.children()).toArray(new ParquetColumn[0]);\n      int numColumns = fields.size();\n      if (partitionSchema != null) numColumns += partitionSchema.size();\n      columnReaders = new AbstractColumnReader[numColumns];\n\n      // Initialize missing columns and use null vectors for them\n      missingColumns = new boolean[numColumns];\n      // We do not need the column index of the row index; but this method has the\n      // side effect of throwing an exception if a column with the same name is\n      // found, which we do want (spark unit tests explicitly test for that).\n      ShimFileFormat.findRowIndexColumnIndexInSchema(sparkSchema);\n      StructField[] nonPartitionFields = sparkSchema.fields();\n      boolean hasRowIndexColumn = false;\n      // Ranges of rows to read (needed iff row indexes are being read)\n      List<BlockMetaData> blocks =\n          FileReader.filterRowGroups(readOptions, footer.getBlocks(), fileReader);\n      totalRowCount = fileReader.getFilteredRecordCount();\n      if (totalRowCount == 0) {\n        // all the data is filtered out.\n        isInitialized = true;\n        return;\n      }\n      long[] starts = new long[blocks.size()];\n   
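// Byte ranges of the filtered row groups: starting offsets go in 'starts' and compressed\n      // sizes in 'lengths'; both arrays are passed to Native.initRecordBatchReader below.\n   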
   long[] lengths = new long[blocks.size()];\n      int blockIndex = 0;\n      for (BlockMetaData block : blocks) {\n        long blockStart = block.getStartingPos();\n        long blockLength = block.getCompressedSize();\n        starts[blockIndex] = blockStart;\n        lengths[blockIndex] = blockLength;\n        blockIndex++;\n      }\n      for (int i = 0; i < fields.size(); i++) {\n        Type field = fields.get(i);\n        Optional<Type> optFileField =\n            fileFields.stream().filter(f -> f.getName().equals(field.getName())).findFirst();\n        if (nonPartitionFields[i].name().equals(ShimFileFormat.ROW_INDEX_TEMPORARY_COLUMN_NAME())) {\n          // Values of ROW_INDEX_TEMPORARY_COLUMN_NAME column are always populated with\n          // generated row indexes, rather than read from the file.\n          // TODO(SPARK-40059): Allow users to include columns named\n          //                    FileFormat.ROW_INDEX_TEMPORARY_COLUMN_NAME in their schemas.\n          long[] rowIndices = FileReader.getRowIndices(blocks);\n          columnReaders[i] =\n              new ArrowRowIndexColumnReader(nonPartitionFields[i], capacity, rowIndices);\n          hasRowIndexColumn = true;\n          missingColumns[i] = true;\n        } else if (optFileField.isPresent()) {\n          // The column we are reading may be a complex type in which case we check if each field in\n          // the requested type is in the file type (and the same data type)\n          // This makes the same check as Spark's VectorizedParquetReader\n          checkColumn(parquetFields[i]);\n          missingColumns[i] = false;\n        } else {\n          if (preInitializedReaders != null\n              && i < preInitializedReaders.length\n              && preInitializedReaders[i] != null) {\n            columnReaders[i] = preInitializedReaders[i];\n            missingColumns[i] = true;\n          } else {\n            if (field.getRepetition() == Type.Repetition.REQUIRED) {\n              throw new IOException(\n                  \"Required column '\"\n                      + field.getName()\n                      + \"' is missing\"\n                      + \" in data file \"\n                      + filePath);\n            }\n            if (field.isPrimitive()) {\n              ArrowConstantColumnReader reader =\n                  new ArrowConstantColumnReader(nonPartitionFields[i], capacity, useDecimal128);\n              columnReaders[i] = reader;\n              missingColumns[i] = true;\n            } else {\n              // the column requested is not in the file, but the native reader can handle that\n              // and will return nulls for all rows requested\n              missingColumns[i] = false;\n            }\n          }\n        }\n      }\n\n      // Initialize constant readers for partition columns\n      if (partitionSchema != null) {\n        StructField[] partitionFields = partitionSchema.fields();\n        for (int i = fields.size(); i < columnReaders.length; i++) {\n          int fieldIndex = i - fields.size();\n          StructField field = partitionFields[fieldIndex];\n          ArrowConstantColumnReader reader =\n              new ArrowConstantColumnReader(\n                  field, capacity, partitionValues, fieldIndex, useDecimal128);\n          columnReaders[i] = reader;\n        }\n      }\n\n      vectors = new CometVector[numColumns];\n      currentBatch = new ColumnarBatch(vectors);\n\n      // For test purpose only\n      // If the last external accumulator is 
a `NumRowGroupsAcc`, the number of row groups to read\n      // will be added to the accumulator, so we can check in tests whether row groups were\n      // filtered or not.\n      // Note that this tries to get the thread-local TaskContext object; if this is called on\n      // another thread, it won't update the accumulator.\n      if (taskContext != null) {\n        Option<AccumulatorV2<?, ?>> accu = getTaskAccumulator(taskContext.taskMetrics());\n        if (accu.isDefined() && accu.get().getClass().getSimpleName().equals(\"NumRowGroupsAcc\")) {\n          @SuppressWarnings(\"unchecked\")\n          AccumulatorV2<Integer, Integer> intAccum = (AccumulatorV2<Integer, Integer>) accu.get();\n          intAccum.add(blocks.size());\n        }\n      }\n\n      boolean encryptionEnabled = CometParquetUtils.encryptionEnabled(conf);\n\n      // Create keyUnwrapper if encryption is enabled\n      CometFileKeyUnwrapper keyUnwrapper = null;\n      if (encryptionEnabled) {\n        keyUnwrapper = new CometFileKeyUnwrapper();\n        keyUnwrapper.storeDecryptionKeyRetriever(filePath, conf);\n      }\n\n      // Filter out columns with preinitialized readers from sparkSchema before making the\n      // call to native\n      if (preInitializedReaders != null) {\n        StructType filteredSchema = new StructType();\n        StructField[] sparkFields = sparkSchema.fields();\n        // Build name map for efficient lookups\n        Map<String, Type> fileFieldNameMap =\n            caseSensitive\n                ? buildCaseSensitiveNameMap(fileFields)\n                : buildCaseInsensitiveNameMap(fileFields);\n        for (int i = 0; i < sparkFields.length; i++) {\n          // Keep the column if:\n          // 1. It doesn't have a preinitialized reader, OR\n          // 2. It has a preinitialized reader but exists in fileSchema\n          boolean hasPreInitializedReader =\n              i < preInitializedReaders.length && preInitializedReaders[i] != null;\n          String fieldName =\n              caseSensitive\n                  ? sparkFields[i].name()\n                  : sparkFields[i].name().toLowerCase(Locale.ROOT);\n          boolean existsInFileSchema = fileFieldNameMap.containsKey(fieldName);\n          if (!hasPreInitializedReader || existsInFileSchema) {\n            filteredSchema = filteredSchema.add(sparkFields[i]);\n          }\n        }\n        sparkSchema = filteredSchema;\n      }\n\n      // Native code always uses \"UTC\" as the timeZoneId when converting from spark to arrow schema.\n      String timeZoneId = \"UTC\";\n      Schema arrowSchema = Utils$.MODULE$.toArrowSchema(sparkSchema, timeZoneId);\n      byte[] serializedRequestedArrowSchema = serializeArrowSchema(arrowSchema);\n      Schema dataArrowSchema = Utils$.MODULE$.toArrowSchema(dataSchema, timeZoneId);\n      byte[] serializedDataArrowSchema = serializeArrowSchema(dataArrowSchema);\n\n      int batchSize =\n          conf.getInt(\n              CometConf.COMET_BATCH_SIZE().key(),\n              (Integer) CometConf.COMET_BATCH_SIZE().defaultValue().get());\n      this.handle =\n          Native.initRecordBatchReader(\n              filePath,\n              fileSize,\n              starts,\n              lengths,\n              hasRowIndexColumn ? 
null : nativeFilter,\n              serializedRequestedArrowSchema,\n              serializedDataArrowSchema,\n              timeZoneId,\n              batchSize,\n              caseSensitive,\n              objectStoreOptions,\n              keyUnwrapper,\n              metricsNode);\n\n      // Build spark field index map for efficient lookups during batch loading\n      StructField[] sparkFields = sparkSchema.fields();\n      sparkFieldIndexMap = new HashMap<>();\n      for (int j = 0; j < sparkFields.length; j++) {\n        String fieldName =\n            caseSensitive ? sparkFields[j].name() : sparkFields[j].name().toLowerCase(Locale.ROOT);\n        sparkFieldIndexMap.put(fieldName, j);\n      }\n    }\n    isInitialized = true;\n  }\n\n  private ParquetColumn getParquetColumn(MessageType schema, StructType sparkSchema) {\n    // We use a different config from the config that is passed in.\n    // This follows the setting  used in Spark's SpecificParquetRecordReaderBase\n    Configuration config = new Configuration(conf);\n    config.setBoolean(SQLConf.PARQUET_BINARY_AS_STRING().key(), false);\n    config.setBoolean(SQLConf.PARQUET_INT96_AS_TIMESTAMP().key(), false);\n    config.setBoolean(SQLConf.CASE_SENSITIVE().key(), false);\n    config.setBoolean(SQLConf.PARQUET_INFER_TIMESTAMP_NTZ_ENABLED().key(), false);\n    config.setBoolean(SQLConf.LEGACY_PARQUET_NANOS_AS_LONG().key(), false);\n    ParquetToSparkSchemaConverter converter = new ParquetToSparkSchemaConverter(config);\n    return converter.convertParquetColumn(schema, Option.apply(sparkSchema));\n  }\n\n  private Map<Integer, List<Type>> getIdToParquetFieldMap(GroupType type) {\n    return type.getFields().stream()\n        .filter(f -> f.getId() != null)\n        .collect(Collectors.groupingBy(f -> f.getId().intValue()));\n  }\n\n  private Map<String, List<Type>> getCaseSensitiveParquetFieldMap(GroupType schema) {\n    return schema.getFields().stream().collect(Collectors.toMap(Type::getName, Arrays::asList));\n  }\n\n  private Map<String, List<Type>> getCaseInsensitiveParquetFieldMap(GroupType schema) {\n    return schema.getFields().stream()\n        .collect(Collectors.groupingBy(f -> f.getName().toLowerCase(Locale.ROOT)));\n  }\n\n  private Map<String, Type> buildCaseSensitiveNameMap(List<Type> types) {\n    return types.stream().collect(Collectors.toMap(Type::getName, t -> t));\n  }\n\n  private Map<String, Type> buildCaseInsensitiveNameMap(List<Type> types) {\n    return types.stream()\n        .collect(Collectors.toMap(t -> t.getName().toLowerCase(Locale.ROOT), t -> t));\n  }\n\n  private Type getMatchingParquetFieldById(\n      StructField f,\n      Map<Integer, List<Type>> idToParquetFieldMap,\n      Map<String, List<Type>> nameToParquetFieldMap,\n      boolean isCaseSensitive) {\n    List<Type> matched = null;\n    int fieldId = 0;\n    if (ParquetUtils.hasFieldId(f)) {\n      fieldId = ParquetUtils.getFieldId(f);\n      matched = idToParquetFieldMap.get(fieldId);\n    } else {\n      String fieldName = isCaseSensitive ? f.name() : f.name().toLowerCase(Locale.ROOT);\n      matched = nameToParquetFieldMap.get(fieldName);\n    }\n\n    if (matched == null || matched.isEmpty()) {\n      return null;\n    }\n    if (matched.size() > 1) {\n      // Need to fail if there is ambiguity, i.e. 
more than one field is matched\n      String parquetTypesString =\n          matched.stream().map(Type::getName).collect(Collectors.joining(\"[\", \", \", \"]\"));\n      throw QueryExecutionErrors.foundDuplicateFieldInFieldIdLookupModeError(\n          fieldId, parquetTypesString);\n    } else {\n      return matched.get(0);\n    }\n  }\n\n  // Derived from CometParquetReadSupport.matchFieldId\n  private String getMatchingNameById(\n      StructField f,\n      Map<Integer, List<Type>> idToParquetFieldMap,\n      Map<String, List<Type>> nameToParquetFieldMap,\n      boolean isCaseSensitive) {\n    Type matched =\n        getMatchingParquetFieldById(f, idToParquetFieldMap, nameToParquetFieldMap, isCaseSensitive);\n\n    // When there is no ID match, we use a fake name to avoid a name match by accident\n    // We need this name to be unique as well, otherwise there will be type conflicts\n    if (matched == null) {\n      return CometParquetReadSupport.generateFakeColumnName();\n    } else {\n      return matched.getName();\n    }\n  }\n\n  // clip ParquetGroup Type\n  private StructType getSparkSchemaByFieldId(\n      StructType schema, GroupType parquetSchema, boolean caseSensitive) {\n    StructType newSchema = new StructType();\n    Map<Integer, List<Type>> idToParquetFieldMap = getIdToParquetFieldMap(parquetSchema);\n    Map<String, List<Type>> nameToParquetFieldMap =\n        caseSensitive\n            ? getCaseSensitiveParquetFieldMap(parquetSchema)\n            : getCaseInsensitiveParquetFieldMap(parquetSchema);\n    for (StructField f : schema.fields()) {\n      DataType newDataType;\n      String fieldName = isCaseSensitive ? f.name() : f.name().toLowerCase(Locale.ROOT);\n      List<Type> parquetFieldList = nameToParquetFieldMap.get(fieldName);\n      if (parquetFieldList == null) {\n        newDataType = f.dataType();\n      } else {\n        Type fieldType = parquetFieldList.get(0);\n        if (f.dataType() instanceof StructType) {\n          newDataType =\n              getSparkSchemaByFieldId(\n                  (StructType) f.dataType(), fieldType.asGroupType(), caseSensitive);\n        } else {\n          newDataType = getSparkTypeByFieldId(f.dataType(), fieldType, caseSensitive);\n        }\n      }\n      String matchedName =\n          getMatchingNameById(f, idToParquetFieldMap, nameToParquetFieldMap, isCaseSensitive);\n      StructField newField = f.copy(matchedName, newDataType, f.nullable(), f.metadata());\n      newSchema = newSchema.add(newField);\n    }\n    return newSchema;\n  }\n\n  private static boolean isPrimitiveCatalystType(DataType dataType) {\n    return !(dataType instanceof ArrayType)\n        && !(dataType instanceof MapType)\n        && !(dataType instanceof StructType);\n  }\n\n  private DataType getSparkTypeByFieldId(\n      DataType dataType, Type parquetType, boolean caseSensitive) {\n    DataType newDataType;\n    if (dataType instanceof StructType) {\n      newDataType =\n          getSparkSchemaByFieldId((StructType) dataType, parquetType.asGroupType(), caseSensitive);\n    } else if (dataType instanceof ArrayType\n        && !isPrimitiveCatalystType(((ArrayType) dataType).elementType())) {\n\n      newDataType =\n          getSparkArrayTypeByFieldId(\n              (ArrayType) dataType, parquetType.asGroupType(), caseSensitive);\n    } else if (dataType instanceof MapType) {\n      MapType mapType = (MapType) dataType;\n      DataType keyType = mapType.keyType();\n      DataType valueType = mapType.valueType();\n      DataType newKeyType;\n     
 DataType newValueType;\n      Type parquetMapType = parquetType.asGroupType().getFields().get(0);\n      Type parquetKeyType = parquetMapType.asGroupType().getType(\"key\");\n      Type parquetValueType = parquetMapType.asGroupType().getType(\"value\");\n      if (keyType instanceof StructType) {\n        newKeyType =\n            getSparkSchemaByFieldId(\n                (StructType) keyType, parquetKeyType.asGroupType(), caseSensitive);\n      } else {\n        newKeyType = keyType;\n      }\n      if (valueType instanceof StructType) {\n        newValueType =\n            getSparkSchemaByFieldId(\n                (StructType) valueType, parquetValueType.asGroupType(), caseSensitive);\n      } else {\n        newValueType = valueType;\n      }\n      newDataType = new MapType(newKeyType, newValueType, mapType.valueContainsNull());\n    } else {\n      newDataType = dataType;\n    }\n    return newDataType;\n  }\n\n  private DataType getSparkArrayTypeByFieldId(\n      ArrayType arrayType, GroupType parquetList, boolean caseSensitive) {\n    DataType newDataType;\n    DataType elementType = arrayType.elementType();\n    DataType newElementType;\n    Type parquetElementType;\n    if (parquetList.getLogicalTypeAnnotation() == null\n        && parquetList.isRepetition(Type.Repetition.REPEATED)) {\n      parquetElementType = parquetList;\n    } else {\n      // we expect only non-primitive types here (see clipParquetListTypes for related logic)\n      GroupType repeatedGroup = parquetList.asGroupType().getType(0).asGroupType();\n      if (repeatedGroup.getFieldCount() > 1\n          || Objects.equals(repeatedGroup.getName(), \"array\")\n          || Objects.equals(repeatedGroup.getName(), parquetList.getName() + \"_tuple\")) {\n        parquetElementType = repeatedGroup;\n      } else {\n        parquetElementType = repeatedGroup.getType(0);\n      }\n    }\n    if (elementType instanceof StructType) {\n      newElementType =\n          getSparkSchemaByFieldId(\n              (StructType) elementType, parquetElementType.asGroupType(), caseSensitive);\n    } else {\n      newElementType = getSparkTypeByFieldId(elementType, parquetElementType, caseSensitive);\n    }\n    newDataType = new ArrayType(newElementType, arrayType.containsNull());\n    return newDataType;\n  }\n\n  private void checkParquetType(ParquetColumn column) throws IOException {\n    String[] path = asJava(column.path()).toArray(new String[0]);\n    if (containsPath(fileSchema, path)) {\n      if (column.isPrimitive()) {\n        ColumnDescriptor desc = column.descriptor().get();\n        ColumnDescriptor fd = fileSchema.getColumnDescription(desc.getPath());\n        TypeUtil.checkParquetType(fd, column.sparkType());\n      } else {\n        for (ParquetColumn childColumn : asJava(column.children())) {\n          checkColumn(childColumn);\n        }\n      }\n    } else { // A missing column which is either primitive or complex\n      if (column.required()) {\n        // check if we have a preinitialized column reader for this column.\n        int columnIndex = getColumnIndexFromParquetColumn(column);\n        if (columnIndex == -1\n            || preInitializedReaders == null\n            || columnIndex >= preInitializedReaders.length\n            || preInitializedReaders[columnIndex] == null) {\n          // Column is missing in data but the required data is non-nullable. This file is invalid.\n          throw new IOException(\n              \"Required column is missing in data file. 
Col: \" + Arrays.toString(path));\n        }\n      }\n    }\n  }\n\n  /**\n   * Get the column index in the requested schema for a given ParquetColumn. Returns -1 if not\n   * found.\n   */\n  private int getColumnIndexFromParquetColumn(ParquetColumn column) {\n    String[] targetPath = asJava(column.path()).toArray(new String[0]);\n    if (targetPath.length == 0) {\n      return -1;\n    }\n\n    // For top-level columns, match by name\n    String columnName = targetPath[0];\n    ParquetColumn[] parquetFields = asJava(parquetColumn.children()).toArray(new ParquetColumn[0]);\n    for (int i = 0; i < parquetFields.length; i++) {\n      String[] fieldPath = asJava(parquetFields[i].path()).toArray(new String[0]);\n      if (fieldPath.length > 0 && fieldPath[0].equals(columnName)) {\n        return i;\n      }\n    }\n    return -1;\n  }\n\n  /**\n   * Checks whether the given 'path' exists in 'parquetType'. The difference between this and {@link\n   * MessageType#containsPath(String[])} is that the latter only support paths to leaf From Spark:\n   * VectorizedParquetRecordReader Check whether a column from requested schema is missing from the\n   * file schema, or whether it conforms to the type of the file schema.\n   */\n  private void checkColumn(ParquetColumn column) throws IOException {\n    String[] path = asJava(column.path()).toArray(new String[0]);\n    if (containsPath(fileSchema, path)) {\n      if (column.isPrimitive()) {\n        ColumnDescriptor desc = column.descriptor().get();\n        ColumnDescriptor fd = fileSchema.getColumnDescription(desc.getPath());\n        if (!fd.equals(desc)) {\n          throw new UnsupportedOperationException(\"Schema evolution not supported.\");\n        }\n      } else {\n        for (ParquetColumn childColumn : asJava(column.children())) {\n          checkColumn(childColumn);\n        }\n      }\n    } else { // A missing column which is either primitive or complex\n      if (column.required()) {\n        // Column is missing in data but the required data is non-nullable. This file is invalid.\n        throw new IOException(\n            \"Required column is missing in data file. Col: \" + Arrays.toString(path));\n      }\n    }\n  }\n\n  /**\n   * Checks whether the given 'path' exists in 'parquetType'. The difference between this and {@link\n   * MessageType#containsPath(String[])} is that the latter only support paths to leaf nodes, while\n   * this support paths both to leaf and non-leaf nodes.\n   */\n  private boolean containsPath(Type parquetType, String[] path) {\n    return containsPath(parquetType, path, 0);\n  }\n\n  private boolean containsPath(Type parquetType, String[] path, int depth) {\n    if (path.length == depth) return true;\n    if (parquetType instanceof GroupType) {\n      String fieldName = path[depth];\n      GroupType parquetGroupType = (GroupType) parquetType;\n      if (parquetGroupType.containsField(fieldName)) {\n        return containsPath(parquetGroupType.getType(fieldName), path, depth + 1);\n      }\n    }\n    return false;\n  }\n\n  public void setSparkSchema(StructType schema) {\n    this.sparkSchema = schema;\n  }\n\n  public AbstractColumnReader[] getColumnReaders() {\n    return columnReaders;\n  }\n\n  @Override\n  public void initialize(InputSplit inputSplit, TaskAttemptContext taskAttemptContext)\n      throws IOException, InterruptedException {\n    // Do nothing. 
The initialization work is done in 'init' already.\n  }\n\n  @Override\n  public boolean nextKeyValue() throws IOException {\n    return nextBatch();\n  }\n\n  @Override\n  public Void getCurrentKey() {\n    return null;\n  }\n\n  @Override\n  public ColumnarBatch getCurrentValue() {\n    return currentBatch();\n  }\n\n  @Override\n  public float getProgress() {\n    return 0;\n  }\n\n  /**\n   * Returns the current columnar batch being read.\n   *\n   * <p>Note that this must be called AFTER {@link NativeBatchReader#nextBatch()}.\n   */\n  public ColumnarBatch currentBatch() {\n    return currentBatch;\n  }\n\n  /**\n   * Loads the next batch of rows. This is called by Spark _and_ Iceberg.\n   *\n   * @return true if a batch was loaded, or false if there are no more rows to read.\n   */\n  public boolean nextBatch() throws IOException {\n    Preconditions.checkState(isInitialized, \"init() should be called first!\");\n\n    //    if (rowsRead >= totalRowCount) return false;\n\n    if (totalRowCount == 0) return false;\n\n    int batchSize;\n\n    try {\n      batchSize = loadNextBatch();\n    } catch (RuntimeException e) {\n      // Spark will check for certain exceptions, e.g. `SchemaColumnConvertNotSupportedException`.\n      throw e;\n    } catch (Throwable e) {\n      throw new IOException(e);\n    }\n\n    if (batchSize == 0) return false;\n\n    long totalDecodeTime = 0, totalLoadTime = 0;\n    for (int i = 0; i < columnReaders.length; i++) {\n      AbstractColumnReader reader = columnReaders[i];\n      long startNs = System.nanoTime();\n      // TODO: read from native reader\n      reader.readBatch(batchSize);\n      //      totalDecodeTime += System.nanoTime() - startNs;\n      //      startNs = System.nanoTime();\n      vectors[i] = reader.currentBatch();\n      totalLoadTime += System.nanoTime() - startNs;\n    }\n\n    // TODO: (ARROW NATIVE) Add Metrics\n    //    SQLMetric decodeMetric = metrics.get(\"ParquetNativeDecodeTime\");\n    //    if (decodeMetric != null) {\n    //      decodeMetric.add(totalDecodeTime);\n    //    }\n    SQLMetric loadMetric = metrics.get(\"ParquetNativeLoadTime\");\n    if (loadMetric != null) {\n      loadMetric.add(totalLoadTime);\n    }\n\n    currentBatch.setNumRows(batchSize);\n    return true;\n  }\n\n  @Override\n  public void close() throws IOException {\n    if (columnReaders != null) {\n      for (AbstractColumnReader reader : columnReaders) {\n        if (reader != null) {\n          reader.close();\n        }\n      }\n    }\n    if (importer != null) {\n      importer.close();\n      importer = null;\n    }\n    nativeUtil.close();\n    if (this.handle > 0) {\n      Native.closeRecordBatchReader(this.handle);\n      this.handle = 0;\n    }\n  }\n\n  @SuppressWarnings(\"deprecation\")\n  private int loadNextBatch() throws Throwable {\n\n    for (ParquetColumn childColumn : asJava(parquetColumn.children())) {\n      checkParquetType(childColumn);\n    }\n\n    int batchSize = Native.readNextRecordBatch(this.handle);\n    if (batchSize == 0) {\n      return batchSize;\n    }\n    if (importer != null) importer.close();\n    importer = new CometSchemaImporter(ALLOCATOR);\n\n    List<Type> fields = requestedSchema.getFields();\n    StructField[] sparkFields = sparkSchema.fields();\n\n    boolean caseSensitive =\n        conf.getBoolean(\n            SQLConf.CASE_SENSITIVE().key(),\n            (boolean) SQLConf.CASE_SENSITIVE().defaultValue().get());\n\n    for (int i = 0; i < fields.size(); i++) {\n      if (!missingColumns[i]) {\n        if (columnReaders[i] 
!= null) columnReaders[i].close();\n        // TODO: (ARROW NATIVE) handle tz, datetime & int96 rebase\n        Type field = fields.get(i);\n\n        // Find the corresponding spark field by matching field names using the prebuilt map\n        String fieldName =\n            caseSensitive ? field.getName() : field.getName().toLowerCase(Locale.ROOT);\n        Integer sparkSchemaIndex = sparkFieldIndexMap.get(fieldName);\n\n        if (sparkSchemaIndex == null) {\n          throw new IOException(\n              \"Could not find matching Spark field for Parquet field: \" + field.getName());\n        }\n\n        DataType dataType = sparkFields[sparkSchemaIndex].dataType();\n        NativeColumnReader reader =\n            new NativeColumnReader(\n                this.handle,\n                sparkSchemaIndex,\n                dataType,\n                field,\n                null,\n                importer,\n                nativeUtil,\n                capacity,\n                useDecimal128,\n                useLegacyDateTimestamp);\n        columnReaders[i] = reader;\n      }\n    }\n    return batchSize;\n  }\n\n  // Signature of externalAccums changed from returning a Buffer to returning a Seq. If comet is\n  // expecting a Buffer but the Spark version returns a Seq or vice versa, we get a\n  // method not found exception.\n  @SuppressWarnings(\"unchecked\")\n  private Option<AccumulatorV2<?, ?>> getTaskAccumulator(TaskMetrics taskMetrics) {\n    Method externalAccumsMethod;\n    try {\n      externalAccumsMethod = TaskMetrics.class.getDeclaredMethod(\"externalAccums\");\n      externalAccumsMethod.setAccessible(true);\n      String returnType = externalAccumsMethod.getReturnType().getName();\n      if (returnType.equals(\"scala.collection.mutable.Buffer\")) {\n        return ((Buffer<AccumulatorV2<?, ?>>) externalAccumsMethod.invoke(taskMetrics))\n            .lastOption();\n      } else if (returnType.equals(\"scala.collection.Seq\")) {\n        return ((Seq<AccumulatorV2<?, ?>>) externalAccumsMethod.invoke(taskMetrics)).lastOption();\n      } else {\n        return Option.apply(null); // None\n      }\n    } catch (NoSuchMethodException | InvocationTargetException | IllegalAccessException e) {\n      return Option.apply(null); // None\n    }\n  }\n\n  private byte[] serializeArrowSchema(Schema schema) throws IOException {\n    ByteArrayOutputStream out = new ByteArrayOutputStream();\n    WriteChannel writeChannel = new WriteChannel(Channels.newChannel(out));\n    MessageSerializer.serialize(writeChannel, schema);\n    return out.toByteArray();\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/NativeColumnReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.arrow.c.ArrowArray;\nimport org.apache.arrow.c.ArrowSchema;\nimport org.apache.arrow.memory.BufferAllocator;\nimport org.apache.arrow.memory.RootAllocator;\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.schema.Type;\nimport org.apache.spark.sql.types.DataType;\n\nimport org.apache.comet.CometSchemaImporter;\nimport org.apache.comet.vector.*;\n\nimport static scala.jdk.javaapi.CollectionConverters.*;\n\n// TODO: extend ColumnReader instead of AbstractColumnReader to reduce code duplication\npublic class NativeColumnReader extends AbstractColumnReader {\n  protected static final Logger LOG = LoggerFactory.getLogger(NativeColumnReader.class);\n  protected final BufferAllocator ALLOCATOR = new RootAllocator();\n\n  /**\n   * The current Comet vector holding all the values read by this column reader. Owned by this\n   * reader and MUST be closed after use.\n   */\n  private CometDecodedVector currentVector;\n\n  /** Dictionary values for this column. Only set if the column is using dictionary encoding. */\n  protected CometDictionary dictionary;\n\n  /**\n   * The number of values in the current batch, used when we are skipping importing of Arrow\n   * vectors, in which case we'll simply update the null count of the existing vectors.\n   */\n  int currentNumValues;\n\n  /**\n   * Whether the last loaded vector contains any null value. This is used to determine if we can\n   * skip vector reloading. 
If the flag is false, the Arrow C API skips importing the validity\n   * buffer, and therefore we cannot skip vector reloading.\n   */\n  boolean hadNull;\n\n  private final CometSchemaImporter importer;\n  private final NativeUtil nativeUtil;\n\n  private ArrowArray array = null;\n  private ArrowSchema schema = null;\n\n  private long nativeBatchHandle = 0xDEADBEEFL;\n  private final int columnNum;\n\n  NativeColumnReader(\n      long nativeBatchHandle,\n      int columnNum,\n      DataType type,\n      Type fieldType,\n      ColumnDescriptor descriptor,\n      CometSchemaImporter importer,\n      NativeUtil nativeUtil,\n      int batchSize,\n      boolean useDecimal128,\n      boolean useLegacyDateTimestamp) {\n    super(type, fieldType, descriptor, useDecimal128, useLegacyDateTimestamp);\n    assert batchSize > 0 : \"Batch size must be positive, found \" + batchSize;\n    this.batchSize = batchSize;\n    this.nativeUtil = nativeUtil;\n    this.importer = importer;\n    this.nativeBatchHandle = nativeBatchHandle;\n    this.columnNum = columnNum;\n    initNative();\n  }\n\n  @Override\n  // Override in order to avoid creation of JVM-side column readers\n  protected void initNative() {\n    LOG.debug(\n        \"Native column reader {} is initialized\", String.join(\".\", this.type.catalogString()));\n    nativeHandle = 0;\n  }\n\n  @Override\n  public void readBatch(int total) {\n    LOG.debug(\"Reading column batch of size = {}\", total);\n\n    this.currentNumValues = total;\n  }\n\n  /** Returns the {@link CometVector} read by this reader. */\n  @Override\n  public CometVector currentBatch() {\n    return loadVector();\n  }\n\n  @Override\n  public void close() {\n    if (currentVector != null) {\n      currentVector.close();\n      currentVector = null;\n    }\n    super.close();\n  }\n\n  /** Returns a decoded {@link CometDecodedVector Comet vector}. */\n  public CometDecodedVector loadVector() {\n\n    LOG.debug(\"Loading vector for next batch\");\n\n    // Close the previous vector first to release the struct memory allocated to import the Arrow\n    // array & schema from the native side through the C data interface\n    if (currentVector != null) {\n      currentVector.close();\n    }\n\n    // TODO: ARROW NATIVE : Handle Uuid?\n\n    array = ArrowArray.allocateNew(ALLOCATOR);\n    schema = ArrowSchema.allocateNew(ALLOCATOR);\n\n    long arrayAddr = array.memoryAddress();\n    long schemaAddr = schema.memoryAddress();\n\n    Native.currentColumnBatch(nativeBatchHandle, columnNum, arrayAddr, schemaAddr);\n\n    ArrowArray[] arrays = {array};\n    ArrowSchema[] schemas = {schema};\n\n    CometDecodedVector cometVector =\n        (CometDecodedVector) asJava(nativeUtil.importVector(arrays, schemas)).get(0);\n\n    // Update whether the current vector contains any null values. This is used in the following\n    // batch(es) to determine whether we can skip loading the native vector.\n    hadNull = cometVector.hasNull();\n\n    currentVector = cometVector;\n    return currentVector;\n  }\n}\n"
  },
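loadVector() above follows the Arrow C Data Interface handshake: allocate two empty structs on the JVM side, hand their raw addresses to native code, then import the exported vector. A minimal sketch with the Comet-specific calls stubbed out (Native.currentColumnBatch and NativeUtil.importVector are the real entry points; the handle and column number are placeholders):

```java
import org.apache.arrow.c.ArrowArray;
import org.apache.arrow.c.ArrowSchema;
import org.apache.arrow.memory.BufferAllocator;
import org.apache.arrow.memory.RootAllocator;

public class CDataHandshake {
  public static void main(String[] args) {
    try (BufferAllocator allocator = new RootAllocator();
        ArrowArray array = ArrowArray.allocateNew(allocator);
        ArrowSchema schema = ArrowSchema.allocateNew(allocator)) {
      long arrayAddr = array.memoryAddress();
      long schemaAddr = schema.memoryAddress();
      // The native side would export a vector into these addresses, e.g.:
      // Native.currentColumnBatch(batchHandle, columnNum, arrayAddr, schemaAddr);
      // ...and the JVM side would then import it:
      // nativeUtil.importVector(new ArrowArray[] {array}, new ArrowSchema[] {schema});
      System.out.println("C structs at " + arrayAddr + " / " + schemaAddr);
    }
  }
}
```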
  {
    "path": "common/src/main/java/org/apache/comet/parquet/ParquetColumnSpec.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.util.Map;\n\nimport org.apache.comet.IcebergApi;\n\n/**\n * Parquet ColumnSpec encapsulates the information withing a Parquet ColumnDescriptor. Utility\n * methods can convert from and to a ColumnDescriptor The only purpose of this class is to allow\n * passing of Column descriptors between Comet and Iceberg. This is required because Iceberg shades\n * Parquet, changing the package of Parquet classes and making then incompatible with Comet.\n */\n@IcebergApi\npublic class ParquetColumnSpec {\n\n  private final int fieldId;\n  private final String[] path;\n  private final String physicalType;\n  private final int typeLength;\n  private final boolean isRepeated;\n  private final int maxDefinitionLevel;\n  private final int maxRepetitionLevel;\n\n  // Logical type info\n  private String logicalTypeName;\n  private Map<String, String> logicalTypeParams;\n\n  @IcebergApi\n  public ParquetColumnSpec(\n      int fieldId,\n      String[] path,\n      String physicalType,\n      int typeLength,\n      boolean isRepeated,\n      int maxDefinitionLevel,\n      int maxRepetitionLevel,\n      String logicalTypeName,\n      Map<String, String> logicalTypeParams) {\n    this.fieldId = fieldId;\n    this.path = path;\n    this.physicalType = physicalType;\n    this.typeLength = typeLength;\n    this.isRepeated = isRepeated;\n    this.maxDefinitionLevel = maxDefinitionLevel;\n    this.maxRepetitionLevel = maxRepetitionLevel;\n    this.logicalTypeName = logicalTypeName;\n    this.logicalTypeParams = logicalTypeParams;\n  }\n\n  @IcebergApi\n  public int getFieldId() {\n    return fieldId;\n  }\n\n  @IcebergApi\n  public String[] getPath() {\n    return path;\n  }\n\n  @IcebergApi\n  public String getPhysicalType() {\n    return physicalType;\n  }\n\n  @IcebergApi\n  public int getTypeLength() {\n    return typeLength;\n  }\n\n  public boolean isRepeated() {\n    return isRepeated;\n  }\n\n  @IcebergApi\n  public int getMaxRepetitionLevel() {\n    return maxRepetitionLevel;\n  }\n\n  @IcebergApi\n  public int getMaxDefinitionLevel() {\n    return maxDefinitionLevel;\n  }\n\n  @IcebergApi\n  public String getLogicalTypeName() {\n    return logicalTypeName;\n  }\n\n  @IcebergApi\n  public Map<String, String> getLogicalTypeParams() {\n    return logicalTypeParams;\n  }\n}\n"
  },
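To make the shape of this carrier class concrete, here is a hypothetical spec for a nullable decimal(10, 2) column stored as a 16-byte FIXED_LEN_BYTE_ARRAY; the field name and values are illustrative only:

```java
import java.util.HashMap;
import java.util.Map;

Map<String, String> params = new HashMap<>();
params.put("precision", "10");
params.put("scale", "2");

ParquetColumnSpec spec =
    new ParquetColumnSpec(
        1,                              // fieldId
        new String[] {"price"},         // path
        "FIXED_LEN_BYTE_ARRAY",         // physicalType (a PrimitiveTypeName name)
        16,                             // typeLength, required for FIXED_LEN_BYTE_ARRAY
        false,                          // isRepeated
        1,                              // maxDefinitionLevel (nullable column)
        0,                              // maxRepetitionLevel
        "DecimalLogicalTypeAnnotation", // logicalTypeName
        params);                        // logicalTypeParams
```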
  {
    "path": "common/src/main/java/org/apache/comet/parquet/ParquetMetadataSerializer.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.ByteArrayInputStream;\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\n\nimport org.apache.parquet.format.FileMetaData;\nimport org.apache.parquet.format.Util;\nimport org.apache.parquet.format.converter.ParquetMetadataConverter;\nimport org.apache.parquet.hadoop.metadata.ParquetMetadata;\n\n/**\n * Utility class for serializing and deserializing ParquetMetadata instances to/from byte arrays.\n * This uses the Parquet format's FileMetaData structure and the underlying Thrift compact protocol\n * for serialization.\n */\npublic class ParquetMetadataSerializer {\n\n  private final ParquetMetadataConverter converter;\n\n  public ParquetMetadataSerializer() {\n    this.converter = new ParquetMetadataConverter();\n  }\n\n  public ParquetMetadataSerializer(ParquetMetadataConverter converter) {\n    this.converter = converter;\n  }\n\n  /**\n   * Serializes a ParquetMetadata instance to a byte array.\n   *\n   * @param metadata the ParquetMetadata to serialize\n   * @return the serialized byte array\n   * @throws IOException if an error occurs during serialization\n   */\n  public byte[] serialize(ParquetMetadata metadata) throws IOException {\n    FileMetaData fileMetaData = converter.toParquetMetadata(1, metadata);\n    ByteArrayOutputStream outputStream = new ByteArrayOutputStream();\n    Util.writeFileMetaData(fileMetaData, outputStream);\n    return outputStream.toByteArray();\n  }\n\n  /**\n   * Deserializes a byte array back into a ParquetMetadata instance.\n   *\n   * @param bytes the serialized byte array\n   * @return the deserialized ParquetMetadata\n   * @throws IOException if an error occurs during deserialization\n   */\n  public ParquetMetadata deserialize(byte[] bytes) throws IOException {\n    ByteArrayInputStream inputStream = new ByteArrayInputStream(bytes);\n    FileMetaData fileMetaData = Util.readFileMetaData(inputStream);\n    return converter.fromParquetMetadata(fileMetaData);\n  }\n}\n"
  },
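A minimal round-trip sketch of the API above, assuming `footer` is a ParquetMetadata obtained elsewhere (e.g., from a file footer read):

```java
import org.apache.parquet.hadoop.metadata.ParquetMetadata;

ParquetMetadataSerializer serializer = new ParquetMetadataSerializer();
byte[] bytes = serializer.serialize(footer);              // Thrift compact encoding
ParquetMetadata restored = serializer.deserialize(bytes); // mirrors footer's blocks
```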
  {
    "path": "common/src/main/java/org/apache/comet/parquet/ReadOptions.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.spark.SparkEnv;\nimport org.apache.spark.launcher.SparkLauncher;\n\nimport org.apache.comet.CometConf;\nimport org.apache.comet.IcebergApi;\n\n/**\n * Comet specific Parquet related read options.\n *\n * <p>TODO: merge this with {@link org.apache.parquet.HadoopReadOptions} once PARQUET-2203 is done.\n */\n@IcebergApi\npublic class ReadOptions {\n  private static final Logger LOG = LoggerFactory.getLogger(ReadOptions.class);\n\n  // Max number of concurrent tasks we expect. Used to autoconfigure S3 client connections\n  public static final int S3A_MAX_EXPECTED_PARALLELISM = 32;\n  // defined in hadoop-aws - org.apache.hadoop.fs.s3a.Constants.MAXIMUM_CONNECTIONS\n  public static final String S3A_MAXIMUM_CONNECTIONS = \"fs.s3a.connection.maximum\";\n  // default max connections in S3A - org.apache.hadoop.fs.s3a.Constants.DEFAULT_MAXIMUM_CONNECTIONS\n  public static final int S3A_DEFAULT_MAX_HTTP_CONNECTIONS = 96;\n\n  public static final String S3A_READAHEAD_RANGE = \"fs.s3a.readahead.range\";\n  // Default read ahead range in Hadoop is 64K; we increase it to 1 MB\n  public static final long COMET_DEFAULT_READAHEAD_RANGE = 1 * 1024 * 1024; // 1 MB\n\n  private final boolean parallelIOEnabled;\n  private final int parallelIOThreadPoolSize;\n  private final boolean ioMergeRanges;\n  private final int ioMergeRangesDelta;\n  private final boolean adjustReadRangeSkew;\n\n  ReadOptions(\n      boolean parallelIOEnabled,\n      int parallelIOThreadPoolSize,\n      boolean ioMergeRanges,\n      int ioMergeRangesDelta,\n      boolean adjustReadRangeSkew) {\n    this.parallelIOEnabled = parallelIOEnabled;\n    this.parallelIOThreadPoolSize = parallelIOThreadPoolSize;\n    this.ioMergeRanges = ioMergeRanges;\n    this.ioMergeRangesDelta = ioMergeRangesDelta;\n    this.adjustReadRangeSkew = adjustReadRangeSkew;\n  }\n\n  public boolean isParallelIOEnabled() {\n    return this.parallelIOEnabled;\n  }\n\n  public int parallelIOThreadPoolSize() {\n    return this.parallelIOThreadPoolSize;\n  }\n\n  public boolean isIOMergeRangesEnabled() {\n    return ioMergeRanges;\n  }\n\n  public int getIOMergeRangesDelta() {\n    return ioMergeRangesDelta;\n  }\n\n  public boolean adjustReadRangesSkew() {\n    return adjustReadRangeSkew;\n  }\n\n  @IcebergApi\n  public static Builder builder(Configuration conf) {\n    return new Builder(conf);\n  }\n\n  @IcebergApi\n  public static class Builder {\n    private final Configuration conf;\n\n    private boolean parallelIOEnabled;\n    private int parallelIOThreadPoolSize;\n    
private boolean ioMergeRanges;\n    private int ioMergeRangesDelta;\n    private boolean adjustReadRangeSkew;\n\n    /**\n     * Whether to enable Parquet parallel IO when reading row groups. If true, Parquet reader will\n     * use multiple threads to read multiple chunks of data from the current row group in parallel.\n     */\n    public Builder enableParallelIO(boolean b) {\n      this.parallelIOEnabled = b;\n      return this;\n    }\n\n    /**\n     * Specify the number of threads to be used in parallel IO.\n     *\n     * <p><b>Note</b>: this will only be effective if parallel IO is enabled (e.g., via {@link\n     * #enableParallelIO(boolean)}).\n     */\n    public Builder withParallelIOThreadPoolSize(int numThreads) {\n      this.parallelIOThreadPoolSize = numThreads;\n      return this;\n    }\n\n    public Builder enableIOMergeRanges(boolean enableIOMergeRanges) {\n      this.ioMergeRanges = enableIOMergeRanges;\n      return this;\n    }\n\n    public Builder withIOMergeRangesDelta(int ioMergeRangesDelta) {\n      this.ioMergeRangesDelta = ioMergeRangesDelta;\n      return this;\n    }\n\n    public Builder adjustReadRangeSkew(boolean adjustReadRangeSkew) {\n      this.adjustReadRangeSkew = adjustReadRangeSkew;\n      return this;\n    }\n\n    @IcebergApi\n    public ReadOptions build() {\n      return new ReadOptions(\n          parallelIOEnabled,\n          parallelIOThreadPoolSize,\n          ioMergeRanges,\n          ioMergeRangesDelta,\n          adjustReadRangeSkew);\n    }\n\n    @IcebergApi\n    public Builder(Configuration conf) {\n      this.conf = conf;\n      this.parallelIOEnabled =\n          conf.getBoolean(\n              CometConf.COMET_PARQUET_PARALLEL_IO_ENABLED().key(),\n              (Boolean) CometConf.COMET_PARQUET_PARALLEL_IO_ENABLED().defaultValue().get());\n      this.parallelIOThreadPoolSize =\n          conf.getInt(\n              CometConf.COMET_PARQUET_PARALLEL_IO_THREADS().key(),\n              (Integer) CometConf.COMET_PARQUET_PARALLEL_IO_THREADS().defaultValue().get());\n      this.ioMergeRanges =\n          conf.getBoolean(\n              CometConf.COMET_IO_MERGE_RANGES().key(),\n              (boolean) CometConf.COMET_IO_MERGE_RANGES().defaultValue().get());\n      this.ioMergeRangesDelta =\n          conf.getInt(\n              CometConf.COMET_IO_MERGE_RANGES_DELTA().key(),\n              (Integer) CometConf.COMET_IO_MERGE_RANGES_DELTA().defaultValue().get());\n      this.adjustReadRangeSkew =\n          conf.getBoolean(\n              CometConf.COMET_IO_ADJUST_READRANGE_SKEW().key(),\n              (Boolean) CometConf.COMET_IO_ADJUST_READRANGE_SKEW().defaultValue().get());\n      // override some S3 defaults\n      setS3Config();\n    }\n\n    // For paths to S3, if the s3 connection pool max is less than twice the product of\n    // parallel reader threads * number of cores, then increase the connection pool max\n    private void setS3Config() {\n      int s3ConnectionsMax = S3A_DEFAULT_MAX_HTTP_CONNECTIONS;\n      SparkEnv env = SparkEnv.get();\n      // Use a default number of cores in case we are using the FileReader outside the context\n      // of Spark.\n      int numExecutorCores = S3A_MAX_EXPECTED_PARALLELISM;\n      if (env != null) {\n        numExecutorCores = env.conf().getInt(SparkLauncher.EXECUTOR_CORES, numExecutorCores);\n      }\n      int parallelReaderThreads = this.parallelIOEnabled ? 
this.parallelIOThreadPoolSize : 1;\n      s3ConnectionsMax = Math.max(numExecutorCores * parallelReaderThreads * 2, s3ConnectionsMax);\n\n      setS3ConfIfGreater(conf, S3A_MAXIMUM_CONNECTIONS, s3ConnectionsMax);\n      setS3ConfIfGreater(conf, S3A_READAHEAD_RANGE, COMET_DEFAULT_READAHEAD_RANGE);\n    }\n\n    // Update the conf iff the new value is greater than the existing value\n    private void setS3ConfIfGreater(Configuration conf, String key, int newVal) {\n      int maxVal = newVal;\n      String curr = conf.get(key);\n      if (curr != null && !curr.isEmpty()) {\n        maxVal = Math.max(Integer.parseInt(curr), newVal);\n      }\n      LOG.info(\"File reader auto configured '{}={}'\", key, maxVal);\n      conf.set(key, Integer.toString(maxVal));\n    }\n\n    // Update the conf iff the new value is greater than the existing value. This overload handles\n    // values that may carry a well-known byte-size suffix (K, M, G, T, P, E)\n    private void setS3ConfIfGreater(Configuration conf, String key, long newVal) {\n      long maxVal = conf.getLongBytes(key, newVal);\n      maxVal = Math.max(maxVal, newVal);\n      LOG.info(\"File reader auto configured '{}={}'\", key, maxVal);\n      conf.set(key, Long.toString(maxVal));\n    }\n  }\n}\n"
  },
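Usage is builder-style: `Builder(conf)` seeds every field from the CometConf entries found in the Hadoop configuration (and bumps the S3A settings as a side effect), after which explicit calls override the defaults. A minimal sketch; the specific values are illustrative:

```java
import org.apache.hadoop.conf.Configuration;

Configuration conf = new Configuration();
ReadOptions options =
    ReadOptions.builder(conf)            // seeds defaults from CometConf, tunes S3A
        .enableParallelIO(true)
        .withParallelIOThreadPoolSize(8)
        .enableIOMergeRanges(true)
        .withIOMergeRangesDelta(1 << 20) // illustrative merge-range delta
        .build();
```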
  {
    "path": "common/src/main/java/org/apache/comet/parquet/RowGroupFilter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport org.apache.parquet.filter2.compat.FilterCompat;\nimport org.apache.parquet.filter2.compat.FilterCompat.Filter;\nimport org.apache.parquet.filter2.compat.FilterCompat.NoOpFilter;\nimport org.apache.parquet.filter2.compat.FilterCompat.Visitor;\nimport org.apache.parquet.filter2.dictionarylevel.DictionaryFilter;\nimport org.apache.parquet.filter2.predicate.FilterPredicate;\nimport org.apache.parquet.filter2.predicate.SchemaCompatibilityValidator;\nimport org.apache.parquet.filter2.statisticslevel.StatisticsFilter;\nimport org.apache.parquet.hadoop.metadata.BlockMetaData;\nimport org.apache.parquet.schema.MessageType;\n\npublic class RowGroupFilter implements Visitor<List<BlockMetaData>> {\n  private final List<BlockMetaData> blocks;\n  private final MessageType schema;\n  private final List<FilterLevel> levels;\n  private final FileReader reader;\n\n  public enum FilterLevel {\n    STATISTICS,\n    DICTIONARY,\n    BLOOMFILTER\n  }\n\n  public static List<BlockMetaData> filterRowGroups(\n      List<FilterLevel> levels, Filter filter, List<BlockMetaData> blocks, FileReader reader) {\n    return filter.accept(new RowGroupFilter(levels, blocks, reader));\n  }\n\n  public static List<BlockMetaData> filterRowGroups(\n      List<FilterLevel> levels, Filter filter, List<BlockMetaData> blocks, MessageType schema) {\n    return filter.accept(new RowGroupFilter(levels, blocks, schema));\n  }\n\n  private RowGroupFilter(List<FilterLevel> levels, List<BlockMetaData> blocks, FileReader reader) {\n    this.levels = levels;\n    this.blocks = blocks;\n    this.reader = reader;\n    this.schema = reader.getFileMetaData().getSchema();\n  }\n\n  private RowGroupFilter(List<FilterLevel> levels, List<BlockMetaData> blocks, MessageType schema) {\n    this.levels = levels;\n    this.blocks = blocks;\n    this.reader = null;\n    this.schema = schema;\n  }\n\n  @Override\n  public List<BlockMetaData> visit(FilterCompat.FilterPredicateCompat filterPredicateCompat) {\n    FilterPredicate filterPredicate = filterPredicateCompat.getFilterPredicate();\n\n    // check that the schema of the filter matches the schema of the file\n    SchemaCompatibilityValidator.validate(filterPredicate, schema);\n\n    List<BlockMetaData> filteredBlocks = new ArrayList<>();\n\n    for (BlockMetaData block : blocks) {\n      boolean drop = false;\n\n      if (levels.contains(FilterLevel.STATISTICS)) {\n        drop = StatisticsFilter.canDrop(filterPredicate, block.getColumns());\n      }\n\n      if (!drop && levels.contains(FilterLevel.DICTIONARY)) {\n        drop =\n            DictionaryFilter.canDrop(\n    
            filterPredicate,\n                block.getColumns(),\n                new DictionaryPageReader(\n                    block,\n                    reader.getFileMetaData().getFileDecryptor(),\n                    reader.getInputStream(),\n                    reader.getOptions()));\n      }\n\n      if (!drop && levels.contains(FilterLevel.BLOOMFILTER)) {\n        drop =\n            filterPredicate.accept(\n                new BloomFilterReader(\n                    block, reader.getFileMetaData().getFileDecryptor(), reader.getInputStream()));\n      }\n\n      if (!drop) {\n        filteredBlocks.add(block);\n      }\n    }\n\n    return filteredBlocks;\n  }\n\n  @Override\n  public List<BlockMetaData> visit(\n      FilterCompat.UnboundRecordFilterCompat unboundRecordFilterCompat) {\n    return blocks;\n  }\n\n  @Override\n  public List<BlockMetaData> visit(NoOpFilter noOpFilter) {\n    return blocks;\n  }\n}\n"
  },
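A minimal sketch of statistics-only row-group pruning, assuming `blocks` and `schema` come from a footer read elsewhere. Note the MessageType overload leaves `reader` null, so only the STATISTICS level is safe with it; DICTIONARY and BLOOMFILTER need the FileReader overload:

```java
import java.util.Collections;
import java.util.List;

import org.apache.parquet.filter2.compat.FilterCompat;
import org.apache.parquet.filter2.predicate.FilterApi;
import org.apache.parquet.filter2.predicate.FilterPredicate;
import org.apache.parquet.hadoop.metadata.BlockMetaData;

FilterPredicate predicate = FilterApi.gt(FilterApi.intColumn("id"), 100);
List<BlockMetaData> kept =
    RowGroupFilter.filterRowGroups(
        Collections.singletonList(RowGroupFilter.FilterLevel.STATISTICS),
        FilterCompat.get(predicate),
        blocks,   // row groups from the footer
        schema);  // the file's MessageType
```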
  {
    "path": "common/src/main/java/org/apache/comet/parquet/RowGroupReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.util.HashMap;\nimport java.util.Map;\nimport java.util.Optional;\nimport java.util.PrimitiveIterator;\n\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.column.page.PageReadStore;\nimport org.apache.parquet.column.page.PageReader;\nimport org.apache.parquet.internal.filter2.columnindex.RowRanges;\n\nimport org.apache.comet.IcebergApi;\n\n@IcebergApi\npublic class RowGroupReader implements PageReadStore {\n  private final Map<String, PageReader> readers = new HashMap<>();\n  private final long rowCount;\n  private final RowRanges rowRanges;\n  private final long rowIndexOffset;\n\n  public RowGroupReader(long rowCount, long rowIndexOffset) {\n    this.rowCount = rowCount;\n    this.rowRanges = null;\n    this.rowIndexOffset = rowIndexOffset;\n  }\n\n  RowGroupReader(RowRanges rowRanges) {\n    this.rowRanges = rowRanges;\n    this.rowCount = rowRanges.rowCount();\n    this.rowIndexOffset = -1;\n  }\n\n  @IcebergApi\n  @Override\n  public long getRowCount() {\n    return rowCount;\n  }\n\n  @Override\n  public PageReader getPageReader(ColumnDescriptor path) {\n    return getPageReader(path.getPath());\n  }\n\n  public PageReader getPageReader(String[] path) {\n    final PageReader pageReader = readers.get(String.join(\".\", path));\n    if (pageReader == null) {\n      throw new IllegalArgumentException(\n          path + \" is not found: \" + readers.keySet() + \" \" + rowCount);\n    }\n    return pageReader;\n  }\n\n  @Override\n  public Optional<PrimitiveIterator.OfLong> getRowIndexes() {\n    return rowRanges == null ? Optional.empty() : Optional.of(rowRanges.iterator());\n  }\n\n  @Override\n  public Optional<Long> getRowIndexOffset() {\n    return this.rowIndexOffset < 0L ? Optional.empty() : Optional.of(this.rowIndexOffset);\n  }\n\n  void addColumn(ColumnDescriptor path, ColumnPageReader reader) {\n    if (readers.put(String.join(\".\", path.getPath()), reader) != null) {\n      throw new IllegalStateException(path + \" was already added\");\n    }\n  }\n}\n"
  },
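One subtlety worth pinning down: the two constructors encode "no offset" differently. The public constructor reports the offset it was given, while the RowRanges-based one stores -1, which getRowIndexOffset() maps to an empty Optional. A small sketch:

```java
RowGroupReader rowGroup = new RowGroupReader(1_000L, 500L);
rowGroup.getRowCount();       // 1000
rowGroup.getRowIndexOffset(); // Optional.of(500L)
rowGroup.getRowIndexes();     // Optional.empty(): no RowRanges were supplied
```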
  {
    "path": "common/src/main/java/org/apache/comet/parquet/TypeUtil.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.util.Arrays;\n\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.schema.*;\nimport org.apache.parquet.schema.LogicalTypeAnnotation.*;\nimport org.apache.spark.package$;\nimport org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException;\nimport org.apache.spark.sql.internal.SQLConf;\nimport org.apache.spark.sql.types.*;\n\nimport org.apache.comet.CometConf;\nimport org.apache.comet.IcebergApi;\n\nimport static org.apache.comet.parquet.Utils.descriptorToParquetColumnSpec;\n\npublic class TypeUtil {\n\n  /**\n   * Converts the input Spark 'field' into a Parquet column descriptor.\n   *\n   * @see <a href=\"https://github.com/apache/datafusion-comet/issues/2079\">Comet Issue #2079</a>\n   */\n  @IcebergApi\n  public static ColumnDescriptor convertToParquet(StructField field) {\n    Type.Repetition repetition;\n    int maxDefinitionLevel;\n    if (field.nullable()) {\n      repetition = Type.Repetition.OPTIONAL;\n      maxDefinitionLevel = 1;\n    } else {\n      repetition = Type.Repetition.REQUIRED;\n      maxDefinitionLevel = 0;\n    }\n    String[] path = new String[] {field.name()};\n\n    DataType type = field.dataType();\n\n    Types.PrimitiveBuilder<PrimitiveType> builder = null;\n    // Only partition column can be `NullType`. 
Here we piggy-back onto Parquet boolean type\n    // for constant vector of null values, we don't really care what Parquet type it is.\n    if (type == DataTypes.BooleanType || type == DataTypes.NullType) {\n      builder = Types.primitive(PrimitiveType.PrimitiveTypeName.BOOLEAN, repetition);\n    } else if (type == DataTypes.IntegerType || type instanceof YearMonthIntervalType) {\n      builder =\n          Types.primitive(PrimitiveType.PrimitiveTypeName.INT32, repetition)\n              .as(LogicalTypeAnnotation.intType(32, true));\n    } else if (type == DataTypes.DateType) {\n      builder =\n          Types.primitive(PrimitiveType.PrimitiveTypeName.INT32, repetition)\n              .as(LogicalTypeAnnotation.dateType());\n    } else if (type == DataTypes.ByteType) {\n      builder =\n          Types.primitive(PrimitiveType.PrimitiveTypeName.INT32, repetition)\n              .as(LogicalTypeAnnotation.intType(8, true));\n    } else if (type == DataTypes.ShortType) {\n      builder =\n          Types.primitive(PrimitiveType.PrimitiveTypeName.INT32, repetition)\n              .as(LogicalTypeAnnotation.intType(16, true));\n    } else if (type == DataTypes.LongType) {\n      builder = Types.primitive(PrimitiveType.PrimitiveTypeName.INT64, repetition);\n    } else if (type == DataTypes.BinaryType) {\n      builder = Types.primitive(PrimitiveType.PrimitiveTypeName.BINARY, repetition);\n    } else if (type == DataTypes.StringType\n        || (type.sameType(DataTypes.StringType) && isSpark40Plus())) {\n      builder =\n          Types.primitive(PrimitiveType.PrimitiveTypeName.BINARY, repetition)\n              .as(LogicalTypeAnnotation.stringType());\n    } else if (type == DataTypes.FloatType) {\n      builder = Types.primitive(PrimitiveType.PrimitiveTypeName.FLOAT, repetition);\n    } else if (type == DataTypes.DoubleType) {\n      builder = Types.primitive(PrimitiveType.PrimitiveTypeName.DOUBLE, repetition);\n    } else if (type == DataTypes.TimestampType) {\n      builder =\n          Types.primitive(PrimitiveType.PrimitiveTypeName.INT64, repetition)\n              .as(LogicalTypeAnnotation.timestampType(true, TimeUnit.MICROS));\n    } else if (type == TimestampNTZType$.MODULE$) {\n      builder =\n          Types.primitive(PrimitiveType.PrimitiveTypeName.INT64, repetition)\n              .as(LogicalTypeAnnotation.timestampType(false, TimeUnit.MICROS));\n    } else if (type instanceof DecimalType) {\n      DecimalType decimalType = (DecimalType) type;\n      builder =\n          Types.primitive(PrimitiveType.PrimitiveTypeName.FIXED_LEN_BYTE_ARRAY, repetition)\n              .length(16) // always store as Decimal128\n              .as(LogicalTypeAnnotation.decimalType(decimalType.scale(), decimalType.precision()));\n    }\n    if (builder == null) {\n      throw new UnsupportedOperationException(\"Unsupported input Spark type: \" + type);\n    }\n\n    return new ColumnDescriptor(path, builder.named(field.name()), 0, maxDefinitionLevel);\n  }\n\n  public static ParquetColumnSpec convertToParquetSpec(StructField field) {\n    return descriptorToParquetColumnSpec(convertToParquet(field));\n  }\n\n  /**\n   * Check whether the Parquet 'descriptor' and Spark read type 'sparkType' are compatible. 
If not, an\n   * exception is thrown.\n   *\n   * <p>This mostly follows the logic in Spark's\n   * ParquetVectorUpdaterFactory#getUpdater(ColumnDescriptor, DataType)\n   *\n   * @param descriptor descriptor for a Parquet primitive column\n   * @param sparkType Spark read type\n   */\n  public static void checkParquetType(ColumnDescriptor descriptor, DataType sparkType) {\n    PrimitiveType.PrimitiveTypeName typeName = descriptor.getPrimitiveType().getPrimitiveTypeName();\n    LogicalTypeAnnotation logicalTypeAnnotation =\n        descriptor.getPrimitiveType().getLogicalTypeAnnotation();\n    boolean allowTypePromotion = (boolean) CometConf.COMET_SCHEMA_EVOLUTION_ENABLED().get();\n\n    if (sparkType instanceof NullType) {\n      return;\n    }\n\n    switch (typeName) {\n      case BOOLEAN:\n        if (sparkType == DataTypes.BooleanType) return;\n        break;\n      case INT32:\n        if (sparkType == DataTypes.IntegerType || canReadAsIntDecimal(descriptor, sparkType)) {\n          return;\n        } else if (sparkType == DataTypes.LongType\n            && isUnsignedIntTypeMatched(logicalTypeAnnotation, 32)) {\n          // In `ParquetToSparkSchemaConverter`, we map parquet UINT32 to our LongType.\n          // Unsigned int32 values are stored as plain signed int32 in Parquet when the dictionary\n          // encoding falls back. We read them as long values.\n          return;\n        } else if (sparkType == DataTypes.LongType && allowTypePromotion) {\n          // In Comet we allow schema evolution from int to long, if\n          // `spark.comet.schemaEvolution.enabled` is enabled.\n          return;\n        } else if (sparkType == DataTypes.ByteType || sparkType == DataTypes.ShortType) {\n          return;\n        } else if (sparkType == DataTypes.DateType) {\n          // TODO: use dateTimeRebaseMode from Spark side\n          return;\n        } else if (sparkType instanceof YearMonthIntervalType) {\n          return;\n        } else if (sparkType == DataTypes.DoubleType && isSpark40Plus()) {\n          return;\n        } else if (sparkType == TimestampNTZType$.MODULE$\n            && isSpark40Plus()\n            && logicalTypeAnnotation instanceof DateLogicalTypeAnnotation) {\n          return;\n        }\n        break;\n      case INT64:\n        if (sparkType == DataTypes.LongType || canReadAsLongDecimal(descriptor, sparkType)) {\n          return;\n        } else if (isLongDecimal(sparkType)\n            && isUnsignedIntTypeMatched(logicalTypeAnnotation, 64)) {\n          // In `ParquetToSparkSchemaConverter`, we map parquet UINT64 to our Decimal(20, 0).\n          // Unsigned int64 values are stored as plain signed int64 in Parquet when the dictionary\n          // encoding falls back. 
We read them as decimal values.\n          return;\n        } else if (isTimestampTypeMatched(logicalTypeAnnotation, TimeUnit.MICROS)\n            && (sparkType == TimestampNTZType$.MODULE$ || sparkType == DataTypes.TimestampType)) {\n          validateTimestampType(logicalTypeAnnotation, sparkType);\n          // TODO: use dateTimeRebaseMode from Spark side\n          return;\n        } else if (isTimestampTypeMatched(logicalTypeAnnotation, TimeUnit.MILLIS)\n            && (sparkType == TimestampNTZType$.MODULE$ || sparkType == DataTypes.TimestampType)) {\n          validateTimestampType(logicalTypeAnnotation, sparkType);\n          return;\n        }\n        break;\n      case INT96:\n        if (sparkType == TimestampNTZType$.MODULE$) {\n          if (isSpark40Plus()) return; // Spark 4.0+ supports Timestamp NTZ with INT96\n          convertErrorForTimestampNTZ(typeName.name());\n        } else if (sparkType == DataTypes.TimestampType) {\n          return;\n        }\n        break;\n      case FLOAT:\n        if (sparkType == DataTypes.FloatType) return;\n        // In Comet we allow schema evolution from float to double, if\n        // `spark.comet.schemaEvolution.enabled` is enabled.\n        if (sparkType == DataTypes.DoubleType && allowTypePromotion) return;\n        break;\n      case DOUBLE:\n        if (sparkType == DataTypes.DoubleType) return;\n        break;\n      case BINARY:\n        if (sparkType == DataTypes.StringType\n            || sparkType == DataTypes.BinaryType\n            || canReadAsBinaryDecimal(descriptor, sparkType)) {\n          return;\n        }\n\n        if (sparkType.sameType(DataTypes.StringType) && isSpark40Plus()) {\n          LogicalTypeAnnotation lta = descriptor.getPrimitiveType().getLogicalTypeAnnotation();\n          if (lta instanceof LogicalTypeAnnotation.StringLogicalTypeAnnotation) {\n            return;\n          }\n        }\n        break;\n      case FIXED_LEN_BYTE_ARRAY:\n        if (canReadAsIntDecimal(descriptor, sparkType)\n            || canReadAsLongDecimal(descriptor, sparkType)\n            || canReadAsBinaryDecimal(descriptor, sparkType)\n            || sparkType == DataTypes.BinaryType\n            // for uuid, since iceberg maps uuid to StringType\n            || sparkType == DataTypes.StringType\n                && logicalTypeAnnotation\n                    instanceof LogicalTypeAnnotation.UUIDLogicalTypeAnnotation) {\n          return;\n        }\n        break;\n      default:\n        break;\n    }\n\n    throw new SchemaColumnConvertNotSupportedException(\n        Arrays.toString(descriptor.getPath()),\n        descriptor.getPrimitiveType().getPrimitiveTypeName().toString(),\n        sparkType.catalogString());\n  }\n\n  private static void validateTimestampType(\n      LogicalTypeAnnotation logicalTypeAnnotation, DataType sparkType) {\n    assert (logicalTypeAnnotation instanceof TimestampLogicalTypeAnnotation);\n    // Throw an exception if the Parquet type is TimestampLTZ and the Catalyst type is TimestampNTZ.\n    // This is to avoid mistakes in reading the timestamp values.\n    if (((TimestampLogicalTypeAnnotation) logicalTypeAnnotation).isAdjustedToUTC()\n        && sparkType == TimestampNTZType$.MODULE$\n        && !isSpark40Plus()) {\n      convertErrorForTimestampNTZ(\"int64 time(\" + logicalTypeAnnotation + \")\");\n    }\n  }\n\n  private static void convertErrorForTimestampNTZ(String parquetType) {\n    throw new RuntimeException(\n        \"Unable to create Parquet converter for data type \"\n            
+ TimestampNTZType$.MODULE$.json()\n            + \" whose Parquet type is \"\n            + parquetType);\n  }\n\n  private static boolean canReadAsIntDecimal(ColumnDescriptor descriptor, DataType dt) {\n    if (!DecimalType.is32BitDecimalType(dt) && !(isSpark40Plus() && dt instanceof DecimalType))\n      return false;\n    return isDecimalTypeMatched(descriptor, dt);\n  }\n\n  private static boolean canReadAsLongDecimal(ColumnDescriptor descriptor, DataType dt) {\n    if (!DecimalType.is64BitDecimalType(dt) && !(isSpark40Plus() && dt instanceof DecimalType))\n      return false;\n    return isDecimalTypeMatched(descriptor, dt);\n  }\n\n  private static boolean canReadAsBinaryDecimal(ColumnDescriptor descriptor, DataType dt) {\n    if (!DecimalType.isByteArrayDecimalType(dt)) return false;\n    return isDecimalTypeMatched(descriptor, dt);\n  }\n\n  private static boolean isLongDecimal(DataType dt) {\n    if (dt instanceof DecimalType) {\n      DecimalType d = (DecimalType) dt;\n      return d.precision() == 20 && d.scale() == 0;\n    }\n    return false;\n  }\n\n  private static boolean isDecimalTypeMatched(ColumnDescriptor descriptor, DataType dt) {\n    DecimalType d = (DecimalType) dt;\n    LogicalTypeAnnotation typeAnnotation = descriptor.getPrimitiveType().getLogicalTypeAnnotation();\n    if (typeAnnotation instanceof DecimalLogicalTypeAnnotation) {\n      DecimalLogicalTypeAnnotation decimalType = (DecimalLogicalTypeAnnotation) typeAnnotation;\n      // It's OK if the required decimal precision is larger than or equal to the physical decimal\n      // precision in the Parquet metadata, as long as the decimal scale is the same.\n      return (decimalType.getPrecision() <= d.precision() && decimalType.getScale() == d.scale())\n          || (isSpark40Plus()\n              && (!SQLConf.get().parquetVectorizedReaderEnabled()\n                  || (decimalType.getScale() <= d.scale()\n                      && decimalType.getPrecision() - decimalType.getScale()\n                          <= d.precision() - d.scale())));\n    } else if (isSpark40Plus()) {\n      boolean isNullTypeAnnotation = typeAnnotation == null;\n      boolean isIntTypeAnnotation = typeAnnotation instanceof IntLogicalTypeAnnotation;\n      if (!SQLConf.get().parquetVectorizedReaderEnabled()) {\n        return isNullTypeAnnotation || isIntTypeAnnotation;\n      } else if (isNullTypeAnnotation\n          || (isIntTypeAnnotation && ((IntLogicalTypeAnnotation) typeAnnotation).isSigned())) {\n        PrimitiveType.PrimitiveTypeName typeName =\n            descriptor.getPrimitiveType().getPrimitiveTypeName();\n        int integerPrecision = d.precision() - d.scale();\n        switch (typeName) {\n          case INT32:\n            return integerPrecision >= DecimalType$.MODULE$.IntDecimal().precision();\n          case INT64:\n            return integerPrecision >= DecimalType$.MODULE$.LongDecimal().precision();\n        }\n      }\n    }\n    return false;\n  }\n\n  private static boolean isTimestampTypeMatched(\n      LogicalTypeAnnotation logicalTypeAnnotation, LogicalTypeAnnotation.TimeUnit unit) {\n    return logicalTypeAnnotation instanceof TimestampLogicalTypeAnnotation\n        && ((TimestampLogicalTypeAnnotation) logicalTypeAnnotation).getUnit() == unit;\n  }\n\n  private static boolean isUnsignedIntTypeMatched(\n      LogicalTypeAnnotation logicalTypeAnnotation, int bitWidth) {\n    return logicalTypeAnnotation instanceof IntLogicalTypeAnnotation\n        && !((IntLogicalTypeAnnotation) 
logicalTypeAnnotation).isSigned()\n        && ((IntLogicalTypeAnnotation) logicalTypeAnnotation).getBitWidth() == bitWidth;\n  }\n\n  static boolean isSpark40Plus() {\n    return package$.MODULE$.SPARK_VERSION().compareTo(\"4.0\") >= 0;\n  }\n}\n"
  },
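A minimal sketch of the Spark-to-Parquet mapping implemented by convertToParquet above: a nullable timestamp field becomes an OPTIONAL INT64 annotated as a microsecond, UTC-adjusted timestamp, with max definition level 1:

```java
import org.apache.parquet.column.ColumnDescriptor;
import org.apache.spark.sql.types.DataTypes;
import org.apache.spark.sql.types.Metadata;
import org.apache.spark.sql.types.StructField;

StructField field =
    new StructField("ts", DataTypes.TimestampType, true, Metadata.empty());
ColumnDescriptor descriptor = TypeUtil.convertToParquet(field);
// descriptor.getPrimitiveType()      -> int64 annotated TIMESTAMP(MICROS, adjustedToUTC=true)
// descriptor.getMaxDefinitionLevel() -> 1 (nullable maps to OPTIONAL)
// descriptor.getMaxRepetitionLevel() -> 0
```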
  {
    "path": "common/src/main/java/org/apache/comet/parquet/Utils.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.schema.LogicalTypeAnnotation;\nimport org.apache.parquet.schema.PrimitiveType;\nimport org.apache.parquet.schema.Type;\nimport org.apache.parquet.schema.Types;\nimport org.apache.spark.sql.types.*;\n\nimport org.apache.comet.CometSchemaImporter;\nimport org.apache.comet.IcebergApi;\n\npublic class Utils {\n\n  /** This method is called from Apache Iceberg. */\n  @IcebergApi\n  public static ColumnReader getColumnReader(\n      DataType type,\n      ParquetColumnSpec columnSpec,\n      CometSchemaImporter importer,\n      int batchSize,\n      boolean useDecimal128,\n      boolean useLazyMaterialization,\n      boolean useLegacyTimestamp) {\n\n    ColumnDescriptor descriptor = buildColumnDescriptor(columnSpec);\n    return getColumnReader(\n        type,\n        descriptor,\n        importer,\n        batchSize,\n        useDecimal128,\n        useLazyMaterialization,\n        useLegacyTimestamp);\n  }\n\n  /**\n   * This method is called from Apache Iceberg.\n   *\n   * @see <a href=\"https://github.com/apache/datafusion-comet/issues/2079\">Comet Issue #2079</a>\n   */\n  @IcebergApi\n  public static ColumnReader getColumnReader(\n      DataType type,\n      ColumnDescriptor descriptor,\n      CometSchemaImporter importer,\n      int batchSize,\n      boolean useDecimal128,\n      boolean useLazyMaterialization) {\n    // TODO: support `useLegacyDateTimestamp` for Iceberg\n    return getColumnReader(\n        type, descriptor, importer, batchSize, useDecimal128, useLazyMaterialization, true);\n  }\n\n  public static ColumnReader getColumnReader(\n      DataType type,\n      ColumnDescriptor descriptor,\n      CometSchemaImporter importer,\n      int batchSize,\n      boolean useDecimal128,\n      boolean useLazyMaterialization,\n      boolean useLegacyDateTimestamp) {\n    if (useLazyMaterialization && supportLazyMaterialization(type)) {\n      return new LazyColumnReader(\n          type, descriptor, importer, batchSize, useDecimal128, useLegacyDateTimestamp);\n    } else {\n      return new ColumnReader(\n          type, descriptor, importer, batchSize, useDecimal128, useLegacyDateTimestamp);\n    }\n  }\n\n  private static boolean supportLazyMaterialization(DataType type) {\n    return (type instanceof StringType || type instanceof BinaryType);\n  }\n\n  /**\n   * Initialize the Comet native Parquet reader.\n   *\n   * @param descriptor the Parquet column descriptor for the column to be read\n   * @param readType the Spark read type used for type promotion. 
Null if promotion is not enabled.\n   * @param batchSize the batch size, i.e., maximum number of elements per record batch\n   * @param useDecimal128 whether to always represent decimals using 128 bits. If false, the native\n   *     reader may represent decimals using 32 or 64 bits, depending on the precision.\n   * @param useLegacyDateTimestampOrNTZ whether to read dates/timestamps that were written in the\n   *     legacy hybrid Julian + Gregorian calendar as is. If false, throw exceptions instead. If\n   *     the Spark type is TimestampNTZ, this should be true.\n   */\n  public static long initColumnReader(\n      ColumnDescriptor descriptor,\n      DataType readType,\n      int batchSize,\n      boolean useDecimal128,\n      boolean useLegacyDateTimestampOrNTZ) {\n    PrimitiveType primitiveType = descriptor.getPrimitiveType();\n    int primitiveTypeId = getPhysicalTypeId(primitiveType.getPrimitiveTypeName());\n    LogicalTypeAnnotation annotation = primitiveType.getLogicalTypeAnnotation();\n\n    // Process logical type information\n\n    int bitWidth = -1;\n    boolean isSigned = false;\n    if (annotation instanceof LogicalTypeAnnotation.IntLogicalTypeAnnotation) {\n      LogicalTypeAnnotation.IntLogicalTypeAnnotation intAnnotation =\n          (LogicalTypeAnnotation.IntLogicalTypeAnnotation) annotation;\n      bitWidth = intAnnotation.getBitWidth();\n      isSigned = intAnnotation.isSigned();\n    }\n\n    int precision, scale;\n    precision = scale = -1;\n    if (annotation instanceof LogicalTypeAnnotation.DecimalLogicalTypeAnnotation) {\n      LogicalTypeAnnotation.DecimalLogicalTypeAnnotation decimalAnnotation =\n          (LogicalTypeAnnotation.DecimalLogicalTypeAnnotation) annotation;\n      precision = decimalAnnotation.getPrecision();\n      scale = decimalAnnotation.getScale();\n    }\n\n    int tu = -1;\n    boolean isAdjustedUtc = false;\n    if (annotation instanceof LogicalTypeAnnotation.TimestampLogicalTypeAnnotation) {\n      LogicalTypeAnnotation.TimestampLogicalTypeAnnotation timestampAnnotation =\n          (LogicalTypeAnnotation.TimestampLogicalTypeAnnotation) annotation;\n      tu = getTimeUnitId(timestampAnnotation.getUnit());\n      isAdjustedUtc = timestampAnnotation.isAdjustedToUTC();\n    }\n\n    TypePromotionInfo promotionInfo;\n    if (readType != null) {\n      promotionInfo = new TypePromotionInfo(readType);\n    } else {\n      // If type promotion is not enabled, we'll just use the Parquet primitive type and precision.\n      promotionInfo = new TypePromotionInfo(primitiveTypeId, precision, scale, bitWidth);\n    }\n\n    return Native.initColumnReader(\n        primitiveTypeId,\n        getLogicalTypeId(annotation),\n        promotionInfo.physicalTypeId,\n        descriptor.getPath(),\n        descriptor.getMaxDefinitionLevel(),\n        descriptor.getMaxRepetitionLevel(),\n        bitWidth,\n        promotionInfo.bitWidth,\n        isSigned,\n        primitiveType.getTypeLength(),\n        precision,\n        promotionInfo.precision,\n        scale,\n        promotionInfo.scale,\n        tu,\n        isAdjustedUtc,\n        batchSize,\n        useDecimal128,\n        useLegacyDateTimestampOrNTZ);\n  }\n\n  static class TypePromotionInfo {\n    // The Parquet physical type ID converted from the Spark read schema, or the original Parquet\n    // physical type ID if type promotion is not enabled.\n    int physicalTypeId;\n    // Decimal precision from the Spark read schema, or -1 if it's not decimal type.\n    int precision;\n    // Decimal scale 
from the Spark read schema, or -1 if it's not decimal type.\n    int scale;\n    // Integer bit width from the Spark read schema, or -1 if it's not integer type.\n    int bitWidth;\n\n    TypePromotionInfo(int physicalTypeId, int precision, int scale, int bitWidth) {\n      this.physicalTypeId = physicalTypeId;\n      this.precision = precision;\n      this.scale = scale;\n      this.bitWidth = bitWidth;\n    }\n\n    TypePromotionInfo(DataType sparkReadType) {\n      // Create a dummy `StructField` from the input Spark type. We don't care about\n      // field name, nullability and metadata.\n      StructField f = new StructField(\"f\", sparkReadType, false, Metadata.empty());\n      ColumnDescriptor descriptor = TypeUtil.convertToParquet(f);\n      PrimitiveType primitiveType = descriptor.getPrimitiveType();\n      int physicalTypeId = getPhysicalTypeId(primitiveType.getPrimitiveTypeName());\n      LogicalTypeAnnotation annotation = primitiveType.getLogicalTypeAnnotation();\n      int precision = -1;\n      int scale = -1;\n      int bitWidth = -1;\n      if (annotation instanceof LogicalTypeAnnotation.DecimalLogicalTypeAnnotation) {\n        LogicalTypeAnnotation.DecimalLogicalTypeAnnotation decimalAnnotation =\n            (LogicalTypeAnnotation.DecimalLogicalTypeAnnotation) annotation;\n        precision = decimalAnnotation.getPrecision();\n        scale = decimalAnnotation.getScale();\n      }\n      if (annotation instanceof LogicalTypeAnnotation.IntLogicalTypeAnnotation) {\n        LogicalTypeAnnotation.IntLogicalTypeAnnotation intAnnotation =\n            (LogicalTypeAnnotation.IntLogicalTypeAnnotation) annotation;\n        bitWidth = intAnnotation.getBitWidth();\n      }\n      this.physicalTypeId = physicalTypeId;\n      this.precision = precision;\n      this.scale = scale;\n      this.bitWidth = bitWidth;\n    }\n  }\n\n  /**\n   * Maps the input Parquet physical type 'typeName' to an integer representing it. This is used for\n   * serialization between the Java and native side.\n   *\n   * @param typeName enum for the Parquet physical type\n   * @return an integer representing the input physical type\n   */\n  static int getPhysicalTypeId(PrimitiveType.PrimitiveTypeName typeName) {\n    switch (typeName) {\n      case BOOLEAN:\n        return 0;\n      case INT32:\n        return 1;\n      case INT64:\n        return 2;\n      case INT96:\n        return 3;\n      case FLOAT:\n        return 4;\n      case DOUBLE:\n        return 5;\n      case BINARY:\n        return 6;\n      case FIXED_LEN_BYTE_ARRAY:\n        return 7;\n    }\n    throw new IllegalArgumentException(\"Invalid Parquet physical type: \" + typeName);\n  }\n\n  /**\n   * Maps the input Parquet logical type 'annotation' to an integer representing it. 
This is used\n   * for serialization between the Java and native side.\n   *\n   * @param annotation the Parquet logical type annotation\n   * @return an integer representing the input logical type\n   */\n  static int getLogicalTypeId(LogicalTypeAnnotation annotation) {\n    if (annotation == null) {\n      return -1; // No logical type associated\n    } else if (annotation instanceof LogicalTypeAnnotation.IntLogicalTypeAnnotation) {\n      return 0;\n    } else if (annotation instanceof LogicalTypeAnnotation.StringLogicalTypeAnnotation) {\n      return 1;\n    } else if (annotation instanceof LogicalTypeAnnotation.DecimalLogicalTypeAnnotation) {\n      return 2;\n    } else if (annotation instanceof LogicalTypeAnnotation.DateLogicalTypeAnnotation) {\n      return 3;\n    } else if (annotation instanceof LogicalTypeAnnotation.TimestampLogicalTypeAnnotation) {\n      return 4;\n    } else if (annotation instanceof LogicalTypeAnnotation.EnumLogicalTypeAnnotation) {\n      return 5;\n    } else if (annotation instanceof LogicalTypeAnnotation.UUIDLogicalTypeAnnotation) {\n      return 6;\n    }\n\n    throw new UnsupportedOperationException(\"Unsupported Parquet logical type \" + annotation);\n  }\n\n  static int getTimeUnitId(LogicalTypeAnnotation.TimeUnit tu) {\n    switch (tu) {\n      case MILLIS:\n        return 0;\n      case MICROS:\n        return 1;\n      case NANOS:\n        return 2;\n      default:\n        throw new UnsupportedOperationException(\"Unsupported TimeUnit \" + tu);\n    }\n  }\n\n  @IcebergApi\n  public static ColumnDescriptor buildColumnDescriptor(ParquetColumnSpec columnSpec) {\n    PrimitiveType.PrimitiveTypeName primType =\n        PrimitiveType.PrimitiveTypeName.valueOf(columnSpec.getPhysicalType());\n\n    Type.Repetition repetition;\n    if (columnSpec.getMaxRepetitionLevel() > 0) {\n      repetition = Type.Repetition.REPEATED;\n    } else if (columnSpec.getMaxDefinitionLevel() > 0) {\n      repetition = Type.Repetition.OPTIONAL;\n    } else {\n      repetition = Type.Repetition.REQUIRED;\n    }\n\n    String name = columnSpec.getPath()[columnSpec.getPath().length - 1];\n    // Reconstruct the logical type from parameters\n    LogicalTypeAnnotation logicalType = null;\n    if (columnSpec.getLogicalTypeName() != null) {\n      logicalType =\n          reconstructLogicalType(\n              columnSpec.getLogicalTypeName(), columnSpec.getLogicalTypeParams());\n    }\n\n    PrimitiveType primitiveType;\n    if (primType == PrimitiveType.PrimitiveTypeName.FIXED_LEN_BYTE_ARRAY) {\n      primitiveType =\n          Types.primitive(primType, repetition)\n              .length(columnSpec.getTypeLength())\n              .as(logicalType)\n              .id(columnSpec.getFieldId())\n              .named(name);\n    } else {\n      primitiveType =\n          Types.primitive(primType, repetition)\n              .as(logicalType)\n              .id(columnSpec.getFieldId())\n              .named(name);\n    }\n\n    return new ColumnDescriptor(\n        columnSpec.getPath(),\n        primitiveType,\n        columnSpec.getMaxRepetitionLevel(),\n        columnSpec.getMaxDefinitionLevel());\n  }\n\n  private static LogicalTypeAnnotation reconstructLogicalType(\n      String logicalTypeName, java.util.Map<String, String> params) {\n\n    switch (logicalTypeName) {\n        // MAP\n      case \"MapLogicalTypeAnnotation\":\n        return LogicalTypeAnnotation.mapType();\n\n        // LIST\n      case \"ListLogicalTypeAnnotation\":\n        return 
LogicalTypeAnnotation.listType();\n\n        // STRING\n      case \"StringLogicalTypeAnnotation\":\n        return LogicalTypeAnnotation.stringType();\n\n        // MAP_KEY_VALUE\n      case \"MapKeyValueLogicalTypeAnnotation\":\n        return LogicalTypeAnnotation.MapKeyValueTypeAnnotation.getInstance();\n\n        // ENUM\n      case \"EnumLogicalTypeAnnotation\":\n        return LogicalTypeAnnotation.enumType();\n\n        // DECIMAL\n      case \"DecimalLogicalTypeAnnotation\":\n        if (!params.containsKey(\"scale\") || !params.containsKey(\"precision\")) {\n          throw new IllegalArgumentException(\n              \"Missing required parameters for DecimalLogicalTypeAnnotation: \" + params);\n        }\n        int scale = Integer.parseInt(params.get(\"scale\"));\n        int precision = Integer.parseInt(params.get(\"precision\"));\n        return LogicalTypeAnnotation.decimalType(scale, precision);\n\n        // DATE\n      case \"DateLogicalTypeAnnotation\":\n        return LogicalTypeAnnotation.dateType();\n\n        // TIME\n      case \"TimeLogicalTypeAnnotation\":\n        if (!params.containsKey(\"isAdjustedToUTC\") || !params.containsKey(\"unit\")) {\n          throw new IllegalArgumentException(\n              \"Missing required parameters for TimeLogicalTypeAnnotation: \" + params);\n        }\n\n        boolean isUTC = Boolean.parseBoolean(params.get(\"isAdjustedToUTC\"));\n        String timeUnitStr = params.get(\"unit\");\n\n        LogicalTypeAnnotation.TimeUnit timeUnit;\n        switch (timeUnitStr) {\n          case \"MILLIS\":\n            timeUnit = LogicalTypeAnnotation.TimeUnit.MILLIS;\n            break;\n          case \"MICROS\":\n            timeUnit = LogicalTypeAnnotation.TimeUnit.MICROS;\n            break;\n          case \"NANOS\":\n            timeUnit = LogicalTypeAnnotation.TimeUnit.NANOS;\n            break;\n          default:\n            throw new IllegalArgumentException(\"Unknown time unit: \" + timeUnitStr);\n        }\n        return LogicalTypeAnnotation.timeType(isUTC, timeUnit);\n\n        // TIMESTAMP\n      case \"TimestampLogicalTypeAnnotation\":\n        if (!params.containsKey(\"isAdjustedToUTC\") || !params.containsKey(\"unit\")) {\n          throw new IllegalArgumentException(\n              \"Missing required parameters for TimestampLogicalTypeAnnotation: \" + params);\n        }\n        boolean isAdjustedToUTC = Boolean.parseBoolean(params.get(\"isAdjustedToUTC\"));\n        String unitStr = params.get(\"unit\");\n\n        LogicalTypeAnnotation.TimeUnit unit;\n        switch (unitStr) {\n          case \"MILLIS\":\n            unit = LogicalTypeAnnotation.TimeUnit.MILLIS;\n            break;\n          case \"MICROS\":\n            unit = LogicalTypeAnnotation.TimeUnit.MICROS;\n            break;\n          case \"NANOS\":\n            unit = LogicalTypeAnnotation.TimeUnit.NANOS;\n            break;\n          default:\n            throw new IllegalArgumentException(\"Unknown timestamp unit: \" + unitStr);\n        }\n        return LogicalTypeAnnotation.timestampType(isAdjustedToUTC, unit);\n\n        // INTEGER\n      case \"IntLogicalTypeAnnotation\":\n        if (!params.containsKey(\"isSigned\") || !params.containsKey(\"bitWidth\")) {\n          throw new IllegalArgumentException(\n              \"Missing required parameters for IntLogicalTypeAnnotation: \" + params);\n        }\n        boolean isSigned = Boolean.parseBoolean(params.get(\"isSigned\"));\n        int bitWidth = 
Integer.parseInt(params.get(\"bitWidth\"));\n        return LogicalTypeAnnotation.intType(bitWidth, isSigned);\n\n        // JSON\n      case \"JsonLogicalTypeAnnotation\":\n        return LogicalTypeAnnotation.jsonType();\n\n        // BSON\n      case \"BsonLogicalTypeAnnotation\":\n        return LogicalTypeAnnotation.bsonType();\n\n        // UUID\n      case \"UUIDLogicalTypeAnnotation\":\n        return LogicalTypeAnnotation.uuidType();\n\n        // INTERVAL\n      case \"IntervalLogicalTypeAnnotation\":\n        return LogicalTypeAnnotation.IntervalLogicalTypeAnnotation.getInstance();\n\n      default:\n        throw new IllegalArgumentException(\"Unknown logical type: \" + logicalTypeName);\n    }\n  }\n\n  @IcebergApi\n  public static ParquetColumnSpec descriptorToParquetColumnSpec(ColumnDescriptor descriptor) {\n\n    String[] path = descriptor.getPath();\n    PrimitiveType primitiveType = descriptor.getPrimitiveType();\n    String physicalType = primitiveType.getPrimitiveTypeName().name();\n\n    int typeLength =\n        primitiveType.getPrimitiveTypeName() == PrimitiveType.PrimitiveTypeName.FIXED_LEN_BYTE_ARRAY\n            ? primitiveType.getTypeLength()\n            : 0;\n\n    boolean isRepeated = primitiveType.getRepetition() == Type.Repetition.REPEATED;\n\n    String logicalTypeName = null;\n    Map<String, String> logicalTypeParams = new HashMap<>();\n    LogicalTypeAnnotation logicalType = primitiveType.getLogicalTypeAnnotation();\n\n    if (logicalType != null) {\n      logicalTypeName = logicalType.getClass().getSimpleName();\n\n      // Handle specific logical types\n      if (logicalType instanceof LogicalTypeAnnotation.DecimalLogicalTypeAnnotation) {\n        LogicalTypeAnnotation.DecimalLogicalTypeAnnotation decimal =\n            (LogicalTypeAnnotation.DecimalLogicalTypeAnnotation) logicalType;\n        logicalTypeParams.put(\"precision\", String.valueOf(decimal.getPrecision()));\n        logicalTypeParams.put(\"scale\", String.valueOf(decimal.getScale()));\n      } else if (logicalType instanceof LogicalTypeAnnotation.TimestampLogicalTypeAnnotation) {\n        LogicalTypeAnnotation.TimestampLogicalTypeAnnotation timestamp =\n            (LogicalTypeAnnotation.TimestampLogicalTypeAnnotation) logicalType;\n        logicalTypeParams.put(\"isAdjustedToUTC\", String.valueOf(timestamp.isAdjustedToUTC()));\n        logicalTypeParams.put(\"unit\", timestamp.getUnit().name());\n      } else if (logicalType instanceof LogicalTypeAnnotation.TimeLogicalTypeAnnotation) {\n        LogicalTypeAnnotation.TimeLogicalTypeAnnotation time =\n            (LogicalTypeAnnotation.TimeLogicalTypeAnnotation) logicalType;\n        logicalTypeParams.put(\"isAdjustedToUTC\", String.valueOf(time.isAdjustedToUTC()));\n        logicalTypeParams.put(\"unit\", time.getUnit().name());\n      } else if (logicalType instanceof LogicalTypeAnnotation.IntLogicalTypeAnnotation) {\n        LogicalTypeAnnotation.IntLogicalTypeAnnotation intType =\n            (LogicalTypeAnnotation.IntLogicalTypeAnnotation) logicalType;\n        logicalTypeParams.put(\"isSigned\", String.valueOf(intType.isSigned()));\n        logicalTypeParams.put(\"bitWidth\", String.valueOf(intType.getBitWidth()));\n      }\n    }\n\n    int id = -1;\n    Type type = descriptor.getPrimitiveType();\n    if (type != null && type.getId() != null) {\n      id = type.getId().intValue();\n    }\n\n    return new ParquetColumnSpec(\n        id,\n        path,\n        physicalType,\n        typeLength,\n        isRepeated,\n        
descriptor.getMaxDefinitionLevel(),\n        descriptor.getMaxRepetitionLevel(),\n        logicalTypeName,\n        logicalTypeParams);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/WrappedInputFile.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.lang.reflect.Method;\n\nimport org.apache.parquet.io.InputFile;\nimport org.apache.parquet.io.SeekableInputStream;\n\nimport org.apache.comet.IcebergApi;\n\n/**\n * Wraps an Object that possibly implements the methods of a Parquet InputFile (but is not a Parquet\n * InputFile). Such an object` exists, for instance, in Iceberg's InputFile\n */\n@IcebergApi\npublic class WrappedInputFile implements InputFile {\n  Object wrapped;\n\n  @IcebergApi\n  public WrappedInputFile(Object inputFile) {\n    this.wrapped = inputFile;\n  }\n\n  @Override\n  public long getLength() throws IOException {\n    try {\n      Method targetMethod = wrapped.getClass().getDeclaredMethod(\"getLength\"); //\n      targetMethod.setAccessible(true);\n      return (long) targetMethod.invoke(wrapped);\n    } catch (Exception e) {\n      throw new IOException(e);\n    }\n  }\n\n  @Override\n  public SeekableInputStream newStream() throws IOException {\n    try {\n      Method targetMethod = wrapped.getClass().getDeclaredMethod(\"newStream\"); //\n      targetMethod.setAccessible(true);\n      InputStream stream = (InputStream) targetMethod.invoke(wrapped);\n      return new WrappedSeekableInputStream(stream);\n    } catch (Exception e) {\n      throw new IOException(e);\n    }\n  }\n\n  @Override\n  public String toString() {\n    return wrapped.toString();\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/parquet/WrappedSeekableInputStream.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.lang.reflect.Method;\nimport java.util.Objects;\n\nimport org.apache.parquet.io.DelegatingSeekableInputStream;\n\n/**\n * Wraps an InputStream that possibly implements the methods of a Parquet SeekableInputStream (but\n * is not a Parquet SeekableInputStream). Such an InputStream exists, for instance, in Iceberg's\n * SeekableInputStream\n */\npublic class WrappedSeekableInputStream extends DelegatingSeekableInputStream {\n\n  private final InputStream wrappedInputStream; // The InputStream we are wrapping\n\n  public WrappedSeekableInputStream(InputStream inputStream) {\n    super(inputStream);\n    this.wrappedInputStream = Objects.requireNonNull(inputStream, \"InputStream cannot be null\");\n  }\n\n  @Override\n  public long getPos() throws IOException {\n    try {\n      Method targetMethod = wrappedInputStream.getClass().getDeclaredMethod(\"getPos\"); //\n      targetMethod.setAccessible(true);\n      return (long) targetMethod.invoke(wrappedInputStream);\n    } catch (Exception e) {\n      throw new IOException(e);\n    }\n  }\n\n  @Override\n  public void seek(long newPos) throws IOException {\n    try {\n      Method targetMethod = wrappedInputStream.getClass().getDeclaredMethod(\"seek\", long.class);\n      targetMethod.setAccessible(true);\n      targetMethod.invoke(wrappedInputStream, newPos);\n    } catch (Exception e) {\n      throw new IOException(e);\n    }\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometDecodedVector.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport org.apache.arrow.vector.BaseVariableWidthVector;\nimport org.apache.arrow.vector.ValueVector;\nimport org.apache.arrow.vector.types.pojo.Field;\nimport org.apache.spark.sql.comet.util.Utils;\nimport org.apache.spark.unsafe.Platform;\n\n/** A Comet vector whose elements are already decoded (i.e., materialized). */\npublic abstract class CometDecodedVector extends CometVector {\n  /**\n   * The vector that stores all the values. For dictionary-backed vector, this is the vector of\n   * indices.\n   */\n  protected final ValueVector valueVector;\n\n  private boolean hasNull;\n  private int numNulls;\n  private int numValues;\n  private int validityByteCacheIndex = -1;\n  private byte validityByteCache;\n  protected boolean isUuid;\n\n  protected CometDecodedVector(ValueVector vector, Field valueField, boolean useDecimal128) {\n    this(vector, valueField, useDecimal128, false);\n  }\n\n  protected CometDecodedVector(\n      ValueVector vector, Field valueField, boolean useDecimal128, boolean isUuid) {\n    super(Utils.fromArrowField(valueField), useDecimal128);\n    this.valueVector = vector;\n    this.numNulls = valueVector.getNullCount();\n    this.numValues = valueVector.getValueCount();\n    this.hasNull = numNulls != 0;\n    this.isUuid = isUuid;\n  }\n\n  @Override\n  public ValueVector getValueVector() {\n    return valueVector;\n  }\n\n  @Override\n  public void setNumNulls(int numNulls) {\n    // We don't need to update null count in 'valueVector' since 'ValueVector.getNullCount' will\n    // re-compute the null count from validity buffer.\n    this.numNulls = numNulls;\n    this.hasNull = numNulls != 0;\n    this.validityByteCacheIndex = -1;\n  }\n\n  @Override\n  public void setNumValues(int numValues) {\n    this.numValues = numValues;\n    if (valueVector instanceof BaseVariableWidthVector) {\n      BaseVariableWidthVector bv = (BaseVariableWidthVector) valueVector;\n      // In case `lastSet` is smaller than `numValues`, `setValueCount` will set all the offsets\n      // within `[lastSet + 1, numValues)` to be empty, which is incorrect in our case.\n      //\n      // For instance, this can happen if one first call `setNumValues` with input 100, and then\n      // again `setNumValues` with 200. 
The first call will set `lastSet` to 99, while the second\n      // call will set all strings between indices `[100, 200)` to be empty.\n      bv.setLastSet(numValues);\n    }\n    valueVector.setValueCount(numValues);\n  }\n\n  public int numValues() {\n    return numValues;\n  }\n\n  @Override\n  public boolean hasNull() {\n    return hasNull;\n  }\n\n  @Override\n  public int numNulls() {\n    return numNulls;\n  }\n\n  @Override\n  public boolean isNullAt(int rowId) {\n    if (!hasNull) return false;\n\n    int byteIndex = rowId >> 3;\n    if (byteIndex != validityByteCacheIndex) {\n      long validityBufferAddress = valueVector.getValidityBuffer().memoryAddress();\n      validityByteCache = Platform.getByte(null, validityBufferAddress + byteIndex);\n      validityByteCacheIndex = byteIndex;\n    }\n    return ((validityByteCache >> (rowId & 7)) & 1) == 0;\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometDelegateVector.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport org.apache.arrow.vector.ValueVector;\nimport org.apache.arrow.vector.dictionary.DictionaryProvider;\nimport org.apache.spark.sql.types.DataType;\nimport org.apache.spark.sql.types.Decimal;\nimport org.apache.spark.sql.vectorized.ColumnVector;\nimport org.apache.spark.sql.vectorized.ColumnarArray;\nimport org.apache.spark.sql.vectorized.ColumnarMap;\nimport org.apache.spark.unsafe.types.UTF8String;\n\n/** A special Comet vector that just delegate all calls */\npublic class CometDelegateVector extends CometVector {\n  protected CometVector delegate;\n\n  public CometDelegateVector(DataType dataType) {\n    this(dataType, null, false);\n  }\n\n  public CometDelegateVector(DataType dataType, boolean useDecimal128) {\n    this(dataType, null, useDecimal128);\n  }\n\n  public CometDelegateVector(DataType dataType, CometVector delegate, boolean useDecimal128) {\n    super(dataType, useDecimal128);\n    if (delegate instanceof CometDelegateVector) {\n      throw new IllegalArgumentException(\"cannot have nested delegation\");\n    }\n    this.delegate = delegate;\n  }\n\n  protected void setDelegate(CometVector delegate) {\n    this.delegate = delegate;\n  }\n\n  @Override\n  public void setNumNulls(int numNulls) {\n    delegate.setNumNulls(numNulls);\n  }\n\n  @Override\n  public void setNumValues(int numValues) {\n    delegate.setNumValues(numValues);\n  }\n\n  @Override\n  public int numValues() {\n    return delegate.numValues();\n  }\n\n  @Override\n  public boolean hasNull() {\n    return delegate.hasNull();\n  }\n\n  @Override\n  public int numNulls() {\n    return delegate.numNulls();\n  }\n\n  @Override\n  public boolean isNullAt(int rowId) {\n    return delegate.isNullAt(rowId);\n  }\n\n  @Override\n  public boolean getBoolean(int rowId) {\n    return delegate.getBoolean(rowId);\n  }\n\n  @Override\n  public byte getByte(int rowId) {\n    return delegate.getByte(rowId);\n  }\n\n  @Override\n  public short getShort(int rowId) {\n    return delegate.getShort(rowId);\n  }\n\n  @Override\n  public int getInt(int rowId) {\n    return delegate.getInt(rowId);\n  }\n\n  @Override\n  public long getLong(int rowId) {\n    return delegate.getLong(rowId);\n  }\n\n  @Override\n  public long getLongDecimal(int rowId) {\n    return delegate.getLongDecimal(rowId);\n  }\n\n  @Override\n  public float getFloat(int rowId) {\n    return delegate.getFloat(rowId);\n  }\n\n  @Override\n  public double getDouble(int rowId) {\n    return delegate.getDouble(rowId);\n  }\n\n  @Override\n  public Decimal getDecimal(int i, int precision, int scale) {\n    return delegate.getDecimal(i, precision, scale);\n  }\n\n  @Override\n  byte[] 
getBinaryDecimal(int i) {\n    return delegate.getBinaryDecimal(i);\n  }\n\n  @Override\n  public UTF8String getUTF8String(int rowId) {\n    return delegate.getUTF8String(rowId);\n  }\n\n  @Override\n  public byte[] getBinary(int rowId) {\n    return delegate.getBinary(rowId);\n  }\n\n  @Override\n  public ColumnarArray getArray(int i) {\n    return delegate.getArray(i);\n  }\n\n  @Override\n  public ColumnarMap getMap(int i) {\n    return delegate.getMap(i);\n  }\n\n  @Override\n  public ColumnVector getChild(int i) {\n    return delegate.getChild(i);\n  }\n\n  @Override\n  public ValueVector getValueVector() {\n    return delegate.getValueVector();\n  }\n\n  @Override\n  public CometVector slice(int offset, int length) {\n    return delegate.slice(offset, length);\n  }\n\n  @Override\n  public DictionaryProvider getDictionaryProvider() {\n    return delegate.getDictionaryProvider();\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometDictionary.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport org.apache.arrow.vector.ValueVector;\nimport org.apache.spark.unsafe.types.UTF8String;\n\n/** A dictionary which maps indices (integers) to values. */\npublic class CometDictionary implements AutoCloseable {\n  private static final int DECIMAL_BYTE_WIDTH = 16;\n\n  private CometPlainVector values;\n  private final int numValues;\n\n  /** Decoded dictionary values. We only need to copy values for decimal type. */\n  private volatile ByteArrayWrapper[] binaries;\n\n  public CometDictionary(CometPlainVector values) {\n    this.values = values;\n    this.numValues = values.numValues();\n  }\n\n  public void setDictionaryVector(CometPlainVector values) {\n    this.values = values;\n    if (values.numValues() != numValues) {\n      throw new IllegalArgumentException(\"Mismatched dictionary size\");\n    }\n  }\n\n  public ValueVector getValueVector() {\n    return values.getValueVector();\n  }\n\n  public boolean decodeToBoolean(int index) {\n    return values.getBoolean(index);\n  }\n\n  public byte decodeToByte(int index) {\n    return values.getByte(index);\n  }\n\n  public short decodeToShort(int index) {\n    return values.getShort(index);\n  }\n\n  public int decodeToInt(int index) {\n    return values.getInt(index);\n  }\n\n  public long decodeToLong(int index) {\n    return values.getLong(index);\n  }\n\n  public long decodeToLongDecimal(int index) {\n    return values.getLongDecimal(index);\n  }\n\n  public float decodeToFloat(int index) {\n    return values.getFloat(index);\n  }\n\n  public double decodeToDouble(int index) {\n    return values.getDouble(index);\n  }\n\n  public byte[] decodeToBinary(int index) {\n    switch (values.getValueVector().getMinorType()) {\n      case VARBINARY:\n      case FIXEDSIZEBINARY:\n        return values.getBinary(index);\n      case DECIMAL:\n        if (binaries == null) {\n          // We only need to copy values for decimal 128 type as random access\n          // to the dictionary is not efficient for decimal (it needs to copy\n          // the value to a new byte array everytime).\n          ByteArrayWrapper[] binaries = new ByteArrayWrapper[numValues];\n          for (int i = 0; i < numValues; i++) {\n            // Need copying here since we re-use byte array for decimal\n            byte[] bytes = new byte[DECIMAL_BYTE_WIDTH];\n            bytes = values.copyBinaryDecimal(i, bytes);\n            binaries[i] = new ByteArrayWrapper(bytes);\n          }\n          this.binaries = binaries;\n        }\n        return binaries[index].bytes;\n      default:\n        throw new IllegalArgumentException(\n            \"Invalid Arrow minor type: \" + 
values.getValueVector().getMinorType());\n    }\n  }\n\n  public UTF8String decodeToUTF8String(int index) {\n    return values.getUTF8String(index);\n  }\n\n  @Override\n  public void close() {\n    values.close();\n  }\n\n  private static class ByteArrayWrapper {\n    private final byte[] bytes;\n\n    ByteArrayWrapper(byte[] bytes) {\n      this.bytes = bytes;\n    }\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometDictionaryVector.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport org.apache.arrow.vector.IntVector;\nimport org.apache.arrow.vector.dictionary.DictionaryProvider;\nimport org.apache.arrow.vector.util.TransferPair;\nimport org.apache.parquet.Preconditions;\nimport org.apache.spark.unsafe.types.UTF8String;\n\n/** A column vector whose elements are dictionary-encoded. */\npublic class CometDictionaryVector extends CometDecodedVector {\n  public final CometPlainVector indices;\n  public final CometDictionary values;\n  public final DictionaryProvider provider;\n\n  /** Whether this vector is an alias sliced from another vector. */\n  private final boolean isAlias;\n\n  public CometDictionaryVector(\n      CometPlainVector indices,\n      CometDictionary values,\n      DictionaryProvider provider,\n      boolean useDecimal128) {\n    this(indices, values, provider, useDecimal128, false, false);\n  }\n\n  public CometDictionaryVector(\n      CometPlainVector indices,\n      CometDictionary values,\n      DictionaryProvider provider,\n      boolean useDecimal128,\n      boolean isAlias,\n      boolean isUuid) {\n    super(indices.valueVector, values.getValueVector().getField(), useDecimal128, isUuid);\n    Preconditions.checkArgument(\n        indices.valueVector instanceof IntVector, \"'indices' should be a IntVector\");\n    this.values = values;\n    this.indices = indices;\n    this.provider = provider;\n    this.isAlias = isAlias;\n  }\n\n  @Override\n  public DictionaryProvider getDictionaryProvider() {\n    return this.provider;\n  }\n\n  @Override\n  public void close() {\n    super.close();\n    // Only close the values vector if this is not a sliced vector.\n    if (!isAlias) {\n      values.close();\n    }\n  }\n\n  @Override\n  public boolean getBoolean(int i) {\n    return values.decodeToBoolean(indices.getInt(i));\n  }\n\n  @Override\n  public byte getByte(int i) {\n    return values.decodeToByte(indices.getInt(i));\n  }\n\n  @Override\n  public short getShort(int i) {\n    return values.decodeToShort(indices.getInt(i));\n  }\n\n  @Override\n  public int getInt(int i) {\n    return values.decodeToInt(indices.getInt(i));\n  }\n\n  @Override\n  public long getLong(int i) {\n    return values.decodeToLong(indices.getInt(i));\n  }\n\n  @Override\n  public long getLongDecimal(int i) {\n    return values.decodeToLongDecimal(indices.getInt(i));\n  }\n\n  @Override\n  public float getFloat(int i) {\n    return values.decodeToFloat(indices.getInt(i));\n  }\n\n  @Override\n  public double getDouble(int i) {\n    return values.decodeToDouble(indices.getInt(i));\n  }\n\n  @Override\n  public UTF8String getUTF8String(int i) {\n    return values.decodeToUTF8String(indices.getInt(i));\n  
}\n\n  @Override\n  public byte[] getBinary(int i) {\n    return values.decodeToBinary(indices.getInt(i));\n  }\n\n  @Override\n  byte[] getBinaryDecimal(int i) {\n    return values.decodeToBinary(indices.getInt(i));\n  }\n\n  @Override\n  public CometVector slice(int offset, int length) {\n    TransferPair tp = indices.valueVector.getTransferPair(indices.valueVector.getAllocator());\n    tp.splitAndTransfer(offset, length);\n    CometPlainVector sliced = new CometPlainVector(tp.getTo(), useDecimal128);\n\n    // Set the alias flag to true so that the sliced vector will not close the dictionary vector.\n    // Otherwise, if the dictionary is closed, the sliced vector will not be able to access the\n    // dictionary.\n    return new CometDictionaryVector(sliced, values, provider, useDecimal128, true, isUuid);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometLazyVector.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport org.apache.arrow.vector.ValueVector;\nimport org.apache.spark.sql.types.DataType;\n\nimport org.apache.comet.parquet.LazyColumnReader;\n\npublic class CometLazyVector extends CometDelegateVector {\n  private final LazyColumnReader columnReader;\n\n  public CometLazyVector(DataType type, LazyColumnReader columnReader, boolean useDecimal128) {\n    super(type, useDecimal128);\n    this.columnReader = columnReader;\n  }\n\n  public CometDecodedVector getDecodedVector() {\n    return (CometDecodedVector) delegate;\n  }\n\n  @Override\n  public ValueVector getValueVector() {\n    columnReader.readAllBatch();\n    setDelegate(columnReader.loadVector());\n    return super.getValueVector();\n  }\n\n  @Override\n  public void setNumNulls(int numNulls) {\n    throw new UnsupportedOperationException(\"CometLazyVector doesn't support 'setNumNulls'\");\n  }\n\n  @Override\n  public void setNumValues(int numValues) {\n    throw new UnsupportedOperationException(\"CometLazyVector doesn't support 'setNumValues'\");\n  }\n\n  @Override\n  public void close() {\n    // Do nothing. 'vector' is closed by 'columnReader' which owns it.\n  }\n\n  @Override\n  public boolean hasNull() {\n    columnReader.readAllBatch();\n    setDelegate(columnReader.loadVector());\n    return super.hasNull();\n  }\n\n  @Override\n  public int numNulls() {\n    columnReader.readAllBatch();\n    setDelegate(columnReader.loadVector());\n    return super.numNulls();\n  }\n\n  @Override\n  public boolean isNullAt(int rowId) {\n    if (columnReader.materializeUpToIfNecessary(rowId)) {\n      setDelegate(columnReader.loadVector());\n    }\n    return super.isNullAt(rowId);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometListVector.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport org.apache.arrow.vector.*;\nimport org.apache.arrow.vector.complex.ListVector;\nimport org.apache.arrow.vector.dictionary.DictionaryProvider;\nimport org.apache.arrow.vector.util.TransferPair;\nimport org.apache.spark.sql.vectorized.ColumnVector;\nimport org.apache.spark.sql.vectorized.ColumnarArray;\n\n/** A Comet column vector for list type. */\npublic class CometListVector extends CometDecodedVector {\n  final ListVector listVector;\n  final ValueVector dataVector;\n  final ColumnVector dataColumnVector;\n  final DictionaryProvider dictionaryProvider;\n\n  public CometListVector(\n      ValueVector vector, boolean useDecimal128, DictionaryProvider dictionaryProvider) {\n    super(vector, vector.getField(), useDecimal128);\n\n    this.listVector = ((ListVector) vector);\n    this.dataVector = listVector.getDataVector();\n    this.dictionaryProvider = dictionaryProvider;\n    this.dataColumnVector = getVector(dataVector, useDecimal128, dictionaryProvider);\n  }\n\n  @Override\n  public ColumnarArray getArray(int i) {\n    if (isNullAt(i)) return null;\n    int start = listVector.getOffsetBuffer().getInt(i * ListVector.OFFSET_WIDTH);\n    int end = listVector.getOffsetBuffer().getInt((i + 1) * ListVector.OFFSET_WIDTH);\n\n    return new ColumnarArray(dataColumnVector, start, end - start);\n  }\n\n  @Override\n  public CometVector slice(int offset, int length) {\n    TransferPair tp = this.valueVector.getTransferPair(this.valueVector.getAllocator());\n    tp.splitAndTransfer(offset, length);\n\n    return new CometListVector(tp.getTo(), useDecimal128, dictionaryProvider);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometMapVector.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport org.apache.arrow.vector.*;\nimport org.apache.arrow.vector.complex.MapVector;\nimport org.apache.arrow.vector.complex.StructVector;\nimport org.apache.arrow.vector.dictionary.DictionaryProvider;\nimport org.apache.arrow.vector.util.TransferPair;\nimport org.apache.spark.sql.vectorized.ColumnVector;\nimport org.apache.spark.sql.vectorized.ColumnarMap;\n\n/** A Comet column vector for map type. */\npublic class CometMapVector extends CometDecodedVector {\n  final MapVector mapVector;\n  final ValueVector dataVector;\n  final CometStructVector dataColumnVector;\n  final DictionaryProvider dictionaryProvider;\n\n  final ColumnVector keys;\n  final ColumnVector values;\n\n  public CometMapVector(\n      ValueVector vector, boolean useDecimal128, DictionaryProvider dictionaryProvider) {\n    super(vector, vector.getField(), useDecimal128);\n\n    this.mapVector = ((MapVector) vector);\n    this.dataVector = mapVector.getDataVector();\n    this.dictionaryProvider = dictionaryProvider;\n\n    if (dataVector instanceof StructVector) {\n      this.dataColumnVector = new CometStructVector(dataVector, useDecimal128, dictionaryProvider);\n\n      if (dataColumnVector.children.size() != 2) {\n        throw new RuntimeException(\n            \"MapVector's dataVector should have 2 children, but got: \"\n                + dataColumnVector.children.size());\n      }\n\n      this.keys = dataColumnVector.getChild(0);\n      this.values = dataColumnVector.getChild(1);\n    } else {\n      throw new RuntimeException(\n          \"MapVector's dataVector should be StructVector, but got: \"\n              + dataVector.getClass().getSimpleName());\n    }\n  }\n\n  @Override\n  public ColumnarMap getMap(int i) {\n    if (isNullAt(i)) return null;\n    int start = mapVector.getOffsetBuffer().getInt(i * MapVector.OFFSET_WIDTH);\n    int end = mapVector.getOffsetBuffer().getInt((i + 1) * MapVector.OFFSET_WIDTH);\n\n    return new ColumnarMap(keys, values, start, end - start);\n  }\n\n  @Override\n  public CometVector slice(int offset, int length) {\n    TransferPair tp = this.valueVector.getTransferPair(this.valueVector.getAllocator());\n    tp.splitAndTransfer(offset, length);\n\n    return new CometMapVector(tp.getTo(), useDecimal128, dictionaryProvider);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometPlainVector.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport java.nio.ByteBuffer;\nimport java.nio.ByteOrder;\nimport java.util.UUID;\n\nimport org.apache.arrow.c.CDataDictionaryProvider;\nimport org.apache.arrow.vector.*;\nimport org.apache.arrow.vector.util.TransferPair;\nimport org.apache.parquet.Preconditions;\nimport org.apache.spark.unsafe.Platform;\nimport org.apache.spark.unsafe.types.UTF8String;\n\n/** A column vector whose elements are plainly decoded. */\npublic class CometPlainVector extends CometDecodedVector {\n  private final long valueBufferAddress;\n  private final boolean isBaseFixedWidthVector;\n\n  private byte booleanByteCache;\n  private int booleanByteCacheIndex = -1;\n\n  private boolean isReused;\n\n  public CometPlainVector(ValueVector vector, boolean useDecimal128) {\n    this(vector, useDecimal128, false);\n  }\n\n  public CometPlainVector(ValueVector vector, boolean useDecimal128, boolean isUuid) {\n    this(vector, useDecimal128, isUuid, false);\n  }\n\n  public CometPlainVector(\n      ValueVector vector, boolean useDecimal128, boolean isUuid, boolean isReused) {\n    super(vector, vector.getField(), useDecimal128, isUuid);\n    // NullType doesn't have data buffer.\n    if (vector instanceof NullVector) {\n      this.valueBufferAddress = -1;\n    } else {\n      this.valueBufferAddress = vector.getDataBuffer().memoryAddress();\n    }\n\n    isBaseFixedWidthVector = valueVector instanceof BaseFixedWidthVector;\n    this.isReused = isReused;\n  }\n\n  public boolean isReused() {\n    return isReused;\n  }\n\n  public void setReused(boolean isReused) {\n    this.isReused = isReused;\n  }\n\n  @Override\n  public void setNumNulls(int numNulls) {\n    super.setNumNulls(numNulls);\n    this.booleanByteCacheIndex = -1;\n  }\n\n  @Override\n  public boolean getBoolean(int rowId) {\n    int byteIndex = rowId >> 3;\n    if (byteIndex != booleanByteCacheIndex) {\n      booleanByteCache = getByte(byteIndex);\n      booleanByteCacheIndex = byteIndex;\n    }\n    return ((booleanByteCache >> (rowId & 7)) & 1) == 1;\n  }\n\n  @Override\n  public byte getByte(int rowId) {\n    return Platform.getByte(null, valueBufferAddress + rowId);\n  }\n\n  @Override\n  public short getShort(int rowId) {\n    return Platform.getShort(null, valueBufferAddress + rowId * 2L);\n  }\n\n  @Override\n  public int getInt(int rowId) {\n    return Platform.getInt(null, valueBufferAddress + rowId * 4L);\n  }\n\n  @Override\n  public long getLong(int rowId) {\n    return Platform.getLong(null, valueBufferAddress + rowId * 8L);\n  }\n\n  @Override\n  public long getLongDecimal(int rowId) {\n    return Platform.getLong(null, valueBufferAddress + rowId * 16L);\n  }\n\n  @Override\n  
public float getFloat(int rowId) {\n    return Platform.getFloat(null, valueBufferAddress + rowId * 4L);\n  }\n\n  @Override\n  public double getDouble(int rowId) {\n    return Platform.getDouble(null, valueBufferAddress + rowId * 8L);\n  }\n\n  @Override\n  public UTF8String getUTF8String(int rowId) {\n    if (isNullAt(rowId)) return null;\n    if (!isBaseFixedWidthVector) {\n      BaseVariableWidthVector varWidthVector = (BaseVariableWidthVector) valueVector;\n      long offsetBufferAddress = varWidthVector.getOffsetBuffer().memoryAddress();\n      int offset = Platform.getInt(null, offsetBufferAddress + rowId * 4L);\n      int length = Platform.getInt(null, offsetBufferAddress + (rowId + 1L) * 4L) - offset;\n      return UTF8String.fromAddress(null, valueBufferAddress + offset, length);\n    } else {\n      BaseFixedWidthVector fixedWidthVector = (BaseFixedWidthVector) valueVector;\n      int length = fixedWidthVector.getTypeWidth();\n      int offset = rowId * length;\n      byte[] result = new byte[length];\n      Platform.copyMemory(\n          null, valueBufferAddress + offset, result, Platform.BYTE_ARRAY_OFFSET, length);\n\n      if (!isUuid) {\n        return UTF8String.fromBytes(result);\n      } else {\n        return UTF8String.fromString(convertToUuid(result).toString());\n      }\n    }\n  }\n\n  @Override\n  public byte[] getBinary(int rowId) {\n    if (isNullAt(rowId)) return null;\n    int offset;\n    int length;\n    if (valueVector instanceof BaseVariableWidthVector) {\n      BaseVariableWidthVector varWidthVector = (BaseVariableWidthVector) valueVector;\n      long offsetBufferAddress = varWidthVector.getOffsetBuffer().memoryAddress();\n      offset = Platform.getInt(null, offsetBufferAddress + rowId * 4L);\n      length = Platform.getInt(null, offsetBufferAddress + (rowId + 1L) * 4L) - offset;\n    } else if (valueVector instanceof BaseFixedWidthVector) {\n      BaseFixedWidthVector fixedWidthVector = (BaseFixedWidthVector) valueVector;\n      length = fixedWidthVector.getTypeWidth();\n      offset = rowId * length;\n    } else {\n      throw new RuntimeException(\"Unsupported binary vector type: \" + valueVector.getName());\n    }\n    byte[] result = new byte[length];\n    Platform.copyMemory(\n        null, valueBufferAddress + offset, result, Platform.BYTE_ARRAY_OFFSET, length);\n    return result;\n  }\n\n  @Override\n  public CDataDictionaryProvider getDictionaryProvider() {\n    return null;\n  }\n\n  @Override\n  public boolean isNullAt(int rowId) {\n    return this.valueBufferAddress == -1 || super.isNullAt(rowId);\n  }\n\n  @Override\n  public CometVector slice(int offset, int length) {\n    TransferPair tp = this.valueVector.getTransferPair(this.valueVector.getAllocator());\n    tp.splitAndTransfer(offset, length);\n\n    return new CometPlainVector(tp.getTo(), useDecimal128);\n  }\n\n  private static UUID convertToUuid(byte[] buf) {\n    Preconditions.checkArgument(buf.length == 16, \"UUID requires 16 bytes\");\n    ByteBuffer bb = ByteBuffer.wrap(buf);\n    bb.order(ByteOrder.BIG_ENDIAN);\n    long mostSigBits = bb.getLong();\n    long leastSigBits = bb.getLong();\n    return new UUID(mostSigBits, leastSigBits);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometSelectionVector.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport org.apache.arrow.memory.BufferAllocator;\nimport org.apache.arrow.vector.IntVector;\nimport org.apache.arrow.vector.dictionary.DictionaryProvider;\nimport org.apache.spark.sql.vectorized.ColumnVector;\nimport org.apache.spark.sql.vectorized.ColumnarArray;\nimport org.apache.spark.sql.vectorized.ColumnarMap;\nimport org.apache.spark.unsafe.types.UTF8String;\n\n/**\n * A zero-copy selection vector that extends CometVector. This implementation stores the original\n * data vector and selection indices as separate CometVectors, providing zero copy access to the the\n * underlying data.\n *\n * <p>If the original vector has values [v0, v1, v2, v3, v4, v5, v6, v7] and the selection indices\n * are [0, 1, 3, 4, 5, 7], then this selection vector will logically represent [v0, v1, v3, v4, v5,\n * v7] without actually copying the data.\n *\n * <p>Most of the implementations of CometVector methods are implemented for completeness. We don't\n * use this class except to transfer the original data and the selection indices to the native code.\n */\npublic class CometSelectionVector extends CometVector {\n  /** The original vector containing all values */\n  private final CometVector values;\n\n  /**\n   * The valid indices in the values vector. This array is converted into an Arrow vector so we can\n   * transfer the data to native in one JNI call. This is used to represent the rowid mapping used\n   * by Iceberg\n   */\n  private final int[] selectionIndices;\n\n  /**\n   * The indices vector containing selection indices. This is currently allocated by the JVM side\n   * unlike the values vector which is allocated on the native side\n   */\n  private final CometVector indices;\n\n  /**\n   * Number of selected elements. 
The indices array may have a length greater than this but only\n   * numValues elements in the array are valid.\n   */\n  private final int numValues;\n\n  /**\n   * Creates a new selection vector from the given vector and indices.\n   *\n   * @param values The original vector to select from\n   * @param indices The indices to select from the original vector\n   * @param numValues The number of valid values in the indices array\n   * @throws IllegalArgumentException if any of the first numValues indices is out of bounds\n   */\n  public CometSelectionVector(CometVector values, int[] indices, int numValues) {\n    // Use the values vector's datatype, useDecimal128, and dictionary provider\n    super(values.dataType(), values.useDecimal128);\n\n    this.values = values;\n    this.selectionIndices = indices;\n    this.numValues = numValues;\n\n    // Validate indices; only the first numValues entries are meaningful, so entries\n    // past that point are ignored rather than validated.\n    int originalLength = values.numValues();\n    for (int i = 0; i < numValues; i++) {\n      int idx = indices[i];\n      if (idx < 0 || idx >= originalLength) {\n        throw new IllegalArgumentException(\n            String.format(\n                \"Index %d is out of bounds for vector of length %d\", idx, originalLength));\n      }\n    }\n\n    // Create indices vector\n    BufferAllocator allocator = values.getValueVector().getAllocator();\n    IntVector indicesVector = new IntVector(\"selection_indices\", allocator);\n    indicesVector.allocateNew(numValues);\n    for (int i = 0; i < numValues; i++) {\n      indicesVector.set(i, indices[i]);\n    }\n    indicesVector.setValueCount(numValues);\n\n    this.indices =\n        CometVector.getVector(indicesVector, values.useDecimal128, values.getDictionaryProvider());\n  }\n\n  /**\n   * Returns the index in the values vector for the given selection vector index.\n   *\n   * @param selectionIndex Index in the selection vector\n   * @return The corresponding index in the original vector\n   * @throws IndexOutOfBoundsException if selectionIndex is out of bounds\n   */\n  private int getValuesIndex(int selectionIndex) {\n    if (selectionIndex < 0 || selectionIndex >= numValues) {\n      throw new IndexOutOfBoundsException(\n          String.format(\n              \"Selection index %d is out of bounds for selection vector of length %d\",\n              selectionIndex, numValues));\n    }\n    return indices.getInt(selectionIndex);\n  }\n\n  /**\n   * Returns a reference to the values vector.\n   *\n   * @return The CometVector containing the values\n   */\n  public CometVector getValues() {\n    return values;\n  }\n\n  /**\n   * Returns the indices vector.\n   *\n   * @return The CometVector containing the indices\n   */\n  public CometVector getIndices() {\n    return indices;\n  }\n\n  /**\n   * Returns the selected indices.\n   *\n   * @return Array of selected indices\n   */\n  private int[] getSelectedIndices() {\n    return selectionIndices;\n  }\n\n  @Override\n  public int numValues() {\n    return numValues;\n  }\n\n  @Override\n  public void setNumValues(int numValues) {\n    // For selection vectors, we don't allow changing the number of values\n    // as it would break the mapping between selection indices and values\n    throw new UnsupportedOperationException(\"CometSelectionVector doesn't support setNumValues\");\n  }\n\n  @Override\n  public void setNumNulls(int numNulls) {\n    // For selection vectors, null count should be delegated to the underlying values vector\n    // The selection doesn't change the null semantics\n    values.setNumNulls(numNulls);\n  }\n\n  @Override\n  public boolean hasNull() {\n    return 
values.hasNull();\n  }\n\n  @Override\n  public int numNulls() {\n    return values.numNulls();\n  }\n\n  // ColumnVector method implementations - delegate to original vector with index mapping\n  @Override\n  public boolean isNullAt(int rowId) {\n    return values.isNullAt(getValuesIndex(rowId));\n  }\n\n  @Override\n  public boolean getBoolean(int rowId) {\n    return values.getBoolean(getValuesIndex(rowId));\n  }\n\n  @Override\n  public byte getByte(int rowId) {\n    return values.getByte(getValuesIndex(rowId));\n  }\n\n  @Override\n  public short getShort(int rowId) {\n    return values.getShort(getValuesIndex(rowId));\n  }\n\n  @Override\n  public int getInt(int rowId) {\n    return values.getInt(getValuesIndex(rowId));\n  }\n\n  @Override\n  public long getLong(int rowId) {\n    return values.getLong(getValuesIndex(rowId));\n  }\n\n  @Override\n  public long getLongDecimal(int rowId) {\n    return values.getLongDecimal(getValuesIndex(rowId));\n  }\n\n  @Override\n  public float getFloat(int rowId) {\n    return values.getFloat(getValuesIndex(rowId));\n  }\n\n  @Override\n  public double getDouble(int rowId) {\n    return values.getDouble(getValuesIndex(rowId));\n  }\n\n  @Override\n  public UTF8String getUTF8String(int rowId) {\n    return values.getUTF8String(getValuesIndex(rowId));\n  }\n\n  @Override\n  public byte[] getBinary(int rowId) {\n    return values.getBinary(getValuesIndex(rowId));\n  }\n\n  @Override\n  public ColumnarArray getArray(int rowId) {\n    return values.getArray(getValuesIndex(rowId));\n  }\n\n  @Override\n  public ColumnarMap getMap(int rowId) {\n    return values.getMap(getValuesIndex(rowId));\n  }\n\n  @Override\n  public ColumnVector getChild(int ordinal) {\n    // Return the child from the original vector\n    return values.getChild(ordinal);\n  }\n\n  @Override\n  public DictionaryProvider getDictionaryProvider() {\n    return values.getDictionaryProvider();\n  }\n\n  @Override\n  public CometVector slice(int offset, int length) {\n    if (offset < 0 || length < 0 || offset + length > numValues) {\n      throw new IllegalArgumentException(\"Invalid slice parameters\");\n    }\n    // Get the current indices and slice them\n    int[] currentIndices = getSelectedIndices();\n    int[] slicedIndices = new int[length];\n    // This is not a very efficient version of slicing, but that is\n    // not important because we are not likely to use it.\n    System.arraycopy(currentIndices, offset, slicedIndices, 0, length);\n    return new CometSelectionVector(values, slicedIndices, length);\n  }\n\n  @Override\n  public org.apache.arrow.vector.ValueVector getValueVector() {\n    return values.getValueVector();\n  }\n\n  @Override\n  public void close() {\n    // Close both the values and indices vectors\n    values.close();\n    indices.close();\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometStructVector.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport java.util.ArrayList;\nimport java.util.List;\n\nimport org.apache.arrow.vector.*;\nimport org.apache.arrow.vector.complex.StructVector;\nimport org.apache.arrow.vector.dictionary.DictionaryProvider;\nimport org.apache.arrow.vector.util.TransferPair;\nimport org.apache.spark.sql.vectorized.ColumnVector;\n\n/** A Comet column vector for struct type. */\npublic class CometStructVector extends CometDecodedVector {\n  final List<ColumnVector> children;\n  final DictionaryProvider dictionaryProvider;\n\n  public CometStructVector(\n      ValueVector vector, boolean useDecimal128, DictionaryProvider dictionaryProvider) {\n    super(vector, vector.getField(), useDecimal128);\n\n    StructVector structVector = ((StructVector) vector);\n\n    int size = structVector.size();\n    List<ColumnVector> children = new ArrayList<>();\n\n    for (int i = 0; i < size; ++i) {\n      ValueVector value = structVector.getVectorById(i);\n      children.add(getVector(value, useDecimal128, dictionaryProvider));\n    }\n    this.children = children;\n    this.dictionaryProvider = dictionaryProvider;\n  }\n\n  @Override\n  public ColumnVector getChild(int i) {\n    return children.get(i);\n  }\n\n  @Override\n  public CometVector slice(int offset, int length) {\n    TransferPair tp = this.valueVector.getTransferPair(this.valueVector.getAllocator());\n    tp.splitAndTransfer(offset, length);\n\n    return new CometStructVector(tp.getTo(), useDecimal128, dictionaryProvider);\n  }\n}\n"
  },
  {
    "path": "common/src/main/java/org/apache/comet/vector/CometVector.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector;\n\nimport java.math.BigDecimal;\nimport java.math.BigInteger;\n\nimport org.apache.arrow.vector.FixedWidthVector;\nimport org.apache.arrow.vector.ValueVector;\nimport org.apache.arrow.vector.complex.ListVector;\nimport org.apache.arrow.vector.complex.MapVector;\nimport org.apache.arrow.vector.complex.StructVector;\nimport org.apache.arrow.vector.dictionary.Dictionary;\nimport org.apache.arrow.vector.dictionary.DictionaryProvider;\nimport org.apache.arrow.vector.types.pojo.DictionaryEncoding;\nimport org.apache.spark.sql.types.DataType;\nimport org.apache.spark.sql.types.Decimal;\nimport org.apache.spark.sql.types.IntegerType;\nimport org.apache.spark.sql.vectorized.ColumnVector;\nimport org.apache.spark.sql.vectorized.ColumnarArray;\nimport org.apache.spark.sql.vectorized.ColumnarMap;\nimport org.apache.spark.unsafe.Platform;\nimport org.apache.spark.unsafe.types.UTF8String;\n\nimport org.apache.comet.IcebergApi;\n\n/** Base class for all Comet column vector implementations. */\n@IcebergApi\npublic abstract class CometVector extends ColumnVector {\n  private static final int DECIMAL_BYTE_WIDTH = 16;\n  private final byte[] DECIMAL_BYTES = new byte[DECIMAL_BYTE_WIDTH];\n  protected final boolean useDecimal128;\n\n  private static final long decimalValOffset;\n\n  static {\n    try {\n      java.lang.reflect.Field unsafeField = sun.misc.Unsafe.class.getDeclaredField(\"theUnsafe\");\n      unsafeField.setAccessible(true);\n      final sun.misc.Unsafe unsafe = (sun.misc.Unsafe) unsafeField.get(null);\n      decimalValOffset = unsafe.objectFieldOffset(Decimal.class.getDeclaredField(\"decimalVal\"));\n    } catch (Throwable e) {\n      throw new RuntimeException(e);\n    }\n  }\n\n  @IcebergApi\n  public CometVector(DataType type, boolean useDecimal128) {\n    super(type);\n    this.useDecimal128 = useDecimal128;\n  }\n\n  /**\n   * Sets the number of nulls in this vector to be 'numNulls'. This is used when the vector is\n   * reused across batches.\n   */\n  @IcebergApi\n  public abstract void setNumNulls(int numNulls);\n\n  /**\n   * Sets the number of values (including both nulls and non-nulls) in this vector to be\n   * 'numValues'. This is used when the vector is reused across batches.\n   */\n  @IcebergApi\n  public abstract void setNumValues(int numValues);\n\n  /** Returns the number of values in this vector. */\n  @IcebergApi\n  public abstract int numValues();\n\n  /** Whether the elements of this vector are of fixed length. 
*/\n  public boolean isFixedLength() {\n    return getValueVector() instanceof FixedWidthVector;\n  }\n\n  @Override\n  public Decimal getDecimal(int i, int precision, int scale) {\n    if (isNullAt(i)) return null;\n    if (!useDecimal128 && precision <= Decimal.MAX_INT_DIGITS() && type instanceof IntegerType) {\n      return createDecimal(getInt(i), precision, scale);\n    } else if (precision <= Decimal.MAX_LONG_DIGITS()) {\n      return createDecimal(useDecimal128 ? getLongDecimal(i) : getLong(i), precision, scale);\n    } else {\n      byte[] bytes = getBinaryDecimal(i);\n      BigInteger bigInteger = new BigInteger(bytes);\n      BigDecimal javaDecimal = new BigDecimal(bigInteger, scale);\n      return createDecimal(javaDecimal, precision, scale);\n    }\n  }\n\n  /** This method skips the negative scale check, otherwise the same as Decimal.createUnsafe(). */\n  private Decimal createDecimal(long unscaled, int precision, int scale) {\n    Decimal dec = new Decimal();\n    dec.org$apache$spark$sql$types$Decimal$$longVal_$eq(unscaled);\n    dec.org$apache$spark$sql$types$Decimal$$_precision_$eq(precision);\n    dec.org$apache$spark$sql$types$Decimal$$_scale_$eq(scale);\n    return dec;\n  }\n\n  /** This method skips a few checks, otherwise the same as Decimal.apply(). */\n  private Decimal createDecimal(BigDecimal value, int precision, int scale) {\n    Decimal dec = new Decimal();\n    Platform.putObjectVolatile(dec, decimalValOffset, new scala.math.BigDecimal(value));\n    dec.org$apache$spark$sql$types$Decimal$$_precision_$eq(precision);\n    dec.org$apache$spark$sql$types$Decimal$$_scale_$eq(scale);\n    return dec;\n  }\n\n  /**\n   * Reads a 16-byte byte array which is encoded big-endian for decimal128 into an internal byte\n   * array.\n   */\n  byte[] getBinaryDecimal(int i) {\n    return copyBinaryDecimal(i, DECIMAL_BYTES);\n  }\n\n  /** Reads a 16-byte byte array which is encoded big-endian for decimal128. 
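Arrow stores decimal128 values little-endian, so the bytes are reversed into {@code dest}. 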
*/\n  public byte[] copyBinaryDecimal(int i, byte[] dest) {\n    long valueBufferAddress = getValueVector().getDataBuffer().memoryAddress();\n    Platform.copyMemory(\n        null,\n        valueBufferAddress + (long) i * DECIMAL_BYTE_WIDTH,\n        dest,\n        Platform.BYTE_ARRAY_OFFSET,\n        DECIMAL_BYTE_WIDTH);\n    // Decimal is stored little-endian in Arrow, so we need to reverse the bytes here\n    for (int j = 0, k = DECIMAL_BYTE_WIDTH - 1; j < DECIMAL_BYTE_WIDTH / 2; j++, k--) {\n      byte tmp = dest[j];\n      dest[j] = dest[k];\n      dest[k] = tmp;\n    }\n    return dest;\n  }\n\n  @Override\n  public boolean getBoolean(int rowId) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public byte getByte(int rowId) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public short getShort(int rowId) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public int getInt(int rowId) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public long getLong(int rowId) {\n    throw notImplementedException();\n  }\n\n  public long getLongDecimal(int rowId) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public float getFloat(int rowId) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public double getDouble(int rowId) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public UTF8String getUTF8String(int rowId) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public byte[] getBinary(int rowId) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public ColumnarArray getArray(int i) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public ColumnarMap getMap(int i) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public ColumnVector getChild(int i) {\n    throw notImplementedException();\n  }\n\n  @Override\n  public void close() {\n    getValueVector().close();\n  }\n\n  public DictionaryProvider getDictionaryProvider() {\n    throw new UnsupportedOperationException(\"Not implemented\");\n  }\n\n  @IcebergApi\n  public abstract ValueVector getValueVector();\n\n  /**\n   * Returns a zero-copying new vector that contains the values from [offset, offset + length).\n   *\n   * @param offset the offset of the new vector\n   * @param length the length of the new vector\n   * @return the new vector\n   */\n  @IcebergApi\n  public abstract CometVector slice(int offset, int length);\n\n  /**\n   * Returns a corresponding `CometVector` implementation based on the given Arrow `ValueVector`.\n   *\n   * @param vector Arrow `ValueVector`\n   * @param useDecimal128 Whether to use Decimal128 for decimal column\n   * @return `CometVector` implementation\n   */\n  public static CometVector getVector(\n      ValueVector vector, boolean useDecimal128, DictionaryProvider dictionaryProvider) {\n    if (vector instanceof StructVector) {\n      return new CometStructVector(vector, useDecimal128, dictionaryProvider);\n    } else if (vector instanceof MapVector) {\n      return new CometMapVector(vector, useDecimal128, dictionaryProvider);\n    } else if (vector instanceof ListVector) {\n      return new CometListVector(vector, useDecimal128, dictionaryProvider);\n    } else {\n      DictionaryEncoding dictionaryEncoding = vector.getField().getDictionary();\n      CometPlainVector cometVector = new CometPlainVector(vector, useDecimal128);\n\n      if (dictionaryEncoding == null) {\n        return cometVector;\n      } else {\n        Dictionary dictionary = 
dictionaryProvider.lookup(dictionaryEncoding.getId());\n        CometPlainVector dictionaryVector =\n            new CometPlainVector(dictionary.getVector(), useDecimal128);\n        CometDictionary cometDictionary = new CometDictionary(dictionaryVector);\n\n        return new CometDictionaryVector(\n            cometVector, cometDictionary, dictionaryProvider, useDecimal128);\n      }\n    }\n  }\n\n  protected static CometVector getVector(ValueVector vector, boolean useDecimal128) {\n    return getVector(vector, useDecimal128, null);\n  }\n\n  private UnsupportedOperationException notImplementedException() {\n    return new UnsupportedOperationException(\n        \"CometVector subclass \" + this.getClass().getName() + \" does not implement this method\");\n  }\n}\n"
  },
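The `CometVector.getVector` factory above dispatches on the concrete Arrow vector type: struct, map, and list vectors get dedicated wrappers, dictionary-encoded vectors are resolved through the provider, and everything else becomes a plain vector. Below is a minimal sketch of wrapping a plain Arrow `IntVector` and reading it through the Spark `ColumnVector` API; it assumes the `CometPlainVector` subclass (defined elsewhere in this module, not shown here) implements `isNullAt` and `getInt`, so treat it as illustrative rather than canonical usage.

```scala
import org.apache.arrow.memory.RootAllocator
import org.apache.arrow.vector.IntVector

import org.apache.comet.vector.CometVector

// Illustrative sketch, not from the repo: wrap a plain Arrow vector in a
// CometVector via the getVector factory and read it as a Spark ColumnVector.
object CometVectorSketch {
  def main(args: Array[String]): Unit = {
    val allocator = new RootAllocator(Long.MaxValue)
    val ints = new IntVector("c0", allocator)
    ints.allocateNew(3)
    ints.set(0, 1)
    ints.setNull(1)
    ints.set(2, 3)
    ints.setValueCount(3)

    // The field carries no dictionary encoding, so getVector returns a plain
    // (non-dictionary) CometVector and the provider argument may be null.
    val col: CometVector = CometVector.getVector(ints, false, null)
    assert(col.isNullAt(1)) // assumes the plain subclass implements this
    assert(col.getInt(2) == 3) // likewise
    col.close() // releases the underlying Arrow buffers
    allocator.close()
  }
}
```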
  {
    "path": "common/src/main/resources/log4j2.properties",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n# \n#   http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Set everything to be logged to the file target/unit-tests.log\nrootLogger.level = info\nrootLogger.appenderRef.file.ref = ${sys:test.appender:-File}\n\nappender.file.type = File\nappender.file.name = File\nappender.file.fileName = target/unit-tests.log\nappender.file.layout.type = PatternLayout\nappender.file.layout.pattern = %d{yy/MM/dd HH:mm:ss.SSS} %t %p %c{1}: %m%n\n\n# Tests that launch java subprocesses can set the \"test.appender\" system property to\n# \"console\" to avoid having the child process's logs overwrite the unit test's\n# log file.\nappender.console.type = Console\nappender.console.name = console\nappender.console.target = SYSTEM_ERR\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %t: %m%n\n\n# Ignore messages below warning level from Jetty, because it's a bit verbose\nlogger.jetty.name = org.sparkproject.jetty\nlogger.jetty.level = warn\n\n"
  },
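The `${sys:test.appender:-File}` lookup above selects the root appender from the `test.appender` JVM system property, falling back to the file appender. A hedged sketch of how a test might apply the console override when forking a child JVM (the helper name and classpath handling are invented for illustration):

```scala
import scala.sys.process._

// Hypothetical helper: fork a child JVM with -Dtest.appender=console so the
// child logs to stderr instead of appending to the parent's
// target/unit-tests.log, as the comment in the properties file describes.
def runChildJvm(mainClass: String, classpath: String): Int =
  Seq("java", "-Dtest.appender=console", "-cp", classpath, mainClass).!
```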
  {
    "path": "common/src/main/scala/org/apache/comet/CometConf.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.util.Locale\nimport java.util.concurrent.TimeUnit\n\nimport scala.collection.mutable.ListBuffer\n\nimport org.apache.spark.network.util.ByteUnit\nimport org.apache.spark.network.util.JavaUtils\nimport org.apache.spark.sql.comet.util.Utils\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.shims.ShimCometConf\n\n/**\n * Configurations for a Comet application. Mostly inspired by [[SQLConf]] in Spark.\n *\n * To get the value of a Comet config key from a [[SQLConf]], you can do the following:\n *\n * {{{\n *   CometConf.COMET_ENABLED.get\n * }}}\n *\n * which retrieves the config value from the thread-local [[SQLConf]] object. Alternatively, you\n * can also explicitly pass a [[SQLConf]] object to the `get` method.\n */\nobject CometConf extends ShimCometConf {\n\n  val COMPAT_GUIDE: String = \"For more information, refer to the Comet Compatibility \" +\n    \"Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html)\"\n\n  private val TUNING_GUIDE = \"For more information, refer to the Comet Tuning \" +\n    \"Guide (https://datafusion.apache.org/comet/user-guide/tuning.html)\"\n\n  private val TRACING_GUIDE = \"For more information, refer to the Comet Tracing \" +\n    \"Guide (https://datafusion.apache.org/comet/contributor-guide/tracing.html)\"\n\n  private val DEBUGGING_GUIDE = \"For more information, refer to the Comet Debugging \" +\n    \"Guide (https://datafusion.apache.org/comet/contributor-guide/debugging.html)\"\n\n  /** List of all configs that is used for generating documentation */\n  val allConfs = new ListBuffer[ConfigEntry[_]]\n\n  private val CATEGORY_SCAN = \"scan\"\n  private val CATEGORY_PARQUET = \"parquet\"\n  private val CATEGORY_EXEC = \"exec\"\n  private val CATEGORY_EXEC_EXPLAIN = \"exec_explain\"\n  private val CATEGORY_ENABLE_EXEC = \"enable_exec\"\n  private val CATEGORY_SHUFFLE = \"shuffle\"\n  private val CATEGORY_TUNING = \"tuning\"\n  private val CATEGORY_TESTING = \"testing\"\n\n  def register(conf: ConfigEntry[_]): Unit = {\n    assert(conf.category.nonEmpty, s\"${conf.key} does not have a category defined\")\n    allConfs.append(conf)\n  }\n\n  def conf(key: String): ConfigBuilder = ConfigBuilder(key)\n\n  val COMET_PREFIX = \"spark.comet\";\n\n  val COMET_EXEC_CONFIG_PREFIX: String = s\"$COMET_PREFIX.exec\"\n\n  val COMET_EXPR_CONFIG_PREFIX: String = s\"$COMET_PREFIX.expression\"\n\n  val COMET_OPERATOR_CONFIG_PREFIX: String = s\"$COMET_PREFIX.operator\"\n\n  val COMET_ENABLED: ConfigEntry[Boolean] = conf(\"spark.comet.enabled\")\n    .category(CATEGORY_EXEC)\n    .doc(\n      \"Whether to enable Comet extension for Spark. 
When this is turned on, Spark will use \" +\n        \"Comet to read Parquet data sources. Note that to enable native vectorized execution, \" +\n        \"both this config and `spark.comet.exec.enabled` need to be enabled.\")\n    .booleanConf\n    .createWithEnvVarOrDefault(\"ENABLE_COMET\", true)\n\n  val COMET_NATIVE_SCAN_ENABLED: ConfigEntry[Boolean] = conf(\"spark.comet.scan.enabled\")\n    .category(CATEGORY_SCAN)\n    .doc(\n      \"Whether to enable native scans. When this is turned on, Spark will use Comet to \" +\n        \"read supported data sources (currently only Parquet is supported natively). Note \" +\n        \"that to enable native vectorized execution, both this config and \" +\n        \"`spark.comet.exec.enabled` need to be enabled.\")\n    .booleanConf\n    .createWithDefault(true)\n\n  val COMET_NATIVE_PARQUET_WRITE_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.parquet.write.enabled\")\n      .category(CATEGORY_TESTING)\n      .doc(\n        \"Whether to enable native Parquet write through Comet. When enabled, \" +\n          \"Comet will intercept Parquet write operations and execute them natively. This \" +\n          \"feature is highly experimental and only partially implemented. It should not \" +\n          \"be used in production.\")\n      .booleanConf\n      .createWithEnvVarOrDefault(\"ENABLE_COMET_WRITE\", false)\n\n  val SCAN_NATIVE_DATAFUSION = \"native_datafusion\"\n  val SCAN_NATIVE_ICEBERG_COMPAT = \"native_iceberg_compat\"\n  val SCAN_AUTO = \"auto\"\n\n  val COMET_NATIVE_SCAN_IMPL: ConfigEntry[String] = conf(\"spark.comet.scan.impl\")\n    .category(CATEGORY_PARQUET)\n    .doc(\n      \"The implementation of Comet's Parquet scan to use. Available scans are \" +\n        s\"`$SCAN_NATIVE_DATAFUSION` and `$SCAN_NATIVE_ICEBERG_COMPAT`. \" +\n        s\"`$SCAN_NATIVE_DATAFUSION` is a fully native implementation, and \" +\n        s\"`$SCAN_NATIVE_ICEBERG_COMPAT` is a hybrid implementation that supports some \" +\n        \"additional features, such as row indexes and field ids. \" +\n        s\"`$SCAN_AUTO` (default) chooses the best available scan based on the scan schema.\")\n    .stringConf\n    .transform(_.toLowerCase(Locale.ROOT))\n    .checkValues(Set(SCAN_NATIVE_DATAFUSION, SCAN_NATIVE_ICEBERG_COMPAT, SCAN_AUTO))\n    .createWithEnvVarOrDefault(\"COMET_PARQUET_SCAN_IMPL\", SCAN_AUTO)\n\n  val COMET_ICEBERG_NATIVE_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.scan.icebergNative.enabled\")\n      .category(CATEGORY_SCAN)\n      .doc(\n        \"Whether to enable native Iceberg table scan using iceberg-rust. When enabled, \" +\n          \"Iceberg tables are read directly through native execution, bypassing Spark's \" +\n          \"DataSource V2 API for better performance.\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_ICEBERG_DATA_FILE_CONCURRENCY_LIMIT: ConfigEntry[Int] =\n    conf(\"spark.comet.scan.icebergNative.dataFileConcurrencyLimit\")\n      .category(CATEGORY_SCAN)\n      .doc(\n        \"The number of Iceberg data files to read concurrently within a single task. \" +\n          \"Higher values improve throughput for tables with many small files by overlapping \" +\n          \"I/O latency, but increase memory usage. 
Values between 2 and 8 are suggested.\")\n      .intConf\n      .checkValue(v => v > 0, \"Data file concurrency limit must be positive\")\n      .createWithDefault(1)\n\n  val COMET_CSV_V2_NATIVE_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.scan.csv.v2.enabled\")\n      .category(CATEGORY_TESTING)\n      .doc(\n        \"Whether to use the native Comet V2 CSV reader for improved performance. \" +\n          \"By default, the standard Spark CSV reader is used. This is experimental, \" +\n          \"and performance benefits are workload-dependent.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_RESPECT_PARQUET_FILTER_PUSHDOWN: ConfigEntry[Boolean] =\n    conf(\"spark.comet.parquet.respectFilterPushdown\")\n      .category(CATEGORY_PARQUET)\n      .doc(\n        \"Whether to respect Spark's PARQUET_FILTER_PUSHDOWN_ENABLED config. This needs to be \" +\n          \"respected when running the Spark SQL test suite, but the default setting \" +\n          \"results in poor performance in Comet when using the new native scans, \" +\n          \"so it is disabled by default.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_PARQUET_PARALLEL_IO_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.parquet.read.parallel.io.enabled\")\n      .category(CATEGORY_PARQUET)\n      .doc(\n        \"Whether to enable Comet's parallel reader for Parquet files. The parallel reader reads \" +\n          \"ranges of consecutive data in a file in parallel. It is faster for large files and \" +\n          \"row groups but uses more resources.\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_PARQUET_PARALLEL_IO_THREADS: ConfigEntry[Int] =\n    conf(\"spark.comet.parquet.read.parallel.io.thread-pool.size\")\n      .category(CATEGORY_PARQUET)\n      .doc(\"The maximum number of parallel threads the parallel reader will use in a single \" +\n        \"executor. For executors configured with a smaller number of cores, use a smaller number.\")\n      .intConf\n      .createWithDefault(16)\n\n  val COMET_IO_MERGE_RANGES: ConfigEntry[Boolean] =\n    conf(\"spark.comet.parquet.read.io.mergeRanges\")\n      .category(CATEGORY_PARQUET)\n      .doc(\n        \"When enabled, the parallel reader will try to merge ranges of data that are separated \" +\n          \"by less than `comet.parquet.read.io.mergeRanges.delta` bytes. Longer continuous reads \" +\n          \"are faster on cloud storage.\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_IO_MERGE_RANGES_DELTA: ConfigEntry[Int] =\n    conf(\"spark.comet.parquet.read.io.mergeRanges.delta\")\n      .category(CATEGORY_PARQUET)\n      .doc(\"The delta in bytes between consecutive read ranges below which the parallel reader \" +\n        \"will try to merge the ranges. The default is 8MB.\")\n      .intConf\n      .createWithDefault(1 << 23) // 8 MB\n\n  val COMET_IO_ADJUST_READRANGE_SKEW: ConfigEntry[Boolean] =\n    conf(\"spark.comet.parquet.read.io.adjust.readRange.skew\")\n      .category(CATEGORY_PARQUET)\n      .doc(\"In the parallel reader, if the read ranges submitted are skewed in size, this \" +\n        \"option will cause the reader to break up larger read ranges into smaller ranges to \" +\n        \"reduce the skew. 
This will result in a slightly larger number of connections opened to \" +\n        \"the file system but may give improved performance.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_CONVERT_FROM_PARQUET_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.convert.parquet.enabled\")\n      .category(CATEGORY_EXEC)\n      .doc(\n        \"When enabled, data from Spark (non-native) Parquet v1 and v2 scans will be converted to \" +\n          \"Arrow format.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_CONVERT_FROM_JSON_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.convert.json.enabled\")\n      .category(CATEGORY_EXEC)\n      .doc(\n        \"When enabled, data from Spark (non-native) JSON v1 and v2 scans will be converted to \" +\n          \"Arrow format.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_CONVERT_FROM_CSV_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.convert.csv.enabled\")\n      .category(CATEGORY_EXEC)\n      .doc(\n        \"When enabled, data from Spark (non-native) CSV v1 and v2 scans will be converted to \" +\n          \"Arrow format.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_EXEC_ENABLED: ConfigEntry[Boolean] = conf(s\"$COMET_EXEC_CONFIG_PREFIX.enabled\")\n    .category(CATEGORY_EXEC)\n    .doc(\n      \"Whether to enable Comet native vectorized execution for Spark. This controls whether \" +\n        \"Spark should convert operators into their Comet counterparts and execute them in \" +\n        \"native space. Note: each operator is associated with a separate config in the \" +\n        \"format of `spark.comet.exec.<operator_name>.enabled` at the moment, and both that \" +\n        \"config and this one need to be turned on for the operator to be executed \" +\n        \"natively.\")\n    .booleanConf\n    .createWithDefault(true)\n\n  val COMET_EXEC_PROJECT_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"project\", defaultValue = true)\n  val COMET_EXEC_FILTER_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"filter\", defaultValue = true)\n  val COMET_EXEC_SORT_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"sort\", defaultValue = true)\n  val COMET_EXEC_LOCAL_LIMIT_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"localLimit\", defaultValue = true)\n  val COMET_EXEC_GLOBAL_LIMIT_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"globalLimit\", defaultValue = true)\n  val COMET_EXEC_BROADCAST_HASH_JOIN_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"broadcastHashJoin\", defaultValue = true)\n  val COMET_EXEC_BROADCAST_EXCHANGE_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"broadcastExchange\", defaultValue = true)\n  val COMET_EXEC_HASH_JOIN_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"hashJoin\", defaultValue = true)\n  val COMET_EXEC_SORT_MERGE_JOIN_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"sortMergeJoin\", defaultValue = true)\n  val COMET_EXEC_AGGREGATE_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"aggregate\", defaultValue = true)\n  val COMET_EXEC_COLLECT_LIMIT_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"collectLimit\", defaultValue = true)\n  val COMET_EXEC_COALESCE_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"coalesce\", defaultValue = true)\n  val COMET_EXEC_UNION_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"union\", 
defaultValue = true)\n  val COMET_EXEC_EXPAND_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"expand\", defaultValue = true)\n  val COMET_EXEC_EXPLODE_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"explode\", defaultValue = true)\n  val COMET_EXEC_WINDOW_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"window\", defaultValue = true)\n  val COMET_EXEC_TAKE_ORDERED_AND_PROJECT_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"takeOrderedAndProject\", defaultValue = true)\n  val COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED: ConfigEntry[Boolean] =\n    createExecEnabledConfig(\"localTableScan\", defaultValue = false)\n\n  val COMET_NATIVE_COLUMNAR_TO_ROW_ENABLED: ConfigEntry[Boolean] =\n    conf(s\"$COMET_EXEC_CONFIG_PREFIX.columnarToRow.native.enabled\")\n      .category(CATEGORY_EXEC)\n      .doc(\n        \"Whether to enable native columnar to row conversion. When enabled, Comet will use \" +\n          \"native Rust code to convert Arrow columnar data to Spark UnsafeRow format instead \" +\n          \"of the JVM implementation. This can improve performance for queries that need to \" +\n          \"convert between columnar and row formats.\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_EXEC_SORT_MERGE_JOIN_WITH_JOIN_FILTER_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.exec.sortMergeJoinWithJoinFilter.enabled\")\n      .category(CATEGORY_ENABLE_EXEC)\n      .doc(\"Experimental support for Sort Merge Join with filter\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_TRACING_ENABLED: ConfigEntry[Boolean] = conf(\"spark.comet.tracing.enabled\")\n    .category(CATEGORY_TUNING)\n    .doc(s\"Enable fine-grained tracing of events and memory usage. $TRACING_GUIDE.\")\n    .booleanConf\n    .createWithDefault(false)\n\n  val COMET_ONHEAP_MEMORY_OVERHEAD: ConfigEntry[Long] = conf(\"spark.comet.memoryOverhead\")\n    .category(CATEGORY_TESTING)\n    .doc(\n      \"The amount of additional memory to be allocated per executor process for Comet, in MiB, \" +\n        \"when running Spark in on-heap mode.\")\n    .bytesConf(ByteUnit.MiB)\n    .createWithDefault(1024)\n\n  val COMET_EXEC_SHUFFLE_ENABLED: ConfigEntry[Boolean] =\n    conf(s\"$COMET_EXEC_CONFIG_PREFIX.shuffle.enabled\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\n        \"Whether to enable Comet native shuffle. \" +\n          \"Note that this requires setting `spark.shuffle.manager` to \" +\n          \"`org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager`. \" +\n          \"`spark.shuffle.manager` must be set before starting the Spark application and \" +\n          \"cannot be changed during the application.\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_SHUFFLE_DIRECT_READ_ENABLED: ConfigEntry[Boolean] =\n    conf(s\"$COMET_EXEC_CONFIG_PREFIX.shuffle.directRead.enabled\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\n        \"When enabled, native operators that consume shuffle output will read \" +\n          \"compressed shuffle blocks directly in native code, bypassing Arrow FFI. \" +\n          \"Applies to both native shuffle and JVM columnar shuffle. 
\" +\n          \"Requires spark.comet.exec.shuffle.enabled to be true.\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_SHUFFLE_MODE: ConfigEntry[String] = conf(s\"$COMET_EXEC_CONFIG_PREFIX.shuffle.mode\")\n    .category(CATEGORY_SHUFFLE)\n    .doc(\n      \"This is test config to allow tests to force a particular shuffle implementation to be \" +\n        \"used. Valid values are `jvm` for Columnar Shuffle, `native` for Native Shuffle, \" +\n        s\"and `auto` to pick the best supported option (`native` has priority). $TUNING_GUIDE.\")\n    .internal()\n    .stringConf\n    .transform(_.toLowerCase(Locale.ROOT))\n    .checkValues(Set(\"native\", \"jvm\", \"auto\"))\n    .createWithDefault(\"auto\")\n\n  val COMET_EXEC_BROADCAST_FORCE_ENABLED: ConfigEntry[Boolean] =\n    conf(s\"$COMET_EXEC_CONFIG_PREFIX.broadcast.enabled\")\n      .category(CATEGORY_EXEC)\n      .doc(\n        \"Whether to force enabling broadcasting for Comet native operators. \" +\n          \"Comet broadcast feature will be enabled automatically by \" +\n          \"Comet extension. But for unit tests, we need this feature to force enabling it \" +\n          \"for invalid cases. So this config is only used for unit test.\")\n      .internal()\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_REPLACE_SMJ: ConfigEntry[Boolean] =\n    conf(s\"$COMET_EXEC_CONFIG_PREFIX.replaceSortMergeJoin\")\n      .category(CATEGORY_EXEC)\n      .doc(\"Experimental feature to force Spark to replace SortMergeJoin with ShuffledHashJoin \" +\n        s\"for improved performance. This feature is not stable yet. $TUNING_GUIDE.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_EXEC_SHUFFLE_WITH_HASH_PARTITIONING_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.native.shuffle.partitioning.hash.enabled\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\"Whether to enable hash partitioning for Comet native shuffle.\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.native.shuffle.partitioning.range.enabled\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\"Whether to enable range partitioning for Comet native shuffle.\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_EXEC_SHUFFLE_WITH_ROUND_ROBIN_PARTITIONING_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.native.shuffle.partitioning.roundrobin.enabled\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\n        \"Whether to enable round robin partitioning for Comet native shuffle. \" +\n          \"This is disabled by default because Comet's round-robin produces different \" +\n          \"partition assignments than Spark. Spark sorts rows by their binary UnsafeRow \" +\n          \"representation before assigning partitions, but Comet uses Arrow format which \" +\n          \"has a different binary layout. Instead, Comet implements round-robin as hash \" +\n          \"partitioning on all columns, which achieves the same goals: even distribution, \" +\n          \"deterministic output (for fault tolerance), and no semantic grouping. 
\" +\n          \"Sorted output will be identical to Spark, but unsorted row ordering may differ.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_EXEC_SHUFFLE_WITH_ROUND_ROBIN_PARTITIONING_MAX_HASH_COLUMNS: ConfigEntry[Int] =\n    conf(\"spark.comet.native.shuffle.partitioning.roundrobin.maxHashColumns\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\n        \"The maximum number of columns to hash for round robin partitioning. \" +\n          \"When set to 0 (the default), all columns are hashed. \" +\n          \"When set to a positive value, only the first N columns are used for hashing, \" +\n          \"which can improve performance for wide tables while still providing \" +\n          \"reasonable distribution.\")\n      .intConf\n      .checkValue(\n        v => v >= 0,\n        \"The maximum number of columns to hash for round robin partitioning must be non-negative.\")\n      .createWithDefault(0)\n\n  val COMET_EXEC_SHUFFLE_COMPRESSION_CODEC: ConfigEntry[String] =\n    conf(s\"$COMET_EXEC_CONFIG_PREFIX.shuffle.compression.codec\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\n        \"The codec of Comet native shuffle used to compress shuffle data. lz4, zstd, and \" +\n          \"snappy are supported. Compression can be disabled by setting \" +\n          \"spark.shuffle.compress=false.\")\n      .stringConf\n      .checkValues(Set(\"zstd\", \"lz4\", \"snappy\"))\n      .createWithDefault(\"lz4\")\n\n  val COMET_EXEC_SHUFFLE_COMPRESSION_ZSTD_LEVEL: ConfigEntry[Int] =\n    conf(s\"$COMET_EXEC_CONFIG_PREFIX.shuffle.compression.zstd.level\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\"The compression level to use when compressing shuffle files with zstd.\")\n      .intConf\n      .createWithDefault(1)\n\n  val COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.columnar.shuffle.async.enabled\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\"Whether to enable asynchronous shuffle for Arrow-based shuffle.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_COLUMNAR_SHUFFLE_ASYNC_THREAD_NUM: ConfigEntry[Int] =\n    conf(\"spark.comet.columnar.shuffle.async.thread.num\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\n        \"Number of threads used for Comet async columnar shuffle per shuffle task. \" +\n          \"Note that more threads means more memory requirement to \" +\n          \"buffer shuffle data before flushing to disk. Also, more threads may not always \" +\n          \"improve performance, and should be set based on the number of cores available.\")\n      .intConf\n      .createWithDefault(3)\n\n  val COMET_COLUMNAR_SHUFFLE_ASYNC_MAX_THREAD_NUM: ConfigEntry[Int] = {\n    conf(\"spark.comet.columnar.shuffle.async.max.thread.num\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\"Maximum number of threads on an executor used for Comet async columnar shuffle. \" +\n        \"This is the upper bound of total number of shuffle \" +\n        \"threads per executor. In other words, if the number of cores * the number of shuffle \" +\n        \"threads per task `spark.comet.columnar.shuffle.async.thread.num` is larger than \" +\n        \"this config. 
Comet will use this config as the number of shuffle threads per \" +\n        \"executor instead.\")\n      .intConf\n      .createWithDefault(100)\n  }\n\n  val COMET_COLUMNAR_SHUFFLE_SPILL_THRESHOLD: ConfigEntry[Int] =\n    conf(\"spark.comet.columnar.shuffle.spill.threshold\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\n        \"The number of rows after which Comet columnar shuffle spills to disk. \" +\n          \"For every configured number of rows, a new spill file will be created. \" +\n          \"A higher value means a higher memory requirement to buffer shuffle data before \" +\n          \"flushing to disk. Because columnar shuffle stores data in columnar format, a \" +\n          \"higher value usually helps to improve the shuffle data compression ratio. This is \" +\n          \"an internal config for testing or advanced tuning.\")\n      .internal()\n      .intConf\n      .createWithDefault(Int.MaxValue)\n\n  val COMET_ONHEAP_SHUFFLE_MEMORY_FACTOR: ConfigEntry[Double] =\n    conf(\"spark.comet.columnar.shuffle.memory.factor\")\n      .category(CATEGORY_TESTING)\n      .doc(\"Fraction of Comet memory to be allocated per executor process for columnar shuffle \" +\n        s\"when running in on-heap mode. $TUNING_GUIDE.\")\n      .doubleConf\n      .checkValue(\n        factor => factor > 0,\n        \"Ensure that Comet shuffle memory overhead factor is a double greater than 0\")\n      .createWithDefault(1.0)\n\n  val COMET_BATCH_SIZE: ConfigEntry[Int] = conf(\"spark.comet.batchSize\")\n    .category(CATEGORY_TUNING)\n    .doc(\"The columnar batch size, i.e., the maximum number of rows that a batch can contain.\")\n    .intConf\n    .checkValue(v => v > 0, \"Batch size must be positive\")\n    .createWithDefault(8192)\n\n  val COMET_COLUMNAR_SHUFFLE_BATCH_SIZE: ConfigEntry[Int] =\n    conf(\"spark.comet.columnar.shuffle.batch.size\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\"Batch size when writing out sorted spill files on the native side. Note that \" +\n        \"this should not be larger than batch size (i.e., `spark.comet.batchSize`). Otherwise \" +\n        \"it will produce larger batches than expected in the native operator after shuffle.\")\n      .intConf\n      .checkValue(\n        v => v <= COMET_BATCH_SIZE.get(),\n        \"Should not be larger than batch size `spark.comet.batchSize`\")\n      .createWithDefault(8192)\n\n  val COMET_SHUFFLE_WRITE_BUFFER_SIZE: ConfigEntry[Long] =\n    conf(s\"$COMET_EXEC_CONFIG_PREFIX.shuffle.writeBufferSize\")\n      .category(CATEGORY_SHUFFLE)\n      .doc(\"Size of the write buffer in bytes used by the native shuffle writer when writing \" +\n        \"shuffle data to disk. Larger values may improve write performance by reducing \" +\n        \"the number of system calls, but will use more memory. \" +\n        \"The default is 1MB, which provides a good balance between performance and memory usage.\")\n      .bytesConf(ByteUnit.MiB)\n      .checkValue(v => v > 0, \"Write buffer size must be positive\")\n      .createWithDefault(1)\n\n  val COMET_SHUFFLE_PREFER_DICTIONARY_RATIO: ConfigEntry[Double] = conf(\n    \"spark.comet.shuffle.preferDictionary.ratio\")\n    .category(CATEGORY_SHUFFLE)\n    .doc(\n      \"The ratio of total values to distinct values in a string column to decide whether to \" +\n        \"prefer dictionary encoding when shuffling the column. If the ratio is higher than \" +\n        \"this config, dictionary encoding will be used when shuffling the string column. 
This config \" +\n        \"is effective if it is higher than 1.0. Note that this \" +\n        \"config is only used when `spark.comet.exec.shuffle.mode` is `jvm`.\")\n    .doubleConf\n    .createWithDefault(10.0)\n\n  val COMET_EXCHANGE_SIZE_MULTIPLIER: ConfigEntry[Double] = conf(\n    \"spark.comet.shuffle.sizeInBytesMultiplier\")\n    .category(CATEGORY_SHUFFLE)\n    .doc(\n      \"Comet reports smaller sizes for shuffle due to using Arrow's columnar memory format \" +\n        \"and this can result in Spark choosing a different join strategy due to the estimated \" +\n        \"size of the exchange being smaller. Comet will multiple sizeInBytes by this amount to \" +\n        \"avoid regressions in join strategy.\")\n    .doubleConf\n    .createWithDefault(1.0)\n\n  val COMET_DPP_FALLBACK_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.dppFallback.enabled\")\n      .category(CATEGORY_EXEC)\n      .doc(\"Whether to fall back to Spark for queries that use DPP.\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_DEBUG_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.debug.enabled\")\n      .category(CATEGORY_EXEC)\n      .doc(\n        \"Whether to enable debug mode for Comet. \" +\n          \"When enabled, Comet will do additional checks for debugging purpose. For example, \" +\n          \"validating array when importing arrays from JVM at native side. Note that these \" +\n          \"checks may be expensive in performance and should only be enabled for debugging \" +\n          \"purpose.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  // Used on native side. Check spark_config.rs how the config is used\n  val COMET_DEBUG_MEMORY_ENABLED: ConfigEntry[Boolean] =\n    conf(s\"$COMET_PREFIX.debug.memory\")\n      .category(CATEGORY_TESTING)\n      .doc(s\"When enabled, log all native memory pool interactions. $DEBUGGING_GUIDE.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_EXTENDED_EXPLAIN_FORMAT_VERBOSE = \"verbose\"\n  val COMET_EXTENDED_EXPLAIN_FORMAT_FALLBACK = \"fallback\"\n\n  val COMET_EXTENDED_EXPLAIN_FORMAT: ConfigEntry[String] =\n    conf(\"spark.comet.explain.format\")\n      .category(CATEGORY_EXEC_EXPLAIN)\n      .doc(\"Choose extended explain output. The default format of \" +\n        s\"'$COMET_EXTENDED_EXPLAIN_FORMAT_VERBOSE' will provide the full query plan annotated \" +\n        \"with fallback reasons as well as a summary of how much of the plan was accelerated \" +\n        s\"by Comet. The format '$COMET_EXTENDED_EXPLAIN_FORMAT_FALLBACK' provides a list of \" +\n        \"fallback reasons instead.\")\n      .stringConf\n      .checkValues(\n        Set(COMET_EXTENDED_EXPLAIN_FORMAT_VERBOSE, COMET_EXTENDED_EXPLAIN_FORMAT_FALLBACK))\n      .createWithDefault(COMET_EXTENDED_EXPLAIN_FORMAT_VERBOSE)\n\n  val COMET_EXPLAIN_NATIVE_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.explain.native.enabled\")\n      .category(CATEGORY_EXEC_EXPLAIN)\n      .doc(\n        \"When this setting is enabled, Comet will provide a tree representation of \" +\n          \"the native query plan before execution and again after execution, with \" +\n          \"metrics.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_EXPLAIN_TRANSFORMATIONS: ConfigEntry[Boolean] =\n    conf(\"spark.comet.explain.rules\")\n      .category(CATEGORY_EXEC_EXPLAIN)\n      .doc(\"When this setting is enabled, Comet will log all plan transformations performed \" +\n        \"in physical optimizer rules. 
Default: false\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_LOG_FALLBACK_REASONS: ConfigEntry[Boolean] =\n    conf(\"spark.comet.logFallbackReasons.enabled\")\n      .category(CATEGORY_EXEC_EXPLAIN)\n      .doc(\"When this setting is enabled, Comet will log warnings for all fallback reasons.\")\n      .booleanConf\n      .createWithEnvVarOrDefault(\"ENABLE_COMET_LOG_FALLBACK_REASONS\", false)\n\n  val COMET_EXPLAIN_FALLBACK_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.explainFallback.enabled\")\n      .category(CATEGORY_EXEC_EXPLAIN)\n      .doc(\n        \"When this setting is enabled, Comet will provide logging explaining the reason(s) \" +\n          \"why a query stage cannot be executed natively. Set this to false to \" +\n          \"reduce the amount of logging.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_ONHEAP_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.exec.onHeap.enabled\")\n      .category(CATEGORY_TESTING)\n      .doc(\"Whether to allow Comet to run in on-heap mode. Required for running Spark SQL tests.\")\n      .booleanConf\n      .createWithEnvVarOrDefault(\"ENABLE_COMET_ONHEAP\", false)\n\n  val COMET_OFFHEAP_MEMORY_POOL_TYPE: ConfigEntry[String] =\n    conf(\"spark.comet.exec.memoryPool\")\n      .category(CATEGORY_TUNING)\n      .doc(\n        \"The type of memory pool to be used for Comet native execution when running Spark in \" +\n          \"off-heap mode. Available pool types are `greedy_unified` and `fair_unified`. \" +\n          s\"$TUNING_GUIDE.\")\n      .stringConf\n      .createWithDefault(\"fair_unified\")\n\n  val COMET_ONHEAP_MEMORY_POOL_TYPE: ConfigEntry[String] = conf(\n    \"spark.comet.exec.onHeap.memoryPool\")\n    .category(CATEGORY_TESTING)\n    .doc(\n      \"The type of memory pool to be used for Comet native execution \" +\n        \"when running Spark in on-heap mode. Available pool types are `greedy`, `fair_spill`, \" +\n        \"`greedy_task_shared`, `fair_spill_task_shared`, `greedy_global`, `fair_spill_global`, \" +\n        \"and `unbounded`.\")\n    .stringConf\n    .createWithDefault(\"greedy_task_shared\")\n\n  val COMET_OFFHEAP_MEMORY_POOL_FRACTION: ConfigEntry[Double] =\n    conf(\"spark.comet.exec.memoryPool.fraction\")\n      .category(CATEGORY_TUNING)\n      .doc(\n        \"Fraction of off-heap memory pool that is available to Comet. \" +\n          \"Only applies to off-heap mode. \" +\n          s\"$TUNING_GUIDE.\")\n      .doubleConf\n      .createWithDefault(1.0)\n\n  val COMET_NATIVE_LOAD_REQUIRED: ConfigEntry[Boolean] = conf(\"spark.comet.nativeLoadRequired\")\n    .category(CATEGORY_EXEC)\n    .doc(\n      \"Whether to require the Comet native library to load successfully when Comet is enabled. \" +\n        \"If not, Comet will silently fall back to Spark when it fails to load the native lib. \" +\n        \"Otherwise, an error will be thrown and the Spark job will be aborted.\")\n    .booleanConf\n    .createWithDefault(false)\n\n  val COMET_EXCEPTION_ON_LEGACY_DATE_TIMESTAMP: ConfigEntry[Boolean] =\n    conf(\"spark.comet.exceptionOnDatetimeRebase\")\n      .category(CATEGORY_EXEC)\n      .doc(\"Whether to throw an exception when seeing dates/timestamps from the legacy hybrid \" +\n        \"(Julian + Gregorian) calendar. Since Spark 3, dates/timestamps have been written according \" +\n        \"to the Proleptic Gregorian calendar. 
When this is true, Comet will \" +\n        \"throw exceptions when seeing these dates/timestamps that were written by Spark versions \" +\n        \"before 3.0. If this is false, these dates/timestamps will be read as if they were \" +\n        \"written according to the Proleptic Gregorian calendar and will not be rebased.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_USE_DECIMAL_128: ConfigEntry[Boolean] = conf(\"spark.comet.use.decimal128\")\n    .internal()\n    .category(CATEGORY_EXEC)\n    .doc(\"If true, Comet will always use 128 bits to represent a decimal value, regardless of \" +\n      \"its precision. If false, Comet will use 32, 64 and 128 bits respectively depending on \" +\n      \"the precision. N.B. this is NOT a user-facing config but should be inferred and set by \" +\n      \"Comet itself.\")\n    .booleanConf\n    .createWithDefault(false)\n\n  val COMET_USE_LAZY_MATERIALIZATION: ConfigEntry[Boolean] = conf(\n    \"spark.comet.use.lazyMaterialization\")\n    .internal()\n    .category(CATEGORY_PARQUET)\n    .doc(\n      \"Whether to enable lazy materialization for Comet. When this is turned on, Comet will \" +\n        \"read Parquet data source lazily for string and binary columns. For filter operations, \" +\n        \"lazy materialization will improve read performance by skipping unused pages.\")\n    .booleanConf\n    .createWithDefault(true)\n\n  val COMET_SCHEMA_EVOLUTION_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.schemaEvolution.enabled\")\n      .internal()\n      .category(CATEGORY_SCAN)\n      .doc(\"Whether to enable schema evolution in Comet. For instance, promoting an integer \" +\n        \"column to a long column, a float column to a double column, etc. This is automatically \" +\n        \"enabled when reading from Iceberg tables.\")\n      .booleanConf\n      .createWithDefault(COMET_SCHEMA_EVOLUTION_ENABLED_DEFAULT)\n\n  val COMET_ENABLE_PARTIAL_HASH_AGGREGATE: ConfigEntry[Boolean] =\n    conf(\"spark.comet.testing.aggregate.partialMode.enabled\")\n      .internal()\n      .category(CATEGORY_TESTING)\n      .doc(\"This setting is used in unit tests\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_ENABLE_FINAL_HASH_AGGREGATE: ConfigEntry[Boolean] =\n    conf(\"spark.comet.testing.aggregate.finalMode.enabled\")\n      .internal()\n      .category(CATEGORY_TESTING)\n      .doc(\"This setting is used in unit tests\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_SPARK_TO_ARROW_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.sparkToColumnar.enabled\")\n      .category(CATEGORY_EXEC)\n      .doc(\"Whether to enable Spark to Arrow columnar conversion. 
When this is turned on, \" +\n        \"Comet will convert operators in \" +\n        \"`spark.comet.sparkToColumnar.supportedOperatorList` into Arrow columnar format before \" +\n        \"processing.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_SPARK_TO_ARROW_SUPPORTED_OPERATOR_LIST: ConfigEntry[Seq[String]] =\n    conf(\"spark.comet.sparkToColumnar.supportedOperatorList\")\n      .category(CATEGORY_EXEC)\n      .doc(\"A comma-separated list of operators that will be converted to Arrow columnar \" +\n        s\"format when `${COMET_SPARK_TO_ARROW_ENABLED.key}` is true.\")\n      .stringConf\n      .toSequence\n      .createWithDefault(Seq(\"Range,InMemoryTableScan,RDDScan\"))\n\n  val COMET_CASE_CONVERSION_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.caseConversion.enabled\")\n      .category(CATEGORY_EXEC)\n      .doc(\"Java uses locale-specific rules when converting strings to upper or lower case and \" +\n        \"Rust does not, so we disable upper and lower by default.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_PARQUET_UNSIGNED_SMALL_INT_CHECK: ConfigEntry[Boolean] =\n    conf(\"spark.comet.scan.unsignedSmallIntSafetyCheck\")\n      .category(CATEGORY_SCAN)\n      .doc(\n        \"Parquet files may contain unsigned 8-bit integers (UINT_8) which Spark maps to \" +\n          \"ShortType. When this config is true (default), Comet falls back to Spark for \" +\n          \"ShortType columns because we cannot distinguish signed INT16 (safe) from unsigned \" +\n          \"UINT_8 (may produce different results). Set to false to allow native execution of \" +\n          \"ShortType columns if you know your data does not contain unsigned UINT_8 columns \" +\n          s\"from improperly encoded Parquet files. $COMPAT_GUIDE.\")\n      .booleanConf\n      .createWithDefault(true)\n\n  val COMET_EXEC_STRICT_FLOATING_POINT: ConfigEntry[Boolean] =\n    conf(\"spark.comet.exec.strictFloatingPoint\")\n      .category(CATEGORY_EXEC)\n      .doc(\n        \"When enabled, fall back to Spark for floating-point operations that may differ from \" +\n          s\"Spark, such as when comparing or sorting -0.0 and 0.0. $COMPAT_GUIDE.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_METRICS_UPDATE_INTERVAL: ConfigEntry[Long] =\n    conf(\"spark.comet.metrics.updateInterval\")\n      .category(CATEGORY_EXEC)\n      .doc(\"The interval in milliseconds to update metrics. If interval is negative,\" +\n        \" metrics will be updated upon task completion.\")\n      .longConf\n      .createWithDefault(3000L)\n\n  val COMET_METRICS_ENABLED: ConfigEntry[Boolean] =\n    conf(\"spark.comet.metrics.enabled\")\n      .category(CATEGORY_EXEC)\n      .doc(\n        \"Whether to enable Comet metrics reporting through Spark's external monitoring system. \" +\n          \"When enabled, Comet exposes metrics such as native operators, Spark operators, \" +\n          \"queries planned, transitions, and acceleration ratio. These metrics can be \" +\n          \"visualized through tools like Grafana when a metrics sink (e.g., Prometheus) is \" +\n          \"configured. Disabled by default because Spark plan traversal adds overhead and \" +\n          \"metrics require a sink to be useful. 
\" +\n          \"This config must be set before the SparkSession is created to take effect.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_LIBHDFS_SCHEMES_KEY = \"fs.comet.libhdfs.schemes\"\n\n  val COMET_LIBHDFS_SCHEMES: OptionalConfigEntry[String] =\n    conf(s\"spark.hadoop.$COMET_LIBHDFS_SCHEMES_KEY\")\n      .category(CATEGORY_SCAN)\n      .doc(\"Defines filesystem schemes (e.g., hdfs, webhdfs) that the native side accesses \" +\n        \"via libhdfs, separated by commas. Valid only when built with hdfs feature enabled.\")\n      .stringConf\n      .createOptional\n\n  // Used on native side. Check spark_config.rs how the config is used\n  val COMET_MAX_TEMP_DIRECTORY_SIZE: ConfigEntry[Long] =\n    conf(\"spark.comet.maxTempDirectorySize\")\n      .category(CATEGORY_EXEC)\n      .doc(\"The maximum amount of data (in bytes) stored inside the temporary directories.\")\n      .bytesConf(ByteUnit.BYTE)\n      .createWithDefault(100L * 1024 * 1024 * 1024) // 100 GB\n\n  val COMET_RESPECT_DATAFUSION_CONFIGS: ConfigEntry[Boolean] =\n    conf(s\"$COMET_EXEC_CONFIG_PREFIX.respectDataFusionConfigs\")\n      .category(CATEGORY_TESTING)\n      .doc(\n        \"Development and testing configuration option to allow DataFusion configs set in \" +\n          \"Spark configuration settings starting with `spark.comet.datafusion.` to be passed \" +\n          \"into native execution.\")\n      .booleanConf\n      .createWithDefault(false)\n\n  val COMET_STRICT_TESTING: ConfigEntry[Boolean] = conf(s\"$COMET_PREFIX.testing.strict\")\n    .category(CATEGORY_TESTING)\n    .doc(\"Experimental option to enable strict testing, which will fail tests that could be \" +\n      \"more comprehensive, such as checking for a specific fallback reason.\")\n    .booleanConf\n    .createWithEnvVarOrDefault(\"ENABLE_COMET_STRICT_TESTING\", false)\n\n  val COMET_OPERATOR_DATA_WRITING_COMMAND_ALLOW_INCOMPAT: ConfigEntry[Boolean] =\n    createOperatorIncompatConfig(\"DataWritingCommandExec\")\n\n  /** Create a config to enable a specific operator */\n  private def createExecEnabledConfig(\n      exec: String,\n      defaultValue: Boolean,\n      notes: Option[String] = None): ConfigEntry[Boolean] = {\n    conf(s\"$COMET_EXEC_CONFIG_PREFIX.$exec.enabled\")\n      .category(CATEGORY_ENABLE_EXEC)\n      .doc(\n        s\"Whether to enable $exec by default.\" + notes\n          .map(s => s\" $s.\")\n          .getOrElse(\"\"))\n      .booleanConf\n      .createWithDefault(defaultValue)\n  }\n\n  /**\n   * Converts a config key to a valid environment variable name. Example:\n   * \"spark.comet.operator.DataWritingCommandExec.allowIncompatible\" ->\n   * \"SPARK_COMET_OPERATOR_DATAWRITINGCOMMANDEXEC_ALLOWINCOMPATIBLE\"\n   */\n  private def configKeyToEnvVar(configKey: String): String =\n    configKey.toUpperCase(Locale.ROOT).replace('.', '_')\n\n  private def createOperatorIncompatConfig(name: String): ConfigEntry[Boolean] = {\n    val configKey = getOperatorAllowIncompatConfigKey(name)\n    val envVar = configKeyToEnvVar(configKey)\n    conf(configKey)\n      .category(CATEGORY_EXEC)\n      .doc(s\"Whether to allow incompatibility for operator: $name. \" +\n        s\"False by default. 
Can be overridden with $envVar env variable\")\n      .booleanConf\n      .createWithEnvVarOrDefault(envVar, false)\n  }\n\n  def isExprEnabled(name: String, conf: SQLConf = SQLConf.get): Boolean = {\n    getBooleanConf(getExprEnabledConfigKey(name), defaultValue = true, conf)\n  }\n\n  def getExprEnabledConfigKey(name: String): String = {\n    s\"${CometConf.COMET_EXPR_CONFIG_PREFIX}.$name.enabled\"\n  }\n\n  def isExprAllowIncompat(name: String, conf: SQLConf = SQLConf.get): Boolean = {\n    getBooleanConf(getExprAllowIncompatConfigKey(name), defaultValue = false, conf)\n  }\n\n  def getExprAllowIncompatConfigKey(name: String): String = {\n    s\"${CometConf.COMET_EXPR_CONFIG_PREFIX}.$name.allowIncompatible\"\n  }\n\n  def getExprAllowIncompatConfigKey(exprClass: Class[_]): String = {\n    s\"${CometConf.COMET_EXPR_CONFIG_PREFIX}.${exprClass.getSimpleName}.allowIncompatible\"\n  }\n\n  def isOperatorAllowIncompat(name: String, conf: SQLConf = SQLConf.get): Boolean = {\n    getBooleanConf(getOperatorAllowIncompatConfigKey(name), defaultValue = false, conf)\n  }\n\n  def getOperatorAllowIncompatConfigKey(name: String): String = {\n    s\"${CometConf.COMET_OPERATOR_CONFIG_PREFIX}.$name.allowIncompatible\"\n  }\n\n  def getOperatorAllowIncompatConfigKey(exprClass: Class[_]): String = {\n    s\"${CometConf.COMET_OPERATOR_CONFIG_PREFIX}.${exprClass.getSimpleName}.allowIncompatible\"\n  }\n\n  def getBooleanConf(name: String, defaultValue: Boolean, conf: SQLConf): Boolean = {\n    conf.getConfString(name, defaultValue.toString).toLowerCase(Locale.ROOT) == \"true\"\n  }\n}\n\nobject ConfigHelpers {\n  def toNumber[T](s: String, converter: String => T, key: String, configType: String): T = {\n    try {\n      converter(s.trim)\n    } catch {\n      case _: NumberFormatException =>\n        throw new IllegalArgumentException(s\"$key should be $configType, but was $s\")\n    }\n  }\n\n  def toBoolean(s: String, key: String): Boolean = {\n    try {\n      s.trim.toBoolean\n    } catch {\n      case _: IllegalArgumentException =>\n        throw new IllegalArgumentException(s\"$key should be boolean, but was $s\")\n    }\n  }\n\n  def stringToSeq[T](str: String, converter: String => T): Seq[T] = {\n    Utils.stringToSeq(str).map(converter)\n  }\n\n  def seqToString[T](v: Seq[T], stringConverter: T => String): String = {\n    v.map(stringConverter).mkString(\",\")\n  }\n\n  def timeFromString(str: String, unit: TimeUnit): Long = JavaUtils.timeStringAs(str, unit)\n\n  def timeToString(v: Long, unit: TimeUnit): String =\n    TimeUnit.MILLISECONDS.convert(v, unit) + \"ms\"\n\n  def byteFromString(str: String, unit: ByteUnit): Long = {\n    val (input, multiplier) =\n      if (str.nonEmpty && str.charAt(0) == '-') {\n        (str.substring(1), -1)\n      } else {\n        (str, 1)\n      }\n    multiplier * JavaUtils.byteStringAs(input, unit)\n  }\n\n  def byteToString(v: Long, unit: ByteUnit): String = unit.convertTo(v, ByteUnit.BYTE) + \"b\"\n}\n\nprivate class TypedConfigBuilder[T](\n    val parent: ConfigBuilder,\n    val converter: String => T,\n    val stringConverter: T => String) {\n\n  import ConfigHelpers._\n\n  def this(parent: ConfigBuilder, converter: String => T) = {\n    this(parent, converter, Option(_).map(_.toString).orNull)\n  }\n\n  /** Apply a transformation to the user-provided values of the config entry. 
*/\n  def transform(fn: T => T): TypedConfigBuilder[T] = {\n    new TypedConfigBuilder(parent, s => fn(converter(s)), stringConverter)\n  }\n\n  /** Checks if the user-provided value for the config matches the validator. */\n  def checkValue(validator: T => Boolean, errorMsg: String): TypedConfigBuilder[T] = {\n    transform { v =>\n      if (!validator(v)) {\n        throw new IllegalArgumentException(s\"'$v' in ${parent.key} is invalid. $errorMsg\")\n      }\n      v\n    }\n  }\n\n  /** Check that user-provided values for the config match a pre-defined set. */\n  def checkValues(validValues: Set[T]): TypedConfigBuilder[T] = {\n    transform { v =>\n      if (!validValues.contains(v)) {\n        throw new IllegalArgumentException(\n          s\"The value of ${parent.key} should be one of ${validValues.mkString(\", \")}, but was $v\")\n      }\n      v\n    }\n  }\n\n  /** Turns the config entry into a sequence of values of the underlying type. */\n  def toSequence: TypedConfigBuilder[Seq[T]] = {\n    new TypedConfigBuilder(parent, stringToSeq(_, converter), seqToString(_, stringConverter))\n  }\n\n  /** Creates a [[ConfigEntry]] that does not have a default value. */\n  def createOptional: OptionalConfigEntry[T] = {\n    val conf = new OptionalConfigEntry[T](\n      parent.key,\n      converter,\n      stringConverter,\n      parent._doc,\n      parent._category,\n      parent._public,\n      parent._version)\n    CometConf.register(conf)\n    conf\n  }\n\n  /** Creates a [[ConfigEntry]] that has a default value. */\n  def createWithDefault(default: T): ConfigEntry[T] = {\n    val transformedDefault = converter(stringConverter(default))\n    val conf = new ConfigEntryWithDefault[T](\n      parent.key,\n      transformedDefault,\n      converter,\n      stringConverter,\n      parent._doc,\n      parent._category,\n      parent._public,\n      parent._version)\n    CometConf.register(conf)\n    conf\n  }\n\n  /**\n   * Creates a [[ConfigEntry]] that has a default value, with support for environment variable\n   * override.\n   *\n   * The value is resolved in the following priority order:\n   *   1. Spark config value (if set)\n   *   2. Environment variable value (if set)\n   *   3. 
Default value\n   *\n   * @param envVar\n   *   The environment variable name to check for override value\n   * @param default\n   *   The default value to use if neither config nor env var is set\n   * @return\n   *   A ConfigEntry with environment variable support\n   */\n  def createWithEnvVarOrDefault(envVar: String, default: T): ConfigEntry[T] = {\n    val transformedDefault = converter(sys.env.getOrElse(envVar, stringConverter(default)))\n    val conf = new ConfigEntryWithDefault[T](\n      parent.key,\n      transformedDefault,\n      converter,\n      stringConverter,\n      parent._doc,\n      parent._category,\n      parent._public,\n      parent._version,\n      Some(envVar))\n    CometConf.register(conf)\n    conf\n  }\n}\n\nabstract class ConfigEntry[T](\n    val key: String,\n    val valueConverter: String => T,\n    val stringConverter: T => String,\n    val doc: String,\n    val category: String,\n    val isPublic: Boolean,\n    val version: String) {\n\n  /**\n   * Retrieves the config value from the given [[SQLConf]].\n   */\n  def get(conf: SQLConf): T\n\n  /**\n   * Retrieves the config value from the current thread-local [[SQLConf]]\n   *\n   * @return\n   */\n  def get(): T = get(SQLConf.get)\n\n  def defaultValue: Option[T] = None\n\n  def defaultValueString: String\n\n  /**\n   * The environment variable name that can override this config's default value, if applicable.\n   */\n  def envVar: Option[String] = None\n\n  override def toString: String = {\n    s\"ConfigEntry(key=$key, defaultValue=$defaultValueString, doc=$doc, \" +\n      s\"public=$isPublic, version=$version)\"\n  }\n}\n\nprivate[comet] class ConfigEntryWithDefault[T](\n    key: String,\n    _defaultValue: T,\n    valueConverter: String => T,\n    stringConverter: T => String,\n    doc: String,\n    category: String,\n    isPublic: Boolean,\n    version: String,\n    _envVar: Option[String] = None)\n    extends ConfigEntry(key, valueConverter, stringConverter, doc, category, isPublic, version) {\n  override def defaultValue: Option[T] = Some(_defaultValue)\n\n  override def defaultValueString: String = stringConverter(_defaultValue)\n\n  override def envVar: Option[String] = _envVar\n\n  def get(conf: SQLConf): T = {\n    val tmp = conf.getConfString(key, null)\n    if (tmp == null) {\n      _defaultValue\n    } else {\n      valueConverter(tmp)\n    }\n  }\n}\n\nprivate[comet] class OptionalConfigEntry[T](\n    key: String,\n    val rawValueConverter: String => T,\n    val rawStringConverter: T => String,\n    doc: String,\n    category: String,\n    isPublic: Boolean,\n    version: String)\n    extends ConfigEntry[Option[T]](\n      key,\n      s => Some(rawValueConverter(s)),\n      v => v.map(rawStringConverter).orNull,\n      doc,\n      category,\n      isPublic,\n      version) {\n\n  override def defaultValueString: String = ConfigEntry.UNDEFINED\n\n  override def get(conf: SQLConf): Option[T] = {\n    Option(conf.getConfString(key, null)).map(rawValueConverter)\n  }\n}\n\nprivate[comet] case class ConfigBuilder(key: String) {\n\n  import ConfigHelpers._\n\n  var _public = true\n  var _doc = \"\"\n  var _version = \"\"\n  var _category = \"\"\n\n  def internal(): ConfigBuilder = {\n    _public = false\n    this\n  }\n\n  def doc(s: String): ConfigBuilder = {\n    _doc = s\n    this\n  }\n\n  def category(s: String): ConfigBuilder = {\n    _category = s\n    this\n  }\n\n  def version(v: String): ConfigBuilder = {\n    _version = v\n    this\n  }\n\n  def intConf: TypedConfigBuilder[Int] = {\n  
  new TypedConfigBuilder(this, toNumber(_, _.toInt, key, \"int\"))\n  }\n\n  def longConf: TypedConfigBuilder[Long] = {\n    new TypedConfigBuilder(this, toNumber(_, _.toLong, key, \"long\"))\n  }\n\n  def doubleConf: TypedConfigBuilder[Double] = {\n    new TypedConfigBuilder(this, toNumber(_, _.toDouble, key, \"double\"))\n  }\n\n  def booleanConf: TypedConfigBuilder[Boolean] = {\n    new TypedConfigBuilder(this, toBoolean(_, key))\n  }\n\n  def stringConf: TypedConfigBuilder[String] = {\n    new TypedConfigBuilder(this, v => v)\n  }\n\n  def timeConf(unit: TimeUnit): TypedConfigBuilder[Long] = {\n    new TypedConfigBuilder(this, timeFromString(_, unit), timeToString(_, unit))\n  }\n\n  def bytesConf(unit: ByteUnit): TypedConfigBuilder[Long] = {\n    new TypedConfigBuilder(this, byteFromString(_, unit), byteToString(_, unit))\n  }\n}\n\nprivate object ConfigEntry {\n  val UNDEFINED = \"<undefined>\"\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/comet/Constants.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nobject Constants {\n  val COMET_CONF_DIR_ENV = \"COMET_CONF_DIR\"\n  val LOG_CONF_PATH = \"comet.log.file.path\"\n  val LOG_CONF_NAME = \"log4rs.yaml\"\n  val LOG_LEVEL_ENV = \"COMET_LOG_LEVEL\"\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/comet/objectstore/NativeConfig.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.objectstore\n\nimport java.net.URI\nimport java.util.Locale\n\nimport org.apache.commons.lang3.StringUtils\nimport org.apache.hadoop.conf.Configuration\n\nimport org.apache.comet.CometConf.COMET_LIBHDFS_SCHEMES_KEY\n\nobject NativeConfig {\n\n  private val objectStoreConfigPrefixes = Map(\n    // Amazon S3 configurations\n    \"s3\" -> Seq(\"fs.s3a.\"),\n    \"s3a\" -> Seq(\"fs.s3a.\"),\n    // Google Cloud Storage configurations\n    \"gs\" -> Seq(\"fs.gs.\"),\n    // Azure Blob Storage configurations (can use both prefixes)\n    \"wasb\" -> Seq(\"fs.azure.\", \"fs.wasb.\"),\n    \"wasbs\" -> Seq(\"fs.azure.\", \"fs.wasb.\"),\n    // Azure Data Lake Storage Gen2 configurations\n    \"abfs\" -> Seq(\"fs.abfs.\"),\n    // Azure Data Lake Storage Gen2 secure configurations (can use both prefixes)\n    \"abfss\" -> Seq(\"fs.abfss.\", \"fs.abfs.\"))\n\n  /**\n   * Extract object store configurations from Hadoop configuration for native DataFusion usage.\n   * This includes S3, GCS, Azure and other cloud storage configurations.\n   *\n   * This method extracts all configurations with supported prefixes, automatically capturing both\n   * global configurations (e.g., fs.s3a.access.key) and per-bucket configurations (e.g.,\n   * fs.s3a.bucket.{bucket-name}.access.key). 
The native code will prioritize per-bucket\n   * configurations over global ones when both are present.\n   *\n   * The configurations are passed to the native code, which uses object_store's parse_url_opts for\n   * consistent and standardized cloud storage support across all providers.\n   */\n  def extractObjectStoreOptions(hadoopConf: Configuration, uri: URI): Map[String, String] = {\n    val scheme = Option(uri.getScheme).map(_.toLowerCase(Locale.ROOT)).getOrElse(\"file\")\n\n    import scala.jdk.CollectionConverters._\n    val options = scala.collection.mutable.Map[String, String]()\n\n    // The schemes that will use libhdfs\n    val libhdfsSchemes = hadoopConf.get(COMET_LIBHDFS_SCHEMES_KEY)\n    if (StringUtils.isNotBlank(libhdfsSchemes)) {\n      options(COMET_LIBHDFS_SCHEMES_KEY) = libhdfsSchemes\n    }\n\n    // Get prefixes for this scheme, return early if none found\n    val prefixes = objectStoreConfigPrefixes.get(scheme)\n    if (prefixes.isEmpty) {\n      return options.toMap\n    }\n\n    // Extract all configurations that match the object store prefixes\n    hadoopConf.iterator().asScala.foreach { entry =>\n      val key = entry.getKey\n      val value = entry.getValue\n      // Check if key starts with any of the prefixes for this scheme\n      if (prefixes.get.exists(prefix => key.startsWith(prefix))) {\n        options(key) = value\n      }\n    }\n\n    options.toMap\n  }\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/comet/package.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache\n\nimport java.util.Properties\n\nimport org.apache.arrow.memory.RootAllocator\nimport org.apache.spark.internal.Logging\n\npackage object comet {\n\n  /**\n   * The root allocator for Comet execution. Because Arrow Java memory management is based on\n   * reference counting, exposed arrays increase the reference count of the underlying buffers.\n   * Until the reference count is zero, the memory will not be released. If the consumer side is\n   * finished later than the close of the allocator, the allocator will think the memory is\n   * leaked. To avoid this, we use a single allocator for the whole execution process.\n   */\n  val CometArrowAllocator = new RootAllocator(Long.MaxValue)\n\n  /**\n   * Provides access to build information about the Comet libraries. This will be used by the\n   * benchmarking software to provide the source revision and repository. In addition, the build\n   * information is included to aid in future debugging efforts for releases.\n   */\n  private object CometBuildInfo extends Logging {\n    private val GIT_INFO_PROPS_FILENAME = \"comet-git-info.properties\"\n\n    val props: Properties = {\n      val props = new Properties()\n      val resourceStream = Thread\n        .currentThread()\n        .getContextClassLoader\n        .getResourceAsStream(GIT_INFO_PROPS_FILENAME)\n      if (resourceStream != null) {\n        try {\n          props.load(resourceStream)\n        } catch {\n          case e: Exception =>\n            logError(s\"Error loading properties from $GIT_INFO_PROPS_FILENAME\", e)\n        } finally {\n          if (resourceStream != null) {\n            try {\n              resourceStream.close()\n            } catch {\n              case e: Exception =>\n                logError(\"Error closing Comet build info resource stream\", e)\n            }\n          }\n        }\n      } else {\n        logWarning(s\"Could not find $GIT_INFO_PROPS_FILENAME\")\n      }\n      props\n    }\n  }\n\n  private def getProp(name: String): String = {\n    CometBuildInfo.props.getProperty(name, \"<unknown>\")\n  }\n\n  val COMET_VERSION: String = getProp(\"git.build.version\")\n  val COMET_BRANCH: String = getProp(\"git.branch\")\n  val COMET_REVISION: String = getProp(\"git.commit.id.full\")\n  val COMET_BUILD_USER_EMAIL: String = getProp(\"git.build.user.name\")\n  val COMET_BUILD_USER_NAME: String = getProp(\"git.build.user.email\")\n  val COMET_REPO_URL: String = getProp(\"git.remote.origin.url\")\n  val COMET_BUILD_TIMESTAMP: String = getProp(\"git.build.time\")\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/comet/parquet/CometParquetUtils.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet\n\nimport org.apache.hadoop.conf.Configuration\nimport org.apache.parquet.crypto.DecryptionPropertiesFactory\nimport org.apache.parquet.crypto.keytools.{KeyToolkit, PropertiesDrivenCryptoFactory}\nimport org.apache.spark.sql.internal.SQLConf\n\nobject CometParquetUtils {\n  private val PARQUET_FIELD_ID_WRITE_ENABLED = \"spark.sql.parquet.fieldId.write.enabled\"\n  private val PARQUET_FIELD_ID_READ_ENABLED = \"spark.sql.parquet.fieldId.read.enabled\"\n  private val IGNORE_MISSING_PARQUET_FIELD_ID = \"spark.sql.parquet.fieldId.read.ignoreMissing\"\n\n  // Map of encryption configuration key-value pairs that, if present, are only supported with\n  // these specific values. Generally, these are the default values that won't be present,\n  // but if they are present we want to check them.\n  private val SUPPORTED_ENCRYPTION_CONFIGS: Map[String, Set[String]] = Map(\n    // https://github.com/apache/arrow-rs/blob/main/parquet/src/encryption/ciphers.rs#L21\n    KeyToolkit.DATA_KEY_LENGTH_PROPERTY_NAME -> Set(KeyToolkit.DATA_KEY_LENGTH_DEFAULT.toString),\n    KeyToolkit.KEK_LENGTH_PROPERTY_NAME -> Set(KeyToolkit.KEK_LENGTH_DEFAULT.toString),\n    // https://github.com/apache/arrow-rs/blob/main/parquet/src/file/metadata/parser.rs#L494\n    PropertiesDrivenCryptoFactory.ENCRYPTION_ALGORITHM_PROPERTY_NAME -> Set(\"AES_GCM_V1\"))\n\n  def writeFieldId(conf: SQLConf): Boolean =\n    conf.getConfString(PARQUET_FIELD_ID_WRITE_ENABLED, \"false\").toBoolean\n\n  def writeFieldId(conf: Configuration): Boolean =\n    conf.getBoolean(PARQUET_FIELD_ID_WRITE_ENABLED, false)\n\n  def readFieldId(conf: SQLConf): Boolean =\n    conf.getConfString(PARQUET_FIELD_ID_READ_ENABLED, \"false\").toBoolean\n\n  def ignoreMissingIds(conf: SQLConf): Boolean =\n    conf.getConfString(IGNORE_MISSING_PARQUET_FIELD_ID, \"false\").toBoolean\n\n  /**\n   * Checks if the given Hadoop configuration contains any unsupported encryption settings.\n   *\n   * @param hadoopConf\n   *   The Hadoop configuration to check\n   * @return\n   *   true if all encryption configurations are supported, false if any unsupported config is\n   *   found\n   */\n  def isEncryptionConfigSupported(hadoopConf: Configuration): Boolean = {\n    // Check configurations that, if present, can only have specific allowed values\n    val supportedListCheck = SUPPORTED_ENCRYPTION_CONFIGS.forall {\n      case (configKey, supportedValues) =>\n        val configValue = Option(hadoopConf.get(configKey))\n        configValue match {\n          case Some(value) => supportedValues.contains(value)\n          case None => true // Config not set, so it's supported\n        }\n    }\n\n    supportedListCheck\n  
}\n\n  def encryptionEnabled(hadoopConf: Configuration): Boolean = {\n    // TODO: Are there any other properties to check?\n    val encryptionKeys = Seq(\n      DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME,\n      KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME)\n\n    encryptionKeys.exists(key => Option(hadoopConf.get(key)).exists(_.nonEmpty))\n  }\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/comet/parquet/CometReaderThreadPool.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet\n\nimport java.util.concurrent.{Executors, ExecutorService, ThreadFactory}\nimport java.util.concurrent.atomic.AtomicLong\n\nabstract class CometReaderThreadPool {\n  private var threadPool: Option[ExecutorService] = None\n\n  protected def threadNamePrefix: String\n\n  private def initThreadPool(maxThreads: Int): ExecutorService = synchronized {\n    if (threadPool.isEmpty) {\n      val threadFactory: ThreadFactory = new ThreadFactory() {\n        private val defaultThreadFactory = Executors.defaultThreadFactory\n        val count = new AtomicLong(0)\n\n        override def newThread(r: Runnable): Thread = {\n          val thread = defaultThreadFactory.newThread(r)\n          thread.setName(s\"${threadNamePrefix}_${count.getAndIncrement()}\")\n          thread.setDaemon(true)\n          thread\n        }\n      }\n\n      val threadPoolExecutor = Executors.newFixedThreadPool(maxThreads, threadFactory)\n      threadPool = Some(threadPoolExecutor)\n    }\n\n    threadPool.get\n  }\n\n  def getOrCreateThreadPool(numThreads: Int): ExecutorService = {\n    threadPool.getOrElse(initThreadPool(numThreads))\n  }\n\n}\n\n// Thread pool used by the Parquet parallel reader\nobject CometFileReaderThreadPool extends CometReaderThreadPool {\n  override def threadNamePrefix: String = \"file_reader_thread\"\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/comet/vector/NativeUtil.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector\n\nimport scala.collection.mutable\n\nimport org.apache.arrow.c.{ArrowArray, ArrowImporter, ArrowSchema, CDataDictionaryProvider, Data}\nimport org.apache.arrow.vector.VectorSchemaRoot\nimport org.apache.arrow.vector.dictionary.DictionaryProvider\nimport org.apache.spark.SparkException\nimport org.apache.spark.sql.comet.util.Utils\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport org.apache.comet.CometArrowAllocator\n\n/**\n * Provides functionality for importing Arrow vectors from native code and wrapping them as\n * CometVectors.\n *\n * Also provides functionality for exporting Comet columnar batches to native code.\n *\n * Each instance of NativeUtil creates an instance of CDataDictionaryProvider (a\n * DictionaryProvider that is used in C Data Interface for imports).\n *\n * NativeUtil must be closed after use to release resources in the dictionary provider.\n */\nclass NativeUtil {\n  import Utils._\n\n  /** Use the global allocator */\n  private val allocator = CometArrowAllocator\n\n  /** ArrowImporter does not hold any state and does not need to be closed */\n  private val importer = new ArrowImporter(allocator)\n\n  /**\n   * Dictionary provider to use for the lifetime of this instance of NativeUtil. 
The dictionary\n   * provider is closed when NativeUtil is closed.\n   */\n  private val dictionaryProvider: CDataDictionaryProvider = new CDataDictionaryProvider\n\n  /**\n   * Allocates Arrow structs for the given number of columns.\n   *\n   * @param numCols\n   *   the number of columns\n   * @return\n   *   a pair of Arrow arrays and Arrow schemas\n   */\n  def allocateArrowStructs(numCols: Int): (Array[ArrowArray], Array[ArrowSchema]) = {\n    val arrays = new Array[ArrowArray](numCols)\n    val schemas = new Array[ArrowSchema](numCols)\n\n    (0 until numCols).foreach { index =>\n      val arrowSchema = ArrowSchema.allocateNew(allocator)\n      val arrowArray = ArrowArray.allocateNew(allocator)\n      arrays(index) = arrowArray\n      schemas(index) = arrowSchema\n    }\n\n    (arrays, schemas)\n  }\n\n  /**\n   * Exports a ColumnarBatch to Arrow FFI and returns the memory addresses.\n   *\n   * This is a convenience method that allocates Arrow structs, exports the batch, and returns\n   * just the memory addresses (without exposing the Arrow types).\n   *\n   * @param batch\n   *   the columnar batch to export\n   * @return\n   *   a tuple of (array addresses, schema addresses, number of rows)\n   */\n  def exportBatchToAddresses(batch: ColumnarBatch): (Array[Long], Array[Long], Int) = {\n    val numCols = batch.numCols()\n    val (arrays, schemas) = allocateArrowStructs(numCols)\n    val arrayAddrs = arrays.map(_.memoryAddress())\n    val schemaAddrs = schemas.map(_.memoryAddress())\n    val numRows = exportBatch(arrayAddrs, schemaAddrs, batch)\n    (arrayAddrs, schemaAddrs, numRows)\n  }\n\n  /**\n   * Exports a Comet `ColumnarBatch` into pre-allocated Arrow structs, identified by their memory\n   * addresses, so that it can be consumed by the native execution.\n   *\n   * @param arrayAddrs\n   *   the addresses of the Arrow array structs, one per column\n   * @param schemaAddrs\n   *   the addresses of the Arrow schema structs, one per column\n   * @param batch\n   *   the input Comet columnar batch\n   * @return\n   *   the number of rows exported\n   */\n  def exportBatch(\n      arrayAddrs: Array[Long],\n      schemaAddrs: Array[Long],\n      batch: ColumnarBatch): Int = {\n    val numRows = mutable.ArrayBuffer.empty[Int]\n\n    (0 until batch.numCols()).foreach { index =>\n      batch.column(index) match {\n        case selectionVector: CometSelectionVector =>\n          // For CometSelectionVector, export only the values vector\n          val valuesVector = selectionVector.getValues\n          val valueVector = valuesVector.getValueVector\n\n          // Use the selection vector's logical row count\n          numRows += selectionVector.numValues()\n\n          val provider = if (valueVector.getField.getDictionary != null) {\n            valuesVector.getDictionaryProvider\n          } else {\n            null\n          }\n\n          // The array and schema structures are allocated by native side.\n          // Don't need to deallocate them here.\n          val arrowSchema = ArrowSchema.wrap(schemaAddrs(index))\n          val arrowArray = ArrowArray.wrap(arrayAddrs(index))\n          Data.exportVector(\n            allocator,\n            getFieldVector(valueVector, \"export\"),\n            provider,\n            arrowArray,\n            arrowSchema)\n        case a: CometVector =>\n          val valueVector = a.getValueVector\n\n          numRows += valueVector.getValueCount\n\n          val provider = if (valueVector.getField.getDictionary != null) {\n            a.getDictionaryProvider\n          } else {\n            null\n          }\n\n        
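  // Export the value vector via the Arrow C Data Interface.\n        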
  // The array and schema structures are allocated by native side.\n          // Don't need to deallocate them here.\n          val arrowSchema = ArrowSchema.wrap(schemaAddrs(index))\n          val arrowArray = ArrowArray.wrap(arrayAddrs(index))\n          Data.exportVector(\n            allocator,\n            getFieldVector(valueVector, \"export\"),\n            provider,\n            arrowArray,\n            arrowSchema)\n        case c =>\n          throw new SparkException(\n            \"Comet execution only takes Arrow Arrays, but got \" +\n              s\"${c.getClass}\")\n      }\n    }\n\n    if (numRows.distinct.length > 1) {\n      throw new SparkException(\n        s\"Number of rows in each column should be the same, but got [${numRows.distinct}]\")\n    }\n\n    // `ColumnarBatch.numRows` might return a different number than the actual number of rows in\n    // the Arrow arrays. For example, Iceberg column reader will skip deleted rows internally in\n    // its `CometVector` implementation. The `ColumnarBatch` returned by the reader will report\n    // the logical number of rows, which is less than the actual number of rows due to row\n    // deletion. Similarly, CometSelectionVector represents a different number of logical rows\n    // than the underlying vector.\n    numRows.headOption.getOrElse(batch.numRows())\n  }\n\n  /**\n   * Exports a single CometVector to native side.\n   *\n   * @param vector\n   *   The CometVector to export\n   * @param arrayAddr\n   *   The address of the ArrowArray structure\n   * @param schemaAddr\n   *   The address of the ArrowSchema structure\n   */\n  def exportSingleVector(vector: CometVector, arrayAddr: Long, schemaAddr: Long): Unit = {\n    val valueVector = vector.getValueVector\n\n    val provider = if (valueVector.getField.getDictionary != null) {\n      vector.getDictionaryProvider\n    } else {\n      null\n    }\n\n    val arrowSchema = ArrowSchema.wrap(schemaAddr)\n    val arrowArray = ArrowArray.wrap(arrayAddr)\n    Data.exportVector(\n      allocator,\n      getFieldVector(valueVector, \"export\"),\n      provider,\n      arrowArray,\n      arrowSchema)\n  }\n\n  /**\n   * Gets the next batch from native execution.\n   *\n   * @param numOutputCols\n   *   The number of output columns\n   * @param func\n   *   The function to call to get the next batch; it returns the number of rows in the batch,\n   *   or -1 when there are no more batches\n   * @return\n   *   The next batch, or None if there are no more batches\n   */\n  def getNextBatch(\n      numOutputCols: Int,\n      func: (Array[Long], Array[Long]) => Long): Option[ColumnarBatch] = {\n    val (arrays, schemas) = allocateArrowStructs(numOutputCols)\n\n    val arrayAddrs = arrays.map(_.memoryAddress())\n    val schemaAddrs = schemas.map(_.memoryAddress())\n\n    val result = func(arrayAddrs, schemaAddrs)\n\n    result match {\n      case -1 =>\n        // EOF\n        None\n      case numRows =>\n        val cometVectors = importVector(arrays, schemas)\n        Some(new ColumnarBatch(cometVectors.toArray, numRows.toInt))\n    }\n  }\n\n  /**\n   * Imports a list of Arrow addresses from native execution, and returns a list of Comet\n   * vectors.\n   *\n   * @param arrays\n   *   a list of Arrow arrays\n   * @param schemas\n   *   a list of Arrow schemas\n   * @return\n   *   a list of Comet vectors\n   */\n  def importVector(arrays: Array[ArrowArray], schemas: Array[ArrowSchema]): Seq[CometVector] = {\n    val arrayVectors = mutable.ArrayBuffer.empty[CometVector]\n\n    (0 until arrays.length).foreach { i =>\n      val arrowSchema = schemas(i)\n      val arrowArray = arrays(i)\n\n      // Native execution should always have 'useDecimal128' set to true since it doesn't support\n      // other cases.\n      arrayVectors += CometVector.getVector(\n        importer.importVector(arrowArray, arrowSchema, dictionaryProvider),\n        true,\n        dictionaryProvider)\n    }\n    arrayVectors.toSeq\n  }\n\n  /**\n   * Takes zero-copy slices of the input batch with the given start index and maximum number of\n   * rows.\n   *\n   * @param batch\n   *   Input batch\n   * @param startIndex\n   *   Start index of the slice\n   * @param maxNumRows\n   *   Maximum number of rows in the slice\n   * @return\n   *   A new batch with the sliced vectors\n   */\n  def takeRows(batch: ColumnarBatch, startIndex: Int, maxNumRows: Int): ColumnarBatch = {\n    val arrayVectors = mutable.ArrayBuffer.empty[CometVector]\n\n    for (i <- 0 until batch.numCols()) {\n      val column = batch.column(i).asInstanceOf[CometVector]\n      arrayVectors += column.slice(startIndex, maxNumRows)\n    }\n\n    new ColumnarBatch(arrayVectors.toArray, maxNumRows)\n  }\n\n  def close(): Unit = {\n    // closing the dictionary provider also closes the dictionary arrays\n    dictionaryProvider.close()\n  }\n}\n\nobject NativeUtil {\n  def rootAsBatch(arrowRoot: VectorSchemaRoot): ColumnarBatch = {\n    rootAsBatch(arrowRoot, null)\n  }\n\n  def rootAsBatch(arrowRoot: VectorSchemaRoot, provider: DictionaryProvider): ColumnarBatch = {\n    val vectors = (0 until arrowRoot.getFieldVectors.size()).map { i =>\n      val vector = arrowRoot.getFieldVectors.get(i)\n      // Native shuffle always uses decimal128.\n      CometVector.getVector(vector, true, provider)\n    }\n    new ColumnarBatch(vectors.toArray, arrowRoot.getRowCount)\n  }\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/comet/vector/StreamReader.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.vector\n\nimport java.nio.channels.ReadableByteChannel\n\nimport org.apache.arrow.vector.VectorSchemaRoot\nimport org.apache.arrow.vector.ipc.{ArrowStreamReader, ReadChannel}\nimport org.apache.arrow.vector.ipc.message.MessageChannelReader\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport org.apache.comet.CometArrowAllocator\n\n/**\n * A reader that consumes Arrow data from an input channel, and produces Comet batches.\n */\ncase class StreamReader(channel: ReadableByteChannel, source: String) extends AutoCloseable {\n  private val channelReader =\n    new MessageChannelReader(new ReadChannel(channel), CometArrowAllocator)\n  private var arrowReader = new ArrowStreamReader(channelReader, CometArrowAllocator)\n  private var root = arrowReader.getVectorSchemaRoot\n\n  def nextBatch(): Option[ColumnarBatch] = {\n    if (arrowReader.loadNextBatch()) {\n      Some(rootAsBatch(root))\n    } else {\n      None\n    }\n  }\n\n  private def rootAsBatch(root: VectorSchemaRoot): ColumnarBatch = {\n    NativeUtil.rootAsBatch(root, arrowReader)\n  }\n\n  override def close(): Unit = {\n    if (root != null) {\n      arrowReader.close()\n      root.close()\n\n      arrowReader = null\n      root = null\n    }\n  }\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/spark/sql/comet/CastOverflowException.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport org.apache.spark.SparkArithmeticException\nimport org.apache.spark.sql.errors.QueryExecutionErrors.toSQLConf\nimport org.apache.spark.sql.internal.SQLConf\n\nclass CastOverflowException(t: String, from: String, to: String)\n    extends SparkArithmeticException(\n      \"CAST_OVERFLOW\",\n      Map(\n        \"value\" -> t,\n        \"sourceType\" -> s\"\"\"\"$from\"\"\"\",\n        \"targetType\" -> s\"\"\"\"$to\"\"\"\",\n        \"ansiConfig\" -> toSQLConf(SQLConf.ANSI_ENABLED.key)),\n      Array.empty,\n      \"\") {}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/spark/sql/comet/execution/arrow/ArrowReaderIterator.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.arrow\n\nimport java.nio.channels.ReadableByteChannel\n\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport org.apache.comet.vector._\n\nclass ArrowReaderIterator(channel: ReadableByteChannel, source: String)\n    extends Iterator[ColumnarBatch] {\n\n  private val reader = StreamReader(channel, source)\n  private var batch = nextBatch()\n  private var currentBatch: ColumnarBatch = null\n  private var isClosed: Boolean = false\n\n  override def hasNext: Boolean = {\n    if (isClosed) {\n      return false\n    }\n    if (batch.isDefined) {\n      return true\n    }\n\n    // Release the previous batch.\n    // If it is not released, when closing the reader, arrow library will complain about\n    // memory leak.\n    if (currentBatch != null) {\n      currentBatch.close()\n      currentBatch = null\n    }\n\n    batch = nextBatch()\n    if (batch.isEmpty) {\n      close()\n      return false\n    }\n    true\n  }\n\n  override def next(): ColumnarBatch = {\n    if (!hasNext) {\n      throw new NoSuchElementException\n    }\n\n    val nextBatch = batch.get\n\n    currentBatch = nextBatch\n    batch = None\n    currentBatch\n  }\n\n  private def nextBatch(): Option[ColumnarBatch] = {\n    reader.nextBatch()\n  }\n\n  def close(): Unit =\n    synchronized {\n      if (!isClosed) {\n        if (currentBatch != null) {\n          currentBatch.close()\n          currentBatch = null\n        }\n        reader.close()\n        isClosed = true\n      }\n    }\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/spark/sql/comet/execution/arrow/ArrowWriters.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.arrow\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.arrow.vector._\nimport org.apache.arrow.vector.complex._\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.SpecializedGetters\nimport org.apache.spark.sql.comet.util.Utils\nimport org.apache.spark.sql.errors.QueryExecutionErrors\nimport org.apache.spark.sql.types._\nimport org.apache.spark.sql.vectorized.ColumnarArray\n\n/**\n * This file is mostly copied from Spark SQL's\n * org.apache.spark.sql.execution.arrow.ArrowWriter.scala. Comet shadows Arrow classes to avoid\n * potential conflicts with Spark's Arrow dependencies, hence we cannot reuse Spark's ArrowWriter\n * directly.\n *\n * Performance enhancement: https://github.com/apache/datafusion-comet/issues/888\n */\nprivate[arrow] object ArrowWriter {\n  def create(root: VectorSchemaRoot): ArrowWriter = {\n    val children = root.getFieldVectors().asScala.map { vector =>\n      vector.allocateNew()\n      createFieldWriter(vector)\n    }\n    new ArrowWriter(root, children.toArray)\n  }\n\n  private[sql] def createFieldWriter(vector: ValueVector): ArrowFieldWriter = {\n    val field = vector.getField()\n    (Utils.fromArrowField(field), vector) match {\n      case (BooleanType, vector: BitVector) => new BooleanWriter(vector)\n      case (ByteType, vector: TinyIntVector) => new ByteWriter(vector)\n      case (ShortType, vector: SmallIntVector) => new ShortWriter(vector)\n      case (IntegerType, vector: IntVector) => new IntegerWriter(vector)\n      case (LongType, vector: BigIntVector) => new LongWriter(vector)\n      case (FloatType, vector: Float4Vector) => new FloatWriter(vector)\n      case (DoubleType, vector: Float8Vector) => new DoubleWriter(vector)\n      case (DecimalType.Fixed(precision, scale), vector: DecimalVector) =>\n        new DecimalWriter(vector, precision, scale)\n      case (StringType, vector: VarCharVector) => new StringWriter(vector)\n      case (StringType, vector: LargeVarCharVector) => new LargeStringWriter(vector)\n      case (BinaryType, vector: VarBinaryVector) => new BinaryWriter(vector)\n      case (BinaryType, vector: LargeVarBinaryVector) => new LargeBinaryWriter(vector)\n      case (DateType, vector: DateDayVector) => new DateWriter(vector)\n      case (TimestampType, vector: TimeStampMicroTZVector) => new TimestampWriter(vector)\n      case (TimestampNTZType, vector: TimeStampMicroVector) => new TimestampNTZWriter(vector)\n      case (ArrayType(_, _), vector: ListVector) =>\n        val elementVector = createFieldWriter(vector.getDataVector())\n        new ArrayWriter(vector, elementVector)\n      case 
(MapType(_, _, _), vector: MapVector) =>\n        val structVector = vector.getDataVector.asInstanceOf[StructVector]\n        val keyWriter = createFieldWriter(structVector.getChild(MapVector.KEY_NAME))\n        val valueWriter = createFieldWriter(structVector.getChild(MapVector.VALUE_NAME))\n        new MapWriter(vector, structVector, keyWriter, valueWriter)\n      case (StructType(_), vector: StructVector) =>\n        val children = (0 until vector.size()).map { ordinal =>\n          createFieldWriter(vector.getChildByOrdinal(ordinal))\n        }\n        new StructWriter(vector, children.toArray)\n      case (NullType, vector: NullVector) => new NullWriter(vector)\n      case (_: YearMonthIntervalType, vector: IntervalYearVector) =>\n        new IntervalYearWriter(vector)\n      case (_: DayTimeIntervalType, vector: DurationVector) => new DurationWriter(vector)\n//      case (CalendarIntervalType, vector: IntervalMonthDayNanoVector) =>\n//        new IntervalMonthDayNanoWriter(vector)\n      case (dt, _) =>\n        throw QueryExecutionErrors.notSupportTypeError(dt)\n    }\n  }\n}\n\nclass ArrowWriter(val root: VectorSchemaRoot, fields: Array[ArrowFieldWriter]) {\n\n  def schema: StructType = Utils.fromArrowSchema(root.getSchema())\n\n  private var count: Int = 0\n\n  def write(row: InternalRow): Unit = {\n    var i = 0\n    while (i < fields.length) {\n      fields(i).write(row, i)\n      i += 1\n    }\n    count += 1\n  }\n\n  def writeCol(input: ColumnarArray, columnIndex: Int): Unit = {\n    fields(columnIndex).writeCol(input)\n    count = input.numElements()\n  }\n\n  def writeColNoNull(input: ColumnarArray, columnIndex: Int): Unit = {\n    fields(columnIndex).writeColNoNull(input)\n    count = input.numElements()\n  }\n\n  def finish(): Unit = {\n    root.setRowCount(count)\n    fields.foreach(_.finish())\n  }\n\n  def reset(): Unit = {\n    root.setRowCount(0)\n    count = 0\n    fields.foreach(_.reset())\n  }\n}\n\nprivate[arrow] abstract class ArrowFieldWriter {\n\n  def valueVector: ValueVector\n\n  def name: String = valueVector.getField().getName()\n  def dataType: DataType = Utils.fromArrowField(valueVector.getField())\n  def nullable: Boolean = valueVector.getField().isNullable()\n\n  def setNull(): Unit\n  def setValue(input: SpecializedGetters, ordinal: Int): Unit\n\n  private[arrow] var count: Int = 0\n\n  def write(input: SpecializedGetters, ordinal: Int): Unit = {\n    if (input.isNullAt(ordinal)) {\n      setNull()\n    } else {\n      setValue(input, ordinal)\n    }\n    count += 1\n  }\n\n  def writeCol(input: ColumnarArray): Unit = {\n    val inputNumElements = input.numElements()\n    valueVector.setInitialCapacity(inputNumElements)\n    while (count < inputNumElements) {\n      if (input.isNullAt(count)) {\n        setNull()\n      } else {\n        setValue(input, count)\n      }\n      count += 1\n    }\n  }\n\n  def writeColNoNull(input: ColumnarArray): Unit = {\n    val inputNumElements = input.numElements()\n    valueVector.setInitialCapacity(inputNumElements)\n    while (count < inputNumElements) {\n      setValue(input, count)\n      count += 1\n    }\n  }\n\n  def finish(): Unit = {\n    valueVector.setValueCount(count)\n  }\n\n  def reset(): Unit = {\n    valueVector.reset()\n    count = 0\n  }\n}\n\nprivate[arrow] class BooleanWriter(val valueVector: BitVector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    
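// Arrow's BitVector stores each value as a bit; setSafe takes an int, so map true/false to 1/0\n    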
valueVector.setSafe(count, if (input.getBoolean(ordinal)) 1 else 0)\n  }\n}\n\nprivate[arrow] class ByteWriter(val valueVector: TinyIntVector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getByte(ordinal))\n  }\n}\n\nprivate[arrow] class ShortWriter(val valueVector: SmallIntVector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getShort(ordinal))\n  }\n}\n\nprivate[arrow] class IntegerWriter(val valueVector: IntVector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getInt(ordinal))\n  }\n}\n\nprivate[arrow] class LongWriter(val valueVector: BigIntVector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getLong(ordinal))\n  }\n}\n\nprivate[arrow] class FloatWriter(val valueVector: Float4Vector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getFloat(ordinal))\n  }\n}\n\nprivate[arrow] class DoubleWriter(val valueVector: Float8Vector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getDouble(ordinal))\n  }\n}\n\nprivate[arrow] class DecimalWriter(val valueVector: DecimalVector, precision: Int, scale: Int)\n    extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    val decimal = input.getDecimal(ordinal, precision, scale)\n    if (decimal.changePrecision(precision, scale)) {\n      valueVector.setSafe(count, decimal.toJavaBigDecimal)\n    } else {\n      setNull()\n    }\n  }\n}\n\nprivate[arrow] class StringWriter(val valueVector: VarCharVector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    val utf8 = input.getUTF8String(ordinal)\n    val utf8ByteBuffer = utf8.getByteBuffer\n    // todo: for off-heap UTF8String, how to pass in to arrow without copy?\n    valueVector.setSafe(count, utf8ByteBuffer, utf8ByteBuffer.position(), utf8.numBytes())\n  }\n}\n\nprivate[arrow] class LargeStringWriter(val valueVector: LargeVarCharVector)\n    extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    val utf8 = input.getUTF8String(ordinal)\n    val utf8ByteBuffer = utf8.getByteBuffer\n    // todo: for off-heap UTF8String, how to pass in to arrow without copy?\n    valueVector.setSafe(count, utf8ByteBuffer, utf8ByteBuffer.position(), utf8.numBytes())\n  }\n}\n\nprivate[arrow] class BinaryWriter(val valueVector: 
VarBinaryVector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    val bytes = input.getBinary(ordinal)\n    valueVector.setSafe(count, bytes, 0, bytes.length)\n  }\n}\n\nprivate[arrow] class LargeBinaryWriter(val valueVector: LargeVarBinaryVector)\n    extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    val bytes = input.getBinary(ordinal)\n    valueVector.setSafe(count, bytes, 0, bytes.length)\n  }\n}\n\nprivate[arrow] class DateWriter(val valueVector: DateDayVector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getInt(ordinal))\n  }\n}\n\nprivate[arrow] class TimestampWriter(val valueVector: TimeStampMicroTZVector)\n    extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getLong(ordinal))\n  }\n}\n\nprivate[arrow] class TimestampNTZWriter(val valueVector: TimeStampMicroVector)\n    extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getLong(ordinal))\n  }\n}\n\nprivate[arrow] class ArrayWriter(val valueVector: ListVector, val elementWriter: ArrowFieldWriter)\n    extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {}\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    val array = input.getArray(ordinal)\n    var i = 0\n    valueVector.startNewValue(count)\n    while (i < array.numElements()) {\n      elementWriter.write(array, i)\n      i += 1\n    }\n    valueVector.endValue(count, array.numElements())\n  }\n\n  override def finish(): Unit = {\n    super.finish()\n    elementWriter.finish()\n  }\n\n  override def reset(): Unit = {\n    super.reset()\n    elementWriter.reset()\n  }\n}\n\nprivate[arrow] class StructWriter(\n    val valueVector: StructVector,\n    children: Array[ArrowFieldWriter])\n    extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {\n    var i = 0\n    while (i < children.length) {\n      children(i).setNull()\n      children(i).count += 1\n      i += 1\n    }\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    val struct = input.getStruct(ordinal, children.length)\n    var i = 0\n    valueVector.setIndexDefined(count)\n    while (i < struct.numFields) {\n      children(i).write(struct, i)\n      i += 1\n    }\n  }\n\n  override def finish(): Unit = {\n    super.finish()\n    children.foreach(_.finish())\n  }\n\n  override def reset(): Unit = {\n    super.reset()\n    children.foreach(_.reset())\n  }\n}\n\nprivate[arrow] class MapWriter(\n    val valueVector: MapVector,\n    val structVector: StructVector,\n    val keyWriter: ArrowFieldWriter,\n    val valueWriter: ArrowFieldWriter)\n    extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {}\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    val map = input.getMap(ordinal)\n    
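// A MapVector entry is a list of (key, value) structs: open the list entry, then write\n    // each pair into the inner struct vector\n    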
valueVector.startNewValue(count)\n    val keys = map.keyArray()\n    val values = map.valueArray()\n    var i = 0\n    while (i < map.numElements()) {\n      structVector.setIndexDefined(keyWriter.count)\n      keyWriter.write(keys, i)\n      valueWriter.write(values, i)\n      i += 1\n    }\n\n    valueVector.endValue(count, map.numElements())\n  }\n\n  override def finish(): Unit = {\n    super.finish()\n    keyWriter.finish()\n    valueWriter.finish()\n  }\n\n  override def reset(): Unit = {\n    super.reset()\n    keyWriter.reset()\n    valueWriter.reset()\n  }\n}\n\nprivate[arrow] class NullWriter(val valueVector: NullVector) extends ArrowFieldWriter {\n\n  override def setNull(): Unit = {}\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {}\n}\n\nprivate[arrow] class IntervalYearWriter(val valueVector: IntervalYearVector)\n    extends ArrowFieldWriter {\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getInt(ordinal));\n  }\n}\n\nprivate[arrow] class DurationWriter(val valueVector: DurationVector) extends ArrowFieldWriter {\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    valueVector.setSafe(count, input.getLong(ordinal))\n  }\n}\n\nprivate[arrow] class IntervalMonthDayNanoWriter(val valueVector: IntervalMonthDayNanoVector)\n    extends ArrowFieldWriter {\n  override def setNull(): Unit = {\n    valueVector.setNull(count)\n  }\n\n  override def setValue(input: SpecializedGetters, ordinal: Int): Unit = {\n    val ci = input.getInterval(ordinal)\n    valueVector.setSafe(count, ci.months, ci.days, Math.multiplyExact(ci.microseconds, 1000L))\n  }\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/spark/sql/comet/execution/arrow/CometArrowConverters.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.arrow\n\nimport org.apache.arrow.memory.BufferAllocator\nimport org.apache.arrow.vector.VectorSchemaRoot\nimport org.apache.arrow.vector.types.pojo.Schema\nimport org.apache.spark.TaskContext\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.comet.util.Utils\nimport org.apache.spark.sql.types.StructType\nimport org.apache.spark.sql.vectorized.{ColumnarArray, ColumnarBatch}\n\nimport org.apache.comet.CometArrowAllocator\nimport org.apache.comet.vector.NativeUtil\n\nobject CometArrowConverters extends Logging {\n  // This is similar how Spark converts internal row to Arrow format except that it is transforming\n  // the result batch to Comet's ColumnarBatch instead of serialized bytes.\n  // There's another big difference that Comet may consume the ColumnarBatch by exporting it to\n  // the native side. Hence, we need to:\n  // 1. reset the Arrow writer after the ColumnarBatch is consumed\n  // 2. close the allocator when the task is finished but not when the iterator is all consumed\n  // The reason for the second point is that when ColumnarBatch is exported to the native side, the\n  // exported process increases the reference count of the Arrow vectors. 
The reference count is\n  // only decreased when the native plan is done with the vectors, which is usually longer than\n  // all the ColumnarBatches are consumed.\n\n  abstract private[sql] class ArrowBatchIterBase(\n      schema: StructType,\n      timeZoneId: String,\n      context: TaskContext)\n      extends Iterator[ColumnarBatch]\n      with AutoCloseable {\n\n    protected val arrowSchema: Schema = Utils.toArrowSchema(schema, timeZoneId)\n    // Reuse the same root allocator here.\n    protected val allocator: BufferAllocator =\n      CometArrowAllocator.newChildAllocator(s\"to${this.getClass.getSimpleName}\", 0, Long.MaxValue)\n    protected val root: VectorSchemaRoot = VectorSchemaRoot.create(arrowSchema, allocator)\n    protected val arrowWriter: ArrowWriter = ArrowWriter.create(root)\n\n    protected var currentBatch: ColumnarBatch = null\n    protected var closed: Boolean = false\n\n    Option(context).foreach {\n      _.addTaskCompletionListener[Unit] { _ =>\n        close(true)\n      }\n    }\n\n    override def close(): Unit = {\n      close(false)\n    }\n\n    protected def close(closeAllocator: Boolean): Unit = {\n      try {\n        if (!closed) {\n          if (currentBatch != null) {\n            arrowWriter.reset()\n            currentBatch.close()\n            currentBatch = null\n          }\n          root.close()\n          closed = true\n        }\n      } finally {\n        // the allocator shall be closed when the task is finished\n        if (closeAllocator) {\n          allocator.close()\n        }\n      }\n    }\n\n    override def next(): ColumnarBatch = {\n      currentBatch = nextBatch()\n      currentBatch\n    }\n\n    protected def nextBatch(): ColumnarBatch\n\n  }\n\n  private[sql] class RowToArrowBatchIter(\n      rowIter: Iterator[InternalRow],\n      schema: StructType,\n      maxRecordsPerBatch: Long,\n      timeZoneId: String,\n      context: TaskContext)\n      extends ArrowBatchIterBase(schema, timeZoneId, context)\n      with AutoCloseable {\n\n    override def hasNext: Boolean = rowIter.hasNext || {\n      close(false)\n      false\n    }\n\n    override protected def nextBatch(): ColumnarBatch = {\n      if (rowIter.hasNext) {\n        // the arrow writer shall be reset before writing the next batch\n        arrowWriter.reset()\n        var rowCount = 0L\n        while (rowIter.hasNext && (maxRecordsPerBatch <= 0 || rowCount < maxRecordsPerBatch)) {\n          val row = rowIter.next()\n          arrowWriter.write(row)\n          rowCount += 1\n        }\n        arrowWriter.finish()\n        NativeUtil.rootAsBatch(root)\n      } else {\n        null\n      }\n    }\n  }\n\n  def rowToArrowBatchIter(\n      rowIter: Iterator[InternalRow],\n      schema: StructType,\n      maxRecordsPerBatch: Long,\n      timeZoneId: String,\n      context: TaskContext): Iterator[ColumnarBatch] = {\n    new RowToArrowBatchIter(rowIter, schema, maxRecordsPerBatch, timeZoneId, context)\n  }\n\n  private[sql] class ColumnBatchToArrowBatchIter(\n      colBatch: ColumnarBatch,\n      schema: StructType,\n      maxRecordsPerBatch: Int,\n      timeZoneId: String,\n      context: TaskContext)\n      extends ArrowBatchIterBase(schema, timeZoneId, context)\n      with AutoCloseable {\n\n    private var rowsProduced: Int = 0\n\n    override def hasNext: Boolean = rowsProduced < colBatch.numRows() || {\n      close(false)\n      false\n    }\n\n    override protected def nextBatch(): ColumnarBatch = {\n      val rowsInBatch = colBatch.numRows()\n      if (rowsProduced 
< rowsInBatch) {\n        // the arrow writer shall be reset before writing the next batch\n        arrowWriter.reset()\n        val rowsToProduce =\n          if (maxRecordsPerBatch <= 0) rowsInBatch - rowsProduced\n          else Math.min(maxRecordsPerBatch, rowsInBatch - rowsProduced)\n\n        for (columnIndex <- 0 until colBatch.numCols()) {\n          val column = colBatch.column(columnIndex)\n          val columnArray = new ColumnarArray(column, rowsProduced, rowsToProduce)\n          if (column.hasNull) {\n            arrowWriter.writeCol(columnArray, columnIndex)\n          } else {\n            arrowWriter.writeColNoNull(columnArray, columnIndex)\n          }\n        }\n\n        rowsProduced += rowsToProduce\n\n        arrowWriter.finish()\n        NativeUtil.rootAsBatch(root)\n      } else {\n        null\n      }\n    }\n  }\n\n  def columnarBatchToArrowBatchIter(\n      colBatch: ColumnarBatch,\n      schema: StructType,\n      maxRecordsPerBatch: Int,\n      timeZoneId: String,\n      context: TaskContext): Iterator[ColumnarBatch] = {\n    new ColumnBatchToArrowBatchIter(colBatch, schema, maxRecordsPerBatch, timeZoneId, context)\n  }\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/spark/sql/comet/parquet/CometParquetReadSupport.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.parquet\n\nimport java.util.{Locale, UUID}\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.parquet.schema._\nimport org.apache.parquet.schema.LogicalTypeAnnotation.ListLogicalTypeAnnotation\nimport org.apache.parquet.schema.Type.Repetition\nimport org.apache.spark.sql.errors.QueryExecutionErrors\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetUtils\nimport org.apache.spark.sql.types._\n\n/**\n * This class is copied & slightly modified from [[ParquetReadSupport]] in Spark. Changes:\n *   - This doesn't extend from Parquet's `ReadSupport` class since that is used for row-based\n *     Parquet reader. Therefore, there is no `init`, `prepareForRead` as well as other methods\n *     that are unused.\n */\nobject CometParquetReadSupport {\n  val SPARK_PARQUET_SCHEMA_NAME = \"spark_schema\"\n\n  val EMPTY_MESSAGE: MessageType =\n    Types.buildMessage().named(SPARK_PARQUET_SCHEMA_NAME)\n\n  def generateFakeColumnName: String = s\"_fake_name_${UUID.randomUUID()}\"\n\n  def clipParquetSchema(\n      parquetSchema: MessageType,\n      catalystSchema: StructType,\n      caseSensitive: Boolean,\n      useFieldId: Boolean,\n      ignoreMissingIds: Boolean): MessageType = {\n    if (!ignoreMissingIds &&\n      !containsFieldIds(parquetSchema) &&\n      ParquetUtils.hasFieldIds(catalystSchema)) {\n      throw new RuntimeException(\n        \"Spark read schema expects field Ids, \" +\n          \"but Parquet file schema doesn't contain any field Ids.\\n\" +\n          \"Please remove the field ids from Spark schema or ignore missing ids by \" +\n          \"setting `spark.sql.parquet.fieldId.read.ignoreMissing = true`\\n\" +\n          s\"\"\"\n             |Spark read schema:\n             |${catalystSchema.prettyJson}\n             |\n             |Parquet file schema:\n             |${parquetSchema.toString}\n             |\"\"\".stripMargin)\n    }\n    clipParquetSchema(parquetSchema, catalystSchema, caseSensitive, useFieldId)\n  }\n\n  /**\n   * Tailors `parquetSchema` according to `catalystSchema` by removing column paths don't exist in\n   * `catalystSchema`, and adding those only exist in `catalystSchema`.\n   */\n  def clipParquetSchema(\n      parquetSchema: MessageType,\n      catalystSchema: StructType,\n      caseSensitive: Boolean,\n      useFieldId: Boolean): MessageType = {\n    val clippedParquetFields = clipParquetGroupFields(\n      parquetSchema.asGroupType(),\n      catalystSchema,\n      caseSensitive,\n      useFieldId)\n    if (clippedParquetFields.isEmpty) {\n      EMPTY_MESSAGE\n    } else {\n      Types\n        .buildMessage()\n        .addFields(clippedParquetFields: _*)\n    
.named(SPARK_PARQUET_SCHEMA_NAME)\n    }\n  }\n\n  private def clipParquetType(\n      parquetType: Type,\n      catalystType: DataType,\n      caseSensitive: Boolean,\n      useFieldId: Boolean): Type = {\n    val newParquetType = catalystType match {\n      case t: ArrayType if !isPrimitiveCatalystType(t.elementType) =>\n        // Only clips array types whose element type is nested.\n        clipParquetListType(parquetType.asGroupType(), t.elementType, caseSensitive, useFieldId)\n\n      case t: MapType\n          if !isPrimitiveCatalystType(t.keyType) ||\n            !isPrimitiveCatalystType(t.valueType) =>\n        // Only clips map types whose key type or value type is nested.\n        clipParquetMapType(\n          parquetType.asGroupType(),\n          t.keyType,\n          t.valueType,\n          caseSensitive,\n          useFieldId)\n\n      case t: StructType =>\n        clipParquetGroup(parquetType.asGroupType(), t, caseSensitive, useFieldId)\n\n      case _ =>\n        // UDTs and primitive types are not clipped.  For UDTs, a clipped version might not be\n        // mappable to the desired user-space types.  So UDTs shouldn't participate in schema\n        // merging.\n        parquetType\n    }\n\n    if (useFieldId && parquetType.getId != null) {\n      newParquetType.withId(parquetType.getId.intValue())\n    } else {\n      newParquetType\n    }\n  }\n\n  /**\n   * Whether a Catalyst [[DataType]] is primitive. Primitive [[DataType]] is not equivalent to\n   * [[AtomicType]]. For example, [[CalendarIntervalType]] is primitive, but it's not an\n   * [[AtomicType]].\n   */\n  private def isPrimitiveCatalystType(dataType: DataType): Boolean = {\n    dataType match {\n      case _: ArrayType | _: MapType | _: StructType => false\n      case _ => true\n    }\n  }\n\n  /**\n   * Clips a Parquet [[GroupType]] which corresponds to a Catalyst [[ArrayType]]. The element type\n   * of the [[ArrayType]] should also be a nested type, namely an [[ArrayType]], a [[MapType]], or\n   * a [[StructType]].\n   */\n  private def clipParquetListType(\n      parquetList: GroupType,\n      elementType: DataType,\n      caseSensitive: Boolean,\n      useFieldId: Boolean): Type = {\n    // Precondition of this method: it should only be called for lists with nested element types.\n    assert(!isPrimitiveCatalystType(elementType))\n\n    // An unannotated repeated group should be interpreted as a required list of required\n    // elements, so the list element type is just the group itself.  Clip it.\n    if (parquetList.getLogicalTypeAnnotation == null &&\n      parquetList.isRepetition(Repetition.REPEATED)) {\n      clipParquetType(parquetList, elementType, caseSensitive, useFieldId)\n    } else {\n      assert(\n        parquetList.getLogicalTypeAnnotation.isInstanceOf[ListLogicalTypeAnnotation],\n        \"Invalid Parquet schema. \" +\n          \"Logical type annotation of annotated Parquet lists must be ListLogicalTypeAnnotation: \" +\n          parquetList.toString)\n\n      assert(\n        parquetList.getFieldCount == 1 && parquetList\n          .getType(0)\n          .isRepetition(Repetition.REPEATED),\n        \"Invalid Parquet schema. 
\" +\n          \"LIST-annotated group should only have exactly one repeated field: \" +\n          parquetList)\n\n      // Precondition of this method, should only be called for lists with nested element types.\n      assert(!parquetList.getType(0).isPrimitive)\n\n      val repeatedGroup = parquetList.getType(0).asGroupType()\n\n      // If the repeated field is a group with multiple fields, or the repeated field is a group\n      // with one field and is named either \"array\" or uses the LIST-annotated group's name with\n      // \"_tuple\" appended then the repeated type is the element type and elements are required.\n      // Build a new LIST-annotated group with clipped `repeatedGroup` as element type and the\n      // only field.\n      if (repeatedGroup.getFieldCount > 1 ||\n        repeatedGroup.getName == \"array\" ||\n        repeatedGroup.getName == parquetList.getName + \"_tuple\") {\n        Types\n          .buildGroup(parquetList.getRepetition)\n          .as(LogicalTypeAnnotation.listType())\n          .addField(clipParquetType(repeatedGroup, elementType, caseSensitive, useFieldId))\n          .named(parquetList.getName)\n      } else {\n        val newRepeatedGroup = Types\n          .repeatedGroup()\n          .addField(\n            clipParquetType(repeatedGroup.getType(0), elementType, caseSensitive, useFieldId))\n          .named(repeatedGroup.getName)\n\n        val newElementType = if (useFieldId && repeatedGroup.getId != null) {\n          newRepeatedGroup.withId(repeatedGroup.getId.intValue())\n        } else {\n          newRepeatedGroup\n        }\n\n        // Otherwise, the repeated field's type is the element type with the repeated field's\n        // repetition.\n        Types\n          .buildGroup(parquetList.getRepetition)\n          .as(LogicalTypeAnnotation.listType())\n          .addField(newElementType)\n          .named(parquetList.getName)\n      }\n    }\n  }\n\n  /**\n   * Clips a Parquet [[GroupType]] which corresponds to a Catalyst [[MapType]]. 
Either key type or\n   * value type of the [[MapType]] must be a nested type, namely an [[ArrayType]], a [[MapType]],\n   * or a [[StructType]].\n   */\n  private def clipParquetMapType(\n      parquetMap: GroupType,\n      keyType: DataType,\n      valueType: DataType,\n      caseSensitive: Boolean,\n      useFieldId: Boolean): GroupType = {\n    // Precondition of this method: it only handles maps with nested key types or value types.\n    assert(!isPrimitiveCatalystType(keyType) || !isPrimitiveCatalystType(valueType))\n\n    val repeatedGroup = parquetMap.getType(0).asGroupType()\n    val parquetKeyType = repeatedGroup.getType(0)\n    val parquetValueType = repeatedGroup.getType(1)\n\n    val clippedRepeatedGroup = {\n      val newRepeatedGroup = Types\n        .repeatedGroup()\n        .as(repeatedGroup.getLogicalTypeAnnotation)\n        .addField(clipParquetType(parquetKeyType, keyType, caseSensitive, useFieldId))\n        .addField(clipParquetType(parquetValueType, valueType, caseSensitive, useFieldId))\n        .named(repeatedGroup.getName)\n      if (useFieldId && repeatedGroup.getId != null) {\n        newRepeatedGroup.withId(repeatedGroup.getId.intValue())\n      } else {\n        newRepeatedGroup\n      }\n    }\n\n    Types\n      .buildGroup(parquetMap.getRepetition)\n      .as(parquetMap.getLogicalTypeAnnotation)\n      .addField(clippedRepeatedGroup)\n      .named(parquetMap.getName)\n  }\n\n  /**\n   * Clips a Parquet [[GroupType]] which corresponds to a Catalyst [[StructType]].\n   *\n   * @return\n   *   A clipped [[GroupType]], which has at least one field.\n   * @note\n   *   Parquet doesn't allow creating empty [[GroupType]] instances except for the empty\n   *   [[MessageType]]. This matters because it's legal to construct an empty requested schema\n   *   for column pruning.\n   */\n  private def clipParquetGroup(\n      parquetRecord: GroupType,\n      structType: StructType,\n      caseSensitive: Boolean,\n      useFieldId: Boolean): GroupType = {\n    val clippedParquetFields =\n      clipParquetGroupFields(parquetRecord, structType, caseSensitive, useFieldId)\n    Types\n      .buildGroup(parquetRecord.getRepetition)\n      .as(parquetRecord.getLogicalTypeAnnotation)\n      .addFields(clippedParquetFields: _*)\n      .named(parquetRecord.getName)\n  }\n\n  /**\n   * Clips a Parquet [[GroupType]] which corresponds to a Catalyst [[StructType]].\n   *\n   * @return\n   *   A list of clipped [[GroupType]] fields, which can be empty.\n   */\n  private def clipParquetGroupFields(\n      parquetRecord: GroupType,\n      structType: StructType,\n      caseSensitive: Boolean,\n      useFieldId: Boolean): Seq[Type] = {\n    val toParquet = new CometSparkToParquetSchemaConverter(\n      writeLegacyParquetFormat = false,\n      useFieldId = useFieldId)\n    lazy val caseSensitiveParquetFieldMap =\n      parquetRecord.getFields.asScala.map(f => f.getName -> f).toMap\n    lazy val caseInsensitiveParquetFieldMap =\n      parquetRecord.getFields.asScala.groupBy(_.getName.toLowerCase(Locale.ROOT))\n    lazy val idToParquetFieldMap =\n      parquetRecord.getFields.asScala.filter(_.getId != null).groupBy(f => f.getId.intValue())\n\n    def matchCaseSensitiveField(f: StructField): Type = {\n      caseSensitiveParquetFieldMap\n        .get(f.name)\n        .map(clipParquetType(_, f.dataType, caseSensitive, useFieldId))\n        .getOrElse(toParquet.convertField(f))\n    }\n\n    def matchCaseInsensitiveField(f: StructField): Type = {\n      // Do case-insensitive resolution only if in case-insensitive 
mode\n      caseInsensitiveParquetFieldMap\n        .get(f.name.toLowerCase(Locale.ROOT))\n        .map { parquetTypes =>\n          if (parquetTypes.size > 1) {\n            // Need to fail if there is ambiguity, i.e. more than one field is matched\n            val parquetTypesString = parquetTypes.map(_.getName).mkString(\"[\", \", \", \"]\")\n            throw QueryExecutionErrors.foundDuplicateFieldInCaseInsensitiveModeError(\n              f.name,\n              parquetTypesString)\n          } else {\n            clipParquetType(parquetTypes.head, f.dataType, caseSensitive, useFieldId)\n          }\n        }\n        .getOrElse(toParquet.convertField(f))\n    }\n\n    def matchIdField(f: StructField): Type = {\n      val fieldId = ParquetUtils.getFieldId(f)\n      idToParquetFieldMap\n        .get(fieldId)\n        .map { parquetTypes =>\n          if (parquetTypes.size > 1) {\n            // Need to fail if there is ambiguity, i.e. more than one field is matched\n            val parquetTypesString = parquetTypes.map(_.getName).mkString(\"[\", \", \", \"]\")\n            throw QueryExecutionErrors.foundDuplicateFieldInFieldIdLookupModeError(\n              fieldId,\n              parquetTypesString)\n          } else {\n            clipParquetType(parquetTypes.head, f.dataType, caseSensitive, useFieldId)\n          }\n        }\n        .getOrElse {\n          // When there is no ID match, we use a fake name to avoid a name match by accident\n          // We need this name to be unique as well, otherwise there will be type conflicts\n          toParquet.convertField(f.copy(name = generateFakeColumnName))\n        }\n    }\n\n    val shouldMatchById = useFieldId && ParquetUtils.hasFieldIds(structType)\n    structType.map { f =>\n      if (shouldMatchById && ParquetUtils.hasFieldId(f)) {\n        matchIdField(f)\n      } else if (caseSensitive) {\n        matchCaseSensitiveField(f)\n      } else {\n        matchCaseInsensitiveField(f)\n      }\n    }\n  }\n\n  /**\n   * Whether the parquet schema contains any field IDs.\n   */\n  private def containsFieldIds(schema: Type): Boolean = schema match {\n    case p: PrimitiveType => p.getId != null\n    // We don't require all fields to have IDs, so we use `exists` here.\n    case g: GroupType => g.getId != null || g.getFields.asScala.exists(containsFieldIds)\n  }\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/spark/sql/comet/parquet/CometSparkToParquetSchemaConverter.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.parquet\n\nimport org.apache.hadoop.conf.Configuration\nimport org.apache.parquet.schema._\nimport org.apache.parquet.schema.LogicalTypeAnnotation._\nimport org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName._\nimport org.apache.parquet.schema.Type.Repetition._\nimport org.apache.spark.sql.errors.QueryCompilationErrors\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetSchemaConverter\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetUtils\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.parquet.CometParquetUtils\n\n/**\n * This class is copied & modified from Spark's [[SparkToParquetSchemaConverter]] class.\n */\nclass CometSparkToParquetSchemaConverter(\n    writeLegacyParquetFormat: Boolean = SQLConf.PARQUET_WRITE_LEGACY_FORMAT.defaultValue.get,\n    outputTimestampType: SQLConf.ParquetOutputTimestampType.Value =\n      SQLConf.ParquetOutputTimestampType.INT96,\n    useFieldId: Boolean = CometParquetUtils.writeFieldId(new SQLConf)) {\n\n  def this(conf: SQLConf) = this(\n    writeLegacyParquetFormat = conf.writeLegacyParquetFormat,\n    outputTimestampType = conf.parquetOutputTimestampType,\n    useFieldId = CometParquetUtils.writeFieldId(conf))\n\n  def this(conf: Configuration) = this(\n    writeLegacyParquetFormat = conf.get(SQLConf.PARQUET_WRITE_LEGACY_FORMAT.key).toBoolean,\n    outputTimestampType = SQLConf.ParquetOutputTimestampType.withName(\n      conf.get(SQLConf.PARQUET_OUTPUT_TIMESTAMP_TYPE.key)),\n    useFieldId = CometParquetUtils.writeFieldId(conf))\n\n  /**\n   * Converts a Spark SQL [[StructType]] to a Parquet [[MessageType]].\n   */\n  def convert(catalystSchema: StructType): MessageType = {\n    Types\n      .buildMessage()\n      .addFields(catalystSchema.map(convertField): _*)\n      .named(ParquetSchemaConverter.SPARK_PARQUET_SCHEMA_NAME)\n  }\n\n  /**\n   * Converts a Spark SQL [[StructField]] to a Parquet [[Type]].\n   */\n  def convertField(field: StructField): Type = {\n    val converted = convertField(field, if (field.nullable) OPTIONAL else REQUIRED)\n    if (useFieldId && ParquetUtils.hasFieldId(field)) {\n      converted.withId(ParquetUtils.getFieldId(field))\n    } else {\n      converted\n    }\n  }\n\n  private def convertField(field: StructField, repetition: Type.Repetition): Type = {\n\n    field.dataType match {\n      // ===================\n      // Simple atomic types\n      // ===================\n\n      case BooleanType =>\n        Types.primitive(BOOLEAN, repetition).named(field.name)\n\n      case ByteType =>\n        Types\n          .primitive(INT32, repetition)\n          
.as(LogicalTypeAnnotation.intType(8, true))\n          .named(field.name)\n\n      case ShortType =>\n        Types\n          .primitive(INT32, repetition)\n          .as(LogicalTypeAnnotation.intType(16, true))\n          .named(field.name)\n\n      case IntegerType =>\n        Types.primitive(INT32, repetition).named(field.name)\n\n      case LongType =>\n        Types.primitive(INT64, repetition).named(field.name)\n\n      case FloatType =>\n        Types.primitive(FLOAT, repetition).named(field.name)\n\n      case DoubleType =>\n        Types.primitive(DOUBLE, repetition).named(field.name)\n\n      case StringType =>\n        Types\n          .primitive(BINARY, repetition)\n          .as(LogicalTypeAnnotation.stringType())\n          .named(field.name)\n\n      case DateType =>\n        Types\n          .primitive(INT32, repetition)\n          .as(LogicalTypeAnnotation.dateType())\n          .named(field.name)\n\n      // NOTE: Spark SQL can write timestamp values to Parquet using INT96, TIMESTAMP_MICROS or\n      // TIMESTAMP_MILLIS. TIMESTAMP_MICROS is recommended but INT96 is the default to keep the\n      // behavior the same as before.\n      //\n      // As stated in PARQUET-323, Parquet `INT96` was originally introduced to represent nanosecond\n      // timestamps in Impala for some historical reasons.  It's not recommended to be used for any\n      // other types and will probably be deprecated in some future version of the parquet-format\n      // spec.  That's the reason why the parquet-format spec only defines `TIMESTAMP_MILLIS` and\n      // `TIMESTAMP_MICROS` which are both logical types annotating `INT64`.\n      //\n      // Originally, Spark SQL used the same nanosecond timestamp type as Impala and Hive.  Starting\n      // from Spark 1.5.0, we resort to a timestamp type with microsecond precision so that we can\n      // store a timestamp in a `Long`.  This design decision is subject to change though, for\n      // example, we may resort to nanosecond precision in the future.\n      case TimestampType =>\n        outputTimestampType match {\n          case SQLConf.ParquetOutputTimestampType.INT96 =>\n            Types.primitive(INT96, repetition).named(field.name)\n          case SQLConf.ParquetOutputTimestampType.TIMESTAMP_MICROS =>\n            Types\n              .primitive(INT64, repetition)\n              .as(LogicalTypeAnnotation.timestampType(true, TimeUnit.MICROS))\n              .named(field.name)\n          case SQLConf.ParquetOutputTimestampType.TIMESTAMP_MILLIS =>\n            Types\n              .primitive(INT64, repetition)\n              .as(LogicalTypeAnnotation.timestampType(true, TimeUnit.MILLIS))\n              .named(field.name)\n        }\n\n      case TimestampNTZType =>\n        Types\n          .primitive(INT64, repetition)\n          .as(LogicalTypeAnnotation.timestampType(false, TimeUnit.MICROS))\n          .named(field.name)\n\n      case BinaryType =>\n        Types.primitive(BINARY, repetition).named(field.name)\n\n      // ======================\n      // Decimals (legacy mode)\n      // ======================\n\n      // Spark 1.4.x and prior versions only support decimals with a maximum precision of 18 and\n      // always store decimals in fixed-length byte arrays.  
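A decimal(10, 2), for example,\n      // needs Decimal.minBytesForPrecision(10) = 5 bytes. 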
To keep compatibility with these older\n      // versions, here we convert decimals with all precisions to `FIXED_LEN_BYTE_ARRAY` annotated\n      // by `DECIMAL`.\n      case DecimalType.Fixed(precision, scale) if writeLegacyParquetFormat =>\n        Types\n          .primitive(FIXED_LEN_BYTE_ARRAY, repetition)\n          .as(LogicalTypeAnnotation.decimalType(scale, precision))\n          .length(Decimal.minBytesForPrecision(precision))\n          .named(field.name)\n\n      // ========================\n      // Decimals (standard mode)\n      // ========================\n\n      // Uses INT32 for 1 <= precision <= 9\n      case DecimalType.Fixed(precision, scale)\n          if precision <= Decimal.MAX_INT_DIGITS && !writeLegacyParquetFormat =>\n        Types\n          .primitive(INT32, repetition)\n          .as(LogicalTypeAnnotation.decimalType(scale, precision))\n          .named(field.name)\n\n      // Uses INT64 for 1 <= precision <= 18\n      case DecimalType.Fixed(precision, scale)\n          if precision <= Decimal.MAX_LONG_DIGITS && !writeLegacyParquetFormat =>\n        Types\n          .primitive(INT64, repetition)\n          .as(LogicalTypeAnnotation.decimalType(scale, precision))\n          .named(field.name)\n\n      // Uses FIXED_LEN_BYTE_ARRAY for all other precisions\n      case DecimalType.Fixed(precision, scale) if !writeLegacyParquetFormat =>\n        Types\n          .primitive(FIXED_LEN_BYTE_ARRAY, repetition)\n          .as(LogicalTypeAnnotation.decimalType(scale, precision))\n          .length(Decimal.minBytesForPrecision(precision))\n          .named(field.name)\n\n      // ===================================\n      // ArrayType and MapType (legacy mode)\n      // ===================================\n\n      // Spark 1.4.x and prior versions convert `ArrayType` with nullable elements into a 3-level\n      // `LIST` structure.  This behavior is something of a hybrid of parquet-hive and parquet-avro\n      // (1.6.0rc3): the 3-level structure is similar to parquet-hive while the 3rd level element\n      // field name \"array\" is borrowed from parquet-avro.\n      case ArrayType(elementType, nullable @ true) if writeLegacyParquetFormat =>\n        // <list-repetition> group <name> (LIST) {\n        //   optional group bag {\n        //     repeated <element-type> array;\n        //   }\n        // }\n\n        // We should not use `listOfElements` here because that newer method checks whether the\n        // element name in the `GroupType` is `element` and throws an exception if not.\n        // As mentioned above, Spark 1.4.x and prior versions write `ArrayType` as `LIST` but with\n        // `array` as the element name, as below. Therefore, we manually build\n        // the correct group type here via the builder. (See SPARK-16777)\n        Types\n          .buildGroup(repetition)\n          .as(LogicalTypeAnnotation.listType())\n          .addField(\n            Types\n              .buildGroup(REPEATED)\n              // \"array\" is the name chosen by parquet-hive (1.7.0 and prior version)\n              .addField(convertField(StructField(\"array\", elementType, nullable)))\n              .named(\"bag\"))\n          .named(field.name)\n\n      // Spark 1.4.x and prior versions convert ArrayType with non-nullable elements into a 2-level\n      // LIST structure.  This behavior mimics parquet-avro (1.6.0rc3).  
Note that this case is\n      // covered by the backwards-compatibility rules implemented in `isElementType()`.\n      case ArrayType(elementType, nullable @ false) if writeLegacyParquetFormat =>\n        // <list-repetition> group <name> (LIST) {\n        //   repeated <element-type> element;\n        // }\n\n        // Here too, we should not use `listOfElements`. (See SPARK-16777)\n        Types\n          .buildGroup(repetition)\n          .as(LogicalTypeAnnotation.listType())\n          // \"array\" is the name chosen by parquet-avro (1.7.0 and prior version)\n          .addField(convertField(StructField(\"array\", elementType, nullable), REPEATED))\n          .named(field.name)\n\n      // Spark 1.4.x and prior versions convert MapType into a 3-level group annotated by\n      // MAP_KEY_VALUE.  This is covered by `convertGroupField(field: GroupType): DataType`.\n      case MapType(keyType, valueType, valueContainsNull) if writeLegacyParquetFormat =>\n        // <map-repetition> group <name> (MAP) {\n        //   repeated group map (MAP_KEY_VALUE) {\n        //     required <key-type> key;\n        //     <value-repetition> <value-type> value;\n        //   }\n        // }\n        ConversionPatterns.mapType(\n          repetition,\n          field.name,\n          convertField(StructField(\"key\", keyType, nullable = false)),\n          convertField(StructField(\"value\", valueType, valueContainsNull)))\n\n      // =====================================\n      // ArrayType and MapType (standard mode)\n      // =====================================\n\n      case ArrayType(elementType, containsNull) if !writeLegacyParquetFormat =>\n        // <list-repetition> group <name> (LIST) {\n        //   repeated group list {\n        //     <element-repetition> <element-type> element;\n        //   }\n        // }\n        Types\n          .buildGroup(repetition)\n          .as(LogicalTypeAnnotation.listType())\n          .addField(\n            Types\n              .repeatedGroup()\n              .addField(convertField(StructField(\"element\", elementType, containsNull)))\n              .named(\"list\"))\n          .named(field.name)\n\n      case MapType(keyType, valueType, valueContainsNull) =>\n        // <map-repetition> group <name> (MAP) {\n        //   repeated group key_value {\n        //     required <key-type> key;\n        //     <value-repetition> <value-type> value;\n        //   }\n        // }\n        Types\n          .buildGroup(repetition)\n          .as(LogicalTypeAnnotation.mapType())\n          .addField(\n            Types\n              .repeatedGroup()\n              .addField(convertField(StructField(\"key\", keyType, nullable = false)))\n              .addField(convertField(StructField(\"value\", valueType, valueContainsNull)))\n              .named(\"key_value\"))\n          .named(field.name)\n\n      // ===========\n      // Other types\n      // ===========\n\n      case StructType(fields) =>\n        fields\n          .foldLeft(Types.buildGroup(repetition)) { (builder, field) =>\n            builder.addField(convertField(field))\n          }\n          .named(field.name)\n\n      case udt: UserDefinedType[_] =>\n        convertField(field.copy(dataType = udt.sqlType))\n\n      case _ =>\n        throw QueryCompilationErrors.cannotConvertDataTypeToParquetTypeError(field)\n    }\n  }\n}\n"
  },
  {
    "path": "common/src/main/scala/org/apache/spark/sql/comet/util/Utils.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.util\n\nimport java.io.{DataInputStream, DataOutputStream, File}\nimport java.nio.ByteBuffer\nimport java.nio.channels.Channels\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.arrow.c.CDataDictionaryProvider\nimport org.apache.arrow.vector._\nimport org.apache.arrow.vector.complex.{ListVector, MapVector, StructVector}\nimport org.apache.arrow.vector.dictionary.DictionaryProvider\nimport org.apache.arrow.vector.ipc.{ArrowStreamReader, ArrowStreamWriter}\nimport org.apache.arrow.vector.types._\nimport org.apache.arrow.vector.types.pojo.{ArrowType, Field, FieldType, Schema}\nimport org.apache.arrow.vector.util.VectorSchemaRootAppender\nimport org.apache.spark.{SparkEnv, SparkException}\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.io.CompressionCodec\nimport org.apache.spark.sql.comet.execution.arrow.ArrowReaderIterator\nimport org.apache.spark.sql.types._\nimport org.apache.spark.sql.vectorized.ColumnarBatch\nimport org.apache.spark.util.io.{ChunkedByteBuffer, ChunkedByteBufferOutputStream}\n\nimport org.apache.comet.Constants.COMET_CONF_DIR_ENV\nimport org.apache.comet.shims.CometTypeShim\nimport org.apache.comet.vector.CometVector\n\nobject Utils extends CometTypeShim with Logging {\n  def getConfPath(confFileName: String): String = {\n    sys.env\n      .get(COMET_CONF_DIR_ENV)\n      .map { t => new File(s\"$t${File.separator}$confFileName\") }\n      .filter(_.isFile)\n      .map(_.getAbsolutePath)\n      .orNull\n  }\n\n  def stringToSeq(str: String): Seq[String] = {\n    str.split(\",\").map(_.trim()).filter(_.nonEmpty)\n  }\n\n  /** bridges the function call to Spark's Util */\n  def getSimpleName(cls: Class[_]): String = {\n    org.apache.spark.util.Utils.getSimpleName(cls)\n  }\n\n  def fromArrowField(field: Field): DataType = {\n    field.getType match {\n      case _: ArrowType.Map =>\n        val elementField = field.getChildren.get(0)\n        val keyType = fromArrowField(elementField.getChildren.get(0))\n        val valueType = fromArrowField(elementField.getChildren.get(1))\n        MapType(keyType, valueType, elementField.getChildren.get(1).isNullable)\n      case ArrowType.List.INSTANCE =>\n        val elementField = field.getChildren().get(0)\n        val elementType = fromArrowField(elementField)\n        ArrayType(elementType, containsNull = elementField.isNullable)\n      case ArrowType.Struct.INSTANCE =>\n        val fields = field.getChildren().asScala.map { child =>\n          val dt = fromArrowField(child)\n          StructField(child.getName, dt, child.isNullable)\n        }\n        StructType(fields.toSeq)\n      case arrowType => fromArrowType(arrowType)\n    
}\n  }\n\n  def fromArrowType(dt: ArrowType): DataType = dt match {\n    case ArrowType.Bool.INSTANCE => BooleanType\n    case int: ArrowType.Int if int.getIsSigned && int.getBitWidth == 8 => ByteType\n    case int: ArrowType.Int if int.getIsSigned && int.getBitWidth == 8 * 2 => ShortType\n    case int: ArrowType.Int if int.getIsSigned && int.getBitWidth == 8 * 4 => IntegerType\n    case int: ArrowType.Int if int.getIsSigned && int.getBitWidth == 8 * 8 => LongType\n    case float: ArrowType.FloatingPoint if float.getPrecision == FloatingPointPrecision.SINGLE =>\n      FloatType\n    case float: ArrowType.FloatingPoint if float.getPrecision == FloatingPointPrecision.DOUBLE =>\n      DoubleType\n    case ArrowType.Utf8.INSTANCE => StringType\n    case ArrowType.Binary.INSTANCE => BinaryType\n    case _: ArrowType.FixedSizeBinary => BinaryType\n    case d: ArrowType.Decimal => DecimalType(d.getPrecision, d.getScale)\n    case date: ArrowType.Date if date.getUnit == DateUnit.DAY => DateType\n    case ts: ArrowType.Timestamp\n        if ts.getUnit == TimeUnit.MICROSECOND && ts.getTimezone == null =>\n      TimestampNTZType\n    case ts: ArrowType.Timestamp if ts.getUnit == TimeUnit.MICROSECOND => TimestampType\n    case ArrowType.Null.INSTANCE => NullType\n    case yi: ArrowType.Interval if yi.getUnit == IntervalUnit.YEAR_MONTH =>\n      YearMonthIntervalType()\n    case di: ArrowType.Interval if di.getUnit == IntervalUnit.DAY_TIME => DayTimeIntervalType()\n    case _ => throw new UnsupportedOperationException(s\"Unsupported data type: ${dt.toString}\")\n  }\n\n  def fromArrowSchema(schema: Schema): StructType = {\n    StructType(schema.getFields.asScala.map { field =>\n      val dt = fromArrowField(field)\n      StructField(field.getName, dt, field.isNullable)\n    }.toArray)\n  }\n\n  /** Maps data type from Spark to Arrow. NOTE: timeZoneId required for TimestampTypes */\n  def toArrowType(dt: DataType, timeZoneId: String): ArrowType =\n    dt match {\n      case BooleanType => ArrowType.Bool.INSTANCE\n      case ByteType => new ArrowType.Int(8, true)\n      case ShortType => new ArrowType.Int(8 * 2, true)\n      case IntegerType => new ArrowType.Int(8 * 4, true)\n      case LongType => new ArrowType.Int(8 * 8, true)\n      case FloatType => new ArrowType.FloatingPoint(FloatingPointPrecision.SINGLE)\n      case DoubleType => new ArrowType.FloatingPoint(FloatingPointPrecision.DOUBLE)\n      case _: StringType => ArrowType.Utf8.INSTANCE\n      case dt if isStringCollationType(dt) => ArrowType.Utf8.INSTANCE\n      case BinaryType => ArrowType.Binary.INSTANCE\n      case DecimalType.Fixed(precision, scale) => new ArrowType.Decimal(precision, scale, 128)\n      case DateType => new ArrowType.Date(DateUnit.DAY)\n      case TimestampType =>\n        if (timeZoneId == null) {\n          throw new UnsupportedOperationException(\n            s\"${TimestampType.catalogString} must supply timeZoneId parameter\")\n        } else {\n          new ArrowType.Timestamp(TimeUnit.MICROSECOND, timeZoneId)\n        }\n      case TimestampNTZType =>\n        new ArrowType.Timestamp(TimeUnit.MICROSECOND, null)\n      case _ =>\n        throw new UnsupportedOperationException(\n          s\"Unsupported data type: [${dt.getClass.getName}] ${dt.catalogString}\")\n    }\n\n  /** Maps field from Spark to Arrow. 
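For example (illustrative):\n   * `toArrowField(\"m\", MapType(StringType, LongType, true), nullable = true, null)` yields a map\n   * field whose repeated entry struct has a required key and a nullable value. 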
NOTE: timeZoneId required for TimestampType */\n  def toArrowField(name: String, dt: DataType, nullable: Boolean, timeZoneId: String): Field = {\n    dt match {\n      case ArrayType(elementType, containsNull) =>\n        val fieldType = new FieldType(nullable, ArrowType.List.INSTANCE, null)\n        new Field(\n          name,\n          fieldType,\n          Seq(toArrowField(\"element\", elementType, containsNull, timeZoneId)).asJava)\n      case StructType(fields) =>\n        val fieldType = new FieldType(nullable, ArrowType.Struct.INSTANCE, null)\n        new Field(\n          name,\n          fieldType,\n          fields\n            .map { field =>\n              toArrowField(field.name, field.dataType, field.nullable, timeZoneId)\n            }\n            .toSeq\n            .asJava)\n      case MapType(keyType, valueType, valueContainsNull) =>\n        val mapType = new FieldType(nullable, new ArrowType.Map(false), null)\n        // Note: the map entry struct cannot be null, and its key field cannot be null\n        new Field(\n          name,\n          mapType,\n          Seq(\n            toArrowField(\n              MapVector.DATA_VECTOR_NAME,\n              new StructType()\n                .add(MapVector.KEY_NAME, keyType, nullable = false)\n                .add(MapVector.VALUE_NAME, valueType, nullable = valueContainsNull),\n              nullable = false,\n              timeZoneId)).asJava)\n      case dataType =>\n        val fieldType = new FieldType(nullable, toArrowType(dataType, timeZoneId), null)\n        new Field(name, fieldType, Seq.empty[Field].asJava)\n    }\n  }\n\n  /**\n   * Maps schema from Spark to Arrow. NOTE: timeZoneId required for TimestampType in StructType\n   */\n  def toArrowSchema(schema: StructType, timeZoneId: String): Schema = {\n    new Schema(schema.map { field =>\n      toArrowField(field.name, field.dataType, field.nullable, timeZoneId)\n    }.asJava)\n  }\n\n  /**\n   * Serializes `ColumnarBatch`es into compressed Arrow IPC buffers, one buffer per batch. This\n   * method must be in a `spark` package because `ChunkedByteBufferOutputStream` is a\n   * Spark-private class. 
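If a column is dictionary-encoded, the\n   * batch's `DictionaryProvider` is reused when writing the stream. 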
As it uses Arrow\n   * classes, it must be in the `common` module.\n   *\n   * @param batches\n   *   the batches to serialize; each batch is a list of Arrow vectors wrapped in `CometVector`\n   * @return\n   *   an iterator of (row count, serialized bytes) pairs, one per input batch\n   */\n  def serializeBatches(batches: Iterator[ColumnarBatch]): Iterator[(Long, ChunkedByteBuffer)] = {\n    batches.map { batch =>\n      val dictionaryProvider: CDataDictionaryProvider = new CDataDictionaryProvider\n\n      val codec = CompressionCodec.createCodec(SparkEnv.get.conf)\n      val cbbos = new ChunkedByteBufferOutputStream(1024 * 1024, ByteBuffer.allocate)\n      val out = new DataOutputStream(codec.compressedOutputStream(cbbos))\n\n      val (fieldVectors, batchProviderOpt) = getBatchFieldVectors(batch)\n      val root = new VectorSchemaRoot(fieldVectors.asJava)\n      val provider = batchProviderOpt.getOrElse(dictionaryProvider)\n\n      val writer = new ArrowStreamWriter(root, provider, Channels.newChannel(out))\n      writer.start()\n      writer.writeBatch()\n      root.clear()\n      writer.close()\n\n      if (out.size() > 0) {\n        (batch.numRows().toLong, cbbos.toChunkedByteBuffer)\n      } else {\n        (batch.numRows().toLong, new ChunkedByteBuffer(Array.empty[ByteBuffer]))\n      }\n    }\n  }\n\n  /**\n   * Decodes the serialized bytes back into `ColumnarBatch`es.\n   *\n   * @param bytes\n   *   the serialized batches\n   * @param source\n   *   the class that calls this method\n   * @return\n   *   an iterator of ColumnarBatch\n   */\n  def decodeBatches(bytes: ChunkedByteBuffer, source: String): Iterator[ColumnarBatch] = {\n    if (bytes.size == 0) {\n      return Iterator.empty\n    }\n\n    // use Spark's compression codec (LZ4 by default) and not Comet's compression\n    val codec = CompressionCodec.createCodec(SparkEnv.get.conf)\n    val cbbis = bytes.toInputStream()\n    val ins = new DataInputStream(codec.compressedInputStream(cbbis))\n    // batches are in Arrow IPC format\n    new ArrowReaderIterator(Channels.newChannel(ins), source)\n  }\n\n  /**\n   * Coalesces many small Arrow IPC batches into a single batch for broadcasting.\n   *\n   * Why this is necessary: The broadcast exchange collects shuffle output by calling\n   * getByteArrayRdd, which serializes each ColumnarBatch independently into its own\n   * ChunkedByteBuffer. The shuffle reader (CometBlockStoreShuffleReader) produces one\n   * ColumnarBatch per shuffle block, and there is one block per writer task per output partition.\n   * So with W writer tasks and P output partitions, the broadcast collects up to W * P tiny\n   * batches. For example, with 400 writer tasks and 500 partitions, 1M rows would arrive as ~200K\n   * batches of ~5 rows each.\n   *\n   * Without coalescing, every consumer task in the broadcast join would independently deserialize\n   * all of these tiny Arrow IPC streams, paying per-stream overhead (schema parsing, buffer\n   * allocation) for each one. With coalescing, we decode and append all batches into one\n   * VectorSchemaRoot on the driver, then re-serialize once. 
Each consumer task then deserializes\n   * a single Arrow IPC stream.\n   */\n  def coalesceBroadcastBatches(\n      input: Iterator[ChunkedByteBuffer]): (Array[ChunkedByteBuffer], Long, Long) = {\n    val buffers = input.filterNot(_.size == 0).toArray\n    if (buffers.isEmpty) {\n      return (Array.empty, 0L, 0L)\n    }\n\n    val allocator = org.apache.comet.CometArrowAllocator\n      .newChildAllocator(\"broadcast-coalesce\", 0, Long.MaxValue)\n    try {\n      var targetRoot: VectorSchemaRoot = null\n      var totalRows = 0L\n      var batchCount = 0\n\n      val codec = CompressionCodec.createCodec(SparkEnv.get.conf)\n      try {\n        for (bytes <- buffers) {\n          val compressedInputStream =\n            new DataInputStream(codec.compressedInputStream(bytes.toInputStream()))\n          val reader =\n            new ArrowStreamReader(Channels.newChannel(compressedInputStream), allocator)\n          try {\n            // Comet decodes dictionaries during execution, so this shouldn't happen.\n            // If it does, fall back to the original uncoalesced buffers because each\n            // partition can have a different dictionary, and appending index vectors\n            // would silently mix indices from incompatible dictionaries.\n            if (!reader.getDictionaryVectors.isEmpty) {\n              logWarning(\n                \"Unexpected dictionary-encoded column during BroadcastExchange coalescing; \" +\n                  \"skipping coalesce\")\n              reader.close()\n              if (targetRoot != null) {\n                targetRoot.close()\n                targetRoot = null\n              }\n              return (buffers, 0L, 0L)\n            }\n            while (reader.loadNextBatch()) {\n              val sourceRoot = reader.getVectorSchemaRoot\n              if (targetRoot == null) {\n                targetRoot = VectorSchemaRoot.create(sourceRoot.getSchema, allocator)\n                targetRoot.allocateNew()\n              }\n              VectorSchemaRootAppender.append(targetRoot, sourceRoot)\n              totalRows += sourceRoot.getRowCount\n              batchCount += 1\n            }\n          } finally {\n            reader.close()\n          }\n        }\n\n        if (targetRoot == null) {\n          return (Array.empty, 0L, 0L)\n        }\n\n        assert(\n          targetRoot.getRowCount.toLong == totalRows,\n          s\"Row count mismatch after coalesce: ${targetRoot.getRowCount} != $totalRows\")\n\n        logInfo(s\"Coalesced $batchCount broadcast batches into 1 ($totalRows rows)\")\n\n        val outputStream = new ChunkedByteBufferOutputStream(1024 * 1024, ByteBuffer.allocate)\n        val compressedOutputStream =\n          new DataOutputStream(codec.compressedOutputStream(outputStream))\n        val writer =\n          new ArrowStreamWriter(targetRoot, null, Channels.newChannel(compressedOutputStream))\n        try {\n          writer.start()\n          writer.writeBatch()\n        } finally {\n          writer.close()\n        }\n\n        (Array(outputStream.toChunkedByteBuffer), batchCount.toLong, totalRows)\n      } finally {\n        if (targetRoot != null) {\n          targetRoot.close()\n        }\n      }\n    } finally {\n      allocator.close()\n    }\n  }\n\n  def getBatchFieldVectors(\n      batch: ColumnarBatch): (Seq[FieldVector], Option[DictionaryProvider]) = {\n    var provider: Option[DictionaryProvider] = None\n    val fieldVectors = (0 until batch.numCols()).map { index =>\n      batch.column(index) match {\n  
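      // Reuse the first dictionary-encoded column's provider for the whole batch.\n  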
      case a: CometVector =>\n          val valueVector = a.getValueVector\n          if (valueVector.getField.getDictionary != null) {\n            if (provider.isEmpty) {\n              provider = Some(a.getDictionaryProvider)\n            }\n          }\n\n          getFieldVector(valueVector, \"serialize\")\n\n        case c =>\n          throw new SparkException(\n            s\"Comet execution only takes Arrow Arrays, but got ${c.getClass}. \" +\n              \"This typically happens when a Comet scan falls back to Spark due to unsupported \" +\n              \"data types (e.g., complex types like structs, arrays, or maps). \" +\n              \"To resolve this, you can: \" +\n              \"(1) enable spark.comet.scan.allowIncompatible=true to use a compatible native \" +\n              \"scan variant, or \" +\n              \"(2) enable spark.comet.convert.parquet.enabled=true to convert Spark Parquet \" +\n              \"data to Arrow format automatically.\")\n      }\n    }\n    (fieldVectors, provider)\n  }\n\n  def getFieldVector(valueVector: ValueVector, reason: String): FieldVector = {\n    valueVector match {\n      case v @ (_: BitVector | _: TinyIntVector | _: SmallIntVector | _: IntVector |\n          _: BigIntVector | _: Float4Vector | _: Float8Vector | _: VarCharVector |\n          _: DecimalVector | _: DateDayVector | _: TimeStampMicroTZVector | _: VarBinaryVector |\n          _: FixedSizeBinaryVector | _: TimeStampMicroVector | _: StructVector | _: ListVector |\n          _: MapVector | _: NullVector) =>\n        v.asInstanceOf[FieldVector]\n      case _ =>\n        throw new SparkException(s\"Unsupported Arrow Vector for $reason: ${valueVector.getClass}\")\n    }\n  }\n}\n"
  },
  {
    "path": "common/src/main/spark-3.4/org/apache/comet/shims/ShimBatchReader.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.paths.SparkPath\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.execution.datasources.PartitionedFile\n\nobject ShimBatchReader {\n\n  def newPartitionedFile(partitionValues: InternalRow, file: String): PartitionedFile =\n    PartitionedFile(\n      partitionValues,\n      SparkPath.fromPathString(file),\n      -1, // -1 means we read the entire file\n      -1,\n      Array.empty[String],\n      0,\n      0)\n}\n"
  },
  {
    "path": "common/src/main/spark-3.4/org/apache/comet/shims/ShimFileFormat.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.execution.datasources.{FileFormat, RowIndexUtil}\nimport org.apache.spark.sql.types.StructType\n\nobject ShimFileFormat {\n\n  // A name for a temporary column that holds row indexes computed by the file format reader\n  // until they can be placed in the _metadata struct.\n  val ROW_INDEX_TEMPORARY_COLUMN_NAME: String = FileFormat.ROW_INDEX_TEMPORARY_COLUMN_NAME\n\n  def findRowIndexColumnIndexInSchema(sparkSchema: StructType): Int =\n    RowIndexUtil.findRowIndexColumnIndexInSchema(sparkSchema)\n}\n"
  },
  {
    "path": "common/src/main/spark-3.4/org/apache/spark/sql/comet/shims/ShimTaskMetrics.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport org.apache.spark.executor.TaskMetrics\nimport org.apache.spark.util.AccumulatorV2\n\nobject ShimTaskMetrics {\n\n  def getTaskAccumulator(taskMetrics: TaskMetrics): Option[AccumulatorV2[_, _]] =\n    taskMetrics.externalAccums.lastOption\n}\n"
  },
  {
    "path": "common/src/main/spark-3.5/org/apache/comet/shims/ShimBatchReader.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.paths.SparkPath\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.execution.datasources.PartitionedFile\n\nobject ShimBatchReader {\n\n  def newPartitionedFile(partitionValues: InternalRow, file: String): PartitionedFile =\n    PartitionedFile(\n      partitionValues,\n      SparkPath.fromPathString(file),\n      -1, // -1 means we read the entire file\n      -1,\n      Array.empty[String],\n      0,\n      0,\n      Map.empty)\n}\n"
  },
  {
    "path": "common/src/main/spark-3.5/org/apache/comet/shims/ShimFileFormat.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetRowIndexUtil\nimport org.apache.spark.sql.types.StructType\n\nobject ShimFileFormat {\n  // A name for a temporary column that holds row indexes computed by the file format reader\n  // until they can be placed in the _metadata struct.\n  val ROW_INDEX_TEMPORARY_COLUMN_NAME = ParquetFileFormat.ROW_INDEX_TEMPORARY_COLUMN_NAME\n\n  def findRowIndexColumnIndexInSchema(sparkSchema: StructType): Int =\n    ParquetRowIndexUtil.findRowIndexColumnIndexInSchema(sparkSchema)\n}\n"
  },
  {
    "path": "common/src/main/spark-3.5/org/apache/spark/sql/comet/shims/ShimTaskMetrics.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport scala.collection.mutable.ArrayBuffer\n\nimport org.apache.spark.executor.TaskMetrics\nimport org.apache.spark.util.AccumulatorV2\n\nobject ShimTaskMetrics {\n\n  def getTaskAccumulator(taskMetrics: TaskMetrics): Option[AccumulatorV2[_, _]] =\n    taskMetrics.withExternalAccums(identity[ArrayBuffer[AccumulatorV2[_, _]]](_)).lastOption\n}\n"
  },
  {
    "path": "common/src/main/spark-3.x/org/apache/comet/shims/CometTypeShim.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport scala.annotation.nowarn\n\nimport org.apache.spark.sql.types.DataType\n\ntrait CometTypeShim {\n  @nowarn // Spark 4 feature; stubbed to false in Spark 3.x for compatibility.\n  def isStringCollationType(dt: DataType): Boolean = false\n}\n"
  },
  {
    "path": "common/src/main/spark-3.x/org/apache/comet/shims/ShimCometConf.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\ntrait ShimCometConf {\n  protected val COMET_SCHEMA_EVOLUTION_ENABLED_DEFAULT = false\n}\n"
  },
  {
    "path": "common/src/main/spark-4.0/org/apache/comet/shims/CometTypeShim.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.internal.types.StringTypeWithCollation\nimport org.apache.spark.sql.types.DataType\n\ntrait CometTypeShim {\n  def isStringCollationType(dt: DataType): Boolean = dt.isInstanceOf[StringTypeWithCollation]\n}\n"
  },
  {
    "path": "common/src/main/spark-4.0/org/apache/comet/shims/ShimBatchReader.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.paths.SparkPath\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.execution.datasources.PartitionedFile\n\nobject ShimBatchReader {\n  def newPartitionedFile(partitionValues: InternalRow, file: String): PartitionedFile =\n    PartitionedFile(\n      partitionValues,\n      SparkPath.fromUrlString(file),\n      -1, // -1 means we read the entire file\n      -1,\n      Array.empty[String],\n      0,\n      0)\n}\n"
  },
  {
    "path": "common/src/main/spark-4.0/org/apache/comet/shims/ShimCometConf.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\ntrait ShimCometConf {\n  protected val COMET_SCHEMA_EVOLUTION_ENABLED_DEFAULT = true\n}\n"
  },
  {
    "path": "common/src/main/spark-4.0/org/apache/comet/shims/ShimFileFormat.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetRowIndexUtil\nimport org.apache.spark.sql.types.StructType\n\nobject ShimFileFormat {\n  // A name for a temporary column that holds row indexes computed by the file format reader\n  // until they can be placed in the _metadata struct.\n  val ROW_INDEX_TEMPORARY_COLUMN_NAME = ParquetFileFormat.ROW_INDEX_TEMPORARY_COLUMN_NAME\n\n  def findRowIndexColumnIndexInSchema(sparkSchema: StructType): Int =\n    ParquetRowIndexUtil.findRowIndexColumnIndexInSchema(sparkSchema)\n}\n"
  },
  {
    "path": "common/src/main/spark-4.0/org/apache/spark/sql/comet/shims/ShimTaskMetrics.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport org.apache.spark.executor.TaskMetrics\nimport org.apache.spark.util.AccumulatorV2\n\nobject ShimTaskMetrics {\n\n  def getTaskAccumulator(taskMetrics: TaskMetrics): Option[AccumulatorV2[_, _]] =\n    taskMetrics._externalAccums.lastOption\n}\n"
  },
  {
    "path": "common/src/test/java/org/apache/comet/parquet/TestColumnReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport org.junit.Test;\n\nimport org.apache.arrow.memory.BufferAllocator;\nimport org.apache.arrow.memory.RootAllocator;\nimport org.apache.arrow.vector.FixedSizeBinaryVector;\nimport org.apache.arrow.vector.IntVector;\nimport org.apache.arrow.vector.ValueVector;\nimport org.apache.arrow.vector.VarBinaryVector;\n\nimport org.apache.comet.vector.CometPlainVector;\nimport org.apache.comet.vector.CometVector;\n\nimport static org.junit.Assert.*;\n\npublic class TestColumnReader {\n  @Test\n  public void testIsFixedLength() {\n    BufferAllocator allocator = new RootAllocator(Integer.MAX_VALUE);\n\n    ValueVector vv = new IntVector(\"v1\", allocator);\n    CometVector vector = new CometPlainVector(vv, false);\n    assertTrue(vector.isFixedLength());\n\n    vv = new FixedSizeBinaryVector(\"v2\", allocator, 12);\n    vector = new CometPlainVector(vv, false);\n    assertTrue(vector.isFixedLength());\n\n    vv = new VarBinaryVector(\"v3\", allocator);\n    vector = new CometPlainVector(vv, false);\n    assertFalse(vector.isFixedLength());\n  }\n}\n"
  },
  {
    "path": "common/src/test/java/org/apache/comet/parquet/TestCometInputFile.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport org.junit.Assert;\nimport org.junit.Test;\n\npublic class TestCometInputFile {\n  @Test\n  public void testIsAtLeastHadoop33() {\n    Assert.assertTrue(CometInputFile.isAtLeastHadoop33(\"3.3.0\"));\n    Assert.assertTrue(CometInputFile.isAtLeastHadoop33(\"3.4.0-SNAPSHOT\"));\n    Assert.assertTrue(CometInputFile.isAtLeastHadoop33(\"3.12.5\"));\n    Assert.assertTrue(CometInputFile.isAtLeastHadoop33(\"3.20.6.4-xyz\"));\n\n    Assert.assertFalse(CometInputFile.isAtLeastHadoop33(\"2.7.2\"));\n    Assert.assertFalse(CometInputFile.isAtLeastHadoop33(\"2.7.3-SNAPSHOT\"));\n    Assert.assertFalse(CometInputFile.isAtLeastHadoop33(\"2.7\"));\n    Assert.assertFalse(CometInputFile.isAtLeastHadoop33(\"2\"));\n    Assert.assertFalse(CometInputFile.isAtLeastHadoop33(\"3\"));\n    Assert.assertFalse(CometInputFile.isAtLeastHadoop33(\"3.2\"));\n    Assert.assertFalse(CometInputFile.isAtLeastHadoop33(\"3.0.2.5-abc\"));\n    Assert.assertFalse(CometInputFile.isAtLeastHadoop33(\"3.1.2-test\"));\n    Assert.assertFalse(CometInputFile.isAtLeastHadoop33(\"3-SNAPSHOT\"));\n    Assert.assertFalse(CometInputFile.isAtLeastHadoop33(\"3.2-SNAPSHOT\"));\n  }\n}\n"
  },
  {
    "path": "common/src/test/java/org/apache/comet/parquet/TestFileReader.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.io.ByteArrayOutputStream;\nimport java.io.File;\nimport java.io.IOException;\nimport java.lang.reflect.Method;\nimport java.nio.ByteBuffer;\nimport java.nio.charset.StandardCharsets;\nimport java.util.*;\n\nimport org.junit.Assert;\nimport org.junit.Rule;\nimport org.junit.Test;\nimport org.junit.rules.TemporaryFolder;\n\nimport org.apache.hadoop.conf.Configuration;\nimport org.apache.hadoop.fs.Path;\nimport org.apache.parquet.HadoopReadOptions;\nimport org.apache.parquet.ParquetReadOptions;\nimport org.apache.parquet.bytes.BytesInput;\nimport org.apache.parquet.bytes.BytesUtils;\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.column.Encoding;\nimport org.apache.parquet.column.page.DataPage;\nimport org.apache.parquet.column.page.DataPageV1;\nimport org.apache.parquet.column.page.DataPageV2;\nimport org.apache.parquet.column.page.DictionaryPage;\nimport org.apache.parquet.column.page.PageReadStore;\nimport org.apache.parquet.column.page.PageReader;\nimport org.apache.parquet.column.statistics.BinaryStatistics;\nimport org.apache.parquet.column.statistics.Statistics;\nimport org.apache.parquet.column.values.bloomfilter.BlockSplitBloomFilter;\nimport org.apache.parquet.column.values.bloomfilter.BloomFilter;\nimport org.apache.parquet.filter2.predicate.FilterApi;\nimport org.apache.parquet.filter2.predicate.FilterPredicate;\nimport org.apache.parquet.filter2.predicate.Operators;\nimport org.apache.parquet.hadoop.ParquetFileWriter;\nimport org.apache.parquet.hadoop.ParquetInputFormat;\nimport org.apache.parquet.hadoop.metadata.*;\nimport org.apache.parquet.hadoop.util.HadoopInputFile;\nimport org.apache.parquet.internal.column.columnindex.BoundaryOrder;\nimport org.apache.parquet.internal.column.columnindex.ColumnIndex;\nimport org.apache.parquet.internal.column.columnindex.OffsetIndex;\nimport org.apache.parquet.io.InputFile;\nimport org.apache.parquet.io.api.Binary;\nimport org.apache.parquet.schema.MessageType;\nimport org.apache.parquet.schema.MessageTypeParser;\nimport org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName;\nimport org.apache.parquet.schema.Types;\n\nimport org.apache.comet.CometConf;\n\nimport static org.apache.parquet.column.Encoding.*;\nimport static org.apache.parquet.format.converter.ParquetMetadataConverter.MAX_STATS_SIZE;\nimport static org.junit.Assert.*;\nimport static org.junit.Assert.assertEquals;\n\nimport static org.apache.comet.parquet.TypeUtil.isSpark40Plus;\n\n@SuppressWarnings(\"deprecation\")\npublic class TestFileReader {\n  private static final MessageType SCHEMA =\n      MessageTypeParser.parseMessageType(\n          \"\"\n              + 
\"message m {\"\n              + \"  required group a {\"\n              + \"    required binary b;\"\n              + \"  }\"\n              + \"  required group c {\"\n              + \"    required int64 d;\"\n              + \"  }\"\n              + \"}\");\n\n  private static final MessageType SCHEMA2 =\n      MessageTypeParser.parseMessageType(\n          \"\"\n              + \"message root { \"\n              + \"required int32 id;\"\n              + \"required binary name(UTF8); \"\n              + \"required int32 num; \"\n              + \"required binary comment(UTF8);\"\n              + \"}\");\n\n  private static final MessageType PROJECTED_SCHEMA2 =\n      MessageTypeParser.parseMessageType(\n          \"\"\n              + \"message root { \"\n              + \"required int32 id;\"\n              + \"required binary name(UTF8); \"\n              + \"required binary comment(UTF8);\"\n              + \"}\");\n\n  private static final String[] PATH1 = {\"a\", \"b\"};\n  private static final ColumnDescriptor C1 = SCHEMA.getColumnDescription(PATH1);\n  private static final String[] PATH2 = {\"c\", \"d\"};\n  private static final ColumnDescriptor C2 = SCHEMA.getColumnDescription(PATH2);\n\n  private static final byte[] BYTES1 = {0, 1, 2, 3};\n  private static final byte[] BYTES2 = {1, 2, 3, 4};\n  private static final byte[] BYTES3 = {2, 3, 4, 5};\n  private static final byte[] BYTES4 = {3, 4, 5, 6};\n  private static final CompressionCodecName CODEC = CompressionCodecName.UNCOMPRESSED;\n\n  private static final org.apache.parquet.column.statistics.Statistics<?> EMPTY_STATS =\n      org.apache.parquet.column.statistics.Statistics.getBuilderForReading(\n              Types.required(PrimitiveTypeName.BINARY).named(\"test_binary\"))\n          .build();\n\n  @Rule public final TemporaryFolder temp = new TemporaryFolder();\n\n  @Test\n  public void testEnableReadParallel() {\n    Configuration configuration = new Configuration();\n    ReadOptions options = ReadOptions.builder(configuration).build();\n\n    assertFalse(FileReader.shouldReadParallel(options, \"hdfs\"));\n    assertFalse(FileReader.shouldReadParallel(options, \"file\"));\n    assertFalse(FileReader.shouldReadParallel(options, null));\n    assertTrue(FileReader.shouldReadParallel(options, \"s3a\"));\n\n    options = ReadOptions.builder(configuration).enableParallelIO(false).build();\n    assertFalse(FileReader.shouldReadParallel(options, \"s3a\"));\n  }\n\n  @Test\n  public void testReadWrite() throws Exception {\n    File testFile = temp.newFile();\n    testFile.delete();\n\n    Path path = new Path(testFile.toURI());\n    Configuration configuration = new Configuration();\n\n    // Start a Parquet file with 2 row groups, each with 2 column chunks\n    ParquetFileWriter w = new ParquetFileWriter(configuration, SCHEMA, path);\n    w.start();\n    w.startBlock(3);\n    w.startColumn(C1, 5, CODEC);\n    long c1Starts = w.getPos();\n    long c1p1Starts = w.getPos();\n    w.writeDataPage(2, 4, BytesInput.from(BYTES1), EMPTY_STATS, 2, RLE, RLE, PLAIN);\n    w.writeDataPage(3, 4, BytesInput.from(BYTES1), EMPTY_STATS, 3, RLE, RLE, PLAIN);\n    w.endColumn();\n    long c1Ends = w.getPos();\n    w.startColumn(C2, 6, CODEC);\n    long c2Starts = w.getPos();\n    w.writeDictionaryPage(new DictionaryPage(BytesInput.from(BYTES2), 4, RLE_DICTIONARY));\n    long c2p1Starts = w.getPos();\n    w.writeDataPage(2, 4, BytesInput.from(BYTES2), EMPTY_STATS, 2, RLE, RLE, PLAIN);\n    w.writeDataPage(3, 4, BytesInput.from(BYTES2), EMPTY_STATS, 
3, RLE, RLE, PLAIN);\n    w.writeDataPage(1, 4, BytesInput.from(BYTES2), EMPTY_STATS, 1, RLE, RLE, PLAIN);\n    w.endColumn();\n    long c2Ends = w.getPos();\n    w.endBlock();\n    w.startBlock(4);\n    w.startColumn(C1, 7, CODEC);\n    w.writeDataPage(7, 4, BytesInput.from(BYTES3), EMPTY_STATS, 7, RLE, RLE, PLAIN);\n    w.endColumn();\n    w.startColumn(C2, 8, CODEC);\n    w.writeDataPage(8, 4, BytesInput.from(BYTES4), EMPTY_STATS, 8, RLE, RLE, PLAIN);\n    w.endColumn();\n    w.endBlock();\n    w.end(new HashMap<>());\n\n    InputFile file = HadoopInputFile.fromPath(path, configuration);\n    ParquetReadOptions options = ParquetReadOptions.builder().build();\n    ReadOptions cometOptions = ReadOptions.builder(configuration).build();\n\n    try (FileReader reader = new FileReader(file, options, cometOptions)) {\n      ParquetMetadata readFooter = reader.getFooter();\n      assertEquals(\"footer: \" + readFooter, 2, readFooter.getBlocks().size());\n      BlockMetaData rowGroup = readFooter.getBlocks().get(0);\n      assertEquals(c1Ends - c1Starts, rowGroup.getColumns().get(0).getTotalSize());\n      assertEquals(c2Ends - c2Starts, rowGroup.getColumns().get(1).getTotalSize());\n      assertEquals(c2Ends - c1Starts, rowGroup.getTotalByteSize());\n\n      assertEquals(c1Starts, rowGroup.getColumns().get(0).getStartingPos());\n      assertEquals(0, rowGroup.getColumns().get(0).getDictionaryPageOffset());\n      assertEquals(c1p1Starts, rowGroup.getColumns().get(0).getFirstDataPageOffset());\n      assertEquals(c2Starts, rowGroup.getColumns().get(1).getStartingPos());\n      assertEquals(c2Starts, rowGroup.getColumns().get(1).getDictionaryPageOffset());\n      assertEquals(c2p1Starts, rowGroup.getColumns().get(1).getFirstDataPageOffset());\n\n      HashSet<Encoding> expectedEncoding = new HashSet<>();\n      expectedEncoding.add(PLAIN);\n      expectedEncoding.add(RLE);\n      assertEquals(expectedEncoding, rowGroup.getColumns().get(0).getEncodings());\n    }\n\n    // read first block of col #1\n    try (FileReader r = new FileReader(file, options, cometOptions)) {\n      r.setRequestedSchema(Arrays.asList(SCHEMA.getColumnDescription(PATH1)));\n      PageReadStore pages = r.readNextRowGroup();\n      assertEquals(3, pages.getRowCount());\n      validateContains(pages, PATH1, 2, BytesInput.from(BYTES1));\n      validateContains(pages, PATH1, 3, BytesInput.from(BYTES1));\n      assertTrue(r.skipNextRowGroup());\n      assertNull(r.readNextRowGroup());\n    }\n\n    // read all blocks of col #1 and #2\n    try (FileReader r = new FileReader(file, options, cometOptions)) {\n      r.setRequestedSchema(\n          Arrays.asList(SCHEMA.getColumnDescription(PATH1), SCHEMA.getColumnDescription(PATH2)));\n      PageReadStore pages = r.readNextRowGroup();\n      assertEquals(3, pages.getRowCount());\n      validateContains(pages, PATH1, 2, BytesInput.from(BYTES1));\n      validateContains(pages, PATH1, 3, BytesInput.from(BYTES1));\n      validateContains(pages, PATH2, 2, BytesInput.from(BYTES2));\n      validateContains(pages, PATH2, 3, BytesInput.from(BYTES2));\n      validateContains(pages, PATH2, 1, BytesInput.from(BYTES2));\n\n      pages = r.readNextRowGroup();\n      assertEquals(4, pages.getRowCount());\n\n      validateContains(pages, PATH1, 7, BytesInput.from(BYTES3));\n      validateContains(pages, PATH2, 8, BytesInput.from(BYTES4));\n\n      assertNull(r.readNextRowGroup());\n    }\n  }\n\n  @Test\n  public void testBloomFilterReadWrite() throws Exception {\n    MessageType schema =\n        
MessageTypeParser.parseMessageType(\"message test { required binary foo; }\");\n    File testFile = temp.newFile();\n    testFile.delete();\n    Path path = new Path(testFile.toURI());\n    Configuration configuration = new Configuration();\n    configuration.set(\"parquet.bloom.filter.column.names\", \"foo\");\n    String[] colPath = {\"foo\"};\n\n    ColumnDescriptor col = schema.getColumnDescription(colPath);\n    BinaryStatistics stats1 = new BinaryStatistics();\n    ParquetFileWriter w = new ParquetFileWriter(configuration, schema, path);\n    w.start();\n    w.startBlock(3);\n    w.startColumn(col, 5, CODEC);\n    w.writeDataPage(2, 4, BytesInput.from(BYTES1), stats1, 2, RLE, RLE, PLAIN);\n    w.writeDataPage(3, 4, BytesInput.from(BYTES1), stats1, 2, RLE, RLE, PLAIN);\n    w.endColumn();\n    BloomFilter blockSplitBloomFilter = new BlockSplitBloomFilter(0);\n    blockSplitBloomFilter.insertHash(blockSplitBloomFilter.hash(Binary.fromString(\"hello\")));\n    blockSplitBloomFilter.insertHash(blockSplitBloomFilter.hash(Binary.fromString(\"world\")));\n    addBloomFilter(w, \"foo\", blockSplitBloomFilter);\n    w.endBlock();\n    w.end(new HashMap<>());\n\n    InputFile file = HadoopInputFile.fromPath(path, configuration);\n    ParquetReadOptions options = ParquetReadOptions.builder().build();\n    ReadOptions cometOptions = ReadOptions.builder(configuration).build();\n\n    try (FileReader r = new FileReader(file, options, cometOptions)) {\n      ParquetMetadata footer = r.getFooter();\n      r.setRequestedSchema(Arrays.asList(schema.getColumnDescription(colPath)));\n      BloomFilterReader bloomFilterReader =\n          new BloomFilterReader(\n              footer.getBlocks().get(0),\n              r.getFileMetaData().getFileDecryptor(),\n              r.getInputStream());\n      BloomFilter bloomFilter =\n          bloomFilterReader.readBloomFilter(footer.getBlocks().get(0).getColumns().get(0));\n      assertTrue(bloomFilter.findHash(blockSplitBloomFilter.hash(Binary.fromString(\"hello\"))));\n      assertTrue(bloomFilter.findHash(blockSplitBloomFilter.hash(Binary.fromString(\"world\"))));\n    }\n  }\n\n  @Test\n  public void testReadWriteDataPageV2() throws Exception {\n    File testFile = temp.newFile();\n    testFile.delete();\n\n    Path path = new Path(testFile.toURI());\n    Configuration configuration = new Configuration();\n\n    ParquetFileWriter w = new ParquetFileWriter(configuration, SCHEMA, path);\n    w.start();\n    w.startBlock(14);\n\n    BytesInput repLevels = BytesInput.fromInt(2);\n    BytesInput defLevels = BytesInput.fromInt(1);\n    BytesInput data = BytesInput.fromInt(3);\n    BytesInput data2 = BytesInput.fromInt(10);\n\n    org.apache.parquet.column.statistics.Statistics<?> statsC1P1 = createStatistics(\"s\", \"z\", C1);\n    org.apache.parquet.column.statistics.Statistics<?> statsC1P2 = createStatistics(\"b\", \"d\", C1);\n\n    w.startColumn(C1, 6, CODEC);\n    long c1Starts = w.getPos();\n    w.writeDataPageV2(4, 1, 3, repLevels, defLevels, PLAIN, data, 4, statsC1P1);\n    w.writeDataPageV2(3, 0, 3, repLevels, defLevels, PLAIN, data, 4, statsC1P2);\n    w.endColumn();\n    long c1Ends = w.getPos();\n\n    w.startColumn(C2, 5, CODEC);\n    long c2Starts = w.getPos();\n    w.writeDataPageV2(5, 2, 3, repLevels, defLevels, PLAIN, data2, 4, EMPTY_STATS);\n    w.writeDataPageV2(2, 0, 2, repLevels, defLevels, PLAIN, data2, 4, EMPTY_STATS);\n    w.endColumn();\n    long c2Ends = w.getPos();\n\n    w.endBlock();\n    w.end(new HashMap<>());\n\n    InputFile file = 
HadoopInputFile.fromPath(path, configuration);\n    ParquetReadOptions options = ParquetReadOptions.builder().build();\n    ReadOptions cometOptions = ReadOptions.builder(configuration).build();\n\n    try (FileReader reader = new FileReader(file, options, cometOptions)) {\n      ParquetMetadata footer = reader.getFooter();\n      assertEquals(\"footer: \" + footer, 1, footer.getBlocks().size());\n      assertEquals(c1Ends - c1Starts, footer.getBlocks().get(0).getColumns().get(0).getTotalSize());\n      assertEquals(c2Ends - c2Starts, footer.getBlocks().get(0).getColumns().get(1).getTotalSize());\n      assertEquals(c2Ends - c1Starts, footer.getBlocks().get(0).getTotalByteSize());\n\n      // check for stats\n      org.apache.parquet.column.statistics.Statistics<?> expectedStats =\n          createStatistics(\"b\", \"z\", C1);\n      assertStatsValuesEqual(\n          expectedStats, footer.getBlocks().get(0).getColumns().get(0).getStatistics());\n\n      HashSet<Encoding> expectedEncoding = new HashSet<>();\n      expectedEncoding.add(PLAIN);\n      assertEquals(expectedEncoding, footer.getBlocks().get(0).getColumns().get(0).getEncodings());\n    }\n\n    try (FileReader r = new FileReader(file, options, cometOptions)) {\n      r.setRequestedSchema(\n          Arrays.asList(SCHEMA.getColumnDescription(PATH1), SCHEMA.getColumnDescription(PATH2)));\n      PageReadStore pages = r.readNextRowGroup();\n      assertEquals(14, pages.getRowCount());\n      validateV2Page(\n          pages,\n          PATH1,\n          3,\n          4,\n          1,\n          repLevels.toByteArray(),\n          defLevels.toByteArray(),\n          data.toByteArray(),\n          12);\n      validateV2Page(\n          pages,\n          PATH1,\n          3,\n          3,\n          0,\n          repLevels.toByteArray(),\n          defLevels.toByteArray(),\n          data.toByteArray(),\n          12);\n      validateV2Page(\n          pages,\n          PATH2,\n          3,\n          5,\n          2,\n          repLevels.toByteArray(),\n          defLevels.toByteArray(),\n          data2.toByteArray(),\n          12);\n      validateV2Page(\n          pages,\n          PATH2,\n          2,\n          2,\n          0,\n          repLevels.toByteArray(),\n          defLevels.toByteArray(),\n          data2.toByteArray(),\n          12);\n      assertNull(r.readNextRowGroup());\n    }\n  }\n\n  @Test\n  public void testColumnIndexFilter() throws Exception {\n    File testFile = temp.newFile();\n    testFile.delete();\n\n    Path path = new Path(testFile.toURI());\n    Configuration configuration = new Configuration();\n\n    ParquetFileWriter w = new ParquetFileWriter(configuration, SCHEMA, path);\n\n    w.start();\n    w.startBlock(4);\n    w.startColumn(C1, 7, CODEC);\n    w.writeDataPage(2, 4, BytesInput.from(BYTES1), EMPTY_STATS, 2, RLE, RLE, PLAIN);\n    w.writeDataPage(2, 4, BytesInput.from(BYTES2), EMPTY_STATS, 2, RLE, RLE, PLAIN);\n    w.endColumn();\n    w.startColumn(C2, 8, CODEC);\n    // the first page contains one matching record\n    w.writeDataPage(1, 4, BytesInput.from(BYTES3), statsC2(2L), 1, RLE, RLE, PLAIN);\n    // all the records of the second page are larger than 2, so should be filtered out\n    w.writeDataPage(3, 4, BytesInput.from(BYTES4), statsC2(3L, 4L, 5L), 3, RLE, RLE, PLAIN);\n    w.endColumn();\n    w.endBlock();\n\n    w.startBlock(4);\n    w.startColumn(C1, 7, CODEC);\n    w.writeDataPage(2, 4, BytesInput.from(BYTES1), EMPTY_STATS, 2, RLE, RLE, PLAIN);\n    w.writeDataPage(2, 4, 
BytesInput.from(BYTES2), EMPTY_STATS, 2, RLE, RLE, PLAIN);\n    w.endColumn();\n    w.startColumn(C2, 8, CODEC);\n    // the first page should be filtered out\n    w.writeDataPage(1, 4, BytesInput.from(BYTES3), statsC2(4L), 1, RLE, RLE, PLAIN);\n    // the second page will be read since it contains matching record\n    w.writeDataPage(3, 4, BytesInput.from(BYTES4), statsC2(0L, 1L, 3L), 3, RLE, RLE, PLAIN);\n    w.endColumn();\n    w.endBlock();\n\n    w.end(new HashMap<>());\n\n    // set a simple equality filter in the ParquetInputFormat\n    Operators.LongColumn c2 = FilterApi.longColumn(\"c.d\");\n    FilterPredicate p = FilterApi.eq(c2, 2L);\n    ParquetInputFormat.setFilterPredicate(configuration, p);\n    InputFile file = HadoopInputFile.fromPath(path, configuration);\n    ParquetReadOptions options = HadoopReadOptions.builder(configuration).build();\n    ReadOptions cometOptions = ReadOptions.builder(configuration).build();\n\n    try (FileReader r = new FileReader(file, options, cometOptions)) {\n      assertEquals(4, r.getFilteredRecordCount());\n      PageReadStore readStore = r.readNextFilteredRowGroup();\n\n      PageReader c1Reader = readStore.getPageReader(C1);\n      List<DataPage> c1Pages = new ArrayList<>();\n      DataPage page;\n      while ((page = c1Reader.readPage()) != null) {\n        c1Pages.add(page);\n      }\n      // second page of c1 should be filtered out\n      assertEquals(1, c1Pages.size());\n      validatePage(c1Pages.get(0), 2, BytesInput.from(BYTES1));\n\n      PageReader c2Reader = readStore.getPageReader(C2);\n      List<DataPage> c2Pages = new ArrayList<>();\n      while ((page = c2Reader.readPage()) != null) {\n        c2Pages.add(page);\n      }\n      assertEquals(1, c2Pages.size());\n      validatePage(c2Pages.get(0), 1, BytesInput.from(BYTES3));\n\n      // test the second row group\n      readStore = r.readNextFilteredRowGroup();\n      assertNotNull(readStore);\n\n      c1Reader = readStore.getPageReader(C1);\n      c1Pages.clear();\n      while ((page = c1Reader.readPage()) != null) {\n        c1Pages.add(page);\n      }\n      // all pages of c1 should be retained\n      assertEquals(2, c1Pages.size());\n      validatePage(c1Pages.get(0), 2, BytesInput.from(BYTES1));\n      validatePage(c1Pages.get(1), 2, BytesInput.from(BYTES2));\n\n      c2Reader = readStore.getPageReader(C2);\n      c2Pages.clear();\n      while ((page = c2Reader.readPage()) != null) {\n        c2Pages.add(page);\n      }\n      assertEquals(1, c2Pages.size());\n      validatePage(c2Pages.get(0), 3, BytesInput.from(BYTES4));\n    }\n  }\n\n  @Test\n  public void testColumnIndexReadWrite() throws Exception {\n    File testFile = temp.newFile();\n    testFile.delete();\n\n    Path path = new Path(testFile.toURI());\n    Configuration configuration = new Configuration();\n\n    ParquetFileWriter w = new ParquetFileWriter(configuration, SCHEMA, path);\n    w.start();\n    w.startBlock(4);\n    w.startColumn(C1, 7, CODEC);\n    w.writeDataPage(7, 4, BytesInput.from(BYTES3), EMPTY_STATS, RLE, RLE, PLAIN);\n    w.endColumn();\n    w.startColumn(C2, 8, CODEC);\n    w.writeDataPage(8, 4, BytesInput.from(BYTES4), EMPTY_STATS, RLE, RLE, PLAIN);\n    w.endColumn();\n    w.endBlock();\n    w.startBlock(4);\n    w.startColumn(C1, 5, CODEC);\n    long c1p1Starts = w.getPos();\n    w.writeDataPage(\n        2, 4, BytesInput.from(BYTES1), statsC1(null, Binary.fromString(\"aaa\")), 1, RLE, RLE, PLAIN);\n    long c1p2Starts = w.getPos();\n    w.writeDataPage(\n        3,\n        4,\n       
 BytesInput.from(BYTES1),\n        statsC1(Binary.fromString(\"bbb\"), Binary.fromString(\"ccc\")),\n        3,\n        RLE,\n        RLE,\n        PLAIN);\n    w.endColumn();\n    long c1Ends = w.getPos();\n    w.startColumn(C2, 6, CODEC);\n    long c2p1Starts = w.getPos();\n    w.writeDataPage(2, 4, BytesInput.from(BYTES2), statsC2(117L, 100L), 1, RLE, RLE, PLAIN);\n    long c2p2Starts = w.getPos();\n    w.writeDataPage(3, 4, BytesInput.from(BYTES2), statsC2(null, null, null), 2, RLE, RLE, PLAIN);\n    long c2p3Starts = w.getPos();\n    w.writeDataPage(1, 4, BytesInput.from(BYTES2), statsC2(0L), 1, RLE, RLE, PLAIN);\n    w.endColumn();\n    long c2Ends = w.getPos();\n    w.endBlock();\n    w.startBlock(4);\n    w.startColumn(C1, 7, CODEC);\n    w.writeDataPage(\n        7,\n        4,\n        BytesInput.from(BYTES3),\n        // Creating huge stats so the column index will reach the limit and won't be written\n        statsC1(\n            Binary.fromConstantByteArray(new byte[(int) MAX_STATS_SIZE]),\n            Binary.fromConstantByteArray(new byte[1])),\n        4,\n        RLE,\n        RLE,\n        PLAIN);\n    w.endColumn();\n    w.startColumn(C2, 8, CODEC);\n    w.writeDataPage(8, 4, BytesInput.from(BYTES4), EMPTY_STATS, RLE, RLE, PLAIN);\n    w.endColumn();\n    w.endBlock();\n    w.end(new HashMap<>());\n\n    InputFile file = HadoopInputFile.fromPath(path, configuration);\n    ParquetReadOptions options = ParquetReadOptions.builder().build();\n    ReadOptions cometOptions = ReadOptions.builder(configuration).build();\n\n    try (FileReader reader = new FileReader(file, options, cometOptions)) {\n      ParquetMetadata footer = reader.getFooter();\n      assertEquals(3, footer.getBlocks().size());\n      BlockMetaData blockMeta = footer.getBlocks().get(1);\n      assertEquals(2, blockMeta.getColumns().size());\n\n      ColumnIndexReader indexReader = reader.getColumnIndexReader(1);\n      ColumnIndex columnIndex = indexReader.readColumnIndex(blockMeta.getColumns().get(0));\n      assertEquals(BoundaryOrder.ASCENDING, columnIndex.getBoundaryOrder());\n      assertEquals(Arrays.asList(1L, 0L), columnIndex.getNullCounts());\n      assertEquals(Arrays.asList(false, false), columnIndex.getNullPages());\n      List<ByteBuffer> minValues = columnIndex.getMinValues();\n      assertEquals(2, minValues.size());\n      List<ByteBuffer> maxValues = columnIndex.getMaxValues();\n      assertEquals(2, maxValues.size());\n      assertEquals(\"aaa\", new String(minValues.get(0).array(), StandardCharsets.UTF_8));\n      assertEquals(\"aaa\", new String(maxValues.get(0).array(), StandardCharsets.UTF_8));\n      assertEquals(\"bbb\", new String(minValues.get(1).array(), StandardCharsets.UTF_8));\n      assertEquals(\"ccc\", new String(maxValues.get(1).array(), StandardCharsets.UTF_8));\n\n      columnIndex = indexReader.readColumnIndex(blockMeta.getColumns().get(1));\n      assertEquals(BoundaryOrder.DESCENDING, columnIndex.getBoundaryOrder());\n      assertEquals(Arrays.asList(0L, 3L, 0L), columnIndex.getNullCounts());\n      assertEquals(Arrays.asList(false, true, false), columnIndex.getNullPages());\n      minValues = columnIndex.getMinValues();\n      assertEquals(3, minValues.size());\n      maxValues = columnIndex.getMaxValues();\n      assertEquals(3, maxValues.size());\n      assertEquals(100, BytesUtils.bytesToLong(minValues.get(0).array()));\n      assertEquals(117, BytesUtils.bytesToLong(maxValues.get(0).array()));\n      assertEquals(0, minValues.get(1).array().length);\n      
assertEquals(0, maxValues.get(1).array().length);\n      assertEquals(0, BytesUtils.bytesToLong(minValues.get(2).array()));\n      assertEquals(0, BytesUtils.bytesToLong(maxValues.get(2).array()));\n\n      OffsetIndex offsetIndex = indexReader.readOffsetIndex(blockMeta.getColumns().get(0));\n      assertEquals(2, offsetIndex.getPageCount());\n      assertEquals(c1p1Starts, offsetIndex.getOffset(0));\n      assertEquals(c1p2Starts, offsetIndex.getOffset(1));\n      assertEquals(c1p2Starts - c1p1Starts, offsetIndex.getCompressedPageSize(0));\n      assertEquals(c1Ends - c1p2Starts, offsetIndex.getCompressedPageSize(1));\n      assertEquals(0, offsetIndex.getFirstRowIndex(0));\n      assertEquals(1, offsetIndex.getFirstRowIndex(1));\n\n      offsetIndex = indexReader.readOffsetIndex(blockMeta.getColumns().get(1));\n      assertEquals(3, offsetIndex.getPageCount());\n      assertEquals(c2p1Starts, offsetIndex.getOffset(0));\n      assertEquals(c2p2Starts, offsetIndex.getOffset(1));\n      assertEquals(c2p3Starts, offsetIndex.getOffset(2));\n      assertEquals(c2p2Starts - c2p1Starts, offsetIndex.getCompressedPageSize(0));\n      assertEquals(c2p3Starts - c2p2Starts, offsetIndex.getCompressedPageSize(1));\n      assertEquals(c2Ends - c2p3Starts, offsetIndex.getCompressedPageSize(2));\n      assertEquals(0, offsetIndex.getFirstRowIndex(0));\n      assertEquals(1, offsetIndex.getFirstRowIndex(1));\n      assertEquals(3, offsetIndex.getFirstRowIndex(2));\n\n      if (!isSpark40Plus()) { // TODO: https://github.com/apache/datafusion-comet/issues/1948\n        assertNull(indexReader.readColumnIndex(footer.getBlocks().get(2).getColumns().get(0)));\n      }\n    }\n  }\n\n  // Test reader with merging of scan ranges enabled\n  @Test\n  public void testWriteReadMergeScanRange() throws Throwable {\n    Configuration conf = new Configuration();\n    conf.set(CometConf.COMET_IO_MERGE_RANGES().key(), Boolean.toString(true));\n    // Set the merge range delta so small that ranges do not get merged\n    conf.set(CometConf.COMET_IO_MERGE_RANGES_DELTA().key(), Integer.toString(1024));\n    testReadWrite(conf, 2, 1024);\n    // Set the merge range delta so large that all ranges get merged\n    conf.set(CometConf.COMET_IO_MERGE_RANGES_DELTA().key(), Integer.toString(1024 * 1024));\n    testReadWrite(conf, 2, 1024);\n  }\n\n  // `addBloomFilter` is package-private in Parquet, so this uses reflection to access it\n  private void addBloomFilter(ParquetFileWriter w, String s, BloomFilter filter) throws Exception {\n    Method method =\n        ParquetFileWriter.class.getDeclaredMethod(\n            \"addBloomFilter\", String.class, BloomFilter.class);\n    method.setAccessible(true);\n    method.invoke(w, s, filter);\n  }\n\n  private void validateContains(PageReadStore pages, String[] path, int values, BytesInput bytes)\n      throws IOException {\n    PageReader pageReader = pages.getPageReader(SCHEMA.getColumnDescription(path));\n    DataPage page = pageReader.readPage();\n    validatePage(page, values, bytes);\n  }\n\n  private void validatePage(DataPage page, int values, BytesInput bytes) throws IOException {\n    assertEquals(values, page.getValueCount());\n    assertArrayEquals(bytes.toByteArray(), ((DataPageV1) page).getBytes().toByteArray());\n  }\n\n  private void validateV2Page(\n      PageReadStore pages,\n      String[] path,\n      int values,\n      int rows,\n      int nullCount,\n      byte[] repetition,\n      byte[] definition,\n      byte[] data,\n      int uncompressedSize)\n      throws 
IOException {\n    PageReader pageReader = pages.getPageReader(SCHEMA.getColumnDescription(path));\n    DataPageV2 page = (DataPageV2) pageReader.readPage();\n    assertEquals(values, page.getValueCount());\n    assertEquals(rows, page.getRowCount());\n    assertEquals(nullCount, page.getNullCount());\n    assertEquals(uncompressedSize, page.getUncompressedSize());\n    assertArrayEquals(repetition, page.getRepetitionLevels().toByteArray());\n    assertArrayEquals(definition, page.getDefinitionLevels().toByteArray());\n    assertArrayEquals(data, page.getData().toByteArray());\n  }\n\n  private Statistics<?> createStatistics(String min, String max, ColumnDescriptor col) {\n    return Statistics.getBuilderForReading(col.getPrimitiveType())\n        .withMin(Binary.fromString(min).getBytes())\n        .withMax(Binary.fromString(max).getBytes())\n        .withNumNulls(0)\n        .build();\n  }\n\n  public static void assertStatsValuesEqual(Statistics<?> expected, Statistics<?> actual) {\n    if (expected == actual) {\n      return;\n    }\n    if (expected == null || actual == null) {\n      assertEquals(expected, actual);\n    }\n    Assert.assertArrayEquals(expected.getMaxBytes(), actual.getMaxBytes());\n    Assert.assertArrayEquals(expected.getMinBytes(), actual.getMinBytes());\n    Assert.assertEquals(expected.getNumNulls(), actual.getNumNulls());\n  }\n\n  private Statistics<?> statsC1(Binary... values) {\n    Statistics<?> stats = Statistics.createStats(C1.getPrimitiveType());\n    for (Binary value : values) {\n      if (value == null) {\n        stats.incrementNumNulls();\n      } else {\n        stats.updateStats(value);\n      }\n    }\n    return stats;\n  }\n\n  /**\n   * Generates arbitrary data for simple schemas, writes the data to a file and also returns the\n   * data.\n   *\n   * @return array of data pages for each column\n   */\n  private HashMap<String, byte[][]> generateAndWriteData(\n      Configuration configuration,\n      Path path,\n      MessageType schema,\n      int numPages,\n      int numRecordsPerPage)\n      throws IOException {\n\n    HashMap<String, byte[][]> dataPages = new HashMap<>();\n\n    Generator generator = new Generator();\n    ParquetFileWriter writer = new ParquetFileWriter(configuration, schema, path);\n    writer.start();\n    writer.startBlock((long) numPages * numRecordsPerPage);\n    for (ColumnDescriptor colDesc : schema.getColumns()) {\n      writer.startColumn(colDesc, (long) numPages * numRecordsPerPage, CODEC);\n      String type = colDesc.getPrimitiveType().getName();\n      byte[][] allPages = new byte[numPages][];\n      byte[] data;\n      for (int i = 0; i < numPages; i++) {\n        data = generator.generateValues(numRecordsPerPage, type);\n        writer.writeDataPage(\n            numRecordsPerPage,\n            data.length,\n            BytesInput.from(data),\n            EMPTY_STATS,\n            numRecordsPerPage,\n            RLE,\n            RLE,\n            PLAIN);\n        allPages[i] = data;\n      }\n      dataPages.put(String.join(\".\", colDesc.getPath()), allPages);\n      writer.endColumn();\n    }\n    writer.endBlock();\n    writer.end(new HashMap<>());\n    return dataPages;\n  }\n\n  private void readAndValidatePageData(\n      InputFile inputFile,\n      ParquetReadOptions options,\n      ReadOptions cometOptions,\n      MessageType schema,\n      HashMap<String, byte[][]> expected,\n      int expectedValuesPerPage)\n      throws IOException {\n    try (FileReader fileReader = new FileReader(inputFile, 
options, cometOptions)) {\n      fileReader.setRequestedSchema(schema.getColumns());\n      PageReadStore pages = fileReader.readNextRowGroup();\n      for (ColumnDescriptor colDesc : schema.getColumns()) {\n        byte[][] allExpectedPages = expected.get(String.join(\".\", colDesc.getPath()));\n        PageReader pageReader = pages.getPageReader(colDesc);\n        for (byte[] expectedPage : allExpectedPages) {\n          DataPage page = pageReader.readPage();\n          validatePage(page, expectedValuesPerPage, BytesInput.from(expectedPage));\n        }\n      }\n    }\n  }\n\n  public void testReadWrite(Configuration configuration, int numPages, int numRecordsPerPage)\n      throws Exception {\n    File testFile = temp.newFile();\n    testFile.delete();\n\n    Path path = new Path(testFile.toURI());\n    HashMap<String, byte[][]> dataPages =\n        generateAndWriteData(configuration, path, SCHEMA2, numPages, numRecordsPerPage);\n    InputFile file = HadoopInputFile.fromPath(path, configuration);\n    ParquetReadOptions options = ParquetReadOptions.builder().build();\n    ReadOptions cometOptions = ReadOptions.builder(configuration).build();\n\n    readAndValidatePageData(\n        file, options, cometOptions, PROJECTED_SCHEMA2, dataPages, numRecordsPerPage);\n  }\n\n  static class Generator {\n\n    static Random random = new Random(1729);\n    private static final String ALPHABET = \"ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz -\";\n    private static final int STR_MIN_SIZE = 5;\n    private static final int STR_MAX_SIZE = 30;\n\n    private byte[] getString(int minSize, int maxSize) {\n      int size = random.nextInt(maxSize - minSize) + minSize;\n      byte[] str = new byte[size];\n      for (int i = 0; i < size; ++i) {\n        str[i] = (byte) ALPHABET.charAt(random.nextInt(ALPHABET.length()));\n      }\n      return str;\n    }\n\n    private byte[] generateValues(int numValues, String type) throws IOException {\n\n      if (type.equals(\"int32\")) {\n        byte[] data = new byte[4 * numValues];\n        random.nextBytes(data);\n        return data;\n      } else {\n        ByteArrayOutputStream outputStream = new ByteArrayOutputStream();\n        for (int i = 0; i < numValues; i++) {\n          outputStream.write(getString(STR_MIN_SIZE, STR_MAX_SIZE));\n        }\n        return outputStream.toByteArray();\n      }\n    }\n  }\n\n  private Statistics<?> statsC2(Long... values) {\n    Statistics<?> stats = Statistics.createStats(C2.getPrimitiveType());\n    for (Long value : values) {\n      if (value == null) {\n        stats.incrementNumNulls();\n      } else {\n        stats.updateStats(value);\n      }\n    }\n    return stats;\n  }\n}\n"
  },
  {
    "path": "common/src/test/java/org/apache/comet/parquet/TestUtils.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet;\n\nimport java.util.Collections;\nimport java.util.HashMap;\nimport java.util.Map;\n\nimport org.junit.Test;\n\nimport org.apache.parquet.column.ColumnDescriptor;\nimport org.apache.parquet.schema.LogicalTypeAnnotation;\nimport org.apache.parquet.schema.PrimitiveType;\n\nimport static org.junit.Assert.*;\n\npublic class TestUtils {\n\n  @Test\n  public void testBuildColumnDescriptorWithTimestamp() {\n    Map<String, String> params = new HashMap<>();\n    params.put(\"isAdjustedToUTC\", \"true\");\n    params.put(\"unit\", \"MICROS\");\n\n    ParquetColumnSpec spec =\n        new ParquetColumnSpec(\n            10,\n            new String[] {\"event_time\"},\n            \"INT64\",\n            0,\n            false,\n            0,\n            0,\n            \"TimestampLogicalTypeAnnotation\",\n            params);\n\n    ColumnDescriptor descriptor = Utils.buildColumnDescriptor(spec);\n    assertNotNull(descriptor);\n\n    PrimitiveType primitiveType = descriptor.getPrimitiveType();\n    assertEquals(PrimitiveType.PrimitiveTypeName.INT64, primitiveType.getPrimitiveTypeName());\n    assertTrue(\n        primitiveType.getLogicalTypeAnnotation()\n            instanceof LogicalTypeAnnotation.TimestampLogicalTypeAnnotation);\n\n    LogicalTypeAnnotation.TimestampLogicalTypeAnnotation ts =\n        (LogicalTypeAnnotation.TimestampLogicalTypeAnnotation)\n            primitiveType.getLogicalTypeAnnotation();\n    assertTrue(ts.isAdjustedToUTC());\n    assertEquals(LogicalTypeAnnotation.TimeUnit.MICROS, ts.getUnit());\n  }\n\n  @Test\n  public void testBuildColumnDescriptorWithDecimal() {\n    Map<String, String> params = new HashMap<>();\n    params.put(\"precision\", \"10\");\n    params.put(\"scale\", \"2\");\n\n    ParquetColumnSpec spec =\n        new ParquetColumnSpec(\n            11,\n            new String[] {\"price\"},\n            \"FIXED_LEN_BYTE_ARRAY\",\n            5,\n            false,\n            0,\n            0,\n            \"DecimalLogicalTypeAnnotation\",\n            params);\n\n    ColumnDescriptor descriptor = Utils.buildColumnDescriptor(spec);\n    PrimitiveType primitiveType = descriptor.getPrimitiveType();\n    assertEquals(\n        PrimitiveType.PrimitiveTypeName.FIXED_LEN_BYTE_ARRAY, primitiveType.getPrimitiveTypeName());\n\n    LogicalTypeAnnotation.DecimalLogicalTypeAnnotation dec =\n        (LogicalTypeAnnotation.DecimalLogicalTypeAnnotation)\n            primitiveType.getLogicalTypeAnnotation();\n    assertEquals(10, dec.getPrecision());\n    assertEquals(2, dec.getScale());\n  }\n\n  @Test\n  public void testBuildColumnDescriptorWithIntLogicalType() {\n    Map<String, String> params = new 
HashMap<>();\n    params.put(\"bitWidth\", \"32\");\n    params.put(\"isSigned\", \"true\");\n\n    ParquetColumnSpec spec =\n        new ParquetColumnSpec(\n            12,\n            new String[] {\"count\"},\n            \"INT32\",\n            0,\n            false,\n            0,\n            0,\n            \"IntLogicalTypeAnnotation\",\n            params);\n\n    ColumnDescriptor descriptor = Utils.buildColumnDescriptor(spec);\n    PrimitiveType primitiveType = descriptor.getPrimitiveType();\n    assertEquals(PrimitiveType.PrimitiveTypeName.INT32, primitiveType.getPrimitiveTypeName());\n\n    LogicalTypeAnnotation.IntLogicalTypeAnnotation ann =\n        (LogicalTypeAnnotation.IntLogicalTypeAnnotation) primitiveType.getLogicalTypeAnnotation();\n    assertEquals(32, ann.getBitWidth());\n    assertTrue(ann.isSigned());\n  }\n\n  @Test\n  public void testBuildColumnDescriptorWithStringLogicalType() {\n    ParquetColumnSpec spec =\n        new ParquetColumnSpec(\n            13,\n            new String[] {\"name\"},\n            \"BINARY\",\n            0,\n            false,\n            0,\n            0,\n            \"StringLogicalTypeAnnotation\",\n            Collections.emptyMap());\n\n    ColumnDescriptor descriptor = Utils.buildColumnDescriptor(spec);\n    PrimitiveType primitiveType = descriptor.getPrimitiveType();\n    assertEquals(PrimitiveType.PrimitiveTypeName.BINARY, primitiveType.getPrimitiveTypeName());\n    assertTrue(\n        primitiveType.getLogicalTypeAnnotation()\n            instanceof LogicalTypeAnnotation.StringLogicalTypeAnnotation);\n  }\n}\n"
  },
  {
    "path": "common/src/test/resources/log4j.properties",
    "content": "#\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# Set everything to be logged to the file target/unit-tests.log\ntest.appender=file\nlog4j.rootCategory=INFO, ${test.appender}\nlog4j.appender.file=org.apache.log4j.FileAppender\nlog4j.appender.file.append=true\nlog4j.appender.file.file=target/unit-tests.log\nlog4j.appender.file.layout=org.apache.log4j.PatternLayout\nlog4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss.SSS} %t %p %c{1}: %m%n\n\n# Tests that launch java subprocesses can set the \"test.appender\" system property to\n# \"console\" to avoid having the child process's logs overwrite the unit test's\n# log file.\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%t: %m%n\n\n# Ignore messages below warning level from Jetty, because it's a bit verbose\nlog4j.logger.org.sparkproject.jetty=WARN\n"
  },
  {
    "path": "common/src/test/resources/log4j2.properties",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n# \n#   http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Set everything to be logged to the file target/unit-tests.log\nrootLogger.level = info\nrootLogger.appenderRef.file.ref = ${sys:test.appender:-File}\n\nappender.file.type = File\nappender.file.name = File\nappender.file.fileName = target/unit-tests.log\nappender.file.layout.type = PatternLayout\nappender.file.layout.pattern = %d{yy/MM/dd HH:mm:ss.SSS} %t %p %c{1}: %m%n\n\n# Tests that launch java subprocesses can set the \"test.appender\" system property to\n# \"console\" to avoid having the child process's logs overwrite the unit test's\n# log file.\nappender.console.type = Console\nappender.console.name = console\nappender.console.target = SYSTEM_ERR\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %t: %m%n\n\n# Ignore messages below warning level from Jetty, because it's a bit verbose\nlogger.jetty.name = org.sparkproject.jetty\nlogger.jetty.level = warn\n\n"
  },
  {
    "path": "conf/log4rs.yaml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n# \n#   http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nappenders:\n  unittest:\n    kind: file\n    path: \"target/unit-tests.log\"\n\nroot:\n  level: info\n  appenders:\n    - unittest\n"
  },
  {
    "path": "dev/cargo.config",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n# \n#   http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[target.x86_64-apple-darwin]\nlinker = \"x86_64-apple-darwin21.4-clang\"\nar = \"x86_64-apple-darwin21.4-ar\"\n\n[target.aarch64-apple-darwin]\nlinker = \"aarch64-apple-darwin21.4-clang\"\nar = \"aarch64-apple-darwin21.4-ar\"\n"
  },
  {
    "path": "dev/changelog/0.1.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.1.0 Changelog\n\nThis release consists of 343 commits from 41 contributors. See credits at the end of this changelog for more information.\n\n**Implemented enhancements:**\n\n- feat: Add native shuffle and columnar shuffle [#30](https://github.com/apache/datafusion-comet/pull/30) (viirya)\n- feat: Support Emit::First for SumDecimalGroupsAccumulator [#47](https://github.com/apache/datafusion-comet/pull/47) (viirya)\n- feat: Nested map support for columnar shuffle [#51](https://github.com/apache/datafusion-comet/pull/51) (viirya)\n- feat: Support Count(Distinct) and similar aggregation functions [#42](https://github.com/apache/datafusion-comet/pull/42) (huaxingao)\n- feat: Upgrade to `jni-rs` 0.21 [#50](https://github.com/apache/datafusion-comet/pull/50) (sunchao)\n- feat: Handle exception thrown from native side [#61](https://github.com/apache/datafusion-comet/pull/61) (sunchao)\n- feat: Support InSet expression in Comet [#59](https://github.com/apache/datafusion-comet/pull/59) (viirya)\n- feat: Add `CometNativeException` for exceptions thrown from the native side [#62](https://github.com/apache/datafusion-comet/pull/62) (sunchao)\n- feat: Add cause to native exception [#63](https://github.com/apache/datafusion-comet/pull/63) (viirya)\n- feat: Pull based native execution [#69](https://github.com/apache/datafusion-comet/pull/69) (viirya)\n- feat: Add executeColumnarCollectIterator to CometExec to collect Comet operator result [#71](https://github.com/apache/datafusion-comet/pull/71) (viirya)\n- feat: Add CometBroadcastExchangeExec to support broadcasting the result of Comet native operator [#80](https://github.com/apache/datafusion-comet/pull/80) (viirya)\n- feat: Reduce memory consumption when writing sorted shuffle files [#82](https://github.com/apache/datafusion-comet/pull/82) (sunchao)\n- feat: Add struct/map as unsupported map key/value for columnar shuffle [#84](https://github.com/apache/datafusion-comet/pull/84) (viirya)\n- feat: Support multiple input sources for CometNativeExec [#87](https://github.com/apache/datafusion-comet/pull/87) (viirya)\n- feat: Date and timestamp trunc with format array [#94](https://github.com/apache/datafusion-comet/pull/94) (parthchandra)\n- feat: Support `First`/`Last` aggregate functions [#97](https://github.com/apache/datafusion-comet/pull/97) (huaxingao)\n- feat: Add support of TakeOrderedAndProjectExec in Comet [#88](https://github.com/apache/datafusion-comet/pull/88) (viirya)\n- feat: Support Binary in shuffle writer [#106](https://github.com/apache/datafusion-comet/pull/106) (advancedxy)\n- feat: Add license header by spotless:apply automatically [#110](https://github.com/apache/datafusion-comet/pull/110) (advancedxy)\n- feat: Add 
dictionary binary to shuffle writer [#111](https://github.com/apache/datafusion-comet/pull/111) (viirya)\n- feat: Minimize number of connections used by parallel reader [#126](https://github.com/apache/datafusion-comet/pull/126) (parthchandra)\n- feat: Support CollectLimit operator [#100](https://github.com/apache/datafusion-comet/pull/100) (advancedxy)\n- feat: Enable min/max for boolean type [#165](https://github.com/apache/datafusion-comet/pull/165) (huaxingao)\n- feat: Introduce `CometTaskMemoryManager` and native side memory pool [#83](https://github.com/apache/datafusion-comet/pull/83) (sunchao)\n- feat: Fix old style names [#201](https://github.com/apache/datafusion-comet/pull/201) (comphead)\n- feat: enable comet shuffle manager for comet shell [#204](https://github.com/apache/datafusion-comet/pull/204) (zuston)\n- feat: Support bitwise aggregate functions [#197](https://github.com/apache/datafusion-comet/pull/197) (huaxingao)\n- feat: Support BloomFilterMightContain expr [#179](https://github.com/apache/datafusion-comet/pull/179) (advancedxy)\n- feat: Support sort merge join [#178](https://github.com/apache/datafusion-comet/pull/178) (viirya)\n- feat: Support HashJoin operator [#194](https://github.com/apache/datafusion-comet/pull/194) (viirya)\n- feat: Remove use of nightly int_roundings feature [#228](https://github.com/apache/datafusion-comet/pull/228) (psvri)\n- feat: Support Broadcast HashJoin [#211](https://github.com/apache/datafusion-comet/pull/211) (viirya)\n- feat: Enable Comet broadcast by default [#213](https://github.com/apache/datafusion-comet/pull/213) (viirya)\n- feat: Add CometRowToColumnar operator [#206](https://github.com/apache/datafusion-comet/pull/206) (advancedxy)\n- feat: Document the class path / classloader issue with the shuffle manager [#256](https://github.com/apache/datafusion-comet/pull/256) (holdenk)\n- feat: Port Datafusion Covariance to Comet [#234](https://github.com/apache/datafusion-comet/pull/234) (huaxingao)\n- feat: Add manual test to calculate spark builtin functions coverage [#263](https://github.com/apache/datafusion-comet/pull/263) (comphead)\n- feat: Support ANSI mode in CAST from String to Bool [#290](https://github.com/apache/datafusion-comet/pull/290) (andygrove)\n- feat: Add extended explain info to Comet plan [#255](https://github.com/apache/datafusion-comet/pull/255) (parthchandra)\n- feat: Improve CometSortMergeJoin statistics [#304](https://github.com/apache/datafusion-comet/pull/304) (planga82)\n- feat: Add compatibility guide [#316](https://github.com/apache/datafusion-comet/pull/316) (andygrove)\n- feat: Improve CometHashJoin statistics [#309](https://github.com/apache/datafusion-comet/pull/309) (planga82)\n- feat: Support Variance [#297](https://github.com/apache/datafusion-comet/pull/297) (huaxingao)\n- feat: Support murmur3_hash and sha2 family hash functions [#226](https://github.com/apache/datafusion-comet/pull/226) (advancedxy)\n- feat: Disable cast string to timestamp by default [#337](https://github.com/apache/datafusion-comet/pull/337) (andygrove)\n- feat: Improve CometBroadcastHashJoin statistics [#339](https://github.com/apache/datafusion-comet/pull/339) (planga82)\n- feat: Implement Spark-compatible CAST from string to integral types [#307](https://github.com/apache/datafusion-comet/pull/307) (andygrove)\n- feat: Implement Spark-compatible CAST from string to timestamp types [#335](https://github.com/apache/datafusion-comet/pull/335) (vaibhawvipul)\n- feat: Implement Spark-compatible CAST float/double to string 
[#346](https://github.com/apache/datafusion-comet/pull/346) (mattharder91)\n- feat: Only allow incompatible cast expressions to run in comet if a config is enabled [#362](https://github.com/apache/datafusion-comet/pull/362) (andygrove)\n- feat: Implement Spark-compatible CAST between integer types [#340](https://github.com/apache/datafusion-comet/pull/340) (ganeshkumar269)\n- feat: Supports Stddev [#348](https://github.com/apache/datafusion-comet/pull/348) (huaxingao)\n- feat: Improve cast compatibility tests and docs [#379](https://github.com/apache/datafusion-comet/pull/379) (andygrove)\n- feat: Implement Spark-compatible CAST from non-integral numeric types to integral types [#399](https://github.com/apache/datafusion-comet/pull/399) (rohitrastogi)\n- feat: Implement Spark unhex [#342](https://github.com/apache/datafusion-comet/pull/342) (tshauck)\n- feat: Enable columnar shuffle by default [#250](https://github.com/apache/datafusion-comet/pull/250) (viirya)\n- feat: Implement Spark-compatible CAST from floating-point/double to decimal [#384](https://github.com/apache/datafusion-comet/pull/384) (vaibhawvipul)\n- feat: Add logging to explain reasons for Comet not being able to run a query stage natively [#397](https://github.com/apache/datafusion-comet/pull/397) (andygrove)\n- feat: Add support for TryCast expression in Spark 3.2 and 3.3 [#416](https://github.com/apache/datafusion-comet/pull/416) (vaibhawvipul)\n- feat: Supports UUID column [#395](https://github.com/apache/datafusion-comet/pull/395) (huaxingao)\n- feat: correlation support [#456](https://github.com/apache/datafusion-comet/pull/456) (huaxingao)\n- feat: Implement Spark-compatible CAST from String to Date [#383](https://github.com/apache/datafusion-comet/pull/383) (vidyasankarv)\n- feat: Add COMET_SHUFFLE_MODE config to control Comet shuffle mode [#460](https://github.com/apache/datafusion-comet/pull/460) (viirya)\n- feat: Add random row generator in data generator [#451](https://github.com/apache/datafusion-comet/pull/451) (advancedxy)\n- feat: Add xxhash64 function support [#424](https://github.com/apache/datafusion-comet/pull/424) (advancedxy)\n- feat: add hex scalar function [#449](https://github.com/apache/datafusion-comet/pull/449) (tshauck)\n- feat: Add \"Comet Fuzz\" fuzz-testing utility [#472](https://github.com/apache/datafusion-comet/pull/472) (andygrove)\n- feat: Use enum to represent CAST eval_mode in expr.proto [#415](https://github.com/apache/datafusion-comet/pull/415) (prashantksharma)\n- feat: Implement ANSI support for UnaryMinus [#471](https://github.com/apache/datafusion-comet/pull/471) (vaibhawvipul)\n- feat: Add specific fuzz tests for cast and try_cast and fix NPE found during fuzz testing [#514](https://github.com/apache/datafusion-comet/pull/514) (andygrove)\n- feat: Add fuzz testing for arithmetic expressions [#519](https://github.com/apache/datafusion-comet/pull/519) (andygrove)\n- feat: Add HashJoin support for BuildRight [#437](https://github.com/apache/datafusion-comet/pull/437) (viirya)\n- feat: Fix Comet error message [#544](https://github.com/apache/datafusion-comet/pull/544) (comphead)\n- feat: Support Ansi mode in abs function [#500](https://github.com/apache/datafusion-comet/pull/500) (planga82)\n- feat: Enable xxhash64 by default [#583](https://github.com/apache/datafusion-comet/pull/583) (andygrove)\n- feat: Add experimental support for Apache Spark 3.5.1 [#587](https://github.com/apache/datafusion-comet/pull/587) (andygrove)\n- feat: add nullOnDivideByZero for Covariance 
[#564](https://github.com/apache/datafusion-comet/pull/564) (huaxingao)\n- feat: Implement more efficient version of xxhash64 [#575](https://github.com/apache/datafusion-comet/pull/575) (andygrove)\n- feat: Enable Spark SQL tests for Spark 3.5.1 [#603](https://github.com/apache/datafusion-comet/pull/603) (andygrove)\n- feat: Initial support for Window function [#599](https://github.com/apache/datafusion-comet/pull/599) (huaxingao)\n- feat: IsNaN expression in Comet [#612](https://github.com/apache/datafusion-comet/pull/612) (eejbyfeldt)\n- feat: Add support for CreateNamedStruct [#620](https://github.com/apache/datafusion-comet/pull/620) (eejbyfeldt)\n- feat: add cargo machete to remove udeps [#641](https://github.com/apache/datafusion-comet/pull/641) (vaibhawvipul)\n- feat: Upgrade to DataFusion 40.0.0-rc1 [#644](https://github.com/apache/datafusion-comet/pull/644) (andygrove)\n- feat: Use unified allocator for execution iterators [#613](https://github.com/apache/datafusion-comet/pull/613) (viirya)\n- feat: Create new `datafusion-comet-spark-expr` crate containing Spark-compatible DataFusion expressions [#638](https://github.com/apache/datafusion-comet/pull/638) (andygrove)\n- feat: Move `IfExpr` to `spark-expr` crate [#653](https://github.com/apache/datafusion-comet/pull/653) (andygrove)\n- feat: Upgrade to DataFusion 40 [#657](https://github.com/apache/datafusion-comet/pull/657) (andygrove)\n- feat: Show user a more intuitive message when queries fall back to Spark [#656](https://github.com/apache/datafusion-comet/pull/656) (andygrove)\n- feat: Enable remaining Spark 3.5.1 tests [#676](https://github.com/apache/datafusion-comet/pull/676) (andygrove)\n- feat: Spark-4.0 widening type support [#604](https://github.com/apache/datafusion-comet/pull/604) (kazuyukitanimura)\n- feat: add scalar subquery pushdown to scan [#678](https://github.com/apache/datafusion-comet/pull/678) (parthchandra)\n\n**Fixed bugs:**\n\n- fix: Comet sink operator should not have children operators [#26](https://github.com/apache/datafusion-comet/pull/26) (viirya)\n- fix: Fix the UnionExec match branches in CometExecRule [#68](https://github.com/apache/datafusion-comet/pull/68) (wankunde)\n- fix: Appending null values to element array builders of StructBuilder for null row in a StructArray [#78](https://github.com/apache/datafusion-comet/pull/78) (viirya)\n- fix: Fix compilation error for CometBroadcastExchangeExec [#86](https://github.com/apache/datafusion-comet/pull/86) (viirya)\n- fix: Avoid exception caused by broadcasting empty result [#92](https://github.com/apache/datafusion-comet/pull/92) (wForget)\n- fix: Add num_rows when building RecordBatch [#103](https://github.com/apache/datafusion-comet/pull/103) (advancedxy)\n- fix: Cast string to boolean not compatible with Spark [#107](https://github.com/apache/datafusion-comet/pull/107) (erenavsarogullari)\n- fix: Another attempt to fix libcrypto.dylib loading issue [#112](https://github.com/apache/datafusion-comet/pull/112) (advancedxy)\n- fix: Fix compilation error for Spark 3.2 & 3.3 [#117](https://github.com/apache/datafusion-comet/pull/117) (sunchao)\n- fix: Fix corrupted AggregateMode when transforming plan parameters [#118](https://github.com/apache/datafusion-comet/pull/118) (viirya)\n- fix: bitwise shift with different left/right types [#135](https://github.com/apache/datafusion-comet/pull/135) (viirya)\n- fix: Avoid null exception in removeSubquery [#147](https://github.com/apache/datafusion-comet/pull/147) (viirya)\n- fix: rat check error in vscode ide 
[#161](https://github.com/apache/datafusion-comet/pull/161) (thexiay)\n- fix: Final aggregation should not bind to the input of partial aggregation [#155](https://github.com/apache/datafusion-comet/pull/155) (viirya)\n- fix: coalesce should return correct datatype [#168](https://github.com/apache/datafusion-comet/pull/168) (viirya)\n- fix: attempt to divide by zero error on decimal division [#172](https://github.com/apache/datafusion-comet/pull/172) (viirya)\n- fix: Aggregation without aggregation expressions should use correct result expressions [#175](https://github.com/apache/datafusion-comet/pull/175) (viirya)\n- fix: Comet native operator can be executed after ReusedExchange [#187](https://github.com/apache/datafusion-comet/pull/187) (viirya)\n- fix: Try to convert a static list into a set in Rust [#184](https://github.com/apache/datafusion-comet/pull/184) (advancedxy)\n- fix: Include active spiller when computing peak shuffle memory [#196](https://github.com/apache/datafusion-comet/pull/196) (sunchao)\n- fix: CometExecRule should handle ShuffleQueryStage and ReusedExchange [#186](https://github.com/apache/datafusion-comet/pull/186) (viirya)\n- fix: Use `makeCopy` to change relation in `FileSourceScanExec` [#207](https://github.com/apache/datafusion-comet/pull/207) (viirya)\n- fix: Remove duplicate byte array allocation for CometDictionary [#224](https://github.com/apache/datafusion-comet/pull/224) (viirya)\n- fix: Remove redundant data copy in columnar shuffle [#233](https://github.com/apache/datafusion-comet/pull/233) (viirya)\n- fix: Only maps FIXED_LEN_BYTE_ARRAY to String for uuid type [#238](https://github.com/apache/datafusion-comet/pull/238) (huaxingao)\n- fix: Reduce RowPartition memory allocation [#244](https://github.com/apache/datafusion-comet/pull/244) (viirya)\n- fix: Remove wrong calculation for Murmur3Hash for float with null input [#245](https://github.com/apache/datafusion-comet/pull/245) (advancedxy)\n- fix: Deallocate row addresses and size arrays after exporting [#246](https://github.com/apache/datafusion-comet/pull/246) (viirya)\n- fix: Fix wrong children expression order in IfExpr [#249](https://github.com/apache/datafusion-comet/pull/249) (viirya)\n- fix: Average expression in Comet Final should handle all null inputs from partial Spark aggregation [#261](https://github.com/apache/datafusion-comet/pull/261) (viirya)\n- fix: Only trigger Comet Final aggregation on Comet partial aggregation [#264](https://github.com/apache/datafusion-comet/pull/264) (viirya)\n- fix: incorrect result on Comet multiple column distinct count [#268](https://github.com/apache/datafusion-comet/pull/268) (viirya)\n- fix: Avoid using CometConf [#266](https://github.com/apache/datafusion-comet/pull/266) (snmvaughan)\n- fix: Fix arrow error when sorting on empty batch [#271](https://github.com/apache/datafusion-comet/pull/271) (viirya)\n- fix: Include license using `#` instead of using XML comment [#274](https://github.com/apache/datafusion-comet/pull/274) (snmvaughan)\n- fix: Comet should not translate try_sum to native sum expression [#277](https://github.com/apache/datafusion-comet/pull/277) (viirya)\n- fix: incorrect result with aggregate expression with filter [#284](https://github.com/apache/datafusion-comet/pull/284) (viirya)\n- fix: Comet should not fail on negative limit parameter [#288](https://github.com/apache/datafusion-comet/pull/288) (viirya)\n- fix: Comet columnar shuffle should not be on top of another Comet shuffle operator 
[#296](https://github.com/apache/datafusion-comet/pull/296) (viirya)\n- fix: Iceberg scan transition should be in front of other data source v2 [#302](https://github.com/apache/datafusion-comet/pull/302) (viirya)\n- fix: CometExec's outputPartitioning might not be same as Spark expects after AQE interferes [#299](https://github.com/apache/datafusion-comet/pull/299) (viirya)\n- fix: CometShuffleExchangeExec logical link should be correct [#324](https://github.com/apache/datafusion-comet/pull/324) (viirya)\n- fix: SortMergeJoin with unsupported key type should fall back to Spark [#355](https://github.com/apache/datafusion-comet/pull/355) (viirya)\n- fix: limit with offset should return correct results [#359](https://github.com/apache/datafusion-comet/pull/359) (viirya)\n- fix: Disable Comet shuffle with AQE coalesce partitions enabled [#380](https://github.com/apache/datafusion-comet/pull/380) (viirya)\n- fix: Unknown operator id when explain with formatted mode [#410](https://github.com/apache/datafusion-comet/pull/410) (leoluan2009)\n- fix: Reuse CometBroadcastExchangeExec with Spark ReuseExchangeAndSubquery rule [#441](https://github.com/apache/datafusion-comet/pull/441) (viirya)\n- fix: newFileScanRDD should not take constructor from custom Spark versions [#412](https://github.com/apache/datafusion-comet/pull/412) (ceppelli)\n- fix: fix CometNativeExec.doCanonicalize for ReusedExchangeExec [#447](https://github.com/apache/datafusion-comet/pull/447) (viirya)\n- fix: Enable cast string to int tests and fix compatibility issue [#453](https://github.com/apache/datafusion-comet/pull/453) (andygrove)\n- fix: Compute murmur3 hash with dictionary input correctly [#433](https://github.com/apache/datafusion-comet/pull/433) (advancedxy)\n- fix: Only delegate to DataFusion cast when we know that it is compatible with Spark [#461](https://github.com/apache/datafusion-comet/pull/461) (andygrove)\n- fix: `ColumnReader.loadVector` should initiate `CometDictionary` after re-import arrays [#473](https://github.com/apache/datafusion-comet/pull/473) (viirya)\n- fix: substring with negative indices should produce correct result [#470](https://github.com/apache/datafusion-comet/pull/470) (sonhmai)\n- fix: CometReader.loadVector should not overwrite dictionary ids [#476](https://github.com/apache/datafusion-comet/pull/476) (viirya)\n- fix: Reuse previous CometDictionary Java arrays [#489](https://github.com/apache/datafusion-comet/pull/489) (viirya)\n- fix: Fallback to Spark for LIKE with custom escape character [#478](https://github.com/apache/datafusion-comet/pull/478) (sujithjay)\n- fix: Incorrect input schema when preparing result expressions for HashAggregation [#501](https://github.com/apache/datafusion-comet/pull/501) (viirya)\n- fix: Input batch to ShuffleRepartitioner.insert_batch should not be larger than configured batch size [#523](https://github.com/apache/datafusion-comet/pull/523) (viirya)\n- fix: Fix integer overflow in date_parser [#529](https://github.com/apache/datafusion-comet/pull/529) (eejbyfeldt)\n- fix: null character not permitted in chr function [#513](https://github.com/apache/datafusion-comet/pull/513) (vaibhawvipul)\n- fix: Overflow when reading Timestamp from parquet file [#542](https://github.com/apache/datafusion-comet/pull/542) (eejbyfeldt)\n- fix: Re-implement some Parquet decode methods without `copy_nonoverlapping` [#558](https://github.com/apache/datafusion-comet/pull/558) (andygrove)\n- fix: requested character too large for encoding in chr function 
[#552](https://github.com/apache/datafusion-comet/pull/552) (vaibhawvipul)\n- fix: Running cargo build always triggers rebuild [#579](https://github.com/apache/datafusion-comet/pull/579) (eejbyfeldt)\n- fix: Avoid recursive call to `canonicalizePlans` [#582](https://github.com/apache/datafusion-comet/pull/582) (viirya)\n- fix: Return error in pre_timestamp_cast instead of panic [#543](https://github.com/apache/datafusion-comet/pull/543) (eejbyfeldt)\n- perf: Add criterion benchmark for xxhash64 function [#560](https://github.com/apache/datafusion-comet/pull/560) (andygrove)\n- fix: Fix range out of index error with a temporary workaround [#584](https://github.com/apache/datafusion-comet/pull/584) (viirya)\n- fix: Improve error \"BroadcastExchange is not supported\" [#577](https://github.com/apache/datafusion-comet/pull/577) (parthchandra)\n- fix: Avoid creating huge duplicate of canonicalized plans for CometNativeExec [#639](https://github.com/apache/datafusion-comet/pull/639) (viirya)\n- fix: Tag ignored tests that require SubqueryBroadcastExec [#647](https://github.com/apache/datafusion-comet/pull/647) (parthchandra)\n- fix: Optimize some functions to rewrite dictionary-encoded strings [#627](https://github.com/apache/datafusion-comet/pull/627) (vaibhawvipul)\n- fix: Remove nightly flag in release-nogit target in Makefile [#667](https://github.com/apache/datafusion-comet/pull/667) (andygrove)\n- fix: change the not exists base image apache/spark:3.4.3 to 3.4.2 [#686](https://github.com/apache/datafusion-comet/pull/686) (haoxins)\n- fix: Spark 4.0 SparkArithmeticException test [#688](https://github.com/apache/datafusion-comet/pull/688) (kazuyukitanimura)\n- fix: address failure caused by method signature change in SPARK-48791 [#693](https://github.com/apache/datafusion-comet/pull/693) (parthchandra)\n\n**Documentation updates:**\n\n- doc: Add Quickstart Comet doc section [#125](https://github.com/apache/datafusion-comet/pull/125) (comphead)\n- doc: Minor fix Getting started reformatting [#128](https://github.com/apache/datafusion-comet/pull/128) (comphead)\n- doc: Add initial doc how to expand Comet exceptions [#170](https://github.com/apache/datafusion-comet/pull/170) (comphead)\n- doc: Update README.md with shuffle configs [#208](https://github.com/apache/datafusion-comet/pull/208) (viirya)\n- doc: Update supported expressions [#237](https://github.com/apache/datafusion-comet/pull/237) (viirya)\n- doc: Fix a small typo in README.md [#272](https://github.com/apache/datafusion-comet/pull/272) (rz-vastdata)\n- doc: Update DataFusion project name and url [#300](https://github.com/apache/datafusion-comet/pull/300) (viirya)\n- docs: Move existing documentation into new Contributor Guide and add Getting Started section [#334](https://github.com/apache/datafusion-comet/pull/334) (andygrove)\n- docs: Add more content to the user guide [#347](https://github.com/apache/datafusion-comet/pull/347) (andygrove)\n- docs: Generate configuration guide in mvn build [#349](https://github.com/apache/datafusion-comet/pull/349) (andygrove)\n- docs: Add a plugin overview page to the contributors guide [#345](https://github.com/apache/datafusion-comet/pull/345) (andygrove)\n- doc: Fix target typo in development.md [#364](https://github.com/apache/datafusion-comet/pull/364) (jc4x4)\n- doc: Clean up supported JDKs in README [#366](https://github.com/apache/datafusion-comet/pull/366) (edmondop)\n- doc: add contributing in README.md [#382](https://github.com/apache/datafusion-comet/pull/382) (caicancai)\n- docs: 
fix the docs url of installation instructions [#393](https://github.com/apache/datafusion-comet/pull/393) (haoxins)\n- docs: Running ScalaTest suites from the CLI [#404](https://github.com/apache/datafusion-comet/pull/404) (edmondop)\n- docs: Remove spark.comet.exec.broadcast.enabled from config docs [#421](https://github.com/apache/datafusion-comet/pull/421) (andygrove)\n- docs: fix various sphinx warnings [#428](https://github.com/apache/datafusion-comet/pull/428) (tshauck)\n- doc: Add Plan Stability Testing to development guide [#432](https://github.com/apache/datafusion-comet/pull/432) (viirya)\n- docs: Update Spark shell command to include setting additional class path [#435](https://github.com/apache/datafusion-comet/pull/435) (andygrove)\n- doc: Add Tuning Guide with shuffle configs [#443](https://github.com/apache/datafusion-comet/pull/443) (viirya)\n- docs: Add benchmarking guide [#444](https://github.com/apache/datafusion-comet/pull/444) (andygrove)\n- docs: add guide to adding a new expression [#422](https://github.com/apache/datafusion-comet/pull/422) (tshauck)\n- docs: changes in documentation [#512](https://github.com/apache/datafusion-comet/pull/512) (SemyonSinchenko)\n- docs: Improve user documentation for supported operators and expressions [#520](https://github.com/apache/datafusion-comet/pull/520) (andygrove)\n- docs: Proposal for source release process [#556](https://github.com/apache/datafusion-comet/pull/556) (andygrove)\n- docs: Update benchmark results [#687](https://github.com/apache/datafusion-comet/pull/687) (andygrove)\n- docs: Update percentage speedups in benchmarking guide [#691](https://github.com/apache/datafusion-comet/pull/691) (andygrove)\n- doc: Add memory tuning section to user guide [#684](https://github.com/apache/datafusion-comet/pull/684) (viirya)\n\n**Other:**\n\n- Initial PR [#1](https://github.com/apache/datafusion-comet/pull/1) (sunchao)\n- build: Add Maven wrapper to the project [#13](https://github.com/apache/datafusion-comet/pull/13) (sunchao)\n- build: Add basic CI test pipelines [#18](https://github.com/apache/datafusion-comet/pull/18) (sunchao)\n- Bump com.google.protobuf:protobuf-java from 3.17.3 to 3.19.6 [#5](https://github.com/apache/datafusion-comet/pull/5) (dependabot[bot])\n- build: Add PR template [#23](https://github.com/apache/datafusion-comet/pull/23) (sunchao)\n- build: Create ticket templates [#24](https://github.com/apache/datafusion-comet/pull/24) (comphead)\n- build: Re-enable Scala style checker and spotless [#21](https://github.com/apache/datafusion-comet/pull/21) (sunchao)\n- build: Remove license header from pull request template [#28](https://github.com/apache/datafusion-comet/pull/28) (viirya)\n- build: Exclude .github from apache-rat-plugin check [#32](https://github.com/apache/datafusion-comet/pull/32) (viirya)\n- build: Add CI for MacOS (x64 and aarch64) [#35](https://github.com/apache/datafusion-comet/pull/35) (sunchao)\n- fix broken link in README.md [#39](https://github.com/apache/datafusion-comet/pull/39) (nairbv)\n- test: Add some fuzz testing for cast operations [#16](https://github.com/apache/datafusion-comet/pull/16) (andygrove)\n- test: Fix CI failure on libcrypto [#41](https://github.com/apache/datafusion-comet/pull/41) (sunchao)\n- test: Reduce test time spent in `CometShuffleSuite` [#40](https://github.com/apache/datafusion-comet/pull/40) (sunchao)\n- test: Add test for RoundRobinPartitioning [#54](https://github.com/apache/datafusion-comet/pull/54) (viirya)\n- build: Fix potential libcrypto lib 
loading issue for X86 mac runners [#55](https://github.com/apache/datafusion-comet/pull/55) (advancedxy)\n- refactor: Remove a few duplicated occurrences [#53](https://github.com/apache/datafusion-comet/pull/53) (sunchao)\n- build: Fix mvn cache for containerized runners [#48](https://github.com/apache/datafusion-comet/pull/48) (advancedxy)\n- test: Ensure traversed operators during finding first partial aggregation are all native [#58](https://github.com/apache/datafusion-comet/pull/58) (viirya)\n- build: Upgrade arrow-rs to 50.0.0 and DataFusion to 35.0.0 [#65](https://github.com/apache/datafusion-comet/pull/65) (viirya)\n- build: Support built with java 1.8 [#45](https://github.com/apache/datafusion-comet/pull/45) (advancedxy)\n- test: Add golden files for TPCDSPlanStabilitySuite [#73](https://github.com/apache/datafusion-comet/pull/73) (sunchao)\n- test: Add TPC-DS test results [#77](https://github.com/apache/datafusion-comet/pull/77) (sunchao)\n- build: Upgrade spotless version to 2.43.0 [#85](https://github.com/apache/datafusion-comet/pull/85) (viirya)\n- test: Expose thrown exception when executing query in CometTPCHQuerySuite [#96](https://github.com/apache/datafusion-comet/pull/96) (viirya)\n- test: Enable TPCDS q41 in CometTPCDSQuerySuite [#98](https://github.com/apache/datafusion-comet/pull/98) (viirya)\n- build: Add CI for TPCDS queries [#99](https://github.com/apache/datafusion-comet/pull/99) (viirya)\n- build: Add tpcds-sf-1 to license header excluded list [#108](https://github.com/apache/datafusion-comet/pull/108) (viirya)\n- build: Show time duration for scala test [#116](https://github.com/apache/datafusion-comet/pull/116) (advancedxy)\n- test: Move MacOS (x86) pipelines to post-commit [#122](https://github.com/apache/datafusion-comet/pull/122) (sunchao)\n- build: Upgrade DF to 36.0.0 and arrow-rs 50.0.0 [#66](https://github.com/apache/datafusion-comet/pull/66) (comphead)\n- test: Reduce end-to-end test time [#109](https://github.com/apache/datafusion-comet/pull/109) (sunchao)\n- build: Separate and speedup TPC-DS benchmark [#130](https://github.com/apache/datafusion-comet/pull/130) (advancedxy)\n- build: Re-enable TPCDS queries q34 and q64 in `CometTPCDSQuerySuite` [#133](https://github.com/apache/datafusion-comet/pull/133) (viirya)\n- build: Refine names in benchmark.yml [#132](https://github.com/apache/datafusion-comet/pull/132) (advancedxy)\n- build: Make the build system work out of box [#136](https://github.com/apache/datafusion-comet/pull/136) (advancedxy)\n- minor: Update README.md with system diagram [#148](https://github.com/apache/datafusion-comet/pull/148) (alamb)\n- test: Add golden files for test [#150](https://github.com/apache/datafusion-comet/pull/150) (snmvaughan)\n- build: Add checker for PR title [#151](https://github.com/apache/datafusion-comet/pull/151) (sunchao)\n- build: Support CI pipelines for Spark 3.2, 3.3 and 3.4 [#153](https://github.com/apache/datafusion-comet/pull/153) (advancedxy)\n- minor: Only trigger PR title checker on pull requests [#154](https://github.com/apache/datafusion-comet/pull/154) (sunchao)\n- chore: Fix warnings in both compiler and test environments [#164](https://github.com/apache/datafusion-comet/pull/164) (advancedxy)\n- build: Upload test reports and coverage [#163](https://github.com/apache/datafusion-comet/pull/163) (advancedxy)\n- minor: Remove unnecessary logic [#169](https://github.com/apache/datafusion-comet/pull/169) (sunchao)\n- minor: Make `QueryPlanSerde` warning log less confusing 
[#181](https://github.com/apache/datafusion-comet/pull/181) (viirya)\n- refactor: Skipping slicing on shuffle arrays in shuffle reader [#189](https://github.com/apache/datafusion-comet/pull/189) (viirya)\n- build: Run Spark SQL tests for 3.4 [#166](https://github.com/apache/datafusion-comet/pull/166) (sunchao)\n- build: Enforce scalafix check in CI [#203](https://github.com/apache/datafusion-comet/pull/203) (advancedxy)\n- test: Follow up on Spark 3.4 diff [#209](https://github.com/apache/datafusion-comet/pull/209) (sunchao)\n- build: Avoid confusion by using profile with clean [#215](https://github.com/apache/datafusion-comet/pull/215) (snmvaughan)\n- test: Add TPC-H test results [#218](https://github.com/apache/datafusion-comet/pull/218) (viirya)\n- build: Add CI for TPC-H queries [#220](https://github.com/apache/datafusion-comet/pull/220) (viirya)\n- test: Enable Comet shuffle in Spark SQL tests [#210](https://github.com/apache/datafusion-comet/pull/210) (sunchao)\n- test: Disable spark ui in unit test by default [#235](https://github.com/apache/datafusion-comet/pull/235) (beryllw)\n- chore: Replace deprecated temporal methods [#229](https://github.com/apache/datafusion-comet/pull/229) (snmvaughan)\n- build: Use specified branch of arrow-rs with workaround to invalid offset buffers from Java Arrow [#239](https://github.com/apache/datafusion-comet/pull/239) (viirya)\n- test: Enable string-to-bool cast test [#251](https://github.com/apache/datafusion-comet/pull/251) (andygrove)\n- test: Restore tests in CometTPCDSQuerySuite [#252](https://github.com/apache/datafusion-comet/pull/252) (viirya)\n- test: Enable all remaining TPCDS queries [#254](https://github.com/apache/datafusion-comet/pull/254) (viirya)\n- test: Enable all remaining TPCH queries [#257](https://github.com/apache/datafusion-comet/pull/257) (viirya)\n- chore: Remove some calls to unwrap when calling create_expr in planner.rs [#269](https://github.com/apache/datafusion-comet/pull/269) (andygrove)\n- chore: Fix typo in info message [#279](https://github.com/apache/datafusion-comet/pull/279) (andygrove)\n- chore: Fix NPE when running CometTPCHQueriesList directly [#285](https://github.com/apache/datafusion-comet/pull/285) (advancedxy)\n- chore: Update Comet repo description [#291](https://github.com/apache/datafusion-comet/pull/291) (viirya)\n- Chore: Cleanup how datafusion session config is created [#289](https://github.com/apache/datafusion-comet/pull/289) (psvri)\n- build: Update asf.yaml to use `@datafusion.apache.org` [#294](https://github.com/apache/datafusion-comet/pull/294) (sunchao)\n- chore: Remove unused functions [#301](https://github.com/apache/datafusion-comet/pull/301) (kazuyukitanimura)\n- chore: Ignore unused variables [#306](https://github.com/apache/datafusion-comet/pull/306) (snmvaughan)\n- chore: Update documentation publishing domain and path [#310](https://github.com/apache/datafusion-comet/pull/310) (andygrove)\n- chore: Add documentation publishing infrastructure [#314](https://github.com/apache/datafusion-comet/pull/314) (andygrove)\n- build: Move shim directories [#318](https://github.com/apache/datafusion-comet/pull/318) (kazuyukitanimura)\n- test: Suppress decimal random number tests for 3.2 and 3.3 [#319](https://github.com/apache/datafusion-comet/pull/319) (kazuyukitanimura)\n- chore: Add allocation source to StreamReader [#332](https://github.com/apache/datafusion-comet/pull/332) (viirya)\n- chore: Add more cast tests and improve test framework 
[#351](https://github.com/apache/datafusion-comet/pull/351) (andygrove)\n- chore: Implement remaining CAST tests [#356](https://github.com/apache/datafusion-comet/pull/356) (andygrove)\n- build: Add Spark SQL test pipeline with ANSI mode enabled [#321](https://github.com/apache/datafusion-comet/pull/321) (parthchandra)\n- chore: Store EXTENSION_INFO as Set[String] instead of newline-delimited String [#386](https://github.com/apache/datafusion-comet/pull/386) (andygrove)\n- build: Add scala-version to matrix [#396](https://github.com/apache/datafusion-comet/pull/396) (snmvaughan)\n- chore: Add criterion benchmarks for casting between integer types [#401](https://github.com/apache/datafusion-comet/pull/401) (andygrove)\n- chore: Make COMET_EXEC_BROADCAST_FORCE_ENABLED internal config [#413](https://github.com/apache/datafusion-comet/pull/413) (viirya)\n- chore: Rename some columnar shuffle configs for code consistency [#418](https://github.com/apache/datafusion-comet/pull/418) (leoluan2009)\n- chore: Remove an unused config [#430](https://github.com/apache/datafusion-comet/pull/430) (andygrove)\n- tests: Move random data generation methods from CometCastSuite to new DataGenerator class [#426](https://github.com/apache/datafusion-comet/pull/426) (andygrove)\n- test: Fix explain with extended info comet test [#436](https://github.com/apache/datafusion-comet/pull/436) (kazuyukitanimura)\n- chore: Add cargo bench for shuffle writer [#438](https://github.com/apache/datafusion-comet/pull/438) (andygrove)\n- chore: improve fallback message when comet native shuffle is not enabled [#445](https://github.com/apache/datafusion-comet/pull/445) (andygrove)\n- Coverage: Add a manual test to show what Spark built in expression the DF can support directly [#331](https://github.com/apache/datafusion-comet/pull/331) (comphead)\n- build: Add spark-4.0 profile and shims [#407](https://github.com/apache/datafusion-comet/pull/407) (kazuyukitanimura)\n- build: bump spark version to 3.4.3 [#292](https://github.com/apache/datafusion-comet/pull/292) (huaxingao)\n- chore: Removing copying data from dictionary values into CometDictionary [#490](https://github.com/apache/datafusion-comet/pull/490) (viirya)\n- chore: Update README to highlight Comet benefits [#497](https://github.com/apache/datafusion-comet/pull/497) (andygrove)\n- test: fix ClassNotFoundException for Hive tests [#499](https://github.com/apache/datafusion-comet/pull/499) (kazuyukitanimura)\n- build: Enable comet tests with spark-4.0 profile [#493](https://github.com/apache/datafusion-comet/pull/493) (kazuyukitanimura)\n- chore: Switch to stable Rust [#505](https://github.com/apache/datafusion-comet/pull/505) (andygrove)\n- Minor: Generate the supported Spark builtin expression list into MD file [#455](https://github.com/apache/datafusion-comet/pull/455) (comphead)\n- chore: Simplify code in CometExecIterator and avoid some small overhead [#522](https://github.com/apache/datafusion-comet/pull/522) (andygrove)\n- chore: Upgrade spark to 4.0.0-preview1 [#526](https://github.com/apache/datafusion-comet/pull/526) (advancedxy)\n- chore: Add UnboundColumn to carry datatype for unbound reference [#518](https://github.com/apache/datafusion-comet/pull/518) (viirya)\n- chore: Remove 3.4.2.diff [#528](https://github.com/apache/datafusion-comet/pull/528) (kazuyukitanimura)\n- build: Switch back to official DataFusion repo and arrow-rs after Arrow Java 16 is released [#403](https://github.com/apache/datafusion-comet/pull/403) (viirya)\n- chore: Add CometEvalMode enum 
to replace string literals [#539](https://github.com/apache/datafusion-comet/pull/539) (andygrove)\n- chore: Create initial release process scripts for official ASF source release [#429](https://github.com/apache/datafusion-comet/pull/429) (andygrove)\n- build: Use DataFusion 39.0.0 release [#550](https://github.com/apache/datafusion-comet/pull/550) (viirya)\n- chore: disable xxhash64 by default [#548](https://github.com/apache/datafusion-comet/pull/548) (andygrove)\n- chore: Remove unsafe use of from_raw_parts in Parquet decoder [#549](https://github.com/apache/datafusion-comet/pull/549) (andygrove)\n- test: Add tests for Scalar and Interval values for UnaryMinus [#538](https://github.com/apache/datafusion-comet/pull/538) (vaibhawvipul)\n- chore: Add changelog generator [#545](https://github.com/apache/datafusion-comet/pull/545) (andygrove)\n- chore: Remove unused hash_utils.rs [#561](https://github.com/apache/datafusion-comet/pull/561) (andygrove)\n- chore: Use in_list func directly [#559](https://github.com/apache/datafusion-comet/pull/559) (advancedxy)\n- chore: Fix most of the scala/java build warnings [#562](https://github.com/apache/datafusion-comet/pull/562) (andygrove)\n- chore: Upgrade to Rust 1.78 and fix UB issues in unsafe code [#546](https://github.com/apache/datafusion-comet/pull/546) (andygrove)\n- chore: Remove `spark.comet.xxhash64.enabled` from the config document [#586](https://github.com/apache/datafusion-comet/pull/586) (viirya)\n- build: Drop Spark 3.2 support [#581](https://github.com/apache/datafusion-comet/pull/581) (huaxingao)\n- test: Enable Spark 4.0 tests [#537](https://github.com/apache/datafusion-comet/pull/537) (kazuyukitanimura)\n- refactor: Remove method get_global_jclass [#580](https://github.com/apache/datafusion-comet/pull/580) (eejbyfeldt)\n- chore: Move some utility methods to submodules of scalar_funcs [#590](https://github.com/apache/datafusion-comet/pull/590) (advancedxy)\n- chore: Upgrade to Rust 1.79 [#570](https://github.com/apache/datafusion-comet/pull/570) (andygrove)\n- chore: Remove some calls to `unwrap` [#598](https://github.com/apache/datafusion-comet/pull/598) (andygrove)\n- chore: Improve JNI safety [#600](https://github.com/apache/datafusion-comet/pull/600) (andygrove)\n- chore: remove some unwraps from shuffle module [#601](https://github.com/apache/datafusion-comet/pull/601) (andygrove)\n- chore: Use proper constructor of IndexShuffleBlockResolver [#610](https://github.com/apache/datafusion-comet/pull/610) (viirya)\n- chore: Update benchmark results [#614](https://github.com/apache/datafusion-comet/pull/614) (andygrove)\n- build: Upgrade to 2.13.14 for scala-2.13 profile [#626](https://github.com/apache/datafusion-comet/pull/626) (viirya)\n- chore: Rename shuffle write metric [#624](https://github.com/apache/datafusion-comet/pull/624) (andygrove)\n- minor: replace .downcast_ref::<T>().is_some() with .is::<T>() [#635](https://github.com/apache/datafusion-comet/pull/635) (andygrove)\n- test: Add CometTPCDSQueryTestSuite [#628](https://github.com/apache/datafusion-comet/pull/628) (viirya)\n- chore: Convert Rust project into a workspace [#637](https://github.com/apache/datafusion-comet/pull/637) (andygrove)\n- chore: Add Miri workflow [#636](https://github.com/apache/datafusion-comet/pull/636) (andygrove)\n- test: Run optimized version of q72 derived from TPC-DS [#652](https://github.com/apache/datafusion-comet/pull/652) (viirya)\n- chore: Refactoring of CometError/SparkError [#655](https://github.com/apache/datafusion-comet/pull/655) 
(andygrove)\n- chore: Move `cast` to `spark-expr` crate [#654](https://github.com/apache/datafusion-comet/pull/654) (andygrove)\n- chore: Remove utils crate and move utils into spark-expr crate [#658](https://github.com/apache/datafusion-comet/pull/658) (andygrove)\n- chore: Move temporal kernels and expressions to spark-expr crate [#660](https://github.com/apache/datafusion-comet/pull/660) (andygrove)\n- chore: Move protobuf files to separate crate [#661](https://github.com/apache/datafusion-comet/pull/661) (andygrove)\n- Use IfExpr to check when input to log2 is <=0 and return null [#506](https://github.com/apache/datafusion-comet/pull/506) (PedroMDuarte)\n- chore: Change suffix on some expressions from Exec to Expr [#673](https://github.com/apache/datafusion-comet/pull/673) (andygrove)\n- chore: Fix some regressions with Spark 3.5.1 [#674](https://github.com/apache/datafusion-comet/pull/674) (andygrove)\n- chore: Improve fuzz testing coverage [#668](https://github.com/apache/datafusion-comet/pull/668) (andygrove)\n- Create Comet docker file [#675](https://github.com/apache/datafusion-comet/pull/675) (comphead)\n- chore: Add microbenchmarks [#671](https://github.com/apache/datafusion-comet/pull/671) (andygrove)\n- build: Exclude protobuf generated codes from apache-rat check [#683](https://github.com/apache/datafusion-comet/pull/683) (viirya)\n- chore: Disable abs and signum because they return incorrect results [#695](https://github.com/apache/datafusion-comet/pull/695) (andygrove)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n   100\tLiang-Chi Hsieh\n    82\tAndy Grove\n    28\tadvancedxy\n    27\tChao Sun\n    14\tHuaxin Gao\n    11\tKAZUYUKI TANIMURA\n     9\tVipul Vaibhaw\n     8\tParth Chandra\n     7\tEmil Ejbyfeldt\n     7\tSteve Vaughan\n     7\tcomphead\n     4\tOleks V\n     4\tPablo Langa\n     4\tTrent Hauck\n     2\tEdmondo Porcu\n     2\tVrishabh\n     2\tXin Hao\n     2\tXuedong Luan\n     1\tAndrew Lamb\n     1\tBrian Vaughan\n     1\tCancai Cai\n     1\tEren Avsarogullari\n     1\tHolden Karau\n     1\tJC\n     1\tJunbo wang\n     1\tJunfan Zhang\n     1\tPedro M Duarte\n     1\tPrashant K. Sharma\n     1\tRickestCode\n     1\tRohit Rastogi\n     1\tRoman Zeyde\n     1\tSemyon\n     1\tSon\n     1\tSujith Jay Nair\n     1\tZhen Wang\n     1\tceppelli\n     1\tdependabot[bot]\n     1\tthexia\n     1\tvidyasankarv\n     1\twankun\n     1\tగణేష్\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.10.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.10.0 Changelog\n\nThis release consists of 183 commits from 26 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: [Iceberg] Fix decimal corruption [#1985](https://github.com/apache/datafusion-comet/pull/1985) (andygrove)\n- fix: broken link in development.md [#2024](https://github.com/apache/datafusion-comet/pull/2024) (petern48)\n- fix: [iceberg] Add LogicalTypeAnnotation in ParquetColumnSpec [#2000](https://github.com/apache/datafusion-comet/pull/2000) (huaxingao)\n- fix: hdfs read into buffer fully [#2031](https://github.com/apache/datafusion-comet/pull/2031) (parthchandra)\n- fix: Refactor arithmetic serde and fix correctness issues with EvalMode::TRY [#2018](https://github.com/apache/datafusion-comet/pull/2018) (andygrove)\n- fix: clean up [iceberg] integration APIs [#2032](https://github.com/apache/datafusion-comet/pull/2032) (huaxingao)\n- fix: zero Arrow Array offset before sending across FFI [#2052](https://github.com/apache/datafusion-comet/pull/2052) (mbutrovich)\n- fix: [iceberg] more fixes for Iceberg integration APIs. 
[#2078](https://github.com/apache/datafusion-comet/pull/2078) (parthchandra)\n- fix: Add support for StringDecode in Spark 4.0.0 [#2075](https://github.com/apache/datafusion-comet/pull/2075) (peter-toth)\n- fix: Avoid double free in CometUnifiedShuffleMemoryAllocator [#2122](https://github.com/apache/datafusion-comet/pull/2122) (andygrove)\n- fix: Remove duplicate serde code [#2098](https://github.com/apache/datafusion-comet/pull/2098) (andygrove)\n- fix: Improve logic for determining when an UnpackOrDeepCopy is needed [#2142](https://github.com/apache/datafusion-comet/pull/2142) (andygrove)\n- fix: Add CopyExec to inputs to SortMergeJoinExec [#2155](https://github.com/apache/datafusion-comet/pull/2155) (andygrove)\n- fix: Fix repeatedly url-decode path when reading parquet from s3 using native parquet reader [#2138](https://github.com/apache/datafusion-comet/pull/2138) (Kontinuation)\n- fix: [iceberg] Switch to OSS Spark and run Iceberg Spark tests in parallel [#1987](https://github.com/apache/datafusion-comet/pull/1987) (hsiang-c)\n- fix: [iceberg] Fall back to spark for schemas with empty structs [#2204](https://github.com/apache/datafusion-comet/pull/2204) (andygrove)\n- fix: Fix failing TPC-DS workflow in PR CI runs [#2207](https://github.com/apache/datafusion-comet/pull/2207) (andygrove)\n- fix: [iceberg] order query result deterministically [#2208](https://github.com/apache/datafusion-comet/pull/2208) (hsiang-c)\n- fix: use `spark.comet.batchSize` instead of `conf.arrowMaxRecordsPerBatch` for data that is coming from Java [#2196](https://github.com/apache/datafusion-comet/pull/2196) (rluvaton)\n- fix: if expr nullable [#2217](https://github.com/apache/datafusion-comet/pull/2217) (Asura7969)\n- fix: Support `auto` scan mode with Spark 4.0.0 [#1975](https://github.com/apache/datafusion-comet/pull/1975) (andygrove)\n- fix: Make Sha2 fallback message more user-friendly [#2213](https://github.com/apache/datafusion-comet/pull/2213) (rishvin)\n- fix: separate type checking for CometExchange and CometColumnarExchange [#2241](https://github.com/apache/datafusion-comet/pull/2241) (mbutrovich)\n- fix: Fix potential resource leak in native shuffle block reader [#2247](https://github.com/apache/datafusion-comet/pull/2247) (andygrove)\n- fix: Remove unreachable code in `CometScanRule` [#2252](https://github.com/apache/datafusion-comet/pull/2252) (andygrove)\n- fix: Fall back to `native_comet` for encrypted Parquet scans [#2250](https://github.com/apache/datafusion-comet/pull/2250) (andygrove)\n- fix: Fall back to `native_comet` when object store not supported by `native_iceberg_compat` [#2251](https://github.com/apache/datafusion-comet/pull/2251) (andygrove)\n- fix: split expr.proto file (new) [#2267](https://github.com/apache/datafusion-comet/pull/2267) (kination)\n- fix: handle cast to dictionary vector introduced by case when [#2044](https://github.com/apache/datafusion-comet/pull/2044) (parthchandra)\n- fix: Remove check for custom S3 endpoints [#2288](https://github.com/apache/datafusion-comet/pull/2288) (andygrove)\n- fix: implement lazy evaluation in Coalesce function [#2270](https://github.com/apache/datafusion-comet/pull/2270) (coderfender)\n- fix: Update benchmarking scripts [#2293](https://github.com/apache/datafusion-comet/pull/2293) (andygrove)\n- fix: Fix regression in NativeConfigSuite [#2299](https://github.com/apache/datafusion-comet/pull/2299) (andygrove)\n- fix: Validating object store configs should not throw exception 
[#2308](https://github.com/apache/datafusion-comet/pull/2308) (andygrove)\n- fix: TakeOrderedAndProjectExec is not reporting all fallback reasons [#2323](https://github.com/apache/datafusion-comet/pull/2323) (kazuyukitanimura)\n- fix: Fallback length function with binary input [#2349](https://github.com/apache/datafusion-comet/pull/2349) (wForget)\n\n**Performance related:**\n\n- perf: Optimize `AvgDecimalGroupsAccumulator` [#1893](https://github.com/apache/datafusion-comet/pull/1893) (leung-ming)\n- perf: Optimize `SumDecimalGroupsAccumulator::update_single` [#2069](https://github.com/apache/datafusion-comet/pull/2069) (leung-ming)\n- perf: Avoid FFI copy in `ScanExec` when reading data from exchanges [#2268](https://github.com/apache/datafusion-comet/pull/2268) (andygrove)\n\n**Implemented enhancements:**\n\n- feat: Add from_unixtime support [#1943](https://github.com/apache/datafusion-comet/pull/1943) (kazuyukitanimura)\n- feat: randn expression support [#2010](https://github.com/apache/datafusion-comet/pull/2010) (akupchinskiy)\n- feat: monotonically_increasing_id and spark_partition_id implementation [#2037](https://github.com/apache/datafusion-comet/pull/2037) (akupchinskiy)\n- feat: support `map_entries` [#2059](https://github.com/apache/datafusion-comet/pull/2059) (comphead)\n- feat: Support Array Literal [#2057](https://github.com/apache/datafusion-comet/pull/2057) (comphead)\n- feat: Add new trait for operator serde [#2115](https://github.com/apache/datafusion-comet/pull/2115) (andygrove)\n- feat: limit with offset support [#2070](https://github.com/apache/datafusion-comet/pull/2070) (akupchinskiy)\n- feat: Include scan implementation name in CometScan nodeName [#2141](https://github.com/apache/datafusion-comet/pull/2141) (andygrove)\n- feat: Add config option to log fallback reasons [#2154](https://github.com/apache/datafusion-comet/pull/2154) (andygrove)\n- feat: [iceberg] Enable Comet shuffle in Iceberg diff [#2205](https://github.com/apache/datafusion-comet/pull/2205) (andygrove)\n- feat: Improve shuffle fallback reporting [#2194](https://github.com/apache/datafusion-comet/pull/2194) (andygrove)\n- feat: Reset data buf of NativeBatchDecoderIterator on close [#2235](https://github.com/apache/datafusion-comet/pull/2235) (wForget)\n- feat: Improve fallback mechanism for ANSI mode [#2211](https://github.com/apache/datafusion-comet/pull/2211) (andygrove)\n- feat: Support hdfs with OpenDAL [#2244](https://github.com/apache/datafusion-comet/pull/2244) (wForget)\n- feat: Ignore fallback info for command execs [#2297](https://github.com/apache/datafusion-comet/pull/2297) (wForget)\n- feat: Improve some confusing fallback reasons [#2301](https://github.com/apache/datafusion-comet/pull/2301) (wForget)\n- feat: Make supported hadoop filesystem schemes configurable [#2272](https://github.com/apache/datafusion-comet/pull/2272) (wForget)\n- feat: [1941-Part1]: Introduce map-sort scalar function [#2262](https://github.com/apache/datafusion-comet/pull/2262) (rishvin)\n- feat: [iceberg] delete rows support using selection vectors [#2346](https://github.com/apache/datafusion-comet/pull/2346) (parthchandra)\n\n**Documentation updates:**\n\n- docs: Update benchmark results for 0.9.0 [#1959](https://github.com/apache/datafusion-comet/pull/1959) (andygrove)\n- doc: Add comment about local clippy run before submitting a pull request [#1961](https://github.com/apache/datafusion-comet/pull/1961) (akupchinskiy)\n- docs: Minor improvements to Spark SQL test docs 
[#1980](https://github.com/apache/datafusion-comet/pull/1980) (andygrove)\n- docs: Update Maven links for 0.9.0 release [#1988](https://github.com/apache/datafusion-comet/pull/1988) (andygrove)\n- docs: Documentation updates for 0.9.0 release [#1981](https://github.com/apache/datafusion-comet/pull/1981) (andygrove)\n- docs: Add guide showing comparison between Comet and Gluten [#2012](https://github.com/apache/datafusion-comet/pull/2012) (andygrove)\n- docs: Remove legacy comment in docs [#2022](https://github.com/apache/datafusion-comet/pull/2022) (andygrove)\n- docs: Update Gluten comparison to clarify that Velox is open-source [#2043](https://github.com/apache/datafusion-comet/pull/2043) (andygrove)\n- docs: Improve Gluten comparison based on feedback from the community [#2048](https://github.com/apache/datafusion-comet/pull/2048) (andygrove)\n- docs: added a missing export into the plan stability section [#2071](https://github.com/apache/datafusion-comet/pull/2071) (akupchinskiy)\n- doc: Added documentation for supported map functions [#2074](https://github.com/apache/datafusion-comet/pull/2074) (codetyri0n)\n- doc: Alternative way to start Spark Master to run benchmarks [#2072](https://github.com/apache/datafusion-comet/pull/2072) (comphead)\n- docs: Update to support try arithmetic functions [#2143](https://github.com/apache/datafusion-comet/pull/2143) (coderfender)\n- doc: update macos standalone spark start instructions [#2103](https://github.com/apache/datafusion-comet/pull/2103) (comphead)\n- docs: Update confs to bypass Iceberg Spark issues [#2166](https://github.com/apache/datafusion-comet/pull/2166) (hsiang-c)\n- docs: Add Roadmap [#2191](https://github.com/apache/datafusion-comet/pull/2191) (andygrove)\n- docs: Update installation guide for 0.9.1 [#2230](https://github.com/apache/datafusion-comet/pull/2230) (andygrove)\n- docs: Publish version-specific user guides [#2269](https://github.com/apache/datafusion-comet/pull/2269) (andygrove)\n- docs: Fix issues with publishing user guide for older Comet versions [#2284](https://github.com/apache/datafusion-comet/pull/2284) (andygrove)\n- docs: Move user guide docs into /user-guide/latest [#2318](https://github.com/apache/datafusion-comet/pull/2318) (andygrove)\n- docs: Add manual redirects from old pages that no longer exist [#2317](https://github.com/apache/datafusion-comet/pull/2317) (andygrove)\n- docs: Fix broken links and other Sphinx warnings [#2320](https://github.com/apache/datafusion-comet/pull/2320) (andygrove)\n- docs: Use `sphinx-reredirects` for redirects [#2324](https://github.com/apache/datafusion-comet/pull/2324) (andygrove)\n- docs: Add note about Root CA Certificate location with native scans [#2325](https://github.com/apache/datafusion-comet/pull/2325) (andygrove)\n- docs: Stop hard-coding Comet version in docs [#2326](https://github.com/apache/datafusion-comet/pull/2326) (andygrove)\n- docs: Update supported expressions and operators in user guide [#2327](https://github.com/apache/datafusion-comet/pull/2327) (andygrove)\n- docs: Update Iceberg docs for 0.10.0 release [#2355](https://github.com/apache/datafusion-comet/pull/2355) (hsiang-c)\n\n**Other:**\n\n- chore: Start 0.10.0 development [#1958](https://github.com/apache/datafusion-comet/pull/1958) (andygrove)\n- build: Fix release dockerfile [#1960](https://github.com/apache/datafusion-comet/pull/1960) (andygrove)\n- test: Run Iceberg Spark tests only when PR title contains [iceberg] [#1976](https://github.com/apache/datafusion-comet/pull/1976) (hsiang-c)\n- 
chore: Reuse comet allocator [#1973](https://github.com/apache/datafusion-comet/pull/1973) (EmilyMatt)\n- chore: update `CopyExec` with `maintains_input_order`, `supports_limit_pushdown` and `cardinality_effect` [#1979](https://github.com/apache/datafusion-comet/pull/1979) (rluvaton)\n- chore: extract CreateArray from QueryPlanSerde [#1991](https://github.com/apache/datafusion-comet/pull/1991) (tglanz)\n- chore: use DF scalar functions for StartsWith, EndsWith, Contains, DF LikeExpr [#1887](https://github.com/apache/datafusion-comet/pull/1887) (mbutrovich)\n- refactor: standardize div_ceil [#1999](https://github.com/apache/datafusion-comet/pull/1999) (tglanz)\n- Feat: support map_from_arrays [#1932](https://github.com/apache/datafusion-comet/pull/1932) (kazantsev-maksim)\n- chore: Implement BloomFilterMightContain as a ScalarUDFImpl [#1954](https://github.com/apache/datafusion-comet/pull/1954) (tglanz)\n- chore: Drop support for RightSemi and RightAnti join types [#1935](https://github.com/apache/datafusion-comet/pull/1935) (dharanad)\n- minor: Refactor to reduce duplicate serde code [#2011](https://github.com/apache/datafusion-comet/pull/2011) (andygrove)\n- chore: Introduce ANSI support for remainder operation [#1971](https://github.com/apache/datafusion-comet/pull/1971) (rishvin)\n- chore: Improve process for generating dynamic content into documentation [#2017](https://github.com/apache/datafusion-comet/pull/2017) (andygrove)\n- minor: Refactor to move some shuffle-related logic from `QueryPlanSerde` to `CometExecRule` [#2015](https://github.com/apache/datafusion-comet/pull/2015) (andygrove)\n- chore: Add benchmarking scripts [#2025](https://github.com/apache/datafusion-comet/pull/2025) (andygrove)\n- chore: Add scripts for running benchmark based on TPC-DS [#2042](https://github.com/apache/datafusion-comet/pull/2042) (andygrove)\n- Chore: Improve array contains test coverage [#2030](https://github.com/apache/datafusion-comet/pull/2030) (kazantsev-maksim)\n- fix : cast_operands_to_decimal_type_to_fix_arithmetic_overflow [#1996](https://github.com/apache/datafusion-comet/pull/1996) (coderfender)\n- chore: Add scripts for running benchmarks with Blaze [#2050](https://github.com/apache/datafusion-comet/pull/2050) (andygrove)\n- chore: migrate to DF 49.0.0 [#2040](https://github.com/apache/datafusion-comet/pull/2040) (comphead)\n- chore: Refactor aggregate serde to be consistent with other expression serde [#2055](https://github.com/apache/datafusion-comet/pull/2055) (andygrove)\n- Chore: implement string_space as ScalarUDFImpl [#2041](https://github.com/apache/datafusion-comet/pull/2041) (kazantsev-maksim)\n- docs : Change notes for `IntegralDivide` [#2054](https://github.com/apache/datafusion-comet/pull/2054) (coderfender)\n- Chore: refactor Comparison out of QueryPlanSerde [#2028](https://github.com/apache/datafusion-comet/pull/2028) (CuteChuanChuan)\n- chore: Use Datafusion's Sha2 and remove Comet's implementation. 
[#2063](https://github.com/apache/datafusion-comet/pull/2063) (rishvin)\n- chore: Adding dependabot [#2076](https://github.com/apache/datafusion-comet/pull/2076) (comphead)\n- chore: Fix clippy issues for Rust 1.89.0 [#2082](https://github.com/apache/datafusion-comet/pull/2082) (andygrove)\n- chore: Refactor string expression serde, part 1 [#2068](https://github.com/apache/datafusion-comet/pull/2068) (andygrove)\n- chore: Use `chr` function from datafusion-spark [#2080](https://github.com/apache/datafusion-comet/pull/2080) (andygrove)\n- minor: CometBuffer code cleanup [#2090](https://github.com/apache/datafusion-comet/pull/2090) (andygrove)\n- chore: Refactor string expression serde, part 2 [#2097](https://github.com/apache/datafusion-comet/pull/2097) (andygrove)\n- chore: create copy of fs-hdfs [#2062](https://github.com/apache/datafusion-comet/pull/2062) (parthchandra)\n- Chore: refactor datetime related expressions out of QueryPlanSerde [#2085](https://github.com/apache/datafusion-comet/pull/2085) (CuteChuanChuan)\n- chore(deps): bump actions/checkout from 3 to 4 [#2104](https://github.com/apache/datafusion-comet/pull/2104) (dependabot[bot])\n- chore(deps): bump libc from 0.2.174 to 0.2.175 in /native [#2107](https://github.com/apache/datafusion-comet/pull/2107) (dependabot[bot])\n- chore(deps): bump assertables from 9.8.1 to 9.8.2 in /native [#2108](https://github.com/apache/datafusion-comet/pull/2108) (dependabot[bot])\n- chore: Update dependabot label [#2110](https://github.com/apache/datafusion-comet/pull/2110) (mbutrovich)\n- chore: Move `stringDecode()` to `CommonStringExprs` trait [#2111](https://github.com/apache/datafusion-comet/pull/2111) (peter-toth)\n- chore(deps): bump uuid from 0.8.2 to 1.17.0 in /native [#2106](https://github.com/apache/datafusion-comet/pull/2106) (dependabot[bot])\n- chore(deps): bump actions/download-artifact from 4 to 5 [#2109](https://github.com/apache/datafusion-comet/pull/2109) (dependabot[bot])\n- chore(deps): bump tokio from 1.47.0 to 1.47.1 in /native [#2112](https://github.com/apache/datafusion-comet/pull/2112) (dependabot[bot])\n- chore(deps): bump actions/setup-java from 3 to 4 [#2105](https://github.com/apache/datafusion-comet/pull/2105) (dependabot[bot])\n- chore(deps): bump the proto group in /native with 2 updates [#2113](https://github.com/apache/datafusion-comet/pull/2113) (dependabot[bot])\n- chore: Add type parameter to `CometExpressionSerde` [#2114](https://github.com/apache/datafusion-comet/pull/2114) (peter-toth)\n- chore(deps): bump cc from 1.2.30 to 1.2.32 in /native [#2123](https://github.com/apache/datafusion-comet/pull/2123) (dependabot[bot])\n- chore(deps): bump bindgen from 0.64.0 to 0.69.5 in /native [#2124](https://github.com/apache/datafusion-comet/pull/2124) (dependabot[bot])\n- chore(deps): bump aws-credential-types from 1.2.4 to 1.2.5 in /native [#2125](https://github.com/apache/datafusion-comet/pull/2125) (dependabot[bot])\n- chore(deps): bump actions/checkout from 4 to 5 [#2126](https://github.com/apache/datafusion-comet/pull/2126) (dependabot[bot])\n- chore: fix `QueryPlanSerde` merge error [#2127](https://github.com/apache/datafusion-comet/pull/2127) (comphead)\n- chore(deps): bump slab from 0.4.10 to 0.4.11 in /native [#2128](https://github.com/apache/datafusion-comet/pull/2128) (dependabot[bot])\n- fix : implement_try_eval_mode_arithmetic [#2073](https://github.com/apache/datafusion-comet/pull/2073) (coderfender)\n- chore: Simplify approach to avoiding memory corruption due to buffer reuse 
[#2156](https://github.com/apache/datafusion-comet/pull/2156) (andygrove)\n- chore: upgrade to DataFusion 49.0.1 [#2077](https://github.com/apache/datafusion-comet/pull/2077) (mbutrovich)\n- chore: CometExecRule code cleanup [#2159](https://github.com/apache/datafusion-comet/pull/2159) (andygrove)\n- chore: Update `CometTestBase` to stop setting the scan implementation to `native_comet` [#2176](https://github.com/apache/datafusion-comet/pull/2176) (andygrove)\n- trivial: remove unnecessary clone() [#2066](https://github.com/apache/datafusion-comet/pull/2066) (isimluk)\n- chore: Pass Spark configs to native `createPlan` [#2180](https://github.com/apache/datafusion-comet/pull/2180) (andygrove)\n- (feat) add support for ArrayMin scalar function [#1944](https://github.com/apache/datafusion-comet/pull/1944) (dharanad)\n- chore: Upgrade to 49.0.2 [#2223](https://github.com/apache/datafusion-comet/pull/2223) (comphead)\n- chore(deps): bump bindgen from 0.69.5 to 0.72.0 in /native [#2222](https://github.com/apache/datafusion-comet/pull/2222) (dependabot[bot])\n- chore: move Round serde into object [#2237](https://github.com/apache/datafusion-comet/pull/2237) (andygrove)\n- chore: Improve expression fallback reporting [#2240](https://github.com/apache/datafusion-comet/pull/2240) (andygrove)\n- chore: Update stability suite to use `auto` scan instead of `native_comet` [#2178](https://github.com/apache/datafusion-comet/pull/2178) (andygrove)\n- chore: Improve documentation for `CometBatchIterator` and fix a potential issue [#2168](https://github.com/apache/datafusion-comet/pull/2168) (andygrove)\n- chore: Fix `array_intersect` test [#2246](https://github.com/apache/datafusion-comet/pull/2246) (comphead)\n- chore(deps): bump actions/checkout from 4 to 5 [#2229](https://github.com/apache/datafusion-comet/pull/2229) (dependabot[bot])\n- chore(deps): bump actions/setup-java from 4 to 5 [#2225](https://github.com/apache/datafusion-comet/pull/2225) (dependabot[bot])\n- chore: Introduce `strict-warning` profile for Scala [#2254](https://github.com/apache/datafusion-comet/pull/2254) (comphead)\n- chore: fix struct to string test for `native_iceberg_compat` [#2253](https://github.com/apache/datafusion-comet/pull/2253) (comphead)\n- chore: Add type parameter to CometAggregateExpressionSerde [#2249](https://github.com/apache/datafusion-comet/pull/2249) (andygrove)\n- Feat: Impl array flatten func [#2039](https://github.com/apache/datafusion-comet/pull/2039) (kazantsev-maksim)\n- Chore: Refactor serde for math expressions [#2259](https://github.com/apache/datafusion-comet/pull/2259) (kazantsev-maksim)\n- chore: Refactor serde for more array and struct expressions [#2257](https://github.com/apache/datafusion-comet/pull/2257) (andygrove)\n- chore: Refactor remaining predicate expression serde [#2265](https://github.com/apache/datafusion-comet/pull/2265) (andygrove)\n- chore(deps): bump procfs from 0.17.0 to 0.18.0 in /native [#2278](https://github.com/apache/datafusion-comet/pull/2278) (dependabot[bot])\n- chore(deps): bump cc from 1.2.34 to 1.2.35 in /native [#2277](https://github.com/apache/datafusion-comet/pull/2277) (dependabot[bot])\n- chore(deps): bump bindgen from 0.72.0 to 0.72.1 in /native [#2274](https://github.com/apache/datafusion-comet/pull/2274) (dependabot[bot])\n- chore(deps): bump aws-credential-types from 1.2.5 to 1.2.6 in /native [#2275](https://github.com/apache/datafusion-comet/pull/2275) (dependabot[bot])\n- minor: Remove useless ENABLE_COMET_SHUFFLE env 
[#2280](https://github.com/apache/datafusion-comet/pull/2280) (wForget)\n- chore: Refactor serde for conditional expressions [#2266](https://github.com/apache/datafusion-comet/pull/2266) (andygrove)\n- chore(deps): bump mimalloc from 0.1.47 to 0.1.48 in /native [#2276](https://github.com/apache/datafusion-comet/pull/2276) (dependabot[bot])\n- chore: docker publish and docs build only for apache repo [#2289](https://github.com/apache/datafusion-comet/pull/2289) (wForget)\n- minor: Reduce misleading fallback warnings [#2283](https://github.com/apache/datafusion-comet/pull/2283) (andygrove)\n- chore: Refactor `Cast` serde to avoid code duplication [#2242](https://github.com/apache/datafusion-comet/pull/2242) (andygrove)\n- chore: Refactor `hex`/`unhex` SerDe to avoid code duplication [#2287](https://github.com/apache/datafusion-comet/pull/2287) (hsiang-c)\n- minor: Improve exception message for unimplemented CometVector methods [#2291](https://github.com/apache/datafusion-comet/pull/2291) (andygrove)\n- chore: Align sort constraints w/ `arrow-rs` [#2279](https://github.com/apache/datafusion-comet/pull/2279) (hsiang-c)\n- chore: Collect fallback reasons for spark sql tests [#2313](https://github.com/apache/datafusion-comet/pull/2313) (wForget)\n- chore: Refactor serde for named expressions `alias` and `attributeReference` [#2290](https://github.com/apache/datafusion-comet/pull/2290) (andygrove)\n- chore(deps): bump log4rs from 1.3.0 to 1.4.0 in /native [#2334](https://github.com/apache/datafusion-comet/pull/2334) (dependabot[bot])\n- chore(deps): bump twox-hash from 2.1.1 to 2.1.2 in /native [#2335](https://github.com/apache/datafusion-comet/pull/2335) (dependabot[bot])\n- chore(deps): bump actions/setup-python from 5 to 6 [#2331](https://github.com/apache/datafusion-comet/pull/2331) (dependabot[bot])\n- chore(deps): bump actions/download-artifact from 4 to 5 [#2332](https://github.com/apache/datafusion-comet/pull/2332) (dependabot[bot])\n- chore(deps): bump cc from 1.2.35 to 1.2.36 in /native [#2337](https://github.com/apache/datafusion-comet/pull/2337) (dependabot[bot])\n- chore(deps): bump log from 0.4.27 to 0.4.28 in /native [#2333](https://github.com/apache/datafusion-comet/pull/2333) (dependabot[bot])\n- build: Specify SPARK_LOCAL_HOSTNAME to fix CI failures [#2353](https://github.com/apache/datafusion-comet/pull/2353) (andygrove)\n- chore: [branch-0.10] Bump version to 0.10.0 [#2356](https://github.com/apache/datafusion-comet/pull/2356) (andygrove)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    75\tAndy Grove\n    27\tdependabot[bot]\n    11\tOleks V\n     9\tZhen Wang\n     7\thsiang-c\n     5\tArtem Kupchinskiy\n     5\tB Vadlamani\n     5\tKazantsev Maksim\n     5\tMatt Butrovich\n     5\tParth Chandra\n     4\tRishab Joshi\n     3\tPeter Toth\n     3\tTal Glanzman\n     2\tDharan Aditya\n     2\tHuaxin Gao\n     2\tKAZUYUKI TANIMURA\n     2\tLeung Ming\n     2\tRaz Luvaton\n     2\tYu-Chuan Hung\n     1\tAsura7969\n     1\tEmily Matheys\n     1\tK.I. (Dennis) Jung\n     1\tKristin Cowalcijk\n     1\tPeter Nguyen\n     1\tcodetyri0n\n     1\tŠimon Lukašík\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.11.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.11.0 Changelog\n\nThis release consists of 131 commits from 15 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: temporarily ignore test for hdfs file systems [#2359](https://github.com/apache/datafusion-comet/pull/2359) (parthchandra)\n- fix: Check reused broadcast plan in non-AQE and make setNumPartitions thread safe [#2398](https://github.com/apache/datafusion-comet/pull/2398) (wForget)\n- fix: correct `missingInput` for `CometHashAggregateExec` [#2409](https://github.com/apache/datafusion-comet/pull/2409) (comphead)\n- fix:clippy errros rust 1.9.0 update [#2419](https://github.com/apache/datafusion-comet/pull/2419) (coderfender)\n- fix: Avoid spark plan execution cache preventing CometBatchRDD numPartitions change [#2420](https://github.com/apache/datafusion-comet/pull/2420) (wForget)\n- fix: regressions in `CometToPrettyStringSuite` [#2384](https://github.com/apache/datafusion-comet/pull/2384) (hsiang-c)\n- fix: Byte array Literals failed on cast [#2432](https://github.com/apache/datafusion-comet/pull/2432) (comphead)\n- fix: Do not push down subquery filters on native_datafusion scan [#2438](https://github.com/apache/datafusion-comet/pull/2438) (wForget)\n- fix: Improve error handling when resolving S3 bucket region [#2440](https://github.com/apache/datafusion-comet/pull/2440) (andygrove)\n- fix: [iceberg] additional parquet independent api for iceberg integration [#2442](https://github.com/apache/datafusion-comet/pull/2442) (parthchandra)\n- fix: Specify reqwest crate features [#2446](https://github.com/apache/datafusion-comet/pull/2446) (andygrove)\n- fix: distributed RangePartitioning bounds calculation with native shuffle [#2258](https://github.com/apache/datafusion-comet/pull/2258) (mbutrovich)\n- fix: fix regression in tpcbench.py [#2512](https://github.com/apache/datafusion-comet/pull/2512) (andygrove)\n- fix: [iceberg] Close reader instance in ReadConf [#2510](https://github.com/apache/datafusion-comet/pull/2510) (hsiang-c)\n- fix: Enable plan stability tests for `auto` scan [#2516](https://github.com/apache/datafusion-comet/pull/2516) (andygrove)\n- fix: Capture unexpected output when retrieving JVM 17 args in Makefile [#2566](https://github.com/apache/datafusion-comet/pull/2566) (zuston)\n\n**Performance related:**\n\n- perf: New Configuration from shared conf to avoid high costs [#2402](https://github.com/apache/datafusion-comet/pull/2402) (wForget)\n- perf: Use DataFusion's `count_udaf` instead of `SUM(IF(expr IS NOT NULL, 1, 0))` [#2407](https://github.com/apache/datafusion-comet/pull/2407) (andygrove)\n- perf: Improve BroadcastExchangeExec conversion 
[#2417](https://github.com/apache/datafusion-comet/pull/2417) (wForget)\n\n**Implemented enhancements:**\n\n- feat: Add dynamic `enabled` and `allowIncompat` configs for all supported expressions [#2329](https://github.com/apache/datafusion-comet/pull/2329) (andygrove)\n- feat: feature specific tests [#2372](https://github.com/apache/datafusion-comet/pull/2372) (parthchandra)\n- feat: Support more date part expressions [#2316](https://github.com/apache/datafusion-comet/pull/2316) (wForget)\n- feat: rpad support column for second arg instead of just literal [#2099](https://github.com/apache/datafusion-comet/pull/2099) (coderfender)\n- feat: Support comet native log level conf [#2379](https://github.com/apache/datafusion-comet/pull/2379) (wForget)\n- feat: Enable WeekDay function [#2411](https://github.com/apache/datafusion-comet/pull/2411) (wForget)\n- feat: Add nested Array literal support [#2181](https://github.com/apache/datafusion-comet/pull/2181) (comphead)\n- feat:add_additional_char_support_rpad [#2436](https://github.com/apache/datafusion-comet/pull/2436) (coderfender)\n- feat: do not fallback to Spark for `COUNT(distinct)` [#2429](https://github.com/apache/datafusion-comet/pull/2429) (comphead)\n- feat: implement_ansi_eval_mode_arithmetic [#2136](https://github.com/apache/datafusion-comet/pull/2136) (coderfender)\n- feat: Add plan conversion statistics to extended explain info [#2412](https://github.com/apache/datafusion-comet/pull/2412) (andygrove)\n- feat: implement_comet_native_lpad_expr [#2102](https://github.com/apache/datafusion-comet/pull/2102) (coderfender)\n- feat: Add `backtrace` feature to simplify enabling native backtraces in `CometNativeException` [#2515](https://github.com/apache/datafusion-comet/pull/2515) (andygrove)\n- feat: Support reverse function with ArrayType input [#2481](https://github.com/apache/datafusion-comet/pull/2481) (cfmcgrady)\n- feat: Change default off-heap memory pool from `greedy_unified` to `fair_unified` [#2526](https://github.com/apache/datafusion-comet/pull/2526) (andygrove)\n- feat: Make DiskManager `max_temp_directory_size` configurable [#2479](https://github.com/apache/datafusion-comet/pull/2479) (manuzhang)\n- feat: Parquet Modular Encryption with Spark KMS for native readers [#2447](https://github.com/apache/datafusion-comet/pull/2447) (mbutrovich)\n- feat: Add support for Spark-compatible cast from integral to decimal [#2472](https://github.com/apache/datafusion-comet/pull/2472) (coderfender)\n- feat:Support ANSI mode integral divide [#2421](https://github.com/apache/datafusion-comet/pull/2421) (coderfender)\n- feat: Add config to enable running Comet in onheap mode [#2554](https://github.com/apache/datafusion-comet/pull/2554) (andygrove)\n- feat:support ansi mode rounding function [#2542](https://github.com/apache/datafusion-comet/pull/2542) (coderfender)\n- feat:support ansi mode remainder function [#2556](https://github.com/apache/datafusion-comet/pull/2556) (coderfender)\n- feat: Implement array-to-string cast support [#2425](https://github.com/apache/datafusion-comet/pull/2425) (cfmcgrady)\n- feat: Various improvements to memory pool configuration, logging, and documentation [#2538](https://github.com/apache/datafusion-comet/pull/2538) (andygrove)\n- feat: Enable complex types for columnar shuffle [#2573](https://github.com/apache/datafusion-comet/pull/2573) (mbutrovich)\n- feat: support_decimal_types_bool_cast_native_impl [#2490](https://github.com/apache/datafusion-comet/pull/2490) (coderfender)\n- feat: Use buf write to reduce 
system call on index write [#2579](https://github.com/apache/datafusion-comet/pull/2579) (zuston)\n\n**Documentation updates:**\n\n- doc: Document usage IcebergCometBatchReader.java [#2347](https://github.com/apache/datafusion-comet/pull/2347) (comphead)\n- docs: Add changelog for 0.10.0 release [#2361](https://github.com/apache/datafusion-comet/pull/2361) (andygrove)\n- docs: Fix error in docs [#2373](https://github.com/apache/datafusion-comet/pull/2373) (andygrove)\n- docs: Fix more comet versions in docs [#2374](https://github.com/apache/datafusion-comet/pull/2374) (andygrove)\n- docs: Publish 0.10.0 user guide [#2394](https://github.com/apache/datafusion-comet/pull/2394) (andygrove)\n- doc: macos benches doc clarifications [#2418](https://github.com/apache/datafusion-comet/pull/2418) (comphead)\n- docs: update configs.md after #2422 [#2428](https://github.com/apache/datafusion-comet/pull/2428) (mbutrovich)\n- docs: update docs and tuning guide related to native shuffle [#2487](https://github.com/apache/datafusion-comet/pull/2487) (mbutrovich)\n- docs: Improve EC2 benchmarking guide [#2474](https://github.com/apache/datafusion-comet/pull/2474) (andygrove)\n- docs: docs_update_ansi_support [#2496](https://github.com/apache/datafusion-comet/pull/2496) (coderfender)\n- docs:support lpad expression documentation update [#2517](https://github.com/apache/datafusion-comet/pull/2517) (coderfender)\n- docs: doc changes to support ANSI mode integral divide [#2570](https://github.com/apache/datafusion-comet/pull/2570) (coderfender)\n- docs: Split configuration guide into different sections (scan, exec, shuffle, etc) [#2568](https://github.com/apache/datafusion-comet/pull/2568) (andygrove)\n- docs: doc update to support ANSI mode remainder function [#2576](https://github.com/apache/datafusion-comet/pull/2576) (coderfender)\n- docs: Documentation updates [#2581](https://github.com/apache/datafusion-comet/pull/2581) (andygrove)\n\n**Other:**\n\n- chore(deps): bump uuid from 1.18.0 to 1.18.1 in /native [#2336](https://github.com/apache/datafusion-comet/pull/2336) (dependabot[bot])\n- build: Check that all Scala test suites run in PR builds [#2304](https://github.com/apache/datafusion-comet/pull/2304) (andygrove)\n- chore: Start 0.11.0 development [#2365](https://github.com/apache/datafusion-comet/pull/2365) (andygrove)\n- chore: Split expression serde hash map into separate categories [#2322](https://github.com/apache/datafusion-comet/pull/2322) (andygrove)\n- chore: exclude Iceberg diffs from rat checks [#2376](https://github.com/apache/datafusion-comet/pull/2376) (hsiang-c)\n- chore: Refactor UnaryMinus serde [#2378](https://github.com/apache/datafusion-comet/pull/2378) (andygrove)\n- chore: Revert \"chore: [1941-Part1]: Introduce `map_sort` scalar function (#2… [#2381](https://github.com/apache/datafusion-comet/pull/2381) (comphead)\n- chore: Refactor Literal serde [#2377](https://github.com/apache/datafusion-comet/pull/2377) (andygrove)\n- chore: Output `BaseAggregateExec` accurate unsupported names [#2383](https://github.com/apache/datafusion-comet/pull/2383) (comphead)\n- chore: Improve Initcap test and docs [#2387](https://github.com/apache/datafusion-comet/pull/2387) (andygrove)\n- build: fix build of 'hdfs-opendal' feature for MacOS [#2392](https://github.com/apache/datafusion-comet/pull/2392) (parthchandra)\n- chore(deps): bump cc from 1.2.36 to 1.2.37 in /native [#2399](https://github.com/apache/datafusion-comet/pull/2399) (dependabot[bot])\n- chore: [iceberg] support Iceberg 1.9.1 
[#2386](https://github.com/apache/datafusion-comet/pull/2386) (hsiang-c)\n- minor: Add deprecation notice to `datafusion-comet-spark-expr` crate [#2405](https://github.com/apache/datafusion-comet/pull/2405) (andygrove)\n- minor: Update benchmarking scripts to specify scan implementation [#2403](https://github.com/apache/datafusion-comet/pull/2403) (andygrove)\n- refactor: Scala hygiene - remove `scala.collection.JavaConverters` [#2393](https://github.com/apache/datafusion-comet/pull/2393) (hsiang-c)\n- chore: Improve test coverage for `count` aggregates [#2406](https://github.com/apache/datafusion-comet/pull/2406) (andygrove)\n- chore: upgrade to DataFusion 50.0.0, Arrow 56.1.0, Parquet 56.0.0 among others [#2286](https://github.com/apache/datafusion-comet/pull/2286) (mbutrovich)\n- chore: Support Spark 4.0.1 instead of 4.0.0 [#2414](https://github.com/apache/datafusion-comet/pull/2414) (andygrove)\n- chore: Respect native features env for cargo commands [#2296](https://github.com/apache/datafusion-comet/pull/2296) (wForget)\n- minor: Update TPC-DS microbenchmarks to remove \"scan only\" and \"exec only\" runs [#2396](https://github.com/apache/datafusion-comet/pull/2396) (andygrove)\n- minor: Add RDDScan to default value of sparkToColumnar.supportedOperatorList [#2422](https://github.com/apache/datafusion-comet/pull/2422) (wForget)\n- chore: new TPC-DS golden plans [#2426](https://github.com/apache/datafusion-comet/pull/2426) (mbutrovich)\n- chore: fix `pr_build*.yml` [#2434](https://github.com/apache/datafusion-comet/pull/2434) (comphead)\n- chore: Remove unused class [#2437](https://github.com/apache/datafusion-comet/pull/2437) (wForget)\n- chore(deps): bump cc from 1.2.37 to 1.2.38 in /native [#2439](https://github.com/apache/datafusion-comet/pull/2439) (dependabot[bot])\n- chore: add validate_workflows.yml [#2441](https://github.com/apache/datafusion-comet/pull/2441) (comphead)\n- test: potential native broadcast failure in scenarios with ReusedExchange [#2167](https://github.com/apache/datafusion-comet/pull/2167) (akupchinskiy)\n- chore: Improvements of fallback info [#2450](https://github.com/apache/datafusion-comet/pull/2450) (wForget)\n- chore: Upgrade Apache Release Audit Tool (RAT) to 0.16.1 [#2451](https://github.com/apache/datafusion-comet/pull/2451) (andygrove)\n- minor: Remove reference to SortExec deadlock issue that is now resolved [#2464](https://github.com/apache/datafusion-comet/pull/2464) (andygrove)\n- chore: Use checked operations when growing or shrinking unified memory pool [#2455](https://github.com/apache/datafusion-comet/pull/2455) (andygrove)\n- minor: Improve the log message of `CometTestBase#checkCometOperators` [#2458](https://github.com/apache/datafusion-comet/pull/2458) (cfmcgrady)\n- minor: Skip calculating per-task memory limit when in off-heap mode [#2462](https://github.com/apache/datafusion-comet/pull/2462) (andygrove)\n- Chore: Used DataFusion impl of bit_get function [#2466](https://github.com/apache/datafusion-comet/pull/2466) (kazantsev-maksim)\n- chore(deps): bump regex from 1.11.2 to 1.11.3 in /native [#2483](https://github.com/apache/datafusion-comet/pull/2483) (dependabot[bot])\n- chore: update TPC-DS plans after #2429 [#2486](https://github.com/apache/datafusion-comet/pull/2486) (mbutrovich)\n- chore(deps): bump thiserror from 2.0.16 to 2.0.17 in /native [#2485](https://github.com/apache/datafusion-comet/pull/2485) (dependabot[bot])\n- chore(deps): bump cc from 1.2.38 to 1.2.39 in /native 
[#2484](https://github.com/apache/datafusion-comet/pull/2484) (dependabot[bot])\n- chore: Support running specific benchmark query [#2491](https://github.com/apache/datafusion-comet/pull/2491) (comphead)\n- chore: Make CometColumnarToRowExec extend CometPlan [#2460](https://github.com/apache/datafusion-comet/pull/2460) (wForget)\n- chore: Update artifacts to 0.10.0 [#2500](https://github.com/apache/datafusion-comet/pull/2500) (comphead)\n- build: Stop caching libcomet in CI [#2498](https://github.com/apache/datafusion-comet/pull/2498) (andygrove)\n- chore: Upgrade Maven plugins [#2494](https://github.com/apache/datafusion-comet/pull/2494) (andygrove)\n- Chore: Used DataFusion impl of date_add and date_sub functions [#2473](https://github.com/apache/datafusion-comet/pull/2473) (kazantsev-maksim)\n- minor: include taskAttemptId in log messages [#2467](https://github.com/apache/datafusion-comet/pull/2467) (andygrove)\n- chore: Improve test assertions in plan stability suite [#2505](https://github.com/apache/datafusion-comet/pull/2505) (andygrove)\n- build: Add Spark 4.0 to release build script [#2514](https://github.com/apache/datafusion-comet/pull/2514) (parthchandra)\n- chore: Enable plan stability tests for `native_iceberg_compat` [#2519](https://github.com/apache/datafusion-comet/pull/2519) (andygrove)\n- chore(deps): bump parking_lot from 0.12.4 to 0.12.5 in /native [#2530](https://github.com/apache/datafusion-comet/pull/2530) (dependabot[bot])\n- chore(deps): bump cc from 1.2.39 to 1.2.40 in /native [#2529](https://github.com/apache/datafusion-comet/pull/2529) (dependabot[bot])\n- chore: Refactor serde for `ArrayCompact` and `ArrayFilter` [#2536](https://github.com/apache/datafusion-comet/pull/2536) (andygrove)\n- Chore: Fix Scala code warnings - common module [#2527](https://github.com/apache/datafusion-comet/pull/2527) (andy-hf-kwok)\n- chore: Refactor serde for `CheckOverflow` [#2537](https://github.com/apache/datafusion-comet/pull/2537) (andygrove)\n- build: Run scala tests against release build of native code [#2541](https://github.com/apache/datafusion-comet/pull/2541) (andygrove)\n- chore: Pass Comet configs to native `createPlan` [#2543](https://github.com/apache/datafusion-comet/pull/2543) (andygrove)\n- chore: Refactor serde for Length [#2547](https://github.com/apache/datafusion-comet/pull/2547) (andygrove)\n- chore: Include spark shim sources for spotless plugin and reformat [#2557](https://github.com/apache/datafusion-comet/pull/2557) (wForget)\n- chore(deps): bump opendal from 0.54.0 to 0.54.1 in /native [#2559](https://github.com/apache/datafusion-comet/pull/2559) (dependabot[bot])\n- chore: Finish moving Cast serde out of QueryPlanSerde [#2550](https://github.com/apache/datafusion-comet/pull/2550) (andygrove)\n- chore: Use cargo-nextest in CI [#2546](https://github.com/apache/datafusion-comet/pull/2546) (andygrove)\n- chore: Delete unused code [#2565](https://github.com/apache/datafusion-comet/pull/2565) (zuston)\n- chore: Improve plan comet transformation log [#2564](https://github.com/apache/datafusion-comet/pull/2564) (wForget)\n- chore(deps): bump cc from 1.2.40 to 1.2.41 in /native [#2560](https://github.com/apache/datafusion-comet/pull/2560) (dependabot[bot])\n- chore(deps): bump aws-credential-types from 1.2.6 to 1.2.7 in /native [#2563](https://github.com/apache/datafusion-comet/pull/2563) (dependabot[bot])\n- chore: Refactor serde for RegExpReplace [#2548](https://github.com/apache/datafusion-comet/pull/2548) (andygrove)\n- chore: use polymorphic map builders 
in shuffle. [#2571](https://github.com/apache/datafusion-comet/pull/2571) (ashdnazg)\n- chore: Move ToPrettyString serde into shim layer [#2549](https://github.com/apache/datafusion-comet/pull/2549) (andygrove)\n- chore(deps): bump DataFusion dependencies to 50.2.0, refresh Cargo.lock [#2575](https://github.com/apache/datafusion-comet/pull/2575) (mbutrovich)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    47\tAndy Grove\n    15\tZhen Wang\n    14\tB Vadlamani\n    12\tOleks V\n    11\tdependabot[bot]\n    10\tMatt Butrovich\n     5\tParth Chandra\n     5\thsiang-c\n     3\tFu Chen\n     3\tJunfan Zhang\n     2\tKazantsev Maksim\n     1\tArtem Kupchinskiy\n     1\tEshed Schacham\n     1\tManu Zhang\n     1\tandy-hf-kwok\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.12.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.12.0 Changelog\n\nThis release consists of 105 commits from 13 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: Fix `None.get` in `stringDecode` when `bin` child cannot be converted [#2606](https://github.com/apache/datafusion-comet/pull/2606) (cfmcgrady)\n- fix: Update FuzzDataGenerator to produce dictionary-encoded string arrays & fix bugs that this exposes [#2635](https://github.com/apache/datafusion-comet/pull/2635) (andygrove)\n- fix: Fallback to Spark for lpad/rpad for unsupported arguments & fix negative length handling [#2630](https://github.com/apache/datafusion-comet/pull/2630) (andygrove)\n- fix: Mark SortOrder with floating-point as incompatible [#2650](https://github.com/apache/datafusion-comet/pull/2650) (andygrove)\n- fix: Fall back to Spark for `trunc` / `date_trunc` functions when format string is unsupported, or is not a literal value [#2634](https://github.com/apache/datafusion-comet/pull/2634) (andygrove)\n- fix: [native_datafusion] only pass single partition of PartitionedFiles into DataSourceExec [#2675](https://github.com/apache/datafusion-comet/pull/2675) (mbutrovich)\n- fix: Fix subcommands options in fuzz-testing [#2684](https://github.com/apache/datafusion-comet/pull/2684) (manuzhang)\n- fix: Do not replace SMJ with HJ for `LeftSemi` [#2687](https://github.com/apache/datafusion-comet/pull/2687) (comphead)\n- fix: Apply spotless on Iceberg 1.8.1 diff [iceberg] [#2700](https://github.com/apache/datafusion-comet/pull/2700) (hsiang-c)\n- fix: Fix generate-user-guide-reference-docs failure when mvn command is not executed at root [#2691](https://github.com/apache/datafusion-comet/pull/2691) (manuzhang)\n- fix: Fix missing SortOrder fallback reason in range partitioning [#2716](https://github.com/apache/datafusion-comet/pull/2716) (andygrove)\n- fix: CometLiteral class cast exception with arrays [#2718](https://github.com/apache/datafusion-comet/pull/2718) (andygrove)\n- fix: NormalizeNaNAndZero::children() returns child's child [#2732](https://github.com/apache/datafusion-comet/pull/2732) (mbutrovich)\n- fix: checkSparkMaybeThrows should compare Spark and Comet results in success case [#2728](https://github.com/apache/datafusion-comet/pull/2728) (andygrove)\n- fix: Mark `WindowsExec` as incompatible [#2748](https://github.com/apache/datafusion-comet/pull/2748) (andygrove)\n- fix: Add strict floating point mode and fallback to Spark for min/max/sort on floating point inputs when enabled [#2747](https://github.com/apache/datafusion-comet/pull/2747) (andygrove)\n- fix: Implement producedAttributes for CometWindowExec [#2789](https://github.com/apache/datafusion-comet/pull/2789) (rahulbabarwal89)\n- fix: 
Pass all Comet configs to native plan [#2801](https://github.com/apache/datafusion-comet/pull/2801) (andygrove)\n\n**Implemented enhancements:**\n\n- feat: Add option to write benchmark results to file [#2640](https://github.com/apache/datafusion-comet/pull/2640) (andygrove)\n- feat: Implement metrics for iceberg compat [#2615](https://github.com/apache/datafusion-comet/pull/2615) (EmilyMatt)\n- feat: Define function signatures in CometFuzz [#2614](https://github.com/apache/datafusion-comet/pull/2614) (andygrove)\n- feat: cherry-pick UUID conversion logic from #2528 [#2648](https://github.com/apache/datafusion-comet/pull/2648) (mbutrovich)\n- feat: support `concat` for strings [#2604](https://github.com/apache/datafusion-comet/pull/2604) (comphead)\n- feat: Add support for `abs` [#2689](https://github.com/apache/datafusion-comet/pull/2689) (andygrove)\n- feat: Support variadic function in CometFuzz [#2682](https://github.com/apache/datafusion-comet/pull/2682) (manuzhang)\n- feat: CometExecRule refactor: Unify CometNativeExec creation with Serde in CometOperatorSerde trait [#2768](https://github.com/apache/datafusion-comet/pull/2768) (andygrove)\n- feat: support cot [#2755](https://github.com/apache/datafusion-comet/pull/2755) (psvri)\n- feat: Add bash script to build and run fuzz testing [#2686](https://github.com/apache/datafusion-comet/pull/2686) (manuzhang)\n- feat: Add getSupportLevel to CometAggregateExpressionSerde trait [#2777](https://github.com/apache/datafusion-comet/pull/2777) (andygrove)\n- feat: Add CI check to ensure generated docs are in sync with code [#2779](https://github.com/apache/datafusion-comet/pull/2779) (andygrove)\n- feat: Add prettier enforcement [#2783](https://github.com/apache/datafusion-comet/pull/2783) (andygrove)\n- feat: hyperbolic trig functions [#2784](https://github.com/apache/datafusion-comet/pull/2784) (psvri)\n- feat: [iceberg] Native scan by serializing FileScanTasks to iceberg-rust [#2528](https://github.com/apache/datafusion-comet/pull/2528) (mbutrovich)\n\n**Documentation updates:**\n\n- docs: Add changelog for 0.11.0 release [#2585](https://github.com/apache/datafusion-comet/pull/2585) (mbutrovich)\n- docs: Improve documentation layout [#2587](https://github.com/apache/datafusion-comet/pull/2587) (andygrove)\n- docs: Publish 0.11.0 user guide [#2589](https://github.com/apache/datafusion-comet/pull/2589) (andygrove)\n- docs: Put Comet logo in top nav bar, respect light/dark mode [#2591](https://github.com/apache/datafusion-comet/pull/2591) (andygrove)\n- docs: Improve main landing page [#2593](https://github.com/apache/datafusion-comet/pull/2593) (andygrove)\n- docs: Improve site navigation [#2597](https://github.com/apache/datafusion-comet/pull/2597) (andygrove)\n- docs: Update benchmark results [#2596](https://github.com/apache/datafusion-comet/pull/2596) (andygrove)\n- docs: Upgrade pydata-sphinx-theme to 0.16.1 [#2602](https://github.com/apache/datafusion-comet/pull/2602) (andygrove)\n- docs: Fix redirect [#2603](https://github.com/apache/datafusion-comet/pull/2603) (andygrove)\n- docs: Fix broken image link [#2613](https://github.com/apache/datafusion-comet/pull/2613) (andygrove)\n- docs: Add FFI docs to contributor guide [#2668](https://github.com/apache/datafusion-comet/pull/2668) (andygrove)\n- docs: Various documentation updates [#2674](https://github.com/apache/datafusion-comet/pull/2674) (andygrove)\n- docs: Add supported SortOrder expressions and fix a typo [#2694](https://github.com/apache/datafusion-comet/pull/2694) (andygrove)\n- 
docs: Minor docs update for running Spark SQL tests [#2712](https://github.com/apache/datafusion-comet/pull/2712) (andygrove)\n- docs: Update contributor guide for adding a new expression [#2704](https://github.com/apache/datafusion-comet/pull/2704) (andygrove)\n- docs: Documentation updates for `LocalTableScan` and `WindowExec` [#2742](https://github.com/apache/datafusion-comet/pull/2742) (andygrove)\n- docs: Typo fix [#2752](https://github.com/apache/datafusion-comet/pull/2752) (wForget)\n- docs: Categorize some configs as `testing` and add notes about known time zone issues [#2740](https://github.com/apache/datafusion-comet/pull/2740) (andygrove)\n- docs: Run prettier on all markdown files [#2782](https://github.com/apache/datafusion-comet/pull/2782) (andygrove)\n- docs: Ignore prettier formatting for generated tables [#2790](https://github.com/apache/datafusion-comet/pull/2790) (andygrove)\n- docs: Add new section to contributor guide, explaining how to add a new operator [#2758](https://github.com/apache/datafusion-comet/pull/2758) (andygrove)\n\n**Other:**\n\n- chore: Start 0.12.0 development [#2584](https://github.com/apache/datafusion-comet/pull/2584) (mbutrovich)\n- chore: Bump Spark from 3.5.6 to 3.5.7 [#2574](https://github.com/apache/datafusion-comet/pull/2574) (cfmcgrady)\n- chore(deps): bump parquet from 56.0.0 to 56.2.0 in /native [#2608](https://github.com/apache/datafusion-comet/pull/2608) (dependabot[bot])\n- chore(deps): bump tikv-jemallocator from 0.6.0 to 0.6.1 in /native [#2609](https://github.com/apache/datafusion-comet/pull/2609) (dependabot[bot])\n- chore(deps): bump tikv-jemalloc-ctl from 0.6.0 to 0.6.1 in /native [#2610](https://github.com/apache/datafusion-comet/pull/2610) (dependabot[bot])\n- tests: FuzzDataGenerator instead of Parquet-specific generator [#2616](https://github.com/apache/datafusion-comet/pull/2616) (mbutrovich)\n- chore: Simplify on-heap memory configuration [#2599](https://github.com/apache/datafusion-comet/pull/2599) (andygrove)\n- Feat: Add sha1 function impl [#2471](https://github.com/apache/datafusion-comet/pull/2471) (kazantsev-maksim)\n- chore: Refactor Parquet/DataFrame fuzz data generators [#2629](https://github.com/apache/datafusion-comet/pull/2629) (andygrove)\n- chore: Remove needless from_raw calls [#2638](https://github.com/apache/datafusion-comet/pull/2638) (EmilyMatt)\n- chore: support DataFusion 50.3.0 [#2605](https://github.com/apache/datafusion-comet/pull/2605) (comphead)\n- chore(deps): bump actions/upload-artifact from 4 to 5 [#2654](https://github.com/apache/datafusion-comet/pull/2654) (dependabot[bot])\n- chore(deps): bump cc from 1.2.42 to 1.2.43 in /native [#2653](https://github.com/apache/datafusion-comet/pull/2653) (dependabot[bot])\n- chore(deps): bump actions/download-artifact from 5 to 6 [#2652](https://github.com/apache/datafusion-comet/pull/2652) (dependabot[bot])\n- chore: extract comparison into separate tool [#2632](https://github.com/apache/datafusion-comet/pull/2632) (comphead)\n- chore: Various improvements to `checkSparkAnswer*` methods in `CometTestBase` [#2656](https://github.com/apache/datafusion-comet/pull/2656) (andygrove)\n- chore: Remove code for unpacking dictionaries prior to FilterExec [#2659](https://github.com/apache/datafusion-comet/pull/2659) (andygrove)\n- chore: display schema for datasets being compared [#2665](https://github.com/apache/datafusion-comet/pull/2665) (comphead)\n- chore: Remove `CopyExec` [#2663](https://github.com/apache/datafusion-comet/pull/2663) (andygrove)\n- chore: Add 
extended explain plans to stability suite [#2669](https://github.com/apache/datafusion-comet/pull/2669) (andygrove)\n- chore(deps): bump aws-config from 1.8.8 to 1.8.10 in /native [#2677](https://github.com/apache/datafusion-comet/pull/2677) (dependabot[bot])\n- chore(deps): bump cc from 1.2.43 to 1.2.44 in /native [#2678](https://github.com/apache/datafusion-comet/pull/2678) (dependabot[bot])\n- chore: `tpcbench` output `explain` just once and formatted [#2679](https://github.com/apache/datafusion-comet/pull/2679) (comphead)\n- chore: Add tolerance for `ComparisonTool` [#2699](https://github.com/apache/datafusion-comet/pull/2699) (comphead)\n- chore: Expand test coverage for `CometWindowsExec` [#2711](https://github.com/apache/datafusion-comet/pull/2711) (comphead)\n- chore: generate Float/Double NaN [#2695](https://github.com/apache/datafusion-comet/pull/2695) (hsiang-c)\n- minor: Combine two CI workflows for Spark SQL tests [#2727](https://github.com/apache/datafusion-comet/pull/2727) (andygrove)\n- chore: Improve framework for specifying that configs can be set with env vars [#2722](https://github.com/apache/datafusion-comet/pull/2722) (andygrove)\n- chore: Rename `COMET_EXPLAIN_VERBOSE_ENABLED` to `COMET_EXTENDED_EXPLAIN_FORMAT` and change default [#2644](https://github.com/apache/datafusion-comet/pull/2644) (andygrove)\n- chore: Fallback to Spark for windows functions [#2726](https://github.com/apache/datafusion-comet/pull/2726) (comphead)\n- chore: Refactor operator serde - part 1 [#2738](https://github.com/apache/datafusion-comet/pull/2738) (andygrove)\n- Feat: Add CometLocalTableScanExec operator [#2735](https://github.com/apache/datafusion-comet/pull/2735) (kazantsev-maksim)\n- chore(deps): bump cc from 1.2.44 to 1.2.45 in /native [#2750](https://github.com/apache/datafusion-comet/pull/2750) (dependabot[bot])\n- chore(deps): bump aws-credential-types from 1.2.8 to 1.2.9 in /native [#2751](https://github.com/apache/datafusion-comet/pull/2751) (dependabot[bot])\n- chore: Operator serde refactor part 2 [#2741](https://github.com/apache/datafusion-comet/pull/2741) (andygrove)\n- chore: Fallback to Spark for `array_reverse` for `array<binary>` [#2759](https://github.com/apache/datafusion-comet/pull/2759) (comphead)\n- chore: [iceberg] test iceberg 1.10.0 [#2709](https://github.com/apache/datafusion-comet/pull/2709) (manuzhang)\n- chore: Add `docs/comet-*` to rat exclude list [#2762](https://github.com/apache/datafusion-comet/pull/2762) (manuzhang)\n- Chore: Refactor static invoke exprs [#2671](https://github.com/apache/datafusion-comet/pull/2671) (kazantsev-maksim)\n- minor: Small refactor for consistent serde for hash aggregate [#2764](https://github.com/apache/datafusion-comet/pull/2764) (andygrove)\n- minor: Move `operator2Proto` to `CometExecRule` [#2767](https://github.com/apache/datafusion-comet/pull/2767) (andygrove)\n- chore: various refactoring changes for iceberg [iceberg] [#2680](https://github.com/apache/datafusion-comet/pull/2680) (parthchandra)\n- chore: Refactor CometExecRule handling of sink operators [#2771](https://github.com/apache/datafusion-comet/pull/2771) (andygrove)\n- minor: Refactor to move window-specific code from `QueryPlanSerde` to `CometWindowExec` [#2780](https://github.com/apache/datafusion-comet/pull/2780) (andygrove)\n- chore: Remove many references to `COMET_EXPR_ALLOW_INCOMPATIBLE` [#2775](https://github.com/apache/datafusion-comet/pull/2775) (andygrove)\n- chore: Remove COMET_EXPR_ALLOW_INCOMPATIBLE config 
[#2786](https://github.com/apache/datafusion-comet/pull/2786) (andygrove)\n- chore: check `missingInput` for Comet plan nodes [#2795](https://github.com/apache/datafusion-comet/pull/2795) (comphead)\n- chore: Finish refactoring expression serde out of `QueryPlanSerde` [#2791](https://github.com/apache/datafusion-comet/pull/2791) (andygrove)\n- chore: Update docs to fix CI after #2784 [#2799](https://github.com/apache/datafusion-comet/pull/2799) (mbutrovich)\n- chore: Update q79 golden plan for Spark 4.0 after #2795 [#2800](https://github.com/apache/datafusion-comet/pull/2800) (mbutrovich)\n- Fix: Fix null handling in CometVector implementations [#2643](https://github.com/apache/datafusion-comet/pull/2643) (cfmcgrady)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    54\tAndy Grove\n    11\tOleks V\n    10\tdependabot[bot]\n     9\tMatt Butrovich\n     6\tManu Zhang\n     3\tFu Chen\n     3\tKazantsev Maksim\n     2\tEmily Matheys\n     2\tVrishabh\n     2\thsiang-c\n     1\tParth Chandra\n     1\tZhen Wang\n     1\trahulbabarwal89\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.13.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.13.0 Changelog\n\nThis release consists of 169 commits from 15 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: NativeScan count assert firing for no reason [#2850](https://github.com/apache/datafusion-comet/pull/2850) (EmilyMatt)\n- fix: Correct link to tracing guide in CometConf [#2866](https://github.com/apache/datafusion-comet/pull/2866) (manuzhang)\n- fix: Fall back to Spark for MakeDecimal with unsupported input type [#2815](https://github.com/apache/datafusion-comet/pull/2815) (andygrove)\n- fix: Normalize s3 paths for PME key retriever [#2874](https://github.com/apache/datafusion-comet/pull/2874) (mbutrovich)\n- fix: modify CometNativeScan to generate the file partitions without instantiating RDD [#2891](https://github.com/apache/datafusion-comet/pull/2891) (mbutrovich)\n- fix: Modulus on decimal data type mismatch [#2922](https://github.com/apache/datafusion-comet/pull/2922) (andygrove)\n- fix: [iceberg] Mark nativeIcebergScanMetadata @transient [#2930](https://github.com/apache/datafusion-comet/pull/2930) (mbutrovich)\n- fix: enable cast tests for Spark 4.0 [#2919](https://github.com/apache/datafusion-comet/pull/2919) (manuzhang)\n- fix: Remove fallback for maps containing complex types [#2943](https://github.com/apache/datafusion-comet/pull/2943) (andygrove)\n- fix: CometShuffleManager hang by deferring SparkEnv access [#3002](https://github.com/apache/datafusion-comet/pull/3002) (Shekharrajak)\n- fix: format decimal to string when casting to short [#2916](https://github.com/apache/datafusion-comet/pull/2916) (manuzhang)\n- fix: [iceberg] reduce granularity of metrics updates in IcebergFileStream [#3050](https://github.com/apache/datafusion-comet/pull/3050) (mbutrovich)\n- fix: native shuffle now reports spill metrics correctly [#3197](https://github.com/apache/datafusion-comet/pull/3197) (andygrove)\n- fix: Prevent native write when input is not Arrow format [#3227](https://github.com/apache/datafusion-comet/pull/3227) (andygrove)\n- fix: Add JDK to Docker image for release build [#3262](https://github.com/apache/datafusion-comet/pull/3262) (hsiang-c)\n\n**Performance related:**\n\n- perf: [iceberg] Deduplicate serialized metadata for Iceberg native scan [#2933](https://github.com/apache/datafusion-comet/pull/2933) (mbutrovich)\n- perf: Use await instead of block_on in native shuffle writer [#2937](https://github.com/apache/datafusion-comet/pull/2937) (mbutrovich)\n- perf: refactor executePlan to try to avoid constantly entering Tokio runtime [#2938](https://github.com/apache/datafusion-comet/pull/2938) (mbutrovich)\n- perf: Optimize lpad/rpad to remove unnecessary memory allocations per element 
[#2963](https://github.com/apache/datafusion-comet/pull/2963) (andygrove)\n- perf: Improve performance of normalize_nan [#2999](https://github.com/apache/datafusion-comet/pull/2999) (andygrove)\n- perf: Improve string expression microbenchmarks [#3012](https://github.com/apache/datafusion-comet/pull/3012) (andygrove)\n- perf: Improve date/time microbenchmarks to avoid redundant/duplicate benchmarks [#3020](https://github.com/apache/datafusion-comet/pull/3020) (andygrove)\n- perf: Improve aggregate expression microbenchmarks [#3021](https://github.com/apache/datafusion-comet/pull/3021) (andygrove)\n- perf: Improve conditional expression microbenchmarks [#3024](https://github.com/apache/datafusion-comet/pull/3024) (andygrove)\n- perf: Improve performance of date truncate [#2997](https://github.com/apache/datafusion-comet/pull/2997) (andygrove)\n- perf: Add microbenchmark for comparison expressions [#3026](https://github.com/apache/datafusion-comet/pull/3026) (andygrove)\n- perf: Implement more microbenchmarks for cast expressions [#3031](https://github.com/apache/datafusion-comet/pull/3031) (andygrove)\n- perf: Add microbenchmark for hash expressions [#3028](https://github.com/apache/datafusion-comet/pull/3028) (andygrove)\n- perf: Improve performance of CAST from string to int [#3017](https://github.com/apache/datafusion-comet/pull/3017) (coderfender)\n- perf: Improve criterion benchmarks for cast string to int [#3049](https://github.com/apache/datafusion-comet/pull/3049) (andygrove)\n- perf: Additional optimizations for cast from string to int [#3048](https://github.com/apache/datafusion-comet/pull/3048) (andygrove)\n- perf: set DataFusion session context's target_partitions to match Spark's spark.task.cpus [#3062](https://github.com/apache/datafusion-comet/pull/3062) (mbutrovich)\n- perf: don't busy-poll Tokio stream for plans without CometScan [#3063](https://github.com/apache/datafusion-comet/pull/3063) (mbutrovich)\n- perf: minor optimizations in `process_sorted_row_partition` [#3059](https://github.com/apache/datafusion-comet/pull/3059) (andygrove)\n- perf: optimize complex-type hash implementations [#3140](https://github.com/apache/datafusion-comet/pull/3140) (mbutrovich)\n- perf: [iceberg] Remove IcebergFileStream, use iceberg-rust's parallelization, bump iceberg-rust to latest, cache SchemaAdapter [#3051](https://github.com/apache/datafusion-comet/pull/3051) (mbutrovich)\n- perf: [iceberg] reduce nativeIcebergScanMetadata serialization points [#3243](https://github.com/apache/datafusion-comet/pull/3243) (mbutrovich)\n- perf: reduce GC pressure in protobuf serialization [#3242](https://github.com/apache/datafusion-comet/pull/3242) (andygrove)\n- perf: cache serialized query plans to avoid per-partition serialization [#3246](https://github.com/apache/datafusion-comet/pull/3246) (andygrove)\n- perf: [iceberg] Use protobuf instead of JSON to serialize Iceberg partition values [#3247](https://github.com/apache/datafusion-comet/pull/3247) (parthchandra)\n\n**Implemented enhancements:**\n\n- feat: Add experimental support for native Parquet writes [#2812](https://github.com/apache/datafusion-comet/pull/2812) (andygrove)\n- feat: Partially implement file commit protocol for native Parquet writes [#2828](https://github.com/apache/datafusion-comet/pull/2828) (andygrove)\n- feat: CometNativeWriteExec support with native scan as a child [#2839](https://github.com/apache/datafusion-comet/pull/2839) (mbutrovich)\n- feat: Add support for `explode` and `explode_outer` for array inputs 
[#2836](https://github.com/apache/datafusion-comet/pull/2836) (andygrove)\n- feat: Support ANSI mode SUM (Decimal types) [#2826](https://github.com/apache/datafusion-comet/pull/2826) (coderfender)\n- feat: Add expression registry to native planner [#2851](https://github.com/apache/datafusion-comet/pull/2851) (andygrove)\n- feat: Implement native operator registry [#2875](https://github.com/apache/datafusion-comet/pull/2875) (andygrove)\n- feat: Improve fallback reporting for `native_datafusion` scan [#2879](https://github.com/apache/datafusion-comet/pull/2879) (andygrove)\n- feat: Enable bucket pruning with native_datafusion scans [#2888](https://github.com/apache/datafusion-comet/pull/2888) (mbutrovich)\n- feat: support_ansi-mode_aggregated_benchmarking [#2901](https://github.com/apache/datafusion-comet/pull/2901) (coderfender)\n- feat: [iceberg] REST catalog support for CometNativeIcebergScan [#2895](https://github.com/apache/datafusion-comet/pull/2895) (mbutrovich)\n- feat: [iceberg] Support session token in Iceberg Native scan [#2913](https://github.com/apache/datafusion-comet/pull/2913) (hsiang-c)\n- feat: Make shuffle writer buffer size configurable [#2899](https://github.com/apache/datafusion-comet/pull/2899) (andygrove)\n- feat: Add partial support for `from_json` [#2934](https://github.com/apache/datafusion-comet/pull/2934) (andygrove)\n- feat: Create benchmarks comet cast [#2932](https://github.com/apache/datafusion-comet/pull/2932) (coderfender)\n- feat: Support string decimal cast [#2925](https://github.com/apache/datafusion-comet/pull/2925) (coderfender)\n- feat: Remove unnecessary transition for native writes [#2960](https://github.com/apache/datafusion-comet/pull/2960) (comphead)\n- feat: Initial implementation of size for array inputs [#2862](https://github.com/apache/datafusion-comet/pull/2862) (andygrove)\n- feat: Support ANSI mode sum expr (int inputs) [#2600](https://github.com/apache/datafusion-comet/pull/2600) (coderfender)\n- feat: Support casting string float types [#2835](https://github.com/apache/datafusion-comet/pull/2835) (coderfender)\n- feat: Support ANSI mode avg expr (int inputs) [#2817](https://github.com/apache/datafusion-comet/pull/2817) (coderfender)\n- feat: Add support for remote Parquet HDFS writer with openDAL [#2929](https://github.com/apache/datafusion-comet/pull/2929) (comphead)\n- feat: Expand `murmur3` hash support to complex types [#3077](https://github.com/apache/datafusion-comet/pull/3077) (andygrove)\n- feat: Comet Writer should respect object store settings [#3042](https://github.com/apache/datafusion-comet/pull/3042) (comphead)\n- feat: add support for unix_date expression [#3141](https://github.com/apache/datafusion-comet/pull/3141) (andygrove)\n- feat: add partial support for date_format expression [#3201](https://github.com/apache/datafusion-comet/pull/3201) (andygrove)\n- feat: add complex type support to native Parquet writer [#3214](https://github.com/apache/datafusion-comet/pull/3214) (andygrove)\n- feat: implement framework to support multiple pyspark benchmarks [#3080](https://github.com/apache/datafusion-comet/pull/3080) (andygrove)\n- feat: add support for datediff expression [#3145](https://github.com/apache/datafusion-comet/pull/3145) (andygrove)\n- feat: Add support for `unix_timestamp` function [#2936](https://github.com/apache/datafusion-comet/pull/2936) (andygrove)\n- feat: add support for last_day expression [#3143](https://github.com/apache/datafusion-comet/pull/3143) (andygrove)\n- feat: Support left expression 
[#3206](https://github.com/apache/datafusion-comet/pull/3206) (Shekharrajak)\n- feat: Add support for round-robin partitioning in native shuffle [#3076](https://github.com/apache/datafusion-comet/pull/3076) (andygrove)\n- feat: Native columnar to row conversion (Phase 1) [#3221](https://github.com/apache/datafusion-comet/pull/3221) (andygrove)\n\n**Documentation updates:**\n\n- docs: add documentation for fully-native Iceberg scans [#2868](https://github.com/apache/datafusion-comet/pull/2868) (mbutrovich)\n- docs: Add documentation to contributor guide explaining native + JVM shuffle implementation [#3055](https://github.com/apache/datafusion-comet/pull/3055) (andygrove)\n- docs: add guidance on disabling constant folding for literal tests [#3200](https://github.com/apache/datafusion-comet/pull/3200) (andygrove)\n- docs: Add common pitfalls and improve PR checklist in development guide [#3231](https://github.com/apache/datafusion-comet/pull/3231) (andygrove)\n- docs: various documentation updates in preparation for next release [#3254](https://github.com/apache/datafusion-comet/pull/3254) (andygrove)\n- docs: Stop generating dynamic docs content in build [#3212](https://github.com/apache/datafusion-comet/pull/3212) (andygrove)\n- docs: document datetime rebasing and V2 API limitations for DataFusion-based scans [#3259](https://github.com/apache/datafusion-comet/pull/3259) (andygrove)\n- docs: Mark native_comet scan as deprecated [#3274](https://github.com/apache/datafusion-comet/pull/3274) (andygrove)\n\n**Other:**\n\n- chore: Add 0.12.0 changelog [#2811](https://github.com/apache/datafusion-comet/pull/2811) (andygrove)\n- chore: Prepare for 0.13.0 development [#2809](https://github.com/apache/datafusion-comet/pull/2809) (andygrove)\n- minor: Add microbenchmark for integer sum with grouping [#2805](https://github.com/apache/datafusion-comet/pull/2805) (andygrove)\n- test: extract conditional expression tests (`if`, `case_when` and `coalesce`) [#2807](https://github.com/apache/datafusion-comet/pull/2807) (rluvaton)\n- build: Disable caching for macOS PR builds [#2816](https://github.com/apache/datafusion-comet/pull/2816) (andygrove)\n- chore(deps): bump actions/checkout from 5 to 6 [#2818](https://github.com/apache/datafusion-comet/pull/2818) (dependabot[bot])\n- chore(deps): bump object_store_opendal from 0.54.1 to 0.55.0 in /native [#2819](https://github.com/apache/datafusion-comet/pull/2819) (dependabot[bot])\n- chore(deps): bump cc from 1.2.46 to 1.2.47 in /native [#2822](https://github.com/apache/datafusion-comet/pull/2822) (dependabot[bot])\n- chore(deps): bump opendal from 0.54.1 to 0.55.0 in /native [#2821](https://github.com/apache/datafusion-comet/pull/2821) (dependabot[bot])\n- chore: update `Iceberg` install docs [#2824](https://github.com/apache/datafusion-comet/pull/2824) (comphead)\n- chore(deps): bump cc from 1.2.47 to 1.2.48 in /native [#2833](https://github.com/apache/datafusion-comet/pull/2833) (dependabot[bot])\n- chore(deps): bump the proto group in /native with 2 updates [#2832](https://github.com/apache/datafusion-comet/pull/2832) (dependabot[bot])\n- minor: Clean up shuffle transformation code in `CometExecRule` [#2840](https://github.com/apache/datafusion-comet/pull/2840) (andygrove)\n- chore: fix broken link to Apache DataFusion Comet Overview in README [#2846](https://github.com/apache/datafusion-comet/pull/2846) (onestn)\n- chore: Refactor some of the scan and sink handling in `CometExecRule` to reduce duplicate code 
[#2844](https://github.com/apache/datafusion-comet/pull/2844) (andygrove)\n- deps: bump lz4_flex, downgrade prost from yanked version [#2847](https://github.com/apache/datafusion-comet/pull/2847) (mbutrovich)\n- minor: Move shuffle logic from `CometExecRule` to `CometShuffleExchangeExec` serde implementation [#2853](https://github.com/apache/datafusion-comet/pull/2853) (andygrove)\n- chore: remove coverage file auto generator [#2854](https://github.com/apache/datafusion-comet/pull/2854) (comphead)\n- chore(deps): bump cc from 1.2.48 to 1.2.49 in /native [#2858](https://github.com/apache/datafusion-comet/pull/2858) (dependabot[bot])\n- chore: Refactor `CometExecRule` handling of `BroadcastHashJoin` and fix fallback reporting [#2856](https://github.com/apache/datafusion-comet/pull/2856) (andygrove)\n- chore: update actions/checkout from v4 to v6 in setup-iceberg and set… [#2857](https://github.com/apache/datafusion-comet/pull/2857) (bjornjorgensen)\n- minor: Small refactor in `CometExecRule` to remove confusing code and fix more fallback reporting [#2860](https://github.com/apache/datafusion-comet/pull/2860) (andygrove)\n- chore: Add unit tests for `CometExecRule` [#2863](https://github.com/apache/datafusion-comet/pull/2863) (andygrove)\n- chore: Add unit tests for `CometScanRule` [#2867](https://github.com/apache/datafusion-comet/pull/2867) (andygrove)\n- minor: Pedantic refactoring to move some methods from `CometSparkSessionExtensions` to `CometScanRule` and `CometExecRule` [#2873](https://github.com/apache/datafusion-comet/pull/2873) (andygrove)\n- deps: [iceberg] upgrade DataFusion to 51, Arrow to 57, Iceberg to latest, MSRV to 1.88 [#2729](https://github.com/apache/datafusion-comet/pull/2729) (mbutrovich)\n- chore: Enable plan stability suite for `native_datafusion` scans [#2877](https://github.com/apache/datafusion-comet/pull/2877) (andygrove)\n- chore: `ScanExec::new` no longer fetches data [#2881](https://github.com/apache/datafusion-comet/pull/2881) (andygrove)\n- Chore: refactor bit_not [#2896](https://github.com/apache/datafusion-comet/pull/2896) (kazantsev-maksim)\n- chore(deps): bump actions/cache from 4 to 5 [#2909](https://github.com/apache/datafusion-comet/pull/2909) (dependabot[bot])\n- chore(deps): bump actions/upload-artifact from 5 to 6 [#2910](https://github.com/apache/datafusion-comet/pull/2910) (dependabot[bot])\n- chore: Refactor string benchmarks (~10x reduction in LOC) [#2907](https://github.com/apache/datafusion-comet/pull/2907) (andygrove)\n- chore(deps): bump actions/download-artifact from 6 to 7 [#2908](https://github.com/apache/datafusion-comet/pull/2908) (dependabot[bot])\n- chore: use datafusion impl of hex function [#2915](https://github.com/apache/datafusion-comet/pull/2915) (kazantsev-maksim)\n- chore: Use fixed seed in RNG in tests [#2917](https://github.com/apache/datafusion-comet/pull/2917) (andygrove)\n- chore: Remove `row_step` from `process_sorted_row_partition` [#2920](https://github.com/apache/datafusion-comet/pull/2920) (andygrove)\n- chore: Move string function handling to new expression registry [#2931](https://github.com/apache/datafusion-comet/pull/2931) (andygrove)\n- chore: Reduce syscalls in metrics update logic [#2940](https://github.com/apache/datafusion-comet/pull/2940) (andygrove)\n- chore: Add shuffle benchmark for deeply nested schemas [#2902](https://github.com/apache/datafusion-comet/pull/2902) (andygrove)\n- chore: Reduce timer overhead in native shuffle writer [#2941](https://github.com/apache/datafusion-comet/pull/2941) 
(andygrove)\n- chore: Remove low-level ffi/jvm timers from native `ScanExec` [#2939](https://github.com/apache/datafusion-comet/pull/2939) (andygrove)\n- build: Skip problematic Spark SQL test for Spark 4.0.x [#2947](https://github.com/apache/datafusion-comet/pull/2947) (andygrove)\n- build: Reinstate macOS CI builds of Comet with Spark 4.0 [#2950](https://github.com/apache/datafusion-comet/pull/2950) (manuzhang)\n- chore(deps): bump reqwest from 0.12.25 to 0.12.26 in /native [#2952](https://github.com/apache/datafusion-comet/pull/2952) (dependabot[bot])\n- chore(deps): bump cc from 1.2.49 to 1.2.50 in /native [#2954](https://github.com/apache/datafusion-comet/pull/2954) (dependabot[bot])\n- chore(deps): bump assertables from 9.8.2 to 9.8.3 in /native [#2953](https://github.com/apache/datafusion-comet/pull/2953) (dependabot[bot])\n- minor: Refactor expression microbenchmarks to remove duplicate code [#2956](https://github.com/apache/datafusion-comet/pull/2956) (andygrove)\n- build: fix missing import in `main` [#2962](https://github.com/apache/datafusion-comet/pull/2962) (andygrove)\n- build: Skip macOS Spark 4 fuzz test [#2966](https://github.com/apache/datafusion-comet/pull/2966) (andygrove)\n- Avoid duplicated writer nodes when AQE enabled [#2982](https://github.com/apache/datafusion-comet/pull/2982) (comphead)\n- build: Set thread thresholds envs for spark test on macOS [#2987](https://github.com/apache/datafusion-comet/pull/2987) (wForget)\n- chore: Add microbenchmark for casting string to temporal types [#2980](https://github.com/apache/datafusion-comet/pull/2980) (andygrove)\n- chore(deps): bump reqwest from 0.12.26 to 0.12.28 in /native [#3009](https://github.com/apache/datafusion-comet/pull/3009) (dependabot[bot])\n- chore(deps): bump tempfile from 3.23.0 to 3.24.0 in /native [#3006](https://github.com/apache/datafusion-comet/pull/3006) (dependabot[bot])\n- chore(deps): bump serde_json from 1.0.145 to 1.0.148 in /native [#3010](https://github.com/apache/datafusion-comet/pull/3010) (dependabot[bot])\n- chore: Add microbenchmark for casting string to numeric [#2979](https://github.com/apache/datafusion-comet/pull/2979) (andygrove)\n- chore: Skip some CI workflows for benchmark changes [#3030](https://github.com/apache/datafusion-comet/pull/3030) (andygrove)\n- chore: Skip more workflows on benchmark PRs [#3034](https://github.com/apache/datafusion-comet/pull/3034) (andygrove)\n- chore: Improve microbenchmark for string expressions [#2964](https://github.com/apache/datafusion-comet/pull/2964) (andygrove)\n- chore(deps): bump tokio from 1.48.0 to 1.49.0 in /native [#3039](https://github.com/apache/datafusion-comet/pull/3039) (dependabot[bot])\n- chore(deps): bump libc from 0.2.178 to 0.2.179 in /native [#3038](https://github.com/apache/datafusion-comet/pull/3038) (dependabot[bot])\n- chore(deps): bump actions/cache from 4 to 5 [#3037](https://github.com/apache/datafusion-comet/pull/3037) (dependabot[bot])\n- Chore: to_json unit/benchmark tests [#3011](https://github.com/apache/datafusion-comet/pull/3011) (kazantsev-maksim)\n- chore: Add checks to microbenchmarks for plan running natively in Comet [#3045](https://github.com/apache/datafusion-comet/pull/3045) (andygrove)\n- chore: Refactor `CometScanRule` to improve scan selection and fallback logic [#2978](https://github.com/apache/datafusion-comet/pull/2978) (andygrove)\n- chore: Respect to legacySizeOfNull option for size function [#3036](https://github.com/apache/datafusion-comet/pull/3036) (kazantsev-maksim)\n- chore: Add 
PySpark-based benchmarks, starting with ETL example [#3065](https://github.com/apache/datafusion-comet/pull/3065) (andygrove)\n- chore(deps): bump the proto group in /native with 2 updates [#3071](https://github.com/apache/datafusion-comet/pull/3071) (dependabot[bot])\n- chore: add MacOS file and event trace log to gitignore [#3070](https://github.com/apache/datafusion-comet/pull/3070) (manuzhang)\n- chore(deps): bump arrow from 57.1.0 to 57.2.0 in /native [#3073](https://github.com/apache/datafusion-comet/pull/3073) (dependabot[bot])\n- chore(deps): bump parquet from 57.1.0 to 57.2.0 in /native [#3074](https://github.com/apache/datafusion-comet/pull/3074) (dependabot[bot])\n- chore(deps): bump cc from 1.2.50 to 1.2.52 in /native [#3072](https://github.com/apache/datafusion-comet/pull/3072) (dependabot[bot])\n- chore: improve cast documentation to add support per eval mode [#3056](https://github.com/apache/datafusion-comet/pull/3056) (coderfender)\n- chore: Refactor JVM shuffle: Move `SpillSorter` to top level class and add tests [#3081](https://github.com/apache/datafusion-comet/pull/3081) (andygrove)\n- minor: Split CometShuffleExternalSorter into sync/async implementations [#3192](https://github.com/apache/datafusion-comet/pull/3192) (andygrove)\n- chore: Add pending PR shield [#3205](https://github.com/apache/datafusion-comet/pull/3205) (comphead)\n- chore: deprecate native_comet scan in favor of native_iceberg_compat [#2949](https://github.com/apache/datafusion-comet/pull/2949) (Shekharrajak)\n- chore: add script to regenerate golden files for plan stability tests [#3204](https://github.com/apache/datafusion-comet/pull/3204) (andygrove)\n- chore: fix clippy warnings for Rust 1.93 [#3239](https://github.com/apache/datafusion-comet/pull/3239) (andygrove)\n- build: build native library once and share across CI test jobs [#3249](https://github.com/apache/datafusion-comet/pull/3249) (andygrove)\n- Experimental: Native CSV files read [#3044](https://github.com/apache/datafusion-comet/pull/3044) (kazantsev-maksim)\n- build: add missing datafusion-datasource dependency [#3252](https://github.com/apache/datafusion-comet/pull/3252) (andygrove)\n- chore: Auto scan mode no longer falls back to `native_comet` [#3236](https://github.com/apache/datafusion-comet/pull/3236) (andygrove)\n- build: optimize CI cache usage and add fast lint gate [#3251](https://github.com/apache/datafusion-comet/pull/3251) (andygrove)\n- build: use `install` instead of `compile` in TPC CI jobs [#3263](https://github.com/apache/datafusion-comet/pull/3263) (andygrove)\n- build: remove dead code for 0.8/0.9 docs that broke CI [#3264](https://github.com/apache/datafusion-comet/pull/3264) (andygrove)\n- refactor: rename scan.allowIncompatible to scan.unsignedSmallIntSafetyCheck [#3238](https://github.com/apache/datafusion-comet/pull/3238) (andygrove)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    91\tAndy Grove\n    23\tdependabot[bot]\n    18\tMatt Butrovich\n     9\tB Vadlamani\n     7\tOleks V\n     5\tKazantsev Maksim\n     5\tManu Zhang\n     3\tShekhar Prasad Rajak\n     2\thsiang-c\n     1\tBjørn Jørgensen\n     1\tEmily Matheys\n     1\tParth Chandra\n     1\tRaz Luvaton\n     1\tWonseok Yang\n     1\tZhen Wang\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.14.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.14.0 Changelog\n\nThis release consists of 189 commits from 21 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: [iceberg] Fall back on dynamicpruning expressions for CometIcebergNativeScan [#3335](https://github.com/apache/datafusion-comet/pull/3335) (mbutrovich)\n- fix: [iceberg] Disable native c2r by default [#3348](https://github.com/apache/datafusion-comet/pull/3348) (andygrove)\n- fix: Fix `space()` with negative input [#3347](https://github.com/apache/datafusion-comet/pull/3347) (hsiang-c)\n- fix: respect scan impl config for v2 scan [#3357](https://github.com/apache/datafusion-comet/pull/3357) (andygrove)\n- fix: fix memory safety issue in native c2r [#3367](https://github.com/apache/datafusion-comet/pull/3367) (andygrove)\n- fix: preserve partitioning in CometNativeScanExec for bucketed scans [#3392](https://github.com/apache/datafusion-comet/pull/3392) (andygrove)\n- fix: unignore row index Spark SQL tests for native_datafusion [#3414](https://github.com/apache/datafusion-comet/pull/3414) (andygrove)\n- fix: fall back to Spark when Parquet field ID matching is enabled in native_datafusion [#3415](https://github.com/apache/datafusion-comet/pull/3415) (andygrove)\n- fix: Expose bucketing information from CometNativeScanExec [#3437](https://github.com/apache/datafusion-comet/pull/3437) (andygrove)\n- fix: support scalar processing for `space` function [#3408](https://github.com/apache/datafusion-comet/pull/3408) (kazantsev-maksim)\n- fix: Revert \"perf: Remove mutable buffers from scan partition/missing columns (#3411)\" [iceberg] [#3486](https://github.com/apache/datafusion-comet/pull/3486) (mbutrovich)\n- fix: unignore input_file_name Spark SQL tests for native_datafusion [#3458](https://github.com/apache/datafusion-comet/pull/3458) (andygrove)\n- fix: add scalar support for bit_count expression [#3361](https://github.com/apache/datafusion-comet/pull/3361) (hsiang-c)\n- fix: Support concat_ws with literal NULL separator [#3542](https://github.com/apache/datafusion-comet/pull/3542) (0lai0)\n- fix: handle type mismatches in native c2r conversion [#3583](https://github.com/apache/datafusion-comet/pull/3583) (andygrove)\n- fix: disable native C2R for legacy Iceberg scans [iceberg] [#3663](https://github.com/apache/datafusion-comet/pull/3663) (mbutrovich)\n- fix: resolve Miri UB in null struct field test, re-enable Miri on PRs [#3669](https://github.com/apache/datafusion-comet/pull/3669) (andygrove)\n- fix: Support on all-literal RLIKE expression [#3647](https://github.com/apache/datafusion-comet/pull/3647) (0lai0)\n- fix: Fix scan metrics test to run with both native_datafusion and native_iceberg_compat 
[#3690](https://github.com/apache/datafusion-comet/pull/3690) (andygrove)\n\n**Performance related:**\n\n- perf: refactor sum int with specialized implementations for each eval_mode [#3054](https://github.com/apache/datafusion-comet/pull/3054) (andygrove)\n- perf: Optimize contains expression with SIMD-based scalar pattern sea… [#2991](https://github.com/apache/datafusion-comet/pull/2991) (Shekharrajak)\n- perf: Add batch coalescing in BufBatchWriter to reduce IPC schema overhead [#3441](https://github.com/apache/datafusion-comet/pull/3441) (andygrove)\n- perf: Use `native_datafusion` scan in benchmark scripts (6% faster for TPC-H) [#3460](https://github.com/apache/datafusion-comet/pull/3460) (andygrove)\n- perf: Remove mutable buffers from scan partition/missing columns [#3411](https://github.com/apache/datafusion-comet/pull/3411) (andygrove)\n- perf: [iceberg] Single-pass FileScanTask validation [#3443](https://github.com/apache/datafusion-comet/pull/3443) (mbutrovich)\n- perf: Improve benchmarks for native row-to-columnar used by JVM shuffle [#3290](https://github.com/apache/datafusion-comet/pull/3290) (andygrove)\n- perf: executePlan uses a channel to park executor task thread instead of yield_now() [iceberg] [#3553](https://github.com/apache/datafusion-comet/pull/3553) (mbutrovich)\n- perf: Initialize tokio runtime worker threads from spark.executor.cores [#3555](https://github.com/apache/datafusion-comet/pull/3555) (andygrove)\n- perf: Add Comet config for native Iceberg reader's data file concurrency [iceberg] [#3584](https://github.com/apache/datafusion-comet/pull/3584) (mbutrovich)\n- perf: reuse CometConf.COMET_TRACING_ENABLED, Native, NativeUtil in NativeBatchDecoderIterator [#3627](https://github.com/apache/datafusion-comet/pull/3627) (mbutrovich)\n- perf: Improve performance of native row-to-columnar transition used by JVM shuffle [#3289](https://github.com/apache/datafusion-comet/pull/3289) (andygrove)\n- perf: use aligned pointer reads for SparkUnsafeRow field accessors [#3670](https://github.com/apache/datafusion-comet/pull/3670) (andygrove)\n- perf: Optimize some decimal expressions [#3619](https://github.com/apache/datafusion-comet/pull/3619) (andygrove)\n\n**Implemented enhancements:**\n\n- feat: Native columnar to row conversion (Phase 2) [#3266](https://github.com/apache/datafusion-comet/pull/3266) (andygrove)\n- feat: Enable native columnar-to-row by default [#3299](https://github.com/apache/datafusion-comet/pull/3299) (andygrove)\n- feat: add support for `width_bucket` expression [#3273](https://github.com/apache/datafusion-comet/pull/3273) (davidlghellin)\n- feat: Drop `native_comet` as a valid option for `COMET_NATIVE_SCAN_IMPL` config [#3358](https://github.com/apache/datafusion-comet/pull/3358) (andygrove)\n- feat: Support date to timestamp cast [#3383](https://github.com/apache/datafusion-comet/pull/3383) (coderfender)\n- feat: CometExecRDD supports per-partition plan data, reduce Iceberg native scan serialization, add DPP [iceberg] [#3349](https://github.com/apache/datafusion-comet/pull/3349) (mbutrovich)\n- feat: Support right expression [#3207](https://github.com/apache/datafusion-comet/pull/3207) (Shekharrajak)\n- feat: support map_contains_key expression [#3369](https://github.com/apache/datafusion-comet/pull/3369) (peterxcli)\n- feat: add support for make_date expression [#3147](https://github.com/apache/datafusion-comet/pull/3147) (andygrove)\n- feat: add support for next_day expression [#3148](https://github.com/apache/datafusion-comet/pull/3148) 
(andygrove)\n- feat: implement cast from whole numbers to binary format and bool to decimal [#3083](https://github.com/apache/datafusion-comet/pull/3083) (coderfender)\n- feat: Support for StringSplit [#2772](https://github.com/apache/datafusion-comet/pull/2772) (Shekharrajak)\n- feat: CometNativeScan per-partition plan serde [#3511](https://github.com/apache/datafusion-comet/pull/3511) (mbutrovich)\n- feat: Remove mutable buffers from scan partition/missing columns [iceberg] [#3514](https://github.com/apache/datafusion-comet/pull/3514) (andygrove)\n- feat: pass spark.comet.datafusion.\\* configs through to DataFusion session [#3455](https://github.com/apache/datafusion-comet/pull/3455) (andygrove)\n- feat: pass vended credentials to Iceberg native scan [#3523](https://github.com/apache/datafusion-comet/pull/3523) (tokoko)\n- feat: Cast date to Numeric (No Op) [#3544](https://github.com/apache/datafusion-comet/pull/3544) (coderfender)\n- feat: add support `crc32` expression [#3498](https://github.com/apache/datafusion-comet/pull/3498) (rafafrdz)\n- feat: Support int to timestamp casts [#3541](https://github.com/apache/datafusion-comet/pull/3541) (coderfender)\n- feat(benchmarks): add async-profiler support to TPC benchmark scripts [#3613](https://github.com/apache/datafusion-comet/pull/3613) (andygrove)\n- feat: Cast numeric (non int) to timestamp [#3559](https://github.com/apache/datafusion-comet/pull/3559) (coderfender)\n- feat: [ANSI] Ansi sql error messages [#3580](https://github.com/apache/datafusion-comet/pull/3580) (parthchandra)\n- feat: enable debug assertions in CI profile, fix unaligned memory access bug [#3652](https://github.com/apache/datafusion-comet/pull/3652) (andygrove)\n- feat: Enable native c2r by default, add debug asserts [#3649](https://github.com/apache/datafusion-comet/pull/3649) (andygrove)\n- feat: support Spark luhn_check expression [#3573](https://github.com/apache/datafusion-comet/pull/3573) (n0r0shi)\n\n**Documentation updates:**\n\n- docs: Add changelog for 0.13.0 [#3260](https://github.com/apache/datafusion-comet/pull/3260) (andygrove)\n- docs: fix bug in placement of prettier-ignore-end in generated docs [#3287](https://github.com/apache/datafusion-comet/pull/3287) (andygrove)\n- docs: Add contributor guide page for SQL file tests [#3333](https://github.com/apache/datafusion-comet/pull/3333) (andygrove)\n- docs: fix inaccurate claim about mutable buffers in parquet scan docs [#3378](https://github.com/apache/datafusion-comet/pull/3378) (andygrove)\n- docs: Improve documentation on maven usage for running tests [#3370](https://github.com/apache/datafusion-comet/pull/3370) (andygrove)\n- docs: move release process docs to contributor guide [#3492](https://github.com/apache/datafusion-comet/pull/3492) (andygrove)\n- docs: improve release process documentation [#3508](https://github.com/apache/datafusion-comet/pull/3508) (andygrove)\n- docs: update roadmap [#3543](https://github.com/apache/datafusion-comet/pull/3543) (mbutrovich)\n- docs: Update Parquet scan documentation [#3433](https://github.com/apache/datafusion-comet/pull/3433) (andygrove)\n- docs: recommend SQL file tests for new expressions [#3598](https://github.com/apache/datafusion-comet/pull/3598) (andygrove)\n- docs: add SAFETY comments to all unsafe blocks in shuffle spark_unsafe module [#3603](https://github.com/apache/datafusion-comet/pull/3603) (andygrove)\n- docs: Fix link to overview page [#3625](https://github.com/apache/datafusion-comet/pull/3625) (manuzhang)\n- doc: Document sql query 
error propagation [#3651](https://github.com/apache/datafusion-comet/pull/3651) (parthchandra)\n- docs: update Iceberg docs in advance of 0.14.0 [#3691](https://github.com/apache/datafusion-comet/pull/3691) (mbutrovich)\n\n**Other:**\n\n- chore(deps): bump actions/download-artifact from 4 to 7 [#3281](https://github.com/apache/datafusion-comet/pull/3281) (dependabot[bot])\n- chore(deps): bump cc from 1.2.53 to 1.2.54 in /native [#3284](https://github.com/apache/datafusion-comet/pull/3284) (dependabot[bot])\n- build: Fix docs workflow dependency resolution failure [#3275](https://github.com/apache/datafusion-comet/pull/3275) (andygrove)\n- chore(deps): bump actions/upload-artifact from 4 to 6 [#3280](https://github.com/apache/datafusion-comet/pull/3280) (dependabot[bot])\n- chore(deps): bump actions/cache from 4 to 5 [#3279](https://github.com/apache/datafusion-comet/pull/3279) (dependabot[bot])\n- chore(deps): bump uuid from 1.19.0 to 1.20.0 in /native [#3282](https://github.com/apache/datafusion-comet/pull/3282) (dependabot[bot])\n- build: reduce overhead of fuzz testing [#3257](https://github.com/apache/datafusion-comet/pull/3257) (andygrove)\n- chore: Start 0.14.0 development [#3288](https://github.com/apache/datafusion-comet/pull/3288) (andygrove)\n- chore: Add Comet released artifacts and links to maven [#3291](https://github.com/apache/datafusion-comet/pull/3291) (comphead)\n- chore: Add take/untake workflow for issue self-assignment [#3270](https://github.com/apache/datafusion-comet/pull/3270) (andygrove)\n- ci: Consolidate Spark SQL test jobs to reduce CI time [#3271](https://github.com/apache/datafusion-comet/pull/3271) (andygrove)\n- chore(deps): bump org.assertj:assertj-core from 3.23.1 to 3.27.7 [#3293](https://github.com/apache/datafusion-comet/pull/3293) (dependabot[bot])\n- chore: Add microbenchmark for IcebergScan operator serde roundtrip [#3296](https://github.com/apache/datafusion-comet/pull/3296) (andygrove)\n- chore: Remove IgnoreCometNativeScan from ParquetEncryptionSuite in 3.5.7 diff [#3304](https://github.com/apache/datafusion-comet/pull/3304) (andygrove)\n- chore: Enable native c2r in plan stability suite [#3302](https://github.com/apache/datafusion-comet/pull/3302) (andygrove)\n- chore: Add support for Spark 3.5.8 [#3323](https://github.com/apache/datafusion-comet/pull/3323) (manuzhang)\n- chore: Invert usingDataSourceExec test helper to usingLegacyNativeCometScan [#3310](https://github.com/apache/datafusion-comet/pull/3310) (andygrove)\n- tests: Add SQL test files covering edge cases for (almost) every Comet-supported expression [#3328](https://github.com/apache/datafusion-comet/pull/3328) (andygrove)\n- chore: Adapt caching from #3251 to [iceberg] workflows [#3353](https://github.com/apache/datafusion-comet/pull/3353) (mbutrovich)\n- bug: Fix string decimal type throw right exception [#3248](https://github.com/apache/datafusion-comet/pull/3248) (coderfender)\n- chore: Migrate `concat` tests to sql based testing framework [#3352](https://github.com/apache/datafusion-comet/pull/3352) (andygrove)\n- chore(deps): bump actions/setup-java from 4 to 5 [#3363](https://github.com/apache/datafusion-comet/pull/3363) (dependabot[bot])\n- chore: Annotate classes/methods/fields that are used by Apache Iceberg [#3237](https://github.com/apache/datafusion-comet/pull/3237) (andygrove)\n- Feat: map_from_entries [#2905](https://github.com/apache/datafusion-comet/pull/2905) (kazantsev-maksim)\n- chore: Move spark unsafe classes into spark_unsafe 
[#3373](https://github.com/apache/datafusion-comet/pull/3373) (EmilyMatt)\n- chore: Extract some tied down logic [#3374](https://github.com/apache/datafusion-comet/pull/3374) (EmilyMatt)\n- Fix: array contains null handling [#3372](https://github.com/apache/datafusion-comet/pull/3372) (Shekharrajak)\n- chore: stop uploading code coverage results [#3381](https://github.com/apache/datafusion-comet/pull/3381) (andygrove)\n- chore: update target-cpus in published binaries to x86-64-v3 and neoverse-n1 [#3368](https://github.com/apache/datafusion-comet/pull/3368) (mbutrovich)\n- chore: show line of error sql [#3390](https://github.com/apache/datafusion-comet/pull/3390) (peterxcli)\n- chore: Move writer-related logic to \"writers\" module [#3385](https://github.com/apache/datafusion-comet/pull/3385) (EmilyMatt)\n- chore(deps): bump bytes from 1.11.0 to 1.11.1 in /native [#3380](https://github.com/apache/datafusion-comet/pull/3380) (dependabot[bot])\n- chore: Clean up and split shuffle module [#3395](https://github.com/apache/datafusion-comet/pull/3395) (EmilyMatt)\n- chore: Make PR workflows match target-cpu flags in published jars [#3402](https://github.com/apache/datafusion-comet/pull/3402) (mbutrovich)\n- chore(deps): bump time from 0.3.45 to 0.3.47 in /native [#3412](https://github.com/apache/datafusion-comet/pull/3412) (dependabot[bot])\n- chore: Run Spark SQL tests with `native_datafusion` in CI [#3393](https://github.com/apache/datafusion-comet/pull/3393) (andygrove)\n- test: Add ANSI mode SQL test files for expressions that throw on invalid input [#3377](https://github.com/apache/datafusion-comet/pull/3377) (andygrove)\n- refactor: Split read benchmarks and add addParquetScanCases helper [#3407](https://github.com/apache/datafusion-comet/pull/3407) (andygrove)\n- chore: 4.5x reduction in number of golden files [#3399](https://github.com/apache/datafusion-comet/pull/3399) (andygrove)\n- Feat: to_csv [#3004](https://github.com/apache/datafusion-comet/pull/3004) (kazantsev-maksim)\n- minor: map_from_entries sql tests [#3394](https://github.com/apache/datafusion-comet/pull/3394) (kazantsev-maksim)\n- chore: add confirmation before tarball is released [#3439](https://github.com/apache/datafusion-comet/pull/3439) (milenkovicm)\n- chore(deps): bump cc from 1.2.54 to 1.2.55 in /native [#3451](https://github.com/apache/datafusion-comet/pull/3451) (dependabot[bot])\n- chore: Add Iceberg TPC-H benchmarking scripts [#3294](https://github.com/apache/datafusion-comet/pull/3294) (andygrove)\n- chore: Remove dead code paths for deprecated native_comet scan [#3396](https://github.com/apache/datafusion-comet/pull/3396) (andygrove)\n- chore(deps): bump arrow from 57.2.0 to 57.3.0 in /native [#3449](https://github.com/apache/datafusion-comet/pull/3449) (dependabot[bot])\n- chore(deps): bump aws-config from 1.8.12 to 1.8.13 in /native [#3450](https://github.com/apache/datafusion-comet/pull/3450) (dependabot[bot])\n- chore(deps): bump regex from 1.12.2 to 1.12.3 in /native [#3453](https://github.com/apache/datafusion-comet/pull/3453) (dependabot[bot])\n- chore(deps): bump rand from 0.9.2 to 0.10.0 in /native [#3465](https://github.com/apache/datafusion-comet/pull/3465) (manuzhang)\n- test: Add additional contains expression tests [#3462](https://github.com/apache/datafusion-comet/pull/3462) (andygrove)\n- chore: Adjust native artifact caching key in CI [#3476](https://github.com/apache/datafusion-comet/pull/3476) (mbutrovich)\n- chore: Add Comet writer nested types test assertion 
[#3480](https://github.com/apache/datafusion-comet/pull/3480) (comphead)\n- test: Add SQL file tests for left and right expressions [#3463](https://github.com/apache/datafusion-comet/pull/3463) (andygrove)\n- chore: Add GitHub workflow to close stale PRs [#3488](https://github.com/apache/datafusion-comet/pull/3488) (andygrove)\n- chore: Make `push` CI to be triggered for `main` branch only [#3474](https://github.com/apache/datafusion-comet/pull/3474) (comphead)\n- ci: disable Miri safety checks until compatibility is restored [#3504](https://github.com/apache/datafusion-comet/pull/3504) (andygrove)\n- chore: Add memory reservation debug logging [#3489](https://github.com/apache/datafusion-comet/pull/3489) (andygrove)\n- chore: enable GitHub button for updating PR branches with latest from main [#3505](https://github.com/apache/datafusion-comet/pull/3505) (andygrove)\n- chore: remove some dead cast code [#3513](https://github.com/apache/datafusion-comet/pull/3513) (andygrove)\n- chore(deps): bump aws-credential-types from 1.2.11 to 1.2.12 in /native [#3525](https://github.com/apache/datafusion-comet/pull/3525) (dependabot[bot])\n- chore(deps): bump libc from 0.2.180 to 0.2.182 in /native [#3527](https://github.com/apache/datafusion-comet/pull/3527) (dependabot[bot])\n- chore(deps): bump cc from 1.2.55 to 1.2.56 in /native [#3528](https://github.com/apache/datafusion-comet/pull/3528) (dependabot[bot])\n- chore(deps): bump tempfile from 3.24.0 to 3.25.0 in /native [#3529](https://github.com/apache/datafusion-comet/pull/3529) (dependabot[bot])\n- ci: Bump up `actions/upload-artifact` from v4 to v6 [#3533](https://github.com/apache/datafusion-comet/pull/3533) (manuzhang)\n- chore(deps): bump aws-config from 1.8.13 to 1.8.14 in /native [#3526](https://github.com/apache/datafusion-comet/pull/3526) (dependabot[bot])\n- chore: refactor array_repeat [#3516](https://github.com/apache/datafusion-comet/pull/3516) (kazantsev-maksim)\n- chore: Add envvars to override writer configs and cometConf minor clean up [#3540](https://github.com/apache/datafusion-comet/pull/3540) (comphead)\n- chore: Cast module refactor boolean module [#3491](https://github.com/apache/datafusion-comet/pull/3491) (coderfender)\n- chore: Consolidate TPC benchmark scripts [#3538](https://github.com/apache/datafusion-comet/pull/3538) (andygrove)\n- chore(deps): bump parquet from 57.2.0 to 57.3.0 in /native [#3568](https://github.com/apache/datafusion-comet/pull/3568) (dependabot[bot])\n- chore(deps): bump uuid from 1.20.0 to 1.21.0 in /native [#3567](https://github.com/apache/datafusion-comet/pull/3567) (dependabot[bot])\n- chore: Add TPC-\\* queries to repo [#3562](https://github.com/apache/datafusion-comet/pull/3562) (andygrove)\n- chore(deps): bump assertables from 9.8.4 to 9.8.6 in /native [#3570](https://github.com/apache/datafusion-comet/pull/3570) (dependabot[bot])\n- chore(deps): bump actions/stale from 10.1.1 to 10.2.0 [#3565](https://github.com/apache/datafusion-comet/pull/3565) (dependabot[bot])\n- chore(deps): bump aws-credential-types from 1.2.12 to 1.2.13 in /native [#3566](https://github.com/apache/datafusion-comet/pull/3566) (dependabot[bot])\n- chore: makes dependabot to group deps into single PR [#3578](https://github.com/apache/datafusion-comet/pull/3578) (comphead)\n- chore: Cast module refactor : String [#3577](https://github.com/apache/datafusion-comet/pull/3577) (coderfender)\n- chore(deps): bump the all-other-cargo-deps group in /native with 3 updates 
[#3581](https://github.com/apache/datafusion-comet/pull/3581) (dependabot[bot])\n- chore: Add Docker Compose support for TPC benchmarks [#3576](https://github.com/apache/datafusion-comet/pull/3576) (andygrove)\n- build: Runs-on for `PR Build (Linux)` [#3579](https://github.com/apache/datafusion-comet/pull/3579) (blaginin)\n- chore: Add consistency checks and result hashing to TPC benchmarks [#3582](https://github.com/apache/datafusion-comet/pull/3582) (andygrove)\n- chore: Remove all remaining uses of legacy BatchReader from Comet [iceberg] [#3468](https://github.com/apache/datafusion-comet/pull/3468) (andygrove)\n- build: Skip CI workflows for changes in benchmarks directory [#3599](https://github.com/apache/datafusion-comet/pull/3599) (andygrove)\n- build: fix runs-on tags for consistency [#3601](https://github.com/apache/datafusion-comet/pull/3601) (andygrove)\n- chore: Add Java Flight Recorder profiling to TPC benchmarks [#3597](https://github.com/apache/datafusion-comet/pull/3597) (andygrove)\n- deps: DataFusion 52.0.0 migration (SchemaAdapter changes, etc.) [iceberg] [#3536](https://github.com/apache/datafusion-comet/pull/3536) (comphead)\n- chore(deps): bump actions/download-artifact from 7 to 8 [#3609](https://github.com/apache/datafusion-comet/pull/3609) (dependabot[bot])\n- chore(deps): bump actions/upload-artifact from 6 to 7 [#3610](https://github.com/apache/datafusion-comet/pull/3610) (dependabot[bot])\n- chore: bump iceberg-rust dependency to latest [iceberg] [#3606](https://github.com/apache/datafusion-comet/pull/3606) (mbutrovich)\n- CI: Add CodeQL workflow for GitHub Actions security scanning [#3617](https://github.com/apache/datafusion-comet/pull/3617) (kevinjqliu)\n- CI: update codeql with pinned action versions [#3621](https://github.com/apache/datafusion-comet/pull/3621) (kevinjqliu)\n- chore: replace legacy datetime rebase tests with current scan coverage [iceberg] [#3605](https://github.com/apache/datafusion-comet/pull/3605) (andygrove)\n- build: More runners [#3626](https://github.com/apache/datafusion-comet/pull/3626) (blaginin)\n- deps: bump DataFusion to 52.2 [iceberg] [#3622](https://github.com/apache/datafusion-comet/pull/3622) (mbutrovich)\n- chore: use datafusion impl of `space` function [#3612](https://github.com/apache/datafusion-comet/pull/3612) (kazantsev-maksim)\n- chore: use datafusion impl of `bit_count` function [#3616](https://github.com/apache/datafusion-comet/pull/3616) (kazantsev-maksim)\n- chore: refactor cast module numeric data types [#3623](https://github.com/apache/datafusion-comet/pull/3623) (coderfender)\n- chore: Refactor cast module temporal types [#3624](https://github.com/apache/datafusion-comet/pull/3624) (coderfender)\n- chore: Fix clippy complaints [#3634](https://github.com/apache/datafusion-comet/pull/3634) (comphead)\n- chore(deps): bump docker/build-push-action from 6 to 7 [#3639](https://github.com/apache/datafusion-comet/pull/3639) (dependabot[bot])\n- chore(deps): bump github/codeql-action from 4.32.5 to 4.32.6 [#3637](https://github.com/apache/datafusion-comet/pull/3637) (dependabot[bot])\n- chore(deps): bump docker/setup-buildx-action from 3 to 4 [#3636](https://github.com/apache/datafusion-comet/pull/3636) (dependabot[bot])\n- chore(deps): bump docker/login-action from 3 to 4 [#3638](https://github.com/apache/datafusion-comet/pull/3638) (dependabot[bot])\n- deps: update to latest iceberg-rust to pick up get_byte_ranges [iceberg] [#3635](https://github.com/apache/datafusion-comet/pull/3635) (mbutrovich)\n- chore: Array 
literals tests enable [#3633](https://github.com/apache/datafusion-comet/pull/3633) (comphead)\n- chore: Add debug assertions before unsafe code blocks [#3655](https://github.com/apache/datafusion-comet/pull/3655) (andygrove)\n- chore: fix license header - ansi docs [#3662](https://github.com/apache/datafusion-comet/pull/3662) (coderfender)\n- chore(deps): bump quinn-proto from 0.11.13 to 0.11.14 in /native [#3660](https://github.com/apache/datafusion-comet/pull/3660) (dependabot[bot])\n- ci: add dedicated RAT license check workflow for all PRs [#3664](https://github.com/apache/datafusion-comet/pull/3664) (andygrove)\n- chore: Remove deprecated SCAN_NATIVE_COMET constant and related test code [#3671](https://github.com/apache/datafusion-comet/pull/3671) (andygrove)\n- chore: Upgrade to DF 52.3.0 [#3672](https://github.com/apache/datafusion-comet/pull/3672) (andygrove)\n- deps: update to iceberg-rust 0.9.0 rc1 [iceberg] [#3657](https://github.com/apache/datafusion-comet/pull/3657) (mbutrovich)\n- chore: Mark expressions with known correctness issues as incompatible [#3675](https://github.com/apache/datafusion-comet/pull/3675) (andygrove)\n- chore(deps): bump actions/setup-java from 4 to 5 [#3683](https://github.com/apache/datafusion-comet/pull/3683) (dependabot[bot])\n- chore(deps): bump runs-on/action from 2.0.3 to 2.1.0 [#3684](https://github.com/apache/datafusion-comet/pull/3684) (dependabot[bot])\n- chore(deps): bump actions/checkout from 4 to 6 [#3685](https://github.com/apache/datafusion-comet/pull/3685) (dependabot[bot])\n- ci: remove Java Iceberg integration tests from CI [iceberg] [#3673](https://github.com/apache/datafusion-comet/pull/3673) (andygrove)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    81\tAndy Grove\n    34\tdependabot[bot]\n    19\tMatt Butrovich\n     9\tB Vadlamani\n     8\tOleks V\n     7\tKazantsev Maksim\n     4\tEmily Matheys\n     4\tManu Zhang\n     4\tShekhar Prasad Rajak\n     2\tBhargava Vadlamani\n     2\tChenChen Lai\n     2\tDmitrii Blaginin\n     2\tKevin Liu\n     2\tParth Chandra\n     2\tPeter Lee\n     2\thsiang-c\n     1\tDavid López\n     1\tMarko Milenković\n     1\tRafael Fernández\n     1\tTornike Gurgenidze\n     1\tn0r0shi\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.14.1.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.14.1 Changelog\n\nThis release consists of 5 commits from 1 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: [branch-0.14] backport #3802 - cache object stores and bucket regions to reduce DNS query volume [#3935](https://github.com/apache/datafusion-comet/pull/3935) (andygrove)\n- fix: [branch-0.14] backport #3924 - share unified memory pools across native execution contexts [#3938](https://github.com/apache/datafusion-comet/pull/3938) (andygrove)\n- fix: [branch-0.14] backport #3879 - skip Comet columnar shuffle for stages with DPP scans [#3934](https://github.com/apache/datafusion-comet/pull/3934) (andygrove)\n- fix: [branch-0.14] backport #3914 - use min instead of max when capping write buffer size to Int range [#3936](https://github.com/apache/datafusion-comet/pull/3936) (andygrove)\n- fix: [branch-0.14] backport #3865 - handle ambiguous and non-existent local times [#3937](https://github.com/apache/datafusion-comet/pull/3937) (andygrove)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n     5\tAndy Grove\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.2.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.2.0 Changelog\n\nThis release consists of 87 commits from 14 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: dictionary decimal vector optimization [#705](https://github.com/apache/datafusion-comet/pull/705) (kazuyukitanimura)\n- fix: Unsupported window expression should fall back to Spark [#710](https://github.com/apache/datafusion-comet/pull/710) (viirya)\n- fix: ReusedExchangeExec can be child operator of CometBroadcastExchangeExec [#713](https://github.com/apache/datafusion-comet/pull/713) (viirya)\n- fix: Fallback to Spark for window expression with range frame [#719](https://github.com/apache/datafusion-comet/pull/719) (viirya)\n- fix: Remove `skip.surefire.tests` mvn property [#739](https://github.com/apache/datafusion-comet/pull/739) (wForget)\n- fix: subquery execution under CometTakeOrderedAndProjectExec should not fail [#748](https://github.com/apache/datafusion-comet/pull/748) (viirya)\n- fix: skip negative scale checks for creating decimals [#723](https://github.com/apache/datafusion-comet/pull/723) (kazuyukitanimura)\n- fix: Fallback to Spark for unsupported partitioning [#759](https://github.com/apache/datafusion-comet/pull/759) (viirya)\n- fix: Unsupported types for SinglePartition should fallback to Spark [#765](https://github.com/apache/datafusion-comet/pull/765) (viirya)\n- fix: unwrap dictionaries in CreateNamedStruct [#754](https://github.com/apache/datafusion-comet/pull/754) (andygrove)\n- fix: Fallback to Spark for unsupported input besides ordering [#768](https://github.com/apache/datafusion-comet/pull/768) (viirya)\n- fix: Native window operator should be CometUnaryExec [#774](https://github.com/apache/datafusion-comet/pull/774) (viirya)\n- fix: Fallback to Spark when shuffling on struct with duplicate field name [#776](https://github.com/apache/datafusion-comet/pull/776) (viirya)\n- fix: withInfo was overwriting information in some cases [#780](https://github.com/apache/datafusion-comet/pull/780) (andygrove)\n- fix: Improve support for nested structs [#800](https://github.com/apache/datafusion-comet/pull/800) (eejbyfeldt)\n- fix: Sort on single struct should fallback to Spark [#811](https://github.com/apache/datafusion-comet/pull/811) (viirya)\n- fix: Check sort order of SortExec instead of child output [#821](https://github.com/apache/datafusion-comet/pull/821) (viirya)\n- fix: Fix panic in `avg` aggregate and disable `stddev` by default [#819](https://github.com/apache/datafusion-comet/pull/819) (andygrove)\n- fix: Supported nested types in HashJoin [#735](https://github.com/apache/datafusion-comet/pull/735) (eejbyfeldt)\n\n**Performance related:**\n\n- perf: Improve performance of CASE .. 
WHEN expressions [#703](https://github.com/apache/datafusion-comet/pull/703) (andygrove)\n- perf: Optimize IfExpr by delegating to CaseExpr [#681](https://github.com/apache/datafusion-comet/pull/681) (andygrove)\n- fix: optimize isNullAt [#732](https://github.com/apache/datafusion-comet/pull/732) (kazuyukitanimura)\n- perf: decimal decode improvements [#727](https://github.com/apache/datafusion-comet/pull/727) (parthchandra)\n- fix: Remove castting on decimals with a small precision to decimal256 [#741](https://github.com/apache/datafusion-comet/pull/741) (kazuyukitanimura)\n- fix: optimize some bit functions [#718](https://github.com/apache/datafusion-comet/pull/718) (kazuyukitanimura)\n- fix: Optimize getDecimal for small precision [#758](https://github.com/apache/datafusion-comet/pull/758) (kazuyukitanimura)\n- perf: add metrics to CopyExec and ScanExec [#778](https://github.com/apache/datafusion-comet/pull/778) (andygrove)\n- fix: Optimize decimal creation macros [#764](https://github.com/apache/datafusion-comet/pull/764) (kazuyukitanimura)\n- perf: Improve count aggregate performance [#784](https://github.com/apache/datafusion-comet/pull/784) (andygrove)\n- fix: Optimize read_side_padding [#772](https://github.com/apache/datafusion-comet/pull/772) (kazuyukitanimura)\n- perf: Remove some redundant copying of batches [#816](https://github.com/apache/datafusion-comet/pull/816) (andygrove)\n- perf: Remove redundant copying of batches after FilterExec [#835](https://github.com/apache/datafusion-comet/pull/835) (andygrove)\n- fix: Optimize CheckOverflow [#852](https://github.com/apache/datafusion-comet/pull/852) (kazuyukitanimura)\n- perf: Add benchmarks for Spark Scan + Comet Exec [#863](https://github.com/apache/datafusion-comet/pull/863) (andygrove)\n\n**Implemented enhancements:**\n\n- feat: Add support for time-zone, 3 & 5 digit years: Cast from string to timestamp. 
[#704](https://github.com/apache/datafusion-comet/pull/704) (akhilss99)\n- feat: Support count AggregateUDF for window function [#736](https://github.com/apache/datafusion-comet/pull/736) (huaxingao)\n- feat: Implement basic version of RLIKE [#734](https://github.com/apache/datafusion-comet/pull/734) (andygrove)\n- feat: show executed native plan with metrics when in debug mode [#746](https://github.com/apache/datafusion-comet/pull/746) (andygrove)\n- feat: Add GetStructField expression [#731](https://github.com/apache/datafusion-comet/pull/731) (Kimahriman)\n- feat: Add config to enable native upper and lower string conversion [#767](https://github.com/apache/datafusion-comet/pull/767) (andygrove)\n- feat: Improve native explain [#795](https://github.com/apache/datafusion-comet/pull/795) (andygrove)\n- feat: Add support for null literal with struct type [#797](https://github.com/apache/datafusion-comet/pull/797) (eejbyfeldt)\n- feat: Optimze CreateNamedStruct preserve dictionaries [#789](https://github.com/apache/datafusion-comet/pull/789) (eejbyfeldt)\n- feat: `CreateArray` support [#793](https://github.com/apache/datafusion-comet/pull/793) (Kimahriman)\n- feat: Add native thread configs [#828](https://github.com/apache/datafusion-comet/pull/828) (viirya)\n- feat: Add specific configs for converting Spark Parquet and JSON data to Arrow [#832](https://github.com/apache/datafusion-comet/pull/832) (andygrove)\n- feat: Support sum in window function [#802](https://github.com/apache/datafusion-comet/pull/802) (huaxingao)\n- feat: Simplify configs for enabling/disabling operators [#855](https://github.com/apache/datafusion-comet/pull/855) (andygrove)\n- feat: Enable `clippy::clone_on_ref_ptr` on `proto` and `spark_exprs` crates [#859](https://github.com/apache/datafusion-comet/pull/859) (comphead)\n- feat: Enable `clippy::clone_on_ref_ptr` on `core` crate [#860](https://github.com/apache/datafusion-comet/pull/860) (comphead)\n- feat: Use CometPlugin as main entrypoint [#853](https://github.com/apache/datafusion-comet/pull/853) (andygrove)\n\n**Documentation updates:**\n\n- doc: Update outdated spark.comet.columnar.shuffle.enabled configuration doc [#738](https://github.com/apache/datafusion-comet/pull/738) (wForget)\n- docs: Add explicit configs for enabling operators [#801](https://github.com/apache/datafusion-comet/pull/801) (andygrove)\n- doc: Document CometPlugin to start Comet in cluster mode [#836](https://github.com/apache/datafusion-comet/pull/836) (comphead)\n\n**Other:**\n\n- chore: Make rust clippy happy [#701](https://github.com/apache/datafusion-comet/pull/701) (Xuanwo)\n- chore: Update version to 0.2.0 and add 0.1.0 changelog [#696](https://github.com/apache/datafusion-comet/pull/696) (andygrove)\n- chore: Use rust-toolchain.toml for better toolchain support [#699](https://github.com/apache/datafusion-comet/pull/699) (Xuanwo)\n- chore(native): Make sure all targets in workspace been covered by clippy [#702](https://github.com/apache/datafusion-comet/pull/702) (Xuanwo)\n- Apache DataFusion Comet Logo [#697](https://github.com/apache/datafusion-comet/pull/697) (aocsa)\n- chore: Add logo to rat exclude list [#709](https://github.com/apache/datafusion-comet/pull/709) (andygrove)\n- chore: Use new logo in README and website [#724](https://github.com/apache/datafusion-comet/pull/724) (andygrove)\n- build: Add Comet logo files into exclude list [#726](https://github.com/apache/datafusion-comet/pull/726) (viirya)\n- chore: Remove TPC-DS benchmark results 
[#728](https://github.com/apache/datafusion-comet/pull/728) (andygrove)\n- chore: make Cast's logic reusable for other projects [#716](https://github.com/apache/datafusion-comet/pull/716) (Blizzara)\n- chore: move scalar_funcs into spark-expr [#712](https://github.com/apache/datafusion-comet/pull/712) (Blizzara)\n- chore: Bump DataFusion to rev 35c2e7e [#740](https://github.com/apache/datafusion-comet/pull/740) (andygrove)\n- chore: add more aggregate functions to benchmark test [#706](https://github.com/apache/datafusion-comet/pull/706) (huaxingao)\n- chore: Add criterion benchmark for decimal_div [#743](https://github.com/apache/datafusion-comet/pull/743) (andygrove)\n- build: Re-enable TPCDS q72 for broadcast and hash join configs [#781](https://github.com/apache/datafusion-comet/pull/781) (viirya)\n- chore: bump DataFusion to rev f4e519f [#783](https://github.com/apache/datafusion-comet/pull/783) (huaxingao)\n- chore: Upgrade to DataFusion rev bddb641 and disable \"skip partial aggregates\" feature [#788](https://github.com/apache/datafusion-comet/pull/788) (andygrove)\n- chore: Remove legacy code for adding a cast to a coalesce [#790](https://github.com/apache/datafusion-comet/pull/790) (andygrove)\n- chore: Use DataFusion 41.0.0-rc1 [#794](https://github.com/apache/datafusion-comet/pull/794) (andygrove)\n- chore: rename `CometRowToColumnar` and fix duplication bug [#785](https://github.com/apache/datafusion-comet/pull/785) (Kimahriman)\n- chore: Enable shuffle in micro benchmarks [#806](https://github.com/apache/datafusion-comet/pull/806) (andygrove)\n- Minor: ScanExec code cleanup and additional documentation [#804](https://github.com/apache/datafusion-comet/pull/804) (andygrove)\n- chore: Make it possible to run 'make benchmark-%' using jvm 17+ [#823](https://github.com/apache/datafusion-comet/pull/823) (eejbyfeldt)\n- chore: Add more unsupported cases to supportedSortType [#825](https://github.com/apache/datafusion-comet/pull/825) (viirya)\n- chore: Enable Comet shuffle with AQE coalesce partitions [#834](https://github.com/apache/datafusion-comet/pull/834) (viirya)\n- chore: Add GitHub workflow to publish Docker image [#847](https://github.com/apache/datafusion-comet/pull/847) (andygrove)\n- chore: Revert \"fix: change the not exists base image apache/spark:3.4.3 to 3.4.2\" [#854](https://github.com/apache/datafusion-comet/pull/854) (haoxins)\n- chore: fix docker-publish attempt 1 [#851](https://github.com/apache/datafusion-comet/pull/851) (andygrove)\n- minor: stop warning that AQEShuffleRead cannot run natively [#842](https://github.com/apache/datafusion-comet/pull/842) (andygrove)\n- chore: Improve ObjectHashAggregate fallback error message [#849](https://github.com/apache/datafusion-comet/pull/849) (andygrove)\n- chore: Fix docker image publishing (specify ghcr.io in tag) [#856](https://github.com/apache/datafusion-comet/pull/856) (andygrove)\n- chore: Use Git tag as Comet version when publishing Docker images [#857](https://github.com/apache/datafusion-comet/pull/857) (andygrove)\n\n## Credits\n\nThank you to everyone who contributed to this release. 
Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    36\tAndy Grove\n    16\tLiang-Chi Hsieh\n     9\tKAZUYUKI TANIMURA\n     5\tEmil Ejbyfeldt\n     4\tHuaxin Gao\n     3\tAdam Binford\n     3\tOleks V\n     3\tXuanwo\n     2\tArttu\n     2\tZhen Wang\n     1\tAkhil S S\n     1\tAlexander Ocsa\n     1\tParth Chandra\n     1\tXin Hao\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.3.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.3.0 Changelog\n\nThis release consists of 57 commits from 12 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: Support type coercion for ScalarUDFs [#865](https://github.com/apache/datafusion-comet/pull/865) (Kimahriman)\n- fix: CometTakeOrderedAndProjectExec native scan node should use child operator's output [#896](https://github.com/apache/datafusion-comet/pull/896) (viirya)\n- fix: Fix various memory leaks problems [#890](https://github.com/apache/datafusion-comet/pull/890) (Kontinuation)\n- fix: Add output to Comet operators equal and hashCode [#902](https://github.com/apache/datafusion-comet/pull/902) (viirya)\n- fix: Fallback to Spark when cannot resolve AttributeReference [#926](https://github.com/apache/datafusion-comet/pull/926) (viirya)\n- fix: Fix memory bloat caused by holding too many unclosed `ArrowReaderIterator`s [#929](https://github.com/apache/datafusion-comet/pull/929) (Kontinuation)\n- fix: Normalize NaN and zeros for floating number comparison [#953](https://github.com/apache/datafusion-comet/pull/953) (viirya)\n- fix: window function range offset should be long instead of int [#733](https://github.com/apache/datafusion-comet/pull/733) (huaxingao)\n- fix: CometScanExec on Spark 3.5.2 [#915](https://github.com/apache/datafusion-comet/pull/915) (Kimahriman)\n- fix: div and rem by negative zero [#960](https://github.com/apache/datafusion-comet/pull/960) (kazuyukitanimura)\n\n**Performance related:**\n\n- perf: Optimize CometSparkToColumnar for columnar input [#892](https://github.com/apache/datafusion-comet/pull/892) (mbutrovich)\n- perf: Fall back to Spark if query uses DPP with v1 data sources [#897](https://github.com/apache/datafusion-comet/pull/897) (andygrove)\n- perf: Report accurate total time for scans [#916](https://github.com/apache/datafusion-comet/pull/916) (andygrove)\n- perf: Add metric for time spent casting in native scan [#919](https://github.com/apache/datafusion-comet/pull/919) (andygrove)\n- perf: Add criterion benchmark for aggregate expressions [#948](https://github.com/apache/datafusion-comet/pull/948) (andygrove)\n- perf: Add metric for time spent in CometSparkToColumnarExec [#931](https://github.com/apache/datafusion-comet/pull/931) (mbutrovich)\n- perf: Optimize decimal precision check in decimal aggregates (sum and avg) [#952](https://github.com/apache/datafusion-comet/pull/952) (andygrove)\n\n**Implemented enhancements:**\n\n- feat: Add config option to enable converting CSV to columnar [#871](https://github.com/apache/datafusion-comet/pull/871) (andygrove)\n- feat: Implement basic version of string to float/double/decimal 
[#870](https://github.com/apache/datafusion-comet/pull/870) (andygrove)\n- feat: Implement to_json for subset of types [#805](https://github.com/apache/datafusion-comet/pull/805) (andygrove)\n- feat: Add ShuffleQueryStageExec to direct child node for CometBroadcastExchangeExec [#880](https://github.com/apache/datafusion-comet/pull/880) (viirya)\n- feat: Support sort merge join with a join condition [#553](https://github.com/apache/datafusion-comet/pull/553) (viirya)\n- feat: Array element extraction [#899](https://github.com/apache/datafusion-comet/pull/899) (Kimahriman)\n- feat: date_add and date_sub functions [#910](https://github.com/apache/datafusion-comet/pull/910) (mbutrovich)\n- feat: implement scripts for binary release build [#932](https://github.com/apache/datafusion-comet/pull/932) (parthchandra)\n- feat: Publish artifacts to maven [#946](https://github.com/apache/datafusion-comet/pull/946) (parthchandra)\n\n**Documentation updates:**\n\n- doc: Documenting Helm chart for Comet Kube execution [#874](https://github.com/apache/datafusion-comet/pull/874) (comphead)\n- doc: Update native code path in development [#921](https://github.com/apache/datafusion-comet/pull/921) (viirya)\n- docs: Add more detailed architecture documentation [#922](https://github.com/apache/datafusion-comet/pull/922) (andygrove)\n\n**Other:**\n\n- chore: Update installation.md [#869](https://github.com/apache/datafusion-comet/pull/869) (haoxins)\n- chore: Update versions to 0.3.0 / 0.3.0-SNAPSHOT [#868](https://github.com/apache/datafusion-comet/pull/868) (andygrove)\n- chore: Add documentation on running benchmarks with Microk8s [#848](https://github.com/apache/datafusion-comet/pull/848) (andygrove)\n- chore: Improve CometExchange metrics [#873](https://github.com/apache/datafusion-comet/pull/873) (viirya)\n- chore: Add spilling metrics of SortMergeJoin [#878](https://github.com/apache/datafusion-comet/pull/878) (viirya)\n- chore: change shuffle mode default from jvm to auto [#877](https://github.com/apache/datafusion-comet/pull/877) (andygrove)\n- chore: Enable shuffle by default [#881](https://github.com/apache/datafusion-comet/pull/881) (andygrove)\n- chore: print Comet native version to logs after Comet is initialized [#900](https://github.com/apache/datafusion-comet/pull/900) (SemyonSinchenko)\n- chore: Revise batch pull approach to more follow C Data interface semantics [#893](https://github.com/apache/datafusion-comet/pull/893) (viirya)\n- chore: Close dictionary provider when iterator is closed [#904](https://github.com/apache/datafusion-comet/pull/904) (andygrove)\n- chore: Remove unused function [#906](https://github.com/apache/datafusion-comet/pull/906) (viirya)\n- chore: Upgrade to latest DataFusion revision [#909](https://github.com/apache/datafusion-comet/pull/909) (andygrove)\n- build: fix build [#917](https://github.com/apache/datafusion-comet/pull/917) (andygrove)\n- chore: Revise array import to more follow C Data Interface semantics [#905](https://github.com/apache/datafusion-comet/pull/905) (viirya)\n- chore: Address reviews [#920](https://github.com/apache/datafusion-comet/pull/920) (viirya)\n- chore: Enable Comet shuffle for Spark core-1 test [#924](https://github.com/apache/datafusion-comet/pull/924) (viirya)\n- build: Add maven-compiler-plugin for java cross-build [#911](https://github.com/apache/datafusion-comet/pull/911) (viirya)\n- build: Disable upload-test-reports for macos-13 runner [#933](https://github.com/apache/datafusion-comet/pull/933) (viirya)\n- minor: cast timestamp 
test #468 [#923](https://github.com/apache/datafusion-comet/pull/923) (himadripal)\n- build: Set Java version arg for scala-maven-plugin [#934](https://github.com/apache/datafusion-comet/pull/934) (viirya)\n- chore: Remove redundant RowToColumnar from CometShuffleExchangeExec for columnar shuffle [#944](https://github.com/apache/datafusion-comet/pull/944) (viirya)\n- minor: rename CometMetricNode `add` to `set` and update documentation [#940](https://github.com/apache/datafusion-comet/pull/940) (andygrove)\n- chore: Add config for enabling SMJ with join condition [#937](https://github.com/apache/datafusion-comet/pull/937) (andygrove)\n- chore: Change maven group ID to `org.apache.datafusion` [#941](https://github.com/apache/datafusion-comet/pull/941) (andygrove)\n- chore: Upgrade to DataFusion 42.0.0 [#945](https://github.com/apache/datafusion-comet/pull/945) (andygrove)\n- build: Fix regression in jar packaging [#950](https://github.com/apache/datafusion-comet/pull/950) (andygrove)\n- chore: Show reason for falling back to Spark when SMJ with join condition is not enabled [#956](https://github.com/apache/datafusion-comet/pull/956) (andygrove)\n- chore: clarify tarball installation [#959](https://github.com/apache/datafusion-comet/pull/959) (comphead)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    22\tAndy Grove\n    18\tLiang-Chi Hsieh\n     3\tAdam Binford\n     3\tMatt Butrovich\n     2\tKristin Cowalcijk\n     2\tOleks V\n     2\tParth Chandra\n     1\tHimadri Pal\n     1\tHuaxin Gao\n     1\tKAZUYUKI TANIMURA\n     1\tSemyon\n     1\tXin Hao\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.4.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.4.0 Changelog\n\nThis release consists of 51 commits from 10 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: Use the number of rows from underlying arrays instead of logical row count from RecordBatch [#972](https://github.com/apache/datafusion-comet/pull/972) (viirya)\n- fix: The spilled_bytes metric of CometSortExec should be size instead of time [#984](https://github.com/apache/datafusion-comet/pull/984) (Kontinuation)\n- fix: Properly handle Java exceptions without error messages; fix loading of comet native library from java.library.path [#982](https://github.com/apache/datafusion-comet/pull/982) (Kontinuation)\n- fix: Fallback to Spark if scan has meta columns [#997](https://github.com/apache/datafusion-comet/pull/997) (viirya)\n- fix: Fallback to Spark if named_struct contains duplicate field names [#1016](https://github.com/apache/datafusion-comet/pull/1016) (viirya)\n- fix: Make comet-git-info.properties optional [#1027](https://github.com/apache/datafusion-comet/pull/1027) (andygrove)\n- fix: TopK operator should return correct results on dictionary column with nulls [#1033](https://github.com/apache/datafusion-comet/pull/1033) (viirya)\n- fix: need default value for getSizeAsMb(EXECUTOR_MEMORY.key) [#1046](https://github.com/apache/datafusion-comet/pull/1046) (neyama)\n\n**Performance related:**\n\n- perf: Remove one redundant CopyExec for SMJ [#962](https://github.com/apache/datafusion-comet/pull/962) (andygrove)\n- perf: Add experimental feature to replace SortMergeJoin with ShuffledHashJoin [#1007](https://github.com/apache/datafusion-comet/pull/1007) (andygrove)\n- perf: Cache jstrings during metrics collection [#1029](https://github.com/apache/datafusion-comet/pull/1029) (mbutrovich)\n\n**Implemented enhancements:**\n\n- feat: Support `GetArrayStructFields` expression [#993](https://github.com/apache/datafusion-comet/pull/993) (Kimahriman)\n- feat: Implement bloom_filter_agg [#987](https://github.com/apache/datafusion-comet/pull/987) (mbutrovich)\n- feat: Support more types with BloomFilterAgg [#1039](https://github.com/apache/datafusion-comet/pull/1039) (mbutrovich)\n- feat: Implement CAST from struct to string [#1066](https://github.com/apache/datafusion-comet/pull/1066) (andygrove)\n- feat: Use official DataFusion 43 release [#1070](https://github.com/apache/datafusion-comet/pull/1070) (andygrove)\n- feat: Implement CAST between struct types [#1074](https://github.com/apache/datafusion-comet/pull/1074) (andygrove)\n- feat: support array_append [#1072](https://github.com/apache/datafusion-comet/pull/1072) (NoeB)\n- feat: Require offHeap memory to be enabled (always use unified memory) 
[#1062](https://github.com/apache/datafusion-comet/pull/1062) (andygrove)\n\n**Documentation updates:**\n\n- doc: add documentation interlinks [#975](https://github.com/apache/datafusion-comet/pull/975) (comphead)\n- docs: Add IntelliJ documentation for generated source code [#985](https://github.com/apache/datafusion-comet/pull/985) (mbutrovich)\n- docs: Update tuning guide [#995](https://github.com/apache/datafusion-comet/pull/995) (andygrove)\n- docs: Various documentation improvements [#1005](https://github.com/apache/datafusion-comet/pull/1005) (andygrove)\n- docs: clarify that Maven central only has jars for Linux [#1009](https://github.com/apache/datafusion-comet/pull/1009) (andygrove)\n- doc: fix K8s links and doc [#1058](https://github.com/apache/datafusion-comet/pull/1058) (comphead)\n- docs: Update benchmarking.md [#1085](https://github.com/apache/datafusion-comet/pull/1085) (rluvaton-flarion)\n\n**Other:**\n\n- chore: Generate changelog for 0.3.0 release [#964](https://github.com/apache/datafusion-comet/pull/964) (andygrove)\n- chore: fix publish-to-maven script [#966](https://github.com/apache/datafusion-comet/pull/966) (andygrove)\n- chore: Update benchmarks results based on 0.3.0-rc1 [#969](https://github.com/apache/datafusion-comet/pull/969) (andygrove)\n- chore: update rem expression guide [#976](https://github.com/apache/datafusion-comet/pull/976) (kazuyukitanimura)\n- chore: Enable additional CreateArray tests [#928](https://github.com/apache/datafusion-comet/pull/928) (Kimahriman)\n- chore: fix compatibility guide [#978](https://github.com/apache/datafusion-comet/pull/978) (kazuyukitanimura)\n- chore: Update for 0.3.0 release, prepare for 0.4.0 development [#970](https://github.com/apache/datafusion-comet/pull/970) (andygrove)\n- chore: Don't transform the HashAggregate to CometHashAggregate if Comet shuffle is disabled [#991](https://github.com/apache/datafusion-comet/pull/991) (viirya)\n- chore: Make parquet reader options Comet options instead of Hadoop options [#968](https://github.com/apache/datafusion-comet/pull/968) (parthchandra)\n- chore: remove legacy comet-spark-shell [#1013](https://github.com/apache/datafusion-comet/pull/1013) (andygrove)\n- chore: Reserve memory for native shuffle writer per partition [#988](https://github.com/apache/datafusion-comet/pull/988) (viirya)\n- chore: Bump arrow-rs to 53.1.0 and datafusion [#1001](https://github.com/apache/datafusion-comet/pull/1001) (kazuyukitanimura)\n- chore: Revert \"chore: Reserve memory for native shuffle writer per partition (#988)\" [#1020](https://github.com/apache/datafusion-comet/pull/1020) (viirya)\n- minor: Remove hard-coded version number from Dockerfile [#1025](https://github.com/apache/datafusion-comet/pull/1025) (andygrove)\n- chore: Reserve memory for native shuffle writer per partition [#1022](https://github.com/apache/datafusion-comet/pull/1022) (viirya)\n- chore: Improve error handling when native lib fails to load [#1000](https://github.com/apache/datafusion-comet/pull/1000) (andygrove)\n- chore: Use twox-hash 2.0 xxhash64 oneshot api instead of custom implementation [#1041](https://github.com/apache/datafusion-comet/pull/1041) (NoeB)\n- chore: Refactor Arrow Array and Schema allocation in ColumnReader and MetadataColumnReader [#1047](https://github.com/apache/datafusion-comet/pull/1047) (viirya)\n- minor: Refactor binary expr serde to reduce code duplication [#1053](https://github.com/apache/datafusion-comet/pull/1053) (andygrove)\n- chore: Upgrade to DataFusion 43.0.0-rc1 
[#1057](https://github.com/apache/datafusion-comet/pull/1057) (andygrove)\n- chore: Refactor UnaryExpr and MathExpr in protobuf [#1056](https://github.com/apache/datafusion-comet/pull/1056) (andygrove)\n- minor: use defaults instead of hard-coding values [#1060](https://github.com/apache/datafusion-comet/pull/1060) (andygrove)\n- minor: refactor UnaryExpr handling to make code more concise [#1065](https://github.com/apache/datafusion-comet/pull/1065) (andygrove)\n- chore: Refactor binary and math expression serde code [#1069](https://github.com/apache/datafusion-comet/pull/1069) (andygrove)\n- chore: Simplify CometShuffleMemoryAllocator to use Spark unified memory allocator [#1063](https://github.com/apache/datafusion-comet/pull/1063) (viirya)\n- test: Restore one test in CometExecSuite by adding COMET_SHUFFLE_MODE config [#1087](https://github.com/apache/datafusion-comet/pull/1087) (viirya)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    19\tAndy Grove\n    13\tMatt Butrovich\n     8\tLiang-Chi Hsieh\n     3\tKAZUYUKI TANIMURA\n     2\tAdam Binford\n     2\tKristin Cowalcijk\n     1\tNoeB\n     1\tOleks V\n     1\tParth Chandra\n     1\tneyama\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.5.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.5.0 Changelog\n\nThis release consists of 69 commits from 15 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: Unsigned type related bugs [#1095](https://github.com/apache/datafusion-comet/pull/1095) (kazuyukitanimura)\n- fix: Use RDD partition index [#1112](https://github.com/apache/datafusion-comet/pull/1112) (viirya)\n- fix: Various metrics bug fixes and improvements [#1111](https://github.com/apache/datafusion-comet/pull/1111) (andygrove)\n- fix: Don't create CometScanExec for subclasses of ParquetFileFormat [#1129](https://github.com/apache/datafusion-comet/pull/1129) (Kimahriman)\n- fix: Fix metrics regressions [#1132](https://github.com/apache/datafusion-comet/pull/1132) (andygrove)\n- fix: Enable scenarios accidentally commented out in CometExecBenchmark [#1151](https://github.com/apache/datafusion-comet/pull/1151) (mbutrovich)\n- fix: Spark 4.0-preview1 SPARK-47120 [#1156](https://github.com/apache/datafusion-comet/pull/1156) (kazuyukitanimura)\n- fix: Document enabling comet explain plan usage in Spark (4.0) [#1176](https://github.com/apache/datafusion-comet/pull/1176) (parthchandra)\n- fix: stddev_pop should not directly return 0.0 when count is 1.0 [#1184](https://github.com/apache/datafusion-comet/pull/1184) (viirya)\n- fix: fix missing explanation for then branch in case when [#1200](https://github.com/apache/datafusion-comet/pull/1200) (rluvaton)\n- fix: Fall back to Spark for unsupported partition or sort expressions in window aggregates [#1253](https://github.com/apache/datafusion-comet/pull/1253) (andygrove)\n- fix: Fall back to Spark for distinct aggregates [#1262](https://github.com/apache/datafusion-comet/pull/1262) (andygrove)\n- fix: disable initCap by default [#1276](https://github.com/apache/datafusion-comet/pull/1276) (kazuyukitanimura)\n\n**Performance related:**\n\n- perf: Stop passing Java config map into native createPlan [#1101](https://github.com/apache/datafusion-comet/pull/1101) (andygrove)\n- feat: Make native shuffle compression configurable and respect `spark.shuffle.compress` [#1185](https://github.com/apache/datafusion-comet/pull/1185) (andygrove)\n- perf: Improve query planning to more reliably fall back to columnar shuffle when native shuffle is not supported [#1209](https://github.com/apache/datafusion-comet/pull/1209) (andygrove)\n- feat: Move shuffle block decompression and decoding to native code and add LZ4 & Snappy support [#1192](https://github.com/apache/datafusion-comet/pull/1192) (andygrove)\n- feat: Implement custom RecordBatch serde for shuffle for improved performance [#1190](https://github.com/apache/datafusion-comet/pull/1190) (andygrove)\n\n**Implemented 
enhancements:**\n\n- feat: support array_insert [#1073](https://github.com/apache/datafusion-comet/pull/1073) (SemyonSinchenko)\n- feat: enable decimal to decimal cast of different precision and scale [#1086](https://github.com/apache/datafusion-comet/pull/1086) (himadripal)\n- feat: Improve ScanExec native metrics [#1133](https://github.com/apache/datafusion-comet/pull/1133) (andygrove)\n- feat: Add Spark-compatible implementation of SchemaAdapterFactory [#1169](https://github.com/apache/datafusion-comet/pull/1169) (andygrove)\n- feat: Improve shuffle metrics (second attempt) [#1175](https://github.com/apache/datafusion-comet/pull/1175) (andygrove)\n- feat: Add a `spark.comet.exec.memoryPool` configuration for experimenting with various datafusion memory pool setups. [#1021](https://github.com/apache/datafusion-comet/pull/1021) (Kontinuation)\n- feat: Reenable tests for filtered SMJ anti join [#1211](https://github.com/apache/datafusion-comet/pull/1211) (comphead)\n- feat: add support for array_remove expression [#1179](https://github.com/apache/datafusion-comet/pull/1179) (jatin510)\n\n**Documentation updates:**\n\n- docs: Update documentation for 0.4.0 release [#1096](https://github.com/apache/datafusion-comet/pull/1096) (andygrove)\n- docs: Fix readme typo FGPA -> FPGA [#1117](https://github.com/apache/datafusion-comet/pull/1117) (gstvg)\n- docs: Add more technical detail and new diagram to Comet plugin overview [#1119](https://github.com/apache/datafusion-comet/pull/1119) (andygrove)\n- docs: Add some documentation explaining how shuffle works [#1148](https://github.com/apache/datafusion-comet/pull/1148) (andygrove)\n- docs: Update TPC-H benchmark results [#1257](https://github.com/apache/datafusion-comet/pull/1257) (andygrove)\n\n**Other:**\n\n- chore: Add changelog for 0.4.0 [#1089](https://github.com/apache/datafusion-comet/pull/1089) (andygrove)\n- chore: Prepare for 0.5.0 development [#1090](https://github.com/apache/datafusion-comet/pull/1090) (andygrove)\n- build: Skip installation of spark-integration and fuzz testing modules [#1091](https://github.com/apache/datafusion-comet/pull/1091) (parthchandra)\n- minor: Add hint for finding the GPG key to use when publishing to maven [#1093](https://github.com/apache/datafusion-comet/pull/1093) (andygrove)\n- chore: Include first ScanExec batch in metrics [#1105](https://github.com/apache/datafusion-comet/pull/1105) (andygrove)\n- chore: Improve CometScan metrics [#1100](https://github.com/apache/datafusion-comet/pull/1100) (andygrove)\n- chore: Add custom metric for native shuffle fetching batches from JVM [#1108](https://github.com/apache/datafusion-comet/pull/1108) (andygrove)\n- chore: Remove unused StringView struct [#1143](https://github.com/apache/datafusion-comet/pull/1143) (andygrove)\n- test: enable more Spark 4.0 tests [#1145](https://github.com/apache/datafusion-comet/pull/1145) (kazuyukitanimura)\n- chore: Refactor cast to use SparkCastOptions param [#1146](https://github.com/apache/datafusion-comet/pull/1146) (andygrove)\n- chore: Move more expressions from core crate to spark-expr crate [#1152](https://github.com/apache/datafusion-comet/pull/1152) (andygrove)\n- chore: Remove dead code [#1155](https://github.com/apache/datafusion-comet/pull/1155) (andygrove)\n- chore: Move string kernels and expressions to spark-expr crate [#1164](https://github.com/apache/datafusion-comet/pull/1164) (andygrove)\n- chore: Move remaining expressions to spark-expr crate + some minor refactoring 
[#1165](https://github.com/apache/datafusion-comet/pull/1165) (andygrove)\n- chore: Add ignored tests for reading complex types from Parquet [#1167](https://github.com/apache/datafusion-comet/pull/1167) (andygrove)\n- test: enabling Spark tests with offHeap requirement [#1177](https://github.com/apache/datafusion-comet/pull/1177) (kazuyukitanimura)\n- minor: move shuffle classes from common to spark [#1193](https://github.com/apache/datafusion-comet/pull/1193) (andygrove)\n- minor: refactor to move decodeBatches to broadcast exchange code as private function [#1195](https://github.com/apache/datafusion-comet/pull/1195) (andygrove)\n- minor: refactor prepare_output so that it does not require an ExecutionContext [#1194](https://github.com/apache/datafusion-comet/pull/1194) (andygrove)\n- minor: remove unused source files [#1202](https://github.com/apache/datafusion-comet/pull/1202) (andygrove)\n- chore: Upgrade to DataFusion 44.0.0-rc2 [#1154](https://github.com/apache/datafusion-comet/pull/1154) (andygrove)\n- chore: Add safety check to CometBuffer [#1050](https://github.com/apache/datafusion-comet/pull/1050) (viirya)\n- chore: Remove unreachable code [#1213](https://github.com/apache/datafusion-comet/pull/1213) (andygrove)\n- test: Enable Comet by default except some tests in SparkSessionExtensionSuite [#1201](https://github.com/apache/datafusion-comet/pull/1201) (kazuyukitanimura)\n- chore: extract `struct` expressions to folders based on spark grouping [#1216](https://github.com/apache/datafusion-comet/pull/1216) (rluvaton)\n- chore: extract static invoke expressions to folders based on spark grouping [#1217](https://github.com/apache/datafusion-comet/pull/1217) (rluvaton)\n- chore: Follow-on PR to fully enable onheap memory usage [#1210](https://github.com/apache/datafusion-comet/pull/1210) (andygrove)\n- chore: extract agg_funcs expressions to folders based on spark grouping [#1224](https://github.com/apache/datafusion-comet/pull/1224) (rluvaton)\n- chore: extract datetime_funcs expressions to folders based on spark grouping [#1222](https://github.com/apache/datafusion-comet/pull/1222) (rluvaton)\n- chore: Upgrade to DataFusion 44.0.0 from 44.0.0 RC2 [#1232](https://github.com/apache/datafusion-comet/pull/1232) (rluvaton)\n- chore: extract strings file to `strings_func` like in spark grouping [#1215](https://github.com/apache/datafusion-comet/pull/1215) (rluvaton)\n- chore: extract predicate_functions expressions to folders based on spark grouping [#1218](https://github.com/apache/datafusion-comet/pull/1218) (rluvaton)\n- build(deps): bump protobuf version to 3.21.12 [#1234](https://github.com/apache/datafusion-comet/pull/1234) (wForget)\n- chore: extract json_funcs expressions to folders based on spark grouping [#1220](https://github.com/apache/datafusion-comet/pull/1220) (rluvaton)\n- test: Enable shuffle by default in Spark tests [#1240](https://github.com/apache/datafusion-comet/pull/1240) (kazuyukitanimura)\n- chore: extract hash_funcs expressions to folders based on spark grouping [#1221](https://github.com/apache/datafusion-comet/pull/1221) (rluvaton)\n- build: Fix test failure caused by merging conflicting PRs [#1259](https://github.com/apache/datafusion-comet/pull/1259) (andygrove)\n\n## Credits\n\nThank you to everyone who contributed to this release. 
Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    37\tAndy Grove\n    10\tRaz Luvaton\n     7\tKAZUYUKI TANIMURA\n     3\tLiang-Chi Hsieh\n     2\tParth Chandra\n     1\tAdam Binford\n     1\tDharan Aditya\n     1\tHimadri Pal\n     1\tJagdish Parihar\n     1\tKristin Cowalcijk\n     1\tMatt Butrovich\n     1\tOleks V\n     1\tSem\n     1\tZhen Wang\n     1\tgstvg\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.6.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.6.0 Changelog\n\n**Fixed bugs:**\n\n- fix: cast timestamp to decimal is unsupported [#1281](https://github.com/apache/datafusion-comet/pull/1281) (wForget)\n- fix: partially fix consistency issue of hash functions with decimal input [#1295](https://github.com/apache/datafusion-comet/pull/1295) (wForget)\n- fix: Improve testing for array_remove and fallback to Spark for unsupported types [#1308](https://github.com/apache/datafusion-comet/pull/1308) (andygrove)\n- fix: address post merge comet-parquet-exec review comments [#1327](https://github.com/apache/datafusion-comet/pull/1327) (parthchandra)\n- fix: memory pool error type [#1346](https://github.com/apache/datafusion-comet/pull/1346) (kazuyukitanimura)\n- fix: Fall back to Spark when hashing decimals with precision > 18 [#1325](https://github.com/apache/datafusion-comet/pull/1325) (andygrove)\n- fix: expressions doc for ArrayRemove [#1356](https://github.com/apache/datafusion-comet/pull/1356) (kazuyukitanimura)\n- fix: pass scale to DF round in spark_round [#1341](https://github.com/apache/datafusion-comet/pull/1341) (cht42)\n- fix: Mark cast from float/double to decimal as incompatible [#1372](https://github.com/apache/datafusion-comet/pull/1372) (andygrove)\n- fix: Passthrough condition in StaticInvoke case block [#1392](https://github.com/apache/datafusion-comet/pull/1392) (EmilyMatt)\n- fix: disable checking for uint_8 and uint_16 if complex type readers are enabled [#1376](https://github.com/apache/datafusion-comet/pull/1376) (parthchandra)\n\n**Performance related:**\n\n- perf: improve performance of update metrics [#1329](https://github.com/apache/datafusion-comet/pull/1329) (wForget)\n- perf: Use DataFusion FilterExec for experimental native scans [#1395](https://github.com/apache/datafusion-comet/pull/1395) (mbutrovich)\n\n**Implemented enhancements:**\n\n- feat: Add HasRowIdMapping interface [#1288](https://github.com/apache/datafusion-comet/pull/1288) (viirya)\n- feat: Upgrade to DataFusion 45 [#1364](https://github.com/apache/datafusion-comet/pull/1364) (andygrove)\n- feat: Add fair unified memory pool [#1369](https://github.com/apache/datafusion-comet/pull/1369) (kazuyukitanimura)\n- feat: Add unbounded memory pool [#1386](https://github.com/apache/datafusion-comet/pull/1386) (kazuyukitanimura)\n- feat: make random seed configurable in fuzz-testing [#1401](https://github.com/apache/datafusion-comet/pull/1401) (wForget)\n- feat: override executor overhead memory only when comet unified memory manager is disabled [#1379](https://github.com/apache/datafusion-comet/pull/1379) (wForget)\n\n**Documentation updates:**\n\n- docs: Fix links and provide complete benchmarking scripts 
[#1284](https://github.com/apache/datafusion-comet/pull/1284) (andygrove)\n- doc: update memory tuning guide [#1394](https://github.com/apache/datafusion-comet/pull/1394) (kazuyukitanimura)\n\n**Other:**\n\n- chore: Start 0.6.0 development [#1286](https://github.com/apache/datafusion-comet/pull/1286) (andygrove)\n- minor: update compatibility [#1303](https://github.com/apache/datafusion-comet/pull/1303) (kazuyukitanimura)\n- chore: extract conversion_funcs, conditional_funcs, bitwise_funcs and array_funcs expressions to folders based on spark grouping [#1223](https://github.com/apache/datafusion-comet/pull/1223) (rluvaton)\n- chore: extract math_funcs expressions to folders based on spark grouping [#1219](https://github.com/apache/datafusion-comet/pull/1219) (rluvaton)\n- chore: merge comet-parquet-exec branch into main [#1318](https://github.com/apache/datafusion-comet/pull/1318) (andygrove)\n- Feat: Support array_intersect function [#1271](https://github.com/apache/datafusion-comet/pull/1271) (erenavsarogullari)\n- build(deps): bump pprof from 0.13.0 to 0.14.0 in /native [#1319](https://github.com/apache/datafusion-comet/pull/1319) (dependabot[bot])\n- chore: Fix merge conflicts from merging comet-parquet-exec into main [#1320](https://github.com/apache/datafusion-comet/pull/1320) (andygrove)\n- chore: Revert accidental re-introduction of off-heap memory requirement [#1326](https://github.com/apache/datafusion-comet/pull/1326) (andygrove)\n- chore: Fix merge conflicts from merging comet-parquet-exec into main [#1323](https://github.com/apache/datafusion-comet/pull/1323) (mbutrovich)\n- Feat: Support array_join function [#1290](https://github.com/apache/datafusion-comet/pull/1290) (erenavsarogullari)\n- Fix missing slash in spark script [#1334](https://github.com/apache/datafusion-comet/pull/1334) (xleoken)\n- chore: Refactor QueryPlanSerde to allow logic to be moved to individual classes per expression [#1331](https://github.com/apache/datafusion-comet/pull/1331) (andygrove)\n- build: re-enable upload-test-reports for macos-13 runner [#1335](https://github.com/apache/datafusion-comet/pull/1335) (viirya)\n- chore: Upgrade to Arrow 53.4.0 [#1338](https://github.com/apache/datafusion-comet/pull/1338) (andygrove)\n- Feat: Support arrays_overlap function [#1312](https://github.com/apache/datafusion-comet/pull/1312) (erenavsarogullari)\n- chore: Move all array\\_\\* serde to new framework, use correct INCOMPAT config [#1349](https://github.com/apache/datafusion-comet/pull/1349) (andygrove)\n- chore: Prepare for DataFusion 45 (bump to DataFusion rev 5592834 + Arrow 54.0.0) [#1332](https://github.com/apache/datafusion-comet/pull/1332) (andygrove)\n- minor: commit compatibility doc [#1358](https://github.com/apache/datafusion-comet/pull/1358) (kazuyukitanimura)\n- minor: update fuzz dependency [#1357](https://github.com/apache/datafusion-comet/pull/1357) (kazuyukitanimura)\n- chore: Remove redundant processing from exprToProtoInternal [#1351](https://github.com/apache/datafusion-comet/pull/1351) (andygrove)\n- chore: Adding an optional `hdfs` crate [#1377](https://github.com/apache/datafusion-comet/pull/1377) (comphead)\n- chore: Refactor aggregate expression serde [#1380](https://github.com/apache/datafusion-comet/pull/1380) (andygrove)\n"
  },
  {
    "path": "dev/changelog/0.7.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.7.0 Changelog\n\nThis release consists of 46 commits from 11 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: Change default value of COMET_SCAN_ALLOW_INCOMPATIBLE and add documentation [#1398](https://github.com/apache/datafusion-comet/pull/1398) (andygrove)\n- fix: Reduce cast.rs and utils.rs logic from parquet_support.rs for experimental native scans [#1387](https://github.com/apache/datafusion-comet/pull/1387) (mbutrovich)\n- fix: Remove more cast.rs logic from parquet_support.rs for experimental native scans [#1413](https://github.com/apache/datafusion-comet/pull/1413) (mbutrovich)\n- fix: fix various unit test failures in native_datafusion and native_iceberg_compat readers [#1415](https://github.com/apache/datafusion-comet/pull/1415) (parthchandra)\n- fix: metrics tests for native_datafusion experimental native scan [#1445](https://github.com/apache/datafusion-comet/pull/1445) (mbutrovich)\n- fix: Reduce number of shuffle spill files, fix spilled_bytes metric, add some unit tests [#1440](https://github.com/apache/datafusion-comet/pull/1440) (andygrove)\n- fix: Executor memory overhead overriding [#1462](https://github.com/apache/datafusion-comet/pull/1462) (LukMRVC)\n- fix: Stop copying rust-toolchain to docker file [#1475](https://github.com/apache/datafusion-comet/pull/1475) (andygrove)\n- fix: PartitionBuffers should not have their own MemoryConsumer [#1496](https://github.com/apache/datafusion-comet/pull/1496) (EmilyMatt)\n- fix: enable full decimal to decimal support [#1385](https://github.com/apache/datafusion-comet/pull/1385) (himadripal)\n- fix: use common implementation of handling object store and hdfs urls for native_datafusion and native_iceberg_compat [#1494](https://github.com/apache/datafusion-comet/pull/1494) (parthchandra)\n- fix: Simplify CometShuffleMemoryAllocator logic, rename classes, remove config [#1485](https://github.com/apache/datafusion-comet/pull/1485) (mbutrovich)\n- fix: check overflow for decimal integral division [#1512](https://github.com/apache/datafusion-comet/pull/1512) (wForget)\n\n**Performance related:**\n\n- perf: Update RewriteJoin logic to choose optimal build side [#1424](https://github.com/apache/datafusion-comet/pull/1424) (andygrove)\n- perf: Reduce native shuffle memory overhead by 50% [#1452](https://github.com/apache/datafusion-comet/pull/1452) (andygrove)\n\n**Implemented enhancements:**\n\n- feat: CometNativeScan metrics from ParquetFileMetrics and FileStreamMetrics [#1172](https://github.com/apache/datafusion-comet/pull/1172) (mbutrovich)\n- feat: add experimental remote HDFS support for native DataFusion reader 
[#1359](https://github.com/apache/datafusion-comet/pull/1359) (comphead)\n- feat: add Win-amd64 profile [#1410](https://github.com/apache/datafusion-comet/pull/1410) (wForget)\n- feat: Support IntegralDivide function [#1428](https://github.com/apache/datafusion-comet/pull/1428) (wForget)\n- feat: Add div operator for fuzz testing and update expression doc [#1464](https://github.com/apache/datafusion-comet/pull/1464) (wForget)\n- feat: Upgrade to DataFusion 46.0.0-rc2 [#1423](https://github.com/apache/datafusion-comet/pull/1423) (andygrove)\n- feat: Add support for rpad [#1470](https://github.com/apache/datafusion-comet/pull/1470) (andygrove)\n- feat: Use official DataFusion 46.0.0 release [#1484](https://github.com/apache/datafusion-comet/pull/1484) (andygrove)\n\n**Documentation updates:**\n\n- docs: Add changelog for 0.6.0 release [#1402](https://github.com/apache/datafusion-comet/pull/1402) (andygrove)\n- docs: Improve documentation for running stability plan tests [#1469](https://github.com/apache/datafusion-comet/pull/1469) (andygrove)\n\n**Other:**\n\n- test: Add experimental native scans to CometReadBenchmark [#1150](https://github.com/apache/datafusion-comet/pull/1150) (mbutrovich)\n- chore: Prepare for 0.7.0 development [#1404](https://github.com/apache/datafusion-comet/pull/1404) (andygrove)\n- chore: Update released version in documentation [#1418](https://github.com/apache/datafusion-comet/pull/1418) (andygrove)\n- chore: Update protobuf to 3.25.5 [#1434](https://github.com/apache/datafusion-comet/pull/1434) (kazuyukitanimura)\n- chore: Update guava to 33.2.1-jre [#1435](https://github.com/apache/datafusion-comet/pull/1435) (kazuyukitanimura)\n- test: Register Spark-compatible expressions with a DataFusion context [#1432](https://github.com/apache/datafusion-comet/pull/1432) (viczsaurav)\n- chore: fixes for kube build [#1421](https://github.com/apache/datafusion-comet/pull/1421) (comphead)\n- build: pin machete to version 0.7.0 [#1444](https://github.com/apache/datafusion-comet/pull/1444) (andygrove)\n- chore: Re-organize shuffle writer code [#1439](https://github.com/apache/datafusion-comet/pull/1439) (andygrove)\n- chore: faster maven mirror [#1447](https://github.com/apache/datafusion-comet/pull/1447) (comphead)\n- build: Use stable channel in rust-toolchain [#1465](https://github.com/apache/datafusion-comet/pull/1465) (andygrove)\n- Feat: support array_compact function [#1321](https://github.com/apache/datafusion-comet/pull/1321) (kazantsev-maksim)\n- chore: Upgrade to Spark 3.5.4 [#1471](https://github.com/apache/datafusion-comet/pull/1471) (andygrove)\n- chore: Enable CI checks for `native_datafusion` scan [#1479](https://github.com/apache/datafusion-comet/pull/1479) (andygrove)\n- chore: Add `native_iceberg_compat` CI checks [#1487](https://github.com/apache/datafusion-comet/pull/1487) (andygrove)\n- chore: Stop disabling readside padding in TPC stability suite [#1491](https://github.com/apache/datafusion-comet/pull/1491) (andygrove)\n- chore: Remove num partitions from repartitioner [#1498](https://github.com/apache/datafusion-comet/pull/1498) (EmilyMatt)\n- test: fix Spark 3.5 tests [#1482](https://github.com/apache/datafusion-comet/pull/1482) (kazuyukitanimura)\n- minor: Remove hard-coded config default [#1503](https://github.com/apache/datafusion-comet/pull/1503) (andygrove)\n- chore: Use Datafusion's existing empty stream [#1517](https://github.com/apache/datafusion-comet/pull/1517) (EmilyMatt)\n\n## Credits\n\nThank you to everyone who contributed to this release. 
Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    20\tAndy Grove\n     6\tMatt Butrovich\n     4\tZhen Wang\n     3\tEmily Matheys\n     3\tKAZUYUKI TANIMURA\n     3\tOleks V\n     2\tHimadri Pal\n     2\tParth Chandra\n     1\tKazantsev Maksim\n     1\tLukas Moravec\n     1\tSaurav Verma\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.8.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.8.0 Changelog\n\nThis release consists of 81 commits from 11 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: remove code duplication in native_datafusion and native_iceberg_compat implementations [#1443](https://github.com/apache/datafusion-comet/pull/1443) (parthchandra)\n- fix: Refactor CometScanRule and fix bugs [#1483](https://github.com/apache/datafusion-comet/pull/1483) (andygrove)\n- fix: check if handle has been initialized before closing [#1554](https://github.com/apache/datafusion-comet/pull/1554) (wForget)\n- fix: Taking slicing into account when writing BooleanBuffers as fast-encoding format [#1522](https://github.com/apache/datafusion-comet/pull/1522) (Kontinuation)\n- fix: isCometEnabled name conflict [#1569](https://github.com/apache/datafusion-comet/pull/1569) (kazuyukitanimura)\n- fix: make register_object_store use same session_env as file scan [#1555](https://github.com/apache/datafusion-comet/pull/1555) (wForget)\n- fix: adjust CometNativeScan's doCanonicalize and hashCode for AQE, use DataSourceScanExec trait [#1578](https://github.com/apache/datafusion-comet/pull/1578) (mbutrovich)\n- fix: corrected the logic of eliminating CometSparkToColumnarExec [#1597](https://github.com/apache/datafusion-comet/pull/1597) (wForget)\n- fix: avoid panic caused by close null handle of parquet reader [#1604](https://github.com/apache/datafusion-comet/pull/1604) (wForget)\n- fix: Make AQE capable of converting Comet shuffled joins to Comet broadcast hash joins [#1605](https://github.com/apache/datafusion-comet/pull/1605) (Kontinuation)\n- fix: Making shuffle files generated in native shuffle mode reclaimable [#1568](https://github.com/apache/datafusion-comet/pull/1568) (Kontinuation)\n- fix: Support per-task shuffle write rows and shuffle write time metrics [#1617](https://github.com/apache/datafusion-comet/pull/1617) (Kontinuation)\n- fix: Modify Spark SQL core 2 tests for `native_datafusion` reader, change 3.5.5 diff hash length to 11 [#1641](https://github.com/apache/datafusion-comet/pull/1641) (mbutrovich)\n- fix: fix spark/sql test failures in native_iceberg_compat [#1593](https://github.com/apache/datafusion-comet/pull/1593) (parthchandra)\n- fix: handle missing field correctly in native_iceberg_compat [#1656](https://github.com/apache/datafusion-comet/pull/1656) (parthchandra)\n- fix: better int96 support for experimental native scans [#1652](https://github.com/apache/datafusion-comet/pull/1652) (mbutrovich)\n- fix: respect `ignoreNulls` flag in `first_value` and `last_value` [#1626](https://github.com/apache/datafusion-comet/pull/1626) (andygrove)\n- fix: update row groups count in internal metrics accumulator 
[#1658](https://github.com/apache/datafusion-comet/pull/1658) (parthchandra)\n- fix: Shuffle should maintain insertion order [#1660](https://github.com/apache/datafusion-comet/pull/1660) (EmilyMatt)\n\n**Performance related:**\n\n- perf: Use a global tokio runtime [#1614](https://github.com/apache/datafusion-comet/pull/1614) (andygrove)\n- perf: Respect Spark's PARQUET_FILTER_PUSHDOWN_ENABLED config [#1619](https://github.com/apache/datafusion-comet/pull/1619) (andygrove)\n- perf: Experimental fix to avoid join strategy regression [#1674](https://github.com/apache/datafusion-comet/pull/1674) (andygrove)\n\n**Implemented enhancements:**\n\n- feat: add read array support [#1456](https://github.com/apache/datafusion-comet/pull/1456) (comphead)\n- feat: introduce hadoop mini cluster to test native scan on hdfs [#1556](https://github.com/apache/datafusion-comet/pull/1556) (wForget)\n- feat: make parquet native scan schema case insensitive [#1575](https://github.com/apache/datafusion-comet/pull/1575) (wForget)\n- feat: enable iceberg compat tests, more tests for complex types [#1550](https://github.com/apache/datafusion-comet/pull/1550) (comphead)\n- feat: pushdown filter for native_iceberg_compat [#1566](https://github.com/apache/datafusion-comet/pull/1566) (wForget)\n- feat: Fix struct of arrays schema issue [#1592](https://github.com/apache/datafusion-comet/pull/1592) (comphead)\n- feat: adding more struct/arrays tests [#1594](https://github.com/apache/datafusion-comet/pull/1594) (comphead)\n- feat: respect `batchSize/workerThreads/blockingThreads` configurations for native_iceberg_compat scan [#1587](https://github.com/apache/datafusion-comet/pull/1587) (wForget)\n- feat: add MAP type support for first level [#1603](https://github.com/apache/datafusion-comet/pull/1603) (comphead)\n- feat: Add more tests for nested types combinations for `native_datafusion` [#1632](https://github.com/apache/datafusion-comet/pull/1632) (comphead)\n- feat: Override MapBuilder values field with expected schema [#1643](https://github.com/apache/datafusion-comet/pull/1643) (comphead)\n- feat: track unified memory pool [#1651](https://github.com/apache/datafusion-comet/pull/1651) (wForget)\n- feat: Add support for complex types in native shuffle [#1655](https://github.com/apache/datafusion-comet/pull/1655) (andygrove)\n\n**Documentation updates:**\n\n- docs: Update configuration guide to show optional configs [#1524](https://github.com/apache/datafusion-comet/pull/1524) (andygrove)\n- docs: Add changelog for 0.7.0 release [#1527](https://github.com/apache/datafusion-comet/pull/1527) (andygrove)\n- docs: Use a shallow clone for Spark SQL test instructions [#1547](https://github.com/apache/datafusion-comet/pull/1547) (mbutrovich)\n- docs: Update benchmark results for 0.7.0 release [#1548](https://github.com/apache/datafusion-comet/pull/1548) (andygrove)\n- doc: Renew `kubernetes.md` [#1549](https://github.com/apache/datafusion-comet/pull/1549) (comphead)\n- docs: various improvements to tuning guide [#1525](https://github.com/apache/datafusion-comet/pull/1525) (andygrove)\n- docs: Update supported Spark versions [#1580](https://github.com/apache/datafusion-comet/pull/1580) (andygrove)\n- docs: change OSX/OS X to macOS [#1584](https://github.com/apache/datafusion-comet/pull/1584) (mbutrovich)\n- docs: docs for benchmarking in aws ec2 [#1601](https://github.com/apache/datafusion-comet/pull/1601) (andygrove)\n- docs: Update compatibility docs for new native scans 
[#1657](https://github.com/apache/datafusion-comet/pull/1657) (andygrove)\n- doc: Document local HDFS setup [#1673](https://github.com/apache/datafusion-comet/pull/1673) (comphead)\n\n**Other:**\n\n- chore: fix issue in release process [#1528](https://github.com/apache/datafusion-comet/pull/1528) (andygrove)\n- chore: Remove all subdependencies [#1514](https://github.com/apache/datafusion-comet/pull/1514) (EmilyMatt)\n- chore: Drop support for Spark 3.3 (EOL) [#1529](https://github.com/apache/datafusion-comet/pull/1529) (andygrove)\n- chore: Prepare for 0.8.0 development [#1530](https://github.com/apache/datafusion-comet/pull/1530) (andygrove)\n- chore: Re-enable GitHub discussions [#1535](https://github.com/apache/datafusion-comet/pull/1535) (andygrove)\n- chore: [FOLLOWUP] Drop support for Spark 3.3 (EOL) [#1534](https://github.com/apache/datafusion-comet/pull/1534) (kazuyukitanimura)\n- build: Use unique name for surefire artifacts [#1544](https://github.com/apache/datafusion-comet/pull/1544) (andygrove)\n- chore: Update links for released version [#1540](https://github.com/apache/datafusion-comet/pull/1540) (andygrove)\n- chore: Enable Comet explicitly in `CometTPCDSQueryTestSuite` [#1559](https://github.com/apache/datafusion-comet/pull/1559) (andygrove)\n- chore: Fix some inconsistencies in memory pool configuration [#1561](https://github.com/apache/datafusion-comet/pull/1561) (andygrove)\n- upgraded spark 3.5.4 to 3.5.5 [#1565](https://github.com/apache/datafusion-comet/pull/1565) (YanivKunda)\n- minor: fix typo [#1570](https://github.com/apache/datafusion-comet/pull/1570) (wForget)\n- Chore: simplify array related functions impl [#1490](https://github.com/apache/datafusion-comet/pull/1490) (kazantsev-maksim)\n- added fallback using reflection for backward-compatibility [#1573](https://github.com/apache/datafusion-comet/pull/1573) (YanivKunda)\n- chore: Override node name for CometSparkToColumnar [#1577](https://github.com/apache/datafusion-comet/pull/1577) (l0kr)\n- chore: Reimplement ShuffleWriterExec using interleave_record_batch [#1511](https://github.com/apache/datafusion-comet/pull/1511) (Kontinuation)\n- chore: Run Comet tests for more Spark versions [#1582](https://github.com/apache/datafusion-comet/pull/1582) (andygrove)\n- Feat: support array_except function [#1343](https://github.com/apache/datafusion-comet/pull/1343) (kazantsev-maksim)\n- minor: Fix clippy warnings [#1606](https://github.com/apache/datafusion-comet/pull/1606) (Kontinuation)\n- chore: Remove some unwraps in hashing code [#1600](https://github.com/apache/datafusion-comet/pull/1600) (andygrove)\n- chore: Remove redundant shims for getFailOnError [#1608](https://github.com/apache/datafusion-comet/pull/1608) (andygrove)\n- chore: Making comet native operators write spill files to spark local dir [#1581](https://github.com/apache/datafusion-comet/pull/1581) (Kontinuation)\n- chore: Refactor QueryPlanSerde to use idiomatic Scala and reduce verbosity [#1609](https://github.com/apache/datafusion-comet/pull/1609) (andygrove)\n- chore: Create simple fuzz test as part of test suite [#1610](https://github.com/apache/datafusion-comet/pull/1610) (andygrove)\n- chore: Document `testSingleLineQuery` test method [#1628](https://github.com/apache/datafusion-comet/pull/1628) (comphead)\n- chore: Parquet fuzz testing [#1623](https://github.com/apache/datafusion-comet/pull/1623) (andygrove)\n- chore: Change default Spark version to 3.5 [#1620](https://github.com/apache/datafusion-comet/pull/1620) (andygrove)\n- chore: Add 
manually-triggered CI jobs for testing Spark SQL with native scans [#1624](https://github.com/apache/datafusion-comet/pull/1624) (andygrove)\n- chore: refactor v2 scan conversion [#1621](https://github.com/apache/datafusion-comet/pull/1621) (andygrove)\n- chore: clean up `planner.rs` [#1650](https://github.com/apache/datafusion-comet/pull/1650) (comphead)\n- chore: correct name of pipelines for native_datafusion ci workflow [#1653](https://github.com/apache/datafusion-comet/pull/1653) (parthchandra)\n- chore: Upgrade to datafusion 47.0.0-rc1 and arrow-rs 55.0.0 [#1563](https://github.com/apache/datafusion-comet/pull/1563) (andygrove)\n- chore: Upgrade to datafusion 47.0.0 [#1663](https://github.com/apache/datafusion-comet/pull/1663) (YanivKunda)\n- chore: Enable CometFuzzTestSuite int96 test for experimental native scans (without complex types) [#1664](https://github.com/apache/datafusion-comet/pull/1664) (mbutrovich)\n- chore: Refactor Memory Pools [#1662](https://github.com/apache/datafusion-comet/pull/1662) (EmilyMatt)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    31\tAndy Grove\n    11\tOleks V\n    10\tZhen Wang\n     7\tKristin Cowalcijk\n     6\tMatt Butrovich\n     5\tParth Chandra\n     3\tEmily Matheys\n     3\tYaniv Kunda\n     2\tKAZUYUKI TANIMURA\n     2\tKazantsev Maksim\n     1\tŁukasz\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.9.0.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.9.0 Changelog\n\nThis release consists of 139 commits from 24 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: typo for `instr` in fuzz testing [#1686](https://github.com/apache/datafusion-comet/pull/1686) (mbutrovich)\n- fix: Bucketed scan fallback for native_datafusion Parquet scan [#1720](https://github.com/apache/datafusion-comet/pull/1720) (mbutrovich)\n- fix: Skip row index Spark SQL tests for native_datafusion Parquet scan [#1724](https://github.com/apache/datafusion-comet/pull/1724) (mbutrovich)\n- fix: Check acquired memory when CometMemoryPool grows [#1732](https://github.com/apache/datafusion-comet/pull/1732) (wForget)\n- fix: Fix data race in memory profiling [#1727](https://github.com/apache/datafusion-comet/pull/1727) (andygrove)\n- fix: Enable some DPP Spark SQL tests [#1734](https://github.com/apache/datafusion-comet/pull/1734) (andygrove)\n- fix: support literal null list and map [#1742](https://github.com/apache/datafusion-comet/pull/1742) (kazuyukitanimura)\n- fix: get_struct field is incorrect when struct in array [#1687](https://github.com/apache/datafusion-comet/pull/1687) (comphead)\n- fix: cast map types correctly in schema adapter [#1771](https://github.com/apache/datafusion-comet/pull/1771) (parthchandra)\n- fix: correct schema type checking in native_iceberg_compat [#1755](https://github.com/apache/datafusion-comet/pull/1755) (parthchandra)\n- fix: default values for native_datafusion scan [#1756](https://github.com/apache/datafusion-comet/pull/1756) (mbutrovich)\n- fix: [native_scans] Support `CASE_SENSITIVE` when reading Parquet [#1782](https://github.com/apache/datafusion-comet/pull/1782) (andygrove)\n- fix: cargo install tpchgen-cli in benchmark doc [#1797](https://github.com/apache/datafusion-comet/pull/1797) (zhuqi-lucas)\n- fix: support `map_keys` [#1788](https://github.com/apache/datafusion-comet/pull/1788) (comphead)\n- fix: fall back on nested types for default values [#1799](https://github.com/apache/datafusion-comet/pull/1799) (mbutrovich)\n- fix: Re-enable Spark 4 tests on Linux [#1806](https://github.com/apache/datafusion-comet/pull/1806) (andygrove)\n- fix: fallback to Spark scan if encryption is enabled (native_datafusion/native_iceberg_compat) [#1785](https://github.com/apache/datafusion-comet/pull/1785) (parthchandra)\n- fix: native_iceberg_compat: move checking parquet types above fetching batch [#1809](https://github.com/apache/datafusion-comet/pull/1809) (mbutrovich)\n- fix: translate missing or corrupt file exceptions, fall back if asked to ignore [#1765](https://github.com/apache/datafusion-comet/pull/1765) (mbutrovich)\n- fix: Fix Spark SQL AQE exchange reuse test failures 
[#1811](https://github.com/apache/datafusion-comet/pull/1811) (coderfender)\n- fix: Enable more Spark SQL tests [#1834](https://github.com/apache/datafusion-comet/pull/1834) (andygrove)\n- fix: support `map_values` [#1835](https://github.com/apache/datafusion-comet/pull/1835) (comphead)\n- fix: Handle case where num_cols == 0 in native execution [#1840](https://github.com/apache/datafusion-comet/pull/1840) (andygrove)\n- fix: Fix shuffle writing rows containing null struct fields [#1845](https://github.com/apache/datafusion-comet/pull/1845) (Kontinuation)\n- fix: Fall back to Spark for `RANGE BETWEEN` window expressions [#1848](https://github.com/apache/datafusion-comet/pull/1848) (andygrove)\n- fix: Remove COMET_SHUFFLE_FALLBACK_TO_COLUMNAR hack [#1865](https://github.com/apache/datafusion-comet/pull/1865) (andygrove)\n- fix: support read Struct by user schema [#1860](https://github.com/apache/datafusion-comet/pull/1860) (comphead)\n- fix: map parquet field_id correctly (native_iceberg_compat) [#1815](https://github.com/apache/datafusion-comet/pull/1815) (parthchandra)\n- fix: cast_struct_to_struct aligns to Spark behavior [#1879](https://github.com/apache/datafusion-comet/pull/1879) (mbutrovich)\n- fix: correctly handle schemas with nested array of struct (native_iceberg_compat) [#1883](https://github.com/apache/datafusion-comet/pull/1883) (parthchandra)\n- fix: set RangePartitioning for native shuffle default to false [#1907](https://github.com/apache/datafusion-comet/pull/1907) (mbutrovich)\n- fix: conflict between #1905 and #1892. [#1919](https://github.com/apache/datafusion-comet/pull/1919) (mbutrovich)\n- fix: Add overflow check to evaluate of sum decimal accumulator [#1922](https://github.com/apache/datafusion-comet/pull/1922) (leung-ming)\n- fix: Fix overflow handling when casting float to decimal [#1914](https://github.com/apache/datafusion-comet/pull/1914) (leung-ming)\n- fix: Ignore a test case fails on Miri [#1951](https://github.com/apache/datafusion-comet/pull/1951) (leung-ming)\n\n**Performance related:**\n\n- perf: Add memory profiling [#1702](https://github.com/apache/datafusion-comet/pull/1702) (andygrove)\n- perf: Add performance tracing capability [#1706](https://github.com/apache/datafusion-comet/pull/1706) (andygrove)\n- perf: Add `COMET_RESPECT_PARQUET_FILTER_PUSHDOWN` config [#1936](https://github.com/apache/datafusion-comet/pull/1936) (andygrove)\n\n**Implemented enhancements:**\n\n- feat: add jemalloc as optional custom allocator [#1679](https://github.com/apache/datafusion-comet/pull/1679) (mbutrovich)\n- feat: support `array_repeat` [#1680](https://github.com/apache/datafusion-comet/pull/1680) (comphead)\n- feat: More warning info for users [#1667](https://github.com/apache/datafusion-comet/pull/1667) (hsiang-c)\n- feat: decode() expression when using 'utf-8' encoding [#1697](https://github.com/apache/datafusion-comet/pull/1697) (mbutrovich)\n- feat: regexp_replace() expression with no starting offset [#1700](https://github.com/apache/datafusion-comet/pull/1700) (mbutrovich)\n- feat: Improve performance tracing feature [#1730](https://github.com/apache/datafusion-comet/pull/1730) (andygrove)\n- feat: Set/cancel with job tag and make max broadcast table size configurable [#1693](https://github.com/apache/datafusion-comet/pull/1693) (wForget)\n- feat: Add support for `expm1` expression from `datafusion-spark` crate [#1711](https://github.com/apache/datafusion-comet/pull/1711) (andygrove)\n- feat: Add config option for showing all Comet plan transformations 
[#1780](https://github.com/apache/datafusion-comet/pull/1780) (andygrove)\n- feat: Support Type widening: byte → short/int/long, short → int/long [#1770](https://github.com/apache/datafusion-comet/pull/1770) (huaxingao)\n- feat: Translate Hadoop S3A configurations to object_store configurations [#1817](https://github.com/apache/datafusion-comet/pull/1817) (Kontinuation)\n- feat: Upgrade to official DataFusion 48.0.0 release [#1877](https://github.com/apache/datafusion-comet/pull/1877) (andygrove)\n- feat: Add experimental auto mode for `COMET_PARQUET_SCAN_IMPL` [#1747](https://github.com/apache/datafusion-comet/pull/1747) (andygrove)\n- feat: support RangePartitioning with native shuffle [#1862](https://github.com/apache/datafusion-comet/pull/1862) (mbutrovich)\n- feat: Add support for signum expression [#1889](https://github.com/apache/datafusion-comet/pull/1889) (andygrove)\n- feat: Add support to lookup map by key [#1898](https://github.com/apache/datafusion-comet/pull/1898) (comphead)\n- feat: support array_max [#1892](https://github.com/apache/datafusion-comet/pull/1892) (drexler-sky)\n- feat: pass ignore_nulls flag to first and last [#1866](https://github.com/apache/datafusion-comet/pull/1866) (rluvaton)\n- feat: Implement ToPrettyString [#1921](https://github.com/apache/datafusion-comet/pull/1921) (andygrove)\n- feat: Support hadoop s3a config in native_iceberg_compat [#1925](https://github.com/apache/datafusion-comet/pull/1925) (parthchandra)\n- feat: rand expression support [#1199](https://github.com/apache/datafusion-comet/pull/1199) (akupchinskiy)\n- feat: supports array_distinct [#1923](https://github.com/apache/datafusion-comet/pull/1923) (drexler-sky)\n- feat: `auto` scan mode should check for supported file location [#1930](https://github.com/apache/datafusion-comet/pull/1930) (andygrove)\n- feat: Encapsulate Parquet objects [#1920](https://github.com/apache/datafusion-comet/pull/1920) (huaxingao)\n- feat: Change default value of `COMET_NATIVE_SCAN_IMPL` to `auto` [#1933](https://github.com/apache/datafusion-comet/pull/1933) (andygrove)\n- feat: Supports array_union [#1945](https://github.com/apache/datafusion-comet/pull/1945) (drexler-sky)\n\n**Documentation updates:**\n\n- docs: Add changelog for 0.8.0 [#1675](https://github.com/apache/datafusion-comet/pull/1675) (andygrove)\n- docs: Add instructions on running TPC-H on macOS [#1647](https://github.com/apache/datafusion-comet/pull/1647) (andygrove)\n- docs: Add documentation for accelerating Iceberg Parquet scans with Comet [#1683](https://github.com/apache/datafusion-comet/pull/1683) (andygrove)\n- docs: Add note on setting `core.abbrev` when generating diffs [#1735](https://github.com/apache/datafusion-comet/pull/1735) (andygrove)\n- docs: Remove outdated param in macos bench guide [#1748](https://github.com/apache/datafusion-comet/pull/1748) (ding-young)\n- docs: Add instructions for running individual Spark SQL tests from sbt [#1752](https://github.com/apache/datafusion-comet/pull/1752) (coderfender)\n- docs: Add documentation for native_datafusion Parquet scanner's S3 support [#1832](https://github.com/apache/datafusion-comet/pull/1832) (Kontinuation)\n- docs: Add docs stating that Comet does not support reading decimals encoded in Parquet BINARY format [#1895](https://github.com/apache/datafusion-comet/pull/1895) (andygrove)\n\n**Other:**\n\n- chore: Start 0.9.0 development [#1676](https://github.com/apache/datafusion-comet/pull/1676) (andygrove)\n- chore: Update viable crates 
[#1677](https://github.com/apache/datafusion-comet/pull/1677) (EmilyMatt)\n- chore: match Maven plugin versions with Spark 3.5 [#1668](https://github.com/apache/datafusion-comet/pull/1668) (hsiang-c)\n- chore: Remove fallback reason \"because the children were not native\" [#1672](https://github.com/apache/datafusion-comet/pull/1672) (andygrove)\n- chore: Rename `scalarExprToProto` to `scalarFunctionExprToProto` [#1688](https://github.com/apache/datafusion-comet/pull/1688) (comphead)\n- chore: fix build errors [#1690](https://github.com/apache/datafusion-comet/pull/1690) (comphead)\n- chore: Make Aggregate transformation more compact [#1670](https://github.com/apache/datafusion-comet/pull/1670) (EmilyMatt)\n- chore: update dev/release/rat_exclude_files.txt [#1689](https://github.com/apache/datafusion-comet/pull/1689) (hsiang-c)\n- chore: Move Comet rules into their own files [#1695](https://github.com/apache/datafusion-comet/pull/1695) (andygrove)\n- chore: Remove fast encoding option [#1703](https://github.com/apache/datafusion-comet/pull/1703) (andygrove)\n- chore: fix CI job name [#1712](https://github.com/apache/datafusion-comet/pull/1712) (hsiang-c)\n- minor: Warn if memory pool is dropped with bytes still reserved [#1721](https://github.com/apache/datafusion-comet/pull/1721) (andygrove)\n- chore: Correct memory acquired size in unified memory pool [#1738](https://github.com/apache/datafusion-comet/pull/1738) (zuston)\n- chore: allow large errors for Clippy [#1743](https://github.com/apache/datafusion-comet/pull/1743) (comphead)\n- chore: Refactor DataTypeSupport [#1741](https://github.com/apache/datafusion-comet/pull/1741) (andygrove)\n- chore: More refactoring of type checking logic [#1744](https://github.com/apache/datafusion-comet/pull/1744) (andygrove)\n- chore: Enable more complex type tests [#1753](https://github.com/apache/datafusion-comet/pull/1753) (andygrove)\n- chore: Add `scanImpl` attribute to `CometScanExec` [#1746](https://github.com/apache/datafusion-comet/pull/1746) (andygrove)\n- chore: Prepare for DataFusion 48.0.0 [#1710](https://github.com/apache/datafusion-comet/pull/1710) (andygrove)\n- Docs: Setup Comet on IntelliJ [#1760](https://github.com/apache/datafusion-comet/pull/1760) (coderfender)\n- chore: Reenable nested types for CometFuzzTestSuite with int96 [#1761](https://github.com/apache/datafusion-comet/pull/1761) (mbutrovich)\n- chore: Enable partial Spark SQL tests for `native_iceberg_compat` scan [#1762](https://github.com/apache/datafusion-comet/pull/1762) (andygrove)\n- chore: [native_iceberg_compat / native_datafusion] Ignore Spark SQL Parquet encryption tests [#1763](https://github.com/apache/datafusion-comet/pull/1763) (andygrove)\n- build: Ignore array_repeat test to fix CI issues [#1774](https://github.com/apache/datafusion-comet/pull/1774) (andygrove)\n- chore: Upload crash logs if Java tests fail [#1779](https://github.com/apache/datafusion-comet/pull/1779) (andygrove)\n- chore: Drop support for Java 8 [#1777](https://github.com/apache/datafusion-comet/pull/1777) (andygrove)\n- chore: Bump arrow to 18.3.0 [#1773](https://github.com/apache/datafusion-comet/pull/1773) (Kontinuation)\n- build: Stop running Comet's Spark 4 tests on Linux for PR builds [#1802](https://github.com/apache/datafusion-comet/pull/1802) (andygrove)\n- Chore: Moved strings expressions to separate file [#1792](https://github.com/apache/datafusion-comet/pull/1792) (kazantsev-maksim)\n- chore: Speed up \"PR Builds\" CI workflows 
[#1807](https://github.com/apache/datafusion-comet/pull/1807) (andygrove)\n- chore: [native scans] Ignore Spark SQL test for string predicate pushdown [#1768](https://github.com/apache/datafusion-comet/pull/1768) (andygrove)\n- chore: Bump DataFusion to git rev 2c2f225 [#1814](https://github.com/apache/datafusion-comet/pull/1814) (andygrove)\n- Feat: support bit_count function [#1602](https://github.com/apache/datafusion-comet/pull/1602) (kazantsev-maksim)\n- Chore: implement bit_not as ScalarUDFImpl [#1825](https://github.com/apache/datafusion-comet/pull/1825) (kazantsev-maksim)\n- build: Specify -Dsbt.log.noformat=true in sbt CI runs [#1822](https://github.com/apache/datafusion-comet/pull/1822) (andygrove)\n- chore: Use unique artifact names in Java test run [#1818](https://github.com/apache/datafusion-comet/pull/1818) (andygrove)\n- minor: Refactor PhysicalPlanner::default() to avoid duplicate code [#1821](https://github.com/apache/datafusion-comet/pull/1821) (andygrove)\n- Chore: implement bit_count as ScalarUDFImpl [#1826](https://github.com/apache/datafusion-comet/pull/1826) (kazantsev-maksim)\n- chore: IgnoreCometNativeScan on a few more Spark SQL tests [#1837](https://github.com/apache/datafusion-comet/pull/1837) (mbutrovich)\n- chore: Enable tests in RemoveRedundantProjectsSuite.scala related to issue #242 [#1838](https://github.com/apache/datafusion-comet/pull/1838) (rishvin)\n- minor: Replace many instances of `checkSparkAnswer` with `checkSparkAnswerAndOperator` [#1851](https://github.com/apache/datafusion-comet/pull/1851) (andygrove)\n- chore: Update documentation and ignore Spark SQL tests for known issue with count distinct on NaN in aggregate [#1847](https://github.com/apache/datafusion-comet/pull/1847) (andygrove)\n- chore: Ignore Spark SQL WholeStageCodegenSuite tests [#1859](https://github.com/apache/datafusion-comet/pull/1859) (andygrove)\n- chore: Upgrade to DataFusion 48.0.0-rc3 [#1863](https://github.com/apache/datafusion-comet/pull/1863) (andygrove)\n- upgraded spark 3.5.5 to 3.5.6 [#1861](https://github.com/apache/datafusion-comet/pull/1861) (YanivKunda)\n- build: Disable some rounding tests when miri is enabled [#1873](https://github.com/apache/datafusion-comet/pull/1873) (andygrove)\n- chore: Enable Spark SQL tests for `native_iceberg_compat` [#1876](https://github.com/apache/datafusion-comet/pull/1876) (andygrove)\n- chore: Enable more Spark SQL tests [#1869](https://github.com/apache/datafusion-comet/pull/1869) (andygrove)\n- chore: refactor planner read schema tests [#1886](https://github.com/apache/datafusion-comet/pull/1886) (comphead)\n- chore: Implement date_trunc as ScalarUDFImpl [#1880](https://github.com/apache/datafusion-comet/pull/1880) (leung-ming)\n- Chore: implement datetime funcs as ScalarUDFImpl [#1874](https://github.com/apache/datafusion-comet/pull/1874) (trompa)\n- minor: Improve testing of math scalar functions [#1896](https://github.com/apache/datafusion-comet/pull/1896) (andygrove)\n- minor: Avoid rewriting join to unsupported join [#1888](https://github.com/apache/datafusion-comet/pull/1888) (andygrove)\n- chore: Enable `native_iceberg_compat` Spark SQL tests (for real, this time) [#1910](https://github.com/apache/datafusion-comet/pull/1910) (andygrove)\n- chore: rename makeParquetFileAllTypes to makeParquetFileAllPrimitiveTypes [#1905](https://github.com/apache/datafusion-comet/pull/1905) (parthchandra)\n- chore: add a test case to read from an arbitrarily complex type schema [#1911](https://github.com/apache/datafusion-comet/pull/1911) 
(parthchandra)\n- test: Trigger Spark 3.4.3 SQL tests for iceberg-compat [#1912](https://github.com/apache/datafusion-comet/pull/1912) (kazuyukitanimura)\n- build: Fix conflict between #1910 and #1912 [#1924](https://github.com/apache/datafusion-comet/pull/1924) (andygrove)\n- minor: fix kube/Dockerfile build failed [#1918](https://github.com/apache/datafusion-comet/pull/1918) (zhangxffff)\n- chore: Improve reporting of fallback reasons for CollectLimit [#1694](https://github.com/apache/datafusion-comet/pull/1694) (andygrove)\n- chore: move udf registration to better place [#1899](https://github.com/apache/datafusion-comet/pull/1899) (rluvaton)\n- chore: Comet + Iceberg (1.8.1) CI [#1715](https://github.com/apache/datafusion-comet/pull/1715) (hsiang-c)\n- chore: Introduce `exprHandlers` map in QueryPlanSerde [#1903](https://github.com/apache/datafusion-comet/pull/1903) (andygrove)\n- chore: Enable Spark SQL tests for auto scan mode [#1885](https://github.com/apache/datafusion-comet/pull/1885) (andygrove)\n- Feat: support bit_get function [#1713](https://github.com/apache/datafusion-comet/pull/1713) (kazantsev-maksim)\n- chore: Clippy fixes for Rust 1.88 [#1939](https://github.com/apache/datafusion-comet/pull/1939) (andygrove)\n- Minor: Add unit tests for `ceil`/`floor` functions [#1728](https://github.com/apache/datafusion-comet/pull/1728) (tlm365)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n    62\tAndy Grove\n    16\tMatt Butrovich\n    10\tOleks V\n     8\tParth Chandra\n     5\tKazantsev Maksim\n     5\thsiang-c\n     4\tKristin Cowalcijk\n     4\tLeung Ming\n     3\tB Vadlamani\n     3\tdrexler-sky\n     2\tEmily Matheys\n     2\tHuaxin Gao\n     2\tKAZUYUKI TANIMURA\n     2\tRaz Luvaton\n     2\tZhen Wang\n     1\tArtem Kupchinskiy\n     1\tJunfan Zhang\n     1\tQi Zhu\n     1\tRishab Joshi\n     1\tTai Le Manh\n     1\tYaniv Kunda\n     1\tZhang Xiaofeng\n     1\tding-young\n     1\ttrompa\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/changelog/0.9.1.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# DataFusion Comet 0.9.1 Changelog\n\nThis release consists of 2 commits from 1 contributors. See credits at the end of this changelog for more information.\n\n**Fixed bugs:**\n\n- fix: [branch-0.9] Backport FFI fix [#2164](https://github.com/apache/datafusion-comet/pull/2164) (andygrove)\n- fix: [branch-0.9] Avoid double free in CometUnifiedShuffleMemoryAllocator [#2201](https://github.com/apache/datafusion-comet/pull/2201) (andygrove)\n\n## Credits\n\nThank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) per contributor.\n\n```\n     2\tAndy Grove\n```\n\nThank you also to everyone who contributed in other ways such as filing issues, reviewing PRs, and providing feedback on this release.\n"
  },
  {
    "path": "dev/checkstyle-suppressions.xml",
    "content": "<?xml version='1.0'?>\n<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n<!DOCTYPE suppressions PUBLIC\n\"-//Puppy Crawl//DTD Suppressions 1.1//EN\"\n\"https://checkstyle.org/dtds/suppressions_1_1.dtd\">\n\n<!--\n\n    This file contains suppression rules for Checkstyle checks.\n    Ideally only files that cannot be modified (e.g. third-party code)\n    should be added here. All other violations should be fixed.\n\n-->\n\n<suppressions>\n</suppressions>\n"
  },
  {
    "path": "dev/ci/check-suites.py",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n# http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nimport sys\nfrom pathlib import Path\n\ndef file_to_class_name(path: Path) -> str | None:\n    parts = path.parts\n    if \"org\" not in parts or \"apache\" not in parts:\n        return None\n    org_index = parts.index(\"org\")\n    package_parts = parts[org_index:]\n    class_name = \".\".join(package_parts)\n    class_name = class_name.replace(\".scala\", \"\")\n    return class_name\n\nif __name__ == \"__main__\":\n\n    # ignore traits, abstract classes, and intentionally skipped test suites\n    ignore_list = [\n        \"org.apache.comet.parquet.ParquetReadSuite\", # abstract\n        \"org.apache.comet.parquet.ParquetReadFromS3Suite\", # manual test suite\n        \"org.apache.comet.IcebergReadFromS3Suite\", # manual test suite\n        \"org.apache.spark.sql.comet.CometPlanStabilitySuite\", # abstract\n        \"org.apache.spark.sql.comet.ParquetDatetimeRebaseSuite\", # abstract\n        \"org.apache.comet.exec.CometColumnarShuffleSuite\" # abstract\n    ]\n\n    for workflow_filename in [\".github/workflows/pr_build_linux.yml\", \".github/workflows/pr_build_macos.yml\"]:\n        workflow = open(workflow_filename, encoding=\"utf-8\").read()\n\n        root = Path(\".\")\n        for path in root.rglob(\"*Suite.scala\"):\n            class_name = file_to_class_name(path)\n            if class_name:\n                if \"Shim\" in class_name:\n                    continue\n                if class_name in ignore_list:\n                    continue\n                if class_name not in workflow:\n                    print(f\"Suite not found in workflow {workflow_filename}: {class_name}\")\n                    sys.exit(-1)\n                print(f\"Found {class_name} in {workflow_filename}\")\n"
  },
  {
    "path": "dev/ci/check-working-tree-clean.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\nset -euo pipefail  # Exit on errors, undefined vars, pipe failures\n\n# Check if we're in a git repository\nif ! git rev-parse --git-dir > /dev/null 2>&1; then\n  echo \"Error: Not in a git repository\"\n  # exit 1\nfi\n\n# Fail if there are any local changes (staged, unstaged, or untracked)\nif [ -n \"$(git status --porcelain)\" ]; then\n  echo \"Working tree is not clean:\"\n  git status --short\n  echo \"Full diff:\"\n  git diff\n  echo \"\"\n  echo \"Please commit, stash, or clean these changes before proceeding.\"\n  exit 1\nelse\n  echo \"Working tree is clean\"\nfi\n\n"
  },
  {
    "path": "dev/copyright/java-header.txt",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\n"
  },
  {
    "path": "dev/diffs/3.4.3.diff",
    "content": "diff --git a/pom.xml b/pom.xml\nindex d3544881af1..377683b10c5 100644\n--- a/pom.xml\n+++ b/pom.xml\n@@ -148,6 +148,8 @@\n     <chill.version>0.10.0</chill.version>\n     <ivy.version>2.5.1</ivy.version>\n     <oro.version>2.0.8</oro.version>\n+    <spark.version.short>3.4</spark.version.short>\n+    <comet.version>0.15.0-SNAPSHOT</comet.version>\n     <!--\n     If you changes codahale.metrics.version, you also need to change\n     the link to metrics.dropwizard.io in docs/monitoring.md.\n@@ -2784,6 +2786,25 @@\n         <artifactId>arpack</artifactId>\n         <version>${netlib.ludovic.dev.version}</version>\n       </dependency>\n+      <dependency>\n+        <groupId>org.apache.datafusion</groupId>\n+        <artifactId>comet-spark-spark${spark.version.short}_${scala.binary.version}</artifactId>\n+        <version>${comet.version}</version>\n+        <exclusions>\n+          <exclusion>\n+            <groupId>org.apache.spark</groupId>\n+            <artifactId>spark-sql_${scala.binary.version}</artifactId>\n+          </exclusion>\n+          <exclusion>\n+            <groupId>org.apache.spark</groupId>\n+            <artifactId>spark-core_${scala.binary.version}</artifactId>\n+          </exclusion>\n+          <exclusion>\n+            <groupId>org.apache.spark</groupId>\n+            <artifactId>spark-catalyst_${scala.binary.version}</artifactId>\n+          </exclusion>\n+        </exclusions>\n+      </dependency>\n     </dependencies>\n   </dependencyManagement>\n \ndiff --git a/sql/core/pom.xml b/sql/core/pom.xml\nindex b386d135da1..46449e3f3f1 100644\n--- a/sql/core/pom.xml\n+++ b/sql/core/pom.xml\n@@ -77,6 +77,10 @@\n       <groupId>org.apache.spark</groupId>\n       <artifactId>spark-tags_${scala.binary.version}</artifactId>\n     </dependency>\n+    <dependency>\n+      <groupId>org.apache.datafusion</groupId>\n+      <artifactId>comet-spark-spark${spark.version.short}_${scala.binary.version}</artifactId>\n+    </dependency>\n \n     <!--\n       This spark-tags test-dep is needed even though it isn't used in this module, otherwise testing-cmds that exclude\ndiff --git a/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala b/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala\nindex c595b50950b..3abb6cb9441 100644\n--- a/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala\n+++ b/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala\n@@ -102,7 +102,7 @@ class SparkSession private(\n       sc: SparkContext,\n       initialSessionOptions: java.util.HashMap[String, String]) = {\n     this(sc, None, None,\n-      SparkSession.applyExtensions(\n+      SparkSession.applyExtensions(sc,\n         sc.getConf.get(StaticSQLConf.SPARK_SESSION_EXTENSIONS).getOrElse(Seq.empty),\n         new SparkSessionExtensions), initialSessionOptions.asScala.toMap)\n   }\n@@ -1028,7 +1028,7 @@ object SparkSession extends Logging {\n         }\n \n         loadExtensions(extensions)\n-        applyExtensions(\n+        applyExtensions(sparkContext,\n           sparkContext.getConf.get(StaticSQLConf.SPARK_SESSION_EXTENSIONS).getOrElse(Seq.empty),\n           extensions)\n \n@@ -1282,14 +1282,24 @@ object SparkSession extends Logging {\n     }\n   }\n \n+  private def loadCometExtension(sparkContext: SparkContext): Seq[String] = {\n+    if (sparkContext.getConf.getBoolean(\"spark.comet.enabled\", isCometEnabled)) {\n+      Seq(\"org.apache.comet.CometSparkSessionExtensions\")\n+    } else {\n+      Seq.empty\n+    }\n+  }\n+\n   /**\n    * 
Initialize extensions for given extension classnames. The classes will be applied to the\n    * extensions passed into this function.\n    */\n   private def applyExtensions(\n+      sparkContext: SparkContext,\n       extensionConfClassNames: Seq[String],\n       extensions: SparkSessionExtensions): SparkSessionExtensions = {\n-    extensionConfClassNames.foreach { extensionConfClassName =>\n+    val extensionClassNames = extensionConfClassNames ++ loadCometExtension(sparkContext)\n+    extensionClassNames.foreach { extensionConfClassName =>\n       try {\n         val extensionConfClass = Utils.classForName(extensionConfClassName)\n         val extensionConf = extensionConfClass.getConstructor().newInstance()\n@@ -1323,4 +1333,12 @@ object SparkSession extends Logging {\n       }\n     }\n   }\n+\n+  /**\n+   * Whether Comet extension is enabled\n+   */\n+  def isCometEnabled: Boolean = {\n+    val v = System.getenv(\"ENABLE_COMET\")\n+    v == null || v.toBoolean\n+  }\n }\ndiff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala\nindex db587dd9868..aac7295a53d 100644\n--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala\n+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala\n@@ -18,6 +18,7 @@\n package org.apache.spark.sql.execution\n \n import org.apache.spark.annotation.DeveloperApi\n+import org.apache.spark.sql.comet.CometScanExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanExec, QueryStageExec}\n import org.apache.spark.sql.execution.columnar.InMemoryTableScanExec\n import org.apache.spark.sql.execution.exchange.ReusedExchangeExec\n@@ -67,6 +68,7 @@ private[execution] object SparkPlanInfo {\n     // dump the file scan metadata (e.g file path) to event log\n     val metadata = plan match {\n       case fileScan: FileSourceScanExec => fileScan.metadata\n+      case cometScan: CometScanExec => cometScan.metadata\n       case _ => Map[String, String]()\n     }\n     new SparkPlanInfo(\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql b/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql\nindex 7aef901da4f..f3d6e18926d 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql\n@@ -2,3 +2,4 @@\n \n --SET spark.sql.adaptive.enabled=true\n --SET spark.sql.maxMetadataStringLength = 500\n+--SET spark.comet.enabled = false\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql b/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql\nindex eeb2180f7a5..afd1b5ec289 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql\n@@ -1,5 +1,6 @@\n --SET spark.sql.cbo.enabled=true\n --SET spark.sql.maxMetadataStringLength = 500\n+--SET spark.comet.enabled = false\n \n CREATE TABLE explain_temp1(a INT, b INT) USING PARQUET;\n CREATE TABLE explain_temp2(c INT, d INT) USING PARQUET;\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/explain.sql b/sql/core/src/test/resources/sql-tests/inputs/explain.sql\nindex 698ca009b4f..57d774a3617 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/explain.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/explain.sql\n@@ -1,6 +1,7 @@\n --SET spark.sql.codegen.wholeStage = true\n --SET spark.sql.adaptive.enabled = false\n --SET 
spark.sql.maxMetadataStringLength = 500\n+--SET spark.comet.enabled = false\n \n -- Test tables\n CREATE table  explain_temp1 (key int, val int) USING PARQUET;\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part1.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part1.sql\nindex 1152d77da0c..f77493f690b 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part1.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part1.sql\n@@ -7,6 +7,9 @@\n \n -- avoid bit-exact output here because operations may not be bit-exact.\n -- SET extra_float_digits = 0;\n+-- Disable Comet exec due to floating point precision difference\n+--SET spark.comet.exec.enabled = false\n+\n \n -- Test aggregate operator with codegen on and off.\n --CONFIG_DIM1 spark.sql.codegen.wholeStage=true\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql\nindex 41fd4de2a09..44cd244d3b0 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql\n@@ -5,6 +5,9 @@\n -- AGGREGATES [Part 3]\n -- https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/sql/aggregates.sql#L352-L605\n \n+-- Disable Comet exec due to floating point precision difference\n+--SET spark.comet.exec.enabled = false\n+\n -- Test aggregate operator with codegen on and off.\n --CONFIG_DIM1 spark.sql.codegen.wholeStage=true\n --CONFIG_DIM1 spark.sql.codegen.wholeStage=false,spark.sql.codegen.factoryMode=CODEGEN_ONLY\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql\nindex 3a409eea348..38fed024c98 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql\n@@ -69,6 +69,8 @@ SELECT '' AS one, i.* FROM INT4_TBL i WHERE (i.f1 % smallint('2')) = smallint('1\n -- any evens\n SELECT '' AS three, i.* FROM INT4_TBL i WHERE (i.f1 % int('2')) = smallint('0');\n \n+-- https://github.com/apache/datafusion-comet/issues/2215\n+--SET spark.comet.exec.enabled=false\n -- [SPARK-28024] Incorrect value when out of range\n SELECT '' AS five, i.f1, i.f1 * smallint('2') AS x FROM INT4_TBL i;\n \ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql\nindex fac23b4a26f..2b73732c33f 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql\n@@ -1,6 +1,10 @@\n --\n -- Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group\n --\n+\n+-- Disable Comet exec due to floating point precision difference\n+--SET spark.comet.exec.enabled = false\n+\n --\n -- INT8\n -- Test int8 64-bit integers.\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql\nindex 0efe0877e9b..423d3b3d76d 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql\n@@ -1,6 +1,10 @@\n --\n -- Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group\n --\n+\n+-- Disable 
Comet exec due to floating point precision difference\n+--SET spark.comet.exec.enabled = false\n+\n --\n -- SELECT_HAVING\n -- https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/sql/select_having.sql\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala\nindex cf40e944c09..bdd5be4f462 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala\n@@ -38,7 +38,7 @@ import org.apache.spark.sql.catalyst.util.DateTimeConstants\n import org.apache.spark.sql.execution.{ColumnarToRowExec, ExecSubqueryExpression, RDDScanExec, SparkPlan}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.columnar._\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.ui.SparkListenerSQLAdaptiveExecutionUpdate\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -516,7 +516,8 @@ class CachedTableSuite extends QueryTest with SQLTestUtils\n    */\n   private def verifyNumExchanges(df: DataFrame, expected: Int): Unit = {\n     assert(\n-      collect(df.queryExecution.executedPlan) { case e: ShuffleExchangeExec => e }.size == expected)\n+      collect(df.queryExecution.executedPlan) {\n+        case _: ShuffleExchangeLike => 1 }.size == expected)\n   }\n \n   test(\"A cached table preserves the partitioning and ordering of its cached SparkPlan\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala\nindex 1cc09c3d7fc..f031fa45c33 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala\n@@ -27,7 +27,7 @@ import org.apache.spark.SparkException\n import org.apache.spark.sql.execution.WholeStageCodegenExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.aggregate.{HashAggregateExec, ObjectHashAggregateExec, SortAggregateExec}\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.expressions.Window\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -755,7 +755,7 @@ class DataFrameAggregateSuite extends QueryTest\n       assert(objHashAggPlans.nonEmpty)\n \n       val exchangePlans = collect(aggPlan) {\n-        case shuffle: ShuffleExchangeExec => shuffle\n+        case shuffle: ShuffleExchangeLike => shuffle\n       }\n       assert(exchangePlans.length == 1)\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala\nindex 56e9520fdab..917932336df 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala\n@@ -435,7 +435,9 @@ class DataFrameJoinSuite extends QueryTest\n \n     withTempDatabase { dbName =>\n       withTable(table1Name, table2Name) {\n-        withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n+        
withSQLConf(\n+            SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n+            \"spark.comet.enabled\" -> \"false\") {\n           spark.range(50).write.saveAsTable(s\"$dbName.$table1Name\")\n           spark.range(100).write.saveAsTable(s\"$dbName.$table2Name\")\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala\nindex a9f69ab28a1..760ea0e9565 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala\n@@ -39,11 +39,12 @@ import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeMap, Attri\n import org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation\n import org.apache.spark.sql.catalyst.plans.logical.{ColumnStat, LeafNode, LocalRelation, LogicalPlan, OneRowRelation, Statistics}\n import org.apache.spark.sql.catalyst.util.DateTimeUtils\n+import org.apache.spark.sql.comet.CometBroadcastExchangeExec\n import org.apache.spark.sql.connector.FakeV2Provider\n import org.apache.spark.sql.execution.{FilterExec, LogicalRDD, QueryExecution, SortExec, WholeStageCodegenExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.aggregate.HashAggregateExec\n-import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ReusedExchangeExec, ShuffleExchangeExec, ShuffleExchangeLike}\n+import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ReusedExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.expressions.{Aggregator, Window}\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -1981,7 +1982,7 @@ class DataFrameSuite extends QueryTest\n           fail(\"Should not have back to back Aggregates\")\n         }\n         atFirstAgg = true\n-      case e: ShuffleExchangeExec => atFirstAgg = false\n+      case e: ShuffleExchangeLike => atFirstAgg = false\n       case _ =>\n     }\n   }\n@@ -2305,7 +2306,7 @@ class DataFrameSuite extends QueryTest\n       checkAnswer(join, df)\n       assert(\n         collect(join.queryExecution.executedPlan) {\n-          case e: ShuffleExchangeExec => true }.size === 1)\n+          case _: ShuffleExchangeLike => true }.size === 1)\n       assert(\n         collect(join.queryExecution.executedPlan) { case e: ReusedExchangeExec => true }.size === 1)\n       val broadcasted = broadcast(join)\n@@ -2313,10 +2314,12 @@ class DataFrameSuite extends QueryTest\n       checkAnswer(join2, df)\n       assert(\n         collect(join2.queryExecution.executedPlan) {\n-          case e: ShuffleExchangeExec => true }.size == 1)\n+          case _: ShuffleExchangeLike => true }.size == 1)\n       assert(\n         collect(join2.queryExecution.executedPlan) {\n-          case e: BroadcastExchangeExec => true }.size === 1)\n+          case e: BroadcastExchangeExec => true\n+          case _: CometBroadcastExchangeExec => true\n+        }.size === 1)\n       assert(\n         collect(join2.queryExecution.executedPlan) { case e: ReusedExchangeExec => true }.size == 4)\n     }\n@@ -2876,7 +2879,7 @@ class DataFrameSuite extends QueryTest\n \n     // Assert that no extra shuffle introduced by cogroup.\n     val exchanges = collect(df3.queryExecution.executedPlan) {\n-      case h: ShuffleExchangeExec => h\n+      case h: ShuffleExchangeLike => h\n     }\n     assert(exchanges.size == 2)\n   }\ndiff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala\nindex 433b4741979..07148eee480 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala\n@@ -23,8 +23,9 @@ import org.apache.spark.TestUtils.{assertNotSpilled, assertSpilled}\n import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression, Lag, Literal, NonFoldableLiteral}\n import org.apache.spark.sql.catalyst.optimizer.TransposeWindow\n import org.apache.spark.sql.catalyst.plans.physical.HashPartitioning\n+import org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n-import org.apache.spark.sql.execution.exchange.{ENSURE_REQUIREMENTS, Exchange, ShuffleExchangeExec}\n+import org.apache.spark.sql.execution.exchange.{ENSURE_REQUIREMENTS, Exchange, ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.window.WindowExec\n import org.apache.spark.sql.expressions.{Aggregator, MutableAggregationBuffer, UserDefinedAggregateFunction, Window}\n import org.apache.spark.sql.functions._\n@@ -1186,10 +1187,12 @@ class DataFrameWindowFunctionsSuite extends QueryTest\n     }\n \n     def isShuffleExecByRequirement(\n-        plan: ShuffleExchangeExec,\n+        plan: ShuffleExchangeLike,\n         desiredClusterColumns: Seq[String]): Boolean = plan match {\n       case ShuffleExchangeExec(op: HashPartitioning, _, ENSURE_REQUIREMENTS) =>\n         partitionExpressionsColumns(op.expressions) === desiredClusterColumns\n+      case CometShuffleExchangeExec(op: HashPartitioning, _, _, ENSURE_REQUIREMENTS, _, _) =>\n+        partitionExpressionsColumns(op.expressions) === desiredClusterColumns\n       case _ => false\n     }\n \n@@ -1212,7 +1215,7 @@ class DataFrameWindowFunctionsSuite extends QueryTest\n       val shuffleByRequirement = windowed.queryExecution.executedPlan.exists {\n         case w: WindowExec =>\n           w.child.exists {\n-            case s: ShuffleExchangeExec => isShuffleExecByRequirement(s, Seq(\"key1\", \"key2\"))\n+            case s: ShuffleExchangeLike => isShuffleExecByRequirement(s, Seq(\"key1\", \"key2\"))\n             case _ => false\n           }\n         case _ => false\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala\nindex daef11ae4d6..9f3cc9181f2 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala\n@@ -39,7 +39,7 @@ import org.apache.spark.sql.catalyst.plans.{LeftAnti, LeftSemi}\n import org.apache.spark.sql.catalyst.util.sideBySide\n import org.apache.spark.sql.execution.{LogicalRDD, RDDScanExec, SQLExecution}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n-import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ShuffleExchangeExec}\n+import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.streaming.MemoryStream\n import org.apache.spark.sql.expressions.UserDefinedFunction\n import org.apache.spark.sql.functions._\n@@ -2254,7 +2254,7 @@ class DatasetSuite extends QueryTest\n \n     // Assert that no extra shuffle 
introduced by cogroup.\n     val exchanges = collect(df3.queryExecution.executedPlan) {\n-      case h: ShuffleExchangeExec => h\n+      case h: ShuffleExchangeLike => h\n     }\n     assert(exchanges.size == 2)\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala\nindex f33432ddb6f..7d758d2481f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala\n@@ -22,6 +22,7 @@ import org.scalatest.GivenWhenThen\n import org.apache.spark.sql.catalyst.expressions.{DynamicPruningExpression, Expression}\n import org.apache.spark.sql.catalyst.expressions.CodegenObjectFactoryMode._\n import org.apache.spark.sql.catalyst.plans.ExistenceJoin\n+import org.apache.spark.sql.comet.CometScanExec\n import org.apache.spark.sql.connector.catalog.{InMemoryTableCatalog, InMemoryTableWithV2FilterCatalog}\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.adaptive._\n@@ -262,6 +263,9 @@ abstract class DynamicPartitionPruningSuiteBase\n       case s: BatchScanExec => s.runtimeFilters.collect {\n         case d: DynamicPruningExpression => d.child\n       }\n+      case s: CometScanExec => s.partitionFilters.collect {\n+        case d: DynamicPruningExpression => d.child\n+      }\n       case _ => Nil\n     }\n   }\n@@ -1027,7 +1032,8 @@ abstract class DynamicPartitionPruningSuiteBase\n     }\n   }\n \n-  test(\"avoid reordering broadcast join keys to match input hash partitioning\") {\n+  test(\"avoid reordering broadcast join keys to match input hash partitioning\",\n+    IgnoreComet(\"TODO: https://github.com/apache/datafusion-comet/issues/1839\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"false\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n       withTable(\"large\", \"dimTwo\", \"dimThree\") {\n@@ -1215,7 +1221,8 @@ abstract class DynamicPartitionPruningSuiteBase\n   }\n \n   test(\"SPARK-32509: Unused Dynamic Pruning filter shouldn't affect \" +\n-    \"canonicalization and exchange reuse\") {\n+    \"canonicalization and exchange reuse\",\n+    IgnoreComet(\"TODO: https://github.com/apache/datafusion-comet/issues/1839\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"true\") {\n       withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n         val df = sql(\n@@ -1423,7 +1430,8 @@ abstract class DynamicPartitionPruningSuiteBase\n     }\n   }\n \n-  test(\"SPARK-34637: DPP side broadcast query stage is created firstly\") {\n+  test(\"SPARK-34637: DPP side broadcast query stage is created firstly\",\n+    IgnoreComet(\"TODO: https://github.com/apache/datafusion-comet/issues/1839\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"true\") {\n       val df = sql(\n         \"\"\" WITH v as (\n@@ -1698,7 +1706,8 @@ abstract class DynamicPartitionPruningV1Suite extends DynamicPartitionPruningDat\n    * Check the static scan metrics with and without DPP\n    */\n   test(\"static scan metrics\",\n-    DisableAdaptiveExecution(\"DPP in AQE must reuse broadcast\")) {\n+    DisableAdaptiveExecution(\"DPP in AQE must reuse broadcast\"),\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3442\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_ENABLED.key 
-> \"true\",\n       SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"false\",\n       SQLConf.EXCHANGE_REUSE_ENABLED.key -> \"false\") {\n@@ -1729,6 +1738,8 @@ abstract class DynamicPartitionPruningV1Suite extends DynamicPartitionPruningDat\n               case s: BatchScanExec =>\n                 // we use f1 col for v2 tables due to schema pruning\n                 s.output.exists(_.exists(_.argString(maxFields = 100).contains(\"f1\")))\n+              case s: CometScanExec =>\n+                s.output.exists(_.exists(_.argString(maxFields = 100).contains(\"fid\")))\n               case _ => false\n             }\n           assert(scanOption.isDefined)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala\nindex a6b295578d6..91acca4306f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala\n@@ -463,7 +463,8 @@ class ExplainSuite extends ExplainSuiteHelper with DisableAdaptiveExecutionSuite\n     }\n   }\n \n-  test(\"Explain formatted output for scan operator for datasource V2\") {\n+  test(\"Explain formatted output for scan operator for datasource V2\",\n+      IgnoreComet(\"Comet explain output is different\")) {\n     withTempDir { dir =>\n       Seq(\"parquet\", \"orc\", \"csv\", \"json\").foreach { fmt =>\n         val basePath = dir.getCanonicalPath + \"/\" + fmt\n@@ -541,7 +542,9 @@ class ExplainSuite extends ExplainSuiteHelper with DisableAdaptiveExecutionSuite\n   }\n }\n \n-class ExplainSuiteAE extends ExplainSuiteHelper with EnableAdaptiveExecutionSuite {\n+// Ignored when Comet is enabled. Comet changes expected query plans.\n+class ExplainSuiteAE extends ExplainSuiteHelper with EnableAdaptiveExecutionSuite\n+    with IgnoreCometSuite {\n   import testImplicits._\n \n   test(\"SPARK-35884: Explain Formatted\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala\nindex 2796b1cf154..53dcfde932e 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala\n@@ -33,6 +33,7 @@ import org.apache.spark.sql.TestingUDT.{IntervalUDT, NullData, NullUDT}\n import org.apache.spark.sql.catalyst.expressions.{AttributeReference, GreaterThan, Literal}\n import org.apache.spark.sql.catalyst.expressions.IntegralLiteralTestUtils.{negativeInt, positiveInt}\n import org.apache.spark.sql.catalyst.plans.logical.Filter\n+import org.apache.spark.sql.comet.{CometBatchScanExec, CometNativeScanExec, CometScanExec, CometSortMergeJoinExec}\n import org.apache.spark.sql.execution.{FileSourceScanLike, SimpleMode}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.datasources.FilePartition\n@@ -516,21 +517,24 @@ class FileBasedDataSourceSuite extends QueryTest\n             checkAnswer(sql(s\"select A from $tableName\"), data.select(\"A\"))\n \n             // RuntimeException is triggered at executor side, which is then wrapped as\n-            // SparkException at driver side\n-            val e1 = intercept[SparkException] {\n-              sql(s\"select b from $tableName\").collect()\n+            // SparkException at driver side. 
Comet native readers throw RuntimeException\n+            // directly without the SparkException wrapper.\n+            def getDuplicateFieldError(query: String): RuntimeException = {\n+              try {\n+                sql(query).collect()\n+                fail(\"Expected an exception\").asInstanceOf[RuntimeException]\n+              } catch {\n+                case e: SparkException =>\n+                  e.getCause.asInstanceOf[RuntimeException]\n+                case e: RuntimeException => e\n+              }\n             }\n-            assert(\n-              e1.getCause.isInstanceOf[RuntimeException] &&\n-                e1.getCause.getMessage.contains(\n-                  \"\"\"Found duplicate field(s) \"b\": [b, B] in case-insensitive mode\"\"\"))\n-            val e2 = intercept[SparkException] {\n-              sql(s\"select B from $tableName\").collect()\n-            }\n-            assert(\n-              e2.getCause.isInstanceOf[RuntimeException] &&\n-                e2.getCause.getMessage.contains(\n-                  \"\"\"Found duplicate field(s) \"b\": [b, B] in case-insensitive mode\"\"\"))\n+            val e1 = getDuplicateFieldError(s\"select b from $tableName\")\n+            assert(e1.getMessage.contains(\n+              \"\"\"Found duplicate field(s) \"b\": [b, B] in case-insensitive mode\"\"\"))\n+            val e2 = getDuplicateFieldError(s\"select B from $tableName\")\n+            assert(e2.getMessage.contains(\n+              \"\"\"Found duplicate field(s) \"b\": [b, B] in case-insensitive mode\"\"\"))\n           }\n \n           withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"true\") {\n@@ -815,6 +819,7 @@ class FileBasedDataSourceSuite extends QueryTest\n             assert(bJoinExec.isEmpty)\n             val smJoinExec = collect(joinedDF.queryExecution.executedPlan) {\n               case smJoin: SortMergeJoinExec => smJoin\n+              case smJoin: CometSortMergeJoinExec => smJoin\n             }\n             assert(smJoinExec.nonEmpty)\n           }\n@@ -875,6 +880,7 @@ class FileBasedDataSourceSuite extends QueryTest\n \n           val fileScan = df.queryExecution.executedPlan collectFirst {\n             case BatchScanExec(_, f: FileScan, _, _, _, _, _, _, _) => f\n+            case CometBatchScanExec(BatchScanExec(_, f: FileScan, _, _, _, _, _, _, _), _, _) => f\n           }\n           assert(fileScan.nonEmpty)\n           assert(fileScan.get.partitionFilters.nonEmpty)\n@@ -916,6 +922,7 @@ class FileBasedDataSourceSuite extends QueryTest\n \n           val fileScan = df.queryExecution.executedPlan collectFirst {\n             case BatchScanExec(_, f: FileScan, _, _, _, _, _, _, _) => f\n+            case CometBatchScanExec(BatchScanExec(_, f: FileScan, _, _, _, _, _, _, _), _, _) => f\n           }\n           assert(fileScan.nonEmpty)\n           assert(fileScan.get.partitionFilters.isEmpty)\n@@ -1100,6 +1107,9 @@ class FileBasedDataSourceSuite extends QueryTest\n           val filters = df.queryExecution.executedPlan.collect {\n             case f: FileSourceScanLike => f.dataFilters\n             case b: BatchScanExec => b.scan.asInstanceOf[FileScan].dataFilters\n+            case b: CometScanExec => b.dataFilters\n+            case b: CometNativeScanExec => b.dataFilters\n+            case b: CometBatchScanExec => b.scan.asInstanceOf[FileScan].dataFilters\n           }.flatten\n           assert(filters.contains(GreaterThan(scan.logicalPlan.output.head, Literal(5L))))\n         }\ndiff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/IgnoreComet.scala b/sql/core/src/test/scala/org/apache/spark/sql/IgnoreComet.scala\nnew file mode 100644\nindex 00000000000..5691536c114\n--- /dev/null\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/IgnoreComet.scala\n@@ -0,0 +1,45 @@\n+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one or more\n+ * contributor license agreements.  See the NOTICE file distributed with\n+ * this work for additional information regarding copyright ownership.\n+ * The ASF licenses this file to You under the Apache License, Version 2.0\n+ * (the \"License\"); you may not use this file except in compliance with\n+ * the License.  You may obtain a copy of the License at\n+ *\n+ *    http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+package org.apache.spark.sql\n+\n+import org.scalactic.source.Position\n+import org.scalatest.Tag\n+\n+import org.apache.spark.sql.test.SQLTestUtils\n+\n+/**\n+ * Tests with this tag will be ignored when Comet is enabled (e.g., via `ENABLE_COMET`).\n+ */\n+case class IgnoreComet(reason: String) extends Tag(\"DisableComet\")\n+case class IgnoreCometNativeIcebergCompat(reason: String) extends Tag(\"DisableComet\")\n+case class IgnoreCometNativeDataFusion(reason: String) extends Tag(\"DisableComet\")\n+case class IgnoreCometNativeScan(reason: String) extends Tag(\"DisableComet\")\n+\n+/**\n+ * Helper trait that disables Comet for all tests regardless of default config values.\n+ */\n+trait IgnoreCometSuite extends SQLTestUtils {\n+  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)\n+    (implicit pos: Position): Unit = {\n+    if (isCometEnabled) {\n+      ignore(testName + \" (disabled when Comet is on)\", testTags: _*)(testFun)\n+    } else {\n+      super.test(testName, testTags: _*)(testFun)\n+    }\n+  }\n+}\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/InjectRuntimeFilterSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/InjectRuntimeFilterSuite.scala\nindex fda442eeef0..1b69e4f280e 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/InjectRuntimeFilterSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/InjectRuntimeFilterSuite.scala\n@@ -468,7 +468,8 @@ class InjectRuntimeFilterSuite extends QueryTest with SQLTestUtils with SharedSp\n   }\n \n   test(\"Runtime bloom filter join: do not add bloom filter if dpp filter exists \" +\n-    \"on the same column\") {\n+    \"on the same column\",\n+    IgnoreComet(\"TODO: Support SubqueryBroadcastExec in Comet: #242\")) {\n     withSQLConf(SQLConf.RUNTIME_BLOOM_FILTER_APPLICATION_SIDE_SCAN_SIZE_THRESHOLD.key -> \"3000\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"2000\") {\n       assertDidNotRewriteWithBloomFilter(\"select * from bf5part join bf2 on \" +\n@@ -477,7 +478,8 @@ class InjectRuntimeFilterSuite extends QueryTest with SQLTestUtils with SharedSp\n   }\n \n   test(\"Runtime bloom filter join: add bloom filter if dpp filter exists on \" +\n-    \"a different column\") {\n+    \"a different column\",\n+    IgnoreComet(\"TODO: Support SubqueryBroadcastExec in Comet: #242\")) {\n     
withSQLConf(SQLConf.RUNTIME_BLOOM_FILTER_APPLICATION_SIDE_SCAN_SIZE_THRESHOLD.key -> \"3000\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"2000\") {\n       assertRewroteWithBloomFilter(\"select * from bf5part join bf2 on \" +\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala\nindex 1792b4c32eb..1616e6f39bd 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala\n@@ -23,6 +23,7 @@ import org.apache.spark.sql.catalyst.optimizer.{BuildLeft, BuildRight, BuildSide\n import org.apache.spark.sql.catalyst.plans.PlanTest\n import org.apache.spark.sql.catalyst.plans.logical._\n import org.apache.spark.sql.catalyst.rules.RuleExecutor\n+import org.apache.spark.sql.comet.{CometHashJoinExec, CometSortMergeJoinExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.joins._\n import org.apache.spark.sql.internal.SQLConf\n@@ -362,6 +363,7 @@ class JoinHintSuite extends PlanTest with SharedSparkSession with AdaptiveSparkP\n     val executedPlan = df.queryExecution.executedPlan\n     val shuffleHashJoins = collect(executedPlan) {\n       case s: ShuffledHashJoinExec => s\n+      case c: CometHashJoinExec => c.originalPlan.asInstanceOf[ShuffledHashJoinExec]\n     }\n     assert(shuffleHashJoins.size == 1)\n     assert(shuffleHashJoins.head.buildSide == buildSide)\n@@ -371,6 +373,7 @@ class JoinHintSuite extends PlanTest with SharedSparkSession with AdaptiveSparkP\n     val executedPlan = df.queryExecution.executedPlan\n     val shuffleMergeJoins = collect(executedPlan) {\n       case s: SortMergeJoinExec => s\n+      case c: CometSortMergeJoinExec => c\n     }\n     assert(shuffleMergeJoins.size == 1)\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala\nindex 7f062bfb899..0ed85486e80 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala\n@@ -30,7 +30,8 @@ import org.apache.spark.sql.catalyst.TableIdentifier\n import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation\n import org.apache.spark.sql.catalyst.expressions.{Ascending, GenericRow, SortOrder}\n import org.apache.spark.sql.catalyst.plans.logical.Filter\n-import org.apache.spark.sql.execution.{BinaryExecNode, FilterExec, ProjectExec, SortExec, SparkPlan, WholeStageCodegenExec}\n+import org.apache.spark.sql.comet._\n+import org.apache.spark.sql.execution.{BinaryExecNode, ColumnarToRowExec, FilterExec, InputAdapter, ProjectExec, SortExec, SparkPlan, WholeStageCodegenExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.exchange.{ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.joins._\n@@ -740,7 +741,8 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n     }\n   }\n \n-  test(\"test SortMergeJoin (with spill)\") {\n+  test(\"test SortMergeJoin (with spill)\",\n+      IgnoreComet(\"TODO: Comet SMJ doesn't support spill yet\")) {\n     withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"1\",\n       SQLConf.SORT_MERGE_JOIN_EXEC_BUFFER_IN_MEMORY_THRESHOLD.key -> \"0\",\n       SQLConf.SORT_MERGE_JOIN_EXEC_BUFFER_SPILL_THRESHOLD.key -> \"1\") {\n@@ -866,10 +868,12 @@ class 
JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n       val physical = df.queryExecution.sparkPlan\n       val physicalJoins = physical.collect {\n         case j: SortMergeJoinExec => j\n+        case j: CometSortMergeJoinExec => j.originalPlan.asInstanceOf[SortMergeJoinExec]\n       }\n       val executed = df.queryExecution.executedPlan\n       val executedJoins = collect(executed) {\n         case j: SortMergeJoinExec => j\n+        case j: CometSortMergeJoinExec => j.originalPlan.asInstanceOf[SortMergeJoinExec]\n       }\n       // This only applies to the above tested queries, in which a child SortMergeJoin always\n       // contains the SortOrder required by its parent SortMergeJoin. Thus, SortExec should never\n@@ -1115,9 +1119,11 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n       val plan = df1.join(df2.hint(\"SHUFFLE_HASH\"), $\"k1\" === $\"k2\", joinType)\n         .groupBy($\"k1\").count()\n         .queryExecution.executedPlan\n-      assert(collect(plan) { case _: ShuffledHashJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: ShuffledHashJoinExec | _: CometHashJoinExec => true }.size === 1)\n       // No extra shuffle before aggregate\n-      assert(collect(plan) { case _: ShuffleExchangeExec => true }.size === 2)\n+      assert(collect(plan) {\n+        case _: ShuffleExchangeLike => true }.size === 2)\n     })\n   }\n \n@@ -1134,10 +1140,11 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n         .join(df4.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k4\", joinType)\n         .queryExecution\n         .executedPlan\n-      assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 2)\n+      assert(collect(plan) {\n+        case _: SortMergeJoinExec | _: CometSortMergeJoinExec => true }.size === 2)\n       assert(collect(plan) { case _: BroadcastHashJoinExec => true }.size === 1)\n       // No extra sort before last sort merge join\n-      assert(collect(plan) { case _: SortExec => true }.size === 3)\n+      assert(collect(plan) { case _: SortExec | _: CometSortExec => true }.size === 3)\n     })\n \n     // Test shuffled hash join\n@@ -1147,10 +1154,13 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n         .join(df4.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k4\", joinType)\n         .queryExecution\n         .executedPlan\n-      assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 2)\n-      assert(collect(plan) { case _: ShuffledHashJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: SortMergeJoinExec | _: CometSortMergeJoinExec => true }.size === 2)\n+      assert(collect(plan) {\n+        case _: ShuffledHashJoinExec | _: CometHashJoinExec => true }.size === 1)\n       // No extra sort before last sort merge join\n-      assert(collect(plan) { case _: SortExec => true }.size === 3)\n+      assert(collect(plan) {\n+        case _: SortExec | _: CometSortExec => true }.size === 3)\n     })\n   }\n \n@@ -1241,12 +1251,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n     inputDFs.foreach { case (df1, df2, joinExprs) =>\n       val smjDF = df1.join(df2.hint(\"SHUFFLE_MERGE\"), joinExprs, \"full\")\n       assert(collect(smjDF.queryExecution.executedPlan) {\n-        case _: SortMergeJoinExec => true }.size === 1)\n+        case _: SortMergeJoinExec | _: CometSortMergeJoinExec => true }.size === 1)\n       val smjResult = 
smjDF.collect()\n \n       val shjDF = df1.join(df2.hint(\"SHUFFLE_HASH\"), joinExprs, \"full\")\n       assert(collect(shjDF.queryExecution.executedPlan) {\n-        case _: ShuffledHashJoinExec => true }.size === 1)\n+        case _: ShuffledHashJoinExec | _: CometHashJoinExec => true }.size === 1)\n       // Same result between shuffled hash join and sort merge join\n       checkAnswer(shjDF, smjResult)\n     }\n@@ -1282,18 +1292,26 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n       }\n \n       // Test shuffled hash join\n-      withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\") {\n+      withSQLConf(\"spark.comet.enabled\" -> \"true\",\n+        SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\") {\n         val shjCodegenDF = df1.join(df2.hint(\"SHUFFLE_HASH\"), $\"k1\" === $\"k2\", joinType)\n         assert(shjCodegenDF.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_ : ShuffledHashJoinExec) => true\n           case WholeStageCodegenExec(ProjectExec(_, _ : ShuffledHashJoinExec)) => true\n+          case WholeStageCodegenExec(ColumnarToRowExec(InputAdapter(_: CometHashJoinExec))) =>\n+            true\n+          case WholeStageCodegenExec(ColumnarToRowExec(\n+            InputAdapter(CometProjectExec(_, _, _, _, _: CometHashJoinExec, _)))) => true\n+          case _: CometHashJoinExec => true\n         }.size === 1)\n         checkAnswer(shjCodegenDF, Seq.empty)\n \n         withSQLConf(SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key -> \"false\") {\n           val shjNonCodegenDF = df1.join(df2.hint(\"SHUFFLE_HASH\"), $\"k1\" === $\"k2\", joinType)\n           assert(shjNonCodegenDF.queryExecution.executedPlan.collect {\n-            case _: ShuffledHashJoinExec => true }.size === 1)\n+            case _: ShuffledHashJoinExec => true\n+            case _: CometHashJoinExec => true\n+          }.size === 1)\n           checkAnswer(shjNonCodegenDF, Seq.empty)\n         }\n       }\n@@ -1341,7 +1359,8 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           val plan = sql(getAggQuery(selectExpr, joinType)).queryExecution.executedPlan\n           assert(collect(plan) { case _: BroadcastNestedLoopJoinExec => true }.size === 1)\n           // Have shuffle before aggregation\n-          assert(collect(plan) { case _: ShuffleExchangeExec => true }.size === 1)\n+          assert(collect(plan) {\n+            case _: ShuffleExchangeLike => true }.size === 1)\n       }\n \n       def getJoinQuery(selectExpr: String, joinType: String): String = {\n@@ -1370,9 +1389,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           }\n           val plan = sql(getJoinQuery(selectExpr, joinType)).queryExecution.executedPlan\n           assert(collect(plan) { case _: BroadcastNestedLoopJoinExec => true }.size === 1)\n-          assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 3)\n+          assert(collect(plan) {\n+            case _: SortMergeJoinExec => true\n+            case _: CometSortMergeJoinExec => true\n+          }.size === 3)\n           // No extra sort on left side before last sort merge join\n-          assert(collect(plan) { case _: SortExec => true }.size === 5)\n+          assert(collect(plan) { case _: SortExec | _: CometSortExec => true }.size === 5)\n       }\n \n       // Test output ordering is not preserved\n@@ -1381,9 +1403,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with 
AdaptiveSparkPlan\n           val selectExpr = \"/*+ BROADCAST(left_t) */ k1 as k0\"\n           val plan = sql(getJoinQuery(selectExpr, joinType)).queryExecution.executedPlan\n           assert(collect(plan) { case _: BroadcastNestedLoopJoinExec => true }.size === 1)\n-          assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 3)\n+          assert(collect(plan) {\n+            case _: SortMergeJoinExec => true\n+            case _: CometSortMergeJoinExec => true\n+          }.size === 3)\n           // Have sort on left side before last sort merge join\n-          assert(collect(plan) { case _: SortExec => true }.size === 6)\n+          assert(collect(plan) { case _: SortExec | _: CometSortExec => true }.size === 6)\n       }\n \n       // Test singe partition\n@@ -1393,7 +1418,8 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n            |FROM range(0, 10, 1, 1) t1 FULL OUTER JOIN range(0, 10, 1, 1) t2\n            |\"\"\".stripMargin)\n       val plan = fullJoinDF.queryExecution.executedPlan\n-      assert(collect(plan) { case _: ShuffleExchangeExec => true}.size == 1)\n+      assert(collect(plan) {\n+        case _: ShuffleExchangeLike => true}.size == 1)\n       checkAnswer(fullJoinDF, Row(100))\n     }\n   }\n@@ -1438,6 +1464,9 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           Seq(semiJoinDF, antiJoinDF).foreach { df =>\n             assert(collect(df.queryExecution.executedPlan) {\n               case j: ShuffledHashJoinExec if j.ignoreDuplicatedKey == ignoreDuplicatedKey => true\n+              case j: CometHashJoinExec\n+                if j.originalPlan.asInstanceOf[ShuffledHashJoinExec].ignoreDuplicatedKey ==\n+                  ignoreDuplicatedKey => true\n             }.size == 1)\n           }\n       }\n@@ -1482,14 +1511,20 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n \n   test(\"SPARK-43113: Full outer join with duplicate stream-side references in condition (SMJ)\") {\n     def check(plan: SparkPlan): Unit = {\n-      assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: SortMergeJoinExec => true\n+        case _: CometSortMergeJoinExec => true\n+      }.size === 1)\n     }\n     dupStreamSideColTest(\"MERGE\", check)\n   }\n \n   test(\"SPARK-43113: Full outer join with duplicate stream-side references in condition (SHJ)\") {\n     def check(plan: SparkPlan): Unit = {\n-      assert(collect(plan) { case _: ShuffledHashJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: ShuffledHashJoinExec => true\n+        case _: CometHashJoinExec => true\n+      }.size === 1)\n     }\n     dupStreamSideColTest(\"SHUFFLE_HASH\", check)\n   }\n@@ -1605,7 +1640,8 @@ class ThreadLeakInSortMergeJoinSuite\n       sparkConf.set(SHUFFLE_SPILL_NUM_ELEMENTS_FORCE_SPILL_THRESHOLD, 20))\n   }\n \n-  test(\"SPARK-47146: thread leak when doing SortMergeJoin (with spill)\") {\n+  test(\"SPARK-47146: thread leak when doing SortMergeJoin (with spill)\",\n+    IgnoreComet(\"Comet SMJ doesn't spill yet\")) {\n \n     withSQLConf(\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"1\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala\nindex b5b34922694..a72403780c4 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala\n+++ 
b/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala\n@@ -69,7 +69,7 @@ import org.apache.spark.tags.ExtendedSQLTest\n  * }}}\n  */\n // scalastyle:on line.size.limit\n-trait PlanStabilitySuite extends DisableAdaptiveExecutionSuite {\n+trait PlanStabilitySuite extends DisableAdaptiveExecutionSuite with IgnoreCometSuite {\n \n   protected val baseResourcePath = {\n     // use the same way as `SQLQueryTestSuite` to get the resource path\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala\nindex 525d97e4998..843f0472c23 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala\n@@ -1508,7 +1508,8 @@ class SQLQuerySuite extends QueryTest with SharedSparkSession with AdaptiveSpark\n     checkAnswer(sql(\"select -0.001\"), Row(BigDecimal(\"-0.001\")))\n   }\n \n-  test(\"external sorting updates peak execution memory\") {\n+  test(\"external sorting updates peak execution memory\",\n+    IgnoreComet(\"TODO: native CometSort does not update peak execution memory\")) {\n     AccumulatorSuite.verifyPeakExecutionMemorySet(sparkContext, \"external sort\") {\n       sql(\"SELECT * FROM testData2 ORDER BY a ASC, b ASC\").collect()\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala\nindex 48ad10992c5..51d1ee65422 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala\n@@ -221,6 +221,8 @@ class SparkSessionExtensionSuite extends SparkFunSuite with SQLHelper {\n     withSession(extensions) { session =>\n       session.conf.set(SQLConf.ADAPTIVE_EXECUTION_ENABLED, true)\n       session.conf.set(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key, \"-1\")\n+      // https://github.com/apache/datafusion-comet/issues/1197\n+      session.conf.set(\"spark.comet.enabled\", false)\n       assert(session.sessionState.columnarRules.contains(\n         MyColumnarRule(PreRuleReplaceAddWithBrokenVersion(), MyPostRule())))\n       import session.sqlContext.implicits._\n@@ -279,6 +281,8 @@ class SparkSessionExtensionSuite extends SparkFunSuite with SQLHelper {\n     }\n     withSession(extensions) { session =>\n       session.conf.set(SQLConf.ADAPTIVE_EXECUTION_ENABLED, enableAQE)\n+      // https://github.com/apache/datafusion-comet/issues/1197\n+      session.conf.set(\"spark.comet.enabled\", false)\n       assert(session.sessionState.columnarRules.contains(\n         MyColumnarRule(PreRuleReplaceAddWithBrokenVersion(), MyPostRule())))\n       import session.sqlContext.implicits._\n@@ -317,6 +321,8 @@ class SparkSessionExtensionSuite extends SparkFunSuite with SQLHelper {\n     val session = SparkSession.builder()\n       .master(\"local[1]\")\n       .config(COLUMN_BATCH_SIZE.key, 2)\n+      // https://github.com/apache/datafusion-comet/issues/1197\n+      .config(\"spark.comet.enabled\", false)\n       .withExtensions { extensions =>\n         extensions.injectColumnar(session =>\n           MyColumnarRule(PreRuleReplaceAddWithBrokenVersion(), MyPostRule())) }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala\nindex 18123a4d6ec..0fe185baa33 100644\n--- 
a/sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala\n@@ -17,6 +17,8 @@\n \n package org.apache.spark.sql\n \n+import org.apache.comet.CometConf\n+\n import org.apache.spark.{SPARK_DOC_ROOT, SparkRuntimeException}\n import org.apache.spark.sql.catalyst.expressions.Cast._\n import org.apache.spark.sql.catalyst.expressions.TryToNumber\n@@ -133,29 +135,31 @@ class StringFunctionsSuite extends QueryTest with SharedSparkSession {\n   }\n \n   test(\"string regex_replace / regex_extract\") {\n-    val df = Seq(\n-      (\"100-200\", \"(\\\\d+)-(\\\\d+)\", \"300\"),\n-      (\"100-200\", \"(\\\\d+)-(\\\\d+)\", \"400\"),\n-      (\"100-200\", \"(\\\\d+)\", \"400\")).toDF(\"a\", \"b\", \"c\")\n-\n-    checkAnswer(\n-      df.select(\n-        regexp_replace($\"a\", \"(\\\\d+)\", \"num\"),\n-        regexp_replace($\"a\", $\"b\", $\"c\"),\n-        regexp_extract($\"a\", \"(\\\\d+)-(\\\\d+)\", 1)),\n-      Row(\"num-num\", \"300\", \"100\") :: Row(\"num-num\", \"400\", \"100\") ::\n-        Row(\"num-num\", \"400-400\", \"100\") :: Nil)\n+    withSQLConf(CometConf.getExprAllowIncompatConfigKey(\"regexp\") -> \"true\") {\n+      val df = Seq(\n+        (\"100-200\", \"(\\\\d+)-(\\\\d+)\", \"300\"),\n+        (\"100-200\", \"(\\\\d+)-(\\\\d+)\", \"400\"),\n+        (\"100-200\", \"(\\\\d+)\", \"400\")).toDF(\"a\", \"b\", \"c\")\n \n-    // for testing the mutable state of the expression in code gen.\n-    // This is a hack way to enable the codegen, thus the codegen is enable by default,\n-    // it will still use the interpretProjection if projection followed by a LocalRelation,\n-    // hence we add a filter operator.\n-    // See the optimizer rule `ConvertToLocalRelation`\n-    checkAnswer(\n-      df.filter(\"isnotnull(a)\").selectExpr(\n-        \"regexp_replace(a, b, c)\",\n-        \"regexp_extract(a, b, 1)\"),\n-      Row(\"300\", \"100\") :: Row(\"400\", \"100\") :: Row(\"400-400\", \"100\") :: Nil)\n+      checkAnswer(\n+        df.select(\n+          regexp_replace($\"a\", \"(\\\\d+)\", \"num\"),\n+          regexp_replace($\"a\", $\"b\", $\"c\"),\n+          regexp_extract($\"a\", \"(\\\\d+)-(\\\\d+)\", 1)),\n+        Row(\"num-num\", \"300\", \"100\") :: Row(\"num-num\", \"400\", \"100\") ::\n+          Row(\"num-num\", \"400-400\", \"100\") :: Nil)\n+\n+      // for testing the mutable state of the expression in code gen.\n+      // This is a hack way to enable the codegen, thus the codegen is enable by default,\n+      // it will still use the interpretProjection if projection followed by a LocalRelation,\n+      // hence we add a filter operator.\n+      // See the optimizer rule `ConvertToLocalRelation`\n+      checkAnswer(\n+        df.filter(\"isnotnull(a)\").selectExpr(\n+          \"regexp_replace(a, b, c)\",\n+          \"regexp_extract(a, b, 1)\"),\n+        Row(\"300\", \"100\") :: Row(\"400\", \"100\") :: Row(\"400-400\", \"100\") :: Nil)\n+    }\n   }\n \n   test(\"non-matching optional group\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala\nindex 75eabcb96f2..7c0bbd71551 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala\n@@ -21,10 +21,11 @@ import scala.collection.mutable.ArrayBuffer\n \n import org.apache.spark.sql.catalyst.expressions.SubqueryExpression\n import 
org.apache.spark.sql.catalyst.plans.logical.{Aggregate, Join, LogicalPlan, Project, Sort, Union}\n+import org.apache.spark.sql.comet.CometScanExec\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecution}\n import org.apache.spark.sql.execution.datasources.FileScanRDD\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.joins.{BaseJoinExec, BroadcastHashJoinExec, BroadcastNestedLoopJoinExec}\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n@@ -1543,6 +1544,12 @@ class SubquerySuite extends QueryTest\n             fs.inputRDDs().forall(\n               _.asInstanceOf[FileScanRDD].filePartitions.forall(\n                 _.files.forall(_.urlEncodedPath.contains(\"p=0\"))))\n+        case WholeStageCodegenExec(ColumnarToRowExec(InputAdapter(\n+        fs @ CometScanExec(_, _, _, _, partitionFilters, _, _, _, _, _, _)))) =>\n+          partitionFilters.exists(ExecSubqueryExpression.hasSubquery) &&\n+            fs.inputRDDs().forall(\n+              _.asInstanceOf[FileScanRDD].filePartitions.forall(\n+                _.files.forall(_.urlEncodedPath.contains(\"p=0\"))))\n         case _ => false\n       })\n     }\n@@ -2108,7 +2115,7 @@ class SubquerySuite extends QueryTest\n \n       df.collect()\n       val exchanges = collect(df.queryExecution.executedPlan) {\n-        case s: ShuffleExchangeExec => s\n+        case s: ShuffleExchangeLike => s\n       }\n       assert(exchanges.size === 1)\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala\nindex 02990a7a40d..bddf5e1ccc2 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala\n@@ -24,6 +24,7 @@ import test.org.apache.spark.sql.connector._\n \n import org.apache.spark.sql.{AnalysisException, DataFrame, QueryTest, Row}\n import org.apache.spark.sql.catalyst.InternalRow\n+import org.apache.spark.sql.comet.CometSortExec\n import org.apache.spark.sql.connector.catalog.{PartitionInternalRow, SupportsRead, Table, TableCapability, TableProvider}\n import org.apache.spark.sql.connector.catalog.TableCapability._\n import org.apache.spark.sql.connector.expressions.{Expression, FieldReference, Literal, NamedReference, NullOrdering, SortDirection, SortOrder, Transform}\n@@ -33,7 +34,7 @@ import org.apache.spark.sql.connector.read.partitioning.{KeyGroupedPartitioning,\n import org.apache.spark.sql.execution.SortExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.datasources.v2.{BatchScanExec, DataSourceV2Relation, DataSourceV2ScanRelation}\n-import org.apache.spark.sql.execution.exchange.{Exchange, ShuffleExchangeExec}\n+import org.apache.spark.sql.execution.exchange.{Exchange, ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector\n import org.apache.spark.sql.expressions.Window\n import org.apache.spark.sql.functions._\n@@ -268,13 +269,13 @@ class DataSourceV2Suite extends QueryTest with SharedSparkSession with AdaptiveS\n           val groupByColJ = df.groupBy($\"j\").agg(sum($\"i\"))\n           checkAnswer(groupByColJ, 
Seq(Row(2, 8), Row(4, 2), Row(6, 5)))\n           assert(collectFirst(groupByColJ.queryExecution.executedPlan) {\n-            case e: ShuffleExchangeExec => e\n+            case e: ShuffleExchangeLike => e\n           }.isDefined)\n \n           val groupByIPlusJ = df.groupBy($\"i\" + $\"j\").agg(count(\"*\"))\n           checkAnswer(groupByIPlusJ, Seq(Row(5, 2), Row(6, 2), Row(8, 1), Row(9, 1)))\n           assert(collectFirst(groupByIPlusJ.queryExecution.executedPlan) {\n-            case e: ShuffleExchangeExec => e\n+            case e: ShuffleExchangeLike => e\n           }.isDefined)\n         }\n       }\n@@ -334,10 +335,11 @@ class DataSourceV2Suite extends QueryTest with SharedSparkSession with AdaptiveS\n \n                 val (shuffleExpected, sortExpected) = groupByExpects\n                 assert(collectFirst(groupBy.queryExecution.executedPlan) {\n-                  case e: ShuffleExchangeExec => e\n+                  case e: ShuffleExchangeLike => e\n                 }.isDefined === shuffleExpected)\n                 assert(collectFirst(groupBy.queryExecution.executedPlan) {\n                   case e: SortExec => e\n+                  case c: CometSortExec => c\n                 }.isDefined === sortExpected)\n               }\n \n@@ -352,10 +354,11 @@ class DataSourceV2Suite extends QueryTest with SharedSparkSession with AdaptiveS\n \n                 val (shuffleExpected, sortExpected) = windowFuncExpects\n                 assert(collectFirst(windowPartByColIOrderByColJ.queryExecution.executedPlan) {\n-                  case e: ShuffleExchangeExec => e\n+                  case e: ShuffleExchangeLike => e\n                 }.isDefined === shuffleExpected)\n                 assert(collectFirst(windowPartByColIOrderByColJ.queryExecution.executedPlan) {\n                   case e: SortExec => e\n+                  case c: CometSortExec => c\n                 }.isDefined === sortExpected)\n               }\n             }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala\nindex cfc8b2cc845..c4be7eb3731 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala\n@@ -21,6 +21,7 @@ import scala.collection.mutable.ArrayBuffer\n import org.apache.spark.SparkConf\n import org.apache.spark.sql.{AnalysisException, QueryTest}\n import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan\n+import org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.connector.catalog.{SupportsRead, SupportsWrite, Table, TableCapability}\n import org.apache.spark.sql.connector.read.ScanBuilder\n import org.apache.spark.sql.connector.write.{LogicalWriteInfo, WriteBuilder}\n@@ -184,7 +185,11 @@ class FileDataSourceV2FallBackSuite extends QueryTest with SharedSparkSession {\n             val df = spark.read.format(format).load(path.getCanonicalPath)\n             checkAnswer(df, inputData.toDF())\n             assert(\n-              df.queryExecution.executedPlan.exists(_.isInstanceOf[FileSourceScanExec]))\n+              df.queryExecution.executedPlan.exists {\n+                case _: FileSourceScanExec | _: CometScanExec | _: CometNativeScanExec => true\n+                case _ => false\n+              }\n+            )\n           }\n         } finally {\n           
spark.listenerManager.unregister(listener)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala\nindex cf76f6ca32c..f454128af06 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala\n@@ -22,6 +22,7 @@ import org.apache.spark.sql.{DataFrame, Row}\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.{Literal, TransformExpression}\n import org.apache.spark.sql.catalyst.plans.physical\n+import org.apache.spark.sql.comet.CometSortMergeJoinExec\n import org.apache.spark.sql.connector.catalog.Identifier\n import org.apache.spark.sql.connector.catalog.InMemoryTableCatalog\n import org.apache.spark.sql.connector.catalog.functions._\n@@ -31,7 +32,7 @@ import org.apache.spark.sql.connector.expressions.Expressions._\n import org.apache.spark.sql.execution.SparkPlan\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanRelation\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.joins.SortMergeJoinExec\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.internal.SQLConf._\n@@ -279,13 +280,14 @@ class KeyGroupedPartitioningSuite extends DistributionAndOrderingSuiteBase {\n         Row(\"bbb\", 20, 250.0), Row(\"bbb\", 20, 350.0), Row(\"ccc\", 30, 400.50)))\n   }\n \n-  private def collectShuffles(plan: SparkPlan): Seq[ShuffleExchangeExec] = {\n+  private def collectShuffles(plan: SparkPlan): Seq[ShuffleExchangeLike] = {\n     // here we skip collecting shuffle operators that are not associated with SMJ\n     collect(plan) {\n       case s: SortMergeJoinExec => s\n+      case c: CometSortMergeJoinExec => c.originalPlan\n     }.flatMap(smj =>\n       collect(smj) {\n-        case s: ShuffleExchangeExec => s\n+        case s: ShuffleExchangeLike => s\n       })\n   }\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala\nindex c0ec8a58bd5..4e8bc6ed3c5 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala\n@@ -27,7 +27,7 @@ import org.apache.hadoop.fs.permission.FsPermission\n import org.mockito.Mockito.{mock, spy, when}\n \n import org.apache.spark._\n-import org.apache.spark.sql.{AnalysisException, DataFrame, Dataset, QueryTest, Row, SaveMode}\n+import org.apache.spark.sql.{AnalysisException, DataFrame, Dataset, IgnoreComet, QueryTest, Row, SaveMode}\n import org.apache.spark.sql.catalyst.expressions.CodegenObjectFactoryMode._\n import org.apache.spark.sql.catalyst.util.BadRecordException\n import org.apache.spark.sql.execution.datasources.jdbc.{DriverRegistry, JDBCOptions}\n@@ -248,7 +248,8 @@ class QueryExecutionErrorsSuite\n   }\n \n   test(\"INCONSISTENT_BEHAVIOR_CROSS_VERSION: \" +\n-    \"compatibility with Spark 2.4/3.2 in reading/writing dates\") {\n+    \"compatibility with Spark 2.4/3.2 in reading/writing dates\",\n+    IgnoreComet(\"Comet doesn't completely support 
datetime rebase mode yet\")) {\n \n     // Fail to read ancient datetime values.\n     withSQLConf(SQLConf.PARQUET_REBASE_MODE_IN_READ.key -> EXCEPTION.toString) {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala\nindex 418ca3430bb..eb8267192f8 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala\n@@ -23,7 +23,7 @@ import scala.util.Random\n import org.apache.hadoop.fs.Path\n \n import org.apache.spark.SparkConf\n-import org.apache.spark.sql.{DataFrame, QueryTest}\n+import org.apache.spark.sql.{DataFrame, IgnoreComet, QueryTest}\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.execution.datasources.v2.orc.OrcScan\n import org.apache.spark.sql.internal.SQLConf\n@@ -195,7 +195,7 @@ class DataSourceV2ScanExecRedactionSuite extends DataSourceScanRedactionTest {\n     }\n   }\n \n-  test(\"FileScan description\") {\n+  test(\"FileScan description\", IgnoreComet(\"Comet doesn't use BatchScan\")) {\n     Seq(\"json\", \"orc\", \"parquet\").foreach { format =>\n       withTempPath { path =>\n         val dir = path.getCanonicalPath\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala\nindex 743ec41dbe7..9f30d6c8e04 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala\n@@ -53,6 +53,10 @@ class LogicalPlanTagInSparkPlanSuite extends TPCDSQuerySuite with DisableAdaptiv\n     case ColumnarToRowExec(i: InputAdapter) => isScanPlanTree(i.child)\n     case p: ProjectExec => isScanPlanTree(p.child)\n     case f: FilterExec => isScanPlanTree(f.child)\n+    // Comet produces a scan plan tree like:\n+    // ColumnarToRow\n+    //  +- ReusedExchange\n+    case _: ReusedExchangeExec => false\n     case _: LeafExecNode => true\n     case _ => false\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala\nindex 4b3d3a4b805..56e1e0e6f16 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala\n@@ -18,7 +18,7 @@\n package org.apache.spark.sql.execution\n \n import org.apache.spark.rdd.RDD\n-import org.apache.spark.sql.{execution, DataFrame, Row}\n+import org.apache.spark.sql.{execution, DataFrame, IgnoreCometSuite, Row}\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions._\n import org.apache.spark.sql.catalyst.plans._\n@@ -35,7 +35,9 @@ import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.sql.types._\n \n-class PlannerSuite extends SharedSparkSession with AdaptiveSparkPlanHelper {\n+// Ignore this suite when Comet is enabled. This suite tests the Spark planner, and the Comet\n+// planner produces too many differences. 
We simply ignore this suite for now.\n+class PlannerSuite extends SharedSparkSession with AdaptiveSparkPlanHelper with IgnoreCometSuite {\n   import testImplicits._\n \n   setupTestData()\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala\nindex 9e9d717db3b..ec73082f458 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala\n@@ -17,7 +17,10 @@\n \n package org.apache.spark.sql.execution\n \n-import org.apache.spark.sql.{DataFrame, QueryTest, Row}\n+import org.apache.comet.CometConf\n+\n+import org.apache.spark.sql.{DataFrame, IgnoreComet, QueryTest, Row}\n+import org.apache.spark.sql.comet.CometProjectExec\n import org.apache.spark.sql.connector.SimpleWritableDataSource\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.internal.SQLConf\n@@ -34,7 +37,10 @@ abstract class RemoveRedundantProjectsSuiteBase\n   private def assertProjectExecCount(df: DataFrame, expected: Int): Unit = {\n     withClue(df.queryExecution) {\n       val plan = df.queryExecution.executedPlan\n-      val actual = collectWithSubqueries(plan) { case p: ProjectExec => p }.size\n+      val actual = collectWithSubqueries(plan) {\n+        case p: ProjectExec => p\n+        case p: CometProjectExec => p\n+      }.size\n       assert(actual == expected)\n     }\n   }\n@@ -112,7 +118,8 @@ abstract class RemoveRedundantProjectsSuiteBase\n     assertProjectExec(query, 1, 3)\n   }\n \n-  test(\"join with ordering requirement\") {\n+  test(\"join with ordering requirement\",\n+    IgnoreComet(\"TODO: Support SubqueryBroadcastExec in Comet: #242\")) {\n     val query = \"select * from (select key, a, c, b from testView) as t1 join \" +\n       \"(select key, a, b, c from testView) as t2 on t1.key = t2.key where t2.a > 50\"\n     assertProjectExec(query, 2, 2)\n@@ -134,12 +141,21 @@ abstract class RemoveRedundantProjectsSuiteBase\n       val df = data.selectExpr(\"a\", \"b\", \"key\", \"explode(array(key, a, b)) as d\").filter(\"d > 0\")\n       df.collect()\n       val plan = df.queryExecution.executedPlan\n-      val numProjects = collectWithSubqueries(plan) { case p: ProjectExec => p }.length\n+      val numProjects = collectWithSubqueries(plan) {\n+        case p: ProjectExec => p\n+        case p: CometProjectExec => p\n+      }.length\n+\n+      // Comet-specific change to get the original Spark plan before applying\n+      // a transformation that adds a new ProjectExec\n+      var sparkPlan: SparkPlan = null\n+      withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n+        val df = data.selectExpr(\"a\", \"b\", \"key\", \"explode(array(key, a, b)) as d\").filter(\"d > 0\")\n+        df.collect()\n+        sparkPlan = df.queryExecution.executedPlan\n+      }\n \n-      // Create a new plan that reverse the GenerateExec output and add a new ProjectExec between\n-      // GenerateExec and its child. 
This is to test if the ProjectExec is removed, the output of\n-      // the query will be incorrect.\n-      val newPlan = stripAQEPlan(plan) transform {\n+      val newPlan = stripAQEPlan(sparkPlan) transform {\n         case g @ GenerateExec(_, requiredChildOutput, _, _, child) =>\n           g.copy(requiredChildOutput = requiredChildOutput.reverse,\n             child = ProjectExec(requiredChildOutput.reverse, child))\n@@ -151,6 +167,7 @@ abstract class RemoveRedundantProjectsSuiteBase\n       // The manually added ProjectExec node shouldn't be removed.\n       assert(collectWithSubqueries(newExecutedPlan) {\n         case p: ProjectExec => p\n+        case p: CometProjectExec => p\n       }.size == numProjects + 1)\n \n       // Check the original plan's output and the new plan's output are the same.\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala\nindex 30ce940b032..0d3f6c6c934 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala\n@@ -19,6 +19,7 @@ package org.apache.spark.sql.execution\n \n import org.apache.spark.sql.{DataFrame, QueryTest}\n import org.apache.spark.sql.catalyst.plans.physical.{RangePartitioning, UnknownPartitioning}\n+import org.apache.spark.sql.comet.CometSortExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.execution.joins.ShuffledJoin\n import org.apache.spark.sql.internal.SQLConf\n@@ -33,7 +34,7 @@ abstract class RemoveRedundantSortsSuiteBase\n \n   private def checkNumSorts(df: DataFrame, count: Int): Unit = {\n     val plan = df.queryExecution.executedPlan\n-    assert(collectWithSubqueries(plan) { case s: SortExec => s }.length == count)\n+    assert(collectWithSubqueries(plan) { case _: SortExec | _: CometSortExec => 1 }.length == count)\n   }\n \n   private def checkSorts(query: String, enabledCount: Int, disabledCount: Int): Unit = {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala\nindex 47679ed7865..9ffbaecb98e 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala\n@@ -18,6 +18,7 @@\n package org.apache.spark.sql.execution\n \n import org.apache.spark.sql.{DataFrame, QueryTest}\n+import org.apache.spark.sql.comet.CometHashAggregateExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.execution.aggregate.{HashAggregateExec, ObjectHashAggregateExec, SortAggregateExec}\n import org.apache.spark.sql.internal.SQLConf\n@@ -31,7 +32,7 @@ abstract class ReplaceHashWithSortAggSuiteBase\n   private def checkNumAggs(df: DataFrame, hashAggCount: Int, sortAggCount: Int): Unit = {\n     val plan = df.queryExecution.executedPlan\n     assert(collectWithSubqueries(plan) {\n-      case s @ (_: HashAggregateExec | _: ObjectHashAggregateExec) => s\n+      case s @ (_: HashAggregateExec | _: ObjectHashAggregateExec | _: CometHashAggregateExec ) => s\n     }.length == 
hashAggCount)\n     assert(collectWithSubqueries(plan) { case s: SortAggregateExec => s }.length == sortAggCount)\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala\nindex eec396b2e39..bf3f1c769d6 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala\n@@ -18,7 +18,7 @@\n package org.apache.spark.sql.execution\n \n import org.apache.spark.TestUtils.assertSpilled\n-import org.apache.spark.sql.{AnalysisException, QueryTest, Row}\n+import org.apache.spark.sql.{AnalysisException, IgnoreComet, QueryTest, Row}\n import org.apache.spark.sql.internal.SQLConf.{WINDOW_EXEC_BUFFER_IN_MEMORY_THRESHOLD, WINDOW_EXEC_BUFFER_SPILL_THRESHOLD}\n import org.apache.spark.sql.test.SharedSparkSession\n \n@@ -470,7 +470,7 @@ class SQLWindowFunctionSuite extends QueryTest with SharedSparkSession {\n       Row(1, 3, null) :: Row(2, null, 4) :: Nil)\n   }\n \n-  test(\"test with low buffer spill threshold\") {\n+  test(\"test with low buffer spill threshold\", IgnoreComet(\"Comet does not support spilling\")) {\n     val nums = sparkContext.parallelize(1 to 10).map(x => (x, x % 2)).toDF(\"x\", \"y\")\n     nums.createOrReplaceTempView(\"nums\")\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala\nindex b14f4a405f6..90bed10eca9 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala\n@@ -23,6 +23,7 @@ import org.apache.spark.sql.QueryTest\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}\n import org.apache.spark.sql.catalyst.plans.logical.Deduplicate\n+import org.apache.spark.sql.comet.{CometColumnarToRowExec, CometNativeColumnarToRowExec}\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n@@ -131,7 +132,11 @@ class SparkPlanSuite extends QueryTest with SharedSparkSession {\n         spark.range(1).write.parquet(path.getAbsolutePath)\n         val df = spark.read.parquet(path.getAbsolutePath)\n         val columnarToRowExec =\n-          df.queryExecution.executedPlan.collectFirst { case p: ColumnarToRowExec => p }.get\n+          df.queryExecution.executedPlan.collectFirst {\n+            case p: ColumnarToRowExec => p\n+            case p: CometColumnarToRowExec => p\n+            case p: CometNativeColumnarToRowExec => p\n+          }.get\n         try {\n           spark.range(1).foreach { _ =>\n             columnarToRowExec.canonicalized\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala\nindex ac710c32296..2854b433dd3 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala\n@@ -17,7 +17,7 @@\n \n package org.apache.spark.sql.execution\n \n-import org.apache.spark.sql.{Dataset, QueryTest, Row, SaveMode}\n+import org.apache.spark.sql.{Dataset, 
IgnoreCometSuite, QueryTest, Row, SaveMode}\n import org.apache.spark.sql.catalyst.expressions.codegen.{ByteCodeStats, CodeAndComment, CodeGenerator}\n import org.apache.spark.sql.execution.adaptive.DisableAdaptiveExecutionSuite\n import org.apache.spark.sql.execution.aggregate.{HashAggregateExec, SortAggregateExec}\n@@ -29,7 +29,7 @@ import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.sql.types.{IntegerType, StringType, StructType}\n \n // Disable AQE because the WholeStageCodegenExec is added when running QueryStageExec\n-class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n+class WholeStageCodegenSuite extends QueryTest with SharedSparkSession with IgnoreCometSuite\n   with DisableAdaptiveExecutionSuite {\n \n   import testImplicits._\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala\nindex 593bd7bb4ba..32af28b0238 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala\n@@ -26,9 +26,11 @@ import org.scalatest.time.SpanSugar._\n \n import org.apache.spark.SparkException\n import org.apache.spark.scheduler.{SparkListener, SparkListenerEvent, SparkListenerJobStart}\n-import org.apache.spark.sql.{Dataset, QueryTest, Row, SparkSession, Strategy}\n+import org.apache.spark.sql.{Dataset, IgnoreComet, QueryTest, Row, SparkSession, Strategy}\n import org.apache.spark.sql.catalyst.optimizer.{BuildLeft, BuildRight}\n import org.apache.spark.sql.catalyst.plans.logical.{Aggregate, LogicalPlan}\n+import org.apache.spark.sql.comet._\n+import org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\n import org.apache.spark.sql.execution.{CollectLimitExec, LocalTableScanExec, PartialReducerPartitionSpec, QueryExecution, ReusedSubqueryExec, ShuffledRowRDD, SortExec, SparkPlan, SparkPlanInfo, UnionExec}\n import org.apache.spark.sql.execution.aggregate.BaseAggregateExec\n import org.apache.spark.sql.execution.command.DataWritingCommandExec\n@@ -104,6 +106,7 @@ class AdaptiveQueryExecSuite\n   private def findTopLevelBroadcastHashJoin(plan: SparkPlan): Seq[BroadcastHashJoinExec] = {\n     collect(plan) {\n       case j: BroadcastHashJoinExec => j\n+      case j: CometBroadcastHashJoinExec => j.originalPlan.asInstanceOf[BroadcastHashJoinExec]\n     }\n   }\n \n@@ -116,30 +119,39 @@ class AdaptiveQueryExecSuite\n   private def findTopLevelSortMergeJoin(plan: SparkPlan): Seq[SortMergeJoinExec] = {\n     collect(plan) {\n       case j: SortMergeJoinExec => j\n+      case j: CometSortMergeJoinExec =>\n+        assert(j.originalPlan.isInstanceOf[SortMergeJoinExec])\n+        j.originalPlan.asInstanceOf[SortMergeJoinExec]\n     }\n   }\n \n   private def findTopLevelShuffledHashJoin(plan: SparkPlan): Seq[ShuffledHashJoinExec] = {\n     collect(plan) {\n       case j: ShuffledHashJoinExec => j\n+      case j: CometHashJoinExec => j.originalPlan.asInstanceOf[ShuffledHashJoinExec]\n     }\n   }\n \n   private def findTopLevelBaseJoin(plan: SparkPlan): Seq[BaseJoinExec] = {\n     collect(plan) {\n       case j: BaseJoinExec => j\n+      case c: CometHashJoinExec => c.originalPlan.asInstanceOf[BaseJoinExec]\n+      case c: CometSortMergeJoinExec => c.originalPlan.asInstanceOf[BaseJoinExec]\n+      case c: CometBroadcastHashJoinExec => 
c.originalPlan.asInstanceOf[BaseJoinExec]\n     }\n   }\n \n   private def findTopLevelSort(plan: SparkPlan): Seq[SortExec] = {\n     collect(plan) {\n       case s: SortExec => s\n+      case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n     }\n   }\n \n   private def findTopLevelAggregate(plan: SparkPlan): Seq[BaseAggregateExec] = {\n     collect(plan) {\n       case agg: BaseAggregateExec => agg\n+      case agg: CometHashAggregateExec => agg.originalPlan.asInstanceOf[BaseAggregateExec]\n     }\n   }\n \n@@ -176,6 +188,7 @@ class AdaptiveQueryExecSuite\n       val parts = rdd.partitions\n       assert(parts.forall(rdd.preferredLocations(_).nonEmpty))\n     }\n+\n     assert(numShuffles === (numLocalReads.length + numShufflesWithoutLocalRead))\n   }\n \n@@ -184,7 +197,7 @@ class AdaptiveQueryExecSuite\n     val plan = df.queryExecution.executedPlan\n     assert(plan.isInstanceOf[AdaptiveSparkPlanExec])\n     val shuffle = plan.asInstanceOf[AdaptiveSparkPlanExec].executedPlan.collect {\n-      case s: ShuffleExchangeExec => s\n+      case s: ShuffleExchangeLike => s\n     }\n     assert(shuffle.size == 1)\n     assert(shuffle(0).outputPartitioning.numPartitions == numPartition)\n@@ -200,7 +213,8 @@ class AdaptiveQueryExecSuite\n       assert(smj.size == 1)\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n-      checkNumLocalShuffleReads(adaptivePlan)\n+      // Comet shuffle changes shuffle metrics\n+      // checkNumLocalShuffleReads(adaptivePlan)\n     }\n   }\n \n@@ -227,7 +241,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Reuse the parallelism of coalesced shuffle in local shuffle read\") {\n+  test(\"Reuse the parallelism of coalesced shuffle in local shuffle read\",\n+      IgnoreComet(\"Comet shuffle changes shuffle partition size\")) {\n     withSQLConf(\n       SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\",\n@@ -259,7 +274,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Reuse the default parallelism in local shuffle read\") {\n+  test(\"Reuse the default parallelism in local shuffle read\",\n+      IgnoreComet(\"Comet shuffle changes shuffle partition size\")) {\n     withSQLConf(\n       SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\",\n@@ -273,7 +289,8 @@ class AdaptiveQueryExecSuite\n       val localReads = collect(adaptivePlan) {\n         case read: AQEShuffleReadExec if read.isLocalRead => read\n       }\n-      assert(localReads.length == 2)\n+      // Comet shuffle changes shuffle metrics\n+      assert(localReads.length == 1)\n       val localShuffleRDD0 = localReads(0).execute().asInstanceOf[ShuffledRowRDD]\n       val localShuffleRDD1 = localReads(1).execute().asInstanceOf[ShuffledRowRDD]\n       // the final parallelism is math.max(1, numReduces / numMappers): math.max(1, 5/2) = 2\n@@ -298,7 +315,9 @@ class AdaptiveQueryExecSuite\n           .groupBy($\"a\").count()\n         checkAnswer(testDf, Seq())\n         val plan = testDf.queryExecution.executedPlan\n-        assert(find(plan)(_.isInstanceOf[SortMergeJoinExec]).isDefined)\n+        assert(find(plan) { case p =>\n+          p.isInstanceOf[SortMergeJoinExec] || p.isInstanceOf[CometSortMergeJoinExec]\n+        }.isDefined)\n         val coalescedReads = collect(plan) {\n           case r: AQEShuffleReadExec => r\n         }\n@@ -312,7 +331,9 @@ class AdaptiveQueryExecSuite\n           
.groupBy($\"a\").count()\n         checkAnswer(testDf, Seq())\n         val plan = testDf.queryExecution.executedPlan\n-        assert(find(plan)(_.isInstanceOf[BroadcastHashJoinExec]).isDefined)\n+        assert(find(plan) { case p =>\n+          p.isInstanceOf[BroadcastHashJoinExec] || p.isInstanceOf[CometBroadcastHashJoinExec]\n+        }.isDefined)\n         val coalescedReads = collect(plan) {\n           case r: AQEShuffleReadExec => r\n         }\n@@ -322,7 +343,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Scalar subquery\") {\n+  test(\"Scalar subquery\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -337,7 +358,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Scalar subquery in later stages\") {\n+  test(\"Scalar subquery in later stages\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -353,7 +374,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"multiple joins\") {\n+  test(\"multiple joins\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -398,7 +419,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"multiple joins with aggregate\") {\n+  test(\"multiple joins with aggregate\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -443,7 +464,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"multiple joins with aggregate 2\") {\n+  test(\"multiple joins with aggregate 2\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"500\") {\n@@ -489,7 +510,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Exchange reuse\") {\n+  test(\"Exchange reuse\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -508,7 +529,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Exchange reuse with subqueries\") {\n+  test(\"Exchange reuse with subqueries\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -539,7 +560,9 @@ class AdaptiveQueryExecSuite\n       assert(smj.size == 1)\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n-      checkNumLocalShuffleReads(adaptivePlan)\n+      // Comet shuffle changes shuffle metrics,\n+      // so we can't check the number of local shuffle reads.\n+      // checkNumLocalShuffleReads(adaptivePlan)\n       // Even with local shuffle read, the query stage reuse can also work.\n       val ex = findReusedExchange(adaptivePlan)\n       assert(ex.nonEmpty)\n@@ -560,7 +583,9 @@ class AdaptiveQueryExecSuite\n       assert(smj.size == 1)\n       val bhj = 
findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n-      checkNumLocalShuffleReads(adaptivePlan)\n+      // Comet shuffle changes shuffle metrics,\n+      // so we can't check the number of local shuffle reads.\n+      // checkNumLocalShuffleReads(adaptivePlan)\n       // Even with local shuffle read, the query stage reuse can also work.\n       val ex = findReusedExchange(adaptivePlan)\n       assert(ex.isEmpty)\n@@ -569,7 +594,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Broadcast exchange reuse across subqueries\") {\n+  test(\"Broadcast exchange reuse across subqueries\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"20000000\",\n@@ -664,7 +690,8 @@ class AdaptiveQueryExecSuite\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n       // There is still a SMJ, and its two shuffles can't apply local read.\n-      checkNumLocalShuffleReads(adaptivePlan, 2)\n+      // Comet shuffle changes shuffle metrics\n+      // checkNumLocalShuffleReads(adaptivePlan, 2)\n     }\n   }\n \n@@ -786,7 +813,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-29544: adaptive skew join with different join types\") {\n+  test(\"SPARK-29544: adaptive skew join with different join types\",\n+      IgnoreComet(\"Comet shuffle has different partition metrics\")) {\n     Seq(\"SHUFFLE_MERGE\", \"SHUFFLE_HASH\").foreach { joinHint =>\n       def getJoinNode(plan: SparkPlan): Seq[ShuffledJoin] = if (joinHint == \"SHUFFLE_MERGE\") {\n         findTopLevelSortMergeJoin(plan)\n@@ -1004,7 +1032,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"metrics of the shuffle read\") {\n+  test(\"metrics of the shuffle read\",\n+      IgnoreComet(\"Comet shuffle changes the metrics\")) {\n     withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\") {\n       val (_, adaptivePlan) = runAdaptiveAndVerifyResult(\n         \"SELECT key FROM testData GROUP BY key\")\n@@ -1599,7 +1628,7 @@ class AdaptiveQueryExecSuite\n         val (_, adaptivePlan) = runAdaptiveAndVerifyResult(\n           \"SELECT id FROM v1 GROUP BY id DISTRIBUTE BY id\")\n         assert(collect(adaptivePlan) {\n-          case s: ShuffleExchangeExec => s\n+          case s: ShuffleExchangeLike => s\n         }.length == 1)\n       }\n     }\n@@ -1679,7 +1708,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-33551: Do not use AQE shuffle read for repartition\") {\n+  test(\"SPARK-33551: Do not use AQE shuffle read for repartition\",\n+      IgnoreComet(\"Comet shuffle changes partition size\")) {\n     def hasRepartitionShuffle(plan: SparkPlan): Boolean = {\n       find(plan) {\n         case s: ShuffleExchangeLike =>\n@@ -1864,6 +1894,9 @@ class AdaptiveQueryExecSuite\n     def checkNoCoalescePartitions(ds: Dataset[Row], origin: ShuffleOrigin): Unit = {\n       assert(collect(ds.queryExecution.executedPlan) {\n         case s: ShuffleExchangeExec if s.shuffleOrigin == origin && s.numPartitions == 2 => s\n+        case c: CometShuffleExchangeExec\n+          if c.originalPlan.shuffleOrigin == origin &&\n+            c.originalPlan.numPartitions == 2 => c\n       }.size == 1)\n       ds.collect()\n       val plan = ds.queryExecution.executedPlan\n@@ -1872,6 +1905,9 @@ class AdaptiveQueryExecSuite\n       }.isEmpty)\n       assert(collect(plan) {\n         case s: 
ShuffleExchangeExec if s.shuffleOrigin == origin && s.numPartitions == 2 => s\n+        case c: CometShuffleExchangeExec\n+          if c.originalPlan.shuffleOrigin == origin &&\n+            c.originalPlan.numPartitions == 2 => c\n       }.size == 1)\n       checkAnswer(ds, testData)\n     }\n@@ -2028,7 +2064,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-35264: Support AQE side shuffled hash join formula\") {\n+  test(\"SPARK-35264: Support AQE side shuffled hash join formula\",\n+      IgnoreComet(\"Comet shuffle changes the partition size\")) {\n     withTempView(\"t1\", \"t2\") {\n       def checkJoinStrategy(shouldShuffleHashJoin: Boolean): Unit = {\n         Seq(\"100\", \"100000\").foreach { size =>\n@@ -2114,7 +2151,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-35725: Support optimize skewed partitions in RebalancePartitions\") {\n+  test(\"SPARK-35725: Support optimize skewed partitions in RebalancePartitions\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withTempView(\"v\") {\n       withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n@@ -2213,7 +2251,7 @@ class AdaptiveQueryExecSuite\n               runAdaptiveAndVerifyResult(s\"SELECT $repartition key1 FROM skewData1 \" +\n                 s\"JOIN skewData2 ON key1 = key2 GROUP BY key1\")\n             val shuffles1 = collect(adaptive1) {\n-              case s: ShuffleExchangeExec => s\n+              case s: ShuffleExchangeLike => s\n             }\n             assert(shuffles1.size == 3)\n             // shuffles1.head is the top-level shuffle under the Aggregate operator\n@@ -2226,7 +2264,7 @@ class AdaptiveQueryExecSuite\n               runAdaptiveAndVerifyResult(s\"SELECT $repartition key1 FROM skewData1 \" +\n                 s\"JOIN skewData2 ON key1 = key2\")\n             val shuffles2 = collect(adaptive2) {\n-              case s: ShuffleExchangeExec => s\n+              case s: ShuffleExchangeLike => s\n             }\n             if (hasRequiredDistribution) {\n               assert(shuffles2.size == 3)\n@@ -2260,7 +2298,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-35794: Allow custom plugin for cost evaluator\") {\n+  test(\"SPARK-35794: Allow custom plugin for cost evaluator\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     CostEvaluator.instantiate(\n       classOf[SimpleShuffleSortCostEvaluator].getCanonicalName, spark.sparkContext.getConf)\n     intercept[IllegalArgumentException] {\n@@ -2404,6 +2443,7 @@ class AdaptiveQueryExecSuite\n           val (_, adaptive) = runAdaptiveAndVerifyResult(query)\n           assert(adaptive.collect {\n             case sort: SortExec => sort\n+            case sort: CometSortExec => sort\n           }.size == 1)\n           val read = collect(adaptive) {\n             case read: AQEShuffleReadExec => read\n@@ -2421,7 +2461,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-37357: Add small partition factor for rebalance partitions\") {\n+  test(\"SPARK-37357: Add small partition factor for rebalance partitions\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withTempView(\"v\") {\n       withSQLConf(\n         SQLConf.ADAPTIVE_OPTIMIZE_SKEWS_IN_REBALANCE_PARTITIONS_ENABLED.key -> \"true\",\n@@ -2533,7 +2574,7 @@ class AdaptiveQueryExecSuite\n           runAdaptiveAndVerifyResult(\"SELECT key1 FROM skewData1 JOIN skewData2 ON key1 = key2 \" +\n             \"JOIN skewData3 ON value2 = 
value3\")\n         val shuffles1 = collect(adaptive1) {\n-          case s: ShuffleExchangeExec => s\n+          case s: ShuffleExchangeLike => s\n         }\n         assert(shuffles1.size == 4)\n         val smj1 = findTopLevelSortMergeJoin(adaptive1)\n@@ -2544,7 +2585,7 @@ class AdaptiveQueryExecSuite\n           runAdaptiveAndVerifyResult(\"SELECT key1 FROM skewData1 JOIN skewData2 ON key1 = key2 \" +\n             \"JOIN skewData3 ON value1 = value3\")\n         val shuffles2 = collect(adaptive2) {\n-          case s: ShuffleExchangeExec => s\n+          case s: ShuffleExchangeLike => s\n         }\n         assert(shuffles2.size == 4)\n         val smj2 = findTopLevelSortMergeJoin(adaptive2)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala\nindex bd9c79e5b96..2ada8c28842 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala\n@@ -27,6 +27,7 @@ import org.apache.spark.sql.catalyst.SchemaPruningTest\n import org.apache.spark.sql.catalyst.expressions.Concat\n import org.apache.spark.sql.catalyst.parser.CatalystSqlParser\n import org.apache.spark.sql.catalyst.plans.logical.Expand\n+import org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.execution.FileSourceScanExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.functions._\n@@ -867,6 +868,8 @@ abstract class SchemaPruningSuite\n     val fileSourceScanSchemata =\n       collect(df.queryExecution.executedPlan) {\n         case scan: FileSourceScanExec => scan.requiredSchema\n+        case scan: CometScanExec => scan.requiredSchema\n+        case scan: CometNativeScanExec => scan.requiredSchema\n       }\n     assert(fileSourceScanSchemata.size === expectedSchemaCatalogStrings.size,\n       s\"Found ${fileSourceScanSchemata.size} file sources in dataframe, \" +\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala\nindex ce43edb79c1..8436cb727c6 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala\n@@ -17,9 +17,10 @@\n \n package org.apache.spark.sql.execution.datasources\n \n-import org.apache.spark.sql.{QueryTest, Row}\n+import org.apache.spark.sql.{IgnoreComet, QueryTest, Row}\n import org.apache.spark.sql.catalyst.expressions.{Ascending, AttributeReference, NullsFirst, SortOrder}\n import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Sort}\n+import org.apache.spark.sql.comet.CometSortExec\n import org.apache.spark.sql.execution.{QueryExecution, SortExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec\n import org.apache.spark.sql.internal.SQLConf\n@@ -225,6 +226,7 @@ class V1WriteCommandSuite extends QueryTest with SharedSparkSession with V1Write\n           // assert the outer most sort in the executed plan\n           assert(plan.collectFirst {\n             case s: SortExec => s\n+            case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n           }.exists {\n             case 
SortExec(Seq(\n               SortOrder(AttributeReference(\"key\", IntegerType, _, _), Ascending, NullsFirst, _),\n@@ -272,6 +274,7 @@ class V1WriteCommandSuite extends QueryTest with SharedSparkSession with V1Write\n         // assert the outer most sort in the executed plan\n         assert(plan.collectFirst {\n           case s: SortExec => s\n+          case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n         }.exists {\n           case SortExec(Seq(\n             SortOrder(AttributeReference(\"value\", StringType, _, _), Ascending, NullsFirst, _),\n@@ -305,7 +308,8 @@ class V1WriteCommandSuite extends QueryTest with SharedSparkSession with V1Write\n     }\n   }\n \n-  test(\"v1 write with AQE changing SMJ to BHJ\") {\n+  test(\"v1 write with AQE changing SMJ to BHJ\",\n+      IgnoreComet(\"TODO: Comet SMJ to BHJ by AQE\")) {\n     withPlannedWrite { enabled =>\n       withTable(\"t\") {\n         sql(\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala\nindex 1d2e467c94c..3ea82cd1a3f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala\n@@ -28,7 +28,7 @@ import org.apache.hadoop.fs.{FileStatus, FileSystem, GlobFilter, Path}\n import org.mockito.Mockito.{mock, when}\n \n import org.apache.spark.SparkException\n-import org.apache.spark.sql.{DataFrame, QueryTest, Row}\n+import org.apache.spark.sql.{DataFrame, IgnoreCometSuite, QueryTest, Row}\n import org.apache.spark.sql.catalyst.encoders.RowEncoder\n import org.apache.spark.sql.execution.datasources.PartitionedFile\n import org.apache.spark.sql.functions.col\n@@ -38,7 +38,9 @@ import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.sql.types._\n import org.apache.spark.util.Utils\n \n-class BinaryFileFormatSuite extends QueryTest with SharedSparkSession {\n+// For some reason this suite is flaky w/ or w/o Comet when running in GitHub workflows.\n+// Since it isn't related to Comet, we disable it for now.\n+class BinaryFileFormatSuite extends QueryTest with SharedSparkSession with IgnoreCometSuite {\n   import BinaryFileFormat._\n \n   private var testDir: String = _\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala\nindex 07e2849ce6f..3e73645b638 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala\n@@ -28,7 +28,7 @@ import org.apache.parquet.hadoop.ParquetOutputFormat\n \n import org.apache.spark.TestUtils\n import org.apache.spark.memory.MemoryMode\n-import org.apache.spark.sql.Row\n+import org.apache.spark.sql.{IgnoreComet, Row}\n import org.apache.spark.sql.catalyst.util.DateTimeUtils\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n@@ -201,7 +201,8 @@ class ParquetEncodingSuite extends ParquetCompatibilityTest with SharedSparkSess\n     }\n   }\n \n-  test(\"parquet v2 pages - rle 
encoding for boolean value columns\",\n+      IgnoreComet(\"Comet doesn't support RLE encoding yet\")) {\n     val extraOptions = Map[String, String](\n       ParquetOutputFormat.WRITER_VERSION -> ParquetProperties.WriterVersion.PARQUET_2_0.toString\n     )\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala\nindex 104b4e416cd..b8af360fa14 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala\n@@ -38,6 +38,7 @@ import org.apache.parquet.schema.MessageType\n \n import org.apache.spark.{SparkConf, SparkException}\n import org.apache.spark.sql._\n+import org.apache.spark.sql.{IgnoreCometNativeDataFusion, IgnoreCometNativeScan}\n import org.apache.spark.sql.catalyst.dsl.expressions._\n import org.apache.spark.sql.catalyst.expressions._\n import org.apache.spark.sql.catalyst.optimizer.InferFiltersFromConstraints\n@@ -1096,7 +1097,11 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n           // When a filter is pushed to Parquet, Parquet can apply it to every row.\n           // So, we can check the number of rows returned from the Parquet\n           // to make sure our filter pushdown work.\n-          assert(stripSparkFilter(df).count == 1)\n+          // Similar to Spark's vectorized reader, Comet doesn't do row-level filtering but relies\n+          // on Spark to apply the data filters after columnar batches are returned\n+          if (!isCometEnabled) {\n+            assert(stripSparkFilter(df).count == 1)\n+          }\n         }\n       }\n     }\n@@ -1499,7 +1504,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"Filters should be pushed down for vectorized Parquet reader at row group level\") {\n+  test(\"Filters should be pushed down for vectorized Parquet reader at row group level\",\n+    IgnoreCometNativeScan(\"Native scans do not support the tested accumulator\")) {\n     import testImplicits._\n \n     withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"true\",\n@@ -1581,7 +1587,11 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n           // than the total length but should not be a single record.\n           // Note that, if record level filtering is enabled, it should be a single record.\n           // If no filter is pushed down to Parquet, it should be the total length of data.\n-          assert(actual > 1 && actual < data.length)\n+          // When Comet is enabled, only run this assertion in scan-only mode, since with native\n+          // execution `stripSparkFilter` can't remove the native filter\n+          if (!isCometEnabled || isCometScanOnly) {\n+            assert(actual > 1 && actual < data.length)\n+          }\n         }\n       }\n     }\n@@ -1608,7 +1618,11 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n         // than the total length but should not be a single record.\n         // Note that, if record level filtering is enabled, it should be a single record.\n         // If no filter is pushed down to Parquet, it should be the total length of data.\n-        assert(actual > 1 && actual < data.length)\n+        // When Comet is enabled, only run this assertion in scan-only mode, since with 
native execution\n+        // `stripSparkFilter` can't remove the native filter\n+        if (!isCometEnabled || isCometScanOnly) {\n+          assert(actual > 1 && actual < data.length)\n+        }\n       }\n     }\n   }\n@@ -1700,7 +1714,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n       (attr, value) => sources.StringContains(attr, value))\n   }\n \n-  test(\"filter pushdown - StringPredicate\") {\n+  test(\"filter pushdown - StringPredicate\",\n+    IgnoreCometNativeDataFusion(\"cannot be pushed down\")) {\n     import testImplicits._\n     // keep() should take effect on StartsWith/EndsWith/Contains\n     Seq(\n@@ -1744,7 +1759,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"SPARK-17091: Convert IN predicate to Parquet filter push-down\") {\n+  test(\"SPARK-17091: Convert IN predicate to Parquet filter push-down\",\n+    IgnoreCometNativeScan(\"Comet has different push-down behavior\")) {\n     val schema = StructType(Seq(\n       StructField(\"a\", IntegerType, nullable = false)\n     ))\n@@ -1950,11 +1966,21 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n            \"\"\".stripMargin)\n \n         withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"false\") {\n-          val e = intercept[SparkException] {\n+          // Spark native readers wrap the error in SparkException.\n+          // Comet native readers throw RuntimeException directly.\n+          val msg = try {\n             sql(s\"select a from $tableName where b > 0\").collect()\n+            fail(\"Expected an exception\")\n+          } catch {\n+            case e: SparkException =>\n+              assert(e.getCause.isInstanceOf[RuntimeException])\n+              e.getCause.getMessage\n+            case e: RuntimeException =>\n+              e.getMessage\n           }\n-          assert(e.getCause.isInstanceOf[RuntimeException] && e.getCause.getMessage.contains(\n-            \"\"\"Found duplicate field(s) \"B\": [B, b] in case-insensitive mode\"\"\"))\n+          assert(msg.contains(\n+            \"\"\"Found duplicate field(s) \"B\": [B, b] in case-insensitive mode\"\"\"),\n+            s\"Unexpected error message: $msg\")\n         }\n \n         withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"true\") {\n@@ -1985,7 +2011,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"Support Parquet column index\") {\n+  test(\"Support Parquet column index\",\n+      IgnoreComet(\"Comet doesn't support Parquet column index yet\")) {\n     // block 1:\n     //                      null count  min                                       max\n     // page-0                         0  0                                         99\n@@ -2045,7 +2072,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"SPARK-34562: Bloom filter push down\") {\n+  test(\"SPARK-34562: Bloom filter push down\",\n+    IgnoreCometNativeScan(\"Native scans do not support the tested accumulator\")) {\n     withTempPath { dir =>\n       val path = dir.getCanonicalPath\n       spark.range(100).selectExpr(\"id * 2 AS id\")\n@@ -2277,7 +2305,11 @@ class ParquetV1FilterSuite extends ParquetFilterSuite {\n           assert(pushedParquetFilters.exists(_.getClass === filterClass),\n             s\"${pushedParquetFilters.map(_.getClass).toList} did not contain ${filterClass}.\")\n \n-          
checker(stripSparkFilter(query), expected)\n+          // Similar to Spark's vectorized reader, Comet doesn't do row-level filtering but relies\n+          // on Spark to apply the data filters after columnar batches are returned\n+          if (!isCometEnabled) {\n+            checker(stripSparkFilter(query), expected)\n+          }\n         } else {\n           assert(selectedFilters.isEmpty, \"There is filter pushed down\")\n         }\n@@ -2337,7 +2369,11 @@ class ParquetV2FilterSuite extends ParquetFilterSuite {\n           assert(pushedParquetFilters.exists(_.getClass === filterClass),\n             s\"${pushedParquetFilters.map(_.getClass).toList} did not contain ${filterClass}.\")\n \n-          checker(stripSparkFilter(query), expected)\n+          // Similar to Spark's vectorized reader, Comet doesn't do row-level filtering but relies\n+          // on Spark to apply the data filters after columnar batches are returned\n+          if (!isCometEnabled) {\n+            checker(stripSparkFilter(query), expected)\n+          }\n \n         case _ =>\n           throw new AnalysisException(\"Can not match ParquetTable in the query.\")\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala\nindex 8670d95c65e..c7ba51f770f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala\n@@ -41,6 +41,7 @@ import org.apache.parquet.schema.{MessageType, MessageTypeParser}\n \n import org.apache.spark.{SPARK_VERSION_SHORT, SparkException, TestUtils}\n import org.apache.spark.sql._\n+import org.apache.spark.sql.IgnoreCometNativeDataFusion\n import org.apache.spark.sql.catalyst.{InternalRow, ScalaReflection}\n import org.apache.spark.sql.catalyst.expressions.{GenericInternalRow, UnsafeRow}\n import org.apache.spark.sql.catalyst.util.DateTimeUtils\n@@ -1064,7 +1065,8 @@ class ParquetIOSuite extends QueryTest with ParquetTest with SharedSparkSession\n     }\n   }\n \n-  test(\"SPARK-35640: read binary as timestamp should throw schema incompatible error\") {\n+  test(\"SPARK-35640: read binary as timestamp should throw schema incompatible error\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     val data = (1 to 4).map(i => Tuple1(i.toString))\n     val readSchema = StructType(Seq(StructField(\"_1\", DataTypes.TimestampType)))\n \n@@ -1075,7 +1077,8 @@ class ParquetIOSuite extends QueryTest with ParquetTest with SharedSparkSession\n     }\n   }\n \n-  test(\"SPARK-35640: int as long should throw schema incompatible error\") {\n+  test(\"SPARK-35640: int as long should throw schema incompatible error\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     val data = (1 to 4).map(i => Tuple1(i))\n     val readSchema = StructType(Seq(StructField(\"_1\", DataTypes.LongType)))\n \n@@ -1335,7 +1338,8 @@ class ParquetIOSuite extends QueryTest with ParquetTest with SharedSparkSession\n     }\n   }\n \n-  test(\"SPARK-40128 read DELTA_LENGTH_BYTE_ARRAY encoded strings\") {\n+  test(\"SPARK-40128 read DELTA_LENGTH_BYTE_ARRAY encoded strings\",\n+      IgnoreComet(\"Comet doesn't support DELTA encoding yet\")) {\n     withAllParquetReaders {\n       checkAnswer(\n         // \"fruit\" column in this file is 
encoded using DELTA_LENGTH_BYTE_ARRAY.\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala\nindex 29cb224c878..ee5a87fa200 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala\n@@ -27,6 +27,7 @@ import org.apache.parquet.hadoop.ParquetOutputFormat\n \n import org.apache.spark.{DebugFilesystem, SparkConf, SparkException}\n import org.apache.spark.sql._\n+import org.apache.spark.sql.IgnoreCometNativeDataFusion\n import org.apache.spark.sql.catalyst.{InternalRow, TableIdentifier}\n import org.apache.spark.sql.catalyst.expressions.SpecificInternalRow\n import org.apache.spark.sql.catalyst.util.ArrayData\n@@ -185,7 +186,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     }\n   }\n \n-  test(\"SPARK-36182: can't read TimestampLTZ as TimestampNTZ\") {\n+  test(\"SPARK-36182: can't read TimestampLTZ as TimestampNTZ\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     val data = (1 to 1000).map { i =>\n       val ts = new java.sql.Timestamp(i)\n       Row(ts)\n@@ -978,7 +980,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     }\n   }\n \n-  test(\"SPARK-26677: negated null-safe equality comparison should not filter matched row groups\") {\n+  test(\"SPARK-26677: negated null-safe equality comparison should not filter matched row groups\",\n+    IgnoreCometNativeScan(\"Native scans had the filter pushed into DF operator, cannot strip\")) {\n     withAllParquetReaders {\n       withTempPath { path =>\n         // Repeated values for dictionary encoding.\n@@ -1031,7 +1034,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     testMigration(fromTsType = \"TIMESTAMP_MICROS\", toTsType = \"INT96\")\n   }\n \n-  test(\"SPARK-34212 Parquet should read decimals correctly\") {\n+  test(\"SPARK-34212 Parquet should read decimals correctly\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     def readParquet(schema: String, path: File): DataFrame = {\n       spark.read.schema(schema).parquet(path.toString)\n     }\n@@ -1047,7 +1051,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n         checkAnswer(readParquet(schema, path), df)\n       }\n \n-      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\") {\n+      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n+          \"spark.comet.enabled\" -> \"false\") {\n         val schema1 = \"a DECIMAL(3, 2), b DECIMAL(18, 3), c DECIMAL(37, 3)\"\n         checkAnswer(readParquet(schema1, path), df)\n         val schema2 = \"a DECIMAL(3, 0), b DECIMAL(18, 1), c DECIMAL(37, 1)\"\n@@ -1069,7 +1074,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n       val df = sql(s\"SELECT 1 a, 123456 b, ${Int.MaxValue.toLong * 10} c, CAST('1.2' AS BINARY) d\")\n       df.write.parquet(path.toString)\n \n-      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\") {\n+      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n+          \"spark.comet.enabled\" -> \"false\") {\n         
checkAnswer(readParquet(\"a DECIMAL(3, 2)\", path), sql(\"SELECT 1.00\"))\n         checkAnswer(readParquet(\"b DECIMAL(3, 2)\", path), Row(null))\n         checkAnswer(readParquet(\"b DECIMAL(11, 1)\", path), sql(\"SELECT 123456.0\"))\n@@ -1113,7 +1119,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     }\n   }\n \n-  test(\"row group skipping doesn't overflow when reading into larger type\") {\n+  test(\"row group skipping doesn't overflow when reading into larger type\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     withTempPath { path =>\n       Seq(0).toDF(\"a\").write.parquet(path.toString)\n       // The vectorized and non-vectorized readers will produce different exceptions, we don't need\n@@ -1128,7 +1135,7 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n             .where(s\"a < ${Long.MaxValue}\")\n             .collect()\n         }\n-        assert(exception.getCause.getCause.isInstanceOf[SchemaColumnConvertNotSupportedException])\n+        assert(exception.getMessage.contains(\"Column: [a], Expected: bigint, Found: INT32\"))\n       }\n     }\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala\nindex 240bb4e6dcb..8287ffa03ca 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala\n@@ -21,7 +21,7 @@ import java.nio.file.{Files, Paths, StandardCopyOption}\n import java.sql.{Date, Timestamp}\n \n import org.apache.spark.{SPARK_VERSION_SHORT, SparkConf, SparkException, SparkUpgradeException}\n-import org.apache.spark.sql.{QueryTest, Row, SPARK_LEGACY_DATETIME_METADATA_KEY, SPARK_LEGACY_INT96_METADATA_KEY, SPARK_TIMEZONE_METADATA_KEY}\n+import org.apache.spark.sql.{IgnoreCometSuite, QueryTest, Row, SPARK_LEGACY_DATETIME_METADATA_KEY, SPARK_LEGACY_INT96_METADATA_KEY, SPARK_TIMEZONE_METADATA_KEY}\n import org.apache.spark.sql.catalyst.util.DateTimeTestUtils\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.internal.SQLConf.{LegacyBehaviorPolicy, ParquetOutputTimestampType}\n@@ -30,9 +30,11 @@ import org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType.{INT96,\n import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.tags.SlowSQLTest\n \n+// Comet is disabled for this suite because it doesn't support datetime rebase mode\n abstract class ParquetRebaseDatetimeSuite\n   extends QueryTest\n   with ParquetTest\n+  with IgnoreCometSuite\n   with SharedSparkSession {\n \n   import testImplicits._\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala\nindex 351c6d698fc..cef6bb08b8c 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala\n@@ -26,6 +26,7 @@ import org.apache.parquet.hadoop.{ParquetFileReader, ParquetOutputFormat}\n import org.apache.parquet.hadoop.ParquetWriter.DEFAULT_BLOCK_SIZE\n \n import 
org.apache.spark.sql.QueryTest\n+import org.apache.spark.sql.comet.{CometBatchScanExec, CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.execution.FileSourceScanExec\n import org.apache.spark.sql.execution.datasources.FileFormat\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n@@ -230,6 +231,17 @@ class ParquetRowIndexSuite extends QueryTest with SharedSparkSession {\n             case f: FileSourceScanExec =>\n               numPartitions += f.inputRDD.partitions.length\n               numOutputRows += f.metrics(\"numOutputRows\").value\n+            case b: CometScanExec =>\n+              numPartitions += b.inputRDD.partitions.length\n+              numOutputRows += b.metrics(\"numOutputRows\").value\n+            case b: CometBatchScanExec =>\n+              numPartitions += b.inputRDD.partitions.length\n+              numOutputRows += b.metrics(\"numOutputRows\").value\n+            case b: CometNativeScanExec =>\n+              numPartitions +=\n+                b.originalPlan.inputRDD.partitions.length\n+              numOutputRows +=\n+                b.metrics(\"numOutputRows\").value\n             case _ =>\n           }\n           assert(numPartitions > 0)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala\nindex 5c0b7def039..151184bc98c 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala\n@@ -20,6 +20,7 @@ package org.apache.spark.sql.execution.datasources.parquet\n import org.apache.spark.SparkConf\n import org.apache.spark.sql.DataFrame\n import org.apache.spark.sql.catalyst.parser.CatalystSqlParser\n+import org.apache.spark.sql.comet.CometBatchScanExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.datasources.SchemaPruningSuite\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n@@ -56,6 +57,7 @@ class ParquetV2SchemaPruningSuite extends ParquetSchemaPruningSuite {\n     val fileSourceScanSchemata =\n       collect(df.queryExecution.executedPlan) {\n         case scan: BatchScanExec => scan.scan.asInstanceOf[ParquetScan].readDataSchema\n+        case scan: CometBatchScanExec => scan.scan.asInstanceOf[ParquetScan].readDataSchema\n       }\n     assert(fileSourceScanSchemata.size === expectedSchemaCatalogStrings.size,\n       s\"Found ${fileSourceScanSchemata.size} file sources in dataframe, \" +\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala\nindex bf5c51b89bb..4e2f0bdb389 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala\n@@ -27,6 +27,7 @@ import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName\n import org.apache.parquet.schema.Type._\n \n import org.apache.spark.SparkException\n+import org.apache.spark.sql.{IgnoreComet, IgnoreCometNativeDataFusion}\n import org.apache.spark.sql.catalyst.ScalaReflection\n import 
org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException\n import org.apache.spark.sql.functions.desc\n@@ -1016,7 +1017,8 @@ class ParquetSchemaSuite extends ParquetSchemaTest {\n     e\n   }\n \n-  test(\"schema mismatch failure error message for parquet reader\") {\n+  test(\"schema mismatch failure error message for parquet reader\",\n+      IgnoreComet(\"Comet doesn't work with vectorizedReaderEnabled = false\")) {\n     withTempPath { dir =>\n       val e = testSchemaMismatch(dir.getCanonicalPath, vectorizedReaderEnabled = false)\n       val expectedMessage = \"Encountered error while reading file\"\n@@ -1026,7 +1028,8 @@ class ParquetSchemaSuite extends ParquetSchemaTest {\n     }\n   }\n \n-  test(\"schema mismatch failure error message for parquet vectorized reader\") {\n+  test(\"schema mismatch failure error message for parquet vectorized reader\",\n+      IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     withTempPath { dir =>\n       val e = testSchemaMismatch(dir.getCanonicalPath, vectorizedReaderEnabled = true)\n       assert(e.getCause.isInstanceOf[SparkException])\n@@ -1067,7 +1070,8 @@ class ParquetSchemaSuite extends ParquetSchemaTest {\n     }\n   }\n \n-  test(\"SPARK-45604: schema mismatch failure error on timestamp_ntz to array<timestamp_ntz>\") {\n+  test(\"SPARK-45604: schema mismatch failure error on timestamp_ntz to array<timestamp_ntz>\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     import testImplicits._\n \n     withTempPath { dir =>\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala\nindex 3a0bd35cb70..b28f06a757f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala\n@@ -20,6 +20,7 @@ package org.apache.spark.sql.execution.debug\n import java.io.ByteArrayOutputStream\n \n import org.apache.spark.rdd.RDD\n+import org.apache.spark.sql.IgnoreComet\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.Attribute\n import org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext\n@@ -124,7 +125,8 @@ class DebuggingSuite extends DebuggingSuiteBase with DisableAdaptiveExecutionSui\n          | id LongType: {}\"\"\".stripMargin))\n   }\n \n-  test(\"SPARK-28537: DebugExec cannot debug columnar related queries\") {\n+  test(\"SPARK-28537: DebugExec cannot debug columnar related queries\",\n+      IgnoreComet(\"Comet does not use FileScan\")) {\n     withTempPath { workDir =>\n       val workDirPath = workDir.getAbsolutePath\n       val input = spark.range(5).toDF(\"id\")\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala\nindex 26e61c6b58d..cb09d7e116a 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala\n@@ -45,8 +45,10 @@ import org.apache.spark.sql.util.QueryExecutionListener\n import org.apache.spark.util.{AccumulatorContext, JsonProtocol}\n \n // Disable AQE because metric info is different with AQE on/off\n+// This test suite runs tests against the metrics of 
physical operators.\n+// Disabling it for Comet because the metrics are different with Comet enabled.\n class SQLMetricsSuite extends SharedSparkSession with SQLMetricsTestUtils\n-  with DisableAdaptiveExecutionSuite {\n+  with DisableAdaptiveExecutionSuite with IgnoreCometSuite {\n   import testImplicits._\n \n   /**\n@@ -737,7 +739,8 @@ class SQLMetricsSuite extends SharedSparkSession with SQLMetricsTestUtils\n     }\n   }\n \n-  test(\"SPARK-26327: FileSourceScanExec metrics\") {\n+  test(\"SPARK-26327: FileSourceScanExec metrics\",\n+      IgnoreComet(\"Spark uses row-based Parquet reader while Comet is vectorized\")) {\n     withTable(\"testDataForScan\") {\n       spark.range(10).selectExpr(\"id\", \"id % 3 as p\")\n         .write.partitionBy(\"p\").saveAsTable(\"testDataForScan\")\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala\nindex 0ab8691801d..b18a5bea944 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala\n@@ -18,6 +18,7 @@\n package org.apache.spark.sql.execution.python\n \n import org.apache.spark.sql.catalyst.plans.logical.{ArrowEvalPython, BatchEvalPython, Limit, LocalLimit}\n+import org.apache.spark.sql.comet._\n import org.apache.spark.sql.execution.{FileSourceScanExec, SparkPlan, SparkPlanTest}\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.execution.datasources.v2.parquet.ParquetScan\n@@ -108,6 +109,8 @@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: FileSourceScanExec => scan\n+            case scan: CometScanExec => scan\n+            case scan: CometNativeScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           assert(scanNodes.head.output.map(_.name) == Seq(\"a\"))\n@@ -120,11 +123,18 @@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: FileSourceScanExec => scan\n+            case scan: CometScanExec => scan\n+            case scan: CometNativeScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           // $\"a\" is not null and $\"a\" > 1\n-          assert(scanNodes.head.dataFilters.length == 2)\n-          assert(scanNodes.head.dataFilters.flatMap(_.references.map(_.name)).distinct == Seq(\"a\"))\n+          val dataFilters = scanNodes.head match {\n+            case scan: FileSourceScanExec => scan.dataFilters\n+            case scan: CometScanExec => scan.dataFilters\n+            case scan: CometNativeScanExec => scan.dataFilters\n+          }\n+          assert(dataFilters.length == 2)\n+          assert(dataFilters.flatMap(_.references.map(_.name)).distinct == Seq(\"a\"))\n         }\n       }\n     }\n@@ -145,6 +155,7 @@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: BatchScanExec => scan\n+            case scan: CometBatchScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           assert(scanNodes.head.output.map(_.name) == Seq(\"a\"))\n@@ -157,6 +168,7 
@@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: BatchScanExec => scan\n+            case scan: CometBatchScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           // $\"a\" is not null and $\"a\" > 1\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala\nindex d083cac48ff..3c11bcde807 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala\n@@ -37,8 +37,10 @@ import org.apache.spark.sql.streaming.{StreamingQuery, StreamingQueryException,\n import org.apache.spark.sql.streaming.util.StreamManualClock\n import org.apache.spark.util.Utils\n \n+// For some reason this suite is flaky w/ or w/o Comet when running in GitHub workflows.\n+// Since it isn't related to Comet, we disable it for now.\n class AsyncProgressTrackingMicroBatchExecutionSuite\n-  extends StreamTest with BeforeAndAfter with Matchers {\n+  extends StreamTest with BeforeAndAfter with Matchers with IgnoreCometSuite {\n \n   import testImplicits._\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala\nindex 266bb343526..f8ad838e2b2 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala\n@@ -19,15 +19,18 @@ package org.apache.spark.sql.sources\n \n import scala.util.Random\n \n+import org.apache.comet.CometConf\n+\n import org.apache.spark.sql._\n import org.apache.spark.sql.catalyst.catalog.BucketSpec\n import org.apache.spark.sql.catalyst.expressions\n import org.apache.spark.sql.catalyst.expressions._\n import org.apache.spark.sql.catalyst.plans.physical.HashPartitioning\n-import org.apache.spark.sql.execution.{FileSourceScanExec, SortExec, SparkPlan}\n+import org.apache.spark.sql.comet._\n+import org.apache.spark.sql.execution.{ColumnarToRowExec, FileSourceScanExec, SortExec, SparkPlan}\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanExec, AdaptiveSparkPlanHelper, DisableAdaptiveExecution}\n import org.apache.spark.sql.execution.datasources.BucketingUtils\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.{ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.joins.SortMergeJoinExec\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -101,12 +104,22 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n     }\n   }\n \n-  private def getFileScan(plan: SparkPlan): FileSourceScanExec = {\n-    val fileScan = collect(plan) { case f: FileSourceScanExec => f }\n+  private def getFileScan(plan: SparkPlan): SparkPlan = {\n+    val fileScan = collect(plan) {\n+      case f: FileSourceScanExec => f\n+      case f: CometScanExec => f\n+      case f: CometNativeScanExec => f\n+    }\n     assert(fileScan.nonEmpty, plan)\n     fileScan.head\n   }\n \n+  private def 
getBucketScan(plan: SparkPlan): Boolean = getFileScan(plan) match {\n+    case fs: FileSourceScanExec => fs.bucketedScan\n+    case bs: CometScanExec => bs.bucketedScan\n+    case ns: CometNativeScanExec => ns.bucketedScan\n+  }\n+\n   // To verify if the bucket pruning works, this function checks two conditions:\n   //   1) Check if the pruned buckets (before filtering) are empty.\n   //   2) Verify the final result is the same as the expected one\n@@ -155,7 +168,8 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n           val planWithoutBucketedScan = bucketedDataFrame.filter(filterCondition)\n             .queryExecution.executedPlan\n           val fileScan = getFileScan(planWithoutBucketedScan)\n-          assert(!fileScan.bucketedScan, s\"except no bucketed scan but found\\n$fileScan\")\n+          val bucketedScan = getBucketScan(planWithoutBucketedScan)\n+          assert(!bucketedScan, s\"expected no bucketed scan but found\\n$fileScan\")\n \n           val bucketColumnType = bucketedDataFrame.schema.apply(bucketColumnIndex).dataType\n           val rowsWithInvalidBuckets = fileScan.execute().filter(row => {\n@@ -451,28 +465,54 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n         val joinOperator = if (joined.sqlContext.conf.adaptiveExecutionEnabled) {\n           val executedPlan =\n             joined.queryExecution.executedPlan.asInstanceOf[AdaptiveSparkPlanExec].executedPlan\n-          assert(executedPlan.isInstanceOf[SortMergeJoinExec])\n-          executedPlan.asInstanceOf[SortMergeJoinExec]\n+          executedPlan match {\n+            case s: SortMergeJoinExec => s\n+            case b: CometSortMergeJoinExec =>\n+              b.originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+          }\n         } else {\n           val executedPlan = joined.queryExecution.executedPlan\n-          assert(executedPlan.isInstanceOf[SortMergeJoinExec])\n-          executedPlan.asInstanceOf[SortMergeJoinExec]\n+          executedPlan match {\n+            case s: SortMergeJoinExec => s\n+            case ColumnarToRowExec(child) =>\n+              child.asInstanceOf[CometSortMergeJoinExec].originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case CometColumnarToRowExec(child) =>\n+              child.asInstanceOf[CometSortMergeJoinExec].originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case CometNativeColumnarToRowExec(child) =>\n+              child.asInstanceOf[CometSortMergeJoinExec].originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+          }\n         }\n \n         // check existence of shuffle\n         assert(\n-          joinOperator.left.exists(_.isInstanceOf[ShuffleExchangeExec]) == shuffleLeft,\n+          joinOperator.left.exists(op => op.isInstanceOf[ShuffleExchangeLike]) == shuffleLeft,\n           s\"expected shuffle in plan to be $shuffleLeft 
but found\\n${joinOperator.left}\")\n         assert(\n-          joinOperator.right.exists(_.isInstanceOf[ShuffleExchangeExec]) == shuffleRight,\n+          joinOperator.right.exists(op => op.isInstanceOf[ShuffleExchangeLike]) == shuffleRight,\n           s\"expected shuffle in plan to be $shuffleRight but found\\n${joinOperator.right}\")\n \n         // check existence of sort\n         assert(\n-          joinOperator.left.exists(_.isInstanceOf[SortExec]) == sortLeft,\n+          joinOperator.left.exists(op => op.isInstanceOf[SortExec] || op.isInstanceOf[CometExec] &&\n+            op.asInstanceOf[CometExec].originalPlan.isInstanceOf[SortExec]) == sortLeft,\n           s\"expected sort in the left child to be $sortLeft but found\\n${joinOperator.left}\")\n         assert(\n-          joinOperator.right.exists(_.isInstanceOf[SortExec]) == sortRight,\n+          joinOperator.right.exists(op => op.isInstanceOf[SortExec] || op.isInstanceOf[CometExec] &&\n+            op.asInstanceOf[CometExec].originalPlan.isInstanceOf[SortExec]) == sortRight,\n           s\"expected sort in the right child to be $sortRight but found\\n${joinOperator.right}\")\n \n         // check the output partitioning\n@@ -835,11 +875,11 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n       df1.write.format(\"parquet\").bucketBy(8, \"i\").saveAsTable(\"bucketed_table\")\n \n       val scanDF = spark.table(\"bucketed_table\").select(\"j\")\n-      assert(!getFileScan(scanDF.queryExecution.executedPlan).bucketedScan)\n+      assert(!getBucketScan(scanDF.queryExecution.executedPlan))\n       checkAnswer(scanDF, df1.select(\"j\"))\n \n       val aggDF = spark.table(\"bucketed_table\").groupBy(\"j\").agg(max(\"k\"))\n-      assert(!getFileScan(aggDF.queryExecution.executedPlan).bucketedScan)\n+      assert(!getBucketScan(aggDF.queryExecution.executedPlan))\n       checkAnswer(aggDF, df1.groupBy(\"j\").agg(max(\"k\")))\n     }\n   }\n@@ -894,7 +934,10 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n   }\n \n   test(\"SPARK-29655 Read bucketed tables obeys spark.sql.shuffle.partitions\") {\n+    // Range partitioning uses random samples, so per-partition comparisons do not always yield\n+    // the same results. Disable Comet native range partitioning.\n     withSQLConf(\n+      CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"false\",\n       SQLConf.SHUFFLE_PARTITIONS.key -> \"5\",\n       SQLConf.COALESCE_PARTITIONS_INITIAL_PARTITION_NUM.key -> \"7\")  {\n       val bucketSpec = Some(BucketSpec(6, Seq(\"i\", \"j\"), Nil))\n@@ -913,7 +956,10 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n   }\n \n   test(\"SPARK-32767 Bucket join should work if SHUFFLE_PARTITIONS larger than bucket number\") {\n+    // Range partitioning uses random samples, so per-partition comparisons do not always yield\n+    // the same results. Disable Comet native range partitioning.\n     withSQLConf(\n+      CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"false\",\n       SQLConf.SHUFFLE_PARTITIONS.key -> \"9\",\n       SQLConf.COALESCE_PARTITIONS_INITIAL_PARTITION_NUM.key -> \"10\")  {\n \n@@ -943,7 +989,10 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n   }\n \n   test(\"bucket coalescing eliminates shuffle\") {\n+    // Range partitioning uses random samples, so per-partition comparisons do not always yield\n+    // the same results. 
Disable Comet native range partitioning.\n     withSQLConf(\n+      CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"false\",\n       SQLConf.COALESCE_BUCKETS_IN_JOIN_ENABLED.key -> \"true\",\n       SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\") {\n       // The side with bucketedTableTestSpec1 will be coalesced to have 4 output partitions.\n@@ -1026,15 +1075,26 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n             expectedNumShuffles: Int,\n             expectedCoalescedNumBuckets: Option[Int]): Unit = {\n           val plan = sql(query).queryExecution.executedPlan\n-          val shuffles = plan.collect { case s: ShuffleExchangeExec => s }\n+          val shuffles = plan.collect {\n+            case s: ShuffleExchangeLike => s\n+          }\n           assert(shuffles.length == expectedNumShuffles)\n \n           val scans = plan.collect {\n             case f: FileSourceScanExec if f.optionalNumCoalescedBuckets.isDefined => f\n+            case b: CometScanExec if b.optionalNumCoalescedBuckets.isDefined => b\n+            case b: CometNativeScanExec if b.optionalNumCoalescedBuckets.isDefined => b\n           }\n           if (expectedCoalescedNumBuckets.isDefined) {\n             assert(scans.length == 1)\n-            assert(scans.head.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+            scans.head match {\n+              case f: FileSourceScanExec =>\n+                assert(f.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+              case b: CometScanExec =>\n+                assert(b.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+              case b: CometNativeScanExec =>\n+                assert(b.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+            }\n           } else {\n             assert(scans.isEmpty)\n           }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala\nindex b5f6d2f9f68..277784a92af 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala\n@@ -20,7 +20,7 @@ package org.apache.spark.sql.sources\n import java.io.File\n \n import org.apache.spark.SparkException\n-import org.apache.spark.sql.AnalysisException\n+import org.apache.spark.sql.{AnalysisException, IgnoreCometSuite}\n import org.apache.spark.sql.catalyst.TableIdentifier\n import org.apache.spark.sql.catalyst.catalog.{BucketSpec, CatalogTableType}\n import org.apache.spark.sql.catalyst.parser.ParseException\n@@ -28,7 +28,10 @@ import org.apache.spark.sql.internal.SQLConf.BUCKETING_MAX_BUCKETS\n import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.util.Utils\n \n-class CreateTableAsSelectSuite extends DataSourceTest with SharedSparkSession {\n+// For some reason this suite is flaky w/ or w/o Comet when running in GitHub workflows.\n+// Since it isn't related to Comet, we disable it for now.\n+class CreateTableAsSelectSuite extends DataSourceTest with SharedSparkSession\n+    with IgnoreCometSuite {\n   import testImplicits._\n \n   protected override lazy val sql = spark.sql _\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala 
b/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala\nindex 1f55742cd67..f20129d9dd8 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala\n@@ -20,6 +20,7 @@ package org.apache.spark.sql.sources\n import org.apache.spark.sql.QueryTest\n import org.apache.spark.sql.catalyst.expressions.AttributeReference\n import org.apache.spark.sql.catalyst.plans.physical.HashPartitioning\n+import org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.execution.FileSourceScanExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n@@ -71,7 +72,11 @@ abstract class DisableUnnecessaryBucketedScanSuite\n \n     def checkNumBucketedScan(query: String, expectedNumBucketedScan: Int): Unit = {\n       val plan = sql(query).queryExecution.executedPlan\n-      val bucketedScan = collect(plan) { case s: FileSourceScanExec if s.bucketedScan => s }\n+      val bucketedScan = collect(plan) {\n+        case s: FileSourceScanExec if s.bucketedScan => s\n+        case s: CometScanExec if s.bucketedScan => s\n+        case s: CometNativeScanExec if s.bucketedScan => s\n+      }\n       assert(bucketedScan.length == expectedNumBucketedScan)\n     }\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala\nindex 75f440caefc..36b1146bc3a 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala\n@@ -34,6 +34,7 @@ import org.apache.spark.paths.SparkPath\n import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}\n import org.apache.spark.sql.{AnalysisException, DataFrame}\n import org.apache.spark.sql.catalyst.util.stringToFile\n+import org.apache.spark.sql.comet.CometBatchScanExec\n import org.apache.spark.sql.execution.DataSourceScanExec\n import org.apache.spark.sql.execution.datasources._\n import org.apache.spark.sql.execution.datasources.v2.{BatchScanExec, DataSourceV2Relation, FileScan, FileTable}\n@@ -748,6 +749,8 @@ class FileStreamSinkV2Suite extends FileStreamSinkSuite {\n       val fileScan = df.queryExecution.executedPlan.collect {\n         case batch: BatchScanExec if batch.scan.isInstanceOf[FileScan] =>\n           batch.scan.asInstanceOf[FileScan]\n+        case batch: CometBatchScanExec if batch.scan.isInstanceOf[FileScan] =>\n+          batch.scan.asInstanceOf[FileScan]\n       }.headOption.getOrElse {\n         fail(s\"No FileScan in query\\n${df.queryExecution}\")\n       }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/FlatMapGroupsWithStateSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/FlatMapGroupsWithStateSuite.scala\nindex 6aa7d0945c7..ad26ad833e2 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/FlatMapGroupsWithStateSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/FlatMapGroupsWithStateSuite.scala\n@@ -46,6 +46,7 @@ case class RunningCount(count: Long)\n \n case class Result(key: Long, count: Int)\n \n+// TODO: fix Comet to enable this suite\n @SlowSQLTest\n class 
FlatMapGroupsWithStateSuite extends StateStoreMetricsTest {\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala\nindex ef5b8a769fe..84fe1bfabc9 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala\n@@ -37,6 +37,7 @@ import org.apache.spark.sql._\n import org.apache.spark.sql.catalyst.plans.logical.{Range, RepartitionByExpression}\n import org.apache.spark.sql.catalyst.streaming.{InternalOutputModes, StreamingRelationV2}\n import org.apache.spark.sql.catalyst.util.DateTimeUtils\n+import org.apache.spark.sql.comet.CometLocalLimitExec\n import org.apache.spark.sql.execution.{LocalLimitExec, SimpleMode, SparkPlan}\n import org.apache.spark.sql.execution.command.ExplainCommand\n import org.apache.spark.sql.execution.streaming._\n@@ -1103,11 +1104,12 @@ class StreamSuite extends StreamTest {\n       val localLimits = execPlan.collect {\n         case l: LocalLimitExec => l\n         case l: StreamingLocalLimitExec => l\n+        case l: CometLocalLimitExec => l\n       }\n \n       require(\n         localLimits.size == 1,\n-        s\"Cant verify local limit optimization with this plan:\\n$execPlan\")\n+        s\"Can't verify local limit optimization (found ${localLimits.size} local limits) with this plan:\\n$execPlan\")\n \n       if (expectStreamingLimit) {\n         assert(\n@@ -1115,7 +1117,8 @@ class StreamSuite extends StreamTest {\n           s\"Local limit was not StreamingLocalLimitExec:\\n$execPlan\")\n       } else {\n         assert(\n-          localLimits.head.isInstanceOf[LocalLimitExec],\n+          localLimits.head.isInstanceOf[LocalLimitExec] ||\n+            localLimits.head.isInstanceOf[CometLocalLimitExec],\n           s\"Local limit was not LocalLimitExec:\\n$execPlan\")\n       }\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala\nindex b4c4ec7acbf..20579284856 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala\n@@ -23,6 +23,7 @@ import org.apache.commons.io.FileUtils\n import org.scalatest.Assertions\n \n import org.apache.spark.sql.catalyst.plans.physical.UnspecifiedDistribution\n+import org.apache.spark.sql.comet.CometHashAggregateExec\n import org.apache.spark.sql.execution.aggregate.BaseAggregateExec\n import org.apache.spark.sql.execution.streaming.{MemoryStream, StateStoreRestoreExec, StateStoreSaveExec}\n import org.apache.spark.sql.functions.count\n@@ -67,6 +68,7 @@ class StreamingAggregationDistributionSuite extends StreamTest\n         // verify aggregations in between, except partial aggregation\n         val allAggregateExecs = query.lastExecution.executedPlan.collect {\n           case a: BaseAggregateExec => a\n+          case c: CometHashAggregateExec => c.originalPlan\n         }\n \n         val aggregateExecsWithoutPartialAgg = allAggregateExecs.filter {\n@@ -201,6 +203,7 @@ class StreamingAggregationDistributionSuite extends StreamTest\n         // verify aggregations in between, except partial aggregation\n         val allAggregateExecs = executedPlan.collect {\n           case a: BaseAggregateExec => a\n+          
case c: CometHashAggregateExec => c.originalPlan\n         }\n \n         val aggregateExecsWithoutPartialAgg = allAggregateExecs.filter {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala\nindex 4d92e270539..33f1c2eb75e 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala\n@@ -31,7 +31,7 @@ import org.apache.spark.scheduler.ExecutorCacheTaskLocation\n import org.apache.spark.sql.{DataFrame, Row, SparkSession}\n import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression}\n import org.apache.spark.sql.catalyst.plans.physical.HashPartitioning\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.streaming.{MemoryStream, StatefulOperatorStateInfo, StreamingSymmetricHashJoinExec, StreamingSymmetricHashJoinHelper}\n import org.apache.spark.sql.execution.streaming.state.{RocksDBStateStoreProvider, StateStore, StateStoreProviderId}\n import org.apache.spark.sql.functions._\n@@ -619,14 +619,28 @@ class StreamingInnerJoinSuite extends StreamingJoinSuite {\n \n         val numPartitions = spark.sqlContext.conf.getConf(SQLConf.SHUFFLE_PARTITIONS)\n \n-        assert(query.lastExecution.executedPlan.collect {\n-          case j @ StreamingSymmetricHashJoinExec(_, _, _, _, _, _, _, _, _,\n-            ShuffleExchangeExec(opA: HashPartitioning, _, _),\n-            ShuffleExchangeExec(opB: HashPartitioning, _, _))\n-              if partitionExpressionsColumns(opA.expressions) === Seq(\"a\", \"b\")\n-                && partitionExpressionsColumns(opB.expressions) === Seq(\"a\", \"b\")\n-                && opA.numPartitions == numPartitions && opB.numPartitions == numPartitions => j\n-        }.size == 1)\n+        val join = query.lastExecution.executedPlan.collect {\n+          case j: StreamingSymmetricHashJoinExec => j\n+        }.head\n+        val opA = join.left.collect {\n+          case s: ShuffleExchangeLike\n+            if s.outputPartitioning.isInstanceOf[HashPartitioning] &&\n+              partitionExpressionsColumns(\n+                s.outputPartitioning\n+                  .asInstanceOf[HashPartitioning].expressions) === Seq(\"a\", \"b\") =>\n+            s.outputPartitioning\n+              .asInstanceOf[HashPartitioning]\n+        }.head\n+        val opB = join.right.collect {\n+          case s: ShuffleExchangeLike\n+            if s.outputPartitioning.isInstanceOf[HashPartitioning] &&\n+              partitionExpressionsColumns(\n+                s.outputPartitioning\n+                  .asInstanceOf[HashPartitioning].expressions) === Seq(\"a\", \"b\") =>\n+            s.outputPartitioning\n+              .asInstanceOf[HashPartitioning]\n+        }.head\n+        assert(opA.numPartitions == numPartitions && opB.numPartitions == numPartitions)\n       })\n   }\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala\nindex abe606ad9c1..2d930b64cca 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala\n@@ 
-22,7 +22,7 @@ import java.util\n \n import org.scalatest.BeforeAndAfter\n \n-import org.apache.spark.sql.{AnalysisException, Row, SaveMode}\n+import org.apache.spark.sql.{AnalysisException, IgnoreComet, Row, SaveMode}\n import org.apache.spark.sql.catalyst.TableIdentifier\n import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException\n import org.apache.spark.sql.catalyst.catalog.{CatalogStorageFormat, CatalogTable, CatalogTableType}\n@@ -327,7 +327,8 @@ class DataStreamTableAPISuite extends StreamTest with BeforeAndAfter {\n     }\n   }\n \n-  test(\"explain with table on DSv1 data source\") {\n+  test(\"explain with table on DSv1 data source\",\n+      IgnoreComet(\"Comet explain output is different\")) {\n     val tblSourceName = \"tbl_src\"\n     val tblTargetName = \"tbl_target\"\n     val tblSourceQualified = s\"default.$tblSourceName\"\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala b/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala\nindex dd55fcfe42c..99bc018008a 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala\n@@ -27,6 +27,7 @@ import scala.concurrent.duration._\n import scala.language.implicitConversions\n import scala.util.control.NonFatal\n \n+import org.apache.comet.CometConf\n import org.apache.hadoop.fs.Path\n import org.scalactic.source.Position\n import org.scalatest.{BeforeAndAfterAll, Suite, Tag}\n@@ -41,6 +42,7 @@ import org.apache.spark.sql.catalyst.plans.PlanTest\n import org.apache.spark.sql.catalyst.plans.PlanTestBase\n import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan\n import org.apache.spark.sql.catalyst.util._\n+import org.apache.spark.sql.comet._\n import org.apache.spark.sql.execution.FilterExec\n import org.apache.spark.sql.execution.adaptive.DisableAdaptiveExecution\n import org.apache.spark.sql.execution.datasources.DataSourceUtils\n@@ -119,6 +121,34 @@ private[sql] trait SQLTestUtils extends SparkFunSuite with SQLTestUtilsBase with\n \n   override protected def test(testName: String, testTags: Tag*)(testFun: => Any)\n       (implicit pos: Position): Unit = {\n+    // Check Comet skip tags first, before DisableAdaptiveExecution handling\n+    if (isCometEnabled && testTags.exists(_.isInstanceOf[IgnoreComet])) {\n+      ignore(testName + \" (disabled when Comet is on)\", testTags: _*)(testFun)\n+      return\n+    }\n+    if (isCometEnabled) {\n+      val cometScanImpl = CometConf.COMET_NATIVE_SCAN_IMPL.get(conf)\n+      val isNativeIcebergCompat = cometScanImpl == CometConf.SCAN_NATIVE_ICEBERG_COMPAT ||\n+        cometScanImpl == CometConf.SCAN_AUTO\n+      val isNativeDataFusion = cometScanImpl == CometConf.SCAN_NATIVE_DATAFUSION ||\n+        cometScanImpl == CometConf.SCAN_AUTO\n+      if (isNativeIcebergCompat &&\n+        testTags.exists(_.isInstanceOf[IgnoreCometNativeIcebergCompat])) {\n+        ignore(testName + \" (disabled for NATIVE_ICEBERG_COMPAT)\", testTags: _*)(testFun)\n+        return\n+      }\n+      if (isNativeDataFusion &&\n+        testTags.exists(_.isInstanceOf[IgnoreCometNativeDataFusion])) {\n+        ignore(testName + \" (disabled for NATIVE_DATAFUSION)\", testTags: _*)(testFun)\n+        return\n+      }\n+      if ((isNativeDataFusion || isNativeIcebergCompat) &&\n+        testTags.exists(_.isInstanceOf[IgnoreCometNativeScan])) {\n+        ignore(testName + \" (disabled for NATIVE_DATAFUSION and NATIVE_ICEBERG_COMPAT)\",\n+          
testTags: _*)(testFun)\n+        return\n+      }\n+    }\n     if (testTags.exists(_.isInstanceOf[DisableAdaptiveExecution])) {\n       super.test(testName, testTags: _*) {\n         withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\") {\n@@ -242,6 +272,29 @@ private[sql] trait SQLTestUtilsBase\n     protected override def _sqlContext: SQLContext = self.spark.sqlContext\n   }\n \n+  /**\n+   * Whether the Comet extension is enabled\n+   */\n+  protected def isCometEnabled: Boolean = SparkSession.isCometEnabled\n+\n+  /**\n+   * Whether to enable ANSI mode. This is only effective when\n+   * [[isCometEnabled]] returns true.\n+   */\n+  protected def enableCometAnsiMode: Boolean = {\n+    val v = System.getenv(\"ENABLE_COMET_ANSI_MODE\")\n+    v != null && v.toBoolean\n+  }\n+\n+  /**\n+   * Whether Spark should only apply Comet scan optimization. This is only effective when\n+   * [[isCometEnabled]] returns true.\n+   */\n+  protected def isCometScanOnly: Boolean = {\n+    val v = System.getenv(\"ENABLE_COMET_SCAN_ONLY\")\n+    v != null && v.toBoolean\n+  }\n+\n   protected override def withSQLConf(pairs: (String, String)*)(f: => Unit): Unit = {\n     SparkSession.setActiveSession(spark)\n     super.withSQLConf(pairs: _*)(f)\n@@ -434,6 +487,8 @@ private[sql] trait SQLTestUtilsBase\n     val schema = df.schema\n     val withoutFilters = df.queryExecution.executedPlan.transform {\n       case FilterExec(_, child) => child\n+      case CometFilterExec(_, _, _, _, child, _) => child\n+      case CometProjectExec(_, _, _, _, CometFilterExec(_, _, _, _, child, _), _) => child\n     }\n \n     spark.internalCreateDataFrame(withoutFilters.execute(), schema)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala b/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala\nindex ed2e309fa07..a5ea58146ad 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala\n@@ -74,6 +74,31 @@ trait SharedSparkSessionBase\n       // this rule may potentially block testing of other optimization rules such as\n       // ConstantPropagation etc.\n       .set(SQLConf.OPTIMIZER_EXCLUDED_RULES.key, ConvertToLocalRelation.ruleName)\n+    // Enable Comet if the `ENABLE_COMET` environment variable is set\n+    if (isCometEnabled) {\n+      conf\n+        .set(\"spark.sql.extensions\", \"org.apache.comet.CometSparkSessionExtensions\")\n+        .set(\"spark.comet.enabled\", \"true\")\n+        .set(\"spark.comet.parquet.respectFilterPushdown\", \"true\")\n+\n+      if (!isCometScanOnly) {\n+        conf\n+          .set(\"spark.comet.exec.enabled\", \"true\")\n+          .set(\"spark.shuffle.manager\",\n+            \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+          .set(\"spark.comet.exec.shuffle.enabled\", \"true\")\n+          .set(\"spark.comet.memoryOverhead\", \"10g\")\n+      } else {\n+        conf\n+          .set(\"spark.comet.exec.enabled\", \"false\")\n+          .set(\"spark.comet.exec.shuffle.enabled\", \"false\")\n+      }\n+\n+      if (enableCometAnsiMode) {\n+        conf\n+          .set(\"spark.sql.ansi.enabled\", \"true\")\n+      }\n+    }\n     conf.set(\n       StaticSQLConf.WAREHOUSE_PATH,\n       conf.get(StaticSQLConf.WAREHOUSE_PATH) + \"/\" + getClass.getCanonicalName)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala 
b/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala\nindex 1510e8957f9..7618419d8ff 100644\n--- a/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala\n@@ -43,7 +43,7 @@ class SqlResourceWithActualMetricsSuite\n   import testImplicits._\n \n   // Exclude nodes which may not have the metrics\n-  val excludedNodes = List(\"WholeStageCodegen\", \"Project\", \"SerializeFromObject\")\n+  val excludedNodes = List(\"WholeStageCodegen\", \"Project\", \"SerializeFromObject\", \"RowToColumnar\")\n \n   implicit val formats = new DefaultFormats {\n     override def dateFormatter = new SimpleDateFormat(\"yyyy-MM-dd'T'HH:mm:ss\")\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala\nindex 52abd248f3a..7a199931a08 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala\n@@ -19,6 +19,7 @@ package org.apache.spark.sql.hive\n \n import org.apache.spark.sql._\n import org.apache.spark.sql.catalyst.expressions.{DynamicPruningExpression, Expression}\n+import org.apache.spark.sql.comet._\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.adaptive.{DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.hive.execution.HiveTableScanExec\n@@ -35,6 +36,9 @@ abstract class DynamicPartitionPruningHiveScanSuiteBase\n       case s: FileSourceScanExec => s.partitionFilters.collect {\n         case d: DynamicPruningExpression => d.child\n       }\n+      case s: CometScanExec => s.partitionFilters.collect {\n+        case d: DynamicPruningExpression => d.child\n+      }\n       case h: HiveTableScanExec => h.partitionPruningPred.collect {\n         case d: DynamicPruningExpression => d.child\n       }\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala\nindex de3b1ffccf0..2a76d127093 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala\n@@ -23,14 +23,15 @@ import java.util.concurrent.{Executors, TimeUnit}\n import org.scalatest.BeforeAndAfterEach\n \n import org.apache.spark.metrics.source.HiveCatalogMetrics\n-import org.apache.spark.sql.QueryTest\n+import org.apache.spark.sql.{IgnoreCometSuite, QueryTest}\n import org.apache.spark.sql.execution.datasources.FileStatusCache\n import org.apache.spark.sql.hive.test.TestHiveSingleton\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SQLTestUtils\n \n class PartitionedTablePerfStatsSuite\n-  extends QueryTest with TestHiveSingleton with SQLTestUtils with BeforeAndAfterEach {\n+  extends QueryTest with TestHiveSingleton with SQLTestUtils with BeforeAndAfterEach\n+    with IgnoreCometSuite {\n \n   override def beforeEach(): Unit = {\n     super.beforeEach()\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala 
b/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala\nindex a902cb3a69e..800a3acbe99 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala\n@@ -24,6 +24,7 @@ import java.sql.{Date, Timestamp}\n import java.util.{Locale, Set}\n \n import com.google.common.io.Files\n+import org.apache.comet.CometConf\n import org.apache.hadoop.fs.{FileSystem, Path}\n \n import org.apache.spark.{SparkException, TestUtils}\n@@ -838,8 +839,13 @@ abstract class SQLQuerySuiteBase extends QueryTest with SQLTestUtils with TestHi\n   }\n \n   test(\"SPARK-2554 SumDistinct partial aggregation\") {\n-    checkAnswer(sql(\"SELECT sum( distinct key) FROM src group by key order by key\"),\n-      sql(\"SELECT distinct key FROM src order by key\").collect().toSeq)\n+    // Range partitioning uses random samples, so per-partition comparisons do not always yield\n+    // the same results. Disable Comet native range partitioning.\n+    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"false\")\n+    {\n+      checkAnswer(sql(\"SELECT sum( distinct key) FROM src group by key order by key\"),\n+        sql(\"SELECT distinct key FROM src order by key\").collect().toSeq)\n+    }\n   }\n \n   test(\"SPARK-4963 DataFrame sample on mutable row return wrong result\") {\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala\nindex 07361cfdce9..97dab2a3506 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala\n@@ -55,25 +55,54 @@ object TestHive\n     new SparkContext(\n       System.getProperty(\"spark.sql.test.master\", \"local[1]\"),\n       \"TestSQLContext\",\n-      new SparkConf()\n-        .set(\"spark.sql.test\", \"\")\n-        .set(SQLConf.CODEGEN_FALLBACK.key, \"false\")\n-        .set(SQLConf.CODEGEN_FACTORY_MODE.key, CodegenObjectFactoryMode.CODEGEN_ONLY.toString)\n-        .set(HiveUtils.HIVE_METASTORE_BARRIER_PREFIXES.key,\n-          \"org.apache.spark.sql.hive.execution.PairSerDe\")\n-        .set(WAREHOUSE_PATH.key, TestHiveContext.makeWarehouseDir().toURI.getPath)\n-        // SPARK-8910\n-        .set(UI_ENABLED, false)\n-        .set(config.UNSAFE_EXCEPTION_ON_MEMORY_LEAK, true)\n-        // Hive changed the default of hive.metastore.disallow.incompatible.col.type.changes\n-        // from false to true. For details, see the JIRA HIVE-12320 and HIVE-17764.\n-        .set(\"spark.hadoop.hive.metastore.disallow.incompatible.col.type.changes\", \"false\")\n-        // Disable ConvertToLocalRelation for better test coverage. 
Test cases built on\n-        // LocalRelation will exercise the optimization rules better by disabling it as\n-        // this rule may potentially block testing of other optimization rules such as\n-        // ConstantPropagation etc.\n-        .set(SQLConf.OPTIMIZER_EXCLUDED_RULES.key, ConvertToLocalRelation.ruleName)))\n+      {\n+        val conf = new SparkConf()\n+          .set(\"spark.sql.test\", \"\")\n+          .set(SQLConf.CODEGEN_FALLBACK.key, \"false\")\n+          .set(SQLConf.CODEGEN_FACTORY_MODE.key, CodegenObjectFactoryMode.CODEGEN_ONLY.toString)\n+          .set(HiveUtils.HIVE_METASTORE_BARRIER_PREFIXES.key,\n+            \"org.apache.spark.sql.hive.execution.PairSerDe\")\n+          .set(WAREHOUSE_PATH.key, TestHiveContext.makeWarehouseDir().toURI.getPath)\n+          // SPARK-8910\n+          .set(UI_ENABLED, false)\n+          .set(config.UNSAFE_EXCEPTION_ON_MEMORY_LEAK, true)\n+          // Hive changed the default of hive.metastore.disallow.incompatible.col.type.changes\n+          // from false to true. For details, see the JIRA HIVE-12320 and HIVE-17764.\n+          .set(\"spark.hadoop.hive.metastore.disallow.incompatible.col.type.changes\", \"false\")\n+          // Disable ConvertToLocalRelation for better test coverage. Test cases built on\n+          // LocalRelation will exercise the optimization rules better by disabling it as\n+          // this rule may potentially block testing of other optimization rules such as\n+          // ConstantPropagation etc.\n+          .set(SQLConf.OPTIMIZER_EXCLUDED_RULES.key, ConvertToLocalRelation.ruleName)\n+\n+        if (SparkSession.isCometEnabled) {\n+          conf\n+            .set(\"spark.sql.extensions\", \"org.apache.comet.CometSparkSessionExtensions\")\n+            .set(\"spark.comet.enabled\", \"true\")\n+\n+          val v = System.getenv(\"ENABLE_COMET_SCAN_ONLY\")\n+          if (v == null || !v.toBoolean) {\n+            conf\n+              .set(\"spark.comet.exec.enabled\", \"true\")\n+              .set(\"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+              .set(\"spark.comet.exec.shuffle.enabled\", \"true\")\n+          } else {\n+            conf\n+              .set(\"spark.comet.exec.enabled\", \"false\")\n+              .set(\"spark.comet.exec.shuffle.enabled\", \"false\")\n+          }\n+\n+          val a = System.getenv(\"ENABLE_COMET_ANSI_MODE\")\n+          if (a != null && a.toBoolean) {\n+            conf\n+              .set(\"spark.sql.ansi.enabled\", \"true\")\n+          }\n+        }\n \n+        conf\n+      }\n+    ))\n \n case class TestHiveVersion(hiveClient: HiveClient)\n   extends TestHiveContext(TestHive.sparkContext, hiveClient)\n"
  },
  {
    "path": "dev/diffs/3.5.8.diff",
    "content": "diff --git a/pom.xml b/pom.xml\nindex edd2ad57880..837b95d1ada 100644\n--- a/pom.xml\n+++ b/pom.xml\n@@ -152,6 +152,8 @@\n     -->\n     <ivy.version>2.5.1</ivy.version>\n     <oro.version>2.0.8</oro.version>\n+    <spark.version.short>3.5</spark.version.short>\n+    <comet.version>0.15.0-SNAPSHOT</comet.version>\n     <!--\n     If you changes codahale.metrics.version, you also need to change\n     the link to metrics.dropwizard.io in docs/monitoring.md.\n@@ -2840,6 +2842,25 @@\n         <artifactId>okio</artifactId>\n         <version>${okio.version}</version>\n       </dependency>\n+      <dependency>\n+        <groupId>org.apache.datafusion</groupId>\n+        <artifactId>comet-spark-spark${spark.version.short}_${scala.binary.version}</artifactId>\n+        <version>${comet.version}</version>\n+        <exclusions>\n+          <exclusion>\n+            <groupId>org.apache.spark</groupId>\n+            <artifactId>spark-sql_${scala.binary.version}</artifactId>\n+          </exclusion>\n+          <exclusion>\n+            <groupId>org.apache.spark</groupId>\n+            <artifactId>spark-core_${scala.binary.version}</artifactId>\n+          </exclusion>\n+          <exclusion>\n+            <groupId>org.apache.spark</groupId>\n+            <artifactId>spark-catalyst_${scala.binary.version}</artifactId>\n+          </exclusion>\n+        </exclusions>\n+      </dependency>\n     </dependencies>\n   </dependencyManagement>\n \ndiff --git a/sql/core/pom.xml b/sql/core/pom.xml\nindex bc00c448b80..82068d7a2eb 100644\n--- a/sql/core/pom.xml\n+++ b/sql/core/pom.xml\n@@ -77,6 +77,10 @@\n       <groupId>org.apache.spark</groupId>\n       <artifactId>spark-tags_${scala.binary.version}</artifactId>\n     </dependency>\n+    <dependency>\n+      <groupId>org.apache.datafusion</groupId>\n+      <artifactId>comet-spark-spark${spark.version.short}_${scala.binary.version}</artifactId>\n+    </dependency>\n \n     <!--\n       This spark-tags test-dep is needed even though it isn't used in this module, otherwise testing-cmds that exclude\ndiff --git a/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala b/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala\nindex 27ae10b3d59..d12fb7c42c2 100644\n--- a/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala\n+++ b/sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala\n@@ -1353,6 +1353,14 @@ object SparkSession extends Logging {\n     }\n   }\n \n+  private def loadCometExtension(sparkContext: SparkContext): Seq[String] = {\n+    if (sparkContext.getConf.getBoolean(\"spark.comet.enabled\", isCometEnabled)) {\n+      Seq(\"org.apache.comet.CometSparkSessionExtensions\")\n+    } else {\n+      Seq.empty\n+    }\n+  }\n+\n   /**\n    * Initialize extensions specified in [[StaticSQLConf]]. 
The classes will be applied to the\n    * extensions passed into this function.\n@@ -1362,7 +1370,8 @@ object SparkSession extends Logging {\n       extensions: SparkSessionExtensions): SparkSessionExtensions = {\n     val extensionConfClassNames = sparkContext.getConf.get(StaticSQLConf.SPARK_SESSION_EXTENSIONS)\n       .getOrElse(Seq.empty)\n-    extensionConfClassNames.foreach { extensionConfClassName =>\n+    val extensionClassNames = extensionConfClassNames ++ loadCometExtension(sparkContext)\n+    extensionClassNames.foreach { extensionConfClassName =>\n       try {\n         val extensionConfClass = Utils.classForName(extensionConfClassName)\n         val extensionConf = extensionConfClass.getConstructor().newInstance()\n@@ -1396,4 +1405,12 @@ object SparkSession extends Logging {\n       }\n     }\n   }\n+\n+  /**\n+   * Whether Comet extension is enabled\n+   */\n+  def isCometEnabled: Boolean = {\n+    val v = System.getenv(\"ENABLE_COMET\")\n+    v == null || v.toBoolean\n+  }\n }\ndiff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala\nindex db587dd9868..aac7295a53d 100644\n--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala\n+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala\n@@ -18,6 +18,7 @@\n package org.apache.spark.sql.execution\n \n import org.apache.spark.annotation.DeveloperApi\n+import org.apache.spark.sql.comet.CometScanExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanExec, QueryStageExec}\n import org.apache.spark.sql.execution.columnar.InMemoryTableScanExec\n import org.apache.spark.sql.execution.exchange.ReusedExchangeExec\n@@ -67,6 +68,7 @@ private[execution] object SparkPlanInfo {\n     // dump the file scan metadata (e.g file path) to event log\n     val metadata = plan match {\n       case fileScan: FileSourceScanExec => fileScan.metadata\n+      case cometScan: CometScanExec => cometScan.metadata\n       case _ => Map[String, String]()\n     }\n     new SparkPlanInfo(\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql b/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql\nindex 7aef901da4f..f3d6e18926d 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql\n@@ -2,3 +2,4 @@\n \n --SET spark.sql.adaptive.enabled=true\n --SET spark.sql.maxMetadataStringLength = 500\n+--SET spark.comet.enabled = false\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql b/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql\nindex eeb2180f7a5..afd1b5ec289 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql\n@@ -1,5 +1,6 @@\n --SET spark.sql.cbo.enabled=true\n --SET spark.sql.maxMetadataStringLength = 500\n+--SET spark.comet.enabled = false\n \n CREATE TABLE explain_temp1(a INT, b INT) USING PARQUET;\n CREATE TABLE explain_temp2(c INT, d INT) USING PARQUET;\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/explain.sql b/sql/core/src/test/resources/sql-tests/inputs/explain.sql\nindex 698ca009b4f..57d774a3617 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/explain.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/explain.sql\n@@ -1,6 +1,7 @@\n --SET spark.sql.codegen.wholeStage = true\n --SET spark.sql.adaptive.enabled = false\n 
--SET spark.sql.maxMetadataStringLength = 500\n+--SET spark.comet.enabled = false\n \n -- Test tables\n CREATE table  explain_temp1 (key int, val int) USING PARQUET;\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part1.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part1.sql\nindex 1152d77da0c..f77493f690b 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part1.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part1.sql\n@@ -7,6 +7,9 @@\n \n -- avoid bit-exact output here because operations may not be bit-exact.\n -- SET extra_float_digits = 0;\n+-- Disable Comet exec due to floating point precision difference\n+--SET spark.comet.exec.enabled = false\n+\n \n -- Test aggregate operator with codegen on and off.\n --CONFIG_DIM1 spark.sql.codegen.wholeStage=true\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql\nindex 41fd4de2a09..44cd244d3b0 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql\n@@ -5,6 +5,9 @@\n -- AGGREGATES [Part 3]\n -- https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/sql/aggregates.sql#L352-L605\n \n+-- Disable Comet exec due to floating point precision difference\n+--SET spark.comet.exec.enabled = false\n+\n -- Test aggregate operator with codegen on and off.\n --CONFIG_DIM1 spark.sql.codegen.wholeStage=true\n --CONFIG_DIM1 spark.sql.codegen.wholeStage=false,spark.sql.codegen.factoryMode=CODEGEN_ONLY\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql\nindex 3a409eea348..38fed024c98 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql\n@@ -69,6 +69,8 @@ SELECT '' AS one, i.* FROM INT4_TBL i WHERE (i.f1 % smallint('2')) = smallint('1\n -- any evens\n SELECT '' AS three, i.* FROM INT4_TBL i WHERE (i.f1 % int('2')) = smallint('0');\n \n+-- https://github.com/apache/datafusion-comet/issues/2215\n+--SET spark.comet.exec.enabled=false\n -- [SPARK-28024] Incorrect value when out of range\n SELECT '' AS five, i.f1, i.f1 * smallint('2') AS x FROM INT4_TBL i;\n \ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql\nindex fac23b4a26f..2b73732c33f 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql\n@@ -1,6 +1,10 @@\n --\n -- Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group\n --\n+\n+-- Disable Comet exec due to floating point precision difference\n+--SET spark.comet.exec.enabled = false\n+\n --\n -- INT8\n -- Test int8 64-bit integers.\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql\nindex 0efe0877e9b..423d3b3d76d 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql\n@@ -1,6 +1,10 @@\n --\n -- Portions Copyright (c) 1996-2019, PostgreSQL Global Development Group\n --\n+\n+-- 
Disable Comet exec due to floating point precision difference\n+--SET spark.comet.exec.enabled = false\n+\n --\n -- SELECT_HAVING\n -- https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/sql/select_having.sql\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala\nindex e5494726695..00937f025c2 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala\n@@ -38,7 +38,7 @@ import org.apache.spark.sql.catalyst.util.DateTimeConstants\n import org.apache.spark.sql.execution.{ColumnarToRowExec, ExecSubqueryExpression, RDDScanExec, SparkPlan}\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, AQEPropagateEmptyRelation}\n import org.apache.spark.sql.execution.columnar._\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.ui.SparkListenerSQLAdaptiveExecutionUpdate\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -519,7 +519,8 @@ class CachedTableSuite extends QueryTest with SQLTestUtils\n       df.collect()\n     }\n     assert(\n-      collect(df.queryExecution.executedPlan) { case e: ShuffleExchangeExec => e }.size == expected)\n+      collect(df.queryExecution.executedPlan) {\n+        case _: ShuffleExchangeLike => 1 }.size == expected)\n   }\n \n   test(\"A cached table preserves the partitioning and ordering of its cached SparkPlan\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala\nindex 6f3090d8908..c08a60fb0c2 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala\n@@ -28,7 +28,7 @@ import org.apache.spark.sql.catalyst.plans.logical.Expand\n import org.apache.spark.sql.execution.WholeStageCodegenExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.aggregate.{HashAggregateExec, ObjectHashAggregateExec, SortAggregateExec}\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.expressions.Window\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -793,7 +793,7 @@ class DataFrameAggregateSuite extends QueryTest\n       assert(objHashAggPlans.nonEmpty)\n \n       val exchangePlans = collect(aggPlan) {\n-        case shuffle: ShuffleExchangeExec => shuffle\n+        case shuffle: ShuffleExchangeLike => shuffle\n       }\n       assert(exchangePlans.length == 1)\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala\nindex 56e9520fdab..917932336df 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala\n@@ -435,7 +435,9 @@ class DataFrameJoinSuite extends QueryTest\n \n     withTempDatabase { dbName =>\n       withTable(table1Name, table2Name) {\n-        withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n+        
withSQLConf(\n+            SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n+            \"spark.comet.enabled\" -> \"false\") {\n           spark.range(50).write.saveAsTable(s\"$dbName.$table1Name\")\n           spark.range(100).write.saveAsTable(s\"$dbName.$table2Name\")\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala\nindex 7ee18df3756..d09f70e5d99 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala\n@@ -40,11 +40,12 @@ import org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation\n import org.apache.spark.sql.catalyst.parser.ParseException\n import org.apache.spark.sql.catalyst.plans.logical.{ColumnStat, LeafNode, LocalRelation, LogicalPlan, OneRowRelation, Statistics}\n import org.apache.spark.sql.catalyst.util.DateTimeUtils\n+import org.apache.spark.sql.comet.CometBroadcastExchangeExec\n import org.apache.spark.sql.connector.FakeV2Provider\n import org.apache.spark.sql.execution.{FilterExec, LogicalRDD, QueryExecution, SortExec, WholeStageCodegenExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.aggregate.HashAggregateExec\n-import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ReusedExchangeExec, ShuffleExchangeExec, ShuffleExchangeLike}\n+import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ReusedExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.expressions.{Aggregator, Window}\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -2006,7 +2007,7 @@ class DataFrameSuite extends QueryTest\n           fail(\"Should not have back to back Aggregates\")\n         }\n         atFirstAgg = true\n-      case e: ShuffleExchangeExec => atFirstAgg = false\n+      case e: ShuffleExchangeLike => atFirstAgg = false\n       case _ =>\n     }\n   }\n@@ -2330,7 +2331,7 @@ class DataFrameSuite extends QueryTest\n       checkAnswer(join, df)\n       assert(\n         collect(join.queryExecution.executedPlan) {\n-          case e: ShuffleExchangeExec => true }.size === 1)\n+          case _: ShuffleExchangeLike => true }.size === 1)\n       assert(\n         collect(join.queryExecution.executedPlan) { case e: ReusedExchangeExec => true }.size === 1)\n       val broadcasted = broadcast(join)\n@@ -2338,10 +2339,12 @@ class DataFrameSuite extends QueryTest\n       checkAnswer(join2, df)\n       assert(\n         collect(join2.queryExecution.executedPlan) {\n-          case e: ShuffleExchangeExec => true }.size == 1)\n+          case _: ShuffleExchangeLike => true }.size == 1)\n       assert(\n         collect(join2.queryExecution.executedPlan) {\n-          case e: BroadcastExchangeExec => true }.size === 1)\n+          case e: BroadcastExchangeExec => true\n+          case _: CometBroadcastExchangeExec => true\n+        }.size === 1)\n       assert(\n         collect(join2.queryExecution.executedPlan) { case e: ReusedExchangeExec => true }.size == 4)\n     }\n@@ -2901,7 +2904,7 @@ class DataFrameSuite extends QueryTest\n \n     // Assert that no extra shuffle introduced by cogroup.\n     val exchanges = collect(df3.queryExecution.executedPlan) {\n-      case h: ShuffleExchangeExec => h\n+      case h: ShuffleExchangeLike => h\n     }\n     assert(exchanges.size == 2)\n   }\ndiff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala\nindex a1d5d579338..c201d39cc78 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala\n@@ -24,8 +24,9 @@ import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression\n import org.apache.spark.sql.catalyst.optimizer.TransposeWindow\n import org.apache.spark.sql.catalyst.plans.logical.{Window => LogicalWindow}\n import org.apache.spark.sql.catalyst.plans.physical.HashPartitioning\n+import org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n-import org.apache.spark.sql.execution.exchange.{ENSURE_REQUIREMENTS, Exchange, ShuffleExchangeExec}\n+import org.apache.spark.sql.execution.exchange.{ENSURE_REQUIREMENTS, Exchange, ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.window.WindowExec\n import org.apache.spark.sql.expressions.{Aggregator, MutableAggregationBuffer, UserDefinedAggregateFunction, Window}\n import org.apache.spark.sql.functions._\n@@ -1187,10 +1188,12 @@ class DataFrameWindowFunctionsSuite extends QueryTest\n     }\n \n     def isShuffleExecByRequirement(\n-        plan: ShuffleExchangeExec,\n+        plan: ShuffleExchangeLike,\n         desiredClusterColumns: Seq[String]): Boolean = plan match {\n       case ShuffleExchangeExec(op: HashPartitioning, _, ENSURE_REQUIREMENTS, _) =>\n         partitionExpressionsColumns(op.expressions) === desiredClusterColumns\n+      case CometShuffleExchangeExec(op: HashPartitioning, _, _, ENSURE_REQUIREMENTS, _, _) =>\n+        partitionExpressionsColumns(op.expressions) === desiredClusterColumns\n       case _ => false\n     }\n \n@@ -1213,7 +1216,7 @@ class DataFrameWindowFunctionsSuite extends QueryTest\n       val shuffleByRequirement = windowed.queryExecution.executedPlan.exists {\n         case w: WindowExec =>\n           w.child.exists {\n-            case s: ShuffleExchangeExec => isShuffleExecByRequirement(s, Seq(\"key1\", \"key2\"))\n+            case s: ShuffleExchangeLike => isShuffleExecByRequirement(s, Seq(\"key1\", \"key2\"))\n             case _ => false\n           }\n         case _ => false\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala\nindex c4fb4fa943c..a04b23870a8 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala\n@@ -38,7 +38,7 @@ import org.apache.spark.sql.catalyst.plans.{LeftAnti, LeftSemi}\n import org.apache.spark.sql.catalyst.util.sideBySide\n import org.apache.spark.sql.execution.{LogicalRDD, RDDScanExec, SQLExecution}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n-import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ShuffleExchangeExec}\n+import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.streaming.MemoryStream\n import org.apache.spark.sql.expressions.UserDefinedFunction\n import org.apache.spark.sql.functions._\n@@ -2288,7 +2288,7 @@ class DatasetSuite extends QueryTest\n \n     // Assert that no extra shuffle introduced by cogroup.\n     val 
exchanges = collect(df3.queryExecution.executedPlan) {\n-      case h: ShuffleExchangeExec => h\n+      case h: ShuffleExchangeLike => h\n     }\n     assert(exchanges.size == 2)\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala\nindex f33432ddb6f..7122af0d414 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala\n@@ -22,6 +22,7 @@ import org.scalatest.GivenWhenThen\n import org.apache.spark.sql.catalyst.expressions.{DynamicPruningExpression, Expression}\n import org.apache.spark.sql.catalyst.expressions.CodegenObjectFactoryMode._\n import org.apache.spark.sql.catalyst.plans.ExistenceJoin\n+import org.apache.spark.sql.comet.CometScanExec\n import org.apache.spark.sql.connector.catalog.{InMemoryTableCatalog, InMemoryTableWithV2FilterCatalog}\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.adaptive._\n@@ -262,6 +263,9 @@ abstract class DynamicPartitionPruningSuiteBase\n       case s: BatchScanExec => s.runtimeFilters.collect {\n         case d: DynamicPruningExpression => d.child\n       }\n+      case s: CometScanExec => s.partitionFilters.collect {\n+        case d: DynamicPruningExpression => d.child\n+      }\n       case _ => Nil\n     }\n   }\n@@ -1027,7 +1031,8 @@ abstract class DynamicPartitionPruningSuiteBase\n     }\n   }\n \n-  test(\"avoid reordering broadcast join keys to match input hash partitioning\") {\n+  test(\"avoid reordering broadcast join keys to match input hash partitioning\",\n+    IgnoreComet(\"TODO: https://github.com/apache/datafusion-comet/issues/1839\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"false\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n       withTable(\"large\", \"dimTwo\", \"dimThree\") {\n@@ -1215,7 +1220,8 @@ abstract class DynamicPartitionPruningSuiteBase\n   }\n \n   test(\"SPARK-32509: Unused Dynamic Pruning filter shouldn't affect \" +\n-    \"canonicalization and exchange reuse\") {\n+    \"canonicalization and exchange reuse\",\n+    IgnoreComet(\"TODO: https://github.com/apache/datafusion-comet/issues/1839\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"true\") {\n       withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n         val df = sql(\n@@ -1423,7 +1429,8 @@ abstract class DynamicPartitionPruningSuiteBase\n     }\n   }\n \n-  test(\"SPARK-34637: DPP side broadcast query stage is created firstly\") {\n+  test(\"SPARK-34637: DPP side broadcast query stage is created firstly\",\n+    IgnoreComet(\"TODO: https://github.com/apache/datafusion-comet/issues/1839\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"true\") {\n       val df = sql(\n         \"\"\" WITH v as (\n@@ -1698,7 +1705,8 @@ abstract class DynamicPartitionPruningV1Suite extends DynamicPartitionPruningDat\n    * Check the static scan metrics with and without DPP\n    */\n   test(\"static scan metrics\",\n-    DisableAdaptiveExecution(\"DPP in AQE must reuse broadcast\")) {\n+    DisableAdaptiveExecution(\"DPP in AQE must reuse broadcast\"),\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3442\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_ENABLED.key -> \"true\",\n       
SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"false\",\n       SQLConf.EXCHANGE_REUSE_ENABLED.key -> \"false\") {\n@@ -1729,6 +1737,8 @@ abstract class DynamicPartitionPruningV1Suite extends DynamicPartitionPruningDat\n               case s: BatchScanExec =>\n                 // we use f1 col for v2 tables due to schema pruning\n                 s.output.exists(_.exists(_.argString(maxFields = 100).contains(\"f1\")))\n+              case s: CometScanExec =>\n+                s.output.exists(_.exists(_.argString(maxFields = 100).contains(\"fid\")))\n               case _ => false\n             }\n           assert(scanOption.isDefined)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala\nindex a206e97c353..fea1149b67d 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala\n@@ -467,7 +467,8 @@ class ExplainSuite extends ExplainSuiteHelper with DisableAdaptiveExecutionSuite\n     }\n   }\n \n-  test(\"Explain formatted output for scan operator for datasource V2\") {\n+  test(\"Explain formatted output for scan operator for datasource V2\",\n+      IgnoreComet(\"Comet explain output is different\")) {\n     withTempDir { dir =>\n       Seq(\"parquet\", \"orc\", \"csv\", \"json\").foreach { fmt =>\n         val basePath = dir.getCanonicalPath + \"/\" + fmt\n@@ -545,7 +546,9 @@ class ExplainSuite extends ExplainSuiteHelper with DisableAdaptiveExecutionSuite\n   }\n }\n \n-class ExplainSuiteAE extends ExplainSuiteHelper with EnableAdaptiveExecutionSuite {\n+// Ignored when Comet is enabled. Comet changes expected query plans.\n+class ExplainSuiteAE extends ExplainSuiteHelper with EnableAdaptiveExecutionSuite\n+    with IgnoreCometSuite {\n   import testImplicits._\n \n   test(\"SPARK-35884: Explain Formatted\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala\nindex 93275487f29..78150c9163e 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala\n@@ -23,6 +23,7 @@ import java.nio.file.{Files, StandardOpenOption}\n \n import scala.collection.mutable\n \n+import org.apache.comet.CometConf\n import org.apache.hadoop.conf.Configuration\n import org.apache.hadoop.fs.{LocalFileSystem, Path}\n \n@@ -33,6 +34,7 @@ import org.apache.spark.sql.catalyst.expressions.{AttributeReference, GreaterTha\n import org.apache.spark.sql.catalyst.expressions.IntegralLiteralTestUtils.{negativeInt, positiveInt}\n import org.apache.spark.sql.catalyst.plans.logical.Filter\n import org.apache.spark.sql.catalyst.types.DataTypeUtils\n+import org.apache.spark.sql.comet.{CometBatchScanExec, CometNativeScanExec, CometScanExec, CometSortMergeJoinExec}\n import org.apache.spark.sql.execution.{FileSourceScanLike, SimpleMode}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.datasources.FilePartition\n@@ -250,6 +252,12 @@ class FileBasedDataSourceSuite extends QueryTest\n               case \"\" => \"_LEGACY_ERROR_TEMP_2062\"\n               case _ => \"_LEGACY_ERROR_TEMP_2055\"\n             }\n+            // native_datafusion Parquet scan cannot throw\n+            // a SparkFileNotFoundException\n+            assume(!Seq(\n+          
    CometConf.SCAN_NATIVE_DATAFUSION,\n+              CometConf.SCAN_AUTO\n+            ).contains(CometConf.COMET_NATIVE_SCAN_IMPL.get()))\n             checkErrorMatchPVals(\n               exception = intercept[SparkException] {\n                 testIgnoreMissingFiles(options)\n@@ -656,18 +664,25 @@ class FileBasedDataSourceSuite extends QueryTest\n             checkAnswer(sql(s\"select A from $tableName\"), data.select(\"A\"))\n \n             // RuntimeException is triggered at executor side, which is then wrapped as\n-            // SparkException at driver side\n+            // SparkException at driver side. Comet native readers throw\n+            // SparkRuntimeException directly without the SparkException wrapper.\n+            def getDuplicateFieldError(query: String): SparkRuntimeException = {\n+              try {\n+                sql(query).collect()\n+                fail(\"Expected an exception\").asInstanceOf[SparkRuntimeException]\n+              } catch {\n+                case e: SparkException =>\n+                  e.getCause.asInstanceOf[SparkRuntimeException]\n+                case e: SparkRuntimeException => e\n+              }\n+            }\n             checkError(\n-              exception = intercept[SparkException] {\n-                sql(s\"select b from $tableName\").collect()\n-              }.getCause.asInstanceOf[SparkRuntimeException],\n+              exception = getDuplicateFieldError(s\"select b from $tableName\"),\n               errorClass = \"_LEGACY_ERROR_TEMP_2093\",\n               parameters = Map(\"requiredFieldName\" -> \"b\", \"matchedOrcFields\" -> \"[b, B]\")\n             )\n             checkError(\n-              exception = intercept[SparkException] {\n-                sql(s\"select B from $tableName\").collect()\n-              }.getCause.asInstanceOf[SparkRuntimeException],\n+              exception = getDuplicateFieldError(s\"select B from $tableName\"),\n               errorClass = \"_LEGACY_ERROR_TEMP_2093\",\n               parameters = Map(\"requiredFieldName\" -> \"b\", \"matchedOrcFields\" -> \"[b, B]\")\n             )\n@@ -955,6 +970,7 @@ class FileBasedDataSourceSuite extends QueryTest\n             assert(bJoinExec.isEmpty)\n             val smJoinExec = collect(joinedDF.queryExecution.executedPlan) {\n               case smJoin: SortMergeJoinExec => smJoin\n+              case smJoin: CometSortMergeJoinExec => smJoin\n             }\n             assert(smJoinExec.nonEmpty)\n           }\n@@ -1015,6 +1031,7 @@ class FileBasedDataSourceSuite extends QueryTest\n \n           val fileScan = df.queryExecution.executedPlan collectFirst {\n             case BatchScanExec(_, f: FileScan, _, _, _, _) => f\n+            case CometBatchScanExec(BatchScanExec(_, f: FileScan, _, _, _, _), _, _) => f\n           }\n           assert(fileScan.nonEmpty)\n           assert(fileScan.get.partitionFilters.nonEmpty)\n@@ -1056,6 +1073,7 @@ class FileBasedDataSourceSuite extends QueryTest\n \n           val fileScan = df.queryExecution.executedPlan collectFirst {\n             case BatchScanExec(_, f: FileScan, _, _, _, _) => f\n+            case CometBatchScanExec(BatchScanExec(_, f: FileScan, _, _, _, _), _, _) => f\n           }\n           assert(fileScan.nonEmpty)\n           assert(fileScan.get.partitionFilters.isEmpty)\n@@ -1240,6 +1258,9 @@ class FileBasedDataSourceSuite extends QueryTest\n           val filters = df.queryExecution.executedPlan.collect {\n             case f: FileSourceScanLike => f.dataFilters\n             case b: 
BatchScanExec => b.scan.asInstanceOf[FileScan].dataFilters\n+            case b: CometScanExec => b.dataFilters\n+            case b: CometNativeScanExec => b.dataFilters\n+            case b: CometBatchScanExec => b.scan.asInstanceOf[FileScan].dataFilters\n           }.flatten\n           assert(filters.contains(GreaterThan(scan.logicalPlan.output.head, Literal(5L))))\n         }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/IgnoreComet.scala b/sql/core/src/test/scala/org/apache/spark/sql/IgnoreComet.scala\nnew file mode 100644\nindex 00000000000..1ee842b6f62\n--- /dev/null\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/IgnoreComet.scala\n@@ -0,0 +1,45 @@\n+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one or more\n+ * contributor license agreements.  See the NOTICE file distributed with\n+ * this work for additional information regarding copyright ownership.\n+ * The ASF licenses this file to You under the Apache License, Version 2.0\n+ * (the \"License\"); you may not use this file except in compliance with\n+ * the License.  You may obtain a copy of the License at\n+ *\n+ *    http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+package org.apache.spark.sql\n+\n+import org.scalactic.source.Position\n+import org.scalatest.Tag\n+\n+import org.apache.spark.sql.test.SQLTestUtils\n+\n+/**\n+ * Tests with this tag will be ignored when Comet is enabled (e.g., via `ENABLE_COMET`).\n+ */\n+case class IgnoreComet(reason: String) extends Tag(\"DisableComet\")\n+case class IgnoreCometNativeIcebergCompat(reason: String) extends Tag(\"DisableComet\")\n+case class IgnoreCometNativeDataFusion(reason: String) extends Tag(\"DisableComet\")\n+case class IgnoreCometNativeScan(reason: String) extends Tag(\"DisableComet\")\n+\n+/**\n+ * Helper trait that disables Comet for all tests regardless of default config values.\n+ */\n+trait IgnoreCometSuite extends SQLTestUtils {\n+  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n+      pos: Position): Unit = {\n+    if (isCometEnabled) {\n+      ignore(testName + \" (disabled when Comet is on)\", testTags: _*)(testFun)\n+    } else {\n+      super.test(testName, testTags: _*)(testFun)\n+    }\n+  }\n+}\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala\nindex 7af826583bd..3c3def1eb67 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala\n@@ -23,6 +23,7 @@ import org.apache.spark.sql.catalyst.optimizer.{BuildLeft, BuildRight, BuildSide\n import org.apache.spark.sql.catalyst.plans.PlanTest\n import org.apache.spark.sql.catalyst.plans.logical._\n import org.apache.spark.sql.catalyst.rules.RuleExecutor\n+import org.apache.spark.sql.comet.{CometHashJoinExec, CometSortMergeJoinExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.joins._\n import org.apache.spark.sql.internal.SQLConf\n@@ -362,6 +363,7 @@ class JoinHintSuite extends PlanTest with SharedSparkSession with AdaptiveSparkP\n     val executedPlan = 
df.queryExecution.executedPlan\n     val shuffleHashJoins = collect(executedPlan) {\n       case s: ShuffledHashJoinExec => s\n+      case c: CometHashJoinExec => c.originalPlan.asInstanceOf[ShuffledHashJoinExec]\n     }\n     assert(shuffleHashJoins.size == 1)\n     assert(shuffleHashJoins.head.buildSide == buildSide)\n@@ -371,6 +373,7 @@ class JoinHintSuite extends PlanTest with SharedSparkSession with AdaptiveSparkP\n     val executedPlan = df.queryExecution.executedPlan\n     val shuffleMergeJoins = collect(executedPlan) {\n       case s: SortMergeJoinExec => s\n+      case c: CometSortMergeJoinExec => c\n     }\n     assert(shuffleMergeJoins.size == 1)\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala\nindex 44c8cb92fc3..f098beeca26 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala\n@@ -31,7 +31,8 @@ import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation\n import org.apache.spark.sql.catalyst.expressions.{Ascending, GenericRow, SortOrder}\n import org.apache.spark.sql.catalyst.optimizer.{BuildLeft, BuildRight}\n import org.apache.spark.sql.catalyst.plans.logical.{Filter, HintInfo, Join, JoinHint, NO_BROADCAST_AND_REPLICATION}\n-import org.apache.spark.sql.execution.{BinaryExecNode, FilterExec, ProjectExec, SortExec, SparkPlan, WholeStageCodegenExec}\n+import org.apache.spark.sql.comet._\n+import org.apache.spark.sql.execution.{BinaryExecNode, ColumnarToRowExec, FilterExec, InputAdapter, ProjectExec, SortExec, SparkPlan, WholeStageCodegenExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.exchange.{ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.joins._\n@@ -802,7 +803,8 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n     }\n   }\n \n-  test(\"test SortMergeJoin (with spill)\") {\n+  test(\"test SortMergeJoin (with spill)\",\n+      IgnoreComet(\"TODO: Comet SMJ doesn't support spill yet\")) {\n     withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"1\",\n       SQLConf.SORT_MERGE_JOIN_EXEC_BUFFER_IN_MEMORY_THRESHOLD.key -> \"0\",\n       SQLConf.SORT_MERGE_JOIN_EXEC_BUFFER_SPILL_THRESHOLD.key -> \"1\") {\n@@ -928,10 +930,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n       val physical = df.queryExecution.sparkPlan\n       val physicalJoins = physical.collect {\n         case j: SortMergeJoinExec => j\n+        case j: CometSortMergeJoinExec => j.originalPlan.asInstanceOf[SortMergeJoinExec]\n       }\n       val executed = df.queryExecution.executedPlan\n       val executedJoins = collect(executed) {\n         case j: SortMergeJoinExec => j\n+        case j: CometSortMergeJoinExec => j.originalPlan.asInstanceOf[SortMergeJoinExec]\n       }\n       // This only applies to the above tested queries, in which a child SortMergeJoin always\n       // contains the SortOrder required by its parent SortMergeJoin. 
Thus, SortExec should never\n@@ -1177,9 +1181,11 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n       val plan = df1.join(df2.hint(\"SHUFFLE_HASH\"), $\"k1\" === $\"k2\", joinType)\n         .groupBy($\"k1\").count()\n         .queryExecution.executedPlan\n-      assert(collect(plan) { case _: ShuffledHashJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: ShuffledHashJoinExec | _: CometHashJoinExec => true }.size === 1)\n       // No extra shuffle before aggregate\n-      assert(collect(plan) { case _: ShuffleExchangeExec => true }.size === 2)\n+      assert(collect(plan) {\n+        case _: ShuffleExchangeLike => true }.size === 2)\n     })\n   }\n \n@@ -1196,10 +1202,11 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n         .join(df4.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k4\", joinType)\n         .queryExecution\n         .executedPlan\n-      assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 2)\n+      assert(collect(plan) {\n+        case _: SortMergeJoinExec | _: CometSortMergeJoinExec => true }.size === 2)\n       assert(collect(plan) { case _: BroadcastHashJoinExec => true }.size === 1)\n       // No extra sort before last sort merge join\n-      assert(collect(plan) { case _: SortExec => true }.size === 3)\n+      assert(collect(plan) { case _: SortExec | _: CometSortExec => true }.size === 3)\n     })\n \n     // Test shuffled hash join\n@@ -1209,10 +1216,13 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n         .join(df4.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k4\", joinType)\n         .queryExecution\n         .executedPlan\n-      assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 2)\n-      assert(collect(plan) { case _: ShuffledHashJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: SortMergeJoinExec | _: CometSortMergeJoinExec => true }.size === 2)\n+      assert(collect(plan) {\n+        case _: ShuffledHashJoinExec | _: CometHashJoinExec => true }.size === 1)\n       // No extra sort before last sort merge join\n-      assert(collect(plan) { case _: SortExec => true }.size === 3)\n+      assert(collect(plan) {\n+        case _: SortExec | _: CometSortExec => true }.size === 3)\n     })\n   }\n \n@@ -1303,12 +1313,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n     inputDFs.foreach { case (df1, df2, joinExprs) =>\n       val smjDF = df1.join(df2.hint(\"SHUFFLE_MERGE\"), joinExprs, \"full\")\n       assert(collect(smjDF.queryExecution.executedPlan) {\n-        case _: SortMergeJoinExec => true }.size === 1)\n+        case _: SortMergeJoinExec | _: CometSortMergeJoinExec => true }.size === 1)\n       val smjResult = smjDF.collect()\n \n       val shjDF = df1.join(df2.hint(\"SHUFFLE_HASH\"), joinExprs, \"full\")\n       assert(collect(shjDF.queryExecution.executedPlan) {\n-        case _: ShuffledHashJoinExec => true }.size === 1)\n+        case _: ShuffledHashJoinExec | _: CometHashJoinExec => true }.size === 1)\n       // Same result between shuffled hash join and sort merge join\n       checkAnswer(shjDF, smjResult)\n     }\n@@ -1367,12 +1377,14 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           val smjDF = df1.hint(\"SHUFFLE_MERGE\").join(df2, joinExprs, \"leftouter\")\n           assert(collect(smjDF.queryExecution.executedPlan) {\n             case _: SortMergeJoinExec => true\n+ 
           case _: CometSortMergeJoinExec => true\n           }.size === 1)\n           val smjResult = smjDF.collect()\n \n           val shjDF = df1.hint(\"SHUFFLE_HASH\").join(df2, joinExprs, \"leftouter\")\n           assert(collect(shjDF.queryExecution.executedPlan) {\n             case _: ShuffledHashJoinExec => true\n+            case _: CometHashJoinExec => true\n           }.size === 1)\n           // Same result between shuffled hash join and sort merge join\n           checkAnswer(shjDF, smjResult)\n@@ -1383,12 +1395,14 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           val smjDF = df2.join(df1.hint(\"SHUFFLE_MERGE\"), joinExprs, \"rightouter\")\n           assert(collect(smjDF.queryExecution.executedPlan) {\n             case _: SortMergeJoinExec => true\n+            case _: CometSortMergeJoinExec => true\n           }.size === 1)\n           val smjResult = smjDF.collect()\n \n           val shjDF = df2.join(df1.hint(\"SHUFFLE_HASH\"), joinExprs, \"rightouter\")\n           assert(collect(shjDF.queryExecution.executedPlan) {\n             case _: ShuffledHashJoinExec => true\n+            case _: CometHashJoinExec => true\n           }.size === 1)\n           // Same result between shuffled hash join and sort merge join\n           checkAnswer(shjDF, smjResult)\n@@ -1432,13 +1446,20 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n         assert(shjCodegenDF.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_ : ShuffledHashJoinExec) => true\n           case WholeStageCodegenExec(ProjectExec(_, _ : ShuffledHashJoinExec)) => true\n+          case WholeStageCodegenExec(ColumnarToRowExec(InputAdapter(_: CometHashJoinExec))) =>\n+            true\n+          case WholeStageCodegenExec(ColumnarToRowExec(\n+            InputAdapter(CometProjectExec(_, _, _, _, _: CometHashJoinExec, _)))) => true\n+          case _: CometHashJoinExec => true\n         }.size === 1)\n         checkAnswer(shjCodegenDF, Seq.empty)\n \n         withSQLConf(SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key -> \"false\") {\n           val shjNonCodegenDF = df1.join(df2.hint(\"SHUFFLE_HASH\"), $\"k1\" === $\"k2\", joinType)\n           assert(shjNonCodegenDF.queryExecution.executedPlan.collect {\n-            case _: ShuffledHashJoinExec => true }.size === 1)\n+            case _: ShuffledHashJoinExec => true\n+            case _: CometHashJoinExec => true\n+          }.size === 1)\n           checkAnswer(shjNonCodegenDF, Seq.empty)\n         }\n       }\n@@ -1486,7 +1507,8 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           val plan = sql(getAggQuery(selectExpr, joinType)).queryExecution.executedPlan\n           assert(collect(plan) { case _: BroadcastNestedLoopJoinExec => true }.size === 1)\n           // Have shuffle before aggregation\n-          assert(collect(plan) { case _: ShuffleExchangeExec => true }.size === 1)\n+          assert(collect(plan) {\n+            case _: ShuffleExchangeLike => true }.size === 1)\n       }\n \n       def getJoinQuery(selectExpr: String, joinType: String): String = {\n@@ -1515,9 +1537,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           }\n           val plan = sql(getJoinQuery(selectExpr, joinType)).queryExecution.executedPlan\n           assert(collect(plan) { case _: BroadcastNestedLoopJoinExec => true }.size === 1)\n-          assert(collect(plan) { case _: SortMergeJoinExec => 
true }.size === 3)\n+          assert(collect(plan) {\n+            case _: SortMergeJoinExec => true\n+            case _: CometSortMergeJoinExec => true\n+          }.size === 3)\n           // No extra sort on left side before last sort merge join\n-          assert(collect(plan) { case _: SortExec => true }.size === 5)\n+          assert(collect(plan) { case _: SortExec | _: CometSortExec => true }.size === 5)\n       }\n \n       // Test output ordering is not preserved\n@@ -1526,9 +1551,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           val selectExpr = \"/*+ BROADCAST(left_t) */ k1 as k0\"\n           val plan = sql(getJoinQuery(selectExpr, joinType)).queryExecution.executedPlan\n           assert(collect(plan) { case _: BroadcastNestedLoopJoinExec => true }.size === 1)\n-          assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 3)\n+          assert(collect(plan) {\n+            case _: SortMergeJoinExec => true\n+            case _: CometSortMergeJoinExec => true\n+          }.size === 3)\n           // Have sort on left side before last sort merge join\n-          assert(collect(plan) { case _: SortExec => true }.size === 6)\n+          assert(collect(plan) { case _: SortExec | _: CometSortExec => true }.size === 6)\n       }\n \n       // Test singe partition\n@@ -1538,7 +1566,8 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n            |FROM range(0, 10, 1, 1) t1 FULL OUTER JOIN range(0, 10, 1, 1) t2\n            |\"\"\".stripMargin)\n       val plan = fullJoinDF.queryExecution.executedPlan\n-      assert(collect(plan) { case _: ShuffleExchangeExec => true}.size == 1)\n+      assert(collect(plan) {\n+        case _: ShuffleExchangeLike => true}.size == 1)\n       checkAnswer(fullJoinDF, Row(100))\n     }\n   }\n@@ -1611,6 +1640,9 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           Seq(semiJoinDF, antiJoinDF).foreach { df =>\n             assert(collect(df.queryExecution.executedPlan) {\n               case j: ShuffledHashJoinExec if j.ignoreDuplicatedKey == ignoreDuplicatedKey => true\n+              case j: CometHashJoinExec\n+                if j.originalPlan.asInstanceOf[ShuffledHashJoinExec].ignoreDuplicatedKey ==\n+                  ignoreDuplicatedKey => true\n             }.size == 1)\n           }\n       }\n@@ -1655,14 +1687,20 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n \n   test(\"SPARK-43113: Full outer join with duplicate stream-side references in condition (SMJ)\") {\n     def check(plan: SparkPlan): Unit = {\n-      assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: SortMergeJoinExec => true\n+        case _: CometSortMergeJoinExec => true\n+      }.size === 1)\n     }\n     dupStreamSideColTest(\"MERGE\", check)\n   }\n \n   test(\"SPARK-43113: Full outer join with duplicate stream-side references in condition (SHJ)\") {\n     def check(plan: SparkPlan): Unit = {\n-      assert(collect(plan) { case _: ShuffledHashJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: ShuffledHashJoinExec => true\n+        case _: CometHashJoinExec => true\n+      }.size === 1)\n     }\n     dupStreamSideColTest(\"SHUFFLE_HASH\", check)\n   }\n@@ -1798,7 +1836,8 @@ class ThreadLeakInSortMergeJoinSuite\n       sparkConf.set(SHUFFLE_SPILL_NUM_ELEMENTS_FORCE_SPILL_THRESHOLD, 20))\n   }\n \n-  
test(\"SPARK-47146: thread leak when doing SortMergeJoin (with spill)\") {\n+  test(\"SPARK-47146: thread leak when doing SortMergeJoin (with spill)\",\n+    IgnoreComet(\"Comet does not support spilling\")) {\n \n     withSQLConf(\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"1\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala\nindex c26757c9cff..d55775f09d7 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala\n@@ -69,7 +69,7 @@ import org.apache.spark.tags.ExtendedSQLTest\n  * }}}\n  */\n // scalastyle:on line.size.limit\n-trait PlanStabilitySuite extends DisableAdaptiveExecutionSuite {\n+trait PlanStabilitySuite extends DisableAdaptiveExecutionSuite with IgnoreCometSuite {\n \n   protected val baseResourcePath = {\n     // use the same way as `SQLQueryTestSuite` to get the resource path\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala\nindex 3cf2bfd17ab..49728c35c42 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala\n@@ -1521,7 +1521,8 @@ class SQLQuerySuite extends QueryTest with SharedSparkSession with AdaptiveSpark\n     checkAnswer(sql(\"select -0.001\"), Row(BigDecimal(\"-0.001\")))\n   }\n \n-  test(\"external sorting updates peak execution memory\") {\n+  test(\"external sorting updates peak execution memory\",\n+    IgnoreComet(\"TODO: native CometSort does not update peak execution memory\")) {\n     AccumulatorSuite.verifyPeakExecutionMemorySet(sparkContext, \"external sort\") {\n       sql(\"SELECT * FROM testData2 ORDER BY a ASC, b ASC\").collect()\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala\nindex 8b4ac474f87..3f79f20822f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala\n@@ -223,6 +223,8 @@ class SparkSessionExtensionSuite extends SparkFunSuite with SQLHelper with Adapt\n     withSession(extensions) { session =>\n       session.conf.set(SQLConf.ADAPTIVE_EXECUTION_ENABLED, true)\n       session.conf.set(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key, \"-1\")\n+      // https://github.com/apache/datafusion-comet/issues/1197\n+      session.conf.set(\"spark.comet.enabled\", false)\n       assert(session.sessionState.columnarRules.contains(\n         MyColumnarRule(PreRuleReplaceAddWithBrokenVersion(), MyPostRule())))\n       import session.sqlContext.implicits._\n@@ -281,6 +283,8 @@ class SparkSessionExtensionSuite extends SparkFunSuite with SQLHelper with Adapt\n     }\n     withSession(extensions) { session =>\n       session.conf.set(SQLConf.ADAPTIVE_EXECUTION_ENABLED, enableAQE)\n+      // https://github.com/apache/datafusion-comet/issues/1197\n+      session.conf.set(\"spark.comet.enabled\", false)\n       assert(session.sessionState.columnarRules.contains(\n         MyColumnarRule(PreRuleReplaceAddWithBrokenVersion(), MyPostRule())))\n       import session.sqlContext.implicits._\n@@ -319,6 +323,8 @@ class SparkSessionExtensionSuite extends SparkFunSuite with SQLHelper with Adapt\n     val session = 
SparkSession.builder()\n       .master(\"local[1]\")\n       .config(COLUMN_BATCH_SIZE.key, 2)\n+      // https://github.com/apache/datafusion-comet/issues/1197\n+      .config(\"spark.comet.enabled\", false)\n       .withExtensions { extensions =>\n         extensions.injectColumnar(session =>\n           MyColumnarRule(PreRuleReplaceAddWithBrokenVersion(), MyPostRule())) }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala\nindex 04702201f82..5ee11f83ecf 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala\n@@ -22,10 +22,11 @@ import scala.collection.mutable.ArrayBuffer\n import org.apache.spark.sql.catalyst.expressions.SubqueryExpression\n import org.apache.spark.sql.catalyst.plans.{LeftAnti, LeftSemi}\n import org.apache.spark.sql.catalyst.plans.logical.{Aggregate, Join, LogicalPlan, Project, Sort, Union}\n+import org.apache.spark.sql.comet.CometScanExec\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecution}\n import org.apache.spark.sql.execution.datasources.FileScanRDD\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.joins.{BaseJoinExec, BroadcastHashJoinExec, BroadcastNestedLoopJoinExec}\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n@@ -1599,6 +1600,12 @@ class SubquerySuite extends QueryTest\n             fs.inputRDDs().forall(\n               _.asInstanceOf[FileScanRDD].filePartitions.forall(\n                 _.files.forall(_.urlEncodedPath.contains(\"p=0\"))))\n+        case WholeStageCodegenExec(ColumnarToRowExec(InputAdapter(\n+        fs @ CometScanExec(_, _, _, _, partitionFilters, _, _, _, _, _, _)))) =>\n+          partitionFilters.exists(ExecSubqueryExpression.hasSubquery) &&\n+            fs.inputRDDs().forall(\n+              _.asInstanceOf[FileScanRDD].filePartitions.forall(\n+                _.files.forall(_.urlEncodedPath.contains(\"p=0\"))))\n         case _ => false\n       })\n     }\n@@ -2164,7 +2171,7 @@ class SubquerySuite extends QueryTest\n \n       df.collect()\n       val exchanges = collect(df.queryExecution.executedPlan) {\n-        case s: ShuffleExchangeExec => s\n+        case s: ShuffleExchangeLike => s\n       }\n       assert(exchanges.size === 1)\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala\nindex d269290e616..13726a31e07 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala\n@@ -24,6 +24,7 @@ import test.org.apache.spark.sql.connector._\n \n import org.apache.spark.sql.{AnalysisException, DataFrame, QueryTest, Row}\n import org.apache.spark.sql.catalyst.InternalRow\n+import org.apache.spark.sql.comet.CometSortExec\n import org.apache.spark.sql.connector.catalog.{PartitionInternalRow, SupportsRead, Table, TableCapability, TableProvider}\n import org.apache.spark.sql.connector.catalog.TableCapability._\n import org.apache.spark.sql.connector.expressions.{Expression, FieldReference, Literal, NamedReference, NullOrdering, SortDirection, 
SortOrder, Transform}\n@@ -34,7 +35,7 @@ import org.apache.spark.sql.connector.read.partitioning.{KeyGroupedPartitioning,\n import org.apache.spark.sql.execution.SortExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.datasources.v2.{BatchScanExec, DataSourceV2Relation, DataSourceV2ScanRelation}\n-import org.apache.spark.sql.execution.exchange.{Exchange, ShuffleExchangeExec}\n+import org.apache.spark.sql.execution.exchange.{Exchange, ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector\n import org.apache.spark.sql.expressions.Window\n import org.apache.spark.sql.functions._\n@@ -269,13 +270,13 @@ class DataSourceV2Suite extends QueryTest with SharedSparkSession with AdaptiveS\n           val groupByColJ = df.groupBy($\"j\").agg(sum($\"i\"))\n           checkAnswer(groupByColJ, Seq(Row(2, 8), Row(4, 2), Row(6, 5)))\n           assert(collectFirst(groupByColJ.queryExecution.executedPlan) {\n-            case e: ShuffleExchangeExec => e\n+            case e: ShuffleExchangeLike => e\n           }.isDefined)\n \n           val groupByIPlusJ = df.groupBy($\"i\" + $\"j\").agg(count(\"*\"))\n           checkAnswer(groupByIPlusJ, Seq(Row(5, 2), Row(6, 2), Row(8, 1), Row(9, 1)))\n           assert(collectFirst(groupByIPlusJ.queryExecution.executedPlan) {\n-            case e: ShuffleExchangeExec => e\n+            case e: ShuffleExchangeLike => e\n           }.isDefined)\n         }\n       }\n@@ -335,10 +336,11 @@ class DataSourceV2Suite extends QueryTest with SharedSparkSession with AdaptiveS\n \n                 val (shuffleExpected, sortExpected) = groupByExpects\n                 assert(collectFirst(groupBy.queryExecution.executedPlan) {\n-                  case e: ShuffleExchangeExec => e\n+                  case e: ShuffleExchangeLike => e\n                 }.isDefined === shuffleExpected)\n                 assert(collectFirst(groupBy.queryExecution.executedPlan) {\n                   case e: SortExec => e\n+                  case c: CometSortExec => c\n                 }.isDefined === sortExpected)\n               }\n \n@@ -353,10 +355,11 @@ class DataSourceV2Suite extends QueryTest with SharedSparkSession with AdaptiveS\n \n                 val (shuffleExpected, sortExpected) = windowFuncExpects\n                 assert(collectFirst(windowPartByColIOrderByColJ.queryExecution.executedPlan) {\n-                  case e: ShuffleExchangeExec => e\n+                  case e: ShuffleExchangeLike => e\n                 }.isDefined === shuffleExpected)\n                 assert(collectFirst(windowPartByColIOrderByColJ.queryExecution.executedPlan) {\n                   case e: SortExec => e\n+                  case c: CometSortExec => c\n                 }.isDefined === sortExpected)\n               }\n             }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala\nindex cfc8b2cc845..c4be7eb3731 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala\n@@ -21,6 +21,7 @@ import scala.collection.mutable.ArrayBuffer\n import org.apache.spark.SparkConf\n import org.apache.spark.sql.{AnalysisException, QueryTest}\n import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan\n+import 
org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.connector.catalog.{SupportsRead, SupportsWrite, Table, TableCapability}\n import org.apache.spark.sql.connector.read.ScanBuilder\n import org.apache.spark.sql.connector.write.{LogicalWriteInfo, WriteBuilder}\n@@ -184,7 +185,11 @@ class FileDataSourceV2FallBackSuite extends QueryTest with SharedSparkSession {\n             val df = spark.read.format(format).load(path.getCanonicalPath)\n             checkAnswer(df, inputData.toDF())\n             assert(\n-              df.queryExecution.executedPlan.exists(_.isInstanceOf[FileSourceScanExec]))\n+              df.queryExecution.executedPlan.exists {\n+                case _: FileSourceScanExec | _: CometScanExec | _: CometNativeScanExec => true\n+                case _ => false\n+              }\n+            )\n           }\n         } finally {\n           spark.listenerManager.unregister(listener)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala\nindex 71e030f535e..d5ae6cbf3d5 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala\n@@ -22,6 +22,7 @@ import org.apache.spark.sql.{DataFrame, Row}\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.{Literal, TransformExpression}\n import org.apache.spark.sql.catalyst.plans.physical\n+import org.apache.spark.sql.comet.CometSortMergeJoinExec\n import org.apache.spark.sql.connector.catalog.Identifier\n import org.apache.spark.sql.connector.catalog.InMemoryTableCatalog\n import org.apache.spark.sql.connector.catalog.functions._\n@@ -31,7 +32,7 @@ import org.apache.spark.sql.connector.expressions.Expressions._\n import org.apache.spark.sql.execution.SparkPlan\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanRelation\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.joins.SortMergeJoinExec\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.internal.SQLConf._\n@@ -282,13 +283,14 @@ class KeyGroupedPartitioningSuite extends DistributionAndOrderingSuiteBase {\n         Row(\"bbb\", 20, 250.0), Row(\"bbb\", 20, 350.0), Row(\"ccc\", 30, 400.50)))\n   }\n \n-  private def collectShuffles(plan: SparkPlan): Seq[ShuffleExchangeExec] = {\n+  private def collectShuffles(plan: SparkPlan): Seq[ShuffleExchangeLike] = {\n     // here we skip collecting shuffle operators that are not associated with SMJ\n     collect(plan) {\n       case s: SortMergeJoinExec => s\n+      case c: CometSortMergeJoinExec => c.originalPlan\n     }.flatMap(smj =>\n       collect(smj) {\n-        case s: ShuffleExchangeExec => s\n+        case s: ShuffleExchangeLike => s\n       })\n   }\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/connector/WriteDistributionAndOrderingSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/WriteDistributionAndOrderingSuite.scala\nindex 12007cd94cd..07020f201fb 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/WriteDistributionAndOrderingSuite.scala\n+++ 
b/sql/core/src/test/scala/org/apache/spark/sql/connector/WriteDistributionAndOrderingSuite.scala\n@@ -21,7 +21,7 @@ package org.apache.spark.sql.connector\n import java.sql.Date\n import java.util.Collections\n \n-import org.apache.spark.sql.{catalyst, AnalysisException, DataFrame, Row}\n+import org.apache.spark.sql.{catalyst, AnalysisException, DataFrame, IgnoreCometSuite, Row}\n import org.apache.spark.sql.catalyst.expressions.{ApplyFunctionExpression, Cast, Literal}\n import org.apache.spark.sql.catalyst.expressions.objects.Invoke\n import org.apache.spark.sql.catalyst.plans.physical\n@@ -45,7 +45,8 @@ import org.apache.spark.sql.util.QueryExecutionListener\n import org.apache.spark.tags.SlowSQLTest\n \n @SlowSQLTest\n-class WriteDistributionAndOrderingSuite extends DistributionAndOrderingSuiteBase {\n+class WriteDistributionAndOrderingSuite extends DistributionAndOrderingSuiteBase\n+  with IgnoreCometSuite {\n   import testImplicits._\n \n   before {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala\nindex ae1c0a86a14..1d3b914fd64 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala\n@@ -27,7 +27,7 @@ import org.apache.hadoop.fs.permission.FsPermission\n import org.mockito.Mockito.{mock, spy, when}\n \n import org.apache.spark._\n-import org.apache.spark.sql.{AnalysisException, DataFrame, Dataset, QueryTest, Row, SaveMode}\n+import org.apache.spark.sql.{AnalysisException, DataFrame, Dataset, IgnoreComet, QueryTest, Row, SaveMode}\n import org.apache.spark.sql.catalyst.FunctionIdentifier\n import org.apache.spark.sql.catalyst.analysis.{NamedParameter, UnresolvedGenerator}\n import org.apache.spark.sql.catalyst.expressions.{Grouping, Literal, RowNumber}\n@@ -256,7 +256,8 @@ class QueryExecutionErrorsSuite\n   }\n \n   test(\"INCONSISTENT_BEHAVIOR_CROSS_VERSION: \" +\n-    \"compatibility with Spark 2.4/3.2 in reading/writing dates\") {\n+    \"compatibility with Spark 2.4/3.2 in reading/writing dates\",\n+    IgnoreComet(\"Comet doesn't completely support datetime rebase mode yet\")) {\n \n     // Fail to read ancient datetime values.\n     withSQLConf(SQLConf.PARQUET_REBASE_MODE_IN_READ.key -> EXCEPTION.toString) {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala\nindex 418ca3430bb..eb8267192f8 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala\n@@ -23,7 +23,7 @@ import scala.util.Random\n import org.apache.hadoop.fs.Path\n \n import org.apache.spark.SparkConf\n-import org.apache.spark.sql.{DataFrame, QueryTest}\n+import org.apache.spark.sql.{DataFrame, IgnoreComet, QueryTest}\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.execution.datasources.v2.orc.OrcScan\n import org.apache.spark.sql.internal.SQLConf\n@@ -195,7 +195,7 @@ class DataSourceV2ScanExecRedactionSuite extends DataSourceScanRedactionTest {\n     }\n   }\n \n-  test(\"FileScan description\") {\n+  test(\"FileScan description\", IgnoreComet(\"Comet doesn't use BatchScan\")) {\n     Seq(\"json\", 
\"orc\", \"parquet\").foreach { format =>\n       withTempPath { path =>\n         val dir = path.getCanonicalPath\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala\nindex 743ec41dbe7..9f30d6c8e04 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala\n@@ -53,6 +53,10 @@ class LogicalPlanTagInSparkPlanSuite extends TPCDSQuerySuite with DisableAdaptiv\n     case ColumnarToRowExec(i: InputAdapter) => isScanPlanTree(i.child)\n     case p: ProjectExec => isScanPlanTree(p.child)\n     case f: FilterExec => isScanPlanTree(f.child)\n+    // Comet produces scan plan tree like:\n+    // ColumnarToRow\n+    //  +- ReusedExchange\n+    case _: ReusedExchangeExec => false\n     case _: LeafExecNode => true\n     case _ => false\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala\nindex de24b8c82b0..1f835481290 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala\n@@ -18,7 +18,7 @@\n package org.apache.spark.sql.execution\n \n import org.apache.spark.rdd.RDD\n-import org.apache.spark.sql.{execution, DataFrame, Row}\n+import org.apache.spark.sql.{execution, DataFrame, IgnoreCometSuite, Row}\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions._\n import org.apache.spark.sql.catalyst.plans._\n@@ -35,7 +35,9 @@ import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.sql.types._\n \n-class PlannerSuite extends SharedSparkSession with AdaptiveSparkPlanHelper {\n+// Ignore this suite when Comet is enabled. This suite tests the Spark planner and Comet planner\n+// comes out with too many difference. 
Simply ignoring this suite for now.\n+class PlannerSuite extends SharedSparkSession with AdaptiveSparkPlanHelper with IgnoreCometSuite {\n   import testImplicits._\n \n   setupTestData()\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala\nindex 9e9d717db3b..73de2b84938 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala\n@@ -17,7 +17,10 @@\n \n package org.apache.spark.sql.execution\n \n+import org.apache.comet.CometConf\n+\n import org.apache.spark.sql.{DataFrame, QueryTest, Row}\n+import org.apache.spark.sql.comet.CometProjectExec\n import org.apache.spark.sql.connector.SimpleWritableDataSource\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.internal.SQLConf\n@@ -34,7 +37,10 @@ abstract class RemoveRedundantProjectsSuiteBase\n   private def assertProjectExecCount(df: DataFrame, expected: Int): Unit = {\n     withClue(df.queryExecution) {\n       val plan = df.queryExecution.executedPlan\n-      val actual = collectWithSubqueries(plan) { case p: ProjectExec => p }.size\n+      val actual = collectWithSubqueries(plan) {\n+        case p: ProjectExec => p\n+        case p: CometProjectExec => p\n+      }.size\n       assert(actual == expected)\n     }\n   }\n@@ -134,12 +140,26 @@ abstract class RemoveRedundantProjectsSuiteBase\n       val df = data.selectExpr(\"a\", \"b\", \"key\", \"explode(array(key, a, b)) as d\").filter(\"d > 0\")\n       df.collect()\n       val plan = df.queryExecution.executedPlan\n-      val numProjects = collectWithSubqueries(plan) { case p: ProjectExec => p }.length\n+\n+      val numProjects = collectWithSubqueries(plan) {\n+        case p: ProjectExec => p\n+        case p: CometProjectExec => p\n+      }.length\n \n       // Create a new plan that reverse the GenerateExec output and add a new ProjectExec between\n       // GenerateExec and its child. 
This is to test if the ProjectExec is removed, the output of\n       // the query will be incorrect.\n-      val newPlan = stripAQEPlan(plan) transform {\n+\n+      // Comet-specific change to get original Spark plan before applying\n+      // a transformation to add a new ProjectExec\n+      var sparkPlan: SparkPlan = null\n+      withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n+        val df = data.selectExpr(\"a\", \"b\", \"key\", \"explode(array(key, a, b)) as d\").filter(\"d > 0\")\n+        df.collect()\n+        sparkPlan = df.queryExecution.executedPlan\n+      }\n+\n+      val newPlan = stripAQEPlan(sparkPlan) transform {\n         case g @ GenerateExec(_, requiredChildOutput, _, _, child) =>\n           g.copy(requiredChildOutput = requiredChildOutput.reverse,\n             child = ProjectExec(requiredChildOutput.reverse, child))\n@@ -151,6 +171,7 @@ abstract class RemoveRedundantProjectsSuiteBase\n       // The manually added ProjectExec node shouldn't be removed.\n       assert(collectWithSubqueries(newExecutedPlan) {\n         case p: ProjectExec => p\n+        case p: CometProjectExec => p\n       }.size == numProjects + 1)\n \n       // Check the original plan's output and the new plan's output are the same.\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala\nindex 005e764cc30..92ec088efab 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala\n@@ -19,6 +19,7 @@ package org.apache.spark.sql.execution\n \n import org.apache.spark.sql.{DataFrame, QueryTest}\n import org.apache.spark.sql.catalyst.plans.physical.{RangePartitioning, UnknownPartitioning}\n+import org.apache.spark.sql.comet.CometSortExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.execution.joins.ShuffledJoin\n import org.apache.spark.sql.internal.SQLConf\n@@ -33,7 +34,7 @@ abstract class RemoveRedundantSortsSuiteBase\n \n   private def checkNumSorts(df: DataFrame, count: Int): Unit = {\n     val plan = df.queryExecution.executedPlan\n-    assert(collectWithSubqueries(plan) { case s: SortExec => s }.length == count)\n+    assert(collectWithSubqueries(plan) { case _: SortExec | _: CometSortExec => 1 }.length == count)\n   }\n \n   private def checkSorts(query: String, enabledCount: Int, disabledCount: Int): Unit = {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala\nindex 47679ed7865..9ffbaecb98e 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala\n@@ -18,6 +18,7 @@\n package org.apache.spark.sql.execution\n \n import org.apache.spark.sql.{DataFrame, QueryTest}\n+import org.apache.spark.sql.comet.CometHashAggregateExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.execution.aggregate.{HashAggregateExec, ObjectHashAggregateExec, SortAggregateExec}\n import org.apache.spark.sql.internal.SQLConf\n@@ -31,7 
+32,7 @@ abstract class ReplaceHashWithSortAggSuiteBase\n   private def checkNumAggs(df: DataFrame, hashAggCount: Int, sortAggCount: Int): Unit = {\n     val plan = df.queryExecution.executedPlan\n     assert(collectWithSubqueries(plan) {\n-      case s @ (_: HashAggregateExec | _: ObjectHashAggregateExec) => s\n+      case s @ (_: HashAggregateExec | _: ObjectHashAggregateExec | _: CometHashAggregateExec) => s\n     }.length == hashAggCount)\n     assert(collectWithSubqueries(plan) { case s: SortAggregateExec => s }.length == sortAggCount)\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala\nindex eec396b2e39..bf3f1c769d6 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLWindowFunctionSuite.scala\n@@ -18,7 +18,7 @@\n package org.apache.spark.sql.execution\n \n import org.apache.spark.TestUtils.assertSpilled\n-import org.apache.spark.sql.{AnalysisException, QueryTest, Row}\n+import org.apache.spark.sql.{AnalysisException, IgnoreComet, QueryTest, Row}\n import org.apache.spark.sql.internal.SQLConf.{WINDOW_EXEC_BUFFER_IN_MEMORY_THRESHOLD, WINDOW_EXEC_BUFFER_SPILL_THRESHOLD}\n import org.apache.spark.sql.test.SharedSparkSession\n \n@@ -470,7 +470,7 @@ class SQLWindowFunctionSuite extends QueryTest with SharedSparkSession {\n       Row(1, 3, null) :: Row(2, null, 4) :: Nil)\n   }\n \n-  test(\"test with low buffer spill threshold\") {\n+  test(\"test with low buffer spill threshold\", IgnoreComet(\"Comet does not support spilling\")) {\n     val nums = sparkContext.parallelize(1 to 10).map(x => (x, x % 2)).toDF(\"x\", \"y\")\n     nums.createOrReplaceTempView(\"nums\")\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala\nindex b14f4a405f6..90bed10eca9 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala\n@@ -23,6 +23,7 @@ import org.apache.spark.sql.QueryTest\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}\n import org.apache.spark.sql.catalyst.plans.logical.Deduplicate\n+import org.apache.spark.sql.comet.{CometColumnarToRowExec, CometNativeColumnarToRowExec}\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n@@ -131,7 +132,11 @@ class SparkPlanSuite extends QueryTest with SharedSparkSession {\n         spark.range(1).write.parquet(path.getAbsolutePath)\n         val df = spark.read.parquet(path.getAbsolutePath)\n         val columnarToRowExec =\n-          df.queryExecution.executedPlan.collectFirst { case p: ColumnarToRowExec => p }.get\n+          df.queryExecution.executedPlan.collectFirst {\n+            case p: ColumnarToRowExec => p\n+            case p: CometColumnarToRowExec => p\n+            case p: CometNativeColumnarToRowExec => p\n+          }.get\n         try {\n           spark.range(1).foreach { _ =>\n             columnarToRowExec.canonicalized\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala 
b/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala\nindex 5a413c77754..207b66e1d7b 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala\n@@ -17,7 +17,7 @@\n \n package org.apache.spark.sql.execution\n \n-import org.apache.spark.sql.{Dataset, QueryTest, Row, SaveMode}\n+import org.apache.spark.sql.{Dataset, IgnoreCometSuite, QueryTest, Row, SaveMode}\n import org.apache.spark.sql.catalyst.expressions.CodegenObjectFactoryMode\n import org.apache.spark.sql.catalyst.expressions.codegen.{ByteCodeStats, CodeAndComment, CodeGenerator}\n import org.apache.spark.sql.execution.adaptive.DisableAdaptiveExecutionSuite\n@@ -30,7 +30,7 @@ import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.sql.types.{IntegerType, StringType, StructType}\n \n // Disable AQE because the WholeStageCodegenExec is added when running QueryStageExec\n-class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n+class WholeStageCodegenSuite extends QueryTest with SharedSparkSession with IgnoreCometSuite\n   with DisableAdaptiveExecutionSuite {\n \n   import testImplicits._\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala\nindex 2f8e401e743..a4f94417dcc 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala\n@@ -27,9 +27,11 @@ import org.scalatest.time.SpanSugar._\n import org.apache.spark.SparkException\n import org.apache.spark.scheduler.{SparkListener, SparkListenerEvent, SparkListenerJobStart}\n import org.apache.spark.shuffle.sort.SortShuffleManager\n-import org.apache.spark.sql.{Dataset, QueryTest, Row, SparkSession, Strategy}\n+import org.apache.spark.sql.{Dataset, IgnoreComet, QueryTest, Row, SparkSession, Strategy}\n import org.apache.spark.sql.catalyst.optimizer.{BuildLeft, BuildRight}\n import org.apache.spark.sql.catalyst.plans.logical.{Aggregate, LogicalPlan}\n+import org.apache.spark.sql.comet._\n+import org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.aggregate.BaseAggregateExec\n import org.apache.spark.sql.execution.columnar.{InMemoryTableScanExec, InMemoryTableScanLike}\n@@ -117,6 +119,7 @@ class AdaptiveQueryExecSuite\n   private def findTopLevelBroadcastHashJoin(plan: SparkPlan): Seq[BroadcastHashJoinExec] = {\n     collect(plan) {\n       case j: BroadcastHashJoinExec => j\n+      case j: CometBroadcastHashJoinExec => j.originalPlan.asInstanceOf[BroadcastHashJoinExec]\n     }\n   }\n \n@@ -129,36 +132,46 @@ class AdaptiveQueryExecSuite\n   private def findTopLevelSortMergeJoin(plan: SparkPlan): Seq[SortMergeJoinExec] = {\n     collect(plan) {\n       case j: SortMergeJoinExec => j\n+      case j: CometSortMergeJoinExec =>\n+        assert(j.originalPlan.isInstanceOf[SortMergeJoinExec])\n+        j.originalPlan.asInstanceOf[SortMergeJoinExec]\n     }\n   }\n \n   private def findTopLevelShuffledHashJoin(plan: SparkPlan): Seq[ShuffledHashJoinExec] = {\n     collect(plan) {\n       case j: ShuffledHashJoinExec => j\n+      case j: CometHashJoinExec => j.originalPlan.asInstanceOf[ShuffledHashJoinExec]\n     
}\n   }\n \n   private def findTopLevelBaseJoin(plan: SparkPlan): Seq[BaseJoinExec] = {\n     collect(plan) {\n       case j: BaseJoinExec => j\n+      case c: CometHashJoinExec => c.originalPlan.asInstanceOf[BaseJoinExec]\n+      case c: CometSortMergeJoinExec => c.originalPlan.asInstanceOf[BaseJoinExec]\n+      case c: CometBroadcastHashJoinExec => c.originalPlan.asInstanceOf[BaseJoinExec]\n     }\n   }\n \n   private def findTopLevelSort(plan: SparkPlan): Seq[SortExec] = {\n     collect(plan) {\n       case s: SortExec => s\n+      case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n     }\n   }\n \n   private def findTopLevelAggregate(plan: SparkPlan): Seq[BaseAggregateExec] = {\n     collect(plan) {\n       case agg: BaseAggregateExec => agg\n+      case agg: CometHashAggregateExec => agg.originalPlan.asInstanceOf[BaseAggregateExec]\n     }\n   }\n \n   private def findTopLevelLimit(plan: SparkPlan): Seq[CollectLimitExec] = {\n     collect(plan) {\n       case l: CollectLimitExec => l\n+      case l: CometCollectLimitExec => l.originalPlan.asInstanceOf[CollectLimitExec]\n     }\n   }\n \n@@ -202,6 +215,7 @@ class AdaptiveQueryExecSuite\n       val parts = rdd.partitions\n       assert(parts.forall(rdd.preferredLocations(_).nonEmpty))\n     }\n+\n     assert(numShuffles === (numLocalReads.length + numShufflesWithoutLocalRead))\n   }\n \n@@ -210,7 +224,7 @@ class AdaptiveQueryExecSuite\n     val plan = df.queryExecution.executedPlan\n     assert(plan.isInstanceOf[AdaptiveSparkPlanExec])\n     val shuffle = plan.asInstanceOf[AdaptiveSparkPlanExec].executedPlan.collect {\n-      case s: ShuffleExchangeExec => s\n+      case s: ShuffleExchangeLike => s\n     }\n     assert(shuffle.size == 1)\n     assert(shuffle(0).outputPartitioning.numPartitions == numPartition)\n@@ -226,7 +240,8 @@ class AdaptiveQueryExecSuite\n       assert(smj.size == 1)\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n-      checkNumLocalShuffleReads(adaptivePlan)\n+      // Comet shuffle changes shuffle metrics\n+      // checkNumLocalShuffleReads(adaptivePlan)\n     }\n   }\n \n@@ -253,7 +268,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Reuse the parallelism of coalesced shuffle in local shuffle read\") {\n+  test(\"Reuse the parallelism of coalesced shuffle in local shuffle read\",\n+      IgnoreComet(\"Comet shuffle changes shuffle partition size\")) {\n     withSQLConf(\n       SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\",\n@@ -285,7 +301,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Reuse the default parallelism in local shuffle read\") {\n+  test(\"Reuse the default parallelism in local shuffle read\",\n+      IgnoreComet(\"Comet shuffle changes shuffle partition size\")) {\n     withSQLConf(\n       SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\",\n@@ -299,7 +316,8 @@ class AdaptiveQueryExecSuite\n       val localReads = collect(adaptivePlan) {\n         case read: AQEShuffleReadExec if read.isLocalRead => read\n       }\n-      assert(localReads.length == 2)\n+      // Comet shuffle changes shuffle metrics\n+      assert(localReads.length == 1)\n       val localShuffleRDD0 = localReads(0).execute().asInstanceOf[ShuffledRowRDD]\n       val localShuffleRDD1 = localReads(1).execute().asInstanceOf[ShuffledRowRDD]\n       // the final parallelism is math.max(1, numReduces / numMappers): math.max(1, 
5/2) = 2\n@@ -324,7 +342,9 @@ class AdaptiveQueryExecSuite\n           .groupBy($\"a\").count()\n         checkAnswer(testDf, Seq())\n         val plan = testDf.queryExecution.executedPlan\n-        assert(find(plan)(_.isInstanceOf[SortMergeJoinExec]).isDefined)\n+        assert(find(plan) { case p =>\n+          p.isInstanceOf[SortMergeJoinExec] || p.isInstanceOf[CometSortMergeJoinExec]\n+        }.isDefined)\n         val coalescedReads = collect(plan) {\n           case r: AQEShuffleReadExec => r\n         }\n@@ -338,7 +358,9 @@ class AdaptiveQueryExecSuite\n           .groupBy($\"a\").count()\n         checkAnswer(testDf, Seq())\n         val plan = testDf.queryExecution.executedPlan\n-        assert(find(plan)(_.isInstanceOf[BroadcastHashJoinExec]).isDefined)\n+        assert(find(plan) { case p =>\n+          p.isInstanceOf[BroadcastHashJoinExec] || p.isInstanceOf[CometBroadcastHashJoinExec]\n+        }.isDefined)\n         val coalescedReads = collect(plan) {\n           case r: AQEShuffleReadExec => r\n         }\n@@ -348,7 +370,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Scalar subquery\") {\n+  test(\"Scalar subquery\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -363,7 +385,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Scalar subquery in later stages\") {\n+  test(\"Scalar subquery in later stages\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -379,7 +401,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"multiple joins\") {\n+  test(\"multiple joins\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -424,7 +446,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"multiple joins with aggregate\") {\n+  test(\"multiple joins with aggregate\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -469,7 +491,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"multiple joins with aggregate 2\") {\n+  test(\"multiple joins with aggregate 2\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"500\") {\n@@ -515,7 +537,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Exchange reuse\") {\n+  test(\"Exchange reuse\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -534,7 +556,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Exchange reuse with subqueries\") {\n+  test(\"Exchange reuse with subqueries\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -565,7 +587,9 @@ class AdaptiveQueryExecSuite\n       assert(smj.size 
== 1)\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n-      checkNumLocalShuffleReads(adaptivePlan)\n+      // Comet shuffle changes shuffle metrics,\n+      // so we can't check the number of local shuffle reads.\n+      // checkNumLocalShuffleReads(adaptivePlan)\n       // Even with local shuffle read, the query stage reuse can also work.\n       val ex = findReusedExchange(adaptivePlan)\n       assert(ex.nonEmpty)\n@@ -586,7 +610,9 @@ class AdaptiveQueryExecSuite\n       assert(smj.size == 1)\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n-      checkNumLocalShuffleReads(adaptivePlan)\n+      // Comet shuffle changes shuffle metrics,\n+      // so we can't check the number of local shuffle reads.\n+      // checkNumLocalShuffleReads(adaptivePlan)\n       // Even with local shuffle read, the query stage reuse can also work.\n       val ex = findReusedExchange(adaptivePlan)\n       assert(ex.isEmpty)\n@@ -595,7 +621,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Broadcast exchange reuse across subqueries\") {\n+  test(\"Broadcast exchange reuse across subqueries\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"20000000\",\n@@ -690,7 +717,8 @@ class AdaptiveQueryExecSuite\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n       // There is still a SMJ, and its two shuffles can't apply local read.\n-      checkNumLocalShuffleReads(adaptivePlan, 2)\n+      // Comet shuffle changes shuffle metrics\n+      // checkNumLocalShuffleReads(adaptivePlan, 2)\n     }\n   }\n \n@@ -812,7 +840,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-29544: adaptive skew join with different join types\") {\n+  test(\"SPARK-29544: adaptive skew join with different join types\",\n+      IgnoreComet(\"Comet shuffle has different partition metrics\")) {\n     Seq(\"SHUFFLE_MERGE\", \"SHUFFLE_HASH\").foreach { joinHint =>\n       def getJoinNode(plan: SparkPlan): Seq[ShuffledJoin] = if (joinHint == \"SHUFFLE_MERGE\") {\n         findTopLevelSortMergeJoin(plan)\n@@ -1030,7 +1059,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"metrics of the shuffle read\") {\n+  test(\"metrics of the shuffle read\",\n+      IgnoreComet(\"Comet shuffle changes the metrics\")) {\n     withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\") {\n       val (_, adaptivePlan) = runAdaptiveAndVerifyResult(\n         \"SELECT key FROM testData GROUP BY key\")\n@@ -1625,7 +1655,7 @@ class AdaptiveQueryExecSuite\n         val (_, adaptivePlan) = runAdaptiveAndVerifyResult(\n           \"SELECT id FROM v1 GROUP BY id DISTRIBUTE BY id\")\n         assert(collect(adaptivePlan) {\n-          case s: ShuffleExchangeExec => s\n+          case s: ShuffleExchangeLike => s\n         }.length == 1)\n       }\n     }\n@@ -1705,7 +1735,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-33551: Do not use AQE shuffle read for repartition\") {\n+  test(\"SPARK-33551: Do not use AQE shuffle read for repartition\",\n+      IgnoreComet(\"Comet shuffle changes partition size\")) {\n     def hasRepartitionShuffle(plan: SparkPlan): Boolean = {\n       find(plan) {\n         case s: ShuffleExchangeLike =>\n@@ -1890,6 +1921,9 @@ class AdaptiveQueryExecSuite\n     def checkNoCoalescePartitions(ds: Dataset[Row], 
origin: ShuffleOrigin): Unit = {\n       assert(collect(ds.queryExecution.executedPlan) {\n         case s: ShuffleExchangeExec if s.shuffleOrigin == origin && s.numPartitions == 2 => s\n+        case c: CometShuffleExchangeExec\n+          if c.originalPlan.shuffleOrigin == origin &&\n+            c.originalPlan.numPartitions == 2 => c\n       }.size == 1)\n       ds.collect()\n       val plan = ds.queryExecution.executedPlan\n@@ -1898,6 +1932,9 @@ class AdaptiveQueryExecSuite\n       }.isEmpty)\n       assert(collect(plan) {\n         case s: ShuffleExchangeExec if s.shuffleOrigin == origin && s.numPartitions == 2 => s\n+        case c: CometShuffleExchangeExec\n+          if c.originalPlan.shuffleOrigin == origin &&\n+            c.originalPlan.numPartitions == 2 => c\n       }.size == 1)\n       checkAnswer(ds, testData)\n     }\n@@ -2054,7 +2091,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-35264: Support AQE side shuffled hash join formula\") {\n+  test(\"SPARK-35264: Support AQE side shuffled hash join formula\",\n+      IgnoreComet(\"Comet shuffle changes the partition size\")) {\n     withTempView(\"t1\", \"t2\") {\n       def checkJoinStrategy(shouldShuffleHashJoin: Boolean): Unit = {\n         Seq(\"100\", \"100000\").foreach { size =>\n@@ -2140,7 +2178,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-35725: Support optimize skewed partitions in RebalancePartitions\") {\n+  test(\"SPARK-35725: Support optimize skewed partitions in RebalancePartitions\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withTempView(\"v\") {\n       withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n@@ -2239,7 +2278,7 @@ class AdaptiveQueryExecSuite\n               runAdaptiveAndVerifyResult(s\"SELECT $repartition key1 FROM skewData1 \" +\n                 s\"JOIN skewData2 ON key1 = key2 GROUP BY key1\")\n             val shuffles1 = collect(adaptive1) {\n-              case s: ShuffleExchangeExec => s\n+              case s: ShuffleExchangeLike => s\n             }\n             assert(shuffles1.size == 3)\n             // shuffles1.head is the top-level shuffle under the Aggregate operator\n@@ -2252,7 +2291,7 @@ class AdaptiveQueryExecSuite\n               runAdaptiveAndVerifyResult(s\"SELECT $repartition key1 FROM skewData1 \" +\n                 s\"JOIN skewData2 ON key1 = key2\")\n             val shuffles2 = collect(adaptive2) {\n-              case s: ShuffleExchangeExec => s\n+              case s: ShuffleExchangeLike => s\n             }\n             if (hasRequiredDistribution) {\n               assert(shuffles2.size == 3)\n@@ -2286,7 +2325,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-35794: Allow custom plugin for cost evaluator\") {\n+  test(\"SPARK-35794: Allow custom plugin for cost evaluator\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     CostEvaluator.instantiate(\n       classOf[SimpleShuffleSortCostEvaluator].getCanonicalName, spark.sparkContext.getConf)\n     intercept[IllegalArgumentException] {\n@@ -2452,6 +2493,7 @@ class AdaptiveQueryExecSuite\n           val (_, adaptive) = runAdaptiveAndVerifyResult(query)\n           assert(adaptive.collect {\n             case sort: SortExec => sort\n+            case sort: CometSortExec => sort\n           }.size == 1)\n           val read = collect(adaptive) {\n             case read: AQEShuffleReadExec => read\n@@ -2469,7 +2511,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  
test(\"SPARK-37357: Add small partition factor for rebalance partitions\") {\n+  test(\"SPARK-37357: Add small partition factor for rebalance partitions\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withTempView(\"v\") {\n       withSQLConf(\n         SQLConf.ADAPTIVE_OPTIMIZE_SKEWS_IN_REBALANCE_PARTITIONS_ENABLED.key -> \"true\",\n@@ -2581,7 +2624,7 @@ class AdaptiveQueryExecSuite\n           runAdaptiveAndVerifyResult(\"SELECT key1 FROM skewData1 JOIN skewData2 ON key1 = key2 \" +\n             \"JOIN skewData3 ON value2 = value3\")\n         val shuffles1 = collect(adaptive1) {\n-          case s: ShuffleExchangeExec => s\n+          case s: ShuffleExchangeLike => s\n         }\n         assert(shuffles1.size == 4)\n         val smj1 = findTopLevelSortMergeJoin(adaptive1)\n@@ -2592,7 +2635,7 @@ class AdaptiveQueryExecSuite\n           runAdaptiveAndVerifyResult(\"SELECT key1 FROM skewData1 JOIN skewData2 ON key1 = key2 \" +\n             \"JOIN skewData3 ON value1 = value3\")\n         val shuffles2 = collect(adaptive2) {\n-          case s: ShuffleExchangeExec => s\n+          case s: ShuffleExchangeLike => s\n         }\n         assert(shuffles2.size == 4)\n         val smj2 = findTopLevelSortMergeJoin(adaptive2)\n@@ -2850,6 +2893,7 @@ class AdaptiveQueryExecSuite\n         }.size == (if (firstAccess) 1 else 0))\n         assert(collect(initialExecutedPlan) {\n           case s: SortExec => s\n+          case s: CometSortExec => s\n         }.size == (if (firstAccess) 2 else 0))\n         assert(collect(initialExecutedPlan) {\n           case i: InMemoryTableScanLike => i\n@@ -2980,7 +3024,9 @@ class AdaptiveQueryExecSuite\n \n       val plan = df.queryExecution.executedPlan.asInstanceOf[AdaptiveSparkPlanExec]\n       assert(plan.inputPlan.isInstanceOf[TakeOrderedAndProjectExec])\n-      assert(plan.finalPhysicalPlan.isInstanceOf[WindowExec])\n+      assert(\n+        plan.finalPhysicalPlan.isInstanceOf[WindowExec] ||\n+        plan.finalPhysicalPlan.find(_.isInstanceOf[CometWindowExec]).nonEmpty)\n       plan.inputPlan.output.zip(plan.finalPhysicalPlan.output).foreach { case (o1, o2) =>\n         assert(o1.semanticEquals(o2), \"Different output column order after AQE optimization\")\n       }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala\nindex fd52d038ca6..154c800be67 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala\n@@ -28,6 +28,7 @@ import org.apache.spark.sql.catalyst.expressions.Concat\n import org.apache.spark.sql.catalyst.parser.CatalystSqlParser\n import org.apache.spark.sql.catalyst.plans.logical.Expand\n import org.apache.spark.sql.catalyst.types.DataTypeUtils\n+import org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.execution.FileSourceScanExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.functions._\n@@ -884,6 +885,8 @@ abstract class SchemaPruningSuite\n     val fileSourceScanSchemata =\n       collect(df.queryExecution.executedPlan) {\n         case scan: FileSourceScanExec => scan.requiredSchema\n+        case scan: CometScanExec => scan.requiredSchema\n+        case scan: CometNativeScanExec => scan.requiredSchema\n       }\n     
assert(fileSourceScanSchemata.size === expectedSchemaCatalogStrings.size,\n       s\"Found ${fileSourceScanSchemata.size} file sources in dataframe, \" +\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala\nindex 5fd27410dcb..468abb1543a 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala\n@@ -20,6 +20,7 @@ package org.apache.spark.sql.execution.datasources\n import org.apache.spark.sql.{QueryTest, Row}\n import org.apache.spark.sql.catalyst.expressions.{Ascending, AttributeReference, NullsFirst, SortOrder}\n import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Sort}\n+import org.apache.spark.sql.comet.CometSortExec\n import org.apache.spark.sql.execution.{QueryExecution, SortExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec\n import org.apache.spark.sql.internal.SQLConf\n@@ -243,6 +244,7 @@ class V1WriteCommandSuite extends QueryTest with SharedSparkSession with V1Write\n           // assert the outer most sort in the executed plan\n           assert(plan.collectFirst {\n             case s: SortExec => s\n+            case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n           }.exists {\n             case SortExec(Seq(\n               SortOrder(AttributeReference(\"key\", IntegerType, _, _), Ascending, NullsFirst, _),\n@@ -290,6 +292,7 @@ class V1WriteCommandSuite extends QueryTest with SharedSparkSession with V1Write\n         // assert the outer most sort in the executed plan\n         assert(plan.collectFirst {\n           case s: SortExec => s\n+          case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n         }.exists {\n           case SortExec(Seq(\n             SortOrder(AttributeReference(\"value\", StringType, _, _), Ascending, NullsFirst, _),\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala\nindex 0b6fdef4f74..5b18c55da4b 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala\n@@ -28,7 +28,7 @@ import org.apache.hadoop.fs.{FileStatus, FileSystem, GlobFilter, Path}\n import org.mockito.Mockito.{mock, when}\n \n import org.apache.spark.SparkException\n-import org.apache.spark.sql.{DataFrame, QueryTest, Row}\n+import org.apache.spark.sql.{DataFrame, IgnoreCometSuite, QueryTest, Row}\n import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder\n import org.apache.spark.sql.execution.datasources.PartitionedFile\n import org.apache.spark.sql.functions.col\n@@ -38,7 +38,9 @@ import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.sql.types._\n import org.apache.spark.util.Utils\n \n-class BinaryFileFormatSuite extends QueryTest with SharedSparkSession {\n+// For some reason this suite is flaky w/ or w/o Comet when running in a GitHub workflow.\n+// Since it isn't related to Comet, we disable it for now.\n+class BinaryFileFormatSuite extends QueryTest with SharedSparkSession with IgnoreCometSuite {\n   import BinaryFileFormat._\n \n   
private var testDir: String = _\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala\nindex 07e2849ce6f..3e73645b638 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala\n@@ -28,7 +28,7 @@ import org.apache.parquet.hadoop.ParquetOutputFormat\n \n import org.apache.spark.TestUtils\n import org.apache.spark.memory.MemoryMode\n-import org.apache.spark.sql.Row\n+import org.apache.spark.sql.{IgnoreComet, Row}\n import org.apache.spark.sql.catalyst.util.DateTimeUtils\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n@@ -201,7 +201,8 @@ class ParquetEncodingSuite extends ParquetCompatibilityTest with SharedSparkSess\n     }\n   }\n \n-  test(\"parquet v2 pages - rle encoding for boolean value columns\") {\n+  test(\"parquet v2 pages - rle encoding for boolean value columns\",\n+      IgnoreComet(\"Comet doesn't support RLE encoding yet\")) {\n     val extraOptions = Map[String, String](\n       ParquetOutputFormat.WRITER_VERSION -> ParquetProperties.WriterVersion.PARQUET_2_0.toString\n     )\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala\nindex 8e88049f51e..20d7ef7b1bc 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala\n@@ -1095,7 +1095,11 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n           // When a filter is pushed to Parquet, Parquet can apply it to every row.\n           // So, we can check the number of rows returned from the Parquet\n           // to make sure our filter pushdown work.\n-          assert(stripSparkFilter(df).count == 1)\n+          // Similar to Spark's vectorized reader, Comet doesn't do row-level filtering but relies\n+          // on Spark to apply the data filters after columnar batches are returned\n+          if (!isCometEnabled) {\n+            assert(stripSparkFilter(df).count == 1)\n+          }\n         }\n       }\n     }\n@@ -1498,7 +1502,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"Filters should be pushed down for vectorized Parquet reader at row group level\") {\n+  test(\"Filters should be pushed down for vectorized Parquet reader at row group level\",\n+    IgnoreCometNativeScan(\"Native scans do not support the tested accumulator\")) {\n     import testImplicits._\n \n     withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"true\",\n@@ -1580,7 +1585,11 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n           // than the total length but should not be a single record.\n           // Note that, if record level filtering is enabled, it should be a single record.\n           // If no filter is pushed down to Parquet, it should be the total length of data.\n-          assert(actual > 1 && actual < data.length)\n+          // Only verify the row count when Comet is disabled or in scan-only mode, since 
with\n+          // native execution `stripSparkFilter` can't remove the native filter\n+          if (!isCometEnabled || isCometScanOnly) {\n+            assert(actual > 1 && actual < data.length)\n+          }\n         }\n       }\n     }\n@@ -1607,7 +1616,11 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n         // than the total length but should not be a single record.\n         // Note that, if record level filtering is enabled, it should be a single record.\n         // If no filter is pushed down to Parquet, it should be the total length of data.\n-        assert(actual > 1 && actual < data.length)\n+        // Only verify the row count when Comet is disabled or in scan-only mode, since with\n+        // native execution `stripSparkFilter` can't remove the native filter\n+        if (!isCometEnabled || isCometScanOnly) {\n+          assert(actual > 1 && actual < data.length)\n+        }\n       }\n     }\n   }\n@@ -1699,7 +1712,7 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n       (attr, value) => sources.StringContains(attr, value))\n   }\n \n-  test(\"filter pushdown - StringPredicate\") {\n+  test(\"filter pushdown - StringPredicate\", IgnoreCometNativeScan(\"cannot be pushed down\")) {\n     import testImplicits._\n     // keep() should take effect on StartsWith/EndsWith/Contains\n     Seq(\n@@ -1743,7 +1756,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"SPARK-17091: Convert IN predicate to Parquet filter push-down\") {\n+  test(\"SPARK-17091: Convert IN predicate to Parquet filter push-down\",\n+    IgnoreCometNativeScan(\"Comet has different push-down behavior\")) {\n     val schema = StructType(Seq(\n       StructField(\"a\", IntegerType, nullable = false)\n     ))\n@@ -1949,11 +1963,24 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n            \"\"\".stripMargin)\n \n         withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"false\") {\n-          val e = intercept[SparkException] {\n+          // Spark native readers wrap the error in SparkException(FAILED_READ_FILE).\n+          // Comet native readers throw SparkRuntimeException directly.\n+          val msg = try {\n             sql(s\"select a from $tableName where b > 0\").collect()\n+            fail(\"Expected an exception\")\n+          } catch {\n+            case e: SparkException =>\n+              assert(e.getCause.isInstanceOf[RuntimeException])\n+              e.getCause.getMessage\n+            case e: RuntimeException =>\n+              e.getMessage\n           }\n-          assert(e.getCause.isInstanceOf[RuntimeException] && e.getCause.getMessage.contains(\n-            \"\"\"Found duplicate field(s) \"B\": [B, b] in case-insensitive mode\"\"\"))\n+          assert(\n+            msg.contains(\n+              \"\"\"Found duplicate field(s) \"B\": [B, b] in case-insensitive mode\"\"\") ||\n+              msg.contains(\n+                \"\"\"Found duplicate field(s) \"b\": [B, b] in case-insensitive mode\"\"\"),\n+            s\"Unexpected error message: $msg\")\n         }\n \n         withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"true\") {\n@@ -1984,7 +2011,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"Support Parquet column index\") {\n+  test(\"Support Parquet column index\",\n+      IgnoreComet(\"Comet doesn't support Parquet column index yet\")) {\n     // block 1:\n    //                      null 
count  min                                       max\n     // page-0                         0  0                                         99\n@@ -2044,7 +2072,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"SPARK-34562: Bloom filter push down\") {\n+  test(\"SPARK-34562: Bloom filter push down\",\n+    IgnoreCometNativeScan(\"Native scans do not support the tested accumulator\")) {\n     withTempPath { dir =>\n       val path = dir.getCanonicalPath\n       spark.range(100).selectExpr(\"id * 2 AS id\")\n@@ -2276,7 +2305,11 @@ class ParquetV1FilterSuite extends ParquetFilterSuite {\n           assert(pushedParquetFilters.exists(_.getClass === filterClass),\n             s\"${pushedParquetFilters.map(_.getClass).toList} did not contain ${filterClass}.\")\n \n-          checker(stripSparkFilter(query), expected)\n+          // Similar to Spark's vectorized reader, Comet doesn't do row-level filtering but relies\n+          // on Spark to apply the data filters after columnar batches are returned\n+          if (!isCometEnabled) {\n+            checker(stripSparkFilter(query), expected)\n+          }\n         } else {\n           assert(selectedFilters.isEmpty, \"There is filter pushed down\")\n         }\n@@ -2336,7 +2369,11 @@ class ParquetV2FilterSuite extends ParquetFilterSuite {\n           assert(pushedParquetFilters.exists(_.getClass === filterClass),\n             s\"${pushedParquetFilters.map(_.getClass).toList} did not contain ${filterClass}.\")\n \n-          checker(stripSparkFilter(query), expected)\n+          // Similar to Spark's vectorized reader, Comet doesn't do row-level filtering but relies\n+          // on Spark to apply the data filters after columnar batches are returned\n+          if (!isCometEnabled) {\n+            checker(stripSparkFilter(query), expected)\n+          }\n \n         case _ =>\n           throw new AnalysisException(\"Can not match ParquetTable in the query.\")\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala\nindex 8ed9ef1630e..a865928c1b2 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala\n@@ -1064,7 +1064,8 @@ class ParquetIOSuite extends QueryTest with ParquetTest with SharedSparkSession\n     }\n   }\n \n-  test(\"SPARK-35640: read binary as timestamp should throw schema incompatible error\") {\n+  test(\"SPARK-35640: read binary as timestamp should throw schema incompatible error\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     val data = (1 to 4).map(i => Tuple1(i.toString))\n     val readSchema = StructType(Seq(StructField(\"_1\", DataTypes.TimestampType)))\n \n@@ -1075,7 +1076,8 @@ class ParquetIOSuite extends QueryTest with ParquetTest with SharedSparkSession\n     }\n   }\n \n-  test(\"SPARK-35640: int as long should throw schema incompatible error\") {\n+  test(\"SPARK-35640: int as long should throw schema incompatible error\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     val data = (1 to 4).map(i => Tuple1(i))\n     val readSchema = StructType(Seq(StructField(\"_1\", DataTypes.LongType)))\n \n@@ -1345,7 +1347,8 @@ class ParquetIOSuite extends 
QueryTest with ParquetTest with SharedSparkSession\n     }\n   }\n \n-  test(\"SPARK-40128 read DELTA_LENGTH_BYTE_ARRAY encoded strings\") {\n+  test(\"SPARK-40128 read DELTA_LENGTH_BYTE_ARRAY encoded strings\",\n+      IgnoreComet(\"Comet doesn't support DELTA encoding yet\")) {\n     withAllParquetReaders {\n       checkAnswer(\n         // \"fruit\" column in this file is encoded using DELTA_LENGTH_BYTE_ARRAY.\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala\nindex f6472ba3d9d..5ea2d938664 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala\n@@ -185,7 +185,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     }\n   }\n \n-  test(\"SPARK-36182: can't read TimestampLTZ as TimestampNTZ\") {\n+  test(\"SPARK-36182: can't read TimestampLTZ as TimestampNTZ\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     val data = (1 to 1000).map { i =>\n       val ts = new java.sql.Timestamp(i)\n       Row(ts)\n@@ -998,7 +999,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     }\n   }\n \n-  test(\"SPARK-26677: negated null-safe equality comparison should not filter matched row groups\") {\n+  test(\"SPARK-26677: negated null-safe equality comparison should not filter matched row groups\",\n+    IgnoreCometNativeScan(\"Native scans had the filter pushed into DF operator, cannot strip\")) {\n     withAllParquetReaders {\n       withTempPath { path =>\n         // Repeated values for dictionary encoding.\n@@ -1051,7 +1053,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     testMigration(fromTsType = \"TIMESTAMP_MICROS\", toTsType = \"INT96\")\n   }\n \n-  test(\"SPARK-34212 Parquet should read decimals correctly\") {\n+  test(\"SPARK-34212 Parquet should read decimals correctly\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     def readParquet(schema: String, path: File): DataFrame = {\n       spark.read.schema(schema).parquet(path.toString)\n     }\n@@ -1067,7 +1070,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n         checkAnswer(readParquet(schema, path), df)\n       }\n \n-      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\") {\n+      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n+          \"spark.comet.enabled\" -> \"false\") {\n         val schema1 = \"a DECIMAL(3, 2), b DECIMAL(18, 3), c DECIMAL(37, 3)\"\n         checkAnswer(readParquet(schema1, path), df)\n         val schema2 = \"a DECIMAL(3, 0), b DECIMAL(18, 1), c DECIMAL(37, 1)\"\n@@ -1089,7 +1093,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n       val df = sql(s\"SELECT 1 a, 123456 b, ${Int.MaxValue.toLong * 10} c, CAST('1.2' AS BINARY) d\")\n       df.write.parquet(path.toString)\n \n-      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\") {\n+      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n+          \"spark.comet.enabled\" -> \"false\") {\n         checkAnswer(readParquet(\"a DECIMAL(3, 2)\", path), 
sql(\"SELECT 1.00\"))\n         checkAnswer(readParquet(\"b DECIMAL(3, 2)\", path), Row(null))\n         checkAnswer(readParquet(\"b DECIMAL(11, 1)\", path), sql(\"SELECT 123456.0\"))\n@@ -1133,7 +1138,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     }\n   }\n \n-  test(\"row group skipping doesn't overflow when reading into larger type\") {\n+  test(\"row group skipping doesn't overflow when reading into larger type\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     withTempPath { path =>\n       Seq(0).toDF(\"a\").write.parquet(path.toString)\n       // The vectorized and non-vectorized readers will produce different exceptions, we don't need\n@@ -1148,7 +1154,7 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n             .where(s\"a < ${Long.MaxValue}\")\n             .collect()\n         }\n-        assert(exception.getCause.getCause.isInstanceOf[SchemaColumnConvertNotSupportedException])\n+        assert(exception.getMessage.contains(\"Column: [a], Expected: bigint, Found: INT32\"))\n       }\n     }\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala\nindex 4f906411345..6cc69f7e915 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala\n@@ -21,7 +21,7 @@ import java.nio.file.{Files, Paths, StandardCopyOption}\n import java.sql.{Date, Timestamp}\n \n import org.apache.spark.{SPARK_VERSION_SHORT, SparkConf, SparkException, SparkUpgradeException}\n-import org.apache.spark.sql.{QueryTest, Row, SPARK_LEGACY_DATETIME_METADATA_KEY, SPARK_LEGACY_INT96_METADATA_KEY, SPARK_TIMEZONE_METADATA_KEY}\n+import org.apache.spark.sql.{IgnoreCometSuite, QueryTest, Row, SPARK_LEGACY_DATETIME_METADATA_KEY, SPARK_LEGACY_INT96_METADATA_KEY, SPARK_TIMEZONE_METADATA_KEY}\n import org.apache.spark.sql.catalyst.util.DateTimeTestUtils\n import org.apache.spark.sql.internal.{LegacyBehaviorPolicy, SQLConf}\n import org.apache.spark.sql.internal.LegacyBehaviorPolicy.{CORRECTED, EXCEPTION, LEGACY}\n@@ -30,9 +30,11 @@ import org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType.{INT96,\n import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.tags.SlowSQLTest\n \n+// Comet is disabled for this suite because it doesn't support datetime rebase mode\n abstract class ParquetRebaseDatetimeSuite\n   extends QueryTest\n   with ParquetTest\n+  with IgnoreCometSuite\n   with SharedSparkSession {\n \n   import testImplicits._\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala\nindex 27c2a2148fd..808baf9e778 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala\n@@ -26,6 +26,7 @@ import org.apache.parquet.hadoop.{ParquetFileReader, ParquetOutputFormat}\n import org.apache.parquet.hadoop.ParquetWriter.DEFAULT_BLOCK_SIZE\n \n import 
org.apache.spark.sql.QueryTest\n+import org.apache.spark.sql.comet.{CometBatchScanExec, CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.execution.FileSourceScanExec\n import org.apache.spark.sql.execution.datasources.FileFormat\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n@@ -243,6 +244,17 @@ class ParquetRowIndexSuite extends QueryTest with SharedSparkSession {\n             case f: FileSourceScanExec =>\n               numPartitions += f.inputRDD.partitions.length\n               numOutputRows += f.metrics(\"numOutputRows\").value\n+            case b: CometScanExec =>\n+              numPartitions += b.inputRDD.partitions.length\n+              numOutputRows += b.metrics(\"numOutputRows\").value\n+            case b: CometBatchScanExec =>\n+              numPartitions += b.inputRDD.partitions.length\n+              numOutputRows += b.metrics(\"numOutputRows\").value\n+            case b: CometNativeScanExec =>\n+              numPartitions +=\n+                b.originalPlan.inputRDD.partitions.length\n+              numOutputRows +=\n+                b.metrics(\"numOutputRows\").value\n             case _ =>\n           }\n           assert(numPartitions > 0)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala\nindex 5c0b7def039..151184bc98c 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala\n@@ -20,6 +20,7 @@ package org.apache.spark.sql.execution.datasources.parquet\n import org.apache.spark.SparkConf\n import org.apache.spark.sql.DataFrame\n import org.apache.spark.sql.catalyst.parser.CatalystSqlParser\n+import org.apache.spark.sql.comet.CometBatchScanExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.datasources.SchemaPruningSuite\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n@@ -56,6 +57,7 @@ class ParquetV2SchemaPruningSuite extends ParquetSchemaPruningSuite {\n     val fileSourceScanSchemata =\n       collect(df.queryExecution.executedPlan) {\n         case scan: BatchScanExec => scan.scan.asInstanceOf[ParquetScan].readDataSchema\n+        case scan: CometBatchScanExec => scan.scan.asInstanceOf[ParquetScan].readDataSchema\n       }\n     assert(fileSourceScanSchemata.size === expectedSchemaCatalogStrings.size,\n       s\"Found ${fileSourceScanSchemata.size} file sources in dataframe, \" +\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala\nindex 3f47c5e506f..f1ce3194279 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala\n@@ -27,6 +27,7 @@ import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName\n import org.apache.parquet.schema.Type._\n \n import org.apache.spark.SparkException\n+import org.apache.spark.sql.{IgnoreComet, IgnoreCometNativeDataFusion}\n import org.apache.spark.sql.catalyst.expressions.Cast.toSQLType\n import 
org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException\n import org.apache.spark.sql.functions.desc\n@@ -1036,7 +1037,8 @@ class ParquetSchemaSuite extends ParquetSchemaTest {\n     e\n   }\n \n-  test(\"schema mismatch failure error message for parquet reader\") {\n+  test(\"schema mismatch failure error message for parquet reader\",\n+      IgnoreComet(\"Comet doesn't work with vectorizedReaderEnabled = false\")) {\n     withTempPath { dir =>\n       val e = testSchemaMismatch(dir.getCanonicalPath, vectorizedReaderEnabled = false)\n       val expectedMessage = \"Encountered error while reading file\"\n@@ -1046,7 +1048,8 @@ class ParquetSchemaSuite extends ParquetSchemaTest {\n     }\n   }\n \n-  test(\"schema mismatch failure error message for parquet vectorized reader\") {\n+  test(\"schema mismatch failure error message for parquet vectorized reader\",\n+      IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     withTempPath { dir =>\n       val e = testSchemaMismatch(dir.getCanonicalPath, vectorizedReaderEnabled = true)\n       assert(e.getCause.isInstanceOf[SparkException])\n@@ -1087,7 +1090,8 @@ class ParquetSchemaSuite extends ParquetSchemaTest {\n     }\n   }\n \n-  test(\"SPARK-45604: schema mismatch failure error on timestamp_ntz to array<timestamp_ntz>\") {\n+  test(\"SPARK-45604: schema mismatch failure error on timestamp_ntz to array<timestamp_ntz>\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     import testImplicits._\n \n     withTempPath { dir =>\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala\nindex b8f3ea3c6f3..bbd44221288 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala\n@@ -20,6 +20,7 @@ package org.apache.spark.sql.execution.debug\n import java.io.ByteArrayOutputStream\n \n import org.apache.spark.rdd.RDD\n+import org.apache.spark.sql.IgnoreComet\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.Attribute\n import org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext\n@@ -125,7 +126,8 @@ class DebuggingSuite extends DebuggingSuiteBase with DisableAdaptiveExecutionSui\n          | id LongType: {}\"\"\".stripMargin))\n   }\n \n-  test(\"SPARK-28537: DebugExec cannot debug columnar related queries\") {\n+  test(\"SPARK-28537: DebugExec cannot debug columnar related queries\",\n+      IgnoreComet(\"Comet does not use FileScan\")) {\n     withTempPath { workDir =>\n       val workDirPath = workDir.getAbsolutePath\n       val input = spark.range(5).toDF(\"id\")\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala\nindex 5cdbdc27b32..307fba16578 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala\n@@ -46,8 +46,10 @@ import org.apache.spark.sql.util.QueryExecutionListener\n import org.apache.spark.util.{AccumulatorContext, JsonProtocol}\n \n // Disable AQE because metric info is different with AQE on/off\n+// This test suite runs tests against the metrics of 
physical operators.\n+// Disabling it for Comet because the metrics are different with Comet enabled.\n class SQLMetricsSuite extends SharedSparkSession with SQLMetricsTestUtils\n-  with DisableAdaptiveExecutionSuite {\n+  with DisableAdaptiveExecutionSuite with IgnoreCometSuite {\n   import testImplicits._\n \n   /**\n@@ -765,7 +767,8 @@ class SQLMetricsSuite extends SharedSparkSession with SQLMetricsTestUtils\n     }\n   }\n \n-  test(\"SPARK-26327: FileSourceScanExec metrics\") {\n+  test(\"SPARK-26327: FileSourceScanExec metrics\",\n+      IgnoreComet(\"Spark uses row-based Parquet reader while Comet is vectorized\")) {\n     withTable(\"testDataForScan\") {\n       spark.range(10).selectExpr(\"id\", \"id % 3 as p\")\n         .write.partitionBy(\"p\").saveAsTable(\"testDataForScan\")\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala\nindex 0ab8691801d..b18a5bea944 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala\n@@ -18,6 +18,7 @@\n package org.apache.spark.sql.execution.python\n \n import org.apache.spark.sql.catalyst.plans.logical.{ArrowEvalPython, BatchEvalPython, Limit, LocalLimit}\n+import org.apache.spark.sql.comet._\n import org.apache.spark.sql.execution.{FileSourceScanExec, SparkPlan, SparkPlanTest}\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.execution.datasources.v2.parquet.ParquetScan\n@@ -108,6 +109,8 @@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: FileSourceScanExec => scan\n+            case scan: CometScanExec => scan\n+            case scan: CometNativeScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           assert(scanNodes.head.output.map(_.name) == Seq(\"a\"))\n@@ -120,11 +123,18 @@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: FileSourceScanExec => scan\n+            case scan: CometScanExec => scan\n+            case scan: CometNativeScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           // $\"a\" is not null and $\"a\" > 1\n-          assert(scanNodes.head.dataFilters.length == 2)\n-          assert(scanNodes.head.dataFilters.flatMap(_.references.map(_.name)).distinct == Seq(\"a\"))\n+          val dataFilters = scanNodes.head match {\n+            case scan: FileSourceScanExec => scan.dataFilters\n+            case scan: CometScanExec => scan.dataFilters\n+            case scan: CometNativeScanExec => scan.dataFilters\n+          }\n+          assert(dataFilters.length == 2)\n+          assert(dataFilters.flatMap(_.references.map(_.name)).distinct == Seq(\"a\"))\n         }\n       }\n     }\n@@ -145,6 +155,7 @@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: BatchScanExec => scan\n+            case scan: CometBatchScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           assert(scanNodes.head.output.map(_.name) == Seq(\"a\"))\n@@ -157,6 +168,7 
@@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: BatchScanExec => scan\n+            case scan: CometBatchScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           // $\"a\" is not null and $\"a\" > 1\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala\nindex d083cac48ff..3c11bcde807 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala\n@@ -37,8 +37,10 @@ import org.apache.spark.sql.streaming.{StreamingQuery, StreamingQueryException,\n import org.apache.spark.sql.streaming.util.StreamManualClock\n import org.apache.spark.util.Utils\n \n+// For some reason this suite is flaky w/ or w/o Comet when running in Github workflow.\n+// Since it isn't related to Comet, we disable it for now.\n class AsyncProgressTrackingMicroBatchExecutionSuite\n-  extends StreamTest with BeforeAndAfter with Matchers {\n+  extends StreamTest with BeforeAndAfter with Matchers with IgnoreCometSuite {\n \n   import testImplicits._\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala\nindex 746f289c393..e5dc13b87d5 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala\n@@ -19,16 +19,19 @@ package org.apache.spark.sql.sources\n \n import scala.util.Random\n \n+import org.apache.comet.CometConf\n+\n import org.apache.spark.sql._\n import org.apache.spark.sql.catalyst.catalog.BucketSpec\n import org.apache.spark.sql.catalyst.expressions\n import org.apache.spark.sql.catalyst.expressions._\n import org.apache.spark.sql.catalyst.plans.physical.HashPartitioning\n import org.apache.spark.sql.catalyst.types.DataTypeUtils\n-import org.apache.spark.sql.execution.{FileSourceScanExec, SortExec, SparkPlan}\n+import org.apache.spark.sql.comet._\n+import org.apache.spark.sql.execution.{ColumnarToRowExec, FileSourceScanExec, SortExec, SparkPlan}\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanExec, AdaptiveSparkPlanHelper}\n import org.apache.spark.sql.execution.datasources.BucketingUtils\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.{ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.joins.SortMergeJoinExec\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -102,12 +105,22 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n     }\n   }\n \n-  private def getFileScan(plan: SparkPlan): FileSourceScanExec = {\n-    val fileScan = collect(plan) { case f: FileSourceScanExec => f }\n+  private def getFileScan(plan: SparkPlan): SparkPlan = {\n+    val fileScan = collect(plan) {\n+      case f: FileSourceScanExec => f\n+      case f: CometScanExec => f\n+      case f: CometNativeScanExec => f\n+    }\n     assert(fileScan.nonEmpty, plan)\n     fileScan.head\n   
}\n \n+  private def getBucketScan(plan: SparkPlan): Boolean = getFileScan(plan) match {\n+    case fs: FileSourceScanExec => fs.bucketedScan\n+    case bs: CometScanExec => bs.bucketedScan\n+    case ns: CometNativeScanExec => ns.bucketedScan\n+  }\n+\n   // To verify if the bucket pruning works, this function checks two conditions:\n   //   1) Check if the pruned buckets (before filtering) are empty.\n   //   2) Verify the final result is the same as the expected one\n@@ -156,7 +169,8 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n           val planWithoutBucketedScan = bucketedDataFrame.filter(filterCondition)\n             .queryExecution.executedPlan\n           val fileScan = getFileScan(planWithoutBucketedScan)\n-          assert(!fileScan.bucketedScan, s\"except no bucketed scan but found\\n$fileScan\")\n+          val bucketedScan = getBucketScan(planWithoutBucketedScan)\n+          assert(!bucketedScan, s\"expected no bucketed scan but found\\n$fileScan\")\n \n           val bucketColumnType = bucketedDataFrame.schema.apply(bucketColumnIndex).dataType\n           val rowsWithInvalidBuckets = fileScan.execute().filter(row => {\n@@ -452,28 +466,54 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n         val joinOperator = if (joined.sqlContext.conf.adaptiveExecutionEnabled) {\n           val executedPlan =\n             joined.queryExecution.executedPlan.asInstanceOf[AdaptiveSparkPlanExec].executedPlan\n-          assert(executedPlan.isInstanceOf[SortMergeJoinExec])\n-          executedPlan.asInstanceOf[SortMergeJoinExec]\n+          executedPlan match {\n+            case s: SortMergeJoinExec => s\n+            case b: CometSortMergeJoinExec =>\n+              b.originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+          }\n         } else {\n           val executedPlan = joined.queryExecution.executedPlan\n-          assert(executedPlan.isInstanceOf[SortMergeJoinExec])\n-          executedPlan.asInstanceOf[SortMergeJoinExec]\n+          executedPlan match {\n+            case s: SortMergeJoinExec => s\n+            case ColumnarToRowExec(child) =>\n+              child.asInstanceOf[CometSortMergeJoinExec].originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case CometColumnarToRowExec(child) =>\n+              child.asInstanceOf[CometSortMergeJoinExec].originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case CometNativeColumnarToRowExec(child) =>\n+              child.asInstanceOf[CometSortMergeJoinExec].originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+          }\n         }\n \n         // check existence of shuffle\n         assert(\n-          joinOperator.left.exists(_.isInstanceOf[ShuffleExchangeExec]) == shuffleLeft,\n+          joinOperator.left.exists(op => op.isInstanceOf[ShuffleExchangeLike]) == shuffleLeft,\n           s\"expected shuffle in 
plan to be $shuffleLeft but found\\n${joinOperator.left}\")\n         assert(\n-          joinOperator.right.exists(_.isInstanceOf[ShuffleExchangeExec]) == shuffleRight,\n+          joinOperator.right.exists(op => op.isInstanceOf[ShuffleExchangeLike]) == shuffleRight,\n           s\"expected shuffle in plan to be $shuffleRight but found\\n${joinOperator.right}\")\n \n         // check existence of sort\n         assert(\n-          joinOperator.left.exists(_.isInstanceOf[SortExec]) == sortLeft,\n+          joinOperator.left.exists(op => op.isInstanceOf[SortExec] || op.isInstanceOf[CometExec] &&\n+            op.asInstanceOf[CometExec].originalPlan.isInstanceOf[SortExec]) == sortLeft,\n           s\"expected sort in the left child to be $sortLeft but found\\n${joinOperator.left}\")\n         assert(\n-          joinOperator.right.exists(_.isInstanceOf[SortExec]) == sortRight,\n+          joinOperator.right.exists(op => op.isInstanceOf[SortExec] || op.isInstanceOf[CometExec] &&\n+            op.asInstanceOf[CometExec].originalPlan.isInstanceOf[SortExec]) == sortRight,\n           s\"expected sort in the right child to be $sortRight but found\\n${joinOperator.right}\")\n \n         // check the output partitioning\n@@ -836,11 +876,11 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n       df1.write.format(\"parquet\").bucketBy(8, \"i\").saveAsTable(\"bucketed_table\")\n \n       val scanDF = spark.table(\"bucketed_table\").select(\"j\")\n-      assert(!getFileScan(scanDF.queryExecution.executedPlan).bucketedScan)\n+      assert(!getBucketScan(scanDF.queryExecution.executedPlan))\n       checkAnswer(scanDF, df1.select(\"j\"))\n \n       val aggDF = spark.table(\"bucketed_table\").groupBy(\"j\").agg(max(\"k\"))\n-      assert(!getFileScan(aggDF.queryExecution.executedPlan).bucketedScan)\n+      assert(!getBucketScan(aggDF.queryExecution.executedPlan))\n       checkAnswer(aggDF, df1.groupBy(\"j\").agg(max(\"k\")))\n     }\n   }\n@@ -895,7 +935,10 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n   }\n \n   test(\"SPARK-29655 Read bucketed tables obeys spark.sql.shuffle.partitions\") {\n+    // Range partitioning uses random samples, so per-partition comparisons do not always yield\n+    // the same results. Disable Comet native range partitioning.\n     withSQLConf(\n+      CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"false\",\n       SQLConf.SHUFFLE_PARTITIONS.key -> \"5\",\n       SQLConf.COALESCE_PARTITIONS_INITIAL_PARTITION_NUM.key -> \"7\")  {\n       val bucketSpec = Some(BucketSpec(6, Seq(\"i\", \"j\"), Nil))\n@@ -914,7 +957,10 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n   }\n \n   test(\"SPARK-32767 Bucket join should work if SHUFFLE_PARTITIONS larger than bucket number\") {\n+    // Range partitioning uses random samples, so per-partition comparisons do not always yield\n+    // the same results. Disable Comet native range partitioning.\n     withSQLConf(\n+      CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"false\",\n       SQLConf.SHUFFLE_PARTITIONS.key -> \"9\",\n       SQLConf.COALESCE_PARTITIONS_INITIAL_PARTITION_NUM.key -> \"10\")  {\n \n@@ -944,7 +990,10 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n   }\n \n   test(\"bucket coalescing eliminates shuffle\") {\n+    // Range partitioning uses random samples, so per-partition comparisons do not always yield\n+    // the same results. 
Disable Comet native range partitioning.\n     withSQLConf(\n+      CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"false\",\n       SQLConf.COALESCE_BUCKETS_IN_JOIN_ENABLED.key -> \"true\",\n       SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\") {\n       // The side with bucketedTableTestSpec1 will be coalesced to have 4 output partitions.\n@@ -1029,15 +1078,24 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n           Seq(true, false).foreach { aqeEnabled =>\n             withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqeEnabled.toString) {\n               val plan = sql(query).queryExecution.executedPlan\n-              val shuffles = collect(plan) { case s: ShuffleExchangeExec => s }\n+              val shuffles = collect(plan) { case s: ShuffleExchangeLike => s }\n               assert(shuffles.length == expectedNumShuffles)\n \n               val scans = collect(plan) {\n                 case f: FileSourceScanExec if f.optionalNumCoalescedBuckets.isDefined => f\n+                case b: CometScanExec if b.optionalNumCoalescedBuckets.isDefined => b\n+                case n: CometNativeScanExec if n.optionalNumCoalescedBuckets.isDefined => n\n               }\n               if (expectedCoalescedNumBuckets.isDefined) {\n                 assert(scans.length == 1)\n-                assert(scans.head.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+                scans.head match {\n+                  case f: FileSourceScanExec =>\n+                    assert(f.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+                  case b: CometScanExec =>\n+                    assert(b.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+                  case n: CometNativeScanExec =>\n+                    assert(n.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+                }\n               } else {\n                 assert(scans.isEmpty)\n               }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala\nindex 6f897a9c0b7..b0723634f68 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala\n@@ -20,6 +20,7 @@ package org.apache.spark.sql.sources\n import java.io.File\n \n import org.apache.spark.SparkException\n+import org.apache.spark.sql.IgnoreCometSuite\n import org.apache.spark.sql.catalyst.TableIdentifier\n import org.apache.spark.sql.catalyst.catalog.{BucketSpec, CatalogTableType}\n import org.apache.spark.sql.catalyst.parser.ParseException\n@@ -27,7 +28,10 @@ import org.apache.spark.sql.internal.SQLConf.BUCKETING_MAX_BUCKETS\n import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.util.Utils\n \n-class CreateTableAsSelectSuite extends DataSourceTest with SharedSparkSession {\n+// For some reason this suite is flaky w/ or w/o Comet when running in Github workflow.\n+// Since it isn't related to Comet, we disable it for now.\n+class CreateTableAsSelectSuite extends DataSourceTest with SharedSparkSession\n+    with IgnoreCometSuite {\n   import testImplicits._\n \n   protected override lazy val sql = spark.sql _\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala 
b/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala\nindex d675503a8ba..f220892396e 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala\n@@ -18,6 +18,7 @@\n package org.apache.spark.sql.sources\n \n import org.apache.spark.sql.QueryTest\n+import org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.execution.FileSourceScanExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.internal.SQLConf\n@@ -68,7 +69,11 @@ abstract class DisableUnnecessaryBucketedScanSuite\n \n     def checkNumBucketedScan(query: String, expectedNumBucketedScan: Int): Unit = {\n       val plan = sql(query).queryExecution.executedPlan\n-      val bucketedScan = collect(plan) { case s: FileSourceScanExec if s.bucketedScan => s }\n+      val bucketedScan = collect(plan) {\n+        case s: FileSourceScanExec if s.bucketedScan => s\n+        case s: CometScanExec if s.bucketedScan => s\n+        case s: CometNativeScanExec if s.bucketedScan => s\n+      }\n       assert(bucketedScan.length == expectedNumBucketedScan)\n     }\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala\nindex 7f6fa2a123e..c778b4e2c48 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala\n@@ -35,6 +35,7 @@ import org.apache.spark.paths.SparkPath\n import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}\n import org.apache.spark.sql.{AnalysisException, DataFrame}\n import org.apache.spark.sql.catalyst.util.stringToFile\n+import org.apache.spark.sql.comet.CometBatchScanExec\n import org.apache.spark.sql.execution.DataSourceScanExec\n import org.apache.spark.sql.execution.datasources._\n import org.apache.spark.sql.execution.datasources.v2.{BatchScanExec, DataSourceV2Relation, FileScan, FileTable}\n@@ -777,6 +778,8 @@ class FileStreamSinkV2Suite extends FileStreamSinkSuite {\n       val fileScan = df.queryExecution.executedPlan.collect {\n         case batch: BatchScanExec if batch.scan.isInstanceOf[FileScan] =>\n           batch.scan.asInstanceOf[FileScan]\n+        case batch: CometBatchScanExec if batch.scan.isInstanceOf[FileScan] =>\n+          batch.scan.asInstanceOf[FileScan]\n       }.headOption.getOrElse {\n         fail(s\"No FileScan in query\\n${df.queryExecution}\")\n       }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala\nindex c97979a57a5..45a998db0e0 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala\n@@ -38,6 +38,7 @@ import org.apache.spark.sql.catalyst.plans.logical.{Range, RepartitionByExpressi\n import org.apache.spark.sql.catalyst.streaming.{InternalOutputModes, StreamingRelationV2}\n import org.apache.spark.sql.catalyst.types.DataTypeUtils.toAttributes\n import org.apache.spark.sql.catalyst.util.DateTimeUtils\n+import org.apache.spark.sql.comet.CometLocalLimitExec\n import 
org.apache.spark.sql.execution.{LocalLimitExec, SimpleMode, SparkPlan}\n import org.apache.spark.sql.execution.command.ExplainCommand\n import org.apache.spark.sql.execution.streaming._\n@@ -1114,11 +1115,12 @@ class StreamSuite extends StreamTest {\n       val localLimits = execPlan.collect {\n         case l: LocalLimitExec => l\n         case l: StreamingLocalLimitExec => l\n+        case l: CometLocalLimitExec => l\n       }\n \n       require(\n         localLimits.size == 1,\n-        s\"Cant verify local limit optimization with this plan:\\n$execPlan\")\n+        s\"Can't verify local limit optimization (${localLimits.size} local limits):\\n$execPlan\")\n \n       if (expectStreamingLimit) {\n         assert(\n@@ -1126,7 +1128,8 @@ class StreamSuite extends StreamTest {\n           s\"Local limit was not StreamingLocalLimitExec:\\n$execPlan\")\n       } else {\n         assert(\n-          localLimits.head.isInstanceOf[LocalLimitExec],\n+          localLimits.head.isInstanceOf[LocalLimitExec] ||\n+            localLimits.head.isInstanceOf[CometLocalLimitExec],\n           s\"Local limit was not LocalLimitExec:\\n$execPlan\")\n       }\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala\nindex b4c4ec7acbf..20579284856 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala\n@@ -23,6 +23,7 @@ import org.apache.commons.io.FileUtils\n import org.scalatest.Assertions\n \n import org.apache.spark.sql.catalyst.plans.physical.UnspecifiedDistribution\n+import org.apache.spark.sql.comet.CometHashAggregateExec\n import org.apache.spark.sql.execution.aggregate.BaseAggregateExec\n import org.apache.spark.sql.execution.streaming.{MemoryStream, StateStoreRestoreExec, StateStoreSaveExec}\n import org.apache.spark.sql.functions.count\n@@ -67,6 +68,7 @@ class StreamingAggregationDistributionSuite extends StreamTest\n         // verify aggregations in between, except partial aggregation\n         val allAggregateExecs = query.lastExecution.executedPlan.collect {\n           case a: BaseAggregateExec => a\n+          case c: CometHashAggregateExec => c.originalPlan\n         }\n \n         val aggregateExecsWithoutPartialAgg = allAggregateExecs.filter {\n@@ -201,6 +203,7 @@ class StreamingAggregationDistributionSuite extends StreamTest\n         // verify aggregations in between, except partial aggregation\n         val allAggregateExecs = executedPlan.collect {\n           case a: BaseAggregateExec => a\n+          case c: CometHashAggregateExec => c.originalPlan\n         }\n \n         val aggregateExecsWithoutPartialAgg = allAggregateExecs.filter {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala\nindex aad91601758..201083bd621 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala\n@@ -31,7 +31,7 @@ import org.apache.spark.scheduler.ExecutorCacheTaskLocation\n import org.apache.spark.sql.{DataFrame, Row, SparkSession}\n import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression}\n import 
org.apache.spark.sql.catalyst.plans.physical.HashPartitioning\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.streaming.{MemoryStream, StatefulOperatorStateInfo, StreamingSymmetricHashJoinExec, StreamingSymmetricHashJoinHelper}\n import org.apache.spark.sql.execution.streaming.state.{RocksDBStateStoreProvider, StateStore, StateStoreProviderId}\n import org.apache.spark.sql.functions._\n@@ -619,14 +619,27 @@ class StreamingInnerJoinSuite extends StreamingJoinSuite {\n \n         val numPartitions = spark.sqlContext.conf.getConf(SQLConf.SHUFFLE_PARTITIONS)\n \n-        assert(query.lastExecution.executedPlan.collect {\n-          case j @ StreamingSymmetricHashJoinExec(_, _, _, _, _, _, _, _, _,\n-            ShuffleExchangeExec(opA: HashPartitioning, _, _, _),\n-            ShuffleExchangeExec(opB: HashPartitioning, _, _, _))\n-              if partitionExpressionsColumns(opA.expressions) === Seq(\"a\", \"b\")\n-                && partitionExpressionsColumns(opB.expressions) === Seq(\"a\", \"b\")\n-                && opA.numPartitions == numPartitions && opB.numPartitions == numPartitions => j\n-        }.size == 1)\n+        val join = query.lastExecution.executedPlan.collect {\n+          case j: StreamingSymmetricHashJoinExec => j\n+        }.head\n+        val opA = join.left.collect {\n+          case s: ShuffleExchangeLike\n+            if s.outputPartitioning.isInstanceOf[HashPartitioning] &&\n+              partitionExpressionsColumns(\n+                s.outputPartitioning\n+                  .asInstanceOf[HashPartitioning].expressions) === Seq(\"a\", \"b\") =>\n+            s.outputPartitioning.asInstanceOf[HashPartitioning]\n+        }.head\n+        val opB = join.right.collect {\n+          case s: ShuffleExchangeLike\n+            if s.outputPartitioning.isInstanceOf[HashPartitioning] &&\n+              partitionExpressionsColumns(\n+                s.outputPartitioning\n+                  .asInstanceOf[HashPartitioning].expressions) === Seq(\"a\", \"b\") =>\n+            s.outputPartitioning\n+              .asInstanceOf[HashPartitioning]\n+        }.head\n+        assert(opA.numPartitions == numPartitions && opB.numPartitions == numPartitions)\n       })\n   }\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala\nindex abe606ad9c1..2d930b64cca 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala\n@@ -22,7 +22,7 @@ import java.util\n \n import org.scalatest.BeforeAndAfter\n \n-import org.apache.spark.sql.{AnalysisException, Row, SaveMode}\n+import org.apache.spark.sql.{AnalysisException, IgnoreComet, Row, SaveMode}\n import org.apache.spark.sql.catalyst.TableIdentifier\n import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException\n import org.apache.spark.sql.catalyst.catalog.{CatalogStorageFormat, CatalogTable, CatalogTableType}\n@@ -327,7 +327,8 @@ class DataStreamTableAPISuite extends StreamTest with BeforeAndAfter {\n     }\n   }\n \n-  test(\"explain with table on DSv1 data source\") {\n+  test(\"explain with table on DSv1 data source\",\n+      IgnoreComet(\"Comet explain output is different\")) {\n     val tblSourceName = \"tbl_src\"\n     
val tblTargetName = \"tbl_target\"\n     val tblSourceQualified = s\"default.$tblSourceName\"\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala b/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala\nindex e937173a590..7d20538bc68 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala\n@@ -27,6 +27,7 @@ import scala.concurrent.duration._\n import scala.language.implicitConversions\n import scala.util.control.NonFatal\n \n+import org.apache.comet.CometConf\n import org.apache.hadoop.fs.Path\n import org.scalactic.source.Position\n import org.scalatest.{BeforeAndAfterAll, Suite, Tag}\n@@ -41,6 +42,7 @@ import org.apache.spark.sql.catalyst.plans.PlanTest\n import org.apache.spark.sql.catalyst.plans.PlanTestBase\n import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan\n import org.apache.spark.sql.catalyst.util._\n+import org.apache.spark.sql.comet._\n import org.apache.spark.sql.execution.FilterExec\n import org.apache.spark.sql.execution.adaptive.DisableAdaptiveExecution\n import org.apache.spark.sql.execution.datasources.DataSourceUtils\n@@ -119,6 +121,34 @@ private[sql] trait SQLTestUtils extends SparkFunSuite with SQLTestUtilsBase with\n \n   override protected def test(testName: String, testTags: Tag*)(testFun: => Any)\n       (implicit pos: Position): Unit = {\n+    // Check Comet skip tags first, before DisableAdaptiveExecution handling\n+    if (isCometEnabled && testTags.exists(_.isInstanceOf[IgnoreComet])) {\n+      ignore(testName + \" (disabled when Comet is on)\", testTags: _*)(testFun)\n+      return\n+    }\n+    if (isCometEnabled) {\n+      val cometScanImpl = CometConf.COMET_NATIVE_SCAN_IMPL.get(conf)\n+      val isNativeIcebergCompat = cometScanImpl == CometConf.SCAN_NATIVE_ICEBERG_COMPAT ||\n+        cometScanImpl == CometConf.SCAN_AUTO\n+      val isNativeDataFusion = cometScanImpl == CometConf.SCAN_NATIVE_DATAFUSION ||\n+        cometScanImpl == CometConf.SCAN_AUTO\n+      if (isNativeIcebergCompat &&\n+        testTags.exists(_.isInstanceOf[IgnoreCometNativeIcebergCompat])) {\n+        ignore(testName + \" (disabled for NATIVE_ICEBERG_COMPAT)\", testTags: _*)(testFun)\n+        return\n+      }\n+      if (isNativeDataFusion &&\n+        testTags.exists(_.isInstanceOf[IgnoreCometNativeDataFusion])) {\n+        ignore(testName + \" (disabled for NATIVE_DATAFUSION)\", testTags: _*)(testFun)\n+        return\n+      }\n+      if ((isNativeDataFusion || isNativeIcebergCompat) &&\n+        testTags.exists(_.isInstanceOf[IgnoreCometNativeScan])) {\n+        ignore(testName + \" (disabled for NATIVE_DATAFUSION and NATIVE_ICEBERG_COMPAT)\",\n+          testTags: _*)(testFun)\n+        return\n+      }\n+    }\n     if (testTags.exists(_.isInstanceOf[DisableAdaptiveExecution])) {\n       super.test(testName, testTags: _*) {\n         withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\") {\n@@ -242,6 +272,29 @@ private[sql] trait SQLTestUtilsBase\n     protected override def _sqlContext: SQLContext = self.spark.sqlContext\n   }\n \n+  /**\n+   * Whether Comet extension is enabled\n+   */\n+  protected def isCometEnabled: Boolean = SparkSession.isCometEnabled\n+\n+  /**\n+   * Whether to enable ansi mode This is only effective when\n+   * [[isCometEnabled]] returns true.\n+   */\n+  protected def enableCometAnsiMode: Boolean = {\n+    val v = System.getenv(\"ENABLE_COMET_ANSI_MODE\")\n+    v != null && 
v.toBoolean\n+  }\n+\n+  /**\n+   * Whether Spark should only apply Comet scan optimization. This is only effective when\n+   * [[isCometEnabled]] returns true.\n+   */\n+  protected def isCometScanOnly: Boolean = {\n+    val v = System.getenv(\"ENABLE_COMET_SCAN_ONLY\")\n+    v != null && v.toBoolean\n+  }\n+\n   protected override def withSQLConf(pairs: (String, String)*)(f: => Unit): Unit = {\n     SparkSession.setActiveSession(spark)\n     super.withSQLConf(pairs: _*)(f)\n@@ -435,6 +488,8 @@ private[sql] trait SQLTestUtilsBase\n     val schema = df.schema\n     val withoutFilters = df.queryExecution.executedPlan.transform {\n       case FilterExec(_, child) => child\n+      case CometFilterExec(_, _, _, _, child, _) => child\n+      case CometProjectExec(_, _, _, _, CometFilterExec(_, _, _, _, child, _), _) => child\n     }\n \n     spark.internalCreateDataFrame(withoutFilters.execute(), schema)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala b/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala\nindex ed2e309fa07..a5ea58146ad 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala\n@@ -74,6 +74,31 @@ trait SharedSparkSessionBase\n       // this rule may potentially block testing of other optimization rules such as\n       // ConstantPropagation etc.\n       .set(SQLConf.OPTIMIZER_EXCLUDED_RULES.key, ConvertToLocalRelation.ruleName)\n+    // Enable Comet if `ENABLE_COMET` environment variable is set\n+    if (isCometEnabled) {\n+      conf\n+        .set(\"spark.sql.extensions\", \"org.apache.comet.CometSparkSessionExtensions\")\n+        .set(\"spark.comet.enabled\", \"true\")\n+        .set(\"spark.comet.parquet.respectFilterPushdown\", \"true\")\n+\n+      if (!isCometScanOnly) {\n+        conf\n+          .set(\"spark.comet.exec.enabled\", \"true\")\n+          .set(\"spark.shuffle.manager\",\n+            \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+          .set(\"spark.comet.exec.shuffle.enabled\", \"true\")\n+          .set(\"spark.comet.memoryOverhead\", \"10g\")\n+      } else {\n+        conf\n+          .set(\"spark.comet.exec.enabled\", \"false\")\n+          .set(\"spark.comet.exec.shuffle.enabled\", \"false\")\n+      }\n+\n+      if (enableCometAnsiMode) {\n+        conf\n+          .set(\"spark.sql.ansi.enabled\", \"true\")\n+      }\n+    }\n     conf.set(\n       StaticSQLConf.WAREHOUSE_PATH,\n       conf.get(StaticSQLConf.WAREHOUSE_PATH) + \"/\" + getClass.getCanonicalName)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala b/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala\nindex c63c748953f..7edca9c93a6 100644\n--- a/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala\n@@ -45,7 +45,7 @@ class SqlResourceWithActualMetricsSuite\n   import testImplicits._\n \n   // Exclude nodes which may not have the metrics\n-  val excludedNodes = List(\"WholeStageCodegen\", \"Project\", \"SerializeFromObject\")\n+  val excludedNodes = List(\"WholeStageCodegen\", \"Project\", \"SerializeFromObject\", \"RowToColumnar\")\n \n   implicit val formats = new DefaultFormats {\n     override def dateFormatter = new 
SimpleDateFormat(\"yyyy-MM-dd'T'HH:mm:ss\")\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala\nindex 52abd248f3a..7a199931a08 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala\n@@ -19,6 +19,7 @@ package org.apache.spark.sql.hive\n \n import org.apache.spark.sql._\n import org.apache.spark.sql.catalyst.expressions.{DynamicPruningExpression, Expression}\n+import org.apache.spark.sql.comet._\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.adaptive.{DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.hive.execution.HiveTableScanExec\n@@ -35,6 +36,9 @@ abstract class DynamicPartitionPruningHiveScanSuiteBase\n       case s: FileSourceScanExec => s.partitionFilters.collect {\n         case d: DynamicPruningExpression => d.child\n       }\n+      case s: CometScanExec => s.partitionFilters.collect {\n+        case d: DynamicPruningExpression => d.child\n+      }\n       case h: HiveTableScanExec => h.partitionPruningPred.collect {\n         case d: DynamicPruningExpression => d.child\n       }\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala\nindex de3b1ffccf0..2a76d127093 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala\n@@ -23,14 +23,15 @@ import java.util.concurrent.{Executors, TimeUnit}\n import org.scalatest.BeforeAndAfterEach\n \n import org.apache.spark.metrics.source.HiveCatalogMetrics\n-import org.apache.spark.sql.QueryTest\n+import org.apache.spark.sql.{IgnoreCometSuite, QueryTest}\n import org.apache.spark.sql.execution.datasources.FileStatusCache\n import org.apache.spark.sql.hive.test.TestHiveSingleton\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SQLTestUtils\n \n class PartitionedTablePerfStatsSuite\n-  extends QueryTest with TestHiveSingleton with SQLTestUtils with BeforeAndAfterEach {\n+  extends QueryTest with TestHiveSingleton with SQLTestUtils with BeforeAndAfterEach\n+    with IgnoreCometSuite {\n \n   override def beforeEach(): Unit = {\n     super.beforeEach()\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala\nindex 6160c3e5f6c..0956d7d9edc 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/execution/SQLQuerySuite.scala\n@@ -24,6 +24,7 @@ import java.sql.{Date, Timestamp}\n import java.util.{Locale, Set}\n \n import com.google.common.io.Files\n+import org.apache.comet.CometConf\n import org.apache.hadoop.fs.{FileSystem, Path}\n \n import org.apache.spark.{SparkException, TestUtils}\n@@ -838,8 +839,13 @@ abstract class SQLQuerySuiteBase extends QueryTest with SQLTestUtils with TestHi\n   }\n \n   test(\"SPARK-2554 SumDistinct partial aggregation\") {\n-    checkAnswer(sql(\"SELECT sum( distinct key) FROM src group by key order by key\"),\n-      sql(\"SELECT 
distinct key FROM src order by key\").collect().toSeq)\n+    // Range partitioning uses random samples, so per-partition comparisons do not always yield\n+    // the same results. Disable Comet native range partitioning.\n+    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"false\")\n+    {\n+      checkAnswer(sql(\"SELECT sum( distinct key) FROM src group by key order by key\"),\n+        sql(\"SELECT distinct key FROM src order by key\").collect().toSeq)\n+    }\n   }\n \n   test(\"SPARK-4963 DataFrame sample on mutable row return wrong result\") {\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala\nindex 1d646f40b3e..5babe505301 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala\n@@ -53,25 +53,54 @@ object TestHive\n     new SparkContext(\n       System.getProperty(\"spark.sql.test.master\", \"local[1]\"),\n       \"TestSQLContext\",\n-      new SparkConf()\n-        .set(\"spark.sql.test\", \"\")\n-        .set(SQLConf.CODEGEN_FALLBACK.key, \"false\")\n-        .set(SQLConf.CODEGEN_FACTORY_MODE.key, CodegenObjectFactoryMode.CODEGEN_ONLY.toString)\n-        .set(HiveUtils.HIVE_METASTORE_BARRIER_PREFIXES.key,\n-          \"org.apache.spark.sql.hive.execution.PairSerDe\")\n-        .set(WAREHOUSE_PATH.key, TestHiveContext.makeWarehouseDir().toURI.getPath)\n-        // SPARK-8910\n-        .set(UI_ENABLED, false)\n-        .set(config.UNSAFE_EXCEPTION_ON_MEMORY_LEAK, true)\n-        // Hive changed the default of hive.metastore.disallow.incompatible.col.type.changes\n-        // from false to true. For details, see the JIRA HIVE-12320 and HIVE-17764.\n-        .set(\"spark.hadoop.hive.metastore.disallow.incompatible.col.type.changes\", \"false\")\n-        // Disable ConvertToLocalRelation for better test coverage. Test cases built on\n-        // LocalRelation will exercise the optimization rules better by disabling it as\n-        // this rule may potentially block testing of other optimization rules such as\n-        // ConstantPropagation etc.\n-        .set(SQLConf.OPTIMIZER_EXCLUDED_RULES.key, ConvertToLocalRelation.ruleName)))\n+      {\n+        val conf = new SparkConf()\n+          .set(\"spark.sql.test\", \"\")\n+          .set(SQLConf.CODEGEN_FALLBACK.key, \"false\")\n+          .set(SQLConf.CODEGEN_FACTORY_MODE.key, CodegenObjectFactoryMode.CODEGEN_ONLY.toString)\n+          .set(HiveUtils.HIVE_METASTORE_BARRIER_PREFIXES.key,\n+            \"org.apache.spark.sql.hive.execution.PairSerDe\")\n+          .set(WAREHOUSE_PATH.key, TestHiveContext.makeWarehouseDir().toURI.getPath)\n+          // SPARK-8910\n+          .set(UI_ENABLED, false)\n+          .set(config.UNSAFE_EXCEPTION_ON_MEMORY_LEAK, true)\n+          // Hive changed the default of hive.metastore.disallow.incompatible.col.type.changes\n+          // from false to true. For details, see the JIRA HIVE-12320 and HIVE-17764.\n+          .set(\"spark.hadoop.hive.metastore.disallow.incompatible.col.type.changes\", \"false\")\n+          // Disable ConvertToLocalRelation for better test coverage. 
Test cases built on\n+          // LocalRelation will exercise the optimization rules better by disabling it as\n+          // this rule may potentially block testing of other optimization rules such as\n+          // ConstantPropagation etc.\n+          .set(SQLConf.OPTIMIZER_EXCLUDED_RULES.key, ConvertToLocalRelation.ruleName)\n+\n+        if (SparkSession.isCometEnabled) {\n+          conf\n+            .set(\"spark.sql.extensions\", \"org.apache.comet.CometSparkSessionExtensions\")\n+            .set(\"spark.comet.enabled\", \"true\")\n+\n+          val v = System.getenv(\"ENABLE_COMET_SCAN_ONLY\")\n+          if (v == null || !v.toBoolean) {\n+            conf\n+              .set(\"spark.comet.exec.enabled\", \"true\")\n+              .set(\"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+              .set(\"spark.comet.exec.shuffle.enabled\", \"true\")\n+          } else {\n+            conf\n+              .set(\"spark.comet.exec.enabled\", \"false\")\n+              .set(\"spark.comet.exec.shuffle.enabled\", \"false\")\n+          }\n+\n+          val a = System.getenv(\"ENABLE_COMET_ANSI_MODE\")\n+          if (a != null && a.toBoolean) {\n+            conf\n+              .set(\"spark.sql.ansi.enabled\", \"true\")\n+          }\n+        }\n \n+        conf\n+      }\n+    ))\n \n case class TestHiveVersion(hiveClient: HiveClient)\n   extends TestHiveContext(TestHive.sparkContext, hiveClient)\n"
  },
  {
    "path": "dev/diffs/4.0.1.diff",
    "content": "diff --git a/core/src/test/scala/org/apache/spark/storage/FallbackStorageSuite.scala b/core/src/test/scala/org/apache/spark/storage/FallbackStorageSuite.scala\nindex 6c51bd4ff2e..e72ec1d26e2 100644\n--- a/core/src/test/scala/org/apache/spark/storage/FallbackStorageSuite.scala\n+++ b/core/src/test/scala/org/apache/spark/storage/FallbackStorageSuite.scala\n@@ -231,6 +231,11 @@ class FallbackStorageSuite extends SparkFunSuite with LocalSparkContext {\n   }\n \n   test(\"Upload from all decommissioned executors\") {\n+    // Comet replaces Spark's shuffle with its own native shuffle, which is incompatible with\n+    // the fallback storage migration path used by BlockManagerDecommissioner.\n+    val cometEnv = System.getenv(\"ENABLE_COMET\")\n+    assume(cometEnv == null || cometEnv == \"0\" || cometEnv == \"false\",\n+      \"Skipped when Comet is enabled: incompatible with Comet native shuffle storage\")\n     sc = new SparkContext(getSparkConf(2, 2))\n     withSpark(sc) { sc =>\n       TestUtils.waitUntilExecutorsUp(sc, 2, 60000)\n@@ -261,6 +266,11 @@ class FallbackStorageSuite extends SparkFunSuite with LocalSparkContext {\n   }\n \n   test(\"Upload multi stages\") {\n+    // Comet replaces Spark's shuffle with its own native shuffle, which is incompatible with\n+    // the fallback storage migration path used by BlockManagerDecommissioner.\n+    val cometEnv = System.getenv(\"ENABLE_COMET\")\n+    assume(cometEnv == null || cometEnv == \"0\" || cometEnv == \"false\",\n+      \"Skipped when Comet is enabled: incompatible with Comet native shuffle storage\")\n     sc = new SparkContext(getSparkConf())\n     withSpark(sc) { sc =>\n       TestUtils.waitUntilExecutorsUp(sc, 1, 60000)\n@@ -295,6 +305,11 @@ class FallbackStorageSuite extends SparkFunSuite with LocalSparkContext {\n \n   CompressionCodec.shortCompressionCodecNames.keys.foreach { codec =>\n     test(s\"$codec - Newly added executors should access old data from remote storage\") {\n+      // Comet replaces Spark's shuffle with its own native shuffle, which is incompatible with\n+      // the fallback storage migration path used by BlockManagerDecommissioner.\n+      val cometEnv = System.getenv(\"ENABLE_COMET\")\n+      assume(cometEnv == null || cometEnv == \"0\" || cometEnv == \"false\",\n+        \"Skipped when Comet is enabled: incompatible with Comet native shuffle storage\")\n       sc = new SparkContext(getSparkConf(2, 0).set(IO_COMPRESSION_CODEC, codec))\n       withSpark(sc) { sc =>\n         TestUtils.waitUntilExecutorsUp(sc, 2, 60000)\ndiff --git a/pom.xml b/pom.xml\nindex 22922143fc3..97332f7e6ac 100644\n--- a/pom.xml\n+++ b/pom.xml\n@@ -148,6 +148,8 @@\n     <kryo.version>4.0.3</kryo.version>\n     <ivy.version>2.5.3</ivy.version>\n     <oro.version>2.0.8</oro.version>\n+    <spark.version.short>4.0</spark.version.short>\n+    <comet.version>0.15.0-SNAPSHOT</comet.version>\n     <!--\n     If you change codahale.metrics.version, you also need to change\n     the link to metrics.dropwizard.io in docs/monitoring.md.\n@@ -2596,6 +2598,25 @@\n         <artifactId>arpack</artifactId>\n         <version>${netlib.ludovic.dev.version}</version>\n       </dependency>\n+      <dependency>\n+        <groupId>org.apache.datafusion</groupId>\n+        <artifactId>comet-spark-spark${spark.version.short}_${scala.binary.version}</artifactId>\n+        <version>${comet.version}</version>\n+        <exclusions>\n+          <exclusion>\n+            <groupId>org.apache.spark</groupId>\n+            
<artifactId>spark-sql_${scala.binary.version}</artifactId>\n+          </exclusion>\n+          <exclusion>\n+            <groupId>org.apache.spark</groupId>\n+            <artifactId>spark-core_${scala.binary.version}</artifactId>\n+          </exclusion>\n+          <exclusion>\n+            <groupId>org.apache.spark</groupId>\n+            <artifactId>spark-catalyst_${scala.binary.version}</artifactId>\n+          </exclusion>\n+        </exclusions>\n+      </dependency>\n      <!-- SPARK-16484 add `datasketches-java` for support Datasketches HllSketch -->\n      <dependency>\n        <groupId>org.apache.datasketches</groupId>\ndiff --git a/sql/core/pom.xml b/sql/core/pom.xml\nindex dcf6223a98b..0458a5bb640 100644\n--- a/sql/core/pom.xml\n+++ b/sql/core/pom.xml\n@@ -90,6 +90,10 @@\n       <groupId>org.apache.spark</groupId>\n       <artifactId>spark-tags_${scala.binary.version}</artifactId>\n     </dependency>\n+    <dependency>\n+      <groupId>org.apache.datafusion</groupId>\n+      <artifactId>comet-spark-spark${spark.version.short}_${scala.binary.version}</artifactId>\n+    </dependency>\n \n     <!--\n      This spark-tags test-dep is needed even though it isn't used in this module, otherwise testing-cmds that exclude\ndiff --git a/sql/core/src/main/scala/org/apache/spark/sql/classic/SparkSession.scala b/sql/core/src/main/scala/org/apache/spark/sql/classic/SparkSession.scala\nindex 0015d7ff99e..dcbf0325904 100644\n--- a/sql/core/src/main/scala/org/apache/spark/sql/classic/SparkSession.scala\n+++ b/sql/core/src/main/scala/org/apache/spark/sql/classic/SparkSession.scala\n@@ -1042,6 +1042,23 @@ object SparkSession extends SparkSessionCompanion with Logging {\n     extensions\n   }\n \n+  /**\n+   * Whether the Comet extension is enabled\n+   */\n+  def isCometEnabled: Boolean = {\n+    val v = System.getenv(\"ENABLE_COMET\")\n+    v != null && (v == \"1\" || v.toBoolean)\n+  }\n+\n+\n+  private def loadCometExtension(sparkContext: SparkContext): Seq[String] = {\n+    if (sparkContext.getConf.getBoolean(\"spark.comet.enabled\", isCometEnabled)) {\n+      Seq(\"org.apache.comet.CometSparkSessionExtensions\")\n+    } else {\n+      Seq.empty\n+    }\n+  }\n+\n   /**\n    * Initialize extensions specified in [[StaticSQLConf]]. 
The classes will be applied to the\n    * extensions passed into this function.\n@@ -1051,7 +1068,8 @@ object SparkSession extends SparkSessionCompanion with Logging {\n       extensions: SparkSessionExtensions): SparkSessionExtensions = {\n     val extensionConfClassNames = sparkContext.conf.get(StaticSQLConf.SPARK_SESSION_EXTENSIONS)\n       .getOrElse(Seq.empty)\n-    extensionConfClassNames.foreach { extensionConfClassName =>\n+    val extensionClassNames = extensionConfClassNames ++ loadCometExtension(sparkContext)\n+    extensionClassNames.foreach { extensionConfClassName =>\n       try {\n         val extensionConfClass = Utils.classForName(extensionConfClassName)\n         val extensionConf = extensionConfClass.getConstructor().newInstance()\ndiff --git a/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala b/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala\nindex 4410fe50912..43bcce2a038 100644\n--- a/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala\n+++ b/sql/core/src/main/scala/org/apache/spark/sql/execution/SparkPlanInfo.scala\n@@ -19,6 +19,7 @@ package org.apache.spark.sql.execution\n \n import org.apache.spark.annotation.DeveloperApi\n import org.apache.spark.sql.catalyst.plans.logical.{EmptyRelation, LogicalPlan}\n+import org.apache.spark.sql.comet.CometScanExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanExec, QueryStageExec}\n import org.apache.spark.sql.execution.adaptive.LogicalQueryStage\n import org.apache.spark.sql.execution.columnar.InMemoryTableScanExec\n@@ -84,6 +85,7 @@ private[execution] object SparkPlanInfo {\n     // dump the file scan metadata (e.g file path) to event log\n     val metadata = plan match {\n       case fileScan: FileSourceScanLike => fileScan.metadata\n+      case cometScan: CometScanExec => cometScan.metadata\n       case _ => Map[String, String]()\n     }\n     val childrenInfo = children.flatMap {\ndiff --git a/sql/core/src/test/resources/sql-tests/analyzer-results/listagg-collations.sql.out b/sql/core/src/test/resources/sql-tests/analyzer-results/listagg-collations.sql.out\nindex 7aca17dcb25..8afeb3b4a2f 100644\n--- a/sql/core/src/test/resources/sql-tests/analyzer-results/listagg-collations.sql.out\n+++ b/sql/core/src/test/resources/sql-tests/analyzer-results/listagg-collations.sql.out\n@@ -64,15 +64,6 @@ WithCTE\n       +- CTERelationRef xxxx, true, [c1#x], false, false\n \n \n--- !query\n-SELECT lower(listagg(DISTINCT c1 COLLATE utf8_lcase) WITHIN GROUP (ORDER BY c1 COLLATE utf8_lcase)) FROM (VALUES ('a'), ('B'), ('b'), ('A')) AS t(c1)\n--- !query analysis\n-Aggregate [lower(listagg(distinct collate(c1#x, utf8_lcase), null, collate(c1#x, utf8_lcase) ASC NULLS FIRST, 0, 0)) AS lower(listagg(DISTINCT collate(c1, utf8_lcase), NULL) WITHIN GROUP (ORDER BY collate(c1, utf8_lcase) ASC NULLS FIRST))#x]\n-+- SubqueryAlias t\n-   +- Project [col1#x AS c1#x]\n-      +- LocalRelation [col1#x]\n-\n-\n -- !query\n WITH t(c1) AS (SELECT replace(listagg(DISTINCT col1 COLLATE unicode_rtrim) COLLATE utf8_binary, ' ', '') FROM (VALUES ('xbc  '), ('xbc '), ('a'), ('xbc'))) SELECT len(c1), regexp_count(c1, 'a'), regexp_count(c1, 'xbc') FROM t\n -- !query analysis\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/collations.sql b/sql/core/src/test/resources/sql-tests/inputs/collations.sql\nindex 17815ed5dde..baad440b1ce 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/collations.sql\n+++ 
b/sql/core/src/test/resources/sql-tests/inputs/collations.sql\n@@ -1,3 +1,6 @@\n+-- TODO: https://github.com/apache/datafusion-comet/issues/551\n+--SET spark.comet.enabled = false\n+\n -- test cases for collation support\n \n -- Create a test table with data\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/decimalArithmeticOperations.sql b/sql/core/src/test/resources/sql-tests/inputs/decimalArithmeticOperations.sql\nindex 13bbd9d81b7..541cdfb1e04 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/decimalArithmeticOperations.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/decimalArithmeticOperations.sql\n@@ -15,6 +15,12 @@\n --   limitations under the License.\n --\n \n+-- TODO: Disabled because one of the tests fails on Spark 4.0\n+-- TODO: https://github.com/apache/datafusion-comet/issues/1948\n+-- The following query fails:\n+-- select /*+ COALESCE(1) */ id, a+b, a-b, a*b, a/b from decimals_test order by id\n+--SET spark.comet.enabled = false\n+\n CREATE TEMPORARY VIEW t AS SELECT 1.0 as a, 0.0 as b;\n \n -- division, remainder and pmod by 0 return NULL\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql b/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql\nindex 7aef901da4f..f3d6e18926d 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/explain-aqe.sql\n@@ -2,3 +2,4 @@\n \n --SET spark.sql.adaptive.enabled=true\n --SET spark.sql.maxMetadataStringLength = 500\n+--SET spark.comet.enabled = false\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql b/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql\nindex eeb2180f7a5..afd1b5ec289 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/explain-cbo.sql\n@@ -1,5 +1,6 @@\n --SET spark.sql.cbo.enabled=true\n --SET spark.sql.maxMetadataStringLength = 500\n+--SET spark.comet.enabled = false\n \n CREATE TABLE explain_temp1(a INT, b INT) USING PARQUET;\n CREATE TABLE explain_temp2(c INT, d INT) USING PARQUET;\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/explain.sql b/sql/core/src/test/resources/sql-tests/inputs/explain.sql\nindex 698ca009b4f..57d774a3617 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/explain.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/explain.sql\n@@ -1,6 +1,7 @@\n --SET spark.sql.codegen.wholeStage = true\n --SET spark.sql.adaptive.enabled = false\n --SET spark.sql.maxMetadataStringLength = 500\n+--SET spark.comet.enabled = false\n \n -- Test tables\n CREATE table  explain_temp1 (key int, val int) USING PARQUET;\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/listagg-collations.sql b/sql/core/src/test/resources/sql-tests/inputs/listagg-collations.sql\nindex aa3d02dc2fb..c4f878d9908 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/listagg-collations.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/listagg-collations.sql\n@@ -5,7 +5,9 @@ WITH t(c1) AS (SELECT listagg(col1) WITHIN GROUP (ORDER BY col1) FROM (VALUES ('\n -- Test cases with utf8_lcase. 
Lower expression added for determinism\n SELECT lower(listagg(c1) WITHIN GROUP (ORDER BY c1 COLLATE utf8_lcase)) FROM (VALUES ('a'), ('A'), ('b'), ('B')) AS t(c1);\n WITH t(c1) AS (SELECT lower(listagg(DISTINCT col1 COLLATE utf8_lcase)) FROM (VALUES ('a'), ('A'), ('b'), ('B'))) SELECT len(c1), regexp_count(c1, 'a'), regexp_count(c1, 'b') FROM t;\n-SELECT lower(listagg(DISTINCT c1 COLLATE utf8_lcase) WITHIN GROUP (ORDER BY c1 COLLATE utf8_lcase)) FROM (VALUES ('a'), ('B'), ('b'), ('A')) AS t(c1);\n+-- TODO https://github.com/apache/datafusion-comet/issues/1947\n+-- TODO fix Comet for this query\n+-- SELECT lower(listagg(DISTINCT c1 COLLATE utf8_lcase) WITHIN GROUP (ORDER BY c1 COLLATE utf8_lcase)) FROM (VALUES ('a'), ('B'), ('b'), ('A')) AS t(c1);\n -- Test cases with unicode_rtrim.\n WITH t(c1) AS (SELECT replace(listagg(DISTINCT col1 COLLATE unicode_rtrim) COLLATE utf8_binary, ' ', '') FROM (VALUES ('xbc  '), ('xbc '), ('a'), ('xbc'))) SELECT len(c1), regexp_count(c1, 'a'), regexp_count(c1, 'xbc') FROM t;\n WITH t(c1) AS (SELECT listagg(col1) WITHIN GROUP (ORDER BY col1 COLLATE unicode_rtrim) FROM (VALUES ('abc '), ('abc\\n'), ('abc'), ('x'))) SELECT replace(replace(c1, ' ', ''), '\\n', '$') FROM t;\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql\nindex 41fd4de2a09..162d5a817b6 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/aggregates_part3.sql\n@@ -6,6 +6,10 @@\n -- https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/sql/aggregates.sql#L352-L605\n \n -- Test aggregate operator with codegen on and off.\n+\n+-- Floating-point precision difference between DataFusion and JVM for FILTER aggregates\n+--SET spark.comet.enabled = false\n+\n --CONFIG_DIM1 spark.sql.codegen.wholeStage=true\n --CONFIG_DIM1 spark.sql.codegen.wholeStage=false,spark.sql.codegen.factoryMode=CODEGEN_ONLY\n --CONFIG_DIM1 spark.sql.codegen.wholeStage=false,spark.sql.codegen.factoryMode=NO_CODEGEN\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql\nindex 3a409eea348..26e9aaf215c 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int4.sql\n@@ -6,6 +6,9 @@\n -- https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/sql/int4.sql\n --\n \n+-- TODO: https://github.com/apache/datafusion-comet/issues/551\n+--SET spark.comet.enabled = false\n+\n CREATE TABLE INT4_TBL(f1 int) USING parquet;\n \n -- [SPARK-28023] Trim the string when cast string type to other types\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql\nindex fac23b4a26f..98b12ae5ccc 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/int8.sql\n@@ -6,6 +6,10 @@\n -- Test int8 64-bit integers.\n -- https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/sql/int8.sql\n --\n+\n+-- TODO: https://github.com/apache/datafusion-comet/issues/551\n+--SET spark.comet.enabled = false\n+\n CREATE TABLE INT8_TBL(q1 bigint, q2 bigint) USING parquet;\n \n -- PostgreSQL implicitly casts string literals to data with integral types, but\ndiff --git 
a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql\nindex 0efe0877e9b..f9df0400c99 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/postgreSQL/select_having.sql\n@@ -6,6 +6,9 @@\n -- https://github.com/postgres/postgres/blob/REL_12_BETA2/src/test/regress/sql/select_having.sql\n --\n \n+-- TODO: https://github.com/apache/datafusion-comet/issues/551\n+--SET spark.comet.enabled = false\n+\n -- load test data\n CREATE TABLE test_having (a int, b int, c string, d string) USING parquet;\n INSERT INTO test_having VALUES (0, 1, 'XXXX', 'A');\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/subquery/in-subquery/in-limit.sql b/sql/core/src/test/resources/sql-tests/inputs/subquery/in-subquery/in-limit.sql\nindex 7c816d8a416..b1551a2b296 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/subquery/in-subquery/in-limit.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/subquery/in-subquery/in-limit.sql\n@@ -1,6 +1,23 @@\n -- A test suite for IN LIMIT in parent side, subquery, and both predicate subquery\n -- It includes correlated cases.\n \n+-- TODO: Disabled because one of the tests fails on Spark 4.0\n+-- TODO: https://github.com/apache/datafusion-comet/issues/1948\n+-- The following query fails:\n+-- SELECT Count(DISTINCT( t1a )),\n+--        t1b\n+-- FROM   t1\n+-- WHERE  t1d NOT IN (SELECT t2d\n+--                    FROM   t2\n+--                    WHERE t2b > t1b\n+--                    ORDER  BY t2b DESC nulls first, t2d\n+--     LIMIT 1\n+-- OFFSET 1)\n+-- GROUP  BY t1b\n+-- ORDER BY t1b NULLS last\n+--     LIMIT  1\n+-- OFFSET 1;\n+--SET spark.comet.enabled = false\n --CONFIG_DIM1 spark.sql.optimizeNullAwareAntiJoin=true\n --CONFIG_DIM1 spark.sql.optimizeNullAwareAntiJoin=false\n \n@@ -61,6 +78,7 @@ WHERE  t1a IN (SELECT t2a\n                WHERE  t1d = t2d)\n LIMIT  2;\n \n+--SET spark.sql.cbo.enabled=true\n -- correlated IN subquery\n -- LIMIT on both parent and subquery sides\n SELECT *\ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/view-schema-binding-config.sql b/sql/core/src/test/resources/sql-tests/inputs/view-schema-binding-config.sql\nindex e803254ea64..74db78aee38 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/view-schema-binding-config.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/view-schema-binding-config.sql\n@@ -1,6 +1,9 @@\n -- This test suits check the spark.sql.viewSchemaBindingMode configuration.\n -- It can be DISABLED and COMPENSATION\n \n+-- TODO: https://github.com/apache/datafusion-comet/issues/551\n+--SET spark.comet.enabled = false\n+\n -- Verify the default binding is true\n SET spark.sql.legacy.viewSchemaBindingMode;\n \ndiff --git a/sql/core/src/test/resources/sql-tests/inputs/view-schema-compensation.sql b/sql/core/src/test/resources/sql-tests/inputs/view-schema-compensation.sql\nindex 21a3ce1e122..f4762ab98f0 100644\n--- a/sql/core/src/test/resources/sql-tests/inputs/view-schema-compensation.sql\n+++ b/sql/core/src/test/resources/sql-tests/inputs/view-schema-compensation.sql\n@@ -1,5 +1,9 @@\n -- This test suite checks the WITH SCHEMA COMPENSATION clause\n -- Disable ANSI mode to ensure we are forcing it explicitly in the CASTS\n+\n+-- TODO: https://github.com/apache/datafusion-comet/issues/551\n+--SET spark.comet.enabled = false\n+\n SET spark.sql.ansi.enabled = false;\n \n -- In COMPENSATION views get invalidated 
if the type can't cast\ndiff --git a/sql/core/src/test/resources/sql-tests/results/listagg-collations.sql.out b/sql/core/src/test/resources/sql-tests/results/listagg-collations.sql.out\nindex 1f8c5822e7d..b7de4e28813 100644\n--- a/sql/core/src/test/resources/sql-tests/results/listagg-collations.sql.out\n+++ b/sql/core/src/test/resources/sql-tests/results/listagg-collations.sql.out\n@@ -40,14 +40,6 @@ struct<len(c1):int,regexp_count(c1, a):int,regexp_count(c1, b):int>\n 2\t1\t1\n \n \n--- !query\n-SELECT lower(listagg(DISTINCT c1 COLLATE utf8_lcase) WITHIN GROUP (ORDER BY c1 COLLATE utf8_lcase)) FROM (VALUES ('a'), ('B'), ('b'), ('A')) AS t(c1)\n--- !query schema\n-struct<lower(listagg(DISTINCT collate(c1, utf8_lcase), NULL) WITHIN GROUP (ORDER BY collate(c1, utf8_lcase) ASC NULLS FIRST)):string collate UTF8_LCASE>\n--- !query output\n-ab\n-\n-\n -- !query\n WITH t(c1) AS (SELECT replace(listagg(DISTINCT col1 COLLATE unicode_rtrim) COLLATE utf8_binary, ' ', '') FROM (VALUES ('xbc  '), ('xbc '), ('a'), ('xbc'))) SELECT len(c1), regexp_count(c1, 'a'), regexp_count(c1, 'xbc') FROM t\n -- !query schema\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala\nindex 0f42502f1d9..e9ff802141f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala\n@@ -39,7 +39,7 @@ import org.apache.spark.sql.catalyst.util.DateTimeConstants\n import org.apache.spark.sql.execution.{ColumnarToRowExec, ExecSubqueryExpression, RDDScanExec, SparkPlan, SparkPlanInfo}\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, AQEPropagateEmptyRelation}\n import org.apache.spark.sql.execution.columnar._\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.ui.SparkListenerSQLAdaptiveExecutionUpdate\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -520,7 +520,8 @@ class CachedTableSuite extends QueryTest with SQLTestUtils\n       df.collect()\n     }\n     assert(\n-      collect(df.queryExecution.executedPlan) { case e: ShuffleExchangeExec => e }.size == expected)\n+      collect(df.queryExecution.executedPlan) {\n+        case _: ShuffleExchangeLike => 1 }.size == expected)\n   }\n \n   test(\"A cached table preserves the partitioning and ordering of its cached SparkPlan\") {\n@@ -1659,9 +1660,18 @@ class CachedTableSuite extends QueryTest with SQLTestUtils\n           _.nodeName.contains(\"TableCacheQueryStage\"))\n         val aqeNode = findNodeInSparkPlanInfo(inMemoryScanNode.get,\n           _.nodeName.contains(\"AdaptiveSparkPlan\"))\n-        val aqePlanRoot = findNodeInSparkPlanInfo(inMemoryScanNode.get,\n-          _.nodeName.contains(\"ResultQueryStage\"))\n-        aqePlanRoot.get.children.head.nodeName == \"AQEShuffleRead\"\n+        // Spark 4.0 wraps results in ResultQueryStage. 
The coalescing indicator is AQEShuffleRead\n+        // as the direct child of InputAdapter.\n+        // AdaptiveSparkPlan -> ResultQueryStage -> WholeStageCodegen ->\n+        //     CometColumnarToRow -> InputAdapter -> AQEShuffleRead (if coalesced)\n+        val resultStage = aqeNode.get.children.head // ResultQueryStage\n+        val wsc = resultStage.children.head // WholeStageCodegen\n+        val c2r = wsc.children.head // ColumnarToRow or CometColumnarToRow\n+        val inputAdapter = c2r.children.head // InputAdapter\n+        resultStage.nodeName == \"ResultQueryStage\" &&\n+          wsc.nodeName.startsWith(\"WholeStageCodegen\") && // could be \"WholeStageCodegen (1)\"\n+          (c2r.nodeName == \"CometColumnarToRow\" || c2r.nodeName == \"ColumnarToRow\") &&\n+          inputAdapter.children.head.nodeName == \"AQEShuffleRead\"\n       }\n \n       withTempView(\"t0\", \"t1\", \"t2\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala\nindex 9db406ff12f..245e4caa319 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameAggregateSuite.scala\n@@ -30,7 +30,7 @@ import org.apache.spark.sql.errors.DataTypeErrors.toSQLId\n import org.apache.spark.sql.execution.WholeStageCodegenExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.aggregate.{HashAggregateExec, ObjectHashAggregateExec, SortAggregateExec}\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.expressions.Window\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -855,7 +855,7 @@ class DataFrameAggregateSuite extends QueryTest\n       assert(objHashAggPlans.nonEmpty)\n \n       val exchangePlans = collect(aggPlan) {\n-        case shuffle: ShuffleExchangeExec => shuffle\n+        case shuffle: ShuffleExchangeLike => shuffle\n       }\n       assert(exchangePlans.length == 1)\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala\nindex ed182322aec..1ae6afa686a 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameJoinSuite.scala\n@@ -435,7 +435,9 @@ class DataFrameJoinSuite extends QueryTest\n \n     withTempDatabase { dbName =>\n       withTable(table1Name, table2Name) {\n-        withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n+        withSQLConf(\n+            SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n+            \"spark.comet.enabled\" -> \"false\") {\n           spark.range(50).write.saveAsTable(s\"$dbName.$table1Name\")\n           spark.range(100).write.saveAsTable(s\"$dbName.$table2Name\")\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala\nindex 5b88eeefeca..d4f07bc182a 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameSuite.scala\n@@ -36,11 +36,12 @@ import org.apache.spark.sql.catalyst.expressions.{Attribute, 
AttributeReference,\n import org.apache.spark.sql.catalyst.optimizer.ConvertToLocalRelation\n import org.apache.spark.sql.catalyst.parser.ParseException\n import org.apache.spark.sql.catalyst.plans.logical.{Filter, LeafNode, LocalRelation, LogicalPlan, OneRowRelation}\n+import org.apache.spark.sql.comet.CometBroadcastExchangeExec\n import org.apache.spark.sql.connector.FakeV2Provider\n import org.apache.spark.sql.execution.{FilterExec, LogicalRDD, QueryExecution, SortExec, WholeStageCodegenExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.aggregate.HashAggregateExec\n-import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ReusedExchangeExec, ShuffleExchangeExec, ShuffleExchangeLike}\n+import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ReusedExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.expressions.{Aggregator, Window}\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -1493,7 +1494,7 @@ class DataFrameSuite extends QueryTest\n           fail(\"Should not have back to back Aggregates\")\n         }\n         atFirstAgg = true\n-      case e: ShuffleExchangeExec => atFirstAgg = false\n+      case _: ShuffleExchangeLike => atFirstAgg = false\n       case _ =>\n     }\n   }\n@@ -1683,7 +1684,7 @@ class DataFrameSuite extends QueryTest\n       checkAnswer(join, df)\n       assert(\n         collect(join.queryExecution.executedPlan) {\n-          case e: ShuffleExchangeExec => true }.size === 1)\n+          case _: ShuffleExchangeLike => true }.size === 1)\n       assert(\n         collect(join.queryExecution.executedPlan) { case e: ReusedExchangeExec => true }.size === 1)\n       val broadcasted = broadcast(join)\n@@ -1691,10 +1692,12 @@ class DataFrameSuite extends QueryTest\n       checkAnswer(join2, df)\n       assert(\n         collect(join2.queryExecution.executedPlan) {\n-          case e: ShuffleExchangeExec => true }.size == 1)\n+          case _: ShuffleExchangeLike => true }.size == 1)\n       assert(\n         collect(join2.queryExecution.executedPlan) {\n-          case e: BroadcastExchangeExec => true }.size === 1)\n+          case e: BroadcastExchangeExec => true\n+          case _: CometBroadcastExchangeExec => true\n+        }.size === 1)\n       assert(\n         collect(join2.queryExecution.executedPlan) { case e: ReusedExchangeExec => true }.size == 4)\n     }\n@@ -2092,7 +2095,7 @@ class DataFrameSuite extends QueryTest\n \n     // Assert that no extra shuffle introduced by cogroup.\n     val exchanges = collect(df3.queryExecution.executedPlan) {\n-      case h: ShuffleExchangeExec => h\n+      case h: ShuffleExchangeLike => h\n     }\n     assert(exchanges.size == 2)\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala\nindex 01e72daead4..0a8d1e8b9b9 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DataFrameWindowFunctionsSuite.scala\n@@ -24,8 +24,9 @@ import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression\n import org.apache.spark.sql.catalyst.optimizer.TransposeWindow\n import org.apache.spark.sql.catalyst.plans.logical.{Window => LogicalWindow}\n import org.apache.spark.sql.catalyst.plans.physical.HashPartitioning\n+import 
org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n-import org.apache.spark.sql.execution.exchange.{ENSURE_REQUIREMENTS, Exchange, ShuffleExchangeExec}\n+import org.apache.spark.sql.execution.exchange.{ENSURE_REQUIREMENTS, Exchange, ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.window.WindowExec\n import org.apache.spark.sql.expressions.{Aggregator, MutableAggregationBuffer, UserDefinedAggregateFunction, Window}\n import org.apache.spark.sql.functions._\n@@ -1142,10 +1143,12 @@ class DataFrameWindowFunctionsSuite extends QueryTest\n     }\n \n     def isShuffleExecByRequirement(\n-        plan: ShuffleExchangeExec,\n+        plan: ShuffleExchangeLike,\n         desiredClusterColumns: Seq[String]): Boolean = plan match {\n       case ShuffleExchangeExec(op: HashPartitioning, _, ENSURE_REQUIREMENTS, _) =>\n         partitionExpressionsColumns(op.expressions) === desiredClusterColumns\n+      case CometShuffleExchangeExec(op: HashPartitioning, _, _, ENSURE_REQUIREMENTS, _, _) =>\n+        partitionExpressionsColumns(op.expressions) === desiredClusterColumns\n       case _ => false\n     }\n \n@@ -1168,7 +1171,7 @@ class DataFrameWindowFunctionsSuite extends QueryTest\n       val shuffleByRequirement = windowed.queryExecution.executedPlan.exists {\n         case w: WindowExec =>\n           w.child.exists {\n-            case s: ShuffleExchangeExec => isShuffleExecByRequirement(s, Seq(\"key1\", \"key2\"))\n+            case s: ShuffleExchangeLike => isShuffleExecByRequirement(s, Seq(\"key1\", \"key2\"))\n             case _ => false\n           }\n         case _ => false\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala\nindex 81713c777bc..b5f92ed9742 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DatasetSuite.scala\n@@ -46,7 +46,7 @@ import org.apache.spark.sql.catalyst.trees.DataFrameQueryContext\n import org.apache.spark.sql.catalyst.util.sideBySide\n import org.apache.spark.sql.execution.{LogicalRDD, RDDScanExec, SQLExecution}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n-import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ShuffleExchangeExec}\n+import org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.streaming.MemoryStream\n import org.apache.spark.sql.expressions.UserDefinedFunction\n import org.apache.spark.sql.functions._\n@@ -2415,7 +2415,7 @@ class DatasetSuite extends QueryTest\n \n     // Assert that no extra shuffle introduced by cogroup.\n     val exchanges = collect(df3.queryExecution.executedPlan) {\n-      case h: ShuffleExchangeExec => h\n+      case h: ShuffleExchangeLike => h\n     }\n     assert(exchanges.size == 2)\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala\nindex 2c24cc7d570..dae4419bd1c 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/DynamicPartitionPruningSuite.scala\n@@ -22,6 +22,7 @@ import org.scalatest.GivenWhenThen\n import 
org.apache.spark.sql.catalyst.expressions.{DynamicPruningExpression, Expression}\n import org.apache.spark.sql.catalyst.expressions.CodegenObjectFactoryMode._\n import org.apache.spark.sql.catalyst.plans.ExistenceJoin\n+import org.apache.spark.sql.comet.CometScanExec\n import org.apache.spark.sql.connector.catalog.{InMemoryTableCatalog, InMemoryTableWithV2FilterCatalog}\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.adaptive._\n@@ -262,6 +263,9 @@ abstract class DynamicPartitionPruningSuiteBase\n       case s: BatchScanExec => s.runtimeFilters.collect {\n         case d: DynamicPruningExpression => d.child\n       }\n+      case s: CometScanExec => s.partitionFilters.collect {\n+        case d: DynamicPruningExpression => d.child\n+      }\n       case _ => Nil\n     }\n   }\n@@ -1027,7 +1031,8 @@ abstract class DynamicPartitionPruningSuiteBase\n     }\n   }\n \n-  test(\"avoid reordering broadcast join keys to match input hash partitioning\") {\n+  test(\"avoid reordering broadcast join keys to match input hash partitioning\",\n+    IgnoreComet(\"TODO: https://github.com/apache/datafusion-comet/issues/1839\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"false\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n       withTable(\"large\", \"dimTwo\", \"dimThree\") {\n@@ -1151,7 +1156,8 @@ abstract class DynamicPartitionPruningSuiteBase\n     }\n   }\n \n-  test(\"join key with multiple references on the filtering plan\") {\n+  test(\"join key with multiple references on the filtering plan\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3321\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"true\",\n       SQLConf.ADAPTIVE_OPTIMIZER_EXCLUDED_RULES.key -> AQEPropagateEmptyRelation.ruleName,\n       SQLConf.ANSI_ENABLED.key -> \"false\" // ANSI mode doesn't support \"String + String\"\n@@ -1215,7 +1221,8 @@ abstract class DynamicPartitionPruningSuiteBase\n   }\n \n   test(\"SPARK-32509: Unused Dynamic Pruning filter shouldn't affect \" +\n-    \"canonicalization and exchange reuse\") {\n+    \"canonicalization and exchange reuse\",\n+    IgnoreComet(\"TODO: https://github.com/apache/datafusion-comet/issues/1839\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"true\") {\n       withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n         val df = sql(\n@@ -1330,6 +1337,7 @@ abstract class DynamicPartitionPruningSuiteBase\n   }\n \n   test(\"Subquery reuse across the whole plan\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3321\"),\n     DisableAdaptiveExecution(\"DPP in AQE must reuse broadcast\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_ENABLED.key -> \"true\",\n       SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"false\",\n@@ -1424,7 +1432,8 @@ abstract class DynamicPartitionPruningSuiteBase\n     }\n   }\n \n-  test(\"SPARK-34637: DPP side broadcast query stage is created firstly\") {\n+  test(\"SPARK-34637: DPP side broadcast query stage is created firstly\",\n+    IgnoreComet(\"TODO: https://github.com/apache/datafusion-comet/issues/1839\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"true\") {\n       val df = sql(\n         \"\"\" WITH v as (\n@@ -1699,7 +1708,8 @@ abstract class DynamicPartitionPruningV1Suite extends DynamicPartitionPruningDat\n    * 
Check the static scan metrics with and without DPP\n    */\n   test(\"static scan metrics\",\n-    DisableAdaptiveExecution(\"DPP in AQE must reuse broadcast\")) {\n+    DisableAdaptiveExecution(\"DPP in AQE must reuse broadcast\"),\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3442\")) {\n     withSQLConf(SQLConf.DYNAMIC_PARTITION_PRUNING_ENABLED.key -> \"true\",\n       SQLConf.DYNAMIC_PARTITION_PRUNING_REUSE_BROADCAST_ONLY.key -> \"false\",\n       SQLConf.EXCHANGE_REUSE_ENABLED.key -> \"false\") {\n@@ -1730,6 +1740,8 @@ abstract class DynamicPartitionPruningV1Suite extends DynamicPartitionPruningDat\n               case s: BatchScanExec =>\n                 // we use f1 col for v2 tables due to schema pruning\n                 s.output.exists(_.exists(_.argString(maxFields = 100).contains(\"f1\")))\n+              case s: CometScanExec =>\n+                s.output.exists(_.exists(_.argString(maxFields = 100).contains(\"fid\")))\n               case _ => false\n             }\n           assert(scanOption.isDefined)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala\nindex 9c90e0105a4..fadf2f0f698 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/ExplainSuite.scala\n@@ -470,7 +470,8 @@ class ExplainSuite extends ExplainSuiteHelper with DisableAdaptiveExecutionSuite\n     }\n   }\n \n-  test(\"Explain formatted output for scan operator for datasource V2\") {\n+  test(\"Explain formatted output for scan operator for datasource V2\",\n+      IgnoreComet(\"Comet explain output is different\")) {\n     withTempDir { dir =>\n       Seq(\"parquet\", \"orc\", \"csv\", \"json\").foreach { fmt =>\n         val basePath = dir.getCanonicalPath + \"/\" + fmt\n@@ -548,7 +549,9 @@ class ExplainSuite extends ExplainSuiteHelper with DisableAdaptiveExecutionSuite\n   }\n }\n \n-class ExplainSuiteAE extends ExplainSuiteHelper with EnableAdaptiveExecutionSuite {\n+// Ignored when Comet is enabled. 
Comet changes expected query plans.\n+class ExplainSuiteAE extends ExplainSuiteHelper with EnableAdaptiveExecutionSuite\n+    with IgnoreCometSuite {\n   import testImplicits._\n \n   test(\"SPARK-35884: Explain Formatted\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala\nindex 9c529d14221..6d5db65b5d8 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala\n@@ -33,6 +33,8 @@ import org.apache.spark.sql.catalyst.expressions.{AttributeReference, GreaterTha\n import org.apache.spark.sql.catalyst.expressions.IntegralLiteralTestUtils.{negativeInt, positiveInt}\n import org.apache.spark.sql.catalyst.plans.logical.Filter\n import org.apache.spark.sql.catalyst.types.DataTypeUtils\n+import org.apache.spark.sql.catalyst.util.quietly\n+import org.apache.spark.sql.comet.{CometBatchScanExec, CometNativeScanExec, CometScanExec, CometSortMergeJoinExec}\n import org.apache.spark.sql.execution.{FileSourceScanLike, SimpleMode}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.datasources.FilePartition\n@@ -203,7 +205,11 @@ class FileBasedDataSourceSuite extends QueryTest\n   }\n \n   allFileBasedDataSources.foreach { format =>\n-    testQuietly(s\"Enabling/disabling ignoreMissingFiles using $format\") {\n+    val ignoreMissingTags: Seq[org.scalatest.Tag] = if (format == \"parquet\") {\n+      Seq(IgnoreCometNativeDataFusion(\n+        \"https://github.com/apache/datafusion-comet/issues/3321\"))\n+    } else Seq.empty\n+    test(s\"Enabling/disabling ignoreMissingFiles using $format\", ignoreMissingTags: _*) { quietly {\n       def testIgnoreMissingFiles(options: Map[String, String]): Unit = {\n         withTempDir { dir =>\n           val basePath = dir.getCanonicalPath\n@@ -263,7 +269,7 @@ class FileBasedDataSourceSuite extends QueryTest\n           }\n         }\n       }\n-    }\n+    }}\n   }\n \n   Seq(\"json\", \"orc\").foreach { format =>\n@@ -668,18 +674,25 @@ class FileBasedDataSourceSuite extends QueryTest\n             checkAnswer(sql(s\"select A from $tableName\"), data.select(\"A\"))\n \n             // RuntimeException is triggered at executor side, which is then wrapped as\n-            // SparkException at driver side\n+            // SparkException at driver side. 
Comet native readers throw\n+            // SparkRuntimeException directly without the SparkException wrapper.\n+            def getDuplicateFieldError(query: String): SparkRuntimeException = {\n+              try {\n+                sql(query).collect()\n+                fail(\"Expected an exception\").asInstanceOf[SparkRuntimeException]\n+              } catch {\n+                case e: SparkException =>\n+                  e.getCause.asInstanceOf[SparkRuntimeException]\n+                case e: SparkRuntimeException => e\n+              }\n+            }\n             checkError(\n-              exception = intercept[SparkException] {\n-                sql(s\"select b from $tableName\").collect()\n-              }.getCause.asInstanceOf[SparkRuntimeException],\n+              exception = getDuplicateFieldError(s\"select b from $tableName\"),\n               condition = \"_LEGACY_ERROR_TEMP_2093\",\n               parameters = Map(\"requiredFieldName\" -> \"b\", \"matchedOrcFields\" -> \"[b, B]\")\n             )\n             checkError(\n-              exception = intercept[SparkException] {\n-                sql(s\"select B from $tableName\").collect()\n-              }.getCause.asInstanceOf[SparkRuntimeException],\n+              exception = getDuplicateFieldError(s\"select B from $tableName\"),\n               condition = \"_LEGACY_ERROR_TEMP_2093\",\n               parameters = Map(\"requiredFieldName\" -> \"b\", \"matchedOrcFields\" -> \"[b, B]\")\n             )\n@@ -967,6 +980,7 @@ class FileBasedDataSourceSuite extends QueryTest\n             assert(bJoinExec.isEmpty)\n             val smJoinExec = collect(joinedDF.queryExecution.executedPlan) {\n               case smJoin: SortMergeJoinExec => smJoin\n+              case smJoin: CometSortMergeJoinExec => smJoin\n             }\n             assert(smJoinExec.nonEmpty)\n           }\n@@ -1027,6 +1041,7 @@ class FileBasedDataSourceSuite extends QueryTest\n \n           val fileScan = df.queryExecution.executedPlan collectFirst {\n             case BatchScanExec(_, f: FileScan, _, _, _, _) => f\n+            case CometBatchScanExec(BatchScanExec(_, f: FileScan, _, _, _, _), _, _) => f\n           }\n           assert(fileScan.nonEmpty)\n           assert(fileScan.get.partitionFilters.nonEmpty)\n@@ -1068,6 +1083,7 @@ class FileBasedDataSourceSuite extends QueryTest\n \n           val fileScan = df.queryExecution.executedPlan collectFirst {\n             case BatchScanExec(_, f: FileScan, _, _, _, _) => f\n+            case CometBatchScanExec(BatchScanExec(_, f: FileScan, _, _, _, _), _, _) => f\n           }\n           assert(fileScan.nonEmpty)\n           assert(fileScan.get.partitionFilters.isEmpty)\n@@ -1252,6 +1268,9 @@ class FileBasedDataSourceSuite extends QueryTest\n           val filters = df.queryExecution.executedPlan.collect {\n             case f: FileSourceScanLike => f.dataFilters\n             case b: BatchScanExec => b.scan.asInstanceOf[FileScan].dataFilters\n+            case b: CometScanExec => b.dataFilters\n+            case b: CometNativeScanExec => b.dataFilters\n+            case b: CometBatchScanExec => b.scan.asInstanceOf[FileScan].dataFilters\n           }.flatten\n           assert(filters.contains(GreaterThan(scan.logicalPlan.output.head, Literal(5L))))\n         }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/IgnoreComet.scala b/sql/core/src/test/scala/org/apache/spark/sql/IgnoreComet.scala\nnew file mode 100644\nindex 00000000000..5691536c114\n--- /dev/null\n+++ 
b/sql/core/src/test/scala/org/apache/spark/sql/IgnoreComet.scala\n@@ -0,0 +1,45 @@\n+/*\n+ * Licensed to the Apache Software Foundation (ASF) under one or more\n+ * contributor license agreements.  See the NOTICE file distributed with\n+ * this work for additional information regarding copyright ownership.\n+ * The ASF licenses this file to You under the Apache License, Version 2.0\n+ * (the \"License\"); you may not use this file except in compliance with\n+ * the License.  You may obtain a copy of the License at\n+ *\n+ *    http://www.apache.org/licenses/LICENSE-2.0\n+ *\n+ * Unless required by applicable law or agreed to in writing, software\n+ * distributed under the License is distributed on an \"AS IS\" BASIS,\n+ * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n+ * See the License for the specific language governing permissions and\n+ * limitations under the License.\n+ */\n+\n+package org.apache.spark.sql\n+\n+import org.scalactic.source.Position\n+import org.scalatest.Tag\n+\n+import org.apache.spark.sql.test.SQLTestUtils\n+\n+/**\n+ * Tests with this tag will be ignored when Comet is enabled (e.g., via `ENABLE_COMET`).\n+ */\n+case class IgnoreComet(reason: String) extends Tag(\"DisableComet\")\n+case class IgnoreCometNativeIcebergCompat(reason: String) extends Tag(\"DisableComet\")\n+case class IgnoreCometNativeDataFusion(reason: String) extends Tag(\"DisableComet\")\n+case class IgnoreCometNativeScan(reason: String) extends Tag(\"DisableComet\")\n+\n+/**\n+ * Helper trait that disables Comet for all tests regardless of default config values.\n+ */\n+trait IgnoreCometSuite extends SQLTestUtils {\n+  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)\n+    (implicit pos: Position): Unit = {\n+    if (isCometEnabled) {\n+      ignore(testName + \" (disabled when Comet is on)\", testTags: _*)(testFun)\n+    } else {\n+      super.test(testName, testTags: _*)(testFun)\n+    }\n+  }\n+}\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/InjectRuntimeFilterSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/InjectRuntimeFilterSuite.scala\nindex 7d7185ae6c1..442a5bddeb8 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/InjectRuntimeFilterSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/InjectRuntimeFilterSuite.scala\n@@ -442,7 +442,8 @@ class InjectRuntimeFilterSuite extends QueryTest with SQLTestUtils with SharedSp\n   }\n \n   test(\"Runtime bloom filter join: do not add bloom filter if dpp filter exists \" +\n-    \"on the same column\") {\n+    \"on the same column\",\n+    IgnoreComet(\"TODO: Support SubqueryBroadcastExec in Comet: #242\")) {\n     withSQLConf(SQLConf.RUNTIME_BLOOM_FILTER_APPLICATION_SIDE_SCAN_SIZE_THRESHOLD.key -> \"3000\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"2000\") {\n       assertDidNotRewriteWithBloomFilter(\"select * from bf5part join bf2 on \" +\n@@ -451,7 +452,8 @@ class InjectRuntimeFilterSuite extends QueryTest with SQLTestUtils with SharedSp\n   }\n \n   test(\"Runtime bloom filter join: add bloom filter if dpp filter exists on \" +\n-    \"a different column\") {\n+    \"a different column\",\n+    IgnoreComet(\"TODO: Support SubqueryBroadcastExec in Comet: #242\")) {\n     withSQLConf(SQLConf.RUNTIME_BLOOM_FILTER_APPLICATION_SIDE_SCAN_SIZE_THRESHOLD.key -> \"3000\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"2000\") {\n       assertRewroteWithBloomFilter(\"select * from bf5part join bf2 on \" +\ndiff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala\nindex 53e47f428c3..a55d8f0c161 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/JoinHintSuite.scala\n@@ -23,6 +23,7 @@ import org.apache.spark.sql.catalyst.optimizer.{BuildLeft, BuildRight, BuildSide\n import org.apache.spark.sql.catalyst.plans.PlanTest\n import org.apache.spark.sql.catalyst.plans.logical._\n import org.apache.spark.sql.catalyst.rules.RuleExecutor\n+import org.apache.spark.sql.comet.{CometHashJoinExec, CometSortMergeJoinExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.joins._\n import org.apache.spark.sql.internal.SQLConf\n@@ -362,6 +363,7 @@ class JoinHintSuite extends PlanTest with SharedSparkSession with AdaptiveSparkP\n     val executedPlan = df.queryExecution.executedPlan\n     val shuffleHashJoins = collect(executedPlan) {\n       case s: ShuffledHashJoinExec => s\n+      case c: CometHashJoinExec => c.originalPlan.asInstanceOf[ShuffledHashJoinExec]\n     }\n     assert(shuffleHashJoins.size == 1)\n     assert(shuffleHashJoins.head.buildSide == buildSide)\n@@ -371,6 +373,7 @@ class JoinHintSuite extends PlanTest with SharedSparkSession with AdaptiveSparkP\n     val executedPlan = df.queryExecution.executedPlan\n     val shuffleMergeJoins = collect(executedPlan) {\n       case s: SortMergeJoinExec => s\n+      case c: CometSortMergeJoinExec => c\n     }\n     assert(shuffleMergeJoins.size == 1)\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala\nindex aaac0ebc9aa..fbef0774d46 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala\n@@ -29,7 +29,8 @@ import org.apache.spark.sql.catalyst.analysis.UnresolvedRelation\n import org.apache.spark.sql.catalyst.expressions.{Ascending, GenericRow, SortOrder}\n import org.apache.spark.sql.catalyst.optimizer.{BuildLeft, BuildRight, JoinSelectionHelper}\n import org.apache.spark.sql.catalyst.plans.logical.{Filter, HintInfo, Join, JoinHint, NO_BROADCAST_AND_REPLICATION}\n-import org.apache.spark.sql.execution.{BinaryExecNode, FilterExec, ProjectExec, SortExec, SparkPlan, WholeStageCodegenExec}\n+import org.apache.spark.sql.comet._\n+import org.apache.spark.sql.execution.{BinaryExecNode, ColumnarToRowExec, FilterExec, InputAdapter, ProjectExec, SortExec, SparkPlan, WholeStageCodegenExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.exchange.{ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.joins._\n@@ -805,7 +806,8 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n     }\n   }\n \n-  test(\"test SortMergeJoin (with spill)\") {\n+  test(\"test SortMergeJoin (with spill)\",\n+      IgnoreComet(\"TODO: Comet SMJ doesn't support spill yet\")) {\n     withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"1\",\n       SQLConf.SORT_MERGE_JOIN_EXEC_BUFFER_IN_MEMORY_THRESHOLD.key -> \"0\",\n       SQLConf.SORT_MERGE_JOIN_EXEC_BUFFER_SPILL_THRESHOLD.key -> \"1\") {\n@@ -931,10 +933,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n       val physical = df.queryExecution.sparkPlan\n       val physicalJoins = 
physical.collect {\n         case j: SortMergeJoinExec => j\n+        case j: CometSortMergeJoinExec => j.originalPlan.asInstanceOf[SortMergeJoinExec]\n       }\n       val executed = df.queryExecution.executedPlan\n       val executedJoins = collect(executed) {\n         case j: SortMergeJoinExec => j\n+        case j: CometSortMergeJoinExec => j.originalPlan.asInstanceOf[SortMergeJoinExec]\n       }\n       // This only applies to the above tested queries, in which a child SortMergeJoin always\n       // contains the SortOrder required by its parent SortMergeJoin. Thus, SortExec should never\n@@ -1180,9 +1184,11 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n       val plan = df1.join(df2.hint(\"SHUFFLE_HASH\"), $\"k1\" === $\"k2\", joinType)\n         .groupBy($\"k1\").count()\n         .queryExecution.executedPlan\n-      assert(collect(plan) { case _: ShuffledHashJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: ShuffledHashJoinExec | _: CometHashJoinExec => true }.size === 1)\n       // No extra shuffle before aggregate\n-      assert(collect(plan) { case _: ShuffleExchangeExec => true }.size === 2)\n+      assert(collect(plan) {\n+        case _: ShuffleExchangeLike => true }.size === 2)\n     })\n   }\n \n@@ -1199,10 +1205,11 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n         .join(df4.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k4\", joinType)\n         .queryExecution\n         .executedPlan\n-      assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 2)\n+      assert(collect(plan) {\n+        case _: SortMergeJoinExec | _: CometSortMergeJoinExec => true }.size === 2)\n       assert(collect(plan) { case _: BroadcastHashJoinExec => true }.size === 1)\n       // No extra sort before last sort merge join\n-      assert(collect(plan) { case _: SortExec => true }.size === 3)\n+      assert(collect(plan) { case _: SortExec | _: CometSortExec => true }.size === 3)\n     })\n \n     // Test shuffled hash join\n@@ -1212,10 +1219,13 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n         .join(df4.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k4\", joinType)\n         .queryExecution\n         .executedPlan\n-      assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 2)\n-      assert(collect(plan) { case _: ShuffledHashJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: SortMergeJoinExec | _: CometSortMergeJoinExec => true }.size === 2)\n+      assert(collect(plan) {\n+        case _: ShuffledHashJoinExec | _: CometHashJoinExec => true }.size === 1)\n       // No extra sort before last sort merge join\n-      assert(collect(plan) { case _: SortExec => true }.size === 3)\n+      assert(collect(plan) {\n+        case _: SortExec | _: CometSortExec => true }.size === 3)\n     })\n   }\n \n@@ -1306,12 +1316,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n     inputDFs.foreach { case (df1, df2, joinExprs) =>\n       val smjDF = df1.join(df2.hint(\"SHUFFLE_MERGE\"), joinExprs, \"full\")\n       assert(collect(smjDF.queryExecution.executedPlan) {\n-        case _: SortMergeJoinExec => true }.size === 1)\n+        case _: SortMergeJoinExec | _: CometSortMergeJoinExec => true }.size === 1)\n       val smjResult = smjDF.collect()\n \n       val shjDF = df1.join(df2.hint(\"SHUFFLE_HASH\"), joinExprs, \"full\")\n       
assert(collect(shjDF.queryExecution.executedPlan) {\n-        case _: ShuffledHashJoinExec => true }.size === 1)\n+        case _: ShuffledHashJoinExec | _: CometHashJoinExec => true }.size === 1)\n       // Same result between shuffled hash join and sort merge join\n       checkAnswer(shjDF, smjResult)\n     }\n@@ -1370,12 +1380,14 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           val smjDF = df1.hint(\"SHUFFLE_MERGE\").join(df2, joinExprs, \"leftouter\")\n           assert(collect(smjDF.queryExecution.executedPlan) {\n             case _: SortMergeJoinExec => true\n+            case _: CometSortMergeJoinExec => true\n           }.size === 1)\n           val smjResult = smjDF.collect()\n \n           val shjDF = df1.hint(\"SHUFFLE_HASH\").join(df2, joinExprs, \"leftouter\")\n           assert(collect(shjDF.queryExecution.executedPlan) {\n             case _: ShuffledHashJoinExec => true\n+            case _: CometHashJoinExec => true\n           }.size === 1)\n           // Same result between shuffled hash join and sort merge join\n           checkAnswer(shjDF, smjResult)\n@@ -1386,12 +1398,14 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           val smjDF = df2.join(df1.hint(\"SHUFFLE_MERGE\"), joinExprs, \"rightouter\")\n           assert(collect(smjDF.queryExecution.executedPlan) {\n             case _: SortMergeJoinExec => true\n+            case _: CometSortMergeJoinExec => true\n           }.size === 1)\n           val smjResult = smjDF.collect()\n \n           val shjDF = df2.join(df1.hint(\"SHUFFLE_HASH\"), joinExprs, \"rightouter\")\n           assert(collect(shjDF.queryExecution.executedPlan) {\n             case _: ShuffledHashJoinExec => true\n+            case _: CometHashJoinExec => true\n           }.size === 1)\n           // Same result between shuffled hash join and sort merge join\n           checkAnswer(shjDF, smjResult)\n@@ -1435,13 +1449,20 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n         assert(shjCodegenDF.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_ : ShuffledHashJoinExec) => true\n           case WholeStageCodegenExec(ProjectExec(_, _ : ShuffledHashJoinExec)) => true\n+          case WholeStageCodegenExec(ColumnarToRowExec(InputAdapter(_: CometHashJoinExec))) =>\n+            true\n+          case WholeStageCodegenExec(ColumnarToRowExec(\n+            InputAdapter(CometProjectExec(_, _, _, _, _: CometHashJoinExec, _)))) => true\n+          case _: CometHashJoinExec => true\n         }.size === 1)\n         checkAnswer(shjCodegenDF, Seq.empty)\n \n         withSQLConf(SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key -> \"false\") {\n           val shjNonCodegenDF = df1.join(df2.hint(\"SHUFFLE_HASH\"), $\"k1\" === $\"k2\", joinType)\n           assert(shjNonCodegenDF.queryExecution.executedPlan.collect {\n-            case _: ShuffledHashJoinExec => true }.size === 1)\n+            case _: ShuffledHashJoinExec => true\n+            case _: CometHashJoinExec => true\n+          }.size === 1)\n           checkAnswer(shjNonCodegenDF, Seq.empty)\n         }\n       }\n@@ -1489,7 +1510,8 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           val plan = sql(getAggQuery(selectExpr, joinType)).queryExecution.executedPlan\n           assert(collect(plan) { case _: BroadcastNestedLoopJoinExec => true }.size === 1)\n           // Have shuffle before aggregation\n-          
assert(collect(plan) { case _: ShuffleExchangeExec => true }.size === 1)\n+          assert(collect(plan) {\n+            case _: ShuffleExchangeLike => true }.size === 1)\n       }\n \n       def getJoinQuery(selectExpr: String, joinType: String): String = {\n@@ -1518,9 +1540,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           }\n           val plan = sql(getJoinQuery(selectExpr, joinType)).queryExecution.executedPlan\n           assert(collect(plan) { case _: BroadcastNestedLoopJoinExec => true }.size === 1)\n-          assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 3)\n+          assert(collect(plan) {\n+            case _: SortMergeJoinExec => true\n+            case _: CometSortMergeJoinExec => true\n+          }.size === 3)\n           // No extra sort on left side before last sort merge join\n-          assert(collect(plan) { case _: SortExec => true }.size === 5)\n+          assert(collect(plan) { case _: SortExec | _: CometSortExec => true }.size === 5)\n       }\n \n       // Test output ordering is not preserved\n@@ -1529,9 +1554,12 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           val selectExpr = \"/*+ BROADCAST(left_t) */ k1 as k0\"\n           val plan = sql(getJoinQuery(selectExpr, joinType)).queryExecution.executedPlan\n           assert(collect(plan) { case _: BroadcastNestedLoopJoinExec => true }.size === 1)\n-          assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 3)\n+          assert(collect(plan) {\n+            case _: SortMergeJoinExec => true\n+            case _: CometSortMergeJoinExec => true\n+          }.size === 3)\n           // Have sort on left side before last sort merge join\n-          assert(collect(plan) { case _: SortExec => true }.size === 6)\n+          assert(collect(plan) { case _: SortExec | _: CometSortExec => true }.size === 6)\n       }\n \n       // Test singe partition\n@@ -1541,7 +1569,8 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n            |FROM range(0, 10, 1, 1) t1 FULL OUTER JOIN range(0, 10, 1, 1) t2\n            |\"\"\".stripMargin)\n       val plan = fullJoinDF.queryExecution.executedPlan\n-      assert(collect(plan) { case _: ShuffleExchangeExec => true}.size == 1)\n+      assert(collect(plan) {\n+        case _: ShuffleExchangeLike => true}.size == 1)\n       checkAnswer(fullJoinDF, Row(100))\n     }\n   }\n@@ -1614,6 +1643,9 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n           Seq(semiJoinDF, antiJoinDF).foreach { df =>\n             assert(collect(df.queryExecution.executedPlan) {\n               case j: ShuffledHashJoinExec if j.ignoreDuplicatedKey == ignoreDuplicatedKey => true\n+              case j: CometHashJoinExec\n+                if j.originalPlan.asInstanceOf[ShuffledHashJoinExec].ignoreDuplicatedKey ==\n+                  ignoreDuplicatedKey => true\n             }.size == 1)\n           }\n       }\n@@ -1658,14 +1690,20 @@ class JoinSuite extends QueryTest with SharedSparkSession with AdaptiveSparkPlan\n \n   test(\"SPARK-43113: Full outer join with duplicate stream-side references in condition (SMJ)\") {\n     def check(plan: SparkPlan): Unit = {\n-      assert(collect(plan) { case _: SortMergeJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: SortMergeJoinExec => true\n+        case _: CometSortMergeJoinExec => true\n+      }.size === 1)\n     }\n     
dupStreamSideColTest(\"MERGE\", check)\n   }\n \n   test(\"SPARK-43113: Full outer join with duplicate stream-side references in condition (SHJ)\") {\n     def check(plan: SparkPlan): Unit = {\n-      assert(collect(plan) { case _: ShuffledHashJoinExec => true }.size === 1)\n+      assert(collect(plan) {\n+        case _: ShuffledHashJoinExec => true\n+        case _: CometHashJoinExec => true\n+      }.size === 1)\n     }\n     dupStreamSideColTest(\"SHUFFLE_HASH\", check)\n   }\n@@ -1801,7 +1839,8 @@ class ThreadLeakInSortMergeJoinSuite\n       sparkConf.set(SHUFFLE_SPILL_NUM_ELEMENTS_FORCE_SPILL_THRESHOLD, 20))\n   }\n \n-  test(\"SPARK-47146: thread leak when doing SortMergeJoin (with spill)\") {\n+  test(\"SPARK-47146: thread leak when doing SortMergeJoin (with spill)\",\n+    IgnoreComet(\"Comet SMJ doesn't spill yet\")) {\n \n     withSQLConf(\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"1\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala\nindex ad424b3a7cc..4ece0117a34 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/PlanStabilitySuite.scala\n@@ -69,7 +69,7 @@ import org.apache.spark.tags.ExtendedSQLTest\n  * }}}\n  */\n // scalastyle:on line.size.limit\n-trait PlanStabilitySuite extends DisableAdaptiveExecutionSuite {\n+trait PlanStabilitySuite extends DisableAdaptiveExecutionSuite with IgnoreCometSuite {\n \n   protected val baseResourcePath = {\n     // use the same way as `SQLQueryTestSuite` to get the resource path\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala\nindex f294ff81021..7775027bcee 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/SQLQuerySuite.scala\n@@ -1524,7 +1524,8 @@ class SQLQuerySuite extends QueryTest with SharedSparkSession with AdaptiveSpark\n     checkAnswer(sql(\"select -0.001\"), Row(BigDecimal(\"-0.001\")))\n   }\n \n-  test(\"external sorting updates peak execution memory\") {\n+  test(\"external sorting updates peak execution memory\",\n+    IgnoreComet(\"TODO: native CometSort does not update peak execution memory\")) {\n     AccumulatorSuite.verifyPeakExecutionMemorySet(sparkContext, \"external sort\") {\n       sql(\"SELECT * FROM testData2 ORDER BY a ASC, b ASC\").collect()\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala\nindex c1c041509c3..7d463e4b85e 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionExtensionSuite.scala\n@@ -235,6 +235,8 @@ class SparkSessionExtensionSuite extends SparkFunSuite with SQLHelper with Adapt\n     withSession(extensions) { session =>\n       session.conf.set(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key, true)\n       session.conf.set(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key, \"-1\")\n+      // https://github.com/apache/datafusion-comet/issues/1197\n+      session.conf.set(\"spark.comet.enabled\", false)\n       assert(session.sessionState.columnarRules.contains(\n         MyColumnarRule(PreRuleReplaceAddWithBrokenVersion(), MyPostRule())))\n       import session.implicits._\n@@ -293,6 +295,8 @@ class 
SparkSessionExtensionSuite extends SparkFunSuite with SQLHelper with Adapt\n     }\n     withSession(extensions) { session =>\n       session.conf.set(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key, enableAQE)\n+      // https://github.com/apache/datafusion-comet/issues/1197\n+      session.conf.set(\"spark.comet.enabled\", false)\n       assert(session.sessionState.columnarRules.contains(\n         MyColumnarRule(PreRuleReplaceAddWithBrokenVersion(), MyPostRule())))\n       import session.implicits._\n@@ -331,6 +335,8 @@ class SparkSessionExtensionSuite extends SparkFunSuite with SQLHelper with Adapt\n     val session = SparkSession.builder()\n       .master(\"local[1]\")\n       .config(COLUMN_BATCH_SIZE.key, 2)\n+      // https://github.com/apache/datafusion-comet/issues/1197\n+      .config(\"spark.comet.enabled\", false)\n       .withExtensions { extensions =>\n         extensions.injectColumnar(session =>\n           MyColumnarRule(PreRuleReplaceAddWithBrokenVersion(), MyPostRule())) }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionJobTaggingAndCancellationSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionJobTaggingAndCancellationSuite.scala\nindex 5ba69c8f9d9..ac1256afe88 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionJobTaggingAndCancellationSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/SparkSessionJobTaggingAndCancellationSuite.scala\n@@ -82,6 +82,10 @@ class SparkSessionJobTaggingAndCancellationSuite\n   }\n \n   test(\"Tags set from session are prefixed with session UUID\") {\n+    // This test relies on job scheduling order which is timing-dependent and becomes unreliable\n+    // when Comet is enabled due to changes in async execution behaviour.\n+    assume(!classic.SparkSession.isCometEnabled,\n+      \"Skipped when Comet is enabled: test results are timing-dependent\")\n     sc = new SparkContext(\"local[2]\", \"test\")\n     val session = classic.SparkSession.builder().sparkContext(sc).getOrCreate()\n     import session.implicits._\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala\nindex 0df7f806272..92390bd819f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/StringFunctionsSuite.scala\n@@ -17,6 +17,8 @@\n \n package org.apache.spark.sql\n \n+import org.apache.comet.CometConf\n+\n import org.apache.spark.{SPARK_DOC_ROOT, SparkIllegalArgumentException, SparkRuntimeException}\n import org.apache.spark.sql.catalyst.expressions.Cast._\n import org.apache.spark.sql.catalyst.expressions.IsNotNull\n@@ -179,29 +181,31 @@ class StringFunctionsSuite extends QueryTest with SharedSparkSession {\n   }\n \n   test(\"string regex_replace / regex_extract\") {\n-    val df = Seq(\n-      (\"100-200\", \"(\\\\d+)-(\\\\d+)\", \"300\"),\n-      (\"100-200\", \"(\\\\d+)-(\\\\d+)\", \"400\"),\n-      (\"100-200\", \"(\\\\d+)\", \"400\")).toDF(\"a\", \"b\", \"c\")\n+    withSQLConf(CometConf.getExprAllowIncompatConfigKey(\"regexp\") -> \"true\") {\n+      val df = Seq(\n+        (\"100-200\", \"(\\\\d+)-(\\\\d+)\", \"300\"),\n+        (\"100-200\", \"(\\\\d+)-(\\\\d+)\", \"400\"),\n+        (\"100-200\", \"(\\\\d+)\", \"400\")).toDF(\"a\", \"b\", \"c\")\n \n-    checkAnswer(\n-      df.select(\n-        regexp_replace($\"a\", \"(\\\\d+)\", \"num\"),\n-        regexp_replace($\"a\", $\"b\", $\"c\"),\n-        
regexp_extract($\"a\", \"(\\\\d+)-(\\\\d+)\", 1)),\n-      Row(\"num-num\", \"300\", \"100\") :: Row(\"num-num\", \"400\", \"100\") ::\n-        Row(\"num-num\", \"400-400\", \"100\") :: Nil)\n-\n-    // for testing the mutable state of the expression in code gen.\n-    // This is a hack way to enable the codegen, thus the codegen is enable by default,\n-    // it will still use the interpretProjection if projection followed by a LocalRelation,\n-    // hence we add a filter operator.\n-    // See the optimizer rule `ConvertToLocalRelation`\n-    checkAnswer(\n-      df.filter(\"isnotnull(a)\").selectExpr(\n-        \"regexp_replace(a, b, c)\",\n-        \"regexp_extract(a, b, 1)\"),\n-      Row(\"300\", \"100\") :: Row(\"400\", \"100\") :: Row(\"400-400\", \"100\") :: Nil)\n+      checkAnswer(\n+        df.select(\n+          regexp_replace($\"a\", \"(\\\\d+)\", \"num\"),\n+          regexp_replace($\"a\", $\"b\", $\"c\"),\n+          regexp_extract($\"a\", \"(\\\\d+)-(\\\\d+)\", 1)),\n+        Row(\"num-num\", \"300\", \"100\") :: Row(\"num-num\", \"400\", \"100\") ::\n+          Row(\"num-num\", \"400-400\", \"100\") :: Nil)\n+\n+      // for testing the mutable state of the expression in code gen.\n+      // This is a hack way to enable the codegen, thus the codegen is enable by default,\n+      // it will still use the interpretProjection if projection followed by a LocalRelation,\n+      // hence we add a filter operator.\n+      // See the optimizer rule `ConvertToLocalRelation`\n+      checkAnswer(\n+        df.filter(\"isnotnull(a)\").selectExpr(\n+          \"regexp_replace(a, b, c)\",\n+          \"regexp_extract(a, b, 1)\"),\n+        Row(\"300\", \"100\") :: Row(\"400\", \"100\") :: Row(\"400-400\", \"100\") :: Nil)\n+    }\n   }\n \n   test(\"non-matching optional group\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala\nindex 2e33f6505ab..c74f3e5562f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala\n@@ -23,12 +23,14 @@ import org.apache.spark.SparkRuntimeException\n import org.apache.spark.sql.catalyst.expressions.SubqueryExpression\n import org.apache.spark.sql.catalyst.plans.{LeftAnti, LeftSemi}\n import org.apache.spark.sql.catalyst.plans.logical.{Aggregate, Filter, Join, LogicalPlan, Project, Sort, Union}\n+import org.apache.spark.sql.comet.CometScanExec\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecution}\n import org.apache.spark.sql.execution.datasources.FileScanRDD\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.joins.{BaseJoinExec, BroadcastHashJoinExec, BroadcastNestedLoopJoinExec}\n import org.apache.spark.sql.internal.SQLConf\n+import org.apache.spark.sql.IgnoreCometNativeDataFusion\n import org.apache.spark.sql.test.SharedSparkSession\n \n class SubquerySuite extends QueryTest\n@@ -1529,6 +1531,12 @@ class SubquerySuite extends QueryTest\n             fs.inputRDDs().forall(\n               _.asInstanceOf[FileScanRDD].filePartitions.forall(\n                 _.files.forall(_.urlEncodedPath.contains(\"p=0\"))))\n+        case WholeStageCodegenExec(ColumnarToRowExec(InputAdapter(\n+        fs @ CometScanExec(_, _, _, _, partitionFilters, _, _, _, _, 
_, _)))) =>\n+          partitionFilters.exists(ExecSubqueryExpression.hasSubquery) &&\n+            fs.inputRDDs().forall(\n+              _.asInstanceOf[FileScanRDD].filePartitions.forall(\n+                _.files.forall(_.urlEncodedPath.contains(\"p=0\"))))\n         case _ => false\n       })\n     }\n@@ -2094,7 +2102,7 @@ class SubquerySuite extends QueryTest\n \n       df.collect()\n       val exchanges = collect(df.queryExecution.executedPlan) {\n-        case s: ShuffleExchangeExec => s\n+        case s: ShuffleExchangeLike => s\n       }\n       assert(exchanges.size === 1)\n     }\n@@ -2674,22 +2682,31 @@ class SubquerySuite extends QueryTest\n     }\n   }\n \n-  test(\"SPARK-43402: FileSourceScanExec supports push down data filter with scalar subquery\") {\n+  test(\"SPARK-43402: FileSourceScanExec supports push down data filter with scalar subquery\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3321\")) {\n     def checkFileSourceScan(query: String, answer: Seq[Row]): Unit = {\n       val df = sql(query)\n       checkAnswer(df, answer)\n-      val fileSourceScanExec = collect(df.queryExecution.executedPlan) {\n-        case f: FileSourceScanExec => f\n+      val dataSourceScanExec = collect(df.queryExecution.executedPlan) {\n+        case f: FileSourceScanLike => f\n+        case c: CometScanExec => c\n       }\n       sparkContext.listenerBus.waitUntilEmpty()\n-      assert(fileSourceScanExec.size === 1)\n-      val scalarSubquery = fileSourceScanExec.head.dataFilters.flatMap(_.collect {\n-        case s: ScalarSubquery => s\n-      })\n+      assert(dataSourceScanExec.size === 1)\n+      val scalarSubquery = dataSourceScanExec.head match {\n+        case f: FileSourceScanLike =>\n+          f.dataFilters.flatMap(_.collect {\n+            case s: ScalarSubquery => s\n+          })\n+        case c: CometScanExec =>\n+          c.dataFilters.flatMap(_.collect {\n+            case s: ScalarSubquery => s\n+          })\n+      }\n       assert(scalarSubquery.length === 1)\n       assert(scalarSubquery.head.plan.isInstanceOf[ReusedSubqueryExec])\n-      assert(fileSourceScanExec.head.metrics(\"numFiles\").value === 1)\n-      assert(fileSourceScanExec.head.metrics(\"numOutputRows\").value === answer.size)\n+      assert(dataSourceScanExec.head.metrics(\"numFiles\").value === 1)\n+      assert(dataSourceScanExec.head.metrics(\"numOutputRows\").value === answer.size)\n     }\n \n     withTable(\"t1\", \"t2\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/VariantShreddingSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/VariantShreddingSuite.scala\nindex fee375db10a..8c2c24e2c5f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/VariantShreddingSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/VariantShreddingSuite.scala\n@@ -33,7 +33,9 @@ import org.apache.spark.sql.types._\n import org.apache.spark.types.variant._\n import org.apache.spark.unsafe.types.{UTF8String, VariantVal}\n \n-class VariantShreddingSuite extends QueryTest with SharedSparkSession with ParquetTest {\n+class VariantShreddingSuite extends QueryTest with SharedSparkSession with ParquetTest\n+    // TODO enable tests once https://github.com/apache/datafusion-comet/issues/2209 is fixed\n+    with IgnoreCometSuite {\n   def parseJson(s: String): VariantVal = {\n     val v = VariantBuilder.parseJson(s, false)\n     new VariantVal(v.getValue, v.getMetadata)\ndiff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/collation/CollationSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/collation/CollationSuite.scala\nindex 11e9547dfc5..637411056ae 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/collation/CollationSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/collation/CollationSuite.scala\n@@ -24,6 +24,7 @@ import org.apache.spark.sql.{AnalysisException, Row}\n import org.apache.spark.sql.catalyst.ExtendedAnalysisException\n import org.apache.spark.sql.catalyst.expressions._\n import org.apache.spark.sql.catalyst.util.CollationFactory\n+import org.apache.spark.sql.comet.{CometBroadcastHashJoinExec, CometHashJoinExec, CometSortMergeJoinExec}\n import org.apache.spark.sql.connector.{DatasourceV2SQLBase, FakeV2ProviderWithCustomSchema}\n import org.apache.spark.sql.connector.catalog.{Identifier, InMemoryTable}\n import org.apache.spark.sql.connector.catalog.CatalogV2Implicits.CatalogHelper\n@@ -55,7 +56,9 @@ class CollationSuite extends DatasourceV2SQLBase with AdaptiveSparkPlanHelper {\n     assert(\n       collectFirst(queryPlan) {\n         case _: SortMergeJoinExec => assert(isSortMergeForced)\n+        case _: CometSortMergeJoinExec => assert(isSortMergeForced)\n         case _: HashJoin => assert(!isSortMergeForced)\n+        case _: CometHashJoinExec | _: CometBroadcastHashJoinExec => assert(!isSortMergeForced)\n       }.nonEmpty\n     )\n   }\n@@ -1547,10 +1550,14 @@ class CollationSuite extends DatasourceV2SQLBase with AdaptiveSparkPlanHelper {\n             if (!CollationFactory.fetchCollation(t.collation).supportsBinaryEquality) {\n               assert(collectFirst(queryPlan) {\n                 case b: HashJoin => b.leftKeys.head\n+                case ch: CometHashJoinExec => ch.leftKeys.head\n+                case cbh: CometBroadcastHashJoinExec => cbh.leftKeys.head\n               }.head.isInstanceOf[CollationKey])\n             } else {\n               assert(!collectFirst(queryPlan) {\n                 case b: HashJoin => b.leftKeys.head\n+                case ch: CometHashJoinExec => ch.leftKeys.head\n+                case cbh: CometBroadcastHashJoinExec => cbh.leftKeys.head\n               }.head.isInstanceOf[CollationKey])\n             }\n           }\n@@ -1606,11 +1613,13 @@ class CollationSuite extends DatasourceV2SQLBase with AdaptiveSparkPlanHelper {\n             if (!CollationFactory.fetchCollation(t.collation).supportsBinaryEquality) {\n               assert(collectFirst(queryPlan) {\n                 case b: BroadcastHashJoinExec => b.leftKeys.head\n+                case b: CometBroadcastHashJoinExec => b.leftKeys.head\n               }.head.asInstanceOf[ArrayTransform].function.asInstanceOf[LambdaFunction].\n                 function.isInstanceOf[CollationKey])\n             } else {\n               assert(!collectFirst(queryPlan) {\n                 case b: BroadcastHashJoinExec => b.leftKeys.head\n+                case b: CometBroadcastHashJoinExec => b.leftKeys.head\n               }.head.isInstanceOf[ArrayTransform])\n             }\n           }\n@@ -1676,6 +1685,7 @@ class CollationSuite extends DatasourceV2SQLBase with AdaptiveSparkPlanHelper {\n             } else {\n               assert(!collectFirst(queryPlan) {\n                 case b: BroadcastHashJoinExec => b.leftKeys.head\n+                case b: CometBroadcastHashJoinExec => b.leftKeys.head\n               }.head.isInstanceOf[ArrayTransform])\n             }\n           }\ndiff --git 
a/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala\nindex 3eeed2e4175..9f21d547c1c 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/connector/DataSourceV2Suite.scala\n@@ -26,6 +26,7 @@ import test.org.apache.spark.sql.connector._\n import org.apache.spark.SparkUnsupportedOperationException\n import org.apache.spark.sql.{AnalysisException, DataFrame, QueryTest, Row}\n import org.apache.spark.sql.catalyst.InternalRow\n+import org.apache.spark.sql.comet.CometSortExec\n import org.apache.spark.sql.connector.catalog.{PartitionInternalRow, SupportsRead, Table, TableCapability, TableProvider}\n import org.apache.spark.sql.connector.catalog.TableCapability._\n import org.apache.spark.sql.connector.expressions.{Expression, FieldReference, Literal, NamedReference, NullOrdering, SortDirection, SortOrder, Transform}\n@@ -36,7 +37,7 @@ import org.apache.spark.sql.connector.read.partitioning.{KeyGroupedPartitioning,\n import org.apache.spark.sql.execution.SortExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.datasources.v2.{BatchScanExec, DataSourceV2Relation, DataSourceV2ScanRelation}\n-import org.apache.spark.sql.execution.exchange.{Exchange, ShuffleExchangeExec}\n+import org.apache.spark.sql.execution.exchange.{Exchange, ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.vectorized.OnHeapColumnVector\n import org.apache.spark.sql.expressions.Window\n import org.apache.spark.sql.functions._\n@@ -278,13 +279,13 @@ class DataSourceV2Suite extends QueryTest with SharedSparkSession with AdaptiveS\n           val groupByColJ = df.groupBy($\"j\").agg(sum($\"i\"))\n           checkAnswer(groupByColJ, Seq(Row(2, 8), Row(4, 2), Row(6, 5)))\n           assert(collectFirst(groupByColJ.queryExecution.executedPlan) {\n-            case e: ShuffleExchangeExec => e\n+            case e: ShuffleExchangeLike => e\n           }.isDefined)\n \n           val groupByIPlusJ = df.groupBy($\"i\" + $\"j\").agg(count(\"*\"))\n           checkAnswer(groupByIPlusJ, Seq(Row(5, 2), Row(6, 2), Row(8, 1), Row(9, 1)))\n           assert(collectFirst(groupByIPlusJ.queryExecution.executedPlan) {\n-            case e: ShuffleExchangeExec => e\n+            case e: ShuffleExchangeLike => e\n           }.isDefined)\n         }\n       }\n@@ -344,10 +345,11 @@ class DataSourceV2Suite extends QueryTest with SharedSparkSession with AdaptiveS\n \n                 val (shuffleExpected, sortExpected) = groupByExpects\n                 assert(collectFirst(groupBy.queryExecution.executedPlan) {\n-                  case e: ShuffleExchangeExec => e\n+                  case e: ShuffleExchangeLike => e\n                 }.isDefined === shuffleExpected)\n                 assert(collectFirst(groupBy.queryExecution.executedPlan) {\n                   case e: SortExec => e\n+                  case c: CometSortExec => c\n                 }.isDefined === sortExpected)\n               }\n \n@@ -362,10 +364,11 @@ class DataSourceV2Suite extends QueryTest with SharedSparkSession with AdaptiveS\n \n                 val (shuffleExpected, sortExpected) = windowFuncExpects\n                 assert(collectFirst(windowPartByColIOrderByColJ.queryExecution.executedPlan) {\n-                  case e: ShuffleExchangeExec => e\n+                  case e: 
ShuffleExchangeLike => e\n                 }.isDefined === shuffleExpected)\n                 assert(collectFirst(windowPartByColIOrderByColJ.queryExecution.executedPlan) {\n                   case e: SortExec => e\n+                  case c: CometSortExec => c\n                 }.isDefined === sortExpected)\n               }\n             }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala\nindex 2a0ab21ddb0..6030e7c2b9b 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/connector/FileDataSourceV2FallBackSuite.scala\n@@ -21,6 +21,7 @@ import scala.collection.mutable.ArrayBuffer\n import org.apache.spark.{SparkConf, SparkException}\n import org.apache.spark.sql.QueryTest\n import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan\n+import org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.connector.catalog.{SupportsRead, SupportsWrite, Table, TableCapability}\n import org.apache.spark.sql.connector.read.ScanBuilder\n import org.apache.spark.sql.connector.write.{LogicalWriteInfo, WriteBuilder}\n@@ -188,7 +189,11 @@ class FileDataSourceV2FallBackSuite extends QueryTest with SharedSparkSession {\n             val df = spark.read.format(format).load(path.getCanonicalPath)\n             checkAnswer(df, inputData.toDF())\n             assert(\n-              df.queryExecution.executedPlan.exists(_.isInstanceOf[FileSourceScanExec]))\n+              df.queryExecution.executedPlan.exists {\n+                case _: FileSourceScanExec | _: CometScanExec | _: CometNativeScanExec => true\n+                case _ => false\n+              }\n+            )\n           }\n         } finally {\n           spark.listenerManager.unregister(listener)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala\nindex c73e8e16fbb..399f1442ad5 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/connector/KeyGroupedPartitioningSuite.scala\n@@ -24,6 +24,8 @@ import org.apache.spark.sql.{DataFrame, Row}\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.{Literal, TransformExpression}\n import org.apache.spark.sql.catalyst.plans.physical\n+import org.apache.spark.sql.comet.CometSortMergeJoinExec\n+import org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\n import org.apache.spark.sql.connector.catalog.{Column, Identifier, InMemoryTableCatalog}\n import org.apache.spark.sql.connector.catalog.functions._\n import org.apache.spark.sql.connector.distributions.Distributions\n@@ -32,7 +34,7 @@ import org.apache.spark.sql.connector.expressions.Expressions._\n import org.apache.spark.sql.execution.SparkPlan\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.execution.datasources.v2.DataSourceV2ScanRelation\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.{ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.joins.SortMergeJoinExec\n import org.apache.spark.sql.internal.SQLConf\n 
import org.apache.spark.sql.internal.SQLConf._\n@@ -299,19 +301,21 @@ class KeyGroupedPartitioningSuite extends DistributionAndOrderingSuiteBase {\n         Row(\"bbb\", 20, 250.0), Row(\"bbb\", 20, 350.0), Row(\"ccc\", 30, 400.50)))\n   }\n \n-  private def collectAllShuffles(plan: SparkPlan): Seq[ShuffleExchangeExec] = {\n+  private def collectAllShuffles(plan: SparkPlan): Seq[ShuffleExchangeLike] = {\n     collect(plan) {\n       case s: ShuffleExchangeExec => s\n+      case c: CometShuffleExchangeExec => c\n     }\n   }\n \n-  private def collectShuffles(plan: SparkPlan): Seq[ShuffleExchangeExec] = {\n+  private def collectShuffles(plan: SparkPlan): Seq[ShuffleExchangeLike] = {\n     // here we skip collecting shuffle operators that are not associated with SMJ\n     collect(plan) {\n       case s: SortMergeJoinExec => s\n+      case c: CometSortMergeJoinExec => c.originalPlan\n     }.flatMap(smj =>\n       collect(smj) {\n-        case s: ShuffleExchangeExec => s\n+        case s: ShuffleExchangeLike => s\n       })\n   }\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/connector/WriteDistributionAndOrderingSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/connector/WriteDistributionAndOrderingSuite.scala\nindex f62e092138a..c0404bfe85e 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/connector/WriteDistributionAndOrderingSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/connector/WriteDistributionAndOrderingSuite.scala\n@@ -21,7 +21,7 @@ package org.apache.spark.sql.connector\n import java.sql.Date\n import java.util.Collections\n \n-import org.apache.spark.sql.{catalyst, AnalysisException, DataFrame, Row}\n+import org.apache.spark.sql.{catalyst, AnalysisException, DataFrame, IgnoreCometSuite, Row}\n import org.apache.spark.sql.catalyst.expressions.{ApplyFunctionExpression, Cast, Literal}\n import org.apache.spark.sql.catalyst.expressions.objects.Invoke\n import org.apache.spark.sql.catalyst.plans.physical\n@@ -45,7 +45,8 @@ import org.apache.spark.sql.util.QueryExecutionListener\n import org.apache.spark.tags.SlowSQLTest\n \n @SlowSQLTest\n-class WriteDistributionAndOrderingSuite extends DistributionAndOrderingSuiteBase {\n+class WriteDistributionAndOrderingSuite extends DistributionAndOrderingSuiteBase\n+  with IgnoreCometSuite {\n   import testImplicits._\n \n   before {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala\nindex 04d33ecd3d5..450df347297 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala\n@@ -31,7 +31,7 @@ import org.mockito.Mockito.{mock, spy, when}\n import org.scalatest.time.SpanSugar._\n \n import org.apache.spark._\n-import org.apache.spark.sql.{AnalysisException, DataFrame, Dataset, Encoder, KryoData, QueryTest, Row, SaveMode}\n+import org.apache.spark.sql.{AnalysisException, DataFrame, Dataset, Encoder, IgnoreComet, KryoData, QueryTest, Row, SaveMode}\n import org.apache.spark.sql.catalyst.FunctionIdentifier\n import org.apache.spark.sql.catalyst.analysis.{NamedParameter, UnresolvedGenerator}\n import org.apache.spark.sql.catalyst.encoders.{ExpressionEncoder, RowEncoder}\n@@ -267,7 +267,8 @@ class QueryExecutionErrorsSuite\n   }\n \n   test(\"INCONSISTENT_BEHAVIOR_CROSS_VERSION: \" +\n-    \"compatibility with Spark 2.4/3.2 in reading/writing 
dates\") {\n+    \"compatibility with Spark 2.4/3.2 in reading/writing dates\",\n+    IgnoreComet(\"Comet doesn't completely support datetime rebase mode yet\")) {\n \n     // Fail to read ancient datetime values.\n     withSQLConf(SQLConf.PARQUET_REBASE_MODE_IN_READ.key -> EXCEPTION.toString) {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala\nindex 418ca3430bb..eb8267192f8 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/DataSourceScanExecRedactionSuite.scala\n@@ -23,7 +23,7 @@ import scala.util.Random\n import org.apache.hadoop.fs.Path\n \n import org.apache.spark.SparkConf\n-import org.apache.spark.sql.{DataFrame, QueryTest}\n+import org.apache.spark.sql.{DataFrame, IgnoreComet, QueryTest}\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.execution.datasources.v2.orc.OrcScan\n import org.apache.spark.sql.internal.SQLConf\n@@ -195,7 +195,7 @@ class DataSourceV2ScanExecRedactionSuite extends DataSourceScanRedactionTest {\n     }\n   }\n \n-  test(\"FileScan description\") {\n+  test(\"FileScan description\", IgnoreComet(\"Comet doesn't use BatchScan\")) {\n     Seq(\"json\", \"orc\", \"parquet\").foreach { format =>\n       withTempPath { path =>\n         val dir = path.getCanonicalPath\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/InsertSortForLimitAndOffsetSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/InsertSortForLimitAndOffsetSuite.scala\nindex d1b11a74cf3..75e4600863a 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/InsertSortForLimitAndOffsetSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/InsertSortForLimitAndOffsetSuite.scala\n@@ -19,6 +19,7 @@ package org.apache.spark.sql.execution\n \n import org.apache.spark.sql.{Dataset, QueryTest}\n import org.apache.spark.sql.IntegratedUDFTestUtils._\n+import org.apache.spark.sql.comet.{CometCollectLimitExec, CometGlobalLimitExec, CometProjectExec, CometSortExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.functions.rand\n import org.apache.spark.sql.internal.SQLConf\n@@ -39,7 +40,7 @@ class InsertSortForLimitAndOffsetSuite extends QueryTest\n \n   private def assertHasCollectLimitExec(plan: SparkPlan): Unit = {\n     assert(find(plan) {\n-      case _: CollectLimitExec => true\n+      case _: CollectLimitExec | _: CometCollectLimitExec => true\n       case _ => false\n     }.isDefined)\n   }\n@@ -47,6 +48,7 @@ class InsertSortForLimitAndOffsetSuite extends QueryTest\n   private def assertHasGlobalLimitExec(plan: SparkPlan): Unit = {\n     assert(find(plan) {\n       case _: GlobalLimitExec => true\n+      case _: CometGlobalLimitExec => true\n       case _ => false\n     }.isDefined)\n   }\n@@ -55,6 +57,11 @@ class InsertSortForLimitAndOffsetSuite extends QueryTest\n     find(plan) {\n       case GlobalLimitExec(_, s: SortExec, _) => !s.global\n       case GlobalLimitExec(_, ProjectExec(_, s: SortExec), _) => !s.global\n+      case CometGlobalLimitExec(_, _, _, _, s: CometSortExec, _) =>\n+        !s.originalPlan.asInstanceOf[SortExec].global\n+      case CometGlobalLimitExec(_, _, _, _,\n+        CometProjectExec(_, _, _, _, s: CometSortExec, _), _) =>\n+        
!s.originalPlan.asInstanceOf[SortExec].global\n      case _ => false\n    }.isDefined\n  }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala\nindex 743ec41dbe7..9f30d6c8e04 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/LogicalPlanTagInSparkPlanSuite.scala\n@@ -53,6 +53,10 @@ class LogicalPlanTagInSparkPlanSuite extends TPCDSQuerySuite with DisableAdaptiv\n     case ColumnarToRowExec(i: InputAdapter) => isScanPlanTree(i.child)\n     case p: ProjectExec => isScanPlanTree(p.child)\n     case f: FilterExec => isScanPlanTree(f.child)\n+    // Comet produces a scan plan tree like:\n+    // ColumnarToRow\n+    //  +- ReusedExchange\n+    case _: ReusedExchangeExec => false\n     case _: LeafExecNode => true\n     case _ => false\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala\nindex 1400ee25f43..5b016c3f9c5 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/PlannerSuite.scala\n@@ -19,7 +19,7 @@ package org.apache.spark.sql.execution\n \n import org.apache.spark.SparkUnsupportedOperationException\n import org.apache.spark.rdd.RDD\n-import org.apache.spark.sql.{execution, DataFrame, Row}\n+import org.apache.spark.sql.{execution, DataFrame, IgnoreCometSuite, Row}\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions._\n import org.apache.spark.sql.catalyst.plans._\n@@ -36,7 +36,9 @@ import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.sql.types._\n \n-class PlannerSuite extends SharedSparkSession with AdaptiveSparkPlanHelper {\n+// Ignore this suite when Comet is enabled. This suite tests the Spark planner, and the Comet\n+// planner produces too many differences. 
Simply ignoring this suite for now.\n+class PlannerSuite extends SharedSparkSession with AdaptiveSparkPlanHelper with IgnoreCometSuite {\n   import testImplicits._\n \n   setupTestData()\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/QueryExecutionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/QueryExecutionSuite.scala\nindex 47d5ff67b84..8dc8f65d4b1 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/QueryExecutionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/QueryExecutionSuite.scala\n@@ -20,7 +20,7 @@ import scala.collection.mutable\n import scala.io.Source\n import scala.util.Try\n \n-import org.apache.spark.sql.{AnalysisException, ExtendedExplainGenerator, FastOperator}\n+import org.apache.spark.sql.{AnalysisException, ExtendedExplainGenerator, FastOperator, IgnoreComet}\n import org.apache.spark.sql.catalyst.{QueryPlanningTracker, QueryPlanningTrackerCallback, TableIdentifier}\n import org.apache.spark.sql.catalyst.analysis.{CurrentNamespace, UnresolvedFunction, UnresolvedRelation}\n import org.apache.spark.sql.catalyst.expressions.{Alias, UnsafeRow}\n@@ -400,7 +400,7 @@ class QueryExecutionSuite extends SharedSparkSession {\n     }\n   }\n \n-  test(\"SPARK-47289: extended explain info\") {\n+  test(\"SPARK-47289: extended explain info\", IgnoreComet(\"Comet plan extended info is different\")) {\n     val concat = new PlanStringConcat()\n     withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala\nindex b5bac8079c4..9420dbdb936 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantProjectsSuite.scala\n@@ -17,7 +17,10 @@\n \n package org.apache.spark.sql.execution\n \n-import org.apache.spark.sql.{DataFrame, QueryTest, Row}\n+import org.apache.comet.CometConf\n+\n+import org.apache.spark.sql.{DataFrame, IgnoreComet, QueryTest, Row}\n+import org.apache.spark.sql.comet.CometProjectExec\n import org.apache.spark.sql.connector.SimpleWritableDataSource\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.internal.SQLConf\n@@ -34,7 +37,10 @@ abstract class RemoveRedundantProjectsSuiteBase\n   private def assertProjectExecCount(df: DataFrame, expected: Int): Unit = {\n     withClue(df.queryExecution) {\n       val plan = df.queryExecution.executedPlan\n-      val actual = collectWithSubqueries(plan) { case p: ProjectExec => p }.size\n+      val actual = collectWithSubqueries(plan) {\n+        case p: ProjectExec => p\n+        case p: CometProjectExec => p\n+      }.size\n       assert(actual == expected)\n     }\n   }\n@@ -112,7 +118,8 @@ abstract class RemoveRedundantProjectsSuiteBase\n     assertProjectExec(query, 1, 3)\n   }\n \n-  test(\"join with ordering requirement\") {\n+  test(\"join with ordering requirement\",\n+    IgnoreComet(\"TODO: Support SubqueryBroadcastExec in Comet: #242\")) {\n     val query = \"select * from (select key, a, c, b from testView) as t1 join \" +\n       \"(select key, a, b, c from testView) as t2 on t1.key = t2.key where t2.a > 50\"\n     assertProjectExec(query, 2, 
2)\n@@ -134,12 +141,25 @@ abstract class RemoveRedundantProjectsSuiteBase\n       val df = data.selectExpr(\"a\", \"b\", \"key\", \"explode(array(key, a, b)) as d\").filter(\"d > 0\")\n       df.collect()\n       val plan = df.queryExecution.executedPlan\n-      val numProjects = collectWithSubqueries(plan) { case p: ProjectExec => p }.length\n+      val numProjects = collectWithSubqueries(plan) {\n+        case p: ProjectExec => p\n+        case p: CometProjectExec => p\n+      }.length\n \n       // Create a new plan that reverse the GenerateExec output and add a new ProjectExec between\n       // GenerateExec and its child. This is to test if the ProjectExec is removed, the output of\n       // the query will be incorrect.\n-      val newPlan = stripAQEPlan(plan) transform {\n+\n+      // Comet-specific change to get original Spark plan before applying\n+      // a transformation to add a new ProjectExec\n+      var sparkPlan: SparkPlan = null\n+      withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n+        val df = data.selectExpr(\"a\", \"b\", \"key\", \"explode(array(key, a, b)) as d\").filter(\"d > 0\")\n+        df.collect()\n+        sparkPlan = df.queryExecution.executedPlan\n+      }\n+\n+      val newPlan = stripAQEPlan(sparkPlan) transform {\n         case g @ GenerateExec(_, requiredChildOutput, _, _, child) =>\n           g.copy(requiredChildOutput = requiredChildOutput.reverse,\n             child = ProjectExec(requiredChildOutput.reverse, child))\n@@ -151,6 +171,7 @@ abstract class RemoveRedundantProjectsSuiteBase\n       // The manually added ProjectExec node shouldn't be removed.\n       assert(collectWithSubqueries(newExecutedPlan) {\n         case p: ProjectExec => p\n+        case p: CometProjectExec => p\n       }.size == numProjects + 1)\n \n       // Check the original plan's output and the new plan's output are the same.\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala\nindex 005e764cc30..92ec088efab 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/RemoveRedundantSortsSuite.scala\n@@ -19,6 +19,7 @@ package org.apache.spark.sql.execution\n \n import org.apache.spark.sql.{DataFrame, QueryTest}\n import org.apache.spark.sql.catalyst.plans.physical.{RangePartitioning, UnknownPartitioning}\n+import org.apache.spark.sql.comet.CometSortExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.execution.joins.ShuffledJoin\n import org.apache.spark.sql.internal.SQLConf\n@@ -33,7 +34,7 @@ abstract class RemoveRedundantSortsSuiteBase\n \n   private def checkNumSorts(df: DataFrame, count: Int): Unit = {\n     val plan = df.queryExecution.executedPlan\n-    assert(collectWithSubqueries(plan) { case s: SortExec => s }.length == count)\n+    assert(collectWithSubqueries(plan) { case _: SortExec | _: CometSortExec => 1 }.length == count)\n   }\n \n   private def checkSorts(query: String, enabledCount: Int, disabledCount: Int): Unit = {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala\nindex 47679ed7865..9ffbaecb98e 100644\n--- 
a/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/ReplaceHashWithSortAggSuite.scala\n@@ -18,6 +18,7 @@\n package org.apache.spark.sql.execution\n \n import org.apache.spark.sql.{DataFrame, QueryTest}\n+import org.apache.spark.sql.comet.CometHashAggregateExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.execution.aggregate.{HashAggregateExec, ObjectHashAggregateExec, SortAggregateExec}\n import org.apache.spark.sql.internal.SQLConf\n@@ -31,7 +32,7 @@ abstract class ReplaceHashWithSortAggSuiteBase\n   private def checkNumAggs(df: DataFrame, hashAggCount: Int, sortAggCount: Int): Unit = {\n     val plan = df.queryExecution.executedPlan\n     assert(collectWithSubqueries(plan) {\n-      case s @ (_: HashAggregateExec | _: ObjectHashAggregateExec) => s\n+      case s @ (_: HashAggregateExec | _: ObjectHashAggregateExec | _: CometHashAggregateExec) => s\n     }.length == hashAggCount)\n     assert(collectWithSubqueries(plan) { case s: SortAggregateExec => s }.length == sortAggCount)\n   }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala\nindex 77a988f340e..e4deeb6b1d8 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/SQLViewSuite.scala\n@@ -1061,7 +1061,8 @@ abstract class SQLViewSuite extends QueryTest with SQLTestUtils {\n     }\n   }\n \n-  test(\"alter temporary view should follow current storeAnalyzedPlanForView config\") {\n+  test(\"alter temporary view should follow current storeAnalyzedPlanForView config\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3321\")) {\n     withTable(\"t\") {\n       Seq(2, 3, 1).toDF(\"c1\").write.format(\"parquet\").saveAsTable(\"t\")\n       withView(\"v1\") {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala\nindex aed11badb71..1a365b5aacf 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/SparkPlanSuite.scala\n@@ -23,6 +23,7 @@ import org.apache.spark.sql.QueryTest\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeReference}\n import org.apache.spark.sql.catalyst.plans.logical.Deduplicate\n+import org.apache.spark.sql.comet.{CometColumnarToRowExec, CometNativeColumnarToRowExec}\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n@@ -134,7 +135,11 @@ class SparkPlanSuite extends QueryTest with SharedSparkSession {\n         spark.range(1).write.parquet(path.getAbsolutePath)\n         val df = spark.read.parquet(path.getAbsolutePath)\n         val columnarToRowExec =\n-          df.queryExecution.executedPlan.collectFirst { case p: ColumnarToRowExec => p }.get\n+          df.queryExecution.executedPlan.collectFirst {\n+            case p: ColumnarToRowExec => p\n+            case p: CometColumnarToRowExec => p\n+            case p: CometNativeColumnarToRowExec => p\n+          
}.get\n         try {\n           spark.range(1).foreach { _ =>\n             columnarToRowExec.canonicalized\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala\nindex a3cfdc5a240..3793b6191bf 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala\n@@ -19,9 +19,10 @@ package org.apache.spark.sql.execution\n \n import org.apache.spark.SparkException\n import org.apache.spark.rdd.MapPartitionsWithEvaluatorRDD\n-import org.apache.spark.sql.{Dataset, QueryTest, Row, SaveMode}\n+import org.apache.spark.sql.{Dataset, IgnoreCometSuite, QueryTest, Row, SaveMode}\n import org.apache.spark.sql.catalyst.expressions.CodegenObjectFactoryMode\n import org.apache.spark.sql.catalyst.expressions.codegen.{ByteCodeStats, CodeAndComment, CodeGenerator}\n+import org.apache.spark.sql.comet.{CometColumnarToRowExec, CometHashJoinExec, CometSortExec, CometSortMergeJoinExec}\n import org.apache.spark.sql.execution.adaptive.DisableAdaptiveExecutionSuite\n import org.apache.spark.sql.execution.aggregate.{HashAggregateExec, SortAggregateExec}\n import org.apache.spark.sql.execution.columnar.InMemoryTableScanExec\n@@ -33,7 +34,7 @@ import org.apache.spark.sql.types.{IntegerType, StringType, StructType}\n \n // Disable AQE because the WholeStageCodegenExec is added when running QueryStageExec\n class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n-  with DisableAdaptiveExecutionSuite {\n+  with DisableAdaptiveExecutionSuite with IgnoreCometSuite {\n \n   import testImplicits._\n \n@@ -172,6 +173,7 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n     val oneJoinDF = df1.join(df2.hint(\"SHUFFLE_HASH\"), $\"k1\" === $\"k2\")\n     assert(oneJoinDF.queryExecution.executedPlan.collect {\n       case WholeStageCodegenExec(_ : ShuffledHashJoinExec) => true\n+      case _: CometHashJoinExec => true\n     }.size === 1)\n     checkAnswer(oneJoinDF, Seq(Row(0, 0), Row(1, 1), Row(2, 2), Row(3, 3), Row(4, 4)))\n \n@@ -180,6 +182,7 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n       .join(df3.hint(\"SHUFFLE_HASH\"), $\"k1\" === $\"k3\")\n     assert(twoJoinsDF.queryExecution.executedPlan.collect {\n       case WholeStageCodegenExec(_ : ShuffledHashJoinExec) => true\n+      case _: CometHashJoinExec => true\n     }.size === 2)\n     checkAnswer(twoJoinsDF,\n       Seq(Row(0, 0, 0), Row(1, 1, 1), Row(2, 2, 2), Row(3, 3, 3), Row(4, 4, 4)))\n@@ -206,6 +209,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n       assert(joinUniqueDF.queryExecution.executedPlan.collect {\n         case WholeStageCodegenExec(_ : ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n         case WholeStageCodegenExec(_ : SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+        case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+        case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n       }.size === 1)\n       checkAnswer(joinUniqueDF, Seq(Row(0, 0), Row(1, 1), Row(2, 2), Row(3, 3), Row(4, 4),\n         Row(null, 5), Row(null, 6), Row(null, 7), Row(null, 8), Row(null, 9)))\n@@ -216,6 +221,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n       assert(joinNonUniqueDF.queryExecution.executedPlan.collect {\n         case 
WholeStageCodegenExec(_ : ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n         case WholeStageCodegenExec(_ : SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+        case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+        case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n       }.size === 1)\n       checkAnswer(joinNonUniqueDF, Seq(Row(0, 0), Row(0, 3), Row(0, 6), Row(0, 9), Row(1, 1),\n         Row(1, 4), Row(1, 7), Row(2, 2), Row(2, 5), Row(2, 8), Row(3, null), Row(4, null)))\n@@ -226,6 +233,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n       assert(joinWithNonEquiDF.queryExecution.executedPlan.collect {\n         case WholeStageCodegenExec(_ : ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n         case WholeStageCodegenExec(_ : SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+        case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+        case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n       }.size === 1)\n       checkAnswer(joinWithNonEquiDF, Seq(Row(0, 0), Row(0, 6), Row(0, 9), Row(1, 1),\n         Row(1, 7), Row(2, 2), Row(2, 8), Row(3, null), Row(4, null), Row(null, 3), Row(null, 4),\n@@ -237,6 +246,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n       assert(twoJoinsDF.queryExecution.executedPlan.collect {\n         case WholeStageCodegenExec(_ : ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n         case WholeStageCodegenExec(_ : SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+        case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+        case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n       }.size === 2)\n       checkAnswer(twoJoinsDF,\n         Seq(Row(0, 0, 0), Row(1, 1, null), Row(2, 2, 2), Row(3, 3, null), Row(4, 4, null),\n@@ -258,6 +269,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n         assert(rightJoinUniqueDf.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_: ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n           case WholeStageCodegenExec(_: SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+          case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+          case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n         }.size === 1)\n         checkAnswer(rightJoinUniqueDf, Seq(Row(1, 1), Row(2, 2), Row(3, 3), Row(4, 4),\n           Row(null, 5), Row(null, 6), Row(null, 7), Row(null, 8), Row(null, 9),\n@@ -269,6 +282,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n         assert(leftJoinUniqueDf.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_: ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n           case WholeStageCodegenExec(_: SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+          case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+          case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n         }.size === 1)\n         checkAnswer(leftJoinUniqueDf, Seq(Row(0, null), Row(1, 1), Row(2, 2), Row(3, 3), Row(4, 4)))\n         assert(leftJoinUniqueDf.count() === 5)\n@@ -278,6 +293,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n         assert(rightJoinNonUniqueDf.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_: ShuffledHashJoinExec) if hint 
== \"SHUFFLE_HASH\" => true\n           case WholeStageCodegenExec(_: SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+          case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+          case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n         }.size === 1)\n         checkAnswer(rightJoinNonUniqueDf, Seq(Row(0, 3), Row(0, 6), Row(0, 9), Row(1, 1),\n           Row(1, 4), Row(1, 7), Row(1, 10), Row(2, 2), Row(2, 5), Row(2, 8)))\n@@ -287,6 +304,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n         assert(leftJoinNonUniqueDf.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_: ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n           case WholeStageCodegenExec(_: SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+          case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+          case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n         }.size === 1)\n         checkAnswer(leftJoinNonUniqueDf, Seq(Row(0, 3), Row(0, 6), Row(0, 9), Row(1, 1),\n           Row(1, 4), Row(1, 7), Row(1, 10), Row(2, 2), Row(2, 5), Row(2, 8), Row(3, null),\n@@ -298,6 +317,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n         assert(rightJoinWithNonEquiDf.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_: ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n           case WholeStageCodegenExec(_: SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+          case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+          case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n         }.size === 1)\n         checkAnswer(rightJoinWithNonEquiDf, Seq(Row(0, 6), Row(0, 9), Row(1, 1), Row(1, 7),\n           Row(1, 10), Row(2, 2), Row(2, 8), Row(null, 3), Row(null, 4), Row(null, 5)))\n@@ -308,6 +329,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n         assert(leftJoinWithNonEquiDf.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_: ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n           case WholeStageCodegenExec(_: SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+          case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+          case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n         }.size === 1)\n         checkAnswer(leftJoinWithNonEquiDf, Seq(Row(0, 6), Row(0, 9), Row(1, 1), Row(1, 7),\n           Row(1, 10), Row(2, 2), Row(2, 8), Row(3, null), Row(4, null)))\n@@ -318,6 +341,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n         assert(twoRightJoinsDf.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_: ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n           case WholeStageCodegenExec(_: SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+          case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+          case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n         }.size === 2)\n         checkAnswer(twoRightJoinsDf, Seq(Row(2, 2, 2), Row(3, 3, 3), Row(4, 4, 4)))\n \n@@ -327,6 +352,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n         assert(twoLeftJoinsDf.queryExecution.executedPlan.collect {\n           case WholeStageCodegenExec(_: ShuffledHashJoinExec) if hint == \"SHUFFLE_HASH\" => true\n           case 
WholeStageCodegenExec(_: SortMergeJoinExec) if hint == \"SHUFFLE_MERGE\" => true\n+          case _: CometHashJoinExec if hint == \"SHUFFLE_HASH\" => true\n+          case _: CometSortMergeJoinExec if hint == \"SHUFFLE_MERGE\" => true\n         }.size === 2)\n         checkAnswer(twoLeftJoinsDf,\n           Seq(Row(0, null, null), Row(1, 1, null), Row(2, 2, 2), Row(3, 3, 3), Row(4, 4, 4)))\n@@ -343,6 +370,7 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n     val oneLeftOuterJoinDF = df1.join(df2.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k2\", \"left_outer\")\n     assert(oneLeftOuterJoinDF.queryExecution.executedPlan.collect {\n       case WholeStageCodegenExec(_ : SortMergeJoinExec) => true\n+      case _: CometSortMergeJoinExec => true\n     }.size === 1)\n     checkAnswer(oneLeftOuterJoinDF, Seq(Row(0, 0), Row(1, 1), Row(2, 2), Row(3, 3), Row(4, null),\n       Row(5, null), Row(6, null), Row(7, null), Row(8, null), Row(9, null)))\n@@ -351,6 +379,7 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n     val oneRightOuterJoinDF = df2.join(df3.hint(\"SHUFFLE_MERGE\"), $\"k2\" === $\"k3\", \"right_outer\")\n     assert(oneRightOuterJoinDF.queryExecution.executedPlan.collect {\n       case WholeStageCodegenExec(_ : SortMergeJoinExec) => true\n+      case _: CometSortMergeJoinExec => true\n     }.size === 1)\n     checkAnswer(oneRightOuterJoinDF, Seq(Row(0, 0), Row(1, 1), Row(2, 2), Row(3, 3), Row(null, 4),\n       Row(null, 5)))\n@@ -360,6 +389,7 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n       .join(df1.hint(\"SHUFFLE_MERGE\"), $\"k3\" === $\"k1\", \"right_outer\")\n     assert(twoJoinsDF.queryExecution.executedPlan.collect {\n       case WholeStageCodegenExec(_ : SortMergeJoinExec) => true\n+      case _: CometSortMergeJoinExec => true\n     }.size === 2)\n     checkAnswer(twoJoinsDF,\n       Seq(Row(0, 0, 0), Row(1, 1, 1), Row(2, 2, 2), Row(3, 3, 3), Row(4, null, 4), Row(5, null, 5),\n@@ -375,6 +405,7 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n     val oneJoinDF = df1.join(df2.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k2\", \"left_semi\")\n     assert(oneJoinDF.queryExecution.executedPlan.collect {\n       case WholeStageCodegenExec(ProjectExec(_, _ : SortMergeJoinExec)) => true\n+      case _: CometSortMergeJoinExec => true\n     }.size === 1)\n     checkAnswer(oneJoinDF, Seq(Row(0), Row(1), Row(2), Row(3)))\n \n@@ -382,8 +413,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n     val twoJoinsDF = df3.join(df2.hint(\"SHUFFLE_MERGE\"), $\"k3\" === $\"k2\", \"left_semi\")\n       .join(df1.hint(\"SHUFFLE_MERGE\"), $\"k3\" === $\"k1\", \"left_semi\")\n     assert(twoJoinsDF.queryExecution.executedPlan.collect {\n-      case WholeStageCodegenExec(ProjectExec(_, _ : SortMergeJoinExec)) |\n-           WholeStageCodegenExec(_ : SortMergeJoinExec) => true\n+      case _: SortMergeJoinExec => true\n+      case _: CometSortMergeJoinExec => true\n     }.size === 2)\n     checkAnswer(twoJoinsDF, Seq(Row(0), Row(1), Row(2), Row(3)))\n   }\n@@ -397,6 +428,7 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n     val oneJoinDF = df1.join(df2.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k2\", \"left_anti\")\n     assert(oneJoinDF.queryExecution.executedPlan.collect {\n       case WholeStageCodegenExec(ProjectExec(_, _ : SortMergeJoinExec)) => true\n+      case _: CometSortMergeJoinExec => true\n     }.size === 1)\n     checkAnswer(oneJoinDF, 
Seq(Row(4), Row(5), Row(6), Row(7), Row(8), Row(9)))\n \n@@ -404,8 +436,8 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n     val twoJoinsDF = df1.join(df2.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k2\", \"left_anti\")\n       .join(df3.hint(\"SHUFFLE_MERGE\"), $\"k1\" === $\"k3\", \"left_anti\")\n     assert(twoJoinsDF.queryExecution.executedPlan.collect {\n-      case WholeStageCodegenExec(ProjectExec(_, _ : SortMergeJoinExec)) |\n-           WholeStageCodegenExec(_ : SortMergeJoinExec) => true\n+      case _: SortMergeJoinExec => true\n+      case _: CometSortMergeJoinExec => true\n     }.size === 2)\n     checkAnswer(twoJoinsDF, Seq(Row(6), Row(7), Row(8), Row(9)))\n   }\n@@ -538,7 +570,10 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n     val plan = df.queryExecution.executedPlan\n     assert(plan.exists(p =>\n       p.isInstanceOf[WholeStageCodegenExec] &&\n-        p.asInstanceOf[WholeStageCodegenExec].child.isInstanceOf[SortExec]))\n+        p.asInstanceOf[WholeStageCodegenExec].collect {\n+          case _: SortExec => true\n+          case _: CometSortExec => true\n+        }.nonEmpty))\n     assert(df.collect() === Array(Row(1), Row(2), Row(3)))\n   }\n \n@@ -718,7 +753,9 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n           .write.mode(SaveMode.Overwrite).parquet(path)\n \n         withSQLConf(SQLConf.WHOLESTAGE_MAX_NUM_FIELDS.key -> \"255\",\n-            SQLConf.WHOLESTAGE_SPLIT_CONSUME_FUNC_BY_OPERATOR.key -> \"true\") {\n+            SQLConf.WHOLESTAGE_SPLIT_CONSUME_FUNC_BY_OPERATOR.key -> \"true\",\n+            // Disable Comet native execution because this checks wholestage codegen.\n+            \"spark.comet.exec.enabled\" -> \"false\") {\n           val projection = Seq.tabulate(columnNum)(i => s\"c$i + c$i as newC$i\")\n           val df = spark.read.parquet(path).selectExpr(projection: _*)\n \n@@ -815,6 +852,9 @@ class WholeStageCodegenSuite extends QueryTest with SharedSparkSession\n       assert(distinctWithId.queryExecution.executedPlan.exists {\n         case WholeStageCodegenExec(\n           ProjectExec(_, BroadcastHashJoinExec(_, _, _, _, _, _: HashAggregateExec, _, _))) => true\n+        case WholeStageCodegenExec(\n+          ProjectExec(_, BroadcastHashJoinExec(_, _, _, _, _, _: CometColumnarToRowExec, _, _))) =>\n+            true\n         case _ => false\n       })\n       checkAnswer(distinctWithId, Seq(Row(1, 0), Row(1, 0)))\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala\nindex 272be70f9fe..d38a6d41a47 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala\n@@ -28,12 +28,14 @@ import org.apache.spark.SparkException\n import org.apache.spark.rdd.RDD\n import org.apache.spark.scheduler.{SparkListener, SparkListenerEvent, SparkListenerJobStart}\n import org.apache.spark.shuffle.sort.SortShuffleManager\n-import org.apache.spark.sql.{DataFrame, Dataset, QueryTest, Row, SparkSession}\n+import org.apache.spark.sql.{DataFrame, Dataset, IgnoreComet, QueryTest, Row, SparkSession}\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.Attribute\n import org.apache.spark.sql.catalyst.optimizer.{BuildLeft, BuildRight}\n import 
org.apache.spark.sql.catalyst.plans.logical.{Aggregate, LogicalPlan}\n import org.apache.spark.sql.classic.Strategy\n+import org.apache.spark.sql.comet._\n+import org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.aggregate.BaseAggregateExec\n import org.apache.spark.sql.execution.columnar.{InMemoryTableScanExec, InMemoryTableScanLike}\n@@ -119,6 +121,7 @@ class AdaptiveQueryExecSuite\n   private def findTopLevelBroadcastHashJoin(plan: SparkPlan): Seq[BroadcastHashJoinExec] = {\n     collect(plan) {\n       case j: BroadcastHashJoinExec => j\n+      case j: CometBroadcastHashJoinExec => j.originalPlan.asInstanceOf[BroadcastHashJoinExec]\n     }\n   }\n \n@@ -131,36 +134,46 @@ class AdaptiveQueryExecSuite\n   private def findTopLevelSortMergeJoin(plan: SparkPlan): Seq[SortMergeJoinExec] = {\n     collect(plan) {\n       case j: SortMergeJoinExec => j\n+      case j: CometSortMergeJoinExec =>\n+        assert(j.originalPlan.isInstanceOf[SortMergeJoinExec])\n+        j.originalPlan.asInstanceOf[SortMergeJoinExec]\n     }\n   }\n \n   private def findTopLevelShuffledHashJoin(plan: SparkPlan): Seq[ShuffledHashJoinExec] = {\n     collect(plan) {\n       case j: ShuffledHashJoinExec => j\n+      case j: CometHashJoinExec => j.originalPlan.asInstanceOf[ShuffledHashJoinExec]\n     }\n   }\n \n   private def findTopLevelBaseJoin(plan: SparkPlan): Seq[BaseJoinExec] = {\n     collect(plan) {\n       case j: BaseJoinExec => j\n+      case c: CometHashJoinExec => c.originalPlan.asInstanceOf[BaseJoinExec]\n+      case c: CometSortMergeJoinExec => c.originalPlan.asInstanceOf[BaseJoinExec]\n+      case c: CometBroadcastHashJoinExec => c.originalPlan.asInstanceOf[BaseJoinExec]\n     }\n   }\n \n   private def findTopLevelSort(plan: SparkPlan): Seq[SortExec] = {\n     collect(plan) {\n       case s: SortExec => s\n+      case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n     }\n   }\n \n   private def findTopLevelAggregate(plan: SparkPlan): Seq[BaseAggregateExec] = {\n     collect(plan) {\n       case agg: BaseAggregateExec => agg\n+      case agg: CometHashAggregateExec => agg.originalPlan.asInstanceOf[BaseAggregateExec]\n     }\n   }\n \n   private def findTopLevelLimit(plan: SparkPlan): Seq[CollectLimitExec] = {\n     collect(plan) {\n       case l: CollectLimitExec => l\n+      case l: CometCollectLimitExec => l.originalPlan.asInstanceOf[CollectLimitExec]\n     }\n   }\n \n@@ -204,6 +217,7 @@ class AdaptiveQueryExecSuite\n       val parts = rdd.partitions\n       assert(parts.forall(rdd.preferredLocations(_).nonEmpty))\n     }\n+\n     assert(numShuffles === (numLocalReads.length + numShufflesWithoutLocalRead))\n   }\n \n@@ -212,7 +226,7 @@ class AdaptiveQueryExecSuite\n     val plan = df.queryExecution.executedPlan\n     assert(plan.isInstanceOf[AdaptiveSparkPlanExec])\n     val shuffle = plan.asInstanceOf[AdaptiveSparkPlanExec].executedPlan.collect {\n-      case s: ShuffleExchangeExec => s\n+      case s: ShuffleExchangeLike => s\n     }\n     assert(shuffle.size == 1)\n     assert(shuffle(0).outputPartitioning.numPartitions == numPartition)\n@@ -228,7 +242,8 @@ class AdaptiveQueryExecSuite\n       assert(smj.size == 1)\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n-      checkNumLocalShuffleReads(adaptivePlan)\n+      // Comet shuffle changes shuffle metrics\n+      // checkNumLocalShuffleReads(adaptivePlan)\n     }\n   }\n \n@@ -255,7 
+270,8 @@ AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Reuse the parallelism of coalesced shuffle in local shuffle read\") {\n+  test(\"Reuse the parallelism of coalesced shuffle in local shuffle read\",\n+      IgnoreComet(\"Comet shuffle changes shuffle partition size\")) {\n     withSQLConf(\n       SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\",\n@@ -287,7 +303,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Reuse the default parallelism in local shuffle read\") {\n+  test(\"Reuse the default parallelism in local shuffle read\",\n+      IgnoreComet(\"Comet shuffle changes shuffle partition size\")) {\n     withSQLConf(\n       SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n       SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\",\n@@ -301,7 +318,8 @@ class AdaptiveQueryExecSuite\n       val localReads = collect(adaptivePlan) {\n         case read: AQEShuffleReadExec if read.isLocalRead => read\n       }\n-      assert(localReads.length == 2)\n+      // Keep the vanilla expectation: localReads(1) below needs two local reads, and\n+      assert(localReads.length == 2) // this test is ignored under Comet (see IgnoreComet above)\n       val localShuffleRDD0 = localReads(0).execute().asInstanceOf[ShuffledRowRDD]\n       val localShuffleRDD1 = localReads(1).execute().asInstanceOf[ShuffledRowRDD]\n       // the final parallelism is math.max(1, numReduces / numMappers): math.max(1, 5/2) = 2\n@@ -326,7 +344,9 @@ class AdaptiveQueryExecSuite\n           .groupBy($\"a\").count()\n         checkAnswer(testDf, Seq())\n         val plan = testDf.queryExecution.executedPlan\n-        assert(find(plan)(_.isInstanceOf[SortMergeJoinExec]).isDefined)\n+        assert(find(plan) { case p =>\n+          p.isInstanceOf[SortMergeJoinExec] || p.isInstanceOf[CometSortMergeJoinExec]\n+        }.isDefined)\n         val coalescedReads = collect(plan) {\n           case r: AQEShuffleReadExec => r\n         }\n@@ -340,7 +360,9 @@ class AdaptiveQueryExecSuite\n           .groupBy($\"a\").count()\n         checkAnswer(testDf, Seq())\n         val plan = testDf.queryExecution.executedPlan\n-        assert(find(plan)(_.isInstanceOf[BroadcastHashJoinExec]).isDefined)\n+        assert(find(plan) { case p =>\n+          p.isInstanceOf[BroadcastHashJoinExec] || p.isInstanceOf[CometBroadcastHashJoinExec]\n+        }.isDefined)\n         val coalescedReads = collect(plan) {\n           case r: AQEShuffleReadExec => r\n         }\n@@ -350,7 +372,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Scalar subquery\") {\n+  test(\"Scalar subquery\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -365,7 +387,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Scalar subquery in later stages\") {\n+  test(\"Scalar subquery in later stages\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -381,7 +403,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"multiple joins\") {\n+  test(\"multiple joins\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -426,7 +448,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"multiple joins 
with aggregate\") {\n+  test(\"multiple joins with aggregate\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -471,7 +493,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"multiple joins with aggregate 2\") {\n+  test(\"multiple joins with aggregate 2\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"500\") {\n@@ -517,7 +539,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Exchange reuse\") {\n+  test(\"Exchange reuse\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -536,7 +558,7 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Exchange reuse with subqueries\") {\n+  test(\"Exchange reuse with subqueries\", IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"80\") {\n@@ -567,7 +589,9 @@ class AdaptiveQueryExecSuite\n       assert(smj.size == 1)\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n-      checkNumLocalShuffleReads(adaptivePlan)\n+      // Comet shuffle changes shuffle metrics,\n+      // so we can't check the number of local shuffle reads.\n+      // checkNumLocalShuffleReads(adaptivePlan)\n       // Even with local shuffle read, the query stage reuse can also work.\n       val ex = findReusedExchange(adaptivePlan)\n       assert(ex.nonEmpty)\n@@ -588,7 +612,9 @@ class AdaptiveQueryExecSuite\n       assert(smj.size == 1)\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n-      checkNumLocalShuffleReads(adaptivePlan)\n+      // Comet shuffle changes shuffle metrics,\n+      // so we can't check the number of local shuffle reads.\n+      // checkNumLocalShuffleReads(adaptivePlan)\n       // Even with local shuffle read, the query stage reuse can also work.\n       val ex = findReusedExchange(adaptivePlan)\n       assert(ex.isEmpty)\n@@ -597,7 +623,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"Broadcast exchange reuse across subqueries\") {\n+  test(\"Broadcast exchange reuse across subqueries\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n         SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"20000000\",\n@@ -692,7 +719,8 @@ class AdaptiveQueryExecSuite\n       val bhj = findTopLevelBroadcastHashJoin(adaptivePlan)\n       assert(bhj.size == 1)\n       // There is still a SMJ, and its two shuffles can't apply local read.\n-      checkNumLocalShuffleReads(adaptivePlan, 2)\n+      // Comet shuffle changes shuffle metrics\n+      // checkNumLocalShuffleReads(adaptivePlan, 2)\n     }\n   }\n \n@@ -814,7 +842,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-29544: adaptive skew join with different join types\") {\n+  test(\"SPARK-29544: adaptive skew join with different join types\",\n+      IgnoreComet(\"Comet shuffle has different partition metrics\")) {\n     Seq(\"SHUFFLE_MERGE\", \"SHUFFLE_HASH\").foreach { joinHint =>\n    
   def getJoinNode(plan: SparkPlan): Seq[ShuffledJoin] = if (joinHint == \"SHUFFLE_MERGE\") {\n         findTopLevelSortMergeJoin(plan)\n@@ -1087,7 +1116,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"metrics of the shuffle read\") {\n+  test(\"metrics of the shuffle read\",\n+      IgnoreComet(\"Comet shuffle changes the metrics\")) {\n     withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\") {\n       val (_, adaptivePlan) = runAdaptiveAndVerifyResult(\n         \"SELECT key FROM testData GROUP BY key\")\n@@ -1721,7 +1751,7 @@ class AdaptiveQueryExecSuite\n         val (_, adaptivePlan) = runAdaptiveAndVerifyResult(\n           \"SELECT id FROM v1 GROUP BY id DISTRIBUTE BY id\")\n         assert(collect(adaptivePlan) {\n-          case s: ShuffleExchangeExec => s\n+          case s: ShuffleExchangeLike => s\n         }.length == 1)\n       }\n     }\n@@ -1801,7 +1831,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-33551: Do not use AQE shuffle read for repartition\") {\n+  test(\"SPARK-33551: Do not use AQE shuffle read for repartition\",\n+      IgnoreComet(\"Comet shuffle changes partition size\")) {\n     def hasRepartitionShuffle(plan: SparkPlan): Boolean = {\n       find(plan) {\n         case s: ShuffleExchangeLike =>\n@@ -1986,6 +2017,9 @@ class AdaptiveQueryExecSuite\n     def checkNoCoalescePartitions(ds: Dataset[Row], origin: ShuffleOrigin): Unit = {\n       assert(collect(ds.queryExecution.executedPlan) {\n         case s: ShuffleExchangeExec if s.shuffleOrigin == origin && s.numPartitions == 2 => s\n+        case c: CometShuffleExchangeExec\n+          if c.originalPlan.shuffleOrigin == origin &&\n+            c.originalPlan.numPartitions == 2 => c\n       }.size == 1)\n       ds.collect()\n       val plan = ds.queryExecution.executedPlan\n@@ -1994,6 +2028,9 @@ class AdaptiveQueryExecSuite\n       }.isEmpty)\n       assert(collect(plan) {\n         case s: ShuffleExchangeExec if s.shuffleOrigin == origin && s.numPartitions == 2 => s\n+        case c: CometShuffleExchangeExec\n+          if c.originalPlan.shuffleOrigin == origin &&\n+            c.originalPlan.numPartitions == 2 => c\n       }.size == 1)\n       checkAnswer(ds, testData)\n     }\n@@ -2150,7 +2187,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-35264: Support AQE side shuffled hash join formula\") {\n+  test(\"SPARK-35264: Support AQE side shuffled hash join formula\",\n+      IgnoreComet(\"Comet shuffle changes the partition size\")) {\n     withTempView(\"t1\", \"t2\") {\n       def checkJoinStrategy(shouldShuffleHashJoin: Boolean): Unit = {\n         Seq(\"100\", \"100000\").foreach { size =>\n@@ -2236,7 +2274,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-35725: Support optimize skewed partitions in RebalancePartitions\") {\n+  test(\"SPARK-35725: Support optimize skewed partitions in RebalancePartitions\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withTempView(\"v\") {\n       withSQLConf(\n         SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n@@ -2335,7 +2374,7 @@ class AdaptiveQueryExecSuite\n               runAdaptiveAndVerifyResult(s\"SELECT $repartition key1 FROM skewData1 \" +\n                 s\"JOIN skewData2 ON key1 = key2 GROUP BY key1\")\n             val shuffles1 = collect(adaptive1) {\n-              case s: ShuffleExchangeExec => s\n+              case s: ShuffleExchangeLike => s\n             }\n             assert(shuffles1.size == 3)\n             // shuffles1.head 
is the top-level shuffle under the Aggregate operator\n@@ -2348,7 +2387,7 @@ class AdaptiveQueryExecSuite\n               runAdaptiveAndVerifyResult(s\"SELECT $repartition key1 FROM skewData1 \" +\n                 s\"JOIN skewData2 ON key1 = key2\")\n             val shuffles2 = collect(adaptive2) {\n-              case s: ShuffleExchangeExec => s\n+              case s: ShuffleExchangeLike => s\n             }\n             if (hasRequiredDistribution) {\n               assert(shuffles2.size == 3)\n@@ -2382,7 +2421,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-35794: Allow custom plugin for cost evaluator\") {\n+  test(\"SPARK-35794: Allow custom plugin for cost evaluator\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     CostEvaluator.instantiate(\n       classOf[SimpleShuffleSortCostEvaluator].getCanonicalName, spark.sparkContext.getConf)\n     intercept[IllegalArgumentException] {\n@@ -2548,6 +2588,7 @@ class AdaptiveQueryExecSuite\n           val (_, adaptive) = runAdaptiveAndVerifyResult(query)\n           assert(adaptive.collect {\n             case sort: SortExec => sort\n+            case sort: CometSortExec => sort\n           }.size == 1)\n           val read = collect(adaptive) {\n             case read: AQEShuffleReadExec => read\n@@ -2565,7 +2606,8 @@ class AdaptiveQueryExecSuite\n     }\n   }\n \n-  test(\"SPARK-37357: Add small partition factor for rebalance partitions\") {\n+  test(\"SPARK-37357: Add small partition factor for rebalance partitions\",\n+      IgnoreComet(\"Comet shuffle changes shuffle metrics\")) {\n     withTempView(\"v\") {\n       withSQLConf(\n         SQLConf.ADAPTIVE_OPTIMIZE_SKEWS_IN_REBALANCE_PARTITIONS_ENABLED.key -> \"true\",\n@@ -2677,7 +2719,7 @@ class AdaptiveQueryExecSuite\n           runAdaptiveAndVerifyResult(\"SELECT key1 FROM skewData1 JOIN skewData2 ON key1 = key2 \" +\n             \"JOIN skewData3 ON value2 = value3\")\n         val shuffles1 = collect(adaptive1) {\n-          case s: ShuffleExchangeExec => s\n+          case s: ShuffleExchangeLike => s\n         }\n         assert(shuffles1.size == 4)\n         val smj1 = findTopLevelSortMergeJoin(adaptive1)\n@@ -2688,7 +2730,7 @@ class AdaptiveQueryExecSuite\n           runAdaptiveAndVerifyResult(\"SELECT key1 FROM skewData1 JOIN skewData2 ON key1 = key2 \" +\n             \"JOIN skewData3 ON value1 = value3\")\n         val shuffles2 = collect(adaptive2) {\n-          case s: ShuffleExchangeExec => s\n+          case s: ShuffleExchangeLike => s\n         }\n         assert(shuffles2.size == 4)\n         val smj2 = findTopLevelSortMergeJoin(adaptive2)\n@@ -2946,6 +2988,7 @@ class AdaptiveQueryExecSuite\n         }.size == (if (firstAccess) 1 else 0))\n         assert(collect(initialExecutedPlan) {\n           case s: SortExec => s\n+          case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n         }.size == (if (firstAccess) 2 else 0))\n         assert(collect(initialExecutedPlan) {\n           case i: InMemoryTableScanLike => i\n@@ -2958,6 +3001,7 @@ class AdaptiveQueryExecSuite\n         }.isEmpty)\n         assert(collect(finalExecutedPlan) {\n           case s: SortExec => s\n+          case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n         }.isEmpty)\n         assert(collect(initialExecutedPlan) {\n           case i: InMemoryTableScanLike => i\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala 
b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala\nindex 0a0b23d1e60..dcc9c141315 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala\n@@ -28,6 +28,7 @@ import org.apache.spark.sql.catalyst.expressions.Concat\n import org.apache.spark.sql.catalyst.parser.CatalystSqlParser\n import org.apache.spark.sql.catalyst.plans.logical.Expand\n import org.apache.spark.sql.catalyst.types.DataTypeUtils\n+import org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.execution.FileSourceScanExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.functions._\n@@ -868,6 +869,8 @@ abstract class SchemaPruningSuite\n     val fileSourceScanSchemata =\n       collect(df.queryExecution.executedPlan) {\n         case scan: FileSourceScanExec => scan.requiredSchema\n+        case scan: CometScanExec => scan.requiredSchema\n+        case scan: CometNativeScanExec => scan.requiredSchema\n       }\n     assert(fileSourceScanSchemata.size === expectedSchemaCatalogStrings.size,\n       s\"Found ${fileSourceScanSchemata.size} file sources in dataframe, \" +\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala\nindex 80d771428d9..9327dca6c21 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/V1WriteCommandSuite.scala\n@@ -17,9 +17,10 @@\n \n package org.apache.spark.sql.execution.datasources\n \n-import org.apache.spark.sql.{QueryTest, Row}\n+import org.apache.spark.sql.{IgnoreComet, QueryTest, Row}\n import org.apache.spark.sql.catalyst.expressions.{Ascending, AttributeReference, NullsFirst, SortOrder}\n import org.apache.spark.sql.catalyst.plans.logical.{LogicalPlan, Sort}\n+import org.apache.spark.sql.comet.CometSortExec\n import org.apache.spark.sql.execution.{QueryExecution, SortExec}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n@@ -226,6 +227,7 @@ class V1WriteCommandSuite extends QueryTest with SharedSparkSession with V1Write\n           // assert the outer most sort in the executed plan\n           assert(plan.collectFirst {\n             case s: SortExec => s\n+            case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n           }.exists {\n             case SortExec(Seq(\n               SortOrder(AttributeReference(\"key\", IntegerType, _, _), Ascending, NullsFirst, _),\n@@ -273,6 +275,7 @@ class V1WriteCommandSuite extends QueryTest with SharedSparkSession with V1Write\n         // assert the outer most sort in the executed plan\n         assert(plan.collectFirst {\n           case s: SortExec => s\n+          case s: CometSortExec => s.originalPlan.asInstanceOf[SortExec]\n         }.exists {\n           case SortExec(Seq(\n             SortOrder(AttributeReference(\"value\", StringType, _, _), Ascending, NullsFirst, _),\n@@ -306,7 +309,8 @@ class V1WriteCommandSuite extends QueryTest with SharedSparkSession with V1Write\n     }\n   }\n \n-  test(\"v1 write with AQE changing SMJ to BHJ\") {\n+  test(\"v1 write with AQE changing SMJ to 
BHJ\",\n+      IgnoreComet(\"TODO: Comet SMJ to BHJ by AQE\")) {\n     withPlannedWrite { enabled =>\n       withTable(\"t\") {\n         sql(\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala\nindex 62f2f2cb10a..feef4bb2928 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala\n@@ -28,7 +28,7 @@ import org.apache.hadoop.fs.{FileStatus, FileSystem, GlobFilter, Path}\n import org.mockito.Mockito.{mock, when}\n \n import org.apache.spark.{SparkException, SparkUnsupportedOperationException}\n-import org.apache.spark.sql.{DataFrame, QueryTest, Row}\n+import org.apache.spark.sql.{DataFrame, IgnoreCometSuite, QueryTest, Row}\n import org.apache.spark.sql.catalyst.encoders.ExpressionEncoder\n import org.apache.spark.sql.execution.datasources.PartitionedFile\n import org.apache.spark.sql.functions.col\n@@ -38,7 +38,9 @@ import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.sql.types._\n import org.apache.spark.util.Utils\n \n-class BinaryFileFormatSuite extends QueryTest with SharedSparkSession {\n+// For some reason this suite is flaky w/ or w/o Comet when running in Github workflow.\n+// Since it isn't related to Comet, we disable it for now.\n+class BinaryFileFormatSuite extends QueryTest with SharedSparkSession with IgnoreCometSuite {\n   import BinaryFileFormat._\n \n   private var testDir: String = _\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala\nindex cd6f41b4ef4..4b6a17344bc 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetEncodingSuite.scala\n@@ -28,7 +28,7 @@ import org.apache.parquet.hadoop.ParquetOutputFormat\n \n import org.apache.spark.TestUtils\n import org.apache.spark.memory.MemoryMode\n-import org.apache.spark.sql.Row\n+import org.apache.spark.sql.{IgnoreComet, Row}\n import org.apache.spark.sql.catalyst.util.DateTimeUtils\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SharedSparkSession\n@@ -201,7 +201,8 @@ class ParquetEncodingSuite extends ParquetCompatibilityTest with SharedSparkSess\n     }\n   }\n \n-  test(\"parquet v2 pages - rle encoding for boolean value columns\") {\n+  test(\"parquet v2 pages - rle encoding for boolean value columns\",\n+      IgnoreComet(\"Comet doesn't support RLE encoding yet\")) {\n     val extraOptions = Map[String, String](\n       ParquetOutputFormat.WRITER_VERSION -> ParquetProperties.WriterVersion.PARQUET_2_0.toString\n     )\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala\nindex 6080a5e8e4b..ea058d57b4b 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetFilterSuite.scala\n@@ -38,6 +38,7 @@ import 
org.apache.parquet.schema.MessageType\n \n import org.apache.spark.{SparkConf, SparkException, SparkRuntimeException}\n import org.apache.spark.sql._\n+import org.apache.spark.sql.IgnoreCometNativeScan\n import org.apache.spark.sql.catalyst.dsl.expressions._\n import org.apache.spark.sql.catalyst.expressions._\n import org.apache.spark.sql.catalyst.optimizer.InferFiltersFromConstraints\n@@ -1102,7 +1103,11 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n           // When a filter is pushed to Parquet, Parquet can apply it to every row.\n           // So, we can check the number of rows returned from the Parquet\n           // to make sure our filter pushdown work.\n-          assert(stripSparkFilter(df).count() == 1)\n+          // Similar to Spark's vectorized reader, Comet doesn't do row-level filtering but relies\n+          // on Spark to apply the data filters after columnar batches are returned\n+          if (!isCometEnabled) {\n+            assert(stripSparkFilter(df).count() == 1)\n+          }\n         }\n       }\n     }\n@@ -1505,7 +1510,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"Filters should be pushed down for vectorized Parquet reader at row group level\") {\n+  test(\"Filters should be pushed down for vectorized Parquet reader at row group level\",\n+    IgnoreCometNativeScan(\"Native scans do not support the tested accumulator\")) {\n     import testImplicits._\n \n     withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"true\",\n@@ -1587,7 +1593,11 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n           // than the total length but should not be a single record.\n           // Note that, if record level filtering is enabled, it should be a single record.\n           // If no filter is pushed down to Parquet, it should be the total length of data.\n-          assert(actual > 1 && actual < data.length)\n+          // Only run this check when Comet is disabled or in scan-only mode: with native\n+          // execution, `stripSparkFilter` can't remove the native filter\n+          if (!isCometEnabled || isCometScanOnly) {\n+            assert(actual > 1 && actual < data.length)\n+          }\n         }\n       }\n     }\n@@ -1614,7 +1624,11 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n         // than the total length but should not be a single record.\n         // Note that, if record level filtering is enabled, it should be a single record.\n         // If no filter is pushed down to Parquet, it should be the total length of data.\n-        assert(actual > 1 && actual < data.length)\n+        // Only run this check when Comet is disabled or in scan-only mode: with native\n+        // execution, `stripSparkFilter` can't remove the native filter\n+        if (!isCometEnabled || isCometScanOnly) {\n+          assert(actual > 1 && actual < data.length)\n+        }\n       }\n     }\n   }\n@@ -1706,7 +1720,7 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n       (attr, value) => sources.StringContains(attr, value))\n   }\n \n-  test(\"filter pushdown - StringPredicate\") {\n+  test(\"filter pushdown - StringPredicate\", IgnoreCometNativeScan(\"cannot be pushed down\")) {\n     import testImplicits._\n     // keep() should take effect on StartsWith/EndsWith/Contains\n     Seq(\n@@ -1750,7 +1764,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with 
Shared\n     }\n   }\n \n-  test(\"SPARK-17091: Convert IN predicate to Parquet filter push-down\") {\n+  test(\"SPARK-17091: Convert IN predicate to Parquet filter push-down\",\n+    IgnoreCometNativeScan(\"Comet has different push-down behavior\")) {\n     val schema = StructType(Seq(\n       StructField(\"a\", IntegerType, nullable = false)\n     ))\n@@ -1956,13 +1971,21 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n            \"\"\".stripMargin)\n \n         withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"false\") {\n-          val ex = intercept[SparkException] {\n+          // Spark native readers wrap the error in SparkException(FAILED_READ_FILE).\n+          // Comet native readers throw SparkRuntimeException directly.\n+          try {\n             sql(s\"select a from $tableName where b > 0\").collect()\n+            fail(\"Expected an exception\")\n+          } catch {\n+            case ex: SparkException =>\n+              assert(ex.getCondition.startsWith(\"FAILED_READ_FILE\"))\n+              assert(ex.getCause.isInstanceOf[SparkRuntimeException])\n+              assert(ex.getCause.getMessage.contains(\n+                \"\"\"Found duplicate field(s) \"B\": [B, b] in case-insensitive mode\"\"\"))\n+            case ex: SparkRuntimeException =>\n+              assert(ex.getMessage.contains(\n+                \"\"\"Found duplicate field(s) \"B\": [B, b] in case-insensitive mode\"\"\"))\n           }\n-          assert(ex.getCondition.startsWith(\"FAILED_READ_FILE\"))\n-          assert(ex.getCause.isInstanceOf[SparkRuntimeException])\n-          assert(ex.getCause.getMessage.contains(\n-            \"\"\"Found duplicate field(s) \"B\": [B, b] in case-insensitive mode\"\"\"))\n         }\n \n         withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"true\") {\n@@ -1993,7 +2016,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"Support Parquet column index\") {\n+  test(\"Support Parquet column index\",\n+      IgnoreComet(\"Comet doesn't support Parquet column index yet\")) {\n     // block 1:\n     //                      null count  min                                       max\n     // page-0                         0  0                                         99\n@@ -2053,7 +2077,8 @@ abstract class ParquetFilterSuite extends QueryTest with ParquetTest with Shared\n     }\n   }\n \n-  test(\"SPARK-34562: Bloom filter push down\") {\n+  test(\"SPARK-34562: Bloom filter push down\",\n+    IgnoreCometNativeScan(\"Native scans do not support the tested accumulator\")) {\n     withTempPath { dir =>\n       val path = dir.getCanonicalPath\n       spark.range(100).selectExpr(\"id * 2 AS id\")\n@@ -2305,7 +2330,11 @@ class ParquetV1FilterSuite extends ParquetFilterSuite {\n           assert(pushedParquetFilters.exists(_.getClass === filterClass),\n             s\"${pushedParquetFilters.map(_.getClass).toList} did not contain ${filterClass}.\")\n \n-          checker(stripSparkFilter(query), expected)\n+          // Similar to Spark's vectorized reader, Comet doesn't do row-level filtering but relies\n+          // on Spark to apply the data filters after columnar batches are returned\n+          if (!isCometEnabled) {\n+            checker(stripSparkFilter(query), expected)\n+          }\n         } else {\n           assert(selectedFilters.isEmpty, \"There is filter pushed down\")\n         }\n@@ -2368,7 +2397,11 @@ class ParquetV2FilterSuite extends ParquetFilterSuite {\n           
assert(pushedParquetFilters.exists(_.getClass === filterClass),\n             s\"${pushedParquetFilters.map(_.getClass).toList} did not contain ${filterClass}.\")\n \n-          checker(stripSparkFilter(query), expected)\n+          // Similar to Spark's vectorized reader, Comet doesn't do row-level filtering but relies\n+          // on Spark to apply the data filters after columnar batches are returned\n+          if (!isCometEnabled) {\n+            checker(stripSparkFilter(query), expected)\n+          }\n \n         case _ => assert(false, \"Can not match ParquetTable in the query.\")\n       }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala\nindex 4474ec1fd42..05fa0257c82 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetIOSuite.scala\n@@ -39,6 +39,7 @@ import org.apache.parquet.schema.{MessageType, MessageTypeParser}\n \n import org.apache.spark.{SPARK_VERSION_SHORT, SparkException, TestUtils}\n import org.apache.spark.sql._\n+import org.apache.spark.sql.IgnoreCometNativeDataFusion\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.{GenericInternalRow, UnsafeRow}\n import org.apache.spark.sql.catalyst.util.DateTimeUtils\n@@ -1059,7 +1060,8 @@ class ParquetIOSuite extends QueryTest with ParquetTest with SharedSparkSession\n     }\n   }\n \n-  test(\"SPARK-35640: read binary as timestamp should throw schema incompatible error\") {\n+  test(\"SPARK-35640: read binary as timestamp should throw schema incompatible error\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     val data = (1 to 4).map(i => Tuple1(i.toString))\n     val readSchema = StructType(Seq(StructField(\"_1\", DataTypes.TimestampType)))\n \n@@ -1344,7 +1346,8 @@ class ParquetIOSuite extends QueryTest with ParquetTest with SharedSparkSession\n     }\n   }\n \n-  test(\"SPARK-40128 read DELTA_LENGTH_BYTE_ARRAY encoded strings\") {\n+  test(\"SPARK-40128 read DELTA_LENGTH_BYTE_ARRAY encoded strings\",\n+      IgnoreComet(\"Comet doesn't support DELTA encoding yet\")) {\n     withAllParquetReaders {\n       checkAnswer(\n         // \"fruit\" column in this file is encoded using DELTA_LENGTH_BYTE_ARRAY.\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala\nindex bba71f1c48d..e1b0c25a354 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetQuerySuite.scala\n@@ -27,6 +27,7 @@ import org.apache.parquet.hadoop.ParquetOutputFormat\n \n import org.apache.spark.{DebugFilesystem, SparkConf, SparkException}\n import org.apache.spark.sql._\n+import org.apache.spark.sql.IgnoreCometNativeDataFusion\n import org.apache.spark.sql.catalyst.{InternalRow, TableIdentifier}\n import org.apache.spark.sql.catalyst.expressions.SpecificInternalRow\n import org.apache.spark.sql.catalyst.util.ArrayData\n@@ -185,7 +186,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     }\n   }\n \n-  
test(\"SPARK-47447: read TimestampLTZ as TimestampNTZ\") {\n+  test(\"SPARK-47447: read TimestampLTZ as TimestampNTZ\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     val providedSchema = StructType(Seq(StructField(\"time\", TimestampNTZType, false)))\n \n     Seq(\"INT96\", \"TIMESTAMP_MICROS\", \"TIMESTAMP_MILLIS\").foreach { tsType =>\n@@ -318,7 +320,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     }\n   }\n \n-  test(\"Enabling/disabling ignoreCorruptFiles\") {\n+  test(\"Enabling/disabling ignoreCorruptFiles\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3321\")) {\n     def testIgnoreCorruptFiles(options: Map[String, String]): Unit = {\n       withTempDir { dir =>\n         val basePath = dir.getCanonicalPath\n@@ -996,7 +999,11 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n         Seq(Some(\"A\"), Some(\"A\"), None).toDF().repartition(1)\n           .write.parquet(path.getAbsolutePath)\n         val df = spark.read.parquet(path.getAbsolutePath)\n-        checkAnswer(stripSparkFilter(df.where(\"NOT (value <=> 'A')\")), df)\n+        // Similar to Spark's vectorized reader, Comet doesn't do row-level filtering but relies\n+        // on Spark to apply the data filters after columnar batches are returned\n+        if (!isCometEnabled) {\n+          checkAnswer(stripSparkFilter(df.where(\"NOT (value <=> 'A')\")), df)\n+        }\n       }\n     }\n   }\n@@ -1042,7 +1049,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     testMigration(fromTsType = \"TIMESTAMP_MICROS\", toTsType = \"INT96\")\n   }\n \n-  test(\"SPARK-34212 Parquet should read decimals correctly\") {\n+  test(\"SPARK-34212 Parquet should read decimals correctly\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     def readParquet(schema: String, path: File): DataFrame = {\n       spark.read.schema(schema).parquet(path.toString)\n     }\n@@ -1060,7 +1068,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n         checkAnswer(readParquet(schema2, path), df)\n       }\n \n-      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\") {\n+      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n+          \"spark.comet.enabled\" -> \"false\") {\n         val schema1 = \"a DECIMAL(3, 2), b DECIMAL(18, 3), c DECIMAL(37, 3)\"\n         checkAnswer(readParquet(schema1, path), df)\n         val schema2 = \"a DECIMAL(3, 0), b DECIMAL(18, 1), c DECIMAL(37, 1)\"\n@@ -1084,7 +1093,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n       val df = sql(s\"SELECT 1 a, 123456 b, ${Int.MaxValue.toLong * 10} c, CAST('1.2' AS BINARY) d\")\n       df.write.parquet(path.toString)\n \n-      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\") {\n+      withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n+          \"spark.comet.enabled\" -> \"false\") {\n         checkAnswer(readParquet(\"a DECIMAL(3, 2)\", path), sql(\"SELECT 1.00\"))\n         checkAnswer(readParquet(\"a DECIMAL(11, 2)\", path), sql(\"SELECT 1.00\"))\n         checkAnswer(readParquet(\"b DECIMAL(3, 2)\", path), Row(null))\n@@ -1131,7 +1141,8 @@ abstract class ParquetQuerySuite extends QueryTest with ParquetTest with SharedS\n     }\n   }\n \n-  test(\"row group 
skipping doesn't overflow when reading into larger type\") {\n+  test(\"row group skipping doesn't overflow when reading into larger type\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     withTempPath { path =>\n       Seq(0).toDF(\"a\").write.parquet(path.toString)\n       withAllParquetReaders {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala\nindex 30503af0fab..1491f4bc2d5 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala\n@@ -21,7 +21,7 @@ import java.nio.file.{Files, Paths, StandardCopyOption}\n import java.sql.{Date, Timestamp}\n \n import org.apache.spark.{SPARK_VERSION_SHORT, SparkConf, SparkException, SparkUpgradeException}\n-import org.apache.spark.sql.{QueryTest, Row, SPARK_LEGACY_DATETIME_METADATA_KEY, SPARK_LEGACY_INT96_METADATA_KEY, SPARK_TIMEZONE_METADATA_KEY}\n+import org.apache.spark.sql.{IgnoreCometSuite, QueryTest, Row, SPARK_LEGACY_DATETIME_METADATA_KEY, SPARK_LEGACY_INT96_METADATA_KEY, SPARK_TIMEZONE_METADATA_KEY}\n import org.apache.spark.sql.catalyst.util.DateTimeTestUtils\n import org.apache.spark.sql.internal.{LegacyBehaviorPolicy, SQLConf}\n import org.apache.spark.sql.internal.LegacyBehaviorPolicy.{CORRECTED, EXCEPTION, LEGACY}\n@@ -30,9 +30,11 @@ import org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType.{INT96,\n import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.tags.SlowSQLTest\n \n+// Comet is disabled for this suite because it doesn't support datetime rebase mode\n abstract class ParquetRebaseDatetimeSuite\n   extends QueryTest\n   with ParquetTest\n+  with IgnoreCometSuite\n   with SharedSparkSession {\n \n   import testImplicits._\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala\nindex 08fd8a9ecb5..27aee839b8c 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRowIndexSuite.scala\n@@ -20,6 +20,7 @@ import java.io.File\n \n import scala.jdk.CollectionConverters._\n \n+import org.apache.comet.CometConf\n import org.apache.hadoop.fs.Path\n import org.apache.parquet.column.ParquetProperties._\n import org.apache.parquet.hadoop.{ParquetFileReader, ParquetOutputFormat}\n@@ -27,6 +28,7 @@ import org.apache.parquet.hadoop.ParquetWriter.DEFAULT_BLOCK_SIZE\n \n import org.apache.spark.SparkException\n import org.apache.spark.sql.QueryTest\n+import org.apache.spark.sql.comet.{CometBatchScanExec, CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.execution.FileSourceScanExec\n import org.apache.spark.sql.execution.datasources.FileFormat\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n@@ -245,6 +247,17 @@ class ParquetRowIndexSuite extends QueryTest with SharedSparkSession {\n             case f: FileSourceScanExec =>\n               numPartitions += f.inputRDD.partitions.length\n               numOutputRows += f.metrics(\"numOutputRows\").value\n+        
    case b: CometScanExec =>\n+              numPartitions += b.inputRDD.partitions.length\n+              numOutputRows += b.metrics(\"numOutputRows\").value\n+            case b: CometBatchScanExec =>\n+              numPartitions += b.inputRDD.partitions.length\n+              numOutputRows += b.metrics(\"numOutputRows\").value\n+            case b: CometNativeScanExec =>\n+              numPartitions +=\n+                b.originalPlan.inputRDD.partitions.length\n+              numOutputRows +=\n+                b.metrics(\"numOutputRows\").value\n             case _ =>\n           }\n           assert(numPartitions > 0)\n@@ -303,6 +316,12 @@ class ParquetRowIndexSuite extends QueryTest with SharedSparkSession {\n     val conf = RowIndexTestConf(useDataSourceV2 = useDataSourceV2)\n \n     test(s\"invalid row index column type - ${conf.desc}\") {\n+      // https://github.com/apache/datafusion-comet/issues/3886\n+      // Comet throws RuntimeException instead of SparkException\n+      assume(!Seq(\n+        CometConf.SCAN_NATIVE_DATAFUSION,\n+        CometConf.SCAN_AUTO\n+      ).contains(CometConf.COMET_NATIVE_SCAN_IMPL.get()))\n       withSQLConf(conf.sqlConfs: _*) {\n         withTempPath{ path =>\n           val df = spark.range(0, 10, 1, 1).toDF(\"id\")\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala\nindex 5c0b7def039..151184bc98c 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaPruningSuite.scala\n@@ -20,6 +20,7 @@ package org.apache.spark.sql.execution.datasources.parquet\n import org.apache.spark.SparkConf\n import org.apache.spark.sql.DataFrame\n import org.apache.spark.sql.catalyst.parser.CatalystSqlParser\n+import org.apache.spark.sql.comet.CometBatchScanExec\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.datasources.SchemaPruningSuite\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n@@ -56,6 +57,7 @@ class ParquetV2SchemaPruningSuite extends ParquetSchemaPruningSuite {\n     val fileSourceScanSchemata =\n       collect(df.queryExecution.executedPlan) {\n         case scan: BatchScanExec => scan.scan.asInstanceOf[ParquetScan].readDataSchema\n+        case scan: CometBatchScanExec => scan.scan.asInstanceOf[ParquetScan].readDataSchema\n       }\n     assert(fileSourceScanSchemata.size === expectedSchemaCatalogStrings.size,\n       s\"Found ${fileSourceScanSchemata.size} file sources in dataframe, \" +\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala\nindex 0acb21f3e6f..1f9c3fd13fc 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala\n@@ -27,7 +27,7 @@ import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName\n import org.apache.parquet.schema.Type._\n \n import org.apache.spark.SparkException\n-import org.apache.spark.sql.{AnalysisException, Row}\n+import org.apache.spark.sql.{AnalysisException, IgnoreComet, 
IgnoreCometNativeDataFusion, Row}\n import org.apache.spark.sql.catalyst.expressions.Cast.toSQLType\n import org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException\n import org.apache.spark.sql.functions.desc\n@@ -1037,7 +1037,8 @@ class ParquetSchemaSuite extends ParquetSchemaTest {\n     e\n   }\n \n-  test(\"schema mismatch failure error message for parquet reader\") {\n+  test(\"schema mismatch failure error message for parquet reader\",\n+      IgnoreComet(\"Comet doesn't work with vectorizedReaderEnabled = false\")) {\n     withTempPath { dir =>\n       val e = testSchemaMismatch(dir.getCanonicalPath, vectorizedReaderEnabled = false)\n       val expectedMessage = \"Encountered error while reading file\"\n@@ -1046,7 +1047,8 @@ class ParquetSchemaSuite extends ParquetSchemaTest {\n     }\n   }\n \n-  test(\"schema mismatch failure error message for parquet vectorized reader\") {\n+  test(\"schema mismatch failure error message for parquet vectorized reader\",\n+      IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     withTempPath { dir =>\n       val e = testSchemaMismatch(dir.getCanonicalPath, vectorizedReaderEnabled = true)\n       assert(e.getCause.isInstanceOf[SchemaColumnConvertNotSupportedException])\n@@ -1079,7 +1081,8 @@ class ParquetSchemaSuite extends ParquetSchemaTest {\n     }\n   }\n \n-  test(\"SPARK-45604: schema mismatch failure error on timestamp_ntz to array<timestamp_ntz>\") {\n+  test(\"SPARK-45604: schema mismatch failure error on timestamp_ntz to array<timestamp_ntz>\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3720\")) {\n     import testImplicits._\n \n     withTempPath { dir =>\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTypeWideningSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTypeWideningSuite.scala\nindex 09ed6955a51..98e313cddd4 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTypeWideningSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetTypeWideningSuite.scala\n@@ -24,7 +24,7 @@ import org.apache.parquet.format.converter.ParquetMetadataConverter\n import org.apache.parquet.hadoop.{ParquetFileReader, ParquetOutputFormat}\n \n import org.apache.spark.SparkException\n-import org.apache.spark.sql.{DataFrame, QueryTest, Row}\n+import org.apache.spark.sql.{DataFrame, IgnoreCometNativeDataFusion, QueryTest, Row}\n import org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n import org.apache.spark.sql.execution.datasources.SchemaColumnConvertNotSupportedException\n import org.apache.spark.sql.functions.col\n@@ -65,7 +65,9 @@ class ParquetTypeWideningSuite\n     withClue(\n       s\"with dictionary encoding '$dictionaryEnabled' with timestamp rebase mode \" +\n         s\"'$timestampRebaseMode''\") {\n-      withAllParquetWriters {\n+      // TODO: Comet cannot read DELTA_BINARY_PACKED created by V2 writer\n+      // https://github.com/apache/datafusion-comet/issues/574\n+      // withAllParquetWriters {\n         withTempDir { dir =>\n           val expected =\n             writeParquetFiles(dir, values, fromType, dictionaryEnabled, timestampRebaseMode)\n@@ -86,7 +88,7 @@ class ParquetTypeWideningSuite\n             }\n           }\n         }\n-      }\n+      // }\n     }\n   }\n \n@@ -190,10 +192,16 @@ class ParquetTypeWideningSuite\n  
     (Seq(\"1\", \"2\", Short.MinValue.toString), ShortType, DoubleType),\n       (Seq(\"1\", \"2\", Int.MinValue.toString), IntegerType, DoubleType),\n       (Seq(\"1.23\", \"10.34\"), FloatType, DoubleType),\n-      (Seq(\"2020-01-01\", \"2020-01-02\", \"1312-02-27\"), DateType, TimestampNTZType)\n+      // TODO: Comet cannot handle older than \"1582-10-15\"\n+      (Seq(\"2020-01-01\", \"2020-01-02\"/* , \"1312-02-27\" */), DateType, TimestampNTZType)\n     )\n+    wideningTags: Seq[org.scalatest.Tag] =\n+      if (fromType == DateType && toType == TimestampNTZType) {\n+        Seq(IgnoreCometNativeDataFusion(\n+          \"https://github.com/apache/datafusion-comet/issues/3321\"))\n+      } else Seq.empty\n   }\n-  test(s\"parquet widening conversion $fromType -> $toType\") {\n+  test(s\"parquet widening conversion $fromType -> $toType\", wideningTags: _*) {\n     checkAllParquetReaders(values, fromType, toType, expectError = false)\n   }\n \n@@ -231,7 +239,8 @@ class ParquetTypeWideningSuite\n       (Seq(\"2020-01-01\", \"2020-01-02\", \"1312-02-27\"), DateType, TimestampType)\n     )\n   }\n-  test(s\"unsupported parquet conversion $fromType -> $toType\") {\n+  test(s\"unsupported parquet conversion $fromType -> $toType\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3321\")) {\n     checkAllParquetReaders(values, fromType, toType, expectError = true)\n   }\n \n@@ -257,7 +266,8 @@ class ParquetTypeWideningSuite\n       (Seq(\"1\", \"2\"), LongType, DecimalType(LongDecimal.precision, 1))\n     )\n   }\n-  test(s\"unsupported parquet conversion $fromType -> $toType\") {\n+  test(s\"unsupported parquet conversion $fromType -> $toType\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3321\")) {\n     checkAllParquetReaders(values, fromType, toType,\n       expectError =\n       // parquet-mr allows reading decimals into a smaller precision decimal type without\n@@ -271,7 +281,8 @@ class ParquetTypeWideningSuite\n       (Seq(\"2020-01-01\", \"2020-01-02\", \"1312-02-27\"), TimestampNTZType, DateType))\n     outputTimestampType <- ParquetOutputTimestampType.values\n   }\n-  test(s\"unsupported parquet timestamp conversion $fromType ($outputTimestampType) -> $toType\") {\n+  test(s\"unsupported parquet timestamp conversion $fromType ($outputTimestampType) -> $toType\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3321\")) {\n     withSQLConf(\n       SQLConf.PARQUET_OUTPUT_TIMESTAMP_TYPE.key -> outputTimestampType.toString,\n       SQLConf.PARQUET_INT96_REBASE_MODE_IN_WRITE.key -> LegacyBehaviorPolicy.CORRECTED.toString\n@@ -291,7 +302,8 @@ class ParquetTypeWideningSuite\n       Seq(7 -> 5, 10 -> 5, 20 -> 5, 12 -> 10, 20 -> 10, 22 -> 20)\n   }\n   test(\n-    s\"parquet decimal precision change Decimal($fromPrecision, 2) -> Decimal($toPrecision, 2)\") {\n+    s\"parquet decimal precision change Decimal($fromPrecision, 2) -> Decimal($toPrecision, 2)\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3321\")) {\n     checkAllParquetReaders(\n       values = Seq(\"1.23\", \"10.34\"),\n       fromType = DecimalType(fromPrecision, 2),\n@@ -322,7 +334,8 @@ class ParquetTypeWideningSuite\n       Seq((5, 2) -> (6, 4), (10, 4) -> (12, 7), (20, 5) -> (22, 8))\n   }\n   test(s\"parquet decimal precision and scale change Decimal($fromPrecision, $fromScale) -> \" +\n-    s\"Decimal($toPrecision, $toScale)\"\n+    s\"Decimal($toPrecision, 
$toScale)\",\n+    IgnoreCometNativeDataFusion(\"https://github.com/apache/datafusion-comet/issues/3321\")\n   ) {\n     checkAllParquetReaders(\n       values = Seq(\"1.23\", \"10.34\"),\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetVariantShreddingSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetVariantShreddingSuite.scala\nindex 458b5dfc0f4..d209f3c85bc 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetVariantShreddingSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetVariantShreddingSuite.scala\n@@ -26,7 +26,7 @@ import org.apache.parquet.hadoop.util.HadoopInputFile\n import org.apache.parquet.schema.{LogicalTypeAnnotation, PrimitiveType}\n import org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName\n \n-import org.apache.spark.sql.{QueryTest, Row}\n+import org.apache.spark.sql.{IgnoreCometSuite, QueryTest, Row}\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType\n import org.apache.spark.sql.test.SharedSparkSession\n@@ -35,7 +35,9 @@ import org.apache.spark.unsafe.types.VariantVal\n /**\n  * Test shredding Variant values in the Parquet reader/writer.\n  */\n-class ParquetVariantShreddingSuite extends QueryTest with ParquetTest with SharedSparkSession {\n+class ParquetVariantShreddingSuite extends QueryTest with ParquetTest with SharedSparkSession\n+    // TODO enable tests once https://github.com/apache/datafusion-comet/issues/2209 is fixed\n+    with IgnoreCometSuite {\n \n   private def testWithTempDir(name: String)(block: File => Unit): Unit = test(name) {\n     withTempDir { dir =>\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala\nindex b8f3ea3c6f3..bbd44221288 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/debug/DebuggingSuite.scala\n@@ -20,6 +20,7 @@ package org.apache.spark.sql.execution.debug\n import java.io.ByteArrayOutputStream\n \n import org.apache.spark.rdd.RDD\n+import org.apache.spark.sql.IgnoreComet\n import org.apache.spark.sql.catalyst.InternalRow\n import org.apache.spark.sql.catalyst.expressions.Attribute\n import org.apache.spark.sql.catalyst.expressions.codegen.CodegenContext\n@@ -125,7 +126,8 @@ class DebuggingSuite extends DebuggingSuiteBase with DisableAdaptiveExecutionSui\n          | id LongType: {}\"\"\".stripMargin))\n   }\n \n-  test(\"SPARK-28537: DebugExec cannot debug columnar related queries\") {\n+  test(\"SPARK-28537: DebugExec cannot debug columnar related queries\",\n+      IgnoreComet(\"Comet does not use FileScan\")) {\n     withTempPath { workDir =>\n       val workDirPath = workDir.getAbsolutePath\n       val input = spark.range(5).toDF(\"id\")\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala\nindex 0dd90925d3c..7d53ec845ef 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/metric/SQLMetricsSuite.scala\n@@ -46,8 +46,10 @@ import org.apache.spark.sql.util.QueryExecutionListener\n import 
org.apache.spark.util.{AccumulatorContext, JsonProtocol}\n \n // Disable AQE because metric info is different with AQE on/off\n+// This suite makes assertions on the metrics of physical operators.\n+// It is disabled for Comet because the metrics differ when Comet is enabled.\n class SQLMetricsSuite extends SharedSparkSession with SQLMetricsTestUtils\n-  with DisableAdaptiveExecutionSuite {\n+  with DisableAdaptiveExecutionSuite with IgnoreCometSuite {\n   import testImplicits._\n \n   /**\n@@ -765,7 +767,8 @@ class SQLMetricsSuite extends SharedSparkSession with SQLMetricsTestUtils\n     }\n   }\n \n-  test(\"SPARK-26327: FileSourceScanExec metrics\") {\n+  test(\"SPARK-26327: FileSourceScanExec metrics\",\n+      IgnoreComet(\"Spark uses row-based Parquet reader while Comet is vectorized\")) {\n     withTable(\"testDataForScan\") {\n       spark.range(10).selectExpr(\"id\", \"id % 3 as p\")\n         .write.partitionBy(\"p\").saveAsTable(\"testDataForScan\")\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala\nindex 0ab8691801d..b18a5bea944 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/python/ExtractPythonUDFsSuite.scala\n@@ -18,6 +18,7 @@\n package org.apache.spark.sql.execution.python\n \n import org.apache.spark.sql.catalyst.plans.logical.{ArrowEvalPython, BatchEvalPython, Limit, LocalLimit}\n+import org.apache.spark.sql.comet._\n import org.apache.spark.sql.execution.{FileSourceScanExec, SparkPlan, SparkPlanTest}\n import org.apache.spark.sql.execution.datasources.v2.BatchScanExec\n import org.apache.spark.sql.execution.datasources.v2.parquet.ParquetScan\n@@ -108,6 +109,8 @@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: FileSourceScanExec => scan\n+            case scan: CometScanExec => scan\n+            case scan: CometNativeScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           assert(scanNodes.head.output.map(_.name) == Seq(\"a\"))\n@@ -120,11 +123,18 @@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: FileSourceScanExec => scan\n+            case scan: CometScanExec => scan\n+            case scan: CometNativeScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           // $\"a\" is not null and $\"a\" > 1\n-          assert(scanNodes.head.dataFilters.length == 2)\n-          assert(scanNodes.head.dataFilters.flatMap(_.references.map(_.name)).distinct == Seq(\"a\"))\n+          val dataFilters = scanNodes.head match {\n+            case scan: FileSourceScanExec => scan.dataFilters\n+            case scan: CometScanExec => scan.dataFilters\n+            case scan: CometNativeScanExec => scan.dataFilters\n+          }\n+          assert(dataFilters.length == 2)\n+          assert(dataFilters.flatMap(_.references.map(_.name)).distinct == Seq(\"a\"))\n         }\n       }\n     }\n@@ -145,6 +155,7 @@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: BatchScanExec => scan\n+           
 case scan: CometBatchScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           assert(scanNodes.head.output.map(_.name) == Seq(\"a\"))\n@@ -157,6 +168,7 @@ class ExtractPythonUDFsSuite extends SparkPlanTest with SharedSparkSession {\n \n           val scanNodes = query.queryExecution.executedPlan.collect {\n             case scan: BatchScanExec => scan\n+            case scan: CometBatchScanExec => scan\n           }\n           assert(scanNodes.length == 1)\n           // $\"a\" is not null and $\"a\" > 1\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala\nindex 7838e62013d..8fa09652921 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/execution/streaming/AsyncProgressTrackingMicroBatchExecutionSuite.scala\n@@ -37,8 +37,10 @@ import org.apache.spark.sql.streaming.{StreamingQuery, StreamingQueryException,\n import org.apache.spark.sql.streaming.util.StreamManualClock\n import org.apache.spark.util.Utils\n \n+// This suite is flaky with or without Comet when running in a GitHub workflow.\n+// Since the flakiness is unrelated to Comet, we disable it for now.\n class AsyncProgressTrackingMicroBatchExecutionSuite\n-  extends StreamTest with BeforeAndAfter with Matchers {\n+  extends StreamTest with BeforeAndAfter with Matchers with IgnoreCometSuite {\n \n   import testImplicits._\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala\nindex c4b09c4b289..75c3437788e 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala\n@@ -26,10 +26,11 @@ import org.apache.spark.sql.catalyst.expressions\n import org.apache.spark.sql.catalyst.expressions._\n import org.apache.spark.sql.catalyst.plans.physical.HashPartitioning\n import org.apache.spark.sql.catalyst.types.DataTypeUtils\n-import org.apache.spark.sql.execution.{FileSourceScanExec, SortExec, SparkPlan}\n+import org.apache.spark.sql.comet._\n+import org.apache.spark.sql.execution.{ColumnarToRowExec, FileSourceScanExec, SortExec, SparkPlan}\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanExec, AdaptiveSparkPlanHelper}\n import org.apache.spark.sql.execution.datasources.BucketingUtils\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.{ShuffleExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.joins.SortMergeJoinExec\n import org.apache.spark.sql.functions._\n import org.apache.spark.sql.internal.SQLConf\n@@ -103,12 +104,22 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n     }\n   }\n \n-  private def getFileScan(plan: SparkPlan): FileSourceScanExec = {\n-    val fileScan = collect(plan) { case f: FileSourceScanExec => f }\n+  private def getFileScan(plan: SparkPlan): SparkPlan = {\n+    val fileScan = collect(plan) {\n+      case f: FileSourceScanExec => f\n+      case f: CometScanExec => f\n+      case f: CometNativeScanExec => f\n+    }\n     assert(fileScan.nonEmpty, plan)\n     fileScan.head\n   }\n \n+  private def 
getBucketScan(plan: SparkPlan): Boolean = getFileScan(plan) match {\n+    case fs: FileSourceScanExec => fs.bucketedScan\n+    case bs: CometScanExec => bs.bucketedScan\n+    case ns: CometNativeScanExec => ns.bucketedScan\n+  }\n+\n   // To verify if the bucket pruning works, this function checks two conditions:\n   //   1) Check if the pruned buckets (before filtering) are empty.\n   //   2) Verify the final result is the same as the expected one\n@@ -157,7 +168,8 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n           val planWithoutBucketedScan = bucketedDataFrame.filter(filterCondition)\n             .queryExecution.executedPlan\n           val fileScan = getFileScan(planWithoutBucketedScan)\n-          assert(!fileScan.bucketedScan, s\"except no bucketed scan but found\\n$fileScan\")\n+          val bucketedScan = getBucketScan(planWithoutBucketedScan)\n+          assert(!bucketedScan, s\"expected no bucketed scan but found\\n$fileScan\")\n \n           val bucketColumnType = bucketedDataFrame.schema.apply(bucketColumnIndex).dataType\n           val rowsWithInvalidBuckets = fileScan.execute().filter(row => {\n@@ -454,28 +466,54 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n         val joinOperator = if (joined.sparkSession.sessionState.conf.adaptiveExecutionEnabled) {\n           val executedPlan =\n             joined.queryExecution.executedPlan.asInstanceOf[AdaptiveSparkPlanExec].executedPlan\n-          assert(executedPlan.isInstanceOf[SortMergeJoinExec])\n-          executedPlan.asInstanceOf[SortMergeJoinExec]\n+          executedPlan match {\n+            case s: SortMergeJoinExec => s\n+            case b: CometSortMergeJoinExec =>\n+              b.originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+          }\n         } else {\n           val executedPlan = joined.queryExecution.executedPlan\n-          assert(executedPlan.isInstanceOf[SortMergeJoinExec])\n-          executedPlan.asInstanceOf[SortMergeJoinExec]\n+          executedPlan match {\n+            case s: SortMergeJoinExec => s\n+            case ColumnarToRowExec(child) =>\n+              child.asInstanceOf[CometSortMergeJoinExec].originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case CometColumnarToRowExec(child) =>\n+              child.asInstanceOf[CometSortMergeJoinExec].originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case CometNativeColumnarToRowExec(child) =>\n+              child.asInstanceOf[CometSortMergeJoinExec].originalPlan match {\n+                case s: SortMergeJoinExec => s\n+                case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+              }\n+            case o => fail(s\"expected SortMergeJoinExec, but found\\n$o\")\n+          }\n         }\n \n         // check existence of shuffle\n         assert(\n-          joinOperator.left.exists(_.isInstanceOf[ShuffleExchangeExec]) == shuffleLeft,\n+          joinOperator.left.exists(op => op.isInstanceOf[ShuffleExchangeLike]) == shuffleLeft,\n           s\"expected shuffle in plan to 
be $shuffleLeft but found\\n${joinOperator.left}\")\n         assert(\n-          joinOperator.right.exists(_.isInstanceOf[ShuffleExchangeExec]) == shuffleRight,\n+          joinOperator.right.exists(op => op.isInstanceOf[ShuffleExchangeLike]) == shuffleRight,\n           s\"expected shuffle in plan to be $shuffleRight but found\\n${joinOperator.right}\")\n \n         // check existence of sort\n         assert(\n-          joinOperator.left.exists(_.isInstanceOf[SortExec]) == sortLeft,\n+          joinOperator.left.exists(op => op.isInstanceOf[SortExec] || op.isInstanceOf[CometExec] &&\n+            op.asInstanceOf[CometExec].originalPlan.isInstanceOf[SortExec]) == sortLeft,\n           s\"expected sort in the left child to be $sortLeft but found\\n${joinOperator.left}\")\n         assert(\n-          joinOperator.right.exists(_.isInstanceOf[SortExec]) == sortRight,\n+          joinOperator.right.exists(op => op.isInstanceOf[SortExec] || op.isInstanceOf[CometExec] &&\n+            op.asInstanceOf[CometExec].originalPlan.isInstanceOf[SortExec]) == sortRight,\n           s\"expected sort in the right child to be $sortRight but found\\n${joinOperator.right}\")\n \n         // check the output partitioning\n@@ -838,11 +876,11 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n       df1.write.format(\"parquet\").bucketBy(8, \"i\").saveAsTable(\"bucketed_table\")\n \n       val scanDF = spark.table(\"bucketed_table\").select(\"j\")\n-      assert(!getFileScan(scanDF.queryExecution.executedPlan).bucketedScan)\n+      assert(!getBucketScan(scanDF.queryExecution.executedPlan))\n       checkAnswer(scanDF, df1.select(\"j\"))\n \n       val aggDF = spark.table(\"bucketed_table\").groupBy(\"j\").agg(max(\"k\"))\n-      assert(!getFileScan(aggDF.queryExecution.executedPlan).bucketedScan)\n+      assert(!getBucketScan(aggDF.queryExecution.executedPlan))\n       checkAnswer(aggDF, df1.groupBy(\"j\").agg(max(\"k\")))\n     }\n   }\n@@ -1031,15 +1069,24 @@ abstract class BucketedReadSuite extends QueryTest with SQLTestUtils with Adapti\n           Seq(true, false).foreach { aqeEnabled =>\n             withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqeEnabled.toString) {\n               val plan = sql(query).queryExecution.executedPlan\n-              val shuffles = collect(plan) { case s: ShuffleExchangeExec => s }\n+              val shuffles = collect(plan) { case s: ShuffleExchangeLike => s }\n               assert(shuffles.length == expectedNumShuffles)\n \n               val scans = collect(plan) {\n                 case f: FileSourceScanExec if f.optionalNumCoalescedBuckets.isDefined => f\n+                case b: CometScanExec if b.optionalNumCoalescedBuckets.isDefined => b\n+                case b: CometNativeScanExec if b.optionalNumCoalescedBuckets.isDefined => b\n               }\n               if (expectedCoalescedNumBuckets.isDefined) {\n                 assert(scans.length == 1)\n-                assert(scans.head.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+                scans.head match {\n+                  case f: FileSourceScanExec =>\n+                    assert(f.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+                  case b: CometScanExec =>\n+                    assert(b.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+                  case b: CometNativeScanExec =>\n+                    assert(b.optionalNumCoalescedBuckets == expectedCoalescedNumBuckets)\n+                }\n          
     } else {\n                 assert(scans.isEmpty)\n               }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala\nindex 95c2fcbd7b5..e2d4a20c5d9 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala\n@@ -20,6 +20,7 @@ package org.apache.spark.sql.sources\n import java.io.File\n \n import org.apache.spark.SparkException\n+import org.apache.spark.sql.IgnoreCometSuite\n import org.apache.spark.sql.catalyst.TableIdentifier\n import org.apache.spark.sql.catalyst.catalog.{BucketSpec, CatalogTableType}\n import org.apache.spark.sql.catalyst.parser.ParseException\n@@ -27,7 +28,10 @@ import org.apache.spark.sql.internal.SQLConf.BUCKETING_MAX_BUCKETS\n import org.apache.spark.sql.test.SharedSparkSession\n import org.apache.spark.util.Utils\n \n-class CreateTableAsSelectSuite extends DataSourceTest with SharedSparkSession {\n+// This suite is flaky with or without Comet when running in a GitHub workflow.\n+// Since the flakiness is unrelated to Comet, we disable it for now.\n+class CreateTableAsSelectSuite extends DataSourceTest with SharedSparkSession\n+    with IgnoreCometSuite {\n   import testImplicits._\n \n   protected override lazy val sql = spark.sql _\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala\nindex c5c56f081d8..6cc51f93b4f 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala\n@@ -18,6 +18,7 @@\n package org.apache.spark.sql.sources\n \n import org.apache.spark.sql.QueryTest\n+import org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\n import org.apache.spark.sql.execution.FileSourceScanExec\n import org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.internal.SQLConf\n@@ -68,7 +69,11 @@ abstract class DisableUnnecessaryBucketedScanSuite\n \n     def checkNumBucketedScan(query: String, expectedNumBucketedScan: Int): Unit = {\n       val plan = sql(query).queryExecution.executedPlan\n-      val bucketedScan = collect(plan) { case s: FileSourceScanExec if s.bucketedScan => s }\n+      val bucketedScan = collect(plan) {\n+        case s: FileSourceScanExec if s.bucketedScan => s\n+        case s: CometScanExec if s.bucketedScan => s\n+        case s: CometNativeScanExec if s.bucketedScan => s\n+      }\n       assert(bucketedScan.length == expectedNumBucketedScan)\n     }\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala\nindex 9742a004545..4e0417d730a 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/FileStreamSinkSuite.scala\n@@ -34,6 +34,7 @@ import org.apache.spark.paths.SparkPath\n import org.apache.spark.scheduler.{SparkListener, SparkListenerTaskEnd}\n import org.apache.spark.sql.{AnalysisException, DataFrame}\n import 
org.apache.spark.sql.catalyst.util.stringToFile\n+import org.apache.spark.sql.comet.CometBatchScanExec\n import org.apache.spark.sql.execution.DataSourceScanExec\n import org.apache.spark.sql.execution.datasources._\n import org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat\n@@ -786,6 +787,8 @@ class FileStreamSinkV2Suite extends FileStreamSinkSuite {\n       val fileScan = df.queryExecution.executedPlan.collect {\n         case batch: BatchScanExec if batch.scan.isInstanceOf[FileScan] =>\n           batch.scan.asInstanceOf[FileScan]\n+        case batch: CometBatchScanExec if batch.scan.isInstanceOf[FileScan] =>\n+          batch.scan.asInstanceOf[FileScan]\n       }.headOption.getOrElse {\n         fail(s\"No FileScan in query\\n${df.queryExecution}\")\n       }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala\nindex b0967d5ffdf..3d567f913de 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala\n@@ -40,6 +40,7 @@ import org.apache.spark.sql.catalyst.types.DataTypeUtils.toAttributes\n import org.apache.spark.sql.catalyst.util.DateTimeUtils\n import org.apache.spark.sql.classic.{DataFrame, Dataset}\n import org.apache.spark.sql.classic.ClassicConversions._\n+import org.apache.spark.sql.comet.CometLocalLimitExec\n import org.apache.spark.sql.execution.{LocalLimitExec, SimpleMode, SparkPlan}\n import org.apache.spark.sql.execution.command.ExplainCommand\n import org.apache.spark.sql.execution.streaming._\n@@ -1118,11 +1119,12 @@ class StreamSuite extends StreamTest {\n       val localLimits = execPlan.collect {\n         case l: LocalLimitExec => l\n         case l: StreamingLocalLimitExec => l\n+        case l: CometLocalLimitExec => l\n       }\n \n       require(\n         localLimits.size == 1,\n-        s\"Cant verify local limit optimization with this plan:\\n$execPlan\")\n+        s\"Can't verify local limit optimization (found ${localLimits.size} local limits) with this plan:\\n$execPlan\")\n \n       if (expectStreamingLimit) {\n         assert(\n@@ -1130,7 +1132,8 @@ class StreamSuite extends StreamTest {\n           s\"Local limit was not StreamingLocalLimitExec:\\n$execPlan\")\n       } else {\n         assert(\n-          localLimits.head.isInstanceOf[LocalLimitExec],\n+          localLimits.head.isInstanceOf[LocalLimitExec] ||\n+            localLimits.head.isInstanceOf[CometLocalLimitExec],\n           s\"Local limit was not LocalLimitExec:\\n$execPlan\")\n       }\n     }\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala\nindex b4c4ec7acbf..20579284856 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingAggregationDistributionSuite.scala\n@@ -23,6 +23,7 @@ import org.apache.commons.io.FileUtils\n import org.scalatest.Assertions\n \n import org.apache.spark.sql.catalyst.plans.physical.UnspecifiedDistribution\n+import org.apache.spark.sql.comet.CometHashAggregateExec\n import org.apache.spark.sql.execution.aggregate.BaseAggregateExec\n import org.apache.spark.sql.execution.streaming.{MemoryStream, StateStoreRestoreExec, StateStoreSaveExec}\n import 
org.apache.spark.sql.functions.count\n@@ -67,6 +68,7 @@ class StreamingAggregationDistributionSuite extends StreamTest\n         // verify aggregations in between, except partial aggregation\n         val allAggregateExecs = query.lastExecution.executedPlan.collect {\n           case a: BaseAggregateExec => a\n+          case c: CometHashAggregateExec => c.originalPlan\n         }\n \n         val aggregateExecsWithoutPartialAgg = allAggregateExecs.filter {\n@@ -201,6 +203,7 @@ class StreamingAggregationDistributionSuite extends StreamTest\n         // verify aggregations in between, except partial aggregation\n         val allAggregateExecs = executedPlan.collect {\n           case a: BaseAggregateExec => a\n+          case c: CometHashAggregateExec => c.originalPlan\n         }\n \n         val aggregateExecsWithoutPartialAgg = allAggregateExecs.filter {\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala\nindex d3c44dcead3..8096bce4436 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala\n@@ -33,7 +33,7 @@ import org.apache.spark.sql.{DataFrame, Row, SparkSession}\n import org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression}\n import org.apache.spark.sql.catalyst.plans.physical.HashPartitioning\n import org.apache.spark.sql.execution.datasources.v2.state.StateSourceOptions\n-import org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\n+import org.apache.spark.sql.execution.exchange.ShuffleExchangeLike\n import org.apache.spark.sql.execution.streaming.{MemoryStream, StatefulOperatorStateInfo, StreamingSymmetricHashJoinExec, StreamingSymmetricHashJoinHelper}\n import org.apache.spark.sql.execution.streaming.state.{RocksDBStateStoreProvider, StateStore, StateStoreProviderId}\n import org.apache.spark.sql.functions._\n@@ -642,14 +642,28 @@ class StreamingInnerJoinSuite extends StreamingJoinSuite {\n \n         val numPartitions = spark.sessionState.conf.getConf(SQLConf.SHUFFLE_PARTITIONS)\n \n-        assert(query.lastExecution.executedPlan.collect {\n-          case j @ StreamingSymmetricHashJoinExec(_, _, _, _, _, _, _, _, _,\n-            ShuffleExchangeExec(opA: HashPartitioning, _, _, _),\n-            ShuffleExchangeExec(opB: HashPartitioning, _, _, _))\n-              if partitionExpressionsColumns(opA.expressions) === Seq(\"a\", \"b\")\n-                && partitionExpressionsColumns(opB.expressions) === Seq(\"a\", \"b\")\n-                && opA.numPartitions == numPartitions && opB.numPartitions == numPartitions => j\n-        }.size == 1)\n+        val join = query.lastExecution.executedPlan.collect {\n+          case j: StreamingSymmetricHashJoinExec => j\n+        }.head\n+        val opA = join.left.collect {\n+          case s: ShuffleExchangeLike\n+            if s.outputPartitioning.isInstanceOf[HashPartitioning] &&\n+              partitionExpressionsColumns(\n+                s.outputPartitioning\n+                  .asInstanceOf[HashPartitioning].expressions) === Seq(\"a\", \"b\") =>\n+            s.outputPartitioning\n+              .asInstanceOf[HashPartitioning]\n+        }.head\n+        val opB = join.right.collect {\n+          case s: ShuffleExchangeLike\n+            if s.outputPartitioning.isInstanceOf[HashPartitioning] &&\n+              partitionExpressionsColumns(\n+                
s.outputPartitioning\n+                  .asInstanceOf[HashPartitioning].expressions) === Seq(\"a\", \"b\") =>\n+            s.outputPartitioning\n+              .asInstanceOf[HashPartitioning]\n+        }.head\n+        assert(opA.numPartitions == numPartitions && opB.numPartitions == numPartitions)\n       })\n   }\n \ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala\nindex e33d4f1f6ab..ce0a21d1e9d 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingQuerySuite.scala\n@@ -44,7 +44,7 @@ import org.apache.spark.sql.catalyst.types.DataTypeUtils.toAttributes\n import org.apache.spark.sql.classic.{DataFrame, Dataset}\n import org.apache.spark.sql.connector.read.InputPartition\n import org.apache.spark.sql.connector.read.streaming.{Offset => OffsetV2, ReadLimit}\n-import org.apache.spark.sql.execution.exchange.{REQUIRED_BY_STATEFUL_OPERATOR, ReusedExchangeExec, ShuffleExchangeExec}\n+import org.apache.spark.sql.execution.exchange.{REQUIRED_BY_STATEFUL_OPERATOR, ReusedExchangeExec, ShuffleExchangeLike}\n import org.apache.spark.sql.execution.streaming._\n import org.apache.spark.sql.execution.streaming.sources.{MemorySink, TestForeachWriter}\n import org.apache.spark.sql.functions._\n@@ -1462,7 +1462,7 @@ class StreamingQuerySuite extends StreamTest with BeforeAndAfter with Logging wi\n       CheckAnswer((1, 2), (2, 2), (3, 2)),\n       Execute { qe =>\n         val shuffleOpt = qe.lastExecution.executedPlan.collect {\n-          case s: ShuffleExchangeExec => s\n+          case s: ShuffleExchangeLike => s\n         }\n \n         assert(shuffleOpt.nonEmpty, \"No shuffle exchange found in the query plan\")\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala b/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala\nindex 86c4e49f6f6..2e639e5f38d 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/streaming/test/DataStreamTableAPISuite.scala\n@@ -22,7 +22,7 @@ import java.util\n \n import org.scalatest.BeforeAndAfter\n \n-import org.apache.spark.sql.{AnalysisException, Row, SaveMode}\n+import org.apache.spark.sql.{AnalysisException, IgnoreComet, Row, SaveMode}\n import org.apache.spark.sql.catalyst.TableIdentifier\n import org.apache.spark.sql.catalyst.analysis.TableAlreadyExistsException\n import org.apache.spark.sql.catalyst.catalog.{CatalogStorageFormat, CatalogTable, CatalogTableType}\n@@ -359,7 +359,8 @@ class DataStreamTableAPISuite extends StreamTest with BeforeAndAfter {\n     }\n   }\n \n-  test(\"explain with table on DSv1 data source\") {\n+  test(\"explain with table on DSv1 data source\",\n+      IgnoreComet(\"Comet explain output is different\")) {\n     val tblSourceName = \"tbl_src\"\n     val tblTargetName = \"tbl_target\"\n     val tblSourceQualified = s\"default.$tblSourceName\"\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala b/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala\nindex f0f3f94b811..f77b54dcef9 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/test/SQLTestUtils.scala\n@@ -27,13 +27,14 @@ import 
scala.jdk.CollectionConverters._\n import scala.language.implicitConversions\n import scala.util.control.NonFatal\n \n+import org.apache.comet.CometConf\n import org.apache.hadoop.fs.Path\n import org.scalactic.source.Position\n import org.scalatest.{BeforeAndAfterAll, Suite, Tag}\n import org.scalatest.concurrent.Eventually\n \n import org.apache.spark.SparkFunSuite\n-import org.apache.spark.sql.{AnalysisException, Row}\n+import org.apache.spark.sql.{AnalysisException, IgnoreComet, IgnoreCometNativeDataFusion, IgnoreCometNativeIcebergCompat, IgnoreCometNativeScan, Row}\n import org.apache.spark.sql.catalyst.FunctionIdentifier\n import org.apache.spark.sql.catalyst.analysis.NoSuchTableException\n import org.apache.spark.sql.catalyst.catalog.SessionCatalog.DEFAULT_DATABASE\n@@ -42,6 +43,7 @@ import org.apache.spark.sql.catalyst.plans.PlanTestBase\n import org.apache.spark.sql.catalyst.plans.logical.LogicalPlan\n import org.apache.spark.sql.catalyst.util._\n import org.apache.spark.sql.classic.{ClassicConversions, ColumnConversions, ColumnNodeToExpressionConverter, DataFrame, Dataset, SparkSession, SQLImplicits}\n+import org.apache.spark.sql.comet.{CometFilterExec, CometProjectExec}\n import org.apache.spark.sql.execution.FilterExec\n import org.apache.spark.sql.execution.adaptive.DisableAdaptiveExecution\n import org.apache.spark.sql.execution.datasources.DataSourceUtils\n@@ -121,6 +123,34 @@ private[sql] trait SQLTestUtils extends SparkFunSuite with SQLTestUtilsBase with\n \n   override protected def test(testName: String, testTags: Tag*)(testFun: => Any)\n       (implicit pos: Position): Unit = {\n+    // Check Comet skip tags first, before DisableAdaptiveExecution handling\n+    if (isCometEnabled && testTags.exists(_.isInstanceOf[IgnoreComet])) {\n+      ignore(testName + \" (disabled when Comet is on)\", testTags: _*)(testFun)\n+      return\n+    }\n+    if (isCometEnabled) {\n+      val cometScanImpl = CometConf.COMET_NATIVE_SCAN_IMPL.get(conf)\n+      val isNativeIcebergCompat = cometScanImpl == CometConf.SCAN_NATIVE_ICEBERG_COMPAT ||\n+        cometScanImpl == CometConf.SCAN_AUTO\n+      val isNativeDataFusion = cometScanImpl == CometConf.SCAN_NATIVE_DATAFUSION ||\n+        cometScanImpl == CometConf.SCAN_AUTO\n+      if (isNativeIcebergCompat &&\n+        testTags.exists(_.isInstanceOf[IgnoreCometNativeIcebergCompat])) {\n+        ignore(testName + \" (disabled for NATIVE_ICEBERG_COMPAT)\", testTags: _*)(testFun)\n+        return\n+      }\n+      if (isNativeDataFusion &&\n+        testTags.exists(_.isInstanceOf[IgnoreCometNativeDataFusion])) {\n+        ignore(testName + \" (disabled for NATIVE_DATAFUSION)\", testTags: _*)(testFun)\n+        return\n+      }\n+      if ((isNativeDataFusion || isNativeIcebergCompat) &&\n+        testTags.exists(_.isInstanceOf[IgnoreCometNativeScan])) {\n+        ignore(testName + \" (disabled for NATIVE_DATAFUSION and NATIVE_ICEBERG_COMPAT)\",\n+          testTags: _*)(testFun)\n+        return\n+      }\n+    }\n     if (testTags.exists(_.isInstanceOf[DisableAdaptiveExecution])) {\n       super.test(testName, testTags: _*) {\n         withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\") {\n@@ -248,8 +278,24 @@ private[sql] trait SQLTestUtilsBase\n     override protected def converter: ColumnNodeToExpressionConverter = self.spark.converter\n   }\n \n+  /**\n+   * Whether the Comet extension is enabled\n+   */\n+  protected def isCometEnabled: Boolean = SparkSession.isCometEnabled\n+\n+  /**\n+   * Whether Spark should only apply the Comet 
scan optimization. This is only effective when\n+   * [[isCometEnabled]] returns true.\n+   */\n+  protected def isCometScanOnly: Boolean = {\n+    val v = System.getenv(\"ENABLE_COMET_SCAN_ONLY\")\n+    v != null && v.toBoolean\n+  }\n+\n   protected override def withSQLConf[T](pairs: (String, String)*)(f: => T): T = {\n     SparkSession.setActiveSession(spark)\n+\n+\n     super.withSQLConf(pairs: _*)(f)\n   }\n \n@@ -451,6 +497,8 @@ private[sql] trait SQLTestUtilsBase\n     val schema = df.schema\n     val withoutFilters = df.queryExecution.executedPlan.transform {\n       case FilterExec(_, child) => child\n+      case CometFilterExec(_, _, _, _, child, _) => child\n+      case CometProjectExec(_, _, _, _, CometFilterExec(_, _, _, _, child, _), _) => child\n     }\n \n     spark.internalCreateDataFrame(withoutFilters.execute(), schema)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala b/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala\nindex 245219c1756..a611836f086 100644\n--- a/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/sql/test/SharedSparkSession.scala\n@@ -75,6 +75,27 @@ trait SharedSparkSessionBase\n       // this rule may potentially block testing of other optimization rules such as\n       // ConstantPropagation etc.\n       .set(SQLConf.OPTIMIZER_EXCLUDED_RULES.key, ConvertToLocalRelation.ruleName)\n+    // Enable Comet if `ENABLE_COMET` environment variable is set\n+    if (isCometEnabled) {\n+      conf\n+        .set(\"spark.sql.extensions\", \"org.apache.comet.CometSparkSessionExtensions\")\n+        .set(\"spark.comet.enabled\", \"true\")\n+        .set(\"spark.comet.parquet.respectFilterPushdown\", \"true\")\n+\n+      if (!isCometScanOnly) {\n+        conf\n+          .set(\"spark.comet.exec.enabled\", \"true\")\n+          .set(\"spark.shuffle.manager\",\n+            \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+          .set(\"spark.comet.exec.shuffle.enabled\", \"true\")\n+          .set(\"spark.comet.memoryOverhead\", \"10g\")\n+      } else {\n+        conf\n+          .set(\"spark.comet.exec.enabled\", \"false\")\n+          .set(\"spark.comet.exec.shuffle.enabled\", \"false\")\n+      }\n+\n+    }\n     conf.set(\n       StaticSQLConf.WAREHOUSE_PATH,\n       conf.get(StaticSQLConf.WAREHOUSE_PATH) + \"/\" + getClass.getCanonicalName)\ndiff --git a/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala b/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala\nindex 982d57fb287..6017f36c440 100644\n--- a/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala\n+++ b/sql/core/src/test/scala/org/apache/spark/status/api/v1/sql/SqlResourceWithActualMetricsSuite.scala\n@@ -46,7 +46,7 @@ class SqlResourceWithActualMetricsSuite\n   import testImplicits._\n \n   // Exclude nodes which may not have the metrics\n-  val excludedNodes = List(\"WholeStageCodegen\", \"Project\", \"SerializeFromObject\")\n+  val excludedNodes = List(\"WholeStageCodegen\", \"Project\", \"SerializeFromObject\", \"RowToColumnar\")\n \n   implicit val formats: DefaultFormats = new DefaultFormats {\n     override def dateFormatter = new SimpleDateFormat(\"yyyy-MM-dd'T'HH:mm:ss\")\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala 
b/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala\nindex 52abd248f3a..7a199931a08 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/DynamicPartitionPruningHiveScanSuite.scala\n@@ -19,6 +19,7 @@ package org.apache.spark.sql.hive\n \n import org.apache.spark.sql._\n import org.apache.spark.sql.catalyst.expressions.{DynamicPruningExpression, Expression}\n+import org.apache.spark.sql.comet._\n import org.apache.spark.sql.execution._\n import org.apache.spark.sql.execution.adaptive.{DisableAdaptiveExecutionSuite, EnableAdaptiveExecutionSuite}\n import org.apache.spark.sql.hive.execution.HiveTableScanExec\n@@ -35,6 +36,9 @@ abstract class DynamicPartitionPruningHiveScanSuiteBase\n       case s: FileSourceScanExec => s.partitionFilters.collect {\n         case d: DynamicPruningExpression => d.child\n       }\n+      case s: CometScanExec => s.partitionFilters.collect {\n+        case d: DynamicPruningExpression => d.child\n+      }\n       case h: HiveTableScanExec => h.partitionPruningPred.collect {\n         case d: DynamicPruningExpression => d.child\n       }\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveUDFDynamicLoadSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveUDFDynamicLoadSuite.scala\nindex 4b27082e188..6710c90c789 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveUDFDynamicLoadSuite.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/HiveUDFDynamicLoadSuite.scala\n@@ -147,7 +147,9 @@ class HiveUDFDynamicLoadSuite extends QueryTest with SQLTestUtils with TestHiveS\n \n     // This jar file should not be placed to the classpath.\n     val jarPath = \"src/test/noclasspath/hive-test-udfs.jar\"\n-    assume(new java.io.File(jarPath).exists)\n+    // Comet: the hive-test-udfs.jar file has been removed from the Apache Spark repository,\n+    //        so the following assumption is commented out for now\n+    // assume(new java.io.File(jarPath).exists)\n     val jarUrl = s\"file://${System.getProperty(\"user.dir\")}/$jarPath\"\n \n     test(\"Spark should be able to run Hive UDF using jar regardless of \" +\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertSuite.scala\nindex cc7bb193731..06555d48da7 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertSuite.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/InsertSuite.scala\n@@ -818,7 +818,8 @@ class InsertSuite extends QueryTest with TestHiveSingleton with BeforeAndAfter\n     }\n   }\n \n-  test(\"SPARK-30201 HiveOutputWriter standardOI should use ObjectInspectorCopyOption.DEFAULT\") {\n+  test(\"SPARK-30201 HiveOutputWriter standardOI should use ObjectInspectorCopyOption.DEFAULT\",\n+      IgnoreComet(\"Comet does not support reading non-UTF-8 strings\")) {\n     withTable(\"t1\", \"t2\") {\n       withTempDir { dir =>\n         val file = new File(dir, \"test.hex\")\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala\nindex b67370f6eb9..746b3974b29 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/PartitionedTablePerfStatsSuite.scala\n@@ -23,14 +23,15 @@ import 
java.util.concurrent.{Executors, TimeUnit}\n import org.scalatest.BeforeAndAfterEach\n \n import org.apache.spark.metrics.source.HiveCatalogMetrics\n-import org.apache.spark.sql.QueryTest\n+import org.apache.spark.sql.{IgnoreCometSuite, QueryTest}\n import org.apache.spark.sql.execution.datasources.FileStatusCache\n import org.apache.spark.sql.hive.test.TestHiveSingleton\n import org.apache.spark.sql.internal.SQLConf\n import org.apache.spark.sql.test.SQLTestUtils\n \n class PartitionedTablePerfStatsSuite\n-  extends QueryTest with TestHiveSingleton with SQLTestUtils with BeforeAndAfterEach {\n+  extends QueryTest with TestHiveSingleton with SQLTestUtils with BeforeAndAfterEach\n+    with IgnoreCometSuite {\n \n   override def beforeEach(): Unit = {\n     super.beforeEach()\ndiff --git a/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala b/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala\nindex a394d0b7393..a4bc3d3fd8e 100644\n--- a/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala\n+++ b/sql/hive/src/test/scala/org/apache/spark/sql/hive/test/TestHive.scala\n@@ -53,24 +53,41 @@ object TestHive\n     new SparkContext(\n       System.getProperty(\"spark.sql.test.master\", \"local[1]\"),\n       \"TestSQLContext\",\n-      new SparkConf()\n-        .set(\"spark.sql.test\", \"\")\n-        .set(SQLConf.CODEGEN_FALLBACK.key, \"false\")\n-        .set(SQLConf.CODEGEN_FACTORY_MODE.key, CodegenObjectFactoryMode.CODEGEN_ONLY.toString)\n-        .set(HiveUtils.HIVE_METASTORE_BARRIER_PREFIXES.key,\n-          \"org.apache.spark.sql.hive.execution.PairSerDe\")\n-        .set(WAREHOUSE_PATH.key, TestHiveContext.makeWarehouseDir().toURI.getPath)\n-        // SPARK-8910\n-        .set(UI_ENABLED, false)\n-        .set(config.UNSAFE_EXCEPTION_ON_MEMORY_LEAK, true)\n-        // Hive changed the default of hive.metastore.disallow.incompatible.col.type.changes\n-        // from false to true. For details, see the JIRA HIVE-12320 and HIVE-17764.\n-        .set(\"spark.hadoop.hive.metastore.disallow.incompatible.col.type.changes\", \"false\")\n-        // Disable ConvertToLocalRelation for better test coverage. 
Test cases built on\n-        // LocalRelation will exercise the optimization rules better by disabling it as\n-        // this rule may potentially block testing of other optimization rules such as\n-        // ConstantPropagation etc.\n-        .set(SQLConf.OPTIMIZER_EXCLUDED_RULES.key, ConvertToLocalRelation.ruleName)\n+      {\n+        val conf = new SparkConf()\n+          .set(\"spark.sql.test\", \"\")\n+          .set(SQLConf.CODEGEN_FALLBACK.key, \"false\")\n+          .set(SQLConf.CODEGEN_FACTORY_MODE.key, CodegenObjectFactoryMode.CODEGEN_ONLY.toString)\n+          .set(HiveUtils.HIVE_METASTORE_BARRIER_PREFIXES.key,\n+            \"org.apache.spark.sql.hive.execution.PairSerDe\")\n+          .set(WAREHOUSE_PATH.key, TestHiveContext.makeWarehouseDir().toURI.getPath)\n+          .set(UI_ENABLED, false)\n+          .set(config.UNSAFE_EXCEPTION_ON_MEMORY_LEAK, true)\n+          .set(\"spark.hadoop.hive.metastore.disallow.incompatible.col.type.changes\", \"false\")\n+          .set(SQLConf.OPTIMIZER_EXCLUDED_RULES.key, ConvertToLocalRelation.ruleName)\n+\n+        if (SparkSession.isCometEnabled) {\n+          conf\n+            .set(\"spark.sql.extensions\", \"org.apache.comet.CometSparkSessionExtensions\")\n+            .set(\"spark.comet.enabled\", \"true\")\n+\n+          val v = System.getenv(\"ENABLE_COMET_SCAN_ONLY\")\n+          if (v == null || !v.toBoolean) {\n+            conf\n+              .set(\"spark.comet.exec.enabled\", \"true\")\n+              .set(\"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+              .set(\"spark.comet.exec.shuffle.enabled\", \"true\")\n+          } else {\n+            conf\n+              .set(\"spark.comet.exec.enabled\", \"false\")\n+              .set(\"spark.comet.exec.shuffle.enabled\", \"false\")\n+          }\n+        }\n+\n+        conf\n+      }\n+\n         .set(SHUFFLE_EXCHANGE_MAX_THREAD_THRESHOLD,\n           sys.env.getOrElse(\"SPARK_TEST_HIVE_SHUFFLE_EXCHANGE_MAX_THREAD_THRESHOLD\",\n             SHUFFLE_EXCHANGE_MAX_THREAD_THRESHOLD.defaultValueString).toInt)\n"
  },
  {
    "path": "dev/diffs/iceberg/1.10.0.diff",
    "content": "diff --git a/gradle/libs.versions.toml b/gradle/libs.versions.toml\nindex eeabe54f5f..4d3160b9f7 100644\n--- a/gradle/libs.versions.toml\n+++ b/gradle/libs.versions.toml\n@@ -37,7 +37,7 @@ awssdk-s3accessgrants = \"2.3.0\"\n bson-ver = \"4.11.5\"\n caffeine = \"2.9.3\"\n calcite = \"1.40.0\"\n-comet = \"0.8.1\"\n+comet = \"0.15.0-SNAPSHOT\"\n datasketches = \"6.2.0\"\n delta-standalone = \"3.3.2\"\n delta-spark = \"3.3.2\"\ndiff --git a/spark/v3.4/build.gradle b/spark/v3.4/build.gradle\nindex 714be0831d..dede65f213 100644\n--- a/spark/v3.4/build.gradle\n+++ b/spark/v3.4/build.gradle\n@@ -264,6 +264,7 @@ project(\":iceberg-spark:iceberg-spark-runtime-${sparkMajorVersion}_${scalaVersio\n     integrationImplementation project(path: ':iceberg-hive-metastore', configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-extensions-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n+    integrationImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     // runtime dependencies for running Hive Catalog based integration test\n     integrationRuntimeOnly project(':iceberg-hive-metastore')\ndiff --git a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\nindex 4c1a509591..91fcd4b844 100644\n--- a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\n+++ b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\n@@ -59,7 +59,27 @@ public abstract class ExtensionsTestBase extends CatalogTestBase {\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n             .config(\n                 SQLConf.ADAPTIVE_EXECUTION_ENABLED().key(), String.valueOf(RANDOM.nextBoolean()))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             
.getOrCreate();\n \n     TestBase.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\ndiff --git a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\nindex 58d054bd05..4ead587cc1 100644\n--- a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n+++ b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n@@ -56,6 +56,16 @@ public class TestCallStatementParser {\n             .master(\"local[2]\")\n             .config(\"spark.sql.extensions\", IcebergSparkSessionExtensions.class.getName())\n             .config(\"spark.extra.prop\", \"value\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestCallStatementParser.parser = spark.sessionState().sqlParser();\n   }\ndiff --git a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\nindex b6ade2bff3..c12127d618 100644\n--- a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n+++ b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n@@ -170,6 +170,16 @@ public class DeleteOrphanFilesBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", catalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local\");\n     spark = builder.getOrCreate();\n   }\ndiff --git a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\nindex 7cd41bd7e7..e2681d6243 100644\n--- a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n+++ 
b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n@@ -386,6 +386,16 @@ public class IcebergSortCompactionBenchmark {\n                 \"spark.sql.catalog.spark_catalog\", \"org.apache.iceberg.spark.SparkSessionCatalog\")\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", getCatalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\");\n     spark = builder.getOrCreate();\n     Configuration sparkHadoopConf = spark.sessionState().newHadoopConf();\ndiff --git a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\nindex 68c537e34a..1e9e90d539 100644\n--- a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n+++ b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n@@ -94,7 +94,19 @@ public abstract class IcebergSourceBenchmark {\n   }\n \n   protected void setupSpark(boolean enableDictionaryEncoding) {\n-    SparkSession.Builder builder = SparkSession.builder().config(\"spark.ui.enabled\", false);\n+    SparkSession.Builder builder =\n+        SparkSession.builder()\n+            .config(\"spark.ui.enabled\", false)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\");\n     if (!enableDictionaryEncoding) {\n       builder\n           .config(\"parquet.dictionary.page.size\", \"1\")\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\ndeleted file mode 100644\nindex 81b7d83a70..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\n+++ /dev/null\n@@ -1,140 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  
The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import org.apache.comet.CometSchemaImporter;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.ColumnReader;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.comet.parquet.Utils;\n-import org.apache.comet.shaded.arrow.memory.RootAllocator;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.column.page.PageReader;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-\n-class CometColumnReader implements VectorizedReader<ColumnVector> {\n-  // use the Comet default batch size\n-  public static final int DEFAULT_BATCH_SIZE = 8192;\n-\n-  private final ColumnDescriptor descriptor;\n-  private final DataType sparkType;\n-\n-  // The delegated ColumnReader from Comet side\n-  private AbstractColumnReader delegate;\n-  private boolean initialized = false;\n-  private int batchSize = DEFAULT_BATCH_SIZE;\n-  private CometSchemaImporter importer;\n-\n-  CometColumnReader(DataType sparkType, ColumnDescriptor descriptor) {\n-    this.sparkType = sparkType;\n-    this.descriptor = descriptor;\n-  }\n-\n-  CometColumnReader(Types.NestedField field) {\n-    DataType dataType = SparkSchemaUtil.convert(field.type());\n-    StructField structField = new StructField(field.name(), dataType, false, Metadata.empty());\n-    this.sparkType = dataType;\n-    this.descriptor = TypeUtil.convertToParquet(structField);\n-  }\n-\n-  public AbstractColumnReader delegate() {\n-    return delegate;\n-  }\n-\n-  void setDelegate(AbstractColumnReader delegate) {\n-    this.delegate = delegate;\n-  }\n-\n-  void setInitialized(boolean initialized) {\n-    this.initialized = initialized;\n-  }\n-\n-  public int batchSize() {\n-    return batchSize;\n-  }\n-\n-  /**\n-   * This method is to initialized/reset the CometColumnReader. This needs to be called for each row\n-   * group after readNextRowGroup, so a new dictionary encoding can be set for each of the new row\n-   * groups.\n-   */\n-  public void reset() {\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-\n-    this.importer = new CometSchemaImporter(new RootAllocator());\n-    this.delegate = Utils.getColumnReader(sparkType, descriptor, importer, batchSize, false, false);\n-    this.initialized = true;\n-  }\n-\n-  public ColumnDescriptor descriptor() {\n-    return descriptor;\n-  }\n-\n-  /** Returns the Spark data type for this column. 
*/\n-  public DataType sparkType() {\n-    return sparkType;\n-  }\n-\n-  /**\n-   * Set the page reader to be 'pageReader'.\n-   *\n-   * <p>NOTE: this should be called before reading a new Parquet column chunk, and after {@link\n-   * CometColumnReader#reset} is called.\n-   */\n-  public void setPageReader(PageReader pageReader) throws IOException {\n-    Preconditions.checkState(initialized, \"Invalid state: 'reset' should be called first\");\n-    ((ColumnReader) delegate).setPageReader(pageReader);\n-  }\n-\n-  @Override\n-  public void close() {\n-    // close resources on native side\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-  }\n-\n-  @Override\n-  public void setBatchSize(int size) {\n-    this.batchSize = size;\n-  }\n-\n-  @Override\n-  public ColumnVector read(ColumnVector reuse, int numRowsToRead) {\n-    throw new UnsupportedOperationException(\"Not supported\");\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\ndeleted file mode 100644\nindex 04ac69476a..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\n+++ /dev/null\n@@ -1,197 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import java.io.UncheckedIOException;\n-import java.util.List;\n-import java.util.Map;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.BatchReader;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.util.Pair;\n-import org.apache.parquet.column.page.PageReadStore;\n-import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\n-import org.apache.parquet.hadoop.metadata.ColumnPath;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-import org.apache.spark.sql.vectorized.ColumnarBatch;\n-\n-/**\n- * {@link VectorizedReader} that returns Spark's {@link ColumnarBatch} to support Spark's vectorized\n- * read path. 
The {@link ColumnarBatch} returned is created by passing in the Arrow vectors\n- * populated via delegated read calls to {@link CometColumnReader VectorReader(s)}.\n- */\n-@SuppressWarnings(\"checkstyle:VisibilityModifier\")\n-class CometColumnarBatchReader implements VectorizedReader<ColumnarBatch> {\n-\n-  private final CometColumnReader[] readers;\n-  private final boolean hasIsDeletedColumn;\n-\n-  // The delegated BatchReader on the Comet side does the real work of loading a batch of rows.\n-  // The Comet BatchReader contains an array of ColumnReader. There is no need to explicitly call\n-  // ColumnReader.readBatch; instead, BatchReader.nextBatch will be called, which underneath calls\n-  // ColumnReader.readBatch. The only exception is DeleteColumnReader, because at the time of\n-  // calling BatchReader.nextBatch, the isDeleted value is not yet available, so\n-  // DeleteColumnReader.readBatch must be called explicitly later, after the isDeleted value is\n-  // available.\n-  private final BatchReader delegate;\n-  private DeleteFilter<InternalRow> deletes = null;\n-  private long rowStartPosInBatch = 0;\n-\n-  CometColumnarBatchReader(List<VectorizedReader<?>> readers, Schema schema) {\n-    this.readers =\n-        readers.stream().map(CometColumnReader.class::cast).toArray(CometColumnReader[]::new);\n-    this.hasIsDeletedColumn =\n-        readers.stream().anyMatch(reader -> reader instanceof CometDeleteColumnReader);\n-\n-    AbstractColumnReader[] abstractColumnReaders = new AbstractColumnReader[readers.size()];\n-    this.delegate = new BatchReader(abstractColumnReaders);\n-    delegate.setSparkSchema(SparkSchemaUtil.convert(schema));\n-  }\n-\n-  @Override\n-  public void setRowGroupInfo(\n-      PageReadStore pageStore, Map<ColumnPath, ColumnChunkMetaData> metaData) {\n-    for (int i = 0; i < readers.length; i++) {\n-      try {\n-        if (!(readers[i] instanceof CometConstantColumnReader)\n-            && !(readers[i] instanceof CometPositionColumnReader)\n-            && !(readers[i] instanceof CometDeleteColumnReader)) {\n-          readers[i].reset();\n-          readers[i].setPageReader(pageStore.getPageReader(readers[i].descriptor()));\n-        }\n-      } catch (IOException e) {\n-        throw new UncheckedIOException(\"Failed to setRowGroupInfo for Comet vectorization\", e);\n-      }\n-    }\n-\n-    for (int i = 0; i < readers.length; i++) {\n-      delegate.getColumnReaders()[i] = this.readers[i].delegate();\n-    }\n-\n-    this.rowStartPosInBatch =\n-        pageStore\n-            .getRowIndexOffset()\n-            .orElseThrow(\n-                () ->\n-                    new IllegalArgumentException(\n-                        \"PageReadStore does not contain row index offset\"));\n-  }\n-\n-  public void setDeleteFilter(DeleteFilter<InternalRow> deleteFilter) {\n-    this.deletes = deleteFilter;\n-  }\n-\n-  @Override\n-  public final ColumnarBatch read(ColumnarBatch reuse, int numRowsToRead) {\n-    ColumnarBatch columnarBatch = new ColumnBatchLoader(numRowsToRead).loadDataToColumnBatch();\n-    rowStartPosInBatch += numRowsToRead;\n-    return columnarBatch;\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.setBatchSize(batchSize);\n-      }\n-    }\n-  }\n-\n-  @Override\n-  public void close() {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.close();\n-      }\n-    }\n-  }\n-\n-  
private class ColumnBatchLoader {\n-    private final int batchSize;\n-\n-    ColumnBatchLoader(int numRowsToRead) {\n-      Preconditions.checkArgument(\n-          numRowsToRead > 0, \"Invalid number of rows to read: %s\", numRowsToRead);\n-      this.batchSize = numRowsToRead;\n-    }\n-\n-    ColumnarBatch loadDataToColumnBatch() {\n-      ColumnVector[] vectors = readDataToColumnVectors();\n-      int numLiveRows = batchSize;\n-\n-      if (hasIsDeletedColumn) {\n-        boolean[] isDeleted = buildIsDeleted(vectors);\n-        readDeletedColumn(vectors, isDeleted);\n-      } else {\n-        Pair<int[], Integer> pair = buildRowIdMapping(vectors);\n-        if (pair != null) {\n-          int[] rowIdMapping = pair.first();\n-          numLiveRows = pair.second();\n-          for (int i = 0; i < vectors.length; i++) {\n-            vectors[i] = new ColumnVectorWithFilter(vectors[i], rowIdMapping);\n-          }\n-        }\n-      }\n-\n-      if (deletes != null && deletes.hasEqDeletes()) {\n-        vectors = ColumnarBatchUtil.removeExtraColumns(deletes, vectors);\n-      }\n-\n-      ColumnarBatch batch = new ColumnarBatch(vectors);\n-      batch.setNumRows(numLiveRows);\n-      return batch;\n-    }\n-\n-    private boolean[] buildIsDeleted(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildIsDeleted(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    private Pair<int[], Integer> buildRowIdMapping(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildRowIdMapping(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    ColumnVector[] readDataToColumnVectors() {\n-      ColumnVector[] columnVectors = new ColumnVector[readers.length];\n-      // Fetch rows for all readers in the delegate\n-      delegate.nextBatch(batchSize);\n-      for (int i = 0; i < readers.length; i++) {\n-        columnVectors[i] = readers[i].delegate().currentBatch();\n-      }\n-\n-      return columnVectors;\n-    }\n-\n-    void readDeletedColumn(ColumnVector[] columnVectors, boolean[] isDeleted) {\n-      for (int i = 0; i < readers.length; i++) {\n-        if (readers[i] instanceof CometDeleteColumnReader) {\n-          CometDeleteColumnReader deleteColumnReader = new CometDeleteColumnReader<>(isDeleted);\n-          deleteColumnReader.setBatchSize(batchSize);\n-          deleteColumnReader.delegate().readBatch(batchSize);\n-          columnVectors[i] = deleteColumnReader.delegate().currentBatch();\n-        }\n-      }\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\ndeleted file mode 100644\nindex c665002e8f..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\n+++ /dev/null\n@@ -1,65 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  
You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.math.BigDecimal;\n-import java.nio.ByteBuffer;\n-import org.apache.comet.parquet.ConstantColumnReader;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Decimal;\n-import org.apache.spark.sql.types.DecimalType;\n-import org.apache.spark.unsafe.types.UTF8String;\n-\n-class CometConstantColumnReader<T> extends CometColumnReader {\n-\n-  CometConstantColumnReader(T value, Types.NestedField field) {\n-    super(field);\n-    // use delegate to set constant value on the native side to be consumed by native execution.\n-    setDelegate(\n-        new ConstantColumnReader(sparkType(), descriptor(), convertToSparkValue(value), false));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private Object convertToSparkValue(T value) {\n-    DataType dataType = sparkType();\n-    // Match the value to Spark internal type if necessary\n-    if (dataType == DataTypes.StringType && value instanceof String) {\n-      // the internal type for StringType is UTF8String\n-      return UTF8String.fromString((String) value);\n-    } else if (dataType instanceof DecimalType && value instanceof BigDecimal) {\n-      // the internal type for DecimalType is Decimal\n-      return Decimal.apply((BigDecimal) value);\n-    } else if (dataType == DataTypes.BinaryType && value instanceof ByteBuffer) {\n-      // the internal type for DecimalType is byte[]\n-      // Iceberg default value should always use HeapBufferBuffer, so calling ByteBuffer.array()\n-      // should be safe.\n-      return ((java.nio.ByteBuffer) value).array();\n-    } else {\n-      return value;\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\ndeleted file mode 100644\nindex 4a28fc51da..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\n+++ /dev/null\n@@ -1,75 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-\n-class CometDeleteColumnReader<T> extends CometColumnReader {\n-  CometDeleteColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new DeleteColumnReader());\n-  }\n-\n-  CometDeleteColumnReader(boolean[] isDeleted) {\n-    super(MetadataColumns.IS_DELETED);\n-    setDelegate(new DeleteColumnReader(isDeleted));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class DeleteColumnReader extends MetadataColumnReader {\n-    private boolean[] isDeleted;\n-\n-    DeleteColumnReader() {\n-      super(\n-          DataTypes.BooleanType,\n-          TypeUtil.convertToParquet(\n-              new StructField(\"_deleted\", DataTypes.BooleanType, false, Metadata.empty())),\n-          false /* useDecimal128 = false */,\n-          false /* isConstant */);\n-      this.isDeleted = new boolean[0];\n-    }\n-\n-    DeleteColumnReader(boolean[] isDeleted) {\n-      this();\n-      this.isDeleted = isDeleted;\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set isDeleted on the native side to be consumed by native execution\n-      Native.setIsDeleted(nativeHandle, isDeleted);\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\ndeleted file mode 100644\nindex 1949a71798..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\n+++ /dev/null\n@@ -1,62 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.spark.sql.types.DataTypes;\n-\n-class CometPositionColumnReader extends CometColumnReader {\n-  CometPositionColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new PositionColumnReader(descriptor()));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class PositionColumnReader extends MetadataColumnReader {\n-    /** The current position value of the column that are used to initialize this column reader. */\n-    private long position;\n-\n-    PositionColumnReader(ColumnDescriptor descriptor) {\n-      super(\n-          DataTypes.LongType,\n-          descriptor,\n-          false /* useDecimal128 = false */,\n-          false /* isConstant */);\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set position on the native side to be consumed by native execution\n-      Native.setPosition(nativeHandle, position, total);\n-      position += total;\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\ndeleted file mode 100644\nindex d36f1a7274..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\n+++ /dev/null\n@@ -1,147 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.util.List;\n-import java.util.Map;\n-import java.util.function.Function;\n-import java.util.stream.IntStream;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.TypeWithSchemaVisitor;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;\n-import org.apache.iceberg.relocated.com.google.common.collect.Lists;\n-import org.apache.iceberg.relocated.com.google.common.collect.Maps;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.schema.GroupType;\n-import org.apache.parquet.schema.MessageType;\n-import org.apache.parquet.schema.PrimitiveType;\n-import org.apache.parquet.schema.Type;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-\n-class CometVectorizedReaderBuilder extends TypeWithSchemaVisitor<VectorizedReader<?>> {\n-\n-  private final MessageType parquetSchema;\n-  private final Schema icebergSchema;\n-  private final Map<Integer, ?> idToConstant;\n-  private final Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory;\n-  private final DeleteFilter<InternalRow> deleteFilter;\n-\n-  CometVectorizedReaderBuilder(\n-      Schema expectedSchema,\n-      MessageType parquetSchema,\n-      Map<Integer, ?> idToConstant,\n-      Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    this.parquetSchema = parquetSchema;\n-    this.icebergSchema = expectedSchema;\n-    this.idToConstant = idToConstant;\n-    this.readerFactory = readerFactory;\n-    this.deleteFilter = deleteFilter;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> message(\n-      Types.StructType expected, MessageType message, List<VectorizedReader<?>> fieldReaders) {\n-    GroupType groupType = message.asGroupType();\n-    Map<Integer, VectorizedReader<?>> readersById = Maps.newHashMap();\n-    List<Type> fields = groupType.getFields();\n-\n-    IntStream.range(0, fields.size())\n-        .filter(pos -> fields.get(pos).getId() != null)\n-        .forEach(pos -> readersById.put(fields.get(pos).getId().intValue(), fieldReaders.get(pos)));\n-\n-    List<Types.NestedField> icebergFields =\n-        expected != null ? 
expected.fields() : ImmutableList.of();\n-\n-    List<VectorizedReader<?>> reorderedFields =\n-        Lists.newArrayListWithExpectedSize(icebergFields.size());\n-\n-    for (Types.NestedField field : icebergFields) {\n-      int id = field.fieldId();\n-      VectorizedReader<?> reader = readersById.get(id);\n-      if (idToConstant.containsKey(id)) {\n-        CometConstantColumnReader constantReader =\n-            new CometConstantColumnReader<>(idToConstant.get(id), field);\n-        reorderedFields.add(constantReader);\n-      } else if (id == MetadataColumns.ROW_POSITION.fieldId()) {\n-        reorderedFields.add(new CometPositionColumnReader(field));\n-      } else if (id == MetadataColumns.IS_DELETED.fieldId()) {\n-        CometColumnReader deleteReader = new CometDeleteColumnReader<>(field);\n-        reorderedFields.add(deleteReader);\n-      } else if (reader != null) {\n-        reorderedFields.add(reader);\n-      } else if (field.initialDefault() != null) {\n-        CometColumnReader constantReader =\n-            new CometConstantColumnReader<>(field.initialDefault(), field);\n-        reorderedFields.add(constantReader);\n-      } else if (field.isOptional()) {\n-        CometColumnReader constantReader = new CometConstantColumnReader<>(null, field);\n-        reorderedFields.add(constantReader);\n-      } else {\n-        throw new IllegalArgumentException(\n-            String.format(\"Missing required field: %s\", field.name()));\n-      }\n-    }\n-    return vectorizedReader(reorderedFields);\n-  }\n-\n-  protected VectorizedReader<?> vectorizedReader(List<VectorizedReader<?>> reorderedFields) {\n-    VectorizedReader<?> reader = readerFactory.apply(reorderedFields);\n-    if (deleteFilter != null) {\n-      ((CometColumnarBatchReader) reader).setDeleteFilter(deleteFilter);\n-    }\n-    return reader;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> struct(\n-      Types.StructType expected, GroupType groupType, List<VectorizedReader<?>> fieldReaders) {\n-    if (expected != null) {\n-      throw new UnsupportedOperationException(\n-          \"Vectorized reads are not supported yet for struct fields\");\n-    }\n-    return null;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> primitive(\n-      org.apache.iceberg.types.Type.PrimitiveType expected, PrimitiveType primitive) {\n-\n-    if (primitive.getId() == null) {\n-      return null;\n-    }\n-    int parquetFieldId = primitive.getId().intValue();\n-    ColumnDescriptor desc = parquetSchema.getColumnDescription(currentPath());\n-    // Nested types not yet supported for vectorized reads\n-    if (desc.getMaxRepetitionLevel() > 0) {\n-      return null;\n-    }\n-    Types.NestedField icebergField = icebergSchema.findField(parquetFieldId);\n-    if (icebergField == null) {\n-      return null;\n-    }\n-\n-    return new CometColumnReader(SparkSchemaUtil.convert(icebergField.type()), desc);\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\nindex d95baa724b..865cf5085f 100644\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n+++ b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n@@ -83,23 +83,6 @@ public class VectorizedSparkParquetReaders {\n         expectedSchema, fileSchema, idToConstant, 
deleteFilter, ArrowAllocation.rootAllocator());\n   }\n \n-  public static CometColumnarBatchReader buildCometReader(\n-      Schema expectedSchema,\n-      MessageType fileSchema,\n-      Map<Integer, ?> idToConstant,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    return (CometColumnarBatchReader)\n-        TypeWithSchemaVisitor.visit(\n-            expectedSchema.asStruct(),\n-            fileSchema,\n-            new CometVectorizedReaderBuilder(\n-                expectedSchema,\n-                fileSchema,\n-                idToConstant,\n-                readers -> new CometColumnarBatchReader(readers, expectedSchema),\n-                deleteFilter));\n-  }\n-\n   // enables unsafe memory access to avoid costly checks to see if index is within bounds\n   // as long as it is not configured explicitly (see BoundsChecking in Arrow)\n   private static void enableUnsafeMemoryAccess() {\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\nindex a0f45e7610..78e7845911 100644\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n+++ b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n@@ -34,7 +34,6 @@ import org.apache.iceberg.parquet.Parquet;\n import org.apache.iceberg.relocated.com.google.common.collect.Sets;\n import org.apache.iceberg.spark.OrcBatchReadConf;\n import org.apache.iceberg.spark.ParquetBatchReadConf;\n-import org.apache.iceberg.spark.ParquetReaderType;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkOrcReaders;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkParquetReaders;\n import org.apache.iceberg.types.TypeUtil;\n@@ -94,15 +93,9 @@ abstract class BaseBatchReader<T extends ScanTask> extends BaseReader<ColumnarBa\n         .project(requiredSchema)\n         .split(start, length)\n         .createBatchedReaderFunc(\n-            fileSchema -> {\n-              if (parquetConf.readerType() == ParquetReaderType.COMET) {\n-                return VectorizedSparkParquetReaders.buildCometReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              } else {\n-                return VectorizedSparkParquetReaders.buildReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              }\n-            })\n+            fileSchema ->\n+                VectorizedSparkParquetReaders.buildReader(\n+                    requiredSchema, fileSchema, idToConstant, deleteFilter))\n         .recordsPerBatch(parquetConf.batchSize())\n         .filter(residual)\n         .caseSensitive(caseSensitive())\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\nindex 404ba72846..00e97e96f9 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n@@ -90,6 +90,16 @@ public abstract class SparkDistributedDataScanTestBase\n         .master(\"local[2]\")\n         .config(\"spark.serializer\", serializer)\n         .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+        .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+        .config(\n+            \"spark.shuffle.manager\",\n+            
\"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+        .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+        .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.size\", \"10g\")\n+        .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+        .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n         .getOrCreate();\n   }\n }\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\nindex 9361c63176..9a78bbf3b6 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n@@ -69,6 +69,16 @@ public class TestSparkDistributedDataScanDeletes\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\nindex a218f965ea..eca0125ace 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n@@ -62,6 +62,16 @@ public class TestSparkDistributedDataScanFilterFiles\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\nindex acd4688440..9be94eab2f 100644\n--- 
a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n@@ -59,6 +59,16 @@ public class TestSparkDistributedDataScanReporting\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/TestBase.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\nindex 3c32b46936..5ba2c6dd95 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\n@@ -79,6 +79,16 @@ public abstract class TestBase extends SparkTestHelperBase {\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\nindex 6647a1b483..b8b428229b 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n@@ -62,6 +62,16 @@ public abstract class ScanTestBase extends AvroDataTestBase {\n         SparkSession.builder()\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n             .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            
.config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     ScanTestBase.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\nindex 24a14bb64d..be45d95ae8 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n@@ -144,7 +144,20 @@ public class TestCompressionSettings extends CatalogTestBase {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestCompressionSettings.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestCompressionSettings.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @BeforeEach\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\nindex 99d3f38ee7..c94f21e391 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n@@ -74,7 +74,20 @@ public class TestDataSourceOptions extends TestBaseWithCatalog {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestDataSourceOptions.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestDataSourceOptions.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java 
b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\nindex a70aeb1ba6..f75ce46a35 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n@@ -123,6 +123,16 @@ public class TestFilteredScan {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n \n     // define UDFs used by partition tests\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\nindex c03f7b94ec..a3e803359a 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n@@ -99,6 +99,16 @@ public class TestForwardCompatibility {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\nindex f4f57157e4..7dd2ed811d 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n@@ -51,6 +51,16 @@ public class TestIcebergSpark {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+       
     .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\nindex e1402396fa..8b60b8bc48 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n@@ -118,6 +118,16 @@ public class TestPartitionPruning {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestPartitionPruning.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\nindex b0ad930487..8b76a364bd 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n@@ -112,6 +112,16 @@ public class TestPartitionValues {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\nindex 11865db7fc..bdb9ae32a6 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n+++ 
b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n@@ -91,6 +91,16 @@ public class TestSnapshotSelection {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\nindex 833a0b6b5e..95007024d1 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n@@ -125,6 +125,16 @@ public class TestSparkDataFile {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestSparkDataFile.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\nindex 8884654fe0..aacaf584f3 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n@@ -100,6 +100,16 @@ public class TestSparkDataWrite {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            
.config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \n@@ -144,7 +154,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n     for (ManifestFile manifest :\n         SnapshotUtil.latestSnapshot(table, branch).allManifests(table.io())) {\n@@ -213,7 +223,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n   }\n \n@@ -258,7 +268,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n   }\n \n@@ -310,7 +320,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n   }\n \n@@ -352,7 +362,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n   }\n \n@@ -391,7 +401,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n \n     List<DataFile> files = Lists.newArrayList();\n@@ -459,7 +469,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n   }\n \n@@ -707,7 +717,7 @@ public class TestSparkDataWrite {\n     // Since write and commit 
succeeded, the rows should be readable\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual)\n         .hasSize(records.size() + records2.size())\n         .containsExactlyInAnyOrder(\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\nindex 596d05d30b..e681d81704 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n@@ -88,6 +88,16 @@ public class TestSparkReadProjection extends TestReadProjection {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     ImmutableMap<String, String> config =\n         ImmutableMap.of(\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\nindex 16fa726032..64e367cf47 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n@@ -132,7 +132,17 @@ public class TestSparkReaderDeletes extends DeleteReadTests {\n             .config(\"spark.ui.liveUpdate.period\", 0)\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n
\"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n \n     catalog =\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\nindex baf7fa8f88..665946ad82 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n@@ -182,7 +182,27 @@ public class TestSparkReaderWithBloomFilter {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n \n     catalog =\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\nindex dc4fc7e187..c18a701ecf 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n@@ -69,6 +69,16 @@ public class TestStructuredStreaming {\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n             .config(\"spark.sql.shuffle.partitions\", 4)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                
\"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\nindex 8b1e3fbfc7..b4385c3950 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n@@ -75,7 +75,20 @@ public class TestTimestampWithoutZone extends TestBase {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestTimestampWithoutZone.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestTimestampWithoutZone.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\nindex c3fac70dd3..b6c66f3f88 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n@@ -84,6 +84,16 @@ public class TestWriteMetricsConfig {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestWriteMetricsConfig.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\ndiff --git 
a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\nindex 5ce56b4fec..40e4fb0a74 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n@@ -63,7 +63,17 @@ public class TestAggregatePushDown extends CatalogTestBase {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.sql.iceberg.aggregate_pushdown\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \n     TestBase.catalog =\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\nindex 9d2ce2b388..5e23368848 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n@@ -598,9 +598,7 @@ public class TestFilterPushDown extends TestBaseWithCatalog {\n     String planAsString = sparkPlan.toString().replaceAll(\"#(\\\\d+L?)\", \"\");\n \n     if (sparkFilter != null) {\n-      assertThat(planAsString)\n-          .as(\"Post scan filter should match\")\n-          .contains(\"Filter (\" + sparkFilter + \")\");\n+      assertThat(planAsString).as(\"Post scan filter should match\").contains(\"CometFilter\");\n     } else {\n       assertThat(planAsString).as(\"Should be no post scan filter\").doesNotContain(\"Filter (\");\n     }\ndiff --git a/spark/v3.5/build.gradle b/spark/v3.5/build.gradle\nindex 69700d8436..49ea338a45 100644\n--- a/spark/v3.5/build.gradle\n+++ b/spark/v3.5/build.gradle\n@@ -264,6 +264,7 @@ project(\":iceberg-spark:iceberg-spark-runtime-${sparkMajorVersion}_${scalaVersio\n     integrationImplementation project(path: ':iceberg-hive-metastore', configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n     integrationImplementation 
project(path: \":iceberg-spark:iceberg-spark-extensions-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n+    integrationImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     // runtime dependencies for running Hive Catalog based integration test\n     integrationRuntimeOnly project(':iceberg-hive-metastore')\ndiff --git a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\nindex 4c1a509591..44b401644e 100644\n--- a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\n+++ b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\n@@ -59,6 +59,16 @@ public abstract class ExtensionsTestBase extends CatalogTestBase {\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n             .config(\n                 SQLConf.ADAPTIVE_EXECUTION_ENABLED().key(), String.valueOf(RANDOM.nextBoolean()))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\nindex ecf9e6f8a5..3475260ca5 100644\n--- a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n+++ b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n@@ -56,6 +56,16 @@ public class TestCallStatementParser {\n             .master(\"local[2]\")\n             .config(\"spark.sql.extensions\", IcebergSparkSessionExtensions.class.getName())\n             .config(\"spark.extra.prop\", \"value\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestCallStatementParser.parser = spark.sessionState().sqlParser();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java 
b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\nindex 64edb1002e..0fc10120ff 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n@@ -179,6 +179,16 @@ public class DeleteOrphanFilesBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", catalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local\");\n     spark = builder.getOrCreate();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\nindex 77b79384a6..01a4c82dc9 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n@@ -392,6 +392,16 @@ public class IcebergSortCompactionBenchmark {\n                 \"spark.sql.catalog.spark_catalog\", \"org.apache.iceberg.spark.SparkSessionCatalog\")\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", getCatalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\");\n     spark = builder.getOrCreate();\n     Configuration sparkHadoopConf = spark.sessionState().newHadoopConf();\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java\nindex c6794e43c6..457d2823ee 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java\n@@ -239,6 +239,16 @@ public class DVReaderBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", 
SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", newWarehouseDir())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\")\n             .getOrCreate();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java\nindex ac74fb5a10..eab09293de 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java\n@@ -223,6 +223,16 @@ public class DVWriterBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", newWarehouseDir())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\")\n             .getOrCreate();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\nindex 68c537e34a..1e9e90d539 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n@@ -94,7 +94,19 @@ public abstract class IcebergSourceBenchmark {\n   }\n \n   protected void setupSpark(boolean enableDictionaryEncoding) {\n-    SparkSession.Builder builder = SparkSession.builder().config(\"spark.ui.enabled\", false);\n+    SparkSession.Builder builder =\n+        SparkSession.builder()\n+            .config(\"spark.ui.enabled\", false)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+         
   .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\");\n     if (!enableDictionaryEncoding) {\n       builder\n           .config(\"parquet.dictionary.page.size\", \"1\")\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\ndeleted file mode 100644\nindex 81b7d83a70..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\n+++ /dev/null\n@@ -1,140 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import org.apache.comet.CometSchemaImporter;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.ColumnReader;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.comet.parquet.Utils;\n-import org.apache.comet.shaded.arrow.memory.RootAllocator;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.column.page.PageReader;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-\n-class CometColumnReader implements VectorizedReader<ColumnVector> {\n-  // use the Comet default batch size\n-  public static final int DEFAULT_BATCH_SIZE = 8192;\n-\n-  private final ColumnDescriptor descriptor;\n-  private final DataType sparkType;\n-\n-  // The delegated ColumnReader from Comet side\n-  private AbstractColumnReader delegate;\n-  private boolean initialized = false;\n-  private int batchSize = DEFAULT_BATCH_SIZE;\n-  private CometSchemaImporter importer;\n-\n-  CometColumnReader(DataType sparkType, ColumnDescriptor descriptor) {\n-    this.sparkType = sparkType;\n-    this.descriptor = descriptor;\n-  }\n-\n-  CometColumnReader(Types.NestedField field) {\n-    DataType dataType = SparkSchemaUtil.convert(field.type());\n-    StructField structField = new StructField(field.name(), dataType, false, Metadata.empty());\n-    this.sparkType = dataType;\n-    this.descriptor = TypeUtil.convertToParquet(structField);\n-  }\n-\n-  public AbstractColumnReader delegate() {\n-    return 
delegate;\n-  }\n-\n-  void setDelegate(AbstractColumnReader delegate) {\n-    this.delegate = delegate;\n-  }\n-\n-  void setInitialized(boolean initialized) {\n-    this.initialized = initialized;\n-  }\n-\n-  public int batchSize() {\n-    return batchSize;\n-  }\n-\n-  /**\n-   * This method is to initialized/reset the CometColumnReader. This needs to be called for each row\n-   * group after readNextRowGroup, so a new dictionary encoding can be set for each of the new row\n-   * groups.\n-   */\n-  public void reset() {\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-\n-    this.importer = new CometSchemaImporter(new RootAllocator());\n-    this.delegate = Utils.getColumnReader(sparkType, descriptor, importer, batchSize, false, false);\n-    this.initialized = true;\n-  }\n-\n-  public ColumnDescriptor descriptor() {\n-    return descriptor;\n-  }\n-\n-  /** Returns the Spark data type for this column. */\n-  public DataType sparkType() {\n-    return sparkType;\n-  }\n-\n-  /**\n-   * Set the page reader to be 'pageReader'.\n-   *\n-   * <p>NOTE: this should be called before reading a new Parquet column chunk, and after {@link\n-   * CometColumnReader#reset} is called.\n-   */\n-  public void setPageReader(PageReader pageReader) throws IOException {\n-    Preconditions.checkState(initialized, \"Invalid state: 'reset' should be called first\");\n-    ((ColumnReader) delegate).setPageReader(pageReader);\n-  }\n-\n-  @Override\n-  public void close() {\n-    // close resources on native side\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-  }\n-\n-  @Override\n-  public void setBatchSize(int size) {\n-    this.batchSize = size;\n-  }\n-\n-  @Override\n-  public ColumnVector read(ColumnVector reuse, int numRowsToRead) {\n-    throw new UnsupportedOperationException(\"Not supported\");\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\ndeleted file mode 100644\nindex 04ac69476a..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\n+++ /dev/null\n@@ -1,197 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import java.io.UncheckedIOException;\n-import java.util.List;\n-import java.util.Map;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.BatchReader;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.util.Pair;\n-import org.apache.parquet.column.page.PageReadStore;\n-import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\n-import org.apache.parquet.hadoop.metadata.ColumnPath;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-import org.apache.spark.sql.vectorized.ColumnarBatch;\n-\n-/**\n- * {@link VectorizedReader} that returns Spark's {@link ColumnarBatch} to support Spark's vectorized\n- * read path. The {@link ColumnarBatch} returned is created by passing in the Arrow vectors\n- * populated via delegated read calls to {@link CometColumnReader VectorReader(s)}.\n- */\n-@SuppressWarnings(\"checkstyle:VisibilityModifier\")\n-class CometColumnarBatchReader implements VectorizedReader<ColumnarBatch> {\n-\n-  private final CometColumnReader[] readers;\n-  private final boolean hasIsDeletedColumn;\n-\n-  // The delegated BatchReader on the Comet side does the real work of loading a batch of rows.\n-  // The Comet BatchReader contains an array of ColumnReader. There is no need to explicitly call\n-  // ColumnReader.readBatch; instead, BatchReader.nextBatch will be called, which underneath calls\n-  // ColumnReader.readBatch. 
The only exception is DeleteColumnReader, because at the time of\n-  // calling BatchReader.nextBatch, the isDeleted value is not yet available, so\n-  // DeleteColumnReader.readBatch must be called explicitly later, after the isDeleted value is\n-  // available.\n-  private final BatchReader delegate;\n-  private DeleteFilter<InternalRow> deletes = null;\n-  private long rowStartPosInBatch = 0;\n-\n-  CometColumnarBatchReader(List<VectorizedReader<?>> readers, Schema schema) {\n-    this.readers =\n-        readers.stream().map(CometColumnReader.class::cast).toArray(CometColumnReader[]::new);\n-    this.hasIsDeletedColumn =\n-        readers.stream().anyMatch(reader -> reader instanceof CometDeleteColumnReader);\n-\n-    AbstractColumnReader[] abstractColumnReaders = new AbstractColumnReader[readers.size()];\n-    this.delegate = new BatchReader(abstractColumnReaders);\n-    delegate.setSparkSchema(SparkSchemaUtil.convert(schema));\n-  }\n-\n-  @Override\n-  public void setRowGroupInfo(\n-      PageReadStore pageStore, Map<ColumnPath, ColumnChunkMetaData> metaData) {\n-    for (int i = 0; i < readers.length; i++) {\n-      try {\n-        if (!(readers[i] instanceof CometConstantColumnReader)\n-            && !(readers[i] instanceof CometPositionColumnReader)\n-            && !(readers[i] instanceof CometDeleteColumnReader)) {\n-          readers[i].reset();\n-          readers[i].setPageReader(pageStore.getPageReader(readers[i].descriptor()));\n-        }\n-      } catch (IOException e) {\n-        throw new UncheckedIOException(\"Failed to setRowGroupInfo for Comet vectorization\", e);\n-      }\n-    }\n-\n-    for (int i = 0; i < readers.length; i++) {\n-      delegate.getColumnReaders()[i] = this.readers[i].delegate();\n-    }\n-\n-    this.rowStartPosInBatch =\n-        pageStore\n-            .getRowIndexOffset()\n-            .orElseThrow(\n-                () ->\n-                    new IllegalArgumentException(\n-                        \"PageReadStore does not contain row index offset\"));\n-  }\n-\n-  public void setDeleteFilter(DeleteFilter<InternalRow> deleteFilter) {\n-    this.deletes = deleteFilter;\n-  }\n-\n-  @Override\n-  public final ColumnarBatch read(ColumnarBatch reuse, int numRowsToRead) {\n-    ColumnarBatch columnarBatch = new ColumnBatchLoader(numRowsToRead).loadDataToColumnBatch();\n-    rowStartPosInBatch += numRowsToRead;\n-    return columnarBatch;\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.setBatchSize(batchSize);\n-      }\n-    }\n-  }\n-\n-  @Override\n-  public void close() {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.close();\n-      }\n-    }\n-  }\n-\n-  private class ColumnBatchLoader {\n-    private final int batchSize;\n-\n-    ColumnBatchLoader(int numRowsToRead) {\n-      Preconditions.checkArgument(\n-          numRowsToRead > 0, \"Invalid number of rows to read: %s\", numRowsToRead);\n-      this.batchSize = numRowsToRead;\n-    }\n-\n-    ColumnarBatch loadDataToColumnBatch() {\n-      ColumnVector[] vectors = readDataToColumnVectors();\n-      int numLiveRows = batchSize;\n-\n-      if (hasIsDeletedColumn) {\n-        boolean[] isDeleted = buildIsDeleted(vectors);\n-        readDeletedColumn(vectors, isDeleted);\n-      } else {\n-        Pair<int[], Integer> pair = buildRowIdMapping(vectors);\n-        if (pair != null) {\n-          int[] rowIdMapping = 
pair.first();\n-          numLiveRows = pair.second();\n-          for (int i = 0; i < vectors.length; i++) {\n-            vectors[i] = new ColumnVectorWithFilter(vectors[i], rowIdMapping);\n-          }\n-        }\n-      }\n-\n-      if (deletes != null && deletes.hasEqDeletes()) {\n-        vectors = ColumnarBatchUtil.removeExtraColumns(deletes, vectors);\n-      }\n-\n-      ColumnarBatch batch = new ColumnarBatch(vectors);\n-      batch.setNumRows(numLiveRows);\n-      return batch;\n-    }\n-\n-    private boolean[] buildIsDeleted(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildIsDeleted(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    private Pair<int[], Integer> buildRowIdMapping(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildRowIdMapping(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    ColumnVector[] readDataToColumnVectors() {\n-      ColumnVector[] columnVectors = new ColumnVector[readers.length];\n-      // Fetch rows for all readers in the delegate\n-      delegate.nextBatch(batchSize);\n-      for (int i = 0; i < readers.length; i++) {\n-        columnVectors[i] = readers[i].delegate().currentBatch();\n-      }\n-\n-      return columnVectors;\n-    }\n-\n-    void readDeletedColumn(ColumnVector[] columnVectors, boolean[] isDeleted) {\n-      for (int i = 0; i < readers.length; i++) {\n-        if (readers[i] instanceof CometDeleteColumnReader) {\n-          CometDeleteColumnReader deleteColumnReader = new CometDeleteColumnReader<>(isDeleted);\n-          deleteColumnReader.setBatchSize(batchSize);\n-          deleteColumnReader.delegate().readBatch(batchSize);\n-          columnVectors[i] = deleteColumnReader.delegate().currentBatch();\n-        }\n-      }\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\ndeleted file mode 100644\nindex 047c96314b..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\n+++ /dev/null\n@@ -1,65 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.math.BigDecimal;\n-import java.nio.ByteBuffer;\n-import org.apache.comet.parquet.ConstantColumnReader;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Decimal;\n-import org.apache.spark.sql.types.DecimalType;\n-import org.apache.spark.unsafe.types.UTF8String;\n-\n-class CometConstantColumnReader<T> extends CometColumnReader {\n-\n-  CometConstantColumnReader(T value, Types.NestedField field) {\n-    super(field);\n-    // use delegate to set constant value on the native side to be consumed by native execution.\n-    setDelegate(\n-        new ConstantColumnReader(sparkType(), descriptor(), convertToSparkValue(value), false));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private Object convertToSparkValue(T value) {\n-    DataType dataType = sparkType();\n-    // Match the value to Spark internal type if necessary\n-    if (dataType == DataTypes.StringType && value instanceof String) {\n-      // the internal type for StringType is UTF8String\n-      return UTF8String.fromString((String) value);\n-    } else if (dataType instanceof DecimalType && value instanceof BigDecimal) {\n-      // the internal type for DecimalType is Decimal\n-      return Decimal.apply((BigDecimal) value);\n-    } else if (dataType == DataTypes.BinaryType && value instanceof ByteBuffer) {\n-      // the internal type for DecimalType is byte[]\n-      // Iceberg default value should always use HeapBufferBuffer, so calling ByteBuffer.array()\n-      // should be safe.\n-      return ((ByteBuffer) value).array();\n-    } else {\n-      return value;\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\ndeleted file mode 100644\nindex 6235bfe486..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\n+++ /dev/null\n@@ -1,75 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-\n-class CometDeleteColumnReader<T> extends CometColumnReader {\n-  CometDeleteColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new DeleteColumnReader());\n-  }\n-\n-  CometDeleteColumnReader(boolean[] isDeleted) {\n-    super(MetadataColumns.IS_DELETED);\n-    setDelegate(new DeleteColumnReader(isDeleted));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class DeleteColumnReader extends MetadataColumnReader {\n-    private boolean[] isDeleted;\n-\n-    DeleteColumnReader() {\n-      super(\n-          DataTypes.BooleanType,\n-          TypeUtil.convertToParquet(\n-              new StructField(\"_deleted\", DataTypes.BooleanType, false, Metadata.empty())),\n-          false /* useDecimal128 = false */,\n-          false /* isConstant = false */);\n-      this.isDeleted = new boolean[0];\n-    }\n-\n-    DeleteColumnReader(boolean[] isDeleted) {\n-      this();\n-      this.isDeleted = isDeleted;\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set isDeleted on the native side to be consumed by native execution\n-      Native.setIsDeleted(nativeHandle, isDeleted);\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\ndeleted file mode 100644\nindex bcc0e514c2..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\n+++ /dev/null\n@@ -1,62 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.spark.sql.types.DataTypes;\n-\n-class CometPositionColumnReader extends CometColumnReader {\n-  CometPositionColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new PositionColumnReader(descriptor()));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class PositionColumnReader extends MetadataColumnReader {\n-    /** The current position value of the column that are used to initialize this column reader. */\n-    private long position;\n-\n-    PositionColumnReader(ColumnDescriptor descriptor) {\n-      super(\n-          DataTypes.LongType,\n-          descriptor,\n-          false /* useDecimal128 = false */,\n-          false /* isConstant = false */);\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set position on the native side to be consumed by native execution\n-      Native.setPosition(nativeHandle, position, total);\n-      position += total;\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\ndeleted file mode 100644\nindex d36f1a7274..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\n+++ /dev/null\n@@ -1,147 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.util.List;\n-import java.util.Map;\n-import java.util.function.Function;\n-import java.util.stream.IntStream;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.TypeWithSchemaVisitor;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;\n-import org.apache.iceberg.relocated.com.google.common.collect.Lists;\n-import org.apache.iceberg.relocated.com.google.common.collect.Maps;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.schema.GroupType;\n-import org.apache.parquet.schema.MessageType;\n-import org.apache.parquet.schema.PrimitiveType;\n-import org.apache.parquet.schema.Type;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-\n-class CometVectorizedReaderBuilder extends TypeWithSchemaVisitor<VectorizedReader<?>> {\n-\n-  private final MessageType parquetSchema;\n-  private final Schema icebergSchema;\n-  private final Map<Integer, ?> idToConstant;\n-  private final Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory;\n-  private final DeleteFilter<InternalRow> deleteFilter;\n-\n-  CometVectorizedReaderBuilder(\n-      Schema expectedSchema,\n-      MessageType parquetSchema,\n-      Map<Integer, ?> idToConstant,\n-      Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    this.parquetSchema = parquetSchema;\n-    this.icebergSchema = expectedSchema;\n-    this.idToConstant = idToConstant;\n-    this.readerFactory = readerFactory;\n-    this.deleteFilter = deleteFilter;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> message(\n-      Types.StructType expected, MessageType message, List<VectorizedReader<?>> fieldReaders) {\n-    GroupType groupType = message.asGroupType();\n-    Map<Integer, VectorizedReader<?>> readersById = Maps.newHashMap();\n-    List<Type> fields = groupType.getFields();\n-\n-    IntStream.range(0, fields.size())\n-        .filter(pos -> fields.get(pos).getId() != null)\n-        .forEach(pos -> readersById.put(fields.get(pos).getId().intValue(), fieldReaders.get(pos)));\n-\n-    List<Types.NestedField> icebergFields =\n-        expected != null ? 
expected.fields() : ImmutableList.of();\n-\n-    List<VectorizedReader<?>> reorderedFields =\n-        Lists.newArrayListWithExpectedSize(icebergFields.size());\n-\n-    for (Types.NestedField field : icebergFields) {\n-      int id = field.fieldId();\n-      VectorizedReader<?> reader = readersById.get(id);\n-      if (idToConstant.containsKey(id)) {\n-        CometConstantColumnReader constantReader =\n-            new CometConstantColumnReader<>(idToConstant.get(id), field);\n-        reorderedFields.add(constantReader);\n-      } else if (id == MetadataColumns.ROW_POSITION.fieldId()) {\n-        reorderedFields.add(new CometPositionColumnReader(field));\n-      } else if (id == MetadataColumns.IS_DELETED.fieldId()) {\n-        CometColumnReader deleteReader = new CometDeleteColumnReader<>(field);\n-        reorderedFields.add(deleteReader);\n-      } else if (reader != null) {\n-        reorderedFields.add(reader);\n-      } else if (field.initialDefault() != null) {\n-        CometColumnReader constantReader =\n-            new CometConstantColumnReader<>(field.initialDefault(), field);\n-        reorderedFields.add(constantReader);\n-      } else if (field.isOptional()) {\n-        CometColumnReader constantReader = new CometConstantColumnReader<>(null, field);\n-        reorderedFields.add(constantReader);\n-      } else {\n-        throw new IllegalArgumentException(\n-            String.format(\"Missing required field: %s\", field.name()));\n-      }\n-    }\n-    return vectorizedReader(reorderedFields);\n-  }\n-\n-  protected VectorizedReader<?> vectorizedReader(List<VectorizedReader<?>> reorderedFields) {\n-    VectorizedReader<?> reader = readerFactory.apply(reorderedFields);\n-    if (deleteFilter != null) {\n-      ((CometColumnarBatchReader) reader).setDeleteFilter(deleteFilter);\n-    }\n-    return reader;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> struct(\n-      Types.StructType expected, GroupType groupType, List<VectorizedReader<?>> fieldReaders) {\n-    if (expected != null) {\n-      throw new UnsupportedOperationException(\n-          \"Vectorized reads are not supported yet for struct fields\");\n-    }\n-    return null;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> primitive(\n-      org.apache.iceberg.types.Type.PrimitiveType expected, PrimitiveType primitive) {\n-\n-    if (primitive.getId() == null) {\n-      return null;\n-    }\n-    int parquetFieldId = primitive.getId().intValue();\n-    ColumnDescriptor desc = parquetSchema.getColumnDescription(currentPath());\n-    // Nested types not yet supported for vectorized reads\n-    if (desc.getMaxRepetitionLevel() > 0) {\n-      return null;\n-    }\n-    Types.NestedField icebergField = icebergSchema.findField(parquetFieldId);\n-    if (icebergField == null) {\n-      return null;\n-    }\n-\n-    return new CometColumnReader(SparkSchemaUtil.convert(icebergField.type()), desc);\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\nindex d95baa724b..865cf5085f 100644\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n+++ b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n@@ -83,23 +83,6 @@ public class VectorizedSparkParquetReaders {\n         expectedSchema, fileSchema, idToConstant, 
deleteFilter, ArrowAllocation.rootAllocator());\n   }\n \n-  public static CometColumnarBatchReader buildCometReader(\n-      Schema expectedSchema,\n-      MessageType fileSchema,\n-      Map<Integer, ?> idToConstant,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    return (CometColumnarBatchReader)\n-        TypeWithSchemaVisitor.visit(\n-            expectedSchema.asStruct(),\n-            fileSchema,\n-            new CometVectorizedReaderBuilder(\n-                expectedSchema,\n-                fileSchema,\n-                idToConstant,\n-                readers -> new CometColumnarBatchReader(readers, expectedSchema),\n-                deleteFilter));\n-  }\n-\n   // enables unsafe memory access to avoid costly checks to see if index is within bounds\n   // as long as it is not configured explicitly (see BoundsChecking in Arrow)\n   private static void enableUnsafeMemoryAccess() {\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\nindex a0f45e7610..78e7845911 100644\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n+++ b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n@@ -34,7 +34,6 @@ import org.apache.iceberg.parquet.Parquet;\n import org.apache.iceberg.relocated.com.google.common.collect.Sets;\n import org.apache.iceberg.spark.OrcBatchReadConf;\n import org.apache.iceberg.spark.ParquetBatchReadConf;\n-import org.apache.iceberg.spark.ParquetReaderType;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkOrcReaders;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkParquetReaders;\n import org.apache.iceberg.types.TypeUtil;\n@@ -94,15 +93,9 @@ abstract class BaseBatchReader<T extends ScanTask> extends BaseReader<ColumnarBa\n         .project(requiredSchema)\n         .split(start, length)\n         .createBatchedReaderFunc(\n-            fileSchema -> {\n-              if (parquetConf.readerType() == ParquetReaderType.COMET) {\n-                return VectorizedSparkParquetReaders.buildCometReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              } else {\n-                return VectorizedSparkParquetReaders.buildReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              }\n-            })\n+            fileSchema ->\n+                VectorizedSparkParquetReaders.buildReader(\n+                    requiredSchema, fileSchema, idToConstant, deleteFilter))\n         .recordsPerBatch(parquetConf.batchSize())\n         .filter(residual)\n         .caseSensitive(caseSensitive())\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\nindex 404ba72846..00e97e96f9 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n@@ -90,6 +90,16 @@ public abstract class SparkDistributedDataScanTestBase\n         .master(\"local[2]\")\n         .config(\"spark.serializer\", serializer)\n         .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+        .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+        .config(\n+            \"spark.shuffle.manager\",\n+            
\"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+        .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+        .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.size\", \"10g\")\n+        .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+        .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n         .getOrCreate();\n   }\n }\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\nindex 659507e4c5..9076ec24d1 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n@@ -73,6 +73,16 @@ public class TestSparkDistributedDataScanDeletes\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\nindex a218f965ea..eca0125ace 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n@@ -62,6 +62,16 @@ public class TestSparkDistributedDataScanFilterFiles\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\nindex 2665d7ba8d..2381d0aa14 100644\n--- 
a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n@@ -63,6 +63,16 @@ public class TestSparkDistributedDataScanReporting\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\nindex daf4e29ac0..cffe2944e0 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\n@@ -79,6 +79,17 @@ public abstract class TestBase extends SparkTestHelperBase {\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .config(\"spark.comet.exec.broadcastExchange.enabled\", \"false\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/vectorized/parquet/TestParquetDictionaryEncodedVectorizedReads.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/vectorized/parquet/TestParquetDictionaryEncodedVectorizedReads.java\nindex 973a17c9a3..4490c219d5 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/vectorized/parquet/TestParquetDictionaryEncodedVectorizedReads.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/vectorized/parquet/TestParquetDictionaryEncodedVectorizedReads.java\n@@ -65,6 +65,16 @@ public class TestParquetDictionaryEncodedVectorizedReads extends TestParquetVect\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", 
\"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\nindex 1c5905744a..543edf3b2d 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n@@ -61,6 +61,16 @@ public abstract class ScanTestBase extends AvroDataTestBase {\n     ScanTestBase.spark =\n         SparkSession.builder()\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[2]\")\n             .getOrCreate();\n     ScanTestBase.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\nindex 19ec6d13dd..e2d0a84d01 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n@@ -144,7 +144,20 @@ public class TestCompressionSettings extends CatalogTestBase {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestCompressionSettings.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestCompressionSettings.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   
}\n \n   @BeforeEach\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\nindex a7702b169a..e1fdafae87 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n@@ -74,7 +74,20 @@ public class TestDataSourceOptions extends TestBaseWithCatalog {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestDataSourceOptions.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestDataSourceOptions.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\nindex fd7d52178f..c8aab2996e 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n@@ -114,6 +114,16 @@ public class TestFilteredScan {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\nindex 153564f7d1..264179459d 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n@@ -98,6 +98,16 @@ public class TestForwardCompatibility {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", 
\"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\nindex f4f57157e4..7dd2ed811d 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n@@ -51,6 +51,16 @@ public class TestIcebergSpark {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\nindex e1402396fa..8b60b8bc48 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n@@ -118,6 +118,16 @@ public class TestPartitionPruning {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestPartitionPruning.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java 
b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\nindex 0b6ab2052b..dec4026f3b 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n@@ -112,6 +112,16 @@ public class TestPartitionValues {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\nindex 11865db7fc..bdb9ae32a6 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n@@ -91,6 +91,16 @@ public class TestSnapshotSelection {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\nindex 3051e27d72..22899c233b 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n@@ -125,6 +125,16 @@ public class TestSparkDataFile {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            
.config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestSparkDataFile.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\nindex 4ccbf86f12..2089bac577 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n@@ -100,6 +100,16 @@ public class TestSparkDataWrite {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \n@@ -144,7 +154,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n     for (ManifestFile manifest :\n         SnapshotUtil.latestSnapshot(table, branch).allManifests(table.io())) {\n@@ -213,7 +223,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n   }\n \n@@ -258,7 +268,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n   }\n \n@@ -310,7 +320,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", 
\"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n   }\n \n@@ -352,7 +362,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n   }\n \n@@ -391,7 +401,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n \n     List<DataFile> files = Lists.newArrayList();\n@@ -459,7 +469,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).hasSameSizeAs(expected).isEqualTo(expected);\n   }\n \n@@ -706,7 +716,7 @@ public class TestSparkDataWrite {\n     // Since write and commit succeeded, the rows should be readable\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual)\n         .hasSize(records.size() + records2.size())\n         .containsExactlyInAnyOrder(\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\nindex 596d05d30b..e681d81704 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n@@ -88,6 +88,16 @@ public class TestSparkReadProjection extends TestReadProjection {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     ImmutableMap<String, String> config =\n         ImmutableMap.of(\ndiff --git 
a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\nindex 42699f4662..74927dbd8d 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n@@ -138,6 +138,16 @@ public class TestSparkReaderDeletes extends DeleteReadTests {\n             .config(\"spark.ui.liveUpdate.period\", 0)\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \n@@ -206,7 +216,8 @@ public class TestSparkReaderDeletes extends DeleteReadTests {\n   }\n \n   protected boolean countDeletes() {\n-    return true;\n+    // TODO: Enable once iceberg-rust exposes delete count metrics to Comet\n+    return false;\n   }\n \n   @Override\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\nindex baf7fa8f88..150cc89696 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n@@ -182,6 +182,16 @@ public class TestSparkReaderWithBloomFilter {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\nindex 54048bbf21..c87216824c 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n+++ 
b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n@@ -69,6 +69,16 @@ public class TestStructuredStreaming {\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n             .config(\"spark.sql.shuffle.partitions\", 4)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\nindex 8b1e3fbfc7..b4385c3950 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n@@ -75,7 +75,20 @@ public class TestTimestampWithoutZone extends TestBase {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestTimestampWithoutZone.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestTimestampWithoutZone.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\nindex c3fac70dd3..b6c66f3f88 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n@@ -84,6 +84,16 @@ public class TestWriteMetricsConfig {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.driver.host\", InetAddress.getLoopbackAddress().getHostAddress())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", 
\"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestWriteMetricsConfig.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\nindex 5ce56b4fec..df1f914da3 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n@@ -63,6 +63,16 @@ public class TestAggregatePushDown extends CatalogTestBase {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.sql.iceberg.aggregate_pushdown\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\nindex 9d2ce2b388..5e23368848 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n@@ -598,9 +598,7 @@ public class TestFilterPushDown extends TestBaseWithCatalog {\n     String planAsString = sparkPlan.toString().replaceAll(\"#(\\\\d+L?)\", \"\");\n \n     if (sparkFilter != null) {\n-      assertThat(planAsString)\n-          .as(\"Post scan filter should match\")\n-          .contains(\"Filter (\" + sparkFilter + \")\");\n+      assertThat(planAsString).as(\"Post scan filter should match\").contains(\"CometFilter\");\n     } else {\n       assertThat(planAsString).as(\"Should be no post scan filter\").doesNotContain(\"Filter (\");\n     }\n"
  },
  {
    "path": "dev/diffs/iceberg/1.8.1.diff",
    "content": "diff --git a/gradle/libs.versions.toml b/gradle/libs.versions.toml\nindex 04ffa8f4ed..2d2c0d6e76 100644\n--- a/gradle/libs.versions.toml\n+++ b/gradle/libs.versions.toml\n@@ -34,6 +34,7 @@ azuresdk-bom = \"1.2.31\"\n awssdk-s3accessgrants = \"2.3.0\"\n caffeine = \"2.9.3\"\n calcite = \"1.10.0\"\n+comet = \"0.15.0-SNAPSHOT\"\n datasketches = \"6.2.0\"\n delta-standalone = \"3.3.0\"\n delta-spark = \"3.3.0\"\n@@ -81,7 +82,7 @@ slf4j = \"2.0.16\"\n snowflake-jdbc = \"3.22.0\"\n spark-hive33 = \"3.3.4\"\n spark-hive34 = \"3.4.4\"\n-spark-hive35 = \"3.5.4\"\n+spark-hive35 = \"3.5.6\"\n sqlite-jdbc = \"3.48.0.0\"\n testcontainers = \"1.20.4\"\n tez010 = \"0.10.4\"\ndiff --git a/spark/v3.4/build.gradle b/spark/v3.4/build.gradle\nindex 6eb26e8b73..38d9cf3ca3 100644\n--- a/spark/v3.4/build.gradle\n+++ b/spark/v3.4/build.gradle\n@@ -75,7 +75,7 @@ project(\":iceberg-spark:iceberg-spark-${sparkMajorVersion}_${scalaVersion}\") {\n       exclude group: 'org.roaringbitmap'\n     }\n \n-    compileOnly \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:0.5.0\"\n+    compileOnly \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     implementation libs.parquet.column\n     implementation libs.parquet.hadoop\n@@ -185,7 +185,7 @@ project(\":iceberg-spark:iceberg-spark-extensions-${sparkMajorVersion}_${scalaVer\n     testImplementation libs.avro.avro\n     testImplementation libs.parquet.hadoop\n     testImplementation libs.junit.vintage.engine\n-    testImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:0.5.0\"\n+    testImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     // Required because we remove antlr plugin dependencies from the compile configuration, see note above\n     runtimeOnly libs.antlr.runtime\n@@ -260,6 +260,8 @@ project(\":iceberg-spark:iceberg-spark-runtime-${sparkMajorVersion}_${scalaVersio\n     integrationImplementation project(path: ':iceberg-hive-metastore', configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-extensions-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n+    integrationImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n+    integrationImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     // runtime dependencies for running Hive Catalog based integration test\n     integrationRuntimeOnly project(':iceberg-hive-metastore')\ndiff --git a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/SparkExtensionsTestBase.java b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/SparkExtensionsTestBase.java\nindex 4f137f5b8d..4f06c036e1 100644\n--- a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/SparkExtensionsTestBase.java\n+++ b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/SparkExtensionsTestBase.java\n@@ -60,7 +60,27 @@ public abstract class SparkExtensionsTestBase extends SparkCatalogTestBase {\n             
.config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n             .config(\n                 SQLConf.ADAPTIVE_EXECUTION_ENABLED().key(), String.valueOf(RANDOM.nextBoolean()))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n \n     SparkTestBase.catalog =\ndiff --git a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\nindex 55a413063e..e34bfa57ed 100644\n--- a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n+++ b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n@@ -61,6 +61,16 @@ public class TestCallStatementParser {\n             .master(\"local[2]\")\n             .config(\"spark.sql.extensions\", IcebergSparkSessionExtensions.class.getName())\n             .config(\"spark.extra.prop\", \"value\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestCallStatementParser.parser = spark.sessionState().sqlParser();\n   }\ndiff --git a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\nindex b6ade2bff3..c12127d618 100644\n--- a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n+++ 
b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n@@ -170,6 +170,16 @@ public class DeleteOrphanFilesBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", catalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local\");\n     spark = builder.getOrCreate();\n   }\ndiff --git a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\nindex b08c352819..4ffee26156 100644\n--- a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n+++ b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n@@ -386,6 +386,16 @@ public class IcebergSortCompactionBenchmark {\n                 \"spark.sql.catalog.spark_catalog\", \"org.apache.iceberg.spark.SparkSessionCatalog\")\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", getCatalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\");\n     spark = builder.getOrCreate();\n     Configuration sparkHadoopConf = spark.sessionState().newHadoopConf();\ndiff --git a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\nindex 68c537e34a..1e9e90d539 100644\n--- a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n+++ b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n@@ -94,7 +94,19 @@ public abstract class IcebergSourceBenchmark {\n   }\n \n   protected void setupSpark(boolean enableDictionaryEncoding) {\n-    SparkSession.Builder builder = SparkSession.builder().config(\"spark.ui.enabled\", false);\n+    SparkSession.Builder builder =\n+        SparkSession.builder()\n+            .config(\"spark.ui.enabled\", false)\n+        
    .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\");\n     if (!enableDictionaryEncoding) {\n       builder\n           .config(\"parquet.dictionary.page.size\", \"1\")\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\ndeleted file mode 100644\nindex 4794863ab1..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\n+++ /dev/null\n@@ -1,150 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import java.util.Map;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.ColumnReader;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.comet.parquet.Utils;\n-import org.apache.comet.shaded.arrow.c.CometSchemaImporter;\n-import org.apache.comet.shaded.arrow.memory.RootAllocator;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.column.page.PageReadStore;\n-import org.apache.parquet.column.page.PageReader;\n-import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\n-import org.apache.parquet.hadoop.metadata.ColumnPath;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-\n-class CometColumnReader implements VectorizedReader<ColumnVector> {\n-  // use the Comet default batch size\n-  public static final int DEFAULT_BATCH_SIZE = 8192;\n-\n-  private final ColumnDescriptor descriptor;\n-  private final DataType sparkType;\n-\n-  // The delegated ColumnReader from Comet side\n-  private AbstractColumnReader delegate;\n-  private boolean initialized = false;\n-  private int batchSize = DEFAULT_BATCH_SIZE;\n-  private CometSchemaImporter importer;\n-\n-  CometColumnReader(DataType sparkType, ColumnDescriptor descriptor) {\n-    this.sparkType = sparkType;\n-    this.descriptor = descriptor;\n-  }\n-\n-  CometColumnReader(Types.NestedField field) {\n-    DataType dataType = SparkSchemaUtil.convert(field.type());\n-    StructField structField = new StructField(field.name(), dataType, false, Metadata.empty());\n-    this.sparkType = dataType;\n-    this.descriptor = TypeUtil.convertToParquet(structField);\n-  }\n-\n-  public AbstractColumnReader delegate() {\n-    return delegate;\n-  }\n-\n-  void setDelegate(AbstractColumnReader delegate) {\n-    this.delegate = delegate;\n-  }\n-\n-  void setInitialized(boolean initialized) {\n-    this.initialized = initialized;\n-  }\n-\n-  public int batchSize() {\n-    return batchSize;\n-  }\n-\n-  /**\n-   * This method is to initialized/reset the CometColumnReader. This needs to be called for each row\n-   * group after readNextRowGroup, so a new dictionary encoding can be set for each of the new row\n-   * groups.\n-   */\n-  public void reset() {\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-\n-    this.importer = new CometSchemaImporter(new RootAllocator());\n-    this.delegate = Utils.getColumnReader(sparkType, descriptor, importer, batchSize, false, false);\n-    this.initialized = true;\n-  }\n-\n-  public ColumnDescriptor descriptor() {\n-    return descriptor;\n-  }\n-\n-  /** Returns the Spark data type for this column. 
*/\n-  public DataType sparkType() {\n-    return sparkType;\n-  }\n-\n-  /**\n-   * Set the page reader to be 'pageReader'.\n-   *\n-   * <p>NOTE: this should be called before reading a new Parquet column chunk, and after {@link\n-   * CometColumnReader#reset} is called.\n-   */\n-  public void setPageReader(PageReader pageReader) throws IOException {\n-    Preconditions.checkState(initialized, \"Invalid state: 'reset' should be called first\");\n-    ((ColumnReader) delegate).setPageReader(pageReader);\n-  }\n-\n-  @Override\n-  public void close() {\n-    // close resources on native side\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-  }\n-\n-  @Override\n-  public void setBatchSize(int size) {\n-    this.batchSize = size;\n-  }\n-\n-  @Override\n-  public void setRowGroupInfo(\n-      PageReadStore pageReadStore, Map<ColumnPath, ColumnChunkMetaData> map, long size) {\n-    throw new UnsupportedOperationException(\"Not supported\");\n-  }\n-\n-  @Override\n-  public ColumnVector read(ColumnVector reuse, int numRowsToRead) {\n-    throw new UnsupportedOperationException(\"Not supported\");\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\ndeleted file mode 100644\nindex 1440e5d1d3..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\n+++ /dev/null\n@@ -1,203 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import java.io.UncheckedIOException;\n-import java.util.List;\n-import java.util.Map;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.BatchReader;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.util.Pair;\n-import org.apache.parquet.column.page.PageReadStore;\n-import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\n-import org.apache.parquet.hadoop.metadata.ColumnPath;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-import org.apache.spark.sql.vectorized.ColumnarBatch;\n-\n-/**\n- * {@link VectorizedReader} that returns Spark's {@link ColumnarBatch} to support Spark's vectorized\n- * read path. 
The {@link ColumnarBatch} returned is created by passing in the Arrow vectors\n- * populated via delegated read calls to {@link CometColumnReader VectorReader(s)}.\n- */\n-@SuppressWarnings(\"checkstyle:VisibilityModifier\")\n-class CometColumnarBatchReader implements VectorizedReader<ColumnarBatch> {\n-\n-  private final CometColumnReader[] readers;\n-  private final boolean hasIsDeletedColumn;\n-\n-  // The delegated BatchReader on the Comet side does the real work of loading a batch of rows.\n-  // The Comet BatchReader contains an array of ColumnReader. There is no need to explicitly call\n-  // ColumnReader.readBatch; instead, BatchReader.nextBatch will be called, which underneath calls\n-  // ColumnReader.readBatch. The only exception is DeleteColumnReader, because at the time of\n-  // calling BatchReader.nextBatch, the isDeleted value is not yet available, so\n-  // DeleteColumnReader.readBatch must be called explicitly later, after the isDeleted value is\n-  // available.\n-  private final BatchReader delegate;\n-  private DeleteFilter<InternalRow> deletes = null;\n-  private long rowStartPosInBatch = 0;\n-\n-  CometColumnarBatchReader(List<VectorizedReader<?>> readers, Schema schema) {\n-    this.readers =\n-        readers.stream().map(CometColumnReader.class::cast).toArray(CometColumnReader[]::new);\n-    this.hasIsDeletedColumn =\n-        readers.stream().anyMatch(reader -> reader instanceof CometDeleteColumnReader);\n-\n-    AbstractColumnReader[] abstractColumnReaders = new AbstractColumnReader[readers.size()];\n-    this.delegate = new BatchReader(abstractColumnReaders);\n-    delegate.setSparkSchema(SparkSchemaUtil.convert(schema));\n-  }\n-\n-  @Override\n-  public void setRowGroupInfo(\n-      PageReadStore pageStore, Map<ColumnPath, ColumnChunkMetaData> metaData, long rowPosition) {\n-    setRowGroupInfo(pageStore, metaData);\n-  }\n-\n-  @Override\n-  public void setRowGroupInfo(\n-      PageReadStore pageStore, Map<ColumnPath, ColumnChunkMetaData> metaData) {\n-    for (int i = 0; i < readers.length; i++) {\n-      try {\n-        if (!(readers[i] instanceof CometConstantColumnReader)\n-            && !(readers[i] instanceof CometPositionColumnReader)\n-            && !(readers[i] instanceof CometDeleteColumnReader)) {\n-          readers[i].reset();\n-          readers[i].setPageReader(pageStore.getPageReader(readers[i].descriptor()));\n-        }\n-      } catch (IOException e) {\n-        throw new UncheckedIOException(\"Failed to setRowGroupInfo for Comet vectorization\", e);\n-      }\n-    }\n-\n-    for (int i = 0; i < readers.length; i++) {\n-      delegate.getColumnReaders()[i] = this.readers[i].delegate();\n-    }\n-\n-    this.rowStartPosInBatch =\n-        pageStore\n-            .getRowIndexOffset()\n-            .orElseThrow(\n-                () ->\n-                    new IllegalArgumentException(\n-                        \"PageReadStore does not contain row index offset\"));\n-  }\n-\n-  public void setDeleteFilter(DeleteFilter<InternalRow> deleteFilter) {\n-    this.deletes = deleteFilter;\n-  }\n-\n-  @Override\n-  public final ColumnarBatch read(ColumnarBatch reuse, int numRowsToRead) {\n-    ColumnarBatch columnarBatch = new ColumnBatchLoader(numRowsToRead).loadDataToColumnBatch();\n-    rowStartPosInBatch += numRowsToRead;\n-    return columnarBatch;\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.setBatchSize(batchSize);\n-     
 }\n-    }\n-  }\n-\n-  @Override\n-  public void close() {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.close();\n-      }\n-    }\n-  }\n-\n-  private class ColumnBatchLoader {\n-    private final int batchSize;\n-\n-    ColumnBatchLoader(int numRowsToRead) {\n-      Preconditions.checkArgument(\n-          numRowsToRead > 0, \"Invalid number of rows to read: %s\", numRowsToRead);\n-      this.batchSize = numRowsToRead;\n-    }\n-\n-    ColumnarBatch loadDataToColumnBatch() {\n-      ColumnVector[] vectors = readDataToColumnVectors();\n-      int numLiveRows = batchSize;\n-\n-      if (hasIsDeletedColumn) {\n-        boolean[] isDeleted = buildIsDeleted(vectors);\n-        readDeletedColumn(vectors, isDeleted);\n-      } else {\n-        Pair<int[], Integer> pair = buildRowIdMapping(vectors);\n-        if (pair != null) {\n-          int[] rowIdMapping = pair.first();\n-          numLiveRows = pair.second();\n-          for (int i = 0; i < vectors.length; i++) {\n-            vectors[i] = new ColumnVectorWithFilter(vectors[i], rowIdMapping);\n-          }\n-        }\n-      }\n-\n-      if (deletes != null && deletes.hasEqDeletes()) {\n-        vectors = ColumnarBatchUtil.removeExtraColumns(deletes, vectors);\n-      }\n-\n-      ColumnarBatch batch = new ColumnarBatch(vectors);\n-      batch.setNumRows(numLiveRows);\n-      return batch;\n-    }\n-\n-    private boolean[] buildIsDeleted(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildIsDeleted(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    private Pair<int[], Integer> buildRowIdMapping(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildRowIdMapping(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    ColumnVector[] readDataToColumnVectors() {\n-      ColumnVector[] columnVectors = new ColumnVector[readers.length];\n-      // Fetch rows for all readers in the delegate\n-      delegate.nextBatch(batchSize);\n-      for (int i = 0; i < readers.length; i++) {\n-        columnVectors[i] = readers[i].delegate().currentBatch();\n-      }\n-\n-      return columnVectors;\n-    }\n-\n-    void readDeletedColumn(ColumnVector[] columnVectors, boolean[] isDeleted) {\n-      for (int i = 0; i < readers.length; i++) {\n-        if (readers[i] instanceof CometDeleteColumnReader) {\n-          CometDeleteColumnReader deleteColumnReader = new CometDeleteColumnReader<>(isDeleted);\n-          deleteColumnReader.setBatchSize(batchSize);\n-          deleteColumnReader.delegate().readBatch(batchSize);\n-          columnVectors[i] = deleteColumnReader.delegate().currentBatch();\n-        }\n-      }\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\ndeleted file mode 100644\nindex c665002e8f..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\n+++ /dev/null\n@@ -1,65 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  
You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.math.BigDecimal;\n-import java.nio.ByteBuffer;\n-import org.apache.comet.parquet.ConstantColumnReader;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Decimal;\n-import org.apache.spark.sql.types.DecimalType;\n-import org.apache.spark.unsafe.types.UTF8String;\n-\n-class CometConstantColumnReader<T> extends CometColumnReader {\n-\n-  CometConstantColumnReader(T value, Types.NestedField field) {\n-    super(field);\n-    // use delegate to set constant value on the native side to be consumed by native execution.\n-    setDelegate(\n-        new ConstantColumnReader(sparkType(), descriptor(), convertToSparkValue(value), false));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private Object convertToSparkValue(T value) {\n-    DataType dataType = sparkType();\n-    // Match the value to Spark internal type if necessary\n-    if (dataType == DataTypes.StringType && value instanceof String) {\n-      // the internal type for StringType is UTF8String\n-      return UTF8String.fromString((String) value);\n-    } else if (dataType instanceof DecimalType && value instanceof BigDecimal) {\n-      // the internal type for DecimalType is Decimal\n-      return Decimal.apply((BigDecimal) value);\n-    } else if (dataType == DataTypes.BinaryType && value instanceof ByteBuffer) {\n-      // the internal type for DecimalType is byte[]\n-      // Iceberg default value should always use HeapBufferBuffer, so calling ByteBuffer.array()\n-      // should be safe.\n-      return ((java.nio.ByteBuffer) value).array();\n-    } else {\n-      return value;\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\ndeleted file mode 100644\nindex 4a28fc51da..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\n+++ /dev/null\n@@ -1,75 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-\n-class CometDeleteColumnReader<T> extends CometColumnReader {\n-  CometDeleteColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new DeleteColumnReader());\n-  }\n-\n-  CometDeleteColumnReader(boolean[] isDeleted) {\n-    super(MetadataColumns.IS_DELETED);\n-    setDelegate(new DeleteColumnReader(isDeleted));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class DeleteColumnReader extends MetadataColumnReader {\n-    private boolean[] isDeleted;\n-\n-    DeleteColumnReader() {\n-      super(\n-          DataTypes.BooleanType,\n-          TypeUtil.convertToParquet(\n-              new StructField(\"_deleted\", DataTypes.BooleanType, false, Metadata.empty())),\n-          false /* useDecimal128 = false */,\n-          false /* isConstant */);\n-      this.isDeleted = new boolean[0];\n-    }\n-\n-    DeleteColumnReader(boolean[] isDeleted) {\n-      this();\n-      this.isDeleted = isDeleted;\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set isDeleted on the native side to be consumed by native execution\n-      Native.setIsDeleted(nativeHandle, isDeleted);\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\ndeleted file mode 100644\nindex 1949a71798..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\n+++ /dev/null\n@@ -1,62 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.spark.sql.types.DataTypes;\n-\n-class CometPositionColumnReader extends CometColumnReader {\n-  CometPositionColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new PositionColumnReader(descriptor()));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class PositionColumnReader extends MetadataColumnReader {\n-    /** The current position value of the column that are used to initialize this column reader. */\n-    private long position;\n-\n-    PositionColumnReader(ColumnDescriptor descriptor) {\n-      super(\n-          DataTypes.LongType,\n-          descriptor,\n-          false /* useDecimal128 = false */,\n-          false /* isConstant */);\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set position on the native side to be consumed by native execution\n-      Native.setPosition(nativeHandle, position, total);\n-      position += total;\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\ndeleted file mode 100644\nindex d36f1a7274..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\n+++ /dev/null\n@@ -1,147 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.util.List;\n-import java.util.Map;\n-import java.util.function.Function;\n-import java.util.stream.IntStream;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.TypeWithSchemaVisitor;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;\n-import org.apache.iceberg.relocated.com.google.common.collect.Lists;\n-import org.apache.iceberg.relocated.com.google.common.collect.Maps;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.schema.GroupType;\n-import org.apache.parquet.schema.MessageType;\n-import org.apache.parquet.schema.PrimitiveType;\n-import org.apache.parquet.schema.Type;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-\n-class CometVectorizedReaderBuilder extends TypeWithSchemaVisitor<VectorizedReader<?>> {\n-\n-  private final MessageType parquetSchema;\n-  private final Schema icebergSchema;\n-  private final Map<Integer, ?> idToConstant;\n-  private final Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory;\n-  private final DeleteFilter<InternalRow> deleteFilter;\n-\n-  CometVectorizedReaderBuilder(\n-      Schema expectedSchema,\n-      MessageType parquetSchema,\n-      Map<Integer, ?> idToConstant,\n-      Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    this.parquetSchema = parquetSchema;\n-    this.icebergSchema = expectedSchema;\n-    this.idToConstant = idToConstant;\n-    this.readerFactory = readerFactory;\n-    this.deleteFilter = deleteFilter;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> message(\n-      Types.StructType expected, MessageType message, List<VectorizedReader<?>> fieldReaders) {\n-    GroupType groupType = message.asGroupType();\n-    Map<Integer, VectorizedReader<?>> readersById = Maps.newHashMap();\n-    List<Type> fields = groupType.getFields();\n-\n-    IntStream.range(0, fields.size())\n-        .filter(pos -> fields.get(pos).getId() != null)\n-        .forEach(pos -> readersById.put(fields.get(pos).getId().intValue(), fieldReaders.get(pos)));\n-\n-    List<Types.NestedField> icebergFields =\n-        expected != null ? 
expected.fields() : ImmutableList.of();\n-\n-    List<VectorizedReader<?>> reorderedFields =\n-        Lists.newArrayListWithExpectedSize(icebergFields.size());\n-\n-    for (Types.NestedField field : icebergFields) {\n-      int id = field.fieldId();\n-      VectorizedReader<?> reader = readersById.get(id);\n-      if (idToConstant.containsKey(id)) {\n-        CometConstantColumnReader constantReader =\n-            new CometConstantColumnReader<>(idToConstant.get(id), field);\n-        reorderedFields.add(constantReader);\n-      } else if (id == MetadataColumns.ROW_POSITION.fieldId()) {\n-        reorderedFields.add(new CometPositionColumnReader(field));\n-      } else if (id == MetadataColumns.IS_DELETED.fieldId()) {\n-        CometColumnReader deleteReader = new CometDeleteColumnReader<>(field);\n-        reorderedFields.add(deleteReader);\n-      } else if (reader != null) {\n-        reorderedFields.add(reader);\n-      } else if (field.initialDefault() != null) {\n-        CometColumnReader constantReader =\n-            new CometConstantColumnReader<>(field.initialDefault(), field);\n-        reorderedFields.add(constantReader);\n-      } else if (field.isOptional()) {\n-        CometColumnReader constantReader = new CometConstantColumnReader<>(null, field);\n-        reorderedFields.add(constantReader);\n-      } else {\n-        throw new IllegalArgumentException(\n-            String.format(\"Missing required field: %s\", field.name()));\n-      }\n-    }\n-    return vectorizedReader(reorderedFields);\n-  }\n-\n-  protected VectorizedReader<?> vectorizedReader(List<VectorizedReader<?>> reorderedFields) {\n-    VectorizedReader<?> reader = readerFactory.apply(reorderedFields);\n-    if (deleteFilter != null) {\n-      ((CometColumnarBatchReader) reader).setDeleteFilter(deleteFilter);\n-    }\n-    return reader;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> struct(\n-      Types.StructType expected, GroupType groupType, List<VectorizedReader<?>> fieldReaders) {\n-    if (expected != null) {\n-      throw new UnsupportedOperationException(\n-          \"Vectorized reads are not supported yet for struct fields\");\n-    }\n-    return null;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> primitive(\n-      org.apache.iceberg.types.Type.PrimitiveType expected, PrimitiveType primitive) {\n-\n-    if (primitive.getId() == null) {\n-      return null;\n-    }\n-    int parquetFieldId = primitive.getId().intValue();\n-    ColumnDescriptor desc = parquetSchema.getColumnDescription(currentPath());\n-    // Nested types not yet supported for vectorized reads\n-    if (desc.getMaxRepetitionLevel() > 0) {\n-      return null;\n-    }\n-    Types.NestedField icebergField = icebergSchema.findField(parquetFieldId);\n-    if (icebergField == null) {\n-      return null;\n-    }\n-\n-    return new CometColumnReader(SparkSchemaUtil.convert(icebergField.type()), desc);\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\nindex b523bc5bff..636ad3be7d 100644\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n+++ b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n@@ -70,23 +70,6 @@ public class VectorizedSparkParquetReaders {\n                 deleteFilter));\n   }\n \n-  public 
static CometColumnarBatchReader buildCometReader(\n-      Schema expectedSchema,\n-      MessageType fileSchema,\n-      Map<Integer, ?> idToConstant,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    return (CometColumnarBatchReader)\n-        TypeWithSchemaVisitor.visit(\n-            expectedSchema.asStruct(),\n-            fileSchema,\n-            new CometVectorizedReaderBuilder(\n-                expectedSchema,\n-                fileSchema,\n-                idToConstant,\n-                readers -> new CometColumnarBatchReader(readers, expectedSchema),\n-                deleteFilter));\n-  }\n-\n   // enables unsafe memory access to avoid costly checks to see if index is within bounds\n   // as long as it is not configured explicitly (see BoundsChecking in Arrow)\n   private static void enableUnsafeMemoryAccess() {\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\nindex 780e1750a5..25f253eede 100644\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n+++ b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n@@ -34,7 +34,6 @@ import org.apache.iceberg.parquet.Parquet;\n import org.apache.iceberg.relocated.com.google.common.collect.Sets;\n import org.apache.iceberg.spark.OrcBatchReadConf;\n import org.apache.iceberg.spark.ParquetBatchReadConf;\n-import org.apache.iceberg.spark.ParquetReaderType;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkOrcReaders;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkParquetReaders;\n import org.apache.iceberg.types.TypeUtil;\n@@ -92,15 +91,9 @@ abstract class BaseBatchReader<T extends ScanTask> extends BaseReader<ColumnarBa\n         .project(requiredSchema)\n         .split(start, length)\n         .createBatchedReaderFunc(\n-            fileSchema -> {\n-              if (parquetConf.readerType() == ParquetReaderType.COMET) {\n-                return VectorizedSparkParquetReaders.buildCometReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              } else {\n-                return VectorizedSparkParquetReaders.buildReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              }\n-            })\n+            fileSchema ->\n+                VectorizedSparkParquetReaders.buildReader(\n+                    requiredSchema, fileSchema, idToConstant, deleteFilter))\n         .recordsPerBatch(parquetConf.batchSize())\n         .filter(residual)\n         .caseSensitive(caseSensitive())\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\nindex 404ba72846..00e97e96f9 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n@@ -90,6 +90,16 @@ public abstract class SparkDistributedDataScanTestBase\n         .master(\"local[2]\")\n         .config(\"spark.serializer\", serializer)\n         .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+        .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+        .config(\n+            \"spark.shuffle.manager\",\n+            
\"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+        .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+        .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.size\", \"10g\")\n+        .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+        .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n         .getOrCreate();\n   }\n }\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\nindex 9361c63176..9a78bbf3b6 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n@@ -69,6 +69,16 @@ public class TestSparkDistributedDataScanDeletes\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\nindex a218f965ea..eca0125ace 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n@@ -62,6 +62,16 @@ public class TestSparkDistributedDataScanFilterFiles\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\nindex acd4688440..9be94eab2f 100644\n--- 
a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n@@ -59,6 +59,16 @@ public class TestSparkDistributedDataScanReporting\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/SparkTestBase.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/SparkTestBase.java\nindex 3e8953fb95..cac4f75e95 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/SparkTestBase.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/SparkTestBase.java\n@@ -77,6 +77,16 @@ public abstract class SparkTestBase extends SparkTestHelperBase {\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\nindex 3a269740b7..42ecd7ac7b 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n@@ -54,7 +54,20 @@ public abstract class ScanTestBase extends AvroDataTest {\n \n   @BeforeAll\n   public static void startSpark() {\n-    ScanTestBase.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    ScanTestBase.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+           
 .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     ScanTestBase.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\nindex 724c6edde2..40b17106b0 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n@@ -103,7 +103,20 @@ public class TestCompressionSettings extends SparkCatalogTestBase {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestCompressionSettings.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestCompressionSettings.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @Parameterized.AfterParam\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\nindex 013b8d4386..613f35417e 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n@@ -76,7 +76,20 @@ public class TestDataSourceOptions extends SparkTestBaseWithCatalog {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestDataSourceOptions.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestDataSourceOptions.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git 
a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\nindex ba13d005bd..73d76e2d3e 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n@@ -118,7 +118,20 @@ public class TestFilteredScan {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestFilteredScan.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestFilteredScan.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n \n     // define UDFs used by partition tests\n     Function<Object, Integer> bucket4 = Transforms.bucket(4).bind(Types.LongType.get());\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\nindex 9f97753094..a69cf4f33d 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n@@ -94,7 +94,20 @@ public class TestForwardCompatibility {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestForwardCompatibility.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestForwardCompatibility.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\nindex 0154506f86..79138afc26 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n@@ -46,7 +46,20 @@ public class TestIcebergSpark {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestIcebergSpark.spark = 
SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestIcebergSpark.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\nindex 639d37c793..c4570510a6 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n@@ -111,7 +111,20 @@ public class TestPartitionPruning {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestPartitionPruning.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestPartitionPruning.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestPartitionPruning.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n \n     String optionKey = String.format(\"fs.%s.impl\", CountOpenLocalFileSystem.scheme);\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\nindex ad0984ef42..cb143ac2fc 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n@@ -104,7 +104,20 @@ public class TestPartitionValues {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestPartitionValues.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestPartitionValues.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            
.config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\nindex 9fc576dde5..2765862300 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n@@ -83,7 +83,20 @@ public class TestSnapshotSelection {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestSnapshotSelection.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSnapshotSelection.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\nindex 16fde3c954..9c01167064 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n@@ -121,7 +121,20 @@ public class TestSparkDataFile {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestSparkDataFile.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkDataFile.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestSparkDataFile.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\nindex 
63c18277aa..d214d7c23c 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n@@ -89,7 +89,20 @@ public class TestSparkDataWrite {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestSparkDataWrite.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkDataWrite.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @Parameterized.AfterParam\n@@ -138,7 +151,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n     for (ManifestFile manifest :\n@@ -208,7 +221,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n   }\n@@ -254,7 +267,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n   }\n@@ -307,7 +320,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n   }\n@@ -350,7 +363,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n 
\n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n   }\n@@ -390,7 +403,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n \n@@ -455,7 +468,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n   }\n@@ -619,7 +632,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n \n@@ -705,7 +718,7 @@ public class TestSparkDataWrite {\n     // Since write and commit succeeded, the rows should be readable\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\n         \"Number of rows should match\", records.size() + records2.size(), actual.size());\n     assertThat(actual)\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\nindex 3a4b235c46..2de0760813 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n@@ -86,7 +86,20 @@ public class TestSparkReadProjection extends TestReadProjection {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestSparkReadProjection.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkReadProjection.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                
\"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     ImmutableMap<String, String> config =\n         ImmutableMap.of(\n             \"type\", \"hive\",\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\nindex dda49b4946..529992de6b 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n@@ -131,7 +131,27 @@ public class TestSparkReaderDeletes extends DeleteReadTests {\n             .config(\"spark.ui.liveUpdate.period\", 0)\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n \n     catalog =\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\nindex e5831b76e4..5c45a111d9 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n@@ -176,7 +176,27 @@ public class TestSparkReaderWithBloomFilter {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+  
+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \n     catalog =\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\nindex f420f1b955..d64dbfa61a 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n@@ -66,6 +66,16 @@ public class TestStructuredStreaming {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.sql.shuffle.partitions\", 4)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\nindex ac674e2e62..4460d597b3 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n@@ -73,7 +73,20 @@ public class TestTimestampWithoutZone extends SparkTestBase {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestTimestampWithoutZone.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestTimestampWithoutZone.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n
\"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\nindex 73827b309b..aa5e28e34d 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n@@ -80,7 +80,20 @@ public class TestWriteMetricsConfig {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestWriteMetricsConfig.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestWriteMetricsConfig.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestWriteMetricsConfig.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\nindex 1a4e2f3e1c..641cb855e9 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n@@ -65,7 +65,27 @@ public class TestAggregatePushDown extends SparkCatalogTestBase {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.sql.iceberg.aggregate_pushdown\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            
\n \n     SparkTestBase.catalog =\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\nindex f92055ab7a..95fde91f19 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n@@ -585,9 +585,7 @@ public class TestFilterPushDown extends SparkTestBaseWithCatalog {\n     String planAsString = sparkPlan.toString().replaceAll(\"#(\\\\d+L?)\", \"\");\n \n     if (sparkFilter != null) {\n-      assertThat(planAsString)\n-          .as(\"Post scan filter should match\")\n-          .contains(\"Filter (\" + sparkFilter + \")\");\n+      assertThat(planAsString).as(\"Post scan filter should match\").contains(\"CometFilter\");\n     } else {\n       assertThat(planAsString).as(\"Should be no post scan filter\").doesNotContain(\"Filter (\");\n     }\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\nindex 8a1ec5060f..3611521cc4 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\n@@ -605,7 +605,7 @@ public class TestStoragePartitionedJoins extends SparkTestBaseWithCatalog {\n             + \"FROM %s t1 \"\n             + \"INNER JOIN %s t2 \"\n             + \"ON t1.id = t2.id AND t1.%s = t2.%s \"\n-            + \"ORDER BY t1.id, t1.%s\",\n+            + \"ORDER BY t1.id, t1.%s, t1.salary\",\n         sourceColumnName,\n         tableName,\n         tableName(OTHER_TABLE_NAME),\ndiff --git a/spark/v3.5/build.gradle b/spark/v3.5/build.gradle\nindex e2d2c7a7ac..95cd3c08b5 100644\n--- a/spark/v3.5/build.gradle\n+++ b/spark/v3.5/build.gradle\n@@ -75,7 +75,7 @@ project(\":iceberg-spark:iceberg-spark-${sparkMajorVersion}_${scalaVersion}\") {\n       exclude group: 'org.roaringbitmap'\n     }\n \n-    compileOnly \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:0.5.0\"\n+    compileOnly \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     implementation libs.parquet.column\n     implementation libs.parquet.hadoop\n@@ -183,7 +183,7 @@ project(\":iceberg-spark:iceberg-spark-extensions-${sparkMajorVersion}_${scalaVer\n     testImplementation libs.avro.avro\n     testImplementation libs.parquet.hadoop\n     testImplementation libs.awaitility\n-    testImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:0.5.0\"\n+    testImplementation 
\"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     // Required because we remove antlr plugin dependencies from the compile configuration, see note above\n     runtimeOnly libs.antlr.runtime\n@@ -263,6 +263,7 @@ project(\":iceberg-spark:iceberg-spark-runtime-${sparkMajorVersion}_${scalaVersio\n     integrationImplementation project(path: ':iceberg-hive-metastore', configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-extensions-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n+    integrationImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     // runtime dependencies for running Hive Catalog based integration test\n     integrationRuntimeOnly project(':iceberg-hive-metastore')\ndiff --git a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\nindex 578845e3da..4f44a73db3 100644\n--- a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\n+++ b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\n@@ -57,6 +57,16 @@ public abstract class ExtensionsTestBase extends CatalogTestBase {\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n             .config(\n                 SQLConf.ADAPTIVE_EXECUTION_ENABLED().key(), String.valueOf(RANDOM.nextBoolean()))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\nindex ade19de36f..9111397e9c 100644\n--- a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n+++ b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n@@ -56,6 +56,16 @@ public class TestCallStatementParser {\n             .master(\"local[2]\")\n             .config(\"spark.sql.extensions\", IcebergSparkSessionExtensions.class.getName())\n             .config(\"spark.extra.prop\", \"value\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            
.config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestCallStatementParser.parser = spark.sessionState().sqlParser();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\nindex 64edb1002e..0fc10120ff 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n@@ -179,6 +179,16 @@ public class DeleteOrphanFilesBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", catalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local\");\n     spark = builder.getOrCreate();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\nindex a5d0456b0b..f0759f8378 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n@@ -392,6 +392,16 @@ public class IcebergSortCompactionBenchmark {\n                 \"spark.sql.catalog.spark_catalog\", \"org.apache.iceberg.spark.SparkSessionCatalog\")\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", getCatalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\");\n     spark = builder.getOrCreate();\n     
Configuration sparkHadoopConf = spark.sessionState().newHadoopConf();\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java\nindex c6794e43c6..457d2823ee 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java\n@@ -239,6 +239,16 @@ public class DVReaderBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", newWarehouseDir())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\")\n             .getOrCreate();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java\nindex ac74fb5a10..eab09293de 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java\n@@ -223,6 +223,16 @@ public class DVWriterBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", newWarehouseDir())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\")\n             .getOrCreate();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\nindex 68c537e34a..1e9e90d539 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n@@ -94,7 +94,19 @@ public abstract class IcebergSourceBenchmark {\n   }\n \n   protected void setupSpark(boolean enableDictionaryEncoding) {\n-    
SparkSession.Builder builder = SparkSession.builder().config(\"spark.ui.enabled\", false);\n+    SparkSession.Builder builder =\n+        SparkSession.builder()\n+            .config(\"spark.ui.enabled\", false)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\");\n     if (!enableDictionaryEncoding) {\n       builder\n           .config(\"parquet.dictionary.page.size\", \"1\")\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\ndeleted file mode 100644\nindex 4794863ab1..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\n+++ /dev/null\n@@ -1,150 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import java.util.Map;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.ColumnReader;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.comet.parquet.Utils;\n-import org.apache.comet.shaded.arrow.c.CometSchemaImporter;\n-import org.apache.comet.shaded.arrow.memory.RootAllocator;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.column.page.PageReadStore;\n-import org.apache.parquet.column.page.PageReader;\n-import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\n-import org.apache.parquet.hadoop.metadata.ColumnPath;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-\n-class CometColumnReader implements VectorizedReader<ColumnVector> {\n-  // use the Comet default batch size\n-  public static final int DEFAULT_BATCH_SIZE = 8192;\n-\n-  private final ColumnDescriptor descriptor;\n-  private final DataType sparkType;\n-\n-  // The delegated ColumnReader from Comet side\n-  private AbstractColumnReader delegate;\n-  private boolean initialized = false;\n-  private int batchSize = DEFAULT_BATCH_SIZE;\n-  private CometSchemaImporter importer;\n-\n-  CometColumnReader(DataType sparkType, ColumnDescriptor descriptor) {\n-    this.sparkType = sparkType;\n-    this.descriptor = descriptor;\n-  }\n-\n-  CometColumnReader(Types.NestedField field) {\n-    DataType dataType = SparkSchemaUtil.convert(field.type());\n-    StructField structField = new StructField(field.name(), dataType, false, Metadata.empty());\n-    this.sparkType = dataType;\n-    this.descriptor = TypeUtil.convertToParquet(structField);\n-  }\n-\n-  public AbstractColumnReader delegate() {\n-    return delegate;\n-  }\n-\n-  void setDelegate(AbstractColumnReader delegate) {\n-    this.delegate = delegate;\n-  }\n-\n-  void setInitialized(boolean initialized) {\n-    this.initialized = initialized;\n-  }\n-\n-  public int batchSize() {\n-    return batchSize;\n-  }\n-\n-  /**\n-   * This method is to initialized/reset the CometColumnReader. This needs to be called for each row\n-   * group after readNextRowGroup, so a new dictionary encoding can be set for each of the new row\n-   * groups.\n-   */\n-  public void reset() {\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-\n-    this.importer = new CometSchemaImporter(new RootAllocator());\n-    this.delegate = Utils.getColumnReader(sparkType, descriptor, importer, batchSize, false, false);\n-    this.initialized = true;\n-  }\n-\n-  public ColumnDescriptor descriptor() {\n-    return descriptor;\n-  }\n-\n-  /** Returns the Spark data type for this column. 
*/\n-  public DataType sparkType() {\n-    return sparkType;\n-  }\n-\n-  /**\n-   * Set the page reader to be 'pageReader'.\n-   *\n-   * <p>NOTE: this should be called before reading a new Parquet column chunk, and after {@link\n-   * CometColumnReader#reset} is called.\n-   */\n-  public void setPageReader(PageReader pageReader) throws IOException {\n-    Preconditions.checkState(initialized, \"Invalid state: 'reset' should be called first\");\n-    ((ColumnReader) delegate).setPageReader(pageReader);\n-  }\n-\n-  @Override\n-  public void close() {\n-    // close resources on native side\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-  }\n-\n-  @Override\n-  public void setBatchSize(int size) {\n-    this.batchSize = size;\n-  }\n-\n-  @Override\n-  public void setRowGroupInfo(\n-      PageReadStore pageReadStore, Map<ColumnPath, ColumnChunkMetaData> map, long size) {\n-    throw new UnsupportedOperationException(\"Not supported\");\n-  }\n-\n-  @Override\n-  public ColumnVector read(ColumnVector reuse, int numRowsToRead) {\n-    throw new UnsupportedOperationException(\"Not supported\");\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\ndeleted file mode 100644\nindex 1440e5d1d3..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\n+++ /dev/null\n@@ -1,203 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import java.io.UncheckedIOException;\n-import java.util.List;\n-import java.util.Map;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.BatchReader;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.util.Pair;\n-import org.apache.parquet.column.page.PageReadStore;\n-import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\n-import org.apache.parquet.hadoop.metadata.ColumnPath;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-import org.apache.spark.sql.vectorized.ColumnarBatch;\n-\n-/**\n- * {@link VectorizedReader} that returns Spark's {@link ColumnarBatch} to support Spark's vectorized\n- * read path. 
The {@link ColumnarBatch} returned is created by passing in the Arrow vectors\n- * populated via delegated read calls to {@link CometColumnReader VectorReader(s)}.\n- */\n-@SuppressWarnings(\"checkstyle:VisibilityModifier\")\n-class CometColumnarBatchReader implements VectorizedReader<ColumnarBatch> {\n-\n-  private final CometColumnReader[] readers;\n-  private final boolean hasIsDeletedColumn;\n-\n-  // The delegated BatchReader on the Comet side does the real work of loading a batch of rows.\n-  // The Comet BatchReader contains an array of ColumnReader. There is no need to explicitly call\n-  // ColumnReader.readBatch; instead, BatchReader.nextBatch will be called, which underneath calls\n-  // ColumnReader.readBatch. The only exception is DeleteColumnReader, because at the time of\n-  // calling BatchReader.nextBatch, the isDeleted value is not yet available, so\n-  // DeleteColumnReader.readBatch must be called explicitly later, after the isDeleted value is\n-  // available.\n-  private final BatchReader delegate;\n-  private DeleteFilter<InternalRow> deletes = null;\n-  private long rowStartPosInBatch = 0;\n-\n-  CometColumnarBatchReader(List<VectorizedReader<?>> readers, Schema schema) {\n-    this.readers =\n-        readers.stream().map(CometColumnReader.class::cast).toArray(CometColumnReader[]::new);\n-    this.hasIsDeletedColumn =\n-        readers.stream().anyMatch(reader -> reader instanceof CometDeleteColumnReader);\n-\n-    AbstractColumnReader[] abstractColumnReaders = new AbstractColumnReader[readers.size()];\n-    this.delegate = new BatchReader(abstractColumnReaders);\n-    delegate.setSparkSchema(SparkSchemaUtil.convert(schema));\n-  }\n-\n-  @Override\n-  public void setRowGroupInfo(\n-      PageReadStore pageStore, Map<ColumnPath, ColumnChunkMetaData> metaData, long rowPosition) {\n-    setRowGroupInfo(pageStore, metaData);\n-  }\n-\n-  @Override\n-  public void setRowGroupInfo(\n-      PageReadStore pageStore, Map<ColumnPath, ColumnChunkMetaData> metaData) {\n-    for (int i = 0; i < readers.length; i++) {\n-      try {\n-        if (!(readers[i] instanceof CometConstantColumnReader)\n-            && !(readers[i] instanceof CometPositionColumnReader)\n-            && !(readers[i] instanceof CometDeleteColumnReader)) {\n-          readers[i].reset();\n-          readers[i].setPageReader(pageStore.getPageReader(readers[i].descriptor()));\n-        }\n-      } catch (IOException e) {\n-        throw new UncheckedIOException(\"Failed to setRowGroupInfo for Comet vectorization\", e);\n-      }\n-    }\n-\n-    for (int i = 0; i < readers.length; i++) {\n-      delegate.getColumnReaders()[i] = this.readers[i].delegate();\n-    }\n-\n-    this.rowStartPosInBatch =\n-        pageStore\n-            .getRowIndexOffset()\n-            .orElseThrow(\n-                () ->\n-                    new IllegalArgumentException(\n-                        \"PageReadStore does not contain row index offset\"));\n-  }\n-\n-  public void setDeleteFilter(DeleteFilter<InternalRow> deleteFilter) {\n-    this.deletes = deleteFilter;\n-  }\n-\n-  @Override\n-  public final ColumnarBatch read(ColumnarBatch reuse, int numRowsToRead) {\n-    ColumnarBatch columnarBatch = new ColumnBatchLoader(numRowsToRead).loadDataToColumnBatch();\n-    rowStartPosInBatch += numRowsToRead;\n-    return columnarBatch;\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.setBatchSize(batchSize);\n-     
 }\n-    }\n-  }\n-\n-  @Override\n-  public void close() {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.close();\n-      }\n-    }\n-  }\n-\n-  private class ColumnBatchLoader {\n-    private final int batchSize;\n-\n-    ColumnBatchLoader(int numRowsToRead) {\n-      Preconditions.checkArgument(\n-          numRowsToRead > 0, \"Invalid number of rows to read: %s\", numRowsToRead);\n-      this.batchSize = numRowsToRead;\n-    }\n-\n-    ColumnarBatch loadDataToColumnBatch() {\n-      ColumnVector[] vectors = readDataToColumnVectors();\n-      int numLiveRows = batchSize;\n-\n-      if (hasIsDeletedColumn) {\n-        boolean[] isDeleted = buildIsDeleted(vectors);\n-        readDeletedColumn(vectors, isDeleted);\n-      } else {\n-        Pair<int[], Integer> pair = buildRowIdMapping(vectors);\n-        if (pair != null) {\n-          int[] rowIdMapping = pair.first();\n-          numLiveRows = pair.second();\n-          for (int i = 0; i < vectors.length; i++) {\n-            vectors[i] = new ColumnVectorWithFilter(vectors[i], rowIdMapping);\n-          }\n-        }\n-      }\n-\n-      if (deletes != null && deletes.hasEqDeletes()) {\n-        vectors = ColumnarBatchUtil.removeExtraColumns(deletes, vectors);\n-      }\n-\n-      ColumnarBatch batch = new ColumnarBatch(vectors);\n-      batch.setNumRows(numLiveRows);\n-      return batch;\n-    }\n-\n-    private boolean[] buildIsDeleted(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildIsDeleted(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    private Pair<int[], Integer> buildRowIdMapping(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildRowIdMapping(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    ColumnVector[] readDataToColumnVectors() {\n-      ColumnVector[] columnVectors = new ColumnVector[readers.length];\n-      // Fetch rows for all readers in the delegate\n-      delegate.nextBatch(batchSize);\n-      for (int i = 0; i < readers.length; i++) {\n-        columnVectors[i] = readers[i].delegate().currentBatch();\n-      }\n-\n-      return columnVectors;\n-    }\n-\n-    void readDeletedColumn(ColumnVector[] columnVectors, boolean[] isDeleted) {\n-      for (int i = 0; i < readers.length; i++) {\n-        if (readers[i] instanceof CometDeleteColumnReader) {\n-          CometDeleteColumnReader deleteColumnReader = new CometDeleteColumnReader<>(isDeleted);\n-          deleteColumnReader.setBatchSize(batchSize);\n-          deleteColumnReader.delegate().readBatch(batchSize);\n-          columnVectors[i] = deleteColumnReader.delegate().currentBatch();\n-        }\n-      }\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\ndeleted file mode 100644\nindex 047c96314b..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\n+++ /dev/null\n@@ -1,65 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  
You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.math.BigDecimal;\n-import java.nio.ByteBuffer;\n-import org.apache.comet.parquet.ConstantColumnReader;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Decimal;\n-import org.apache.spark.sql.types.DecimalType;\n-import org.apache.spark.unsafe.types.UTF8String;\n-\n-class CometConstantColumnReader<T> extends CometColumnReader {\n-\n-  CometConstantColumnReader(T value, Types.NestedField field) {\n-    super(field);\n-    // use delegate to set constant value on the native side to be consumed by native execution.\n-    setDelegate(\n-        new ConstantColumnReader(sparkType(), descriptor(), convertToSparkValue(value), false));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private Object convertToSparkValue(T value) {\n-    DataType dataType = sparkType();\n-    // Match the value to Spark internal type if necessary\n-    if (dataType == DataTypes.StringType && value instanceof String) {\n-      // the internal type for StringType is UTF8String\n-      return UTF8String.fromString((String) value);\n-    } else if (dataType instanceof DecimalType && value instanceof BigDecimal) {\n-      // the internal type for DecimalType is Decimal\n-      return Decimal.apply((BigDecimal) value);\n-    } else if (dataType == DataTypes.BinaryType && value instanceof ByteBuffer) {\n-      // the internal type for DecimalType is byte[]\n-      // Iceberg default value should always use HeapBufferBuffer, so calling ByteBuffer.array()\n-      // should be safe.\n-      return ((ByteBuffer) value).array();\n-    } else {\n-      return value;\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\ndeleted file mode 100644\nindex 6235bfe486..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\n+++ /dev/null\n@@ -1,75 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-\n-class CometDeleteColumnReader<T> extends CometColumnReader {\n-  CometDeleteColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new DeleteColumnReader());\n-  }\n-\n-  CometDeleteColumnReader(boolean[] isDeleted) {\n-    super(MetadataColumns.IS_DELETED);\n-    setDelegate(new DeleteColumnReader(isDeleted));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class DeleteColumnReader extends MetadataColumnReader {\n-    private boolean[] isDeleted;\n-\n-    DeleteColumnReader() {\n-      super(\n-          DataTypes.BooleanType,\n-          TypeUtil.convertToParquet(\n-              new StructField(\"_deleted\", DataTypes.BooleanType, false, Metadata.empty())),\n-          false /* useDecimal128 = false */,\n-          false /* isConstant = false */);\n-      this.isDeleted = new boolean[0];\n-    }\n-\n-    DeleteColumnReader(boolean[] isDeleted) {\n-      this();\n-      this.isDeleted = isDeleted;\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set isDeleted on the native side to be consumed by native execution\n-      Native.setIsDeleted(nativeHandle, isDeleted);\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\ndeleted file mode 100644\nindex bcc0e514c2..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\n+++ /dev/null\n@@ -1,62 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.spark.sql.types.DataTypes;\n-\n-class CometPositionColumnReader extends CometColumnReader {\n-  CometPositionColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new PositionColumnReader(descriptor()));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class PositionColumnReader extends MetadataColumnReader {\n-    /** The current position value of the column that are used to initialize this column reader. */\n-    private long position;\n-\n-    PositionColumnReader(ColumnDescriptor descriptor) {\n-      super(\n-          DataTypes.LongType,\n-          descriptor,\n-          false /* useDecimal128 = false */,\n-          false /* isConstant = false */);\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set position on the native side to be consumed by native execution\n-      Native.setPosition(nativeHandle, position, total);\n-      position += total;\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\ndeleted file mode 100644\nindex d36f1a7274..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\n+++ /dev/null\n@@ -1,147 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.util.List;\n-import java.util.Map;\n-import java.util.function.Function;\n-import java.util.stream.IntStream;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.TypeWithSchemaVisitor;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;\n-import org.apache.iceberg.relocated.com.google.common.collect.Lists;\n-import org.apache.iceberg.relocated.com.google.common.collect.Maps;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.schema.GroupType;\n-import org.apache.parquet.schema.MessageType;\n-import org.apache.parquet.schema.PrimitiveType;\n-import org.apache.parquet.schema.Type;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-\n-class CometVectorizedReaderBuilder extends TypeWithSchemaVisitor<VectorizedReader<?>> {\n-\n-  private final MessageType parquetSchema;\n-  private final Schema icebergSchema;\n-  private final Map<Integer, ?> idToConstant;\n-  private final Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory;\n-  private final DeleteFilter<InternalRow> deleteFilter;\n-\n-  CometVectorizedReaderBuilder(\n-      Schema expectedSchema,\n-      MessageType parquetSchema,\n-      Map<Integer, ?> idToConstant,\n-      Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    this.parquetSchema = parquetSchema;\n-    this.icebergSchema = expectedSchema;\n-    this.idToConstant = idToConstant;\n-    this.readerFactory = readerFactory;\n-    this.deleteFilter = deleteFilter;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> message(\n-      Types.StructType expected, MessageType message, List<VectorizedReader<?>> fieldReaders) {\n-    GroupType groupType = message.asGroupType();\n-    Map<Integer, VectorizedReader<?>> readersById = Maps.newHashMap();\n-    List<Type> fields = groupType.getFields();\n-\n-    IntStream.range(0, fields.size())\n-        .filter(pos -> fields.get(pos).getId() != null)\n-        .forEach(pos -> readersById.put(fields.get(pos).getId().intValue(), fieldReaders.get(pos)));\n-\n-    List<Types.NestedField> icebergFields =\n-        expected != null ? 
expected.fields() : ImmutableList.of();\n-\n-    List<VectorizedReader<?>> reorderedFields =\n-        Lists.newArrayListWithExpectedSize(icebergFields.size());\n-\n-    for (Types.NestedField field : icebergFields) {\n-      int id = field.fieldId();\n-      VectorizedReader<?> reader = readersById.get(id);\n-      if (idToConstant.containsKey(id)) {\n-        CometConstantColumnReader constantReader =\n-            new CometConstantColumnReader<>(idToConstant.get(id), field);\n-        reorderedFields.add(constantReader);\n-      } else if (id == MetadataColumns.ROW_POSITION.fieldId()) {\n-        reorderedFields.add(new CometPositionColumnReader(field));\n-      } else if (id == MetadataColumns.IS_DELETED.fieldId()) {\n-        CometColumnReader deleteReader = new CometDeleteColumnReader<>(field);\n-        reorderedFields.add(deleteReader);\n-      } else if (reader != null) {\n-        reorderedFields.add(reader);\n-      } else if (field.initialDefault() != null) {\n-        CometColumnReader constantReader =\n-            new CometConstantColumnReader<>(field.initialDefault(), field);\n-        reorderedFields.add(constantReader);\n-      } else if (field.isOptional()) {\n-        CometColumnReader constantReader = new CometConstantColumnReader<>(null, field);\n-        reorderedFields.add(constantReader);\n-      } else {\n-        throw new IllegalArgumentException(\n-            String.format(\"Missing required field: %s\", field.name()));\n-      }\n-    }\n-    return vectorizedReader(reorderedFields);\n-  }\n-\n-  protected VectorizedReader<?> vectorizedReader(List<VectorizedReader<?>> reorderedFields) {\n-    VectorizedReader<?> reader = readerFactory.apply(reorderedFields);\n-    if (deleteFilter != null) {\n-      ((CometColumnarBatchReader) reader).setDeleteFilter(deleteFilter);\n-    }\n-    return reader;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> struct(\n-      Types.StructType expected, GroupType groupType, List<VectorizedReader<?>> fieldReaders) {\n-    if (expected != null) {\n-      throw new UnsupportedOperationException(\n-          \"Vectorized reads are not supported yet for struct fields\");\n-    }\n-    return null;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> primitive(\n-      org.apache.iceberg.types.Type.PrimitiveType expected, PrimitiveType primitive) {\n-\n-    if (primitive.getId() == null) {\n-      return null;\n-    }\n-    int parquetFieldId = primitive.getId().intValue();\n-    ColumnDescriptor desc = parquetSchema.getColumnDescription(currentPath());\n-    // Nested types not yet supported for vectorized reads\n-    if (desc.getMaxRepetitionLevel() > 0) {\n-      return null;\n-    }\n-    Types.NestedField icebergField = icebergSchema.findField(parquetFieldId);\n-    if (icebergField == null) {\n-      return null;\n-    }\n-\n-    return new CometColumnReader(SparkSchemaUtil.convert(icebergField.type()), desc);\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\nindex b523bc5bff..636ad3be7d 100644\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n+++ b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n@@ -70,23 +70,6 @@ public class VectorizedSparkParquetReaders {\n                 deleteFilter));\n   }\n \n-  public 
static CometColumnarBatchReader buildCometReader(\n-      Schema expectedSchema,\n-      MessageType fileSchema,\n-      Map<Integer, ?> idToConstant,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    return (CometColumnarBatchReader)\n-        TypeWithSchemaVisitor.visit(\n-            expectedSchema.asStruct(),\n-            fileSchema,\n-            new CometVectorizedReaderBuilder(\n-                expectedSchema,\n-                fileSchema,\n-                idToConstant,\n-                readers -> new CometColumnarBatchReader(readers, expectedSchema),\n-                deleteFilter));\n-  }\n-\n   // enables unsafe memory access to avoid costly checks to see if index is within bounds\n   // as long as it is not configured explicitly (see BoundsChecking in Arrow)\n   private static void enableUnsafeMemoryAccess() {\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\nindex 780e1750a5..25f253eede 100644\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n+++ b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n@@ -34,7 +34,6 @@ import org.apache.iceberg.parquet.Parquet;\n import org.apache.iceberg.relocated.com.google.common.collect.Sets;\n import org.apache.iceberg.spark.OrcBatchReadConf;\n import org.apache.iceberg.spark.ParquetBatchReadConf;\n-import org.apache.iceberg.spark.ParquetReaderType;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkOrcReaders;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkParquetReaders;\n import org.apache.iceberg.types.TypeUtil;\n@@ -92,15 +91,9 @@ abstract class BaseBatchReader<T extends ScanTask> extends BaseReader<ColumnarBa\n         .project(requiredSchema)\n         .split(start, length)\n         .createBatchedReaderFunc(\n-            fileSchema -> {\n-              if (parquetConf.readerType() == ParquetReaderType.COMET) {\n-                return VectorizedSparkParquetReaders.buildCometReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              } else {\n-                return VectorizedSparkParquetReaders.buildReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              }\n-            })\n+            fileSchema ->\n+                VectorizedSparkParquetReaders.buildReader(\n+                    requiredSchema, fileSchema, idToConstant, deleteFilter))\n         .recordsPerBatch(parquetConf.batchSize())\n         .filter(residual)\n         .caseSensitive(caseSensitive())\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\nindex 404ba72846..00e97e96f9 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n@@ -90,6 +90,16 @@ public abstract class SparkDistributedDataScanTestBase\n         .master(\"local[2]\")\n         .config(\"spark.serializer\", serializer)\n         .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+        .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+        .config(\n+            \"spark.shuffle.manager\",\n+            
\"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+        .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+        .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.size\", \"10g\")\n+        .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+        .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n         .getOrCreate();\n   }\n }\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\nindex 659507e4c5..9076ec24d1 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n@@ -73,6 +73,16 @@ public class TestSparkDistributedDataScanDeletes\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\nindex a218f965ea..eca0125ace 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n@@ -62,6 +62,16 @@ public class TestSparkDistributedDataScanFilterFiles\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\nindex 2665d7ba8d..2381d0aa14 100644\n--- 
a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n@@ -63,6 +63,16 @@ public class TestSparkDistributedDataScanReporting\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\nindex de68351f6e..2b5325e266 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\n@@ -77,6 +77,17 @@ public abstract class TestBase extends SparkTestHelperBase {\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .config(\"spark.comet.exec.broadcastExchange.enabled\", \"false\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/parquet/vectorized/TestParquetDictionaryEncodedVectorizedReads.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/parquet/vectorized/TestParquetDictionaryEncodedVectorizedReads.java\nindex bc4e722bc8..3bbff053fd 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/parquet/vectorized/TestParquetDictionaryEncodedVectorizedReads.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/parquet/vectorized/TestParquetDictionaryEncodedVectorizedReads.java\n@@ -59,7 +59,20 @@ public class TestParquetDictionaryEncodedVectorizedReads extends TestParquetVect\n \n   @BeforeAll\n   public static void startSpark() {\n-    spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    spark =\n+        SparkSession.builder()\n+            
.master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\nindex 3a269740b7..42ecd7ac7b 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n@@ -54,7 +54,20 @@ public abstract class ScanTestBase extends AvroDataTest {\n \n   @BeforeAll\n   public static void startSpark() {\n-    ScanTestBase.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    ScanTestBase.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     ScanTestBase.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\nindex f411920a5d..afb7830aaa 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n@@ -144,7 +144,20 @@ public class TestCompressionSettings extends CatalogTestBase {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestCompressionSettings.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestCompressionSettings.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            
.config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @BeforeEach\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\nindex c4ba96e634..98398968b0 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n@@ -75,7 +75,20 @@ public class TestDataSourceOptions extends TestBaseWithCatalog {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestDataSourceOptions.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestDataSourceOptions.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\nindex 348173596e..9017b4a37a 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n@@ -110,7 +110,20 @@ public class TestFilteredScan {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestFilteredScan.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestFilteredScan.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\nindex 84c99a575c..1d60b01df5 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n@@ -93,7 +93,20 @@ public class 
TestForwardCompatibility {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestForwardCompatibility.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestForwardCompatibility.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\nindex 7eff93d204..7aab7f9682 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n@@ -46,7 +46,20 @@ public class TestIcebergSpark {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestIcebergSpark.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestIcebergSpark.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\nindex 9464f687b0..ca4e3e0353 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n@@ -112,7 +112,20 @@ public class TestPartitionPruning {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestPartitionPruning.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestPartitionPruning.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            
.config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestPartitionPruning.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n \n     String optionKey = String.format(\"fs.%s.impl\", CountOpenLocalFileSystem.scheme);\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\nindex 5c218f21c4..1d22228217 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n@@ -107,7 +107,20 @@ public class TestPartitionValues {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestPartitionValues.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestPartitionValues.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\nindex a7334a580c..ecd25670fe 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n@@ -87,7 +87,20 @@ public class TestSnapshotSelection {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestSnapshotSelection.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSnapshotSelection.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java 
b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\nindex 182b1ef8f5..205892e4a6 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n@@ -120,7 +120,20 @@ public class TestSparkDataFile {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestSparkDataFile.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkDataFile.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestSparkDataFile.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\nindex fb2b312bed..3fca3330d0 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n@@ -96,7 +96,20 @@ public class TestSparkDataWrite {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestSparkDataWrite.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkDataWrite.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterEach\n@@ -140,7 +153,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n     for (ManifestFile manifest :\n@@ -210,7 +223,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     
List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n   }\n@@ -256,7 +269,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n   }\n@@ -309,7 +322,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n   }\n@@ -352,7 +365,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n   }\n@@ -392,7 +405,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n \n@@ -458,7 +471,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n   }\n@@ -622,7 +635,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should 
match\").isEqualTo(expected);\n \n@@ -708,7 +721,7 @@ public class TestSparkDataWrite {\n     // Since write and commit succeeded, the rows should be readable\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSize(records.size() + records2.size());\n     assertThat(actual)\n         .describedAs(\"Result rows should match\")\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\nindex becf6a064d..21701450a8 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n@@ -83,7 +83,20 @@ public class TestSparkReadProjection extends TestReadProjection {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestSparkReadProjection.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkReadProjection.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     ImmutableMap<String, String> config =\n         ImmutableMap.of(\n             \"type\", \"hive\",\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\nindex 4f1cef5d37..f0f4277324 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n@@ -136,6 +136,16 @@ public class TestSparkReaderDeletes extends DeleteReadTests {\n             .config(\"spark.ui.liveUpdate.period\", 0)\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+     
       .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \n@@ -204,7 +214,8 @@ public class TestSparkReaderDeletes extends DeleteReadTests {\n   }\n \n   protected boolean countDeletes() {\n-    return true;\n+    // TODO: Enable once iceberg-rust exposes delete count metrics to Comet\n+    return false;\n   }\n \n   @Override\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\nindex baf7fa8f88..150cc89696 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n@@ -182,6 +182,16 @@ public class TestSparkReaderWithBloomFilter {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\nindex 17db46b85c..d7e91b4eef 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n@@ -65,6 +65,16 @@ public class TestStructuredStreaming {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.sql.shuffle.partitions\", 4)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\nindex 306444b9f2..b661aa8ced 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n+++ 
b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n@@ -75,7 +75,20 @@ public class TestTimestampWithoutZone extends TestBase {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestTimestampWithoutZone.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestTimestampWithoutZone.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\nindex 841268a6be..2fcc92bcb6 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n@@ -80,7 +80,20 @@ public class TestWriteMetricsConfig {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestWriteMetricsConfig.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestWriteMetricsConfig.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestWriteMetricsConfig.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\nindex 6e09252704..e0d7431808 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n@@ -60,6 +60,16 @@ public class TestAggregatePushDown extends CatalogTestBase {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.sql.iceberg.aggregate_pushdown\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+         
   .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\nindex 9d2ce2b388..5e23368848 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n@@ -598,9 +598,7 @@ public class TestFilterPushDown extends TestBaseWithCatalog {\n     String planAsString = sparkPlan.toString().replaceAll(\"#(\\\\d+L?)\", \"\");\n \n     if (sparkFilter != null) {\n-      assertThat(planAsString)\n-          .as(\"Post scan filter should match\")\n-          .contains(\"Filter (\" + sparkFilter + \")\");\n+      assertThat(planAsString).as(\"Post scan filter should match\").contains(\"CometFilter\");\n     } else {\n       assertThat(planAsString).as(\"Should be no post scan filter\").doesNotContain(\"Filter (\");\n     }\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\nindex 6719c45ca9..2515454401 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\n@@ -616,7 +616,7 @@ public class TestStoragePartitionedJoins extends TestBaseWithCatalog {\n             + \"FROM %s t1 \"\n             + \"INNER JOIN %s t2 \"\n             + \"ON t1.id = t2.id AND t1.%s = t2.%s \"\n-            + \"ORDER BY t1.id, t1.%s\",\n+            + \"ORDER BY t1.id, t1.%s, t1.salary\",\n         sourceColumnName,\n         tableName,\n         tableName(OTHER_TABLE_NAME),\n"
  },
  {
    "path": "dev/diffs/iceberg/1.9.1.diff",
    "content": "diff --git a/gradle/libs.versions.toml b/gradle/libs.versions.toml\nindex c50991c5fc..c3bd841c95 100644\n--- a/gradle/libs.versions.toml\n+++ b/gradle/libs.versions.toml\n@@ -36,6 +36,7 @@ awssdk-s3accessgrants = \"2.3.0\"\n bson-ver = \"4.11.5\"\n caffeine = \"2.9.3\"\n calcite = \"1.39.0\"\n+comet = \"0.15.0-SNAPSHOT\"\n datasketches = \"6.2.0\"\n delta-standalone = \"3.3.1\"\n delta-spark = \"3.3.1\"\ndiff --git a/spark/v3.4/build.gradle b/spark/v3.4/build.gradle\nindex 2cd8666892..4077116abb 100644\n--- a/spark/v3.4/build.gradle\n+++ b/spark/v3.4/build.gradle\n@@ -75,7 +75,7 @@ project(\":iceberg-spark:iceberg-spark-${sparkMajorVersion}_${scalaVersion}\") {\n       exclude group: 'org.roaringbitmap'\n     }\n \n-    compileOnly \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:0.5.0\"\n+    compileOnly \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     implementation libs.parquet.column\n     implementation libs.parquet.hadoop\n@@ -186,7 +186,7 @@ project(\":iceberg-spark:iceberg-spark-extensions-${sparkMajorVersion}_${scalaVer\n     testImplementation libs.parquet.hadoop\n     testImplementation libs.awaitility\n     testImplementation libs.junit.vintage.engine\n-    testImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:0.5.0\"\n+    testImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     // Required because we remove antlr plugin dependencies from the compile configuration, see note above\n     runtimeOnly libs.antlr.runtime\n@@ -267,6 +267,7 @@ project(\":iceberg-spark:iceberg-spark-runtime-${sparkMajorVersion}_${scalaVersio\n     integrationImplementation project(path: ':iceberg-hive-metastore', configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-extensions-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n+    integrationImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     // runtime dependencies for running Hive Catalog based integration test\n     integrationRuntimeOnly project(':iceberg-hive-metastore')\ndiff --git a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\nindex 578845e3da..fd93d91bc3 100644\n--- a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\n+++ b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\n@@ -57,7 +57,27 @@ public abstract class ExtensionsTestBase extends CatalogTestBase {\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n             .config(\n                 SQLConf.ADAPTIVE_EXECUTION_ENABLED().key(), String.valueOf(RANDOM.nextBoolean()))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            
.config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n \n     TestBase.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\ndiff --git a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\nindex 58d054bd05..4ead587cc1 100644\n--- a/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n+++ b/spark/v3.4/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n@@ -56,6 +56,16 @@ public class TestCallStatementParser {\n             .master(\"local[2]\")\n             .config(\"spark.sql.extensions\", IcebergSparkSessionExtensions.class.getName())\n             .config(\"spark.extra.prop\", \"value\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestCallStatementParser.parser = spark.sessionState().sqlParser();\n   }\ndiff --git a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\nindex b6ade2bff3..c12127d618 100644\n--- a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n+++ b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n@@ -170,6 +170,16 @@ public class DeleteOrphanFilesBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", catalogWarehouse())\n+            .config(\"spark.plugins\", 
\"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local\");\n     spark = builder.getOrCreate();\n   }\ndiff --git a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\nindex b08c352819..4ffee26156 100644\n--- a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n+++ b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n@@ -386,6 +386,16 @@ public class IcebergSortCompactionBenchmark {\n                 \"spark.sql.catalog.spark_catalog\", \"org.apache.iceberg.spark.SparkSessionCatalog\")\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", getCatalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\");\n     spark = builder.getOrCreate();\n     Configuration sparkHadoopConf = spark.sessionState().newHadoopConf();\ndiff --git a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\nindex 68c537e34a..1e9e90d539 100644\n--- a/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n+++ b/spark/v3.4/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n@@ -94,7 +94,19 @@ public abstract class IcebergSourceBenchmark {\n   }\n \n   protected void setupSpark(boolean enableDictionaryEncoding) {\n-    SparkSession.Builder builder = SparkSession.builder().config(\"spark.ui.enabled\", false);\n+    SparkSession.Builder builder =\n+        SparkSession.builder()\n+            .config(\"spark.ui.enabled\", false)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            
.config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\");\n     if (!enableDictionaryEncoding) {\n       builder\n           .config(\"parquet.dictionary.page.size\", \"1\")\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\ndeleted file mode 100644\nindex 16159dcbdf..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\n+++ /dev/null\n@@ -1,140 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.ColumnReader;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.comet.parquet.Utils;\n-import org.apache.comet.shaded.arrow.c.CometSchemaImporter;\n-import org.apache.comet.shaded.arrow.memory.RootAllocator;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.column.page.PageReader;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-\n-class CometColumnReader implements VectorizedReader<ColumnVector> {\n-  // use the Comet default batch size\n-  public static final int DEFAULT_BATCH_SIZE = 8192;\n-\n-  private final ColumnDescriptor descriptor;\n-  private final DataType sparkType;\n-\n-  // The delegated ColumnReader from Comet side\n-  private AbstractColumnReader delegate;\n-  private boolean initialized = false;\n-  private int batchSize = DEFAULT_BATCH_SIZE;\n-  private CometSchemaImporter importer;\n-\n-  CometColumnReader(DataType sparkType, ColumnDescriptor descriptor) {\n-    this.sparkType = sparkType;\n-    this.descriptor = descriptor;\n-  }\n-\n-  CometColumnReader(Types.NestedField field) {\n-    DataType dataType = SparkSchemaUtil.convert(field.type());\n-    StructField structField = new StructField(field.name(), dataType, false, Metadata.empty());\n-    this.sparkType = dataType;\n-    this.descriptor = TypeUtil.convertToParquet(structField);\n-  }\n-\n-  public AbstractColumnReader delegate() {\n-    return delegate;\n-  }\n-\n-  void setDelegate(AbstractColumnReader 
delegate) {\n-    this.delegate = delegate;\n-  }\n-\n-  void setInitialized(boolean initialized) {\n-    this.initialized = initialized;\n-  }\n-\n-  public int batchSize() {\n-    return batchSize;\n-  }\n-\n-  /**\n-   * This method is to initialized/reset the CometColumnReader. This needs to be called for each row\n-   * group after readNextRowGroup, so a new dictionary encoding can be set for each of the new row\n-   * groups.\n-   */\n-  public void reset() {\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-\n-    this.importer = new CometSchemaImporter(new RootAllocator());\n-    this.delegate = Utils.getColumnReader(sparkType, descriptor, importer, batchSize, false, false);\n-    this.initialized = true;\n-  }\n-\n-  public ColumnDescriptor descriptor() {\n-    return descriptor;\n-  }\n-\n-  /** Returns the Spark data type for this column. */\n-  public DataType sparkType() {\n-    return sparkType;\n-  }\n-\n-  /**\n-   * Set the page reader to be 'pageReader'.\n-   *\n-   * <p>NOTE: this should be called before reading a new Parquet column chunk, and after {@link\n-   * CometColumnReader#reset} is called.\n-   */\n-  public void setPageReader(PageReader pageReader) throws IOException {\n-    Preconditions.checkState(initialized, \"Invalid state: 'reset' should be called first\");\n-    ((ColumnReader) delegate).setPageReader(pageReader);\n-  }\n-\n-  @Override\n-  public void close() {\n-    // close resources on native side\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-  }\n-\n-  @Override\n-  public void setBatchSize(int size) {\n-    this.batchSize = size;\n-  }\n-\n-  @Override\n-  public ColumnVector read(ColumnVector reuse, int numRowsToRead) {\n-    throw new UnsupportedOperationException(\"Not supported\");\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\ndeleted file mode 100644\nindex 04ac69476a..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\n+++ /dev/null\n@@ -1,197 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import java.io.UncheckedIOException;\n-import java.util.List;\n-import java.util.Map;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.BatchReader;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.util.Pair;\n-import org.apache.parquet.column.page.PageReadStore;\n-import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\n-import org.apache.parquet.hadoop.metadata.ColumnPath;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-import org.apache.spark.sql.vectorized.ColumnarBatch;\n-\n-/**\n- * {@link VectorizedReader} that returns Spark's {@link ColumnarBatch} to support Spark's vectorized\n- * read path. The {@link ColumnarBatch} returned is created by passing in the Arrow vectors\n- * populated via delegated read calls to {@link CometColumnReader VectorReader(s)}.\n- */\n-@SuppressWarnings(\"checkstyle:VisibilityModifier\")\n-class CometColumnarBatchReader implements VectorizedReader<ColumnarBatch> {\n-\n-  private final CometColumnReader[] readers;\n-  private final boolean hasIsDeletedColumn;\n-\n-  // The delegated BatchReader on the Comet side does the real work of loading a batch of rows.\n-  // The Comet BatchReader contains an array of ColumnReader. There is no need to explicitly call\n-  // ColumnReader.readBatch; instead, BatchReader.nextBatch will be called, which underneath calls\n-  // ColumnReader.readBatch. 
The only exception is DeleteColumnReader, because at the time of\n-  // calling BatchReader.nextBatch, the isDeleted value is not yet available, so\n-  // DeleteColumnReader.readBatch must be called explicitly later, after the isDeleted value is\n-  // available.\n-  private final BatchReader delegate;\n-  private DeleteFilter<InternalRow> deletes = null;\n-  private long rowStartPosInBatch = 0;\n-\n-  CometColumnarBatchReader(List<VectorizedReader<?>> readers, Schema schema) {\n-    this.readers =\n-        readers.stream().map(CometColumnReader.class::cast).toArray(CometColumnReader[]::new);\n-    this.hasIsDeletedColumn =\n-        readers.stream().anyMatch(reader -> reader instanceof CometDeleteColumnReader);\n-\n-    AbstractColumnReader[] abstractColumnReaders = new AbstractColumnReader[readers.size()];\n-    this.delegate = new BatchReader(abstractColumnReaders);\n-    delegate.setSparkSchema(SparkSchemaUtil.convert(schema));\n-  }\n-\n-  @Override\n-  public void setRowGroupInfo(\n-      PageReadStore pageStore, Map<ColumnPath, ColumnChunkMetaData> metaData) {\n-    for (int i = 0; i < readers.length; i++) {\n-      try {\n-        if (!(readers[i] instanceof CometConstantColumnReader)\n-            && !(readers[i] instanceof CometPositionColumnReader)\n-            && !(readers[i] instanceof CometDeleteColumnReader)) {\n-          readers[i].reset();\n-          readers[i].setPageReader(pageStore.getPageReader(readers[i].descriptor()));\n-        }\n-      } catch (IOException e) {\n-        throw new UncheckedIOException(\"Failed to setRowGroupInfo for Comet vectorization\", e);\n-      }\n-    }\n-\n-    for (int i = 0; i < readers.length; i++) {\n-      delegate.getColumnReaders()[i] = this.readers[i].delegate();\n-    }\n-\n-    this.rowStartPosInBatch =\n-        pageStore\n-            .getRowIndexOffset()\n-            .orElseThrow(\n-                () ->\n-                    new IllegalArgumentException(\n-                        \"PageReadStore does not contain row index offset\"));\n-  }\n-\n-  public void setDeleteFilter(DeleteFilter<InternalRow> deleteFilter) {\n-    this.deletes = deleteFilter;\n-  }\n-\n-  @Override\n-  public final ColumnarBatch read(ColumnarBatch reuse, int numRowsToRead) {\n-    ColumnarBatch columnarBatch = new ColumnBatchLoader(numRowsToRead).loadDataToColumnBatch();\n-    rowStartPosInBatch += numRowsToRead;\n-    return columnarBatch;\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.setBatchSize(batchSize);\n-      }\n-    }\n-  }\n-\n-  @Override\n-  public void close() {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.close();\n-      }\n-    }\n-  }\n-\n-  private class ColumnBatchLoader {\n-    private final int batchSize;\n-\n-    ColumnBatchLoader(int numRowsToRead) {\n-      Preconditions.checkArgument(\n-          numRowsToRead > 0, \"Invalid number of rows to read: %s\", numRowsToRead);\n-      this.batchSize = numRowsToRead;\n-    }\n-\n-    ColumnarBatch loadDataToColumnBatch() {\n-      ColumnVector[] vectors = readDataToColumnVectors();\n-      int numLiveRows = batchSize;\n-\n-      if (hasIsDeletedColumn) {\n-        boolean[] isDeleted = buildIsDeleted(vectors);\n-        readDeletedColumn(vectors, isDeleted);\n-      } else {\n-        Pair<int[], Integer> pair = buildRowIdMapping(vectors);\n-        if (pair != null) {\n-          int[] rowIdMapping = 
pair.first();\n-          numLiveRows = pair.second();\n-          for (int i = 0; i < vectors.length; i++) {\n-            vectors[i] = new ColumnVectorWithFilter(vectors[i], rowIdMapping);\n-          }\n-        }\n-      }\n-\n-      if (deletes != null && deletes.hasEqDeletes()) {\n-        vectors = ColumnarBatchUtil.removeExtraColumns(deletes, vectors);\n-      }\n-\n-      ColumnarBatch batch = new ColumnarBatch(vectors);\n-      batch.setNumRows(numLiveRows);\n-      return batch;\n-    }\n-\n-    private boolean[] buildIsDeleted(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildIsDeleted(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    private Pair<int[], Integer> buildRowIdMapping(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildRowIdMapping(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    ColumnVector[] readDataToColumnVectors() {\n-      ColumnVector[] columnVectors = new ColumnVector[readers.length];\n-      // Fetch rows for all readers in the delegate\n-      delegate.nextBatch(batchSize);\n-      for (int i = 0; i < readers.length; i++) {\n-        columnVectors[i] = readers[i].delegate().currentBatch();\n-      }\n-\n-      return columnVectors;\n-    }\n-\n-    void readDeletedColumn(ColumnVector[] columnVectors, boolean[] isDeleted) {\n-      for (int i = 0; i < readers.length; i++) {\n-        if (readers[i] instanceof CometDeleteColumnReader) {\n-          CometDeleteColumnReader deleteColumnReader = new CometDeleteColumnReader<>(isDeleted);\n-          deleteColumnReader.setBatchSize(batchSize);\n-          deleteColumnReader.delegate().readBatch(batchSize);\n-          columnVectors[i] = deleteColumnReader.delegate().currentBatch();\n-        }\n-      }\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\ndeleted file mode 100644\nindex c665002e8f..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\n+++ /dev/null\n@@ -1,65 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.math.BigDecimal;\n-import java.nio.ByteBuffer;\n-import org.apache.comet.parquet.ConstantColumnReader;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Decimal;\n-import org.apache.spark.sql.types.DecimalType;\n-import org.apache.spark.unsafe.types.UTF8String;\n-\n-class CometConstantColumnReader<T> extends CometColumnReader {\n-\n-  CometConstantColumnReader(T value, Types.NestedField field) {\n-    super(field);\n-    // use delegate to set constant value on the native side to be consumed by native execution.\n-    setDelegate(\n-        new ConstantColumnReader(sparkType(), descriptor(), convertToSparkValue(value), false));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private Object convertToSparkValue(T value) {\n-    DataType dataType = sparkType();\n-    // Match the value to Spark internal type if necessary\n-    if (dataType == DataTypes.StringType && value instanceof String) {\n-      // the internal type for StringType is UTF8String\n-      return UTF8String.fromString((String) value);\n-    } else if (dataType instanceof DecimalType && value instanceof BigDecimal) {\n-      // the internal type for DecimalType is Decimal\n-      return Decimal.apply((BigDecimal) value);\n-    } else if (dataType == DataTypes.BinaryType && value instanceof ByteBuffer) {\n-      // the internal type for BinaryType is byte[]\n-      // Iceberg default value should always use HeapByteBuffer, so calling ByteBuffer.array()\n-      // should be safe.\n-      return ((java.nio.ByteBuffer) value).array();\n-    } else {\n-      return value;\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\ndeleted file mode 100644\nindex 4a28fc51da..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\n+++ /dev/null\n@@ -1,75 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-\n-class CometDeleteColumnReader<T> extends CometColumnReader {\n-  CometDeleteColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new DeleteColumnReader());\n-  }\n-\n-  CometDeleteColumnReader(boolean[] isDeleted) {\n-    super(MetadataColumns.IS_DELETED);\n-    setDelegate(new DeleteColumnReader(isDeleted));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class DeleteColumnReader extends MetadataColumnReader {\n-    private boolean[] isDeleted;\n-\n-    DeleteColumnReader() {\n-      super(\n-          DataTypes.BooleanType,\n-          TypeUtil.convertToParquet(\n-              new StructField(\"_deleted\", DataTypes.BooleanType, false, Metadata.empty())),\n-          false /* useDecimal128 = false */,\n-          false /* isConstant */);\n-      this.isDeleted = new boolean[0];\n-    }\n-\n-    DeleteColumnReader(boolean[] isDeleted) {\n-      this();\n-      this.isDeleted = isDeleted;\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set isDeleted on the native side to be consumed by native execution\n-      Native.setIsDeleted(nativeHandle, isDeleted);\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\ndeleted file mode 100644\nindex 1949a71798..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\n+++ /dev/null\n@@ -1,62 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.spark.sql.types.DataTypes;\n-\n-class CometPositionColumnReader extends CometColumnReader {\n-  CometPositionColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new PositionColumnReader(descriptor()));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class PositionColumnReader extends MetadataColumnReader {\n-    /** The current position value of the column that is used to initialize this column reader. */\n-    private long position;\n-\n-    PositionColumnReader(ColumnDescriptor descriptor) {\n-      super(\n-          DataTypes.LongType,\n-          descriptor,\n-          false /* useDecimal128 = false */,\n-          false /* isConstant */);\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set position on the native side to be consumed by native execution\n-      Native.setPosition(nativeHandle, position, total);\n-      position += total;\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\ndeleted file mode 100644\nindex d36f1a7274..0000000000\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\n+++ /dev/null\n@@ -1,147 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.util.List;\n-import java.util.Map;\n-import java.util.function.Function;\n-import java.util.stream.IntStream;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.TypeWithSchemaVisitor;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;\n-import org.apache.iceberg.relocated.com.google.common.collect.Lists;\n-import org.apache.iceberg.relocated.com.google.common.collect.Maps;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.schema.GroupType;\n-import org.apache.parquet.schema.MessageType;\n-import org.apache.parquet.schema.PrimitiveType;\n-import org.apache.parquet.schema.Type;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-\n-class CometVectorizedReaderBuilder extends TypeWithSchemaVisitor<VectorizedReader<?>> {\n-\n-  private final MessageType parquetSchema;\n-  private final Schema icebergSchema;\n-  private final Map<Integer, ?> idToConstant;\n-  private final Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory;\n-  private final DeleteFilter<InternalRow> deleteFilter;\n-\n-  CometVectorizedReaderBuilder(\n-      Schema expectedSchema,\n-      MessageType parquetSchema,\n-      Map<Integer, ?> idToConstant,\n-      Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    this.parquetSchema = parquetSchema;\n-    this.icebergSchema = expectedSchema;\n-    this.idToConstant = idToConstant;\n-    this.readerFactory = readerFactory;\n-    this.deleteFilter = deleteFilter;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> message(\n-      Types.StructType expected, MessageType message, List<VectorizedReader<?>> fieldReaders) {\n-    GroupType groupType = message.asGroupType();\n-    Map<Integer, VectorizedReader<?>> readersById = Maps.newHashMap();\n-    List<Type> fields = groupType.getFields();\n-\n-    IntStream.range(0, fields.size())\n-        .filter(pos -> fields.get(pos).getId() != null)\n-        .forEach(pos -> readersById.put(fields.get(pos).getId().intValue(), fieldReaders.get(pos)));\n-\n-    List<Types.NestedField> icebergFields =\n-        expected != null ? 
expected.fields() : ImmutableList.of();\n-\n-    List<VectorizedReader<?>> reorderedFields =\n-        Lists.newArrayListWithExpectedSize(icebergFields.size());\n-\n-    for (Types.NestedField field : icebergFields) {\n-      int id = field.fieldId();\n-      VectorizedReader<?> reader = readersById.get(id);\n-      if (idToConstant.containsKey(id)) {\n-        CometConstantColumnReader constantReader =\n-            new CometConstantColumnReader<>(idToConstant.get(id), field);\n-        reorderedFields.add(constantReader);\n-      } else if (id == MetadataColumns.ROW_POSITION.fieldId()) {\n-        reorderedFields.add(new CometPositionColumnReader(field));\n-      } else if (id == MetadataColumns.IS_DELETED.fieldId()) {\n-        CometColumnReader deleteReader = new CometDeleteColumnReader<>(field);\n-        reorderedFields.add(deleteReader);\n-      } else if (reader != null) {\n-        reorderedFields.add(reader);\n-      } else if (field.initialDefault() != null) {\n-        CometColumnReader constantReader =\n-            new CometConstantColumnReader<>(field.initialDefault(), field);\n-        reorderedFields.add(constantReader);\n-      } else if (field.isOptional()) {\n-        CometColumnReader constantReader = new CometConstantColumnReader<>(null, field);\n-        reorderedFields.add(constantReader);\n-      } else {\n-        throw new IllegalArgumentException(\n-            String.format(\"Missing required field: %s\", field.name()));\n-      }\n-    }\n-    return vectorizedReader(reorderedFields);\n-  }\n-\n-  protected VectorizedReader<?> vectorizedReader(List<VectorizedReader<?>> reorderedFields) {\n-    VectorizedReader<?> reader = readerFactory.apply(reorderedFields);\n-    if (deleteFilter != null) {\n-      ((CometColumnarBatchReader) reader).setDeleteFilter(deleteFilter);\n-    }\n-    return reader;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> struct(\n-      Types.StructType expected, GroupType groupType, List<VectorizedReader<?>> fieldReaders) {\n-    if (expected != null) {\n-      throw new UnsupportedOperationException(\n-          \"Vectorized reads are not supported yet for struct fields\");\n-    }\n-    return null;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> primitive(\n-      org.apache.iceberg.types.Type.PrimitiveType expected, PrimitiveType primitive) {\n-\n-    if (primitive.getId() == null) {\n-      return null;\n-    }\n-    int parquetFieldId = primitive.getId().intValue();\n-    ColumnDescriptor desc = parquetSchema.getColumnDescription(currentPath());\n-    // Nested types not yet supported for vectorized reads\n-    if (desc.getMaxRepetitionLevel() > 0) {\n-      return null;\n-    }\n-    Types.NestedField icebergField = icebergSchema.findField(parquetFieldId);\n-    if (icebergField == null) {\n-      return null;\n-    }\n-\n-    return new CometColumnReader(SparkSchemaUtil.convert(icebergField.type()), desc);\n-  }\n-}\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\nindex b523bc5bff..636ad3be7d 100644\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n+++ b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n@@ -70,23 +70,6 @@ public class VectorizedSparkParquetReaders {\n                 deleteFilter));\n   }\n \n-  public 
static CometColumnarBatchReader buildCometReader(\n-      Schema expectedSchema,\n-      MessageType fileSchema,\n-      Map<Integer, ?> idToConstant,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    return (CometColumnarBatchReader)\n-        TypeWithSchemaVisitor.visit(\n-            expectedSchema.asStruct(),\n-            fileSchema,\n-            new CometVectorizedReaderBuilder(\n-                expectedSchema,\n-                fileSchema,\n-                idToConstant,\n-                readers -> new CometColumnarBatchReader(readers, expectedSchema),\n-                deleteFilter));\n-  }\n-\n   // enables unsafe memory access to avoid costly checks to see if index is within bounds\n   // as long as it is not configured explicitly (see BoundsChecking in Arrow)\n   private static void enableUnsafeMemoryAccess() {\ndiff --git a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\nindex 780e1750a5..25f253eede 100644\n--- a/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n+++ b/spark/v3.4/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n@@ -34,7 +34,6 @@ import org.apache.iceberg.parquet.Parquet;\n import org.apache.iceberg.relocated.com.google.common.collect.Sets;\n import org.apache.iceberg.spark.OrcBatchReadConf;\n import org.apache.iceberg.spark.ParquetBatchReadConf;\n-import org.apache.iceberg.spark.ParquetReaderType;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkOrcReaders;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkParquetReaders;\n import org.apache.iceberg.types.TypeUtil;\n@@ -92,15 +91,9 @@ abstract class BaseBatchReader<T extends ScanTask> extends BaseReader<ColumnarBa\n         .project(requiredSchema)\n         .split(start, length)\n         .createBatchedReaderFunc(\n-            fileSchema -> {\n-              if (parquetConf.readerType() == ParquetReaderType.COMET) {\n-                return VectorizedSparkParquetReaders.buildCometReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              } else {\n-                return VectorizedSparkParquetReaders.buildReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              }\n-            })\n+            fileSchema ->\n+                VectorizedSparkParquetReaders.buildReader(\n+                    requiredSchema, fileSchema, idToConstant, deleteFilter))\n         .recordsPerBatch(parquetConf.batchSize())\n         .filter(residual)\n         .caseSensitive(caseSensitive())\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\nindex 404ba72846..00e97e96f9 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n@@ -90,6 +90,16 @@ public abstract class SparkDistributedDataScanTestBase\n         .master(\"local[2]\")\n         .config(\"spark.serializer\", serializer)\n         .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+        .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+        .config(\n+            \"spark.shuffle.manager\",\n+            
\"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+        .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+        .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.size\", \"10g\")\n+        .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+        .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n         .getOrCreate();\n   }\n }\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\nindex 9361c63176..9a78bbf3b6 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n@@ -69,6 +69,16 @@ public class TestSparkDistributedDataScanDeletes\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\nindex a218f965ea..eca0125ace 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n@@ -62,6 +62,16 @@ public class TestSparkDistributedDataScanFilterFiles\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\nindex acd4688440..9be94eab2f 100644\n--- 
a/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n@@ -59,6 +59,16 @@ public class TestSparkDistributedDataScanReporting\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/SparkTestBase.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/SparkTestBase.java\nindex 3e8953fb95..cac4f75e95 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/SparkTestBase.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/SparkTestBase.java\n@@ -77,6 +77,16 @@ public abstract class SparkTestBase extends SparkTestHelperBase {\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\nindex 06d5e0c44f..b4f6d3907a 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n@@ -57,7 +57,20 @@ public abstract class ScanTestBase extends AvroDataTest {\n \n   @BeforeAll\n   public static void startSpark() {\n-    ScanTestBase.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    ScanTestBase.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+           
 .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     ScanTestBase.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\nindex 26cb4dcc95..1d5eaf18e9 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n@@ -104,7 +104,20 @@ public class TestCompressionSettings extends SparkCatalogTestBase {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestCompressionSettings.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestCompressionSettings.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @Before\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\nindex 013b8d4386..613f35417e 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n@@ -76,7 +76,20 @@ public class TestDataSourceOptions extends SparkTestBaseWithCatalog {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestDataSourceOptions.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestDataSourceOptions.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git 
a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\nindex ba13d005bd..73d76e2d3e 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n@@ -118,7 +118,20 @@ public class TestFilteredScan {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestFilteredScan.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestFilteredScan.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n \n     // define UDFs used by partition tests\n     Function<Object, Integer> bucket4 = Transforms.bucket(4).bind(Types.LongType.get());\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\nindex 9f97753094..a69cf4f33d 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n@@ -94,7 +94,20 @@ public class TestForwardCompatibility {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestForwardCompatibility.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestForwardCompatibility.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\nindex 0154506f86..79138afc26 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n@@ -46,7 +46,20 @@ public class TestIcebergSpark {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestIcebergSpark.spark = 
SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestIcebergSpark.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\nindex 639d37c793..c4570510a6 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n@@ -111,7 +111,20 @@ public class TestPartitionPruning {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestPartitionPruning.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestPartitionPruning.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestPartitionPruning.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n \n     String optionKey = String.format(\"fs.%s.impl\", CountOpenLocalFileSystem.scheme);\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\nindex ad0984ef42..cb143ac2fc 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n@@ -104,7 +104,20 @@ public class TestPartitionValues {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestPartitionValues.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestPartitionValues.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            
.config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\nindex 9fc576dde5..2765862300 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n@@ -83,7 +83,20 @@ public class TestSnapshotSelection {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestSnapshotSelection.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSnapshotSelection.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\nindex 16fde3c954..9c01167064 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n@@ -121,7 +121,20 @@ public class TestSparkDataFile {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestSparkDataFile.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkDataFile.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestSparkDataFile.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\nindex 
63c18277aa..d214d7c23c 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n@@ -89,7 +89,20 @@ public class TestSparkDataWrite {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestSparkDataWrite.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkDataWrite.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @Parameterized.AfterParam\n@@ -138,7 +151,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n     for (ManifestFile manifest :\n@@ -208,7 +221,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n   }\n@@ -254,7 +267,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n   }\n@@ -307,7 +320,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n   }\n@@ -350,7 +363,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n 
\n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n   }\n@@ -390,7 +403,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n \n@@ -455,7 +468,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n   }\n@@ -619,7 +632,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\"Number of rows should match\", expected.size(), actual.size());\n     Assert.assertEquals(\"Result rows should match\", expected, actual);\n \n@@ -705,7 +718,7 @@ public class TestSparkDataWrite {\n     // Since write and commit succeeded, the rows should be readable\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     Assert.assertEquals(\n         \"Number of rows should match\", records.size() + records2.size(), actual.size());\n     assertThat(actual)\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\nindex 584a6b1c70..6b38a1195a 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n@@ -86,7 +86,20 @@ public class TestSparkReadProjection extends TestReadProjection {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestSparkReadProjection.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkReadProjection.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                
\"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     ImmutableMap<String, String> config =\n         ImmutableMap.of(\n             \"type\", \"hive\",\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\nindex dda49b4946..529992de6b 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n@@ -131,7 +131,27 @@ public class TestSparkReaderDeletes extends DeleteReadTests {\n             .config(\"spark.ui.liveUpdate.period\", 0)\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n \n     catalog =\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\nindex e5831b76e4..5c45a111d9 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n@@ -176,7 +176,27 @@ public class TestSparkReaderWithBloomFilter {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+  
          .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n \n     catalog =\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\nindex 961d69b721..57ffe5462f 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n@@ -68,6 +68,16 @@ public class TestStructuredStreaming {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.sql.shuffle.partitions\", 4)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\nindex ac674e2e62..4460d597b3 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n@@ -73,7 +73,20 @@ public class TestTimestampWithoutZone extends SparkTestBase {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestTimestampWithoutZone.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestTimestampWithoutZone.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                
\"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterClass\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\nindex 73827b309b..aa5e28e34d 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n@@ -80,7 +80,20 @@ public class TestWriteMetricsConfig {\n \n   @BeforeClass\n   public static void startSpark() {\n-    TestWriteMetricsConfig.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestWriteMetricsConfig.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestWriteMetricsConfig.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\nindex 1a4e2f3e1c..641cb855e9 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n@@ -65,7 +65,27 @@ public class TestAggregatePushDown extends SparkCatalogTestBase {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.sql.iceberg.aggregate_pushdown\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            
.config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n \n     SparkTestBase.catalog =\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\nindex f92055ab7a..95fde91f19 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n@@ -585,9 +585,7 @@ public class TestFilterPushDown extends SparkTestBaseWithCatalog {\n     String planAsString = sparkPlan.toString().replaceAll(\"#(\\\\d+L?)\", \"\");\n \n     if (sparkFilter != null) {\n-      assertThat(planAsString)\n-          .as(\"Post scan filter should match\")\n-          .contains(\"Filter (\" + sparkFilter + \")\");\n+      assertThat(planAsString).as(\"Post scan filter should match\").contains(\"CometFilter\");\n     } else {\n       assertThat(planAsString).as(\"Should be no post scan filter\").doesNotContain(\"Filter (\");\n     }\ndiff --git a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\nindex 8a1ec5060f..3611521cc4 100644\n--- a/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\n+++ b/spark/v3.4/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\n@@ -605,7 +605,7 @@ public class TestStoragePartitionedJoins extends SparkTestBaseWithCatalog {\n             + \"FROM %s t1 \"\n             + \"INNER JOIN %s t2 \"\n             + \"ON t1.id = t2.id AND t1.%s = t2.%s \"\n-            + \"ORDER BY t1.id, t1.%s\",\n+            + \"ORDER BY t1.id, t1.%s, t1.salary\",\n         sourceColumnName,\n         tableName,\n         tableName(OTHER_TABLE_NAME),\ndiff --git a/spark/v3.5/build.gradle b/spark/v3.5/build.gradle\nindex 572c32f929..29013d26c0 100644\n--- a/spark/v3.5/build.gradle\n+++ b/spark/v3.5/build.gradle\n@@ -75,7 +75,7 @@ project(\":iceberg-spark:iceberg-spark-${sparkMajorVersion}_${scalaVersion}\") {\n       exclude group: 'org.roaringbitmap'\n     }\n \n-    compileOnly \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:0.5.0\"\n+    compileOnly \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     implementation libs.parquet.column\n     implementation libs.parquet.hadoop\n@@ -184,7 +184,7 @@ project(\":iceberg-spark:iceberg-spark-extensions-${sparkMajorVersion}_${scalaVer\n     testImplementation libs.avro.avro\n     testImplementation libs.parquet.hadoop\n     testImplementation libs.awaitility\n-    testImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:0.5.0\"\n+    testImplementation 
\"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     // Required because we remove antlr plugin dependencies from the compile configuration, see note above\n     runtimeOnly libs.antlr.runtime\n@@ -265,6 +265,7 @@ project(\":iceberg-spark:iceberg-spark-runtime-${sparkMajorVersion}_${scalaVersio\n     integrationImplementation project(path: ':iceberg-hive-metastore', configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n     integrationImplementation project(path: \":iceberg-spark:iceberg-spark-extensions-${sparkMajorVersion}_${scalaVersion}\", configuration: 'testArtifacts')\n+    integrationImplementation \"org.apache.datafusion:comet-spark-spark${sparkMajorVersion}_${scalaVersion}:${libs.versions.comet.get()}\"\n \n     // runtime dependencies for running Hive Catalog based integration test\n     integrationRuntimeOnly project(':iceberg-hive-metastore')\ndiff --git a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\nindex 578845e3da..4f44a73db3 100644\n--- a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\n+++ b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/ExtensionsTestBase.java\n@@ -57,6 +57,16 @@ public abstract class ExtensionsTestBase extends CatalogTestBase {\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n             .config(\n                 SQLConf.ADAPTIVE_EXECUTION_ENABLED().key(), String.valueOf(RANDOM.nextBoolean()))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\nindex ecf9e6f8a5..3475260ca5 100644\n--- a/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n+++ b/spark/v3.5/spark-extensions/src/test/java/org/apache/iceberg/spark/extensions/TestCallStatementParser.java\n@@ -56,6 +56,16 @@ public class TestCallStatementParser {\n             .master(\"local[2]\")\n             .config(\"spark.sql.extensions\", IcebergSparkSessionExtensions.class.getName())\n             .config(\"spark.extra.prop\", \"value\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            
.config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n     TestCallStatementParser.parser = spark.sessionState().sqlParser();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\nindex 64edb1002e..0fc10120ff 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/DeleteOrphanFilesBenchmark.java\n@@ -179,6 +179,16 @@ public class DeleteOrphanFilesBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", catalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local\");\n     spark = builder.getOrCreate();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\nindex a5d0456b0b..f0759f8378 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/action/IcebergSortCompactionBenchmark.java\n@@ -392,6 +392,16 @@ public class IcebergSortCompactionBenchmark {\n                 \"spark.sql.catalog.spark_catalog\", \"org.apache.iceberg.spark.SparkSessionCatalog\")\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", getCatalogWarehouse())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\");\n     spark = builder.getOrCreate();\n     
Configuration sparkHadoopConf = spark.sessionState().newHadoopConf();\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java\nindex c6794e43c6..457d2823ee 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVReaderBenchmark.java\n@@ -239,6 +239,16 @@ public class DVReaderBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", newWarehouseDir())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\")\n             .getOrCreate();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java\nindex ac74fb5a10..eab09293de 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/DVWriterBenchmark.java\n@@ -223,6 +223,16 @@ public class DVWriterBenchmark {\n             .config(\"spark.sql.catalog.spark_catalog\", SparkSessionCatalog.class.getName())\n             .config(\"spark.sql.catalog.spark_catalog.type\", \"hadoop\")\n             .config(\"spark.sql.catalog.spark_catalog.warehouse\", newWarehouseDir())\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .master(\"local[*]\")\n             .getOrCreate();\n   }\ndiff --git a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\nindex 68c537e34a..1e9e90d539 100644\n--- a/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n+++ b/spark/v3.5/spark/src/jmh/java/org/apache/iceberg/spark/source/IcebergSourceBenchmark.java\n@@ -94,7 +94,19 @@ public abstract class IcebergSourceBenchmark {\n   }\n \n   protected void setupSpark(boolean enableDictionaryEncoding) {\n-    
SparkSession.Builder builder = SparkSession.builder().config(\"spark.ui.enabled\", false);\n+    SparkSession.Builder builder =\n+        SparkSession.builder()\n+            .config(\"spark.ui.enabled\", false)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\");\n     if (!enableDictionaryEncoding) {\n       builder\n           .config(\"parquet.dictionary.page.size\", \"1\")\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\ndeleted file mode 100644\nindex 16159dcbdf..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnReader.java\n+++ /dev/null\n@@ -1,140 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.ColumnReader;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.comet.parquet.Utils;\n-import org.apache.comet.shaded.arrow.c.CometSchemaImporter;\n-import org.apache.comet.shaded.arrow.memory.RootAllocator;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.column.page.PageReader;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-\n-class CometColumnReader implements VectorizedReader<ColumnVector> {\n-  // use the Comet default batch size\n-  public static final int DEFAULT_BATCH_SIZE = 8192;\n-\n-  private final ColumnDescriptor descriptor;\n-  private final DataType sparkType;\n-\n-  // The delegated ColumnReader from Comet side\n-  private AbstractColumnReader delegate;\n-  private boolean initialized = false;\n-  private int batchSize = DEFAULT_BATCH_SIZE;\n-  private CometSchemaImporter importer;\n-\n-  CometColumnReader(DataType sparkType, ColumnDescriptor descriptor) {\n-    this.sparkType = sparkType;\n-    this.descriptor = descriptor;\n-  }\n-\n-  CometColumnReader(Types.NestedField field) {\n-    DataType dataType = SparkSchemaUtil.convert(field.type());\n-    StructField structField = new StructField(field.name(), dataType, false, Metadata.empty());\n-    this.sparkType = dataType;\n-    this.descriptor = TypeUtil.convertToParquet(structField);\n-  }\n-\n-  public AbstractColumnReader delegate() {\n-    return delegate;\n-  }\n-\n-  void setDelegate(AbstractColumnReader delegate) {\n-    this.delegate = delegate;\n-  }\n-\n-  void setInitialized(boolean initialized) {\n-    this.initialized = initialized;\n-  }\n-\n-  public int batchSize() {\n-    return batchSize;\n-  }\n-\n-  /**\n-   * This method is to initialized/reset the CometColumnReader. This needs to be called for each row\n-   * group after readNextRowGroup, so a new dictionary encoding can be set for each of the new row\n-   * groups.\n-   */\n-  public void reset() {\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-\n-    this.importer = new CometSchemaImporter(new RootAllocator());\n-    this.delegate = Utils.getColumnReader(sparkType, descriptor, importer, batchSize, false, false);\n-    this.initialized = true;\n-  }\n-\n-  public ColumnDescriptor descriptor() {\n-    return descriptor;\n-  }\n-\n-  /** Returns the Spark data type for this column. 
*/\n-  public DataType sparkType() {\n-    return sparkType;\n-  }\n-\n-  /**\n-   * Set the page reader to be 'pageReader'.\n-   *\n-   * <p>NOTE: this should be called before reading a new Parquet column chunk, and after {@link\n-   * CometColumnReader#reset} is called.\n-   */\n-  public void setPageReader(PageReader pageReader) throws IOException {\n-    Preconditions.checkState(initialized, \"Invalid state: 'reset' should be called first\");\n-    ((ColumnReader) delegate).setPageReader(pageReader);\n-  }\n-\n-  @Override\n-  public void close() {\n-    // close resources on native side\n-    if (importer != null) {\n-      importer.close();\n-    }\n-\n-    if (delegate != null) {\n-      delegate.close();\n-    }\n-  }\n-\n-  @Override\n-  public void setBatchSize(int size) {\n-    this.batchSize = size;\n-  }\n-\n-  @Override\n-  public ColumnVector read(ColumnVector reuse, int numRowsToRead) {\n-    throw new UnsupportedOperationException(\"Not supported\");\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\ndeleted file mode 100644\nindex 04ac69476a..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometColumnarBatchReader.java\n+++ /dev/null\n@@ -1,197 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.io.IOException;\n-import java.io.UncheckedIOException;\n-import java.util.List;\n-import java.util.Map;\n-import org.apache.comet.parquet.AbstractColumnReader;\n-import org.apache.comet.parquet.BatchReader;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.base.Preconditions;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.util.Pair;\n-import org.apache.parquet.column.page.PageReadStore;\n-import org.apache.parquet.hadoop.metadata.ColumnChunkMetaData;\n-import org.apache.parquet.hadoop.metadata.ColumnPath;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-import org.apache.spark.sql.vectorized.ColumnVector;\n-import org.apache.spark.sql.vectorized.ColumnarBatch;\n-\n-/**\n- * {@link VectorizedReader} that returns Spark's {@link ColumnarBatch} to support Spark's vectorized\n- * read path. 
The {@link ColumnarBatch} returned is created by passing in the Arrow vectors\n- * populated via delegated read calls to {@link CometColumnReader VectorReader(s)}.\n- */\n-@SuppressWarnings(\"checkstyle:VisibilityModifier\")\n-class CometColumnarBatchReader implements VectorizedReader<ColumnarBatch> {\n-\n-  private final CometColumnReader[] readers;\n-  private final boolean hasIsDeletedColumn;\n-\n-  // The delegated BatchReader on the Comet side does the real work of loading a batch of rows.\n-  // The Comet BatchReader contains an array of ColumnReader. There is no need to explicitly call\n-  // ColumnReader.readBatch; instead, BatchReader.nextBatch will be called, which underneath calls\n-  // ColumnReader.readBatch. The only exception is DeleteColumnReader, because at the time of\n-  // calling BatchReader.nextBatch, the isDeleted value is not yet available, so\n-  // DeleteColumnReader.readBatch must be called explicitly later, after the isDeleted value is\n-  // available.\n-  private final BatchReader delegate;\n-  private DeleteFilter<InternalRow> deletes = null;\n-  private long rowStartPosInBatch = 0;\n-\n-  CometColumnarBatchReader(List<VectorizedReader<?>> readers, Schema schema) {\n-    this.readers =\n-        readers.stream().map(CometColumnReader.class::cast).toArray(CometColumnReader[]::new);\n-    this.hasIsDeletedColumn =\n-        readers.stream().anyMatch(reader -> reader instanceof CometDeleteColumnReader);\n-\n-    AbstractColumnReader[] abstractColumnReaders = new AbstractColumnReader[readers.size()];\n-    this.delegate = new BatchReader(abstractColumnReaders);\n-    delegate.setSparkSchema(SparkSchemaUtil.convert(schema));\n-  }\n-\n-  @Override\n-  public void setRowGroupInfo(\n-      PageReadStore pageStore, Map<ColumnPath, ColumnChunkMetaData> metaData) {\n-    for (int i = 0; i < readers.length; i++) {\n-      try {\n-        if (!(readers[i] instanceof CometConstantColumnReader)\n-            && !(readers[i] instanceof CometPositionColumnReader)\n-            && !(readers[i] instanceof CometDeleteColumnReader)) {\n-          readers[i].reset();\n-          readers[i].setPageReader(pageStore.getPageReader(readers[i].descriptor()));\n-        }\n-      } catch (IOException e) {\n-        throw new UncheckedIOException(\"Failed to setRowGroupInfo for Comet vectorization\", e);\n-      }\n-    }\n-\n-    for (int i = 0; i < readers.length; i++) {\n-      delegate.getColumnReaders()[i] = this.readers[i].delegate();\n-    }\n-\n-    this.rowStartPosInBatch =\n-        pageStore\n-            .getRowIndexOffset()\n-            .orElseThrow(\n-                () ->\n-                    new IllegalArgumentException(\n-                        \"PageReadStore does not contain row index offset\"));\n-  }\n-\n-  public void setDeleteFilter(DeleteFilter<InternalRow> deleteFilter) {\n-    this.deletes = deleteFilter;\n-  }\n-\n-  @Override\n-  public final ColumnarBatch read(ColumnarBatch reuse, int numRowsToRead) {\n-    ColumnarBatch columnarBatch = new ColumnBatchLoader(numRowsToRead).loadDataToColumnBatch();\n-    rowStartPosInBatch += numRowsToRead;\n-    return columnarBatch;\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.setBatchSize(batchSize);\n-      }\n-    }\n-  }\n-\n-  @Override\n-  public void close() {\n-    for (CometColumnReader reader : readers) {\n-      if (reader != null) {\n-        reader.close();\n-      }\n-    }\n-  }\n-\n-  
private class ColumnBatchLoader {\n-    private final int batchSize;\n-\n-    ColumnBatchLoader(int numRowsToRead) {\n-      Preconditions.checkArgument(\n-          numRowsToRead > 0, \"Invalid number of rows to read: %s\", numRowsToRead);\n-      this.batchSize = numRowsToRead;\n-    }\n-\n-    ColumnarBatch loadDataToColumnBatch() {\n-      ColumnVector[] vectors = readDataToColumnVectors();\n-      int numLiveRows = batchSize;\n-\n-      if (hasIsDeletedColumn) {\n-        boolean[] isDeleted = buildIsDeleted(vectors);\n-        readDeletedColumn(vectors, isDeleted);\n-      } else {\n-        Pair<int[], Integer> pair = buildRowIdMapping(vectors);\n-        if (pair != null) {\n-          int[] rowIdMapping = pair.first();\n-          numLiveRows = pair.second();\n-          for (int i = 0; i < vectors.length; i++) {\n-            vectors[i] = new ColumnVectorWithFilter(vectors[i], rowIdMapping);\n-          }\n-        }\n-      }\n-\n-      if (deletes != null && deletes.hasEqDeletes()) {\n-        vectors = ColumnarBatchUtil.removeExtraColumns(deletes, vectors);\n-      }\n-\n-      ColumnarBatch batch = new ColumnarBatch(vectors);\n-      batch.setNumRows(numLiveRows);\n-      return batch;\n-    }\n-\n-    private boolean[] buildIsDeleted(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildIsDeleted(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    private Pair<int[], Integer> buildRowIdMapping(ColumnVector[] vectors) {\n-      return ColumnarBatchUtil.buildRowIdMapping(vectors, deletes, rowStartPosInBatch, batchSize);\n-    }\n-\n-    ColumnVector[] readDataToColumnVectors() {\n-      ColumnVector[] columnVectors = new ColumnVector[readers.length];\n-      // Fetch rows for all readers in the delegate\n-      delegate.nextBatch(batchSize);\n-      for (int i = 0; i < readers.length; i++) {\n-        columnVectors[i] = readers[i].delegate().currentBatch();\n-      }\n-\n-      return columnVectors;\n-    }\n-\n-    void readDeletedColumn(ColumnVector[] columnVectors, boolean[] isDeleted) {\n-      for (int i = 0; i < readers.length; i++) {\n-        if (readers[i] instanceof CometDeleteColumnReader) {\n-          CometDeleteColumnReader deleteColumnReader = new CometDeleteColumnReader<>(isDeleted);\n-          deleteColumnReader.setBatchSize(batchSize);\n-          deleteColumnReader.delegate().readBatch(batchSize);\n-          columnVectors[i] = deleteColumnReader.delegate().currentBatch();\n-        }\n-      }\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\ndeleted file mode 100644\nindex 047c96314b..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometConstantColumnReader.java\n+++ /dev/null\n@@ -1,65 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  
You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.math.BigDecimal;\n-import java.nio.ByteBuffer;\n-import org.apache.comet.parquet.ConstantColumnReader;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataType;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Decimal;\n-import org.apache.spark.sql.types.DecimalType;\n-import org.apache.spark.unsafe.types.UTF8String;\n-\n-class CometConstantColumnReader<T> extends CometColumnReader {\n-\n-  CometConstantColumnReader(T value, Types.NestedField field) {\n-    super(field);\n-    // use delegate to set constant value on the native side to be consumed by native execution.\n-    setDelegate(\n-        new ConstantColumnReader(sparkType(), descriptor(), convertToSparkValue(value), false));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private Object convertToSparkValue(T value) {\n-    DataType dataType = sparkType();\n-    // Match the value to Spark internal type if necessary\n-    if (dataType == DataTypes.StringType && value instanceof String) {\n-      // the internal type for StringType is UTF8String\n-      return UTF8String.fromString((String) value);\n-    } else if (dataType instanceof DecimalType && value instanceof BigDecimal) {\n-      // the internal type for DecimalType is Decimal\n-      return Decimal.apply((BigDecimal) value);\n-    } else if (dataType == DataTypes.BinaryType && value instanceof ByteBuffer) {\n-      // the internal type for DecimalType is byte[]\n-      // Iceberg default value should always use HeapBufferBuffer, so calling ByteBuffer.array()\n-      // should be safe.\n-      return ((ByteBuffer) value).array();\n-    } else {\n-      return value;\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\ndeleted file mode 100644\nindex 6235bfe486..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometDeleteColumnReader.java\n+++ /dev/null\n@@ -1,75 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.comet.parquet.TypeUtil;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.types.Types;\n-import org.apache.spark.sql.types.DataTypes;\n-import org.apache.spark.sql.types.Metadata;\n-import org.apache.spark.sql.types.StructField;\n-\n-class CometDeleteColumnReader<T> extends CometColumnReader {\n-  CometDeleteColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new DeleteColumnReader());\n-  }\n-\n-  CometDeleteColumnReader(boolean[] isDeleted) {\n-    super(MetadataColumns.IS_DELETED);\n-    setDelegate(new DeleteColumnReader(isDeleted));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class DeleteColumnReader extends MetadataColumnReader {\n-    private boolean[] isDeleted;\n-\n-    DeleteColumnReader() {\n-      super(\n-          DataTypes.BooleanType,\n-          TypeUtil.convertToParquet(\n-              new StructField(\"_deleted\", DataTypes.BooleanType, false, Metadata.empty())),\n-          false /* useDecimal128 = false */,\n-          false /* isConstant = false */);\n-      this.isDeleted = new boolean[0];\n-    }\n-\n-    DeleteColumnReader(boolean[] isDeleted) {\n-      this();\n-      this.isDeleted = isDeleted;\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set isDeleted on the native side to be consumed by native execution\n-      Native.setIsDeleted(nativeHandle, isDeleted);\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\ndeleted file mode 100644\nindex bcc0e514c2..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometPositionColumnReader.java\n+++ /dev/null\n@@ -1,62 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import org.apache.comet.parquet.MetadataColumnReader;\n-import org.apache.comet.parquet.Native;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.spark.sql.types.DataTypes;\n-\n-class CometPositionColumnReader extends CometColumnReader {\n-  CometPositionColumnReader(Types.NestedField field) {\n-    super(field);\n-    setDelegate(new PositionColumnReader(descriptor()));\n-  }\n-\n-  @Override\n-  public void setBatchSize(int batchSize) {\n-    super.setBatchSize(batchSize);\n-    delegate().setBatchSize(batchSize);\n-    setInitialized(true);\n-  }\n-\n-  private static class PositionColumnReader extends MetadataColumnReader {\n-    /** The current position value of the column that are used to initialize this column reader. */\n-    private long position;\n-\n-    PositionColumnReader(ColumnDescriptor descriptor) {\n-      super(\n-          DataTypes.LongType,\n-          descriptor,\n-          false /* useDecimal128 = false */,\n-          false /* isConstant = false */);\n-    }\n-\n-    @Override\n-    public void readBatch(int total) {\n-      Native.resetBatch(nativeHandle);\n-      // set position on the native side to be consumed by native execution\n-      Native.setPosition(nativeHandle, position, total);\n-      position += total;\n-\n-      super.readBatch(total);\n-    }\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\ndeleted file mode 100644\nindex d36f1a7274..0000000000\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/CometVectorizedReaderBuilder.java\n+++ /dev/null\n@@ -1,147 +0,0 @@\n-/*\n- * Licensed to the Apache Software Foundation (ASF) under one\n- * or more contributor license agreements.  See the NOTICE file\n- * distributed with this work for additional information\n- * regarding copyright ownership.  The ASF licenses this file\n- * to you under the Apache License, Version 2.0 (the\n- * \"License\"); you may not use this file except in compliance\n- * with the License.  You may obtain a copy of the License at\n- *\n- *   http://www.apache.org/licenses/LICENSE-2.0\n- *\n- * Unless required by applicable law or agreed to in writing,\n- * software distributed under the License is distributed on an\n- * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n- * KIND, either express or implied.  
See the License for the\n- * specific language governing permissions and limitations\n- * under the License.\n- */\n-package org.apache.iceberg.spark.data.vectorized;\n-\n-import java.util.List;\n-import java.util.Map;\n-import java.util.function.Function;\n-import java.util.stream.IntStream;\n-import org.apache.iceberg.MetadataColumns;\n-import org.apache.iceberg.Schema;\n-import org.apache.iceberg.data.DeleteFilter;\n-import org.apache.iceberg.parquet.TypeWithSchemaVisitor;\n-import org.apache.iceberg.parquet.VectorizedReader;\n-import org.apache.iceberg.relocated.com.google.common.collect.ImmutableList;\n-import org.apache.iceberg.relocated.com.google.common.collect.Lists;\n-import org.apache.iceberg.relocated.com.google.common.collect.Maps;\n-import org.apache.iceberg.spark.SparkSchemaUtil;\n-import org.apache.iceberg.types.Types;\n-import org.apache.parquet.column.ColumnDescriptor;\n-import org.apache.parquet.schema.GroupType;\n-import org.apache.parquet.schema.MessageType;\n-import org.apache.parquet.schema.PrimitiveType;\n-import org.apache.parquet.schema.Type;\n-import org.apache.spark.sql.catalyst.InternalRow;\n-\n-class CometVectorizedReaderBuilder extends TypeWithSchemaVisitor<VectorizedReader<?>> {\n-\n-  private final MessageType parquetSchema;\n-  private final Schema icebergSchema;\n-  private final Map<Integer, ?> idToConstant;\n-  private final Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory;\n-  private final DeleteFilter<InternalRow> deleteFilter;\n-\n-  CometVectorizedReaderBuilder(\n-      Schema expectedSchema,\n-      MessageType parquetSchema,\n-      Map<Integer, ?> idToConstant,\n-      Function<List<VectorizedReader<?>>, VectorizedReader<?>> readerFactory,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    this.parquetSchema = parquetSchema;\n-    this.icebergSchema = expectedSchema;\n-    this.idToConstant = idToConstant;\n-    this.readerFactory = readerFactory;\n-    this.deleteFilter = deleteFilter;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> message(\n-      Types.StructType expected, MessageType message, List<VectorizedReader<?>> fieldReaders) {\n-    GroupType groupType = message.asGroupType();\n-    Map<Integer, VectorizedReader<?>> readersById = Maps.newHashMap();\n-    List<Type> fields = groupType.getFields();\n-\n-    IntStream.range(0, fields.size())\n-        .filter(pos -> fields.get(pos).getId() != null)\n-        .forEach(pos -> readersById.put(fields.get(pos).getId().intValue(), fieldReaders.get(pos)));\n-\n-    List<Types.NestedField> icebergFields =\n-        expected != null ? 
expected.fields() : ImmutableList.of();\n-\n-    List<VectorizedReader<?>> reorderedFields =\n-        Lists.newArrayListWithExpectedSize(icebergFields.size());\n-\n-    for (Types.NestedField field : icebergFields) {\n-      int id = field.fieldId();\n-      VectorizedReader<?> reader = readersById.get(id);\n-      if (idToConstant.containsKey(id)) {\n-        CometConstantColumnReader constantReader =\n-            new CometConstantColumnReader<>(idToConstant.get(id), field);\n-        reorderedFields.add(constantReader);\n-      } else if (id == MetadataColumns.ROW_POSITION.fieldId()) {\n-        reorderedFields.add(new CometPositionColumnReader(field));\n-      } else if (id == MetadataColumns.IS_DELETED.fieldId()) {\n-        CometColumnReader deleteReader = new CometDeleteColumnReader<>(field);\n-        reorderedFields.add(deleteReader);\n-      } else if (reader != null) {\n-        reorderedFields.add(reader);\n-      } else if (field.initialDefault() != null) {\n-        CometColumnReader constantReader =\n-            new CometConstantColumnReader<>(field.initialDefault(), field);\n-        reorderedFields.add(constantReader);\n-      } else if (field.isOptional()) {\n-        CometColumnReader constantReader = new CometConstantColumnReader<>(null, field);\n-        reorderedFields.add(constantReader);\n-      } else {\n-        throw new IllegalArgumentException(\n-            String.format(\"Missing required field: %s\", field.name()));\n-      }\n-    }\n-    return vectorizedReader(reorderedFields);\n-  }\n-\n-  protected VectorizedReader<?> vectorizedReader(List<VectorizedReader<?>> reorderedFields) {\n-    VectorizedReader<?> reader = readerFactory.apply(reorderedFields);\n-    if (deleteFilter != null) {\n-      ((CometColumnarBatchReader) reader).setDeleteFilter(deleteFilter);\n-    }\n-    return reader;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> struct(\n-      Types.StructType expected, GroupType groupType, List<VectorizedReader<?>> fieldReaders) {\n-    if (expected != null) {\n-      throw new UnsupportedOperationException(\n-          \"Vectorized reads are not supported yet for struct fields\");\n-    }\n-    return null;\n-  }\n-\n-  @Override\n-  public VectorizedReader<?> primitive(\n-      org.apache.iceberg.types.Type.PrimitiveType expected, PrimitiveType primitive) {\n-\n-    if (primitive.getId() == null) {\n-      return null;\n-    }\n-    int parquetFieldId = primitive.getId().intValue();\n-    ColumnDescriptor desc = parquetSchema.getColumnDescription(currentPath());\n-    // Nested types not yet supported for vectorized reads\n-    if (desc.getMaxRepetitionLevel() > 0) {\n-      return null;\n-    }\n-    Types.NestedField icebergField = icebergSchema.findField(parquetFieldId);\n-    if (icebergField == null) {\n-      return null;\n-    }\n-\n-    return new CometColumnReader(SparkSchemaUtil.convert(icebergField.type()), desc);\n-  }\n-}\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\nindex b523bc5bff..636ad3be7d 100644\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n+++ b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/data/vectorized/VectorizedSparkParquetReaders.java\n@@ -70,23 +70,6 @@ public class VectorizedSparkParquetReaders {\n                 deleteFilter));\n   }\n \n-  public 
static CometColumnarBatchReader buildCometReader(\n-      Schema expectedSchema,\n-      MessageType fileSchema,\n-      Map<Integer, ?> idToConstant,\n-      DeleteFilter<InternalRow> deleteFilter) {\n-    return (CometColumnarBatchReader)\n-        TypeWithSchemaVisitor.visit(\n-            expectedSchema.asStruct(),\n-            fileSchema,\n-            new CometVectorizedReaderBuilder(\n-                expectedSchema,\n-                fileSchema,\n-                idToConstant,\n-                readers -> new CometColumnarBatchReader(readers, expectedSchema),\n-                deleteFilter));\n-  }\n-\n   // enables unsafe memory access to avoid costly checks to see if index is within bounds\n   // as long as it is not configured explicitly (see BoundsChecking in Arrow)\n   private static void enableUnsafeMemoryAccess() {\ndiff --git a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\nindex 780e1750a5..25f253eede 100644\n--- a/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n+++ b/spark/v3.5/spark/src/main/java/org/apache/iceberg/spark/source/BaseBatchReader.java\n@@ -34,7 +34,6 @@ import org.apache.iceberg.parquet.Parquet;\n import org.apache.iceberg.relocated.com.google.common.collect.Sets;\n import org.apache.iceberg.spark.OrcBatchReadConf;\n import org.apache.iceberg.spark.ParquetBatchReadConf;\n-import org.apache.iceberg.spark.ParquetReaderType;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkOrcReaders;\n import org.apache.iceberg.spark.data.vectorized.VectorizedSparkParquetReaders;\n import org.apache.iceberg.types.TypeUtil;\n@@ -92,15 +91,9 @@ abstract class BaseBatchReader<T extends ScanTask> extends BaseReader<ColumnarBa\n         .project(requiredSchema)\n         .split(start, length)\n         .createBatchedReaderFunc(\n-            fileSchema -> {\n-              if (parquetConf.readerType() == ParquetReaderType.COMET) {\n-                return VectorizedSparkParquetReaders.buildCometReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              } else {\n-                return VectorizedSparkParquetReaders.buildReader(\n-                    requiredSchema, fileSchema, idToConstant, deleteFilter);\n-              }\n-            })\n+            fileSchema ->\n+                VectorizedSparkParquetReaders.buildReader(\n+                    requiredSchema, fileSchema, idToConstant, deleteFilter))\n         .recordsPerBatch(parquetConf.batchSize())\n         .filter(residual)\n         .caseSensitive(caseSensitive())\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\nindex 404ba72846..00e97e96f9 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/SparkDistributedDataScanTestBase.java\n@@ -90,6 +90,16 @@ public abstract class SparkDistributedDataScanTestBase\n         .master(\"local[2]\")\n         .config(\"spark.serializer\", serializer)\n         .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+        .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+        .config(\n+            \"spark.shuffle.manager\",\n+            
\"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+        .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+        .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.enabled\", \"true\")\n+        .config(\"spark.memory.offHeap.size\", \"10g\")\n+        .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+        .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n         .getOrCreate();\n   }\n }\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\nindex 659507e4c5..9076ec24d1 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanDeletes.java\n@@ -73,6 +73,16 @@ public class TestSparkDistributedDataScanDeletes\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\nindex a218f965ea..eca0125ace 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanFilterFiles.java\n@@ -62,6 +62,16 @@ public class TestSparkDistributedDataScanFilterFiles\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\nindex 2665d7ba8d..2381d0aa14 100644\n--- 
a/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/TestSparkDistributedDataScanReporting.java\n@@ -63,6 +63,16 @@ public class TestSparkDistributedDataScanReporting\n             .master(\"local[2]\")\n             .config(\"spark.serializer\", \"org.apache.spark.serializer.KryoSerializer\")\n             .config(SQLConf.SHUFFLE_PARTITIONS().key(), \"4\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\nindex 3e9f3334ef..4401e8d0cd 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/TestBase.java\n@@ -77,6 +77,17 @@ public abstract class TestBase extends SparkTestHelperBase {\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n             .config(\"spark.sql.legacy.respectNullabilityInTextDatasetConversion\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .config(\"spark.comet.exec.broadcastExchange.enabled\", \"false\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/parquet/vectorized/TestParquetDictionaryEncodedVectorizedReads.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/parquet/vectorized/TestParquetDictionaryEncodedVectorizedReads.java\nindex bc4e722bc8..3bbff053fd 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/parquet/vectorized/TestParquetDictionaryEncodedVectorizedReads.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/data/parquet/vectorized/TestParquetDictionaryEncodedVectorizedReads.java\n@@ -59,7 +59,20 @@ public class TestParquetDictionaryEncodedVectorizedReads extends TestParquetVect\n \n   @BeforeAll\n   public static void startSpark() {\n-    spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    spark =\n+        SparkSession.builder()\n+            
.master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\nindex 0886df957d..b9d49d55ce 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/ScanTestBase.java\n@@ -57,7 +57,20 @@ public abstract class ScanTestBase extends AvroDataTest {\n \n   @BeforeAll\n   public static void startSpark() {\n-    ScanTestBase.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    ScanTestBase.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     ScanTestBase.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\nindex 6b7d861364..c857dafce3 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestCompressionSettings.java\n@@ -144,7 +144,20 @@ public class TestCompressionSettings extends CatalogTestBase {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestCompressionSettings.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestCompressionSettings.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            
.config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @BeforeEach\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\nindex c4ba96e634..98398968b0 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestDataSourceOptions.java\n@@ -75,7 +75,20 @@ public class TestDataSourceOptions extends TestBaseWithCatalog {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestDataSourceOptions.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestDataSourceOptions.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\nindex 348173596e..9017b4a37a 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestFilteredScan.java\n@@ -110,7 +110,20 @@ public class TestFilteredScan {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestFilteredScan.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestFilteredScan.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\nindex 84c99a575c..1d60b01df5 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestForwardCompatibility.java\n@@ -93,7 +93,20 @@ public class 
TestForwardCompatibility {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestForwardCompatibility.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestForwardCompatibility.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\nindex 7eff93d204..7aab7f9682 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestIcebergSpark.java\n@@ -46,7 +46,20 @@ public class TestIcebergSpark {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestIcebergSpark.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestIcebergSpark.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\nindex 9464f687b0..ca4e3e0353 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionPruning.java\n@@ -112,7 +112,20 @@ public class TestPartitionPruning {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestPartitionPruning.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestPartitionPruning.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            
.config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestPartitionPruning.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n \n     String optionKey = String.format(\"fs.%s.impl\", CountOpenLocalFileSystem.scheme);\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\nindex 5c218f21c4..1d22228217 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestPartitionValues.java\n@@ -107,7 +107,20 @@ public class TestPartitionValues {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestPartitionValues.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestPartitionValues.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\nindex a7334a580c..ecd25670fe 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSnapshotSelection.java\n@@ -87,7 +87,20 @@ public class TestSnapshotSelection {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestSnapshotSelection.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSnapshotSelection.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java 
b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\nindex 182b1ef8f5..205892e4a6 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataFile.java\n@@ -120,7 +120,20 @@ public class TestSparkDataFile {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestSparkDataFile.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkDataFile.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestSparkDataFile.sparkContext = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\nindex fb2b312bed..3fca3330d0 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkDataWrite.java\n@@ -96,7 +96,20 @@ public class TestSparkDataWrite {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestSparkDataWrite.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkDataWrite.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterEach\n@@ -140,7 +153,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n     for (ManifestFile manifest :\n@@ -210,7 +223,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     
List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n   }\n@@ -256,7 +269,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n   }\n@@ -309,7 +322,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n   }\n@@ -352,7 +365,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n   }\n@@ -392,7 +405,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n \n@@ -458,7 +471,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should match\").isEqualTo(expected);\n   }\n@@ -622,7 +635,7 @@ public class TestSparkDataWrite {\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n \n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSameSizeAs(expected);\n     assertThat(actual).as(\"Result rows should 
match\").isEqualTo(expected);\n \n@@ -708,7 +721,7 @@ public class TestSparkDataWrite {\n     // Since write and commit succeeded, the rows should be readable\n     Dataset<Row> result = spark.read().format(\"iceberg\").load(targetLocation);\n     List<SimpleRecord> actual =\n-        result.orderBy(\"id\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n+        result.orderBy(\"id\", \"data\").as(Encoders.bean(SimpleRecord.class)).collectAsList();\n     assertThat(actual).as(\"Number of rows should match\").hasSize(records.size() + records2.size());\n     assertThat(actual)\n         .describedAs(\"Result rows should match\")\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\nindex becf6a064d..21701450a8 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReadProjection.java\n@@ -83,7 +83,20 @@ public class TestSparkReadProjection extends TestReadProjection {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestSparkReadProjection.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestSparkReadProjection.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     ImmutableMap<String, String> config =\n         ImmutableMap.of(\n             \"type\", \"hive\",\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\nindex 4f1cef5d37..f0f4277324 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderDeletes.java\n@@ -136,6 +136,16 @@ public class TestSparkReaderDeletes extends DeleteReadTests {\n             .config(\"spark.ui.liveUpdate.period\", 0)\n             .config(SQLConf.PARTITION_OVERWRITE_MODE().key(), \"dynamic\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+     
       .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \n@@ -204,7 +214,8 @@ public class TestSparkReaderDeletes extends DeleteReadTests {\n   }\n \n   protected boolean countDeletes() {\n-    return true;\n+    // TODO: Enable once iceberg-rust exposes delete count metrics to Comet\n+    return false;\n   }\n \n   @Override\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\nindex baf7fa8f88..150cc89696 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestSparkReaderWithBloomFilter.java\n@@ -182,6 +182,16 @@ public class TestSparkReaderWithBloomFilter {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.hadoop.\" + METASTOREURIS.varname, hiveConf.get(METASTOREURIS.varname))\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\nindex c84a65cbe9..6a6dd8a49b 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestStructuredStreaming.java\n@@ -67,6 +67,16 @@ public class TestStructuredStreaming {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.sql.shuffle.partitions\", 4)\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .getOrCreate();\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\nindex 306444b9f2..b661aa8ced 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n+++ 
b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestTimestampWithoutZone.java\n@@ -75,7 +75,20 @@ public class TestTimestampWithoutZone extends TestBase {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestTimestampWithoutZone.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestTimestampWithoutZone.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n   }\n \n   @AfterAll\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\nindex 841268a6be..2fcc92bcb6 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/source/TestWriteMetricsConfig.java\n@@ -80,7 +80,20 @@ public class TestWriteMetricsConfig {\n \n   @BeforeAll\n   public static void startSpark() {\n-    TestWriteMetricsConfig.spark = SparkSession.builder().master(\"local[2]\").getOrCreate();\n+    TestWriteMetricsConfig.spark =\n+        SparkSession.builder()\n+            .master(\"local[2]\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+            .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n+            .getOrCreate();\n     TestWriteMetricsConfig.sc = JavaSparkContext.fromSparkContext(spark.sparkContext());\n   }\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\nindex 6e09252704..e0d7431808 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestAggregatePushDown.java\n@@ -60,6 +60,16 @@ public class TestAggregatePushDown extends CatalogTestBase {\n         SparkSession.builder()\n             .master(\"local[2]\")\n             .config(\"spark.sql.iceberg.aggregate_pushdown\", \"true\")\n+            .config(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n+            .config(\n+                \"spark.shuffle.manager\",\n+                \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n+         
   .config(\"spark.comet.explainFallback.enabled\", \"true\")\n+            .config(\"spark.comet.scan.icebergNative.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.enabled\", \"true\")\n+            .config(\"spark.memory.offHeap.size\", \"10g\")\n+            .config(\"spark.comet.use.lazyMaterialization\", \"false\")\n+            .config(\"spark.comet.schemaEvolution.enabled\", \"true\")\n             .enableHiveSupport()\n             .getOrCreate();\n \ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\nindex 9d2ce2b388..5e23368848 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestFilterPushDown.java\n@@ -598,9 +598,7 @@ public class TestFilterPushDown extends TestBaseWithCatalog {\n     String planAsString = sparkPlan.toString().replaceAll(\"#(\\\\d+L?)\", \"\");\n \n     if (sparkFilter != null) {\n-      assertThat(planAsString)\n-          .as(\"Post scan filter should match\")\n-          .contains(\"Filter (\" + sparkFilter + \")\");\n+      assertThat(planAsString).as(\"Post scan filter should match\").contains(\"CometFilter\");\n     } else {\n       assertThat(planAsString).as(\"Should be no post scan filter\").doesNotContain(\"Filter (\");\n     }\ndiff --git a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\nindex 6719c45ca9..2515454401 100644\n--- a/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\n+++ b/spark/v3.5/spark/src/test/java/org/apache/iceberg/spark/sql/TestStoragePartitionedJoins.java\n@@ -616,7 +616,7 @@ public class TestStoragePartitionedJoins extends TestBaseWithCatalog {\n             + \"FROM %s t1 \"\n             + \"INNER JOIN %s t2 \"\n             + \"ON t1.id = t2.id AND t1.%s = t2.%s \"\n-            + \"ORDER BY t1.id, t1.%s\",\n+            + \"ORDER BY t1.id, t1.%s, t1.salary\",\n         sourceColumnName,\n         tableName,\n         tableName(OTHER_TABLE_NAME),\n"
  },
  {
    "path": "dev/ensure-jars-have-correct-contents.sh",
    "content": "#!/usr/bin/env bash\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Borrowed from Hadoop\n\n# Usage: $0 [/path/to/some/example.jar;/path/to/another/example/created.jar]\n#\n# accepts a single command line argument with a colon separated list of\n# paths to jars to check. Iterates through each such passed jar and checks\n# all the contained paths to make sure they follow the below constructed\n# safe list.\n\n# We use +=, which is a bash 3.1+ feature\nif [[ -z \"${BASH_VERSINFO[0]}\" ]] \\\n   || [[ \"${BASH_VERSINFO[0]}\" -lt 3 ]] \\\n   || [[ \"${BASH_VERSINFO[0]}\" -eq 3 && \"${BASH_VERSINFO[1]}\" -lt 1 ]]; then\n  echo \"bash v3.1+ is required. Sorry.\"\n  exit 1\nfi\n\nset -e\nset -o pipefail\n\nallowed_expr=\"(^org/$|^org/apache/$\"\n# we have to allow the directories that lead to the org/apache/comet dir\n# We allow all the classes under the following packages:\n#   * org.apache.comet\n#   * org.apache.spark.comet\n#   * org.apache.spark.sql.comet\n#   * org.apache.arrow.c\nallowed_expr+=\"|^org/apache/comet/\"\nallowed_expr+=\"|^org/apache/spark/comet/\"\nallowed_expr+=\"|^org/apache/spark/sql/comet/\"\nallowed_expr+=\"|^org/apache/arrow/c/\"\n#   * whatever in the \"META-INF\" directory\nallowed_expr+=\"|^META-INF/\"\n#   * whatever under the \"conf\" directory\nallowed_expr+=\"|^conf/\"\n#   * whatever under the \"lib\" directory\nallowed_expr+=\"|^lib/\"\n# Native dynamic library from Arrow\nallowed_expr+=\"|^x86_64/\"\nallowed_expr+=\"|^aarch_64/\"\nallowed_expr+=\"|^x86_64/libarrow_cdata_jni.so$\"\nallowed_expr+=\"|^x86_64/libarrow_cdata_jni.dylib$\"\nallowed_expr+=\"|^x86_64/arrow_cdata_jni.dll$\"\nallowed_expr+=\"|^aarch_64/libarrow_cdata_jni.dylib$\"\n\nallowed_expr+=\"|^arrow_cdata_jni/\"\nallowed_expr+=\"|^arrow_cdata_jni/x86_64/\"\nallowed_expr+=\"|^arrow_cdata_jni/aarch_64/\"\nallowed_expr+=\"|^arrow_cdata_jni/x86_64/libarrow_cdata_jni.so$\"\nallowed_expr+=\"|^arrow_cdata_jni/x86_64/libarrow_cdata_jni.dylib$\"\nallowed_expr+=\"|^arrow_cdata_jni/x86_64/arrow_cdata_jni.dll$\"\nallowed_expr+=\"|^arrow_cdata_jni/aarch_64/libarrow_cdata_jni.dylib$\"\n# Two classes in Arrow C module: StructVectorLoader and StructVectorUnloader, are not\n# under org/apache/arrow/c, so we'll need to treat them specially.\nallowed_expr+=\"|^org/apache/arrow/$\"\nallowed_expr+=\"|^org/apache/arrow/vector/$\"\nallowed_expr+=\"|^org/apache/arrow/vector/StructVectorLoader.class$\"\nallowed_expr+=\"|^org/apache/arrow/vector/StructVectorUnloader.class$\"\n# Log4J stuff\nallowed_expr+=\"|log4j2.properties\"\n# Git Info properties\nallowed_expr+=\"|comet-git-info.properties\"\n# For some reason org/apache/spark/sql directory is also included, but with no content\nallowed_expr+=\"|^org/apache/spark/$\"\n# Some shuffle related classes are 
spark-private, e.g. TempShuffleBlockId, ShuffleWriteMetricsReporter,\n# so these classes which use shuffle classes have to be in org/apache/spark.\nallowed_expr+=\"|^org/apache/spark/shuffle/$\"\nallowed_expr+=\"|^org/apache/spark/shuffle/sort/$\"\nallowed_expr+=\"|^org/apache/spark/shuffle/sort/CometShuffleExternalSorter.*$\"\nallowed_expr+=\"|^org/apache/spark/shuffle/sort/RowPartition.class$\"\nallowed_expr+=\"|^org/apache/spark/shuffle/sort/SpillSorter.*$\"\nallowed_expr+=\"|^org/apache/spark/shuffle/comet/.*$\"\nallowed_expr+=\"|^org/apache/spark/sql/$\"\n# allow ExplainPlanGenerator trait since it may not be available in older Spark versions\nallowed_expr+=\"|^org/apache/spark/sql/ExtendedExplainGenerator.*$\"\nallowed_expr+=\"|^org/apache/spark/CometPlugin.class$\"\nallowed_expr+=\"|^org/apache/spark/CometDriverPlugin.*$\"\nallowed_expr+=\"|^org/apache/spark/CometSource.*$\"\nallowed_expr+=\"|^org/apache/spark/CometTaskMemoryManager.class$\"\nallowed_expr+=\"|^org/apache/spark/CometTaskMemoryManager.*$\"\nallowed_expr+=\"|^scala-collection-compat.properties$\"\nallowed_expr+=\"|^scala/$\"\nallowed_expr+=\"|^scala/annotation/\"\nallowed_expr+=\"|^scala/collection/\"\nallowed_expr+=\"|^scala/jdk/\"\nallowed_expr+=\"|^scala/util/\"\n\nallowed_expr+=\")\"\ndeclare -i bad_artifacts=0\ndeclare -a bad_contents\ndeclare -a artifact_list\nwhile IFS='' read -r -d ';' line; do artifact_list+=(\"$line\"); done < <(printf '%s;' \"$1\")\nif [ \"${#artifact_list[@]}\" -eq 0 ]; then\n  echo \"[ERROR] No artifacts passed in.\"\n  exit 1\nfi\n\njar_list_failed ()\n{\n    echo \"[ERROR] Listing jar contents for file '${artifact}' failed.\"\n    exit 1\n}\ntrap jar_list_failed SIGUSR1\n\nfor artifact in \"${artifact_list[@]}\"; do\n  bad_contents=()\n  # Note: On Windows the output from jar tf may contain \\r\\n's.  Normalize to \\n.\n  while IFS='' read -r line; do bad_contents+=(\"$line\"); done < <( ( jar tf \"${artifact}\" | sed 's/\\\\r//' || kill -SIGUSR1 $$ ) | grep -v -E \"${allowed_expr}\" )\n  if [ ${#bad_contents[@]} -gt 0 ]; then\n    echo \"[ERROR] Found artifact with unexpected contents: '${artifact}'\"\n    echo \"    Please check the following and either correct the build or update\"\n    echo \"    the allowed list with reasoning.\"\n    echo \"\"\n    for bad_line in \"${bad_contents[@]}\"; do\n      echo \"    ${bad_line}\"\n    done\n    bad_artifacts=${bad_artifacts}+1\n  else\n    echo \"[INFO] Artifact looks correct: '$(basename \"${artifact}\")'\"\n  fi\ndone\n\nif [ \"${bad_artifacts}\" -gt 0 ]; then\n  exit 1\nfi\n"
  },
  {
    "path": "dev/generate-release-docs.sh",
    "content": "#!/usr/bin/env bash\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# This script generates documentation content for a release branch.\n# It should be run once when creating a new release branch to \"freeze\"\n# the generated docs (configs, compatibility matrices) into the branch.\n#\n# Usage: ./dev/generate-release-docs.sh\n#\n# This script:\n# 1. Compiles the spark module to access CometConf and CometCast\n# 2. Runs GenerateDocs to populate the template markers in the docs\n# 3. The resulting changes should be committed to the release branch\n#\n# Example workflow when cutting release 0.13.0:\n#   git checkout -b branch-0.13 main\n#   ./dev/generate-release-docs.sh\n#   git add docs/source/user-guide/latest/\n#   git commit -m \"Generate docs for 0.13.0 release\"\n#   git push origin branch-0.13\n\nset -e\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" && pwd)\"\nPROJECT_ROOT=\"$(cd \"${SCRIPT_DIR}/..\" && pwd)\"\n\ncd \"${PROJECT_ROOT}\"\n\necho \"Compiling and generating documentation content...\"\n./mvnw package -Pgenerate-docs -DskipTests -Dmaven.test.skip=true\n\necho \"\"\necho \"Done! Generated documentation content in docs/source/user-guide/latest/\"\necho \"\"\necho \"Next steps:\"\necho \"  git add docs/source/user-guide/latest/\"\necho \"  git commit -m 'Generate docs for release'\"\necho \"  git push\"\n"
  },
  {
    "path": "dev/regenerate-golden-files.sh",
    "content": "#!/usr/bin/env bash\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Script to regenerate golden files for plan stability testing.\n# This script must be run from the root of the Comet repository.\n#\n# Usage: ./dev/regenerate-golden-files.sh [--spark-version <version>]\n#\n# Options:\n#   --spark-version <version>  Only regenerate for specified Spark version (3.4, 3.5, or 4.0)\n#                              If not specified, regenerates for all versions.\n#\n# Examples:\n#   ./dev/regenerate-golden-files.sh              # Regenerate for all Spark versions\n#   ./dev/regenerate-golden-files.sh --spark-version 3.5  # Regenerate only for Spark 3.5\n\nset -e\nset -o pipefail\n\n# Check for JDK 17 or later (required for Spark 4.0)\ncheck_jdk_version() {\n    if [ -z \"$JAVA_HOME\" ]; then\n        echo \"[ERROR] JAVA_HOME is not set\"\n        exit 1\n    fi\n\n    java_version=$(\"$JAVA_HOME/bin/java\" -version 2>&1 | head -n 1 | cut -d'\"' -f2 | cut -d'.' -f1)\n\n    # Handle both \"17\" and \"17.0.x\" formats\n    if [[ \"$java_version\" =~ ^1\\. ]]; then\n        # Old format like 1.8.0 -> extract 8\n        java_version=$(echo \"$java_version\" | cut -d'.' -f2)\n    fi\n\n    if [ \"$java_version\" -lt 17 ]; then\n        echo \"[ERROR] JDK 17 or later is required for Spark 4.0 compatibility\"\n        echo \"[ERROR] Current JDK version: $java_version\"\n        echo \"[ERROR] Please set JAVA_HOME to point to JDK 17 or later\"\n        exit 1\n    fi\n\n    echo \"[INFO] JDK version check passed: version $java_version\"\n}\n\n# Check if running from repo root\ncheck_repo_root() {\n    if [ ! -f \"pom.xml\" ] || [ ! -d \"spark\" ] || [ ! 
-d \"native\" ]; then\n        echo \"[ERROR] This script must be run from the root of the Comet repository\"\n        exit 1\n    fi\n}\n\n# Build native code\nbuild_native() {\n    echo \"\"\n    echo \"==============================================\"\n    echo \"[INFO] Building native code\"\n    echo \"==============================================\"\n    cd native && cargo build && cd ..\n}\n\n# Regenerate golden files for a specific Spark version\nregenerate_golden_files() {\n    local spark_version=$1\n\n    echo \"\"\n    echo \"==============================================\"\n    echo \"[INFO] Regenerating golden files for Spark $spark_version\"\n    echo \"==============================================\"\n\n    echo \"[INFO] Running CometTPCDSV1_4_PlanStabilitySuite...\"\n    SPARK_GENERATE_GOLDEN_FILES=1 ./mvnw \\\n        -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV1_4_PlanStabilitySuite\" \\\n        -Pspark-$spark_version -nsu test\n\n    echo \"[INFO] Running CometTPCDSV2_7_PlanStabilitySuite...\"\n    SPARK_GENERATE_GOLDEN_FILES=1 ./mvnw \\\n        -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\" \\\n        -Pspark-$spark_version -nsu test\n}\n\n# Main script\nmain() {\n    local target_version=\"\"\n\n    # Parse command line arguments\n    while [[ $# -gt 0 ]]; do\n        case $1 in\n            --spark-version)\n                target_version=\"$2\"\n                shift 2\n                ;;\n            -h|--help)\n                echo \"Usage: $0 [--spark-version <version>]\"\n                echo \"\"\n                echo \"Options:\"\n                echo \"  --spark-version <version>  Only regenerate for specified Spark version (3.4, 3.5, or 4.0)\"\n                echo \"                             If not specified, regenerates for all versions.\"\n                exit 0\n                ;;\n            *)\n                echo \"[ERROR] Unknown option: $1\"\n                echo \"Use --help for usage information\"\n                exit 1\n                ;;\n        esac\n    done\n\n    # Validate target version if specified\n    if [ -n \"$target_version\" ]; then\n        if [[ ! \"$target_version\" =~ ^(3\\.4|3\\.5|4\\.0)$ ]]; then\n            echo \"[ERROR] Invalid Spark version: $target_version\"\n            echo \"[ERROR] Supported versions: 3.4, 3.5, 4.0\"\n            exit 1\n        fi\n    fi\n\n    check_repo_root\n    check_jdk_version\n\n    # Set SPARK_HOME to current directory (required for golden file output)\n    export SPARK_HOME=$(pwd)\n    echo \"[INFO] SPARK_HOME set to: $SPARK_HOME\"\n\n    # Build native code first\n    build_native\n\n    # Determine which versions to process\n    local versions\n    if [ -n \"$target_version\" ]; then\n        versions=(\"$target_version\")\n    else\n        versions=(\"3.4\" \"3.5\" \"4.0\")\n    fi\n\n    # Regenerate for each version\n    for version in \"${versions[@]}\"; do\n        regenerate_golden_files \"$version\"\n    done\n\n    echo \"\"\n    echo \"==============================================\"\n    echo \"[INFO] Golden file regeneration complete!\"\n    echo \"==============================================\"\n    echo \"\"\n    echo \"The golden files have been updated in:\"\n    echo \"  spark/src/test/resources/tpcds-plan-stability/\"\n    echo \"\"\n    echo \"Please review the changes with 'git diff' before committing.\"\n}\n\nmain \"$@\"\n"
  },
  {
    "path": "dev/release/build-release-comet.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\nset -e\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" >/dev/null && pwd)\"\nCOMET_HOME_DIR=$SCRIPT_DIR/../..\n\nfunction usage {\n  local NAME=$(basename $0)\n  cat <<EOF\nUsage: $NAME [options]\n\nThis script builds comet native binaries inside a docker image. The image is named\n\"comet-rm\" and will be generated by this script\n\nOptions are:\n\n  -r [repo]   : git repo (default: ${REPO})\n  -b [branch] : git branch (default: ${BRANCH})\n  -t [tag]    : tag for the spark-rm docker image to use for building (default: \"latest\").\nEOF\nexit 1\n}\n\nfunction cleanup()\n{\n  if [ $CLEANUP != 0 ]\n  then\n    echo Cleaning up ...\n    if [ \"$(docker ps -a | grep comet-arm64-builder-container)\" != \"\" ]\n    then\n      docker rm comet-arm64-builder-container\n    fi\n    if [ \"$(docker ps -a | grep comet-amd64-builder-container)\" != \"\" ]\n    then\n      docker rm comet-amd64-builder-container\n    fi\n    CLEANUP=0\n  fi\n}\n\ntrap cleanup SIGINT SIGTERM EXIT\n\nCLEANUP=1\n\nREPO=\"https://github.com/apache/datafusion-comet.git\"\nBRANCH=\"release\"\nMACOS_SDK=\nHAS_MACOS_SDK=\"false\"\nIMGTAG=latest\n\nwhile getopts \"b:hr:t:\" opt; do\n  case $opt in\n    r) REPO=\"$OPTARG\";;\n    b) BRANCH=\"$OPTARG\";;\n    t) IMGTAG=\"$OPTARG\" ;;\n    h) usage ;;\n    \\?) error \"Invalid option. Run with -h for help.\" ;;\n  esac\ndone\n\necho \"Building binaries from $REPO/$BRANCH\"\n\n# Check Java version\nJAVA_VERSION=$(java -version 2>&1 | awk -F '\"' '/version/ {print $2}' | awk -F '.' '{print $1}')\nif [ \"$JAVA_VERSION\" -lt 17 ]; then\n  echo \"Error: Java version must be at least 17. Current version: $(java -version 2>&1 | head -n 1)\"\n  exit 1\nfi\necho \"Java version check passed: $JAVA_VERSION\"\n\nWORKING_DIR=\"$SCRIPT_DIR/comet-rm/workdir\"\ncp $SCRIPT_DIR/../cargo.config $WORKING_DIR\n\n# TODO: Search for Xcode (Once building macos binaries works)\n#PS3=\"Select Xcode:\"\n#select xcode_path in  `find . 
-name \"${MACOS_SDK}\"`\n#do\n#  echo \"found Xcode in $xcode_path\"\n#  cp $xcode_path $WORKING_DIR\n#  break\n#done\n\nif [ -f \"${WORKING_DIR}/${MACOS_SDK}\" ]\nthen\n  HAS_MACOS_SDK=\"true\"\nfi\n\nBUILDER_IMAGE_ARM64=\"comet-rm-arm64:$IMGTAG\"\nBUILDER_IMAGE_AMD64=\"comet-rm-amd64:$IMGTAG\"\n\n# Build the docker image in which we will do the build\ndocker build --no-cache \\\n  --platform=linux/arm64 \\\n  -t \"$BUILDER_IMAGE_ARM64\" \\\n  --build-arg HAS_MACOS_SDK=${HAS_MACOS_SDK} \\\n  --build-arg MACOS_SDK=${MACOS_SDK} \\\n  \"$SCRIPT_DIR/comet-rm\"\n\ndocker build --no-cache \\\n  --platform=linux/amd64 \\\n  -t \"$BUILDER_IMAGE_AMD64\" \\\n  --build-arg HAS_MACOS_SDK=${HAS_MACOS_SDK} \\\n  --build-arg MACOS_SDK=${MACOS_SDK} \\\n  \"$SCRIPT_DIR/comet-rm\"\n\n# Clean previous Java build\npushd $COMET_HOME_DIR && ./mvnw clean && popd\n\n# Run the builder container for each architecture. The entrypoint script will build the binaries\n\n# AMD64\necho \"Building amd64 binary\"\ndocker run \\\n   --name comet-amd64-builder-container \\\n   --memory 24g \\\n   --cpus 6 \\\n   -it \\\n   --platform linux/amd64 \\\n   $BUILDER_IMAGE_AMD64 \"${REPO}\" \"${BRANCH}\" amd64\n\nif [ $? != 0 ]\nthen\n  echo \"Building amd64 binary failed.\"\n  exit 1\nfi\n\n# ARM64\necho \"Building arm64 binary\"\ndocker run \\\n   --name comet-arm64-builder-container \\\n   --memory 24g \\\n   --cpus 6 \\\n   -it \\\n   --platform linux/arm64 \\\n   $BUILDER_IMAGE_ARM64 \"${REPO}\" \"${BRANCH}\" arm64\n\nif [ $? != 0 ]\nthen\n  echo \"Building arm64 binary failed.\"\n  exit 1\nfi\n\necho \"Building binaries completed\"\necho \"Copying to java build directories\"\n\nJVM_TARGET_DIR=$COMET_HOME_DIR/common/target/classes/org/apache/comet\nmkdir -p $JVM_TARGET_DIR\n\nmkdir -p $JVM_TARGET_DIR/linux/amd64\ndocker cp \\\n  comet-amd64-builder-container:\"/opt/comet-rm/comet/native/target/release/libcomet.so\" \\\n  $JVM_TARGET_DIR/linux/amd64/\n\nif [ \"$HAS_MACOS_SDK\" == \"true\" ]\nthen\n  mkdir -p $JVM_TARGET_DIR/darwin/x86_64\n  docker cp \\\n    comet-amd64-builder-container:\"/opt/comet-rm/comet/native/target/x86_64-apple-darwin/release/libcomet.dylib\" \\\n    $JVM_TARGET_DIR/darwin/x86_64/\nfi\n\nmkdir -p $JVM_TARGET_DIR/linux/aarch64\ndocker cp \\\n  comet-arm64-builder-container:\"/opt/comet-rm/comet/native/target/release/libcomet.so\" \\\n  $JVM_TARGET_DIR/linux/aarch64/\n\nif [ \"$HAS_MACOS_SDK\" == \"true\" ]\nthen\n  mkdir -p $JVM_TARGET_DIR/linux/aarch64\n  docker cp \\\n    comet-arm64-builder-container:\"/opt/comet-rm/comet/native/target/aarch64-apple-darwin/release/libcomet.dylib\" \\\n    $JVM_TARGET_DIR/darwin/aarch64/\nfi\n\n# Build final jar\necho \"Building uber jar and publishing it locally\"\npushd $COMET_HOME_DIR\n\nGIT_HASH=$(git rev-parse --short HEAD)\nLOCAL_REPO=$(mktemp -d /tmp/comet-staging-repo-XXXXX)\n\n./mvnw  \"-Dmaven.repo.local=${LOCAL_REPO}\" -P spark-3.4 -P scala-2.12  -DskipTests install\n./mvnw  \"-Dmaven.repo.local=${LOCAL_REPO}\" -P spark-3.4 -P scala-2.13  -DskipTests install\n./mvnw  \"-Dmaven.repo.local=${LOCAL_REPO}\" -P spark-3.5 -P scala-2.12  -DskipTests install\n./mvnw  \"-Dmaven.repo.local=${LOCAL_REPO}\" -P spark-3.5 -P scala-2.13  -DskipTests install\n./mvnw  \"-Dmaven.repo.local=${LOCAL_REPO}\" -P spark-4.0 -P scala-2.13  -DskipTests install\n\necho \"Installed to local repo: ${LOCAL_REPO}\"\n\npopd\n"
  },
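A minimal sketch of invoking the build script above, assuming it lives at `dev/release/build-release-comet.sh` (the name the other scripts in this directory reference); the repo and branch shown are the script's documented defaults and the `-t` tag is illustrative:

```bash
# Build the native binaries and the uber jars from the default repo/branch.
# Requires Docker with linux/amd64 and linux/arm64 platform support and Java 17+.
./dev/release/build-release-comet.sh \
  -r https://github.com/apache/datafusion-comet.git \
  -b release \
  -t 0.10.0
```

On success the script prints the temporary local Maven repository path ("Installed to local repo: ..."), which is the value that `publish-to-maven.sh -r` expects later in the release process.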
  {
    "path": "dev/release/check-rat-report.py",
    "content": "#!/usr/bin/python\n##############################################################################\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n##############################################################################\nimport fnmatch\nimport re\nimport sys\nimport xml.etree.ElementTree as ET\n\nif len(sys.argv) != 3:\n    sys.stderr.write(\"Usage: %s exclude_globs.lst rat_report.xml\\n\" %\n                     sys.argv[0])\n    sys.exit(1)\n\nexclude_globs_filename = sys.argv[1]\nxml_filename = sys.argv[2]\n\nglobs = [line.strip() for line in open(exclude_globs_filename, \"r\")]\n\ntree = ET.parse(xml_filename)\nroot = tree.getroot()\nresources = root.findall('resource')\n\nall_ok = True\nfor r in resources:\n    approvals = r.findall('license-approval')\n    if not approvals or approvals[0].attrib['name'] == 'true':\n        continue\n    clean_name = re.sub('^[^/]+/', '', r.attrib['name'])\n    excluded = False\n    for g in globs:\n        if fnmatch.fnmatch(clean_name, g):\n            excluded = True\n            break\n    if not excluded:\n        sys.stdout.write(\"NOT APPROVED: %s (%s): %s\\n\" % (\n            clean_name, r.attrib['name'], approvals[0].attrib['name']))\n        all_ok = False\n\nif not all_ok:\n    sys.exit(1)\n\nprint('OK')\nsys.exit(0)\n"
  },
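For reference, a sketch of driving this checker by hand, mirroring the call made by `run-rat.sh` below (the `rat.txt` input is a RAT XML report generated beforehand):

```bash
# Prints OK and exits 0 when every non-excluded file has an approved license;
# prints "NOT APPROVED: ..." lines and exits 1 otherwise.
python3 dev/release/check-rat-report.py \
  dev/release/rat_exclude_files.txt rat.txt
```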
  {
    "path": "dev/release/comet-rm/Dockerfile",
    "content": "#\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\nARG HAS_MACOS_SDK=\"false\"\n\nFROM ubuntu:20.04 AS base\n\nUSER root\n\n# For apt to be noninteractive\nENV DEBIAN_FRONTEND=noninteractive\nENV DEBCONF_NONINTERACTIVE_SEEN=true\n\nENV LC_ALL=C\n# Install pr-requisites for rust\nRUN export LC_ALL=C \\\n    && apt-get update \\\n    && apt-get install --no-install-recommends -y \\\n        ca-certificates \\\n        build-essential \\\n        curl \\\n        wget \\\n        git \\\n        llvm \\\n        clang \\\n        libssl-dev \\\n        lzma-dev \\\n        liblzma-dev \\\n        openssh-client \\\n        cmake \\\n        cpio \\\n        libxml2-dev \\\n        patch \\\n        bzip2 \\\n        libbz2-dev \\\n        zlib1g-dev \\\n        default-jdk\n\nRUN apt install -y gcc-10 g++-10 cpp-10 unzip\nENV CC=\"gcc-10\"\nENV CXX=\"g++-10\"\n\nRUN PB_REL=\"https://github.com/protocolbuffers/protobuf/releases\" \\\n    && curl -LO $PB_REL/download/v30.2/protoc-30.2-linux-x86_64.zip \\\n    && unzip protoc-30.2-linux-x86_64.zip -d /root/.local\nENV PATH=\"$PATH:/root/.local/bin\"\n\n# Install rust\nRUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y\nENV PATH=\"/root/.cargo/bin:${PATH}\"\nRUN cargo install cargo2junit\n\n# Stage to add OSXCross if MacOSSDK is provided\nFROM base AS with-macos-sdk-true\nARG MACOS_SDK\n\nCOPY workdir/$MACOS_SDK /opt/xcode/\n\nRUN if [ \"$TARGETPLATFORM\" = \"linux/arm64\" ]; then \\\n    rustup target add aarch64-apple-darwin; \\\nelif [ \"$TARGETPLATFORM\" = \"linux/amd64\" ]; then \\\n    rustup target add x86_64-apple-darwin; \\\nfi\n\n# Build OSXCross\nRUN cd /opt && git clone --depth 1 https://github.com/tpoechtrager/osxcross.git \\\n    && cd /opt/osxcross \\\n    && ./tools/gen_sdk_package_pbzx.sh /opt/xcode/${MACOS_SDK} \\\n    && cd .. \\\n    && cp /opt/osxcross/*.tar.xz tarballs \\\n    && UNATTENDED=1 ./build.sh\nENV PATH=\"/opt/osxcross/target/bin:${PATH}\"\n# Use osxcross toolchain for cargo\nCOPY workdir/cargo.config /root/.cargo/config\nENV HAS_OSXCROSS=\"true\"\n\n# Placeholder Stage if MacOSSDK is not provided\nFROM base AS with-macos-sdk-false\nRUN echo \"Building without MacOS\"\n\n\nFROM with-macos-sdk-${HAS_MACOS_SDK} AS final\n\nCOPY build-comet-native-libs.sh /opt/comet-rm/build-comet-native-libs.sh\nWORKDIR /opt/comet-rm\n\nENTRYPOINT [ \"/opt/comet-rm/build-comet-native-libs.sh\"]\n"
  },
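A sketch of building this image by hand, mirroring the `docker build` invocations in the release script above (the values shown follow the no-SDK path, and the image tag is illustrative):

```bash
# Build the arm64 builder image without a macOS SDK; HAS_MACOS_SDK=false
# selects the with-macos-sdk-false placeholder stage.
docker build --platform=linux/arm64 \
  --build-arg HAS_MACOS_SDK=false \
  --build-arg MACOS_SDK= \
  -t comet-rm-arm64:latest \
  dev/release/comet-rm
```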
  {
    "path": "dev/release/comet-rm/build-comet-native-libs.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\n# builds a comet binary\n\nset -e\n\nREPO=$1\nBRANCH=$2\nARCH=$3\n\nfunction usage {\n  local NAME=$(basename $0)\n  echo \"Usage: ${NAME} [git repo] [branch] [arm64 | amd64]\"\n  exit 1\n}\n\nif [ $# -ne 3 ]\nthen\n  usage\nfi\n\nif [ \"$ARCH\" != \"arm64\" ] && [ \"$ARCH\" != \"amd64\" ]\nthen\n  usage\nfi\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" >/dev/null && pwd)\"\n\necho \"Building architecture: ${ARCH} for ${REPO}/${BRANCH}\"\nrm -fr comet\ngit clone \"$REPO\" comet\ncd comet\ngit checkout \"$BRANCH\"\n\n# build comet binaries\n\nmake core-${ARCH}-libs\n"
  },
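Since this script is the image entrypoint, its three positional arguments are supplied on `docker run`; a sketch mirroring the parent release script (image tag illustrative):

```bash
# Clone the given repo/branch inside the container and build the amd64 library.
docker run --rm --platform linux/amd64 comet-rm-amd64:latest \
  https://github.com/apache/datafusion-comet.git release amd64
```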
  {
    "path": "dev/release/create-tarball.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\n# Adapted from https://github.com/apache/arrow-rs/tree/master/dev/release/create-tarball.sh\n\n# This script creates a signed tarball in\n# dev/dist/apache-datafusion-comet-<version>-<sha>.tar.gz and uploads it to\n# the \"dev\" area of the dist.apache.datafusion repository and prepares an\n# email for sending to the dev@datafusion.apache.org list for a formal\n# vote.\n#\n# See release/README.md for full release instructions\n#\n# Requirements:\n#\n# 1. gpg setup for signing and have uploaded your public\n# signature to https://pgp.mit.edu/\n#\n# 2. Logged into the apache svn server with the appropriate\n# credentials\n#\n# 3. Install the requests python package\n#\n#\n# Based in part on 02-source.sh from apache/arrow\n#\n\nset -e\n\nDEV_RELEASE_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" && pwd)\"\nDEV_RELEASE_TOP_DIR=\"$(cd \"${DEV_RELEASE_DIR}/../../\" && pwd)\"\n\nif [ \"$#\" -ne 2 ]; then\n    echo \"Usage: $0 <version> <rc>\"\n    echo \"ex. $0 4.1.0 2\"\n    exit\nfi\n\nversion=$1\nrc=$2\ntag=\"${version}-rc${rc}\"\n\necho \"Attempting to create ${tarball} from tag ${tag}\"\nrelease_hash=$(cd \"${DEV_RELEASE_TOP_DIR}\" && git rev-list --max-count=1 ${tag})\n\nrelease=apache-datafusion-comet-${version}\ndistdir=${DEV_RELEASE_TOP_DIR}/dev/dist/${release}-rc${rc}\ntarname=${release}.tar.gz\ntarball=${distdir}/${tarname}\nurl=\"https://dist.apache.org/repos/dist/dev/datafusion/${release}-rc${rc}\"\n\nif [ -z \"$release_hash\" ]; then\n    echo \"Cannot continue: unknown git tag: ${tag}\"\nfi\n\necho \"Draft email for dev@datafusion.apache.org mailing list\"\necho \"\"\necho \"---------------------------------------------------------\"\ncat <<MAIL\nTo: dev@datafusion.apache.org\nSubject: [VOTE] Release Apache DataFusion Comet ${version} RC${rc}\nHi,\n\nI would like to propose a release of Apache DataFusion Comet version ${version}.\n\nThis release candidate is based on commit: ${release_hash} [1]\nThe proposed release tarball and signatures are hosted at [2].\nPre-built jar files are available in a Maven staging repository [3].\nThe changelog is located at [4].\n\nPlease download, verify checksums and signatures, run the unit tests, and vote\non the release. 
The vote will be open for at least 72 hours.\n\nOnly votes from PMC members are binding, but all members of the community are\nencouraged to test the release and vote with \"(non-binding)\".\n\nThe standard verification procedure is documented at https://github.com/apache/datafusion-comet/blob/main/dev/release/verifying-release-candidates.md\n\n[ ] +1 Release this as Apache DataFusion Comet ${version}\n[ ] +0\n[ ] -1 Do not release this as Apache DataFusion Comet ${version} because...\n\nHere is my vote:\n\n+1\n\n[1]: https://github.com/apache/datafusion-comet/tree/${release_hash}\n[2]: ${url}\n[3]: https://repository.apache.org/#nexus-search;quick~org.apache.datafusion\n[4]: https://github.com/apache/datafusion-comet/blob/${release_hash}/CHANGELOG.md\nMAIL\necho \"---------------------------------------------------------\"\n\n\n# create <tarball> containing the files in git at $release_hash\n# the files in the tarball are prefixed with {version} (e.g. 4.0.1)\nmkdir -p ${distdir}\n(cd \"${DEV_RELEASE_TOP_DIR}\" && git archive ${release_hash} --prefix ${release}/ | gzip > ${tarball})\n\necho \"Running rat license checker on ${tarball}\"\n${DEV_RELEASE_DIR}/run-rat.sh ${tarball}\n\necho \"Signing tarball and creating checksums\"\ngpg --pinentry-mode loopback --armor --output ${tarball}.asc --detach-sig ${tarball}\n# create signing with relative path of tarball\n# so that they can be verified with a command such as\n#  shasum --check apache-datafusion-comet-0.1.0-rc1.tar.gz.sha512\n(cd ${distdir} && shasum -a 256 ${tarname}) > ${tarball}.sha256\n(cd ${distdir} && shasum -a 512 ${tarname}) > ${tarball}.sha512\n\n\necho \"Uploading to datafusion dist/dev to ${url}\"\nsvn co --depth=empty https://dist.apache.org/repos/dist/dev/datafusion ${DEV_RELEASE_TOP_DIR}/dev/dist\nsvn add ${distdir}\nsvn ci -m \"Apache DataFusion Comet ${version} ${rc}\" ${distdir}\n"
  },
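A sketch of a typical run, using the version and RC number from the script's own usage example; the checksum spot-check mirrors the comment embedded in the script:

```bash
# Create, sign, and upload the source tarball for 4.1.0 RC2.
./dev/release/create-tarball.sh 4.1.0 2

# Verify the generated checksum from inside the dist directory, since the
# checksum files reference the tarball by its relative name.
cd dev/dist/apache-datafusion-comet-4.1.0-rc2
shasum --check apache-datafusion-comet-4.1.0.tar.gz.sha512
```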
  {
    "path": "dev/release/generate-changelog.py",
    "content": "#!/usr/bin/env python\n\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n\nimport argparse\nimport sys\nfrom github import Github\nimport os\nimport re\nimport subprocess\n\ndef print_pulls(repo_name, title, pulls):\n    if len(pulls)  > 0:\n        print(\"**{}:**\".format(title))\n        print()\n        for (pull, commit) in pulls:\n            url = \"https://github.com/{}/pull/{}\".format(repo_name, pull.number)\n            print(\"- {} [#{}]({}) ({})\".format(pull.title, pull.number, url, commit.author.login))\n        print()\n\n\ndef generate_changelog(repo, repo_name, tag1, tag2, version):\n\n    # get a list of commits between two tags\n    print(f\"Fetching list of commits between {tag1} and {tag2}\", file=sys.stderr)\n    comparison = repo.compare(tag1, tag2)\n\n    # get the pull requests for these commits\n    print(\"Fetching pull requests\", file=sys.stderr)\n    unique_pulls = []\n    all_pulls = []\n    for commit in comparison.commits:\n        pulls = commit.get_pulls()\n        for pull in pulls:\n            # there can be multiple commits per PR if squash merge is not being used and\n            # in this case we should get all the author names, but for now just pick one\n            if pull.number not in unique_pulls:\n                unique_pulls.append(pull.number)\n                all_pulls.append((pull, commit))\n\n    # we split the pulls into categories\n    breaking = []\n    bugs = []\n    docs = []\n    enhancements = []\n    performance = []\n    other = []\n\n    # categorize the pull requests based on GitHub labels\n    print(\"Categorizing pull requests\", file=sys.stderr)\n    for (pull, commit) in all_pulls:\n\n        # see if PR title uses Conventional Commits\n        cc_type = ''\n        cc_scope = ''\n        cc_breaking = ''\n        parts = re.findall(r'^([a-z]+)(\\([a-z]+\\))?(!)?:', pull.title)\n        if len(parts) == 1:\n            parts_tuple = parts[0]\n            cc_type = parts_tuple[0] # fix, feat, docs, chore\n            cc_scope = parts_tuple[1] # component within project\n            cc_breaking = parts_tuple[2] == '!'\n\n        labels = [label.name for label in pull.labels]\n        if 'api change' in labels or cc_breaking:\n            breaking.append((pull, commit))\n        elif 'performance' in labels or cc_type == 'perf':\n            performance.append((pull, commit))\n        elif 'bug' in labels or cc_type == 'fix':\n            bugs.append((pull, commit))\n        elif 'enhancement' in labels or cc_type == 'feat':\n            enhancements.append((pull, commit))\n        elif 'documentation' in labels or cc_type == 'docs' or cc_type == 'doc':\n            docs.append((pull, commit))\n        else:\n            other.append((pull, commit))\n\n    # produce the 
changelog content\n    print(\"Generating changelog content\", file=sys.stderr)\n\n    # ASF header\n    print(\"\"\"<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\\n\"\"\")\n\n    print(f\"# DataFusion Comet {version} Changelog\\n\")\n\n    # get the number of commits\n    commit_count = subprocess.check_output(f\"git log --pretty=oneline {tag1}..{tag2} | wc -l\", shell=True, text=True).strip()\n\n    # get number of contributors\n    contributor_count = subprocess.check_output(f\"git shortlog -sn {tag1}..{tag2} | wc -l\", shell=True, text=True).strip()\n\n    print(f\"This release consists of {commit_count} commits from {contributor_count} contributors. \"\n          f\"See credits at the end of this changelog for more information.\\n\")\n\n    print_pulls(repo_name, \"Breaking changes\", breaking)\n    print_pulls(repo_name, \"Fixed bugs\", bugs)\n    print_pulls(repo_name, \"Performance related\", performance)\n    print_pulls(repo_name, \"Implemented enhancements\", enhancements)\n    print_pulls(repo_name, \"Documentation updates\", docs)\n    print_pulls(repo_name, \"Other\", other)\n\n    # show code contributions\n    credits = subprocess.check_output(f\"git shortlog -sn {tag1}..{tag2}\", shell=True, text=True).rstrip()\n\n    print(\"## Credits\\n\")\n    print(\"Thank you to everyone who contributed to this release. Here is a breakdown of commits (PRs merged) \"\n          \"per contributor.\\n\")\n    print(\"```\")\n    print(credits)\n    print(\"```\\n\")\n\n    print(\"Thank you also to everyone who contributed in other ways such as filing issues, reviewing \"\n          \"PRs, and providing feedback on this release.\\n\")\n\ndef resolve_ref(ref):\n    \"\"\"Resolve a git ref (e.g. HEAD, branch name) to a full commit SHA.\"\"\"\n    try:\n        return subprocess.check_output(\n            [\"git\", \"rev-parse\", ref], text=True\n        ).strip()\n    except subprocess.CalledProcessError:\n        # If it can't be resolved locally, return as-is (e.g. a tag name\n        # that the GitHub API can resolve)\n        return ref\n\n\ndef cli(args=None):\n    \"\"\"Process command line arguments.\"\"\"\n    if not args:\n        args = sys.argv[1:]\n\n    parser = argparse.ArgumentParser()\n    parser.add_argument(\"tag1\", help=\"The previous commit or tag (e.g. 0.1.0)\")\n    parser.add_argument(\"tag2\", help=\"The current commit or tag (e.g. HEAD)\")\n    parser.add_argument(\"version\", help=\"The version number to include in the changelog\")\n    args = parser.parse_args()\n\n    # Resolve refs to SHAs so the GitHub API compares the same commits\n    # as the local git log. 
Without this, refs like HEAD get resolved by\n    # the GitHub API to the default branch instead of the current branch.\n    tag1 = resolve_ref(args.tag1)\n    tag2 = resolve_ref(args.tag2)\n\n    token = os.getenv(\"GITHUB_TOKEN\")\n    project = \"apache/datafusion-comet\"\n\n    g = Github(token)\n    repo = g.get_repo(project)\n    generate_changelog(repo, project, tag1, tag2, args.version)\n\nif __name__ == \"__main__\":\n    cli()"
  },
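A sketch of generating a changelog with this script (version numbers illustrative); `GITHUB_TOKEN` must hold a valid GitHub API token, and the PyGitHub package from `requirements.txt` must be installed:

```bash
# The changelog markdown goes to stdout; progress messages go to stderr.
export GITHUB_TOKEN="..."   # placeholder; substitute a real token
python3 dev/release/generate-changelog.py 0.8.0 HEAD 0.9.0 > changelog-0.9.0.md
```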
  {
    "path": "dev/release/publish-to-maven.sh",
    "content": "#!/bin/bash\n\n#\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n###\n# This file is based on the script release-build.sh in Spark\n###\n\nfunction usage {\n  local NAME=$(basename $0)\n  cat << EOF\nusage: $NAME options\n\nPublish signed artifacts to Maven.\n\nOptions\n  -u ASF_USERNAME - Username of ASF committer account\n  -r LOCAL_REPO - path to temporary local maven repo (created and written to by 'build-release-comet.sh')\n\nThe following will be prompted for -\n  ASF_PASSWORD - Password of ASF committer account\n  GPG_KEY - GPG key used to sign release artifacts\n  GPG_PASSPHRASE - Passphrase for GPG key\nEOF\n  exit 1\n}\n\nSCRIPT_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]}\")\" >/dev/null && pwd)\"\nCOMET_HOME_DIR=$SCRIPT_DIR/../..\n\nASF_USERNAME=\"\"\nASF_PASSWORD=\"\"\n\nNEXUS_ROOT=https://repository.apache.org/service/local/staging\nNEXUS_PROFILE=789e15c00fd47\n\nwhile getopts \"u:r:h\" opt; do\n  case $opt in\n    u) ASF_USERNAME=\"$OPTARG\" ;;\n    r) LOCAL_REPO=\"$OPTARG\" ;;\n    h) usage ;;\n    \\?) error \"Invalid option. Run with -h for help.\" ;;\n  esac\ndone\n\nif [ \"$LOCAL_REPO\" == \"\" ]\nthen\n  \"Please provide the local Maven repository (from 'build-release-comet.sh')\"\n  usage\nfi\n\nif [ \"$ASF_USERNAME\" == \"\" ]\nthen\n  read -p \"ASF Username : \" ASF_USERNAME && echo \"\"\nfi\n# Read some secret information\nread -s -p \"ASF Password : \" ASF_PASSWORD && echo \"\"\nread -s -p \"GPG Key (Optional): \" GPG_KEY && echo \"\"\nread -s -p \"GPG Passphrase : \" GPG_PASSPHRASE && echo \"\"\n\nif [ \"$ASF_USERNAME\" == \"\" ] || [ \"$ASF_PASSWORD\" == \"\" ] || [ \"$GPG_PASSPHRASE\" = \"\" ]\nthen\n  echo \"Missing credentials\"\n  exit 1\nfi\n\n# Default GPG command to use\nGPG=\"gpg --pinentry-mode loopback\"\nif [ \"$GPG_KEY\" != \"\" ]\nthen\n  GPG=\"$GPG -u $GPG_KEY\"\nfi\n\n# sha1sum on linux, shasum on macos\nSHA1SUM=$( which sha1sum || which shasum)\n\nGIT_HASH=$(git rev-parse --short HEAD)\n\n# REF: https://support.sonatype.com/hc/en-us/articles/213465818-How-can-I-programmatically-upload-an-artifact-into-Nexus-Repo-2\n# REF: https://support.sonatype.com/hc/en-us/articles/213465868-Uploading-to-a-Nexus-Repository-2-Staging-Repository-via-REST-API\necho \"Creating Nexus staging repository\"\n\nREPO_REQUEST=\"<promoteRequest><data><description>Apache Datafusion Comet $COMET_VERSION (commit $GIT_HASH)</description></data></promoteRequest>\"\nREPO_REQUEST_RESPONSE=$(curl -X POST -d \"$REPO_REQUEST\" -u $ASF_USERNAME:$ASF_PASSWORD \\\n  -H \"Content-Type:application/xml\"  \\\n  $NEXUS_ROOT/profiles/$NEXUS_PROFILE/start)\nif [ $? 
-ne 0 ]\nthen\n  echo \"Error creating staged repository\"\n  echo \"$REPO_REQUEST_RESPONSE\"\n  exit 1\nfi\n\nSTAGED_REPO_ID=$(echo $REPO_REQUEST_RESPONSE | xmllint --xpath \"//stagedRepositoryId/text()\" -)\necho \"Created Nexus staging repository: $STAGED_REPO_ID\"\n\nif [ \"$STAGED_REPO_ID\" == \"\" ]\nthen\n  echo \"Error creating staged repository\"\n  echo \"$REPO_REQUEST_RESPONSE\"\n  exit 1\nfi\n\necho \"Deploying artifacts from $LOCAL_REPO\"\n\npushd $LOCAL_REPO/org/apache/datafusion\n\n# Remove any extra files generated during install\nfind . -type f |grep -v \\.jar |grep -v \\.pom | xargs rm\n\necho \"Creating hash and signature files\"\nfor file in $(find . -type f)\ndo\necho $GPG_PASSPHRASE | $GPG --passphrase-fd 0 --output $file.asc \\\n  --detach-sig --armour $file;\nif [ $(command -v md5) ]; then\n  # Available on macOS; -q to keep only hash\n  md5 -q $file > $file.md5\nelse\n  # Available on Linux; cut to keep only hash\n  md5sum $file | cut -f1 -d' ' > $file.md5\nfi\n$SHA1SUM $file | cut -f1 -d' ' > $file.sha1\ndone\n\nNEXUS_UPLOAD=$NEXUS_ROOT/deployByRepositoryId/$STAGED_REPO_ID\necho \"Uploading files to $NEXUS_UPLOAD\"\nfor file in $(find . -type f)\ndo\n  # strip leading ./\n  FILE_SHORT=$(echo $file | sed -e \"s/\\.\\///\")\n  DEST_URL=\"$NEXUS_UPLOAD/org/apache/datafusion/$FILE_SHORT\"\n  echo \"  Uploading $FILE_SHORT\"\n  curl -u $ASF_USERNAME:$ASF_PASSWORD --upload-file $FILE_SHORT $DEST_URL\n  if [ $? -ne 0 ]\n  then\n    echo \"    - Failed\"\n    exit 2\n  fi\ndone\n\necho \"Closing nexus staging repository\"\nREPO_REQUEST=\"<promoteRequest><data><stagedRepositoryId>$STAGED_REPO_ID</stagedRepositoryId><description>Apache Datafusion Comet $COMET_VERSION (commit $GIT_HASH)</description></data></promoteRequest>\"\nREPO_REQUEST_RESPONSE=$(curl -X POST -d \"$REPO_REQUEST\" -u $ASF_USERNAME:$ASF_PASSWORD \\\n  -H \"Content-Type:application/xml\" -v \\\n  $NEXUS_ROOT/profiles/$NEXUS_PROFILE/finish)\necho \"Closed Nexus staging repository: $STAGED_REPO_ID\"\n\npopd"
  },
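A sketch of publishing the staged artifacts, assuming `build-release-comet.sh` has already populated a temporary local repo (the username and path are illustrative; the password and GPG passphrase are prompted for interactively):

```bash
./dev/release/publish-to-maven.sh \
  -u someasfid \
  -r /tmp/comet-staging-repo-Ab1Cd
```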
  {
    "path": "dev/release/rat_exclude_files.txt",
    "content": "*.gitignore\n*.dockerignore\n.github/pull_request_template.md\n.gitmodules\nnative/Cargo.lock\nnative/testdata/backtrace.txt\nnative/testdata/stacktrace.txt\nnative/core/testdata/backtrace.txt\nnative/core/testdata/stacktrace.txt\ndev/copyright/scala-header.txt\ndev/diffs/iceberg/*.diff\ndev/release/requirements.txt\ndev/release/rat_exclude_files.txt\ndocs/comet-*\ndocs/source/_static/images/*.svg\ndocs/source/_static/images/comet-dataflow.excalidraw\ndocs/source/contributor-guide/benchmark-results/**/*.json\ndocs/logos/*.png\ndocs/logos/*.svg\ndocs/source/contributor-guide/*.svg\nrust-toolchain\nspark/src/test/resources/tpcds-extended/q*.sql\nspark/src/test/resources/tpcds-query-results/*.out\nspark/src/test/resources/tpcds-micro-benchmarks/*.sql\nspark/src/test/resources/tpcds-plan-stability/approved-plans*/**/explain.txt\nspark/src/test/resources/tpcds-plan-stability/approved-plans*/**/simplified.txt\nspark/src/test/resources/tpcds-plan-stability/approved-plans*/**/extended.txt\nspark/src/test/resources/tpch-query-results/*.out\nspark/src/test/resources/tpch-extended/q*.sql\nspark/src/test/resources/test-data/*.csv\nspark/src/test/resources/test-data/*.ndjson\nspark/inspections/CometTPC*results.txt\nbenchmarks/tpc/queries/**/*.sql\n"
  },
  {
    "path": "dev/release/release-tarball.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\n# Adapted from https://github.com/apache/arrow-rs/tree/master/dev/release/release-tarball.sh\n\n# This script copies a tarball from the \"dev\" area of the\n# dist.apache.datafusion repository to the \"release\" area\n#\n# This script should only be run after the release has been approved\n# by the Apache DataFusion PMC committee.\n#\n# See release/README.md for full release instructions\n#\n# Based in part on post-01-upload.sh from apache/arrow\n\n\nset -e\nset -u\n\nif [ \"$#\" -ne 2 ]; then\n  echo \"Usage: $0 <version> <rc-num>\"\n  echo \"ex. $0 4.1.0 2\"\n  exit\nfi\n\nversion=$1\nrc=$2\n\nread -r -p \"Proceed to release tarball for ${version}-rc${rc}? [y/N]: \" answer\nanswer=${answer:-no}\nif [ \"${answer}\" != \"y\" ]; then\n  echo \"Cancelled tarball release!\"\n  exit 1\nfi\n\ntmp_dir=tmp-apache-datafusion-comet-dist\n\necho \"Recreate temporary directory: ${tmp_dir}\"\nrm -rf ${tmp_dir}\nmkdir -p ${tmp_dir}\n\necho \"Clone dev dist repository\"\nsvn \\\n  co \\\n  https://dist.apache.org/repos/dist/dev/datafusion/apache-datafusion-comet-${version}-rc${rc} \\\n  ${tmp_dir}/dev\n\necho \"Clone release dist repository\"\nsvn co https://dist.apache.org/repos/dist/release/datafusion ${tmp_dir}/release\n\necho \"Copy ${version}-rc${rc} to release working copy\"\nrelease_version=datafusion-comet-${version}\nmkdir -p ${tmp_dir}/release/${release_version}\ncp -r ${tmp_dir}/dev/* ${tmp_dir}/release/${release_version}/\nsvn add ${tmp_dir}/release/${release_version}\n\necho \"Commit release\"\nsvn ci -m \"Apache DataFusion Comet ${version}\" ${tmp_dir}/release\n\necho \"Clean up\"\nrm -rf ${tmp_dir}\n\necho \"Success! The release is available here:\"\necho \"  https://dist.apache.org/repos/dist/release/datafusion/${release_version}\"\n"
  },
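A sketch of promoting an approved candidate, matching the usage example in the script; this should only be run after the PMC vote passes:

```bash
# Copies the 4.1.0 RC2 tarball from dist/dev to dist/release after a
# y/N confirmation prompt.
./dev/release/release-tarball.sh 4.1.0 2
```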
  {
    "path": "dev/release/requirements.txt",
    "content": "PyGitHub"
  },
  {
    "path": "dev/release/run-rat.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\nRAT_VERSION=0.16.1\n\n# download apache rat\nif [ ! -f apache-rat-${RAT_VERSION}.jar ]; then\n  curl -s https://repo1.maven.org/maven2/org/apache/rat/apache-rat/${RAT_VERSION}/apache-rat-${RAT_VERSION}.jar > apache-rat-${RAT_VERSION}.jar\nfi\n\nRAT=\"java -jar apache-rat-${RAT_VERSION}.jar -x \"\n\nRELEASE_DIR=$(cd \"$(dirname \"$BASH_SOURCE\")\"; pwd)\n\n# generate the rat report\n$RAT $1 > rat.txt\npython3 $RELEASE_DIR/check-rat-report.py $RELEASE_DIR/rat_exclude_files.txt rat.txt > filtered_rat.txt\nUNAPPROVED=`cat filtered_rat.txt  | grep \"NOT APPROVED\" | wc -l`\n\nif [ \"0\" -eq \"${UNAPPROVED}\" ]; then\n  echo \"No unapproved licenses\"\nelse\n  echo \"${UNAPPROVED} unapproved licences. Check rat report: rat.txt\"\n  cat filtered_rat.txt\n  exit 1\nfi\n"
  },
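A sketch of running the RAT check directly against a tarball, as `create-tarball.sh` does (the tarball path is illustrative):

```bash
# Downloads apache-rat if needed, generates rat.txt, filters it through
# check-rat-report.py, and exits non-zero on unapproved licenses.
./dev/release/run-rat.sh \
  dev/dist/apache-datafusion-comet-4.1.0-rc2/apache-datafusion-comet-4.1.0.tar.gz
```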
  {
    "path": "dev/release/verify-release-candidate.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\ncase $# in\n  2) VERSION=\"$1\"\n     RC_NUMBER=\"$2\"\n     ;;\n  *) echo \"Usage: $0 X.Y.Z RC_NUMBER\"\n     exit 1\n     ;;\nesac\n\nset -e\nset -x\nset -o pipefail\n\nSOURCE_DIR=\"$(cd \"$(dirname \"${BASH_SOURCE[0]:-$0}\")\" && pwd)\"\nCOMET_DIR=\"$(dirname $(dirname ${SOURCE_DIR}))\"\nCOMET_DIST_URL='https://dist.apache.org/repos/dist/dev/datafusion'\n\ndownload_dist_file() {\n  curl \\\n    --silent \\\n    --show-error \\\n    --fail \\\n    --location \\\n    --remote-name $COMET_DIST_URL/$1\n}\n\ndownload_rc_file() {\n  download_dist_file apache-datafusion-comet-${VERSION}-rc${RC_NUMBER}/$1\n}\n\nimport_gpg_keys() {\n  download_dist_file KEYS\n  gpg --import KEYS\n}\n\nif type shasum >/dev/null 2>&1; then\n  sha256_verify=\"shasum -a 256 -c\"\n  sha512_verify=\"shasum -a 512 -c\"\nelse\n  sha256_verify=\"sha256sum -c\"\n  sha512_verify=\"sha512sum -c\"\nfi\n\nfetch_archive() {\n  local dist_name=$1\n  download_rc_file ${dist_name}.tar.gz\n  download_rc_file ${dist_name}.tar.gz.asc\n  download_rc_file ${dist_name}.tar.gz.sha256\n  download_rc_file ${dist_name}.tar.gz.sha512\n  verify_dir_artifact_signatures\n}\n\nverify_dir_artifact_signatures() {\n  # verify the signature and the checksums of each artifact\n  find . -name '*.asc' | while read sigfile; do\n    artifact=${sigfile/.asc/}\n    gpg --verify $sigfile $artifact || exit 1\n\n    # go into the directory because the checksum files contain only the\n    # basename of the artifact\n    pushd $(dirname $artifact)\n    base_artifact=$(basename $artifact)\n    ${sha256_verify} $base_artifact.sha256 || exit 1\n    ${sha512_verify} $base_artifact.sha512 || exit 1\n    popd\n  done\n}\n\nsetup_tempdir() {\n  cleanup() {\n    if [ \"${TEST_SUCCESS}\" = \"yes\" ]; then\n      rm -fr \"${COMET_TMPDIR}\"\n    else\n      echo \"Failed to verify release candidate. 
See ${COMET_TMPDIR} for details.\"\n    fi\n  }\n\n  if [ -z \"${COMET_TMPDIR}\" ]; then\n    # clean up automatically if COMET_TMPDIR is not defined\n    COMET_TMPDIR=$(mktemp -d -t \"$1.XXXXX\")\n    trap cleanup EXIT\n  else\n    # don't clean up automatically\n    mkdir -p \"${COMET_TMPDIR}\"\n  fi\n}\n\ntest_source_distribution() {\n  set -e\n  pushd native\n    RUSTFLAGS=\"-Ctarget-cpu=native\" cargo build --release\n  popd\n  # build against one of the supported Spark versions\n  ./mvnw verify -Prelease -DskipTests -P\"spark-3.4\" -Dmaven.gitcommitid.skip=true\n}\n\nTEST_SUCCESS=no\n\nsetup_tempdir \"datafusion-comet-${VERSION}\"\necho \"Working in sandbox ${COMET_TMPDIR}\"\ncd ${COMET_TMPDIR}\n\ndist_name=\"apache-datafusion-comet-${VERSION}\"\nimport_gpg_keys\nfetch_archive ${dist_name}\ntar xf ${dist_name}.tar.gz\npushd ${dist_name}\n    test_source_distribution\npopd\n\nTEST_SUCCESS=yes\necho 'Release candidate looks good!'\nexit 0\n"
  },
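A sketch of verifying a candidate with this script; setting `COMET_TMPDIR` (path illustrative) keeps the sandbox for inspection instead of removing it on success:

```bash
# Verify 0.1.0 RC1 in a persistent sandbox directory.
COMET_TMPDIR=/tmp/comet-verify ./dev/release/verify-release-candidate.sh 0.1.0 1
```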
  {
    "path": "dev/release/verifying-release-candidates.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Verifying DataFusion Comet Release Candidates\n\nThe `dev/release/verify-release-candidate.sh` script in this repository can assist in the verification\nprocess. It checks the hashes and runs the build. It does not run the test suite because this takes a long time\nfor this project and the test suites already run in CI before we create the release candidate, so running them\nagain is somewhat redundant.\n\n```shell\n./dev/release/verify-release-candidate.sh 0.1.0 1\n```\n\nThe following command can be used to build a release for testing.\n\n```shell\nmake release-nogit\n```\n\nWe hope that users will verify the release beyond running this script by testing the release candidate with their\nexisting Spark jobs and report any functional issues or performance regressions.\n\nThe email announcing the vote should contain a link to pre-built jar files in a Maven staging repository.\n\nAnother way of verifying the release is to follow the\n[Comet Benchmarking Guide](https://datafusion.apache.org/comet/contributor-guide/benchmarking.html) and compare\nperformance with the previous release.\n"
  },
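For reviewers who prefer to check the artifacts by hand before running the script, a sketch of the manual steps it automates (file names match the 0.1.0 RC1 example above):

```bash
# Import the release KEYS, then verify the signature and SHA-512 checksum.
curl -LO https://dist.apache.org/repos/dist/dev/datafusion/KEYS
gpg --import KEYS
gpg --verify apache-datafusion-comet-0.1.0.tar.gz.asc apache-datafusion-comet-0.1.0.tar.gz
shasum -a 512 -c apache-datafusion-comet-0.1.0.tar.gz.sha512
```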
  {
    "path": "dev/scalastyle-config.xml",
    "content": "<!--\n\nIf you wish to turn off checking for a section of code, you can put a comment in the source\nbefore and after the section, with the following syntax:\n\n  // scalastyle:off\n  ...  // stuff that breaks the styles\n  // scalastyle:on\n\nYou can also disable only one rule, by specifying its rule id, as specified in:\n  http://www.scalastyle.org/rules-0.7.0.html\n\n  // scalastyle:off no.finalize\n  override def finalize(): Unit = ...\n  // scalastyle:on no.finalize\n\nThis file is divided into 3 sections:\n (1) rules that we enforce.\n (2) rules that we would like to enforce, but haven't cleaned up the codebase to turn on yet\n     (or we need to make the scalastyle rule more configurable).\n (3) rules that we don't want to enforce.\n-->\n\n<scalastyle>\n  <name>Scalastyle standard configuration</name>\n\n  <!-- ================================================================================ -->\n  <!--                               rules we enforce                                   -->\n  <!-- ================================================================================ -->\n\n  <check level=\"error\" class=\"org.scalastyle.file.FileTabChecker\" enabled=\"true\"></check>\n\n  <!-- disabled for now since we don't have ASF header in our project\n  <check level=\"error\" class=\"org.scalastyle.file.HeaderMatchesChecker\" enabled=\"true\">\n    <parameters>\n       <parameter name=\"header\"><![CDATA[/*\n * Licensed to the Apache Software Foundation (ASF) under one or more\n * contributor license agreements.  See the NOTICE file distributed with\n * this work for additional information regarding copyright ownership.\n * The ASF licenses this file to You under the Apache License, Version 2.0\n * (the \"License\"); you may not use this file except in compliance with\n * the License.  
You may obtain a copy of the License at\n *\n *    http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */]]></parameter>\n    </parameters>\n  </check>\n  -->\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.SpacesAfterPlusChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.SpacesBeforePlusChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.file.WhitespaceEndOfLineChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.file.FileLineLengthChecker\" enabled=\"true\">\n    <parameters>\n      <parameter name=\"maxLineLength\"><![CDATA[100]]></parameter>\n      <parameter name=\"tabSize\"><![CDATA[2]]></parameter>\n      <parameter name=\"ignoreImports\">true</parameter>\n    </parameters>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.ClassNamesChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\"><![CDATA[[A-Z][A-Za-z]*]]></parameter></parameters>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.ObjectNamesChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\"><![CDATA[(config|[A-Z][A-Za-z]*)]]></parameter></parameters>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.PackageObjectNamesChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\"><![CDATA[^[a-z][A-Za-z]*$]]></parameter></parameters>\n  </check>\n\n  <check customId=\"argcount\" level=\"error\" class=\"org.scalastyle.scalariform.ParameterNumberChecker\" enabled=\"true\">\n    <parameters><parameter name=\"maxParameters\"><![CDATA[10]]></parameter></parameters>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.NoFinalizeChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.CovariantEqualsChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.StructuralTypeChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.UppercaseLChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.IfBraceChecker\" enabled=\"true\">\n    <parameters>\n      <parameter name=\"singleLineAllowed\"><![CDATA[true]]></parameter>\n      <parameter name=\"doubleLineAllowed\"><![CDATA[true]]></parameter>\n    </parameters>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.PublicMethodsHaveTypeChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.file.NewLineAtEofChecker\" enabled=\"true\"></check>\n\n  <check customId=\"nonascii\" level=\"error\" class=\"org.scalastyle.scalariform.NonASCIICharacterChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.SpaceAfterCommentStartChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.EnsureSingleSpaceBeforeTokenChecker\" enabled=\"true\">\n   <parameters>\n     <parameter name=\"tokens\">ARROW, EQUALS, ELSE, TRY, CATCH, FINALLY, LARROW, RARROW</parameter>\n   </parameters>\n  </check>\n\n  <check level=\"error\" 
class=\"org.scalastyle.scalariform.EnsureSingleSpaceAfterTokenChecker\" enabled=\"true\">\n    <parameters>\n     <parameter name=\"tokens\">ARROW, EQUALS, COMMA, COLON, IF, ELSE, DO, WHILE, FOR, MATCH, TRY, CATCH, FINALLY, LARROW, RARROW</parameter>\n    </parameters>\n  </check>\n\n  <!-- ??? usually shouldn't be checked into the code base. -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.NotImplementedErrorUsage\" enabled=\"true\"></check>\n\n  <check customId=\"println\" level=\"error\" class=\"org.scalastyle.scalariform.TokenChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">^println$</parameter></parameters>\n    <customMessage><![CDATA[Are you sure you want to println? If yes, wrap the code block with\n      // scalastyle:off println\n      println(...)\n      // scalastyle:on println]]></customMessage>\n  </check>\n\n  <check customId=\"visiblefortesting\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">@VisibleForTesting</parameter></parameters>\n    <customMessage><![CDATA[\n      @VisibleForTesting causes classpath issues. Please note this in the java doc instead (SPARK-11615).\n    ]]></customMessage>\n  </check>\n\n  <check customId=\"runtimeaddshutdownhook\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">Runtime\\.getRuntime\\.addShutdownHook</parameter></parameters>\n    <customMessage><![CDATA[\n      Are you sure that you want to use Runtime.getRuntime.addShutdownHook? In most cases, you should use\n      ShutdownHookManager.addShutdownHook instead.\n      If you must use Runtime.getRuntime.addShutdownHook, wrap the code block with\n      // scalastyle:off runtimeaddshutdownhook\n      Runtime.getRuntime.addShutdownHook(...)\n      // scalastyle:on runtimeaddshutdownhook\n    ]]></customMessage>\n  </check>\n\n  <check customId=\"mutablesynchronizedbuffer\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">mutable\\.SynchronizedBuffer</parameter></parameters>\n    <customMessage><![CDATA[\n      Are you sure that you want to use mutable.SynchronizedBuffer? In most cases, you should use\n      java.util.concurrent.ConcurrentLinkedQueue instead.\n      If you must use mutable.SynchronizedBuffer, wrap the code block with\n      // scalastyle:off mutablesynchronizedbuffer\n      mutable.SynchronizedBuffer[...]\n      // scalastyle:on mutablesynchronizedbuffer\n    ]]></customMessage>\n  </check>\n\n  <check customId=\"classforname\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">Class\\.forName</parameter></parameters>\n    <customMessage><![CDATA[\n      Are you sure that you want to use Class.forName? In most cases, you should use Utils.classForName instead.\n      If you must use Class.forName, wrap the code block with\n      // scalastyle:off classforname\n      Class.forName(...)\n      // scalastyle:on classforname\n    ]]></customMessage>\n  </check>\n\n  <check customId=\"awaitresult\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">Await\\.result</parameter></parameters>\n    <customMessage><![CDATA[\n      Are you sure that you want to use Await.result? 
In most cases, you should use ThreadUtils.awaitResult instead.\n      If you must use Await.result, wrap the code block with\n      // scalastyle:off awaitresult\n      Await.result(...)\n      // scalastyle:on awaitresult\n    ]]></customMessage>\n  </check>\n\n  <check customId=\"awaitready\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">Await\\.ready</parameter></parameters>\n    <customMessage><![CDATA[\n      Are you sure that you want to use Await.ready? In most cases, you should use ThreadUtils.awaitReady instead.\n      If you must use Await.ready, wrap the code block with\n      // scalastyle:off awaitready\n      Await.ready(...)\n      // scalastyle:on awaitready\n    ]]></customMessage>\n  </check>\n\n  <check customId=\"caselocale\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">(\\.toUpperCase|\\.toLowerCase)(?!(\\(|\\(Locale.ROOT\\)))</parameter></parameters>\n    <customMessage><![CDATA[\n      Are you sure that you want to use toUpperCase or toLowerCase without the root locale? In most cases, you\n      should use toUpperCase(Locale.ROOT) or toLowerCase(Locale.ROOT) instead.\n      If you must use toUpperCase or toLowerCase without the root locale, wrap the code block with\n      // scalastyle:off caselocale\n      .toUpperCase\n      .toLowerCase\n      // scalastyle:on caselocale\n    ]]></customMessage>\n  </check>\n\n  <check customId=\"throwerror\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">throw new \\w+Error\\(</parameter></parameters>\n    <customMessage><![CDATA[\n      Are you sure that you want to throw Error? In most cases, you should use appropriate Exception instead.\n      If you must throw Error, wrap the code block with\n      // scalastyle:off throwerror\n      throw new XXXError(...)\n      // scalastyle:on throwerror\n    ]]></customMessage>\n  </check>\n\n  <check customId=\"commonslang2\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">org\\.apache\\.commons\\.lang\\.</parameter></parameters>\n    <customMessage>Use Commons Lang 3 classes (package org.apache.commons.lang3.*) instead\n    of Commons Lang 2 (package org.apache.commons.lang.*)</customMessage>\n  </check>\n\n  <check customId=\"FileSystemGet\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">FileSystem.get\\([a-zA-Z_$][a-zA-Z_$0-9]*\\)</parameter></parameters>\n    <customMessage><![CDATA[\n      Are you sure that you want to use \"FileSystem.get(Configuration conf)\"? If the input\n      configuration is not set properly, a default FileSystem instance will be returned. It can\n      lead to errors when you deal with multiple file systems. 
Please consider using\n      \"FileSystem.get(URI uri, Configuration conf)\" or \"Path.getFileSystem(Configuration conf)\" instead.\n      If you must use the method \"FileSystem.get(Configuration conf)\", wrap the code block with\n      // scalastyle:off FileSystemGet\n      FileSystem.get(...)\n      // scalastyle:on FileSystemGet\n    ]]></customMessage>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.ImportOrderChecker\" enabled=\"true\">\n    <parameters>\n      <parameter name=\"groups\">java,scala,org,apache,3rdParty,comet</parameter>\n      <parameter name=\"group.java\">javax?\\..*</parameter>\n      <parameter name=\"group.scala\">scala\\..*</parameter>\n      <parameter name=\"group.org\">org\\.(?!apache).*</parameter>\n      <parameter name=\"group.apache\">org\\.apache\\.(?!comet).*</parameter>\n      <parameter name=\"group.3rdParty\">(?!(javax?\\.|scala\\.|org\\.apache\\.comet\\.)).*</parameter>\n      <parameter name=\"group.comet\">org\\.apache\\.comet\\..*</parameter>\n    </parameters>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.DisallowSpaceBeforeTokenChecker\" enabled=\"true\">\n    <parameters>\n      <parameter name=\"tokens\">COMMA</parameter>\n    </parameters>\n  </check>\n\n  <!-- SPARK-3854: Single Space between ')' and '{' -->\n  <check customId=\"SingleSpaceBetweenRParenAndLCurlyBrace\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">\\)\\{</parameter></parameters>\n    <customMessage><![CDATA[\n      Single Space between ')' and `{`.\n    ]]></customMessage>\n  </check>\n\n  <check customId=\"NoScalaDoc\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">(?m)^(\\s*)/[*][*].*$(\\r|)\\n^\\1  [*]</parameter></parameters>\n    <customMessage>Use Javadoc style indentation for multiline comments</customMessage>\n  </check>\n\n  <check customId=\"OmitBracesInCase\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">case[^\\n>]*=>\\s*\\{</parameter></parameters>\n    <customMessage>Omit braces in case clauses.</customMessage>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">new (java\\.lang\\.)?(Byte|Integer|Long|Short)\\(</parameter></parameters>\n    <customMessage>Use static factory 'valueOf' or 'parseXXX' instead of the deprecated constructors.</customMessage>\n  </check>\n\n  <!-- SPARK-16877: Avoid Java annotations -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.OverrideJavaChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.DeprecatedJavaChecker\" enabled=\"true\"></check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.IllegalImportsChecker\" enabled=\"true\">\n    <parameters><parameter name=\"illegalImports\"><![CDATA[scala.collection.Seq,scala.collection.IndexedSeq]]></parameter></parameters>\n    <customMessage><![CDATA[\n      Don't import scala.collection.Seq and scala.collection.IndexedSeq as it may bring some problems with cross-build between Scala 2.12 and 2.13.\n\n      Please refer below page to see the details of changes around Seq / IndexedSeq.\n      https://docs.scala-lang.org/overviews/core/collections-migration-213.html\n\n      If you really need to use scala.collection.Seq or scala.collection.IndexedSeq, 
please use the fully-qualified name instead.\n    ]]></customMessage>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.IllegalImportsChecker\" enabled=\"true\">\n    <parameters><parameter name=\"illegalImports\"><![CDATA[org.apache.log4j]]></parameter></parameters>\n    <customMessage>Please use Apache Log4j 2 instead.</customMessage>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.NoWhitespaceBeforeLeftBracketChecker\" enabled=\"true\"></check>\n  <!-- This rule conflicts with spotless, so disabling it for now -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.NoWhitespaceAfterLeftBracketChecker\" enabled=\"false\"></check>\n\n  <!-- This breaks symbolic method names so we don't turn it on. -->\n  <!-- Maybe we should update it to allow basic symbolic names, and then we are good to go. -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.MethodNamesChecker\" enabled=\"false\">\n    <parameters>\n    <parameter name=\"regex\"><![CDATA[^[a-z][A-Za-z0-9]*$]]></parameter>\n    </parameters>\n  </check>\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.EqualsHashCodeChecker\" enabled=\"true\"></check>\n\n  <!-- ================================================================================ -->\n  <!--                               rules we don't want                                -->\n  <!-- ================================================================================ -->\n\n  <check level=\"error\" class=\"org.scalastyle.scalariform.IllegalImportsChecker\" enabled=\"false\">\n    <parameters><parameter name=\"illegalImports\"><![CDATA[sun._,java.awt._]]></parameter></parameters>\n  </check>\n\n  <!-- We want the opposite of this: NewLineAtEofChecker -->\n  <check level=\"error\" class=\"org.scalastyle.file.NoNewLineAtEofChecker\" enabled=\"false\"></check>\n\n  <!-- This one complains about all kinds of random things. Disable. -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.SimplifyBooleanExpressionChecker\" enabled=\"false\"></check>\n\n  <!-- We use return quite a bit for control flows and guards -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.ReturnChecker\" enabled=\"false\"></check>\n\n  <!-- We use null a lot in low level code and to interface with 3rd party code -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.NullChecker\" enabled=\"false\"></check>\n\n  <!-- Doesn't seem super big deal here ... -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.NoCloneChecker\" enabled=\"false\"></check>\n\n  <!-- Doesn't seem super big deal here ... -->\n  <check level=\"error\" class=\"org.scalastyle.file.FileLengthChecker\" enabled=\"false\">\n    <parameters><parameter name=\"maxFileLength\">800</parameter></parameters>\n  </check>\n\n  <!-- Doesn't seem super big deal here ... -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.NumberOfTypesChecker\" enabled=\"false\">\n    <parameters><parameter name=\"maxTypes\">30</parameter></parameters>\n  </check>\n\n  <!-- Doesn't seem super big deal here ... -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.CyclomaticComplexityChecker\" enabled=\"false\">\n    <parameters><parameter name=\"maximum\">10</parameter></parameters>\n  </check>\n\n  <!-- Doesn't seem super big deal here ... 
-->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.MethodLengthChecker\" enabled=\"false\">\n    <parameters><parameter name=\"maxLength\">50</parameter></parameters>\n  </check>\n\n  <!-- Not exactly feasible to enforce this right now. -->\n  <!-- It is also infrequent that somebody introduces a new class with a lot of methods. -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.NumberOfMethodsInTypeChecker\" enabled=\"false\">\n    <parameters><parameter name=\"maxMethods\"><![CDATA[30]]></parameter></parameters>\n  </check>\n\n  <!-- Doesn't seem super big deal here, and we have a lot of magic numbers ... -->\n  <check level=\"error\" class=\"org.scalastyle.scalariform.MagicNumberChecker\" enabled=\"false\">\n    <parameters><parameter name=\"ignore\">-1,0,1,2,3</parameter></parameters>\n  </check>\n\n  <check customId=\"GuavaToStringHelper\" level=\"error\" class=\"org.scalastyle.file.RegexChecker\" enabled=\"true\">\n    <parameters><parameter name=\"regex\">Objects.toStringHelper</parameter></parameters>\n    <customMessage>Avoid using Object.toStringHelper. Use ToStringBuilder instead.</customMessage>\n  </check>\n</scalastyle>\n"
  },
  {
    "path": "docs/.gitignore",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nbuild\ntemp\nvenv/\n.python-version\ncomet-*\n"
  },
  {
    "path": "docs/Makefile",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n#\n# Minimal makefile for Sphinx documentation\n#\n\n# You can set these variables from the command line, and also\n# from the environment for the first two.\nSPHINXOPTS    ?=\nSPHINXBUILD   ?= sphinx-build\nSOURCEDIR     = source\nBUILDDIR      = build\n\n# Put it first so that \"make\" without argument is like \"make help\".\nhelp:\n\t@$(SPHINXBUILD) -M help \"$(SOURCEDIR)\" \"$(BUILDDIR)\" $(SPHINXOPTS) $(O)\n\n.PHONY: help Makefile\n\n# Catch-all target: route all unknown targets to Sphinx using the new\n# \"make mode\" option.  $(O) is meant as a shortcut for $(SPHINXOPTS).\n%: Makefile\n\t@$(SPHINXBUILD) -M $@ \"$(SOURCEDIR)\" \"$(BUILDDIR)\" $(SPHINXOPTS) $(O)\n"
  },
  {
    "path": "docs/README.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Apache DataFusion Comet Documentation\n\nThis folder contains the source content for the Apache DataFusion Comet documentation site. This content is published\nto <https://datafusion.apache.org/comet> when any changes are merged into the main branch.\n\n## Dependencies\n\nIt's recommended to install build dependencies and build the documentation\ninside a Python virtualenv.\n\n- Python\n- `pip install -r requirements.txt`\n\n## Build & Preview\n\nRun the provided script to build the HTML pages.\n\n```bash\n./build.sh\n```\n\nThe HTML will be generated into a `build` directory.\n\nPreview the site on Linux by running this command.\n\n```bash\nfirefox build/html/index.html\n```\n\n## Making Changes\n\nTo make changes to the docs, simply make a Pull Request with your\nproposed changes as normal. When the PR is merged the docs will be\nautomatically updated.\n\n## Release Process\n\nThis documentation is hosted at <https://datafusion.apache.org/comet/>\n\nWhen the PR is merged to the `main` branch of the `datafusion-comet`\nrepository, a [GitHub workflow](https://github.com/apache/datafusion-comet/blob/main/.github/workflows/docs.yaml) which:\n\n1. Builds the html content\n2. Pushes the html content to the [`asf-site`](https://github.com/apache/datafusion-comet/tree/asf-site) branch in this repository.\n\nThe Apache Software Foundation provides <https://datafusion.apache.org/>,\nwhich serves content based on the configuration in\n[.asf.yaml](https://github.com/apache/datafusion-comet/blob/main/.asf.yaml),\nwhich specifies the target as <https://datafusion.apache.org/comet/>.\n"
  },
  {
    "path": "docs/build.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n\nset -e\nrm -rf build 2> /dev/null\nrm -rf temp 2> /dev/null\nmkdir temp\ncp -rf source/* temp/\n\n# Add user guide from published releases\nrm -rf comet-0.11\nrm -rf comet-0.12\nrm -rf comet-0.13\npython3 generate-versions.py\n\n# Generate dynamic content (configs, compatibility matrices) for latest docs\n# This runs GenerateDocs against the temp copy, not source files\necho \"Generating dynamic documentation content...\"\ncd ..\n./mvnw -q package -Pgenerate-docs -DskipTests -Dmaven.test.skip=true \\\n  -Dexec.arguments=\"$(pwd)/docs/temp/user-guide/latest/\"\ncd docs\n\nmake SOURCEDIR=`pwd`/temp html\n"
  },
  {
    "path": "docs/generate-versions.py",
    "content": "#!/usr/bin/python\n##############################################################################\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n##############################################################################\n\n# This script clones release branches such as branch-0.9 and then copies the user guide markdown\n# for inclusion in the published documentation so that we publish user guides for released versions\n# of Comet\n\nimport os\nimport xml.etree.ElementTree as ET\nfrom pathlib import Path\n\ndef get_major_minor_version(version: str):\n    parts = version.split('.')\n    return f\"{parts[0]}.{parts[1]}\"\n\ndef get_version_from_pom():\n    pom_path = Path(__file__).parent.parent / \"pom.xml\"\n    tree = ET.parse(pom_path)\n    root = tree.getroot()\n    \n    namespace = {'maven': 'http://maven.apache.org/POM/4.0.0'}\n    version_element = root.find('maven:version', namespace)\n    \n    if version_element is not None:\n        return version_element.text\n    else:\n        raise ValueError(\"Could not find version in pom.xml\")\n\ndef replace_in_files(root: str, filename_pattern: str, search: str, replace: str):\n    root_path = Path(root)\n    for file in root_path.rglob(filename_pattern):\n        text = file.read_text(encoding=\"utf-8\")\n        updated = text.replace(search, replace)\n        if text != updated:\n            file.write_text(updated, encoding=\"utf-8\")\n            print(f\"Replaced {search} with {replace} in {file}\")\n\ndef insert_warning_after_asf_header(root: str, warning: str):\n    root_path = Path(root)\n    for file in root_path.rglob(\"*.md\"):\n        lines = file.read_text(encoding=\"utf-8\").splitlines(keepends=True)\n        new_lines = []\n        inserted = False\n        for line in lines:\n            new_lines.append(line)\n            if not inserted and \"-->\" in line:\n                new_lines.append(warning + \"\\n\")\n                inserted = True\n        file.write_text(\"\".join(new_lines), encoding=\"utf-8\")\n\n\ndef get_user_guide_dir(major_minor: str):\n    if major_minor == \"0.8\" or major_minor == \"0.9\":\n        return \"docs/source/user-guide\"\n    else:\n        return \"docs/source/user-guide/latest\"\n\ndef publish_released_version(version: str):\n    major_minor = get_major_minor_version(version)\n    dir = get_user_guide_dir(major_minor)\n    os.system(f\"git clone --depth 1 https://github.com/apache/datafusion-comet.git -b branch-{major_minor} comet-{major_minor}\")\n    os.system(f\"mkdir temp/user-guide/{major_minor}\")\n    os.system(f\"cp -rf comet-{major_minor}/{dir}/* temp/user-guide/{major_minor}\")\n    # Replace $COMET_VERSION with actual version\n    for file_pattern in [\"*.md\", \"*.rst\"]:\n        
replace_in_files(f\"temp/user-guide/{major_minor}\", file_pattern, \"$COMET_VERSION\", version)\n\ndef generate_docs(snapshot_version: str, latest_released_version: str, previous_versions: list[str]):\n\n    # Replace $COMET_VERSION with actual version for snapshot version\n    for file_pattern in [\"*.md\", \"*.rst\"]:\n        replace_in_files(f\"temp/user-guide/latest\", file_pattern, \"$COMET_VERSION\", snapshot_version)\n\n    # Add user guide content for latest released versions\n    publish_released_version(latest_released_version)\n\n    # Add user guide content for older released versions\n    for version in previous_versions:\n        publish_released_version(version)\n        # add warning that this is out-of-date documentation\n        warning = f\"\"\"```{{warning}}\nThis is **out-of-date** documentation. The latest Comet release is version {latest_released_version}.\n```\"\"\"\n        major_minor = get_major_minor_version(version)\n        insert_warning_after_asf_header(f\"temp/user-guide/{major_minor}\", warning)\n\nif __name__ == \"__main__\":\n    print(\"Generating versioned user guide docs...\")\n    snapshot_version = get_version_from_pom()\n    latest_released_version = \"0.14.0\"\n    previous_versions = [\"0.11.0\", \"0.12.0\", \"0.13.0\"]\n    generate_docs(snapshot_version, latest_released_version, previous_versions)"
  },
  {
    "path": "docs/make.bat",
    "content": "@rem Licensed to the Apache Software Foundation (ASF) under one\r\n@rem or more contributor license agreements.  See the NOTICE file\r\n@rem distributed with this work for additional information\r\n@rem regarding copyright ownership.  The ASF licenses this file\r\n@rem to you under the Apache License, Version 2.0 (the\r\n@rem \"License\"); you may not use this file except in compliance\r\n@rem with the License.  You may obtain a copy of the License at\r\n@rem\r\n@rem   http://www.apache.org/licenses/LICENSE-2.0\r\n@rem\r\n@rem Unless required by applicable law or agreed to in writing,\r\n@rem software distributed under the License is distributed on an\r\n@rem \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\r\n@rem KIND, either express or implied.  See the License for the\r\n@rem specific language governing permissions and limitations\r\n@rem under the License.\r\n\r\n@ECHO OFF\r\n\r\npushd %~dp0\r\n\r\nREM Command file for Sphinx documentation\r\n\r\nif \"%SPHINXBUILD%\" == \"\" (\r\n\tset SPHINXBUILD=sphinx-build\r\n)\r\nset SOURCEDIR=source\r\nset BUILDDIR=build\r\n\r\nif \"%1\" == \"\" goto help\r\n\r\n%SPHINXBUILD% >NUL 2>NUL\r\nif errorlevel 9009 (\r\n\techo.\r\n\techo.The 'sphinx-build' command was not found. Make sure you have Sphinx\r\n\techo.installed, then set the SPHINXBUILD environment variable to point\r\n\techo.to the full path of the 'sphinx-build' executable. Alternatively you\r\n\techo.may add the Sphinx directory to PATH.\r\n\techo.\r\n\techo.If you don't have Sphinx installed, grab it from\r\n\techo.http://sphinx-doc.org/\r\n\texit /b 1\r\n)\r\n\r\n%SPHINXBUILD% -M %1 %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%\r\ngoto end\r\n\r\n:help\r\n%SPHINXBUILD% -M help %SOURCEDIR% %BUILDDIR% %SPHINXOPTS% %O%\r\n\r\n:end\r\npopd\r\n"
  },
  {
    "path": "docs/requirements.txt",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nsetuptools\nsphinx>=7.0,<8.0\nsphinx-reredirects\npydata-sphinx-theme>=0.16.1,<0.17.0\nmyst-parser>=2.0,<4.0\nmaturin\njinja2\n"
  },
  {
    "path": "docs/source/_static/theme_overrides.css",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\n\n/* Customizing with theme CSS variables */\n\n:root {\n  --pst-color-active-navigation: 215, 70, 51;\n  --pst-color-link-hover: 215, 70, 51;\n  --pst-color-headerlink: 215, 70, 51;\n  /* Use normal text color (like h3, ..) instead of primary color */\n  --pst-color-h1: var(--color-text-base);\n  --pst-color-h2: var(--color-text-base);\n  /* Use softer blue from bootstrap's default info color */\n  --pst-color-info: 23, 162, 184;\n  --pst-content-max-width: 100%; /* center column */\n  --pst-font-size-base: 0.9rem;\n}\n\n/* Scale down the logo in the top navbar */\n.bd-header .navbar-brand img.logo__image {\n  height: 40px;\n  width: auto;\n  margin-right: 0.25rem;\n  vertical-align: middle;\n}\n\n/* --- remove the right (secondary) sidebar entirely --- */\n.bd-sidebar-secondary { display: none !important; }\n\n/* Some versions still reserve the grid column for it — collapse it */\n.bd-main {\n  /* left sidebar + content only */\n  grid-template-columns: 20rem minmax(0, 1fr) !important;\n}\n\n/* --- make the left (primary) sidebar small --- */\n.bd-sidebar-primary {\n  width: 20rem !important;\n  flex: 0 0 20rem !important;\n}\n\n/* Optional: make its text a bit tighter */\n.bd-sidebar-primary .bd-sidebar {\n  font-size: 0.9rem;\n}\n\n/* --- let the center content use all remaining width --- */\n.bd-content, .bd-article-container {\n  max-width: none !important;   /* remove internal cap */\n}\n\n\ncode {\n  color: rgb(215, 70, 51);\n}\n\n.footer {\n  text-align: center;\n}\n\n/* Ensure the logo is properly displayed */\n\n.navbar-brand {\n  height: auto;\n  width: auto;\n  padding: 0 2em;\n}\n\n/* This is the bootstrap CSS style for \"table-striped\". Since the theme does\nnot yet provide an easy way to configure this globally, it easier to simply\ninclude this snippet here than updating each table in all rst files to\nadd \":class: table-striped\" */\n\n.table tbody tr:nth-of-type(odd) {\n  background-color: rgba(0, 0, 0, 0.05);\n}\n\n\n/* Limit the max height of the sidebar navigation section. 
Because our\ncustomized template adds more content above the navigation (a larger\nlogo), the links would overlap the footer if we did not cap the\nmax-height. The 8rem accounts for the search box and other header\ncontent. */\n\n@media (min-width:720px) {\n  @supports (position:-webkit-sticky) or (position:sticky) {\n    .bd-links {\n      max-height: calc(100vh - 8rem)\n    }\n  }\n}\n\n\n/* Fix table text wrapping in RTD theme,\n * see https://rackerlabs.github.io/docs-rackspace/tools/rtd-tables.html\n */\n\n@media screen {\n    table.docutils td {\n        /* !important prevents the common CSS stylesheets from overriding\n          this as on RTD they are loaded after this stylesheet */\n        white-space: normal !important;\n    }\n}\n"
  },
  {
    "path": "docs/source/_templates/docs-sidebar.html",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n<nav class=\"bd-links\" id=\"bd-docs-nav\" aria-label=\"Main navigation\">\n  <div class=\"bd-toc-item active\">\n    {{ toctree(maxdepth=4, collapse=True, includehidden=True, titles_only=True) }}\n  </div>\n</nav>\n\n"
  },
  {
    "path": "docs/source/_templates/layout.html",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n{% extends \"pydata_sphinx_theme/layout.html\" %}\n\n<!--\n    Custom footer\n-->\n{% block footer %}\n<!-- Based on pydata_sphinx_theme/footer.html -->\n<footer class=\"footer mt-5 mt-md-0\">\n  <div class=\"container\">\n    {% for footer_item in theme_footer_items %}\n    <div class=\"footer-item\">\n      {% include footer_item %}\n    </div>\n    {% endfor %}\n    <div class=\"footer-item\">\n      <p>Apache DataFusion, Apache DataFusion Comet, Apache, the Apache feather logo, and the Apache DataFusion project logo</p>\n      <p>are either registered trademarks or trademarks of The Apache Software Foundation in the United States and other countries.</p>\n    </div>\n  </div>\n</footer>\n\n{% endblock %}\n"
  },
  {
    "path": "docs/source/about/gluten_comparison.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Comparison of Comet and Gluten\n\nThis document provides a comparison of the Comet and Gluten projects to help guide users who are looking to choose\nbetween them. This document is likely biased because the Comet community maintains it.\n\nWe recommend trying out both Comet and Gluten to see which is the best fit for your needs.\n\nThis document is based on Comet 0.9.0 and Gluten 1.4.0.\n\n## Architecture\n\nComet and Gluten have very similar architectures. Both are Spark plugins that translate Spark physical plans to\na serialized representation and pass the serialized plan to native code for execution.\n\nGluten serializes the plans using the Substrait format and has an extensible architecture that supports execution\nagainst multiple engines. Velox and Clickhouse are currently supported, but Velox is more widely used.\n\nComet serializes the plans in a proprietary Protocol Buffer format. Execution is delegated to Apache DataFusion. Comet\ndoes not plan to support multiple engines, but rather focus on a tight integration between Spark and DataFusion.\n\n## Underlying Execution Engine: DataFusion vs Velox\n\nOne of the main differences between Comet and Gluten is the choice of native execution engine.\n\nGluten uses Velox, which is an open-source C++ vectorized query engine created by Meta.\n\nComet uses Apache DataFusion, which is an open-source vectorized query engine implemented in Rust and is governed by the\nApache Software Foundation.\n\nVelox and DataFusion are both mature query engines that are growing in popularity.\n\nFrom the point of view of the usage of these query engines in Gluten and Comet, the most significant difference is\nthe choice of implementation language (Rust vs C++) and this may be the main factor that users should consider when\nchoosing a solution. For users wishing to implement UDFs in Rust, Comet would likely be a better choice. For users\nwishing to implement UDFs in C++, Gluten would likely be a better choice.\n\nIf users are just interested in speeding up their existing Spark jobs and do not need to implement UDFs in native\ncode, then we suggest benchmarking with both solutions and choosing the fastest one for your use case.\n\n![github-stars-datafusion-velox.png](/_static/images/github-stars-datafusion-velox.png)\n\n## Compatibility\n\nComet relies on the full Spark SQL test suite (consisting of more than 24,000 tests) as well its own unit and\nintegration tests to ensure compatibility with Spark. Features that are known to have compatibility differences with\nSpark are disabled by default, but users can opt in. 
See the [Comet Compatibility Guide] for more information.\n\n[Comet Compatibility Guide]: /user-guide/latest/compatibility.md\n\nGluten also aims to provide compatibility with Spark, and includes a subset of the Spark SQL tests in its own test\nsuite. See the [Gluten Compatibility Guide] for more information.\n\n[Gluten Compatibility Guide]: https://apache.github.io/incubator-gluten-site/archives/v1.3.0/velox-backend/limitations/\n\n## Performance\n\nWhen running a benchmark derived from TPC-H on a single node against local Parquet files, we see that both Comet\nand Gluten provide an impressive speedup when compared to Spark. Comet provides a 2.4x speedup compared to a 2.8x speedup\nwith Gluten.\n\nGluten is currently faster than Comet for this particular benchmark, but we expect to close that gap over time.\n\nAlthough TPC-H is a good benchmark for operators such as joins and aggregates, it doesn't necessarily represent\nreal-world queries, especially for ETL use cases. For example, there are no complex types involved and no string\nmanipulation, regular expressions, or other advanced expressions. We recommend running your own benchmarks based\non your existing Spark jobs.\n\n![tpch_allqueries_comet_gluten.png](/_static/images/tpch_allqueries_comet_gluten.png)\n\nThe scripts that were used to generate these results can be found [here](https://github.com/apache/datafusion-comet/tree/main/benchmarks/tpc).\n\n## Ease of Development & Contributing\n\nSetting up a local development environment with Comet is generally easier than with Gluten due to Rust's package\nmanagement capabilities vs the complexities around installing C++ dependencies.\n\n## Summary\n\nComet and Gluten are both good solutions for accelerating Spark jobs. We recommend trying both to see which is the\nbest fit for your needs.\n"
  },
  {
    "path": "docs/source/about/index.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Overview\n\nApache DataFusion Comet is a high-performance accelerator for Apache Spark, built on top of the powerful\n[Apache DataFusion] query engine. Comet is designed to significantly enhance the\nperformance of Apache Spark workloads while leveraging commodity hardware and seamlessly integrating with the\nSpark ecosystem without requiring any code changes.\n\n[Apache DataFusion]: https://datafusion.apache.org\n\nThe following diagram provides an overview of Comet's architecture.\n\n![Comet Overview](/_static/images/comet-overview.png)\n\n## Architecture\n\nThe following diagram shows how Comet integrates with Apache Spark.\n\n![Comet System Diagram](/_static/images/comet-system-diagram.png)\n\n## Feature Parity with Apache Spark\n\nThe project strives to keep feature parity with Apache Spark, that is,\nusers should expect the same behavior (w.r.t features, configurations,\nquery results, etc) with Comet turned on or turned off in their Spark\njobs. In addition, Comet extension should automatically detect unsupported\nfeatures and fallback to Spark engine.\n\n## Comparison with other open-source Spark accelerators\n\nThere are two other major open-source Spark accelerators:\n\n- [Apache Gluten (incubating)](https://github.com/apache/incubator-gluten)\n- [NVIDIA Spark RAPIDS](https://github.com/NVIDIA/spark-rapids)\n\nWe have a detailed guide [comparing Apache DataFusion Comet with Apache Gluten].\n\nSpark RAPIDS is a solution that provides hardware acceleration on NVIDIA GPUs. Comet does not require specialized\nhardware.\n\n[comparing Apache DataFusion Comet with Apache Gluten]: gluten_comparison.md\n\n## Getting Started\n\nRefer to the [Comet Installation Guide] to get started.\n\n[Comet Installation Guide]: /user-guide/latest/installation.md\n\n```{toctree}\n:maxdepth: 1\n:caption: About\n:hidden:\n\nComparison with Gluten <gluten_comparison>\n```\n"
  },
  {
    "path": "docs/source/asf/index.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# ASF Links\n\n```{toctree}\n:maxdepth: 1\n:caption: ASF Links\n\nApache Software Foundation <https://apache.org>\nLicense <https://www.apache.org/licenses/>\nDonate <https://www.apache.org/foundation/sponsorship.html>\nThanks <https://www.apache.org/foundation/thanks.html>\nSecurity <https://www.apache.org/security/>\nCode of conduct <https://www.apache.org/foundation/policies/conduct.html>\n```\n"
  },
  {
    "path": "docs/source/conf.py",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Configuration file for the Sphinx documentation builder.\n#\n# This file only contains a selection of the most common options. For a full\n# list see the documentation:\n# https://www.sphinx-doc.org/en/master/usage/configuration.html\n\n# -- Path setup --------------------------------------------------------------\n\n# If extensions (or modules to document with autodoc) are in another directory,\n# add these directories to sys.path here. If the directory is relative to the\n# documentation root, use os.path.abspath to make it absolute, like shown here.\n#\n# import os\n# import sys\n# sys.path.insert(0, os.path.abspath('.'))\n\n# -- Project information -----------------------------------------------------\n\nproject = 'Apache DataFusion Comet'\ncopyright = '2023-2024, Apache Software Foundation'\nauthor = 'Apache Software Foundation'\n\n\n# -- General configuration ---------------------------------------------------\n\n# Add any Sphinx extension module names here, as strings. They can be\n# extensions coming with Sphinx (named 'sphinx.ext.*') or your custom\n# ones.\nextensions = [\n    'sphinx.ext.autodoc',\n    'sphinx.ext.autosummary',\n    'sphinx.ext.doctest',\n    'sphinx.ext.ifconfig',\n    'sphinx.ext.mathjax',\n    'sphinx.ext.viewcode',\n    'sphinx.ext.napoleon',\n    'myst_parser',\n    'sphinx_reredirects',\n]\n\nsource_suffix = {\n    '.rst': 'restructuredtext',\n    '.md': 'markdown',\n}\n\n# Add any paths that contain templates here, relative to this directory.\ntemplates_path = ['_templates']\n\n# List of patterns, relative to source directory, that match files and\n# directories to ignore when looking for source files.\n# This pattern also affects html_static_path and html_extra_path.\nexclude_patterns = []\n\n# Show members for classes in .. autosummary\nautodoc_default_options = {\n    \"members\": None,\n    \"undoc-members\": None,\n    \"show-inheritance\": None,\n    \"inherited-members\": None,\n}\n\nautosummary_generate = True\n\n# -- Options for HTML output -------------------------------------------------\n\n# The theme to use for HTML and HTML Help pages.  
See the documentation for\n# a list of builtin themes.\n#\nhtml_theme = 'pydata_sphinx_theme'\n\nhtml_logo = \"_static/images/DataFusionComet-Logo-Light.png\"\n\nhtml_theme_options = {\n    \"logo\": {\n        \"image_light\": \"_static/images/DataFusionComet-Logo-Light.png\",\n        \"image_dark\": \"_static/images/DataFusionComet-Logo-Dark.png\",\n    },\n\n    \"use_edit_page_button\": False,\n    \"secondary_sidebar_items\": [],\n    \"collapse_navigation\": True,\n    \"navbar_start\": [\"navbar-logo\"],\n    \"navbar_center\": [\"navbar-nav\"],\n    \"navbar_end\": [\"navbar-icon-links\", \"theme-switcher\"],\n    \"icon_links\": [\n        {\n            \"name\": \"GitHub\",\n            \"url\": \"https://github.com/apache/datafusion-comet\",\n            \"icon\": \"fa-brands fa-github\",\n        },\n    ],\n\n}\n\nhtml_context = {\n    \"default_mode\": \"light\",\n    \"github_user\": \"apache\",\n    \"github_repo\": \"datafusion-comet\",\n    \"github_version\": \"main\",\n    \"doc_path\": \"docs/source\",\n}\n\n# Add any paths that contain custom static files (such as style sheets) here,\n# relative to this directory. They are copied after the builtin static files,\n# so a file named \"default.css\" will overwrite the builtin \"default.css\".\nhtml_static_path = ['_static']\n\nhtml_css_files = [\n    \"theme_overrides.css\"\n]\n\nhtml_js_files = [\n    (\"https://buttons.github.io/buttons.js\", {'async': 'true', 'defer': 'true'}),\n]\n\nhtml_sidebars = {\n    \"**\": [\"docs-sidebar.html\"],\n}\n\n# tell myst_parser to auto-generate anchor links for headers h1, h2, h3\nmyst_heading_anchors = 3\n\n# enable nice rendering of checkboxes for the task lists\nmyst_enable_extensions = [\"colon_fence\", \"deflist\", \"tasklist\"]\n\nredirects = {\n    \"overview.html\": \"about/index.html\",\n    \"gluten_comparison.html\": \"about/gluten_comparison.html\",\n    \"user-guide/overview.html\": \"../about/index.html\",\n    \"user-guide/gluten_comparison.html\": \"../about/gluten_comparison.html\",\n    \"user-guide/compatibility.html\": \"latest/compatibility.html\",\n    \"user-guide/configs.html\": \"latest/configs.html\",\n    \"user-guide/datasource.html\": \"latest/datasource.html\",\n    \"user-guide/datatypes.html\": \"latest/datatypes.html\",\n    \"user-guide/expressions.html\": \"latest/expressions.html\",\n    \"user-guide/iceberg.html\": \"latest/iceberg.html\",\n    \"user-guide/installation.html\": \"latest/installation.html\",\n    \"user-guide/kubernetes.html\": \"latest/kubernetes.html\",\n    \"user-guide/metrics.html\": \"latest/metrics.html\",\n    \"user-guide/operators.html\": \"latest/operators.html\",\n    \"user-guide/source.html\": \"latest/source.html\",\n    \"user-guide/tuning.html\": \"latest/tuning.html\",\n}"
  },
  {
    "path": "docs/source/contributor-guide/adding_a_new_expression.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Adding a New Expression\n\nThere are a number of Spark expression that are not supported by DataFusion Comet yet, and implementing them is a good way to contribute to the project.\n\nBefore you start, have a look through [these slides](https://docs.google.com/presentation/d/1H0fF2MOkkBK8fPBlnqK6LejUeLcVD917JhVWfp3mb8A/edit#slide=id.p) as they provide a conceptual overview. And a video of a presentation on those slides is available [here](https://drive.google.com/file/d/1POU4lFAZfYwZR8zV1X2eoLiAmc1GDtSP/view?usp=sharing).\n\n## Finding an Expression to Add\n\nYou may have a specific expression in mind that you'd like to add, but if not, you can review the [expression coverage document](https://github.com/apache/datafusion-comet/blob/main/docs/spark_expressions_support.md) to see which expressions are not yet supported.\n\n## Implementing the Expression\n\nOnce you have the expression you'd like to add, you should take inventory of the following:\n\n1. What is the Spark expression's behavior across different Spark versions? These make good test cases and will inform you of any compatibility issues, such as an API change that will have to be addressed.\n2. Check if the expression is already implemented in DataFusion and if it is compatible with the Spark expression.\n   1. If it is, you can potentially reuse the existing implementation though you'll need to add tests to verify compatibility.\n   2. If it's not, consider an initial version in DataFusion for expressions that are common across different engines. For expressions that are specific to Spark, consider an initial version in DataFusion Comet.\n3. Test cases for the expression. As mentioned, you can refer to Spark's test cases for a good idea of what to test.\n\nOnce you know what you want to add, you'll need to update the query planner to recognize the new expression in Scala and potentially add a new expression implementation in the Rust package.\n\n### Adding the Expression in Scala\n\nDataFusion Comet uses a framework based on the `CometExpressionSerde` trait for converting Spark expressions to protobuf. Instead of a large match statement, each expression type has its own serialization handler. For aggregate expressions, use the `CometAggregateExpressionSerde` trait instead.\n\n#### Creating a CometExpressionSerde Implementation\n\nFirst, create an object that extends `CometExpressionSerde[T]` where `T` is the Spark expression type. 
This is typically added to one of the serde files in `spark/src/main/scala/org/apache/comet/serde/` (e.g., `math.scala`, `strings.scala`, `arrays.scala`, etc.).\n\nFor example, the `unhex` function looks like this:\n\n```scala\nobject CometUnhex extends CometExpressionSerde[Unhex] {\n  override def convert(\n      expr: Unhex,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val failOnErrorExpr = exprToProtoInternal(Literal(expr.failOnError), inputs, binding)\n\n    val optExpr =\n      scalarFunctionExprToProtoWithReturnType(\n        \"unhex\",\n        expr.dataType,\n        false,\n        childExpr,\n        failOnErrorExpr)\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n```\n\nThe `CometExpressionSerde` trait provides three methods you can override:\n\n- `convert(expr: T, inputs: Seq[Attribute], binding: Boolean): Option[Expr]` - **Required**. Converts the Spark expression to protobuf. Return `None` if the expression cannot be converted.\n- `getSupportLevel(expr: T): SupportLevel` - Optional. Returns the level of support for the expression. See \"Using getSupportLevel\" section below for details.\n- `getExprConfigName(expr: T): String` - Optional. Returns a short name for configuration keys. Defaults to the Spark class name.\n\nFor simple scalar functions that map directly to a DataFusion function, you can use the built-in `CometScalarFunction` implementation:\n\n```scala\nclassOf[Cos] -> CometScalarFunction(\"cos\")\n```\n\n#### Registering the Expression Handler\n\nOnce you've created your `CometExpressionSerde` implementation, register it in `QueryPlanSerde.scala` by adding it to the appropriate expression map (e.g., `mathExpressions`, `stringExpressions`, `predicateExpressions`, etc.):\n\n```scala\nprivate val mathExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n  // ... other expressions ...\n  classOf[Unhex] -> CometUnhex,\n  classOf[Hex] -> CometHex)\n```\n\nThe `exprToProtoInternal` method will automatically use this mapping to find and invoke your handler when it encounters the corresponding Spark expression type.\n\nA few things to note:\n\n- The `convert` method is recursively called on child expressions using `exprToProtoInternal`, so you'll need to make sure that the child expressions are also converted to protobuf.\n- `scalarFunctionExprToProtoWithReturnType` is for scalar functions that need to return type information. Your expression may use a different method depending on the type of expression.\n- Use helper methods like `createBinaryExpr` and `createUnaryExpr` from `QueryPlanSerde` for common expression patterns.\n\n#### Using getSupportLevel\n\nThe `getSupportLevel` method allows you to control whether an expression should be executed by Comet based on various conditions such as data types, parameter values, or other expression-specific constraints. This is particularly useful when:\n\n1. Your expression only supports specific data types\n2. Your expression has known incompatibilities with Spark's behavior\n3. Your expression has edge cases that aren't yet supported\n\nThe method returns one of three `SupportLevel` values:\n\n- **`Compatible(notes: Option[String] = None)`** - Comet supports this expression with full compatibility with Spark, or may have known differences in specific edge cases that are unlikely to be an issue for most users. 
This is the default if you don't override `getSupportLevel`.\n- **`Incompatible(notes: Option[String] = None)`** - Comet supports this expression but results can be different from Spark. The expression will only be used if `spark.comet.expr.allowIncompatible=true` or the expression-specific config `spark.comet.expr.<exprName>.allowIncompatible=true` is set.\n- **`Unsupported(notes: Option[String] = None)`** - Comet does not support this expression under the current conditions. The expression will not be used and Spark will fall back to its native execution.\n\nAll three support levels accept an optional `notes` parameter to provide additional context about the support level.\n\n##### Examples\n\n**Example 1: Restricting to specific data types**\n\nThe `Abs` expression only supports numeric types:\n\n```scala\nobject CometAbs extends CometExpressionSerde[Abs] {\n  override def getSupportLevel(expr: Abs): SupportLevel = {\n    expr.child.dataType match {\n      case _: NumericType =>\n        Compatible()\n      case _ =>\n        // Spark supports NumericType, DayTimeIntervalType, and YearMonthIntervalType\n        Unsupported(Some(\"Only integral, floating-point, and decimal types are supported\"))\n    }\n  }\n\n  override def convert(\n      expr: Abs,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // ... conversion logic ...\n  }\n}\n```\n\n**Example 2: Validating parameter values**\n\nThe `TruncDate` expression only supports specific format strings:\n\n```scala\nobject CometTruncDate extends CometExpressionSerde[TruncDate] {\n  val supportedFormats: Seq[String] =\n    Seq(\"year\", \"yyyy\", \"yy\", \"quarter\", \"mon\", \"month\", \"mm\", \"week\")\n\n  override def getSupportLevel(expr: TruncDate): SupportLevel = {\n    expr.format match {\n      case Literal(fmt: UTF8String, _) =>\n        if (supportedFormats.contains(fmt.toString.toLowerCase(Locale.ROOT))) {\n          Compatible()\n        } else {\n          Unsupported(Some(s\"Format $fmt is not supported\"))\n        }\n      case _ =>\n        Incompatible(\n          Some(\"Invalid format strings will throw an exception instead of returning NULL\"))\n    }\n  }\n\n  override def convert(\n      expr: TruncDate,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // ... conversion logic ...\n  }\n}\n```\n\n**Example 3: Marking known incompatibilities**\n\nThe `ArrayAppend` expression has known behavioral differences from Spark:\n\n```scala\nobject CometArrayAppend extends CometExpressionSerde[ArrayAppend] {\n  override def getSupportLevel(expr: ArrayAppend): SupportLevel = Incompatible(None)\n\n  override def convert(\n      expr: ArrayAppend,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // ... conversion logic ...\n  }\n}\n```\n\nThis expression will only be used when users explicitly enable incompatible expressions via configuration.\n\n##### How getSupportLevel Affects Execution\n\nWhen the query planner encounters an expression:\n\n1. It first checks if the expression is explicitly disabled via `spark.comet.expr.<exprName>.enabled=false`\n2. It then calls `getSupportLevel` on the expression handler\n3. 
Based on the result:\n   - `Compatible()`: Expression proceeds to conversion\n   - `Incompatible()`: Expression is skipped unless `spark.comet.expr.allowIncompatible=true` or expression-specific allow config is set\n   - `Unsupported()`: Expression is skipped and a fallback to Spark occurs\n\nAny notes provided will be logged to help with debugging and understanding why an expression was not used.\n\n
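To make this concrete, here is one way a user could opt in to an expression marked `Incompatible()` from a Spark session (a sketch using the config keys named above; `ArrayAppend` is just an example config name, derived from `getExprConfigName`):\n\n```scala\n// Allow all expressions marked Incompatible to run natively in Comet\nspark.conf.set(\"spark.comet.expr.allowIncompatible\", \"true\")\n\n// Or opt in for a single expression via its expression-specific config\nspark.conf.set(\"spark.comet.expr.ArrayAppend.allowIncompatible\", \"true\")\n```\n\n#### Adding Spark-side Tests for the New Expression\n\nIt is important to verify that the new expression is correctly recognized by the native execution engine and matches the expected Spark behavior. The preferred way to add test coverage is to write a SQL test file using the SQL file test framework. This approach is simpler than writing Scala test code and makes it easy to cover many input combinations and edge cases.\n\n##### Writing a SQL test file\n\nCreate a `.sql` file under the appropriate subdirectory in `spark/src/test/resources/sql-tests/expressions/` (e.g., `string/`, `math/`, `array/`). The file should create a table with test data, then run queries that exercise the expression. Here is an example for the `unhex` expression:\n\n```sql\nstatement\nCREATE TABLE test_unhex(col string) USING parquet\n\nstatement\nINSERT INTO test_unhex VALUES\n  ('537061726B2053514C'),\n  ('737472696E67'),\n  ('\\0'),\n  (''),\n  ('###'),\n  ('G123'),\n  ('hello'),\n  ('A1B'),\n  ('0A1B'),\n  (NULL)\n\n-- column argument\nquery\nSELECT unhex(col) FROM test_unhex\n\n-- literal arguments\nquery\nSELECT unhex('48656C6C6F'), unhex(''), unhex(NULL)\n```\n\nEach `query` block automatically runs the SQL through both Spark and Comet and compares results, and also verifies that Comet executes the expression natively (not falling back to Spark).\n\nRun the test with:\n\n```shell\n./mvnw test -Dsuites=\"org.apache.comet.CometSqlFileTestSuite unhex\" -Dtest=none\n```\n\nFor full documentation on the test file format — including directives like `ConfigMatrix`, query modes like `spark_answer_only` and `tolerance`, handling known bugs with `ignore(...)`, and tips for writing thorough tests — see the [SQL File Tests](sql-file-tests.md) guide.\n\n##### Tips\n\n- **Cover both column references and literals.** Comet often uses different code paths for each. The SQL file test suite automatically disables constant folding, so all-literal queries are evaluated natively.\n- **Include edge cases** such as `NULL`, empty strings, boundary values, `NaN`, and multibyte UTF-8 characters.\n- **Keep one file per expression** to make failures easy to locate.\n\n##### Scala tests (alternative)\n\nFor cases that require programmatic setup or custom assertions beyond what SQL files support, you can also add Scala test cases in `CometExpressionSuite` using the `checkSparkAnswerAndOperator` method:\n\n```scala\ntest(\"unhex\") {\n  val table = \"unhex_table\"\n  withTable(table) {\n    sql(s\"create table $table(col string) using parquet\")\n\n    sql(s\"\"\"INSERT INTO $table VALUES\n      |('537061726B2053514C'),\n      |('737472696E67'),\n      |('\\\\0'),\n      |(''),\n      |('###'),\n      |('G123'),\n      |('hello'),\n      |('A1B'),\n      |('0A1B')\"\"\".stripMargin)\n\n    checkSparkAnswerAndOperator(s\"SELECT unhex(col) FROM $table\")\n  }\n}\n```\n\nWhen writing Scala tests with literal values (e.g., `SELECT my_func('literal')`), Spark's constant folding optimizer may evaluate the expression at planning time, bypassing Comet. 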
To prevent this, disable constant folding:\n\n```scala\ntest(\"my_func with literals\") {\n  withSQLConf(SQLConf.OPTIMIZER_EXCLUDED_RULES.key ->\n      \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n    checkSparkAnswerAndOperator(\"SELECT my_func('literal_value')\")\n  }\n}\n```\n\n### Adding the Expression To the Protobuf Definition\n\nOnce you have the expression implemented in Scala, you might need to update the protobuf definition to include the new expression. You may not need to do this if the expression is already covered by the existing protobuf definition (e.g. you're adding a new scalar function that uses the `ScalarFunc` message).\n\nYou can find the protobuf definition in `native/proto/src/proto/expr.proto`, and in particular the `Expr` or potentially the `AggExpr` messages. If you were to add a new expression called `Add2`, you would add a new case to the `Expr` message like so:\n\n```proto\nmessage Expr {\n  oneof expr_struct {\n    ...\n    Add2 add2 = 100;  // Choose the next available number\n  }\n}\n```\n\nThen you would define the `Add2` message like so:\n\n```proto\nmessage Add2 {\n  Expr left = 1;\n  Expr right = 2;\n}\n```\n\n### Adding the Expression in Rust\n\nWith the serialization complete, the next step is to implement the expression in Rust and ensure that the incoming plan can make use of it.\n\nHow this works is somewhat dependent on the type of expression you're adding. Expression implementations live in the `native/spark-expr/src/` directory, organized by category (e.g., `math_funcs/`, `string_funcs/`, `array_funcs/`).\n\n#### Generally Adding a New Expression\n\nIf you're adding a new expression that requires custom protobuf serialization, you may need to:\n\n1. Add a new message to the protobuf definition in `native/proto/src/proto/expr.proto`\n2. Add a native expression handler in `expression_registry.rs` to deserialize the new protobuf message type and\n   create a native expression\n\nFor most expressions, you can skip this step if you're using the existing scalar function infrastructure.\n\n#### Adding a New Scalar Function Expression\n\nFor a new scalar function, you can reuse a lot of code by updating the `create_comet_physical_fun` method in `native/spark-expr/src/comet_scalar_funcs.rs`. Add a match case for your function name:\n\n```rust\nmatch fun_name {\n    // ... other functions ...\n    \"unhex\" => {\n        let func = Arc::new(spark_unhex);\n        make_comet_scalar_udf!(\"unhex\", func, without data_type)\n    }\n    // ... more functions ...\n}\n```\n\nThe `make_comet_scalar_udf!` macro has several variants depending on whether your function needs:\n\n- A data type parameter: `make_comet_scalar_udf!(\"ceil\", spark_ceil, data_type)`\n- No data type parameter: `make_comet_scalar_udf!(\"unhex\", func, without data_type)`\n- An eval mode: `make_comet_scalar_udf!(\"decimal_div\", spark_decimal_div, data_type, eval_mode)`\n- A fail_on_error flag: `make_comet_scalar_udf!(\"spark_modulo\", func, without data_type, fail_on_error)`\n\n#### Implementing the Function\n\nThen implement your function in an appropriate module under `native/spark-expr/src/`. 
The function signature will look like:\n\n```rust\npub fn spark_unhex(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    // Do the work here\n}\n```\n\nIf your function uses the data type or eval mode, the signature will include those as additional parameters:\n\n```rust\npub fn spark_ceil(\n    args: &[ColumnarValue],\n    data_type: &DataType\n) -> Result<ColumnarValue, DataFusionError> {\n    // Implementation\n}\n```\n\n### API Differences Between Spark Versions\n\nIf the expression you're adding has different behavior across different Spark versions, you'll need to account for that in your implementation. There are two tools at your disposal to help with this:\n\n1. Shims that exist in `spark/src/main/spark-$SPARK_VERSION/org/apache/comet/shims/CometExprShim.scala` for each Spark version. These shims are used to provide compatibility between different Spark versions.\n2. Variables that correspond to the Spark version, such as `isSpark33Plus`, which can be used to conditionally execute code based on the Spark version (see the sketch below).\n\n
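For illustration, a minimal sketch of the second approach (only the `isSpark33Plus` flag is named above; the helper methods are hypothetical):\n\n```scala\n// Hypothetical serde fragment: pick version-specific behavior at\n// plan-serialization time using a Spark-version flag.\nval optExpr = if (isSpark33Plus) {\n  // Spark 3.3+ behavior\n  convertUsingNewBehavior(expr, inputs, binding)\n} else {\n  // fallback path for older Spark versions\n  convertUsingLegacyBehavior(expr, inputs, binding)\n}\n```\n\n## Shimming to Support Different Spark Versions\n\nIf the expression you're adding has different behavior across different Spark versions, you can use the shim system located in `spark/src/main/spark-$SPARK_VERSION/org/apache/comet/shims/CometExprShim.scala` for each Spark version.\n\nThe `CometExprShim` trait provides several mechanisms for handling version differences:\n\n1. **Version-specific methods** - Override methods in the trait to provide version-specific behavior\n2. **Version-specific expression handling** - Use `versionSpecificExprToProtoInternal` to handle expressions that only exist in certain Spark versions\n\nFor example, the `StringDecode` expression only exists in certain Spark versions. The shim handles this:\n\n```scala\ntrait CometExprShim {\n  def versionSpecificExprToProtoInternal(\n      expr: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n    expr match {\n      case s: StringDecode =>\n        stringDecode(expr, s.charset, s.bin, inputs, binding)\n      case _ => None\n    }\n  }\n}\n```\n\nThe `QueryPlanSerde.exprToProtoInternal` method calls `versionSpecificExprToProtoInternal` first, allowing shims to intercept and handle version-specific expressions before falling back to the standard expression maps.\n\nYour `CometExpressionSerde` implementation can also access shim methods by mixing in the `CometExprShim` trait, though in most cases you can directly access the expression properties if they're available across all supported Spark versions.\n\n## Resources\n\n- [Variance PR](https://github.com/apache/datafusion-comet/pull/297)\n  - Aggregation function\n- [Unhex PR](https://github.com/apache/datafusion-comet/pull/342)\n  - Basic scalar function with shims for different Spark versions\n"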
  },
  {
    "path": "docs/source/contributor-guide/adding_a_new_operator.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Adding a New Operator\n\nThis guide explains how to add support for a new Spark physical operator in Apache DataFusion Comet.\n\n## Overview\n\n`CometExecRule` is responsible for replacing Spark operators with Comet operators. There are different approaches to\nimplementing Comet operators depending on where they execute and how they integrate with the native execution engine.\n\n### Types of Comet Operators\n\n`CometExecRule` maintains two distinct maps of operators:\n\n#### 1. Native Operators (`nativeExecs` map)\n\nThese operators run entirely in native Rust code and are the primary way to accelerate Spark workloads. Native\noperators are registered in the `nativeExecs` map in `CometExecRule.scala`.\n\nKey characteristics of native operators:\n\n- They are converted to their corresponding native protobuf representation\n- They execute as DataFusion operators in the native engine\n- The `CometOperatorSerde` implementation handles enable/disable checks, support validation, and protobuf serialization\n\nExamples: `ProjectExec`, `FilterExec`, `SortExec`, `HashAggregateExec`, `SortMergeJoinExec`, `ExpandExec`, `WindowExec`\n\n#### 2. Sink Operators (`sinks` map)\n\nSink operators serve as entry points (data sources) for native execution blocks. They are registered in the `sinks`\nmap in `CometExecRule.scala`.\n\nKey characteristics of sinks:\n\n- They become `ScanExec` operators in the native plan (see `operator2Proto` in `CometExecRule.scala`)\n- They can be leaf nodes that feed data into native execution blocks\n- They are wrapped with `CometScanWrapper` or `CometSinkPlaceHolder` during plan transformation\n- Examples include operators that bring data from various sources into native execution\n\nExamples: `UnionExec`, `CoalesceExec`, `CollectLimitExec`, `TakeOrderedAndProjectExec`\n\nSpecial sinks (not in the `sinks` map but also treated as sinks):\n\n- `CometScanExec` - File scans\n- `CometSparkToColumnarExec` - Conversion from Spark row format\n- `ShuffleExchangeExec` / `BroadcastExchangeExec` - Exchange operators\n\n#### 3. Comet JVM Operators\n\nThese operators run in the JVM but are part of the Comet execution path. 
For JVM operators, all checks happen\nin `CometExecRule` rather than using `CometOperatorSerde`, because they don't need protobuf serialization.\n\nExamples: `CometBroadcastExchangeExec`, `CometShuffleExchangeExec`\n\n### Choosing the Right Operator Type\n\nWhen adding a new operator, choose based on these criteria:\n\n**Use Native Operators when:**\n\n- The operator transforms data (e.g., project, filter, sort, aggregate, join)\n- The operator has a direct DataFusion equivalent or custom implementation\n- The operator consumes native child operators and produces native output\n- The operator is in the middle of an execution pipeline\n\n**Use Sink Operators when:**\n\n- The operator serves as a data source for native execution (becomes a `ScanExec`)\n- The operator brings data from non-native sources (e.g., `UnionExec` combining multiple inputs)\n- The operator is typically a leaf or near-leaf node in the execution tree\n- The operator needs special handling to interface with the native engine\n\n**Implementation Note for Sinks:**\n\nSink operators are handled specially in `CometExecRule.operator2Proto`. Instead of converting to their own operator\ntype, they are converted to `ScanExec` in the native plan. This allows them to serve as entry points for native\nexecution blocks. The original Spark operator is wrapped with `CometScanWrapper` or `CometSinkPlaceHolder` which\nmanages the boundary between JVM and native execution.\n\n## Implementing a Native Operator\n\nThis section focuses on adding a native operator, which is the most common and complex case.\n\n### Step 1: Define the Protobuf Message\n\nFirst, add the operator definition to `native/proto/src/proto/operator.proto`.\n\n#### Add to the Operator Message\n\nAdd your new operator to the `oneof op_struct` in the main `Operator` message:\n\n```proto\nmessage Operator {\n  repeated Operator children = 1;\n  uint32 plan_id = 2;\n\n  oneof op_struct {\n    Scan scan = 100;\n    Projection projection = 101;\n    Filter filter = 102;\n    // ... existing operators ...\n    YourNewOperator your_new_operator = 112;  // Choose next available number\n  }\n}\n```\n\n#### Define the Operator Message\n\nCreate a message for your operator with the necessary fields:\n\n```proto\nmessage YourNewOperator {\n  // Fields specific to your operator\n  repeated spark.spark_expression.Expr expressions = 1;\n  // Add other configuration fields as needed\n}\n```\n\nFor reference, see existing operators like `Filter` (simple), `HashAggregate` (complex), or `Sort` (with ordering).\n\n### Step 2: Create a CometOperatorSerde Implementation\n\nCreate a new Scala file in `spark/src/main/scala/org/apache/comet/serde/operator/` (e.g., `CometYourOperator.scala`) that extends `CometOperatorSerde[T]` where `T` is the Spark operator type.\n\nThe `CometOperatorSerde` trait provides several key methods:\n\n- `enabledConfig: Option[ConfigEntry[Boolean]]` - Configuration to enable/disable this operator\n- `getSupportLevel(operator: T): SupportLevel` - Determines if the operator is supported\n- `convert(op: T, builder: Operator.Builder, childOp: Operator*): Option[Operator]` - Converts to protobuf\n- `createExec(nativeOp: Operator, op: T): CometNativeExec` - Creates the Comet execution operator wrapper\n\nThe validation workflow in `CometExecRule.isOperatorEnabled`:\n\n1. Checks if the operator is enabled via `enabledConfig`\n2. Calls `getSupportLevel()` to determine compatibility\n3. 
Handles Compatible/Incompatible/Unsupported cases with appropriate fallback messages\n\n#### Simple Example (Filter)\n\n```scala\nobject CometFilterExec extends CometOperatorSerde[FilterExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_EXEC_FILTER_ENABLED)\n\n  override def convert(\n      op: FilterExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    val cond = exprToProto(op.condition, op.child.output)\n\n    if (cond.isDefined && childOp.nonEmpty) {\n      val filterBuilder = OperatorOuterClass.Filter\n        .newBuilder()\n        .setPredicate(cond.get)\n      Some(builder.setFilter(filterBuilder).build())\n    } else {\n      withInfo(op, op.condition, op.child)\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: FilterExec): CometNativeExec = {\n    CometFilterExec(nativeOp, op, op.output, op.condition, op.child, SerializedPlan(None))\n  }\n}\n\ncase class CometFilterExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    condition: Expression,\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec {\n\n  override def outputPartitioning: Partitioning = child.outputPartitioning\n\n  override def outputOrdering: Seq[SortOrder] = child.outputOrdering\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n}\n```\n\n#### More Complex Example (Project)\n\n```scala\nobject CometProjectExec extends CometOperatorSerde[ProjectExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_EXEC_PROJECT_ENABLED)\n\n  override def convert(\n      op: ProjectExec,\n      builder: Operator.Builder,\n      childOp: Operator*): Option[OperatorOuterClass.Operator] = {\n    val exprs = op.projectList.map(exprToProto(_, op.child.output))\n\n    if (exprs.forall(_.isDefined) && childOp.nonEmpty) {\n      val projectBuilder = OperatorOuterClass.Projection\n        .newBuilder()\n        .addAllProjectList(exprs.map(_.get).asJava)\n      Some(builder.setProjection(projectBuilder).build())\n    } else {\n      withInfo(op, op.projectList: _*)\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: ProjectExec): CometNativeExec = {\n    CometProjectExec(nativeOp, op, op.output, op.projectList, op.child, SerializedPlan(None))\n  }\n}\n\ncase class CometProjectExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    projectList: Seq[NamedExpression],\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec\n    with PartitioningPreservingUnaryExecNode {\n\n  override def producedAttributes: AttributeSet = outputSet\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n}\n```\n\n#### Using getSupportLevel\n\nOverride `getSupportLevel` to control operator support based on specific conditions:\n\n```scala\noverride def getSupportLevel(operator: YourOperatorExec): SupportLevel = {\n  // Check for unsupported features\n  if (operator.hasUnsupportedFeature) {\n    return Unsupported(Some(\"Feature X is not supported\"))\n  }\n\n  // Check for incompatible behavior\n  if (operator.hasKnownDifferences) {\n    return Incompatible(Some(\"Known 
differences in edge case Y\"))\n  }\n\n  Compatible()\n}\n```\n\nSupport levels:\n\n- **`Compatible()`** - Fully compatible with Spark (default)\n- **`Incompatible()`** - Supported but may differ; requires explicit opt-in\n- **`Unsupported()`** - Not supported under current conditions\n\nNote that Comet will treat an operator as incompatible if any of the child expressions are incompatible.\n\n### Step 3: Register the Operator\n\nAdd your operator to the appropriate map in `CometExecRule.scala`:\n\n#### For Native Operators\n\nAdd to the `nativeExecs` map (`CometExecRule.scala`):\n\n```scala\nval nativeExecs: Map[Class[_ <: SparkPlan], CometOperatorSerde[_]] =\n  Map(\n    classOf[ProjectExec] -> CometProjectExec,\n    classOf[FilterExec] -> CometFilterExec,\n    // ... existing operators ...\n    classOf[YourOperatorExec] -> CometYourOperator,\n  )\n```\n\n#### For Sink Operators\n\nIf your operator is a sink (becomes a `ScanExec` in the native plan), add to the `sinks` map (`CometExecRule.scala`):\n\n```scala\nval sinks: Map[Class[_ <: SparkPlan], CometOperatorSerde[_]] =\n  Map(\n    classOf[CoalesceExec] -> CometCoalesceExec,\n    classOf[UnionExec] -> CometUnionExec,\n    // ... existing operators ...\n    classOf[YourSinkOperatorExec] -> CometYourSinkOperator,\n  )\n```\n\nNote: The `allExecs` map automatically combines both `nativeExecs` and `sinks`, so you only need to add to one of the two maps.\n\n### Step 4: Add Configuration Entry\n\nAdd a configuration entry in `common/src/main/scala/org/apache/comet/CometConf.scala`:\n\n```scala\nval COMET_EXEC_YOUR_OPERATOR_ENABLED: ConfigEntry[Boolean] =\n  conf(\"spark.comet.exec.yourOperator.enabled\")\n    .doc(\"Whether to enable your operator in Comet\")\n    .booleanConf\n    .createWithDefault(true)\n```\n\nRun `make` to update the user guide. The new configuration option will be added to `docs/source/user-guide/latest/configs.md`.\n\n### Step 5: Implement the Native Operator in Rust\n\n#### Update the Planner\n\nIn `native/core/src/execution/planner.rs`, add a match case in the operator deserialization logic to handle your new protobuf message:\n\n```rust\nuse datafusion_comet_proto::spark_operator::operator::OpStruct;\n\n// In the create_plan or similar method:\nmatch op.op_struct.as_ref() {\n    Some(OpStruct::Scan(scan)) => {\n        // ... existing cases ...\n    }\n    Some(OpStruct::YourNewOperator(your_op)) => {\n        create_your_operator_exec(your_op, children, session_ctx)\n    }\n    // ... other cases ...\n}\n```\n\n#### Implement the Operator\n\nCreate the operator implementation, either in an existing file or a new file in `native/core/src/execution/operators/`:\n\n```rust\nuse datafusion::physical_plan::{ExecutionPlan, ...};\nuse datafusion_comet_proto::spark_operator::YourNewOperator;\n\npub fn create_your_operator_exec(\n    op: &YourNewOperator,\n    children: Vec<Arc<dyn ExecutionPlan>>,\n    session_ctx: &SessionContext,\n) -> Result<Arc<dyn ExecutionPlan>, ExecutionError> {\n    // Deserialize expressions and configuration\n    // Create and return the execution plan\n\n    // Option 1: Use existing DataFusion operator\n    // Ok(Arc::new(SomeDataFusionExec::try_new(...)?))\n\n    // Option 2: Implement custom operator (see ExpandExec for example)\n    // Ok(Arc::new(YourCustomExec::new(...)))\n}\n```\n\nFor custom operators, you'll need to implement the `ExecutionPlan` trait. 
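A rough skeleton is sketched below, assuming a recent DataFusion release (the exact trait surface varies between DataFusion versions, and `YourCustomExec` is an illustrative name):\n\n```rust\nuse std::any::Any;\nuse std::fmt::Formatter;\nuse std::sync::Arc;\n\nuse datafusion::common::Result;\nuse datafusion::execution::{SendableRecordBatchStream, TaskContext};\nuse datafusion::physical_plan::{DisplayAs, DisplayFormatType, ExecutionPlan, PlanProperties};\n\n#[derive(Debug)]\npub struct YourCustomExec {\n    input: Arc<dyn ExecutionPlan>,\n    /// Pre-computed schema, partitioning, and ordering information.\n    cache: PlanProperties,\n}\n\nimpl DisplayAs for YourCustomExec {\n    fn fmt_as(&self, _t: DisplayFormatType, f: &mut Formatter) -> std::fmt::Result {\n        write!(f, \"YourCustomExec\")\n    }\n}\n\nimpl ExecutionPlan for YourCustomExec {\n    fn name(&self) -> &str {\n        \"YourCustomExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn properties(&self) -> &PlanProperties {\n        &self.cache\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        vec![&self.input]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        Ok(Arc::new(Self {\n            input: Arc::clone(&children[0]),\n            cache: self.cache.clone(),\n        }))\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        // Wrap the child's stream and apply this operator's per-batch logic;\n        // this sketch returns the child stream unchanged.\n        let child_stream = self.input.execute(partition, context)?;\n        Ok(child_stream)\n    }\n}\n```\n\n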
See `native/core/src/execution/operators/expand.rs` or `scan.rs` for examples.\n\n### Step 6: Add Tests\n\n#### Scala Integration Tests\n\nAdd tests in `spark/src/test/scala/org/apache/comet/exec/CometExecSuite.scala` or a related test suite:\n\n```scala\ntest(\"your operator\") {\n  withTable(\"test_table\") {\n    sql(\"CREATE TABLE test_table(col1 INT, col2 STRING) USING parquet\")\n    sql(\"INSERT INTO test_table VALUES (1, 'a'), (2, 'b')\")\n\n    // Test query that uses your operator\n    checkSparkAnswerAndOperator(\n      \"SELECT * FROM test_table WHERE col1 > 1\"\n    )\n  }\n}\n```\n\nThe `checkSparkAnswerAndOperator` helper verifies:\n\n1. Results match Spark's native execution\n2. Your operator is actually being used (not falling back)\n\n#### Rust Unit Tests\n\nAdd unit tests in your Rust implementation file:\n\n```rust\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_your_operator() {\n        // Test operator creation and execution\n    }\n}\n```\n\n### Step 7: Update Documentation\n\nAdd your operator to the supported operators list in `docs/source/user-guide/latest/compatibility.md` or similar documentation.\n\n## Implementing a Sink Operator\n\nSink operators are converted to `ScanExec` in the native plan and serve as entry points for native execution. The implementation is simpler than native operators because sink operators extend the `CometSink` base class which provides the conversion logic.\n\n### Step 1: Create a CometOperatorSerde Implementation\n\nCreate a new Scala file in `spark/src/main/scala/org/apache/spark/sql/comet/` (e.g., `CometYourSinkOperator.scala`):\n\n```scala\nimport org.apache.comet.serde.operator.CometSink\n\nobject CometYourSinkOperator extends CometSink[YourSinkExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_EXEC_YOUR_SINK_ENABLED)\n\n  // Optional: Override if the data produced is FFI safe\n  override def isFfiSafe: Boolean = false\n\n  override def createExec(\n      nativeOp: OperatorOuterClass.Operator,\n      op: YourSinkExec): CometNativeExec = {\n    CometSinkPlaceHolder(\n      nativeOp,\n      op,\n      CometYourSinkExec(op, op.output, /* other parameters */, op.child))\n  }\n\n  // Optional: Override getSupportLevel if you need custom validation beyond data types\n  override def getSupportLevel(operator: YourSinkExec): SupportLevel = {\n    // CometSink base class already checks data types in convert()\n    // Add any additional validation here\n    Compatible()\n  }\n}\n\n/**\n * Comet implementation of YourSinkExec that supports columnar processing\n */\ncase class CometYourSinkExec(\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    /* other parameters */,\n    child: SparkPlan)\n    extends CometExec\n    with UnaryExecNode {\n\n  override protected def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    // Implement columnar execution logic\n    val rdd = child.executeColumnar()\n    // Apply your sink operator's logic\n    rdd\n  }\n\n  override def outputPartitioning: Partitioning = {\n    // Define output partitioning\n  }\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n}\n```\n\n**Key Points:**\n\n- Extend `CometSink[T]` which provides the `convert()` method that transforms the operator to `ScanExec`\n- The `CometSink.convert()` method (in `CometSink.scala`) automatically handles:\n  - Data type validation\n  - Conversion to `ScanExec` in the 
native plan\n  - Setting FFI safety flags\n- You must implement `createExec()` to wrap the operator appropriately\n- You typically need to create a corresponding `CometYourSinkExec` class that implements columnar execution\n\n### Step 2: Register the Sink\n\nAdd your sink to the `sinks` map in `CometExecRule.scala`:\n\n```scala\nval sinks: Map[Class[_ <: SparkPlan], CometOperatorSerde[_]] =\n  Map(\n    classOf[CoalesceExec] -> CometCoalesceExec,\n    classOf[UnionExec] -> CometUnionExec,\n    classOf[YourSinkExec] -> CometYourSinkOperator,\n  )\n```\n\n### Step 3: Add Configuration\n\nAdd a configuration entry in `CometConf.scala`:\n\n```scala\nval COMET_EXEC_YOUR_SINK_ENABLED: ConfigEntry[Boolean] =\n  conf(\"spark.comet.exec.yourSink.enabled\")\n    .doc(\"Whether to enable your sink operator in Comet\")\n    .booleanConf\n    .createWithDefault(true)\n```\n\n### Step 4: Add Tests\n\nTest that your sink operator correctly feeds data into native execution:\n\n```scala\ntest(\"your sink operator\") {\n  withTable(\"test_table\") {\n    sql(\"CREATE TABLE test_table(col1 INT, col2 STRING) USING parquet\")\n    sql(\"INSERT INTO test_table VALUES (1, 'a'), (2, 'b')\")\n\n    // Test query that uses your sink operator followed by native operators\n    checkSparkAnswerAndOperator(\n      \"SELECT col1 + 1 FROM (/* query that produces YourSinkExec */)\"\n    )\n  }\n}\n```\n\n**Important Notes for Sinks:**\n\n- Sinks extend the `CometSink` base class, which provides the `convert()` method implementation\n- The `CometSink.convert()` method automatically handles conversion to `ScanExec` in the native plan\n- You don't need to add protobuf definitions for sink operators - they use the standard `Scan` message\n- You don't need a Rust implementation for sinks - they become standard `ScanExec` operators that read from the JVM\n- Sink implementations should provide a columnar-compatible execution class (e.g., `CometCoalesceExec`)\n- The `createExec()` method wraps the operator with `CometSinkPlaceHolder` to manage the JVM-to-native boundary\n- See `CometCoalesceExec.scala` or `CometUnionExec` in `spark/src/main/scala/org/apache/spark/sql/comet/` for reference implementations\n\n## Implementing a JVM Operator\n\nFor operators that run in the JVM:\n\n1. Create a new operator class extending appropriate Spark base classes in `spark/src/main/scala/org/apache/comet/`\n2. Add matching logic in `CometExecRule.scala` to transform the Spark operator\n3. No protobuf or Rust implementation needed\n\nExample pattern from `CometExecRule.scala`:\n\n```scala\ncase s: ShuffleExchangeExec if nativeShuffleSupported(s) =>\n  CometShuffleExchangeExec(s, shuffleType = CometNativeShuffle)\n```\n\n## Common Patterns and Helpers\n\n### Expression Conversion\n\nUse `QueryPlanSerde.exprToProto` to convert Spark expressions to protobuf:\n\n```scala\nval protoExpr = exprToProto(sparkExpr, inputSchema)\n```\n\n### Handling Fallback\n\nUse `withInfo` to tag operators with fallback reasons:\n\n```scala\nif (!canConvert) {\n  withInfo(op, \"Reason for fallback\", childNodes: _*)\n  return None\n}\n```\n\n### Child Operator Validation\n\nAlways check that child operators were successfully converted:\n\n```scala\nif (childOp.isEmpty) {\n  // Cannot convert if children failed\n  return None\n}\n```\n\n## Debugging Tips\n\n1. **Enable verbose explain**: Set `spark.comet.explain.format=verbose` to see detailed plan transformations\n2. 
**Check fallback reasons**: Set `spark.comet.logFallbackReasons.enabled=true` to log why operators fall back to Spark\n3. **Verify protobuf**: Add debug prints in Rust to inspect deserialized operators\n4. **Use EXPLAIN**: Run `EXPLAIN EXTENDED` on queries to see the physical plan\n\n
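As a usage sketch, the first two settings can be applied to a session before re-running the problem query (assuming a `spark` session and the `test_table` from the earlier test examples; whether a given setting takes effect at runtime may vary):\n\n```scala\nspark.conf.set(\"spark.comet.explain.format\", \"verbose\")\nspark.conf.set(\"spark.comet.logFallbackReasons.enabled\", \"true\")\n\n// Then inspect the physical plan for the query in question.\nspark.sql(\"EXPLAIN EXTENDED SELECT col1 + 1 FROM test_table\").show(false)\n```\n"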
  },
  {
    "path": "docs/source/contributor-guide/benchmark-results/blaze-0.5.0-tpcds.json",
    "content": "{\n    \"engine\": \"datafusion-comet\",\n    \"benchmark\": \"tpcds\",\n    \"data_path\": \"/mnt/bigdata/tpcds/sf100\",\n    \"query_path\": \"/mnt/bigdata/tpcds/queries\",\n    \"spark_conf\": {\n        \"spark.blaze.forceShuffledHashJoin\": \"true\",\n        \"spark.driver.port\": \"35949\",\n        \"spark.eventLog.enabled\": \"true\",\n        \"spark.sql.analyzer.canonicalization.multiCommutativeOpMemoryOptThreshold\": \"2147483647\",\n        \"spark.cores.max\": \"16\",\n        \"spark.driver.extraClassPath\": \"/opt/blaze/blaze-engine-spark-3.5-release-5.0.0-SNAPSHOT.jar\",\n        \"spark.app.submitTime\": \"1753976545258\",\n        \"spark.executor.memory\": \"16g\",\n        \"spark.shuffle.manager\": \"org.apache.spark.sql.execution.blaze.shuffle.BlazeShuffleManager\",\n        \"spark.executor.instances\": \"2\",\n        \"spark.app.startTime\": \"1753976545559\",\n        \"spark.serializer.objectStreamReset\": \"100\",\n        \"spark.driver.host\": \"10.0.0.118\",\n        \"spark.submit.deployMode\": \"client\",\n        \"spark.app.name\": \"blaze benchmark derived from tpcds\",\n        \"spark.executor.cores\": \"8\",\n        \"spark.driver.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.executor.memoryOverhead\": \"16g\",\n        \"spark.sql.extensions\": \"org.apache.spark.sql.blaze.BlazeSparkSessionExtension\",\n        \"spark.blaze.enable\": \"true\",\n        \"spark.sql.adaptive.enabled\": \"true\",\n        \"spark.executor.id\": \"driver\",\n        \"spark.master\": \"spark://woody:7077\",\n        \"spark.sql.adaptive.forceApply\": \"true\",\n        \"spark.repl.local.jars\": \"file:///opt/blaze/blaze-engine-spark-3.5-release-5.0.0-SNAPSHOT.jar\",\n        \"spark.memory.offHeap.enabled\": \"false\",\n        \"spark.driver.memory\": \"8G\",\n        \"spark.jars\": \"file:///opt/blaze/blaze-engine-spark-3.5-release-5.0.0-SNAPSHOT.jar\",\n        \"spark.app.initial.jar.urls\": \"spark://10.0.0.118:35949/jars/blaze-engine-spark-3.5-release-5.0.0-SNAPSHOT.jar\",\n        \"spark.app.id\": \"app-20250731094225-0000\",\n        \"spark.rdd.compress\": \"True\",\n        \"spark.executor.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED 
--add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.submit.pyFiles\": \"\",\n        \"spark.sql.warehouse.dir\": \"file:/home/andy/git/apache/datafusion-comet/dev/benchmarks/spark-warehouse\",\n        \"spark.executor.extraClassPath\": \"/opt/blaze/blaze-engine-spark-3.5-release-5.0.0-SNAPSHOT.jar\"\n    },\n    \"1\": [\n        4.29833984375\n    ],\n    \"2\": [\n        3.2578773498535156\n    ],\n    \"3\": [\n        1.8652851581573486\n    ],\n    \"4\": [\n        28.0850727558136\n    ],\n    \"5\": [\n        5.483048439025879\n    ],\n    \"6\": [\n        1.8989448547363281\n    ],\n    \"7\": [\n        2.7168705463409424\n    ],\n    \"8\": [\n        2.0041682720184326\n    ],\n    \"9\": [\n        6.036026239395142\n    ],\n    \"10\": [\n        2.158907175064087\n    ],\n    \"11\": [\n        12.088945388793945\n    ],\n    \"12\": [\n        1.6278820037841797\n    ],\n    \"13\": [\n        4.141969919204712\n    ],\n    \"14\": [\n        27.16408944129944\n    ],\n    \"15\": [\n        7.972369909286499\n    ],\n    \"16\": [\n        19.509708166122437\n    ],\n    \"17\": [\n        2.9771337509155273\n    ],\n    \"18\": [\n        3.184837818145752\n    ],\n    \"19\": [\n        3.512895107269287\n    ],\n    \"20\": [\n        2.5250091552734375\n    ],\n    \"21\": [\n        2.032010316848755\n    ],\n    \"22\": [\n        10.054096698760986\n    ],\n    \"23\": [\n        37.075517892837524\n    ],\n    \"24\": [\n        4.5132224559783936\n    ],\n    \"25\": [\n        3.260453939437866\n    ],\n    \"26\": [\n        1.6490797996520996\n    ],\n    \"27\": [\n        2.562084436416626\n    ],\n    \"28\": [\n        6.1220972537994385\n    ],\n    \"29\": [\n        3.355574607849121\n    ],\n    \"30\": [\n        1.3987014293670654\n    ],\n    \"31\": [\n        5.99055290222168\n    ],\n    \"32\": [\n        1.860522985458374\n    ],\n    \"33\": [\n        3.1227269172668457\n    ],\n    \"34\": [\n        1.4920480251312256\n    ],\n    \"35\": [\n        3.3718972206115723\n    ],\n    \"36\": [\n        3.137023687362671\n    ],\n    \"37\": [\n        2.0510008335113525\n    ],\n    \"38\": [\n        3.81714129447937\n    ],\n    \"39\": [\n        13.598978281021118\n    ],\n    \"40\": [\n        3.6551194190979004\n    ],\n    \"41\": [\n        0.20307397842407227\n    ],\n    \"42\": [\n        1.4140422344207764\n    ],\n    \"43\": [\n        1.4719886779785156\n    ],\n    \"44\": [\n        1.0411145687103271\n    ],\n    \"45\": [\n        2.1550612449645996\n    ],\n    \"46\": [\n        4.502093076705933\n    ],\n    \"47\": [\n        5.857606649398804\n    ],\n    \"48\": [\n        4.466501235961914\n    ],\n    \"49\": [\n        4.6020188331604\n    ],\n    \"50\": [\n        3.786076784133911\n    ],\n    \"51\": [\n        9.466694355010986\n    ],\n    \"52\": [\n        1.7045836448669434\n    ],\n    \"53\": [\n        1.6680505275726318\n    ],\n    \"54\": [\n        3.2161707878112793\n    ],\n    \"55\": [\n        1.3395226001739502\n    ],\n    \"56\": [\n        2.775632619857788\n    ],\n    \"57\": [\n        2.668205976486206\n    
],\n    \"58\": [\n        4.912959575653076\n    ],\n    \"59\": [\n        3.08463454246521\n    ],\n    \"60\": [\n        2.8756072521209717\n    ],\n    \"61\": [\n        4.997696161270142\n    ],\n    \"62\": [\n        1.0771088600158691\n    ],\n    \"63\": [\n        1.4476990699768066\n    ],\n    \"64\": [\n        8.340136051177979\n    ],\n    \"65\": [\n        9.196853160858154\n    ],\n    \"66\": [\n        3.8863117694854736\n    ],\n    \"67\": [\n        24.989885330200195\n    ],\n    \"68\": [\n        4.229811429977417\n    ],\n    \"69\": [\n        2.1849424839019775\n    ],\n    \"70\": [\n        3.462109088897705\n    ],\n    \"71\": [\n        2.9780192375183105\n    ],\n    \"72\": [\n        98.07282423973083\n    ],\n    \"73\": [\n        1.2939915657043457\n    ],\n    \"74\": [\n        17.994235277175903\n    ],\n    \"75\": [\n        9.07810640335083\n    ],\n    \"76\": [\n        1.987945795059204\n    ],\n    \"77\": [\n        3.2832038402557373\n    ],\n    \"78\": [\n        15.809038877487183\n    ],\n    \"79\": [\n        2.3230249881744385\n    ],\n    \"80\": [\n        9.651153087615967\n    ],\n    \"81\": [\n        1.4056549072265625\n    ],\n    \"82\": [\n        2.151021718978882\n    ],\n    \"83\": [\n        0.9110991954803467\n    ],\n    \"84\": [\n        0.9381260871887207\n    ],\n    \"85\": [\n        4.502171277999878\n    ],\n    \"86\": [\n        0.8276948928833008\n    ],\n    \"87\": [\n        3.5236589908599854\n    ],\n    \"88\": [\n        7.260309457778931\n    ],\n    \"89\": [\n        1.6737573146820068\n    ],\n    \"90\": [\n        0.6564974784851074\n    ],\n    \"91\": [\n        0.5676999092102051\n    ],\n    \"92\": [\n        0.9616613388061523\n    ],\n    \"93\": [\n        4.848900318145752\n    ],\n    \"94\": [\n        9.96233868598938\n    ],\n    \"95\": [\n        36.08780312538147\n    ],\n    \"96\": [\n        0.9135136604309082\n    ],\n    \"97\": [\n        4.4856956005096436\n    ],\n    \"98\": [\n        4.316124677658081\n    ],\n    \"99\": [\n        1.8331043720245361\n    ]\n}"
  },
  {
    "path": "docs/source/contributor-guide/benchmark-results/blaze-0.5.0-tpch.json",
    "content": "{\n    \"engine\": \"datafusion-comet\",\n    \"benchmark\": \"tpch\",\n    \"data_path\": \"/mnt/bigdata/tpch/sf100/\",\n    \"query_path\": \"/mnt/bigdata/tpch/queries/\",\n    \"spark_conf\": {\n        \"spark.app.name\": \"blaze benchmark derived from tpch\",\n        \"spark.blaze.forceShuffledHashJoin\": \"true\",\n        \"spark.eventLog.enabled\": \"true\",\n        \"spark.app.id\": \"app-20250731095942-0000\",\n        \"spark.sql.analyzer.canonicalization.multiCommutativeOpMemoryOptThreshold\": \"2147483647\",\n        \"spark.driver.extraClassPath\": \"/opt/blaze/blaze-engine-spark-3.5-release-5.0.0-SNAPSHOT.jar\",\n        \"spark.app.submitTime\": \"1753977581869\",\n        \"spark.app.initial.jar.urls\": \"spark://10.0.0.118:34855/jars/blaze-engine-spark-3.5-release-5.0.0-SNAPSHOT.jar\",\n        \"spark.executor.memory\": \"16g\",\n        \"spark.shuffle.manager\": \"org.apache.spark.sql.execution.blaze.shuffle.BlazeShuffleManager\",\n        \"spark.serializer.objectStreamReset\": \"100\",\n        \"spark.driver.host\": \"10.0.0.118\",\n        \"spark.submit.deployMode\": \"client\",\n        \"spark.executor.cores\": \"8\",\n        \"spark.driver.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.executor.memoryOverhead\": \"16g\",\n        \"spark.sql.extensions\": \"org.apache.spark.sql.blaze.BlazeSparkSessionExtension\",\n        \"spark.blaze.enable\": \"true\",\n        \"spark.sql.adaptive.enabled\": \"true\",\n        \"spark.executor.id\": \"driver\",\n        \"spark.master\": \"spark://woody:7077\",\n        \"spark.sql.adaptive.forceApply\": \"true\",\n        \"spark.repl.local.jars\": \"file:///opt/blaze/blaze-engine-spark-3.5-release-5.0.0-SNAPSHOT.jar\",\n        \"spark.memory.offHeap.enabled\": \"false\",\n        \"spark.driver.memory\": \"8G\",\n        \"spark.jars\": \"file:///opt/blaze/blaze-engine-spark-3.5-release-5.0.0-SNAPSHOT.jar\",\n        \"spark.driver.port\": \"34855\",\n        \"spark.rdd.compress\": \"True\",\n        \"spark.executor.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED 
--add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.app.startTime\": \"1753977582179\",\n        \"spark.executor.instances\": \"1\",\n        \"spark.cores.max\": \"8\",\n        \"spark.submit.pyFiles\": \"\",\n        \"spark.sql.warehouse.dir\": \"file:/home/andy/git/apache/datafusion-comet/dev/benchmarks/spark-warehouse\",\n        \"spark.executor.extraClassPath\": \"/opt/blaze/blaze-engine-spark-3.5-release-5.0.0-SNAPSHOT.jar\"\n    },\n    \"1\": [\n        9.673133850097656\n    ],\n    \"2\": [\n        5.968065500259399\n    ],\n    \"3\": [\n        11.941619157791138\n    ],\n    \"4\": [\n        11.886268138885498\n    ],\n    \"5\": [\n        23.566248416900635\n    ],\n    \"6\": [\n        2.833730697631836\n    ],\n    \"7\": [\n        13.405823469161987\n    ],\n    \"8\": [\n        23.069715976715088\n    ],\n    \"9\": [\n        39.501715898513794\n    ],\n    \"10\": [\n        11.25059962272644\n    ],\n    \"11\": [\n        4.793748140335083\n    ],\n    \"12\": [\n        6.490489959716797\n    ],\n    \"13\": [\n        6.752679824829102\n    ],\n    \"14\": [\n        3.9720609188079834\n    ],\n    \"15\": [\n        8.393324375152588\n    ],\n    \"16\": [\n        2.637575149536133\n    ],\n    \"17\": [\n        29.3641459941864\n    ],\n    \"18\": [\n        35.25342917442322\n    ],\n    \"19\": [\n        6.869648456573486\n    ],\n    \"20\": [\n        7.14982533454895\n    ],\n    \"21\": [\n        51.66389608383179\n    ],\n    \"22\": [\n        4.223745107650757\n    ]\n}"
  },
  {
    "path": "docs/source/contributor-guide/benchmark-results/gluten-1.4.0-tpcds.json",
    "content": "{\n    \"engine\": \"datafusion-comet\",\n    \"benchmark\": \"tpcds\",\n    \"data_path\": \"/mnt/bigdata/tpcds/sf100\",\n    \"query_path\": \"/mnt/bigdata/tpcds/queries\",\n    \"spark_conf\": {\n        \"spark.eventLog.enabled\": \"true\",\n        \"spark.gluten.sql.session.timeZone.default\": \"UTC\",\n        \"spark.plugins\": \"org.apache.gluten.GlutenPlugin\",\n        \"spark.app.name\": \"gluten benchmark derived from tpcds\",\n        \"spark.cores.max\": \"16\",\n        \"spark.gluten.memoryOverhead.size.in.bytes\": \"5153751040\",\n        \"spark.memory.offHeap.enabled\": \"true\",\n        \"spark.repl.local.jars\": \"file:///opt/gluten/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\",\n        \"spark.executor.extraClassPath\": \"/opt/gluten/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\",\n        \"spark.executor.instances\": \"2\",\n        \"spark.executor.memory\": \"16G\",\n        \"spark.driver.extraClassPath\": \"/opt/gluten/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\",\n        \"spark.driver.port\": \"38403\",\n        \"spark.driver.host\": \"10.0.0.118\",\n        \"spark.serializer.objectStreamReset\": \"100\",\n        \"spark.app.submitTime\": \"1753671747082\",\n        \"spark.submit.deployMode\": \"client\",\n        \"spark.app.id\": \"app-20250728030228-0000\",\n        \"spark.executor.cores\": \"8\",\n        \"spark.app.initial.jar.urls\": \"spark://10.0.0.118:38403/jars/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\",\n        \"spark.jars\": \"file:///opt/gluten/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\",\n        \"spark.driver.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.gluten.memory.offHeap.size.in.bytes\": \"17179869184\",\n        \"spark.shuffle.manager\": \"org.apache.spark.shuffle.sort.ColumnarShuffleManager\",\n        \"spark.executor.id\": \"driver\",\n        \"spark.master\": \"spark://woody:7077\",\n        \"spark.executor.memoryOverhead\": \"4915\",\n        \"spark.sql.extensions\": \"org.apache.gluten.extension.GlutenSessionExtensions\",\n        \"spark.gluten.numTaskSlotsPerExecutor\": \"8\",\n        \"spark.driver.memory\": \"8G\",\n        \"spark.gluten.memory.task.offHeap.size.in.bytes\": \"2147483648\",\n        \"spark.memory.offHeap.size\": \"17179869184\",\n        \"spark.rdd.compress\": \"True\",\n        \"spark.executor.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED 
--add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.app.startTime\": \"1753671747382\",\n        \"spark.gluten.memory.conservative.task.offHeap.size.in.bytes\": \"1073741824\",\n        \"spark.gluten.sql.columnar.forceShuffledHashJoin\": \"true\",\n        \"spark.sql.adaptive.customCostEvaluatorClass\": \"org.apache.spark.sql.execution.adaptive.GlutenCostEvaluator\",\n        \"spark.submit.pyFiles\": \"\",\n        \"spark.gluten.memory.dynamic.offHeap.sizing.enabled\": \"false\",\n        \"spark.sql.session.timeZone\": \"UTC\",\n        \"spark.sql.warehouse.dir\": \"file:/home/andy/git/apache/datafusion-comet/dev/benchmarks/spark-warehouse\"\n    },\n    \"1\": [\n        3.0712015628814697\n    ],\n    \"2\": [\n        3.0666799545288086\n    ],\n    \"3\": [\n        1.0671255588531494\n    ],\n    \"4\": [\n        17.276039361953735\n    ],\n    \"5\": [\n        4.336325407028198\n    ],\n    \"6\": [\n        2.240464925765991\n    ],\n    \"7\": [\n        2.1070384979248047\n    ],\n    \"8\": [\n        2.2761263847351074\n    ],\n    \"9\": [\n        3.4592885971069336\n    ],\n    \"10\": [\n        2.2161333560943604\n    ],\n    \"11\": [\n        8.601247549057007\n    ],\n    \"12\": [\n        1.3066213130950928\n    ],\n    \"13\": [\n        2.623806953430176\n    ],\n    \"14\": [\n        23.665879487991333\n    ],\n    \"15\": [\n        2.3319854736328125\n    ],\n    \"16\": [\n        6.435860633850098\n    ],\n    \"17\": [\n        3.3381738662719727\n    ],\n    \"18\": [\n        2.4940454959869385\n    ],\n    \"19\": [\n        3.214183807373047\n    ],\n    \"20\": [\n        0.8629763126373291\n    ],\n    \"21\": [\n        1.1388742923736572\n    ],\n    \"22\": [\n        13.283171892166138\n    ],\n    \"23\": [\n        33.25598192214966\n    ],\n    \"24\": [\n        13.960798025131226\n    ],\n    \"25\": [\n        3.493340492248535\n    ],\n    \"26\": [\n        1.003934383392334\n    ],\n    \"27\": [\n        1.8391008377075195\n    ],\n    \"28\": [\n        4.88685941696167\n    ],\n    \"29\": [\n        3.245540142059326\n    ],\n    \"30\": [\n        1.6746954917907715\n    ],\n    \"31\": [\n        7.37675404548645\n    ],\n    \"32\": [\n        1.4505951404571533\n    ],\n    \"33\": [\n        2.6308183670043945\n    ],\n    \"34\": [\n        1.872574806213379\n    ],\n    \"35\": [\n        2.4917962551116943\n    ],\n    \"36\": [\n        2.046686887741089\n    ],\n    \"37\": [\n        0.798964262008667\n    ],\n    \"38\": [\n        3.3161089420318604\n    ],\n    \"39\": [\n        4.972580671310425\n    ],\n    \"40\": [\n        3.0317022800445557\n    ],\n    \"41\": [\n        0.292935848236084\n    ],\n    \"42\": [\n        0.9157257080078125\n    ],\n    \"43\": [\n        1.17030668258667\n    ],\n    \"44\": [\n        0.5907480716705322\n    ],\n    \"45\": [\n        1.6078379154205322\n    ],\n    \"46\": [\n        3.1738409996032715\n    ],\n    
\"47\": [\n        3.6137521266937256\n    ],\n    \"48\": [\n        2.1740095615386963\n    ],\n    \"49\": [\n        4.5014050006866455\n    ],\n    \"50\": [\n        3.800581216812134\n    ],\n    \"51\": [\n        5.458707571029663\n    ],\n    \"52\": [\n        0.9787416458129883\n    ],\n    \"53\": [\n        1.2668261528015137\n    ],\n    \"54\": [\n        4.599064111709595\n    ],\n    \"55\": [\n        0.9526393413543701\n    ],\n    \"56\": [\n        2.744847059249878\n    ],\n    \"57\": [\n        2.3258957862854004\n    ],\n    \"58\": [\n        2.707801580429077\n    ],\n    \"59\": [\n        2.9688880443573\n    ],\n    \"60\": [\n        2.8286445140838623\n    ],\n    \"61\": [\n        7.713818073272705\n    ],\n    \"62\": [\n        1.3007659912109375\n    ],\n    \"63\": [\n        1.6228349208831787\n    ],\n    \"64\": [\n        10.753086805343628\n    ],\n    \"65\": [\n        5.098642826080322\n    ],\n    \"66\": [\n        1.919985055923462\n    ],\n    \"67\": [\n        31.3416805267334\n    ],\n    \"68\": [\n        2.760091781616211\n    ],\n    \"69\": [\n        2.1082839965820312\n    ],\n    \"70\": [\n        3.231919050216675\n    ],\n    \"71\": [\n        3.1762571334838867\n    ],\n    \"72\": [\n        811.9890258312225\n    ],\n    \"73\": [\n        1.1076979637145996\n    ],\n    \"74\": [\n        6.423391580581665\n    ],\n    \"75\": [\n        6.315303325653076\n    ],\n    \"76\": [\n        3.0667858123779297\n    ],\n    \"77\": [\n        2.4472310543060303\n    ],\n    \"78\": [\n        9.947726488113403\n    ],\n    \"79\": [\n        2.0369961261749268\n    ],\n    \"80\": [\n        7.902697801589966\n    ],\n    \"81\": [\n        1.4782285690307617\n    ],\n    \"82\": [\n        1.0187582969665527\n    ],\n    \"83\": [\n        1.3076961040496826\n    ],\n    \"84\": [\n        0.7356717586517334\n    ],\n    \"85\": [\n        2.1237404346466064\n    ],\n    \"86\": [\n        1.196605920791626\n    ],\n    \"87\": [\n        3.310823917388916\n    ],\n    \"88\": [\n        5.557470083236694\n    ],\n    \"89\": [\n        1.4879262447357178\n    ],\n    \"90\": [\n        1.3077824115753174\n    ],\n    \"91\": [\n        1.2596931457519531\n    ],\n    \"92\": [\n        1.1216387748718262\n    ],\n    \"93\": [\n        3.8458316326141357\n    ],\n    \"94\": [\n        3.957663059234619\n    ],\n    \"95\": [\n        18.828344583511353\n    ],\n    \"96\": [\n        0.8511285781860352\n    ],\n    \"97\": [\n        3.0931687355041504\n    ],\n    \"98\": [\n        2.2251434326171875\n    ],\n    \"99\": [\n        1.0991463661193848\n    ]\n}"
  },
  {
    "path": "docs/source/contributor-guide/benchmark-results/gluten-1.4.0-tpch.json",
    "content": "{\n    \"engine\": \"datafusion-comet\",\n    \"benchmark\": \"tpch\",\n    \"data_path\": \"/mnt/bigdata/tpch/sf100/\",\n    \"query_path\": \"/mnt/bigdata/tpch/queries/\",\n    \"spark_conf\": {\n        \"spark.eventLog.enabled\": \"true\",\n        \"spark.gluten.sql.session.timeZone.default\": \"UTC\",\n        \"spark.app.submitTime\": \"1752337239601\",\n        \"spark.plugins\": \"org.apache.gluten.GlutenPlugin\",\n        \"spark.gluten.memoryOverhead.size.in.bytes\": \"5153751040\",\n        \"spark.memory.offHeap.enabled\": \"true\",\n        \"spark.repl.local.jars\": \"file:///opt/gluten/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\",\n        \"spark.executor.extraClassPath\": \"/opt/gluten/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\",\n        \"spark.driver.port\": \"46545\",\n        \"spark.executor.memory\": \"16G\",\n        \"spark.driver.extraClassPath\": \"/opt/gluten/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\",\n        \"spark.serializer.objectStreamReset\": \"100\",\n        \"spark.driver.host\": \"10.0.0.118\",\n        \"spark.submit.deployMode\": \"client\",\n        \"spark.executor.cores\": \"8\",\n        \"spark.jars\": \"file:///opt/gluten/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\",\n        \"spark.driver.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.gluten.memory.offHeap.size.in.bytes\": \"17179869184\",\n        \"spark.shuffle.manager\": \"org.apache.spark.shuffle.sort.ColumnarShuffleManager\",\n        \"spark.app.id\": \"app-20250712162041-0000\",\n        \"spark.executor.id\": \"driver\",\n        \"spark.master\": \"spark://woody:7077\",\n        \"spark.app.name\": \"gluten benchmark derived from tpch\",\n        \"spark.executor.memoryOverhead\": \"4915\",\n        \"spark.app.startTime\": \"1752337239956\",\n        \"spark.sql.extensions\": \"org.apache.gluten.extension.GlutenSessionExtensions\",\n        \"spark.gluten.numTaskSlotsPerExecutor\": \"8\",\n        \"spark.driver.memory\": \"8G\",\n        \"spark.gluten.memory.task.offHeap.size.in.bytes\": \"2147483648\",\n        \"spark.memory.offHeap.size\": \"17179869184\",\n        \"spark.app.initial.jar.urls\": \"spark://10.0.0.118:46545/jars/gluten-velox-bundle-spark3.5_2.12-linux_amd64-1.4.0.jar\",\n        \"spark.rdd.compress\": \"True\",\n        \"spark.executor.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED 
--add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.gluten.memory.conservative.task.offHeap.size.in.bytes\": \"1073741824\",\n        \"spark.gluten.sql.columnar.forceShuffledHashJoin\": \"true\",\n        \"spark.sql.adaptive.customCostEvaluatorClass\": \"org.apache.spark.sql.execution.adaptive.GlutenCostEvaluator\",\n        \"spark.submit.pyFiles\": \"\",\n        \"spark.executor.instances\": \"1\",\n        \"spark.cores.max\": \"8\",\n        \"spark.sql.session.timeZone\": \"UTC\",\n        \"spark.sql.warehouse.dir\": \"file:/home/andy/git/apache/datafusion-comet/dev/benchmarks/spark-warehouse\",\n        \"spark.gluten.memory.dynamic.offHeap.sizing.enabled\": \"false\"\n    },\n    \"1\": [\n        7.653544187545776\n    ],\n    \"2\": [\n        4.837377309799194\n    ],\n    \"3\": [\n        7.874017000198364\n    ],\n    \"4\": [\n        7.201406717300415\n    ],\n    \"5\": [\n        15.82085108757019\n    ],\n    \"6\": [\n        1.7845516204833984\n    ],\n    \"7\": [\n        10.334960460662842\n    ],\n    \"8\": [\n        16.844615697860718\n    ],\n    \"9\": [\n        24.79072380065918\n    ],\n    \"10\": [\n        8.695719480514526\n    ],\n    \"11\": [\n        3.6076815128326416\n    ],\n    \"12\": [\n        4.583712339401245\n    ],\n    \"13\": [\n        7.016078233718872\n    ],\n    \"14\": [\n        3.003683567047119\n    ],\n    \"15\": [\n        7.852885007858276\n    ],\n    \"16\": [\n        3.1703848838806152\n    ],\n    \"17\": [\n        19.128700017929077\n    ],\n    \"18\": [\n        20.703757524490356\n    ],\n    \"19\": [\n        3.294731378555298\n    ],\n    \"20\": [\n        6.121671199798584\n    ],\n    \"21\": [\n        41.27772641181946\n    ],\n    \"22\": [\n        3.542634963989258\n    ]\n}"
  },
  {
    "path": "docs/source/contributor-guide/benchmark-results/spark-3.5.3-tpcds.json",
    "content": "{\n    \"engine\": \"datafusion-comet\",\n    \"benchmark\": \"tpcds\",\n    \"data_path\": \"/mnt/bigdata/tpcds/sf100/\",\n    \"query_path\": \"/mnt/bigdata/tpcds/queries/\",\n    \"spark_conf\": {\n        \"spark.eventLog.enabled\": \"true\",\n        \"spark.driver.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.cores.max\": \"16\",\n        \"spark.app.submitTime\": \"1760657346549\",\n        \"spark.app.startTime\": \"1760657346845\",\n        \"spark.hadoop.fs.s3a.aws.credentials.provider\": \"com.amazonaws.auth.DefaultAWSCredentialsProviderChain\",\n        \"spark.driver.port\": \"41151\",\n        \"spark.master\": \"spark://woody:7077\",\n        \"spark.executor.id\": \"driver\",\n        \"spark.memory.offHeap.enabled\": \"true\",\n        \"spark.driver.memory\": \"8G\",\n        \"spark.executor.memory\": \"16g\",\n        \"spark.hadoop.fs.s3a.impl\": \"org.apache.hadoop.fs.s3a.S3AFileSystem\",\n        \"spark.app.id\": \"app-20251016172907-0000\",\n        \"spark.rdd.compress\": \"True\",\n        \"spark.executor.instances\": \"2\",\n        \"spark.executor.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.serializer.objectStreamReset\": \"100\",\n        \"spark.submit.pyFiles\": \"\",\n        \"spark.submit.deployMode\": \"client\",\n        \"spark.sql.warehouse.dir\": \"file:/home/andy/git/apache/datafusion-comet/dev/benchmarks/spark-warehouse\",\n        \"spark.app.name\": \"spark benchmark derived from tpcds\",\n        \"spark.executor.cores\": \"8\",\n        \"spark.driver.host\": \"woody.lan\",\n        \"spark.memory.offHeap.size\": \"16g\"\n    },\n    \"1\": [\n        4.523299217224121\n    ],\n    \"2\": [\n        3.061577081680298\n    ],\n    \"3\": [\n        1.5242812633514404\n    ],\n    \"4\": [\n        50.726277589797974\n    ],\n    \"5\": [\n        
12.996135711669922\n    ],\n    \"6\": [\n        1.9649882316589355\n    ],\n    \"7\": [\n        3.0779569149017334\n    ],\n    \"8\": [\n        2.091928720474243\n    ],\n    \"9\": [\n        5.696160078048706\n    ],\n    \"10\": [\n        2.6276841163635254\n    ],\n    \"11\": [\n        14.976734399795532\n    ],\n    \"12\": [\n        1.0809345245361328\n    ],\n    \"13\": [\n        3.9019551277160645\n    ],\n    \"14\": [\n        40.51576566696167\n    ],\n    \"15\": [\n        3.2053213119506836\n    ],\n    \"16\": [\n        12.394548416137695\n    ],\n    \"17\": [\n        4.5201404094696045\n    ],\n    \"18\": [\n        3.710134506225586\n    ],\n    \"19\": [\n        2.388089895248413\n    ],\n    \"20\": [\n        1.1710665225982666\n    ],\n    \"21\": [\n        1.8241875171661377\n    ],\n    \"22\": [\n        22.95406436920166\n    ],\n    \"23\": [\n        91.60468244552612\n    ],\n    \"24\": [\n        14.566667556762695\n    ],\n    \"25\": [\n        4.0827412605285645\n    ],\n    \"26\": [\n        1.9840991497039795\n    ],\n    \"27\": [\n        2.4599902629852295\n    ],\n    \"28\": [\n        8.550125122070312\n    ],\n    \"29\": [\n        4.786677122116089\n    ],\n    \"30\": [\n        1.974135398864746\n    ],\n    \"31\": [\n        5.949168920516968\n    ],\n    \"32\": [\n        1.8300220966339111\n    ],\n    \"33\": [\n        3.0961456298828125\n    ],\n    \"34\": [\n        1.5340814590454102\n    ],\n    \"35\": [\n        4.07404637336731\n    ],\n    \"36\": [\n        2.289029121398926\n    ],\n    \"37\": [\n        4.415144681930542\n    ],\n    \"38\": [\n        7.2151618003845215\n    ],\n    \"39\": [\n        8.983118057250977\n    ],\n    \"40\": [\n        8.090219020843506\n    ],\n    \"41\": [\n        0.2758638858795166\n    ],\n    \"42\": [\n        1.4861207008361816\n    ],\n    \"43\": [\n        1.4047019481658936\n    ],\n    \"44\": [\n        1.1311781406402588\n    ],\n    \"45\": [\n        1.7742793560028076\n    ],\n    \"46\": [\n        2.6378352642059326\n    ],\n    \"47\": [\n        5.033041954040527\n    ],\n    \"48\": [\n        7.473065137863159\n    ],\n    \"49\": [\n        7.34236478805542\n    ],\n    \"50\": [\n        10.566041946411133\n    ],\n    \"51\": [\n        13.90496039390564\n    ],\n    \"52\": [\n        1.2180531024932861\n    ],\n    \"53\": [\n        1.8595950603485107\n    ],\n    \"54\": [\n        2.967790365219116\n    ],\n    \"55\": [\n        1.1428017616271973\n    ],\n    \"56\": [\n        2.6576313972473145\n    ],\n    \"57\": [\n        3.1256778240203857\n    ],\n    \"58\": [\n        3.110593795776367\n    ],\n    \"59\": [\n        3.6817147731781006\n    ],\n    \"60\": [\n        2.342132568359375\n    ],\n    \"61\": [\n        3.5699379444122314\n    ],\n    \"62\": [\n        1.0830762386322021\n    ],\n    \"63\": [\n        1.5091686248779297\n    ],\n    \"64\": [\n        21.24241876602173\n    ],\n    \"65\": [\n        11.174801588058472\n    ],\n    \"66\": [\n        2.968890428543091\n    ],\n    \"67\": [\n        56.65302085876465\n    ],\n    \"68\": [\n        2.7817704677581787\n    ],\n    \"69\": [\n        2.238774061203003\n    ],\n    \"70\": [\n        2.8807759284973145\n    ],\n    \"71\": [\n        2.878760576248169\n    ],\n    \"72\": [\n        87.73793363571167\n    ],\n    \"73\": [\n        1.3357255458831787\n    ],\n    \"74\": [\n        17.069884300231934\n    ],\n    \"75\": [\n        
12.418950080871582\n    ],\n    \"76\": [\n        2.7246549129486084\n    ],\n    \"77\": [\n        3.2758727073669434\n    ],\n    \"78\": [\n        31.570815801620483\n    ],\n    \"79\": [\n        2.1316466331481934\n    ],\n    \"80\": [\n        25.685710191726685\n    ],\n    \"81\": [\n        2.383108615875244\n    ],\n    \"82\": [\n        2.1166131496429443\n    ],\n    \"83\": [\n        1.825127124786377\n    ],\n    \"84\": [\n        1.4185044765472412\n    ],\n    \"85\": [\n        4.894887924194336\n    ],\n    \"86\": [\n        0.9917283058166504\n    ],\n    \"87\": [\n        7.2068891525268555\n    ],\n    \"88\": [\n        5.049949645996094\n    ],\n    \"89\": [\n        1.8566343784332275\n    ],\n    \"90\": [\n        0.8218870162963867\n    ],\n    \"91\": [\n        1.0735762119293213\n    ],\n    \"92\": [\n        1.3357479572296143\n    ],\n    \"93\": [\n        21.765851259231567\n    ],\n    \"94\": [\n        6.729715585708618\n    ],\n    \"95\": [\n        23.40083408355713\n    ],\n    \"96\": [\n        0.6852657794952393\n    ],\n    \"97\": [\n        11.261420965194702\n    ],\n    \"98\": [\n        2.003811836242676\n    ],\n    \"99\": [\n        1.5077838897705078\n    ]\n}"
  },
  {
    "path": "docs/source/contributor-guide/benchmark-results/spark-3.5.3-tpch.json",
    "content": "{\n    \"engine\": \"datafusion-comet\",\n    \"benchmark\": \"tpch\",\n    \"data_path\": \"/mnt/bigdata/tpch/sf100/\",\n    \"query_path\": \"/mnt/bigdata/tpch/queries/\",\n    \"spark_conf\": {\n        \"spark.app.submitTime\": \"1760654734485\",\n        \"spark.eventLog.enabled\": \"true\",\n        \"spark.driver.port\": \"45867\",\n        \"spark.driver.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.executor.id\": \"driver\",\n        \"spark.hadoop.fs.s3a.aws.credentials.provider\": \"com.amazonaws.auth.DefaultAWSCredentialsProviderChain\",\n        \"spark.app.startTime\": \"1760654734783\",\n        \"spark.master\": \"spark://woody:7077\",\n        \"spark.memory.offHeap.enabled\": \"true\",\n        \"spark.driver.memory\": \"8G\",\n        \"spark.executor.memory\": \"16g\",\n        \"spark.hadoop.fs.s3a.impl\": \"org.apache.hadoop.fs.s3a.S3AFileSystem\",\n        \"spark.rdd.compress\": \"True\",\n        \"spark.app.name\": \"spark benchmark derived from tpch\",\n        \"spark.executor.extraJavaOptions\": \"-Djava.net.preferIPv6Addresses=false -XX:+IgnoreUnrecognizedVMOptions --add-opens=java.base/java.lang=ALL-UNNAMED --add-opens=java.base/java.lang.invoke=ALL-UNNAMED --add-opens=java.base/java.lang.reflect=ALL-UNNAMED --add-opens=java.base/java.io=ALL-UNNAMED --add-opens=java.base/java.net=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED --add-opens=java.base/java.util=ALL-UNNAMED --add-opens=java.base/java.util.concurrent=ALL-UNNAMED --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED --add-opens=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/sun.nio.cs=ALL-UNNAMED --add-opens=java.base/sun.security.action=ALL-UNNAMED --add-opens=java.base/sun.util.calendar=ALL-UNNAMED --add-opens=java.security.jgss/sun.security.krb5=ALL-UNNAMED -Djdk.reflect.useDirectMethodHandle=false\",\n        \"spark.serializer.objectStreamReset\": \"100\",\n        \"spark.executor.instances\": \"1\",\n        \"spark.cores.max\": \"8\",\n        \"spark.app.id\": \"app-20251016164535-0000\",\n        \"spark.submit.pyFiles\": \"\",\n        \"spark.submit.deployMode\": \"client\",\n        \"spark.sql.warehouse.dir\": \"file:/home/andy/git/apache/datafusion-comet/dev/benchmarks/spark-warehouse\",\n        \"spark.executor.cores\": \"8\",\n        \"spark.driver.host\": \"woody.lan\",\n        \"spark.memory.offHeap.size\": \"16g\"\n    },\n    \"1\": [\n        82.14922451972961\n    ],\n    \"2\": [\n        13.39389944076538\n    ],\n    \"3\": [\n        26.29530620574951\n    ],\n    \"4\": [\n        22.093881368637085\n    ],\n    \"5\": [\n        
52.291497230529785\n    ],\n    \"6\": [\n        3.5577659606933594\n    ],\n    \"7\": [\n        23.285813808441162\n    ],\n    \"8\": [\n        36.04274129867554\n    ],\n    \"9\": [\n        77.91567945480347\n    ],\n    \"10\": [\n        20.83555793762207\n    ],\n    \"11\": [\n        13.609780311584473\n    ],\n    \"12\": [\n        13.592298984527588\n    ],\n    \"13\": [\n        22.524098873138428\n    ],\n    \"14\": [\n        6.217005252838135\n    ],\n    \"15\": [\n        16.401182413101196\n    ],\n    \"16\": [\n        7.700634241104126\n    ],\n    \"17\": [\n        70.78637218475342\n    ],\n    \"18\": [\n        77.76704359054565\n    ],\n    \"19\": [\n        7.267014503479004\n    ],\n    \"20\": [\n        10.419305562973022\n    ],\n    \"21\": [\n        72.9061770439148\n    ],\n    \"22\": [\n        9.621031522750854\n    ]\n}"
  },
  {
    "path": "docs/source/contributor-guide/benchmark-results/tpc-ds.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Apache DataFusion Comet: Benchmarks Derived From TPC-DS\n\nThe following benchmarks were performed on a Linux workstation with PCIe 5, AMD 7950X CPU (16 cores), 128 GB RAM, and\ndata stored locally in Parquet format on NVMe storage. Performance characteristics will vary in different environments\nand we encourage you to run these benchmarks in your own environments.\n\nThe operating system is Ubuntu 22.04.5 LTS.\n\nThe tracking issue for improving TPC-DS performance is [#858](https://github.com/apache/datafusion-comet/issues/858).\n\n![](../../_static/images/benchmark-results/0.11.0/tpcds_allqueries.png)\n\nHere is a breakdown showing relative performance of Spark and Comet for each query.\n\n![](../../_static/images/benchmark-results/0.11.0/tpcds_queries_compare.png)\n\nThe following chart shows how much Comet currently accelerates each query from the benchmark in relative terms.\n\n![](../../_static/images/benchmark-results/0.11.0/tpcds_queries_speedup_rel.png)\n\nThe following chart shows how much Comet currently accelerates each query from the benchmark in absolute terms.\n\n![](../../_static/images/benchmark-results/0.11.0/tpcds_queries_speedup_abs.png)\n\nThe raw results of these benchmarks in JSON format is available here:\n\n- [Spark](spark-3.5.3-tpcds.json)\n- [Comet](comet-0.11.0-tpcds.json)\n\nThe scripts that were used to generate these results can be found [here](https://github.com/apache/datafusion-comet/tree/main/benchmarks/tpc).\n"
  },
  {
    "path": "docs/source/contributor-guide/benchmark-results/tpc-h.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Apache DataFusion Comet: Benchmarks Derived From TPC-H\n\nThe following benchmarks were performed on a Linux workstation with PCIe 5, AMD 7950X CPU (16 cores), 128 GB RAM, and\ndata stored locally in Parquet format on NVMe storage. Performance characteristics will vary in different environments\nand we encourage you to run these benchmarks in your own environments.\n\nThe operating system is Ubuntu 22.04.5 LTS.\n\nThe tracking issue for improving TPC-H performance is [#391](https://github.com/apache/datafusion-comet/issues/391).\n\n![](../../_static/images/benchmark-results/0.11.0/tpch_allqueries.png)\n\nHere is a breakdown showing relative performance of Spark and Comet for each query.\n\n![](../../_static/images/benchmark-results/0.11.0/tpch_queries_compare.png)\n\nThe following chart shows how much Comet currently accelerates each query from the benchmark in relative terms.\n\n![](../../_static/images/benchmark-results/0.11.0/tpch_queries_speedup_rel.png)\n\nThe following chart shows how much Comet currently accelerates each query from the benchmark in absolute terms.\n\n![](../../_static/images/benchmark-results/0.11.0/tpch_queries_speedup_abs.png)\n\nThe raw results of these benchmarks in JSON format is available here:\n\n- [Spark](spark-3.5.3-tpch.json)\n- [Comet](comet-0.11.0-tpch.json)\n\nThe scripts that were used to generate these results can be found [here](https://github.com/apache/datafusion-comet/tree/main/benchmarks/tpc).\n"
  },
  {
    "path": "docs/source/contributor-guide/benchmarking.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Benchmarking Guide\n\nTo track progress on performance, we regularly run benchmarks derived from TPC-H and TPC-DS.\n\nThe benchmarking scripts are contained [here](https://github.com/apache/datafusion-comet/tree/main/benchmarks/tpc).\n\nData generation scripts are available in the [DataFusion Benchmarks](https://github.com/apache/datafusion-benchmarks) GitHub repository.\n\n## Current Benchmark Results\n\nThe published benchmarks are performed on a Linux workstation with PCIe 5, AMD 7950X CPU (16 cores), 128 GB RAM, and\ndata stored locally in Parquet format on NVMe storage. Performance characteristics will vary in different environments\nand we encourage you to run these benchmarks in your own environments.\n\nThe operating system used was Ubuntu 22.04.5 LTS.\n\n- [Benchmarks derived from TPC-H](benchmark-results/tpc-h)\n- [Benchmarks derived from TPC-DS](benchmark-results/tpc-ds)\n\n## Benchmarking Guides\n\nAvailable benchmarking guides:\n\n- [Benchmarking on macOS](benchmarking_macos.md)\n- [Benchmarking on AWS EC2](benchmarking_aws_ec2)\n- [TPC-DS Benchmarking with spark-sql-perf](benchmarking_spark_sql_perf.md)\n\nWe also have many micro benchmarks that can be run from an IDE located [here](https://github.com/apache/datafusion-comet/tree/main/spark/src/test/scala/org/apache/spark/sql/benchmark).\n"
  },
  {
    "path": "docs/source/contributor-guide/benchmarking_aws_ec2.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Benchmarking in EC2\n\nThis guide is for setting up benchmarks on AWS EC2 with a single node with Parquet files either located on an\nattached EBS volume, or stored in S3.\n\n## Create EC2 Instance\n\n- Create an EC2 instance with an EBS volume sized for approximately 2x the size of\n  the dataset to be generated (200 GB for scale factor 100, 2 TB for scale factor 1000, and so on)\n- Optionally, create an S3 bucket to store the Parquet files\n\nRecommendation: Use `c7i.4xlarge` instance type with `provisioned iops 2` EBS volume (8000 IOPS).\n\n## Install Prerequisites\n\n```shell\nsudo yum update -y\nsudo yum install -y git protobuf-compiler java-17-amazon-corretto-headless java-17-amazon-corretto-devel\nsudo yum groupinstall -y \"Development Tools\"\nexport JAVA_HOME=/usr/lib/jvm/java-17-amazon-corretto\n```\n\n## Generate Benchmark Data\n\n```shell\ncurl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh\nRUSTFLAGS='-C target-cpu=native' cargo install tpchgen-cli\ntpchgen-cli -s 100 --format parquet --parts 32 --output-dir data\n```\n\nRename the generated directories so that they have a `.parquet` suffix. For example, rename `customer` to\n`customer.parquet`.\n\nSet the `TPCH_DATA` environment variable. 
This will be referenced by the benchmarking scripts.\n\n```shell\nexport TPCH_DATA=/home/ec2-user/data\n```\n\n## Install Apache Spark\n\n```shell\nexport SPARK_VERSION=3.5.6\nwget https://archive.apache.org/dist/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop3.tgz\ntar xzf spark-$SPARK_VERSION-bin-hadoop3.tgz\nsudo mv spark-$SPARK_VERSION-bin-hadoop3 /opt\nexport SPARK_HOME=/opt/spark-$SPARK_VERSION-bin-hadoop3/\nmkdir /tmp/spark-events\n```\n\nSet `SPARK_MASTER` env var (IP address will need to be edited):\n\n```shell\nexport SPARK_MASTER=spark://172.31.34.87:7077\n```\n\nSet `SPARK_LOCAL_DIRS` to point to the EBS volume:\n\n```shell\nsudo mkdir /mnt/tmp\nsudo chmod 777 /mnt/tmp\nmv $SPARK_HOME/conf/spark-env.sh.template $SPARK_HOME/conf/spark-env.sh\n```\n\nAdd the following entry to `spark-env.sh`:\n\n```shell\nSPARK_LOCAL_DIRS=/mnt/tmp\n```\n\n## Git Clone DataFusion Repositories\n\n```shell\ngit clone https://github.com/apache/datafusion-benchmarks.git\ngit clone https://github.com/apache/datafusion-comet.git\n```\n\nBuild Comet:\n\n```shell\ncd datafusion-comet\nmake release\n```\n\nSet the `COMET_JAR` environment variable (substitute the Comet version that you built):\n\n```shell\nexport COMET_JAR=/home/ec2-user/datafusion-comet/spark/target/comet-spark-spark3.5_2.12-$COMET_VERSION.jar\n```\n\n## Run Benchmarks\n\nUse the scripts in `benchmarks/tpc` in the Comet repository.\n\n```shell\ncd benchmarks/tpc\nexport TPCH_QUERIES=/home/ec2-user/datafusion-benchmarks/tpch/queries/\n```\n\nRun Spark benchmark:\n\n```shell\npython3 run.py --engine spark --benchmark tpch\n```\n\nRun Comet benchmark:\n\n```shell\npython3 run.py --engine comet --benchmark tpch\n```\n\n## Running Benchmarks with S3\n\nCopy the Parquet data to an S3 bucket.\n\n```shell\naws s3 cp /home/ec2-user/data s3://your-bucket-name/ --recursive\n```\n\nUpdate the `TPCH_DATA` environment variable.\n\n```shell\nexport TPCH_DATA=s3a://your-bucket-name\n```\n\nInstall Hadoop jar files:\n\n```shell\nwget https://repo1.maven.org/maven2/org/apache/hadoop/hadoop-aws/3.3.4/hadoop-aws-3.3.4.jar -P $SPARK_HOME/jars\nwget https://repo1.maven.org/maven2/com/amazonaws/aws-java-sdk-bundle/1.11.1026/aws-java-sdk-bundle-1.11.1026.jar -P $SPARK_HOME/jars\n```\n\nAdd credentials to `~/.aws/credentials`:\n\n```shell\n[default]\naws_access_key_id=your-access-key\naws_secret_access_key=your-secret-key\n```\n\nModify the scripts to add the following configurations.\n\n```shell\n--conf spark.hadoop.fs.s3a.impl=org.apache.hadoop.fs.s3a.S3AFileSystem \\\n--conf spark.hadoop.fs.s3a.aws.credentials.provider=com.amazonaws.auth.DefaultAWSCredentialsProviderChain \\\n```\n\nNow run the benchmarks:\n\n```shell\npython3 run.py --engine spark --benchmark tpch\npython3 run.py --engine comet --benchmark tpch\n```\n"
  },
  {
    "path": "docs/source/contributor-guide/benchmarking_macos.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Benchmarking on macOS\n\nThis guide is for setting up TPC-H benchmarks locally on macOS using the 100 GB dataset.\n\nNote that running this benchmark on macOS is not ideal because we cannot force Spark or Comet to use performance\ncores rather than efficiency cores, and background processes are sharing these cores. Also, power and thermal\nmanagement may throttle CPU cores.\n\n## Prerequisites\n\nJava and Rust must be installed locally.\n\n## Data Generation\n\n```shell\ncargo install tpchgen-cli\nmkdir benchmark_data\ncd benchmark_data\ntpchgen-cli -s 100 --format=parquet\nexport $BENCH_DATA=`pwd`\n```\n\nCreate a temp folder for spark events emitted during benchmarking\n\n```shell\nmkdir /tmp/spark-events\n```\n\n## Clone the DataFusion Benchmarks Repository\n\n```shell\ngit clone https://github.com/apache/datafusion-benchmarks.git\ncd\nexport DF_BENCH=`pwd`\n```\n\n## Install Spark\n\nInstall Apache Spark. This example refers to 3.5.4 version.\n\n```shell\nwget https://archive.apache.org/dist/spark/spark-3.5.4/spark-3.5.4-bin-hadoop3.tgz\ntar xzf spark-3.5.4-bin-hadoop3.tgz\nsudo mv spark-3.5.4-bin-hadoop3 /opt\nexport SPARK_HOME=/opt/spark-3.5.4-bin-hadoop3/\n```\n\nStart Spark in standalone mode:\n\n```shell\n$SPARK_HOME/sbin/start-master.sh\n```\n\nSet `SPARK_MASTER` env var (host name will need to be edited):\n\n```shell\nexport SPARK_MASTER=spark://Rustys-MacBook-Pro.local:7077\n```\n\n```shell\n$SPARK_HOME/sbin/start-worker.sh $SPARK_MASTER\n```\n\n### Start local Apache Spark cluster using `spark-class`\n\nFor Apache Spark distributions installed using `brew` tool, it may happen there is no `$SPARK_HOME/sbin` folder on your machine.\nIn order to start local Apache Spark cluster on `localhost:7077` port, run:\n\n```shell\n$SPARK_HOME/bin/spark-class org.apache.spark.deploy.master.Master --host 127.0.0.1 --port 7077 --webui-port 8080\n```\n\nOnce master has started, in separate console start the worker referring the spark master uri on `localhost:7077`\n\n```shell\n$SPARK_HOME/bin/spark-class org.apache.spark.deploy.worker.Worker --cores 8 --memory 16G spark://localhost:7077\n```\n\n## Run Spark Benchmarks\n\nRun the following command (the `--data` parameter will need to be updated to point to your TPC-H data):\n\n```shell\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --conf spark.driver.memory=8G \\\n    --conf spark.executor.instances=1 \\\n    --conf spark.executor.cores=8 \\\n    --conf spark.cores.max=8 \\\n    --conf spark.executor.memory=16g \\\n    --conf spark.memory.offHeap.enabled=true \\\n    --conf spark.memory.offHeap.size=16g \\\n    --conf spark.eventLog.enabled=true \\\n    $DF_BENCH/runners/datafusion-comet/tpcbench.py \\\n    --benchmark tpch 
\\\n    --data $BENCH_DATA/tpch-data/ \\\n    --queries $DF_BENCH/tpch/queries \\\n    --output . \\\n    --iterations 1\n```\n\n## Run Comet Benchmarks\n\nBuild Comet from source, with `mimalloc` enabled.\n\n```shell\nmake release COMET_FEATURES=mimalloc\n```\n\nSet `COMET_JAR` to point to the location of the Comet jar file. Example for Comet 0.8\n\n```shell\nexport COMET_JAR=`pwd`/spark/target/comet-spark-spark3.5_2.12-0.8.0-SNAPSHOT.jar\n```\n\nRun the following command (the `--data` parameter will need to be updated to point to your S3 bucket):\n\n```shell\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --conf spark.driver.memory=8G \\\n    --conf spark.executor.instances=1 \\\n    --conf spark.executor.cores=8 \\\n    --conf spark.cores.max=8 \\\n    --conf spark.executor.memory=16g \\\n    --conf spark.memory.offHeap.enabled=true \\\n    --conf spark.memory.offHeap.size=16g \\\n    --conf spark.eventLog.enabled=true \\\n    --jars $COMET_JAR \\\n    --driver-class-path $COMET_JAR \\\n    --conf spark.driver.extraClassPath=$COMET_JAR \\\n    --conf spark.executor.extraClassPath=$COMET_JAR \\\n    --conf spark.plugins=org.apache.spark.CometPlugin \\\n    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n    --conf spark.comet.enabled=true \\\n    --conf spark.comet.exec.shuffle.enableFastEncoding=true \\\n    --conf spark.comet.exec.shuffle.fallbackToColumnar=true \\\n    --conf spark.comet.exec.replaceSortMergeJoin=true \\\n    --conf spark.comet.expression.allowIncompatible=true \\\n    $DF_BENCH/runners/datafusion-comet/tpcbench.py \\\n    --benchmark tpch \\\n    --data $BENCH_DATA/tpch-data/ \\\n    --queries $DF_BENCH/tpch/queries \\\n    --output . \\\n    --iterations 1\n```\n"
  },
  {
    "path": "docs/source/contributor-guide/benchmarking_spark_sql_perf.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# TPC-DS Benchmarking with spark-sql-perf\n\nThis guide explains how to generate TPC-DS data and run TPC-DS benchmarks using the\n[KubedAI/spark-sql-perf](https://github.com/KubedAI/spark-sql-perf) framework (a fork of\n[databricks/spark-sql-perf](https://github.com/databricks/spark-sql-perf)).\n\nWe use the KubedAI fork because it adds two features not present in the upstream Databricks repository:\n\n- **Apache Iceberg support** — allows generating and registering TPC-DS tables in Iceberg format, which is\n  needed for Comet Iceberg benchmarking.\n- **TPC-DS v4.0 queries** — adds the full set of TPC-DS v4.0 queries alongside the existing v1.4 and v2.4 sets.\n\nThe spark-sql-perf approach uses the TPC-DS `dsdgen` tool to generate data directly through Spark, which handles\npartitioning and writing to Parquet format automatically.\n\n## Prerequisites\n\n- Java 17 (for Spark 3.5+)\n- Apache Spark 3.5.x\n- SBT (Scala Build Tool)\n- C compiler toolchain (`gcc`, `make`, `flex`, `bison`, `byacc`)\n\n## Step 1: Build tpcds-kit\n\nThe `dsdgen` tool from [databricks/tpcds-kit](https://github.com/databricks/tpcds-kit) is required for data\ngeneration. This is a modified fork of the official TPC-DS toolkit that outputs to stdout, allowing Spark to\ningest the data directly.\n\n**Linux (Ubuntu/Debian):**\n\n```shell\nsudo apt-get install -y gcc make flex bison byacc git\ngit clone https://github.com/databricks/tpcds-kit.git\ncd tpcds-kit/tools\nmake OS=LINUX\n```\n\n**Linux (CentOS/RHEL/Amazon Linux):**\n\n```shell\nsudo yum install -y gcc make flex bison byacc git\ngit clone https://github.com/databricks/tpcds-kit.git\ncd tpcds-kit/tools\nmake OS=LINUX\n```\n\n**macOS:**\n\n```shell\nxcode-select --install\ngit clone https://github.com/databricks/tpcds-kit.git\ncd tpcds-kit/tools\nmake OS=MACOS\n```\n\nVerify the build succeeded:\n\n```shell\nls -l dsdgen\n```\n\n## Step 2: Build spark-sql-perf\n\n```shell\ngit clone https://github.com/KubedAI/spark-sql-perf.git\ncd spark-sql-perf\ngit checkout support-iceberg-tpcds-v4.0\nsbt package\n```\n\nThis produces a JAR file at `target/scala-2.12/spark-sql-perf_2.12-0.5.1-SNAPSHOT.jar` (the exact version may\nvary). 
Note the path to this JAR for later use.\n\n## Step 3: Install and Start Spark\n\nIf you do not already have Spark installed:\n\n```shell\nexport SPARK_VERSION=3.5.6\nwget https://archive.apache.org/dist/spark/spark-$SPARK_VERSION/spark-$SPARK_VERSION-bin-hadoop3.tgz\ntar xzf spark-$SPARK_VERSION-bin-hadoop3.tgz\nsudo mv spark-$SPARK_VERSION-bin-hadoop3 /opt\nexport SPARK_HOME=/opt/spark-$SPARK_VERSION-bin-hadoop3/\nexport JAVA_HOME=/usr/lib/jvm/java-17-openjdk-amd64\nmkdir -p /tmp/spark-events\n```\n\nStart Spark in standalone mode:\n\n```shell\n$SPARK_HOME/sbin/start-master.sh\nexport SPARK_MASTER=spark://$(hostname):7077\n$SPARK_HOME/sbin/start-worker.sh $SPARK_MASTER\n```\n\n## Step 4: Generate TPC-DS Data\n\nLaunch `spark-shell` with the spark-sql-perf JAR loaded:\n\n```shell\n$SPARK_HOME/bin/spark-shell \\\n    --master $SPARK_MASTER \\\n    --jars /path/to/spark-sql-perf/target/scala-2.12/spark-sql-perf_2.12-0.5.1-SNAPSHOT.jar \\\n    --conf spark.driver.memory=8G \\\n    --conf spark.executor.instances=1 \\\n    --conf spark.executor.cores=8 \\\n    --conf spark.executor.memory=16g\n```\n\nIn the Spark shell, run the following to generate data. Adjust `scaleFactor` and paths as needed:\n\n```scala\nimport com.databricks.spark.sql.perf.tpcds.TPCDSTables\n\nval tpcdsKit = \"/path/to/tpcds-kit/tools\"\nval scaleFactor = \"100\"    // 100 GB\nval dataDir = \"/path/to/tpcds-data\"\nval format = \"parquet\"\nval numPartitions = 32     // adjust based on cluster size\n\nval tables = new TPCDSTables(spark.sqlContext, dsdgenDir = tpcdsKit, scaleFactor = scaleFactor)\ntables.genData(\n  location = dataDir,\n  format = format,\n  overwrite = true,\n  partitionTables = true,\n  clusterByPartitionColumns = true,\n  filterOutNullPartitionValues = false,\n  numPartitions = numPartitions\n)\n```\n\nData generation for SF100 typically takes 20-60 minutes depending on hardware. 
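While it runs, you can sanity-check progress from a second terminal by watching the output directory grow (path is illustrative):\n\n```shell\n# Reports the total size of the generated data so far\ndu -sh /path/to/tpcds-data\n```\n\n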
When complete, exit the shell:\n\n```scala\n:quit\n```\n\nVerify the data was generated:\n\n```shell\nls /path/to/tpcds-data/\n```\n\nYou should see directories for each TPC-DS table (`store_sales`, `catalog_sales`, `web_sales`, `customer`,\n`date_dim`, etc.).\n\nSet the `TPCDS_DATA` environment variable:\n\n```shell\nexport TPCDS_DATA=/path/to/tpcds-data\n```\n\n## Step 5: Run TPC-DS Benchmarks\n\n### Register Tables\n\nLaunch `spark-shell` with the spark-sql-perf JAR (same as Step 4) and register the generated data as tables:\n\n```scala\nimport com.databricks.spark.sql.perf.tpcds.{TPCDS, TPCDSTables}\n\nval scaleFactor = \"100\"\nval dataDir = \"/path/to/tpcds-data\"\nval format = \"parquet\"\nval databaseName = \"tpcds\"\n\n// Create database and register tables\nsql(s\"CREATE DATABASE IF NOT EXISTS $databaseName\")\nval tables = new TPCDSTables(spark.sqlContext, dsdgenDir = \"\", scaleFactor = scaleFactor)\ntables.createExternalTables(\n  location = dataDir,\n  format = format,\n  databaseName = databaseName,\n  overwrite = true,\n  discoverPartitions = true\n)\n\nsql(s\"USE $databaseName\")\n```\n\n### Run Spark Baseline\n\n```scala\nval tpcds = new TPCDS(spark.sqlContext)\n\n// Choose a query set: tpcds1_4Queries, tpcds2_4Queries, or tpcds4_0Queries\nval queries = tpcds.tpcds2_4Queries\n\nval experiment = tpcds.runExperiment(\n  executionsToRun = queries,\n  iterations = 3,\n  resultLocation = \"/path/to/results/spark\",\n  tags = Map(\"engine\" -> \"spark\", \"scale_factor\" -> \"100\"),\n  forkThread = true\n)\n\nexperiment.waitForFinish(86400)\n```\n\nResults are saved as JSON to the `resultLocation` path.\n\n### Run with Comet\n\nBuild Comet from source and launch `spark-shell` with both the Comet and spark-sql-perf JARs:\n\n```shell\nmake release\nexport COMET_JAR=$(pwd)/spark/target/comet-spark-spark3.5_2.12-*.jar\n\n$SPARK_HOME/bin/spark-shell \\\n    --master $SPARK_MASTER \\\n    --jars /path/to/spark-sql-perf/target/scala-2.12/spark-sql-perf_2.12-0.5.1-SNAPSHOT.jar,$COMET_JAR \\\n    --conf spark.driver.memory=8G \\\n    --conf spark.executor.instances=1 \\\n    --conf spark.executor.cores=8 \\\n    --conf spark.executor.memory=8g \\\n    --conf spark.memory.offHeap.enabled=true \\\n    --conf spark.memory.offHeap.size=8g \\\n    --conf spark.executor.extraClassPath=$COMET_JAR \\\n    --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \\\n    --conf spark.comet.enabled=true \\\n    --conf spark.comet.exec.enabled=true \\\n    --conf spark.comet.exec.all.enabled=true \\\n    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n    --conf spark.comet.exec.shuffle.enabled=true \\\n    --conf spark.comet.columnar.shuffle.enabled=true\n```\n\nThen register tables and run the benchmark the same way as the Spark baseline, changing the tags and result\nlocation:\n\n```scala\nval experiment = tpcds.runExperiment(\n  executionsToRun = queries,\n  iterations = 3,\n  resultLocation = \"/path/to/results/comet\",\n  tags = Map(\"engine\" -> \"comet\", \"scale_factor\" -> \"100\"),\n  forkThread = true\n)\n\nexperiment.waitForFinish(86400)\n```\n\n### View Results\n\nResults are saved as JSON under the `resultLocation`. 
You can query them directly in Spark:\n\n```scala\nval results = spark.read.json(\"/path/to/results/spark\")\nresults.select(\"name\", \"parsingTime\", \"analysisTime\", \"optimizationTime\", \"planningTime\", \"executionTime\")\n  .withColumn(\"totalTime\", (col(\"parsingTime\") + col(\"analysisTime\") +\n    col(\"optimizationTime\") + col(\"planningTime\") + col(\"executionTime\")) / 1000.0)\n  .orderBy(\"name\")\n  .show(200, false)\n```\n\n## Alternative: Command-Line Data Generation\n\nYou can also generate TPC-DS data without the Spark shell using `spark-submit`:\n\n```shell\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --class com.databricks.spark.sql.perf.tpcds.GenTPCDSData \\\n    --conf spark.driver.memory=8G \\\n    --conf spark.executor.instances=1 \\\n    --conf spark.executor.cores=8 \\\n    --conf spark.executor.memory=16g \\\n    /path/to/spark-sql-perf/target/scala-2.12/spark-sql-perf_2.12-0.5.1-SNAPSHOT.jar \\\n    -d /path/to/tpcds-kit/tools \\\n    -s 100 \\\n    -l /path/to/tpcds-data \\\n    -f parquet\n```\n\n## Troubleshooting\n\n### dsdgen not found\n\nEnsure `tpcds-kit/tools/dsdgen` exists and is executable. The `dsdgenDir` parameter in the Spark shell (or `-d`\nflag in the CLI) must point to the directory containing the `dsdgen` binary, not the binary itself.\n\n### Out of memory during data generation\n\nFor large scale factors (SF1000+), increase executor memory and the number of partitions:\n\n```shell\n--conf spark.executor.memory=32g\n```\n\nAnd in the Spark shell, use a higher `numPartitions` value (e.g., 200+).\n"
  },
  {
    "path": "docs/source/contributor-guide/bug_triage.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Bug Triage Guide\n\nThis guide describes how we prioritize and triage bugs in the Comet project. The goal is to ensure\nthat the most impactful bugs — especially correctness issues that produce wrong results — are\nidentified and addressed before less critical issues.\n\n## Priority Labels\n\nEvery bug should have exactly one priority label. When filing or triaging a bug, apply the\nappropriate label from the table below.\n\n| Label               | Color  | Description                                                                          | Examples                                                              |\n| ------------------- | ------ | ------------------------------------------------------------------------------------ | --------------------------------------------------------------------- |\n| `priority:critical` | Red    | Data corruption, silent wrong results, security vulnerabilities                      | Wrong aggregation results, FFI data corruption, incorrect cast output |\n| `priority:high`     | Orange | Crashes, panics, segfaults, major functional breakage affecting production workloads | Native engine panic, JVM segfault, NPE on supported code path         |\n| `priority:medium`   | Yellow | Functional bugs, performance regressions, broken features that have workarounds      | Missing expression support, writer feature gaps, excessive spilling   |\n| `priority:low`      | Green  | Minor issues, test-only failures, tooling, CI flakes, cosmetic issues                | Flaky CI test, build script edge case, documentation generator bug    |\n\n### How to Choose a Priority\n\nUse this decision tree:\n\n1. **Can this bug cause silent wrong results?** If yes → `priority:critical`. These are the most\n   dangerous bugs because users may not notice the incorrect output.\n2. **Does this bug crash the JVM or native engine?** If yes → `priority:high`. Crashes are\n   disruptive but at least visible to the user.\n3. **Does this bug break a feature or cause significant performance degradation?** If yes →\n   `priority:medium`. The user can work around it (e.g., falling back to Spark) but it impacts\n   the value of Comet.\n4. **Everything else** → `priority:low`. 
Test failures, CI issues, tooling, and cosmetic problems.\n\n### Escalation Triggers\n\nA bug should be escalated to a higher priority if:\n\n- A `priority:high` crash is discovered to also produce wrong results silently in some cases →\n  escalate to `priority:critical`\n- A `priority:medium` bug is reported by multiple users or affects a common workload → consider\n  escalating to `priority:high`\n- A `priority:low` CI flake is blocking PR merges consistently → escalate to `priority:medium`\n\n## Area Labels\n\nArea labels indicate which subsystem is affected. A bug may have multiple area labels. These\nhelp contributors find bugs in their area of expertise.\n\n| Label              | Description                               |\n| ------------------ | ----------------------------------------- |\n| `area:writer`      | Native writer (Parquet and other formats) |\n| `area:shuffle`     | Shuffle (JVM and native)                  |\n| `area:aggregation` | Hash aggregates, aggregate expressions    |\n| `area:scan`        | Data source scan (Parquet, CSV, Iceberg)  |\n| `area:expressions` | Expression evaluation                     |\n| `area:ffi`         | Arrow FFI / JNI boundary                  |\n| `area:ci`          | CI/CD, GitHub Actions, build tooling      |\n\nThe following pre-existing labels also serve as area indicators: `native_datafusion`,\n`native_iceberg_compat`, `spark 4`, `spark sql tests`.\n\n## Triage Process\n\nEvery new issue is automatically labeled with `requires-triage` when it is opened. This makes it\neasy to find issues that have not yet been triaged by filtering on that label. Once an issue has\nbeen triaged, remove the `requires-triage` label and apply the appropriate priority and area labels.\n\n### For New Issues\n\nWhen a new bug is filed:\n\n1. **Reproduce or verify** the issue if possible. If the report lacks reproduction steps, ask\n   the reporter for more details.\n2. **Assess correctness impact first.** Ask: \"Could this produce wrong results silently?\" This\n   is more important than whether it crashes.\n3. **Apply a priority label** using the decision tree above.\n4. **Apply area labels** to indicate the affected subsystem(s).\n5. **Apply `good first issue`** if the fix is likely straightforward and well-scoped.\n6. **Remove the `requires-triage` label** to indicate triage is complete.\n\n### For Existing Bugs\n\nPeriodically review open bugs to ensure priorities are still accurate:\n\n- Has a `priority:medium` bug been open for a long time with user reports? Consider escalating.\n- Has a `priority:high` bug been fixed by a related change? Close it.\n- Are there clusters of related bugs that should be tracked under an EPIC?\n\n### Prioritization Principles\n\n1. **Correctness over crashes.** A bug that silently returns wrong results is worse than one that\n   crashes, because crashes are at least visible.\n2. **User-reported over test-only.** A bug hit by a real user on a real workload takes priority\n   over one found only in test suites.\n3. **Core path over experimental.** Bugs in the default scan mode (`native_comet`) or widely-used\n   expressions take priority over bugs in experimental features like `native_datafusion` or\n   `native_iceberg_compat`.\n4. 
**Production safety over feature completeness.** Fixing a data corruption bug is more important\n   than adding support for a new expression.\n\n## Common Bug Categories\n\n### Correctness Bugs (`priority:critical`)\n\nThese are bugs where Comet produces different results than Spark without any error or warning.\nExamples include:\n\n- Incorrect cast behavior (e.g., negative zero to string)\n- Aggregate functions ignoring configuration (e.g., `ignoreNulls`)\n- Data corruption in FFI boundary (e.g., boolean arrays with non-zero offset)\n- Type mismatches between partial and final aggregation stages\n\nWhen fixing correctness bugs, always add a regression test that verifies the output matches Spark.\n\n### Crash Bugs (`priority:high`)\n\nThese are bugs where the native engine panics, segfaults, or throws an unhandled exception.\nCommon patterns include:\n\n- **All-scalar inputs:** Some expressions assume at least one columnar input and panic when all\n  inputs are literals (e.g., when `ConstantFolding` is disabled)\n- **Type mismatches:** Downcasting to the wrong Arrow array type\n- **Memory safety:** FFI boundary issues, unaligned arrays, GlobalRef lifecycle\n\n### Aggregate Planning Bugs\n\nSeveral bugs relate to how Comet plans hash aggregates across stage boundaries. The key issue is\nthat Spark's AQE may materialize a Comet partial aggregate but then run the final aggregate in\nSpark (or vice versa), and the intermediate formats may not be compatible. See the\n[EPIC #2892](https://github.com/apache/datafusion-comet/issues/2892) for the full picture.\n\n### Native Writer Bugs\n\nThe native Parquet writer has a cluster of known test failures tracked as individual issues\n(#3417–#3430). These are lower priority since the native writer is still maturing, but they\nshould be addressed before the writer is promoted to production-ready status.\n\n## How to Help with Triage\n\nTriage is a valuable contribution that doesn't require writing code. You can help by:\n\n- Reviewing new issues and suggesting a priority label\n- Reproducing reported bugs and adding details\n- Identifying duplicate issues\n- Linking related issues together\n- Testing whether old bugs have been fixed by recent changes\n"
  },
  {
    "path": "docs/source/contributor-guide/contributing.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Contributing to Apache DataFusion Comet\n\nWe welcome contributions to Comet in many areas, and encourage new contributors to get involved.\n\nHere are some areas where you can help:\n\n- Testing Comet with existing Spark jobs and reporting issues for any bugs or performance issues\n- Contributing code to support Spark expressions, operators, and data types that are not currently supported\n- Reviewing pull requests and helping to test new features for correctness and performance\n- Improving documentation\n\n## Finding issues to work on\n\nWe maintain a list of good first issues in GitHub [here](https://github.com/apache/datafusion-comet/issues?q=is%3Aopen+is%3Aissue+label%3A%22good+first+issue%22). We also have a [roadmap](roadmap.md).\n\nTo assign yourself an issue, comment `take` on the issue. To unassign yourself, comment `untake`.\n\n## Reporting issues\n\nWe use [GitHub issues](https://github.com/apache/datafusion-comet/issues) for bug reports and feature requests.\n\n## Asking for Help\n\nThe Comet project uses the same Slack and Discord channels as the main Apache DataFusion project. See details at\n[Apache DataFusion Communications]. There are dedicated Comet channels in both Slack and Discord.\n\n## Regular public meetings\n\nThe Comet contributors hold regular video calls where new and current contributors are welcome to ask questions and\ncoordinate on issues that they are working on.\n\nSee the [Apache DataFusion Comet community meeting] Google document for more information.\n\n[Apache DataFusion Communications]: https://datafusion.apache.org/contributor-guide/communication.html\n[Apache DataFusion Comet community meeting]: https://docs.google.com/document/d/1NBpkIAuU7O9h8Br5CbFksDhX-L9TyO9wmGLPMe0Plc8/edit?usp=sharing\n"
  },
  {
    "path": "docs/source/contributor-guide/debugging.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Debugging Guide\n\nThis HOWTO describes how to debug JVM code and Native code concurrently. The guide assumes you have:\n\n1. IntelliJ as the Java IDE\n2. CLion as the Native IDE. For Rust code, the CLion Rust language plugin is required. Note that the\n   IntelliJ Rust plugin is not sufficient.\n3. CLion/LLDB as the native debugger. CLion ships with a bundled LLDB and the Rust community has\n   its own packaging of LLDB (`lldb-rust`). Both provide a better display of Rust symbols than plain\n   LLDB or the LLDB that is bundled with XCode. We will use the LLDB packaged with CLion for this guide.\n4. We will use a Comet _unit_ test as the canonical use case.\n\n_Caveat: The steps here have only been tested with JDK 11_ on Mac (M1)\n\n## Debugging for Advanced Developers\n\nAdd a `.lldbinit` to comet/native. This is not strictly necessary but will be useful if you want to\nuse advanced `lldb` debugging. For example, we can ignore some exceptions and signals that are not relevant\nto our debugging and would otherwise cause the debugger to stop.\n\n```\nsettings set platform.plugin.darwin.ignored-exceptions EXC_BAD_ACCESS|EXC_BAD_INSTRUCTION\nprocess handle -n true -p true -s false SIGBUS SIGSEGV SIGILL\n```\n\n### In IntelliJ\n\n1. Set a breakpoint in `NativeBase.load()`, at a point _after_ the Comet library has been loaded.\n\n1. Add a Debug Configuration for the unit test\n\n1. In the Debug Configuration for that unit test add `-Xint` as a JVM parameter. This option is\n   undocumented _magic_. Without this, the LLDB debugger hits a EXC_BAD_ACCESS (or EXC_BAD_INSTRUCTION) from\n   which one cannot recover.\n\n1. Add a println to the unit test to print the PID of the JVM process. (jps can also be used but this is less error prone if you have multiple jvm processes running)\n\n   ```scala\n        println(\"Waiting for Debugger: PID - \", ManagementFactory.getRuntimeMXBean().getName())\n   ```\n\n   This will print something like : `PID@your_machine_name`.\n\n   For JDK9 and newer\n\n   ```scala\n        println(\"Waiting for Debugger: PID - \", ProcessHandle.current.pid)\n   ```\n\n   ==> Note the PID\n\n1. Debug-run the test in IntelliJ and wait for the breakpoint to be hit\n\n### In CLion\n\n1. After the breakpoint is hit in IntelliJ, in Clion (or LLDB from terminal or editor) -\n   1. Attach to the jvm process (make sure the PID matches). In CLion, this is `Run -> Attach to process`\n\n   1. Put your breakpoint in the native code\n\n1. Go back to IntelliJ and resume the process.\n\n1. Most debugging in CLion is similar to IntelliJ. For advanced LLDB based debugging the LLDB command line can be accessed from the LLDB tab in the Debugger view. 
Refer to the [LLDB manual](https://lldb.llvm.org/use/tutorial.html) for LLDB commands.\n\n### After your debugging is done\n\n1. In CLion, detach from the process if not already detached.\n\n2. In IntelliJ, the debugger might have lost track of the process. If so, the debugger tab\n   will show the process as running (even if the test/job is shown as completed).\n\n3. Close the debugger tab, and if the IDE asks whether it should terminate the process,\n   click Yes.\n\n4. In a terminal, use `jps` to identify the process with the process id you were debugging. If\n   it shows up as running, run `kill -9 [pid]`. If that doesn't remove the process, don't bother;\n   the process will be left behind as a zombie and will consume no (significant) resources.\n   Eventually it will be cleaned up when you reboot, possibly after a software update.\n\n### Additional Info\n\nOpenJDK mailing list on debugging the JDK on macOS:\n<https://mail.openjdk.org/pipermail/hotspot-dev/2019-September/039429.html>\n\nDetecting the debugger:\n<https://stackoverflow.com/questions/5393403/can-a-java-application-detect-that-a-debugger-is-attached#:~:text=No.,to%20let%20your%20app%20continue.&text=I%20know%20that%20those%20are,meant%20with%20my%20first%20phrase>\n\n## Verbose debug\n\n### Enabling Native Backtraces\n\nBy default, Comet does not show native backtraces when exceptions happen in native code:\n\n```scala\nscala> spark.sql(\"my_failing_query\").show(false)\n\n24/03/05 17:00:07 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)/ 1]\norg.apache.comet.CometNativeException: Internal error: MIN/MAX is not expected to receive scalars of incompatible types (Date32(\"NULL\"), Int32(15901)).\nThis was likely caused by a bug in DataFusion's code and we would welcome that you file an bug report in our issue tracker\n        at org.apache.comet.Native.executePlan(Native Method)\n        at org.apache.comet.CometExecIterator.executeNative(CometExecIterator.scala:65)\n        at org.apache.comet.CometExecIterator.getNextBatch(CometExecIterator.scala:111)\n        at org.apache.comet.CometExecIterator.hasNext(CometExecIterator.scala:126)\n\n```\n\nComet can be built with DataFusion's [backtrace] feature enabled, which will include native backtraces in `CometNativeException`.\n\n[backtrace]: https://arrow.apache.org/datafusion/user-guide/example-usage.html#enable-backtraces\n\nTo build Comet with this feature enabled:\n\n```shell\nmake release COMET_FEATURES=backtrace\n```\n\nSet `RUST_BACKTRACE=1` for the Spark worker/executor process, or for `spark-submit` if running in local mode.\n\n```console\nRUST_BACKTRACE=1 $SPARK_HOME/bin/spark-shell --jars spark/target/comet-spark-spark3.5_2.12-$COMET_VERSION.jar --conf spark.plugins=org.apache.spark.CometPlugin --conf spark.comet.enabled=true --conf spark.comet.exec.enabled=true\n```\n\nGet the expanded exception details:\n\n```scala\nscala> spark.sql(\"my_failing_query\").show(false)\n24/03/05 17:00:49 ERROR Executor: Exception in task 0.0 in stage 0.0 (TID 0)\norg.apache.comet.CometNativeException: Internal error: MIN/MAX is not expected to receive scalars of incompatible types (Date32(\"NULL\"), Int32(15901))\n\nbacktrace:\n  0: std::backtrace::Backtrace::create\n  1: datafusion::physical_expr::aggregate::min_max::min\n  2: <datafusion::physical_expr::aggregate::min_max::MinAccumulator as datafusion_expr::accumulator::Accumulator>::update_batch\n  3: <futures_util::stream::stream::fuse::Fuse<S> as futures_core::stream::Stream>::poll_next\n  4: 
comet::execution::jni_api::Java_org_apache_comet_Native_executePlan::{{closure}}\n  5: _Java_org_apache_comet_Native_executePlan\n  (reduced)\n\nThis was likely caused by a bug in DataFusion's code and we would welcome that you file an bug report in our issue tracker\n        at org.apache.comet.Native.executePlan(Native Method)\nat org.apache.comet.CometExecIterator.executeNative(CometExecIterator.scala:65)\nat org.apache.comet.CometExecIterator.getNextBatch(CometExecIterator.scala:111)\nat org.apache.comet.CometExecIterator.hasNext(CometExecIterator.scala:126)\n(reduced)\n\n```\n\nNote:\n\n- The backtrace coverage in DataFusion is still improving, so there is a chance the error is still not covered; if so, feel free to file a [ticket](https://github.com/apache/arrow-datafusion/issues)\n- Backtrace evaluation comes with a performance cost and is intended mostly for debugging purposes\n\n### Native log configuration\n\nBy default, Comet emits native-side logs at the `INFO` level to `stderr`.\n\nYou can use the `COMET_LOG_LEVEL` environment variable to specify the log level. Supported values are: `OFF`, `ERROR`, `WARN`, `INFO`, `DEBUG`, `TRACE`.\n\nFor example, to configure native logs at the `DEBUG` level on the Spark executors:\n\n```\nspark.executorEnv.COMET_LOG_LEVEL=DEBUG\n```\n\nThis produces output like the following:\n\n```\n25/09/15 20:17:42 INFO core/src/lib.rs: Comet native library version 0.11.0 initialized\n25/09/15 20:17:44 DEBUG /xxx/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/datafusion-execution-49.0.2/src/disk_manager.rs: Created local dirs [TempDir { path: \"/private/var/folders/4p/9gtjq1s10fd6frkv9kzy0y740000gn/T/blockmgr-ba524f95-a792-4d79-b49c-276ba324941e/datafusion-qrpApx\" }] as DataFusion working directory\n25/09/15 20:17:44 DEBUG /xxx/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/datafusion-functions-nested-49.0.2/src/lib.rs: Overwrite existing UDF: array_to_string\n25/09/15 20:17:44 DEBUG /xxx/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/datafusion-functions-nested-49.0.2/src/lib.rs: Overwrite existing UDF: string_to_array\n25/09/15 20:17:44 DEBUG /xxx/.cargo/registry/src/index.crates.io-1949cf8c6b5b557f/datafusion-functions-nested-49.0.2/src/lib.rs: Overwrite existing UDF: range\n...\n```\n\nAdditionally, you can place a `log4rs.yaml` configuration file inside the Comet configuration directory specified by the `COMET_CONF_DIR` environment variable to enable more advanced logging configurations. 
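For example, to start from the sample configuration in the repository (the destination path is illustrative):\n\n```shell\n# Point Comet at a config directory and seed it with the sample log4rs.yaml\nexport COMET_CONF_DIR=/path/to/comet-conf\nmkdir -p $COMET_CONF_DIR\ncp conf/log4rs.yaml $COMET_CONF_DIR/\n```\n\n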
This file uses the [log4rs YAML configuration format](https://docs.rs/log4rs/latest/log4rs/#configuration-via-a-yaml-file).\nFor example, see: [log4rs.yaml](https://github.com/apache/datafusion-comet/blob/main/conf/log4rs.yaml).\n\n### Debugging Memory Reservations\n\nSet `spark.comet.debug.memory=true` to log all calls that grow or shrink memory reservations.\n\nExample log output:\n\n```\n[Task 486] MemoryPool[ExternalSorter[6]].try_grow(256232960) returning Ok\n[Task 486] MemoryPool[ExternalSorter[6]].try_grow(256375168) returning Ok\n[Task 486] MemoryPool[ExternalSorter[6]].try_grow(256899456) returning Ok\n[Task 486] MemoryPool[ExternalSorter[6]].try_grow(257296128) returning Ok\n[Task 486] MemoryPool[ExternalSorter[6]].try_grow(257820416) returning Err\n[Task 486] MemoryPool[ExternalSorterMerge[6]].shrink(10485760)\n[Task 486] MemoryPool[ExternalSorter[6]].shrink(150464)\n[Task 486] MemoryPool[ExternalSorter[6]].shrink(146688)\n[Task 486] MemoryPool[ExternalSorter[6]].shrink(137856)\n[Task 486] MemoryPool[ExternalSorter[6]].shrink(141952)\n[Task 486] MemoryPool[ExternalSorterMerge[6]].try_grow(0) returning Ok\n[Task 486] MemoryPool[ExternalSorterMerge[6]].try_grow(0) returning Ok\n[Task 486] MemoryPool[ExternalSorter[6]].shrink(524288)\n[Task 486] MemoryPool[ExternalSorterMerge[6]].try_grow(0) returning Ok\n[Task 486] MemoryPool[ExternalSorterMerge[6]].try_grow(68928) returning Ok\n```\n\nWhen backtraces are enabled (see the earlier section), backtraces will be included for failed allocations.\n"
  },
  {
    "path": "docs/source/contributor-guide/development.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Development Guide\n\n## Project Layout\n\n```\n├── common     <- common Java/Scala code\n├── conf       <- configuration files\n├── native     <- native code, in Rust\n├── spark      <- Spark integration\n```\n\n## Threading Architecture\n\nComet's native execution runs on a shared tokio multi-threaded runtime. Understanding this\narchitecture is important because it affects how you write native operators and JVM callbacks.\n\n### How execution works\n\nSpark calls into native code via JNI from an **executor task thread**. There are two execution\npaths depending on whether the plan reads data from the JVM:\n\n**Async I/O path (no JVM data sources, e.g. Iceberg scans):** The DataFusion stream is spawned\nonto a tokio worker thread and batches are delivered to the executor thread via an `mpsc` channel.\nThe executor thread parks in `blocking_recv()` until the next batch is ready. This avoids\nbusy-polling on I/O-bound workloads.\n\n**JVM data source path (ScanExec present):** The executor thread calls `block_on()` and polls the\nDataFusion stream directly, interleaving `pull_input_batches()` calls on `Poll::Pending` to feed\ndata from the JVM into ScanExec operators.\n\nIn both cases, DataFusion operators execute on **tokio worker threads**, not on the Spark executor\ntask thread. All Spark tasks on an executor share one tokio runtime.\n\n### Rules for native code\n\n**Do not use `thread_local!` or assume thread identity.** Tokio may run your operator's `poll`\nmethod on any worker thread, and may move it between threads across polls. Any state must live\nin the operator struct or be shared via `Arc`.\n\n**JNI calls work from any thread, but have overhead.** `JVMClasses::get_env()` calls\n`AttachCurrentThread`, which acquires JVM internal locks. The `AttachGuard` detaches the thread\nwhen dropped. Repeated attach/detach cycles on tokio workers add overhead, so avoid calling\ninto the JVM on hot paths during stream execution.\n\n**Do not call `TaskContext.get()` from JVM callbacks during execution.** Spark's `TaskContext` is\na `ThreadLocal` on the executor task thread. JVM methods invoked from tokio worker threads will\nsee `null`. If you need task metadata, capture it at construction time (in `createPlan` or\noperator setup) and store it in the operator. See `CometTaskMemoryManager` for an example — it\ncaptures `TaskContext.get().taskMemoryManager()` in its constructor and uses the stored reference\nthereafter.\n\n**Memory pool operations call into the JVM.** `CometUnifiedMemoryPool` and `CometFairMemoryPool`\ncall `acquireMemory()` / `releaseMemory()` via JNI whenever DataFusion operators grow or shrink\nmemory reservations. This happens on whatever thread the operator is executing on. 
These calls\nare thread-safe (they use stored `GlobalRef`s, not thread-locals), but they do trigger\n`AttachCurrentThread`.\n\n**Scalar subqueries call into the JVM.** `Subquery::evaluate()` calls static methods on\n`CometScalarSubquery` via JNI. These use a static `HashMap`, not thread-locals, so they are\nsafe from any thread.\n\n**Parquet encryption calls into the JVM.** `CometKeyRetriever::retrieve_key()` calls the JVM\nto unwrap decryption keys during Parquet reads. It uses a stored `GlobalRef` and a cached\n`JMethodID`, so it is safe from any thread.\n\n### The tokio runtime\n\nThe runtime is created once per executor JVM in a `Lazy<Runtime>` static:\n\n- **Worker threads:** `num_cpus` by default, configurable via `COMET_WORKER_THREADS`\n- **Max blocking threads:** 512 by default, configurable via `COMET_MAX_BLOCKING_THREADS`\n- All async I/O (S3, HTTP, Parquet reads) runs on worker threads as non-blocking futures\n\n### Summary of what is safe and what is not\n\n| Pattern                                   | Safe?  | Notes                                    |\n| ----------------------------------------- | ------ | ---------------------------------------- |\n| `Arc<T>` shared across operators          | Yes    | Standard Rust thread safety              |\n| `JVMClasses::get_env()` from tokio worker | Yes    | Attaches thread to JVM automatically     |\n| `thread_local!` in operator code          | **No** | Tokio moves tasks between threads        |\n| `TaskContext.get()` in JVM callback       | **No** | Returns `null` on non-executor threads   |\n| Storing `JNIEnv` in an operator           | **No** | `JNIEnv` is thread-specific              |\n| Capturing state at plan creation time     | Yes    | Runs on executor thread, store in struct |\n\n## Global singletons\n\nComet code runs in both the driver and executor JVM processes, and different parts of the\ncodebase run in each. Global singletons have **process lifetime** — they are created once and\nnever dropped until the JVM exits. Since multiple Spark jobs, queries, and tasks share the same\nprocess, this makes it difficult to reason about what state a singleton holds and whether it is\nstill valid.\n\n### How to recognize them\n\n**Rust:** `static` variables using `OnceLock`, `LazyLock`, `OnceCell`, `Lazy`, or `lazy_static!`:\n\n```rust\nstatic TOKIO_RUNTIME: OnceLock<Runtime> = OnceLock::new();\nstatic TASK_SHARED_MEMORY_POOLS: Lazy<Mutex<HashMap<i64, PerTaskMemoryPool>>> = Lazy::new(..);\n```\n\n**Java:** `static` fields, especially mutable collections:\n\n```java\nprivate static final HashMap<Long, HashMap<Long, ScalarSubquery>> subqueryMap = new HashMap<>();\n```\n\n**Scala:** `object` declarations (companion objects are JVM singletons) holding mutable state:\n\n```scala\nobject MyCache {\n  private val cache = new ConcurrentHashMap[String, Value]()\n}\n```\n\n### Why they are dangerous\n\n- **Credential staleness.** A singleton caching an authenticated client will hold stale\n  credentials after token rotation, causing silent failures mid-job.\n- **Unbounded growth.** A cache keyed by file path or configuration grows with every query\n  but never shrinks. Over hours of process uptime this becomes a memory leak.\n- **Cross-job contamination.** Different Spark jobs on the same process may use different\n  configurations. 
A singleton initialized by the first job silently serves wrong state to\n  subsequent jobs.\n- **Testing difficulty.** Global state persists across test cases, making tests\n  order-dependent.\n\n### When a singleton is acceptable\n\nSome state genuinely has process lifetime:\n\n| Singleton                                     | Why it is safe                                      |\n| --------------------------------------------- | --------------------------------------------------- |\n| `TOKIO_RUNTIME`                               | One runtime per executor, no configuration variance |\n| `JAVA_VM` / `JVM_CLASSES`                     | One JVM per process, set once at JNI load           |\n| `OperatorRegistry` / `ExpressionRegistry`     | Immutable after initialization                      |\n| Compiled `Regex` patterns (`LazyLock<Regex>`) | Stateless and immutable                             |\n\n### When to avoid a singleton\n\nIf any of these apply, do **not** use a global singleton:\n\n- The state depends on configuration that can vary between jobs or queries\n- The state holds credentials or authenticated connections that can expire or be rotated\n- The state grows proportionally to the number of queries or files processed\n- The state needs cleanup or refresh during process lifetime\n\nInstead, scope state to the plan or task by adding the cache as a field in an existing session or context object.\n\nIf a singleton is truly needed, add a comment explaining why `static` is the right lifetime,\nwhether the cache is bounded, and how credential refresh is handled (if applicable).\n\n## Development Setup\n\n1. Make sure `JAVA_HOME` is set and points to a supported JDK (see the [support matrix](../user-guide/latest/installation.md))\n2. Install the Rust toolchain. The easiest way is to use\n   [rustup](https://rustup.rs).\n\n## Build & Test\n\nA few common commands are specified in the project's `Makefile`:\n\n- `make`: compile the entire project, but don't run tests\n- `make test-rust`: compile the project and run the tests on the Rust side\n- `make test-jvm`: compile the project and run the tests on the Java side\n- `make test`: compile the project and run the tests on both the Rust and Java\n  sides.\n- `make release`: compile the project and create a release build. This\n  is useful when you want to test a local Comet installation in another project\n  such as Spark.\n- `make clean`: clean up the workspace\n\n## Common Build and Test Pitfalls\n\n### Native Code Must Be Built First\n\nThe native Rust code must be compiled before running JVM tests. If you skip this step, tests will\nfail because they cannot find the native library. Always run `make core` (or `cd native && cargo build`)\nbefore running Maven tests.\n\n```sh\n# Correct order\nmake core                    # Build native code first\n./mvnw test -Dsuites=\"...\"   # Then run JVM tests\n```\n\n### Debug vs Release Mode\n\nThere is no need to use release mode (`make release`) during normal development. Debug builds\nare faster to compile and provide better error messages. Only use release mode when:\n\n- Running benchmarks\n- Testing performance\n- Creating a release build for deployment\n\nFor regular development and testing, use `make` or `make core`, which build in debug mode.\n\n### Running Rust Tests\n\nWhen running Rust tests directly with `cargo test`, the JVM library (`libjvm.so`) must be on\nyour library path.\n
Set the `LD_LIBRARY_PATH` environment variable to include your JDK's `lib/server`\ndirectory:\n\n```sh\n# Find your libjvm.so location (example for typical JDK installation)\nexport LD_LIBRARY_PATH=$JAVA_HOME/lib/server:$LD_LIBRARY_PATH\n\n# Now you can run Rust tests\ncd native && cargo test\n```\n\nAlternatively, use `make test-rust`, which handles the JVM compilation dependency automatically.\n\n### Avoid Using `-pl` to Select Modules\n\nWhen running Maven tests, avoid using `-pl spark` to select only the spark module. This can cause\nMaven to pick up the `common` module from your local Maven repository instead of using the current\ncodebase, leading to inconsistent test results:\n\n```sh\n# Avoid this - may use stale common module from local repo\n./mvnw test -pl spark -Dsuites=\"...\"\n\n# Do this instead - builds and tests with current code\n./mvnw test -Dsuites=\"...\"\n```\n\n### Use `wildcardSuites` for Running Tests\n\nWhen running specific test suites, use `wildcardSuites` instead of `suites` for more flexible\nmatching. The `wildcardSuites` parameter allows partial matching of suite names:\n\n```sh\n# Run all suites containing \"CometCast\"\n./mvnw test -DwildcardSuites=\"CometCast\"\n\n# Run specific suite with filter\n./mvnw test -Dsuites=\"org.apache.comet.CometCastSuite valid\"\n```\n\n## Development Environment\n\nComet is a multi-language project with native code written in Rust and JVM code written in Java and Scala.\nFor Rust code, the CLion IDE is recommended. For JVM code, IntelliJ IDEA is recommended.\n\nBefore opening the project in an IDE, run `make` once to generate the files the IDEs need; currently this mostly\nmeans generating the protobuf message classes for the JVM side. This only needs to be done once after cloning the repo.\n\n### IntelliJ IDEA\n\nFirst, install the Scala plugin in IntelliJ IDEA.\nAfter that, you can open the project in IntelliJ IDEA. The IDE should automatically detect the project structure and import it as a Maven project.\n\nComet uses generated source files that are too large for IntelliJ's default size limit for code inspections. To avoid IDE errors\n(missing definitions, etc.) caused by IntelliJ skipping these generated files, modify\n[IntelliJ's Platform Properties](https://intellij-support.jetbrains.com/hc/en-us/articles/206544869-Configuring-JVM-options-and-platform-properties)\nby going to `Help -> Edit Custom Properties...`. For example, adding `idea.max.intellisense.filesize=16384` increases the file\nsize limit to 16 MB.\n\n### CLion\n\nFirst, install the Rust plugin in CLion, or use the dedicated Rust IDE, RustRover.\nAfter that, you can open the project in CLion. The IDE should automatically detect the project structure and import it as a Cargo project.\n\n### SQL file tests (recommended for expressions)\n\nFor testing expressions and operators, prefer using SQL file tests over writing Scala test\ncode. SQL file tests are plain `.sql` files that are automatically discovered and executed --\nno Scala code to write, and no recompilation needed when tests change.\n
This makes it easy to\niterate quickly and to get good coverage of edge cases and argument combinations.\n\nSee the [SQL File Tests](sql-file-tests) guide for the full documentation on how to write\nand run these tests.\n\n### Running Tests in IDEA\n\nLike other Maven projects, you can run tests in IntelliJ IDEA by right-clicking on the test class or test method and selecting \"Run\" or \"Debug\".\nHowever, if the test is related to the native side, make sure to run `make core` or `cd native && cargo build` before running it in IDEA.\n\n### Running Tests from the Command Line\n\nIt is possible to specify which ScalaTest suites you want to run from the CLI using the `suites`\nargument. For example, if you only want to execute the test cases that contain _valid_\nin their name in `org.apache.comet.CometCastSuite`, you can use\n\n```sh\n./mvnw test -Dtest=none -Dsuites=\"org.apache.comet.CometCastSuite valid\"\n```\n\nOther options for selecting specific suites are described in the [ScalaTest Maven Plugin documentation](https://www.scalatest.org/user_guide/using_the_scalatest_maven_plugin).\n\n## Plan Stability Testing\n\nComet has a plan stability testing framework, located in the `spark` module, that verifies the query plans\ngenerated by Comet do not change unexpectedly.\n\n### Using the Helper Script\n\nThe easiest way to regenerate golden files is to use the provided script:\n\n```sh\n# Regenerate golden files for all Spark versions\n./dev/regenerate-golden-files.sh\n\n# Regenerate only for a specific Spark version\n./dev/regenerate-golden-files.sh --spark-version 3.5\n```\n\nThe script verifies that JDK 17+ is configured (required for Spark 4.0), installs Comet for each\nSpark version, and runs the plan stability tests with `SPARK_GENERATE_GOLDEN_FILES=1`.\n\n### Manual Instructions\n\nAlternatively, you can run the tests manually using the following commands.\n\nNote that the output files get written to `$SPARK_HOME`.\n\nThe tests can be run with:\n\n```sh\nexport SPARK_HOME=`pwd`\n./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV1_4_PlanStabilitySuite\" -Pspark-3.4 -nsu test\n./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV1_4_PlanStabilitySuite\" -Pspark-3.5 -nsu test\n./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV1_4_PlanStabilitySuite\" -Pspark-4.0 -nsu test\n```\n\nand\n\n```sh\nexport SPARK_HOME=`pwd`\n./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\" -Pspark-3.4 -nsu test\n./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\" -Pspark-3.5 -nsu test\n./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\" -Pspark-4.0 -nsu test\n```\n\nIf your pull request changes the query plans generated by Comet, you should regenerate the golden files\nby running the following commands.\n\n```sh\nexport SPARK_HOME=`pwd`\nSPARK_GENERATE_GOLDEN_FILES=1 ./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV1_4_PlanStabilitySuite\" -Pspark-3.4 -nsu test\nSPARK_GENERATE_GOLDEN_FILES=1 ./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV1_4_PlanStabilitySuite\" -Pspark-3.5 -nsu test\nSPARK_GENERATE_GOLDEN_FILES=1 ./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV1_4_PlanStabilitySuite\" -Pspark-4.0 -nsu test\n```\n\nand\n\n```sh\nexport SPARK_HOME=`pwd`\nSPARK_GENERATE_GOLDEN_FILES=1 ./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\" -Pspark-3.4 -nsu test\n
SPARK_GENERATE_GOLDEN_FILES=1 ./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\" -Pspark-3.5 -nsu test\nSPARK_GENERATE_GOLDEN_FILES=1 ./mvnw -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\" -Pspark-4.0 -nsu test\n```\n\n## Benchmark\n\nThere's a `make` command to run micro benchmarks in the repo. For\ninstance:\n\n```sh\nmake benchmark-org.apache.spark.sql.benchmark.CometReadBenchmark\n```\n\nTo run TPC-H or TPC-DS micro benchmarks, please follow the instructions\nin the respective source code, e.g., `CometTPCHQueryBenchmark`.\n\n## Debugging\n\nComet is a multi-language project with native code written in Rust and JVM code written in Java and Scala.\nIt is possible to debug both native and JVM code concurrently as described in the [DEBUGGING guide](debugging).\n\n## Submitting a Pull Request\n\nBefore submitting a pull request, follow this checklist to ensure your changes are ready:\n\n### 1. Format Your Code\n\nComet uses `cargo fmt`, [Scalafix](https://github.com/scalacenter/scalafix) and [Spotless](https://github.com/diffplug/spotless/tree/main/plugin-maven) to\nautomatically format the code. Run the following command to format all code:\n\n```sh\nmake format\n```\n\n### 2. Build and Verify\n\nAfter formatting, run a full build to ensure everything compiles correctly and generated\ndocumentation is up to date:\n\n```sh\nmake\n```\n\nThis builds both native and JVM code. Fix any compilation errors before proceeding.\n\n### 3. Run Clippy (Recommended)\n\nIt's strongly recommended to run Clippy locally to catch potential issues before the CI/CD pipeline does. You can run the same Clippy checks used in CI/CD with:\n\n```bash\ncd native\ncargo clippy --color=never --all-targets --workspace -- -D warnings\n```\n\nMake sure to resolve any Clippy warnings before submitting your pull request, as the CI/CD pipeline will fail if warnings are present.\n\n### 4. Run Tests\n\nRun the relevant tests for your changes:\n\n```sh\n# Run all tests\nmake test\n\n# Or run only Rust tests\nmake test-rust\n\n# Or run only JVM tests (native must be built first)\nmake test-jvm\n```\n\n### 5. Register New Test Suites in CI\n\nComet's CI does not automatically discover test suites. Instead, test suites are explicitly listed\nin the GitHub Actions workflow files so they can be grouped by category and run as separate parallel\njobs. This reduces overall CI time.\n\nIf you add a new Scala test suite, you must add it to the `suite` matrix in **both** workflow files:\n\n- `.github/workflows/pr_build_linux.yml`\n- `.github/workflows/pr_build_macos.yml`\n\nEach file contains a `suite` matrix with named groups such as `fuzz`, `shuffle`, `parquet`, `csv`,\n`exec`, `expressions`, and `sql`. Add your new suite's fully qualified class name to the\nappropriate group. For example, if you add a new expression test suite, add it to the\n`expressions` group:\n\n```yaml\n- name: \"expressions\"\n  value: |\n    org.apache.comet.CometExpressionSuite\n    # ... existing suites ...\n
    org.apache.comet.YourNewExpressionSuite  # <-- add here\n```\n\nChoose the group that best matches the area your test covers:\n\n| Group         | Covers                                                     |\n| ------------- | ---------------------------------------------------------- |\n| `fuzz`        | Fuzz testing and data generation                           |\n| `shuffle`     | Shuffle operators and related exchange behavior            |\n| `parquet`     | Parquet read/write and native reader tests                 |\n| `csv`         | CSV native read tests                                      |\n| `exec`        | Execution operators, joins, aggregates, plan rules, TPC-\\* |\n| `expressions` | Expression evaluation, casts, and SQL file tests           |\n| `sql`         | SQL-level behavior tests                                   |\n\n**Important:** The suite lists in both workflow files must stay in sync. A separate CI check\n(`.github/workflows/pr_missing_suites.yml`) runs `dev/ci/check-suites.py` on every pull request.\nIt scans for all `*Suite.scala` files in the repository and verifies that each one appears in both\nworkflow files. If any suite is missing, this check will fail and block the PR.\n\n### Pre-PR Summary\n\n```sh\nmake format   # Format code\nmake          # Build everything and update generated docs\nmake test     # Run tests (optional but recommended)\n```\n\n## How to format `.md` documents\n\nWe are using `prettier` to format `.md` files.\n\nYou can either use `npm i -g prettier` to install it globally or use `npx` to run it as a standalone binary. Using `npx` requires a working Node.js environment. Using the latest prettier is recommended; re-running the `npm` install command will upgrade a global installation.\n\n```bash\n$ prettier --version\n2.3.0\n```\n\nAfter you've confirmed your prettier version, you can format all the `.md` files:\n\n```bash\nnpx prettier \"**/*.md\" --write\n```\n"
  },
  {
    "path": "docs/source/contributor-guide/expression-audit-log.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Expression Audit Log\n\nThis document tracks which Comet expressions have been audited against their Spark\nimplementations for correctness and test coverage.\n\nEach audit compares the Comet implementation against the Spark source code for the listed\nversions, reviews existing test coverage, identifies gaps, and adds missing tests where needed.\n\n## Audited Expressions\n\n| Expression     | Spark Versions Checked | Date       | Findings                                                                                                                                                                                                                                                                                                                                                                                                                                                              |\n| -------------- | ---------------------- | ---------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `array_insert` | 3.4.3, 3.5.8, 4.0.1    | 2026-04-02 | No behavioral differences across Spark versions. Fixed `nullable()` metadata bug (did not account for `pos_expr`). Added SQL tests for multiple types (string, boolean, double, float, long, short, byte), literal arguments, null handling, negative indices, out-of-bounds padding, special float values (NaN, Infinity), multibyte UTF-8, and legacy negative index mode. Known incompatibility: pos=0 error message differs from Spark's `INVALID_INDEX_OF_ZERO`. |\n"
  },
  {
    "path": "docs/source/contributor-guide/ffi.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Arrow FFI Usage in Comet\n\n## Overview\n\nComet uses the [Arrow C Data Interface](https://arrow.apache.org/docs/format/CDataInterface.html) for zero-copy data transfer in two directions:\n\n1. **JVM → Native**: Native code pulls batches from JVM using `CometBatchIterator`\n2. **Native → JVM**: JVM pulls batches from native code using `CometExecIterator`\n\nThe following diagram shows an example of the end-to-end flow for a query stage.\n\n![Diagram of Comet Data Flow](/_static/images/comet-dataflow.svg)\n\nBoth scenarios use the same FFI mechanism but have different ownership semantics and memory management implications.\n\n## Arrow FFI Basics\n\nThe Arrow C Data Interface defines two C structures:\n\n- `ArrowArray`: Contains pointers to data buffers and metadata\n- `ArrowSchema`: Contains type information\n\n### Key Characteristics\n\n- **Zero-copy**: Data buffers can be shared across language boundaries without copying\n- **Ownership transfer**: Clear semantics for who owns and must free the data\n- **Release callbacks**: Custom cleanup functions for proper resource management\n\n## JVM → Native Data Flow (ScanExec)\n\n### Architecture\n\nWhen native code needs data from the JVM, it uses `ScanExec` which calls into `CometBatchIterator`:\n\n```\n┌─────────────────┐\n│  Spark/Scala    │\n│ CometExecIter   │\n└────────┬────────┘\n         │ produces batches\n         ▼\n┌─────────────────┐\n│ CometBatchIter  │ ◄─── JNI call from native\n│  (JVM side)     │\n└────────┬────────┘\n         │ Arrow FFI\n         │ (transfers ArrowArray/ArrowSchema pointers)\n         ▼\n┌─────────────────┐\n│    ScanExec     │\n│  (Rust/native)  │\n└────────┬────────┘\n         │\n         ▼\n┌─────────────────┐\n│   DataFusion    │\n│   operators     │\n└─────────────────┘\n```\n\n### FFI Transfer Process\n\nThe data transfer happens in `ScanExec::get_next()`:\n\n```rust\n// 1. Allocate FFI structures on native side (Rust heap)\nfor _ in 0..num_cols {\n    let arrow_array = Rc::new(FFI_ArrowArray::empty());\n    let arrow_schema = Rc::new(FFI_ArrowSchema::empty());\n    let array_ptr = Rc::into_raw(arrow_array) as i64;\n    let schema_ptr = Rc::into_raw(arrow_schema) as i64;\n    // Store pointers...\n}\n\n// 2. Call JVM to populate FFI structures\nlet num_rows: i32 = unsafe {\n    jni_call!(env, comet_batch_iterator(iter).next(array_obj, schema_obj) -> i32)?\n};\n\n// 3. Import data from FFI structures\nfor i in 0..num_cols {\n    let array_data = ArrayData::from_spark((array_ptr, schema_ptr))?;\n    let array = make_array(array_data);\n    // ... 
process array\n}\n```\n\n### Memory Layout\n\nWhen a batch is transferred from JVM to native:\n\n```\nJVM Heap:                           Native Memory:\n┌──────────────────┐               ┌──────────────────┐\n│ ColumnarBatch    │               │ FFI_ArrowArray   │\n│ ┌──────────────┐ │               │ ┌──────────────┐ │\n│ │ ArrowBuf     │─┼──────────────>│ │ buffers[0]   │ │\n│ │ (off-heap)   │ │               │ │ (pointer)    │ │\n│ └──────────────┘ │               │ └──────────────┘ │\n└──────────────────┘               └──────────────────┘\n        │                                   │\n        │                                   │\nOff-heap Memory:                            │\n┌──────────────────┐ <──────────────────────┘\n│ Actual Data      │\n│ (e.g., int32[])  │\n└──────────────────┘\n```\n\n**Key Point**: The actual data buffers can be off-heap, but the `ArrowArray` and `ArrowSchema` wrapper objects are **always allocated on the JVM heap**.\n\n### Wrapper Object Lifecycle\n\nWhen arrays are created in the JVM and passed to native code, the JVM creates the array data off-heap and creates\nwrapper objects `ArrowArray` and `ArrowSchema` on-heap. These wrapper objects can consume significant memory over\ntime.\n\n```\nPer batch overhead on JVM heap:\n- ArrowArray object: ~100 bytes\n- ArrowSchema object: ~100 bytes\n- Per column: ~200 bytes\n- 100 columns × 1000 batches = ~20 MB of wrapper objects\n```\n\nWhen native code pulls batches from the JVM, the JVM wrapper objects are kept alive until the native code drops\nall references to the arrays.\n\nWhen operators such as `SortExec` fetch many batches and buffer them in native code, the number of wrapper objects\nin Java on-heap memory keeps growing until the batches are released in native code at the end of the sort operation.\n\n### Ownership Transfer\n\nThe Arrow C data interface supports ownership transfer by registering a release callback in the C struct that is\npassed over the JNI boundary; the callback is responsible for freeing the array data. For example, the `ArrowArray` struct has:\n\n```c\n// Release callback\nvoid (*release)(struct ArrowArray*);\n```\n\nComet currently does not always follow best practice around ownership transfer because there are some cases where\nComet JVM code will retain references to arrays after passing them to native code and may mutate the underlying\nbuffers. There is an `arrow_ffi_safe` flag in the protocol buffer definition of `Scan` that indicates whether\nownership is being transferred according to the Arrow C data interface specification.\n\n```protobuf\nmessage Scan {\n  repeated spark.spark_expression.DataType fields = 1;\n  // The source of the scan (e.g. file scan, broadcast exchange, shuffle, etc). This\n  // is purely for informational purposes when viewing native query plans in\n  // debug mode.\n  string source = 2;\n  // Whether native code can assume ownership of batches that it receives\n  bool arrow_ffi_safe = 3;\n}\n```\n\n#### When ownership is NOT transferred to native:\n\nIf the data originates from a scan that uses mutable buffers,\nthen ownership is not transferred to native and the JVM may re-use the underlying buffers in the future.\n\nIt is critical that the native code performs a deep copy of the arrays if the arrays are to be buffered by\noperators such as `SortExec` or `ShuffleWriterExec`, otherwise data corruption is likely to occur.
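\n\nAs a hedged illustration of such a deep copy (a sketch using arrow-rs's `MutableArrayData`; the actual Comet\ncode path differs in its details):\n\n```rust\nuse arrow::array::{make_array, Array, ArrayRef, MutableArrayData};\n\n/// Copy an imported array into buffers owned by native code so that later\n/// reuse or mutation of the JVM-side buffers cannot corrupt buffered batches.\nfn deep_copy(array: &ArrayRef) -> ArrayRef {\n    let data = array.to_data();\n    let mut copy = MutableArrayData::new(vec![&data], false, data.len());\n    // Copy the entire range [0, len) from source 0 into freshly allocated buffers\n    copy.extend(0, 0, data.len());\n    make_array(copy.freeze())\n}\n```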
\n\n#### When ownership IS transferred to native:\n\nWhen ownership is transferred, it is safe to buffer batches in native. However, JVM wrapper objects will not be\nreleased until the native batches are dropped. This can lead to OOM or GC pressure if there is not enough Java\nheap memory configured.\n\n## Native → JVM Data Flow (CometExecIterator)\n\n### Architecture\n\nWhen the JVM needs results from native execution:\n\n```\n┌─────────────────┐\n│ DataFusion Plan │\n│   (native)      │\n└────────┬────────┘\n         │ produces RecordBatch\n         ▼\n┌─────────────────┐\n│ CometExecIter   │\n│  (Rust/native)  │\n└────────┬────────┘\n         │ Arrow FFI\n         │ (transfers ArrowArray/ArrowSchema pointers)\n         ▼\n┌─────────────────┐\n│ CometExecIter   │ ◄─── JNI call from Spark\n│  (Scala side)   │\n└────────┬────────┘\n         │\n         ▼\n┌─────────────────┐\n│  Spark Actions  │\n│  (collect, etc) │\n└─────────────────┘\n```\n\n### FFI Transfer Process\n\nThe transfer happens in `CometExecIterator::getNextBatch()`:\n\n```scala\n// Scala side\ndef getNextBatch(): ColumnarBatch = {\n  val batchHandle = Native.getNextBatch(nativeHandle)\n\n  // Import from FFI structures\n  val vectors = (0 until schema.length).map { i =>\n    // Single-element arrays that native code fills with FFI struct addresses\n    val array = new Array[Long](1)\n    val schemaPtr = new Array[Long](1)\n\n    // Get FFI pointers from native\n    Native.exportVector(batchHandle, i, array, schemaPtr)\n\n    // Import into Arrow Java\n    Data.importVector(allocator, array(0), schemaPtr(0))\n  }\n\n  new ColumnarBatch(vectors.toArray, numRows)\n}\n```\n\n```rust\n// Native side (simplified; error handling omitted)\n#[no_mangle]\npub extern \"system\" fn Java_..._getNextBatch(\n    env: JNIEnv,\n    handle: jlong,\n) -> jlong {\n    let context = get_exec_context(handle);\n    // Block until the DataFusion stream yields the next batch\n    let batch = context.runtime.block_on(context.stream.next()).unwrap();\n\n    // Store batch and return handle\n    Box::into_raw(Box::new(batch)) as i64\n}\n\n#[no_mangle]\npub extern \"system\" fn Java_..._exportVector(\n    env: JNIEnv,\n    batch_handle: jlong,\n    col_idx: jint,\n    array_ptr: jlongArray,\n    schema_ptr: jlongArray,\n) {\n    let batch = get_batch(batch_handle);\n    let array = batch.column(col_idx as usize);\n\n    // Export to FFI structures\n    let (array_ffi, schema_ffi) = to_ffi(&array.to_data()).unwrap();\n\n    // Move the FFI structs to the heap and hand their addresses back to the JVM\n    let array_addr = Box::into_raw(Box::new(array_ffi)) as i64;\n    let schema_addr = Box::into_raw(Box::new(schema_ffi)) as i64;\n    env.set_long_array_region(array_ptr, 0, &[array_addr]).unwrap();\n    env.set_long_array_region(schema_ptr, 0, &[schema_addr]).unwrap();\n}\n```\n\n### Wrapper Object Lifecycle (Native → JVM)\n\n```\nTime    Native Memory              JVM Heap              Off-heap/Native\n────────────────────────────────────────────────────────────────────────\nt0      RecordBatch produced       -                     Data in native\n        in DataFusion\n\nt1      FFI_ArrowArray created     -                     Data in native\n        FFI_ArrowSchema created\n        (native heap)\n\nt2      Pointers exported to JVM   ArrowBuf created      Data in native\n                                   (wraps native ptr)\n\nt3      FFI structures kept alive  Spark processes       Data in native\n        via batch handle           ColumnarBatch         ✓ Valid\n\nt4      Batch handle released      ArrowBuf freed        Data freed\n        Release callback runs      (triggers native      (via release\n                                   release callback)     callback)\n```\n\n**Key Difference from JVM → Native**:\n\n- Native code controls lifecycle through batch handle\n- JVM creates `ArrowBuf` wrappers that point to native memory\n- Release callback ensures proper cleanup when JVM is done\n- No GC pressure issue because the native allocator manages the data
\n\n### Release Callbacks\n\nCritical for proper cleanup:\n\n```rust\n// Native release callback (simplified)\nextern \"C\" fn release_batch(array: *mut FFI_ArrowArray) {\n    if !array.is_null() {\n        unsafe {\n            // Free each data buffer; `buffers` is a raw pointer array of\n            // length `n_buffers` in the C ABI struct\n            let buffers = std::slice::from_raw_parts((*array).buffers, (*array).n_buffers as usize);\n            for &buffer in buffers {\n                drop(Box::from_raw(buffer as *mut u8));\n            }\n            // Free the array structure itself\n            drop(Box::from_raw(array));\n        }\n    }\n}\n```\n\nWhen the JVM is done with the data:\n\n```java\n// ArrowBuf.close() triggers the release callback\narrowBuf.close();  // → calls native release_batch()\n```\n\n## Memory Ownership Rules\n\n### JVM → Native\n\n| Scenario           | `arrow_ffi_safe` | Ownership   | Action Required                        |\n| ------------------ | ---------------- | ----------- | -------------------------------------- |\n| Temporary scan     | `false`          | JVM keeps   | **Must deep copy** to avoid corruption |\n| Ownership transfer | `true`           | Native owns | Copy only to unpack dictionaries       |\n\n### Native → JVM\n\n| Scenario  | Ownership                        | Action Required                                            |\n| --------- | -------------------------------- | ---------------------------------------------------------- |\n| All cases | Native allocates, JVM references | JVM must call `close()` to trigger native release callback |\n\n## Further Reading\n\n- [Arrow C Data Interface Specification](https://arrow.apache.org/docs/format/CDataInterface.html)\n- [Arrow Java FFI Implementation](https://github.com/apache/arrow/tree/main/java/c)\n- [Arrow Rust FFI Implementation](https://docs.rs/arrow/latest/arrow/ffi/)\n"
  },
  {
    "path": "docs/source/contributor-guide/iceberg-spark-tests.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Running Iceberg Spark Tests\n\nRunning Apache Iceberg's Spark tests with Comet enabled is a good way to ensure that Comet produces the same\nresults as Spark when reading Iceberg tables. To enable this, we apply diff files to the Apache Iceberg source\ncode so that Comet is loaded when we run the tests.\n\nHere is an overview of the changes that the diffs make to Iceberg:\n\n- Configure Comet as a dependency and set the correct version in `libs.versions.toml` and `build.gradle`\n- Delete upstream Comet reader classes that reference legacy Comet APIs removed in [#3739]. These classes were\n  added upstream in [apache/iceberg#15674] and depend on Comet's old Iceberg Java integration. Since Comet now\n  uses a native Iceberg scan, these classes fail to compile and must be removed.\n- Configure test base classes (`TestBase`, `ExtensionsTestBase`, `ScanTestBase`, etc.) to load the Comet Spark\n  plugin and shuffle manager\n\n[#3739]: https://github.com/apache/datafusion-comet/pull/3739\n[apache/iceberg#15674]: https://github.com/apache/iceberg/pull/15674\n\n## 1. Install Comet\n\nRun `make release` in Comet to install the Comet JAR into the local Maven repository, specifying the Spark version.\n\n```shell\nPROFILES=\"-Pspark-3.5\" make release\n```\n\n## 2. Clone Iceberg and Apply Diff\n\nClone Apache Iceberg locally and apply the diff file from Comet against the matching tag.\n\n```shell\ngit clone git@github.com:apache/iceberg.git apache-iceberg\ncd apache-iceberg\ngit checkout apache-iceberg-1.8.1\ngit apply ../datafusion-comet/dev/diffs/iceberg/1.8.1.diff\n```\n\n## 3. 
Run Iceberg Spark Tests\n\n```shell\nENABLE_COMET=true ./gradlew -DsparkVersions=3.5 -DscalaVersion=2.13 -DflinkVersions= -DkafkaVersions= \\\n  :iceberg-spark:iceberg-spark-3.5_2.13:test \\\n  -Pquick=true -x javadoc\n```\n\nThe three Gradle targets tested in CI are:\n\n| Gradle Target                                 | What It Covers                                                                                                                                                                                                                                                                                                                                                          |\n| --------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `iceberg-spark-<ver>:test`                    | Core read/write paths (Parquet, Avro, ORC, vectorized), scan operations, filtering, bloom filters, runtime filtering, deletion handling, structured streaming, DDL/DML (create/alter/drop, writes, deletes), filter and aggregate pushdown, actions (snapshot expiration, file rewriting, orphan cleanup, table migration), serialization, and data format conversions. |\n| `iceberg-spark-extensions-<ver>:test`         | SQL extensions: stored procedures (migrate, snapshot, cherrypick, rollback, rewrite-data-files, rewrite-manifests, expire-snapshots, remove-orphan-files, etc.), row-level operations (copy-on-write and merge-on-read update/delete/merge), DDL extensions (branches, tags, alter schema, partition fields), changelog tables/views, metadata tables, and views.       |\n| `iceberg-spark-runtime-<ver>:integrationTest` | A single smoke test (`SmokeTest.java`) that validates the shaded runtime JAR. The `spark-runtime` module has no main source — it packages Iceberg and all dependencies into a shaded uber-JAR. The smoke test exercises basic create, insert, merge, query, partition field, and sort order operations to confirm the shaded JAR works end-to-end.                      |\n\n## Updating Diffs\n\nTo update a diff (e.g. after modifying test configuration), apply the existing diff, make changes, then\nregenerate:\n\n```shell\ncd apache-iceberg\ngit reset --hard apache-iceberg-1.8.1 && git clean -fd\ngit apply ../datafusion-comet/dev/diffs/iceberg/1.8.1.diff\n\n# Make changes, then run spotless to fix formatting\n./gradlew spotlessApply\n\n# Stage any new or deleted files, then generate the diff\ngit add -A\ngit diff apache-iceberg-1.8.1 > ../datafusion-comet/dev/diffs/iceberg/1.8.1.diff\n```\n\nRepeat for each Iceberg version (1.8.1, 1.9.1, 1.10.0). The file contents differ between versions, so each\ndiff must be generated against its own tag.\n\n## Running Tests in CI\n\nThe `iceberg_spark_test.yml` workflow applies these diffs and runs the three Gradle targets above against\neach Iceberg version. The test matrix covers Spark 3.4 and 3.5 across Iceberg 1.8.1, 1.9.1, and 1.10.0\nwith Java 11 and 17. The workflow runs on all pull requests and pushes to the main branch.\n"
  },
  {
    "path": "docs/source/contributor-guide/index.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Contributor Guide\n\n```{toctree}\n:maxdepth: 2\n:caption: Contributor Guide\n\nGetting Started <contributing>\nComet Plugin Overview <plugin_overview>\nArrow FFI <ffi>\nJVM Shuffle <jvm_shuffle>\nNative Shuffle <native_shuffle>\nParquet Scans <parquet_scans>\nDevelopment Guide <development>\nDebugging Guide <debugging>\nBenchmarking Guide <benchmarking>\nAdding a New Operator <adding_a_new_operator>\nAdding a New Expression <adding_a_new_expression>\nTracing <tracing>\nProfiling <profiling>\nSpark SQL Tests <spark-sql-tests.md>\nIceberg Spark Tests <iceberg-spark-tests.md>\nSQL File Tests <sql-file-tests.md>\nBug Triage <bug_triage>\nRoadmap <roadmap.md>\nRelease Process <release_process>\nGithub and Issue Tracker <https://github.com/apache/datafusion-comet>\n```\n"
  },
  {
    "path": "docs/source/contributor-guide/jvm_shuffle.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# JVM Shuffle\n\nThis document describes Comet's JVM-based columnar shuffle implementation (`CometColumnarShuffle`), which\nwrites shuffle data in Arrow IPC format using JVM code with native encoding. For the fully native\nalternative, see [Native Shuffle](native_shuffle.md).\n\n## Overview\n\nComet provides two shuffle implementations:\n\n- **CometNativeShuffle** (`CometExchange`): Fully native shuffle using Rust. Takes columnar input directly\n  from Comet native operators and performs partitioning in native code.\n- **CometColumnarShuffle** (`CometColumnarExchange`): JVM-based shuffle that operates on rows internally,\n  buffers `UnsafeRow`s in memory pages, and uses native code (via JNI) to encode them to Arrow IPC format.\n  Uses Spark's partitioner for partition assignment. Can accept either row-based or columnar input\n  (columnar input is converted to rows via `ColumnarToRowExec`).\n\nThe JVM shuffle is selected via `CometShuffleDependency.shuffleType`.\n\n## When JVM Shuffle is Used\n\nJVM shuffle (`CometColumnarExchange`) is used instead of native shuffle (`CometExchange`) in the following cases:\n\n1. **Shuffle mode is explicitly set to \"jvm\"**: When `spark.comet.exec.shuffle.mode` is set to `jvm`.\n\n2. **Child plan is not a Comet native operator**: When the child plan is a Spark row-based operator\n   (not a `CometPlan`), JVM shuffle is the only option since native shuffle requires columnar input\n   from Comet operators.\n\n3. **Unsupported partition key types**: For `HashPartitioning` and `RangePartitioning`, native shuffle\n   only supports primitive types as partition keys. Complex types (struct, array, map) cannot be used\n   as partition keys in native shuffle and will fall back to JVM columnar shuffle. Note that complex types are\n   fully supported as data columns in both implementations.\n\n## Input Handling\n\n### Spark Row-Based Input\n\nWhen the child plan is a Spark row-based operator, `CometColumnarExchange` calls `child.execute()` which\nreturns an `RDD[InternalRow]`. The rows flow directly to the JVM shuffle writers.\n\n### Comet Columnar Input\n\nWhen the child plan is a Comet native operator (e.g., `CometHashAggregate`) but JVM shuffle is selected\n(due to shuffle mode setting or unsupported partitioning), `CometColumnarExchange` still calls\n`child.execute()`. 
Comet operators implement `doExecute()` by wrapping themselves with `ColumnarToRowExec`:\n\n```scala\n// In CometExec base class\noverride def doExecute(): RDD[InternalRow] =\n  ColumnarToRowExec(this).doExecute()\n```\n\nThis means the data path becomes:\n\n```\nComet Native (columnar) → ColumnarToRowExec → rows → JVM Shuffle → Arrow IPC → columnar\n```\n\nThis is less efficient than native shuffle which avoids the columnar-to-row conversion:\n\n```\nComet Native (columnar) → Native Shuffle → Arrow IPC → columnar\n```\n\n### Why Use Spark's Partitioner?\n\nJVM shuffle uses row-based input so it can leverage Spark's existing partitioner infrastructure\n(`partitioner.getPartition(key)`). This allows Comet to support all of Spark's partitioning schemes\nwithout reimplementing them in Rust. Native shuffle, by contrast, serializes the partitioning scheme\nto protobuf and implements the partitioning logic in native code.\n\n## Architecture\n\n```\n┌─────────────────────────────────────────────────────────────────────────┐\n│                         CometShuffleManager                             │\n│  - Extends Spark's ShuffleManager                                       │\n│  - Routes to appropriate writer/reader based on ShuffleHandle type      │\n└─────────────────────────────────────────────────────────────────────────┘\n                                    │\n           ┌────────────────────────┼────────────────────────┐\n           ▼                        ▼                        ▼\n┌─────────────────────┐  ┌─────────────────────┐  ┌─────────────────────┐\n│ CometBypassMerge-   │  │ CometUnsafe-        │  │ CometNative-        │\n│ SortShuffleWriter   │  │ ShuffleWriter       │  │ ShuffleWriter       │\n│ (hash-based)        │  │ (sort-based)        │  │ (fully native)      │\n└─────────────────────┘  └─────────────────────┘  └─────────────────────┘\n           │                        │\n           ▼                        ▼\n┌─────────────────────┐  ┌─────────────────────┐\n│ CometDiskBlock-     │  │ CometShuffleExternal│\n│ Writer              │  │ Sorter              │\n└─────────────────────┘  └─────────────────────┘\n           │                        │\n           └────────────┬───────────┘\n                        ▼\n              ┌─────────────────────┐\n              │ SpillWriter         │\n              │ (native encoding    │\n              │  via JNI)           │\n              └─────────────────────┘\n```\n\n## Key Classes\n\n### Shuffle Manager\n\n| Class                    | Location                                   | Description                                                                                                                                      |\n| ------------------------ | ------------------------------------------ | ------------------------------------------------------------------------------------------------------------------------------------------------ |\n| `CometShuffleManager`    | `.../shuffle/CometShuffleManager.scala`    | Entry point. Extends Spark's `ShuffleManager`. Selects writer/reader based on handle type. Delegates non-Comet shuffles to `SortShuffleManager`. |\n| `CometShuffleDependency` | `.../shuffle/CometShuffleDependency.scala` | Extends `ShuffleDependency`. Contains `shuffleType` (`CometColumnarShuffle` or `CometNativeShuffle`) and schema info.                            
|\n\n### Shuffle Handles\n\n| Handle                              | Writer Strategy                                           |\n| ----------------------------------- | --------------------------------------------------------- |\n| `CometBypassMergeSortShuffleHandle` | Hash-based: one file per partition, merged at end         |\n| `CometSerializedShuffleHandle`      | Sort-based: records sorted by partition ID, single output |\n| `CometNativeShuffleHandle`          | Fully native shuffle                                      |\n\nSelection logic in `CometShuffleManager.shouldBypassMergeSort()`:\n\n- Uses bypass if partitions < threshold AND partitions × cores ≤ max threads\n- Otherwise uses sort-based to avoid OOM from many concurrent writers\n\n### Writers\n\n| Class                               | Location                                             | Description                                                                                                         |\n| ----------------------------------- | ---------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------- |\n| `CometBypassMergeSortShuffleWriter` | `.../shuffle/CometBypassMergeSortShuffleWriter.java` | Hash-based writer. Creates one `CometDiskBlockWriter` per partition. Supports async writes.                         |\n| `CometUnsafeShuffleWriter`          | `.../shuffle/CometUnsafeShuffleWriter.java`          | Sort-based writer. Uses `CometShuffleExternalSorter` to buffer and sort records, then merges spill files.           |\n| `CometDiskBlockWriter`              | `.../shuffle/CometDiskBlockWriter.java`              | Buffers rows in memory pages for a single partition. Spills to disk via native encoding. Used by bypass writer.     |\n| `CometShuffleExternalSorter`        | `.../shuffle/sort/CometShuffleExternalSorter.java`   | Buffers records across all partitions, sorts by partition ID, spills sorted data. Used by unsafe writer.            |\n| `SpillWriter`                       | `.../shuffle/SpillWriter.java`                       | Base class for spill logic. Manages memory pages and calls `Native.writeSortedFileNative()` for Arrow IPC encoding. |\n\n### Reader\n\n| Class                          | Location                                         | Description                                                                                        |\n| ------------------------------ | ------------------------------------------------ | -------------------------------------------------------------------------------------------------- |\n| `CometBlockStoreShuffleReader` | `.../shuffle/CometBlockStoreShuffleReader.scala` | Fetches shuffle blocks via `ShuffleBlockFetcherIterator`. Decodes Arrow IPC to `ColumnarBatch`.    |\n| `NativeBatchDecoderIterator`   | `.../shuffle/NativeBatchDecoderIterator.scala`   | Reads compressed Arrow IPC batches from input stream. Calls `Native.decodeShuffleBlock()` via JNI. |\n\n## Data Flow\n\n### Write Path\n\n1. `ShuffleWriteProcessor` calls `CometShuffleManager.getWriter()`\n2. Writer receives `Iterator[Product2[K, V]]` where V is `UnsafeRow`\n3. Rows are serialized and buffered in off-heap memory pages\n4. When memory threshold or batch size is reached, `SpillWriter.doSpilling()` is called\n5. Native code (`Native.writeSortedFileNative()`) converts rows to Arrow arrays and writes IPC format\n6. For bypass writer: partition files are concatenated into final output\n7. 
For sort writer: spill files are merged\n\n### Read Path\n\n1. `CometBlockStoreShuffleReader.read()` creates `ShuffleBlockFetcherIterator`\n2. For each block, `NativeBatchDecoderIterator` reads the IPC stream\n3. Native code (`Native.decodeShuffleBlock()`) decompresses and decodes to Arrow arrays\n4. Arrow FFI imports arrays as `ColumnarBatch`\n\n## Memory Management\n\n- `CometShuffleMemoryAllocator`: Custom allocator for off-heap memory pages\n- Memory is allocated in pages; when allocation fails, writers spill to disk\n- `CometDiskBlockWriter` coordinates spilling across all partition writers (largest first)\n- Async spilling is supported via `ShuffleThreadPool`\n\n## Configuration\n\n| Config                                          | Description                         |\n| ----------------------------------------------- | ----------------------------------- |\n| `spark.comet.columnar.shuffle.async.enabled`    | Enable async spill writes           |\n| `spark.comet.columnar.shuffle.async.thread.num` | Threads per writer for async        |\n| `spark.comet.columnar.shuffle.batch.size`       | Rows per Arrow batch                |\n| `spark.comet.columnar.shuffle.spill.threshold`  | Row count threshold for spill       |\n| `spark.comet.exec.shuffle.compression.codec`    | Compression codec (zstd, lz4, etc.) |\n"
  },
  {
    "path": "docs/source/contributor-guide/native_shuffle.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Native Shuffle\n\nThis document describes Comet's native shuffle implementation (`CometNativeShuffle`), which performs\nshuffle operations entirely in Rust code for maximum performance. For the JVM-based alternative,\nsee [JVM Shuffle](jvm_shuffle.md).\n\n## Overview\n\nNative shuffle takes columnar input directly from Comet native operators and performs partitioning,\nencoding, and writing in native Rust code. This avoids the columnar-to-row-to-columnar conversion\noverhead that JVM shuffle incurs.\n\n```\nComet Native (columnar) → Native Shuffle → Arrow IPC → columnar\n```\n\nCompare this to JVM shuffle's data path:\n\n```\nComet Native (columnar) → ColumnarToRowExec → rows → JVM Shuffle → Arrow IPC → columnar\n```\n\n## When Native Shuffle is Used\n\nNative shuffle (`CometExchange`) is selected when all of the following conditions are met:\n\n1. **Shuffle mode allows native**: `spark.comet.exec.shuffle.mode` is `native` or `auto`.\n\n2. **Child plan is a Comet native operator**: The child must be a `CometPlan` that produces\n   columnar output. Row-based Spark operators require JVM shuffle.\n\n3. **Supported partitioning type**: Native shuffle supports:\n   - `HashPartitioning`\n   - `RangePartitioning`\n   - `SinglePartition`\n   - `RoundRobinPartitioning`\n\n4. **Supported partition key types**: For `HashPartitioning` and `RangePartitioning`, partition\n   keys must be primitive types. Complex types (struct, array, map) as partition keys require\n   JVM shuffle. 
Note that complex types are fully supported as data columns in native shuffle.\n\n## Architecture\n\n```\n┌─────────────────────────────────────────────────────────────────────────────┐\n│                           CometShuffleManager                                │\n│  - Routes to CometNativeShuffleWriter for CometNativeShuffleHandle           │\n└─────────────────────────────────────────────────────────────────────────────┘\n                                      │\n                                      ▼\n┌─────────────────────────────────────────────────────────────────────────────┐\n│                         CometNativeShuffleWriter                             │\n│  - Constructs protobuf operator plan                                         │\n│  - Invokes native execution via CometExec.getCometIterator()                 │\n└─────────────────────────────────────────────────────────────────────────────┘\n                                      │\n                                      ▼ (JNI)\n┌─────────────────────────────────────────────────────────────────────────────┐\n│                         ShuffleWriterExec (Rust)                             │\n│  - DataFusion ExecutionPlan                                                  │\n│  - Orchestrates partitioning and writing                                     │\n└─────────────────────────────────────────────────────────────────────────────┘\n                    │                                     │\n                    ▼                                     ▼\n┌───────────────────────────────────┐   ┌───────────────────────────────────┐\n│ MultiPartitionShuffleRepartitioner │   │ SinglePartitionShufflePartitioner │\n│ (hash/range partitioning)          │   │ (single partition case)           │\n└───────────────────────────────────┘   └───────────────────────────────────┘\n                    │\n                    ▼\n┌───────────────────────────────────┐\n│ ShuffleBlockWriter                 │\n│ (Arrow IPC + compression)          │\n└───────────────────────────────────┘\n                    │\n                    ▼\n         ┌─────────────────┐\n         │  Data + Index   │\n         │     Files       │\n         └─────────────────┘\n```\n\n## Key Classes\n\n### Scala Side\n\n| Class                          | Location                                         | Description                                                                                   |\n| ------------------------------ | ------------------------------------------------ | --------------------------------------------------------------------------------------------- |\n| `CometShuffleExchangeExec`     | `.../shuffle/CometShuffleExchangeExec.scala`     | Physical plan node. Validates types and partitioning, creates `CometShuffleDependency`.       |\n| `CometNativeShuffleWriter`     | `.../shuffle/CometNativeShuffleWriter.scala`     | Implements `ShuffleWriter`. Builds protobuf plan and invokes native execution.                |\n| `CometShuffleDependency`       | `.../shuffle/CometShuffleDependency.scala`       | Extends `ShuffleDependency`. Holds shuffle type, schema, and range partition bounds.          |\n| `CometBlockStoreShuffleReader` | `.../shuffle/CometBlockStoreShuffleReader.scala` | Reads shuffle blocks via `ShuffleBlockFetcherIterator`. Decodes Arrow IPC to `ColumnarBatch`. |\n| `NativeBatchDecoderIterator`   | `.../shuffle/NativeBatchDecoderIterator.scala`   | Reads compressed Arrow IPC from input stream. Calls native decode via JNI.                    
|\n\n### Rust Side\n\n| File                    | Location                             | Description                                                                          |\n| ----------------------- | ------------------------------------ | ------------------------------------------------------------------------------------ |\n| `shuffle_writer.rs`     | `native/core/src/execution/shuffle/` | `ShuffleWriterExec` plan and partitioners. Main shuffle logic.                       |\n| `codec.rs`              | `native/core/src/execution/shuffle/` | `ShuffleBlockWriter` for Arrow IPC encoding with compression. Also handles decoding. |\n| `comet_partitioning.rs` | `native/core/src/execution/shuffle/` | `CometPartitioning` enum defining partition schemes (Hash, Range, Single).           |\n\n## Data Flow\n\n### Write Path\n\n1. **Plan construction**: `CometNativeShuffleWriter` builds a protobuf operator plan containing:\n   - A scan operator reading from the input iterator\n   - A `ShuffleWriter` operator with partitioning config and compression codec\n\n2. **Native execution**: `CometExec.getCometIterator()` executes the plan in Rust.\n\n3. **Partitioning**: `ShuffleWriterExec` receives batches and routes to the appropriate partitioner:\n   - `MultiPartitionShuffleRepartitioner`: For hash/range/round-robin partitioning\n   - `SinglePartitionShufflePartitioner`: For single partition (simpler path)\n\n4. **Buffering and spilling**: The partitioner buffers rows per partition. When memory pressure\n   exceeds the threshold, partitions spill to temporary files.\n\n5. **Encoding**: `ShuffleBlockWriter` encodes each partition's data as compressed Arrow IPC:\n   - Writes compression type header\n   - Writes field count header\n   - Writes compressed IPC stream\n\n6. **Output files**: Two files are produced:\n   - **Data file**: Concatenated partition data\n   - **Index file**: Array of 8-byte little-endian offsets marking partition boundaries\n\n7. **Commit**: Back in the JVM, `CometNativeShuffleWriter` reads the index file to get partition\n   lengths and commits via Spark's `IndexShuffleBlockResolver`.\n\n### Read Path\n\n1. `CometBlockStoreShuffleReader` fetches shuffle blocks via `ShuffleBlockFetcherIterator`.\n\n2. For each block, `NativeBatchDecoderIterator`:\n   - Reads the 8-byte compressed length header\n   - Reads the 8-byte field count header\n   - Reads the compressed IPC data\n   - Calls `Native.decodeShuffleBlock()` via JNI\n\n3. Native code decompresses and deserializes the Arrow IPC stream.\n\n4. Arrow FFI transfers the `RecordBatch` to the JVM as a `ColumnarBatch`.\n\n## Partitioning\n\n### Hash Partitioning\n\nNative shuffle implements Spark-compatible hash partitioning:\n\n- Uses Murmur3 hash function with seed 42 (matching Spark)\n- Computes hash of partition key columns\n- Applies modulo by partition count: `partition_id = hash % num_partitions` (see the sketch below)
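\n\nSpark computes the partition id with `Pmod`, so the result stays non-negative even though the Murmur3 hash is a\nsigned 32-bit value. A minimal sketch of that mapping (the hash computation itself is omitted):\n\n```rust\n/// Map a signed 32-bit Murmur3 hash (seed 42) to a partition index using a\n/// non-negative modulo, as Spark's `Pmod` does.\nfn partition_id(hash: i32, num_partitions: i32) -> i32 {\n    ((hash % num_partitions) + num_partitions) % num_partitions\n}\n\nfn main() {\n    // Negative hashes still map into [0, num_partitions)\n    assert_eq!(partition_id(-7, 4), 1);\n    assert_eq!(partition_id(7, 4), 3);\n}\n```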
Uses `SinglePartitionShufflePartitioner` which\nsimply concatenates batches to reach the configured batch size.\n\n### Round Robin Partitioning\n\nComet implements round robin partitioning using hash-based assignment for determinism:\n\n1. Computes a Murmur3 hash of columns (using seed 42)\n2. Assigns partitions directly using the hash: `partition_id = hash % num_partitions`\n\nThis approach guarantees determinism across retries, which is critical for fault tolerance.\nHowever, unlike true round robin which cycles through partitions row-by-row, hash-based\nassignment only provides even distribution when the data has sufficient variation in the\nhashed columns. Data with low cardinality or identical values may result in skewed partition\nsizes.\n\n## Memory Management\n\nNative shuffle uses DataFusion's memory management with spilling support:\n\n- **Memory pool**: Tracks memory usage across the shuffle operation.\n- **Spill threshold**: When buffered data exceeds the threshold, partitions spill to disk.\n- **Per-partition spilling**: Each partition has its own spill file. Multiple spills for a\n  partition are concatenated when writing the final output.\n- **Scratch space**: Reusable buffers for partition ID computation to reduce allocations.\n\nThe `MultiPartitionShuffleRepartitioner` manages:\n\n- `PartitionBuffer`: In-memory buffer for each partition\n- `SpillFile`: Temporary file for spilled data\n- Memory tracking via `MemoryConsumer` trait\n\n## Compression\n\nNative shuffle supports multiple compression codecs configured via\n`spark.comet.exec.shuffle.compression.codec`:\n\n| Codec    | Description                                            |\n| -------- | ------------------------------------------------------ |\n| `zstd`   | Zstandard compression. Best ratio, configurable level. |\n| `lz4`    | LZ4 compression. Fast with good ratio.                 |\n| `snappy` | Snappy compression. Fastest, lower ratio.              |\n| `none`   | No compression.                                        |\n\nThe compression codec is applied uniformly to all partitions. 
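\n\nFor example, the codec and level could be set at submission time like any other Spark configuration (a sketch in the style of the spark-shell examples used elsewhere in these docs; flags abbreviated and values illustrative):\n\n```shell\n$SPARK_HOME/bin/spark-shell \\\n...\n--conf spark.comet.exec.shuffle.compression.codec=zstd \\\n--conf spark.comet.exec.shuffle.compression.zstd.level=3\n...\n```\n\n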
Each partition's data is\nindependently compressed, allowing parallel decompression during reads.\n\n## Configuration\n\n| Config                                            | Default | Description                              |\n| ------------------------------------------------- | ------- | ---------------------------------------- |\n| `spark.comet.exec.shuffle.enabled`                | `true`  | Enable Comet shuffle                     |\n| `spark.comet.exec.shuffle.mode`                   | `auto`  | Shuffle mode: `native`, `jvm`, or `auto` |\n| `spark.comet.exec.shuffle.compression.codec`      | `zstd`  | Compression codec                        |\n| `spark.comet.exec.shuffle.compression.zstd.level` | `1`     | Zstd compression level                   |\n| `spark.comet.shuffle.write.buffer.size`           | `1MB`   | Write buffer size                        |\n| `spark.comet.columnar.shuffle.batch.size`         | `8192`  | Target rows per batch                    |\n\n## Comparison with JVM Shuffle\n\n| Aspect              | Native Shuffle                         | JVM Shuffle                       |\n| ------------------- | -------------------------------------- | --------------------------------- |\n| Input format        | Columnar (direct from Comet operators) | Row-based (via ColumnarToRowExec) |\n| Partitioning logic  | Rust implementation                    | Spark's partitioner               |\n| Supported schemes   | Hash, Range, Single, RoundRobin        | Hash, Range, Single, RoundRobin   |\n| Partition key types | Primitives only (Hash, Range)          | Any type                          |\n| Performance         | Higher (no format conversion)          | Lower (columnar→row→columnar)     |\n| Writer variants     | Single path                            | Bypass (hash) and sort-based      |\n\nSee [JVM Shuffle](jvm_shuffle.md) for details on the JVM-based implementation.\n"
  },
  {
    "path": "docs/source/contributor-guide/parquet_scans.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Parquet Scan Implementations\n\nComet currently has two distinct implementations of the Parquet scan operator.\n\n| Scan Implementation     | Notes                  |\n| ----------------------- | ---------------------- |\n| `native_datafusion`     | Fully native scan      |\n| `native_iceberg_compat` | Hybrid JVM/native scan |\n\nThe configuration property\n`spark.comet.scan.impl` is used to select an implementation. The default setting is `spark.comet.scan.impl=auto`, which\nattempts to use `native_datafusion` first, and falls back to Spark if the scan cannot be converted\n(e.g., due to unsupported features). Most users should not need to change this setting.\nHowever, it is possible to force Comet to use a particular implementation for all scan operations by setting\nthis configuration property to one of the following implementations. For example: `--conf spark.comet.scan.impl=native_datafusion`.\n\nThe following features are not supported by either scan implementation, and Comet will fall back to Spark in these scenarios:\n\n- `ShortType` columns, by default. When reading Parquet files written by systems other than Spark that contain\n  columns with the logical type `UINT_8` (unsigned 8-bit integers), Comet may produce different results than Spark.\n  Spark maps `UINT_8` to `ShortType`, but Comet's Arrow-based readers respect the unsigned type and read the data as\n  unsigned rather than signed. Since Comet cannot distinguish `ShortType` columns that came from `UINT_8` versus\n  signed `INT16`, by default Comet falls back to Spark when scanning Parquet files containing `ShortType` columns.\n  This behavior can be disabled by setting `spark.comet.scan.unsignedSmallIntSafetyCheck=false`. Note that `ByteType`\n  columns are always safe because they can only come from signed `INT8`, where truncation preserves the signed value.\n- Default values that are nested types (e.g., maps, arrays, structs). Literal default values are supported.\n- Spark's Datasource V2 API. When `spark.sql.sources.useV1SourceList` does not include `parquet`, Spark uses the\n  V2 API for Parquet scans. The DataFusion-based implementations only support the V1 API.\n- Spark metadata columns (e.g., `_metadata.file_path`)\n- No support for Dynamic Partition Pruning (DPP)\n\nThe following shared limitation may produce incorrect results without falling back to Spark:\n\n- No support for datetime rebasing. When reading Parquet files containing dates or timestamps written before\n  Spark 3.0 (which used a hybrid Julian/Gregorian calendar), dates/timestamps will be read as if they were\n  written using the Proleptic Gregorian calendar. 
This may produce incorrect results for dates before\n  October 15, 1582.\n\nThe `native_datafusion` scan has some additional limitations, mostly related to Parquet metadata. All of these\ncause Comet to fall back to Spark (including when using `auto` mode). Note that the `native_datafusion` scan\nrequires `spark.comet.exec.enabled=true` because the scan node must be wrapped by `CometExecRule`.\n\n- No support for row indexes\n- No support for reading Parquet field IDs\n- No support for `input_file_name()`, `input_file_block_start()`, or `input_file_block_length()` SQL functions.\n  The `native_datafusion` scan does not use Spark's `FileScanRDD`, so these functions cannot populate their values.\n- No support for `ignoreMissingFiles` or `ignoreCorruptFiles` being set to `true`\n- Duplicate field names in case-insensitive mode (e.g., a Parquet file with both `B` and `b` columns)\n  are detected at read time and raise a `SparkRuntimeException` with error class `_LEGACY_ERROR_TEMP_2093`,\n  matching Spark's behavior.\n\nThe `native_iceberg_compat` scan has the following additional limitation that may produce incorrect results\nwithout falling back to Spark:\n\n- Some Spark configuration values are hard-coded to their defaults rather than respecting user-specified values.\n  This may produce incorrect results when non-default values are set. The affected configurations are\n  `spark.sql.parquet.binaryAsString`, `spark.sql.parquet.int96AsTimestamp`, `spark.sql.caseSensitive`,\n  `spark.sql.parquet.inferTimestampNTZ.enabled`, and `spark.sql.legacy.parquet.nanosAsLong`. See\n  [issue #1816](https://github.com/apache/datafusion-comet/issues/1816) for more details.\n\n## S3 Support\n\nThe `native_datafusion` and `native_iceberg_compat` Parquet scan implementations completely offload data loading\nto native code. They use the [`object_store` crate](https://crates.io/crates/object_store) to read data from S3 and\nsupport configuring S3 access using standard [Hadoop S3A configurations](https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/index.html#General_S3A_Client_configuration) by translating them to\nthe `object_store` crate's format.\n\nThis implementation maintains compatibility with existing Hadoop S3A configurations, so existing code will\ncontinue to work as long as the configurations are supported and can be translated without loss of functionality.\n\n#### Additional S3 Configuration Options\n\nBeyond credential providers, the `native_datafusion` and `native_iceberg_compat` implementations support additional\nS3 configuration options:\n\n| Option                          | Description                                                                                        |\n| ------------------------------- | -------------------------------------------------------------------------------------------------- |\n| `fs.s3a.endpoint`               | The endpoint of the S3 service                                                                     |\n| `fs.s3a.endpoint.region`        | The AWS region for the S3 service. If not specified, the region will be auto-detected.             
|\n| `fs.s3a.path.style.access`      | Whether to use path style access for the S3 service (true/false, defaults to virtual hosted style) |\n| `fs.s3a.requester.pays.enabled` | Whether to enable requester pays for S3 requests (true/false)                                      |\n\nAll configuration options support bucket-specific overrides using the pattern `fs.s3a.bucket.{bucket-name}.{option}`.\n\n#### Examples\n\nThe following examples demonstrate how to configure S3 access with the `native_datafusion` and `native_iceberg_compat`\nParquet scan implementations using different authentication methods.\n\n**Example 1: Simple Credentials**\n\nThis example shows how to access a private S3 bucket using an access key and secret key. The `fs.s3a.aws.credentials.provider` configuration can be omitted since `org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider` is included in Hadoop S3A's default credential provider chain.\n\n```shell\n$SPARK_HOME/bin/spark-shell \\\n...\n--conf spark.comet.scan.impl=native_datafusion \\\n--conf spark.hadoop.fs.s3a.access.key=my-access-key \\\n--conf spark.hadoop.fs.s3a.secret.key=my-secret-key\n...\n```\n\n**Example 2: Assume Role with Web Identity Token**\n\nThis example demonstrates using an assumed role credential to access a private S3 bucket, where the base credential for assuming the role is provided by a web identity token credentials provider.\n\n```shell\n$SPARK_HOME/bin/spark-shell \\\n...\n--conf spark.comet.scan.impl=native_datafusion \\\n--conf spark.hadoop.fs.s3a.aws.credentials.provider=org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider \\\n--conf spark.hadoop.fs.s3a.assumed.role.arn=arn:aws:iam::123456789012:role/my-role \\\n--conf spark.hadoop.fs.s3a.assumed.role.session.name=my-session \\\n--conf spark.hadoop.fs.s3a.assumed.role.credentials.provider=com.amazonaws.auth.WebIdentityTokenCredentialsProvider\n...\n```\n\n#### Limitations\n\nThe S3 support of `native_datafusion` and `native_iceberg_compat` has the following limitations:\n\n1. **Partial Hadoop S3A configuration support**: Not all Hadoop S3A configurations are currently supported. Only the configurations listed in the tables above are translated and applied to the underlying `object_store` crate.\n\n2. **Custom credential providers**: Custom implementations of AWS credential providers are not supported. The implementation only supports the standard credential providers listed in the table above. We are planning to add support for custom credential providers through a JNI-based adapter that will allow calling Java credential providers from native code. See [issue #1829](https://github.com/apache/datafusion-comet/issues/1829) for more details.\n"
  },
  {
    "path": "docs/source/contributor-guide/plugin_overview.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Plugin Architecture\n\n## Overview\n\nThe Comet plugin enhances Spark SQL by introducing optimized query execution and shuffle mechanisms leveraging\nnative code. It integrates with Spark's plugin framework and extension API to replace or extend Spark's\ndefault behavior.\n\n---\n\n# Plugin Components\n\n## Comet SQL Plugin\n\nThe entry point to Comet is the org.apache.spark.CometPlugin class, which is registered in Spark using the following\nconfiguration:\n\n```\n--conf spark.plugins=org.apache.spark.CometPlugin\n```\n\nThe plugin is loaded on the Spark driver and does not provide executor-side plugins.\n\nThe plugin will update the current `SparkConf` with the extra configuration provided by Comet, such as executor memory\nconfiguration.\n\nThe plugin also registers `CometSparkSessionExtensions` with Spark's extension API.\n\n## CometSparkSessionExtensions\n\nOn initialization, this class registers two physical plan optimization rules with Spark: `CometScanRule`\nand `CometExecRule`. These rules run whenever a query stage is being planned during Adaptive Query Execution, and\nrun once for the entire plan when Adaptive Query Execution is disabled.\n\n### CometScanRule\n\n`CometScanRule` replaces any Parquet scans with Comet operators. There are different paths for Spark v1 and v2 data sources.\n\nWhen reading from Parquet v1 data sources, Comet replaces `FileSourceScanExec` with a `CometScanExec`, and for v2\ndata sources, `BatchScanExec` is replaced with `CometBatchScanExec`. In both cases, Comet replaces Spark's Parquet\nreader with a custom vectorized Parquet reader. This is similar to Spark's vectorized Parquet reader used by the v2\nParquet data source but leverages native code for decoding Parquet row groups directly into Arrow format.\n\nComet only supports a subset of data types and will fall back to Spark's scan if unsupported types\nexist. Comet can still accelerate the rest of the query execution in this case because `CometSparkToColumnarExec` will\nconvert the output from Spark's scan to Arrow arrays. 
Note that both `spark.comet.exec.enabled=true` and\n`spark.comet.convert.parquet.enabled=true` must be set to enable this conversion.\n\nRefer to the [Supported Spark Data Types](https://datafusion.apache.org/comet/user-guide/datatypes.html) section\nin the user guide to see a list of currently supported data types.\n\n### CometExecRule\n\nThis rule traverses bottom-up from the original Spark plan and attempts to replace each operator with a Comet equivalent.\nFor example, a `ProjectExec` will be replaced by `CometProjectExec`.\n\nWhen replacing a node, various checks are performed to determine if Comet can support the operator and its expressions.\nIf an operator, expression, or data type is not supported by Comet, then the reason will be stored in a tag on the\nunderlying Spark node and the plan will not be converted.\n\nComet does not support partially replacing subsets of the plan within a query stage because this would involve adding\ntransitions to convert between row-based and columnar data between Spark operators and Comet operators, and the overhead\nof this could outweigh the benefits of running parts of the query stage natively in Comet.\n\n## Query Execution\n\nOnce the plan has been transformed, any consecutive native Comet operators are combined into a `CometNativeExec`, which contains\na protocol buffer serialized version of the plan (the serialization code can be found in `QueryPlanSerde`).\n\nSpark serializes the physical plan and sends it to the executors when executing tasks. The executors deserialize the\nplan and invoke it.\n\nWhen `CometNativeExec` is invoked, it will pass the serialized protobuf plan into\n`Native.createPlan`, which invokes the native code via JNI, where the plan is then deserialized.\n\nIn the native code there is a `PhysicalPlanner` struct (in `planner.rs`) which converts the deserialized plan into an\nApache DataFusion `ExecutionPlan`. In some cases, Comet provides specialized physical operators and expressions to\noverride the DataFusion versions to ensure compatibility with Apache Spark.\n\nThe leaf nodes in the physical plan are always `ScanExec`, and each of these operators will make a JNI call to\n`CometBatchIterator.next()` to fetch the next input batch. The input could be a Comet native Parquet scan,\na Spark exchange, or another native plan.\n\n`CometNativeExec` creates a `CometExecIterator` and applies this iterator to the input RDD\npartitions. Each call to `CometExecIterator.next()` will invoke `Native.executePlan`. Once the plan finishes\nexecuting, the resulting Arrow batches are imported into the JVM using Arrow FFI.\n\n## Shuffle\n\nComet integrates with Spark's shuffle mechanism, optimizing both shuffle writes and reads. Comet's shuffle manager\nmust be registered with Spark using the following configuration:\n\n```\n--conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\n```\n\n### Shuffle Writes\n\nFor shuffle writes, a `ShuffleMapTask` runs in the executors. This task contains a `ShuffleDependency` that is\nbroadcast to all of the executors. 
The task then passes the input RDD to `ShuffleWriteProcessor.write()`, which\nrequests a `ShuffleWriter` from the shuffle manager, and this is where it obtains a Comet shuffle writer.\n\n`ShuffleWriteProcessor` then invokes the dependency RDD, fetches rows/batches, and passes them to Comet's\nshuffle writer, which writes batches to disk in Arrow IPC format.\n\nBecause Spark's shuffle machinery sits between producing the data and writing it, we cannot avoid having one native\nplan to produce the shuffle input and another native plan for writing the batches to the shuffle file.\n\n### Shuffle Reads\n\nFor shuffle reads, a `ShuffledRDD` requests a `ShuffleReader` from the shuffle manager. Comet provides a\n`CometBlockStoreShuffleReader`, implemented in the JVM, which fetches blocks from Spark and then creates an\n`ArrowReaderIterator` to process the blocks using Arrow's `StreamReader` for decoding IPC batches.\n"
  },
  {
    "path": "docs/source/contributor-guide/profiling.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Profiling\n\nThis guide covers profiling tools and techniques for Comet development. Because Comet\nspans JVM (Spark) and native (Rust) code, choosing the right profiler depends on what\nyou are investigating.\n\n## Choosing a Profiling Tool\n\n| Tool                                                                           | JVM Frames | Native (Rust) Frames | Install Required | Best For                                                                       |\n| ------------------------------------------------------------------------------ | ---------- | -------------------- | ---------------- | ------------------------------------------------------------------------------ |\n| [async-profiler](https://github.com/async-profiler/async-profiler)             | Yes        | Yes                  | Yes              | End-to-end Comet profiling with unified JVM + native flame graphs              |\n| [Java Flight Recorder (JFR)](https://docs.oracle.com/en/java/javase/17/jfapi/) | Yes        | No                   | No (JDK 11+)     | GC pressure, allocations, thread contention, I/O — any JVM-level investigation |\n| [cargo-flamegraph](https://github.com/flamegraph-rs/flamegraph)                | No         | Yes                  | Yes              | Isolated Rust micro-benchmarks without a JVM                                   |\n\n**Recommendation:** Use **async-profiler** when profiling Spark queries with Comet enabled —\nit is the only tool that shows both JVM and native frames in a single flame graph.\nUse **JFR** when you need rich JVM event data (GC, locks, I/O) without installing anything.\nUse **cargo-flamegraph** when working on native code in isolation via `cargo bench`.\n\n## Profiling with async-profiler\n\n[async-profiler](https://github.com/async-profiler/async-profiler) captures JVM and\nnative code in the same flame graph by using Linux `perf_events` or macOS `dtrace`.\nThis makes it ideal for profiling Comet, where hot paths cross the JNI boundary between\nSpark and Rust.\n\n### Installation\n\nDownload a release from the [async-profiler releases page](https://github.com/async-profiler/async-profiler/releases):\n\n```shell\n# Linux x64\nwget https://github.com/async-profiler/async-profiler/releases/download/v3.0/async-profiler-3.0-linux-x64.tar.gz\nmkdir -p $HOME/opt/async-profiler\ntar xzf async-profiler-3.0-linux-x64.tar.gz -C $HOME/opt/async-profiler --strip-components=1\nexport ASYNC_PROFILER_HOME=$HOME/opt/async-profiler\n```\n\nOn macOS, download the appropriate `macos` archive instead.\n\n### Attaching to a running Spark application\n\nUse the `asprof` launcher to attach to a running JVM by PID:\n\n```shell\n# Start CPU profiling for 30 seconds, output an HTML flame 
graph\n$ASYNC_PROFILER_HOME/bin/asprof -d 30 -f flamegraph.html <pid>\n\n# Wall-clock profiling\n$ASYNC_PROFILER_HOME/bin/asprof -e wall -d 30 -f flamegraph.html <pid>\n\n# Start profiling (no duration limit), then stop later\n$ASYNC_PROFILER_HOME/bin/asprof start -e cpu <pid>\n# ... run your query ...\n$ASYNC_PROFILER_HOME/bin/asprof stop -f flamegraph.html <pid>\n```\n\nFind the Spark driver/executor PID with `jps` or `pgrep -f SparkSubmit`.\n\n### Passing profiler flags via spark-submit\n\nYou can also attach async-profiler as a Java agent at JVM startup:\n\n```shell\nspark-submit \\\n  --conf \"spark.driver.extraJavaOptions=-agentpath:$ASYNC_PROFILER_HOME/lib/libasyncProfiler.so=start,event=cpu,file=driver.html,tree\" \\\n  --conf \"spark.executor.extraJavaOptions=-agentpath:$ASYNC_PROFILER_HOME/lib/libasyncProfiler.so=start,event=cpu,file=executor.html,tree\" \\\n  ...\n```\n\nNote: If the executor is distributed then `executor.html` will be written on the remote node.\n\n### Choosing an event type\n\n| Event   | When to use                                                                                               |\n| ------- | --------------------------------------------------------------------------------------------------------- |\n| `cpu`   | Default. Shows where CPU cycles are spent. Use for compute-bound queries.                                 |\n| `wall`  | Wall-clock time including blocked/waiting threads. Use to find JNI boundary overhead and I/O stalls.      |\n| `alloc` | Heap allocation profiling. Use to find JVM allocation hotspots around Arrow FFI and columnar conversions. |\n| `lock`  | Lock contention. Use when threads appear to spend time waiting on synchronized blocks or locks.           |\n\n### Output formats\n\n| Format           | Flag               | Description                                        |\n| ---------------- | ------------------ | -------------------------------------------------- |\n| HTML flame graph | `-f out.html`      | Interactive flame graph (default and most useful). |\n| JFR              | `-f out.jfr`       | Viewable in JDK Mission Control or IntelliJ.       |\n| Collapsed stacks | `-f out.collapsed` | For use with Brendan Gregg's FlameGraph scripts.   |\n| Text summary     | `-o text`          | Flat list of hot methods.                          |\n\n### Platform notes\n\n**Linux:** Set `perf_event_paranoid` to allow profiling:\n\n```shell\nsudo sysctl kernel.perf_event_paranoid=1   # or 0 / -1 for full access\nsudo sysctl kernel.kptr_restrict=0          # optional: enable kernel symbols\n```\n\n**macOS:** async-profiler uses `dtrace` on macOS, which requires running as root or\nwith SIP (System Integrity Protection) adjustments. Native Rust frames may not be fully\nresolved on macOS; Linux is recommended for the most complete flame graphs.\n\n### Integrated benchmark profiling\n\nThe TPC benchmark scripts in `benchmarks/tpc/` have built-in async-profiler support via\nthe `--async-profiler` flag. See [benchmarks/tpc/README.md](https://github.com/apache/datafusion-comet/blob/main/benchmarks/tpc/README.md)\nfor details.\n\n## Profiling with Java Flight Recorder\n\n[Java Flight Recorder (JFR)](https://docs.oracle.com/en/java/javase/17/jfapi/) is built\ninto JDK 11+ and collects detailed JVM runtime data with very low overhead. 
It does not\nsee native Rust frames, but is excellent for diagnosing GC pressure, thread contention,\nI/O latency, and JVM-level allocation patterns.\n\n### Adding JFR flags to spark-submit\n\n```shell\nspark-submit \\\n  --conf \"spark.driver.extraJavaOptions=-XX:StartFlightRecording=duration=120s,filename=driver.jfr\" \\\n  --conf \"spark.executor.extraJavaOptions=-XX:StartFlightRecording=duration=120s,filename=executor.jfr\" \\\n  ...\n```\n\nFor continuous recording without a fixed duration:\n\n```shell\n--conf \"spark.driver.extraJavaOptions=-XX:StartFlightRecording=disk=true,maxsize=500m,filename=driver.jfr\"\n```\n\nYou can also start and stop recording dynamically using `jcmd`:\n\n```shell\njcmd <pid> JFR.start name=profile\n# ... run your query ...\njcmd <pid> JFR.stop name=profile filename=recording.jfr\n```\n\n### Viewing recordings\n\n- **[JDK Mission Control (JMC)](https://jdk.java.net/jmc/)** — the most comprehensive viewer.\n  Shows flame graphs, GC timeline, thread activity, I/O, and allocation hot spots.\n- **IntelliJ IDEA** — open `.jfr` files directly in the built-in profiler\n  (Run → Open Profiler Snapshot).\n- **`jfr` CLI** — quick summaries from the command line: `jfr summary driver.jfr`\n\n### Useful JFR events for Comet debugging\n\n| Event                                                               | What it shows                                                               |\n| ------------------------------------------------------------------- | --------------------------------------------------------------------------- |\n| `jdk.GCPhasePause`                                                  | GC pause durations — helps identify memory pressure from Arrow allocations. |\n| `jdk.ObjectAllocationInNewTLAB` / `jdk.ObjectAllocationOutsideTLAB` | Allocation hot spots.                                                       |\n| `jdk.JavaMonitorWait` / `jdk.ThreadPark`                            | Thread contention and lock waits.                                           |\n| `jdk.FileRead` / `jdk.FileWrite` / `jdk.SocketRead`                 | I/O latency.                                                                |\n| `jdk.ExecutionSample`                                               | CPU sampling (method profiling, similar to a flame graph).                  |\n\n### Integrated benchmark profiling\n\nThe TPC benchmark scripts support `--jfr` for automatic JFR recording during benchmark\nruns. See [benchmarks/tpc/README.md](https://github.com/apache/datafusion-comet/blob/main/benchmarks/tpc/README.md) for details.\n\n## Profiling Native Code with cargo-flamegraph\n\nFor profiling Rust code in isolation — without a JVM — use `cargo bench` with\n[cargo-flamegraph](https://github.com/flamegraph-rs/flamegraph).\n\n### Running micro benchmarks with cargo bench\n\nWhen implementing a new operator or expression, it is good practice to add a new microbenchmark under `core/benches`.\n\nIt is often easiest to copy an existing benchmark and modify it for the new operator or expression. 
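\n\nFor example, a hypothetical `my_operator` benchmark could start as a copy of the existing `shuffle_writer` benchmark (paths are illustrative, relative to the Rust workspace):\n\n```shell\ncp core/benches/shuffle_writer.rs core/benches/my_operator.rs\n```\n\n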
It is also\nnecessary to add a new section to the `Cargo.toml` file, such as:\n\n```toml\n[[bench]]\nname = \"shuffle_writer\"\nharness = false\n```\n\nThese benchmarks are useful for comparing performance between releases or between feature branches and the\nmain branch to help prevent regressions in performance when adding new features or fixing bugs.\n\nIndividual benchmarks can be run by name with the following command.\n\n```shell\ncargo bench shuffle_writer\n```\n\nHere is some sample output from running this command.\n\n```\n     Running benches/shuffle_writer.rs (target/release/deps/shuffle_writer-e37b59e37879cce7)\nGnuplot not found, using plotters backend\nshuffle_writer/shuffle_writer\n                        time:   [2.0880 ms 2.0989 ms 2.1118 ms]\nFound 9 outliers among 100 measurements (9.00%)\n  3 (3.00%) high mild\n  6 (6.00%) high severe\n```\n\n### Profiling with cargo-flamegraph\n\nInstall cargo-flamegraph:\n\n```shell\ncargo install flamegraph\n```\n\nFollow the instructions in [cargo-flamegraph](https://github.com/flamegraph-rs/flamegraph) for your platform for\nrunning flamegraph.\n\nHere is a sample command for running `cargo-flamegraph` on MacOS.\n\n```shell\ncargo flamegraph --root --bench shuffle_writer\n```\n\nThis will produce output similar to the following.\n\n```\ndtrace: system integrity protection is on, some features will not be available\n\ndtrace: description 'profile-997 ' matched 1 probe\nGnuplot not found, using plotters backend\nTesting shuffle_writer/shuffle_writer\nSuccess\n\ndtrace: pid 66402 has exited\nwriting flamegraph to \"flamegraph.svg\"\n```\n\nThe generated flamegraph can now be opened in a browser that supports svg format.\n\nHere is the flamegraph for this example:\n\n![flamegraph](../_static/images/flamegraph.png)\n\n## Tips for Profiling Comet\n\n### Use wall-clock profiling to spot JNI boundary overhead\n\nWhen profiling Comet with async-profiler, `wall` mode is often more revealing than `cpu`\nbecause it captures time spent crossing the JNI boundary, waiting for native results,\nand blocked on I/O — none of which show up in CPU-only profiles.\n\n```shell\n$ASYNC_PROFILER_HOME/bin/asprof -e wall -d 60 -f wall-profile.html <pid>\n```\n\n### Use alloc profiling around Arrow FFI\n\nJVM allocation profiling can identify hotspots in the Arrow FFI path where temporary\nobjects are created during data transfer between JVM and native code:\n\n```shell\n$ASYNC_PROFILER_HOME/bin/asprof -e alloc -d 60 -f alloc-profile.html <pid>\n```\n\nLook for allocations in `CometExecIterator`, `CometBatchIterator`, and Arrow vector\nclasses.\n\n### Isolate Rust-only performance issues\n\nIf a flame graph shows the hot path is entirely within native code, switch to\n`cargo-flamegraph` to get better symbol resolution and avoid JVM noise:\n\n```shell\ncd native\ncargo flamegraph --root --bench <benchmark_name>\n```\n\n### Correlating JVM and native frames\n\nIn async-profiler flame graphs, native Rust frames appear below JNI entry points like\n`Java_org_apache_comet_Native_*`. Look for these transition points to understand how\ntime is split between Spark's JVM code and Comet's native execution.\n"
  },
  {
    "path": "docs/source/contributor-guide/release_process.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Apache DataFusion Comet: Release Process\n\nThis documentation explains the release process for Apache DataFusion Comet. Some preparation tasks can be\nperformed by any contributor, while certain release tasks can only be performed by a DataFusion Project Management\nCommittee (PMC) member.\n\n## Checklist\n\nThe following is a quick-reference checklist for the full release process. See the detailed sections below for\ninstructions on each step.\n\n- [ ] Release preparation: review expression support status and user guide\n- [ ] Create release branch\n- [ ] Generate release documentation\n- [ ] Update Maven version in release branch\n- [ ] Update version in main for next development cycle\n- [ ] Generate the change log and create PR against main\n- [ ] Cherry-pick the change log commit into the release branch\n- [ ] Build the jars\n- [ ] Tag the release candidate\n- [ ] Update documentation for the new release\n- [ ] Publish Maven artifacts to staging\n- [ ] Create the release candidate tarball\n- [ ] Start the email voting thread\n- [ ] Once the vote passes:\n  - [ ] Publish source tarball\n  - [ ] Create GitHub release\n  - [ ] Promote Maven artifacts to production\n  - [ ] Push the release tag\n  - [ ] Close the vote and announce the release\n- [ ] Post release:\n  - [ ] Register the release with Apache Reporter\n  - [ ] Delete old RCs and releases from SVN\n  - [ ] Write a blog post\n\n## Release Preparation\n\nBefore starting the release process, review the user guide to ensure it accurately reflects the current state of the\nproject:\n\n- Review the supported expressions and operators lists in the user guide. Verify that any expressions added since\n  the last release are included and that their support status is accurate.\n- Spot-check the support status of individual expressions by running tests or queries to confirm they work as\n  documented.\n- Look for any expressions that may have regressed or changed behavior since the last release and update the\n  documentation accordingly.\n\nIt is also recommended to run benchmarks (such as TPC-H and TPC-DS) comparing performance against the previous\nrelease to check for regressions. 
See the\n[Comet Benchmarking Guide](benchmarking.md) for instructions.\n\nThese are tasks where agentic coding tools can be particularly helpful — for example, scanning the codebase for\nnewly registered expressions and cross-referencing them against the documented list, or generating test queries to\nverify expression support status.\n\nAny issues found should be addressed before creating the release branch.\n\n## Creating the Release Candidate\n\nThis part of the process can be performed by any committer.\n\nHere are the steps, using the 0.13.0 release as an example.\n\n### Create Release Branch\n\nThis document assumes that GitHub remotes are set up as follows:\n\n```shell\n$ git remote -v\napache\tgit@github.com:apache/datafusion-comet.git (fetch)\napache\tgit@github.com:apache/datafusion-comet.git (push)\norigin\tgit@github.com:yourgithubid/datafusion-comet.git (fetch)\norigin\tgit@github.com:yourgithubid/datafusion-comet.git (push)\n```\n\nCreate a release branch from the latest commit in main and push to the `apache` repo:\n\n```shell\ngit fetch apache\ngit checkout main\ngit reset --hard apache/main\ngit checkout -b branch-0.13\ngit push apache branch-0.13\n```\n\n### Generate Release Documentation\n\nGenerate the documentation content for this release. The docs on `main` contain only template markers,\nso we need to generate the actual content (config tables, compatibility matrices) for the release branch:\n\n```shell\n./dev/generate-release-docs.sh\ngit add docs/source/user-guide/latest/\ngit commit -m \"Generate docs for 0.13.0 release\"\ngit push apache branch-0.13\n```\n\nThis freezes the documentation to reflect the configs and expressions available in this release.\n\n### Update Maven Version\n\nUpdate the `pom.xml` files in the release branch to change the Maven version from `0.13.0-SNAPSHOT` to `0.13.0`.\n\nThere is no need to update the Rust crate versions because they will already be `0.13.0`.\n\n### Update Version in main\n\nCreate a PR against the main branch to prepare for developing the next release:\n\n- Update the Rust crate version to `0.14.0`.\n- Update the Maven version to `0.14.0-SNAPSHOT` (both in the `pom.xml` files and also in the diff files\n  under `dev/diffs`).\n\n### Generate the Change Log\n\nGenerate a change log to cover changes between the previous release and the release branch HEAD by running\nthe provided `dev/release/generate-changelog.py`.\n\nIt is recommended that you set up a virtual Python environment and then install the dependencies:\n\n```shell\ncd dev/release\npython3 -m venv venv\nsource venv/bin/activate\npip3 install -r requirements.txt\n```\n\nTo generate the changelog, set the `GITHUB_TOKEN` environment variable to a valid token and then run the script,\nproviding two commit ids or tags followed by the version number of the release being created. The following\nexample generates a change log of all changes between the previous version and the current release branch HEAD revision.\n\n```shell\nexport GITHUB_TOKEN=<your-token-here>\npython3 generate-changelog.py 0.12.0 HEAD 0.13.0 > ../changelog/0.13.0.md\n```\n\nCreate a PR against the _main_ branch to add this change log, and once it is approved and merged, cherry-pick the\ncommit into the release branch.\n\n### Build the jars\n\n#### Build setup\n\nThe build process requires Docker. Download the latest Docker Desktop from https://www.docker.com/products/docker-desktop/.\nIf you have multiple Docker contexts running, switch to the Docker Desktop context. 
For example:\n\n```shell\n$ docker context ls\nNAME              DESCRIPTION                               DOCKER ENDPOINT                               ERROR\ndefault           Current DOCKER_HOST based configuration   unix:///var/run/docker.sock\ndesktop-linux     Docker Desktop                            unix:///Users/parth/.docker/run/docker.sock\nmy_custom_context *                                         tcp://192.168.64.2:2376\n\n$ docker context use desktop-linux\n```\n\n#### Run the build script\n\nThe `build-release-comet.sh` script will create a Docker image for each architecture and use the image\nto build the platform-specific binaries. These builder images are created every time this script is run.\nThe script optionally allows overriding the repository and branch to build the binaries from (note that\nthe local git repo is not used when building the binaries, but it is used to build the final uber jar).\n\n```shell\nUsage: build-release-comet.sh [options]\n\nThis script builds comet native binaries inside a docker image. The image is named\n\"comet-rm\" and will be generated by this script\n\nOptions are:\n\n-r [repo]   : git repo (default: https://github.com/apache/datafusion-comet.git)\n-b [branch] : git branch (default: release)\n-t [tag]    : tag for the spark-rm docker image to use for building (default: \"latest\").\n```\n\nExample:\n\n```shell\ncd dev/release && ./build-release-comet.sh && cd ../..\n```\n\n#### Build output\n\nThe build output is installed to a temporary local Maven repository. The build script will print the\nrepository location at the end. This location will be needed when deploying the artifacts to a staging\nrepository.\n\n### Tag the Release Candidate\n\nEnsure that the Maven version update and changelog cherry-pick have been pushed to the release branch before tagging.\n\nTag the release branch with `0.13.0-rc1` and push to the `apache` repo:\n\n```shell\ngit fetch apache\ngit checkout branch-0.13\ngit reset --hard apache/branch-0.13\ngit tag 0.13.0-rc1\ngit push apache 0.13.0-rc1\n```\n\nNote that pushing a release candidate tag will trigger a GitHub workflow that will build a Docker image and publish\nit to GitHub Container Registry at https://github.com/apache/datafusion-comet/pkgs/container/datafusion-comet\n\n### Publishing Documentation\n\nIn the `docs` directory:\n\n- Update `docs/source/index.rst` and add a new navigation menu link for the new release in the section `_toc.user-guide-links-versioned`\n- Add a new line to `build.sh` to delete the locally cloned `comet-*` branch for the new release, e.g. `comet-0.13`\n- Update the `main` method in `generate-versions.py`:\n\n```python\n    latest_released_version = \"0.13.0\"\n    previous_versions = [\"0.11.0\", \"0.12.0\"]\n```\n\nTest the documentation build locally, following the instructions in `docs/README.md`.\n\nOnce verified, create a PR against the main branch with these documentation changes. 
After merging, the docs will be\ndeployed to https://datafusion.apache.org/comet/ by the documentation publishing workflow.\n\nNote that the download links in the installation guide will not work until the release is finalized, but having the\ndocumentation available could be useful for anyone testing out the release candidate during the voting period.\n\n## Publishing the Release Candidate\n\nMost of this part of the process can only be performed by a PMC member.\n\n### Publish the Maven artifacts\n\n#### Set up Maven\n\n##### One-time project setup\n\nSet up your project in the ASF Nexus Repository by following the instructions at https://infra.apache.org/publishing-maven-artifacts.html.\n\n##### Release Manager Setup\n\nSet up your development environment by following the instructions at https://infra.apache.org/publishing-maven-artifacts.html.\n\n##### Build and publish a release candidate to Nexus\n\nThe script `publish-to-maven.sh` will publish the artifacts created by the `build-release-comet.sh` script.\nThe artifacts will be signed using the GPG key of the release manager and uploaded to the Maven staging repository.\n\nNote that installed GPG keys can be listed with `gpg --list-keys`. The GPG key is a 40-character hex string.\n\nNote: This script needs `xmllint` to be installed. On macOS, `xmllint` is available by default.\n\nOn Ubuntu: `apt-get install -y libxml2-utils`\n\nOn RedHat: `yum install -y xmlstarlet`\n\n```shell\n./dev/release/publish-to-maven.sh -h\nusage: publish-to-maven.sh options\n\nPublish signed artifacts to Maven.\n\nOptions\n-u ASF_USERNAME - Username of ASF committer account\n-r LOCAL_REPO - path to temporary local maven repo (created and written to by 'build-release-comet.sh')\n\nThe following will be prompted for -\nASF_PASSWORD - Password of ASF committer account\nGPG_KEY - GPG key used to sign release artifacts\nGPG_PASSPHRASE - Passphrase for GPG key\n```\n\nExample:\n\n```shell\n./dev/release/publish-to-maven.sh -u release_manager_asf_id -r /tmp/comet-staging-repo-VsYOX\nASF Password :\nGPG Key (Optional):\nGPG Passphrase :\nCreating Nexus staging repository\n...\n```\n\nIn the Nexus repository UI (https://repository.apache.org/), locate and verify the artifacts in\nstaging (https://central.sonatype.org/publish/release/#locate-and-examine-your-staging-repository).\n\nThe script closes the staging repository but does not release it. Releasing to Maven Central is a manual step\nperformed only after the vote passes (see [Publishing Maven Artifacts](#publishing-maven-artifacts) below).\n\nNote that the Maven artifacts are always published under the final release version (e.g. `0.13.0`), not the RC\nversion — the `-rc1` / `-rc2` suffix only appears in the git tag and the source tarball in SVN. Because the script\ncreates a new staging repository on each run, re-staging the same version for a subsequent RC is supported as long\nas no staging repository for that version has been released to Maven Central.\n\n### Create the Release Candidate Tarball\n\nThe `create-tarball.sh` script creates a signed source tarball and uploads it to the dev subversion repository.\n\n#### Prerequisites\n\nBefore running this script, ensure you have:\n\n1. A GPG key set up for signing, with your public key uploaded to https://pgp.mit.edu/\n2. Apache SVN credentials (you must be logged into the Apache SVN server)\n3. 
The `requests` Python package installed (`pip3 install requests`)\n\n#### Run the script\n\nRun the create-tarball script on the release candidate tag (`0.13.0-rc1`):\n\n```shell\n./dev/release/create-tarball.sh 0.13.0 1\n```\n\nThis will generate an email template for starting the vote.\n\n### Start an Email Voting Thread\n\nSend the email that is generated in the previous step to `dev@datafusion.apache.org`.\n\nThe verification procedure for voters is documented in\n[Verifying Release Candidates](https://github.com/apache/datafusion-comet/blob/main/dev/release/verifying-release-candidates.md).\nVoters can also use the `dev/release/verify-release-candidate.sh` script to assist with verification:\n\n```shell\n./dev/release/verify-release-candidate.sh 0.13.0 1\n```\n\n### If the Vote Fails\n\nIf the vote does not pass, address the issues raised, increment the release candidate number, and repeat from\nthe [Tag the Release Candidate](#tag-the-release-candidate) step. For example, the next attempt would be tagged\n`0.13.0-rc2`.\n\nBefore staging the next RC, drop the previous RC's staging repository in the\n[Nexus UI](https://repository.apache.org/#stagingRepositories) by selecting it and clicking \"Drop\". This avoids\nleaving multiple closed staging repositories for the same version and prevents accidentally releasing the wrong\none when the vote eventually passes. The Maven version (e.g. `0.13.0`) is shared across all RCs, so each run of\n`publish-to-maven.sh` creates a new staging repository for the same GAV — only one of them should ever be\nreleased to Maven Central.\n\n## Publishing Binary Releases\n\nOnce the vote passes, we can publish the source and binary releases.\n\n### Publishing Source Tarball\n\nRun the release-tarball script to move the tarball to the release subversion repository.\n\n```shell\n./dev/release/release-tarball.sh 0.13.0 1\n```\n\n### Create a release in the GitHub repository\n\nGo to https://github.com/apache/datafusion-comet/releases and create a release for the release tag, and paste the\nchangelog in the description.\n\n### Publishing Maven Artifacts\n\nPromote the Maven artifacts from staging to production by visiting https://repository.apache.org/#stagingRepositories\nand selecting the staging repository and then clicking the \"release\" button.\n\n### Push a release tag to the repo\n\nPush a release tag (`0.13.0`) to the `apache` repository.\n\n```shell\ngit fetch apache\ngit checkout 0.13.0-rc1\ngit tag 0.13.0\ngit push apache 0.13.0\n```\n\nNote that pushing a release tag will trigger a GitHub workflow that will build a Docker image and publish\nit to GitHub Container Registry at https://github.com/apache/datafusion-comet/pkgs/container/datafusion-comet\n\nReply to the vote thread to close the vote and announce the release. 
The announcement email should include:\n\n- The release version\n- A link to the release notes / changelog\n- A link to the download page or Maven coordinates\n- Thanks to everyone who contributed and voted\n\n## Post Release\n\n### Register the release\n\nRegister the release with the [Apache Reporter Service](https://reporter.apache.org/addrelease.html?datafusion) using\na version such as `COMET-0.13.0`.\n\n### Delete old RCs and Releases\n\nSee the ASF documentation on [when to archive](https://www.apache.org/legal/release-policy.html#when-to-archive)\nfor more information.\n\n#### Deleting old release candidates from `dev` svn\n\nRelease candidates should be deleted once the release is published.\n\nGet a list of DataFusion Comet release candidates:\n\n```shell\nsvn ls https://dist.apache.org/repos/dist/dev/datafusion | grep comet\n```\n\nDelete a release candidate:\n\n```shell\nsvn delete -m \"delete old DataFusion Comet RC\" https://dist.apache.org/repos/dist/dev/datafusion/apache-datafusion-comet-0.13.0-rc1/\n```\n\n#### Deleting old releases from `release` svn\n\nOnly the latest release should be available. Delete old releases after publishing the new release.\n\nGet a list of DataFusion Comet releases:\n\n```shell\nsvn ls https://dist.apache.org/repos/dist/release/datafusion | grep comet\n```\n\nDelete a release:\n\n```shell\nsvn delete -m \"delete old DataFusion Comet release\" https://dist.apache.org/repos/dist/release/datafusion/datafusion-comet-0.12.0\n```\n\n### Write a blog post\n\nWriting a blog post about the release is a great way to generate more interest in the project. We typically create a\nGoogle document where the community can collaborate on a blog post. Once the content is agreed upon, a PR can be\ncreated against the [datafusion-site](https://github.com/apache/datafusion-site) repository to add the blog post. Any\ncontributor can drive this process.\n"
  },
  {
    "path": "docs/source/contributor-guide/roadmap.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Roadmap\n\nComet is an open-source project and contributors are welcome to work on any issues at any time, but we find it\nhelpful to have a roadmap for some of the major items that require coordination between contributors.\n\n## Major Initiatives\n\n### Iceberg Integration\n\nReads of Iceberg tables with Parquet data files are fully native and enabled by default, powered by a scan operator\nbacked by Iceberg-rust ([#2528]). We anticipate major improvements in the next few releases, including bringing Iceberg table format V3 features (_e.g._,\nencryption) to the reader.\n\n[#2528]: https://github.com/apache/datafusion-comet/pull/2528\n\n### Spark 4.0 Support\n\nComet has experimental support for Spark 4.0, but there is more work to do ([#1637]), such as enabling\nmore Spark SQL tests and fully implementing ANSI support ([#313]) for all supported expressions.\n\n[#313]: https://github.com/apache/datafusion-comet/issues/313\n[#1637]: https://github.com/apache/datafusion-comet/issues/1637\n\n### Dynamic Partition Pruning\n\nIceberg table scans support Dynamic Partition Pruning (DPP) filters generated by Spark's `PlanDynamicPruningFilters`\noptimizer rule ([#3349]). However, we still need to bring this functionality to our Parquet reader. Furthermore,\nSpark's `PlanAdaptiveDynamicPruningFilters` optimizer rule runs after Comet's rules, so DPP with Adaptive Query\nExecution requires a redesign of Comet's plan translation. We are focused on implementing DPP to keep Comet competitive\nwith benchmarks that benefit from this feature like TPC-DS. This effort can be tracked at [#3510].\n\n[#3349]: https://github.com/apache/datafusion-comet/pull/3349\n[#3510]: https://github.com/apache/datafusion-comet/issues/3510\n\n## Ongoing Improvements\n\nIn addition to the major initiatives above, we have the following ongoing areas of work:\n\n- Adding support for more Spark expressions\n- Moving more expressions to the `datafusion-spark` crate in the core DataFusion repository\n- Performance tuning\n- Nested type support improvements\n"
  },
  {
    "path": "docs/source/contributor-guide/spark-sql-tests.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Running Spark SQL Tests\n\nRunning Apache Spark's SQL tests with Comet enabled is a good way to ensure that Comet produces the same\nresults as that version of Spark. To enable this, we apply some changes to the Apache Spark source code so that\nComet is enabled when we run the tests.\n\nHere is an overview of the changes that we need to make to Spark:\n\n- Update the pom.xml to add a dependency on Comet\n- Modify SparkSession to load the Comet extension\n- Modify TestHive to load Comet\n- Modify SQLTestUtilsBase to load Comet when `ENABLE_COMET` environment variable exists\n\nHere are the steps involved in running the Spark SQL tests with Comet, using Spark 3.4.3 for this example.\n\n## 1. Install Comet\n\nRun `make release` in Comet to install the Comet JAR into the local Maven repository, specifying the Spark version.\n\n```shell\nPROFILES=\"-Pspark-3.4\" make release\n```\n\n## 2. Clone Spark and Apply Diff\n\nClone Apache Spark locally and apply the diff file from Comet.\n\nNote: this is a shallow clone of a tagged Spark commit and is not suitable for general Spark development.\n\n```shell\ngit clone -b 'v3.4.3' --single-branch --depth 1 git@github.com:apache/spark.git apache-spark\ncd apache-spark\ngit apply ../datafusion-comet/dev/diffs/3.4.3.diff\n```\n\n## 3. Run Spark SQL Tests\n\n### Use the following commands to run the Spark SQL test suite locally.\n\nOptionally, enable Comet fallback logging, so that all fallback reasons are logged at `WARN` level.\n\n```shell\nexport ENABLE_COMET_LOG_FALLBACK_REASONS=true\n```\n\n```shell\nENABLE_COMET=true ENABLE_COMET_ONHEAP=true build/sbt catalyst/test\nENABLE_COMET=true ENABLE_COMET_ONHEAP=true build/sbt \"sql/testOnly * -- -l org.apache.spark.tags.ExtendedSQLTest -l org.apache.spark.tags.SlowSQLTest\"\nENABLE_COMET=true ENABLE_COMET_ONHEAP=true build/sbt \"sql/testOnly * -- -n org.apache.spark.tags.ExtendedSQLTest\"\nENABLE_COMET=true ENABLE_COMET_ONHEAP=true build/sbt \"sql/testOnly * -- -n org.apache.spark.tags.SlowSQLTest\"\nENABLE_COMET=true ENABLE_COMET_ONHEAP=true build/sbt \"hive/testOnly * -- -l org.apache.spark.tags.ExtendedHiveTest -l org.apache.spark.tags.SlowHiveTest\"\nENABLE_COMET=true ENABLE_COMET_ONHEAP=true build/sbt \"hive/testOnly * -- -n org.apache.spark.tags.ExtendedHiveTest\"\nENABLE_COMET=true ENABLE_COMET_ONHEAP=true build/sbt \"hive/testOnly * -- -n org.apache.spark.tags.SlowHiveTest\"\n```\n\n### Steps to run individual test suites through SBT\n\n1. Open SBT with Comet enabled\n\n```shell\nENABLE_COMET=true ENABLE_COMET_ONHEAP=true sbt -J-Xmx4096m -Dspark.test.includeSlowTests=true\n```\n\n2. 
Run individual tests (Below code runs test named `SPARK-35568` in the `spark-sql` module)\n\n```shell\n sql/testOnly  org.apache.spark.sql.DynamicPartitionPruningV1SuiteAEOn -- -z \"SPARK-35568\"\n```\n\n### Steps to run individual test suites in IntelliJ IDE\n\n1. Add below configuration in VM Options for your test case (apache-spark repository)\n\n```shell\n-Dspark.comet.enabled=true -Dspark.comet.debug.enabled=true -Dspark.plugins=org.apache.spark.CometPlugin -DXmx4096m -Dspark.executor.heartbeatInterval=20000 -Dspark.network.timeout=10000 --add-exports=java.base/sun.nio.ch=ALL-UNNAMED --add-opens=java.base/java.nio=ALL-UNNAMED\n```\n\n2. Set `ENABLE_COMET=true` in environment variables\n   ![img.png](img.png)\n3. After the above tests are configured, spark tests can be run with debugging enabled on spark/comet code. Note that Comet is added as a dependency and the classes are readonly while debugging from Spark. Any new changes to Comet are to be built and deployed locally through the command (`PROFILES=\"-Pspark-3.4\" make release`)\n\n## Creating a diff file for a new Spark version\n\nOnce Comet has support for a new Spark version, we need to create a diff file that can be applied to that version\nof Apache Spark to enable Comet when running tests. This is a highly manual process and the process can\nvary depending on the changes in the new version of Spark, but here is a general guide to the process.\n\nWe typically start by applying a patch from a previous version of Spark. For example, when enabling the tests\nfor Spark version 3.5.6 we may start by applying the existing diff for 3.5.5 first.\n\n```shell\ncd git/apache/spark\ngit checkout v3.5.6\ngit apply --reject --whitespace=fix ../datafusion-comet/dev/diffs/3.5.5.diff\n```\n\nAny changes that cannot be cleanly applied will instead be written out to reject files. For example, the above\ncommand generated the following files.\n\n```shell\nfind . 
-name \"*.rej\"\n./pom.xml.rej\n./sql/core/src/test/scala/org/apache/spark/sql/FileBasedDataSourceSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamingJoinSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/streaming/StreamSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/JoinSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/SchemaPruningSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/binaryfile/BinaryFileFormatSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetRebaseDatetimeSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/execution/datasources/parquet/ParquetSchemaSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/execution/WholeStageCodegenSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/execution/adaptive/AdaptiveQueryExecSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/sources/CreateTableAsSelectSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/sources/BucketedReadSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/sources/DisableUnnecessaryBucketedScanSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/errors/QueryExecutionErrorsSuite.scala.rej\n./sql/core/src/test/scala/org/apache/spark/sql/CachedTableSuite.scala.rej\n./sql/core/src/main/scala/org/apache/spark/sql/SparkSession.scala.rej\n```\n\nThe changes in these reject files need to be applied manually.\n\nOne method is to use the [wiggle](https://github.com/neilbrown/wiggle) command (`brew install wiggle` on Mac).\n\nFor example:\n\n```shell\nwiggle --replace ./sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala ./sql/core/src/test/scala/org/apache/spark/sql/SubquerySuite.scala.rej\n```\n\n## Generating The Diff File\n\nThe diff file can be generated using the `git diff` command. It may be necessary to set the `core.abbrev`\nconfiguration setting to use 11 digits hashes for consistency with existing diff files.\n\nNote that there is an `IgnoreComet.scala` that is not part of the Spark codebase, and therefore needs to be added\nusing `git add` before generating the diff.\n\n```shell\ngit config core.abbrev 11;\ngit add sql/core/src/test/scala/org/apache/spark/sql/IgnoreComet.scala\ngit diff v3.5.6 > ../datafusion-comet/dev/diffs/3.5.6.diff\n```\n\n## Running Tests in CI\n\nThe easiest way to run the tests is to create a PR against Comet and let CI run the tests. When working with a\nnew Spark version, the `spark_sql_test.yaml` and `spark_sql_test_ansi.yaml` files will need updating with the\nnew version.\n"
  },
  {
    "path": "docs/source/contributor-guide/sql-file-tests.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# SQL File Tests\n\n`CometSqlFileTestSuite` is a test suite that automatically discovers `.sql` test files and\nruns each query through both Spark and Comet, comparing results. This provides a lightweight\nway to add expression and operator test coverage without writing Scala test code.\n\n## Running the tests\n\nRun all SQL file tests:\n\n```shell\n./mvnw test -Dsuites=\"org.apache.comet.CometSqlFileTestSuite\" -Dtest=none\n```\n\nRun a single test file by adding the file name (without `.sql` extension) after the suite name:\n\n```shell\n./mvnw test -Dsuites=\"org.apache.comet.CometSqlFileTestSuite create_named_struct\" -Dtest=none\n```\n\nThis uses ScalaTest's substring matching, so the argument must match part of the test name.\nTest names follow the pattern `sql-file: expressions/<category>/<file>.sql [<config>]`.\n\n## Test file location\n\nSQL test files live under:\n\n```\nspark/src/test/resources/sql-tests/expressions/\n```\n\nFiles are organized into category subdirectories:\n\n```\nexpressions/\n  aggregate/     -- avg, sum, count, min_max, ...\n  array/         -- array_contains, array_append, get_array_item, ...\n  bitwise/\n  cast/\n  conditional/   -- case_when, coalesce, if_expr, ...\n  datetime/      -- date_add, date_diff, unix_timestamp, ...\n  decimal/\n  hash/\n  map/           -- get_map_value, map_keys, map_values, ...\n  math/          -- abs, ceil, floor, round, sqrt, ...\n  misc/          -- width_bucket, scalar_subquery, ...\n  string/        -- concat, like, substring, lower, upper, ...\n  struct/        -- create_named_struct, get_struct_field, ...\n```\n\nThe test suite recursively discovers all `.sql` files in these directories. Each file becomes\none or more ScalaTest test cases.\n\n## File format\n\nA test file consists of SQL comments, directives, statements, and queries separated by blank\nlines. Here is a minimal example:\n\n```sql\nstatement\nCREATE TABLE test_abs(v double) USING parquet\n\nstatement\nINSERT INTO test_abs VALUES (1.5), (-2.5), (0.0), (NULL)\n\nquery\nSELECT abs(v) FROM test_abs\n```\n\n### Directives\n\nDirectives are SQL comments at the top of the file that configure how the test runs.\n\n#### `Config`\n\nSets a Spark SQL config for all queries in the file.\n\n```sql\n-- Config: spark.sql.ansi.enabled=true\n```\n\n#### `ConfigMatrix`\n\nRuns the entire file once per combination of values. 
Multiple `ConfigMatrix` lines produce a\ncross product of all combinations.\n\n```sql\n-- ConfigMatrix: spark.sql.optimizer.inSetConversionThreshold=100,0\n```\n\nThis generates two test cases:\n\n```\nsql-file: expressions/conditional/in_set.sql [spark.sql.optimizer.inSetConversionThreshold=100]\nsql-file: expressions/conditional/in_set.sql [spark.sql.optimizer.inSetConversionThreshold=0]\n```\n\nOnly add a `ConfigMatrix` directive when there is a real reason to run the test under\nmultiple configurations. Do not add `ConfigMatrix` directives speculatively.\n\n#### `MinSparkVersion`\n\nSkips the file when running on a Spark version older than the specified version.\n\n```sql\n-- MinSparkVersion: 3.5\n```\n\n### Statements\n\nA `statement` block executes DDL or DML and does not check results. Use this for `CREATE TABLE`\nand `INSERT` commands. Table names are automatically extracted for cleanup after the test.\n\n```sql\nstatement\nCREATE TABLE my_table(x int, y double) USING parquet\n\nstatement\nINSERT INTO my_table VALUES (1, 2.0), (3, 4.0), (NULL, NULL)\n```\n\n### Queries\n\nA `query` block executes a SELECT and compares results between Spark and Comet. The query\nmode controls how results are validated.\n\n#### `query` (default mode)\n\nChecks that the query runs natively on Comet (not falling back to Spark) and that results\nmatch Spark exactly.\n\n```sql\nquery\nSELECT abs(v) FROM test_abs\n```\n\n#### `query spark_answer_only`\n\nOnly checks that Comet results match Spark. Does not assert that the query runs natively.\nUse this for expressions that Comet may not fully support yet but should still produce\ncorrect results.\n\n```sql\nquery spark_answer_only\nSELECT some_expression(v) FROM test_table\n```\n\n#### `query tolerance=<value>`\n\nChecks results with a numeric tolerance. Useful for floating-point functions where small\ndifferences are acceptable.\n\n```sql\nquery tolerance=0.0001\nSELECT cos(v) FROM test_trig\n```\n\n#### `query expect_fallback(<reason>)`\n\nAsserts that the query falls back to Spark and verifies the fallback reason contains the\ngiven string.\n\n```sql\nquery expect_fallback(unsupported expression)\nSELECT unsupported_func(v) FROM test_table\n```\n\n#### `query ignore(<reason>)`\n\nSkips the query entirely. Use this for queries that hit known bugs. The reason should be a\nlink to the tracking GitHub issue.\n\n```sql\n-- Comet bug: space(-1) causes native crash\nquery ignore(https://github.com/apache/datafusion-comet/issues/3326)\nSELECT space(n) FROM test_space WHERE n < 0\n```\n\n#### `query expect_error(<pattern>)`\n\nAsserts that both Spark and Comet throw an exception containing the given pattern. Use this\nfor ANSI mode tests where invalid operations should throw errors.\n\n```sql\n-- Config: spark.sql.ansi.enabled=true\n\n-- integer overflow should throw in ANSI mode\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT 2147483647 + 1\n\n-- division by zero should throw in ANSI mode\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT 1 / 0\n\n-- array out of bounds should throw in ANSI mode\nquery expect_error(INVALID_ARRAY_INDEX)\nSELECT array(1, 2, 3)[10]\n```\n\n## Adding a new test\n\n1. Create a `.sql` file under the appropriate subdirectory in\n   `spark/src/test/resources/sql-tests/expressions/`. Create a new subdirectory if no\n   existing category fits.\n\n2. Add the Apache license header as a SQL comment.\n\n3. 
Add a `ConfigMatrix` directive only if the test needs to run under multiple configurations\n   (e.g., testing behavior that varies with a specific Spark config). Do not add `ConfigMatrix`\n   directives speculatively.\n\n4. Create tables and insert test data using `statement` blocks. Include edge cases such as\n   `NULL`, boundary values, and negative numbers.\n\n5. Add `query` blocks for each expression or behavior to test. Use the default `query` mode\n   when you expect Comet to run the expression natively. Use `query spark_answer_only` when\n   native execution is not yet expected.\n\n6. Run the tests to verify:\n\n   ```shell\n   ./mvnw test -Dsuites=\"org.apache.comet.CometSqlFileTestSuite\" -Dtest=none\n   ```\n\n### Tips for writing thorough tests\n\n#### Cover all combinations of literal and column arguments\n\nComet often uses different code paths for literal values versus column references. Tests\nshould exercise both. For a function with multiple arguments, test every useful combination.\n\nFor a single-argument function, test both a column reference and a literal:\n\n```sql\n-- column argument (reads from Parquet, goes through columnar evaluation)\nquery\nSELECT ascii(s) FROM test_ascii\n\n-- literal arguments\nquery\nSELECT ascii('A'), ascii(''), ascii(NULL)\n```\n\nFor a multi-argument function like `concat_ws(sep, str1, str2, ...)`, test with the separator\nas a column versus a literal, and similarly for the other arguments:\n\n```sql\n-- all columns\nquery\nSELECT concat_ws(sep, a, b) FROM test_table\n\n-- literal separator, column values\nquery\nSELECT concat_ws(',', a, b) FROM test_table\n\n-- all literals\nquery\nSELECT concat_ws(',', 'hello', 'world')\n```\n\n**Note on constant folding:** Normally Spark constant-folds all-literal expressions during\nplanning, so Comet would never see them. However, `CometSqlFileTestSuite` automatically\ndisables constant folding (by excluding `ConstantFolding` from the optimizer rules), so\nall-literal queries are evaluated by Comet's native engine. This means you can use the\ndefault `query` mode for all-literal cases and they will be tested natively just like\ncolumn-based queries.\n\n#### Cover edge cases\n\nInclude edge-case values in your test data. The exact cases depend on the function, but\ncommon ones include:\n\n- **NULL values** -- every test should include NULLs\n- **Empty strings** -- for string functions\n- **Zero, negative, and very large numbers** -- for numeric functions\n- **Boundary values** -- `INT_MIN`, `INT_MAX`, `NaN`, `Infinity`, `-Infinity` for numeric\n  types\n- **Special characters and multibyte UTF-8** -- for string functions (e.g. `'é'`, `'中文'`,\n  `'\\t'`)\n- **Empty arrays/maps** -- for collection functions\n- **Single-element and multi-element collections** -- for aggregate and collection functions\n\n#### One file per expression\n\nKeep each `.sql` file focused on a single expression or a small group of closely related\nexpressions. This makes failures easy to locate and keeps files readable.\n\n#### Use comments to label sections\n\nAdd SQL comments before query blocks to describe what aspect of the expression is being\ntested. This helps reviewers and future maintainers understand the intent:\n\n```sql\n-- literal separator with NULL values\nquery\nSELECT concat_ws(',', NULL, 'b', 'c')\n\n-- empty separator\nquery\nSELECT concat_ws('', a, b, c) FROM test_table\n```\n\n### Using agentic coding tools\n\nWriting thorough SQL test files is a task well suited to agentic coding tools such as\nClaude Code. 
You can point the tool at an existing test file as an example, describe the\nexpression you want to test, and ask it to generate a complete `.sql` file covering all\nargument combinations and edge cases. This is significantly faster than writing the\ncombinatorial test cases by hand and helps ensure nothing is missed.\n\nFor example:\n\n```\nRead the test file spark/src/test/resources/sql-tests/expressions/string/ascii.sql\nand the documentation in docs/source/contributor-guide/sql-file-tests.md.\nThen write a similar test file for the `reverse` function, covering column arguments,\nliteral arguments, NULLs, empty strings, and multibyte characters.\n```\n\n## Handling test failures\n\nWhen a query fails due to a known Comet bug:\n\n1. File a GitHub issue describing the problem.\n2. Change the query mode to `ignore(...)` with a link to the issue.\n3. Optionally add a SQL comment above the query explaining the problem.\n\n```sql\n-- GetArrayItem returns incorrect results with dynamic index\nquery ignore(https://github.com/apache/datafusion-comet/issues/3332)\nSELECT arr[idx] FROM test_get_array_item\n```\n\nWhen the bug is fixed, remove the `ignore(...)` and restore the original query mode.\n\n## Architecture\n\nThe test infrastructure consists of two Scala files:\n\n- **`SqlFileTestParser`** (`spark/src/test/scala/org/apache/comet/SqlFileTestParser.scala`) --\n  Parses `.sql` files into a `SqlTestFile` data structure containing directives, statements,\n  and queries.\n\n- **`CometSqlFileTestSuite`** (`spark/src/test/scala/org/apache/comet/CometSqlFileTestSuite.scala`) --\n  Discovers test files at suite initialization time, generates ScalaTest test cases for each\n  file and config combination, and executes them using `CometTestBase` assertion methods.\n\nTables created in test files are automatically cleaned up after each test.\n"
  },
  {
    "path": "docs/source/contributor-guide/sql_error_propagation.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# ANSI SQL Error Propagation in Comet\n\n## Overview\n\nApache Comet is a native query accelerator for Apache Spark. It runs SQL expressions in\n**Rust** (via Apache DataFusion) instead of the JVM, which is much faster. But there's a\ncatch: when something goes wrong — say, a divide-by-zero or a type cast failure — the error\nneeds to travel from Rust code all the way back to the Spark/Scala/Java world as a proper\nSpark exception with the right type, the right message, and even a pointer to the exact\ncharacter in the original SQL query where the error happened.\n\nThis document explains the end-to-end error-propagation pipeline.\n\n---\n\n## The Big Picture\n\n![Error propagation pipeline overview](./error_pipeline_overview.svg)\n\n```\nSQL Query (Spark/Scala)\n        │\n        │  1. Spark serializes the plan + query context into Protobuf\n        ▼\n  Protobuf bytes ──────────────────────────────────►  JNI boundary\n        │\n        │  2. Rust deserializes the plan and registers query contexts\n        ▼\n   Native execution (DataFusion / Rust)\n        │\n        │  3. An error occurs (e.g. divide by zero)\n        ▼\n   SparkError (Rust enum)\n        │\n        │  4. Error is wrapped with SQL location context\n        ▼\n   SparkErrorWithContext\n        │\n        │  5. Serialized to JSON string\n        ▼\n   JSON string ◄────────────────────────────────────  JNI boundary\n        │\n        │  6. Thrown as CometQueryExecutionException\n        ▼\n   CometExecIterator.scala catches it\n        │\n        │  7. JSON is parsed, proper Spark exception is reconstructed\n        ▼\n   Spark exception (e.g. ArithmeticException with DIVIDE_BY_ZERO errorClass)\n        │\n        │  8. Spark displays it to the user with SQL location pointer\n        ▼\n   User sees:\n   [DIVIDE_BY_ZERO] Division by zero.\n   == SQL (line 1, position 8) ==\n   SELECT a/b FROM t\n          ^^^\n```\n\n---\n\n## Step 1: Spark Serializes Query Context into Protobuf\n\nWhen Spark compiles a SQL query, it parses it and attaches _origin_ information to every\nexpression — the line number, column offset, and the full SQL text.\n\n`QueryPlanSerde.scala` is the Scala code that converts Spark's physical execution plan into\na Protobuf binary that gets sent to the Rust side. 
It extracts origin information from each\nexpression and encodes it alongside the expression in the Protobuf payload.\n\n### The `extractQueryContext` function\n\n```scala\n// spark/src/main/scala/org/apache/comet/serde/QueryPlanSerde.scala\n\nprivate def extractQueryContext(expr: Expression): Option[ExprOuterClass.QueryContext] = {\n  val contexts = expr.origin.getQueryContext       // Spark stores context in expr.origin\n  if (contexts != null && contexts.length > 0) {\n    val ctx = contexts(0)\n    ctx match {\n      case sqlCtx: SQLQueryContext =>\n        val builder = ExprOuterClass.QueryContext.newBuilder()\n          .setSqlText(sqlCtx.sqlText.getOrElse(\"\"))         // full SQL text\n          .setStartIndex(sqlCtx.originStartIndex.getOrElse(...)) // char offset of expression start\n          .setStopIndex(sqlCtx.originStopIndex.getOrElse(...))   // char offset of expression end\n          .setLine(sqlCtx.line.getOrElse(0))\n          .setStartPosition(sqlCtx.startPosition.getOrElse(0))\n        // ...\n        Some(builder.build())\n    }\n  }\n}\n```\n\nThen, for **every single expression** converted to Protobuf, a unique numeric ID and the\ncontext are attached:\n\n```scala\n.map { protoExpr =>\n  val builder = protoExpr.toBuilder\n  builder.setExprId(nextExprId())           // unique ID (monotonically increasing counter)\n  extractQueryContext(expr).foreach { ctx =>\n    builder.setQueryContext(ctx)            // attach the SQL location info\n  }\n  builder.build()\n}\n```\n\n### The Protobuf schema\n\n```protobuf\n// native/proto/src/proto/expr.proto\n\nmessage Expr {\n  optional uint64 expr_id = 89;             // unique ID for each expression\n  optional QueryContext query_context = 90; // SQL location info\n  // ... actual expression type ...\n}\n\nmessage QueryContext {\n  string sql_text = 1;      // \"SELECT a/b FROM t\"\n  int32 start_index = 2;    // 7  (0-based character index of 'a')\n  int32 stop_index = 3;     // 9  (0-based character index of 'b', inclusive)\n  int32 line = 4;           // 1  (1-based line number)\n  int32 start_position = 5; // 7  (0-based column position)\n  optional string object_type = 6; // e.g. \"VIEW\"\n  optional string object_name = 7; // e.g. \"v1\"\n}\n```\n\n---\n\n## Step 2: Rust Deserializes the Plan and Registers Query Contexts\n\nOn the Rust side, `PhysicalPlanner` in `planner.rs` converts the Protobuf into DataFusion's\nphysical plan. A `QueryContextMap` — a global registry — maps expression IDs to their SQL\ncontext.\n\n### `QueryContextMap` (`native/spark-expr/src/query_context.rs`)\n\n```rust\npub struct QueryContextMap {\n    contexts: RwLock<HashMap<u64, Arc<QueryContext>>>,\n}\n\nimpl QueryContextMap {\n    pub fn register(&self, expr_id: u64, context: QueryContext) { ... }\n    pub fn get(&self, expr_id: u64) -> Option<Arc<QueryContext>> { ... }\n}\n```\n\nThis is basically a lookup table: \"for expression #42, the SQL context is: text=`SELECT a/b\nFROM t`, characters 7–9, line 1, column 7\".\n\n### The `PhysicalPlanner` registers contexts during plan creation\n\n```rust\n// native/core/src/execution/planner.rs\n\npub struct PhysicalPlanner {\n    query_context_registry: Arc<QueryContextMap>,\n    // ...\n}\n\npub(crate) fn create_expr(&self, spark_expr: &Expr, ...) {\n    // 1. 
If this expression has a query context, register it\n    if let (Some(expr_id), Some(ctx_proto)) =\n        (spark_expr.expr_id, spark_expr.query_context.as_ref()) {\n        let query_ctx = QueryContext::new(\n            ctx_proto.sql_text.clone(),\n            ctx_proto.start_index,\n            ctx_proto.stop_index,\n            ...\n        );\n        self.query_context_registry.register(expr_id, query_ctx);\n    }\n\n    // 2. When building specific expressions (Cast, CheckOverflow, etc.),\n    //    look up the context and pass it to the expression\n    ExprStruct::Cast(expr) => {\n        let query_context = spark_expr.expr_id.and_then(|id| {\n            self.query_context_registry.get(id)\n        });\n        Ok(Arc::new(Cast::new(child, datatype, options, spark_expr.expr_id, query_context)))\n    }\n}\n```\n\n---\n\n## Step 3: An Error Occurs During Native Execution\n\nDuring query execution, a Rust expression might encounter something like division by zero.\n\n### The `SparkError` enum (`native/spark-expr/src/error.rs`)\n\nThis enum contains one variant for every kind of error Spark can produce, with exactly the\nsame error message format as Spark:\n\n```rust\npub enum SparkError {\n    #[error(\"[DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor \\\n        being 0 and return NULL instead. If necessary set \\\n        \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    DivideByZero,\n\n    #[error(\"[CAST_INVALID_INPUT] The value '{value}' of the type \\\"{from_type}\\\" \\\n        cannot be cast to \\\"{to_type}\\\" because it is malformed. ...\")]\n    CastInvalidValue { value: String, from_type: String, to_type: String },\n\n    // ... 30+ more variants matching Spark's error codes ...\n}\n```\n\nWhen a divide-by-zero happens, the arithmetic expression creates:\n\n```rust\nreturn Err(DataFusionError::External(Box::new(SparkError::DivideByZero)));\n```\n\n---\n\n## Step 4: Error Gets Wrapped with SQL Context\n\nThe expression wrappers (`CheckedBinaryExpr`, `CheckOverflow`, `Cast`) catch the\n`SparkError` and attach the SQL context using `SparkErrorWithContext`:\n\n```rust\n// native/core/src/execution/expressions/arithmetic.rs\n// CheckedBinaryExpr wraps arithmetic operations\n\nimpl PhysicalExpr for CheckedBinaryExpr {\n    fn evaluate(&self, batch: &RecordBatch) -> Result<...> {\n        match self.child.evaluate(batch) {\n            Err(DataFusionError::External(e)) if self.query_context.is_some() => {\n                if let Some(spark_error) = e.downcast_ref::<SparkError>() {\n                    // Wrap the error with SQL location info\n                    let wrapped = SparkErrorWithContext::with_context(\n                        spark_error.clone(),\n                        Arc::clone(self.query_context.as_ref().unwrap()),\n                    );\n                    return Err(DataFusionError::External(Box::new(wrapped)));\n                }\n                Err(DataFusionError::External(e))\n            }\n            other => other,\n        }\n    }\n}\n```\n\n### `SparkErrorWithContext` (`native/spark-expr/src/error.rs`)\n\n```rust\npub struct SparkErrorWithContext {\n    pub error: SparkError,                          // the actual error\n    pub context: Option<Arc<QueryContext>>,         // optional SQL location\n}\n```\n\n---\n\n## Step 5: Error Is Serialized to JSON\n\nWhen DataFusion propagates the error all the way up through the execution engine and it\nreaches the JNI boundary, `throw_exception()` in 
`errors.rs` is called. It detects the\n`SparkErrorWithContext` type and calls `.to_json()` on it:\n\n```rust\n// native/core/src/errors.rs\n\nfn throw_exception(env: &mut JNIEnv, error: &CometError, ...) {\n    match error {\n        CometError::DataFusion {\n            source: DataFusionError::External(e), ..\n        } => {\n            if let Some(spark_err_ctx) = e.downcast_ref::<SparkErrorWithContext>() {\n                // Has SQL context → throw with JSON payload\n                let json = spark_err_ctx.to_json();\n                env.throw_new(\"org/apache/comet/exceptions/CometQueryExecutionException\", json)\n            } else if let Some(spark_err) = e.downcast_ref::<SparkError>() {\n                // No SQL context → throw with JSON payload (no context field)\n                throw_spark_error_as_json(env, spark_err)\n            }\n        }\n        // ...\n    }\n}\n```\n\nThe JSON looks like this for a divide-by-zero in `SELECT a/b FROM t`:\n\n```json\n{\n  \"errorType\": \"DivideByZero\",\n  \"errorClass\": \"DIVIDE_BY_ZERO\",\n  \"params\": {},\n  \"context\": {\n    \"sqlText\": \"SELECT a/b FROM t\",\n    \"startIndex\": 7,\n    \"stopIndex\": 9,\n    \"line\": 1,\n    \"startPosition\": 7,\n    \"objectType\": null,\n    \"objectName\": null\n  },\n  \"summary\": \"== SQL (line 1, position 8) ==\\nSELECT a/b FROM t\\n       ^^^\"\n}\n```\n\n---\n\n## Step 6: Java Receives `CometQueryExecutionException`\n\nOn the Java side, a thin exception class carries the JSON string as its message:\n\n```java\n// common/src/main/java/org/apache/comet/exceptions/CometQueryExecutionException.java\n\npublic final class CometQueryExecutionException extends CometNativeException {\n  public CometQueryExecutionException(String jsonMessage) {\n    super(jsonMessage);\n  }\n\n  public boolean isJsonMessage() {\n    String msg = getMessage();\n    return msg != null && msg.trim().startsWith(\"{\") && msg.trim().endsWith(\"}\");\n  }\n}\n```\n\n---\n\n## Step 7: Scala Converts JSON Back to a Real Spark Exception\n\n`CometExecIterator.scala` is the Scala code that drives the native execution. 
Every time it\ncalls into the native engine for the next batch of data, it catches\n`CometQueryExecutionException` and converts it:\n\n```scala\n// spark/src/main/scala/org/apache/comet/CometExecIterator.scala\n\ntry {\n  nativeUtil.getNextBatch(...)\n} catch {\n  case e: CometQueryExecutionException =>\n    logError(s\"Native execution for task $taskAttemptId failed\", e)\n    throw SparkErrorConverter.convertToSparkException(e)   // converts JSON to real exception\n}\n```\n\n### `SparkErrorConverter.scala` parses the JSON\n\n```scala\n// spark/src/main/scala/org/apache/comet/SparkErrorConverter.scala\n\ndef convertToSparkException(e: CometQueryExecutionException): Throwable = {\n  val json = parse(e.getMessage)\n  val errorJson = json.extract[ErrorJson]\n\n  // Reconstruct Spark's SQLQueryContext from the embedded context\n  val sparkContext: Array[QueryContext] = errorJson.context match {\n    case Some(ctx) =>\n      Array(SQLQueryContext(\n        sqlText = Some(ctx.sqlText),\n        line = Some(ctx.line),\n        startPosition = Some(ctx.startPosition),\n        originStartIndex = Some(ctx.startIndex),\n        originStopIndex = Some(ctx.stopIndex),\n        originObjectType = ctx.objectType,\n        originObjectName = ctx.objectName))\n    case None => Array.empty\n  }\n\n  // Delegate to version-specific shim\n  convertErrorType(errorJson.errorType, errorClass, params, sparkContext, summary)\n}\n```\n\n### `ShimSparkErrorConverter` calls the real Spark API\n\nBecause Spark's `QueryExecutionErrors` API changes between Spark versions (3.4, 3.5, 4.0),\nthere is a separate implementation per version (in `spark-3.4/`, `spark-3.5/`, `spark-4.0/`).\n\n![Shim pattern for per-version Spark API bridging](./shim_pattern.svg)\n\n```scala\n// spark/src/main/spark-3.5/org/apache/spark/sql/comet/shims/ShimSparkErrorConverter.scala\n\ndef convertErrorType(errorType: String, errorClass: String,\n    params: Map[String, Any], context: Array[QueryContext], summary: String): Option[Throwable] = {\n\n  errorType match {\n    case \"DivideByZero\" =>\n      Some(QueryExecutionErrors.divideByZeroError(sqlCtx(context)))\n      // This is the REAL Spark method that creates the ArithmeticException\n      // with the SQL context pointer. The error message will include\n      // \"== SQL (line 1, position 8) ==\" etc.\n\n    case \"CastInvalidValue\" =>\n      Some(QueryExecutionErrors.castingCauseOverflowError(...))\n\n    // ... 
all other error types ...\n\n    case _ => None  // fallback to generic SparkException\n  }\n}\n```\n\nThe Spark 3.5 and Spark 4.0 shims differ in subtle API details:\n\n```scala\n// 3.5: binaryArithmeticCauseOverflowError does NOT take functionName\ncase \"BinaryArithmeticOverflow\" =>\n  Some(QueryExecutionErrors.binaryArithmeticCauseOverflowError(\n    params(\"value1\").toString.toShort,\n    params(\"symbol\").toString,\n    params(\"value2\").toString.toShort))\n\n// 4.0: the overloaded method takes a functionName parameter\ncase \"BinaryArithmeticOverflow\" =>\n  Some(QueryExecutionErrors.binaryArithmeticCauseOverflowError(\n    params(\"value1\").toString.toShort,\n    params(\"symbol\").toString,\n    params(\"value2\").toString.toShort,\n    params(\"functionName\").toString))   // extra param in 4.0\n```\n\n---\n\n## Step 8: The User Sees a Proper Spark Error\n\nThe final exception that propagates out of Spark looks exactly like what native Spark would\nproduce for the same error, including the ANSI error code and the SQL pointer:\n\n```\norg.apache.spark.SparkArithmeticException: [DIVIDE_BY_ZERO] Division by zero.\nUse `try_divide` to tolerate divisor being 0 and return NULL instead.\nIf necessary set \"spark.sql.ansi.enabled\" to \"false\" to bypass this error.\n\n== SQL (line 1, position 8) ==\nSELECT a/b FROM t\n       ^^^\n```\n\n---\n\n## Key Data Structures: A Summary\n\n| Structure                      | Language | File                                | Purpose                                             |\n| ------------------------------ | -------- | ----------------------------------- | --------------------------------------------------- |\n| `QueryContext` (proto)         | Protobuf | `expr.proto`                        | Wire format for SQL location info                   |\n| `QueryContext` (Rust)          | Rust     | `query_context.rs`                  | In-memory SQL location info                         |\n| `QueryContextMap`              | Rust     | `query_context.rs`                  | Registry: expr_id → QueryContext                    |\n| `SparkError`                   | Rust     | `error.rs`                          | Typed Rust enum matching all Spark error variants   |\n| `SparkErrorWithContext`        | Rust     | `error.rs`                          | SparkError + optional QueryContext                  |\n| `CometQueryExecutionException` | Java     | `CometQueryExecutionException.java` | JNI transport: carries JSON string                  |\n| `SparkErrorConverter`          | Scala    | `SparkErrorConverter.scala`         | Parses JSON, creates real Spark exception           |\n| `ShimSparkErrorConverter`      | Scala    | `ShimSparkErrorConverter.scala`     | Per-Spark-version calls to `QueryExecutionErrors.*` |\n\n---\n\n## Why JSON? Why Not Throw the Right Exception Directly?\n\nJNI does not support throwing arbitrary Java exception subclasses from Rust directly — you\ncan only provide a class name and a string message. The class name is fixed (always\n`CometQueryExecutionException`), but the string payload can carry any structured data.\n\nJSON was chosen because:\n\n1. It is self-describing — the receiver can parse it without knowing the structure in advance.\n2. It is easy to add new fields without breaking old parsers.\n3. It maps cleanly to Scala's case class extraction (`json.extract[ErrorJson]`).\n4. All the typed information (error class, parameters, SQL context) can be round-tripped\n   perfectly.\n\nThe alternative — throwing different Java exception classes from Rust — would require a\nseparate JNI throw path for each of the 30+ error types, which would be much harder to\nmaintain.\n\n---\n\n## Why `SparkError` Has Its Own `error_class()` Method\n\nEach `SparkError` variant knows its own ANSI error class code (e.g. `\"DIVIDE_BY_ZERO\"`,\n`\"CAST_INVALID_INPUT\"`). This is used both:\n\n- In the JSON payload's `\"errorClass\"` field (so the Java side can pass it to\n  `SparkException(errorClass = ...)` as a fallback)\n- In the legacy `exception_class()` method that maps to the right Java exception class (e.g.\n  `\"java/lang/ArithmeticException\"`)\n\n---\n\n## Diagrams\n\n### End-to-End Pipeline\n\n![Error propagation pipeline overview](./error_pipeline_overview.svg)\n\n### QueryContext Journey: From SQL Text to Error Pointer\n\n![QueryContext journey from SQL parser to error message](./query_context_journey.svg)\n\n### Shim Pattern: Per-Version Spark API Bridging\n\n![Shim pattern for per-version Spark API bridging](./shim_pattern.svg)\n\n---\n\n_This document and its diagrams were written by [Claude](https://claude.ai) (Anthropic)._\n"
  },
  {
    "path": "docs/source/contributor-guide/tracing.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Tracing\n\nTracing can be enabled by setting `spark.comet.tracing.enabled=true`.\n\nWith this feature enabled, each Spark executor will write a JSON event log file in\nChrome's [Trace Event Format]. The file will be written to the executor's current working\ndirectory with the filename `comet-event-trace.json`.\n\n[Trace Event Format]: https://docs.google.com/document/d/1CvAClvFfyA5R-PhYUmn5OOQtYMH4h6I0nSsKchNAySU/preview?tab=t.0#heading=h.yr4qxyxotyw\n\nAdditionally, enabling the `jemalloc` feature will enable tracing of native memory allocations.\n\n```shell\nmake release COMET_FEATURES=\"jemalloc\"\n```\n\nExample output:\n\n```json\n{ \"name\": \"decodeShuffleBlock\", \"cat\": \"PERF\", \"ph\": \"B\", \"pid\": 1, \"tid\": 5, \"ts\": 10109225730 },\n{ \"name\": \"decodeShuffleBlock\", \"cat\": \"PERF\", \"ph\": \"E\", \"pid\": 1, \"tid\": 5, \"ts\": 10109228835 },\n{ \"name\": \"decodeShuffleBlock\", \"cat\": \"PERF\", \"ph\": \"B\", \"pid\": 1, \"tid\": 5, \"ts\": 10109245928 },\n{ \"name\": \"decodeShuffleBlock\", \"cat\": \"PERF\", \"ph\": \"E\", \"pid\": 1, \"tid\": 5, \"ts\": 10109248843 },\n{ \"name\": \"executePlan\", \"cat\": \"PERF\", \"ph\": \"E\", \"pid\": 1, \"tid\": 5, \"ts\": 10109350935 },\n{ \"name\": \"getNextBatch[JVM] stage=2\", \"cat\": \"PERF\", \"ph\": \"E\", \"pid\": 1, \"tid\": 5, \"ts\": 10109367116 },\n{ \"name\": \"getNextBatch[JVM] stage=2\", \"cat\": \"PERF\", \"ph\": \"B\", \"pid\": 1, \"tid\": 5, \"ts\": 10109479156 },\n```\n\nTraces can be viewed with [Perfetto UI].\n\n[Perfetto UI]: https://ui.perfetto.dev\n\nExample trace visualization:\n\n![tracing](../_static/images/tracing.png)\n\n## Analyzing Memory Usage\n\nThe `analyze_trace` tool parses a trace log and compares jemalloc usage against the sum of per-thread\nComet memory pool reservations. This is useful for detecting untracked native memory growth where jemalloc\nallocations exceed what the memory pools account for.\n\nBuild and run:\n\n```shell\ncd native\ncargo run --bin analyze_trace -- /path/to/comet-event-trace.json\n```\n\nThe tool reads counter events from the trace log. 
Because tracing logs metrics per thread, `jemalloc_allocated`\nis a process-wide value (the same global allocation reported from whichever thread logs it), while\n`thread_NNN_comet_memory_reserved` values are per-thread pool reservations that are summed to get the total\ntracked memory.\n\nSample output:\n\n```\n=== Comet Trace Memory Analysis ===\n\nCounter events parsed: 193104\nThreads with memory pools: 8\nPeak jemalloc allocated:   3068.2 MB\nPeak pool total:           2864.6 MB\nPeak excess (jemalloc - pool): 364.6 MB\n\nWARNING: jemalloc exceeded pool reservation at 138 sampled points:\n\n     Time (us)        jemalloc      pool_total          excess\n--------------------------------------------------------------\n        179578        210.8 MB          0.1 MB        210.7 MB\n        429663        420.5 MB        145.1 MB        275.5 MB\n       1304969       2122.5 MB       1797.2 MB        325.2 MB\n      21974838        407.0 MB         42.3 MB        364.6 MB\n      33543599          5.5 MB          0.1 MB          5.3 MB\n\n--- Final per-thread pool reservations ---\n\n  thread_60_comet_memory_reserved: 0.0 MB\n  thread_95_comet_memory_reserved: 0.0 MB\n  thread_96_comet_memory_reserved: 0.0 MB\n  ...\n\n  Total: 0.0 MB\n```\n\nSome excess is expected (jemalloc metadata, fragmentation, non-pool allocations like Arrow IPC buffers).\nLarge or growing excess may indicate memory that is not being tracked by the pool.\n\n## Definition of Labels\n\n| Label                            | Meaning                                                                                                                  |\n| -------------------------------- | ------------------------------------------------------------------------------------------------------------------------ |\n| jvm_heap_used                    | JVM heap memory usage of live objects for the executor process                                                           |\n| jemalloc_allocated               | Native memory usage for the executor process (requires `jemalloc` feature)                                               |\n| thread_NNN_comet_memory_reserved | Memory reserved by Comet's DataFusion memory pool (summed across all contexts on the thread). NNN is the Rust thread ID. |\n| thread_NNN_comet_jvm_shuffle     | Off-heap memory allocated by Comet for columnar shuffle. NNN is the Rust thread ID.                                      |\n"
  },
  {
    "path": "docs/source/index.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Accelerator for Apache Spark and Apache Iceberg\n\n<!-- Code from https://buttons.github.io/ -->\n<p>\n  <!-- Place this tag where you want the button to render. -->\n  <a class=\"github-button\" href=\"https://github.com/apache/datafusion-comet\" data-size=\"large\" data-show-count=\"true\" aria-label=\"Star apache/datafusion-comet on GitHub\">Star</a>\n  <!-- Place this tag where you want the button to render. -->\n   <a class=\"github-button\" href=\"https://github.com/apache/datafusion-comet/fork\" data-size=\"large\" data-show-count=\"true\" aria-label=\"Fork apache/datafusion-comet on GitHub\">Fork</a>\n</p>\n\nApache DataFusion Comet is a high-performance accelerator for Apache Spark, built on top of the powerful\n[Apache DataFusion] query engine. Comet is designed to significantly enhance the\nperformance of Apache Spark workloads while leveraging commodity hardware and seamlessly integrating with the\nSpark ecosystem without requiring any code changes.\n\nComet also accelerates Apache Iceberg, when performing Parquet scans from Spark.\n\n[Apache DataFusion]: https://datafusion.apache.org\n\n## Run Spark Queries at DataFusion Speeds\n\nComet delivers a performance speedup for many queries, enabling faster data processing and shorter time-to-insights.\n\nThe following chart shows the time it takes to run the 22 TPC-H queries against 100 GB of data in Parquet format\nusing a single executor with 8 cores. See the [Comet Benchmarking Guide](https://datafusion.apache.org/comet/contributor-guide/benchmarking.html)\nfor details of the environment used for these benchmarks.\n\nWhen using Comet, the overall run time is reduced from 687 seconds to 302 seconds, a 2.2x speedup.\n\n![](_static/images/benchmark-results/0.11.0/tpch_allqueries.png)\n\nHere is a breakdown showing relative performance of Spark and Comet for each TPC-H query.\n\n![](_static/images/benchmark-results/0.11.0/tpch_queries_compare.png)\n\nThese benchmarks can be reproduced in any environment using the documentation in the\n[Comet Benchmarking Guide](/contributor-guide/benchmarking.md). We encourage\nyou to run your own benchmarks.\n\n## Use Commodity Hardware\n\nComet leverages commodity hardware, eliminating the need for costly hardware upgrades or\nspecialized hardware accelerators, such as GPUs or FPGA. By maximizing the utilization of commodity hardware, Comet\nensures cost-effectiveness and scalability for your Spark deployments.\n\n## Spark Compatibility\n\nComet aims for 100% compatibility with all supported versions of Apache Spark, allowing you to integrate Comet into\nyour existing Spark deployments and workflows seamlessly. 
With no code changes required, you can immediately harness\nthe benefits of Comet's acceleration capabilities without disrupting your Spark applications.\n\n## Tight Integration with Apache DataFusion\n\nComet tightly integrates with the core Apache DataFusion project, leveraging its powerful execution engine. With\nseamless interoperability between Comet and DataFusion, you can achieve optimal performance and efficiency in your\nSpark workloads.\n\n## Active Community\n\nComet boasts a vibrant and active community of developers, contributors, and users dedicated to advancing the\ncapabilities of Apache DataFusion and accelerating the performance of Apache Spark.\n\n## Getting Started\n\nTo get started with Apache DataFusion Comet, follow the\n[installation instructions](https://datafusion.apache.org/comet/user-guide/installation.html). Join the\n[DataFusion Slack and Discord channels](https://datafusion.apache.org/contributor-guide/communication.html) to connect\nwith other users, ask questions, and share your experiences with Comet.\n\nFollow [Apache DataFusion Comet Overview](https://datafusion.apache.org/comet/about/index.html) to get more detailed information\n\n## Contributing\n\nWe welcome contributions from the community to help improve and enhance Apache DataFusion Comet. Whether it's fixing\nbugs, adding new features, writing documentation, or optimizing performance, your contributions are invaluable in\nshaping the future of Comet. Check out our\n[contributor guide](https://datafusion.apache.org/comet/contributor-guide/contributing.html) to get started.\n\n```{toctree}\n:maxdepth: 1\n:caption: Index\n:hidden:\n\nComet Overview <about/index>\nUser Guide <user-guide/index>\nContributor Guide <contributor-guide/index>\nASF Links <asf/index>\n```\n"
  },
  {
    "path": "docs/source/user-guide/index.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet User Guide\n\n```{toctree}\n:maxdepth: 2\n:caption: User Guides\n\n0.15.0-SNAPSHOT <latest/index>\n0.14.x <0.14/index>\n0.13.x <0.13/index>\n0.12.x <0.12/index>\n0.11.x <0.11/index>\n```\n"
  },
  {
    "path": "docs/source/user-guide/latest/compatibility.md",
    "content": "<!---\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Compatibility Guide\n\nComet aims to provide consistent results with the version of Apache Spark that is being used.\n\nThis guide offers information about areas of functionality where there are known differences.\n\n## Parquet\n\nComet has the following limitations when reading Parquet files:\n\n- Comet does not support reading decimals encoded in binary format.\n- No support for default values that are nested types (e.g., maps, arrays, structs). Literal default values are supported.\n\n## ANSI Mode\n\nComet will fall back to Spark for the following expressions when ANSI mode is enabled. These expressions can be enabled by setting\n`spark.comet.expression.EXPRNAME.allowIncompatible=true`, where `EXPRNAME` is the Spark expression class name. See\nthe [Comet Supported Expressions Guide](expressions.md) for more information on this configuration setting.\n\n- Average (supports all numeric inputs except decimal types)\n- Cast (in some cases)\n\nThere is an [epic](https://github.com/apache/datafusion-comet/issues/313) where we are tracking the work to fully implement ANSI support.\n\n## Floating-point Number Comparison\n\nSpark normalizes NaN and zero for floating point numbers for several cases. See `NormalizeFloatingNumbers` optimization rule in Spark.\nHowever, one exception is comparison. Spark does not normalize NaN and zero when comparing values\nbecause they are handled well in Spark (e.g., `SQLOrderingUtil.compareFloats`). But the comparison\nfunctions of arrow-rs used by DataFusion do not normalize NaN and zero (e.g., [arrow::compute::kernels::cmp::eq](https://docs.rs/arrow/latest/arrow/compute/kernels/cmp/fn.eq.html#)).\nSo Comet adds additional normalization expression of NaN and zero for comparisons, and may still have differences\nto Spark in some cases, especially when the data contains both positive and negative zero. This is likely an edge\ncase that is not of concern for many users. If it is a concern, setting `spark.comet.exec.strictFloatingPoint=true`\nwill make relevant operations fall back to Spark.\n\n## Incompatible Expressions\n\nExpressions that are not 100% Spark-compatible will fall back to Spark by default and can be enabled by setting\n`spark.comet.expression.EXPRNAME.allowIncompatible=true`, where `EXPRNAME` is the Spark expression class name. 
See the [Comet Supported Expressions Guide](expressions.md) for more information on this configuration setting.\n\n### Array Expressions\n\n- **ArraysOverlap**: Inconsistent behavior when arrays contain NULL values.\n  [#3645](https://github.com/apache/datafusion-comet/issues/3645),\n  [#2036](https://github.com/apache/datafusion-comet/issues/2036)\n- **ArrayUnion**: Sorts input arrays before performing the union, while Spark preserves the order of the first array\n  and appends unique elements from the second.\n  [#3644](https://github.com/apache/datafusion-comet/issues/3644)\n- **SortArray**: Nested arrays with `Struct` or `Null` child values are not supported natively and will fall back to Spark.\n\n### Date/Time Expressions\n\n- **Hour, Minute, Second**: Incorrectly apply timezone conversion to TimestampNTZ inputs. TimestampNTZ stores local\n  time without timezone, so no conversion should be applied. These expressions work correctly with Timestamp inputs.\n  [#3180](https://github.com/apache/datafusion-comet/issues/3180)\n- **TruncTimestamp (date_trunc)**: Produces incorrect results when used with non-UTC timezones. Compatible when the\n  timezone is UTC.\n  [#2649](https://github.com/apache/datafusion-comet/issues/2649)\n\n### Struct Expressions\n\n- **StructsToJson (to_json)**: Does not support `+Infinity` and `-Infinity` for numeric types (float, double).\n  [#3016](https://github.com/apache/datafusion-comet/issues/3016)\n\n## Regular Expressions\n\nComet uses the Rust `regex` crate for evaluating regular expressions, which behaves differently from Java's\nregular expression engine. Comet will fall back to Spark for patterns that are known to produce different results, but\nthis can be overridden by setting `spark.comet.expression.regexp.allowIncompatible=true`.\n\n## Window Functions\n\nComet's support for window functions is incomplete and known to be incorrect. It is disabled by default and\nshould not be used in production. The feature will be enabled in a future release. Tracking issue: [#2721](https://github.com/apache/datafusion-comet/issues/2721).\n\n## Round-Robin Partitioning\n\nComet's native shuffle implementation of round-robin partitioning (`df.repartition(n)`) is not compatible with\nSpark's implementation and is disabled by default. It can be enabled by setting\n`spark.comet.native.shuffle.partitioning.roundrobin.enabled=true`.\n\n**Why the incompatibility exists:**\n\nSpark's round-robin partitioning sorts rows by their binary `UnsafeRow` representation before assigning them to\npartitions. This ensures deterministic output for fault tolerance (task retries produce identical results).\nComet uses Arrow format internally, which has a completely different binary layout than `UnsafeRow`, making it\nimpossible to match Spark's exact partition assignments.\n\n**Comet's approach:**\n\nInstead of true round-robin assignment, Comet implements round-robin as hash partitioning on ALL columns. This\nachieves the same semantic goals:\n\n- **Even distribution**: Rows are distributed evenly across partitions (as long as the hash values vary sufficiently;\n  in some cases there could be skew)\n- **Deterministic**: Same input always produces the same partition assignments (important for fault tolerance)\n- **No semantic grouping**: Unlike hash partitioning on specific columns, this doesn't group related rows together\n\nThe only difference is that Comet's partition assignments will differ from Spark's. When results are sorted,\nthey will be identical to Spark. Unsorted results may have different row ordering.\n\n## Cast\n\nCast operations in Comet fall into the following levels of support:\n\n- **C (Compatible)**: The results match Apache Spark\n- **I (Incompatible)**: The results may match Apache Spark for some inputs, but there are known issues where some inputs\n  produce incorrect results or exceptions. The query stage will fall back to Spark by default. Setting\n  `spark.comet.expression.Cast.allowIncompatible=true` will allow all incompatible casts to run natively in Comet, but this is not\n  recommended for production use.\n- **U (Unsupported)**: Comet does not provide a native version of this cast expression and the query stage will fall back to\n  Spark.\n- **N/A**: Spark does not support this cast.\n\n### String to Decimal\n\nComet's native `CAST(string AS DECIMAL)` implementation matches Apache Spark's behavior,\nincluding:\n\n- Leading and trailing ASCII whitespace is trimmed before parsing.\n- Null bytes (`\\u0000`) at the start or end of a string are trimmed, matching Spark's\n  `UTF8String` behavior. Null bytes embedded in the middle of a string produce `NULL`.\n- Fullwidth Unicode digits (U+FF10–U+FF19, e.g. `１２３.４５`) are treated as their ASCII\n  equivalents, so `CAST('１２３.４５' AS DECIMAL(10,2))` returns `123.45`.\n- Scientific notation (e.g. `1.23E+5`) is supported.\n- Special values (`inf`, `infinity`, `nan`) produce `NULL`.\n\n### String to Timestamp\n\nComet's native `CAST(string AS TIMESTAMP)` implementation supports all timestamp formats accepted\nby Apache Spark, including ISO 8601 date-time strings, date-only strings, time-only strings\n(`HH:MM:SS`), embedded timezone offsets (e.g. `+07:30`, `GMT-01:00`, `UTC`), named timezone\nsuffixes (e.g. `Europe/Moscow`), and the full Spark timestamp year range\n(-290308 to 294247). Note that `CAST(string AS DATE)` is only compatible for years between\n262143 BC and 262142 AD due to an underlying library limitation.\n\n### Legacy Mode\n\n<!--BEGIN:CAST_LEGACY_TABLE-->\n<!--END:CAST_LEGACY_TABLE-->\n\n### Try Mode\n\n<!--BEGIN:CAST_TRY_TABLE-->\n<!--END:CAST_TRY_TABLE-->\n\n### ANSI Mode\n\n<!--BEGIN:CAST_ANSI_TABLE-->\n<!--END:CAST_ANSI_TABLE-->\n\nSee the [tracking issue](https://github.com/apache/datafusion-comet/issues/286) for more details.\n"
  },
  {
    "path": "docs/source/user-guide/latest/configs.md",
    "content": "<!---\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Configuration Settings\n\nComet provides the following configuration settings.\n\n## Scan Configuration Settings\n\n<!--BEGIN:CONFIG_TABLE[scan]-->\n<!--END:CONFIG_TABLE-->\n\n## Parquet Reader Configuration Settings\n\n<!--BEGIN:CONFIG_TABLE[parquet]-->\n<!--END:CONFIG_TABLE-->\n\n## Query Execution Settings\n\n<!--BEGIN:CONFIG_TABLE[exec]-->\n<!--END:CONFIG_TABLE-->\n\n## Viewing Explain Plan & Fallback Reasons\n\nThese settings can be used to determine which parts of the plan are accelerated by Comet and to see why some parts of the plan could not be supported by Comet.\n\n<!--BEGIN:CONFIG_TABLE[exec_explain]-->\n<!--END:CONFIG_TABLE-->\n\n## Shuffle Configuration Settings\n\n<!--BEGIN:CONFIG_TABLE[shuffle]-->\n<!--END:CONFIG_TABLE-->\n\n## Memory & Tuning Configuration Settings\n\n<!--BEGIN:CONFIG_TABLE[tuning]-->\n<!--END:CONFIG_TABLE-->\n\n## Development & Testing Settings\n\n<!--BEGIN:CONFIG_TABLE[testing]-->\n<!--END:CONFIG_TABLE-->\n\n## Enabling or Disabling Individual Operators\n\n<!--BEGIN:CONFIG_TABLE[enable_exec]-->\n<!--END:CONFIG_TABLE-->\n\n## Enabling or Disabling Individual Scalar Expressions\n\n<!--BEGIN:CONFIG_TABLE[enable_expr]-->\n<!--END:CONFIG_TABLE-->\n\n## Enabling or Disabling Individual Aggregate Expressions\n\n<!--BEGIN:CONFIG_TABLE[enable_agg_expr]-->\n<!--END:CONFIG_TABLE-->\n"
  },
  {
    "path": "docs/source/user-guide/latest/datasources.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Supported Spark Data Sources\n\n## File Formats\n\n### Parquet\n\nWhen `spark.comet.scan.enabled` is enabled, Parquet scans will be performed natively by Comet if all data types\nin the schema are supported. When this option is not enabled, the scan will fall back to Spark. In this case,\nenabling `spark.comet.convert.parquet.enabled` will immediately convert the data into Arrow format, allowing native\nexecution to happen after that, but the process may not be efficient.\n\n### Apache Iceberg\n\nComet accelerates Iceberg scans of Parquet files. See the [Iceberg Guide] for more information.\n\n[Iceberg Guide]: iceberg.md\n\n### CSV\n\nComet provides experimental native CSV scan support. When `spark.comet.scan.csv.v2.enabled` is enabled, CSV files\nare read natively for improved performance. This feature is experimental and performance benefits are\nworkload-dependent.\n\nAlternatively, when `spark.comet.convert.csv.enabled` is enabled, data from Spark's CSV reader is immediately\nconverted into Arrow format, allowing native execution to happen after that.\n\n### JSON\n\nComet does not provide native JSON scan, but when `spark.comet.convert.json.enabled` is enabled, data is immediately\nconverted into Arrow format, allowing native execution to happen after that.\n\n## Data Catalogs\n\n### Apache Iceberg\n\nSee the dedicated [Comet and Iceberg Guide](iceberg.md).\n\n## Supported Storages\n\nComet supports most standard storage systems, such as local file system and object storage.\n\n### HDFS\n\nApache DataFusion Comet native reader seamlessly scans files from remote HDFS for [supported formats](#supported-spark-data-sources)\n\n### Using experimental native DataFusion reader\n\nUnlike to native Comet reader the Datafusion reader fully supports nested types processing. This reader is currently experimental only\n\nTo build Comet with native DataFusion reader and remote HDFS support it is required to have a JDK installed\n\nExample:\nBuild a Comet for `spark-3.5` provide a JDK path in `JAVA_HOME`\nProvide the JRE linker path in `RUSTFLAGS`, the path can vary depending on the system. 
## Data Catalogs\n\n### Apache Iceberg\n\nSee the dedicated [Comet and Iceberg Guide](iceberg.md).\n\n## Supported Storages\n\nComet supports most standard storage systems, such as local file system and object storage.\n\n### HDFS\n\nThe Apache DataFusion Comet native reader seamlessly scans files from remote HDFS for [supported formats](#supported-spark-data-sources).\n\n### Using experimental native DataFusion reader\n\nUnlike the native Comet reader, the DataFusion reader fully supports nested type processing. This reader is currently experimental.\n\nBuilding Comet with the native DataFusion reader and remote HDFS support requires a JDK to be installed.\n\nExample: build Comet for `spark-3.5`, providing the JDK path in `JAVA_HOME` and the JRE linker path in `RUSTFLAGS`. The linker path varies by system, but the JRE linker typically ships as part of the JDK.\n\n```shell\nexport JAVA_HOME=\"/opt/homebrew/opt/openjdk@11\"\nmake release PROFILES=\"-Pspark-3.5\" COMET_FEATURES=hdfs RUSTFLAGS=\"-L $JAVA_HOME/libexec/openjdk.jdk/Contents/Home/lib/server\"\n```\n\nStart Comet with the experimental reader and HDFS support as [described](installation.md/#run-spark-shell-with-comet-enabled),\nadding the following parameters:\n\n```shell\n--conf spark.comet.scan.impl=native_datafusion \\\n--conf spark.hadoop.fs.defaultFS=\"hdfs://namenode:9000\" \\\n--conf spark.hadoop.dfs.client.use.datanode.hostname=true \\\n--conf dfs.client.use.datanode.hostname=true\n```\n\nQuery a struct type from remote HDFS:\n\n```scala\nspark.read.parquet(\"hdfs://namenode:9000/user/data\").show(false)\n\nroot\n |-- id: integer (nullable = true)\n |-- first_name: string (nullable = true)\n |-- personal_info: struct (nullable = true)\n |    |-- firstName: string (nullable = true)\n |    |-- lastName: string (nullable = true)\n |    |-- ageInYears: integer (nullable = true)\n\n25/01/30 16:50:43 INFO core/src/lib.rs: Comet native library version $COMET_VERSION initialized\n== Physical Plan ==\n* CometColumnarToRow (2)\n+- CometNativeScan:  (1)\n\n\n(1) CometNativeScan:\nOutput [3]: [id#0, first_name#1, personal_info#4]\nArguments: [id#0, first_name#1, personal_info#4]\n\n(2) CometColumnarToRow [codegen id : 1]\nInput [3]: [id#0, first_name#1, personal_info#4]\n\n\n25/01/30 16:50:44 INFO fs-hdfs-0.1.12/src/hdfs.rs: Connecting to Namenode (hdfs://namenode:9000)\n+---+----------+-----------------+\n|id |first_name|personal_info    |\n+---+----------+-----------------+\n|2  |Jane      |{Jane, Smith, 34}|\n|1  |John      |{John, Doe, 28}  |\n+---+----------+-----------------+\n```\n\nVerify that the scan node shown in the plan is `CometNativeScan`.\n\nFor more details, see the [HDFS Reader README](https://github.com/apache/datafusion-comet/blob/main/native/hdfs/README.md).\n\n### Local HDFS development\n\n- Configure the local machine network by adding the hostnames to `/etc/hosts`:\n\n```shell\n127.0.0.1\tlocalhost   namenode datanode1 datanode2 datanode3\n::1             localhost namenode datanode1 datanode2 datanode3\n```\n\n- Start a local HDFS cluster with 3 datanodes; the namenode URL is `namenode:9000`:\n\n```shell\ndocker compose -f kube/local/hdfs-docker-compose.yml up\n```\n\n- Check that the local namenode is up and running at `http://localhost:9870/dfshealth.html#tab-overview`\n- Build the project with HDFS support:\n\n```shell\nJAVA_HOME=\"/opt/homebrew/opt/openjdk@11\" make release PROFILES=\"-Pspark-3.5\" COMET_FEATURES=hdfs RUSTFLAGS=\"-L /opt/homebrew/opt/openjdk@11/libexec/openjdk.jdk/Contents/Home/lib/server\"\n```\n\n- Run a local test:\n\n```scala\n    withSQLConf(\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION,\n      SQLConf.USE_V1_SOURCE_LIST.key -> \"parquet\",\n      \"fs.defaultFS\" -> \"hdfs://namenode:9000\",\n      \"dfs.client.use.datanode.hostname\" -> \"true\") {\n      val df = spark.read.parquet(\"/tmp/2\")\n      df.show(false)\n      df.explain(\"extended\")\n    }\n```\n\nOr use `spark-shell` with HDFS support as described [above](#using-experimental-native-datafusion-reader)\n\n### S3\n\n#### Root CA Certificates\n\nOne major difference between Spark and Comet is the mechanism for discovering Root\nCA Certificates.\n
Spark uses the JVM to read CA Certificates from the Java Trust Store, but native Comet\nscans use system Root CA Certificates (typically stored\nin `/etc/ssl/certs` on Linux). These scans will not be able to interact with S3 if the Root CA Certificates are not\ninstalled.\n\n#### Supported Credential Providers\n\nAWS credential providers can be configured using the `fs.s3a.aws.credentials.provider` configuration. The following table shows the supported credential providers and their configuration options:\n\n| Credential provider                                                                                                                                                                          | Description                                                                                                     | Supported Options                                                                                                               |\n| -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | --------------------------------------------------------------------------------------------------------------- | ------------------------------------------------------------------------------------------------------------------------------- |\n| `org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider`                                                                                                                                      | Access S3 using access key and secret key                                                                       | `fs.s3a.access.key`, `fs.s3a.secret.key`                                                                                        |\n| `org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider`                                                                                                                                   | Access S3 using temporary credentials                                                                           | `fs.s3a.access.key`, `fs.s3a.secret.key`, `fs.s3a.session.token`                                                                |\n| `org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider`                                                                                                                                | Access S3 using AWS STS assume role                                                                             | `fs.s3a.assumed.role.arn`, `fs.s3a.assumed.role.session.name` (optional), `fs.s3a.assumed.role.credentials.provider` (optional) |\n| `org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider`                                                                                                                               | Access S3 using EC2 instance profile or ECS task credentials (tries ECS first, then IMDS)                       | None (auto-detected)                                                                                                            |\n| `org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider`<br/>`com.amazonaws.auth.AnonymousAWSCredentials`<br/>`software.amazon.awssdk.auth.credentials.AnonymousCredentialsProvider`       | Access S3 without authentication (public buckets only)                                                          | None                                                                                                      
                      |\n| `com.amazonaws.auth.EnvironmentVariableCredentialsProvider`<br/>`software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider`                                             | Load credentials from environment variables (`AWS_ACCESS_KEY_ID`, `AWS_SECRET_ACCESS_KEY`, `AWS_SESSION_TOKEN`) | None                                                                                                                            |\n| `com.amazonaws.auth.InstanceProfileCredentialsProvider`<br/>`software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider`                                                     | Access S3 using EC2 instance metadata service (IMDS)                                                            | None                                                                                                                            |\n| `com.amazonaws.auth.ContainerCredentialsProvider`<br/>`software.amazon.awssdk.auth.credentials.ContainerCredentialsProvider`<br/>`com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper` | Access S3 using ECS task credentials                                                                            | None                                                                                                                            |\n| `com.amazonaws.auth.WebIdentityTokenCredentialsProvider`<br/>`software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider`                                               | Authenticate using web identity token file                                                                      | None                                                                                                                            |\n\nMultiple credential providers can be specified in a comma-separated list using the `fs.s3a.aws.credentials.provider` configuration, just as Hadoop AWS supports. If `fs.s3a.aws.credentials.provider` is not configured, Hadoop S3A's default credential provider chain will be used. All configuration options also support bucket-specific overrides using the pattern `fs.s3a.bucket.{bucket-name}.{option}`.\n"
  },
  {
    "path": "docs/source/user-guide/latest/datatypes.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Supported Spark Data Types\n\nComet supports the following Spark data types. Refer to the [Comet Compatibility Guide] for information about data\ntype support in scans and other operators.\n\n[Comet Compatibility Guide]: compatibility.md\n\n<!-- based on org.apache.comet.serde.QueryPlanSerde.supportedDataType -->\n\n| Data Type    |\n| ------------ |\n| Null         |\n| Boolean      |\n| Byte         |\n| Short        |\n| Integer      |\n| Long         |\n| Float        |\n| Double       |\n| Decimal      |\n| String       |\n| Binary       |\n| Date         |\n| Timestamp    |\n| TimestampNTZ |\n| Struct       |\n| Array        |\n| Map          |\n"
  },
  {
    "path": "docs/source/user-guide/latest/expressions.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Supported Spark Expressions\n\nComet supports the following Spark expressions. Expressions that are marked as Spark-compatible will either run\nnatively in Comet and provide the same results as Spark, or will fall back to Spark for cases that would not\nbe compatible.\n\nAll expressions are enabled by default, but most can be disabled by setting\n`spark.comet.expression.EXPRNAME.enabled=false`, where `EXPRNAME` is the expression name as specified in\nthe following tables, such as `Length`, or `StartsWith`. See the [Comet Configuration Guide] for a full list\nof expressions that be disabled.\n\nExpressions that are not Spark-compatible will fall back to Spark by default and can be enabled by setting\n`spark.comet.expression.EXPRNAME.allowIncompatible=true`.\n\n## Conditional Expressions\n\n| Expression | SQL                                         | Spark-Compatible? |\n| ---------- | ------------------------------------------- | ----------------- |\n| CaseWhen   | `CASE WHEN expr THEN expr ELSE expr END`    | Yes               |\n| If         | `IF(predicate_expr, true_expr, false_expr)` | Yes               |\n\n## Predicate Expressions\n\n| Expression         | SQL           | Spark-Compatible? |\n| ------------------ | ------------- | ----------------- |\n| And                | `AND`         | Yes               |\n| EqualTo            | `=`           | Yes               |\n| EqualNullSafe      | `<=>`         | Yes               |\n| GreaterThan        | `>`           | Yes               |\n| GreaterThanOrEqual | `>=`          | Yes               |\n| LessThan           | `<`           | Yes               |\n| LessThanOrEqual    | `<=`          | Yes               |\n| In                 | `IN`          | Yes               |\n| IsNotNull          | `IS NOT NULL` | Yes               |\n| IsNull             | `IS NULL`     | Yes               |\n| InSet              | `IN (...)`    | Yes               |\n| Not                | `NOT`         | Yes               |\n| Or                 | `OR`          | Yes               |\n\n## String Functions\n\n| Expression      | Spark-Compatible? 
| Compatibility Notes                                                                                        |\n| --------------- | ----------------- | ---------------------------------------------------------------------------------------------------------- |\n| Ascii           | Yes               |                                                                                                            |\n| BitLength       | Yes               |                                                                                                            |\n| Chr             | Yes               |                                                                                                            |\n| Concat          | Yes               | Only string inputs are supported                                                                           |\n| ConcatWs        | Yes               |                                                                                                            |\n| Contains        | Yes               |                                                                                                            |\n| EndsWith        | Yes               |                                                                                                            |\n| InitCap         | No                | Behavior is different in some cases, such as hyphenated names.                                             |\n| Left            | Yes               | Length argument must be a literal value                                                                    |\n| Length          | Yes               |                                                                                                            |\n| Like            | Yes               |                                                                                                            |\n| Lower           | No                | Results can vary depending on locale and character set. 
Requires `spark.comet.caseConversion.enabled=true` |\n| OctetLength     | Yes               |                                                                                                            |\n| Reverse         | Yes               |                                                                                                            |\n| RLike           | No                | Uses Rust regexp engine, which has different behavior to Java regexp engine                                |\n| StartsWith      | Yes               |                                                                                                            |\n| StringInstr     | Yes               |                                                                                                            |\n| StringRepeat    | Yes               | Negative argument for number of times to repeat causes exception                                           |\n| StringReplace   | Yes               |                                                                                                            |\n| StringLPad      | Yes               |                                                                                                            |\n| StringRPad      | Yes               |                                                                                                            |\n| StringSpace     | Yes               |                                                                                                            |\n| StringTranslate | Yes               |                                                                                                            |\n| StringTrim      | Yes               |                                                                                                            |\n| StringTrimBoth  | Yes               |                                                                                                            |\n| StringTrimLeft  | Yes               |                                                                                                            |\n| StringTrimRight | Yes               |                                                                                                            |\n| Substring       | Yes               |                                                                                                            |\n| Upper           | No                | Results can vary depending on locale and character set. Requires `spark.comet.caseConversion.enabled=true` |\n\n## JSON Functions\n\n| Expression    | Spark-Compatible? | Compatibility Notes                                                                           |\n| ------------- | ----------------- | --------------------------------------------------------------------------------------------- |\n| GetJsonObject | No                | Spark allows single-quoted JSON and unescaped control characters which Comet does not support |\n\n## Date/Time Functions\n\n| Expression     | SQL                          | Spark-Compatible? 
| Compatibility Notes                                                                                                              |\n| -------------- | ---------------------------- | ----------------- | -------------------------------------------------------------------------------------------------------------------------------- |\n| DateAdd        | `date_add`                   | Yes               |                                                                                                                                  |\n| DateDiff       | `datediff`                   | Yes               |                                                                                                                                  |\n| DateFormat     | `date_format`                | Yes               | Partial support. Only specific format patterns are supported.                                                                    |\n| DateSub        | `date_sub`                   | Yes               |                                                                                                                                  |\n| DatePart       | `date_part(field, source)`   | Yes               | Supported values of `field`: `year`/`month`/`week`/`day`/`dayofweek`/`dayofweek_iso`/`doy`/`quarter`/`hour`/`minute`             |\n| Days           | `days`                       | Yes               | V2 partition transform. Supports DateType and TimestampType inputs.                                                              |\n| Extract        | `extract(field FROM source)` | Yes               | Supported values of `field`: `year`/`month`/`week`/`day`/`dayofweek`/`dayofweek_iso`/`doy`/`quarter`/`hour`/`minute`             |\n| FromUnixTime   | `from_unixtime`              | No                | Does not support format, supports only -8334601211038 <= sec <= 8210266876799                                                    |\n| Hour           | `hour`                       | No                | Incorrectly applies timezone conversion to TimestampNTZ inputs ([#3180](https://github.com/apache/datafusion-comet/issues/3180)) |\n| LastDay        | `last_day`                   | Yes               |                                                                                                                                  |\n| Minute         | `minute`                     | No                | Incorrectly applies timezone conversion to TimestampNTZ inputs ([#3180](https://github.com/apache/datafusion-comet/issues/3180)) |\n| Second         | `second`                     | No                | Incorrectly applies timezone conversion to TimestampNTZ inputs ([#3180](https://github.com/apache/datafusion-comet/issues/3180)) |\n| TruncDate      | `trunc`                      | Yes               |                                                                                                                                  |\n| TruncTimestamp | `date_trunc`                 | No                | Incorrect results in non-UTC timezones ([#2649](https://github.com/apache/datafusion-comet/issues/2649))                         |\n| UnixDate       | `unix_date`                  | Yes               |                                                                                                                                  |\n| UnixTimestamp  | `unix_timestamp`             | Yes               |                                                                                                                        
          |\n| Year           | `year`                       | Yes               |                                                                                                                                  |\n| Month          | `month`                      | Yes               |                                                                                                                                  |\n| DayOfMonth     | `day`/`dayofmonth`           | Yes               |                                                                                                                                  |\n| DayOfWeek      | `dayofweek`                  | Yes               |                                                                                                                                  |\n| WeekDay        | `weekday`                    | Yes               |                                                                                                                                  |\n| DayOfYear      | `dayofyear`                  | Yes               |                                                                                                                                  |\n| WeekOfYear     | `weekofyear`                 | Yes               |                                                                                                                                  |\n| Quarter        | `quarter`                    | Yes               |                                                                                                                                  |\n\n## Math Expressions\n\n| Expression     | SQL       | Spark-Compatible? | Compatibility Notes               |\n| -------------- | --------- | ----------------- | --------------------------------- |\n| Abs            | `abs`     | Yes               |                                   |\n| Acos           | `acos`    | Yes               |                                   |\n| Add            | `+`       | Yes               |                                   |\n| Asin           | `asin`    | Yes               |                                   |\n| Atan           | `atan`    | Yes               |                                   |\n| Atan2          | `atan2`   | Yes               |                                   |\n| BRound         | `bround`  | Yes               |                                   |\n| Ceil           | `ceil`    | Yes               |                                   |\n| Cos            | `cos`     | Yes               |                                   |\n| Cosh           | `cosh`    | Yes               |                                   |\n| Cot            | `cot`     | Yes               |                                   |\n| Divide         | `/`       | Yes               |                                   |\n| Exp            | `exp`     | Yes               |                                   |\n| Expm1          | `expm1`   | Yes               |                                   |\n| Floor          | `floor`   | Yes               |                                   |\n| Hex            | `hex`     | Yes               |                                   |\n| IntegralDivide | `div`     | Yes               |                                   |\n| IsNaN          | `isnan`   | Yes               |                                   |\n| Log            | `log`     | Yes               |                                   |\n| Log2           | `log2`    | Yes               |   
|\n| Log10          | `log10`        | Yes               |                                   |\n| Multiply       | `*`            | Yes               |                                   |\n| Pow            | `power`        | Yes               |                                   |\n| Rand           | `rand`         | Yes               |                                   |\n| Randn          | `randn`        | Yes               |                                   |\n| Remainder      | `%`            | Yes               |                                   |\n| Round          | `round`        | Yes               |                                   |\n| Signum         | `signum`       | Yes               |                                   |\n| Sin            | `sin`          | Yes               |                                   |\n| Sinh           | `sinh`         | Yes               |                                   |\n| Sqrt           | `sqrt`         | Yes               |                                   |\n| Subtract       | `-`            | Yes               |                                   |\n| Tan            | `tan`          | Yes               |                                   |\n| Tanh           | `tanh`         | Yes               |                                   |\n| TryAdd         | `try_add`      | Yes               | Only integer inputs are supported |\n| TryDivide      | `try_divide`   | Yes               | Only integer inputs are supported |\n| TryMultiply    | `try_multiply` | Yes               | Only integer inputs are supported |\n| TrySubtract    | `try_subtract` | Yes               | Only integer inputs are supported |\n| UnaryMinus     | `-`            | Yes               |                                   |\n| Unhex          | `unhex`        | Yes               |                                   |\n\n## Hashing Functions\n\n| Expression  | Spark-Compatible? |\n| ----------- | ----------------- |\n| Md5         | Yes               |\n| Murmur3Hash | Yes               |\n| Sha1        | Yes               |\n| Sha2        | Yes               |\n| XxHash64    | Yes               |\n\n## Bitwise Expressions\n\n| Expression   | SQL  | Spark-Compatible? |\n| ------------ | ---- | ----------------- |\n| BitwiseAnd   | `&`  | Yes               |\n| BitwiseCount |      | Yes               |\n| BitwiseGet   |      | Yes               |\n| BitwiseOr    | `\|` | Yes               |\n| BitwiseNot   | `~`  | Yes               |\n| BitwiseXor   | `^`  | Yes               |\n| ShiftLeft    | `<<` | Yes               |\n| ShiftRight   | `>>` | Yes               |\n\n## Aggregate Expressions\n\n| Expression    | SQL        | Spark-Compatible? 
| Compatibility Notes                                              |\n| ------------- | ---------- | ------------------------- | ---------------------------------------------------------------- |\n| Average       |            | Yes, except for ANSI mode |                                                                  |\n| BitAndAgg     |            | Yes                       |                                                                  |\n| BitOrAgg      |            | Yes                       |                                                                  |\n| BitXorAgg     |            | Yes                       |                                                                  |\n| BoolAnd       | `bool_and` | Yes                       |                                                                  |\n| BoolOr        | `bool_or`  | Yes                       |                                                                  |\n| Corr          |            | Yes                       |                                                                  |\n| Count         |            | Yes                       |                                                                  |\n| CovPopulation |            | Yes                       |                                                                  |\n| CovSample     |            | Yes                       |                                                                  |\n| First         |            | No                        | This function is not deterministic. Results may not match Spark. |\n| Last          |            | No                        | This function is not deterministic. Results may not match Spark. |\n| Max           |            | Yes                       |                                                                  |\n| Min           |            | Yes                       |                                                                  |\n| StddevPop     |            | Yes                       |                                                                  |\n| StddevSamp    |            | Yes                       |                                                                  |\n| Sum           |            | Yes, except for ANSI mode |                                                                  |\n| VariancePop   |            | Yes                       |                                                                  |\n| VarianceSamp  |            | Yes                       |                                                                  |\n\n## Window Functions\n\n```{warning}\nWindow support is disabled by default due to known correctness issues. Tracking issue: [#2721](https://github.com/apache/datafusion-comet/issues/2721).\n```\n\nComet supports using the following aggregate functions within window contexts with PARTITION BY and ORDER BY clauses.\n\n| Expression | Spark-Compatible? | Compatibility Notes |\n| ---------- | ----------------- | ------------------- |\n| Count      | Yes               |                     |\n| Max        | Yes               |                     |\n| Min        | Yes               |                     |\n| Sum        | Yes               |                     |\n\n**Note:** Dedicated window functions such as `rank`, `dense_rank`, `row_number`, `lag`, `lead`, `ntile`, `cume_dist`, `percent_rank`, and `nth_value` are not currently supported and will fall back to Spark.\n\n## Array Expressions\n\n| Expression     | Spark-Compatible? 
| Compatibility Notes |\n| -------------- | ----------------- | ------------------- |\n| ArrayAppend    | Yes               |                     |\n| ArrayCompact   | No                |                     |\n| ArrayContains  | Yes               |                     |\n| ArrayDistinct  | Yes               |                     |\n| ArrayExcept    | No                |                     |\n| ArrayFilter    | Yes               | Only supports the case where the function is `IsNotNull` |\n| ArrayInsert    | No                |                     |\n| ArrayIntersect | No                |                     |\n| ArrayJoin      | No                |                     |\n| ArrayMax       | Yes               |                     |\n| ArrayMin       | Yes               |                     |\n| ArrayRemove    | Yes               |                     |\n| ArrayRepeat    | No                |                     |\n| ArrayUnion     | No                | Behaves differently from Spark. Comet sorts the input arrays before performing the union, while Spark preserves the order of the first array and appends unique elements from the second. |\n| ArraysOverlap  | No                |                     |\n| CreateArray    | Yes               |                     |\n| ElementAt      | Yes               | Input must be an array. Map inputs are not supported. |\n| Flatten        | Yes               |                     |\n| GetArrayItem   | Yes               |                     |\n
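\nFor example (an illustrative `spark-shell` sketch, assuming the incompatible version has been explicitly enabled via `spark.comet.expression.ArrayUnion.allowIncompatible=true`):\n\n```scala\n// Spark preserves the first array's order and appends unique elements of the\n// second, returning [3, 1, 5, 2]; Comet sorts the inputs first, so it may\n// return the elements in a different order, e.g. [1, 2, 3, 5].\nspark.sql(\"SELECT array_union(array(3, 1, 5), array(1, 2))\").show(false)\n```\n\n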
## Map Expressions\n\n| Expression    | Spark-Compatible? |\n| ------------- | ----------------- |\n| GetMapValue   | Yes               |\n| MapKeys       | Yes               |\n| MapEntries    | Yes               |\n| MapValues     | Yes               |\n| MapFromArrays | Yes               |\n\n## Struct Expressions\n\n| Expression           | Spark-Compatible? | Compatibility Notes |\n| -------------------- | ----------------- | ------------------- |\n| CreateNamedStruct    | Yes               |                     |\n| GetArrayStructFields | Yes               |                     |\n| GetStructField       | Yes               |                     |\n| JsonToStructs        | No                | Partial support. Requires an explicit schema. |\n| StructsToJson        | No                | Does not support Infinity/-Infinity for numeric types ([#3016](https://github.com/apache/datafusion-comet/issues/3016)) |\n\n## Conversion Expressions\n\n| Expression | Spark-Compatible?        | Compatibility Notes |\n| ---------- | ------------------------ | ------------------- |\n| Cast       | Depends on specific cast | See the [Comet Compatibility Guide] for a list of supported cast expressions and known issues |\n\n## SortOrder\n\n| Expression | Spark-Compatible? 
| Compatibility Notes |\n| ---------- | ----------------- | ------------------- |\n| NullsFirst | Yes               |                     |\n| NullsLast  | Yes               |                     |\n| Ascending  | Yes               |                     |\n| Descending | Yes               |                     |\n\n## Other\n\n| Expression                   | Spark-Compatible? | Compatibility Notes                                                         |\n| ---------------------------- | ----------------- | --------------------------------------------------------------------------- |\n| Alias                        | Yes               |                                                                             |\n| AttributeReference           | Yes               |                                                                             |\n| BloomFilterMightContain      | Yes               |                                                                             |\n| Coalesce                     | Yes               |                                                                             |\n| CheckOverflow                | Yes               |                                                                             |\n| KnownFloatingPointNormalized | Yes               |                                                                             |\n| Literal                      | Yes               |                                                                             |\n| MakeDecimal                  | Yes               |                                                                             |\n| MonotonicallyIncreasingID    | Yes               |                                                                             |\n| NormalizeNaNAndZero          | Yes               |                                                                             |\n| PromotePrecision             | Yes               |                                                                             |\n| RegExpReplace                | No                | Uses Rust regexp engine, which has different behavior to Java regexp engine |\n| ScalarSubquery               | Yes               |                                                                             |\n| SparkPartitionID             | Yes               |                                                                             |\n| ToPrettyString               | Yes               |                                                                             |\n| UnscaledValue                | Yes               |                                                                             |\n\n[Comet Configuration Guide]: configs.md\n[Comet Compatibility Guide]: compatibility.md\n"
  },
  {
    "path": "docs/source/user-guide/latest/iceberg.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Accelerating Apache Iceberg Parquet Scans using Comet\n\n## Native Reader\n\nComet's native Iceberg reader relies on reflection to extract `FileScanTask`s from Iceberg, which are\nthen serialized to Comet's native execution engine (see\n[PR #2528](https://github.com/apache/datafusion-comet/pull/2528)).\n\nThe example below uses Spark's package downloader to retrieve Comet 0.14.0 and Iceberg\n1.8.1, but Comet has been tested with Iceberg 1.5, 1.7, 1.8, 1.9, and 1.10. The native Iceberg\nreader is enabled by default. To disable it, set `spark.comet.scan.icebergNative.enabled=false`.\n\n```shell\n$SPARK_HOME/bin/spark-shell \\\n    --packages org.apache.datafusion:comet-spark-spark3.5_2.12:0.14.0,org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.8.1,org.apache.iceberg:iceberg-core:1.8.1 \\\n    --repositories https://repo1.maven.org/maven2/ \\\n    --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \\\n    --conf spark.sql.catalog.spark_catalog=org.apache.iceberg.spark.SparkCatalog \\\n    --conf spark.sql.catalog.spark_catalog.type=hadoop \\\n    --conf spark.sql.catalog.spark_catalog.warehouse=/tmp/warehouse \\\n    --conf spark.plugins=org.apache.spark.CometPlugin \\\n    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n    --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \\\n    --conf spark.comet.explainFallback.enabled=true \\\n    --conf spark.memory.offHeap.enabled=true \\\n    --conf spark.memory.offHeap.size=2g\n```\n\n### Tuning\n\nComet’s native Iceberg reader supports fetching multiple files in parallel to hide I/O latency with the\nconfig `spark.comet.scan.icebergNative.dataFileConcurrencyLimit`. 
### Supported features\n\nThe native Iceberg reader supports the following features:\n\n**Table specifications:**\n\n- Iceberg table spec v1 and v2 (v3 will fall back to Spark)\n\n**Schema and data types:**\n\n- All primitive types including UUID\n- Complex types: arrays, maps, and structs\n- Schema evolution (adding and dropping columns)\n\n**Time travel and branching:**\n\n- `VERSION AS OF` queries to read historical snapshots\n- Branch reads for accessing named branches\n\n**Delete handling (Merge-On-Read tables):**\n\n- Positional deletes\n- Equality deletes\n- Mixed delete types\n\n**Filter pushdown:**\n\n- Equality and comparison predicates (`=`, `!=`, `>`, `>=`, `<`, `<=`)\n- Logical operators (`AND`, `OR`)\n- NULL checks (`IS NULL`, `IS NOT NULL`)\n- `IN` and `NOT IN` list operations\n- `BETWEEN` operations\n\n**Partitioning:**\n\n- Standard partitioning with partition pruning\n- Date partitioning with `days()` transform\n- Bucket partitioning\n- Truncate transform\n- Hour transform\n\n**Storage:**\n\n- Local filesystem\n- Hadoop Distributed File System (HDFS)\n- S3-compatible storage (AWS S3, MinIO)\n\n### REST Catalog\n\nComet's native Iceberg reader also supports REST catalogs. The following example shows how to\nconfigure Spark to use a REST catalog with Comet's native Iceberg scan:\n\n```shell\n$SPARK_HOME/bin/spark-shell \\\n    --packages org.apache.datafusion:comet-spark-spark3.5_2.12:0.14.0,org.apache.iceberg:iceberg-spark-runtime-3.5_2.12:1.8.1,org.apache.iceberg:iceberg-core:1.8.1 \\\n    --repositories https://repo1.maven.org/maven2/ \\\n    --conf spark.sql.extensions=org.apache.iceberg.spark.extensions.IcebergSparkSessionExtensions \\\n    --conf spark.sql.catalog.rest_cat=org.apache.iceberg.spark.SparkCatalog \\\n    --conf spark.sql.catalog.rest_cat.catalog-impl=org.apache.iceberg.rest.RESTCatalog \\\n    --conf spark.sql.catalog.rest_cat.uri=http://localhost:8181 \\\n    --conf spark.sql.catalog.rest_cat.warehouse=/tmp/warehouse \\\n    --conf spark.plugins=org.apache.spark.CometPlugin \\\n    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n    --conf spark.sql.extensions=org.apache.comet.CometSparkSessionExtensions \\\n    --conf spark.comet.explainFallback.enabled=true \\\n    --conf spark.memory.offHeap.enabled=true \\\n    --conf spark.memory.offHeap.size=2g\n```\n\nNote that REST catalogs require explicit namespace creation before creating tables:\n\n```scala\nscala> spark.sql(\"CREATE NAMESPACE rest_cat.db\")\nscala> spark.sql(\"CREATE TABLE rest_cat.db.test_table (id INT, name STRING) USING iceberg\")\nscala> spark.sql(\"INSERT INTO rest_cat.db.test_table VALUES (1, 'Alice'), (2, 'Bob')\")\nscala> spark.sql(\"SELECT * FROM rest_cat.db.test_table\").show()\n```\n\n### Current limitations\n\nThe following scenarios will fall back to Spark's native Iceberg reader:\n\n- Iceberg table spec v3 scans\n- Iceberg writes (reads are accelerated, writes use Spark)\n- Tables backed by Avro or ORC data files (only Parquet is accelerated)\n- Tables partitioned on `BINARY` or `DECIMAL` (with precision >28) columns\n- Scans with residual filters using `truncate`, `bucket`, `year`, `month`, `day`, or `hour`\n  transform functions (partition pruning still works, but row-level filtering of these\n  transforms falls back)\n"
  },
  {
    "path": "docs/source/user-guide/latest/index.rst",
    "content": ".. Licensed to the Apache Software Foundation (ASF) under one\n.. or more contributor license agreements.  See the NOTICE file\n.. distributed with this work for additional information\n.. regarding copyright ownership.  The ASF licenses this file\n.. to you under the Apache License, Version 2.0 (the\n.. \"License\"); you may not use this file except in compliance\n.. with the License.  You may obtain a copy of the License at\n\n..   http://www.apache.org/licenses/LICENSE-2.0\n\n.. Unless required by applicable law or agreed to in writing,\n.. software distributed under the License is distributed on an\n.. \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n.. KIND, either express or implied.  See the License for the\n.. specific language governing permissions and limitations\n.. under the License.\n\n.. image:: /_static/images/DataFusionComet-Logo-Light.png\n  :alt: DataFusion Comet Logo\n\n================================\nComet $COMET_VERSION User Guide\n================================\n\n.. _toc.user-guide-links-$COMET_VERSION:\n.. toctree::\n   :maxdepth: 1\n   :caption: Comet $COMET_VERSION User Guide\n\n   Installing Comet <installation>\n   Building From Source <source>\n   Supported Data Sources <datasources>\n   Supported Data Types <datatypes>\n   Supported Operators <operators>\n   Supported Expressions <expressions>\n   Configuration Settings <configs>\n   Compatibility Guide <compatibility>\n   Tuning Guide <tuning>\n   Metrics Guide <metrics>\n   Iceberg Guide <iceberg>\n   Kubernetes Guide <kubernetes>\n"
  },
  {
    "path": "docs/source/user-guide/latest/installation.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Installing DataFusion Comet\n\n## Prerequisites\n\nMake sure the following requirements are met and software installed on your machine.\n\n### Supported Operating Systems\n\n- Linux\n- Apple macOS (Intel and Apple Silicon)\n\n### Supported Spark Versions\n\nComet $COMET_VERSION supports the following versions of Apache Spark.\n\nWe recommend only using Comet with Spark versions where we currently have both Comet and Spark tests enabled in CI.\nOther versions may work well enough for development and evaluation purposes.\n\n| Spark Version | Java Version | Scala Version | Comet Tests in CI | Spark SQL Tests in CI |\n| ------------- | ------------ | ------------- | ----------------- | --------------------- |\n| 3.4.3         | 11/17        | 2.12/2.13     | Yes               | Yes                   |\n| 3.5.5         | 11/17        | 2.12/2.13     | Yes               | No                    |\n| 3.5.6         | 11/17        | 2.12/2.13     | Yes               | No                    |\n| 3.5.7         | 11/17        | 2.12/2.13     | Yes               | Yes                   |\n| 3.5.8         | 11/17        | 2.12/2.13     | Yes               | Yes                   |\n\nNote that we do not test the full matrix of supported Java and Scala versions in CI for every Spark version.\n\nExperimental support is provided for the following versions of Apache Spark and is intended for development/testing\nuse only and should not be used in production yet.\n\n| Spark Version | Java Version | Scala Version | Comet Tests in CI | Spark SQL Tests in CI |\n| ------------- | ------------ | ------------- | ----------------- | --------------------- |\n| 4.0.1         | 17           | 2.13          | Yes               | Yes                   |\n\nNote that Comet may not fully work with proprietary forks of Apache Spark such as the Spark versions offered by\nCloud Service Providers.\n\n## Using a Published JAR File\n\nComet jar files are available in [Maven Central](https://central.sonatype.com/namespace/org.apache.datafusion) for amd64 and arm64 architectures for Linux. For Apple macOS, it\nis currently necessary to build from source.\n\nFor performance reasons, published Comet jar files target baseline CPUs available in modern data centers. For example,\nthe amd64 build uses the `x86-64-v3` target that adds CPU instructions (_e.g._, AVX2) common after 2013. Similarly, the\narm64 build uses the `neoverse-n1` target, which is a common baseline for ARM cores found in AWS (Graviton2+), GCP, and\nAzure after 2019. 
Here are the direct links for downloading the Comet $COMET_VERSION jar file. Note that these links are not valid if\nyou are viewing the documentation for the latest `SNAPSHOT` release, or for the latest release while it is still going\nthrough the release vote. Release candidate jars can be found [here](https://repository.apache.org/#nexus-search;quick~org.apache.datafusion).\n\n- [Comet plugin for Spark 3.4 / Scala 2.12](https://repo1.maven.org/maven2/org/apache/datafusion/comet-spark-spark3.4_2.12/$COMET_VERSION/comet-spark-spark3.4_2.12-$COMET_VERSION.jar)\n- [Comet plugin for Spark 3.4 / Scala 2.13](https://repo1.maven.org/maven2/org/apache/datafusion/comet-spark-spark3.4_2.13/$COMET_VERSION/comet-spark-spark3.4_2.13-$COMET_VERSION.jar)\n- [Comet plugin for Spark 3.5 / Scala 2.12](https://repo1.maven.org/maven2/org/apache/datafusion/comet-spark-spark3.5_2.12/$COMET_VERSION/comet-spark-spark3.5_2.12-$COMET_VERSION.jar)\n- [Comet plugin for Spark 3.5 / Scala 2.13](https://repo1.maven.org/maven2/org/apache/datafusion/comet-spark-spark3.5_2.13/$COMET_VERSION/comet-spark-spark3.5_2.13-$COMET_VERSION.jar)\n- [Comet plugin for Spark 4.0 / Scala 2.13 (Experimental)](https://repo1.maven.org/maven2/org/apache/datafusion/comet-spark-spark4.0_2.13/$COMET_VERSION/comet-spark-spark4.0_2.13-$COMET_VERSION.jar)\n\n## Building from source\n\nRefer to the [Building from source] guide for instructions for building Comet from source, either from official\nsource releases, or from the latest code in the GitHub repository.\n\n[Building from source]: source.md\n\n## Deploying to Kubernetes\n\nSee the [Comet Kubernetes Guide](kubernetes.md).\n\n## Run Spark Shell with Comet enabled\n\nMake sure `SPARK_HOME` points to the same Spark version as Comet was built for.\n\n```shell\nexport COMET_JAR=spark/target/comet-spark-spark3.5_2.12-$COMET_VERSION.jar\n\n$SPARK_HOME/bin/spark-shell \\\n    --jars $COMET_JAR \\\n    --conf spark.driver.extraClassPath=$COMET_JAR \\\n    --conf spark.executor.extraClassPath=$COMET_JAR \\\n    --conf spark.plugins=org.apache.spark.CometPlugin \\\n    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n    --conf spark.comet.explainFallback.enabled=true \\\n    --conf spark.memory.offHeap.enabled=true \\\n    --conf spark.memory.offHeap.size=16g\n```\n\n### Verify Comet is enabled for a Spark SQL query\n\nCreate a test Parquet source:\n\n```scala\nscala> (0 until 10).toDF(\"a\").write.mode(\"overwrite\").parquet(\"/tmp/test\")\n```\n\nQuery the data from the test source and check that:\n\n- An INFO message shows that the native Comet library has been initialized.\n- The query plan shows Comet operators being used for this query instead of Spark ones.\n\n```scala\nscala> spark.read.parquet(\"/tmp/test\").createOrReplaceTempView(\"t1\")\nscala> spark.sql(\"select * from t1 where a > 5\").explain\nINFO src/lib.rs: Comet native library initialized\n== Physical Plan ==\n        *(1) ColumnarToRow\n        +- CometFilter [a#14], (isnotnull(a#14) AND (a#14 > 5))\n          +- CometScan parquet [a#14] Batched: true, DataFilters: [isnotnull(a#14), (a#14 > 5)],\n             Format: CometParquet, Location: InMemoryFileIndex(1 paths)[file:/tmp/test], PartitionFilters: [],\n             PushedFilters: [IsNotNull(a), GreaterThan(a,5)], ReadSchema: struct<a:int>\n```\n\nWith the configuration `spark.comet.explainFallback.enabled=true`, Comet will log any reasons that prevent a plan from\nbeing executed natively.\n\n```scala\nscala> Seq(1,2,3,4).toDF(\"a\").write.parquet(\"/tmp/test.parquet\")\nWARN CometSparkSessionExtensions$CometExecRule: Comet cannot execute some parts of this plan natively because:\n  - LocalTableScan is not supported\n  - WriteFiles is not supported\n  - Execute InsertIntoHadoopFsRelationCommand is not supported\n```\n\n## Additional Configuration\n\nDepending on your deployment mode, you may also need to set the driver and executor class paths to explicitly include\nComet; otherwise, Spark may use a different class loader for the Comet components than for its internal components,\nwhich will then fail at runtime. For example:\n\n```\n--driver-class-path spark/target/comet-spark-spark3.5_2.12-$COMET_VERSION.jar\n```\n\n
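When using `spark-submit`, the executor-side class path can be supplied as a configuration property alongside the driver flag shown above (a sketch using the same jar path):\n\n```shell\n--conf spark.executor.extraClassPath=spark/target/comet-spark-spark3.5_2.12-$COMET_VERSION.jar\n```\n\n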
Some cluster managers may require additional configuration; see <https://spark.apache.org/docs/latest/cluster-overview.html>.\n\n### Memory tuning\n\nIn addition to Apache Spark memory configuration parameters, Comet introduces additional parameters to configure memory\nallocation for native execution. See [Comet Memory Tuning](./tuning.md) for details.\n"
  },
  {
    "path": "docs/source/user-guide/latest/kubernetes.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Comet Kubernetes Support\n\n## Comet Docker Images\n\nRun the following command from the root of this repository to build the Comet Docker image, or use a [published\nDocker image](https://hub.docker.com/r/apache/datafusion-comet).\n\n```shell\ndocker build -t apache/datafusion-comet -f kube/Dockerfile .\n```\n\n## Example Spark Submit\n\nThe exact syntax will vary depending on the Kubernetes distribution, but an example `spark-submit` command can be\nfound [here](https://github.com/apache/datafusion-comet/tree/main/benchmarks).\n\n## Helm chart\n\nInstall helm Spark operator for Kubernetes\n\n```bash\n# Add the Helm repository\nhelm repo add spark-operator https://kubeflow.github.io/spark-operator\nhelm repo update\n\n# Install the operator into the spark-operator namespace and wait for deployments to be ready\nhelm install spark-operator spark-operator/spark-operator --namespace spark-operator --create-namespace --wait\n```\n\nCheck the operator is deployed\n\n```bash\nhelm status --namespace spark-operator spark-operator\n\nNAME: my-release\nNAMESPACE: spark-operator\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\n```\n\nCreate example Spark application file `spark-pi.yaml`\n\n```bash\napiVersion: sparkoperator.k8s.io/v1beta2\nkind: SparkApplication\nmetadata:\n  name: spark-pi\n  namespace: default\nspec:\n  type: Scala\n  mode: cluster\n  image: apache/datafusion-comet:0.7.0-spark3.5.5-scala2.12-java11\n  imagePullPolicy: IfNotPresent\n  mainClass: org.apache.spark.examples.SparkPi\n  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.12-3.5.5.jar\n  sparkConf:\n    \"spark.executor.extraClassPath\": \"/opt/spark/jars/comet-spark-spark3.5_2.12-0.7.0.jar\"\n    \"spark.driver.extraClassPath\": \"/opt/spark/jars/comet-spark-spark3.5_2.12-0.7.0.jar\"\n    \"spark.plugins\": \"org.apache.spark.CometPlugin\"\n    \"spark.comet.enabled\": \"true\"\n    \"spark.comet.exec.enabled\": \"true\"\n    \"spark.comet.exec.shuffle.enabled\": \"true\"\n    \"spark.comet.exec.shuffle.mode\": \"auto\"\n    \"spark.shuffle.manager\": \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\"\n  sparkVersion: 3.5.6\n  driver:\n    labels:\n      version: 3.5.6\n    cores: 1\n    coreLimit: 1200m\n    memory: 512m\n    serviceAccount: spark-operator-spark\n  executor:\n    labels:\n      version: 3.5.6\n    instances: 1\n    cores: 1\n    coreLimit: 1200m\n    memory: 512m\n```\n\nRefer to [Comet builds](#comet-docker-images)\n\nRun Apache Spark application with Comet enabled\n\n```bash\nkubectl apply -f spark-pi.yaml\nsparkapplication.sparkoperator.k8s.io/spark-pi created\n```\n\nCheck application status\n\n```bash\nkubectl get sparkapp spark-pi\n\nNAME  
## Helm chart\n\nInstall the Spark operator for Kubernetes using Helm:\n\n```bash\n# Add the Helm repository\nhelm repo add spark-operator https://kubeflow.github.io/spark-operator\nhelm repo update\n\n# Install the operator into the spark-operator namespace and wait for deployments to be ready\nhelm install spark-operator spark-operator/spark-operator --namespace spark-operator --create-namespace --wait\n```\n\nCheck that the operator is deployed:\n\n```bash\nhelm status --namespace spark-operator spark-operator\n\nNAME: spark-operator\nNAMESPACE: spark-operator\nSTATUS: deployed\nREVISION: 1\nTEST SUITE: None\n```\n\nCreate an example Spark application file `spark-pi.yaml`:\n\n```yaml\napiVersion: sparkoperator.k8s.io/v1beta2\nkind: SparkApplication\nmetadata:\n  name: spark-pi\n  namespace: default\nspec:\n  type: Scala\n  mode: cluster\n  image: apache/datafusion-comet:0.7.0-spark3.5.5-scala2.12-java11\n  imagePullPolicy: IfNotPresent\n  mainClass: org.apache.spark.examples.SparkPi\n  mainApplicationFile: local:///opt/spark/examples/jars/spark-examples_2.12-3.5.5.jar\n  sparkConf:\n    \"spark.executor.extraClassPath\": \"/opt/spark/jars/comet-spark-spark3.5_2.12-0.7.0.jar\"\n    \"spark.driver.extraClassPath\": \"/opt/spark/jars/comet-spark-spark3.5_2.12-0.7.0.jar\"\n    \"spark.plugins\": \"org.apache.spark.CometPlugin\"\n    \"spark.comet.enabled\": \"true\"\n    \"spark.comet.exec.enabled\": \"true\"\n    \"spark.comet.exec.shuffle.enabled\": \"true\"\n    \"spark.comet.exec.shuffle.mode\": \"auto\"\n    \"spark.shuffle.manager\": \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\"\n  sparkVersion: 3.5.5\n  driver:\n    labels:\n      version: 3.5.5\n    cores: 1\n    coreLimit: 1200m\n    memory: 512m\n    serviceAccount: spark-operator-spark\n  executor:\n    labels:\n      version: 3.5.5\n    instances: 1\n    cores: 1\n    coreLimit: 1200m\n    memory: 512m\n```\n\nRefer to [Comet Docker Images](#comet-docker-images) for available images.\n\nRun the Spark application with Comet enabled:\n\n```bash\nkubectl apply -f spark-pi.yaml\nsparkapplication.sparkoperator.k8s.io/spark-pi created\n```\n\nCheck the application status:\n\n```bash\nkubectl get sparkapp spark-pi\n\nNAME       STATUS    ATTEMPTS   START                  FINISH       AGE\nspark-pi   RUNNING   1          2025-03-18T21:19:48Z   <no value>   65s\n```\n\nTo see more runtime details:\n\n```bash\nkubectl describe sparkapplication spark-pi\n\n....\nEvents:\n  Type    Reason                     Age    From                          Message\n  ----    ------                     ----   ----                          -------\n  Normal  SparkApplicationSubmitted  8m15s  spark-application-controller  SparkApplication spark-pi was submitted successfully\n  Normal  SparkDriverRunning         7m18s  spark-application-controller  Driver spark-pi-driver is running\n  Normal  SparkExecutorPending       7m11s  spark-application-controller  Executor [spark-pi-68732195ab217303-exec-1] is pending\n  Normal  SparkExecutorRunning       7m10s  spark-application-controller  Executor [spark-pi-68732195ab217303-exec-1] is running\n  Normal  SparkExecutorCompleted     7m5s   spark-application-controller  Executor [spark-pi-68732195ab217303-exec-1] completed\n  Normal  SparkDriverCompleted       7m4s   spark-application-controller  Driver spark-pi-driver completed\n```\n\nGet the driver logs:\n\n```bash\nkubectl logs spark-pi-driver\n```\n\nMore information is available in the [Spark operator getting started guide](https://www.kubeflow.org/docs/components/spark-operator/getting-started/).\n"
  },
  {
    "path": "docs/source/user-guide/latest/metrics.md",
    "content": "<!---\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Metrics\n\n## Spark SQL Metrics\n\nSet `spark.comet.metrics.detailed=true` to see all available Comet metrics.\n\n### CometScanExec\n\n| Metric      | Description                                                                                                                                                                                                                                                                        |\n| ----------- | ---------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |\n| `scan time` | Total time to scan a Parquet file. This is not comparable to the same metric in Spark because Comet's scan metric is more accurate. Although both Comet and Spark measure the time in nanoseconds, Spark rounds this time to the nearest millisecond per batch and Comet does not. |\n\n### Exchange\n\nComet adds some additional metrics:\n\n| Metric                          | Description                                                   |\n| ------------------------------- | ------------------------------------------------------------- |\n| `native shuffle time`           | Total time in native code excluding any child operators.      |\n| `repartition time`              | Time to repartition batches.                                  |\n| `memory pool time`              | Time interacting with memory pool.                            |\n| `encoding and compression time` | Time to encode batches in IPC format and compress using ZSTD. |\n\n## Native Metrics\n\nSetting `spark.comet.explain.native.enabled=true` will cause native plans to be logged in each executor. Metrics are\nlogged for each native plan (and there is one plan per task, so this is very verbose).\n\nHere is a guide to some of the native metrics.\n\n### ScanExec\n\n| Metric            | Description                                                                                         |\n| ----------------- | --------------------------------------------------------------------------------------------------- |\n| `elapsed_compute` | Total time spent in this operator, fetching batches from a JVM iterator.                            |\n| `jvm_fetch_time`  | Time spent in the JVM fetching input batches to be read by this `ScanExec` instance.                |\n| `arrow_ffi_time`  | Time spent using Arrow FFI to create Arrow batches from the memory addresses returned from the JVM. 
Here is a guide to some of the native metrics.\n\n### ScanExec\n\n| Metric            | Description                                                                                         |\n| ----------------- | --------------------------------------------------------------------------------------------------- |\n| `elapsed_compute` | Total time spent in this operator, fetching batches from a JVM iterator.                            |\n| `jvm_fetch_time`  | Time spent in the JVM fetching input batches to be read by this `ScanExec` instance.                |\n| `arrow_ffi_time`  | Time spent using Arrow FFI to create Arrow batches from the memory addresses returned from the JVM. |\n\n### ShuffleWriterExec\n\n| Metric            | Description                                                   |\n| ----------------- | ------------------------------------------------------------- |\n| `elapsed_compute` | Total time excluding any child operators.                     |\n| `repart_time`     | Time to repartition batches.                                  |\n| `ipc_time`        | Time to encode batches in IPC format and compress using ZSTD. |\n| `mempool_time`    | Time interacting with memory pool.                            |\n| `write_time`      | Time spent writing bytes to disk.                             |\n"
  },
  {
    "path": "docs/source/user-guide/latest/operators.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Supported Spark Operators\n\nThe following Spark operators are currently replaced with native versions. Query stages that contain any operators\nnot supported by Comet will fall back to regular Spark execution.\n\n| Operator                          | Spark-Compatible? | Compatibility Notes                                                                                                |\n| --------------------------------- | ----------------- | ------------------------------------------------------------------------------------------------------------------ |\n| BatchScanExec                     | Yes               | Supports Parquet files and Apache Iceberg Parquet scans. See the [Comet Compatibility Guide] for more information. |\n| BroadcastExchangeExec             | Yes               |                                                                                                                    |\n| BroadcastHashJoinExec             | Yes               |                                                                                                                    |\n| ExpandExec                        | Yes               |                                                                                                                    |\n| FileSourceScanExec                | Yes               | Supports Parquet files. See the [Comet Compatibility Guide] for more information.                                  |\n| FilterExec                        | Yes               |                                                                                                                    |\n| GenerateExec                      | Yes               | Supports `explode` generator only.                                                                                 |\n| GlobalLimitExec                   | Yes               |                                                                                                                    |\n| HashAggregateExec                 | Yes               |                                                                                                                    |\n| InsertIntoHadoopFsRelationCommand | No                | Experimental support for native Parquet writes. Disabled by default.                                               |\n| LocalLimitExec                    | Yes               |                                                                                                                    |\n| LocalTableScanExec                | No                | Experimental and disabled by default.                                                                              
|\n| ObjectHashAggregateExec           | Yes               | Supports a limited number of aggregates, such as `bloom_filter_agg`.                                               |\n| ProjectExec                       | Yes               |                                                                                                                    |\n| ShuffleExchangeExec               | Yes               |                                                                                                                    |\n| ShuffledHashJoinExec              | Yes               |                                                                                                                    |\n| SortExec                          | Yes               |                                                                                                                    |\n| SortMergeJoinExec                 | Yes               |                                                                                                                    |\n| UnionExec                         | Yes               |                                                                                                                    |\n| WindowExec                        | No                | Disabled by default due to known correctness issues.                                                               |\n\n[Comet Compatibility Guide]: compatibility.md\n"
  },
  {
    "path": "docs/source/user-guide/latest/source.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Building Comet From Source\n\nIt is sometimes preferable to build from source for a specific platform.\n\n## Using a Published Source Release\n\nOfficial source releases can be downloaded from https://dist.apache.org/repos/dist/release/datafusion/\n\n```console\n# Pick the latest version\nexport COMET_VERSION=$COMET_VERSION\n# Download the tarball\ncurl -O \"https://dist.apache.org/repos/dist/release/datafusion/datafusion-comet-$COMET_VERSION/apache-datafusion-comet-$COMET_VERSION.tar.gz\"\n# Unpack\ntar -xzf apache-datafusion-comet-$COMET_VERSION.tar.gz\ncd apache-datafusion-comet-$COMET_VERSION\n```\n\nBuild\n\n```console\nmake release-nogit PROFILES=\"-Pspark-3.5\"\n```\n\n## Building from the GitHub repository\n\nClone the repository:\n\n```console\ngit clone https://github.com/apache/datafusion-comet.git\n```\n\nBuild Comet for a specific Spark version:\n\n```console\ncd datafusion-comet\nmake release PROFILES=\"-Pspark-3.5\"\n```\n\nNote that the project builds for Scala 2.12 by default but can be built for Scala 2.13 using an additional profile:\n\n```console\nmake release PROFILES=\"-Pspark-3.5 -Pscala-2.13\"\n```\n\nTo build Comet from the source distribution on an isolated environment without an access to `github.com` it is necessary to disable `git-commit-id-maven-plugin`, otherwise you will face errors that there is no access to the git during the build process. In that case you may use:\n\n```console\nmake release-nogit PROFILES=\"-Pspark-3.5\"\n```\n"
  },
  {
    "path": "docs/source/user-guide/latest/tuning.md",
    "content": "<!---\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\nhttp://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Tuning Guide\n\nComet provides some tuning options to help you get the best performance from your queries.\n\n## Configuring Tokio Runtime\n\nComet uses a global tokio runtime per executor process using tokio's defaults of one worker thread per core and a\nmaximum of 512 blocking threads. These values can be overridden using the environment variables `COMET_WORKER_THREADS`\nand `COMET_MAX_BLOCKING_THREADS`.\n\nIt is recommended that `COMET_WORKER_THREADS` be set to the number of executor cores. This may not be necessary\nin some environments, such as Kubernetes, where the number of cores allocated to a pod will already be equal to the\nnumber of executor cores.\n\n## Memory Tuning\n\nIt is necessary to specify how much memory Comet can use in addition to memory already allocated to Spark. In some\ncases, it may be possible to reduce the amount of memory allocated to Spark so that overall memory allocation is\nthe same or lower than the original configuration. In other cases, enabling Comet may require allocating more memory\nthan before. See the [Determining How Much Memory to Allocate] section for more details.\n\n[Determining How Much Memory to Allocate]: #determining-how-much-memory-to-allocate\n\n### Configuring Comet Memory\n\nComet shares an off-heap memory pool with Spark. The size of the pool is\nspecified by `spark.memory.offHeap.size`.\n\nComet's memory accounting isn't 100% accurate and this can result in Comet using more memory than it reserves,\nleading to out-of-memory exceptions. To work around this issue, it is possible to\nset `spark.comet.exec.memoryPool.fraction` to a value less than `1.0` to restrict the amount of memory that can be\nreserved by Comet.\n\nFor more details about Spark off-heap memory mode, please refer to [Spark documentation].\n\n[Spark documentation]: https://spark.apache.org/docs/latest/configuration.html\n\nComet implements multiple memory pool implementations. The type of pool can be specified with `spark.comet.exec.memoryPool`.\n\nThe valid pool types are:\n\n- `fair_unified` (default when `spark.memory.offHeap.enabled=true` is set)\n- `greedy_unified`\n\nBoth pool types are shared across all native execution contexts within the same Spark task. When\nComet executes a shuffle, it runs two native execution contexts concurrently (e.g. one for\npre-shuffle operators and one for the shuffle writer). The shared pool ensures that the combined\nmemory usage stays within the per-task limit.\n\nThe `fair_unified` pool prevents operators from using more than an even fraction of the available memory\n(i.e. `pool_size / num_reservations`). This pool works best when you know beforehand\nthe query has multiple operators that will likely all need to spill. 
### Determining How Much Memory to Allocate\n\nGenerally, increasing the amount of memory allocated to Comet will improve query performance by reducing the\namount of time spent spilling to disk, especially for aggregate, join, and shuffle operations. Allocating insufficient\nmemory can result in out-of-memory errors. This is no different from allocating memory in Spark, and the amount of\nmemory will vary for different workloads, so some experimentation will be required.\n\nHere is a real-world example, based on benchmarks derived from TPC-H, run on a single executor against\nlocal Parquet files using the 100 GB data set.\n\nBaseline Spark Performance\n\n- Spark completes the benchmark in 632 seconds with 8 cores and 8 GB RAM\n- With less than 8 GB RAM, performance degrades due to spilling\n- Spark can complete the benchmark with as little as 3 GB of RAM, but with worse performance (744 seconds)\n\nComet Performance\n\n- Comet requires at least 5 GB of RAM, but performance at this level\n  is around 340 seconds, which is significantly faster than Spark with any amount of RAM\n- Comet running in off-heap mode with 8 cores completes the benchmark in 295 seconds, more than 2x faster than Spark\n- It is worth noting that running Comet with only 4 cores and 4 GB RAM completes the benchmark in 520 seconds,\n  providing better performance than Spark for half the resources\n\nIt may be possible to reduce Comet's memory overhead by reducing batch sizes or increasing the number of partitions.\n\n## Optimizing Sorting on Floating-Point Values\n\nSorting on floating-point data types (or complex types containing floating-point values) is not compatible with\nSpark if the data contains both zero and negative zero. This is likely an edge case that is not of concern for many users.\nSorting on floating-point data can be enabled by setting `spark.comet.expression.SortOrder.allowIncompatible=true`.\n\n## Optimizing Joins\n\nSpark often chooses `SortMergeJoin` over `ShuffledHashJoin` for stability reasons. If the build side of a\n`ShuffledHashJoin` is very large then it could lead to OOM in Spark.\n\nVectorized query engines tend to perform better with `ShuffledHashJoin`, so for best performance it is often preferable\nto configure Comet to convert `SortMergeJoin` to `ShuffledHashJoin`. Comet does not yet provide spill-to-disk for\n`ShuffledHashJoin`, so this could result in OOM. Also, `SortMergeJoin` may still be faster in some cases. It is best\nto test with both for your specific workloads.\n\nTo configure Comet to convert `SortMergeJoin` to `ShuffledHashJoin`, set `spark.comet.exec.replaceSortMergeJoin=true`.\n\n## Shuffle\n\nComet provides accelerated shuffle implementations that can be used to improve the performance of your queries.\n\nTo enable Comet shuffle, set the following configuration in your Spark configuration:\n\n```\nspark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\nspark.comet.exec.shuffle.enabled=true\n```\n\n`spark.shuffle.manager` is a Spark static configuration which cannot be changed at runtime.\nIt must be set before the Spark context is created. You can enable or disable Comet shuffle\nat runtime by setting `spark.comet.exec.shuffle.enabled` to `true` or `false`.\nOnce it is disabled, Comet will fall back to the default Spark shuffle manager.\n\n
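To illustrate the difference, the shuffle manager must be supplied when the session is created, while the Comet flag can be changed from SQL at runtime (a sketch using the `spark-sql` shell):\n\n```shell\n$SPARK_HOME/bin/spark-sql \\\n    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n    -e \"SET spark.comet.exec.shuffle.enabled=false; SELECT 1;\"\n```\n\n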
### Shuffle Implementations\n\nComet provides two shuffle implementations: Native Shuffle and Columnar Shuffle. Comet will first try to use Native\nShuffle and, if that is not possible, it will try to use Columnar Shuffle. If neither can be applied, it will fall\nback to Spark for shuffle operations.\n\n#### Native Shuffle\n\nComet provides a fully native shuffle implementation, which generally provides the best performance. Native shuffle\nsupports `HashPartitioning`, `RangePartitioning`, and `SinglePartitioning`, but currently only supports primitive-type\npartitioning keys. Columns that are not partitioning keys may contain complex types like maps, structs, and arrays.\n\n#### Columnar (JVM) Shuffle\n\nComet Columnar shuffle is JVM-based and supports `HashPartitioning`, `RoundRobinPartitioning`, `RangePartitioning`, and\n`SinglePartitioning`. This shuffle implementation supports complex data types as partitioning keys.\n\n### Shuffle Compression\n\nBy default, Spark compresses shuffle files using LZ4 compression. Comet overrides this behavior with ZSTD compression.\nCompression can be disabled by setting `spark.shuffle.compress=false`, which may result in faster shuffle times in\ncertain environments, such as single-node setups with fast NVMe drives, at the expense of increased disk space usage.\n\n## Explain Plan\n\n### Extended Explain\n\nWith Spark 4.0.0 and newer, Comet can provide extended explain plan information in the Spark UI. Currently, this lists\nreasons why Comet may not have been enabled for specific operations.\nTo enable this, in the Spark configuration, set the following:\n\n```shell\n-c spark.sql.extendedExplainProviders=org.apache.comet.ExtendedExplainInfo\n```\n\nThis will add a section to the detailed plan displayed in the Spark SQL UI page.\n"
  },
  {
    "path": "docs/spark_expressions_support.md",
    "content": "<!---\n  Licensed to the Apache Software Foundation (ASF) under one\n  or more contributor license agreements.  See the NOTICE file\n  distributed with this work for additional information\n  regarding copyright ownership.  The ASF licenses this file\n  to you under the Apache License, Version 2.0 (the\n  \"License\"); you may not use this file except in compliance\n  with the License.  You may obtain a copy of the License at\n\n    http://www.apache.org/licenses/LICENSE-2.0\n\n  Unless required by applicable law or agreed to in writing,\n  software distributed under the License is distributed on an\n  \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n  KIND, either express or implied.  See the License for the\n  specific language governing permissions and limitations\n  under the License.\n-->\n\n# Supported Spark Expressions\n\n### agg_funcs\n\n- [x] any\n- [x] any_value\n- [ ] approx_count_distinct\n- [ ] approx_percentile\n- [ ] array_agg\n- [x] avg\n- [x] bit_and\n- [x] bit_or\n- [x] bit_xor\n- [x] bool_and\n- [x] bool_or\n- [ ] collect_list\n- [ ] collect_set\n- [ ] corr\n- [x] count\n- [x] count_if\n- [ ] count_min_sketch\n- [x] covar_pop\n- [x] covar_samp\n- [x] every\n- [x] first\n- [x] first_value\n- [ ] grouping\n- [ ] grouping_id\n- [ ] histogram_numeric\n- [ ] kurtosis\n- [x] last\n- [x] last_value\n- [x] max\n- [ ] max_by\n- [x] mean\n- [ ] median\n- [x] min\n- [ ] min_by\n- [ ] mode\n- [ ] percentile\n- [ ] percentile_approx\n- [x] regr_avgx\n- [x] regr_avgy\n- [x] regr_count\n- [ ] regr_intercept\n- [ ] regr_r2\n- [ ] regr_slope\n- [ ] regr_sxx\n- [ ] regr_sxy\n- [ ] regr_syy\n- [ ] skewness\n- [x] some\n- [x] std\n- [x] stddev\n- [x] stddev_pop\n- [x] stddev_samp\n- [x] sum\n- [ ] try_avg\n- [ ] try_sum\n- [x] var_pop\n- [x] var_samp\n- [x] variance\n\n### array_funcs\n\n- [x] array\n- [x] array_append\n- [x] array_compact\n- [x] array_contains\n- [x] array_distinct\n- [x] array_except\n- [x] array_insert\n- [x] array_intersect\n- [x] array_join\n- [x] array_max\n- [ ] array_min\n- [ ] array_position\n- [x] array_remove\n- [x] array_repeat\n- [x] array_union\n- [x] arrays_overlap\n- [ ] arrays_zip\n- [x] element_at\n- [ ] flatten\n- [x] get\n- [ ] sequence\n- [ ] shuffle\n- [ ] slice\n- [x] sort_array\n\n### bitwise_funcs\n\n- [x] &\n- [x] ^\n- [ ] bit_count\n- [ ] bit_get\n- [ ] getbit\n- [x] shiftright\n- [ ] shiftrightunsigned\n- [x] |\n- [x] ~\n\n### collection_funcs\n\n- [ ] array_size\n- [ ] cardinality\n- [ ] concat\n- [x] reverse\n- [ ] size\n\n### conditional_funcs\n\n- [x] coalesce\n- [x] if\n- [x] ifnull\n- [ ] nanvl\n- [x] nullif\n- [x] nvl\n- [x] nvl2\n- [ ] when\n\n### conversion_funcs\n\n- [ ] bigint\n- [ ] binary\n- [ ] boolean\n- [ ] cast\n- [ ] date\n- [ ] decimal\n- [ ] double\n- [ ] float\n- [ ] int\n- [ ] smallint\n- [ ] string\n- [ ] timestamp\n- [ ] tinyint\n\n### csv_funcs\n\n- [ ] from_csv\n- [ ] schema_of_csv\n- [ ] to_csv\n\n### datetime_funcs\n\n- [ ] add_months\n- [ ] convert_timezone\n- [x] curdate\n- [x] current_date\n- [ ] current_timestamp\n- [x] current_timezone\n- [ ] date_add\n- [ ] date_diff\n- [ ] date_format\n- [x] date_from_unix_date\n- [x] date_part\n- [ ] date_sub\n- [ ] date_trunc\n- [ ] dateadd\n- [ ] datediff\n- [x] datepart\n- [ ] day\n- [ ] dayofmonth\n- [ ] dayofweek\n- [ ] dayofyear\n- [x] extract\n- [x] from_unixtime\n- [ ] from_utc_timestamp\n- [ ] hour\n- [ ] last_day\n- [ ] localtimestamp\n- [ ] make_date\n- [ ] make_dt_interval\n- [ ] make_interval\n- [ ] make_timestamp\n- [ ] 
make_timestamp_ltz\n- [ ] make_timestamp_ntz\n- [ ] make_ym_interval\n- [ ] minute\n- [ ] month\n- [ ] months_between\n- [ ] next_day\n- [ ] now\n- [ ] quarter\n- [ ] second\n- [ ] timestamp_micros\n- [ ] timestamp_millis\n- [ ] timestamp_seconds\n- [ ] to_date\n- [ ] to_timestamp\n- [ ] to_timestamp_ltz\n- [ ] to_timestamp_ntz\n- [ ] to_unix_timestamp\n- [ ] to_utc_timestamp\n- [ ] trunc\n- [ ] try_to_timestamp\n- [ ] unix_date\n- [ ] unix_micros\n- [ ] unix_millis\n- [ ] unix_seconds\n- [x] unix_timestamp\n- [ ] weekday\n- [ ] weekofyear\n- [ ] year\n\n### generator_funcs\n\n- [ ] explode\n- [ ] explode_outer\n- [ ] inline\n- [ ] inline_outer\n- [ ] posexplode\n- [ ] posexplode_outer\n- [ ] stack\n\n### hash_funcs\n\n- [x] crc32\n- [ ] hash\n- [x] md5\n- [ ] sha\n- [ ] sha1\n- [ ] sha2\n- [ ] xxhash64\n\n### json_funcs\n\n- [ ] from_json\n- [x] get_json_object\n- [ ] json_array_length\n- [ ] json_object_keys\n- [ ] json_tuple\n- [ ] schema_of_json\n- [ ] to_json\n\n### lambda_funcs\n\n- [ ] aggregate\n- [ ] array_sort\n- [ ] exists\n- [ ] filter\n- [ ] forall\n- [ ] map_filter\n- [ ] map_zip_with\n- [ ] reduce\n- [ ] transform\n- [ ] transform_keys\n- [ ] transform_values\n- [ ] zip_with\n\n### map_funcs\n\n- [ ] element_at\n- [ ] map\n- [ ] map_concat\n- [x] map_contains_key\n- [ ] map_entries\n- [ ] map_from_arrays\n- [ ] map_from_entries\n- [x] map_keys\n- [ ] map_values\n- [ ] str_to_map\n- [ ] try_element_at\n\n### math_funcs\n\n- [x] %\n- [x] -\n- [x] -\n- [x] -\n- [x] /\n- [x] abs\n- [x] acos\n- [ ] acosh\n- [x] asin\n- [ ] asinh\n- [x] atan\n- [x] atan2\n- [ ] atanh\n- [x] bin\n- [ ] bround\n- [ ] cbrt\n- [x] ceil\n- [x] ceiling\n- [ ] conv\n- [x] cos\n- [ ] cosh\n- [ ] cot\n- [ ] csc\n- [ ] degrees\n- [ ] div\n- [ ] e\n- [x] exp\n- [ ] expm1\n- [ ] factorial\n- [x] floor\n- [ ] greatest\n- [ ] hex\n- [ ] hypot\n- [ ] least\n- [x] ln\n- [ ] log\n- [x] log10\n- [ ] log1p\n- [x] log2\n- [x] mod\n- [x] negative\n- [ ] pi\n- [ ] pmod\n- [x] positive\n- [x] pow\n- [x] power\n- [ ] radians\n- [ ] rand\n- [ ] randn\n- [ ] random\n- [ ] rint\n- [x] round\n- [ ] sec\n- [x] shiftleft\n- [x] sign\n- [x] signum\n- [x] sin\n- [ ] sinh\n- [x] sqrt\n- [x] tan\n- [ ] tanh\n- [x] try_add\n- [x] try_divide\n- [x] try_multiply\n- [x] try_subtract\n- [x] unhex\n- [x] width_bucket\n\n### misc_funcs\n\n- [ ] aes_decrypt\n- [ ] aes_encrypt\n- [ ] assert_true\n- [x] current_catalog\n- [x] current_database\n- [x] current_schema\n- [x] current_user\n- [x] equal_null\n- [ ] input_file_block_length\n- [ ] input_file_block_start\n- [ ] input_file_name\n- [x] monotonically_increasing_id\n- [ ] raise_error\n- [x] rand\n- [x] randn\n- [x] spark_partition_id\n- [ ] typeof\n- [x] user\n- [ ] uuid\n- [ ] version\n\n### predicate_funcs\n\n- [x] !\n- [x] <\n- [x] <=\n- [x] <=>\n- [x] =\n- [x] ==\n- [x] >\n- [x] >=\n- [x] and\n- [x] ilike\n- [x] in\n- [ ] isnan\n- [x] isnotnull\n- [x] isnull\n- [x] like\n- [x] not\n- [x] or\n- [ ] regexp\n- [ ] regexp_like\n- [ ] rlike\n\n### string_funcs\n\n- [x] ascii\n- [ ] base64\n- [x] bit_length\n- [x] btrim\n- [x] char\n- [x] char_length\n- [x] character_length\n- [x] chr\n- [x] concat_ws\n- [x] contains\n- [ ] decode\n- [ ] elt\n- [ ] encode\n- [x] endswith\n- [ ] find_in_set\n- [ ] format_number\n- [ ] format_string\n- [x] initcap\n- [x] instr\n- [x] lcase\n- [ ] left\n- [x] len\n- [x] length\n- [ ] levenshtein\n- [ ] locate\n- [x] lower\n- [x] lpad\n- [x] ltrim\n- [ ] mask\n- [x] octet_length\n- [ ] overlay\n- [ ] position\n- [ ] printf\n- [ ] regexp_count\n- [ ] 
regexp_extract\n- [ ] regexp_extract_all\n- [ ] regexp_instr\n- [ ] regexp_replace\n- [ ] regexp_substr\n- [x] repeat\n- [x] replace\n- [ ] right\n- [x] rpad\n- [x] rtrim\n- [ ] sentences\n- [ ] soundex\n- [x] space\n- [ ] split\n- [ ] split_part\n- [x] startswith\n- [ ] substr\n- [ ] substring\n- [ ] substring_index\n- [ ] to_binary\n- [ ] to_char\n- [ ] to_number\n- [x] translate\n- [x] trim\n- [ ] try_to_binary\n- [ ] try_to_number\n- [x] ucase\n- [ ] unbase64\n- [x] upper\n\n### struct_funcs\n\n- [ ] named_struct\n- [ ] struct\n\n### url_funcs\n\n- [ ] parse_url\n- [ ] url_decode\n- [ ] url_encode\n\n### window_funcs\n\n- [ ] cume_dist\n- [ ] dense_rank\n- [ ] lag\n- [ ] lead\n- [ ] nth_value\n- [ ] ntile\n- [ ] percent_rank\n- [ ] rank\n- [ ] row_number\n\n### xml_funcs\n\n- [ ] xpath\n- [ ] xpath_boolean\n- [ ] xpath_double\n- [ ] xpath_float\n- [ ] xpath_int\n- [ ] xpath_long\n- [ ] xpath_number\n- [ ] xpath_short\n- [ ] xpath_string\n"
  },
  {
    "path": "fuzz-testing/.gitignore",
    "content": ".idea\ntarget\nspark-warehouse\nqueries.sql\nresults*.md\ntest*.parquet"
  },
  {
    "path": "fuzz-testing/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Comet Fuzz\n\nComet Fuzz is a standalone project for generating random data and queries and executing queries against Spark\nwith Comet disabled and enabled and checking for incompatibilities.\n\nAlthough it is a simple tool it has already been useful in finding many bugs.\n\nComet Fuzz is inspired by the [SparkFuzz](https://ir.cwi.nl/pub/30222) paper from Databricks and CWI.\n\n## Roadmap\n\nPlanned areas of improvement:\n\n- ANSI mode\n- Support for all data types, expressions, and operators supported by Comet\n- IF and CASE WHEN expressions\n- Complex (nested) expressions\n- Literal scalar values in queries\n- Add option to avoid grouping and sorting on floating-point columns\n- Improve join query support:\n  - Support joins without join keys\n  - Support composite join keys\n  - Support multiple join keys\n  - Support join conditions that use expressions\n\n## Usage\n\nFrom the root of the project, run `mvn install -DskipTests` to install Comet.\n\nThen build the fuzz testing jar.\n\n```shell\nmvn package\n```\n\nSet appropriate values for `SPARK_HOME`, `SPARK_MASTER`, and `COMET_JAR` environment variables and then use\n`spark-submit` to run CometFuzz against a Spark cluster.\n\n### Generating Data Files\n\n```shell\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --class org.apache.comet.fuzz.Main \\\n    target/comet-fuzz-spark3.5_2.12-0.13.0-SNAPSHOT-jar-with-dependencies.jar \\\n    data --num-files=2 --num-rows=200 --exclude-negative-zero --generate-arrays --generate-structs --generate-maps\n```\n\nThere is an optional `--exclude-negative-zero` flag for excluding `-0.0` from the generated data, which is\nsometimes useful because we already know that we often have different behavior for this edge case due to\ndifferences between Rust and Java handling of this value.\n\n### Generating Queries\n\nGenerate random queries that are based on the available test files.\n\n```shell\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --class org.apache.comet.fuzz.Main \\\n    target/comet-fuzz-spark3.5_2.12-0.13.0-SNAPSHOT-jar-with-dependencies.jar \\\n    queries --num-files=2 --num-queries=500\n```\n\nNote that the output filename is currently hard-coded as `queries.sql`\n\n### Execute Queries\n\n```shell\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --conf spark.memory.offHeap.enabled=true \\\n    --conf spark.memory.offHeap.size=16G \\\n    --conf spark.plugins=org.apache.spark.CometPlugin \\\n    --conf spark.comet.enabled=true \\\n    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n    --conf spark.comet.exec.shuffle.enabled=true \\\n    --jars $COMET_JAR \\\n    --conf 
### Generating Data Files\n\n```shell\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --class org.apache.comet.fuzz.Main \\\n    target/comet-fuzz-spark3.5_2.12-0.13.0-SNAPSHOT-jar-with-dependencies.jar \\\n    data --num-files=2 --num-rows=200 --exclude-negative-zero --generate-arrays --generate-structs --generate-maps\n```\n\nThere is an optional `--exclude-negative-zero` flag for excluding `-0.0` from the generated data, which is\nsometimes useful because we already know that we often have different behavior for this edge case due to\ndifferences between Rust and Java handling of this value.\n\n### Generating Queries\n\nGenerate random queries that are based on the available test files.\n\n```shell\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --class org.apache.comet.fuzz.Main \\\n    target/comet-fuzz-spark3.5_2.12-0.13.0-SNAPSHOT-jar-with-dependencies.jar \\\n    queries --num-files=2 --num-queries=500\n```\n\nNote that the output filename is currently hard-coded as `queries.sql`.\n\n### Execute Queries\n\n```shell\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --conf spark.memory.offHeap.enabled=true \\\n    --conf spark.memory.offHeap.size=16G \\\n    --conf spark.plugins=org.apache.spark.CometPlugin \\\n    --conf spark.comet.enabled=true \\\n    --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n    --conf spark.comet.exec.shuffle.enabled=true \\\n    --jars $COMET_JAR \\\n    --conf spark.driver.extraClassPath=$COMET_JAR \\\n    --conf spark.executor.extraClassPath=$COMET_JAR \\\n    --class org.apache.comet.fuzz.Main \\\n    target/comet-fuzz-spark3.5_2.12-0.13.0-SNAPSHOT-jar-with-dependencies.jar \\\n    run --num-files=2 --filename=queries.sql\n```\n\nNote that the output filename is currently hard-coded as `results-${System.currentTimeMillis()}.md`.\n\n### Compare existing datasets\n\nTo compare a pair of existing datasets, you can use the comparison tool.\nThe example below compares TPC-H query results generated by pure Spark and by Comet.\n\n```shell\n$SPARK_HOME/bin/spark-submit \\\n    --master $SPARK_MASTER \\\n    --class org.apache.comet.fuzz.ComparisonTool \\\n    target/comet-fuzz-spark3.5_2.12-0.13.0-SNAPSHOT-jar-with-dependencies.jar \\\n    compareParquet --input-spark-folder=/tmp/tpch/spark --input-comet-folder=/tmp/tpch/comet\n```\n\nThe tool takes a pair of existing folders with the same layout and compares their subfolders, treating them as Parquet-based datasets.\n"
  },
  {
    "path": "fuzz-testing/pom.xml",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\"\n         xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n\n    <parent>\n        <groupId>org.apache.datafusion</groupId>\n        <artifactId>comet-parent-spark${spark.version.short}_${scala.binary.version}</artifactId>\n        <version>0.15.0-SNAPSHOT</version>\n        <relativePath>../pom.xml</relativePath>\n    </parent>\n\n    <artifactId>comet-fuzz-spark${spark.version.short}_${scala.binary.version}</artifactId>\n    <name>comet-fuzz</name>\n    <url>http://maven.apache.org</url>\n    <packaging>jar</packaging>\n\n    <properties>\n        <!-- Reverse default (skip installation), and then enable only for child modules -->\n        <maven.deploy.skip>false</maven.deploy.skip>\n    </properties>\n\n    <dependencies>\n        <dependency>\n            <groupId>org.scala-lang</groupId>\n            <artifactId>scala-library</artifactId>\n            <version>${scala.version}</version>\n            <scope>provided</scope>\n        </dependency>\n        <dependency>\n            <groupId>org.apache.spark</groupId>\n            <artifactId>spark-sql_${scala.binary.version}</artifactId>\n            <scope>provided</scope>\n            <exclusions>\n                <exclusion>\n                    <groupId>org.scala-lang.modules</groupId>\n                    <artifactId>scala-collection-compat_${scala.binary.version}</artifactId>\n                </exclusion>\n            </exclusions>\n        </dependency>\n        <dependency>\n            <groupId>org.apache.datafusion</groupId>\n            <artifactId>comet-spark-spark${spark.version.short}_${scala.binary.version}</artifactId>\n            <version>${project.version}</version>\n            <exclusions>\n                <exclusion>\n                    <groupId>org.scala-lang.modules</groupId>\n                    <artifactId>scala-collection-compat_${scala.binary.version}</artifactId>\n                </exclusion>\n            </exclusions>\n        </dependency>\n        <dependency>\n            <groupId>org.rogach</groupId>\n            <artifactId>scallop_${scala.binary.version}</artifactId>\n        </dependency>\n    </dependencies>\n\n    <build>\n        <sourceDirectory>src/main/scala</sourceDirectory>\n        <plugins>\n            <plugin>\n                <groupId>org.apache.maven.plugins</groupId>\n                <artifactId>maven-compiler-plugin</artifactId>\n                <version>${maven-compiler-plugin.version}</version>\n                <configuration>\n                    <source>${java.version}</source>\n   
                 <target>${java.version}</target>\n                </configuration>\n            </plugin>\n            <plugin>\n                <groupId>net.alchim31.maven</groupId>\n                <artifactId>scala-maven-plugin</artifactId>\n                <version>${scala.plugin.version}</version>\n                <executions>\n                    <execution>\n                        <goals>\n                            <goal>compile</goal>\n                            <goal>testCompile</goal>\n                        </goals>\n                    </execution>\n                </executions>\n            </plugin>\n            <plugin>\n                <artifactId>maven-assembly-plugin</artifactId>\n                <version>${maven-assembly-plugin.version}</version>\n                <configuration>\n                    <descriptorRefs>\n                        <descriptorRef>jar-with-dependencies</descriptorRef>\n                    </descriptorRefs>\n                </configuration>\n                <executions>\n                    <execution>\n                        <id>make-assembly</id>\n                        <phase>package</phase>\n                        <goals>\n                            <goal>single</goal>\n                        </goals>\n                    </execution>\n                </executions>\n            </plugin>\n            <plugin>\n                <groupId>org.apache.maven.plugins</groupId>\n                <artifactId>maven-install-plugin</artifactId>\n                <configuration>\n                    <skip>true</skip>\n                </configuration>\n            </plugin>\n        </plugins>\n    </build>\n</project>\n"
  },
  {
    "path": "fuzz-testing/run.sh",
    "content": "#!/bin/bash\n#\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n#\n# Usage: ./run.sh\n# Builds necessary JARs, generates data and queries, and runs fuzz tests for Comet Spark.\n# Environment variables:\n#   SPARK_HOME - path to Spark installation\n#   SPARK_MASTER - Spark master URL (default: local[*])\n#   SCALA_MAJOR_VERSION - Scala major version to use (default: 2.12)\n#   SPARK_MAJOR_VERSION - Spark major version to use (default: 3.5)\n#   NUM_FILES - number of data files to generate (default: 2)\n#   NUM_ROWS - number of rows per file (default: 200)\n#   NUM_QUERIES - number of queries to generate (default: 500)\n\nset -eux\n\nDIR=\"$(cd \"$(dirname \"$0\")\" && pwd)\"\nPARENT_DIR=\"${DIR}/..\"\nMVN_CMD=\"${PARENT_DIR}/mvnw\"\nSPARK_MASTER=\"${SPARK_MASTER:-local[*]}\"\nSCALA_MAJOR_VERSION=\"${SCALA_MAJOR_VERSION:-2.12}\"\nSPARK_MAJOR_VERSION=\"${SPARK_MAJOR_VERSION:-3.5}\"\nPROFILES=\"-Pscala-${SCALA_MAJOR_VERSION},spark-${SPARK_MAJOR_VERSION}\"\nPROJECT_VERSION=$(\"${MVN_CMD}\" -f \"${DIR}/pom.xml\" -q help:evaluate -Dexpression=project.version -DforceStdout)\nCOMET_SPARK_JAR=\"${PARENT_DIR}/spark/target/comet-spark${SPARK_MAJOR_VERSION}_${SCALA_MAJOR_VERSION}-${PROJECT_VERSION}.jar\"\nCOMET_FUZZ_JAR=\"${DIR}/target/comet-fuzz-spark${SPARK_MAJOR_VERSION}_${SCALA_MAJOR_VERSION}-${PROJECT_VERSION}-jar-with-dependencies.jar\"\nNUM_FILES=\"${NUM_FILES:-2}\"\nNUM_ROWS=\"${NUM_ROWS:-200}\"\nNUM_QUERIES=\"${NUM_QUERIES:-500}\"\n\nif [ ! 
-f \"${COMET_SPARK_JAR}\" ]; then\n  echo \"Building Comet Spark jar...\"\n  pushd \"${PARENT_DIR}\"\n  PROFILES=\"${PROFILES}\" make\n  popd\nelse\n  echo \"Building Fuzz testing jar...\"\n  \"${MVN_CMD}\" -f \"${DIR}/pom.xml\" package -DskipTests \"${PROFILES}\"\nfi\n\necho \"Generating data...\"\n\"${SPARK_HOME}/bin/spark-submit\" \\\n  --master \"${SPARK_MASTER}\" \\\n  --class org.apache.comet.fuzz.Main \\\n  \"${COMET_FUZZ_JAR}\" \\\n  data --num-files=\"${NUM_FILES}\" --num-rows=\"${NUM_ROWS}\" \\\n  --exclude-negative-zero \\\n  --generate-arrays --generate-structs --generate-maps\n\necho \"Generating queries...\"\n\"${SPARK_HOME}/bin/spark-submit\" \\\n  --master \"${SPARK_MASTER}\" \\\n  --class org.apache.comet.fuzz.Main \\\n  \"${COMET_FUZZ_JAR}\" \\\n  queries --num-files=\"${NUM_FILES}\" --num-queries=\"${NUM_QUERIES}\"\n\necho \"Running fuzz tests...\"\n\"${SPARK_HOME}/bin/spark-submit\" \\\n  --master \"${SPARK_MASTER}\" \\\n  --conf spark.memory.offHeap.enabled=true \\\n  --conf spark.memory.offHeap.size=16G \\\n  --conf spark.plugins=org.apache.spark.CometPlugin \\\n  --conf spark.comet.enabled=true \\\n  --conf spark.shuffle.manager=org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager \\\n  --conf spark.comet.exec.shuffle.enabled=true \\\n  --jars \"${COMET_SPARK_JAR}\" \\\n  --conf spark.driver.extraClassPath=\"${COMET_SPARK_JAR}\" \\\n  --conf spark.executor.extraClassPath=\"${COMET_SPARK_JAR}\" \\\n  --class org.apache.comet.fuzz.Main \\\n  \"${COMET_FUZZ_JAR}\" \\\n  run --num-files=\"${NUM_FILES}\" --filename=\"queries.sql\"\n"
  },
  {
    "path": "fuzz-testing/src/main/scala/org/apache/comet/fuzz/ComparisonTool.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.fuzz\n\nimport java.io.File\n\nimport org.rogach.scallop.{ScallopConf, ScallopOption, Subcommand}\n\nimport org.apache.spark.sql.{functions, SparkSession}\n\nclass ComparisonToolConf(arguments: Seq[String]) extends ScallopConf(arguments) {\n  object compareParquet extends Subcommand(\"compareParquet\") {\n    val inputSparkFolder: ScallopOption[String] =\n      opt[String](required = true, descr = \"Folder with Spark produced results in Parquet format\")\n    val inputCometFolder: ScallopOption[String] =\n      opt[String](required = true, descr = \"Folder with Comet produced results in Parquet format\")\n    val tolerance: ScallopOption[Double] =\n      opt[Double](default = Some(0.000002), descr = \"Tolerance for floating point comparisons\")\n  }\n  addSubcommand(compareParquet)\n  verify()\n}\n\nobject ComparisonTool {\n\n  lazy val spark: SparkSession = SparkSession\n    .builder()\n    .getOrCreate()\n\n  def main(args: Array[String]): Unit = {\n    val conf = new ComparisonToolConf(args.toIndexedSeq)\n    conf.subcommand match {\n      case Some(conf.compareParquet) =>\n        compareParquetFolders(\n          spark,\n          conf.compareParquet.inputSparkFolder(),\n          conf.compareParquet.inputCometFolder(),\n          conf.compareParquet.tolerance())\n\n      case _ =>\n        // scalastyle:off println\n        println(\"Invalid subcommand\")\n        // scalastyle:on println\n        sys.exit(-1)\n    }\n  }\n\n  private def compareParquetFolders(\n      spark: SparkSession,\n      sparkFolderPath: String,\n      cometFolderPath: String,\n      tolerance: Double): Unit = {\n\n    val output = QueryRunner.createOutputMdFile()\n\n    try {\n      val sparkFolder = new File(sparkFolderPath)\n      val cometFolder = new File(cometFolderPath)\n\n      if (!sparkFolder.exists() || !sparkFolder.isDirectory) {\n        throw new IllegalArgumentException(\n          s\"Spark folder does not exist or is not a directory: $sparkFolderPath\")\n      }\n\n      if (!cometFolder.exists() || !cometFolder.isDirectory) {\n        throw new IllegalArgumentException(\n          s\"Comet folder does not exist or is not a directory: $cometFolderPath\")\n      }\n\n      // Get all subdirectories from the Spark folder\n      val sparkSubfolders = sparkFolder\n        .listFiles()\n        .filter(_.isDirectory)\n        .map(_.getName)\n        .sorted\n\n      output.write(\"# Comparing Parquet Folders\\n\\n\")\n      output.write(s\"Spark folder: $sparkFolderPath\\n\")\n      output.write(s\"Comet folder: $cometFolderPath\\n\")\n      output.write(s\"Found ${sparkSubfolders.length} subfolders to compare\\n\\n\")\n\n      // Compare 
each subfolder\n      sparkSubfolders.foreach { subfolderName =>\n        val sparkSubfolderPath = new File(sparkFolder, subfolderName)\n        val cometSubfolderPath = new File(cometFolder, subfolderName)\n\n        if (!cometSubfolderPath.exists() || !cometSubfolderPath.isDirectory) {\n          output.write(s\"## Subfolder: $subfolderName\\n\")\n          output.write(\n            s\"[WARNING] Comet subfolder not found: ${cometSubfolderPath.getAbsolutePath}\\n\\n\")\n        } else {\n          output.write(s\"## Comparing subfolder: $subfolderName\\n\\n\")\n\n          try {\n            // Read Spark parquet files\n            spark.conf.set(\"spark.comet.enabled\", \"false\")\n            val sparkDf = spark.read.parquet(sparkSubfolderPath.getAbsolutePath)\n            val sparkRows = sparkDf.orderBy(sparkDf.columns.map(functions.col): _*).collect()\n\n            // Read Comet parquet files\n            val cometDf = spark.read.parquet(cometSubfolderPath.getAbsolutePath)\n            val cometRows = cometDf.orderBy(cometDf.columns.map(functions.col): _*).collect()\n\n            // Compare the results\n            if (QueryComparison.assertSameRows(sparkRows, cometRows, output, tolerance)) {\n              output.write(s\"Subfolder $subfolderName: ${sparkRows.length} rows matched\\n\\n\")\n            } else {\n              // Output schema if dataframes are not equal\n              QueryComparison.showSchema(\n                output,\n                sparkDf.schema.treeString,\n                cometDf.schema.treeString)\n            }\n          } catch {\n            case e: Exception =>\n              output.write(\n                s\"[ERROR] Failed to compare subfolder $subfolderName: ${e.getMessage}\\n\")\n              val sw = new java.io.StringWriter()\n              val p = new java.io.PrintWriter(sw)\n              e.printStackTrace(p)\n              p.close()\n              output.write(s\"```\\n${sw.toString}\\n```\\n\\n\")\n          }\n        }\n\n        output.flush()\n      }\n\n      output.write(\"\\n# Comparison Complete\\n\")\n      output.write(s\"Compared ${sparkSubfolders.length} subfolders\\n\")\n\n    } finally {\n      output.close()\n    }\n  }\n}\n"
  },
  {
    "path": "fuzz-testing/src/main/scala/org/apache/comet/fuzz/Main.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.fuzz\n\nimport scala.util.Random\n\nimport org.rogach.scallop.{ScallopConf, Subcommand}\nimport org.rogach.scallop.ScallopOption\n\nimport org.apache.spark.sql.SparkSession\n\nimport org.apache.comet.testing.{DataGenOptions, ParquetGenerator, SchemaGenOptions}\n\nclass Conf(arguments: Seq[String]) extends ScallopConf(arguments) {\n  object generateData extends Subcommand(\"data\") {\n    val numFiles: ScallopOption[Int] =\n      opt[Int](required = true, descr = \"Number of files to generate\")\n    val numRows: ScallopOption[Int] = opt[Int](required = true, descr = \"Number of rows per file\")\n    val randomSeed: ScallopOption[Long] =\n      opt[Long](required = false, descr = \"Random seed to use\")\n    val generateArrays: ScallopOption[Boolean] =\n      opt[Boolean](required = false, descr = \"Whether to generate arrays\")\n    val generateStructs: ScallopOption[Boolean] =\n      opt[Boolean](required = false, descr = \"Whether to generate structs\")\n    val generateMaps: ScallopOption[Boolean] =\n      opt[Boolean](required = false, descr = \"Whether to generate maps\")\n    val excludeNegativeZero: ScallopOption[Boolean] =\n      opt[Boolean](required = false, descr = \"Whether to exclude negative zero\")\n  }\n  addSubcommand(generateData)\n  object generateQueries extends Subcommand(\"queries\") {\n    val numFiles: ScallopOption[Int] =\n      opt[Int](required = true, descr = \"Number of input files to use\")\n    val numQueries: ScallopOption[Int] =\n      opt[Int](required = true, descr = \"Number of queries to generate\")\n    val randomSeed: ScallopOption[Long] =\n      opt[Long](required = false, descr = \"Random seed to use\")\n  }\n  addSubcommand(generateQueries)\n  object runQueries extends Subcommand(\"run\") {\n    val filename: ScallopOption[String] =\n      opt[String](required = true, descr = \"File to write queries to\")\n    val numFiles: ScallopOption[Int] =\n      opt[Int](required = true, descr = \"Number of input files to use\")\n    val showFailedSparkQueries: ScallopOption[Boolean] =\n      opt[Boolean](required = false, descr = \"Whether to show failed Spark queries\")\n  }\n  addSubcommand(runQueries)\n  verify()\n}\n\nobject Main {\n\n  lazy val spark: SparkSession = SparkSession\n    .builder()\n    .getOrCreate()\n\n  def main(args: Array[String]): Unit = {\n    val conf = new Conf(args.toIndexedSeq)\n    conf.subcommand match {\n      case Some(conf.generateData) =>\n        val r = conf.generateData.randomSeed.toOption match {\n          case Some(seed) => new Random(seed)\n          case None => new Random()\n        }\n        for (i <- 0 until conf.generateData.numFiles()) {\n          
ParquetGenerator.makeParquetFile(\n            r,\n            spark,\n            s\"test$i.parquet\",\n            numRows = conf.generateData.numRows(),\n            SchemaGenOptions(\n              generateArray = conf.generateData.generateArrays(),\n              generateStruct = conf.generateData.generateStructs(),\n              generateMap = conf.generateData.generateMaps(),\n              // create two columns of each primitive type so that they can be used in binary\n              // expressions such as `a + b` and `a <  b`\n              primitiveTypes = SchemaGenOptions.defaultPrimitiveTypes ++\n                SchemaGenOptions.defaultPrimitiveTypes),\n            DataGenOptions(\n              allowNull = true,\n              generateNegativeZero = !conf.generateData.excludeNegativeZero()))\n        }\n      case Some(conf.generateQueries) =>\n        val r = conf.generateQueries.randomSeed.toOption match {\n          case Some(seed) => new Random(seed)\n          case None => new Random()\n        }\n        QueryGen.generateRandomQueries(\n          r,\n          spark,\n          numFiles = conf.generateQueries.numFiles(),\n          conf.generateQueries.numQueries())\n      case Some(conf.runQueries) =>\n        QueryRunner.runQueries(\n          spark,\n          conf.runQueries.numFiles(),\n          conf.runQueries.filename(),\n          conf.runQueries.showFailedSparkQueries())\n      case _ =>\n        // scalastyle:off println\n        println(\"Invalid subcommand\")\n        // scalastyle:on println\n        sys.exit(-1)\n    }\n  }\n}\n"
  },
  {
    "path": "fuzz-testing/src/main/scala/org/apache/comet/fuzz/Meta.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.fuzz\n\nimport org.apache.spark.sql.types.DataType\nimport org.apache.spark.sql.types.DataTypes\n\nsealed trait SparkType\ncase class SparkTypeOneOf(dataTypes: Seq[SparkType]) extends SparkType\ncase object SparkBooleanType extends SparkType\ncase object SparkBinaryType extends SparkType\ncase object SparkStringType extends SparkType\ncase object SparkIntegralType extends SparkType\ncase object SparkByteType extends SparkType\ncase object SparkShortType extends SparkType\ncase object SparkIntType extends SparkType\ncase object SparkLongType extends SparkType\ncase object SparkFloatType extends SparkType\ncase object SparkDoubleType extends SparkType\ncase class SparkDecimalType(p: Int, s: Int) extends SparkType\ncase object SparkNumericType extends SparkType\ncase object SparkDateType extends SparkType\ncase object SparkTimestampType extends SparkType\ncase object SparkDateOrTimestampType extends SparkType\ncase class SparkArrayType(elementType: SparkType) extends SparkType\ncase class SparkMapType(keyType: SparkType, valueType: SparkType) extends SparkType\ncase class SparkStructType(fields: Seq[SparkType]) extends SparkType\ncase object SparkAnyType extends SparkType\n\ncase class FunctionSignature(inputTypes: Seq[SparkType], varArgs: Boolean = false)\n\ncase class Function(name: String, signatures: Seq[FunctionSignature])\n\nobject Meta {\n\n  val primitiveSparkTypes: Seq[SparkType] = Seq(\n    SparkBooleanType,\n    SparkBinaryType,\n    SparkStringType,\n    SparkByteType,\n    SparkShortType,\n    SparkIntType,\n    SparkLongType,\n    SparkFloatType,\n    SparkDoubleType,\n    SparkDateType,\n    SparkTimestampType)\n\n  val dataTypes: Seq[(DataType, Double)] = Seq(\n    (DataTypes.BooleanType, 0.1),\n    (DataTypes.ByteType, 0.2),\n    (DataTypes.ShortType, 0.2),\n    (DataTypes.IntegerType, 0.2),\n    (DataTypes.LongType, 0.2),\n    (DataTypes.FloatType, 0.2),\n    (DataTypes.DoubleType, 0.2),\n    (DataTypes.createDecimalType(10, 2), 0.2),\n    (DataTypes.DateType, 0.2),\n    (DataTypes.TimestampType, 0.2),\n    (DataTypes.TimestampNTZType, 0.2),\n    (DataTypes.StringType, 0.2),\n    (DataTypes.BinaryType, 0.1))\n\n  private def createFunctionWithInputTypes(\n      name: String,\n      inputs: Seq[SparkType],\n      varArgs: Boolean = false): Function = {\n    Function(name, Seq(FunctionSignature(inputs, varArgs)))\n    createFunctions(name, Seq(FunctionSignature(inputs, varArgs)))\n  }\n\n  private def createFunctions(name: String, signatures: Seq[FunctionSignature]): Function = {\n    signatures.foreach { s =>\n      assert(\n        !s.varArgs || s.inputTypes.length == 1,\n        s\"Variadic function $s must have 
exactly one input type\")\n    }\n    Function(name, signatures)\n  }\n\n  private def createUnaryStringFunction(name: String): Function = {\n    createFunctionWithInputTypes(name, Seq(SparkStringType))\n  }\n\n  private def createUnaryNumericFunction(name: String): Function = {\n    createFunctionWithInputTypes(name, Seq(SparkNumericType))\n  }\n\n  // Math expressions (corresponds to mathExpressions in QueryPlanSerde)\n  val mathScalarFunc: Seq[Function] = Seq(\n    createUnaryNumericFunction(\"abs\"),\n    createUnaryNumericFunction(\"acos\"),\n    createUnaryNumericFunction(\"asin\"),\n    createUnaryNumericFunction(\"atan\"),\n    createFunctionWithInputTypes(\"atan2\", Seq(SparkNumericType, SparkNumericType)),\n    createUnaryNumericFunction(\"cos\"),\n    createUnaryNumericFunction(\"cosh\"),\n    createUnaryNumericFunction(\"exp\"),\n    createUnaryNumericFunction(\"expm1\"),\n    createFunctionWithInputTypes(\"log\", Seq(SparkNumericType, SparkNumericType)),\n    createUnaryNumericFunction(\"log10\"),\n    createUnaryNumericFunction(\"log2\"),\n    createFunctionWithInputTypes(\"pow\", Seq(SparkNumericType, SparkNumericType)),\n    createFunctionWithInputTypes(\"remainder\", Seq(SparkNumericType, SparkNumericType)),\n    createFunctions(\n      \"round\",\n      Seq(\n        FunctionSignature(Seq(SparkNumericType)),\n        FunctionSignature(Seq(SparkNumericType, SparkIntType)))),\n    createUnaryNumericFunction(\"signum\"),\n    createUnaryNumericFunction(\"sin\"),\n    createUnaryNumericFunction(\"sinh\"),\n    createUnaryNumericFunction(\"sqrt\"),\n    createUnaryNumericFunction(\"tan\"),\n    createUnaryNumericFunction(\"tanh\"),\n    createUnaryNumericFunction(\"cot\"),\n    createUnaryNumericFunction(\"ceil\"),\n    createUnaryNumericFunction(\"floor\"),\n    createFunctionWithInputTypes(\"unary_minus\", Seq(SparkNumericType)))\n\n  // Hash expressions (corresponds to hashExpressions in QueryPlanSerde)\n  val hashScalarFunc: Seq[Function] = Seq(\n    createFunctionWithInputTypes(\"md5\", Seq(SparkAnyType)),\n    createFunctionWithInputTypes(\"murmur3_hash\", Seq(SparkAnyType)), // TODO variadic\n    createFunctionWithInputTypes(\"sha2\", Seq(SparkAnyType, SparkIntType)))\n\n  // String expressions (corresponds to stringExpressions in QueryPlanSerde)\n  val stringScalarFunc: Seq[Function] = Seq(\n    createUnaryStringFunction(\"ascii\"),\n    createUnaryStringFunction(\"bit_length\"),\n    createUnaryStringFunction(\"chr\"),\n    createFunctionWithInputTypes(\n      \"concat\",\n      Seq(\n        SparkTypeOneOf(\n          Seq(\n            SparkStringType,\n            SparkBinaryType,\n            SparkArrayType(SparkStringType),\n            SparkArrayType(SparkNumericType),\n            SparkArrayType(SparkBinaryType)))),\n      varArgs = true),\n    createFunctionWithInputTypes(\"concat_ws\", Seq(SparkStringType, SparkStringType)),\n    createFunctionWithInputTypes(\"contains\", Seq(SparkStringType, SparkStringType)),\n    createFunctionWithInputTypes(\"ends_with\", Seq(SparkStringType, SparkStringType)),\n    createFunctionWithInputTypes(\n      \"hex\",\n      Seq(SparkTypeOneOf(Seq(SparkStringType, SparkBinaryType, SparkIntType, SparkLongType)))),\n    createUnaryStringFunction(\"init_cap\"),\n    createFunctionWithInputTypes(\"instr\", Seq(SparkStringType, SparkStringType)),\n    createFunctionWithInputTypes(\n      \"length\",\n      Seq(SparkTypeOneOf(Seq(SparkStringType, SparkBinaryType)))),\n    createFunctionWithInputTypes(\"like\", Seq(SparkStringType, 
SparkStringType)),\n    createUnaryStringFunction(\"lower\"),\n    createFunctions(\n      \"lpad\",\n      Seq(\n        FunctionSignature(Seq(SparkStringType, SparkIntegralType)),\n        FunctionSignature(Seq(SparkStringType, SparkIntegralType, SparkStringType)))),\n    createUnaryStringFunction(\"ltrim\"),\n    createUnaryStringFunction(\"octet_length\"),\n    createFunctions(\n      \"regexp_replace\",\n      Seq(\n        FunctionSignature(Seq(SparkStringType, SparkStringType, SparkStringType)),\n        FunctionSignature(Seq(SparkStringType, SparkStringType, SparkStringType, SparkIntType)))),\n    createFunctionWithInputTypes(\"repeat\", Seq(SparkStringType, SparkIntType)),\n    createFunctions(\n      \"replace\",\n      Seq(\n        FunctionSignature(Seq(SparkStringType, SparkStringType)),\n        FunctionSignature(Seq(SparkStringType, SparkStringType, SparkStringType)))),\n    createFunctions(\n      \"reverse\",\n      Seq(\n        FunctionSignature(Seq(SparkStringType)),\n        FunctionSignature(Seq(SparkArrayType(SparkAnyType))))),\n    createFunctionWithInputTypes(\"rlike\", Seq(SparkStringType, SparkStringType)),\n    createFunctions(\n      \"rpad\",\n      Seq(\n        FunctionSignature(Seq(SparkStringType, SparkIntegralType)),\n        FunctionSignature(Seq(SparkStringType, SparkIntegralType, SparkStringType)))),\n    createUnaryStringFunction(\"rtrim\"),\n    createFunctions(\n      \"split\",\n      Seq(\n        FunctionSignature(Seq(SparkStringType, SparkStringType)),\n        FunctionSignature(Seq(SparkStringType, SparkStringType, SparkIntType)))),\n    createFunctionWithInputTypes(\"starts_with\", Seq(SparkStringType, SparkStringType)),\n    createFunctionWithInputTypes(\"string_space\", Seq(SparkIntType)),\n    createFunctionWithInputTypes(\"substring\", Seq(SparkStringType, SparkIntType, SparkIntType)),\n    createFunctionWithInputTypes(\"translate\", Seq(SparkStringType, SparkStringType)),\n    createUnaryStringFunction(\"trim\"),\n    createUnaryStringFunction(\"btrim\"),\n    createUnaryStringFunction(\"unhex\"),\n    createUnaryStringFunction(\"upper\"),\n    createFunctionWithInputTypes(\"xxhash64\", Seq(SparkAnyType)), // TODO variadic\n    createFunctionWithInputTypes(\"sha1\", Seq(SparkAnyType)))\n\n  // Conditional expressions (corresponds to conditionalExpressions in QueryPlanSerde)\n  val conditionalScalarFunc: Seq[Function] = Seq(\n    createFunctionWithInputTypes(\"if\", Seq(SparkBooleanType, SparkAnyType, SparkAnyType)))\n\n  // Map expressions (corresponds to mapExpressions in QueryPlanSerde)\n  val mapScalarFunc: Seq[Function] = Seq(\n    createFunctionWithInputTypes(\n      \"map_extract\",\n      Seq(SparkMapType(SparkAnyType, SparkAnyType), SparkAnyType)),\n    createFunctionWithInputTypes(\"map_keys\", Seq(SparkMapType(SparkAnyType, SparkAnyType))),\n    createFunctionWithInputTypes(\"map_entries\", Seq(SparkMapType(SparkAnyType, SparkAnyType))),\n    createFunctionWithInputTypes(\"map_values\", Seq(SparkMapType(SparkAnyType, SparkAnyType))),\n    createFunctionWithInputTypes(\n      \"map_from_arrays\",\n      Seq(SparkArrayType(SparkAnyType), SparkArrayType(SparkAnyType))),\n    createFunctionWithInputTypes(\n      \"map_from_entries\",\n      Seq(\n        SparkArrayType(\n          SparkStructType(Seq(\n            SparkTypeOneOf(primitiveSparkTypes.filterNot(_ == SparkBinaryType)),\n            SparkTypeOneOf(primitiveSparkTypes.filterNot(_ == SparkBinaryType))))))))\n\n  // Predicate expressions (corresponds to predicateExpressions 
in QueryPlanSerde)\n  val predicateScalarFunc: Seq[Function] = Seq(\n    createFunctionWithInputTypes(\"and\", Seq(SparkBooleanType, SparkBooleanType)),\n    createFunctionWithInputTypes(\"or\", Seq(SparkBooleanType, SparkBooleanType)),\n    createFunctionWithInputTypes(\"not\", Seq(SparkBooleanType)),\n    createFunctionWithInputTypes(\"in\", Seq(SparkAnyType, SparkAnyType))\n  ) // TODO: variadic\n\n  // Struct expressions (corresponds to structExpressions in QueryPlanSerde)\n  val structScalarFunc: Seq[Function] = Seq(\n    createFunctionWithInputTypes(\n      \"create_named_struct\",\n      Seq(SparkStringType, SparkAnyType)\n    ), // TODO: variadic name/value pairs\n    createFunctionWithInputTypes(\n      \"get_struct_field\",\n      Seq(SparkStructType(Seq(SparkAnyType)), SparkStringType)))\n\n  // Bitwise expressions (corresponds to bitwiseExpressions in QueryPlanSerde)\n  val bitwiseScalarFunc: Seq[Function] = Seq(\n    createFunctionWithInputTypes(\"bitwise_and\", Seq(SparkIntegralType, SparkIntegralType)),\n    createFunctionWithInputTypes(\"bitwise_count\", Seq(SparkIntegralType)),\n    createFunctionWithInputTypes(\"bitwise_get\", Seq(SparkIntegralType, SparkIntType)),\n    createFunctionWithInputTypes(\"bitwise_or\", Seq(SparkIntegralType, SparkIntegralType)),\n    createFunctionWithInputTypes(\"bitwise_not\", Seq(SparkIntegralType)),\n    createFunctionWithInputTypes(\"bitwise_xor\", Seq(SparkIntegralType, SparkIntegralType)),\n    createFunctionWithInputTypes(\"shift_left\", Seq(SparkIntegralType, SparkIntType)),\n    createFunctionWithInputTypes(\"shift_right\", Seq(SparkIntegralType, SparkIntType)))\n\n  // Misc expressions (corresponds to miscExpressions in QueryPlanSerde)\n  val miscScalarFunc: Seq[Function] =\n    Seq(\n      createFunctionWithInputTypes(\"isnan\", Seq(SparkNumericType)),\n      createFunctionWithInputTypes(\"isnull\", Seq(SparkAnyType)),\n      createFunctionWithInputTypes(\"isnotnull\", Seq(SparkAnyType)),\n      createFunctionWithInputTypes(\"coalesce\", Seq(SparkAnyType, SparkAnyType))\n    ) // TODO: variadic\n\n  // Array expressions (corresponds to arrayExpressions in QueryPlanSerde)\n  val arrayScalarFunc: Seq[Function] = Seq(\n    createFunctionWithInputTypes(\"array_append\", Seq(SparkArrayType(SparkAnyType), SparkAnyType)),\n    createFunctionWithInputTypes(\"array_compact\", Seq(SparkArrayType(SparkAnyType))),\n    createFunctionWithInputTypes(\n      \"array_contains\",\n      Seq(SparkArrayType(SparkAnyType), SparkAnyType)),\n    createFunctionWithInputTypes(\"array_distinct\", Seq(SparkArrayType(SparkAnyType))),\n    createFunctionWithInputTypes(\n      \"array_except\",\n      Seq(SparkArrayType(SparkAnyType), SparkArrayType(SparkAnyType))),\n    createFunctionWithInputTypes(\n      \"array_insert\",\n      Seq(SparkArrayType(SparkAnyType), SparkIntType, SparkAnyType)),\n    createFunctionWithInputTypes(\n      \"array_intersect\",\n      Seq(SparkArrayType(SparkAnyType), SparkArrayType(SparkAnyType))),\n    createFunctions(\n      \"array_join\",\n      Seq(\n        FunctionSignature(Seq(SparkArrayType(SparkAnyType), SparkStringType)),\n        FunctionSignature(Seq(SparkArrayType(SparkAnyType), SparkStringType, SparkStringType)))),\n    createFunctionWithInputTypes(\"array_max\", Seq(SparkArrayType(SparkAnyType))),\n    createFunctionWithInputTypes(\"array_min\", Seq(SparkArrayType(SparkAnyType))),\n    createFunctionWithInputTypes(\"array_remove\", Seq(SparkArrayType(SparkAnyType), SparkAnyType)),\n    
createFunctionWithInputTypes(\"array_repeat\", Seq(SparkAnyType, SparkIntType)),\n    createFunctionWithInputTypes(\n      \"arrays_overlap\",\n      Seq(SparkArrayType(SparkAnyType), SparkArrayType(SparkAnyType))),\n    createFunctionWithInputTypes(\n      \"array_union\",\n      Seq(SparkArrayType(SparkAnyType), SparkArrayType(SparkAnyType))),\n    createFunctionWithInputTypes(\"array\", Seq(SparkAnyType, SparkAnyType)), // TODO: variadic\n    createFunctionWithInputTypes(\n      \"element_at\",\n      Seq(\n        SparkTypeOneOf(\n          Seq(SparkArrayType(SparkAnyType), SparkMapType(SparkAnyType, SparkAnyType))),\n        SparkAnyType)),\n    createFunctionWithInputTypes(\"flatten\", Seq(SparkArrayType(SparkArrayType(SparkAnyType)))),\n    createFunctionWithInputTypes(\n      \"get_array_item\",\n      Seq(SparkArrayType(SparkAnyType), SparkIntType)))\n\n  // Temporal expressions (corresponds to temporalExpressions in QueryPlanSerde)\n  val temporalScalarFunc: Seq[Function] =\n    Seq(\n      createFunctionWithInputTypes(\"date_add\", Seq(SparkDateType, SparkIntType)),\n      createFunctionWithInputTypes(\"date_sub\", Seq(SparkDateType, SparkIntType)),\n      createFunctions(\n        \"from_unixtime\",\n        Seq(\n          FunctionSignature(Seq(SparkLongType)),\n          FunctionSignature(Seq(SparkLongType, SparkStringType)))),\n      createFunctionWithInputTypes(\"hour\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"minute\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"second\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"trunc\", Seq(SparkDateOrTimestampType, SparkStringType)),\n      createFunctionWithInputTypes(\"year\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"month\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"day\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"dayofmonth\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"dayofweek\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"weekday\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"dayofyear\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"weekofyear\", Seq(SparkDateOrTimestampType)),\n      createFunctionWithInputTypes(\"quarter\", Seq(SparkDateOrTimestampType)))\n\n  // Combined in same order as exprSerdeMap in QueryPlanSerde\n  val scalarFunc: Seq[Function] = mathScalarFunc ++ hashScalarFunc ++ stringScalarFunc ++\n    conditionalScalarFunc ++ mapScalarFunc ++ predicateScalarFunc ++\n    structScalarFunc ++ bitwiseScalarFunc ++ miscScalarFunc ++ arrayScalarFunc ++\n    temporalScalarFunc\n\n  val aggFunc: Seq[Function] = Seq(\n    createFunctionWithInputTypes(\"min\", Seq(SparkAnyType)),\n    createFunctionWithInputTypes(\"max\", Seq(SparkAnyType)),\n    createFunctionWithInputTypes(\"count\", Seq(SparkAnyType)),\n    createUnaryNumericFunction(\"avg\"),\n    createUnaryNumericFunction(\"sum\"),\n    // first/last are non-deterministic and known to be incompatible with Spark\n//    createFunctionWithInputTypes(\"first\", Seq(SparkAnyType)),\n//    createFunctionWithInputTypes(\"last\", Seq(SparkAnyType)),\n    createUnaryNumericFunction(\"var_pop\"),\n    createUnaryNumericFunction(\"var_samp\"),\n    createFunctionWithInputTypes(\"covar_pop\", Seq(SparkNumericType, SparkNumericType)),\n    createFunctionWithInputTypes(\"covar_samp\", Seq(SparkNumericType, 
SparkNumericType)),\n    createUnaryNumericFunction(\"stddev_pop\"),\n    createUnaryNumericFunction(\"stddev_samp\"),\n    createFunctionWithInputTypes(\"corr\", Seq(SparkNumericType, SparkNumericType)),\n    createFunctionWithInputTypes(\"bit_and\", Seq(SparkIntegralType)),\n    createFunctionWithInputTypes(\"bit_or\", Seq(SparkIntegralType)),\n    createFunctionWithInputTypes(\"bit_xor\", Seq(SparkIntegralType)))\n\n  val unaryArithmeticOps: Seq[String] = Seq(\"+\", \"-\")\n\n  val binaryArithmeticOps: Seq[String] =\n    Seq(\"+\", \"-\", \"*\", \"/\", \"%\", \"&\", \"|\", \"^\", \"<<\", \">>\", \"div\")\n\n  val comparisonOps: Seq[String] = Seq(\"=\", \"<=>\", \">\", \">=\", \"<\", \"<=\")\n\n  // TODO make this more comprehensive\n  val comparisonTypes: Seq[SparkType] = Seq(\n    SparkStringType,\n    SparkBinaryType,\n    SparkNumericType,\n    SparkDateType,\n    SparkTimestampType,\n    SparkArrayType(SparkTypeOneOf(Seq(SparkStringType, SparkNumericType, SparkDateType))))\n\n}\n"
  },
  {
    "path": "fuzz-testing/src/main/scala/org/apache/comet/fuzz/QueryGen.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.fuzz\n\nimport java.io.{BufferedWriter, FileWriter}\n\nimport scala.annotation.tailrec\nimport scala.collection.mutable\nimport scala.util.Random\n\nimport org.apache.spark.sql.{DataFrame, SparkSession}\nimport org.apache.spark.sql.types._\n\nobject QueryGen {\n\n  def generateRandomQueries(\n      r: Random,\n      spark: SparkSession,\n      numFiles: Int,\n      numQueries: Int): Unit = {\n    for (i <- 0 until numFiles) {\n      spark.read.parquet(s\"test$i.parquet\").createTempView(s\"test$i\")\n    }\n\n    val w = new BufferedWriter(new FileWriter(\"queries.sql\"))\n\n    val uniqueQueries = mutable.HashSet[String]()\n\n    for (_ <- 0 until numQueries) {\n      try {\n        val sql = r.nextInt().abs % 8 match {\n          case 0 => generateJoin(r, spark, numFiles)\n          case 1 => generateAggregate(r, spark, numFiles)\n          case 2 => generateScalar(r, spark, numFiles)\n          case 3 => generateCast(r, spark, numFiles)\n          case 4 => generateUnaryArithmetic(r, spark, numFiles)\n          case 5 => generateBinaryArithmetic(r, spark, numFiles)\n          case 6 => generateBinaryComparison(r, spark, numFiles)\n          case _ => generateConditional(r, spark, numFiles)\n        }\n        if (!uniqueQueries.contains(sql)) {\n          uniqueQueries += sql\n          w.write(sql + \"\\n\")\n        }\n      } catch {\n        case e: Exception =>\n          // scalastyle:off\n          println(s\"Failed to generate query: ${e.getMessage}\")\n      }\n    }\n    w.close()\n  }\n\n  private def generateAggregate(r: Random, spark: SparkSession, numFiles: Int): String = {\n    val tableName = s\"test${r.nextInt(numFiles)}\"\n    val table = spark.table(tableName)\n\n    val func = Utils.randomChoice(Meta.aggFunc, r)\n    try {\n      val signature = Utils.randomChoice(func.signatures, r)\n      val args = signature.inputTypes.map(x => pickRandomColumn(r, table, x))\n\n      val groupingCols = Range(0, 2).map(_ => Utils.randomChoice(table.columns, r))\n\n      if (groupingCols.isEmpty) {\n        s\"SELECT ${args.mkString(\", \")}, ${func.name}(${args.mkString(\", \")}) AS x \" +\n          s\"FROM $tableName \" +\n          s\"ORDER BY ${args.mkString(\", \")};\"\n      } else {\n        s\"SELECT ${groupingCols.mkString(\", \")}, ${func.name}(${args.mkString(\", \")}) \" +\n          s\"FROM $tableName \" +\n          s\"GROUP BY ${groupingCols.mkString(\",\")} \" +\n          s\"ORDER BY ${groupingCols.mkString(\", \")};\"\n      }\n    } catch {\n      case e: Exception =>\n        throw new IllegalStateException(\n          s\"Failed to generate SQL for aggregate function ${func.name}\",\n          e)\n    }\n  
}\n\n  private def generateScalar(r: Random, spark: SparkSession, numFiles: Int): String = {\n    val tableName = s\"test${r.nextInt(numFiles)}\"\n    val table = spark.table(tableName)\n\n    val func = Utils.randomChoice(Meta.scalarFunc, r)\n    try {\n      val signature = Utils.randomChoice(func.signatures, r)\n      val args =\n        if (signature.varArgs) {\n          pickRandomColumns(r, table, signature.inputTypes.head)\n        } else {\n          signature.inputTypes.map(x => pickRandomColumn(r, table, x))\n        }\n\n      // Example SELECT c0, log(c0) as x FROM test0\n      s\"SELECT ${args.mkString(\", \")}, ${func.name}(${args.mkString(\", \")}) AS x \" +\n        s\"FROM $tableName \" +\n        s\"ORDER BY ${args.mkString(\", \")};\"\n    } catch {\n      case e: Exception =>\n        throw new IllegalStateException(\n          s\"Failed to generate SQL for scalar function ${func.name}\",\n          e)\n    }\n  }\n\n  @tailrec\n  private def pickRandomColumns(r: Random, df: DataFrame, targetType: SparkType): Seq[String] = {\n    targetType match {\n      case SparkTypeOneOf(choices) =>\n        val chosenType = Utils.randomChoice(choices, r)\n        pickRandomColumns(r, df, chosenType)\n      case _ =>\n        var columns = Set.empty[String]\n        for (_ <- 0 to r.nextInt(df.columns.length)) {\n          columns += pickRandomColumn(r, df, targetType)\n        }\n        columns.toSeq\n    }\n  }\n\n  private def pickRandomColumn(r: Random, df: DataFrame, targetType: SparkType): String = {\n    targetType match {\n      case SparkAnyType =>\n        Utils.randomChoice(df.schema.fields, r).name\n      case SparkBooleanType =>\n        select(r, df, _.dataType == BooleanType)\n      case SparkByteType =>\n        select(r, df, _.dataType == ByteType)\n      case SparkShortType =>\n        select(r, df, _.dataType == ShortType)\n      case SparkIntType =>\n        select(r, df, _.dataType == IntegerType)\n      case SparkLongType =>\n        select(r, df, _.dataType == LongType)\n      case SparkFloatType =>\n        select(r, df, _.dataType == FloatType)\n      case SparkDoubleType =>\n        select(r, df, _.dataType == DoubleType)\n      case SparkDecimalType(_, _) =>\n        select(r, df, _.dataType.isInstanceOf[DecimalType])\n      case SparkIntegralType =>\n        select(\n          r,\n          df,\n          f =>\n            f.dataType == ByteType || f.dataType == ShortType ||\n              f.dataType == IntegerType || f.dataType == LongType)\n      case SparkNumericType =>\n        select(r, df, f => isNumeric(f.dataType))\n      case SparkStringType =>\n        select(r, df, _.dataType == StringType)\n      case SparkBinaryType =>\n        select(r, df, _.dataType == BinaryType)\n      case SparkDateType =>\n        select(r, df, _.dataType == DateType)\n      case SparkTimestampType =>\n        select(r, df, _.dataType == TimestampType)\n      case SparkDateOrTimestampType =>\n        select(r, df, f => f.dataType == DateType || f.dataType == TimestampType)\n      case SparkTypeOneOf(choices) =>\n        pickRandomColumn(r, df, Utils.randomChoice(choices, r))\n      case SparkArrayType(elementType) =>\n        select(\n          r,\n          df,\n          _.dataType match {\n            case ArrayType(x, _) if typeMatch(elementType, x) => true\n            case _ => false\n          })\n      case SparkMapType(keyType, valueType) =>\n        select(\n          r,\n          df,\n          _.dataType match {\n            case MapType(k, v, _) if 
typeMatch(keyType, k) && typeMatch(valueType, v) => true\n            case _ => false\n          })\n      case SparkStructType(fields) =>\n        select(\n          r,\n          df,\n          _.dataType match {\n            case StructType(structFields) if structFields.length == fields.length => true\n            case _ => false\n          })\n      case _ =>\n        throw new IllegalStateException(targetType.toString)\n    }\n  }\n\n  def pickTwoRandomColumns(r: Random, df: DataFrame, targetType: SparkType): (String, String) = {\n    val a = pickRandomColumn(r, df, targetType)\n    val df2 = df.drop(a)\n    val b = pickRandomColumn(r, df2, targetType)\n    (a, b)\n  }\n\n  /** Select a random field that matches a predicate */\n  private def select(r: Random, df: DataFrame, predicate: StructField => Boolean): String = {\n    val candidates = df.schema.fields.filter(predicate)\n    if (candidates.isEmpty) {\n      throw new IllegalStateException(\"Failed to find suitable column\")\n    }\n    Utils.randomChoice(candidates, r).name\n  }\n\n  private def isNumeric(d: DataType): Boolean = {\n    d match {\n      case _: ByteType | _: ShortType | _: IntegerType | _: LongType | _: FloatType |\n          _: DoubleType | _: DecimalType =>\n        true\n      case _ => false\n    }\n  }\n\n  private def typeMatch(s: SparkType, d: DataType): Boolean = {\n    (s, d) match {\n      case (SparkAnyType, _) => true\n      case (SparkBooleanType, BooleanType) => true\n      case (SparkByteType, ByteType) => true\n      case (SparkShortType, ShortType) => true\n      case (SparkIntType, IntegerType) => true\n      case (SparkLongType, LongType) => true\n      case (SparkFloatType, FloatType) => true\n      case (SparkDoubleType, DoubleType) => true\n      case (SparkDecimalType(_, _), _: DecimalType) => true\n      case (SparkIntegralType, ByteType | ShortType | IntegerType | LongType) => true\n      case (SparkNumericType, _) if isNumeric(d) => true\n      case (SparkStringType, StringType) => true\n      case (SparkBinaryType, BinaryType) => true\n      case (SparkDateType, DateType) => true\n      case (SparkTimestampType, TimestampType | TimestampNTZType) => true\n      case (SparkDateOrTimestampType, DateType | TimestampType | TimestampNTZType) => true\n      case (SparkArrayType(elementType), ArrayType(elementDataType, _)) =>\n        typeMatch(elementType, elementDataType)\n      case (SparkMapType(keyType, valueType), MapType(keyDataType, valueDataType, _)) =>\n        typeMatch(keyType, keyDataType) && typeMatch(valueType, valueDataType)\n      case (SparkStructType(fields), StructType(structFields)) =>\n        fields.length == structFields.length &&\n        fields.zip(structFields.map(_.dataType)).forall { case (sparkType, dataType) =>\n          typeMatch(sparkType, dataType)\n        }\n      case (SparkTypeOneOf(choices), _) =>\n        choices.exists(choice => typeMatch(choice, d))\n      case _ => false\n    }\n  }\n\n  private def generateUnaryArithmetic(r: Random, spark: SparkSession, numFiles: Int): String = {\n    val tableName = s\"test${r.nextInt(numFiles)}\"\n    val table = spark.table(tableName)\n\n    val op = Utils.randomChoice(Meta.unaryArithmeticOps, r)\n    val a = pickRandomColumn(r, table, SparkNumericType)\n\n    // Example SELECT a, -a FROM test0\n    s\"SELECT $a, $op$a \" +\n      s\"FROM $tableName \" +\n      s\"ORDER BY $a;\"\n  }\n\n  private def generateBinaryArithmetic(r: Random, spark: SparkSession, numFiles: Int): String = {\n    val tableName = 
s\"test${r.nextInt(numFiles)}\"\n    val table = spark.table(tableName)\n\n    val op = Utils.randomChoice(Meta.binaryArithmeticOps, r)\n    val (a, b) = pickTwoRandomColumns(r, table, SparkNumericType)\n\n    // Example SELECT a, b, a+b FROM test0\n    s\"SELECT $a, $b, $a $op $b \" +\n      s\"FROM $tableName \" +\n      s\"ORDER BY $a, $b;\"\n  }\n\n  private def generateBinaryComparison(r: Random, spark: SparkSession, numFiles: Int): String = {\n    val tableName = s\"test${r.nextInt(numFiles)}\"\n    val table = spark.table(tableName)\n\n    val op = Utils.randomChoice(Meta.comparisonOps, r)\n\n    // pick two columns with the same type\n    val opType = Utils.randomChoice(Meta.comparisonTypes, r)\n    val (a, b) = pickTwoRandomColumns(r, table, opType)\n\n    // Example SELECT a, b, a <=> b FROM test0\n    s\"SELECT $a, $b, $a $op $b \" +\n      s\"FROM $tableName \" +\n      s\"ORDER BY $a, $b;\"\n  }\n\n  private def generateConditional(r: Random, spark: SparkSession, numFiles: Int): String = {\n    val tableName = s\"test${r.nextInt(numFiles)}\"\n    val table = spark.table(tableName)\n\n    val op = Utils.randomChoice(Meta.comparisonOps, r)\n\n    // pick two columns with the same type\n    val opType = Utils.randomChoice(Meta.comparisonTypes, r)\n    val (a, b) = pickTwoRandomColumns(r, table, opType)\n\n    // Example SELECT a, b, IF(a <=> b, 1, 2), CASE WHEN a <=> b THEN 1 ELSE 2 END FROM test0\n    s\"SELECT $a, $b, $a $op $b, IF($a $op $b, 1, 2), CASE WHEN $a $op $b THEN 1 ELSE 2 END \" +\n      s\"FROM $tableName \" +\n      s\"ORDER BY $a, $b;\"\n  }\n\n  private def generateCast(r: Random, spark: SparkSession, numFiles: Int): String = {\n    val tableName = s\"test${r.nextInt(numFiles)}\"\n    val table = spark.table(tableName)\n\n    val toType = Utils.randomWeightedChoice(Meta.dataTypes, r).sql\n    val arg = Utils.randomChoice(table.columns, r)\n\n    // We test both `cast` and `try_cast` to cover LEGACY and TRY eval modes. It is not\n    // recommended to run Comet Fuzz with ANSI enabled currently.\n    // Example SELECT c0, cast(c0 as float), try_cast(c0 as float) FROM test0\n    s\"SELECT $arg, cast($arg as $toType), try_cast($arg as $toType) \" +\n      s\"FROM $tableName \" +\n      s\"ORDER BY $arg;\"\n  }\n\n  private def generateJoin(r: Random, spark: SparkSession, numFiles: Int): String = {\n    val leftTableName = s\"test${r.nextInt(numFiles)}\"\n    val rightTableName = s\"test${r.nextInt(numFiles)}\"\n    val leftTable = spark.table(leftTableName)\n    val rightTable = spark.table(rightTableName)\n\n    val leftCol = Utils.randomChoice(leftTable.columns, r)\n    val rightCol = Utils.randomChoice(rightTable.columns, r)\n\n    val joinTypes = Seq((\"INNER\", 0.4), (\"LEFT\", 0.3), (\"RIGHT\", 0.3))\n    val joinType = Utils.randomWeightedChoice(joinTypes, r)\n\n    val leftColProjection = leftTable.columns.map(c => s\"l.$c\").mkString(\", \")\n    val rightColProjection = rightTable.columns.map(c => s\"r.$c\").mkString(\", \")\n    \"SELECT \" +\n      s\"$leftColProjection, \" +\n      s\"$rightColProjection \" +\n      s\"FROM $leftTableName l \" +\n      s\"$joinType JOIN $rightTableName r \" +\n      s\"ON l.$leftCol = r.$rightCol \" +\n      \"ORDER BY \" +\n      s\"$leftColProjection, \" +\n      s\"$rightColProjection;\"\n  }\n\n}\n"
  },
  {
    "path": "fuzz-testing/src/main/scala/org/apache/comet/fuzz/QueryRunner.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.fuzz\n\nimport java.io.{BufferedWriter, FileWriter, PrintWriter, StringWriter}\n\nimport scala.collection.mutable\nimport scala.io.Source\n\nimport org.apache.spark.sql.{Row, SparkSession}\n\nimport org.apache.comet.fuzz.QueryComparison.showPlans\n\nobject QueryRunner {\n\n  def createOutputMdFile(): BufferedWriter = {\n    val outputFilename = s\"results-${System.currentTimeMillis()}.md\"\n    // scalastyle:off println\n    println(s\"Writing results to $outputFilename\")\n    // scalastyle:on println\n\n    new BufferedWriter(new FileWriter(outputFilename))\n  }\n\n  def runQueries(\n      spark: SparkSession,\n      numFiles: Int,\n      filename: String,\n      showFailedSparkQueries: Boolean = false): Unit = {\n\n    var queryCount = 0\n    var invalidQueryCount = 0\n    var cometFailureCount = 0\n    var cometSuccessCount = 0\n\n    val w = createOutputMdFile()\n\n    // register input files\n    for (i <- 0 until numFiles) {\n      val table = spark.read.parquet(s\"test$i.parquet\")\n      val tableName = s\"test$i\"\n      table.createTempView(tableName)\n      w.write(\n        s\"Created table $tableName with schema:\\n\\t\" +\n          s\"${table.schema.fields.map(f => s\"${f.name}: ${f.dataType}\").mkString(\"\\n\\t\")}\\n\\n\")\n    }\n\n    val querySource = Source.fromFile(filename)\n    try {\n      querySource\n        .getLines()\n        .foreach(sql => {\n          queryCount += 1\n          try {\n            // execute with Spark\n            spark.conf.set(\"spark.comet.enabled\", \"false\")\n            val df = spark.sql(sql)\n            val sparkRows = df.collect()\n            val sparkPlan = df.queryExecution.executedPlan.toString\n\n            // execute with Comet\n            try {\n              spark.conf.set(\"spark.comet.enabled\", \"true\")\n              val df = spark.sql(sql)\n              val cometRows = df.collect()\n              val cometPlan = df.queryExecution.executedPlan.toString\n\n              var success = QueryComparison.assertSameRows(sparkRows, cometRows, output = w)\n\n              // check that the plan contains Comet operators\n              if (!cometPlan.contains(\"Comet\")) {\n                success = false\n                w.write(\"[ERROR] Comet did not accelerate any part of the plan\\n\")\n              }\n\n              QueryComparison.showSQL(w, sql)\n\n              if (success) {\n                cometSuccessCount += 1\n              } else {\n                // show plans for failed queries\n                showPlans(w, sparkPlan, cometPlan)\n                cometFailureCount += 1\n              }\n\n            } catch {\n              case e: Exception =>\n 
               // the query worked in Spark but failed in Comet, so this is likely a bug in Comet\n                cometFailureCount += 1\n                QueryComparison.showSQL(w, sql)\n                w.write(\"### Spark Plan\\n\")\n                w.write(s\"```\\n$sparkPlan\\n```\\n\")\n\n                w.write(s\"[ERROR] Query failed in Comet: ${e.getMessage}:\\n\")\n                w.write(\"```\\n\")\n                val sw = new StringWriter()\n                val p = new PrintWriter(sw)\n                e.printStackTrace(p)\n                p.close()\n                w.write(s\"${sw.toString}\\n\")\n                w.write(\"```\\n\")\n            }\n\n            // flush after every query so that results are saved in the event of the driver crashing\n            w.flush()\n\n          } catch {\n            case e: Exception =>\n              // we expect many generated queries to be invalid\n              invalidQueryCount += 1\n              if (showFailedSparkQueries) {\n                QueryComparison.showSQL(w, sql)\n                w.write(s\"Query failed in Spark: ${e.getMessage}\\n\")\n              }\n          }\n        })\n\n      w.write(\"# Summary\\n\")\n      w.write(\n        s\"Total queries: $queryCount; Invalid queries: $invalidQueryCount; \" +\n          s\"Comet failed: $cometFailureCount; Comet succeeded: $cometSuccessCount\\n\")\n\n    } finally {\n      w.close()\n      querySource.close()\n    }\n  }\n}\n\nobject QueryComparison {\n  def assertSameRows(\n      sparkRows: Array[Row],\n      cometRows: Array[Row],\n      output: BufferedWriter,\n      tolerance: Double = 0.000001): Boolean = {\n    if (sparkRows.length == cometRows.length) {\n      var i = 0\n      while (i < sparkRows.length) {\n        val l = sparkRows(i)\n        val r = cometRows(i)\n\n        // Check that the schemas are equal (first row only)\n        if (i == 0 && l.schema != r.schema) {\n          output.write(\"[ERROR] Spark produced different schema than Comet.\\n\")\n\n          return false\n        }\n\n        assert(l.length == r.length)\n        for (j <- 0 until l.length) {\n          if (!same(l(j), r(j), tolerance)) {\n            output.write(s\"First difference at row $i:\\n\")\n            output.write(\"Spark: `\" + formatRow(l) + \"`\\n\")\n            output.write(\"Comet: `\" + formatRow(r) + \"`\\n\")\n            return false\n          }\n        }\n        i += 1\n      }\n    } else {\n      output.write(\n        s\"[ERROR] Spark produced ${sparkRows.length} rows and \" +\n          s\"Comet produced ${cometRows.length} rows.\\n\")\n\n      return false\n    }\n\n    true\n  }\n\n  private def same(l: Any, r: Any, tolerance: Double): Boolean = {\n    if (l == null || r == null) {\n      return l == null && r == null\n    }\n    (l, r) match {\n      case (a: Float, b: Float) if a.isPosInfinity => b.isPosInfinity\n      case (a: Float, b: Float) if a.isNegInfinity => b.isNegInfinity\n      case (a: Float, b: Float) if a.isNaN => b.isNaN\n      case (a: Float, b: Float) => (a - b).abs <= tolerance\n      case (a: Double, b: Double) if a.isPosInfinity => b.isPosInfinity\n      case (a: Double, b: Double) if a.isNegInfinity => b.isNegInfinity\n      case (a: Double, b: Double) if a.isNaN => b.isNaN\n      case (a: Double, b: Double) => (a - b).abs <= tolerance\n      case (a: Array[_], b: 
Array[_]) =>\n        a.length == b.length && a.zip(b).forall(x => same(x._1, x._2, tolerance))\n      case (a: mutable.WrappedArray[_], b: mutable.WrappedArray[_]) =>\n        a.length == b.length && a.zip(b).forall(x => same(x._1, x._2, tolerance))\n      case (a: Row, b: Row) =>\n        val aa = a.toSeq\n        val bb = b.toSeq\n        aa.length == bb.length && aa.zip(bb).forall(x => same(x._1, x._2, tolerance))\n      case (a, b) => a == b\n    }\n  }\n\n  private def format(value: Any): String = {\n    value match {\n      case null => \"NULL\"\n      case v: mutable.WrappedArray[_] => s\"[${v.map(format).mkString(\",\")}]\"\n      case v: Array[Byte] => s\"[${v.mkString(\",\")}]\"\n      case r: Row => formatRow(r)\n      case other => other.toString\n    }\n  }\n\n  private def formatRow(row: Row): String = {\n    row.toSeq.map(format).mkString(\",\")\n  }\n\n  def showSQL(w: BufferedWriter, sql: String, maxLength: Int = 120): Unit = {\n    w.write(\"## SQL\\n\")\n    w.write(\"```\\n\")\n    val words = sql.split(\" \")\n    val currentLine = new StringBuilder\n    for (word <- words) {\n      if (currentLine.length + word.length + 1 > maxLength) {\n        w.write(currentLine.toString.trim)\n        w.write(\"\\n\")\n        currentLine.setLength(0)\n      }\n      currentLine.append(word).append(\" \")\n    }\n    if (currentLine.nonEmpty) {\n      w.write(currentLine.toString.trim)\n      w.write(\"\\n\")\n    }\n    w.write(\"```\\n\")\n  }\n\n  def showPlans(w: BufferedWriter, sparkPlan: String, cometPlan: String): Unit = {\n    w.write(\"### Spark Plan\\n\")\n    w.write(s\"```\\n$sparkPlan\\n```\\n\")\n    w.write(\"### Comet Plan\\n\")\n    w.write(s\"```\\n$cometPlan\\n```\\n\")\n  }\n\n  def showSchema(w: BufferedWriter, sparkSchema: String, cometSchema: String): Unit = {\n    w.write(\"### Spark Schema\\n\")\n    w.write(s\"```\\n$sparkSchema\\n```\\n\")\n    w.write(\"### Comet Schema\\n\")\n    w.write(s\"```\\n$cometSchema\\n```\\n\")\n  }\n}\n"
  },
  {
    "path": "fuzz-testing/src/main/scala/org/apache/comet/fuzz/Utils.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.fuzz\n\nimport scala.util.Random\n\nobject Utils {\n\n  def randomChoice[T](list: Seq[T], r: Random): T = {\n    list(r.nextInt(list.length))\n  }\n\n  def randomWeightedChoice[T](valuesWithWeights: Seq[(T, Double)], r: Random): T = {\n    val totalWeight = valuesWithWeights.map(_._2).sum\n    val randomValue = r.nextDouble() * totalWeight\n    var cumulativeWeight = 0.0\n\n    for ((value, weight) <- valuesWithWeights) {\n      cumulativeWeight += weight\n      if (cumulativeWeight >= randomValue) {\n        return value\n      }\n    }\n\n    // If for some reason the loop doesn't return, return the last value\n    valuesWithWeights.last._1\n  }\n\n}\n"
  },
  {
    "path": "kube/Dockerfile",
    "content": "#\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nFROM apache/spark:3.5.8 AS builder\n\nUSER root\n\n# Installing JDK11 as the image comes with JRE\nRUN apt update \\\n    && apt install -y curl \\\n    && apt install -y openjdk-11-jdk \\\n    && apt clean\n\nRUN apt install -y gcc-10 g++-10 cpp-10 unzip\nENV CC=\"gcc-10\"\nENV CXX=\"g++-10\"\n\nRUN PB_REL=\"https://github.com/protocolbuffers/protobuf/releases\" \\\n    && curl -LO $PB_REL/download/v30.2/protoc-30.2-linux-x86_64.zip \\\n    && unzip protoc-30.2-linux-x86_64.zip -d /root/.local\nENV PATH=\"$PATH:/root/.local/bin\"\n\nRUN curl --proto '=https' --tlsv1.2 -sSf https://sh.rustup.rs | sh -s -- -y\nENV PATH=\"/root/.cargo/bin:${PATH}\"\nENV RUSTFLAGS=\"-C debuginfo=line-tables-only -C incremental=false\"\nENV SPARK_VERSION=3.5\nENV SCALA_VERSION=2.12\n\n# copy source files to Docker image\nRUN mkdir /comet\nWORKDIR /comet\n\n# build native code first so that this layer can be re-used\n# if only Scala code gets modified\nCOPY native /comet/native\nRUN cd native && RUSTFLAGS=\"-Ctarget-cpu=native\" cargo build --release\n\n# copy the rest of the project\nCOPY .mvn /comet/.mvn\nCOPY mvnw /comet/mvnw\nCOPY common /comet/common\nCOPY dev /comet/dev\nCOPY docs /comet/docs\nCOPY fuzz-testing /comet/fuzz-testing\nCOPY spark /comet/spark\nCOPY spark-integration /comet/spark-integration\nCOPY scalafmt.conf /comet/scalafmt.conf\nCOPY .scalafix.conf /comet/.scalafix.conf\nCOPY Makefile /comet/Makefile\nCOPY pom.xml /comet/pom.xml\n\nRUN mkdir -p /root/.m2 && \\\n    echo '<settings><mirrors><mirror><id>central</id><mirrorOf>central</mirrorOf><url>https://repo1.maven.org/maven2</url></mirror></mirrors></settings>' > /root/.m2/settings.xml\n\n# Pick the JDK instead of JRE to compile Comet\nRUN cd /comet \\\n    && JAVA_HOME=$(readlink -f $(which javac) | sed \"s/\\/bin\\/javac//\") make release-nogit PROFILES=\"-Pspark-$SPARK_VERSION -Pscala-$SCALA_VERSION\"\n\nFROM apache/spark:3.5.8\nENV SPARK_VERSION=3.5\nENV SCALA_VERSION=2.12\nUSER root\n\n# note the use of a wildcard in the file name so that this works with both snapshot and final release versions\nCOPY --from=builder  /comet/spark/target/comet-spark-spark${SPARK_VERSION}_$SCALA_VERSION-*.jar $SPARK_HOME/jars\n"
  },
  {
    "path": "kube/local/hadoop.env",
    "content": "#\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nCORE_CONF_fs_defaultFS=hdfs://namenode:9000\nCORE_CONF_hadoop_http_staticuser_user=root\nCORE_CONF_hadoop_proxyuser_hue_hosts=*\nCORE_CONF_hadoop_proxyuser_hue_groups=*\nCORE_CONF_io_compression_codecs=org.apache.hadoop.io.compress.SnappyCodec\nCORE_CONF_hadoop_tmp_dir=/hadoop-data\nCORE_CONF_dfs_client_use_datanode_hostname=true\nCORE_CONF_dfs_datanode_use_datanode_hostname=true\n\nHDFS_CONF_dfs_webhdfs_enabled=true\nHDFS_CONF_dfs_permissions_enabled=false\nHDFS_CONF_dfs_namenode_datanode_registration_ip___hostname___check=false\nHDFS_CONF_dfs_client_use_datanode_hostname=true\nHDFS_CONF_dfs_datanode_use_datanode_hostname=true"
  },
  {
    "path": "kube/local/hdfs-docker-compose.yml",
    "content": "#\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\nversion: \"3\"\n\nservices:\n  namenode:\n    image: bde2020/hadoop-namenode:2.0.0-hadoop3.2.1-java8\n    container_name: namenode\n    restart: always\n    ports:\n      - 9870:9870\n      - 9000:9000\n    volumes:\n      - /tmp/hadoop/dfs/name:/hadoop/dfs/name\n    environment:\n      - CLUSTER_NAME=test\n    env_file:\n      - hadoop.env\n\n  datanode1:\n    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8\n    container_name: datanode1\n    hostname: datanode1\n    restart: always\n    ports:\n      - 9866:9866\n      - 9864:9864\n    depends_on:\n      - namenode\n    volumes:\n      - /tmp/hadoop/dfs/data1:/hadoop/dfs/data\n    environment:\n      SERVICE_PRECONDITION: \"namenode:9870\"\n    env_file:\n      - hadoop.env\n  datanode2:\n    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8\n    container_name: datanode2\n    hostname: datanode2\n    restart: always\n    ports:\n      - 9867:9866\n      - 9865:9864\n    depends_on:\n      - namenode\n    volumes:\n      - /tmp/hadoop/dfs/data2:/hadoop/dfs/data\n    environment:\n      SERVICE_PRECONDITION: \"namenode:9870\"\n    env_file:\n      - hadoop.env\n  datanode3:\n    image: bde2020/hadoop-datanode:2.0.0-hadoop3.2.1-java8\n    container_name: datanode3\n    hostname: datanode3\n    restart: always\n    ports:\n      - 9868:9866\n      - 9863:9864\n    depends_on:\n      - namenode\n    volumes:\n      - /tmp/hadoop/dfs/data3:/hadoop/dfs/data\n    environment:\n      SERVICE_PRECONDITION: \"namenode:9870\"\n    env_file:\n      - hadoop.env\n\n  resourcemanager:\n    image: bde2020/hadoop-resourcemanager:2.0.0-hadoop3.2.1-java8\n    container_name: resourcemanager\n    restart: always\n    environment:\n      SERVICE_PRECONDITION: \"namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 datanode3:9864 datanode1:9866 datanode2:9866 datanode3:9866\"\n    env_file:\n      - hadoop.env\n\n  nodemanager1:\n    image: bde2020/hadoop-nodemanager:2.0.0-hadoop3.2.1-java8\n    container_name: nodemanager\n    restart: always\n    environment:\n      SERVICE_PRECONDITION: \"namenode:9000 namenode:9870 datanode1:9864 datanode2:9864 datanode3:9864 datanode1:9866 datanode2:9866 datanode3:9866 resourcemanager:8088\"\n    env_file:\n      - hadoop.env"
  },
  {
    "path": "mvnw",
    "content": "#!/bin/sh\n# ----------------------------------------------------------------------------\n# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n# ----------------------------------------------------------------------------\n\n# ----------------------------------------------------------------------------\n# Apache Maven Wrapper startup batch script, version 3.2.0\n#\n# Required ENV vars:\n# ------------------\n#   JAVA_HOME - location of a JDK home dir\n#\n# Optional ENV vars\n# -----------------\n#   MAVEN_OPTS - parameters passed to the Java VM when running Maven\n#     e.g. to debug Maven itself, use\n#       set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000\n#   MAVEN_SKIP_RC - flag to disable loading of mavenrc files\n# ----------------------------------------------------------------------------\n\nif [ -z \"$MAVEN_SKIP_RC\" ] ; then\n\n  if [ -f /usr/local/etc/mavenrc ] ; then\n    . /usr/local/etc/mavenrc\n  fi\n\n  if [ -f /etc/mavenrc ] ; then\n    . /etc/mavenrc\n  fi\n\n  if [ -f \"$HOME/.mavenrc\" ] ; then\n    . \"$HOME/.mavenrc\"\n  fi\n\nfi\n\n# OS specific support.  $var _must_ be set to either true or false.\ncygwin=false;\ndarwin=false;\nmingw=false\ncase \"$(uname)\" in\n  CYGWIN*) cygwin=true ;;\n  MINGW*) mingw=true;;\n  Darwin*) darwin=true\n    # Use /usr/libexec/java_home if available, otherwise fall back to /Library/Java/Home\n    # See https://developer.apple.com/library/mac/qa/qa1170/_index.html\n    if [ -z \"$JAVA_HOME\" ]; then\n      if [ -x \"/usr/libexec/java_home\" ]; then\n        JAVA_HOME=\"$(/usr/libexec/java_home)\"; export JAVA_HOME\n      else\n        JAVA_HOME=\"/Library/Java/Home\"; export JAVA_HOME\n      fi\n    fi\n    ;;\nesac\n\nif [ -z \"$JAVA_HOME\" ] ; then\n  if [ -r /etc/gentoo-release ] ; then\n    JAVA_HOME=$(java-config --jre-home)\n  fi\nfi\n\n# For Cygwin, ensure paths are in UNIX format before anything is touched\nif $cygwin ; then\n  [ -n \"$JAVA_HOME\" ] &&\n    JAVA_HOME=$(cygpath --unix \"$JAVA_HOME\")\n  [ -n \"$CLASSPATH\" ] &&\n    CLASSPATH=$(cygpath --path --unix \"$CLASSPATH\")\nfi\n\n# For Mingw, ensure paths are in UNIX format before anything is touched\nif $mingw ; then\n  [ -n \"$JAVA_HOME\" ] && [ -d \"$JAVA_HOME\" ] &&\n    JAVA_HOME=\"$(cd \"$JAVA_HOME\" || (echo \"cannot cd into $JAVA_HOME.\"; exit 1); pwd)\"\nfi\n\nif [ -z \"$JAVA_HOME\" ]; then\n  javaExecutable=\"$(which javac)\"\n  if [ -n \"$javaExecutable\" ] && ! [ \"$(expr \"\\\"$javaExecutable\\\"\" : '\\([^ ]*\\)')\" = \"no\" ]; then\n    # readlink(1) is not available as standard on Solaris 10.\n    readLink=$(which readlink)\n    if [ ! 
\"$(expr \"$readLink\" : '\\([^ ]*\\)')\" = \"no\" ]; then\n      if $darwin ; then\n        javaHome=\"$(dirname \"\\\"$javaExecutable\\\"\")\"\n        javaExecutable=\"$(cd \"\\\"$javaHome\\\"\" && pwd -P)/javac\"\n      else\n        javaExecutable=\"$(readlink -f \"\\\"$javaExecutable\\\"\")\"\n      fi\n      javaHome=\"$(dirname \"\\\"$javaExecutable\\\"\")\"\n      javaHome=$(expr \"$javaHome\" : '\\(.*\\)/bin')\n      JAVA_HOME=\"$javaHome\"\n      export JAVA_HOME\n    fi\n  fi\nfi\n\nif [ -z \"$JAVACMD\" ] ; then\n  if [ -n \"$JAVA_HOME\"  ] ; then\n    if [ -x \"$JAVA_HOME/jre/sh/java\" ] ; then\n      # IBM's JDK on AIX uses strange locations for the executables\n      JAVACMD=\"$JAVA_HOME/jre/sh/java\"\n    else\n      JAVACMD=\"$JAVA_HOME/bin/java\"\n    fi\n  else\n    JAVACMD=\"$(\\unset -f command 2>/dev/null; \\command -v java)\"\n  fi\nfi\n\nif [ ! -x \"$JAVACMD\" ] ; then\n  echo \"Error: JAVA_HOME is not defined correctly.\" >&2\n  echo \"  We cannot execute $JAVACMD\" >&2\n  exit 1\nfi\n\nif [ -z \"$JAVA_HOME\" ] ; then\n  echo \"Warning: JAVA_HOME environment variable is not set.\"\nfi\n\n# traverses directory structure from process work directory to filesystem root\n# first directory with .mvn subdirectory is considered project base directory\nfind_maven_basedir() {\n  if [ -z \"$1\" ]\n  then\n    echo \"Path not specified to find_maven_basedir\"\n    return 1\n  fi\n\n  basedir=\"$1\"\n  wdir=\"$1\"\n  while [ \"$wdir\" != '/' ] ; do\n    if [ -d \"$wdir\"/.mvn ] ; then\n      basedir=$wdir\n      break\n    fi\n    # workaround for JBEAP-8937 (on Solaris 10/Sparc)\n    if [ -d \"${wdir}\" ]; then\n      wdir=$(cd \"$wdir/..\" || exit 1; pwd)\n    fi\n    # end of workaround\n  done\n  printf '%s' \"$(cd \"$basedir\" || exit 1; pwd)\"\n}\n\n# concatenates all lines of a file\nconcat_lines() {\n  if [ -f \"$1\" ]; then\n    # Remove \\r in case we run on Windows within Git Bash\n    # and check out the repository with auto CRLF management\n    # enabled. 
Otherwise, we may read lines that are delimited with\n    # \\r\\n and produce $'-Xarg\\r' rather than -Xarg due to word\n    # splitting rules.\n    tr -s '\\r\\n' ' ' < \"$1\"\n  fi\n}\n\nlog() {\n  if [ \"$MVNW_VERBOSE\" = true ]; then\n    printf '%s\\n' \"$1\"\n  fi\n}\n\nBASE_DIR=$(find_maven_basedir \"$(dirname \"$0\")\")\nif [ -z \"$BASE_DIR\" ]; then\n  exit 1;\nfi\n\nMAVEN_PROJECTBASEDIR=${MAVEN_BASEDIR:-\"$BASE_DIR\"}; export MAVEN_PROJECTBASEDIR\nlog \"$MAVEN_PROJECTBASEDIR\"\n\n##########################################################################################\n# Extension to allow automatically downloading the maven-wrapper.jar from Maven-central\n# This allows using the maven wrapper in projects that prohibit checking in binary data.\n##########################################################################################\nwrapperJarPath=\"$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar\"\nif [ -r \"$wrapperJarPath\" ]; then\n    log \"Found $wrapperJarPath\"\nelse\n    log \"Couldn't find $wrapperJarPath, downloading it ...\"\n\n    if [ -n \"$MVNW_REPOURL\" ]; then\n      wrapperUrl=\"$MVNW_REPOURL/org/apache/maven/wrapper/maven-wrapper/3.2.0/maven-wrapper-3.2.0.jar\"\n    else\n      wrapperUrl=\"https://repo.maven.apache.org/maven2/org/apache/maven/wrapper/maven-wrapper/3.2.0/maven-wrapper-3.2.0.jar\"\n    fi\n    while IFS=\"=\" read -r key value; do\n      # Remove '\\r' from value to allow usage on windows as IFS does not consider '\\r' as a separator ( considers space, tab, new line ('\\n'), and custom '=' )\n      safeValue=$(echo \"$value\" | tr -d '\\r')\n      case \"$key\" in (wrapperUrl) wrapperUrl=\"$safeValue\"; break ;;\n      esac\n    done < \"$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.properties\"\n    log \"Downloading from: $wrapperUrl\"\n\n    if $cygwin; then\n      wrapperJarPath=$(cygpath --path --windows \"$wrapperJarPath\")\n    fi\n\n    if command -v wget > /dev/null; then\n        log \"Found wget ... using wget\"\n        [ \"$MVNW_VERBOSE\" = true ] && QUIET=\"\" || QUIET=\"--quiet\"\n        if [ -z \"$MVNW_USERNAME\" ] || [ -z \"$MVNW_PASSWORD\" ]; then\n            wget $QUIET \"$wrapperUrl\" -O \"$wrapperJarPath\" || rm -f \"$wrapperJarPath\"\n        else\n            wget $QUIET --http-user=\"$MVNW_USERNAME\" --http-password=\"$MVNW_PASSWORD\" \"$wrapperUrl\" -O \"$wrapperJarPath\" || rm -f \"$wrapperJarPath\"\n        fi\n    elif command -v curl > /dev/null; then\n        log \"Found curl ... using curl\"\n        [ \"$MVNW_VERBOSE\" = true ] && QUIET=\"\" || QUIET=\"--silent\"\n        if [ -z \"$MVNW_USERNAME\" ] || [ -z \"$MVNW_PASSWORD\" ]; then\n            curl $QUIET -o \"$wrapperJarPath\" \"$wrapperUrl\" -f -L || rm -f \"$wrapperJarPath\"\n        else\n            curl $QUIET --user \"$MVNW_USERNAME:$MVNW_PASSWORD\" -o \"$wrapperJarPath\" \"$wrapperUrl\" -f -L || rm -f \"$wrapperJarPath\"\n        fi\n    else\n        log \"Falling back to using Java to download\"\n        javaSource=\"$MAVEN_PROJECTBASEDIR/.mvn/wrapper/MavenWrapperDownloader.java\"\n        javaClass=\"$MAVEN_PROJECTBASEDIR/.mvn/wrapper/MavenWrapperDownloader.class\"\n        # For Cygwin, switch paths to Windows format before running javac\n        if $cygwin; then\n          javaSource=$(cygpath --path --windows \"$javaSource\")\n          javaClass=$(cygpath --path --windows \"$javaClass\")\n        fi\n        if [ -e \"$javaSource\" ]; then\n            if [ ! 
-e \"$javaClass\" ]; then\n                log \" - Compiling MavenWrapperDownloader.java ...\"\n                (\"$JAVA_HOME/bin/javac\" \"$javaSource\")\n            fi\n            if [ -e \"$javaClass\" ]; then\n                log \" - Running MavenWrapperDownloader.java ...\"\n                (\"$JAVA_HOME/bin/java\" -cp .mvn/wrapper MavenWrapperDownloader \"$wrapperUrl\" \"$wrapperJarPath\") || rm -f \"$wrapperJarPath\"\n            fi\n        fi\n    fi\nfi\n##########################################################################################\n# End of extension\n##########################################################################################\n\n# If specified, validate the SHA-256 sum of the Maven wrapper jar file\nwrapperSha256Sum=\"\"\nwhile IFS=\"=\" read -r key value; do\n  case \"$key\" in (wrapperSha256Sum) wrapperSha256Sum=$value; break ;;\n  esac\ndone < \"$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.properties\"\nif [ -n \"$wrapperSha256Sum\" ]; then\n  wrapperSha256Result=false\n  if command -v sha256sum > /dev/null; then\n    if echo \"$wrapperSha256Sum  $wrapperJarPath\" | sha256sum -c > /dev/null 2>&1; then\n      wrapperSha256Result=true\n    fi\n  elif command -v shasum > /dev/null; then\n    if echo \"$wrapperSha256Sum  $wrapperJarPath\" | shasum -a 256 -c > /dev/null 2>&1; then\n      wrapperSha256Result=true\n    fi\n  else\n    echo \"Checksum validation was requested but neither 'sha256sum' or 'shasum' are available.\"\n    echo \"Please install either command, or disable validation by removing 'wrapperSha256Sum' from your maven-wrapper.properties.\"\n    exit 1\n  fi\n  if [ $wrapperSha256Result = false ]; then\n    echo \"Error: Failed to validate Maven wrapper SHA-256, your Maven wrapper might be compromised.\" >&2\n    echo \"Investigate or delete $wrapperJarPath to attempt a clean download.\" >&2\n    echo \"If you updated your Maven version, you need to update the specified wrapperSha256Sum property.\" >&2\n    exit 1\n  fi\nfi\n\nMAVEN_OPTS=\"$(concat_lines \"$MAVEN_PROJECTBASEDIR/.mvn/jvm.config\") $MAVEN_OPTS\"\n\n# For Cygwin, switch paths to Windows format before running java\nif $cygwin; then\n  [ -n \"$JAVA_HOME\" ] &&\n    JAVA_HOME=$(cygpath --path --windows \"$JAVA_HOME\")\n  [ -n \"$CLASSPATH\" ] &&\n    CLASSPATH=$(cygpath --path --windows \"$CLASSPATH\")\n  [ -n \"$MAVEN_PROJECTBASEDIR\" ] &&\n    MAVEN_PROJECTBASEDIR=$(cygpath --path --windows \"$MAVEN_PROJECTBASEDIR\")\nfi\n\n# Provide a \"standardized\" way to retrieve the CLI args that will\n# work with both Windows and non-Windows executions.\nMAVEN_CMD_LINE_ARGS=\"$MAVEN_CONFIG $*\"\nexport MAVEN_CMD_LINE_ARGS\n\nWRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain\n\n# shellcheck disable=SC2086 # safe args\nexec \"$JAVACMD\" \\\n  $MAVEN_OPTS \\\n  $MAVEN_DEBUG_OPTS \\\n  -classpath \"$MAVEN_PROJECTBASEDIR/.mvn/wrapper/maven-wrapper.jar\" \\\n  \"-Dmaven.multiModuleProjectDirectory=${MAVEN_PROJECTBASEDIR}\" \\\n  ${WRAPPER_LAUNCHER} $MAVEN_CONFIG \"$@\"\n"
  },
  {
    "path": "mvnw.cmd",
    "content": "@REM ----------------------------------------------------------------------------\r\n@REM Licensed to the Apache Software Foundation (ASF) under one\r\n@REM or more contributor license agreements.  See the NOTICE file\r\n@REM distributed with this work for additional information\r\n@REM regarding copyright ownership.  The ASF licenses this file\r\n@REM to you under the Apache License, Version 2.0 (the\r\n@REM \"License\"); you may not use this file except in compliance\r\n@REM with the License.  You may obtain a copy of the License at\r\n@REM\r\n@REM    http://www.apache.org/licenses/LICENSE-2.0\r\n@REM\r\n@REM Unless required by applicable law or agreed to in writing,\r\n@REM software distributed under the License is distributed on an\r\n@REM \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\r\n@REM KIND, either express or implied.  See the License for the\r\n@REM specific language governing permissions and limitations\r\n@REM under the License.\r\n@REM ----------------------------------------------------------------------------\r\n\r\n@REM ----------------------------------------------------------------------------\r\n@REM Apache Maven Wrapper startup batch script, version 3.2.0\r\n@REM\r\n@REM Required ENV vars:\r\n@REM JAVA_HOME - location of a JDK home dir\r\n@REM\r\n@REM Optional ENV vars\r\n@REM MAVEN_BATCH_ECHO - set to 'on' to enable the echoing of the batch commands\r\n@REM MAVEN_BATCH_PAUSE - set to 'on' to wait for a keystroke before ending\r\n@REM MAVEN_OPTS - parameters passed to the Java VM when running Maven\r\n@REM     e.g. to debug Maven itself, use\r\n@REM set MAVEN_OPTS=-Xdebug -Xrunjdwp:transport=dt_socket,server=y,suspend=y,address=8000\r\n@REM MAVEN_SKIP_RC - flag to disable loading of mavenrc files\r\n@REM ----------------------------------------------------------------------------\r\n\r\n@REM Begin all REM lines with '@' in case MAVEN_BATCH_ECHO is 'on'\r\n@echo off\r\n@REM set title of command window\r\ntitle %0\r\n@REM enable echoing by setting MAVEN_BATCH_ECHO to 'on'\r\n@if \"%MAVEN_BATCH_ECHO%\" == \"on\"  echo %MAVEN_BATCH_ECHO%\r\n\r\n@REM set %HOME% to equivalent of $HOME\r\nif \"%HOME%\" == \"\" (set \"HOME=%HOMEDRIVE%%HOMEPATH%\")\r\n\r\n@REM Execute a user defined script before this one\r\nif not \"%MAVEN_SKIP_RC%\" == \"\" goto skipRcPre\r\n@REM check for pre script, once with legacy .bat ending and once with .cmd ending\r\nif exist \"%USERPROFILE%\\mavenrc_pre.bat\" call \"%USERPROFILE%\\mavenrc_pre.bat\" %*\r\nif exist \"%USERPROFILE%\\mavenrc_pre.cmd\" call \"%USERPROFILE%\\mavenrc_pre.cmd\" %*\r\n:skipRcPre\r\n\r\n@setlocal\r\n\r\nset ERROR_CODE=0\r\n\r\n@REM To isolate internal variables from possible post scripts, we use another setlocal\r\n@setlocal\r\n\r\n@REM ==== START VALIDATION ====\r\nif not \"%JAVA_HOME%\" == \"\" goto OkJHome\r\n\r\necho.\r\necho Error: JAVA_HOME not found in your environment. >&2\r\necho Please set the JAVA_HOME variable in your environment to match the >&2\r\necho location of your Java installation. >&2\r\necho.\r\ngoto error\r\n\r\n:OkJHome\r\nif exist \"%JAVA_HOME%\\bin\\java.exe\" goto init\r\n\r\necho.\r\necho Error: JAVA_HOME is set to an invalid directory. >&2\r\necho JAVA_HOME = \"%JAVA_HOME%\" >&2\r\necho Please set the JAVA_HOME variable in your environment to match the >&2\r\necho location of your Java installation. >&2\r\necho.\r\ngoto error\r\n\r\n@REM ==== END VALIDATION ====\r\n\r\n:init\r\n\r\n@REM Find the project base dir, i.e. 
the directory that contains the folder \".mvn\".\r\n@REM Fallback to current working directory if not found.\r\n\r\nset MAVEN_PROJECTBASEDIR=%MAVEN_BASEDIR%\r\nIF NOT \"%MAVEN_PROJECTBASEDIR%\"==\"\" goto endDetectBaseDir\r\n\r\nset EXEC_DIR=%CD%\r\nset WDIR=%EXEC_DIR%\r\n:findBaseDir\r\nIF EXIST \"%WDIR%\"\\.mvn goto baseDirFound\r\ncd ..\r\nIF \"%WDIR%\"==\"%CD%\" goto baseDirNotFound\r\nset WDIR=%CD%\r\ngoto findBaseDir\r\n\r\n:baseDirFound\r\nset MAVEN_PROJECTBASEDIR=%WDIR%\r\ncd \"%EXEC_DIR%\"\r\ngoto endDetectBaseDir\r\n\r\n:baseDirNotFound\r\nset MAVEN_PROJECTBASEDIR=%EXEC_DIR%\r\ncd \"%EXEC_DIR%\"\r\n\r\n:endDetectBaseDir\r\n\r\nIF NOT EXIST \"%MAVEN_PROJECTBASEDIR%\\.mvn\\jvm.config\" goto endReadAdditionalConfig\r\n\r\n@setlocal EnableExtensions EnableDelayedExpansion\r\nfor /F \"usebackq delims=\" %%a in (\"%MAVEN_PROJECTBASEDIR%\\.mvn\\jvm.config\") do set JVM_CONFIG_MAVEN_PROPS=!JVM_CONFIG_MAVEN_PROPS! %%a\r\n@endlocal & set JVM_CONFIG_MAVEN_PROPS=%JVM_CONFIG_MAVEN_PROPS%\r\n\r\n:endReadAdditionalConfig\r\n\r\nSET MAVEN_JAVA_EXE=\"%JAVA_HOME%\\bin\\java.exe\"\r\nset WRAPPER_JAR=\"%MAVEN_PROJECTBASEDIR%\\.mvn\\wrapper\\maven-wrapper.jar\"\r\nset WRAPPER_LAUNCHER=org.apache.maven.wrapper.MavenWrapperMain\r\n\r\nset WRAPPER_URL=\"https://repo.maven.apache.org/maven2/org/apache/maven/wrapper/maven-wrapper/3.2.0/maven-wrapper-3.2.0.jar\"\r\n\r\nFOR /F \"usebackq tokens=1,2 delims==\" %%A IN (\"%MAVEN_PROJECTBASEDIR%\\.mvn\\wrapper\\maven-wrapper.properties\") DO (\r\n    IF \"%%A\"==\"wrapperUrl\" SET WRAPPER_URL=%%B\r\n)\r\n\r\n@REM Extension to allow automatically downloading the maven-wrapper.jar from Maven-central\r\n@REM This allows using the maven wrapper in projects that prohibit checking in binary data.\r\nif exist %WRAPPER_JAR% (\r\n    if \"%MVNW_VERBOSE%\" == \"true\" (\r\n        echo Found %WRAPPER_JAR%\r\n    )\r\n) else (\r\n    if not \"%MVNW_REPOURL%\" == \"\" (\r\n        SET WRAPPER_URL=\"%MVNW_REPOURL%/org/apache/maven/wrapper/maven-wrapper/3.2.0/maven-wrapper-3.2.0.jar\"\r\n    )\r\n    if \"%MVNW_VERBOSE%\" == \"true\" (\r\n        echo Couldn't find %WRAPPER_JAR%, downloading it ...\r\n        echo Downloading from: %WRAPPER_URL%\r\n    )\r\n\r\n    powershell -Command \"&{\"^\r\n\t\t\"$webclient = new-object System.Net.WebClient;\"^\r\n\t\t\"if (-not ([string]::IsNullOrEmpty('%MVNW_USERNAME%') -and [string]::IsNullOrEmpty('%MVNW_PASSWORD%'))) {\"^\r\n\t\t\"$webclient.Credentials = new-object System.Net.NetworkCredential('%MVNW_USERNAME%', '%MVNW_PASSWORD%');\"^\r\n\t\t\"}\"^\r\n\t\t\"[Net.ServicePointManager]::SecurityProtocol = [Net.SecurityProtocolType]::Tls12; $webclient.DownloadFile('%WRAPPER_URL%', '%WRAPPER_JAR%')\"^\r\n\t\t\"}\"\r\n    if \"%MVNW_VERBOSE%\" == \"true\" (\r\n        echo Finished downloading %WRAPPER_JAR%\r\n    )\r\n)\r\n@REM End of extension\r\n\r\n@REM If specified, validate the SHA-256 sum of the Maven wrapper jar file\r\nSET WRAPPER_SHA_256_SUM=\"\"\r\nFOR /F \"usebackq tokens=1,2 delims==\" %%A IN (\"%MAVEN_PROJECTBASEDIR%\\.mvn\\wrapper\\maven-wrapper.properties\") DO (\r\n    IF \"%%A\"==\"wrapperSha256Sum\" SET WRAPPER_SHA_256_SUM=%%B\r\n)\r\nIF NOT %WRAPPER_SHA_256_SUM%==\"\" (\r\n    powershell -Command \"&{\"^\r\n       \"$hash = (Get-FileHash \\\"%WRAPPER_JAR%\\\" -Algorithm SHA256).Hash.ToLower();\"^\r\n       \"If('%WRAPPER_SHA_256_SUM%' -ne $hash){\"^\r\n       \"  Write-Output 'Error: Failed to validate Maven wrapper SHA-256, your Maven wrapper might be compromised.';\"^\r\n       \"  Write-Output 'Investigate or 
delete %WRAPPER_JAR% to attempt a clean download.';\"^\r\n       \"  Write-Output 'If you updated your Maven version, you need to update the specified wrapperSha256Sum property.';\"^\r\n       \"  exit 1;\"^\r\n       \"}\"^\r\n       \"}\"\r\n    if ERRORLEVEL 1 goto error\r\n)\r\n\r\n@REM Provide a \"standardized\" way to retrieve the CLI args that will\r\n@REM work with both Windows and non-Windows executions.\r\nset MAVEN_CMD_LINE_ARGS=%*\r\n\r\n%MAVEN_JAVA_EXE% ^\r\n  %JVM_CONFIG_MAVEN_PROPS% ^\r\n  %MAVEN_OPTS% ^\r\n  %MAVEN_DEBUG_OPTS% ^\r\n  -classpath %WRAPPER_JAR% ^\r\n  \"-Dmaven.multiModuleProjectDirectory=%MAVEN_PROJECTBASEDIR%\" ^\r\n  %WRAPPER_LAUNCHER% %MAVEN_CONFIG% %*\r\nif ERRORLEVEL 1 goto error\r\ngoto end\r\n\r\n:error\r\nset ERROR_CODE=1\r\n\r\n:end\r\n@endlocal & set ERROR_CODE=%ERROR_CODE%\r\n\r\nif not \"%MAVEN_SKIP_RC%\"==\"\" goto skipRcPost\r\n@REM check for post script, once with legacy .bat ending and once with .cmd ending\r\nif exist \"%USERPROFILE%\\mavenrc_post.bat\" call \"%USERPROFILE%\\mavenrc_post.bat\"\r\nif exist \"%USERPROFILE%\\mavenrc_post.cmd\" call \"%USERPROFILE%\\mavenrc_post.cmd\"\r\n:skipRcPost\r\n\r\n@REM pause the script if MAVEN_BATCH_PAUSE is set to 'on'\r\nif \"%MAVEN_BATCH_PAUSE%\"==\"on\" pause\r\n\r\nif \"%MAVEN_TERMINATE_CMD%\"==\"on\" exit %ERROR_CODE%\r\n\r\ncmd /C exit /B %ERROR_CODE%\r\n"
  },
  {
    "path": "native/Cargo.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[workspace]\ndefault-members = [\"core\", \"spark-expr\", \"common\", \"proto\", \"jni-bridge\", \"shuffle\"]\nmembers = [\"core\", \"spark-expr\", \"common\", \"proto\", \"jni-bridge\", \"shuffle\", \"hdfs\", \"fs-hdfs\"]\nresolver = \"2\"\n\n[workspace.package]\nversion = \"0.15.0\"\nhomepage = \"https://datafusion.apache.org/comet\"\nrepository = \"https://github.com/apache/datafusion-comet\"\nauthors = [\"Apache DataFusion <dev@datafusion.apache.org>\"]\ndescription = \"Apache DataFusion Comet: High performance accelerator for Apache Spark\"\nreadme = \"README.md\"\nlicense = \"Apache-2.0\"\nedition = \"2021\"\n\n# Comet uses the same minimum Rust version as DataFusion\nrust-version = \"1.88\"\n\n[workspace.dependencies]\narrow = { version = \"58.1.0\", features = [\"prettyprint\", \"ffi\", \"chrono-tz\"] }\nasync-trait = { version = \"0.1\" }\nbytes = { version = \"1.11.1\" }\nparquet = { version = \"58.1.0\", default-features = false, features = [\"experimental\"] }\ndatafusion = { version = \"53.1.0\", default-features = false, features = [\"unicode_expressions\", \"crypto_expressions\", \"nested_expressions\", \"parquet\"] }\ndatafusion-datasource = { version = \"53.1.0\" }\ndatafusion-physical-expr-adapter = { version = \"53.1.0\" }\ndatafusion-spark = { version = \"53.1.0\", features = [\"core\"] }\ndatafusion-comet-spark-expr = { path = \"spark-expr\" }\ndatafusion-comet-common = { path = \"common\" }\ndatafusion-comet-jni-bridge = { path = \"jni-bridge\" }\ndatafusion-comet-proto = { path = \"proto\" }\ndatafusion-comet-shuffle = { path = \"shuffle\" }\nchrono = { version = \"0.4\", default-features = false, features = [\"clock\"] }\nchrono-tz = { version = \"0.10\" }\nfutures = \"0.3.32\"\nnum = \"0.4\"\nrand = \"0.10\"\nregex = \"1.12.3\"\nthiserror = \"2\"\nobject_store = { version = \"0.13.1\", features = [\"gcp\", \"azure\", \"aws\", \"http\"] }\nurl = \"2.2\"\naws-config = \"1.8.14\"\naws-credential-types = \"1.2.13\"\niceberg = { git = \"https://github.com/apache/iceberg-rust\", rev = \"a2f067d\" }\niceberg-storage-opendal = { git = \"https://github.com/apache/iceberg-rust\", rev = \"a2f067d\", features = [\"opendal-all\"] }\n\n[profile.release]\ndebug = true\noverflow-checks = false\nlto = \"thin\"\ncodegen-units = 1\nstrip = \"debuginfo\"\n\n# CI profile: faster compilation, same overflow behavior as release\n# Use with: cargo build --profile ci\n[profile.ci]\ninherits = \"release\"\nlto = false          # Skip LTO for faster linking\ncodegen-units = 16   # Parallel codegen (faster compile, slightly larger binary)\ndebug-assertions = true\npanic = \"unwind\"     # Allow panics to be caught and logged across FFI boundary\n# overflow-checks inherited as false from 
release\n"
  },
  {
    "path": "native/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Apache DataFusion Comet Native Code\n\nThis project contains the following crates:\n\n- [core](core): Native code used by the Comet Spark plugin\n- [proto](proto): Comet protocol buffer definition for query plans\n- [spark-expr](spark-expr): Spark-compatible DataFusion operators and expressions\n"
  },
  {
    "path": "native/common/Cargo.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[package]\nname = \"datafusion-comet-common\"\ndescription = \"Apache DataFusion Comet: common types shared across crates\"\nversion = { workspace = true }\nhomepage = { workspace = true }\nrepository = { workspace = true }\nauthors = { workspace = true }\nreadme = { workspace = true }\nlicense = { workspace = true }\nedition = { workspace = true }\n\npublish = false\n\n[dependencies]\narrow = { workspace = true }\ndatafusion = { workspace = true }\nserde = { version = \"1.0\", features = [\"derive\"] }\nserde_json = \"1.0\"\nthiserror = { workspace = true }\n\n[lib]\nname = \"datafusion_comet_common\"\npath = \"src/lib.rs\"\n\n[[bin]]\nname = \"analyze_trace\"\npath = \"src/bin/analyze_trace.rs\"\n"
  },
  {
    "path": "native/common/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# datafusion-comet-common: Common Types\n\nThis crate provides common types shared across Apache DataFusion Comet crates and is maintained as part of the\n[Apache DataFusion Comet] subproject.\n\n[Apache DataFusion Comet]: https://github.com/apache/datafusion-comet/\n"
  },
  {
    "path": "native/common/src/bin/analyze_trace.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Analyzes a Comet chrome trace event log (`comet-event-trace.json`) and\n//! compares jemalloc usage against the sum of per-thread Comet memory pool\n//! reservations. Reports any points where jemalloc exceeds the total pool size.\n//!\n//! Usage:\n//!   cargo run --bin analyze_trace -- <path-to-comet-event-trace.json>\n\nuse serde::Deserialize;\nuse std::collections::HashMap;\nuse std::io::{BufRead, BufReader};\nuse std::{env, fs::File};\n\n/// A single Chrome trace event (only the fields we care about).\n#[derive(Deserialize)]\nstruct TraceEvent {\n    name: String,\n    ph: String,\n    #[allow(dead_code)]\n    tid: u64,\n    ts: u64,\n    #[serde(default)]\n    args: HashMap<String, serde_json::Value>,\n}\n\n/// Snapshot of memory state at a given timestamp.\nstruct MemorySnapshot {\n    ts: u64,\n    jemalloc: u64,\n    pool_total: u64,\n}\n\nfn format_bytes(bytes: u64) -> String {\n    const MB: f64 = 1024.0 * 1024.0;\n    format!(\"{:.1} MB\", bytes as f64 / MB)\n}\n\nfn main() {\n    let args: Vec<String> = env::args().collect();\n    if args.len() != 2 {\n        eprintln!(\"Usage: analyze_trace <path-to-comet-event-trace.json>\");\n        std::process::exit(1);\n    }\n\n    let file = File::open(&args[1]).expect(\"Failed to open trace file\");\n    let reader = BufReader::new(file);\n\n    // Latest jemalloc value (global, not per-thread)\n    let mut latest_jemalloc: u64 = 0;\n    // Per-thread pool reservations: thread_NNN -> bytes\n    let mut pool_by_thread: HashMap<String, u64> = HashMap::new();\n    // Points where jemalloc exceeded pool total\n    let mut violations: Vec<MemorySnapshot> = Vec::new();\n    // Track peak values\n    let mut peak_jemalloc: u64 = 0;\n    let mut peak_pool_total: u64 = 0;\n    let mut peak_excess: u64 = 0;\n    let mut counter_events: u64 = 0;\n\n    // Each line is one JSON event, possibly with a trailing comma.\n    // The file starts with \"[ \" on the first event line or as a prefix.\n    for line in reader.lines() {\n        let line = line.expect(\"Failed to read line\");\n        let trimmed = line.trim();\n\n        // Skip empty lines or bare array brackets\n        if trimmed.is_empty() || trimmed == \"[\" || trimmed == \"]\" {\n            continue;\n        }\n\n        // Strip leading \"[ \" (first event) and trailing comma\n        let json_str = trimmed\n            .trim_start_matches(\"[ \")\n            .trim_start_matches('[')\n            .trim_end_matches(',');\n\n        if json_str.is_empty() {\n            continue;\n        }\n\n        // Only parse counter events (they contain \"\\\"ph\\\": \\\"C\\\"\")\n        if !json_str.contains(\"\\\"ph\\\": \\\"C\\\"\") {\n            continue;\n    
    }\n\n        let event: TraceEvent = match serde_json::from_str(json_str) {\n            Ok(e) => e,\n            Err(_) => continue,\n        };\n\n        if event.ph != \"C\" {\n            continue;\n        }\n\n        counter_events += 1;\n\n        if event.name == \"jemalloc_allocated\" {\n            if let Some(val) = event.args.get(\"jemalloc_allocated\") {\n                latest_jemalloc = val.as_u64().unwrap_or(0);\n                if latest_jemalloc > peak_jemalloc {\n                    peak_jemalloc = latest_jemalloc;\n                }\n            }\n        } else if event.name.contains(\"comet_memory_reserved\") {\n            // Name format: thread_NNN_comet_memory_reserved\n            let thread_key = event.name.clone();\n            if let Some(val) = event.args.get(&event.name) {\n                let bytes = val.as_u64().unwrap_or(0);\n                pool_by_thread.insert(thread_key, bytes);\n            }\n        } else {\n            // Skip jvm_heap_used and other counters\n            continue;\n        }\n\n        // After each jemalloc or pool update, check the current state\n        let pool_total: u64 = pool_by_thread.values().sum();\n        if pool_total > peak_pool_total {\n            peak_pool_total = pool_total;\n        }\n\n        if latest_jemalloc > 0 && pool_total > 0 && latest_jemalloc > pool_total {\n            let excess = latest_jemalloc - pool_total;\n            if excess > peak_excess {\n                peak_excess = excess;\n            }\n            // Record violation (sample - don't record every single one)\n            if violations.is_empty()\n                || event.ts.saturating_sub(violations.last().unwrap().ts) > 1_000_000\n                || excess == peak_excess\n            {\n                violations.push(MemorySnapshot {\n                    ts: event.ts,\n                    jemalloc: latest_jemalloc,\n                    pool_total,\n                });\n            }\n        }\n    }\n\n    // Print summary\n    println!(\"=== Comet Trace Memory Analysis ===\\n\");\n    println!(\"Counter events parsed: {counter_events}\");\n    println!(\"Threads with memory pools: {}\", pool_by_thread.len());\n    println!(\"Peak jemalloc allocated:   {}\", format_bytes(peak_jemalloc));\n    println!(\n        \"Peak pool total:           {}\",\n        format_bytes(peak_pool_total)\n    );\n    println!(\n        \"Peak excess (jemalloc - pool): {}\",\n        format_bytes(peak_excess)\n    );\n    println!();\n\n    if violations.is_empty() {\n        println!(\"OK: jemalloc never exceeded the total pool reservation.\");\n    } else {\n        println!(\n            \"WARNING: jemalloc exceeded pool reservation at {} sampled points:\\n\",\n            violations.len()\n        );\n        println!(\n            \"{:>14}  {:>14}  {:>14}  {:>14}\",\n            \"Time (us)\", \"jemalloc\", \"pool_total\", \"excess\"\n        );\n        println!(\"{}\", \"-\".repeat(62));\n        for snap in &violations {\n            let excess = snap.jemalloc - snap.pool_total;\n            println!(\n                \"{:>14}  {:>14}  {:>14}  {:>14}\",\n                snap.ts,\n                format_bytes(snap.jemalloc),\n                format_bytes(snap.pool_total),\n                format_bytes(excess),\n            );\n        }\n    }\n\n    // Show final per-thread pool state\n    println!(\"\\n--- Final per-thread pool reservations ---\\n\");\n    let mut threads: Vec<_> = pool_by_thread.iter().collect();\n    
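// Sort by thread name so the report order is deterministic across runs.\n    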
threads.sort_by_key(|(k, _)| (*k).clone());\n    for (thread, bytes) in &threads {\n        println!(\"  {thread}: {}\", format_bytes(**bytes));\n    }\n    println!(\"\\n  Total: {}\", format_bytes(pool_by_thread.values().sum()));\n}\n"
  },
  {
    "path": "native/common/src/error.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::error::ArrowError;\nuse datafusion::common::DataFusionError;\nuse std::sync::Arc;\n\n#[derive(thiserror::Error, Debug, Clone)]\npub enum SparkError {\n    // This list was generated from the Spark code. Many of the exceptions are not yet used by Comet\n    #[error(\"[CAST_INVALID_INPUT] The value '{value}' of the type \\\"{from_type}\\\" cannot be cast to \\\"{to_type}\\\" \\\n        because it is malformed. Correct the value as per the syntax, or change its target type. \\\n        Use `try_cast` to tolerate malformed input and return NULL instead. If necessary \\\n        set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    CastInvalidValue {\n        value: String,\n        from_type: String,\n        to_type: String,\n    },\n\n    /// Like CastInvalidValue but maps to SparkDateTimeException instead of SparkNumberFormatException.\n    /// Used for string → timestamp/date cast failures.\n    #[error(\"[CAST_INVALID_INPUT] The value '{value}' of the type \\\"{from_type}\\\" cannot be cast to \\\"{to_type}\\\" \\\n        because it is malformed. Correct the value as per the syntax, or change its target type. \\\n        Use `try_cast` to tolerate malformed input and return NULL instead. If necessary \\\n        set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    InvalidInputInCastToDatetime {\n        value: String,\n        from_type: String,\n        to_type: String,\n    },\n\n    #[error(\"[NUMERIC_VALUE_OUT_OF_RANGE.WITH_SUGGESTION] {value} cannot be represented as Decimal({precision}, {scale}). If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error, and return NULL instead.\")]\n    NumericValueOutOfRange {\n        value: String,\n        precision: u8,\n        scale: i8,\n    },\n\n    #[error(\"[NUMERIC_OUT_OF_SUPPORTED_RANGE] The value {value} cannot be interpreted as a numeric since it has more than 38 digits.\")]\n    NumericOutOfRange { value: String },\n\n    #[error(\"[CAST_OVERFLOW] The value {value} of the type \\\"{from_type}\\\" cannot be cast to \\\"{to_type}\\\" \\\n        due to an overflow. Use `try_cast` to tolerate overflow and return NULL instead. If necessary \\\n        set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    CastOverFlow {\n        value: String,\n        from_type: String,\n        to_type: String,\n    },\n\n    #[error(\"[CANNOT_PARSE_DECIMAL] Cannot parse decimal.\")]\n    CannotParseDecimal,\n\n    #[error(\"[ARITHMETIC_OVERFLOW] {from_type} overflow. 
If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    ArithmeticOverflow { from_type: String },\n\n    #[error(\"[ARITHMETIC_OVERFLOW] Overflow in integral divide. Use `try_divide` to tolerate overflow and return NULL instead. If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    IntegralDivideOverflow,\n\n    #[error(\"[ARITHMETIC_OVERFLOW] Overflow in sum of decimals. If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    DecimalSumOverflow,\n\n    #[error(\"[DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead. If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    DivideByZero,\n\n    #[error(\"[REMAINDER_BY_ZERO] Division by zero. Use `try_remainder` to tolerate divisor being 0 and return NULL instead. If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    RemainderByZero,\n\n    #[error(\"[INTERVAL_DIVIDED_BY_ZERO] Divide by zero in interval arithmetic.\")]\n    IntervalDividedByZero,\n\n    #[error(\"[BINARY_ARITHMETIC_OVERFLOW] {value1} {symbol} {value2} caused overflow. Use `{function_name}` to tolerate overflow and return NULL instead. If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    BinaryArithmeticOverflow {\n        value1: String,\n        symbol: String,\n        value2: String,\n        function_name: String,\n    },\n\n    #[error(\"[INTERVAL_ARITHMETIC_OVERFLOW.WITH_SUGGESTION] Interval arithmetic overflow. Use `{function_name}` to tolerate overflow and return NULL instead. If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    IntervalArithmeticOverflowWithSuggestion { function_name: String },\n\n    #[error(\"[INTERVAL_ARITHMETIC_OVERFLOW.WITHOUT_SUGGESTION] Interval arithmetic overflow. If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    IntervalArithmeticOverflowWithoutSuggestion,\n\n    #[error(\"[DATETIME_OVERFLOW] Datetime arithmetic overflow.\")]\n    DatetimeOverflow,\n\n    #[error(\"[INVALID_ARRAY_INDEX] The index {index_value} is out of bounds. The array has {array_size} elements. Use the SQL function get() to tolerate accessing element at invalid index and return NULL instead. If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    InvalidArrayIndex { index_value: i32, array_size: i32 },\n\n    #[error(\"[INVALID_ARRAY_INDEX_IN_ELEMENT_AT] The index {index_value} is out of bounds. The array has {array_size} elements. Use try_element_at to tolerate accessing element at invalid index and return NULL instead. If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    InvalidElementAtIndex { index_value: i32, array_size: i32 },\n\n    #[error(\"[INVALID_BITMAP_POSITION] The bit position {bit_position} is out of bounds. The bitmap has {bitmap_num_bytes} bytes ({bitmap_num_bits} bits).\")]\n    InvalidBitmapPosition {\n        bit_position: i64,\n        bitmap_num_bytes: i64,\n        bitmap_num_bits: i64,\n    },\n\n    #[error(\"[INVALID_INDEX_OF_ZERO] The index 0 is invalid. 
An index shall be either < 0 or > 0 (the first element has index 1).\")]\n    InvalidIndexOfZero,\n\n    #[error(\"[DUPLICATED_MAP_KEY] Cannot create map with duplicate keys: {key}.\")]\n    DuplicatedMapKey { key: String },\n\n    #[error(\"[NULL_MAP_KEY] Cannot use null as map key.\")]\n    NullMapKey,\n\n    #[error(\"[MAP_KEY_VALUE_DIFF_SIZES] The key array and value array of a map must have the same length.\")]\n    MapKeyValueDiffSizes,\n\n    #[error(\"[EXCEED_LIMIT_LENGTH] Cannot create a map with {size} elements which exceeds the limit {max_size}.\")]\n    ExceedMapSizeLimit { size: i32, max_size: i32 },\n\n    #[error(\"[COLLECTION_SIZE_LIMIT_EXCEEDED] Cannot create array with {num_elements} elements which exceeds the limit {max_elements}.\")]\n    CollectionSizeLimitExceeded {\n        num_elements: i64,\n        max_elements: i64,\n    },\n\n    #[error(\"[NOT_NULL_ASSERT_VIOLATION] The field `{field_name}` cannot be null.\")]\n    NotNullAssertViolation { field_name: String },\n\n    #[error(\"[VALUE_IS_NULL] The value of field `{field_name}` at row {row_index} is null.\")]\n    ValueIsNull { field_name: String, row_index: i32 },\n\n    #[error(\"[CANNOT_PARSE_TIMESTAMP] Cannot parse timestamp: {message}. Try using `{suggested_func}` instead.\")]\n    CannotParseTimestamp {\n        message: String,\n        suggested_func: String,\n    },\n\n    #[error(\"[INVALID_FRACTION_OF_SECOND] The fraction of second {value} is invalid. Valid values are in the range [0, 60]. If necessary set \\\"spark.sql.ansi.enabled\\\" to \\\"false\\\" to bypass this error.\")]\n    InvalidFractionOfSecond { value: f64 },\n\n    #[error(\"[INVALID_UTF8_STRING] Invalid UTF-8 string: {hex_string}.\")]\n    InvalidUtf8String { hex_string: String },\n\n    #[error(\"[UNEXPECTED_POSITIVE_VALUE] The {parameter_name} parameter must be less than or equal to 0. The actual value is {actual_value}.\")]\n    UnexpectedPositiveValue {\n        parameter_name: String,\n        actual_value: i32,\n    },\n\n    #[error(\"[UNEXPECTED_NEGATIVE_VALUE] The {parameter_name} parameter must be greater than or equal to 0. The actual value is {actual_value}.\")]\n    UnexpectedNegativeValue {\n        parameter_name: String,\n        actual_value: i32,\n    },\n\n    #[error(\"[INVALID_PARAMETER_VALUE] Invalid regex group index {group_index} in function `{function_name}`. 
Group count is {group_count}.\")]\n    InvalidRegexGroupIndex {\n        function_name: String,\n        group_count: i32,\n        group_index: i32,\n    },\n\n    #[error(\"[DATATYPE_CANNOT_ORDER] Cannot order by type: {data_type}.\")]\n    DatatypeCannotOrder { data_type: String },\n\n    #[error(\"[SCALAR_SUBQUERY_TOO_MANY_ROWS] Scalar subquery returned more than one row.\")]\n    ScalarSubqueryTooManyRows,\n\n    #[error(\"{message}\")]\n    FileNotFound { message: String },\n\n    #[error(\"[_LEGACY_ERROR_TEMP_2093] Found duplicate field(s) \\\"{required_field_name}\\\": [{matched_fields}] in case-insensitive mode\")]\n    DuplicateFieldCaseInsensitive {\n        required_field_name: String,\n        matched_fields: String,\n    },\n\n    #[error(\"ArrowError: {0}.\")]\n    Arrow(Arc<ArrowError>),\n\n    #[error(\"InternalError: {0}.\")]\n    Internal(String),\n}\n\nimpl SparkError {\n    /// Serialize this error to JSON format for JNI transfer\n    pub fn to_json(&self) -> String {\n        let error_class = self.error_class().unwrap_or(\"\");\n\n        // Create a JSON structure with errorType, errorClass, and params\n        match serde_json::to_string(&serde_json::json!({\n            \"errorType\": self.error_type_name(),\n            \"errorClass\": error_class,\n            \"params\": self.params_as_json(),\n        })) {\n            Ok(json) => json,\n            Err(e) => {\n                // Fallback if serialization fails\n                format!(\n                    \"{{\\\"errorType\\\":\\\"SerializationError\\\",\\\"message\\\":\\\"{}\\\"}}\",\n                    e\n                )\n            }\n        }\n    }\n\n    /// Get the error type name for JSON serialization\n    pub(crate) fn error_type_name(&self) -> &'static str {\n        match self {\n            SparkError::CastInvalidValue { .. } => \"CastInvalidValue\",\n            SparkError::InvalidInputInCastToDatetime { .. } => \"InvalidInputInCastToDatetime\",\n            SparkError::NumericValueOutOfRange { .. } => \"NumericValueOutOfRange\",\n            SparkError::NumericOutOfRange { .. } => \"NumericOutOfRange\",\n            SparkError::CastOverFlow { .. } => \"CastOverFlow\",\n            SparkError::CannotParseDecimal => \"CannotParseDecimal\",\n            SparkError::ArithmeticOverflow { .. } => \"ArithmeticOverflow\",\n            SparkError::IntegralDivideOverflow => \"IntegralDivideOverflow\",\n            SparkError::DecimalSumOverflow => \"DecimalSumOverflow\",\n            SparkError::DivideByZero => \"DivideByZero\",\n            SparkError::RemainderByZero => \"RemainderByZero\",\n            SparkError::IntervalDividedByZero => \"IntervalDividedByZero\",\n            SparkError::BinaryArithmeticOverflow { .. } => \"BinaryArithmeticOverflow\",\n            SparkError::IntervalArithmeticOverflowWithSuggestion { .. } => {\n                \"IntervalArithmeticOverflowWithSuggestion\"\n            }\n            SparkError::IntervalArithmeticOverflowWithoutSuggestion => {\n                \"IntervalArithmeticOverflowWithoutSuggestion\"\n            }\n            SparkError::DatetimeOverflow => \"DatetimeOverflow\",\n            SparkError::InvalidArrayIndex { .. } => \"InvalidArrayIndex\",\n            SparkError::InvalidElementAtIndex { .. } => \"InvalidElementAtIndex\",\n            SparkError::InvalidBitmapPosition { .. } => \"InvalidBitmapPosition\",\n            SparkError::InvalidIndexOfZero => \"InvalidIndexOfZero\",\n            SparkError::DuplicatedMapKey { .. 
} => \"DuplicatedMapKey\",\n            SparkError::NullMapKey => \"NullMapKey\",\n            SparkError::MapKeyValueDiffSizes => \"MapKeyValueDiffSizes\",\n            SparkError::ExceedMapSizeLimit { .. } => \"ExceedMapSizeLimit\",\n            SparkError::CollectionSizeLimitExceeded { .. } => \"CollectionSizeLimitExceeded\",\n            SparkError::NotNullAssertViolation { .. } => \"NotNullAssertViolation\",\n            SparkError::ValueIsNull { .. } => \"ValueIsNull\",\n            SparkError::CannotParseTimestamp { .. } => \"CannotParseTimestamp\",\n            SparkError::InvalidFractionOfSecond { .. } => \"InvalidFractionOfSecond\",\n            SparkError::InvalidUtf8String { .. } => \"InvalidUtf8String\",\n            SparkError::UnexpectedPositiveValue { .. } => \"UnexpectedPositiveValue\",\n            SparkError::UnexpectedNegativeValue { .. } => \"UnexpectedNegativeValue\",\n            SparkError::InvalidRegexGroupIndex { .. } => \"InvalidRegexGroupIndex\",\n            SparkError::DatatypeCannotOrder { .. } => \"DatatypeCannotOrder\",\n            SparkError::ScalarSubqueryTooManyRows => \"ScalarSubqueryTooManyRows\",\n            SparkError::FileNotFound { .. } => \"FileNotFound\",\n            SparkError::DuplicateFieldCaseInsensitive { .. } => \"DuplicateFieldCaseInsensitive\",\n            SparkError::Arrow(_) => \"Arrow\",\n            SparkError::Internal(_) => \"Internal\",\n        }\n    }\n\n    /// Extract parameters as JSON value\n    pub(crate) fn params_as_json(&self) -> serde_json::Value {\n        match self {\n            SparkError::CastInvalidValue {\n                value,\n                from_type,\n                to_type,\n            } => {\n                serde_json::json!({\n                    \"value\": value,\n                    \"fromType\": from_type,\n                    \"toType\": to_type,\n                })\n            }\n            SparkError::InvalidInputInCastToDatetime {\n                value,\n                from_type,\n                to_type,\n            } => {\n                serde_json::json!({\n                    \"value\": value,\n                    \"fromType\": from_type,\n                    \"toType\": to_type,\n                })\n            }\n            SparkError::NumericValueOutOfRange {\n                value,\n                precision,\n                scale,\n            } => {\n                serde_json::json!({\n                    \"value\": value,\n                    \"precision\": precision,\n                    \"scale\": scale,\n                })\n            }\n            SparkError::NumericOutOfRange { value } => {\n                serde_json::json!({\n                    \"value\": value,\n                })\n            }\n            SparkError::CastOverFlow {\n                value,\n                from_type,\n                to_type,\n            } => {\n                serde_json::json!({\n                    \"value\": value,\n                    \"fromType\": from_type,\n                    \"toType\": to_type,\n                })\n            }\n            SparkError::ArithmeticOverflow { from_type } => {\n                serde_json::json!({\n                    \"fromType\": from_type,\n                })\n            }\n            SparkError::BinaryArithmeticOverflow {\n                value1,\n                symbol,\n                value2,\n                function_name,\n            } => {\n                serde_json::json!({\n                    \"value1\": value1,\n   
                 \"symbol\": symbol,\n                    \"value2\": value2,\n                    \"functionName\": function_name,\n                })\n            }\n            SparkError::IntervalArithmeticOverflowWithSuggestion { function_name } => {\n                serde_json::json!({\n                    \"functionName\": function_name,\n                })\n            }\n            SparkError::InvalidArrayIndex {\n                index_value,\n                array_size,\n            } => {\n                serde_json::json!({\n                    \"indexValue\": index_value,\n                    \"arraySize\": array_size,\n                })\n            }\n            SparkError::InvalidElementAtIndex {\n                index_value,\n                array_size,\n            } => {\n                serde_json::json!({\n                    \"indexValue\": index_value,\n                    \"arraySize\": array_size,\n                })\n            }\n            SparkError::InvalidBitmapPosition {\n                bit_position,\n                bitmap_num_bytes,\n                bitmap_num_bits,\n            } => {\n                serde_json::json!({\n                    \"bitPosition\": bit_position,\n                    \"bitmapNumBytes\": bitmap_num_bytes,\n                    \"bitmapNumBits\": bitmap_num_bits,\n                })\n            }\n            SparkError::DuplicatedMapKey { key } => {\n                serde_json::json!({\n                    \"key\": key,\n                })\n            }\n            SparkError::ExceedMapSizeLimit { size, max_size } => {\n                serde_json::json!({\n                    \"size\": size,\n                    \"maxSize\": max_size,\n                })\n            }\n            SparkError::CollectionSizeLimitExceeded {\n                num_elements,\n                max_elements,\n            } => {\n                serde_json::json!({\n                    \"numElements\": num_elements,\n                    \"maxElements\": max_elements,\n                })\n            }\n            SparkError::NotNullAssertViolation { field_name } => {\n                serde_json::json!({\n                    \"fieldName\": field_name,\n                })\n            }\n            SparkError::ValueIsNull {\n                field_name,\n                row_index,\n            } => {\n                serde_json::json!({\n                    \"fieldName\": field_name,\n                    \"rowIndex\": row_index,\n                })\n            }\n            SparkError::CannotParseTimestamp {\n                message,\n                suggested_func,\n            } => {\n                serde_json::json!({\n                    \"message\": message,\n                    \"suggestedFunc\": suggested_func,\n                })\n            }\n            SparkError::InvalidFractionOfSecond { value } => {\n                serde_json::json!({\n                    \"value\": value,\n                })\n            }\n            SparkError::InvalidUtf8String { hex_string } => {\n                serde_json::json!({\n                    \"hexString\": hex_string,\n                })\n            }\n            SparkError::UnexpectedPositiveValue {\n                parameter_name,\n                actual_value,\n            } => {\n                serde_json::json!({\n                    \"parameterName\": parameter_name,\n                    \"actualValue\": actual_value,\n                })\n            }\n            
SparkError::UnexpectedNegativeValue {\n                parameter_name,\n                actual_value,\n            } => {\n                serde_json::json!({\n                    \"parameterName\": parameter_name,\n                    \"actualValue\": actual_value,\n                })\n            }\n            SparkError::InvalidRegexGroupIndex {\n                function_name,\n                group_count,\n                group_index,\n            } => {\n                serde_json::json!({\n                    \"functionName\": function_name,\n                    \"groupCount\": group_count,\n                    \"groupIndex\": group_index,\n                })\n            }\n            SparkError::DatatypeCannotOrder { data_type } => {\n                serde_json::json!({\n                    \"dataType\": data_type,\n                })\n            }\n            SparkError::FileNotFound { message } => {\n                serde_json::json!({\n                    \"message\": message,\n                })\n            }\n            SparkError::DuplicateFieldCaseInsensitive {\n                required_field_name,\n                matched_fields,\n            } => {\n                serde_json::json!({\n                    \"requiredFieldName\": required_field_name,\n                    \"matchedOrcFields\": matched_fields,\n                })\n            }\n            SparkError::Arrow(e) => {\n                serde_json::json!({\n                    \"message\": e.to_string(),\n                })\n            }\n            SparkError::Internal(msg) => {\n                serde_json::json!({\n                    \"message\": msg,\n                })\n            }\n            // Simple errors with no parameters\n            _ => serde_json::json!({}),\n        }\n    }\n\n    /// Returns the appropriate Spark exception class for this error\n    pub fn exception_class(&self) -> &'static str {\n        match self {\n            // ArithmeticException\n            SparkError::DivideByZero\n            | SparkError::RemainderByZero\n            | SparkError::IntervalDividedByZero\n            | SparkError::NumericValueOutOfRange { .. }\n            | SparkError::NumericOutOfRange { .. } // Comet-specific extension\n            | SparkError::ArithmeticOverflow { .. }\n            | SparkError::IntegralDivideOverflow\n            | SparkError::DecimalSumOverflow\n            | SparkError::BinaryArithmeticOverflow { .. }\n            | SparkError::IntervalArithmeticOverflowWithSuggestion { .. }\n            | SparkError::IntervalArithmeticOverflowWithoutSuggestion\n            | SparkError::DatetimeOverflow => \"org/apache/spark/SparkArithmeticException\",\n\n            // CastOverflow gets special handling with CastOverflowException\n            SparkError::CastOverFlow { .. } => \"org/apache/spark/sql/comet/CastOverflowException\",\n\n            // NumberFormatException (for cast invalid input errors)\n            SparkError::CastInvalidValue { .. } => \"org/apache/spark/SparkNumberFormatException\",\n\n            // ArrayIndexOutOfBoundsException\n            SparkError::InvalidArrayIndex { .. }\n            | SparkError::InvalidElementAtIndex { .. }\n            | SparkError::InvalidBitmapPosition { .. }\n            | SparkError::InvalidIndexOfZero => \"org/apache/spark/SparkArrayIndexOutOfBoundsException\",\n\n            // RuntimeException\n            SparkError::CannotParseDecimal\n            | SparkError::DuplicatedMapKey { .. 
}\n            | SparkError::NullMapKey\n            | SparkError::MapKeyValueDiffSizes\n            | SparkError::ExceedMapSizeLimit { .. }\n            | SparkError::CollectionSizeLimitExceeded { .. }\n            | SparkError::NotNullAssertViolation { .. }\n            | SparkError::ValueIsNull { .. } // Comet-specific extension\n            | SparkError::UnexpectedPositiveValue { .. }\n            | SparkError::UnexpectedNegativeValue { .. }\n            | SparkError::InvalidRegexGroupIndex { .. }\n            | SparkError::ScalarSubqueryTooManyRows => \"org/apache/spark/SparkRuntimeException\",\n\n            // DateTimeException\n            SparkError::InvalidInputInCastToDatetime { .. }\n            | SparkError::CannotParseTimestamp { .. }\n            | SparkError::InvalidFractionOfSecond { .. } => \"org/apache/spark/SparkDateTimeException\",\n\n            // IllegalArgumentException\n            SparkError::DatatypeCannotOrder { .. }\n            | SparkError::InvalidUtf8String { .. } => \"org/apache/spark/SparkIllegalArgumentException\",\n\n            // FileNotFound - will be converted to SparkFileNotFoundException by the shim\n            SparkError::FileNotFound { .. } => \"org/apache/spark/SparkException\",\n\n            // DuplicateFieldCaseInsensitive - converted to SparkRuntimeException by the shim\n            SparkError::DuplicateFieldCaseInsensitive { .. } => {\n                \"org/apache/spark/SparkRuntimeException\"\n            }\n\n            // Generic errors\n            SparkError::Arrow(_) | SparkError::Internal(_) => \"org/apache/spark/SparkException\",\n        }\n    }\n\n    /// Returns the Spark error class code for this error\n    pub(crate) fn error_class(&self) -> Option<&'static str> {\n        match self {\n            // Cast errors\n            SparkError::CastInvalidValue { .. } => Some(\"CAST_INVALID_INPUT\"),\n            SparkError::InvalidInputInCastToDatetime { .. } => Some(\"CAST_INVALID_INPUT\"),\n            SparkError::CastOverFlow { .. } => Some(\"CAST_OVERFLOW\"),\n            SparkError::NumericValueOutOfRange { .. } => {\n                Some(\"NUMERIC_VALUE_OUT_OF_RANGE.WITH_SUGGESTION\")\n            }\n            SparkError::NumericOutOfRange { .. } => Some(\"NUMERIC_OUT_OF_SUPPORTED_RANGE\"),\n            SparkError::CannotParseDecimal => Some(\"CANNOT_PARSE_DECIMAL\"),\n\n            // Arithmetic errors\n            SparkError::DivideByZero => Some(\"DIVIDE_BY_ZERO\"),\n            SparkError::RemainderByZero => Some(\"REMAINDER_BY_ZERO\"),\n            SparkError::IntervalDividedByZero => Some(\"INTERVAL_DIVIDED_BY_ZERO\"),\n            SparkError::ArithmeticOverflow { .. } => Some(\"ARITHMETIC_OVERFLOW\"),\n            SparkError::IntegralDivideOverflow => Some(\"ARITHMETIC_OVERFLOW\"),\n            SparkError::DecimalSumOverflow => Some(\"ARITHMETIC_OVERFLOW\"),\n            SparkError::BinaryArithmeticOverflow { .. } => Some(\"BINARY_ARITHMETIC_OVERFLOW\"),\n            SparkError::IntervalArithmeticOverflowWithSuggestion { .. } => {\n                Some(\"INTERVAL_ARITHMETIC_OVERFLOW\")\n            }\n            SparkError::IntervalArithmeticOverflowWithoutSuggestion => {\n                Some(\"INTERVAL_ARITHMETIC_OVERFLOW\")\n            }\n            SparkError::DatetimeOverflow => Some(\"DATETIME_OVERFLOW\"),\n\n            // Array index errors\n            SparkError::InvalidArrayIndex { .. } => Some(\"INVALID_ARRAY_INDEX\"),\n            SparkError::InvalidElementAtIndex { .. 
} => Some(\"INVALID_ARRAY_INDEX_IN_ELEMENT_AT\"),\n            SparkError::InvalidBitmapPosition { .. } => Some(\"INVALID_BITMAP_POSITION\"),\n            SparkError::InvalidIndexOfZero => Some(\"INVALID_INDEX_OF_ZERO\"),\n\n            // Map/Collection errors\n            SparkError::DuplicatedMapKey { .. } => Some(\"DUPLICATED_MAP_KEY\"),\n            SparkError::NullMapKey => Some(\"NULL_MAP_KEY\"),\n            SparkError::MapKeyValueDiffSizes => Some(\"MAP_KEY_VALUE_DIFF_SIZES\"),\n            SparkError::ExceedMapSizeLimit { .. } => Some(\"EXCEED_LIMIT_LENGTH\"),\n            SparkError::CollectionSizeLimitExceeded { .. } => {\n                Some(\"COLLECTION_SIZE_LIMIT_EXCEEDED\")\n            }\n\n            // Null validation errors\n            SparkError::NotNullAssertViolation { .. } => Some(\"NOT_NULL_ASSERT_VIOLATION\"),\n            SparkError::ValueIsNull { .. } => Some(\"VALUE_IS_NULL\"),\n\n            // DateTime errors\n            SparkError::CannotParseTimestamp { .. } => Some(\"CANNOT_PARSE_TIMESTAMP\"),\n            SparkError::InvalidFractionOfSecond { .. } => Some(\"INVALID_FRACTION_OF_SECOND\"),\n\n            // String/UTF8 errors\n            SparkError::InvalidUtf8String { .. } => Some(\"INVALID_UTF8_STRING\"),\n\n            // Function parameter errors\n            SparkError::UnexpectedPositiveValue { .. } => Some(\"UNEXPECTED_POSITIVE_VALUE\"),\n            SparkError::UnexpectedNegativeValue { .. } => Some(\"UNEXPECTED_NEGATIVE_VALUE\"),\n\n            // Regex errors\n            SparkError::InvalidRegexGroupIndex { .. } => Some(\"INVALID_PARAMETER_VALUE\"),\n\n            // Unsupported operation errors\n            SparkError::DatatypeCannotOrder { .. } => Some(\"DATATYPE_CANNOT_ORDER\"),\n\n            // Subquery errors\n            SparkError::ScalarSubqueryTooManyRows => Some(\"SCALAR_SUBQUERY_TOO_MANY_ROWS\"),\n\n            // File not found\n            SparkError::FileNotFound { .. } => Some(\"_LEGACY_ERROR_TEMP_2055\"),\n\n            // Duplicate field in case-insensitive mode\n            SparkError::DuplicateFieldCaseInsensitive { .. 
} => Some(\"_LEGACY_ERROR_TEMP_2093\"),\n\n            // Generic errors (no error class)\n            SparkError::Arrow(_) | SparkError::Internal(_) => None,\n        }\n    }\n}\n\npub type SparkResult<T> = Result<T, SparkError>;\n\n/// Convert decimal overflow to SparkError::NumericValueOutOfRange.\npub fn decimal_overflow_error(value: i128, precision: u8, scale: i8) -> SparkError {\n    SparkError::NumericValueOutOfRange {\n        value: value.to_string(),\n        precision,\n        scale,\n    }\n}\n\n/// Wrapper that adds QueryContext to SparkError\n///\n/// This allows attaching SQL context information (query text, line/position, object name) to errors\n#[derive(Debug, Clone)]\npub struct SparkErrorWithContext {\n    /// The underlying SparkError\n    pub error: SparkError,\n    /// Optional QueryContext for SQL location information\n    pub context: Option<Arc<crate::QueryContext>>,\n}\n\nimpl SparkErrorWithContext {\n    /// Create a SparkErrorWithContext without context\n    pub fn new(error: SparkError) -> Self {\n        Self {\n            error,\n            context: None,\n        }\n    }\n\n    /// Create a SparkErrorWithContext with QueryContext\n    pub fn with_context(error: SparkError, context: Arc<crate::QueryContext>) -> Self {\n        Self {\n            error,\n            context: Some(context),\n        }\n    }\n\n    /// Serialize to JSON including optional context field\n    pub fn to_json(&self) -> String {\n        let mut json_obj = serde_json::json!({\n            \"errorType\": self.error.error_type_name(),\n            \"errorClass\": self.error.error_class().unwrap_or(\"\"),\n            \"params\": self.error.params_as_json(),\n        });\n\n        if let Some(ctx) = &self.context {\n            // Serialize context fields\n            json_obj[\"context\"] = serde_json::json!({\n                \"sqlText\": ctx.sql_text.as_str(),\n                \"startIndex\": ctx.start_index,\n                \"stopIndex\": ctx.stop_index,\n                \"objectType\": ctx.object_type,\n                \"objectName\": ctx.object_name,\n                \"line\": ctx.line,\n                \"startPosition\": ctx.start_position,\n            });\n\n            // Add formatted summary\n            json_obj[\"summary\"] = serde_json::json!(ctx.format_summary());\n        }\n\n        serde_json::to_string(&json_obj).unwrap_or_else(|e| {\n            format!(\n                \"{{\\\"errorType\\\":\\\"SerializationError\\\",\\\"message\\\":\\\"{}\\\"}}\",\n                e\n            )\n        })\n    }\n}\n\nimpl std::fmt::Display for SparkErrorWithContext {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"{}\", self.error)?;\n        if let Some(ctx) = &self.context {\n            write!(f, \"\\n{}\", ctx.format_summary())?;\n        }\n        Ok(())\n    }\n}\n\nimpl std::error::Error for SparkErrorWithContext {}\n\nimpl From<SparkError> for SparkErrorWithContext {\n    fn from(error: SparkError) -> Self {\n        SparkErrorWithContext::new(error)\n    }\n}\n\nimpl From<SparkErrorWithContext> for DataFusionError {\n    fn from(value: SparkErrorWithContext) -> Self {\n        DataFusionError::External(Box::new(value))\n    }\n}\n\nimpl From<ArrowError> for SparkError {\n    fn from(value: ArrowError) -> Self {\n        SparkError::Arrow(Arc::new(value))\n    }\n}\n\nimpl From<SparkError> for DataFusionError {\n    fn from(value: SparkError) -> Self {\n        DataFusionError::External(Box::new(value))\n    
}\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_divide_by_zero_json() {\n        let error = SparkError::DivideByZero;\n        let json = error.to_json();\n\n        assert!(json.contains(\"\\\"errorType\\\":\\\"DivideByZero\\\"\"));\n        assert!(json.contains(\"\\\"errorClass\\\":\\\"DIVIDE_BY_ZERO\\\"\"));\n\n        // Verify it's valid JSON\n        let parsed: serde_json::Value = serde_json::from_str(&json).unwrap();\n        assert_eq!(parsed[\"errorType\"], \"DivideByZero\");\n        assert_eq!(parsed[\"errorClass\"], \"DIVIDE_BY_ZERO\");\n    }\n\n    #[test]\n    fn test_remainder_by_zero_json() {\n        let error = SparkError::RemainderByZero;\n        let json = error.to_json();\n\n        assert!(json.contains(\"\\\"errorType\\\":\\\"RemainderByZero\\\"\"));\n        assert!(json.contains(\"\\\"errorClass\\\":\\\"REMAINDER_BY_ZERO\\\"\"));\n    }\n\n    #[test]\n    fn test_binary_overflow_json() {\n        let error = SparkError::BinaryArithmeticOverflow {\n            value1: \"32767\".to_string(),\n            symbol: \"+\".to_string(),\n            value2: \"1\".to_string(),\n            function_name: \"try_add\".to_string(),\n        };\n        let json = error.to_json();\n\n        // Verify it's valid JSON\n        let parsed: serde_json::Value = serde_json::from_str(&json).unwrap();\n        assert_eq!(parsed[\"errorType\"], \"BinaryArithmeticOverflow\");\n        assert_eq!(parsed[\"errorClass\"], \"BINARY_ARITHMETIC_OVERFLOW\");\n        assert_eq!(parsed[\"params\"][\"value1\"], \"32767\");\n        assert_eq!(parsed[\"params\"][\"symbol\"], \"+\");\n        assert_eq!(parsed[\"params\"][\"value2\"], \"1\");\n        assert_eq!(parsed[\"params\"][\"functionName\"], \"try_add\");\n    }\n\n    #[test]\n    fn test_invalid_array_index_json() {\n        let error = SparkError::InvalidArrayIndex {\n            index_value: 10,\n            array_size: 3,\n        };\n        let json = error.to_json();\n\n        let parsed: serde_json::Value = serde_json::from_str(&json).unwrap();\n        assert_eq!(parsed[\"errorType\"], \"InvalidArrayIndex\");\n        assert_eq!(parsed[\"errorClass\"], \"INVALID_ARRAY_INDEX\");\n        assert_eq!(parsed[\"params\"][\"indexValue\"], 10);\n        assert_eq!(parsed[\"params\"][\"arraySize\"], 3);\n    }\n\n    #[test]\n    fn test_numeric_value_out_of_range_json() {\n        let error = SparkError::NumericValueOutOfRange {\n            value: \"999.99\".to_string(),\n            precision: 5,\n            scale: 2,\n        };\n        let json = error.to_json();\n\n        let parsed: serde_json::Value = serde_json::from_str(&json).unwrap();\n        assert_eq!(parsed[\"errorType\"], \"NumericValueOutOfRange\");\n        assert_eq!(\n            parsed[\"errorClass\"],\n            \"NUMERIC_VALUE_OUT_OF_RANGE.WITH_SUGGESTION\"\n        );\n        assert_eq!(parsed[\"params\"][\"value\"], \"999.99\");\n        assert_eq!(parsed[\"params\"][\"precision\"], 5);\n        assert_eq!(parsed[\"params\"][\"scale\"], 2);\n    }\n\n    #[test]\n    fn test_cast_invalid_value_json() {\n        let error = SparkError::CastInvalidValue {\n            value: \"abc\".to_string(),\n            from_type: \"STRING\".to_string(),\n            to_type: \"INT\".to_string(),\n        };\n        let json = error.to_json();\n\n        let parsed: serde_json::Value = serde_json::from_str(&json).unwrap();\n        assert_eq!(parsed[\"errorType\"], \"CastInvalidValue\");\n        
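// Added during this audit as a hedged extra check (not in the original test):\n        // CastInvalidValue should surface as Spark's NumberFormatException.\n        assert_eq!(\n            SparkError::CastInvalidValue {\n                value: \"abc\".to_string(),\n                from_type: \"STRING\".to_string(),\n                to_type: \"INT\".to_string(),\n            }\n            .exception_class(),\n            \"org/apache/spark/SparkNumberFormatException\"\n        );\n        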
assert_eq!(parsed[\"errorClass\"], \"CAST_INVALID_INPUT\");\n        assert_eq!(parsed[\"params\"][\"value\"], \"abc\");\n        assert_eq!(parsed[\"params\"][\"fromType\"], \"STRING\");\n        assert_eq!(parsed[\"params\"][\"toType\"], \"INT\");\n    }\n\n    #[test]\n    fn test_duplicated_map_key_json() {\n        let error = SparkError::DuplicatedMapKey {\n            key: \"duplicate_key\".to_string(),\n        };\n        let json = error.to_json();\n\n        let parsed: serde_json::Value = serde_json::from_str(&json).unwrap();\n        assert_eq!(parsed[\"errorType\"], \"DuplicatedMapKey\");\n        assert_eq!(parsed[\"errorClass\"], \"DUPLICATED_MAP_KEY\");\n        assert_eq!(parsed[\"params\"][\"key\"], \"duplicate_key\");\n    }\n\n    #[test]\n    fn test_null_map_key_json() {\n        let error = SparkError::NullMapKey;\n        let json = error.to_json();\n\n        let parsed: serde_json::Value = serde_json::from_str(&json).unwrap();\n        assert_eq!(parsed[\"errorType\"], \"NullMapKey\");\n        assert_eq!(parsed[\"errorClass\"], \"NULL_MAP_KEY\");\n        // Params should be an empty object\n        assert_eq!(parsed[\"params\"], serde_json::json!({}));\n    }\n\n    #[test]\n    fn test_error_class_mapping() {\n        // Test that error_class() returns the correct error class\n        assert_eq!(\n            SparkError::DivideByZero.error_class(),\n            Some(\"DIVIDE_BY_ZERO\")\n        );\n        assert_eq!(\n            SparkError::RemainderByZero.error_class(),\n            Some(\"REMAINDER_BY_ZERO\")\n        );\n        assert_eq!(\n            SparkError::InvalidArrayIndex {\n                index_value: 0,\n                array_size: 0\n            }\n            .error_class(),\n            Some(\"INVALID_ARRAY_INDEX\")\n        );\n        assert_eq!(SparkError::NullMapKey.error_class(), Some(\"NULL_MAP_KEY\"));\n    }\n\n    #[test]\n    fn test_exception_class_mapping() {\n        // Test that exception_class() returns the correct Java exception class\n        assert_eq!(\n            SparkError::DivideByZero.exception_class(),\n            \"org/apache/spark/SparkArithmeticException\"\n        );\n        assert_eq!(\n            SparkError::InvalidArrayIndex {\n                index_value: 0,\n                array_size: 0\n            }\n            .exception_class(),\n            \"org/apache/spark/SparkArrayIndexOutOfBoundsException\"\n        );\n        assert_eq!(\n            SparkError::NullMapKey.exception_class(),\n            \"org/apache/spark/SparkRuntimeException\"\n        );\n    }\n}\n"
  },
  {
    "path": "native/common/src/lib.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod error;\nmod query_context;\npub mod tracing;\nmod utils;\n\npub use error::{decimal_overflow_error, SparkError, SparkErrorWithContext, SparkResult};\npub use query_context::{create_query_context_map, QueryContext, QueryContextMap};\npub use utils::bytes_to_i128;\n"
  },
  {
    "path": "native/common/src/query_context.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Query execution context for error reporting\n//!\n//! This module provides QueryContext which mirrors Spark's SQLQueryContext\n//! for providing SQL text, line/position information, and error location\n//! pointers in exception messages.\n\nuse serde::{Deserialize, Serialize};\nuse std::sync::Arc;\n\n/// Based on Spark's SQLQueryContext for error reporting.\n///\n/// Contains information about where an error occurred in a SQL query,\n/// including the full SQL text, line/column positions, and object context.\n#[derive(Debug, Clone, Serialize, Deserialize, PartialEq, Eq)]\npub struct QueryContext {\n    /// Full SQL query text\n    #[serde(rename = \"sqlText\")]\n    pub sql_text: Arc<String>,\n\n    /// Start offset in SQL text (0-based, character index)\n    #[serde(rename = \"startIndex\")]\n    pub start_index: i32,\n\n    /// Stop offset in SQL text (0-based, character index, inclusive)\n    #[serde(rename = \"stopIndex\")]\n    pub stop_index: i32,\n\n    /// Object type (e.g., \"VIEW\", \"Project\", \"Filter\")\n    #[serde(rename = \"objectType\", skip_serializing_if = \"Option::is_none\")]\n    pub object_type: Option<String>,\n\n    /// Object name (e.g., view name, column name)\n    #[serde(rename = \"objectName\", skip_serializing_if = \"Option::is_none\")]\n    pub object_name: Option<String>,\n\n    /// Line number in SQL query (1-based)\n    pub line: i32,\n\n    /// Column position within the line (0-based)\n    #[serde(rename = \"startPosition\")]\n    pub start_position: i32,\n}\n\nimpl QueryContext {\n    #[allow(clippy::too_many_arguments)]\n    pub fn new(\n        sql_text: String,\n        start_index: i32,\n        stop_index: i32,\n        object_type: Option<String>,\n        object_name: Option<String>,\n        line: i32,\n        start_position: i32,\n    ) -> Self {\n        Self {\n            sql_text: Arc::new(sql_text),\n            start_index,\n            stop_index,\n            object_type,\n            object_name,\n            line,\n            start_position,\n        }\n    }\n\n    /// Convert a character index to a byte offset in the SQL text.\n    /// Returns None if the character index is out of range.\n    fn char_index_to_byte_offset(&self, char_index: usize) -> Option<usize> {\n        self.sql_text\n            .char_indices()\n            .nth(char_index)\n            .map(|(byte_offset, _)| byte_offset)\n    }\n\n    /// Generate a summary string showing SQL fragment with error location.\n    /// (From SQLQueryContext.summary)\n    ///\n    /// Format example:\n    /// ```text\n    /// == SQL of VIEW v1 (line 1, position 8) ==\n    /// SELECT a/b FROM t\n    ///        ^^^\n    /// ```\n    pub fn 
format_summary(&self) -> String {\n        let start_char = self.start_index.max(0) as usize;\n        // stop_index is inclusive; fragment covers [start, stop]\n        let stop_char = (self.stop_index + 1).max(0) as usize;\n\n        let fragment = match (\n            self.char_index_to_byte_offset(start_char),\n            // stop_char may equal sql_text.chars().count() (one past the end)\n            self.char_index_to_byte_offset(stop_char).or_else(|| {\n                if stop_char == self.sql_text.chars().count() {\n                    Some(self.sql_text.len())\n                } else {\n                    None\n                }\n            }),\n        ) {\n            (Some(start_byte), Some(stop_byte)) => &self.sql_text[start_byte..stop_byte],\n            _ => \"\",\n        };\n\n        // Build the header line\n        let mut summary = String::from(\"== SQL\");\n\n        if let Some(obj_type) = &self.object_type {\n            if !obj_type.is_empty() {\n                summary.push_str(\" of \");\n                summary.push_str(obj_type);\n\n                if let Some(obj_name) = &self.object_name {\n                    if !obj_name.is_empty() {\n                        summary.push(' ');\n                        summary.push_str(obj_name);\n                    }\n                }\n            }\n        }\n\n        summary.push_str(&format!(\n            \" (line {}, position {}) ==\\n\",\n            self.line,\n            self.start_position + 1 // Convert 0-based to 1-based for display\n        ));\n\n        // Add the SQL text with fragment highlighted\n        summary.push_str(&self.sql_text);\n        summary.push('\\n');\n\n        // Add caret pointer\n        let caret_position = self.start_position.max(0) as usize;\n        summary.push_str(&\" \".repeat(caret_position));\n        // fragment.chars().count() gives the correct display width for non-ASCII\n        summary.push_str(&\"^\".repeat(fragment.chars().count().max(1)));\n\n        summary\n    }\n\n    /// Returns the SQL fragment that caused the error.\n    #[cfg(test)]\n    fn fragment(&self) -> String {\n        let start_char = self.start_index.max(0) as usize;\n        let stop_char = (self.stop_index + 1).max(0) as usize;\n\n        match (\n            self.char_index_to_byte_offset(start_char),\n            self.char_index_to_byte_offset(stop_char).or_else(|| {\n                if stop_char == self.sql_text.chars().count() {\n                    Some(self.sql_text.len())\n                } else {\n                    None\n                }\n            }),\n        ) {\n            (Some(start_byte), Some(stop_byte)) => self.sql_text[start_byte..stop_byte].to_string(),\n            _ => String::new(),\n        }\n    }\n}\n\nuse std::collections::HashMap;\nuse std::sync::RwLock;\n\n/// Map that stores QueryContext information for expressions during execution.\n///\n/// This map is populated during plan deserialization and accessed\n/// during error creation to attach SQL context to exceptions.\n#[derive(Debug)]\npub struct QueryContextMap {\n    /// Map from expression ID to QueryContext\n    contexts: RwLock<HashMap<u64, Arc<QueryContext>>>,\n}\n\nimpl QueryContextMap {\n    pub fn new() -> Self {\n        Self {\n            contexts: RwLock::new(HashMap::new()),\n        }\n    }\n\n    /// Register a QueryContext for an expression ID.\n    ///\n    /// If the expression ID already exists, it will be replaced.\n    ///\n    /// # Arguments\n    /// * `expr_id` - Unique expression 
identifier from protobuf\n    /// * `context` - QueryContext containing SQL text and position info\n    pub fn register(&self, expr_id: u64, context: QueryContext) {\n        let mut contexts = self.contexts.write().unwrap();\n        contexts.insert(expr_id, Arc::new(context));\n    }\n\n    /// Get the QueryContext for an expression ID.\n    ///\n    /// Returns None if no context is registered for this expression.\n    ///\n    /// # Arguments\n    /// * `expr_id` - Expression identifier to look up\n    pub fn get(&self, expr_id: u64) -> Option<Arc<QueryContext>> {\n        let contexts = self.contexts.read().unwrap();\n        contexts.get(&expr_id).cloned()\n    }\n\n    /// Clear all registered contexts.\n    ///\n    /// This is typically called after plan execution completes to free memory.\n    pub fn clear(&self) {\n        let mut contexts = self.contexts.write().unwrap();\n        contexts.clear();\n    }\n\n    /// Return the number of registered contexts (for debugging/testing)\n    pub fn len(&self) -> usize {\n        let contexts = self.contexts.read().unwrap();\n        contexts.len()\n    }\n\n    /// Check if the map is empty\n    pub fn is_empty(&self) -> bool {\n        self.len() == 0\n    }\n}\n\nimpl Default for QueryContextMap {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\n/// Create a new session-scoped QueryContextMap.\n///\n/// This should be called once per SessionContext during plan creation\n/// and passed to expressions that need query context for error reporting.\npub fn create_query_context_map() -> Arc<QueryContextMap> {\n    Arc::new(QueryContextMap::new())\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_query_context_creation() {\n        let ctx = QueryContext::new(\n            \"SELECT a/b FROM t\".to_string(),\n            7,\n            9,\n            Some(\"Divide\".to_string()),\n            Some(\"a/b\".to_string()),\n            1,\n            7,\n        );\n\n        assert_eq!(*ctx.sql_text, \"SELECT a/b FROM t\");\n        assert_eq!(ctx.start_index, 7);\n        assert_eq!(ctx.stop_index, 9);\n        assert_eq!(ctx.object_type, Some(\"Divide\".to_string()));\n        assert_eq!(ctx.object_name, Some(\"a/b\".to_string()));\n        assert_eq!(ctx.line, 1);\n        assert_eq!(ctx.start_position, 7);\n    }\n\n    #[test]\n    fn test_query_context_serialization() {\n        let ctx = QueryContext::new(\n            \"SELECT a/b FROM t\".to_string(),\n            7,\n            9,\n            Some(\"Divide\".to_string()),\n            Some(\"a/b\".to_string()),\n            1,\n            7,\n        );\n\n        let json = serde_json::to_string(&ctx).unwrap();\n        let deserialized: QueryContext = serde_json::from_str(&json).unwrap();\n\n        assert_eq!(ctx, deserialized);\n    }\n\n    #[test]\n    fn test_format_summary() {\n        let ctx = QueryContext::new(\n            \"SELECT a/b FROM t\".to_string(),\n            7,\n            9,\n            Some(\"VIEW\".to_string()),\n            Some(\"v1\".to_string()),\n            1,\n            7,\n        );\n\n        let summary = ctx.format_summary();\n\n        assert!(summary.contains(\"== SQL of VIEW v1 (line 1, position 8) ==\"));\n        assert!(summary.contains(\"SELECT a/b FROM t\"));\n        assert!(summary.contains(\"^^^\")); // Three carets for \"a/b\"\n    }\n\n    #[test]\n    fn test_format_summary_without_object() {\n        let ctx = QueryContext::new(\"SELECT a/b FROM t\".to_string(), 7, 9, None, None, 
1, 7);\n\n        let summary = ctx.format_summary();\n\n        assert!(summary.contains(\"== SQL (line 1, position 8) ==\"));\n        assert!(summary.contains(\"SELECT a/b FROM t\"));\n    }\n\n    #[test]\n    fn test_fragment() {\n        let ctx = QueryContext::new(\"SELECT a/b FROM t\".to_string(), 7, 9, None, None, 1, 7);\n\n        assert_eq!(ctx.fragment(), \"a/b\");\n    }\n\n    #[test]\n    fn test_arc_string_sharing() {\n        let ctx1 = QueryContext::new(\"SELECT a/b FROM t\".to_string(), 7, 9, None, None, 1, 7);\n\n        let ctx2 = ctx1.clone();\n\n        // Arc should share the same allocation\n        assert!(Arc::ptr_eq(&ctx1.sql_text, &ctx2.sql_text));\n    }\n\n    #[test]\n    fn test_json_with_optional_fields() {\n        let ctx = QueryContext::new(\"SELECT a/b FROM t\".to_string(), 7, 9, None, None, 1, 7);\n\n        let json = serde_json::to_string(&ctx).unwrap();\n\n        // Should not serialize objectType and objectName when None\n        assert!(!json.contains(\"objectType\"));\n        assert!(!json.contains(\"objectName\"));\n    }\n\n    #[test]\n    fn test_map_register_and_get() {\n        let map = QueryContextMap::new();\n\n        let ctx = QueryContext::new(\"SELECT a/b FROM t\".to_string(), 7, 9, None, None, 1, 7);\n\n        map.register(1, ctx.clone());\n\n        let retrieved = map.get(1).unwrap();\n        assert_eq!(*retrieved.sql_text, \"SELECT a/b FROM t\");\n        assert_eq!(retrieved.start_index, 7);\n    }\n\n    #[test]\n    fn test_map_get_nonexistent() {\n        let map = QueryContextMap::new();\n        assert!(map.get(999).is_none());\n    }\n\n    #[test]\n    fn test_map_clear() {\n        let map = QueryContextMap::new();\n\n        let ctx = QueryContext::new(\"SELECT a/b FROM t\".to_string(), 7, 9, None, None, 1, 7);\n\n        map.register(1, ctx);\n        assert_eq!(map.len(), 1);\n\n        map.clear();\n        assert_eq!(map.len(), 0);\n        assert!(map.is_empty());\n    }\n\n    // Verify that fragment() and format_summary() correctly handle SQL text that\n    // contains multi-byte characters\n\n    #[test]\n    fn test_fragment_non_ascii_accented() {\n        // \"é\" is a 2-byte UTF-8 sequence (U+00E9).\n        // SQL: \"SELECT café FROM t\"\n        //       0123456789...\n        // char indices: c=7, a=8, f=9, é=10, ' '=11 ...  FROM = 12..\n        // start_index=7, stop_index=10 should yield \"café\"\n        let sql = \"SELECT café FROM t\".to_string();\n        let ctx = QueryContext::new(sql, 7, 10, None, None, 1, 7);\n        assert_eq!(ctx.fragment(), \"café\");\n    }\n}\n"
  },
  {
    "path": "native/common/src/tracing.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse datafusion::common::instant::Instant;\nuse std::fs::{File, OpenOptions};\nuse std::io::{BufWriter, Write};\nuse std::sync::{Arc, LazyLock, Mutex};\n\npub static RECORDER: LazyLock<Recorder> = LazyLock::new(Recorder::new);\n\n/// Log events using Chrome trace format JSON\n/// https://github.com/catapult-project/catapult/blob/main/tracing/README.md\npub struct Recorder {\n    now: Instant,\n    writer: Arc<Mutex<BufWriter<File>>>,\n}\n\nimpl Default for Recorder {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl Recorder {\n    pub fn new() -> Self {\n        let file = OpenOptions::new()\n            .create(true)\n            .append(true)\n            .open(\"comet-event-trace.json\")\n            .expect(\"Error writing tracing\");\n\n        let mut writer = BufWriter::new(file);\n\n        // Write start of JSON array. Note that there is no requirement to write\n        // the closing ']'.\n        writer\n            .write_all(\"[ \".as_bytes())\n            .expect(\"Error writing tracing\");\n        Self {\n            now: Instant::now(),\n            writer: Arc::new(Mutex::new(writer)),\n        }\n    }\n    pub fn begin_task(&self, name: &str) {\n        self.log_event(name, \"B\")\n    }\n\n    pub fn end_task(&self, name: &str) {\n        self.log_event(name, \"E\")\n    }\n\n    pub fn log_memory_usage(&self, name: &str, usage_bytes: u64) {\n        let json = format!(\n            \"{{ \\\"name\\\": \\\"{name}\\\", \\\"cat\\\": \\\"PERF\\\", \\\"ph\\\": \\\"C\\\", \\\"pid\\\": 1, \\\"tid\\\": {}, \\\"ts\\\": {}, \\\"args\\\": {{ \\\"{name}\\\": {usage_bytes} }} }},\\n\",\n            get_thread_id(),\n            self.now.elapsed().as_micros()\n        );\n        let mut writer = self.writer.lock().unwrap();\n        writer\n            .write_all(json.as_bytes())\n            .expect(\"Error writing tracing\");\n    }\n\n    fn log_event(&self, name: &str, ph: &str) {\n        let json = format!(\n            \"{{ \\\"name\\\": \\\"{}\\\", \\\"cat\\\": \\\"PERF\\\", \\\"ph\\\": \\\"{ph}\\\", \\\"pid\\\": 1, \\\"tid\\\": {}, \\\"ts\\\": {} }},\\n\",\n            name,\n            get_thread_id(),\n            self.now.elapsed().as_micros()\n        );\n        let mut writer = self.writer.lock().unwrap();\n        writer\n            .write_all(json.as_bytes())\n            .expect(\"Error writing tracing\");\n    }\n}\n\npub fn get_thread_id() -> u64 {\n    let thread_id = std::thread::current().id();\n    format!(\"{thread_id:?}\")\n        .trim_start_matches(\"ThreadId(\")\n        .trim_end_matches(\")\")\n        .parse()\n        .expect(\"Error parsing thread id\")\n}\n\npub fn trace_begin(name: &str) {\n    
RECORDER.begin_task(name);\n}\n\npub fn trace_end(name: &str) {\n    RECORDER.end_task(name);\n}\n\npub fn log_memory_usage(name: &str, value: u64) {\n    RECORDER.log_memory_usage(name, value);\n}\n\npub fn with_trace<T, F>(label: &str, tracing_enabled: bool, f: F) -> T\nwhere\n    F: FnOnce() -> T,\n{\n    if tracing_enabled {\n        trace_begin(label);\n    }\n\n    let result = f();\n\n    if tracing_enabled {\n        trace_end(label);\n    }\n\n    result\n}\n\npub async fn with_trace_async<F, Fut, T>(label: &str, tracing_enabled: bool, f: F) -> T\nwhere\n    F: FnOnce() -> Fut,\n    Fut: std::future::Future<Output = T>,\n{\n    if tracing_enabled {\n        trace_begin(label);\n    }\n\n    let result = f().await;\n\n    if tracing_enabled {\n        trace_end(label);\n    }\n\n    result\n}\n"
  },
  {
    "path": "native/common/src/utils.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n/// Converts a slice of bytes to i128. The bytes are serialized in big-endian order by\n/// `BigInteger.toByteArray()` in Java.\npub fn bytes_to_i128(slice: &[u8]) -> i128 {\n    let mut bytes = [0; 16];\n    let mut i = 0;\n    while i != 16 && i != slice.len() {\n        bytes[i] = slice[slice.len() - 1 - i];\n        i += 1;\n    }\n\n    // if the decimal is negative, we need to flip all the bits\n    if (slice[0] as i8) < 0 {\n        while i < 16 {\n            bytes[i] = !bytes[i];\n            i += 1;\n        }\n    }\n\n    i128::from_le_bytes(bytes)\n}\n"
  },
  {
    "path": "native/core/Cargo.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[package]\nname = \"datafusion-comet\"\nversion = { workspace = true }\nhomepage = \"https://datafusion.apache.org/comet\"\nrepository = \"https://github.com/apache/datafusion-comet\"\nauthors = [\"Apache DataFusion <dev@datafusion.apache.org>\"]\ndescription = \"Apache DataFusion Comet: High performance accelerator for Apache Spark\"\nreadme = \"README.md\"\nlicense = \"Apache-2.0\"\nedition = \"2021\"\ninclude = [\n    \"benches/*.rs\",\n    \"src/**/*.rs\",\n    \"Cargo.toml\",\n]\n\n# this crate is used in the Spark plugin and does not contain public Rust APIs so we do not publish this crate\npublish = false\n\n[dependencies]\narrow = { workspace = true }\nparquet = { workspace = true, default-features = false, features = [\"experimental\", \"arrow\"] }\nfutures = { workspace = true }\nmimalloc = { version = \"*\", default-features = false, optional = true }\ntikv-jemallocator = { version = \"0.6.1\", optional = true, features = [\"disable_initial_exec_tls\"] }\ntikv-jemalloc-ctl = { version = \"0.6.1\", optional = true, features = [\"disable_initial_exec_tls\", \"stats\"] }\ntokio = { version = \"1\", features = [\"rt-multi-thread\"] }\nasync-trait = { workspace = true }\nlog = \"0.4\"\nlog4rs = \"1.4.0\"\nprost = \"0.14.3\"\njni = \"0.22.4\"\nrand = { workspace = true }\nnum = { workspace = true }\nbytes = { workspace = true }\ntempfile = \"3.26.0\"\nitertools = \"0.14.0\"\npaste = \"1.0.14\"\ndatafusion = { workspace = true, features = [\"parquet_encryption\", \"sql\"] }\ndatafusion-physical-expr-adapter = { workspace = true }\ndatafusion-datasource = { workspace = true }\ndatafusion-spark = { workspace = true }\nonce_cell = \"1.18.0\"\ndatafusion-comet-common = { workspace = true }\ndatafusion-comet-spark-expr = { workspace = true }\ndatafusion-comet-jni-bridge = { workspace = true }\ndatafusion-comet-proto = { workspace = true }\ndatafusion-comet-shuffle = { workspace = true }\nobject_store = { workspace = true }\nurl = { workspace = true }\naws-config = { workspace = true }\naws-credential-types = { workspace = true }\nparking_lot = \"0.12.5\"\ndatafusion-comet-objectstore-hdfs = { path = \"../hdfs\", optional = true, default-features = false, features = [\"hdfs\"] }\nreqwest = { version = \"0.12\", default-features = false, features = [\"rustls-tls-native-roots\", \"http2\"] }\nobject_store_opendal = { git = \"https://github.com/apache/opendal\", rev = \"6909efcdfd12b3b2ac3a76f654c35ee576811512\", package = \"object_store_opendal\", optional = true}\nhdfs-sys = {version = \"0.3\", optional = true, features = [\"hdfs_3_3\"]}\nopendal = { git = \"https://github.com/apache/opendal\", rev = \"6909efcdfd12b3b2ac3a76f654c35ee576811512\", optional = true, features = 
[\"services-hdfs\"] }\niceberg = { workspace = true }\niceberg-storage-opendal = { workspace = true }\nserde_json = \"1.0\"\nuuid = \"1.23.0\"\n\n[target.'cfg(target_os = \"linux\")'.dependencies]\nprocfs = \"0.18.0\"\n\n[target.'cfg(target_os = \"macos\")'.dependencies]\nhdrs = { version = \"0.3.2\", features = [\"vendored\"] }\n\n[dev-dependencies]\npprof = { version = \"0.15\", features = [\"flamegraph\"] }\ncriterion = { version = \"0.7\", features = [\"async\", \"async_tokio\", \"async_std\"] }\njni = { version = \"0.22.4\", features = [\"invocation\"] }\nlazy_static = \"1.4\"\nassertables = \"9\"\nhex = \"0.4.3\"\ndatafusion-functions-nested = { version = \"53.1.0\" }\n\n[features]\nbacktrace = [\"datafusion/backtrace\"]\ndefault = [\"hdfs-opendal\"]\nhdfs = [\"datafusion-comet-objectstore-hdfs\"]\nhdfs-opendal = [\"opendal\", \"object_store_opendal\", \"hdfs-sys\"]\njemalloc = [\"tikv-jemallocator\", \"tikv-jemalloc-ctl\"]\n\n# exclude optional packages from cargo machete verifications\n[package.metadata.cargo-machete]\nignored = [\"hdfs-sys\", \"paste\"]\n\n[lib]\nname = \"comet\"\n# \"rlib\" is for benchmarking with criterion.\ncrate-type = [\"cdylib\", \"rlib\"]\n\n[[bench]]\nname = \"parquet_read\"\nharness = false\n\n[[bench]]\nname = \"bit_util\"\nharness = false\n\n[[bench]]\nname = \"parquet_decode\"\nharness = false\n\n[[bench]]\nname = \"array_element_append\"\nharness = false\n"
  },
  {
    "path": "native/core/benches/array_element_append.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Micro-benchmarks for SparkUnsafeArray element iteration.\n//!\n//! This tests the low-level `append_to_builder` function which converts\n//! SparkUnsafeArray elements to Arrow array builders. This is the inner loop\n//! used when processing List/Array columns in JVM shuffle.\n\nuse arrow::array::builder::{\n    Date32Builder, Float64Builder, Int32Builder, Int64Builder, TimestampMicrosecondBuilder,\n};\nuse arrow::datatypes::{DataType, TimeUnit};\nuse comet::execution::shuffle::spark_unsafe::list::{append_to_builder, SparkUnsafeArray};\nuse criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};\n\nconst NUM_ELEMENTS: usize = 10000;\n\n/// Create a SparkUnsafeArray in memory with i32 elements.\n/// Layout:\n/// - 8 bytes: num_elements (i64)\n/// - null bitset: 8 bytes per 64 elements\n/// - element data: 4 bytes per element (i32)\nfn create_spark_unsafe_array_i32(num_elements: usize, with_nulls: bool) -> Vec<u8> {\n    // Header size: 8 (num_elements) + ceil(num_elements/64) * 8 (null bitset)\n    let null_bitset_words = num_elements.div_ceil(64);\n    let header_size = 8 + null_bitset_words * 8;\n    let data_size = num_elements * 4; // i32 = 4 bytes\n    let total_size = header_size + data_size;\n\n    let mut buffer = vec![0u8; total_size];\n\n    // Write num_elements\n    buffer[0..8].copy_from_slice(&(num_elements as i64).to_le_bytes());\n\n    // Write null bitset (set every 10th element as null if with_nulls)\n    if with_nulls {\n        for i in (0..num_elements).step_by(10) {\n            let word_idx = i / 64;\n            let bit_idx = i % 64;\n            let word_offset = 8 + word_idx * 8;\n            let current_word =\n                i64::from_le_bytes(buffer[word_offset..word_offset + 8].try_into().unwrap());\n            let new_word = current_word | (1i64 << bit_idx);\n            buffer[word_offset..word_offset + 8].copy_from_slice(&new_word.to_le_bytes());\n        }\n    }\n\n    // Write element data\n    for i in 0..num_elements {\n        let offset = header_size + i * 4;\n        buffer[offset..offset + 4].copy_from_slice(&(i as i32).to_le_bytes());\n    }\n\n    buffer\n}\n\n/// Create a SparkUnsafeArray in memory with i64 elements.\nfn create_spark_unsafe_array_i64(num_elements: usize, with_nulls: bool) -> Vec<u8> {\n    let null_bitset_words = num_elements.div_ceil(64);\n    let header_size = 8 + null_bitset_words * 8;\n    let data_size = num_elements * 8; // i64 = 8 bytes\n    let total_size = header_size + data_size;\n\n    let mut buffer = vec![0u8; total_size];\n\n    // Write num_elements\n    buffer[0..8].copy_from_slice(&(num_elements as i64).to_le_bytes());\n\n    // Write null bitset\n    if with_nulls {\n    
    for i in (0..num_elements).step_by(10) {\n            let word_idx = i / 64;\n            let bit_idx = i % 64;\n            let word_offset = 8 + word_idx * 8;\n            let current_word =\n                i64::from_le_bytes(buffer[word_offset..word_offset + 8].try_into().unwrap());\n            let new_word = current_word | (1i64 << bit_idx);\n            buffer[word_offset..word_offset + 8].copy_from_slice(&new_word.to_le_bytes());\n        }\n    }\n\n    // Write element data\n    for i in 0..num_elements {\n        let offset = header_size + i * 8;\n        buffer[offset..offset + 8].copy_from_slice(&(i as i64).to_le_bytes());\n    }\n\n    buffer\n}\n\n/// Create a SparkUnsafeArray in memory with f64 elements.\nfn create_spark_unsafe_array_f64(num_elements: usize, with_nulls: bool) -> Vec<u8> {\n    let null_bitset_words = num_elements.div_ceil(64);\n    let header_size = 8 + null_bitset_words * 8;\n    let data_size = num_elements * 8; // f64 = 8 bytes\n    let total_size = header_size + data_size;\n\n    let mut buffer = vec![0u8; total_size];\n\n    // Write num_elements\n    buffer[0..8].copy_from_slice(&(num_elements as i64).to_le_bytes());\n\n    // Write null bitset\n    if with_nulls {\n        for i in (0..num_elements).step_by(10) {\n            let word_idx = i / 64;\n            let bit_idx = i % 64;\n            let word_offset = 8 + word_idx * 8;\n            let current_word =\n                i64::from_le_bytes(buffer[word_offset..word_offset + 8].try_into().unwrap());\n            let new_word = current_word | (1i64 << bit_idx);\n            buffer[word_offset..word_offset + 8].copy_from_slice(&new_word.to_le_bytes());\n        }\n    }\n\n    // Write element data\n    for i in 0..num_elements {\n        let offset = header_size + i * 8;\n        buffer[offset..offset + 8].copy_from_slice(&(i as f64).to_le_bytes());\n    }\n\n    buffer\n}\n\nfn benchmark_array_conversion(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"spark_unsafe_array_to_arrow\");\n\n    // Benchmark i32 array conversion\n    for with_nulls in [false, true] {\n        let buffer = create_spark_unsafe_array_i32(NUM_ELEMENTS, with_nulls);\n        let array = SparkUnsafeArray::new(buffer.as_ptr() as i64);\n        let null_str = if with_nulls { \"with_nulls\" } else { \"no_nulls\" };\n\n        group.bench_with_input(\n            BenchmarkId::new(\"i32\", null_str),\n            &(&array, &buffer),\n            |b, (array, _buffer)| {\n                b.iter(|| {\n                    let mut builder = Int32Builder::with_capacity(NUM_ELEMENTS);\n                    if with_nulls {\n                        append_to_builder::<true>(&DataType::Int32, &mut builder, array).unwrap();\n                    } else {\n                        append_to_builder::<false>(&DataType::Int32, &mut builder, array).unwrap();\n                    }\n                    builder.finish()\n                });\n            },\n        );\n    }\n\n    // Benchmark i64 array conversion\n    for with_nulls in [false, true] {\n        let buffer = create_spark_unsafe_array_i64(NUM_ELEMENTS, with_nulls);\n        let array = SparkUnsafeArray::new(buffer.as_ptr() as i64);\n        let null_str = if with_nulls { \"with_nulls\" } else { \"no_nulls\" };\n\n        group.bench_with_input(\n            BenchmarkId::new(\"i64\", null_str),\n            &(&array, &buffer),\n            |b, (array, _buffer)| {\n                b.iter(|| {\n                    let mut builder = 
Int64Builder::with_capacity(NUM_ELEMENTS);\n                    if with_nulls {\n                        append_to_builder::<true>(&DataType::Int64, &mut builder, array).unwrap();\n                    } else {\n                        append_to_builder::<false>(&DataType::Int64, &mut builder, array).unwrap();\n                    }\n                    builder.finish()\n                });\n            },\n        );\n    }\n\n    // Benchmark f64 array conversion\n    for with_nulls in [false, true] {\n        let buffer = create_spark_unsafe_array_f64(NUM_ELEMENTS, with_nulls);\n        let array = SparkUnsafeArray::new(buffer.as_ptr() as i64);\n        let null_str = if with_nulls { \"with_nulls\" } else { \"no_nulls\" };\n\n        group.bench_with_input(\n            BenchmarkId::new(\"f64\", null_str),\n            &(&array, &buffer),\n            |b, (array, _buffer)| {\n                b.iter(|| {\n                    let mut builder = Float64Builder::with_capacity(NUM_ELEMENTS);\n                    if with_nulls {\n                        append_to_builder::<true>(&DataType::Float64, &mut builder, array).unwrap();\n                    } else {\n                        append_to_builder::<false>(&DataType::Float64, &mut builder, array)\n                            .unwrap();\n                    }\n                    builder.finish()\n                });\n            },\n        );\n    }\n\n    // Benchmark date32 array conversion (same memory layout as i32)\n    for with_nulls in [false, true] {\n        let buffer = create_spark_unsafe_array_i32(NUM_ELEMENTS, with_nulls);\n        let array = SparkUnsafeArray::new(buffer.as_ptr() as i64);\n        let null_str = if with_nulls { \"with_nulls\" } else { \"no_nulls\" };\n\n        group.bench_with_input(\n            BenchmarkId::new(\"date32\", null_str),\n            &(&array, &buffer),\n            |b, (array, _buffer)| {\n                b.iter(|| {\n                    let mut builder = Date32Builder::with_capacity(NUM_ELEMENTS);\n                    if with_nulls {\n                        append_to_builder::<true>(&DataType::Date32, &mut builder, array).unwrap();\n                    } else {\n                        append_to_builder::<false>(&DataType::Date32, &mut builder, array).unwrap();\n                    }\n                    builder.finish()\n                });\n            },\n        );\n    }\n\n    // Benchmark timestamp array conversion (same memory layout as i64)\n    for with_nulls in [false, true] {\n        let buffer = create_spark_unsafe_array_i64(NUM_ELEMENTS, with_nulls);\n        let array = SparkUnsafeArray::new(buffer.as_ptr() as i64);\n        let null_str = if with_nulls { \"with_nulls\" } else { \"no_nulls\" };\n\n        group.bench_with_input(\n            BenchmarkId::new(\"timestamp\", null_str),\n            &(&array, &buffer),\n            |b, (array, _buffer)| {\n                b.iter(|| {\n                    let mut builder = TimestampMicrosecondBuilder::with_capacity(NUM_ELEMENTS);\n                    let dt = DataType::Timestamp(TimeUnit::Microsecond, None);\n                    if with_nulls {\n                        append_to_builder::<true>(&dt, &mut builder, array).unwrap();\n                    } else {\n                        append_to_builder::<false>(&dt, &mut builder, array).unwrap();\n                    }\n                    builder.finish()\n                });\n            },\n        );\n    }\n\n    group.finish();\n}\n\nfn config() -> Criterion {\n    
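// Plain Criterion defaults; measurement_time/warm_up_time could be tuned here\n    // (as the bit_util benchmark does) if these conversions run too long.\n    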
Criterion::default()\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = benchmark_array_conversion\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/core/benches/bit_util.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{mem::size_of, time::Duration};\n\nuse rand::{rng, RngExt};\n\nuse arrow::buffer::Buffer;\nuse comet::common::bit::{\n    log2, read_num_bytes_u32, read_num_bytes_u64, read_u32, read_u64, set_bits, trailing_bits,\n    BitReader, BitWriter,\n};\nuse criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};\nuse std::hint::black_box;\n\n/// Benchmark to measure bit_util performance.\n/// To run this benchmark:\n/// `cd core && cargo bench --bench bit_util`\n/// Results will be written to \"core/target/criterion/bit_util/\"\nfn criterion_benchmark(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"bit_util\");\n\n    const N: usize = 1024 * 1024;\n    let mut writer: BitWriter = BitWriter::new(N * 10);\n    for _ in 0..N {\n        if !writer.put_vlq_int(rng().random::<u64>()) {\n            break;\n        }\n    }\n    let buffer = writer.consume();\n    let buffer = Buffer::from(buffer.as_slice());\n\n    // log2\n    for bits in (0..64).step_by(3) {\n        let x = 1u64 << bits;\n        group.bench_with_input(BenchmarkId::new(\"log2\", bits), &x, |b, &x| {\n            b.iter(|| log2(black_box(x)));\n        });\n    }\n\n    // set_bits\n    for offset in (0..16).step_by(3) {\n        for length in (0..16).step_by(3) {\n            let x = (offset, length);\n            group.bench_with_input(\n                BenchmarkId::new(\"set_bits\", format!(\"offset_{}_length_{}\", x.0, x.1)),\n                &x,\n                |b, &x| {\n                    b.iter(|| set_bits(&mut [0u8; 4], black_box(x.0), black_box(x.1)));\n                },\n            );\n        }\n    }\n\n    // get_vlq_int\n    group.bench_function(\"get_vlq_int\", |b| {\n        b.iter(|| {\n            let mut reader: BitReader = BitReader::new_all(buffer.slice(0));\n            bench_get_vlq_int(&mut reader)\n        })\n    });\n\n    // get_bits\n    for offset in (0..32).step_by(17) {\n        for num_bits in (1..5).step_by(1) {\n            let x = (offset, num_bits);\n            group.bench_with_input(\n                BenchmarkId::new(\"get_bits\", format!(\"offset_{}_num_bits_{}\", x.0, x.1)),\n                &x,\n                |b, &x| {\n                    let mut reader: BitReader = BitReader::new_all(buffer.slice(0));\n                    b.iter(|| reader.get_bits(&mut [0u8; 4], black_box(x.0), black_box(x.1)));\n                },\n            );\n        }\n    }\n\n    // get_aligned\n    for num_bytes in (1..=size_of::<u8>()).step_by(3) {\n        let x = num_bytes;\n        group.bench_with_input(\n            BenchmarkId::new(\"get_aligned\", format!(\"u8_num_bytes_{x}\")),\n            &x,\n            |b, &x| {\n                let mut 
reader: BitReader = BitReader::new_all(buffer.slice(0));\n                b.iter(|| reader.get_aligned::<u8>(black_box(x)));\n            },\n        );\n    }\n    for num_bytes in (1..=size_of::<u32>()).step_by(3) {\n        let x = num_bytes;\n        group.bench_with_input(\n            BenchmarkId::new(\"get_aligned\", format!(\"u32_num_bytes_{x}\")),\n            &x,\n            |b, &x| {\n                let mut reader: BitReader = BitReader::new_all(buffer.slice(0));\n                b.iter(|| reader.get_aligned::<u32>(black_box(x)));\n            },\n        );\n    }\n    for num_bytes in (1..=size_of::<i32>()).step_by(3) {\n        let x = num_bytes;\n        group.bench_with_input(\n            BenchmarkId::new(\"get_aligned\", format!(\"i32_num_bytes_{x}\")),\n            &x,\n            |b, &x| {\n                let mut reader: BitReader = BitReader::new_all(buffer.slice(0));\n                b.iter(|| reader.get_aligned::<i32>(black_box(x)));\n            },\n        );\n    }\n\n    // get_value\n    for num_bytes in (1..=size_of::<i32>()).step_by(3) {\n        let x = num_bytes * 8;\n        group.bench_with_input(\n            BenchmarkId::new(\"get_value\", format!(\"i32_num_bits_{x}\")),\n            &x,\n            |b, &x| {\n                let mut reader: BitReader = BitReader::new_all(buffer.slice(0));\n                b.iter(|| reader.get_value::<i32>(black_box(x)));\n            },\n        );\n    }\n\n    // read_num_bytes_u64\n    for num_bytes in (1..=8).step_by(7) {\n        let x = num_bytes;\n        group.bench_with_input(\n            BenchmarkId::new(\"read_num_bytes_u64\", format!(\"num_bytes_{x}\")),\n            &x,\n            |b, &x| {\n                b.iter(|| read_num_bytes_u64(black_box(x), black_box(buffer.as_slice())));\n            },\n        );\n    }\n\n    // read_num_bytes_u32\n    for num_bytes in (1..=4).step_by(3) {\n        let x = num_bytes;\n        group.bench_with_input(\n            BenchmarkId::new(\"read_num_bytes_u32\", format!(\"num_bytes_{x}\")),\n            &x,\n            |b, &x| {\n                b.iter(|| read_num_bytes_u32(black_box(x), black_box(buffer.as_slice())));\n            },\n        );\n    }\n\n    // trailing_bits\n    for length in (0..=64).step_by(32) {\n        let x = length;\n        group.bench_with_input(\n            BenchmarkId::new(\"trailing_bits\", format!(\"num_bits_{x}\")),\n            &x,\n            |b, &x| {\n                b.iter(|| trailing_bits(black_box(1234567890), black_box(x)));\n            },\n        );\n    }\n\n    // read_u64\n    group.bench_function(\"read_u64\", |b| {\n        b.iter(|| read_u64(black_box(&[0u8; 8])));\n    });\n\n    // read_u32\n    group.bench_function(\"read_u32\", |b| {\n        b.iter(|| read_u32(black_box(&[0u8; 4])));\n    });\n\n    // get_u32_value\n    group.bench_function(\"get_u32_value\", |b| {\n        b.iter(|| {\n            let mut reader: BitReader = BitReader::new_all(buffer.slice(0));\n            for _ in 0..(buffer.len() * 8 / 31) {\n                black_box(reader.get_u32_value(black_box(31)));\n            }\n        })\n    });\n\n    group.finish();\n}\n\nfn bench_get_vlq_int(reader: &mut BitReader) {\n    while let Some(v) = reader.get_vlq_int() {\n        black_box(v);\n    }\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n        .measurement_time(Duration::from_millis(500))\n        .warm_up_time(Duration::from_millis(500))\n}\n\ncriterion_group! 
{\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/core/benches/common.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::error::ArrowError;\nuse arrow::{\n    array::{DictionaryArray, Int64Array, PrimitiveArray},\n    datatypes::{ArrowPrimitiveType, Int32Type},\n};\nuse rand::{\n    distr::{Distribution, StandardUniform},\n    rngs::StdRng,\n    RngExt, SeedableRng,\n};\nuse std::sync::Arc;\n\n/// Returns fixed seedable RNG\npub fn seedable_rng() -> StdRng {\n    StdRng::seed_from_u64(42)\n}\n\npub fn create_int64_array(size: usize, null_density: f32, min: i64, max: i64) -> Int64Array {\n    let mut rng = seedable_rng();\n    (0..size)\n        .map(|_| {\n            if rng.random::<f32>() < null_density {\n                None\n            } else {\n                Some(rng.random_range(min..max))\n            }\n        })\n        .collect()\n}\n\n#[allow(dead_code)]\npub fn create_primitive_array<T>(size: usize, null_density: f32) -> PrimitiveArray<T>\nwhere\n    T: ArrowPrimitiveType,\n    StandardUniform: Distribution<T::Native>,\n{\n    let mut rng = seedable_rng();\n    (0..size)\n        .map(|_| {\n            if rng.random::<f32>() < null_density {\n                None\n            } else {\n                Some(rng.random())\n            }\n        })\n        .collect()\n}\n\n/// Creates a dictionary with random keys and values, with value type `T`.\n/// Note here the keys are the dictionary indices.\n#[allow(dead_code)]\npub fn create_dictionary_array<T>(\n    size: usize,\n    value_size: usize,\n    null_density: f32,\n) -> Result<DictionaryArray<Int32Type>, ArrowError>\nwhere\n    T: ArrowPrimitiveType,\n    StandardUniform: Distribution<T::Native>,\n{\n    // values are not null\n    let values = create_primitive_array::<T>(value_size, 0.0);\n    let keys = create_primitive_array::<Int32Type>(size, null_density)\n        .iter()\n        .map(|v| v.map(|w| w.abs() % (value_size as i32)))\n        .collect();\n    DictionaryArray::try_new(keys, Arc::new(values))\n}\n"
  },
  {
    "path": "native/core/benches/parquet_decode.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::datatypes::ToByteSlice;\nuse comet::parquet::read::values::{copy_i32_to_i16, copy_i32_to_u16, copy_i64_to_i64};\nuse criterion::{criterion_group, criterion_main, Criterion};\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let num = 1000;\n    let source = vec![78_i8; num * 8];\n    let mut group = c.benchmark_group(\"parquet_decode\");\n    group.bench_function(\"decode_i32_to_i16\", |b| {\n        let mut dest: Vec<u8> = vec![b' '; num * 2];\n        b.iter(|| {\n            copy_i32_to_i16(source.to_byte_slice(), dest.as_mut_slice(), num);\n        });\n    });\n    group.bench_function(\"decode_i32_to_u16\", |b| {\n        let mut dest: Vec<u8> = vec![b' '; num * 4];\n        b.iter(|| {\n            copy_i32_to_u16(source.to_byte_slice(), dest.as_mut_slice(), num);\n        });\n    });\n    group.bench_function(\"decode_i64_to_i64\", |b| {\n        let mut dest: Vec<u8> = vec![b' '; num * 8];\n        b.iter(|| {\n            copy_i64_to_i64(source.to_byte_slice(), dest.as_mut_slice(), num);\n        });\n    });\n}\n\n// Create UTF8 batch with strings representing ints, floats, nulls\nfn config() -> Criterion {\n    Criterion::default()\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/core/benches/parquet_read.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod perf;\n\nuse std::sync::Arc;\n\nuse arrow::{array::ArrayData, buffer::Buffer};\nuse comet::parquet::{read::ColumnReader, util::jni::TypePromotionInfo};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse parquet::{\n    basic::{Encoding, Type as PhysicalType},\n    column::page::{PageIterator, PageReader},\n    data_type::Int32Type,\n    schema::types::{\n        ColumnDescPtr, ColumnDescriptor, ColumnPath, PrimitiveTypeBuilder, SchemaDescPtr, TypePtr,\n    },\n};\n\nuse comet::parquet::util::test_common::page_util::{\n    DataPageBuilder, DataPageBuilderImpl, InMemoryPageIterator,\n};\n\nuse perf::FlamegraphProfiler;\nuse rand::{prelude::StdRng, RngExt, SeedableRng};\n\nfn bench(c: &mut Criterion) {\n    let expected_num_values: usize = NUM_PAGES * VALUES_PER_PAGE;\n    let mut group = c.benchmark_group(\"comet_parquet_read\");\n    let schema = build_test_schema();\n\n    let pages = build_plain_int32_pages(schema.column(0), 0.0);\n    group.bench_function(\"INT/PLAIN/NOT_NULL\", |b| {\n        let t = TypePtr::new(\n            PrimitiveTypeBuilder::new(\"f\", PhysicalType::INT32)\n                .with_length(4)\n                .build()\n                .unwrap(),\n        );\n        b.iter(|| {\n            let cd = ColumnDescriptor::new(t.clone(), 0, 0, ColumnPath::from(Vec::new()));\n            let promotion_info = TypePromotionInfo::new(PhysicalType::INT32, -1, -1, 32);\n            let mut column_reader = TestColumnReader::new(\n                cd,\n                promotion_info,\n                BATCH_SIZE,\n                pages.clone(),\n                expected_num_values,\n            );\n\n            let mut total = 0;\n            for batch in column_reader.by_ref() {\n                total += batch.len();\n                ::std::mem::forget(batch);\n            }\n            assert_eq!(total, expected_num_values);\n        });\n    });\n}\n\nfn profiled() -> Criterion {\n    Criterion::default().with_profiler(FlamegraphProfiler::new(100))\n}\n\ncriterion_group! 
{\n    name = benches;\n    config = profiled();\n    targets = bench\n}\ncriterion_main!(benches);\n\nfn build_test_schema() -> SchemaDescPtr {\n    use parquet::schema::{parser::parse_message_type, types::SchemaDescriptor};\n    let message_type = \"\n        message test_schema {\n            REQUIRED INT32 c1;\n            OPTIONAL INT32 c2;\n        }\n        \";\n    parse_message_type(message_type)\n        .map(|t| Arc::new(SchemaDescriptor::new(Arc::new(t))))\n        .unwrap()\n}\n\nfn seedable_rng() -> StdRng {\n    StdRng::seed_from_u64(42)\n}\n\n// test data params\nconst NUM_PAGES: usize = 1000;\nconst VALUES_PER_PAGE: usize = 10_000;\nconst BATCH_SIZE: usize = 4096;\n\nfn build_plain_int32_pages(\n    column_desc: ColumnDescPtr,\n    null_density: f32,\n) -> impl PageIterator + Clone {\n    let max_def_level = column_desc.max_def_level();\n    let max_rep_level = column_desc.max_rep_level();\n    let rep_levels = vec![0; VALUES_PER_PAGE];\n    let mut rng = seedable_rng();\n    let mut pages: Vec<parquet::column::page::Page> = Vec::new();\n    let mut int32_value = 0;\n    for _ in 0..NUM_PAGES {\n        // generate page\n        let mut values = Vec::with_capacity(VALUES_PER_PAGE);\n        let mut def_levels = Vec::with_capacity(VALUES_PER_PAGE);\n        for _ in 0..VALUES_PER_PAGE {\n            let def_level = if rng.random::<f32>() < null_density {\n                max_def_level - 1\n            } else {\n                max_def_level\n            };\n            if def_level == max_def_level {\n                int32_value += 1;\n                values.push(int32_value);\n            }\n            def_levels.push(def_level);\n        }\n        let mut page_builder =\n            DataPageBuilderImpl::new(column_desc.clone(), values.len() as u32, true);\n        page_builder.add_rep_levels(max_rep_level, &rep_levels);\n        page_builder.add_def_levels(max_def_level, &def_levels);\n        page_builder.add_values::<Int32Type>(Encoding::PLAIN, &values);\n        pages.push(page_builder.consume());\n    }\n\n    // Since `InMemoryPageReader` is not exposed from the parquet crate, here we use\n    // `InMemoryPageIterator` instead, which is an Iter<Iter<Page>>.\n    InMemoryPageIterator::new(vec![pages])\n}\n\nstruct TestColumnReader {\n    inner: ColumnReader,\n    pages: Box<dyn PageReader>,\n    batch_size: usize,\n    total_num_values: usize,\n    total_num_values_read: usize,\n    first_page_loaded: bool,\n}\n\nimpl TestColumnReader {\n    pub fn new(\n        cd: ColumnDescriptor,\n        promotion_info: TypePromotionInfo,\n        batch_size: usize,\n        mut page_iter: impl PageIterator + 'static,\n        total_num_values: usize,\n    ) -> Self {\n        let reader = ColumnReader::get(cd, promotion_info, batch_size, false, false);\n        let first = page_iter.next().unwrap().unwrap();\n        Self {\n            inner: reader,\n            pages: first,\n            batch_size,\n            total_num_values,\n            total_num_values_read: 0,\n            first_page_loaded: false,\n        }\n    }\n\n    fn load_page(&mut self) {\n        if let Some(page) = self.pages.get_next_page().unwrap() {\n            let num_values = page.num_values() as usize;\n            let buffer = Buffer::from_slice_ref(page.buffer());\n            self.inner.set_page_v1(num_values, buffer, page.encoding());\n        }\n    }\n}\n\nimpl Iterator for TestColumnReader {\n    type Item = ArrayData;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        if 
self.total_num_values_read >= self.total_num_values {\n            return None;\n        }\n\n        if !self.first_page_loaded {\n            self.load_page();\n            self.first_page_loaded = true;\n        }\n\n        self.inner.reset_batch();\n        let total = ::std::cmp::min(\n            self.batch_size,\n            self.total_num_values - self.total_num_values_read,\n        );\n\n        let mut left = total;\n        while left > 0 {\n            let (num_read, _) = self.inner.read_batch(left, 0);\n            if num_read < left {\n                self.load_page();\n            }\n            left -= num_read;\n        }\n        self.total_num_values_read += total;\n\n        Some(self.inner.current_batch().unwrap())\n    }\n}\n"
  },
  {
    "path": "native/core/benches/perf.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{fs::File, os::raw::c_int, path::Path};\n\nuse criterion::profiler::Profiler;\nuse pprof::ProfilerGuard;\n\n/// A custom profiler for criterion which generates flamegraph.\n///\n/// Mostly followed this blog post: https://www.jibbow.com/posts/criterion-flamegraphs/\n/// After `cargo bench --bench <bench-name> -- --profile-time=<time>`\n/// You can find flamegraph.svg under `target/criterion/<bench-name>/<bench-method-name>/profile`\npub struct FlamegraphProfiler<'a> {\n    frequency: c_int,\n    active_profiler: Option<ProfilerGuard<'a>>,\n}\n\nimpl FlamegraphProfiler<'_> {\n    pub fn new(frequency: c_int) -> Self {\n        FlamegraphProfiler {\n            frequency,\n            active_profiler: None,\n        }\n    }\n}\n\nimpl Profiler for FlamegraphProfiler<'_> {\n    fn start_profiling(&mut self, _benchmark_id: &str, _benchmark_dir: &Path) {\n        self.active_profiler = Some(ProfilerGuard::new(self.frequency).unwrap());\n    }\n\n    fn stop_profiling(&mut self, _benchmark_id: &str, benchmark_dir: &Path) {\n        std::fs::create_dir_all(benchmark_dir).unwrap();\n        let flamegraph_path = benchmark_dir.join(\"flamegraph.svg\");\n        let flamegraph_file =\n            File::create(flamegraph_path).expect(\"File system error while creating flamegraph.svg\");\n        if let Some(profiler) = self.active_profiler.take() {\n            profiler\n                .report()\n                .build()\n                .unwrap()\n                .flamegraph(flamegraph_file)\n                .expect(\"Error writing flamegraph\");\n        }\n    }\n}\n"
  },
  {
    "path": "native/core/src/common/bit.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{cmp::min, mem::size_of};\n\nuse crate::{\n    errors::CometResult as Result,\n    parquet::{data_type::AsBytes, util::bit_packing::unpack32},\n};\nuse arrow::buffer::Buffer;\nuse datafusion_comet_spark_expr::utils::{likely, unlikely};\n\n#[inline]\npub fn from_ne_slice<T: FromBytes>(bs: &[u8]) -> T {\n    let mut b = T::Buffer::default();\n    {\n        let b = b.as_mut();\n        let bs = &bs[..b.len()];\n        b.copy_from_slice(bs);\n    }\n    T::from_ne_bytes(b)\n}\n\npub trait FromBytes: Sized {\n    type Buffer: AsMut<[u8]> + Default;\n    fn from_le_bytes(bs: Self::Buffer) -> Self;\n    fn from_be_bytes(bs: Self::Buffer) -> Self;\n    fn from_ne_bytes(bs: Self::Buffer) -> Self;\n    fn from(v: u64) -> Self;\n}\n\nmacro_rules! from_le_bytes {\n    ($($ty: ty),*) => {\n        $(\n        impl FromBytes for $ty {\n            type Buffer = [u8; size_of::<Self>()];\n            fn from_le_bytes(bs: Self::Buffer) -> Self {\n                <$ty>::from_le_bytes(bs)\n            }\n            fn from_be_bytes(bs: Self::Buffer) -> Self {\n                <$ty>::from_be_bytes(bs)\n            }\n            fn from_ne_bytes(bs: Self::Buffer) -> Self {\n                <$ty>::from_ne_bytes(bs)\n            }\n            fn from(v: u64) -> Self {\n                v as $ty\n            }\n        }\n        )*\n    };\n}\n\nimpl FromBytes for bool {\n    type Buffer = [u8; 1];\n    fn from_le_bytes(bs: Self::Buffer) -> Self {\n        Self::from_ne_bytes(bs)\n    }\n    fn from_be_bytes(bs: Self::Buffer) -> Self {\n        Self::from_ne_bytes(bs)\n    }\n    fn from_ne_bytes(bs: Self::Buffer) -> Self {\n        match bs[0] {\n            0 => false,\n            1 => true,\n            _ => panic!(\"Invalid byte when reading bool\"),\n        }\n    }\n    fn from(v: u64) -> Self {\n        (v & 1) == 1\n    }\n}\n\n// TODO: support f32 and f64 in the future, but there is no use case right now\n//       f32/f64::from(v: u64) will be like `from_ne_slice(v.as_bytes()))` and that is\n//       expensive as it involves copying buffers\nfrom_le_bytes! { u8, u16, u32, u64, i8, i16, i32, i64 }\n\n/// Reads `$size` of bytes from `$src`, and reinterprets them as type `$ty`, in\n/// little-endian order. `$ty` must implement the `Default` trait. Otherwise this won't\n/// compile.\n/// This is copied and modified from byteorder crate.\nmacro_rules! 
read_num_bytes {\n    ($ty:ty, $size:expr, $src:expr) => {{\n        debug_assert!($size <= $src.len());\n        let mut buffer = <$ty as $crate::common::bit::FromBytes>::Buffer::default();\n        buffer.as_mut()[..$size].copy_from_slice(&$src[..$size]);\n        <$ty>::from_ne_bytes(buffer)\n    }};\n}\n\n/// u64-specific version of read_num_bytes!\n/// This is faster than read_num_bytes! because this method avoids buffer copies.\n#[inline]\npub fn read_num_bytes_u64(size: usize, src: &[u8]) -> u64 {\n    debug_assert!(size <= src.len());\n    if unlikely(src.len() < 8) {\n        return read_num_bytes!(u64, size, src);\n    }\n    let in_ptr = src as *const [u8] as *const u8 as *const u64;\n    let v = unsafe { in_ptr.read_unaligned() };\n    trailing_bits(v, size * 8)\n}\n\n/// u32-specific version of read_num_bytes!\n/// This is faster than read_num_bytes! because this method avoids buffer copies.\n#[inline]\npub fn read_num_bytes_u32(size: usize, src: &[u8]) -> u32 {\n    debug_assert!(size <= src.len());\n    if unlikely(src.len() < 4) {\n        return read_num_bytes!(u32, size, src);\n    }\n    let in_ptr = src as *const [u8] as *const u8 as *const u32;\n    let v = unsafe { in_ptr.read_unaligned() };\n    trailing_bits(v as u64, size * 8) as u32\n}\n\n#[inline]\npub fn read_u64(src: &[u8]) -> u64 {\n    let in_ptr = src.as_ptr() as *const u64;\n    unsafe { in_ptr.read_unaligned() }\n}\n\n#[inline]\npub fn read_u32(src: &[u8]) -> u32 {\n    let in_ptr = src.as_ptr() as *const u32;\n    unsafe { in_ptr.read_unaligned() }\n}\n\n#[inline]\npub fn memcpy(source: &[u8], target: &mut [u8]) {\n    debug_assert!(target.len() >= source.len(), \"Copying from source to target is not possible. Source has {} bytes but target has {} bytes\", source.len(), target.len());\n    // Originally `target[..source.len()].copy_from_slice(source)`\n    // We use the unsafe copy method to avoid some expensive bounds checking.\n    unsafe { std::ptr::copy_nonoverlapping(source.as_ptr(), target.as_mut_ptr(), source.len()) }\n}\n\n#[inline]\npub fn memcpy_value<T>(source: &T, num_bytes: usize, target: &mut [u8])\nwhere\n    T: ?Sized + AsBytes,\n{\n    debug_assert!(\n        target.len() >= num_bytes,\n        \"Not enough space. 
Only had {} bytes but need to put {} bytes\",\n        target.len(),\n        num_bytes\n    );\n    memcpy(&source.as_bytes()[..num_bytes], target)\n}\n\n/// Returns ceil(log2(x))\n#[inline]\npub fn log2(mut x: u64) -> u32 {\n    if x == 1 {\n        return 0;\n    }\n    x -= 1;\n    64u32 - x.leading_zeros()\n}\n\n/// Returns the `num_bits` least-significant bits of `v`\n#[inline]\npub fn trailing_bits(v: u64, num_bits: usize) -> u64 {\n    if unlikely(num_bits == 0) {\n        return 0;\n    }\n    if unlikely(num_bits >= 64) {\n        return v;\n    }\n    v & ((1 << num_bits) - 1)\n}\n\n#[inline]\npub fn set_bit(bits: &mut [u8], i: usize) {\n    bits[i / 8] |= 1 << (i % 8);\n}\n\n/// Set the bit value at index `i`, for buffer `bits`.\n///\n/// # Safety\n/// This doesn't check bounds, the caller must ensure that `i` is in (0, bits.len() * 8)\n#[inline]\npub unsafe fn set_bit_raw(bits: *mut u8, i: usize) {\n    *bits.add(i / 8) |= 1 << (i % 8);\n}\n\n#[inline]\npub fn unset_bit(bits: &mut [u8], i: usize) {\n    bits[i / 8] &= !(1 << (i % 8));\n}\n\n#[inline]\npub fn set_bits(bits: &mut [u8], offset: usize, length: usize) {\n    let mut byte_i = offset / 8;\n    let offset_r = offset % 8;\n    let end = offset + length;\n    let end_byte_i = end / 8;\n    let end_r = end % 8;\n\n    // if the offset starts in the middle of a byte, update the byte first\n    if offset_r != 0 {\n        let num_bits = min(length, 7);\n        bits[byte_i] |= ((1u8 << num_bits) - 1) << offset_r;\n        byte_i += 1;\n    }\n\n    // See if there is an opportunity to do a bulk byte write\n    if byte_i < end_byte_i {\n        unsafe {\n            bits.as_mut_ptr()\n                .add(byte_i)\n                .write_bytes(255, end_byte_i - byte_i);\n        }\n        byte_i = end_byte_i;\n    }\n\n    // take care of the last byte\n    if end_r > 0 && (byte_i == end_byte_i) {\n        bits[byte_i] |= (1u8 << end_r) - 1;\n    }\n}\n\n#[inline(always)]\npub fn mix_hash(lower: u64, upper: u64) -> u64 {\n    let hash = (17 * 37u64).wrapping_add(lower);\n    hash.wrapping_mul(37).wrapping_add(upper)\n}\n\nstatic BIT_MASK: [u8; 8] = [1, 2, 4, 8, 16, 32, 64, 128];\n\n/// Returns whether bit at position `i` in `data` is set or not\n#[inline]\npub fn get_bit(data: &[u8], i: usize) -> bool {\n    (data[i >> 3] & BIT_MASK[i & 7]) != 0\n}\n\n/// Returns the boolean value at index `i`.\n///\n/// # Safety\n/// This doesn't check bounds, the caller must ensure that index < self.len()\n#[inline]\npub unsafe fn get_bit_raw(ptr: *const u8, i: usize) -> bool {\n    (*ptr.add(i >> 3) & BIT_MASK[i & 7]) != 0\n}\n\n/// Utility class for writing bit/byte streams. 
This class can write data in either\n/// bit packed or byte aligned fashion.\npub struct BitWriter {\n    buffer: Vec<u8>,\n    max_bytes: usize,\n    buffered_values: u64,\n    byte_offset: usize,\n    bit_offset: usize,\n    start: usize,\n}\n\nimpl BitWriter {\n    pub fn new(max_bytes: usize) -> Self {\n        Self {\n            buffer: vec![0; max_bytes],\n            max_bytes,\n            buffered_values: 0,\n            byte_offset: 0,\n            bit_offset: 0,\n            start: 0,\n        }\n    }\n\n    /// Initializes the writer from the existing buffer `buffer` and starting\n    /// offset `start`.\n    pub fn new_from_buf(buffer: Vec<u8>, start: usize) -> Self {\n        debug_assert!(start < buffer.len());\n        let len = buffer.len();\n        Self {\n            buffer,\n            max_bytes: len,\n            buffered_values: 0,\n            byte_offset: start,\n            bit_offset: 0,\n            start,\n        }\n    }\n\n    /// Extend buffer size by `increment` bytes\n    #[inline]\n    pub fn extend(&mut self, increment: usize) {\n        self.max_bytes += increment;\n        let extra = vec![0; increment];\n        self.buffer.extend(extra);\n    }\n\n    /// Report buffer size, in bytes\n    #[inline]\n    pub fn capacity(&mut self) -> usize {\n        self.max_bytes\n    }\n\n    /// Consumes and returns the current buffer.\n    #[inline]\n    pub fn consume(mut self) -> Vec<u8> {\n        self.flush();\n        self.buffer.truncate(self.byte_offset);\n        self.buffer\n    }\n\n    /// Flushes the internal buffered bits and returns the buffer's content.\n    /// This is a borrowing equivalent of the `consume` method.\n    #[inline]\n    pub fn flush_buffer(&mut self) -> &[u8] {\n        self.flush();\n        &self.buffer()[0..self.byte_offset]\n    }\n\n    /// Clears the internal state so the buffer can be reused.\n    #[inline]\n    pub fn clear(&mut self) {\n        self.buffered_values = 0;\n        self.byte_offset = self.start;\n        self.bit_offset = 0;\n    }\n\n    /// Flushes the internal buffered bits and aligns the buffer to the next byte.\n    #[inline]\n    pub fn flush(&mut self) {\n        let num_bytes = self.bit_offset.div_ceil(8);\n        debug_assert!(self.byte_offset + num_bytes <= self.max_bytes);\n        memcpy_value(\n            &self.buffered_values,\n            num_bytes,\n            &mut self.buffer[self.byte_offset..],\n        );\n        self.buffered_values = 0;\n        self.bit_offset = 0;\n        self.byte_offset += num_bytes;\n    }\n\n    /// Advances the current offset by skipping `num_bytes`, flushing the internal bit\n    /// buffer first.\n    /// This is useful when you want to jump over `num_bytes` bytes and come back later\n    /// to fill these bytes.\n    ///\n    /// Returns an error if `num_bytes` is beyond the boundary of the internal buffer.\n    /// Otherwise, returns the old offset.\n    #[inline]\n    pub fn skip(&mut self, num_bytes: usize) -> Result<usize> {\n        self.flush();\n        debug_assert!(self.byte_offset <= self.max_bytes);\n        if unlikely(self.byte_offset + num_bytes > self.max_bytes) {\n            return Err(general_err!(\n                \"Not enough bytes left in BitWriter. 
Need {} but only have {}\",\n                self.byte_offset + num_bytes,\n                self.max_bytes\n            ));\n        }\n        let result = self.byte_offset;\n        self.byte_offset += num_bytes;\n        Ok(result)\n    }\n\n    /// Returns a slice containing the next `num_bytes` bytes starting from the current\n    /// offset, and advances the underlying buffer by `num_bytes`.\n    /// This is useful when you want to jump over `num_bytes` bytes and come back later\n    /// to fill these bytes.\n    #[inline]\n    pub fn get_next_byte_ptr(&mut self, num_bytes: usize) -> Result<&mut [u8]> {\n        let offset = self.skip(num_bytes)?;\n        Ok(&mut self.buffer[offset..offset + num_bytes])\n    }\n\n    #[inline]\n    pub fn bytes_written(&self) -> usize {\n        self.byte_offset - self.start + self.bit_offset.div_ceil(8)\n    }\n\n    #[inline]\n    pub fn buffer(&self) -> &[u8] {\n        &self.buffer[self.start..]\n    }\n\n    #[inline]\n    pub fn byte_offset(&self) -> usize {\n        self.byte_offset\n    }\n\n    /// Returns the internal buffer length. This is the maximum number of bytes that this\n    /// writer can write. User needs to call `consume` to consume the current buffer\n    /// before more data can be written.\n    #[inline]\n    pub fn buffer_len(&self) -> usize {\n        self.max_bytes\n    }\n\n    /// Writes the entire byte `value` at the byte `offset`\n    pub fn write_at(&mut self, offset: usize, value: u8) {\n        self.buffer[offset] = value;\n    }\n\n    /// Writes the `num_bits` least-significant bits of value `v` to the internal buffer\n    /// of this writer. The `num_bits` must not be greater than 64. This is bit packed.\n    ///\n    /// Returns false if there's not enough room left. True otherwise.\n    #[inline]\n    #[allow(clippy::unnecessary_cast)]\n    pub fn put_value(&mut self, v: u64, num_bits: usize) -> bool {\n        debug_assert!(num_bits <= 64);\n        debug_assert_eq!(v.checked_shr(num_bits as u32).unwrap_or(0), 0); // covers case v >> 64\n\n        let total_bits = self.byte_offset * 8 + self.bit_offset + num_bits;\n        if unlikely(total_bits > self.max_bytes as usize * 8) {\n            return false;\n        }\n\n        self.buffered_values |= v << self.bit_offset;\n        self.bit_offset += num_bits;\n        if self.bit_offset >= 64 {\n            memcpy_value(\n                &self.buffered_values,\n                8,\n                &mut self.buffer[self.byte_offset..],\n            );\n            self.byte_offset += 8;\n            self.bit_offset -= 64;\n            self.buffered_values = 0;\n            // Perform checked right shift: v >> offset, where offset < 64, otherwise we\n            // shift all bits\n            self.buffered_values = v\n                .checked_shr((num_bits - self.bit_offset) as u32)\n                .unwrap_or(0);\n        }\n        debug_assert!(self.bit_offset < 64);\n        true\n    }\n\n    /// Writes `val` of `num_bytes` bytes to the next aligned byte. If the size of `T` is\n    /// larger than `num_bytes`, the extra higher-order bytes will be ignored.\n    ///\n    /// Returns false if there's not enough room left. 
True otherwise.\n    #[inline]\n    pub fn put_aligned<T: AsBytes>(&mut self, val: T, num_bytes: usize) -> bool {\n        let result = self.get_next_byte_ptr(num_bytes);\n        if unlikely(result.is_err()) {\n            // TODO: should we return `Result` for this func?\n            return false;\n        }\n        let ptr = result.unwrap();\n        memcpy_value(&val, num_bytes, ptr);\n        true\n    }\n\n    /// Writes `val` of `num_bytes` bytes at the designated `offset`. The `offset` is the\n    /// offset starting from the beginning of the internal buffer that this writer\n    /// maintains. Note that this will overwrite any existing data between `offset` and\n    /// `offset + num_bytes`. Also, if the size of `T` is larger than `num_bytes`, the\n    /// extra higher-order bytes will be ignored.\n    ///\n    /// Returns false if there's not enough room left, or the `offset` is not valid.\n    /// True otherwise.\n    #[inline]\n    pub fn put_aligned_offset<T: AsBytes>(\n        &mut self,\n        val: T,\n        num_bytes: usize,\n        offset: usize,\n    ) -> bool {\n        if unlikely(num_bytes + offset > self.max_bytes) {\n            return false;\n        }\n        memcpy_value(\n            &val,\n            num_bytes,\n            &mut self.buffer[offset..offset + num_bytes],\n        );\n        true\n    }\n\n    /// Writes a VLQ encoded integer `v` to this buffer. The value is byte aligned.\n    ///\n    /// Returns false if there's not enough room left. True otherwise.\n    #[inline]\n    pub fn put_vlq_int(&mut self, mut v: u64) -> bool {\n        let mut result = true;\n        while v & 0xFFFFFFFFFFFFFF80 != 0 {\n            result &= self.put_aligned::<u8>(((v & 0x7F) | 0x80) as u8, 1);\n            v >>= 7;\n        }\n        result &= self.put_aligned::<u8>((v & 0x7F) as u8, 1);\n        result\n    }\n\n    /// Writes a zigzag-VLQ encoded (in little endian order) int `v` to this buffer.\n    /// Zigzag-VLQ is a variant of VLQ encoding where negative and positive\n    /// numbers are encoded in a zigzag fashion.\n    /// See: https://developers.google.com/protocol-buffers/docs/encoding\n    ///\n    /// Returns false if there's not enough room left. True otherwise.\n    #[inline]\n    pub fn put_zigzag_vlq_int(&mut self, v: i64) -> bool {\n        let u: u64 = ((v << 1) ^ (v >> 63)) as u64;\n        self.put_vlq_int(u)\n    }\n}\n\n/// Maximum byte length for a VLQ encoded integer\n/// MAX_VLQ_BYTE_LEN = 5 for i32, and MAX_VLQ_BYTE_LEN = 10 for i64\npub const MAX_VLQ_BYTE_LEN: usize = 10;\n\npub struct BitReader {\n    /// The byte buffer to read from, passed in by client\n    buffer: Buffer, // TODO: generalize this\n\n    /// Bytes are memcpy'd from `buffer` and values are read from this variable.\n    /// This is faster than reading values byte by byte directly from `buffer`\n    buffered_values: u64,\n\n    ///\n    /// End                                         Start\n    /// |............|B|B|B|B|B|B|B|B|..............|\n    ///                   ^          ^\n    ///                 bit_offset   byte_offset\n    ///\n    /// Current byte offset in `buffer`\n    byte_offset: usize,\n\n    /// Current bit offset in `buffered_values`\n    bit_offset: usize,\n\n    /// Total number of bytes in `buffer`\n    total_bytes: usize,\n}\n\n/// Utility class to read bit/byte stream. 
This class can read bits or bytes that are\n/// either byte aligned or not.\nimpl BitReader {\n    pub fn new(buf: Buffer, len: usize) -> Self {\n        let buffered_values = if size_of::<u64>() > len {\n            read_num_bytes_u64(len, buf.as_slice())\n        } else {\n            read_u64(buf.as_slice())\n        };\n        BitReader {\n            buffer: buf,\n            buffered_values,\n            byte_offset: 0,\n            bit_offset: 0,\n            total_bytes: len,\n        }\n    }\n\n    pub fn new_all(buf: Buffer) -> Self {\n        let len = buf.len();\n        Self::new(buf, len)\n    }\n\n    pub fn reset(&mut self, buf: Buffer) {\n        self.buffer = buf;\n        self.total_bytes = self.buffer.len();\n        self.buffered_values = if size_of::<u64>() > self.total_bytes {\n            read_num_bytes_u64(self.total_bytes, self.buffer.as_slice())\n        } else {\n            read_u64(self.buffer.as_slice())\n        };\n        self.byte_offset = 0;\n        self.bit_offset = 0;\n    }\n\n    /// Gets the current byte offset\n    #[inline]\n    pub fn get_byte_offset(&self) -> usize {\n        self.byte_offset + self.bit_offset.div_ceil(8)\n    }\n\n    /// Reads a value of type `T` and of size `num_bits`.\n    ///\n    /// Returns `None` if there's not enough data available. `Some` otherwise.\n    pub fn get_value<T: FromBytes>(&mut self, num_bits: usize) -> Option<T> {\n        debug_assert!(num_bits <= 64);\n        debug_assert!(num_bits <= size_of::<T>() * 8);\n\n        if unlikely(self.byte_offset * 8 + self.bit_offset + num_bits > self.total_bytes * 8) {\n            return None;\n        }\n\n        let v = self.get_u64_value(num_bits);\n        Some(T::from(v))\n    }\n\n    /// Reads a `u32` value encoded using `num_bits` bits.\n    ///\n    /// # Safety\n    ///\n    /// This method assumes the following:\n    ///\n    /// - the `num_bits` is <= 64\n    /// - the remaining number of bits to read in this reader is >= `num_bits`.\n    ///\n    /// Undefined behavior will happen if any of the above assumptions is violated.\n    #[inline]\n    pub fn get_u32_value(&mut self, num_bits: usize) -> u32 {\n        self.get_u64_value(num_bits) as u32\n    }\n\n    #[inline(always)]\n    fn get_u64_value(&mut self, num_bits: usize) -> u64 {\n        if unlikely(num_bits == 0) {\n            0\n        } else {\n            let v = self.buffered_values >> self.bit_offset;\n            let mask = u64::MAX >> (64 - num_bits);\n            self.bit_offset += num_bits;\n            if self.bit_offset < 64 {\n                v & mask\n            } else {\n                self.byte_offset += 8;\n                self.bit_offset -= 64;\n                self.reload_buffer_values();\n                ((self.buffered_values << (num_bits - self.bit_offset)) | v) & mask\n            }\n        }\n    }\n\n    /// Gets at most `num_bits` bits from this reader, and appends them to the `dst` byte slice,\n    /// starting at bit offset `offset`.\n    ///\n    /// Returns the actual number of bits appended. 
In case either the `dst` slice doesn't have\n    /// enough space or the current reader doesn't have enough bits to consume, the returned value\n    /// will be less than the input `num_bits`.\n    ///\n    /// # Preconditions\n    /// * `offset` MUST be < dst.len() * 8\n    pub fn get_bits(&mut self, dst: &mut [u8], offset: usize, num_bits: usize) -> usize {\n        debug_assert!(offset < dst.len() * 8);\n\n        let remaining_bits = (self.total_bytes - self.byte_offset) * 8 - self.bit_offset;\n        let num_bits_to_read = min(remaining_bits, min(num_bits, dst.len() * 8 - offset));\n        let mut i = 0;\n\n        // First consume all the remaining bits from the `buffered_values` if there are any.\n        if likely(self.bit_offset != 0) {\n            i += self.get_bits_buffered(dst, offset, num_bits_to_read);\n        }\n\n        debug_assert!(self.bit_offset == 0 || i == num_bits_to_read);\n\n        // Check if there's opportunity to directly copy bytes using `memcpy`.\n        if (offset + i).is_multiple_of(8) && i < num_bits_to_read {\n            let num_bytes = (num_bits_to_read - i) / 8;\n            let dst_byte_offset = (offset + i) / 8;\n            if num_bytes > 0 {\n                memcpy(\n                    &self.buffer[self.byte_offset..self.byte_offset + num_bytes],\n                    &mut dst[dst_byte_offset..],\n                );\n                i += num_bytes * 8;\n                self.byte_offset += num_bytes;\n                self.reload_buffer_values();\n            }\n        }\n\n        debug_assert!(!(offset + i).is_multiple_of(8) || num_bits_to_read - i < 8);\n\n        // Now copy the remaining bits if there are any.\n        while i < num_bits_to_read {\n            i += self.get_bits_buffered(dst, offset + i, num_bits_to_read - i);\n        }\n\n        num_bits_to_read\n    }\n\n    /// Consumes at most `num_bits` bits from `buffered_values`. 
Returns the actual number of bits consumed.\n    ///\n    /// # Postcondition\n    /// - either bits from `buffered_values` are completely drained (i.e., `bit_offset` == 0)\n    /// - OR `num_bits` is less than the number of remaining bits in `buffered_values`, in which\n    ///   case the returned value equals `num_bits`.\n    ///\n    /// Either way, the returned value is in range [0, 64].\n    #[inline]\n    fn get_bits_buffered(&mut self, dst: &mut [u8], offset: usize, num_bits: usize) -> usize {\n        if unlikely(num_bits == 0) {\n            return 0;\n        }\n\n        let n = min(num_bits, 64 - self.bit_offset);\n        let offset_i = offset / 8;\n        let offset_r = offset % 8;\n\n        // Extract the value to read out of the buffer\n        let mut v = trailing_bits(self.buffered_values >> self.bit_offset, n);\n\n        // Read the first byte always because n > 0\n        dst[offset_i] |= (v << offset_r) as u8;\n        v >>= 8 - offset_r;\n\n        // Read the rest of the bytes\n        ((offset_i + 1)..(offset_i + usize::div_ceil(n + offset_r, 8))).for_each(|i| {\n            dst[i] |= v as u8;\n            v >>= 8;\n        });\n\n        self.bit_offset += n;\n        if self.bit_offset == 64 {\n            self.byte_offset += 8;\n            self.bit_offset -= 64;\n            self.reload_buffer_values();\n        }\n\n        n\n    }\n\n    /// Skips at most `num_bits` bits from this reader.\n    ///\n    /// Returns the actual number of bits skipped.\n    pub fn skip_bits(&mut self, num_bits: usize) -> usize {\n        let remaining_bits = (self.total_bytes - self.byte_offset) * 8 - self.bit_offset;\n        let num_bits_to_read = min(remaining_bits, num_bits);\n        let mut i = 0;\n\n        // First skip all the remaining bits by updating the offsets of `buffered_values`.\n        if likely(self.bit_offset != 0) {\n            let n = 64 - self.bit_offset;\n            if num_bits_to_read < n {\n                self.bit_offset += num_bits_to_read;\n                i = num_bits_to_read;\n            } else {\n                self.byte_offset += 8;\n                self.bit_offset = 0;\n                i = n;\n            }\n        }\n\n        // Check if there's opportunity to skip by byte\n        if i + 7 < num_bits_to_read {\n            let num_bytes = (num_bits_to_read - i) / 8;\n            i += num_bytes * 8;\n            self.byte_offset += num_bytes;\n        }\n\n        if self.bit_offset == 0 {\n            self.reload_buffer_values();\n        }\n\n        // Now skip the remaining bits if there are any.\n        if i < num_bits_to_read {\n            self.bit_offset += num_bits_to_read - i;\n        }\n\n        num_bits_to_read\n    }\n\n    /// Reads a batch of `u32` values encoded using `num_bits` bits, into `dst`.\n    ///\n    /// # Safety\n    ///\n    /// This method assumes the following:\n    ///\n    /// - the `num_bits` is <= 64\n    /// - the remaining number of bits to read in this reader is >= `total * num_bits`.\n    ///\n    /// Undefined behavior will happen if any of the above assumptions is violated.\n    ///\n    /// Unlike `get_batch`, this method removes a few checks such as checking the remaining number\n    /// of bits as well as checking the bit width for the element type in `dst`. 
Therefore, it is\n    /// more efficient.\n    pub unsafe fn get_u32_batch(&mut self, mut dst: *mut u32, total: usize, num_bits: usize) {\n        let mut i = 0;\n\n        // First align bit offset to byte offset\n        if likely(self.bit_offset != 0) {\n            while i < total && self.bit_offset != 0 {\n                *dst = self.get_u32_value(num_bits);\n                dst = dst.offset(1);\n                i += 1;\n            }\n        }\n\n        let in_buf = &self.buffer.as_slice()[self.byte_offset..];\n        let mut in_ptr = in_buf as *const [u8] as *const u8 as *const u32;\n        while total - i >= 32 {\n            in_ptr = unpack32(in_ptr, dst, num_bits);\n            self.byte_offset += 4 * num_bits;\n            dst = dst.offset(32);\n            i += 32;\n        }\n\n        self.reload_buffer_values();\n        while i < total {\n            *dst = self.get_u32_value(num_bits);\n            dst = dst.offset(1);\n            i += 1;\n        }\n    }\n\n    pub fn get_batch<T: FromBytes>(&mut self, batch: &mut [T], num_bits: usize) -> usize {\n        debug_assert!(num_bits <= 32);\n        debug_assert!(num_bits <= size_of::<T>() * 8);\n\n        let mut values_to_read = batch.len();\n        let needed_bits = num_bits * values_to_read;\n        let remaining_bits = (self.total_bytes - self.byte_offset) * 8 - self.bit_offset;\n        if remaining_bits < needed_bits {\n            values_to_read = remaining_bits / num_bits;\n        }\n\n        let mut i = 0;\n\n        // First align bit offset to byte offset\n        if likely(self.bit_offset != 0) {\n            while i < values_to_read && self.bit_offset != 0 {\n                batch[i] = self\n                    .get_value(num_bits)\n                    .expect(\"expected to have more data\");\n                i += 1;\n            }\n        }\n\n        unsafe {\n            let in_buf = &self.buffer.as_slice()[self.byte_offset..];\n            let mut in_ptr = in_buf as *const [u8] as *const u8 as *const u32;\n            // FIXME assert!(memory::is_ptr_aligned(in_ptr));\n            if size_of::<T>() == 4 {\n                while values_to_read - i >= 32 {\n                    let out_ptr = &mut batch[i..] 
as *mut [T] as *mut T as *mut u32;\n                    in_ptr = unpack32(in_ptr, out_ptr, num_bits);\n                    self.byte_offset += 4 * num_bits;\n                    i += 32;\n                }\n            } else {\n                let mut out_buf = [0u32; 32];\n                let out_ptr = &mut out_buf as &mut [u32] as *mut [u32] as *mut u32;\n                while values_to_read - i >= 32 {\n                    in_ptr = unpack32(in_ptr, out_ptr, num_bits);\n                    self.byte_offset += 4 * num_bits;\n                    for n in 0..32 {\n                        // We need to copy from smaller size to bigger size to avoid\n                        // overwriting other memory regions.\n                        if size_of::<T>() > size_of::<u32>() {\n                            std::ptr::copy_nonoverlapping(\n                                out_buf[n..].as_ptr(),\n                                &mut batch[i] as *mut T as *mut u32,\n                                1,\n                            );\n                        } else {\n                            std::ptr::copy_nonoverlapping(\n                                out_buf[n..].as_ptr() as *const T,\n                                &mut batch[i] as *mut T,\n                                1,\n                            );\n                        }\n                        i += 1;\n                    }\n                }\n            }\n        }\n\n        debug_assert!(values_to_read - i < 32);\n\n        self.reload_buffer_values();\n        while i < values_to_read {\n            batch[i] = self\n                .get_value(num_bits)\n                .expect(\"expected to have more data\");\n            i += 1;\n        }\n\n        values_to_read\n    }\n\n    /// Reads a `num_bytes`-sized value from this buffer and return it.\n    /// `T` needs to be a little-endian native type. 
The value is assumed to be byte\n    /// aligned so the bit reader will be advanced to the start of the next byte before\n    /// reading the value.\n    /// Returns `Some` if there are enough bytes left to form a value of `T`.\n    /// Otherwise `None`.\n    pub fn get_aligned<T: FromBytes>(&mut self, num_bytes: usize) -> Option<T> {\n        debug_assert!(8 >= size_of::<T>());\n        debug_assert!(num_bytes <= size_of::<T>());\n\n        let bytes_read = self.bit_offset.div_ceil(8);\n        if unlikely(self.byte_offset + bytes_read + num_bytes > self.total_bytes) {\n            return None;\n        }\n\n        if bytes_read + num_bytes > 8 {\n            // There may still be unread bytes in buffered_values; however, just reloading seems to\n            // be faster than stitching the buffer with the next buffer based on micro benchmarks\n            // because reloading logic can be simpler\n\n            // Advance byte_offset to next unread byte\n            self.byte_offset += bytes_read;\n            // Reset buffered_values\n            self.reload_buffer_values();\n            self.bit_offset = 0\n        } else {\n            // Advance bit_offset to next unread byte\n            self.bit_offset = bytes_read * 8;\n        }\n\n        let v = T::from(trailing_bits(\n            self.buffered_values >> self.bit_offset,\n            num_bytes * 8,\n        ));\n        self.bit_offset += num_bytes * 8;\n\n        if self.bit_offset == 64 {\n            self.byte_offset += 8;\n            self.bit_offset -= 64;\n            self.reload_buffer_values();\n        }\n\n        Some(v)\n    }\n\n    /// Reads a VLQ encoded (in little endian order) int from the stream.\n    /// The encoded int must start at the beginning of a byte.\n    ///\n    /// Returns `None` if there are not enough bytes in the stream. 
`Some` otherwise.\n    pub fn get_vlq_int(&mut self) -> Option<i64> {\n        let mut shift = 0;\n        let mut v: i64 = 0;\n        while let Some(byte) = self.get_aligned::<u8>(1) {\n            v |= ((byte & 0x7F) as i64) << shift;\n            shift += 7;\n            debug_assert!(\n                shift <= MAX_VLQ_BYTE_LEN * 7,\n                \"Num of bytes exceed MAX_VLQ_BYTE_LEN ({MAX_VLQ_BYTE_LEN})\"\n            );\n            if likely(byte & 0x80 == 0) {\n                return Some(v);\n            }\n        }\n        None\n    }\n\n    /// Reads a zigzag-VLQ encoded (in little endian order) int from the stream.\n    /// Zigzag-VLQ is a variant of VLQ encoding where negative and positive numbers are\n    /// encoded in a zigzag fashion.\n    /// See: https://developers.google.com/protocol-buffers/docs/encoding\n    ///\n    /// Note: the encoded int must start at the beginning of a byte.\n    ///\n    /// Returns `None` if there are not enough bytes in the stream.\n    /// `Some` otherwise.\n    #[inline]\n    pub fn get_zigzag_vlq_int(&mut self) -> Option<i64> {\n        self.get_vlq_int().map(|v| {\n            let u = v as u64;\n            (u >> 1) as i64 ^ -((u & 1) as i64)\n        })\n    }\n\n    fn reload_buffer_values(&mut self) {\n        let bytes_to_read = self.total_bytes - self.byte_offset;\n        self.buffered_values = if 8 > bytes_to_read {\n            read_num_bytes_u64(bytes_to_read, &self.buffer.as_slice()[self.byte_offset..])\n        } else {\n            read_u64(&self.buffer.as_slice()[self.byte_offset..])\n        };\n    }\n}\n\nimpl From<Vec<u8>> for BitReader {\n    #[inline]\n    fn from(vec: Vec<u8>) -> Self {\n        let len = vec.len();\n        BitReader::new(Buffer::from(vec.as_slice()), len)\n    }\n}\n\n/// Returns the nearest multiple of `factor` that is `>=` `num`. 
Here `factor` must\n/// be a power of 2.\n///\n/// Copied from the arrow crate to make arrow optional\npub fn round_upto_power_of_2(num: usize, factor: usize) -> usize {\n    debug_assert!(factor > 0 && (factor & (factor - 1)) == 0);\n    (num + (factor - 1)) & !(factor - 1)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::parquet::util::test_common::*;\n\n    use rand::{\n        distr::{Distribution, StandardUniform},\n        Rng,\n    };\n    use std::fmt::Debug;\n\n    #[test]\n    fn test_read_num_bytes_u64() {\n        let buffer: Vec<u8> = vec![0, 1, 2, 3, 4, 5, 6, 7];\n        for size in 0..buffer.len() {\n            assert_eq!(\n                read_num_bytes_u64(size, &buffer),\n                read_num_bytes!(u64, size, &buffer),\n            );\n        }\n    }\n\n    #[test]\n    fn test_read_u64() {\n        let buffer: Vec<u8> = vec![0, 1, 2, 3, 4, 5, 6, 7];\n        assert_eq!(read_u64(&buffer), read_num_bytes!(u64, 8, &buffer),);\n    }\n\n    #[test]\n    fn test_read_num_bytes_u32() {\n        let buffer: Vec<u8> = vec![0, 1, 2, 3];\n        for size in 0..buffer.len() {\n            assert_eq!(\n                read_num_bytes_u32(size, &buffer),\n                read_num_bytes!(u32, size, &buffer),\n            );\n        }\n    }\n\n    #[test]\n    fn test_read_u32() {\n        let buffer: Vec<u8> = vec![0, 1, 2, 3];\n        assert_eq!(read_u32(&buffer), read_num_bytes!(u32, 4, &buffer),);\n    }\n\n    #[test]\n    fn test_bit_reader_get_byte_offset() {\n        let buffer = vec![255; 10];\n        let mut bit_reader = BitReader::from(buffer);\n        assert_eq!(bit_reader.get_byte_offset(), 0); // offset (0 bytes, 0 bits)\n        bit_reader.get_value::<i32>(6);\n        assert_eq!(bit_reader.get_byte_offset(), 1); // offset (0 bytes, 6 bits)\n        bit_reader.get_value::<i32>(10);\n        assert_eq!(bit_reader.get_byte_offset(), 2); // offset (0 bytes, 16 bits)\n        bit_reader.get_value::<i32>(20);\n        assert_eq!(bit_reader.get_byte_offset(), 5); // offset (0 bytes, 36 bits)\n        bit_reader.get_value::<i32>(30);\n        assert_eq!(bit_reader.get_byte_offset(), 9); // offset (8 bytes, 2 bits)\n    }\n\n    #[test]\n    fn test_bit_reader_get_value() {\n        let buffer = vec![255, 0];\n        let mut bit_reader = BitReader::from(buffer);\n        assert_eq!(bit_reader.get_value::<i32>(1), Some(1));\n        assert_eq!(bit_reader.get_value::<i32>(2), Some(3));\n        assert_eq!(bit_reader.get_value::<i32>(3), Some(7));\n        assert_eq!(bit_reader.get_value::<i32>(4), Some(3));\n    }\n\n    #[test]\n    fn test_bit_reader_get_value_boundary() {\n        let buffer = vec![10, 0, 0, 0, 20, 0, 30, 0, 0, 0, 40, 0];\n        let mut bit_reader = BitReader::from(buffer);\n        assert_eq!(bit_reader.get_value::<i64>(32), Some(10));\n        assert_eq!(bit_reader.get_value::<i64>(16), Some(20));\n        assert_eq!(bit_reader.get_value::<i64>(32), Some(30));\n        assert_eq!(bit_reader.get_value::<i64>(16), Some(40));\n    }\n\n    #[test]\n    fn test_bit_reader_get_aligned() {\n        // 01110101 11001011\n        let buffer = Buffer::from(&[0x75, 0xCB]);\n        let mut bit_reader = BitReader::new_all(buffer.clone());\n        assert_eq!(bit_reader.get_value::<i32>(3), Some(5));\n        assert_eq!(bit_reader.get_aligned::<i32>(1), Some(203));\n        assert_eq!(bit_reader.get_value::<i32>(1), None);\n        bit_reader.reset(buffer);\n        assert_eq!(bit_reader.get_aligned::<i32>(3), None);\n    }\n\n    
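// Illustrative spot-checks for `round_upto_power_of_2` (not part of the original\n    // suite): it rounds `num` up to the next multiple of the power-of-two `factor`,\n    // and leaves exact multiples (including zero) unchanged.\n    #[test]\n    fn test_round_upto_power_of_2_examples() {\n        assert_eq!(round_upto_power_of_2(0, 8), 0);\n        assert_eq!(round_upto_power_of_2(1, 8), 8);\n        assert_eq!(round_upto_power_of_2(8, 8), 8);\n        assert_eq!(round_upto_power_of_2(9, 8), 16);\n        assert_eq!(round_upto_power_of_2(17, 16), 32);\n    }\n\n    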
#[test]\n    fn test_bit_reader_get_vlq_int() {\n        // 10001001 00000001 11110010 10110101 00000110\n        let buffer: Vec<u8> = vec![0x89, 0x01, 0xF2, 0xB5, 0x06];\n        let mut bit_reader = BitReader::from(buffer);\n        assert_eq!(bit_reader.get_vlq_int(), Some(137));\n        assert_eq!(bit_reader.get_vlq_int(), Some(105202));\n    }\n\n    #[test]\n    fn test_bit_reader_get_zigzag_vlq_int() {\n        let buffer: Vec<u8> = vec![0, 1, 2, 3];\n        let mut bit_reader = BitReader::from(buffer);\n        assert_eq!(bit_reader.get_zigzag_vlq_int(), Some(0));\n        assert_eq!(bit_reader.get_zigzag_vlq_int(), Some(-1));\n        assert_eq!(bit_reader.get_zigzag_vlq_int(), Some(1));\n        assert_eq!(bit_reader.get_zigzag_vlq_int(), Some(-2));\n    }\n\n    #[test]\n    fn test_set_bit() {\n        let mut buffer = vec![0, 0, 0];\n        set_bit(&mut buffer[..], 1);\n        assert_eq!(buffer, vec![2, 0, 0]);\n        set_bit(&mut buffer[..], 4);\n        assert_eq!(buffer, vec![18, 0, 0]);\n        unset_bit(&mut buffer[..], 1);\n        assert_eq!(buffer, vec![16, 0, 0]);\n        set_bit(&mut buffer[..], 10);\n        assert_eq!(buffer, vec![16, 4, 0]);\n        set_bit(&mut buffer[..], 10);\n        assert_eq!(buffer, vec![16, 4, 0]);\n        set_bit(&mut buffer[..], 11);\n        assert_eq!(buffer, vec![16, 12, 0]);\n        unset_bit(&mut buffer[..], 10);\n        assert_eq!(buffer, vec![16, 8, 0]);\n    }\n\n    #[test]\n    fn test_set_bits() {\n        for offset in 0..=16 {\n            for length in 0..=16 {\n                let mut actual = vec![0, 0, 0, 0];\n                set_bits(&mut actual[..], offset, length);\n                let mut expected = vec![0, 0, 0, 0];\n                for i in 0..length {\n                    set_bit(&mut expected, offset + i);\n                }\n                assert_eq!(actual, expected);\n            }\n        }\n    }\n\n    #[test]\n    fn test_get_bit() {\n        // 00001101\n        assert!(get_bit(&[0b00001101], 0));\n        assert!(!get_bit(&[0b00001101], 1));\n        assert!(get_bit(&[0b00001101], 2));\n        assert!(get_bit(&[0b00001101], 3));\n\n        // 01001001 01010010\n        assert!(get_bit(&[0b01001001, 0b01010010], 0));\n        assert!(!get_bit(&[0b01001001, 0b01010010], 1));\n        assert!(!get_bit(&[0b01001001, 0b01010010], 2));\n        assert!(get_bit(&[0b01001001, 0b01010010], 3));\n        assert!(!get_bit(&[0b01001001, 0b01010010], 4));\n        assert!(!get_bit(&[0b01001001, 0b01010010], 5));\n        assert!(get_bit(&[0b01001001, 0b01010010], 6));\n        assert!(!get_bit(&[0b01001001, 0b01010010], 7));\n        assert!(!get_bit(&[0b01001001, 0b01010010], 8));\n        assert!(get_bit(&[0b01001001, 0b01010010], 9));\n        assert!(!get_bit(&[0b01001001, 0b01010010], 10));\n        assert!(!get_bit(&[0b01001001, 0b01010010], 11));\n        assert!(get_bit(&[0b01001001, 0b01010010], 12));\n        assert!(!get_bit(&[0b01001001, 0b01010010], 13));\n        assert!(get_bit(&[0b01001001, 0b01010010], 14));\n        assert!(!get_bit(&[0b01001001, 0b01010010], 15));\n    }\n\n    #[test]\n    fn test_log2() {\n        assert_eq!(log2(1), 0);\n        assert_eq!(log2(2), 1);\n        assert_eq!(log2(3), 2);\n        assert_eq!(log2(4), 2);\n        assert_eq!(log2(5), 3);\n        assert_eq!(log2(6), 3);\n        assert_eq!(log2(7), 3);\n        assert_eq!(log2(8), 3);\n        assert_eq!(log2(9), 4);\n    }\n\n    #[test]\n    fn test_skip() {\n        
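// skip() flushes any buffered bits and reserves the requested bytes, returning\n        // the old offset so the reserved byte can be back-filled later with\n        // put_aligned_offset.\n        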
let mut writer = BitWriter::new(5);\n        let old_offset = writer.skip(1).expect(\"skip() should return OK\");\n        writer.put_aligned(42, 4);\n        writer.put_aligned_offset(0x10, 1, old_offset);\n        let result = writer.consume();\n        assert_eq!(result.as_ref(), [0x10, 42, 0, 0, 0]);\n\n        writer = BitWriter::new(4);\n        let result = writer.skip(5);\n        assert!(result.is_err());\n    }\n\n    #[test]\n    fn test_get_next_byte_ptr() {\n        let mut writer = BitWriter::new(5);\n        {\n            let first_byte = writer\n                .get_next_byte_ptr(1)\n                .expect(\"get_next_byte_ptr() should return OK\");\n            first_byte[0] = 0x10;\n        }\n        writer.put_aligned(42, 4);\n        let result = writer.consume();\n        assert_eq!(result.as_ref(), [0x10, 42, 0, 0, 0]);\n    }\n\n    #[test]\n    fn test_consume_flush_buffer() {\n        let mut writer1 = BitWriter::new(3);\n        let mut writer2 = BitWriter::new(3);\n        for i in 1..10 {\n            writer1.put_value(i, 4);\n            writer2.put_value(i, 4);\n        }\n        let res1 = writer1.flush_buffer();\n        let res2 = writer2.consume();\n        assert_eq!(res1, &res2[..]);\n    }\n\n    #[test]\n    fn test_put_get_bool() {\n        let len = 8;\n        let mut writer = BitWriter::new(len);\n\n        for i in 0..8 {\n            let result = writer.put_value(i % 2, 1);\n            assert!(result);\n        }\n\n        writer.flush();\n        {\n            let buffer = writer.buffer();\n            assert_eq!(buffer[0], 0b10101010);\n        }\n\n        // Write 00110011\n        for i in 0..8 {\n            let result = match i {\n                0 | 1 | 4 | 5 => writer.put_value(false as u64, 1),\n                _ => writer.put_value(true as u64, 1),\n            };\n            assert!(result);\n        }\n        writer.flush();\n        {\n            let buffer = writer.buffer();\n            assert_eq!(buffer[0], 0b10101010);\n            assert_eq!(buffer[1], 0b11001100);\n        }\n\n        let mut reader = BitReader::from(writer.consume());\n\n        for i in 0..8 {\n            let val = reader\n                .get_value::<u8>(1)\n                .expect(\"get_value() should return OK\");\n            assert_eq!(val, i % 2);\n        }\n\n        for i in 0..8 {\n            let val = reader\n                .get_value::<bool>(1)\n                .expect(\"get_value() should return OK\");\n            match i {\n                0 | 1 | 4 | 5 => assert!(!val),\n                _ => assert!(val),\n            }\n        }\n    }\n\n    #[test]\n    fn test_put_value_roundtrip() {\n        test_put_value_rand_numbers(32, 2);\n        test_put_value_rand_numbers(32, 3);\n        test_put_value_rand_numbers(32, 4);\n        test_put_value_rand_numbers(32, 5);\n        test_put_value_rand_numbers(32, 6);\n        test_put_value_rand_numbers(32, 7);\n        test_put_value_rand_numbers(32, 8);\n        test_put_value_rand_numbers(64, 16);\n        test_put_value_rand_numbers(64, 24);\n        test_put_value_rand_numbers(64, 32);\n    }\n\n    fn test_put_value_rand_numbers(total: usize, num_bits: usize) {\n        assert!(num_bits < 64);\n        let num_bytes = num_bits.div_ceil(8);\n        let mut writer = BitWriter::new(num_bytes * total);\n        let values: Vec<u64> = random_numbers::<u64>(total)\n            .iter()\n            .map(|v| v & ((1 << num_bits) - 1))\n            .collect();\n        
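// Each value was masked to its low num_bits above, so put_value's debug_assert\n        // (v >> num_bits == 0) holds for every write below.\n        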
(0..total).for_each(|i| {\n            assert!(\n                writer.put_value(values[i], num_bits),\n                \"[{i}]: put_value() failed\"\n            );\n        });\n\n        let mut reader = BitReader::from(writer.consume());\n        (0..total).for_each(|i| {\n            let v = reader\n                .get_value::<u64>(num_bits)\n                .expect(\"get_value() should return OK\");\n            assert_eq!(\n                v, values[i],\n                \"[{}]: expected {} but got {}\",\n                i, values[i], v\n            );\n        });\n    }\n\n    #[test]\n    fn test_get_bits() {\n        const NUM_BYTES: usize = 100;\n\n        let mut vec = vec![0; NUM_BYTES];\n        let total_num_bits = NUM_BYTES * 8;\n        let v = random_bools(total_num_bits);\n        (0..total_num_bits).for_each(|i| {\n            if v[i] {\n                set_bit(&mut vec, i);\n            } else {\n                unset_bit(&mut vec, i);\n            }\n        });\n\n        let expected = vec.clone();\n\n        // test reading the first time from a buffer\n        for &(offset, num_bits) in [(0, 10), (2, 10), (8, 16), (25, 40), (7, 64)].iter() {\n            let mut reader = BitReader::from(vec.clone());\n            let mut buffer = vec![0; NUM_BYTES];\n\n            let actual_bits_read = reader.get_bits(&mut buffer, offset, num_bits);\n            let expected_bits_read = ::std::cmp::min(buffer.len() * 8 - offset, num_bits);\n            assert_eq!(expected_bits_read, actual_bits_read);\n\n            for i in 0..actual_bits_read {\n                assert_eq!(get_bit(&expected, i), get_bit(&buffer, offset + i));\n            }\n        }\n\n        // test reading consecutively from a buffer\n        let mut reader = BitReader::from(vec);\n        let mut buffer = vec![0; NUM_BYTES];\n        let mut rng = rand::rng();\n        let mut bits_read = 0;\n\n        loop {\n            if bits_read >= total_num_bits {\n                break;\n            }\n            let n: u64 = rng.random();\n            let num_bits = n % 20;\n            bits_read += reader.get_bits(&mut buffer, bits_read, num_bits as usize);\n        }\n\n        assert_eq!(total_num_bits, bits_read);\n        assert_eq!(&expected, &buffer);\n    }\n\n    #[test]\n    fn test_skip_bits() {\n        const NUM_BYTES: usize = 100;\n\n        let mut vec = vec![0; NUM_BYTES];\n        let total_num_bits = NUM_BYTES * 8;\n        let v = random_bools(total_num_bits);\n        (0..total_num_bits).for_each(|i| {\n            if v[i] {\n                set_bit(&mut vec, i);\n            } else {\n                unset_bit(&mut vec, i);\n            }\n        });\n\n        let expected = vec.clone();\n\n        // test skipping and check the next value\n        let mut reader = BitReader::from(vec);\n        let mut bits_read = 0;\n        for &num_bits in [10, 60, 8].iter() {\n            let actual_bits_read = reader.skip_bits(num_bits);\n            assert_eq!(num_bits, actual_bits_read);\n\n            bits_read += num_bits;\n            assert_eq!(Some(get_bit(&expected, bits_read)), reader.get_value(1));\n            bits_read += 1;\n        }\n\n        // test skipping consecutively\n        let mut rng = rand::rng();\n        loop {\n            if bits_read >= total_num_bits {\n                break;\n            }\n            let n: u64 = rng.random();\n            let num_bits = n % 20;\n            bits_read += reader.skip_bits(num_bits as usize);\n        }\n\n        
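// Every bit must be accounted for exactly once, whether consumed by skip_bits\n        // or by the single-bit get_value reads above.\n        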
assert_eq!(total_num_bits, bits_read);\n    }\n\n    #[test]\n    fn test_get_batch() {\n        const SIZE: &[usize] = &[1, 31, 32, 33, 128, 129];\n        for s in SIZE {\n            for i in 0..33 {\n                match i {\n                    0..=8 => test_get_batch_helper::<u8>(*s, i),\n                    9..=16 => test_get_batch_helper::<u16>(*s, i),\n                    _ => test_get_batch_helper::<u32>(*s, i),\n                }\n            }\n        }\n    }\n\n    fn test_get_batch_helper<T>(total: usize, num_bits: usize)\n    where\n        T: FromBytes + Default + Clone + Debug + Eq,\n    {\n        assert!(num_bits <= 32);\n        let num_bytes = num_bits.div_ceil(8);\n        let mut writer = BitWriter::new(num_bytes * total);\n\n        let values: Vec<u32> = random_numbers::<u32>(total)\n            .iter()\n            .map(|v| v & ((1u64 << num_bits) - 1) as u32)\n            .collect();\n\n        // Generic values used to check against actual values read from `get_batch`.\n        let expected_values: Vec<T> = values.iter().map(|v| from_ne_slice(v.as_bytes())).collect();\n\n        (0..total).for_each(|i| {\n            assert!(writer.put_value(values[i] as u64, num_bits));\n        });\n\n        let buf = writer.consume();\n        let mut reader = BitReader::from(buf);\n        let mut batch = vec![T::default(); values.len()];\n        let values_read = reader.get_batch::<T>(&mut batch, num_bits);\n        assert_eq!(values_read, values.len());\n        for i in 0..batch.len() {\n            assert_eq!(\n                batch[i], expected_values[i],\n                \"num_bits = {num_bits}, index = {i}\"\n            );\n        }\n    }\n\n    #[test]\n    fn test_get_u32_batch() {\n        const SIZE: &[usize] = &[1, 31, 32, 33, 128, 129];\n        for total in SIZE {\n            for num_bits in 1..33 {\n                let num_bytes = usize::div_ceil(num_bits, 8);\n                let mut writer = BitWriter::new(num_bytes * total);\n\n                let values: Vec<u32> = random_numbers::<u32>(*total)\n                    .iter()\n                    .map(|v| v & ((1u64 << num_bits) - 1) as u32)\n                    .collect();\n\n                (0..*total).for_each(|i| {\n                    assert!(writer.put_value(values[i] as u64, num_bits));\n                });\n\n                let buf = writer.consume();\n                let mut reader = BitReader::from(buf);\n                let mut batch = vec![0u32; values.len()];\n                unsafe {\n                    reader.get_u32_batch(batch.as_mut_ptr(), *total, num_bits);\n                }\n                for i in 0..batch.len() {\n                    assert_eq!(batch[i], values[i], \"num_bits = {num_bits}, index = {i}\");\n                }\n            }\n        }\n    }\n\n    #[test]\n    fn test_put_aligned_roundtrip() {\n        test_put_aligned_rand_numbers::<u8>(4, 3);\n        test_put_aligned_rand_numbers::<u8>(16, 5);\n        test_put_aligned_rand_numbers::<i16>(32, 7);\n        test_put_aligned_rand_numbers::<i16>(32, 9);\n        test_put_aligned_rand_numbers::<i32>(32, 11);\n        test_put_aligned_rand_numbers::<i32>(32, 13);\n        test_put_aligned_rand_numbers::<i64>(32, 17);\n        test_put_aligned_rand_numbers::<i64>(32, 23);\n    }\n\n    fn test_put_aligned_rand_numbers<T>(total: usize, num_bits: usize)\n    where\n        T: Copy + FromBytes + AsBytes + Debug + PartialEq,\n        StandardUniform: Distribution<T>,\n    {\n        assert!(num_bits <= 32);\n        
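// The loop below interleaves the two write paths: even indices use\n        // bit-packed put_value(), odd indices use put_aligned(), hence the\n        // requirement that `total` is even.\n        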
assert_eq!(total % 2, 0);\n\n        let aligned_value_byte_width = std::mem::size_of::<T>();\n        let value_byte_width = num_bits.div_ceil(8);\n        let mut writer =\n            BitWriter::new((total / 2) * (aligned_value_byte_width + value_byte_width));\n        let values: Vec<u32> = random_numbers::<u32>(total / 2)\n            .iter()\n            .map(|v| v & ((1 << num_bits) - 1))\n            .collect();\n        let aligned_values = random_numbers::<T>(total / 2);\n\n        for i in 0..total {\n            let j = i / 2;\n            if i % 2 == 0 {\n                assert!(\n                    writer.put_value(values[j] as u64, num_bits),\n                    \"[{i}]: put_value() failed\"\n                );\n            } else {\n                assert!(\n                    writer.put_aligned::<T>(aligned_values[j], aligned_value_byte_width),\n                    \"[{i}]: put_aligned() failed\"\n                );\n            }\n        }\n\n        let mut reader = BitReader::from(writer.consume());\n        for i in 0..total {\n            let j = i / 2;\n            if i % 2 == 0 {\n                let v = reader\n                    .get_value::<u64>(num_bits)\n                    .expect(\"get_value() should return OK\");\n                assert_eq!(\n                    v, values[j] as u64,\n                    \"[{}]: expected {} but got {}\",\n                    i, values[j], v\n                );\n            } else {\n                let v = reader\n                    .get_aligned::<T>(aligned_value_byte_width)\n                    .expect(\"get_aligned() should return OK\");\n                assert_eq!(\n                    v, aligned_values[j],\n                    \"[{}]: expected {:?} but got {:?}\",\n                    i, aligned_values[j], v\n                );\n            }\n        }\n    }\n\n    #[test]\n    fn test_put_vlq_int() {\n        let total = 64;\n        let mut writer = BitWriter::new(total * 32);\n        let values = random_numbers::<u32>(total);\n        (0..total).for_each(|i| {\n            assert!(\n                writer.put_vlq_int(values[i] as u64),\n                \"[{i}]: put_vlq_int() failed\"\n            );\n        });\n\n        let mut reader = BitReader::from(writer.consume());\n        (0..total).for_each(|i| {\n            let v = reader\n                .get_vlq_int()\n                .expect(\"get_vlq_int() should return OK\");\n            assert_eq!(\n                v as u32, values[i],\n                \"[{}]: expected {} but got {}\",\n                i, values[i], v\n            );\n        });\n    }\n\n    #[test]\n    fn test_put_zigzag_vlq_int() {\n        let total = 64;\n        let mut writer = BitWriter::new(total * 32);\n        let values = random_numbers::<i32>(total);\n        (0..total).for_each(|i| {\n            assert!(\n                writer.put_zigzag_vlq_int(values[i] as i64),\n                \"[{i}]: put_zigzag_vlq_int() failed\"\n            );\n        });\n\n        let mut reader = BitReader::from(writer.consume());\n        (0..total).for_each(|i| {\n            let v = reader\n                .get_zigzag_vlq_int()\n                .expect(\"get_zigzag_vlq_int() should return OK\");\n            assert_eq!(\n                v as i32, values[i],\n                \"[{}]: expected {} but got {}\",\n                i, values[i], v\n            );\n        });\n    }\n}\n"
  },
  {
    "path": "native/core/src/common/buffer.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::common::bit;\nuse crate::execution::operators::ExecutionError;\nuse arrow::buffer::Buffer as ArrowBuffer;\nuse std::{\n    alloc::{handle_alloc_error, Layout},\n    ptr::NonNull,\n    sync::Arc,\n};\n\n/// A buffer implementation. This is very similar to Arrow's [`MutableBuffer`] implementation,\n/// except that there are two modes depending on whether `owned` is true or false.\n///\n/// If `owned` is true, this behaves the same way as a Arrow [`MutableBuffer`], and the struct is\n/// the unique owner for the memory it wraps. The holder of this buffer can read or write the\n/// buffer, and the buffer itself will be released when it goes out of scope.\n///\n/// Also note that, in `owned` mode, the buffer is always filled with 0s, and its length is always\n/// equal to its capacity. It's up to the caller to decide which part of the buffer contains valid\n/// data.\n///\n/// If `owned` is false, this buffer is an alias to another buffer. 
The buffer itself becomes\n/// immutable and can only be read.\n#[derive(Debug)]\npub struct CometBuffer {\n    data: NonNull<u8>,\n    len: usize,\n    capacity: usize,\n    /// Whether this buffer owns the data it points to.\n    owned: bool,\n    /// The allocation instance for this buffer.\n    allocation: Arc<CometBufferAllocation>,\n}\n\nunsafe impl Sync for CometBuffer {}\nunsafe impl Send for CometBuffer {}\n\n/// All buffers are aligned to 64 bytes.\nconst ALIGNMENT: usize = 64;\n\nimpl CometBuffer {\n    /// Initializes an owned buffer filled with 0s.\n    pub fn new(capacity: usize) -> Self {\n        let aligned_capacity = bit::round_upto_power_of_2(capacity, ALIGNMENT);\n        unsafe {\n            let layout = Layout::from_size_align_unchecked(aligned_capacity, ALIGNMENT);\n            let ptr = std::alloc::alloc_zeroed(layout);\n            Self {\n                data: NonNull::new(ptr).unwrap_or_else(|| handle_alloc_error(layout)),\n                len: aligned_capacity,\n                capacity: aligned_capacity,\n                owned: true,\n                allocation: Arc::new(CometBufferAllocation::new()),\n            }\n        }\n    }\n\n    /// Returns the capacity of this buffer.\n    pub fn capacity(&self) -> usize {\n        self.capacity\n    }\n\n    /// Returns the length (i.e., number of bytes) in this buffer.\n    pub fn len(&self) -> usize {\n        self.len\n    }\n\n    /// Whether this buffer is empty.\n    pub fn is_empty(&self) -> bool {\n        self.len == 0\n    }\n\n    /// Returns the data stored in this buffer as a slice.\n    pub fn as_slice(&self) -> &[u8] {\n        self\n    }\n\n    /// Returns the data stored in this buffer as a mutable slice.\n    pub fn as_slice_mut(&mut self) -> &mut [u8] {\n        debug_assert!(self.owned, \"cannot modify un-owned buffer\");\n        self\n    }\n\n    /// Returns a raw pointer to this buffer's internal memory.\n    /// This pointer is guaranteed to be aligned along cache-lines.\n    #[inline]\n    pub const fn as_ptr(&self) -> *const u8 {\n        self.data.as_ptr()\n    }\n\n    /// Returns a mutable raw pointer to this buffer's internal memory.\n    /// This pointer is guaranteed to be aligned along cache-lines.\n    #[inline]\n    pub fn as_mut_ptr(&mut self) -> *mut u8 {\n        debug_assert!(self.owned, \"cannot modify un-owned buffer\");\n        self.data.as_ptr()\n    }\n\n    /// Returns an immutable Arrow buffer over the content of this buffer.\n    ///\n    /// # Safety\n    ///\n    /// This function is highly unsafe since it leaks the raw pointer to the memory region that\n    /// this buffer is tracking. Because of this, the caller of this function is expected to make\n    /// sure the returned immutable [`ArrowBuffer`] never outlives this buffer. Otherwise it will\n    /// result in dangling pointers.\n    ///\n    /// In the particular case of the columnar reader, we'll guarantee the above since the reader\n    /// itself is closed at the very end, after the Spark task is completed (either successfully or\n    /// unsuccessfully) through a task completion listener.\n    ///\n    /// When re-using [`MutableVector`] in Comet native operators across multiple input batches,\n    /// because of the iterator-style pattern, the content of the original mutable buffer is only\n    /// updated once upstream operators have fully consumed the previous output batch. 
Breaking\n    /// operators are responsible for copying content out of the buffers.\n    pub unsafe fn to_arrow(&self) -> Result<ArrowBuffer, ExecutionError> {\n        let ptr = NonNull::new_unchecked(self.data.as_ptr());\n        self.check_reference()?;\n        Ok(ArrowBuffer::from_custom_allocation(\n            ptr,\n            self.len,\n            Arc::<CometBufferAllocation>::clone(&self.allocation),\n        ))\n    }\n\n    /// Checks if this buffer is exclusively owned by Comet. If not, an error is returned.\n    /// We run this check when we want to update the buffer. If the buffer is also shared by\n    /// other components, e.g. one DataFusion operator stores the buffer, Comet cannot safely\n    /// modify the buffer.\n    pub fn check_reference(&self) -> Result<(), ExecutionError> {\n        if Arc::strong_count(&self.allocation) > 1 {\n            Err(ExecutionError::GeneralError(\n                \"Error on modifying a buffer which is not exclusively owned by Comet\".to_string(),\n            ))\n        } else {\n            Ok(())\n        }\n    }\n\n    /// Resets this buffer by filling all bytes with zeros.\n    pub fn reset(&mut self) {\n        debug_assert!(self.owned, \"cannot modify un-owned buffer\");\n        unsafe {\n            std::ptr::write_bytes(self.as_mut_ptr(), 0, self.len);\n        }\n    }\n\n    /// Resizes this buffer to `new_capacity`. Any additional bytes allocated are filled with 0.\n    /// If `new_capacity` does not exceed the current length of this buffer, this is a no-op.\n    #[inline(always)]\n    pub fn resize(&mut self, new_capacity: usize) {\n        assert!(self.owned, \"cannot modify un-owned buffer\");\n        if new_capacity > self.len {\n            let (ptr, new_capacity) =\n                unsafe { Self::reallocate(self.data, self.capacity, new_capacity) };\n            let diff = new_capacity - self.len;\n            self.data = ptr;\n            self.capacity = new_capacity;\n            // zero-fill the newly allocated bytes\n            unsafe { self.data.as_ptr().add(self.len).write_bytes(0, diff) };\n            self.len = new_capacity;\n        }\n    }\n\n    unsafe fn reallocate(\n        ptr: NonNull<u8>,\n        old_capacity: usize,\n        new_capacity: usize,\n    ) -> (NonNull<u8>, usize) {\n        let new_capacity = bit::round_upto_power_of_2(new_capacity, ALIGNMENT);\n        let new_capacity = std::cmp::max(new_capacity, old_capacity * 2);\n        let raw_ptr = std::alloc::realloc(\n            ptr.as_ptr(),\n            Layout::from_size_align_unchecked(old_capacity, ALIGNMENT),\n            new_capacity,\n        );\n        let ptr = NonNull::new(raw_ptr).unwrap_or_else(|| {\n            handle_alloc_error(Layout::from_size_align_unchecked(new_capacity, ALIGNMENT))\n        });\n        (ptr, new_capacity)\n    }\n}\n\nimpl Drop for CometBuffer {\n    fn drop(&mut self) {\n        if self.owned {\n            unsafe {\n                std::alloc::dealloc(\n                    self.data.as_ptr(),\n                    Layout::from_size_align_unchecked(self.capacity, ALIGNMENT),\n                )\n            }\n        }\n    }\n}\n\nimpl PartialEq for CometBuffer {\n    fn eq(&self, other: &CometBuffer) -> bool {\n        if self.data.as_ptr() == other.data.as_ptr() {\n            return true;\n        }\n        if self.len != other.len {\n            return false;\n        }\n        if self.capacity != other.capacity {\n            return false;\n        }\n        self.as_slice() == 
other.as_slice()\n    }\n}\n\nimpl std::ops::Deref for CometBuffer {\n    type Target = [u8];\n\n    fn deref(&self) -> &[u8] {\n        unsafe { std::slice::from_raw_parts(self.as_ptr(), self.len) }\n    }\n}\n\nimpl std::ops::DerefMut for CometBuffer {\n    fn deref_mut(&mut self) -> &mut [u8] {\n        assert!(self.owned, \"cannot modify un-owned buffer\");\n        unsafe { std::slice::from_raw_parts_mut(self.as_mut_ptr(), self.capacity) }\n    }\n}\n\n#[derive(Debug)]\nstruct CometBufferAllocation {}\n\nimpl CometBufferAllocation {\n    fn new() -> Self {\n        Self {}\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::buffer::Buffer as ArrowBuffer;\n\n    impl CometBuffer {\n        pub fn from_ptr(ptr: *const u8, len: usize, capacity: usize) -> Self {\n            assert_eq!(\n                capacity % ALIGNMENT,\n                0,\n                \"input buffer is not aligned to {ALIGNMENT} bytes\"\n            );\n            Self {\n                data: NonNull::new(ptr as *mut u8).unwrap_or_else(|| {\n                    panic!(\n                        \"cannot create CometBuffer from (ptr: {ptr:?}, len: {len}, capacity: {capacity})\"\n                    )\n                }),\n                len,\n                capacity,\n                owned: false,\n                allocation: Arc::new(CometBufferAllocation::new()),\n            }\n        }\n\n        /// Copies the bytes from `src` into this buffer (must be an owned buffer),\n        /// starting at position `offset`.\n        pub fn extend_from_slice(&mut self, offset: usize, src: &[u8]) {\n            assert!(self.owned, \"cannot modify un-owned buffer\");\n            assert!(\n                offset + src.len() <= self.capacity(),\n                \"buffer overflow, offset = {}, src.len = {}, capacity = {}\",\n                offset,\n                src.len(),\n                self.capacity()\n            );\n\n            unsafe {\n                let dst = self.data.as_ptr().add(offset);\n                std::ptr::copy_nonoverlapping(src.as_ptr(), dst, src.len())\n            }\n        }\n    }\n\n    #[test]\n    fn test_buffer_new() {\n        let buf = CometBuffer::new(63);\n        assert_eq!(64, buf.capacity());\n        assert_eq!(64, buf.len());\n        assert!(!buf.is_empty());\n    }\n\n    #[test]\n    fn test_resize() {\n        let mut buf = CometBuffer::new(1);\n        assert_eq!(64, buf.capacity());\n        assert_eq!(64, buf.len());\n\n        buf.resize(100);\n        assert_eq!(128, buf.capacity());\n        assert_eq!(128, buf.len());\n\n        // resizing to a smaller capacity is a no-op\n        buf.resize(20);\n        assert_eq!(128, buf.capacity());\n        assert_eq!(128, buf.len());\n    }\n\n    #[test]\n    fn test_extend_from_slice() {\n        let mut buf = CometBuffer::new(100);\n        buf.extend_from_slice(0, b\"hello\");\n        assert_eq!(b\"hello\", &buf.as_slice()[0..5]);\n\n        buf.extend_from_slice(5, b\" world\");\n        assert_eq!(b\"hello world\", &buf.as_slice()[0..11]);\n\n        buf.reset();\n        buf.extend_from_slice(0, b\"hello arrow\");\n        assert_eq!(b\"hello arrow\", &buf.as_slice()[0..11]);\n    }\n\n    #[test]\n    fn test_to_arrow() {\n        let mut buf = CometBuffer::new(1);\n\n        let str = b\"aaaa bbbb cccc dddd\";\n        buf.extend_from_slice(0, str.as_slice());\n\n        assert_eq!(64, buf.len());\n        assert_eq!(64, buf.capacity());\n        assert_eq!(b\"aaaa bbbb cccc dddd\", 
&buf.as_slice()[0..str.len()]);\n\n        unsafe {\n            let immutable_buf: ArrowBuffer = buf.to_arrow().unwrap();\n            assert_eq!(64, immutable_buf.len());\n            assert_eq!(str, &immutable_buf.as_slice()[0..str.len()]);\n        }\n    }\n\n    #[test]\n    fn test_unowned() {\n        let arrow_buf = ArrowBuffer::from(b\"hello comet\");\n        let buf = CometBuffer::from_ptr(arrow_buf.as_ptr(), arrow_buf.len(), arrow_buf.capacity());\n\n        assert_eq!(11, buf.len());\n        assert_eq!(64, buf.capacity());\n        assert_eq!(b\"hello comet\", &buf.as_slice()[0..11]);\n\n        unsafe {\n            let arrow_buf2 = buf.to_arrow().unwrap();\n            assert_eq!(arrow_buf, arrow_buf2);\n        }\n    }\n}\n"
  },
  {
    "path": "native/core/src/common/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n#[macro_use]\npub mod bit;\nmod buffer;\npub use buffer::*;\n"
  },
  {
    "path": "native/core/src/execution/columnar_to_row.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Native implementation of columnar to row conversion for Spark UnsafeRow format.\n//!\n//! This module converts Arrow columnar data to Spark's UnsafeRow format, which is used\n//! for row-based operations in Spark. The conversion is done in native code for better\n//! performance compared to the JVM implementation.\n//!\n//! # UnsafeRow Format\n//!\n//! ```text\n//! ┌─────────────────────────────────────────────────────────────┐\n//! │ Null Bitset: ((numFields + 63) / 64) * 8 bytes             │\n//! ├─────────────────────────────────────────────────────────────┤\n//! │ Fixed-width portion: 8 bytes per field                      │\n//! │ - Primitives: value stored directly (in lowest bytes)       │\n//! │ - Variable-length: (offset << 32) | length                  │\n//! ├─────────────────────────────────────────────────────────────┤\n//! │ Variable-length data: 8-byte aligned                        │\n//! └─────────────────────────────────────────────────────────────┘\n//! ```\n\nuse crate::errors::{CometError, CometResult};\nuse arrow::array::types::{\n    ArrowDictionaryKeyType, Int16Type, Int32Type, Int64Type, Int8Type, UInt16Type, UInt32Type,\n    UInt64Type, UInt8Type,\n};\nuse arrow::array::*;\nuse arrow::compute::{cast_with_options, CastOptions};\nuse arrow::datatypes::{ArrowNativeType, DataType, TimeUnit};\nuse std::sync::Arc;\n\n/// Maximum digits for decimal that can fit in a long (8 bytes).\nconst MAX_LONG_DIGITS: u8 = 18;\n\n/// Helper macro for downcasting arrays with consistent error messages.\nmacro_rules! downcast_array {\n    ($array:expr, $array_type:ty) => {\n        $array\n            .as_any()\n            .downcast_ref::<$array_type>()\n            .ok_or_else(|| {\n                CometError::Internal(format!(\n                    \"Failed to downcast to {}, actual type: {:?}\",\n                    stringify!($array_type),\n                    $array.data_type()\n                ))\n            })\n    };\n}\n\n/// Macro to implement is_null for typed array enums.\n/// Generates a complete match expression for all variants that have an array as first field.\nmacro_rules! impl_is_null {\n    ($self:expr, $row_idx:expr, [$($variant:ident),+ $(,)?]) => {\n        match $self {\n            $(Self::$variant(arr, ..) => arr.is_null($row_idx),)+\n        }\n    };\n    // Version with special handling for Null variant\n    ($self:expr, $row_idx:expr, null_always_true, [$($variant:ident),+ $(,)?]) => {\n        match $self {\n            Self::Null => true,\n            $(Self::$variant(arr, ..) => arr.is_null($row_idx),)+\n        }\n    };\n}\n\n/// Macro to generate TypedElements::from_array match arms for primitive types.\nmacro_rules! 
typed_elements_from_primitive {\n    ($array:expr, $element_type:expr, $(($dt:pat, $variant:ident, $arr_type:ty)),+ $(,)?) => {\n        match $element_type {\n            $(\n                $dt => {\n                    if let Some(arr) = $array.as_any().downcast_ref::<$arr_type>() {\n                        return TypedElements::$variant(arr);\n                    }\n                }\n            )+\n            _ => {}\n        }\n    };\n}\n\n/// Macro for write_column_fixed_width arms - handles downcast + loop pattern.\nmacro_rules! write_fixed_column_primitive {\n    ($self:expr, $array:expr, $row_size:expr, $field_offset:expr, $num_rows:expr,\n     $arr_type:ty, $to_i64:expr) => {{\n        let arr = downcast_array!($array, $arr_type)?;\n        for row_idx in 0..$num_rows {\n            if !arr.is_null(row_idx) {\n                let offset = row_idx * $row_size + $field_offset;\n                let value: i64 = $to_i64(arr.value(row_idx));\n                $self.buffer[offset..offset + 8].copy_from_slice(&value.to_le_bytes());\n            }\n        }\n        Ok(())\n    }};\n}\n\n/// Macro for get_field_value arms - handles downcast + value extraction.\nmacro_rules! get_field_value_primitive {\n    ($array:expr, $row_idx:expr, $arr_type:ty, $to_i64:expr) => {{\n        let arr = downcast_array!($array, $arr_type)?;\n        Ok($to_i64(arr.value($row_idx)))\n    }};\n}\n\n/// Macro for write_struct_to_buffer fixed-width field extraction.\nmacro_rules! extract_fixed_value {\n    ($column:expr, $row_idx:expr, $(($dt:pat, $arr_type:ty, $to_i64:expr)),+ $(,)?) => {\n        match $column.data_type() {\n            $(\n                $dt => {\n                    let arr = downcast_array!($column, $arr_type)?;\n                    Some($to_i64(arr.value($row_idx)))\n                }\n            )+\n            _ => None,\n        }\n    };\n}\n\n/// Writes bytes to buffer with 8-byte alignment padding.\n/// Returns the unpadded length.\n#[inline]\nfn write_bytes_padded(buffer: &mut Vec<u8>, bytes: &[u8]) -> usize {\n    let len = bytes.len();\n    buffer.extend_from_slice(bytes);\n    let padding = round_up_to_8(len) - len;\n    buffer.extend(std::iter::repeat_n(0u8, padding));\n    len\n}\n\n/// Pre-downcast array reference to avoid type dispatch in inner loops.\n/// This enum holds references to concrete array types, allowing direct access\n/// without repeated downcast_ref calls.\nenum TypedArray<'a> {\n    Null,\n    Boolean(&'a BooleanArray),\n    Int8(&'a Int8Array),\n    Int16(&'a Int16Array),\n    Int32(&'a Int32Array),\n    Int64(&'a Int64Array),\n    Float32(&'a Float32Array),\n    Float64(&'a Float64Array),\n    Date32(&'a Date32Array),\n    TimestampMicro(&'a TimestampMicrosecondArray),\n    Decimal128(&'a Decimal128Array, u8), // array + precision\n    String(&'a StringArray),\n    LargeString(&'a LargeStringArray),\n    Binary(&'a BinaryArray),\n    LargeBinary(&'a LargeBinaryArray),\n    FixedSizeBinary(&'a FixedSizeBinaryArray),\n    Struct(\n        &'a StructArray,\n        arrow::datatypes::Fields,\n        Vec<TypedElements<'a>>,\n    ),\n    List(&'a ListArray, arrow::datatypes::FieldRef),\n    LargeList(&'a LargeListArray, arrow::datatypes::FieldRef),\n    Map(&'a MapArray, arrow::datatypes::FieldRef),\n    Dictionary(&'a ArrayRef, DataType), // fallback for dictionary types\n}\n\nimpl<'a> TypedArray<'a> {\n    /// Pre-downcast an ArrayRef to a TypedArray.\n    fn from_array(array: &'a ArrayRef) -> CometResult<Self> {\n        let actual_type = 
array.data_type();\n        match actual_type {\n            DataType::Null => {\n                // Verify the array is actually a NullArray, but we don't need to store the reference\n                // since all values are null by definition\n                downcast_array!(array, NullArray)?;\n                Ok(TypedArray::Null)\n            }\n            DataType::Boolean => Ok(TypedArray::Boolean(downcast_array!(array, BooleanArray)?)),\n            DataType::Int8 => Ok(TypedArray::Int8(downcast_array!(array, Int8Array)?)),\n            DataType::Int16 => Ok(TypedArray::Int16(downcast_array!(array, Int16Array)?)),\n            DataType::Int32 => Ok(TypedArray::Int32(downcast_array!(array, Int32Array)?)),\n            DataType::Int64 => Ok(TypedArray::Int64(downcast_array!(array, Int64Array)?)),\n            DataType::Float32 => Ok(TypedArray::Float32(downcast_array!(array, Float32Array)?)),\n            DataType::Float64 => Ok(TypedArray::Float64(downcast_array!(array, Float64Array)?)),\n            DataType::Date32 => Ok(TypedArray::Date32(downcast_array!(array, Date32Array)?)),\n            DataType::Timestamp(TimeUnit::Microsecond, _) => Ok(TypedArray::TimestampMicro(\n                downcast_array!(array, TimestampMicrosecondArray)?,\n            )),\n            DataType::Decimal128(p, _) => Ok(TypedArray::Decimal128(\n                downcast_array!(array, Decimal128Array)?,\n                *p,\n            )),\n            DataType::Utf8 => Ok(TypedArray::String(downcast_array!(array, StringArray)?)),\n            DataType::LargeUtf8 => Ok(TypedArray::LargeString(downcast_array!(\n                array,\n                LargeStringArray\n            )?)),\n            DataType::Binary => Ok(TypedArray::Binary(downcast_array!(array, BinaryArray)?)),\n            DataType::LargeBinary => Ok(TypedArray::LargeBinary(downcast_array!(\n                array,\n                LargeBinaryArray\n            )?)),\n            DataType::FixedSizeBinary(_) => Ok(TypedArray::FixedSizeBinary(downcast_array!(\n                array,\n                FixedSizeBinaryArray\n            )?)),\n            DataType::Struct(fields) => {\n                let struct_arr = downcast_array!(array, StructArray)?;\n                // Pre-downcast all struct fields once\n                let typed_fields: Vec<TypedElements> = fields\n                    .iter()\n                    .enumerate()\n                    .map(|(idx, field)| {\n                        TypedElements::from_array(struct_arr.column(idx), field.data_type())\n                    })\n                    .collect();\n                Ok(TypedArray::Struct(struct_arr, fields.clone(), typed_fields))\n            }\n            DataType::List(field) => Ok(TypedArray::List(\n                downcast_array!(array, ListArray)?,\n                Arc::clone(field),\n            )),\n            DataType::LargeList(field) => Ok(TypedArray::LargeList(\n                downcast_array!(array, LargeListArray)?,\n                Arc::clone(field),\n            )),\n            DataType::Map(field, _) => Ok(TypedArray::Map(\n                downcast_array!(array, MapArray)?,\n                Arc::clone(field),\n            )),\n            DataType::Dictionary(_, _) => Ok(TypedArray::Dictionary(array, actual_type.clone())),\n            _ => Err(CometError::Internal(format!(\n                \"Unsupported data type for pre-downcast: {:?}\",\n                actual_type\n            ))),\n        }\n    }\n\n    /// Check if the value at the given 
index is null.\n    #[inline]\n    fn is_null(&self, row_idx: usize) -> bool {\n        impl_is_null!(\n            self,\n            row_idx,\n            null_always_true,\n            [\n                Boolean,\n                Int8,\n                Int16,\n                Int32,\n                Int64,\n                Float32,\n                Float64,\n                Date32,\n                TimestampMicro,\n                Decimal128,\n                String,\n                LargeString,\n                Binary,\n                LargeBinary,\n                FixedSizeBinary,\n                Struct,\n                List,\n                LargeList,\n                Map,\n                Dictionary\n            ]\n        )\n    }\n\n    /// Get the fixed-width value as i64 (for types that fit in 8 bytes).\n    #[inline]\n    fn get_fixed_value(&self, row_idx: usize) -> i64 {\n        match self {\n            TypedArray::Boolean(arr) => arr.value(row_idx) as i64,\n            TypedArray::Int8(arr) => arr.value(row_idx) as i64,\n            TypedArray::Int16(arr) => arr.value(row_idx) as i64,\n            TypedArray::Int32(arr) => arr.value(row_idx) as i64,\n            TypedArray::Int64(arr) => arr.value(row_idx),\n            TypedArray::Float32(arr) => arr.value(row_idx).to_bits() as i64,\n            TypedArray::Float64(arr) => arr.value(row_idx).to_bits() as i64,\n            TypedArray::Date32(arr) => arr.value(row_idx) as i64,\n            TypedArray::TimestampMicro(arr) => arr.value(row_idx),\n            TypedArray::Decimal128(arr, precision) if *precision <= MAX_LONG_DIGITS => {\n                arr.value(row_idx) as i64\n            }\n            TypedArray::Decimal128(_, _) => 0, // Variable-length decimal, handled elsewhere\n            // Variable-length types return 0, actual value written separately\n            _ => 0,\n        }\n    }\n\n    /// Check if this is a variable-length type.\n    #[inline]\n    fn is_variable_length(&self) -> bool {\n        match self {\n            TypedArray::Null\n            | TypedArray::Boolean(_)\n            | TypedArray::Int8(_)\n            | TypedArray::Int16(_)\n            | TypedArray::Int32(_)\n            | TypedArray::Int64(_)\n            | TypedArray::Float32(_)\n            | TypedArray::Float64(_)\n            | TypedArray::Date32(_)\n            | TypedArray::TimestampMicro(_) => false,\n            TypedArray::Decimal128(_, precision) => *precision > MAX_LONG_DIGITS,\n            _ => true,\n        }\n    }\n\n    /// Write variable-length data to buffer. 
Returns actual length (0 if not variable-length).\n    fn write_variable_to_buffer(&self, buffer: &mut Vec<u8>, row_idx: usize) -> CometResult<usize> {\n        match self {\n            TypedArray::String(arr) => {\n                Ok(write_bytes_padded(buffer, arr.value(row_idx).as_bytes()))\n            }\n            TypedArray::LargeString(arr) => {\n                Ok(write_bytes_padded(buffer, arr.value(row_idx).as_bytes()))\n            }\n            TypedArray::Binary(arr) => Ok(write_bytes_padded(buffer, arr.value(row_idx))),\n            TypedArray::LargeBinary(arr) => Ok(write_bytes_padded(buffer, arr.value(row_idx))),\n            TypedArray::FixedSizeBinary(arr) => Ok(write_bytes_padded(buffer, arr.value(row_idx))),\n            TypedArray::Decimal128(arr, precision) if *precision > MAX_LONG_DIGITS => {\n                let bytes = i128_to_spark_decimal_bytes(arr.value(row_idx));\n                Ok(write_bytes_padded(buffer, &bytes))\n            }\n            TypedArray::Struct(arr, fields, typed_fields) => {\n                write_struct_to_buffer_typed(buffer, arr, row_idx, fields, typed_fields)\n            }\n            TypedArray::List(arr, field) => write_list_to_buffer(buffer, arr, row_idx, field),\n            TypedArray::LargeList(arr, field) => {\n                write_large_list_to_buffer(buffer, arr, row_idx, field)\n            }\n            TypedArray::Map(arr, field) => write_map_to_buffer(buffer, arr, row_idx, field),\n            TypedArray::Dictionary(arr, schema_type) => {\n                if let DataType::Dictionary(key_type, value_type) = schema_type {\n                    write_dictionary_to_buffer(\n                        buffer,\n                        arr,\n                        row_idx,\n                        key_type.as_ref(),\n                        value_type.as_ref(),\n                    )\n                } else {\n                    Err(CometError::Internal(format!(\n                        \"Expected Dictionary type but got {:?}\",\n                        schema_type\n                    )))\n                }\n            }\n            _ => Ok(0), // Fixed-width types\n        }\n    }\n}\n\n/// Pre-downcast element array for list/array types.\n/// This allows direct access to element values without per-row allocation.\nenum TypedElements<'a> {\n    Boolean(&'a BooleanArray),\n    Int8(&'a Int8Array),\n    Int16(&'a Int16Array),\n    Int32(&'a Int32Array),\n    Int64(&'a Int64Array),\n    Float32(&'a Float32Array),\n    Float64(&'a Float64Array),\n    Date32(&'a Date32Array),\n    TimestampMicro(&'a TimestampMicrosecondArray),\n    Decimal128(&'a Decimal128Array, u8),\n    String(&'a StringArray),\n    LargeString(&'a LargeStringArray),\n    Binary(&'a BinaryArray),\n    LargeBinary(&'a LargeBinaryArray),\n    FixedSizeBinary(&'a FixedSizeBinaryArray),\n    // For nested types, fall back to ArrayRef\n    Other(&'a ArrayRef, DataType),\n}\n\nimpl<'a> TypedElements<'a> {\n    /// Create from an ArrayRef and element type.\n    fn from_array(array: &'a ArrayRef, element_type: &DataType) -> Self {\n        // Try primitive types first using macro\n        typed_elements_from_primitive!(\n            array,\n            element_type,\n            (DataType::Boolean, Boolean, BooleanArray),\n            (DataType::Int8, Int8, Int8Array),\n            (DataType::Int16, Int16, Int16Array),\n            (DataType::Int32, Int32, Int32Array),\n            (DataType::Int64, Int64, Int64Array),\n            (DataType::Float32, Float32, 
Float32Array),\n            (DataType::Float64, Float64, Float64Array),\n            (DataType::Date32, Date32, Date32Array),\n            (DataType::Utf8, String, StringArray),\n            (DataType::LargeUtf8, LargeString, LargeStringArray),\n            (DataType::Binary, Binary, BinaryArray),\n            (DataType::LargeBinary, LargeBinary, LargeBinaryArray),\n        );\n\n        // Handle special cases that need extra processing\n        match element_type {\n            DataType::Timestamp(TimeUnit::Microsecond, _) => {\n                if let Some(arr) = array.as_any().downcast_ref::<TimestampMicrosecondArray>() {\n                    return TypedElements::TimestampMicro(arr);\n                }\n            }\n            DataType::Decimal128(p, _) => {\n                if let Some(arr) = array.as_any().downcast_ref::<Decimal128Array>() {\n                    return TypedElements::Decimal128(arr, *p);\n                }\n            }\n            DataType::FixedSizeBinary(_) => {\n                if let Some(arr) = array.as_any().downcast_ref::<FixedSizeBinaryArray>() {\n                    return TypedElements::FixedSizeBinary(arr);\n                }\n            }\n            _ => {}\n        }\n        TypedElements::Other(array, element_type.clone())\n    }\n\n    /// Get element size for UnsafeArrayData format.\n    fn element_size(&self) -> usize {\n        match self {\n            TypedElements::Boolean(_) => 1,\n            TypedElements::Int8(_) => 1,\n            TypedElements::Int16(_) => 2,\n            TypedElements::Int32(_) | TypedElements::Date32(_) | TypedElements::Float32(_) => 4,\n            TypedElements::Int64(_)\n            | TypedElements::TimestampMicro(_)\n            | TypedElements::Float64(_) => 8,\n            TypedElements::Decimal128(_, p) if *p <= MAX_LONG_DIGITS => 8,\n            _ => 8, // Variable-length uses 8 bytes for offset+length\n        }\n    }\n\n    /// Check if this is a fixed-width primitive type that supports bulk copy.\n    fn supports_bulk_copy(&self) -> bool {\n        matches!(\n            self,\n            TypedElements::Int8(_)\n                | TypedElements::Int16(_)\n                | TypedElements::Int32(_)\n                | TypedElements::Int64(_)\n                | TypedElements::Float32(_)\n                | TypedElements::Float64(_)\n                | TypedElements::Date32(_)\n                | TypedElements::TimestampMicro(_)\n        )\n    }\n\n    /// Check if value at given index is null.\n    #[inline]\n    fn is_null_at(&self, idx: usize) -> bool {\n        impl_is_null!(\n            self,\n            idx,\n            [\n                Boolean,\n                Int8,\n                Int16,\n                Int32,\n                Int64,\n                Float32,\n                Float64,\n                Date32,\n                TimestampMicro,\n                Decimal128,\n                String,\n                LargeString,\n                Binary,\n                LargeBinary,\n                FixedSizeBinary,\n                Other\n            ]\n        )\n    }\n\n    /// Check if this is a fixed-width type (value fits in 8-byte slot).\n    #[inline]\n    fn is_fixed_width(&self) -> bool {\n        match self {\n            TypedElements::Boolean(_)\n            | TypedElements::Int8(_)\n            | TypedElements::Int16(_)\n            | TypedElements::Int32(_)\n            | TypedElements::Int64(_)\n            | TypedElements::Float32(_)\n            | TypedElements::Float64(_)\n    
        | TypedElements::Date32(_)\n            | TypedElements::TimestampMicro(_) => true,\n            TypedElements::Decimal128(_, p) => *p <= MAX_LONG_DIGITS,\n            _ => false,\n        }\n    }\n\n    /// Get fixed-width value as i64 for the 8-byte field slot.\n    #[inline]\n    fn get_fixed_value(&self, idx: usize) -> i64 {\n        match self {\n            TypedElements::Boolean(arr) => arr.value(idx) as i64,\n            TypedElements::Int8(arr) => arr.value(idx) as i64,\n            TypedElements::Int16(arr) => arr.value(idx) as i64,\n            TypedElements::Int32(arr) => arr.value(idx) as i64,\n            TypedElements::Int64(arr) => arr.value(idx),\n            TypedElements::Float32(arr) => (arr.value(idx).to_bits() as i32) as i64,\n            TypedElements::Float64(arr) => arr.value(idx).to_bits() as i64,\n            TypedElements::Date32(arr) => arr.value(idx) as i64,\n            TypedElements::TimestampMicro(arr) => arr.value(idx),\n            TypedElements::Decimal128(arr, _) => arr.value(idx) as i64,\n            _ => 0, // Should not be called for variable-length types\n        }\n    }\n\n    /// Write variable-length data to buffer. Returns length written (0 for fixed-width).\n    fn write_variable_value(&self, buffer: &mut Vec<u8>, idx: usize) -> CometResult<usize> {\n        match self {\n            TypedElements::String(arr) => Ok(write_bytes_padded(buffer, arr.value(idx).as_bytes())),\n            TypedElements::LargeString(arr) => {\n                Ok(write_bytes_padded(buffer, arr.value(idx).as_bytes()))\n            }\n            TypedElements::Binary(arr) => Ok(write_bytes_padded(buffer, arr.value(idx))),\n            TypedElements::LargeBinary(arr) => Ok(write_bytes_padded(buffer, arr.value(idx))),\n            TypedElements::FixedSizeBinary(arr) => Ok(write_bytes_padded(buffer, arr.value(idx))),\n            TypedElements::Decimal128(arr, precision) if *precision > MAX_LONG_DIGITS => {\n                let bytes = i128_to_spark_decimal_bytes(arr.value(idx));\n                Ok(write_bytes_padded(buffer, &bytes))\n            }\n            TypedElements::Other(arr, element_type) => {\n                write_nested_variable_to_buffer(buffer, element_type, arr, idx)\n            }\n            _ => Ok(0), // Fixed-width types\n        }\n    }\n\n    /// Write a range of elements to buffer in UnsafeArrayData format.\n    /// Returns the total bytes written (including header).\n    fn write_range_to_buffer(\n        &self,\n        buffer: &mut Vec<u8>,\n        start_idx: usize,\n        num_elements: usize,\n    ) -> CometResult<usize> {\n        let element_size = self.element_size();\n        let array_start = buffer.len();\n        let element_bitset_width = ColumnarToRowContext::calculate_bitset_width(num_elements);\n\n        // Write number of elements\n        buffer.extend_from_slice(&(num_elements as i64).to_le_bytes());\n\n        // Reserve space for null bitset\n        let null_bitset_start = buffer.len();\n        buffer.resize(null_bitset_start + element_bitset_width, 0);\n\n        // Reserve space for element values\n        let elements_start = buffer.len();\n        let elements_total_size = round_up_to_8(num_elements * element_size);\n        buffer.resize(elements_start + elements_total_size, 0);\n\n        // Try bulk copy for primitive types\n        if self.supports_bulk_copy() {\n            self.bulk_copy_range(\n                buffer,\n                null_bitset_start,\n                elements_start,\n           
     start_idx,\n                num_elements,\n            );\n            return Ok(buffer.len() - array_start);\n        }\n\n        // Handle other types element by element\n        self.write_elements_slow(\n            buffer,\n            array_start,\n            null_bitset_start,\n            elements_start,\n            element_size,\n            start_idx,\n            num_elements,\n        )\n    }\n\n    /// Bulk copy primitive values from a range.\n    fn bulk_copy_range(\n        &self,\n        buffer: &mut [u8],\n        null_bitset_start: usize,\n        elements_start: usize,\n        start_idx: usize,\n        num_elements: usize,\n    ) {\n        macro_rules! bulk_copy_range {\n            ($arr:expr, $elem_size:expr) => {{\n                let values_slice = $arr.values();\n                let byte_len = num_elements * $elem_size;\n                let src_start = start_idx * $elem_size;\n                debug_assert!(\n                    src_start + byte_len <= values_slice.len() * $elem_size,\n                    \"bulk_copy_range: source slice out of bounds: src_start={}, byte_len={}, values_len={}, elem_size={}\",\n                    src_start, byte_len, values_slice.len() * $elem_size, $elem_size\n                );\n                debug_assert!(\n                    elements_start + byte_len <= buffer.len(),\n                    \"bulk_copy_range: destination slice out of bounds: elements_start={}, byte_len={}, buffer_len={}\",\n                    elements_start, byte_len, buffer.len()\n                );\n                let src_bytes = unsafe {\n                    std::slice::from_raw_parts(\n                        (values_slice.as_ptr() as *const u8).add(src_start),\n                        byte_len,\n                    )\n                };\n                buffer[elements_start..elements_start + byte_len].copy_from_slice(src_bytes);\n\n                // Set null bits\n                if $arr.null_count() > 0 {\n                    for i in 0..num_elements {\n                        if $arr.is_null(start_idx + i) {\n                            let word_idx = i / 64;\n                            let bit_idx = i % 64;\n                            let word_offset = null_bitset_start + word_idx * 8;\n                            let mut word = i64::from_le_bytes(\n                                buffer[word_offset..word_offset + 8].try_into().unwrap(),\n                            );\n                            word |= 1i64 << bit_idx;\n                            buffer[word_offset..word_offset + 8]\n                                .copy_from_slice(&word.to_le_bytes());\n                        }\n                    }\n                }\n            }};\n        }\n\n        match self {\n            TypedElements::Int8(arr) => bulk_copy_range!(arr, 1),\n            TypedElements::Int16(arr) => bulk_copy_range!(arr, 2),\n            TypedElements::Int32(arr) => bulk_copy_range!(arr, 4),\n            TypedElements::Int64(arr) => bulk_copy_range!(arr, 8),\n            TypedElements::Float32(arr) => bulk_copy_range!(arr, 4),\n            TypedElements::Float64(arr) => bulk_copy_range!(arr, 8),\n            TypedElements::Date32(arr) => bulk_copy_range!(arr, 4),\n            TypedElements::TimestampMicro(arr) => bulk_copy_range!(arr, 8),\n            _ => {} // Should not reach here due to supports_bulk_copy check\n        }\n    }\n\n    /// Slow path for non-bulk-copyable types.\n    #[allow(clippy::too_many_arguments)]\n    fn write_elements_slow(\n  
      &self,\n        buffer: &mut Vec<u8>,\n        array_start: usize,\n        null_bitset_start: usize,\n        elements_start: usize,\n        element_size: usize,\n        start_idx: usize,\n        num_elements: usize,\n    ) -> CometResult<usize> {\n        match self {\n            TypedElements::Boolean(arr) => {\n                for i in 0..num_elements {\n                    let src_idx = start_idx + i;\n                    if arr.is_null(src_idx) {\n                        set_null_bit(buffer, null_bitset_start, i);\n                    } else {\n                        buffer[elements_start + i] = if arr.value(src_idx) { 1 } else { 0 };\n                    }\n                }\n            }\n            TypedElements::Decimal128(arr, precision) if *precision <= MAX_LONG_DIGITS => {\n                for i in 0..num_elements {\n                    let src_idx = start_idx + i;\n                    if arr.is_null(src_idx) {\n                        set_null_bit(buffer, null_bitset_start, i);\n                    } else {\n                        let slot_offset = elements_start + i * 8;\n                        let value = arr.value(src_idx) as i64;\n                        buffer[slot_offset..slot_offset + 8].copy_from_slice(&value.to_le_bytes());\n                    }\n                }\n            }\n            TypedElements::Decimal128(arr, _) => {\n                // Large decimal - variable length\n                for i in 0..num_elements {\n                    let src_idx = start_idx + i;\n                    if arr.is_null(src_idx) {\n                        set_null_bit(buffer, null_bitset_start, i);\n                    } else {\n                        let bytes = i128_to_spark_decimal_bytes(arr.value(src_idx));\n                        let len = write_bytes_padded(buffer, &bytes);\n                        let data_offset = buffer.len() - round_up_to_8(len) - array_start;\n                        let offset_and_len = ((data_offset as i64) << 32) | (len as i64);\n                        let slot_offset = elements_start + i * 8;\n                        buffer[slot_offset..slot_offset + 8]\n                            .copy_from_slice(&offset_and_len.to_le_bytes());\n                    }\n                }\n            }\n            TypedElements::String(arr) => {\n                for i in 0..num_elements {\n                    let src_idx = start_idx + i;\n                    if arr.is_null(src_idx) {\n                        set_null_bit(buffer, null_bitset_start, i);\n                    } else {\n                        let len = write_bytes_padded(buffer, arr.value(src_idx).as_bytes());\n                        let data_offset = buffer.len() - round_up_to_8(len) - array_start;\n                        let offset_and_len = ((data_offset as i64) << 32) | (len as i64);\n                        let slot_offset = elements_start + i * 8;\n                        buffer[slot_offset..slot_offset + 8]\n                            .copy_from_slice(&offset_and_len.to_le_bytes());\n                    }\n                }\n            }\n            TypedElements::LargeString(arr) => {\n                for i in 0..num_elements {\n                    let src_idx = start_idx + i;\n                    if arr.is_null(src_idx) {\n                        set_null_bit(buffer, null_bitset_start, i);\n                    } else {\n                        let len = write_bytes_padded(buffer, arr.value(src_idx).as_bytes());\n                        let data_offset = buffer.len() - 
round_up_to_8(len) - array_start;\n                        let offset_and_len = ((data_offset as i64) << 32) | (len as i64);\n                        let slot_offset = elements_start + i * 8;\n                        buffer[slot_offset..slot_offset + 8]\n                            .copy_from_slice(&offset_and_len.to_le_bytes());\n                    }\n                }\n            }\n            TypedElements::Binary(arr) => {\n                for i in 0..num_elements {\n                    let src_idx = start_idx + i;\n                    if arr.is_null(src_idx) {\n                        set_null_bit(buffer, null_bitset_start, i);\n                    } else {\n                        let len = write_bytes_padded(buffer, arr.value(src_idx));\n                        let data_offset = buffer.len() - round_up_to_8(len) - array_start;\n                        let offset_and_len = ((data_offset as i64) << 32) | (len as i64);\n                        let slot_offset = elements_start + i * 8;\n                        buffer[slot_offset..slot_offset + 8]\n                            .copy_from_slice(&offset_and_len.to_le_bytes());\n                    }\n                }\n            }\n            TypedElements::LargeBinary(arr) => {\n                for i in 0..num_elements {\n                    let src_idx = start_idx + i;\n                    if arr.is_null(src_idx) {\n                        set_null_bit(buffer, null_bitset_start, i);\n                    } else {\n                        let len = write_bytes_padded(buffer, arr.value(src_idx));\n                        let data_offset = buffer.len() - round_up_to_8(len) - array_start;\n                        let offset_and_len = ((data_offset as i64) << 32) | (len as i64);\n                        let slot_offset = elements_start + i * 8;\n                        buffer[slot_offset..slot_offset + 8]\n                            .copy_from_slice(&offset_and_len.to_le_bytes());\n                    }\n                }\n            }\n            TypedElements::Other(arr, element_type) => {\n                // Fall back to old method for nested types\n                for i in 0..num_elements {\n                    let src_idx = start_idx + i;\n                    if arr.is_null(src_idx) {\n                        set_null_bit(buffer, null_bitset_start, i);\n                    } else {\n                        let slot_offset = elements_start + i * element_size;\n                        let var_len =\n                            write_nested_variable_to_buffer(buffer, element_type, arr, src_idx)?;\n\n                        if var_len > 0 {\n                            let padded_len = round_up_to_8(var_len);\n                            let data_offset = buffer.len() - padded_len - array_start;\n                            let offset_and_len = ((data_offset as i64) << 32) | (var_len as i64);\n                            buffer[slot_offset..slot_offset + 8]\n                                .copy_from_slice(&offset_and_len.to_le_bytes());\n                        } else {\n                            let value = get_field_value(element_type, arr, src_idx)?;\n                            write_array_element(buffer, element_type, value, slot_offset);\n                        }\n                    }\n                }\n            }\n            _ => {\n                // Should not reach here - all cases covered above\n            }\n        }\n        Ok(buffer.len() - array_start)\n    }\n}\n\n/// Helper to set a null bit in the 
buffer.\n#[inline]\nfn set_null_bit(buffer: &mut [u8], null_bitset_start: usize, idx: usize) {\n    let word_idx = idx / 64;\n    let bit_idx = idx % 64;\n    let word_offset = null_bitset_start + word_idx * 8;\n    let mut word = i64::from_le_bytes(buffer[word_offset..word_offset + 8].try_into().unwrap());\n    word |= 1i64 << bit_idx;\n    buffer[word_offset..word_offset + 8].copy_from_slice(&word.to_le_bytes());\n}\n\n/// Check if a data type is fixed-width for UnsafeRow purposes.\n/// Fixed-width types are stored directly in the 8-byte field slot.\n#[inline]\nfn is_fixed_width(data_type: &DataType) -> bool {\n    match data_type {\n        DataType::Boolean\n        | DataType::Int8\n        | DataType::Int16\n        | DataType::Int32\n        | DataType::Int64\n        | DataType::Float32\n        | DataType::Float64\n        | DataType::Date32\n        | DataType::Timestamp(TimeUnit::Microsecond, _) => true,\n        DataType::Decimal128(p, _) => *p <= MAX_LONG_DIGITS,\n        _ => false,\n    }\n}\n\n/// Check if all columns in a schema are fixed-width.\n#[inline]\nfn is_all_fixed_width(schema: &[DataType]) -> bool {\n    schema.iter().all(is_fixed_width)\n}\n\n/// Context for columnar to row conversion.\n///\n/// This struct maintains the output buffer and schema information needed for\n/// converting Arrow columnar data to Spark UnsafeRow format. The buffer is\n/// reused across multiple `convert` calls to minimize allocations.\npub struct ColumnarToRowContext {\n    /// The Arrow data types for each column.\n    schema: Vec<DataType>,\n    /// The output buffer containing converted rows.\n    /// Layout: [Row0][Row1]...[RowN] where each row is an UnsafeRow.\n    buffer: Vec<u8>,\n    /// Byte offset where each row starts in the buffer.\n    offsets: Vec<i32>,\n    /// Byte length of each row.\n    lengths: Vec<i32>,\n    /// Pre-calculated null bitset width in bytes.\n    null_bitset_width: usize,\n    /// Pre-calculated fixed-width portion size in bytes (null bitset + 8 bytes per field).\n    fixed_width_size: usize,\n    /// Maximum batch size for pre-allocation.\n    _batch_size: usize,\n    /// Whether all columns are fixed-width (enables fast path).\n    all_fixed_width: bool,\n}\n\nimpl ColumnarToRowContext {\n    /// Creates a new ColumnarToRowContext with the given schema.\n    ///\n    /// # Arguments\n    ///\n    /// * `schema` - The Arrow data types for each column.\n    /// * `batch_size` - Maximum number of rows expected per batch (for pre-allocation).\n    pub fn new(schema: Vec<DataType>, batch_size: usize) -> Self {\n        let num_fields = schema.len();\n        let null_bitset_width = Self::calculate_bitset_width(num_fields);\n        let fixed_width_size = null_bitset_width + num_fields * 8;\n        let all_fixed_width = is_all_fixed_width(&schema);\n\n        // Pre-allocate buffer for maximum batch size\n        // For fixed-width schemas, we know exact size; otherwise estimate\n        let estimated_row_size = if all_fixed_width {\n            fixed_width_size\n        } else {\n            fixed_width_size + 64 // Conservative estimate for variable-length data\n        };\n        let initial_capacity = batch_size * estimated_row_size;\n\n        Self {\n            schema,\n            buffer: Vec::with_capacity(initial_capacity),\n            offsets: Vec::with_capacity(batch_size),\n            lengths: Vec::with_capacity(batch_size),\n            null_bitset_width,\n            fixed_width_size,\n            _batch_size: batch_size,\n            
all_fixed_width,\n        }\n    }\n\n    /// Calculate the width of the null bitset in bytes.\n    /// This matches Spark's `UnsafeRow.calculateBitSetWidthInBytes`.\n    #[inline]\n    pub const fn calculate_bitset_width(num_fields: usize) -> usize {\n        num_fields.div_ceil(64) * 8\n    }\n\n    /// Round up to the nearest multiple of 8 for alignment.\n    #[inline]\n    const fn round_up_to_8(value: usize) -> usize {\n        value.div_ceil(8) * 8\n    }\n\n    /// Converts Arrow arrays to Spark UnsafeRow format.\n    ///\n    /// # Arguments\n    ///\n    /// * `arrays` - The Arrow arrays to convert, one per column.\n    /// * `num_rows` - The number of rows to convert.\n    ///\n    /// # Returns\n    ///\n    /// A tuple containing:\n    /// - A pointer to the output buffer\n    /// - A reference to the offsets array\n    /// - A reference to the lengths array\n    pub fn convert(\n        &mut self,\n        arrays: &[ArrayRef],\n        num_rows: usize,\n    ) -> CometResult<(*const u8, &[i32], &[i32])> {\n        if arrays.len() != self.schema.len() {\n            return Err(CometError::Internal(format!(\n                \"Column count mismatch: expected {}, got {}\",\n                self.schema.len(),\n                arrays.len()\n            )));\n        }\n\n        // Unpack any dictionary arrays to their underlying value type\n        // This is needed because Parquet may return dictionary-encoded arrays\n        // even when the schema expects a specific type like Decimal128\n        let arrays: Vec<ArrayRef> = arrays\n            .iter()\n            .zip(self.schema.iter())\n            .map(|(arr, schema_type)| Self::maybe_cast_to_schema_type(arr, schema_type))\n            .collect::<CometResult<Vec<_>>>()?;\n        let arrays = arrays.as_slice();\n\n        // Clear previous data\n        self.buffer.clear();\n        self.offsets.clear();\n        self.lengths.clear();\n\n        // Reserve space for offsets and lengths\n        self.offsets.reserve(num_rows);\n        self.lengths.reserve(num_rows);\n\n        // Use fast path for fixed-width-only schemas\n        if self.all_fixed_width {\n            return self.convert_fixed_width(arrays, num_rows);\n        }\n\n        // Pre-downcast all arrays to avoid type dispatch in inner loop\n        let typed_arrays: Vec<TypedArray> = arrays\n            .iter()\n            .map(TypedArray::from_array)\n            .collect::<CometResult<Vec<_>>>()?;\n\n        // Pre-compute variable-length column indices (once per batch, not per row)\n        let var_len_indices: Vec<usize> = typed_arrays\n            .iter()\n            .enumerate()\n            .filter(|(_, arr)| arr.is_variable_length())\n            .map(|(idx, _)| idx)\n            .collect();\n\n        // Process each row (general path for variable-length data)\n        for row_idx in 0..num_rows {\n            let row_start = self.buffer.len();\n            self.offsets.push(row_start as i32);\n\n            // Write fixed-width portion (null bitset + field values)\n            self.write_row_typed(&typed_arrays, &var_len_indices, row_idx)?;\n\n            let row_end = self.buffer.len();\n            self.lengths.push((row_end - row_start) as i32);\n        }\n\n        Ok((self.buffer.as_ptr(), &self.offsets, &self.lengths))\n    }\n\n    /// Casts an array to match the expected schema type if needed.\n    /// This handles cases where:\n    /// 1. Parquet returns dictionary-encoded arrays but the schema expects a non-dictionary type\n    /// 2. 
Parquet returns NullArray when all values are null, but the schema expects a typed array\n    /// 3. Parquet returns Int32/Int64 for small-precision decimals but schema expects Decimal128\n    fn maybe_cast_to_schema_type(\n        array: &ArrayRef,\n        schema_type: &DataType,\n    ) -> CometResult<ArrayRef> {\n        let actual_type = array.data_type();\n\n        // If types already match, no cast needed\n        if actual_type == schema_type {\n            return Ok(Arc::clone(array));\n        }\n\n        match (actual_type, schema_type) {\n            (DataType::Dictionary(_, _), schema)\n                if !matches!(schema, DataType::Dictionary(_, _)) =>\n            {\n                // Unpack dictionary if the schema type is not a dictionary\n                let options = CastOptions::default();\n                cast_with_options(array, schema_type, &options).map_err(|e| {\n                    CometError::Internal(format!(\n                        \"Failed to unpack dictionary array from {:?} to {:?}: {}\",\n                        actual_type, schema_type, e\n                    ))\n                })\n            }\n            (DataType::Null, _) => {\n                // Cast NullArray to the expected schema type\n                // This happens when all values in a column are null\n                let options = CastOptions::default();\n                cast_with_options(array, schema_type, &options).map_err(|e| {\n                    CometError::Internal(format!(\n                        \"Failed to cast NullArray to {:?}: {}\",\n                        schema_type, e\n                    ))\n                })\n            }\n            (DataType::Int32, DataType::Decimal128(precision, scale)) => {\n                // Parquet stores small-precision decimals as Int32 for efficiency.\n                // When COMET_USE_DECIMAL_128 is false, BatchReader produces these types.\n                // The Int32 value is already scaled (e.g., -1 means -0.01 for scale 2).\n                // We need to reinterpret (not cast) to Decimal128 preserving the value.\n                let int_array = array.as_any().downcast_ref::<Int32Array>().ok_or_else(|| {\n                    CometError::Internal(\"Failed to downcast to Int32Array\".to_string())\n                })?;\n                let decimal_array: Decimal128Array = int_array\n                    .iter()\n                    .map(|v| v.map(|x| x as i128))\n                    .collect::<Decimal128Array>()\n                    .with_precision_and_scale(*precision, *scale)\n                    .map_err(|e| {\n                        CometError::Internal(format!(\"Invalid decimal precision/scale: {}\", e))\n                    })?;\n                Ok(Arc::new(decimal_array))\n            }\n            (DataType::Int64, DataType::Decimal128(precision, scale)) => {\n                // Same as Int32 but for medium-precision decimals stored as Int64.\n                let int_array = array.as_any().downcast_ref::<Int64Array>().ok_or_else(|| {\n                    CometError::Internal(\"Failed to downcast to Int64Array\".to_string())\n                })?;\n                let decimal_array: Decimal128Array = int_array\n                    .iter()\n                    .map(|v| v.map(|x| x as i128))\n                    .collect::<Decimal128Array>()\n                    .with_precision_and_scale(*precision, *scale)\n                    .map_err(|e| {\n                        CometError::Internal(format!(\"Invalid decimal precision/scale: 
{}\", e))\n                    })?;\n                Ok(Arc::new(decimal_array))\n            }\n            _ => {\n                // For any other type mismatch, attempt an Arrow cast.\n                // This handles cases like Int32 → Date32 (which can happen when Spark\n                // generates default column values using the physical storage type rather\n                // than the logical type).\n                let options = CastOptions::default();\n                cast_with_options(array, schema_type, &options).map_err(|e| {\n                    CometError::Internal(format!(\n                        \"Failed to cast array from {:?} to {:?}: {}\",\n                        actual_type, schema_type, e\n                    ))\n                })\n            }\n        }\n    }\n\n    /// Fast path for schemas with only fixed-width columns.\n    /// Pre-allocates entire buffer and processes more efficiently.\n    fn convert_fixed_width(\n        &mut self,\n        arrays: &[ArrayRef],\n        num_rows: usize,\n    ) -> CometResult<(*const u8, &[i32], &[i32])> {\n        let row_size = self.fixed_width_size;\n        let total_size = row_size * num_rows;\n        let null_bitset_width = self.null_bitset_width;\n\n        // Pre-allocate entire buffer at once (all zeros)\n        self.buffer.resize(total_size, 0);\n\n        // Pre-fill offsets and lengths (constant for fixed-width)\n        let row_size_i32 = row_size as i32;\n        for row_idx in 0..num_rows {\n            self.offsets.push((row_idx * row_size) as i32);\n            self.lengths.push(row_size_i32);\n        }\n\n        // Process column by column for better cache locality\n        for (col_idx, array) in arrays.iter().enumerate() {\n            let field_offset_in_row = null_bitset_width + col_idx * 8;\n\n            // Write values for all rows in this column\n            self.write_column_fixed_width(\n                array,\n                &self.schema[col_idx].clone(),\n                col_idx,\n                field_offset_in_row,\n                row_size,\n                num_rows,\n            )?;\n        }\n\n        Ok((self.buffer.as_ptr(), &self.offsets, &self.lengths))\n    }\n\n    /// Write a fixed-width column's values for all rows.\n    /// Processes column-by-column for better cache locality.\n    fn write_column_fixed_width(\n        &mut self,\n        array: &ArrayRef,\n        data_type: &DataType,\n        col_idx: usize,\n        field_offset_in_row: usize,\n        row_size: usize,\n        num_rows: usize,\n    ) -> CometResult<()> {\n        // Handle nulls first - set null bits\n        if array.null_count() > 0 {\n            let word_idx = col_idx / 64;\n            let bit_idx = col_idx % 64;\n            let bit_mask = 1i64 << bit_idx;\n\n            for row_idx in 0..num_rows {\n                if array.is_null(row_idx) {\n                    let row_start = row_idx * row_size;\n                    let word_offset = row_start + word_idx * 8;\n                    let mut word = i64::from_le_bytes(\n                        self.buffer[word_offset..word_offset + 8]\n                            .try_into()\n                            .unwrap(),\n                    );\n                    word |= bit_mask;\n                    self.buffer[word_offset..word_offset + 8].copy_from_slice(&word.to_le_bytes());\n                }\n            }\n        }\n\n        // Write non-null values using type-specific fast paths\n        match data_type {\n            DataType::Boolean => 
{\n                // Boolean is special: writes single byte, not 8-byte i64\n                let arr = downcast_array!(array, BooleanArray)?;\n                for row_idx in 0..num_rows {\n                    if !arr.is_null(row_idx) {\n                        let offset = row_idx * row_size + field_offset_in_row;\n                        self.buffer[offset] = if arr.value(row_idx) { 1 } else { 0 };\n                    }\n                }\n                Ok(())\n            }\n            DataType::Int8 => write_fixed_column_primitive!(\n                self,\n                array,\n                row_size,\n                field_offset_in_row,\n                num_rows,\n                Int8Array,\n                |v: i8| v as i64\n            ),\n            DataType::Int16 => write_fixed_column_primitive!(\n                self,\n                array,\n                row_size,\n                field_offset_in_row,\n                num_rows,\n                Int16Array,\n                |v: i16| v as i64\n            ),\n            DataType::Int32 => write_fixed_column_primitive!(\n                self,\n                array,\n                row_size,\n                field_offset_in_row,\n                num_rows,\n                Int32Array,\n                |v: i32| v as i64\n            ),\n            DataType::Int64 => write_fixed_column_primitive!(\n                self,\n                array,\n                row_size,\n                field_offset_in_row,\n                num_rows,\n                Int64Array,\n                |v: i64| v\n            ),\n            DataType::Float32 => write_fixed_column_primitive!(\n                self,\n                array,\n                row_size,\n                field_offset_in_row,\n                num_rows,\n                Float32Array,\n                |v: f32| v.to_bits() as i64\n            ),\n            DataType::Float64 => write_fixed_column_primitive!(\n                self,\n                array,\n                row_size,\n                field_offset_in_row,\n                num_rows,\n                Float64Array,\n                |v: f64| v.to_bits() as i64\n            ),\n            DataType::Date32 => write_fixed_column_primitive!(\n                self,\n                array,\n                row_size,\n                field_offset_in_row,\n                num_rows,\n                Date32Array,\n                |v: i32| v as i64\n            ),\n            DataType::Timestamp(TimeUnit::Microsecond, _) => write_fixed_column_primitive!(\n                self,\n                array,\n                row_size,\n                field_offset_in_row,\n                num_rows,\n                TimestampMicrosecondArray,\n                |v: i64| v\n            ),\n            DataType::Decimal128(precision, _) if *precision <= MAX_LONG_DIGITS => {\n                write_fixed_column_primitive!(\n                    self,\n                    array,\n                    row_size,\n                    field_offset_in_row,\n                    num_rows,\n                    Decimal128Array,\n                    |v: i128| v as i64\n                )\n            }\n            _ => Err(CometError::Internal(format!(\n                \"Unexpected non-fixed-width type in fast path: {:?}\",\n                data_type\n            ))),\n        }\n    }\n\n    /// Writes a complete row using pre-downcast TypedArrays.\n    /// This avoids type dispatch overhead in the inner loop.\n    fn write_row_typed(\n        
&mut self,\n        typed_arrays: &[TypedArray],\n        var_len_indices: &[usize],\n        row_idx: usize,\n    ) -> CometResult<()> {\n        let row_start = self.buffer.len();\n        let null_bitset_width = self.null_bitset_width;\n        let fixed_width_size = self.fixed_width_size;\n\n        // Extend buffer for fixed-width portion\n        self.buffer.resize(row_start + fixed_width_size, 0);\n\n        // First pass: write null bits and fixed-width values\n        for (col_idx, typed_arr) in typed_arrays.iter().enumerate() {\n            let is_null = typed_arr.is_null(row_idx);\n\n            if is_null {\n                // Set null bit\n                let word_idx = col_idx / 64;\n                let bit_idx = col_idx % 64;\n                let word_offset = row_start + word_idx * 8;\n\n                let mut word = i64::from_le_bytes(\n                    self.buffer[word_offset..word_offset + 8]\n                        .try_into()\n                        .unwrap(),\n                );\n                word |= 1i64 << bit_idx;\n                self.buffer[word_offset..word_offset + 8].copy_from_slice(&word.to_le_bytes());\n            } else if !typed_arr.is_variable_length() {\n                // Write fixed-width field value (skip variable-length, they're handled in pass 2)\n                let field_offset = row_start + null_bitset_width + col_idx * 8;\n                let value = typed_arr.get_fixed_value(row_idx);\n                self.buffer[field_offset..field_offset + 8].copy_from_slice(&value.to_le_bytes());\n            }\n        }\n\n        // Second pass: write variable-length data (only iterate over var-len columns)\n        for &col_idx in var_len_indices {\n            let typed_arr = &typed_arrays[col_idx];\n            if typed_arr.is_null(row_idx) {\n                continue;\n            }\n\n            // Write variable-length data directly to buffer\n            let actual_len = typed_arr.write_variable_to_buffer(&mut self.buffer, row_idx)?;\n            if actual_len > 0 {\n                // Calculate offset: buffer grew by padded_len, but we need offset to start of data\n                let padded_len = Self::round_up_to_8(actual_len);\n                let current_offset = self.buffer.len() - padded_len - row_start;\n\n                // Update the field slot with (offset << 32) | length\n                let offset_and_len = ((current_offset as i64) << 32) | (actual_len as i64);\n                let field_offset = row_start + null_bitset_width + col_idx * 8;\n                self.buffer[field_offset..field_offset + 8]\n                    .copy_from_slice(&offset_and_len.to_le_bytes());\n            }\n        }\n\n        Ok(())\n    }\n\n    /// Returns a pointer to the buffer.\n    pub fn buffer_ptr(&self) -> *const u8 {\n        self.buffer.as_ptr()\n    }\n\n    /// Returns the schema.\n    pub fn schema(&self) -> &[DataType] {\n        &self.schema\n    }\n}\n\n/// Gets the fixed-width value for a field as i64.\n#[inline]\nfn get_field_value(data_type: &DataType, array: &ArrayRef, row_idx: usize) -> CometResult<i64> {\n    // Use the actual array type for dispatching to handle type mismatches\n    let actual_type = array.data_type();\n\n    match actual_type {\n        DataType::Boolean => {\n            let arr = downcast_array!(array, BooleanArray)?;\n            Ok(arr.value(row_idx) as i64)\n        }\n        DataType::Int8 => get_field_value_primitive!(array, row_idx, Int8Array, |v: i8| v as i64),\n        DataType::Int16 => {\n   
         get_field_value_primitive!(array, row_idx, Int16Array, |v: i16| v as i64)\n        }\n        DataType::Int32 => {\n            get_field_value_primitive!(array, row_idx, Int32Array, |v: i32| v as i64)\n        }\n        DataType::Int64 => get_field_value_primitive!(array, row_idx, Int64Array, |v: i64| v),\n        DataType::Float32 => {\n            get_field_value_primitive!(array, row_idx, Float32Array, |v: f32| v.to_bits() as i64)\n        }\n        DataType::Float64 => {\n            get_field_value_primitive!(array, row_idx, Float64Array, |v: f64| v.to_bits() as i64)\n        }\n        DataType::Date32 => {\n            get_field_value_primitive!(array, row_idx, Date32Array, |v: i32| v as i64)\n        }\n        DataType::Timestamp(TimeUnit::Microsecond, _) => {\n            get_field_value_primitive!(array, row_idx, TimestampMicrosecondArray, |v: i64| v)\n        }\n        DataType::Decimal128(precision, _) if *precision <= MAX_LONG_DIGITS => {\n            get_field_value_primitive!(array, row_idx, Decimal128Array, |v: i128| v as i64)\n        }\n        // Variable-length types use placeholder (will be overwritten by get_variable_length_data)\n        DataType::Utf8\n        | DataType::LargeUtf8\n        | DataType::Binary\n        | DataType::LargeBinary\n        | DataType::Decimal128(_, _)\n        | DataType::Struct(_)\n        | DataType::List(_)\n        | DataType::LargeList(_)\n        | DataType::Map(_, _)\n        | DataType::Dictionary(_, _) => Ok(0i64),\n        _ => {\n            // Check if the schema type is a known type that we should handle\n            match data_type {\n                DataType::Boolean\n                | DataType::Int8\n                | DataType::Int16\n                | DataType::Int32\n                | DataType::Int64\n                | DataType::Float32\n                | DataType::Float64\n                | DataType::Date32\n                | DataType::Timestamp(TimeUnit::Microsecond, _)\n                | DataType::Decimal128(_, _) => Err(CometError::Internal(format!(\n                    \"Type mismatch in get_field_value: schema expects {:?} but actual array type is {:?}\",\n                    data_type, actual_type\n                ))),\n                // If schema is also a variable-length type, return placeholder\n                DataType::Utf8\n                | DataType::LargeUtf8\n                | DataType::Binary\n                | DataType::LargeBinary\n                | DataType::Struct(_)\n                | DataType::List(_)\n                | DataType::LargeList(_)\n                | DataType::Map(_, _)\n                | DataType::Dictionary(_, _) => Ok(0i64),\n                _ => Err(CometError::Internal(format!(\n                    \"Unsupported data type for columnar to row conversion: schema={:?}, actual={:?}\",\n                    data_type, actual_type\n                ))),\n            }\n        }\n    }\n}\n\n/// Writes dictionary-encoded value directly to buffer.\n#[inline]\nfn write_dictionary_to_buffer(\n    buffer: &mut Vec<u8>,\n    array: &ArrayRef,\n    row_idx: usize,\n    key_type: &DataType,\n    value_type: &DataType,\n) -> CometResult<usize> {\n    match key_type {\n        DataType::Int8 => {\n            write_dictionary_to_buffer_with_key::<Int8Type>(buffer, array, row_idx, value_type)\n        }\n        DataType::Int16 => {\n            write_dictionary_to_buffer_with_key::<Int16Type>(buffer, array, row_idx, value_type)\n        }\n        DataType::Int32 => {\n            
write_dictionary_to_buffer_with_key::<Int32Type>(buffer, array, row_idx, value_type)\n        }\n        DataType::Int64 => {\n            write_dictionary_to_buffer_with_key::<Int64Type>(buffer, array, row_idx, value_type)\n        }\n        DataType::UInt8 => {\n            write_dictionary_to_buffer_with_key::<UInt8Type>(buffer, array, row_idx, value_type)\n        }\n        DataType::UInt16 => {\n            write_dictionary_to_buffer_with_key::<UInt16Type>(buffer, array, row_idx, value_type)\n        }\n        DataType::UInt32 => {\n            write_dictionary_to_buffer_with_key::<UInt32Type>(buffer, array, row_idx, value_type)\n        }\n        DataType::UInt64 => {\n            write_dictionary_to_buffer_with_key::<UInt64Type>(buffer, array, row_idx, value_type)\n        }\n        _ => Err(CometError::Internal(format!(\n            \"Unsupported dictionary key type: {:?}\",\n            key_type\n        ))),\n    }\n}\n\n/// Writes dictionary value directly to buffer with specific key type.\n#[inline]\nfn write_dictionary_to_buffer_with_key<K: ArrowDictionaryKeyType>(\n    buffer: &mut Vec<u8>,\n    array: &ArrayRef,\n    row_idx: usize,\n    value_type: &DataType,\n) -> CometResult<usize> {\n    let dict_array = array\n        .as_any()\n        .downcast_ref::<DictionaryArray<K>>()\n        .ok_or_else(|| {\n            CometError::Internal(format!(\n                \"Failed to downcast to DictionaryArray<{:?}>\",\n                std::any::type_name::<K>()\n            ))\n        })?;\n\n    let values = dict_array.values();\n    let key_idx = dict_array.keys().value(row_idx).to_usize().ok_or_else(|| {\n        CometError::Internal(\"Dictionary key index out of usize range\".to_string())\n    })?;\n\n    match value_type {\n        DataType::Utf8 => {\n            let string_values = downcast_array!(values, StringArray)?;\n            Ok(write_bytes_padded(\n                buffer,\n                string_values.value(key_idx).as_bytes(),\n            ))\n        }\n        DataType::LargeUtf8 => {\n            let string_values = downcast_array!(values, LargeStringArray)?;\n            Ok(write_bytes_padded(\n                buffer,\n                string_values.value(key_idx).as_bytes(),\n            ))\n        }\n        DataType::Binary => {\n            let binary_values = downcast_array!(values, BinaryArray)?;\n            Ok(write_bytes_padded(buffer, binary_values.value(key_idx)))\n        }\n        DataType::LargeBinary => {\n            let binary_values = downcast_array!(values, LargeBinaryArray)?;\n            Ok(write_bytes_padded(buffer, binary_values.value(key_idx)))\n        }\n        _ => Err(CometError::Internal(format!(\n            \"Unsupported dictionary value type for direct buffer write: {:?}\",\n            value_type\n        ))),\n    }\n}\n\n/// Converts i128 to Spark's big-endian decimal byte format.\nfn i128_to_spark_decimal_bytes(value: i128) -> Vec<u8> {\n    // Spark uses big-endian format for large decimals\n    let bytes = value.to_be_bytes();\n\n    // Find the minimum number of bytes needed (excluding leading sign-extension bytes)\n    let is_negative = value < 0;\n    let sign_byte = if is_negative { 0xFF } else { 0x00 };\n\n    let mut start = 0;\n    while start < 15 && bytes[start] == sign_byte {\n        // Check if the next byte's sign bit matches\n        let next_byte = bytes[start + 1];\n        let has_correct_sign = if is_negative {\n            (next_byte & 0x80) != 0\n        } else {\n            (next_byte & 0x80) 
== 0\n        };\n        if has_correct_sign {\n            start += 1;\n        } else {\n            break;\n        }\n    }\n\n    bytes[start..].to_vec()\n}\n\n/// Round up to the nearest multiple of 8 for alignment.\n#[inline]\nconst fn round_up_to_8(value: usize) -> usize {\n    value.div_ceil(8) * 8\n}\n\n/// Writes a primitive value with the correct size for UnsafeArrayData.\n#[inline]\nfn write_array_element(buffer: &mut [u8], data_type: &DataType, value: i64, offset: usize) {\n    match data_type {\n        DataType::Boolean => {\n            buffer[offset] = if value != 0 { 1 } else { 0 };\n        }\n        DataType::Int8 => {\n            buffer[offset] = value as u8;\n        }\n        DataType::Int16 => {\n            buffer[offset..offset + 2].copy_from_slice(&(value as i16).to_le_bytes());\n        }\n        DataType::Int32 | DataType::Date32 => {\n            buffer[offset..offset + 4].copy_from_slice(&(value as i32).to_le_bytes());\n        }\n        DataType::Float32 => {\n            buffer[offset..offset + 4].copy_from_slice(&(value as u32).to_le_bytes());\n        }\n        // All 8-byte types\n        _ => {\n            buffer[offset..offset + 8].copy_from_slice(&value.to_le_bytes());\n        }\n    }\n}\n\n// =============================================================================\n// Optimized direct-write functions for complex types\n// These write directly to the output buffer to avoid intermediate allocations.\n// =============================================================================\n\n/// Writes a struct value directly to the buffer using pre-downcast typed fields.\n/// Returns the unpadded length written.\n///\n/// This version uses pre-downcast TypedElements for each field, eliminating\n/// per-row type dispatch overhead.\n#[inline]\nfn write_struct_to_buffer_typed(\n    buffer: &mut Vec<u8>,\n    _struct_array: &StructArray,\n    row_idx: usize,\n    _fields: &arrow::datatypes::Fields,\n    typed_fields: &[TypedElements],\n) -> CometResult<usize> {\n    let num_fields = typed_fields.len();\n    let nested_bitset_width = ColumnarToRowContext::calculate_bitset_width(num_fields);\n    let nested_fixed_size = nested_bitset_width + num_fields * 8;\n\n    // Remember where this struct starts in the buffer\n    let struct_start = buffer.len();\n\n    // Reserve space for fixed-width portion (zeros for null bits and field slots)\n    buffer.resize(struct_start + nested_fixed_size, 0);\n\n    // Write each field using pre-downcast types\n    for (field_idx, typed_field) in typed_fields.iter().enumerate() {\n        if typed_field.is_null_at(row_idx) {\n            // Set null bit in nested struct\n            set_null_bit(buffer, struct_start, field_idx);\n        } else {\n            let field_offset = struct_start + nested_bitset_width + field_idx * 8;\n\n            if typed_field.is_fixed_width() {\n                // Fixed-width field - use pre-downcast accessor\n                let value = typed_field.get_fixed_value(row_idx);\n                buffer[field_offset..field_offset + 8].copy_from_slice(&value.to_le_bytes());\n            } else {\n                // Variable-length field - use pre-downcast writer\n                let var_len = typed_field.write_variable_value(buffer, row_idx)?;\n                if var_len > 0 {\n                    let padded_len = round_up_to_8(var_len);\n                    let data_offset = buffer.len() - padded_len - struct_start;\n                    let offset_and_len = ((data_offset as i64) << 32) | 
(var_len as i64);\n                    buffer[field_offset..field_offset + 8]\n                        .copy_from_slice(&offset_and_len.to_le_bytes());\n                }\n            }\n        }\n    }\n\n    Ok(buffer.len() - struct_start)\n}\n\n/// Writes a struct value directly to the buffer.\n/// Returns the unpadded length written.\n///\n/// Processes each field using inline type dispatch to avoid allocation overhead.\n/// This is used for nested structs where we don't have pre-downcast fields.\nfn write_struct_to_buffer(\n    buffer: &mut Vec<u8>,\n    struct_array: &StructArray,\n    row_idx: usize,\n    fields: &arrow::datatypes::Fields,\n) -> CometResult<usize> {\n    let num_fields = fields.len();\n    let nested_bitset_width = ColumnarToRowContext::calculate_bitset_width(num_fields);\n    let nested_fixed_size = nested_bitset_width + num_fields * 8;\n\n    // Remember where this struct starts in the buffer\n    let struct_start = buffer.len();\n\n    // Reserve space for fixed-width portion (zeros for null bits and field slots)\n    buffer.resize(struct_start + nested_fixed_size, 0);\n\n    // Write each field with inline type handling (no allocation)\n    for (field_idx, field) in fields.iter().enumerate() {\n        let column = struct_array.column(field_idx);\n        let data_type = field.data_type();\n\n        if column.is_null(row_idx) {\n            // Set null bit in nested struct\n            set_null_bit(buffer, struct_start, field_idx);\n        } else {\n            let field_offset = struct_start + nested_bitset_width + field_idx * 8;\n\n            // Inline type dispatch for fixed-width types (most common case)\n            let value: Option<i64> = extract_fixed_value!(\n                column,\n                row_idx,\n                (DataType::Boolean, BooleanArray, |v: bool| if v {\n                    1i64\n                } else {\n                    0i64\n                }),\n                (DataType::Int8, Int8Array, |v: i8| v as i64),\n                (DataType::Int16, Int16Array, |v: i16| v as i64),\n                (DataType::Int32, Int32Array, |v: i32| v as i64),\n                (DataType::Int64, Int64Array, |v: i64| v),\n                (DataType::Float32, Float32Array, |v: f32| v.to_bits() as i64),\n                (DataType::Float64, Float64Array, |v: f64| v.to_bits() as i64),\n                (DataType::Date32, Date32Array, |v: i32| v as i64),\n                (\n                    DataType::Timestamp(TimeUnit::Microsecond, _),\n                    TimestampMicrosecondArray,\n                    |v: i64| v\n                ),\n            );\n            // Handle Decimal128 with precision guard separately\n            let value: Option<i64> = match (value, data_type) {\n                (Some(v), _) => Some(v),\n                (None, DataType::Decimal128(p, _)) if *p <= MAX_LONG_DIGITS => {\n                    let arr = downcast_array!(column, Decimal128Array)?;\n                    Some(arr.value(row_idx) as i64)\n                }\n                _ => None,\n            };\n\n            if let Some(v) = value {\n                // Fixed-width field\n                buffer[field_offset..field_offset + 8].copy_from_slice(&v.to_le_bytes());\n            } else {\n                // Variable-length field\n                let var_len = write_nested_variable_to_buffer(buffer, data_type, column, row_idx)?;\n                if var_len > 0 {\n                    let padded_len = round_up_to_8(var_len);\n                    let data_offset = 
buffer.len() - padded_len - struct_start;\n                    let offset_and_len = ((data_offset as i64) << 32) | (var_len as i64);\n                    buffer[field_offset..field_offset + 8]\n                        .copy_from_slice(&offset_and_len.to_le_bytes());\n                }\n            }\n        }\n    }\n\n    Ok(buffer.len() - struct_start)\n}\n\n/// Writes a list value directly to the buffer in UnsafeArrayData format.\n/// Returns the unpadded length written.\n///\n/// This uses offsets directly to avoid per-row ArrayRef allocation.\n#[inline]\nfn write_list_to_buffer(\n    buffer: &mut Vec<u8>,\n    list_array: &ListArray,\n    row_idx: usize,\n    element_field: &arrow::datatypes::FieldRef,\n) -> CometResult<usize> {\n    // Get offsets directly to avoid creating a sliced ArrayRef\n    let offsets = list_array.value_offsets();\n    let start_offset = offsets[row_idx] as usize;\n    let end_offset = offsets[row_idx + 1] as usize;\n    let num_elements = end_offset - start_offset;\n\n    // Pre-downcast the element array once\n    let element_array = list_array.values();\n    let element_type = element_field.data_type();\n    let typed_elements = TypedElements::from_array(element_array, element_type);\n\n    // Write the range of elements\n    typed_elements.write_range_to_buffer(buffer, start_offset, num_elements)\n}\n\n/// Writes a large list value directly to the buffer in UnsafeArrayData format.\n/// Returns the unpadded length written.\n///\n/// This uses offsets directly to avoid per-row ArrayRef allocation.\n#[inline]\nfn write_large_list_to_buffer(\n    buffer: &mut Vec<u8>,\n    list_array: &LargeListArray,\n    row_idx: usize,\n    element_field: &arrow::datatypes::FieldRef,\n) -> CometResult<usize> {\n    // Get offsets directly to avoid creating a sliced ArrayRef\n    let offsets = list_array.value_offsets();\n    let start_offset = offsets[row_idx] as usize;\n    let end_offset = offsets[row_idx + 1] as usize;\n    let num_elements = end_offset - start_offset;\n\n    // Pre-downcast the element array once\n    let element_array = list_array.values();\n    let element_type = element_field.data_type();\n    let typed_elements = TypedElements::from_array(element_array, element_type);\n\n    // Write the range of elements\n    typed_elements.write_range_to_buffer(buffer, start_offset, num_elements)\n}\n\n/// Writes a map value directly to the buffer in UnsafeMapData format.\n/// Returns the unpadded length written.\n///\n/// This uses offsets directly to avoid per-row ArrayRef allocation.\nfn write_map_to_buffer(\n    buffer: &mut Vec<u8>,\n    map_array: &MapArray,\n    row_idx: usize,\n    entries_field: &arrow::datatypes::FieldRef,\n) -> CometResult<usize> {\n    // UnsafeMapData format:\n    // [key array size: 8 bytes][key array data][value array data]\n    let map_start = buffer.len();\n\n    // Get offsets directly to avoid creating a sliced ArrayRef\n    let offsets = map_array.value_offsets();\n    let start_offset = offsets[row_idx] as usize;\n    let end_offset = offsets[row_idx + 1] as usize;\n    let num_entries = end_offset - start_offset;\n\n    // Get keys and values from the underlying entries struct\n    let entries_array = map_array.entries();\n    let keys = entries_array.column(0);\n    let values = entries_array.column(1);\n\n    let (key_type, value_type) = if let DataType::Struct(fields) = entries_field.data_type() {\n        (fields[0].data_type().clone(), fields[1].data_type().clone())\n    } else {\n        return 
Err(CometError::Internal(format!(\n            \"Map entries field is not a struct: {:?}\",\n            entries_field.data_type()\n        )));\n    };\n\n    // Pre-downcast keys and values once\n    let typed_keys = TypedElements::from_array(keys, &key_type);\n    let typed_values = TypedElements::from_array(values, &value_type);\n\n    // Placeholder for key array size\n    let key_size_offset = buffer.len();\n    buffer.extend_from_slice(&0i64.to_le_bytes());\n\n    // Write key array using range\n    let key_array_start = buffer.len();\n    typed_keys.write_range_to_buffer(buffer, start_offset, num_entries)?;\n    let key_array_size = (buffer.len() - key_array_start) as i64;\n    buffer[key_size_offset..key_size_offset + 8].copy_from_slice(&key_array_size.to_le_bytes());\n\n    // Write value array using range\n    typed_values.write_range_to_buffer(buffer, start_offset, num_entries)?;\n\n    Ok(buffer.len() - map_start)\n}\n\n/// Writes variable-length data for a nested field directly to buffer.\n/// Used by struct, list, and map writers for their nested elements.\n/// Returns the unpadded length written (0 if not variable-length).\nfn write_nested_variable_to_buffer(\n    buffer: &mut Vec<u8>,\n    data_type: &DataType,\n    array: &ArrayRef,\n    row_idx: usize,\n) -> CometResult<usize> {\n    let actual_type = array.data_type();\n\n    match actual_type {\n        DataType::Utf8 => {\n            let arr = downcast_array!(array, StringArray)?;\n            Ok(write_bytes_padded(buffer, arr.value(row_idx).as_bytes()))\n        }\n        DataType::LargeUtf8 => {\n            let arr = downcast_array!(array, LargeStringArray)?;\n            Ok(write_bytes_padded(buffer, arr.value(row_idx).as_bytes()))\n        }\n        DataType::Binary => {\n            let arr = downcast_array!(array, BinaryArray)?;\n            Ok(write_bytes_padded(buffer, arr.value(row_idx)))\n        }\n        DataType::LargeBinary => {\n            let arr = downcast_array!(array, LargeBinaryArray)?;\n            Ok(write_bytes_padded(buffer, arr.value(row_idx)))\n        }\n        DataType::Decimal128(precision, _) if *precision > MAX_LONG_DIGITS => {\n            let arr = downcast_array!(array, Decimal128Array)?;\n            let bytes = i128_to_spark_decimal_bytes(arr.value(row_idx));\n            Ok(write_bytes_padded(buffer, &bytes))\n        }\n        DataType::Struct(fields) => {\n            let struct_array = downcast_array!(array, StructArray)?;\n            write_struct_to_buffer(buffer, struct_array, row_idx, fields)\n        }\n        DataType::List(field) => {\n            let list_array = downcast_array!(array, ListArray)?;\n            write_list_to_buffer(buffer, list_array, row_idx, field)\n        }\n        DataType::LargeList(field) => {\n            let list_array = downcast_array!(array, LargeListArray)?;\n            write_large_list_to_buffer(buffer, list_array, row_idx, field)\n        }\n        DataType::Map(field, _) => {\n            let map_array = downcast_array!(array, MapArray)?;\n            write_map_to_buffer(buffer, map_array, row_idx, field)\n        }\n        DataType::Dictionary(key_type, value_type) => {\n            write_dictionary_to_buffer(buffer, array, row_idx, key_type, value_type)\n        }\n        // Check if schema type expects variable-length but actual type doesn't match\n        _ => match data_type {\n            DataType::Utf8\n            | DataType::LargeUtf8\n            | DataType::Binary\n            | DataType::LargeBinary\n            | 
DataType::Struct(_)\n            | DataType::List(_)\n            | DataType::LargeList(_)\n            | DataType::Map(_, _) => Err(CometError::Internal(format!(\n                \"Type mismatch in nested write: schema expects {:?} but actual array type is {:?}\",\n                data_type, actual_type\n            ))),\n            DataType::Decimal128(precision, _) if *precision > MAX_LONG_DIGITS => {\n                Err(CometError::Internal(format!(\n                    \"Type mismatch for large decimal: schema expects {:?} but actual is {:?}\",\n                    data_type, actual_type\n                )))\n            }\n            _ => Ok(0), // Not a variable-length type\n        },\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_bitset_width_calculation() {\n        assert_eq!(ColumnarToRowContext::calculate_bitset_width(0), 0);\n        assert_eq!(ColumnarToRowContext::calculate_bitset_width(1), 8);\n        assert_eq!(ColumnarToRowContext::calculate_bitset_width(64), 8);\n        assert_eq!(ColumnarToRowContext::calculate_bitset_width(65), 16);\n        assert_eq!(ColumnarToRowContext::calculate_bitset_width(128), 16);\n        assert_eq!(ColumnarToRowContext::calculate_bitset_width(129), 24);\n    }\n\n    #[test]\n    fn test_round_up_to_8() {\n        assert_eq!(ColumnarToRowContext::round_up_to_8(0), 0);\n        assert_eq!(ColumnarToRowContext::round_up_to_8(1), 8);\n        assert_eq!(ColumnarToRowContext::round_up_to_8(7), 8);\n        assert_eq!(ColumnarToRowContext::round_up_to_8(8), 8);\n        assert_eq!(ColumnarToRowContext::round_up_to_8(9), 16);\n    }\n\n    #[test]\n    fn test_convert_int_array() {\n        let schema = vec![DataType::Int32];\n        let mut ctx = ColumnarToRowContext::new(schema, 100);\n\n        let array: ArrayRef = Arc::new(Int32Array::from(vec![Some(1), Some(2), None, Some(4)]));\n        let arrays = vec![array];\n\n        let (ptr, offsets, lengths) = ctx.convert(&arrays, 4).unwrap();\n\n        assert!(!ptr.is_null());\n        assert_eq!(offsets.len(), 4);\n        assert_eq!(lengths.len(), 4);\n\n        // Each row should have: 8 bytes null bitset + 8 bytes for one field = 16 bytes\n        for len in lengths {\n            assert_eq!(*len, 16);\n        }\n    }\n\n    #[test]\n    fn test_convert_multiple_columns() {\n        let schema = vec![DataType::Int32, DataType::Int64, DataType::Float64];\n        let mut ctx = ColumnarToRowContext::new(schema, 100);\n\n        let array1: ArrayRef = Arc::new(Int32Array::from(vec![1, 2, 3]));\n        let array2: ArrayRef = Arc::new(Int64Array::from(vec![100i64, 200, 300]));\n        let array3: ArrayRef = Arc::new(Float64Array::from(vec![1.1, 2.2, 3.3]));\n        let arrays = vec![array1, array2, array3];\n\n        let (ptr, offsets, lengths) = ctx.convert(&arrays, 3).unwrap();\n\n        assert!(!ptr.is_null());\n        assert_eq!(offsets.len(), 3);\n        assert_eq!(lengths.len(), 3);\n\n        // Each row should have: 8 bytes null bitset + 24 bytes for three fields = 32 bytes\n        for len in lengths {\n            assert_eq!(*len, 32);\n        }\n    }\n\n    #[test]\n    fn test_fixed_width_fast_path() {\n        // Test that the fixed-width fast path produces correct results\n        let schema = vec![DataType::Int32, DataType::Int64, DataType::Float64];\n        let mut ctx = ColumnarToRowContext::new(schema.clone(), 100);\n\n        // Verify that the context detects this as all fixed-width\n        
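// (layout sanity for three fields: 8-byte null bitset + 3 * 8-byte slots)\n        assert_eq!(ctx.null_bitset_width, 8);\n        assert_eq!(ctx.fixed_width_size, 32);\n        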
assert!(\n            ctx.all_fixed_width,\n            \"Schema should be detected as all fixed-width\"\n        );\n\n        let array1: ArrayRef = Arc::new(Int32Array::from(vec![Some(1), None, Some(3)]));\n        let array2: ArrayRef = Arc::new(Int64Array::from(vec![Some(100i64), Some(200), None]));\n        let array3: ArrayRef = Arc::new(Float64Array::from(vec![1.5, 2.5, 3.5]));\n        let arrays = vec![array1, array2, array3];\n\n        let (ptr, offsets, lengths) = ctx.convert(&arrays, 3).unwrap();\n\n        assert!(!ptr.is_null());\n        assert_eq!(offsets.len(), 3);\n        assert_eq!(lengths.len(), 3);\n\n        // Each row: 8 bytes null bitset + 24 bytes for three fields = 32 bytes\n        let row_size = 32;\n        for (i, len) in lengths.iter().enumerate() {\n            assert_eq!(\n                *len, row_size as i32,\n                \"Row {} should be {} bytes\",\n                i, row_size\n            );\n        }\n\n        // Verify the actual data\n        let buffer = unsafe { std::slice::from_raw_parts(ptr, row_size * 3) };\n\n        // Row 0: int32=1 (not null), int64=100 (not null), float64=1.5 (not null)\n        let null_bitset_0 = i64::from_le_bytes(buffer[0..8].try_into().unwrap());\n        assert_eq!(null_bitset_0, 0, \"Row 0 should have no nulls\");\n        let val0_0 = i64::from_le_bytes(buffer[8..16].try_into().unwrap());\n        assert_eq!(val0_0, 1, \"Row 0, col 0 should be 1\");\n        let val0_1 = i64::from_le_bytes(buffer[16..24].try_into().unwrap());\n        assert_eq!(val0_1, 100, \"Row 0, col 1 should be 100\");\n        let val0_2 = f64::from_bits(u64::from_le_bytes(buffer[24..32].try_into().unwrap()));\n        assert!((val0_2 - 1.5).abs() < 0.001, \"Row 0, col 2 should be 1.5\");\n\n        // Row 1: int32=null, int64=200 (not null), float64=2.5 (not null)\n        let null_bitset_1 = i64::from_le_bytes(buffer[32..40].try_into().unwrap());\n        assert_eq!(null_bitset_1 & 1, 1, \"Row 1, col 0 should be null\");\n        let val1_1 = i64::from_le_bytes(buffer[48..56].try_into().unwrap());\n        assert_eq!(val1_1, 200, \"Row 1, col 1 should be 200\");\n\n        // Row 2: int32=3 (not null), int64=null, float64=3.5 (not null)\n        let null_bitset_2 = i64::from_le_bytes(buffer[64..72].try_into().unwrap());\n        assert_eq!(null_bitset_2 & 2, 2, \"Row 2, col 1 should be null\");\n        let val2_0 = i64::from_le_bytes(buffer[72..80].try_into().unwrap());\n        assert_eq!(val2_0, 3, \"Row 2, col 0 should be 3\");\n    }\n\n    #[test]\n    fn test_mixed_schema_uses_general_path() {\n        // Test that schemas with variable-length types use the general path\n        let schema = vec![DataType::Int32, DataType::Utf8];\n        let ctx = ColumnarToRowContext::new(schema, 100);\n\n        // Should NOT be detected as all fixed-width\n        assert!(\n            !ctx.all_fixed_width,\n            \"Schema with Utf8 should not be all fixed-width\"\n        );\n    }\n\n    #[test]\n    fn test_convert_string_array() {\n        let schema = vec![DataType::Utf8];\n        let mut ctx = ColumnarToRowContext::new(schema, 100);\n\n        let array: ArrayRef = Arc::new(StringArray::from(vec![\"hello\", \"world\"]));\n        let arrays = vec![array];\n\n        let (ptr, offsets, lengths) = ctx.convert(&arrays, 2).unwrap();\n\n        assert!(!ptr.is_null());\n        assert_eq!(offsets.len(), 2);\n        assert_eq!(lengths.len(), 2);\n\n        // Row 0: 8 (bitset) + 8 (field slot) + 8 (aligned \"hello\") = 24\n    
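    // (\"hello\" is 5 bytes; round_up_to_8(5) = 8 gives the aligned region)\n    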
    // Row 1: 8 (bitset) + 8 (field slot) + 8 (aligned \"world\") = 24\n        assert_eq!(lengths[0], 24);\n        assert_eq!(lengths[1], 24);\n    }\n\n    #[test]\n    fn test_i128_to_spark_decimal_bytes() {\n        // Test positive number\n        let bytes = i128_to_spark_decimal_bytes(12345);\n        assert!(bytes.len() <= 16);\n\n        // Test negative number\n        let bytes = i128_to_spark_decimal_bytes(-12345);\n        assert!(bytes.len() <= 16);\n\n        // Test zero\n        let bytes = i128_to_spark_decimal_bytes(0);\n        assert!(!bytes.is_empty());\n    }\n\n    #[test]\n    fn test_list_data_conversion() {\n        use arrow::datatypes::Field;\n\n        // Create a list with elements [0, 1, 2, 3, 4]\n        let values = Int32Array::from(vec![0, 1, 2, 3, 4]);\n        let offsets = arrow::buffer::OffsetBuffer::new(vec![0, 5].into());\n\n        let list_field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n        let list_array = ListArray::new(Arc::clone(&list_field), offsets, Arc::new(values), None);\n\n        // Convert the list for row 0 using the new direct-write function\n        let mut buffer = Vec::new();\n        write_list_to_buffer(&mut buffer, &list_array, 0, &list_field).expect(\"conversion failed\");\n        let result = &buffer;\n\n        // UnsafeArrayData format for Int32:\n        // [0..8]: numElements = 5\n        // [8..16]: null bitset (8 bytes for up to 64 elements)\n        // [16..20]: element 0 (4 bytes for Int32)\n        // [20..24]: element 1 (4 bytes for Int32)\n        // ... (total 20 bytes for 5 elements, rounded up to 24 for 8-byte alignment)\n\n        let num_elements = i64::from_le_bytes(result[0..8].try_into().unwrap());\n        assert_eq!(num_elements, 5, \"should have 5 elements\");\n\n        let bitset_width = ColumnarToRowContext::calculate_bitset_width(5);\n        assert_eq!(bitset_width, 8);\n\n        // Read each element value (Int32 uses 4 bytes per element)\n        let element_size = 4; // Int32\n        for i in 0..5 {\n            let slot_offset = 8 + bitset_width + i * element_size;\n            let value =\n                i32::from_le_bytes(result[slot_offset..slot_offset + 4].try_into().unwrap());\n            assert_eq!(value, i as i32, \"element {} should be {}\", i, i);\n        }\n    }\n\n    #[test]\n    fn test_list_data_conversion_multiple_rows() {\n        use arrow::datatypes::Field;\n\n        // Create multiple lists:\n        // Row 0: [0]\n        // Row 1: [0, 1]\n        // Row 2: [0, 1, 2]\n        let values = Int32Array::from(vec![\n            0, // row 0\n            0, 1, // row 1\n            0, 1, 2, // row 2\n        ]);\n        let offsets = arrow::buffer::OffsetBuffer::new(vec![0, 1, 3, 6].into());\n\n        let list_field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n        let list_array = ListArray::new(Arc::clone(&list_field), offsets, Arc::new(values), None);\n\n        // Test row 1 which has elements [0, 1]\n        let mut buffer = Vec::new();\n        write_list_to_buffer(&mut buffer, &list_array, 1, &list_field).expect(\"conversion failed\");\n        let result = &buffer;\n\n        let num_elements = i64::from_le_bytes(result[0..8].try_into().unwrap());\n        assert_eq!(num_elements, 2, \"row 1 should have 2 elements\");\n\n        // Int32 uses 4 bytes per element\n        let element_size = 4;\n        let bitset_width = ColumnarToRowContext::calculate_bitset_width(2);\n        let slot0_offset = 8 + bitset_width;\n        let 
slot1_offset = slot0_offset + element_size;\n\n        let value0 = i32::from_le_bytes(result[slot0_offset..slot0_offset + 4].try_into().unwrap());\n        let value1 = i32::from_le_bytes(result[slot1_offset..slot1_offset + 4].try_into().unwrap());\n\n        assert_eq!(value0, 0, \"row 1, element 0 should be 0\");\n        assert_eq!(value1, 1, \"row 1, element 1 should be 1\");\n\n        // Also verify that list slicing is working correctly\n        let list_values = list_array.value(1);\n        assert_eq!(list_values.len(), 2, \"row 1 should have 2 elements\");\n        let int_arr = list_values.as_any().downcast_ref::<Int32Array>().unwrap();\n        assert_eq!(int_arr.value(0), 0, \"row 1 value[0] via Arrow should be 0\");\n        assert_eq!(int_arr.value(1), 1, \"row 1 value[1] via Arrow should be 1\");\n    }\n\n    #[test]\n    fn test_map_data_conversion() {\n        use arrow::datatypes::{Field, Fields};\n\n        // Create a map with 3 entries: {\"key_0\": 0, \"key_1\": 10, \"key_2\": 20}\n        let keys = StringArray::from(vec![\"key_0\", \"key_1\", \"key_2\"]);\n        let values = Int32Array::from(vec![0, 10, 20]);\n\n        let entries_field = Field::new(\n            \"entries\",\n            DataType::Struct(Fields::from(vec![\n                Field::new(\"key\", DataType::Utf8, false),\n                Field::new(\"value\", DataType::Int32, true),\n            ])),\n            false,\n        );\n\n        let entries = StructArray::from(vec![\n            (\n                Arc::new(Field::new(\"key\", DataType::Utf8, false)),\n                Arc::new(keys) as ArrayRef,\n            ),\n            (\n                Arc::new(Field::new(\"value\", DataType::Int32, true)),\n                Arc::new(values) as ArrayRef,\n            ),\n        ]);\n\n        let map_array = MapArray::new(\n            Arc::new(entries_field.clone()),\n            arrow::buffer::OffsetBuffer::new(vec![0, 3].into()),\n            entries,\n            None,\n            false,\n        );\n\n        // Convert the map for row 0\n        let mut buffer = Vec::new();\n        write_map_to_buffer(&mut buffer, &map_array, 0, &Arc::new(entries_field))\n            .expect(\"conversion failed\");\n        let result = &buffer;\n\n        // Verify the structure:\n        // - [0..8]: key array size\n        // - [8..key_end]: key array data (UnsafeArrayData format)\n        // - [key_end..]: value array data (UnsafeArrayData format)\n\n        let key_array_size = i64::from_le_bytes(result[0..8].try_into().unwrap());\n        assert!(key_array_size > 0, \"key array size should be positive\");\n\n        let value_array_start = (8 + key_array_size) as usize;\n        assert!(\n            value_array_start < result.len(),\n            \"value array should start within buffer\"\n        );\n\n        // Read value array\n        let value_num_elements = i64::from_le_bytes(\n            result[value_array_start..value_array_start + 8]\n                .try_into()\n                .unwrap(),\n        );\n        assert_eq!(value_num_elements, 3, \"should have 3 values\");\n\n        // Value array layout for Int32 (4 bytes per element):\n        // [0..8]: numElements = 3\n        // [8..16]: null bitset (8 bytes for up to 64 elements)\n        // [16..20]: element 0 (4 bytes)\n        // [20..24]: element 1 (4 bytes)\n        // [24..28]: element 2 (4 bytes)\n        let value_bitset_width = ColumnarToRowContext::calculate_bitset_width(3);\n        assert_eq!(value_bitset_width, 8);\n\n   
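     // Elements are packed at their natural width in UnsafeArrayData,\n        // unlike top-level row fields, which always occupy 8-byte slots.\n   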
     let element_size = 4; // Int32\n        let slot0_offset = value_array_start + 8 + value_bitset_width;\n        let slot1_offset = slot0_offset + element_size;\n        let slot2_offset = slot1_offset + element_size;\n\n        let value0 = i32::from_le_bytes(result[slot0_offset..slot0_offset + 4].try_into().unwrap());\n        let value1 = i32::from_le_bytes(result[slot1_offset..slot1_offset + 4].try_into().unwrap());\n        let value2 = i32::from_le_bytes(result[slot2_offset..slot2_offset + 4].try_into().unwrap());\n\n        assert_eq!(value0, 0, \"first value should be 0\");\n        assert_eq!(value1, 10, \"second value should be 10\");\n        assert_eq!(value2, 20, \"third value should be 20\");\n    }\n\n    #[test]\n    fn test_map_data_conversion_multiple_rows() {\n        use arrow::datatypes::{Field, Fields};\n\n        // Create multiple maps:\n        // Row 0: {\"key_0\": 0}\n        // Row 1: {\"key_0\": 0, \"key_1\": 10}\n        // Row 2: {\"key_0\": 0, \"key_1\": 10, \"key_2\": 20}\n        // All entries are concatenated in the underlying arrays\n        let keys = StringArray::from(vec![\n            \"key_0\", // row 0\n            \"key_0\", \"key_1\", // row 1\n            \"key_0\", \"key_1\", \"key_2\", // row 2\n        ]);\n        let values = Int32Array::from(vec![\n            0, // row 0\n            0, 10, // row 1\n            0, 10, 20, // row 2\n        ]);\n\n        let entries_field = Field::new(\n            \"entries\",\n            DataType::Struct(Fields::from(vec![\n                Field::new(\"key\", DataType::Utf8, false),\n                Field::new(\"value\", DataType::Int32, true),\n            ])),\n            false,\n        );\n\n        let entries = StructArray::from(vec![\n            (\n                Arc::new(Field::new(\"key\", DataType::Utf8, false)),\n                Arc::new(keys) as ArrayRef,\n            ),\n            (\n                Arc::new(Field::new(\"value\", DataType::Int32, true)),\n                Arc::new(values) as ArrayRef,\n            ),\n        ]);\n\n        // Offsets: row 0 has 1 entry, row 1 has 2 entries, row 2 has 3 entries\n        let map_array = MapArray::new(\n            Arc::new(entries_field.clone()),\n            arrow::buffer::OffsetBuffer::new(vec![0, 1, 3, 6].into()),\n            entries,\n            None,\n            false,\n        );\n\n        // Test row 1 which has 2 entries\n        let mut buffer = Vec::new();\n        write_map_to_buffer(&mut buffer, &map_array, 1, &Arc::new(entries_field.clone()))\n            .expect(\"conversion failed\");\n        let result = &buffer;\n\n        let key_array_size = i64::from_le_bytes(result[0..8].try_into().unwrap());\n        let value_array_start = (8 + key_array_size) as usize;\n\n        let value_num_elements = i64::from_le_bytes(\n            result[value_array_start..value_array_start + 8]\n                .try_into()\n                .unwrap(),\n        );\n        assert_eq!(value_num_elements, 2, \"row 1 should have 2 values\");\n\n        // Int32 uses 4 bytes per element\n        let element_size = 4;\n        let value_bitset_width = ColumnarToRowContext::calculate_bitset_width(2);\n        let slot0_offset = value_array_start + 8 + value_bitset_width;\n        let slot1_offset = slot0_offset + element_size;\n\n        let value0 = i32::from_le_bytes(result[slot0_offset..slot0_offset + 4].try_into().unwrap());\n        let value1 = i32::from_le_bytes(result[slot1_offset..slot1_offset + 4].try_into().unwrap());\n\n     
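   // Row 1 of the map was built as {\"key_0\": 0, \"key_1\": 10}.\n     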
   assert_eq!(value0, 0, \"row 1, first value should be 0\");\n        assert_eq!(value1, 10, \"row 1, second value should be 10\");\n\n        // Also verify that entries slicing is working correctly\n        let entries_row1 = map_array.value(1);\n        assert_eq!(entries_row1.len(), 2, \"row 1 should have 2 entries\");\n\n        let entries_values = entries_row1.column(1);\n        assert_eq!(\n            entries_values.len(),\n            2,\n            \"row 1 values should have 2 elements\"\n        );\n\n        // Check the actual values from the sliced array\n        let values_arr = entries_values\n            .as_any()\n            .downcast_ref::<Int32Array>()\n            .unwrap();\n        assert_eq!(\n            values_arr.value(0),\n            0,\n            \"row 1 value[0] via Arrow should be 0\"\n        );\n        assert_eq!(\n            values_arr.value(1),\n            10,\n            \"row 1 value[1] via Arrow should be 10\"\n        );\n    }\n\n    /// Test map conversion with a sliced MapArray to simulate FFI import behavior.\n    /// When data comes from FFI, the MapArray might be a slice of a larger array,\n    /// and the entries' child arrays might have offsets that don't start at 0.\n    #[test]\n    fn test_map_data_conversion_sliced_maparray() {\n        use arrow::datatypes::{Field, Fields};\n\n        // Create multiple maps (same as above)\n        let keys = StringArray::from(vec![\n            \"key_0\", // row 0\n            \"key_0\", \"key_1\", // row 1\n            \"key_0\", \"key_1\", \"key_2\", // row 2\n        ]);\n        let values = Int32Array::from(vec![\n            0, // row 0\n            0, 10, // row 1\n            0, 10, 20, // row 2\n        ]);\n\n        let entries_field = Field::new(\n            \"entries\",\n            DataType::Struct(Fields::from(vec![\n                Field::new(\"key\", DataType::Utf8, false),\n                Field::new(\"value\", DataType::Int32, true),\n            ])),\n            false,\n        );\n\n        let entries = StructArray::from(vec![\n            (\n                Arc::new(Field::new(\"key\", DataType::Utf8, false)),\n                Arc::new(keys) as ArrayRef,\n            ),\n            (\n                Arc::new(Field::new(\"value\", DataType::Int32, true)),\n                Arc::new(values) as ArrayRef,\n            ),\n        ]);\n\n        let map_array = MapArray::new(\n            Arc::new(entries_field.clone()),\n            arrow::buffer::OffsetBuffer::new(vec![0, 1, 3, 6].into()),\n            entries,\n            None,\n            false,\n        );\n\n        // Slice the MapArray to skip row 0 - this simulates what might happen with FFI\n        let sliced_map = map_array.slice(1, 2);\n        let sliced_map_array = sliced_map.as_any().downcast_ref::<MapArray>().unwrap();\n\n        // Now test row 0 of the sliced array (which is row 1 of the original)\n        let mut buffer = Vec::new();\n        write_map_to_buffer(\n            &mut buffer,\n            sliced_map_array,\n            0,\n            &Arc::new(entries_field.clone()),\n        )\n        .expect(\"conversion failed\");\n        let result = &buffer;\n\n        let key_array_size = i64::from_le_bytes(result[0..8].try_into().unwrap());\n        let value_array_start = (8 + key_array_size) as usize;\n\n        let value_num_elements = i64::from_le_bytes(\n            result[value_array_start..value_array_start + 8]\n                .try_into()\n                .unwrap(),\n        );\n      
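  // Row 0 of the slice corresponds to row 1 of the original map.\n      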
  assert_eq!(value_num_elements, 2, \"sliced row 0 should have 2 values\");\n\n        let value_bitset_width = ColumnarToRowContext::calculate_bitset_width(2);\n        let slot0_offset = value_array_start + 8 + value_bitset_width;\n        let slot1_offset = slot0_offset + 4; // Int32 uses 4 bytes\n\n        let value0 = i32::from_le_bytes(result[slot0_offset..slot0_offset + 4].try_into().unwrap());\n        let value1 = i32::from_le_bytes(result[slot1_offset..slot1_offset + 4].try_into().unwrap());\n\n        assert_eq!(value0, 0, \"sliced row 0, first value should be 0\");\n        assert_eq!(value1, 10, \"sliced row 0, second value should be 10\");\n    }\n\n    #[test]\n    fn test_large_list_data_conversion() {\n        use arrow::datatypes::Field;\n\n        // Create a large list with elements [0, 1, 2, 3, 4]\n        // LargeListArray uses i64 offsets instead of i32\n        let values = Int32Array::from(vec![0, 1, 2, 3, 4]);\n        let offsets = arrow::buffer::OffsetBuffer::new(vec![0i64, 5].into());\n\n        let list_field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n        let list_array =\n            LargeListArray::new(Arc::clone(&list_field), offsets, Arc::new(values), None);\n\n        // Convert the list for row 0\n        let mut buffer = Vec::new();\n        write_large_list_to_buffer(&mut buffer, &list_array, 0, &list_field)\n            .expect(\"conversion failed\");\n        let result = &buffer;\n\n        // UnsafeArrayData format for Int32:\n        // [0..8]: numElements = 5\n        // [8..16]: null bitset (8 bytes for up to 64 elements)\n        // [16..20]: element 0 (4 bytes for Int32)\n        // ... (total 20 bytes for 5 elements, rounded up to 24 for 8-byte alignment)\n\n        let num_elements = i64::from_le_bytes(result[0..8].try_into().unwrap());\n        assert_eq!(num_elements, 5, \"should have 5 elements\");\n\n        let bitset_width = ColumnarToRowContext::calculate_bitset_width(5);\n        assert_eq!(bitset_width, 8);\n\n        // Read each element value (Int32 uses 4 bytes per element)\n        let element_size = 4; // Int32\n        for i in 0..5 {\n            let slot_offset = 8 + bitset_width + i * element_size;\n            let value =\n                i32::from_le_bytes(result[slot_offset..slot_offset + 4].try_into().unwrap());\n            assert_eq!(value, i as i32, \"element {} should be {}\", i, i);\n        }\n    }\n\n    #[test]\n    fn test_convert_fixed_size_binary_array() {\n        // FixedSizeBinary(3) - each value is exactly 3 bytes\n        let schema = vec![DataType::FixedSizeBinary(3)];\n        let mut ctx = ColumnarToRowContext::new(schema, 100);\n\n        let array: ArrayRef = Arc::new(FixedSizeBinaryArray::from(vec![\n            Some(&[1u8, 2, 3][..]),\n            Some(&[4u8, 5, 6][..]),\n            None, // Test null handling\n        ]));\n        let arrays = vec![array];\n\n        let (ptr, offsets, lengths) = ctx.convert(&arrays, 3).unwrap();\n\n        assert!(!ptr.is_null());\n        assert_eq!(offsets.len(), 3);\n        assert_eq!(lengths.len(), 3);\n\n        // Row 0: 8 (bitset) + 8 (field slot) + 8 (aligned 3-byte data) = 24\n        // Row 1: 8 (bitset) + 8 (field slot) + 8 (aligned 3-byte data) = 24\n        // Row 2: 8 (bitset) + 8 (field slot) = 16 (null, no variable data)\n        assert_eq!(lengths[0], 24);\n        assert_eq!(lengths[1], 24);\n        assert_eq!(lengths[2], 16);\n\n        // Verify the data is correct for non-null rows\n        unsafe {\n            let 
row0 =\n                std::slice::from_raw_parts(ptr.add(offsets[0] as usize), lengths[0] as usize);\n            // Variable data starts at offset 16 (8 bitset + 8 field slot)\n            assert_eq!(&row0[16..19], &[1u8, 2, 3]);\n\n            let row1 =\n                std::slice::from_raw_parts(ptr.add(offsets[1] as usize), lengths[1] as usize);\n            assert_eq!(&row1[16..19], &[4u8, 5, 6]);\n        }\n    }\n\n    #[test]\n    fn test_convert_dictionary_decimal_array() {\n        // Test that dictionary-encoded decimals are correctly unpacked and converted\n        // This tests the fix for casting to schema_type instead of value_type\n        use arrow::datatypes::Int8Type;\n\n        // Create a dictionary array with Decimal128 values\n        // Values: [-0.01, -0.02, -0.03] represented as [-1, -2, -3] with scale 2\n        let values = Decimal128Array::from(vec![-1i128, -2, -3])\n            .with_precision_and_scale(5, 2)\n            .unwrap();\n\n        // Keys: [0, 1, 2, 0, 1, 2] - each value appears twice\n        let keys = Int8Array::from(vec![0i8, 1, 2, 0, 1, 2]);\n\n        let dict_array: ArrayRef =\n            Arc::new(DictionaryArray::<Int8Type>::try_new(keys, Arc::new(values)).unwrap());\n\n        // Schema expects Decimal128(5, 2) - not a dictionary type\n        let schema = vec![DataType::Decimal128(5, 2)];\n        let mut ctx = ColumnarToRowContext::new(schema, 100);\n\n        let arrays = vec![dict_array];\n        let (ptr, offsets, lengths) = ctx.convert(&arrays, 6).unwrap();\n\n        assert!(!ptr.is_null());\n        assert_eq!(offsets.len(), 6);\n        assert_eq!(lengths.len(), 6);\n\n        // Verify the decimal values are correct (not doubled or otherwise corrupted)\n        // Fixed-width decimal is stored directly in the 8-byte field slot\n        unsafe {\n            for (i, expected) in [-1i64, -2, -3, -1, -2, -3].iter().enumerate() {\n                let row =\n                    std::slice::from_raw_parts(ptr.add(offsets[i] as usize), lengths[i] as usize);\n                // Field value starts at offset 8 (after null bitset)\n                let value = i64::from_le_bytes(row[8..16].try_into().unwrap());\n                assert_eq!(\n                    value, *expected,\n                    \"Row {} should have value {}, got {}\",\n                    i, expected, value\n                );\n            }\n        }\n    }\n\n    #[test]\n    fn test_convert_int32_to_decimal128() {\n        // Test that Int32 arrays are correctly cast to Decimal128 when schema expects Decimal128.\n        // This can happen when COMET_USE_DECIMAL_128 is false and the parquet reader produces\n        // Int32 for small-precision decimals.\n\n        // Create an Int32 array representing decimals: [-1, -2, -3] which at scale 2 means\n        // [-0.01, -0.02, -0.03]\n        let int_array: ArrayRef = Arc::new(Int32Array::from(vec![-1i32, -2, -3]));\n\n        // Schema expects Decimal128(5, 2)\n        let schema = vec![DataType::Decimal128(5, 2)];\n        let mut ctx = ColumnarToRowContext::new(schema, 100);\n\n        let arrays = vec![int_array];\n        let (ptr, offsets, lengths) = ctx.convert(&arrays, 3).unwrap();\n\n        assert!(!ptr.is_null());\n        assert_eq!(offsets.len(), 3);\n        assert_eq!(lengths.len(), 3);\n\n        // Verify the decimal values are correct after casting\n        // Fixed-width decimal is stored directly in the 8-byte field slot\n        unsafe {\n            for (i, expected) in [-1i64, -2, 
-3].iter().enumerate() {\n                let row =\n                    std::slice::from_raw_parts(ptr.add(offsets[i] as usize), lengths[i] as usize);\n                // Field value starts at offset 8 (after null bitset)\n                let value = i64::from_le_bytes(row[8..16].try_into().unwrap());\n                assert_eq!(\n                    value, *expected,\n                    \"Row {} should have value {}, got {}\",\n                    i, expected, value\n                );\n            }\n        }\n    }\n\n    #[test]\n    fn test_convert_int64_to_decimal128() {\n        // Test that Int64 arrays are correctly cast to Decimal128 when schema expects Decimal128.\n        // This can happen when COMET_USE_DECIMAL_128 is false and the parquet reader produces\n        // Int64 for medium-precision decimals.\n\n        // Create an Int64 array representing decimals\n        let int_array: ArrayRef = Arc::new(Int64Array::from(vec![-100i64, -200, -300]));\n\n        // Schema expects Decimal128(10, 2)\n        let schema = vec![DataType::Decimal128(10, 2)];\n        let mut ctx = ColumnarToRowContext::new(schema, 100);\n\n        let arrays = vec![int_array];\n        let (ptr, offsets, lengths) = ctx.convert(&arrays, 3).unwrap();\n\n        assert!(!ptr.is_null());\n        assert_eq!(offsets.len(), 3);\n        assert_eq!(lengths.len(), 3);\n\n        // Verify the decimal values are correct after casting\n        unsafe {\n            for (i, expected) in [-100i64, -200, -300].iter().enumerate() {\n                let row =\n                    std::slice::from_raw_parts(ptr.add(offsets[i] as usize), lengths[i] as usize);\n                // Field value starts at offset 8 (after null bitset)\n                let value = i64::from_le_bytes(row[8..16].try_into().unwrap());\n                assert_eq!(\n                    value, *expected,\n                    \"Row {} should have value {}, got {}\",\n                    i, expected, value\n                );\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/expressions/arithmetic.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Arithmetic expression builders\n\nuse std::any::Any;\nuse std::fmt::{Display, Formatter};\nuse std::hash::{Hash, Hasher};\n\nuse arrow::datatypes::{DataType, Schema};\nuse arrow::record_batch::RecordBatch;\nuse datafusion::common::DataFusionError;\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion_comet_spark_expr::{QueryContext, SparkError, SparkErrorWithContext};\n\n/// Wrapper expression that catches and wraps SparkError with QueryContext\n/// for binary arithmetic operations.\n#[derive(Debug)]\npub struct CheckedBinaryExpr {\n    /// The underlying physical expression (typically a ScalarFunctionExpr)\n    child: Arc<dyn PhysicalExpr>,\n    /// Optional query context to attach to errors\n    query_context: Option<Arc<QueryContext>>,\n}\n\nimpl CheckedBinaryExpr {\n    pub fn new(child: Arc<dyn PhysicalExpr>, query_context: Option<Arc<QueryContext>>) -> Self {\n        Self {\n            child,\n            query_context,\n        }\n    }\n}\n\nimpl Display for CheckedBinaryExpr {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"CheckedBinaryExpr({})\", self.child)\n    }\n}\n\nimpl PartialEq for CheckedBinaryExpr {\n    fn eq(&self, other: &Self) -> bool {\n        self.child.eq(&other.child)\n    }\n}\n\nimpl Eq for CheckedBinaryExpr {}\n\nimpl PartialEq<dyn Any> for CheckedBinaryExpr {\n    fn eq(&self, other: &dyn Any) -> bool {\n        other\n            .downcast_ref::<Self>()\n            .map(|x| self.eq(x))\n            .unwrap_or(false)\n    }\n}\n\nimpl Hash for CheckedBinaryExpr {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n    }\n}\n\nimpl PhysicalExpr for CheckedBinaryExpr {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        self.child.fmt_sql(f)\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> datafusion::common::Result<DataType> {\n        self.child.data_type(input_schema)\n    }\n\n    fn nullable(&self, input_schema: &Schema) -> datafusion::common::Result<bool> {\n        self.child.nullable(input_schema)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> datafusion::common::Result<ColumnarValue> {\n        let result = self.child.evaluate(batch);\n\n        // If there's an error and we have query_context, wrap it\n        match result {\n            Err(DataFusionError::External(e)) if self.query_context.is_some() => {\n                if let Some(spark_err) = e.downcast_ref::<SparkError>() {\n                    let wrapped = SparkErrorWithContext::with_context(\n                        spark_err.clone(),\n  
                      Arc::clone(self.query_context.as_ref().unwrap()),\n                    );\n                    Err(DataFusionError::External(Box::new(wrapped)))\n                } else {\n                    Err(DataFusionError::External(e))\n                }\n            }\n            other => other,\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        match children.len() {\n            1 => Ok(Arc::new(CheckedBinaryExpr::new(\n                Arc::clone(&children[0]),\n                self.query_context.clone(),\n            ))),\n            _ => Err(DataFusionError::Internal(\n                \"CheckedBinaryExpr should have exactly one child\".to_string(),\n            )),\n        }\n    }\n}\n\n/// Macro to generate arithmetic expression builders that need eval_mode handling\n#[macro_export]\nmacro_rules! arithmetic_expr_builder {\n    ($builder_name:ident, $expr_type:ident, $operator:expr) => {\n        pub struct $builder_name;\n\n        impl $crate::execution::planner::expression_registry::ExpressionBuilder for $builder_name {\n            fn build(\n                &self,\n                spark_expr: &datafusion_comet_proto::spark_expression::Expr,\n                input_schema: arrow::datatypes::SchemaRef,\n                planner: &$crate::execution::planner::PhysicalPlanner,\n            ) -> Result<\n                std::sync::Arc<dyn datafusion::physical_expr::PhysicalExpr>,\n                $crate::execution::operators::ExecutionError,\n            > {\n                let expr = $crate::extract_expr!(spark_expr, $expr_type);\n                let eval_mode =\n                    $crate::execution::planner::from_protobuf_eval_mode(expr.eval_mode)?;\n                planner.create_binary_expr(\n                    spark_expr, // Pass the full spark_expr for query_context lookup\n                    expr.left.as_ref().unwrap(),\n                    expr.right.as_ref().unwrap(),\n                    expr.return_type.as_ref(),\n                    $operator,\n                    input_schema,\n                    eval_mode,\n                )\n            }\n        }\n    };\n}\n\nuse std::sync::Arc;\n\nuse arrow::datatypes::SchemaRef;\nuse datafusion::logical_expr::Operator as DataFusionOperator;\nuse datafusion_comet_proto::spark_expression::Expr;\nuse datafusion_comet_spark_expr::{create_modulo_expr, create_negate_expr, EvalMode};\n\nuse crate::execution::{\n    expressions::extract_expr,\n    operators::ExecutionError,\n    planner::{\n        expression_registry::ExpressionBuilder, from_protobuf_eval_mode, BinaryExprOptions,\n        PhysicalPlanner,\n    },\n};\n\n/// Macro to define basic arithmetic builders that use eval_mode\nmacro_rules! define_basic_arithmetic_builders {\n    ($(($builder:ident, $expr_type:ident, $op:expr)),* $(,)?) 
=> {\n        $(\n            arithmetic_expr_builder!($builder, $expr_type, $op);\n        )*\n    };\n}\n\ndefine_basic_arithmetic_builders![\n    (AddBuilder, Add, DataFusionOperator::Plus),\n    (SubtractBuilder, Subtract, DataFusionOperator::Minus),\n    (MultiplyBuilder, Multiply, DataFusionOperator::Multiply),\n    (DivideBuilder, Divide, DataFusionOperator::Divide),\n];\n\n/// Builder for IntegralDivide expressions (requires special options)\npub struct IntegralDivideBuilder;\n\nimpl ExpressionBuilder for IntegralDivideBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, IntegralDivide);\n        let eval_mode = from_protobuf_eval_mode(expr.eval_mode)?;\n        planner.create_binary_expr_with_options(\n            spark_expr, // Pass the full spark_expr for query_context lookup\n            expr.left.as_ref().unwrap(),\n            expr.right.as_ref().unwrap(),\n            expr.return_type.as_ref(),\n            DataFusionOperator::Divide,\n            input_schema,\n            BinaryExprOptions {\n                is_integral_div: true,\n            },\n            eval_mode,\n        )\n    }\n}\n\n/// Builder for Remainder expressions (uses special modulo function)\npub struct RemainderBuilder;\n\nimpl ExpressionBuilder for RemainderBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, Remainder);\n        let eval_mode = from_protobuf_eval_mode(expr.eval_mode)?;\n        let left = planner.create_expr(expr.left.as_ref().unwrap(), Arc::clone(&input_schema))?;\n        let right = planner.create_expr(expr.right.as_ref().unwrap(), Arc::clone(&input_schema))?;\n\n        let result = create_modulo_expr(\n            left,\n            right,\n            expr.return_type\n                .as_ref()\n                .map(crate::execution::serde::to_arrow_datatype)\n                .unwrap(),\n            input_schema,\n            eval_mode == EvalMode::Ansi,\n            &planner.session_ctx().state(),\n        );\n        result.map_err(|e| ExecutionError::GeneralError(e.to_string()))\n    }\n}\n\n/// Builder for UnaryMinus expressions (uses special negate function)\npub struct UnaryMinusBuilder;\n\nimpl ExpressionBuilder for UnaryMinusBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, UnaryMinus);\n        let child = planner.create_expr(expr.child.as_ref().unwrap(), input_schema)?;\n        let result = create_negate_expr(child, expr.fail_on_error);\n        result.map_err(|e| ExecutionError::GeneralError(e.to_string()))\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/expressions/bitwise.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Bitwise expression builders\n\nuse datafusion::logical_expr::Operator as DataFusionOperator;\n\nuse crate::binary_expr_builder;\n\n/// Macro to define all bitwise builders at once\nmacro_rules! define_bitwise_builders {\n    ($(($builder:ident, $expr_type:ident, $op:expr)),* $(,)?) => {\n        $(\n            binary_expr_builder!($builder, $expr_type, $op);\n        )*\n    };\n}\n\ndefine_bitwise_builders![\n    (\n        BitwiseAndBuilder,\n        BitwiseAnd,\n        DataFusionOperator::BitwiseAnd\n    ),\n    (BitwiseOrBuilder, BitwiseOr, DataFusionOperator::BitwiseOr),\n    (\n        BitwiseXorBuilder,\n        BitwiseXor,\n        DataFusionOperator::BitwiseXor\n    ),\n    (\n        BitwiseShiftLeftBuilder,\n        BitwiseShiftLeft,\n        DataFusionOperator::BitwiseShiftLeft\n    ),\n    (\n        BitwiseShiftRightBuilder,\n        BitwiseShiftRight,\n        DataFusionOperator::BitwiseShiftRight\n    ),\n];\n"
  },
  {
    "path": "native/core/src/execution/expressions/comparison.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Comparison expression builders\n\nuse datafusion::logical_expr::Operator as DataFusionOperator;\n\nuse crate::binary_expr_builder;\n\n/// Macro to define all comparison builders at once\nmacro_rules! define_comparison_builders {\n    ($(($builder:ident, $expr_type:ident, $op:expr)),* $(,)?) => {\n        $(\n            binary_expr_builder!($builder, $expr_type, $op);\n        )*\n    };\n}\n\ndefine_comparison_builders![\n    (EqBuilder, Eq, DataFusionOperator::Eq),\n    (NeqBuilder, Neq, DataFusionOperator::NotEq),\n    (LtBuilder, Lt, DataFusionOperator::Lt),\n    (LtEqBuilder, LtEq, DataFusionOperator::LtEq),\n    (GtBuilder, Gt, DataFusionOperator::Gt),\n    (GtEqBuilder, GtEq, DataFusionOperator::GtEq),\n    (\n        EqNullSafeBuilder,\n        EqNullSafe,\n        DataFusionOperator::IsNotDistinctFrom\n    ),\n    (\n        NeqNullSafeBuilder,\n        NeqNullSafe,\n        DataFusionOperator::IsDistinctFrom\n    ),\n];\n"
  },
  {
    "path": "native/core/src/execution/expressions/logical.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Logical expression builders\n\nuse datafusion::logical_expr::Operator as DataFusionOperator;\nuse datafusion::physical_expr::expressions::NotExpr;\n\nuse crate::{binary_expr_builder, unary_expr_builder};\n\n/// Macro to define all logical builders at once\nmacro_rules! define_logical_builders {\n    () => {\n        binary_expr_builder!(AndBuilder, And, DataFusionOperator::And);\n        binary_expr_builder!(OrBuilder, Or, DataFusionOperator::Or);\n        unary_expr_builder!(NotBuilder, Not, NotExpr::new);\n    };\n}\n\ndefine_logical_builders!();\n"
  },
  {
    "path": "native/core/src/execution/expressions/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Native DataFusion expressions\n\npub mod arithmetic;\npub mod bitwise;\npub mod comparison;\npub mod logical;\npub mod nullcheck;\npub mod partition;\npub mod random;\npub mod strings;\npub mod subquery;\npub mod temporal;\n\npub use datafusion_comet_spark_expr::EvalMode;\n\n// Re-export the extract_expr macro for convenience in expression builders\npub use crate::extract_expr;\n"
  },
  {
    "path": "native/core/src/execution/expressions/nullcheck.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Null check expression builders\n\nuse datafusion::physical_expr::expressions::{IsNotNullExpr, IsNullExpr};\n\nuse crate::unary_expr_builder;\n\n/// Macro to define all null check builders at once\nmacro_rules! define_null_check_builders {\n    () => {\n        unary_expr_builder!(IsNullBuilder, IsNull, IsNullExpr::new);\n        unary_expr_builder!(IsNotNullBuilder, IsNotNull, IsNotNullExpr::new);\n    };\n}\n\ndefine_null_check_builders!();\n"
  },
  {
    "path": "native/core/src/execution/expressions/partition.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::execution::operators::ExecutionError;\nuse crate::execution::planner::expression_registry::ExpressionBuilder;\nuse crate::execution::planner::PhysicalPlanner;\nuse arrow::datatypes::SchemaRef;\nuse datafusion::common::ScalarValue;\nuse datafusion::physical_expr::expressions::Literal;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion_comet_proto::spark_expression::Expr;\nuse datafusion_comet_spark_expr::monotonically_increasing_id::MonotonicallyIncreasingId;\nuse std::sync::Arc;\n\npub struct SparkPartitionIdBuilder;\n\nimpl ExpressionBuilder for SparkPartitionIdBuilder {\n    fn build(\n        &self,\n        _spark_expr: &Expr,\n        _input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        Ok(Arc::new(Literal::new(ScalarValue::Int32(Some(\n            planner.partition(),\n        )))))\n    }\n}\n\npub struct MonotonicallyIncreasingIdBuilder;\n\nimpl ExpressionBuilder for MonotonicallyIncreasingIdBuilder {\n    fn build(\n        &self,\n        _spark_expr: &Expr,\n        _input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        Ok(Arc::new(MonotonicallyIncreasingId::from_partition_id(\n            planner.partition(),\n        )))\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/expressions/random.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::execution::operators::ExecutionError;\nuse crate::execution::planner::expression_registry::ExpressionBuilder;\nuse crate::execution::planner::PhysicalPlanner;\nuse crate::extract_expr;\nuse arrow::datatypes::SchemaRef;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion_comet_proto::spark_expression::Expr;\nuse datafusion_comet_spark_expr::{RandExpr, RandnExpr};\nuse std::sync::Arc;\n\npub struct RandBuilder;\n\nimpl ExpressionBuilder for RandBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        _input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, Rand);\n        let seed = expr.seed.wrapping_add(planner.partition().into());\n        Ok(Arc::new(RandExpr::new(seed)))\n    }\n}\n\npub struct RandnBuilder;\n\nimpl ExpressionBuilder for RandnBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        _input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, Randn);\n        let seed = expr.seed.wrapping_add(planner.partition().into());\n        Ok(Arc::new(RandnExpr::new(seed)))\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/expressions/strings.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! String expression builders\n\nuse std::cmp::max;\nuse std::sync::Arc;\n\nuse arrow::datatypes::SchemaRef;\nuse datafusion::common::ScalarValue;\nuse datafusion::physical_expr::expressions::{LikeExpr, Literal};\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion_comet_proto::spark_expression::Expr;\nuse datafusion_comet_spark_expr::{FromJson, RLike, SubstringExpr};\n\nuse crate::execution::{\n    expressions::extract_expr,\n    operators::ExecutionError,\n    planner::{expression_registry::ExpressionBuilder, PhysicalPlanner},\n    serde::to_arrow_datatype,\n};\n\n/// Builder for Substring expressions\npub struct SubstringBuilder;\n\nimpl ExpressionBuilder for SubstringBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, Substring);\n        let child = planner.create_expr(expr.child.as_ref().unwrap(), input_schema)?;\n        // Spark Substring's start is 1-based when start > 0\n        let start = expr.start - i32::from(expr.start > 0);\n        // substring negative len is treated as 0 in Spark\n        let len = max(expr.len, 0);\n\n        Ok(Arc::new(SubstringExpr::new(\n            child,\n            start as i64,\n            len as u64,\n        )))\n    }\n}\n\n/// Builder for Like expressions\npub struct LikeBuilder;\n\nimpl ExpressionBuilder for LikeBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, Like);\n        let left = planner.create_expr(expr.left.as_ref().unwrap(), Arc::clone(&input_schema))?;\n        let right = planner.create_expr(expr.right.as_ref().unwrap(), input_schema)?;\n\n        Ok(Arc::new(LikeExpr::new(false, false, left, right)))\n    }\n}\n\n/// Builder for Rlike (regex like) expressions\npub struct RlikeBuilder;\n\nimpl ExpressionBuilder for RlikeBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, Rlike);\n        let left = planner.create_expr(expr.left.as_ref().unwrap(), Arc::clone(&input_schema))?;\n        let right = planner.create_expr(expr.right.as_ref().unwrap(), input_schema)?;\n\n        match right.as_any().downcast_ref::<Literal>().unwrap().value() {\n            ScalarValue::Utf8(Some(pattern)) => Ok(Arc::new(RLike::try_new(left, 
pattern)?)),\n            _ => Err(ExecutionError::GeneralError(\n                \"RLike only supports scalar patterns\".to_string(),\n            )),\n        }\n    }\n}\n\npub struct FromJsonBuilder;\n\nimpl ExpressionBuilder for FromJsonBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, FromJson);\n        let child = planner.create_expr(\n            expr.child.as_ref().ok_or_else(|| {\n                ExecutionError::GeneralError(\"FromJson missing child\".to_string())\n            })?,\n            input_schema,\n        )?;\n        let schema =\n            to_arrow_datatype(expr.schema.as_ref().ok_or_else(|| {\n                ExecutionError::GeneralError(\"FromJson missing schema\".to_string())\n            })?);\n        Ok(Arc::new(FromJson::new(child, schema, &expr.timezone)))\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/expressions/subquery.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::{\n    execution::utils::bytes_to_i128,\n    jvm_bridge::{BinaryWrapper, JVMClasses, StringWrapper},\n};\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::{DataType, Schema, TimeUnit};\nuse datafusion::common::{internal_err, ScalarValue};\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse jni::{\n    objects::{JByteArray, JString},\n    sys::{jboolean, jbyte, jint, jlong, jshort},\n};\nuse std::{\n    any::Any,\n    fmt::{Display, Formatter},\n    hash::Hash,\n    sync::Arc,\n};\n\n#[derive(Debug, Hash, PartialEq, Eq)]\npub struct Subquery {\n    /// The ID of the execution context that owns this subquery. We use this ID to retrieve the\n    /// subquery result.\n    exec_context_id: i64,\n    /// The ID of the subquery, we retrieve the subquery result from JVM using this ID.\n    pub id: i64,\n    /// The data type of the subquery result.\n    pub data_type: DataType,\n}\n\nimpl Subquery {\n    pub fn new(exec_context_id: i64, id: i64, data_type: DataType) -> Self {\n        Self {\n            exec_context_id,\n            id,\n            data_type,\n        }\n    }\n}\n\nimpl Display for Subquery {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"Subquery [id: {}]\", self.id)\n    }\n}\n\nimpl PhysicalExpr for Subquery {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, _: &Schema) -> datafusion::common::Result<DataType> {\n        Ok(self.data_type.clone())\n    }\n\n    fn nullable(&self, _: &Schema) -> datafusion::common::Result<bool> {\n        Ok(true)\n    }\n\n    fn evaluate(&self, _: &RecordBatch) -> datafusion::common::Result<ColumnarValue> {\n        JVMClasses::with_env(|env| unsafe {\n            let is_null = jni_static_call!(env,\n                comet_exec.is_null(self.exec_context_id, self.id) -> jboolean\n            )?;\n\n            if is_null {\n                return Ok(ColumnarValue::Scalar(ScalarValue::try_from(\n                    &self.data_type,\n                )?));\n            }\n\n            match &self.data_type {\n                DataType::Boolean => {\n                    let r = jni_static_call!(env,\n                        comet_exec.get_bool(self.exec_context_id, self.id) -> jboolean\n                    )?;\n                    Ok(ColumnarValue::Scalar(ScalarValue::Boolean(Some(r))))\n                }\n                DataType::Int8 => {\n                    let r = jni_static_call!(env,\n                        comet_exec.get_byte(self.exec_context_id, self.id) -> jbyte\n            
        )?;\n                    Ok(ColumnarValue::Scalar(ScalarValue::Int8(Some(r))))\n                }\n                DataType::Int16 => {\n                    let r = jni_static_call!(env,\n                        comet_exec.get_short(self.exec_context_id, self.id) -> jshort\n                    )?;\n                    Ok(ColumnarValue::Scalar(ScalarValue::Int16(Some(r))))\n                }\n                DataType::Int32 => {\n                    let r = jni_static_call!(env,\n                        comet_exec.get_int(self.exec_context_id, self.id) -> jint\n                    )?;\n                    Ok(ColumnarValue::Scalar(ScalarValue::Int32(Some(r))))\n                }\n                DataType::Int64 => {\n                    let r = jni_static_call!(env,\n                        comet_exec.get_long(self.exec_context_id, self.id) -> jlong\n                    )?;\n                    Ok(ColumnarValue::Scalar(ScalarValue::Int64(Some(r))))\n                }\n                DataType::Float32 => {\n                    let r = jni_static_call!(env,\n                        comet_exec.get_float(self.exec_context_id, self.id) -> f32\n                    )?;\n                    Ok(ColumnarValue::Scalar(ScalarValue::Float32(Some(r))))\n                }\n                DataType::Float64 => {\n                    let r = jni_static_call!(env,\n                        comet_exec.get_double(self.exec_context_id, self.id) -> f64\n                    )?;\n\n                    Ok(ColumnarValue::Scalar(ScalarValue::Float64(Some(r))))\n                }\n                DataType::Decimal128(p, s) => {\n                    let bytes = jni_static_call!(env,\n                        comet_exec.get_decimal(self.exec_context_id, self.id) -> BinaryWrapper\n                    )?;\n                    let bytes = JByteArray::from_raw(env, bytes.get().as_raw());\n                    let slice = env.convert_byte_array(bytes).unwrap();\n\n                    Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(\n                        Some(bytes_to_i128(&slice)),\n                        *p,\n                        *s,\n                    )))\n                }\n                DataType::Date32 => {\n                    let r = jni_static_call!(env,\n                        comet_exec.get_int(self.exec_context_id, self.id) -> jint\n                    )?;\n\n                    Ok(ColumnarValue::Scalar(ScalarValue::Date32(Some(r))))\n                }\n                DataType::Timestamp(TimeUnit::Microsecond, timezone) => {\n                    let r = jni_static_call!(env,\n                        comet_exec.get_long(self.exec_context_id, self.id) -> jlong\n                    )?;\n\n                    Ok(ColumnarValue::Scalar(ScalarValue::TimestampMicrosecond(\n                        Some(r),\n                        timezone.clone(),\n                    )))\n                }\n                DataType::Utf8 => {\n                    let string = jni_static_call!(env,\n                        comet_exec.get_string(self.exec_context_id, self.id) -> StringWrapper\n                    )?;\n\n                    let string = JString::from_raw(env, string.get().as_raw())\n                        .try_to_string(env)\n                        .unwrap();\n                    Ok(ColumnarValue::Scalar(ScalarValue::Utf8(Some(string))))\n                }\n                DataType::Binary => {\n                    let bytes = jni_static_call!(env,\n                        
comet_exec.get_binary(self.exec_context_id, self.id) -> BinaryWrapper\n                    )?;\n                    let bytes = JByteArray::from_raw(env, bytes.get().as_raw());\n                    let slice = env.convert_byte_array(bytes).unwrap();\n\n                    Ok(ColumnarValue::Scalar(ScalarValue::Binary(Some(slice))))\n                }\n                _ => internal_err!(\"Unsupported scalar subquery data type {:?}\", self.data_type),\n            }\n        })\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        _: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        Ok(self)\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/expressions/temporal.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Temporal expression builders\n\nuse std::sync::Arc;\n\nuse arrow::datatypes::{DataType, Field, SchemaRef};\nuse datafusion::config::ConfigOptions;\nuse datafusion::logical_expr::ScalarUDF;\nuse datafusion::physical_expr::{PhysicalExpr, ScalarFunctionExpr};\nuse datafusion_comet_proto::spark_expression::Expr;\nuse datafusion_comet_spark_expr::{\n    SparkHour, SparkHoursTransform, SparkMinute, SparkSecond, SparkUnixTimestamp,\n    TimestampTruncExpr,\n};\n\nuse crate::execution::{\n    expressions::extract_expr,\n    operators::ExecutionError,\n    planner::{expression_registry::ExpressionBuilder, PhysicalPlanner},\n};\n\npub struct HourBuilder;\n\nimpl ExpressionBuilder for HourBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, Hour);\n        let child = planner.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n        let timezone = expr.timezone.clone();\n        let args = vec![child];\n        let comet_hour = Arc::new(ScalarUDF::new_from_impl(SparkHour::new(timezone)));\n        let field_ref = Arc::new(Field::new(\"hour\", DataType::Int32, true));\n        let expr: ScalarFunctionExpr = ScalarFunctionExpr::new(\n            \"hour\",\n            comet_hour,\n            args,\n            field_ref,\n            Arc::new(ConfigOptions::default()),\n        );\n\n        Ok(Arc::new(expr))\n    }\n}\n\npub struct MinuteBuilder;\n\nimpl ExpressionBuilder for MinuteBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, Minute);\n        let child = planner.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n        let timezone = expr.timezone.clone();\n        let args = vec![child];\n        let comet_minute = Arc::new(ScalarUDF::new_from_impl(SparkMinute::new(timezone)));\n        let field_ref = Arc::new(Field::new(\"minute\", DataType::Int32, true));\n        let expr: ScalarFunctionExpr = ScalarFunctionExpr::new(\n            \"minute\",\n            comet_minute,\n            args,\n            field_ref,\n            Arc::new(ConfigOptions::default()),\n        );\n\n        Ok(Arc::new(expr))\n    }\n}\n\npub struct SecondBuilder;\n\nimpl ExpressionBuilder for SecondBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn 
PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, Second);\n        let child = planner.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n        let timezone = expr.timezone.clone();\n        let args = vec![child];\n        let comet_second = Arc::new(ScalarUDF::new_from_impl(SparkSecond::new(timezone)));\n        let field_ref = Arc::new(Field::new(\"second\", DataType::Int32, true));\n        let expr: ScalarFunctionExpr = ScalarFunctionExpr::new(\n            \"second\",\n            comet_second,\n            args,\n            field_ref,\n            Arc::new(ConfigOptions::default()),\n        );\n\n        Ok(Arc::new(expr))\n    }\n}\n\npub struct UnixTimestampBuilder;\n\nimpl ExpressionBuilder for UnixTimestampBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, UnixTimestamp);\n        let child = planner.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n        let timezone = expr.timezone.clone();\n        let args = vec![child];\n        let comet_unix_timestamp =\n            Arc::new(ScalarUDF::new_from_impl(SparkUnixTimestamp::new(timezone)));\n        let field_ref = Arc::new(Field::new(\"unix_timestamp\", DataType::Int64, true));\n        let expr: ScalarFunctionExpr = ScalarFunctionExpr::new(\n            \"unix_timestamp\",\n            comet_unix_timestamp,\n            args,\n            field_ref,\n            Arc::new(ConfigOptions::default()),\n        );\n\n        Ok(Arc::new(expr))\n    }\n}\n\npub struct TruncTimestampBuilder;\n\nimpl ExpressionBuilder for TruncTimestampBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, TruncTimestamp);\n        let child = planner.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n        let format = planner.create_expr(expr.format.as_ref().unwrap(), input_schema)?;\n        let timezone = expr.timezone.clone();\n\n        Ok(Arc::new(TimestampTruncExpr::new(child, format, timezone)))\n    }\n}\n\npub struct HoursTransformBuilder;\n\nimpl ExpressionBuilder for HoursTransformBuilder {\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr = extract_expr!(spark_expr, HoursTransform);\n        let child = planner.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n        let args = vec![child];\n        let comet_hours_transform = Arc::new(ScalarUDF::new_from_impl(SparkHoursTransform::new()));\n        let field_ref = Arc::new(Field::new(\"hours_transform\", DataType::Int32, true));\n        let expr: ScalarFunctionExpr = ScalarFunctionExpr::new(\n            \"hours_transform\",\n            comet_hours_transform,\n            args,\n            field_ref,\n            Arc::new(ConfigOptions::default()),\n        );\n\n        Ok(Arc::new(expr))\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/jni_api.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Define JNI APIs which can be called from Java/Scala.\n\nuse super::{serde, utils::SparkArrowConvert};\nuse crate::{\n    errors::{try_unwrap_or_throw, CometError, CometResult},\n    execution::{\n        metrics::utils::update_comet_metric, planner::PhysicalPlanner, serde::to_arrow_datatype,\n        shuffle::spark_unsafe::row::process_sorted_row_partition, sort::RdxSort,\n    },\n    jvm_bridge::JVMClasses,\n};\nuse std::collections::HashSet;\n\nuse arrow::array::{Array, RecordBatch, UInt32Array};\nuse arrow::compute::{take, TakeOptions};\nuse arrow::datatypes::DataType as ArrowDataType;\nuse datafusion::common::{DataFusionError, Result as DataFusionResult, ScalarValue};\nuse datafusion::execution::disk_manager::DiskManagerMode;\nuse datafusion::execution::memory_pool::MemoryPool;\nuse datafusion::execution::runtime_env::RuntimeEnvBuilder;\nuse datafusion::logical_expr::ScalarUDF;\nuse datafusion::{\n    execution::disk_manager::DiskManagerBuilder,\n    physical_plan::{display::DisplayableExecutionPlan, SendableRecordBatchStream},\n    prelude::{SessionConfig, SessionContext},\n};\nuse datafusion_comet_proto::spark_operator::Operator;\nuse datafusion_spark::function::array::array_contains::SparkArrayContains;\nuse datafusion_spark::function::bitwise::bit_count::SparkBitCount;\nuse datafusion_spark::function::bitwise::bit_get::SparkBitGet;\nuse datafusion_spark::function::bitwise::bitwise_not::SparkBitwiseNot;\nuse datafusion_spark::function::datetime::date_add::SparkDateAdd;\nuse datafusion_spark::function::datetime::date_sub::SparkDateSub;\nuse datafusion_spark::function::datetime::last_day::SparkLastDay;\nuse datafusion_spark::function::datetime::next_day::SparkNextDay;\nuse datafusion_spark::function::hash::crc32::SparkCrc32;\nuse datafusion_spark::function::hash::sha1::SparkSha1;\nuse datafusion_spark::function::hash::sha2::SparkSha2;\nuse datafusion_spark::function::map::map_from_entries::MapFromEntries;\nuse datafusion_spark::function::math::expm1::SparkExpm1;\nuse datafusion_spark::function::math::hex::SparkHex;\nuse datafusion_spark::function::math::width_bucket::SparkWidthBucket;\nuse datafusion_spark::function::string::char::CharFunc;\nuse datafusion_spark::function::string::concat::SparkConcat;\nuse datafusion_spark::function::string::luhn_check::SparkLuhnCheck;\nuse datafusion_spark::function::string::space::SparkSpace;\nuse futures::poll;\nuse futures::stream::StreamExt;\nuse futures::FutureExt;\nuse jni::objects::JByteBuffer;\nuse jni::sys::{jlongArray, JNI_FALSE};\nuse jni::{\n    errors::Result as JNIResult,\n    objects::{\n        Global, JByteArray, JClass, JIntArray, JLongArray, JObject, JObjectArray, JString,\n        ReleaseMode,\n    },\n  
  sys::{jboolean, jdouble, jint, jlong},\n    Env, EnvUnowned,\n};\nuse parking_lot::Mutex;\nuse std::collections::HashMap;\nuse std::path::PathBuf;\nuse std::time::{Duration, Instant};\nuse std::{sync::Arc, task::Poll};\nuse tokio::runtime::Runtime;\nuse tokio::sync::mpsc;\n\nuse crate::execution::memory_pools::{\n    create_memory_pool, handle_task_shared_pool_release, parse_memory_pool_config, MemoryPoolConfig,\n};\nuse crate::execution::operators::{ScanExec, ShuffleScanExec};\nuse crate::execution::shuffle::{read_ipc_compressed, CompressionCodec};\nuse crate::execution::spark_plan::SparkPlan;\n\nuse crate::execution::tracing::{\n    get_thread_id, log_memory_usage, trace_begin, trace_end, with_trace,\n};\n\nuse crate::execution::memory_pools::logging_pool::LoggingMemoryPool;\nuse crate::execution::spark_config::{\n    SparkConfig, COMET_DEBUG_ENABLED, COMET_DEBUG_MEMORY, COMET_EXPLAIN_NATIVE_ENABLED,\n    COMET_MAX_TEMP_DIRECTORY_SIZE, COMET_TRACING_ENABLED, SPARK_EXECUTOR_CORES,\n};\nuse crate::parquet::encryption_support::{CometEncryptionFactory, ENCRYPTION_FACTORY_ID};\nuse datafusion_comet_proto::spark_operator::operator::OpStruct;\nuse log::info;\nuse std::sync::OnceLock;\n#[cfg(feature = \"jemalloc\")]\nuse tikv_jemalloc_ctl::{epoch, stats};\n\nstatic TOKIO_RUNTIME: OnceLock<Runtime> = OnceLock::new();\n\n#[cfg(feature = \"jemalloc\")]\nfn log_jemalloc_usage() {\n    let e = epoch::mib().unwrap();\n    let allocated = stats::allocated::mib().unwrap();\n    e.advance().unwrap();\n    log_memory_usage(\"jemalloc_allocated\", allocated.read().unwrap() as u64);\n}\n\n/// Registry of active memory pools per Rust thread ID.\n/// Used to sum memory reservations across all contexts on the same thread for tracing.\ntype ThreadPoolMap = HashMap<u64, HashMap<i64, Arc<dyn MemoryPool>>>;\n\nstatic THREAD_MEMORY_POOLS: OnceLock<Mutex<ThreadPoolMap>> = OnceLock::new();\n\nfn get_thread_memory_pools() -> &'static Mutex<ThreadPoolMap> {\n    THREAD_MEMORY_POOLS.get_or_init(|| Mutex::new(HashMap::new()))\n}\n\nfn register_memory_pool(thread_id: u64, context_id: i64, pool: Arc<dyn MemoryPool>) {\n    get_thread_memory_pools()\n        .lock()\n        .entry(thread_id)\n        .or_default()\n        .insert(context_id, pool);\n}\n\n/// Unregister a context's pool and return the remaining total reserved for the thread.\nfn unregister_and_total(thread_id: u64, context_id: i64) -> usize {\n    let mut map = get_thread_memory_pools().lock();\n    if let Some(pools) = map.get_mut(&thread_id) {\n        pools.remove(&context_id);\n        if pools.is_empty() {\n            map.remove(&thread_id);\n            return 0;\n        }\n        let mut seen = HashSet::new();\n        return pools\n            .values()\n            .filter_map(|p| {\n                let ptr = Arc::as_ptr(p) as *const ();\n                seen.insert(ptr).then(|| p.reserved())\n            })\n            .sum::<usize>();\n    }\n    0\n}\n\nfn total_reserved_for_thread(thread_id: u64) -> usize {\n    let map = get_thread_memory_pools().lock();\n    map.get(&thread_id)\n        .map(|pools| {\n            // Deduplicate pools that share the same underlying allocation\n            // (e.g. 
task-shared pools registered by multiple execution contexts)\n            let mut seen = HashSet::new();\n            pools\n                .values()\n                .filter_map(|p| {\n                    let ptr = Arc::as_ptr(p) as *const ();\n                    seen.insert(ptr).then(|| p.reserved())\n                })\n                .sum::<usize>()\n        })\n        .unwrap_or(0)\n}\n\nfn parse_usize_env_var(name: &str) -> Option<usize> {\n    std::env::var_os(name).and_then(|n| n.to_str().and_then(|s| s.parse::<usize>().ok()))\n}\n\nfn build_runtime(default_worker_threads: Option<usize>) -> Runtime {\n    let mut builder = tokio::runtime::Builder::new_multi_thread();\n    if let Some(n) = parse_usize_env_var(\"COMET_WORKER_THREADS\") {\n        info!(\"Comet tokio runtime: using COMET_WORKER_THREADS={n}\");\n        builder.worker_threads(n);\n    } else if let Some(n) = default_worker_threads {\n        info!(\"Comet tokio runtime: using spark.executor.cores={n} worker threads\");\n        builder.worker_threads(n);\n    } else {\n        info!(\"Comet tokio runtime: using default thread count\");\n    }\n    if let Some(n) = parse_usize_env_var(\"COMET_MAX_BLOCKING_THREADS\") {\n        builder.max_blocking_threads(n);\n    }\n    builder\n        .enable_all()\n        .build()\n        .expect(\"Failed to create Tokio runtime\")\n}\n\n/// Initialize the global Tokio runtime with the given default worker thread count.\n/// If the runtime is already initialized, this is a no-op.\npub fn init_runtime(default_worker_threads: usize) {\n    TOKIO_RUNTIME.get_or_init(|| build_runtime(Some(default_worker_threads)));\n}\n\n/// Function to get a handle to the global Tokio runtime\npub fn get_runtime() -> &'static Runtime {\n    TOKIO_RUNTIME.get_or_init(|| build_runtime(None))\n}\n\n/// Returns a short name for an OpStruct variant.\nfn op_name(op: &OpStruct) -> &'static str {\n    match op {\n        OpStruct::Scan(_) => \"Scan\",\n        OpStruct::Projection(_) => \"Projection\",\n        OpStruct::Filter(_) => \"Filter\",\n        OpStruct::Sort(_) => \"Sort\",\n        OpStruct::HashAgg(_) => \"HashAgg\",\n        OpStruct::Limit(_) => \"Limit\",\n        OpStruct::ShuffleWriter(_) => \"ShuffleWriter\",\n        OpStruct::Expand(_) => \"Expand\",\n        OpStruct::SortMergeJoin(_) => \"SortMergeJoin\",\n        OpStruct::HashJoin(_) => \"HashJoin\",\n        OpStruct::Window(_) => \"Window\",\n        OpStruct::NativeScan(_) => \"NativeScan\",\n        OpStruct::IcebergScan(_) => \"IcebergScan\",\n        OpStruct::ParquetWriter(_) => \"ParquetWriter\",\n        OpStruct::Explode(_) => \"Explode\",\n        OpStruct::CsvScan(_) => \"CsvScan\",\n        OpStruct::ShuffleScan(_) => \"ShuffleScan\",\n    }\n}\n\n/// Collect distinct operator names from a plan tree and build a tracing event name.\nfn build_tracing_event_name(plan: &Operator) -> String {\n    let mut names = std::collections::BTreeSet::new();\n    collect_op_names(plan, &mut names);\n    if names.is_empty() {\n        \"executePlan\".to_string()\n    } else {\n        format!(\n            \"executePlan({})\",\n            names.into_iter().collect::<Vec<_>>().join(\",\")\n        )\n    }\n}\n\nfn collect_op_names<'a>(op: &'a Operator, names: &mut std::collections::BTreeSet<&'a str>) {\n    if let Some(ref op_struct) = op.op_struct {\n        names.insert(op_name(op_struct));\n    }\n    for child in &op.children {\n        collect_op_names(child, names);\n    }\n}\n\n/// Comet native execution context. 
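Created by `createPlan` and\n/// freed by `releasePlan`. 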
Kept alive across JNI calls.\nstruct ExecutionContext {\n    /// The id of the execution context.\n    pub id: i64,\n    /// Task attempt id\n    pub task_attempt_id: i64,\n    /// The deserialized Spark plan\n    pub spark_plan: Operator,\n    /// The number of partitions\n    pub partition_count: usize,\n    /// The DataFusion root operator converted from the `spark_plan`\n    pub root_op: Option<Arc<SparkPlan>>,\n    /// The input sources for the DataFusion plan\n    pub scans: Vec<ScanExec>,\n    /// The shuffle scan input sources for the DataFusion plan\n    pub shuffle_scans: Vec<ShuffleScanExec>,\n    /// The global reference of input sources for the DataFusion plan\n    pub input_sources: Vec<Arc<Global<JObject<'static>>>>,\n    /// The record batch stream to pull results from\n    pub stream: Option<SendableRecordBatchStream>,\n    /// Receives batches from a spawned tokio task (async I/O path)\n    pub batch_receiver: Option<mpsc::Receiver<DataFusionResult<RecordBatch>>>,\n    /// Native metrics\n    pub metrics: Arc<Global<JObject<'static>>>,\n    /// The interval in milliseconds to update metrics\n    pub metrics_update_interval: Option<Duration>,\n    /// The last update time of metrics\n    pub metrics_last_update_time: Instant,\n    /// Counter to avoid checking time on every poll iteration (reduces syscalls)\n    pub poll_count_since_metrics_check: u32,\n    /// The time it took to create the native plan and configure the context\n    pub plan_creation_time: Duration,\n    /// DataFusion SessionContext\n    pub session_ctx: Arc<SessionContext>,\n    /// Whether to enable additional debugging checks & messages\n    pub debug_native: bool,\n    /// Whether to write native plans with metrics to stdout\n    pub explain_native: bool,\n    /// Memory pool config\n    pub memory_pool_config: MemoryPoolConfig,\n    /// Whether to log memory usage on each call to execute_plan\n    pub tracing_enabled: bool,\n    /// Rust thread ID, used for aggregating tracing metrics per thread\n    pub rust_thread_id: u64,\n    /// Pre-computed metric name for tracing memory usage\n    pub tracing_memory_metric_name: String,\n    /// Pre-computed tracing event name for executePlan calls\n    pub tracing_event_name: String,\n}\n\n/// Accept serialized query plan and return the address of the native query plan.\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_Native_createPlan(\n    e: EnvUnowned,\n    _class: JClass,\n    id: jlong,\n    iterators: JObjectArray,\n    serialized_query: JByteArray,\n    serialized_spark_configs: JByteArray,\n    partition_count: jint,\n    metrics_node: JObject,\n    metrics_update_interval: jlong,\n    comet_task_memory_manager_obj: JObject,\n    local_dirs: JObjectArray,\n    batch_size: jint,\n    off_heap_mode: jboolean,\n    memory_pool_type: JString,\n    memory_limit: jlong,\n    memory_limit_per_task: jlong,\n    task_attempt_id: jlong,\n    task_cpus: jlong,\n    key_unwrapper_obj: JObject,\n) -> jlong {\n    try_unwrap_or_throw(&e, |env| {\n        // Deserialize Spark configs\n        let bytes = env.convert_byte_array(serialized_spark_configs)?;\n        let spark_configs = serde::deserialize_config(bytes.as_slice())?;\n        let spark_config: HashMap<String, String> = spark_configs.entries.into_iter().collect();\n\n        // Initialize the tokio runtime with spark.executor.cores as the default\n        // worker thread count, 
falling back to 1 if not set.\n        let executor_cores = spark_config.get_usize(SPARK_EXECUTOR_CORES, 1);\n        init_runtime(executor_cores);\n\n        // Access Comet configs\n        let debug_native = spark_config.get_bool(COMET_DEBUG_ENABLED);\n        let explain_native = spark_config.get_bool(COMET_EXPLAIN_NATIVE_ENABLED);\n        let tracing_enabled = spark_config.get_bool(COMET_TRACING_ENABLED);\n        let max_temp_directory_size =\n            spark_config.get_u64(COMET_MAX_TEMP_DIRECTORY_SIZE, 100 * 1024 * 1024 * 1024);\n        let logging_memory_pool = spark_config.get_bool(COMET_DEBUG_MEMORY);\n\n        with_trace(\"createPlan\", tracing_enabled, || {\n            // Init JVM classes\n            JVMClasses::init(env);\n\n            let start = Instant::now();\n\n            // Deserialize query plan\n            let bytes = env.convert_byte_array(serialized_query)?;\n            let spark_plan = serde::deserialize_op(bytes.as_slice())?;\n\n            let metrics = Arc::new(jni_new_global_ref!(env, metrics_node)?);\n\n            // Get the global references of input sources\n            let mut input_sources = vec![];\n            let num_inputs = iterators.len(env)?;\n            for i in 0..num_inputs {\n                let input_source = iterators.get_element(env, i)?;\n                let input_source = Arc::new(jni_new_global_ref!(env, input_source)?);\n                input_sources.push(input_source);\n            }\n\n            // Create DataFusion memory pool\n            let task_memory_manager =\n                Arc::new(jni_new_global_ref!(env, comet_task_memory_manager_obj)?);\n\n            let memory_pool_type = memory_pool_type.try_to_string(env)?;\n            let memory_pool_config = parse_memory_pool_config(\n                off_heap_mode != JNI_FALSE,\n                memory_pool_type,\n                memory_limit,\n                memory_limit_per_task,\n            )?;\n            let memory_pool =\n                create_memory_pool(&memory_pool_config, task_memory_manager, task_attempt_id);\n\n            let memory_pool = if logging_memory_pool {\n                Arc::new(LoggingMemoryPool::new(task_attempt_id as u64, memory_pool))\n            } else {\n                memory_pool\n            };\n\n            // Get local directories for storing spill files\n            let num_local_dirs = local_dirs.len(env)?;\n            let mut local_dirs_vec = vec![];\n            for i in 0..num_local_dirs {\n                let local_dir = local_dirs.get_element(env, i)?;\n                let local_dir = unsafe { JString::from_raw(&*env, local_dir.into_raw()) };\n                let local_dir = local_dir.try_to_string(env)?;\n                local_dirs_vec.push(local_dir);\n            }\n\n            // We need to keep the session context alive. Some session state like temporary\n            // dictionaries are stored in session context. 
If it is dropped, the temporary\n            // dictionaries will be dropped as well.\n            let session = prepare_datafusion_session_context(\n                batch_size as usize,\n                memory_pool,\n                local_dirs_vec,\n                max_temp_directory_size,\n                task_cpus as usize,\n                &spark_config,\n            )?;\n\n            let plan_creation_time = start.elapsed();\n\n            let metrics_update_interval = if metrics_update_interval > 0 {\n                Some(Duration::from_millis(metrics_update_interval as u64))\n            } else {\n                None\n            };\n\n            // Handle key unwrapper for encrypted files\n            if !key_unwrapper_obj.is_null() {\n                let encryption_factory = CometEncryptionFactory {\n                    key_unwrapper: Arc::new(jni_new_global_ref!(env, key_unwrapper_obj)?),\n                };\n                session.runtime_env().register_parquet_encryption_factory(\n                    ENCRYPTION_FACTORY_ID,\n                    Arc::new(encryption_factory),\n                );\n            }\n\n            let session = Arc::new(session);\n\n            // Register this context's memory pool so we can sum all pools\n            // on the same thread when emitting tracing metrics.\n            let rust_thread_id = get_thread_id();\n            if tracing_enabled {\n                register_memory_pool(\n                    rust_thread_id,\n                    id,\n                    Arc::clone(&session.runtime_env().memory_pool),\n                );\n            }\n\n            let tracing_event_name = if tracing_enabled {\n                build_tracing_event_name(&spark_plan)\n            } else {\n                String::new()\n            };\n\n            let exec_context = Box::new(ExecutionContext {\n                id,\n                task_attempt_id,\n                spark_plan,\n                partition_count: partition_count as usize,\n                root_op: None,\n                scans: vec![],\n                shuffle_scans: vec![],\n                input_sources,\n                stream: None,\n                batch_receiver: None,\n                metrics,\n                metrics_update_interval,\n                metrics_last_update_time: Instant::now(),\n                poll_count_since_metrics_check: 0,\n                plan_creation_time,\n                session_ctx: session,\n                debug_native,\n                explain_native,\n                memory_pool_config,\n                tracing_enabled,\n                rust_thread_id,\n                tracing_memory_metric_name: format!(\n                    \"thread_{rust_thread_id}_comet_memory_reserved\"\n                ),\n                tracing_event_name,\n            });\n\n            Ok(Box::into_raw(exec_context) as i64)\n        })\n    })\n}\n\n/// Configure DataFusion session context.\nfn prepare_datafusion_session_context(\n    batch_size: usize,\n    memory_pool: Arc<dyn MemoryPool>,\n    local_dirs: Vec<String>,\n    max_temp_directory_size: u64,\n    task_cpus: usize,\n    spark_config: &HashMap<String, String>,\n) -> CometResult<SessionContext> {\n    let paths = local_dirs.into_iter().map(PathBuf::from).collect();\n    let disk_manager = DiskManagerBuilder::default()\n        .with_mode(DiskManagerMode::Directories(paths))\n        .with_max_temp_directory_size(max_temp_directory_size);\n    let mut rt_config = 
RuntimeEnvBuilder::new().with_disk_manager_builder(disk_manager);\n    rt_config = rt_config.with_memory_pool(memory_pool);\n\n    let mut session_config = SessionConfig::new()\n        // This DataFusion context is scoped to a single executing Spark task, so we set its\n        // internal parallelism to the number of CPUs allocated to each task. This can be\n        // modified by changing spark.task.cpus in the Spark config.\n        .with_target_partitions(task_cpus)\n        .with_batch_size(batch_size)\n        // DataFusion partial aggregates can emit duplicate rows, so we disable the\n        // skip-partial-aggregation feature, which is not compatible with Spark's\n        // use of partial aggregates.\n        .set(\n            \"datafusion.execution.skip_partial_aggregation_probe_ratio_threshold\",\n            // the threshold is the ratio of groups to rows; its maximum meaningful\n            // value is 1.0, so we set it slightly higher to ensure partial\n            // aggregation is never skipped\n            &ScalarValue::Float64(Some(1.1)),\n        );\n\n    // Pass through DataFusion configs from Spark.\n    // e.g. spark-shell --conf spark.comet.datafusion.sql_parser.parse_float_as_decimal=true\n    // becomes datafusion.sql_parser.parse_float_as_decimal=true\n    const SPARK_COMET_DF_PREFIX: &str = \"spark.comet.datafusion.\";\n    for (key, value) in spark_config {\n        if let Some(df_key) = key.strip_prefix(SPARK_COMET_DF_PREFIX) {\n            let df_key = format!(\"datafusion.{df_key}\");\n            session_config = session_config.set_str(&df_key, value);\n        }\n    }\n\n    let runtime = rt_config.build()?;\n\n    let mut session_ctx = SessionContext::new_with_config_rt(session_config, Arc::new(runtime));\n\n    datafusion::functions_nested::register_all(&mut session_ctx)?;\n    register_datafusion_spark_function(&session_ctx);\n    // Must be the last one to override existing functions with the same name\n    datafusion_comet_spark_expr::register_all_comet_functions(&mut session_ctx)?;\n\n    Ok(session_ctx)\n}\n\n// Register UDFs from the datafusion-spark crate\nfn register_datafusion_spark_function(session_ctx: &SessionContext) {\n    // Don't register SparkArrayRepeat — it returns NULL when the element is NULL\n    // (e.g. 
array_repeat(null, 3) returns NULL instead of [null, null, null]).\n    // Comet's Scala serde wraps the call in a CaseWhen for null count handling,\n    // so DataFusion's built-in ArrayRepeat is sufficient.\n    // TODO: file upstream issue against datafusion-spark\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkExpm1::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkSha2::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(CharFunc::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkBitGet::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkDateAdd::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkDateSub::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkLastDay::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkNextDay::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkSha1::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkConcat::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkBitwiseNot::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkHex::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkWidthBucket::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(MapFromEntries::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkCrc32::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkLuhnCheck::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkSpace::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkBitCount::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkArrayContains::default()));\n    session_ctx.register_udf(ScalarUDF::new_from_impl(SparkBin::default()));\n}\n\n/// Prepares arrow arrays for output.\nfn prepare_output(\n    env: &mut Env,\n    array_addrs: JLongArray,\n    schema_addrs: JLongArray,\n    output_batch: RecordBatch,\n    validate: bool,\n) -> CometResult<jlong> {\n    let num_cols = array_addrs.len(env)?;\n\n    let array_addrs = unsafe { array_addrs.get_elements(env, ReleaseMode::NoCopyBack)? };\n    let array_addrs = &*array_addrs;\n\n    let schema_addrs = unsafe { schema_addrs.get_elements(env, ReleaseMode::NoCopyBack)? 
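// NoCopyBack: these JVM arrays are only read here, so nothing is copied back on release\n    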
};\n        let schema_addrs = &*schema_addrs;\n\n    let results = output_batch.columns();\n    let num_rows = output_batch.num_rows();\n\n    // there are edge cases where num_cols can be zero due to Spark optimizations\n    // when the results of a query are not used\n    if num_cols > 0 {\n        if results.len() != num_cols {\n            return Err(CometError::Internal(format!(\n                \"Output column count mismatch: expected {num_cols}, got {}\",\n                results.len()\n            )));\n        }\n\n        if validate {\n            // Validate the output arrays.\n            for array in results.iter() {\n                let array_data = array.to_data();\n                array_data\n                    .validate_full()\n                    .expect(\"Invalid output array data\");\n            }\n        }\n\n        let mut i = 0;\n        while i < results.len() {\n            let array_ref = results.get(i).ok_or(CometError::IndexOutOfBounds(i))?;\n\n            if array_ref.offset() != 0 {\n                // https://github.com/apache/datafusion-comet/issues/2051\n                // Bug with non-zero offset FFI, so take to a new array which will have an offset of 0.\n                // We expect this to be a cold code path, hence the check_bounds: true and assert_eq.\n                let indices = UInt32Array::from((0..num_rows as u32).collect::<Vec<u32>>());\n                let new_array = take(\n                    array_ref,\n                    &indices,\n                    Some(TakeOptions { check_bounds: true }),\n                )?;\n\n                assert_eq!(new_array.offset(), 0);\n\n                new_array\n                    .to_data()\n                    .move_to_spark(array_addrs[i], schema_addrs[i])?;\n            } else {\n                array_ref\n                    .to_data()\n                    .move_to_spark(array_addrs[i], schema_addrs[i])?;\n            }\n            i += 1;\n        }\n    }\n\n    Ok(num_rows as jlong)\n}\n\n/// Pull the next input from the JVM. Note that we cannot pull input batches in\n/// `ScanStream.poll_next` when the execution stream is polled for output,\n/// because the input source could be another native execution stream running\n/// on another tokio blocking thread, and JNI calls from that thread would\n/// throw a Java exception. So we pull input batches here and insert them into\n/// the scan operators before polling the stream.\n#[inline]\nfn pull_input_batches(exec_context: &mut ExecutionContext) -> Result<(), CometError> {\n    exec_context.scans.iter_mut().try_for_each(|scan| {\n        scan.get_next_batch()?;\n        Ok::<(), CometError>(())\n    })?;\n    exec_context.shuffle_scans.iter_mut().try_for_each(|scan| {\n        scan.get_next_batch()?;\n        Ok::<(), CometError>(())\n    })\n}\n\n/// Accept serialized query plan and the addresses of Arrow Arrays from Spark,\n/// then execute the query. 
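On the first call this also builds the\n/// native physical plan and starts the output stream. 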
Returns the addresses of the output Arrow arrays.\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_Native_executePlan(\n    e: EnvUnowned,\n    _class: JClass,\n    stage_id: jint,\n    partition: jint,\n    exec_context: jlong,\n    array_addrs: JLongArray,\n    schema_addrs: JLongArray,\n) -> jlong {\n    try_unwrap_or_throw(&e, |env| {\n        // Retrieve the execution context\n        let exec_context = get_execution_context(exec_context);\n\n        let tracing_enabled = exec_context.tracing_enabled;\n        // Clone the label only when tracing is enabled. The clone is needed\n        // because the closure below mutably borrows exec_context.\n        let owned_label;\n        let tracing_label = if tracing_enabled {\n            owned_label = exec_context.tracing_event_name.clone();\n            owned_label.as_str()\n        } else {\n            \"\"\n        };\n\n        let result = with_trace(tracing_label, tracing_enabled, || {\n            let exec_context_id = exec_context.id;\n\n            // Initialize the execution stream.\n            // Because we don't know whether the input arrays are dictionary-encoded when we\n            // create the query plan, we defer stream initialization to the first execution.\n            if exec_context.root_op.is_none() {\n                let start = Instant::now();\n                let planner =\n                    PhysicalPlanner::new(Arc::clone(&exec_context.session_ctx), partition)\n                        .with_exec_id(exec_context_id);\n                let (scans, shuffle_scans, root_op) = planner.create_plan(\n                    &exec_context.spark_plan,\n                    &mut exec_context.input_sources.clone(),\n                    exec_context.partition_count,\n                )?;\n                let physical_plan_time = start.elapsed();\n\n                exec_context.plan_creation_time += physical_plan_time;\n                exec_context.scans = scans;\n                exec_context.shuffle_scans = shuffle_scans;\n\n                if exec_context.explain_native {\n                    let formatted_plan_str =\n                        DisplayableExecutionPlan::new(root_op.native_plan.as_ref()).indent(true);\n                    info!(\"Comet native query plan:\\n{formatted_plan_str:}\");\n                }\n\n                let task_ctx = exec_context.session_ctx.task_ctx();\n                // Each Comet native execution corresponds to a single Spark partition,\n                // so we should always execute partition 0.\n                let stream = root_op.native_plan.execute(0, task_ctx)?;\n\n                if exec_context.scans.is_empty() && exec_context.shuffle_scans.is_empty() {\n                    // No JVM data sources — spawn onto tokio so the executor\n                    // thread parks in blocking_recv instead of busy-polling.\n                    //\n                    // Channel capacity of 2 allows the producer to work one batch\n                    // ahead while the consumer processes the current one via JNI,\n                    // without buffering excessive memory. 
Increasing this would\n                    // trade memory for latency hiding if JNI/FFI overhead dominates;\n                    // decreasing to 1 would serialize production and consumption.\n                    let (tx, rx) = mpsc::channel(2);\n                    let mut stream = stream;\n                    get_runtime().spawn(async move {\n                        let result = std::panic::AssertUnwindSafe(async {\n                            while let Some(batch) = stream.next().await {\n                                if tx.send(batch).await.is_err() {\n                                    break;\n                                }\n                            }\n                        })\n                        .catch_unwind()\n                        .await;\n\n                        if let Err(panic) = result {\n                            let msg = match panic.downcast_ref::<&str>() {\n                                Some(s) => s.to_string(),\n                                None => match panic.downcast_ref::<String>() {\n                                    Some(s) => s.clone(),\n                                    None => \"unknown panic\".to_string(),\n                                },\n                            };\n                            let _ = tx\n                                .send(Err(DataFusionError::Execution(format!(\n                                    \"native panic: {msg}\"\n                                ))))\n                                .await;\n                        }\n                    });\n                    exec_context.batch_receiver = Some(rx);\n                } else {\n                    exec_context.stream = Some(stream);\n                }\n                exec_context.root_op = Some(root_op);\n            } else {\n                // Pull input batches\n                pull_input_batches(exec_context)?;\n            }\n\n            if let Some(rx) = &mut exec_context.batch_receiver {\n                match rx.blocking_recv() {\n                    Some(Ok(batch)) => {\n                        update_metrics(env, exec_context)?;\n                        return prepare_output(\n                            env,\n                            array_addrs,\n                            schema_addrs,\n                            batch,\n                            exec_context.debug_native,\n                        );\n                    }\n                    Some(Err(e)) => {\n                        return Err(e.into());\n                    }\n                    None => {\n                        log_plan_metrics(exec_context, stage_id, partition);\n                        return Ok(-1);\n                    }\n                }\n            }\n\n            // ScanExec path: busy-poll to interleave JVM batch pulls with stream polling\n            get_runtime().block_on(async {\n                loop {\n                    let next_item = exec_context.stream.as_mut().unwrap().next();\n                    let poll_output = poll!(next_item);\n\n                    // Only check time/tracing every 100 polls to reduce overhead\n                    exec_context.poll_count_since_metrics_check += 1;\n                    if exec_context.poll_count_since_metrics_check >= 100 {\n                        exec_context.poll_count_since_metrics_check = 0;\n                        if let Some(interval) = exec_context.metrics_update_interval {\n                            let now = Instant::now();\n                            if now - 
exec_context.metrics_last_update_time >= interval {\n                                update_metrics(env, exec_context)?;\n                                exec_context.metrics_last_update_time = now;\n                            }\n                        }\n                        if exec_context.tracing_enabled {\n                            log_memory_usage(\n                                &exec_context.tracing_memory_metric_name,\n                                total_reserved_for_thread(exec_context.rust_thread_id) as u64,\n                            );\n                        }\n                    }\n\n                    match poll_output {\n                        Poll::Ready(Some(output)) => {\n                            return prepare_output(\n                                env,\n                                array_addrs,\n                                schema_addrs,\n                                output?,\n                                exec_context.debug_native,\n                            );\n                        }\n                        Poll::Ready(None) => {\n                            log_plan_metrics(exec_context, stage_id, partition);\n                            return Ok(-1);\n                        }\n                        Poll::Pending => {\n                            // JNI call to pull batches from JVM into ScanExec operators.\n                            // block_in_place lets tokio move other tasks off this worker\n                            // while we wait for JVM data.\n                            tokio::task::block_in_place(|| pull_input_batches(exec_context))?;\n                        }\n                    }\n                }\n            })\n        });\n\n        if exec_context.tracing_enabled {\n            #[cfg(feature = \"jemalloc\")]\n            log_jemalloc_usage();\n            log_memory_usage(\n                &exec_context.tracing_memory_metric_name,\n                total_reserved_for_thread(exec_context.rust_thread_id) as u64,\n            );\n        }\n\n        result\n    })\n}\n\n#[no_mangle]\n/// Drop the native query plan object and context object.\npub extern \"system\" fn Java_org_apache_comet_Native_releasePlan(\n    e: EnvUnowned,\n    _class: JClass,\n    exec_context: jlong,\n) {\n    try_unwrap_or_throw(&e, |env| unsafe {\n        let execution_context = get_execution_context(exec_context);\n\n        // Update metrics\n        update_metrics(env, execution_context)?;\n\n        handle_task_shared_pool_release(\n            execution_context.memory_pool_config.pool_type,\n            execution_context.task_attempt_id,\n        );\n\n        // Unregister this context's pool and emit the remaining total for the thread\n        if execution_context.tracing_enabled {\n            let remaining =\n                unregister_and_total(execution_context.rust_thread_id, execution_context.id);\n            log_memory_usage(\n                &execution_context.tracing_memory_metric_name,\n                remaining as u64,\n            );\n        }\n\n        let _: Box<ExecutionContext> = Box::from_raw(execution_context);\n        Ok(())\n    })\n}\n\nfn update_metrics(env: &mut Env, exec_context: &mut ExecutionContext) -> CometResult<()> {\n    if let Some(native_query) = &exec_context.root_op {\n        let metrics = exec_context.metrics.as_obj();\n        update_comet_metric(env, metrics, native_query)\n    } else {\n        Ok(())\n    }\n}\n\nfn log_plan_metrics(exec_context: &ExecutionContext, stage_id: 
jint, partition: jint) {\n    if exec_context.explain_native {\n        if let Some(plan) = &exec_context.root_op {\n            let formatted_plan_str =\n                DisplayableExecutionPlan::with_metrics(plan.native_plan.as_ref()).indent(true);\n            info!(\n                \"Comet native query plan with metrics (Plan #{} Stage {} Partition {}):\\\n                \\n plan creation took {:?}:\\\n                \\n{formatted_plan_str:}\",\n                plan.plan_id, stage_id, partition, exec_context.plan_creation_time\n            );\n        }\n    }\n}\n\nfn convert_datatype_arrays(\n    env: &mut Env,\n    serialized_datatypes: JObjectArray,\n) -> JNIResult<Vec<ArrowDataType>> {\n    let array_len = serialized_datatypes.len(env)?;\n    let mut res: Vec<ArrowDataType> = Vec::new();\n\n    for i in 0..array_len {\n        let inner_array = serialized_datatypes.get_element(env, i)?;\n        let inner_array = unsafe { JByteArray::from_raw(&*env, inner_array.into_raw()) };\n        let bytes = env.convert_byte_array(inner_array)?;\n        let data_type = serde::deserialize_data_type(bytes.as_slice()).unwrap();\n        let arrow_dt = to_arrow_datatype(&data_type);\n        res.push(arrow_dt);\n    }\n\n    Ok(res)\n}\n\nfn get_execution_context<'a>(id: i64) -> &'a mut ExecutionContext {\n    unsafe {\n        (id as *mut ExecutionContext)\n            .as_mut()\n            .expect(\"Comet execution context shouldn't be null!\")\n    }\n}\n\n/// Used by Comet shuffle external sorter to write sorted records to disk.\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_Native_writeSortedFileNative(\n    e: EnvUnowned,\n    _class: JClass,\n    row_addresses: JLongArray,\n    row_sizes: JIntArray,\n    serialized_datatypes: JObjectArray,\n    file_path: JString,\n    prefer_dictionary_ratio: jdouble,\n    batch_size: jlong,\n    checksum_enabled: jboolean,\n    checksum_algo: jint,\n    current_checksum: jlong,\n    compression_codec: JString,\n    compression_level: jint,\n    tracing_enabled: jboolean,\n) -> jlongArray {\n    try_unwrap_or_throw(&e, |env| unsafe {\n        with_trace(\n            \"writeSortedFileNative\",\n            tracing_enabled != JNI_FALSE,\n            || {\n                let data_types = convert_datatype_arrays(env, serialized_datatypes)?;\n\n                let row_num = row_addresses.len(env)?;\n                let row_addresses = row_addresses.get_elements(env, ReleaseMode::NoCopyBack)?;\n\n                let row_sizes = row_sizes.get_elements(env, ReleaseMode::NoCopyBack)?;\n\n                let row_addresses_ptr = row_addresses.as_ptr();\n                let row_sizes_ptr = row_sizes.as_ptr();\n\n                let output_path: String = file_path.try_to_string(env).unwrap();\n\n                let current_checksum = if current_checksum == i64::MIN {\n                    // Initial checksum is not available.\n                    None\n                } else {\n                    Some(current_checksum as u32)\n                };\n\n                let compression_codec: String = compression_codec.try_to_string(env).unwrap();\n\n                let compression_codec = match compression_codec.as_str() {\n                    \"zstd\" => CompressionCodec::Zstd(compression_level),\n                    \"lz4\" => CompressionCodec::Lz4Frame,\n                    \"snappy\" => CompressionCodec::Snappy,\n                    _ => 
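// unrecognized codec names fall back to lz4\n                        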
CompressionCodec::Lz4Frame,\n                };\n\n                let (written_bytes, checksum) = process_sorted_row_partition(\n                    row_num,\n                    batch_size as usize,\n                    row_addresses_ptr,\n                    row_sizes_ptr,\n                    &data_types,\n                    output_path,\n                    prefer_dictionary_ratio,\n                    checksum_enabled,\n                    checksum_algo,\n                    current_checksum,\n                    &compression_codec,\n                )?;\n\n                let checksum = if let Some(checksum) = checksum {\n                    checksum as i64\n                } else {\n                    // Spark checksums (CRC32 or Adler32) are both u32, so we use i64::MIN to indicate\n                    // checksum is not available.\n                    i64::MIN\n                };\n\n                let long_array = env.new_long_array(2)?;\n                long_array.set_region(env, 0, &[written_bytes, checksum])?;\n\n                Ok(long_array.into_raw())\n            },\n        )\n    })\n}\n\n#[no_mangle]\n/// Used by Comet shuffle external sorter to sort in-memory row partition ids.\npub extern \"system\" fn Java_org_apache_comet_Native_sortRowPartitionsNative(\n    e: EnvUnowned,\n    _class: JClass,\n    address: jlong,\n    size: jlong,\n    tracing_enabled: jboolean,\n) {\n    try_unwrap_or_throw(&e, |_| {\n        with_trace(\n            \"sortRowPartitionsNative\",\n            tracing_enabled != JNI_FALSE,\n            || {\n                // SAFETY: JVM unsafe memory allocation is aligned with long.\n                debug_assert!(address != 0, \"sortRowPartitionsNative: null address\");\n                debug_assert!(size >= 0, \"sortRowPartitionsNative: negative size {size}\");\n                debug_assert_eq!(\n                    (address as usize) % std::mem::align_of::<i64>(),\n                    0,\n                    \"sortRowPartitionsNative: address not aligned to i64\"\n                );\n                let array =\n                    unsafe { std::slice::from_raw_parts_mut(address as *mut i64, size as usize) };\n                array.rdxsort();\n                Ok(())\n            },\n        )\n    })\n}\n\n#[no_mangle]\n/// Used by Comet native shuffle reader\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\npub unsafe extern \"system\" fn Java_org_apache_comet_Native_decodeShuffleBlock(\n    e: EnvUnowned,\n    _class: JClass,\n    byte_buffer: JByteBuffer,\n    length: jint,\n    array_addrs: JLongArray,\n    schema_addrs: JLongArray,\n    tracing_enabled: jboolean,\n) -> jlong {\n    try_unwrap_or_throw(&e, |env| {\n        with_trace(\"decodeShuffleBlock\", tracing_enabled != JNI_FALSE, || {\n            let raw_pointer = env.get_direct_buffer_address(&byte_buffer)?;\n            let length = length as usize;\n            let slice: &[u8] = unsafe { std::slice::from_raw_parts(raw_pointer, length) };\n            let batch = read_ipc_compressed(slice)?;\n            prepare_output(env, array_addrs, schema_addrs, batch, false)\n        })\n    })\n}\n\n#[no_mangle]\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\npub unsafe extern \"system\" fn Java_org_apache_comet_Native_traceBegin(\n    e: EnvUnowned,\n    _class: JClass,\n    event: JString,\n) {\n    try_unwrap_or_throw(&e, |env| {\n        let name: String = 
event.try_to_string(env).unwrap();\n        trace_begin(&name);\n        Ok(())\n    })\n}\n\n#[no_mangle]\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\npub unsafe extern \"system\" fn Java_org_apache_comet_Native_traceEnd(\n    e: EnvUnowned,\n    _class: JClass,\n    event: JString,\n) {\n    try_unwrap_or_throw(&e, |env| {\n        let name: String = event.try_to_string(env).unwrap();\n        trace_end(&name);\n        Ok(())\n    })\n}\n\n#[no_mangle]\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\npub unsafe extern \"system\" fn Java_org_apache_comet_Native_logMemoryUsage(\n    e: EnvUnowned,\n    _class: JClass,\n    name: JString,\n    value: jlong,\n) {\n    try_unwrap_or_throw(&e, |env| {\n        let name: String = name.try_to_string(env).unwrap();\n        log_memory_usage(&name, value as u64);\n        Ok(())\n    })\n}\n\n#[no_mangle]\n/// Returns the Rust thread ID for the current thread.\n/// This allows Java code to use Rust thread IDs in tracing metric names.\npub extern \"system\" fn Java_org_apache_comet_Native_getRustThreadId(\n    _e: EnvUnowned,\n    _class: JClass,\n) -> jlong {\n    get_thread_id() as jlong\n}\n\n// ============================================================================\n// Native Columnar to Row Conversion\n// ============================================================================\n\nuse crate::execution::columnar_to_row::ColumnarToRowContext;\nuse arrow::ffi::{from_ffi, FFI_ArrowArray, FFI_ArrowSchema};\n\n/// Initialize a native columnar to row converter.\n///\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_Native_columnarToRowInit(\n    e: EnvUnowned,\n    _class: JClass,\n    serialized_schema: JObjectArray,\n    batch_size: jint,\n) -> jlong {\n    try_unwrap_or_throw(&e, |env| {\n        // Deserialize the schema\n        let schema = convert_datatype_arrays(env, serialized_schema)?;\n\n        // Create the context\n        let ctx = Box::new(ColumnarToRowContext::new(schema, batch_size as usize));\n\n        Ok(Box::into_raw(ctx) as jlong)\n    })\n}\n\n/// Convert Arrow columnar data to Spark UnsafeRow format.\n///\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_Native_columnarToRowConvert(\n    e: EnvUnowned,\n    _class: JClass,\n    c2r_handle: jlong,\n    array_addrs: JLongArray,\n    schema_addrs: JLongArray,\n    num_rows: jint,\n) -> jni::sys::jobject {\n    try_unwrap_or_throw(&e, |env| {\n        // Get the context\n        debug_assert!(c2r_handle != 0, \"columnarToRowConvert: c2r_handle is null\");\n        let ctx = (c2r_handle as *mut ColumnarToRowContext)\n            .as_mut()\n            .ok_or_else(|| CometError::Internal(\"Null columnar to row context\".to_string()))?;\n\n        let num_cols = array_addrs.len(env)?;\n\n        // Get array and schema addresses\n        let array_addrs_elements =\n            unsafe { array_addrs.get_elements(env, ReleaseMode::NoCopyBack)? };\n        let schema_addrs_elements =\n            unsafe { schema_addrs.get_elements(env, ReleaseMode::NoCopyBack)? 
};\n\n        // Import Arrow arrays from FFI\n        let mut arrays = Vec::with_capacity(num_cols);\n        for i in 0..num_cols {\n            let array_ptr = array_addrs_elements[i] as *mut FFI_ArrowArray;\n            let schema_ptr = schema_addrs_elements[i] as *mut FFI_ArrowSchema;\n\n            debug_assert!(\n                !array_ptr.is_null(),\n                \"columnarToRowConvert: null array pointer at index {}\",\n                i\n            );\n            debug_assert!(\n                !schema_ptr.is_null(),\n                \"columnarToRowConvert: null schema pointer at index {}\",\n                i\n            );\n\n            // Take ownership of the FFI structures\n            let ffi_array = unsafe { std::ptr::read(array_ptr) };\n            let ffi_schema = unsafe { std::ptr::read(schema_ptr) };\n\n            // Convert to Arrow ArrayData\n            let array_data = from_ffi(ffi_array, &ffi_schema)\n                .map_err(|e| CometError::Internal(format!(\"Failed to import array: {}\", e)))?;\n\n            arrays.push(arrow::array::make_array(array_data));\n        }\n\n        // Convert columnar to row\n        debug_assert!(\n            num_rows >= 0,\n            \"columnarToRowConvert: num_rows is negative: {}\",\n            num_rows\n        );\n        let (buffer_ptr, offsets, lengths) = ctx.convert(&arrays, num_rows as usize)?;\n\n        // Create Java int arrays for offsets and lengths\n        let offsets_array = env.new_int_array(offsets.len())?;\n        offsets_array.set_region(env, 0, offsets)?;\n\n        let lengths_array = env.new_int_array(lengths.len())?;\n        lengths_array.set_region(env, 0, lengths)?;\n\n        // Create the NativeColumnarToRowInfo object\n        let info_class =\n            env.find_class(jni::jni_str!(\"org/apache/comet/NativeColumnarToRowInfo\"))?;\n        let info_obj = env.new_object(\n            info_class,\n            jni::jni_sig!(\"(J[I[I)V\"),\n            &[\n                jni::objects::JValue::Long(buffer_ptr as jlong),\n                jni::objects::JValue::Object(&offsets_array),\n                jni::objects::JValue::Object(&lengths_array),\n            ],\n        )?;\n\n        Ok(info_obj.into_raw())\n    })\n}\n\n/// Close and release the native columnar to row converter.\n///\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_Native_columnarToRowClose(\n    e: EnvUnowned,\n    _class: JClass,\n    c2r_handle: jlong,\n) {\n    try_unwrap_or_throw(&e, |_env| {\n        debug_assert!(c2r_handle != 0, \"columnarToRowClose: c2r_handle is null\");\n        if c2r_handle != 0 {\n            let _ctx: Box<ColumnarToRowContext> =\n                Box::from_raw(c2r_handle as *mut ColumnarToRowContext);\n            // ctx is dropped here, freeing the buffer\n        }\n        Ok(())\n    })\n}\n"
  },
  {
    "path": "native/core/src/execution/memory_pools/config.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::errors::{CometError, CometResult};\n\n#[derive(Copy, Clone, PartialEq, Eq)]\npub(crate) enum MemoryPoolType {\n    GreedyUnified,\n    FairUnified,\n    Greedy,\n    FairSpill,\n    GreedyTaskShared,\n    FairSpillTaskShared,\n    GreedyGlobal,\n    FairSpillGlobal,\n    Unbounded,\n}\n\nimpl MemoryPoolType {\n    pub(crate) fn is_task_shared(&self) -> bool {\n        matches!(\n            self,\n            MemoryPoolType::GreedyTaskShared\n                | MemoryPoolType::FairSpillTaskShared\n                | MemoryPoolType::FairUnified\n                | MemoryPoolType::GreedyUnified\n        )\n    }\n}\n\npub(crate) struct MemoryPoolConfig {\n    pub(crate) pool_type: MemoryPoolType,\n    pub(crate) pool_size: usize,\n}\n\nimpl MemoryPoolConfig {\n    pub(crate) fn new(pool_type: MemoryPoolType, pool_size: usize) -> Self {\n        Self {\n            pool_type,\n            pool_size,\n        }\n    }\n}\n\npub(crate) fn parse_memory_pool_config(\n    off_heap_mode: bool,\n    memory_pool_type: String,\n    memory_limit: i64,\n    memory_limit_per_task: i64,\n) -> CometResult<MemoryPoolConfig> {\n    let pool_size = memory_limit as usize;\n    let memory_pool_config = if off_heap_mode {\n        match memory_pool_type.as_str() {\n            \"fair_unified\" => MemoryPoolConfig::new(MemoryPoolType::FairUnified, pool_size),\n            \"greedy_unified\" => {\n                // the `unified` memory pool interacts with Spark's memory pool to allocate\n                // memory therefore does not need a size to be explicitly set. 
The pool size\n                // shared with Spark is set by `spark.memory.offHeap.size`.\n                MemoryPoolConfig::new(MemoryPoolType::GreedyUnified, 0)\n            }\n            _ => {\n                return Err(CometError::Config(format!(\n                    \"Unsupported memory pool type for off-heap mode: {memory_pool_type}\"\n                )))\n            }\n        }\n    } else {\n        // Use the memory pool from DF\n        let pool_size_per_task = memory_limit_per_task as usize;\n        match memory_pool_type.as_str() {\n            \"fair_spill_task_shared\" => {\n                MemoryPoolConfig::new(MemoryPoolType::FairSpillTaskShared, pool_size_per_task)\n            }\n            \"greedy_task_shared\" => {\n                MemoryPoolConfig::new(MemoryPoolType::GreedyTaskShared, pool_size_per_task)\n            }\n            \"fair_spill_global\" => {\n                MemoryPoolConfig::new(MemoryPoolType::FairSpillGlobal, pool_size)\n            }\n            \"greedy_global\" => MemoryPoolConfig::new(MemoryPoolType::GreedyGlobal, pool_size),\n            \"fair_spill\" => MemoryPoolConfig::new(MemoryPoolType::FairSpill, pool_size_per_task),\n            \"greedy\" => MemoryPoolConfig::new(MemoryPoolType::Greedy, pool_size_per_task),\n            \"unbounded\" => MemoryPoolConfig::new(MemoryPoolType::Unbounded, 0),\n            _ => {\n                return Err(CometError::Config(format!(\n                    \"Unsupported memory pool type for on-heap mode: {memory_pool_type}\"\n                )))\n            }\n        }\n    };\n    Ok(memory_pool_config)\n}\n"
  },
  {
    "path": "native/core/src/execution/memory_pools/fair_pool.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{\n    fmt::{Debug, Formatter, Result as FmtResult},\n    sync::Arc,\n};\n\nuse jni::objects::{Global, JObject};\n\nuse crate::{errors::CometResult, jvm_bridge::JVMClasses};\nuse datafusion::common::resources_err;\nuse datafusion::execution::memory_pool::MemoryConsumer;\nuse datafusion::{\n    common::DataFusionError,\n    execution::memory_pool::{MemoryPool, MemoryReservation},\n};\nuse parking_lot::Mutex;\n\n/// A DataFusion fair `MemoryPool` implementation for Comet. Internally this is\n/// implemented via delegating calls to [`crate::jvm_bridge::CometTaskMemoryManager`].\npub struct CometFairMemoryPool {\n    task_memory_manager_handle: Arc<Global<JObject<'static>>>,\n    pool_size: usize,\n    state: Mutex<CometFairPoolState>,\n}\n\nstruct CometFairPoolState {\n    used: usize,\n    num: usize,\n}\n\nimpl Debug for CometFairMemoryPool {\n    fn fmt(&self, f: &mut Formatter<'_>) -> FmtResult {\n        let state = self.state.lock();\n        f.debug_struct(\"CometFairMemoryPool\")\n            .field(\"pool_size\", &self.pool_size)\n            .field(\"used\", &state.used)\n            .field(\"num\", &state.num)\n            .finish()\n    }\n}\n\nimpl CometFairMemoryPool {\n    pub fn new(\n        task_memory_manager_handle: Arc<Global<JObject<'static>>>,\n        pool_size: usize,\n    ) -> CometFairMemoryPool {\n        Self {\n            task_memory_manager_handle,\n            pool_size,\n            state: Mutex::new(CometFairPoolState { used: 0, num: 0 }),\n        }\n    }\n\n    fn acquire(&self, additional: usize) -> CometResult<i64> {\n        let handle = self.task_memory_manager_handle.as_obj();\n        JVMClasses::with_env(|env| unsafe {\n            jni_call!(env,\n              comet_task_memory_manager(handle).acquire_memory(additional as i64) -> i64)\n        })\n    }\n\n    fn release(&self, size: usize) -> CometResult<()> {\n        let handle = self.task_memory_manager_handle.as_obj();\n        JVMClasses::with_env(|env| unsafe {\n            jni_call!(env, comet_task_memory_manager(handle).release_memory(size as i64) -> ())\n        })\n    }\n}\n\nunsafe impl Send for CometFairMemoryPool {}\nunsafe impl Sync for CometFairMemoryPool {}\n\nimpl MemoryPool for CometFairMemoryPool {\n    fn register(&self, _: &MemoryConsumer) {\n        let mut state = self.state.lock();\n        state.num = state\n            .num\n            .checked_add(1)\n            .expect(\"unexpected amount of register happened\");\n    }\n\n    fn unregister(&self, _: &MemoryConsumer) {\n        let mut state = self.state.lock();\n        state.num = state\n            .num\n            .checked_sub(1)\n            .expect(\"unexpected amount of 
unregister happened\");\n    }\n\n    fn grow(&self, _reservation: &MemoryReservation, additional: usize) {\n        self.try_grow(_reservation, additional).unwrap();\n    }\n\n    fn shrink(&self, _reservation: &MemoryReservation, subtractive: usize) {\n        if subtractive > 0 {\n            let mut state = self.state.lock();\n            // We don't use reservation.size() here because DataFusion 53+ decrements\n            // the reservation's atomic size before calling pool.shrink(), so it would\n            // reflect the post-shrink value rather than the pre-shrink value.\n            if state.used < subtractive {\n                panic!(\n                    \"Failed to release {subtractive} bytes where only {} bytes tracked by pool\",\n                    state.used\n                )\n            }\n            self.release(subtractive)\n                .unwrap_or_else(|_| panic!(\"Failed to release {subtractive} bytes\"));\n            state.used = state.used.checked_sub(subtractive).unwrap();\n        }\n    }\n\n    fn try_grow(\n        &self,\n        _reservation: &MemoryReservation,\n        additional: usize,\n    ) -> Result<(), DataFusionError> {\n        if additional > 0 {\n            let mut state = self.state.lock();\n            let num = state.num;\n            let limit = self\n                .pool_size\n                .checked_div(num)\n                .expect(\"overflow in checked_div\");\n            // We use state.used instead of reservation.size() because DataFusion 53+\n            // calls pool.try_grow() before incrementing the reservation's atomic size,\n            // so reservation.size() would not include prior grows.\n            let used = state.used;\n            if limit < used + additional {\n                return resources_err!(\n                    \"Failed to acquire {additional} bytes where {used} bytes already reserved and the fair limit is {limit} bytes, {num} registered\"\n                );\n            }\n\n            let acquired = self.acquire(additional)?;\n            // If the number of bytes we acquired is less than the requested, return an error,\n            // and hopefully will trigger spilling from the caller side.\n            if acquired < additional as i64 {\n                // Release the acquired bytes before throwing error\n                self.release(acquired as usize)?;\n\n                return resources_err!(\n                    \"Failed to acquire {} bytes, only got {} bytes. Reserved: {} bytes\",\n                    additional,\n                    acquired,\n                    state.used\n                );\n            }\n            state.used = state\n                .used\n                .checked_add(additional)\n                .expect(\"overflow in checked_add\");\n        }\n        Ok(())\n    }\n\n    fn reserved(&self) -> usize {\n        self.state.lock().used\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/memory_pools/logging_pool.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse datafusion::execution::memory_pool::{\n    MemoryConsumer, MemoryLimit, MemoryPool, MemoryReservation,\n};\nuse log::{info, warn};\nuse std::sync::Arc;\n\n#[derive(Debug)]\npub(crate) struct LoggingMemoryPool {\n    task_attempt_id: u64,\n    pool: Arc<dyn MemoryPool>,\n}\n\nimpl LoggingMemoryPool {\n    pub fn new(task_attempt_id: u64, pool: Arc<dyn MemoryPool>) -> Self {\n        Self {\n            task_attempt_id,\n            pool,\n        }\n    }\n}\n\nimpl MemoryPool for LoggingMemoryPool {\n    fn register(&self, consumer: &MemoryConsumer) {\n        info!(\n            \"[Task {}] MemoryPool[{}].register()\",\n            self.task_attempt_id,\n            consumer.name(),\n        );\n        self.pool.register(consumer)\n    }\n\n    fn unregister(&self, consumer: &MemoryConsumer) {\n        info!(\n            \"[Task {}] MemoryPool[{}].unregister()\",\n            self.task_attempt_id,\n            consumer.name(),\n        );\n        self.pool.unregister(consumer)\n    }\n\n    fn grow(&self, reservation: &MemoryReservation, additional: usize) {\n        info!(\n            \"[Task {}] MemoryPool[{}].grow({})\",\n            self.task_attempt_id,\n            reservation.consumer().name(),\n            additional\n        );\n        self.pool.grow(reservation, additional);\n    }\n\n    fn shrink(&self, reservation: &MemoryReservation, shrink: usize) {\n        info!(\n            \"[Task {}] MemoryPool[{}].shrink({})\",\n            self.task_attempt_id,\n            reservation.consumer().name(),\n            shrink\n        );\n        self.pool.shrink(reservation, shrink);\n    }\n\n    fn try_grow(\n        &self,\n        reservation: &MemoryReservation,\n        additional: usize,\n    ) -> datafusion::common::Result<()> {\n        match self.pool.try_grow(reservation, additional) {\n            Ok(_) => {\n                info!(\n                    \"[Task {}] MemoryPool[{}].try_grow({}) returning Ok\",\n                    self.task_attempt_id,\n                    reservation.consumer().name(),\n                    additional\n                );\n                Ok(())\n            }\n            Err(e) => {\n                warn!(\n                    \"[Task {}] MemoryPool[{}].try_grow({}) returning Err: {e:?}\",\n                    self.task_attempt_id,\n                    reservation.consumer().name(),\n                    additional\n                );\n                Err(e)\n            }\n        }\n    }\n\n    fn reserved(&self) -> usize {\n        self.pool.reserved()\n    }\n\n    fn memory_limit(&self) -> MemoryLimit {\n        self.pool.memory_limit()\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/memory_pools/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod config;\nmod fair_pool;\npub mod logging_pool;\nmod task_shared;\nmod unified_pool;\n\nuse datafusion::execution::memory_pool::{\n    FairSpillPool, GreedyMemoryPool, MemoryPool, TrackConsumersPool, UnboundedMemoryPool,\n};\nuse fair_pool::CometFairMemoryPool;\nuse jni::objects::{Global, JObject};\nuse once_cell::sync::OnceCell;\nuse std::num::NonZeroUsize;\nuse std::sync::Arc;\nuse unified_pool::CometUnifiedMemoryPool;\n\npub(crate) use config::*;\npub(crate) use task_shared::*;\n\npub(crate) fn create_memory_pool(\n    memory_pool_config: &MemoryPoolConfig,\n    comet_task_memory_manager: Arc<Global<JObject<'static>>>,\n    task_attempt_id: i64,\n) -> Arc<dyn MemoryPool> {\n    const NUM_TRACKED_CONSUMERS: usize = 10;\n    match memory_pool_config.pool_type {\n        MemoryPoolType::GreedyUnified => {\n            let mut memory_pool_map = TASK_SHARED_MEMORY_POOLS.lock().unwrap();\n            let per_task_memory_pool =\n                memory_pool_map.entry(task_attempt_id).or_insert_with(|| {\n                    let pool: Arc<dyn MemoryPool> = Arc::new(TrackConsumersPool::new(\n                        CometUnifiedMemoryPool::new(\n                            Arc::clone(&comet_task_memory_manager),\n                            task_attempt_id,\n                        ),\n                        NonZeroUsize::new(NUM_TRACKED_CONSUMERS).unwrap(),\n                    ));\n                    PerTaskMemoryPool::new(pool)\n                });\n            per_task_memory_pool.num_plans += 1;\n            Arc::clone(&per_task_memory_pool.memory_pool)\n        }\n        MemoryPoolType::FairUnified => {\n            let mut memory_pool_map = TASK_SHARED_MEMORY_POOLS.lock().unwrap();\n            let per_task_memory_pool =\n                memory_pool_map.entry(task_attempt_id).or_insert_with(|| {\n                    let pool: Arc<dyn MemoryPool> = Arc::new(TrackConsumersPool::new(\n                        CometFairMemoryPool::new(\n                            Arc::clone(&comet_task_memory_manager),\n                            memory_pool_config.pool_size,\n                        ),\n                        NonZeroUsize::new(NUM_TRACKED_CONSUMERS).unwrap(),\n                    ));\n                    PerTaskMemoryPool::new(pool)\n                });\n            per_task_memory_pool.num_plans += 1;\n            Arc::clone(&per_task_memory_pool.memory_pool)\n        }\n        MemoryPoolType::Greedy => Arc::new(TrackConsumersPool::new(\n            GreedyMemoryPool::new(memory_pool_config.pool_size),\n            NonZeroUsize::new(NUM_TRACKED_CONSUMERS).unwrap(),\n        )),\n        MemoryPoolType::FairSpill => Arc::new(TrackConsumersPool::new(\n            
FairSpillPool::new(memory_pool_config.pool_size),\n            NonZeroUsize::new(NUM_TRACKED_CONSUMERS).unwrap(),\n        )),\n        MemoryPoolType::GreedyGlobal => {\n            static GLOBAL_MEMORY_POOL_GREEDY: OnceCell<Arc<dyn MemoryPool>> = OnceCell::new();\n            let memory_pool = GLOBAL_MEMORY_POOL_GREEDY.get_or_init(|| {\n                Arc::new(TrackConsumersPool::new(\n                    GreedyMemoryPool::new(memory_pool_config.pool_size),\n                    NonZeroUsize::new(NUM_TRACKED_CONSUMERS).unwrap(),\n                ))\n            });\n            Arc::clone(memory_pool)\n        }\n        MemoryPoolType::FairSpillGlobal => {\n            static GLOBAL_MEMORY_POOL_FAIR: OnceCell<Arc<dyn MemoryPool>> = OnceCell::new();\n            let memory_pool = GLOBAL_MEMORY_POOL_FAIR.get_or_init(|| {\n                Arc::new(TrackConsumersPool::new(\n                    FairSpillPool::new(memory_pool_config.pool_size),\n                    NonZeroUsize::new(NUM_TRACKED_CONSUMERS).unwrap(),\n                ))\n            });\n            Arc::clone(memory_pool)\n        }\n        MemoryPoolType::GreedyTaskShared | MemoryPoolType::FairSpillTaskShared => {\n            let mut memory_pool_map = TASK_SHARED_MEMORY_POOLS.lock().unwrap();\n            let per_task_memory_pool =\n                memory_pool_map.entry(task_attempt_id).or_insert_with(|| {\n                    let pool: Arc<dyn MemoryPool> =\n                        if memory_pool_config.pool_type == MemoryPoolType::GreedyTaskShared {\n                            Arc::new(TrackConsumersPool::new(\n                                GreedyMemoryPool::new(memory_pool_config.pool_size),\n                                NonZeroUsize::new(NUM_TRACKED_CONSUMERS).unwrap(),\n                            ))\n                        } else {\n                            Arc::new(TrackConsumersPool::new(\n                                FairSpillPool::new(memory_pool_config.pool_size),\n                                NonZeroUsize::new(NUM_TRACKED_CONSUMERS).unwrap(),\n                            ))\n                        };\n                    PerTaskMemoryPool::new(pool)\n                });\n            per_task_memory_pool.num_plans += 1;\n            Arc::clone(&per_task_memory_pool.memory_pool)\n        }\n        MemoryPoolType::Unbounded => Arc::new(UnboundedMemoryPool::default()),\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/memory_pools/task_shared.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::execution::memory_pools::MemoryPoolType;\nuse datafusion::execution::memory_pool::MemoryPool;\nuse once_cell::sync::Lazy;\nuse std::collections::HashMap;\nuse std::sync::{Arc, Mutex};\n\n/// The per-task memory pools keyed by task attempt id.\npub(crate) static TASK_SHARED_MEMORY_POOLS: Lazy<Mutex<HashMap<i64, PerTaskMemoryPool>>> =\n    Lazy::new(|| Mutex::new(HashMap::new()));\n\npub(crate) struct PerTaskMemoryPool {\n    pub(crate) memory_pool: Arc<dyn MemoryPool>,\n    pub(crate) num_plans: usize,\n}\n\nimpl PerTaskMemoryPool {\n    pub(crate) fn new(memory_pool: Arc<dyn MemoryPool>) -> Self {\n        Self {\n            memory_pool,\n            num_plans: 0,\n        }\n    }\n}\n\n// This function reduces the refcount of a per-task memory pool when a native plan is released.\n// If the refcount reaches zero, the memory pool is removed from the map and dropped.\npub(crate) fn handle_task_shared_pool_release(pool_type: MemoryPoolType, task_attempt_id: i64) {\n    if !pool_type.is_task_shared() {\n        return;\n    }\n\n    // Decrement the number of native plans using the per-task shared memory pool, and\n    // remove the memory pool if the released native plan is the last native plan using it.\n    let mut memory_pool_map = TASK_SHARED_MEMORY_POOLS.lock().unwrap();\n    if let Some(per_task_memory_pool) = memory_pool_map.get_mut(&task_attempt_id) {\n        per_task_memory_pool.num_plans -= 1;\n        if per_task_memory_pool.num_plans == 0 {\n            // Drop the memory pool from the per-task memory pool map if there are no\n            // more native plans using it.\n            memory_pool_map.remove(&task_attempt_id);\n        }\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/memory_pools/unified_pool.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{\n    fmt::{Debug, Formatter, Result as FmtResult},\n    sync::{\n        atomic::{AtomicUsize, Ordering::Relaxed},\n        Arc,\n    },\n};\n\nuse crate::{errors::CometResult, jvm_bridge::JVMClasses};\nuse datafusion::{\n    common::{resources_datafusion_err, DataFusionError},\n    execution::memory_pool::{MemoryPool, MemoryReservation},\n};\nuse jni::objects::{Global, JObject};\nuse log::warn;\n\n/// A DataFusion `MemoryPool` implementation for Comet that delegates to\n/// Spark's off-heap executor memory pool via JNI by calling\n/// [`crate::jvm_bridge::CometTaskMemoryManager`].\npub struct CometUnifiedMemoryPool {\n    task_memory_manager_handle: Arc<Global<JObject<'static>>>,\n    used: AtomicUsize,\n    task_attempt_id: i64,\n}\n\nimpl Debug for CometUnifiedMemoryPool {\n    fn fmt(&self, f: &mut Formatter<'_>) -> FmtResult {\n        f.debug_struct(\"CometUnifiedMemoryPool\")\n            .field(\"used\", &self.used.load(Relaxed))\n            .finish()\n    }\n}\n\nimpl CometUnifiedMemoryPool {\n    pub fn new(\n        task_memory_manager_handle: Arc<Global<JObject<'static>>>,\n        task_attempt_id: i64,\n    ) -> CometUnifiedMemoryPool {\n        Self {\n            task_memory_manager_handle,\n            task_attempt_id,\n            used: AtomicUsize::new(0),\n        }\n    }\n\n    /// Request memory from Spark's off-heap memory pool via JNI\n    fn acquire_from_spark(&self, additional: usize) -> CometResult<i64> {\n        let handle = self.task_memory_manager_handle.as_obj();\n        JVMClasses::with_env(|env| unsafe {\n            jni_call!(env,\n              comet_task_memory_manager(handle).acquire_memory(additional as i64) -> i64)\n        })\n    }\n\n    /// Release memory to Spark's off-heap memory pool via JNI\n    fn release_to_spark(&self, size: usize) -> CometResult<()> {\n        let handle = self.task_memory_manager_handle.as_obj();\n        JVMClasses::with_env(|env| unsafe {\n            jni_call!(env, comet_task_memory_manager(handle).release_memory(size as i64) -> ())\n        })\n    }\n}\n\nimpl Drop for CometUnifiedMemoryPool {\n    fn drop(&mut self) {\n        let used = self.used.load(Relaxed);\n        if used != 0 {\n            warn!(\n                \"Task {} dropped CometUnifiedMemoryPool with {used} bytes still reserved\",\n                self.task_attempt_id\n            );\n        }\n    }\n}\n\nunsafe impl Send for CometUnifiedMemoryPool {}\nunsafe impl Sync for CometUnifiedMemoryPool {}\n\nimpl MemoryPool for CometUnifiedMemoryPool {\n    fn grow(&self, reservation: &MemoryReservation, additional: usize) {\n        self.try_grow(reservation, additional).unwrap();\n    }\n\n    fn shrink(&self, _: 
&MemoryReservation, size: usize) {\n        if let Err(e) = self.release_to_spark(size) {\n            panic!(\n                \"Task {} failed to return {size} bytes to Spark: {e:?}\",\n                self.task_attempt_id\n            );\n        }\n        if let Err(prev) = self\n            .used\n            .fetch_update(Relaxed, Relaxed, |old| old.checked_sub(size))\n        {\n            panic!(\n                \"Task {} underflow when releasing {size} of {prev} bytes\",\n                self.task_attempt_id\n            );\n        }\n    }\n\n    fn try_grow(&self, _: &MemoryReservation, additional: usize) -> Result<(), DataFusionError> {\n        if additional > 0 {\n            let acquired = self.acquire_from_spark(additional)?;\n            // If the number of bytes acquired is less than requested, return an error,\n            // which should trigger spilling on the caller's side.\n            if acquired < additional as i64 {\n                // Release the acquired bytes before returning the error\n                self.release_to_spark(acquired as usize)?;\n\n                return Err(resources_datafusion_err!(\n                    \"Task {} failed to acquire {} bytes, only got {}. Reserved: {}\",\n                    self.task_attempt_id,\n                    additional,\n                    acquired,\n                    self.reserved()\n                ));\n            }\n            if let Err(prev) = self\n                .used\n                .fetch_update(Relaxed, Relaxed, |old| old.checked_add(acquired as usize))\n            {\n                return Err(resources_datafusion_err!(\n                    \"Task {} failed to acquire {} bytes due to overflow. Reserved: {}\",\n                    self.task_attempt_id,\n                    additional,\n                    prev\n                ));\n            }\n        }\n        Ok(())\n    }\n\n    fn reserved(&self) -> usize {\n        self.used.load(Relaxed)\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/metrics/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub mod utils;\n"
  },
  {
    "path": "native/core/src/execution/metrics/utils.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::errors::CometError;\nuse crate::execution::spark_plan::SparkPlan;\nuse datafusion::physical_plan::metrics::MetricValue;\nuse datafusion_comet_proto::spark_metric::NativeMetricNode;\nuse jni::{objects::JObject, Env};\nuse prost::Message;\nuse std::collections::HashMap;\nuse std::sync::Arc;\n\n/// Updates the metrics of a CometMetricNode. This function is called recursively to\n/// update the metrics of all the children nodes. The metrics are pulled from the\n/// native execution plan and pushed to the Java side through JNI.\npub(crate) fn update_comet_metric(\n    env: &mut Env,\n    metric_node: &JObject,\n    spark_plan: &Arc<SparkPlan>,\n) -> Result<(), CometError> {\n    if metric_node.is_null() {\n        return Ok(());\n    }\n\n    let native_metric = to_native_metric_node(spark_plan);\n    let jbytes = env.byte_array_from_slice(&native_metric?.encode_to_vec())?;\n\n    unsafe { jni_call!(env, comet_metric_node(metric_node).set_all_from_bytes(&jbytes) -> ()) }\n}\n\npub(crate) fn to_native_metric_node(\n    spark_plan: &Arc<SparkPlan>,\n) -> Result<NativeMetricNode, CometError> {\n    let mut native_metric_node = NativeMetricNode {\n        metrics: HashMap::new(),\n        children: Vec::new(),\n    };\n\n    let node_metrics = if spark_plan.additional_native_plans.is_empty() {\n        spark_plan.native_plan.metrics()\n    } else {\n        let mut metrics = spark_plan.native_plan.metrics().unwrap_or_default();\n        for plan in &spark_plan.additional_native_plans {\n            let additional_metrics = plan.metrics().unwrap_or_default();\n            for c in additional_metrics.iter() {\n                match c.value() {\n                    MetricValue::OutputRows(_) => {\n                        // we do not want to double count output rows\n                    }\n                    _ => metrics.push(c.to_owned()),\n                }\n            }\n        }\n        Some(metrics.aggregate_by_name())\n    };\n\n    // add metrics\n    node_metrics\n        .unwrap_or_default()\n        .iter()\n        .map(|m| m.value())\n        .map(|m| (m.name(), m.as_usize() as i64))\n        .for_each(|(name, value)| {\n            native_metric_node.metrics.insert(name.to_string(), value);\n        });\n\n    // add children\n    for child_plan in spark_plan.children() {\n        let child_node = to_native_metric_node(child_plan)?;\n        native_metric_node.children.push(child_node);\n    }\n\n    Ok(native_metric_node)\n}\n"
  },
  {
    "path": "native/core/src/execution/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! PoC of vectorization execution through JNI to Rust.\npub mod columnar_to_row;\npub mod expressions;\npub mod jni_api;\npub(crate) mod metrics;\npub mod operators;\npub(crate) mod planner;\npub mod serde;\npub use datafusion_comet_shuffle as shuffle;\npub(crate) mod sort;\npub(crate) mod spark_plan;\npub use datafusion_comet_spark_expr::timezone;\nmod memory_pools;\npub(crate) mod spark_config;\npub(crate) mod tracing;\npub(crate) mod utils;\n\n#[cfg(test)]\nmod tests {\n    #[test]\n    fn it_works() {\n        let result = 2 + 2;\n        assert_eq!(result, 4);\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/operators/copy.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::compute::{cast_with_options, CastOptions};\nuse std::sync::Arc;\n\nuse arrow::array::{downcast_dictionary_array, make_array, Array, ArrayRef, MutableArrayData};\nuse arrow::datatypes::DataType;\nuse arrow::error::ArrowError;\n\n#[derive(Debug, PartialEq, Clone)]\npub enum CopyMode {\n    /// Perform a deep copy and also unpack dictionaries\n    UnpackOrDeepCopy,\n    /// Perform a clone and also unpack dictionaries\n    UnpackOrClone,\n}\n\n/// Copy an Arrow Array\npub(crate) fn copy_array(array: &dyn Array) -> ArrayRef {\n    let capacity = array.len();\n    let data = array.to_data();\n\n    let mut mutable = MutableArrayData::new(vec![&data], false, capacity);\n\n    mutable.extend(0, 0, capacity);\n\n    if matches!(array.data_type(), DataType::Dictionary(_, _)) {\n        let copied_dict = make_array(mutable.freeze());\n        let ref_copied_dict = &copied_dict;\n\n        downcast_dictionary_array!(\n            ref_copied_dict => {\n                // Copying dictionary value array\n                let values = ref_copied_dict.values();\n                let data = values.to_data();\n\n                let mut mutable = MutableArrayData::new(vec![&data], false, values.len());\n                mutable.extend(0, 0, values.len());\n\n                let copied_dict = ref_copied_dict.with_values(make_array(mutable.freeze()));\n                Arc::new(copied_dict)\n            }\n            t => unreachable!(\"Should not reach here: {}\", t)\n        )\n    } else {\n        make_array(mutable.freeze())\n    }\n}\n\n/// Copy an Arrow Array or cast to primitive type if it is a dictionary array.\n/// This is used for `CopyExec` to copy/cast the input array. If the input array\n/// is a dictionary array, we will cast the dictionary array to primitive type\n/// (i.e., unpack the dictionary array) and copy the primitive array. 
If the input\n/// array is not a dictionary, it is deep-copied in `UnpackOrDeepCopy` mode or cheaply\n/// cloned via `Arc` in `UnpackOrClone` mode.\npub(crate) fn copy_or_unpack_array(\n    array: &Arc<dyn Array>,\n    mode: &CopyMode,\n) -> Result<ArrayRef, ArrowError> {\n    match array.data_type() {\n        DataType::Dictionary(_, value_type) => {\n            let options = CastOptions::default();\n            // We need to copy the array after `cast` because the arrow-rs `take` kernel, which is\n            // used to unpack dictionary arrays, might reuse the input array's null buffer.\n            Ok(copy_array(&cast_with_options(\n                array,\n                value_type.as_ref(),\n                &options,\n            )?))\n        }\n        _ => {\n            if mode == &CopyMode::UnpackOrDeepCopy {\n                Ok(copy_array(array))\n            } else {\n                Ok(Arc::clone(array))\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/operators/csv_scan.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::execution::operators::ExecutionError;\nuse arrow::datatypes::SchemaRef;\nuse datafusion::common::config::CsvOptions as DFCsvOptions;\nuse datafusion::common::DataFusionError;\nuse datafusion::common::Result;\nuse datafusion::datasource::object_store::ObjectStoreUrl;\nuse datafusion::datasource::physical_plan::CsvSource;\nuse datafusion_comet_proto::spark_operator::CsvOptions;\nuse datafusion_datasource::file_groups::FileGroup;\nuse datafusion_datasource::file_scan_config::FileScanConfigBuilder;\nuse datafusion_datasource::source::DataSourceExec;\nuse datafusion_datasource::PartitionedFile;\nuse std::sync::Arc;\n\npub fn init_csv_datasource_exec(\n    object_store_url: ObjectStoreUrl,\n    file_groups: Vec<Vec<PartitionedFile>>,\n    data_schema: SchemaRef,\n    _partition_schema: SchemaRef,\n    projection_vector: Vec<usize>,\n    csv_options: &CsvOptions,\n) -> Result<Arc<DataSourceExec>, ExecutionError> {\n    let csv_source = build_csv_source(data_schema, csv_options)?;\n\n    let file_groups = file_groups\n        .iter()\n        .map(|files| FileGroup::new(files.clone()))\n        .collect();\n\n    let file_scan_config = FileScanConfigBuilder::new(object_store_url, csv_source)\n        .with_file_groups(file_groups)\n        .with_projection_indices(Some(projection_vector))?\n        .build();\n\n    Ok(DataSourceExec::from_data_source(file_scan_config))\n}\n\nfn build_csv_source(schema: SchemaRef, options: &CsvOptions) -> Result<Arc<CsvSource>> {\n    let delimiter = string_to_u8(&options.delimiter, \"delimiter\")?;\n    let quote = string_to_u8(&options.quote, \"quote\")?;\n    let escape = string_to_u8(&options.escape, \"escape\")?;\n    let terminator = string_to_u8(&options.terminator, \"terminator\")?;\n    let comment = options\n        .comment\n        .as_ref()\n        .map(|c| string_to_u8(c, \"comment\"))\n        .transpose()?;\n\n    let df_csv_options = DFCsvOptions {\n        has_header: Some(options.has_header),\n        delimiter,\n        quote,\n        escape: Some(escape),\n        terminator: Some(terminator),\n        comment,\n        truncated_rows: Some(options.truncated_rows),\n        ..Default::default()\n    };\n\n    let csv_source = CsvSource::new(schema).with_csv_options(df_csv_options);\n    Ok(Arc::new(csv_source))\n}\n\nfn string_to_u8(option: &str, option_name: &str) -> Result<u8> {\n    match option.as_bytes().first() {\n        Some(&ch) if ch.is_ascii() => Ok(ch),\n        _ => Err(DataFusionError::Configuration(format!(\n            \"invalid {option_name} character '{option}': must be an ASCII character\"\n        ))),\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/operators/expand.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{RecordBatch, RecordBatchOptions};\nuse arrow::datatypes::SchemaRef;\nuse datafusion::common::DataFusionError;\nuse datafusion::physical_expr::{EquivalenceProperties, PhysicalExpr};\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType};\nuse datafusion::{\n    execution::TaskContext,\n    physical_plan::{\n        DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, PlanProperties,\n        RecordBatchStream, SendableRecordBatchStream,\n    },\n};\nuse futures::{Stream, StreamExt};\nuse std::{\n    any::Any,\n    pin::Pin,\n    sync::Arc,\n    task::{Context, Poll},\n};\n\n/// A Comet native operator that expands a single row into multiple rows. This behaves as same as\n/// Spark Expand operator.\n#[derive(Debug)]\npub struct ExpandExec {\n    projections: Vec<Vec<Arc<dyn PhysicalExpr>>>,\n    child: Arc<dyn ExecutionPlan>,\n    schema: SchemaRef,\n    cache: Arc<PlanProperties>,\n}\n\nimpl ExpandExec {\n    /// Create a new ExpandExec\n    pub fn new(\n        projections: Vec<Vec<Arc<dyn PhysicalExpr>>>,\n        child: Arc<dyn ExecutionPlan>,\n        schema: SchemaRef,\n    ) -> Self {\n        let cache = Arc::new(PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&schema)),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        ));\n\n        Self {\n            projections,\n            child,\n            schema,\n            cache,\n        }\n    }\n}\n\nimpl DisplayAs for ExpandExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"CometExpandExec\")?;\n                write!(f, \"Projections: [\")?;\n                for projection in &self.projections {\n                    write!(f, \"[\")?;\n                    for expr in projection {\n                        write!(f, \"{expr}, \")?;\n                    }\n                    write!(f, \"], \")?;\n                }\n                write!(f, \"]\")?;\n\n                Ok(())\n            }\n            DisplayFormatType::TreeRender => unimplemented!(),\n        }\n    }\n}\n\nimpl ExecutionPlan for ExpandExec {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> datafusion::common::Result<Arc<dyn 
ExecutionPlan>> {\n        let new_expand = ExpandExec::new(\n            self.projections.clone(),\n            Arc::clone(&children[0]),\n            Arc::clone(&self.schema),\n        );\n        Ok(Arc::new(new_expand))\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        context: Arc<TaskContext>,\n    ) -> datafusion::common::Result<SendableRecordBatchStream> {\n        let child_stream = self.child.execute(partition, context)?;\n        let expand_stream = ExpandStream::new(\n            self.projections.clone(),\n            child_stream,\n            Arc::clone(&self.schema),\n        );\n        Ok(Box::pin(expand_stream))\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.cache\n    }\n\n    fn name(&self) -> &str {\n        \"CometExpandExec\"\n    }\n}\n\npub struct ExpandStream {\n    projections: Vec<Vec<Arc<dyn PhysicalExpr>>>,\n    child_stream: SendableRecordBatchStream,\n    schema: SchemaRef,\n    current_index: i32,\n    max_index: i32,\n    current_batch: Option<RecordBatch>,\n}\n\nimpl ExpandStream {\n    /// Create a new ExpandStream\n    pub fn new(\n        projections: Vec<Vec<Arc<dyn PhysicalExpr>>>,\n        child_stream: SendableRecordBatchStream,\n        schema: SchemaRef,\n    ) -> Self {\n        let max_index = projections.len() as i32;\n        Self {\n            projections,\n            child_stream,\n            schema,\n            current_index: -1,\n            max_index,\n            current_batch: None,\n        }\n    }\n\n    fn expand(\n        &self,\n        batch: &RecordBatch,\n        projection: &[Arc<dyn PhysicalExpr>],\n    ) -> Result<RecordBatch, DataFusionError> {\n        let mut columns = vec![];\n\n        projection.iter().try_for_each(|expr| {\n            let column = expr.evaluate(batch)?;\n            columns.push(column.into_array(batch.num_rows())?);\n\n            Ok::<(), DataFusionError>(())\n        })?;\n\n        let options = RecordBatchOptions::new().with_row_count(Some(batch.num_rows()));\n        RecordBatch::try_new_with_options(Arc::clone(&self.schema), columns, &options)\n            .map_err(|e| e.into())\n    }\n}\n\nimpl Stream for ExpandStream {\n    type Item = datafusion::common::Result<RecordBatch>;\n\n    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {\n        if self.current_index == -1 {\n            let next = self.child_stream.poll_next_unpin(cx);\n            match next {\n                Poll::Ready(Some(Ok(batch))) => {\n                    self.current_batch = Some(batch);\n                    self.current_index = 0;\n                }\n                other => return other,\n            }\n        }\n        assert!(self.current_batch.is_some());\n\n        let projection = &self.projections[self.current_index as usize];\n        let batch = self.expand(self.current_batch.as_ref().unwrap(), projection);\n\n        self.current_index += 1;\n\n        if self.current_index == self.max_index {\n            self.current_index = -1;\n            self.current_batch = None;\n        }\n        Poll::Ready(Some(batch))\n    }\n}\n\nimpl RecordBatchStream for ExpandStream {\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/operators/iceberg_scan.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Native Iceberg table scan operator using iceberg-rust\n\nuse std::any::Any;\nuse std::collections::HashMap;\nuse std::fmt;\nuse std::pin::Pin;\nuse std::sync::Arc;\nuse std::task::{Context, Poll};\n\nuse arrow::array::{ArrayRef, RecordBatch, RecordBatchOptions};\nuse arrow::datatypes::SchemaRef;\nuse datafusion::common::{DataFusionError, Result as DFResult};\nuse datafusion::execution::{RecordBatchStream, SendableRecordBatchStream, TaskContext};\nuse datafusion::physical_expr::expressions::Column;\nuse datafusion::physical_expr::{EquivalenceProperties, PhysicalExpr};\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType};\nuse datafusion::physical_plan::metrics::{\n    BaselineMetrics, Count, ExecutionPlanMetricsSet, MetricBuilder, MetricsSet,\n};\nuse datafusion::physical_plan::{\n    DisplayAs, DisplayFormatType, ExecutionPlan, Partitioning, PlanProperties,\n};\nuse futures::{Stream, StreamExt, TryStreamExt};\nuse iceberg::io::{FileIO, FileIOBuilder, StorageFactory};\nuse iceberg_storage_opendal::OpenDalStorageFactory;\n\nuse crate::execution::operators::ExecutionError;\nuse crate::parquet::parquet_support::SparkParquetOptions;\nuse crate::parquet::schema_adapter::SparkPhysicalExprAdapterFactory;\nuse datafusion_comet_spark_expr::EvalMode;\nuse datafusion_physical_expr_adapter::{PhysicalExprAdapter, PhysicalExprAdapterFactory};\nuse iceberg::scan::FileScanTask;\n\n/// Iceberg table scan operator that uses iceberg-rust to read Iceberg tables.\n///\n/// Executes pre-planned FileScanTasks for efficient parallel scanning.\n#[derive(Debug)]\npub struct IcebergScanExec {\n    /// Iceberg table metadata location for FileIO initialization\n    metadata_location: String,\n    /// Output schema after projection\n    output_schema: SchemaRef,\n    /// Cached execution plan properties\n    plan_properties: Arc<PlanProperties>,\n    /// Catalog-specific configuration for FileIO\n    catalog_properties: HashMap<String, String>,\n    /// Pre-planned file scan tasks\n    tasks: Vec<FileScanTask>,\n    /// Number of data files to read concurrently\n    data_file_concurrency_limit: usize,\n    /// Metrics\n    metrics: ExecutionPlanMetricsSet,\n}\n\nimpl IcebergScanExec {\n    pub fn new(\n        metadata_location: String,\n        schema: SchemaRef,\n        catalog_properties: HashMap<String, String>,\n        tasks: Vec<FileScanTask>,\n        data_file_concurrency_limit: usize,\n    ) -> Result<Self, ExecutionError> {\n        let output_schema = schema;\n        let plan_properties = Self::compute_properties(Arc::clone(&output_schema), 1);\n\n        let metrics = ExecutionPlanMetricsSet::new();\n\n        Ok(Self {\n            metadata_location,\n    
        output_schema,\n            plan_properties,\n            catalog_properties,\n            tasks,\n            data_file_concurrency_limit,\n            metrics,\n        })\n    }\n\n    fn compute_properties(schema: SchemaRef, num_partitions: usize) -> Arc<PlanProperties> {\n        Arc::new(PlanProperties::new(\n            EquivalenceProperties::new(schema),\n            Partitioning::UnknownPartitioning(num_partitions),\n            EmissionType::Incremental,\n            Boundedness::Bounded,\n        ))\n    }\n}\n\nimpl ExecutionPlan for IcebergScanExec {\n    fn name(&self) -> &str {\n        \"IcebergScanExec\"\n    }\n\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.output_schema)\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.plan_properties\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        vec![]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        _children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> DFResult<Arc<dyn ExecutionPlan>> {\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        _partition: usize,\n        context: Arc<TaskContext>,\n    ) -> DFResult<SendableRecordBatchStream> {\n        self.execute_with_tasks(self.tasks.clone(), context)\n    }\n\n    fn metrics(&self) -> Option<MetricsSet> {\n        Some(self.metrics.clone_inner())\n    }\n}\n\nimpl IcebergScanExec {\n    /// Handles MOR (Merge-On-Read) tables by automatically applying positional and equality\n    /// deletes via iceberg-rust's ArrowReader.\n    fn execute_with_tasks(\n        &self,\n        tasks: Vec<FileScanTask>,\n        context: Arc<TaskContext>,\n    ) -> DFResult<SendableRecordBatchStream> {\n        let output_schema = Arc::clone(&self.output_schema);\n        let file_io = Self::load_file_io(&self.catalog_properties, &self.metadata_location)?;\n        let batch_size = context.session_config().batch_size();\n\n        let metrics = IcebergScanMetrics::new(&self.metrics);\n        let num_tasks = tasks.len();\n        metrics.num_splits.add(num_tasks);\n\n        let task_stream = futures::stream::iter(tasks.into_iter().map(Ok)).boxed();\n\n        let reader = iceberg::arrow::ArrowReaderBuilder::new(file_io)\n            .with_batch_size(batch_size)\n            .with_data_file_concurrency_limit(self.data_file_concurrency_limit)\n            .with_row_selection_enabled(true)\n            .with_metadata_size_hint(512 * 1024) // Same as DataFusion's default\n            .build();\n\n        // Pass all tasks to iceberg-rust at once to utilize its flatten_unordered\n        // parallelization, avoiding overhead of single-task streams\n        let stream = reader.read(task_stream).map_err(|e| {\n            DataFusionError::Execution(format!(\"Failed to read Iceberg tasks: {}\", e))\n        })?;\n\n        let spark_options = SparkParquetOptions::new(EvalMode::Legacy, \"UTC\", false);\n        let adapter_factory = SparkPhysicalExprAdapterFactory::new(spark_options, None);\n\n        let adapted_stream =\n            stream.map_err(|e| DataFusionError::Execution(format!(\"Iceberg scan error: {}\", e)));\n\n        let wrapped_stream = IcebergStreamWrapper {\n            inner: adapted_stream,\n            schema: output_schema,\n            adapter_factory,\n            cached: None,\n            baseline_metrics: metrics.baseline,\n        };\n\n        Ok(Box::pin(wrapped_stream))\n    }\n\n    fn storage_factory_for(path: 
&str) -> Result<Arc<dyn StorageFactory>, DataFusionError> {\n        let scheme = if path.contains(\"://\") {\n            path.split(\"://\").next().unwrap_or(\"file\")\n        } else {\n            \"file\"\n        };\n        match scheme {\n            \"file\" => Ok(Arc::new(OpenDalStorageFactory::Fs)),\n            \"s3\" | \"s3a\" => Ok(Arc::new(OpenDalStorageFactory::S3 {\n                configured_scheme: scheme.to_string(),\n                customized_credential_load: None,\n            })),\n            \"gs\" => Ok(Arc::new(OpenDalStorageFactory::Gcs)),\n            \"oss\" => Ok(Arc::new(OpenDalStorageFactory::Oss)),\n            _ => Err(DataFusionError::Execution(format!(\n                \"Unsupported storage scheme: {scheme}\"\n            ))),\n        }\n    }\n\n    fn load_file_io(\n        catalog_properties: &HashMap<String, String>,\n        metadata_location: &str,\n    ) -> Result<FileIO, DataFusionError> {\n        let factory = Self::storage_factory_for(metadata_location)?;\n        let mut file_io_builder = FileIOBuilder::new(factory);\n\n        for (key, value) in catalog_properties {\n            file_io_builder = file_io_builder.with_prop(key, value);\n        }\n\n        Ok(file_io_builder.build())\n    }\n}\n\n/// Metrics for IcebergScanExec\nstruct IcebergScanMetrics {\n    /// Baseline metrics (output rows, elapsed compute time)\n    baseline: BaselineMetrics,\n    /// Count of file splits (FileScanTasks) processed\n    num_splits: Count,\n}\n\nimpl IcebergScanMetrics {\n    fn new(metrics: &ExecutionPlanMetricsSet) -> Self {\n        Self {\n            baseline: BaselineMetrics::new(metrics, 0),\n            num_splits: MetricBuilder::new(metrics).counter(\"num_splits\", 0),\n        }\n    }\n}\n\n/// Wrapper around iceberg-rust's stream that performs schema adaptation.\n/// Handles batches from multiple files that may have different Arrow schemas\n/// (metadata, field IDs, etc.).\nstruct IcebergStreamWrapper<S> {\n    inner: S,\n    schema: SchemaRef,\n    /// Factory for creating adapters when file schema changes\n    adapter_factory: SparkPhysicalExprAdapterFactory,\n    /// Cached adapter and projection expressions for the current file schema,\n    /// reused across batches with the same schema\n    cached: Option<CachedProjection>,\n    /// Metrics for output tracking\n    baseline_metrics: BaselineMetrics,\n}\n\n/// Cached projection state: file schema, adapter, and pre-built projection expressions.\nstruct CachedProjection {\n    file_schema: SchemaRef,\n    projection_exprs: Vec<Arc<dyn PhysicalExpr>>,\n}\n\nimpl<S> Stream for IcebergStreamWrapper<S>\nwhere\n    S: Stream<Item = DFResult<RecordBatch>> + Unpin,\n{\n    type Item = DFResult<RecordBatch>;\n\n    fn poll_next(mut self: Pin<&mut Self>, cx: &mut Context<'_>) -> Poll<Option<Self::Item>> {\n        let poll_result = self.inner.poll_next_unpin(cx);\n\n        let result = match poll_result {\n            Poll::Ready(Some(Ok(batch))) => {\n                let file_schema = batch.schema();\n\n                // Reuse cached projection expressions if file schema hasn't changed.\n                // Batches from the same file share the same Arc<Schema> pointer,\n                // so pointer equality is sufficient here.\n                let projection_exprs = match &self.cached {\n                    Some(cached) if Arc::ptr_eq(&cached.file_schema, &file_schema) => {\n                        &cached.projection_exprs\n                    }\n                    _ => {\n                   
     let adapter = self\n                            .adapter_factory\n                            .create(Arc::clone(&self.schema), Arc::clone(&file_schema))?;\n                        let exprs =\n                            build_projection_expressions(&self.schema, &adapter).map_err(|e| {\n                                DataFusionError::Execution(format!(\n                                    \"Failed to build projection expressions: {}\",\n                                    e\n                                ))\n                            })?;\n                        self.cached = Some(CachedProjection {\n                            file_schema,\n                            projection_exprs: exprs,\n                        });\n                        &self.cached.as_ref().unwrap().projection_exprs\n                    }\n                };\n\n                let result = adapt_batch_with_expressions(batch, &self.schema, projection_exprs)\n                    .map_err(|e| {\n                        DataFusionError::Execution(format!(\"Batch adaptation failed: {}\", e))\n                    });\n\n                Poll::Ready(Some(result))\n            }\n            other => other,\n        };\n\n        self.baseline_metrics.record_poll(result)\n    }\n}\n\nimpl<S> RecordBatchStream for IcebergStreamWrapper<S>\nwhere\n    S: Stream<Item = DFResult<RecordBatch>> + Unpin,\n{\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n}\n\nimpl DisplayAs for IcebergScanExec {\n    fn fmt_as(&self, _t: DisplayFormatType, f: &mut fmt::Formatter) -> fmt::Result {\n        write!(\n            f,\n            \"IcebergScanExec: metadata_location={}, num_tasks={}\",\n            self.metadata_location,\n            self.tasks.len()\n        )\n    }\n}\n\n/// Build projection expressions that adapt batches from a file schema to the target schema.\n///\n/// The returned expressions can be cached and reused across multiple batches\n/// that share the same file schema, avoiding repeated expression construction.\nfn build_projection_expressions(\n    target_schema: &SchemaRef,\n    adapter: &Arc<dyn PhysicalExprAdapter>,\n) -> DFResult<Vec<Arc<dyn PhysicalExpr>>> {\n    target_schema\n        .fields()\n        .iter()\n        .enumerate()\n        .map(|(i, _field)| {\n            let col_expr: Arc<dyn PhysicalExpr> = Arc::new(Column::new_with_schema(\n                target_schema.field(i).name(),\n                target_schema.as_ref(),\n            )?);\n            adapter.rewrite(col_expr)\n        })\n        .collect::<DFResult<Vec<_>>>()\n}\n\n/// Adapt a batch to match the target schema using pre-built projection expressions.\n///\n/// The caller provides pre-built `projection_exprs` (from [`build_projection_expressions`])\n/// which can be cached and reused across multiple batches with the same file schema.\nfn adapt_batch_with_expressions(\n    batch: RecordBatch,\n    target_schema: &SchemaRef,\n    projection_exprs: &[Arc<dyn PhysicalExpr>],\n) -> DFResult<RecordBatch> {\n    // If schemas match, no adaptation needed\n    if Arc::ptr_eq(&batch.schema(), target_schema) {\n        return Ok(batch);\n    }\n\n    // Zero-column projection (e.g. 
SELECT count(*)) — preserve row count\n    if projection_exprs.is_empty() {\n        return Ok(RecordBatch::try_new_with_options(\n            Arc::clone(target_schema),\n            vec![],\n            &RecordBatchOptions::new().with_row_count(Some(batch.num_rows())),\n        )?);\n    }\n\n    // Evaluate expressions against batch\n    let columns: Vec<ArrayRef> = projection_exprs\n        .iter()\n        .map(|expr| expr.evaluate(&batch)?.into_array(batch.num_rows()))\n        .collect::<DFResult<Vec<_>>>()?;\n\n    RecordBatch::try_new(Arc::clone(target_schema), columns).map_err(|e| e.into())\n}\n"
  },
  {
    "path": "native/core/src/execution/operators/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Operators\n\npub use crate::errors::ExecutionError;\n\npub use copy::*;\npub use iceberg_scan::*;\npub use scan::*;\n\nmod copy;\nmod expand;\npub use expand::ExpandExec;\nmod iceberg_scan;\nmod parquet_writer;\npub use parquet_writer::ParquetWriterExec;\nmod csv_scan;\npub mod projection;\nmod scan;\nmod shuffle_scan;\npub use csv_scan::init_csv_datasource_exec;\npub use shuffle_scan::ShuffleScanExec;\n"
  },
  {
    "path": "native/core/src/execution/operators/parquet_writer.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Parquet writer operator for writing RecordBatches to Parquet files\n\nuse std::{\n    any::Any,\n    collections::HashMap,\n    fmt,\n    fmt::{Debug, Formatter},\n    fs::File,\n    sync::Arc,\n};\n\n#[cfg(feature = \"hdfs-opendal\")]\nuse opendal::Operator;\n#[cfg(feature = \"hdfs-opendal\")]\nuse std::io::Cursor;\n\nuse crate::execution::shuffle::CompressionCodec;\nuse crate::parquet::parquet_support::is_hdfs_scheme;\n#[cfg(feature = \"hdfs-opendal\")]\nuse crate::parquet::parquet_support::{create_hdfs_operator, prepare_object_store_with_configs};\nuse arrow::datatypes::{Schema, SchemaRef};\nuse arrow::record_batch::RecordBatch;\nuse async_trait::async_trait;\nuse datafusion::{\n    error::{DataFusionError, Result},\n    execution::context::TaskContext,\n    physical_expr::EquivalenceProperties,\n    physical_plan::{\n        execution_plan::{Boundedness, EmissionType},\n        metrics::{ExecutionPlanMetricsSet, MetricsSet},\n        stream::RecordBatchStreamAdapter,\n        DisplayAs, DisplayFormatType, ExecutionPlan, ExecutionPlanProperties, PlanProperties,\n        SendableRecordBatchStream,\n    },\n};\nuse futures::TryStreamExt;\nuse parquet::{\n    arrow::ArrowWriter,\n    basic::{Compression, ZstdLevel},\n    file::properties::WriterProperties,\n};\nuse url::Url;\n\n/// Enum representing different types of Arrow writers based on storage backend\nenum ParquetWriter {\n    /// Writer for local file system\n    LocalFile(ArrowWriter<File>),\n    /// Writer for HDFS or other remote storage (writes to in-memory buffer)\n    /// Contains the arrow writer, HDFS operator, and destination path\n    /// an Arrow writer writes to in-memory buffer the data converted to Parquet format\n    /// The opendal::Writer is created lazily on first write\n    #[cfg(feature = \"hdfs-opendal\")]\n    Remote(\n        ArrowWriter<Cursor<Vec<u8>>>,\n        Option<opendal::Writer>,\n        Operator,\n        String,\n    ),\n}\n\nimpl ParquetWriter {\n    /// Write a RecordBatch to the underlying writer\n    async fn write(\n        &mut self,\n        batch: &RecordBatch,\n    ) -> std::result::Result<(), parquet::errors::ParquetError> {\n        match self {\n            ParquetWriter::LocalFile(writer) => writer.write(batch),\n            #[cfg(feature = \"hdfs-opendal\")]\n            ParquetWriter::Remote(\n                arrow_parquet_buffer_writer,\n                hdfs_writer_opt,\n                op,\n                output_path,\n            ) => {\n                // Write batch to in-memory buffer\n                arrow_parquet_buffer_writer.write(batch)?;\n\n                // Flush and get the current buffer content\n                
arrow_parquet_buffer_writer.flush()?;\n                let cursor = arrow_parquet_buffer_writer.inner_mut();\n                let current_data = cursor.get_ref().clone();\n\n                // Create HDFS writer lazily on first write\n                if hdfs_writer_opt.is_none() {\n                    let writer = op.writer(output_path.as_str()).await.map_err(|e| {\n                        parquet::errors::ParquetError::External(\n                            format!(\"Failed to create HDFS writer for '{}': {}\", output_path, e)\n                                .into(),\n                        )\n                    })?;\n                    *hdfs_writer_opt = Some(writer);\n                }\n\n                // Write the accumulated data to HDFS\n                if let Some(hdfs_writer) = hdfs_writer_opt {\n                    hdfs_writer.write(current_data).await.map_err(|e| {\n                        parquet::errors::ParquetError::External(\n                            format!(\n                                \"Failed to write batch to HDFS file '{}': {}\",\n                                output_path, e\n                            )\n                            .into(),\n                        )\n                    })?;\n                }\n\n                // Clear the buffer after upload\n                cursor.get_mut().clear();\n                cursor.set_position(0);\n\n                Ok(())\n            }\n        }\n    }\n\n    /// Close the writer and finalize the file\n    async fn close(self) -> std::result::Result<(), parquet::errors::ParquetError> {\n        match self {\n            ParquetWriter::LocalFile(writer) => {\n                writer.close()?;\n                Ok(())\n            }\n            #[cfg(feature = \"hdfs-opendal\")]\n            ParquetWriter::Remote(\n                arrow_parquet_buffer_writer,\n                mut hdfs_writer_opt,\n                op,\n                output_path,\n            ) => {\n                // Close the arrow writer to finalize parquet format\n                let cursor = arrow_parquet_buffer_writer.into_inner()?;\n                let final_data = cursor.into_inner();\n\n                // Create HDFS writer if not already created\n                if hdfs_writer_opt.is_none() && !final_data.is_empty() {\n                    let writer = op.writer(output_path.as_str()).await.map_err(|e| {\n                        parquet::errors::ParquetError::External(\n                            format!(\"Failed to create HDFS writer for '{}': {}\", output_path, e)\n                                .into(),\n                        )\n                    })?;\n                    hdfs_writer_opt = Some(writer);\n                }\n\n                // Write any remaining data\n                if !final_data.is_empty() {\n                    if let Some(mut hdfs_writer) = hdfs_writer_opt {\n                        hdfs_writer.write(final_data).await.map_err(|e| {\n                            parquet::errors::ParquetError::External(\n                                format!(\n                                    \"Failed to write final data to HDFS file '{}': {}\",\n                                    output_path, e\n                                )\n                                .into(),\n                            )\n                        })?;\n\n                        // Close the HDFS writer\n                        hdfs_writer.close().await.map_err(|e| {\n                            
parquet::errors::ParquetError::External(\n                                format!(\"Failed to close HDFS writer for '{}': {}\", output_path, e)\n                                    .into(),\n                            )\n                        })?;\n                    }\n                }\n\n                Ok(())\n            }\n        }\n    }\n}\n\n/// Parquet writer operator that writes input batches to a Parquet file\n#[derive(Debug)]\npub struct ParquetWriterExec {\n    /// Input execution plan\n    input: Arc<dyn ExecutionPlan>,\n    /// Output file path (final destination)\n    output_path: String,\n    /// Working directory for temporary files (used by FileCommitProtocol)\n    work_dir: String,\n    /// Job ID for tracking this write operation\n    job_id: Option<String>,\n    /// Task attempt ID for this specific task\n    task_attempt_id: Option<i32>,\n    /// Compression codec\n    compression: CompressionCodec,\n    /// Partition ID (from Spark TaskContext)\n    partition_id: i32,\n    /// Column names to use in the output Parquet file\n    column_names: Vec<String>,\n    /// Object store configuration options\n    object_store_options: HashMap<String, String>,\n    /// Metrics\n    metrics: ExecutionPlanMetricsSet,\n    /// Cache for plan properties\n    cache: Arc<PlanProperties>,\n}\n\nimpl ParquetWriterExec {\n    /// Create a new ParquetWriterExec\n    #[allow(clippy::too_many_arguments)]\n    pub fn try_new(\n        input: Arc<dyn ExecutionPlan>,\n        output_path: String,\n        work_dir: String,\n        job_id: Option<String>,\n        task_attempt_id: Option<i32>,\n        compression: CompressionCodec,\n        partition_id: i32,\n        column_names: Vec<String>,\n        object_store_options: HashMap<String, String>,\n    ) -> Result<Self> {\n        // Preserve the input's partitioning so each partition writes its own file\n        let input_partitioning = input.output_partitioning().clone();\n\n        let cache = Arc::new(PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&input.schema())),\n            input_partitioning,\n            EmissionType::Final,\n            Boundedness::Bounded,\n        ));\n\n        Ok(ParquetWriterExec {\n            input,\n            output_path,\n            work_dir,\n            job_id,\n            task_attempt_id,\n            compression,\n            partition_id,\n            column_names,\n            object_store_options,\n            metrics: ExecutionPlanMetricsSet::new(),\n            cache,\n        })\n    }\n\n    fn compression_to_parquet(&self) -> Result<Compression> {\n        match self.compression {\n            CompressionCodec::None => Ok(Compression::UNCOMPRESSED),\n            CompressionCodec::Zstd(level) => Ok(Compression::ZSTD(ZstdLevel::try_new(level)?)),\n            CompressionCodec::Lz4Frame => Ok(Compression::LZ4),\n            CompressionCodec::Snappy => Ok(Compression::SNAPPY),\n        }\n    }\n\n    /// Create an Arrow writer based on the storage scheme\n    ///\n    /// # Arguments\n    /// * `output_file_path` - The full path to the output file\n    /// * `schema` - The Arrow schema for the Parquet file\n    /// * `props` - Writer properties including compression\n    /// * `runtime_env` - Runtime environment for object store registration\n    /// * `object_store_options` - Configuration options for object store\n    ///\n    /// # Returns\n    /// * `Ok(ParquetWriter)` - A writer appropriate for the storage scheme\n    /// * `Err(DataFusionError)` - If 
writer creation fails\n    fn create_arrow_writer(\n        output_file_path: &str,\n        schema: SchemaRef,\n        props: WriterProperties,\n        _runtime_env: Arc<datafusion::execution::runtime_env::RuntimeEnv>,\n        object_store_options: &HashMap<String, String>,\n    ) -> Result<ParquetWriter> {\n        // Parse URL and match on storage scheme directly\n        let url = Url::parse(output_file_path).map_err(|e| {\n            DataFusionError::Execution(format!(\"Failed to parse URL '{}': {}\", output_file_path, e))\n        })?;\n\n        if is_hdfs_scheme(&url, object_store_options) {\n            #[cfg(feature = \"hdfs-opendal\")]\n            {\n                // Use prepare_object_store_with_configs to create and register the object store\n                let (_object_store_url, object_store_path) = prepare_object_store_with_configs(\n                    _runtime_env,\n                    output_file_path.to_string(),\n                    object_store_options,\n                )\n                .map_err(|e| {\n                    DataFusionError::Execution(format!(\n                        \"Failed to prepare object store for '{}': {}\",\n                        output_file_path, e\n                    ))\n                })?;\n\n                // For remote storage (HDFS, S3), write to an in-memory buffer\n                let buffer = Vec::new();\n                let cursor = Cursor::new(buffer);\n                let arrow_parquet_buffer_writer = ArrowWriter::try_new(cursor, schema, Some(props))\n                    .map_err(|e| {\n                        DataFusionError::Execution(format!(\n                            \"Failed to create Arrow writer for in-memory buffer: {}\",\n                            e\n                        ))\n                    })?;\n\n                // Create HDFS operator with configuration options using the helper function\n                let op = create_hdfs_operator(&url).map_err(|e| {\n                    DataFusionError::Execution(format!(\n                        \"Failed to create HDFS operator for '{}': {}\",\n                        output_file_path, e\n                    ))\n                })?;\n\n                // HDFS writer will be created lazily on first write\n                // Use the path from prepare_object_store_with_configs\n                Ok(ParquetWriter::Remote(\n                    arrow_parquet_buffer_writer,\n                    None,\n                    op,\n                    object_store_path.to_string(),\n                ))\n            }\n            #[cfg(not(feature = \"hdfs-opendal\"))]\n            {\n                Err(DataFusionError::Execution(\n                    \"HDFS support is not enabled. 
Rebuild with the 'hdfs-opendal' feature.\".into(),\n                ))\n            }\n        } else if output_file_path.starts_with(\"file://\")\n            || output_file_path.starts_with(\"file:\")\n            || !output_file_path.contains(\"://\")\n        {\n            // Local file system\n            {\n                // For a local file system, write directly to file\n                // Strip file:// or file: prefix if present\n                let local_path = output_file_path\n                    .strip_prefix(\"file://\")\n                    .or_else(|| output_file_path.strip_prefix(\"file:\"))\n                    .unwrap_or(output_file_path);\n\n                // Extract the parent directory from the file path\n                let output_dir = std::path::Path::new(local_path).parent().ok_or_else(|| {\n                    DataFusionError::Execution(format!(\n                        \"Failed to extract parent directory from path '{}'\",\n                        local_path\n                    ))\n                })?;\n\n                // Create the parent directory if it doesn't exist\n                std::fs::create_dir_all(output_dir).map_err(|e| {\n                    DataFusionError::Execution(format!(\n                        \"Failed to create output directory '{}': {}\",\n                        output_dir.display(),\n                        e\n                    ))\n                })?;\n\n                let file = File::create(local_path).map_err(|e| {\n                    DataFusionError::Execution(format!(\n                        \"Failed to create output file '{}': {}\",\n                        local_path, e\n                    ))\n                })?;\n\n                let writer = ArrowWriter::try_new(file, schema, Some(props)).map_err(|e| {\n                    DataFusionError::Execution(format!(\"Failed to create local file writer: {}\", e))\n                })?;\n                Ok(ParquetWriter::LocalFile(writer))\n            }\n        } else {\n            // Unsupported storage scheme\n            Err(DataFusionError::Execution(format!(\n                \"Unsupported storage scheme in path: {}\",\n                output_file_path\n            )))\n        }\n    }\n}\n\nimpl DisplayAs for ParquetWriterExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut Formatter) -> fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(\n                    f,\n                    \"ParquetWriterExec: path={}, compression={:?}\",\n                    self.output_path, self.compression\n                )\n            }\n            DisplayFormatType::TreeRender => unimplemented!(),\n        }\n    }\n}\n\n#[async_trait]\nimpl ExecutionPlan for ParquetWriterExec {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"ParquetWriterExec\"\n    }\n\n    fn metrics(&self) -> Option<MetricsSet> {\n        Some(self.metrics.clone_inner())\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.cache\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::new(Schema::empty())\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        vec![&self.input]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        match children.len() {\n            1 => Ok(Arc::new(ParquetWriterExec::try_new(\n            
    Arc::clone(&children[0]),\n                self.output_path.clone(),\n                self.work_dir.clone(),\n                self.job_id.clone(),\n                self.task_attempt_id,\n                self.compression.clone(),\n                self.partition_id,\n                self.column_names.clone(),\n                self.object_store_options.clone(),\n            )?)),\n            _ => Err(DataFusionError::Internal(\n                \"ParquetWriterExec requires exactly one child\".to_string(),\n            )),\n        }\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        use datafusion::physical_plan::metrics::MetricBuilder;\n\n        // Create metrics for tracking write statistics\n        let files_written = MetricBuilder::new(&self.metrics).counter(\"files_written\", partition);\n        let bytes_written = MetricBuilder::new(&self.metrics).counter(\"bytes_written\", partition);\n        let rows_written = MetricBuilder::new(&self.metrics).counter(\"rows_written\", partition);\n\n        let runtime_env = context.runtime_env();\n        let input = self.input.execute(partition, context)?;\n        let input_schema = self.input.schema();\n        let work_dir = self.work_dir.clone();\n        let task_attempt_id = self.task_attempt_id;\n        let compression = self.compression_to_parquet()?;\n        let column_names = self.column_names.clone();\n\n        assert_eq!(input_schema.fields().len(), column_names.len());\n\n        // Replace the generic column names (col_0, col_1, etc.) with the actual names\n        let fields: Vec<_> = input_schema\n            .fields()\n            .iter()\n            .enumerate()\n            .map(|(i, field)| Arc::new(field.as_ref().clone().with_name(&column_names[i])))\n            .collect();\n        let output_schema = Arc::new(arrow::datatypes::Schema::new(fields));\n\n        // Generate part file name for this partition\n        // If using FileCommitProtocol (work_dir is set), include task_attempt_id in the filename\n        let part_file = if let Some(attempt_id) = task_attempt_id {\n            format!(\n                \"{}/part-{:05}-{:05}.parquet\",\n                work_dir, self.partition_id, attempt_id\n            )\n        } else {\n            format!(\"{}/part-{:05}.parquet\", work_dir, self.partition_id)\n        };\n\n        // Configure writer properties\n        let props = WriterProperties::builder()\n            .set_compression(compression)\n            .build();\n\n        let object_store_options = self.object_store_options.clone();\n        let mut writer = Self::create_arrow_writer(\n            &part_file,\n            Arc::clone(&output_schema),\n            props,\n            runtime_env,\n            &object_store_options,\n        )?;\n\n        // Clone schema for use in async closure\n        let schema_for_write = Arc::clone(&output_schema);\n\n        // Write batches\n        let write_task = async move {\n            let mut stream = input;\n            let mut total_rows = 0i64;\n\n            while let Some(batch_result) = stream.try_next().await.transpose() {\n                let batch = batch_result?;\n\n                // Track row count\n                total_rows += batch.num_rows() as i64;\n\n                // Rename columns in the batch to match output schema\n                let renamed_batch = if !column_names.is_empty() {\n                    
RecordBatch::try_new(Arc::clone(&schema_for_write), batch.columns().to_vec())\n                        .map_err(|e| {\n                            DataFusionError::Execution(format!(\n                                \"Failed to rename batch columns: {}\",\n                                e\n                            ))\n                        })?\n                } else {\n                    batch\n                };\n\n                writer.write(&renamed_batch).await.map_err(|e| {\n                    DataFusionError::Execution(format!(\"Failed to write batch: {}\", e))\n                })?;\n            }\n\n            writer.close().await.map_err(|e| {\n                DataFusionError::Execution(format!(\"Failed to close writer: {}\", e))\n            })?;\n\n            // Get file size - strip file:// prefix if present for local filesystem access\n            // (for remote files the metadata lookup fails and the size is reported as 0)\n            let local_path = part_file\n                .strip_prefix(\"file://\")\n                .or_else(|| part_file.strip_prefix(\"file:\"))\n                .unwrap_or(&part_file);\n            let file_size = std::fs::metadata(local_path)\n                .map(|m| m.len() as i64)\n                .unwrap_or(0);\n\n            // Update metrics with write statistics\n            files_written.add(1);\n            bytes_written.add(file_size as usize);\n            rows_written.add(total_rows as usize);\n\n            // Log metadata for debugging\n            eprintln!(\n                \"Wrote Parquet file: path={}, size={}, rows={}\",\n                part_file, file_size, total_rows\n            );\n\n            // Return empty stream to indicate completion\n            Ok::<_, DataFusionError>(futures::stream::empty())\n        };\n\n        // Execute the write task and create a stream that does not return any batches\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            self.schema(),\n            futures::stream::once(write_task).try_flatten(),\n        )))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{Int32Array, StringArray};\n    use arrow::datatypes::{DataType, Field, Schema};\n    use std::sync::Arc;\n\n    /// Helper function to create a test RecordBatch with 1000 rows of (int, string) data,\n    /// e.g. batch_id 1 -> rows 0..1000, batch_id 2 -> rows 1000..2000\n    #[allow(dead_code)]\n    fn create_test_record_batch(batch_id: i32) -> Result<RecordBatch> {\n        assert!(batch_id > 0, \"batch_id must be greater than 0\");\n        let num_rows = batch_id * 1000;\n\n        let int_array = Int32Array::from_iter_values(((batch_id - 1) * 1000)..num_rows);\n\n        let string_values: Vec<String> = (((batch_id - 1) * 1000)..num_rows)\n            .map(|i| format!(\"value_{}\", i))\n            .collect();\n        let string_array = StringArray::from(string_values);\n\n        // Define schema\n        let schema = Arc::new(Schema::new(vec![\n            Field::new(\"id\", DataType::Int32, false),\n            Field::new(\"name\", DataType::Utf8, false),\n        ]));\n\n        // Create RecordBatch\n        RecordBatch::try_new(schema, vec![Arc::new(int_array), Arc::new(string_array)])\n            .map_err(|e| DataFusionError::ArrowError(Box::new(e), None))\n    }\n\n    #[tokio::test]\n    #[cfg(feature = \"hdfs-opendal\")]\n    #[ignore = \"This test requires a running HDFS cluster\"]\n    async fn test_write_to_hdfs_sync() -> Result<()> {\n        use opendal::services::Hdfs;\n        use opendal::Operator;\n\n        // Configure HDFS connection\n        let namenode = 
\"hdfs://namenode:9000\";\n        let output_path = \"/user/test_write/data.parquet\";\n\n        // Create OpenDAL HDFS operator\n        let builder = Hdfs::default().name_node(namenode);\n        let op = Operator::new(builder)\n            .map_err(|e| {\n                DataFusionError::Execution(format!(\"Failed to create HDFS operator: {}\", e))\n            })?\n            .finish();\n\n        let mut hdfs_writer = op.writer(output_path).await.map_err(|e| {\n            DataFusionError::Execution(format!(\"Failed to create HDFS writer: {}\", e))\n        })?;\n\n        let mut buffer = Cursor::new(Vec::new());\n        let mut writer =\n            ArrowWriter::try_new(&mut buffer, create_test_record_batch(1)?.schema(), None)?;\n\n        for i in 1..=5 {\n            let record_batch = create_test_record_batch(i)?;\n\n            writer.write(&record_batch)?;\n\n            println!(\n                \"Successfully wrote 1000 rows to HDFS at {}{}\",\n                namenode, output_path\n            );\n        }\n\n        writer.close()?;\n\n        hdfs_writer.write(buffer.into_inner()).await.map_err(|e| {\n            DataFusionError::Execution(format!(\"Failed to write with HDFS writer: {}\", e))\n        })?;\n\n        hdfs_writer.close().await.map_err(|e| {\n            DataFusionError::Execution(format!(\"Failed to close HDFS writer: {}\", e))\n        })?;\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    #[cfg(feature = \"hdfs-opendal\")]\n    #[ignore = \"This test requires a running HDFS cluster\"]\n    async fn test_write_to_hdfs_streaming() -> Result<()> {\n        use opendal::services::Hdfs;\n        use opendal::Operator;\n\n        // Configure HDFS connection\n        let namenode = \"hdfs://namenode:9000\";\n        let output_path = \"/user/test_write_streaming/data.parquet\";\n\n        // Create OpenDAL HDFS operator\n        let builder = Hdfs::default().name_node(namenode);\n        let op = Operator::new(builder)\n            .map_err(|e| {\n                DataFusionError::Execution(format!(\"Failed to create HDFS operator: {}\", e))\n            })?\n            .finish();\n\n        // Create a single HDFS writer for the entire file\n        let mut hdfs_writer = op.writer(output_path).await.map_err(|e| {\n            DataFusionError::Execution(format!(\"Failed to create HDFS writer: {}\", e))\n        })?;\n\n        // Create a single ArrowWriter that will be used for all batches\n        let buffer = Cursor::new(Vec::new());\n        let mut writer = ArrowWriter::try_new(buffer, create_test_record_batch(1)?.schema(), None)?;\n\n        // Write each batch and upload to HDFS immediately (streaming approach)\n        for i in 1..=5 {\n            let record_batch = create_test_record_batch(i)?;\n\n            // Write the batch to the parquet writer\n            writer.write(&record_batch)?;\n\n            // Flush the writer to ensure data is written to the buffer\n            writer.flush()?;\n\n            // Get the current buffer content through the writer\n            let cursor = writer.inner_mut();\n            let current_data = cursor.get_ref().clone();\n\n            // Write the accumulated data to HDFS\n            hdfs_writer.write(current_data).await.map_err(|e| {\n                DataFusionError::Execution(format!(\"Failed to write batch {} to HDFS: {}\", i, e))\n            })?;\n\n            // Clear the buffer for the next iteration\n            cursor.get_mut().clear();\n            cursor.set_position(0);\n\n            
println!(\n                \"Successfully streamed batch {} (1000 rows) to HDFS at {}{}\",\n                i, namenode, output_path\n            );\n        }\n\n        // Close the ArrowWriter to finalize the parquet file\n        let cursor = writer.into_inner()?;\n\n        // Write any remaining data from closing the writer\n        let final_data = cursor.into_inner();\n        if !final_data.is_empty() {\n            hdfs_writer.write(final_data).await.map_err(|e| {\n                DataFusionError::Execution(format!(\"Failed to write final data to HDFS: {}\", e))\n            })?;\n        }\n\n        // Close the HDFS writer\n        hdfs_writer.close().await.map_err(|e| {\n            DataFusionError::Execution(format!(\"Failed to close HDFS writer: {}\", e))\n        })?;\n\n        println!(\n            \"Successfully completed streaming write of 5 batches (5000 total rows) to HDFS at {}{}\",\n            namenode, output_path\n        );\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    #[cfg(feature = \"hdfs-opendal\")]\n    #[ignore = \"This test requires a running HDFS cluster\"]\n    async fn test_parquet_writer_streaming() -> Result<()> {\n        // Configure output path\n        let output_path = \"/user/test_parquet_writer_streaming/data.parquet\";\n\n        // Configure writer properties\n        let props = WriterProperties::builder()\n            .set_compression(Compression::UNCOMPRESSED)\n            .build();\n\n        // Create ParquetWriter using the create_arrow_writer method\n        // Use full HDFS URL format\n        let full_output_path = format!(\"hdfs://namenode:9000{}\", output_path);\n        let session_ctx = datafusion::prelude::SessionContext::new();\n        let runtime_env = session_ctx.runtime_env();\n        let mut writer = ParquetWriterExec::create_arrow_writer(\n            &full_output_path,\n            create_test_record_batch(1)?.schema(),\n            props,\n            runtime_env,\n            &HashMap::new(),\n        )?;\n\n        // Write 5 batches in a loop\n        for i in 1..=5 {\n            let record_batch = create_test_record_batch(i)?;\n\n            writer.write(&record_batch).await.map_err(|e| {\n                DataFusionError::Execution(format!(\"Failed to write batch {}: {}\", i, e))\n            })?;\n\n            println!(\n                \"Successfully wrote batch {} (1000 rows) using ParquetWriter\",\n                i\n            );\n        }\n\n        // Close the writer\n        writer\n            .close()\n            .await\n            .map_err(|e| DataFusionError::Execution(format!(\"Failed to close writer: {}\", e)))?;\n\n        println!(\n            \"Successfully completed ParquetWriter streaming write of 5 batches (5000 total rows) to HDFS at {}\",\n            output_path\n        );\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    #[cfg(feature = \"hdfs-opendal\")]\n    #[ignore = \"This test requires a running HDFS cluster\"]\n    async fn test_parquet_writer_exec_with_memory_input() -> Result<()> {\n        use datafusion::datasource::memory::MemorySourceConfig;\n        use datafusion::datasource::source::DataSourceExec;\n        use datafusion::prelude::SessionContext;\n\n        // Create 5 batches for the DataSourceExec input\n        let mut batches = Vec::new();\n        for i in 1..=5 {\n            batches.push(create_test_record_batch(i)?);\n        }\n\n        // Get schema from the first batch\n        let schema = batches[0].schema();\n\n        // Create 
DataSourceExec with MemorySourceConfig containing the 5 batches as a single partition\n        let partitions = vec![batches];\n        let memory_source_config = MemorySourceConfig::try_new(&partitions, schema, None)?;\n        let memory_exec = Arc::new(DataSourceExec::new(Arc::new(memory_source_config)));\n\n        // Create ParquetWriterExec with DataSourceExec as input\n        let output_path = \"unused\".to_string();\n        let work_dir = \"hdfs://namenode:9000/user/test_parquet_writer_exec\".to_string();\n        let column_names = vec![\"id\".to_string(), \"name\".to_string()];\n\n        let parquet_writer = ParquetWriterExec::try_new(\n            memory_exec,\n            output_path,\n            work_dir,\n            None,      // job_id\n            Some(123), // task_attempt_id\n            CompressionCodec::None,\n            0, // partition_id\n            column_names,\n            HashMap::new(), // object_store_options\n        )?;\n\n        // Create a session context and execute the plan\n        let session_ctx = SessionContext::new();\n        let task_ctx = session_ctx.task_ctx();\n\n        // Execute partition 0\n        let mut stream = parquet_writer.execute(0, task_ctx)?;\n\n        // Consume the stream (this triggers the write)\n        while let Some(batch_result) = stream.try_next().await? {\n            // The stream should be empty as ParquetWriterExec returns empty batches\n            assert_eq!(batch_result.num_rows(), 0);\n        }\n\n        println!(\n            \"Successfully completed ParquetWriterExec test with DataSourceExec input (5 batches, 5000 total rows)\"\n        );\n\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/operators/projection.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Projection operator builder\n\nuse std::sync::Arc;\n\nuse datafusion::physical_plan::projection::ProjectionExec;\nuse datafusion_comet_proto::spark_operator::Operator;\nuse jni::objects::{Global, JObject};\n\nuse crate::{\n    execution::{\n        planner::{operator_registry::OperatorBuilder, PhysicalPlanner, PlanCreationResult},\n        spark_plan::SparkPlan,\n    },\n    extract_op,\n};\n\n/// Builder for Projection operators\npub struct ProjectionBuilder;\n\nimpl OperatorBuilder for ProjectionBuilder {\n    fn build(\n        &self,\n        spark_plan: &Operator,\n        inputs: &mut Vec<Arc<Global<JObject<'static>>>>,\n        partition_count: usize,\n        planner: &PhysicalPlanner,\n    ) -> PlanCreationResult {\n        let project = extract_op!(spark_plan, Projection);\n        let children = &spark_plan.children;\n\n        assert_eq!(children.len(), 1);\n        let (scans, shuffle_scans, child) =\n            planner.create_plan(&children[0], inputs, partition_count)?;\n\n        // Create projection expressions\n        let exprs: Result<Vec<_>, _> = project\n            .project_list\n            .iter()\n            .enumerate()\n            .map(|(idx, expr)| {\n                planner\n                    .create_expr(expr, child.schema())\n                    .map(|r| (r, format!(\"col_{idx}\")))\n            })\n            .collect();\n\n        let projection = Arc::new(ProjectionExec::try_new(\n            exprs?,\n            Arc::clone(&child.native_plan),\n        )?);\n\n        Ok((\n            scans,\n            shuffle_scans,\n            Arc::new(SparkPlan::new(spark_plan.plan_id, projection, vec![child])),\n        ))\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/operators/scan.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::execution::operators::{copy_array, copy_or_unpack_array, CopyMode};\nuse crate::{\n    errors::CometError,\n    execution::{\n        operators::ExecutionError, planner::TEST_EXEC_CONTEXT_ID, utils::SparkArrowConvert,\n    },\n    jvm_bridge::JVMClasses,\n};\nuse arrow::array::{make_array, ArrayData, ArrayRef, RecordBatch, RecordBatchOptions};\nuse arrow::compute::{cast_with_options, take, CastOptions};\nuse arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse arrow::ffi::FFI_ArrowArray;\nuse arrow::ffi::FFI_ArrowSchema;\nuse datafusion::common::{arrow_datafusion_err, DataFusionError, Result as DataFusionResult};\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType};\nuse datafusion::physical_plan::metrics::{\n    BaselineMetrics, ExecutionPlanMetricsSet, MetricBuilder, MetricsSet, Time,\n};\nuse datafusion::{\n    execution::TaskContext,\n    physical_expr::*,\n    physical_plan::{ExecutionPlan, *},\n};\nuse futures::Stream;\nuse itertools::Itertools;\nuse jni::objects::{Global, JObject, JValue};\nuse std::rc::Rc;\nuse std::{\n    any::Any,\n    pin::Pin,\n    sync::{Arc, Mutex},\n    task::{Context, Poll},\n};\n\n/// ScanExec reads batches of data from Spark via JNI. The source of the scan could be a file\n/// scan or the result of reading a broadcast or shuffle exchange. ScanExec isn't invoked\n/// until the data is already available in the JVM. When CometExecIterator invokes\n/// Native.executePlan, it passes in the memory addresses of the input batches.\n#[derive(Debug, Clone)]\npub struct ScanExec {\n    /// The ID of the execution context that owns this subquery. We use this ID to retrieve the JVM\n    /// environment `JNIEnv` from the execution context.\n    pub exec_context_id: i64,\n    /// The input source of scan node. It is a global reference of JVM `CometBatchIterator` object.\n    pub input_source: Option<Arc<Global<JObject<'static>>>>,\n    /// A description of the input source for informational purposes\n    pub input_source_description: String,\n    /// The data types of columns of the input batch. Converted from Spark schema.\n    pub data_types: Vec<DataType>,\n    /// Schema of first batch\n    pub schema: SchemaRef,\n    /// The input batch of input data. 
Used to determine the schema of the input data.\n    /// It is also used in unit test to mock the input data from JVM.\n    pub batch: Arc<Mutex<Option<InputBatch>>>,\n    /// Cache of expensive-to-compute plan properties\n    cache: Arc<PlanProperties>,\n    /// Metrics collector\n    metrics: ExecutionPlanMetricsSet,\n    /// Baseline metrics\n    baseline_metrics: BaselineMetrics,\n    /// Whether native code can assume ownership of batches that it receives\n    arrow_ffi_safe: bool,\n}\n\nimpl ScanExec {\n    pub fn new(\n        exec_context_id: i64,\n        input_source: Option<Arc<Global<JObject<'static>>>>,\n        input_source_description: &str,\n        data_types: Vec<DataType>,\n        arrow_ffi_safe: bool,\n    ) -> Result<Self, CometError> {\n        let metrics_set = ExecutionPlanMetricsSet::default();\n        let baseline_metrics = BaselineMetrics::new(&metrics_set, 0);\n\n        // Build schema directly from data types since get_next now always unpacks dictionaries\n        let schema = schema_from_data_types(&data_types);\n\n        let cache = Arc::new(PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&schema)),\n            // The partitioning is not important because we are not using DataFusion's\n            // query planner or optimizer\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        ));\n\n        Ok(Self {\n            exec_context_id,\n            input_source,\n            input_source_description: input_source_description.to_string(),\n            data_types,\n            batch: Arc::new(Mutex::new(None)),\n            cache,\n            metrics: metrics_set,\n            baseline_metrics,\n            schema,\n            arrow_ffi_safe,\n        })\n    }\n\n    /// Unpack all dictionary types because some DataFusion operators\n    /// and expressions do not support dictionary types\n    fn unpack_dictionary_type(dt: &DataType) -> DataType {\n        if let DataType::Dictionary(_, vt) = dt {\n            // return the underlying data type\n            return vt.as_ref().clone();\n        }\n        dt.clone()\n    }\n\n    /// Feeds input batch into this `Scan`. Only used in unit test.\n    pub fn set_input_batch(&mut self, input: InputBatch) {\n        *self.batch.try_lock().unwrap() = Some(input);\n    }\n\n    /// Pull next input batch from JVM.\n    pub fn get_next_batch(&mut self) -> Result<(), CometError> {\n        if self.input_source.is_none() {\n            // This is a unit test. We don't need to call JNI.\n            return Ok(());\n        }\n        let mut timer = self.baseline_metrics.elapsed_compute().timer();\n\n        let mut current_batch = self.batch.try_lock().unwrap();\n        if current_batch.is_none() {\n            let next_batch = ScanExec::get_next(\n                self.exec_context_id,\n                self.input_source.as_ref().unwrap().as_obj(),\n                self.data_types.len(),\n                self.arrow_ffi_safe,\n            )?;\n            *current_batch = Some(next_batch);\n        }\n\n        timer.stop();\n\n        Ok(())\n    }\n\n    /// Invokes JNI call to get next batch.\n    fn get_next(\n        exec_context_id: i64,\n        iter: &JObject,\n        num_cols: usize,\n        arrow_ffi_safe: bool,\n    ) -> Result<InputBatch, CometError> {\n        if exec_context_id == TEST_EXEC_CONTEXT_ID {\n            // This is a unit test. 
We don't need to call JNI.\n            return Ok(InputBatch::EOF);\n        }\n\n        if iter.is_null() {\n            return Err(CometError::from(ExecutionError::GeneralError(format!(\n                \"Null batch iterator object. Plan id: {exec_context_id}\"\n            ))));\n        }\n\n        JVMClasses::with_env(|env| {\n            let num_rows: i32 = unsafe {\n                jni_call!(env,\n            comet_batch_iterator(iter).has_next() -> i32)?\n            };\n\n            if num_rows == -1 {\n                return Ok(InputBatch::EOF);\n            }\n\n            // Check for selection vectors and get selection indices if needed from\n            // JVM via FFI\n            // Selection vectors can be provided by, for instance, Iceberg to\n            // remove rows that have been deleted.\n            let selection_indices_arrays = Self::get_selection_indices(env, iter, num_cols)?;\n\n            // fetch batch data from JVM via FFI\n            let (num_rows, array_addrs, schema_addrs) =\n                Self::allocate_and_fetch_batch(env, iter, num_cols)?;\n\n            let mut inputs: Vec<ArrayRef> = Vec::with_capacity(num_cols);\n\n            // Process each column\n            for i in 0..num_cols {\n                let array_ptr = array_addrs[i];\n                let schema_ptr = schema_addrs[i];\n                let array_data = ArrayData::from_spark((array_ptr, schema_ptr))?;\n\n                // TODO: validate array input data\n                // array_data.validate_full()?;\n\n                let array = make_array(array_data);\n\n                // Apply selection if selection vectors exist (applies to all columns)\n                let array = if let Some(ref selection_arrays) = selection_indices_arrays {\n                    let indices = &selection_arrays[i];\n                    // Apply the selection using Arrow's take kernel\n                    match take(&*array, &**indices, None) {\n                        Ok(selected_array) => selected_array,\n                        Err(e) => {\n                            return Err(CometError::from(ExecutionError::ArrowError(format!(\n                                \"Failed to apply selection for column {i}: {e}\",\n                            ))));\n                        }\n                    }\n                } else {\n                    array\n                };\n\n                let array = if arrow_ffi_safe {\n                    // ownership of this array has been transferred to native\n                    // but we still need to unpack dictionary arrays\n                    copy_or_unpack_array(&array, &CopyMode::UnpackOrClone)?\n                } else {\n                    // it is necessary to copy the array because the contents may be\n                    // overwritten on the JVM side in the future\n                    copy_array(&array)\n                };\n\n                inputs.push(array);\n\n                // Reconstruct and drop the Rcs to release the FFI structs and avoid a memory leak\n                unsafe {\n                    Rc::from_raw(array_ptr as *const FFI_ArrowArray);\n                    Rc::from_raw(schema_ptr as *const FFI_ArrowSchema);\n                }\n            }\n\n            // If selection was applied, determine the actual row count from the selected arrays\n            let actual_num_rows = if let Some(ref selection_arrays) = selection_indices_arrays {\n                if !selection_arrays.is_empty() {\n                    // Use the length of the first selection array as the actual row 
count\n                    selection_arrays[0].len()\n                } else {\n                    num_rows as usize\n                }\n            } else {\n                num_rows as usize\n            };\n\n            Ok(InputBatch::new(inputs, Some(actual_num_rows)))\n        })\n    }\n\n    /// Allocates Arrow FFI structures and calls JNI to get the next batch data.\n    /// Returns the number of rows and the allocated array/schema addresses.\n    fn allocate_and_fetch_batch(\n        env: &mut jni::Env,\n        iter: &JObject,\n        num_cols: usize,\n    ) -> Result<(i32, Vec<i64>, Vec<i64>), CometError> {\n        let mut array_addrs = Vec::with_capacity(num_cols);\n        let mut schema_addrs = Vec::with_capacity(num_cols);\n\n        for _ in 0..num_cols {\n            let arrow_array = Rc::new(FFI_ArrowArray::empty());\n            let arrow_schema = Rc::new(FFI_ArrowSchema::empty());\n            let (array_ptr, schema_ptr) = (\n                Rc::into_raw(arrow_array) as i64,\n                Rc::into_raw(arrow_schema) as i64,\n            );\n\n            array_addrs.push(array_ptr);\n            schema_addrs.push(schema_ptr);\n        }\n\n        // Prepare the Java array parameters\n        let long_array_addrs = env.new_long_array(num_cols)?;\n        let long_schema_addrs = env.new_long_array(num_cols)?;\n\n        long_array_addrs.set_region(env, 0, &array_addrs)?;\n        long_schema_addrs.set_region(env, 0, &schema_addrs)?;\n\n        let array_obj = JObject::from(long_array_addrs);\n        let schema_obj = JObject::from(long_schema_addrs);\n\n        let array_obj = JValue::Object(array_obj.as_ref());\n        let schema_obj = JValue::Object(schema_obj.as_ref());\n\n        let num_rows: i32 = unsafe {\n            jni_call!(env,\n        comet_batch_iterator(iter).next(array_obj, schema_obj) -> i32)?\n        };\n\n        // we already checked for end of results on call to has_next() so should always\n        // have a valid row count when calling next()\n        assert!(num_rows != -1);\n\n        Ok((num_rows, array_addrs, schema_addrs))\n    }\n\n    /// Checks for selection vectors and exports selection indices if needed.\n    /// Returns selection arrays if they exist (applies to all columns).\n    fn get_selection_indices(\n        env: &mut jni::Env,\n        iter: &JObject,\n        num_cols: usize,\n    ) -> Result<Option<Vec<ArrayRef>>, CometError> {\n        // Check if all columns have selection vectors\n        let has_selection_vectors_result: jni::sys::jboolean = unsafe {\n            jni_call!(env,\n                comet_batch_iterator(iter).has_selection_vectors() -> jni::sys::jboolean)?\n        };\n        // jboolean is a u8 in JNI (JNI_TRUE == 1), so convert it to a Rust bool\n        let has_selection_vectors = has_selection_vectors_result != 0;\n\n        let selection_indices_arrays = if has_selection_vectors {\n            // Allocate arrays for selection indices export (one per column)\n            let mut indices_array_addrs = Vec::with_capacity(num_cols);\n            let mut indices_schema_addrs = Vec::with_capacity(num_cols);\n\n            for _ in 0..num_cols {\n                let arrow_array = Rc::new(FFI_ArrowArray::empty());\n                let arrow_schema = Rc::new(FFI_ArrowSchema::empty());\n                indices_array_addrs.push(Rc::into_raw(arrow_array) as i64);\n                indices_schema_addrs.push(Rc::into_raw(arrow_schema) as i64);\n            }\n\n            // Prepare JNI arrays for the export call\n            let indices_array_obj = env.new_long_array(num_cols)?;\n            let 
indices_schema_obj = env.new_long_array(num_cols)?;\n            indices_array_obj.set_region(env, 0, &indices_array_addrs)?;\n            indices_schema_obj.set_region(env, 0, &indices_schema_addrs)?;\n\n            // Export selection indices from JVM\n            let _exported_count: i32 = unsafe {\n                jni_call!(env,\n                    comet_batch_iterator(iter).export_selection_indices(\n                        JValue::Object(JObject::from(indices_array_obj).as_ref()),\n                        JValue::Object(JObject::from(indices_schema_obj).as_ref())\n                    ) -> i32)?\n            };\n\n            // Convert to ArrayRef for easier handling\n            let mut selection_arrays = Vec::with_capacity(num_cols);\n            for i in 0..num_cols {\n                let array_data =\n                    ArrayData::from_spark((indices_array_addrs[i], indices_schema_addrs[i]))?;\n                selection_arrays.push(make_array(array_data));\n\n                // Drop the references to the FFI arrays\n                unsafe {\n                    Rc::from_raw(indices_array_addrs[i] as *const FFI_ArrowArray);\n                    Rc::from_raw(indices_schema_addrs[i] as *const FFI_ArrowSchema);\n                }\n            }\n\n            Some(selection_arrays)\n        } else {\n            None\n        };\n\n        Ok(selection_indices_arrays)\n    }\n}\n\nfn schema_from_data_types(data_types: &[DataType]) -> SchemaRef {\n    let fields = data_types\n        .iter()\n        .enumerate()\n        .map(|(idx, dt)| {\n            let datatype = ScanExec::unpack_dictionary_type(dt);\n            // We don't use the field name. Put a placeholder.\n            Field::new(format!(\"col_{idx}\"), datatype, true)\n        })\n        .collect::<Vec<Field>>();\n\n    Arc::new(Schema::new(fields))\n}\n\nimpl ExecutionPlan for ScanExec {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        vec![]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        _: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> datafusion::common::Result<Arc<dyn ExecutionPlan>> {\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _: Arc<TaskContext>,\n    ) -> datafusion::common::Result<SendableRecordBatchStream> {\n        Ok(Box::pin(ScanStream::new(\n            self.clone(),\n            self.schema(),\n            partition,\n            self.baseline_metrics.clone(),\n        )))\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.cache\n    }\n\n    fn name(&self) -> &str {\n        \"ScanExec\"\n    }\n\n    fn metrics(&self) -> Option<MetricsSet> {\n        Some(self.metrics.clone_inner())\n    }\n}\n\nimpl DisplayAs for ScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(f, \"ScanExec: source=[{}], \", self.input_source_description)?;\n                let fields: Vec<String> = self\n                    .data_types\n                    .iter()\n                    .enumerate()\n                    .map(|(idx, dt)| format!(\"col_{idx:}: {dt:}\"))\n                    .collect();\n                write!(f, \"schema=[{}]\", fields.join(\", \"))?;\n            }\n            
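// TreeRender is not supported by Comet operators\n            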
DisplayFormatType::TreeRender => unimplemented!(),\n        }\n        Ok(())\n    }\n}\n\n/// An async stream that feeds input batches from `Scan` into the DataFusion physical plan.\nstruct ScanStream<'a> {\n    /// The `Scan` node producing input batches\n    scan: ScanExec,\n    /// Schema representing the data\n    schema: SchemaRef,\n    /// Metrics\n    baseline_metrics: BaselineMetrics,\n    /// Cast options\n    cast_options: CastOptions<'a>,\n    /// Elapsed time for casting columns to different data types during scan\n    cast_time: Time,\n}\n\nimpl ScanStream<'_> {\n    pub fn new(\n        scan: ScanExec,\n        schema: SchemaRef,\n        partition: usize,\n        baseline_metrics: BaselineMetrics,\n    ) -> Self {\n        let cast_time = MetricBuilder::new(&scan.metrics).subset_time(\"cast_time\", partition);\n        Self {\n            scan,\n            schema,\n            baseline_metrics,\n            cast_options: CastOptions::default(),\n            cast_time,\n        }\n    }\n\n    fn build_record_batch(\n        &self,\n        columns: &[ArrayRef],\n        num_rows: usize,\n    ) -> DataFusionResult<RecordBatch> {\n        let schema_fields = self.schema.fields();\n        assert_eq!(columns.len(), schema_fields.len());\n\n        // Cast dictionary-encoded primitive arrays to regular arrays and cast\n        // Utf8/LargeUtf8/Binary arrays to dictionary-encoded if the schema is\n        // defined as dictionary-encoded and the data in this batch is not\n        // dictionary-encoded (could also be the other way around)\n        let new_columns: Vec<ArrayRef> = columns\n            .iter()\n            .zip(schema_fields.iter())\n            .map(|(column, f)| {\n                if column.data_type() != f.data_type() {\n                    let mut timer = self.cast_time.timer();\n                    let cast_array = cast_with_options(column, f.data_type(), &self.cast_options);\n                    timer.stop();\n                    cast_array\n                } else {\n                    Ok(Arc::clone(column))\n                }\n            })\n            .collect::<Result<Vec<_>, _>>()?;\n        let options = RecordBatchOptions::new().with_row_count(Some(num_rows));\n        RecordBatch::try_new_with_options(Arc::clone(&self.schema), new_columns, &options)\n            .map_err(|e| arrow_datafusion_err!(e))\n    }\n}\n\nimpl Stream for ScanStream<'_> {\n    type Item = DataFusionResult<RecordBatch>;\n\n    fn poll_next(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll<Option<Self::Item>> {\n        let mut timer = self.baseline_metrics.elapsed_compute().timer();\n        let mut scan_batch = self.scan.batch.try_lock().unwrap();\n\n        let input_batch = &*scan_batch;\n        let input_batch = if let Some(batch) = input_batch {\n            batch\n        } else {\n            timer.stop();\n            return Poll::Pending;\n        };\n\n        let result = match input_batch {\n            InputBatch::EOF => Poll::Ready(None),\n            InputBatch::Batch(columns, num_rows) => {\n                self.baseline_metrics.record_output(*num_rows);\n                let maybe_batch = self.build_record_batch(columns, *num_rows);\n                Poll::Ready(Some(maybe_batch))\n            }\n        };\n\n        *scan_batch = None;\n\n        timer.stop();\n\n        result\n    }\n}\n\nimpl RecordBatchStream for ScanStream<'_> {\n    /// Get the schema\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    
}\n}\n\n#[derive(Clone, Debug)]\npub enum InputBatch {\n    /// The end of input batches.\n    EOF,\n\n    /// A normal batch with columns and a number of rows.\n    /// It is possible to have a zero-column batch with a non-zero number of rows,\n    /// i.e. when reading an empty schema from a scan.\n    Batch(Vec<ArrayRef>, usize),\n}\n\nimpl InputBatch {\n    /// Constructs an `InputBatch` from columns and an optional number of rows.\n    /// If `num_rows` is `None`, this function will calculate it from the given\n    /// columns.\n    pub fn new(columns: Vec<ArrayRef>, num_rows: Option<usize>) -> Self {\n        let num_rows = num_rows.unwrap_or_else(|| {\n            let lengths = columns.iter().map(|a| a.len()).unique().collect::<Vec<_>>();\n            assert!(lengths.len() <= 1, \"Columns have different lengths.\");\n\n            if lengths.is_empty() {\n                // All are scalar values\n                1\n            } else {\n                lengths[0]\n            }\n        });\n\n        InputBatch::Batch(columns, num_rows)\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/operators/shuffle_scan.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::{\n    errors::CometError,\n    execution::{\n        operators::ExecutionError, planner::TEST_EXEC_CONTEXT_ID, shuffle::ipc::read_ipc_compressed,\n    },\n    jvm_bridge::{jni_call, JVMClasses},\n};\nuse arrow::array::ArrayRef;\nuse arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::common::{arrow_datafusion_err, Result as DataFusionResult};\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType};\nuse datafusion::physical_plan::metrics::{\n    BaselineMetrics, ExecutionPlanMetricsSet, MetricBuilder, MetricsSet, Time,\n};\nuse datafusion::{\n    execution::TaskContext,\n    physical_expr::*,\n    physical_plan::{ExecutionPlan, *},\n};\nuse futures::Stream;\nuse jni::objects::{Global, JByteBuffer, JObject};\nuse std::{\n    any::Any,\n    pin::Pin,\n    sync::{Arc, Mutex},\n    task::{Context, Poll},\n};\n\nuse super::scan::InputBatch;\n\n/// ShuffleScanExec reads compressed shuffle blocks from JVM via JNI and decodes them natively.\n/// Unlike ScanExec which receives Arrow arrays via FFI, ShuffleScanExec receives raw compressed\n/// bytes from CometShuffleBlockIterator and decodes them using read_ipc_compressed().\n#[derive(Debug, Clone)]\npub struct ShuffleScanExec {\n    /// The ID of the execution context that owns this subquery.\n    pub exec_context_id: i64,\n    /// The input source: a global reference to a JVM CometShuffleBlockIterator object.\n    pub input_source: Option<Arc<Global<JObject<'static>>>>,\n    /// The data types of columns in the shuffle output.\n    pub data_types: Vec<DataType>,\n    /// Schema of the shuffle output.\n    pub schema: SchemaRef,\n    /// The current input batch, populated by get_next_batch() before poll_next().\n    pub batch: Arc<Mutex<Option<InputBatch>>>,\n    /// Cache of plan properties.\n    cache: Arc<PlanProperties>,\n    /// Metrics collector.\n    metrics: ExecutionPlanMetricsSet,\n    /// Baseline metrics.\n    baseline_metrics: BaselineMetrics,\n    /// Time spent decoding compressed shuffle blocks.\n    decode_time: Time,\n}\n\nimpl ShuffleScanExec {\n    pub fn new(\n        exec_context_id: i64,\n        input_source: Option<Arc<Global<JObject<'static>>>>,\n        data_types: Vec<DataType>,\n    ) -> Result<Self, CometError> {\n        let metrics_set = ExecutionPlanMetricsSet::default();\n        let baseline_metrics = BaselineMetrics::new(&metrics_set, 0);\n        let decode_time = MetricBuilder::new(&metrics_set).subset_time(\"decode_time\", 0);\n\n        let schema = schema_from_data_types(&data_types);\n\n        let cache = Arc::new(PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&schema)),\n            
Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        ));\n\n        Ok(Self {\n            exec_context_id,\n            input_source,\n            data_types,\n            batch: Arc::new(Mutex::new(None)),\n            cache,\n            metrics: metrics_set,\n            baseline_metrics,\n            schema,\n            decode_time,\n        })\n    }\n\n    /// Feeds input batch into this scan. Only used in unit tests.\n    pub fn set_input_batch(&mut self, input: InputBatch) {\n        *self.batch.try_lock().unwrap() = Some(input);\n    }\n\n    /// Pull next input batch from JVM. Called externally before poll_next()\n    /// because JNI calls cannot happen from within poll_next on tokio threads.\n    pub fn get_next_batch(&mut self) -> Result<(), CometError> {\n        if self.input_source.is_none() {\n            // Unit test mode - no JNI calls needed.\n            return Ok(());\n        }\n        let mut timer = self.baseline_metrics.elapsed_compute().timer();\n\n        let mut current_batch = self.batch.try_lock().unwrap();\n        if current_batch.is_none() {\n            let next_batch = Self::get_next(\n                self.exec_context_id,\n                self.input_source.as_ref().unwrap().as_obj(),\n                &self.data_types,\n                &self.decode_time,\n            )?;\n            *current_batch = Some(next_batch);\n        }\n\n        timer.stop();\n\n        Ok(())\n    }\n\n    /// Invokes JNI calls to get the next compressed shuffle block and decode it.\n    fn get_next(\n        exec_context_id: i64,\n        iter: &JObject,\n        data_types: &[DataType],\n        decode_time: &Time,\n    ) -> Result<InputBatch, CometError> {\n        if exec_context_id == TEST_EXEC_CONTEXT_ID {\n            return Ok(InputBatch::EOF);\n        }\n\n        if iter.is_null() {\n            return Err(CometError::from(ExecutionError::GeneralError(format!(\n                \"Null shuffle block iterator object. 
Plan id: {exec_context_id}\"\n            ))));\n        }\n\n        JVMClasses::with_env(|env| {\n            // has_next() reads the next block and returns its length, or -1 if EOF\n            let block_length: i32 = unsafe {\n                jni_call!(env,\n                    comet_shuffle_block_iterator(iter).has_next() -> i32)?\n            };\n\n            if block_length == -1 {\n                return Ok(InputBatch::EOF);\n            }\n\n            // Get the DirectByteBuffer containing the compressed shuffle block\n            let buffer: JObject = unsafe {\n                jni_call!(env,\n                    comet_shuffle_block_iterator(iter).get_buffer() -> JObject)?\n            };\n\n            let byte_buffer = unsafe { JByteBuffer::from_raw(env, buffer.into_raw()) };\n            let raw_pointer = env.get_direct_buffer_address(&byte_buffer)?;\n            let length = block_length as usize;\n            let slice: &[u8] = unsafe { std::slice::from_raw_parts(raw_pointer, length) };\n\n            // Decode the compressed IPC data\n            let mut timer = decode_time.timer();\n            let batch = read_ipc_compressed(slice)?;\n            timer.stop();\n\n            let num_rows = batch.num_rows();\n\n            // Extract column arrays, unpacking any dictionary-encoded columns.\n            // Native shuffle may dictionary-encode string/binary columns for efficiency,\n            // but downstream DataFusion operators expect the value types declared in the\n            // schema (e.g. Utf8, not Dictionary<Int32, Utf8>).\n            let columns: Vec<ArrayRef> = batch\n                .columns()\n                .iter()\n                .map(|col| unpack_dictionary(col))\n                .collect();\n\n            debug_assert_eq!(\n                columns.len(),\n                data_types.len(),\n                \"Shuffle block column count mismatch: got {} but expected {}\",\n                columns.len(),\n                data_types.len()\n            );\n\n            Ok(InputBatch::new(columns, Some(num_rows)))\n        })\n    }\n}\n\n/// If `array` is dictionary-encoded, cast it to the value type. 
Otherwise return as-is.\nfn unpack_dictionary(array: &ArrayRef) -> ArrayRef {\n    if let DataType::Dictionary(_, value_type) = array.data_type() {\n        arrow::compute::cast(array, value_type.as_ref()).expect(\"failed to unpack dictionary array\")\n    } else {\n        Arc::clone(array)\n    }\n}\n\nfn schema_from_data_types(data_types: &[DataType]) -> SchemaRef {\n    let fields = data_types\n        .iter()\n        .enumerate()\n        .map(|(idx, dt)| Field::new(format!(\"col_{idx}\"), dt.clone(), true))\n        .collect::<Vec<Field>>();\n\n    Arc::new(Schema::new(fields))\n}\n\nimpl ExecutionPlan for ShuffleScanExec {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn schema(&self) -> SchemaRef {\n        Arc::clone(&self.schema)\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        vec![]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        _: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> datafusion::common::Result<Arc<dyn ExecutionPlan>> {\n        Ok(self)\n    }\n\n    fn execute(\n        &self,\n        partition: usize,\n        _: Arc<TaskContext>,\n    ) -> datafusion::common::Result<SendableRecordBatchStream> {\n        Ok(Box::pin(ShuffleScanStream::new(\n            self.clone(),\n            partition,\n            self.baseline_metrics.clone(),\n        )))\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.cache\n    }\n\n    fn name(&self) -> &str {\n        \"ShuffleScanExec\"\n    }\n\n    fn metrics(&self) -> Option<MetricsSet> {\n        Some(self.metrics.clone_inner())\n    }\n}\n\nimpl DisplayAs for ShuffleScanExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                let fields: Vec<String> = self\n                    .data_types\n                    .iter()\n                    .enumerate()\n                    .map(|(idx, dt)| format!(\"col_{idx}: {dt}\"))\n                    .collect();\n                write!(f, \"ShuffleScanExec: schema=[{}]\", fields.join(\", \"))?;\n            }\n            DisplayFormatType::TreeRender => unimplemented!(),\n        }\n        Ok(())\n    }\n}\n\n/// An async stream that feeds decoded shuffle batches into the DataFusion plan.\nstruct ShuffleScanStream {\n    /// The ShuffleScanExec producing input batches.\n    shuffle_scan: ShuffleScanExec,\n    /// Metrics.\n    baseline_metrics: BaselineMetrics,\n}\n\nimpl ShuffleScanStream {\n    pub fn new(\n        shuffle_scan: ShuffleScanExec,\n        _partition: usize,\n        baseline_metrics: BaselineMetrics,\n    ) -> Self {\n        Self {\n            shuffle_scan,\n            baseline_metrics,\n        }\n    }\n}\n\nimpl Stream for ShuffleScanStream {\n    type Item = DataFusionResult<arrow::array::RecordBatch>;\n\n    fn poll_next(self: Pin<&mut Self>, _: &mut Context<'_>) -> Poll<Option<Self::Item>> {\n        let mut timer = self.baseline_metrics.elapsed_compute().timer();\n        let mut scan_batch = self.shuffle_scan.batch.try_lock().unwrap();\n\n        let input_batch = &*scan_batch;\n        let input_batch = if let Some(batch) = input_batch {\n            batch\n        } else {\n            timer.stop();\n            return Poll::Pending;\n        };\n\n        let result = match input_batch {\n            InputBatch::EOF => Poll::Ready(None),\n            InputBatch::Batch(columns, num_rows) => {\n                
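// A batch was staged by get_next_batch(); record its rows and rebuild it\n                // as a RecordBatch. The explicit row count keeps zero-column batches\n                // sized correctly.\n                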
self.baseline_metrics.record_output(*num_rows);\n                let options =\n                    arrow::array::RecordBatchOptions::new().with_row_count(Some(*num_rows));\n                let maybe_batch = arrow::array::RecordBatch::try_new_with_options(\n                    self.shuffle_scan.schema(),\n                    columns.clone(),\n                    &options,\n                )\n                .map_err(|e| arrow_datafusion_err!(e));\n                Poll::Ready(Some(maybe_batch))\n            }\n        };\n\n        *scan_batch = None;\n\n        timer.stop();\n\n        result\n    }\n}\n\nimpl RecordBatchStream for ShuffleScanStream {\n    fn schema(&self) -> SchemaRef {\n        self.shuffle_scan.schema()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::execution::shuffle::{CompressionCodec, ShuffleBlockWriter};\n    use arrow::array::{Int32Array, StringArray};\n    use arrow::datatypes::{DataType, Field, Schema};\n    use arrow::record_batch::RecordBatch;\n    use datafusion::physical_plan::metrics::Time;\n    use std::io::Cursor;\n    use std::sync::Arc;\n\n    use crate::execution::shuffle::ipc::read_ipc_compressed;\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // Miri cannot call FFI functions (zstd)\n    fn test_read_compressed_ipc_block() {\n        let schema = Arc::new(Schema::new(vec![\n            Field::new(\"id\", DataType::Int32, false),\n            Field::new(\"name\", DataType::Utf8, true),\n        ]));\n        let batch = RecordBatch::try_new(\n            Arc::clone(&schema),\n            vec![\n                Arc::new(Int32Array::from(vec![1, 2, 3])),\n                Arc::new(StringArray::from(vec![\"a\", \"b\", \"c\"])),\n            ],\n        )\n        .unwrap();\n\n        // Write as compressed IPC\n        let writer =\n            ShuffleBlockWriter::try_new(&batch.schema(), CompressionCodec::Zstd(1)).unwrap();\n        let mut buf = Cursor::new(Vec::new());\n        let ipc_time = Time::new();\n        writer.write_batch(&batch, &mut buf, &ipc_time).unwrap();\n\n        // Read back (skip 16-byte header: 8 compressed_length + 8 field_count)\n        let bytes = buf.into_inner();\n        let body = &bytes[16..];\n\n        let decoded = read_ipc_compressed(body).unwrap();\n        assert_eq!(decoded.num_rows(), 3);\n        assert_eq!(decoded.num_columns(), 2);\n\n        // Verify data\n        let col0 = decoded\n            .column(0)\n            .as_any()\n            .downcast_ref::<Int32Array>()\n            .unwrap();\n        assert_eq!(col0.value(0), 1);\n        assert_eq!(col0.value(1), 2);\n        assert_eq!(col0.value(2), 3);\n    }\n\n    /// Tests that ShuffleScanExec correctly unpacks dictionary-encoded columns.\n    /// Native shuffle may dictionary-encode string/binary columns, but the schema\n    /// declares value types (e.g. Utf8). 
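The protobuf plan only ever declares value types. 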
Without unpacking, RecordBatch creation\n    /// fails with a schema mismatch.\n    #[test]\n    #[cfg_attr(miri, ignore)]\n    fn test_dictionary_encoded_shuffle_block_is_unpacked() {\n        use super::*;\n        use arrow::array::StringDictionaryBuilder;\n        use arrow::datatypes::Int32Type;\n        use datafusion::physical_plan::ExecutionPlan;\n        use futures::StreamExt;\n\n        // Build a batch with a dictionary-encoded string column (simulating what\n        // the native shuffle writer produces for string columns).\n        let mut dict_builder = StringDictionaryBuilder::<Int32Type>::new();\n        dict_builder.append_value(\"hello\");\n        dict_builder.append_value(\"world\");\n        dict_builder.append_value(\"hello\"); // repeated value, good for dictionary\n        let dict_array = dict_builder.finish();\n\n        // The IPC schema includes the dictionary type\n        let dict_schema = Arc::new(Schema::new(vec![\n            Field::new(\"id\", DataType::Int32, false),\n            Field::new(\n                \"name\",\n                DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Utf8)),\n                true,\n            ),\n        ]));\n        let dict_batch = RecordBatch::try_new(\n            Arc::clone(&dict_schema),\n            vec![\n                Arc::new(Int32Array::from(vec![1, 2, 3])),\n                Arc::new(dict_array),\n            ],\n        )\n        .unwrap();\n\n        // Write as compressed IPC (preserves dictionary encoding)\n        let writer =\n            ShuffleBlockWriter::try_new(&dict_batch.schema(), CompressionCodec::Zstd(1)).unwrap();\n        let mut buf = Cursor::new(Vec::new());\n        let ipc_time = Time::new();\n        writer\n            .write_batch(&dict_batch, &mut buf, &ipc_time)\n            .unwrap();\n        let bytes = buf.into_inner();\n        let body = &bytes[16..];\n\n        // Confirm that read_ipc_compressed returns dictionary-encoded arrays\n        let decoded = read_ipc_compressed(body).unwrap();\n        assert!(\n            matches!(decoded.column(1).data_type(), DataType::Dictionary(_, _)),\n            \"Expected dictionary-encoded column from IPC, got {:?}\",\n            decoded.column(1).data_type()\n        );\n\n        // Create ShuffleScanExec with value types (Utf8, not Dictionary) — this is\n        // what the protobuf schema provides.\n        let mut scan = ShuffleScanExec::new(\n            super::super::super::planner::TEST_EXEC_CONTEXT_ID,\n            None,\n            vec![DataType::Int32, DataType::Utf8],\n        )\n        .unwrap();\n\n        // Feed the decoded batch through unpack_dictionary (simulating get_next)\n        let columns: Vec<ArrayRef> = decoded\n            .columns()\n            .iter()\n            .map(|col| super::unpack_dictionary(col))\n            .collect();\n        let input = InputBatch::new(columns, Some(decoded.num_rows()));\n        scan.set_input_batch(input);\n\n        // Execute and verify the output RecordBatch has the expected schema\n        let rt = tokio::runtime::Runtime::new().unwrap();\n        rt.block_on(async {\n            let ctx = Arc::new(TaskContext::default());\n            let mut stream = scan.execute(0, ctx).unwrap();\n            let result_batch = stream.next().await.unwrap().unwrap();\n\n            // Schema should have Utf8, not Dictionary\n            assert_eq!(\n                *result_batch.schema().field(1).data_type(),\n                DataType::Utf8,\n                
\"Expected Utf8 after dictionary unpacking\"\n            );\n\n            // Verify data integrity\n            let col1 = result_batch\n                .column(1)\n                .as_any()\n                .downcast_ref::<StringArray>()\n                .expect(\"Column should be StringArray after unpacking\");\n            assert_eq!(col1.value(0), \"hello\");\n            assert_eq!(col1.value(1), \"world\");\n            assert_eq!(col1.value(2), \"hello\");\n        });\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/planner/expression_registry.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Expression registry for dispatching expression creation\n\nuse std::collections::HashMap;\nuse std::sync::Arc;\n\nuse arrow::datatypes::SchemaRef;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion_comet_proto::spark_expression::{expr::ExprStruct, Expr};\n\nuse crate::execution::operators::ExecutionError;\n\n/// Trait for building physical expressions from Spark protobuf expressions\npub trait ExpressionBuilder: Send + Sync {\n    /// Build a DataFusion physical expression from a Spark protobuf expression\n    fn build(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &super::PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError>;\n}\n\n/// Enum to identify different expression types for registry dispatch\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]\npub enum ExpressionType {\n    // Arithmetic expressions\n    Add,\n    Subtract,\n    Multiply,\n    Divide,\n    IntegralDivide,\n    Remainder,\n    UnaryMinus,\n\n    // Comparison expressions\n    Eq,\n    Neq,\n    Lt,\n    LtEq,\n    Gt,\n    GtEq,\n    EqNullSafe,\n    NeqNullSafe,\n\n    // Logical expressions\n    And,\n    Or,\n    Not,\n\n    // Null checks\n    IsNull,\n    IsNotNull,\n\n    // Bitwise operations\n    BitwiseAnd,\n    BitwiseOr,\n    BitwiseXor,\n    BitwiseShiftLeft,\n    BitwiseShiftRight,\n\n    // Other expressions\n    Bound,\n    Unbound,\n    Literal,\n    Cast,\n    CaseWhen,\n    In,\n    If,\n    Substring,\n    Like,\n    Rlike,\n    CheckOverflow,\n    ScalarFunc,\n    NormalizeNanAndZero,\n    Subquery,\n    BloomFilterMightContain,\n    CreateNamedStruct,\n    GetStructField,\n    ToJson,\n    FromJson,\n    ToPrettyString,\n    ListExtract,\n    GetArrayStructFields,\n    ArrayInsert,\n    Rand,\n    Randn,\n    SparkPartitionId,\n    MonotonicallyIncreasingId,\n\n    // Time functions\n    Hour,\n    Minute,\n    Second,\n    TruncTimestamp,\n    UnixTimestamp,\n    HoursTransform,\n}\n\n/// Registry for expression builders\npub struct ExpressionRegistry {\n    builders: HashMap<ExpressionType, Box<dyn ExpressionBuilder>>,\n}\n\nimpl ExpressionRegistry {\n    /// Create a new expression registry with all builders registered\n    fn new() -> Self {\n        let mut registry = Self {\n            builders: HashMap::new(),\n        };\n\n        registry.register_all_expressions();\n        registry\n    }\n\n    /// Get the global shared registry instance\n    pub fn global() -> &'static ExpressionRegistry {\n        static REGISTRY: std::sync::OnceLock<ExpressionRegistry> = std::sync::OnceLock::new();\n        REGISTRY.get_or_init(ExpressionRegistry::new)\n    }\n\n    /// Check if the 
registry can handle a given expression type\n    pub fn can_handle(&self, spark_expr: &Expr) -> bool {\n        if let Ok(expr_type) = Self::get_expression_type(spark_expr) {\n            self.builders.contains_key(&expr_type)\n        } else {\n            false\n        }\n    }\n\n    /// Create a physical expression from a Spark protobuf expression\n    pub fn create_expr(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n        planner: &super::PhysicalPlanner,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let expr_type = Self::get_expression_type(spark_expr)?;\n\n        if let Some(builder) = self.builders.get(&expr_type) {\n            builder.build(spark_expr, input_schema, planner)\n        } else {\n            Err(ExecutionError::GeneralError(format!(\n                \"No builder registered for expression type: {:?}\",\n                expr_type\n            )))\n        }\n    }\n\n    /// Register all expression builders\n    fn register_all_expressions(&mut self) {\n        // Register arithmetic expressions\n        self.register_arithmetic_expressions();\n\n        // Register comparison expressions\n        self.register_comparison_expressions();\n\n        // Register bitwise expressions\n        self.register_bitwise_expressions();\n\n        // Register logical expressions\n        self.register_logical_expressions();\n\n        // Register null check expressions\n        self.register_null_check_expressions();\n\n        // Register string expressions\n        self.register_string_expressions();\n\n        // Register temporal expressions\n        self.register_temporal_expressions();\n\n        // Register random expressions\n        self.register_random_expressions();\n\n        // Register partition expressions\n        self.register_partition_expressions();\n    }\n\n    /// Register arithmetic expression builders\n    fn register_arithmetic_expressions(&mut self) {\n        use crate::execution::expressions::arithmetic::*;\n\n        self.builders\n            .insert(ExpressionType::Add, Box::new(AddBuilder));\n        self.builders\n            .insert(ExpressionType::Subtract, Box::new(SubtractBuilder));\n        self.builders\n            .insert(ExpressionType::Multiply, Box::new(MultiplyBuilder));\n        self.builders\n            .insert(ExpressionType::Divide, Box::new(DivideBuilder));\n        self.builders.insert(\n            ExpressionType::IntegralDivide,\n            Box::new(IntegralDivideBuilder),\n        );\n        self.builders\n            .insert(ExpressionType::Remainder, Box::new(RemainderBuilder));\n        self.builders\n            .insert(ExpressionType::UnaryMinus, Box::new(UnaryMinusBuilder));\n    }\n\n    /// Register comparison expression builders\n    fn register_comparison_expressions(&mut self) {\n        use crate::execution::expressions::comparison::*;\n\n        self.builders\n            .insert(ExpressionType::Eq, Box::new(EqBuilder));\n        self.builders\n            .insert(ExpressionType::Neq, Box::new(NeqBuilder));\n        self.builders\n            .insert(ExpressionType::Lt, Box::new(LtBuilder));\n        self.builders\n            .insert(ExpressionType::LtEq, Box::new(LtEqBuilder));\n        self.builders\n            .insert(ExpressionType::Gt, Box::new(GtBuilder));\n        self.builders\n            .insert(ExpressionType::GtEq, Box::new(GtEqBuilder));\n        self.builders\n            .insert(ExpressionType::EqNullSafe, Box::new(EqNullSafeBuilder));\n  
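      // EqNullSafe and NeqNullSafe implement Spark's null-safe equality operator (<=>).\n  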
      self.builders\n            .insert(ExpressionType::NeqNullSafe, Box::new(NeqNullSafeBuilder));\n    }\n\n    /// Register bitwise expression builders\n    fn register_bitwise_expressions(&mut self) {\n        use crate::execution::expressions::bitwise::*;\n\n        self.builders\n            .insert(ExpressionType::BitwiseAnd, Box::new(BitwiseAndBuilder));\n        self.builders\n            .insert(ExpressionType::BitwiseOr, Box::new(BitwiseOrBuilder));\n        self.builders\n            .insert(ExpressionType::BitwiseXor, Box::new(BitwiseXorBuilder));\n        self.builders.insert(\n            ExpressionType::BitwiseShiftLeft,\n            Box::new(BitwiseShiftLeftBuilder),\n        );\n        self.builders.insert(\n            ExpressionType::BitwiseShiftRight,\n            Box::new(BitwiseShiftRightBuilder),\n        );\n    }\n\n    /// Register logical expression builders\n    fn register_logical_expressions(&mut self) {\n        use crate::execution::expressions::logical::*;\n\n        self.builders\n            .insert(ExpressionType::And, Box::new(AndBuilder));\n        self.builders\n            .insert(ExpressionType::Or, Box::new(OrBuilder));\n        self.builders\n            .insert(ExpressionType::Not, Box::new(NotBuilder));\n    }\n\n    /// Register null check expression builders\n    fn register_null_check_expressions(&mut self) {\n        use crate::execution::expressions::nullcheck::*;\n\n        self.builders\n            .insert(ExpressionType::IsNull, Box::new(IsNullBuilder));\n        self.builders\n            .insert(ExpressionType::IsNotNull, Box::new(IsNotNullBuilder));\n    }\n\n    /// Register string expression builders\n    fn register_string_expressions(&mut self) {\n        use crate::execution::expressions::strings::*;\n\n        self.builders\n            .insert(ExpressionType::Substring, Box::new(SubstringBuilder));\n        self.builders\n            .insert(ExpressionType::Like, Box::new(LikeBuilder));\n        self.builders\n            .insert(ExpressionType::Rlike, Box::new(RlikeBuilder));\n        self.builders\n            .insert(ExpressionType::FromJson, Box::new(FromJsonBuilder));\n    }\n\n    /// Register temporal expression builders\n    fn register_temporal_expressions(&mut self) {\n        use crate::execution::expressions::temporal::*;\n\n        self.builders\n            .insert(ExpressionType::Hour, Box::new(HourBuilder));\n        self.builders\n            .insert(ExpressionType::Minute, Box::new(MinuteBuilder));\n        self.builders\n            .insert(ExpressionType::Second, Box::new(SecondBuilder));\n        self.builders.insert(\n            ExpressionType::UnixTimestamp,\n            Box::new(UnixTimestampBuilder),\n        );\n        self.builders.insert(\n            ExpressionType::TruncTimestamp,\n            Box::new(TruncTimestampBuilder),\n        );\n        self.builders.insert(\n            ExpressionType::HoursTransform,\n            Box::new(HoursTransformBuilder),\n        );\n    }\n\n    /// Extract expression type from Spark protobuf expression\n    fn get_expression_type(spark_expr: &Expr) -> Result<ExpressionType, ExecutionError> {\n        match spark_expr.expr_struct.as_ref() {\n            Some(ExprStruct::Add(_)) => Ok(ExpressionType::Add),\n            Some(ExprStruct::Subtract(_)) => Ok(ExpressionType::Subtract),\n            Some(ExprStruct::Multiply(_)) => Ok(ExpressionType::Multiply),\n            Some(ExprStruct::Divide(_)) => Ok(ExpressionType::Divide),\n            
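// IntegralDivide is Spark's integral division (the SQL div operator).\n            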
Some(ExprStruct::IntegralDivide(_)) => Ok(ExpressionType::IntegralDivide),\n            Some(ExprStruct::Remainder(_)) => Ok(ExpressionType::Remainder),\n            Some(ExprStruct::UnaryMinus(_)) => Ok(ExpressionType::UnaryMinus),\n\n            Some(ExprStruct::Eq(_)) => Ok(ExpressionType::Eq),\n            Some(ExprStruct::Neq(_)) => Ok(ExpressionType::Neq),\n            Some(ExprStruct::Lt(_)) => Ok(ExpressionType::Lt),\n            Some(ExprStruct::LtEq(_)) => Ok(ExpressionType::LtEq),\n            Some(ExprStruct::Gt(_)) => Ok(ExpressionType::Gt),\n            Some(ExprStruct::GtEq(_)) => Ok(ExpressionType::GtEq),\n            Some(ExprStruct::EqNullSafe(_)) => Ok(ExpressionType::EqNullSafe),\n            Some(ExprStruct::NeqNullSafe(_)) => Ok(ExpressionType::NeqNullSafe),\n\n            Some(ExprStruct::And(_)) => Ok(ExpressionType::And),\n            Some(ExprStruct::Or(_)) => Ok(ExpressionType::Or),\n            Some(ExprStruct::Not(_)) => Ok(ExpressionType::Not),\n\n            Some(ExprStruct::IsNull(_)) => Ok(ExpressionType::IsNull),\n            Some(ExprStruct::IsNotNull(_)) => Ok(ExpressionType::IsNotNull),\n\n            Some(ExprStruct::BitwiseAnd(_)) => Ok(ExpressionType::BitwiseAnd),\n            Some(ExprStruct::BitwiseOr(_)) => Ok(ExpressionType::BitwiseOr),\n            Some(ExprStruct::BitwiseXor(_)) => Ok(ExpressionType::BitwiseXor),\n            Some(ExprStruct::BitwiseShiftLeft(_)) => Ok(ExpressionType::BitwiseShiftLeft),\n            Some(ExprStruct::BitwiseShiftRight(_)) => Ok(ExpressionType::BitwiseShiftRight),\n\n            Some(ExprStruct::Bound(_)) => Ok(ExpressionType::Bound),\n            Some(ExprStruct::Unbound(_)) => Ok(ExpressionType::Unbound),\n            Some(ExprStruct::Literal(_)) => Ok(ExpressionType::Literal),\n            Some(ExprStruct::Cast(_)) => Ok(ExpressionType::Cast),\n            Some(ExprStruct::CaseWhen(_)) => Ok(ExpressionType::CaseWhen),\n            Some(ExprStruct::In(_)) => Ok(ExpressionType::In),\n            Some(ExprStruct::If(_)) => Ok(ExpressionType::If),\n            Some(ExprStruct::Substring(_)) => Ok(ExpressionType::Substring),\n            Some(ExprStruct::Like(_)) => Ok(ExpressionType::Like),\n            Some(ExprStruct::Rlike(_)) => Ok(ExpressionType::Rlike),\n            Some(ExprStruct::CheckOverflow(_)) => Ok(ExpressionType::CheckOverflow),\n            Some(ExprStruct::ScalarFunc(_)) => Ok(ExpressionType::ScalarFunc),\n            Some(ExprStruct::NormalizeNanAndZero(_)) => Ok(ExpressionType::NormalizeNanAndZero),\n            Some(ExprStruct::Subquery(_)) => Ok(ExpressionType::Subquery),\n            Some(ExprStruct::BloomFilterMightContain(_)) => {\n                Ok(ExpressionType::BloomFilterMightContain)\n            }\n            Some(ExprStruct::CreateNamedStruct(_)) => Ok(ExpressionType::CreateNamedStruct),\n            Some(ExprStruct::GetStructField(_)) => Ok(ExpressionType::GetStructField),\n            Some(ExprStruct::ToJson(_)) => Ok(ExpressionType::ToJson),\n            Some(ExprStruct::FromJson(_)) => Ok(ExpressionType::FromJson),\n            Some(ExprStruct::ToPrettyString(_)) => Ok(ExpressionType::ToPrettyString),\n            Some(ExprStruct::ListExtract(_)) => Ok(ExpressionType::ListExtract),\n            Some(ExprStruct::GetArrayStructFields(_)) => Ok(ExpressionType::GetArrayStructFields),\n            Some(ExprStruct::ArrayInsert(_)) => Ok(ExpressionType::ArrayInsert),\n            Some(ExprStruct::Rand(_)) => Ok(ExpressionType::Rand),\n            Some(ExprStruct::Randn(_)) => 
Ok(ExpressionType::Randn),\n            Some(ExprStruct::SparkPartitionId(_)) => Ok(ExpressionType::SparkPartitionId),\n            Some(ExprStruct::MonotonicallyIncreasingId(_)) => {\n                Ok(ExpressionType::MonotonicallyIncreasingId)\n            }\n\n            Some(ExprStruct::Hour(_)) => Ok(ExpressionType::Hour),\n            Some(ExprStruct::Minute(_)) => Ok(ExpressionType::Minute),\n            Some(ExprStruct::Second(_)) => Ok(ExpressionType::Second),\n            Some(ExprStruct::TruncTimestamp(_)) => Ok(ExpressionType::TruncTimestamp),\n            Some(ExprStruct::UnixTimestamp(_)) => Ok(ExpressionType::UnixTimestamp),\n            Some(ExprStruct::HoursTransform(_)) => Ok(ExpressionType::HoursTransform),\n\n            Some(other) => Err(ExecutionError::GeneralError(format!(\n                \"Unsupported expression type: {:?}\",\n                other\n            ))),\n            None => Err(ExecutionError::GeneralError(\n                \"Expression struct is None\".to_string(),\n            )),\n        }\n    }\n\n    /// Register random expression builders\n    fn register_random_expressions(&mut self) {\n        use crate::execution::expressions::random::*;\n\n        self.builders\n            .insert(ExpressionType::Rand, Box::new(RandBuilder));\n        self.builders\n            .insert(ExpressionType::Randn, Box::new(RandnBuilder));\n    }\n\n    /// Register partition expression builders\n    fn register_partition_expressions(&mut self) {\n        use crate::execution::expressions::partition::*;\n\n        self.builders.insert(\n            ExpressionType::SparkPartitionId,\n            Box::new(SparkPartitionIdBuilder),\n        );\n        self.builders.insert(\n            ExpressionType::MonotonicallyIncreasingId,\n            Box::new(MonotonicallyIncreasingIdBuilder),\n        );\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/planner/macros.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Core macros for the modular planner framework\n\n/// Macro to extract a specific expression variant, panicking if called with wrong type.\n/// This should be used in expression builders where the registry guarantees the correct\n/// expression type has been routed to the builder.\n#[macro_export]\nmacro_rules! extract_expr {\n    ($spark_expr:expr, $variant:ident) => {\n        match $spark_expr\n            .expr_struct\n            .as_ref()\n            .expect(\"expression struct must be present\")\n        {\n            datafusion_comet_proto::spark_expression::expr::ExprStruct::$variant(expr) => expr,\n            other => panic!(\n                \"{} builder called with wrong expression type: {:?}\",\n                stringify!($variant),\n                other\n            ),\n        }\n    };\n}\n\n/// Macro to extract a specific operator variant, panicking if called with wrong type.\n/// This should be used in operator builders where the registry guarantees the correct\n/// operator type has been routed to the builder.\n#[macro_export]\nmacro_rules! extract_op {\n    ($spark_operator:expr, $variant:ident) => {\n        match $spark_operator\n            .op_struct\n            .as_ref()\n            .expect(\"operator struct must be present\")\n        {\n            datafusion_comet_proto::spark_operator::operator::OpStruct::$variant(op) => op,\n            other => panic!(\n                \"{} builder called with wrong operator type: {:?}\",\n                stringify!($variant),\n                other\n            ),\n        }\n    };\n}\n\n/// Macro to generate binary expression builders with minimal boilerplate\n#[macro_export]\nmacro_rules! 
binary_expr_builder {\n    ($builder_name:ident, $expr_type:ident, $operator:expr) => {\n        pub struct $builder_name;\n\n        impl $crate::execution::planner::expression_registry::ExpressionBuilder for $builder_name {\n            fn build(\n                &self,\n                spark_expr: &datafusion_comet_proto::spark_expression::Expr,\n                input_schema: arrow::datatypes::SchemaRef,\n                planner: &$crate::execution::planner::PhysicalPlanner,\n            ) -> Result<\n                std::sync::Arc<dyn datafusion::physical_expr::PhysicalExpr>,\n                $crate::execution::operators::ExecutionError,\n            > {\n                let expr = $crate::extract_expr!(spark_expr, $expr_type);\n                let left = planner.create_expr(\n                    expr.left.as_ref().unwrap(),\n                    std::sync::Arc::clone(&input_schema),\n                )?;\n                let right = planner.create_expr(expr.right.as_ref().unwrap(), input_schema)?;\n                Ok(std::sync::Arc::new(\n                    datafusion::physical_expr::expressions::BinaryExpr::new(left, $operator, right),\n                ))\n            }\n        }\n    };\n}\n\n/// Macro to generate unary expression builders\n#[macro_export]\nmacro_rules! unary_expr_builder {\n    ($builder_name:ident, $expr_type:ident, $expr_constructor:expr) => {\n        pub struct $builder_name;\n\n        impl $crate::execution::planner::expression_registry::ExpressionBuilder for $builder_name {\n            fn build(\n                &self,\n                spark_expr: &datafusion_comet_proto::spark_expression::Expr,\n                input_schema: arrow::datatypes::SchemaRef,\n                planner: &$crate::execution::planner::PhysicalPlanner,\n            ) -> Result<\n                std::sync::Arc<dyn datafusion::physical_expr::PhysicalExpr>,\n                $crate::execution::operators::ExecutionError,\n            > {\n                let expr = $crate::extract_expr!(spark_expr, $expr_type);\n                let child = planner.create_expr(expr.child.as_ref().unwrap(), input_schema)?;\n                Ok(std::sync::Arc::new($expr_constructor(child)))\n            }\n        }\n    };\n}\n"
  },
  {
    "path": "native/core/src/execution/planner/operator_registry.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Registry for operator builders using modular pattern\n\nuse std::{\n    collections::HashMap,\n    sync::{Arc, OnceLock},\n};\n\nuse datafusion_comet_proto::spark_operator::Operator;\nuse jni::objects::{Global, JObject};\n\nuse super::{PhysicalPlanner, PlanCreationResult};\nuse crate::execution::operators::ExecutionError;\n\n/// Trait for building physical operators from Spark protobuf operators\npub trait OperatorBuilder: Send + Sync {\n    /// Build a Spark plan from a protobuf operator\n    fn build(\n        &self,\n        spark_plan: &datafusion_comet_proto::spark_operator::Operator,\n        inputs: &mut Vec<Arc<Global<JObject<'static>>>>,\n        partition_count: usize,\n        planner: &PhysicalPlanner,\n    ) -> PlanCreationResult;\n}\n\n/// Enum to identify different operator types for registry dispatch\n#[derive(Debug, Clone, Copy, PartialEq, Eq, Hash)]\npub enum OperatorType {\n    Scan,\n    NativeScan,\n    IcebergScan,\n    Projection,\n    Filter,\n    HashAgg,\n    Limit,\n    Sort,\n    ShuffleWriter,\n    ParquetWriter,\n    Expand,\n    SortMergeJoin,\n    HashJoin,\n    Window,\n    CsvScan,\n}\n\n/// Global registry of operator builders\npub struct OperatorRegistry {\n    builders: HashMap<OperatorType, Box<dyn OperatorBuilder>>,\n}\n\nimpl OperatorRegistry {\n    /// Create a new empty registry\n    fn new() -> Self {\n        Self {\n            builders: HashMap::new(),\n        }\n    }\n\n    /// Get the global singleton instance of the operator registry\n    pub fn global() -> &'static OperatorRegistry {\n        static REGISTRY: OnceLock<OperatorRegistry> = OnceLock::new();\n        REGISTRY.get_or_init(|| {\n            let mut registry = OperatorRegistry::new();\n            registry.register_all_operators();\n            registry\n        })\n    }\n\n    /// Check if the registry can handle a given operator\n    pub fn can_handle(&self, spark_operator: &Operator) -> bool {\n        get_operator_type(spark_operator)\n            .map(|op_type| self.builders.contains_key(&op_type))\n            .unwrap_or(false)\n    }\n\n    /// Create a Spark plan using the registered builder for this operator type\n    pub fn create_plan(\n        &self,\n        spark_operator: &Operator,\n        inputs: &mut Vec<Arc<Global<JObject<'static>>>>,\n        partition_count: usize,\n        planner: &PhysicalPlanner,\n    ) -> PlanCreationResult {\n        let operator_type = get_operator_type(spark_operator).ok_or_else(|| {\n            ExecutionError::GeneralError(format!(\n                \"Unsupported operator type: {:?}\",\n                spark_operator.op_struct\n            ))\n        })?;\n\n        let builder = 
self.builders.get(&operator_type).ok_or_else(|| {\n            ExecutionError::GeneralError(format!(\n                \"No builder registered for operator type: {:?}\",\n                operator_type\n            ))\n        })?;\n\n        builder.build(spark_operator, inputs, partition_count, planner)\n    }\n\n    /// Register all operator builders\n    fn register_all_operators(&mut self) {\n        self.register_projection_operators();\n    }\n\n    /// Register projection operators\n    fn register_projection_operators(&mut self) {\n        use crate::execution::operators::projection::ProjectionBuilder;\n\n        self.builders\n            .insert(OperatorType::Projection, Box::new(ProjectionBuilder));\n    }\n}\n\n/// Extract the operator type from a Spark operator\nfn get_operator_type(spark_operator: &Operator) -> Option<OperatorType> {\n    use datafusion_comet_proto::spark_operator::operator::OpStruct;\n\n    match spark_operator.op_struct.as_ref()? {\n        OpStruct::Projection(_) => Some(OperatorType::Projection),\n        OpStruct::Filter(_) => Some(OperatorType::Filter),\n        OpStruct::HashAgg(_) => Some(OperatorType::HashAgg),\n        OpStruct::Limit(_) => Some(OperatorType::Limit),\n        OpStruct::Sort(_) => Some(OperatorType::Sort),\n        OpStruct::Scan(_) => Some(OperatorType::Scan),\n        OpStruct::NativeScan(_) => Some(OperatorType::NativeScan),\n        OpStruct::IcebergScan(_) => Some(OperatorType::IcebergScan),\n        OpStruct::ShuffleWriter(_) => Some(OperatorType::ShuffleWriter),\n        OpStruct::ParquetWriter(_) => Some(OperatorType::ParquetWriter),\n        OpStruct::Expand(_) => Some(OperatorType::Expand),\n        OpStruct::SortMergeJoin(_) => Some(OperatorType::SortMergeJoin),\n        OpStruct::HashJoin(_) => Some(OperatorType::HashJoin),\n        OpStruct::Window(_) => Some(OperatorType::Window),\n        OpStruct::Explode(_) => None, // Not yet in OperatorType enum\n        OpStruct::CsvScan(_) => Some(OperatorType::CsvScan),\n        OpStruct::ShuffleScan(_) => None, // Not yet in OperatorType enum\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/planner.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Converts Spark physical plan to DataFusion physical plan\n\npub mod expression_registry;\npub mod macros;\npub mod operator_registry;\n\nuse crate::execution::operators::init_csv_datasource_exec;\nuse crate::execution::operators::IcebergScanExec;\nuse crate::execution::{\n    expressions::subquery::Subquery,\n    operators::{ExecutionError, ExpandExec, ParquetWriterExec, ScanExec, ShuffleScanExec},\n    planner::expression_registry::ExpressionRegistry,\n    planner::operator_registry::OperatorRegistry,\n    serde::to_arrow_datatype,\n    shuffle::ShuffleWriterExec,\n};\nuse arrow::compute::CastOptions;\nuse arrow::datatypes::{DataType, Field, FieldRef, Schema, TimeUnit, DECIMAL128_MAX_PRECISION};\nuse datafusion::functions_aggregate::bit_and_or_xor::{bit_and_udaf, bit_or_udaf, bit_xor_udaf};\nuse datafusion::functions_aggregate::count::count_udaf;\nuse datafusion::functions_aggregate::min_max::max_udaf;\nuse datafusion::functions_aggregate::min_max::min_udaf;\nuse datafusion::functions_aggregate::sum::sum_udaf;\nuse datafusion::physical_expr::aggregate::{AggregateExprBuilder, AggregateFunctionExpr};\nuse datafusion::physical_plan::windows::BoundedWindowAggExec;\nuse datafusion::physical_plan::InputOrderMode;\nuse datafusion::{\n    arrow::{compute::SortOptions, datatypes::SchemaRef},\n    common::DataFusionError,\n    config::ConfigOptions,\n    execution::FunctionRegistry,\n    functions_aggregate::first_last::{FirstValue, LastValue},\n    logical_expr::Operator as DataFusionOperator,\n    physical_expr::{\n        expressions::{\n            in_list, BinaryExpr, CaseExpr, CastExpr, Column, IsNullExpr,\n            Literal as DataFusionLiteral,\n        },\n        PhysicalExpr, PhysicalSortExpr, ScalarFunctionExpr,\n    },\n    physical_plan::{\n        aggregates::{AggregateMode as DFAggregateMode, PhysicalGroupBy},\n        empty::EmptyExec,\n        joins::{utils::JoinFilter, HashJoinExec, PartitionMode, SortMergeJoinExec},\n        limit::LocalLimitExec,\n        projection::ProjectionExec,\n        sorts::sort::SortExec,\n        ExecutionPlan,\n    },\n    prelude::SessionContext,\n};\nuse datafusion_comet_spark_expr::{\n    create_comet_physical_fun, create_comet_physical_fun_with_eval_mode, BinaryOutputStyle,\n    BloomFilterAgg, BloomFilterMightContain, CsvWriteOptions, EvalMode, SumInteger, ToCsv,\n};\nuse iceberg::expr::Bind;\n\nuse crate::execution::operators::ExecutionError::GeneralError;\nuse crate::execution::shuffle::{CometPartitioning, CompressionCodec};\nuse crate::execution::spark_plan::SparkPlan;\nuse crate::parquet::parquet_support::prepare_object_store_with_configs;\nuse datafusion::common::scalar::ScalarStructBuilder;\nuse 
datafusion::common::{\n    tree_node::{Transformed, TransformedResult, TreeNode, TreeNodeRecursion, TreeNodeRewriter},\n    JoinType as DFJoinType, NullEquality, ScalarValue,\n};\nuse datafusion::datasource::listing::PartitionedFile;\nuse datafusion::logical_expr::type_coercion::functions::fields_with_udf;\nuse datafusion::logical_expr::type_coercion::other::get_coerce_type_for_case_expression;\nuse datafusion::logical_expr::{\n    AggregateUDF, ReturnFieldArgs, ScalarUDF, TypeSignature, WindowFrame, WindowFrameBound,\n    WindowFrameUnits, WindowFunctionDefinition,\n};\nuse datafusion::physical_expr::expressions::{Literal, StatsType};\nuse datafusion::physical_expr::window::WindowExpr;\nuse datafusion::physical_expr::LexOrdering;\n\nuse crate::parquet::parquet_exec::init_datasource_exec;\n\nuse arrow::array::{\n    new_empty_array, Array, ArrayRef, BinaryBuilder, BooleanArray, Date32Array, Decimal128Array,\n    Float32Array, Float64Array, Int16Array, Int32Array, Int64Array, Int8Array, ListArray,\n    NullArray, StringBuilder, TimestampMicrosecondArray,\n};\nuse arrow::buffer::{BooleanBuffer, NullBuffer, OffsetBuffer};\nuse arrow::row::{OwnedRow, RowConverter, SortField};\nuse datafusion::common::utils::SingleRowListArrayBuilder;\nuse datafusion::common::UnnestOptions;\nuse datafusion::physical_plan::filter::FilterExec;\nuse datafusion::physical_plan::limit::GlobalLimitExec;\nuse datafusion::physical_plan::unnest::{ListUnnest, UnnestExec};\nuse datafusion_comet_proto::spark_expression::ListLiteral;\nuse datafusion_comet_proto::spark_operator::SparkFilePartition;\nuse datafusion_comet_proto::{\n    spark_expression::{\n        self, agg_expr::ExprStruct as AggExprStruct, expr::ExprStruct, literal::Value, AggExpr,\n        Expr, ScalarFunc,\n    },\n    spark_operator::{\n        self, lower_window_frame_bound::LowerFrameBoundStruct, operator::OpStruct,\n        upper_window_frame_bound::UpperFrameBoundStruct, BuildSide,\n        CompressionCodec as SparkCompressionCodec, JoinType, Operator, WindowFrameType,\n    },\n    spark_partitioning::{partitioning::PartitioningStruct, Partitioning as SparkPartitioning},\n};\nuse datafusion_comet_spark_expr::{\n    ArrayInsert, Avg, AvgDecimal, Cast, CheckOverflow, Correlation, Covariance, CreateNamedStruct,\n    DecimalRescaleCheckOverflow, GetArrayStructFields, GetStructField, IfExpr, ListExtract,\n    NormalizeNaNAndZero, SparkCastOptions, Stddev, SumDecimal, ToJson, UnboundColumn, Variance,\n    WideDecimalBinaryExpr, WideDecimalOp,\n};\nuse itertools::Itertools;\nuse jni::objects::{Global, JObject};\nuse num::{BigInt, ToPrimitive};\nuse object_store::path::Path;\nuse std::cmp::max;\nuse std::{collections::HashMap, sync::Arc};\nuse url::Url;\n\n// For clippy error on type_complexity.\ntype PhyAggResult = Result<Vec<AggregateFunctionExpr>, ExecutionError>;\ntype PhyExprResult = Result<Vec<(Arc<dyn PhysicalExpr>, String)>, ExecutionError>;\ntype PartitionPhyExprResult = Result<Vec<Arc<dyn PhysicalExpr>>, ExecutionError>;\npub type PlanCreationResult =\n    Result<(Vec<ScanExec>, Vec<ShuffleScanExec>, Arc<SparkPlan>), ExecutionError>;\n\nstruct JoinParameters {\n    pub left: Arc<SparkPlan>,\n    pub right: Arc<SparkPlan>,\n    pub join_on: Vec<(Arc<dyn PhysicalExpr>, Arc<dyn PhysicalExpr>)>,\n    pub join_filter: Option<JoinFilter>,\n    pub join_type: DFJoinType,\n}\n\n#[derive(Default)]\npub struct BinaryExprOptions {\n    pub is_integral_div: bool,\n}\n\npub const TEST_EXEC_CONTEXT_ID: i64 = -1;\n\n/// The query planner for converting Spark query 
plans to DataFusion query plans.\npub struct PhysicalPlanner {\n    // The execution context id of this planner.\n    exec_context_id: i64,\n    partition: i32,\n    session_ctx: Arc<SessionContext>,\n    query_context_registry: Arc<datafusion_comet_spark_expr::QueryContextMap>,\n}\n\nimpl Default for PhysicalPlanner {\n    fn default() -> Self {\n        Self::new(Arc::new(SessionContext::new()), 0)\n    }\n}\n\nimpl PhysicalPlanner {\n    pub fn new(session_ctx: Arc<SessionContext>, partition: i32) -> Self {\n        Self {\n            exec_context_id: TEST_EXEC_CONTEXT_ID,\n            session_ctx,\n            partition,\n            query_context_registry: datafusion_comet_spark_expr::create_query_context_map(),\n        }\n    }\n\n    pub fn with_exec_id(self, exec_context_id: i64) -> Self {\n        Self {\n            exec_context_id,\n            partition: self.partition,\n            session_ctx: Arc::clone(&self.session_ctx),\n            query_context_registry: Arc::clone(&self.query_context_registry),\n        }\n    }\n\n    /// Return session context of this planner.\n    pub fn session_ctx(&self) -> &Arc<SessionContext> {\n        &self.session_ctx\n    }\n\n    /// Return partition id of this planner.\n    pub fn partition(&self) -> i32 {\n        self.partition\n    }\n\n    /// get DataFusion PartitionedFiles from a Spark FilePartition\n    fn get_partitioned_files(\n        &self,\n        partition: &SparkFilePartition,\n    ) -> Result<Vec<PartitionedFile>, ExecutionError> {\n        let mut files = Vec::with_capacity(partition.partitioned_file.len());\n        partition.partitioned_file.iter().try_for_each(|file| {\n            assert!(file.start + file.length <= file.file_size);\n\n            let mut partitioned_file = PartitionedFile::new_with_range(\n                String::new(), // Dummy file path.\n                file.file_size as u64,\n                file.start,\n                file.start + file.length,\n            );\n\n            // Spark sends the path over as URL-encoded, parse that first.\n            let url =\n                Url::parse(file.file_path.as_ref()).map_err(|e| GeneralError(e.to_string()))?;\n            // Convert that to a Path object to use in the PartitionedFile.\n            let path = Path::from_url_path(url.path()).map_err(|e| GeneralError(e.to_string()))?;\n            partitioned_file.object_meta.location = path;\n\n            // Process partition values\n            // Create an empty input schema for partition values because they are all literals.\n            let empty_schema = Arc::new(Schema::empty());\n            let partition_values: Result<Vec<_>, _> = file\n                .partition_values\n                .iter()\n                .map(|partition_value| {\n                    let literal =\n                        self.create_expr(partition_value, Arc::<Schema>::clone(&empty_schema))?;\n                    literal\n                        .as_any()\n                        .downcast_ref::<DataFusionLiteral>()\n                        .ok_or_else(|| {\n                            GeneralError(\"Expected literal of partition value\".to_string())\n                        })\n                        .map(|literal| literal.value().clone())\n                })\n                .collect();\n            let partition_values = partition_values?;\n\n            partitioned_file.partition_values = partition_values;\n\n            files.push(partitioned_file);\n            Ok::<(), ExecutionError>(())\n        })?;\n\n      
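  // All files converted; partition values are attached as literal scalars.\n      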
  Ok(files)\n    }\n\n    /// Create a DataFusion physical expression from Spark physical expression\n    pub(crate) fn create_expr(\n        &self,\n        spark_expr: &Expr,\n        input_schema: SchemaRef,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        // Register QueryContext if present\n        if let (Some(expr_id), Some(ctx_proto)) =\n            (spark_expr.expr_id, spark_expr.query_context.as_ref())\n        {\n            // Deserialize QueryContext from protobuf\n            let query_ctx = datafusion_comet_spark_expr::QueryContext::new(\n                ctx_proto.sql_text.clone(),\n                ctx_proto.start_index,\n                ctx_proto.stop_index,\n                ctx_proto.object_type.clone(),\n                ctx_proto.object_name.clone(),\n                ctx_proto.line,\n                ctx_proto.start_position,\n            );\n\n            // Register query context for error reporting\n            let registry = &self.query_context_registry;\n            registry.register(expr_id, query_ctx);\n        }\n\n        // Try to use the modular registry first - this automatically handles any registered expression types\n        if ExpressionRegistry::global().can_handle(spark_expr) {\n            return ExpressionRegistry::global().create_expr(spark_expr, input_schema, self);\n        }\n\n        // Fall back to the original monolithic match for other expressions\n        match spark_expr.expr_struct.as_ref().unwrap() {\n            ExprStruct::Bound(bound) => {\n                let idx = bound.index as usize;\n                if idx >= input_schema.fields().len() {\n                    return Err(GeneralError(format!(\n                        \"Column index {idx} is out of bound. Schema: {input_schema}\"\n                    )));\n                }\n                let field = input_schema.field(idx);\n                Ok(Arc::new(Column::new(field.name().as_str(), idx)))\n            }\n            ExprStruct::Unbound(unbound) => {\n                let data_type = to_arrow_datatype(unbound.datatype.as_ref().unwrap());\n                Ok(Arc::new(UnboundColumn::new(\n                    unbound.name.as_str(),\n                    data_type,\n                )))\n            }\n            ExprStruct::Literal(literal) => {\n                let data_type = to_arrow_datatype(literal.datatype.as_ref().unwrap());\n                let scalar_value = if literal.is_null {\n                    match data_type {\n                        DataType::Boolean => ScalarValue::Boolean(None),\n                        DataType::Int8 => ScalarValue::Int8(None),\n                        DataType::Int16 => ScalarValue::Int16(None),\n                        DataType::Int32 => ScalarValue::Int32(None),\n                        DataType::Int64 => ScalarValue::Int64(None),\n                        DataType::Float32 => ScalarValue::Float32(None),\n                        DataType::Float64 => ScalarValue::Float64(None),\n                        DataType::Utf8 => ScalarValue::Utf8(None),\n                        DataType::Date32 => ScalarValue::Date32(None),\n                        DataType::Timestamp(TimeUnit::Microsecond, timezone) => {\n                            ScalarValue::TimestampMicrosecond(None, timezone)\n                        }\n                        DataType::Binary => ScalarValue::Binary(None),\n                        DataType::Decimal128(p, s) => ScalarValue::Decimal128(None, p, s),\n                        DataType::Struct(fields) => 
ScalarStructBuilder::new_null(fields),\n                        DataType::Map(f, s) => DataType::Map(f, s).try_into()?,\n                        DataType::List(f) => DataType::List(f).try_into()?,\n                        DataType::Null => ScalarValue::Null,\n                        dt => {\n                            return Err(GeneralError(format!(\"{dt:?} is not supported in Comet\")))\n                        }\n                    }\n                } else {\n                    match literal.value.as_ref().unwrap() {\n                        Value::BoolVal(value) => ScalarValue::Boolean(Some(*value)),\n                        Value::ByteVal(value) => ScalarValue::Int8(Some(*value as i8)),\n                        Value::ShortVal(value) => ScalarValue::Int16(Some(*value as i16)),\n                        Value::IntVal(value) => match data_type {\n                            DataType::Int32 => ScalarValue::Int32(Some(*value)),\n                            DataType::Date32 => ScalarValue::Date32(Some(*value)),\n                            dt => {\n                                return Err(GeneralError(format!(\n                                    \"Expected either 'Int32' or 'Date32' for IntVal, but found {dt:?}\"\n                                )))\n                            }\n                        },\n                        Value::LongVal(value) => match data_type {\n                            DataType::Int64 => ScalarValue::Int64(Some(*value)),\n                            DataType::Timestamp(TimeUnit::Microsecond, None) => {\n                                ScalarValue::TimestampMicrosecond(Some(*value), None)\n                            }\n                            DataType::Timestamp(TimeUnit::Microsecond, Some(tz)) => {\n                                ScalarValue::TimestampMicrosecond(Some(*value), Some(tz))\n                            }\n                            dt => {\n                                return Err(GeneralError(format!(\n                                    \"Expected either 'Int64' or 'Timestamp' for LongVal, but found {dt:?}\"\n                                )))\n                            }\n                        },\n                        Value::FloatVal(value) => ScalarValue::Float32(Some(*value)),\n                        Value::DoubleVal(value) => ScalarValue::Float64(Some(*value)),\n                        Value::StringVal(value) => ScalarValue::Utf8(Some(value.clone())),\n                        Value::BytesVal(value) => ScalarValue::Binary(Some(value.clone())),\n                        Value::DecimalVal(value) => {\n                            let big_integer = BigInt::from_signed_bytes_be(value);\n                            let integer = big_integer.to_i128().ok_or_else(|| {\n                                GeneralError(format!(\n                                    \"Cannot parse {big_integer:?} as i128 for Decimal literal\"\n                                ))\n                            })?;\n\n                            match data_type {\n                                DataType::Decimal128(p, s) => {\n                                    ScalarValue::Decimal128(Some(integer), p, s)\n                                }\n                                dt => {\n                                    return Err(GeneralError(format!(\n                                        \"Decimal literal's data type should be Decimal128 but got {dt:?}\"\n                                    )))\n                                }\n                    
        }\n                        }\n                        Value::ListVal(values) => {\n                            if let DataType::List(_) = data_type {\n                                SingleRowListArrayBuilder::new(literal_to_array_ref(data_type, values.clone())?).build_list_scalar()\n                            } else {\n                                return Err(GeneralError(format!(\n                                    \"Expected DataType::List but got {data_type:?}\"\n                                )));\n                            }\n                        }\n                    }\n                };\n                Ok(Arc::new(DataFusionLiteral::new(scalar_value)))\n            }\n            ExprStruct::Cast(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), input_schema)?;\n                let datatype = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                let eval_mode = from_protobuf_eval_mode(expr.eval_mode)?;\n\n                // Look up query context from registry if expr_id is present\n                let query_context = spark_expr.expr_id.and_then(|expr_id| {\n                    let registry = &self.query_context_registry;\n                    registry.get(expr_id)\n                });\n\n                Ok(Arc::new(Cast::new(\n                    child,\n                    datatype,\n                    SparkCastOptions::new_with_version(\n                        eval_mode,\n                        &expr.timezone,\n                        expr.allow_incompat,\n                        expr.is_spark4_plus,\n                    ),\n                    spark_expr.expr_id,\n                    query_context,\n                )))\n            }\n            ExprStruct::CheckOverflow(expr) => {\n                let child =\n                    self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n                let data_type = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                let fail_on_error = expr.fail_on_error;\n\n                // WideDecimalBinaryExpr already handles overflow — skip redundant check\n                // but only if its output type matches CheckOverflow's declared type\n                if child\n                    .as_any()\n                    .downcast_ref::<WideDecimalBinaryExpr>()\n                    .is_some()\n                {\n                    let child_type = child.data_type(&input_schema)?;\n                    if child_type == data_type {\n                        return Ok(child);\n                    }\n                }\n\n                // Fuse Cast(Decimal128→Decimal128) + CheckOverflow into single rescale+check\n                // Only fuse when the Cast target type matches the CheckOverflow output type\n                if let Some(cast) = child.as_any().downcast_ref::<Cast>() {\n                    if let (\n                        DataType::Decimal128(p_out, s_out),\n                        Ok(DataType::Decimal128(_p_in, s_in)),\n                    ) = (&data_type, cast.child.data_type(&input_schema))\n                    {\n                        let cast_target = cast.data_type(&input_schema)?;\n                        if cast_target == data_type {\n                            return Ok(Arc::new(DecimalRescaleCheckOverflow::new(\n                                Arc::clone(&cast.child),\n                                s_in,\n                                *p_out,\n                                *s_out,\n                      
          fail_on_error,\n                            )));\n                        }\n                    }\n                }\n\n                // Look up query context from registry if expr_id is present\n                let query_context = spark_expr.expr_id.and_then(|expr_id| {\n                    let registry = &self.query_context_registry;\n                    registry.get(expr_id)\n                });\n\n                Ok(Arc::new(CheckOverflow::new(\n                    child,\n                    data_type,\n                    fail_on_error,\n                    spark_expr.expr_id,\n                    query_context,\n                )))\n            }\n            ExprStruct::ScalarFunc(expr) => {\n                let func = self.create_scalar_function_expr(expr, input_schema);\n                match expr.func.as_ref() {\n                    // DataFusion's map_extract returns an array of matching entries even when\n                    // looking up a single key, but Spark expects a single value, so wrap the\n                    // result in an additional list extraction\n                    \"map_extract\" => Ok(Arc::new(ListExtract::new(\n                        func?,\n                        Arc::new(Literal::new(ScalarValue::Int32(Some(1)))),\n                        None,\n                        true,\n                        false,\n                        None, // No expr_id for internal map_extract wrapper\n                        Arc::clone(&self.query_context_registry),\n                    ))),\n                    // DataFusion 49 hardcodes the return type of the built-in MD5 function as\n                    // UTF8View, which is not yet supported in Comet, so force a cast to UTF8.\n                    // This can be removed once UTF8View is supported.\n                    \"md5\" => Ok(Arc::new(Cast::new(\n                        func?,\n                        DataType::Utf8,\n                        SparkCastOptions::new_without_timezone(EvalMode::Try, true),\n                        None,\n                        None,\n                    ))),\n                    _ => func,\n                }\n            }\n            ExprStruct::CaseWhen(case_when) => {\n                let when_then_pairs = case_when\n                    .when\n                    .iter()\n                    .map(|x| self.create_expr(x, Arc::clone(&input_schema)))\n                    .zip(\n                        case_when\n                            .then\n                            .iter()\n                            .map(|then| self.create_expr(then, Arc::clone(&input_schema))),\n                    )\n                    .try_fold(Vec::new(), |mut acc, (a, b)| {\n                        acc.push((a?, b?));\n                        Ok::<Vec<(Arc<dyn PhysicalExpr>, Arc<dyn PhysicalExpr>)>, ExecutionError>(\n                            acc,\n                        )\n                    })?;\n\n                let else_phy_expr = case_when\n                    .else_expr\n                    .as_ref()\n                    .map(|e| self.create_expr(e, Arc::clone(&input_schema)))\n                    .transpose()?;\n\n                create_case_expr(when_then_pairs, else_phy_expr, &input_schema)\n            }\n            ExprStruct::In(expr) => {\n                let value =\n                    self.create_expr(expr.in_value.as_ref().unwrap(), Arc::clone(&input_schema))?;\n                let list = expr\n                    .lists\n       
              .iter()\n                    .map(|x| self.create_expr(x, Arc::clone(&input_schema)))\n                    .collect::<Result<Vec<_>, _>>()?;\n\n                in_list(value, list, &expr.negated, input_schema.as_ref()).map_err(|e| e.into())\n            }\n            ExprStruct::If(expr) => {\n                let if_expr =\n                    self.create_expr(expr.if_expr.as_ref().unwrap(), Arc::clone(&input_schema))?;\n                let true_expr =\n                    self.create_expr(expr.true_expr.as_ref().unwrap(), Arc::clone(&input_schema))?;\n                let false_expr =\n                    self.create_expr(expr.false_expr.as_ref().unwrap(), input_schema)?;\n                Ok(Arc::new(IfExpr::new(if_expr, true_expr, false_expr)))\n            }\n            ExprStruct::NormalizeNanAndZero(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), input_schema)?;\n                let data_type = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                Ok(Arc::new(NormalizeNaNAndZero::new(data_type, child)))\n            }\n            ExprStruct::Subquery(expr) => {\n                let id = expr.id;\n                let data_type = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                Ok(Arc::new(Subquery::new(self.exec_context_id, id, data_type)))\n            }\n            ExprStruct::BloomFilterMightContain(expr) => {\n                let bloom_filter_expr = self.create_expr(\n                    expr.bloom_filter.as_ref().unwrap(),\n                    Arc::clone(&input_schema),\n                )?;\n\n                // We only pass the value as an argument; the bloom filter itself is built at\n                // planning time.\n                let value_expr = self.create_expr(expr.value.as_ref().unwrap(), input_schema)?;\n                let args = vec![value_expr];\n                let udf =\n                    ScalarUDF::new_from_impl(BloomFilterMightContain::try_new(bloom_filter_expr)?);\n\n                let field_ref = Arc::new(Field::new(\"might_contain\", DataType::Boolean, true));\n                let expr: ScalarFunctionExpr = ScalarFunctionExpr::new(\n                    \"might_contain\",\n                    Arc::new(udf),\n                    args,\n                    field_ref,\n                    Arc::new(ConfigOptions::default()),\n                );\n                Ok(Arc::new(expr))\n            }\n            ExprStruct::CreateNamedStruct(expr) => {\n                let values = expr\n                    .values\n                    .iter()\n                    .map(|expr| self.create_expr(expr, Arc::clone(&input_schema)))\n                    .collect::<Result<Vec<_>, _>>()?;\n                let names = expr.names.clone();\n                Ok(Arc::new(CreateNamedStruct::new(values, names)))\n            }\n            ExprStruct::GetStructField(expr) => {\n                let child =\n                    self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n                Ok(Arc::new(GetStructField::new(child, expr.ordinal as usize)))\n            }\n            ExprStruct::ToJson(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), input_schema)?;\n                Ok(Arc::new(ToJson::new(child, &expr.timezone)))\n            }\n            ExprStruct::ToPrettyString(expr) => {\n                let mut spark_cast_options =\n                    SparkCastOptions::new(EvalMode::Try, &expr.timezone, true);\n                let null_string = 
\"NULL\";\n                spark_cast_options.null_string = null_string.to_string();\n                spark_cast_options.binary_output_style =\n                    from_protobuf_binary_output_style(expr.binary_output_style).ok();\n                let child = self.create_expr(expr.child.as_ref().unwrap(), input_schema)?;\n                let cast = Arc::new(Cast::new(\n                    Arc::clone(&child),\n                    DataType::Utf8,\n                    spark_cast_options,\n                    None,\n                    None,\n                ));\n                Ok(Arc::new(IfExpr::new(\n                    Arc::new(IsNullExpr::new(child)),\n                    Arc::new(Literal::new(ScalarValue::Utf8(Some(\n                        null_string.to_string(),\n                    )))),\n                    cast,\n                )))\n            }\n            ExprStruct::ListExtract(expr) => {\n                let child =\n                    self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n                let ordinal =\n                    self.create_expr(expr.ordinal.as_ref().unwrap(), Arc::clone(&input_schema))?;\n                let default_value = expr\n                    .default_value\n                    .as_ref()\n                    .map(|e| self.create_expr(e, Arc::clone(&input_schema)))\n                    .transpose()?;\n                Ok(Arc::new(ListExtract::new(\n                    child,\n                    ordinal,\n                    default_value,\n                    expr.one_based,\n                    expr.fail_on_error,\n                    spark_expr.expr_id,\n                    Arc::clone(&self.query_context_registry),\n                )))\n            }\n            ExprStruct::GetArrayStructFields(expr) => {\n                let child =\n                    self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n\n                Ok(Arc::new(GetArrayStructFields::new(\n                    child,\n                    expr.ordinal as usize,\n                )))\n            }\n            ExprStruct::ArrayInsert(expr) => {\n                let src_array_expr = self.create_expr(\n                    expr.src_array_expr.as_ref().unwrap(),\n                    Arc::clone(&input_schema),\n                )?;\n                let pos_expr =\n                    self.create_expr(expr.pos_expr.as_ref().unwrap(), Arc::clone(&input_schema))?;\n                let item_expr =\n                    self.create_expr(expr.item_expr.as_ref().unwrap(), Arc::clone(&input_schema))?;\n                Ok(Arc::new(ArrayInsert::new(\n                    src_array_expr,\n                    pos_expr,\n                    item_expr,\n                    expr.legacy_negative_index,\n                )))\n            }\n            ExprStruct::ToCsv(expr) => {\n                let csv_struct_expr =\n                    self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&input_schema))?;\n                let options = expr.options.clone().unwrap();\n                let csv_write_options = CsvWriteOptions::new(\n                    options.delimiter,\n                    options.quote,\n                    options.escape,\n                    options.null_value,\n                    options.quote_all,\n                    options.ignore_leading_white_space,\n                    options.ignore_trailing_white_space,\n                );\n                Ok(Arc::new(ToCsv::new(\n                    csv_struct_expr,\n         
           &options.timezone,\n                    csv_write_options,\n                )))\n            }\n            expr => Err(GeneralError(format!(\"Not implemented: {expr:?}\"))),\n        }\n    }\n\n    /// Create a DataFusion physical sort expression from Spark physical expression\n    fn create_sort_expr<'a>(\n        &'a self,\n        spark_expr: &'a Expr,\n        input_schema: SchemaRef,\n    ) -> Result<PhysicalSortExpr, ExecutionError> {\n        match spark_expr.expr_struct.as_ref().unwrap() {\n            ExprStruct::SortOrder(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), input_schema)?;\n                let descending = expr.direction == 1;\n                let nulls_first = expr.null_ordering == 0;\n\n                let options = SortOptions {\n                    descending,\n                    nulls_first,\n                };\n\n                Ok(PhysicalSortExpr {\n                    expr: child,\n                    options,\n                })\n            }\n            expr => Err(GeneralError(format!(\"{expr:?} isn't a SortOrder\"))),\n        }\n    }\n\n    #[allow(clippy::too_many_arguments)]\n    pub fn create_binary_expr(\n        &self,\n        spark_expr: &Expr,\n        left: &Expr,\n        right: &Expr,\n        return_type: Option<&spark_expression::DataType>,\n        op: DataFusionOperator,\n        input_schema: SchemaRef,\n        eval_mode: EvalMode,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        self.create_binary_expr_with_options(\n            spark_expr,\n            left,\n            right,\n            return_type,\n            op,\n            input_schema,\n            BinaryExprOptions::default(),\n            eval_mode,\n        )\n    }\n\n    #[allow(clippy::too_many_arguments)]\n    pub fn create_binary_expr_with_options(\n        &self,\n        spark_expr: &Expr,\n        left: &Expr,\n        right: &Expr,\n        return_type: Option<&spark_expression::DataType>,\n        op: DataFusionOperator,\n        input_schema: SchemaRef,\n        options: BinaryExprOptions,\n        eval_mode: EvalMode,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        // Look up query context from registry if expr_id is present\n        let query_context = spark_expr.expr_id.and_then(|expr_id| {\n            let registry = &self.query_context_registry;\n            registry.get(expr_id)\n        });\n\n        let left = self.create_expr(left, Arc::clone(&input_schema))?;\n        let right = self.create_expr(right, Arc::clone(&input_schema))?;\n        match (\n            &op,\n            left.data_type(&input_schema),\n            right.data_type(&input_schema),\n        ) {\n            (\n                DataFusionOperator::Plus | DataFusionOperator::Minus | DataFusionOperator::Multiply,\n                Ok(DataType::Decimal128(p1, s1)),\n                Ok(DataType::Decimal128(p2, s2)),\n            ) if ((op == DataFusionOperator::Plus || op == DataFusionOperator::Minus)\n                && max(s1, s2) as u8 + max(p1 - s1 as u8, p2 - s2 as u8)\n                    >= DECIMAL128_MAX_PRECISION)\n                || (op == DataFusionOperator::Multiply && p1 + p2 >= DECIMAL128_MAX_PRECISION) =>\n            {\n                let data_type = return_type.map(to_arrow_datatype).unwrap();\n                let (p_out, s_out) = match &data_type {\n                    DataType::Decimal128(p, s) => (*p, *s),\n                    dt => {\n                        return 
Err(ExecutionError::GeneralError(format!(\n                            \"Expected Decimal128 return type, got {dt:?}\"\n                        )))\n                    }\n                };\n                let wide_op = match op {\n                    DataFusionOperator::Plus => WideDecimalOp::Add,\n                    DataFusionOperator::Minus => WideDecimalOp::Subtract,\n                    DataFusionOperator::Multiply => WideDecimalOp::Multiply,\n                    _ => unreachable!(),\n                };\n                Ok(Arc::new(WideDecimalBinaryExpr::new(\n                    left, right, wide_op, p_out, s_out, eval_mode,\n                )))\n            }\n            (\n                DataFusionOperator::Divide,\n                Ok(DataType::Decimal128(_p1, _s1)),\n                Ok(DataType::Decimal128(_p2, _s2)),\n            ) => {\n                let data_type = return_type.map(to_arrow_datatype).unwrap();\n                let func_name = if options.is_integral_div {\n                    // Decimal256 division in Arrow may overflow, so we still need this variant of decimal_div.\n                    // Otherwise, we may be able to reuse the previous case-match instead of here,\n                    // see more: https://github.com/apache/datafusion-comet/pull/1428#discussion_r1972648463\n                    \"decimal_integral_div\"\n                } else {\n                    \"decimal_div\"\n                };\n                let fun_expr = create_comet_physical_fun_with_eval_mode(\n                    func_name,\n                    data_type.clone(),\n                    &self.session_ctx.state(),\n                    None,\n                    eval_mode,\n                )?;\n                Ok(Arc::new(ScalarFunctionExpr::new(\n                    func_name,\n                    fun_expr,\n                    vec![left, right],\n                    Arc::new(Field::new(func_name, data_type, true)),\n                    Arc::new(ConfigOptions::default()),\n                )))\n            }\n            // Date +/- Int8/Int16/Int32: DataFusion 52's arrow-arith kernels only\n            // support Date32 +/- Interval types, not raw integers. 
Use the Spark\n            // date_add / date_sub UDFs, which handle Int8/Int16/Int32 directly.\n            (\n                DataFusionOperator::Plus,\n                Ok(DataType::Date32),\n                Ok(DataType::Int8 | DataType::Int16 | DataType::Int32),\n            ) => {\n                let udf = Arc::new(ScalarUDF::new_from_impl(\n                    datafusion_spark::function::datetime::date_add::SparkDateAdd::new(),\n                ));\n                Ok(Arc::new(ScalarFunctionExpr::new(\n                    \"date_add\",\n                    udf,\n                    vec![left, right],\n                    Arc::new(Field::new(\"date_add\", DataType::Date32, true)),\n                    Arc::new(ConfigOptions::default()),\n                )))\n            }\n            (\n                DataFusionOperator::Minus,\n                Ok(DataType::Date32),\n                Ok(DataType::Int8 | DataType::Int16 | DataType::Int32),\n            ) => {\n                let udf = Arc::new(ScalarUDF::new_from_impl(\n                    datafusion_spark::function::datetime::date_sub::SparkDateSub::new(),\n                ));\n                Ok(Arc::new(ScalarFunctionExpr::new(\n                    \"date_sub\",\n                    udf,\n                    vec![left, right],\n                    Arc::new(Field::new(\"date_sub\", DataType::Date32, true)),\n                    Arc::new(ConfigOptions::default()),\n                )))\n            }\n            _ => {\n                let data_type = return_type.map(to_arrow_datatype).unwrap();\n                if [EvalMode::Try, EvalMode::Ansi].contains(&eval_mode)\n                    && (data_type.is_integer()\n                        || (data_type.is_floating() && op == DataFusionOperator::Divide))\n                {\n                    let op_str = match op {\n                        DataFusionOperator::Plus => \"checked_add\",\n                        DataFusionOperator::Minus => \"checked_sub\",\n                        DataFusionOperator::Multiply => \"checked_mul\",\n                        DataFusionOperator::Divide => \"checked_div\",\n                        _ => {\n                            todo!(\"ANSI mode is not yet implemented for this operator\");\n                        }\n                    };\n                    let fun_expr = create_comet_physical_fun_with_eval_mode(\n                        op_str,\n                        data_type.clone(),\n                        &self.session_ctx.state(),\n                        None,\n                        eval_mode,\n                    )?;\n                    let scalar_expr = Arc::new(ScalarFunctionExpr::new(\n                        op_str,\n                        fun_expr,\n                        vec![left, right],\n                        Arc::new(Field::new(op_str, data_type, true)),\n                        Arc::new(ConfigOptions::default()),\n                    ));\n\n                    // Wrap with CheckedBinaryExpr to add query_context to errors\n                    use crate::execution::expressions::arithmetic::CheckedBinaryExpr;\n                    Ok(Arc::new(CheckedBinaryExpr::new(scalar_expr, query_context)))\n                } else {\n                    Ok(Arc::new(BinaryExpr::new(left, op, right)))\n                }\n            }\n        }\n    }\n\n    /// Create a DataFusion physical plan from a Spark physical plan. There is a level of\n    /// abstraction here: a tree of SparkPlan nodes is returned. 
There is a 1:1 mapping from a\n    /// protobuf Operator (that represents a Spark operator) to a native SparkPlan struct. We\n    /// need this 1:1 mapping so that we can report metrics back to Spark. The native execution\n    /// plan that is generated for each Operator is sometimes a single ExecutionPlan, but in some\n    /// cases we generate a tree of ExecutionPlans, and we need to collect metrics for all of\n    /// these plans, so we store references to them in the SparkPlan struct.\n    ///\n    /// `inputs` is a vector of input source IDs. It is used to create `ScanExec`s. Each `ScanExec`\n    /// will be assigned a unique ID from `inputs`, and the ID will be used to identify the input\n    /// source at the JNI API.\n    ///\n    /// Note that `ScanExec` pulls its initial input batch during initialization because we need\n    /// to know the exact schema (not only the data type but also the dictionary encoding) at\n    /// `ScanExec`: some DataFusion operators, e.g., `ProjectionExec`, capture the child\n    /// operator's schema during initialization and use it later for `RecordBatch`. We may be\n    /// able to get rid of this once `RecordBatch` relaxes its schema check.\n    ///\n    /// Note that we return the created `Scan`s, which are kept at the JNI API; JNI calls use them\n    /// to feed new input batches from the Spark JVM side.\n    pub(crate) fn create_plan<'a>(\n        &'a self,\n        spark_plan: &'a Operator,\n        inputs: &mut Vec<Arc<Global<JObject<'static>>>>,\n        partition_count: usize,\n    ) -> PlanCreationResult {\n        // Try to use the modular registry first - this automatically handles any registered operator types\n        if OperatorRegistry::global().can_handle(spark_plan) {\n            return OperatorRegistry::global().create_plan(\n                spark_plan,\n                inputs,\n                partition_count,\n                self,\n            );\n        }\n\n        // Fall back to the original monolithic match for other operators\n        let children = &spark_plan.children;\n        match spark_plan.op_struct.as_ref().unwrap() {\n            OpStruct::Filter(filter) => {\n                assert_eq!(children.len(), 1);\n                let (scans, shuffle_scans, child) =\n                    self.create_plan(&children[0], inputs, partition_count)?;\n                let predicate =\n                    self.create_expr(filter.predicate.as_ref().unwrap(), child.schema())?;\n\n                let filter: Arc<dyn ExecutionPlan> = Arc::new(FilterExec::try_new(\n                    predicate,\n                    Arc::clone(&child.native_plan),\n                )?);\n\n                Ok((\n                    scans,\n                    shuffle_scans,\n                    Arc::new(SparkPlan::new(spark_plan.plan_id, filter, vec![child])),\n                ))\n            }\n            OpStruct::HashAgg(agg) => {\n                assert_eq!(children.len(), 1);\n                let (scans, shuffle_scans, child) =\n                    self.create_plan(&children[0], inputs, partition_count)?;\n\n                let group_exprs: PhyExprResult = agg\n                    .grouping_exprs\n                    .iter()\n                    .enumerate()\n                    .map(|(idx, expr)| {\n                        self.create_expr(expr, child.schema())\n                            .map(|r| (r, format!(\"col_{idx}\")))\n                    })\n                    .collect();\n                let group_by = PhysicalGroupBy::new_single(group_exprs?);\n 
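               // Illustrative sketch (names here are hypothetical, not from the proto): for\n                // GROUP BY a, b the grouping expressions arrive as [a, b] and are registered\n                // as [(expr_a, \"col_0\"), (expr_b, \"col_1\")], so the aggregate's output schema\n                // exposes the grouping columns under synthetic `col_N` names. This is the\n                // schema that any `result_exprs` below are later bound against.\n 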
               let schema = child.schema();\n\n                let mode = if agg.mode == 0 {\n                    DFAggregateMode::Partial\n                } else {\n                    DFAggregateMode::Final\n                };\n\n                let agg_exprs: PhyAggResult = agg\n                    .agg_exprs\n                    .iter()\n                    .map(|expr| self.create_agg_expr(expr, Arc::clone(&schema)))\n                    .collect();\n\n                let aggr_expr = agg_exprs?.into_iter().map(Arc::new).collect();\n\n                // Build per-aggregate filter expressions from the FILTER (WHERE ...) clause.\n                // Filters are only present in Partial mode; Final/PartialMerge always get None.\n                let filter_exprs: Result<Vec<Option<Arc<dyn PhysicalExpr>>>, ExecutionError> = agg\n                    .agg_exprs\n                    .iter()\n                    .map(|expr| {\n                        if let Some(f) = expr.filter.as_ref() {\n                            self.create_expr(f, Arc::clone(&schema)).map(Some)\n                        } else {\n                            Ok(None)\n                        }\n                    })\n                    .collect();\n\n                let aggregate: Arc<dyn ExecutionPlan> = Arc::new(\n                    datafusion::physical_plan::aggregates::AggregateExec::try_new(\n                        mode,\n                        group_by,\n                        aggr_expr,\n                        filter_exprs?,\n                        Arc::clone(&child.native_plan),\n                        Arc::clone(&schema),\n                    )?,\n                );\n                let result_exprs: PhyExprResult = agg\n                    .result_exprs\n                    .iter()\n                    .enumerate()\n                    .map(|(idx, expr)| {\n                        self.create_expr(expr, aggregate.schema())\n                            .map(|r| (r, format!(\"col_{idx}\")))\n                    })\n                    .collect();\n\n                if agg.result_exprs.is_empty() {\n                    Ok((\n                        scans,\n                        shuffle_scans,\n                        Arc::new(SparkPlan::new(spark_plan.plan_id, aggregate, vec![child])),\n                    ))\n                } else {\n                    // For final aggregation, DF's hash aggregate exec doesn't support Spark's\n                    // aggregate result expressions like `COUNT(col) + 1`, and instead relies\n                    // on an additional `ProjectionExec` to handle the case. 
Therefore, here we'll\n                    // add a projection node on top of the aggregate node.\n                    //\n                    // Note that `result_exprs` should only be set for final aggregation on the\n                    // Spark side.\n                    let projection = Arc::new(ProjectionExec::try_new(\n                        result_exprs?,\n                        Arc::clone(&aggregate),\n                    )?);\n                    Ok((\n                        scans,\n                        shuffle_scans,\n                        Arc::new(SparkPlan::new_with_additional(\n                            spark_plan.plan_id,\n                            projection,\n                            vec![child],\n                            vec![aggregate],\n                        )),\n                    ))\n                }\n            }\n            OpStruct::Limit(limit) => {\n                assert_eq!(children.len(), 1);\n                let num = limit.limit;\n                let offset: i32 = limit.offset;\n                if num != -1 && offset > num {\n                    return Err(GeneralError(format!(\n                        \"Invalid limit/offset combination: [{num}, {offset}]\"\n                    )));\n                }\n                let (scans, shuffle_scans, child) =\n                    self.create_plan(&children[0], inputs, partition_count)?;\n                let limit: Arc<dyn ExecutionPlan> = if offset == 0 {\n                    Arc::new(LocalLimitExec::new(\n                        Arc::clone(&child.native_plan),\n                        num as usize,\n                    ))\n                } else {\n                    let fetch = if num == -1 {\n                        None\n                    } else {\n                        Some((num - offset) as usize)\n                    };\n                    Arc::new(GlobalLimitExec::new(\n                        Arc::clone(&child.native_plan),\n                        offset as usize,\n                        fetch,\n                    ))\n                };\n                Ok((\n                    scans,\n                    shuffle_scans,\n                    Arc::new(SparkPlan::new(spark_plan.plan_id, limit, vec![child])),\n                ))\n            }\n            OpStruct::Sort(sort) => {\n                assert_eq!(children.len(), 1);\n                let (scans, shuffle_scans, child) =\n                    self.create_plan(&children[0], inputs, partition_count)?;\n\n                let exprs: Result<Vec<PhysicalSortExpr>, ExecutionError> = sort\n                    .sort_orders\n                    .iter()\n                    .map(|expr| self.create_sort_expr(expr, child.schema()))\n                    .collect();\n\n                let fetch = sort.fetch.map(|num| num as usize);\n\n                let mut sort_exec: Arc<dyn ExecutionPlan> = Arc::new(\n                    SortExec::new(\n                        LexOrdering::new(exprs?).unwrap(),\n                        Arc::clone(&child.native_plan),\n                    )\n                    .with_fetch(fetch),\n                );\n\n                if let Some(skip) = sort.skip.filter(|&n| n > 0).map(|n| n as usize) {\n                    sort_exec = Arc::new(GlobalLimitExec::new(sort_exec, skip, None));\n                }\n\n                Ok((\n                    scans,\n                    shuffle_scans,\n                    Arc::new(SparkPlan::new(\n                        spark_plan.plan_id,\n                        
sort_exec,\n                        vec![Arc::clone(&child)],\n                    )),\n                ))\n            }\n            OpStruct::NativeScan(scan) => {\n                // Extract common data and single partition's file list\n                // Per-partition injection happens in Scala before sending to native\n                let common = scan\n                    .common\n                    .as_ref()\n                    .ok_or_else(|| GeneralError(\"NativeScan missing common data\".into()))?;\n\n                let data_schema =\n                    convert_spark_types_to_arrow_schema(common.data_schema.as_slice());\n                let required_schema: SchemaRef =\n                    convert_spark_types_to_arrow_schema(common.required_schema.as_slice());\n                let partition_schema: SchemaRef =\n                    convert_spark_types_to_arrow_schema(common.partition_schema.as_slice());\n                let projection_vector: Vec<usize> = common\n                    .projection_vector\n                    .iter()\n                    .map(|offset| *offset as usize)\n                    .collect();\n\n                let partition_files = scan\n                    .file_partition\n                    .as_ref()\n                    .ok_or_else(|| GeneralError(\"NativeScan missing file_partition\".into()))?;\n\n                // Check if this partition has any files (bucketed scan with bucket pruning may have empty partitions)\n                if partition_files.partitioned_file.is_empty() {\n                    let empty_exec = Arc::new(EmptyExec::new(required_schema));\n                    return Ok((\n                        vec![],\n                        vec![],\n                        Arc::new(SparkPlan::new(spark_plan.plan_id, empty_exec, vec![])),\n                    ));\n                }\n\n                // Convert the Spark expressions to Physical expressions\n                let data_filters: Result<Vec<Arc<dyn PhysicalExpr>>, ExecutionError> = common\n                    .data_filters\n                    .iter()\n                    .map(|expr| self.create_expr(expr, Arc::clone(&required_schema)))\n                    .collect();\n\n                let default_values: Option<HashMap<Column, ScalarValue>> = if !common\n                    .default_values\n                    .is_empty()\n                {\n                    // We have default values. 
Extract the two lists (same length) of values and\n                    // indexes in the schema, and then create a HashMap to use in the SchemaMapper.\n                    let default_values: Result<Vec<ScalarValue>, DataFusionError> = common\n                        .default_values\n                        .iter()\n                        .map(|expr| {\n                            let literal = self.create_expr(expr, Arc::clone(&required_schema))?;\n                            let df_literal = literal\n                                .as_any()\n                                .downcast_ref::<DataFusionLiteral>()\n                                .ok_or_else(|| {\n                                GeneralError(\"Expected literal of default value.\".to_string())\n                            })?;\n                            Ok(df_literal.value().clone())\n                        })\n                        .collect();\n                    let default_values = default_values?;\n                    let default_values_indexes: Vec<usize> = common\n                        .default_values_indexes\n                        .iter()\n                        .map(|offset| *offset as usize)\n                        .collect();\n                    Some(\n                        default_values_indexes\n                            .into_iter()\n                            .zip(default_values)\n                            .map(|(idx, scalar_value)| {\n                                let field = required_schema.field(idx);\n                                let column = Column::new(field.name().as_str(), idx);\n                                (column, scalar_value)\n                            })\n                            .collect(),\n                    )\n                } else {\n                    None\n                };\n\n                // Get one file from this partition (we know it's not empty due to early return above)\n                let one_file = partition_files\n                    .partitioned_file\n                    .first()\n                    .map(|f| f.file_path.clone())\n                    .expect(\"partition should have files after empty check\");\n\n                let object_store_options: HashMap<String, String> = common\n                    .object_store_options\n                    .iter()\n                    .map(|(k, v)| (k.clone(), v.clone()))\n                    .collect();\n                let (object_store_url, _) = prepare_object_store_with_configs(\n                    self.session_ctx.runtime_env(),\n                    one_file,\n                    &object_store_options,\n                )?;\n\n                // Get files for this partition\n                let files = self.get_partitioned_files(partition_files)?;\n                let file_groups: Vec<Vec<PartitionedFile>> = vec![files];\n\n                let scan = init_datasource_exec(\n                    required_schema,\n                    Some(data_schema),\n                    Some(partition_schema),\n                    object_store_url,\n                    file_groups,\n                    Some(projection_vector),\n                    Some(data_filters?),\n                    default_values,\n                    common.session_timezone.as_str(),\n                    common.case_sensitive,\n                    self.session_ctx(),\n                    common.encryption_enabled,\n                )?;\n                Ok((\n                    vec![],\n                    vec![],\n                    
Arc::new(SparkPlan::new(spark_plan.plan_id, scan, vec![])),\n                ))\n            }\n            OpStruct::CsvScan(scan) => {\n                let data_schema = convert_spark_types_to_arrow_schema(scan.data_schema.as_slice());\n                let partition_schema =\n                    convert_spark_types_to_arrow_schema(scan.partition_schema.as_slice());\n                let projection_vector: Vec<usize> =\n                    scan.projection_vector.iter().map(|i| *i as usize).collect();\n                let object_store_options: HashMap<String, String> = scan\n                    .object_store_options\n                    .iter()\n                    .map(|(k, v)| (k.clone(), v.clone()))\n                    .collect();\n                let one_file = scan\n                    .file_partitions\n                    .first()\n                    .and_then(|f| f.partitioned_file.first())\n                    .map(|f| f.file_path.clone())\n                    .ok_or(GeneralError(\"Failed to locate file\".to_string()))?;\n                let (object_store_url, _) = prepare_object_store_with_configs(\n                    self.session_ctx.runtime_env(),\n                    one_file,\n                    &object_store_options,\n                )?;\n                let files =\n                    self.get_partitioned_files(&scan.file_partitions[self.partition as usize])?;\n                let file_groups: Vec<Vec<PartitionedFile>> = vec![files];\n                let scan = init_csv_datasource_exec(\n                    object_store_url,\n                    file_groups,\n                    data_schema,\n                    partition_schema,\n                    projection_vector,\n                    &scan.csv_options.clone().unwrap(),\n                )?;\n                Ok((\n                    vec![],\n                    vec![],\n                    Arc::new(SparkPlan::new(spark_plan.plan_id, scan, vec![])),\n                ))\n            }\n            OpStruct::Scan(scan) => {\n                let data_types = scan.fields.iter().map(to_arrow_datatype).collect_vec();\n\n                // If this is not the test execution context used by unit tests, we should\n                // have at least one input source\n                if self.exec_context_id != TEST_EXEC_CONTEXT_ID && inputs.is_empty() {\n                    return Err(GeneralError(\"No input for scan\".to_string()));\n                }\n\n                // Consume the first input source for the scan\n                let input_source =\n                    if self.exec_context_id == TEST_EXEC_CONTEXT_ID && inputs.is_empty() {\n                        // For unit tests, the input batch is set on the scan directly via\n                        // `set_input_batch`.\n                        None\n                    } else {\n                        Some(inputs.remove(0))\n                    };\n\n                // The `ScanExec` operator will take actual arrays from Spark during execution\n                let scan = ScanExec::new(\n                    self.exec_context_id,\n                    input_source,\n                    &scan.source,\n                    data_types,\n                    scan.arrow_ffi_safe,\n                )?;\n\n                Ok((\n                    vec![scan.clone()],\n                    vec![],\n                    Arc::new(SparkPlan::new(spark_plan.plan_id, Arc::new(scan), vec![])),\n                ))\n            }\n            OpStruct::IcebergScan(scan) => {\n                // Extract common data and single 
partition's file tasks\n                // Per-partition injection happens in Scala before sending to native\n                let common = scan\n                    .common\n                    .as_ref()\n                    .ok_or_else(|| GeneralError(\"IcebergScan missing common data\".into()))?;\n\n                let required_schema =\n                    convert_spark_types_to_arrow_schema(common.required_schema.as_slice());\n                let catalog_properties: HashMap<String, String> = common\n                    .catalog_properties\n                    .iter()\n                    .map(|(k, v)| (k.clone(), v.clone()))\n                    .collect();\n                let metadata_location = common.metadata_location.clone();\n                let tasks = parse_file_scan_tasks_from_common(common, &scan.file_scan_tasks)?;\n                let data_file_concurrency_limit = common.data_file_concurrency_limit as usize;\n\n                let iceberg_scan = IcebergScanExec::new(\n                    metadata_location,\n                    required_schema,\n                    catalog_properties,\n                    tasks,\n                    data_file_concurrency_limit,\n                )?;\n\n                Ok((\n                    vec![],\n                    vec![],\n                    Arc::new(SparkPlan::new(\n                        spark_plan.plan_id,\n                        Arc::new(iceberg_scan),\n                        vec![],\n                    )),\n                ))\n            }\n            OpStruct::ShuffleWriter(writer) => {\n                assert_eq!(children.len(), 1);\n                let (scans, shuffle_scans, child) =\n                    self.create_plan(&children[0], inputs, partition_count)?;\n\n                let partitioning = self.create_partitioning(\n                    writer.partitioning.as_ref().unwrap(),\n                    child.native_plan.schema(),\n                )?;\n\n                let codec = match writer.codec.try_into() {\n                    Ok(SparkCompressionCodec::None) => Ok(CompressionCodec::None),\n                    Ok(SparkCompressionCodec::Snappy) => Ok(CompressionCodec::Snappy),\n                    Ok(SparkCompressionCodec::Zstd) => {\n                        Ok(CompressionCodec::Zstd(writer.compression_level))\n                    }\n                    Ok(SparkCompressionCodec::Lz4) => Ok(CompressionCodec::Lz4Frame),\n                    _ => Err(GeneralError(format!(\n                        \"Unsupported shuffle compression codec: {:?}\",\n                        writer.codec\n                    ))),\n                }?;\n\n                let write_buffer_size = writer.write_buffer_size as usize;\n                let shuffle_writer = Arc::new(ShuffleWriterExec::try_new(\n                    Arc::clone(&child.native_plan),\n                    partitioning,\n                    codec,\n                    writer.output_data_file.clone(),\n                    writer.output_index_file.clone(),\n                    writer.tracing_enabled,\n                    write_buffer_size,\n                )?);\n\n                Ok((\n                    scans,\n                    shuffle_scans,\n                    Arc::new(SparkPlan::new(\n                        spark_plan.plan_id,\n                        shuffle_writer,\n                        vec![Arc::clone(&child)],\n                    )),\n                ))\n            }\n            OpStruct::ParquetWriter(writer) => {\n                
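// Native Parquet writer. The codec mapping below mirrors the shuffle writer\n                // above, except that Zstd is currently pinned to level 3 instead of forwarding\n                // a configured `compression_level`, i.e. (illustrative):\n                //   SparkCompressionCodec::Zstd -> CompressionCodec::Zstd(3)\n                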
assert_eq!(children.len(), 1);\n                let (scans, shuffle_scans, child) =\n                    self.create_plan(&children[0], inputs, partition_count)?;\n\n                let codec = match writer.compression.try_into() {\n                    Ok(SparkCompressionCodec::None) => Ok(CompressionCodec::None),\n                    Ok(SparkCompressionCodec::Snappy) => Ok(CompressionCodec::Snappy),\n                    Ok(SparkCompressionCodec::Zstd) => Ok(CompressionCodec::Zstd(3)),\n                    Ok(SparkCompressionCodec::Lz4) => Ok(CompressionCodec::Lz4Frame),\n                    _ => Err(GeneralError(format!(\n                        \"Unsupported parquet compression codec: {:?}\",\n                        writer.compression\n                    ))),\n                }?;\n\n                let object_store_options: HashMap<String, String> = writer\n                    .object_store_options\n                    .iter()\n                    .map(|(k, v)| (k.clone(), v.clone()))\n                    .collect();\n\n                let parquet_writer = Arc::new(ParquetWriterExec::try_new(\n                    Arc::clone(&child.native_plan),\n                    writer.output_path.clone(),\n                    writer\n                        .work_dir\n                        .as_ref()\n                        .expect(\"work_dir is provided\")\n                        .clone(),\n                    writer.job_id.clone(),\n                    writer.task_attempt_id,\n                    codec,\n                    self.partition,\n                    writer.column_names.clone(),\n                    object_store_options,\n                )?);\n\n                Ok((\n                    scans,\n                    shuffle_scans,\n                    Arc::new(SparkPlan::new(\n                        spark_plan.plan_id,\n                        parquet_writer,\n                        vec![Arc::clone(&child)],\n                    )),\n                ))\n            }\n            OpStruct::Expand(expand) => {\n                assert_eq!(children.len(), 1);\n                let (scans, shuffle_scans, child) =\n                    self.create_plan(&children[0], inputs, partition_count)?;\n\n                let mut projections = vec![];\n                let mut projection = vec![];\n\n                expand.project_list.iter().try_for_each(|expr| {\n                    let expr = self.create_expr(expr, child.schema())?;\n                    projection.push(expr);\n\n                    if projection.len() == expand.num_expr_per_project as usize {\n                        projections.push(projection.clone());\n                        projection = vec![];\n                    }\n\n                    Ok::<(), ExecutionError>(())\n                })?;\n\n                assert!(\n                    !projections.is_empty(),\n                    \"Expand should have at least one projection\"\n                );\n\n                let datatypes = projections[0]\n                    .iter()\n                    .map(|expr| expr.data_type(&child.schema()))\n                    .collect::<Result<Vec<DataType>, _>>()?;\n                let fields: Vec<Field> = datatypes\n                    .iter()\n                    .enumerate()\n                    .map(|(idx, dt)| Field::new(format!(\"col_{idx}\"), dt.clone(), true))\n                    .collect();\n                let schema = Arc::new(Schema::new(fields));\n\n                // `Expand` operator keeps the input batch and expands it to 
multiple output\n                // batches. However, `ScanExec` will reuse input arrays for the next\n                // input batch. Therefore, we need to copy the input batch to avoid\n                // the data corruption. Note that we only need to copy the input batch\n                // if the child operator is `ScanExec`, because other operators after `ScanExec`\n                // will create new arrays for the output batch.\n                let input = Arc::clone(&child.native_plan);\n                let expand = Arc::new(ExpandExec::new(projections, input, schema));\n                Ok((\n                    scans,\n                    shuffle_scans,\n                    Arc::new(SparkPlan::new(spark_plan.plan_id, expand, vec![child])),\n                ))\n            }\n            OpStruct::Explode(explode) => {\n                assert_eq!(children.len(), 1);\n                let (scans, shuffle_scans, child) =\n                    self.create_plan(&children[0], inputs, partition_count)?;\n\n                // Create the expression for the array to explode\n                let child_expr = if let Some(child_expr) = &explode.child {\n                    self.create_expr(child_expr, child.schema())?\n                } else {\n                    return Err(ExecutionError::GeneralError(\n                        \"Explode operator requires a child expression\".to_string(),\n                    ));\n                };\n\n                // Create projection expressions for other columns\n                let projections: Vec<Arc<dyn PhysicalExpr>> = explode\n                    .project_list\n                    .iter()\n                    .map(|expr| self.create_expr(expr, child.schema()))\n                    .collect::<Result<Vec<_>, _>>()?;\n\n                // For UnnestExec, we need to add a projection to put the columns in the right order:\n                // 1. First add all projection columns\n                // 2. 
Then add the array column to be exploded\n                // Then UnnestExec will unnest the last column\n\n                // Use return_field() to get the proper column names from the expressions\n                let child_schema = child.schema();\n                let mut project_exprs: Vec<(Arc<dyn PhysicalExpr>, String)> = projections\n                    .iter()\n                    .map(|expr| {\n                        let field = expr\n                            .return_field(&child_schema)\n                            .expect(\"Failed to get field from expression\");\n                        let name = field.name().to_string();\n                        (Arc::clone(expr), name)\n                    })\n                    .collect();\n\n                // Add the array column as the last column\n                let array_field = child_expr\n                    .return_field(&child_schema)\n                    .expect(\"Failed to get field from array expression\");\n                let array_col_name = array_field.name().to_string();\n                project_exprs.push((Arc::clone(&child_expr), array_col_name.clone()));\n\n                // Create a projection to arrange columns as needed\n                let project_exec = Arc::new(ProjectionExec::try_new(\n                    project_exprs,\n                    Arc::clone(&child.native_plan),\n                )?);\n\n                // Get the input schema from the projection\n                let project_schema = project_exec.schema();\n\n                // Build the output schema for UnnestExec\n                // The output schema replaces the list column with its element type\n                let mut output_fields: Vec<Field> = Vec::new();\n\n                // Add all projection columns (non-array columns)\n                for i in 0..projections.len() {\n                    output_fields.push(project_schema.field(i).clone());\n                }\n\n                // Add the unnested array element field\n                // Extract the element type from the list/array type\n                let array_field = project_schema.field(projections.len());\n                let element_type = match array_field.data_type() {\n                    DataType::List(field) => field.data_type().clone(),\n                    dt => {\n                        return Err(ExecutionError::GeneralError(format!(\n                            \"Expected List type for explode, got {:?}\",\n                            dt\n                        )))\n                    }\n                };\n\n                // The output column has the same name as the input array column\n                // but with the element type instead of the list type\n                output_fields.push(Field::new(\n                    array_field.name(),\n                    element_type,\n                    true, // Element is nullable after unnesting\n                ));\n\n                let output_schema = Arc::new(Schema::new(output_fields));\n\n                // Use UnnestExec to explode the last column (the array column)\n                // ListUnnest specifies which column to unnest and the depth (1 for single level)\n                let list_unnest = ListUnnest {\n                    index_in_input_schema: projections.len(), // Index of the array column to unnest\n                    depth: 1, // Unnest one level (explode single array)\n                };\n\n                let unnest_options = UnnestOptions {\n                    preserve_nulls: explode.outer,\n     
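               // `preserve_nulls: true` corresponds to Spark's `explode_outer`: a row whose\n                    // array is NULL or empty is kept and yields NULL for the element column,\n                    // while plain `explode` drops it. Sketch: explode(array(1, 2)) -> two rows,\n                    // explode(NULL) -> no rows, explode_outer(NULL) -> one row with NULL.\n     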
               recursions: vec![],\n                };\n\n                let unnest_exec = Arc::new(UnnestExec::new(\n                    project_exec,\n                    vec![list_unnest],\n                    vec![], // No struct columns to unnest\n                    output_schema,\n                    unnest_options,\n                )?);\n\n                Ok((\n                    scans,\n                    shuffle_scans,\n                    Arc::new(SparkPlan::new(spark_plan.plan_id, unnest_exec, vec![child])),\n                ))\n            }\n            OpStruct::SortMergeJoin(join) => {\n                let (join_params, scans, shuffle_scans) = self.parse_join_parameters(\n                    inputs,\n                    children,\n                    &join.left_join_keys,\n                    &join.right_join_keys,\n                    join.join_type,\n                    &join.condition,\n                    partition_count,\n                )?;\n\n                let sort_options = join\n                    .sort_options\n                    .iter()\n                    .map(|sort_option| {\n                        let sort_expr = self\n                            .create_sort_expr(sort_option, join_params.left.schema())\n                            .unwrap();\n                        SortOptions {\n                            descending: sort_expr.options.descending,\n                            nulls_first: sort_expr.options.nulls_first,\n                        }\n                    })\n                    .collect();\n\n                let left = Arc::clone(&join_params.left.native_plan);\n                let right = Arc::clone(&join_params.right.native_plan);\n\n                let join = Arc::new(SortMergeJoinExec::try_new(\n                    Arc::clone(&left),\n                    Arc::clone(&right),\n                    join_params.join_on,\n                    join_params.join_filter,\n                    join_params.join_type,\n                    sort_options,\n                    // null does not equal null in a Spark join key. If the join key is\n                    // `EqualNullSafe`, Spark will rewrite it during planning.\n                    NullEquality::NullEqualsNothing,\n                )?);\n\n                Ok((\n                    scans,\n                    shuffle_scans,\n                    Arc::new(SparkPlan::new(\n                        spark_plan.plan_id,\n                        join,\n                        vec![\n                            Arc::clone(&join_params.left),\n                            Arc::clone(&join_params.right),\n                        ],\n                    )),\n                ))\n            }\n            OpStruct::HashJoin(join) => {\n                let (join_params, scans, shuffle_scans) = self.parse_join_parameters(\n                    inputs,\n                    children,\n                    &join.left_join_keys,\n                    &join.right_join_keys,\n                    join.join_type,\n                    &join.condition,\n                    partition_count,\n                )?;\n\n                let left = Arc::clone(&join_params.left.native_plan);\n                let right = Arc::clone(&join_params.right.native_plan);\n\n                let hash_join = Arc::new(HashJoinExec::try_new(\n                    left,\n                    right,\n                    join_params.join_on,\n                    join_params.join_filter,\n                    &join_params.join_type,\n                    None,\n                    PartitionMode::Partitioned,\n                    // null does not equal null in a Spark join key. If the join key is\n                    // `EqualNullSafe`, Spark will rewrite it during planning.\n                    NullEquality::NullEqualsNothing,\n                    // null_aware is for null-aware anti joins (NOT IN subqueries).\n                    // NullEquality controls whether NULL = NULL in join keys generally,\n                    // while null_aware changes anti-join semantics so any NULL changes\n                    // the entire result. 
Spark doesn't use this path (it rewrites\n                    // EqualNullSafe at plan time), so false is correct.\n                    false,\n                )?);\n\n                // If the hash join uses the right side as its build side, we need to swap the\n                // left and right inputs\n                if join.build_side == BuildSide::BuildLeft as i32 {\n                    Ok((\n                        scans,\n                        shuffle_scans,\n                        Arc::new(SparkPlan::new(\n                            spark_plan.plan_id,\n                            hash_join,\n                            vec![join_params.left, join_params.right],\n                        )),\n                    ))\n                } else {\n                    let swapped_hash_join =\n                        hash_join.as_ref().swap_inputs(PartitionMode::Partitioned)?;\n\n                    let mut additional_native_plans = vec![];\n                    if swapped_hash_join.as_any().is::<ProjectionExec>() {\n                        // a projection was added to the hash join\n                        additional_native_plans.push(Arc::clone(swapped_hash_join.children()[0]));\n                    }\n\n                    Ok((\n                        scans,\n                        shuffle_scans,\n                        Arc::new(SparkPlan::new_with_additional(\n                            spark_plan.plan_id,\n                            swapped_hash_join,\n                            vec![join_params.left, join_params.right],\n                            additional_native_plans,\n                        )),\n                    ))\n                }\n            }\n            OpStruct::Window(wnd) => {\n                let (scans, shuffle_scans, child) =\n                    self.create_plan(&children[0], inputs, partition_count)?;\n                let input_schema = child.schema();\n                let sort_exprs: Result<Vec<PhysicalSortExpr>, ExecutionError> = wnd\n                    .order_by_list\n                    .iter()\n                    .map(|expr| self.create_sort_expr(expr, Arc::clone(&input_schema)))\n                    .collect();\n\n                let partition_exprs: Result<Vec<Arc<dyn PhysicalExpr>>, ExecutionError> = wnd\n                    .partition_by_list\n                    .iter()\n                    .map(|expr| self.create_expr(expr, Arc::clone(&input_schema)))\n                    .collect();\n\n                let sort_exprs = &sort_exprs?;\n                let partition_exprs = &partition_exprs?;\n\n                let window_expr: Result<Vec<Arc<dyn WindowExpr>>, ExecutionError> = wnd\n                    .window_expr\n                    .iter()\n                    .map(|expr| {\n                        self.create_window_expr(\n                            expr,\n                            Arc::clone(&input_schema),\n                            partition_exprs,\n                            sort_exprs,\n                        )\n                    })\n                    .collect();\n\n                let window_agg = Arc::new(BoundedWindowAggExec::try_new(\n                    window_expr?,\n                    Arc::clone(&child.native_plan),\n                    InputOrderMode::Sorted,\n                    !partition_exprs.is_empty(),\n                )?);\n                Ok((\n                    scans,\n                    shuffle_scans,\n                    Arc::new(SparkPlan::new(spark_plan.plan_id, window_agg, vec![child])),\n                ))\n            }\n            
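// `ShuffleScan` mirrors `Scan`, but reads batches produced by Comet's native\n            // shuffle. Note that the created exec is returned in the second tuple slot\n            // (`shuffle_scans`) rather than the first (`scans`), so the JNI layer can feed\n            // shuffle input to it separately from regular scan input (see the `create_plan`\n            // doc comment above).\n            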
OpStruct::ShuffleScan(scan) => {\n                let data_types = scan.fields.iter().map(to_arrow_datatype).collect_vec();\n\n                if self.exec_context_id != TEST_EXEC_CONTEXT_ID && inputs.is_empty() {\n                    return Err(GeneralError(\"No input for shuffle scan\".to_string()));\n                }\n\n                let input_source =\n                    if self.exec_context_id == TEST_EXEC_CONTEXT_ID && inputs.is_empty() {\n                        None\n                    } else {\n                        Some(inputs.remove(0))\n                    };\n\n                let shuffle_scan =\n                    ShuffleScanExec::new(self.exec_context_id, input_source, data_types)?;\n\n                Ok((\n                    vec![],\n                    vec![shuffle_scan.clone()],\n                    Arc::new(SparkPlan::new(\n                        spark_plan.plan_id,\n                        Arc::new(shuffle_scan),\n                        vec![],\n                    )),\n                ))\n            }\n            _ => Err(GeneralError(format!(\n                \"Unsupported or unregistered operator type: {:?}\",\n                spark_plan.op_struct\n            ))),\n        }\n    }\n\n    #[allow(clippy::too_many_arguments)]\n    fn parse_join_parameters(\n        &self,\n        inputs: &mut Vec<Arc<Global<JObject<'static>>>>,\n        children: &[Operator],\n        left_join_keys: &[Expr],\n        right_join_keys: &[Expr],\n        join_type: i32,\n        condition: &Option<Expr>,\n        partition_count: usize,\n    ) -> Result<(JoinParameters, Vec<ScanExec>, Vec<ShuffleScanExec>), ExecutionError> {\n        assert_eq!(children.len(), 2);\n        let (mut left_scans, mut left_shuffle_scans, left) =\n            self.create_plan(&children[0], inputs, partition_count)?;\n        let (mut right_scans, mut right_shuffle_scans, right) =\n            self.create_plan(&children[1], inputs, partition_count)?;\n\n        left_scans.append(&mut right_scans);\n        left_shuffle_scans.append(&mut right_shuffle_scans);\n\n        let left_join_exprs: Vec<_> = left_join_keys\n            .iter()\n            .map(|expr| self.create_expr(expr, left.schema()))\n            .collect::<Result<Vec<_>, _>>()?;\n        let right_join_exprs: Vec<_> = right_join_keys\n            .iter()\n            .map(|expr| self.create_expr(expr, right.schema()))\n            .collect::<Result<Vec<_>, _>>()?;\n\n        let join_on = left_join_exprs\n            .into_iter()\n            .zip(right_join_exprs)\n            .collect::<Vec<_>>();\n\n        let join_type = match join_type.try_into() {\n            Ok(JoinType::Inner) => DFJoinType::Inner,\n            Ok(JoinType::LeftOuter) => DFJoinType::Left,\n            Ok(JoinType::RightOuter) => DFJoinType::Right,\n            Ok(JoinType::FullOuter) => DFJoinType::Full,\n            Ok(JoinType::LeftSemi) => DFJoinType::LeftSemi,\n            Ok(JoinType::LeftAnti) => DFJoinType::LeftAnti,\n            Err(_) => {\n                return Err(GeneralError(format!(\n                    \"Unsupported join type: {join_type:?}\"\n                )));\n            }\n        };\n\n        // Handle join filter as DataFusion `JoinFilter` struct\n        let join_filter = if let Some(expr) = condition {\n            let left_schema = left.schema();\n            let right_schema = right.schema();\n            let left_fields = left_schema.fields();\n            let right_fields = right_schema.fields();\n            let 
all_fields: Vec<_> = left_fields\n                .into_iter()\n                .chain(right_fields)\n                .cloned()\n                .collect();\n            let full_schema = Arc::new(Schema::new(all_fields));\n\n            // Because we cast dictionary arrays to plain arrays in the scan operator,\n            // we need to replace dictionary types with their value types for the join\n            // filter expression.\n            let fields: Vec<_> = full_schema\n                .fields()\n                .iter()\n                .map(|f| match f.data_type() {\n                    DataType::Dictionary(_, val_type) => Arc::new(Field::new(\n                        f.name(),\n                        val_type.as_ref().clone(),\n                        f.is_nullable(),\n                    )),\n                    _ => Arc::clone(f),\n                })\n                .collect();\n\n            let full_schema = Arc::new(Schema::new(fields));\n\n            let physical_expr = self.create_expr(expr, full_schema)?;\n            let (left_field_indices, right_field_indices) =\n                expr_to_columns(&physical_expr, left_fields.len(), right_fields.len())?;\n            let column_indices = JoinFilter::build_column_indices(\n                left_field_indices.clone(),\n                right_field_indices.clone(),\n            );\n\n            let filter_fields: Vec<Field> = left_field_indices\n                .clone()\n                .into_iter()\n                .map(|i| left.schema().field(i).clone())\n                .chain(\n                    right_field_indices\n                        .clone()\n                        .into_iter()\n                        .map(|i| right.schema().field(i).clone()),\n                )\n                // Because we cast dictionary arrays to plain arrays in the scan operator,\n                // we need to replace dictionary types with their value types for the join\n                // filter expression.\n                .map(|f| match f.data_type() {\n                    DataType::Dictionary(_, val_type) => {\n                        Field::new(f.name(), val_type.as_ref().clone(), f.is_nullable())\n                    }\n                    _ => f.clone(),\n                })\n                .collect_vec();\n\n            let filter_schema = Schema::new_with_metadata(filter_fields, HashMap::new());\n\n            // Rewrite the physical expression to use the new column indices:\n            // DataFusion's join filter is bound to an intermediate schema that contains\n            // only the fields used in the filter expression, while Spark's join filter\n            // expression is bound to the full schema.\n            let rewritten_physical_expr = rewrite_physical_expr(\n                physical_expr,\n                left_schema.fields.len(),\n                right_schema.fields.len(),\n                &left_field_indices,\n                &right_field_indices,\n            )?;\n\n            Some(JoinFilter::new(\n                rewritten_physical_expr,\n                column_indices,\n                filter_schema.into(),\n            ))\n        } else {\n            None\n        };\n\n        Ok((\n            JoinParameters {\n                left: Arc::clone(&left),\n                right: Arc::clone(&right),\n                join_on,\n                join_type,\n                join_filter,\n            },\n            left_scans,\n            left_shuffle_scans,\n        ))\n    }\n\n    /// Create a DataFusion physical aggregate expression from a Spark physical aggregate expression\n    fn create_agg_expr(\n        &self,\n        spark_expr: &AggExpr,\n        schema: SchemaRef,\n    ) -> Result<AggregateFunctionExpr, ExecutionError> {\n        // Register QueryContext if present\n        if let (Some(expr_id), Some(ctx_proto)) =\n            (spark_expr.expr_id, spark_expr.query_context.as_ref())\n        {\n            // Deserialize QueryContext from protobuf\n            let query_ctx = datafusion_comet_spark_expr::QueryContext::new(\n                ctx_proto.sql_text.clone(),\n                ctx_proto.start_index,\n                ctx_proto.stop_index,\n                ctx_proto.object_type.clone(),\n                ctx_proto.object_name.clone(),\n                ctx_proto.line,\n                ctx_proto.start_position,\n            );\n\n            // Register query context for error reporting\n            let registry = &self.query_context_registry;\n            registry.register(expr_id, query_ctx);\n        }\n\n        match spark_expr.expr_struct.as_ref().unwrap() {\n            AggExprStruct::Count(expr) => {\n                assert!(!expr.children.is_empty());\n                let children = expr\n                    .children\n                    .iter()\n                    .map(|child| self.create_expr(child, Arc::clone(&schema)))\n                    .collect::<Result<Vec<_>, _>>()?;\n\n                AggregateExprBuilder::new(count_udaf(), children)\n                    .schema(schema)\n                    .alias(\"count\")\n                    .with_ignore_nulls(false)\n                    .with_distinct(false)\n                    .build()\n                    .map_err(|e| ExecutionError::DataFusionError(e.to_string()))\n            }\n            AggExprStruct::Min(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                let datatype = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                let child = Arc::new(CastExpr::new(child, datatype.clone(), None));\n\n                AggregateExprBuilder::new(min_udaf(), vec![child])\n                    .schema(schema)\n                    .alias(\"min\")\n                    .with_ignore_nulls(false)\n                    .with_distinct(false)\n                    .build()\n                    .map_err(|e| ExecutionError::DataFusionError(e.to_string()))\n            }\n            AggExprStruct::Max(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                let datatype = 
to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                let child = Arc::new(CastExpr::new(child, datatype.clone(), None));\n\n                AggregateExprBuilder::new(max_udaf(), vec![child])\n                    .schema(schema)\n                    .alias(\"max\")\n                    .with_ignore_nulls(false)\n                    .with_distinct(false)\n                    .build()\n                    .map_err(|e| ExecutionError::DataFusionError(e.to_string()))\n            }\n            AggExprStruct::Sum(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                let datatype = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n\n                let builder = match datatype {\n                    DataType::Decimal128(_, _) => {\n                        let eval_mode = from_protobuf_eval_mode(expr.eval_mode)?;\n                        let func = AggregateUDF::new_from_impl(SumDecimal::try_new(\n                            datatype,\n                            eval_mode,\n                            spark_expr.expr_id,\n                            Arc::clone(&self.query_context_registry),\n                        )?);\n                        AggregateExprBuilder::new(Arc::new(func), vec![child])\n                    }\n                    DataType::Int8 | DataType::Int16 | DataType::Int32 | DataType::Int64 => {\n                        let eval_mode = from_protobuf_eval_mode(expr.eval_mode)?;\n                        let func =\n                            AggregateUDF::new_from_impl(SumInteger::try_new(datatype, eval_mode)?);\n                        AggregateExprBuilder::new(Arc::new(func), vec![child])\n                    }\n                    _ => {\n                        // Cast to the result data type of SUM if necessary; we should not expect\n                        // a cast failure since it should already have been checked on the Spark side\n                        let child =\n                            Arc::new(CastExpr::new(Arc::clone(&child), datatype.clone(), None));\n                        AggregateExprBuilder::new(sum_udaf(), vec![child])\n                    }\n                };\n                builder\n                    .schema(schema)\n                    .alias(\"sum\")\n                    .with_ignore_nulls(false)\n                    .with_distinct(false)\n                    .build()\n                    .map_err(|e| e.into())\n            }\n            AggExprStruct::Avg(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                let datatype = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                let input_datatype = to_arrow_datatype(expr.sum_datatype.as_ref().unwrap());\n\n                let builder = match datatype {\n                    DataType::Decimal128(_, _) => {\n                        let eval_mode = from_protobuf_eval_mode(expr.eval_mode)?;\n                        let func = AggregateUDF::new_from_impl(AvgDecimal::new(\n                            datatype,\n                            input_datatype,\n                            eval_mode,\n                            spark_expr.expr_id,\n                            Arc::clone(&self.query_context_registry),\n                        ));\n                        AggregateExprBuilder::new(Arc::new(func), vec![child])\n                    }\n                    _ => {\n                        // For all other numeric types 
(Int8/16/32/64, Float32/64):\n                        // Cast to Float64 for accumulation\n                        let child: Arc<dyn PhysicalExpr> =\n                            Arc::new(CastExpr::new(Arc::clone(&child), DataType::Float64, None));\n                        let func = AggregateUDF::new_from_impl(Avg::new(\"avg\", DataType::Float64));\n                        AggregateExprBuilder::new(Arc::new(func), vec![child])\n                    }\n                };\n                builder\n                    .schema(schema)\n                    .alias(\"avg\")\n                    .with_ignore_nulls(false)\n                    .with_distinct(false)\n                    .build()\n                    .map_err(|e| e.into())\n            }\n            AggExprStruct::First(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                let func = AggregateUDF::new_from_impl(FirstValue::new());\n\n                AggregateExprBuilder::new(Arc::new(func), vec![child])\n                    .schema(schema)\n                    .alias(\"first\")\n                    .with_ignore_nulls(expr.ignore_nulls)\n                    .with_distinct(false)\n                    .build()\n                    .map_err(|e| e.into())\n            }\n            AggExprStruct::Last(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                let func = AggregateUDF::new_from_impl(LastValue::new());\n\n                AggregateExprBuilder::new(Arc::new(func), vec![child])\n                    .schema(schema)\n                    .alias(\"last\")\n                    .with_ignore_nulls(expr.ignore_nulls)\n                    .with_distinct(false)\n                    .build()\n                    .map_err(|e| e.into())\n            }\n            AggExprStruct::BitAndAgg(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n\n                AggregateExprBuilder::new(bit_and_udaf(), vec![child])\n                    .schema(schema)\n                    .alias(\"bit_and\")\n                    .with_ignore_nulls(false)\n                    .with_distinct(false)\n                    .build()\n                    .map_err(|e| e.into())\n            }\n            AggExprStruct::BitOrAgg(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n\n                AggregateExprBuilder::new(bit_or_udaf(), vec![child])\n                    .schema(schema)\n                    .alias(\"bit_or\")\n                    .with_ignore_nulls(false)\n                    .with_distinct(false)\n                    .build()\n                    .map_err(|e| e.into())\n            }\n            AggExprStruct::BitXorAgg(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n\n                AggregateExprBuilder::new(bit_xor_udaf(), vec![child])\n                    .schema(schema)\n                    .alias(\"bit_xor\")\n                    .with_ignore_nulls(false)\n                    .with_distinct(false)\n                    .build()\n                    .map_err(|e| e.into())\n            }\n            AggExprStruct::Covariance(expr) => {\n                let child1 =\n                    self.create_expr(expr.child1.as_ref().unwrap(), Arc::clone(&schema))?;\n                let child2 =\n                    
self.create_expr(expr.child2.as_ref().unwrap(), Arc::clone(&schema))?;\n                let datatype = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                match expr.stats_type {\n                    0 => {\n                        let func = AggregateUDF::new_from_impl(Covariance::new(\n                            \"covariance\",\n                            datatype,\n                            StatsType::Sample,\n                            expr.null_on_divide_by_zero,\n                        ));\n\n                        Self::create_aggr_func_expr(\n                            \"covariance\",\n                            schema,\n                            vec![child1, child2],\n                            func,\n                        )\n                    }\n                    1 => {\n                        let func = AggregateUDF::new_from_impl(Covariance::new(\n                            \"covariance_pop\",\n                            datatype,\n                            StatsType::Population,\n                            expr.null_on_divide_by_zero,\n                        ));\n\n                        Self::create_aggr_func_expr(\n                            \"covariance_pop\",\n                            schema,\n                            vec![child1, child2],\n                            func,\n                        )\n                    }\n                    stats_type => Err(GeneralError(format!(\n                        \"Unknown StatisticsType {stats_type:?} for Covariance\"\n                    ))),\n                }\n            }\n            AggExprStruct::Variance(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                let datatype = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                match expr.stats_type {\n                    0 => {\n                        let func = AggregateUDF::new_from_impl(Variance::new(\n                            \"variance\",\n                            datatype,\n                            StatsType::Sample,\n                            expr.null_on_divide_by_zero,\n                        ));\n\n                        Self::create_aggr_func_expr(\"variance\", schema, vec![child], func)\n                    }\n                    1 => {\n                        let func = AggregateUDF::new_from_impl(Variance::new(\n                            \"variance_pop\",\n                            datatype,\n                            StatsType::Population,\n                            expr.null_on_divide_by_zero,\n                        ));\n\n                        Self::create_aggr_func_expr(\"variance_pop\", schema, vec![child], func)\n                    }\n                    stats_type => Err(GeneralError(format!(\n                        \"Unknown StatisticsType {stats_type:?} for Variance\"\n                    ))),\n                }\n            }\n            AggExprStruct::Stddev(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                let datatype = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                match expr.stats_type {\n                    0 => {\n                        let func = AggregateUDF::new_from_impl(Stddev::new(\n                            \"stddev\",\n                            datatype,\n                            StatsType::Sample,\n                            expr.null_on_divide_by_zero,\n         
               ));\n\n                        Self::create_aggr_func_expr(\"stddev\", schema, vec![child], func)\n                    }\n                    1 => {\n                        let func = AggregateUDF::new_from_impl(Stddev::new(\n                            \"stddev_pop\",\n                            datatype,\n                            StatsType::Population,\n                            expr.null_on_divide_by_zero,\n                        ));\n\n                        Self::create_aggr_func_expr(\"stddev_pop\", schema, vec![child], func)\n                    }\n                    stats_type => Err(GeneralError(format!(\n                        \"Unknown StatisticsType {stats_type:?} for Stddev\"\n                    ))),\n                }\n            }\n            AggExprStruct::Correlation(expr) => {\n                let child1 =\n                    self.create_expr(expr.child1.as_ref().unwrap(), Arc::clone(&schema))?;\n                let child2 =\n                    self.create_expr(expr.child2.as_ref().unwrap(), Arc::clone(&schema))?;\n                let datatype = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                let func = AggregateUDF::new_from_impl(Correlation::new(\n                    \"correlation\",\n                    datatype,\n                    expr.null_on_divide_by_zero,\n                ));\n                Self::create_aggr_func_expr(\"correlation\", schema, vec![child1, child2], func)\n            }\n            AggExprStruct::BloomFilterAgg(expr) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                let num_items =\n                    self.create_expr(expr.num_items.as_ref().unwrap(), Arc::clone(&schema))?;\n                let num_bits =\n                    self.create_expr(expr.num_bits.as_ref().unwrap(), Arc::clone(&schema))?;\n                let datatype = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                let func = AggregateUDF::new_from_impl(BloomFilterAgg::new(\n                    Arc::clone(&num_items),\n                    Arc::clone(&num_bits),\n                    datatype,\n                ));\n                Self::create_aggr_func_expr(\"bloom_filter_agg\", schema, vec![child], func)\n            }\n        }\n    }\n\n    /// Create a DataFusion window physical expression from a Spark physical expression\n    fn create_window_expr<'a>(\n        &'a self,\n        spark_expr: &'a spark_operator::WindowExpr,\n        input_schema: SchemaRef,\n        partition_by: &[Arc<dyn PhysicalExpr>],\n        sort_exprs: &[PhysicalSortExpr],\n    ) -> Result<Arc<dyn WindowExpr>, ExecutionError> {\n        let window_func_name: String;\n        let window_args: Vec<Arc<dyn PhysicalExpr>>;\n        if let Some(func) = &spark_expr.built_in_window_function {\n            match &func.expr_struct {\n                Some(ExprStruct::ScalarFunc(f)) => {\n                    window_func_name = f.func.clone();\n                    window_args = f\n                        .args\n                        .iter()\n                        .map(|expr| self.create_expr(expr, Arc::clone(&input_schema)))\n                        .collect::<Result<Vec<_>, ExecutionError>>()?;\n                }\n                other => {\n                    return Err(GeneralError(format!(\n                        \"{other:?} not supported for window function\"\n                    )))\n                }\n            };\n        } else if let Some(agg_func) = 
&spark_expr.agg_func {\n            let result = self.process_agg_func(agg_func, Arc::clone(&input_schema))?;\n            window_func_name = result.0;\n            window_args = result.1;\n        } else {\n            return Err(GeneralError(\n                \"Neither func nor agg_func is set\".to_string(),\n            ));\n        }\n\n        let window_func = match self.find_df_window_function(&window_func_name) {\n            Some(f) => f,\n            _ => {\n                return Err(GeneralError(format!(\n                    \"{window_func_name} not supported for window function\"\n                )))\n            }\n        };\n\n        let spark_window_frame = match spark_expr\n            .spec\n            .as_ref()\n            .and_then(|inner| inner.frame_specification.as_ref())\n        {\n            Some(frame) => frame,\n            _ => {\n                return Err(ExecutionError::DeserializeError(\n                    \"Cannot deserialize window frame\".to_string(),\n                ))\n            }\n        };\n\n        let units = match spark_window_frame.frame_type() {\n            WindowFrameType::Rows => WindowFrameUnits::Rows,\n            WindowFrameType::Range => WindowFrameUnits::Range,\n        };\n\n        let lower_bound: WindowFrameBound = match spark_window_frame\n            .lower_bound\n            .as_ref()\n            .and_then(|inner| inner.lower_frame_bound_struct.as_ref())\n        {\n            Some(l) => match l {\n                LowerFrameBoundStruct::UnboundedPreceding(_) => match units {\n                    WindowFrameUnits::Rows => {\n                        WindowFrameBound::Preceding(ScalarValue::UInt64(None))\n                    }\n                    WindowFrameUnits::Range => {\n                        WindowFrameBound::Preceding(ScalarValue::Int64(None))\n                    }\n                    WindowFrameUnits::Groups => {\n                        return Err(GeneralError(\n                            \"WindowFrameUnits::Groups is not supported.\".to_string(),\n                        ));\n                    }\n                },\n                LowerFrameBoundStruct::Preceding(offset) => {\n                    let offset_value = offset.offset.abs();\n                    match units {\n                        WindowFrameUnits::Rows => WindowFrameBound::Preceding(ScalarValue::UInt64(\n                            Some(offset_value as u64),\n                        )),\n                        WindowFrameUnits::Range => {\n                            WindowFrameBound::Preceding(ScalarValue::Int64(Some(offset_value)))\n                        }\n                        WindowFrameUnits::Groups => {\n                            return Err(GeneralError(\n                                \"WindowFrameUnits::Groups is not supported.\".to_string(),\n                            ));\n                        }\n                    }\n                }\n                LowerFrameBoundStruct::CurrentRow(_) => WindowFrameBound::CurrentRow,\n            },\n            None => match units {\n                WindowFrameUnits::Rows => WindowFrameBound::Preceding(ScalarValue::UInt64(None)),\n                WindowFrameUnits::Range => WindowFrameBound::Preceding(ScalarValue::Int64(None)),\n                WindowFrameUnits::Groups => {\n                    return Err(GeneralError(\n                        \"WindowFrameUnits::Groups is not supported.\".to_string(),\n                    ));\n                }\n            },\n        
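    // A missing lower bound defaults to UNBOUNDED PRECEDING, mirroring Spark's\n            // default frame start.\n        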
};\n\n        let upper_bound: WindowFrameBound = match spark_window_frame\n            .upper_bound\n            .as_ref()\n            .and_then(|inner| inner.upper_frame_bound_struct.as_ref())\n        {\n            Some(u) => match u {\n                UpperFrameBoundStruct::UnboundedFollowing(_) => match units {\n                    WindowFrameUnits::Rows => {\n                        WindowFrameBound::Following(ScalarValue::UInt64(None))\n                    }\n                    WindowFrameUnits::Range => {\n                        WindowFrameBound::Following(ScalarValue::Int64(None))\n                    }\n                    WindowFrameUnits::Groups => {\n                        return Err(GeneralError(\n                            \"WindowFrameUnits::Groups is not supported.\".to_string(),\n                        ));\n                    }\n                },\n                UpperFrameBoundStruct::Following(offset) => match units {\n                    WindowFrameUnits::Rows => {\n                        WindowFrameBound::Following(ScalarValue::UInt64(Some(offset.offset as u64)))\n                    }\n                    WindowFrameUnits::Range => {\n                        WindowFrameBound::Following(ScalarValue::Int64(Some(offset.offset)))\n                    }\n                    WindowFrameUnits::Groups => {\n                        return Err(GeneralError(\n                            \"WindowFrameUnits::Groups is not supported.\".to_string(),\n                        ));\n                    }\n                },\n                UpperFrameBoundStruct::CurrentRow(_) => WindowFrameBound::CurrentRow,\n            },\n            None => match units {\n                WindowFrameUnits::Rows => WindowFrameBound::Following(ScalarValue::UInt64(None)),\n                WindowFrameUnits::Range => WindowFrameBound::Following(ScalarValue::Int64(None)),\n                WindowFrameUnits::Groups => {\n                    return Err(GeneralError(\n                        \"WindowFrameUnits::Groups is not supported.\".to_string(),\n                    ));\n                }\n            },\n        };\n\n        let window_frame = WindowFrame::new_bounds(units, lower_bound, upper_bound);\n        let lex_orderings = LexOrdering::new(sort_exprs.to_vec());\n        let sort_phy_exprs = lex_orderings.as_deref().unwrap_or(&[]);\n\n        datafusion::physical_plan::windows::create_window_expr(\n            &window_func,\n            window_func_name,\n            &window_args,\n            partition_by,\n            sort_phy_exprs,\n            window_frame.into(),\n            input_schema,\n            spark_expr.ignore_nulls,\n            false, // TODO: Spark does not support DISTINCT ... 
OVER\n            None,\n        )\n        .map_err(|e| ExecutionError::DataFusionError(e.to_string()))\n    }\n\n    fn process_agg_func(\n        &self,\n        agg_func: &AggExpr,\n        schema: SchemaRef,\n    ) -> Result<(String, Vec<Arc<dyn PhysicalExpr>>), ExecutionError> {\n        match &agg_func.expr_struct {\n            Some(AggExprStruct::Count(expr)) => {\n                let children = expr\n                    .children\n                    .iter()\n                    .map(|child| self.create_expr(child, Arc::clone(&schema)))\n                    .collect::<Result<Vec<_>, _>>()?;\n                Ok((\"count\".to_string(), children))\n            }\n            Some(AggExprStruct::Min(expr)) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                Ok((\"min\".to_string(), vec![child]))\n            }\n            Some(AggExprStruct::Max(expr)) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                Ok((\"max\".to_string(), vec![child]))\n            }\n            Some(AggExprStruct::Sum(expr)) => {\n                let child = self.create_expr(expr.child.as_ref().unwrap(), Arc::clone(&schema))?;\n                let arrow_type = to_arrow_datatype(expr.datatype.as_ref().unwrap());\n                let datatype = child.data_type(&schema)?;\n\n                let child = if datatype != arrow_type {\n                    Arc::new(CastExpr::new(child, arrow_type.clone(), None))\n                } else {\n                    child\n                };\n                Ok((\"sum\".to_string(), vec![child]))\n            }\n            other => Err(GeneralError(format!(\n                \"{other:?} not supported for window function\"\n            ))),\n        }\n    }\n\n    /// Find DataFusion's built-in window function by name.\n    fn find_df_window_function(&self, name: &str) -> Option<WindowFunctionDefinition> {\n        let registry = &self.session_ctx.state();\n        registry\n            .udaf(name)\n            .map(WindowFunctionDefinition::AggregateUDF)\n            .ok()\n            .or_else(|| {\n                registry\n                    .udwf(name)\n                    .map(WindowFunctionDefinition::WindowUDF)\n                    .ok()\n            })\n    }\n\n    /// Create a DataFusion physical partitioning from Spark physical partitioning\n    fn create_partitioning(\n        &self,\n        spark_partitioning: &SparkPartitioning,\n        input_schema: SchemaRef,\n    ) -> Result<CometPartitioning, ExecutionError> {\n        match spark_partitioning.partitioning_struct.as_ref().unwrap() {\n            PartitioningStruct::HashPartition(hash_partition) => {\n                let exprs: PartitionPhyExprResult = hash_partition\n                    .hash_expression\n                    .iter()\n                    .map(|x| self.create_expr(x, Arc::clone(&input_schema)))\n                    .collect();\n                Ok(CometPartitioning::Hash(\n                    exprs?,\n                    hash_partition.num_partitions as usize,\n                ))\n            }\n            PartitioningStruct::RangePartition(range_partition) => {\n                // Generate the lexical ordering for comparisons\n                let exprs: Result<Vec<PhysicalSortExpr>, ExecutionError> = range_partition\n                    .sort_orders\n                    .iter()\n                    .map(|expr| self.create_sort_expr(expr, 
Arc::clone(&input_schema)))\n                    .collect();\n                let lex_ordering = LexOrdering::new(exprs?).unwrap();\n\n                // Generate the row converter for comparing incoming batches to boundary rows\n                let sort_fields: Vec<SortField> = lex_ordering\n                    .iter()\n                    .map(|sort_expr| {\n                        sort_expr\n                            .expr\n                            .data_type(input_schema.as_ref())\n                            .map(|dt| SortField::new_with_options(dt, sort_expr.options))\n                    })\n                    .collect::<Result<Vec<_>, _>>()?;\n\n                // Deserialize the literals to columnar collections of ScalarValues\n                let mut scalar_values: Vec<Vec<ScalarValue>> = vec![vec![]; lex_ordering.len()];\n                for boundary_row in &range_partition.boundary_rows {\n                    // For each serialized expr in a boundary row, convert to a Literal\n                    // expression, then extract the ScalarValue from the Literal and push it\n                    // into the collection of ScalarValues\n                    for (col_idx, col_values) in scalar_values\n                        .iter_mut()\n                        .enumerate()\n                        .take(lex_ordering.len())\n                    {\n                        let expr = self.create_expr(\n                            &boundary_row.partition_bounds[col_idx],\n                            Arc::clone(&input_schema),\n                        )?;\n                        let literal_expr =\n                            expr.as_any().downcast_ref::<Literal>().expect(\"Literal\");\n                        col_values.push(literal_expr.value().clone());\n                    }\n                }\n\n                // Convert the collection of ScalarValues to collection of Arrow Arrays\n                let arrays: Vec<ArrayRef> = scalar_values\n                    .iter()\n                    .map(|scalar_vec| ScalarValue::iter_to_array(scalar_vec.iter().cloned()))\n                    .collect::<Result<Vec<_>, _>>()?;\n\n                // Create a RowConverter and use to create OwnedRows from the Arrays\n                let converter = RowConverter::new(sort_fields)?;\n                let boundary_rows = converter.convert_columns(&arrays)?;\n                // Rows are only a view into Arrow Arrays. 
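(Each `Row` borrows from the `Rows`\n                // buffer produced by the converter.) 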
We need to create OwnedRows with their\n                // own internal memory ownership to pass as our boundary values to the partitioner.\n                let boundary_owned_rows: Vec<OwnedRow> =\n                    boundary_rows.iter().map(|row| row.owned()).collect();\n\n                Ok(CometPartitioning::RangePartitioning(\n                    lex_ordering,\n                    range_partition.num_partitions as usize,\n                    Arc::new(converter),\n                    boundary_owned_rows,\n                ))\n            }\n            PartitioningStruct::SinglePartition(_) => Ok(CometPartitioning::SinglePartition),\n            PartitioningStruct::RoundRobinPartition(rr_partition) => {\n                // Treat negative max_hash_columns as 0 (no limit)\n                let max_hash_columns = if rr_partition.max_hash_columns <= 0 {\n                    0\n                } else {\n                    rr_partition.max_hash_columns as usize\n                };\n                Ok(CometPartitioning::RoundRobin(\n                    rr_partition.num_partitions as usize,\n                    max_hash_columns,\n                ))\n            }\n        }\n    }\n\n    fn create_scalar_function_expr(\n        &self,\n        expr: &ScalarFunc,\n        input_schema: SchemaRef,\n    ) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n        let args = expr\n            .args\n            .iter()\n            .map(|x| self.create_expr(x, Arc::clone(&input_schema)))\n            .collect::<Result<Vec<_>, _>>()?;\n\n        let fun_name = &expr.func;\n        let input_expr_types = args\n            .iter()\n            .map(|x| x.data_type(input_schema.as_ref()))\n            .collect::<Result<Vec<_>, _>>()?;\n\n        let (data_type, coerced_input_types) =\n            match expr.return_type.as_ref().map(to_arrow_datatype) {\n                Some(t) => (t, input_expr_types.clone()),\n                None => {\n                    let fun_name = match fun_name.as_ref() {\n                        \"read_side_padding\" => \"rpad\", // use the same return type as rpad\n                        other => other,\n                    };\n                    let func = self.session_ctx.udf(fun_name)?;\n\n                    // Type coercion strategy:\n                    //\n                    // In DF52, Comet used coerce_types() which returns NotImplemented\n                    // for most UDFs, so input types were kept unchanged. In DF53,\n                    // fields_with_udf() runs full coercion which aggressively promotes\n                    // types (e.g. Utf8 to Utf8View via Variadic signatures, Int32 to Int64\n                    // via Exact signatures). This breaks Comet's native implementations.\n                    //\n                    // Strategy:\n                    // 1. Try coerce_types() — only UDFs that explicitly implement it\n                    //    will return Ok. Same as DF52 behavior.\n                    // 2. For \"well-supported\" signatures (Coercible, String, Numeric,\n                    //    Comparable), use fields_with_udf(). These preserve input types\n                    //    (e.g. Utf8 stays Utf8, not promoted to Utf8View).\n                    // 3. For all other signatures (Variadic, Exact, etc.), keep original\n                    //    types unchanged. 
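(For example, an Exact signature's Int32 argument\n                    //    stays Int32 instead of being widened to Int64.) 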
Same as DF52 behavior.\n                    let coerced_types = match func.coerce_types(&input_expr_types) {\n                        Ok(types) => types,\n                        Err(_) if needs_fields_coercion(&func.signature().type_signature) => {\n                            let input_fields: Vec<_> = input_expr_types\n                                .iter()\n                                .enumerate()\n                                .map(|(i, dt)| {\n                                    Arc::new(Field::new(format!(\"arg{i}\"), dt.clone(), true))\n                                })\n                                .collect();\n                            let arg_fields = fields_with_udf(&input_fields, func.as_ref())?;\n                            arg_fields.iter().map(|f| f.data_type().clone()).collect()\n                        }\n                        Err(_) => input_expr_types.clone(),\n                    };\n\n                    let arg_fields: Vec<_> = coerced_types\n                        .iter()\n                        .enumerate()\n                        .map(|(i, dt)| Arc::new(Field::new(format!(\"arg{i}\"), dt.clone(), true)))\n                        .collect();\n\n                    // TODO this should try and find scalar\n                    let arguments = args\n                        .iter()\n                        .map(|e| {\n                            e.as_ref()\n                                .as_any()\n                                .downcast_ref::<Literal>()\n                                .map(|lit| lit.value())\n                        })\n                        .collect::<Vec<_>>();\n\n                    let args = ReturnFieldArgs {\n                        arg_fields: &arg_fields,\n                        scalar_arguments: &arguments,\n                    };\n\n                    let data_type = Arc::clone(&func.inner().return_field_from_args(args)?)\n                        .data_type()\n                        .clone();\n\n                    (data_type, coerced_types)\n                }\n            };\n\n        let fun_expr = create_comet_physical_fun(\n            fun_name,\n            data_type.clone(),\n            &self.session_ctx.state(),\n            Some(expr.fail_on_error),\n        )?;\n\n        let args = args\n            .into_iter()\n            .zip(input_expr_types.into_iter().zip(coerced_input_types))\n            .map(|(expr, (from_type, to_type))| {\n                if from_type != to_type {\n                    Arc::new(CastExpr::new(\n                        expr,\n                        to_type,\n                        Some(CastOptions {\n                            safe: false,\n                            ..Default::default()\n                        }),\n                    ))\n                } else {\n                    expr\n                }\n            })\n            .collect::<Vec<_>>();\n\n        let scalar_expr: Arc<dyn PhysicalExpr> = Arc::new(ScalarFunctionExpr::new(\n            fun_name,\n            fun_expr,\n            args.to_vec(),\n            Arc::new(Field::new(fun_name, data_type.clone(), true)),\n            Arc::new(ConfigOptions::default()),\n        ));\n\n        // DF53 changed some UDFs (e.g. md5) to return StringViewArray at execution\n        // time (apache/datafusion#20045). 
Comet does not yet support view types, so\n        // cast the result back to the non-view variant.\n        let scalar_expr = match data_type {\n            DataType::Utf8View => Arc::new(CastExpr::new(\n                scalar_expr,\n                DataType::Utf8,\n                Some(CastOptions {\n                    safe: false,\n                    ..Default::default()\n                }),\n            )) as Arc<dyn PhysicalExpr>,\n            DataType::BinaryView => Arc::new(CastExpr::new(\n                scalar_expr,\n                DataType::Binary,\n                Some(CastOptions {\n                    safe: false,\n                    ..Default::default()\n                }),\n            )) as Arc<dyn PhysicalExpr>,\n            _ => scalar_expr,\n        };\n\n        Ok(scalar_expr)\n    }\n\n    fn create_aggr_func_expr(\n        name: &str,\n        schema: SchemaRef,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n        func: AggregateUDF,\n    ) -> Result<AggregateFunctionExpr, ExecutionError> {\n        AggregateExprBuilder::new(Arc::new(func), children)\n            .schema(schema)\n            .alias(name)\n            .with_ignore_nulls(false)\n            .with_distinct(false)\n            .build()\n            .map_err(|e| e.into())\n    }\n}\n\n/// Collects the indices of the columns in the input schema that are used in the expression\n/// and returns them as a pair of vectors, one for the left side and one for the right side.\nfn expr_to_columns(\n    expr: &Arc<dyn PhysicalExpr>,\n    left_field_len: usize,\n    right_field_len: usize,\n) -> Result<(Vec<usize>, Vec<usize>), ExecutionError> {\n    let mut left_field_indices: Vec<usize> = vec![];\n    let mut right_field_indices: Vec<usize> = vec![];\n\n    expr.apply(&mut |expr: &Arc<dyn PhysicalExpr>| {\n        Ok({\n            if let Some(column) = expr.as_any().downcast_ref::<Column>() {\n                if column.index() >= left_field_len + right_field_len {\n                    return Err(DataFusionError::Internal(format!(\n                        \"Column index {} out of range\",\n                        column.index()\n                    )));\n                } else if column.index() < left_field_len {\n                    left_field_indices.push(column.index());\n                } else {\n                    right_field_indices.push(column.index() - left_field_len);\n                }\n            }\n            TreeNodeRecursion::Continue\n        })\n    })?;\n\n    left_field_indices.sort();\n    right_field_indices.sort();\n\n    Ok((left_field_indices, right_field_indices))\n}\n\n/// A physical join filter rewriter which rewrites the column indices in the expression\n/// to use the new column indices. 
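For example, if the filter references only left\n/// column 2 and right column 5 (with four left fields), they are remapped to\n/// indices 0 and 1 of the filter's intermediate schema. 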
See `rewrite_physical_expr`.\nstruct JoinFilterRewriter<'a> {\n    left_field_len: usize,\n    right_field_len: usize,\n    left_field_indices: &'a [usize],\n    right_field_indices: &'a [usize],\n}\n\nimpl JoinFilterRewriter<'_> {\n    fn new<'a>(\n        left_field_len: usize,\n        right_field_len: usize,\n        left_field_indices: &'a [usize],\n        right_field_indices: &'a [usize],\n    ) -> JoinFilterRewriter<'a> {\n        JoinFilterRewriter {\n            left_field_len,\n            right_field_len,\n            left_field_indices,\n            right_field_indices,\n        }\n    }\n}\n\nimpl TreeNodeRewriter for JoinFilterRewriter<'_> {\n    type Node = Arc<dyn PhysicalExpr>;\n\n    fn f_down(&mut self, node: Self::Node) -> datafusion::common::Result<Transformed<Self::Node>> {\n        if let Some(column) = node.as_any().downcast_ref::<Column>() {\n            if column.index() < self.left_field_len {\n                // left side\n                let new_index = self\n                    .left_field_indices\n                    .iter()\n                    .position(|&x| x == column.index())\n                    .ok_or_else(|| {\n                        DataFusionError::Internal(format!(\n                            \"Column index {} not found in left field indices\",\n                            column.index()\n                        ))\n                    })?;\n                Ok(Transformed::yes(Arc::new(Column::new(\n                    column.name(),\n                    new_index,\n                ))))\n            } else if column.index() < self.left_field_len + self.right_field_len {\n                // right side\n                let new_index = self\n                    .right_field_indices\n                    .iter()\n                    .position(|&x| x + self.left_field_len == column.index())\n                    .ok_or_else(|| {\n                        DataFusionError::Internal(format!(\n                            \"Column index {} not found in right field indices\",\n                            column.index()\n                        ))\n                    })?;\n                Ok(Transformed::yes(Arc::new(Column::new(\n                    column.name(),\n                    new_index + self.left_field_indices.len(),\n                ))))\n            } else {\n                Err(DataFusionError::Internal(format!(\n                    \"Column index {} out of range\",\n                    column.index()\n                )))\n            }\n        } else {\n            Ok(Transformed::no(node))\n        }\n    }\n}\n\n/// Rewrites the physical expression to use the new column indices.\n/// This is necessary when the physical expression is used in a join filter, as the column\n/// indices are different from the original schema.\nfn rewrite_physical_expr(\n    expr: Arc<dyn PhysicalExpr>,\n    left_field_len: usize,\n    right_field_len: usize,\n    left_field_indices: &[usize],\n    right_field_indices: &[usize],\n) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n    let mut rewriter = JoinFilterRewriter::new(\n        left_field_len,\n        right_field_len,\n        left_field_indices,\n        right_field_indices,\n    );\n\n    Ok(expr.rewrite(&mut rewriter).data()?)\n}\n\npub fn from_protobuf_eval_mode(value: i32) -> Result<EvalMode, prost::UnknownEnumValue> {\n    match spark_expression::EvalMode::try_from(value)? 
{\n        spark_expression::EvalMode::Legacy => Ok(EvalMode::Legacy),\n        spark_expression::EvalMode::Try => Ok(EvalMode::Try),\n        spark_expression::EvalMode::Ansi => Ok(EvalMode::Ansi),\n    }\n}\n\nfn convert_spark_types_to_arrow_schema(\n    spark_types: &[spark_operator::SparkStructField],\n) -> SchemaRef {\n    let arrow_fields = spark_types\n        .iter()\n        .map(|spark_type| {\n            Field::new(\n                spark_type.name.clone(),\n                to_arrow_datatype(spark_type.data_type.as_ref().unwrap()),\n                spark_type.nullable,\n            )\n        })\n        .collect_vec();\n    Arc::new(Schema::new(arrow_fields))\n}\n\n/// Converts a protobuf PartitionValue to an iceberg Literal.\nfn partition_value_to_literal(\n    proto_value: &spark_operator::PartitionValue,\n) -> Result<Option<iceberg::spec::Literal>, ExecutionError> {\n    use spark_operator::partition_value::Value;\n\n    if proto_value.is_null {\n        return Ok(None);\n    }\n\n    let literal = match &proto_value.value {\n        Some(Value::IntVal(v)) => iceberg::spec::Literal::int(*v),\n        Some(Value::LongVal(v)) => iceberg::spec::Literal::long(*v),\n        Some(Value::DateVal(v)) => {\n            // Convert i64 to i32 for date (days since epoch)\n            let days = (*v)\n                .try_into()\n                .map_err(|_| GeneralError(format!(\"Date value out of range: {}\", v)))?;\n            iceberg::spec::Literal::date(days)\n        }\n        Some(Value::TimestampVal(v)) => iceberg::spec::Literal::timestamp(*v),\n        Some(Value::TimestampTzVal(v)) => iceberg::spec::Literal::timestamptz(*v),\n        Some(Value::StringVal(s)) => iceberg::spec::Literal::string(s.clone()),\n        Some(Value::DoubleVal(v)) => iceberg::spec::Literal::double(*v),\n        Some(Value::FloatVal(v)) => iceberg::spec::Literal::float(*v),\n        Some(Value::DecimalVal(bytes)) => {\n            // Deserialize unscaled BigInteger bytes to i128\n            // BigInteger is serialized as signed big-endian bytes\n            if bytes.len() > 16 {\n                return Err(GeneralError(format!(\n                    \"Decimal bytes too large: {} bytes (max 16 for i128)\",\n                    bytes.len()\n                )));\n            }\n\n            // Convert big-endian bytes to i128\n            let mut buf = [0u8; 16];\n            let offset = 16 - bytes.len();\n            buf[offset..].copy_from_slice(bytes);\n\n            // Sign-extend negative numbers (high bit of the first byte set) before\n            // reinterpreting the buffer as a two's-complement i128\n            if !bytes.is_empty() && (bytes[0] & 0x80) != 0 {\n                for byte in buf.iter_mut().take(offset) {\n                    *byte = 0xFF;\n                }\n            }\n            let value = i128::from_be_bytes(buf);\n\n            iceberg::spec::Literal::decimal(value)\n        }\n        Some(Value::BoolVal(v)) => iceberg::spec::Literal::bool(*v),\n        Some(Value::UuidVal(bytes)) => {\n            // Deserialize UUID from 16 bytes\n            if bytes.len() != 16 {\n                return Err(GeneralError(format!(\n                    \"Invalid UUID bytes length: {} (expected 16)\",\n                    bytes.len()\n                )));\n            }\n            let uuid = uuid::Uuid::from_slice(bytes)\n                .map_err(|e| 
GeneralError(format!(\"Failed to parse UUID: {}\", e)))?;\n            iceberg::spec::Literal::uuid(uuid)\n        }\n        Some(Value::FixedVal(bytes)) => iceberg::spec::Literal::fixed(bytes.to_vec()),\n        Some(Value::BinaryVal(bytes)) => iceberg::spec::Literal::binary(bytes.to_vec()),\n        None => {\n            return Err(GeneralError(\n                \"PartitionValue has no value set and is_null is false\".to_string(),\n            ));\n        }\n    };\n\n    Ok(Some(literal))\n}\n\n/// Converts a protobuf PartitionData to an iceberg Struct.\n///\n/// Uses the existing Struct::from_iter() API from iceberg-rust to construct the struct\n/// from the list of partition values.\n/// This can potentially be upstreamed to iceberg_rust\nfn partition_data_to_struct(\n    proto_partition: &spark_operator::PartitionData,\n) -> Result<iceberg::spec::Struct, ExecutionError> {\n    let literals: Vec<Option<iceberg::spec::Literal>> = proto_partition\n        .values\n        .iter()\n        .map(partition_value_to_literal)\n        .collect::<Result<Vec<_>, _>>()?;\n\n    Ok(iceberg::spec::Struct::from_iter(literals))\n}\n\n/// Converts protobuf FileScanTasks from Scala into iceberg-rust FileScanTask objects.\n///\n/// Each task contains a residual predicate that is used for row-group level filtering\n/// during Parquet scanning.\n///\n/// This function uses deduplication pools from the IcebergScanCommon to avoid redundant\n/// parsing of schemas, partition specs, partition types, name mappings, and other repeated data.\nfn parse_file_scan_tasks_from_common(\n    proto_common: &spark_operator::IcebergScanCommon,\n    proto_tasks: &[spark_operator::IcebergFileScanTask],\n) -> Result<Vec<iceberg::scan::FileScanTask>, ExecutionError> {\n    // Parse each unique schema once, not once per task\n    let schema_cache: Vec<Arc<iceberg::spec::Schema>> = proto_common\n        .schema_pool\n        .iter()\n        .map(|json| {\n            serde_json::from_str(json).map(Arc::new).map_err(|e| {\n                ExecutionError::GeneralError(format!(\n                    \"Failed to parse schema JSON from pool: {}\",\n                    e\n                ))\n            })\n        })\n        .collect::<Result<Vec<_>, _>>()?;\n\n    let partition_spec_cache: Vec<Option<Arc<iceberg::spec::PartitionSpec>>> = proto_common\n        .partition_spec_pool\n        .iter()\n        .map(|json| {\n            serde_json::from_str::<iceberg::spec::PartitionSpec>(json)\n                .ok()\n                .map(Arc::new)\n        })\n        .collect();\n\n    let name_mapping_cache: Vec<Option<Arc<iceberg::spec::NameMapping>>> = proto_common\n        .name_mapping_pool\n        .iter()\n        .map(|json| {\n            serde_json::from_str::<iceberg::spec::NameMapping>(json)\n                .ok()\n                .map(Arc::new)\n        })\n        .collect();\n\n    let delete_files_cache: Vec<Vec<iceberg::scan::FileScanTaskDeleteFile>> = proto_common\n        .delete_files_pool\n        .iter()\n        .map(|list| {\n            list.delete_files\n                .iter()\n                .map(|del| {\n                    let file_type = match del.content_type.as_str() {\n                        \"POSITION_DELETES\" => iceberg::spec::DataContentType::PositionDeletes,\n                        \"EQUALITY_DELETES\" => iceberg::spec::DataContentType::EqualityDeletes,\n                        other => {\n                            return Err(GeneralError(format!(\n                                
\"Invalid delete content type '{}'\",\n                                other\n                            )))\n                        }\n                    };\n\n                    Ok(iceberg::scan::FileScanTaskDeleteFile {\n                        file_path: del.file_path.clone(),\n                        file_type,\n                        file_size_in_bytes: del.file_size_in_bytes,\n                        partition_spec_id: del.partition_spec_id,\n                        equality_ids: if del.equality_ids.is_empty() {\n                            None\n                        } else {\n                            Some(del.equality_ids.clone())\n                        },\n                    })\n                })\n                .collect::<Result<Vec<_>, ExecutionError>>()\n        })\n        .collect::<Result<Vec<_>, _>>()?;\n\n    let results: Result<Vec<_>, _> = proto_tasks\n        .iter()\n        .map(|proto_task| {\n            let schema_ref = Arc::clone(\n                schema_cache\n                    .get(proto_task.schema_idx as usize)\n                    .ok_or_else(|| {\n                        ExecutionError::GeneralError(format!(\n                            \"Invalid schema_idx: {} (pool size: {})\",\n                            proto_task.schema_idx,\n                            schema_cache.len()\n                        ))\n                    })?,\n            );\n\n            let data_file_format = iceberg::spec::DataFileFormat::Parquet;\n\n            let deletes = if let Some(idx) = proto_task.delete_files_idx {\n                delete_files_cache\n                    .get(idx as usize)\n                    .ok_or_else(|| {\n                        ExecutionError::GeneralError(format!(\n                            \"Invalid delete_files_idx: {} (pool size: {})\",\n                            idx,\n                            delete_files_cache.len()\n                        ))\n                    })?\n                    .clone()\n            } else {\n                vec![]\n            };\n\n            let bound_predicate = if let Some(idx) = proto_task.residual_idx {\n                proto_common\n                    .residual_pool\n                    .get(idx as usize)\n                    .and_then(convert_spark_expr_to_predicate)\n                    .map(\n                        |pred| -> Result<iceberg::expr::BoundPredicate, ExecutionError> {\n                            pred.bind(Arc::clone(&schema_ref), true).map_err(|e| {\n                                ExecutionError::GeneralError(format!(\n                                    \"Failed to bind predicate to schema: {}\",\n                                    e\n                                ))\n                            })\n                        },\n                    )\n                    .transpose()?\n            } else {\n                None\n            };\n\n            let partition = if let Some(partition_data_idx) = proto_task.partition_data_idx {\n                let partition_data_proto = proto_common\n                    .partition_data_pool\n                    .get(partition_data_idx as usize)\n                    .ok_or_else(|| {\n                        ExecutionError::GeneralError(format!(\n                            \"Invalid partition_data_idx: {} (pool size: {})\",\n                            partition_data_idx,\n                            proto_common.partition_data_pool.len()\n                        ))\n                    })?;\n\n                match 
partition_data_to_struct(partition_data_proto) {\n                    Ok(s) => Some(s),\n                    Err(e) => {\n                        return Err(ExecutionError::GeneralError(format!(\n                            \"Failed to deserialize partition data: {}\",\n                            e\n                        )))\n                    }\n                }\n            } else {\n                None\n            };\n\n            let partition_spec = proto_task\n                .partition_spec_idx\n                .and_then(|idx| partition_spec_cache.get(idx as usize))\n                .and_then(|opt| opt.clone());\n\n            let name_mapping = proto_task\n                .name_mapping_idx\n                .and_then(|idx| name_mapping_cache.get(idx as usize))\n                .and_then(|opt| opt.clone());\n\n            let project_field_ids = proto_common\n                .project_field_ids_pool\n                .get(proto_task.project_field_ids_idx as usize)\n                .ok_or_else(|| {\n                    ExecutionError::GeneralError(format!(\n                        \"Invalid project_field_ids_idx: {} (pool size: {})\",\n                        proto_task.project_field_ids_idx,\n                        proto_common.project_field_ids_pool.len()\n                    ))\n                })?\n                .field_ids\n                .clone();\n\n            Ok(iceberg::scan::FileScanTask {\n                file_size_in_bytes: proto_task.file_size_in_bytes,\n                data_file_path: proto_task.data_file_path.clone(),\n                start: proto_task.start,\n                length: proto_task.length,\n                record_count: proto_task.record_count,\n                data_file_format,\n                schema: schema_ref,\n                project_field_ids,\n                predicate: bound_predicate,\n                deletes,\n                partition,\n                partition_spec,\n                name_mapping,\n                case_sensitive: false,\n            })\n        })\n        .collect();\n\n    results\n}\n\n/// Create CASE WHEN expression and add casting as needed\nfn create_case_expr(\n    when_then_pairs: Vec<(Arc<dyn PhysicalExpr>, Arc<dyn PhysicalExpr>)>,\n    else_expr: Option<Arc<dyn PhysicalExpr>>,\n    input_schema: &Schema,\n) -> Result<Arc<dyn PhysicalExpr>, ExecutionError> {\n    let then_types: Vec<DataType> = when_then_pairs\n        .iter()\n        .map(|x| x.1.data_type(input_schema))\n        .collect::<Result<Vec<_>, _>>()?;\n\n    let else_type: Option<DataType> = else_expr\n        .as_ref()\n        .map(|x| Arc::clone(x).data_type(input_schema))\n        .transpose()?\n        .or(Some(DataType::Null));\n\n    if let Some(coerce_type) = get_coerce_type_for_case_expression(&then_types, else_type.as_ref())\n    {\n        let cast_options = SparkCastOptions::new_without_timezone(EvalMode::Legacy, false);\n\n        let when_then_pairs = when_then_pairs\n            .iter()\n            .map(|x| {\n                let t: Arc<dyn PhysicalExpr> = Arc::new(Cast::new(\n                    Arc::clone(&x.1),\n                    coerce_type.clone(),\n                    cast_options.clone(),\n                    None,\n                    None,\n                ));\n                (Arc::clone(&x.0), t)\n            })\n            .collect::<Vec<(Arc<dyn PhysicalExpr>, Arc<dyn PhysicalExpr>)>>();\n\n        let else_phy_expr: Option<Arc<dyn PhysicalExpr>> = else_expr.clone().map(|x| {\n            Arc::new(Cast::new(\n    
            x,\n                coerce_type.clone(),\n                cast_options.clone(),\n                None,\n                None,\n            )) as Arc<dyn PhysicalExpr>\n        });\n        Ok(Arc::new(CaseExpr::try_new(\n            None,\n            when_then_pairs,\n            else_phy_expr,\n        )?))\n    } else {\n        Ok(Arc::new(CaseExpr::try_new(\n            None,\n            when_then_pairs,\n            else_expr.clone(),\n        )?))\n    }\n}\n\nfn from_protobuf_binary_output_style(\n    value: i32,\n) -> Result<BinaryOutputStyle, prost::UnknownEnumValue> {\n    match spark_expression::BinaryOutputStyle::try_from(value)? {\n        spark_expression::BinaryOutputStyle::Utf8 => Ok(BinaryOutputStyle::Utf8),\n        spark_expression::BinaryOutputStyle::Basic => Ok(BinaryOutputStyle::Basic),\n        spark_expression::BinaryOutputStyle::Base64 => Ok(BinaryOutputStyle::Base64),\n        spark_expression::BinaryOutputStyle::Hex => Ok(BinaryOutputStyle::Hex),\n        spark_expression::BinaryOutputStyle::HexDiscrete => Ok(BinaryOutputStyle::HexDiscrete),\n    }\n}\n\nfn literal_to_array_ref(\n    data_type: DataType,\n    list_literal: ListLiteral,\n) -> Result<ArrayRef, ExecutionError> {\n    let nulls = &list_literal.null_mask;\n    match data_type {\n        DataType::Null => Ok(Arc::new(NullArray::new(nulls.len()))),\n        DataType::Boolean => Ok(Arc::new(BooleanArray::new(\n            BooleanBuffer::from(list_literal.boolean_values),\n            Some(nulls.clone().into()),\n        ))),\n        DataType::Int8 => Ok(Arc::new(Int8Array::new(\n            list_literal\n                .byte_values\n                .iter()\n                .map(|&x| x as i8)\n                .collect::<Vec<_>>()\n                .into(),\n            Some(nulls.clone().into()),\n        ))),\n        DataType::Int16 => Ok(Arc::new(Int16Array::new(\n            list_literal\n                .short_values\n                .iter()\n                .map(|&x| x as i16)\n                .collect::<Vec<_>>()\n                .into(),\n            Some(nulls.clone().into()),\n        ))),\n        DataType::Int32 => Ok(Arc::new(Int32Array::new(\n            list_literal.int_values.into(),\n            Some(nulls.clone().into()),\n        ))),\n        DataType::Int64 => Ok(Arc::new(Int64Array::new(\n            list_literal.long_values.into(),\n            Some(nulls.clone().into()),\n        ))),\n        DataType::Float32 => Ok(Arc::new(Float32Array::new(\n            list_literal.float_values.into(),\n            Some(nulls.clone().into()),\n        ))),\n        DataType::Float64 => Ok(Arc::new(Float64Array::new(\n            list_literal.double_values.into(),\n            Some(nulls.clone().into()),\n        ))),\n        DataType::Date32 => Ok(Arc::new(Date32Array::new(\n            list_literal.int_values.into(),\n            Some(nulls.clone().into()),\n        ))),\n        DataType::Timestamp(TimeUnit::Microsecond, None) => {\n            Ok(Arc::new(TimestampMicrosecondArray::new(\n                list_literal.long_values.into(),\n                Some(nulls.clone().into()),\n            )))\n        }\n        DataType::Timestamp(TimeUnit::Microsecond, Some(tz)) => Ok(Arc::new(\n            TimestampMicrosecondArray::new(\n                list_literal.long_values.into(),\n                Some(nulls.clone().into()),\n            )\n            .with_timezone(Arc::clone(&tz)),\n        )),\n        DataType::Binary => {\n            // Using a builder as it is cumbersome 
to create BinaryArray from a vector with nulls\n            // and to calculate correct offsets\n            let item_capacity = list_literal.bytes_values.len();\n            let data_capacity = list_literal\n                .bytes_values\n                .first()\n                .map(|s| s.len() * item_capacity)\n                .unwrap_or(0);\n            let mut arr = BinaryBuilder::with_capacity(item_capacity, data_capacity);\n\n            for (i, v) in list_literal.bytes_values.into_iter().enumerate() {\n                if nulls[i] {\n                    arr.append_value(v);\n                } else {\n                    arr.append_null();\n                }\n            }\n\n            Ok(Arc::new(arr.finish()))\n        }\n        DataType::Utf8 => {\n            // Using a builder as it is cumbersome to create StringArray from a vector with nulls\n            // and to calculate correct offsets\n            let item_capacity = list_literal.string_values.len();\n            let data_capacity = list_literal\n                .string_values\n                .first()\n                .map(|s| s.len() * item_capacity)\n                .unwrap_or(0);\n            let mut arr = StringBuilder::with_capacity(item_capacity, data_capacity);\n\n            for (i, v) in list_literal.string_values.into_iter().enumerate() {\n                if nulls[i] {\n                    arr.append_value(v);\n                } else {\n                    arr.append_null();\n                }\n            }\n\n            Ok(Arc::new(arr.finish()))\n        }\n        DataType::Decimal128(p, s) => {\n            // Collect into a Result so that a value that does not fit in i128\n            // propagates a GeneralError instead of panicking\n            let values = list_literal\n                .decimal_values\n                .into_iter()\n                .map(|v| {\n                    let big_integer = BigInt::from_signed_bytes_be(&v);\n                    big_integer.to_i128().ok_or_else(|| {\n                        GeneralError(format!(\n                            \"Cannot parse {big_integer:?} as i128 for Decimal literal\"\n                        ))\n                    })\n                })\n                .collect::<Result<Vec<_>, _>>()?;\n            Ok(Arc::new(\n                Decimal128Array::new(values.into(), Some(nulls.clone().into()))\n                    .with_precision_and_scale(p, s)?,\n            ))\n        }\n        // list of primitive types\n        DataType::List(f) if !matches!(f.data_type(), DataType::List(_)) => {\n            literal_to_array_ref(f.data_type().clone(), list_literal)\n        }\n        DataType::List(ref f) => {\n            let dt = f.data_type().clone();\n\n            // Build offsets and collect non-null child arrays\n            let mut offsets = Vec::with_capacity(list_literal.list_values.len() + 1);\n            // The first offset is always 0\n            offsets.push(0i32);\n            let mut child_arrays: Vec<ArrayRef> = Vec::new();\n\n            for (i, child_literal) in list_literal.list_values.iter().enumerate() {\n                // Check if the current child literal is non-null and not an empty array\n                if list_literal.null_mask[i] && *child_literal != ListLiteral::default() {\n                    // Non-null entry: process the child array\n                    let child_array = literal_to_array_ref(dt.clone(), child_literal.clone())?;\n                    let len = child_array.len() as i32;\n                    
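// Each child's values are appended end-to-end below, so the next\n                    // offset is the previous offset plus this child's length.\n                    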
offsets.push(offsets.last().unwrap() + len);\n                    child_arrays.push(child_array);\n                } else {\n                    // Null or empty entry: just repeat the last offset (empty slot);\n                    // validity is taken from null_mask when the ListArray is built\n                    offsets.push(*offsets.last().unwrap());\n                }\n            }\n\n            // Concatenate all non-null child arrays' values into one array\n            let output_array = if !child_arrays.is_empty() {\n                let child_refs: Vec<&dyn Array> = child_arrays.iter().map(|a| a.as_ref()).collect();\n                arrow::compute::concat(&child_refs)?\n            } else {\n                // All entries are null or the list is empty\n                new_empty_array(&dt)\n            };\n\n            // Create and return the parent ListArray\n            Ok(Arc::new(ListArray::new(\n                FieldRef::from(Field::new(\"item\", output_array.data_type().clone(), true)),\n                OffsetBuffer::new(offsets.into()),\n                output_array,\n                Some(NullBuffer::from(list_literal.null_mask.clone())),\n            )))\n        }\n        dt => Err(GeneralError(format!(\n            \"DataType::List literal does not support {dt:?} type\"\n        ))),\n    }\n}\n\n// ============================================================================\n// Spark Expression to Iceberg Predicate Conversion\n// ============================================================================\n//\n// Predicates are converted through Spark expressions rather than directly from\n// Iceberg Java to Iceberg Rust. This leverages Comet's existing expression\n// serialization infrastructure, which handles hundreds of expression types.\n//\n// Conversion path:\n//   Iceberg Expression (Java) -> Spark Catalyst Expression -> Protobuf -> Iceberg Predicate (Rust)\n//\n// Note: NOT IN predicates are skipped because iceberg-rust's RowGroupMetricsEvaluator::not_in()\n// always returns MIGHT_MATCH (never prunes row groups). 
These are handled by CometFilter post-scan.\n\n/// Converts a protobuf Spark expression to an Iceberg predicate for row-group filtering.\nfn convert_spark_expr_to_predicate(\n    expr: &spark_expression::Expr,\n) -> Option<iceberg::expr::Predicate> {\n    use spark_expression::expr::ExprStruct;\n\n    match &expr.expr_struct {\n        Some(ExprStruct::Eq(binary)) => convert_binary_to_predicate(\n            &binary.left,\n            &binary.right,\n            iceberg::expr::PredicateOperator::Eq,\n        ),\n        Some(ExprStruct::Neq(binary)) => convert_binary_to_predicate(\n            &binary.left,\n            &binary.right,\n            iceberg::expr::PredicateOperator::NotEq,\n        ),\n        Some(ExprStruct::Lt(binary)) => convert_binary_to_predicate(\n            &binary.left,\n            &binary.right,\n            iceberg::expr::PredicateOperator::LessThan,\n        ),\n        Some(ExprStruct::LtEq(binary)) => convert_binary_to_predicate(\n            &binary.left,\n            &binary.right,\n            iceberg::expr::PredicateOperator::LessThanOrEq,\n        ),\n        Some(ExprStruct::Gt(binary)) => convert_binary_to_predicate(\n            &binary.left,\n            &binary.right,\n            iceberg::expr::PredicateOperator::GreaterThan,\n        ),\n        Some(ExprStruct::GtEq(binary)) => convert_binary_to_predicate(\n            &binary.left,\n            &binary.right,\n            iceberg::expr::PredicateOperator::GreaterThanOrEq,\n        ),\n        Some(ExprStruct::IsNull(unary)) => {\n            if let Some(ref child) = unary.child {\n                extract_column_reference(child).map(|column| {\n                    iceberg::expr::Predicate::Unary(iceberg::expr::UnaryExpression::new(\n                        iceberg::expr::PredicateOperator::IsNull,\n                        iceberg::expr::Reference::new(column),\n                    ))\n                })\n            } else {\n                None\n            }\n        }\n        Some(ExprStruct::IsNotNull(unary)) => {\n            if let Some(ref child) = unary.child {\n                extract_column_reference(child).map(|column| {\n                    iceberg::expr::Predicate::Unary(iceberg::expr::UnaryExpression::new(\n                        iceberg::expr::PredicateOperator::NotNull,\n                        iceberg::expr::Reference::new(column),\n                    ))\n                })\n            } else {\n                None\n            }\n        }\n        Some(ExprStruct::And(binary)) => {\n            let left = binary\n                .left\n                .as_ref()\n                .and_then(|e| convert_spark_expr_to_predicate(e));\n            let right = binary\n                .right\n                .as_ref()\n                .and_then(|e| convert_spark_expr_to_predicate(e));\n            match (left, right) {\n                (Some(l), Some(r)) => Some(l.and(r)),\n                (Some(l), None) => Some(l),\n                (None, Some(r)) => Some(r),\n                _ => None,\n            }\n        }\n        Some(ExprStruct::Or(binary)) => {\n            let left = binary\n                .left\n                .as_ref()\n                .and_then(|e| convert_spark_expr_to_predicate(e));\n            let right = binary\n                .right\n                .as_ref()\n                .and_then(|e| convert_spark_expr_to_predicate(e));\n            match (left, right) {\n                (Some(l), Some(r)) => Some(l.or(r)),\n                _ => None, // OR requires 
both sides to be valid\n            }\n        }\n        Some(ExprStruct::Not(unary)) => unary\n            .child\n            .as_ref()\n            .and_then(|child| convert_spark_expr_to_predicate(child))\n            .map(|p| !p),\n        Some(ExprStruct::In(in_expr)) => {\n            // NOT IN predicates don't work correctly with iceberg-rust's row-group filtering.\n            // The iceberg-rust RowGroupMetricsEvaluator::not_in() always returns MIGHT_MATCH\n            // (never prunes row groups), even in cases where pruning is possible (e.g., when\n            // min == max == value and value is in the NOT IN set).\n            //\n            // Workaround: Skip NOT IN in predicate pushdown and let CometFilter handle it\n            // post-scan. This sacrifices row-group pruning for NOT IN but ensures correctness.\n            if in_expr.negated {\n                return None;\n            }\n\n            if let Some(ref value) = in_expr.in_value {\n                if let Some(column) = extract_column_reference(value) {\n                    let datums: Vec<iceberg::spec::Datum> = in_expr\n                        .lists\n                        .iter()\n                        .filter_map(extract_literal_as_datum)\n                        .collect();\n\n                    if datums.len() == in_expr.lists.len() {\n                        Some(iceberg::expr::Reference::new(column).is_in(datums))\n                    } else {\n                        None\n                    }\n                } else {\n                    None\n                }\n            } else {\n                None\n            }\n        }\n        _ => None, // Unsupported expression\n    }\n}\n\nfn convert_binary_to_predicate(\n    left: &Option<Box<spark_expression::Expr>>,\n    right: &Option<Box<spark_expression::Expr>>,\n    op: iceberg::expr::PredicateOperator,\n) -> Option<iceberg::expr::Predicate> {\n    let left_ref = left.as_ref()?;\n    let right_ref = right.as_ref()?;\n\n    if let (Some(column), Some(datum)) = (\n        extract_column_reference(left_ref),\n        extract_literal_as_datum(right_ref),\n    ) {\n        return Some(iceberg::expr::Predicate::Binary(\n            iceberg::expr::BinaryExpression::new(op, iceberg::expr::Reference::new(column), datum),\n        ));\n    }\n\n    if let (Some(datum), Some(column)) = (\n        extract_literal_as_datum(left_ref),\n        extract_column_reference(right_ref),\n    ) {\n        let reversed_op = match op {\n            iceberg::expr::PredicateOperator::LessThan => {\n                iceberg::expr::PredicateOperator::GreaterThan\n            }\n            iceberg::expr::PredicateOperator::LessThanOrEq => {\n                iceberg::expr::PredicateOperator::GreaterThanOrEq\n            }\n            iceberg::expr::PredicateOperator::GreaterThan => {\n                iceberg::expr::PredicateOperator::LessThan\n            }\n            iceberg::expr::PredicateOperator::GreaterThanOrEq => {\n                iceberg::expr::PredicateOperator::LessThanOrEq\n            }\n            _ => op, // Eq and NotEq are symmetric\n        };\n        return Some(iceberg::expr::Predicate::Binary(\n            iceberg::expr::BinaryExpression::new(\n                reversed_op,\n                iceberg::expr::Reference::new(column),\n                datum,\n            ),\n        ));\n    }\n\n    None\n}\n\nfn extract_column_reference(expr: &spark_expression::Expr) -> Option<String> {\n    use spark_expression::expr::ExprStruct;\n\n    match 
&expr.expr_struct {\n        Some(ExprStruct::Unbound(unbound_ref)) => Some(unbound_ref.name.clone()),\n        _ => None,\n    }\n}\n\nfn extract_literal_as_datum(expr: &spark_expression::Expr) -> Option<iceberg::spec::Datum> {\n    use spark_expression::expr::ExprStruct;\n\n    match &expr.expr_struct {\n        Some(ExprStruct::Literal(literal)) => {\n            if literal.is_null {\n                return None;\n            }\n\n            match &literal.value {\n                Some(spark_expression::literal::Value::IntVal(v)) => {\n                    Some(iceberg::spec::Datum::int(*v))\n                }\n                Some(spark_expression::literal::Value::LongVal(v)) => {\n                    Some(iceberg::spec::Datum::long(*v))\n                }\n                Some(spark_expression::literal::Value::FloatVal(v)) => {\n                    Some(iceberg::spec::Datum::double(*v as f64))\n                }\n                Some(spark_expression::literal::Value::DoubleVal(v)) => {\n                    Some(iceberg::spec::Datum::double(*v))\n                }\n                Some(spark_expression::literal::Value::StringVal(v)) => {\n                    Some(iceberg::spec::Datum::string(v.clone()))\n                }\n                Some(spark_expression::literal::Value::BoolVal(v)) => {\n                    Some(iceberg::spec::Datum::bool(*v))\n                }\n                Some(spark_expression::literal::Value::ByteVal(v)) => {\n                    Some(iceberg::spec::Datum::int(*v))\n                }\n                Some(spark_expression::literal::Value::ShortVal(v)) => {\n                    Some(iceberg::spec::Datum::int(*v))\n                }\n                _ => None,\n            }\n        }\n        _ => None,\n    }\n}\n\n/// Returns true for signature types that need fields_with_udf() for coercion.\n///\n/// \"Well-supported\" signatures (Coercible, String, Numeric, Comparable) preserve\n/// input types naturally (e.g. Utf8 stays Utf8) and need fields_with_udf() because\n/// they don't implement coerce_types(). 
Other signatures (Variadic, Exact, etc.)\n/// should keep original types to match DF52 behavior and avoid unwanted promotions\n/// like Utf8 to Utf8View or Int32 to Int64.\nfn needs_fields_coercion(sig: &TypeSignature) -> bool {\n    match sig {\n        TypeSignature::Coercible(_)\n        | TypeSignature::String(_)\n        | TypeSignature::Numeric(_)\n        | TypeSignature::Comparable(_) => true,\n        TypeSignature::OneOf(sigs) => sigs.iter().any(needs_fields_coercion),\n        _ => false,\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use futures::{poll, StreamExt};\n    use std::{sync::Arc, task::Poll};\n\n    use arrow::array::{\n        Array, DictionaryArray, Int32Array, Int8Array, ListArray, RecordBatch, StringArray,\n    };\n    use arrow::datatypes::{DataType, Field, FieldRef, Fields, Schema};\n    use datafusion::catalog::memory::DataSourceExec;\n    use datafusion::config::TableParquetOptions;\n    use datafusion::datasource::listing::PartitionedFile;\n    use datafusion::datasource::object_store::ObjectStoreUrl;\n    use datafusion::datasource::physical_plan::{\n        FileGroup, FileScanConfigBuilder, FileSource, ParquetSource,\n    };\n    use datafusion::error::DataFusionError;\n    use datafusion::logical_expr::ScalarUDF;\n    use datafusion::physical_plan::ExecutionPlan;\n    use datafusion::{assert_batches_eq, physical_plan::common::collect, prelude::SessionContext};\n    use datafusion_physical_expr_adapter::PhysicalExprAdapterFactory;\n    use tempfile::TempDir;\n    use tokio::sync::mpsc;\n\n    use crate::execution::{operators::InputBatch, planner::PhysicalPlanner};\n\n    use crate::execution::operators::ExecutionError;\n    use crate::execution::planner::literal_to_array_ref;\n    use crate::parquet::parquet_support::SparkParquetOptions;\n    use crate::parquet::schema_adapter::SparkPhysicalExprAdapterFactory;\n    use datafusion_comet_proto::spark_expression::expr::ExprStruct;\n    use datafusion_comet_proto::spark_expression::ListLiteral;\n    use datafusion_comet_proto::{\n        spark_expression::expr::ExprStruct::*,\n        spark_expression::Expr,\n        spark_expression::{self, literal},\n        spark_operator,\n        spark_operator::{operator::OpStruct, Operator},\n    };\n    use datafusion_comet_spark_expr::EvalMode;\n\n    #[test]\n    fn test_unpack_dictionary_primitive() {\n        let op_scan = Operator {\n            plan_id: 0,\n            children: vec![],\n            op_struct: Some(OpStruct::Scan(spark_operator::Scan {\n                fields: vec![spark_expression::DataType {\n                    type_id: 3, // Int32\n                    type_info: None,\n                }],\n                source: \"\".to_string(),\n                arrow_ffi_safe: false,\n            })),\n        };\n\n        let op = create_filter(op_scan, 3);\n        let planner = PhysicalPlanner::default();\n        let row_count = 100;\n\n        // Create a dictionary array with 100 values, and use it as input to the execution.\n        let keys = Int32Array::new((0..(row_count as i32)).map(|n| n % 4).collect(), None);\n        let values = Int32Array::from(vec![0, 1, 2, 3]);\n        let input_array = DictionaryArray::new(keys, Arc::new(values));\n        let input_batch = InputBatch::Batch(vec![Arc::new(input_array)], row_count);\n\n        let (mut scans, _shuffle_scans, datafusion_plan) =\n            planner.create_plan(&op, &mut vec![], 1).unwrap();\n        scans[0].set_input_batch(input_batch);\n\n        let session_ctx = SessionContext::new();\n 
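       // Execute partition 0; the leaf ScanExec consumes the input batch set above\n 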
       let task_ctx = session_ctx.task_ctx();\n        let mut stream = datafusion_plan.native_plan.execute(0, task_ctx).unwrap();\n\n        let runtime = tokio::runtime::Runtime::new().unwrap();\n        runtime.block_on(async move {\n            let mut eof_sent = false;\n            let mut got_result = false;\n            loop {\n                match poll!(stream.next()) {\n                    Poll::Ready(Some(batch)) => {\n                        assert!(batch.is_ok(), \"got error {}\", batch.unwrap_err());\n                        let batch = batch.unwrap();\n                        assert_eq!(batch.num_rows(), row_count / 4);\n                        // dictionary should be unpacked\n                        assert!(matches!(batch.column(0).data_type(), DataType::Int32));\n                        got_result = true;\n                    }\n                    Poll::Ready(None) => {\n                        break;\n                    }\n                    Poll::Pending => {\n                        // Stream needs more input (e.g. FilterExec's batch coalescer\n                        // is accumulating). Send EOF to flush.\n                        if !eof_sent {\n                            scans[0].set_input_batch(InputBatch::EOF);\n                            eof_sent = true;\n                        }\n                    }\n                }\n            }\n            assert!(got_result, \"Expected at least one result batch\");\n        });\n    }\n\n    const STRING_TYPE_ID: i32 = 7;\n\n    #[test]\n    fn test_unpack_dictionary_string() {\n        let op_scan = Operator {\n            plan_id: 0,\n            children: vec![],\n            op_struct: Some(OpStruct::Scan(spark_operator::Scan {\n                fields: vec![spark_expression::DataType {\n                    type_id: STRING_TYPE_ID, // String\n                    type_info: None,\n                }],\n                source: \"\".to_string(),\n                arrow_ffi_safe: false,\n            })),\n        };\n\n        let lit = spark_expression::Literal {\n            value: Some(literal::Value::StringVal(\"foo\".to_string())),\n            datatype: Some(spark_expression::DataType {\n                type_id: STRING_TYPE_ID,\n                type_info: None,\n            }),\n            is_null: false,\n        };\n\n        let op = create_filter_literal(op_scan, STRING_TYPE_ID, lit);\n        let planner = PhysicalPlanner::default();\n\n        let row_count = 100;\n\n        let keys = Int32Array::new((0..(row_count as i32)).map(|n| n % 4).collect(), None);\n        let values = StringArray::from(vec![\"foo\", \"bar\", \"hello\", \"comet\"]);\n        let input_array = DictionaryArray::new(keys, Arc::new(values));\n        let input_batch = InputBatch::Batch(vec![Arc::new(input_array)], row_count);\n\n        let (mut scans, _shuffle_scans, datafusion_plan) =\n            planner.create_plan(&op, &mut vec![], 1).unwrap();\n\n        // Scan's schema is determined by the input batch, so we need to set it before execution.\n        scans[0].set_input_batch(input_batch);\n\n        let session_ctx = SessionContext::new();\n        let task_ctx = session_ctx.task_ctx();\n        let mut stream = datafusion_plan.native_plan.execute(0, task_ctx).unwrap();\n\n        let runtime = tokio::runtime::Runtime::new().unwrap();\n        runtime.block_on(async move {\n            let mut eof_sent = false;\n            let mut got_result = false;\n            loop {\n                match poll!(stream.next()) {\n        
            Poll::Ready(Some(batch)) => {\n                        assert!(batch.is_ok(), \"got error {}\", batch.unwrap_err());\n                        let batch = batch.unwrap();\n                        assert_eq!(batch.num_rows(), row_count / 4);\n                        // string/binary should no longer be packed with dictionary\n                        assert!(matches!(batch.column(0).data_type(), DataType::Utf8));\n                        got_result = true;\n                    }\n                    Poll::Ready(None) => {\n                        break;\n                    }\n                    Poll::Pending => {\n                        // Stream needs more input (e.g. FilterExec's batch coalescer\n                        // is accumulating). Send EOF to flush.\n                        if !eof_sent {\n                            scans[0].set_input_batch(InputBatch::EOF);\n                            eof_sent = true;\n                        }\n                    }\n                }\n            }\n            assert!(got_result, \"Expected at least one result batch\");\n        });\n    }\n\n    #[tokio::test()]\n    #[allow(clippy::field_reassign_with_default)]\n    async fn to_datafusion_filter() {\n        let op_scan = create_scan();\n        let op = create_filter(op_scan, 0);\n        let planner = PhysicalPlanner::default();\n\n        let (mut scans, _shuffle_scans, datafusion_plan) =\n            planner.create_plan(&op, &mut vec![], 1).unwrap();\n\n        let scan = &mut scans[0];\n        scan.set_input_batch(InputBatch::EOF);\n\n        let session_ctx = SessionContext::new();\n        let task_ctx = session_ctx.task_ctx();\n\n        let stream = datafusion_plan\n            .native_plan\n            .execute(0, Arc::clone(&task_ctx))\n            .unwrap();\n        let output = collect(stream).await.unwrap();\n        assert!(output.is_empty());\n    }\n\n    #[tokio::test()]\n    async fn from_datafusion_error_to_comet() {\n        let err_msg = \"exec error\";\n        let err = datafusion::common::DataFusionError::Execution(err_msg.to_string());\n        let comet_err: ExecutionError = err.into();\n        assert_eq!(comet_err.to_string(), \"Error from DataFusion: exec error.\");\n    }\n\n    // Creates a filter operator which takes an `Int32Array` and selects rows that are equal to\n    // `value`.\n    fn create_filter(child_op: spark_operator::Operator, value: i32) -> spark_operator::Operator {\n        let lit = spark_expression::Literal {\n            value: Some(literal::Value::IntVal(value)),\n            datatype: Some(spark_expression::DataType {\n                type_id: 3,\n                type_info: None,\n            }),\n            is_null: false,\n        };\n\n        create_filter_literal(child_op, 3, lit)\n    }\n\n    fn create_filter_literal(\n        child_op: spark_operator::Operator,\n        type_id: i32,\n        lit: spark_expression::Literal,\n    ) -> spark_operator::Operator {\n        let left = spark_expression::Expr {\n            expr_struct: Some(Bound(spark_expression::BoundReference {\n                index: 0,\n                datatype: Some(spark_expression::DataType {\n                    type_id,\n                    type_info: None,\n                }),\n            })),\n            query_context: None,\n            expr_id: None,\n        };\n        let right = spark_expression::Expr {\n            expr_struct: Some(Literal(lit)),\n            query_context: None,\n            expr_id: None,\n        };\n\n      
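  // Combine the bound column and the literal into an equality predicate\n      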
  let expr = spark_expression::Expr {\n            expr_struct: Some(Eq(Box::new(spark_expression::BinaryExpr {\n                left: Some(Box::new(left)),\n                right: Some(Box::new(right)),\n            }))),\n            query_context: None,\n            expr_id: None,\n        };\n\n        Operator {\n            plan_id: 0,\n            children: vec![child_op],\n            op_struct: Some(OpStruct::Filter(spark_operator::Filter {\n                predicate: Some(expr),\n            })),\n        }\n    }\n\n    #[test]\n    fn spark_plan_metrics_filter() {\n        let op_scan = create_scan();\n        let op = create_filter(op_scan, 0);\n        let planner = PhysicalPlanner::default();\n\n        let (_scans, _shuffle_scans, filter_exec) =\n            planner.create_plan(&op, &mut vec![], 1).unwrap();\n\n        assert_eq!(\"FilterExec\", filter_exec.native_plan.name());\n        assert_eq!(1, filter_exec.children.len());\n        assert_eq!(0, filter_exec.additional_native_plans.len());\n    }\n\n    #[test]\n    fn spark_plan_metrics_hash_join() {\n        let op_scan = create_scan();\n        let op_join = Operator {\n            plan_id: 0,\n            children: vec![op_scan.clone(), op_scan.clone()],\n            op_struct: Some(OpStruct::HashJoin(spark_operator::HashJoin {\n                left_join_keys: vec![create_bound_reference(0)],\n                right_join_keys: vec![create_bound_reference(0)],\n                join_type: 0,\n                condition: None,\n                build_side: 0,\n            })),\n        };\n\n        let planner = PhysicalPlanner::default();\n\n        let (_scans, _shuffle_scans, hash_join_exec) =\n            planner.create_plan(&op_join, &mut vec![], 1).unwrap();\n\n        assert_eq!(\"HashJoinExec\", hash_join_exec.native_plan.name());\n        assert_eq!(2, hash_join_exec.children.len());\n        assert_eq!(\"ScanExec\", hash_join_exec.children[0].native_plan.name());\n        assert_eq!(\"ScanExec\", hash_join_exec.children[1].native_plan.name());\n    }\n\n    fn create_bound_reference(index: i32) -> Expr {\n        Expr {\n            expr_struct: Some(Bound(spark_expression::BoundReference {\n                index,\n                datatype: Some(create_proto_datatype()),\n            })),\n            query_context: None,\n            expr_id: None,\n        }\n    }\n\n    fn create_scan() -> Operator {\n        Operator {\n            plan_id: 0,\n            children: vec![],\n            op_struct: Some(OpStruct::Scan(spark_operator::Scan {\n                fields: vec![create_proto_datatype()],\n                source: \"\".to_string(),\n                arrow_ffi_safe: false,\n            })),\n        }\n    }\n\n    fn create_proto_datatype() -> spark_expression::DataType {\n        spark_expression::DataType {\n            type_id: 3,\n            type_info: None,\n        }\n    }\n\n    #[test]\n    fn test_create_array() {\n        let session_ctx = SessionContext::new();\n        session_ctx.register_udf(ScalarUDF::from(\n            datafusion_functions_nested::make_array::MakeArray::new(),\n        ));\n        let task_ctx = session_ctx.task_ctx();\n        let planner = PhysicalPlanner::new(Arc::from(session_ctx), 0);\n\n        // Create a plan for\n        // ProjectionExec: expr=[make_array(col_0@0, col_1@1) as col_0]\n        // ScanExec: source=[CometScan parquet  (unknown)], schema=[col_0: Int32, col_1: Int32, col_2: Int32]\n        let op_scan = Operator {\n            plan_id: 0,\n            children: vec![],\n            
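// Mock scan operator with three Int32 columns (only the first two feed make_array)\n            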
op_struct: Some(OpStruct::Scan(spark_operator::Scan {\n                fields: vec![\n                    spark_expression::DataType {\n                        type_id: 3, // Int32\n                        type_info: None,\n                    },\n                    spark_expression::DataType {\n                        type_id: 3, // Int32\n                        type_info: None,\n                    },\n                    spark_expression::DataType {\n                        type_id: 3, // Int32\n                        type_info: None,\n                    },\n                ],\n                source: \"\".to_string(),\n                arrow_ffi_safe: false,\n            })),\n        };\n\n        let array_col = spark_expression::Expr {\n            expr_struct: Some(Bound(spark_expression::BoundReference {\n                index: 0,\n                datatype: Some(spark_expression::DataType {\n                    type_id: 3,\n                    type_info: None,\n                }),\n            })),\n            query_context: None,\n            expr_id: None,\n        };\n\n        let array_col_1 = spark_expression::Expr {\n            expr_struct: Some(Bound(spark_expression::BoundReference {\n                index: 1,\n                datatype: Some(spark_expression::DataType {\n                    type_id: 3,\n                    type_info: None,\n                }),\n            })),\n            query_context: None,\n            expr_id: None,\n        };\n\n        let projection = Operator {\n            children: vec![op_scan],\n            plan_id: 0,\n            op_struct: Some(OpStruct::Projection(spark_operator::Projection {\n                project_list: vec![spark_expression::Expr {\n                    expr_struct: Some(ExprStruct::ScalarFunc(spark_expression::ScalarFunc {\n                        func: \"make_array\".to_string(),\n                        args: vec![array_col, array_col_1],\n                        return_type: None,\n                        fail_on_error: false,\n                    })),\n                    query_context: None,\n                    expr_id: None,\n                }],\n            })),\n        };\n\n        let (mut scans, _shuffle_scans, datafusion_plan) =\n            planner.create_plan(&projection, &mut vec![], 1).unwrap();\n\n        let mut stream = datafusion_plan.native_plan.execute(0, task_ctx).unwrap();\n\n        let runtime = tokio::runtime::Runtime::new().unwrap();\n        let (tx, mut rx) = mpsc::channel(1);\n\n        // Separate thread to send the EOF signal once we've processed the only input batch\n        runtime.spawn(async move {\n            let a = Int32Array::from(vec![0, 3]);\n            let b = Int32Array::from(vec![1, 4]);\n            let c = Int32Array::from(vec![2, 5]);\n            let input_batch1 = InputBatch::Batch(vec![Arc::new(a), Arc::new(b), Arc::new(c)], 2);\n            let input_batch2 = InputBatch::EOF;\n\n            let batches = vec![input_batch1, input_batch2];\n\n            for batch in batches.into_iter() {\n                tx.send(batch).await.unwrap();\n            }\n        });\n\n        runtime.block_on(async move {\n            loop {\n                let batch = rx.recv().await.unwrap();\n                scans[0].set_input_batch(batch);\n                match poll!(stream.next()) {\n                    Poll::Ready(Some(batch)) => {\n                        assert!(batch.is_ok(), \"got error {}\", batch.unwrap_err());\n                        let batch = 
batch.unwrap();\n                        assert_eq!(batch.num_rows(), 2);\n                        let expected = [\n                            \"+--------+\",\n                            \"| col_0  |\",\n                            \"+--------+\",\n                            \"| [0, 1] |\",\n                            \"| [3, 4] |\",\n                            \"+--------+\",\n                        ];\n                        assert_batches_eq!(expected, &[batch]);\n                    }\n                    Poll::Ready(None) => {\n                        break;\n                    }\n                    _ => {}\n                }\n            }\n        });\n    }\n\n    #[test]\n    fn test_array_repeat() {\n        // Use built-in ArrayRepeat, not SparkArrayRepeat (see jni_api.rs comment)\n        let session_ctx = SessionContext::new();\n        let task_ctx = session_ctx.task_ctx();\n        let planner = PhysicalPlanner::new(Arc::from(session_ctx), 0);\n\n        // Mock scan operator with 3 INT32 columns\n        let op_scan = Operator {\n            plan_id: 0,\n            children: vec![],\n            op_struct: Some(OpStruct::Scan(spark_operator::Scan {\n                fields: vec![\n                    spark_expression::DataType {\n                        type_id: 3, // Int32\n                        type_info: None,\n                    },\n                    spark_expression::DataType {\n                        type_id: 3, // Int32\n                        type_info: None,\n                    },\n                    spark_expression::DataType {\n                        type_id: 3, // Int32\n                        type_info: None,\n                    },\n                ],\n                source: \"\".to_string(),\n                arrow_ffi_safe: false,\n            })),\n        };\n\n        // Mock expression to read an INT32 column at position 0\n        let array_col = spark_expression::Expr {\n            expr_struct: Some(Bound(spark_expression::BoundReference {\n                index: 0,\n                datatype: Some(spark_expression::DataType {\n                    type_id: 3,\n                    type_info: None,\n                }),\n            })),\n            query_context: None,\n            expr_id: None,\n        };\n\n        // Mock expression to read an INT32 column at position 1\n        let array_col_1 = spark_expression::Expr {\n            expr_struct: Some(Bound(spark_expression::BoundReference {\n                index: 1,\n                datatype: Some(spark_expression::DataType {\n                    type_id: 3,\n                    type_info: None,\n                }),\n            })),\n            query_context: None,\n            expr_id: None,\n        };\n\n        // Make a projection operator with array_repeat(array_col, array_col_1)\n        let projection = Operator {\n            children: vec![op_scan],\n            plan_id: 0,\n            op_struct: Some(OpStruct::Projection(spark_operator::Projection {\n                project_list: vec![spark_expression::Expr {\n                    expr_struct: Some(ExprStruct::ScalarFunc(spark_expression::ScalarFunc {\n                        func: \"array_repeat\".to_string(),\n                        args: vec![array_col, array_col_1],\n                        return_type: None,\n                        fail_on_error: false,\n                    })),\n                    query_context: None,\n                    expr_id: None,\n                }],\n            })),\n        
};\n\n        // Create a physical plan\n        let (mut scans, _shuffle_scans, datafusion_plan) =\n            planner.create_plan(&projection, &mut vec![], 1).unwrap();\n\n        // Start executing the plan\n        // The plan waits for incoming batches and emits results as input arrives\n        let mut stream = datafusion_plan.native_plan.execute(0, task_ctx).unwrap();\n\n        let runtime = tokio::runtime::Runtime::new().unwrap();\n        // create async channel\n        let (tx, mut rx) = mpsc::channel(1);\n\n        // Send input data to the plan from a separate task\n        runtime.spawn(async move {\n            // create data batch\n            // 0, 1, 2\n            // 3, 4, 5\n            // 6, null, null\n            let a = Int32Array::from(vec![Some(0), Some(3), Some(6)]);\n            let b = Int32Array::from(vec![Some(1), Some(4), None]);\n            let c = Int32Array::from(vec![Some(2), Some(5), None]);\n            let input_batch1 = InputBatch::Batch(vec![Arc::new(a), Arc::new(b), Arc::new(c)], 3);\n            let input_batch2 = InputBatch::EOF;\n\n            let batches = vec![input_batch1, input_batch2];\n\n            for batch in batches.into_iter() {\n                tx.send(batch).await.unwrap();\n            }\n        });\n\n        // Wait for the plan to finish executing and assert the result\n        runtime.block_on(async move {\n            loop {\n                let batch = rx.recv().await.unwrap();\n                scans[0].set_input_batch(batch);\n                match poll!(stream.next()) {\n                    Poll::Ready(Some(batch)) => {\n                        assert!(batch.is_ok(), \"got error {}\", batch.unwrap_err());\n                        let batch = batch.unwrap();\n                        let expected = [\n                            \"+--------------+\",\n                            \"| col_0        |\",\n                            \"+--------------+\",\n                            \"| [0]          |\",\n                            \"| [3, 3, 3, 3] |\",\n                            \"|              |\",\n                            \"+--------------+\",\n                        ];\n                        assert_batches_eq!(expected, &[batch]);\n                    }\n                    Poll::Ready(None) => {\n                        break;\n                    }\n                    _ => {}\n                }\n            }\n        });\n    }\n\n    /// Executes a `test_data_query` SQL query,\n    /// saves the result into a temp folder in Parquet format,\n    /// and reads the file back into memory using a custom schema\n    async fn make_parquet_data(\n        test_data_query: &str,\n        read_schema: Schema,\n    ) -> Result<RecordBatch, DataFusionError> {\n        let session_ctx = SessionContext::new();\n\n        // generate test data in the temp folder\n        let tmp_dir = TempDir::new()?;\n        let test_path = tmp_dir.path().to_str().unwrap();\n\n        let plan = session_ctx\n            .sql(test_data_query)\n            .await?\n            .create_physical_plan()\n            .await?;\n\n        // Write a parquet file into the temp folder\n        session_ctx.write_parquet(plan, test_path, None).await?;\n\n        // Register all Parquet files in the temp folder as file groups\n        let mut file_groups: Vec<FileGroup> = vec![];\n        for entry in std::fs::read_dir(test_path)? 
{\n            let entry = entry?;\n            let path = entry.path();\n\n            if path.extension().and_then(|ext| ext.to_str()) == Some(\"parquet\") {\n                if let Some(path_str) = path.to_str() {\n                    file_groups.push(FileGroup::new(vec![PartitionedFile::from_path(\n                        path_str.into(),\n                    )?]));\n                }\n            }\n        }\n\n        let source = Arc::new(\n            ParquetSource::new(Arc::new(read_schema.clone()))\n                .with_table_parquet_options(TableParquetOptions::new()),\n        ) as Arc<dyn FileSource>;\n\n        let spark_parquet_options = SparkParquetOptions::new(EvalMode::Legacy, \"UTC\", false);\n\n        let expr_adapter_factory: Arc<dyn PhysicalExprAdapterFactory> = Arc::new(\n            SparkPhysicalExprAdapterFactory::new(spark_parquet_options, None),\n        );\n\n        let object_store_url = ObjectStoreUrl::local_filesystem();\n        let file_scan_config = FileScanConfigBuilder::new(object_store_url, source)\n            .with_expr_adapter(Some(expr_adapter_factory))\n            .with_file_groups(file_groups)\n            .build();\n\n        // Run native read\n        let scan = Arc::new(DataSourceExec::new(Arc::new(file_scan_config.clone())));\n        let result: Vec<_> = scan.execute(0, session_ctx.task_ctx())?.collect().await;\n        Ok(result.first().unwrap().as_ref().unwrap().clone())\n    }\n\n    /*\n    Testing a nested types scenario\n\n    select arr[0].a, arr[0].c from (\n        select array(named_struct('a', 1, 'b', 'n', 'c', 'x')) arr)\n     */\n    #[tokio::test]\n    async fn test_nested_types_list_of_struct_by_index() -> Result<(), DataFusionError> {\n        let test_data = \"select make_array(named_struct('a', 1, 'b', 'n', 'c', 'x')) c0\";\n\n        // Define schema Comet reads with\n        let required_schema = Schema::new(Fields::from(vec![Field::new(\n            \"c0\",\n            DataType::List(\n                Field::new(\n                    \"element\",\n                    DataType::Struct(Fields::from(vec![\n                        Field::new(\"a\", DataType::Int32, true),\n                        Field::new(\"c\", DataType::Utf8, true),\n                    ] as Vec<Field>)),\n                    true,\n                )\n                .into(),\n            ),\n            true,\n        )]));\n\n        let actual = make_parquet_data(test_data, required_schema).await?;\n\n        let expected = [\n            \"+----------------+\",\n            \"| c0             |\",\n            \"+----------------+\",\n            \"| [{a: 1, c: x}] |\",\n            \"+----------------+\",\n        ];\n        assert_batches_eq!(expected, &[actual]);\n\n        Ok(())\n    }\n\n    /*\n    Testing a nested types scenario map[struct, struct]\n\n    select map_keys(m).b from (\n        select map(named_struct('a', 1, 'b', 'n', 'c', 'x'), named_struct('a', 1, 'b', 'n', 'c', 'x')) m\n     */\n    #[tokio::test]\n    async fn test_nested_types_map_keys() -> Result<(), DataFusionError> {\n        let test_data = \"select map([named_struct('a', 1, 'b', 'n', 'c', 'x')], [named_struct('a', 2, 'b', 'm', 'c', 'y')]) c0\";\n        let required_schema = Schema::new(Fields::from(vec![Field::new(\n            \"c0\",\n            DataType::Map(\n                Field::new(\n                    \"entries\",\n                    DataType::Struct(Fields::from(vec![\n                        Field::new(\n                            \"key\",\n     
                       DataType::Struct(Fields::from(vec![Field::new(\n                                \"b\",\n                                DataType::Utf8,\n                                true,\n                            )])),\n                            false,\n                        ),\n                        Field::new(\n                            \"value\",\n                            DataType::Struct(Fields::from(vec![\n                                Field::new(\"a\", DataType::Int64, true),\n                                Field::new(\"b\", DataType::Utf8, true),\n                                Field::new(\"c\", DataType::Utf8, true),\n                            ])),\n                            true,\n                        ),\n                    ] as Vec<Field>)),\n                    false,\n                )\n                .into(),\n                false,\n            ),\n            true,\n        )]));\n\n        let actual = make_parquet_data(test_data, required_schema).await?;\n        let expected = [\n            \"+------------------------------+\",\n            \"| c0                           |\",\n            \"+------------------------------+\",\n            \"| {{b: n}: {a: 2, b: m, c: y}} |\",\n            \"+------------------------------+\",\n        ];\n        assert_batches_eq!(expected, std::slice::from_ref(&actual));\n\n        Ok(())\n    }\n\n    // Read struct using schema where schema fields do not overlap with\n    // struct fields\n    #[tokio::test]\n    async fn test_nested_types_extract_missing_struct_names_non_overlap(\n    ) -> Result<(), DataFusionError> {\n        let test_data = \"select named_struct('a', 1, 'b', 'abc') c0\";\n        let required_schema = Schema::new(Fields::from(vec![Field::new(\n            \"c0\",\n            DataType::Struct(Fields::from(vec![\n                Field::new(\"c\", DataType::Int64, true),\n                Field::new(\"d\", DataType::Utf8, true),\n            ])),\n            true,\n        )]));\n        let actual = make_parquet_data(test_data, required_schema).await?;\n        let expected = [\"+----+\", \"| c0 |\", \"+----+\", \"|    |\", \"+----+\"];\n        assert_batches_eq!(expected, &[actual]);\n        Ok(())\n    }\n\n    // Read struct using custom schema to read just a single field from the struct\n    #[tokio::test]\n    async fn test_nested_types_extract_missing_struct_names_single_field(\n    ) -> Result<(), DataFusionError> {\n        let test_data = \"select named_struct('a', 1, 'b', 'abc') c0\";\n        let required_schema = Schema::new(Fields::from(vec![Field::new(\n            \"c0\",\n            DataType::Struct(Fields::from(vec![Field::new(\"a\", DataType::Int64, true)])),\n            true,\n        )]));\n        let actual = make_parquet_data(test_data, required_schema).await?;\n        let expected = [\n            \"+--------+\",\n            \"| c0     |\",\n            \"+--------+\",\n            \"| {a: 1} |\",\n            \"+--------+\",\n        ];\n        assert_batches_eq!(expected, &[actual]);\n        Ok(())\n    }\n\n    // Read struct using custom schema to handle a missing field\n    #[tokio::test]\n    async fn test_nested_types_extract_missing_struct_names_missing_field(\n    ) -> Result<(), DataFusionError> {\n        let test_data = \"select named_struct('a', 1, 'b', 'abc') c0\";\n        let required_schema = Schema::new(Fields::from(vec![Field::new(\n            \"c0\",\n            DataType::Struct(Fields::from(vec![\n                
Field::new(\"a\", DataType::Int64, true),\n                Field::new(\"x\", DataType::Int64, true),\n            ])),\n            true,\n        )]));\n        let actual = make_parquet_data(test_data, required_schema).await?;\n        let expected = [\n            \"+-------------+\",\n            \"| c0          |\",\n            \"+-------------+\",\n            \"| {a: 1, x: } |\",\n            \"+-------------+\",\n        ];\n        assert_batches_eq!(expected, &[actual]);\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_literal_to_list() -> Result<(), DataFusionError> {\n        /*\n           [\n               [\n                   [1, 2, 3],\n                   [4, 5, 6],\n                   [7, 8, 9, null],\n                   [],\n                   null\n               ],\n               [\n                   [10, null, 12]\n               ],\n               null,\n               []\n          ]\n        */\n        let data = ListLiteral {\n            list_values: vec![\n                ListLiteral {\n                    list_values: vec![\n                        ListLiteral {\n                            int_values: vec![1, 2, 3],\n                            null_mask: vec![true, true, true],\n                            ..Default::default()\n                        },\n                        ListLiteral {\n                            int_values: vec![4, 5, 6],\n                            null_mask: vec![true, true, true],\n                            ..Default::default()\n                        },\n                        ListLiteral {\n                            int_values: vec![7, 8, 9, 0],\n                            null_mask: vec![true, true, true, false],\n                            ..Default::default()\n                        },\n                        ListLiteral {\n                            ..Default::default()\n                        },\n                        ListLiteral {\n                            ..Default::default()\n                        },\n                    ],\n                    null_mask: vec![true, true, true, false, true],\n                    ..Default::default()\n                },\n                ListLiteral {\n                    list_values: vec![ListLiteral {\n                        int_values: vec![10, 0, 11],\n                        null_mask: vec![true, false, true],\n                        ..Default::default()\n                    }],\n                    null_mask: vec![true],\n                    ..Default::default()\n                },\n                ListLiteral {\n                    ..Default::default()\n                },\n                ListLiteral {\n                    ..Default::default()\n                },\n            ],\n            null_mask: vec![true, true, false, true],\n            ..Default::default()\n        };\n\n        let nested_type = DataType::List(FieldRef::from(Field::new(\n            \"item\",\n            DataType::List(\n                Field::new(\n                    \"item\",\n                    DataType::List(\n                        Field::new(\n                            \"item\",\n                            DataType::Int32,\n                            true, // Int32 nullable\n                        )\n                        .into(),\n                    ),\n                    true, // inner list nullable\n                )\n                .into(),\n            ),\n            true, // outer list nullable\n        )));\n\n        let array = 
literal_to_array_ref(nested_type, data)?;\n\n        // Top-level should be ListArray<ListArray<Int32>>\n        let list_outer = array.as_any().downcast_ref::<ListArray>().unwrap();\n        assert_eq!(list_outer.len(), 4);\n\n        // First outer element: ListArray<Int32>\n        let first_elem = list_outer.value(0);\n        let list_inner = first_elem.as_any().downcast_ref::<ListArray>().unwrap();\n        assert_eq!(list_inner.len(), 5);\n\n        // Inner values\n        let v0 = list_inner.value(0);\n        let vals0 = v0.as_any().downcast_ref::<Int32Array>().unwrap();\n        assert_eq!(vals0.values(), &[1, 2, 3]);\n\n        let v1 = list_inner.value(1);\n        let vals1 = v1.as_any().downcast_ref::<Int32Array>().unwrap();\n        assert_eq!(vals1.values(), &[4, 5, 6]);\n\n        let v2 = list_inner.value(2);\n        let vals2 = v2.as_any().downcast_ref::<Int32Array>().unwrap();\n        assert_eq!(vals2.values(), &[7, 8, 9, 0]);\n\n        // Second outer element\n        let second_elem = list_outer.value(1);\n        let list_inner2 = second_elem.as_any().downcast_ref::<ListArray>().unwrap();\n        assert_eq!(list_inner2.len(), 1);\n\n        let v3 = list_inner2.value(0);\n        let vals3 = v3.as_any().downcast_ref::<Int32Array>().unwrap();\n        assert_eq!(vals3.values(), &[10, 0, 11]);\n\n        Ok(())\n    }\n\n    /// Test that reproduces the \"Cast error: Casting from Int8 to Date32 not supported\" error\n    /// that occurs when performing date subtraction with Int8 (TINYINT) values.\n    /// This corresponds to the Scala test \"date_sub with int arrays\" in CometExpressionSuite.\n    ///\n    /// The error occurs because DataFusion's BinaryExpr tries to cast Int8 to Date32\n    /// when evaluating date - int8, but this cast is not supported.\n    #[test]\n    fn test_date_sub_with_int8_cast_error() {\n        use arrow::array::Date32Array;\n\n        let planner = PhysicalPlanner::default();\n        let row_count = 3;\n\n        // Create a Scan operator with Date32 (DATE) and Int8 (TINYINT) columns\n        let op_scan = Operator {\n            plan_id: 0,\n            children: vec![],\n            op_struct: Some(OpStruct::Scan(spark_operator::Scan {\n                fields: vec![\n                    spark_expression::DataType {\n                        type_id: 12, // DATE (Date32)\n                        type_info: None,\n                    },\n                    spark_expression::DataType {\n                        type_id: 1, // INT8 (TINYINT)\n                        type_info: None,\n                    },\n                ],\n                source: \"\".to_string(),\n                arrow_ffi_safe: false,\n            })),\n        };\n\n        // Create bound reference for the DATE column (index 0)\n        let date_col = spark_expression::Expr {\n            expr_struct: Some(Bound(spark_expression::BoundReference {\n                index: 0,\n                datatype: Some(spark_expression::DataType {\n                    type_id: 12, // DATE\n                    type_info: None,\n                }),\n            })),\n            expr_id: None,\n            query_context: None,\n        };\n\n        // Create bound reference for the INT8 column (index 1)\n        let int8_col = spark_expression::Expr {\n            expr_struct: Some(Bound(spark_expression::BoundReference {\n                index: 1,\n                datatype: Some(spark_expression::DataType {\n                    type_id: 1, // INT8\n                    
type_info: None,\n                }),\n            })),\n            expr_id: None,\n            query_context: None,\n        };\n\n        // Create a Subtract expression: date_col - int8_col\n        // This is equivalent to the SQL: SELECT _20 - _2 FROM tbl (date_sub operation)\n        // In the protobuf, subtract uses MathExpr type\n        let subtract_expr = spark_expression::Expr {\n            expr_struct: Some(ExprStruct::Subtract(Box::new(spark_expression::MathExpr {\n                left: Some(Box::new(date_col)),\n                right: Some(Box::new(int8_col)),\n                return_type: Some(spark_expression::DataType {\n                    type_id: 12, // DATE - result should be DATE\n                    type_info: None,\n                }),\n                eval_mode: 0, // Legacy mode\n            }))),\n            expr_id: None,\n            query_context: None,\n        };\n\n        // Create a projection operator with the subtract expression\n        let projection = Operator {\n            children: vec![op_scan],\n            plan_id: 1,\n            op_struct: Some(OpStruct::Projection(spark_operator::Projection {\n                project_list: vec![subtract_expr],\n            })),\n        };\n\n        // Create the physical plan\n        let (mut scans, _shuffle_scans, datafusion_plan) =\n            planner.create_plan(&projection, &mut vec![], 1).unwrap();\n\n        // Create test data: Date32 and Int8 columns\n        let date_array = Date32Array::from(vec![Some(19000), Some(19001), Some(19002)]);\n        let int8_array = Int8Array::from(vec![Some(1i8), Some(2i8), Some(3i8)]);\n\n        // Set input batch for the scan\n        let input_batch =\n            InputBatch::Batch(vec![Arc::new(date_array), Arc::new(int8_array)], row_count);\n        scans[0].set_input_batch(input_batch);\n\n        let session_ctx = SessionContext::new();\n        let task_ctx = session_ctx.task_ctx();\n        let mut stream = datafusion_plan.native_plan.execute(0, task_ctx).unwrap();\n\n        let runtime = tokio::runtime::Runtime::new().unwrap();\n        let (tx, mut rx) = mpsc::channel(1);\n\n        // Separate thread to send the EOF signal once we've processed the only input batch\n        runtime.spawn(async move {\n            // Create test data again for the second batch\n            let date_array = Date32Array::from(vec![Some(19000), Some(19001), Some(19002)]);\n            let int8_array = Int8Array::from(vec![Some(1i8), Some(2i8), Some(3i8)]);\n            let input_batch1 =\n                InputBatch::Batch(vec![Arc::new(date_array), Arc::new(int8_array)], row_count);\n            let input_batch2 = InputBatch::EOF;\n\n            let batches = vec![input_batch1, input_batch2];\n\n            for batch in batches.into_iter() {\n                tx.send(batch).await.unwrap();\n            }\n        });\n\n        runtime.block_on(async move {\n            loop {\n                let batch = rx.recv().await.unwrap();\n                scans[0].set_input_batch(batch);\n                match poll!(stream.next()) {\n                    Poll::Ready(Some(result)) => {\n                        // We expect success - the Int8 should be automatically cast to Int32\n                        assert!(\n                            result.is_ok(),\n                            \"Expected success for date - int8 operation but got error: {:?}\",\n                            result.unwrap_err()\n                        );\n\n                        let batch = 
result.unwrap();\n                        assert_eq!(batch.num_rows(), row_count);\n\n                        // The result should be Date32 type\n                        assert_eq!(batch.column(0).data_type(), &DataType::Date32);\n\n                        // Verify the values: 19000-1=18999, 19001-2=18999, 19002-3=18999\n                        let date_array = batch\n                            .column(0)\n                            .as_any()\n                            .downcast_ref::<Date32Array>()\n                            .unwrap();\n                        assert_eq!(date_array.value(0), 18999); // 19000 - 1\n                        assert_eq!(date_array.value(1), 18999); // 19001 - 2\n                        assert_eq!(date_array.value(2), 18999); // 19002 - 3\n                    }\n                    Poll::Ready(None) => {\n                        break;\n                    }\n                    _ => {}\n                }\n            }\n        });\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/serde.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Ser/De for expression/operators.\n\nuse super::operators::ExecutionError;\nuse crate::errors::ExpressionError;\nuse arrow::datatypes::{DataType as ArrowDataType, TimeUnit};\nuse arrow::datatypes::{Field, Fields};\nuse datafusion_comet_proto::{\n    spark_config, spark_expression,\n    spark_expression::data_type::{\n        data_type_info::DatatypeStruct,\n        DataTypeId,\n        DataTypeId::{Bool, Bytes, Decimal, Double, Float, Int16, Int32, Int64, Int8, String},\n    },\n    spark_expression::DataType,\n    spark_operator,\n};\nuse prost::Message;\nuse std::{io::Cursor, sync::Arc};\n\n/// Deserialize bytes to protobuf type of expression\npub fn deserialize_expr(buf: &[u8]) -> Result<spark_expression::Expr, ExpressionError> {\n    match spark_expression::Expr::decode(&mut Cursor::new(buf)) {\n        Ok(e) => Ok(e),\n        Err(err) => Err(ExpressionError::from(err)),\n    }\n}\n\n/// Deserialize bytes to protobuf type of operator\npub fn deserialize_op(buf: &[u8]) -> Result<spark_operator::Operator, ExecutionError> {\n    match spark_operator::Operator::decode(&mut Cursor::new(buf)) {\n        Ok(e) => Ok(e),\n        Err(err) => Err(ExecutionError::from(err)),\n    }\n}\n\n/// Deserialize bytes to protobuf type of data type\npub fn deserialize_config(buf: &[u8]) -> Result<spark_config::ConfigMap, ExecutionError> {\n    match spark_config::ConfigMap::decode(&mut Cursor::new(buf)) {\n        Ok(e) => Ok(e),\n        Err(err) => Err(ExecutionError::from(err)),\n    }\n}\n\n/// Deserialize bytes to protobuf type of data type\npub fn deserialize_data_type(buf: &[u8]) -> Result<spark_expression::DataType, ExecutionError> {\n    match spark_expression::DataType::decode(&mut Cursor::new(buf)) {\n        Ok(e) => Ok(e),\n        Err(err) => Err(ExecutionError::from(err)),\n    }\n}\n\n/// Converts Protobuf data type to Arrow data type.\npub fn to_arrow_datatype(dt_value: &DataType) -> ArrowDataType {\n    match DataTypeId::try_from(dt_value.type_id).unwrap() {\n        Bool => ArrowDataType::Boolean,\n        Int8 => ArrowDataType::Int8,\n        Int16 => ArrowDataType::Int16,\n        Int32 => ArrowDataType::Int32,\n        Int64 => ArrowDataType::Int64,\n        Float => ArrowDataType::Float32,\n        Double => ArrowDataType::Float64,\n        String => ArrowDataType::Utf8,\n        Bytes => ArrowDataType::Binary,\n        Decimal => match dt_value\n            .type_info\n            .as_ref()\n            .unwrap()\n            .datatype_struct\n            .as_ref()\n            .unwrap()\n        {\n            DatatypeStruct::Decimal(info) => {\n                ArrowDataType::Decimal128(info.precision as u8, info.scale as i8)\n            }\n            
_ => unreachable!(),\n        },\n        DataTypeId::Timestamp => {\n            ArrowDataType::Timestamp(TimeUnit::Microsecond, Some(\"UTC\".to_string().into()))\n        }\n        DataTypeId::TimestampNtz => ArrowDataType::Timestamp(TimeUnit::Microsecond, None),\n        DataTypeId::Date => ArrowDataType::Date32,\n        DataTypeId::Null => ArrowDataType::Null,\n        DataTypeId::List => match dt_value\n            .type_info\n            .as_ref()\n            .unwrap()\n            .datatype_struct\n            .as_ref()\n            .unwrap()\n        {\n            DatatypeStruct::List(info) => {\n                let field = Field::new(\n                    \"item\",\n                    to_arrow_datatype(info.element_type.as_ref().unwrap()),\n                    info.contains_null,\n                );\n                ArrowDataType::List(Arc::new(field))\n            }\n            _ => unreachable!(),\n        },\n        DataTypeId::Map => match dt_value\n            .type_info\n            .as_ref()\n            .unwrap()\n            .datatype_struct\n            .as_ref()\n            .unwrap()\n        {\n            DatatypeStruct::Map(info) => {\n                let key_field = Field::new(\n                    \"key\",\n                    to_arrow_datatype(info.key_type.as_ref().unwrap()),\n                    false,\n                );\n                let value_field = Field::new(\n                    \"value\",\n                    to_arrow_datatype(info.value_type.as_ref().unwrap()),\n                    info.value_contains_null,\n                );\n                let struct_field = Field::new(\n                    \"entries\",\n                    ArrowDataType::Struct(Fields::from(vec![key_field, value_field])),\n                    false,\n                );\n                ArrowDataType::Map(Arc::new(struct_field), false)\n            }\n            _ => unreachable!(),\n        },\n        DataTypeId::Struct => match dt_value\n            .type_info\n            .as_ref()\n            .unwrap()\n            .datatype_struct\n            .as_ref()\n            .unwrap()\n        {\n            DatatypeStruct::Struct(info) => {\n                let fields = info\n                    .field_names\n                    .iter()\n                    .enumerate()\n                    .map(|(idx, name)| {\n                        Field::new(\n                            name,\n                            to_arrow_datatype(&info.field_datatypes[idx]),\n                            info.field_nullable[idx],\n                        )\n                    })\n                    .collect();\n                ArrowDataType::Struct(fields)\n            }\n            _ => unreachable!(),\n        },\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/sort.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{cmp, mem, ptr};\n\n/// This is a copy of the `rdxsort-rs` crate, with the following changes:\n/// - removed `Rdx` implementations for all types except for i64 which is the packed representation\n///   of row addresses and partition ids from Spark.\npub trait Rdx {\n    /// Sets the number of buckets used by the generic implementation.\n    fn cfg_nbuckets() -> usize;\n\n    /// Sets the number of rounds scheduled by the generic implementation.\n    fn cfg_nrounds() -> usize;\n\n    /// Returns the bucket, depending on the round.\n    ///\n    /// This should respect the radix, e.g.:\n    ///\n    /// - if the number of buckets is `2` and the type is an unsigned integer, then the result is\n    ///   the bit starting with the least significant one.\n    /// - if the number of buckets is `8` and the type is an unsigned integer, then the result is\n    ///   the byte starting with the least significant one.\n    ///\n    /// **Never** return a bucker greater or equal the number of buckets. See warning above!\n    fn get_bucket(&self, round: usize) -> usize;\n\n    /// Describes the fact that the content of a bucket should be copied back in reverse order\n    /// after a certain round.\n    fn reverse(round: usize, bucket: usize) -> bool;\n}\n\nconst MASK_LONG_LOWER_40_BITS: u64 = (1u64 << 40) - 1;\nconst MASK_LONG_UPPER_24_BITS: u64 = !MASK_LONG_LOWER_40_BITS;\n\n/// `Rdx` implementation for particular i64 which represents a packed representation of row address\n/// and partition id from Spark.\nimpl Rdx for i64 {\n    #[inline]\n    fn cfg_nbuckets() -> usize {\n        16\n    }\n\n    #[inline]\n    fn cfg_nrounds() -> usize {\n        // Partition id is 3 bytes. 
6\n    }\n\n    #[inline]\n    fn get_bucket(&self, round: usize) -> usize {\n        let partition_id = (*self as u64 & MASK_LONG_UPPER_24_BITS) >> 40;\n\n        let shift = round << 2;\n        ((partition_id >> shift) & 15u64) as usize\n    }\n\n    #[inline]\n    fn reverse(_round: usize, _bucket: usize) -> bool {\n        false\n    }\n}\n\n/// Radix Sort implementation for some type\npub trait RdxSort {\n    /// Execute Radix Sort, overwrites (unsorted) content of the type.\n    fn rdxsort(&mut self);\n}\n\n#[inline]\nfn helper_bucket<T, I>(buckets_b: &mut [Vec<T>], iter: I, cfg_nbuckets: usize, round: usize)\nwhere\n    T: Rdx,\n    I: Iterator<Item = T>,\n{\n    for x in iter {\n        let b = x.get_bucket(round);\n        assert!(\n            b < cfg_nbuckets,\n            \"Your Rdx implementation returns a bucket >= cfg_nbuckets()!\"\n        );\n        unsafe {\n            buckets_b.get_unchecked_mut(b).push(x);\n        }\n    }\n}\n\nimpl<T> RdxSort for [T]\nwhere\n    T: Rdx + Clone,\n{\n    fn rdxsort(&mut self) {\n        // config\n        let cfg_nbuckets = T::cfg_nbuckets();\n        let cfg_nrounds = T::cfg_nrounds();\n\n        // early return\n        if cfg_nrounds == 0 {\n            return;\n        }\n\n        let n = self.len();\n        let presize = cmp::max(16, (n << 2) / cfg_nbuckets); // TODO: justify the presize value\n        let mut buckets_a: Vec<Vec<T>> = Vec::with_capacity(cfg_nbuckets);\n        let mut buckets_b: Vec<Vec<T>> = Vec::with_capacity(cfg_nbuckets);\n        for _ in 0..cfg_nbuckets {\n            buckets_a.push(Vec::with_capacity(presize));\n            buckets_b.push(Vec::with_capacity(presize));\n        }\n\n        helper_bucket(&mut buckets_a, self.iter().cloned(), cfg_nbuckets, 0);\n\n        for round in 1..cfg_nrounds {\n            for bucket in &mut buckets_b {\n                bucket.clear();\n            }\n            for (i, bucket) in buckets_a.iter().enumerate() {\n                if T::reverse(round - 1, i) {\n                    helper_bucket(\n                        &mut buckets_b,\n                        bucket.iter().rev().cloned(),\n                        cfg_nbuckets,\n                        round,\n                    );\n                } else {\n                    helper_bucket(&mut buckets_b, bucket.iter().cloned(), cfg_nbuckets, round);\n                }\n            }\n            mem::swap(&mut buckets_a, &mut buckets_b);\n        }\n\n        let mut pos = 0;\n        for (i, bucket) in buckets_a.iter_mut().enumerate() {\n            assert!(\n                pos + bucket.len() <= self.len(),\n                \"bug: a bucket got oversized\"\n            );\n\n            if T::reverse(cfg_nrounds - 1, i) {\n                for x in bucket.iter().rev().cloned() {\n                    unsafe {\n                        *self.get_unchecked_mut(pos) = x;\n                    }\n                    pos += 1;\n                }\n            } else {\n                if !bucket.is_empty() {\n                    unsafe {\n                        // SAFETY: the buckets are created with correct alignment\n                        // because they are defined as Vec<Vec<T>>\n                        ptr::copy_nonoverlapping(\n                            bucket.as_ptr(),\n                            self.as_mut_ptr().add(pos),\n                            bucket.len(),\n                        );\n                    }\n                }\n                pos += bucket.len();\n   
         }\n        }\n\n        assert!(pos == self.len(), \"bug: bucket size does not sum up\");\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    const MAXIMUM_PARTITION_ID: i32 = (1i32 << 24) - 1;\n    const MASK_LONG_LOWER_51_BITS: i64 = (1i64 << 51) - 1;\n    const MASK_LONG_UPPER_13_BITS: i64 = !MASK_LONG_LOWER_51_BITS;\n    const MASK_LONG_LOWER_27_BITS: i64 = (1i64 << 27) - 1;\n\n    /// Copied from Spark class `PackedRecordPointer`.\n    fn pack_pointer(pointer: i64, partition_id: i32) -> i64 {\n        assert!(partition_id <= MAXIMUM_PARTITION_ID);\n\n        let page_number = (pointer & MASK_LONG_UPPER_13_BITS) >> 24;\n        let compressed_address = page_number | (pointer & MASK_LONG_LOWER_27_BITS);\n        ((partition_id as i64) << 40) | compressed_address\n    }\n\n    #[test]\n    fn test_rdxsort() {\n        let mut v = vec![\n            pack_pointer(1, 0),\n            pack_pointer(2, 3),\n            pack_pointer(3, 2),\n            pack_pointer(4, 5),\n            pack_pointer(5, 0),\n            pack_pointer(6, 1),\n            pack_pointer(7, 3),\n            pack_pointer(8, 3),\n        ];\n        v.rdxsort();\n\n        let expected = vec![\n            pack_pointer(1, 0),\n            pack_pointer(5, 0),\n            pack_pointer(6, 1),\n            pack_pointer(3, 2),\n            pack_pointer(2, 3),\n            pack_pointer(7, 3),\n            pack_pointer(8, 3),\n            pack_pointer(4, 5),\n        ];\n\n        assert_eq!(v, expected);\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/spark_config.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::collections::HashMap;\n\npub(crate) const COMET_TRACING_ENABLED: &str = \"spark.comet.tracing.enabled\";\npub(crate) const COMET_DEBUG_ENABLED: &str = \"spark.comet.debug.enabled\";\npub(crate) const COMET_EXPLAIN_NATIVE_ENABLED: &str = \"spark.comet.explain.native.enabled\";\npub(crate) const COMET_MAX_TEMP_DIRECTORY_SIZE: &str = \"spark.comet.maxTempDirectorySize\";\npub(crate) const COMET_DEBUG_MEMORY: &str = \"spark.comet.debug.memory\";\npub(crate) const SPARK_EXECUTOR_CORES: &str = \"spark.executor.cores\";\n\npub(crate) trait SparkConfig {\n    fn get_bool(&self, name: &str) -> bool;\n    fn get_u64(&self, name: &str, default_value: u64) -> u64;\n    fn get_usize(&self, name: &str, default_value: usize) -> usize;\n}\n\nimpl SparkConfig for HashMap<String, String> {\n    fn get_bool(&self, name: &str) -> bool {\n        self.get(name)\n            .and_then(|str_val| str_val.parse::<bool>().ok())\n            .unwrap_or(false)\n    }\n\n    fn get_u64(&self, name: &str, default_value: u64) -> u64 {\n        self.get(name)\n            .and_then(|str_val| str_val.parse::<u64>().ok())\n            .unwrap_or(default_value)\n    }\n\n    fn get_usize(&self, name: &str, default_value: usize) -> usize {\n        self.get(name)\n            .and_then(|str_val| str_val.parse::<usize>().ok())\n            .unwrap_or(default_value)\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/spark_plan.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::datatypes::SchemaRef;\nuse datafusion::physical_plan::ExecutionPlan;\nuse std::sync::Arc;\n\n/// Wrapper around a native plan that maps to a Spark plan and can optionally contain\n/// references to other native plans that should contribute to the Spark SQL metrics\n/// for the root plan (such as CopyExec and ScanExec nodes)\n#[derive(Debug, Clone)]\npub(crate) struct SparkPlan {\n    /// Spark plan ID (used for informational purposes only)\n    pub(crate) plan_id: u32,\n    /// The root of the native plan that was generated for this Spark plan\n    pub(crate) native_plan: Arc<dyn ExecutionPlan>,\n    /// Child Spark plans\n    pub(crate) children: Vec<Arc<SparkPlan>>,\n    /// Additional native plans that were generated for this Spark plan that we need\n    /// to collect metrics for\n    pub(crate) additional_native_plans: Vec<Arc<dyn ExecutionPlan>>,\n}\n\nimpl SparkPlan {\n    /// Create a SparkPlan that consists of a single native plan\n    pub(crate) fn new(\n        plan_id: u32,\n        native_plan: Arc<dyn ExecutionPlan>,\n        children: Vec<Arc<SparkPlan>>,\n    ) -> Self {\n        Self {\n            plan_id,\n            native_plan,\n            children,\n            additional_native_plans: vec![],\n        }\n    }\n\n    /// Create a SparkPlan that consists of more than one native plan\n    pub(crate) fn new_with_additional(\n        plan_id: u32,\n        native_plan: Arc<dyn ExecutionPlan>,\n        children: Vec<Arc<SparkPlan>>,\n        additional_native_plans: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Self {\n        let mut accum: Vec<Arc<dyn ExecutionPlan>> = vec![];\n        for plan in &additional_native_plans {\n            accum.push(Arc::clone(plan));\n        }\n        Self {\n            plan_id,\n            native_plan,\n            children,\n            additional_native_plans: accum,\n        }\n    }\n\n    /// Get the schema of the native plan\n    pub(crate) fn schema(&self) -> SchemaRef {\n        self.native_plan.schema()\n    }\n\n    /// Get the child SparkPlan instances\n    pub(crate) fn children(&self) -> &Vec<Arc<SparkPlan>> {\n        &self.children\n    }\n}\n"
  },
  {
    "path": "native/core/src/execution/tracing.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub(crate) use datafusion_comet_common::tracing::*;\n"
  },
  {
    "path": "native/core/src/execution/utils.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n/// Utils for array vector, etc.\nuse crate::execution::operators::ExecutionError;\nuse arrow::{\n    array::ArrayData,\n    ffi::{from_ffi, FFI_ArrowArray, FFI_ArrowSchema},\n};\n\npub trait SparkArrowConvert {\n    /// Build Arrow Arrays from C data interface passed from Spark.\n    /// It accepts a tuple (ArrowArray address, ArrowSchema address).\n    fn from_spark(addresses: (i64, i64)) -> Result<Self, ExecutionError>\n    where\n        Self: Sized;\n\n    /// Move Arrow Arrays to C data interface.\n    fn move_to_spark(&self, array: i64, schema: i64) -> Result<(), ExecutionError>;\n}\n\nimpl SparkArrowConvert for ArrayData {\n    fn from_spark(addresses: (i64, i64)) -> Result<Self, ExecutionError> {\n        let (array_ptr, schema_ptr) = addresses;\n\n        let array_ptr = array_ptr as *mut FFI_ArrowArray;\n        let schema_ptr = schema_ptr as *mut FFI_ArrowSchema;\n\n        if array_ptr.is_null() || schema_ptr.is_null() {\n            return Err(ExecutionError::ArrowError(\n                \"At least one of passed pointers is null\".to_string(),\n            ));\n        };\n\n        // `ArrowArray` will convert raw pointers back to `Arc`. 
There is no risk of a memory\n        // leak.\n        let mut ffi_array = unsafe {\n            let array_data = std::ptr::replace(array_ptr, FFI_ArrowArray::empty());\n            let schema_data = std::ptr::replace(schema_ptr, FFI_ArrowSchema::empty());\n\n            from_ffi(array_data, &schema_data)?\n        };\n\n        // Align imported buffers from Java.\n        ffi_array.align_buffers();\n\n        Ok(ffi_array)\n    }\n\n    /// Move this ArrayData to pointers of the Arrow C data interface.\n    fn move_to_spark(&self, array: i64, schema: i64) -> Result<(), ExecutionError> {\n        let array_ptr = array as *mut FFI_ArrowArray;\n        let schema_ptr = schema as *mut FFI_ArrowSchema;\n\n        let array_align = std::mem::align_of::<FFI_ArrowArray>();\n        let schema_align = std::mem::align_of::<FFI_ArrowSchema>();\n\n        // Fall back to unaligned writes if the JVM-provided pointers are not aligned.\n        if array_ptr.align_offset(array_align) != 0 || schema_ptr.align_offset(schema_align) != 0 {\n            unsafe {\n                std::ptr::write_unaligned(array_ptr, FFI_ArrowArray::new(self));\n                std::ptr::write_unaligned(schema_ptr, FFI_ArrowSchema::try_from(self.data_type())?);\n            }\n        } else {\n            // SAFETY: `array_ptr` and `schema_ptr` are aligned correctly.\n            debug_assert_eq!(\n                array_ptr.align_offset(array_align),\n                0,\n                \"move_to_spark: array_ptr not aligned\"\n            );\n            debug_assert_eq!(\n                schema_ptr.align_offset(schema_align),\n                0,\n                \"move_to_spark: schema_ptr not aligned\"\n            );\n            unsafe {\n                std::ptr::write(array_ptr, FFI_ArrowArray::new(self));\n                std::ptr::write(schema_ptr, FFI_ArrowSchema::try_from(self.data_type())?);\n            }\n        }\n\n        Ok(())\n    }\n}\n\npub use datafusion_comet_common::bytes_to_i128;\n"
  },
  {
    "path": "native/core/src/lib.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n#![allow(incomplete_features)]\n#![allow(non_camel_case_types)]\n#![allow(clippy::upper_case_acronyms)]\n#![allow(clippy::result_large_err)]\n// For prost generated struct\n#![allow(clippy::derive_partial_eq_without_eq)]\n// The clippy throws an error if the reference clone not wrapped into `Arc::clone`\n// The lint makes easier for code reader/reviewer separate references clones from more heavyweight ones\n#![deny(clippy::clone_on_ref_ptr)]\nextern crate core;\n\n#[macro_use]\nextern crate datafusion_comet_jni_bridge;\n\nuse jni::{\n    objects::{JClass, JString},\n    EnvUnowned,\n};\nuse log::info;\nuse log4rs::{\n    append::console::{ConsoleAppender, Target},\n    config::{load_config_file, Appender, Deserializers, Root},\n    encode::pattern::PatternEncoder,\n    Config,\n};\n\n#[cfg(all(\n    not(target_env = \"msvc\"),\n    feature = \"jemalloc\",\n    not(feature = \"mimalloc\")\n))]\nuse tikv_jemallocator::Jemalloc;\n\n#[cfg(all(\n    feature = \"mimalloc\",\n    not(all(not(target_env = \"msvc\"), feature = \"jemalloc\"))\n))]\nuse mimalloc::MiMalloc;\n\n// Re-export from jvm-bridge crate for internal use\npub use datafusion_comet_jni_bridge::errors;\npub use datafusion_comet_jni_bridge::JAVA_VM;\n\n/// Re-export jvm-bridge items under the `jvm_bridge` name for convenience.\npub mod jvm_bridge {\n    pub use datafusion_comet_jni_bridge::*;\n}\n\nuse errors::{try_unwrap_or_throw, CometError, CometResult};\n\n#[macro_use]\npub mod common;\npub mod execution;\npub mod parquet;\n\n#[cfg(all(\n    not(target_env = \"msvc\"),\n    feature = \"jemalloc\",\n    not(feature = \"mimalloc\")\n))]\n#[global_allocator]\nstatic GLOBAL: Jemalloc = Jemalloc;\n\n#[cfg(all(\n    feature = \"mimalloc\",\n    not(all(not(target_env = \"msvc\"), feature = \"jemalloc\"))\n))]\n#[global_allocator]\nstatic GLOBAL: MiMalloc = MiMalloc;\n\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_NativeBase_init(\n    e: EnvUnowned,\n    _: JClass,\n    log_conf_path: JString,\n    log_level: JString,\n) {\n    // Initialize the error handling to capture panic backtraces\n    errors::init();\n\n    try_unwrap_or_throw(&e, |env| {\n        let path: String = log_conf_path.try_to_string(env)?;\n\n        // empty path means there is no custom log4rs config file provided, so fallback to use\n        // the default configuration\n        let log_config = if path.is_empty() {\n            let log_level: String = match log_level.try_to_string(env) {\n                Ok(level) => level,\n                Err(_) => \"info\".parse().unwrap(),\n            };\n            default_logger_config(&log_level)\n        } else {\n            load_config_file(path, Deserializers::default())\n     
           .map_err(|err| CometError::Config(err.to_string()))\n        };\n\n        let _ = log4rs::init_config(log_config?).map_err(|err| CometError::Config(err.to_string()));\n\n        // Initialize the global Java VM\n        let java_vm = env.get_java_vm()?;\n        JAVA_VM.get_or_init(|| java_vm);\n\n        let comet_version = env!(\"CARGO_PKG_VERSION\");\n        info!(\"Comet native library version {comet_version} initialized\");\n        Ok(())\n    })\n}\n\nconst LOG_PATTERN: &str = \"{d(%y/%m/%d %H:%M:%S)} {l} {f}: {m}{n}\";\n\n/// JNI method to check if a specific feature is enabled in the native Rust code.\n/// # Arguments\n/// * `feature_name` - The name of the feature to check. Supported features:\n///   - \"jemalloc\" - tikv-jemallocator memory allocator\n///   - \"hdfs\" - HDFS object store support\n///   - \"hdfs-opendal\" - HDFS support via OpenDAL\n/// # Returns\n/// * `1` (true) if the feature is enabled\n/// * `0` (false) if the feature is disabled or unknown\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_NativeBase_isFeatureEnabled(\n    env: EnvUnowned,\n    _: JClass,\n    feature_name: JString,\n) -> jni::sys::jboolean {\n    try_unwrap_or_throw(&env, |env| {\n        let feature: String = feature_name.try_to_string(env)?;\n\n        let enabled = match feature.as_str() {\n            \"jemalloc\" => cfg!(feature = \"jemalloc\"),\n            \"hdfs\" => cfg!(feature = \"hdfs\"),\n            \"hdfs-opendal\" => cfg!(feature = \"hdfs-opendal\"),\n            _ => false, // Unknown features return false\n        };\n\n        Ok(enabled)\n    })\n}\n\n// Creates a default log4rs config, which logs to console with log level.\nfn default_logger_config(log_level: &str) -> CometResult<Config> {\n    let console_append = ConsoleAppender::builder()\n        .target(Target::Stderr)\n        .encoder(Box::new(PatternEncoder::new(LOG_PATTERN)))\n        .build();\n    let appender = Appender::builder().build(\"console\", Box::new(console_append));\n    let root = Root::builder().appender(\"console\").build(\n        log_level\n            .parse()\n            .map_err(|err| CometError::Config(format!(\"{err}\")))?,\n    );\n    Config::builder()\n        .appender(appender)\n        .build(root)\n        .map_err(|err| CometError::Config(err.to_string()))\n}\n"
  },
  {
    "path": "native/core/src/parquet/cast_column.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\nuse arrow::{\n    array::{\n        make_array, Array, ArrayRef, LargeListArray, ListArray, MapArray, StructArray,\n        TimestampMicrosecondArray, TimestampMillisecondArray,\n    },\n    compute::CastOptions,\n    datatypes::{DataType, FieldRef, Schema, TimeUnit},\n    record_batch::RecordBatch,\n};\n\nuse crate::parquet::parquet_support::{spark_parquet_convert, SparkParquetOptions};\nuse datafusion::common::format::DEFAULT_CAST_OPTIONS;\nuse datafusion::common::Result as DataFusionResult;\nuse datafusion::common::ScalarValue;\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::{\n    any::Any,\n    fmt::{self, Display},\n    hash::Hash,\n    sync::Arc,\n};\n\n/// Returns true if two DataTypes are structurally equivalent (same data layout)\n/// but may differ in field names within nested types.\nfn types_differ_only_in_field_names(physical: &DataType, logical: &DataType) -> bool {\n    match (physical, logical) {\n        (DataType::List(pf), DataType::List(lf)) => {\n            pf.is_nullable() == lf.is_nullable()\n                && (pf.data_type() == lf.data_type()\n                    || types_differ_only_in_field_names(pf.data_type(), lf.data_type()))\n        }\n        (DataType::LargeList(pf), DataType::LargeList(lf)) => {\n            pf.is_nullable() == lf.is_nullable()\n                && (pf.data_type() == lf.data_type()\n                    || types_differ_only_in_field_names(pf.data_type(), lf.data_type()))\n        }\n        (DataType::Map(pf, p_sorted), DataType::Map(lf, l_sorted)) => {\n            p_sorted == l_sorted\n                && pf.is_nullable() == lf.is_nullable()\n                && (pf.data_type() == lf.data_type()\n                    || types_differ_only_in_field_names(pf.data_type(), lf.data_type()))\n        }\n        (DataType::Struct(pfields), DataType::Struct(lfields)) => {\n            // For Struct types, field names are semantically meaningful (they\n            // identify different columns), so we require name equality here.\n            // This distinguishes from List/Map wrapper field names (\"item\" vs\n            // \"element\") which are purely cosmetic.\n            pfields.len() == lfields.len()\n                && pfields.iter().zip(lfields.iter()).all(|(pf, lf)| {\n                    pf.name() == lf.name()\n                        && pf.is_nullable() == lf.is_nullable()\n                        && (pf.data_type() == lf.data_type()\n                            || types_differ_only_in_field_names(pf.data_type(), lf.data_type()))\n                })\n        }\n        _ => false,\n    }\n}\n\n/// Recursively relabel an array so its DataType matches 
`target_type`.\n/// This only changes metadata (field names, nullability flags in nested fields);\n/// it does NOT change the underlying buffer data.\nfn relabel_array(array: ArrayRef, target_type: &DataType) -> ArrayRef {\n    if array.data_type() == target_type {\n        return array;\n    }\n    match target_type {\n        DataType::List(target_field) => {\n            let list = array.as_any().downcast_ref::<ListArray>().unwrap();\n            let values = relabel_array(Arc::clone(list.values()), target_field.data_type());\n            Arc::new(ListArray::new(\n                Arc::clone(target_field),\n                list.offsets().clone(),\n                values,\n                list.nulls().cloned(),\n            ))\n        }\n        DataType::LargeList(target_field) => {\n            let list = array.as_any().downcast_ref::<LargeListArray>().unwrap();\n            let values = relabel_array(Arc::clone(list.values()), target_field.data_type());\n            Arc::new(LargeListArray::new(\n                Arc::clone(target_field),\n                list.offsets().clone(),\n                values,\n                list.nulls().cloned(),\n            ))\n        }\n        DataType::Map(target_entries_field, sorted) => {\n            let map = array.as_any().downcast_ref::<MapArray>().unwrap();\n            let entries = relabel_array(\n                Arc::new(map.entries().clone()),\n                target_entries_field.data_type(),\n            );\n            let entries_struct = entries.as_any().downcast_ref::<StructArray>().unwrap();\n            Arc::new(MapArray::new(\n                Arc::clone(target_entries_field),\n                map.offsets().clone(),\n                entries_struct.clone(),\n                map.nulls().cloned(),\n                *sorted,\n            ))\n        }\n        DataType::Struct(target_fields) => {\n            let struct_arr = array.as_any().downcast_ref::<StructArray>().unwrap();\n            let columns: Vec<ArrayRef> = target_fields\n                .iter()\n                .zip(struct_arr.columns())\n                .map(|(tf, col)| relabel_array(Arc::clone(col), tf.data_type()))\n                .collect();\n            Arc::new(StructArray::new(\n                target_fields.clone(),\n                columns,\n                struct_arr.nulls().cloned(),\n            ))\n        }\n        // Primitive types - shallow swap is safe\n        _ => {\n            let data = array.to_data();\n            let new_data = data\n                .into_builder()\n                .data_type(target_type.clone())\n                .build()\n                .expect(\"relabel_array: data layout must be compatible\");\n            make_array(new_data)\n        }\n    }\n}\n\n/// Casts a Timestamp(Microsecond) array to Timestamp(Millisecond) by dividing values by 1000.\n/// Preserves the timezone from the target type.\nfn cast_timestamp_micros_to_millis_array(\n    array: &ArrayRef,\n    target_tz: Option<Arc<str>>,\n) -> ArrayRef {\n    let micros_array = array\n        .as_any()\n        .downcast_ref::<TimestampMicrosecondArray>()\n        .expect(\"Expected TimestampMicrosecondArray\");\n\n    let millis_values: TimestampMillisecondArray =\n        arrow::compute::kernels::arity::unary(micros_array, |v| v / 1000);\n\n    // Apply timezone if present\n    let result = if let Some(tz) = target_tz {\n        millis_values.with_timezone(tz)\n    } else {\n        millis_values\n    };\n\n    Arc::new(result)\n}\n\n/// Casts a Timestamp(Microsecond) 
scalar to Timestamp(Millisecond) by dividing the value by 1000.\n/// Preserves the timezone from the target type.\nfn cast_timestamp_micros_to_millis_scalar(\n    opt_val: Option<i64>,\n    target_tz: Option<Arc<str>>,\n) -> ScalarValue {\n    let new_val = opt_val.map(|v| v / 1000);\n    ScalarValue::TimestampMillisecond(new_val, target_tz)\n}\n\n#[derive(Debug, Clone, Eq)]\npub struct CometCastColumnExpr {\n    /// The physical expression producing the value to cast.\n    expr: Arc<dyn PhysicalExpr>,\n    /// The physical field of the input column.\n    input_physical_field: FieldRef,\n    /// The field type required by query\n    target_field: FieldRef,\n    /// Options forwarded to [`cast_column`].\n    cast_options: CastOptions<'static>,\n    /// Spark parquet options for complex nested type conversions.\n    /// When present, enables `spark_parquet_convert` as a fallback.\n    parquet_options: Option<SparkParquetOptions>,\n}\n\n// Manually derive `PartialEq`/`Hash` as `Arc<dyn PhysicalExpr>` does not\n// implement these traits by default for the trait object.\nimpl PartialEq for CometCastColumnExpr {\n    fn eq(&self, other: &Self) -> bool {\n        self.expr.eq(&other.expr)\n            && self.input_physical_field.eq(&other.input_physical_field)\n            && self.target_field.eq(&other.target_field)\n            && self.cast_options.eq(&other.cast_options)\n            && self.parquet_options.eq(&other.parquet_options)\n    }\n}\n\nimpl Hash for CometCastColumnExpr {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.expr.hash(state);\n        self.input_physical_field.hash(state);\n        self.target_field.hash(state);\n        self.cast_options.hash(state);\n        self.parquet_options.hash(state);\n    }\n}\n\nimpl CometCastColumnExpr {\n    /// Create a new [`CometCastColumnExpr`].\n    pub fn new(\n        expr: Arc<dyn PhysicalExpr>,\n        physical_field: FieldRef,\n        target_field: FieldRef,\n        cast_options: Option<CastOptions<'static>>,\n    ) -> Self {\n        Self {\n            expr,\n            input_physical_field: physical_field,\n            target_field,\n            cast_options: cast_options.unwrap_or(DEFAULT_CAST_OPTIONS),\n            parquet_options: None,\n        }\n    }\n\n    /// Set Spark parquet options to enable complex nested type conversions.\n    pub fn with_parquet_options(mut self, options: SparkParquetOptions) -> Self {\n        self.parquet_options = Some(options);\n        self\n    }\n}\n\nimpl Display for CometCastColumnExpr {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        write!(\n            f,\n            \"COMET_CAST_COLUMN({} AS {})\",\n            self.expr,\n            self.target_field.data_type()\n        )\n    }\n}\n\nimpl PhysicalExpr for CometCastColumnExpr {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn data_type(&self, _input_schema: &Schema) -> DataFusionResult<DataType> {\n        Ok(self.target_field.data_type().clone())\n    }\n\n    fn nullable(&self, _input_schema: &Schema) -> DataFusionResult<bool> {\n        Ok(self.target_field.is_nullable())\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> DataFusionResult<ColumnarValue> {\n        let value = self.expr.evaluate(batch)?;\n\n        // Use == (PartialEq) instead of equals_datatype because equals_datatype\n        // ignores field names in nested types (Struct, List, Map). 
We need to detect\n        // when field names differ (e.g., Struct(\"a\",\"b\") vs Struct(\"c\",\"d\")) so that\n        // we can apply spark_parquet_convert for field-name-based selection.\n        if value.data_type() == *self.target_field.data_type() {\n            return Ok(value);\n        }\n\n        let input_physical_field = self.input_physical_field.data_type();\n        let target_field = self.target_field.data_type();\n\n        // Handle specific type conversions with custom casts\n        match (input_physical_field, target_field) {\n            // Timestamp(Microsecond) -> Timestamp(Millisecond)\n            (\n                DataType::Timestamp(TimeUnit::Microsecond, _),\n                DataType::Timestamp(TimeUnit::Millisecond, target_tz),\n            ) => match value {\n                ColumnarValue::Array(array) => {\n                    let casted = cast_timestamp_micros_to_millis_array(&array, target_tz.clone());\n                    Ok(ColumnarValue::Array(casted))\n                }\n                ColumnarValue::Scalar(ScalarValue::TimestampMicrosecond(opt_val, _)) => {\n                    let casted = cast_timestamp_micros_to_millis_scalar(opt_val, target_tz.clone());\n                    Ok(ColumnarValue::Scalar(casted))\n                }\n                _ => Ok(value),\n            },\n            // Nested types that differ only in field names (e.g., List element named\n            // \"item\" vs \"element\", or Map entries named \"key_value\" vs \"entries\").\n            // Re-label the array so the DataType metadata matches the logical schema.\n            (physical, logical)\n                if physical != logical && types_differ_only_in_field_names(physical, logical) =>\n            {\n                match value {\n                    ColumnarValue::Array(array) => {\n                        let relabeled = relabel_array(array, logical);\n                        Ok(ColumnarValue::Array(relabeled))\n                    }\n                    other => Ok(other),\n                }\n            }\n            // Fallback: use spark_parquet_convert for complex nested type conversions\n            // (e.g., List<Struct{a,b,c}> → List<Struct{a,c}>, Map field selection, etc.)\n            _ => {\n                if let Some(parquet_options) = &self.parquet_options {\n                    let converted = spark_parquet_convert(value, target_field, parquet_options)?;\n                    Ok(converted)\n                } else {\n                    Ok(value)\n                }\n            }\n        }\n    }\n\n    fn return_field(&self, _input_schema: &Schema) -> DataFusionResult<FieldRef> {\n        Ok(Arc::clone(&self.target_field))\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.expr]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        mut children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> DataFusionResult<Arc<dyn PhysicalExpr>> {\n        assert_eq!(children.len(), 1);\n        let child = children.pop().expect(\"CastColumnExpr child\");\n        let mut new_expr = Self::new(\n            child,\n            Arc::clone(&self.input_physical_field),\n            Arc::clone(&self.target_field),\n            Some(self.cast_options.clone()),\n        );\n        if let Some(opts) = &self.parquet_options {\n            new_expr = new_expr.with_parquet_options(opts.clone());\n        }\n        Ok(Arc::new(new_expr))\n    }\n\n    fn fmt_sql(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {\n        
Display::fmt(self, f)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{Array, Int32Array, StringArray};\n    use arrow::datatypes::{Field, Fields};\n    use datafusion::physical_expr::expressions::Column;\n\n    #[test]\n    fn test_cast_timestamp_micros_to_millis_array() {\n        // Create a TimestampMicrosecond array with some values\n        let micros_array: TimestampMicrosecondArray = vec![\n            Some(1_000_000),  // 1 second in micros\n            Some(2_500_000),  // 2.5 seconds in micros\n            None,             // null value\n            Some(0),          // zero\n            Some(-1_000_000), // negative value (before epoch)\n        ]\n        .into();\n        let array_ref: ArrayRef = Arc::new(micros_array);\n\n        // Cast without timezone\n        let result = cast_timestamp_micros_to_millis_array(&array_ref, None);\n        let millis_array = result\n            .as_any()\n            .downcast_ref::<TimestampMillisecondArray>()\n            .expect(\"Expected TimestampMillisecondArray\");\n\n        assert_eq!(millis_array.len(), 5);\n        assert_eq!(millis_array.value(0), 1000); // 1_000_000 / 1000\n        assert_eq!(millis_array.value(1), 2500); // 2_500_000 / 1000\n        assert!(millis_array.is_null(2));\n        assert_eq!(millis_array.value(3), 0);\n        assert_eq!(millis_array.value(4), -1000); // -1_000_000 / 1000\n    }\n\n    #[test]\n    fn test_cast_timestamp_micros_to_millis_array_with_timezone() {\n        let micros_array: TimestampMicrosecondArray = vec![Some(1_000_000), Some(2_000_000)].into();\n        let array_ref: ArrayRef = Arc::new(micros_array);\n\n        let target_tz: Option<Arc<str>> = Some(Arc::from(\"UTC\"));\n        let result = cast_timestamp_micros_to_millis_array(&array_ref, target_tz);\n        let millis_array = result\n            .as_any()\n            .downcast_ref::<TimestampMillisecondArray>()\n            .expect(\"Expected TimestampMillisecondArray\");\n\n        assert_eq!(millis_array.value(0), 1000);\n        assert_eq!(millis_array.value(1), 2000);\n        // Verify timezone is preserved\n        assert_eq!(\n            result.data_type(),\n            &DataType::Timestamp(TimeUnit::Millisecond, Some(Arc::from(\"UTC\")))\n        );\n    }\n\n    #[test]\n    fn test_cast_timestamp_micros_to_millis_scalar() {\n        // Test with a value\n        let result = cast_timestamp_micros_to_millis_scalar(Some(1_500_000), None);\n        assert_eq!(result, ScalarValue::TimestampMillisecond(Some(1500), None));\n\n        // Test with null\n        let null_result = cast_timestamp_micros_to_millis_scalar(None, None);\n        assert_eq!(null_result, ScalarValue::TimestampMillisecond(None, None));\n\n        // Test with timezone\n        let target_tz: Option<Arc<str>> = Some(Arc::from(\"UTC\"));\n        let tz_result = cast_timestamp_micros_to_millis_scalar(Some(2_000_000), target_tz.clone());\n        assert_eq!(\n            tz_result,\n            ScalarValue::TimestampMillisecond(Some(2000), target_tz)\n        );\n    }\n\n    #[test]\n    fn test_comet_cast_column_expr_evaluate_micros_to_millis_array() {\n        // Create input schema with TimestampMicrosecond column\n        let input_field = Arc::new(Field::new(\n            \"ts\",\n            DataType::Timestamp(TimeUnit::Microsecond, None),\n            true,\n        ));\n        let schema = Schema::new(vec![Arc::clone(&input_field)]);\n\n        // Create target field with TimestampMillisecond\n        let 
target_field = Arc::new(Field::new(\n            \"ts\",\n            DataType::Timestamp(TimeUnit::Millisecond, None),\n            true,\n        ));\n\n        // Create a column expression\n        let col_expr: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"ts\", 0));\n\n        // Create the CometCastColumnExpr\n        let cast_expr = CometCastColumnExpr::new(col_expr, input_field, target_field, None);\n\n        // Create a record batch with TimestampMicrosecond data\n        let micros_array: TimestampMicrosecondArray =\n            vec![Some(1_000_000), Some(2_000_000), None].into();\n        let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(micros_array)]).unwrap();\n\n        // Evaluate\n        let result = cast_expr.evaluate(&batch).unwrap();\n\n        match result {\n            ColumnarValue::Array(arr) => {\n                let millis_array = arr\n                    .as_any()\n                    .downcast_ref::<TimestampMillisecondArray>()\n                    .expect(\"Expected TimestampMillisecondArray\");\n                assert_eq!(millis_array.value(0), 1000);\n                assert_eq!(millis_array.value(1), 2000);\n                assert!(millis_array.is_null(2));\n            }\n            _ => panic!(\"Expected Array result\"),\n        }\n    }\n\n    #[test]\n    fn test_comet_cast_column_expr_evaluate_micros_to_millis_scalar() {\n        // Create input schema with TimestampMicrosecond column\n        let input_field = Arc::new(Field::new(\n            \"ts\",\n            DataType::Timestamp(TimeUnit::Microsecond, None),\n            true,\n        ));\n        let schema = Schema::new(vec![Arc::clone(&input_field)]);\n\n        // Create target field with TimestampMillisecond\n        let target_field = Arc::new(Field::new(\n            \"ts\",\n            DataType::Timestamp(TimeUnit::Millisecond, None),\n            true,\n        ));\n\n        // Create a literal expression that returns a scalar\n        let scalar = ScalarValue::TimestampMicrosecond(Some(1_500_000), None);\n        let literal_expr: Arc<dyn PhysicalExpr> =\n            Arc::new(datafusion::physical_expr::expressions::Literal::new(scalar));\n\n        // Create the CometCastColumnExpr\n        let cast_expr = CometCastColumnExpr::new(literal_expr, input_field, target_field, None);\n\n        // Create an empty batch (scalar doesn't need data)\n        let batch = RecordBatch::new_empty(Arc::new(schema));\n\n        // Evaluate\n        let result = cast_expr.evaluate(&batch).unwrap();\n\n        match result {\n            ColumnarValue::Scalar(s) => {\n                assert_eq!(s, ScalarValue::TimestampMillisecond(Some(1500), None));\n            }\n            _ => panic!(\"Expected Scalar result\"),\n        }\n    }\n\n    #[test]\n    fn test_relabel_list_field_name() {\n        // Physical: List(Field(\"item\", Int32))\n        // Logical:  List(Field(\"element\", Int32))\n        let physical_field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n        let logical_field = Arc::new(Field::new(\"element\", DataType::Int32, true));\n\n        let values = Int32Array::from(vec![1, 2, 3]);\n        let list = ListArray::new(\n            physical_field,\n            arrow::buffer::OffsetBuffer::new(vec![0, 2, 3].into()),\n            Arc::new(values),\n            None,\n        );\n        let array: ArrayRef = Arc::new(list);\n\n        let target_type = DataType::List(Arc::clone(&logical_field));\n        let result = relabel_array(array, 
&target_type);\n        assert_eq!(result.data_type(), &target_type);\n    }\n\n    #[test]\n    fn test_relabel_map_entries_field_name() {\n        // Physical: Map(Field(\"key_value\", Struct{key, value}))\n        // Logical:  Map(Field(\"entries\", Struct{key, value}))\n        let key_field = Arc::new(Field::new(\"key\", DataType::Utf8, false));\n        let value_field = Arc::new(Field::new(\"value\", DataType::Int32, true));\n        let struct_fields = Fields::from(vec![Arc::clone(&key_field), Arc::clone(&value_field)]);\n\n        let physical_entries_field = Arc::new(Field::new(\n            \"key_value\",\n            DataType::Struct(struct_fields.clone()),\n            false,\n        ));\n        let logical_entries_field = Arc::new(Field::new(\n            \"entries\",\n            DataType::Struct(struct_fields.clone()),\n            false,\n        ));\n\n        let keys = StringArray::from(vec![\"a\", \"b\"]);\n        let values = Int32Array::from(vec![1, 2]);\n        let entries = StructArray::new(struct_fields, vec![Arc::new(keys), Arc::new(values)], None);\n        let map = MapArray::new(\n            physical_entries_field,\n            arrow::buffer::OffsetBuffer::new(vec![0, 2].into()),\n            entries,\n            None,\n            false,\n        );\n        let array: ArrayRef = Arc::new(map);\n\n        let target_type = DataType::Map(logical_entries_field, false);\n        let result = relabel_array(array, &target_type);\n        assert_eq!(result.data_type(), &target_type);\n    }\n\n    #[test]\n    fn test_relabel_struct_metadata() {\n        // Physical: Struct { Field(\"a\", Int32, metadata={\"PARQUET:field_id\": \"1\"}) }\n        // Logical:  Struct { Field(\"a\", Int32, metadata={}) }\n        let mut metadata = std::collections::HashMap::new();\n        metadata.insert(\"PARQUET:field_id\".to_string(), \"1\".to_string());\n        let physical_field =\n            Arc::new(Field::new(\"a\", DataType::Int32, true).with_metadata(metadata));\n        let logical_field = Arc::new(Field::new(\"a\", DataType::Int32, true));\n\n        let col = Int32Array::from(vec![10, 20]);\n        let physical_fields = Fields::from(vec![physical_field]);\n        let logical_fields = Fields::from(vec![logical_field]);\n\n        let struct_arr = StructArray::new(physical_fields, vec![Arc::new(col)], None);\n        let array: ArrayRef = Arc::new(struct_arr);\n\n        let target_type = DataType::Struct(logical_fields);\n        let result = relabel_array(array, &target_type);\n        assert_eq!(result.data_type(), &target_type);\n    }\n\n    #[test]\n    fn test_relabel_nested_struct_containing_list() {\n        // Physical: Struct { Field(\"col\", List(Field(\"item\", Int32))) }\n        // Logical:  Struct { Field(\"col\", List(Field(\"element\", Int32))) }\n        let physical_list_field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n        let logical_list_field = Arc::new(Field::new(\"element\", DataType::Int32, true));\n\n        let physical_struct_field = Arc::new(Field::new(\n            \"col\",\n            DataType::List(Arc::clone(&physical_list_field)),\n            true,\n        ));\n        let logical_struct_field = Arc::new(Field::new(\n            \"col\",\n            DataType::List(Arc::clone(&logical_list_field)),\n            true,\n        ));\n\n        let values = Int32Array::from(vec![1, 2, 3]);\n        let list = ListArray::new(\n            physical_list_field,\n            
arrow::buffer::OffsetBuffer::new(vec![0, 2, 3].into()),\n            Arc::new(values),\n            None,\n        );\n\n        let physical_fields = Fields::from(vec![physical_struct_field]);\n        let logical_fields = Fields::from(vec![logical_struct_field]);\n\n        let struct_arr = StructArray::new(physical_fields, vec![Arc::new(list) as ArrayRef], None);\n        let array: ArrayRef = Arc::new(struct_arr);\n\n        let target_type = DataType::Struct(logical_fields);\n        let result = relabel_array(array, &target_type);\n        assert_eq!(result.data_type(), &target_type);\n\n        // Verify we can access the nested data without panics\n        let result_struct = result.as_any().downcast_ref::<StructArray>().unwrap();\n        let result_list = result_struct\n            .column(0)\n            .as_any()\n            .downcast_ref::<ListArray>()\n            .unwrap();\n        assert_eq!(result_list.len(), 2);\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/data_type.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse super::read::{PlainDecoding, PlainDictDecoding};\n\npub trait DataType: PlainDecoding + PlainDictDecoding + 'static {}\n\nmacro_rules! make_type {\n    ($name:ident) => {\n        pub struct $name {}\n        impl DataType for $name {}\n    };\n}\n\nmake_type!(BoolType);\nmake_type!(Int8Type);\nmake_type!(UInt8Type);\nmake_type!(Int16Type);\nmake_type!(Int16ToDoubleType);\nmake_type!(UInt16Type);\nmake_type!(Int32Type);\nmake_type!(Int32To64Type);\nmake_type!(Int32ToDecimal64Type);\nmake_type!(Int32ToDoubleType);\nmake_type!(UInt32Type);\nmake_type!(Int64Type);\nmake_type!(Int64ToDecimal64Type);\nmake_type!(UInt64Type);\nmake_type!(FloatType);\nmake_type!(DoubleType);\nmake_type!(FloatToDoubleType);\nmake_type!(ByteArrayType);\nmake_type!(StringType);\nmake_type!(Int32DecimalType);\nmake_type!(Int64DecimalType);\nmake_type!(FLBADecimalType);\nmake_type!(FLBADecimal32Type);\nmake_type!(FLBADecimal64Type);\nmake_type!(FLBAType);\nmake_type!(Int32DateType);\nmake_type!(Int32TimestampMicrosType);\nmake_type!(Int64TimestampMillisType);\nmake_type!(Int64TimestampMicrosType);\nmake_type!(Int96TimestampMicrosType);\n\npub trait AsBytes {\n    /// Returns the slice of bytes for an instance of this data type.\n    fn as_bytes(&self) -> &[u8];\n}\n\nimpl AsBytes for Vec<u8> {\n    fn as_bytes(&self) -> &[u8] {\n        self.as_slice()\n    }\n}\n\nimpl AsBytes for &str {\n    fn as_bytes(&self) -> &[u8] {\n        (self as &str).as_bytes()\n    }\n}\n\nimpl AsBytes for [u8] {\n    fn as_bytes(&self) -> &[u8] {\n        self\n    }\n}\n\nimpl AsBytes for str {\n    fn as_bytes(&self) -> &[u8] {\n        (self as &str).as_bytes()\n    }\n}\n\nmacro_rules! make_as_bytes {\n    ($source_ty:ident) => {\n        impl AsBytes for $source_ty {\n            #[allow(clippy::size_of_in_element_count)]\n            fn as_bytes(&self) -> &[u8] {\n                unsafe {\n                    ::std::slice::from_raw_parts(\n                        self as *const $source_ty as *const u8,\n                        ::std::mem::size_of::<$source_ty>(),\n                    )\n                }\n            }\n        }\n    };\n}\n\nmake_as_bytes!(bool);\nmake_as_bytes!(i8);\nmake_as_bytes!(u8);\nmake_as_bytes!(i16);\nmake_as_bytes!(u16);\nmake_as_bytes!(i32);\nmake_as_bytes!(u32);\nmake_as_bytes!(i64);\nmake_as_bytes!(u64);\nmake_as_bytes!(f32);\nmake_as_bytes!(f64);\nmake_as_bytes!(i128);\n"
  },
  {
    "path": "native/core/src/parquet/encryption_support.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::execution::operators::ExecutionError;\nuse crate::jvm_bridge::{check_exception, JVMClasses};\nuse arrow::datatypes::SchemaRef;\nuse async_trait::async_trait;\nuse datafusion::common::{extensions_options, Result as DataFusionResult};\nuse datafusion::config::EncryptionFactoryOptions;\nuse datafusion::error::DataFusionError;\nuse datafusion::execution::parquet_encryption::EncryptionFactory;\nuse jni::objects::{Global, JMethodID, JObject};\nuse object_store::path::Path;\nuse parquet::encryption::decrypt::{FileDecryptionProperties, KeyRetriever};\nuse parquet::encryption::encrypt::FileEncryptionProperties;\nuse parquet::errors::ParquetError;\nuse std::sync::Arc;\n\npub const ENCRYPTION_FACTORY_ID: &str = \"comet.jni_kms_encryption\";\n\nextensions_options! {\n    pub struct CometEncryptionConfig {\n        // Native side strips file down to a path (not a URI) but Spark wants the full URI,\n        // so we cache the prefix to stick on the front before calling over JNI\n        pub uri_base: String, default = \"file:///\".into()\n    }\n}\n\n#[derive(Debug)]\npub struct CometEncryptionFactory {\n    pub(crate) key_unwrapper: Arc<Global<JObject<'static>>>,\n}\n\n/// `EncryptionFactory` is a DataFusion trait for types that generate\n/// file encryption and decryption properties.\n#[async_trait]\nimpl EncryptionFactory for CometEncryptionFactory {\n    async fn get_file_encryption_properties(\n        &self,\n        _options: &EncryptionFactoryOptions,\n        _schema: &SchemaRef,\n        _file_path: &Path,\n    ) -> DataFusionResult<Option<Arc<FileEncryptionProperties>>> {\n        Err(DataFusionError::NotImplemented(\n            \"Comet does not support Parquet encryption yet.\"\n                .parse()\n                .unwrap(),\n        ))\n    }\n\n    /// Generate file decryption properties to use when reading a Parquet file.\n    /// Rather than provide the AES keys directly for decryption, we set a `KeyRetriever`\n    /// that can determine the keys using the encryption metadata.\n    async fn get_file_decryption_properties(\n        &self,\n        options: &EncryptionFactoryOptions,\n        file_path: &Path,\n    ) -> DataFusionResult<Option<Arc<FileDecryptionProperties>>> {\n        let config: CometEncryptionConfig = options.to_extension_options()?;\n\n        let full_path: String = config.uri_base + file_path.as_ref();\n        let key_retriever = CometKeyRetriever::new(&full_path, Arc::clone(&self.key_unwrapper))\n            .map_err(|e| DataFusionError::External(Box::new(e)))?;\n        let decryption_properties =\n            FileDecryptionProperties::with_key_retriever(Arc::new(key_retriever)).build()?;\n        
Ok(Some(decryption_properties))\n    }\n}\n\npub struct CometKeyRetriever {\n    file_path: String,\n    key_unwrapper: Arc<Global<JObject<'static>>>,\n    get_key_method_id: JMethodID,\n}\n\nimpl CometKeyRetriever {\n    pub fn new(\n        file_path: &str,\n        key_unwrapper: Arc<Global<JObject<'static>>>,\n    ) -> Result<Self, ExecutionError> {\n        JVMClasses::with_env(|env| {\n            Ok(CometKeyRetriever {\n                file_path: file_path.to_string(),\n                key_unwrapper,\n                get_key_method_id: env\n                    .get_method_id(\n                        jni::jni_str!(\"org/apache/comet/parquet/CometFileKeyUnwrapper\"),\n                        jni::jni_str!(\"getKey\"),\n                        jni::jni_sig!(\"(Ljava/lang/String;[B)[B\"),\n                    )\n                    .map_err(|e| {\n                        ExecutionError::GeneralError(format!(\"Failed to get JNI method ID: {}\", e))\n                    })?,\n            })\n        })\n    }\n}\n\nimpl KeyRetriever for CometKeyRetriever {\n    /// Get a data encryption key using the metadata stored in the Parquet file.\n    fn retrieve_key(&self, key_metadata: &[u8]) -> datafusion::parquet::errors::Result<Vec<u8>> {\n        use jni::signature::ReturnType;\n\n        JVMClasses::with_env(|env| {\n            // Get the key unwrapper instance from Global\n            let instance = self.key_unwrapper.as_obj();\n\n            // Convert file path to JString\n            let file_path_jstring = env\n                .new_string(&self.file_path)\n                .map_err(|e| ParquetError::General(format!(\"Failed to create JString: {}\", e)))?;\n\n            // Convert key_metadata to JByteArray\n            let key_metadata_array = env.byte_array_from_slice(key_metadata).map_err(|e| {\n                ParquetError::General(format!(\"Failed to create byte array: {}\", e))\n            })?;\n\n            // Call instance method FileKeyUnwrapper.getKey(String, byte[]) -> byte[]\n            let result = unsafe {\n                env.call_method_unchecked(\n                    instance,\n                    self.get_key_method_id,\n                    ReturnType::Array,\n                    &[\n                        jni::objects::JValue::from(&file_path_jstring).as_jni(),\n                        jni::objects::JValue::from(&key_metadata_array).as_jni(),\n                    ],\n                )\n            };\n\n            // Check for Java exceptions first, before processing the result\n            if let Some(exception) = check_exception(env).map_err(|e| {\n                ParquetError::General(format!(\"Failed to check for Java exception: {}\", e))\n            })? 
{\n                return Err(ParquetError::General(format!(\n                    \"Java exception during key retrieval: {}\",\n                    exception\n                )));\n            }\n\n            let result = result\n                .map_err(|e| ParquetError::General(format!(\"JNI method call failed: {}\", e)))?;\n\n            // Extract the byte array from the result\n            let result_array = result\n                .l()\n                .map_err(|e| ParquetError::General(format!(\"Failed to extract result: {}\", e)))?;\n\n            // Convert JObject to JByteArray and then to Vec<u8>\n            let byte_array =\n                unsafe { jni::objects::JByteArray::from_raw(env, result_array.into_raw()) };\n\n            let result_vec = env.convert_byte_array(&byte_array).map_err(|e| {\n                ParquetError::General(format!(\"Failed to convert byte array: {}\", e))\n            })?;\n            Ok(result_vec)\n        })\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub mod data_type;\npub mod encryption_support;\npub mod mutable_vector;\npub use mutable_vector::*;\n\n#[macro_use]\npub mod util;\npub mod parquet_exec;\npub mod parquet_read_cached_factory;\npub mod parquet_support;\npub mod read;\npub mod schema_adapter;\n\nmod cast_column;\nmod objectstore;\n\nuse std::collections::HashMap;\nuse std::task::Poll;\nuse std::{boxed::Box, sync::Arc};\n\nuse crate::errors::{try_unwrap_or_throw, CometError};\n\n/// JNI exposed methods\nuse jni::{\n    objects::{Global, JClass},\n    sys::{jboolean, jint, jlong},\n    Env, EnvUnowned,\n};\n\nuse self::util::jni::TypePromotionInfo;\nuse crate::execution::jni_api::get_runtime;\nuse crate::execution::metrics::utils::update_comet_metric;\nuse crate::execution::operators::ExecutionError;\nuse crate::execution::planner::PhysicalPlanner;\nuse crate::execution::serde;\nuse crate::execution::spark_plan::SparkPlan;\nuse crate::execution::utils::SparkArrowConvert;\nuse crate::jvm_bridge::JVMClasses;\nuse crate::parquet::data_type::AsBytes;\nuse crate::parquet::encryption_support::{CometEncryptionFactory, ENCRYPTION_FACTORY_ID};\nuse crate::parquet::parquet_exec::init_datasource_exec;\nuse crate::parquet::parquet_support::prepare_object_store_with_configs;\nuse arrow::array::{Array, RecordBatch};\nuse arrow::buffer::MutableBuffer;\nuse datafusion::datasource::listing::PartitionedFile;\nuse datafusion::execution::SendableRecordBatchStream;\nuse datafusion::physical_plan::ExecutionPlan;\nuse datafusion::prelude::{SessionConfig, SessionContext};\nuse futures::{poll, StreamExt};\nuse jni::objects::{JByteArray, JLongArray, JMap, JObject, JObjectArray, JString, ReleaseMode};\nuse jni::sys::{jintArray, JNI_FALSE};\nuse object_store::path::Path;\nuse read::ColumnReader;\nuse util::jni::{convert_column_descriptor, convert_encoding, deserialize_schema};\n\n/// Parquet read context maintained across multiple JNI calls.\nstruct Context {\n    pub column_reader: ColumnReader,\n}\n\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_parquet_Native_initColumnReader(\n    e: EnvUnowned,\n    _jclass: JClass,\n    primitive_type: jint,\n    logical_type: jint,\n    read_primitive_type: jint,\n    jni_path: JObjectArray,\n    max_dl: jint,\n    max_rl: jint,\n    bit_width: jint,\n    read_bit_width: jint,\n    is_signed: jboolean,\n    type_length: jint,\n    precision: jint,\n    read_precision: jint,\n    scale: jint,\n    read_scale: jint,\n    time_unit: jint,\n    is_adjusted_utc: jboolean,\n    batch_size: jint,\n    use_decimal_128: jboolean,\n    use_legacy_date_timestamp: jboolean,\n) -> jlong {\n    try_unwrap_or_throw(&e, |env| {\n        let desc = convert_column_descriptor(\n            
env,\n            primitive_type,\n            logical_type,\n            max_dl,\n            max_rl,\n            bit_width,\n            is_signed,\n            type_length,\n            precision,\n            scale,\n            time_unit,\n            is_adjusted_utc,\n            jni_path,\n        )?;\n        let promotion_info = TypePromotionInfo::new_from_jni(\n            read_primitive_type,\n            read_precision,\n            read_scale,\n            read_bit_width,\n        );\n        let ctx = Context {\n            column_reader: ColumnReader::get(\n                desc,\n                promotion_info,\n                batch_size as usize,\n                use_decimal_128,\n                use_legacy_date_timestamp,\n            ),\n        };\n        let res = Box::new(ctx);\n        Ok(Box::into_raw(res) as i64)\n    })\n}\n\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_parquet_Native_setDictionaryPage(\n    e: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n    page_value_count: jint,\n    page_data: JByteArray,\n    encoding: jint,\n) {\n    try_unwrap_or_throw(&e, |env| {\n        let reader = get_reader(handle)?;\n\n        // convert value encoding ordinal to the native encoding definition\n        let encoding = convert_encoding(encoding);\n\n        // copy the input on-heap buffer to native\n        let page_len = page_data.len(env)?;\n        let mut buffer = MutableBuffer::from_len_zeroed(page_len);\n        page_data.get_region(env, 0, from_u8_slice(buffer.as_slice_mut()))?;\n\n        reader.set_dictionary_page(page_value_count as usize, buffer.into(), encoding);\n        Ok(())\n    })\n}\n\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_parquet_Native_setPageV1(\n    e: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n    page_value_count: jint,\n    page_data: JByteArray,\n    value_encoding: jint,\n) {\n    try_unwrap_or_throw(&e, |env| {\n        let reader = get_reader(handle)?;\n\n        // convert value encoding ordinal to the native encoding definition\n        let encoding = convert_encoding(value_encoding);\n\n        // copy the input on-heap buffer to native\n        let page_len = page_data.len(env)?;\n        let mut buffer = MutableBuffer::from_len_zeroed(page_len);\n        page_data.get_region(env, 0, from_u8_slice(buffer.as_slice_mut()))?;\n\n        reader.set_page_v1(page_value_count as usize, buffer.into(), encoding);\n        Ok(())\n    })\n}\n\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_parquet_Native_setPageV2(\n    e: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n    page_value_count: jint,\n    def_level_data: JByteArray,\n    rep_level_data: JByteArray,\n    value_data: JByteArray,\n    value_encoding: jint,\n) {\n    try_unwrap_or_throw(&e, |env| {\n        let reader = get_reader(handle)?;\n\n        // convert value encoding ordinal to the native encoding definition\n        let encoding = convert_encoding(value_encoding);\n\n        // copy the input on-heap buffers to native\n        let dl_len = def_level_data.len(env)?;\n        let mut dl_buffer = MutableBuffer::from_len_zeroed(dl_len);\n        
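// Each of the three JNI arrays (definition levels, repetition levels, values) is\n        // copied into its own native buffer, matching the Parquet data page V2 layout;\n        // get_region writes through an i8 view of the u8 buffer (see from_u8_slice below).\n        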
def_level_data.get_region(env, 0, from_u8_slice(dl_buffer.as_slice_mut()))?;\n\n        let rl_len = rep_level_data.len(env)?;\n        let mut rl_buffer = MutableBuffer::from_len_zeroed(rl_len);\n        rep_level_data.get_region(env, 0, from_u8_slice(rl_buffer.as_slice_mut()))?;\n\n        let v_len = value_data.len(env)?;\n        let mut v_buffer = MutableBuffer::from_len_zeroed(v_len);\n        value_data.get_region(env, 0, from_u8_slice(v_buffer.as_slice_mut()))?;\n\n        reader.set_page_v2(\n            page_value_count as usize,\n            dl_buffer.into(),\n            rl_buffer.into(),\n            v_buffer.into(),\n            encoding,\n        );\n        Ok(())\n    })\n}\n\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_parquet_Native_resetBatch(\n    env: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n) {\n    try_unwrap_or_throw(&env, |_| {\n        let reader = get_reader(handle)?;\n        reader.reset_batch();\n        Ok(())\n    })\n}\n\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_parquet_Native_readBatch(\n    e: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n    batch_size: jint,\n    null_pad_size: jint,\n) -> jintArray {\n    try_unwrap_or_throw(&e, |env| {\n        let reader = get_reader(handle)?;\n        let (num_values, num_nulls) =\n            reader.read_batch(batch_size as usize, null_pad_size as usize);\n        let res = env.new_int_array(2)?;\n        let buf: [i32; 2] = [num_values as i32, num_nulls as i32];\n        res.set_region(env, 0, &buf)?;\n        Ok(res.into_raw())\n    })\n}\n\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_parquet_Native_skipBatch(\n    env: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n    batch_size: jint,\n    discard: jboolean,\n) -> jint {\n    try_unwrap_or_throw(&env, |_| {\n        let reader = get_reader(handle)?;\n        Ok(reader.skip_batch(batch_size as usize, discard) as jint)\n    })\n}\n\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_parquet_Native_currentBatch(\n    e: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n    array_addr: jlong,\n    schema_addr: jlong,\n) {\n    try_unwrap_or_throw(&e, |_env| {\n        let ctx = get_context(handle)?;\n        let reader = &mut ctx.column_reader;\n        let data = reader.current_batch()?;\n        data.move_to_spark(array_addr, schema_addr)\n            .map_err(|e| e.into())\n    })\n}\n\n#[inline]\nfn get_context<'a>(handle: jlong) -> Result<&'a mut Context, CometError> {\n    unsafe {\n        (handle as *mut Context)\n            .as_mut()\n            .ok_or_else(|| CometError::NullPointer(\"null context handle\".to_string()))\n    }\n}\n\n#[inline]\nfn get_reader<'a>(handle: jlong) -> Result<&'a mut ColumnReader, CometError> {\n    Ok(&mut get_context(handle)?.column_reader)\n}\n\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_parquet_Native_closeColumnReader(\n    env: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n) {\n    try_unwrap_or_throw(&env, |_| {\n        unsafe {\n            let ctx = get_context(handle)?;\n            let _ = Box::from_raw(ctx);\n        };\n        Ok(())\n    })\n}\n\nfn from_u8_slice(src: &mut [u8]) -> &mut [i8] {\n    let raw_ptr = src.as_mut_ptr() as *mut i8;\n    unsafe { std::slice::from_raw_parts_mut(raw_ptr, src.len()) }\n}\n\n// TODO: (ARROW NATIVE) remove this if not needed.\nenum ParquetReaderState {\n    Init,\n    Reading,\n    Complete,\n}\n/// Parquet read context maintained across multiple 
JNI calls.\nstruct BatchContext {\n    native_plan: Arc<SparkPlan>,\n    metrics_node: Arc<Global<JObject<'static>>>,\n    batch_stream: Option<SendableRecordBatchStream>,\n    current_batch: Option<RecordBatch>,\n    reader_state: ParquetReaderState,\n}\n\n#[inline]\nfn get_batch_context<'a>(handle: jlong) -> Result<&'a mut BatchContext, CometError> {\n    unsafe {\n        (handle as *mut BatchContext)\n            .as_mut()\n            .ok_or_else(|| CometError::NullPointer(\"null batch context handle\".to_string()))\n    }\n}\n\nfn get_file_groups_single_file(\n    path: &Path,\n    file_size: u64,\n    starts: &mut [i64],\n    lengths: &mut [i64],\n) -> Vec<Vec<PartitionedFile>> {\n    assert!(!starts.is_empty() && starts.len() == lengths.len());\n    let mut groups: Vec<PartitionedFile> = Vec::with_capacity(starts.len());\n    for (i, &start) in starts.iter().enumerate() {\n        let mut partitioned_file = PartitionedFile::new_with_range(\n            String::new(), // Dummy file path. We will override this with our path so that url encoding does not occur\n            file_size,\n            start,\n            start + lengths[i],\n        );\n        partitioned_file.object_meta.location = (*path).clone();\n        groups.push(partitioned_file);\n    }\n    vec![groups]\n}\n\npub fn get_object_store_options(\n    env: &mut Env,\n    map_object: JObject,\n) -> Result<HashMap<String, String>, CometError> {\n    let map = env.cast_local::<JMap>(map_object)?;\n    // Convert to a HashMap\n    let mut collected_map = HashMap::new();\n    map.iter(env).and_then(|mut iter| {\n        while let Some(entry) = iter.next(env)? {\n            let key = entry.key(env)?;\n            let value = entry.value(env)?;\n            let key = unsafe { JString::from_raw(env, key.into_raw()) };\n            let value = unsafe { JString::from_raw(env, value.into_raw()) };\n            let key_string = key.try_to_string(env)?;\n            let value_string = value.try_to_string(env)?;\n            collected_map.insert(key_string, value_string);\n        }\n        Ok(())\n    })?;\n\n    Ok(collected_map)\n}\n\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_parquet_Native_validateObjectStoreConfig(\n    e: EnvUnowned,\n    _jclass: JClass,\n    file_path: JString,\n    object_store_options: JObject,\n) {\n    try_unwrap_or_throw(&e, |env| {\n        let session_config = SessionConfig::new();\n        let planner =\n            PhysicalPlanner::new(Arc::new(SessionContext::new_with_config(session_config)), 0);\n        let session_ctx = planner.session_ctx();\n        let path: String = file_path.try_to_string(env).unwrap();\n        let object_store_config = get_object_store_options(env, object_store_options)?;\n        let (_, _) = prepare_object_store_with_configs(\n            session_ctx.runtime_env(),\n            path.clone(),\n            &object_store_config,\n        )?;\n        Ok(())\n    })\n}\n\n/// # Safety\n/// This function is inherently unsafe since it deals with raw pointers passed from JNI.\n#[no_mangle]\npub unsafe extern \"system\" fn Java_org_apache_comet_parquet_Native_initRecordBatchReader(\n    e: EnvUnowned,\n    _jclass: JClass,\n    file_path: JString,\n    file_size: jlong,\n    starts: JLongArray,\n    lengths: JLongArray,\n    filter: JByteArray,\n    required_schema: JByteArray,\n    data_schema: JByteArray,\n    session_timezone: JString,\n    
batch_size: jint,\n    case_sensitive: jboolean,\n    object_store_options: JObject,\n    key_unwrapper_obj: JObject,\n    metrics_node: JObject,\n) -> jlong {\n    try_unwrap_or_throw(&e, |env| unsafe {\n        JVMClasses::init(env);\n        let session_config = SessionConfig::new().with_batch_size(batch_size as usize);\n        let planner =\n            PhysicalPlanner::new(Arc::new(SessionContext::new_with_config(session_config)), 0);\n        let session_ctx = planner.session_ctx();\n\n        let path: String = file_path.try_to_string(env).unwrap();\n\n        let object_store_config = get_object_store_options(env, object_store_options)?;\n        let (object_store_url, object_store_path) = prepare_object_store_with_configs(\n            session_ctx.runtime_env(),\n            path.clone(),\n            &object_store_config,\n        )?;\n\n        let required_schema_buffer = env.convert_byte_array(&required_schema)?;\n        let required_schema = Arc::new(deserialize_schema(required_schema_buffer.as_bytes())?);\n\n        let data_schema_buffer = env.convert_byte_array(&data_schema)?;\n        let data_schema = Arc::new(deserialize_schema(data_schema_buffer.as_bytes())?);\n\n        let data_filters = if !filter.is_null() {\n            let filter_buffer = env.convert_byte_array(&filter)?;\n            let filter_expr = serde::deserialize_expr(filter_buffer.as_slice())?;\n            Some(vec![\n                planner.create_expr(&filter_expr, Arc::clone(&data_schema))?\n            ])\n        } else {\n            None\n        };\n        let starts = starts.get_elements(env, ReleaseMode::NoCopyBack)?;\n        let starts = core::slice::from_raw_parts_mut(starts.as_ptr(), starts.len());\n\n        let lengths = lengths.get_elements(env, ReleaseMode::NoCopyBack)?;\n        let lengths = core::slice::from_raw_parts_mut(lengths.as_ptr(), lengths.len());\n\n        let file_groups =\n            get_file_groups_single_file(&object_store_path, file_size as u64, starts, lengths);\n\n        let session_timezone: String = session_timezone.try_to_string(env).unwrap();\n\n        // Handle key unwrapper for encrypted files\n        let encryption_enabled = if !key_unwrapper_obj.is_null() {\n            let encryption_factory = CometEncryptionFactory {\n                key_unwrapper: Arc::new(jni_new_global_ref!(env, key_unwrapper_obj)?),\n            };\n            session_ctx\n                .runtime_env()\n                .register_parquet_encryption_factory(\n                    ENCRYPTION_FACTORY_ID,\n                    Arc::new(encryption_factory),\n                );\n            true\n        } else {\n            false\n        };\n\n        let scan = init_datasource_exec(\n            required_schema,\n            Some(data_schema),\n            None,\n            object_store_url,\n            file_groups,\n            None,\n            data_filters,\n            None,\n            session_timezone.as_str(),\n            case_sensitive != JNI_FALSE,\n            session_ctx,\n            encryption_enabled,\n        )?;\n\n        let partition_index: usize = 0;\n        let batch_stream = scan.execute(partition_index, session_ctx.task_ctx())?;\n\n        let ctx = BatchContext {\n            native_plan: Arc::new(SparkPlan::new(0, scan, vec![])),\n            metrics_node: Arc::new(jni_new_global_ref!(env, metrics_node)?),\n            batch_stream: Some(batch_stream),\n            current_batch: None,\n            reader_state: ParquetReaderState::Init,\n        };\n 
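       // Box the context and return its raw address to the JVM as an opaque handle;\n        // Java must call closeRecordBatchReader (below), which reclaims it via Box::from_raw.\n 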
       let res = Box::new(ctx);\n\n        Ok(Box::into_raw(res) as i64)\n    })\n}\n\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_parquet_Native_readNextRecordBatch(\n    e: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n) -> jint {\n    try_unwrap_or_throw(&e, |env| {\n        let context = get_batch_context(handle)?;\n        let mut rows_read: i32 = 0;\n        let batch_stream = context.batch_stream.as_mut().unwrap();\n        let runtime = get_runtime();\n\n        loop {\n            let next_item = batch_stream.next();\n            let poll_batch: Poll<Option<datafusion::common::Result<RecordBatch>>> =\n                runtime.block_on(async { poll!(next_item) });\n\n            match poll_batch {\n                Poll::Ready(Some(batch)) => {\n                    let batch = batch?;\n                    rows_read = batch.num_rows() as i32;\n                    context.current_batch = Some(batch);\n                    context.reader_state = ParquetReaderState::Reading;\n                    break;\n                }\n                Poll::Ready(None) => {\n                    // EOF\n\n                    update_comet_metric(env, context.metrics_node.as_obj(), &context.native_plan)?;\n\n                    context.current_batch = None;\n                    context.reader_state = ParquetReaderState::Complete;\n                    break;\n                }\n                Poll::Pending => {\n                    // TODO: (ARROW NATIVE): Just keep polling?\n                    // Ideally we would yield here to avoid consuming CPU while blocked on IO.\n                    continue;\n                }\n            }\n        }\n        Ok(rows_read)\n    })\n}\n\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_parquet_Native_currentColumnBatch(\n    e: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n    column_idx: jint,\n    array_addr: jlong,\n    schema_addr: jlong,\n) {\n    try_unwrap_or_throw(&e, |_env| {\n        let context = get_batch_context(handle)?;\n        let batch_reader = context\n            .current_batch\n            .as_mut()\n            .ok_or_else(|| CometError::Execution {\n                source: ExecutionError::GeneralError(\"There is no more data to read\".to_string()),\n            });\n        let data = batch_reader?.column(column_idx as usize).into_data();\n        data.move_to_spark(array_addr, schema_addr)\n            .map_err(|e| e.into())\n    })\n}\n\n#[no_mangle]\npub extern \"system\" fn Java_org_apache_comet_parquet_Native_closeRecordBatchReader(\n    env: EnvUnowned,\n    _jclass: JClass,\n    handle: jlong,\n) {\n    try_unwrap_or_throw(&env, |_| {\n        unsafe {\n            let ctx = get_batch_context(handle)?;\n            let _ = Box::from_raw(ctx);\n        };\n        Ok(())\n    })\n}\n"
  },
  {
    "path": "native/core/src/parquet/mutable_vector.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::{array::ArrayData, datatypes::DataType as ArrowDataType};\n\nuse crate::common::{bit, CometBuffer};\nuse crate::execution::operators::ExecutionError;\n\nconst DEFAULT_ARRAY_LEN: usize = 4;\n\n/// A mutable vector that can be re-used across batches, for Parquet read.\n///\n/// Note this class is similar to [`MutableVector`](crate::common::MutableVector). However, the\n/// latter has functionalities such as `ValueGetter`, `ValueSetter`. In addition, it represents\n/// String and Binary data using [`StringView`](crate::data_type::StringView), while this struct\n/// uses Arrow format to represent them.\n///\n/// TODO: unify the two structs in future\n#[derive(Debug)]\npub struct ParquetMutableVector {\n    /// The Arrow type for the elements of this vector.\n    pub(crate) arrow_type: ArrowDataType,\n\n    /// The number of total elements in this vector.\n    pub(crate) num_values: usize,\n\n    /// The number of null elements in this vector, must <= `num_values`.\n    pub(crate) num_nulls: usize,\n\n    /// The validity buffer of this Arrow vector. A bit set at position `i` indicates the `i`th\n    /// element is not null. Otherwise, an unset bit at position `i` indicates the `i`th element is\n    /// null.\n    pub(crate) validity_buffer: CometBuffer,\n\n    /// The value buffer of this Arrow vector. This could store either offsets if the vector\n    /// is of list or struct type, or actual values themselves otherwise.\n    pub(crate) value_buffer: CometBuffer,\n\n    /// Child vectors for non-primitive types (e.g., list, struct).\n    pub(crate) children: Vec<ParquetMutableVector>,\n\n    /// Dictionary (i.e., values) associated with this vector. 
Only set if using dictionary\n    /// encoding.\n    pub(crate) dictionary: Option<Box<ParquetMutableVector>>,\n}\n\nimpl ParquetMutableVector {\n    pub fn new(capacity: usize, arrow_type: &ArrowDataType) -> Self {\n        let bit_width = Self::bit_width(arrow_type);\n        Self::new_with_bit_width(capacity, arrow_type.clone(), bit_width)\n    }\n\n    pub fn new_with_bit_width(\n        capacity: usize,\n        arrow_type: ArrowDataType,\n        bit_width: usize,\n    ) -> Self {\n        let validity_len = capacity.div_ceil(8);\n        let validity_buffer = CometBuffer::new(validity_len);\n\n        let mut value_capacity = capacity;\n        if Self::is_binary_type(&arrow_type) {\n            // Arrow offset array needs to have one extra slot\n            value_capacity += 1;\n        }\n        // Make sure the capacity is positive\n        let len = usize::div_ceil(value_capacity * bit_width, 8);\n        let mut value_buffer = CometBuffer::new(len);\n\n        let mut children = Vec::new();\n\n        match arrow_type {\n            ArrowDataType::Binary | ArrowDataType::Utf8 => {\n                children.push(ParquetMutableVector::new_with_bit_width(\n                    capacity,\n                    ArrowDataType::Int8,\n                    DEFAULT_ARRAY_LEN * 8,\n                ));\n            }\n            _ => {}\n        }\n\n        if Self::is_binary_type(&arrow_type) {\n            // Setup the first offset which is always 0.\n            let zero: u32 = 0;\n            bit::memcpy_value(&zero, 4, &mut value_buffer);\n        }\n\n        Self {\n            arrow_type,\n            num_values: 0,\n            num_nulls: 0,\n            validity_buffer,\n            value_buffer,\n            children,\n            dictionary: None,\n        }\n    }\n\n    /// Whether the given value at `idx` of this vector is null.\n    #[inline]\n    pub fn is_null(&self, idx: usize) -> bool {\n        unsafe { !bit::get_bit_raw(self.validity_buffer.as_ptr(), idx) }\n    }\n\n    /// Resets this vector to the initial state.\n    #[inline]\n    pub fn reset(&mut self) {\n        self.num_values = 0;\n        self.num_nulls = 0;\n        self.validity_buffer.reset();\n        if Self::is_binary_type(&self.arrow_type) {\n            // Reset the first offset to 0\n            let zero: u32 = 0;\n            bit::memcpy_value(&zero, 4, &mut self.value_buffer);\n            // Also reset the child value vector\n            let child = &mut self.children[0];\n            child.reset();\n        } else if Self::should_reset_value_buffer(&self.arrow_type) {\n            self.value_buffer.reset();\n        }\n    }\n\n    /// Appends a new null value to the end of this vector.\n    #[inline]\n    pub fn put_null(&mut self) {\n        self.put_nulls(1)\n    }\n\n    /// Appends `n` null values to the end of this vector.\n    #[inline]\n    pub fn put_nulls(&mut self, n: usize) {\n        // We need to update offset buffer for binary.\n        if Self::is_binary_type(&self.arrow_type) {\n            let mut offset = self.num_values * 4;\n            let prev_offset_value = bit::read_u32(&self.value_buffer[offset..]);\n            offset += 4;\n            (0..n).for_each(|_| {\n                bit::memcpy_value(&prev_offset_value, 4, &mut self.value_buffer[offset..]);\n                offset += 4;\n            });\n        }\n\n        self.num_nulls += n;\n        self.num_values += n;\n    }\n\n    /// Returns the number of total values (including both null and non-null) of this 
vector.\n    #[inline]\n    pub fn num_values(&self) -> usize {\n        self.num_values\n    }\n\n    /// Returns the number of null values of this vector.\n    #[inline]\n    pub fn num_nulls(&self) -> usize {\n        self.num_nulls\n    }\n\n    /// Sets the dictionary of this to be `dict`.\n    pub fn set_dictionary(&mut self, dict: ParquetMutableVector) {\n        self.dictionary = Some(Box::new(dict))\n    }\n\n    /// Clones this into an Arrow [`ArrayData`](arrow::array::ArrayData). Note that the caller of\n    /// this method MUST make sure the returned `ArrayData` won't live longer than this vector\n    /// itself. Otherwise, dangling pointer may happen.\n    ///\n    /// # Safety\n    ///\n    /// This method is highly unsafe since it calls `CometBuffer::to_arrow` which leaks raw\n    /// pointer to the memory region that are tracked by `CometBuffer`. Please see comments on\n    /// `to_arrow` buffer to understand the motivation.\n    pub fn get_array_data(&mut self) -> Result<ArrayData, ExecutionError> {\n        unsafe {\n            let data_type = if let Some(d) = &self.dictionary {\n                ArrowDataType::Dictionary(\n                    Box::new(ArrowDataType::Int32),\n                    Box::new(d.arrow_type.clone()),\n                )\n            } else {\n                self.arrow_type.clone()\n            };\n            let mut builder = ArrayData::builder(data_type)\n                .len(self.num_values)\n                .add_buffer(self.value_buffer.to_arrow()?)\n                .null_bit_buffer(Some(self.validity_buffer.to_arrow()?))\n                .null_count(self.num_nulls);\n\n            if Self::is_binary_type(&self.arrow_type) && self.dictionary.is_none() {\n                let child = &mut self.children[0];\n                builder = builder.add_buffer(child.value_buffer.to_arrow()?);\n            }\n\n            if let Some(d) = &mut self.dictionary {\n                builder = builder.add_child_data(d.get_array_data()?);\n            }\n            Ok(builder.build_unchecked())\n        }\n    }\n\n    /// Returns the number of bits it takes to store one element of `arrow_type` in the value buffer\n    /// of this vector.\n    pub fn bit_width(arrow_type: &ArrowDataType) -> usize {\n        match arrow_type {\n            ArrowDataType::Boolean => 1,\n            ArrowDataType::Int8 => 8,\n            ArrowDataType::Int16 => 16,\n            ArrowDataType::Int32 | ArrowDataType::Float32 | ArrowDataType::Date32 => 32,\n            ArrowDataType::Int64 | ArrowDataType::Float64 | ArrowDataType::Timestamp(_, _) => 64,\n            ArrowDataType::FixedSizeBinary(type_length) => *type_length as usize * 8,\n            ArrowDataType::Decimal128(..) => 128, // Arrow stores decimal with 16 bytes\n            ArrowDataType::Binary | ArrowDataType::Utf8 => 32, // Only count offset size\n            dt => panic!(\"Unsupported Arrow data type: {dt:?}\"),\n        }\n    }\n\n    #[inline]\n    fn is_binary_type(dt: &ArrowDataType) -> bool {\n        matches!(dt, ArrowDataType::Binary | ArrowDataType::Utf8)\n    }\n\n    #[inline]\n    fn should_reset_value_buffer(dt: &ArrowDataType) -> bool {\n        // - Boolean type expects have a zeroed value buffer\n        // - Decimal may pad buffer with 0xff so we need to clear them before a new batch\n        matches!(dt, ArrowDataType::Boolean | ArrowDataType::Decimal128(_, _))\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/objectstore/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub mod s3;\n"
  },
  {
    "path": "native/core/src/parquet/objectstore/s3.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse log::{debug, error};\nuse std::collections::HashMap;\nuse std::sync::OnceLock;\nuse url::Url;\n\nuse crate::execution::jni_api::get_runtime;\nuse async_trait::async_trait;\nuse aws_config::{\n    ecs::EcsCredentialsProvider, environment::EnvironmentVariableCredentialsProvider,\n    imds::credentials::ImdsCredentialsProvider, meta::credentials::CredentialsProviderChain,\n    provider_config::ProviderConfig, sts::AssumeRoleProvider,\n    web_identity_token::WebIdentityTokenCredentialsProvider, BehaviorVersion,\n};\nuse aws_credential_types::{\n    provider::{error::CredentialsError, ProvideCredentials},\n    Credentials,\n};\nuse object_store::{\n    aws::{AmazonS3Builder, AmazonS3ConfigKey, AwsCredential},\n    path::Path,\n    CredentialProvider, ObjectStore, ObjectStoreScheme,\n};\nuse std::error::Error;\nuse std::{\n    sync::{Arc, RwLock},\n    time::{Duration, SystemTime},\n};\n\n/// Creates an S3 object store using options specified as Hadoop S3A configurations.\n///\n/// # Arguments\n///\n/// * `url` - The URL of the S3 object to access.\n/// * `configs` - The Hadoop S3A configurations to use for building the object store.\n/// * `min_ttl` - Time buffer before credential expiry when refresh should be triggered.\n///\n/// # Returns\n///\n/// * `(Box<dyn ObjectStore>, Path)` - The object store and path of the S3 object store.\n///\npub fn create_store(\n    url: &Url,\n    configs: &HashMap<String, String>,\n    min_ttl: Duration,\n) -> Result<(Box<dyn ObjectStore>, Path), object_store::Error> {\n    let (scheme, path) = ObjectStoreScheme::parse(url)?;\n    if scheme != ObjectStoreScheme::AmazonS3 {\n        return Err(object_store::Error::Generic {\n            store: \"S3\",\n            source: format!(\"Scheme of URL is not S3: {url}\").into(),\n        });\n    }\n    let path = Path::parse(path)?;\n\n    let mut builder = AmazonS3Builder::new()\n        .with_url(url.to_string())\n        .with_allow_http(true);\n    let bucket = url.host_str().ok_or_else(|| object_store::Error::Generic {\n        store: \"S3\",\n        source: \"Missing bucket name in S3 URL\".into(),\n    })?;\n\n    let credential_provider =\n        get_runtime().block_on(build_credential_provider(configs, bucket, min_ttl))?;\n    builder = match credential_provider {\n        Some(provider) => builder.with_credentials(Arc::new(provider)),\n        None => builder.with_skip_signature(true),\n    };\n\n    let s3_configs = extract_s3_config_options(configs, bucket);\n    debug!(\"S3 configs for bucket {bucket}: {s3_configs:?}\");\n\n    // When using the default AWS S3 endpoint (no custom endpoint configured), a valid region\n    // is required. 
If no region is explicitly configured, attempt to auto-resolve it by\n    // making a HeadBucket request to determine the bucket's region.\n    if !s3_configs.contains_key(&AmazonS3ConfigKey::Endpoint)\n        && !s3_configs.contains_key(&AmazonS3ConfigKey::Region)\n    {\n        let region = get_runtime()\n            .block_on(resolve_bucket_region(bucket))\n            .map_err(|e| object_store::Error::Generic {\n                store: \"S3\",\n                source: format!(\"Failed to resolve region: {e}\").into(),\n            })?;\n        debug!(\"resolved region: {region:?}\");\n        builder = builder.with_config(AmazonS3ConfigKey::Region, region.to_string());\n    }\n\n    for (key, value) in s3_configs {\n        builder = builder.with_config(key, value);\n    }\n\n    let object_store = builder.build()?;\n\n    Ok((Box::new(object_store), path))\n}\n\n/// Process-wide cache of resolved S3 bucket regions, keyed by bucket name.\n///\n/// ## Why static / process lifetime?\n///\n/// See the equivalent rationale on `object_store_cache` in `parquet_support.rs`: the JNI\n/// call site creates a new `RuntimeEnv` per file, leaving the executor process as the only\n/// available scope for cross-call state.  In the standard Spark-on-Kubernetes deployment\n/// model each executor is dedicated to a single application, so process and application\n/// lifetimes are equivalent.\n///\n/// ## Unbounded size\n///\n/// A Spark job accesses a bounded, typically small set of S3 buckets, so the number of\n/// entries stays proportional to the number of distinct buckets.  Entries are just\n/// `(String, String)` pairs and the set does not grow beyond what the job actually touches.\n///\n/// ## Invalidation\n///\n/// An S3 bucket's region is permanently fixed at creation time and cannot change; no\n/// invalidation is therefore needed.  This is what makes a static, never-evicting cache\n/// safe here and on the equivalent region-resolution path inside the `object_store` crate.\nfn region_cache() -> &'static RwLock<HashMap<String, String>> {\n    static CACHE: OnceLock<RwLock<HashMap<String, String>>> = OnceLock::new();\n    CACHE.get_or_init(|| RwLock::new(HashMap::new()))\n}\n\n/// Get the bucket region using the [HeadBucket API]. 
This will fail if the bucket does not exist.\n/// Results are cached per bucket to avoid redundant network calls.\n///\n/// [HeadBucket API]: https://docs.aws.amazon.com/AmazonS3/latest/API/API_HeadBucket.html\n///\n/// TODO this is copied from the object store crate and has been adapted as a workaround\n/// for https://github.com/apache/arrow-rs-object-store/issues/479\npub async fn resolve_bucket_region(bucket: &str) -> Result<String, Box<dyn Error>> {\n    // Check cache first\n    if let Ok(cache) = region_cache().read() {\n        if let Some(region) = cache.get(bucket) {\n            debug!(\"Using cached region '{region}' for bucket '{bucket}'\");\n            return Ok(region.clone());\n        }\n    }\n\n    let endpoint = format!(\"https://{bucket}.s3.amazonaws.com\");\n    let client = reqwest::Client::new();\n\n    let response = client.head(&endpoint).send().await?;\n\n    if response.status() == reqwest::StatusCode::NOT_FOUND {\n        return Err(Box::new(object_store::Error::Generic {\n            store: \"S3\",\n            source: format!(\"Bucket not found: {bucket}\").into(),\n        }));\n    }\n\n    let region = response\n        .headers()\n        .get(\"x-amz-bucket-region\")\n        .ok_or_else(|| {\n            Box::new(object_store::Error::Generic {\n                store: \"S3\",\n                source: format!(\"Missing region for bucket: {bucket}\").into(),\n            })\n        })?\n        .to_str()?\n        .to_string();\n\n    // Cache the resolved region\n    if let Ok(mut cache) = region_cache().write() {\n        debug!(\"Caching region '{region}' for bucket '{bucket}'\");\n        cache.insert(bucket.to_string(), region.clone());\n    }\n\n    Ok(region)\n}\n\n/// Extracts S3 configuration options from Hadoop S3A configurations and returns them\n/// as a HashMap of (AmazonS3ConfigKey, String) pairs that can be applied to an AmazonS3Builder.\n///\n/// # Arguments\n///\n/// * `configs` - The Hadoop S3A configurations to extract from.\n/// * `bucket` - The bucket name to extract configurations for.\n///\n/// # Returns\n///\n/// * `HashMap<AmazonS3ConfigKey, String>` - The extracted S3 configuration options.\n///\nfn extract_s3_config_options(\n    configs: &HashMap<String, String>,\n    bucket: &str,\n) -> HashMap<AmazonS3ConfigKey, String> {\n    let mut s3_configs = HashMap::new();\n\n    // Extract region configuration\n    if let Some(region) = get_config_trimmed(configs, bucket, \"endpoint.region\") {\n        s3_configs.insert(AmazonS3ConfigKey::Region, region.to_string());\n    }\n\n    // Extract and handle path style access (virtual hosted style)\n    let mut virtual_hosted_style_request = false;\n    if let Some(path_style) = get_config_trimmed(configs, bucket, \"path.style.access\") {\n        virtual_hosted_style_request = path_style.to_lowercase() == \"true\";\n        s3_configs.insert(\n            AmazonS3ConfigKey::VirtualHostedStyleRequest,\n            virtual_hosted_style_request.to_string(),\n        );\n    }\n\n    // Extract endpoint configuration and modify if virtual hosted style is enabled\n    if let Some(endpoint) = get_config_trimmed(configs, bucket, \"endpoint\") {\n        let normalized_endpoint =\n            normalize_endpoint(endpoint, bucket, virtual_hosted_style_request);\n        if let Some(endpoint) = normalized_endpoint {\n            s3_configs.insert(AmazonS3ConfigKey::Endpoint, endpoint);\n        }\n    }\n\n    // Extract request payer configuration\n    if let Some(requester_pays) = 
get_config_trimmed(configs, bucket, \"requester.pays.enabled\") {\n        let requester_pays_enabled = requester_pays.to_lowercase() == \"true\";\n        s3_configs.insert(\n            AmazonS3ConfigKey::RequestPayer,\n            requester_pays_enabled.to_string(),\n        );\n    }\n\n    s3_configs\n}\n\nfn normalize_endpoint(\n    endpoint: &str,\n    bucket: &str,\n    virtual_hosted_style_request: bool,\n) -> Option<String> {\n    if endpoint.is_empty() {\n        return None;\n    }\n\n    // This is the default Hadoop S3A configuration. Explicitly specifying this endpoint will lead to HTTP\n    // request failures when using object_store crate, so we ignore it and let object_store crate\n    // use the default endpoint.\n    if endpoint == \"s3.amazonaws.com\" {\n        return None;\n    }\n\n    let endpoint = if !endpoint.starts_with(\"http://\") && !endpoint.starts_with(\"https://\") {\n        format!(\"https://{endpoint}\")\n    } else {\n        endpoint.to_string()\n    };\n\n    if virtual_hosted_style_request {\n        if endpoint.ends_with(\"/\") {\n            Some(format!(\"{endpoint}{bucket}\"))\n        } else {\n            Some(format!(\"{endpoint}/{bucket}\"))\n        }\n    } else {\n        Some(endpoint) // Avoid extra to_string() call since endpoint is already a String\n    }\n}\n\nfn get_config<'a>(\n    configs: &'a HashMap<String, String>,\n    bucket: &str,\n    property: &str,\n) -> Option<&'a String> {\n    let per_bucket_key = format!(\"fs.s3a.bucket.{bucket}.{property}\");\n    configs.get(&per_bucket_key).or_else(|| {\n        let global_key = format!(\"fs.s3a.{property}\");\n        configs.get(&global_key)\n    })\n}\n\nfn get_config_trimmed<'a>(\n    configs: &'a HashMap<String, String>,\n    bucket: &str,\n    property: &str,\n) -> Option<&'a str> {\n    get_config(configs, bucket, property).map(|s| s.trim())\n}\n\n// Hadoop S3A credential provider constants\nconst HADOOP_IAM_INSTANCE: &str = \"org.apache.hadoop.fs.s3a.auth.IAMInstanceCredentialsProvider\";\nconst HADOOP_SIMPLE: &str = \"org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider\";\nconst HADOOP_TEMPORARY: &str = \"org.apache.hadoop.fs.s3a.TemporaryAWSCredentialsProvider\";\nconst HADOOP_ASSUMED_ROLE: &str = \"org.apache.hadoop.fs.s3a.auth.AssumedRoleCredentialProvider\";\nconst HADOOP_ANONYMOUS: &str = \"org.apache.hadoop.fs.s3a.AnonymousAWSCredentialsProvider\";\n\n// AWS SDK credential provider constants\nconst AWS_CONTAINER_CREDENTIALS: &str =\n    \"software.amazon.awssdk.auth.credentials.ContainerCredentialsProvider\";\nconst AWS_CONTAINER_CREDENTIALS_V1: &str = \"com.amazonaws.auth.ContainerCredentialsProvider\";\nconst AWS_EC2_CONTAINER_CREDENTIALS: &str =\n    \"com.amazonaws.auth.EC2ContainerCredentialsProviderWrapper\";\nconst AWS_INSTANCE_PROFILE: &str =\n    \"software.amazon.awssdk.auth.credentials.InstanceProfileCredentialsProvider\";\nconst AWS_INSTANCE_PROFILE_V1: &str = \"com.amazonaws.auth.InstanceProfileCredentialsProvider\";\nconst AWS_ENVIRONMENT: &str =\n    \"software.amazon.awssdk.auth.credentials.EnvironmentVariableCredentialsProvider\";\nconst AWS_ENVIRONMENT_V1: &str = \"com.amazonaws.auth.EnvironmentVariableCredentialsProvider\";\nconst AWS_WEB_IDENTITY: &str =\n    \"software.amazon.awssdk.auth.credentials.WebIdentityTokenFileCredentialsProvider\";\nconst AWS_WEB_IDENTITY_V1: &str = \"com.amazonaws.auth.WebIdentityTokenCredentialsProvider\";\nconst AWS_ANONYMOUS: &str = \"software.amazon.awssdk.auth.credentials.AnonymousCredentialsProvider\";\nconst 
AWS_ANONYMOUS_V1: &str = \"com.amazonaws.auth.AnonymousAWSCredentials\";\n\n/// Builds an AWS credential provider from the given configurations.\n/// It first checks whether the configured credential provider is anonymous and, if so, returns `None`.\n/// Otherwise, it builds a [CachedAwsCredentialProvider] from the given configurations.\n///\n/// # Arguments\n///\n/// * `configs` - The Hadoop S3A configurations to use for building the credential provider.\n/// * `bucket` - The bucket to build the credential provider for.\n/// * `min_ttl` - Time buffer before credential expiry when refresh should be triggered.\n///\n/// # Returns\n///\n/// * `None` - If the credential provider is anonymous.\n/// * `Some(CachedAwsCredentialProvider)` - If the credential provider is not anonymous.\n///\nasync fn build_credential_provider(\n    configs: &HashMap<String, String>,\n    bucket: &str,\n    min_ttl: Duration,\n) -> Result<Option<CachedAwsCredentialProvider>, object_store::Error> {\n    let aws_credential_provider_names =\n        get_config_trimmed(configs, bucket, \"aws.credentials.provider\");\n    let aws_credential_provider_names =\n        aws_credential_provider_names.map_or(Vec::new(), |s| parse_credential_provider_names(s));\n    if aws_credential_provider_names\n        .iter()\n        .any(|name| is_anonymous_credential_provider(name))\n    {\n        if aws_credential_provider_names.len() > 1 {\n            return Err(object_store::Error::Generic {\n                store: \"S3\",\n                source:\n                    \"Anonymous credential provider cannot be mixed with other credential providers\"\n                        .into(),\n            });\n        }\n        return Ok(None);\n    }\n    let provider_metadata = build_chained_aws_credential_provider_metadata(\n        aws_credential_provider_names,\n        configs,\n        bucket,\n    )?;\n    debug!(\n        \"Credential providers for S3 bucket {}: {}\",\n        bucket,\n        provider_metadata.simple_string()\n    );\n    let provider = provider_metadata.create_credential_provider().await?;\n    Ok(Some(CachedAwsCredentialProvider::new(\n        provider,\n        provider_metadata,\n        min_ttl,\n    )))\n}\n\nfn parse_credential_provider_names(aws_credential_provider_names: &str) -> Vec<&str> {\n    aws_credential_provider_names\n        .split(',')\n        .map(|s| s.trim())\n        .filter(|s| !s.is_empty())\n        .collect::<Vec<&str>>()\n}\n\nfn is_anonymous_credential_provider(credential_provider_name: &str) -> bool {\n    [HADOOP_ANONYMOUS, AWS_ANONYMOUS_V1, AWS_ANONYMOUS].contains(&credential_provider_name)\n}\n\n
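// For illustration of the anonymous short-circuit above: setting\n// \"fs.s3a.aws.credentials.provider\" to HADOOP_ANONYMOUS (or either AWS SDK\n// anonymous provider name) makes `build_credential_provider` return `Ok(None)`,\n// so the S3 client is built without credentials; listing an anonymous provider\n// together with any other provider is rejected with an error instead.\nfn build_chained_aws_credential_provider_metadata(\n    credential_provider_names: Vec<&str>,\n    configs: &HashMap<String, String>,\n    bucket: &str,\n) -> Result<CredentialProviderMetadata, object_store::Error> {\n    if credential_provider_names.is_empty() {\n        // Use the default credential provider chain. 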
This is actually more permissive than\n        // the default Hadoop S3A FileSystem behavior, which only uses\n        // TemporaryAWSCredentialsProvider, SimpleAWSCredentialsProvider,\n        // EnvironmentVariableCredentialsProvider and IAMInstanceCredentialsProvider.\n        return Ok(CredentialProviderMetadata::Default);\n    }\n\n    // credential_provider_names is not empty, so indexing its first element cannot panic\n    let provider_name = credential_provider_names[0];\n    let provider_metadata = build_aws_credential_provider_metadata(provider_name, configs, bucket)?;\n    if credential_provider_names.len() == 1 {\n        // No need to build a chain when there is only one provider\n        return Ok(provider_metadata);\n    }\n\n    // More than one credential provider name was specified, so chain them together\n    let mut metadata_vec = vec![provider_metadata];\n    for provider_name in credential_provider_names[1..].iter() {\n        let provider_metadata =\n            build_aws_credential_provider_metadata(provider_name, configs, bucket)?;\n        metadata_vec.push(provider_metadata);\n    }\n\n    Ok(CredentialProviderMetadata::Chain(metadata_vec))\n}\n\nfn build_aws_credential_provider_metadata(\n    credential_provider_name: &str,\n    configs: &HashMap<String, String>,\n    bucket: &str,\n) -> Result<CredentialProviderMetadata, object_store::Error> {\n    match credential_provider_name {\n        AWS_CONTAINER_CREDENTIALS\n        | AWS_CONTAINER_CREDENTIALS_V1\n        | AWS_EC2_CONTAINER_CREDENTIALS => Ok(CredentialProviderMetadata::Ecs),\n        AWS_INSTANCE_PROFILE | AWS_INSTANCE_PROFILE_V1 => Ok(CredentialProviderMetadata::Imds),\n        HADOOP_IAM_INSTANCE => Ok(CredentialProviderMetadata::Chain(vec![\n            CredentialProviderMetadata::Ecs,\n            CredentialProviderMetadata::Imds,\n        ])),\n        AWS_ENVIRONMENT_V1 | AWS_ENVIRONMENT => Ok(CredentialProviderMetadata::Environment),\n        HADOOP_SIMPLE | HADOOP_TEMPORARY => {\n            build_static_credential_provider_metadata(credential_provider_name, configs, bucket)\n        }\n        HADOOP_ASSUMED_ROLE => build_assume_role_credential_provider_metadata(configs, bucket),\n        AWS_WEB_IDENTITY_V1 | AWS_WEB_IDENTITY => Ok(CredentialProviderMetadata::WebIdentity),\n        _ => Err(object_store::Error::Generic {\n            store: \"S3\",\n            source: format!(\"Unsupported credential provider: {credential_provider_name}\").into(),\n        }),\n    }\n}\n\n
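// For illustration, derived from the mappings above: a configured provider list\n// of [HADOOP_IAM_INSTANCE, AWS_ENVIRONMENT] expands to\n//     Chain([Chain([Ecs, Imds]), Environment])\n// because each name is resolved independently and the results are tried in order.\nfn build_static_credential_provider_metadata(\n    credential_provider_name: &str,\n    configs: &HashMap<String, String>,\n    bucket: &str,\n) -> Result<CredentialProviderMetadata, object_store::Error> {\n    let access_key_id = get_config_trimmed(configs, bucket, \"access.key\");\n    let secret_access_key = get_config_trimmed(configs, bucket, \"secret.key\");\n    let session_token = if credential_provider_name == HADOOP_TEMPORARY {\n        get_config_trimmed(configs, bucket, \"session.token\")\n    } else {\n        None\n    };\n\n    // Allow static credential provider creation even when access/secret keys are missing.\n    // This maintains compatibility with Hadoop S3A FileSystem, whose default credential chain\n    // includes TemporaryAWSCredentialsProvider. 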
Missing credentials won't prevent other\n    // providers in the chain from working - this provider will error only when accessed.\n    let mut is_valid = access_key_id.is_some() && secret_access_key.is_some();\n    if credential_provider_name == HADOOP_TEMPORARY {\n        is_valid = is_valid && session_token.is_some();\n    };\n\n    Ok(CredentialProviderMetadata::Static {\n        is_valid,\n        access_key: access_key_id.unwrap_or(\"\").to_string(),\n        secret_key: secret_access_key.unwrap_or(\"\").to_string(),\n        session_token: session_token.map(|s| s.to_string()),\n    })\n}\n\nfn build_assume_role_credential_provider_metadata(\n    configs: &HashMap<String, String>,\n    bucket: &str,\n) -> Result<CredentialProviderMetadata, object_store::Error> {\n    let base_provider_names =\n        get_config_trimmed(configs, bucket, \"assumed.role.credentials.provider\")\n            .map(|s| parse_credential_provider_names(s));\n    let base_provider_names = if let Some(v) = base_provider_names {\n        if v.iter().any(|name| is_anonymous_credential_provider(name)) {\n            return Err(object_store::Error::Generic {\n                store: \"S3\",\n                source: \"Anonymous credential provider cannot be used as assumed role credential provider\".into(),\n            });\n        }\n        v\n    } else {\n        // If the credential provider for performing the assume role operation is not specified,\n        // we use the simple credential provider first and fall back to the environment variable\n        // credential provider. This matches the behavior of the Hadoop S3A FileSystem.\n        vec![HADOOP_SIMPLE, AWS_ENVIRONMENT]\n    };\n\n    let role_arn = get_config_trimmed(configs, bucket, \"assumed.role.arn\").ok_or(\n        object_store::Error::Generic {\n            store: \"S3\",\n            source: \"Missing required assume role ARN configuration\".into(),\n        },\n    )?;\n    let default_session_name = \"comet-parquet-s3\".to_string();\n    let session_name = get_config_trimmed(configs, bucket, \"assumed.role.session.name\")\n        .unwrap_or(&default_session_name);\n\n    let base_provider_metadata =\n        build_chained_aws_credential_provider_metadata(base_provider_names, configs, bucket)?;\n    Ok(CredentialProviderMetadata::AssumeRole {\n        role_arn: role_arn.to_string(),\n        session_name: session_name.to_string(),\n        base_provider_metadata: Box::new(base_provider_metadata),\n    })\n}\n\n
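// For illustration (hypothetical config values): with\n//     fs.s3a.aws.credentials.provider = HADOOP_ASSUMED_ROLE\n//     fs.s3a.assumed.role.arn = arn:aws:iam::111122223333:role/reader\n// and no base provider or session name configured, the resulting metadata is\n//     AssumeRole { role_arn, session_name: \"comet-parquet-s3\",\n//                  base_provider_metadata: Chain([Static, Environment]) }\n\n/// A caching wrapper around AWS credential providers that implements the object_store `CredentialProvider` trait.\n///\n/// This struct bridges AWS SDK credential providers (`ProvideCredentials`) with the object_store\n/// crate's `CredentialProvider` trait, enabling seamless use of AWS credentials with object_store's\n/// S3 implementation. It also provides credential caching to improve performance and reduce the\n/// frequency of credential refresh operations. Many AWS credential providers (like IMDS, ECS, STS\n/// assume role) involve network calls or complex authentication flows that can be expensive to\n/// repeat constantly.\n#[derive(Debug)]\nstruct CachedAwsCredentialProvider {\n    /// The underlying AWS credential provider that this cache wraps.\n    /// This can be any provider implementing `ProvideCredentials` (static, IMDS, ECS, assume role, etc.)\n    provider: Arc<dyn ProvideCredentials>,\n\n    /// Cache holding the most recently fetched credentials. 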
[CredentialProvider] is required to be\n    /// Send + Sync, so we have to use Arc + RwLock to make it thread-safe.\n    cached: Arc<RwLock<Option<aws_credential_types::Credentials>>>,\n\n    /// Time buffer before credential expiry when refresh should be triggered.\n    /// For example, if set to 5 minutes, credentials will be refreshed when they have\n    /// 5 minutes or less remaining before expiration. This prevents credential expiry\n    /// during active operations.\n    min_ttl: Duration,\n\n    /// The metadata of the credential provider. Only present when running tests. This field is used\n    /// to assert on the structure of the credential provider.\n    #[cfg(test)]\n    metadata: CredentialProviderMetadata,\n}\n\nimpl CachedAwsCredentialProvider {\n    #[allow(unused_variables)]\n    fn new(\n        credential_provider: Arc<dyn ProvideCredentials>,\n        metadata: CredentialProviderMetadata,\n        min_ttl: Duration,\n    ) -> Self {\n        Self {\n            provider: credential_provider,\n            cached: Arc::new(RwLock::new(None)),\n            min_ttl,\n            #[cfg(test)]\n            metadata,\n        }\n    }\n\n    #[cfg(test)]\n    fn metadata(&self) -> CredentialProviderMetadata {\n        self.metadata.clone()\n    }\n\n    fn fetch_credential(&self) -> Option<aws_credential_types::Credentials> {\n        let locked = self.cached.read().unwrap();\n        locked.as_ref().and_then(|cred| match cred.expiry() {\n            Some(expiry) => {\n                if expiry < SystemTime::now() + self.min_ttl {\n                    None\n                } else {\n                    Some(cred.clone())\n                }\n            }\n            None => Some(cred.clone()),\n        })\n    }\n\n    async fn refresh_credential(&self) -> object_store::Result<aws_credential_types::Credentials> {\n        let credentials = self.provider.provide_credentials().await.map_err(|e| {\n            error!(\"Failed to retrieve credentials: {e:?}\");\n            object_store::Error::Generic {\n                store: \"S3\",\n                source: Box::new(e),\n            }\n        })?;\n        *self.cached.write().unwrap() = Some(credentials.clone());\n        Ok(credentials)\n    }\n}\n\n#[async_trait]\nimpl CredentialProvider for CachedAwsCredentialProvider {\n    /// The type of credential returned by this provider\n    type Credential = AwsCredential;\n\n    /// Return a credential, refreshing it first if the cached one is missing or about to expire\n    async fn get_credential(&self) -> object_store::Result<Arc<AwsCredential>> {\n        let credentials = match self.fetch_credential() {\n            Some(cred) => cred,\n            None => self.refresh_credential().await?,\n        };\n        Ok(Arc::new(AwsCredential {\n            key_id: credentials.access_key_id().to_string(),\n            secret_key: credentials.secret_access_key().to_string(),\n            token: credentials.session_token().map(|s| s.to_string()),\n        }))\n    }\n}\n\n
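// For illustration of the caching policy above: with a min_ttl of 5 minutes,\n// a cached credential that expires in 10 minutes is served from the cache,\n// one that expires in 4 minutes triggers refresh_credential(), and a\n// credential without an expiry is cached indefinitely.\n\n/// A custom AWS credential provider that holds static, pre-configured credentials.\n///\n/// This provider is used when the S3 credential configuration specifies static access keys,\n/// such as when using Hadoop's `SimpleAWSCredentialsProvider` or `TemporaryAWSCredentialsProvider`.\n/// Unlike dynamic credential providers (like IMDS or ECS), this provider returns the same\n/// credentials every time without any external API calls.\n#[derive(Debug)]\nstruct StaticCredentialProvider {\n    is_valid: bool,\n    cred: Credentials,\n}\n\nimpl StaticCredentialProvider {\n    fn 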
new(is_valid: bool, ak: String, sk: String, token: Option<String>) -> Self {\n        let mut builder = Credentials::builder()\n            .access_key_id(ak)\n            .secret_access_key(sk)\n            .provider_name(\"AwsStaticCredentialProvider\");\n        if let Some(token) = token {\n            builder = builder.session_token(token);\n        }\n        let cred = builder.build();\n        Self { is_valid, cred }\n    }\n}\n\nimpl ProvideCredentials for StaticCredentialProvider {\n    fn provide_credentials<'a>(\n        &'a self,\n    ) -> aws_credential_types::provider::future::ProvideCredentials<'a>\n    where\n        Self: 'a,\n    {\n        if self.is_valid {\n            aws_credential_types::provider::future::ProvideCredentials::ready(Ok(self.cred.clone()))\n        } else {\n            aws_credential_types::provider::future::ProvideCredentials::ready(Err(\n                CredentialsError::not_loaded_no_source(),\n            ))\n        }\n    }\n}\n\n/// Structural representation of credential provider types. It reflects the nested structure of the\n/// credential providers and can be used as a blueprint for creating the actual credential providers.\n/// We define this type because it is hard to assert on the structure of credential providers\n/// using `dyn ProvideCredentials` values directly. Please refer to the test cases for examples of\n/// how this type is used.\n#[derive(Debug, Clone, PartialEq)]\nenum CredentialProviderMetadata {\n    Default,\n    Ecs,\n    Imds,\n    Environment,\n    WebIdentity,\n    Static {\n        is_valid: bool,\n        access_key: String,\n        secret_key: String,\n        session_token: Option<String>,\n    },\n    AssumeRole {\n        role_arn: String,\n        session_name: String,\n        base_provider_metadata: Box<CredentialProviderMetadata>,\n    },\n    Chain(Vec<CredentialProviderMetadata>),\n}\n\nimpl CredentialProviderMetadata {\n    fn name(&self) -> &'static str {\n        match self {\n            CredentialProviderMetadata::Default => \"Default\",\n            CredentialProviderMetadata::Ecs => \"Ecs\",\n            CredentialProviderMetadata::Imds => \"Imds\",\n            CredentialProviderMetadata::Environment => \"Environment\",\n            CredentialProviderMetadata::WebIdentity => \"WebIdentity\",\n            CredentialProviderMetadata::Static { .. } => \"Static\",\n            CredentialProviderMetadata::AssumeRole { .. } => \"AssumeRole\",\n            CredentialProviderMetadata::Chain(..) => \"Chain\",\n        }\n    }\n\n    /// Return a simple string representation of the credential provider. Security-sensitive\n    /// information is not included. This is useful for logging and debugging.\n    fn simple_string(&self) -> String {\n        match self {\n            CredentialProviderMetadata::Default => \"Default\".to_string(),\n            CredentialProviderMetadata::Ecs => \"Ecs\".to_string(),\n            CredentialProviderMetadata::Imds => \"Imds\".to_string(),\n            CredentialProviderMetadata::Environment => \"Environment\".to_string(),\n            CredentialProviderMetadata::WebIdentity => \"WebIdentity\".to_string(),\n            CredentialProviderMetadata::Static { is_valid, .. 
} => {\n                format!(\"Static(valid: {is_valid})\")\n            }\n            CredentialProviderMetadata::AssumeRole {\n                role_arn,\n                session_name,\n                base_provider_metadata,\n            } => {\n                format!(\n                    \"AssumeRole(role: {}, session: {}, base: {})\",\n                    role_arn,\n                    session_name,\n                    base_provider_metadata.simple_string()\n                )\n            }\n            CredentialProviderMetadata::Chain(providers) => {\n                let provider_strings: Vec<String> =\n                    providers.iter().map(|p| p.simple_string()).collect();\n                format!(\"Chain({})\", provider_strings.join(\" -> \"))\n            }\n        }\n    }\n}\n\nimpl CredentialProviderMetadata {\n    /// Create a credential provider from the metadata.\n    ///\n    /// Note: this function is not covered by tests. However, the implementation of this function is\n    /// quite straightforward and should be easy to verify.\n    async fn create_credential_provider(\n        &self,\n    ) -> Result<Arc<dyn ProvideCredentials>, object_store::Error> {\n        match self {\n            CredentialProviderMetadata::Default => {\n                let config = aws_config::defaults(BehaviorVersion::latest()).load().await;\n                let credential_provider =\n                    config\n                        .credentials_provider()\n                        .ok_or(object_store::Error::Generic {\n                            store: \"S3\",\n                            source: \"Cannot get default credential provider chain\".into(),\n                        })?;\n                Ok(Arc::new(credential_provider))\n            }\n            CredentialProviderMetadata::Ecs => {\n                let credential_provider = EcsCredentialsProvider::builder().build();\n                Ok(Arc::new(credential_provider))\n            }\n            CredentialProviderMetadata::Imds => {\n                let credential_provider = ImdsCredentialsProvider::builder().build();\n                Ok(Arc::new(credential_provider))\n            }\n            CredentialProviderMetadata::Environment => {\n                let credential_provider = EnvironmentVariableCredentialsProvider::new();\n                Ok(Arc::new(credential_provider))\n            }\n            CredentialProviderMetadata::WebIdentity => {\n                let credential_provider = WebIdentityTokenCredentialsProvider::builder()\n                    .configure(&ProviderConfig::with_default_region().await)\n                    .build();\n                Ok(Arc::new(credential_provider))\n            }\n            CredentialProviderMetadata::Static {\n                is_valid,\n                access_key,\n                secret_key,\n                session_token,\n            } => {\n                let credential_provider = StaticCredentialProvider::new(\n                    *is_valid,\n                    access_key.clone(),\n                    secret_key.clone(),\n                    session_token.clone(),\n                );\n                Ok(Arc::new(credential_provider))\n            }\n            CredentialProviderMetadata::AssumeRole {\n                role_arn,\n                session_name,\n                base_provider_metadata,\n            } => {\n                let base_provider =\n                    Box::pin(base_provider_metadata.create_credential_provider()).await?;\n                
let credential_provider = AssumeRoleProvider::builder(role_arn)\n                    .session_name(session_name)\n                    .build_from_provider(base_provider)\n                    .await;\n                Ok(Arc::new(credential_provider))\n            }\n            CredentialProviderMetadata::Chain(metadata_vec) => {\n                if metadata_vec.is_empty() {\n                    return Err(object_store::Error::Generic {\n                        store: \"S3\",\n                        source: \"Cannot create credential provider chain with empty providers\"\n                            .into(),\n                    });\n                }\n                let mut chained_provider = CredentialsProviderChain::first_try(\n                    metadata_vec[0].name(),\n                    Box::pin(metadata_vec[0].create_credential_provider()).await?,\n                );\n                for metadata in metadata_vec[1..].iter() {\n                    chained_provider = chained_provider.or_else(\n                        metadata.name(),\n                        Box::pin(metadata.create_credential_provider()).await?,\n                    );\n                }\n                Ok(Arc::new(chained_provider))\n            }\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use std::sync::atomic::{AtomicI32, Ordering};\n\n    use super::*;\n\n    /// Test configuration builder for easier setup of Hadoop configurations\n    #[derive(Debug, Default)]\n    struct TestConfigBuilder {\n        configs: HashMap<String, String>,\n    }\n\n    impl TestConfigBuilder {\n        fn new() -> Self {\n            Self::default()\n        }\n\n        fn with_region(mut self, region: &str) -> Self {\n            self.configs\n                .insert(\"fs.s3a.endpoint.region\".to_string(), region.to_string());\n            self\n        }\n\n        fn with_credential_provider(mut self, provider: &str) -> Self {\n            self.configs.insert(\n                \"fs.s3a.aws.credentials.provider\".to_string(),\n                provider.to_string(),\n            );\n            self\n        }\n\n        fn with_bucket_credential_provider(mut self, bucket: &str, provider: &str) -> Self {\n            self.configs.insert(\n                format!(\"fs.s3a.bucket.{bucket}.aws.credentials.provider\"),\n                provider.to_string(),\n            );\n            self\n        }\n\n        fn with_access_key(mut self, key: &str) -> Self {\n            self.configs\n                .insert(\"fs.s3a.access.key\".to_string(), key.to_string());\n            self\n        }\n\n        fn with_secret_key(mut self, key: &str) -> Self {\n            self.configs\n                .insert(\"fs.s3a.secret.key\".to_string(), key.to_string());\n            self\n        }\n\n        fn with_session_token(mut self, token: &str) -> Self {\n            self.configs\n                .insert(\"fs.s3a.session.token\".to_string(), token.to_string());\n            self\n        }\n\n        fn with_bucket_access_key(mut self, bucket: &str, key: &str) -> Self {\n            self.configs.insert(\n                format!(\"fs.s3a.bucket.{bucket}.access.key\"),\n                key.to_string(),\n            );\n            self\n        }\n\n        fn with_bucket_secret_key(mut self, bucket: &str, key: &str) -> Self {\n            self.configs.insert(\n                format!(\"fs.s3a.bucket.{bucket}.secret.key\"),\n                key.to_string(),\n            );\n            self\n        }\n\n        fn 
with_bucket_session_token(mut self, bucket: &str, token: &str) -> Self {\n            self.configs.insert(\n                format!(\"fs.s3a.bucket.{bucket}.session.token\"),\n                token.to_string(),\n            );\n            self\n        }\n\n        fn with_assume_role_arn(mut self, arn: &str) -> Self {\n            self.configs\n                .insert(\"fs.s3a.assumed.role.arn\".to_string(), arn.to_string());\n            self\n        }\n\n        fn with_assume_role_session_name(mut self, name: &str) -> Self {\n            self.configs.insert(\n                \"fs.s3a.assumed.role.session.name\".to_string(),\n                name.to_string(),\n            );\n            self\n        }\n\n        fn with_assume_role_credentials_provider(mut self, provider: &str) -> Self {\n            self.configs.insert(\n                \"fs.s3a.assumed.role.credentials.provider\".to_string(),\n                provider.to_string(),\n            );\n            self\n        }\n\n        fn build(self) -> HashMap<String, String> {\n            self.configs\n        }\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers and object_store call foreign functions\n    fn test_create_store() {\n        let url = Url::parse(\"s3a://test_bucket/comet/spark-warehouse/part-00000.snappy.parquet\")\n            .unwrap();\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_ANONYMOUS)\n            .with_region(\"us-east-1\")\n            .build();\n        let (_object_store, path) = create_store(&url, &configs, Duration::from_secs(300)).unwrap();\n        assert_eq!(\n            path,\n            Path::from(\"/comet/spark-warehouse/part-00000.snappy.parquet\")\n        );\n    }\n\n    #[test]\n    fn test_get_config_trimmed() {\n        let configs = TestConfigBuilder::new()\n            .with_access_key(\"test_key\")\n            .with_secret_key(\"  \\n  test_secret_key\\n  \\n\")\n            .with_session_token(\"  \\n  test_session_token\\n  \\n\")\n            .with_bucket_access_key(\"test-bucket\", \"test_bucket_key\")\n            .with_bucket_secret_key(\"test-bucket\", \"  \\n  test_bucket_secret_key\\n  \\n\")\n            .with_bucket_session_token(\"test-bucket\", \"  \\n  test_bucket_session_token\\n  \\n\")\n            .build();\n\n        // bucket-specific keys\n        let access_key = get_config_trimmed(&configs, \"test-bucket\", \"access.key\");\n        assert_eq!(access_key, Some(\"test_bucket_key\"));\n        let secret_key = get_config_trimmed(&configs, \"test-bucket\", \"secret.key\");\n        assert_eq!(secret_key, Some(\"test_bucket_secret_key\"));\n        let session_token = get_config_trimmed(&configs, \"test-bucket\", \"session.token\");\n        assert_eq!(session_token, Some(\"test_bucket_session_token\"));\n\n        // global keys\n        let access_key = get_config_trimmed(&configs, \"test-bucket-2\", \"access.key\");\n        assert_eq!(access_key, Some(\"test_key\"));\n        let secret_key = get_config_trimmed(&configs, \"test-bucket-2\", \"secret.key\");\n        assert_eq!(secret_key, Some(\"test_secret_key\"));\n        let session_token = get_config_trimmed(&configs, \"test-bucket-2\", \"session.token\");\n        assert_eq!(session_token, Some(\"test_session_token\"));\n    }\n\n
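    // Extra coverage, not part of the original suite: a sketch of direct\n    // normalize_endpoint tests, with endpoint values chosen for illustration\n    // and expectations derived from the implementation above.\n    #[test]\n    fn test_normalize_endpoint() {\n        assert_eq!(normalize_endpoint(\"\", \"bucket\", false), None);\n        assert_eq!(normalize_endpoint(\"s3.amazonaws.com\", \"bucket\", false), None);\n        assert_eq!(\n            normalize_endpoint(\"minio.local:9000\", \"bucket\", false),\n            Some(\"https://minio.local:9000\".to_string())\n        );\n        assert_eq!(\n            normalize_endpoint(\"https://minio.local:9000\", \"bucket\", true),\n            Some(\"https://minio.local:9000/bucket\".to_string())\n        );\n        assert_eq!(\n            normalize_endpoint(\"https://minio.local:9000/\", \"bucket\", true),\n            Some(\"https://minio.local:9000/bucket\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_parse_credential_provider_names() {\n        let credential_provider_names = parse_credential_provider_names(\"\");\n        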
assert!(credential_provider_names.is_empty());\n\n        let credential_provider_names = parse_credential_provider_names(HADOOP_ANONYMOUS);\n        assert_eq!(credential_provider_names, vec![HADOOP_ANONYMOUS]);\n\n        let aws_credential_provider_names =\n            format!(\"{HADOOP_ANONYMOUS},{AWS_ENVIRONMENT},{AWS_ENVIRONMENT_V1}\");\n        let credential_provider_names =\n            parse_credential_provider_names(&aws_credential_provider_names);\n        assert_eq!(\n            credential_provider_names,\n            vec![HADOOP_ANONYMOUS, AWS_ENVIRONMENT, AWS_ENVIRONMENT_V1]\n        );\n\n        let aws_credential_provider_names =\n            format!(\" {HADOOP_ANONYMOUS}, {AWS_ENVIRONMENT},, {AWS_ENVIRONMENT_V1},\");\n        let credential_provider_names =\n            parse_credential_provider_names(&aws_credential_provider_names);\n        assert_eq!(\n            credential_provider_names,\n            vec![HADOOP_ANONYMOUS, AWS_ENVIRONMENT, AWS_ENVIRONMENT_V1]\n        );\n\n        let aws_credential_provider_names = format!(\n            \"\\n  {HADOOP_ANONYMOUS},\\n  {AWS_ENVIRONMENT},\\n  , \\n  {AWS_ENVIRONMENT_V1},\\n\"\n        );\n        let credential_provider_names =\n            parse_credential_provider_names(&aws_credential_provider_names);\n        assert_eq!(\n            credential_provider_names,\n            vec![HADOOP_ANONYMOUS, AWS_ENVIRONMENT, AWS_ENVIRONMENT_V1]\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_default_credential_provider() {\n        let configs0 = TestConfigBuilder::new().build();\n        let configs1 = TestConfigBuilder::new()\n            .with_credential_provider(\"\")\n            .build();\n        let configs2 = TestConfigBuilder::new()\n            .with_credential_provider(\"\\n  ,\")\n            .build();\n\n        for configs in [configs0, configs1, configs2] {\n            let result =\n                build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n                    .await\n                    .unwrap();\n            assert!(\n                result.is_some(),\n                \"Should return a credential provider for default config\"\n            );\n            assert_eq!(\n                result.unwrap().metadata(),\n                CredentialProviderMetadata::Default\n            );\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_anonymous_credential_provider() {\n        for provider_name in [HADOOP_ANONYMOUS, AWS_ANONYMOUS, AWS_ANONYMOUS_V1] {\n            let configs = TestConfigBuilder::new()\n                .with_credential_provider(provider_name)\n                .build();\n\n            let result =\n                build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n                    .await\n                    .unwrap();\n            assert!(result.is_none(), \"Anonymous provider should return None\");\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_mixed_anonymous_and_other_providers_error() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(&format!(\"{HADOOP_ANONYMOUS},{AWS_ENVIRONMENT}\"))\n            .build();\n\n        let result =\n            build_credential_provider(&configs, \"test-bucket\", 
Duration::from_secs(300)).await;\n        assert!(\n            result.is_err(),\n            \"Should error when mixing anonymous with other providers\"\n        );\n\n        if let Err(e) = result {\n            assert!(e\n                .to_string()\n                .contains(\"Anonymous credential provider cannot be mixed\"));\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_simple_credential_provider() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_SIMPLE)\n            .with_access_key(\"test_access_key\")\n            .with_secret_key(\"test_secret_key\")\n            .with_session_token(\"test_session_token\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return a credential provider for simple credentials\"\n        );\n\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Static {\n                is_valid: true,\n                access_key: \"test_access_key\".to_string(),\n                secret_key: \"test_secret_key\".to_string(),\n                session_token: None\n            }\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_temporary_credential_provider() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_TEMPORARY)\n            .with_access_key(\"test_access_key\")\n            .with_secret_key(\"test_secret_key\")\n            .with_session_token(\"test_session_token\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return a credential provider for temporary credentials\"\n        );\n\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Static {\n                is_valid: true,\n                access_key: \"test_access_key\".to_string(),\n                secret_key: \"test_secret_key\".to_string(),\n                session_token: Some(\"test_session_token\".to_string())\n            }\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_missing_access_key() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_SIMPLE)\n            .with_secret_key(\"test_secret_key\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return an invalid credential provider when access key is missing\"\n        );\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Static {\n                is_valid: false,\n                access_key: \"\".to_string(),\n                secret_key: \"test_secret_key\".to_string(),\n                session_token: None\n            }\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers 
call foreign functions\n    async fn test_missing_secret_key() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_SIMPLE)\n            .with_access_key(\"test_access_key\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return an invalid credential provider when secret key is missing\"\n        );\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Static {\n                is_valid: false,\n                access_key: \"test_access_key\".to_string(),\n                secret_key: \"\".to_string(),\n                session_token: None\n            }\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_missing_session_token_for_temporary() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_TEMPORARY)\n            .with_access_key(\"test_access_key\")\n            .with_secret_key(\"test_secret_key\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return an invalid credential provider when session token is missing\"\n        );\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Static {\n                is_valid: false,\n                access_key: \"test_access_key\".to_string(),\n                secret_key: \"test_secret_key\".to_string(),\n                session_token: None\n            }\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_bucket_specific_configuration() {\n        let configs = TestConfigBuilder::new()\n            .with_bucket_credential_provider(\"specific-bucket\", HADOOP_SIMPLE)\n            .with_bucket_access_key(\"specific-bucket\", \"bucket_access_key\")\n            .with_bucket_secret_key(\"specific-bucket\", \"bucket_secret_key\")\n            .build();\n\n        let result =\n            build_credential_provider(&configs, \"specific-bucket\", Duration::from_secs(300))\n                .await\n                .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return a credential provider for bucket-specific config\"\n        );\n\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_SIMPLE)\n            .with_access_key(\"test_access_key\")\n            .with_secret_key(\"test_secret_key\")\n            .with_bucket_credential_provider(\"specific-bucket\", HADOOP_TEMPORARY)\n            .with_bucket_access_key(\"specific-bucket\", \"bucket_access_key\")\n            .with_bucket_secret_key(\"specific-bucket\", \"bucket_secret_key\")\n            .with_bucket_session_token(\"specific-bucket\", \"bucket_session_token\")\n            .with_bucket_credential_provider(\"specific-bucket-2\", HADOOP_TEMPORARY)\n            .with_bucket_access_key(\"specific-bucket-2\", \"bucket_access_key_2\")\n            .with_bucket_secret_key(\"specific-bucket-2\", \"bucket_secret_key_2\")\n            .with_bucket_session_token(\"specific-bucket-2\", 
\"bucket_session_token_2\")\n            .build();\n\n        let result =\n            build_credential_provider(&configs, \"specific-bucket\", Duration::from_secs(300))\n                .await\n                .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return a credential provider for bucket-specific config\"\n        );\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Static {\n                is_valid: true,\n                access_key: \"bucket_access_key\".to_string(),\n                secret_key: \"bucket_secret_key\".to_string(),\n                session_token: Some(\"bucket_session_token\".to_string())\n            }\n        );\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return a credential provider for default config\"\n        );\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Static {\n                is_valid: true,\n                access_key: \"test_access_key\".to_string(),\n                secret_key: \"test_secret_key\".to_string(),\n                session_token: None\n            }\n        );\n\n        let result =\n            build_credential_provider(&configs, \"specific-bucket-2\", Duration::from_secs(300))\n                .await\n                .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return a credential provider for bucket-specific config\"\n        );\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Static {\n                is_valid: true,\n                access_key: \"bucket_access_key_2\".to_string(),\n                secret_key: \"bucket_secret_key_2\".to_string(),\n                session_token: Some(\"bucket_session_token_2\".to_string())\n            }\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_assume_role_credential_provider() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_ASSUMED_ROLE)\n            .with_assume_role_arn(\"arn:aws:iam::123456789012:role/test-role\")\n            .with_assume_role_session_name(\"test-session\")\n            .with_access_key(\"base_access_key\")\n            .with_secret_key(\"base_secret_key\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return a credential provider for assume role\"\n        );\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::AssumeRole {\n                role_arn: \"arn:aws:iam::123456789012:role/test-role\".to_string(),\n                session_name: \"test-session\".to_string(),\n                base_provider_metadata: Box::new(CredentialProviderMetadata::Chain(vec![\n                    CredentialProviderMetadata::Static {\n                        is_valid: true,\n                        access_key: \"base_access_key\".to_string(),\n                        secret_key: \"base_secret_key\".to_string(),\n                        session_token: None\n                    },\n                    
CredentialProviderMetadata::Environment,\n                ]))\n            }\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_assume_role_missing_arn_error() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_ASSUMED_ROLE)\n            .with_access_key(\"base_access_key\")\n            .with_secret_key(\"base_secret_key\")\n            .build();\n\n        let result =\n            build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300)).await;\n        assert!(\n            result.is_err(),\n            \"Should error when assume role ARN is missing\"\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_unsupported_credential_provider_error() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(\"unsupported.provider.Class\")\n            .build();\n\n        let result =\n            build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300)).await;\n        assert!(\n            result.is_err(),\n            \"Should error for unsupported credential provider\"\n        );\n\n        if let Err(e) = result {\n            assert!(e.to_string().contains(\"Unsupported credential provider\"));\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_environment_credential_provider() {\n        for provider_name in [AWS_ENVIRONMENT, AWS_ENVIRONMENT_V1] {\n            let configs = TestConfigBuilder::new()\n                .with_credential_provider(provider_name)\n                .build();\n\n            let result =\n                build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n                    .await\n                    .unwrap();\n            assert!(result.is_some(), \"Should return a credential provider\");\n\n            let test_provider = result.unwrap().metadata();\n            assert_eq!(test_provider, CredentialProviderMetadata::Environment);\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_ecs_credential_provider() {\n        for provider_name in [\n            AWS_CONTAINER_CREDENTIALS,\n            AWS_CONTAINER_CREDENTIALS_V1,\n            AWS_EC2_CONTAINER_CREDENTIALS,\n        ] {\n            let configs = TestConfigBuilder::new()\n                .with_credential_provider(provider_name)\n                .build();\n\n            let result =\n                build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n                    .await\n                    .unwrap();\n            assert!(result.is_some(), \"Should return a credential provider\");\n\n            let test_provider = result.unwrap().metadata();\n            assert_eq!(test_provider, CredentialProviderMetadata::Ecs);\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_imds_credential_provider() {\n        for provider_name in [AWS_INSTANCE_PROFILE, AWS_INSTANCE_PROFILE_V1] {\n            let configs = TestConfigBuilder::new()\n                .with_credential_provider(provider_name)\n                .build();\n\n            let result =\n                build_credential_provider(&configs, 
\"test-bucket\", Duration::from_secs(300))\n                    .await\n                    .unwrap();\n            assert!(result.is_some(), \"Should return a credential provider\");\n\n            let test_provider = result.unwrap().metadata();\n            assert_eq!(test_provider, CredentialProviderMetadata::Imds);\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_web_identity_credential_provider() {\n        for provider_name in [AWS_WEB_IDENTITY, AWS_WEB_IDENTITY_V1] {\n            let configs = TestConfigBuilder::new()\n                .with_credential_provider(provider_name)\n                .build();\n\n            let result =\n                build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n                    .await\n                    .unwrap();\n            assert!(result.is_some(), \"Should return a credential provider\");\n\n            let test_provider = result.unwrap().metadata();\n            assert_eq!(test_provider, CredentialProviderMetadata::WebIdentity);\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_hadoop_iam_instance_credential_provider() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_IAM_INSTANCE)\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(result.is_some(), \"Should return a credential provider\");\n\n        let test_provider = result.unwrap().metadata();\n        assert_eq!(\n            test_provider,\n            CredentialProviderMetadata::Chain(vec![\n                CredentialProviderMetadata::Ecs,\n                CredentialProviderMetadata::Imds\n            ])\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_chained_credential_providers() {\n        // Test three providers in chain: Environment -> IMDS -> ECS\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(&format!(\n                \"{AWS_ENVIRONMENT},{AWS_INSTANCE_PROFILE},{AWS_CONTAINER_CREDENTIALS}\"\n            ))\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return a credential provider for complex chain\"\n        );\n\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Chain(vec![\n                CredentialProviderMetadata::Environment,\n                CredentialProviderMetadata::Imds,\n                CredentialProviderMetadata::Ecs\n            ])\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_static_environment_web_identity_chain() {\n        // Test chaining static credentials -> environment -> web identity\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(&format!(\n                \"{HADOOP_SIMPLE},{AWS_ENVIRONMENT},{AWS_WEB_IDENTITY}\"\n            ))\n            .with_access_key(\"chain_access_key\")\n            .with_secret_key(\"chain_secret_key\")\n            .build();\n\n   
     let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return a credential provider for static+env+web chain\"\n        );\n\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Chain(vec![\n                CredentialProviderMetadata::Static {\n                    is_valid: true,\n                    access_key: \"chain_access_key\".to_string(),\n                    secret_key: \"chain_secret_key\".to_string(),\n                    session_token: None\n                },\n                CredentialProviderMetadata::Environment,\n                CredentialProviderMetadata::WebIdentity\n            ])\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_assume_role_with_static_base_provider() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_ASSUMED_ROLE)\n            .with_assume_role_arn(\"arn:aws:iam::123456789012:role/test-role\")\n            .with_assume_role_session_name(\"static-base-session\")\n            .with_assume_role_credentials_provider(HADOOP_TEMPORARY)\n            .with_access_key(\"base_static_access_key\")\n            .with_secret_key(\"base_static_secret_key\")\n            .with_session_token(\"base_static_session_token\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return assume role provider with static base\"\n        );\n\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::AssumeRole {\n                role_arn: \"arn:aws:iam::123456789012:role/test-role\".to_string(),\n                session_name: \"static-base-session\".to_string(),\n                base_provider_metadata: Box::new(CredentialProviderMetadata::Static {\n                    is_valid: true,\n                    access_key: \"base_static_access_key\".to_string(),\n                    secret_key: \"base_static_secret_key\".to_string(),\n                    session_token: Some(\"base_static_session_token\".to_string())\n                })\n            }\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_assume_role_with_web_identity_base_provider() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_ASSUMED_ROLE)\n            .with_assume_role_arn(\"arn:aws:iam::123456789012:role/web-identity-role\")\n            .with_assume_role_session_name(\"web-identity-session\")\n            .with_assume_role_credentials_provider(AWS_WEB_IDENTITY)\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return assume role provider with web identity base\"\n        );\n\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::AssumeRole {\n                role_arn: \"arn:aws:iam::123456789012:role/web-identity-role\".to_string(),\n                session_name: 
\"web-identity-session\".to_string(),\n                base_provider_metadata: Box::new(CredentialProviderMetadata::WebIdentity)\n            }\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_assume_role_with_chained_base_providers() {\n        // Test assume role with multiple base providers: Static -> Environment -> IMDS\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_ASSUMED_ROLE)\n            .with_assume_role_arn(\"arn:aws:iam::123456789012:role/chained-role\")\n            .with_assume_role_session_name(\"chained-base-session\")\n            .with_assume_role_credentials_provider(&format!(\n                \"{HADOOP_SIMPLE},{AWS_ENVIRONMENT},{AWS_INSTANCE_PROFILE}\"\n            ))\n            .with_access_key(\"chained_base_access_key\")\n            .with_secret_key(\"chained_base_secret_key\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return assume role provider with chained base\"\n        );\n\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::AssumeRole {\n                role_arn: \"arn:aws:iam::123456789012:role/chained-role\".to_string(),\n                session_name: \"chained-base-session\".to_string(),\n                base_provider_metadata: Box::new(CredentialProviderMetadata::Chain(vec![\n                    CredentialProviderMetadata::Static {\n                        is_valid: true,\n                        access_key: \"chained_base_access_key\".to_string(),\n                        secret_key: \"chained_base_secret_key\".to_string(),\n                        session_token: None\n                    },\n                    CredentialProviderMetadata::Environment,\n                    CredentialProviderMetadata::Imds\n                ]))\n            }\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_assume_role_chained_with_other_providers() {\n        // Test assume role as first provider in a chain, followed by environment and IMDS\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(&format!(\n                \"  {HADOOP_ASSUMED_ROLE}\\n,  {AWS_INSTANCE_PROFILE}\\n\"\n            ))\n            .with_assume_role_arn(\"arn:aws:iam::123456789012:role/first-in-chain\")\n            .with_assume_role_session_name(\"first-chain-session\")\n            .with_assume_role_credentials_provider(&format!(\n                \"  {AWS_WEB_IDENTITY}\\n,  {HADOOP_TEMPORARY}\\n,  {AWS_ENVIRONMENT}\\n\"\n            ))\n            .with_access_key(\"assume_role_base_key\")\n            .with_secret_key(\"assume_role_base_secret\")\n            .with_session_token(\"assume_role_base_token\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(\n            result.is_some(),\n            \"Should return chained provider with assume role first\"\n        );\n\n        assert_eq!(\n            result.unwrap().metadata(),\n            CredentialProviderMetadata::Chain(vec![\n                CredentialProviderMetadata::AssumeRole {\n           
         role_arn: \"arn:aws:iam::123456789012:role/first-in-chain\".to_string(),\n                    session_name: \"first-chain-session\".to_string(),\n                    base_provider_metadata: Box::new(CredentialProviderMetadata::Chain(vec![\n                        CredentialProviderMetadata::WebIdentity,\n                        CredentialProviderMetadata::Static {\n                            is_valid: true,\n                            access_key: \"assume_role_base_key\".to_string(),\n                            secret_key: \"assume_role_base_secret\".to_string(),\n                            session_token: Some(\"assume_role_base_token\".to_string())\n                        },\n                        CredentialProviderMetadata::Environment,\n                    ]))\n                },\n                CredentialProviderMetadata::Imds\n            ])\n        );\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_assume_role_with_anonymous_base_provider_error() {\n        // Test that assume role with anonymous base provider fails\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_ASSUMED_ROLE)\n            .with_assume_role_arn(\"arn:aws:iam::123456789012:role/should-fail\")\n            .with_assume_role_session_name(\"should-fail-session\")\n            .with_assume_role_credentials_provider(HADOOP_ANONYMOUS)\n            .build();\n\n        let result =\n            build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300)).await;\n        assert!(\n            result.is_err(),\n            \"Should error when assume role uses anonymous base provider\"\n        );\n\n        if let Err(e) = result {\n            assert!(e.to_string().contains(\n                \"Anonymous credential provider cannot be used as assumed role credential provider\"\n            ));\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_get_credential_from_static_credential_provider() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_SIMPLE)\n            .with_access_key(\"test_access_key\")\n            .with_secret_key(\"test_secret_key\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(result.is_some(), \"Should return a credential provider\");\n\n        let test_provider = result.unwrap();\n        let credential = test_provider.get_credential().await.unwrap();\n        assert_eq!(credential.key_id, \"test_access_key\");\n        assert_eq!(credential.secret_key, \"test_secret_key\");\n        assert_eq!(credential.token, None);\n\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_TEMPORARY)\n            .with_access_key(\"test_access_key_2\")\n            .with_secret_key(\"test_secret_key_2\")\n            .with_session_token(\"test_session_token_2\")\n            .build();\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(result.is_some(), \"Should return a credential provider\");\n\n        let test_provider = result.unwrap();\n        let credential = test_provider.get_credential().await.unwrap();\n        
assert_eq!(credential.key_id, \"test_access_key_2\");\n        assert_eq!(credential.secret_key, \"test_secret_key_2\");\n        assert_eq!(credential.token, Some(\"test_session_token_2\".to_string()));\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_get_credential_from_invalid_static_credential_provider() {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(HADOOP_SIMPLE)\n            .with_access_key(\"test_access_key\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(result.is_some(), \"Should return a credential provider\");\n\n        let test_provider = result.unwrap();\n        let result = test_provider.get_credential().await;\n        assert!(result.is_err(), \"Should return an error when getting credential from invalid static credential provider\");\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_invalid_static_credential_provider_should_not_prevent_other_providers_from_working(\n    ) {\n        let configs = TestConfigBuilder::new()\n            .with_credential_provider(&format!(\"{HADOOP_TEMPORARY},{HADOOP_SIMPLE}\"))\n            .with_access_key(\"test_access_key\")\n            .with_secret_key(\"test_secret_key\")\n            .build();\n\n        let result = build_credential_provider(&configs, \"test-bucket\", Duration::from_secs(300))\n            .await\n            .unwrap();\n        assert!(result.is_some(), \"Should return a credential provider\");\n\n        assert_eq!(\n            result.as_ref().unwrap().metadata(),\n            CredentialProviderMetadata::Chain(vec![\n                CredentialProviderMetadata::Static {\n                    is_valid: false,\n                    access_key: \"test_access_key\".to_string(),\n                    secret_key: \"test_secret_key\".to_string(),\n                    session_token: None,\n                },\n                CredentialProviderMetadata::Static {\n                    is_valid: true,\n                    access_key: \"test_access_key\".to_string(),\n                    secret_key: \"test_secret_key\".to_string(),\n                    session_token: None,\n                }\n            ])\n        );\n\n        let test_provider = result.unwrap();\n\n        for _ in 0..10 {\n            let credential = test_provider.get_credential().await.unwrap();\n            assert_eq!(credential.key_id, \"test_access_key\");\n            assert_eq!(credential.secret_key, \"test_secret_key\");\n        }\n    }\n\n    #[derive(Debug)]\n    struct MockAwsCredentialProvider {\n        counter: AtomicI32,\n    }\n\n    impl ProvideCredentials for MockAwsCredentialProvider {\n        fn provide_credentials<'a>(\n            &'a self,\n        ) -> aws_credential_types::provider::future::ProvideCredentials<'a>\n        where\n            Self: 'a,\n        {\n            let cnt = self.counter.fetch_add(1, Ordering::SeqCst);\n            let cred = Credentials::builder()\n                .access_key_id(format!(\"test_access_key_{cnt}\"))\n                .secret_access_key(format!(\"test_secret_key_{cnt}\"))\n                .expiry(SystemTime::now() + Duration::from_secs(60))\n                .provider_name(\"mock_provider\")\n                .build();\n            
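// hand the freshly built credentials back as an already-resolved future;\n            // each call yields a distinct key pair so the caching tests can tell\n            // whether a refresh actually occurred\n            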
aws_credential_types::provider::future::ProvideCredentials::ready(Ok(cred))\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_cached_credential_provider_refresh_credential() {\n        let provider = Arc::new(MockAwsCredentialProvider {\n            counter: AtomicI32::new(0),\n        });\n\n        // 60 seconds before expiry, the credential is always refreshed\n        let cached_provider = CachedAwsCredentialProvider::new(\n            provider,\n            CredentialProviderMetadata::Default,\n            Duration::from_secs(60),\n        );\n        for k in 0..3 {\n            let credential = cached_provider.get_credential().await.unwrap();\n            assert_eq!(credential.key_id, format!(\"test_access_key_{k}\"));\n            assert_eq!(credential.secret_key, format!(\"test_secret_key_{k}\"));\n        }\n    }\n\n    #[tokio::test]\n    #[cfg_attr(miri, ignore)] // AWS credential providers call foreign functions\n    async fn test_cached_credential_provider_cache_credential() {\n        let provider = Arc::new(MockAwsCredentialProvider {\n            counter: AtomicI32::new(0),\n        });\n\n        // 10 seconds before expiry, the credential is not refreshed\n        let cached_provider = CachedAwsCredentialProvider::new(\n            provider,\n            CredentialProviderMetadata::Default,\n            Duration::from_secs(10),\n        );\n        for _ in 0..3 {\n            let credential = cached_provider.get_credential().await.unwrap();\n            assert_eq!(credential.key_id, \"test_access_key_0\");\n            assert_eq!(credential.secret_key, \"test_secret_key_0\");\n        }\n    }\n\n    #[test]\n    fn test_extract_s3_config_options() {\n        let mut configs = HashMap::new();\n        configs.insert(\n            \"fs.s3a.endpoint.region\".to_string(),\n            \"ap-northeast-1\".to_string(),\n        );\n        configs.insert(\n            \"fs.s3a.requester.pays.enabled\".to_string(),\n            \"true\".to_string(),\n        );\n        let s3_configs = extract_s3_config_options(&configs, \"test-bucket\");\n        assert_eq!(\n            s3_configs.get(&AmazonS3ConfigKey::Region),\n            Some(&\"ap-northeast-1\".to_string())\n        );\n        assert_eq!(\n            s3_configs.get(&AmazonS3ConfigKey::RequestPayer),\n            Some(&\"true\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_extract_s3_config_custom_endpoint() {\n        let cases = vec![\n            (\"custom.endpoint.com\", \"https://custom.endpoint.com\"),\n            (\"https://custom.endpoint.com\", \"https://custom.endpoint.com\"),\n            (\n                \"https://custom.endpoint.com/path/to/resource\",\n                \"https://custom.endpoint.com/path/to/resource\",\n            ),\n        ];\n        for (endpoint, configured_endpoint) in cases {\n            let mut configs = HashMap::new();\n            configs.insert(\"fs.s3a.endpoint\".to_string(), endpoint.to_string());\n            let s3_configs = extract_s3_config_options(&configs, \"test-bucket\");\n            assert_eq!(\n                s3_configs.get(&AmazonS3ConfigKey::Endpoint),\n                Some(&configured_endpoint.to_string())\n            );\n        }\n    }\n\n    #[test]\n    fn test_extract_s3_config_custom_endpoint_with_virtual_hosted_style() {\n        let cases = vec![\n            (\n                \"custom.endpoint.com\",\n                
\"https://custom.endpoint.com/test-bucket\",\n            ),\n            (\n                \"https://custom.endpoint.com\",\n                \"https://custom.endpoint.com/test-bucket\",\n            ),\n            (\n                \"https://custom.endpoint.com/\",\n                \"https://custom.endpoint.com/test-bucket\",\n            ),\n            (\n                \"https://custom.endpoint.com/path/to/resource\",\n                \"https://custom.endpoint.com/path/to/resource/test-bucket\",\n            ),\n            (\n                \"https://custom.endpoint.com/path/to/resource/\",\n                \"https://custom.endpoint.com/path/to/resource/test-bucket\",\n            ),\n        ];\n        for (endpoint, configured_endpoint) in cases {\n            let mut configs = HashMap::new();\n            configs.insert(\"fs.s3a.endpoint\".to_string(), endpoint.to_string());\n            configs.insert(\"fs.s3a.path.style.access\".to_string(), \"true\".to_string());\n            let s3_configs = extract_s3_config_options(&configs, \"test-bucket\");\n            assert_eq!(\n                s3_configs.get(&AmazonS3ConfigKey::Endpoint),\n                Some(&configured_endpoint.to_string())\n            );\n        }\n    }\n\n    #[test]\n    fn test_extract_s3_config_ignore_default_endpoint() {\n        let mut configs = HashMap::new();\n        configs.insert(\n            \"fs.s3a.endpoint\".to_string(),\n            \"s3.amazonaws.com\".to_string(),\n        );\n        let s3_configs = extract_s3_config_options(&configs, \"test-bucket\");\n        assert!(s3_configs.is_empty());\n\n        configs.insert(\"fs.s3a.endpoint\".to_string(), \"\".to_string());\n        let s3_configs = extract_s3_config_options(&configs, \"test-bucket\");\n        assert!(s3_configs.is_empty());\n    }\n\n    #[test]\n    fn test_credential_provider_metadata_simple_string() {\n        // Test Static provider\n        let static_metadata = CredentialProviderMetadata::Static {\n            is_valid: true,\n            access_key: \"sensitive_key\".to_string(),\n            secret_key: \"sensitive_secret\".to_string(),\n            session_token: Some(\"sensitive_token\".to_string()),\n        };\n        assert_eq!(static_metadata.simple_string(), \"Static(valid: true)\");\n\n        // Test AssumeRole provider\n        let assume_role_metadata = CredentialProviderMetadata::AssumeRole {\n            role_arn: \"arn:aws:iam::123456789012:role/test-role\".to_string(),\n            session_name: \"test-session\".to_string(),\n            base_provider_metadata: Box::new(CredentialProviderMetadata::Environment),\n        };\n        assert_eq!(\n            assume_role_metadata.simple_string(),\n            \"AssumeRole(role: arn:aws:iam::123456789012:role/test-role, session: test-session, base: Environment)\"\n        );\n\n        // Test Chain provider\n        let chain_metadata = CredentialProviderMetadata::Chain(vec![\n            CredentialProviderMetadata::Static {\n                is_valid: false,\n                access_key: \"key1\".to_string(),\n                secret_key: \"secret1\".to_string(),\n                session_token: None,\n            },\n            CredentialProviderMetadata::Environment,\n            CredentialProviderMetadata::Imds,\n        ]);\n        assert_eq!(\n            chain_metadata.simple_string(),\n            \"Chain(Static(valid: false) -> Environment -> Imds)\"\n        );\n\n        // Test nested AssumeRole with Chain base\n        let nested_metadata 
= CredentialProviderMetadata::AssumeRole {\n            role_arn: \"arn:aws:iam::123456789012:role/nested-role\".to_string(),\n            session_name: \"nested-session\".to_string(),\n            base_provider_metadata: Box::new(chain_metadata),\n        };\n        assert_eq!(\n            nested_metadata.simple_string(),\n            \"AssumeRole(role: arn:aws:iam::123456789012:role/nested-role, session: nested-session, base: Chain(Static(valid: false) -> Environment -> Imds))\"\n        );\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/parquet_exec.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::execution::operators::ExecutionError;\nuse crate::parquet::encryption_support::{CometEncryptionConfig, ENCRYPTION_FACTORY_ID};\nuse crate::parquet::parquet_read_cached_factory::CachingParquetReaderFactory;\nuse crate::parquet::parquet_support::SparkParquetOptions;\nuse crate::parquet::schema_adapter::SparkPhysicalExprAdapterFactory;\nuse arrow::datatypes::{Field, SchemaRef};\nuse datafusion::config::TableParquetOptions;\nuse datafusion::datasource::listing::PartitionedFile;\nuse datafusion::datasource::physical_plan::{\n    FileGroup, FileScanConfigBuilder, FileSource, ParquetSource,\n};\nuse datafusion::datasource::source::DataSourceExec;\nuse datafusion::execution::object_store::ObjectStoreUrl;\nuse datafusion::execution::SendableRecordBatchStream;\nuse datafusion::physical_expr::expressions::{BinaryExpr, Column};\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_expr_adapter::PhysicalExprAdapterFactory;\nuse datafusion::physical_plan::stream::RecordBatchStreamAdapter;\nuse datafusion::prelude::SessionContext;\nuse datafusion::scalar::ScalarValue;\nuse datafusion_comet_spark_expr::EvalMode;\nuse datafusion_datasource::TableSchema;\nuse std::collections::HashMap;\nuse std::sync::Arc;\n\n/// Initializes a DataSourceExec plan with a ParquetSource. This may be used by either the\n/// `native_datafusion` scan or the `native_iceberg_compat` scan.\n///\n///   `required_schema`: Schema to be projected by the scan.\n///\n///   `data_schema`: Schema of the underlying data. It is optional and, if provided, is used\n/// instead of `required_schema` to initialize the file scan\n///\n///   `partition_schema` and `partition_fields` are optional. If `partition_schema` is specified,\n/// then `partition_fields` must also be specified\n///\n///   `object_store_url`: Url to read data from\n///\n///   `file_groups`: A collection of groups of `PartitionedFiles` that are to be read by the scan\n///\n///   `projection_vector`: A vector of the indexes in the schema of the fields to be projected\n///\n///   `data_filters`: Any predicate that must be applied to the data returned by the scan. 
If\n/// specified, then `data_schema` must also be specified.\n#[allow(clippy::too_many_arguments)]\npub(crate) fn init_datasource_exec(\n    required_schema: SchemaRef,\n    data_schema: Option<SchemaRef>,\n    partition_schema: Option<SchemaRef>,\n    object_store_url: ObjectStoreUrl,\n    file_groups: Vec<Vec<PartitionedFile>>,\n    projection_vector: Option<Vec<usize>>,\n    data_filters: Option<Vec<Arc<dyn PhysicalExpr>>>,\n    default_values: Option<HashMap<Column, ScalarValue>>,\n    session_timezone: &str,\n    case_sensitive: bool,\n    session_ctx: &Arc<SessionContext>,\n    encryption_enabled: bool,\n) -> Result<Arc<DataSourceExec>, ExecutionError> {\n    let (table_parquet_options, spark_parquet_options) = get_options(\n        session_timezone,\n        case_sensitive,\n        &object_store_url,\n        encryption_enabled,\n    );\n\n    // Determine the schema and projection to use for ParquetSource.\n    // When data_schema is provided, use it as the base schema so DataFusion knows the full\n    // file schema. Compute a projection vector to select only the required columns.\n    let (base_schema, projection) = match (&data_schema, &projection_vector) {\n        (Some(schema), Some(proj)) => (Arc::clone(schema), Some(proj.clone())),\n        (Some(schema), None) => {\n            // Compute projection: map required_schema field names to data_schema indices.\n            // This is needed for schema pruning when the data_schema has more columns than\n            // the required_schema.\n            let projection: Vec<usize> = required_schema\n                .fields()\n                .iter()\n                .filter_map(|req_field| {\n                    schema.fields().iter().position(|data_field| {\n                        if case_sensitive {\n                            data_field.name() == req_field.name()\n                        } else {\n                            data_field.name().to_lowercase() == req_field.name().to_lowercase()\n                        }\n                    })\n                })\n                .collect();\n            // Only use data_schema + projection when all required fields were found by name.\n            // When some fields can't be matched (e.g., Parquet field ID mapping where names\n            // differ between required and data schemas), fall back to using required_schema\n            // directly with no projection.\n            if projection.len() == required_schema.fields().len() {\n                (Arc::clone(schema), Some(projection))\n            } else {\n                (Arc::clone(&required_schema), None)\n            }\n        }\n        _ => (Arc::clone(&required_schema), None),\n    };\n    let partition_fields: Vec<_> = partition_schema\n        .iter()\n        .flat_map(|s| s.fields().iter())\n        .map(|f| Arc::new(Field::new(f.name(), f.data_type().clone(), f.is_nullable())) as _)\n        .collect();\n    let table_schema =\n        TableSchema::from_file_schema(base_schema).with_table_partition_cols(partition_fields);\n\n    let mut parquet_source =\n        ParquetSource::new(table_schema).with_table_parquet_options(table_parquet_options);\n\n    // Combine the filter vector into a single conjunctive predicate because\n    // `ParquetSource::with_predicate` accepts a single expression\n    if let Some(data_filters) = data_filters {\n        let cnf_data_filters = data_filters.into_iter().reduce(|left, right| {\n            Arc::new(BinaryExpr::new(\n                left,\n                datafusion::logical_expr::Operator::And,\n       
         right,\n            ))\n        });\n\n        if let Some(filter) = cnf_data_filters {\n            parquet_source = parquet_source.with_predicate(filter);\n        }\n    }\n\n    if encryption_enabled {\n        parquet_source = parquet_source.with_encryption_factory(\n            session_ctx\n                .runtime_env()\n                .parquet_encryption_factory(ENCRYPTION_FACTORY_ID)?,\n        );\n    }\n\n    // Use caching reader factory to avoid redundant footer reads across partitions\n    let store = session_ctx.runtime_env().object_store(&object_store_url)?;\n    parquet_source = parquet_source\n        .with_parquet_file_reader_factory(Arc::new(CachingParquetReaderFactory::new(store)));\n\n    let expr_adapter_factory: Arc<dyn PhysicalExprAdapterFactory> = Arc::new(\n        SparkPhysicalExprAdapterFactory::new(spark_parquet_options, default_values),\n    );\n\n    let file_source: Arc<dyn FileSource> = Arc::new(parquet_source);\n\n    let file_groups = file_groups\n        .iter()\n        .map(|files| FileGroup::new(files.clone()))\n        .collect();\n\n    let mut file_scan_config_builder = FileScanConfigBuilder::new(object_store_url, file_source)\n        .with_file_groups(file_groups)\n        .with_expr_adapter(Some(expr_adapter_factory));\n\n    if let Some(projection) = projection {\n        file_scan_config_builder =\n            file_scan_config_builder.with_projection_indices(Some(projection))?;\n    }\n\n    let file_scan_config = file_scan_config_builder.build();\n\n    let data_source_exec = Arc::new(DataSourceExec::new(Arc::new(file_scan_config)));\n\n    Ok(data_source_exec)\n}\n\nfn get_options(\n    session_timezone: &str,\n    case_sensitive: bool,\n    object_store_url: &ObjectStoreUrl,\n    encryption_enabled: bool,\n) -> (TableParquetOptions, SparkParquetOptions) {\n    let mut table_parquet_options = TableParquetOptions::new();\n    table_parquet_options.global.pushdown_filters = true;\n    table_parquet_options.global.reorder_filters = true;\n    table_parquet_options.global.coerce_int96 = Some(\"us\".to_string());\n    let mut spark_parquet_options =\n        SparkParquetOptions::new(EvalMode::Legacy, session_timezone, false);\n    spark_parquet_options.allow_cast_unsigned_ints = true;\n    spark_parquet_options.case_sensitive = case_sensitive;\n\n    if encryption_enabled {\n        table_parquet_options.crypto.configure_factory(\n            ENCRYPTION_FACTORY_ID,\n            &CometEncryptionConfig {\n                uri_base: object_store_url.to_string(),\n            },\n        );\n    }\n\n    (table_parquet_options, spark_parquet_options)\n}\n\n/// Wraps a `SendableRecordBatchStream` to print each batch as it flows through.\n/// Returns a new `SendableRecordBatchStream` that yields the same batches.\npub fn dbg_batch_stream(stream: SendableRecordBatchStream) -> SendableRecordBatchStream {\n    use futures::StreamExt;\n    let schema = stream.schema();\n    let printing_stream = stream.map(|batch_result| {\n        match &batch_result {\n            Ok(batch) => {\n                dbg!(batch, batch.schema());\n                for (col_idx, column) in batch.columns().iter().enumerate() {\n                    dbg!(col_idx, column, column.nulls());\n                }\n            }\n            Err(e) => {\n                println!(\"batch error: {:?}\", e);\n            }\n        }\n        batch_result\n    });\n    Box::pin(RecordBatchStreamAdapter::new(schema, printing_stream))\n}\n"
  },
  {
    "path": "native/core/src/parquet/parquet_read_cached_factory.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! A `ParquetFileReaderFactory` that caches parquet footer metadata across\n//! partitions within a single scan. When multiple Spark partitions read from\n//! the same parquet file (different row group ranges), this avoids redundant\n//! footer reads and parsing.\n//!\n//! The cache is scoped to the factory instance (one per scan), not global,\n//! so it does not persist across queries.\n//!\n//! Uses `tokio::sync::OnceCell` per file path so that concurrent partitions\n//! wait for the first reader to load the footer rather than all racing.\n\nuse bytes::Bytes;\nuse datafusion::common::Result as DataFusionResult;\nuse datafusion::datasource::physical_plan::parquet::ParquetFileReaderFactory;\nuse datafusion::physical_plan::metrics::{ExecutionPlanMetricsSet, MetricBuilder};\nuse datafusion_datasource::PartitionedFile;\nuse futures::future::BoxFuture;\nuse futures::FutureExt;\nuse object_store::path::Path;\nuse object_store::ObjectStore;\nuse parquet::arrow::arrow_reader::ArrowReaderOptions;\nuse parquet::arrow::async_reader::{AsyncFileReader, ParquetObjectReader};\nuse parquet::file::metadata::ParquetMetaData;\nuse std::collections::HashMap;\nuse std::fmt::Debug;\nuse std::ops::Range;\nuse std::sync::{Arc, Mutex};\nuse tokio::sync::OnceCell;\n\ntype MetadataCell = Arc<OnceCell<Arc<ParquetMetaData>>>;\n\n/// A `ParquetFileReaderFactory` that caches footer metadata by file path.\n/// The cache is scoped to this factory instance (shared across partitions\n/// within a single scan via Arc), not global.\n#[derive(Debug)]\npub struct CachingParquetReaderFactory {\n    store: Arc<dyn ObjectStore>,\n    cache: Arc<Mutex<HashMap<Path, MetadataCell>>>,\n}\n\nimpl CachingParquetReaderFactory {\n    pub fn new(store: Arc<dyn ObjectStore>) -> Self {\n        Self {\n            store,\n            cache: Arc::new(Mutex::new(HashMap::new())),\n        }\n    }\n}\n\nimpl ParquetFileReaderFactory for CachingParquetReaderFactory {\n    fn create_reader(\n        &self,\n        partition_index: usize,\n        partitioned_file: PartitionedFile,\n        metadata_size_hint: Option<usize>,\n        metrics: &ExecutionPlanMetricsSet,\n    ) -> DataFusionResult<Box<dyn AsyncFileReader + Send>> {\n        let bytes_scanned = MetricBuilder::new(metrics).counter(\"bytes_scanned\", partition_index);\n\n        let location = partitioned_file.object_meta.location.clone();\n\n        // Get or create the OnceCell for this file path\n        let cell = Arc::<OnceCell<Arc<ParquetMetaData>>>::clone(\n            self.cache\n                .lock()\n                .unwrap()\n                .entry(location.clone())\n                .or_insert_with(|| Arc::new(OnceCell::new())),\n        );\n\n    
    let mut inner = ParquetObjectReader::new(Arc::clone(&self.store), location.clone())\n            .with_file_size(partitioned_file.object_meta.size);\n\n        if let Some(hint) = metadata_size_hint {\n            inner = inner.with_footer_size_hint(hint);\n        }\n\n        Ok(Box::new(CachingParquetFileReader {\n            inner,\n            location,\n            cell,\n            bytes_scanned,\n        }))\n    }\n}\n\nstruct CachingParquetFileReader {\n    inner: ParquetObjectReader,\n    location: Path,\n    cell: MetadataCell,\n    bytes_scanned: datafusion::physical_plan::metrics::Count,\n}\n\nimpl AsyncFileReader for CachingParquetFileReader {\n    fn get_bytes(&mut self, range: Range<u64>) -> BoxFuture<'_, parquet::errors::Result<Bytes>> {\n        self.bytes_scanned.add((range.end - range.start) as usize);\n        self.inner.get_bytes(range)\n    }\n\n    fn get_byte_ranges(\n        &mut self,\n        ranges: Vec<Range<u64>>,\n    ) -> BoxFuture<'_, parquet::errors::Result<Vec<Bytes>>>\n    where\n        Self: Send,\n    {\n        let total: u64 = ranges.iter().map(|r| r.end - r.start).sum();\n        self.bytes_scanned.add(total as usize);\n        self.inner.get_byte_ranges(ranges)\n    }\n\n    fn get_metadata<'a>(\n        &'a mut self,\n        options: Option<&'a ArrowReaderOptions>,\n    ) -> BoxFuture<'a, parquet::errors::Result<Arc<ParquetMetaData>>> {\n        let cell = Arc::clone(&self.cell);\n        let location = self.location.clone();\n\n        async move {\n            // Record whether the cell was populated before calling `get_or_try_init`\n            // so the trace below reports a true cache hit instead of also firing on\n            // the initial footer load.\n            let cache_hit = cell.initialized();\n            let metadata = cell\n                .get_or_try_init(|| async {\n                    log::trace!(\"CachingParquetFileReader: loading footer for {}\", location);\n                    self.inner.get_metadata(options).await\n                })\n                .await?;\n\n            if cache_hit {\n                log::trace!(\"CachingParquetFileReader: cache HIT for {}\", self.location);\n            }\n            Ok(Arc::clone(metadata))\n        }\n        .boxed()\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/parquet_support.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::execution::operators::ExecutionError;\nuse arrow::array::{FixedSizeBinaryArray, ListArray, MapArray, StringArray};\nuse arrow::buffer::NullBuffer;\nuse arrow::compute::can_cast_types;\nuse arrow::datatypes::{FieldRef, Fields};\nuse arrow::{\n    array::{\n        cast::AsArray, new_null_array, types::Int32Type, types::TimestampMicrosecondType, Array,\n        ArrayRef, DictionaryArray, StructArray,\n    },\n    compute::{cast_with_options, take, CastOptions},\n    datatypes::{DataType, TimeUnit},\n    util::display::FormatOptions,\n};\nuse datafusion::common::{Result as DataFusionResult, ScalarValue};\nuse datafusion::error::DataFusionError;\nuse datafusion::execution::object_store::ObjectStoreUrl;\nuse datafusion::execution::runtime_env::RuntimeEnv;\nuse datafusion::physical_plan::ColumnarValue;\nuse datafusion_comet_spark_expr::EvalMode;\nuse log::debug;\nuse object_store::path::Path;\nuse object_store::{parse_url, ObjectStore};\nuse std::collections::HashMap;\nuse std::sync::OnceLock;\nuse std::time::Duration;\nuse std::{collections::hash_map::DefaultHasher, hash::Hasher, sync::RwLock};\nuse std::{fmt::Debug, hash::Hash, sync::Arc};\nuse url::Url;\n\nuse super::objectstore;\n\n// This file originates from cast.rs. While developing native scan support and implementing\n// SparkSchemaAdapter we observed that Spark's type conversion logic on Parquet reads does not\n// always align to the CAST expression's logic, so it was duplicated here to adapt its behavior.\n\nstatic TIMESTAMP_FORMAT: Option<&str> = Some(\"%Y-%m-%d %H:%M:%S%.f\");\n\nstatic PARQUET_OPTIONS: CastOptions = CastOptions {\n    safe: true,\n    format_options: FormatOptions::new()\n        .with_timestamp_tz_format(TIMESTAMP_FORMAT)\n        .with_timestamp_format(TIMESTAMP_FORMAT),\n};\n\n/// Spark Parquet type conversion options\n#[derive(Debug, Clone, Hash, PartialEq, Eq)]\npub struct SparkParquetOptions {\n    /// Spark evaluation mode\n    pub eval_mode: EvalMode,\n    /// When cast from/to timezone related types, we need timezone, which will be resolved with\n    /// session local timezone by an analyzer in Spark.\n    // TODO we should change timezone to Tz to avoid repeated parsing\n    pub timezone: String,\n    /// Allow casts that are supported but not guaranteed to be 100% compatible\n    pub allow_incompat: bool,\n    /// Support casting unsigned ints to signed ints (used by Parquet SchemaAdapter)\n    pub allow_cast_unsigned_ints: bool,\n    /// Whether to always represent decimals using 128 bits. 
If false, the native reader may represent decimals using 32 or 64 bits, depending on the precision.\n    pub use_decimal_128: bool,\n    /// Whether to read dates/timestamps that were written in the legacy hybrid Julian + Gregorian calendar as-is. If false, throw an exception instead. If the Spark type is TimestampNTZ, this should be true.\n    pub use_legacy_date_timestamp_or_ntz: bool,\n    /// Whether schema field names are case sensitive\n    pub case_sensitive: bool,\n}\n\nimpl SparkParquetOptions {\n    pub fn new(eval_mode: EvalMode, timezone: &str, allow_incompat: bool) -> Self {\n        Self {\n            eval_mode,\n            timezone: timezone.to_string(),\n            allow_incompat,\n            allow_cast_unsigned_ints: false,\n            use_decimal_128: false,\n            use_legacy_date_timestamp_or_ntz: false,\n            case_sensitive: false,\n        }\n    }\n\n    pub fn new_without_timezone(eval_mode: EvalMode, allow_incompat: bool) -> Self {\n        Self {\n            eval_mode,\n            timezone: \"\".to_string(),\n            allow_incompat,\n            allow_cast_unsigned_ints: false,\n            use_decimal_128: false,\n            use_legacy_date_timestamp_or_ntz: false,\n            case_sensitive: false,\n        }\n    }\n}\n\n/// Spark-compatible conversion for Parquet reads. Defers to Arrow's cast where that is\n/// known to be compatible, handles Comet-specific conversions directly, and returns the\n/// data unconverted when no supported conversion applies.\npub fn spark_parquet_convert(\n    arg: ColumnarValue,\n    data_type: &DataType,\n    parquet_options: &SparkParquetOptions,\n) -> DataFusionResult<ColumnarValue> {\n    match arg {\n        ColumnarValue::Array(array) => Ok(ColumnarValue::Array(parquet_convert_array(\n            array,\n            data_type,\n            parquet_options,\n        )?)),\n        ColumnarValue::Scalar(scalar) => {\n            // Note that normally CAST(scalar) should be folded on the Spark JVM side. 
However, for\n            // some cases e.g., scalar subquery, Spark will not fold it, so we need to handle it\n            // here.\n            let array = scalar.to_array()?;\n            let scalar = ScalarValue::try_from_array(\n                &parquet_convert_array(array, data_type, parquet_options)?,\n                0,\n            )?;\n            Ok(ColumnarValue::Scalar(scalar))\n        }\n    }\n}\n\nfn parquet_convert_array(\n    array: ArrayRef,\n    to_type: &DataType,\n    parquet_options: &SparkParquetOptions,\n) -> DataFusionResult<ArrayRef> {\n    use DataType::*;\n    let from_type = array.data_type().clone();\n\n    let array = match &from_type {\n        Dictionary(key_type, value_type)\n            if key_type.as_ref() == &Int32\n                && (value_type.as_ref() == &Utf8 || value_type.as_ref() == &LargeUtf8) =>\n        {\n            let dict_array = array\n                .as_any()\n                .downcast_ref::<DictionaryArray<Int32Type>>()\n                .expect(\"Expected a dictionary array\");\n\n            let casted_dictionary = DictionaryArray::<Int32Type>::new(\n                dict_array.keys().clone(),\n                parquet_convert_array(Arc::clone(dict_array.values()), to_type, parquet_options)?,\n            );\n\n            let casted_result = match to_type {\n                Dictionary(_, _) => Arc::new(casted_dictionary.clone()),\n                _ => take(casted_dictionary.values().as_ref(), dict_array.keys(), None)?,\n            };\n            return Ok(casted_result);\n        }\n        _ => array,\n    };\n    let from_type = array.data_type();\n\n    // Try Comet specific handlers first, then arrow-rs cast if supported,\n    // return uncasted data otherwise\n    match (from_type, to_type) {\n        (Struct(_), Struct(_)) => Ok(parquet_convert_struct_to_struct(\n            array.as_struct(),\n            from_type,\n            to_type,\n            parquet_options,\n        )?),\n        (List(_), List(to_inner_type)) => {\n            let list_arr: &ListArray = array.as_list();\n            let cast_field = parquet_convert_array(\n                Arc::clone(list_arr.values()),\n                to_inner_type.data_type(),\n                parquet_options,\n            )?;\n\n            Ok(Arc::new(ListArray::new(\n                Arc::clone(to_inner_type),\n                list_arr.offsets().clone(),\n                cast_field,\n                list_arr.nulls().cloned(),\n            )))\n        }\n        (Timestamp(TimeUnit::Microsecond, None), Timestamp(TimeUnit::Microsecond, Some(tz))) => {\n            Ok(Arc::new(\n                array\n                    .as_primitive::<TimestampMicrosecondType>()\n                    .reinterpret_cast::<TimestampMicrosecondType>()\n                    .with_timezone(Arc::clone(tz)),\n            ))\n        }\n        (Map(_, ordered_from), Map(_, ordered_to)) if ordered_from == ordered_to =>\n            parquet_convert_map_to_map(array.as_map(), to_type, parquet_options, *ordered_to)\n            ,\n        // Iceberg stores UUIDs as 16-byte fixed binary but Spark expects string representation.\n        // Arrow doesn't support casting FixedSizeBinary to Utf8, so we handle it manually.\n        (FixedSizeBinary(16), Utf8) => {\n            let binary_array = array\n                .as_any()\n                .downcast_ref::<FixedSizeBinaryArray>()\n                .expect(\"Expected a FixedSizeBinaryArray\");\n\n            let string_array: StringArray = binary_array\n       
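         // render each 16-byte value as its canonical hyphenated UUID string\n       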
         .iter()\n                .map(|opt_bytes| {\n                    opt_bytes.map(|bytes| {\n                        let uuid = uuid::Uuid::from_bytes(\n                            bytes.try_into().expect(\"Expected 16 bytes\")\n                        );\n                        uuid.to_string()\n                    })\n                })\n                .collect();\n\n            Ok(Arc::new(string_array))\n        }\n        // If Arrow cast supports the cast, delegate the cast to Arrow\n        _ if can_cast_types(from_type, to_type) => {\n            Ok(cast_with_options(&array, to_type, &PARQUET_OPTIONS)?)\n        }\n        _ => Ok(array),\n    }\n}\n\n/// Cast between struct types based on logic in\n/// `org.apache.spark.sql.catalyst.expressions.Cast#castStruct`.\nfn parquet_convert_struct_to_struct(\n    array: &StructArray,\n    from_type: &DataType,\n    to_type: &DataType,\n    parquet_options: &SparkParquetOptions,\n) -> DataFusionResult<ArrayRef> {\n    match (from_type, to_type) {\n        (DataType::Struct(from_fields), DataType::Struct(to_fields)) => {\n            // whether the source and target schemas have any column in common\n            let mut field_overlap = false;\n            // TODO some of this logic may be specific to converting Parquet to Spark\n            let mut field_name_to_index_map = HashMap::new();\n            for (i, field) in from_fields.iter().enumerate() {\n                if parquet_options.case_sensitive {\n                    field_name_to_index_map.insert(field.name().clone(), i);\n                } else {\n                    field_name_to_index_map.insert(field.name().to_lowercase(), i);\n                }\n            }\n            assert_eq!(field_name_to_index_map.len(), from_fields.len());\n            let mut cast_fields: Vec<ArrayRef> = Vec::with_capacity(to_fields.len());\n            for i in 0..to_fields.len() {\n                // Fields in the to_type schema may not exist in the from_type schema\n                // i.e. the required schema may have fields that the file does not\n                // have\n                let key = if parquet_options.case_sensitive {\n                    to_fields[i].name().clone()\n                } else {\n                    to_fields[i].name().to_lowercase()\n                };\n                if field_name_to_index_map.contains_key(&key) {\n                    let from_index = field_name_to_index_map[&key];\n                    let cast_field = parquet_convert_array(\n                        Arc::clone(array.column(from_index)),\n                        to_fields[i].data_type(),\n                        parquet_options,\n                    )?;\n                    cast_fields.push(cast_field);\n                    field_overlap = true;\n                } else {\n                    cast_fields.push(new_null_array(to_fields[i].data_type(), array.len()));\n                }\n            }\n\n            // If the target schema doesn't contain any of the source fields,\n            // mark the entire struct column as NULL\n            let nulls = if field_overlap {\n                array.nulls().cloned()\n            } else {\n                Some(NullBuffer::new_null(array.len()))\n            };\n\n            Ok(Arc::new(StructArray::new(\n                to_fields.clone(),\n                cast_fields,\n                nulls,\n            )))\n        }\n        _ => unreachable!(),\n    }\n}\n\n/// Cast a map type to another map type. 
The same as arrow-cast except we recursively call our own\n/// parquet_convert_array\nfn parquet_convert_map_to_map(\n    from: &MapArray,\n    to_data_type: &DataType,\n    parquet_options: &SparkParquetOptions,\n    to_ordered: bool,\n) -> Result<ArrayRef, DataFusionError> {\n    match to_data_type {\n        DataType::Map(entries_field, _) => {\n            let key_field = key_field(entries_field).ok_or(DataFusionError::Internal(\n                \"map is missing key field\".to_string(),\n            ))?;\n            let value_field = value_field(entries_field).ok_or(DataFusionError::Internal(\n                \"map is missing value field\".to_string(),\n            ))?;\n\n            let key_array = parquet_convert_array(\n                Arc::clone(from.keys()),\n                key_field.data_type(),\n                parquet_options,\n            )?;\n            let value_array = parquet_convert_array(\n                Arc::clone(from.values()),\n                value_field.data_type(),\n                parquet_options,\n            )?;\n\n            Ok(Arc::new(MapArray::new(\n                Arc::<arrow::datatypes::Field>::clone(entries_field),\n                from.offsets().clone(),\n                StructArray::new(\n                    Fields::from(vec![key_field, value_field]),\n                    vec![key_array, value_array],\n                    from.entries().nulls().cloned(),\n                ),\n                from.nulls().cloned(),\n                to_ordered,\n            )))\n        }\n        dt => Err(DataFusionError::Internal(format!(\n            \"Expected MapType. Got: {dt}\"\n        ))),\n    }\n}\n\n/// Gets the key field from the entries of a map.  For all other types returns None.\nfn key_field(entries_field: &FieldRef) -> Option<FieldRef> {\n    if let DataType::Struct(fields) = entries_field.data_type() {\n        fields.first().cloned()\n    } else {\n        None\n    }\n}\n\n/// Gets the value field from the entries of a map.  
For all other types returns None.\nfn value_field(entries_field: &FieldRef) -> Option<FieldRef> {\n    if let DataType::Struct(fields) = entries_field.data_type() {\n        fields.get(1).cloned()\n    } else {\n        None\n    }\n}\n\npub fn is_hdfs_scheme(url: &Url, object_store_configs: &HashMap<String, String>) -> bool {\n    const COMET_LIBHDFS_SCHEMES_KEY: &str = \"fs.comet.libhdfs.schemes\";\n    let scheme = url.scheme();\n    if let Some(libhdfs_schemes) = object_store_configs.get(COMET_LIBHDFS_SCHEMES_KEY) {\n        use itertools::Itertools;\n        libhdfs_schemes.split(\",\").contains(scheme)\n    } else {\n        scheme == \"hdfs\"\n    }\n}\n\n// Creates an HDFS object store from a URL using the native HDFS implementation\n#[cfg(all(feature = \"hdfs\", not(feature = \"hdfs-opendal\")))]\nfn create_hdfs_object_store(\n    url: &Url,\n) -> Result<(Box<dyn ObjectStore>, Path), object_store::Error> {\n    match datafusion_comet_objectstore_hdfs::object_store::hdfs::HadoopFileSystem::new(url.as_ref())\n    {\n        Some(object_store) => {\n            let path = object_store.get_path(url.as_str());\n            Ok((Box::new(object_store), path))\n        }\n        _ => Err(object_store::Error::Generic {\n            store: \"HadoopFileSystem\",\n            source: \"Could not create hdfs object store\".into(),\n        }),\n    }\n}\n\n// Creates an OpenDAL HDFS Operator from a URL with optional configuration\n#[cfg(feature = \"hdfs-opendal\")]\npub(crate) fn create_hdfs_operator(url: &Url) -> Result<opendal::Operator, object_store::Error> {\n    let name_node = get_name_node_uri(url)?;\n    let builder = opendal::services::Hdfs::default().name_node(&name_node);\n\n    opendal::Operator::new(builder)\n        .map_err(|error| object_store::Error::Generic {\n            store: \"hdfs-opendal\",\n            source: error.into(),\n        })\n        .map(|op| op.finish())\n}\n\n// Creates an HDFS object store from a URL using OpenDAL\n#[cfg(feature = \"hdfs-opendal\")]\npub(crate) fn create_hdfs_object_store(\n    url: &Url,\n) -> Result<(Box<dyn ObjectStore>, Path), object_store::Error> {\n    let op = create_hdfs_operator(url)?;\n    let store = object_store_opendal::OpendalStore::new(op);\n    let path = Path::parse(url.path())?;\n    Ok((Box::new(store), path))\n}\n\n#[cfg(feature = \"hdfs-opendal\")]\nfn get_name_node_uri(url: &Url) -> Result<String, object_store::Error> {\n    use std::fmt::Write;\n    if let Some(host) = url.host() {\n        let schema = url.scheme();\n        let mut uri_builder = String::new();\n        write!(&mut uri_builder, \"{schema}://{host}\").unwrap();\n\n        if let Some(port) = url.port() {\n            write!(&mut uri_builder, \":{port}\").unwrap();\n        }\n        Ok(uri_builder)\n    } else {\n        Err(object_store::Error::InvalidPath {\n            source: object_store::path::Error::InvalidPath {\n                path: std::path::PathBuf::from(url.as_str()),\n            },\n        })\n    }\n}\n\n// Stub implementation when HDFS support is not enabled\n#[cfg(all(not(feature = \"hdfs\"), not(feature = \"hdfs-opendal\")))]\nfn create_hdfs_object_store(\n    _url: &Url,\n) -> Result<(Box<dyn ObjectStore>, Path), object_store::Error> {\n    Err(object_store::Error::Generic {\n        store: \"HadoopFileSystem\",\n        source: \"Hdfs support is not enabled in this build\".into(),\n    })\n}\n\ntype ObjectStoreCache = RwLock<HashMap<(String, u64), Arc<dyn ObjectStore>>>;\n\n/// Process-wide cache of object stores, keyed by 
`(scheme://host:port, config_hash)`.\n///\n/// ## Why static / process lifetime?\n///\n/// Comet's JNI architecture calls `initRecordBatchReader` once per Parquet file, and each\n/// call constructs a fresh `RuntimeEnv`.  There is therefore no executor-scoped Rust object\n/// with a lifetime longer than a single file read that could own this cache.  The executor\n/// process itself is the natural scope for HTTP connection-pool reuse, so process lifetime\n/// (i.e. `static`) is the appropriate choice here.  In the standard Spark-on-Kubernetes\n/// deployment model each executor process is dedicated to a single Spark application, so\n/// process lifetime and application lifetime are equivalent; the cache is reclaimed when\n/// the executor pod terminates.\n///\n/// ## Unbounded size\n///\n/// Cache entries are indexed by `(scheme://host:port, hash-of-configs)`.  A typical Spark\n/// job accesses a small, fixed set of buckets with a stable configuration, so the number of\n/// distinct keys is O(buckets × credential-configs) and remains small throughout the job.\n/// Entries are cheap relative to the cost of creating a new object store (new HTTP\n/// connection pool + DNS resolution), and there is no meaningful benefit from eviction, so\n/// no eviction policy is applied.\n///\n/// ## Credential invalidation\n///\n/// Object stores that use dynamic credentials (IMDS, WebIdentity, ECS role, STS assume-role)\n/// delegate credential refresh to a `CometCredentialProvider` that fetches fresh credentials\n/// on every request, so credential rotation is transparent and requires no cache\n/// invalidation.  Object stores whose credentials are embedded in the Hadoop configuration\n/// (e.g. `fs.s3a.access.key` / `fs.s3a.secret.key`) produce a different `config_hash` when\n/// those values change, which causes a new store to be created and inserted under the new\n/// key; the old entry is harmlessly superseded.\nfn object_store_cache() -> &'static ObjectStoreCache {\n    static CACHE: OnceLock<ObjectStoreCache> = OnceLock::new();\n    CACHE.get_or_init(|| RwLock::new(HashMap::new()))\n}\n\n/// Compute a hash of the object store configuration for cache keying.\nfn hash_object_store_configs(configs: &HashMap<String, String>) -> u64 {\n    let mut hasher = DefaultHasher::new();\n    let mut keys: Vec<&String> = configs.keys().collect();\n    keys.sort();\n    for key in keys {\n        key.hash(&mut hasher);\n        configs[key].hash(&mut hasher);\n    }\n    hasher.finish()\n}\n\n/// Parses the url, registers the object store with configurations, and returns a tuple of the object store url\n/// and object store path\npub(crate) fn prepare_object_store_with_configs(\n    runtime_env: Arc<RuntimeEnv>,\n    url: String,\n    object_store_configs: &HashMap<String, String>,\n) -> Result<(ObjectStoreUrl, Path), ExecutionError> {\n    let mut url = Url::parse(url.as_str())\n        .map_err(|e| ExecutionError::GeneralError(format!(\"Error parsing URL {url}: {e}\")))?;\n    let is_hdfs_scheme = is_hdfs_scheme(&url, object_store_configs);\n    let mut scheme = url.scheme();\n    if !is_hdfs_scheme && scheme == \"s3a\" {\n        scheme = \"s3\";\n        url.set_scheme(\"s3\").map_err(|_| {\n            ExecutionError::GeneralError(\"Could not convert scheme from s3a to s3\".to_string())\n        })?;\n    }\n    let url_key = format!(\n        \"{}://{}\",\n        scheme,\n        &url[url::Position::BeforeHost..url::Position::AfterPort],\n    );\n\n    let config_hash = 
hash_object_store_configs(object_store_configs);\n    let cache_key = (url_key.clone(), config_hash);\n\n    // Check the cache first to reuse existing object store instances.\n    // This enables HTTP connection pooling and avoids redundant DNS lookups.\n    let cached = {\n        let cache = object_store_cache()\n            .read()\n            .map_err(|e| ExecutionError::GeneralError(format!(\"Object store cache error: {e}\")))?;\n        cache.get(&cache_key).cloned()\n    };\n\n    let (object_store, object_store_path): (Arc<dyn ObjectStore>, Path) =\n        if let Some(store) = cached {\n            debug!(\"Reusing cached object store for {url_key}\");\n            let path = Path::from_url_path(url.path())\n                .map_err(|e| ExecutionError::GeneralError(e.to_string()))?;\n            (store, path)\n        } else {\n            debug!(\"Creating new object store for {url_key}\");\n            let (store, path): (Box<dyn ObjectStore>, Path) = if is_hdfs_scheme {\n                create_hdfs_object_store(&url)\n            } else if scheme == \"s3\" {\n                objectstore::s3::create_store(&url, object_store_configs, Duration::from_secs(300))\n            } else {\n                parse_url(&url)\n            }\n            .map_err(|e| ExecutionError::GeneralError(e.to_string()))?;\n\n            let store: Arc<dyn ObjectStore> = Arc::from(store);\n            // Insert into cache\n            if let Ok(mut cache) = object_store_cache().write() {\n                cache.insert(cache_key, Arc::clone(&store));\n            }\n            (store, path)\n        };\n\n    let object_store_url = ObjectStoreUrl::parse(url_key.clone())?;\n    runtime_env.register_object_store(&url, object_store);\n    Ok((object_store_url, object_store_path))\n}\n\n#[cfg(test)]\nmod tests {\n    #[cfg(any(\n        all(not(feature = \"hdfs\"), not(feature = \"hdfs-opendal\")),\n        feature = \"hdfs\"\n    ))]\n    use datafusion::execution::object_store::ObjectStoreUrl;\n    #[cfg(any(\n        all(not(feature = \"hdfs\"), not(feature = \"hdfs-opendal\")),\n        feature = \"hdfs\"\n    ))]\n    use datafusion::execution::runtime_env::RuntimeEnv;\n    #[cfg(any(\n        all(not(feature = \"hdfs\"), not(feature = \"hdfs-opendal\")),\n        feature = \"hdfs\"\n    ))]\n    use object_store::path::Path;\n    #[cfg(any(\n        all(not(feature = \"hdfs\"), not(feature = \"hdfs-opendal\")),\n        feature = \"hdfs\"\n    ))]\n    use std::sync::Arc;\n    #[cfg(any(\n        all(not(feature = \"hdfs\"), not(feature = \"hdfs-opendal\")),\n        feature = \"hdfs\"\n    ))]\n    use url::Url;\n\n    #[cfg(all(not(feature = \"hdfs\"), not(feature = \"hdfs-opendal\")))]\n    use crate::execution::operators::ExecutionError;\n    #[cfg(all(not(feature = \"hdfs\"), not(feature = \"hdfs-opendal\")))]\n    use std::collections::HashMap;\n\n    /// Parses the url, registers the object store, and returns a tuple of the object store url and object store path\n    #[cfg(all(not(feature = \"hdfs\"), not(feature = \"hdfs-opendal\")))]\n    pub(crate) fn prepare_object_store(\n        runtime_env: Arc<RuntimeEnv>,\n        url: String,\n    ) -> Result<(ObjectStoreUrl, Path), ExecutionError> {\n        use crate::parquet::parquet_support::prepare_object_store_with_configs;\n        prepare_object_store_with_configs(runtime_env, url, &HashMap::new())\n    }\n\n    /// Parses the url, registers the object store, and returns a tuple of the object store url and object store path\n    #[cfg(feature 
= \"hdfs\")]\n    pub(crate) fn prepare_object_store(\n        runtime_env: Arc<RuntimeEnv>,\n        url: String,\n    ) -> Result<(ObjectStoreUrl, Path), crate::execution::operators::ExecutionError> {\n        use crate::parquet::parquet_support::prepare_object_store_with_configs;\n        use std::collections::HashMap;\n        prepare_object_store_with_configs(runtime_env, url, &HashMap::new())\n    }\n\n    #[cfg(all(not(feature = \"hdfs\"), not(feature = \"hdfs-opendal\")))]\n    #[test]\n    fn test_prepare_object_store() {\n        use crate::execution::operators::ExecutionError;\n\n        let local_file_system_url = \"file:///comet/spark-warehouse/part-00000.snappy.parquet\";\n        let hdfs_url = \"hdfs://localhost:8020/comet/spark-warehouse/part-00000.snappy.parquet\";\n\n        let all_urls = [local_file_system_url, hdfs_url];\n        let expected: Vec<Result<(ObjectStoreUrl, Path), ExecutionError>> = vec![\n            Ok((\n                ObjectStoreUrl::parse(\"file://\").unwrap(),\n                Path::from(\"/comet/spark-warehouse/part-00000.snappy.parquet\"),\n            )),\n            Err(ExecutionError::GeneralError(\n                \"Generic HadoopFileSystem error: Hdfs support is not enabled in this build\"\n                    .parse()\n                    .unwrap(),\n            )),\n        ];\n\n        for (i, url_str) in all_urls.iter().enumerate() {\n            let url = &Url::parse(url_str).unwrap();\n            let res = prepare_object_store(Arc::new(RuntimeEnv::default()), url.to_string());\n\n            let expected = expected.get(i).unwrap();\n            match expected {\n                Ok((o, p)) => {\n                    let (r_o, r_p) = res.unwrap();\n                    assert_eq!(r_o, *o);\n                    assert_eq!(r_p, *p);\n                }\n                Err(e) => {\n                    assert!(res.is_err());\n                    let Err(res_e) = res else {\n                        panic!(\"test failed\")\n                    };\n                    assert_eq!(e.to_string(), res_e.to_string())\n                }\n            }\n        }\n    }\n\n    #[test]\n    #[cfg(feature = \"hdfs\")]\n    fn test_prepare_object_store() {\n        // we use a local file system url instead of an hdfs url because the latter requires\n        // a running namenode\n        let hdfs_url = \"file:///comet/spark-warehouse/part-00000.snappy.parquet\";\n        let expected: (ObjectStoreUrl, Path) = (\n            ObjectStoreUrl::parse(\"file://\").unwrap(),\n            Path::from(\"/comet/spark-warehouse/part-00000.snappy.parquet\"),\n        );\n\n        let url = &Url::parse(hdfs_url).unwrap();\n        let res = prepare_object_store(Arc::new(RuntimeEnv::default()), url.to_string());\n\n        let res = res.unwrap();\n        assert_eq!(res.0, expected.0);\n        assert_eq!(res.1, expected.1);\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/read/column.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{marker::PhantomData, sync::Arc};\n\nuse arrow::{\n    array::ArrayData,\n    buffer::Buffer,\n    datatypes::{DataType as ArrowDataType, TimeUnit},\n};\n\nuse parquet::{\n    basic::{Encoding, LogicalType, TimeUnit as ParquetTimeUnit, Type as PhysicalType},\n    schema::types::{ColumnDescPtr, ColumnDescriptor},\n};\n\nuse crate::parquet::{data_type::*, util::jni::TypePromotionInfo, ParquetMutableVector};\n\nuse super::{\n    levels::LevelDecoder,\n    values::{get_decoder, Decoder},\n    ReadOptions,\n};\n\nuse crate::common::bit::log2;\nuse crate::execution::operators::ExecutionError;\n\n/// Maximum number of decimal digits an i32 can represent\nconst DECIMAL_MAX_INT_DIGITS: i32 = 9;\n\n/// Maximum number of decimal digits an i64 can represent\nconst DECIMAL_MAX_LONG_DIGITS: i32 = 18;\n\npub enum ColumnReader {\n    BoolColumnReader(TypedColumnReader<BoolType>),\n    Int8ColumnReader(TypedColumnReader<Int8Type>),\n    UInt8ColumnReader(TypedColumnReader<UInt8Type>),\n    Int16ColumnReader(TypedColumnReader<Int16Type>),\n    Int16ToDoubleColumnReader(TypedColumnReader<Int16ToDoubleType>),\n    UInt16ColumnReader(TypedColumnReader<UInt16Type>),\n    Int32ColumnReader(TypedColumnReader<Int32Type>),\n    Int32To64ColumnReader(TypedColumnReader<Int32To64Type>),\n    Int32ToDecimal64ColumnReader(TypedColumnReader<Int32ToDecimal64Type>),\n    Int32ToDoubleColumnReader(TypedColumnReader<Int32ToDoubleType>),\n    UInt32ColumnReader(TypedColumnReader<UInt32Type>),\n    Int32DecimalColumnReader(TypedColumnReader<Int32DecimalType>),\n    Int32DateColumnReader(TypedColumnReader<Int32DateType>),\n    Int32TimestampMicrosColumnReader(TypedColumnReader<Int32TimestampMicrosType>),\n    Int64ColumnReader(TypedColumnReader<Int64Type>),\n    Int64ToDecimal64ColumnReader(TypedColumnReader<Int64ToDecimal64Type>),\n    UInt64DecimalColumnReader(TypedColumnReader<UInt64Type>),\n    Int64DecimalColumnReader(TypedColumnReader<Int64DecimalType>),\n    Int64TimestampMillisColumnReader(TypedColumnReader<Int64TimestampMillisType>),\n    Int64TimestampMicrosColumnReader(TypedColumnReader<Int64TimestampMicrosType>),\n    Int64TimestampNanosColumnReader(TypedColumnReader<Int64Type>),\n    Int96ColumnReader(TypedColumnReader<Int96TimestampMicrosType>),\n    FloatColumnReader(TypedColumnReader<FloatType>),\n    FloatToDoubleColumnReader(TypedColumnReader<FloatToDoubleType>),\n    DoubleColumnReader(TypedColumnReader<DoubleType>),\n    ByteArrayColumnReader(TypedColumnReader<ByteArrayType>),\n    StringColumnReader(TypedColumnReader<StringType>),\n    FLBADecimalColumnReader(TypedColumnReader<FLBADecimalType>),\n    FLBADecimal32ColumnReader(TypedColumnReader<FLBADecimal32Type>),\n    
FLBADecimal64ColumnReader(TypedColumnReader<FLBADecimal64Type>),\n    FLBAColumnReader(TypedColumnReader<FLBAType>),\n}\n\nimpl ColumnReader {\n    /// Creates a new column reader according to the input `desc`.\n    ///\n    /// - `desc`: The actual descriptor for the underlying Parquet files\n    /// - `promotion_info`: Extra information about type promotion. This is passed in to support\n    ///   schema evolution, e.g., int -> long, where Parquet type is int but Spark type is long.\n    /// - `use_decimal_128`: Whether to read small precision decimals as `i128` instead of as `i32`\n    ///   or `i64`, as Spark does\n    /// - `use_legacy_date_timestamp_or_ntz`: Whether to read dates/timestamps that were written\n    ///   using the legacy Julian/Gregorian hybrid calendar as-is. If false, an exception is\n    ///   thrown when such a value is encountered. If the Spark type is TimestampNTZ, this should\n    ///   be true.\n    pub fn get(\n        desc: ColumnDescriptor,\n        promotion_info: TypePromotionInfo,\n        capacity: usize,\n        use_decimal_128: bool,\n        use_legacy_date_timestamp_or_ntz: bool,\n    ) -> Self {\n        let read_options = ReadOptions {\n            use_legacy_date_timestamp_or_ntz,\n        };\n        macro_rules! typed_reader {\n            ($reader_ty:ident, $arrow_ty:ident) => {\n                Self::$reader_ty(TypedColumnReader::new(\n                    desc,\n                    capacity,\n                    ArrowDataType::$arrow_ty,\n                    read_options,\n                ))\n            };\n            ($reader_ty:ident, $arrow_ty:expr) => {\n                Self::$reader_ty(TypedColumnReader::new(\n                    desc,\n                    capacity,\n                    $arrow_ty,\n                    read_options,\n                ))\n            };\n        }\n\n        match desc.physical_type() {\n            PhysicalType::BOOLEAN => typed_reader!(BoolColumnReader, Boolean),\n            PhysicalType::INT32 => {\n                if let Some(ref logical_type) = desc.logical_type_ref() {\n                    match logical_type {\n                        lt @ LogicalType::Integer {\n                            bit_width,\n                            is_signed,\n                        } => match (bit_width, is_signed) {\n                            (8, true) => match promotion_info.physical_type {\n                                PhysicalType::FIXED_LEN_BYTE_ARRAY => {\n                                    if promotion_info.precision <= DECIMAL_MAX_INT_DIGITS\n                                        && promotion_info.scale < 1\n                                    {\n                                        typed_reader!(Int32ColumnReader, Int32)\n                                    } else if promotion_info.precision <= DECIMAL_MAX_LONG_DIGITS {\n                                        typed_reader!(\n                                            Int32ToDecimal64ColumnReader,\n                                            ArrowDataType::Decimal128(\n                                                promotion_info.precision as u8,\n                                                promotion_info.scale as i8\n                                            )\n                                        )\n                                    } else {\n                                        typed_reader!(\n                                            Int32DecimalColumnReader,\n                                            ArrowDataType::Decimal128(\n                                
                promotion_info.precision as u8,\n                                                promotion_info.scale as i8\n                                            )\n                                        )\n                                    }\n                                }\n                                // promote byte to short\n                                PhysicalType::INT32 if promotion_info.bit_width == 16 => {\n                                    typed_reader!(Int16ColumnReader, Int16)\n                                }\n                                // promote byte to int\n                                PhysicalType::INT32 if promotion_info.bit_width == 32 => {\n                                    typed_reader!(Int32ColumnReader, Int32)\n                                }\n                                // promote byte to long\n                                PhysicalType::INT64 => typed_reader!(Int32To64ColumnReader, Int64),\n                                _ => typed_reader!(Int8ColumnReader, Int8),\n                            },\n                            (8, false) => typed_reader!(UInt8ColumnReader, Int16),\n                            (16, true) => match promotion_info.physical_type {\n                                PhysicalType::DOUBLE => {\n                                    typed_reader!(Int16ToDoubleColumnReader, Float64)\n                                }\n                                // promote short to long\n                                PhysicalType::INT64 => {\n                                    typed_reader!(Int32To64ColumnReader, Int64)\n                                }\n                                PhysicalType::INT32 if promotion_info.bit_width == 32 => {\n                                    typed_reader!(Int32ColumnReader, Int32)\n                                }\n                                PhysicalType::FIXED_LEN_BYTE_ARRAY => {\n                                    if promotion_info.precision <= DECIMAL_MAX_INT_DIGITS\n                                        && promotion_info.scale < 1\n                                    {\n                                        typed_reader!(Int32ColumnReader, Int32)\n                                    } else if promotion_info.precision <= DECIMAL_MAX_LONG_DIGITS {\n                                        typed_reader!(\n                                            Int32ToDecimal64ColumnReader,\n                                            ArrowDataType::Decimal128(\n                                                promotion_info.precision as u8,\n                                                promotion_info.scale as i8\n                                            )\n                                        )\n                                    } else {\n                                        typed_reader!(\n                                            Int32DecimalColumnReader,\n                                            ArrowDataType::Decimal128(\n                                                promotion_info.precision as u8,\n                                                promotion_info.scale as i8\n                                            )\n                                        )\n                                    }\n                                }\n                                _ => typed_reader!(Int16ColumnReader, Int16),\n                            },\n                            (16, false) => typed_reader!(UInt16ColumnReader, Int32),\n                            
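// the Spark read schema may narrow a signed 32-bit column to short (promotion_info.bit_width == 16)\n                            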
(32, true) => match promotion_info.physical_type {\n                                PhysicalType::INT32 if promotion_info.bit_width == 16 => {\n                                    typed_reader!(Int16ColumnReader, Int16)\n                                }\n                                _ => typed_reader!(Int32ColumnReader, Int32),\n                            },\n                            (32, false) => typed_reader!(UInt32ColumnReader, Int64),\n                            _ => unimplemented!(\"Unsupported INT32 annotation: {:?}\", lt),\n                        },\n                        LogicalType::Decimal {\n                            scale: _,\n                            precision: _,\n                        } => {\n                            if use_decimal_128 || promotion_info.precision > DECIMAL_MAX_LONG_DIGITS\n                            {\n                                typed_reader!(\n                                    Int32DecimalColumnReader,\n                                    ArrowDataType::Decimal128(\n                                        promotion_info.precision as u8,\n                                        promotion_info.scale as i8\n                                    )\n                                )\n                            } else {\n                                typed_reader!(\n                                    Int32ToDecimal64ColumnReader,\n                                    ArrowDataType::Decimal128(\n                                        promotion_info.precision as u8,\n                                        promotion_info.scale as i8\n                                    )\n                                )\n                            }\n                        }\n                        LogicalType::Date => match promotion_info.physical_type {\n                            PhysicalType::INT64 => typed_reader!(\n                                Int32TimestampMicrosColumnReader,\n                                ArrowDataType::Timestamp(TimeUnit::Microsecond, None)\n                            ),\n                            _ => typed_reader!(Int32DateColumnReader, Date32),\n                        },\n                        lt => unimplemented!(\"Unsupported logical type for INT32: {:?}\", lt),\n                    }\n                } else {\n                    // We support type promotion from int to long\n                    match promotion_info.physical_type {\n                        PhysicalType::INT32 if promotion_info.bit_width == 16 => {\n                            typed_reader!(Int16ColumnReader, Int16)\n                        }\n                        PhysicalType::INT32 => typed_reader!(Int32ColumnReader, Int32),\n                        PhysicalType::INT64 => typed_reader!(Int32To64ColumnReader, Int64),\n                        PhysicalType::DOUBLE => typed_reader!(Int32ToDoubleColumnReader, Float64),\n                        PhysicalType::FIXED_LEN_BYTE_ARRAY => {\n                            if promotion_info.precision <= DECIMAL_MAX_INT_DIGITS\n                                && promotion_info.scale < 1\n                            {\n                                typed_reader!(Int32ColumnReader, Int32)\n                            } else if promotion_info.precision <= DECIMAL_MAX_LONG_DIGITS {\n                                typed_reader!(\n                                    Int32ToDecimal64ColumnReader,\n                                    ArrowDataType::Decimal128(\n                                        
promotion_info.precision as u8,\n                                        promotion_info.scale as i8\n                                    )\n                                )\n                            } else {\n                                typed_reader!(\n                                    Int32DecimalColumnReader,\n                                    ArrowDataType::Decimal128(\n                                        promotion_info.precision as u8,\n                                        promotion_info.scale as i8\n                                    )\n                                )\n                            }\n                        }\n                        t => unimplemented!(\"Unsupported read physical type for INT32: {}\", t),\n                    }\n                }\n            }\n            PhysicalType::INT64 => {\n                if let Some(ref logical_type) = desc.logical_type_ref() {\n                    match logical_type {\n                        lt @ LogicalType::Integer {\n                            bit_width,\n                            is_signed,\n                        } => match (bit_width, is_signed) {\n                            (64, true) => typed_reader!(Int64ColumnReader, Int64),\n                            (64, false) => typed_reader!(\n                                UInt64DecimalColumnReader,\n                                ArrowDataType::Decimal128(20u8, 0i8)\n                            ),\n                            _ => panic!(\"Unsupported INT64 annotation: {lt:?}\"),\n                        },\n                        LogicalType::Decimal {\n                            scale: _,\n                            precision: _,\n                        } => {\n                            if use_decimal_128 || promotion_info.precision > DECIMAL_MAX_LONG_DIGITS\n                            {\n                                typed_reader!(\n                                    Int64DecimalColumnReader,\n                                    ArrowDataType::Decimal128(\n                                        promotion_info.precision as u8,\n                                        promotion_info.scale as i8\n                                    )\n                                )\n                            } else {\n                                typed_reader!(\n                                    Int64ToDecimal64ColumnReader,\n                                    ArrowDataType::Decimal128(\n                                        promotion_info.precision as u8,\n                                        promotion_info.scale as i8\n                                    )\n                                )\n                            }\n                        }\n                        LogicalType::Timestamp {\n                            is_adjusted_to_u_t_c,\n                            unit,\n                        } => {\n                            // To be consistent with Spark, we always store timestamps as\n                            // microseconds and convert milliseconds to microseconds.\n                            let time_unit = TimeUnit::Microsecond;\n                            let time_zone = if *is_adjusted_to_u_t_c {\n                                Some(\"UTC\".to_string().into())\n                            } else {\n                                None\n                            };\n                            match unit {\n                                ParquetTimeUnit::MILLIS => {\n                                    typed_reader!(\n  
                                      Int64TimestampMillisColumnReader,\n                                        ArrowDataType::Timestamp(time_unit, time_zone)\n                                    )\n                                }\n                                ParquetTimeUnit::MICROS => {\n                                    typed_reader!(\n                                        Int64TimestampMicrosColumnReader,\n                                        ArrowDataType::Timestamp(time_unit, time_zone)\n                                    )\n                                }\n                                ParquetTimeUnit::NANOS => {\n                                    typed_reader!(\n                                        Int64TimestampNanosColumnReader,\n                                        ArrowDataType::Int64\n                                    )\n                                }\n                            }\n                        }\n                        lt => panic!(\"Unsupported logical type for INT64: {lt:?}\"),\n                    }\n                } else {\n                    match promotion_info.physical_type {\n                        PhysicalType::FIXED_LEN_BYTE_ARRAY => {\n                            if promotion_info.precision <= DECIMAL_MAX_LONG_DIGITS\n                                && promotion_info.scale < 1\n                            {\n                                typed_reader!(Int64ColumnReader, Int64)\n                            } else {\n                                typed_reader!(\n                                    Int64DecimalColumnReader,\n                                    ArrowDataType::Decimal128(\n                                        promotion_info.precision as u8,\n                                        promotion_info.scale as i8\n                                    )\n                                )\n                            }\n                        }\n                        // By default it is INT(64, true)\n                        _ => typed_reader!(Int64ColumnReader, Int64),\n                    }\n                }\n            }\n            PhysicalType::INT96 => {\n                typed_reader!(\n                    Int96ColumnReader,\n                    ArrowDataType::Timestamp(TimeUnit::Microsecond, Some(\"UTC\".to_string().into()))\n                )\n            }\n            PhysicalType::FLOAT => match promotion_info.physical_type {\n                // We support type promotion from float to double\n                PhysicalType::FLOAT => typed_reader!(FloatColumnReader, Float32),\n                PhysicalType::DOUBLE => typed_reader!(FloatToDoubleColumnReader, Float64),\n                t => panic!(\"Unsupported read physical type: {t} for FLOAT\"),\n            },\n\n            PhysicalType::DOUBLE => typed_reader!(DoubleColumnReader, Float64),\n            PhysicalType::BYTE_ARRAY => {\n                if let Some(logical_type) = desc.logical_type_ref() {\n                    match logical_type {\n                        LogicalType::String => typed_reader!(StringColumnReader, Utf8),\n                        // https://github.com/apache/parquet-format/blob/master/LogicalTypes.md\n                        // \"enum type should interpret ENUM annotated field as a UTF-8\"\n                        LogicalType::Enum => typed_reader!(StringColumnReader, Utf8),\n                        lt => panic!(\"Unsupported logical type for BYTE_ARRAY: {lt:?}\"),\n                    }\n                } else {\n    
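                // BYTE_ARRAY with no logical type annotation is read as plain binary\n    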
                typed_reader!(ByteArrayColumnReader, Binary)\n                }\n            }\n            PhysicalType::FIXED_LEN_BYTE_ARRAY => {\n                if let Some(logical_type) = desc.logical_type_ref() {\n                    match logical_type {\n                        LogicalType::Decimal {\n                            precision,\n                            scale: _,\n                        } => {\n                            if !use_decimal_128 && precision <= &DECIMAL_MAX_INT_DIGITS {\n                                typed_reader!(FLBADecimal32ColumnReader, Int32)\n                            } else if !use_decimal_128\n                                && promotion_info.precision <= DECIMAL_MAX_LONG_DIGITS\n                            {\n                                typed_reader!(\n                                    FLBADecimal64ColumnReader,\n                                    ArrowDataType::Decimal128(\n                                        promotion_info.precision as u8,\n                                        promotion_info.scale as i8\n                                    )\n                                )\n                            } else {\n                                typed_reader!(\n                                    FLBADecimalColumnReader,\n                                    ArrowDataType::Decimal128(\n                                        promotion_info.precision as u8,\n                                        promotion_info.scale as i8\n                                    )\n                                )\n                            }\n                        }\n                        LogicalType::Uuid => {\n                            let type_length = desc.type_length();\n                            typed_reader!(\n                                FLBAColumnReader,\n                                ArrowDataType::FixedSizeBinary(type_length)\n                            )\n                        }\n                        t => panic!(\"Unsupported logical type for FIXED_LEN_BYTE_ARRAY: {t:?}\"),\n                    }\n                } else {\n                    let type_length = desc.type_length();\n                    typed_reader!(\n                        FLBAColumnReader,\n                        ArrowDataType::FixedSizeBinary(type_length)\n                    )\n                }\n            }\n        }\n    }\n}\n\nmacro_rules! 
make_func {\n    ($self:ident, $func:ident $(,$args:ident)*) => ({\n        match *$self {\n            Self::BoolColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int8ColumnReader(ref typed) => typed.$func($($args),*),\n            Self::UInt8ColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int16ColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int16ToDoubleColumnReader(ref typed) => typed.$func($($args), *),\n            Self::UInt16ColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int32ColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int32To64ColumnReader(ref typed) => typed.$func($($args), *),\n            Self::Int32ToDecimal64ColumnReader(ref typed) => typed.$func($($args), *),\n            Self::Int32ToDoubleColumnReader(ref typed) => typed.$func($($args), *),\n            Self::UInt32ColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int32DateColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int32DecimalColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int32TimestampMicrosColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int64ColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int64ToDecimal64ColumnReader(ref typed) => typed.$func($($args), *),\n            Self::UInt64DecimalColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int64DecimalColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int64TimestampMillisColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int64TimestampMicrosColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int64TimestampNanosColumnReader(ref typed) => typed.$func($($args),*),\n            Self::FloatColumnReader(ref typed) => typed.$func($($args),*),\n            Self::DoubleColumnReader(ref typed) => typed.$func($($args),*),\n            Self::FloatToDoubleColumnReader(ref typed) => typed.$func($($args),*),\n            Self::ByteArrayColumnReader(ref typed) => typed.$func($($args),*),\n            Self::StringColumnReader(ref typed) => typed.$func($($args),*),\n            Self::FLBADecimalColumnReader(ref typed) => typed.$func($($args),*),\n            Self::FLBADecimal32ColumnReader(ref typed) => typed.$func($($args),*),\n            Self::FLBADecimal64ColumnReader(ref typed) => typed.$func($($args),*),\n            Self::FLBAColumnReader(ref typed) => typed.$func($($args),*),\n            Self::Int96ColumnReader(ref typed) => typed.$func($($args),*),\n        }\n    });\n}\n\nmacro_rules! 
make_func_mut {\n    ($self:ident, $func:ident $(,$args:ident)*) => ({\n        match *$self {\n            Self::BoolColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int8ColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::UInt8ColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int16ColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int16ToDoubleColumnReader(ref mut typed) => typed.$func($($args), *),\n            Self::UInt16ColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int32ColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int32To64ColumnReader(ref mut typed) => typed.$func($($args), *),\n            Self::Int32ToDecimal64ColumnReader(ref mut typed) => typed.$func($($args), *),\n            Self::Int32ToDoubleColumnReader(ref mut typed) => typed.$func($($args), *),\n            Self::UInt32ColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int32DateColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int32DecimalColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int32TimestampMicrosColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int64ColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int64ToDecimal64ColumnReader(ref mut typed) => typed.$func($($args), *),\n            Self::UInt64DecimalColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int64DecimalColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int64TimestampMillisColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int64TimestampMicrosColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int64TimestampNanosColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::FloatColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::DoubleColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::FloatToDoubleColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::ByteArrayColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::StringColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::FLBADecimalColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::FLBADecimal32ColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::FLBADecimal64ColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::FLBAColumnReader(ref mut typed) => typed.$func($($args),*),\n            Self::Int96ColumnReader(ref mut typed) => typed.$func($($args),*),\n        }\n    });\n}\n\nimpl ColumnReader {\n    #[inline]\n    pub fn get_descriptor(&self) -> &ColumnDescriptor {\n        make_func!(self, get_descriptor)\n    }\n\n    #[inline]\n    pub fn set_dictionary_page(\n        &mut self,\n        page_value_count: usize,\n        page_data: Buffer,\n        encoding: Encoding,\n    ) {\n        make_func_mut!(\n            self,\n            set_dictionary_page,\n            page_value_count,\n            page_data,\n            encoding\n        )\n    }\n\n    #[inline]\n    pub fn set_page_v1(&mut self, page_value_count: usize, page_data: Buffer, encoding: Encoding) {\n        make_func_mut!(self, set_page_v1, page_value_count, page_data, encoding)\n    }\n\n    #[inline]\n    pub fn set_page_v2(\n        &mut self,\n        page_value_count: 
usize,\n        def_level_data: Buffer,\n        rep_level_data: Buffer,\n        value_data: Buffer,\n        encoding: Encoding,\n    ) {\n        make_func_mut!(\n            self,\n            set_page_v2,\n            page_value_count,\n            def_level_data,\n            rep_level_data,\n            value_data,\n            encoding\n        )\n    }\n\n    #[inline]\n    pub fn reset_batch(&mut self) {\n        make_func_mut!(self, reset_batch)\n    }\n\n    #[inline]\n    pub fn current_batch(&mut self) -> Result<ArrayData, ExecutionError> {\n        make_func_mut!(self, current_batch)\n    }\n\n    #[inline]\n    pub fn read_batch(&mut self, total: usize, null_pad_size: usize) -> (usize, usize) {\n        make_func_mut!(self, read_batch, total, null_pad_size)\n    }\n\n    #[inline]\n    pub fn skip_batch(&mut self, total: usize, put_nulls: bool) -> usize {\n        make_func_mut!(self, skip_batch, total, put_nulls)\n    }\n}\n\n/// A batched reader for a primitive Parquet column.\npub struct TypedColumnReader<T: DataType> {\n    desc: ColumnDescPtr,\n    arrow_type: ArrowDataType,\n    rep_level_decoder: Option<LevelDecoder>,\n    def_level_decoder: Option<LevelDecoder>,\n    value_decoder: Option<Box<dyn Decoder>>,\n\n    /// The remaining number of values to read in the current page\n    num_values_in_page: usize,\n    /// The value vector for this column reader; reused across batches.\n    vector: ParquetMutableVector,\n    /// The batch size for this column vector.\n    capacity: usize,\n    /// Number of bits used to represent one value in Parquet.\n    bit_width: usize,\n    // Options for reading Parquet\n    read_options: ReadOptions,\n\n    /// Marker to allow `T` in the generic parameter of the struct.\n    _phantom: PhantomData<T>,\n}\n\nimpl<T: DataType> TypedColumnReader<T> {\n    pub fn new(\n        desc: ColumnDescriptor,\n        capacity: usize,\n        arrow_type: ArrowDataType,\n        read_options: ReadOptions,\n    ) -> Self {\n        let vector = ParquetMutableVector::new(capacity, &arrow_type);\n        let bit_width = ParquetMutableVector::bit_width(&arrow_type);\n        Self {\n            desc: Arc::new(desc),\n            arrow_type,\n            rep_level_decoder: None,\n            def_level_decoder: None,\n            value_decoder: None,\n            num_values_in_page: 0,\n            vector,\n            capacity,\n            bit_width,\n            read_options,\n            _phantom: PhantomData,\n        }\n    }\n\n    #[inline]\n    pub fn get_descriptor(&self) -> &ColumnDescriptor {\n        &self.desc\n    }\n\n    /// Reset the current batch. This will clear all the content of the current columnar vector as\n    /// well as reset all of its internal states.\n    #[inline]\n    pub fn reset_batch(&mut self) {\n        self.vector.reset()\n    }\n\n    /// Returns the current batch that's been constructed.\n    ///\n    /// Note: the caller must make sure the returned Arrow vector is fully consumed before calling\n    /// `read_batch` again.\n    #[inline]\n    pub fn current_batch(&mut self) -> Result<ArrayData, ExecutionError> {\n        self.vector.get_array_data()\n    }\n\n    /// Reads a batch of at most `total` values from the current page this reader has. 
Returns a\n    /// tuple where the first element is the actual number of values read (including both nulls and\n    /// non-nulls), and the second element is the actual number of nulls read.\n    ///\n    /// Pads `null_pad_size` nulls before reading.\n    ///\n    /// If the returned number of values is < `total`, it means the current page is drained and the\n    /// caller should call `set_page_v1` or `set_page_v2` before the next `read_batch` call.\n    pub fn read_batch(&mut self, total: usize, null_pad_size: usize) -> (usize, usize) {\n        debug_assert!(\n            self.value_decoder.is_some() && self.def_level_decoder.is_some(),\n            \"set_page_v1/v2 should have been called\"\n        );\n\n        let n = ::std::cmp::min(self.num_values_in_page, total);\n        self.num_values_in_page -= n;\n        let value_decoder = self.value_decoder.as_mut().unwrap();\n        let dl_decoder = self.def_level_decoder.as_mut().unwrap();\n\n        let previous_num_nulls = self.vector.num_nulls;\n        self.vector.put_nulls(null_pad_size);\n        dl_decoder.read_batch(n, &mut self.vector, value_decoder.as_mut());\n\n        (n, self.vector.num_nulls - previous_num_nulls)\n    }\n\n    /// Skips a batch of at most `total` values from the current page this reader has, and returns\n    /// the actual number of values skipped.\n    ///\n    /// If the return value is < `total`, it means the current page is drained and the caller should\n    /// call `set_page_v1` or `set_page_v2` before the next `skip_batch` call.\n    pub fn skip_batch(&mut self, total: usize, put_nulls: bool) -> usize {\n        debug_assert!(\n            self.value_decoder.is_some() && self.def_level_decoder.is_some(),\n            \"set_page_v1/v2 should have been called\"\n        );\n\n        let n = ::std::cmp::min(self.num_values_in_page, total);\n        self.num_values_in_page -= n;\n        let value_decoder = self.value_decoder.as_mut().unwrap();\n        let dl_decoder = self.def_level_decoder.as_mut().unwrap();\n\n        dl_decoder.skip_batch(n, &mut self.vector, value_decoder.as_mut(), put_nulls);\n\n        n\n    }\n\n    /// Sets the dictionary page for this column reader and eagerly reads it.\n    ///\n    /// # Panics\n    ///\n    /// - If called more than once during the lifetime of this column reader. A Parquet column\n    ///   chunk should only contain a single dictionary page.\n    /// - If the input `encoding` is neither `PLAIN` nor `PLAIN_DICTIONARY`.\n    pub fn set_dictionary_page(\n        &mut self,\n        page_value_count: usize,\n        page_data: Buffer,\n        mut encoding: Encoding,\n    ) {\n        // In Parquet format v1, both dictionary page and data page use the same encoding\n        // `PLAIN_DICTIONARY`, while in v2, dictionary page uses `PLAIN` and data page uses\n        // `RLE_DICTIONARY`.\n        //\n        // Here, we convert `PLAIN` from v2 dictionary page to `PLAIN_DICTIONARY`, so that v1 and v2\n        // share the same encoding. Later on, `get_decoder` will use the `PlainDecoder` for\n        // this case.
\n        if encoding == Encoding::PLAIN {\n            encoding = Encoding::PLAIN_DICTIONARY;\n        }\n\n        if encoding != Encoding::PLAIN_DICTIONARY {\n            panic!(\"Invalid encoding type for Parquet dictionary: {encoding}\");\n        }\n\n        if self.vector.dictionary.is_some() {\n            panic!(\"Parquet column cannot have more than one dictionary\");\n        }\n\n        // Create a new vector for dictionary values\n        let mut value_vector = ParquetMutableVector::new(page_value_count, &self.arrow_type);\n\n        let mut dictionary = self.get_decoder(page_data, encoding);\n        dictionary.read_batch(&mut value_vector, page_value_count);\n        value_vector.num_values = page_value_count;\n\n        // Re-create the parent vector since it is initialized with the dictionary value type, not\n        // the key type (which is always integer).\n        self.vector = ParquetMutableVector::new(self.capacity, &ArrowDataType::Int32);\n        self.vector.set_dictionary(value_vector);\n    }\n\n    /// Resets the Parquet data page for this column reader.\n    pub fn set_page_v1(\n        &mut self,\n        page_value_count: usize,\n        page_data: Buffer,\n        mut encoding: Encoding,\n    ) {\n        // In v1, when data is encoded with dictionary, data page uses `PLAIN_DICTIONARY`, while v2\n        // uses `RLE_DICTIONARY`. To consolidate the two, here we convert `PLAIN_DICTIONARY` to\n        // `RLE_DICTIONARY` following v2. Later on, `get_decoder` will use `DictDecoder` for this\n        // case.\n        if encoding == Encoding::PLAIN_DICTIONARY {\n            encoding = Encoding::RLE_DICTIONARY;\n        }\n\n        self.num_values_in_page = page_value_count;\n        self.check_dictionary(&encoding);\n\n        let mut page_buffer = page_data;\n\n        let bit_width = log2(self.desc.max_rep_level() as u64 + 1) as u8;\n        let mut rl_decoder = LevelDecoder::new(Arc::clone(&self.desc), bit_width, true);\n        let offset = rl_decoder.set_data(page_value_count, &page_buffer);\n        self.rep_level_decoder = Some(rl_decoder);\n        page_buffer = page_buffer.slice(offset);\n\n        let bit_width = log2(self.desc.max_def_level() as u64 + 1) as u8;\n        let mut dl_decoder = LevelDecoder::new(Arc::clone(&self.desc), bit_width, true);\n        let offset = dl_decoder.set_data(page_value_count, &page_buffer);\n        self.def_level_decoder = Some(dl_decoder);\n        page_buffer = page_buffer.slice(offset);\n\n        let value_decoder = self.get_decoder(page_buffer, encoding);\n        self.value_decoder = Some(value_decoder);\n    }\n\n    /// Resets the Parquet data page for this column reader.\n    pub fn set_page_v2(\n        &mut self,\n        page_value_count: usize,\n        def_level_data: Buffer,\n        rep_level_data: Buffer,\n        value_data: Buffer,\n        encoding: Encoding,\n    ) {\n        self.num_values_in_page = page_value_count;\n        self.check_dictionary(&encoding);\n\n        let bit_width = log2(self.desc.max_rep_level() as u64 + 1) as u8;\n        let mut rl_decoder = LevelDecoder::new(Arc::clone(&self.desc), bit_width, false);\n        rl_decoder.set_data(page_value_count, &rep_level_data);\n        self.rep_level_decoder = Some(rl_decoder);\n\n        let bit_width = log2(self.desc.max_def_level() as u64 + 1) as u8;\n        let mut dl_decoder = LevelDecoder::new(Arc::clone(&self.desc), bit_width, false);\n        
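// v2 level buffers carry no length prefix (hence need_length = false), so no slicing is needed\n        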
dl_decoder.set_data(page_value_count, &def_level_data);\n        self.def_level_decoder = Some(dl_decoder);\n\n        let value_decoder = self.get_decoder(value_data, encoding);\n        self.value_decoder = Some(value_decoder);\n    }\n\n    fn check_dictionary(&mut self, encoding: &Encoding) {\n        // The column has a dictionary while the new page is of PLAIN encoding. In this case, we\n        // should eagerly decode all the dictionary indices and convert the underlying vector to a\n        // plain encoded vector.\n        if self.vector.dictionary.is_some() && *encoding == Encoding::PLAIN {\n            let new_vector = ParquetMutableVector::new(self.capacity, &self.arrow_type);\n            let old_vector = std::mem::replace(&mut self.vector, new_vector);\n            T::decode_dict(old_vector, &mut self.vector, self.bit_width);\n            debug_assert!(self.vector.dictionary.is_none());\n        }\n    }\n\n    fn get_decoder(&self, value_data: Buffer, encoding: Encoding) -> Box<dyn Decoder> {\n        get_decoder::<T>(\n            value_data,\n            encoding,\n            Arc::clone(&self.desc),\n            self.read_options,\n        )\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/read/levels.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::mem;\n\nuse super::values::Decoder;\nuse crate::{\n    common::bit::{self, read_u32, BitReader},\n    parquet::ParquetMutableVector,\n};\nuse arrow::buffer::Buffer;\nuse datafusion_comet_spark_expr::utils::unlikely;\nuse parquet::schema::types::ColumnDescPtr;\n\nconst INITIAL_BUF_LEN: usize = 16;\n\nenum Mode {\n    RLE,\n    BitPacked,\n}\n\n/// A decoder for Parquet definition & repetition levels.\npub struct LevelDecoder {\n    /// The descriptor of the column that this level decoder is associated to.\n    desc: ColumnDescPtr,\n    /// Number of bits used to represent the levels.\n    bit_width: u8,\n    /// Mode for the current run.\n    mode: Mode,\n    /// Total number of values (including both null and non-null) to be decoded.\n    num_values: usize,\n    /// The current value in a RLE run. Unused if BitPacked.\n    current_value: i32,\n    /// The number of total values in the current RLE run. Unused if BitPacked.\n    current_count: usize,\n    /// The current buffer used in a BitPacked run. Unused if RLE.\n    /// This will be resized if the total number of values in the BitPacked run is larger than its\n    /// capacity.\n    current_buffer: Vec<i32>, // TODO: double check this\n    /// The index into `current_buffer` in a BitPacked run. Unused if RLE.\n    current_buffer_idx: usize,\n    /// A bit reader wrapping the input buffer for levels.\n    bit_reader: Option<BitReader>,\n    /// Whether we need to read the length of data. This is typically true for Parquet page V1, but\n    /// not for V2, since it uses separate buffer for definition & repetition levels.\n    need_length: bool,\n}\n\nimpl LevelDecoder {\n    pub fn new(desc: ColumnDescPtr, bit_width: u8, need_length: bool) -> Self {\n        Self {\n            desc,\n            bit_width,\n            mode: Mode::RLE,\n            num_values: 0,\n            current_value: 0,\n            current_count: 0,\n            current_buffer: vec![0; INITIAL_BUF_LEN],\n            current_buffer_idx: 0,\n            bit_reader: None,\n            need_length,\n        }\n    }\n\n    /// Sets data for this level decoder, and returns total number of bytes consumed. This is used\n    /// for reading DataPage v1 levels.\n    pub fn set_data(&mut self, page_value_count: usize, page_data: &Buffer) -> usize {\n        self.num_values = page_value_count;\n        if self.bit_width == 0 {\n            // Special case where the page doesn't have encoded rl/dl data. 
Here we'll treat it as\n            // an RLE run of `page_value_count` number of 0s.\n            self.mode = Mode::RLE;\n            self.current_count = page_value_count;\n            0\n        } else if self.need_length {\n            let u32_size = mem::size_of::<u32>();\n            let data_size = read_u32(page_data.as_slice()) as usize;\n            self.bit_reader = Some(BitReader::new(page_data.slice(u32_size), data_size));\n            u32_size + data_size\n        } else {\n            // No need to read length, just read the whole buffer\n            self.bit_reader = Some(BitReader::new_all(page_data.to_owned()));\n            0\n        }\n    }\n\n    /// Reads a batch of `total` values into `vector`. The value decoding is done by\n    /// `value_decoder`.\n    pub fn read_batch(\n        &mut self,\n        total: usize,\n        vector: &mut ParquetMutableVector,\n        value_decoder: &mut dyn Decoder,\n    ) {\n        let mut left = total;\n        while left > 0 {\n            if unlikely(self.current_count == 0) {\n                self.read_next_group();\n            }\n\n            debug_assert!(self.current_count > 0);\n\n            let n = ::std::cmp::min(left, self.current_count);\n            let max_def_level = self.desc.max_def_level();\n\n            match self.mode {\n                Mode::RLE => {\n                    if self.current_value as i16 == max_def_level {\n                        bit::set_bits(vector.validity_buffer.as_slice_mut(), vector.num_values, n);\n                        value_decoder.read_batch(vector, n);\n                        vector.num_values += n;\n                    } else {\n                        vector.put_nulls(n);\n                    }\n                }\n                Mode::BitPacked => {\n                    for i in 0..n {\n                        if self.current_buffer[self.current_buffer_idx + i] == max_def_level as i32\n                        {\n                            bit::set_bit(vector.validity_buffer.as_slice_mut(), vector.num_values);\n                            value_decoder.read(vector);\n                            vector.num_values += 1;\n                        } else {\n                            vector.put_null();\n                        }\n                    }\n                    self.current_buffer_idx += n;\n                }\n            }\n\n            left -= n;\n            self.current_count -= n;\n        }\n    }\n\n    /// Skips a batch of `total` values. 
The value decoding is done by `value_decoder`.\n    pub fn skip_batch(\n        &mut self,\n        total: usize,\n        vector: &mut ParquetMutableVector,\n        value_decoder: &mut dyn Decoder,\n        put_nulls: bool,\n    ) {\n        let mut skip = total;\n        while skip > 0 {\n            if unlikely(self.current_count == 0) {\n                self.read_next_group();\n            }\n\n            debug_assert!(self.current_count > 0);\n\n            let n = ::std::cmp::min(skip, self.current_count);\n            let max_def_level = self.desc.max_def_level();\n\n            match self.mode {\n                Mode::RLE => {\n                    if self.current_value as i16 == max_def_level {\n                        value_decoder.skip_batch(n);\n                    }\n                }\n                Mode::BitPacked => {\n                    let mut num_skips = 0;\n                    for i in 0..n {\n                        if self.current_buffer[self.current_buffer_idx + i] == max_def_level as i32\n                        {\n                            num_skips += 1;\n                        }\n                    }\n                    value_decoder.skip_batch(num_skips);\n                    self.current_buffer_idx += n;\n                }\n            }\n            if put_nulls {\n                vector.put_nulls(n);\n            }\n\n            skip -= n;\n            self.current_count -= n;\n        }\n    }\n\n    /// Loads the next group from this RLE/BitPacked hybrid reader.\n    fn read_next_group(&mut self) {\n        let bit_reader = self.bit_reader.as_mut().expect(\"bit_reader should be set\");\n        if let Some(indicator_value) = bit_reader.get_vlq_int() {\n            self.mode = if indicator_value & 1 == 1 {\n                Mode::BitPacked\n            } else {\n                Mode::RLE\n            };\n\n            match self.mode {\n                Mode::BitPacked => {\n                    self.current_count = ((indicator_value >> 1) * 8) as usize;\n                    if self.current_buffer.len() < self.current_count {\n                        self.current_buffer.resize(self.current_count, 0);\n                    }\n                    self.current_buffer_idx = 0;\n                    bit_reader.get_batch(\n                        &mut self.current_buffer[..self.current_count],\n                        self.bit_width as usize,\n                    );\n                }\n                Mode::RLE => {\n                    // RLE\n                    self.current_count = (indicator_value >> 1) as usize;\n                    let value_width = (self.bit_width as usize).div_ceil(8);\n                    self.current_value = bit_reader\n                        .get_aligned::<i32>(value_width)\n                        .expect(\"current value should be set\");\n                }\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/read/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub mod column;\npub mod levels;\npub mod values;\n\npub use column::ColumnReader;\nuse parquet::schema::types::ColumnDescPtr;\n\nuse super::ParquetMutableVector;\nuse crate::common::bit::{self, BitReader};\nuse arrow::buffer::Buffer;\nuse bytes::Buf;\n\n#[derive(Clone, Copy)]\npub struct ReadOptions {\n    // Whether to read legacy dates/timestamps as it is. If false, throw exceptions.\n    pub(crate) use_legacy_date_timestamp_or_ntz: bool,\n}\n\n/// Internal states for PLAIN decoder. Used in combination of `PlainDecoding`.\npub struct PlainDecoderInner {\n    /// The input buffer containing values to be decoded\n    data: Buffer,\n\n    /// The current offset in `data`, in bytes.\n    offset: usize,\n\n    /// Reads `data` bit by bit, used if `T` is [`BoolType`].\n    bit_reader: BitReader,\n\n    /// Options for reading Parquet\n    read_options: ReadOptions,\n\n    /// The Parquet column descriptor\n    desc: ColumnDescPtr,\n}\n\n/// A trait for [`super::DataType`] to implement how PLAIN encoded data is to be decoded into Arrow\n/// format given an input and output buffer.\n///\n/// The actual implementations of this trait is in `read/values.rs`.\npub trait PlainDecoding {\n    /// Decodes `num` of items from `src`, and store the result into `dst`, in Arrow format.\n    ///\n    /// Note: this assumes the `src` has data for at least `num` elements, and won't do any\n    /// bound checking. The condition MUST be guaranteed from the caller side.\n    fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize);\n\n    /// Skip `num` of items from `src`\n    ///\n    /// Note: this assumes the `src` has data for at least `num` elements, and won't do any\n    /// bound checking. The condition MUST be guaranteed from the caller side.\n    fn skip(src: &mut PlainDecoderInner, num: usize);\n}\n\npub trait PlainDictDecoding {\n    /// Eagerly decode vector `src` which must have dictionary associated. 
\n    /// Eagerly decodes vector `src`, which must have a dictionary associated with it. The\n    /// decoded values are appended to `dst`.\n    fn decode_dict(src: ParquetMutableVector, dst: &mut ParquetMutableVector, bit_width: usize) {\n        assert!(dst.dictionary.is_none());\n        assert!(src.dictionary.is_some());\n\n        let mut value_buf = src.value_buffer.as_slice();\n        let validity_buf = src.validity_buffer.as_slice();\n        let dictionary = src.dictionary.as_ref().unwrap();\n\n        for i in 0..src.num_values {\n            if bit::get_bit(validity_buf, i) {\n                // non-null value: look up the value position and copy its value into `dst`\n                let val_idx = value_buf.get_u32_le();\n                Self::decode_dict_one(i, val_idx as usize, dictionary, dst, bit_width);\n                dst.num_values += 1;\n            } else {\n                value_buf.advance(4);\n                dst.put_null();\n            }\n        }\n\n        dst.validity_buffer = src.validity_buffer;\n    }\n\n    /// Decodes a single value from `src`, whose position in the dictionary indices (i.e., keys)\n    /// is `idx` and whose position in the dictionary values is `val_idx`. The decoded value is\n    /// appended to `dst`.\n    fn decode_dict_one(\n        idx: usize,\n        val_idx: usize,\n        src: &ParquetMutableVector,\n        dst: &mut ParquetMutableVector,\n        bit_width: usize,\n    );\n}\n"
  },
  {
    "path": "native/core/src/parquet/read/values.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{marker::PhantomData, mem};\n\nuse arrow::buffer::Buffer;\nuse bytes::Buf;\nuse log::debug;\nuse parquet::{basic::Encoding, schema::types::ColumnDescPtr};\n\nuse super::{PlainDecoderInner, PlainDecoding, PlainDictDecoding, ReadOptions};\nuse crate::write_null;\nuse crate::write_val_or_null;\nuse crate::{\n    common::bit::{self, BitReader},\n    parquet::{data_type::*, ParquetMutableVector},\n};\nuse arrow::datatypes::DataType as ArrowDataType;\nuse datafusion_comet_spark_expr::utils::unlikely;\n\npub fn get_decoder<T: DataType>(\n    value_data: Buffer,\n    encoding: Encoding,\n    desc: ColumnDescPtr,\n    read_options: ReadOptions,\n) -> Box<dyn Decoder> {\n    let decoder: Box<dyn Decoder> = match encoding {\n        Encoding::PLAIN | Encoding::PLAIN_DICTIONARY => {\n            Box::new(PlainDecoder::<T>::new(value_data, desc, read_options))\n        }\n        // This is for dictionary indices\n        Encoding::RLE_DICTIONARY => Box::new(DictDecoder::new(value_data)),\n        _ => panic!(\"Unsupported encoding: {encoding}\"),\n    };\n    decoder\n}\n\n/// A Parquet decoder for values within a Parquet data page.\npub trait Decoder {\n    /// Consumes a single value from the decoder and stores it into `dst`.\n    ///\n    /// # Preconditions\n    ///\n    /// * `dst` have enough length to hold at least one value.\n    /// * `data` of this decoder should have enough bytes left to be decoded.\n    fn read(&mut self, dst: &mut ParquetMutableVector);\n\n    /// Consumes a batch of `num` values from the data and stores  them to `dst`.\n    ///\n    /// # Preconditions\n    ///\n    /// * `dst` should have length >= `num * T::type_size()` .\n    /// * `data` of this decoder should have >= `num * T::type_size()` bytes left to be decoded.\n    fn read_batch(&mut self, dst: &mut ParquetMutableVector, num: usize);\n\n    /// Skips a batch of `num` values from the data.\n    ///\n    /// # Preconditions\n    ///\n    /// * `data` of this decoder should have >= `num * T::type_size()` bytes left to be decoded.\n    fn skip_batch(&mut self, num: usize);\n\n    /// Returns the encoding for this decoder.\n    fn encoding(&self) -> Encoding;\n}\n\n/// The switch off date between Julian and Gregorian calendar. See\n///   https://docs.oracle.com/javase/7/docs/api/java/util/GregorianCalendar.html\nconst JULIAN_GREGORIAN_SWITCH_OFF_DAY: i32 = -141427;\n\n/// The switch off timestamp (in micros) between Julian and Gregorian calendar. 
/// The switch-off timestamp (in micros) between the Julian and Gregorian calendars. See\n///   https://docs.oracle.com/javase/7/docs/api/java/util/GregorianCalendar.html\nconst JULIAN_GREGORIAN_SWITCH_OFF_TS: i64 = -2208988800000000;\n\n/// See http://stackoverflow.com/questions/466321/convert-unix-timestamp-to-julian\n/// Also see Spark's `DateTimeUtils.JULIAN_DAY_OF_EPOCH`\nconst JULIAN_DAY_OF_EPOCH: i32 = 2440588;\n\n/// Number of microseconds per millisecond.\nconst MICROS_PER_MILLIS: i64 = 1000;\n\nconst MICROS_PER_DAY: i64 = 24_i64 * 60 * 60 * 1000 * 1000;\n\npub struct PlainDecoder<T: DataType> {\n    /// Internal states for this decoder.\n    inner: PlainDecoderInner,\n\n    /// Marker to allow `T` in the generic parameter of the struct.\n    _phantom: PhantomData<T>,\n}\n\nimpl<T: DataType> PlainDecoder<T> {\n    pub fn new(value_data: Buffer, desc: ColumnDescPtr, read_options: ReadOptions) -> Self {\n        let len = value_data.len();\n        let inner = PlainDecoderInner {\n            data: value_data.clone(),\n            offset: 0,\n            bit_reader: BitReader::new(value_data, len),\n            read_options,\n            desc,\n        };\n        Self {\n            inner,\n            _phantom: PhantomData,\n        }\n    }\n}\n\nmacro_rules! make_plain_default_impl {\n    ($($ty: ident), *) => {\n        $(\n            impl PlainDecoding for $ty {\n                /// Default implementation for PLAIN encoding. Uses `memcpy` when the physical\n                /// layout is the same between Parquet and Arrow.\n                fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n                    let src_data = &src.data;\n                    let byte_width = src.desc.type_length() as usize;\n                    let num_bytes = byte_width * num;\n                    let dst_offset = byte_width * dst.num_values;\n\n                    bit::memcpy(\n                        &src_data[src.offset..src.offset + num_bytes],\n                        &mut dst.value_buffer[dst_offset..]);\n                    src.offset += num_bytes;\n                }\n\n                fn skip(src: &mut PlainDecoderInner, num: usize) {\n                    let num_bytes = src.desc.type_length() as usize * num;\n                    src.offset += num_bytes;\n                }\n            }\n        )*\n    };\n}\n\nmake_plain_default_impl! { Int32Type, Int64Type, FloatType, DoubleType, FLBAType }\n\nmacro_rules! make_plain_dict_impl {\n    ($($ty: ident), *) => {\n        $(\n            impl PlainDictDecoding for $ty {\n                fn decode_dict_one(\n                    idx: usize,\n                    val_idx: usize,\n                    src: &ParquetMutableVector,\n                    dst: &mut ParquetMutableVector,\n                    bit_width: usize,\n                ) {\n                    let byte_width = bit_width / 8;\n                    bit::memcpy(\n                        &src.value_buffer[val_idx * byte_width..(val_idx+1) * byte_width],\n                        &mut dst.value_buffer[idx * byte_width..],\n                    );\n                }\n            }\n        )*\n    };\n}\n\nmake_plain_dict_impl! { Int8Type, UInt8Type, Int16Type, UInt16Type, Int32Type, UInt32Type }\nmake_plain_dict_impl! { Int32DateType, Int64Type, FloatType, Int64ToDecimal64Type, FLBAType }\nmake_plain_dict_impl! { DoubleType, Int64TimestampMillisType, Int64TimestampMicrosType }\n\n
make_int_variant_dict_impl {\n    ($ty:ty, $src_ty:ty, $dst_ty:ty) => {\n        impl PlainDictDecoding for $ty {\n            fn decode_dict_one(\n                idx: usize,\n                val_idx: usize,\n                src: &ParquetMutableVector,\n                dst: &mut ParquetMutableVector,\n                _: usize,\n            ) {\n                let src_ptr = src.value_buffer.as_ptr() as *const $src_ty;\n                let dst_ptr = dst.value_buffer.as_mut_ptr() as *mut $dst_ty;\n                unsafe {\n                    // SAFETY: the caller must ensure the `idx`th pointer is in bounds\n                    dst_ptr\n                        .add(idx)\n                        .write_unaligned(src_ptr.add(val_idx).read_unaligned() as $dst_ty);\n                }\n            }\n        }\n    };\n}\n\nmake_int_variant_dict_impl!(Int16ToDoubleType, i16, f64);\nmake_int_variant_dict_impl!(Int32To64Type, i32, i64);\nmake_int_variant_dict_impl!(Int32ToDecimal64Type, i32, i64);\nmake_int_variant_dict_impl!(Int32ToDoubleType, i32, f64);\nmake_int_variant_dict_impl!(Int32TimestampMicrosType, i32, i64);\nmake_int_variant_dict_impl!(FloatToDoubleType, f32, f64);\nmake_int_variant_dict_impl!(Int32DecimalType, i128, i128);\nmake_int_variant_dict_impl!(Int64DecimalType, i128, i128);\nmake_int_variant_dict_impl!(UInt64Type, u128, u128);\n\nimpl PlainDecoding for Int32DateType {\n    fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n        let src_data = &src.data;\n        let byte_width = src.desc.type_length() as usize;\n        let num_bytes = byte_width * num;\n        let dst_offset = byte_width * dst.num_values;\n\n        if !src.read_options.use_legacy_date_timestamp_or_ntz {\n            // By default we panic if the date value is before the switch date between the Julian\n            // and Gregorian calendars, which is 1582-10-15, i.e. -141427 days\n            // before the Unix epoch date 1970-01-01.\n            let mut offset = src.offset;\n            for _ in 0..num {\n                let v = &src_data[offset..offset + byte_width] as *const [u8] as *const u8\n                    as *const i32;\n\n                // TODO: optimize this further as checking value one by one is not very efficient\n                unsafe {\n                    if unlikely(v.read_unaligned() < JULIAN_GREGORIAN_SWITCH_OFF_DAY) {\n                        panic!(\n                        \"Encountered date value {}, which is before 1582-10-15 (counting backwards \\\n                         from Unix epoch date 1970-01-01), and could be ambiguous depending on \\\n                         whether a legacy Julian/Gregorian hybrid calendar is used, or a Proleptic \\\n                         Gregorian calendar is used.\",\n                        *v\n                    );\n                    }\n                }\n\n                offset += byte_width;\n            }\n        }\n\n        bit::memcpy(\n            &src_data[src.offset..src.offset + num_bytes],\n            &mut dst.value_buffer[dst_offset..],\n        );\n\n        src.offset += num_bytes;\n    }\n\n    fn skip(src: &mut PlainDecoderInner, num: usize) {\n        let num_bytes = src.desc.type_length() as usize * num;\n        src.offset += num_bytes;\n    }\n}\n\n
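/// Decodes INT32 values holding a day count into a microsecond timestamp\n/// column: each value is checked against the Julian/Gregorian switch-over day\n/// and then widened to microseconds via `MICROS_PER_DAY`.\nimpl PlainDecoding for Int32TimestampMicrosType {\n    #[inline]\n    fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n        let src_data = &src.data;\n        let byte_width = 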
src.desc.type_length() as usize;\n        let num_bytes = byte_width * num;\n\n        {\n            let mut offset = src.offset;\n            for _ in 0..num {\n                let v = src_data[offset..offset + byte_width].as_ptr() as *const i32;\n\n                // TODO: optimize this further as checking value one by one is not very efficient\n                unsafe {\n                    if unlikely(v.read_unaligned() < JULIAN_GREGORIAN_SWITCH_OFF_DAY) {\n                        panic!(\n                            \"Encountered timestamp value {}, which is before 1582-10-15 (counting \\\n                        backwards from Unix epoch date 1970-01-01), and could be ambiguous \\\n                        depending on whether a legacy Julian/Gregorian hybrid calendar is used, \\\n                        or a Proleptic Gregorian calendar is used.\",\n                            *v\n                        );\n                    }\n                }\n\n                offset += byte_width;\n            }\n        }\n\n        let mut offset = src.offset;\n        let dst_byte_width = byte_width * 2;\n        let mut dst_offset = dst_byte_width * dst.num_values;\n        for _ in 0..num {\n            let v = src_data[offset..offset + byte_width].as_ptr() as *const i32;\n            let v = unsafe { v.read_unaligned() };\n            let v = (v as i64).wrapping_mul(MICROS_PER_DAY);\n            bit::memcpy_value(&v, dst_byte_width, &mut dst.value_buffer[dst_offset..]);\n            offset += byte_width;\n            dst_offset += dst_byte_width;\n        }\n\n        src.offset += num_bytes;\n    }\n\n    #[inline]\n    fn skip(src: &mut PlainDecoderInner, num: usize) {\n        let num_bytes = src.desc.type_length() as usize * num;\n        src.offset += num_bytes;\n    }\n}\n\nimpl PlainDecoding for Int64TimestampMillisType {\n    #[inline]\n    fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n        let src_data = &src.data;\n        let byte_width = src.desc.type_length() as usize;\n        let num_bytes = byte_width * num;\n\n        if !src.read_options.use_legacy_date_timestamp_or_ntz {\n            let mut offset = src.offset;\n            for _ in 0..num {\n                unsafe {\n                    let v = &src_data[offset..offset + byte_width] as *const [u8] as *const u8\n                        as *const i64;\n                    let v = v.read_unaligned() * MICROS_PER_MILLIS;\n\n                    // TODO: optimize this further as checking value one by one is not very\n                    // efficient\n                    if unlikely(v < JULIAN_GREGORIAN_SWITCH_OFF_TS) {\n                        panic!(\n                            \"Encountered timestamp value {v}, which is before 1582-10-15 (counting \\\n                         backwards from Unix epoch date 1970-01-01), and could be ambiguous \\\n                         depending on whether a legacy Julian/Gregorian hybrid calendar is used, \\\n                         or a Proleptic Gregorian calendar is used.\"\n                        );\n                    }\n\n                    offset += byte_width;\n                }\n            }\n        }\n\n        unsafe {\n            let mut offset = src.offset;\n            let mut dst_offset = byte_width * dst.num_values;\n            for _ in 0..num {\n                let v = &src_data[offset..offset + byte_width] as *const [u8] as *const u8\n                    as *const i64;\n                let v = v.read_unaligned() * 
MICROS_PER_MILLIS;\n                bit::memcpy_value(&v, byte_width, &mut dst.value_buffer[dst_offset..]);\n                offset += byte_width;\n                dst_offset += byte_width;\n            }\n        }\n\n        src.offset += num_bytes;\n    }\n\n    #[inline]\n    fn skip(src: &mut PlainDecoderInner, num: usize) {\n        let num_bytes = src.desc.type_length() as usize * num;\n        src.offset += num_bytes;\n    }\n}\n\nimpl PlainDecoding for Int64TimestampMicrosType {\n    #[inline]\n    fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n        let src_data = &src.data;\n        let byte_width = src.desc.type_length() as usize;\n        let num_bytes = byte_width * num;\n        let dst_offset = byte_width * dst.num_values;\n\n        if !src.read_options.use_legacy_date_timestamp_or_ntz {\n            let mut offset = src.offset;\n            for _ in 0..num {\n                unsafe {\n                    let v = &src_data[offset..offset + byte_width] as *const [u8] as *const u8\n                        as *const i64;\n\n                    // TODO: optimize this further as checking value one by one is not very\n                    // efficient\n                    if unlikely(v.read_unaligned() < JULIAN_GREGORIAN_SWITCH_OFF_TS) {\n                        panic!(\n                            \"Encountered timestamp value {}, which is before 1582-10-15 (counting \\\n                         backwards from Unix epoch date 1970-01-01), and could be ambiguous \\\n                         depending on whether a legacy Julian/Gregorian hybrid calendar is used, \\\n                         or a Proleptic Gregorian calendar is used.\",\n                            *v\n                        );\n                    }\n\n                    offset += byte_width;\n                }\n            }\n        }\n\n        bit::memcpy(\n            &src_data[src.offset..src.offset + num_bytes],\n            &mut dst.value_buffer[dst_offset..],\n        );\n\n        src.offset += num_bytes;\n    }\n\n    #[inline]\n    fn skip(src: &mut PlainDecoderInner, num: usize) {\n        let num_bytes = src.desc.type_length() as usize * num;\n        src.offset += num_bytes;\n    }\n}\n\nimpl PlainDecoding for BoolType {\n    /// Specific implementation for PLAIN encoding of boolean type. Even though both Parquet and\n    /// Arrow share the same physical layout for the type (which is 1 bit for each value), we still\n    /// need to handle the byte count explicitly.\n    #[inline]\n    fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n        src.bit_reader\n            .get_bits(&mut dst.value_buffer, dst.num_values, num);\n    }\n\n    #[inline]\n    fn skip(src: &mut PlainDecoderInner, num: usize) {\n        src.bit_reader.skip_bits(num);\n    }\n}\n\n// Does it make sense to encode booleans with dictionary?\nimpl PlainDictDecoding for BoolType {\n    #[inline]\n    fn decode_dict_one(\n        idx: usize,\n        val_idx: usize,\n        src: &ParquetMutableVector,\n        dst: &mut ParquetMutableVector,\n        _: usize,\n    ) {\n        let v = bit::get_bit(src.value_buffer.as_slice(), val_idx);\n        if v {\n            bit::set_bit(dst.value_buffer.as_slice_mut(), idx);\n        } // `dst` should be zero-initialized so no need to call `unset_bit`.\n    }\n}\n\n
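// PLAIN decode for integer variants: Parquet physically stores Int8/Int16 as\n// 4-byte INT32, so the source cursor always advances 4 bytes per value while\n// `$type_width` is the byte width written to the Arrow buffer.\nmacro_rules! 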
make_int_variant_impl {\n    ($dst_type:ty, $copy_fn:ident, $type_width:expr) => {\n        impl PlainDecoding for $dst_type {\n            fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n                let dst_slice = dst.value_buffer.as_slice_mut();\n                let dst_offset = dst.num_values * $type_width;\n                $copy_fn(&src.data[src.offset..], &mut dst_slice[dst_offset..], num);\n                src.offset += 4 * num; // Parquet stores Int8/Int16 using 4 bytes\n            }\n\n            fn skip(src: &mut PlainDecoderInner, num: usize) {\n                src.offset += 4 * num; // Parquet stores Int8/Int16 using 4 bytes\n            }\n        }\n    };\n}\n\nmake_int_variant_impl!(Int8Type, copy_i32_to_i8, 1);\nmake_int_variant_impl!(Int16Type, copy_i32_to_i16, 2);\nmake_int_variant_impl!(Int16ToDoubleType, copy_i32_to_f64, 8); // Parquet stores Int16 using 4 bytes\nmake_int_variant_impl!(Int32To64Type, copy_i32_to_i64, 8);\nmake_int_variant_impl!(Int32ToDoubleType, copy_i32_to_f64, 8);\nmake_int_variant_impl!(FloatToDoubleType, copy_f32_to_f64, 8);\n\n// Unsigned types require double the width, and zeroes are written for the second half\n// because they are represented as the next-larger signed type.\nmake_int_variant_impl!(UInt8Type, copy_i32_to_u8, 2);\nmake_int_variant_impl!(UInt16Type, copy_i32_to_u16, 4);\nmake_int_variant_impl!(UInt32Type, copy_i32_to_u32, 8);\n\nmacro_rules! make_int_decimal_variant_impl {\n    ($ty:ty, $copy_fn:ident, $type_width:expr, $dst_type:ty) => {\n        impl PlainDecoding for $ty {\n            fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n                let dst_slice = dst.value_buffer.as_slice_mut();\n                let dst_offset = dst.num_values * std::mem::size_of::<$dst_type>();\n                $copy_fn(&src.data[src.offset..], &mut dst_slice[dst_offset..], num);\n\n                let src_precision = src.desc.type_precision() as u32;\n                let src_scale = ::std::cmp::max(src.desc.type_scale(), 0) as u32;\n                let (dst_precision, dst_scale) = match dst.arrow_type {\n                    ArrowDataType::Decimal128(p, s) if s >= 0 => (p as u32, s as u32),\n                    _ => unreachable!(),\n                };\n                let upper = (10 as $dst_type).pow(dst_precision);\n                let v = dst_slice[dst_offset..].as_mut_ptr() as *mut $dst_type;\n                if dst_scale > src_scale {\n                    let mul = (10 as $dst_type).pow(dst_scale - src_scale);\n                    for i in 0..num {\n                        unsafe {\n                            // SAFETY: the caller must ensure the `i`th pointer is in bounds\n                            let v = v.add(i);\n                            write_val_or_null!(v, v.read_unaligned() * mul, upper, dst, i);\n                        }\n                    }\n                } else if dst_scale < src_scale {\n                    let div = (10 as $dst_type).pow(src_scale - dst_scale);\n                    for i in 0..num {\n                        unsafe {\n                            // SAFETY: the caller must ensure the `i`th pointer is in bounds\n                            let v = v.add(i);\n                            write_val_or_null!(v, v.read_unaligned() / div, upper, dst, i);\n                        }\n                    }\n                } else if src_precision > dst_precision {\n                    for i in 0..num {\n                        unsafe {\n     
                       // SAFETY: the caller must ensure the `i`th pointer is in bounds\n                            let v = v.add(i);\n                            write_null!(v.read_unaligned(), upper, dst, i);\n                        }\n                    }\n                }\n\n                src.offset += $type_width * num;\n            }\n\n            fn skip(src: &mut PlainDecoderInner, num: usize) {\n                src.offset += $type_width * num;\n            }\n        }\n    };\n}\nmake_int_decimal_variant_impl!(Int32ToDecimal64Type, copy_i32_to_i64, 4, i64);\nmake_int_decimal_variant_impl!(Int32DecimalType, copy_i32_to_i128, 4, i128);\nmake_int_decimal_variant_impl!(Int64ToDecimal64Type, copy_i64_to_i64, 8, i64);\nmake_int_decimal_variant_impl!(Int64DecimalType, copy_i64_to_i128, 8, i128);\nmake_int_decimal_variant_impl!(UInt64Type, copy_u64_to_u128, 8, u128);\n\n#[macro_export]\nmacro_rules! write_val_or_null {\n    ($v: expr, $adjusted: expr, $upper: expr, $dst: expr, $i: expr) => {\n        let adjusted = $adjusted;\n        $v.write_unaligned(adjusted);\n        write_null!(adjusted, $upper, $dst, $i);\n    };\n}\n\n#[macro_export]\nmacro_rules! write_null {\n    ($val: expr, $upper: expr, $dst: expr, $i: expr) => {\n        if $upper <= $val {\n            bit::unset_bit($dst.validity_buffer.as_slice_mut(), $dst.num_values + $i);\n            $dst.num_nulls += 1;\n        }\n    };\n}\n\nmacro_rules! generate_cast_to_unsigned {\n    ($name: ident, $src_type:ty, $dst_type:ty, $zero_value:expr) => {\n        pub fn $name(src: &[u8], dst: &mut [u8], num: usize) {\n            debug_assert!(\n                src.len() >= num * std::mem::size_of::<$src_type>(),\n                \"Source slice is too small\"\n            );\n            debug_assert!(\n                dst.len() >= num * std::mem::size_of::<$dst_type>() * 2,\n                \"Destination slice is too small\"\n            );\n\n            let src_ptr = src.as_ptr() as *const $src_type;\n            let dst_ptr = dst.as_mut_ptr() as *mut $dst_type;\n            unsafe {\n                for i in 0..num {\n                    dst_ptr\n                        .add(2 * i)\n                        .write_unaligned(src_ptr.add(i).read_unaligned() as $dst_type);\n                    // write zeroes\n                    dst_ptr.add(2 * i + 1).write_unaligned($zero_value);\n                }\n            }\n        }\n    };\n}\n\ngenerate_cast_to_unsigned!(copy_i32_to_u32, i32, u32, 0_u32);\n\n
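// Generates a helper that copies `num` values, reading each as `$src_type`\n// and writing it (possibly widened) as `$dst_type`.\nmacro_rules! 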
generate_cast_to_signed {\n    ($name: ident, $src_type:ty, $dst_type:ty) => {\n        pub fn $name(src: &[u8], dst: &mut [u8], num: usize) {\n            debug_assert!(\n                src.len() >= num * std::mem::size_of::<$src_type>(),\n                \"Source slice is too small\"\n            );\n            debug_assert!(\n                dst.len() >= num * std::mem::size_of::<$dst_type>(),\n                \"Destination slice is too small\"\n            );\n\n            let src_ptr = src.as_ptr() as *const $src_type;\n            let dst_ptr = dst.as_mut_ptr() as *mut $dst_type;\n            unsafe {\n                for i in 0..num {\n                    dst_ptr\n                        .add(i)\n                        .write_unaligned(src_ptr.add(i).read_unaligned() as $dst_type);\n                }\n            }\n        }\n    };\n}\n\ngenerate_cast_to_signed!(copy_i32_to_i8, i32, i8);\ngenerate_cast_to_signed!(copy_i32_to_i16, i32, i16);\ngenerate_cast_to_signed!(copy_i32_to_i64, i32, i64);\ngenerate_cast_to_signed!(copy_i32_to_i128, i32, i128);\ngenerate_cast_to_signed!(copy_i32_to_f64, i32, f64);\ngenerate_cast_to_signed!(copy_i64_to_i64, i64, i64);\ngenerate_cast_to_signed!(copy_i64_to_i128, i64, i128);\ngenerate_cast_to_signed!(copy_u64_to_u128, u64, u128);\ngenerate_cast_to_signed!(copy_f32_to_f64, f32, f64);\n// even for u8/u16, need to copy full i16/i32 width for Spark compatibility\ngenerate_cast_to_signed!(copy_i32_to_u8, i32, i16);\ngenerate_cast_to_signed!(copy_i32_to_u16, i32, i32);\n\n// Shared implementation for variants of Binary type\nmacro_rules! make_plain_binary_impl {\n    ($($ty: ident), *) => {\n        $(\n            impl PlainDecoding for $ty {\n                fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n                    let src_data = &src.data;\n                    let mut src_offset = src.offset;\n\n                    let mut offset_offset = dst.num_values * 4;\n                    let offset_buf = &mut dst.value_buffer.as_slice_mut();\n                    let mut offset_value = read_num_bytes!(i32, 4, &offset_buf[offset_offset..]);\n                    offset_offset += 4;\n\n                    // The actual content of a byte array is stored contiguously in the child vector\n                    let child = &mut dst.children[0];\n                    let mut value_offset = child.num_values; // num_values == num of bytes\n\n                    (0..num).for_each(|_| {\n                        let len = read_num_bytes!(i32, 4, &src_data[src_offset..]) as usize;\n                        offset_value += len as i32;\n\n                        // Copy offset for the current string value into the offset buffer\n                        bit::memcpy_value(&offset_value, 4, &mut offset_buf[offset_offset..]);\n\n                        // Reserve additional space in child value buffer if not enough\n                        let value_buf_len = child.value_buffer.len();\n\n                        if unlikely(value_buf_len < value_offset + len) {\n                            let new_capacity = ::std::cmp::max(value_offset + len, value_buf_len * 2);\n                            debug!(\"Reserving additional space ({} -> {} bytes) for value buffer\",\n                                   value_buf_len, new_capacity);\n                            child.value_buffer.resize(new_capacity);\n                        }\n\n                        // Copy the actual string content into the value buffer\n                        
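// The value bytes follow the 4-byte little-endian length prefix just read\n                        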
src_offset += mem::size_of::<u32>();\n                        bit::memcpy(\n                            &src_data[src_offset..src_offset + len],\n                            &mut child.value_buffer.as_slice_mut()[value_offset..],\n                        );\n\n                        value_offset += len;\n                        src_offset += len;\n                        offset_offset += 4;\n                    });\n\n                    src.offset = src_offset;\n                    child.num_values = value_offset;\n                }\n\n                fn skip(src: &mut PlainDecoderInner, num: usize) {\n                    let src_data = &src.data;\n                    let mut src_offset = src.offset;\n\n                    (0..num).for_each(|_| {\n                        let len = read_num_bytes!(i32, 4, &src_data[src_offset..]) as usize;\n                        src_offset += mem::size_of::<u32>();\n                        src_offset += len;\n                    });\n\n                    src.offset = src_offset;\n                }\n            }\n        )*\n    };\n}\n\nmake_plain_binary_impl! { ByteArrayType, StringType }\n\nmacro_rules! make_plain_dict_binary_impl {\n    ($($ty: ident), *) => {\n        $(\n            impl PlainDictDecoding for $ty {\n                #[inline]\n                fn decode_dict_one(\n                    idx: usize,\n                    val_idx: usize,\n                    src: &ParquetMutableVector,\n                    dst: &mut ParquetMutableVector,\n                    _: usize,\n                ) {\n                    debug_assert!(src.children.len() == 1);\n                    debug_assert!(dst.children.len() == 1);\n\n                    let src_child = &src.children[0];\n                    let dst_child = &mut dst.children[0];\n\n                    // get the offset & data for the binary value at index `val_idx`\n                    let mut start_slice = &src.value_buffer[val_idx * 4..];\n                    let start = start_slice.get_u32_le() as usize;\n                    let mut end_slice = &src.value_buffer[(val_idx + 1) * 4..];\n                    let end = end_slice.get_u32_le() as usize;\n\n                    debug_assert!(end >= start);\n\n                    let len = end - start;\n                    let curr_offset = read_num_bytes!(u32, 4, &dst.value_buffer[idx * 4..]) as usize;\n\n                    // Reserve additional space in child value buffer if not enough\n                    let value_buf_len = dst_child.value_buffer.len();\n\n                    if unlikely(value_buf_len < curr_offset + len) {\n                        let new_capacity = ::std::cmp::max(curr_offset + len, value_buf_len * 2);\n                        debug!(\"Reserving additional space ({} -> {} bytes) for value buffer \\\n                                during dictionary fallback\", value_buf_len,\n                               new_capacity);\n                        dst_child.value_buffer.resize(new_capacity);\n                    }\n\n                    bit::memcpy(\n                        &src_child.value_buffer[start..end],\n                        &mut dst_child.value_buffer[curr_offset..],\n                    );\n\n                    bit::memcpy_value(\n                        &((curr_offset + len) as u32),\n                        4,\n                        &mut dst.value_buffer[(idx + 1) * 4..],\n                    );\n\n                    dst_child.num_values += len;\n                }\n            }\n        )*\n    
};\n}\n\nmake_plain_dict_binary_impl! { ByteArrayType, StringType }\n\nmacro_rules! make_plain_decimal_int_impl {\n    ($($ty: ident; $dst_type:ty), *) => {\n        $(\n            impl PlainDecoding for $ty {\n                fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n                    let num_bytes = std::mem::size_of::<$dst_type>();\n                    let byte_width = src.desc.type_length() as usize;\n                    let num_bits = num_bytes.saturating_sub(byte_width) * 8;\n\n                    let src_data = &src.data[src.offset..];\n                    let dst_data = &mut dst.value_buffer[dst.num_values * num_bytes..];\n\n                    let mut src_offset = 0;\n\n                    let src_precision = src.desc.type_precision() as u32;\n                    let src_scale = ::std::cmp::max(src.desc.type_scale(), 0) as u32;\n                    let (dst_precision, dst_scale) = match dst.arrow_type {\n                        ArrowDataType::Decimal128(p, s) if s >= 0 => (p as u32, s as u32),\n                        _ => (src_precision, src_scale),\n                    };\n                    let upper = (10 as $dst_type).pow(dst_precision);\n                    let mul_div = (10 as $dst_type).pow(dst_scale.abs_diff(src_scale));\n\n                    for i in 0..num {\n                        let mut unscaled: $dst_type = 0;\n                        for _ in 0..byte_width {\n                            unscaled = unscaled << 8 | src_data[src_offset] as $dst_type;\n                            src_offset += 1;\n                        }\n                        unscaled = (unscaled << num_bits) >> num_bits;\n                        if dst_scale > src_scale {\n                            unscaled *= mul_div;\n                        } else if dst_scale < src_scale {\n                            unscaled /= mul_div;\n                        }\n                        bit::memcpy_value(&unscaled, num_bytes, &mut dst_data[i * num_bytes..]);\n                        if src_precision > dst_precision {\n                            write_null!(unscaled, upper, dst, i);\n                        }\n                    }\n\n                    src.offset += num * byte_width;\n                }\n\n                fn skip(src: &mut PlainDecoderInner, num: usize) {\n                    let num_bytes_to_skip = num * src.desc.type_length() as usize;\n                    src.offset += num_bytes_to_skip;\n                }\n            }\n\n            impl PlainDictDecoding for $ty {\n                #[inline]\n                fn decode_dict_one(_: usize, val_idx: usize, src: &ParquetMutableVector, dst: &mut ParquetMutableVector, _: usize) {\n                    let num_bytes = std::mem::size_of::<$dst_type>();\n                    bit::memcpy(\n                        &src.value_buffer[val_idx * num_bytes..(val_idx + 1) * num_bytes],\n                        &mut dst.value_buffer[dst.num_values * num_bytes..],\n                    );\n                }\n            }\n        )*\n    };\n}\n\nmake_plain_decimal_int_impl!(FLBADecimal32Type; i32, FLBADecimal64Type; i64, FLBADecimalType; i128);\n\n// Int96 contains 12 bytes\nconst INT96_SRC_BYTE_WIDTH: usize = 12;\n// We convert INT96 to micros and store using i64\nconst INT96_DST_BYTE_WIDTH: usize = 8;\n\nfn int96_to_microsecond(v: &[u8]) -> i64 {\n    let nanos = &v[..INT96_DST_BYTE_WIDTH] as *const [u8] as *const u8 as *const i64;\n    let day = &v[INT96_DST_BYTE_WIDTH..] 
as *const [u8] as *const u8 as *const i32;\n\n    unsafe {\n        ((day.read_unaligned() - JULIAN_DAY_OF_EPOCH) as i64)\n            .wrapping_mul(MICROS_PER_DAY)\n            .wrapping_add(nanos.read_unaligned() / 1000)\n    }\n}\n\n/// Decodes timestamps represented as INT96 into i64 with micros precision.\nimpl PlainDecoding for Int96TimestampMicrosType {\n    #[inline]\n    fn decode(src: &mut PlainDecoderInner, dst: &mut ParquetMutableVector, num: usize) {\n        let src_data = &src.data;\n\n        if !src.read_options.use_legacy_date_timestamp_or_ntz {\n            let mut offset = src.offset;\n            for _ in 0..num {\n                // TODO: optimize this further as checking value one by one is not very efficient\n                let micros = int96_to_microsecond(&src_data[offset..offset + INT96_SRC_BYTE_WIDTH]);\n\n                if unlikely(micros < JULIAN_GREGORIAN_SWITCH_OFF_TS) {\n                    panic!(\n                        \"Encountered timestamp value {micros}, which is before 1582-10-15 (counting \\\n                         backwards from Unix epoch date 1970-01-01), and could be ambiguous \\\n                         depending on whether a legacy Julian/Gregorian hybrid calendar is used, \\\n                         or a Proleptic Gregorian calendar is used.\"\n                    );\n                }\n\n                offset += INT96_SRC_BYTE_WIDTH;\n            }\n        }\n\n        let mut offset = src.offset;\n        let mut dst_offset = INT96_DST_BYTE_WIDTH * dst.num_values;\n        for _ in 0..num {\n            let micros = int96_to_microsecond(&src_data[offset..offset + INT96_SRC_BYTE_WIDTH]);\n\n            bit::memcpy_value(\n                &micros,\n                INT96_DST_BYTE_WIDTH,\n                &mut dst.value_buffer[dst_offset..],\n            );\n\n            offset += INT96_SRC_BYTE_WIDTH;\n            dst_offset += INT96_DST_BYTE_WIDTH;\n        }\n\n        src.offset = offset;\n    }\n\n    #[inline]\n    fn skip(src: &mut PlainDecoderInner, num: usize) {\n        src.offset += INT96_SRC_BYTE_WIDTH * num;\n    }\n}\n\nimpl PlainDictDecoding for Int96TimestampMicrosType {\n    fn decode_dict_one(\n        _: usize,\n        val_idx: usize,\n        src: &ParquetMutableVector,\n        dst: &mut ParquetMutableVector,\n        _: usize,\n    ) {\n        let src_offset = val_idx * INT96_DST_BYTE_WIDTH;\n        let dst_offset = dst.num_values * INT96_DST_BYTE_WIDTH;\n\n        bit::memcpy(\n            &src.value_buffer[src_offset..src_offset + INT96_DST_BYTE_WIDTH],\n            &mut dst.value_buffer[dst_offset..dst_offset + INT96_DST_BYTE_WIDTH],\n        );\n    }\n}\n\nimpl<T: DataType> Decoder for PlainDecoder<T> {\n    #[inline]\n    fn read(&mut self, dst: &mut ParquetMutableVector) {\n        self.read_batch(dst, 1)\n    }\n\n    /// Default implementation for PLAIN encoding, which uses a `memcpy` to copy from Parquet to the\n    /// Arrow vector. NOTE: this only works if the Parquet physical type has the same type width as\n    /// the Arrow's physical type (e.g., Parquet INT32 vs Arrow INT32). 
For other cases, we should\n    /// have special implementations.\n    #[inline]\n    fn read_batch(&mut self, dst: &mut ParquetMutableVector, num: usize) {\n        T::decode(&mut self.inner, dst, num);\n    }\n\n    #[inline]\n    fn skip_batch(&mut self, num: usize) {\n        T::skip(&mut self.inner, num);\n    }\n\n    #[inline]\n    fn encoding(&self) -> Encoding {\n        Encoding::PLAIN\n    }\n}\n\n/// A decoder for Parquet dictionary indices, which are always of integer type and encoded with\n/// RLE/BitPacked encoding.\npub struct DictDecoder {\n    /// Number of bits used to represent dictionary indices. Must be within `[0, 64]`.\n    bit_width: usize,\n\n    /// Bit reader\n    bit_reader: BitReader,\n\n    /// Number of values left in the current RLE run\n    rle_left: usize,\n\n    /// Number of values left in the current BIT_PACKED run\n    bit_packed_left: usize,\n\n    /// Current value in the RLE run. Unused if BIT_PACKED\n    current_value: u32,\n}\n\nimpl DictDecoder {\n    pub fn new(buf: Buffer) -> Self {\n        let bit_width = buf.as_bytes()[0] as usize;\n\n        Self {\n            bit_width,\n            bit_reader: BitReader::new_all(buf.slice(1)),\n            rle_left: 0,\n            bit_packed_left: 0,\n            current_value: 0,\n        }\n    }\n}\n\nimpl DictDecoder {\n    /// Reads the header of the next RLE/BitPacked run, and updates the internal state such as the\n    /// number of values for the next run, as well as the current value in case it's RLE.\n    fn reload(&mut self) {\n        if let Some(indicator_value) = self.bit_reader.get_vlq_int() {\n            if indicator_value & 1 == 1 {\n                self.bit_packed_left = ((indicator_value >> 1) * 8) as usize;\n            } else {\n                self.rle_left = (indicator_value >> 1) as usize;\n                let value_width = self.bit_width.div_ceil(8);\n                self.current_value = self.bit_reader.get_aligned::<u32>(value_width).unwrap();\n            }\n        } else {\n            panic!(\"Can't read VLQ int from BitReader\");\n        }\n    }\n}\n\nimpl Decoder for DictDecoder {\n    fn read(&mut self, dst: &mut ParquetMutableVector) {\n        let dst_slice = dst.value_buffer.as_slice_mut();\n        let dst_offset = dst.num_values * 4;\n\n        // We've finished the current run. Now load the next.\n        if self.rle_left == 0 && self.bit_packed_left == 0 {\n            self.reload();\n        }\n\n        let value = if self.rle_left > 0 {\n            self.rle_left -= 1;\n            self.current_value\n        } else {\n            self.bit_packed_left -= 1;\n            self.bit_reader.get_u32_value(self.bit_width)\n        };\n\n        // Directly copy the value into the destination buffer\n        unsafe {\n            let dst = &mut dst_slice[dst_offset..] as *mut [u8] as *mut u8 as *mut u32;\n            *dst = value;\n        }\n    }\n\n
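    /// Reads `num` dictionary indices into `dst`, alternating between RLE and\n    /// bit-packed runs and reloading the next run header whenever the current\n    /// run is exhausted.\n    fn read_batch(&mut self, dst: &mut ParquetMutableVector, num: usize) {\n        let mut values_read = 0;\n        let dst_slice = dst.value_buffer.as_slice_mut();\n        let mut dst_offset = dst.num_values * 4;\n\n        while values_read < num {\n            let num_to_read = num - values_read;\n            let mut dst = &mut dst_slice[dst_offset..] 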
as *mut [u8] as *mut u8 as *mut u32;\n\n            if self.rle_left > 0 {\n                let n = std::cmp::min(num_to_read, self.rle_left);\n                unsafe {\n                    // Copy the current RLE value into the destination buffer.\n                    for _ in 0..n {\n                        *dst = self.current_value;\n                        dst = dst.offset(1);\n                    }\n                    dst_offset += 4 * n;\n                }\n                self.rle_left -= n;\n                values_read += n;\n            } else if self.bit_packed_left > 0 {\n                let n = std::cmp::min(num_to_read, self.bit_packed_left);\n                unsafe {\n                    // Decode the next `n` BitPacked values into u32 and put the result directly to\n                    // `dst`.\n                    self.bit_reader.get_u32_batch(dst, n, self.bit_width);\n                }\n                dst_offset += 4 * n;\n                self.bit_packed_left -= n;\n                values_read += n;\n            } else {\n                self.reload();\n            }\n        }\n    }\n\n    fn skip_batch(&mut self, num: usize) {\n        let mut values_skipped = 0;\n\n        while values_skipped < num {\n            let num_to_skip = num - values_skipped;\n\n            if self.rle_left > 0 {\n                let n = std::cmp::min(num_to_skip, self.rle_left);\n                self.rle_left -= n;\n                values_skipped += n;\n            } else if self.bit_packed_left > 0 {\n                let n = std::cmp::min(num_to_skip, self.bit_packed_left);\n                self.bit_reader.skip_bits(n * self.bit_width);\n                self.bit_packed_left -= n;\n                values_skipped += n;\n            } else {\n                self.reload();\n            }\n        }\n    }\n\n    fn encoding(&self) -> Encoding {\n        Encoding::RLE_DICTIONARY\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use parquet::data_type::AsBytes;\n\n    #[test]\n    fn test_i32_to_i8() {\n        let source =\n            hex::decode(\"8a000000dbffffff1800000034ffffff300000001d000000abffffff37fffffff1000000\")\n                .unwrap();\n        let expected = hex::decode(\"8adb1834301dab37f1\").unwrap();\n        let num = source.len() / 4;\n        let mut dest: Vec<u8> = vec![b' '; num];\n        copy_i32_to_i8(source.as_bytes(), dest.as_mut_slice(), num);\n        assert_eq!(expected.as_bytes(), dest.as_bytes());\n    }\n\n    #[test]\n    fn test_i32_to_u8() {\n        let source =\n            hex::decode(\"8a000000dbffffff1800000034ffffff300000001d000000abffffff37fffffff1000000\")\n                .unwrap();\n        let expected = hex::decode(\"8a00dbff180034ff30001d00abff37fff100\").unwrap();\n        let num = source.len() / 4;\n        let mut dest: Vec<u8> = vec![b' '; num * 2];\n        copy_i32_to_u8(source.as_bytes(), dest.as_mut_slice(), num);\n        assert_eq!(expected.as_bytes(), dest.as_bytes());\n    }\n\n    #[test]\n    fn test_i32_to_i16() {\n        let source =\n            hex::decode(\"8a0e0000db93ffff1826000034f4ffff300200001d2b0000abe3ffff378dfffff1470000\")\n                .unwrap();\n        let expected = hex::decode(\"8a0edb93182634f430021d2babe3378df147\").unwrap();\n        let num = source.len() / 4;\n        let mut dest: Vec<u8> = vec![b' '; num * 2];\n        copy_i32_to_i16(source.as_bytes(), dest.as_mut_slice(), num);\n        assert_eq!(expected.as_bytes(), dest.as_bytes());\n    }\n\n    #[test]\n    fn 
test_i32_to_u16() {\n        let source = hex::decode(\n            \"ff7f0000008000000180000002800000038000000480000005800000068000000780000008800000\",\n        )\n        .unwrap();\n        let expected = hex::decode(\n            \"ff7f0000008000000180000002800000038000000480000005800000068000000780000008800000\",\n        )\n        .unwrap();\n        let num = source.len() / 4;\n        let mut dest: Vec<u8> = vec![b' '; num * 4];\n        copy_i32_to_u16(source.as_bytes(), dest.as_mut_slice(), num);\n        assert_eq!(expected.as_bytes(), dest.as_bytes());\n    }\n\n    #[test]\n    fn test_i32_to_u32() {\n        let source = hex::decode(\n            \"ffffff7f000000800100008002000080030000800400008005000080060000800700008008000080\",\n        )\n        .unwrap();\n        let expected = hex::decode(\"ffffff7f00000000000000800000000001000080000000000200008000000000030000800000000004000080000000000500008000000000060000800000000007000080000000000800008000000000\").unwrap();\n        let num = source.len() / 4;\n        let mut dest: Vec<u8> = vec![b' '; num * 8];\n        copy_i32_to_u32(source.as_bytes(), dest.as_mut_slice(), num);\n        assert_eq!(expected.as_bytes(), dest.as_bytes());\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/schema_adapter.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::parquet::cast_column::CometCastColumnExpr;\nuse crate::parquet::parquet_support::{spark_parquet_convert, SparkParquetOptions};\nuse arrow::datatypes::{DataType, Field, Schema, SchemaRef};\nuse datafusion::common::tree_node::{Transformed, TransformedResult, TreeNode};\nuse datafusion::common::{DataFusionError, Result as DataFusionResult};\nuse datafusion::physical_expr::expressions::Column;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::ColumnarValue;\nuse datafusion::scalar::ScalarValue;\nuse datafusion_comet_common::SparkError;\nuse datafusion_comet_spark_expr::{Cast, SparkCastOptions};\nuse datafusion_physical_expr_adapter::{\n    replace_columns_with_literals, DefaultPhysicalExprAdapterFactory, PhysicalExprAdapter,\n    PhysicalExprAdapterFactory,\n};\nuse std::collections::HashMap;\nuse std::sync::Arc;\n\n/// Factory for creating Spark-compatible physical expression adapters.\n///\n/// This factory creates adapters that rewrite expressions at planning time\n/// to inject Spark-compatible casts where needed.\n#[derive(Clone, Debug)]\npub struct SparkPhysicalExprAdapterFactory {\n    /// Spark-specific parquet options for type conversions\n    parquet_options: SparkParquetOptions,\n    /// Default values for columns that may be missing from the physical schema.\n    /// The key is the Column (containing name and index).\n    default_values: Option<HashMap<Column, ScalarValue>>,\n}\n\nimpl SparkPhysicalExprAdapterFactory {\n    /// Create a new factory with the given options.\n    pub fn new(\n        parquet_options: SparkParquetOptions,\n        default_values: Option<HashMap<Column, ScalarValue>>,\n    ) -> Self {\n        Self {\n            parquet_options,\n            default_values,\n        }\n    }\n}\n\n/// Remap physical schema field names to match logical schema field names using\n/// case-insensitive matching. 
This allows the DefaultPhysicalExprAdapter (which\n/// uses exact name matching) to correctly find columns when the parquet file has\n/// different casing than the table schema (e.g., file has \"a\" but table has \"A\").\nfn remap_physical_schema_names(\n    logical_schema: &SchemaRef,\n    physical_schema: &SchemaRef,\n) -> SchemaRef {\n    let remapped_fields: Vec<_> = physical_schema\n        .fields()\n        .iter()\n        .map(|field| {\n            if let Some(logical_field) = logical_schema\n                .fields()\n                .iter()\n                .find(|lf| lf.name().eq_ignore_ascii_case(field.name()))\n            {\n                if logical_field.name() != field.name() {\n                    Arc::new(Field::new(\n                        logical_field.name(),\n                        field.data_type().clone(),\n                        field.is_nullable(),\n                    ))\n                } else {\n                    Arc::clone(field)\n                }\n            } else {\n                Arc::clone(field)\n            }\n        })\n        .collect();\n\n    Arc::new(Schema::new(remapped_fields))\n}\n\n/// Check if a specific column name has duplicate matches in the physical schema\n/// (case-insensitive). Returns the error info if so.\nfn check_column_duplicate(col_name: &str, physical_schema: &SchemaRef) -> Option<(String, String)> {\n    let matches: Vec<&str> = physical_schema\n        .fields()\n        .iter()\n        .filter(|pf| pf.name().eq_ignore_ascii_case(col_name))\n        .map(|pf| pf.name().as_str())\n        .collect();\n    if matches.len() > 1 {\n        // Include brackets to match the format expected by ShimSparkErrorConverter\n        Some((col_name.to_string(), format!(\"[{}]\", matches.join(\", \"))))\n    } else {\n        None\n    }\n}\n\nimpl PhysicalExprAdapterFactory for SparkPhysicalExprAdapterFactory {\n    fn create(\n        &self,\n        logical_file_schema: SchemaRef,\n        physical_file_schema: SchemaRef,\n    ) -> DataFusionResult<Arc<dyn PhysicalExprAdapter>> {\n        // When case-insensitive, remap physical schema field names to match logical\n        // field names. The DefaultPhysicalExprAdapter uses exact name matching, so\n        // without this remapping, columns like \"a\" won't match logical \"A\" and will\n        // be filled with nulls.\n        //\n        // We also build a reverse map (logical name -> physical name) so that after\n        // the default adapter produces expressions, we can remap column names back\n        // to the original physical names. 
This is necessary because downstream code\n        // (reassign_expr_columns) looks up columns by name in the actual stream\n        // schema, which uses the original physical file column names.\n        let (adapted_physical_schema, logical_to_physical_names, original_physical_schema) =\n            if !self.parquet_options.case_sensitive {\n                let logical_to_physical: HashMap<String, String> = logical_file_schema\n                    .fields()\n                    .iter()\n                    .filter_map(|logical_field| {\n                        physical_file_schema\n                            .fields()\n                            .iter()\n                            .find(|pf| {\n                                pf.name().eq_ignore_ascii_case(logical_field.name())\n                                    && pf.name() != logical_field.name()\n                            })\n                            .map(|pf| (logical_field.name().clone(), pf.name().clone()))\n                    })\n                    .collect();\n                let remapped =\n                    remap_physical_schema_names(&logical_file_schema, &physical_file_schema);\n                (\n                    remapped,\n                    if logical_to_physical.is_empty() {\n                        None\n                    } else {\n                        Some(logical_to_physical)\n                    },\n                    // Keep original physical schema for per-column duplicate detection\n                    Some(Arc::clone(&physical_file_schema)),\n                )\n            } else {\n                (Arc::clone(&physical_file_schema), None, None)\n            };\n\n        let default_factory = DefaultPhysicalExprAdapterFactory;\n        let default_adapter = default_factory.create(\n            Arc::clone(&logical_file_schema),\n            Arc::clone(&adapted_physical_schema),\n        )?;\n\n        Ok(Arc::new(SparkPhysicalExprAdapter {\n            logical_file_schema,\n            physical_file_schema: adapted_physical_schema,\n            parquet_options: self.parquet_options.clone(),\n            default_values: self.default_values.clone(),\n            default_adapter,\n            logical_to_physical_names,\n            original_physical_schema,\n        }))\n    }\n}\n\n/// Spark-compatible physical expression adapter.\n///\n/// This adapter rewrites expressions at planning time to:\n/// 1. Replace references to missing columns with default values or nulls\n/// 2. Replace standard DataFusion cast expressions with Spark-compatible casts\n/// 3. 
Handle case-insensitive column matching\n#[derive(Debug)]\nstruct SparkPhysicalExprAdapter {\n    /// The logical schema expected by the query\n    logical_file_schema: SchemaRef,\n    /// The physical schema of the actual file being read\n    physical_file_schema: SchemaRef,\n    /// Spark-specific options for type conversions\n    parquet_options: SparkParquetOptions,\n    /// Default values for missing columns (keyed by Column)\n    default_values: Option<HashMap<Column, ScalarValue>>,\n    /// The default DataFusion adapter to delegate standard handling to\n    default_adapter: Arc<dyn PhysicalExprAdapter>,\n    /// Mapping from logical column names to original physical column names,\n    /// used for case-insensitive mode where names differ in casing.\n    /// After the default adapter rewrites expressions using the remapped\n    /// physical schema (with logical names), we need to restore the original\n    /// physical names so that downstream reassign_expr_columns can find\n    /// columns in the actual stream schema.\n    logical_to_physical_names: Option<HashMap<String, String>>,\n    /// The original (un-remapped) physical schema, kept for per-column duplicate\n    /// detection in case-insensitive mode. Only set when `!case_sensitive`.\n    original_physical_schema: Option<SchemaRef>,\n}\n\nimpl PhysicalExprAdapter for SparkPhysicalExprAdapter {\n    fn rewrite(&self, expr: Arc<dyn PhysicalExpr>) -> DataFusionResult<Arc<dyn PhysicalExpr>> {\n        // In case-insensitive mode, check if any Column in this expression references\n        // a field with multiple case-insensitive matches in the physical schema.\n        // Only the columns actually referenced trigger the error (not the whole schema).\n        if let Some(orig_physical) = &self.original_physical_schema {\n            // Walk the expression tree to find Column references\n            let mut duplicate_err: Option<DataFusionError> = None;\n            let _ = Arc::<dyn PhysicalExpr>::clone(&expr).transform(|e| {\n                if let Some(col) = e.as_any().downcast_ref::<Column>() {\n                    if let Some((req, matched)) = check_column_duplicate(col.name(), orig_physical)\n                    {\n                        duplicate_err = Some(DataFusionError::External(Box::new(\n                            SparkError::DuplicateFieldCaseInsensitive {\n                                required_field_name: req,\n                                matched_fields: matched,\n                            },\n                        )));\n                    }\n                }\n                Ok(Transformed::no(e))\n            });\n            if let Some(err) = duplicate_err {\n                return Err(err);\n            }\n        }\n\n        // First let the default adapter handle column remapping, missing columns,\n        // and simple scalar type casts. 
Then replace DataFusion's CastColumnExpr\n        // with Spark-compatible equivalents.\n        //\n        // The default adapter may fail for complex nested type casts (List, Map).\n        // In that case, fall back to wrapping everything ourselves.\n        let expr = self.replace_missing_with_defaults(expr)?;\n        let expr = match self.default_adapter.rewrite(Arc::clone(&expr)) {\n            Ok(rewritten) => {\n                // Replace references to missing columns with default values\n                // Replace DataFusion's CastColumnExpr with either:\n                // - CometCastColumnExpr (for Struct/List/Map, uses spark_parquet_convert)\n                // - Spark Cast (for simple scalar types)\n                rewritten\n                    .transform(|e| self.replace_with_spark_cast(e))\n                    .data()?\n            }\n            Err(e) => {\n                // Default adapter failed (likely complex nested type cast).\n                // Handle all type mismatches ourselves using spark_parquet_convert.\n                log::debug!(\"Default schema adapter error: {}\", e);\n                self.wrap_all_type_mismatches(expr)?\n            }\n        };\n\n        // For case-insensitive mode: remap column names from logical back to\n        // original physical names. The default adapter was given a remapped\n        // physical schema (with logical names) so it could find columns. But\n        // downstream code (reassign_expr_columns) looks up columns by name in\n        // the actual parquet stream schema, which uses the original physical names.\n        let expr = if let Some(name_map) = &self.logical_to_physical_names {\n            expr.transform(|e| {\n                if let Some(col) = e.as_any().downcast_ref::<Column>() {\n                    if let Some(physical_name) = name_map.get(col.name()) {\n                        return Ok(Transformed::yes(Arc::new(Column::new(\n                            physical_name,\n                            col.index(),\n                        ))));\n                    }\n                }\n                Ok(Transformed::no(e))\n            })\n            .data()?\n        } else {\n            expr\n        };\n\n        Ok(expr)\n    }\n}\n\nimpl SparkPhysicalExprAdapter {\n    /// Wrap ALL Column expressions that have type mismatches with CometCastColumnExpr.\n    /// This is the fallback path when the default adapter fails (e.g., for complex\n    /// nested type casts like List<Struct> or Map). Uses `spark_parquet_convert`\n    /// under the hood for the actual type conversion.\n    fn wrap_all_type_mismatches(\n        &self,\n        expr: Arc<dyn PhysicalExpr>,\n    ) -> DataFusionResult<Arc<dyn PhysicalExpr>> {\n        expr.transform(|e| {\n            if let Some(column) = e.as_any().downcast_ref::<Column>() {\n                let col_name = column.name();\n\n                // Resolve fields by name because this is the fallback path\n                // that runs on the original expression when the default\n                // adapter fails. 
The original expression was built against\n                // the required (pruned) schema, so column indices refer to\n                // that schema — not the logical or physical file schemas.\n                // DataFusion's DefaultPhysicalExprAdapter::resolve_physical_column\n                // also resolves by name for the same reason.\n                let logical_field = if self.parquet_options.case_sensitive {\n                    self.logical_file_schema\n                        .fields()\n                        .iter()\n                        .find(|f| f.name() == col_name)\n                } else {\n                    self.logical_file_schema\n                        .fields()\n                        .iter()\n                        .find(|f| f.name().eq_ignore_ascii_case(col_name))\n                };\n                let physical_field = if self.parquet_options.case_sensitive {\n                    self.physical_file_schema\n                        .fields()\n                        .iter()\n                        .find(|f| f.name() == col_name)\n                } else {\n                    self.physical_file_schema\n                        .fields()\n                        .iter()\n                        .find(|f| f.name().eq_ignore_ascii_case(col_name))\n                };\n\n                // Remap the column index to the physical file schema so\n                // downstream evaluation reads the correct column from the\n                // parquet batch.\n                let physical_index = if self.parquet_options.case_sensitive {\n                    self.physical_file_schema.index_of(col_name).ok()\n                } else {\n                    self.physical_file_schema\n                        .fields()\n                        .iter()\n                        .position(|f| f.name().eq_ignore_ascii_case(col_name))\n                };\n\n                if let (Some(logical_field), Some(physical_field), Some(phys_idx)) =\n                    (logical_field, physical_field, physical_index)\n                {\n                    let remapped: Arc<dyn PhysicalExpr> = if column.index() != phys_idx {\n                        Arc::new(Column::new(col_name, phys_idx))\n                    } else {\n                        Arc::clone(&e)\n                    };\n\n                    if logical_field.data_type() != physical_field.data_type() {\n                        let cast_expr: Arc<dyn PhysicalExpr> = Arc::new(\n                            CometCastColumnExpr::new(\n                                remapped,\n                                Arc::clone(physical_field),\n                                Arc::clone(logical_field),\n                                None,\n                            )\n                            .with_parquet_options(self.parquet_options.clone()),\n                        );\n                        return Ok(Transformed::yes(cast_expr));\n                    } else if column.index() != phys_idx {\n                        return Ok(Transformed::yes(remapped));\n                    }\n                }\n            }\n            Ok(Transformed::no(e))\n        })\n        .data()\n    }\n\n    /// Replace CastColumnExpr (DataFusion's cast) with Spark's Cast expression.\n    fn replace_with_spark_cast(\n        &self,\n        expr: Arc<dyn PhysicalExpr>,\n    ) -> DataFusionResult<Transformed<Arc<dyn PhysicalExpr>>> {\n        // Check for CastColumnExpr and replace with spark_expr::Cast\n        // CastColumnExpr is in 
datafusion_physical_expr::expressions\n        if let Some(cast) = expr\n            .as_any()\n            .downcast_ref::<datafusion::physical_expr::expressions::CastColumnExpr>()\n        {\n            let child = Arc::clone(cast.expr());\n            let physical_type = cast.input_field().data_type();\n            let target_type = cast.target_field().data_type();\n\n            // For complex nested types (Struct, List, Map), Timestamp timezone\n            // mismatches, and Timestamp→Int64 (nanosAsLong), use CometCastColumnExpr\n            // with spark_parquet_convert which handles field-name-based selection,\n            // reordering, nested type casting, metadata-only timestamp timezone\n            // relabeling, and raw value reinterpretation correctly.\n            //\n            // Timestamp mismatches (e.g., Timestamp(us, None) -> Timestamp(us, Some(\"UTC\")))\n            // occur when INT96 Parquet timestamps are coerced to Timestamp(us, None) by\n            // DataFusion but the logical schema expects Timestamp(us, Some(\"UTC\")).\n            // Using Spark's Cast here would incorrectly treat the None-timezone values as\n            // local time (TimestampNTZ) and apply a timezone conversion, but the values are\n            // already in UTC. spark_parquet_convert handles this as a metadata-only change.\n            //\n            // Timestamp→Int64 occurs when Spark's `nanosAsLong` config converts\n            // TIMESTAMP(NANOS) to LongType. Spark's Cast would divide by MICROS_PER_SECOND\n            // (assuming microseconds), but the values are nanoseconds. Arrow cast correctly\n            // reinterprets the raw i64 value without conversion.\n            if matches!(\n                (physical_type, target_type),\n                (DataType::Struct(_), DataType::Struct(_))\n                    | (DataType::List(_), DataType::List(_))\n                    | (DataType::Map(_, _), DataType::Map(_, _))\n                    | (DataType::Timestamp(_, _), DataType::Timestamp(_, _))\n                    | (DataType::Timestamp(_, _), DataType::Int64)\n            ) {\n                let comet_cast: Arc<dyn PhysicalExpr> = Arc::new(\n                    CometCastColumnExpr::new(\n                        child,\n                        Arc::clone(cast.input_field()),\n                        Arc::clone(cast.target_field()),\n                        None,\n                    )\n                    .with_parquet_options(self.parquet_options.clone()),\n                );\n                return Ok(Transformed::yes(comet_cast));\n            }\n\n            // For simple scalar type casts, use Spark-compatible Cast expression\n            let mut cast_options = SparkCastOptions::new(\n                self.parquet_options.eval_mode,\n                &self.parquet_options.timezone,\n                self.parquet_options.allow_incompat,\n            );\n            cast_options.allow_cast_unsigned_ints = self.parquet_options.allow_cast_unsigned_ints;\n            cast_options.is_adapting_schema = true;\n\n            let spark_cast = Arc::new(Cast::new(\n                child,\n                target_type.clone(),\n                cast_options,\n                None,\n                None,\n            ));\n\n            return Ok(Transformed::yes(spark_cast as Arc<dyn PhysicalExpr>));\n        }\n\n        Ok(Transformed::no(expr))\n    }\n\n    /// Replace references to missing columns with default values.\n    fn replace_missing_with_defaults(\n        &self,\n        
expr: Arc<dyn PhysicalExpr>,\n    ) -> DataFusionResult<Arc<dyn PhysicalExpr>> {\n        let Some(defaults) = &self.default_values else {\n            return Ok(expr);\n        };\n\n        if defaults.is_empty() {\n            return Ok(expr);\n        }\n\n        // Build owned (column_name, default_value) pairs for columns missing from the physical file.\n        // For each default: filter to only columns absent from physical schema, then type-cast\n        // the value to match the logical schema's field type if they differ (using Spark cast semantics).\n        let missing_column_defaults: Vec<(String, ScalarValue)> = defaults\n            .iter()\n            .filter_map(|(col, val)| {\n                let col_name = col.name();\n\n                // Only include defaults for columns missing from the physical file schema\n                let is_missing = if self.parquet_options.case_sensitive {\n                    self.physical_file_schema.field_with_name(col_name).is_err()\n                } else {\n                    !self\n                        .physical_file_schema\n                        .fields()\n                        .iter()\n                        .any(|f| f.name().eq_ignore_ascii_case(col_name))\n                };\n\n                if !is_missing {\n                    return None;\n                }\n\n                // Cast value to logical schema type if needed (only if types differ)\n                let value = self\n                    .logical_file_schema\n                    .field_with_name(col_name)\n                    .ok()\n                    .filter(|field| val.data_type() != *field.data_type())\n                    .and_then(|field| {\n                        spark_parquet_convert(\n                            ColumnarValue::Scalar(val.clone()),\n                            field.data_type(),\n                            &self.parquet_options,\n                        )\n                        .ok()\n                        .and_then(|cv| match cv {\n                            ColumnarValue::Scalar(s) => Some(s),\n                            _ => None,\n                        })\n                    })\n                    .unwrap_or_else(|| val.clone());\n\n                Some((col_name.to_string(), value))\n            })\n            .collect();\n\n        let name_based: HashMap<&str, &ScalarValue> = missing_column_defaults\n            .iter()\n            .map(|(k, v)| (k.as_str(), v))\n            .collect();\n\n        if name_based.is_empty() {\n            return Ok(expr);\n        }\n\n        replace_columns_with_literals(expr, &name_based)\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use crate::parquet::parquet_support::SparkParquetOptions;\n    use crate::parquet::schema_adapter::SparkPhysicalExprAdapterFactory;\n    use arrow::array::UInt32Array;\n    use arrow::array::{Int32Array, StringArray};\n    use arrow::datatypes::SchemaRef;\n    use arrow::datatypes::{DataType, Field, Schema};\n    use arrow::record_batch::RecordBatch;\n    use datafusion::common::DataFusionError;\n    use datafusion::datasource::listing::PartitionedFile;\n    use datafusion::datasource::physical_plan::{FileGroup, FileScanConfigBuilder, ParquetSource};\n    use datafusion::datasource::source::DataSourceExec;\n    use datafusion::execution::object_store::ObjectStoreUrl;\n    use datafusion::execution::TaskContext;\n    use datafusion::physical_plan::ExecutionPlan;\n    use datafusion_comet_spark_expr::test_common::file_util::get_temp_filename;\n    use 
datafusion_comet_spark_expr::EvalMode;\n    use datafusion_physical_expr_adapter::PhysicalExprAdapterFactory;\n    use futures::StreamExt;\n    use parquet::arrow::ArrowWriter;\n    use std::fs::File;\n    use std::sync::Arc;\n\n    #[tokio::test]\n    async fn parquet_roundtrip_int_as_string() -> Result<(), DataFusionError> {\n        let file_schema = Arc::new(Schema::new(vec![\n            Field::new(\"id\", DataType::Int32, false),\n            Field::new(\"name\", DataType::Utf8, false),\n        ]));\n\n        let ids = Arc::new(Int32Array::from(vec![1, 2, 3])) as Arc<dyn arrow::array::Array>;\n        let names = Arc::new(StringArray::from(vec![\"Alice\", \"Bob\", \"Charlie\"]))\n            as Arc<dyn arrow::array::Array>;\n        let batch = RecordBatch::try_new(Arc::clone(&file_schema), vec![ids, names])?;\n\n        let required_schema = Arc::new(Schema::new(vec![\n            Field::new(\"id\", DataType::Utf8, false),\n            Field::new(\"name\", DataType::Utf8, false),\n        ]));\n\n        let _ = roundtrip(&batch, required_schema).await?;\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn parquet_roundtrip_unsigned_int() -> Result<(), DataFusionError> {\n        let file_schema = Arc::new(Schema::new(vec![Field::new(\"id\", DataType::UInt32, false)]));\n\n        let ids = Arc::new(UInt32Array::from(vec![1, 2, 3])) as Arc<dyn arrow::array::Array>;\n        let batch = RecordBatch::try_new(Arc::clone(&file_schema), vec![ids])?;\n\n        let required_schema = Arc::new(Schema::new(vec![Field::new(\"id\", DataType::Int32, false)]));\n\n        let _ = roundtrip(&batch, required_schema).await?;\n\n        Ok(())\n    }\n\n    /// Create a Parquet file containing a single batch and then read the batch back using\n    /// the specified required_schema. 
This exercises the PhysicalExprAdapter code path.\n    async fn roundtrip(\n        batch: &RecordBatch,\n        required_schema: SchemaRef,\n    ) -> Result<RecordBatch, DataFusionError> {\n        let filename = get_temp_filename();\n        let filename = filename.as_path().as_os_str().to_str().unwrap().to_string();\n        let file = File::create(&filename)?;\n        let mut writer = ArrowWriter::try_new(file, Arc::clone(&batch.schema()), None)?;\n        writer.write(batch)?;\n        writer.close()?;\n\n        let object_store_url = ObjectStoreUrl::local_filesystem();\n\n        let mut spark_parquet_options = SparkParquetOptions::new(EvalMode::Legacy, \"UTC\", false);\n        spark_parquet_options.allow_cast_unsigned_ints = true;\n\n        // Create expression adapter factory for Spark-compatible schema adaptation\n        let expr_adapter_factory: Arc<dyn PhysicalExprAdapterFactory> = Arc::new(\n            SparkPhysicalExprAdapterFactory::new(spark_parquet_options, None),\n        );\n\n        let parquet_source = ParquetSource::new(required_schema);\n\n        let files = FileGroup::new(vec![PartitionedFile::from_path(filename.to_string())?]);\n        let file_scan_config =\n            FileScanConfigBuilder::new(object_store_url, Arc::new(parquet_source))\n                .with_file_groups(vec![files])\n                .with_expr_adapter(Some(expr_adapter_factory))\n                .build();\n\n        let parquet_exec = DataSourceExec::new(Arc::new(file_scan_config));\n\n        let mut stream = parquet_exec.execute(0, Arc::new(TaskContext::default()))?;\n        stream.next().await.unwrap()\n    }\n\n    #[tokio::test]\n    async fn parquet_duplicate_fields_case_insensitive() {\n        // The Parquet file has columns \"A\", \"B\", and \"b\"; reading \"b\" in case-insensitive\n        // mode should fail with a duplicate field error matching Spark's _LEGACY_ERROR_TEMP_2093\n        let file_schema = Arc::new(Schema::new(vec![\n            Field::new(\"A\", DataType::Int32, false),\n            Field::new(\"B\", DataType::Int32, false),\n            Field::new(\"b\", DataType::Int32, false),\n        ]));\n\n        let col_a = Arc::new(Int32Array::from(vec![1, 2, 3])) as Arc<dyn arrow::array::Array>;\n        let col_b1 = Arc::new(Int32Array::from(vec![4, 5, 6])) as Arc<dyn arrow::array::Array>;\n        let col_b2 = Arc::new(Int32Array::from(vec![7, 8, 9])) as Arc<dyn arrow::array::Array>;\n        let batch =\n            RecordBatch::try_new(Arc::clone(&file_schema), vec![col_a, col_b1, col_b2]).unwrap();\n\n        let filename = get_temp_filename();\n        let filename = filename.as_path().as_os_str().to_str().unwrap().to_string();\n        let file = File::create(&filename).unwrap();\n        let mut writer = ArrowWriter::try_new(file, Arc::clone(&batch.schema()), None).unwrap();\n        writer.write(&batch).unwrap();\n        writer.close().unwrap();\n\n        // Read with case-insensitive mode, requesting column \"b\" which matches both \"B\" and \"b\"\n        let required_schema = Arc::new(Schema::new(vec![Field::new(\"b\", DataType::Int32, false)]));\n\n        let mut spark_parquet_options = SparkParquetOptions::new(EvalMode::Legacy, \"UTC\", false);\n        spark_parquet_options.case_sensitive = false;\n\n        let expr_adapter_factory: Arc<dyn PhysicalExprAdapterFactory> = Arc::new(\n            SparkPhysicalExprAdapterFactory::new(spark_parquet_options, None),\n        );\n\n        let object_store_url = ObjectStoreUrl::local_filesystem();\n        
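// The Spark adapter factory is wired into the scan below via\n        // FileScanConfigBuilder::with_expr_adapter, so resolving the requested \"b\"\n        // goes through Comet's case-insensitive matching, which sees the ambiguous\n        // match against both \"B\" and \"b\".\n        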
let parquet_source = ParquetSource::new(required_schema);\n        let files = FileGroup::new(vec![\n            PartitionedFile::from_path(filename.to_string()).unwrap()\n        ]);\n        let file_scan_config =\n            FileScanConfigBuilder::new(object_store_url, Arc::new(parquet_source))\n                .with_file_groups(vec![files])\n                .with_expr_adapter(Some(expr_adapter_factory))\n                .build();\n\n        let parquet_exec = DataSourceExec::new(Arc::new(file_scan_config));\n        let mut stream = parquet_exec\n            .execute(0, Arc::new(TaskContext::default()))\n            .unwrap();\n        let result = stream.next().await.unwrap();\n\n        // Should fail with duplicate field error\n        assert!(result.is_err());\n        let err_msg = result.unwrap_err().to_string();\n        assert!(\n            err_msg.contains(\"Found duplicate field\"),\n            \"Expected duplicate field error, got: {err_msg}\"\n        );\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/util/bit_packing.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n/// Unpack 32 values with bit width `num_bits` from `in_ptr`, and write to `out_ptr`.\n/// Return the `in_ptr` where the starting offset points to the first byte after all the\n/// bytes that were consumed.\n// TODO: may be better to make these more compact using if-else conditions.\n//  However, this may require const generics:\n//     https://github.com/rust-lang/rust/issues/44580\n//  to eliminate the branching cost.\n// TODO: we should use SIMD instructions to further optimize this. I have explored\n//    https://github.com/tantivy-search/bitpacking\n// but the layout it uses for SIMD is different from Parquet.\n// TODO: support packing as well, which is used for encoding.\npub unsafe fn unpack32(mut in_ptr: *const u32, out_ptr: *mut u32, num_bits: usize) -> *const u32 {\n    in_ptr = match num_bits {\n        0 => nullunpacker32(in_ptr, out_ptr),\n        1 => unpack1_32(in_ptr, out_ptr),\n        2 => unpack2_32(in_ptr, out_ptr),\n        3 => unpack3_32(in_ptr, out_ptr),\n        4 => unpack4_32(in_ptr, out_ptr),\n        5 => unpack5_32(in_ptr, out_ptr),\n        6 => unpack6_32(in_ptr, out_ptr),\n        7 => unpack7_32(in_ptr, out_ptr),\n        8 => unpack8_32(in_ptr, out_ptr),\n        9 => unpack9_32(in_ptr, out_ptr),\n        10 => unpack10_32(in_ptr, out_ptr),\n        11 => unpack11_32(in_ptr, out_ptr),\n        12 => unpack12_32(in_ptr, out_ptr),\n        13 => unpack13_32(in_ptr, out_ptr),\n        14 => unpack14_32(in_ptr, out_ptr),\n        15 => unpack15_32(in_ptr, out_ptr),\n        16 => unpack16_32(in_ptr, out_ptr),\n        17 => unpack17_32(in_ptr, out_ptr),\n        18 => unpack18_32(in_ptr, out_ptr),\n        19 => unpack19_32(in_ptr, out_ptr),\n        20 => unpack20_32(in_ptr, out_ptr),\n        21 => unpack21_32(in_ptr, out_ptr),\n        22 => unpack22_32(in_ptr, out_ptr),\n        23 => unpack23_32(in_ptr, out_ptr),\n        24 => unpack24_32(in_ptr, out_ptr),\n        25 => unpack25_32(in_ptr, out_ptr),\n        26 => unpack26_32(in_ptr, out_ptr),\n        27 => unpack27_32(in_ptr, out_ptr),\n        28 => unpack28_32(in_ptr, out_ptr),\n        29 => unpack29_32(in_ptr, out_ptr),\n        30 => unpack30_32(in_ptr, out_ptr),\n        31 => unpack31_32(in_ptr, out_ptr),\n        32 => unpack32_32(in_ptr, out_ptr),\n        _ => unimplemented!(),\n    };\n    in_ptr\n}\n\nunsafe fn nullunpacker32(in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    for _ in 0..32 {\n        *out = 0;\n        out = out.offset(1);\n    }\n    in_buf\n}\n\nunsafe fn unpack1_32(in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 1) & 
1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 2) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 3) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 4) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 5) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 6) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 7) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 9) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 11) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 13) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 15) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 17) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 19) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 21) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 22) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 23) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 25) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 26) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 27) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 28) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 29) & 1;\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 30) & 1;\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack2_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 22) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 2);\n    out = out.offset(1);\n    *out = 
((in_buf.read_unaligned()) >> 26) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 28) % (1u32 << 2);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n    *out = (in_buf.read_unaligned()) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 22) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 26) % (1u32 << 2);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 28) % (1u32 << 2);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack3_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 9) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 15) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 21) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 27) % (1u32 << 3);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (3 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 13) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 19) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 22) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 25) % (1u32 << 3);\n    out = out.offset(1);\n    *out = 
((in_buf.read_unaligned()) >> 28) % (1u32 << 3);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (3 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 11) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 17) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 23) % (1u32 << 3);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 26) % (1u32 << 3);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack4_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 28) % (1u32 << 4);\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 28) % (1u32 << 4);\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 28) % (1u32 << 4);\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % 
(1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 4);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 28) % (1u32 << 4);\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack5_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 15) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 25) % (1u32 << 5);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (5 - 3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 13) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 23) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 28) % (1u32 << 5);\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (5 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 11) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 21) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 26) % (1u32 << 5);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (5 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 9) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 19) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 5);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (5 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 17) % (1u32 << 5);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 22) % 
(1u32 << 5);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack6_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 6);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (6 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 22) % (1u32 << 6);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (6 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 6);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 6);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (6 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 22) % (1u32 << 6);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (6 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 6);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 6);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack7_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 7);\n    out = out.offset(1);\n    *out 
= ((in_buf.read_unaligned()) >> 14) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 21) % (1u32 << 7);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (7 - 3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 17) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 24) % (1u32 << 7);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (7 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 13) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 7);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (7 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 9) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 23) % (1u32 << 7);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (7 - 5);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 19) % (1u32 << 7);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (7 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 15) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 22) % (1u32 << 7);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (7 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 11) % (1u32 << 7);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 7);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 25;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack8_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 8);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = 
(in_buf.read_unaligned()) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 8);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 8);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 8);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 8);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 8);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 8);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 8);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 8);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack9_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 9) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 9);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (9 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 13) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 22) % (1u32 << 9);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (9 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 17) % (1u32 << 9);\n    out = 
out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (9 - 3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 21) % (1u32 << 9);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (9 - 7);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 9);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (9 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 11) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 9);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (9 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 15) % (1u32 << 9);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (9 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 19) % (1u32 << 9);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (9 - 5);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 9);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 9);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 23;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack10_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 10);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (10 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 10);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (10 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 10);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= 
((in_buf.read_unaligned()) % (1u32 << 4)) << (10 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 10);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (10 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 10);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 10);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (10 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 10);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (10 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 10);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (10 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 10);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (10 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 10);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 10);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack11_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 11);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 11) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (11 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 11);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (11 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 11);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 13) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (11 - 3);\n    out = out.offset(1);\n\n  
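  // The 3 carry bits consumed above leave the next 11-bit value starting at\n    // bit 3 of the current word; later groups repeat the pattern with larger\n    // carries (4, 5, ... bits).\n  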
  *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 11);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (11 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 11);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 15) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (11 - 5);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 11);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (11 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 11);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 17) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (11 - 7);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 11);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (11 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 11);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 19) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (11 - 9);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 9) % (1u32 << 11);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 20) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (11 - 10);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 11);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 21;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack12_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 12);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (12 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 12);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (12 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    out = 
out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 12);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (12 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 12);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (12 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 12);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (12 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 12);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (12 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 12);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (12 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 12);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (12 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 12);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack13_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 13);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 13) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (13 - 7);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (13 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 13);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = 
in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (13 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 21;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (13 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 13);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 15) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (13 - 9);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 9) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (13 - 3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 13);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (13 - 10);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (13 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 13);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 17) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 11)) << (13 - 11);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 11) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (13 - 5);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 13);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 18) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (13 - 12);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (13 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 13);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 19;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack14_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 14);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (14 - 10);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = 
in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (14 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (14 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 14);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (14 - 12);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (14 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (14 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 14);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (14 - 10);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (14 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (14 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 14);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (14 - 12);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (14 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (14 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 14);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 18;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack15_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 15);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 15) % (1u32 << 
15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 13)) << (15 - 13);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 13) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 11)) << (15 - 11);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 11) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (15 - 9);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 9) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (15 - 7);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (15 - 5);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (15 - 3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (15 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 15);\n    out = out.offset(1);\n    *out = ((in_buf.read_unaligned()) >> 16) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (15 - 14);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (15 - 12);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (15 - 10);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (15 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (15 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 21;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (15 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 
19;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (15 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 15);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 17;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack16_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n    out = out.offset(1);\n    in_buf = in_buf.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 16);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 16;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack17_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 17;\n    in_buf = in_buf.offset(1);\n    *out 
|= ((in_buf.read_unaligned()) % (1u32 << 2)) << (17 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 19;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (17 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 21;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (17 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (17 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (17 - 10);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (17 - 12);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (17 - 14);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 14) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (17 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (17 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (17 - 3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (17 - 5);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (17 - 7);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (17 - 9);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 9) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 11)) << (17 - 11);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 11) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = 
in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 13)) << (17 - 13);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 13) % (1u32 << 17);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 15)) << (17 - 15);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 15;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack18_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (18 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (18 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (18 - 12);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (18 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (18 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (18 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (18 - 10);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (18 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (18 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (18 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (18 - 12);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    
in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (18 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (18 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (18 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (18 - 10);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 18);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (18 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack19_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 19;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (19 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (19 - 12);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 12) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (19 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (19 - 5);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 11)) << (19 - 11);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 11) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 17)) << (19 - 17);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 17;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (19 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (19 - 10);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (19 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (19 - 
3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (19 - 9);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 9) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 15)) << (19 - 15);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 15;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (19 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 21;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (19 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (19 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (19 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (19 - 7);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 19);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 13)) << (19 - 13);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 13;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack20_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (20 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (20 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (20 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (20 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (20 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = 
in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (20 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (20 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (20 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (20 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (20 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (20 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (20 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (20 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (20 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (20 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 20);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (20 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack21_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 21);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 21;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (21 - 10);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 10) % (1u32 << 21);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (21 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (21 - 9);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 9) % 
(1u32 << 21);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 19)) << (21 - 19);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 19;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (21 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 21);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (21 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (21 - 7);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 21);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 17)) << (21 - 17);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 17;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (21 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 21);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (21 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (21 - 5);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 21);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 15)) << (21 - 15);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 15;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (21 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 21);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (21 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (21 - 3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 21);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 13)) << (21 - 13);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 13;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (21 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 21);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (21 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (21 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 21);\n    out = out.offset(1);\n    *out = 
(in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 11)) << (21 - 11);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 11;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack22_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 22);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (22 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (22 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 22);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (22 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (22 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 22);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (22 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (22 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 22);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (22 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (22 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 22);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (22 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (22 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 22);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (22 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (22 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 22);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (22 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (22 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 22);\n    out = out.offset(1);\n    *out 
= (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (22 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (22 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 22);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (22 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (22 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 22);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (22 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (22 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack23_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 23);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (23 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (23 - 5);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 23);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 19)) << (23 - 19);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 19;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (23 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (23 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 23);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 15)) << (23 - 15);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 15;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (23 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 23);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (23 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 11)) << (23 - 11);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 11;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (23 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 23);\n    out = out.offset(1);\n    *out = 
(in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (23 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (23 - 7);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 7) % (1u32 << 23);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 21)) << (23 - 21);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 21;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (23 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (23 - 3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 23);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 17)) << (23 - 17);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 17;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (23 - 8);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 8) % (1u32 << 23);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 22)) << (23 - 22);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 13)) << (23 - 13);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 13;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (23 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 23);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (23 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (23 - 9);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 9;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack24_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 24);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (24 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (24 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 24);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (24 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (24 - 8);\n    out = out.offset(1);\n\n    *out = 
(in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 24);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (24 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (24 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 24);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (24 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (24 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 24);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (24 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (24 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 24);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (24 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (24 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 24);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (24 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (24 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 24);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (24 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (24 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack25_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 25);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (25 
- 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 11)) << (25 - 11);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 11;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (25 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 25);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 22)) << (25 - 22);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 15)) << (25 - 15);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 15;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (25 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (25 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 25);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 19)) << (25 - 19);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 19;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (25 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (25 - 5);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 5) % (1u32 << 25);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 23)) << (25 - 23);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (25 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (25 - 9);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 9;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (25 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 25);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (25 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 13)) << (25 - 13);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 13;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (25 - 6);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 6) % (1u32 << 25);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (25 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 
<< 17)) << (25 - 17);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 17;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (25 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (25 - 3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 25);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 21)) << (25 - 21);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 21;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (25 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (25 - 7);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 7;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack26_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 26);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (26 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (26 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (26 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (26 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 26);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 22)) << (26 - 22);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (26 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (26 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (26 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 26);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (26 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (26 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (26 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (26 - 6);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 6;\n  
  in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 26);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (26 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (26 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (26 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (26 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 26);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 22)) << (26 - 22);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (26 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (26 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (26 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 26);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (26 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (26 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (26 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (26 - 6);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 6;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack27_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 27);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 22)) << (27 - 22);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 17)) << (27 - 17);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 17;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (27 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (27 - 7);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 7;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (27 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % 
(1u32 << 27);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (27 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 19)) << (27 - 19);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 19;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (27 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (27 - 9);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 9;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (27 - 4);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 4) % (1u32 << 27);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 26)) << (27 - 26);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 21)) << (27 - 21);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 21;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (27 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 11)) << (27 - 11);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 11;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (27 - 6);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 6;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (27 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 27);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 23)) << (27 - 23);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (27 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 13)) << (27 - 13);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 13;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (27 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (27 - 3);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 3) % (1u32 << 27);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 25)) << (27 - 25);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (27 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % 
(1u32 << 15)) << (27 - 15);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 15;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (27 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (27 - 5);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 5;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack28_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 28);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (28 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (28 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (28 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (28 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (28 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (28 - 4);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 4;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 28);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (28 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (28 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (28 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (28 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (28 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (28 - 4);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 4;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 28);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (28 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (28 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= 
((in_buf.read_unaligned()) % (1u32 << 16)) << (28 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (28 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (28 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (28 - 4);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 4;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 28);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (28 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (28 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (28 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (28 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (28 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (28 - 4);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 4;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack29_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 29);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 26)) << (29 - 26);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 23)) << (29 - 23);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (29 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 17)) << (29 - 17);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 17;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (29 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 11)) << (29 - 11);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 11;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (29 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (29 - 5);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 5;\n    in_buf = in_buf.offset(1);\n    *out |= 
((in_buf.read_unaligned()) % (1u32 << 2)) << (29 - 2);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 2) % (1u32 << 29);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 28)) << (29 - 28);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 25)) << (29 - 25);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 22)) << (29 - 22);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 19)) << (29 - 19);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 19;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (29 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 13)) << (29 - 13);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 13;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (29 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (29 - 7);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 7;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (29 - 4);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 4;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (29 - 1);\n    out = out.offset(1);\n\n    *out = ((in_buf.read_unaligned()) >> 1) % (1u32 << 29);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 27)) << (29 - 27);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (29 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 21)) << (29 - 21);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 21;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (29 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 15)) << (29 - 15);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 15;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (29 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (29 - 9);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 9;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (29 - 6);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 6;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (29 - 3);\n    out = 
out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 3;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack30_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 30);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 28)) << (30 - 28);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 26)) << (30 - 26);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (30 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 22)) << (30 - 22);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (30 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (30 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (30 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (30 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (30 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (30 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (30 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (30 - 6);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 6;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (30 - 4);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 4;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (30 - 2);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 2;\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) % (1u32 << 30);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 28)) << (30 - 28);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 26)) << (30 - 26);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (30 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 22)) << (30 - 22);\n    
out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (30 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (30 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (30 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (30 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (30 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (30 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (30 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (30 - 6);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 6;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (30 - 4);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 4;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (30 - 2);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 2;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack31_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = (in_buf.read_unaligned()) % (1u32 << 31);\n    out = out.offset(1);\n    *out = (in_buf.read_unaligned()) >> 31;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 30)) << (31 - 30);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 30;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 29)) << (31 - 29);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 29;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 28)) << (31 - 28);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 28;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 27)) << (31 - 27);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 27;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 26)) << (31 - 26);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 26;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 25)) << (31 - 25);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 25;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 24)) << (31 - 24);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 24;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 23)) << (31 - 23);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 23;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 22)) << (31 - 22);\n    out = 
out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 22;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 21)) << (31 - 21);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 21;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 20)) << (31 - 20);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 20;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 19)) << (31 - 19);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 19;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 18)) << (31 - 18);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 18;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 17)) << (31 - 17);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 17;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 16)) << (31 - 16);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 16;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 15)) << (31 - 15);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 15;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 14)) << (31 - 14);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 14;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 13)) << (31 - 13);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 13;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 12)) << (31 - 12);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 12;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 11)) << (31 - 11);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 11;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 10)) << (31 - 10);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 10;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 9)) << (31 - 9);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 9;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 8)) << (31 - 8);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 8;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 7)) << (31 - 7);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 7;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 6)) << (31 - 6);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 6;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 5)) << (31 - 5);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 5;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 4)) << (31 - 4);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 4;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 3)) << (31 - 3);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 3;\n    in_buf = in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 2)) << (31 - 2);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 2;\n    in_buf = 
in_buf.offset(1);\n    *out |= ((in_buf.read_unaligned()) % (1u32 << 1)) << (31 - 1);\n    out = out.offset(1);\n\n    *out = (in_buf.read_unaligned()) >> 1;\n\n    in_buf.offset(1)\n}\n\nunsafe fn unpack32_32(mut in_buf: *const u32, mut out: *mut u32) -> *const u32 {\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n    in_buf = in_buf.offset(1);\n    out = out.offset(1);\n\n    *out = in_buf.read_unaligned();\n\n    in_buf.offset(1)\n}\n"
  },
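For readers tracing the generated kernels above: every `unpackN_32` function decodes 32 values of `N` bits each, packed contiguously across little-endian `u32` words. The following is a minimal, safe reference sketch of that computation; `unpack_generic` is a hypothetical helper written for illustration only and is not part of the module.

```rust
/// Hypothetical reference implementation (illustration only): unpack 32
/// values of `num_bits` each, packed contiguously across little-endian u32
/// words -- the same layout the unrolled `unpackN_32` kernels assume.
fn unpack_generic(input: &[u32], num_bits: usize, out: &mut [u32; 32]) {
    assert!((1..=32).contains(&num_bits));
    let mask = if num_bits == 32 {
        u32::MAX
    } else {
        (1u32 << num_bits) - 1
    };
    let mut bit_pos = 0usize; // absolute bit offset into `input`
    for v in out.iter_mut() {
        let word = bit_pos / 32;
        let shift = bit_pos % 32;
        let mut value = input[word] >> shift;
        if shift + num_bits > 32 {
            // The value straddles a word boundary; pull the high bits from
            // the next word, mirroring the `*out |= (next % (1 << k)) << ...`
            // lines in the unrolled kernels.
            value |= input[word + 1] << (32 - shift);
        }
        *v = value & mask;
        bit_pos += num_bits;
    }
}
```

The unrolled variants exist because the bit width is a compile-time constant in each function, letting the compiler fold every shift and mask; the loop above trades that speed for readability.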
  {
    "path": "native/core/src/parquet/util/buffer.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{ops::Index, slice::SliceIndex, sync::Arc};\n\n/// An immutable byte buffer.\npub trait Buffer {\n    /// Returns the length (in bytes) of this buffer.\n    fn len(&self) -> usize;\n\n    /// Returns the byte array of this buffer, in range `[0, len)`.\n    fn data(&self) -> &[u8];\n\n    /// Returns whether this buffer is empty or not.\n    fn is_empty(&self) -> bool {\n        self.len() == 0\n    }\n}\n\nimpl Buffer for Vec<u8> {\n    fn len(&self) -> usize {\n        self.len()\n    }\n\n    fn data(&self) -> &[u8] {\n        self\n    }\n}\n\npub struct BufferRef {\n    inner: Arc<dyn Buffer>,\n    offset: usize,\n    len: usize,\n}\n\nimpl BufferRef {\n    pub fn new(inner: Arc<dyn Buffer>) -> Self {\n        let len = inner.len();\n        Self {\n            inner,\n            offset: 0,\n            len,\n        }\n    }\n\n    /// Returns the length of this buffer.\n    #[inline]\n    pub fn len(&self) -> usize {\n        self.len\n    }\n\n    #[inline]\n    pub fn is_empty(&self) -> bool {\n        self.len == 0\n    }\n\n    #[inline]\n    pub fn data(&self) -> &[u8] {\n        self.inner.data()\n    }\n\n    /// Creates a new byte buffer containing elements in `[offset, offset+len)`\n    #[inline]\n    pub fn slice(&self, offset: usize, len: usize) -> BufferRef {\n        assert!(\n            self.offset + offset + len <= self.inner.len(),\n            \"can't create a buffer slice with offset exceeding original \\\n             JNI array len {}, self.offset: {}, offset: {}, len: {}\",\n            self.inner.len(),\n            self.offset,\n            offset,\n            len\n        );\n\n        Self {\n            inner: Arc::clone(&self.inner),\n            offset: self.offset + offset,\n            len,\n        }\n    }\n\n    /// Creates a new byte buffer containing all elements starting from `offset` in this byte array.\n    #[inline]\n    pub fn start(&self, offset: usize) -> BufferRef {\n        assert!(\n            self.offset + offset <= self.inner.len(),\n            \"can't create a buffer slice with offset exceeding original \\\n             JNI array len {}, self.offset: {}, offset: {}\",\n            self.inner.len(),\n            self.offset,\n            offset\n        );\n        let len = self.inner.len() - offset - self.offset;\n        self.slice(offset, len)\n    }\n}\n\nimpl AsRef<[u8]> for BufferRef {\n    fn as_ref(&self) -> &[u8] {\n        let slice = self.inner.as_ref().data();\n        &slice[self.offset..self.offset + self.len]\n    }\n}\n\nimpl<Idx> Index<Idx> for BufferRef\nwhere\n    Idx: SliceIndex<[u8]>,\n{\n    type Output = Idx::Output;\n\n    fn index(&self, index: Idx) -> &Self::Output {\n        
&self.as_ref()[index]\n    }\n}\n"
  },
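A short usage sketch of the `Buffer` trait and `BufferRef` above (hypothetical, assuming the module's items are in scope): slicing only adjusts the offset/length window, while the backing array is shared through the `Arc`.

```rust
use std::sync::Arc;

fn demo() {
    // `Vec<u8>` implements the `Buffer` trait, so it can back a `BufferRef`.
    let bytes: Vec<u8> = (0u8..16).collect();
    let buf = BufferRef::new(Arc::new(bytes));
    assert_eq!(buf.len(), 16);

    // Zero-copy view of bytes [4, 12); no data is copied.
    let slice = buf.slice(4, 8);
    assert_eq!(slice.len(), 8);
    assert_eq!(slice.as_ref(), &[4u8, 5, 6, 7, 8, 9, 10, 11][..]);
    assert_eq!(slice[0], 4); // via the `Index` impl
}
```

Note that, as written above, `BufferRef::data()` returns the entire underlying array, while `as_ref()` and indexing respect the slice's window; callers needing the narrowed view should go through `as_ref()`.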
  {
    "path": "native/core/src/parquet/util/jni.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::sync::Arc;\n\nuse jni::{\n    errors::Result as JNIResult,\n    objects::{JObjectArray, JString},\n    sys::{jboolean, jint},\n    Env,\n};\n\nuse arrow::error::ArrowError;\nuse arrow::ipc::reader::StreamReader;\nuse datafusion::execution::object_store::ObjectStoreUrl;\nuse object_store::path::Path;\nuse parquet::{\n    basic::{Encoding, LogicalType, TimeUnit, Type as PhysicalType},\n    schema::types::{ColumnDescriptor, ColumnPath, PrimitiveTypeBuilder},\n};\nuse url::{ParseError, Url};\n\n/// Convert primitives from Spark side into a `ColumnDescriptor`.\n#[allow(clippy::too_many_arguments)]\npub fn convert_column_descriptor(\n    env: &mut Env,\n    physical_type_id: jint,\n    logical_type_id: jint,\n    max_dl: jint,\n    max_rl: jint,\n    bit_width: jint,\n    is_signed: jboolean,\n    type_length: jint,\n    precision: jint,\n    scale: jint,\n    time_unit: jint,\n    is_adjusted_utc: jboolean,\n    jni_path: JObjectArray,\n) -> JNIResult<ColumnDescriptor> {\n    let physical_type = convert_physical_type(physical_type_id);\n    let type_length = fix_type_length(&physical_type, type_length);\n    let logical_type = if logical_type_id >= 0 {\n        Some(convert_logical_type(\n            logical_type_id,\n            bit_width,\n            is_signed,\n            precision,\n            scale,\n            time_unit,\n            is_adjusted_utc,\n        ))\n    } else {\n        // id < 0 means there is no logical type associated\n        None\n    };\n\n    // We don't care the column name here\n    let ty = PrimitiveTypeBuilder::new(\"f\", physical_type)\n        .with_logical_type(logical_type)\n        .with_length(type_length)\n        .with_precision(precision) // Parquet crate requires to set this even with logical type\n        .with_scale(scale)\n        .build()\n        .unwrap(); // TODO: convert Parquet errot to JNI error\n    let path = convert_column_path(env, jni_path).unwrap();\n\n    let result = ColumnDescriptor::new(Arc::new(ty), max_dl as i16, max_rl as i16, path);\n    Ok(result)\n}\n\npub fn convert_encoding(ordinal: jint) -> Encoding {\n    match ordinal {\n        0 => Encoding::PLAIN,\n        1 => Encoding::RLE,\n        #[allow(deprecated)]\n        3 => Encoding::BIT_PACKED,\n        4 => Encoding::PLAIN_DICTIONARY,\n        5 => Encoding::DELTA_BINARY_PACKED,\n        6 => Encoding::DELTA_LENGTH_BYTE_ARRAY,\n        7 => Encoding::DELTA_BYTE_ARRAY,\n        8 => Encoding::RLE_DICTIONARY,\n        _ => panic!(\"Invalid Java Encoding ordinal: {ordinal}\"),\n    }\n}\n\n#[derive(Debug)]\npub struct TypePromotionInfo {\n    pub(crate) physical_type: PhysicalType,\n    pub(crate) precision: i32,\n    pub(crate) scale: 
i32,\n    pub(crate) bit_width: i32,\n}\n\nimpl TypePromotionInfo {\n    pub fn new_from_jni(\n        physical_type_id: jint,\n        precision: jint,\n        scale: jint,\n        bit_width: jint,\n    ) -> Self {\n        let physical_type = convert_physical_type(physical_type_id);\n        Self {\n            physical_type,\n            precision,\n            scale,\n            bit_width,\n        }\n    }\n\n    pub fn new(physical_type: PhysicalType, precision: i32, scale: i32, bit_width: i32) -> Self {\n        Self {\n            physical_type,\n            precision,\n            scale,\n            bit_width,\n        }\n    }\n}\n\nfn convert_column_path(env: &mut Env, path_array: JObjectArray) -> JNIResult<ColumnPath> {\n    let array_len = path_array.len(env)?;\n    let mut res: Vec<String> = Vec::new();\n    for i in 0..array_len {\n        let p = path_array.get_element(env, i)?;\n        let p: JString = unsafe { JString::from_raw(env, p.into_raw()) };\n        res.push(p.try_to_string(env)?);\n    }\n    Ok(ColumnPath::new(res))\n}\n\nfn convert_physical_type(id: jint) -> PhysicalType {\n    match id {\n        0 => PhysicalType::BOOLEAN,\n        1 => PhysicalType::INT32,\n        2 => PhysicalType::INT64,\n        3 => PhysicalType::INT96,\n        4 => PhysicalType::FLOAT,\n        5 => PhysicalType::DOUBLE,\n        6 => PhysicalType::BYTE_ARRAY,\n        7 => PhysicalType::FIXED_LEN_BYTE_ARRAY,\n        _ => panic!(\"Invalid id for Parquet physical type: {id}\"),\n    }\n}\n\nfn convert_logical_type(\n    id: jint,\n    bit_width: jint,\n    is_signed: jboolean,\n    precision: jint,\n    scale: jint,\n    time_unit: jint,\n    is_adjusted_utc: jboolean,\n) -> LogicalType {\n    match id {\n        0 => LogicalType::Integer {\n            bit_width: bit_width as i8,\n            is_signed,\n        },\n        1 => LogicalType::String,\n        2 => LogicalType::Decimal { scale, precision },\n        3 => LogicalType::Date,\n        4 => LogicalType::Timestamp {\n            is_adjusted_to_u_t_c: is_adjusted_utc,\n            unit: convert_time_unit(time_unit),\n        },\n        5 => LogicalType::Enum,\n        6 => LogicalType::Uuid,\n        _ => panic!(\"Invalid id for Parquet logical type: {id}\"),\n    }\n}\n\nfn convert_time_unit(time_unit: jint) -> TimeUnit {\n    match time_unit {\n        0 => TimeUnit::MILLIS,\n        1 => TimeUnit::MICROS,\n        2 => TimeUnit::NANOS,\n        _ => panic!(\"Invalid time unit id for Parquet: {time_unit}\"),\n    }\n}\n\n/// Fixes the type length in case it is not set (Parquet only explicitly sets it for the\n/// FIXED_LEN_BYTE_ARRAY type).\nfn fix_type_length(t: &PhysicalType, type_length: i32) -> i32 {\n    match t {\n        PhysicalType::INT32 | PhysicalType::FLOAT => 4,\n        PhysicalType::INT64 | PhysicalType::DOUBLE => 8,\n        PhysicalType::INT96 => 12,\n        _ => type_length,\n    }\n}\n\npub fn deserialize_schema(ipc_bytes: &[u8]) -> Result<arrow::datatypes::Schema, ArrowError> {\n    let reader = unsafe {\n        StreamReader::try_new(std::io::Cursor::new(ipc_bytes), None)?.with_skip_validation(true)\n    };\n    let schema = reader.schema().as_ref().clone();\n    Ok(schema)\n}\n\n// Parses the URL and returns a tuple of the object store URL and the object store path\npub fn get_file_path(url_: String) -> Result<(ObjectStoreUrl, Path), ParseError> {\n    // We define the origin of a URL as scheme + \"://\" + authority + [\"/\" + bucket]\n    let url = Url::parse(url_.as_ref()).unwrap();\n    let mut 
object_store_origin = url.scheme().to_owned();\n    let mut object_store_path = Path::from_url_path(url.path()).unwrap();\n    if object_store_origin == \"s3a\" {\n        object_store_origin = \"s3\".to_string();\n        object_store_origin.push_str(\"://\");\n        object_store_origin.push_str(url.authority());\n        object_store_origin.push('/');\n        let path_splits = url.path_segments().map(|c| c.collect::<Vec<_>>()).unwrap();\n        object_store_origin.push_str(path_splits.first().unwrap());\n        let new_path = path_splits[1..path_splits.len() - 1].join(\"/\");\n        //TODO: (ARROW NATIVE) check the use of unwrap here\n        object_store_path = Path::from_url_path(new_path.clone().as_str()).unwrap();\n    } else {\n        object_store_origin.push_str(\"://\");\n        object_store_origin.push_str(url.authority());\n        object_store_origin.push('/');\n    }\n    Ok((\n        ObjectStoreUrl::parse(object_store_origin).unwrap(),\n        object_store_path,\n    ))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_get_file_path() {\n        let inp = \"file:///comet/spark-warehouse/t1/part1=2019-01-01%2011%253A11%253A11/part-00000-84d7ed74-8f28-456c-9270-f45376eea144.c000.snappy.parquet\";\n        let expected = \"comet/spark-warehouse/t1/part1=2019-01-01 11%3A11%3A11/part-00000-84d7ed74-8f28-456c-9270-f45376eea144.c000.snappy.parquet\";\n\n        if let Ok((_obj_store_url, path)) = get_file_path(inp.to_string()) {\n            let actual = path.to_string();\n            assert_eq!(actual, expected);\n        }\n    }\n}\n"
  },
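Most of the conversions above need a live JNI `Env`, but the pure helpers can be exercised directly. A hypothetical sketch, assuming the module's items are in scope:

```rust
fn demo() -> Result<(), url::ParseError> {
    // Java-side encoding ordinals map onto the Rust parquet crate's enum.
    assert_eq!(
        convert_encoding(5),
        parquet::basic::Encoding::DELTA_BINARY_PACKED
    );

    // `get_file_path` splits a URL into an object store URL plus a relative
    // path; note that `s3a` URLs are rewritten to the `s3` scheme.
    let (_store_url, path) =
        get_file_path("file:///warehouse/t1/part-00000.parquet".to_string())?;
    assert_eq!(path.to_string(), "warehouse/t1/part-00000.parquet");
    Ok(())
}
```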
  {
    "path": "native/core/src/parquet/util/memory.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Utility methods and structs for working with memory.\n\nuse std::{\n    fmt::{Debug, Display, Formatter, Result as FmtResult},\n    io::{Result as IoResult, Write},\n    mem,\n    ops::{Index, IndexMut},\n    sync::{\n        atomic::{AtomicI64, Ordering},\n        Arc, Weak,\n    },\n};\n\n// ----------------------------------------------------------------------\n// Memory Tracker classes\n\n/// Reference counted pointer for [`MemTracker`].\npub type MemTrackerPtr = Arc<MemTracker>;\n/// Non-owning reference for [`MemTracker`].\npub type WeakMemTrackerPtr = Weak<MemTracker>;\n\n/// Struct to track memory usage information.\n#[derive(Debug)]\npub struct MemTracker {\n    // In the tuple, the first element is the current memory allocated (in bytes),\n    // and the second element is the maximum memory allocated so far (in bytes).\n    current_memory_usage: AtomicI64,\n    max_memory_usage: AtomicI64,\n}\n\nimpl MemTracker {\n    /// Creates new memory tracker.\n    #[inline]\n    pub fn new() -> MemTracker {\n        MemTracker {\n            current_memory_usage: Default::default(),\n            max_memory_usage: Default::default(),\n        }\n    }\n\n    /// Returns the current memory consumption, in bytes.\n    pub fn memory_usage(&self) -> i64 {\n        self.current_memory_usage.load(Ordering::Acquire)\n    }\n\n    /// Returns the maximum memory consumption so far, in bytes.\n    pub fn max_memory_usage(&self) -> i64 {\n        self.max_memory_usage.load(Ordering::Acquire)\n    }\n\n    /// Adds `num_bytes` to the memory consumption tracked by this memory tracker.\n    #[inline]\n    pub fn alloc(&self, num_bytes: i64) {\n        let new_current = self\n            .current_memory_usage\n            .fetch_add(num_bytes, Ordering::Acquire)\n            + num_bytes;\n        self.max_memory_usage\n            .fetch_max(new_current, Ordering::Acquire);\n    }\n}\n\nimpl Default for MemTracker {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\n// ----------------------------------------------------------------------\n// Buffer classes\n\n/// Type alias for [`Buffer`].\npub type ByteBuffer = Buffer<u8>;\n/// Type alias for [`BufferPtr`].\npub type ByteBufferPtr = BufferPtr<u8>;\n\n/// A resize-able buffer class with generic member, with optional memory tracker.\n///\n/// Note that a buffer has two attributes:\n/// `capacity` and `size`: the former is the total number of space reserved for\n/// the buffer, while the latter is the actual number of elements.\n/// Invariant: `capacity` >= `size`.\n/// The total allocated bytes for a buffer equals to `capacity * sizeof<T>()`.\npub struct Buffer<T: Clone> {\n    data: Vec<T>,\n    mem_tracker: 
Option<MemTrackerPtr>,\n    type_length: usize,\n}\n\nimpl<T: Clone> Buffer<T> {\n    /// Creates new empty buffer.\n    pub fn new() -> Self {\n        Buffer {\n            data: vec![],\n            mem_tracker: None,\n            type_length: std::mem::size_of::<T>(),\n        }\n    }\n\n    /// Adds [`MemTracker`] for this buffer.\n    #[inline]\n    pub fn with_mem_tracker(mut self, mc: MemTrackerPtr) -> Self {\n        mc.alloc((self.data.capacity() * self.type_length) as i64);\n        self.mem_tracker = Some(mc);\n        self\n    }\n\n    /// Returns slice of data in this buffer.\n    #[inline]\n    pub fn data(&self) -> &[T] {\n        self.data.as_slice()\n    }\n\n    /// Sets data for this buffer.\n    #[inline]\n    pub fn set_data(&mut self, new_data: Vec<T>) {\n        if let Some(ref mc) = self.mem_tracker {\n            let capacity_diff = new_data.capacity() as i64 - self.data.capacity() as i64;\n            mc.alloc(capacity_diff * self.type_length as i64);\n        }\n        self.data = new_data;\n    }\n\n    /// Resizes underlying data in place to a new length `new_size`.\n    ///\n    /// If `new_size` is less than current length, data is truncated, otherwise, it is\n    /// extended to `new_size` with provided default value `init_value`.\n    ///\n    /// Memory tracker is also updated, if available.\n    #[inline]\n    pub fn resize(&mut self, new_size: usize, init_value: T) {\n        let old_capacity = self.data.capacity();\n        self.data.resize(new_size, init_value);\n        if let Some(ref mc) = self.mem_tracker {\n            let capacity_diff = self.data.capacity() as i64 - old_capacity as i64;\n            mc.alloc(capacity_diff * self.type_length as i64);\n        }\n    }\n\n    /// Clears underlying data.\n    #[inline]\n    pub fn clear(&mut self) {\n        self.data.clear()\n    }\n\n    /// Reserves capacity `additional_capacity` for underlying data vector.\n    ///\n    /// Memory tracker is also updated, if available.\n    #[inline]\n    pub fn reserve(&mut self, additional_capacity: usize) {\n        let old_capacity = self.data.capacity();\n        self.data.reserve(additional_capacity);\n        if self.data.capacity() > old_capacity {\n            if let Some(ref mc) = self.mem_tracker {\n                let capacity_diff = self.data.capacity() as i64 - old_capacity as i64;\n                mc.alloc(capacity_diff * self.type_length as i64);\n            }\n        }\n    }\n\n    /// Returns [`BufferPtr`] with buffer data.\n    /// Buffer data is reset.\n    #[inline]\n    pub fn consume(&mut self) -> BufferPtr<T> {\n        let old_data = mem::take(&mut self.data);\n        let mut result = BufferPtr::new(old_data);\n        if let Some(ref mc) = self.mem_tracker {\n            result = result.with_mem_tracker(Arc::clone(mc));\n        }\n        result\n    }\n\n    /// Adds `value` to the buffer.\n    #[inline]\n    pub fn push(&mut self, value: T) {\n        self.data.push(value)\n    }\n\n    /// Returns current capacity for the buffer.\n    #[inline]\n    pub fn capacity(&self) -> usize {\n        self.data.capacity()\n    }\n\n    /// Returns current size for the buffer.\n    #[inline]\n    pub fn size(&self) -> usize {\n        self.data.len()\n    }\n\n    /// Returns `true` if memory tracker is added to buffer, `false` otherwise.\n    #[inline]\n    pub fn is_mem_tracked(&self) -> bool {\n        self.mem_tracker.is_some()\n    }\n\n    /// Returns memory tracker associated with this buffer.\n    /// This may panic, if memory 
tracker is not set; use `is_mem_tracked()` above to check whether a\n    /// memory tracker is available.\n    #[inline]\n    pub fn mem_tracker(&self) -> &MemTrackerPtr {\n        self.mem_tracker.as_ref().unwrap()\n    }\n}\n\nimpl<T: Clone> Default for Buffer<T> {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl<T: Sized + Clone> Index<usize> for Buffer<T> {\n    type Output = T;\n\n    fn index(&self, index: usize) -> &T {\n        &self.data[index]\n    }\n}\n\nimpl<T: Sized + Clone> IndexMut<usize> for Buffer<T> {\n    fn index_mut(&mut self, index: usize) -> &mut T {\n        &mut self.data[index]\n    }\n}\n\n// TODO: implement this for other types\nimpl Write for Buffer<u8> {\n    #[inline]\n    fn write(&mut self, buf: &[u8]) -> IoResult<usize> {\n        let old_capacity = self.data.capacity();\n        let bytes_written = self.data.write(buf)?;\n        if let Some(ref mc) = self.mem_tracker {\n            if self.data.capacity() - old_capacity > 0 {\n                mc.alloc((self.data.capacity() - old_capacity) as i64)\n            }\n        }\n        Ok(bytes_written)\n    }\n\n    fn flush(&mut self) -> IoResult<()> {\n        // No-op\n        self.data.flush()\n    }\n}\n\nimpl AsRef<[u8]> for Buffer<u8> {\n    fn as_ref(&self) -> &[u8] {\n        self.data.as_slice()\n    }\n}\n\nimpl<T: Clone> Drop for Buffer<T> {\n    #[inline]\n    fn drop(&mut self) {\n        if let Some(ref mc) = self.mem_tracker {\n            mc.alloc(-((self.data.capacity() * self.type_length) as i64));\n        }\n    }\n}\n\n// ----------------------------------------------------------------------\n// Immutable Buffer (BufferPtr) classes\n\n/// A representation of a slice of a reference-counted, read-only byte array.\n/// Sub-slices can be further created from this. The byte array will be released\n/// when all slices are dropped.\n#[allow(clippy::rc_buffer)]\n#[derive(Clone, Debug)]\npub struct BufferPtr<T> {\n    data: Arc<Vec<T>>,\n    start: usize,\n    len: usize,\n    // TODO: will this create too many references? 
rethink about this.\n    mem_tracker: Option<MemTrackerPtr>,\n}\n\nimpl<T> BufferPtr<T> {\n    /// Creates new buffer from a vector.\n    pub fn new(v: Vec<T>) -> Self {\n        let len = v.len();\n        Self {\n            data: Arc::new(v),\n            start: 0,\n            len,\n            mem_tracker: None,\n        }\n    }\n\n    /// Returns slice of data in this buffer.\n    #[inline]\n    pub fn data(&self) -> &[T] {\n        &self.data[self.start..self.start + self.len]\n    }\n\n    /// Updates this buffer with new `start` position and length `len`.\n    ///\n    /// Range should be within current start position and length.\n    #[inline]\n    pub fn with_range(mut self, start: usize, len: usize) -> Self {\n        self.set_range(start, len);\n        self\n    }\n\n    /// Updates this buffer with new `start` position and length `len`.\n    ///\n    /// Range should be within current start position and length.\n    #[inline]\n    pub fn set_range(&mut self, start: usize, len: usize) {\n        assert!(self.start <= start && start + len <= self.start + self.len);\n        self.start = start;\n        self.len = len;\n    }\n\n    /// Adds memory tracker to this buffer.\n    pub fn with_mem_tracker(mut self, mc: MemTrackerPtr) -> Self {\n        self.mem_tracker = Some(mc);\n        self\n    }\n\n    /// Returns start position of this buffer.\n    #[inline]\n    pub fn start(&self) -> usize {\n        self.start\n    }\n\n    /// Returns length of this buffer\n    #[inline]\n    pub fn len(&self) -> usize {\n        self.len\n    }\n\n    /// Returns whether this buffer is empty\n    #[inline]\n    pub fn is_empty(&self) -> bool {\n        self.len == 0\n    }\n\n    /// Returns `true` if this buffer has memory tracker, `false` otherwise.\n    pub fn is_mem_tracked(&self) -> bool {\n        self.mem_tracker.is_some()\n    }\n\n    /// Returns a shallow copy of the buffer.\n    /// Reference counted pointer to the data is copied.\n    pub fn all(&self) -> BufferPtr<T> {\n        BufferPtr {\n            data: Arc::clone(&self.data),\n            start: self.start,\n            len: self.len,\n            mem_tracker: self.mem_tracker.as_ref().cloned(),\n        }\n    }\n\n    /// Returns a shallow copy of the buffer that starts with `start` position.\n    pub fn start_from(&self, start: usize) -> BufferPtr<T> {\n        assert!(start <= self.len);\n        BufferPtr {\n            data: Arc::clone(&self.data),\n            start: self.start + start,\n            len: self.len - start,\n            mem_tracker: self.mem_tracker.as_ref().cloned(),\n        }\n    }\n\n    /// Returns a shallow copy that is a range slice within this buffer.\n    pub fn range(&self, start: usize, len: usize) -> BufferPtr<T> {\n        assert!(start + len <= self.len);\n        BufferPtr {\n            data: Arc::clone(&self.data),\n            start: self.start + start,\n            len,\n            mem_tracker: self.mem_tracker.as_ref().cloned(),\n        }\n    }\n}\n\nimpl<T: Sized> Index<usize> for BufferPtr<T> {\n    type Output = T;\n\n    fn index(&self, index: usize) -> &T {\n        assert!(index < self.len);\n        &self.data[self.start + index]\n    }\n}\n\nimpl<T: Debug> Display for BufferPtr<T> {\n    fn fmt(&self, f: &mut Formatter) -> FmtResult {\n        write!(f, \"{:?}\", self.data)\n    }\n}\n\nimpl<T> Drop for BufferPtr<T> {\n    fn drop(&mut self) {\n        if let Some(ref mc) = self.mem_tracker {\n            if Arc::strong_count(&self.data) == 1 && 
Arc::weak_count(&self.data) == 0 {\n                mc.alloc(-(self.data.capacity() as i64));\n            }\n        }\n    }\n}\n\nimpl AsRef<[u8]> for BufferPtr<u8> {\n    #[inline]\n    fn as_ref(&self) -> &[u8] {\n        &self.data[self.start..self.start + self.len]\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_byte_buffer_mem_tracker() {\n        let mem_tracker = Arc::new(MemTracker::new());\n\n        let mut buffer = ByteBuffer::new().with_mem_tracker(Arc::clone(&mem_tracker));\n        buffer.set_data(vec![0; 10]);\n        assert_eq!(mem_tracker.memory_usage(), buffer.capacity() as i64);\n        buffer.set_data(vec![0; 20]);\n        let capacity = buffer.capacity() as i64;\n        assert_eq!(mem_tracker.memory_usage(), capacity);\n\n        let max_capacity = {\n            let mut buffer2 = ByteBuffer::new().with_mem_tracker(Arc::clone(&mem_tracker));\n            buffer2.reserve(30);\n            assert_eq!(\n                mem_tracker.memory_usage(),\n                buffer2.capacity() as i64 + capacity\n            );\n            buffer2.set_data(vec![0; 100]);\n            assert_eq!(\n                mem_tracker.memory_usage(),\n                buffer2.capacity() as i64 + capacity\n            );\n            buffer2.capacity() as i64 + capacity\n        };\n\n        assert_eq!(mem_tracker.memory_usage(), capacity);\n        assert_eq!(mem_tracker.max_memory_usage(), max_capacity);\n\n        buffer.reserve(40);\n        assert_eq!(mem_tracker.memory_usage(), buffer.capacity() as i64);\n\n        buffer.consume();\n        assert_eq!(mem_tracker.memory_usage(), buffer.capacity() as i64);\n    }\n\n    #[test]\n    fn test_byte_ptr_mem_tracker() {\n        let mem_tracker = Arc::new(MemTracker::new());\n\n        let mut buffer = ByteBuffer::new().with_mem_tracker(Arc::clone(&mem_tracker));\n        buffer.set_data(vec![0; 60]);\n\n        {\n            let buffer_capacity = buffer.capacity() as i64;\n            let buf_ptr = buffer.consume();\n            assert_eq!(mem_tracker.memory_usage(), buffer_capacity);\n            {\n                let buf_ptr1 = buf_ptr.all();\n                {\n                    let _ = buf_ptr.start_from(20);\n                    assert_eq!(mem_tracker.memory_usage(), buffer_capacity);\n                }\n                assert_eq!(mem_tracker.memory_usage(), buffer_capacity);\n                let _ = buf_ptr1.range(30, 20);\n                assert_eq!(mem_tracker.memory_usage(), buffer_capacity);\n            }\n            assert_eq!(mem_tracker.memory_usage(), buffer_capacity);\n        }\n        assert_eq!(mem_tracker.memory_usage(), buffer.capacity() as i64);\n    }\n\n    #[test]\n    fn test_byte_buffer() {\n        let mut buffer = ByteBuffer::new();\n        assert_eq!(buffer.size(), 0);\n        assert_eq!(buffer.capacity(), 0);\n\n        let mut buffer2 = ByteBuffer::new();\n        buffer2.reserve(40);\n        assert_eq!(buffer2.size(), 0);\n        assert_eq!(buffer2.capacity(), 40);\n\n        buffer.set_data((0..5).collect());\n        assert_eq!(buffer.size(), 5);\n        assert_eq!(buffer[4], 4);\n\n        buffer.set_data((0..20).collect());\n        assert_eq!(buffer.size(), 20);\n        assert_eq!(buffer[10], 10);\n\n        let expected: Vec<u8> = (0..20).collect();\n        {\n            let data = buffer.data();\n            assert_eq!(data, expected.as_slice());\n        }\n\n        buffer.reserve(40);\n        assert!(buffer.capacity() >= 40);\n\n        let 
byte_ptr = buffer.consume();\n        assert_eq!(buffer.size(), 0);\n        assert_eq!(byte_ptr.as_ref(), expected.as_slice());\n\n        let values: Vec<u8> = (0..30).collect();\n        let _ = buffer.write(values.as_slice());\n        let _ = buffer.flush();\n\n        assert_eq!(buffer.data(), values.as_slice());\n    }\n\n    #[test]\n    fn test_byte_ptr() {\n        let values = (0..50).collect();\n        let ptr = ByteBufferPtr::new(values);\n        assert_eq!(ptr.len(), 50);\n        assert_eq!(ptr.start(), 0);\n        assert_eq!(ptr[40], 40);\n\n        let ptr2 = ptr.all();\n        assert_eq!(ptr2.len(), 50);\n        assert_eq!(ptr2.start(), 0);\n        assert_eq!(ptr2[40], 40);\n\n        let ptr3 = ptr.start_from(20);\n        assert_eq!(ptr3.len(), 30);\n        assert_eq!(ptr3.start(), 20);\n        assert_eq!(ptr3[0], 20);\n\n        let ptr4 = ptr3.range(10, 10);\n        assert_eq!(ptr4.len(), 10);\n        assert_eq!(ptr4.start(), 30);\n        assert_eq!(ptr4[0], 30);\n\n        let expected: Vec<u8> = (30..40).collect();\n        assert_eq!(ptr4.as_ref(), expected.as_slice());\n    }\n}\n"
  },
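To make the tracker/buffer contract above concrete, here is a hypothetical sketch (assuming `MemTracker`, `ByteBuffer`, and `BufferPtr` are in scope): every capacity change on a tracked buffer is reported to the shared `MemTracker`, and the allocation stays accounted for until the last `BufferPtr` referencing it drops.

```rust
use std::sync::Arc;

fn demo() {
    let tracker = Arc::new(MemTracker::new());

    let mut buf = ByteBuffer::new().with_mem_tracker(Arc::clone(&tracker));
    buf.set_data(vec![0u8; 64]); // the capacity change is charged to the tracker
    assert_eq!(tracker.memory_usage(), buf.capacity() as i64);

    // `consume()` hands the bytes off as an immutable, ref-counted slice;
    // the 64 bytes remain tracked while any `BufferPtr` clone is alive.
    let ptr = buf.consume();
    let head = ptr.range(0, 16); // zero-copy sub-slice covering [0, 16)
    assert_eq!(head.len(), 16);
    assert_eq!(tracker.memory_usage(), 64);
} // `ptr` and `head` drop here; the tracker is credited back the 64 bytes
```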
  {
    "path": "native/core/src/parquet/util/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub(crate) mod bit_packing;\npub mod jni;\npub mod memory;\n\nmod buffer;\npub use buffer::*;\n\npub mod test_common;\n"
  },
  {
    "path": "native/core/src/parquet/util/test_common/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub mod page_util;\npub mod rand_gen;\n\npub use self::rand_gen::{random_bools, random_bytes, random_numbers, random_numbers_range};\n\npub use datafusion_comet_spark_expr::test_common::file_util::{get_temp_file, get_temp_filename};\n"
  },
  {
    "path": "native/core/src/parquet/util/test_common/page_util.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{collections::VecDeque, mem, sync::Arc};\n\nuse rand::distr::uniform::SampleUniform;\n\nuse parquet::{\n    basic::Encoding,\n    column::page::{Page, PageIterator, PageMetadata, PageReader},\n    data_type::DataType,\n    encodings::{\n        encoding::{get_encoder, DictEncoder, Encoder},\n        levels::{max_buffer_size, LevelEncoder},\n    },\n    errors::Result,\n    schema::types::ColumnDescPtr,\n};\n\nuse super::random_numbers_range;\nuse bytes::Bytes;\n\npub trait DataPageBuilder {\n    fn add_rep_levels(&mut self, max_level: i16, rep_levels: &[i16]);\n    fn add_def_levels(&mut self, max_level: i16, def_levels: &[i16]);\n    fn add_values<T: DataType>(&mut self, encoding: Encoding, values: &[T::T]);\n    fn add_indices(&mut self, indices: Bytes);\n    fn consume(self) -> Page;\n}\n\n/// A utility struct for building data pages (v1 or v2). Callers must call:\n///   - add_rep_levels()\n///   - add_def_levels()\n///   - add_values() for normal data page / add_indices() for dictionary data page\n///   - consume()\n///     in order to populate and obtain a data page.\npub struct DataPageBuilderImpl {\n    desc: ColumnDescPtr,\n    encoding: Option<Encoding>,\n    num_values: u32,\n    buffer: Vec<u8>,\n    rep_levels_byte_len: u32,\n    def_levels_byte_len: u32,\n    datapage_v2: bool,\n}\n\nimpl DataPageBuilderImpl {\n    // `num_values` is the number of non-null values to put in the data page.\n    // `datapage_v2` flag is used to indicate if the generated data page should use V2\n    // format or not.\n    pub fn new(desc: ColumnDescPtr, num_values: u32, datapage_v2: bool) -> Self {\n        DataPageBuilderImpl {\n            desc,\n            encoding: None,\n            num_values,\n            buffer: vec![],\n            rep_levels_byte_len: 0,\n            def_levels_byte_len: 0,\n            datapage_v2,\n        }\n    }\n\n    // Adds levels to the buffer and return number of encoded bytes\n    fn add_levels(&mut self, max_level: i16, levels: &[i16]) -> u32 {\n        if max_level <= 0 {\n            return 0;\n        }\n        let size = max_buffer_size(Encoding::RLE, max_level, levels.len());\n        let mut level_encoder = LevelEncoder::v1(Encoding::RLE, max_level, size);\n        level_encoder.put(levels);\n        let encoded_levels = level_encoder.consume();\n        // Actual encoded bytes (without length offset)\n        let encoded_bytes = &encoded_levels[mem::size_of::<i32>()..];\n        if self.datapage_v2 {\n            // Level encoder always initializes with offset of i32, where it stores\n            // length of encoded data; for data page v2 we explicitly\n            // store length, therefore we should skip i32 
bytes.\n            self.buffer.extend_from_slice(encoded_bytes);\n        } else {\n            self.buffer.extend_from_slice(encoded_levels.as_slice());\n        }\n        encoded_bytes.len() as u32\n    }\n}\n\nimpl DataPageBuilder for DataPageBuilderImpl {\n    fn add_rep_levels(&mut self, max_levels: i16, rep_levels: &[i16]) {\n        self.num_values = rep_levels.len() as u32;\n        self.rep_levels_byte_len = self.add_levels(max_levels, rep_levels);\n    }\n\n    fn add_def_levels(&mut self, max_levels: i16, def_levels: &[i16]) {\n        assert!(\n            self.num_values == def_levels.len() as u32,\n            \"Must call `add_rep_levels()` first!\"\n        );\n\n        self.def_levels_byte_len = self.add_levels(max_levels, def_levels);\n    }\n\n    fn add_values<T: DataType>(&mut self, encoding: Encoding, values: &[T::T]) {\n        assert!(\n            self.num_values >= values.len() as u32,\n            \"num_values: {}, values.len(): {}\",\n            self.num_values,\n            values.len()\n        );\n        self.encoding = Some(encoding);\n        let mut encoder: Box<dyn Encoder<T>> =\n            get_encoder::<T>(encoding, &self.desc).expect(\"get_encoder() should be OK\");\n        encoder.put(values).expect(\"put() should be OK\");\n        let encoded_values = encoder\n            .flush_buffer()\n            .expect(\"flush_buffer() should be OK\");\n        self.buffer.extend_from_slice(&encoded_values);\n    }\n\n    fn add_indices(&mut self, indices: Bytes) {\n        self.encoding = Some(Encoding::RLE_DICTIONARY);\n        self.buffer.extend_from_slice(indices.as_ref());\n    }\n\n    fn consume(self) -> Page {\n        if self.datapage_v2 {\n            Page::DataPageV2 {\n                buf: Bytes::copy_from_slice(&self.buffer),\n                num_values: self.num_values,\n                encoding: self.encoding.unwrap(),\n                num_nulls: 0, /* set to dummy value - don't need this when reading\n                               * data page */\n                num_rows: self.num_values, /* also don't need this when reading\n                                            * data page */\n                def_levels_byte_len: self.def_levels_byte_len,\n                rep_levels_byte_len: self.rep_levels_byte_len,\n                is_compressed: false,\n                statistics: None, // set to None, we do not need statistics for tests\n            }\n        } else {\n            Page::DataPage {\n                buf: Bytes::copy_from_slice(&self.buffer),\n                num_values: self.num_values,\n                encoding: self.encoding.unwrap(),\n                def_level_encoding: Encoding::RLE,\n                rep_level_encoding: Encoding::RLE,\n                statistics: None, // set to None, we do not need statistics for tests\n            }\n        }\n    }\n}\n\n/// A utility page reader which stores pages in memory.\npub struct InMemoryPageReader<P: Iterator<Item = Page>> {\n    page_iter: P,\n}\n\nimpl<P: Iterator<Item = Page>> InMemoryPageReader<P> {\n    pub fn new(pages: impl IntoIterator<Item = Page, IntoIter = P>) -> Self {\n        Self {\n            page_iter: pages.into_iter(),\n        }\n    }\n}\n\nimpl<P: Iterator<Item = Page> + Send> PageReader for InMemoryPageReader<P> {\n    fn get_next_page(&mut self) -> Result<Option<Page>> {\n        Ok(self.page_iter.next())\n    }\n\n    fn peek_next_page(&mut self) -> Result<Option<PageMetadata>> {\n        unimplemented!()\n    }\n\n    fn skip_next_page(&mut self) 
-> Result<()> {\n        unimplemented!()\n    }\n}\n\nimpl<P: Iterator<Item = Page> + Send> Iterator for InMemoryPageReader<P> {\n    type Item = Result<Page>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        self.get_next_page().transpose()\n    }\n}\n\n/// A utility page iterator which stores page readers in memory, used for tests.\n#[derive(Clone)]\npub struct InMemoryPageIterator<I: Iterator<Item = Vec<Page>>> {\n    page_reader_iter: I,\n}\n\nimpl<I: Iterator<Item = Vec<Page>>> InMemoryPageIterator<I> {\n    pub fn new(pages: impl IntoIterator<Item = Vec<Page>, IntoIter = I>) -> Self {\n        Self {\n            page_reader_iter: pages.into_iter(),\n        }\n    }\n}\n\nimpl<I: Iterator<Item = Vec<Page>>> Iterator for InMemoryPageIterator<I> {\n    type Item = Result<Box<dyn PageReader>>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        self.page_reader_iter\n            .next()\n            .map(|x| Ok(Box::new(InMemoryPageReader::new(x)) as Box<dyn PageReader>))\n    }\n}\n\nimpl<I: Iterator<Item = Vec<Page>> + Send> PageIterator for InMemoryPageIterator<I> {}\n\n#[allow(clippy::too_many_arguments)]\npub fn make_pages<T: DataType>(\n    desc: ColumnDescPtr,\n    encoding: Encoding,\n    num_pages: usize,\n    levels_per_page: usize,\n    min: T::T,\n    max: T::T,\n    def_levels: &mut Vec<i16>,\n    rep_levels: &mut Vec<i16>,\n    values: &mut Vec<T::T>,\n    pages: &mut VecDeque<Page>,\n    use_v2: bool,\n) where\n    T::T: PartialOrd + SampleUniform + Copy,\n{\n    let mut num_values = 0;\n    let max_def_level = desc.max_def_level();\n    let max_rep_level = desc.max_rep_level();\n\n    let mut dict_encoder = DictEncoder::<T>::new(Arc::clone(&desc));\n\n    for i in 0..num_pages {\n        let mut num_values_cur_page = 0;\n        let level_range = i * levels_per_page..(i + 1) * levels_per_page;\n\n        if max_def_level > 0 {\n            random_numbers_range(levels_per_page, 0, max_def_level + 1, def_levels);\n            for dl in &def_levels[level_range.clone()] {\n                if *dl == max_def_level {\n                    num_values_cur_page += 1;\n                }\n            }\n        } else {\n            num_values_cur_page = levels_per_page;\n        }\n        if max_rep_level > 0 {\n            random_numbers_range(levels_per_page, 0, max_rep_level + 1, rep_levels);\n        }\n        random_numbers_range(num_values_cur_page, min, max, values);\n\n        // Generate the current page\n\n        let mut pb =\n            DataPageBuilderImpl::new(Arc::clone(&desc), num_values_cur_page as u32, use_v2);\n        if max_rep_level > 0 {\n            pb.add_rep_levels(max_rep_level, &rep_levels[level_range.clone()]);\n        }\n        if max_def_level > 0 {\n            pb.add_def_levels(max_def_level, &def_levels[level_range]);\n        }\n\n        let value_range = num_values..num_values + num_values_cur_page;\n        match encoding {\n            Encoding::PLAIN_DICTIONARY | Encoding::RLE_DICTIONARY => {\n                let _ = dict_encoder.put(&values[value_range.clone()]);\n                let indices = dict_encoder\n                    .write_indices()\n                    .expect(\"write_indices() should be OK\");\n                pb.add_indices(indices);\n            }\n            Encoding::PLAIN => {\n                pb.add_values::<T>(encoding, &values[value_range]);\n            }\n            enc => panic!(\"Unexpected encoding {enc}\"),\n        }\n\n        let data_page = pb.consume();\n        
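// Data pages are queued in generation order; for dictionary encodings, the\n        // dictionary page itself is prepended to `pages` after this loop.\n        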
pages.push_back(data_page);\n        num_values += num_values_cur_page;\n    }\n\n    if encoding == Encoding::PLAIN_DICTIONARY || encoding == Encoding::RLE_DICTIONARY {\n        let dict = dict_encoder\n            .write_dict()\n            .expect(\"write_dict() should be OK\");\n        let dict_page = Page::DictionaryPage {\n            buf: dict,\n            num_values: dict_encoder.num_entries() as u32,\n            encoding: Encoding::RLE_DICTIONARY,\n            is_sorted: false,\n        };\n        pages.push_front(dict_page);\n    }\n}\n"
  },
  {
    "path": "native/core/src/parquet/util/test_common/rand_gen.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse rand::{\n    distr::{uniform::SampleUniform, Distribution, StandardUniform},\n    rng, RngExt,\n};\n\npub fn random_bytes(n: usize) -> Vec<u8> {\n    let mut result = vec![];\n    let mut rng = rng();\n    for _ in 0..n {\n        result.push(rng.random_range(0..255));\n    }\n    result\n}\n\npub fn random_bools(n: usize) -> Vec<bool> {\n    let mut result = vec![];\n    let mut rng = rng();\n    for _ in 0..n {\n        result.push(rng.random::<bool>());\n    }\n    result\n}\n\npub fn random_numbers<T>(n: usize) -> Vec<T>\nwhere\n    StandardUniform: Distribution<T>,\n{\n    let mut rng = rng();\n    StandardUniform.sample_iter(&mut rng).take(n).collect()\n}\n\npub fn random_numbers_range<T>(n: usize, low: T, high: T, result: &mut Vec<T>)\nwhere\n    T: PartialOrd + SampleUniform + Copy,\n{\n    let mut rng = rng();\n    for _ in 0..n {\n        result.push(rng.random_range(low..high));\n    }\n}\n"
  },
  {
    "path": "native/fs-hdfs/Cargo.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[package]\nname = \"datafusion-comet-fs-hdfs3\"\nversion = \"0.1.12\"\nedition = \"2021\"\n\ndescription = \"libhdfs binding library and safe Rust APIs\"\nauthors = [\"Apache DataFusion <dev@datafusion.apache.org>\"]\nlicense = \"Apache-2.0\"\n\nkeywords = [\"hdfs\", \"hadoop\"]\ndocumentation = \"https://docs.rs/crate/fs-hdfs3\"\nhomepage = \"https://github.com/datafusion-contrib/fs-hdfs\"\nrepository = \"https://github.com/datafusion-contrib/fs-hdfs\"\npublish = false\nreadme = \"README.md\"\n\n[lib]\nname = \"fs_hdfs\"\npath = \"src/lib.rs\"\n\n[features]\ndefault = [\"test_util\"]\ntest_util = []\nuse_existing_hdfs = []\n\n[build-dependencies]\ncc = \"1\"\nbindgen = \"0.72.1\"\njava-locator = { version = \"0.1.9\", features = [\"locate-jdk-only\"] }\n\n[dependencies]\nlazy_static = \"^1.4\"\nlibc = \"^0.2\"\nurl = \"^2.2\"\nlog = \"^0.4\"\n\n[dev-dependencies]\nuuid = {version = \"^1.23\", features = [\"v4\"]}\ntempfile = \"^3.26\"\n"
  },
  {
    "path": "native/fs-hdfs/LICENSE.txt",
    "content": "                                 Apache License\n                           Version 2.0, January 2004\n                        http://www.apache.org/licenses/\n\n   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION\n\n   1. Definitions.\n\n      \"License\" shall mean the terms and conditions for use, reproduction,\n      and distribution as defined by Sections 1 through 9 of this document.\n\n      \"Licensor\" shall mean the copyright owner or entity authorized by\n      the copyright owner that is granting the License.\n\n      \"Legal Entity\" shall mean the union of the acting entity and all\n      other entities that control, are controlled by, or are under common\n      control with that entity. For the purposes of this definition,\n      \"control\" means (i) the power, direct or indirect, to cause the\n      direction or management of such entity, whether by contract or\n      otherwise, or (ii) ownership of fifty percent (50%) or more of the\n      outstanding shares, or (iii) beneficial ownership of such entity.\n\n      \"You\" (or \"Your\") shall mean an individual or Legal Entity\n      exercising permissions granted by this License.\n\n      \"Source\" form shall mean the preferred form for making modifications,\n      including but not limited to software source code, documentation\n      source, and configuration files.\n\n      \"Object\" form shall mean any form resulting from mechanical\n      transformation or translation of a Source form, including but\n      not limited to compiled object code, generated documentation,\n      and conversions to other media types.\n\n      \"Work\" shall mean the work of authorship, whether in Source or\n      Object form, made available under the License, as indicated by a\n      copyright notice that is included in or attached to the work\n      (an example is provided in the Appendix below).\n\n      \"Derivative Works\" shall mean any work, whether in Source or Object\n      form, that is based on (or derived from) the Work and for which the\n      editorial revisions, annotations, elaborations, or other modifications\n      represent, as a whole, an original work of authorship. For the purposes\n      of this License, Derivative Works shall not include works that remain\n      separable from, or merely link (or bind by name) to the interfaces of,\n      the Work and Derivative Works thereof.\n\n      \"Contribution\" shall mean any work of authorship, including\n      the original version of the Work and any modifications or additions\n      to that Work or Derivative Works thereof, that is intentionally\n      submitted to Licensor for inclusion in the Work by the copyright owner\n      or by an individual or Legal Entity authorized to submit on behalf of\n      the copyright owner. 
For the purposes of this definition, \"submitted\"\n      means any form of electronic, verbal, or written communication sent\n      to the Licensor or its representatives, including but not limited to\n      communication on electronic mailing lists, source code control systems,\n      and issue tracking systems that are managed by, or on behalf of, the\n      Licensor for the purpose of discussing and improving the Work, but\n      excluding communication that is conspicuously marked or otherwise\n      designated in writing by the copyright owner as \"Not a Contribution.\"\n\n      \"Contributor\" shall mean Licensor and any individual or Legal Entity\n      on behalf of whom a Contribution has been received by Licensor and\n      subsequently incorporated within the Work.\n\n   2. Grant of Copyright License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      copyright license to reproduce, prepare Derivative Works of,\n      publicly display, publicly perform, sublicense, and distribute the\n      Work and such Derivative Works in Source or Object form.\n\n   3. Grant of Patent License. Subject to the terms and conditions of\n      this License, each Contributor hereby grants to You a perpetual,\n      worldwide, non-exclusive, no-charge, royalty-free, irrevocable\n      (except as stated in this section) patent license to make, have made,\n      use, offer to sell, sell, import, and otherwise transfer the Work,\n      where such license applies only to those patent claims licensable\n      by such Contributor that are necessarily infringed by their\n      Contribution(s) alone or by combination of their Contribution(s)\n      with the Work to which such Contribution(s) was submitted. If You\n      institute patent litigation against any entity (including a\n      cross-claim or counterclaim in a lawsuit) alleging that the Work\n      or a Contribution incorporated within the Work constitutes direct\n      or contributory patent infringement, then any patent licenses\n      granted to You under this License for that Work shall terminate\n      as of the date such litigation is filed.\n\n   4. Redistribution. 
You may reproduce and distribute copies of the\n      Work or Derivative Works thereof in any medium, with or without\n      modifications, and in Source or Object form, provided that You\n      meet the following conditions:\n\n      (a) You must give any other recipients of the Work or\n          Derivative Works a copy of this License; and\n\n      (b) You must cause any modified files to carry prominent notices\n          stating that You changed the files; and\n\n      (c) You must retain, in the Source form of any Derivative Works\n          that You distribute, all copyright, patent, trademark, and\n          attribution notices from the Source form of the Work,\n          excluding those notices that do not pertain to any part of\n          the Derivative Works; and\n\n      (d) If the Work includes a \"NOTICE\" text file as part of its\n          distribution, then any Derivative Works that You distribute must\n          include a readable copy of the attribution notices contained\n          within such NOTICE file, excluding those notices that do not\n          pertain to any part of the Derivative Works, in at least one\n          of the following places: within a NOTICE text file distributed\n          as part of the Derivative Works; within the Source form or\n          documentation, if provided along with the Derivative Works; or,\n          within a display generated by the Derivative Works, if and\n          wherever such third-party notices normally appear. The contents\n          of the NOTICE file are for informational purposes only and\n          do not modify the License. You may add Your own attribution\n          notices within Derivative Works that You distribute, alongside\n          or as an addendum to the NOTICE text from the Work, provided\n          that such additional attribution notices cannot be construed\n          as modifying the License.\n\n      You may add Your own copyright statement to Your modifications and\n      may provide additional or different license terms and conditions\n      for use, reproduction, or distribution of Your modifications, or\n      for any such Derivative Works as a whole, provided Your use,\n      reproduction, and distribution of the Work otherwise complies with\n      the conditions stated in this License.\n\n   5. Submission of Contributions. Unless You explicitly state otherwise,\n      any Contribution intentionally submitted for inclusion in the Work\n      by You to the Licensor shall be under the terms and conditions of\n      this License, without any additional terms or conditions.\n      Notwithstanding the above, nothing herein shall supersede or modify\n      the terms of any separate license agreement you may have executed\n      with Licensor regarding such Contributions.\n\n   6. Trademarks. This License does not grant permission to use the trade\n      names, trademarks, service marks, or product names of the Licensor,\n      except as required for reasonable and customary use in describing the\n      origin of the Work and reproducing the content of the NOTICE file.\n\n   7. Disclaimer of Warranty. 
Unless required by applicable law or\n      agreed to in writing, Licensor provides the Work (and each\n      Contributor provides its Contributions) on an \"AS IS\" BASIS,\n      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or\n      implied, including, without limitation, any warranties or conditions\n      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A\n      PARTICULAR PURPOSE. You are solely responsible for determining the\n      appropriateness of using or redistributing the Work and assume any\n      risks associated with Your exercise of permissions under this License.\n\n   8. Limitation of Liability. In no event and under no legal theory,\n      whether in tort (including negligence), contract, or otherwise,\n      unless required by applicable law (such as deliberate and grossly\n      negligent acts) or agreed to in writing, shall any Contributor be\n      liable to You for damages, including any direct, indirect, special,\n      incidental, or consequential damages of any character arising as a\n      result of this License or out of the use or inability to use the\n      Work (including but not limited to damages for loss of goodwill,\n      work stoppage, computer failure or malfunction, or any and all\n      other commercial damages or losses), even if such Contributor\n      has been advised of the possibility of such damages.\n\n   9. Accepting Warranty or Additional Liability. While redistributing\n      the Work or Derivative Works thereof, You may choose to offer,\n      and charge a fee for, acceptance of support, warranty, indemnity,\n      or other liability obligations and/or rights consistent with this\n      License. However, in accepting such obligations, You may act only\n      on Your own behalf and on Your sole responsibility, not on behalf\n      of any other Contributor, and only if You agree to indemnify,\n      defend, and hold each Contributor harmless for any liability\n      incurred by, or claims asserted against, such Contributor by reason\n      of your accepting any such warranty or additional liability.\n\n   END OF TERMS AND CONDITIONS\n\n   APPENDIX: How to apply the Apache License to your work.\n\n      To apply the Apache License to your work, attach the following\n      boilerplate notice, with the fields enclosed by brackets \"[]\"\n      replaced with your own identifying information. (Don't include\n      the brackets!)  The text should be enclosed in the appropriate\n      comment syntax for the file format. We also recommend that a\n      file or class name and description of purpose be included on the\n      same \"printed page\" as the copyright notice for easier\n      identification within third-party archives.\n\n   Copyright [yyyy] [name of copyright owner]\n\n   Licensed under the Apache License, Version 2.0 (the \"License\");\n   you may not use this file except in compliance with the License.\n   You may obtain a copy of the License at\n\n       http://www.apache.org/licenses/LICENSE-2.0\n\n   Unless required by applicable law or agreed to in writing, software\n   distributed under the License is distributed on an \"AS IS\" BASIS,\n   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n   See the License for the specific language governing permissions and\n   limitations under the License.\n"
  },
  {
    "path": "native/fs-hdfs/README.md",
    "content": " <!--\nLicensed to the Apache Software Foundation (ASF) under one\n or more contributor license agreements.  See the NOTICE file\n distributed with this work for additional information\n regarding copyright ownership.  The ASF licenses this file\n to you under the Apache License, Version 2.0 (the\n \"License\"); you may not use this file except in compliance\n with the License.  You may obtain a copy of the License at\n\n   http://www.apache.org/licenses/LICENSE-2.0\n\n Unless required by applicable law or agreed to in writing,\n software distributed under the License is distributed on an\n \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n KIND, either express or implied.  See the License for the\n specific language governing permissions and limitations\n under the License.\n-->\n\n# fs-hdfs3\n\nIt's based on the version `0.0.4` of http://hyunsik.github.io/hdfs-rs to provide libhdfs binding library and rust APIs which safely wraps libhdfs binding APIs.\n\n# Current Status\n\n- All libhdfs FFI APIs are ported.\n- Safe Rust wrapping APIs to cover most of the libhdfs APIs except those related to zero-copy read.\n- Compared to hdfs-rs, it removes the lifetime in HdfsFs, which will be more friendly for others to depend on.\n\n## Documentation\n\n- [API documentation] (https://docs.rs/crate/fs-hdfs3)\n\n## Requirements\n\n- The C related files are from the branch `3.1.4` of hadoop repository. For rust usage, a few changes are also applied.\n- No need to compile the Hadoop native library by yourself. However, the Hadoop jar dependencies are still required.\n\n## Usage\n\nAdd this to your Cargo.toml:\n\n```toml\n[dependencies]\nfs-hdfs3 = \"0.1.12\"\n```\n\n### Build\n\nWe need to specify `$JAVA_HOME` to make Java shared library available for building.\n\n### Run\n\nSince our compiled libhdfs is JNI-based implementation,\nit requires Hadoop-related classes available through `CLASSPATH`. An example,\n\n```sh\nexport CLASSPATH=$CLASSPATH:`hadoop classpath --glob`\n```\n\nAlso, we need to specify the JVM dynamic library path for the application to load the JVM shared library at runtime.\n\nFor jdk8 and macOS, it's\n\n```sh\nexport DYLD_LIBRARY_PATH=$JAVA_HOME/jre/lib/server\n```\n\nFor jdk11 (or later jdks) and macOS, it's\n\n```sh\nexport DYLD_LIBRARY_PATH=$JAVA_HOME/lib/server\n```\n\nFor jdk8 and Centos\n\n```sh\nexport LD_LIBRARY_PATH=$JAVA_HOME/jre/lib/amd64/server\n```\n\nFor jdk11 (or later jdks) and Centos\n\n```sh\nexport LD_LIBRARY_PATH=$JAVA_HOME/lib/server\n```\n\n### Testing\n\nThe test also requires the `CLASSPATH` and `DYLD_LIBRARY_PATH` (or `LD_LIBRARY_PATH`). In case that the java class of `org.junit.Assert` can't be found. Refine the `$CLASSPATH` as follows:\n\n```sh\nexport CLASSPATH=$CLASSPATH:`hadoop classpath --glob`:$HADOOP_HOME/share/hadoop/tools/lib/*\n```\n\nHere, `$HADOOP_HOME` need to be specified and exported.\n\nThen you can run\n\n```bash\ncargo test\n```\n\n## Example\n\n```rust\nuse std::sync::Arc;\nuse hdfs::hdfs::{get_hdfs_by_full_path, HdfsFs};\n\nlet fs: Arc<HdfsFs> = get_hdfs_by_full_path(\"hdfs://localhost:8020/\").ok().unwrap();\nmatch fs.mkdir(\"/data\") {\n    Ok(_) => { println!(\"/data has been created\") },\n    Err(_)  => { panic!(\"/data creation has failed\") }\n};\n```\n"
  },
  {
    "path": "native/fs-hdfs/build.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::env;\nuse std::path::PathBuf;\n\nfn main() {\n    let vec_flags = &get_build_flags();\n\n    build_hdfs_lib(vec_flags);\n\n    build_minidfs_lib(vec_flags);\n\n    build_ffi(vec_flags);\n}\n\nfn build_ffi(flags: &[String]) {\n    let (header, output) = (\"c_src/wrapper.h\", \"hdfs-native.rs\");\n    // Tell cargo to invalidate the built crate whenever the wrapper changes\n    println!(\"cargo:rerun-if-changed={header}\");\n    println!(\"cargo:rerun-if-changed={}\", get_hdfs_file_path(\"hdfs.h\"));\n    println!(\n        \"cargo:rerun-if-changed={}\",\n        get_minidfs_file_path(\"native_mini_dfs.h\")\n    );\n\n    // To avoid the order issue of dependent dynamic libraries\n    println!(\"cargo:rustc-link-lib=jvm\");\n\n    let bindings = bindgen::Builder::default()\n        .header(header)\n        .allowlist_function(\"nmd.*\")\n        .allowlist_function(\"hdfs.*\")\n        .allowlist_function(\"hadoop.*\")\n        .clang_args(flags)\n        .rustified_enum(\"tObjectKind\")\n        .generate()\n        .expect(\"Unable to generate bindings\");\n\n    let out_path = PathBuf::from(env::var(\"OUT_DIR\").unwrap());\n    bindings\n        .write_to_file(out_path.join(output))\n        .expect(\"Couldn't write bindings!\");\n}\n\nfn build_hdfs_lib(flags: &[String]) {\n    println!(\"cargo:rerun-if-changed={}\", get_hdfs_file_path(\"hdfs.c\"));\n\n    let mut builder = cc::Build::new();\n\n    // includes\n    builder\n        .include(get_hdfs_source_dir())\n        .include(format!(\"{}/os/posix\", get_hdfs_source_dir()));\n\n    // files\n    builder\n        .file(get_hdfs_file_path(\"exception.c\"))\n        .file(get_hdfs_file_path(\"htable.c\"))\n        .file(get_hdfs_file_path(\"jni_helper.c\"))\n        .file(get_hdfs_file_os_path(\"mutexes.c\"))\n        .file(get_hdfs_file_os_path(\"thread_local_storage.c\"))\n        .file(get_hdfs_file_path(\"hdfs.c\"));\n\n    for flag in flags {\n        builder.flag(flag);\n    }\n\n    // disable the warning message\n    builder.warnings(false);\n\n    builder.compile(\"hdfs\");\n}\n\nfn build_minidfs_lib(flags: &[String]) {\n    println!(\n        \"cargo:rerun-if-changed={}\",\n        get_minidfs_file_path(\"native_mini_dfs.c\")\n    );\n\n    let mut builder = cc::Build::new();\n\n    // includes\n    builder\n        .include(get_hdfs_source_dir())\n        .include(format!(\"{}/os/posix\", get_hdfs_source_dir()));\n\n    // files\n    builder.file(get_minidfs_file_path(\"native_mini_dfs.c\"));\n\n    for flag in flags {\n        builder.flag(flag.as_str());\n    }\n    builder.compile(\"minidfs\");\n}\n\nfn get_build_flags() -> Vec<String> {\n    let mut result = vec![];\n\n    
result.extend(get_java_dependency());\n    result.push(String::from(\"-Wno-incompatible-pointer-types\"));\n\n    result\n}\n\nfn get_java_dependency() -> Vec<String> {\n    let mut result = vec![];\n\n    // Include directories\n    let java_home = java_locator::locate_java_home()\n        .expect(\"JAVA_HOME could not be found; try setting the variable manually\");\n    result.push(format!(\"-I{java_home}/include\"));\n    #[cfg(target_os = \"linux\")]\n    result.push(format!(\"-I{java_home}/include/linux\"));\n    #[cfg(target_os = \"macos\")]\n    result.push(format!(\"-I{java_home}/include/darwin\"));\n\n    // libjvm link\n    let jvm_lib_location = java_locator::locate_jvm_dyn_library().unwrap();\n    println!(\"cargo:rustc-link-search=native={jvm_lib_location}\");\n    println!(\"cargo:rustc-link-lib=jvm\");\n\n    // For tests, add the libjvm path to the rpath. Per Cargo's rules this does\n    // not propagate upwards unless building a shared library, so it only takes\n    // effect when testing.\n    println!(\"cargo:rustc-link-arg=-Wl,-rpath,{jvm_lib_location}\");\n\n    result\n}\n\nfn get_hdfs_file_path(filename: &'static str) -> String {\n    format!(\"{}/{}\", get_hdfs_source_dir(), filename)\n}\n\nfn get_hdfs_file_os_path(filename: &'static str) -> String {\n    format!(\"{}/os/posix/{}\", get_hdfs_source_dir(), filename)\n}\n\nfn get_hdfs_source_dir() -> &'static str {\n    \"c_src/libhdfs\"\n}\n\nfn get_minidfs_file_path(filename: &'static str) -> String {\n    format!(\"{}/{}\", \"c_src/libminidfs\", filename)\n}\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/config.h",
    "content": "/**\n* Licensed to the Apache Software Foundation (ASF) under one\n* or more contributor license agreements.  See the NOTICE file\n* distributed with this work for additional information\n* regarding copyright ownership.  The ASF licenses this file\n* to you under the Apache License, Version 2.0 (the\n* \"License\"); you may not use this file except in compliance\n* with the License.  You may obtain a copy of the License at\n*\n*     http://www.apache.org/licenses/LICENSE-2.0\n*\n* Unless required by applicable law or agreed to in writing, software\n* distributed under the License is distributed on an \"AS IS\" BASIS,\n* WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n* See the License for the specific language governing permissions and\n* limitations under the License.\n*/\n#ifndef CONFIG_H\n#define CONFIG_H\n\n#define _FUSE_DFS_VERSION \"0.1.0\"\n\n#define HAVE_BETTER_TLS\n\n#define HAVE_INTEL_SSE_INTRINSICS\n\n#endif"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/exception.c",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"exception.h\"\n#include \"hdfs.h\"\n#include \"jni_helper.h\"\n#include \"platform.h\"\n\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n\n#define EXCEPTION_INFO_LEN (sizeof(gExceptionInfo)/sizeof(gExceptionInfo[0]))\n\nstruct ExceptionInfo {\n    const char * const name;\n    int noPrintFlag;\n    int excErrno;\n};\n\nstatic const struct ExceptionInfo gExceptionInfo[] = {\n    {\n        \"java.io.FileNotFoundException\",\n        NOPRINT_EXC_FILE_NOT_FOUND,\n        ENOENT,\n    },\n    {\n        \"org.apache.hadoop.security.AccessControlException\",\n        NOPRINT_EXC_ACCESS_CONTROL,\n        EACCES,\n    },\n    {\n        \"org.apache.hadoop.fs.UnresolvedLinkException\",\n        NOPRINT_EXC_UNRESOLVED_LINK,\n        ENOLINK,\n    },\n    {\n        \"org.apache.hadoop.fs.ParentNotDirectoryException\",\n        NOPRINT_EXC_PARENT_NOT_DIRECTORY,\n        ENOTDIR,\n    },\n    {\n        \"java.lang.IllegalArgumentException\",\n        NOPRINT_EXC_ILLEGAL_ARGUMENT,\n        EINVAL,\n    },\n    {\n        \"java.lang.OutOfMemoryError\",\n        0,\n        ENOMEM,\n    },\n    {\n        \"org.apache.hadoop.hdfs.server.namenode.SafeModeException\",\n        0,\n        EROFS,\n    },\n    {\n        \"org.apache.hadoop.fs.FileAlreadyExistsException\",\n        0,\n        EEXIST,\n    },\n    {\n        \"org.apache.hadoop.hdfs.protocol.QuotaExceededException\",\n        0,\n        EDQUOT,\n    },\n    {\n        \"java.lang.UnsupportedOperationException\",\n        0,\n        ENOTSUP,\n    },\n    {\n        \"org.apache.hadoop.hdfs.server.namenode.LeaseExpiredException\",\n        0,\n        ESTALE,\n    },\n};\n\nvoid getExceptionInfo(const char *excName, int noPrintFlags,\n                      int *excErrno, int *shouldPrint)\n{\n    int i;\n\n    for (i = 0; i < EXCEPTION_INFO_LEN; i++) {\n        if (strstr(gExceptionInfo[i].name, excName)) {\n            break;\n        }\n    }\n    if (i < EXCEPTION_INFO_LEN) {\n        *shouldPrint = !(gExceptionInfo[i].noPrintFlag & noPrintFlags);\n        *excErrno = gExceptionInfo[i].excErrno;\n    } else {\n        *shouldPrint = 1;\n        *excErrno = EINTERNAL;\n    }\n}\n\n/**\n * getExceptionUtilString: A helper function that calls 'methodName' in\n * ExceptionUtils. 
The function 'methodName' should have a return type of a\n * java String.\n *\n * @param env        The JNI environment.\n * @param exc        The exception to get information for.\n * @param methodName The method of ExceptionUtils to call that has a String\n *                    return type.\n *\n * @return           A C-type string containing the string returned by\n *                   ExceptionUtils.'methodName', or NULL on failure.\n */\nstatic char* getExceptionUtilString(JNIEnv *env, jthrowable exc, char *methodName)\n{\n    jthrowable jthr;\n    jvalue jVal;\n    jstring jStr = NULL;\n    char *excString = NULL;\n    jthr = invokeMethod(env, &jVal, STATIC, NULL,\n        \"org/apache/commons/lang/exception/ExceptionUtils\",\n        methodName, \"(Ljava/lang/Throwable;)Ljava/lang/String;\", exc);\n    if (jthr) {\n        destroyLocalReference(env, jthr);\n        return NULL;\n    }\n    jStr = jVal.l;\n    jthr = newCStr(env, jStr, &excString);\n    if (jthr) {\n        destroyLocalReference(env, jthr);\n        return NULL;\n    }\n    destroyLocalReference(env, jStr);\n    return excString;\n}\n\nint printExceptionAndFreeV(JNIEnv *env, jthrowable exc, int noPrintFlags,\n        const char *fmt, va_list ap)\n{\n    int i, noPrint, excErrno;\n    char *className = NULL;\n    jthrowable jthr;\n    const char *stackTrace;\n    const char *rootCause;\n\n    jthr = classNameOfObject(exc, env, &className);\n    if (jthr) {\n        fprintf(stderr, \"PrintExceptionAndFree: error determining class name \"\n            \"of exception.\\n\");\n        className = strdup(\"(unknown)\");\n        destroyLocalReference(env, jthr);\n    }\n    for (i = 0; i < EXCEPTION_INFO_LEN; i++) {\n        if (!strcmp(gExceptionInfo[i].name, className)) {\n            break;\n        }\n    }\n    if (i < EXCEPTION_INFO_LEN) {\n        noPrint = (gExceptionInfo[i].noPrintFlag & noPrintFlags);\n        excErrno = gExceptionInfo[i].excErrno;\n    } else {\n        noPrint = 0;\n        excErrno = EINTERNAL;\n    }\n\n    // We don't want to use ExceptionDescribe here, because that requires a\n    // pending exception. 
Instead, use ExceptionUtils.\n    rootCause = getExceptionUtilString(env, exc, \"getRootCauseMessage\");\n    stackTrace = getExceptionUtilString(env, exc, \"getStackTrace\");\n    // Save the exception details in the thread-local state.\n    setTLSExceptionStrings(rootCause, stackTrace);\n\n    if (!noPrint) {\n        vfprintf(stderr, fmt, ap);\n        fprintf(stderr, \" error:\\n\");\n\n        if (!rootCause) {\n            fprintf(stderr, \"(unable to get root cause for %s)\\n\", className);\n        } else {\n            fprintf(stderr, \"%s\", rootCause);\n        }\n        if (!stackTrace) {\n            fprintf(stderr, \"(unable to get stack trace for %s)\\n\", className);\n        } else {\n            fprintf(stderr, \"%s\", stackTrace);\n        }\n    }\n\n    destroyLocalReference(env, exc);\n    free(className);\n    return excErrno;\n}\n\nint printExceptionAndFree(JNIEnv *env, jthrowable exc, int noPrintFlags,\n        const char *fmt, ...)\n{\n    va_list ap;\n    int ret;\n\n    va_start(ap, fmt);\n    ret = printExceptionAndFreeV(env, exc, noPrintFlags, fmt, ap);\n    va_end(ap);\n    return ret;\n}\n\nint printPendingExceptionAndFree(JNIEnv *env, int noPrintFlags,\n        const char *fmt, ...)\n{\n    va_list ap;\n    int ret;\n    jthrowable exc;\n\n    exc = (*env)->ExceptionOccurred(env);\n    if (!exc) {\n        va_start(ap, fmt);\n        vfprintf(stderr, fmt, ap);\n        va_end(ap);\n        fprintf(stderr, \" error: (no exception)\");\n        ret = 0;\n    } else {\n        (*env)->ExceptionClear(env);\n        va_start(ap, fmt);\n        ret = printExceptionAndFreeV(env, exc, noPrintFlags, fmt, ap);\n        va_end(ap);\n    }\n    return ret;\n}\n\njthrowable getPendingExceptionAndClear(JNIEnv *env)\n{\n    jthrowable jthr = (*env)->ExceptionOccurred(env);\n    if (!jthr)\n        return NULL;\n    (*env)->ExceptionClear(env);\n    return jthr;\n}\n\njthrowable newRuntimeError(JNIEnv *env, const char *fmt, ...)\n{\n    char buf[512];\n    jobject out, exc;\n    jstring jstr;\n    va_list ap;\n\n    va_start(ap, fmt);\n    vsnprintf(buf, sizeof(buf), fmt, ap);\n    va_end(ap);\n    jstr = (*env)->NewStringUTF(env, buf);\n    if (!jstr) {\n        // We got an out of memory exception rather than a RuntimeException.\n        // Too bad...\n        return getPendingExceptionAndClear(env);\n    }\n    exc = constructNewObjectOfClass(env, &out, \"RuntimeException\",\n        \"(java/lang/String;)V\", jstr);\n    (*env)->DeleteLocalRef(env, jstr);\n    // Again, we'll either get an out of memory exception or the\n    // RuntimeException we wanted.\n    return (exc) ? exc : out;\n}\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/exception.h",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef LIBHDFS_EXCEPTION_H\n#define LIBHDFS_EXCEPTION_H\n\n/**\n * Exception handling routines for libhdfs.\n *\n * The convention we follow here is to clear pending exceptions as soon as they\n * are raised.  Never assume that the caller of your function will clean up\n * after you-- do it yourself.  Unhandled exceptions can lead to memory leaks\n * and other undefined behavior.\n *\n * If you encounter an exception, return a local reference to it.  The caller is\n * responsible for freeing the local reference, by calling a function like\n * printExceptionAndFree. (You can also free exceptions directly by calling\n * DeleteLocalRef.  However, that would not produce an error message, so it's\n * usually not what you want.)\n *\n * The root cause and stack trace exception strings retrieved from the last\n * exception that happened on a thread are stored in the corresponding\n * thread local state and are accessed by hdfsGetLastExceptionRootCause and\n * hdfsGetLastExceptionStackTrace respectively.\n */\n\n#include \"platform.h\"\n\n#include <jni.h>\n#include <stdio.h>\n\n#include <stdlib.h>\n#include <stdarg.h>\n#include <search.h>\n#include <errno.h>\n\n/**\n * Exception noprint flags\n *\n * Theses flags determine which exceptions should NOT be printed to stderr by\n * the exception printing routines.  
For example, if you expect to see\n * FileNotFound, you might use NOPRINT_EXC_FILE_NOT_FOUND, to avoid filling the\n * logs with messages about routine events.\n *\n * On the other hand, if you don't expect any failures, you might pass\n * PRINT_EXC_ALL.\n *\n * You can OR these flags together to avoid printing multiple classes of\n * exceptions.\n */\n#define PRINT_EXC_ALL                           0x00\n#define NOPRINT_EXC_FILE_NOT_FOUND              0x01\n#define NOPRINT_EXC_ACCESS_CONTROL              0x02\n#define NOPRINT_EXC_UNRESOLVED_LINK             0x04\n#define NOPRINT_EXC_PARENT_NOT_DIRECTORY        0x08\n#define NOPRINT_EXC_ILLEGAL_ARGUMENT            0x10\n\n#ifdef WIN32\n#ifdef LIBHDFS_DLL_EXPORT\n        #define LIBHDFS_EXTERNAL __declspec(dllexport)\n    #elif LIBHDFS_DLL_IMPORT\n        #define LIBHDFS_EXTERNAL __declspec(dllimport)\n    #else\n        #define LIBHDFS_EXTERNAL\n    #endif\n#else\n#ifdef LIBHDFS_DLL_EXPORT\n#define LIBHDFS_EXTERNAL __attribute__((visibility(\"default\")))\n#elif LIBHDFS_DLL_IMPORT\n#define LIBHDFS_EXTERNAL __attribute__((visibility(\"default\")))\n#else\n#define LIBHDFS_EXTERNAL\n#endif\n#endif\n\n/**\n * Get information about an exception.\n *\n * @param excName         The Exception name.\n *                        This is a Java class name in JNI format.\n * @param noPrintFlags    Flags which determine which exceptions we should NOT\n *                        print.\n * @param excErrno        (out param) The POSIX error number associated with the\n *                        exception.\n * @param shouldPrint     (out param) Nonzero if we should print this exception,\n *                        based on the noPrintFlags and its name. \n */\nLIBHDFS_EXTERNAL\nvoid getExceptionInfo(const char *excName, int noPrintFlags,\n                      int *excErrno, int *shouldPrint);\n\n/**\n * Store the information about an exception in the thread-local state and print\n * it and free the jthrowable object.\n *\n * @param env             The JNI environment\n * @param exc             The exception to print and free\n * @param noPrintFlags    Flags which determine which exceptions we should NOT\n *                        print.\n * @param fmt             Printf-style format list\n * @param ap              Printf-style varargs\n *\n * @return                The POSIX error number associated with the exception\n *                        object.\n */\nLIBHDFS_EXTERNAL\nint printExceptionAndFreeV(JNIEnv *env, jthrowable exc, int noPrintFlags,\n        const char *fmt, va_list ap);\n\n/**\n * Store the information about an exception in the thread-local state and print\n * it and free the jthrowable object.\n *\n * @param env             The JNI environment\n * @param exc             The exception to print and free\n * @param noPrintFlags    Flags which determine which exceptions we should NOT\n *                        print.\n * @param fmt             Printf-style format list\n * @param ...             Printf-style varargs\n *\n * @return                The POSIX error number associated with the exception\n *                        object.\n */\nLIBHDFS_EXTERNAL\nint printExceptionAndFree(JNIEnv *env, jthrowable exc, int noPrintFlags,\n        const char *fmt, ...) 
TYPE_CHECKED_PRINTF_FORMAT(4, 5);\n\n/**\n * Store the information about the pending exception in the thread-local state\n * and print it and free the jthrowable object.\n *\n * @param env             The JNI environment\n * @param noPrintFlags    Flags which determine which exceptions we should NOT\n *                        print.\n * @param fmt             Printf-style format list\n * @param ...             Printf-style varargs\n *\n * @return                The POSIX error number associated with the exception\n *                        object.\n */\nLIBHDFS_EXTERNAL\nint printPendingExceptionAndFree(JNIEnv *env, int noPrintFlags,\n        const char *fmt, ...) TYPE_CHECKED_PRINTF_FORMAT(3, 4);\n\n/**\n * Get a local reference to the pending exception and clear it.\n *\n * Once it is cleared, the exception will no longer be pending.  The caller will\n * have to decide what to do with the exception object.\n *\n * @param env             The JNI environment\n *\n * @return                The exception, or NULL if there was no exception\n */\nLIBHDFS_EXTERNAL\njthrowable getPendingExceptionAndClear(JNIEnv *env);\n\n/**\n * Create a new runtime error.\n *\n * This creates (but does not throw) a new RuntimeError.\n *\n * @param env             The JNI environment\n * @param fmt             Printf-style format list\n * @param ...             Printf-style varargs\n *\n * @return                A local reference to a RuntimeError\n */\nLIBHDFS_EXTERNAL\njthrowable newRuntimeError(JNIEnv *env, const char *fmt, ...)\n        TYPE_CHECKED_PRINTF_FORMAT(2, 3);\n\n#undef TYPE_CHECKED_PRINTF_FORMAT\n#undef LIBHDFS_EXTERNAL\n#endif\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/hdfs.c",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"exception.h\"\n#include \"hdfs.h\"\n#include \"jni_helper.h\"\n#include \"platform.h\"\n\n#include <fcntl.h>\n#include <inttypes.h>\n#include <stdio.h>\n#include <string.h>\n\n/* Some frequently used Java paths */\n#define HADOOP_CONF     \"org/apache/hadoop/conf/Configuration\"\n#define HADOOP_PATH     \"org/apache/hadoop/fs/Path\"\n#define HADOOP_LOCALFS  \"org/apache/hadoop/fs/LocalFileSystem\"\n#define HADOOP_FS       \"org/apache/hadoop/fs/FileSystem\"\n#define HADOOP_FSSTATUS \"org/apache/hadoop/fs/FsStatus\"\n#define HADOOP_BLK_LOC  \"org/apache/hadoop/fs/BlockLocation\"\n#define HADOOP_DFS      \"org/apache/hadoop/hdfs/DistributedFileSystem\"\n#define HADOOP_ISTRM    \"org/apache/hadoop/fs/FSDataInputStream\"\n#define HADOOP_OSTRM    \"org/apache/hadoop/fs/FSDataOutputStream\"\n#define HADOOP_STAT     \"org/apache/hadoop/fs/FileStatus\"\n#define HADOOP_FSPERM   \"org/apache/hadoop/fs/permission/FsPermission\"\n#define JAVA_NET_ISA    \"java/net/InetSocketAddress\"\n#define JAVA_NET_URI    \"java/net/URI\"\n#define JAVA_STRING     \"java/lang/String\"\n#define READ_OPTION     \"org/apache/hadoop/fs/ReadOption\"\n#define RENAME_OPTION   \"org/apache/hadoop/fs/Options$Rename\"\n\n#define JAVA_VOID       \"V\"\n\n/* Macros for constructing method signatures */\n#define JPARAM(X)           \"L\" X \";\"\n#define JARRPARAM(X)        \"[L\" X \";\"\n#define JMETHOD1(X, R)      \"(\" X \")\" R\n#define JMETHOD2(X, Y, R)   \"(\" X Y \")\" R\n#define JMETHOD3(X, Y, Z, R)   \"(\" X Y Z\")\" R\n\n#define KERBEROS_TICKET_CACHE_PATH \"hadoop.security.kerberos.ticket.cache.path\"\n\n// Bit fields for hdfsFile_internal flags\n#define HDFS_FILE_SUPPORTS_DIRECT_READ (1<<0)\n\ntSize readDirect(hdfsFS fs, hdfsFile f, void* buffer, tSize length);\nstatic void hdfsFreeFileInfoEntry(hdfsFileInfo *hdfsFileInfo);\n\n/**\n * The C equivalent of org.apache.org.hadoop.FSData(Input|Output)Stream .\n */\nenum hdfsStreamType\n{\n    HDFS_STREAM_UNINITIALIZED = 0,\n    HDFS_STREAM_INPUT = 1,\n    HDFS_STREAM_OUTPUT = 2,\n};\n\n/**\n * The 'file-handle' to a file in hdfs.\n */\nstruct hdfsFile_internal {\n    void* file;\n    enum hdfsStreamType type;\n    int flags;\n};\n\n#define HDFS_EXTENDED_FILE_INFO_ENCRYPTED 0x1\n\n/**\n * Extended file information.\n */\nstruct hdfsExtendedFileInfo {\n    int flags;\n};\n\nint hdfsFileIsOpenForRead(hdfsFile file)\n{\n    return (file->type == HDFS_STREAM_INPUT);\n}\n\nint hdfsGetHedgedReadMetrics(hdfsFS fs, struct hdfsHedgedReadMetrics **metrics)\n{\n    jthrowable jthr;\n    jobject hedgedReadMetrics = NULL;\n    jvalue jVal;\n    struct hdfsHedgedReadMetrics *m = NULL;\n    int ret;\n    jobject jFS = (jobject)fs;\n    JNIEnv* env 
= getJNIEnv();\n\n    if (env == NULL) {\n        errno = EINTERNAL;\n        return -1;\n    }\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS,\n                  HADOOP_DFS,\n                  \"getHedgedReadMetrics\",\n                  \"()Lorg/apache/hadoop/hdfs/DFSHedgedReadMetrics;\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetHedgedReadMetrics: getHedgedReadMetrics failed\");\n        goto done;\n    }\n    hedgedReadMetrics = jVal.l;\n\n    m = malloc(sizeof(struct hdfsHedgedReadMetrics));\n    if (!m) {\n      ret = ENOMEM;\n      goto done;\n    }\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, hedgedReadMetrics,\n                  \"org/apache/hadoop/hdfs/DFSHedgedReadMetrics\",\n                  \"getHedgedReadOps\", \"()J\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetHedgedReadStatistics: getHedgedReadOps failed\");\n        goto done;\n    }\n    m->hedgedReadOps = jVal.j;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, hedgedReadMetrics,\n                  \"org/apache/hadoop/hdfs/DFSHedgedReadMetrics\",\n                  \"getHedgedReadWins\", \"()J\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetHedgedReadStatistics: getHedgedReadWins failed\");\n        goto done;\n    }\n    m->hedgedReadOpsWin = jVal.j;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, hedgedReadMetrics,\n                  \"org/apache/hadoop/hdfs/DFSHedgedReadMetrics\",\n                  \"getHedgedReadOpsInCurThread\", \"()J\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetHedgedReadStatistics: getHedgedReadOpsInCurThread failed\");\n        goto done;\n    }\n    m->hedgedReadOpsInCurThread = jVal.j;\n\n    *metrics = m;\n    m = NULL;\n    ret = 0;\n\ndone:\n    destroyLocalReference(env, hedgedReadMetrics);\n    free(m);\n    if (ret) {\n      errno = ret;\n      return -1;\n    }\n    return 0;\n}\n\nvoid hdfsFreeHedgedReadMetrics(struct hdfsHedgedReadMetrics *metrics)\n{\n  free(metrics);\n}\n\nint hdfsFileGetReadStatistics(hdfsFile file,\n                              struct hdfsReadStatistics **stats)\n{\n    jthrowable jthr;\n    jobject readStats = NULL;\n    jvalue jVal;\n    struct hdfsReadStatistics *s = NULL;\n    int ret;\n    JNIEnv* env = getJNIEnv();\n\n    if (env == NULL) {\n        errno = EINTERNAL;\n        return -1;\n    }\n    if (file->type != HDFS_STREAM_INPUT) {\n        ret = EINVAL;\n        goto done;\n    }\n    jthr = invokeMethod(env, &jVal, INSTANCE, file->file, \n                  \"org/apache/hadoop/hdfs/client/HdfsDataInputStream\",\n                  \"getReadStatistics\",\n                  \"()Lorg/apache/hadoop/hdfs/ReadStatistics;\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsFileGetReadStatistics: getReadStatistics failed\");\n        goto done;\n    }\n    readStats = jVal.l;\n    s = malloc(sizeof(struct hdfsReadStatistics));\n    if (!s) {\n        ret = ENOMEM;\n        goto done;\n    }\n    jthr = invokeMethod(env, &jVal, INSTANCE, readStats,\n                  \"org/apache/hadoop/hdfs/ReadStatistics\",\n                  \"getTotalBytesRead\", \"()J\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsFileGetReadStatistics: getTotalBytesRead failed\");\n        goto done;\n    }\n    s->totalBytesRead = 
jVal.j;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, readStats,\n                  \"org/apache/hadoop/hdfs/ReadStatistics\",\n                  \"getTotalLocalBytesRead\", \"()J\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsFileGetReadStatistics: getTotalLocalBytesRead failed\");\n        goto done;\n    }\n    s->totalLocalBytesRead = jVal.j;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, readStats,\n                  \"org/apache/hadoop/hdfs/ReadStatistics\",\n                  \"getTotalShortCircuitBytesRead\", \"()J\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsFileGetReadStatistics: getTotalShortCircuitBytesRead failed\");\n        goto done;\n    }\n    s->totalShortCircuitBytesRead = jVal.j;\n    jthr = invokeMethod(env, &jVal, INSTANCE, readStats,\n                  \"org/apache/hadoop/hdfs/ReadStatistics\",\n                  \"getTotalZeroCopyBytesRead\", \"()J\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsFileGetReadStatistics: getTotalZeroCopyBytesRead failed\");\n        goto done;\n    }\n    s->totalZeroCopyBytesRead = jVal.j;\n    *stats = s;\n    s = NULL;\n    ret = 0;\n\ndone:\n    destroyLocalReference(env, readStats);\n    free(s);\n    if (ret) {\n      errno = ret;\n      return -1;\n    }\n    return 0;\n}\n\nint64_t hdfsReadStatisticsGetRemoteBytesRead(\n                            const struct hdfsReadStatistics *stats)\n{\n    return stats->totalBytesRead - stats->totalLocalBytesRead;\n}\n\nint hdfsFileClearReadStatistics(hdfsFile file)\n{\n    jthrowable jthr;\n    int ret;\n    JNIEnv* env = getJNIEnv();\n\n    if (env == NULL) {\n        errno = EINTERNAL;\n        return EINTERNAL;\n    }\n    if (file->type != HDFS_STREAM_INPUT) {\n        ret = EINVAL;\n        goto done;\n    }\n    jthr = invokeMethod(env, NULL, INSTANCE, file->file,\n                  \"org/apache/hadoop/hdfs/client/HdfsDataInputStream\",\n                  \"clearReadStatistics\", \"()V\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsFileClearReadStatistics: clearReadStatistics failed\");\n        goto done;\n    }\n    ret = 0;\ndone:\n    if (ret) {\n        errno = ret;\n        return ret;\n    }\n    return 0;\n}\n\nvoid hdfsFileFreeReadStatistics(struct hdfsReadStatistics *stats)\n{\n    free(stats);\n}\n\nint hdfsFileIsOpenForWrite(hdfsFile file)\n{\n    return (file->type == HDFS_STREAM_OUTPUT);\n}\n\nint hdfsFileUsesDirectRead(hdfsFile file)\n{\n    return !!(file->flags & HDFS_FILE_SUPPORTS_DIRECT_READ);\n}\n\nvoid hdfsFileDisableDirectRead(hdfsFile file)\n{\n    file->flags &= ~HDFS_FILE_SUPPORTS_DIRECT_READ;\n}\n\nint hdfsDisableDomainSocketSecurity(void)\n{\n    jthrowable jthr;\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n    jthr = invokeMethod(env, NULL, STATIC, NULL,\n            \"org/apache/hadoop/net/unix/DomainSocket\",\n            \"disableBindPathValidation\", \"()V\");\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"DomainSocket#disableBindPathValidation\");\n        return -1;\n    }\n    return 0;\n}\n\n/**\n * hdfsJniEnv: A wrapper struct to be used as 'value'\n * while saving thread -> JNIEnv* mappings\n */\ntypedef struct\n{\n    JNIEnv* env;\n} hdfsJniEnv;\n\n/**\n * Helper function to create a 
org.apache.hadoop.fs.Path object.\n * @param env: The JNIEnv pointer. \n * @param path: The file-path for which to construct org.apache.hadoop.fs.Path\n * object.\n * @return Returns a jobject on success and NULL on error.\n */\nstatic jthrowable constructNewObjectOfPath(JNIEnv *env, const char *path,\n                                           jobject *out)\n{\n    jthrowable jthr;\n    jstring jPathString;\n    jobject jPath;\n\n    //Construct a java.lang.String object\n    jthr = newJavaStr(env, path, &jPathString);\n    if (jthr)\n        return jthr;\n    //Construct the org.apache.hadoop.fs.Path object\n    jthr = constructNewObjectOfClass(env, &jPath, \"org/apache/hadoop/fs/Path\",\n                                     \"(Ljava/lang/String;)V\", jPathString);\n    destroyLocalReference(env, jPathString);\n    if (jthr)\n        return jthr;\n    *out = jPath;\n    return NULL;\n}\n\nstatic jthrowable hadoopConfGetStr(JNIEnv *env, jobject jConfiguration,\n        const char *key, char **val)\n{\n    jthrowable jthr;\n    jvalue jVal;\n    jstring jkey = NULL, jRet = NULL;\n\n    jthr = newJavaStr(env, key, &jkey);\n    if (jthr)\n        goto done;\n    jthr = invokeMethod(env, &jVal, INSTANCE, jConfiguration,\n            HADOOP_CONF, \"get\", JMETHOD1(JPARAM(JAVA_STRING),\n                                         JPARAM(JAVA_STRING)), jkey);\n    if (jthr)\n        goto done;\n    jRet = jVal.l;\n    jthr = newCStr(env, jRet, val);\ndone:\n    destroyLocalReference(env, jkey);\n    destroyLocalReference(env, jRet);\n    return jthr;\n}\n\nint hdfsConfGetStr(const char *key, char **val)\n{\n    JNIEnv *env;\n    int ret;\n    jthrowable jthr;\n    jobject jConfiguration = NULL;\n\n    env = getJNIEnv();\n    if (env == NULL) {\n        ret = EINTERNAL;\n        goto done;\n    }\n    jthr = constructNewObjectOfClass(env, &jConfiguration, HADOOP_CONF, \"()V\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsConfGetStr(%s): new Configuration\", key);\n        goto done;\n    }\n    jthr = hadoopConfGetStr(env, jConfiguration, key, val);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsConfGetStr(%s): hadoopConfGetStr\", key);\n        goto done;\n    }\n    ret = 0;\ndone:\n    destroyLocalReference(env, jConfiguration);\n    if (ret)\n        errno = ret;\n    return ret;\n}\n\nvoid hdfsConfStrFree(char *val)\n{\n    free(val);\n}\n\nstatic jthrowable hadoopConfGetInt(JNIEnv *env, jobject jConfiguration,\n        const char *key, int32_t *val)\n{\n    jthrowable jthr = NULL;\n    jvalue jVal;\n    jstring jkey = NULL;\n\n    jthr = newJavaStr(env, key, &jkey);\n    if (jthr)\n        return jthr;\n    jthr = invokeMethod(env, &jVal, INSTANCE, jConfiguration,\n            HADOOP_CONF, \"getInt\", JMETHOD2(JPARAM(JAVA_STRING), \"I\", \"I\"),\n            jkey, (jint)(*val));\n    destroyLocalReference(env, jkey);\n    if (jthr)\n        return jthr;\n    *val = jVal.i;\n    return NULL;\n}\n\nint hdfsConfGetInt(const char *key, int32_t *val)\n{\n    JNIEnv *env;\n    int ret;\n    jobject jConfiguration = NULL;\n    jthrowable jthr;\n\n    env = getJNIEnv();\n    if (env == NULL) {\n      ret = EINTERNAL;\n      goto done;\n    }\n    jthr = constructNewObjectOfClass(env, &jConfiguration, HADOOP_CONF, \"()V\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsConfGetInt(%s): new Configuration\", key);\n        goto done;\n    }\n    
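/* Look up the key on the new Configuration; on entry, *val supplies the\n     * default passed to Configuration#getInt, and on success it is overwritten\n     * with the resulting value. */\n    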
jthr = hadoopConfGetInt(env, jConfiguration, key, val);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsConfGetInt(%s): hadoopConfGetInt\", key);\n        goto done;\n    }\n    ret = 0;\ndone:\n    destroyLocalReference(env, jConfiguration);\n    if (ret)\n        errno = ret;\n    return ret;\n}\n\nstruct hdfsBuilderConfOpt {\n    struct hdfsBuilderConfOpt *next;\n    const char *key;\n    const char *val;\n};\n\nstruct hdfsBuilder {\n    int forceNewInstance;\n    const char *nn;\n    tPort port;\n    const char *kerbTicketCachePath;\n    const char *userName;\n    struct hdfsBuilderConfOpt *opts;\n};\n\nstruct hdfsBuilder *hdfsNewBuilder(void)\n{\n    struct hdfsBuilder *bld = calloc(1, sizeof(struct hdfsBuilder));\n    if (!bld) {\n        errno = ENOMEM;\n        return NULL;\n    }\n    return bld;\n}\n\nint hdfsBuilderConfSetStr(struct hdfsBuilder *bld, const char *key,\n                          const char *val)\n{\n    struct hdfsBuilderConfOpt *opt, *next;\n    \n    opt = calloc(1, sizeof(struct hdfsBuilderConfOpt));\n    if (!opt)\n        return -ENOMEM;\n    next = bld->opts;\n    bld->opts = opt;\n    opt->next = next;\n    opt->key = key;\n    opt->val = val;\n    return 0;\n}\n\nvoid hdfsFreeBuilder(struct hdfsBuilder *bld)\n{\n    struct hdfsBuilderConfOpt *cur, *next;\n\n    cur = bld->opts;\n    for (cur = bld->opts; cur; ) {\n        next = cur->next;\n        free(cur);\n        cur = next;\n    }\n    free(bld);\n}\n\nvoid hdfsBuilderSetForceNewInstance(struct hdfsBuilder *bld)\n{\n    bld->forceNewInstance = 1;\n}\n\nvoid hdfsBuilderSetNameNode(struct hdfsBuilder *bld, const char *nn)\n{\n    bld->nn = nn;\n}\n\nvoid hdfsBuilderSetNameNodePort(struct hdfsBuilder *bld, tPort port)\n{\n    bld->port = port;\n}\n\nvoid hdfsBuilderSetUserName(struct hdfsBuilder *bld, const char *userName)\n{\n    bld->userName = userName;\n}\n\nvoid hdfsBuilderSetKerbTicketCachePath(struct hdfsBuilder *bld,\n                                       const char *kerbTicketCachePath)\n{\n    bld->kerbTicketCachePath = kerbTicketCachePath;\n}\n\nhdfsFS hdfsConnect(const char *host, tPort port)\n{\n    struct hdfsBuilder *bld = hdfsNewBuilder();\n    if (!bld)\n        return NULL;\n    hdfsBuilderSetNameNode(bld, host);\n    hdfsBuilderSetNameNodePort(bld, port);\n    return hdfsBuilderConnect(bld);\n}\n\n/** Always return a new FileSystem handle */\nhdfsFS hdfsConnectNewInstance(const char *host, tPort port)\n{\n    struct hdfsBuilder *bld = hdfsNewBuilder();\n    if (!bld)\n        return NULL;\n    hdfsBuilderSetNameNode(bld, host);\n    hdfsBuilderSetNameNodePort(bld, port);\n    hdfsBuilderSetForceNewInstance(bld);\n    return hdfsBuilderConnect(bld);\n}\n\nhdfsFS hdfsConnectAsUser(const char *host, tPort port, const char *user)\n{\n    struct hdfsBuilder *bld = hdfsNewBuilder();\n    if (!bld)\n        return NULL;\n    hdfsBuilderSetNameNode(bld, host);\n    hdfsBuilderSetNameNodePort(bld, port);\n    hdfsBuilderSetUserName(bld, user);\n    return hdfsBuilderConnect(bld);\n}\n\n/** Always return a new FileSystem handle */\nhdfsFS hdfsConnectAsUserNewInstance(const char *host, tPort port,\n        const char *user)\n{\n    struct hdfsBuilder *bld = hdfsNewBuilder();\n    if (!bld)\n        return NULL;\n    hdfsBuilderSetNameNode(bld, host);\n    hdfsBuilderSetNameNodePort(bld, port);\n    hdfsBuilderSetForceNewInstance(bld);\n    hdfsBuilderSetUserName(bld, user);\n    return hdfsBuilderConnect(bld);\n}\n\n\n/**\n * Calculate the 
effective URI to use, given a builder configuration.\n *\n * If there is not already a URI scheme, we prepend 'hdfs://'.\n *\n * If there is not already a port specified, and a port was given to the\n * builder, we suffix that port.  If there is a port specified but also one in\n * the URI, that is an error.\n *\n * @param bld       The hdfs builder object\n * @param uri       (out param) dynamically allocated string representing the\n *                  effective URI\n *\n * @return          0 on success; error code otherwise\n */\nstatic int calcEffectiveURI(struct hdfsBuilder *bld, char ** uri)\n{\n    const char *scheme;\n    char suffix[64];\n    const char *lastColon;\n    char *u;\n    size_t uriLen;\n\n    if (!bld->nn)\n        return EINVAL;\n    scheme = (strstr(bld->nn, \"://\")) ? \"\" : \"hdfs://\";\n    if (bld->port == 0) {\n        suffix[0] = '\\0';\n    } else {\n        lastColon = strrchr(bld->nn, ':');\n        if (lastColon && (strspn(lastColon + 1, \"0123456789\") ==\n                          strlen(lastColon + 1))) {\n            fprintf(stderr, \"port %d was given, but URI '%s' already \"\n                \"contains a port!\\n\", bld->port, bld->nn);\n            return EINVAL;\n        }\n        snprintf(suffix, sizeof(suffix), \":%d\", bld->port);\n    }\n\n    uriLen = strlen(scheme) + strlen(bld->nn) + strlen(suffix);\n    u = malloc((uriLen + 1) * (sizeof(char)));\n    if (!u) {\n        fprintf(stderr, \"calcEffectiveURI: out of memory\");\n        return ENOMEM;\n    }\n    snprintf(u, uriLen + 1, \"%s%s%s\", scheme, bld->nn, suffix);\n    *uri = u;\n    return 0;\n}\n\nstatic const char *maybeNull(const char *str)\n{\n    return str ? str : \"(NULL)\";\n}\n\nstatic const char *hdfsBuilderToStr(const struct hdfsBuilder *bld,\n                                    char *buf, size_t bufLen)\n{\n    snprintf(buf, bufLen, \"forceNewInstance=%d, nn=%s, port=%d, \"\n             \"kerbTicketCachePath=%s, userName=%s\",\n             bld->forceNewInstance, maybeNull(bld->nn), bld->port,\n             maybeNull(bld->kerbTicketCachePath), maybeNull(bld->userName));\n    return buf;\n}\n\nhdfsFS hdfsBuilderConnect(struct hdfsBuilder *bld)\n{\n    JNIEnv *env = 0;\n    jobject jConfiguration = NULL, jFS = NULL, jURI = NULL, jCachePath = NULL;\n    jstring jURIString = NULL, jUserString = NULL;\n    jvalue  jVal;\n    jthrowable jthr = NULL;\n    char *cURI = 0, buf[512];\n    int ret;\n    jobject jRet = NULL;\n    struct hdfsBuilderConfOpt *opt;\n\n    //Get the JNIEnv* corresponding to current thread\n    env = getJNIEnv();\n    if (env == NULL) {\n        ret = EINTERNAL;\n        goto done;\n    }\n\n    //  jConfiguration = new Configuration();\n    jthr = constructNewObjectOfClass(env, &jConfiguration, HADOOP_CONF, \"()V\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsBuilderConnect(%s)\", hdfsBuilderToStr(bld, buf, sizeof(buf)));\n        goto done;\n    }\n    // set configuration values\n    for (opt = bld->opts; opt; opt = opt->next) {\n        jthr = hadoopConfSetStr(env, jConfiguration, opt->key, opt->val);\n        if (jthr) {\n            ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"hdfsBuilderConnect(%s): error setting conf '%s' to '%s'\",\n                hdfsBuilderToStr(bld, buf, sizeof(buf)), opt->key, opt->val);\n            goto done;\n        }\n    }\n \n    //Check what type of FileSystem the caller wants...\n    if (bld->nn == NULL) {\n        // Get a local 
filesystem.\n        if (bld->forceNewInstance) {\n            // fs = FileSystem#newInstanceLocal(conf);\n            jthr = invokeMethod(env, &jVal, STATIC, NULL, HADOOP_FS,\n                    \"newInstanceLocal\", JMETHOD1(JPARAM(HADOOP_CONF),\n                    JPARAM(HADOOP_LOCALFS)), jConfiguration);\n            if (jthr) {\n                ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                    \"hdfsBuilderConnect(%s)\",\n                    hdfsBuilderToStr(bld, buf, sizeof(buf)));\n                goto done;\n            }\n            jFS = jVal.l;\n        } else {\n            // fs = FileSystem#getLocal(conf);\n            jthr = invokeMethod(env, &jVal, STATIC, NULL, HADOOP_FS, \"getLocal\",\n                             JMETHOD1(JPARAM(HADOOP_CONF),\n                                      JPARAM(HADOOP_LOCALFS)),\n                             jConfiguration);\n            if (jthr) {\n                ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                    \"hdfsBuilderConnect(%s)\",\n                    hdfsBuilderToStr(bld, buf, sizeof(buf)));\n                goto done;\n            }\n            jFS = jVal.l;\n        }\n    } else {\n        if (!strcmp(bld->nn, \"default\")) {\n            // jURI = FileSystem.getDefaultUri(conf)\n            jthr = invokeMethod(env, &jVal, STATIC, NULL, HADOOP_FS,\n                          \"getDefaultUri\",\n                          \"(Lorg/apache/hadoop/conf/Configuration;)Ljava/net/URI;\",\n                          jConfiguration);\n            if (jthr) {\n                ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                    \"hdfsBuilderConnect(%s)\",\n                    hdfsBuilderToStr(bld, buf, sizeof(buf)));\n                goto done;\n            }\n            jURI = jVal.l;\n        } else {\n            // fs = FileSystem#get(URI, conf, ugi);\n            ret = calcEffectiveURI(bld, &cURI);\n            if (ret)\n                goto done;\n            jthr = newJavaStr(env, cURI, &jURIString);\n            if (jthr) {\n                ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                    \"hdfsBuilderConnect(%s)\",\n                    hdfsBuilderToStr(bld, buf, sizeof(buf)));\n                goto done;\n            }\n            jthr = invokeMethod(env, &jVal, STATIC, NULL, JAVA_NET_URI,\n                             \"create\", \"(Ljava/lang/String;)Ljava/net/URI;\",\n                             jURIString);\n            if (jthr) {\n                ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                    \"hdfsBuilderConnect(%s)\",\n                    hdfsBuilderToStr(bld, buf, sizeof(buf)));\n                goto done;\n            }\n            jURI = jVal.l;\n        }\n\n        if (bld->kerbTicketCachePath) {\n            jthr = hadoopConfSetStr(env, jConfiguration,\n                KERBEROS_TICKET_CACHE_PATH, bld->kerbTicketCachePath);\n            if (jthr) {\n                ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                    \"hdfsBuilderConnect(%s)\",\n                    hdfsBuilderToStr(bld, buf, sizeof(buf)));\n                goto done;\n            }\n        }\n        jthr = newJavaStr(env, bld->userName, &jUserString);\n        if (jthr) {\n            ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"hdfsBuilderConnect(%s)\",\n                hdfsBuilderToStr(bld, buf, sizeof(buf)));\n            goto done;\n        }\n        if 
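/*\n * Usage sketch (illustrative only): connecting through the builder API\n * defined above. The host name is hypothetical; with nn =\n * \"nn1.example.com\" and port = 9000, calcEffectiveURI() produces\n * \"hdfs://nn1.example.com:9000\". hdfsBuilderConnect() always consumes\n * and frees the builder.\n *\n *   struct hdfsBuilder *b = hdfsNewBuilder();\n *   if (b) {\n *       hdfsBuilderSetNameNode(b, \"nn1.example.com\");\n *       hdfsBuilderSetNameNodePort(b, 9000);\n *       hdfsBuilderConfSetStr(b, \"dfs.client.read.shortcircuit\", \"true\");\n *       hdfsFS fs = hdfsBuilderConnect(b);   // frees b, NULL on failure\n *   }\n */\n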
(bld->forceNewInstance) {\n            jthr = invokeMethod(env, &jVal, STATIC, NULL, HADOOP_FS,\n                    \"newInstance\", JMETHOD3(JPARAM(JAVA_NET_URI), \n                        JPARAM(HADOOP_CONF), JPARAM(JAVA_STRING),\n                        JPARAM(HADOOP_FS)),\n                    jURI, jConfiguration, jUserString);\n            if (jthr) {\n                ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                    \"hdfsBuilderConnect(%s)\",\n                    hdfsBuilderToStr(bld, buf, sizeof(buf)));\n                goto done;\n            }\n            jFS = jVal.l;\n        } else {\n            jthr = invokeMethod(env, &jVal, STATIC, NULL, HADOOP_FS, \"get\",\n                    JMETHOD3(JPARAM(JAVA_NET_URI), JPARAM(HADOOP_CONF),\n                        JPARAM(JAVA_STRING), JPARAM(HADOOP_FS)),\n                        jURI, jConfiguration, jUserString);\n            if (jthr) {\n                ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                    \"hdfsBuilderConnect(%s)\",\n                    hdfsBuilderToStr(bld, buf, sizeof(buf)));\n                goto done;\n            }\n            jFS = jVal.l;\n        }\n    }\n    jRet = (*env)->NewGlobalRef(env, jFS);\n    if (!jRet) {\n        ret = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n                    \"hdfsBuilderConnect(%s)\",\n                    hdfsBuilderToStr(bld, buf, sizeof(buf)));\n        goto done;\n    }\n    ret = 0;\n\ndone:\n    // Release unnecessary local references\n    destroyLocalReference(env, jConfiguration);\n    destroyLocalReference(env, jFS);\n    destroyLocalReference(env, jURI);\n    destroyLocalReference(env, jCachePath);\n    destroyLocalReference(env, jURIString);\n    destroyLocalReference(env, jUserString);\n    free(cURI);\n    hdfsFreeBuilder(bld);\n\n    if (ret) {\n        errno = ret;\n        return NULL;\n    }\n    return (hdfsFS)jRet;\n}\n\nint hdfsDisconnect(hdfsFS fs)\n{\n    // JAVA EQUIVALENT:\n    //  fs.close()\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    int ret;\n    jobject jFS;\n    jthrowable jthr;\n\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Parameters\n    jFS = (jobject)fs;\n\n    //Sanity check\n    if (fs == NULL) {\n        errno = EBADF;\n        return -1;\n    }\n\n    jthr = invokeMethod(env, NULL, INSTANCE, jFS, HADOOP_FS,\n                     \"close\", \"()V\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsDisconnect: FileSystem#close\");\n    } else {\n        ret = 0;\n    }\n    (*env)->DeleteGlobalRef(env, jFS);\n    if (ret) {\n        errno = ret;\n        return -1;\n    }\n    return 0;\n}\n\n/**\n * Get the default block size of a FileSystem object.\n *\n * @param env       The Java env\n * @param jFS       The FileSystem object\n * @param jPath     The path to find the default blocksize at\n * @param out       (out param) the default block size\n *\n * @return          NULL on success; or the exception\n */\nstatic jthrowable getDefaultBlockSize(JNIEnv *env, jobject jFS,\n                                      jobject jPath, jlong *out)\n{\n    jthrowable jthr;\n    jvalue jVal;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                 \"getDefaultBlockSize\", JMETHOD1(JPARAM(HADOOP_PATH), \"J\"), jPath);\n    if (jthr)\n        return jthr;\n    *out = jVal.j;\n    return NULL;\n}\n\nhdfsFile hdfsOpenFile(hdfsFS fs, 
const char *path, int flags,\n                      int bufferSize, short replication, tSize blockSize)\n{\n    struct hdfsStreamBuilder *bld = hdfsStreamBuilderAlloc(fs, path, flags);\n    if (!bld) {\n        // hdfsStreamBuilderAlloc has already set errno (ENOMEM).\n        return NULL;\n    }\n    if (bufferSize != 0) {\n      hdfsStreamBuilderSetBufferSize(bld, bufferSize);\n    }\n    if (replication != 0) {\n      hdfsStreamBuilderSetReplication(bld, replication);\n    }\n    if (blockSize != 0) {\n      hdfsStreamBuilderSetDefaultBlockSize(bld, blockSize);\n    }\n    return hdfsStreamBuilderBuild(bld);\n}\n\nstruct hdfsStreamBuilder {\n    hdfsFS fs;\n    int flags;\n    int32_t bufferSize;\n    int16_t replication;\n    int64_t defaultBlockSize;\n    char path[1];\n};\n\nstruct hdfsStreamBuilder *hdfsStreamBuilderAlloc(hdfsFS fs,\n                                            const char *path, int flags)\n{\n    size_t path_len = strlen(path);\n    struct hdfsStreamBuilder *bld;\n\n    // sizeof(hdfsStreamBuilder->path) includes one byte for the string\n    // terminator\n    bld = malloc(sizeof(struct hdfsStreamBuilder) + path_len);\n    if (!bld) {\n        errno = ENOMEM;\n        return NULL;\n    }\n    bld->fs = fs;\n    bld->flags = flags;\n    bld->bufferSize = 0;\n    bld->replication = 0;\n    bld->defaultBlockSize = 0;\n    memcpy(bld->path, path, path_len);\n    bld->path[path_len] = '\\0';\n    return bld;\n}\n\nvoid hdfsStreamBuilderFree(struct hdfsStreamBuilder *bld)\n{\n    free(bld);\n}\n\nint hdfsStreamBuilderSetBufferSize(struct hdfsStreamBuilder *bld,\n                                   int32_t bufferSize)\n{\n    if ((bld->flags & O_ACCMODE) != O_WRONLY) {\n        errno = EINVAL;\n        return -1;\n    }\n    bld->bufferSize = bufferSize;\n    return 0;\n}\n\nint hdfsStreamBuilderSetReplication(struct hdfsStreamBuilder *bld,\n                                    int16_t replication)\n{\n    if ((bld->flags & O_ACCMODE) != O_WRONLY) {\n        errno = EINVAL;\n        return -1;\n    }\n    bld->replication = replication;\n    return 0;\n}\n\nint hdfsStreamBuilderSetDefaultBlockSize(struct hdfsStreamBuilder *bld,\n                                         int64_t defaultBlockSize)\n{\n    if ((bld->flags & O_ACCMODE) != O_WRONLY) {\n        errno = EINVAL;\n        return -1;\n    }\n    bld->defaultBlockSize = defaultBlockSize;\n    return 0;\n}\n\nstatic hdfsFile hdfsOpenFileImpl(hdfsFS fs, const char *path, int flags,\n                  int32_t bufferSize, int16_t replication, int64_t blockSize)\n{\n    /*\n      JAVA EQUIVALENT:\n       File f = new File(path);\n       FSData{Input|Output}Stream f{is|os} = fs.create(f);\n       return f{is|os};\n    */\n    int accmode = flags & O_ACCMODE;\n    jstring jStrBufferSize = NULL, jStrReplication = NULL;\n    jobject jConfiguration = NULL, jPath = NULL, jFile = NULL;\n    jobject jFS = (jobject)fs;\n    jthrowable jthr;\n    jvalue jVal;\n    hdfsFile file = NULL;\n    int ret;\n    jint jBufferSize = bufferSize;\n    jshort jReplication = replication;\n\n    /* The hadoop java api/signature */\n    const char *method = NULL;\n    const char *signature = NULL;\n\n    /* Get the JNIEnv* corresponding to current thread */\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return NULL;\n    }\n\n\n    if (accmode == O_RDONLY || accmode == O_WRONLY) {\n\t/* yay */\n    } else if (accmode == O_RDWR) {\n      fprintf(stderr, \"ERROR: cannot open an hdfs file in O_RDWR mode\\n\");\n      errno = ENOTSUP;\n      return NULL;\n    } else {\n      fprintf(stderr, \"ERROR: cannot open an hdfs file in 
mode 0x%x\\n\", accmode);\n      errno = EINVAL;\n      return NULL;\n    }\n\n    if ((flags & O_CREAT) && (flags & O_EXCL)) {\n      fprintf(stderr, \"WARN: hdfs does not truly support O_CREAT && O_EXCL\\n\");\n    }\n\n    if (accmode == O_RDONLY) {\n\tmethod = \"open\";\n        signature = JMETHOD2(JPARAM(HADOOP_PATH), \"I\", JPARAM(HADOOP_ISTRM));\n    } else if (flags & O_APPEND) {\n\tmethod = \"append\";\n\tsignature = JMETHOD1(JPARAM(HADOOP_PATH), JPARAM(HADOOP_OSTRM));\n    } else {\n\tmethod = \"create\";\n\tsignature = JMETHOD2(JPARAM(HADOOP_PATH), \"ZISJ\", JPARAM(HADOOP_OSTRM));\n    }\n\n    /* Create an object of org.apache.hadoop.fs.Path */\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsOpenFile(%s): constructNewObjectOfPath\", path);\n        goto done;\n    }\n\n    /* Get the Configuration object from the FileSystem object */\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                     \"getConf\", JMETHOD1(\"\", JPARAM(HADOOP_CONF)));\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsOpenFile(%s): FileSystem#getConf\", path);\n        goto done;\n    }\n    jConfiguration = jVal.l;\n\n    jStrBufferSize = (*env)->NewStringUTF(env, \"io.file.buffer.size\"); \n    if (!jStrBufferSize) {\n        ret = printPendingExceptionAndFree(env, PRINT_EXC_ALL, \"OOM\");\n        goto done;\n    }\n    jStrReplication = (*env)->NewStringUTF(env, \"dfs.replication\");\n    if (!jStrReplication) {\n        ret = printPendingExceptionAndFree(env, PRINT_EXC_ALL, \"OOM\");\n        goto done;\n    }\n\n    if (!bufferSize) {\n        jthr = invokeMethod(env, &jVal, INSTANCE, jConfiguration, \n                         HADOOP_CONF, \"getInt\", \"(Ljava/lang/String;I)I\",\n                         jStrBufferSize, 4096);\n        if (jthr) {\n            ret = printExceptionAndFree(env, jthr, NOPRINT_EXC_FILE_NOT_FOUND |\n                NOPRINT_EXC_ACCESS_CONTROL | NOPRINT_EXC_UNRESOLVED_LINK,\n                \"hdfsOpenFile(%s): Configuration#getInt(io.file.buffer.size)\",\n                path);\n            goto done;\n        }\n        jBufferSize = jVal.i;\n    }\n\n    if ((accmode == O_WRONLY) && (flags & O_APPEND) == 0) {\n        if (!replication) {\n            jthr = invokeMethod(env, &jVal, INSTANCE, jConfiguration, \n                             HADOOP_CONF, \"getInt\", \"(Ljava/lang/String;I)I\",\n                             jStrReplication, 1);\n            if (jthr) {\n                ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                    \"hdfsOpenFile(%s): Configuration#getInt(dfs.replication)\",\n                    path);\n                goto done;\n            }\n            jReplication = (jshort)jVal.i;\n        }\n    }\n \n    /* Create and return either the FSDataInputStream or\n       FSDataOutputStream references jobject jStream */\n\n    // READ?\n    if (accmode == O_RDONLY) {\n        jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                       method, signature, jPath, jBufferSize);\n    }  else if ((accmode == O_WRONLY) && (flags & O_APPEND)) {\n        // WRITE/APPEND?\n       jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                       method, signature, jPath);\n    } else {\n        // WRITE/CREATE\n        jboolean jOverWrite = 1;\n        jlong jBlockSize = blockSize;\n\n        if (jBlockSize == 0) {\n       
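/*\n * Usage sketch (illustrative only): the stream-builder path that feeds\n * this function. \"fs\" is assumed to come from hdfsBuilderConnect();\n * the setters are only accepted on write streams (O_WRONLY), and\n * hdfsStreamBuilderBuild() frees the builder in all cases.\n *\n *   struct hdfsStreamBuilder *sb =\n *       hdfsStreamBuilderAlloc(fs, \"/tmp/example.bin\", O_WRONLY);\n *   if (sb) {\n *       hdfsStreamBuilderSetReplication(sb, 2);\n *       hdfsFile f = hdfsStreamBuilderBuild(sb);   // frees sb\n *   }\n */\n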
     jthr = getDefaultBlockSize(env, jFS, jPath, &jBlockSize);\n            if (jthr) {\n                ret = EIO;\n                goto done;\n            }\n        }\n        jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                         method, signature, jPath, jOverWrite,\n                         jBufferSize, jReplication, jBlockSize);\n    }\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsOpenFile(%s): FileSystem#%s(%s)\", path, method, signature);\n        goto done;\n    }\n    jFile = jVal.l;\n\n    file = calloc(1, sizeof(struct hdfsFile_internal));\n    if (!file) {\n        fprintf(stderr, \"hdfsOpenFile(%s): OOM create hdfsFile\\n\", path);\n        ret = ENOMEM;\n        goto done;\n    }\n    file->file = (*env)->NewGlobalRef(env, jFile);\n    if (!file->file) {\n        ret = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsOpenFile(%s): NewGlobalRef\", path); \n        goto done;\n    }\n    file->type = (((flags & O_WRONLY) == 0) ? HDFS_STREAM_INPUT :\n        HDFS_STREAM_OUTPUT);\n    file->flags = 0;\n\n    if ((flags & O_WRONLY) == 0) {\n        // Try a test read to see if we can do direct reads\n        char buf;\n        if (readDirect(fs, file, &buf, 0) == 0) {\n            // Success - 0-byte read should return 0\n            file->flags |= HDFS_FILE_SUPPORTS_DIRECT_READ;\n        } else if (errno != ENOTSUP) {\n            // Unexpected error. Clear it, don't set the direct flag.\n            fprintf(stderr,\n                  \"hdfsOpenFile(%s): WARN: Unexpected error %d when testing \"\n                  \"for direct read compatibility\\n\", path, errno);\n        }\n    }\n    ret = 0;\n\ndone:\n    destroyLocalReference(env, jStrBufferSize);\n    destroyLocalReference(env, jStrReplication);\n    destroyLocalReference(env, jConfiguration); \n    destroyLocalReference(env, jPath); \n    destroyLocalReference(env, jFile); \n    if (ret) {\n        if (file) {\n            if (file->file) {\n                (*env)->DeleteGlobalRef(env, file->file);\n            }\n            free(file);\n        }\n        errno = ret;\n        return NULL;\n    }\n    return file;\n}\n\nhdfsFile hdfsStreamBuilderBuild(struct hdfsStreamBuilder *bld)\n{\n    hdfsFile file = hdfsOpenFileImpl(bld->fs, bld->path, bld->flags,\n                  bld->bufferSize, bld->replication, bld->defaultBlockSize);\n    int prevErrno = errno;\n    hdfsStreamBuilderFree(bld);\n    errno = prevErrno;\n    return file;\n}\n\nint hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength)\n{\n    jobject jFS = (jobject)fs;\n    jthrowable jthr;\n    jvalue jVal;\n    jobject jPath = NULL;\n\n    JNIEnv *env = getJNIEnv();\n\n    if (!env) {\n        errno = EINTERNAL;\n        return -1;\n    }\n\n    /* Create an object of org.apache.hadoop.fs.Path */\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsTruncateFile(%s): constructNewObjectOfPath\", path);\n        return -1;\n    }\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                        \"truncate\", JMETHOD2(JPARAM(HADOOP_PATH), \"J\", \"Z\"),\n                        jPath, newlength);\n    destroyLocalReference(env, jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsTruncateFile(%s): FileSystem#truncate\", path);\n        return -1;\n    }\n    if (jVal.z 
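/*\n * Note (illustrative): per the FileSystem#truncate contract, true means\n * the truncate completed immediately, while false means block recovery\n * is still in progress, so this function reports 1 or 0 respectively:\n *\n *   int r = hdfsTruncateFile(fs, \"/tmp/example.bin\", 1024);\n *   if (r == 0) {\n *       // truncate is asynchronous; wait before reopening for append\n *   }\n */\n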
== JNI_TRUE) {\n        return 1;\n    }\n    return 0;\n}\n\nint hdfsUnbufferFile(hdfsFile file)\n{\n    int ret;\n    jthrowable jthr;\n    JNIEnv *env = getJNIEnv();\n\n    if (!env) {\n        ret = EINTERNAL;\n        goto done;\n    }\n    if (file->type != HDFS_STREAM_INPUT) {\n        ret = ENOTSUP;\n        goto done;\n    }\n    jthr = invokeMethod(env, NULL, INSTANCE, file->file, HADOOP_ISTRM,\n                     \"unbuffer\", \"()V\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                HADOOP_ISTRM \"#unbuffer failed:\");\n        goto done;\n    }\n    ret = 0;\n\ndone:\n    errno = ret;\n    return ret;\n}\n\nint hdfsCloseFile(hdfsFS fs, hdfsFile file)\n{\n    int ret;\n    // JAVA EQUIVALENT:\n    //  file.close \n\n    //The interface whose 'close' method to be called\n    const char *interface;\n    const char *interfaceShortName;\n\n    //Caught exception\n    jthrowable jthr;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n        errno = EINTERNAL;\n        return -1;\n    }\n\n    //Sanity check\n    if (!file || file->type == HDFS_STREAM_UNINITIALIZED) {\n        errno = EBADF;\n        return -1;\n    }\n\n    interface = (file->type == HDFS_STREAM_INPUT) ?\n        HADOOP_ISTRM : HADOOP_OSTRM;\n  \n    jthr = invokeMethod(env, NULL, INSTANCE, file->file, interface,\n                     \"close\", \"()V\");\n    if (jthr) {\n        interfaceShortName = (file->type == HDFS_STREAM_INPUT) ? \n            \"FSDataInputStream\" : \"FSDataOutputStream\";\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"%s#close\", interfaceShortName);\n    } else {\n        ret = 0;\n    }\n\n    //De-allocate memory\n    (*env)->DeleteGlobalRef(env, file->file);\n    free(file);\n\n    if (ret) {\n        errno = ret;\n        return -1;\n    }\n    return 0;\n}\n\nint hdfsExists(hdfsFS fs, const char *path)\n{\n    JNIEnv *env = getJNIEnv();\n    jobject jPath;\n    jvalue  jVal;\n    jobject jFS = (jobject)fs;\n    jthrowable jthr;\n\n    if (env == NULL) {\n        errno = EINTERNAL;\n        return -1;\n    }\n    \n    if (path == NULL) {\n        errno = EINVAL;\n        return -1;\n    }\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsExists: constructNewObjectOfPath\");\n        return -1;\n    }\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n            \"exists\", JMETHOD1(JPARAM(HADOOP_PATH), \"Z\"), jPath);\n    destroyLocalReference(env, jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsExists: invokeMethod(%s)\",\n            JMETHOD1(JPARAM(HADOOP_PATH), \"Z\"));\n        return -1;\n    }\n    if (jVal.z) {\n        return 0;\n    } else {\n        errno = ENOENT;\n        return -1;\n    }\n}\n\n// Checks input file for readiness for reading.\nstatic int readPrepare(JNIEnv* env, hdfsFS fs, hdfsFile f,\n                       jobject* jInputStream)\n{\n    *jInputStream = (jobject)(f ? f->file : NULL);\n\n    //Sanity check\n    if (!f || f->type == HDFS_STREAM_UNINITIALIZED) {\n      errno = EBADF;\n      return -1;\n    }\n\n    //Error checking... 
make sure that this file is 'readable'\n    if (f->type != HDFS_STREAM_INPUT) {\n      fprintf(stderr, \"Cannot read from a non-InputStream object!\\n\");\n      errno = EINVAL;\n      return -1;\n    }\n\n    return 0;\n}\n\ntSize hdfsRead(hdfsFS fs, hdfsFile f, void* buffer, tSize length)\n{\n    jobject jInputStream;\n    jbyteArray jbRarray;\n    jvalue jVal;\n    jthrowable jthr;\n    JNIEnv* env;\n\n    if (length == 0) {\n        return 0;\n    } else if (length < 0) {\n        errno = EINVAL;\n        return -1;\n    }\n    if (f->flags & HDFS_FILE_SUPPORTS_DIRECT_READ) {\n      return readDirect(fs, f, buffer, length);\n    }\n\n    // JAVA EQUIVALENT:\n    //  byte [] bR = new byte[length];\n    //  fis.read(bR);\n\n    //Get the JNIEnv* corresponding to current thread\n    env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Parameters\n    if (readPrepare(env, fs, f, &jInputStream) == -1) {\n      return -1;\n    }\n\n    //Read the requisite bytes\n    jbRarray = (*env)->NewByteArray(env, length);\n    if (!jbRarray) {\n        errno = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsRead: NewByteArray\");\n        return -1;\n    }\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jInputStream, HADOOP_ISTRM,\n                               \"read\", \"([B)I\", jbRarray);\n    if (jthr) {\n        destroyLocalReference(env, jbRarray);\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsRead: FSDataInputStream#read\");\n        return -1;\n    }\n    if (jVal.i < 0) {\n        // EOF\n        destroyLocalReference(env, jbRarray);\n        return 0;\n    } else if (jVal.i == 0) {\n        destroyLocalReference(env, jbRarray);\n        errno = EINTR;\n        return -1;\n    }\n    // Only copy out the bytes the Java read actually produced.\n    (*env)->GetByteArrayRegion(env, jbRarray, 0, jVal.i, buffer);\n    destroyLocalReference(env, jbRarray);\n    if ((*env)->ExceptionCheck(env)) {\n        errno = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsRead: GetByteArrayRegion\");\n        return -1;\n    }\n    return jVal.i;\n}\n\n// Reads using the read(ByteBuffer) API, which does fewer copies\ntSize readDirect(hdfsFS fs, hdfsFile f, void* buffer, tSize length)\n{\n    // JAVA EQUIVALENT:\n    //  ByteBuffer bbuffer = ByteBuffer.allocateDirect(length) // wraps C buffer\n    //  fis.read(bbuffer);\n\n    jobject jInputStream;\n    jvalue jVal;\n    jthrowable jthr;\n    jobject bb;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    if (readPrepare(env, fs, f, &jInputStream) == -1) {\n      return -1;\n    }\n\n    //Read the requisite bytes\n    bb = (*env)->NewDirectByteBuffer(env, buffer, length);\n    if (bb == NULL) {\n        errno = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"readDirect: NewDirectByteBuffer\");\n        return -1;\n    }\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jInputStream,\n        HADOOP_ISTRM, \"read\", \"(Ljava/nio/ByteBuffer;)I\", bb);\n    destroyLocalReference(env, bb);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"readDirect: FSDataInputStream#read\");\n        return -1;\n    }\n    return (jVal.i < 0) ? 
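/*\n * Usage sketch (illustrative only): hdfsRead and readDirect return the\n * number of bytes actually read, 0 at EOF, and -1 with errno set on\n * error (a zero-byte Java read is mapped to EINTR above, so robust\n * callers may retry on it):\n *\n *   char buf[4096];\n *   tSize n;\n *   while ((n = hdfsRead(fs, f, buf, sizeof(buf))) > 0) {\n *       // consume n bytes\n *   }\n *   if (n < 0) {\n *       perror(\"hdfsRead\");\n *   }\n */\n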
0 : jVal.i;\n}\n\ntSize hdfsPread(hdfsFS fs, hdfsFile f, tOffset position,\n                void* buffer, tSize length)\n{\n    JNIEnv* env;\n    jbyteArray jbRarray;\n    jvalue jVal;\n    jthrowable jthr;\n\n    if (length == 0) {\n        return 0;\n    } else if (length < 0) {\n        errno = EINVAL;\n        return -1;\n    }\n    if (!f || f->type == HDFS_STREAM_UNINITIALIZED) {\n        errno = EBADF;\n        return -1;\n    }\n\n    env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Error checking... make sure that this file is 'readable'\n    if (f->type != HDFS_STREAM_INPUT) {\n        fprintf(stderr, \"Cannot read from a non-InputStream object!\\n\");\n        errno = EINVAL;\n        return -1;\n    }\n\n    // JAVA EQUIVALENT:\n    //  byte [] bR = new byte[length];\n    //  fis.read(pos, bR, 0, length);\n    jbRarray = (*env)->NewByteArray(env, length);\n    if (!jbRarray) {\n        errno = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsPread: NewByteArray\");\n        return -1;\n    }\n    jthr = invokeMethod(env, &jVal, INSTANCE, f->file, HADOOP_ISTRM,\n                     \"read\", \"(J[BII)I\", position, jbRarray, 0, length);\n    if (jthr) {\n        destroyLocalReference(env, jbRarray);\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsPread: FSDataInputStream#read\");\n        return -1;\n    }\n    if (jVal.i < 0) {\n        // EOF\n        destroyLocalReference(env, jbRarray);\n        return 0;\n    } else if (jVal.i == 0) {\n        destroyLocalReference(env, jbRarray);\n        errno = EINTR;\n        return -1;\n    }\n    (*env)->GetByteArrayRegion(env, jbRarray, 0, jVal.i, buffer);\n    destroyLocalReference(env, jbRarray);\n    if ((*env)->ExceptionCheck(env)) {\n        errno = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsPread: GetByteArrayRegion\");\n        return -1;\n    }\n    return jVal.i;\n}\n\ntSize hdfsWrite(hdfsFS fs, hdfsFile f, const void* buffer, tSize length)\n{\n    // JAVA EQUIVALENT\n    // byte b[] = str.getBytes();\n    // fso.write(b);\n\n    jobject jOutputStream;\n    jbyteArray jbWarray;\n    jthrowable jthr;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Sanity check\n    if (!f || f->type == HDFS_STREAM_UNINITIALIZED) {\n        errno = EBADF;\n        return -1;\n    }\n\n    jOutputStream = f->file;\n    \n    if (length < 0) {\n    \terrno = EINVAL;\n    \treturn -1;\n    }\n\n    //Error checking... 
make sure that this file is 'writable'\n    if (f->type != HDFS_STREAM_OUTPUT) {\n        fprintf(stderr, \"Cannot write into a non-OutputStream object!\\n\");\n        errno = EINVAL;\n        return -1;\n    }\n\n    if (length == 0) {\n        return 0;\n    }\n    //Write the requisite bytes into the file\n    jbWarray = (*env)->NewByteArray(env, length);\n    if (!jbWarray) {\n        errno = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsWrite: NewByteArray\");\n        return -1;\n    }\n    (*env)->SetByteArrayRegion(env, jbWarray, 0, length, buffer);\n    if ((*env)->ExceptionCheck(env)) {\n        destroyLocalReference(env, jbWarray);\n        errno = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsWrite(length = %d): SetByteArrayRegion\", length);\n        return -1;\n    }\n    jthr = invokeMethod(env, NULL, INSTANCE, jOutputStream,\n            HADOOP_OSTRM, \"write\", \"([B)V\", jbWarray);\n    destroyLocalReference(env, jbWarray);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsWrite: FSDataOutputStream#write\");\n        return -1;\n    }\n    // Unlike most Java streams, FSDataOutputStream never does partial writes.\n    // If we succeeded, all the data was written.\n    return length;\n}\n\nint hdfsSeek(hdfsFS fs, hdfsFile f, tOffset desiredPos) \n{\n    // JAVA EQUIVALENT\n    //  fis.seek(pos);\n\n    jobject jInputStream;\n    jthrowable jthr;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Sanity check\n    if (!f || f->type != HDFS_STREAM_INPUT) {\n        errno = EBADF;\n        return -1;\n    }\n\n    jInputStream = f->file;\n    jthr = invokeMethod(env, NULL, INSTANCE, jInputStream,\n            HADOOP_ISTRM, \"seek\", \"(J)V\", desiredPos);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsSeek(desiredPos=%\" PRId64 \")\"\n            \": FSDataInputStream#seek\", desiredPos);\n        return -1;\n    }\n    return 0;\n}\n\n\n\ntOffset hdfsTell(hdfsFS fs, hdfsFile f)\n{\n    // JAVA EQUIVALENT\n    //  pos = f.getPos();\n\n    jobject jStream;\n    const char *interface;\n    jvalue jVal;\n    jthrowable jthr;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Sanity check\n    if (!f || f->type == HDFS_STREAM_UNINITIALIZED) {\n        errno = EBADF;\n        return -1;\n    }\n\n    //Parameters\n    jStream = f->file;\n    interface = (f->type == HDFS_STREAM_INPUT) ?\n        HADOOP_ISTRM : HADOOP_OSTRM;\n    jthr = invokeMethod(env, &jVal, INSTANCE, jStream,\n                     interface, \"getPos\", \"()J\");\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsTell: %s#getPos\",\n            ((f->type == HDFS_STREAM_INPUT) ? 
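/*\n * Usage sketch (illustrative only): a full-buffer write followed by a\n * position query. hdfsWrite either writes the whole buffer or fails,\n * so no partial-write loop is needed; hdfsSeek is only valid on input\n * streams. \"out\" is a hypothetical write-mode hdfsFile.\n *\n *   const char msg[] = \"hello\\n\";\n *   if (hdfsWrite(fs, out, msg, (tSize)(sizeof(msg) - 1)) < 0) {\n *       perror(\"hdfsWrite\");\n *   }\n *   tOffset pos = hdfsTell(fs, out);   // bytes written so far\n */\n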
\"FSDataInputStream\" :\n                                 \"FSDataOutputStream\"));\n        return -1;\n    }\n    return jVal.j;\n}\n\nint hdfsFlush(hdfsFS fs, hdfsFile f) \n{\n    // JAVA EQUIVALENT\n    //  fos.flush();\n\n    jthrowable jthr;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Sanity check\n    if (!f || f->type != HDFS_STREAM_OUTPUT) {\n        errno = EBADF;\n        return -1;\n    }\n    jthr = invokeMethod(env, NULL, INSTANCE, f->file,\n                     HADOOP_OSTRM, \"flush\", \"()V\");\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsFlush: FSDataInputStream#flush\");\n        return -1;\n    }\n    return 0;\n}\n\nint hdfsHFlush(hdfsFS fs, hdfsFile f)\n{\n    jobject jOutputStream;\n    jthrowable jthr;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Sanity check\n    if (!f || f->type != HDFS_STREAM_OUTPUT) {\n        errno = EBADF;\n        return -1;\n    }\n\n    jOutputStream = f->file;\n    jthr = invokeMethod(env, NULL, INSTANCE, jOutputStream,\n                     HADOOP_OSTRM, \"hflush\", \"()V\");\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsHFlush: FSDataOutputStream#hflush\");\n        return -1;\n    }\n    return 0;\n}\n\nint hdfsHSync(hdfsFS fs, hdfsFile f)\n{\n    jobject jOutputStream;\n    jthrowable jthr;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Sanity check\n    if (!f || f->type != HDFS_STREAM_OUTPUT) {\n        errno = EBADF;\n        return -1;\n    }\n\n    jOutputStream = f->file;\n    jthr = invokeMethod(env, NULL, INSTANCE, jOutputStream,\n                     HADOOP_OSTRM, \"hsync\", \"()V\");\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsHSync: FSDataOutputStream#hsync\");\n        return -1;\n    }\n    return 0;\n}\n\nint hdfsAvailable(hdfsFS fs, hdfsFile f)\n{\n    // JAVA EQUIVALENT\n    //  fis.available();\n\n    jobject jInputStream;\n    jvalue jVal;\n    jthrowable jthr;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Sanity check\n    if (!f || f->type != HDFS_STREAM_INPUT) {\n        errno = EBADF;\n        return -1;\n    }\n\n    //Parameters\n    jInputStream = f->file;\n    jthr = invokeMethod(env, &jVal, INSTANCE, jInputStream,\n                     HADOOP_ISTRM, \"available\", \"()I\");\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsAvailable: FSDataInputStream#available\");\n        return -1;\n    }\n    return jVal.i;\n}\n\nstatic int hdfsCopyImpl(hdfsFS srcFS, const char *src, hdfsFS dstFS,\n        const char *dst, jboolean deleteSource)\n{\n    //JAVA EQUIVALENT\n    //  FileUtil#copy(srcFS, srcPath, dstFS, dstPath,\n    //                 deleteSource = false, conf)\n\n    //Parameters\n    jobject jSrcFS = (jobject)srcFS;\n    jobject jDstFS = (jobject)dstFS;\n    jobject jConfiguration = NULL, jSrcPath = NULL, jDstPath = NULL;\n    jthrowable jthr;\n    jvalue jVal;\n    int ret;\n\n    //Get 
the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    jthr = constructNewObjectOfPath(env, src, &jSrcPath);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsCopyImpl(src=%s): constructNewObjectOfPath\", src);\n        goto done;\n    }\n    jthr = constructNewObjectOfPath(env, dst, &jDstPath);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsCopyImpl(dst=%s): constructNewObjectOfPath\", dst);\n        goto done;\n    }\n\n    //Create the org.apache.hadoop.conf.Configuration object\n    jthr = constructNewObjectOfClass(env, &jConfiguration,\n                                     HADOOP_CONF, \"()V\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsCopyImpl: Configuration constructor\");\n        goto done;\n    }\n\n    //FileUtil#copy\n    jthr = invokeMethod(env, &jVal, STATIC,\n            NULL, \"org/apache/hadoop/fs/FileUtil\", \"copy\",\n            \"(Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;\"\n            \"Lorg/apache/hadoop/fs/FileSystem;Lorg/apache/hadoop/fs/Path;\"\n            \"ZLorg/apache/hadoop/conf/Configuration;)Z\",\n            jSrcFS, jSrcPath, jDstFS, jDstPath, deleteSource, \n            jConfiguration);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsCopyImpl(src=%s, dst=%s, deleteSource=%d): \"\n            \"FileUtil#copy\", src, dst, deleteSource);\n        goto done;\n    }\n    if (!jVal.z) {\n        ret = EIO;\n        goto done;\n    }\n    ret = 0;\n\ndone:\n    destroyLocalReference(env, jConfiguration);\n    destroyLocalReference(env, jSrcPath);\n    destroyLocalReference(env, jDstPath);\n  \n    if (ret) {\n        errno = ret;\n        return -1;\n    }\n    return 0;\n}\n\nint hdfsCopy(hdfsFS srcFS, const char *src, hdfsFS dstFS, const char *dst)\n{\n    return hdfsCopyImpl(srcFS, src, dstFS, dst, 0);\n}\n\nint hdfsMove(hdfsFS srcFS, const char *src, hdfsFS dstFS, const char *dst)\n{\n    return hdfsCopyImpl(srcFS, src, dstFS, dst, 1);\n}\n\nint hdfsDelete(hdfsFS fs, const char *path, int recursive)\n{\n    // JAVA EQUIVALENT:\n    //  Path p = new Path(path);\n    //  bool retval = fs.delete(p, recursive);\n\n    jobject jFS = (jobject)fs;\n    jthrowable jthr;\n    jobject jPath;\n    jvalue jVal;\n    jboolean jRecursive;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsDelete(path=%s): constructNewObjectOfPath\", path);\n        return -1;\n    }\n    jRecursive = recursive ? 
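/*\n * Usage sketch (illustrative only): durability of writes with the\n * calls defined above. hdfsFlush only pushes client buffers, hdfsHFlush\n * makes the data visible to new readers, and hdfsHSync additionally\n * asks the datanodes to persist it, at increasing cost:\n *\n *   hdfsWrite(fs, out, buf, n);\n *   hdfsHFlush(fs, out);   // visible to readers\n *   hdfsHSync(fs, out);    // durable on the datanodes\n */\n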
JNI_TRUE : JNI_FALSE;\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                     \"delete\", \"(Lorg/apache/hadoop/fs/Path;Z)Z\",\n                     jPath, jRecursive);\n    destroyLocalReference(env, jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsDelete(path=%s, recursive=%d): \"\n            \"FileSystem#delete\", path, recursive);\n        return -1;\n    }\n    if (!jVal.z) {\n        errno = EIO;\n        return -1;\n    }\n    return 0;\n}\n\n\n\nint hdfsRename(hdfsFS fs, const char *oldPath, const char *newPath)\n{\n    // JAVA EQUIVALENT:\n    //  Path old = new Path(oldPath);\n    //  Path new = new Path(newPath);\n    //  fs.rename(old, new);\n\n    jobject jFS = (jobject)fs;\n    jthrowable jthr;\n    jobject jOldPath = NULL, jNewPath = NULL;\n    int ret = -1;\n    jvalue jVal;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    jthr = constructNewObjectOfPath(env, oldPath, &jOldPath );\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsRename: constructNewObjectOfPath(%s)\", oldPath);\n        goto done;\n    }\n    jthr = constructNewObjectOfPath(env, newPath, &jNewPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsRename: constructNewObjectOfPath(%s)\", newPath);\n        goto done;\n    }\n\n    // Rename the file\n    // TODO: use rename2 here?  (See HDFS-3592)\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS, \"rename\",\n                     JMETHOD2(JPARAM(HADOOP_PATH), JPARAM(HADOOP_PATH), \"Z\"),\n                     jOldPath, jNewPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsRename(oldPath=%s, newPath=%s): FileSystem#rename\",\n            oldPath, newPath);\n        goto done;\n    }\n    if (!jVal.z) {\n        errno = EIO;\n        goto done;\n    }\n    ret = 0;\n\ndone:\n    destroyLocalReference(env, jOldPath);\n    destroyLocalReference(env, jNewPath);\n    return ret;\n}\n\nint hdfsRenameOverwrite(hdfsFS fs, const char *oldPath, const char *newPath)\n{\n    // JAVA EQUIVALENT:\n    //  Path old = new Path(oldPath);\n    //  Path new = new Path(newPath);\n    //  fs.rename(old, new, overwrite = true);\n\n    jobject jFS = (jobject)fs;\n    jthrowable jthr;\n    jobject jOldPath = NULL, jNewPath = NULL;\n    int ret = -1;\n    jvalue jVal;\n    jobject enumInst = NULL, enumSetObj = NULL;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    jthr = fetchEnumInstance(env, RENAME_OPTION,\n              \"OVERWRITE\", &enumInst);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsRename: fetchEnumInstance(\" RENAME_OPTION \", OVERWRITE)\");\n        goto done;\n    }\n    jthr = invokeMethod(env, &jVal, STATIC, NULL,\n            \"java/util/EnumSet\", \"of\",\n            \"(Ljava/lang/Enum;)Ljava/util/EnumSet;\", enumInst);\n    if (jthr) {\n        goto done;\n    }\n    enumSetObj = jVal.l;\n\n    jclass enumSetClass = (*env)->FindClass(env, \"java/util/EnumSet\");\n    jmethodID toArrayTypedMethodID = (*env)->GetMethodID(env, enumSetClass, \"toArray\", \"([Ljava/lang/Object;)[Ljava/lang/Object;\");\n    jclass 
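/*\n * Sketch (illustrative only): the Java-side equivalent of the call\n * being assembled below is\n *   fs.rename(oldPath, newPath, Options.Rename.OVERWRITE);\n * so, unlike hdfsRename above, hdfsRenameOverwrite replaces an existing\n * destination instead of failing:\n *\n *   if (hdfsRenameOverwrite(fs, \"/tmp/a\", \"/tmp/b\") != 0) {\n *       perror(\"hdfsRenameOverwrite\");\n *   }\n */\n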
renameOptionClass = (*env)->FindClass(env, \"org/apache/hadoop/fs/Options$Rename\");\n    if (!renameOptionClass) {\n        errno = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsRenameOverwrite: FindClass(Options$Rename)\");\n        goto done;\n    }\n    jobjectArray typedArray = (*env)->NewObjectArray(env, 1, renameOptionClass, NULL);\n    if (!typedArray) {\n        errno = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsRenameOverwrite: NewObjectArray\");\n        goto done;\n    }\n    jobjectArray enumArray = (jobjectArray)(*env)->CallObjectMethod(env,\n            enumSetObj, toArrayTypedMethodID, typedArray);\n    destroyLocalReference(env, typedArray);\n    if ((*env)->ExceptionCheck(env)) {\n        errno = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsRenameOverwrite: EnumSet#toArray\");\n        goto done;\n    }\n\n    jthr = constructNewObjectOfPath(env, oldPath, &jOldPath);\n    if (jthr) {\n        destroyLocalReference(env, enumArray);\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsRenameOverwrite: constructNewObjectOfPath(%s)\", oldPath);\n        goto done;\n    }\n    jthr = constructNewObjectOfPath(env, newPath, &jNewPath);\n    if (jthr) {\n        destroyLocalReference(env, enumArray);\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsRenameOverwrite: constructNewObjectOfPath(%s)\", newPath);\n        goto done;\n    }\n\n    // Rename the file, passing Options.Rename.OVERWRITE so an existing\n    // destination is replaced instead of making the call fail.\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS, \"rename\",\n                     \"(Lorg/apache/hadoop/fs/Path;Lorg/apache/hadoop/fs/Path;[Lorg/apache/hadoop/fs/Options$Rename;)V\",\n                     jOldPath, jNewPath, enumArray);\n    destroyLocalReference(env, enumArray);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsRenameOverwrite(oldPath=%s, newPath=%s): FileSystem#rename\",\n            oldPath, newPath);\n        goto done;\n    }\n    ret = 0;\n\ndone:\n    destroyLocalReference(env, jOldPath);\n    destroyLocalReference(env, jNewPath);\n    destroyLocalReference(env, enumInst);\n    destroyLocalReference(env, enumSetObj);\n    return ret;\n}\n\n\n\nchar* hdfsGetWorkingDirectory(hdfsFS fs, char* buffer, size_t bufferSize)\n{\n    // JAVA EQUIVALENT:\n    //  Path p = fs.getWorkingDirectory(); \n    //  return p.toString()\n\n    jobject jPath = NULL;\n    jstring jPathString = NULL;\n    jobject jFS = (jobject)fs;\n    jvalue jVal;\n    jthrowable jthr;\n    int ret;\n    const char *jPathChars = NULL;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return NULL;\n    }\n\n    //FileSystem#getWorkingDirectory()\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS,\n                     HADOOP_FS, \"getWorkingDirectory\",\n                     \"()Lorg/apache/hadoop/fs/Path;\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetWorkingDirectory: FileSystem#getWorkingDirectory\");\n        goto done;\n    }\n    jPath = jVal.l;\n    if (!jPath) {\n        fprintf(stderr, \"hdfsGetWorkingDirectory: \"\n            \"FileSystem#getWorkingDirectory returned NULL\\n\");\n        ret = EIO;\n        goto done;\n    }\n\n    //Path#toString()\n    jthr = invokeMethod(env, &jVal, INSTANCE, jPath, \n                     \"org/apache/hadoop/fs/Path\", \"toString\",\n                     \"()Ljava/lang/String;\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetWorkingDirectory: Path#toString\");\n        goto done;\n    }\n    jPathString = jVal.l;\n    jPathChars = (*env)->GetStringUTFChars(env, jPathString, NULL);\n    if (!jPathChars) {\n        ret = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"hdfsGetWorkingDirectory: GetStringUTFChars\");\n        goto done;\n    }\n\n    //Copy to user-provided buffer\n    ret = snprintf(buffer, bufferSize, \"%s\", jPathChars);\n    if ((size_t)ret >= bufferSize) {\n        ret = ENAMETOOLONG;\n        goto done;\n    }\n    ret = 0;\n\ndone:\n    if (jPathChars) {\n        
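/*\n * Usage sketch (illustrative only): hdfsGetWorkingDirectory fills a\n * caller-supplied buffer and fails with ENAMETOOLONG when the path\n * does not fit; the directory below is hypothetical.\n *\n *   char wd[4096];\n *   if (hdfsGetWorkingDirectory(fs, wd, sizeof(wd))) {\n *       hdfsSetWorkingDirectory(fs, \"/user/example\");\n *   }\n */\n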
(*env)->ReleaseStringUTFChars(env, jPathString, jPathChars);\n    }\n    destroyLocalReference(env, jPath);\n    destroyLocalReference(env, jPathString);\n\n    if (ret) {\n        errno = ret;\n        return NULL;\n    }\n    return buffer;\n}\n\n\n\nint hdfsSetWorkingDirectory(hdfsFS fs, const char *path)\n{\n    // JAVA EQUIVALENT:\n    //  fs.setWorkingDirectory(Path(path)); \n\n    jobject jFS = (jobject)fs;\n    jthrowable jthr;\n    jobject jPath;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Create an object of org.apache.hadoop.fs.Path\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsSetWorkingDirectory(%s): constructNewObjectOfPath\",\n            path);\n        return -1;\n    }\n\n    //FileSystem#setWorkingDirectory()\n    jthr = invokeMethod(env, NULL, INSTANCE, jFS, HADOOP_FS,\n                     \"setWorkingDirectory\", \n                     \"(Lorg/apache/hadoop/fs/Path;)V\", jPath);\n    destroyLocalReference(env, jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, NOPRINT_EXC_ILLEGAL_ARGUMENT,\n            \"hdfsSetWorkingDirectory(%s): FileSystem#setWorkingDirectory\",\n            path);\n        return -1;\n    }\n    return 0;\n}\n\n\n\nint hdfsCreateDirectory(hdfsFS fs, const char *path)\n{\n    // JAVA EQUIVALENT:\n    //  fs.mkdirs(new Path(path));\n\n    jobject jFS = (jobject)fs;\n    jobject jPath;\n    jthrowable jthr;\n    jvalue jVal;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Create an object of org.apache.hadoop.fs.Path\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsCreateDirectory(%s): constructNewObjectOfPath\", path);\n        return -1;\n    }\n\n    //Create the directory\n    jVal.z = 0;\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                     \"mkdirs\", \"(Lorg/apache/hadoop/fs/Path;)Z\",\n                     jPath);\n    destroyLocalReference(env, jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr,\n            NOPRINT_EXC_ACCESS_CONTROL | NOPRINT_EXC_FILE_NOT_FOUND |\n            NOPRINT_EXC_UNRESOLVED_LINK | NOPRINT_EXC_PARENT_NOT_DIRECTORY,\n            \"hdfsCreateDirectory(%s): FileSystem#mkdirs\", path);\n        return -1;\n    }\n    if (!jVal.z) {\n        // It's unclear under exactly which conditions FileSystem#mkdirs\n        // is supposed to return false (as opposed to throwing an exception.)\n        // It seems like the current code never actually returns false.\n        // So we're going to translate this to EIO, since there seems to be\n        // nothing more specific we can do with it.\n        errno = EIO;\n        return -1;\n    }\n    return 0;\n}\n\n\nint hdfsSetReplication(hdfsFS fs, const char *path, int16_t replication)\n{\n    // JAVA EQUIVALENT:\n    //  fs.setReplication(new Path(path), replication);\n\n    jobject jFS = (jobject)fs;\n    jthrowable jthr;\n    jobject jPath;\n    jvalue jVal;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Create 
an object of org.apache.hadoop.fs.Path\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsSetReplication(path=%s): constructNewObjectOfPath\", path);\n        return -1;\n    }\n\n    //Create the directory\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                     \"setReplication\", \"(Lorg/apache/hadoop/fs/Path;S)Z\",\n                     jPath, replication);\n    destroyLocalReference(env, jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsSetReplication(path=%s, replication=%d): \"\n            \"FileSystem#setReplication\", path, replication);\n        return -1;\n    }\n    if (!jVal.z) {\n        // setReplication returns false \"if file does not exist or is a\n        // directory.\"  So the nearest translation to that is ENOENT.\n        errno = ENOENT;\n        return -1;\n    }\n\n    return 0;\n}\n\nint hdfsChown(hdfsFS fs, const char *path, const char *owner, const char *group)\n{\n    // JAVA EQUIVALENT:\n    //  fs.setOwner(path, owner, group)\n\n    jobject jFS = (jobject)fs;\n    jobject jPath = NULL;\n    jstring jOwner = NULL, jGroup = NULL;\n    jthrowable jthr;\n    int ret;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    if (owner == NULL && group == NULL) {\n      return 0;\n    }\n\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsChown(path=%s): constructNewObjectOfPath\", path);\n        goto done;\n    }\n\n    jthr = newJavaStr(env, owner, &jOwner); \n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsChown(path=%s): newJavaStr(%s)\", path, owner);\n        goto done;\n    }\n    jthr = newJavaStr(env, group, &jGroup);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsChown(path=%s): newJavaStr(%s)\", path, group);\n        goto done;\n    }\n\n    //Create the directory\n    jthr = invokeMethod(env, NULL, INSTANCE, jFS, HADOOP_FS,\n            \"setOwner\", JMETHOD3(JPARAM(HADOOP_PATH), \n                    JPARAM(JAVA_STRING), JPARAM(JAVA_STRING), JAVA_VOID),\n            jPath, jOwner, jGroup);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr,\n            NOPRINT_EXC_ACCESS_CONTROL | NOPRINT_EXC_FILE_NOT_FOUND |\n            NOPRINT_EXC_UNRESOLVED_LINK,\n            \"hdfsChown(path=%s, owner=%s, group=%s): \"\n            \"FileSystem#setOwner\", path, owner, group);\n        goto done;\n    }\n    ret = 0;\n\ndone:\n    destroyLocalReference(env, jPath);\n    destroyLocalReference(env, jOwner);\n    destroyLocalReference(env, jGroup);\n\n    if (ret) {\n        errno = ret;\n        return -1;\n    }\n    return 0;\n}\n\nint hdfsChmod(hdfsFS fs, const char *path, short mode)\n{\n    int ret;\n    // JAVA EQUIVALENT:\n    //  fs.setPermission(path, FsPermission)\n\n    jthrowable jthr;\n    jobject jPath = NULL, jPermObj = NULL;\n    jobject jFS = (jobject)fs;\n    jshort jmode = mode;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    // construct jPerm = FsPermission.createImmutable(short mode);\n    jthr = 
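/*\n * Usage sketch (illustrative only): the mode passed to hdfsChmod is a\n * POSIX-style permission short, so octal literals read naturally, and\n * passing NULL for owner or group to hdfsChown leaves that field\n * unchanged (the path and owner below are hypothetical).\n *\n *   hdfsChmod(fs, \"/tmp/example.bin\", 0644);\n *   hdfsChown(fs, \"/tmp/example.bin\", \"alice\", NULL);\n */\n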
constructNewObjectOfClass(env, &jPermObj,\n                HADOOP_FSPERM,\"(S)V\",jmode);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"constructNewObjectOfClass(%s)\", HADOOP_FSPERM);\n        return -1;\n    }\n\n    //Create an object of org.apache.hadoop.fs.Path\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsChmod(%s): constructNewObjectOfPath\", path);\n        goto done;\n    }\n\n    //Call FileSystem#setPermission\n    jthr = invokeMethod(env, NULL, INSTANCE, jFS, HADOOP_FS,\n            \"setPermission\",\n            JMETHOD2(JPARAM(HADOOP_PATH), JPARAM(HADOOP_FSPERM), JAVA_VOID),\n            jPath, jPermObj);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr,\n            NOPRINT_EXC_ACCESS_CONTROL | NOPRINT_EXC_FILE_NOT_FOUND |\n            NOPRINT_EXC_UNRESOLVED_LINK,\n            \"hdfsChmod(%s): FileSystem#setPermission\", path);\n        goto done;\n    }\n    ret = 0;\n\ndone:\n    destroyLocalReference(env, jPath);\n    destroyLocalReference(env, jPermObj);\n\n    if (ret) {\n        errno = ret;\n        return -1;\n    }\n    return 0;\n}\n\nint hdfsUtime(hdfsFS fs, const char *path, tTime mtime, tTime atime)\n{\n    // JAVA EQUIVALENT:\n    //  fs.setTimes(src, mtime, atime)\n\n    jthrowable jthr;\n    jobject jFS = (jobject)fs;\n    jobject jPath;\n    static const tTime NO_CHANGE = -1;\n    jlong jmtime, jatime;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //Create an object of org.apache.hadoop.fs.Path\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsUtime(path=%s): constructNewObjectOfPath\", path);\n        return -1;\n    }\n\n    jmtime = (mtime == NO_CHANGE) ? -1 : (mtime * (jlong)1000);\n    jatime = (atime == NO_CHANGE) ? -1 : (atime * (jlong)1000);\n\n    jthr = invokeMethod(env, NULL, INSTANCE, jFS, HADOOP_FS,\n            \"setTimes\", JMETHOD3(JPARAM(HADOOP_PATH), \"J\", \"J\", JAVA_VOID),\n            jPath, jmtime, jatime);\n    destroyLocalReference(env, jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr,\n            NOPRINT_EXC_ACCESS_CONTROL | NOPRINT_EXC_FILE_NOT_FOUND |\n            NOPRINT_EXC_UNRESOLVED_LINK,\n            \"hdfsUtime(path=%s): FileSystem#setTimes\", path);\n        return -1;\n    }\n    return 0;\n}\n\n/**\n * Zero-copy options.\n *\n * We cache the EnumSet of ReadOptions which has to be passed into every\n * readZero call, to avoid reconstructing it each time.  
This cache is cleared\n * whenever an element changes.\n */\nstruct hadoopRzOptions\n{\n    JNIEnv *env;\n    int skipChecksums;\n    jobject byteBufferPool;\n    jobject cachedEnumSet;\n};\n\nstruct hadoopRzOptions *hadoopRzOptionsAlloc(void)\n{\n    struct hadoopRzOptions *opts;\n    JNIEnv *env;\n\n    env = getJNIEnv();\n    if (!env) {\n        // Check to make sure the JNI environment is set up properly.\n        errno = EINTERNAL;\n        return NULL;\n    }\n    opts = calloc(1, sizeof(struct hadoopRzOptions));\n    if (!opts) {\n        errno = ENOMEM;\n        return NULL;\n    }\n    return opts;\n}\n\nstatic void hadoopRzOptionsClearCached(JNIEnv *env,\n        struct hadoopRzOptions *opts)\n{\n    if (!opts->cachedEnumSet) {\n        return;\n    }\n    (*env)->DeleteGlobalRef(env, opts->cachedEnumSet);\n    opts->cachedEnumSet = NULL;\n}\n\nint hadoopRzOptionsSetSkipChecksum(\n        struct hadoopRzOptions *opts, int skip)\n{\n    JNIEnv *env;\n    env = getJNIEnv();\n    if (!env) {\n        errno = EINTERNAL;\n        return -1;\n    }\n    hadoopRzOptionsClearCached(env, opts);\n    opts->skipChecksums = !!skip;\n    return 0;\n}\n\nint hadoopRzOptionsSetByteBufferPool(\n        struct hadoopRzOptions *opts, const char *className)\n{\n    JNIEnv *env;\n    jthrowable jthr;\n    jobject byteBufferPool = NULL;\n\n    env = getJNIEnv();\n    if (!env) {\n        errno = EINTERNAL;\n        return -1;\n    }\n\n    if (className) {\n      // Note: we don't have to call hadoopRzOptionsClearCached in this\n      // function, since the ByteBufferPool is passed separately from the\n      // EnumSet of ReadOptions.\n\n      jthr = constructNewObjectOfClass(env, &byteBufferPool, className, \"()V\");\n      if (jthr) {\n          printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n              \"hadoopRzOptionsSetByteBufferPool(className=%s): \", className);\n          errno = EINVAL;\n          return -1;\n      }\n    }\n    if (opts->byteBufferPool) {\n        // Delete any previous ByteBufferPool we had.\n        (*env)->DeleteGlobalRef(env, opts->byteBufferPool);\n    }\n    opts->byteBufferPool = byteBufferPool;\n    return 0;\n}\n\nvoid hadoopRzOptionsFree(struct hadoopRzOptions *opts)\n{\n    JNIEnv *env;\n    env = getJNIEnv();\n    if (!env) {\n        return;\n    }\n    hadoopRzOptionsClearCached(env, opts);\n    if (opts->byteBufferPool) {\n        (*env)->DeleteGlobalRef(env, opts->byteBufferPool);\n        opts->byteBufferPool = NULL;\n    }\n    free(opts);\n}\n\nstruct hadoopRzBuffer\n{\n    jobject byteBuffer;\n    uint8_t *ptr;\n    int32_t length;\n    int direct;\n};\n\nstatic jthrowable hadoopRzOptionsGetEnumSet(JNIEnv *env,\n        struct hadoopRzOptions *opts, jobject *enumSet)\n{\n    jthrowable jthr = NULL;\n    jobject enumInst = NULL, enumSetObj = NULL;\n    jvalue jVal;\n\n    if (opts->cachedEnumSet) {\n        // If we cached the value, return it now.\n        *enumSet = opts->cachedEnumSet;\n        goto done;\n    }\n    if (opts->skipChecksums) {\n        jthr = fetchEnumInstance(env, READ_OPTION,\n                  \"SKIP_CHECKSUMS\", &enumInst);\n        if (jthr) {\n            goto done;\n        }\n        jthr = invokeMethod(env, &jVal, STATIC, NULL,\n                \"java/util/EnumSet\", \"of\",\n                \"(Ljava/lang/Enum;)Ljava/util/EnumSet;\", enumInst);\n        if (jthr) {\n            goto done;\n        }\n        enumSetObj = jVal.l;\n    } else {\n        jclass clazz = (*env)->FindClass(env, READ_OPTION);\n        if 
(!clazz) {\n            jthr = newRuntimeError(env, \"failed \"\n                    \"to find class for %s\", READ_OPTION);\n            goto done;\n        }\n        jthr = invokeMethod(env, &jVal, STATIC, NULL,\n                \"java/util/EnumSet\", \"noneOf\",\n                \"(Ljava/lang/Class;)Ljava/util/EnumSet;\", clazz);\n        enumSetObj = jVal.l;\n    }\n    // create global ref\n    opts->cachedEnumSet = (*env)->NewGlobalRef(env, enumSetObj);\n    if (!opts->cachedEnumSet) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    *enumSet = opts->cachedEnumSet;\n    jthr = NULL;\ndone:\n    (*env)->DeleteLocalRef(env, enumInst);\n    (*env)->DeleteLocalRef(env, enumSetObj);\n    return jthr;\n}\n\nstatic int hadoopReadZeroExtractBuffer(JNIEnv *env,\n        const struct hadoopRzOptions *opts, struct hadoopRzBuffer *buffer)\n{\n    int ret;\n    jthrowable jthr;\n    jvalue jVal;\n    uint8_t *directStart;\n    void *mallocBuf = NULL;\n    jint position;\n    jarray array = NULL;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, buffer->byteBuffer,\n                     \"java/nio/ByteBuffer\", \"remaining\", \"()I\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"hadoopReadZeroExtractBuffer: ByteBuffer#remaining failed: \");\n        goto done;\n    }\n    buffer->length = jVal.i;\n    jthr = invokeMethod(env, &jVal, INSTANCE, buffer->byteBuffer,\n                     \"java/nio/ByteBuffer\", \"position\", \"()I\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"hadoopReadZeroExtractBuffer: ByteBuffer#position failed: \");\n        goto done;\n    }\n    position = jVal.i;\n    directStart = (*env)->GetDirectBufferAddress(env, buffer->byteBuffer);\n    if (directStart) {\n        // Handle direct buffers.\n        buffer->ptr = directStart + position;\n        buffer->direct = 1;\n        ret = 0;\n        goto done;\n    }\n    // Handle indirect buffers.\n    // The JNI docs don't say that GetDirectBufferAddress throws any exceptions\n    // when it fails.  However, they also don't clearly say that it doesn't.  It\n    // seems safest to clear any pending exceptions here, to prevent problems on\n    // various JVMs.\n    (*env)->ExceptionClear(env);\n    if (!opts->byteBufferPool) {\n        fputs(\"hadoopReadZeroExtractBuffer: we read through the \"\n                \"zero-copy path, but failed to get the address of the buffer via \"\n                \"GetDirectBufferAddress.  
Please make sure your JVM supports \"\n                \"GetDirectBufferAddress.\\n\", stderr);\n        ret = ENOTSUP;\n        goto done;\n    }\n    // Get the backing array object of this buffer.\n    jthr = invokeMethod(env, &jVal, INSTANCE, buffer->byteBuffer,\n                     \"java/nio/ByteBuffer\", \"array\", \"()[B\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"hadoopReadZeroExtractBuffer: ByteBuffer#array failed: \");\n        goto done;\n    }\n    array = jVal.l;\n    if (!array) {\n        fputs(\"hadoopReadZeroExtractBuffer: ByteBuffer#array returned NULL.\",\n              stderr);\n        ret = EIO;\n        goto done;\n    }\n    mallocBuf = malloc(buffer->length);\n    if (!mallocBuf) {\n        fprintf(stderr, \"hadoopReadZeroExtractBuffer: failed to allocate %d bytes of memory\\n\",\n                buffer->length);\n        ret = ENOMEM;\n        goto done;\n    }\n    (*env)->GetByteArrayRegion(env, array, position, buffer->length, mallocBuf);\n    jthr = (*env)->ExceptionOccurred(env);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"hadoopReadZeroExtractBuffer: GetByteArrayRegion failed: \");\n        goto done;\n    }\n    buffer->ptr = mallocBuf;\n    buffer->direct = 0;\n    ret = 0;\n\ndone:\n    free(mallocBuf);\n    (*env)->DeleteLocalRef(env, array);\n    return ret;\n}\n\nstatic int translateZCRException(JNIEnv *env, jthrowable exc)\n{\n    int ret;\n    char *className = NULL;\n    jthrowable jthr = classNameOfObject(exc, env, &className);\n\n    if (jthr) {\n        fputs(\"hadoopReadZero: failed to get class name of \"\n                \"exception from read().\\n\", stderr);\n        destroyLocalReference(env, exc);\n        destroyLocalReference(env, jthr);\n        ret = EIO;\n        goto done;\n    }\n    if (!strcmp(className, \"java.lang.UnsupportedOperationException\")) {\n        ret = EPROTONOSUPPORT;\n        goto done;\n    }\n    ret = printExceptionAndFree(env, exc, PRINT_EXC_ALL,\n            \"hadoopZeroCopyRead: ZeroCopyCursor#read failed\");\ndone:\n    free(className);\n    return ret;\n}\n\nstruct hadoopRzBuffer* hadoopReadZero(hdfsFile file,\n            struct hadoopRzOptions *opts, int32_t maxLength)\n{\n    JNIEnv *env;\n    jthrowable jthr = NULL;\n    jvalue jVal;\n    jobject enumSet = NULL, byteBuffer = NULL;\n    struct hadoopRzBuffer* buffer = NULL;\n    int ret;\n\n    env = getJNIEnv();\n    if (!env) {\n        errno = EINTERNAL;\n        return NULL;\n    }\n    if (file->type != HDFS_STREAM_INPUT) {\n        fputs(\"Cannot read from a non-InputStream object!\\n\", stderr);\n        ret = EINVAL;\n        goto done;\n    }\n    buffer = calloc(1, sizeof(struct hadoopRzBuffer));\n    if (!buffer) {\n        ret = ENOMEM;\n        goto done;\n    }\n    jthr = hadoopRzOptionsGetEnumSet(env, opts, &enumSet);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"hadoopReadZero: hadoopRzOptionsGetEnumSet failed: \");\n        goto done;\n    }\n    jthr = invokeMethod(env, &jVal, INSTANCE, file->file, HADOOP_ISTRM, \"read\",\n        \"(Lorg/apache/hadoop/io/ByteBufferPool;ILjava/util/EnumSet;)\"\n        \"Ljava/nio/ByteBuffer;\", opts->byteBufferPool, maxLength, enumSet);\n    if (jthr) {\n        ret = translateZCRException(env, jthr);\n        goto done;\n    }\n    byteBuffer = jVal.l;\n    if (!byteBuffer) {\n        buffer->byteBuffer = NULL;\n        buffer->length 
= 0;\n        buffer->ptr = NULL;\n    } else {\n        buffer->byteBuffer = (*env)->NewGlobalRef(env, byteBuffer);\n        if (!buffer->byteBuffer) {\n            ret = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n                \"hadoopReadZero: failed to create global ref to ByteBuffer\");\n            goto done;\n        }\n        ret = hadoopReadZeroExtractBuffer(env, opts, buffer);\n        if (ret) {\n            goto done;\n        }\n    }\n    ret = 0;\ndone:\n    (*env)->DeleteLocalRef(env, byteBuffer);\n    if (ret) {\n        if (buffer) {\n            if (buffer->byteBuffer) {\n                (*env)->DeleteGlobalRef(env, buffer->byteBuffer);\n            }\n            free(buffer);\n        }\n        errno = ret;\n        return NULL;\n    } else {\n        errno = 0;\n    }\n    return buffer;\n}\n\nint32_t hadoopRzBufferLength(const struct hadoopRzBuffer *buffer)\n{\n    return buffer->length;\n}\n\nconst void *hadoopRzBufferGet(const struct hadoopRzBuffer *buffer)\n{\n    return buffer->ptr;\n}\n\nvoid hadoopRzBufferFree(hdfsFile file, struct hadoopRzBuffer *buffer)\n{\n    jvalue jVal;\n    jthrowable jthr;\n    JNIEnv* env;\n\n    env = getJNIEnv();\n    if (env == NULL) {\n        errno = EINTERNAL;\n        return;\n    }\n    if (buffer->byteBuffer) {\n        jthr = invokeMethod(env, &jVal, INSTANCE, file->file,\n                    HADOOP_ISTRM, \"releaseBuffer\",\n                    \"(Ljava/nio/ByteBuffer;)V\", buffer->byteBuffer);\n        if (jthr) {\n            printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                    \"hadoopRzBufferFree: releaseBuffer failed: \");\n            // even on error, we have to delete the reference.\n        }\n        (*env)->DeleteGlobalRef(env, buffer->byteBuffer);\n    }\n    if (!buffer->direct) {\n        free(buffer->ptr);\n    }\n    memset(buffer, 0, sizeof(*buffer));\n    free(buffer);\n}\n\nchar***\nhdfsGetHosts(hdfsFS fs, const char *path, tOffset start, tOffset length)\n{\n    // JAVA EQUIVALENT:\n    //  fs.getFileBlockLocations(new Path(path), start, length);\n\n    jobject jFS = (jobject)fs;\n    jthrowable jthr;\n    jobject jPath = NULL;\n    jobject jFileStatus = NULL;\n    jvalue jFSVal, jVal;\n    jobjectArray jBlockLocations = NULL, jFileBlockHosts = NULL;\n    jstring jHost = NULL;\n    char*** blockHosts = NULL;\n    int i, j, ret;\n    jsize jNumFileBlocks = 0;\n    jobject jFileBlock;\n    jsize jNumBlockHosts;\n    const char *hostName;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return NULL;\n    }\n\n    //Create an object of org.apache.hadoop.fs.Path\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetHosts(path=%s): constructNewObjectOfPath\", path);\n        goto done;\n    }\n    jthr = invokeMethod(env, &jFSVal, INSTANCE, jFS,\n            HADOOP_FS, \"getFileStatus\", \"(Lorg/apache/hadoop/fs/Path;)\"\n            \"Lorg/apache/hadoop/fs/FileStatus;\", jPath);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, NOPRINT_EXC_FILE_NOT_FOUND,\n                \"hdfsGetHosts(path=%s, start=%\"PRId64\", length=%\"PRId64\"):\"\n                \"FileSystem#getFileStatus\", path, start, length);\n        goto done;\n    }\n    jFileStatus = jFSVal.l;\n\n    //org.apache.hadoop.fs.FileSystem#getFileBlockLocations\n    
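// getFileBlockLocations(FileStatus, long start, long length) returns one\n    // BlockLocation per block overlapping the byte range [start, start+length);\n    // the loops below copy each block's host list into the NULL-terminated\n    // blockHosts array that hdfsFreeHosts later walks.\n    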
jthr = invokeMethod(env, &jVal, INSTANCE, jFS,\n                     HADOOP_FS, \"getFileBlockLocations\", \n                     \"(Lorg/apache/hadoop/fs/FileStatus;JJ)\"\n                     \"[Lorg/apache/hadoop/fs/BlockLocation;\",\n                     jFileStatus, start, length);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"hdfsGetHosts(path=%s, start=%\"PRId64\", length=%\"PRId64\"):\"\n                \"FileSystem#getFileBlockLocations\", path, start, length);\n        goto done;\n    }\n    jBlockLocations = jVal.l;\n\n    //Figure out no of entries in jBlockLocations\n    //Allocate memory and add NULL at the end\n    jNumFileBlocks = (*env)->GetArrayLength(env, jBlockLocations);\n\n    blockHosts = calloc(jNumFileBlocks + 1, sizeof(char**));\n    if (blockHosts == NULL) {\n        ret = ENOMEM;\n        goto done;\n    }\n    if (jNumFileBlocks == 0) {\n        ret = 0;\n        goto done;\n    }\n\n    //Now parse each block to get hostnames\n    for (i = 0; i < jNumFileBlocks; ++i) {\n        jFileBlock =\n            (*env)->GetObjectArrayElement(env, jBlockLocations, i);\n        if (!jFileBlock) {\n            ret = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n                \"hdfsGetHosts(path=%s, start=%\"PRId64\", length=%\"PRId64\"):\"\n                \"GetObjectArrayElement(%d)\", path, start, length, i);\n            goto done;\n        }\n        \n        jthr = invokeMethod(env, &jVal, INSTANCE, jFileBlock, HADOOP_BLK_LOC,\n                         \"getHosts\", \"()[Ljava/lang/String;\");\n        if (jthr) {\n            ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"hdfsGetHosts(path=%s, start=%\"PRId64\", length=%\"PRId64\"):\"\n                \"BlockLocation#getHosts\", path, start, length);\n            goto done;\n        }\n        jFileBlockHosts = jVal.l;\n        if (!jFileBlockHosts) {\n            fprintf(stderr,\n                \"hdfsGetHosts(path=%s, start=%\"PRId64\", length=%\"PRId64\"):\"\n                \"BlockLocation#getHosts returned NULL\\n\", path, start, length);\n            ret = EINTERNAL;\n            goto done;\n        }\n        //Figure out no of hosts in jFileBlockHosts, and allocate the memory\n        jNumBlockHosts = (*env)->GetArrayLength(env, jFileBlockHosts);\n        blockHosts[i] = calloc(jNumBlockHosts + 1, sizeof(char*));\n        if (!blockHosts[i]) {\n            ret = ENOMEM;\n            goto done;\n        }\n\n        //Now parse each hostname\n        for (j = 0; j < jNumBlockHosts; ++j) {\n            jHost = (*env)->GetObjectArrayElement(env, jFileBlockHosts, j);\n            if (!jHost) {\n                ret = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n                    \"hdfsGetHosts(path=%s, start=%\"PRId64\", length=%\"PRId64\"): \"\n                    \"GetObjectArrayElement(%d)\", path, start, length, j);\n                goto done;\n            }\n            hostName =\n                (const char*)((*env)->GetStringUTFChars(env, jHost, NULL));\n            if (!hostName) {\n                ret = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n                    \"hdfsGetHosts(path=%s, start=%\"PRId64\", length=%\"PRId64\", \"\n                    \"j=%d out of %d): GetStringUTFChars\",\n                    path, start, length, j, jNumBlockHosts);\n                goto done;\n            }\n            blockHosts[i][j] = strdup(hostName);\n            (*env)->ReleaseStringUTFChars(env, jHost, hostName);\n            if 
(!blockHosts[i][j]) {\n                ret = ENOMEM;\n                goto done;\n            }\n            destroyLocalReference(env, jHost);\n            jHost = NULL;\n        }\n\n        destroyLocalReference(env, jFileBlockHosts);\n        jFileBlockHosts = NULL;\n    }\n    ret = 0;\n\ndone:\n    destroyLocalReference(env, jPath);\n    destroyLocalReference(env, jFileStatus);\n    destroyLocalReference(env, jBlockLocations);\n    destroyLocalReference(env, jFileBlockHosts);\n    destroyLocalReference(env, jHost);\n    if (ret) {\n        errno = ret;\n        if (blockHosts) {\n            hdfsFreeHosts(blockHosts);\n        }\n        return NULL;\n    }\n\n    return blockHosts;\n}\n\n\nvoid hdfsFreeHosts(char ***blockHosts)\n{\n    int i, j;\n    for (i=0; blockHosts[i]; i++) {\n        for (j=0; blockHosts[i][j]; j++) {\n            free(blockHosts[i][j]);\n        }\n        free(blockHosts[i]);\n    }\n    free(blockHosts);\n}\n\n\ntOffset hdfsGetDefaultBlockSize(hdfsFS fs)\n{\n    // JAVA EQUIVALENT:\n    //  fs.getDefaultBlockSize();\n\n    jobject jFS = (jobject)fs;\n    jvalue jVal;\n    jthrowable jthr;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //FileSystem#getDefaultBlockSize()\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                     \"getDefaultBlockSize\", \"()J\");\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetDefaultBlockSize: FileSystem#getDefaultBlockSize\");\n        return -1;\n    }\n    return jVal.j;\n}\n\n\ntOffset hdfsGetDefaultBlockSizeAtPath(hdfsFS fs, const char *path)\n{\n    // JAVA EQUIVALENT:\n    //  fs.getDefaultBlockSize(path);\n\n    jthrowable jthr;\n    jobject jFS = (jobject)fs;\n    jobject jPath;\n    tOffset blockSize;\n    JNIEnv* env = getJNIEnv();\n\n    if (env == NULL) {\n        errno = EINTERNAL;\n        return -1;\n    }\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetDefaultBlockSize(path=%s): constructNewObjectOfPath\",\n            path);\n        return -1;\n    }\n    jthr = getDefaultBlockSize(env, jFS, jPath, &blockSize);\n    (*env)->DeleteLocalRef(env, jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetDefaultBlockSize(path=%s): \"\n            \"FileSystem#getDefaultBlockSize\", path);\n        return -1;\n    }\n    return blockSize;\n}\n\n\ntOffset hdfsGetCapacity(hdfsFS fs)\n{\n    // JAVA EQUIVALENT:\n    //  FsStatus fss = fs.getStatus();\n    //  return Fss.getCapacity();\n\n    jobject jFS = (jobject)fs;\n    jvalue  jVal;\n    jthrowable jthr;\n    jobject fss;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //FileSystem#getStatus\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                     \"getStatus\", \"()Lorg/apache/hadoop/fs/FsStatus;\");\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetCapacity: FileSystem#getStatus\");\n        return -1;\n    }\n    fss = (jobject)jVal.l;\n    jthr = invokeMethod(env, &jVal, INSTANCE, fss, HADOOP_FSSTATUS,\n                     \"getCapacity\", \"()J\");\n    destroyLocalReference(env, 
fss);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetCapacity: FsStatus#getCapacity\");\n        return -1;\n    }\n    return jVal.j;\n}\n\n\n  \ntOffset hdfsGetUsed(hdfsFS fs)\n{\n    // JAVA EQUIVALENT:\n    //  FsStatus fss = fs.getStatus();\n    //  return Fss.getUsed();\n\n    jobject jFS = (jobject)fs;\n    jvalue  jVal;\n    jthrowable jthr;\n    jobject fss;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n\n    //FileSystem#getStatus\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                     \"getStatus\", \"()Lorg/apache/hadoop/fs/FsStatus;\");\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetUsed: FileSystem#getStatus\");\n        return -1;\n    }\n    fss = (jobject)jVal.l;\n    jthr = invokeMethod(env, &jVal, INSTANCE, fss, HADOOP_FSSTATUS,\n                     \"getUsed\", \"()J\");\n    destroyLocalReference(env, fss);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetUsed: FsStatus#getUsed\");\n        return -1;\n    }\n    return jVal.j;\n}\n \n/**\n * We cannot add new fields to the hdfsFileInfo structure because it would break\n * binary compatibility.  The reason is because we return an array\n * of hdfsFileInfo structures from hdfsListDirectory.  So changing the size of\n * those structures would break all programs that relied on finding the second\n * element in the array at <base_offset> + sizeof(struct hdfsFileInfo).\n *\n * So instead, we add the new fields to the hdfsExtendedFileInfo structure.\n * This structure is contained in the mOwner string found inside the\n * hdfsFileInfo.  Specifically, the format of mOwner is:\n *\n * [owner-string] [null byte] [padding] [hdfsExtendedFileInfo structure]\n *\n * The padding is added so that the hdfsExtendedFileInfo structure starts on an\n * 8-byte boundary.\n *\n * @param str           The string to locate the extended info in.\n * @return              The offset of the hdfsExtendedFileInfo structure.\n */\nstatic size_t getExtendedFileInfoOffset(const char *str)\n{\n    int num_64_bit_words = ((strlen(str) + 1) + 7) / 8;\n    return num_64_bit_words * 8;\n}\n\nstatic struct hdfsExtendedFileInfo *getExtendedFileInfo(hdfsFileInfo *fileInfo)\n{\n    char *owner = fileInfo->mOwner;\n    return (struct hdfsExtendedFileInfo *)(owner +\n                getExtendedFileInfoOffset(owner));\n}\n\nstatic jthrowable\ngetFileInfoFromStat(JNIEnv *env, jobject jStat, hdfsFileInfo *fileInfo)\n{\n    jvalue jVal;\n    jthrowable jthr;\n    jobject jPath = NULL;\n    jstring jPathName = NULL;\n    jstring jUserName = NULL;\n    jstring jGroupName = NULL;\n    jobject jPermission = NULL;\n    const char *cPathName;\n    const char *cUserName;\n    const char *cGroupName;\n    struct hdfsExtendedFileInfo *extInfo;\n    size_t extOffset;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jStat,\n                     HADOOP_STAT, \"isDir\", \"()Z\");\n    if (jthr)\n        goto done;\n    fileInfo->mKind = jVal.z ? 
kObjectKindDirectory : kObjectKindFile;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jStat,\n                     HADOOP_STAT, \"getReplication\", \"()S\");\n    if (jthr)\n        goto done;\n    fileInfo->mReplication = jVal.s;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jStat,\n                     HADOOP_STAT, \"getBlockSize\", \"()J\");\n    if (jthr)\n        goto done;\n    fileInfo->mBlockSize = jVal.j;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jStat,\n                     HADOOP_STAT, \"getModificationTime\", \"()J\");\n    if (jthr)\n        goto done;\n    fileInfo->mLastMod = jVal.j / 1000;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jStat,\n                     HADOOP_STAT, \"getAccessTime\", \"()J\");\n    if (jthr)\n        goto done;\n    fileInfo->mLastAccess = (tTime) (jVal.j / 1000);\n\n    if (fileInfo->mKind == kObjectKindFile) {\n        jthr = invokeMethod(env, &jVal, INSTANCE, jStat,\n                         HADOOP_STAT, \"getLen\", \"()J\");\n        if (jthr)\n            goto done;\n        fileInfo->mSize = jVal.j;\n    }\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jStat, HADOOP_STAT,\n                     \"getPath\", \"()Lorg/apache/hadoop/fs/Path;\");\n    if (jthr)\n        goto done;\n    jPath = jVal.l;\n    if (jPath == NULL) {\n        jthr = newRuntimeError(env, \"org.apache.hadoop.fs.FileStatus#\"\n            \"getPath returned NULL!\");\n        goto done;\n    }\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jPath, HADOOP_PATH,\n                     \"toString\", \"()Ljava/lang/String;\");\n    if (jthr)\n        goto done;\n    jPathName = jVal.l;\n    cPathName =\n        (const char*) ((*env)->GetStringUTFChars(env, jPathName, NULL));\n    if (!cPathName) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    fileInfo->mName = strdup(cPathName);\n    (*env)->ReleaseStringUTFChars(env, jPathName, cPathName);\n    jthr = invokeMethod(env, &jVal, INSTANCE, jStat, HADOOP_STAT,\n                    \"getOwner\", \"()Ljava/lang/String;\");\n    if (jthr)\n        goto done;\n    jUserName = jVal.l;\n    cUserName =\n        (const char*) ((*env)->GetStringUTFChars(env, jUserName, NULL));\n    if (!cUserName) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    extOffset = getExtendedFileInfoOffset(cUserName);\n    fileInfo->mOwner = malloc(extOffset + sizeof(struct hdfsExtendedFileInfo));\n    if (!fileInfo->mOwner) {\n        jthr = newRuntimeError(env, \"getFileInfo: OOM allocating mOwner\");\n        goto done;\n    }\n    strcpy(fileInfo->mOwner, cUserName);\n    (*env)->ReleaseStringUTFChars(env, jUserName, cUserName);\n    extInfo = getExtendedFileInfo(fileInfo);\n    memset(extInfo, 0, sizeof(*extInfo));\n    jthr = invokeMethod(env, &jVal, INSTANCE, jStat,\n                    HADOOP_STAT, \"isEncrypted\", \"()Z\");\n    if (jthr) {\n        goto done;\n    }\n    if (jVal.z == JNI_TRUE) {\n        extInfo->flags |= HDFS_EXTENDED_FILE_INFO_ENCRYPTED;\n    }\n    jthr = invokeMethod(env, &jVal, INSTANCE, jStat, HADOOP_STAT,\n                    \"getGroup\", \"()Ljava/lang/String;\");\n    if (jthr)\n        goto done;\n    jGroupName = jVal.l;\n    cGroupName = (const char*) ((*env)->GetStringUTFChars(env, jGroupName, NULL));\n    if (!cGroupName) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    fileInfo->mGroup = strdup(cGroupName);\n    (*env)->ReleaseStringUTFChars(env, jGroupName, cGroupName);\n\n    jthr = 
invokeMethod(env, &jVal, INSTANCE, jStat, HADOOP_STAT,\n            \"getPermission\",\n            \"()Lorg/apache/hadoop/fs/permission/FsPermission;\");\n    if (jthr)\n        goto done;\n    if (jVal.l == NULL) {\n        jthr = newRuntimeError(env, \"%s#getPermission returned NULL!\",\n            HADOOP_STAT);\n        goto done;\n    }\n    jPermission = jVal.l;\n    jthr = invokeMethod(env, &jVal, INSTANCE, jPermission, HADOOP_FSPERM,\n                         \"toShort\", \"()S\");\n    if (jthr)\n        goto done;\n    fileInfo->mPermissions = jVal.s;\n    jthr = NULL;\n\ndone:\n    if (jthr)\n        hdfsFreeFileInfoEntry(fileInfo);\n    destroyLocalReference(env, jPath);\n    destroyLocalReference(env, jPathName);\n    destroyLocalReference(env, jUserName);\n    destroyLocalReference(env, jGroupName);\n    destroyLocalReference(env, jPermission);\n    return jthr;\n}\n\nstatic jthrowable\ngetFileInfo(JNIEnv *env, jobject jFS, jobject jPath, hdfsFileInfo **fileInfo)\n{\n    // JAVA EQUIVALENT:\n    //  fs.isDirectory(f)\n    //  fs.getModificationTime()\n    //  fs.getAccessTime()\n    //  fs.getLength(f)\n    //  f.getPath()\n    //  f.getOwner()\n    //  f.getGroup()\n    //  f.getPermission().toShort()\n    jobject jStat;\n    jvalue  jVal;\n    jthrowable jthr;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_FS,\n                     \"exists\", JMETHOD1(JPARAM(HADOOP_PATH), \"Z\"),\n                     jPath);\n    if (jthr)\n        return jthr;\n    if (jVal.z == 0) {\n        *fileInfo = NULL;\n        return NULL;\n    }\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS,\n            HADOOP_FS, \"getFileStatus\",\n            JMETHOD1(JPARAM(HADOOP_PATH), JPARAM(HADOOP_STAT)), jPath);\n    if (jthr)\n        return jthr;\n    jStat = jVal.l;\n    *fileInfo = calloc(1, sizeof(hdfsFileInfo));\n    if (!*fileInfo) {\n        destroyLocalReference(env, jStat);\n        return newRuntimeError(env, \"getFileInfo: OOM allocating hdfsFileInfo\");\n    }\n    jthr = getFileInfoFromStat(env, jStat, *fileInfo);\n    destroyLocalReference(env, jStat);\n    return jthr;\n}\n\n\n\nhdfsFileInfo* hdfsListDirectory(hdfsFS fs, const char *path, int *numEntries)\n{\n    // JAVA EQUIVALENT:\n    //  Path p(path);\n    //  Path []pathList = fs.listPaths(p)\n    //  foreach path in pathList \n    //    getFileInfo(path)\n\n    jobject jFS = (jobject)fs;\n    jthrowable jthr;\n    jobject jPath = NULL;\n    hdfsFileInfo *pathList = NULL;\n    jobjectArray jPathList = NULL;\n    jvalue jVal;\n    jsize jPathListSize = 0;\n    int ret;\n    jsize i;\n    jobject tmpStat;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return NULL;\n    }\n\n    //Create an object of org.apache.hadoop.fs.Path\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsListDirectory(%s): constructNewObjectOfPath\", path);\n        goto done;\n    }\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jFS, HADOOP_DFS, \"listStatus\",\n                     JMETHOD1(JPARAM(HADOOP_PATH), JARRPARAM(HADOOP_STAT)),\n                     jPath);\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr,\n            NOPRINT_EXC_ACCESS_CONTROL | NOPRINT_EXC_FILE_NOT_FOUND |\n            NOPRINT_EXC_UNRESOLVED_LINK,\n            \"hdfsListDirectory(%s): 
FileSystem#listStatus\", path);\n        goto done;\n    }\n    jPathList = jVal.l;\n\n    //Figure out the number of entries in that directory\n    jPathListSize = (*env)->GetArrayLength(env, jPathList);\n    if (jPathListSize == 0) {\n        ret = 0;\n        goto done;\n    }\n\n    //Allocate memory\n    pathList = calloc(jPathListSize, sizeof(hdfsFileInfo));\n    if (pathList == NULL) {\n        ret = ENOMEM;\n        goto done;\n    }\n\n    //Save path information in pathList\n    for (i=0; i < jPathListSize; ++i) {\n        tmpStat = (*env)->GetObjectArrayElement(env, jPathList, i);\n        if (!tmpStat) {\n            ret = printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n                \"hdfsListDirectory(%s): GetObjectArrayElement(%d out of %d)\",\n                path, i, jPathListSize);\n            goto done;\n        }\n        jthr = getFileInfoFromStat(env, tmpStat, &pathList[i]);\n        destroyLocalReference(env, tmpStat);\n        if (jthr) {\n            ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"hdfsListDirectory(%s): getFileInfoFromStat(%d out of %d)\",\n                path, i, jPathListSize);\n            goto done;\n        }\n    }\n    ret = 0;\n\ndone:\n    destroyLocalReference(env, jPath);\n    destroyLocalReference(env, jPathList);\n\n    if (ret) {\n        hdfsFreeFileInfo(pathList, jPathListSize);\n        errno = ret;\n        return NULL;\n    }\n    *numEntries = jPathListSize;\n    errno = 0;\n    return pathList;\n}\n\n\n\nhdfsFileInfo *hdfsGetPathInfo(hdfsFS fs, const char *path)\n{\n    // JAVA EQUIVALENT:\n    //  File f(path);\n    //  fs.isDirectory(f)\n    //  fs.lastModified() ??\n    //  fs.getLength(f)\n    //  f.getPath()\n\n    jobject jFS = (jobject)fs;\n    jobject jPath;\n    jthrowable jthr;\n    hdfsFileInfo *fileInfo;\n\n    //Get the JNIEnv* corresponding to current thread\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return NULL;\n    }\n\n    //Create an object of org.apache.hadoop.fs.Path\n    jthr = constructNewObjectOfPath(env, path, &jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"hdfsGetPathInfo(%s): constructNewObjectOfPath\", path);\n        return NULL;\n    }\n    jthr = getFileInfo(env, jFS, jPath, &fileInfo);\n    destroyLocalReference(env, jPath);\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr,\n            NOPRINT_EXC_ACCESS_CONTROL | NOPRINT_EXC_FILE_NOT_FOUND |\n            NOPRINT_EXC_UNRESOLVED_LINK,\n            \"hdfsGetPathInfo(%s): getFileInfo\", path);\n        return NULL;\n    }\n    if (!fileInfo) {\n        errno = ENOENT;\n        return NULL;\n    }\n    return fileInfo;\n}\n\nstatic void hdfsFreeFileInfoEntry(hdfsFileInfo *hdfsFileInfo)\n{\n    free(hdfsFileInfo->mName);\n    free(hdfsFileInfo->mOwner);\n    free(hdfsFileInfo->mGroup);\n    memset(hdfsFileInfo, 0, sizeof(*hdfsFileInfo));\n}\n\nvoid hdfsFreeFileInfo(hdfsFileInfo *hdfsFileInfo, int numEntries)\n{\n    //Free the mName, mOwner, and mGroup\n    int i;\n    for (i=0; i < numEntries; ++i) {\n        hdfsFreeFileInfoEntry(hdfsFileInfo + i);\n    }\n\n    //Free entire block\n    free(hdfsFileInfo);\n}\n\nint hdfsFileIsEncrypted(hdfsFileInfo *fileInfo)\n{\n    struct hdfsExtendedFileInfo *extInfo;\n\n    extInfo = getExtendedFileInfo(fileInfo);\n    return !!(extInfo->flags & HDFS_EXTENDED_FILE_INFO_ENCRYPTED);\n}\n\nchar* hdfsGetLastExceptionRootCause()\n{\n  return 
getLastTLSExceptionRootCause();\n}\n\nchar* hdfsGetLastExceptionStackTrace()\n{\n  return getLastTLSExceptionStackTrace();\n}\n\n/**\n * vim: ts=4: sw=4: et:\n */\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/hdfs.h",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef LIBHDFS_HDFS_H\n#define LIBHDFS_HDFS_H\n\n#include <errno.h> /* for EINTERNAL, etc. */\n#include <fcntl.h> /* for O_RDONLY, O_WRONLY */\n#include <stdint.h> /* for uint64_t, etc. */\n#include <time.h> /* for time_t */\n\n/*\n * Support export of DLL symbols during libhdfs build, and import of DLL symbols\n * during client application build.  A client application may optionally define\n * symbol LIBHDFS_DLL_IMPORT in its build.  This is not strictly required, but\n * the compiler can produce more efficient code with it.\n */\n#ifdef WIN32\n    #ifdef LIBHDFS_DLL_EXPORT\n        #define LIBHDFS_EXTERNAL __declspec(dllexport)\n    #elif LIBHDFS_DLL_IMPORT\n        #define LIBHDFS_EXTERNAL __declspec(dllimport)\n    #else\n        #define LIBHDFS_EXTERNAL\n    #endif\n#else\n    #ifdef LIBHDFS_DLL_EXPORT\n        #define LIBHDFS_EXTERNAL __attribute__((visibility(\"default\")))\n    #elif LIBHDFS_DLL_IMPORT\n        #define LIBHDFS_EXTERNAL __attribute__((visibility(\"default\")))\n    #else\n        #define LIBHDFS_EXTERNAL\n    #endif\n#endif\n\n#ifndef O_RDONLY\n#define O_RDONLY 1\n#endif\n\n#ifndef O_WRONLY \n#define O_WRONLY 2\n#endif\n\n#ifndef EINTERNAL\n#define EINTERNAL 255 \n#endif\n\n#define ELASTIC_BYTE_BUFFER_POOL_CLASS \\\n  \"org/apache/hadoop/io/ElasticByteBufferPool\"\n\n/** All APIs set errno to meaningful values */\n\n#ifdef __cplusplus\nextern  \"C\" {\n#endif\n    /**\n     * Some utility decls used in libhdfs.\n     */\n    struct hdfsBuilder;\n    typedef int32_t   tSize; /// size of data for read/write io ops \n    typedef time_t    tTime; /// time type in seconds\n    typedef int64_t   tOffset;/// offset within the file\n    typedef uint16_t  tPort; /// port\n    typedef enum tObjectKind {\n        kObjectKindFile = 'F',\n        kObjectKindDirectory = 'D',\n    } tObjectKind;\n    struct hdfsStreamBuilder;\n\n\n    /**\n     * The C reflection of org.apache.org.hadoop.FileSystem .\n     */\n    struct hdfs_internal;\n    typedef struct hdfs_internal* hdfsFS;\n    \n    struct hdfsFile_internal;\n    typedef struct hdfsFile_internal* hdfsFile;\n\n    struct hadoopRzOptions;\n\n    struct hadoopRzBuffer;\n\n    /**\n     * Determine if a file is open for read.\n     *\n     * @param file     The HDFS file\n     * @return         1 if the file is open for read; 0 otherwise\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsFileIsOpenForRead(hdfsFile file);\n\n    /**\n     * Determine if a file is open for write.\n     *\n     * @param file     The HDFS file\n     * @return         1 if the file is open for write; 0 otherwise\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsFileIsOpenForWrite(hdfsFile file);\n\n    struct hdfsReadStatistics {\n  
    uint64_t totalBytesRead;\n      uint64_t totalLocalBytesRead;\n      uint64_t totalShortCircuitBytesRead;\n      uint64_t totalZeroCopyBytesRead;\n    };\n\n    /**\n     * Get read statistics about a file.  This is only applicable to files\n     * opened for reading.\n     *\n     * @param file     The HDFS file\n     * @param stats    (out parameter) on a successful return, the read\n     *                 statistics.  Unchanged otherwise.  You must free the\n     *                 returned statistics with hdfsFileFreeReadStatistics.\n     * @return         0 if the statistics were successfully returned,\n     *                 -1 otherwise.  On a failure, please check errno against\n     *                 ENOTSUP.  webhdfs, LocalFilesystem, and so forth may\n     *                 not support read statistics.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsFileGetReadStatistics(hdfsFile file,\n                                  struct hdfsReadStatistics **stats);\n\n    /**\n     * @param stats    HDFS read statistics for a file.\n     *\n     * @return the number of remote bytes read.\n     */\n    LIBHDFS_EXTERNAL\n    int64_t hdfsReadStatisticsGetRemoteBytesRead(\n                            const struct hdfsReadStatistics *stats);\n\n    /**\n     * Clear the read statistics for a file.\n     *\n     * @param file      The file to clear the read statistics of.\n     *\n     * @return          0 on success; the error code otherwise.\n     *                  EINVAL: the file is not open for reading.\n     *                  ENOTSUP: the file does not support clearing the read\n     *                  statistics.\n     *                  Errno will also be set to this code on failure.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsFileClearReadStatistics(hdfsFile file);\n\n    /**\n     * Free some HDFS read statistics.\n     *\n     * @param stats    The HDFS read statistics to free.\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsFileFreeReadStatistics(struct hdfsReadStatistics *stats);\n\n    struct hdfsHedgedReadMetrics {\n      uint64_t hedgedReadOps;\n      uint64_t hedgedReadOpsWin;\n      uint64_t hedgedReadOpsInCurThread;\n    };\n\n    /**\n     * Get cluster wide hedged read metrics.\n     *\n     * @param fs       The configured filesystem handle\n     * @param metrics  (out parameter) on a successful return, the hedged read\n     *                 metrics. Unchanged otherwise. You must free the returned\n     *                 statistics with hdfsFreeHedgedReadMetrics.\n     * @return         0 if the metrics were successfully returned, -1 otherwise.\n     *                 On a failure, please check errno against\n     *                 ENOTSUP. webhdfs, LocalFilesystem, and so forth may\n     *                 not support hedged read metrics.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsGetHedgedReadMetrics(hdfsFS fs, struct hdfsHedgedReadMetrics **metrics);\n\n    /**\n     * Free HDFS Hedged read metrics.\n     *\n     * @param metrics  The HDFS Hedged read metrics to free\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsFreeHedgedReadMetrics(struct hdfsHedgedReadMetrics *metrics);\n\n    /** \n     * hdfsConnectAsUser - Connect to a hdfs file system as a specific user\n     * Connect to the hdfs.\n     * @param nn   The NameNode.  See hdfsBuilderSetNameNode for details.\n     * @param port The port on which the server is listening.\n     * @param user the user name (this is hadoop domain user). 
Or NULL is equivalent to hdfsConnect(host, port)\n     * @return Returns a handle to the filesystem or NULL on error.\n     * @deprecated Use hdfsBuilderConnect instead. \n     */\n     LIBHDFS_EXTERNAL\n     hdfsFS hdfsConnectAsUser(const char* nn, tPort port, const char *user);\n\n    /** \n     * hdfsConnect - Connect to an hdfs file system.\n     * Connect to the hdfs.\n     * @param nn   The NameNode.  See hdfsBuilderSetNameNode for details.\n     * @param port The port on which the server is listening.\n     * @return Returns a handle to the filesystem or NULL on error.\n     * @deprecated Use hdfsBuilderConnect instead. \n     */\n     LIBHDFS_EXTERNAL\n     hdfsFS hdfsConnect(const char* nn, tPort port);\n\n    /** \n     * hdfsConnectAsUserNewInstance - Connect to an hdfs file system.\n     *\n     * Forces a new instance to be created\n     *\n     * @param nn     The NameNode.  See hdfsBuilderSetNameNode for details.\n     * @param port   The port on which the server is listening.\n     * @param user   The user name to use when connecting\n     * @return       Returns a handle to the filesystem or NULL on error.\n     * @deprecated   Use hdfsBuilderConnect instead. \n     */\n     LIBHDFS_EXTERNAL\n     hdfsFS hdfsConnectAsUserNewInstance(const char* nn, tPort port, const char *user);\n\n    /** \n     * hdfsConnectNewInstance - Connect to an hdfs file system.\n     *\n     * Forces a new instance to be created\n     *\n     * @param nn     The NameNode.  See hdfsBuilderSetNameNode for details.\n     * @param port   The port on which the server is listening.\n     * @return       Returns a handle to the filesystem or NULL on error.\n     * @deprecated   Use hdfsBuilderConnect instead. \n     */\n     LIBHDFS_EXTERNAL\n     hdfsFS hdfsConnectNewInstance(const char* nn, tPort port);\n\n    /** \n     * Connect to HDFS using the parameters defined by the builder.\n     *\n     * The HDFS builder will be freed, whether or not the connection was\n     * successful.\n     *\n     * Every successful call to hdfsBuilderConnect should be matched with a call\n     * to hdfsDisconnect, when the hdfsFS is no longer needed.\n     *\n     * @param bld    The HDFS builder\n     * @return       Returns a handle to the filesystem, or NULL on error.\n     */\n     LIBHDFS_EXTERNAL\n     hdfsFS hdfsBuilderConnect(struct hdfsBuilder *bld);\n\n    /**\n     * Create an HDFS builder.\n     *\n     * @return The HDFS builder, or NULL on error.\n     */\n    LIBHDFS_EXTERNAL\n    struct hdfsBuilder *hdfsNewBuilder(void);\n\n    /**\n     * Force the builder to always create a new instance of the FileSystem,\n     * rather than possibly finding one in the cache.\n     *\n     * @param bld The HDFS builder\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsBuilderSetForceNewInstance(struct hdfsBuilder *bld);\n\n    /**\n     * Set the HDFS NameNode to connect to.\n     *\n     * @param bld  The HDFS builder\n     * @param nn   The NameNode to use.\n     *             If the string given is 'default', the default NameNode\n     *             configuration will be used (from the XML configuration files).\n     *             If NULL is given, a LocalFileSystem will be created.\n     *             If the string starts with a protocol type such as file:// or\n     *             hdfs://, this protocol type will be used.  
If not, the\n     *             hdfs:// protocol type will be used.\n     *             You may specify a NameNode port in the usual way by\n     *             passing a string of the format hdfs://<hostname>:<port>.\n     *             Alternately, you may set the port with\n     *             hdfsBuilderSetNameNodePort.  However, you must not pass the\n     *             port in two different ways.\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsBuilderSetNameNode(struct hdfsBuilder *bld, const char *nn);\n\n    /**\n     * Set the port of the HDFS NameNode to connect to.\n     *\n     * @param bld The HDFS builder\n     * @param port The port.\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsBuilderSetNameNodePort(struct hdfsBuilder *bld, tPort port);\n\n    /**\n     * Set the username to use when connecting to the HDFS cluster.\n     *\n     * @param bld The HDFS builder\n     * @param userName The user name.  The string will be shallow-copied.\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsBuilderSetUserName(struct hdfsBuilder *bld, const char *userName);\n\n    /**\n     * Set the path to the Kerberos ticket cache to use when connecting to\n     * the HDFS cluster.\n     *\n     * @param bld The HDFS builder\n     * @param kerbTicketCachePath The Kerberos ticket cache path.  The string\n     *                            will be shallow-copied.\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsBuilderSetKerbTicketCachePath(struct hdfsBuilder *bld,\n                                   const char *kerbTicketCachePath);\n\n    /**\n     * Free an HDFS builder.\n     *\n     * It is normally not necessary to call this function since\n     * hdfsBuilderConnect frees the builder.\n     *\n     * @param bld The HDFS builder\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsFreeBuilder(struct hdfsBuilder *bld);\n\n    /**\n     * Set a configuration string for an HdfsBuilder.\n     *\n     * @param key      The key to set.\n     * @param val      The value, or NULL to set no value.\n     *                 This will be shallow-copied.  You are responsible for\n     *                 ensuring that it remains valid until the builder is\n     *                 freed.\n     *\n     * @return         0 on success; nonzero error code otherwise.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsBuilderConfSetStr(struct hdfsBuilder *bld, const char *key,\n                              const char *val);\n\n    /**\n     * Get a configuration string.\n     *\n     * @param key      The key to find\n     * @param val      (out param) The value.  This will be set to NULL if the\n     *                 key isn't found.  You must free this string with\n     *                 hdfsConfStrFree.\n     *\n     * @return         0 on success; nonzero error code otherwise.\n     *                 Failure to find the key is not an error.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsConfGetStr(const char *key, char **val);\n\n    /**\n     * Get a configuration integer.\n     *\n     * @param key      The key to find\n     * @param val      (out param) The value.  This will NOT be changed if the\n     *                 key isn't found.\n     *\n     * @return         0 on success; nonzero error code otherwise.\n     *                 Failure to find the key is not an error.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsConfGetInt(const char *key, int32_t *val);\n\n    /**\n     * Free a configuration string found with hdfsConfGetStr. 
\n     *\n     * @param val      A configuration string obtained from hdfsConfGetStr\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsConfStrFree(char *val);\n\n    /** \n     * hdfsDisconnect - Disconnect from the hdfs file system.\n     * Disconnect from hdfs.\n     * @param fs The configured filesystem handle.\n     * @return Returns 0 on success, -1 on error.\n     *         Even if there is an error, the resources associated with the\n     *         hdfsFS will be freed.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsDisconnect(hdfsFS fs);\n        \n    /** \n     * hdfsOpenFile - Open an hdfs file in given mode.\n     * @deprecated    Use the hdfsStreamBuilder functions instead.\n     * This function does not support setting block sizes bigger than 2 GB.\n     *\n     * @param fs The configured filesystem handle.\n     * @param path The full path to the file.\n     * @param flags - a bitwise OR of fcntl.h file flags - supported flags are O_RDONLY, O_WRONLY (meaning create or overwrite, i.e., implies O_TRUNC), \n     * O_WRONLY|O_APPEND. Other flags are generally ignored other than (O_RDWR || (O_EXCL & O_CREAT)) which return NULL and set errno equal to ENOTSUP.\n     * @param bufferSize Size of buffer for read/write - pass 0 if you want\n     * to use the default configured values.\n     * @param replication Block replication - pass 0 if you want to use\n     * the default configured values.\n     * @param blocksize Size of block - pass 0 if you want to use the\n     * default configured values.  Note that if you want a block size bigger\n     * than 2 GB, you must use the hdfsStreamBuilder API rather than this\n     * deprecated function.\n     * @return Returns the handle to the open file or NULL on error.\n     */\n    LIBHDFS_EXTERNAL\n    hdfsFile hdfsOpenFile(hdfsFS fs, const char* path, int flags,\n                          int bufferSize, short replication, tSize blocksize);\n\n    /**\n     * hdfsStreamBuilderAlloc - Allocate an HDFS stream builder.\n     *\n     * @param fs The configured filesystem handle.\n     * @param path The full path to the file.  Will be deep-copied.\n     * @param flags The open flags, as in hdfsOpenFile.\n     * @return Returns the hdfsStreamBuilder, or NULL on error.\n     */\n    LIBHDFS_EXTERNAL\n    struct hdfsStreamBuilder *hdfsStreamBuilderAlloc(hdfsFS fs,\n                                      const char *path, int flags);\n\n    /**\n     * hdfsStreamBuilderFree - Free an HDFS file builder.\n     *\n     * It is normally not necessary to call this function since\n     * hdfsStreamBuilderBuild frees the builder.\n     *\n     * @param bld The hdfsStreamBuilder to free.\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsStreamBuilderFree(struct hdfsStreamBuilder *bld);\n\n    /**\n     * hdfsStreamBuilderSetBufferSize - Set the stream buffer size.\n     *\n     * @param bld The hdfs stream builder.\n     * @param bufferSize The buffer size to set.\n     *\n     * @return 0 on success, or -1 on error.  Errno will be set on error.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsStreamBuilderSetBufferSize(struct hdfsStreamBuilder *bld,\n                                       int32_t bufferSize);\n\n    /**\n     * hdfsStreamBuilderSetReplication - Set the replication for the stream.\n     * This is only relevant for output streams, which will create new blocks.\n     *\n     * @param bld The hdfs stream builder.\n     * @param replication The replication to set.\n     *\n     * @return 0 on success, or -1 on error.  
Errno will be set on error.\n     *              If you call this on an input stream builder, you will get\n     *              EINVAL, because this configuration is not relevant to input\n     *              streams.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsStreamBuilderSetReplication(struct hdfsStreamBuilder *bld,\n                                        int16_t replication);\n\n    /**\n     * hdfsStreamBuilderSetDefaultBlockSize - Set the default block size for\n     * the stream.  This is only relevant for output streams, which will create\n     * new blocks.\n     *\n     * @param bld The hdfs stream builder.\n     * @param defaultBlockSize The default block size to set.\n     *\n     * @return 0 on success, or -1 on error.  Errno will be set on error.\n     *              If you call this on an input stream builder, you will get\n     *              EINVAL, because this configuration is not relevant to input\n     *              streams.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsStreamBuilderSetDefaultBlockSize(struct hdfsStreamBuilder *bld,\n                                       int64_t defaultBlockSize);\n\n    /**\n     * hdfsStreamBuilderBuild - Build the stream by calling open or create.\n     *\n     * @param bld The hdfs stream builder.  This pointer will be freed, whether\n     *            or not the open succeeds.\n     *\n     * @return the stream pointer on success, or NULL on error.  Errno will be\n     * set on error.\n     */\n    LIBHDFS_EXTERNAL\n    hdfsFile hdfsStreamBuilderBuild(struct hdfsStreamBuilder *bld);\n\n    /**\n     * hdfsTruncateFile - Truncate an hdfs file to the given length.\n     * @param fs The configured filesystem handle.\n     * @param path The full path to the file.\n     * @param newlength The size the file is to be truncated to.\n     * @return 1 if the file has been truncated to the desired newlength \n     *         and is immediately available to be reused for write operations \n     *         such as append.\n     *         0 if a background process of adjusting the length of the last \n     *         block has been started, and clients should wait for it to\n     *         complete before proceeding with further file updates.\n     *         -1 on error.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsTruncateFile(hdfsFS fs, const char* path, tOffset newlength);\n\n    /**\n     * hdfsUnbufferFile - Reduce the buffering done on a file.\n     *\n     * @param file  The file to unbuffer.\n     * @return      0 on success\n     *              ENOTSUP if the file does not support unbuffering\n     *              Errno will also be set to this value.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsUnbufferFile(hdfsFile file);\n\n    /** \n     * hdfsCloseFile - Close an open file. \n     * @param fs The configured filesystem handle.\n     * @param file The file handle.\n     * @return Returns 0 on success, -1 on error.  \n     *         On error, errno will be set appropriately.\n     *         If the hdfs file was valid, the memory associated with it will\n     *         be freed at the end of this call, even if there was an I/O\n     *         error.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsCloseFile(hdfsFS fs, hdfsFile file);\n\n\n    /** \n     * hdfsExists - Checks if a given path exists on the filesystem \n     * @param fs The configured filesystem handle.\n     * @param path The path to look for\n     * @return Returns 0 on success, -1 on error.  
\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsExists(hdfsFS fs, const char *path);\n\n\n    /** \n     * hdfsSeek - Seek to given offset in file. \n     * This works only for files opened in read-only mode. \n     * @param fs The configured filesystem handle.\n     * @param file The file handle.\n     * @param desiredPos Offset into the file to seek into.\n     * @return Returns 0 on success, -1 on error.  \n     */\n    LIBHDFS_EXTERNAL\n    int hdfsSeek(hdfsFS fs, hdfsFile file, tOffset desiredPos); \n\n\n    /** \n     * hdfsTell - Get the current offset in the file, in bytes.\n     * @param fs The configured filesystem handle.\n     * @param file The file handle.\n     * @return Current offset, -1 on error.\n     */\n    LIBHDFS_EXTERNAL\n    tOffset hdfsTell(hdfsFS fs, hdfsFile file);\n\n\n    /** \n     * hdfsRead - Read data from an open file.\n     * @param fs The configured filesystem handle.\n     * @param file The file handle.\n     * @param buffer The buffer to copy read bytes into.\n     * @param length The length of the buffer.\n     * @return      On success, a positive number indicating how many bytes\n     *              were read.\n     *              On end-of-file, 0.\n     *              On error, -1.  Errno will be set to the error code.\n     *              Just like the POSIX read function, hdfsRead will return -1\n     *              and set errno to EINTR if data is temporarily unavailable,\n     *              but we are not yet at the end of the file.\n     */\n    LIBHDFS_EXTERNAL\n    tSize hdfsRead(hdfsFS fs, hdfsFile file, void* buffer, tSize length);\n\n    /** \n     * hdfsPread - Positional read of data from an open file.\n     * @param fs The configured filesystem handle.\n     * @param file The file handle.\n     * @param position Position from which to read\n     * @param buffer The buffer to copy read bytes into.\n     * @param length The length of the buffer.\n     * @return      See hdfsRead\n     */\n    LIBHDFS_EXTERNAL\n    tSize hdfsPread(hdfsFS fs, hdfsFile file, tOffset position,\n                    void* buffer, tSize length);\n\n\n    /** \n     * hdfsWrite - Write data into an open file.\n     * @param fs The configured filesystem handle.\n     * @param file The file handle.\n     * @param buffer The data.\n     * @param length The number of bytes to write. \n     * @return Returns the number of bytes written, -1 on error.\n     */\n    LIBHDFS_EXTERNAL\n    tSize hdfsWrite(hdfsFS fs, hdfsFile file, const void* buffer,\n                    tSize length);\n\n\n    /** \n     * hdfsFlush - Flush the data. \n     * @param fs The configured filesystem handle.\n     * @param file The file handle.\n     * @return Returns 0 on success, -1 on error. \n     */\n    LIBHDFS_EXTERNAL\n    int hdfsFlush(hdfsFS fs, hdfsFile file);\n\n\n    /**\n     * hdfsHFlush - Flush out the data in the client's user buffer. After the\n     * return of this call, new readers will see the data.\n     * @param fs configured filesystem handle\n     * @param file file handle\n     * @return 0 on success, -1 on error and sets errno\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsHFlush(hdfsFS fs, hdfsFile file);\n\n\n    /**\n     * hdfsHSync - Similar to POSIX fsync, flush out the data in the client's \n     * user buffer 
all the way to the disk device (but the disk may have \n     * it in its cache).\n     * @param fs configured filesystem handle\n     * @param file file handle\n     * @return 0 on success, -1 on error and sets errno\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsHSync(hdfsFS fs, hdfsFile file);\n\n\n    /**\n     * hdfsAvailable - Number of bytes that can be read from this\n     * input stream without blocking.\n     * @param fs The configured filesystem handle.\n     * @param file The file handle.\n     * @return Returns available bytes; -1 on error. \n     */\n    LIBHDFS_EXTERNAL\n    int hdfsAvailable(hdfsFS fs, hdfsFile file);\n\n\n    /**\n     * hdfsCopy - Copy file from one filesystem to another.\n     * @param srcFS The handle to source filesystem.\n     * @param src The path of source file. \n     * @param dstFS The handle to destination filesystem.\n     * @param dst The path of destination file. \n     * @return Returns 0 on success, -1 on error. \n     */\n    LIBHDFS_EXTERNAL\n    int hdfsCopy(hdfsFS srcFS, const char* src, hdfsFS dstFS, const char* dst);\n\n\n    /**\n     * hdfsMove - Move file from one filesystem to another.\n     * @param srcFS The handle to source filesystem.\n     * @param src The path of source file. \n     * @param dstFS The handle to destination filesystem.\n     * @param dst The path of destination file. \n     * @return Returns 0 on success, -1 on error. \n     */\n    LIBHDFS_EXTERNAL\n    int hdfsMove(hdfsFS srcFS, const char* src, hdfsFS dstFS, const char* dst);\n\n\n    /**\n     * hdfsDelete - Delete file. \n     * @param fs The configured filesystem handle.\n     * @param path The path of the file. \n     * @param recursive if path is a directory and set to \n     * non-zero, the directory is deleted recursively; otherwise the call\n     * fails. In case of a file the recursive argument is irrelevant.\n     * @return Returns 0 on success, -1 on error. \n     */\n    LIBHDFS_EXTERNAL\n    int hdfsDelete(hdfsFS fs, const char* path, int recursive);\n\n    /**\n     * hdfsRename - Rename file. \n     * @param fs The configured filesystem handle.\n     * @param oldPath The path of the source file. \n     * @param newPath The path of the destination file. \n     * @return Returns 0 on success, -1 on error. \n     */\n    LIBHDFS_EXTERNAL\n    int hdfsRename(hdfsFS fs, const char* oldPath, const char* newPath);\n\n    /**\n     * hdfsRenameOverwrite - Rename file, overwriting the destination if it\n     * exists.\n     * @param fs The configured filesystem handle.\n     * @param oldPath The path of the source file.\n     * @param newPath The path of the destination file.\n     * @return Returns 0 on success, -1 on error.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsRenameOverwrite(hdfsFS fs, const char* oldPath, const char* newPath);\n\n\n    /** \n     * hdfsGetWorkingDirectory - Get the current working directory for\n     * the given filesystem.\n     * @param fs The configured filesystem handle.\n     * @param buffer The user-buffer to copy path of cwd into. \n     * @param bufferSize The length of user-buffer.\n     * @return Returns buffer, NULL on error.\n     */\n    LIBHDFS_EXTERNAL\n    char* hdfsGetWorkingDirectory(hdfsFS fs, char *buffer, size_t bufferSize);\n\n\n    /** \n     * hdfsSetWorkingDirectory - Set the working directory. All relative\n     * paths will be resolved relative to it.\n     * @param fs The configured filesystem handle.\n     * @param path The path of the new 'cwd'. \n     * @return Returns 0 on success, -1 on error. 
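\n     *\n     * Illustrative sketch (the path and buffer size are arbitrary):\n     *\n     *     char cwd[4096];\n     *     if (hdfsSetWorkingDirectory(fs, \"/user/alice\") == 0) {\n     *         hdfsGetWorkingDirectory(fs, cwd, sizeof(cwd));\n     *     }\n     *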
\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsSetWorkingDirectory(hdfsFS fs, const char* path);\n\n\n    /** \n     * hdfsCreateDirectory - Make the given file and all non-existent\n     * parents into directories.\n     * @param fs The configured filesystem handle.\n     * @param path The path of the directory. \n     * @return Returns 0 on success, -1 on error. \n     */\n    LIBHDFS_EXTERNAL\n    int hdfsCreateDirectory(hdfsFS fs, const char* path);\n\n\n    /** \n     * hdfsSetReplication - Set the replication of the specified\n     * file to the supplied value.\n     * @param fs The configured filesystem handle.\n     * @param path The path of the file. \n     * @param replication The new replication factor.\n     * @return Returns 0 on success, -1 on error. \n     */\n    LIBHDFS_EXTERNAL\n    int hdfsSetReplication(hdfsFS fs, const char* path, int16_t replication);\n\n\n    /** \n     * hdfsFileInfo - Information about a file/directory.\n     */\n    typedef struct  {\n        tObjectKind mKind;   /* file or directory */\n        char *mName;         /* the name of the file */\n        tTime mLastMod;      /* the last modification time for the file in seconds */\n        tOffset mSize;       /* the size of the file in bytes */\n        short mReplication;    /* the count of replicas */\n        tOffset mBlockSize;  /* the block size for the file */\n        char *mOwner;        /* the owner of the file */\n        char *mGroup;        /* the group associated with the file */\n        short mPermissions;  /* the permissions associated with the file */\n        tTime mLastAccess;    /* the last access time for the file in seconds */\n    } hdfsFileInfo;\n\n\n    /** \n     * hdfsListDirectory - Get list of files/directories for a given\n     * directory-path. hdfsFreeFileInfo should be called to deallocate memory. \n     * @param fs The configured filesystem handle.\n     * @param path The path of the directory. \n     * @param numEntries Set to the number of files/directories in path.\n     * @return Returns a dynamically-allocated array of hdfsFileInfo\n     * objects; NULL on error or empty directory.\n     * errno is set to non-zero on error or zero on success.\n     */\n    LIBHDFS_EXTERNAL\n    hdfsFileInfo *hdfsListDirectory(hdfsFS fs, const char* path,\n                                    int *numEntries);\n\n\n    /** \n     * hdfsGetPathInfo - Get information about a path as a (dynamically\n     * allocated) single hdfsFileInfo struct. hdfsFreeFileInfo should be\n     * called when the pointer is no longer needed.\n     * @param fs The configured filesystem handle.\n     * @param path The path of the file. 
\n     * @return Returns a dynamically-allocated hdfsFileInfo object;\n     * NULL on error.\n     */\n    LIBHDFS_EXTERNAL\n    hdfsFileInfo *hdfsGetPathInfo(hdfsFS fs, const char* path);\n\n\n    /** \n     * hdfsFreeFileInfo - Free up the hdfsFileInfo array (including fields) \n     * @param hdfsFileInfo The array of dynamically-allocated hdfsFileInfo\n     * objects.\n     * @param numEntries The size of the array.\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsFreeFileInfo(hdfsFileInfo *hdfsFileInfo, int numEntries);\n\n    /**\n     * hdfsFileIsEncrypted: determine if a file is encrypted based on its\n     * hdfsFileInfo.\n     * @return -1 if there was an error (errno will be set), 0 if the file is\n     *         not encrypted, 1 if the file is encrypted.\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsFileIsEncrypted(hdfsFileInfo *hdfsFileInfo);\n\n\n    /** \n     * hdfsGetHosts - Get hostnames where a particular block (determined by\n     * pos & blocksize) of a file is stored. The last element in the array\n     * is NULL. Due to replication, a single block could be present on\n     * multiple hosts.\n     * @param fs The configured filesystem handle.\n     * @param path The path of the file. \n     * @param start The start of the block.\n     * @param length The length of the block.\n     * @return Returns a dynamically-allocated 2-d array of blocks-hosts;\n     * NULL on error.\n     */\n    LIBHDFS_EXTERNAL\n    char*** hdfsGetHosts(hdfsFS fs, const char* path, \n            tOffset start, tOffset length);\n\n\n    /** \n     * hdfsFreeHosts - Free up the structure returned by hdfsGetHosts\n     * @param blockHosts The dynamically-allocated 2-d array of blocks-hosts\n     * returned by hdfsGetHosts.\n     */\n    LIBHDFS_EXTERNAL\n    void hdfsFreeHosts(char ***blockHosts);\n\n\n    /** \n     * hdfsGetDefaultBlockSize - Get the default blocksize.\n     *\n     * @param fs            The configured filesystem handle.\n     * @deprecated          Use hdfsGetDefaultBlockSizeAtPath instead.\n     *\n     * @return              Returns the default blocksize, or -1 on error.\n     */\n    LIBHDFS_EXTERNAL\n    tOffset hdfsGetDefaultBlockSize(hdfsFS fs);\n\n\n    /** \n     * hdfsGetDefaultBlockSizeAtPath - Get the default blocksize at the\n     * filesystem indicated by a given path.\n     *\n     * @param fs            The configured filesystem handle.\n     * @param path          The given path will be used to locate the actual\n     *                      filesystem.  The full path does not have to exist.\n     *\n     * @return              Returns the default blocksize, or -1 on error.\n     */\n    LIBHDFS_EXTERNAL\n    tOffset hdfsGetDefaultBlockSizeAtPath(hdfsFS fs, const char *path);\n\n\n    /** \n     * hdfsGetCapacity - Return the raw capacity of the filesystem.  \n     * @param fs The configured filesystem handle.\n     * @return Returns the raw-capacity; -1 on error. \n     */\n    LIBHDFS_EXTERNAL\n    tOffset hdfsGetCapacity(hdfsFS fs);\n\n\n    /** \n     * hdfsGetUsed - Return the total raw size of all files in the filesystem.\n     * @param fs The configured filesystem handle.\n     * @return Returns the total-size; -1 on error. 
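\n     *\n     * Sketch of pairing this with hdfsGetCapacity (assumes a connected fs\n     * handle):\n     *\n     *     tOffset cap = hdfsGetCapacity(fs);\n     *     tOffset used = hdfsGetUsed(fs);\n     *     if (cap > 0 && used >= 0) {\n     *         double fraction = (double)used / (double)cap;\n     *     }\n     *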
\n     */\n    LIBHDFS_EXTERNAL\n    tOffset hdfsGetUsed(hdfsFS fs);\n\n    /** \n     * Change the user and/or group of a file or directory.\n     *\n     * @param fs            The configured filesystem handle.\n     * @param path          the path to the file or directory\n     * @param owner         User string.  Set to NULL for 'no change'\n     * @param group         Group string.  Set to NULL for 'no change'\n     * @return              0 on success else -1\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsChown(hdfsFS fs, const char* path, const char *owner,\n                  const char *group);\n\n    /** \n     * hdfsChmod\n     * @param fs The configured filesystem handle.\n     * @param path the path to the file or directory\n     * @param mode the bitmask to set it to\n     * @return 0 on success else -1\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsChmod(hdfsFS fs, const char* path, short mode);\n\n    /** \n     * hdfsUtime\n     * @param fs The configured filesystem handle.\n     * @param path the path to the file or directory\n     * @param mtime new modification time or -1 for no change\n     * @param atime new access time or -1 for no change\n     * @return 0 on success else -1\n     */\n    LIBHDFS_EXTERNAL\n    int hdfsUtime(hdfsFS fs, const char* path, tTime mtime, tTime atime);\n\n    /**\n     * Allocate a zero-copy options structure.\n     *\n     * You must free all options structures allocated with this function using\n     * hadoopRzOptionsFree.\n     *\n     * @return            A zero-copy options structure, or NULL if one could\n     *                    not be allocated.  If NULL is returned, errno will\n     *                    contain the error number.\n     */\n    LIBHDFS_EXTERNAL\n    struct hadoopRzOptions *hadoopRzOptionsAlloc(void);\n\n    /**\n     * Determine whether we should skip checksums in read0.\n     *\n     * @param opts        The options structure.\n     * @param skip        Nonzero to skip checksums sometimes; zero to always\n     *                    check them.\n     *\n     * @return            0 on success; -1 plus errno on failure.\n     */\n    LIBHDFS_EXTERNAL\n    int hadoopRzOptionsSetSkipChecksum(\n            struct hadoopRzOptions *opts, int skip);\n\n    /**\n     * Set the ByteBufferPool to use with read0.\n     *\n     * @param opts        The options structure.\n     * @param className   If this is NULL, we will not use any\n     *                    ByteBufferPool.  
If this is non-NULL, it will be\n     *                    treated as the name of the pool class to use.\n     *                    For example, you can use\n     *                    ELASTIC_BYTE_BUFFER_POOL_CLASS.\n     *\n     * @return            0 if the ByteBufferPool class was found and\n     *                    instantiated;\n     *                    -1 plus errno otherwise.\n     */\n    LIBHDFS_EXTERNAL\n    int hadoopRzOptionsSetByteBufferPool(\n            struct hadoopRzOptions *opts, const char *className);\n\n    /**\n     * Free a hadoopRzOptions structure.\n     *\n     * @param opts        The options structure to free.\n     *                    Any associated ByteBufferPool will also be freed.\n     */\n    LIBHDFS_EXTERNAL\n    void hadoopRzOptionsFree(struct hadoopRzOptions *opts);\n\n    /**\n     * Perform a byte buffer read.\n     * If possible, this will be a zero-copy (mmap) read.\n     *\n     * @param file       The file to read from.\n     * @param opts       An options structure created by hadoopRzOptionsAlloc.\n     * @param maxLength  The maximum length to read.  We may read fewer bytes\n     *                   than this length.\n     *\n     * @return           On success, we will return a new hadoopRzBuffer.\n     *                   This buffer will continue to be valid and readable\n     *                   until it is released by hadoopRzBufferFree.  Failure to\n     *                   release a buffer will lead to a memory leak.\n     *                   You can access the data within the hadoopRzBuffer with\n     *                   hadoopRzBufferGet.  If you have reached EOF, the data\n     *                   within the hadoopRzBuffer will be NULL.  You must still\n     *                   free hadoopRzBuffer instances containing NULL.\n     *                   On failure, we will return NULL plus an errno code.\n     *                   errno = EOPNOTSUPP indicates that we could not do a\n     *                   zero-copy read, and there was no ByteBufferPool\n     *                   supplied.\n     */\n    LIBHDFS_EXTERNAL\n    struct hadoopRzBuffer* hadoopReadZero(hdfsFile file,\n            struct hadoopRzOptions *opts, int32_t maxLength);\n\n    /**\n     * Determine the length of the buffer returned from hadoopReadZero.\n     *\n     * @param buffer     a buffer returned from hadoopReadZero.\n     * @return           the length of the buffer.\n     */\n    LIBHDFS_EXTERNAL\n    int32_t hadoopRzBufferLength(const struct hadoopRzBuffer *buffer);\n\n    /**\n     * Get a pointer to the raw buffer returned from hadoopReadZero.\n     *\n     * To find out how many bytes this buffer contains, call\n     * hadoopRzBufferLength.\n     *\n     * @param buffer     a buffer returned from hadoopReadZero.\n     * @return           a pointer to the start of the buffer.  This will be\n     *                   NULL when end-of-file has been reached.\n     */\n    LIBHDFS_EXTERNAL\n    const void *hadoopRzBufferGet(const struct hadoopRzBuffer *buffer);\n\n    /**\n     * Release a buffer obtained through hadoopReadZero.\n     *\n     * @param file       The hdfs stream that created this buffer.  This must be\n     *                   the same stream you called hadoopReadZero on.\n     * @param buffer     The buffer to release.\n     */\n    LIBHDFS_EXTERNAL\n    void hadoopRzBufferFree(hdfsFile file, struct hadoopRzBuffer *buffer);\n\n    /**\n     * Get the last exception root cause that happened in the context of the\n     * current thread, i.e. 
the thread that called into libHDFS.\n     *\n     * The pointer returned by this function is guaranteed to be valid until\n     * the next call into libHDFS by the current thread.\n     * Users of this function should not free the pointer.\n     *\n     * A NULL will be returned if no exception information could be retrieved\n     * for the previous call.\n     *\n     * @return           The root cause as a C-string.\n     */\n    LIBHDFS_EXTERNAL\n    char* hdfsGetLastExceptionRootCause(void);\n\n    /**\n     * Get the last exception stack trace that happened in the context of the\n     * current thread, i.e. the thread that called into libHDFS.\n     *\n     * The pointer returned by this function is guaranteed to be valid until\n     * the next call into libHDFS by the current thread.\n     * Users of this function should not free the pointer.\n     *\n     * A NULL will be returned if no exception information could be retrieved\n     * for the previous call.\n     *\n     * @return           The stack trace as a C-string.\n     */\n    LIBHDFS_EXTERNAL\n    char* hdfsGetLastExceptionStackTrace(void);\n\n#ifdef __cplusplus\n}\n#endif\n\n#undef LIBHDFS_EXTERNAL\n#endif /*LIBHDFS_HDFS_H*/\n\n/**\n * vim: ts=4: sw=4: et\n */\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/htable.c",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"htable.h\"\n\n#include <errno.h>\n#include <inttypes.h>\n#include <stdlib.h>\n#include <string.h>\n#include <unistd.h>\n\nstruct htable_pair {\n    void *key;\n    void *val;\n};\n\n/**\n * A hash table which uses linear probing.\n */\nstruct htable {\n    uint32_t capacity;\n    uint32_t used;\n    htable_hash_fn_t hash_fun;\n    htable_eq_fn_t eq_fun;\n    struct htable_pair *elem;\n};\n\n/**\n * An internal function for inserting a value into the hash table.\n *\n * Note: this function assumes that you have made enough space in the table.\n *\n * @param nelem         The new element to insert.\n * @param capacity      The capacity of the hash table.\n * @param hash_fun      The hash function to use.\n * @param key           The key to insert.\n * @param val           The value to insert.\n */\nstatic void htable_insert_internal(struct htable_pair *nelem, \n        uint32_t capacity, htable_hash_fn_t hash_fun, void *key,\n        void *val)\n{\n    uint32_t i;\n\n    i = hash_fun(key, capacity);\n    while (1) {\n        if (!nelem[i].key) {\n            nelem[i].key = key;\n            nelem[i].val = val;\n            return;\n        }\n        i++;\n        if (i == capacity) {\n            i = 0;\n        }\n    }\n}\n\nstatic int htable_realloc(struct htable *htable, uint32_t new_capacity)\n{\n    struct htable_pair *nelem;\n    uint32_t i, old_capacity = htable->capacity;\n    htable_hash_fn_t hash_fun = htable->hash_fun;\n\n    nelem = calloc(new_capacity, sizeof(struct htable_pair));\n    if (!nelem) {\n        return ENOMEM;\n    }\n    for (i = 0; i < old_capacity; i++) {\n        struct htable_pair *pair = htable->elem + i;\n        if (pair->key) {\n            htable_insert_internal(nelem, new_capacity, hash_fun,\n                                   pair->key, pair->val);\n        }\n    }\n    free(htable->elem);\n    htable->elem = nelem;\n    htable->capacity = new_capacity;\n    return 0;\n}\n\nstatic uint32_t round_up_to_power_of_2(uint32_t i)\n{\n    if (i == 0) {\n        return 1;\n    }\n    i--;\n    i |= i >> 1;\n    i |= i >> 2;\n    i |= i >> 4;\n    i |= i >> 8;\n    i |= i >> 16;\n    i++;\n    return i;\n}\n\nstruct htable *htable_alloc(uint32_t size,\n                htable_hash_fn_t hash_fun, htable_eq_fn_t eq_fun)\n{\n    struct htable *htable;\n\n    htable = calloc(1, sizeof(*htable));\n    if (!htable) {\n        return NULL;\n    }\n    size = round_up_to_power_of_2(size);\n    if (size < HTABLE_MIN_SIZE) {\n        size = HTABLE_MIN_SIZE;\n    }\n    htable->hash_fun = hash_fun;\n    htable->eq_fun = eq_fun;\n    htable->used = 0;\n    if (htable_realloc(htable, size)) {\n        free(htable);\n        return 
NULL;\n    }\n    return htable;\n}\n\nvoid htable_visit(struct htable *htable, visitor_fn_t fun, void *ctx)\n{\n    uint32_t i;\n\n    for (i = 0; i != htable->capacity; ++i) {\n        struct htable_pair *elem = htable->elem + i;\n        if (elem->key) {\n            fun(ctx, elem->key, elem->val);\n        }\n    }\n}\n\nvoid htable_free(struct htable *htable)\n{\n    if (htable) {\n        free(htable->elem);\n        free(htable);\n    }\n}\n\nint htable_put(struct htable *htable, void *key, void *val)\n{\n    int ret;\n    uint32_t nused;\n\n    // NULL is not a valid key value.\n    // This helps us implement htable_get_internal efficiently, since we know\n    // that we can stop when we encounter the first NULL key.\n    if (!key) {\n        return EINVAL;\n    }\n    // NULL is not a valid value.  Otherwise the results of htable_get would\n    // be confusing (does a NULL return mean entry not found, or that the\n    // entry was found and was NULL?) \n    if (!val) {\n        return EINVAL;\n    }\n    // Re-hash if we have used more than half of the hash table\n    nused = htable->used + 1;\n    if (nused >= (htable->capacity / 2)) {\n        ret = htable_realloc(htable, htable->capacity * 2);\n        if (ret)\n            return ret;\n    }\n    htable_insert_internal(htable->elem, htable->capacity,\n                                htable->hash_fun, key, val);\n    htable->used++;\n    return 0;\n}\n\nstatic int htable_get_internal(const struct htable *htable,\n                               const void *key, uint32_t *out)\n{\n    uint32_t start_idx, idx;\n\n    start_idx = htable->hash_fun(key, htable->capacity);\n    idx = start_idx;\n    while (1) {\n        struct htable_pair *pair = htable->elem + idx;\n        if (!pair->key) {\n            // We always maintain the invariant that the entries corresponding\n            // to a given key are stored in a contiguous block, not separated\n            // by any NULLs.  So if we encounter a NULL, our search is over.\n            return ENOENT;\n        } else if (htable->eq_fun(pair->key, key)) {\n            *out = idx;\n            return 0;\n        }\n        idx++;\n        if (idx == htable->capacity) {\n            idx = 0;\n        }\n        if (idx == start_idx) {\n            return ENOENT;\n        }\n    }\n}\n\nvoid *htable_get(const struct htable *htable, const void *key)\n{\n    uint32_t idx;\n\n    if (htable_get_internal(htable, key, &idx)) {\n        return NULL;\n    }\n    return htable->elem[idx].val;\n}\n\nvoid htable_pop(struct htable *htable, const void *key,\n                void **found_key, void **found_val)\n{\n    uint32_t hole, i;\n    const void *nkey;\n\n    if (htable_get_internal(htable, key, &hole)) {\n        *found_key = NULL;\n        *found_val = NULL;\n        return;\n    }\n    i = hole;\n    htable->used--;\n    // We need to maintain the compactness invariant used in\n    // htable_get_internal.  
This invariant specifies that the entries for any\n    // given key are never separated by NULLs (although they may be separated\n    // by entries for other keys.)\n    while (1) {\n        i++;\n        if (i == htable->capacity) {\n            i = 0;\n        }\n        nkey = htable->elem[i].key;\n        if (!nkey) {\n            *found_key = htable->elem[hole].key;\n            *found_val = htable->elem[hole].val;\n            htable->elem[hole].key = NULL;\n            htable->elem[hole].val = NULL;\n            return;\n        } else if (htable->eq_fun(key, nkey)) {\n            htable->elem[hole].key = htable->elem[i].key;\n            htable->elem[hole].val = htable->elem[i].val;\n            hole = i;\n        }\n    }\n}\n\nuint32_t htable_used(const struct htable *htable)\n{\n    return htable->used;\n}\n\nuint32_t htable_capacity(const struct htable *htable)\n{\n    return htable->capacity;\n}\n\nuint32_t ht_hash_string(const void *str, uint32_t max)\n{\n    const char *s = str;\n    uint32_t hash = 0;\n\n    while (*s) {\n        hash = (hash * 31) + *s;\n        s++;\n    }\n    return hash % max;\n}\n\nint ht_compare_string(const void *a, const void *b)\n{\n    return strcmp(a, b) == 0;\n}\n\n// vim: ts=4:sw=4:tw=79:et\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/htable.h",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef HADOOP_CORE_COMMON_HASH_TABLE\n#define HADOOP_CORE_COMMON_HASH_TABLE\n\n#include <inttypes.h>\n#include <stdio.h>\n#include <stdint.h>\n\n#define HTABLE_MIN_SIZE 4\n\nstruct htable;\n\n/**\n * An HTable hash function.\n *\n * @param key       The key.\n * @param capacity  The total capacity.\n *\n * @return          The hash slot.  Must be less than the capacity.\n */\ntypedef uint32_t (*htable_hash_fn_t)(const void *key, uint32_t capacity);\n\n/**\n * An HTable equality function.  Compares two keys.\n *\n * @param a         First key.\n * @param b         Second key.\n *\n * @return          nonzero if the keys are equal.\n */\ntypedef int (*htable_eq_fn_t)(const void *a, const void *b);\n\n/**\n * Allocate a new hash table.\n *\n * @param capacity  The minimum suggested starting capacity.\n * @param hash_fun  The hash function to use in this hash table.\n * @param eq_fun    The equals function to use in this hash table.\n *\n * @return          The new hash table on success; NULL on OOM.\n */\nstruct htable *htable_alloc(uint32_t capacity, htable_hash_fn_t hash_fun,\n                            htable_eq_fn_t eq_fun);\n\ntypedef void (*visitor_fn_t)(void *ctx, void *key, void *val);\n\n/**\n * Visit all of the entries in the hash table.\n *\n * @param htable    The hash table.\n * @param fun       The callback function to invoke on each key and value.\n * @param ctx       Context pointer to pass to the callback.\n */\nvoid htable_visit(struct htable *htable, visitor_fn_t fun, void *ctx);\n\n/**\n * Free the hash table.\n *\n * It is up the calling code to ensure that the keys and values inside the\n * table are de-allocated, if that is necessary.\n *\n * @param htable    The hash table.\n */\nvoid htable_free(struct htable *htable);\n\n/**\n * Add an entry to the hash table.\n *\n * @param htable    The hash table.\n * @param key       The key to add.  This cannot be NULL.\n * @param fun       The value to add.  
This cannot be NULL.\n *\n * @return          0 on success;\n *                  EEXIST if the value already exists in the table;\n *                  ENOMEM if there is not enough memory to add the element.\n *                  EFBIG if the hash table has too many entries to fit in 32\n *                      bits.\n */\nint htable_put(struct htable *htable, void *key, void *val);\n\n/**\n * Get an entry from the hash table.\n *\n * @param htable    The hash table.\n * @param key       The key to find.\n *\n * @return          NULL if there is no such entry; the entry otherwise.\n */\nvoid *htable_get(const struct htable *htable, const void *key);\n\n/**\n * Get an entry from the hash table and remove it.\n *\n * @param htable    The hash table.\n * @param key       The key for the entry to find and remove.\n * @param found_key (out param) NULL if the entry was not found; the found key\n *                      otherwise.\n * @param found_val (out param) NULL if the entry was not found; the found\n *                      value otherwise.\n */\nvoid htable_pop(struct htable *htable, const void *key,\n                void **found_key, void **found_val);\n\n/**\n * Get the number of entries used in the hash table.\n *\n * @param htable    The hash table.\n *\n * @return          The number of entries used in the hash table.\n */\nuint32_t htable_used(const struct htable *htable);\n\n/**\n * Get the capacity of the hash table.\n *\n * @param htable    The hash table.\n *\n * @return          The capacity of the hash table.\n */\nuint32_t htable_capacity(const struct htable *htable);\n\n/**\n * Hash a string.\n *\n * @param str       The string.\n * @param max       Maximum hash value\n *\n * @return          A number less than max.\n */\nuint32_t ht_hash_string(const void *str, uint32_t max);\n\n/**\n * Compare two strings.\n *\n * @param a         The first string.\n * @param b         The second string.\n *\n * @return          1 if the strings are identical; 0 otherwise.\n */\nint ht_compare_string(const void *a, const void *b);\n\n#endif\n\n// vim: ts=4:sw=4:tw=79:et\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/jni_helper.c",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"config.h\"\n#include \"exception.h\"\n#include \"jni_helper.h\"\n#include \"platform.h\"\n#include \"htable.h\"\n#include \"os/mutexes.h\"\n#include \"os/thread_local_storage.h\"\n\n#include <errno.h>\n#include <dirent.h>\n#include <stdio.h> \n#include <string.h> \n\nstatic struct htable *gClassRefHTable = NULL;\n\n/** The Native return types that methods could return */\n#define JVOID         'V'\n#define JOBJECT       'L'\n#define JARRAYOBJECT  '['\n#define JBOOLEAN      'Z'\n#define JBYTE         'B'\n#define JCHAR         'C'\n#define JSHORT        'S'\n#define JINT          'I'\n#define JLONG         'J'\n#define JFLOAT        'F'\n#define JDOUBLE       'D'\n\n\n/**\n * MAX_HASH_TABLE_ELEM: The maximum no. of entries in the hashtable.\n * It's set to 4096 to account for (classNames + No. of threads)\n */\n#define MAX_HASH_TABLE_ELEM 4096\n\n/**\n * Length of buffer for retrieving created JVMs.  (We only ever create one.)\n */\n#define VM_BUF_LENGTH 1\n\nvoid destroyLocalReference(JNIEnv *env, jobject jObject)\n{\n  if (jObject)\n    (*env)->DeleteLocalRef(env, jObject);\n}\n\nstatic jthrowable validateMethodType(JNIEnv *env, MethType methType)\n{\n    if (methType != STATIC && methType != INSTANCE) {\n        return newRuntimeError(env, \"validateMethodType(methType=%d): \"\n            \"illegal method type.\\n\", methType);\n    }\n    return NULL;\n}\n\njthrowable newJavaStr(JNIEnv *env, const char *str, jstring *out)\n{\n    jstring jstr;\n\n    if (!str) {\n        /* Can't pass NULL to NewStringUTF: the result would be\n         * implementation-defined. */\n        *out = NULL;\n        return NULL;\n    }\n    jstr = (*env)->NewStringUTF(env, str);\n    if (!jstr) {\n        /* If NewStringUTF returns NULL, an exception has been thrown,\n         * which we need to handle.  Probaly an OOM. 
*/\n        return getPendingExceptionAndClear(env);\n    }\n    *out = jstr;\n    return NULL;\n}\n\njthrowable newCStr(JNIEnv *env, jstring jstr, char **out)\n{\n    const char *tmp;\n\n    if (!jstr) {\n        *out = NULL;\n        return NULL;\n    }\n    tmp = (*env)->GetStringUTFChars(env, jstr, NULL);\n    if (!tmp) {\n        return getPendingExceptionAndClear(env);\n    }\n    *out = strdup(tmp);\n    (*env)->ReleaseStringUTFChars(env, jstr, tmp);\n    return NULL;\n}\n\njthrowable invokeMethod(JNIEnv *env, jvalue *retval, MethType methType,\n                 jobject instObj, const char *className,\n                 const char *methName, const char *methSignature, ...)\n{\n    va_list args;\n    jclass cls;\n    jmethodID mid;\n    jthrowable jthr;\n    const char *str; \n    char returnType;\n    \n    jthr = validateMethodType(env, methType);\n    if (jthr)\n        return jthr;\n    jthr = globalClassReference(className, env, &cls);\n    if (jthr)\n        return jthr;\n    jthr = methodIdFromClass(className, methName, methSignature, \n                            methType, env, &mid);\n    if (jthr)\n        return jthr;\n    str = methSignature;\n    while (*str != ')') str++;\n    str++;\n    returnType = *str;\n    va_start(args, methSignature);\n    if (returnType == JOBJECT || returnType == JARRAYOBJECT) {\n        jobject jobj = NULL;\n        if (methType == STATIC) {\n            jobj = (*env)->CallStaticObjectMethodV(env, cls, mid, args);\n        }\n        else if (methType == INSTANCE) {\n            jobj = (*env)->CallObjectMethodV(env, instObj, mid, args);\n        }\n        retval->l = jobj;\n    }\n    else if (returnType == JVOID) {\n        if (methType == STATIC) {\n            (*env)->CallStaticVoidMethodV(env, cls, mid, args);\n        }\n        else if (methType == INSTANCE) {\n            (*env)->CallVoidMethodV(env, instObj, mid, args);\n        }\n    }\n    else if (returnType == JBOOLEAN) {\n        jboolean jbool = 0;\n        if (methType == STATIC) {\n            jbool = (*env)->CallStaticBooleanMethodV(env, cls, mid, args);\n        }\n        else if (methType == INSTANCE) {\n            jbool = (*env)->CallBooleanMethodV(env, instObj, mid, args);\n        }\n        retval->z = jbool;\n    }\n    else if (returnType == JSHORT) {\n        jshort js = 0;\n        if (methType == STATIC) {\n            js = (*env)->CallStaticShortMethodV(env, cls, mid, args);\n        }\n        else if (methType == INSTANCE) {\n            js = (*env)->CallShortMethodV(env, instObj, mid, args);\n        }\n        retval->s = js;\n    }\n    else if (returnType == JLONG) {\n        jlong jl = -1;\n        if (methType == STATIC) {\n            jl = (*env)->CallStaticLongMethodV(env, cls, mid, args);\n        }\n        else if (methType == INSTANCE) {\n            jl = (*env)->CallLongMethodV(env, instObj, mid, args);\n        }\n        retval->j = jl;\n    }\n    else if (returnType == JINT) {\n        jint ji = -1;\n        if (methType == STATIC) {\n            ji = (*env)->CallStaticIntMethodV(env, cls, mid, args);\n        }\n        else if (methType == INSTANCE) {\n            ji = (*env)->CallIntMethodV(env, instObj, mid, args);\n        }\n        retval->i = ji;\n    }\n    va_end(args);\n\n    jthr = (*env)->ExceptionOccurred(env);\n    if (jthr) {\n        (*env)->ExceptionClear(env);\n        return jthr;\n    }\n    return NULL;\n}\n\njthrowable constructNewObjectOfClass(JNIEnv *env, jobject *out, const char *className, \n                         
         const char *ctorSignature, ...)\n{\n    va_list args;\n    jclass cls;\n    jmethodID mid; \n    jobject jobj;\n    jthrowable jthr;\n\n    jthr = globalClassReference(className, env, &cls);\n    if (jthr)\n        return jthr;\n    jthr = methodIdFromClass(className, \"<init>\", ctorSignature, \n                            INSTANCE, env, &mid);\n    if (jthr)\n        return jthr;\n    va_start(args, ctorSignature);\n    jobj = (*env)->NewObjectV(env, cls, mid, args);\n    va_end(args);\n    if (!jobj)\n        return getPendingExceptionAndClear(env);\n    *out = jobj;\n    return NULL;\n}\n\n\njthrowable methodIdFromClass(const char *className, const char *methName, \n                            const char *methSignature, MethType methType, \n                            JNIEnv *env, jmethodID *out)\n{\n    jclass cls;\n    jthrowable jthr;\n    jmethodID mid = 0;\n\n    jthr = globalClassReference(className, env, &cls);\n    if (jthr)\n        return jthr;\n    jthr = validateMethodType(env, methType);\n    if (jthr)\n        return jthr;\n    if (methType == STATIC) {\n        mid = (*env)->GetStaticMethodID(env, cls, methName, methSignature);\n    }\n    else if (methType == INSTANCE) {\n        mid = (*env)->GetMethodID(env, cls, methName, methSignature);\n    }\n    if (mid == NULL) {\n        fprintf(stderr, \"could not find method %s from class %s with \"\n            \"signature %s\\n\", methName, className, methSignature);\n        return getPendingExceptionAndClear(env);\n    }\n    *out = mid;\n    return NULL;\n}\n\njthrowable globalClassReference(const char *className, JNIEnv *env, jclass *out)\n{\n    jthrowable jthr = NULL;\n    jclass local_clazz = NULL;\n    jclass clazz = NULL;\n    int ret;\n\n    mutexLock(&hdfsHashMutex);\n    if (!gClassRefHTable) {\n        gClassRefHTable = htable_alloc(MAX_HASH_TABLE_ELEM, ht_hash_string,\n            ht_compare_string);\n        if (!gClassRefHTable) {\n            jthr = newRuntimeError(env, \"htable_alloc failed\\n\");\n            goto done;\n        }\n    }\n    clazz = htable_get(gClassRefHTable, className);\n    if (clazz) {\n        *out = clazz;\n        goto done;\n    }\n    local_clazz = (*env)->FindClass(env,className);\n    if (!local_clazz) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    clazz = (*env)->NewGlobalRef(env, local_clazz);\n    if (!clazz) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    ret = htable_put(gClassRefHTable, (void*)className, clazz);\n    if (ret) {\n        jthr = newRuntimeError(env, \"htable_put failed with error \"\n                               \"code %d\\n\", ret);\n        goto done;\n    }\n    *out = clazz;\n    jthr = NULL;\ndone:\n    mutexUnlock(&hdfsHashMutex);\n    (*env)->DeleteLocalRef(env, local_clazz);\n    if (jthr && clazz) {\n        (*env)->DeleteGlobalRef(env, clazz);\n    }\n    return jthr;\n}\n\njthrowable classNameOfObject(jobject jobj, JNIEnv *env, char **name)\n{\n    jthrowable jthr;\n    jclass cls, clsClass = NULL;\n    jmethodID mid;\n    jstring str = NULL;\n    const char *cstr = NULL;\n    char *newstr;\n\n    cls = (*env)->GetObjectClass(env, jobj);\n    if (cls == NULL) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    clsClass = (*env)->FindClass(env, \"java/lang/Class\");\n    if (clsClass == NULL) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    mid = (*env)->GetMethodID(env, clsClass, \"getName\", 
\"()Ljava/lang/String;\");\n    if (mid == NULL) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    str = (*env)->CallObjectMethod(env, cls, mid);\n    if (str == NULL) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    cstr = (*env)->GetStringUTFChars(env, str, NULL);\n    if (!cstr) {\n        jthr = getPendingExceptionAndClear(env);\n        goto done;\n    }\n    newstr = strdup(cstr);\n    if (newstr == NULL) {\n        jthr = newRuntimeError(env, \"classNameOfObject: out of memory\");\n        goto done;\n    }\n    *name = newstr;\n    jthr = NULL;\n\ndone:\n    destroyLocalReference(env, cls);\n    destroyLocalReference(env, clsClass);\n    if (str) {\n        if (cstr)\n            (*env)->ReleaseStringUTFChars(env, str, cstr);\n        (*env)->DeleteLocalRef(env, str);\n    }\n    return jthr;\n}\n\n\n/**\n * For the given path, expand it by filling in with all *.jar or *.JAR files,\n * separated by PATH_SEPARATOR. Assumes that expanded is big enough to hold the\n * string, eg allocated after using this function with expanded=NULL to get the\n * right size. Also assumes that the path ends with a \"/.\". The length of the\n * expanded path is returned, which includes space at the end for either a\n * PATH_SEPARATOR or null terminator.\n */\nstatic ssize_t wildcard_expandPath(const char* path, char* expanded)\n{\n    struct dirent* file;\n    char* dest = expanded;\n    ssize_t length = 0;\n    size_t pathLength = strlen(path);\n    DIR* dir;\n\n    dir = opendir(path);\n    if (dir != NULL) {\n        // can open dir so try to match with all *.jar and *.JAR entries\n\n#ifdef _LIBHDFS_JNI_HELPER_DEBUGGING_ON_\n        printf(\"wildcard_expandPath: %s\\n\", path);\n#endif\n\n        errno = 0;\n        while ((file = readdir(dir)) != NULL) {\n            const char* filename = file->d_name;\n            const size_t filenameLength = strlen(filename);\n            const char* jarExtension;\n\n            // If filename is smaller than 4 characters then it can not possibly\n            // have extension \".jar\" or \".JAR\"\n            if (filenameLength < 4) {\n                continue;\n            }\n\n            jarExtension = &filename[filenameLength-4];\n            if ((strcmp(jarExtension, \".jar\") == 0) ||\n                (strcmp(jarExtension, \".JAR\") == 0)) {\n\n                // pathLength includes an extra '.' 
which we'll use for either\n                // separator or null termination\n                length += pathLength + filenameLength;\n\n#ifdef _LIBHDFS_JNI_HELPER_DEBUGGING_ON_\n                printf(\"wildcard_scanPath:\\t%s\\t:\\t%zd\\n\", filename, length);\n#endif\n\n                if (expanded != NULL) {\n                    // pathLength includes an extra '.'\n                    strncpy(dest, path, pathLength-1);\n                    dest += pathLength - 1;\n                    strncpy(dest, filename, filenameLength);\n                    dest += filenameLength;\n                    *dest = PATH_SEPARATOR;\n                    dest++;\n\n#ifdef _LIBHDFS_JNI_HELPER_DEBUGGING_ON_\n                    printf(\"wildcard_expandPath:\\t%s\\t:\\t%s\\n\",\n                      filename, expanded);\n#endif\n                }\n            }\n        }\n\n        if (errno != 0) {\n            fprintf(stderr, \"wildcard_expandPath: on readdir %s: %s\\n\",\n              path, strerror(errno));\n            length = -1;\n        }\n\n        if (closedir(dir) != 0) {\n            fprintf(stderr, \"wildcard_expandPath: on closedir %s: %s\\n\",\n                    path, strerror(errno));\n        }\n    } else if ((errno != EACCES) && (errno != ENOENT) && (errno != ENOTDIR)) {\n        // can not opendir due to an error we can not handle\n        fprintf(stderr, \"wildcard_expandPath: on opendir %s: %s\\n\", path,\n                strerror(errno));\n        length = -1;\n    }\n\n    if (length == 0) {\n        // either we failed to open dir due to EACCES, ENOENT, or ENOTDIR, or\n        // we did not find any file that matches *.jar or *.JAR\n\n#ifdef _LIBHDFS_JNI_HELPER_DEBUGGING_ON_\n        fprintf(stderr, \"wildcard_expandPath: can not expand %.*s*: %s\\n\",\n                (int)(pathLength-1), path, strerror(errno));\n#endif\n\n        // in this case, the wildcard expansion is the same as the original\n        // +1 for PATH_SEPARATOR or null termination\n        length = pathLength + 1;\n        if (expanded != NULL) {\n            // pathLength includes an extra '.'\n            strncpy(dest, path, pathLength-1);\n            dest += pathLength-1;\n            *dest = '*'; // restore wildcard\n            dest++;\n            *dest = PATH_SEPARATOR;\n            dest++;\n        }\n    }\n\n    return length;\n}\n\n/**\n * Helper to expand classpaths. Returns the total length of the expanded\n * classpath. If expandedClasspath is not NULL, then fills that with the\n * expanded classpath. 
It assumes that expandedClasspath is of correct size, eg\n * allocated after using this function with expandedClasspath=NULL to get the\n * right size.\n */\nstatic ssize_t getClassPath_helper(const char *classpath, char* expandedClasspath)\n{\n    ssize_t length;\n    ssize_t retval;\n    char* expandedCP_curr;\n    char* cp_token;\n    char* classpath_dup;\n\n    classpath_dup = strdup(classpath);\n    if (classpath_dup == NULL) {\n        fprintf(stderr, \"getClassPath_helper: failed strdup: %s\\n\",\n          strerror(errno));\n        return -1;\n    }\n\n    length = 0;\n\n    // expandedCP_curr is the current pointer\n    expandedCP_curr = expandedClasspath;\n\n    cp_token = strtok(classpath_dup, PATH_SEPARATOR_STR);\n    while (cp_token != NULL) {\n        size_t tokenlen;\n\n#ifdef _LIBHDFS_JNI_HELPER_DEBUGGING_ON_\n        printf(\"%s\\n\", cp_token);\n#endif\n\n        tokenlen = strlen(cp_token);\n        // We only expand if token ends with \"/*\"\n        if ((tokenlen > 1) &&\n          (cp_token[tokenlen-1] == '*') && (cp_token[tokenlen-2] == '/')) {\n            // replace the '*' with '.' so that we don't have to allocate another\n            // string for passing to opendir() in wildcard_expandPath()\n            cp_token[tokenlen-1] = '.';\n            retval = wildcard_expandPath(cp_token, expandedCP_curr);\n            if (retval < 0) {\n                free(classpath_dup);\n                return -1;\n            }\n\n            length += retval;\n            if (expandedCP_curr != NULL) {\n                expandedCP_curr += retval;\n            }\n        } else {\n            // +1 for path separator or null terminator\n            length += tokenlen + 1;\n            if (expandedCP_curr != NULL) {\n                strncpy(expandedCP_curr, cp_token, tokenlen);\n                expandedCP_curr += tokenlen;\n                *expandedCP_curr = PATH_SEPARATOR;\n                expandedCP_curr++;\n            }\n        }\n\n        cp_token = strtok(NULL, PATH_SEPARATOR_STR);\n    }\n\n    // Fix the last ':' and use it to null terminate\n    if (expandedCP_curr != NULL) {\n        expandedCP_curr--;\n        *expandedCP_curr = '\\0';\n    }\n\n    free(classpath_dup);\n    return length;\n}\n\n/**\n * Gets the classpath. 
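As an illustration (the directory name is only an example), a\n * trailing-wildcard entry such as /opt/hadoop/lib/* is replaced by the .jar\n * and .JAR files found inside /opt/hadoop/lib. 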
Wild card entries are resolved only if the entry ends\n * with \"/\\*\" (backslash to escape commenting) to match against .jar and .JAR.\n * All other wild card entries (eg /path/to/dir/\\*foo*) are not resolved,\n * following Java default behavior, see:\n * https://docs.oracle.com/javase/8/docs/technotes/tools/unix/classpath.html\n */\nstatic char* getClassPath()\n{\n    char* classpath;\n    char* expandedClasspath;\n    ssize_t length;\n    ssize_t retval;\n\n    classpath = getenv(\"CLASSPATH\");\n    if (classpath == NULL) {\n      return NULL;\n    }\n\n    // First, get the total size of the string we will need for the expanded\n    // classpath\n    length = getClassPath_helper(classpath, NULL);\n    if (length < 0) {\n      return NULL;\n    }\n\n#ifdef _LIBHDFS_JNI_HELPER_DEBUGGING_ON_\n    printf(\"+++++++++++++++++\\n\");\n#endif\n\n    // we don't have to do anything if classpath has no valid wildcards\n    // we get length = 0 when CLASSPATH is set but empty\n    // if CLASSPATH is not empty, then length includes null terminator\n    // if length of expansion is same as original, then return a duplicate of\n    // original since expansion can only be longer\n    if ((length == 0) || ((length - 1) == strlen(classpath))) {\n\n#ifdef _LIBHDFS_JNI_HELPER_DEBUGGING_ON_\n        if ((length == 0) && (strlen(classpath) != 0)) {\n            fprintf(stderr, \"Something went wrong with getting the wildcard \\\n              expansion length\\n\" );\n        }\n#endif\n\n        expandedClasspath = strdup(classpath);\n\n#ifdef _LIBHDFS_JNI_HELPER_DEBUGGING_ON_\n        printf(\"Expanded classpath=%s\\n\", expandedClasspath);\n#endif\n\n        return expandedClasspath;\n    }\n\n    // Allocate memory for the expanded classpath string\n    expandedClasspath = calloc(length, sizeof(char));\n    if (expandedClasspath == NULL) {\n        fprintf(stderr, \"getClassPath: failed calloc: %s\\n\", strerror(errno));\n        return NULL;\n    }\n\n    // Actual expansion\n    retval = getClassPath_helper(classpath, expandedClasspath);\n    if (retval < 0) {\n        free(expandedClasspath);\n        return NULL;\n    }\n\n    // This should not happen, but dotting i's and crossing t's\n    if (retval != length) {\n        fprintf(stderr,\n          \"Expected classpath expansion length to be %zd but instead got %zd\\n\",\n          length, retval);\n        free(expandedClasspath);\n        return NULL;\n    }\n\n#ifdef _LIBHDFS_JNI_HELPER_DEBUGGING_ON_\n    printf(\"===============\\n\");\n    printf(\"Allocated %zd for expanding classpath\\n\", length);\n    printf(\"Used %zu for expanding classpath\\n\", strlen(expandedClasspath) + 1);\n    printf(\"Expanded classpath=%s\\n\", expandedClasspath);\n#endif\n\n    return expandedClasspath;\n}\n\n\n/**\n * Get the global JNI environment.\n *\n * We only have to create the JVM once.  After that, we can use it in\n * every thread.  
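If a VM already exists (JNI_GetCreatedJavaVMs reports one), the\n * calling thread is attached to it instead of creating a second VM.  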
You must be holding the jvmMutex when you call this\n * function.\n *\n * @return          The JNIEnv on success; NULL otherwise\n */\nstatic JNIEnv* getGlobalJNIEnv(void)\n{\n    JavaVM* vmBuf[VM_BUF_LENGTH]; \n    JNIEnv *env;\n    jint rv = 0; \n    jint noVMs = 0;\n    jthrowable jthr;\n    char *hadoopClassPath;\n    const char *hadoopClassPathVMArg = \"-Djava.class.path=\";\n    size_t optHadoopClassPathLen;\n    char *optHadoopClassPath;\n    int noArgs = 1;\n    char *hadoopJvmArgs;\n    char jvmArgDelims[] = \" \";\n    char *str, *token, *savePtr;\n    JavaVMInitArgs vm_args;\n    JavaVM *vm;\n    JavaVMOption *options;\n\n    rv = JNI_GetCreatedJavaVMs(&(vmBuf[0]), VM_BUF_LENGTH, &noVMs);\n    if (rv != 0) {\n        fprintf(stderr, \"JNI_GetCreatedJavaVMs failed with error: %d\\n\", rv);\n        return NULL;\n    }\n\n    if (noVMs == 0) {\n        //Get the environment variables for initializing the JVM\n        hadoopClassPath = getClassPath();\n        if (hadoopClassPath == NULL) {\n            fprintf(stderr, \"Environment variable CLASSPATH not set!\\n\");\n            return NULL;\n        } \n        optHadoopClassPathLen = strlen(hadoopClassPath) + \n          strlen(hadoopClassPathVMArg) + 1;\n        optHadoopClassPath = malloc(sizeof(char)*optHadoopClassPathLen);\n        snprintf(optHadoopClassPath, optHadoopClassPathLen,\n                \"%s%s\", hadoopClassPathVMArg, hadoopClassPath);\n\n        free(hadoopClassPath);\n\n        // Determine the # of LIBHDFS_OPTS args\n        hadoopJvmArgs = getenv(\"LIBHDFS_OPTS\");\n        if (hadoopJvmArgs != NULL)  {\n          hadoopJvmArgs = strdup(hadoopJvmArgs);\n          for (noArgs = 1, str = hadoopJvmArgs; ; noArgs++, str = NULL) {\n            token = strtok_r(str, jvmArgDelims, &savePtr);\n            if (NULL == token) {\n              break;\n            }\n          }\n          free(hadoopJvmArgs);\n        }\n\n        // Now that we know the # args, populate the options array\n        options = calloc(noArgs, sizeof(JavaVMOption));\n        if (!options) {\n          fputs(\"Call to calloc failed\\n\", stderr);\n          free(optHadoopClassPath);\n          return NULL;\n        }\n        options[0].optionString = optHadoopClassPath;\n        hadoopJvmArgs = getenv(\"LIBHDFS_OPTS\");\n        if (hadoopJvmArgs != NULL)  {\n          hadoopJvmArgs = strdup(hadoopJvmArgs);\n          for (noArgs = 1, str = hadoopJvmArgs; ; noArgs++, str = NULL) {\n            token = strtok_r(str, jvmArgDelims, &savePtr);\n            if (NULL == token) {\n              break;\n            }\n            options[noArgs].optionString = token;\n          }\n        }\n\n        //Create the VM\n        vm_args.version = JNI_VERSION_1_2;\n        vm_args.options = options;\n        vm_args.nOptions = noArgs; \n        vm_args.ignoreUnrecognized = 1;\n\n        rv = JNI_CreateJavaVM(&vm, (void*)&env, &vm_args);\n\n        if (hadoopJvmArgs != NULL)  {\n          free(hadoopJvmArgs);\n        }\n        free(optHadoopClassPath);\n        free(options);\n\n        if (rv != 0) {\n            fprintf(stderr, \"Call to JNI_CreateJavaVM failed \"\n                    \"with error: %d\\n\", rv);\n            return NULL;\n        }\n        jthr = invokeMethod(env, NULL, STATIC, NULL,\n                         \"org/apache/hadoop/fs/FileSystem\",\n                         \"loadFileSystems\", \"()V\");\n        if (jthr) {\n            printExceptionAndFree(env, jthr, PRINT_EXC_ALL, \"loadFileSystems\");\n        }\n    }\n    else 
{\n        //Attach this thread to the VM\n        vm = vmBuf[0];\n        rv = (*vm)->AttachCurrentThread(vm, (void*)&env, 0);\n        if (rv != 0) {\n            fprintf(stderr, \"Call to AttachCurrentThread \"\n                    \"failed with error: %d\\n\", rv);\n            return NULL;\n        }\n    }\n\n    return env;\n}\n\n/**\n * getJNIEnv: A helper function to get the JNIEnv* for the given thread.\n * If no JVM exists, then one will be created. JVM command line arguments\n * are obtained from the LIBHDFS_OPTS environment variable.\n *\n * Implementation note: we rely on POSIX thread-local storage (TLS).\n * This allows us to associate a destructor function with each thread, which\n * will detach the thread from the Java VM when the thread terminates.  If we\n * fail to do this, it will cause a memory leak.\n *\n * However, POSIX TLS is not the most efficient way to do things.  It requires a\n * key to be initialized before it can be used.  Since we don't know if this key\n * is initialized at the start of this function, we have to lock a mutex first\n * and check.  Luckily, most operating systems support the more efficient\n * __thread construct, which is initialized by the linker.\n *\n * @param: None.\n * @return The JNIEnv* corresponding to the thread.\n */\nJNIEnv* getJNIEnv(void)\n{\n    struct ThreadLocalState *state = NULL;\n    THREAD_LOCAL_STORAGE_GET_QUICK(&state);\n    if (state) return state->env;\n\n    mutexLock(&jvmMutex);\n    if (threadLocalStorageGet(&state)) {\n      mutexUnlock(&jvmMutex);\n      return NULL;\n    }\n    if (state) {\n      mutexUnlock(&jvmMutex);\n\n      // Free any stale exception strings.\n      free(state->lastExceptionRootCause);\n      free(state->lastExceptionStackTrace);\n      state->lastExceptionRootCause = NULL;\n      state->lastExceptionStackTrace = NULL;\n\n      return state->env;\n    }\n\n    /* Create a ThreadLocalState for this thread */\n    state = threadLocalStorageCreate();\n    if (!state) {\n      mutexUnlock(&jvmMutex);\n      fprintf(stderr, \"getJNIEnv: Unable to create ThreadLocalState\\n\");\n      return NULL;\n    }\n    if (threadLocalStorageSet(state)) {\n      mutexUnlock(&jvmMutex);\n      goto fail;\n    }\n    THREAD_LOCAL_STORAGE_SET_QUICK(state);\n\n    state->env = getGlobalJNIEnv();\n    mutexUnlock(&jvmMutex);\n    if (!state->env) {\n      goto fail;\n    }\n    return state->env;\n\nfail:\n    fprintf(stderr, \"getJNIEnv: getGlobalJNIEnv failed\\n\");\n    hdfsThreadDestructor(state);\n    return NULL;\n}\n\nchar* getLastTLSExceptionRootCause()\n{\n    struct ThreadLocalState *state = NULL;\n    THREAD_LOCAL_STORAGE_GET_QUICK(&state);\n    if (!state) {\n        mutexLock(&jvmMutex);\n        if (threadLocalStorageGet(&state)) {\n            mutexUnlock(&jvmMutex);\n            return NULL;\n        }\n        mutexUnlock(&jvmMutex);\n    }\n    return state->lastExceptionRootCause;\n}\n\nchar* getLastTLSExceptionStackTrace()\n{\n    struct ThreadLocalState *state = NULL;\n    THREAD_LOCAL_STORAGE_GET_QUICK(&state);\n    if (!state) {\n        mutexLock(&jvmMutex);\n        if (threadLocalStorageGet(&state)) {\n            mutexUnlock(&jvmMutex);\n            return NULL;\n        }\n        mutexUnlock(&jvmMutex);\n    }\n    return state->lastExceptionStackTrace;\n}\n\nvoid setTLSExceptionStrings(const char *rootCause, const char *stackTrace)\n{\n    struct ThreadLocalState *state = NULL;\n    THREAD_LOCAL_STORAGE_GET_QUICK(&state);\n    if (!state) {\n        mutexLock(&jvmMutex);\n        
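// The quick __thread lookup missed; fall back to the POSIX TLS\n        // lookup under the mutex, mirroring getJNIEnv above.\n        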
if (threadLocalStorageGet(&state)) {\n            mutexUnlock(&jvmMutex);\n            return;\n        }\n        mutexUnlock(&jvmMutex);\n    }\n\n    free(state->lastExceptionRootCause);\n    free(state->lastExceptionStackTrace);\n    state->lastExceptionRootCause = (char*)rootCause;\n    state->lastExceptionStackTrace = (char*)stackTrace;\n}\n\nint javaObjectIsOfClass(JNIEnv *env, jobject obj, const char *name)\n{\n    jclass clazz;\n    int ret;\n\n    clazz = (*env)->FindClass(env, name);\n    if (!clazz) {\n        printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"javaObjectIsOfClass(%s)\", name);\n        return -1;\n    }\n    ret = (*env)->IsInstanceOf(env, obj, clazz);\n    (*env)->DeleteLocalRef(env, clazz);\n    return ret == JNI_TRUE ? 1 : 0;\n}\n\njthrowable hadoopConfSetStr(JNIEnv *env, jobject jConfiguration,\n        const char *key, const char *value)\n{\n    jthrowable jthr;\n    jstring jkey = NULL, jvalue = NULL;\n\n    jthr = newJavaStr(env, key, &jkey);\n    if (jthr)\n        goto done;\n    jthr = newJavaStr(env, value, &jvalue);\n    if (jthr)\n        goto done;\n    jthr = invokeMethod(env, NULL, INSTANCE, jConfiguration,\n            \"org/apache/hadoop/conf/Configuration\", \"set\", \n            \"(Ljava/lang/String;Ljava/lang/String;)V\",\n            jkey, jvalue);\n    if (jthr)\n        goto done;\ndone:\n    (*env)->DeleteLocalRef(env, jkey);\n    (*env)->DeleteLocalRef(env, jvalue);\n    return jthr;\n}\n\njthrowable fetchEnumInstance(JNIEnv *env, const char *className,\n                         const char *valueName, jobject *out)\n{\n    jclass clazz;\n    jfieldID fieldId;\n    jobject jEnum;\n    char prettyClass[256];\n\n    clazz = (*env)->FindClass(env, className);\n    if (!clazz) {\n        return newRuntimeError(env, \"fetchEnum(%s, %s): failed to find class.\",\n                className, valueName);\n    }\n    if (snprintf(prettyClass, sizeof(prettyClass), \"L%s;\", className)\n          >= sizeof(prettyClass)) {\n        return newRuntimeError(env, \"fetchEnum(%s, %s): class name too long.\",\n                className, valueName);\n    }\n    fieldId = (*env)->GetStaticFieldID(env, clazz, valueName, prettyClass);\n    if (!fieldId) {\n        return getPendingExceptionAndClear(env);\n    }\n    jEnum = (*env)->GetStaticObjectField(env, clazz, fieldId);\n    if (!jEnum) {\n        return getPendingExceptionAndClear(env);\n    }\n    *out = jEnum;\n    return NULL;\n}\n\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/jni_helper.h",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef LIBHDFS_JNI_HELPER_H\n#define LIBHDFS_JNI_HELPER_H\n\n#include <jni.h>\n#include <stdio.h>\n\n#include <stdlib.h>\n#include <stdarg.h>\n#include <errno.h>\n\n#ifdef WIN32\n    #define PATH_SEPARATOR ';'\n    #define PATH_SEPARATOR_STR \";\"\n#else\n    #define PATH_SEPARATOR ':'\n    #define PATH_SEPARATOR_STR \":\"\n#endif\n\n// #define _LIBHDFS_JNI_HELPER_DEBUGGING_ON_\n\n#ifdef WIN32\n#ifdef LIBHDFS_DLL_EXPORT\n        #define LIBHDFS_EXTERNAL __declspec(dllexport)\n    #elif LIBHDFS_DLL_IMPORT\n        #define LIBHDFS_EXTERNAL __declspec(dllimport)\n    #else\n        #define LIBHDFS_EXTERNAL\n    #endif\n#else\n#ifdef LIBHDFS_DLL_EXPORT\n#define LIBHDFS_EXTERNAL __attribute__((visibility(\"default\")))\n#elif LIBHDFS_DLL_IMPORT\n#define LIBHDFS_EXTERNAL __attribute__((visibility(\"default\")))\n#else\n#define LIBHDFS_EXTERNAL\n#endif\n#endif\n\n/** Denote the method we want to invoke as STATIC or INSTANCE */\ntypedef enum {\n    STATIC,\n    INSTANCE\n} MethType;\n\n/**\n * Create a new malloc'ed C string from a Java string.\n *\n * @param env       The JNI environment\n * @param jstr      The Java string\n * @param out       (out param) the malloc'ed C string\n *\n * @return          NULL on success; the exception otherwise\n */\nLIBHDFS_EXTERNAL\njthrowable newCStr(JNIEnv *env, jstring jstr, char **out);\n\n/**\n * Create a new Java string from a C string.\n *\n * @param env       The JNI environment\n * @param str       The C string\n * @param out       (out param) the java string\n *\n * @return          NULL on success; the exception otherwise\n */\nLIBHDFS_EXTERNAL\njthrowable newJavaStr(JNIEnv *env, const char *str, jstring *out);\n\n/**\n * Helper function to destroy a local reference of java.lang.Object\n * @param env: The JNIEnv pointer. \n * @param jFile: The local reference of java.lang.Object object\n * @return None.\n */\nLIBHDFS_EXTERNAL\nvoid destroyLocalReference(JNIEnv *env, jobject jObject);\n\n/** invokeMethod: Invoke a Static or Instance method.\n * className: Name of the class where the method can be found\n * methName: Name of the method\n * methSignature: the signature of the method \"(arg-types)ret-type\"\n * methType: The type of the method (STATIC or INSTANCE)\n * instObj: Required if the methType is INSTANCE. The object to invoke\n   the method on.\n * env: The JNIEnv pointer\n * retval: The pointer to a union type which will contain the result of the\n   method invocation, e.g. 
if the method returns an Object, retval will be\n   set to that, if the method returns boolean, retval will be set to the\n   value (JNI_TRUE or JNI_FALSE), etc.\n * exc: If the methods throws any exception, this will contain the reference\n * Arguments (the method arguments) must be passed after methSignature\n * RETURNS: -1 on error and 0 on success. If -1 is returned, exc will have \n   a valid exception reference, and the result stored at retval is undefined.\n */\nLIBHDFS_EXTERNAL\njthrowable invokeMethod(JNIEnv *env, jvalue *retval, MethType methType,\n                 jobject instObj, const char *className, const char *methName, \n                 const char *methSignature, ...);\n\nLIBHDFS_EXTERNAL\njthrowable constructNewObjectOfClass(JNIEnv *env, jobject *out, const char *className,\n                                  const char *ctorSignature, ...);\n\nLIBHDFS_EXTERNAL\njthrowable methodIdFromClass(const char *className, const char *methName,\n                            const char *methSignature, MethType methType, \n                            JNIEnv *env, jmethodID *out);\n\nLIBHDFS_EXTERNAL\njthrowable globalClassReference(const char *className, JNIEnv *env, jclass *out);\n\n/** classNameOfObject: Get an object's class name.\n * @param jobj: The object.\n * @param env: The JNIEnv pointer.\n * @param name: (out param) On success, will contain a string containing the\n * class name. This string must be freed by the caller.\n * @return NULL on success, or the exception\n */\nLIBHDFS_EXTERNAL\njthrowable classNameOfObject(jobject jobj, JNIEnv *env, char **name);\n\n/** getJNIEnv: A helper function to get the JNIEnv* for the given thread.\n * It gets this from the ThreadLocalState if it exists. If a ThreadLocalState\n * does not exist, one will be created.\n * If no JVM exists, then one will be created. 
JVM command line arguments\n * are obtained from the LIBHDFS_OPTS environment variable.\n * @param: None.\n * @return The JNIEnv* corresponding to the thread.\n * */\nLIBHDFS_EXTERNAL\nJNIEnv* getJNIEnv(void);\n\n/**\n * Get the last exception root cause that happened in the context of the\n * current thread.\n *\n * The pointer returned by this function is guaranteed to be valid until\n * the next call to invokeMethod() by the current thread.\n * Users of this function should not free the pointer.\n *\n * @return The root cause as a C-string.\n */\nLIBHDFS_EXTERNAL\nchar* getLastTLSExceptionRootCause();\n\n/**\n * Get the last exception stack trace that happened in the context of the\n * current thread.\n *\n * The pointer returned by this function is guaranteed to be valid until\n * the next call to invokeMethod() by the current thread.\n * Users of this function should not free the pointer.\n *\n * @return The stack trace as a C-string.\n */\nLIBHDFS_EXTERNAL\nchar* getLastTLSExceptionStackTrace();\n\n/** setTLSExceptionStrings: Sets the 'rootCause' and 'stackTrace' in the\n * ThreadLocalState if one exists for the current thread.\n *\n * @param rootCause A string containing the root cause of an exception.\n * @param stackTrace A string containing the stack trace of an exception.\n * @return None.\n */\nLIBHDFS_EXTERNAL\nvoid setTLSExceptionStrings(const char *rootCause, const char *stackTrace);\n\n/**\n * Figure out if a Java object is an instance of a particular class.\n *\n * @param env  The Java environment.\n * @param obj  The object to check.\n * @param name The class name to check.\n *\n * @return     -1 if we failed to find the referenced class name.\n *             0 if the object is not of the given class.\n *             1 if the object is of the given class.\n */\nLIBHDFS_EXTERNAL\nint javaObjectIsOfClass(JNIEnv *env, jobject obj, const char *name);\n\n/**\n * Set a value in a configuration object.\n *\n * @param env               The JNI environment\n * @param jConfiguration    The configuration object to modify\n * @param key               The key to modify\n * @param value             The value to set the key to\n *\n * @return                  NULL on success; exception otherwise\n */\nLIBHDFS_EXTERNAL\njthrowable hadoopConfSetStr(JNIEnv *env, jobject jConfiguration,\n        const char *key, const char *value);\n\n/**\n * Fetch an instance of an Enum.\n *\n * @param env               The JNI environment.\n * @param className         The enum class name.\n * @param valueName         The name of the enum value\n * @param out               (out param) on success, a local reference to an\n *                          instance of the enum object.  (Since Java enums are\n *                          singletones, this is also the only instance.)\n *\n * @return                  NULL on success; exception otherwise\n */\nLIBHDFS_EXTERNAL\njthrowable fetchEnumInstance(JNIEnv *env, const char *className,\n                             const char *valueName, jobject *out);\n\n#undef LIBHDFS_EXTERNAL\n#endif /*LIBHDFS_JNI_HELPER_H*/\n\n/**\n * vim: ts=4: sw=4: et:\n */\n\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/os/mutexes.h",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef LIBHDFS_MUTEXES_H\n#define LIBHDFS_MUTEXES_H\n\n/*\n * Defines abstraction over platform-specific mutexes.  libhdfs has no formal\n * initialization function that users would call from a single-threaded context\n * to initialize the library.  This creates a challenge for bootstrapping the\n * mutexes.  To address this, all required mutexes are pre-defined here with\n * external storage.  Platform-specific implementations must guarantee that the\n * mutexes are initialized via static initialization.\n */\n\n#include \"platform.h\"\n\n/** Mutex protecting the class reference hash table. */\nextern mutex hdfsHashMutex;\n\n/** Mutex protecting singleton JVM instance. */\nextern mutex jvmMutex;\n\n/**\n * Locks a mutex.\n *\n * @param m mutex\n * @return 0 if successful, non-zero otherwise\n */\nint mutexLock(mutex *m);\n\n/**\n * Unlocks a mutex.\n *\n * @param m mutex\n * @return 0 if successful, non-zero otherwise\n */\nint mutexUnlock(mutex *m);\n\n#endif\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/os/posix/mutexes.c",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"os/mutexes.h\"\n\n#include <pthread.h>\n#include <stdio.h>\n\nmutex hdfsHashMutex = PTHREAD_MUTEX_INITIALIZER;\nmutex jvmMutex;\npthread_mutexattr_t jvmMutexAttr;\n\n__attribute__((constructor)) static void init() {\n  pthread_mutexattr_init(&jvmMutexAttr);\n  pthread_mutexattr_settype(&jvmMutexAttr, PTHREAD_MUTEX_RECURSIVE);\n  pthread_mutex_init(&jvmMutex, &jvmMutexAttr);\n}\n\nint mutexLock(mutex *m) {\n  int ret = pthread_mutex_lock(m);\n  if (ret) {\n    fprintf(stderr, \"mutexLock: pthread_mutex_lock failed with error %d\\n\",\n      ret);\n  }\n  return ret;\n}\n\nint mutexUnlock(mutex *m) {\n  int ret = pthread_mutex_unlock(m);\n  if (ret) {\n    fprintf(stderr, \"mutexUnlock: pthread_mutex_unlock failed with error %d\\n\",\n      ret);\n  }\n  return ret;\n}\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/os/posix/platform.h",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef LIBHDFS_PLATFORM_H\n#define LIBHDFS_PLATFORM_H\n\n#include <pthread.h>\n\n/* Use gcc type-checked format arguments. */\n#define TYPE_CHECKED_PRINTF_FORMAT(formatArg, varArgs) \\\n  __attribute__((format(printf, formatArg, varArgs)))\n\n/*\n * Mutex and thread data types defined by pthreads.\n */\ntypedef pthread_mutex_t mutex;\ntypedef pthread_t threadId;\n\n#endif\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/os/posix/thread.c",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"os/thread.h\"\n\n#include <pthread.h>\n#include <stdio.h>\n\n/**\n * Defines a helper function that adapts function pointer provided by caller to\n * the type required by pthread_create.\n *\n * @param toRun thread to run\n * @return void* result of running thread (always NULL)\n */\nstatic void* runThread(void *toRun) {\n  const thread *t = toRun;\n  t->start(t->arg);\n  return NULL;\n}\n\nint threadCreate(thread *t) {\n  int ret;\n  ret = pthread_create(&t->id, NULL, runThread, t);\n  if (ret) {\n    fprintf(stderr, \"threadCreate: pthread_create failed with error %d\\n\", ret);\n  }\n  return ret;\n}\n\nint threadJoin(const thread *t) {\n  int ret = pthread_join(t->id, NULL);\n  if (ret) {\n    fprintf(stderr, \"threadJoin: pthread_join failed with error %d\\n\", ret);\n  }\n  return ret;\n}\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/os/posix/thread_local_storage.c",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"os/thread_local_storage.h\"\n\n#include <jni.h>\n#include <stdlib.h>\n#include <pthread.h>\n#include <stdio.h>\n\n/** Key that allows us to retrieve thread-local storage */\nstatic pthread_key_t gTlsKey;\n\n/** nonzero if we succeeded in initializing gTlsKey. Protected by the jvmMutex */\nstatic int gTlsKeyInitialized = 0;\n\n/**\n * The function that is called whenever a thread with libhdfs thread local data\n * is destroyed.\n *\n * @param v         The thread-local data\n */\nvoid hdfsThreadDestructor(void *v)\n{\n  JavaVM *vm;\n  struct ThreadLocalState *state = (struct ThreadLocalState*)v;\n  JNIEnv *env = state->env;;\n  jint ret;\n\n  /* Detach the current thread from the JVM */\n  if (env) {\n    ret = (*env)->GetJavaVM(env, &vm);\n    if (ret) {\n      fprintf(stderr, \"hdfsThreadDestructor: GetJavaVM failed with error %d\\n\",\n        ret);\n      (*env)->ExceptionDescribe(env);\n    } else {\n      (*vm)->DetachCurrentThread(vm);\n    }\n  }\n\n  /* Free exception strings */\n  if (state->lastExceptionStackTrace) free(state->lastExceptionStackTrace);\n  if (state->lastExceptionRootCause) free(state->lastExceptionRootCause);\n\n  /* Free the state itself */\n  free(state);\n}\n\nstruct ThreadLocalState* threadLocalStorageCreate()\n{\n  struct ThreadLocalState *state;\n  state = (struct ThreadLocalState*)malloc(sizeof(struct ThreadLocalState));\n  if (state == NULL) {\n    fprintf(stderr,\n      \"threadLocalStorageSet: OOM - Unable to allocate thread local state\\n\");\n    return NULL;\n  }\n  state->lastExceptionStackTrace = NULL;\n  state->lastExceptionRootCause = NULL;\n  return state;\n}\n\nint threadLocalStorageGet(struct ThreadLocalState **state)\n{\n  int ret = 0;\n  if (!gTlsKeyInitialized) {\n    ret = pthread_key_create(&gTlsKey, hdfsThreadDestructor);\n    if (ret) {\n      fprintf(stderr,\n        \"threadLocalStorageGet: pthread_key_create failed with error %d\\n\",\n        ret);\n      return ret;\n    }\n    gTlsKeyInitialized = 1;\n  }\n  *state = pthread_getspecific(gTlsKey);\n  return ret;\n}\n\nint threadLocalStorageSet(struct ThreadLocalState *state)\n{\n  int ret = pthread_setspecific(gTlsKey, state);\n  if (ret) {\n    fprintf(stderr,\n      \"threadLocalStorageSet: pthread_setspecific failed with error %d\\n\",\n      ret);\n    hdfsThreadDestructor(state);\n  }\n  return ret;\n}\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/os/thread.h",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef LIBHDFS_THREAD_H\n#define LIBHDFS_THREAD_H\n\n/*\n * Defines abstraction over platform-specific threads.\n */\n\n#include \"platform.h\"\n\n/** Pointer to function to run in thread. */\ntypedef void (*threadProcedure)(void *);\n\n/** Structure containing a thread's ID, starting address and argument. */\ntypedef struct {\n  threadId id;\n  threadProcedure start;\n  void *arg;\n} thread;\n\n/**\n * Creates and immediately starts a new thread.\n *\n * @param t thread to create\n * @return 0 if successful, non-zero otherwise\n */\nint threadCreate(thread *t);\n\n/**\n * Joins to the given thread, blocking if necessary.\n *\n * @param t thread to join\n * @return 0 if successful, non-zero otherwise\n */\nint threadJoin(const thread *t);\n\n#endif\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libhdfs/os/thread_local_storage.h",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef LIBHDFS_THREAD_LOCAL_STORAGE_H\n#define LIBHDFS_THREAD_LOCAL_STORAGE_H\n\n/*\n * Defines abstraction over platform-specific thread-local storage.  libhdfs\n * currently only needs thread-local storage for a single piece of data: the\n * thread's JNIEnv.  For simplicity, this interface is defined in terms of\n * JNIEnv, not general-purpose thread-local storage of any arbitrary data.\n */\n\n#include <jni.h>\n\n/*\n * Most operating systems support the more efficient __thread construct, which\n * is initialized by the linker.  The following macros use this technique on the\n * operating systems that support it.\n */\n#ifdef HAVE_BETTER_TLS\n  #define THREAD_LOCAL_STORAGE_GET_QUICK(state) \\\n    static __thread struct ThreadLocalState *quickTlsEnv = NULL; \\\n    { \\\n      if (quickTlsEnv) { \\\n        *state = quickTlsEnv; \\\n      } \\\n    }\n\n  #define THREAD_LOCAL_STORAGE_SET_QUICK(state) \\\n    { \\\n      quickTlsEnv = (state); \\\n    }\n#else\n  #define THREAD_LOCAL_STORAGE_GET_QUICK(state)\n  #define THREAD_LOCAL_STORAGE_SET_QUICK(state)\n#endif\n\nstruct ThreadLocalState {\n  /* The JNIEnv associated with the current thread */\n  JNIEnv *env;\n  /* The last exception stack trace that occured on this thread */\n  char *lastExceptionStackTrace;\n  /* The last exception root cause that occured on this thread */\n  char *lastExceptionRootCause;\n};\n\n/**\n * The function that is called whenever a thread with libhdfs thread local data\n * is destroyed.\n *\n * @param v         The thread-local data\n */\nvoid hdfsThreadDestructor(void *v);\n\n/**\n * Creates an object of ThreadLocalState.\n *\n * @return The newly created object if successful, NULL otherwise.\n */\nstruct ThreadLocalState* threadLocalStorageCreate();\n\n/**\n * Gets the ThreadLocalState in thread-local storage for the current thread.\n * If the call succeeds, and there is a ThreadLocalState associated with this\n * thread, then returns 0 and populates 'state'.  If the call succeeds, but\n * there is no ThreadLocalState associated with this thread, then returns 0\n * and sets ThreadLocalState to NULL. If the call fails, then returns non-zero.\n * Only one thread at a time may execute this function. The caller is\n * responsible for enforcing mutual exclusion.\n *\n * @param env ThreadLocalState out parameter\n * @return 0 if successful, non-zero otherwise\n */\nint threadLocalStorageGet(struct ThreadLocalState **state);\n\n/**\n * Sets the ThreadLocalState in thread-local storage for the current thread.\n *\n * @param env ThreadLocalState to set\n * @return 0 if successful, non-zero otherwise\n */\nint threadLocalStorageSet(struct ThreadLocalState *state);\n\n#endif\n"
  },
  {
    "path": "native/fs-hdfs/c_src/libminidfs/native_mini_dfs.c",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#include \"../libhdfs/hdfs.h\"\n#include \"exception.h\"\n#include \"jni_helper.h\"\n#include \"native_mini_dfs.h\"\n#include \"platform.h\"\n\n#include <errno.h>\n#include <jni.h>\n#include <limits.h>\n#include <stdio.h>\n#include <stdlib.h>\n#include <string.h>\n#include <sys/types.h>\n#include <unistd.h>\n\n#ifndef EINTERNAL\n#define EINTERNAL 255\n#endif\n\n#define MINIDFS_CLUSTER_BUILDER \"org/apache/hadoop/hdfs/MiniDFSCluster$Builder\"\n#define MINIDFS_CLUSTER \"org/apache/hadoop/hdfs/MiniDFSCluster\"\n#define HADOOP_CONF     \"org/apache/hadoop/conf/Configuration\"\n#define HADOOP_NAMENODE \"org/apache/hadoop/hdfs/server/namenode/NameNode\"\n#define JAVA_INETSOCKETADDRESS \"java/net/InetSocketAddress\"\n\nstruct NativeMiniDfsCluster {\n    /**\n     * The NativeMiniDfsCluster object\n     */\n    jobject obj;\n\n    /**\n     * Path to the domain socket, or the empty string if there is none.\n     */\n    char domainSocketPath[PATH_MAX];\n};\n\nstatic int hdfsDisableDomainSocketSecurity(void)\n{\n    jthrowable jthr;\n    JNIEnv* env = getJNIEnv();\n    if (env == NULL) {\n      errno = EINTERNAL;\n      return -1;\n    }\n    jthr = invokeMethod(env, NULL, STATIC, NULL,\n            \"org/apache/hadoop/net/unix/DomainSocket\",\n            \"disableBindPathValidation\", \"()V\");\n    if (jthr) {\n        errno = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"DomainSocket#disableBindPathValidation\");\n        return -1;\n    }\n    return 0;\n}\n\nstatic jthrowable nmdConfigureShortCircuit(JNIEnv *env,\n              struct NativeMiniDfsCluster *cl, jobject cobj)\n{\n    jthrowable jthr;\n    char *tmpDir;\n\n    int ret = hdfsDisableDomainSocketSecurity();\n    if (ret) {\n        return newRuntimeError(env, \"failed to disable hdfs domain \"\n                               \"socket security: error %d\", ret);\n    }\n    jthr = hadoopConfSetStr(env, cobj, \"dfs.client.read.shortcircuit\", \"true\");\n    if (jthr) {\n        return jthr;\n    }\n    tmpDir = getenv(\"TMPDIR\");\n    if (!tmpDir) {\n        tmpDir = \"/tmp\";\n    }\n    snprintf(cl->domainSocketPath, PATH_MAX, \"%s/native_mini_dfs.sock.%d.%d\",\n             tmpDir, getpid(), rand());\n    snprintf(cl->domainSocketPath, PATH_MAX, \"%s/native_mini_dfs.sock.%d.%d\",\n             tmpDir, getpid(), rand());\n    jthr = hadoopConfSetStr(env, cobj, \"dfs.domain.socket.path\",\n                            cl->domainSocketPath);\n    if (jthr) {\n        return jthr;\n    }\n    return NULL;\n}\n\nstruct NativeMiniDfsCluster* nmdCreate(const struct NativeMiniDfsConf *conf)\n{\n    struct NativeMiniDfsCluster* cl = NULL;\n    jobject bld = NULL, cobj = NULL, cluster = 
NULL;\n    jvalue  val;\n    JNIEnv *env = getJNIEnv();\n    jthrowable jthr;\n    jstring jconfStr = NULL;\n\n    if (!env) {\n        fprintf(stderr, \"nmdCreate: unable to construct JNIEnv.\\n\");\n        return NULL;\n    }\n    cl = calloc(1, sizeof(struct NativeMiniDfsCluster));\n    if (!cl) {\n        fprintf(stderr, \"nmdCreate: OOM\");\n        goto error;\n    }\n    jthr = constructNewObjectOfClass(env, &cobj, HADOOP_CONF, \"()V\");\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"nmdCreate: new Configuration\");\n        goto error;\n    }\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                              \"nmdCreate: Configuration::setBoolean\");\n        goto error;\n    }\n    // Disable 'minimum block size' -- it's annoying in tests.\n    (*env)->DeleteLocalRef(env, jconfStr);\n    jconfStr = NULL;\n    jthr = newJavaStr(env, \"dfs.namenode.fs-limits.min-block-size\", &jconfStr);\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                              \"nmdCreate: new String\");\n        goto error;\n    }\n    jthr = invokeMethod(env, NULL, INSTANCE, cobj, HADOOP_CONF,\n                        \"setLong\", \"(Ljava/lang/String;J)V\", jconfStr, 0LL);\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                              \"nmdCreate: Configuration::setLong\");\n        goto error;\n    }\n    // Creae MiniDFSCluster object\n    jthr = constructNewObjectOfClass(env, &bld, MINIDFS_CLUSTER_BUILDER,\n                    \"(L\"HADOOP_CONF\";)V\", cobj);\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"nmdCreate: NativeMiniDfsCluster#Builder#Builder\");\n        goto error;\n    }\n    if (conf->configureShortCircuit) {\n        jthr = nmdConfigureShortCircuit(env, cl, cobj);\n        if (jthr) {\n            printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                \"nmdCreate: nmdConfigureShortCircuit error\");\n            goto error;\n        }\n    }\n    jthr = invokeMethod(env, &val, INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,\n            \"format\", \"(Z)L\" MINIDFS_CLUSTER_BUILDER \";\", conf->doFormat);\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL, \"nmdCreate: \"\n                              \"Builder::format\");\n        goto error;\n    }\n    (*env)->DeleteLocalRef(env, val.l);\n    if (conf->webhdfsEnabled) {\n        jthr = invokeMethod(env, &val, INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,\n                        \"nameNodeHttpPort\", \"(I)L\" MINIDFS_CLUSTER_BUILDER \";\",\n                        conf->namenodeHttpPort);\n        if (jthr) {\n            printExceptionAndFree(env, jthr, PRINT_EXC_ALL, \"nmdCreate: \"\n                                  \"Builder::nameNodeHttpPort\");\n            goto error;\n        }\n        (*env)->DeleteLocalRef(env, val.l);\n    }\n    jthr = invokeMethod(env, &val, INSTANCE, bld, MINIDFS_CLUSTER_BUILDER,\n            \"build\", \"()L\" MINIDFS_CLUSTER \";\");\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                              \"nmdCreate: Builder#build\");\n        goto error;\n    }\n    cluster = val.l;\n\t  cl->obj = (*env)->NewGlobalRef(env, val.l);\n    if (!cl->obj) {\n        printPendingExceptionAndFree(env, PRINT_EXC_ALL,\n            \"nmdCreate: NewGlobalRef\");\n        goto error;\n    }\n    (*env)->DeleteLocalRef(env, cluster);\n    
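/* The global ref now keeps the cluster alive; release the remaining local refs. */\n    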
(*env)->DeleteLocalRef(env, bld);\n    (*env)->DeleteLocalRef(env, cobj);\n    (*env)->DeleteLocalRef(env, jconfStr);\n    return cl;\n\nerror:\n    (*env)->DeleteLocalRef(env, cluster);\n    (*env)->DeleteLocalRef(env, bld);\n    (*env)->DeleteLocalRef(env, cobj);\n    (*env)->DeleteLocalRef(env, jconfStr);\n    free(cl);\n    return NULL;\n}\n\nvoid nmdFree(struct NativeMiniDfsCluster* cl)\n{\n    JNIEnv *env = getJNIEnv();\n    if (!env) {\n        fprintf(stderr, \"nmdFree: getJNIEnv failed\\n\");\n        free(cl);\n        return;\n    }\n    (*env)->DeleteGlobalRef(env, cl->obj);\n    free(cl);\n}\n\nint nmdShutdownInner(struct NativeMiniDfsCluster* cl, jboolean deleteDfsDir)\n{\n    JNIEnv *env = getJNIEnv();\n    jthrowable jthr;\n\n    if (!env) {\n        fprintf(stderr, \"nmdShutdown: getJNIEnv failed\\n\");\n        return -EIO;\n    }\n    jthr = invokeMethod(env, NULL, INSTANCE, cl->obj,\n                        MINIDFS_CLUSTER, \"shutdown\", \"(Z)V\", deleteDfsDir);\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                              \"nmdShutdown: MiniDFSCluster#shutdown\");\n        return -EIO;\n    }\n    return 0;\n}\n\nint nmdShutdown(struct NativeMiniDfsCluster *cl) {\n    return nmdShutdownInner(cl, 0);\n}\n\nint nmdShutdownClean(struct NativeMiniDfsCluster *cl) {\n    return nmdShutdownInner(cl, 1);\n}\n\nint nmdWaitClusterUp(struct NativeMiniDfsCluster *cl)\n{\n    jthrowable jthr;\n    JNIEnv *env = getJNIEnv();\n    if (!env) {\n        fprintf(stderr, \"nmdWaitClusterUp: getJNIEnv failed\\n\");\n        return -EIO;\n    }\n    jthr = invokeMethod(env, NULL, INSTANCE, cl->obj,\n            MINIDFS_CLUSTER, \"waitClusterUp\", \"()V\");\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"nmdWaitClusterUp: MiniDFSCluster#waitClusterUp\");\n        return -EIO;\n    }\n    return 0;\n}\n\nint nmdGetNameNodePort(const struct NativeMiniDfsCluster *cl)\n{\n    JNIEnv *env = getJNIEnv();\n    jvalue jVal;\n    jthrowable jthr;\n\n    if (!env) {\n        fprintf(stderr, \"nmdGetNameNodePort: getJNIEnv failed\\n\");\n        return -EIO;\n    }\n    // Note: this will have to be updated when HA nativeMiniDfs clusters are\n    // supported\n    jthr = invokeMethod(env, &jVal, INSTANCE, cl->obj,\n            MINIDFS_CLUSTER, \"getNameNodePort\", \"()I\");\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n            \"nmdGetNameNodePort: MiniDFSCluster#getNameNodePort\");\n        return -EIO;\n    }\n    return jVal.i;\n}\n\nint nmdGetNameNodeHttpAddress(const struct NativeMiniDfsCluster *cl,\n                               int *port, const char **hostName)\n{\n    JNIEnv *env = getJNIEnv();\n    jvalue jVal;\n    jobject jNameNode, jAddress;\n    jthrowable jthr;\n    int ret = 0;\n    const char *host;\n\n    if (!env) {\n        fprintf(stderr, \"nmdGetNameNodeHttpAddress: getJNIEnv failed\\n\");\n        return -EIO;\n    }\n    // First get the (first) NameNode of the cluster\n    jthr = invokeMethod(env, &jVal, INSTANCE, cl->obj, MINIDFS_CLUSTER,\n                        \"getNameNode\", \"()L\" HADOOP_NAMENODE \";\");\n    if (jthr) {\n        printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                              \"nmdGetNameNodeHttpAddress: \"\n                              \"MiniDFSCluster#getNameNode\");\n        return -EIO;\n    }\n    jNameNode = jVal.l;\n\n    // Then get the http address (InetSocketAddress) of the NameNode\n    jthr = invokeMethod(env, &jVal, INSTANCE, jNameNode, HADOOP_NAMENODE,\n                        \"getHttpAddress\", \"()L\" JAVA_INETSOCKETADDRESS \";\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                                    \"nmdGetNameNodeHttpAddress: \"\n                                    \"NameNode#getHttpAddress\");\n        goto error_dlr_nn;\n    }\n    jAddress = jVal.l;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jAddress,\n                        JAVA_INETSOCKETADDRESS, \"getPort\", \"()I\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                                    \"nmdGetNameNodeHttpAddress: \"\n                                    \"InetSocketAddress#getPort\");\n        goto error_dlr_addr;\n    }\n    *port = jVal.i;\n\n    jthr = invokeMethod(env, &jVal, INSTANCE, jAddress, JAVA_INETSOCKETADDRESS,\n                        \"getHostName\", \"()Ljava/lang/String;\");\n    if (jthr) {\n        ret = printExceptionAndFree(env, jthr, PRINT_EXC_ALL,\n                                    \"nmdGetNameNodeHttpAddress: \"\n                                    \"InetSocketAddress#getHostName\");\n        goto error_dlr_addr;\n    }\n    host = (*env)->GetStringUTFChars(env, jVal.l, NULL);\n    *hostName = strdup(host);\n    (*env)->ReleaseStringUTFChars(env, jVal.l, host);\n\nerror_dlr_addr:\n    (*env)->DeleteLocalRef(env, jAddress);\nerror_dlr_nn:\n    (*env)->DeleteLocalRef(env, jNameNode);\n\n    return ret;\n}\n\nconst char *hdfsGetDomainSocketPath(const struct NativeMiniDfsCluster *cl) {\n    if (cl->domainSocketPath[0]) {\n        return cl->domainSocketPath;\n    }\n\n    return NULL;\n}\n\nint nmdConfigureHdfsBuilder(struct NativeMiniDfsCluster *cl,\n                            struct hdfsBuilder *bld) {\n    int ret;\n    int port;\n    const char *domainSocket;\n\n    hdfsBuilderSetNameNode(bld, \"localhost\");\n    /* Keep the result in an int: tPort is unsigned, so the error check below\n     * would never fire if we narrowed the value first. */\n    port = nmdGetNameNodePort(cl);\n    if (port < 0) {\n        fprintf(stderr, \"nmdGetNameNodePort failed with error %d\\n\", -port);\n        return EIO;\n    }\n    hdfsBuilderSetNameNodePort(bld, (tPort) port);\n\n    domainSocket = hdfsGetDomainSocketPath(cl);\n\n    if (domainSocket) {\n        ret = hdfsBuilderConfSetStr(bld, \"dfs.client.read.shortcircuit\", \"true\");\n        if (ret) {\n            return ret;\n        }\n        ret = hdfsBuilderConfSetStr(bld, \"dfs.domain.socket.path\",\n                                    domainSocket);\n        if (ret) {\n            return ret;\n        }\n    }\n    return 0;\n}"
  },
  {
    "path": "native/fs-hdfs/c_src/libminidfs/native_mini_dfs.h",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef LIBHDFS_NATIVE_MINI_DFS_H\n#define LIBHDFS_NATIVE_MINI_DFS_H\n\n#include <jni.h> /* for jboolean */\n\n#ifdef __cplusplus\nextern  \"C\" {\n#endif\n\nstruct hdfsBuilder;\n/**\n* <div rustbindgen replaces=\"MiniDfsCluster\"></div>\n*/\nstruct NativeMiniDfsCluster; \n\n/**\n * <div rustbindgen replaces=\"MiniDfsConf\"></div>\n *\n * Represents a configuration to use for creating a Native MiniDFSCluster\n */\nstruct NativeMiniDfsConf {\n    /**\n     * Nonzero if the cluster should be formatted prior to startup.\n     */\n    jboolean doFormat;\n\n    /**\n     * Whether or not to enable webhdfs in MiniDfsCluster\n     */\n    jboolean webhdfsEnabled;\n\n    /**\n     * The http port of the namenode in MiniDfsCluster\n     */\n    jint namenodeHttpPort;\n\n    /**\n     * Nonzero if we should configure short circuit.\n     */\n    jboolean configureShortCircuit;\n};\n\n/**\n * Create a NativeMiniDfsBuilder\n *\n * @param conf      (inout) The cluster configuration\n *\n * @return      a NativeMiniDfsBuilder, or a NULL pointer on error.\n */\nstruct NativeMiniDfsCluster* nmdCreate(const struct NativeMiniDfsConf *conf);\n\n/**\n * Wait until a MiniDFSCluster comes out of safe mode.\n *\n * @param cl        The cluster\n *\n * @return          0 on success; a non-zero error code if the cluster fails to\n *                  come out of safe mode.\n */\nint nmdWaitClusterUp(struct NativeMiniDfsCluster *cl);\n\n/**\n * Shut down a NativeMiniDFS cluster\n *\n * @param cl        The cluster\n *\n * @return          0 on success; a non-zero error code if an exception is\n *                  thrown.\n */\nint nmdShutdown(struct NativeMiniDfsCluster *cl);\n\n/**\n * Shut down a NativeMiniDFS cluster with deleting hdfs directory\n *\n * @param cl        The cluster\n *\n * @return          0 on success; a non-zero error code if an exception is\n *                  thrown.\n */\nint nmdShutdownClean(struct NativeMiniDfsCluster *cl);\n\n/**\n * Destroy a Native MiniDFSCluster\n *\n * @param cl        The cluster to destroy\n */\nvoid nmdFree(struct NativeMiniDfsCluster* cl);\n\n/**\n * Get the port that's in use by the given (non-HA) nativeMiniDfs\n *\n * @param cl        The initialized NativeMiniDfsCluster\n *\n * @return          the port, or a negative error code\n */\nint nmdGetNameNodePort(const struct NativeMiniDfsCluster *cl); \n\n/**\n * Get the http address that's in use by the given (non-HA) nativeMiniDfs\n *\n * @param cl        The initialized NativeMiniDfsCluster\n * @param port      Used to capture the http port of the NameNode \n *                  of the NativeMiniDfsCluster\n * @param hostName  Used to capture the http hostname of the NameNode\n *     
             of the NativeMiniDfsCluster\n *\n * @return          0 on success; a non-zero error code if failing to\n *                  get the information.\n */\nint nmdGetNameNodeHttpAddress(const struct NativeMiniDfsCluster *cl,\n                               int *port, const char **hostName);\n\n/**\n * Get domain socket path set for this cluster.\n *\n * @param cl        The cluster\n *\n * @return          A const string of domain socket path, or NULL if not set.\n */\nconst char *hdfsGetDomainSocketPath(const struct NativeMiniDfsCluster *cl);\n\n/**\n * Configure the HDFS builder appropriately to connect to this cluster.\n *\n * @param bld       The hdfs builder\n *\n * @return          the port, or a negative error code\n */\nint nmdConfigureHdfsBuilder(struct NativeMiniDfsCluster *cl,\n                            struct hdfsBuilder *bld);\n\n#ifdef __cplusplus\n}\n#endif\n\n#endif\n"
  },
  {
    "path": "native/fs-hdfs/c_src/wrapper.h",
    "content": "/**\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *     http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing, software\n * distributed under the License is distributed on an \"AS IS\" BASIS,\n * WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n * See the License for the specific language governing permissions and\n * limitations under the License.\n */\n\n#ifndef WRAPPER_HDFS_H\n#define WRAPPER_HDFS_H\n\n#include \"libhdfs/hdfs.h\"\n#include \"libminidfs/native_mini_dfs.h\"\n\n#endif //WRAPPER_HDFS_H\n"
  },
  {
    "path": "native/fs-hdfs/rustfmt.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nedition = \"2018\"\nmax_width = 90"
  },
  {
    "path": "native/fs-hdfs/src/err.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::error::Error;\nuse std::fmt::{Display, Formatter};\n\n/// Errors which can occur during accessing Hdfs cluster\n#[derive(Debug)]\npub enum HdfsErr {\n    Generic(String),\n    /// file path\n    FileNotFound(String),\n    /// file path           \n    FileAlreadyExists(String),\n    /// name node address\n    CannotConnectToNameNode(String),\n    /// URL\n    InvalidUrl(String),\n}\n\nimpl Display for HdfsErr {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        match self {\n            HdfsErr::FileNotFound(path) => write!(f, \"Hdfs file {path} not found\"),\n            HdfsErr::FileAlreadyExists(path) => {\n                write!(f, \"Hdfs file {path} already exists\")\n            }\n            HdfsErr::InvalidUrl(path) => write!(f, \"Hdfs url {path} is not valid\"),\n            HdfsErr::CannotConnectToNameNode(namenode_uri) => {\n                write!(f, \"Cannot connect to name node {namenode_uri}\")\n            }\n            HdfsErr::Generic(err_str) => write!(f, \"Generic error with msg: {err_str}\"),\n        }\n    }\n}\n\nimpl Error for HdfsErr {}\n"
  },
  {
    "path": "native/fs-hdfs/src/hdfs.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! it's a modified version of hdfs-rs\nuse std::collections::HashMap;\nuse std::ffi::{CStr, CString};\nuse std::fmt::Write;\nuse std::fmt::{Debug, Formatter};\nuse std::marker::PhantomData;\nuse std::string::String;\nuse std::sync::{Arc, RwLock};\n\nuse lazy_static::lazy_static;\nuse libc::{c_char, c_int, c_short, c_void, time_t};\nuse log::info;\nuse url::Url;\n\npub use crate::err::HdfsErr;\nuse crate::native::*;\n\n/// These flags should be consistent with the ones in fcntl.h\nconst O_RDONLY: c_int = 0;\nconst O_WRONLY: c_int = 1;\nconst O_APPEND: c_int = 8;\n\nlazy_static! {\n    static ref HDFS_MANAGER: HdfsManager = HdfsManager::new();\n}\n\n/// Create instance of HdfsFs with a global cache.\n/// For each namenode uri, only one instance will be created\npub fn get_hdfs_by_full_path(path: &str) -> Result<Arc<HdfsFs>, HdfsErr> {\n    HDFS_MANAGER.get_hdfs_by_full_path(path)\n}\n\n/// The default NameNode configuration will be used (from the XML configuration files)\npub fn get_hdfs() -> Result<Arc<HdfsFs>, HdfsErr> {\n    HDFS_MANAGER.get_hdfs_by_full_path(\"default\")\n}\n\n/// Remove an instance of HdfsFs from the cache by a specified path with uri\npub fn unload_hdfs_cache_by_full_path(\n    path: &str,\n) -> Result<Option<Arc<HdfsFs>>, HdfsErr> {\n    HDFS_MANAGER.remove_hdfs_by_full_path(path)\n}\n\n/// Remove an instance of HdfsFs from the cache\npub fn unload_hdfs_cache(hdfs: Arc<HdfsFs>) -> Result<Option<Arc<HdfsFs>>, HdfsErr> {\n    HDFS_MANAGER.remove_hdfs(hdfs)\n}\n\n/// Hdfs manager\n/// All of the HdfsFs instances will be managed in a singleton HdfsManager\nstruct HdfsManager {\n    hdfs_cache: Arc<RwLock<HashMap<String, Arc<HdfsFs>>>>,\n}\n\nimpl HdfsManager {\n    fn new() -> Self {\n        Self {\n            hdfs_cache: Arc::new(RwLock::new(HashMap::new())),\n        }\n    }\n\n    fn get_hdfs_by_full_path(&self, path: &str) -> Result<Arc<HdfsFs>, HdfsErr> {\n        let namenode_uri = match path {\n            \"default\" => \"default\".to_owned(),\n            _ => get_namenode_uri(path)?,\n        };\n\n        // Get if already exists\n        if let Some(hdfs_fs) = {\n            let cache = self.hdfs_cache.read().unwrap();\n            cache.get(&namenode_uri).cloned()\n        } {\n            return Ok(hdfs_fs);\n        }\n\n        let mut cache = self.hdfs_cache.write().unwrap();\n        // Check again if exists\n        let ret = if let Some(hdfs_fs) = cache.get(&namenode_uri) {\n            hdfs_fs.clone()\n        } else {\n            let hdfs_fs = unsafe {\n                let hdfs_builder = hdfsNewBuilder();\n                let cstr_uri = CString::new(namenode_uri.as_bytes()).unwrap();\n                
hdfsBuilderSetNameNode(hdfs_builder, cstr_uri.as_ptr());\n                info!(\"Connecting to Namenode ({})\", &namenode_uri);\n                hdfsBuilderConnect(hdfs_builder)\n            };\n\n            if hdfs_fs.is_null() {\n                return Err(HdfsErr::CannotConnectToNameNode(namenode_uri.clone()));\n            }\n\n            let hdfs_fs = Arc::new(HdfsFs {\n                url: namenode_uri.clone(),\n                raw: hdfs_fs,\n                _marker: PhantomData,\n            });\n            cache.insert(namenode_uri.clone(), hdfs_fs.clone());\n            hdfs_fs\n        };\n\n        Ok(ret)\n    }\n\n    fn remove_hdfs_by_full_path(\n        &self,\n        path: &str,\n    ) -> Result<Option<Arc<HdfsFs>>, HdfsErr> {\n        let namenode_uri = match path {\n            \"default\" => String::from(\"default\"),\n            _ => get_namenode_uri(path)?,\n        };\n\n        self.remove_hdfs_inner(&namenode_uri)\n    }\n\n    fn remove_hdfs(&self, hdfs: Arc<HdfsFs>) -> Result<Option<Arc<HdfsFs>>, HdfsErr> {\n        self.remove_hdfs_inner(hdfs.url())\n    }\n\n    fn remove_hdfs_inner(&self, hdfs_key: &str) -> Result<Option<Arc<HdfsFs>>, HdfsErr> {\n        let mut cache = self.hdfs_cache.write().unwrap();\n        Ok(cache.remove(hdfs_key))\n    }\n}\n\n/// Hdfs Filesystem\n///\n/// It is basically thread safe because the native API for hdfsFs is thread-safe.\n#[derive(Clone)]\npub struct HdfsFs {\n    url: String,\n    raw: hdfsFS,\n    _marker: PhantomData<()>,\n}\n\nimpl Debug for HdfsFs {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"HdfsFs\").field(\"url\", &self.url).finish()\n    }\n}\n\nimpl HdfsFs {\n    /// Get HDFS namenode url\n    #[inline]\n    pub fn url(&self) -> &str {\n        &self.url\n    }\n\n    /// Get a raw pointer of JNI API's HdfsFs\n    #[inline]\n    pub fn raw(&self) -> hdfsFS {\n        self.raw\n    }\n\n    /// Create HdfsFile from hdfsFile\n    fn new_hdfs_file(&self, path: &str, file: hdfsFile) -> Result<HdfsFile, HdfsErr> {\n        if file.is_null() {\n            Err(HdfsErr::FileNotFound(format!(\n                \"Fail to create/open file {path}\"\n            )))\n        } else {\n            Ok(HdfsFile {\n                fs: self.clone(),\n                path: path.to_owned(),\n                file,\n                _marker: PhantomData,\n            })\n        }\n    }\n\n    /// open a file to read\n    #[inline]\n    pub fn open(&self, path: &str) -> Result<HdfsFile, HdfsErr> {\n        self.open_with_buf_size(path, 0)\n    }\n\n    /// open a file to read with a buffer size\n    pub fn open_with_buf_size(\n        &self,\n        path: &str,\n        buf_size: i32,\n    ) -> Result<HdfsFile, HdfsErr> {\n        let file = unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsOpenFile(\n                self.raw,\n                cstr_path.as_ptr(),\n                O_RDONLY,\n                buf_size as c_int,\n                0,\n                0,\n            )\n        };\n\n        self.new_hdfs_file(path, file)\n    }\n\n    /// Get the file status, including file size, last modified time, etc\n    pub fn get_file_status(&self, path: &str) -> Result<FileStatus, HdfsErr> {\n        let ptr = unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsGetPathInfo(self.raw, cstr_path.as_ptr())\n        };\n\n        if ptr.is_null() {\n            Err(HdfsErr::FileNotFound(format!(\n                \"Fail to 
get file status for {path}\"\n            )))\n        } else {\n            Ok(FileStatus::new(ptr))\n        }\n    }\n\n    /// Get the file status for each entry under the specified directory\n    pub fn list_status(&self, path: &str) -> Result<Vec<FileStatus>, HdfsErr> {\n        let mut entry_num: c_int = 0;\n\n        let ptr = unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsListDirectory(self.raw, cstr_path.as_ptr(), &mut entry_num)\n        };\n\n        let mut list = Vec::new();\n\n        if ptr.is_null() {\n            return Ok(list);\n        }\n\n        let shared_ptr = Arc::new(HdfsFileInfoPtr::new_array(ptr, entry_num));\n\n        for idx in 0..entry_num {\n            list.push(FileStatus::from_array(shared_ptr.clone(), idx as u32));\n        }\n\n        Ok(list)\n    }\n\n    /// Get the default blocksize.\n    pub fn default_blocksize(&self) -> Result<usize, HdfsErr> {\n        let block_sz = unsafe { hdfsGetDefaultBlockSize(self.raw) };\n\n        if block_sz >= 0 {\n            Ok(block_sz as usize)\n        } else {\n            Err(HdfsErr::Generic(\n                \"Fail to get default block size\".to_owned(),\n            ))\n        }\n    }\n\n    /// Get the default blocksize at the filesystem indicated by a given path.\n    pub fn block_size(&self, path: &str) -> Result<usize, HdfsErr> {\n        let block_sz = unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsGetDefaultBlockSizeAtPath(self.raw, cstr_path.as_ptr())\n        };\n\n        if block_sz >= 0 {\n            Ok(block_sz as usize)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to get block size for file {path}\"\n            )))\n        }\n    }\n\n    /// Return the raw capacity of the filesystem.\n    pub fn capacity(&self) -> Result<usize, HdfsErr> {\n        let capacity = unsafe { hdfsGetCapacity(self.raw) };\n\n        if capacity >= 0 {\n            Ok(capacity as usize)\n        } else {\n            Err(HdfsErr::Generic(\"Fail to get capacity\".to_owned()))\n        }\n    }\n\n    /// Return the total raw size of all files in the filesystem.\n    pub fn used(&self) -> Result<usize, HdfsErr> {\n        let used = unsafe { hdfsGetUsed(self.raw) };\n\n        if used >= 0 {\n            Ok(used as usize)\n        } else {\n            Err(HdfsErr::Generic(\"Fail to get used size\".to_owned()))\n        }\n    }\n\n    /// Checks if a given path exists on the filesystem\n    pub fn exist(&self, path: &str) -> bool {\n        (unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsExists(self.raw, cstr_path.as_ptr())\n        } == 0)\n    }\n\n    /// Get hostnames where a particular block (determined by\n    /// pos & blocksize) of a file is stored. The last element in the array\n    /// is NULL. 
Due to replication, a single block could be present on\n    /// multiple hosts.\n    pub fn get_hosts(\n        &self,\n        path: &str,\n        start: usize,\n        length: usize,\n    ) -> Result<BlockHosts, HdfsErr> {\n        let ptr = unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsGetHosts(\n                self.raw,\n                cstr_path.as_ptr(),\n                start as tOffset,\n                length as tOffset,\n            )\n        };\n\n        if !ptr.is_null() {\n            Ok(BlockHosts { ptr })\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to get block hosts for file {path} from {start} with length {length}\"\n            )))\n        }\n    }\n\n    #[inline]\n    pub fn create(&self, path: &str) -> Result<HdfsFile, HdfsErr> {\n        self.create_with_params(path, false, 0, 0, 0)\n    }\n\n    #[inline]\n    pub fn create_with_overwrite(\n        &self,\n        path: &str,\n        overwrite: bool,\n    ) -> Result<HdfsFile, HdfsErr> {\n        self.create_with_params(path, overwrite, 0, 0, 0)\n    }\n\n    pub fn create_with_params(\n        &self,\n        path: &str,\n        overwrite: bool,\n        buf_size: i32,\n        replica_num: i16,\n        block_size: i32,\n    ) -> Result<HdfsFile, HdfsErr> {\n        if !overwrite && self.exist(path) {\n            return Err(HdfsErr::FileAlreadyExists(path.to_owned()));\n        }\n\n        let file = unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsOpenFile(\n                self.raw,\n                cstr_path.as_ptr(),\n                O_WRONLY,\n                buf_size as c_int,\n                replica_num as c_short,\n                block_size as tSize,\n            )\n        };\n\n        self.new_hdfs_file(path, file)\n    }\n\n    /// set permission\n    pub fn chmod(&self, path: &str, mode: i16) -> bool {\n        (unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsChmod(self.raw, cstr_path.as_ptr(), mode as c_short)\n        }) == 0\n    }\n\n    pub fn chown(&self, path: &str, owner: &str, group: &str) -> bool {\n        (unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            let cstr_owner = CString::new(owner).unwrap();\n            let cstr_group = CString::new(group).unwrap();\n            hdfsChown(\n                self.raw,\n                cstr_path.as_ptr(),\n                cstr_owner.as_ptr(),\n                cstr_group.as_ptr(),\n            )\n        }) == 0\n    }\n\n    /// Open a file for append\n    pub fn append(&self, path: &str) -> Result<HdfsFile, HdfsErr> {\n        if !self.exist(path) {\n            return Err(HdfsErr::FileNotFound(path.to_owned()));\n        }\n\n        let file = unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsOpenFile(self.raw, cstr_path.as_ptr(), O_APPEND | O_WRONLY, 0, 0, 0)\n        };\n\n        self.new_hdfs_file(path, file)\n    }\n\n    /// create a directory\n    pub fn mkdir(&self, path: &str) -> Result<bool, HdfsErr> {\n        if unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsCreateDirectory(self.raw, cstr_path.as_ptr())\n        } == 0\n        {\n            Ok(true)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to create directory for {path}\"\n            )))\n        }\n    }\n\n    /// Rename file.\n    pub fn rename(\n        &self,\n        
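// both paths must refer to this same filesystem\n        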
old_path: &str,\n        new_path: &str,\n        overwrite: bool,\n    ) -> Result<bool, HdfsErr> {\n        if unsafe {\n            let cstr_old_path = CString::new(old_path).unwrap();\n            let cstr_new_path = CString::new(new_path).unwrap();\n            if overwrite {\n                hdfsRenameOverwrite(\n                    self.raw,\n                    cstr_old_path.as_ptr(),\n                    cstr_new_path.as_ptr(),\n                )\n            } else {\n                hdfsRename(self.raw, cstr_old_path.as_ptr(), cstr_new_path.as_ptr())\n            }\n        } == 0\n        {\n            Ok(true)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to rename {old_path} to {new_path}\"\n            )))\n        }\n    }\n\n    /// Set the replication of the specified file to the supplied value\n    pub fn set_replication(&self, path: &str, num: i16) -> Result<bool, HdfsErr> {\n        if unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsSetReplication(self.raw, cstr_path.as_ptr(), num)\n        } == 0\n        {\n            Ok(true)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to set replication {num} for {path}\"\n            )))\n        }\n    }\n\n    /// Delete file.\n    pub fn delete(&self, path: &str, recursive: bool) -> Result<bool, HdfsErr> {\n        if unsafe {\n            let cstr_path = CString::new(path).unwrap();\n            hdfsDelete(self.raw, cstr_path.as_ptr(), recursive as c_int)\n        } == 0\n        {\n            Ok(true)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to delete {path} with recursive mode {recursive}\"\n            )))\n        }\n    }\n}\n\n/// since HDFS client handles are completely thread safe, here we implement Send+Sync trait for HdfsFs\nunsafe impl Send for HdfsFs {}\n\nunsafe impl Sync for HdfsFs {}\n\n/// open hdfs file\n#[derive(Clone)]\npub struct HdfsFile {\n    fs: HdfsFs,\n    path: String,\n    file: hdfsFile,\n    _marker: PhantomData<()>,\n}\n\nimpl Debug for HdfsFile {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"HdfsFile\")\n            .field(\"url\", &self.fs.url)\n            .field(\"path\", &self.path)\n            .finish()\n    }\n}\n\nimpl HdfsFile {\n    /// Get HdfsFs\n    #[inline]\n    pub fn fs(&self) -> &HdfsFs {\n        &self.fs\n    }\n\n    /// Return a file path\n    #[inline]\n    pub fn path(&self) -> &str {\n        &self.path\n    }\n\n    pub fn available(&self) -> Result<bool, HdfsErr> {\n        if unsafe { hdfsAvailable(self.fs.raw, self.file) } == 0 {\n            Ok(true)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"File {} is not available\",\n                self.path()\n            )))\n        }\n    }\n\n    /// Close the opened file\n    pub fn close(&self) -> Result<bool, HdfsErr> {\n        if unsafe { hdfsCloseFile(self.fs.raw, self.file) } == 0 {\n            Ok(true)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to close file {}\",\n                self.path()\n            )))\n        }\n    }\n\n    /// Flush the data.\n    pub fn flush(&self) -> bool {\n        (unsafe { hdfsFlush(self.fs.raw, self.file) }) == 0\n    }\n\n    /// Flush out the data in client's user buffer. 
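Unlike hsync, this does not\n    /// guarantee that the data has reached the disk. 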
After the return of this\n    /// call, new readers will see the data.\n    pub fn hflush(&self) -> bool {\n        (unsafe { hdfsHFlush(self.fs.raw, self.file) }) == 0\n    }\n\n    /// Similar to POSIX fsync, flush out the data in the client's\n    /// user buffer all the way to the disk device (but the disk may have\n    /// it in its cache).\n    pub fn hsync(&self) -> bool {\n        (unsafe { hdfsHSync(self.fs.raw, self.file) }) == 0\n    }\n\n    /// Determine if a file is open for read.\n    pub fn is_readable(&self) -> bool {\n        (unsafe { hdfsFileIsOpenForRead(self.file) }) == 1\n    }\n\n    /// Determine if a file is open for write.\n    pub fn is_writable(&self) -> bool {\n        (unsafe { hdfsFileIsOpenForWrite(self.file) }) == 1\n    }\n\n    /// Get the file status, including file size, last modified time, etc\n    pub fn get_file_status(&self) -> Result<FileStatus, HdfsErr> {\n        self.fs.get_file_status(self.path())\n    }\n\n    /// Get the current offset in the file, in bytes.\n    pub fn pos(&self) -> Result<u64, HdfsErr> {\n        let pos = unsafe { hdfsTell(self.fs.raw, self.file) };\n\n        if pos >= 0 {\n            Ok(pos as u64)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to get current offset of file {}\",\n                self.path()\n            )))\n        }\n    }\n\n    /// Read data from an open file.\n    pub fn read(&self, buf: &mut [u8]) -> Result<i32, HdfsErr> {\n        if buf.is_empty() {\n            return Ok(0);\n        }\n        let read_len = unsafe {\n            hdfsRead(\n                self.fs.raw,\n                self.file,\n                buf.as_ptr() as *mut c_void,\n                buf.len() as tSize,\n            )\n        };\n\n        if read_len >= 0 {\n            Ok(read_len)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to read contents from {} with return code {read_len}\",\n                self.path()\n            )))\n        }\n    }\n\n    /// Positional read of data from an open file.\n    pub fn read_with_pos(&self, pos: i64, buf: &mut [u8]) -> Result<i32, HdfsErr> {\n        if buf.is_empty() {\n            return Ok(0);\n        }\n        let read_len = unsafe {\n            hdfsPread(\n                self.fs.raw,\n                self.file,\n                pos as tOffset,\n                buf.as_ptr() as *mut c_void,\n                buf.len() as tSize,\n            )\n        };\n\n        if read_len >= 0 {\n            Ok(read_len)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to read contents from {} with offset {pos} and return code {read_len}\",\n                self.path()\n            )))\n        }\n    }\n\n    /// Seek to given offset in file.\n    pub fn seek(&self, offset: u64) -> bool {\n        (unsafe { hdfsSeek(self.fs.raw, self.file, offset as tOffset) }) == 0\n    }\n\n    /// Write data into an open file.\n    pub fn write(&self, buf: &[u8]) -> Result<i32, HdfsErr> {\n        if buf.is_empty() {\n            return Ok(0);\n        }\n        let written_len = unsafe {\n            hdfsWrite(\n                self.fs.raw,\n                self.file,\n                buf.as_ptr() as *mut c_void,\n                buf.len() as tSize,\n            )\n        };\n\n        if written_len >= 0 {\n            Ok(written_len)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to write contents to file {}\",\n                self.path()\n   
         )))\n        }\n    }\n}\n\n/// since HdfsFile is only the pointer to the file on Hdfs, here we implement Send+Sync trait\nunsafe impl Send for HdfsFile {}\n\nunsafe impl Sync for HdfsFile {}\n\n/// Interface that represents the client side information for a file or directory.\n#[derive(Clone)]\npub struct FileStatus {\n    raw: Arc<HdfsFileInfoPtr>,\n    idx: u32,\n    _marker: PhantomData<()>,\n}\n\nimpl FileStatus {\n    /// create FileStatus from *const hdfsFileInfo\n    #[inline]\n    fn new(ptr: *const hdfsFileInfo) -> FileStatus {\n        FileStatus {\n            raw: Arc::new(HdfsFileInfoPtr::new(ptr)),\n            idx: 0,\n            _marker: PhantomData,\n        }\n    }\n\n    /// create FileStatus from *const hdfsFileInfo which points\n    /// to dynamically allocated array.\n    #[inline]\n    fn from_array(raw: Arc<HdfsFileInfoPtr>, idx: u32) -> FileStatus {\n        FileStatus {\n            raw,\n            idx,\n            _marker: PhantomData,\n        }\n    }\n\n    /// Get the pointer to hdfsFileInfo\n    #[inline]\n    fn ptr(&self) -> *const hdfsFileInfo {\n        unsafe { self.raw.ptr.offset(self.idx as isize) }\n    }\n\n    /// Get the name of the file\n    #[inline]\n    pub fn name(&self) -> &str {\n        let slice = unsafe { CStr::from_ptr((*self.ptr()).mName) }.to_bytes();\n        std::str::from_utf8(slice).unwrap()\n    }\n\n    /// Is this a file?\n    #[inline]\n    pub fn is_file(&self) -> bool {\n        match unsafe { &*self.ptr() }.mKind {\n            tObjectKind::kObjectKindFile => true,\n            tObjectKind::kObjectKindDirectory => false,\n        }\n    }\n\n    /// Is this a directory?\n    #[inline]\n    pub fn is_directory(&self) -> bool {\n        match unsafe { &*self.ptr() }.mKind {\n            tObjectKind::kObjectKindFile => false,\n            tObjectKind::kObjectKindDirectory => true,\n        }\n    }\n\n    /// Get the owner of the file\n    #[inline]\n    pub fn owner(&self) -> &str {\n        let slice = unsafe { CStr::from_ptr((*self.ptr()).mOwner) }.to_bytes();\n        std::str::from_utf8(slice).unwrap()\n    }\n\n    /// Get the group associated with the file\n    #[inline]\n    pub fn group(&self) -> &str {\n        let slice = unsafe { CStr::from_ptr((*self.ptr()).mGroup) }.to_bytes();\n        std::str::from_utf8(slice).unwrap()\n    }\n\n    /// Get the permissions associated with the file\n    #[inline]\n    pub fn permission(&self) -> i16 {\n        unsafe { &*self.ptr() }.mPermissions\n    }\n\n    /// Get the length of this file, in bytes.\n    #[allow(clippy::len_without_is_empty)]\n    #[inline]\n    pub fn len(&self) -> usize {\n        unsafe { &*self.ptr() }.mSize as usize\n    }\n\n    /// Get the block size of the file.\n    #[inline]\n    pub fn block_size(&self) -> usize {\n        unsafe { &*self.ptr() }.mBlockSize as usize\n    }\n\n    /// Get the replication factor of a file.\n    #[inline]\n    pub fn replica_count(&self) -> i16 {\n        unsafe { &*self.ptr() }.mReplication\n    }\n\n    /// Get the last modification time for the file in seconds\n    #[inline]\n    pub fn last_modified(&self) -> time_t {\n        unsafe { &*self.ptr() }.mLastMod\n    }\n\n    /// Get the last access time for the file in seconds\n    #[inline]\n    pub fn last_access(&self) -> time_t {\n        unsafe { &*self.ptr() }.mLastAccess\n    }\n}\n\n/// Safely deallocable hdfsFileInfo pointer\nstruct HdfsFileInfoPtr {\n    pub ptr: *const hdfsFileInfo,\n    pub len: i32,\n}\n\n/// for safe deallocation\nimpl 
Drop for HdfsFileInfoPtr {\n    fn drop(&mut self) {\n        unsafe { hdfsFreeFileInfo(self.ptr as *mut hdfsFileInfo, self.len) };\n    }\n}\n\nimpl HdfsFileInfoPtr {\n    fn new(ptr: *const hdfsFileInfo) -> HdfsFileInfoPtr {\n        HdfsFileInfoPtr { ptr, len: 1 }\n    }\n\n    pub fn new_array(ptr: *const hdfsFileInfo, len: i32) -> HdfsFileInfoPtr {\n        HdfsFileInfoPtr { ptr, len }\n    }\n}\n\n/// since HdfsFileInfoPtr is only the pointer to the Hdfs file info, here we implement Send+Sync trait\nunsafe impl Send for HdfsFileInfoPtr {}\n\nunsafe impl Sync for HdfsFileInfoPtr {}\n\n/// Includes hostnames where a particular block of a file is stored.\npub struct BlockHosts {\n    ptr: *mut *mut *mut c_char,\n}\n\nimpl Drop for BlockHosts {\n    fn drop(&mut self) {\n        unsafe { hdfsFreeHosts(self.ptr) };\n    }\n}\n\npub const LOCAL_FS_SCHEME: &str = \"file\";\npub const HDFS_FS_SCHEME: &str = \"hdfs\";\npub const VIEW_FS_SCHEME: &str = \"viewfs\";\n\n#[inline]\nfn get_namenode_uri(path: &str) -> Result<String, HdfsErr> {\n    match Url::parse(path) {\n        Ok(url) => match url.scheme() {\n            LOCAL_FS_SCHEME => Ok(\"file:///\".to_string()),\n            HDFS_FS_SCHEME | VIEW_FS_SCHEME => {\n                if let Some(host) = url.host() {\n                    let mut uri_builder = String::new();\n                    write!(&mut uri_builder, \"{}://{}\", url.scheme(), host).unwrap();\n\n                    if let Some(port) = url.port() {\n                        write!(&mut uri_builder, \":{port}\").unwrap();\n                    }\n                    Ok(uri_builder)\n                } else {\n                    Err(HdfsErr::InvalidUrl(path.to_string()))\n                }\n            }\n            _ => Err(HdfsErr::InvalidUrl(path.to_string())),\n        },\n        Err(_) => Err(HdfsErr::InvalidUrl(path.to_string())),\n    }\n}\n\n#[inline]\npub fn get_uri(path: &str) -> Result<String, HdfsErr> {\n    let path = if path.starts_with('/') {\n        format!(\"{LOCAL_FS_SCHEME}://{path}\")\n    } else {\n        path.to_string()\n    };\n    match Url::parse(&path) {\n        Ok(url) => match url.scheme() {\n            LOCAL_FS_SCHEME | HDFS_FS_SCHEME | VIEW_FS_SCHEME => Ok(url.to_string()),\n            _ => Err(HdfsErr::InvalidUrl(path.to_string())),\n        },\n        Err(_) => Err(HdfsErr::InvalidUrl(path.to_string())),\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use uuid::Uuid;\n\n    use crate::minidfs::get_dfs;\n\n    #[cfg(feature = \"use_existing_hdfs\")]\n    #[test]\n    fn test_hdfs_default() {\n        let fs = super::get_hdfs().ok().unwrap();\n\n        let uuid = Uuid::new_v4().to_string();\n        let test_file = uuid.as_str();\n        let created_file = match fs.create(test_file) {\n            Ok(f) => f,\n            Err(_) => panic!(\"Couldn't create a file\"),\n        };\n        assert!(created_file.close().is_ok());\n        assert!(fs.exist(test_file));\n\n        assert!(fs.delete(test_file, false).is_ok());\n        assert!(!fs.exist(test_file));\n    }\n\n    #[test]\n    fn test_hdfs() {\n        let dfs = get_dfs();\n        {\n            let minidfs_addr = dfs.namenode_addr();\n            let fs = dfs.get_hdfs().ok().unwrap();\n\n            // Test hadoop file\n            {\n                // create a file, check existence, and close\n                let uuid = Uuid::new_v4().to_string();\n                let test_file = uuid.as_str();\n                let created_file = match fs.create(test_file) {\n                    
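// create() fails with FileAlreadyExists if the file is already present\n                    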
Ok(f) => f,\n                    Err(_) => panic!(\"Couldn't create a file\"),\n                };\n                assert!(created_file.close().is_ok());\n                assert!(fs.exist(test_file));\n\n                // open a file and close\n                let opened_file = fs.open(test_file).ok().unwrap();\n                assert!(opened_file.close().is_ok());\n\n                // Clean up\n                assert!(fs.delete(test_file, false).is_ok());\n            }\n\n            // Test directory\n            {\n                let uuid = Uuid::new_v4().to_string();\n                let test_dir = format!(\"/{uuid}\");\n\n                match fs.mkdir(&test_dir) {\n                    Ok(_) => println!(\"{test_dir} created\"),\n                    Err(_) => panic!(\"Couldn't create {test_dir} directory\"),\n                };\n\n                let file_info = fs.get_file_status(&test_dir).ok().unwrap();\n\n                let expected_path = format!(\"{minidfs_addr}{test_dir}\");\n                assert_eq!(&expected_path, file_info.name());\n                assert!(!file_info.is_file());\n                assert!(file_info.is_directory());\n\n                let sub_dir_num = 3;\n                let mut expected_list = Vec::new();\n                for x in 0..sub_dir_num {\n                    let filename = format!(\"{test_dir}/{x}\");\n                    expected_list.push(format!(\"{minidfs_addr}{test_dir}/{x}\"));\n\n                    match fs.mkdir(&filename) {\n                        Ok(_) => println!(\"{filename} created\"),\n                        Err(_) => panic!(\"Couldn't create {filename} directory\"),\n                    };\n                }\n\n                let mut list = fs.list_status(&test_dir).ok().unwrap();\n                assert_eq!(sub_dir_num, list.len());\n\n                list.sort_by(|a, b| Ord::cmp(a.name(), b.name()));\n                for (expected, name) in expected_list\n                    .iter()\n                    .zip(list.iter().map(|status| status.name()))\n                {\n                    assert_eq!(expected, name);\n                }\n\n                // Clean up\n                assert!(fs.delete(&test_dir, true).is_ok());\n            }\n        }\n    }\n\n    #[test]\n    fn test_list_status_with_empty_dir() {\n        let dfs = get_dfs();\n        {\n            let fs = dfs.get_hdfs().ok().unwrap();\n\n            // Test list status with empty directory\n            {\n                let uuid = Uuid::new_v4().to_string();\n                let test_dir = format!(\"/{uuid}\");\n                let empty_dir = \"/_empty\".to_string();\n\n                match fs.mkdir(&test_dir) {\n                    Ok(_) => println!(\"{test_dir} created\"),\n                    Err(_) => panic!(\"Couldn't create {test_dir} directory\"),\n                };\n\n                match fs.mkdir(&empty_dir) {\n                    Ok(_) => println!(\"{empty_dir} created\"),\n                    Err(_) => panic!(\"Couldn't create {empty_dir} directory\"),\n                };\n\n                let test_file = format!(\"{test_dir}/test.txt\");\n                match fs.create(&test_file) {\n                    Ok(_) => println!(\"{test_file} created\"),\n                    Err(_) => panic!(\"Couldn't create {test_file}\"),\n                }\n\n                let file_info = fs.list_status(&test_dir).ok().unwrap();\n                assert_eq!(file_info.len(), 1);\n\n                let file_info = 
fs.list_status(&empty_dir).ok().unwrap();\n                assert_eq!(file_info.len(), 0);\n\n                // Clean up\n                assert!(fs.delete(&test_dir, true).is_ok());\n            }\n        }\n    }\n\n    #[test]\n    fn test_write_read() {\n        let dfs = get_dfs();\n        let fs = dfs.get_hdfs().ok().unwrap();\n\n        let data = \"Hello World!!!\".as_bytes();\n\n        let uuid = Uuid::new_v4().to_string();\n        let test_file = uuid.as_str();\n\n        // Test write to hadoop file\n        {\n            // create a file, write the data, and close\n            let created_file = match fs.create_with_params(test_file, false, 0, 1, 0) {\n                Ok(f) => f,\n                Err(_) => panic!(\"Couldn't create a hdfs file\"),\n            };\n            match created_file.write(data) {\n                Ok(len) => assert_eq!(len as usize, data.len()),\n                Err(_) => panic!(\"Couldn't write contents to a hdfs file\"),\n            }\n            assert!(created_file.close().is_ok());\n        };\n\n        // Test append to hadoop file\n        {\n            // open the file for append, write the data again, and close\n            let appended_file = match fs.append(test_file) {\n                Ok(f) => f,\n                Err(_) => panic!(\"Couldn't open a hdfs file for append\"),\n            };\n            match appended_file.write(data) {\n                Ok(len) => assert_eq!(len as usize, data.len()),\n                Err(_) => panic!(\"Couldn't write contents to a hdfs file\"),\n            }\n            assert!(appended_file.close().is_ok());\n        };\n\n        // Test read\n        {\n            assert!(fs.exist(test_file));\n\n            let mut data2 = vec![];\n            data2.append(&mut data.to_vec());\n            data2.append(&mut data.to_vec());\n\n            // open a file, read, check the contents and close\n            let opened_file = fs.open(test_file).ok().unwrap();\n            let mut buf = vec![0u8; data2.len()];\n            assert!(opened_file.read(&mut buf).is_ok());\n            assert_eq!(data2, buf);\n            assert!(opened_file.close().is_ok());\n\n            // open a file, read_with_pos, check the contents and close\n            let opened_file = fs.open(test_file).ok().unwrap();\n            let mut buf = vec![0u8; data2.len()];\n            assert!(opened_file.read_with_pos(0, &mut buf).is_ok());\n            assert_eq!(data2, buf);\n            assert!(opened_file.close().is_ok());\n        }\n\n        // Clean up\n        assert!(fs.delete(test_file, false).is_ok());\n    }\n}\n"
  },
  {
    "path": "native/fs-hdfs/src/lib.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! fs-hdfs3 is a library for accessing to HDFS cluster.\n//! Basically, it provides libhdfs FFI APIs.\n//! It also provides more idiomatic and abstract Rust APIs,\n//! hiding manual memory management and some thread-safety problem of libhdfs.\n//! Rust APIs are highly recommended for most users.\n//!\n//! ## Important Note\n//! The original ``libhdfs`` implementation allows only one ``HdfsFs`` instance for the\n//! same namenode because ``libhdfs`` only keeps a single ``hdfsFs`` entry for each namenode.\n//! As a result, a global singleton ``HdfsManager`` is introduced to control only one single ``hdfsFs`` entry created for each namenode.\n//! Contrast, ``HdfsFs`` instance itself is thread-safe.\n//!\n//! This library mainly provides two public methods to load and unload ``HdfsFs``\n//! - pub fn get_hdfs_by_full_path(path: &str) -> Result<Arc<HdfsFs>, HdfsErr>\n//! - pub fn unload_hdfs_cache_by_full_path(path: &str) -> Result<Option<Arc<HdfsFs>>, HdfsErr>\n//!\n//! ## Usage\n//! in Cargo.toml:\n//!\n//! ```ignore\n//! [dependencies]\n//! fs-hdfs3 = \"0.1.12\"\n//! ```\n//! or\n//!\n//! ```ignore\n//! [dependencies.fs-hdfs3]\n//! git = \"https://github.com/datafusion-contrib/fs-hdfs\"\n//! ```\n//!\n//! Firstly, we need to add library path for the jvm related dependencies.\n//! An example for MacOS,\n//!\n//! ```bash ignore\n//! export DYLD_LIBRARY_PATH=$JAVA_HOME/jre/lib/server\n//! ```\n//!\n//! Here, ``$JAVA_HOME`` need to be specified and exported.\n//!\n//! Since our compiled libhdfs is JNI native implementation, it requires the proper ``CLASSPATH``.\n//! An example,\n//!\n//! ```bash ignore\n//! export CLASSPATH=$CLASSPATH:`hadoop classpath --glob`\n//! ```\n//!\n//! ## Testing\n//! The test also requires the ``CLASSPATH``. In case that the java class of ``org.junit.Assert``\n//! can't be found. Refine the ``$CLASSPATH`` as follows:\n//!\n//! ```bash ignore\n//! export CLASSPATH=$CLASSPATH:`hadoop classpath --glob`:$HADOOP_HOME/share/hadoop/tools/lib/*\n//! ```\n//!\n//! Here, ``$HADOOP_HOME`` need to be specified and exported.\n//!\n//! Then you can run\n//!\n//! ```ignore\n//! cargo test\n//! ```\n//!\n//! ## Example\n//!\n//! ```ignore\n//! use std::sync::Arc;\n//! use hdfs::hdfs::{get_hdfs_by_full_path, unload_hdfs_cache, HdfsFs};\n//!\n//! let fs: Arc<HdfsFs> = get_hdfs_by_full_path(\"hdfs://localhost:8020/\").ok().unwrap();\n//! match fs.mkdir(\"/data\") {\n//!   Ok(_) => { println!(\"/data has been created\") },\n//!   Err(_)  => { panic!(\"/data creation has failed\") }\n//! };\n//!\n//! match unload_hdfs_cache(fs) {\n//!   Ok(Some(_)) => { println!(\"Unloading hdfs cache succeeded\") },\n//!   _  => { panic!(\"Unloading hdfs cache failed\") }\n//! 
}\n//! ```\n\n#![allow(non_upper_case_globals)]\n#![allow(non_camel_case_types)]\n#![allow(non_snake_case)]\n\n#[allow(deref_nullptr)]\nmod native;\n\npub mod err;\n/// Rust APIs wrapping the libhdfs API, providing better semantics and abstraction\npub mod hdfs;\n#[cfg(feature = \"test_util\")]\n/// Mainly for unit tests\npub mod minidfs;\n#[cfg(feature = \"test_util\")]\npub mod util;\n/// For listing files in a directory recursively\npub mod walkdir;\n"
  },
  {
    "path": "native/fs-hdfs/src/minidfs.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! MiniDfs Cluster\n//!\n//! MiniDFS provides a embedded HDFS cluster. It is only for testing.\n//!\n//! Since it's not supported to create multiple MiniDFS instances for parallel testing,\n//! we only use a global one which can be get by ``get_dfs()``.\n//! And when it is finally destroyed, it's ``drop()`` method will be invoked so that users don't need to care about the resource releasing.\n//!\n//! ## Example\n//!\n//! ```ignore\n//! use hdfs::minidfs;\n//!\n//! let dfs = minidfs::get_dfs();\n//! let port = dfs.namenode_port();\n//! ...\n//! ```\n\nuse lazy_static::lazy_static;\nuse libc::{c_char, c_int};\nuse std::ffi;\nuse std::mem;\nuse std::str;\nuse std::sync::Arc;\n\nuse crate::err::HdfsErr;\nuse crate::hdfs;\nuse crate::hdfs::HdfsFs;\nuse crate::native::*;\n\nlazy_static! {\n    static ref MINI_DFS: Arc<MiniDFS> = Arc::new(MiniDFS::new());\n}\n\npub fn get_dfs() -> Arc<MiniDFS> {\n    MINI_DFS.clone()\n}\n\npub struct MiniDFS {\n    cluster: *mut MiniDfsCluster,\n}\n\nunsafe impl Send for MiniDFS {}\n\nunsafe impl Sync for MiniDFS {}\n\nimpl Drop for MiniDFS {\n    fn drop(&mut self) {\n        self.stop();\n    }\n}\n\nimpl MiniDFS {\n    fn new() -> MiniDFS {\n        let conf = new_mini_dfs_conf();\n        MiniDFS::start(&conf).unwrap()\n    }\n\n    fn start(conf: &MiniDfsConf) -> Option<MiniDFS> {\n        match unsafe { nmdCreate(conf) } {\n            val if !val.is_null() => Some(MiniDFS { cluster: val }),\n            _ => None,\n        }\n    }\n\n    fn stop(&self) {\n        // remove hdfs from global cache\n        hdfs::unload_hdfs_cache_by_full_path(self.namenode_addr().as_str()).ok();\n        unsafe {\n            nmdShutdownClean(self.cluster);\n            nmdFree(self.cluster);\n        }\n    }\n\n    #[allow(dead_code)]\n    fn wait_for_clusterup(&self) -> bool {\n        unsafe { nmdWaitClusterUp(self.cluster) == 0 }\n    }\n\n    #[allow(clippy::not_unsafe_ptr_arg_deref)]\n    pub fn set_hdfs_builder(&self, builder: *mut hdfsBuilder) -> bool {\n        unsafe { nmdConfigureHdfsBuilder(self.cluster, builder) == 0 }\n    }\n\n    pub fn namenode_port(&self) -> Option<i32> {\n        match unsafe { nmdGetNameNodePort(self.cluster) } {\n            val if val > 0 => Some(val),\n            _ => None,\n        }\n    }\n\n    pub fn namenode_addr(&self) -> String {\n        if let Some(port) = self.namenode_port() {\n            format!(\"hdfs://localhost:{port}\")\n        } else {\n            \"hdfs://localhost\".to_string()\n        }\n    }\n\n    pub fn namenode_http_addr(&self) -> Option<(&str, i32)> {\n        let mut hostname: *const c_char = unsafe { mem::zeroed() };\n        let mut port: c_int = 0;\n\n        match unsafe { 
nmdGetNameNodeHttpAddress(self.cluster, &mut port, &mut hostname) }\n        {\n            0 => {\n                let slice = unsafe { ffi::CStr::from_ptr(hostname) }.to_bytes();\n                let str = str::from_utf8(slice).unwrap();\n\n                Some((str, port))\n            }\n            _ => None,\n        }\n    }\n\n    pub fn get_hdfs(&self) -> Result<Arc<HdfsFs>, HdfsErr> {\n        hdfs::get_hdfs_by_full_path(self.namenode_addr().as_str())\n    }\n}\n\nfn new_mini_dfs_conf() -> MiniDfsConf {\n    MiniDfsConf {\n        doFormat: 1,\n        webhdfsEnabled: 0,\n        namenodeHttpPort: 0,\n        configureShortCircuit: 0,\n    }\n}\n"
  },
  {
    "path": "native/fs-hdfs/src/native.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n#![allow(unused_imports)]\n#![allow(dead_code)]\ninclude!(concat!(env!(\"OUT_DIR\"), \"/hdfs-native.rs\"));\n"
  },
  {
    "path": "native/fs-hdfs/src/util.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Hdfs Utility\n\nuse crate::err::HdfsErr;\nuse crate::hdfs;\nuse crate::hdfs::{get_uri, HdfsFs};\nuse crate::minidfs::MiniDFS;\nuse crate::native::{hdfsCopy, hdfsMove};\nuse std::ffi::CString;\nuse std::sync::Arc;\n\n/// Hdfs Utility\npub struct HdfsUtil;\n\nimpl HdfsUtil {\n    /// Copy file to hdfs\n    pub fn copy_file_to_hdfs(\n        dfs: Arc<MiniDFS>,\n        src_path: &str,\n        dst_path: &str,\n    ) -> Result<bool, HdfsErr> {\n        let src_uri = get_uri(src_path)?;\n        let src_fs = hdfs::get_hdfs_by_full_path(&src_uri)?;\n\n        let dst_fs = dfs.get_hdfs()?;\n\n        HdfsUtil::copy(&src_fs, src_path, &dst_fs, dst_path)\n    }\n\n    /// Copy file from hdfs\n    pub fn copy_file_from_hdfs(\n        dfs: Arc<MiniDFS>,\n        src_path: &str,\n        dst_path: &str,\n    ) -> Result<bool, HdfsErr> {\n        let src_fs = dfs.get_hdfs()?;\n\n        let dst_uri = get_uri(dst_path)?;\n        let dst_fs = hdfs::get_hdfs_by_full_path(&dst_uri)?;\n\n        HdfsUtil::copy(&src_fs, src_path, &dst_fs, dst_path)\n    }\n\n    /// Move file to hdfs\n    pub fn mv_file_to_hdfs(\n        dfs: Arc<MiniDFS>,\n        src_path: &str,\n        dst_path: &str,\n    ) -> Result<bool, HdfsErr> {\n        let src_uri = get_uri(src_path)?;\n        let src_fs = hdfs::get_hdfs_by_full_path(&src_uri)?;\n\n        let dst_fs = dfs.get_hdfs()?;\n\n        HdfsUtil::mv(&src_fs, src_path, &dst_fs, dst_path)\n    }\n\n    /// Move file from hdfs\n    pub fn mv_file_from_hdfs(\n        dfs: Arc<MiniDFS>,\n        src_path: &str,\n        dst_path: &str,\n    ) -> Result<bool, HdfsErr> {\n        let src_fs = dfs.get_hdfs()?;\n\n        let dst_uri = get_uri(dst_path)?;\n        let dst_fs = hdfs::get_hdfs_by_full_path(&dst_uri)?;\n\n        HdfsUtil::mv(&src_fs, src_path, &dst_fs, dst_path)\n    }\n\n    /// Copy file from one filesystem to another.\n    ///\n    /// #### Params\n    /// * ```srcFS``` - The handle to source filesystem.\n    /// * ```src``` - The path of source file.\n    /// * ```dstFS``` - The handle to destination filesystem.\n    /// * ```dst``` - The path of destination file.\n    pub fn copy(\n        src_fs: &HdfsFs,\n        src: &str,\n        dst_fs: &HdfsFs,\n        dst: &str,\n    ) -> Result<bool, HdfsErr> {\n        let res = unsafe {\n            let cstr_src = CString::new(src).unwrap();\n            let cstr_dst = CString::new(dst).unwrap();\n            hdfsCopy(\n                src_fs.raw(),\n                cstr_src.as_ptr(),\n                dst_fs.raw(),\n                cstr_dst.as_ptr(),\n            )\n        };\n\n        if res == 0 {\n            Ok(true)\n        } else {\n            Err(HdfsErr::Generic(format!(\n  
              \"Fail to copy file from {src} to {dst}\"\n            )))\n        }\n    }\n\n    /// Move file from one filesystem to another.\n    ///\n    /// #### Params\n    /// * ```srcFS``` - The handle to source filesystem.\n    /// * ```src``` - The path of source file.\n    /// * ```dstFS``` - The handle to destination filesystem.\n    /// * ```dst``` - The path of destination file.\n    pub fn mv(\n        src_fs: &HdfsFs,\n        src: &str,\n        dst_fs: &HdfsFs,\n        dst: &str,\n    ) -> Result<bool, HdfsErr> {\n        let res = unsafe {\n            let cstr_src = CString::new(src).unwrap();\n            let cstr_dst = CString::new(dst).unwrap();\n            hdfsMove(\n                src_fs.raw(),\n                cstr_src.as_ptr(),\n                dst_fs.raw(),\n                cstr_dst.as_ptr(),\n            )\n        };\n\n        if res == 0 {\n            Ok(true)\n        } else {\n            Err(HdfsErr::Generic(format!(\n                \"Fail to move file from {src} to {dst}\"\n            )))\n        }\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use crate::minidfs::get_dfs;\n    use std::path::Path;\n    use tempfile::tempdir;\n    use uuid::Uuid;\n\n    #[test]\n    fn test_from_local() {\n        // Prepare file source from local file system\n        let temp_file = tempfile::Builder::new().tempfile().unwrap();\n        let src_path = temp_file.path();\n        let src_file_name = src_path.file_name().unwrap().to_str().unwrap();\n        let src_file = src_path.to_str().unwrap();\n\n        let dfs = get_dfs();\n        {\n            // Source\n            let src_file_uri = format!(\"file://{}\", src_path.to_str().unwrap());\n            let src_fs = hdfs::get_hdfs_by_full_path(&src_file_uri).ok().unwrap();\n\n            // Destination\n            let dst_file = format!(\"/{src_file_name}\");\n            let dst_fs = dfs.get_hdfs().ok().unwrap();\n\n            // Test copy\n            {\n                assert!(\n                    HdfsUtil::copy_file_to_hdfs(dfs.clone(), src_file, &dst_file).is_ok()\n                );\n                assert!(dst_fs.exist(&dst_file));\n                assert!(src_fs.exist(src_file));\n                assert!(Path::new(src_file).exists());\n            }\n\n            // Clean up\n            assert!(dst_fs.delete(&dst_file, false).is_ok());\n\n            // Test move\n            {\n                assert!(HdfsUtil::mv_file_to_hdfs(dfs, src_file, &dst_file).is_ok());\n                assert!(dst_fs.exist(&dst_file));\n                assert!(!src_fs.exist(src_file));\n                assert!(!Path::new(src_file).exists());\n            }\n\n            // Clean up\n            assert!(dst_fs.delete(&dst_file, false).is_ok());\n        }\n    }\n\n    #[test]\n    fn test_to_local() {\n        let uuid = Uuid::new_v4().to_string();\n        let file_name = uuid.as_str();\n\n        let temp_dir = tempdir().unwrap();\n        let dst_path = temp_dir.path().join(file_name);\n        let dst_file = dst_path.to_str().unwrap();\n\n        let dfs = get_dfs();\n        {\n            // Source\n            let src_file = format!(\"/{file_name}\");\n            let src_fs = dfs.get_hdfs().ok().unwrap();\n            src_fs.create(&src_file).ok().unwrap();\n\n            // Destination\n            let dst_file_uri = format!(\"file://{dst_file}\");\n            let dst_fs = hdfs::get_hdfs_by_full_path(&dst_file_uri).ok().unwrap();\n\n            // Test copy\n            {\n                
assert!(\n                    HdfsUtil::copy_file_from_hdfs(dfs.clone(), &src_file, dst_file)\n                        .is_ok()\n                );\n                assert!(src_fs.exist(&src_file));\n                assert!(dst_fs.exist(dst_file));\n                assert!(Path::new(dst_file).exists());\n            }\n\n            // Clean up\n            assert!(dst_fs.delete(dst_file, false).is_ok());\n\n            {\n                assert!(HdfsUtil::mv_file_from_hdfs(dfs, &src_file, dst_file).is_ok());\n                assert!(!src_fs.exist(&src_file));\n                assert!(dst_fs.exist(dst_file));\n                assert!(Path::new(dst_file).exists());\n            }\n        }\n    }\n}\n"
  },
  {
    "path": "native/fs-hdfs/src/walkdir/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::sync::Arc;\n\nuse crate::err::HdfsErr;\nuse crate::hdfs;\nuse crate::hdfs::{FileStatus, HdfsFs};\nuse crate::walkdir::tree_iter::{IterOptions, TreeIter, TreeManager};\n\npub mod tree_iter;\n\n#[derive(Debug)]\npub struct HdfsWalkDir {\n    hdfs: Arc<HdfsFs>,\n    root: String,\n    opts: IterOptions,\n}\n\nimpl HdfsWalkDir {\n    pub fn new(root: String) -> Result<Self, HdfsErr> {\n        let hdfs = hdfs::get_hdfs_by_full_path(&root)?;\n        Ok(Self::new_with_hdfs(root, hdfs))\n    }\n\n    pub fn new_with_hdfs(root: String, hdfs: Arc<HdfsFs>) -> Self {\n        Self {\n            hdfs,\n            root,\n            opts: IterOptions {\n                min_depth: 0,\n                max_depth: usize::MAX,\n            },\n        }\n    }\n\n    /// Set the minimum depth of entries yielded by the iterator.\n    ///\n    /// The smallest depth is `0` and always corresponds to the path given\n    /// to the `new` function on this type. Its direct descendents have depth\n    /// `1`, and their descendents have depth `2`, and so on.\n    pub fn min_depth(mut self, depth: usize) -> Self {\n        self.opts.min_depth = depth;\n        if self.opts.min_depth > self.opts.max_depth {\n            self.opts.min_depth = self.opts.max_depth;\n        }\n        self\n    }\n\n    /// Set the maximum depth of entries yield by the iterator.\n    ///\n    /// The smallest depth is `0` and always corresponds to the path given\n    /// to the `new` function on this type. 
Its direct descendents have depth\n    /// `1`, and their descendents have depth `2`, and so on.\n    ///\n    /// Note that this will not simply filter the entries of the iterator, but\n    /// it will actually avoid descending into directories when the depth is\n    /// exceeded.\n    pub fn max_depth(mut self, depth: usize) -> Self {\n        self.opts.max_depth = depth;\n        if self.opts.max_depth < self.opts.min_depth {\n            self.opts.max_depth = self.opts.min_depth;\n        }\n        self\n    }\n}\n\nimpl IntoIterator for HdfsWalkDir {\n    type Item = Result<FileStatus, HdfsErr>;\n    type IntoIter = TreeIter<String, FileStatus, HdfsErr>;\n\n    fn into_iter(self) -> TreeIter<String, FileStatus, HdfsErr> {\n        TreeIter::new(\n            Box::new(HdfsTreeManager {\n                hdfs: self.hdfs.clone(),\n            }),\n            self.opts,\n            self.root,\n        )\n    }\n}\n\nstruct HdfsTreeManager {\n    hdfs: Arc<HdfsFs>,\n}\n\nimpl TreeManager<String, FileStatus, HdfsErr> for HdfsTreeManager {\n    fn to_value(&self, v: String) -> Result<FileStatus, HdfsErr> {\n        self.hdfs.get_file_status(&v)\n    }\n\n    fn get_children(&self, n: &FileStatus) -> Result<Vec<FileStatus>, HdfsErr> {\n        self.hdfs.list_status(n.name())\n    }\n\n    fn is_leaf(&self, n: &FileStatus) -> bool {\n        n.is_file()\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use std::sync::Arc;\n\n    use crate::hdfs::{HdfsErr, HdfsFs};\n    use crate::minidfs::get_dfs;\n    use crate::walkdir::HdfsWalkDir;\n\n    #[test]\n    fn test_hdfs_file_list() -> Result<(), HdfsErr> {\n        let hdfs = set_up_hdfs_env()?;\n\n        let hdfs_walk_dir =\n            HdfsWalkDir::new_with_hdfs(\"/testing\".to_owned(), hdfs.clone())\n                .min_depth(0)\n                .max_depth(2);\n        let mut iter = hdfs_walk_dir.into_iter();\n\n        let ret_vec = [\n            \"/testing\",\n            \"/testing/c\",\n            \"/testing/b\",\n            \"/testing/b/3\",\n            \"/testing/b/2\",\n            \"/testing/b/1\",\n            \"/testing/a\",\n            \"/testing/a/3\",\n            \"/testing/a/2\",\n            \"/testing/a/1\",\n        ];\n        for entry in ret_vec.into_iter() {\n            assert_eq!(\n                format!(\"{}{}\", hdfs.url(), entry),\n                iter.next().unwrap()?.name()\n            );\n        }\n        assert!(iter.next().is_none());\n\n        let hdfs_walk_dir =\n            HdfsWalkDir::new_with_hdfs(\"/testing\".to_owned(), hdfs.clone())\n                .min_depth(2)\n                .max_depth(3);\n        let mut iter = hdfs_walk_dir.into_iter();\n\n        let ret_vec = [\n            \"/testing/b/3\",\n            \"/testing/b/2\",\n            \"/testing/b/1\",\n            \"/testing/a/3\",\n            \"/testing/a/2\",\n            \"/testing/a/2/11\",\n            \"/testing/a/1\",\n            \"/testing/a/1/12\",\n            \"/testing/a/1/11\",\n        ];\n        for entry in ret_vec.into_iter() {\n            assert_eq!(\n                format!(\"{}{}\", hdfs.url(), entry),\n                iter.next().unwrap()?.name()\n            );\n        }\n        assert!(iter.next().is_none());\n\n        Ok(())\n    }\n\n    fn set_up_hdfs_env() -> Result<Arc<HdfsFs>, HdfsErr> {\n        let dfs = get_dfs();\n        let hdfs = dfs.get_hdfs()?;\n\n        hdfs.mkdir(\"/testing\")?;\n        hdfs.mkdir(\"/testing/a\")?;\n        hdfs.mkdir(\"/testing/b\")?;\n        
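// plain files become leaves of the walk; directories may be descended into\n        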
hdfs.create(\"/testing/c\")?;\n        hdfs.mkdir(\"/testing/a/1\")?;\n        hdfs.mkdir(\"/testing/a/2\")?;\n        hdfs.create(\"/testing/a/3\")?;\n        hdfs.create(\"/testing/b/1\")?;\n        hdfs.create(\"/testing/b/2\")?;\n        hdfs.create(\"/testing/b/3\")?;\n        hdfs.create(\"/testing/a/1/11\")?;\n        hdfs.create(\"/testing/a/1/12\")?;\n        hdfs.create(\"/testing/a/2/11\")?;\n\n        Ok(hdfs)\n    }\n}\n"
  },
  {
    "path": "native/fs-hdfs/src/walkdir/tree_iter.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::fmt;\n\n/// A utility struct for iterator tree nodes\npub struct TreeIter<V, N, E> {\n    /// For checking node properties, like leaf node or not, finding children, etc.\n    manager: Box<dyn TreeManager<V, N, E>>,\n    /// Options specified in the builder. Depths, etc.\n    opts: IterOptions,\n    /// The start path.\n    /// This is only `Some(...)` at the beginning. After the first iteration,\n    /// this is always `None`.\n    start: Option<V>,\n    /// A stack of unvisited qualified entries\n    stack_nodes: Vec<TreeNode<N>>,\n    /// A stack of unqualified parent entries\n    deferred_nodes: Vec<TreeNode<N>>,\n}\n\nimpl<V, N, E> TreeIter<V, N, E> {\n    pub fn new(\n        manager: Box<dyn TreeManager<V, N, E>>,\n        opts: IterOptions,\n        start: V,\n    ) -> TreeIter<V, N, E> {\n        TreeIter {\n            manager,\n            opts,\n            start: Some(start),\n            stack_nodes: vec![],\n            deferred_nodes: vec![],\n        }\n    }\n\n    fn next_item(&mut self) -> Result<Option<N>, E> {\n        // Initialize if possible\n        if let Some(start) = self.start.take() {\n            let root = self.manager.to_value(start)?;\n            match (0 == self.opts.min_depth, self.manager.is_leaf(&root)) {\n                (true, true) => return Ok(Some(root)),\n                (true, false) => self.stack_nodes.push(TreeNode {\n                    node: root,\n                    layer: 0,\n                }),\n                (false, true) => return Ok(None),\n                (false, false) => self.deferred_nodes.push(TreeNode {\n                    node: root,\n                    layer: 0,\n                }),\n            }\n        }\n\n        // Check whether there are items in the qualified layer\n        if let Some(next_node) = self.stack_nodes.pop() {\n            if next_node.layer < self.opts.max_depth\n                && !self.manager.is_leaf(&next_node.node)\n            {\n                self.stack_nodes.extend(\n                    self.manager\n                        .get_children(&next_node.node)?\n                        .into_iter()\n                        .map(|node| TreeNode {\n                            node,\n                            layer: next_node.layer + 1,\n                        }),\n                );\n            }\n            return Ok(Some(next_node.node));\n        }\n\n        // Check deferred nodes whose children is not empty\n        if let Some(prev_node) = self.deferred_nodes.pop() {\n            assert!(!self.manager.is_leaf(&prev_node.node));\n            let children = self.manager.get_children(&prev_node.node)?;\n            if prev_node.layer + 1 == 
self.opts.min_depth {\n                self.stack_nodes\n                    .extend(children.into_iter().map(|node| TreeNode {\n                        node,\n                        layer: prev_node.layer + 1,\n                    }));\n            } else {\n                self.deferred_nodes.extend(\n                    children\n                        .into_iter()\n                        .filter(|node| !self.manager.is_leaf(node))\n                        .map(|node| TreeNode {\n                            node,\n                            layer: prev_node.layer + 1,\n                        }),\n                );\n            }\n            return self.next_item();\n        }\n\n        Ok(None)\n    }\n}\n\nimpl<V, N, E> Iterator for TreeIter<V, N, E> {\n    type Item = Result<N, E>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        self.next_item().transpose()\n    }\n}\n\npub struct TreeNode<N> {\n    node: N,\n    layer: usize,\n}\n\npub struct IterOptions {\n    pub min_depth: usize,\n    pub max_depth: usize,\n}\n\nimpl fmt::Debug for IterOptions {\n    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"IterOptions\")\n            .field(\"min_depth\", &self.min_depth)\n            .field(\"max_depth\", &self.max_depth)\n            .finish()\n    }\n}\n\npub trait TreeManager<V, N, E>: Send + Sync {\n    fn to_value(&self, v: V) -> Result<N, E>;\n\n    fn get_children(&self, n: &N) -> Result<Vec<N>, E>;\n\n    fn is_leaf(&self, n: &N) -> bool;\n}\n\n#[cfg(test)]\nmod test {\n    use std::collections::BTreeMap;\n    use std::io::Error;\n\n    use crate::walkdir::tree_iter::{IterOptions, TreeIter, TreeManager};\n\n    #[test]\n    fn test_tree_iter() -> Result<(), Error> {\n        let mut iter = TreeIter::new(\n            Box::new(create_test_tree_manager()),\n            IterOptions {\n                min_depth: 0,\n                max_depth: 2,\n            },\n            \"/testing\".to_owned(),\n        );\n\n        let ret_vec = [\n            \"/testing\",\n            \"/testing/c\",\n            \"/testing/b\",\n            \"/testing/b/3\",\n            \"/testing/b/2\",\n            \"/testing/b/1\",\n            \"/testing/a\",\n            \"/testing/a/3\",\n            \"/testing/a/2\",\n            \"/testing/a/1\",\n        ];\n        for entry in ret_vec.into_iter() {\n            assert_eq!(entry.to_owned(), iter.next().unwrap()?);\n        }\n        assert!(iter.next().is_none());\n\n        let mut iter = TreeIter::new(\n            Box::new(create_test_tree_manager()),\n            IterOptions {\n                min_depth: 2,\n                max_depth: 3,\n            },\n            \"/testing\".to_owned(),\n        );\n\n        let ret_vec = [\n            \"/testing/b/3\",\n            \"/testing/b/2\",\n            \"/testing/b/1\",\n            \"/testing/a/3\",\n            \"/testing/a/2\",\n            \"/testing/a/2/11\",\n            \"/testing/a/1\",\n            \"/testing/a/1/12\",\n            \"/testing/a/1/11\",\n        ];\n        for entry in ret_vec.into_iter() {\n            assert_eq!(entry.to_owned(), iter.next().unwrap()?);\n        }\n        assert!(iter.next().is_none());\n        Ok(())\n    }\n\n    // Testing data with pairs. The first one indicating the node value. 
The second one indicating whether it's a leaf node or not.\n    fn create_test_tree_manager() -> TestTreeManager {\n        TestTreeManager {\n            data: BTreeMap::from([\n                (\"/testing\".to_owned(), false),\n                (\"/testing/a\".to_owned(), false),\n                (\"/testing/b\".to_owned(), false),\n                (\"/testing/c\".to_owned(), true),\n                (\"/testing/a/1\".to_owned(), false),\n                (\"/testing/a/2\".to_owned(), false),\n                (\"/testing/a/3\".to_owned(), true),\n                (\"/testing/b/1\".to_owned(), true),\n                (\"/testing/b/2\".to_owned(), true),\n                (\"/testing/b/3\".to_owned(), true),\n                (\"/testing/a/1/11\".to_owned(), true),\n                (\"/testing/a/1/12\".to_owned(), true),\n                (\"/testing/a/2/11\".to_owned(), true),\n            ]),\n        }\n    }\n\n    struct TestTreeManager {\n        data: BTreeMap<String, bool>,\n    }\n\n    impl TreeManager<String, String, Error> for TestTreeManager {\n        fn to_value(&self, v: String) -> Result<String, Error> {\n            Ok(v)\n        }\n\n        fn get_children(&self, n: &String) -> Result<Vec<String>, Error> {\n            Ok(self\n                .data\n                .keys()\n                .filter(|entry| {\n                    entry.len() > n.len()\n                        && entry.starts_with(n)\n                        && !entry[n.len() + 1..].contains('/')\n                })\n                .map(|entry| entry.to_owned())\n                .collect())\n        }\n\n        fn is_leaf(&self, n: &String) -> bool {\n            *self.data.get(n).unwrap()\n        }\n    }\n}\n"
  },
  {
    "path": "native/hdfs/Cargo.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# This is an optional HDFS crate\n# To build it from root is required to provide a valid JAVA_HOME\n# and enable `hdfs` feature\n# Example: JAVA_HOME=\"/opt/homebrew/opt/openjdk@11\" cargo build --features=hdfs\n\n[package]\nname = \"datafusion-comet-objectstore-hdfs\"\ndescription = \"Comet HDFS integration\"\nversion = { workspace = true }\nhomepage = { workspace = true }\nrepository = { workspace = true }\nauthors = { workspace = true }\nreadme = { workspace = true }\nlicense = { workspace = true }\nedition = { workspace = true }\n\n[features]\ndefault = [\"hdfs\", \"try_spawn_blocking\"]\nhdfs = [\"fs_hdfs\"]\nhdfs3 = [\"fs-hdfs3\"]\n# Used for trying to spawn a blocking thread for implementing each object store interface when running in a tokio runtime\ntry_spawn_blocking = []\n\n[dependencies]\nasync-trait = { workspace = true }\nbytes = { workspace = true }\nchrono = { workspace = true }\nfs_hdfs = { package = \"datafusion-comet-fs-hdfs3\", path = \"../fs-hdfs\", optional = true }\nfs-hdfs3 = { version = \"^0.1.12\", optional = true }\nfutures = { workspace = true }\nobject_store = { workspace = true }\ntokio = { version = \"1\", features = [\"macros\", \"rt\", \"rt-multi-thread\", \"sync\", \"parking_lot\"] }\n\n[package.metadata.cargo-machete]\nignored = [\"fs-hdfs\", \"fs-hdfs3\"]"
  },
  {
    "path": "native/hdfs/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Apache DataFusion Comet: HDFS integration\n\nThis crate contains the HDFS cluster integration\nand is intended to be used as part of the Apache DataFusion Comet project\n\nThe HDFS access powered by [fs-hdfs](https://github.com/datafusion-contrib/fs-hdfs).\nThe crate provides `object_store` implementation leveraged by Rust FFI APIs for the `libhdfs` which can be compiled\nby a set of C files provided by the [official Hadoop Community](https://github.com/apache/hadoop).\n\n# Supported HDFS versions\n\nCurrently supported Apache Hadoop clients are:\n\n- 2.\\*\n- 3.\\*\n"
  },
  {
    "path": "native/hdfs/src/lib.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! HDFS as a remote ObjectStore for [Datafusion](https://github.com/apache/datafusion).\n//!\n//! This crate introduces ``HadoopFileSystem`` as a remote ObjectStore which provides the ability of querying on HDFS files.\n//!\n//! For the HDFS access, We leverage the library [fs-hdfs](https://github.com/datafusion-contrib/fs-hdfs).\n//! Basically, the library only provides Rust FFI APIs for the ``libhdfs`` which can be compiled by a set of C files provided by the [official Hadoop Community](https://github.com/apache/hadoop).\npub mod object_store;\n"
  },
  {
    "path": "native/hdfs/src/object_store/hdfs.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Object store that represents the HDFS File System.\n\nuse std::collections::{BTreeSet, VecDeque};\nuse std::fmt::{Display, Formatter};\nuse std::ops::Range;\nuse std::path::PathBuf;\nuse std::sync::Arc;\n\nuse async_trait::async_trait;\nuse bytes::Bytes;\nuse chrono::{DateTime, Utc};\nuse fs_hdfs::hdfs::{get_hdfs_by_full_path, FileStatus, HdfsErr, HdfsFile, HdfsFs};\nuse fs_hdfs::walkdir::HdfsWalkDir;\nuse futures::{stream::BoxStream, StreamExt, TryStreamExt};\nuse object_store::{\n    path::{self, Path},\n    CopyMode, CopyOptions, Error, GetOptions, GetRange, GetResult, GetResultPayload, ListResult,\n    MultipartUpload, ObjectMeta, ObjectStore, PutMultipartOptions, PutOptions, PutPayload,\n    PutResult, Result,\n};\n\n/// scheme for HDFS File System\npub static HDFS_SCHEME: &str = \"hdfs\";\n/// scheme for HDFS Federation File System\npub static VIEWFS_SCHEME: &str = \"viewfs\";\n\n#[derive(Debug)]\n/// Hadoop File System as Object Store.\npub struct HadoopFileSystem {\n    hdfs: Arc<HdfsFs>,\n}\n\nimpl Default for HadoopFileSystem {\n    fn default() -> Self {\n        Self {\n            hdfs: get_hdfs_by_full_path(\"default\").expect(\"Fail to get default HdfsFs\"),\n        }\n    }\n}\n\nimpl HadoopFileSystem {\n    /// Get HDFS from the full path, like hdfs://localhost:8020/xxx/xxx\n    pub fn new(full_path: &str) -> Option<Self> {\n        get_hdfs_by_full_path(full_path)\n            .map(|hdfs| Some(Self { hdfs }))\n            .unwrap_or(None)\n    }\n\n    /// Return filesystem path of the given location\n    fn path_to_filesystem(location: &Path) -> String {\n        format!(\"/{}\", location.as_ref())\n    }\n\n    pub fn get_path_root(&self) -> String {\n        self.hdfs.url().to_owned()\n    }\n\n    pub fn get_path(&self, full_path: &str) -> Path {\n        get_path(full_path, self.hdfs.url())\n    }\n\n    pub fn get_hdfs_host(&self) -> String {\n        let hdfs_url = self.hdfs.url();\n        if hdfs_url.starts_with(HDFS_SCHEME) {\n            hdfs_url[7..].to_owned()\n        } else if hdfs_url.starts_with(VIEWFS_SCHEME) {\n            hdfs_url[9..].to_owned()\n        } else {\n            \"\".to_owned()\n        }\n    }\n\n    fn read_range(range: &Range<u64>, file: &HdfsFile) -> Result<Bytes> {\n        let to_read = (range.end - range.start) as usize;\n        let mut total_read = 0;\n        let mut buf = vec![0; to_read];\n        while total_read < to_read {\n            let read = file\n                .read_with_pos(\n                    (range.start + total_read as u64) as i64,\n                    buf[total_read..].as_mut(),\n                )\n                .map_err(to_error)?;\n            if read <= 0 {\n                
break;\n            }\n            total_read += read as usize;\n        }\n\n        if total_read != to_read {\n            return Err(Error::Generic {\n                store: \"HadoopFileSystem\",\n                source: Box::new(HdfsErr::Generic(format!(\n                    \"Error reading path {} at position {} with expected size {} and actual size {}\",\n                    file.path(),\n                    range.start,\n                    to_read,\n                    total_read\n                ))),\n            });\n        }\n        Ok(buf.into())\n    }\n}\n\nimpl Display for HadoopFileSystem {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"HadoopFileSystem\")\n    }\n}\n\n#[async_trait]\nimpl ObjectStore for HadoopFileSystem {\n    async fn put_opts(\n        &self,\n        _location: &Path,\n        _bytes: PutPayload,\n        _opts: PutOptions,\n    ) -> Result<PutResult> {\n        todo!()\n    }\n\n    async fn put_multipart_opts(\n        &self,\n        _location: &Path,\n        _opts: PutMultipartOptions,\n    ) -> object_store::Result<Box<dyn MultipartUpload>> {\n        unimplemented!()\n    }\n\n    async fn get_opts(&self, location: &Path, options: GetOptions) -> Result<GetResult> {\n        if options.if_match.is_some() || options.if_none_match.is_some() {\n            return Err(Error::Generic {\n                store: \"HadoopFileSystem\",\n                source: Box::new(HdfsErr::Generic(\"ETags not supported\".to_string())),\n            });\n        }\n\n        let hdfs = self.hdfs.clone();\n        let hdfs_root = self.hdfs.url().to_owned();\n        let location = HadoopFileSystem::path_to_filesystem(location);\n\n        let (blob, object_metadata, range) = maybe_spawn_blocking(move || {\n            let file = hdfs.open(&location).map_err(to_error)?;\n\n            let file_status = file.get_file_status().map_err(to_error)?;\n\n            if options.if_unmodified_since.is_some() || options.if_modified_since.is_some() {\n                check_modified(&options, &location, last_modified(&file_status))?;\n            }\n\n            let range = match options.range {\n                Some(GetRange::Bounded(range)) => range,\n                _ => Range {\n                    start: 0,\n                    end: file_status.len() as u64,\n                },\n            };\n\n            let buf = Self::read_range(&range, &file)?;\n\n            file.close().map_err(to_error)?;\n\n            let object_meta = convert_metadata(file_status, &hdfs_root);\n\n            Ok((buf, object_meta, range))\n        })\n        .await?;\n\n        Ok(GetResult {\n            payload: GetResultPayload::Stream(\n                futures::stream::once(async move { Ok(blob) }).boxed(),\n            ),\n            meta: object_metadata,\n            range,\n            attributes: Default::default(),\n        })\n    }\n\n    async fn get_ranges(&self, location: &Path, ranges: &[Range<u64>]) -> Result<Vec<Bytes>> {\n        let hdfs = self.hdfs.clone();\n        let location = HadoopFileSystem::path_to_filesystem(location);\n        let ranges = ranges.to_vec();\n\n        maybe_spawn_blocking(move || {\n            let file = hdfs.open(&location).map_err(to_error)?;\n            let result = ranges\n                .iter()\n                .map(|range| Self::read_range(range, &file))\n                .collect::<Result<Vec<_>>>()?;\n            file.close().map_err(to_error)?;\n            Ok(result)\n        })\n        .await\n   
 }\n\n    fn delete_stream(\n        &self,\n        locations: BoxStream<'static, Result<Path>>,\n    ) -> BoxStream<'static, Result<Path>> {\n        let hdfs = self.hdfs.clone();\n        locations\n            .map(move |location| {\n                let hdfs = hdfs.clone();\n                maybe_spawn_blocking(move || {\n                    let location = location?;\n                    let fs_path = HadoopFileSystem::path_to_filesystem(&location);\n                    hdfs.delete(&fs_path, false).map_err(to_error)?;\n                    Ok(location)\n                })\n            })\n            .buffered(10)\n            .boxed()\n    }\n\n    /// List all of the leaf files under the prefix path.\n    /// It will recursively search leaf files whose depth is larger than 1\n    fn list(&self, prefix: Option<&Path>) -> BoxStream<'static, Result<ObjectMeta>> {\n        let default_path = Path::from(self.get_path_root());\n        let prefix = prefix.unwrap_or(&default_path);\n        let hdfs = self.hdfs.clone();\n        let hdfs_root = self.hdfs.url().to_owned();\n        let walkdir =\n            HdfsWalkDir::new_with_hdfs(HadoopFileSystem::path_to_filesystem(prefix), hdfs)\n                .min_depth(1);\n\n        let s =\n            walkdir.into_iter().flat_map(move |result_dir_entry| {\n                match convert_walkdir_result(result_dir_entry) {\n                    Err(e) => Some(Err(e)),\n                    Ok(None) => None,\n                    Ok(entry @ Some(_)) => entry\n                        .filter(|dir_entry| dir_entry.is_file())\n                        .map(|entry| Ok(convert_metadata(entry, &hdfs_root))),\n                }\n            });\n\n        // If no tokio context, return iterator directly as no\n        // need to perform chunked spawn_blocking reads\n        if tokio::runtime::Handle::try_current().is_err() {\n            return futures::stream::iter(s).boxed();\n        }\n\n        // Otherwise list in batches of CHUNK_SIZE\n        const CHUNK_SIZE: usize = 1024;\n\n        let buffer = VecDeque::with_capacity(CHUNK_SIZE);\n        let stream = futures::stream::try_unfold((s, buffer), |(mut s, mut buffer)| async move {\n            if buffer.is_empty() {\n                (s, buffer) = tokio::task::spawn_blocking(move || {\n                    for _ in 0..CHUNK_SIZE {\n                        match s.next() {\n                            Some(r) => buffer.push_back(r),\n                            None => break,\n                        }\n                    }\n                    (s, buffer)\n                })\n                .await?;\n            }\n\n            match buffer.pop_front() {\n                Some(Err(e)) => Err(e),\n                Some(Ok(meta)) => Ok(Some((meta, (s, buffer)))),\n                None => Ok(None),\n            }\n        });\n\n        stream.boxed()\n    }\n\n    /// List files and directories directly under the prefix path.\n    /// It will not recursively search leaf files whose depth is larger than 1\n    async fn list_with_delimiter(&self, prefix: Option<&Path>) -> Result<ListResult> {\n        let default_path = Path::from(self.get_path_root());\n        let prefix = prefix.unwrap_or(&default_path);\n        let hdfs = self.hdfs.clone();\n        let hdfs_root = self.hdfs.url().to_owned();\n        let walkdir =\n            HdfsWalkDir::new_with_hdfs(HadoopFileSystem::path_to_filesystem(prefix), hdfs)\n                .min_depth(1)\n                .max_depth(1);\n\n        let prefix = 
prefix.clone();\n        maybe_spawn_blocking(move || {\n            let mut common_prefixes = BTreeSet::new();\n            let mut objects = Vec::new();\n\n            for entry_res in walkdir.into_iter().map(convert_walkdir_result) {\n                if let Some(entry) = entry_res? {\n                    let is_directory = entry.is_directory();\n                    let entry_location = get_path(entry.name(), &hdfs_root);\n\n                    let mut parts = match entry_location.prefix_match(&prefix) {\n                        Some(parts) => parts,\n                        None => continue,\n                    };\n\n                    let common_prefix = match parts.next() {\n                        Some(p) => p,\n                        None => continue,\n                    };\n\n                    drop(parts);\n\n                    if is_directory {\n                        common_prefixes.insert(prefix.clone().join(common_prefix));\n                    } else {\n                        objects.push(convert_metadata(entry, &hdfs_root));\n                    }\n                }\n            }\n\n            Ok(ListResult {\n                common_prefixes: common_prefixes.into_iter().collect(),\n                objects,\n            })\n        })\n        .await\n    }\n\n    async fn copy_opts(&self, from: &Path, to: &Path, options: CopyOptions) -> Result<()> {\n        let hdfs = self.hdfs.clone();\n        let from = HadoopFileSystem::path_to_filesystem(from);\n        let to = HadoopFileSystem::path_to_filesystem(to);\n\n        maybe_spawn_blocking(move || {\n            if !hdfs.exist(&from) {\n                return Err(Error::NotFound {\n                    path: from.clone(),\n                    source: Box::new(HdfsErr::FileNotFound(from)),\n                });\n            }\n\n            match options.mode {\n                CopyMode::Overwrite => {\n                    if hdfs.exist(&to) {\n                        hdfs.delete(&to, false).map_err(to_error)?;\n                    }\n                }\n                CopyMode::Create => {\n                    if hdfs.exist(&to) {\n                        return Err(Error::AlreadyExists {\n                            path: from,\n                            source: Box::new(HdfsErr::FileAlreadyExists(to)),\n                        });\n                    }\n                }\n            }\n\n            fs_hdfs::util::HdfsUtil::copy(hdfs.as_ref(), &from, hdfs.as_ref(), &to)\n                .map_err(to_error)?;\n\n            Ok(())\n        })\n        .await\n    }\n}\n\n/// Create Path without prefix\npub fn get_path(full_path: &str, prefix: &str) -> Path {\n    let partial_path = &full_path[prefix.len()..];\n    Path::parse(partial_path).unwrap()\n}\n\n/// Convert HDFS file status to ObjectMeta\npub fn convert_metadata(file: FileStatus, prefix: &str) -> ObjectMeta {\n    ObjectMeta {\n        location: get_path(file.name(), prefix),\n        last_modified: last_modified(&file),\n        size: file.len() as u64,\n        e_tag: None,\n        version: None,\n    }\n}\n\nfn last_modified(file: &FileStatus) -> DateTime<Utc> {\n    DateTime::from_timestamp(file.last_modified(), 0).unwrap()\n}\n\nfn check_modified(\n    get_options: &GetOptions,\n    location: &str,\n    last_modified: DateTime<Utc>,\n) -> Result<()> {\n    if let Some(date) = get_options.if_modified_since {\n        if last_modified <= date {\n            return Err(Error::NotModified {\n                path: location.to_string(),\n                
source: format!(\"{date} >= {last_modified}\").into(),\n            });\n        }\n    }\n\n    if let Some(date) = get_options.if_unmodified_since {\n        if last_modified > date {\n            return Err(Error::Precondition {\n                path: location.to_string(),\n                source: format!(\"{date} < {last_modified}\").into(),\n            });\n        }\n    }\n    Ok(())\n}\n\n/// Convert walkdir results and converts not-found errors into `None`.\nfn convert_walkdir_result(\n    res: std::result::Result<FileStatus, HdfsErr>,\n) -> Result<Option<FileStatus>> {\n    match res {\n        Ok(entry) => Ok(Some(entry)),\n        Err(walkdir_err) => match walkdir_err {\n            HdfsErr::FileNotFound(_) => Ok(None),\n            _ => Err(to_error(HdfsErr::Generic(\n                \"Fail to walk hdfs directory\".to_owned(),\n            ))),\n        },\n    }\n}\n\n/// Range requests with a gap less than or equal to this,\n/// will be coalesced into a single request by [`coalesce_ranges`]\npub const HDFS_COALESCE_DEFAULT: u64 = 1024 * 1024;\n\n/// Up to this number of range requests will be performed in parallel by [`coalesce_ranges`]\npub const OBJECT_STORE_COALESCE_PARALLEL: usize = 10;\n\n/// Takes a function to fetch ranges and coalesces adjacent ranges if they are\n/// less than `coalesce` bytes apart.\npub async fn coalesce_ranges<F, Fut>(\n    ranges: &[Range<u64>],\n    fetch: F,\n    coalesce: u64,\n) -> Result<Vec<Bytes>>\nwhere\n    F: FnMut(Range<u64>) -> Fut,\n    Fut: std::future::Future<Output = Result<Bytes>>,\n{\n    let fetch_ranges = merge_ranges(ranges, coalesce);\n\n    let fetched: Vec<_> = futures::stream::iter(fetch_ranges.iter().cloned())\n        .map(fetch)\n        .buffered(OBJECT_STORE_COALESCE_PARALLEL)\n        .try_collect()\n        .await?;\n\n    Ok(ranges\n        .iter()\n        .map(|range| {\n            let idx = fetch_ranges.partition_point(|v| v.start <= range.start) - 1;\n            let fetch_range = &fetch_ranges[idx];\n            let fetch_bytes = &fetched[idx];\n\n            let start = (range.start - fetch_range.start) as usize;\n            let end = (range.end - fetch_range.start) as usize;\n            fetch_bytes.slice(start..end)\n        })\n        .collect())\n}\n\n/// Takes a function and spawns it to a tokio blocking pool if available\npub async fn maybe_spawn_blocking<F, T>(f: F) -> Result<T>\nwhere\n    F: FnOnce() -> Result<T> + Send + 'static,\n    T: Send + 'static,\n{\n    #[cfg(feature = \"try_spawn_blocking\")]\n    match tokio::runtime::Handle::try_current() {\n        Ok(runtime) => runtime.spawn_blocking(f).await?,\n        Err(_) => f(),\n    }\n\n    #[cfg(not(feature = \"try_spawn_blocking\"))]\n    f()\n}\n\n/// Returns a sorted list of ranges that cover `ranges`\nfn merge_ranges(ranges: &[Range<u64>], coalesce: u64) -> Vec<Range<u64>> {\n    if ranges.is_empty() {\n        return vec![];\n    }\n\n    let mut ranges = ranges.to_vec();\n    ranges.sort_unstable_by_key(|range| range.start);\n\n    let mut ret = Vec::with_capacity(ranges.len());\n    let mut start_idx = 0;\n    let mut end_idx = 1;\n\n    while start_idx != ranges.len() {\n        let mut range_end = ranges[start_idx].end;\n\n        while end_idx != ranges.len()\n            && ranges[end_idx]\n                .start\n                .checked_sub(range_end)\n                .map(|delta| delta <= coalesce)\n                .unwrap_or(true)\n        {\n            range_end = range_end.max(ranges[end_idx].end);\n            end_idx 
+= 1;\n        }\n\n        let start = ranges[start_idx].start;\n        let end = range_end;\n        ret.push(start..end);\n\n        start_idx = end_idx;\n        end_idx += 1;\n    }\n\n    ret\n}\n\nfn to_error(err: HdfsErr) -> Error {\n    match err {\n        HdfsErr::FileNotFound(path) => Error::NotFound {\n            path: path.clone(),\n            source: Box::new(HdfsErr::FileNotFound(path)),\n        },\n        HdfsErr::FileAlreadyExists(path) => Error::AlreadyExists {\n            path: path.clone(),\n            source: Box::new(HdfsErr::FileAlreadyExists(path)),\n        },\n        HdfsErr::InvalidUrl(path) => Error::InvalidPath {\n            source: path::Error::InvalidPath {\n                path: PathBuf::from(path),\n            },\n        },\n        HdfsErr::CannotConnectToNameNode(namenode_uri) => Error::Generic {\n            store: \"HadoopFileSystem\",\n            source: Box::new(HdfsErr::CannotConnectToNameNode(namenode_uri)),\n        },\n        HdfsErr::Generic(err_str) => Error::Generic {\n            store: \"HadoopFileSystem\",\n            source: Box::new(HdfsErr::Generic(err_str)),\n        },\n    }\n}\n\n#[allow(clippy::single_range_in_vec_init)]\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use std::ops::Range;\n\n    #[tokio::test]\n    async fn test_coalesce_ranges() {\n        let do_fetch = |ranges: Vec<Range<u64>>, coalesce: u64| async move {\n            let max = ranges.iter().map(|x| x.end).max().unwrap_or(0);\n            let src: Vec<_> = (0..max).map(|x| x as u8).collect();\n\n            let mut fetches = vec![];\n            let coalesced = coalesce_ranges(\n                &ranges,\n                |range| {\n                    fetches.push(range.clone());\n                    let range = Range {\n                        start: range.start as usize,\n                        end: range.end as usize,\n                    };\n                    futures::future::ready(Ok(Bytes::from(src[range].to_vec())))\n                },\n                coalesce,\n            )\n            .await\n            .unwrap();\n\n            assert_eq!(ranges.len(), coalesced.len());\n            for (range, bytes) in ranges.iter().zip(coalesced) {\n                let range = Range {\n                    start: range.start as usize,\n                    end: range.end as usize,\n                };\n                assert_eq!(bytes.as_ref(), &src[range.clone()]);\n            }\n            fetches\n        };\n\n        let fetches = do_fetch(vec![], 0).await;\n        assert_eq!(fetches, vec![]);\n\n        let fetches = do_fetch(vec![0..3], 0).await;\n        assert_eq!(fetches, vec![0..3]);\n\n        let fetches = do_fetch(vec![0..2, 3..5], 0).await;\n        assert_eq!(fetches, vec![0..2, 3..5]);\n\n        let fetches = do_fetch(vec![0..1, 1..2], 0).await;\n        assert_eq!(fetches, vec![0..2]);\n\n        let fetches = do_fetch(vec![0..1, 2..72], 1).await;\n        assert_eq!(fetches, vec![0..72]);\n\n        let fetches = do_fetch(vec![0..1, 56..72, 73..75], 1).await;\n        assert_eq!(fetches, vec![0..1, 56..75]);\n\n        let fetches = do_fetch(vec![0..1, 5..6, 7..9, 4..6], 1).await;\n        assert_eq!(fetches, vec![0..1, 4..9]);\n    }\n}\n"
  },
  {
    "path": "native/hdfs/src/object_store/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub mod hdfs;\n"
  },
  {
    "path": "native/jni-bridge/Cargo.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[package]\nname = \"datafusion-comet-jni-bridge\"\nversion = { workspace = true }\nhomepage = \"https://datafusion.apache.org/comet\"\nrepository = \"https://github.com/apache/datafusion-comet\"\nauthors = [\"Apache DataFusion <dev@datafusion.apache.org>\"]\ndescription = \"Apache DataFusion Comet: JNI bridge\"\nreadme = \"README.md\"\nlicense = \"Apache-2.0\"\nedition = \"2021\"\n\npublish = false\n\n[dependencies]\narrow = { workspace = true }\nparquet = { workspace = true }\ndatafusion = { workspace = true }\njni = \"0.22.4\"\nthiserror = { workspace = true }\nregex = { workspace = true }\nlazy_static = \"1.4.0\"\nonce_cell = \"1.18.0\"\npaste = \"1.0.14\"\nprost = \"0.14.3\"\ndatafusion-comet-common = { workspace = true }\n\n[dev-dependencies]\njni = { version = \"0.22.4\", features = [\"invocation\"] }\nassertables = \"9\"\n"
  },
  {
    "path": "native/jni-bridge/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# datafusion-comet-jni-bridge: JNI Bridge\n\nThis crate provides the JNI interaction layer for Apache DataFusion Comet and is maintained as part of the\n[Apache DataFusion Comet] subproject.\n\n[Apache DataFusion Comet]: https://github.com/apache/datafusion-comet/\n"
  },
  {
    "path": "native/jni-bridge/src/batch_iterator.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse jni::signature::Primitive;\nuse jni::{\n    errors::Result as JniResult,\n    objects::{JClass, JMethodID},\n    signature::ReturnType,\n    strings::JNIString,\n    Env,\n};\n\n/// A struct that holds all the JNI methods and fields for JVM `CometBatchIterator` class.\n#[allow(dead_code)] // we need to keep references to Java items to prevent GC\npub struct CometBatchIterator<'a> {\n    pub class: JClass<'a>,\n    pub method_has_next: JMethodID,\n    pub method_has_next_ret: ReturnType,\n    pub method_next: JMethodID,\n    pub method_next_ret: ReturnType,\n    pub method_has_selection_vectors: JMethodID,\n    pub method_has_selection_vectors_ret: ReturnType,\n    pub method_export_selection_indices: JMethodID,\n    pub method_export_selection_indices_ret: ReturnType,\n}\n\nimpl<'a> CometBatchIterator<'a> {\n    pub const JVM_CLASS: &'static str = \"org/apache/comet/CometBatchIterator\";\n\n    pub fn new(env: &mut Env<'a>) -> JniResult<CometBatchIterator<'a>> {\n        let class = env.find_class(JNIString::new(Self::JVM_CLASS))?;\n\n        Ok(CometBatchIterator {\n            class,\n            method_has_next: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"hasNext\"),\n                jni::jni_sig!(\"()I\"),\n            )?,\n            method_has_next_ret: ReturnType::Primitive(Primitive::Int),\n            method_next: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"next\"),\n                jni::jni_sig!(\"([J[J)I\"),\n            )?,\n            method_next_ret: ReturnType::Primitive(Primitive::Int),\n            method_has_selection_vectors: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"hasSelectionVectors\"),\n                jni::jni_sig!(\"()Z\"),\n            )?,\n            method_has_selection_vectors_ret: ReturnType::Primitive(Primitive::Boolean),\n            method_export_selection_indices: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"exportSelectionIndices\"),\n                jni::jni_sig!(\"([J[J)I\"),\n            )?,\n            method_export_selection_indices_ret: ReturnType::Primitive(Primitive::Int),\n        })\n    }\n}\n"
  },
  {
    "path": "native/jni-bridge/src/comet_exec.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse jni::{\n    errors::Result as JniResult,\n    objects::{JClass, JStaticMethodID},\n    signature::{Primitive, ReturnType},\n    strings::JNIString,\n    Env,\n};\n\n/// A struct that holds all the JNI methods and fields for JVM CometExec object.\npub struct CometExec<'a> {\n    pub class: JClass<'a>,\n    pub method_get_bool: JStaticMethodID,\n    pub method_get_bool_ret: ReturnType,\n    pub method_get_byte: JStaticMethodID,\n    pub method_get_byte_ret: ReturnType,\n    pub method_get_short: JStaticMethodID,\n    pub method_get_short_ret: ReturnType,\n    pub method_get_int: JStaticMethodID,\n    pub method_get_int_ret: ReturnType,\n    pub method_get_long: JStaticMethodID,\n    pub method_get_long_ret: ReturnType,\n    pub method_get_float: JStaticMethodID,\n    pub method_get_float_ret: ReturnType,\n    pub method_get_double: JStaticMethodID,\n    pub method_get_double_ret: ReturnType,\n    pub method_get_decimal: JStaticMethodID,\n    pub method_get_decimal_ret: ReturnType,\n    pub method_get_string: JStaticMethodID,\n    pub method_get_string_ret: ReturnType,\n    pub method_get_binary: JStaticMethodID,\n    pub method_get_binary_ret: ReturnType,\n    pub method_is_null: JStaticMethodID,\n    pub method_is_null_ret: ReturnType,\n}\n\nimpl<'a> CometExec<'a> {\n    pub const JVM_CLASS: &'static str = \"org/apache/spark/sql/comet/CometScalarSubquery\";\n\n    pub fn new(env: &mut Env<'a>) -> JniResult<CometExec<'a>> {\n        let class = env.find_class(JNIString::new(Self::JVM_CLASS))?;\n\n        Ok(CometExec {\n            method_get_bool: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getBoolean\"),\n                jni::jni_sig!(\"(JJ)Z\"),\n            )?,\n            method_get_bool_ret: ReturnType::Primitive(Primitive::Boolean),\n            method_get_byte: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getByte\"),\n                jni::jni_sig!(\"(JJ)B\"),\n            )?,\n            method_get_byte_ret: ReturnType::Primitive(Primitive::Byte),\n            method_get_short: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getShort\"),\n                jni::jni_sig!(\"(JJ)S\"),\n            )?,\n            method_get_short_ret: ReturnType::Primitive(Primitive::Short),\n            method_get_int: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getInt\"),\n                jni::jni_sig!(\"(JJ)I\"),\n            )?,\n            method_get_int_ret: ReturnType::Primitive(Primitive::Int),\n            
method_get_long: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getLong\"),\n                jni::jni_sig!(\"(JJ)J\"),\n            )?,\n            method_get_long_ret: ReturnType::Primitive(Primitive::Long),\n            method_get_float: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getFloat\"),\n                jni::jni_sig!(\"(JJ)F\"),\n            )?,\n            method_get_float_ret: ReturnType::Primitive(Primitive::Float),\n            method_get_double: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getDouble\"),\n                jni::jni_sig!(\"(JJ)D\"),\n            )?,\n            method_get_double_ret: ReturnType::Primitive(Primitive::Double),\n            method_get_decimal: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getDecimal\"),\n                jni::jni_sig!(\"(JJ)[B\"),\n            )?,\n            method_get_decimal_ret: ReturnType::Array,\n            method_get_string: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getString\"),\n                jni::jni_sig!(\"(JJ)Ljava/lang/String;\"),\n            )?,\n            method_get_string_ret: ReturnType::Object,\n            method_get_binary: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getBinary\"),\n                jni::jni_sig!(\"(JJ)[B\"),\n            )?,\n            method_get_binary_ret: ReturnType::Array,\n            method_is_null: env.get_static_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"isNull\"),\n                jni::jni_sig!(\"(JJ)Z\"),\n            )?,\n            method_is_null_ret: ReturnType::Primitive(Primitive::Boolean),\n            class,\n        })\n    }\n}\n"
  },
  {
    "path": "native/jni-bridge/src/comet_metric_node.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse jni::{\n    errors::Result as JniResult,\n    objects::{JClass, JMethodID},\n    signature::RuntimeMethodSignature,\n    signature::{Primitive, ReturnType},\n    strings::JNIString,\n    Env,\n};\n\n/// A struct that holds all the JNI methods and fields for JVM CometMetricNode class.\n#[allow(dead_code)] // we need to keep references to Java items to prevent GC\npub struct CometMetricNode<'a> {\n    pub class: JClass<'a>,\n    pub method_get_child_node: JMethodID,\n    pub method_get_child_node_ret: ReturnType,\n    pub method_set: JMethodID,\n    pub method_set_ret: ReturnType,\n    pub method_set_all_from_bytes: JMethodID,\n    pub method_set_all_from_bytes_ret: ReturnType,\n}\n\nimpl<'a> CometMetricNode<'a> {\n    pub const JVM_CLASS: &'static str = \"org/apache/spark/sql/comet/CometMetricNode\";\n\n    pub fn new(env: &mut Env<'a>) -> JniResult<CometMetricNode<'a>> {\n        let class = env.find_class(JNIString::new(Self::JVM_CLASS))?;\n        let get_child_node_sig =\n            RuntimeMethodSignature::from_str(format!(\"(I)L{};\", Self::JVM_CLASS))?;\n\n        Ok(CometMetricNode {\n            method_get_child_node: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getChildNode\"),\n                get_child_node_sig.method_signature(),\n            )?,\n            method_get_child_node_ret: ReturnType::Object,\n            method_set: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"set\"),\n                jni::jni_sig!(\"(Ljava/lang/String;J)V\"),\n            )?,\n            method_set_ret: ReturnType::Primitive(Primitive::Void),\n            method_set_all_from_bytes: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"set_all_from_bytes\"),\n                jni::jni_sig!(\"([B)V\"),\n            )?,\n            method_set_all_from_bytes_ret: ReturnType::Primitive(Primitive::Void),\n            class,\n        })\n    }\n}\n"
  },
  {
    "path": "native/jni-bridge/src/comet_task_memory_manager.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse jni::{\n    errors::Result as JniResult,\n    objects::{JClass, JMethodID},\n    signature::{Primitive, ReturnType},\n    strings::JNIString,\n    Env,\n};\n\n/// A wrapper which delegate acquire/release memory calls to the\n/// JVM side `CometTaskMemoryManager`.\n#[derive(Debug)]\n#[allow(dead_code)] // we need to keep references to Java items to prevent GC\npub struct CometTaskMemoryManager<'a> {\n    pub class: JClass<'a>,\n    pub method_acquire_memory: JMethodID,\n    pub method_release_memory: JMethodID,\n\n    pub method_acquire_memory_ret: ReturnType,\n    pub method_release_memory_ret: ReturnType,\n}\n\nimpl<'a> CometTaskMemoryManager<'a> {\n    pub const JVM_CLASS: &'static str = \"org/apache/spark/CometTaskMemoryManager\";\n\n    pub fn new(env: &mut Env<'a>) -> JniResult<CometTaskMemoryManager<'a>> {\n        let class = env.find_class(JNIString::new(Self::JVM_CLASS))?;\n\n        let result = CometTaskMemoryManager {\n            class,\n            method_acquire_memory: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"acquireMemory\"),\n                jni::jni_sig!(\"(J)J\"),\n            )?,\n            method_release_memory: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"releaseMemory\"),\n                jni::jni_sig!(\"(J)V\"),\n            )?,\n            method_acquire_memory_ret: ReturnType::Primitive(Primitive::Long),\n            method_release_memory_ret: ReturnType::Primitive(Primitive::Void),\n        };\n        Ok(result)\n    }\n}\n"
  },
  {
    "path": "native/jni-bridge/src/errors.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Common Parquet errors and macros.\n\nuse arrow::error::ArrowError;\nuse datafusion::common::DataFusionError;\nuse datafusion_comet_common::{SparkError, SparkErrorWithContext};\nuse jni::errors::{Exception, ToException};\nuse regex::Regex;\n\nuse std::{\n    any::Any,\n    convert,\n    fmt::Write,\n    panic::UnwindSafe,\n    result, str,\n    str::Utf8Error,\n    sync::{Arc, Mutex},\n};\n\n// This is just a pointer. We'll be returning it from our function. We\n// can't return one of the objects with lifetime information because the\n// lifetime checker won't let us.\nuse jni::sys::{jboolean, jbyte, jchar, jdouble, jfloat, jint, jlong, jobject, jshort};\n\nuse jni::objects::{Global, JThrowable};\nuse jni::{strings::JNIString, Env, EnvUnowned, Outcome};\nuse lazy_static::lazy_static;\nuse parquet::errors::ParquetError;\nuse thiserror::Error;\n\nlazy_static! {\n    static ref PANIC_BACKTRACE: Arc<Mutex<Option<String>>> = Arc::new(Mutex::new(None));\n}\n\n/// Error returned during executing operators.\n#[derive(thiserror::Error, Debug)]\npub enum ExecutionError {\n    /// Simple error\n    #[allow(dead_code)]\n    #[error(\"General execution error with reason: {0}.\")]\n    GeneralError(String),\n\n    /// Error when deserializing an operator.\n    #[error(\"Fail to deserialize to native operator with reason: {0}.\")]\n    DeserializeError(String),\n\n    /// Error when processing Arrow array.\n    #[error(\"Fail to process Arrow array with reason: {0}.\")]\n    ArrowError(String),\n\n    /// DataFusion error\n    #[error(\"Error from DataFusion: {0}.\")]\n    DataFusionError(String),\n\n    #[error(\"{class}: {msg}\")]\n    JavaException {\n        class: String,\n        msg: String,\n        throwable: Global<JThrowable<'static>>,\n    },\n}\n\n#[derive(thiserror::Error, Debug)]\npub enum CometError {\n    #[error(\"Configuration Error: {0}\")]\n    Config(String),\n\n    #[error(\"{0}\")]\n    NullPointer(String),\n\n    #[error(\"Out of bounds{0}\")]\n    IndexOutOfBounds(usize),\n\n    #[error(\"Comet Internal Error: {0}\")]\n    Internal(String),\n\n    #[error(transparent)]\n    Arrow {\n        #[from]\n        source: ArrowError,\n    },\n\n    #[error(transparent)]\n    Parquet {\n        #[from]\n        source: ParquetError,\n    },\n\n    #[error(transparent)]\n    Expression {\n        #[from]\n        source: ExpressionError,\n    },\n\n    #[error(transparent)]\n    Execution {\n        #[from]\n        source: ExecutionError,\n    },\n\n    #[error(transparent)]\n    IO {\n        #[from]\n        source: std::io::Error,\n    },\n\n    #[error(transparent)]\n    NumberIntFormat {\n        #[from]\n        source: std::num::ParseIntError,\n    
},\n\n    #[error(transparent)]\n    NumberFloatFormat {\n        #[from]\n        source: std::num::ParseFloatError,\n    },\n    #[error(transparent)]\n    BoolFormat {\n        #[from]\n        source: std::str::ParseBoolError,\n    },\n    #[error(transparent)]\n    Format {\n        #[from]\n        source: Utf8Error,\n    },\n\n    #[error(transparent)]\n    JNI {\n        #[from]\n        source: jni::errors::Error,\n    },\n\n    #[error(\"{msg}\")]\n    Panic { msg: String },\n\n    #[error(\"{msg}\")]\n    DataFusion {\n        msg: String,\n        #[source]\n        source: DataFusionError,\n    },\n\n    /// Wraps a SparkError directly, allowing Comet to throw Spark-compatible exceptions\n    /// that Spark would return\n    #[error(transparent)]\n    Spark(SparkError),\n\n    #[error(\"{class}: {msg}\")]\n    JavaException {\n        class: String,\n        msg: String,\n        throwable: Global<JThrowable<'static>>,\n    },\n}\n\npub fn init() {\n    std::panic::set_hook(Box::new(|panic_info| {\n        // Log the panic message and location to stderr so it is visible in CI logs\n        // even if the exception message is lost crossing the FFI boundary\n        eprintln!(\"Comet native panic: {panic_info}\");\n        // Capture the backtrace for a panic\n        *PANIC_BACKTRACE.lock().unwrap() =\n            Some(std::backtrace::Backtrace::force_capture().to_string());\n    }));\n}\n\n/// Converts the results from `panic::catch_unwind` (e.g. a panic) to a `CometError`\nimpl convert::From<Box<dyn Any + Send>> for CometError {\n    fn from(e: Box<dyn Any + Send>) -> Self {\n        CometError::Panic {\n            msg: match e.downcast_ref::<&str>() {\n                Some(s) => s.to_string(),\n                None => match e.downcast_ref::<String>() {\n                    Some(msg) => msg.to_string(),\n                    None => \"unknown panic\".to_string(),\n                },\n            },\n        }\n    }\n}\n\nimpl From<DataFusionError> for CometError {\n    fn from(value: DataFusionError) -> Self {\n        CometError::DataFusion {\n            msg: value.message().to_string(),\n            source: value,\n        }\n    }\n}\n\nimpl From<CometError> for DataFusionError {\n    fn from(value: CometError) -> Self {\n        match value {\n            CometError::DataFusion { msg: _, source } => source,\n            _ => DataFusionError::Execution(value.to_string()),\n        }\n    }\n}\n\nimpl From<CometError> for ParquetError {\n    fn from(value: CometError) -> Self {\n        match value {\n            CometError::Parquet { source } => source,\n            _ => ParquetError::General(value.to_string()),\n        }\n    }\n}\n\nimpl From<CometError> for ExecutionError {\n    fn from(value: CometError) -> Self {\n        match value {\n            CometError::Execution { source } => source,\n            CometError::JavaException {\n                class,\n                msg,\n                throwable,\n            } => ExecutionError::JavaException {\n                class,\n                msg,\n                throwable,\n            },\n            _ => ExecutionError::GeneralError(value.to_string()),\n        }\n    }\n}\n\nimpl From<prost::DecodeError> for ExpressionError {\n    fn from(error: prost::DecodeError) -> ExpressionError {\n        ExpressionError::Deserialize(error.to_string())\n    }\n}\n\nimpl From<prost::UnknownEnumValue> for ExpressionError {\n    fn from(error: prost::UnknownEnumValue) -> ExpressionError {\n        
ExpressionError::Deserialize(error.to_string())\n    }\n}\n\nimpl From<prost::DecodeError> for ExecutionError {\n    fn from(error: prost::DecodeError) -> ExecutionError {\n        ExecutionError::DeserializeError(error.to_string())\n    }\n}\n\nimpl From<prost::UnknownEnumValue> for ExecutionError {\n    fn from(error: prost::UnknownEnumValue) -> ExecutionError {\n        ExecutionError::DeserializeError(error.to_string())\n    }\n}\n\nimpl From<ArrowError> for ExecutionError {\n    fn from(error: ArrowError) -> ExecutionError {\n        ExecutionError::ArrowError(error.to_string())\n    }\n}\n\nimpl From<ArrowError> for ExpressionError {\n    fn from(error: ArrowError) -> ExpressionError {\n        ExpressionError::ArrowError(error.to_string())\n    }\n}\n\nimpl From<ExpressionError> for ArrowError {\n    fn from(error: ExpressionError) -> ArrowError {\n        ArrowError::ComputeError(error.to_string())\n    }\n}\n\nimpl From<DataFusionError> for ExecutionError {\n    fn from(value: DataFusionError) -> Self {\n        ExecutionError::DataFusionError(value.message().to_string())\n    }\n}\n\nimpl From<ExecutionError> for DataFusionError {\n    fn from(value: ExecutionError) -> Self {\n        DataFusionError::Execution(value.to_string())\n    }\n}\n\nimpl From<ExpressionError> for DataFusionError {\n    fn from(value: ExpressionError) -> Self {\n        DataFusionError::Execution(value.to_string())\n    }\n}\n\nimpl jni::errors::ToException for CometError {\n    fn to_exception(&self) -> Exception {\n        match self {\n            CometError::IndexOutOfBounds(..) => Exception {\n                class: \"java/lang/IndexOutOfBoundsException\".to_string(),\n                msg: self.to_string(),\n            },\n            CometError::NullPointer(..) => Exception {\n                class: \"java/lang/NullPointerException\".to_string(),\n                msg: self.to_string(),\n            },\n            CometError::NumberIntFormat { source: s } => Exception {\n                class: \"java/lang/NumberFormatException\".to_string(),\n                msg: s.to_string(),\n            },\n            CometError::NumberFloatFormat { source: s } => Exception {\n                class: \"java/lang/NumberFormatException\".to_string(),\n                msg: s.to_string(),\n            },\n            CometError::IO { .. } => Exception {\n                class: \"java/io/IOException\".to_string(),\n                msg: self.to_string(),\n            },\n            CometError::Parquet { .. 
} => Exception {\n                class: \"org/apache/comet/ParquetRuntimeException\".to_string(),\n                msg: self.to_string(),\n            },\n            CometError::Spark(spark_err) => Exception {\n                class: spark_err.exception_class().to_string(),\n                msg: spark_err.to_string(),\n            },\n            _other => Exception {\n                class: \"org/apache/comet/CometNativeException\".to_string(),\n                msg: self.to_string(),\n            },\n        }\n    }\n}\n\n/// Error returned when there is an error during executing an expression.\n#[derive(thiserror::Error, Debug)]\npub enum ExpressionError {\n    /// Simple error\n    #[error(\"General expression error with reason {0}.\")]\n    General(String),\n\n    /// Deserialization error\n    #[error(\"Fail to deserialize to native expression with reason {0}.\")]\n    Deserialize(String),\n\n    /// Evaluation error\n    #[error(\"Fail to evaluate native expression with reason {0}.\")]\n    Evaluation(String),\n\n    /// Error when processing Arrow array.\n    #[error(\"Fail to process Arrow array with reason {0}.\")]\n    ArrowError(String),\n}\n\n/// A specialized `Result` for Comet errors.\npub type CometResult<T> = result::Result<T, CometError>;\n\n// ----------------------------------------------------------------------\n// Convenient macros for different errors\n\n#[macro_export]\nmacro_rules! general_err {\n    ($fmt:expr, $($args:expr),*) => ($crate::errors::CometError::from(parquet::errors::ParquetError::General(format!($fmt, $($args),*))));\n}\n\n/// Returns the \"default value\" for a type.  This is used for JNI code in order to facilitate\n/// returning a value in cases where an exception is thrown.  This value will never be used, as the\n/// JVM will note the pending exception.\n///\n/// Default values are often some kind of initial value, identity value, or anything else that\n/// may make sense as a default.\n///\n/// NOTE: We can't just use [Default] since both the trait and the object are defined in other\n/// crates.\n/// See [Rust Compiler Error Index - E0117](https://doc.rust-lang.org/error-index.html#E0117)\npub trait JNIDefault {\n    fn default() -> Self;\n}\n\nimpl JNIDefault for jboolean {\n    fn default() -> jboolean {\n        false\n    }\n}\n\nimpl JNIDefault for jbyte {\n    fn default() -> jbyte {\n        0\n    }\n}\n\nimpl JNIDefault for jchar {\n    fn default() -> jchar {\n        0\n    }\n}\n\nimpl JNIDefault for jdouble {\n    fn default() -> jdouble {\n        0.0\n    }\n}\n\nimpl JNIDefault for jfloat {\n    fn default() -> jfloat {\n        0.0\n    }\n}\n\nimpl JNIDefault for jint {\n    fn default() -> jint {\n        0\n    }\n}\n\nimpl JNIDefault for jlong {\n    fn default() -> jlong {\n        0\n    }\n}\n\n/// The \"default value\" for all returned objects, such as [jstring], [jlongArray], etc.\nimpl JNIDefault for jobject {\n    fn default() -> jobject {\n        std::ptr::null_mut()\n    }\n}\n\nimpl JNIDefault for jshort {\n    fn default() -> jshort {\n        0\n    }\n}\n\nimpl JNIDefault for () {\n    fn default() {}\n}\n\n// Unwrap the result returned from `panic::catch_unwind` when `Ok`, otherwise throw a\n// `RuntimeException` back to the calling Java.  Since a return result is required, use `JNIDefault`\n// to create a reasonable result.  
This returned default value will be ignored due to the exception.\npub fn unwrap_or_throw_default<T: JNIDefault>(\n    env: &mut Env,\n    result: std::result::Result<T, CometError>,\n) -> T {\n    match result {\n        Ok(value) => value,\n        Err(err) => {\n            let backtrace = match err {\n                CometError::Panic { msg: _ } => PANIC_BACKTRACE.lock().unwrap().take(),\n                _ => None,\n            };\n            throw_exception(env, &err, backtrace);\n            T::default()\n        }\n    }\n}\n\nfn throw_exception(env: &mut Env, error: &CometError, backtrace: Option<String>) {\n    // If there isn't already an exception?\n    if !env.exception_check() {\n        // ... then throw new exception\n        // Note: in jni 0.22.x, throw/throw_new return Err(JavaException) on success\n        // (to signal the pending exception to Rust callers via `?`). We discard the\n        // result here because we're in an error-handling path and just need the\n        // exception to be pending in the JVM.\n        let _ = match error {\n            CometError::JavaException {\n                class: _,\n                msg: _,\n                throwable,\n            } => env.throw(throwable),\n            CometError::Execution {\n                source:\n                    ExecutionError::JavaException {\n                        class: _,\n                        msg: _,\n                        throwable,\n                    },\n            } => env.throw(throwable),\n            // Handle DataFusion errors containing SparkError or SparkErrorWithContext\n            CometError::DataFusion {\n                msg: _,\n                source: DataFusionError::External(e),\n            } => {\n                if let Some(spark_error_with_ctx) = e.downcast_ref::<SparkErrorWithContext>() {\n                    let json_message = spark_error_with_ctx.to_json();\n                    env.throw_new(\n                        jni::jni_str!(\"org/apache/comet/exceptions/CometQueryExecutionException\"),\n                        JNIString::new(json_message),\n                    )\n                } else if let Some(spark_error) = e.downcast_ref::<SparkError>() {\n                    let json_message = spark_error.to_json();\n                    env.throw_new(\n                        jni::jni_str!(\"org/apache/comet/exceptions/CometQueryExecutionException\"),\n                        JNIString::new(json_message),\n                    )\n                } else {\n                    // Check for file-not-found errors from object store\n                    let error_msg = e.to_string();\n                    if error_msg.contains(\"not found\")\n                        && error_msg.contains(\"No such file or directory\")\n                    {\n                        let spark_error = SparkError::FileNotFound { message: error_msg };\n                        throw_spark_error_as_json(env, &spark_error)\n                    } else {\n                        // Not a SparkError, use generic exception\n                        let exception = error.to_exception();\n                        match backtrace {\n                            Some(backtrace_string) => env.throw_new(\n                                JNIString::new(exception.class),\n                                JNIString::new(\n                                    to_stacktrace_string(exception.msg, backtrace_string).unwrap(),\n                                ),\n                            ),\n                            
_ => env.throw_new(\n                                JNIString::new(exception.class),\n                                JNIString::new(exception.msg),\n                            ),\n                        }\n                    }\n                }\n            }\n            // Handle direct SparkError - serialize to JSON\n            CometError::Spark(spark_error) => throw_spark_error_as_json(env, spark_error),\n            _ => {\n                let error_msg = error.to_string();\n                // Check for file-not-found errors that may arrive through other wrapping paths\n                if error_msg.contains(\"not found\")\n                    && error_msg.contains(\"No such file or directory\")\n                {\n                    let spark_error = SparkError::FileNotFound { message: error_msg };\n                    throw_spark_error_as_json(env, &spark_error)\n                } else if let Some(spark_error) = try_convert_duplicate_field_error(&error_msg) {\n                    throw_spark_error_as_json(env, &spark_error)\n                } else {\n                    let exception = error.to_exception();\n                    match backtrace {\n                        Some(backtrace_string) => env.throw_new(\n                            JNIString::new(exception.class),\n                            JNIString::new(\n                                to_stacktrace_string(exception.msg, backtrace_string).unwrap(),\n                            ),\n                        ),\n                        _ => env.throw_new(\n                            JNIString::new(exception.class),\n                            JNIString::new(exception.msg),\n                        ),\n                    }\n                }\n            }\n        };\n    }\n}\n\n/// Throws a CometQueryExecutionException with JSON-encoded SparkError\nfn throw_spark_error_as_json(env: &mut Env, spark_error: &SparkError) -> jni::errors::Result<()> {\n    // Serialize error to JSON\n    let json_message = spark_error.to_json();\n\n    // Throw CometQueryExecutionException with JSON message\n    env.throw_new(\n        jni::jni_str!(\"org/apache/comet/exceptions/CometQueryExecutionException\"),\n        JNIString::new(json_message),\n    )\n}\n\n/// Try to convert a DataFusion \"Unable to get field named\" error into a SparkError.\n/// DataFusion produces this error when reading Parquet files with duplicate field names\n/// in case-insensitive mode. For example, if a Parquet file has columns \"B\" and \"b\",\n/// DataFusion may deduplicate them and report: Unable to get field named \"b\". Valid\n/// fields: [\"A\", \"B\"]. When the requested field has a case-insensitive match among the\n/// valid fields, we convert this to Spark's _LEGACY_ERROR_TEMP_2093 error.\nfn try_convert_duplicate_field_error(error_msg: &str) -> Option<SparkError> {\n    // Match: Schema error: Unable to get field named \"X\". Valid fields: [...]\n    lazy_static! {\n        static ref FIELD_RE: Regex =\n            Regex::new(r#\"Unable to get field named \"([^\"]+)\"\\. 
Valid fields: \\[(.+)\\]\"#).unwrap();\n    }\n    if let Some(caps) = FIELD_RE.captures(error_msg) {\n        let requested_field = caps.get(1)?.as_str();\n        let requested_lower = requested_field.to_lowercase();\n        // Parse field names from the Valid fields list: [\"A\", \"B\"] or [A, B, b]\n        let valid_fields_raw = caps.get(2)?.as_str();\n        let all_fields: Vec<String> = valid_fields_raw\n            .split(',')\n            .map(|s| s.trim().trim_matches('\"').to_string())\n            .collect();\n        // Find fields that match case-insensitively\n        let mut matched: Vec<String> = all_fields\n            .into_iter()\n            .filter(|f| f.to_lowercase() == requested_lower)\n            .collect();\n        // Need at least one case-insensitive match to treat this as a duplicate field error.\n        // DataFusion may deduplicate columns case-insensitively, so the valid fields list\n        // might contain only one variant (e.g. \"B\" when file has both \"B\" and \"b\").\n        // If requested field differs from the match, both existed in the original file.\n        if matched.is_empty() {\n            return None;\n        }\n        // Add the requested field name if it's not already in the list (different case)\n        if !matched.iter().any(|f| f == requested_field) {\n            matched.push(requested_field.to_string());\n        }\n        let required_field_name = requested_field.to_string();\n        let matched_fields = format!(\"[{}]\", matched.join(\", \"));\n        Some(SparkError::DuplicateFieldCaseInsensitive {\n            required_field_name,\n            matched_fields,\n        })\n    } else {\n        None\n    }\n}\n\n#[derive(Debug, Error)]\nenum StacktraceError {\n    #[error(\"Unable to initialize message: {0}\")]\n    Message(String),\n    #[error(\"Unable to initialize backtrace regex: {0}\")]\n    Regex(#[from] regex::Error),\n    #[error(\"Required field missing: {0}\")]\n    #[allow(non_camel_case_types)]\n    Required_Field(String),\n    #[error(\"Unable to format stacktrace element: {0}\")]\n    Element(#[from] std::fmt::Error),\n}\n\nfn to_stacktrace_string(msg: String, backtrace_string: String) -> Result<String, StacktraceError> {\n    let mut res = String::new();\n    write!(&mut res, \"{msg}\").map_err(|error| StacktraceError::Message(error.to_string()))?;\n\n    // Use multi-line mode and named capture groups to identify the following stacktrace fields:\n    // - dc = declaredClass\n    // - mn = methodName\n    // - fn = fileName (optional)\n    // - line = file line number (optional)\n    // - col = file col number within the line (optional)\n    let re = Regex::new(\n        r\"(?m)^\\s*\\d+: (?<dc>.*?)(?<mn>[^:]+)\\n(\\s*at\\s+(?<fn>[^:]+):(?<line>\\d+):(?<col>\\d+)$)?\",\n    )?;\n    for c in re.captures_iter(backtrace_string.as_str()) {\n        write!(\n            &mut res,\n            \"\\n        at {}{}({}:{})\",\n            c.name(\"dc\")\n                .ok_or_else(|| StacktraceError::Required_Field(\"declared class\".to_string()))?\n                .as_str(),\n            c.name(\"mn\")\n                .ok_or_else(|| StacktraceError::Required_Field(\"method name\".to_string()))?\n                .as_str(),\n            // There are internal calls within the backtrace that don't provide file information\n            c.name(\"fn\").map(|m| m.as_str()).unwrap_or(\"__internal__\"),\n            c.name(\"line\")\n                .map(|m| m.as_str().parse().expect(\"numeric line number\"))\n       
         .unwrap_or(0)\n        )?;\n    }\n\n    Ok(res)\n}\n\n// It is currently undefined behavior to unwind from Rust code into foreign code, so we can wrap\n// our JNI functions and turn these panics into a `RuntimeException`.\npub fn try_unwrap_or_throw<T, F>(env: &EnvUnowned, f: F) -> T\nwhere\n    T: JNIDefault,\n    F: FnOnce(&mut Env) -> Result<T, CometError> + UnwindSafe,\n{\n    let raw = env.as_raw();\n    let mut env1 = unsafe { EnvUnowned::from_raw(raw) };\n    match env1.with_env(f).into_outcome() {\n        Outcome::Ok(value) => value,\n        Outcome::Err(err) => {\n            let mut guard = unsafe { jni::AttachGuard::from_unowned(raw) };\n            unwrap_or_throw_default(guard.borrow_env_mut(), Err(err))\n        }\n        Outcome::Panic(payload) => {\n            let mut guard = unsafe { jni::AttachGuard::from_unowned(raw) };\n            unwrap_or_throw_default(guard.borrow_env_mut(), Err(CometError::from(payload)))\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use std::{\n        fs::File,\n        io,\n        io::Read,\n        path::PathBuf,\n        sync::{Arc, Once},\n    };\n\n    use jni::{\n        objects::{JClass, JIntArray, JString, JThrowable},\n        sys::{jintArray, jstring},\n        EnvUnowned, InitArgsBuilder, JNIVersion, JavaVM,\n    };\n\n    use assertables::assert_starts_with;\n\n    pub fn jvm() -> &'static Arc<JavaVM> {\n        static mut JVM: Option<Arc<JavaVM>> = None;\n        static INIT: Once = Once::new();\n\n        // Capture panic backtraces\n        init();\n\n        INIT.call_once(|| {\n            // Add common classes to the classpath so that we can find CometException\n            let mut common_classes = PathBuf::from(env!(\"CARGO_MANIFEST_DIR\"));\n            common_classes.push(\"../../common/target/classes\");\n            let mut class_path = common_classes\n                .as_path()\n                .to_str()\n                .expect(\"common classes as a str\")\n                .to_string();\n            class_path.insert_str(0, \"-Djava.class.path=\");\n\n            // Build the VM properties\n            let jvm_args = InitArgsBuilder::new()\n                // Pass the JNI API version (default is 8)\n                .version(JNIVersion::V1_8)\n                // You can additionally pass any JVM options (standard, like a system property,\n                // or VM-specific).\n                // Here we enable some extra JNI checks useful during development\n                .option(\"-Xcheck:jni\")\n                .option(class_path.as_str())\n                .build()\n                .unwrap_or_else(|e| panic!(\"{e:#?}\"));\n\n            let jvm = JavaVM::new(jvm_args).unwrap_or_else(|e| panic!(\"{e:#?}\"));\n\n            #[allow(static_mut_refs)]\n            unsafe {\n                JVM = Some(Arc::new(jvm));\n            }\n        });\n\n        #[allow(static_mut_refs)]\n        unsafe {\n            JVM.as_ref().unwrap()\n        }\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `dlopen`\n    pub fn error_from_panic() {\n        jvm()\n            .attach_current_thread(|env| -> jni::errors::Result<()> {\n                let env_unowned = unsafe { EnvUnowned::from_raw(env.get_raw()) };\n                try_unwrap_or_throw(&env_unowned, |_| -> CometResult<()> {\n                    panic!(\"oops!\");\n                });\n\n                assert_pending_java_exception_detailed(\n                    env,\n
                    Some(\"java/lang/RuntimeException\"),\n                    Some(\"oops!\"),\n                );\n                Ok(())\n            })\n            .unwrap();\n    }\n\n    // Verify that functions that return an object are handled correctly.  This is basically\n    // a test of the \"happy path\".\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `dlopen`\n    pub fn object_result() {\n        jvm()\n            .attach_current_thread(|env| -> jni::errors::Result<()> {\n                let clazz = env.find_class(jni::jni_str!(\"java/lang/Object\")).unwrap();\n                let input = env.new_string(\"World\").unwrap();\n\n                let actual = unsafe {\n                    Java_Errors_hello(&EnvUnowned::from_raw(env.get_raw()), clazz, input)\n                };\n                let actual_s = unsafe { JString::from_raw(env, actual) };\n\n                let actual_string = actual_s.try_to_string(env).unwrap();\n                assert_eq!(\"Hello, World!\", actual_string);\n                Ok(())\n            })\n            .unwrap();\n    }\n\n    // Verify that functions that return a native type are handled correctly.  This is basically\n    // a test of the \"happy path\".\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `dlopen`\n    pub fn jlong_result() {\n        jvm()\n            .attach_current_thread(|env| -> jni::errors::Result<()> {\n                // Class java.lang.object is just a stand-in\n                let class = env.find_class(jni::jni_str!(\"java/lang/Object\")).unwrap();\n                let a: jlong = 6;\n                let b: jlong = 3;\n                let actual =\n                    unsafe { Java_Errors_div(&EnvUnowned::from_raw(env.get_raw()), class, a, b) };\n\n                assert_eq!(2, actual);\n                Ok(())\n            })\n            .unwrap();\n    }\n\n    // Verify that functions that return a native type can handle throwing exceptions.  The test\n    // causes an exception by dividing by zero.\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `dlopen`\n    pub fn jlong_panic_exception() {\n        jvm()\n            .attach_current_thread(|env| -> jni::errors::Result<()> {\n                // Class java.lang.object is just a stand-in\n                let class = env.find_class(jni::jni_str!(\"java/lang/Object\")).unwrap();\n                let a: jlong = 6;\n                let b: jlong = 0;\n                let _actual =\n                    unsafe { Java_Errors_div(&EnvUnowned::from_raw(env.get_raw()), class, a, b) };\n\n                assert_pending_java_exception_detailed(\n                    env,\n                    Some(\"java/lang/RuntimeException\"),\n                    Some(\"attempt to divide by zero\"),\n                );\n                Ok(())\n            })\n            .unwrap();\n    }\n\n    // Verify that functions that return a native type are handled correctly.
  This is basically\n    // a test of the \"happy path\".\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `dlopen`\n    pub fn jlong_result_ok() {\n        jvm()\n            .attach_current_thread(|env| -> jni::errors::Result<()> {\n                // Class java.lang.object is just a stand-in\n                let class = env.find_class(jni::jni_str!(\"java/lang/Object\")).unwrap();\n                let a: JString = env.new_string(\"9\").unwrap();\n                let b: JString = env.new_string(\"3\").unwrap();\n                let actual = unsafe {\n                    Java_Errors_div_with_parse(&EnvUnowned::from_raw(env.get_raw()), class, a, b)\n                };\n\n                assert_eq!(3, actual);\n                Ok(())\n            })\n            .unwrap();\n    }\n\n    // Verify that functions that return a native type are handled correctly.  This is basically\n    // a test of the \"happy path\".\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `dlopen`\n    pub fn jlong_result_err() {\n        jvm()\n            .attach_current_thread(|env| -> jni::errors::Result<()> {\n                // Class java.lang.object is just a stand-in\n                let class = env.find_class(jni::jni_str!(\"java/lang/Object\")).unwrap();\n                let a: JString = env.new_string(\"NaN\").unwrap();\n                let b: JString = env.new_string(\"3\").unwrap();\n                let _actual = unsafe {\n                    Java_Errors_div_with_parse(&EnvUnowned::from_raw(env.get_raw()), class, a, b)\n                };\n\n                assert_pending_java_exception_detailed(\n                    env,\n                    Some(\"java/lang/NumberFormatException\"),\n                    Some(\"invalid digit found in string\"),\n                );\n                Ok(())\n            })\n            .unwrap();\n    }\n\n    // Verify that functions that return an array are handled correctly.  This is basically\n    // a test of the \"happy path\".\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `dlopen`\n    pub fn jint_array_result() {\n        jvm()\n            .attach_current_thread(|env| -> jni::errors::Result<()> {\n                // Class java.lang.object is just a stand-in\n                let class = env.find_class(jni::jni_str!(\"java/lang/Object\")).unwrap();\n                let buf = [2, 4, 6];\n                let input = env.new_int_array(3).unwrap();\n                input.set_region(env, 0, &buf).unwrap();\n                let actual = unsafe {\n                    Java_Errors_array_div(&EnvUnowned::from_raw(env.get_raw()), class, &input, 2)\n                };\n                let actual_s = unsafe { JIntArray::from_raw(env, actual) };\n\n                let mut buf: [i32; 3] = [0; 3];\n                actual_s.get_region(env, 0, &mut buf).unwrap();\n                assert_eq!([1, 2, 3], buf);\n                Ok(())\n            })\n            .unwrap();\n    }\n\n    // Verify that functions that return an array can handle throwing exceptions.
  The test\n    // causes an exception by dividing by zero.\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `dlopen`\n    pub fn jint_array_panic_exception() {\n        jvm()\n            .attach_current_thread(|env| -> jni::errors::Result<()> {\n                // Class java.lang.object is just a stand-in\n                let class = env.find_class(jni::jni_str!(\"java/lang/Object\")).unwrap();\n                let buf = [2, 4, 6];\n                let input = env.new_int_array(3).unwrap();\n                input.set_region(env, 0, &buf).unwrap();\n                let _actual = unsafe {\n                    Java_Errors_array_div(&EnvUnowned::from_raw(env.get_raw()), class, &input, 0)\n                };\n\n                assert_pending_java_exception_detailed(\n                    env,\n                    Some(\"java/lang/RuntimeException\"),\n                    Some(\"attempt to divide by zero\"),\n                );\n                Ok(())\n            })\n            .unwrap();\n    }\n\n    /// Tests the conversion of a serialized backtrace to an equivalent stacktrace message.\n    ///\n    /// See [`object_panic_exception`] for a test which involves generating a panic and verifying\n    /// that the resulting stack trace includes the offending call.\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `dlopen`\n    pub fn stacktrace_string() {\n        // Setup: Start with a backtrace that includes all of the expected scenarios, including\n        // cases where the file and location are not provided as part of the backtrace capture\n        let backtrace_string = read_resource(\"testdata/backtrace.txt\").expect(\"backtrace content\");\n\n        // Test: Reformat the serialized backtrace as a multi-line message which includes the\n        // backtrace formatted as a stacktrace\n        let stacktrace_string =\n            to_stacktrace_string(\"Some Error Message\".to_string(), backtrace_string).unwrap();\n\n        // Verify: The message matches the expected output.  Trim the expected string to remove\n        // the carriage return\n        let expected_string = read_resource(\"testdata/stacktrace.txt\").expect(\"stacktrace content\");\n        assert_eq!(expected_string.trim(), stacktrace_string.as_str());\n    }\n\n    fn read_resource(path: &str) -> Result<String, io::Error> {\n        let mut path_buf = PathBuf::from(env!(\"CARGO_MANIFEST_DIR\"));\n        path_buf.push(path);\n\n        let mut f = File::open(path_buf.as_path())?;\n        let mut s = String::new();\n        f.read_to_string(&mut s)?;\n        Ok(s)\n    }\n\n    // Example of a simple JNI \"Hello World\" program.  It can be used to demonstrate:\n    // * returning an object\n    // * throwing an exception from `.expect()`\n    #[no_mangle]\n    pub extern \"system\" fn Java_Errors_hello(\n        e: &EnvUnowned,\n        _class: JClass,\n        input: JString,\n    ) -> jstring {\n        try_unwrap_or_throw(e, |env| {\n            let input: String = input.try_to_string(env).expect(\"Couldn't get java string!\");\n\n            let output = env\n                .new_string(format!(\"Hello, {input}!\"))\n                .expect(\"Couldn't create java string!\");\n\n            Ok(output.into_raw())\n        })\n    }\n\n    // Example of a simple JNI function that divides.
  It can be used to demonstrate:\n    // * returning a native type\n    // * throwing an exception when dividing by zero\n    #[no_mangle]\n    pub extern \"system\" fn Java_Errors_div(\n        env: &EnvUnowned,\n        _class: JClass,\n        a: jlong,\n        b: jlong,\n    ) -> jlong {\n        try_unwrap_or_throw(env, |_| Ok(a / b))\n    }\n\n    #[no_mangle]\n    pub extern \"system\" fn Java_Errors_div_with_parse(\n        e: &EnvUnowned,\n        _class: JClass,\n        a: JString,\n        b: JString,\n    ) -> jlong {\n        try_unwrap_or_throw(e, |env| {\n            let a_value: i64 = a.try_to_string(env)?.parse()?;\n            let b_value: i64 = b.try_to_string(env)?.parse()?;\n            Ok(a_value / b_value)\n        })\n    }\n\n    // Example of a simple JNI function that divides.  It can be used to demonstrate:\n    // * returning an array\n    // * throwing an exception when dividing by zero\n    #[no_mangle]\n    pub extern \"system\" fn Java_Errors_array_div(\n        e: &EnvUnowned,\n        _class: JClass,\n        input: &JIntArray,\n        divisor: jint,\n    ) -> jintArray {\n        try_unwrap_or_throw(e, |env| {\n            let mut input_buf: [jint; 3] = [0; 3];\n            input.get_region(env, 0, &mut input_buf)?;\n\n            let buf = input_buf.map(|v| -> jint { v / divisor });\n\n            let result = env.new_int_array(3)?;\n            result.set_region(env, 0, &buf)?;\n            Ok(result.into_raw())\n        })\n    }\n\n    // Helper method that asserts there is a pending Java exception which is an `instance_of`\n    // `expected_type` with a message matching `expected_message`, and clears it.\n    fn assert_pending_java_exception_detailed(\n        env: &mut Env,\n        expected_type: Option<&str>,\n        expected_message: Option<&str>,\n    ) {\n        assert!(env.exception_check());\n        let exception = env.exception_occurred().expect(\"Unable to get exception\");\n        env.exception_clear();\n\n        if let Some(expected_type) = expected_type {\n            assert_exception_type(env, &exception, expected_type);\n        }\n\n        if let Some(expected_message) = expected_message {\n            assert_exception_message(env, exception, expected_message);\n        }\n    }\n\n    // Asserts that exception is an `instance_of` `expected_type` type.\n    fn assert_exception_type(env: &mut Env, exception: &JThrowable, expected_type: &str) {\n        if !env\n            .is_instance_of(exception, jni::strings::JNIString::new(expected_type))\n            .unwrap()\n        {\n            let class: JClass = env.get_object_class(exception).unwrap();\n            let name = env\n                .call_method(\n                    class,\n                    jni::jni_str!(\"getName\"),\n                    jni::jni_sig!(\"()Ljava/lang/String;\"),\n                    &[],\n                )\n                .unwrap()\n                .l()\n                .unwrap();\n            let name_string = unsafe { JString::from_raw(env, name.into_raw()) };\n            let class_name: String = name_string.try_to_string(env).unwrap();\n            assert_eq!(class_name.replace('.', \"/\"), expected_type);\n        };\n    }\n\n    // Asserts that exception's message matches `expected_message`.\n    fn assert_exception_message(env: &mut Env, exception: JThrowable, expected_message: &str) {\n        let message = env\n            .call_method(\n                exception,\n                jni::jni_str!(\"getMessage\"),\n
                jni::jni_sig!(\"()Ljava/lang/String;\"),\n                &[],\n            )\n            .unwrap()\n            .l()\n            .unwrap();\n        let message_string = unsafe { JString::from_raw(env, message.into_raw()) };\n        let msg_rust: String = message_string.try_to_string(env).unwrap();\n        println!(\"{msg_rust}\");\n        // Since panics result in multi-line messages which include the backtrace, just use the\n        // first line.\n        assert_starts_with!(msg_rust, expected_message);\n    }\n}\n"
  },
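  {
    "path": "native/jni-bridge/examples/duplicate_field_sketch.rs",
    "content": "// Editor's illustrative sketch: a hypothetical example file, not part of the\n// repository. It re-implements, in isolation, the message-matching step of\n// `try_convert_duplicate_field_error` from errors.rs so the behaviour can be\n// exercised without a JVM. The regex and the case-insensitive matching rule\n// are copied from the real implementation; the function and file names here\n// are invented for illustration.\n\nuse regex::Regex;\n\n/// Returns the requested field and its case-insensitive matches when the\n/// message looks like DataFusion's \"Unable to get field named\" schema error.\nfn parse_duplicate_field(error_msg: &str) -> Option<(String, Vec<String>)> {\n    let re = Regex::new(r#\"Unable to get field named \"([^\"]+)\"\\. Valid fields: \\[(.+)\\]\"#).ok()?;\n    let caps = re.captures(error_msg)?;\n    let requested = caps.get(1)?.as_str().to_string();\n    let requested_lower = requested.to_lowercase();\n    // Parse the valid-fields list and keep only case-insensitive matches.\n    let matched: Vec<String> = caps\n        .get(2)?\n        .as_str()\n        .split(',')\n        .map(|s| s.trim().trim_matches('\"').to_string())\n        .filter(|f| f.to_lowercase() == requested_lower)\n        .collect();\n    // At least one match is required to treat this as a duplicate-field error\n    // (see the commentary in errors.rs on DataFusion's deduplication).\n    if matched.is_empty() {\n        None\n    } else {\n        Some((requested, matched))\n    }\n}\n\nfn main() {\n    // DataFusion deduplicated \"B\"/\"b\" and kept only \"B\"; \"b\" was requested.\n    let msg = r#\"Schema error: Unable to get field named \"b\". Valid fields: [\"A\", \"B\"]\"#;\n    let (requested, matched) = parse_duplicate_field(msg).expect(\"message should match\");\n    assert_eq!(requested, \"b\");\n    assert_eq!(matched, vec![\"B\".to_string()]);\n    println!(\"requested={requested}, matched={matched:?}\");\n}\n"
  },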
  {
    "path": "native/jni-bridge/src/lib.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! JNI bridge for Apache DataFusion Comet.\n//!\n//! This crate provides the JNI interaction layer used across Comet's native Rust crates.\n\n#![allow(clippy::result_large_err)]\n\nuse jni::objects::JClass;\nuse jni::{\n    errors::Error,\n    objects::{JMethodID, JObject, JString, JThrowable, JValueOwned},\n    signature::ReturnType,\n    Env, JavaVM,\n};\nuse once_cell::sync::OnceCell;\n\nuse errors::{CometError, CometResult};\n\npub mod errors;\n\n/// Global reference to the Java VM, initialized during native library setup.\npub static JAVA_VM: OnceCell<JavaVM> = OnceCell::new();\n\n/// Macro for converting JNI Error to Comet Error.\n#[macro_export]\nmacro_rules! jni_map_error {\n    ($env:expr, $result:expr) => {{\n        match $result {\n            Ok(result) => datafusion::error::Result::Ok(result),\n            Err(jni_error) => Err($crate::errors::CometError::JNI { source: jni_error }),\n        }\n    }};\n}\n\n/// Macro for converting Rust types to JNI types.\n#[macro_export]\nmacro_rules! jvalues {\n    ($($args:expr,)* $(,)?) => {{\n        &[$(jni::objects::JValue::from($args).as_jni()),*] as &[jni::sys::jvalue]\n    }}\n}\n\n/// Macro for calling a JNI method.\n/// The syntax is:\n/// jni_call!(env, comet_metric_node(metric_node).add(jname, value) -> ())?;\n/// comet_metric_node is the class name stored in [[JVMClasses]].\n/// metric_node is the Java object on which the method is called.\n/// add is the method name.\n/// jname and value are the arguments.\n#[macro_export]\nmacro_rules! jni_call {\n    ($env:expr, $clsname:ident($obj:expr).$method:ident($($args:expr),* $(,)?) -> $ret:ty) => {{\n        let method_id = paste::paste! {\n            $crate::JVMClasses::get().[<$clsname>].[<method_ $method>]\n        };\n        let ret_type = paste::paste! {\n            $crate::JVMClasses::get().[<$clsname>].[<method_ $method _ret>]\n        }.clone();\n        let args = $crate::jvalues!($($args,)*);\n\n        // Call the JVM method and obtain the returned value\n        let ret = $env.call_method_unchecked($obj, method_id, ret_type, args);\n\n        // Check if JVM has thrown any exception, and handle it if so.\n        let result = if let Some(exception) = $crate::check_exception($env)? {\n            Err(exception.into())\n        } else {\n            $crate::jni_map_error!($env, ret)\n        };\n\n        result.and_then(|result| $crate::jni_map_error!($env, <$ret>::try_from(result)))\n    }}\n}\n\n#[macro_export]\nmacro_rules! jni_static_call {\n    ($env:expr, $clsname:ident.$method:ident($($args:expr),* $(,)?) -> $ret:ty) => {{\n        let clazz = &paste::paste! 
{\n            $crate::JVMClasses::get().[<$clsname>].[<class>]\n        };\n        let method_id = paste::paste! {\n            $crate::JVMClasses::get().[<$clsname>].[<method_ $method>]\n        };\n        let ret_type = paste::paste! {\n            $crate::JVMClasses::get().[<$clsname>].[<method_ $method _ret>]\n        }.clone();\n        let args = $crate::jvalues!($($args,)*);\n\n        // Call the JVM static method and obtain the returned value\n        let ret = $env.call_static_method_unchecked(clazz, method_id, ret_type, args);\n\n        // Check if JVM has thrown any exception, and handle it if so.\n        let result = if let Some(exception) = $crate::check_exception($env)? {\n            Err(exception.into())\n        } else {\n            $crate::jni_map_error!($env, ret)\n        };\n\n        result.and_then(|result| $crate::jni_map_error!($env, <$ret>::try_from(result)))\n    }}\n}\n\n/// Macro for creating a new global reference.\n#[macro_export]\nmacro_rules! jni_new_global_ref {\n    ($env:expr, $obj:expr) => {{\n        $crate::jni_map_error!($env, $env.new_global_ref($obj))\n    }};\n}\n\n/// Wrapper for JString. We cannot implement the `TryFrom` trait for `JString` directly\n/// because the trait and the type are defined in different crates.\npub struct StringWrapper<'a> {\n    value: JObject<'a>,\n}\n\nimpl<'a> StringWrapper<'a> {\n    pub fn new(value: JObject<'a>) -> StringWrapper<'a> {\n        Self { value }\n    }\n\n    pub fn get(&self) -> &JObject<'_> {\n        &self.value\n    }\n}\n\npub struct BinaryWrapper<'a> {\n    value: JObject<'a>,\n}\n\nimpl<'a> BinaryWrapper<'a> {\n    pub fn new(value: JObject<'a>) -> BinaryWrapper<'a> {\n        Self { value }\n    }\n\n    pub fn get(&self) -> &JObject<'_> {\n        &self.value\n    }\n}\n\nimpl<'a> TryFrom<JValueOwned<'a>> for StringWrapper<'a> {\n    type Error = Error;\n\n    fn try_from(value: JValueOwned<'a>) -> Result<StringWrapper<'a>, Error> {\n        match value {\n            JValueOwned::Object(b) => Ok(StringWrapper::new(b)),\n            _ => Err(Error::WrongJValueType(\"object\", value.type_name())),\n        }\n    }\n}\n\nimpl<'a> TryFrom<JValueOwned<'a>> for BinaryWrapper<'a> {\n    type Error = Error;\n\n    fn try_from(value: JValueOwned<'a>) -> Result<BinaryWrapper<'a>, Error> {\n        match value {\n            JValueOwned::Object(b) => Ok(BinaryWrapper::new(b)),\n            _ => Err(Error::WrongJValueType(\"object\", value.type_name())),\n        }\n    }\n}\n\nmod comet_exec;\npub use comet_exec::*;\nmod batch_iterator;\nmod comet_metric_node;\nmod comet_task_memory_manager;\nmod shuffle_block_iterator;\n\nuse batch_iterator::CometBatchIterator;\npub use comet_metric_node::*;\npub use comet_task_memory_manager::*;\nuse shuffle_block_iterator::CometShuffleBlockIterator;\n\n/// The JVM classes that are used in the JNI calls.\n#[allow(dead_code)] // we need to keep references to Java items to prevent GC\npub struct JVMClasses<'a> {\n    /// Cached JClass for \"java.lang.Object\"\n    java_lang_object: JClass<'a>,\n    /// Cached JClass for \"java.lang.Class\"\n    java_lang_class: JClass<'a>,\n    /// Cached JClass for \"java.lang.Throwable\"\n    java_lang_throwable: JClass<'a>,\n    /// Cached method ID for \"java.lang.Object#getClass\"\n    object_get_class_method: JMethodID,\n    /// Cached method ID for \"java.lang.Class#getName\"\n    class_get_name_method: JMethodID,\n    /// Cached method ID for \"java.lang.Throwable#getMessage\"\n    throwable_get_message_method: JMethodID,\n    /// Cached method ID
 for \"java.lang.Throwable#getCause\"\n    throwable_get_cause_method: JMethodID,\n\n    /// The CometMetricNode class. Used for updating the metrics.\n    pub comet_metric_node: CometMetricNode<'a>,\n    /// The static CometExec class. Used for getting the subquery result.\n    pub comet_exec: CometExec<'a>,\n    /// The CometBatchIterator class. Used for iterating over the batches.\n    pub comet_batch_iterator: CometBatchIterator<'a>,\n    /// The CometShuffleBlockIterator class. Used for iterating over shuffle blocks.\n    pub comet_shuffle_block_iterator: CometShuffleBlockIterator<'a>,\n    /// The CometTaskMemoryManager used for interacting with JVM side to\n    /// acquire & release native memory.\n    pub comet_task_memory_manager: CometTaskMemoryManager<'a>,\n}\n\nunsafe impl Send for JVMClasses<'_> {}\n\nunsafe impl Sync for JVMClasses<'_> {}\n\n/// Keeps global references to JVM classes. Used for JNI calls to JVM.\nstatic JVM_CLASSES: OnceCell<JVMClasses> = OnceCell::new();\n\nimpl JVMClasses<'_> {\n    /// Creates a new JVMClasses struct.\n    pub fn init(env: &mut Env) {\n        JVM_CLASSES.get_or_init(|| {\n            // A hack to make the `Env` static. It is not safe, but we only use the\n            // `Env` here to create the global references to the classes.\n            let env = unsafe { std::mem::transmute::<&mut Env, &'static mut Env>(env) };\n\n            let java_lang_object = env.find_class(jni::jni_str!(\"java/lang/Object\")).unwrap();\n            let object_get_class_method = env\n                .get_method_id(\n                    &java_lang_object,\n                    jni::jni_str!(\"getClass\"),\n                    jni::jni_sig!(\"()Ljava/lang/Class;\"),\n                )\n                .unwrap();\n\n            let java_lang_class = env.find_class(jni::jni_str!(\"java/lang/Class\")).unwrap();\n            let class_get_name_method = env\n                .get_method_id(\n                    &java_lang_class,\n                    jni::jni_str!(\"getName\"),\n                    jni::jni_sig!(\"()Ljava/lang/String;\"),\n                )\n                .unwrap();\n\n            let java_lang_throwable = env\n                .find_class(jni::jni_str!(\"java/lang/Throwable\"))\n                .unwrap();\n            let throwable_get_message_method = env\n                .get_method_id(\n                    &java_lang_throwable,\n                    jni::jni_str!(\"getMessage\"),\n                    jni::jni_sig!(\"()Ljava/lang/String;\"),\n                )\n                .unwrap();\n\n            let throwable_get_cause_method = env\n                .get_method_id(\n                    &java_lang_throwable,\n                    jni::jni_str!(\"getCause\"),\n                    jni::jni_sig!(\"()Ljava/lang/Throwable;\"),\n                )\n                .unwrap();\n\n            // SAFETY: According to the documentation for `JMethodID`, it is our\n            // responsibility to maintain a reference to the `JClass` instances from which the\n            // methods were accessed to prevent the methods from being garbage-collected\n            JVMClasses {\n                java_lang_object,\n                java_lang_class,\n                java_lang_throwable,\n                object_get_class_method,\n                class_get_name_method,\n                throwable_get_message_method,\n                throwable_get_cause_method,\n                comet_metric_node: CometMetricNode::new(env).unwrap(),\n                comet_exec:
 CometExec::new(env).unwrap(),\n                comet_batch_iterator: CometBatchIterator::new(env).unwrap(),\n                comet_shuffle_block_iterator: CometShuffleBlockIterator::new(env).unwrap(),\n                comet_task_memory_manager: CometTaskMemoryManager::new(env).unwrap(),\n            }\n        });\n    }\n\n    pub fn get() -> &'static JVMClasses<'static> {\n        debug_assert!(\n            JVM_CLASSES.get().is_some(),\n            \"JVMClasses::get: not initialized\"\n        );\n        unsafe { JVM_CLASSES.get_unchecked() }\n    }\n\n    /// Runs a closure with an attached JNI environment for the current thread.\n    pub fn with_env<T, E, F>(f: F) -> Result<T, E>\n    where\n        F: FnOnce(&mut Env) -> Result<T, E>,\n        E: From<CometError>,\n    {\n        debug_assert!(\n            JAVA_VM.get().is_some(),\n            \"JVMClasses::with_env: JAVA_VM not initialized\"\n        );\n        unsafe {\n            let java_vm = JAVA_VM.get_unchecked();\n            let mut scope = jni::ScopeToken::default();\n            let mut guard = java_vm\n                .attach_current_thread_guard(Default::default, &mut scope)\n                .map_err(CometError::from)\n                .map_err(E::from)?;\n            f(guard.borrow_env_mut())\n        }\n    }\n}\n\npub fn check_exception(env: &mut Env) -> CometResult<Option<CometError>> {\n    let result = if env.exception_check() {\n        let exception = env\n            .exception_occurred()\n            .expect(\"exception_check returned true without an exception\");\n        env.exception_clear();\n        let exception_err = convert_exception(env, &exception)?;\n        Some(exception_err)\n    } else {\n        None\n    };\n\n    Ok(result)\n}\n\n/// Get the class name of the exception by:\n///  1. getting the `Class` object of the input `throwable` via the `Object#getClass` method\n///  2.
 getting the exception class name by calling `Class#getName` on the above object\nfn get_throwable_class_name(\n    env: &mut Env,\n    jvm_classes: &JVMClasses,\n    throwable: &JThrowable,\n) -> CometResult<String> {\n    unsafe {\n        let class_obj = env\n            .call_method_unchecked(\n                throwable,\n                jvm_classes.object_get_class_method,\n                ReturnType::Object,\n                &[],\n            )?\n            .l()?;\n        let class_obj = JClass::from_raw(env, class_obj.into_raw());\n        let class_name = env\n            .call_method_unchecked(\n                &class_obj,\n                jvm_classes.class_get_name_method,\n                ReturnType::Object,\n                &[],\n            )?\n            .l()?;\n        let class_name = JString::from_raw(env, class_name.into_raw());\n        let class_name_str = class_name.try_to_string(env)?;\n\n        Ok(class_name_str)\n    }\n}\n\n/// Get the exception message by calling `Throwable#getMessage` on the throwable object\nfn get_throwable_message(\n    env: &mut Env,\n    jvm_classes: &JVMClasses,\n    throwable: &JThrowable,\n) -> CometResult<String> {\n    unsafe {\n        let message: JString = env\n            .call_method_unchecked(\n                throwable,\n                jvm_classes.throwable_get_message_method,\n                ReturnType::Object,\n                &[],\n            )?\n            .l()\n            .map(|obj| JString::from_raw(env, obj.into_raw()))?;\n        let message_str = if !message.is_null() {\n            message.try_to_string(env)?\n        } else {\n            String::from(\"null\")\n        };\n\n        let cause: JThrowable = env\n            .call_method_unchecked(\n                throwable,\n                jvm_classes.throwable_get_cause_method,\n                ReturnType::Object,\n                &[],\n            )?\n            .l()\n            .map(|obj| JThrowable::from_raw(env, obj.into_raw()))?;\n\n        if !cause.is_null() {\n            let cause_class_name = get_throwable_class_name(env, jvm_classes, &cause)?;\n            let cause_message = get_throwable_message(env, jvm_classes, &cause)?;\n            Ok(format!(\n                \"{message_str}\\nCaused by: {cause_class_name}: {cause_message}\"\n            ))\n        } else {\n            Ok(message_str)\n        }\n    }\n}\n\n/// Given a `JThrowable` which is thrown from calling a Java method on the native side,\n/// this converts it into a `CometError::JavaException` with the exception class name\n/// and exception message. This error can then be propagated to the JVM side to let\n/// users know the cause of the native-side error.\npub fn convert_exception(env: &mut Env, throwable: &JThrowable) -> CometResult<CometError> {\n    let cache = JVMClasses::get();\n    let exception_class_name_str = get_throwable_class_name(env, cache, throwable)?;\n    let message_str = get_throwable_message(env, cache, throwable)?;\n\n    Ok(CometError::JavaException {\n        class: exception_class_name_str,\n        msg: message_str,\n        throwable: env.new_global_ref(throwable)?,\n    })\n}\n"
  },
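  {
    "path": "native/jni-bridge/src/metric_call_sketch.rs",
    "content": "// Editor's illustrative sketch: a hypothetical in-crate module, not part of\n// the repository. It shows the calling pattern documented on `jni_call!` and\n// `JVMClasses::with_env` in lib.rs. The `add` method on CometMetricNode is\n// the one named in the macro's own doc comment; the function name and its\n// argument types below are invented for illustration.\n\nuse jni::objects::JObject;\n\nuse crate::errors::CometError;\nuse crate::JVMClasses;\n\n/// Attaches the current thread to the JVM if necessary, then bumps a metric\n/// on a JVM-side CometMetricNode, mirroring the macro doc comment:\n/// jni_call!(env, comet_metric_node(metric_node).add(jname, value) -> ()).\npub fn update_metric(metric_node: &JObject, jname: &JObject, value: i64) -> Result<(), CometError> {\n    JVMClasses::with_env(|env| {\n        crate::jni_call!(env, comet_metric_node(metric_node).add(jname, value) -> ())\n    })\n}\n"
  },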
  {
    "path": "native/jni-bridge/src/shuffle_block_iterator.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse jni::signature::Primitive;\nuse jni::{\n    errors::Result as JniResult,\n    objects::{JClass, JMethodID},\n    signature::ReturnType,\n    strings::JNIString,\n    Env,\n};\n\n/// A struct that holds all the JNI methods and fields for JVM `CometShuffleBlockIterator` class.\n#[allow(dead_code)] // we need to keep references to Java items to prevent GC\npub struct CometShuffleBlockIterator<'a> {\n    pub class: JClass<'a>,\n    pub method_has_next: JMethodID,\n    pub method_has_next_ret: ReturnType,\n    pub method_get_buffer: JMethodID,\n    pub method_get_buffer_ret: ReturnType,\n    pub method_get_current_block_length: JMethodID,\n    pub method_get_current_block_length_ret: ReturnType,\n}\n\nimpl<'a> CometShuffleBlockIterator<'a> {\n    pub const JVM_CLASS: &'static str = \"org/apache/comet/CometShuffleBlockIterator\";\n\n    pub fn new(env: &mut Env<'a>) -> JniResult<CometShuffleBlockIterator<'a>> {\n        let class = env.find_class(JNIString::new(Self::JVM_CLASS))?;\n\n        Ok(CometShuffleBlockIterator {\n            class,\n            method_has_next: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"hasNext\"),\n                jni::jni_sig!(\"()I\"),\n            )?,\n            method_has_next_ret: ReturnType::Primitive(Primitive::Int),\n            method_get_buffer: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getBuffer\"),\n                jni::jni_sig!(\"()Ljava/nio/ByteBuffer;\"),\n            )?,\n            method_get_buffer_ret: ReturnType::Object,\n            method_get_current_block_length: env.get_method_id(\n                JNIString::new(Self::JVM_CLASS),\n                jni::jni_str!(\"getCurrentBlockLength\"),\n                jni::jni_sig!(\"()I\"),\n            )?,\n            method_get_current_block_length_ret: ReturnType::Primitive(Primitive::Int),\n        })\n    }\n}\n"
  },
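  {
    "path": "native/jni-bridge/src/shuffle_iter_sketch.rs",
    "content": "// Editor's illustrative sketch: a hypothetical in-crate module, not part of\n// the repository. It drives the JVM-side iteration protocol whose method IDs\n// are cached by CometShuffleBlockIterator: hasNext() and\n// getCurrentBlockLength() both return jint per their \"()I\" signatures. The\n// `> 0` loop condition and the function name are assumptions made purely for\n// illustration.\n\nuse jni::objects::JObject;\n\nuse crate::errors::CometError;\nuse crate::JVMClasses;\n\n/// Sums the lengths reported for all remaining shuffle blocks.\npub fn total_block_length(iter: &JObject) -> Result<i64, CometError> {\n    JVMClasses::with_env(|env| {\n        let mut total: i64 = 0;\n        while crate::jni_call!(env, comet_shuffle_block_iterator(iter).has_next() -> i32)? > 0 {\n            let len =\n                crate::jni_call!(env, comet_shuffle_block_iterator(iter).get_current_block_length() -> i32)?;\n            total += len as i64;\n        }\n        Ok(total)\n    })\n}\n"
  },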
  {
    "path": "native/jni-bridge/testdata/backtrace.txt",
    "content": "   0: std::backtrace_rs::backtrace::libunwind::trace\n             at /rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93:5\n   1: std::backtrace_rs::backtrace::trace_unsynchronized\n             at /rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/../../backtrace/src/backtrace/mod.rs:66:5\n   2: std::backtrace::Backtrace::create\n             at /rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/backtrace.rs:331:13\n   3: std::backtrace::Backtrace::capture\n             at /rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/backtrace.rs:297:9\n   4: comet::Java_org_apache_comet_NativeBase_init::{{closure}}\n             at /Users/somebody/src/arrow-datafusion-comet/core/src/lib.rs:70:77\n   5: std::panicking::try::do_call\n             at /rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/panicking.rs:526:40\n   6: ___rust_try\n   7: std::panicking::try\n             at /rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/panicking.rs:490:19\n   8: std::panic::catch_unwind\n             at /rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/panic.rs:142:14\n   9: comet::errors::try_unwrap_or_throw\n             at /Users/somebody/src/arrow-datafusion-comet/core/src/errors.rs:369:43\n  10: Java_org_apache_comet_NativeBase_init\n             at /Users/somebody/src/arrow-datafusion-comet/core/src/lib.rs:53:5\n\n"
  },
  {
    "path": "native/jni-bridge/testdata/stacktrace.txt",
    "content": "Some Error Message\n        at std::backtrace_rs::backtrace::libunwind::trace(/rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/../../backtrace/src/backtrace/libunwind.rs:93)\n        at std::backtrace_rs::backtrace::trace_unsynchronized(/rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/../../backtrace/src/backtrace/mod.rs:66)\n        at std::backtrace::Backtrace::create(/rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/backtrace.rs:331)\n        at std::backtrace::Backtrace::capture(/rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/backtrace.rs:297)\n        at comet::Java_org_apache_comet_NativeBase_init::{{closure}}(/Users/somebody/src/arrow-datafusion-comet/core/src/lib.rs:70)\n        at std::panicking::try::do_call(/rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/panicking.rs:526)\n        at ___rust_try(__internal__:0)\n        at std::panicking::try(/rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/panicking.rs:490)\n        at std::panic::catch_unwind(/rustc/ec08a0337f3556212525dbf1d3b41e19bdf27621/library/std/src/panic.rs:142)\n        at comet::errors::try_unwrap_or_throw(/Users/somebody/src/arrow-datafusion-comet/core/src/errors.rs:369)\n        at Java_org_apache_comet_NativeBase_init(/Users/somebody/src/arrow-datafusion-comet/core/src/lib.rs:53)\n"
  },
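  {
    "path": "native/jni-bridge/examples/stacktrace_frame_sketch.rs",
    "content": "// Editor's illustrative sketch: a hypothetical example file, not part of the\n// repository. It applies the named-capture regex from `to_stacktrace_string`\n// in errors.rs to a single (abridged) frame in the style of the backtrace.txt\n// fixture above and reproduces the corresponding stacktrace.txt line format.\n\nuse regex::Regex;\n\nfn main() {\n    // One frame in the backtrace.txt format (the /rustc path is shortened here).\n    let frame = \"   2: std::backtrace::Backtrace::create\\n             at /rustc/abc/library/std/src/backtrace.rs:331:13\\n\";\n    // dc = declaring path, mn = method name, fn/line/col = optional location.\n    let re = Regex::new(\n        r\"(?m)^\\s*\\d+: (?<dc>.*?)(?<mn>[^:]+)\\n(\\s*at\\s+(?<fn>[^:]+):(?<line>\\d+):(?<col>\\d+)$)?\",\n    )\n    .unwrap();\n    let c = re.captures(frame).unwrap();\n    // Frames without file information fall back to __internal__:0, as in errors.rs.\n    let rendered = format!(\n        \"        at {}{}({}:{})\",\n        &c[\"dc\"],\n        &c[\"mn\"],\n        c.name(\"fn\").map(|m| m.as_str()).unwrap_or(\"__internal__\"),\n        c.name(\"line\").map(|m| m.as_str()).unwrap_or(\"0\"),\n    );\n    assert_eq!(\n        rendered,\n        \"        at std::backtrace::Backtrace::create(/rustc/abc/library/std/src/backtrace.rs:331)\"\n    );\n    println!(\"{rendered}\");\n}\n"
  },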
  {
    "path": "native/proto/Cargo.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[package]\nname = \"datafusion-comet-proto\"\nversion = { workspace = true }\nhomepage = \"https://datafusion.apache.org/comet\"\nrepository = \"https://github.com/apache/datafusion-comet\"\nauthors = [\"Apache DataFusion <dev@datafusion.apache.org>\"]\ndescription = \"Apache DataFusion Comet: High performance accelerator for Apache Spark\"\nreadme = \"README.md\"\nlicense = \"Apache-2.0\"\nedition = \"2021\"\n\n[dependencies]\nprost = \"0.14.3\"\n\n[build-dependencies]\nprost-build = \"0.14.3\"\n"
  },
  {
    "path": "native/proto/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Apache DataFusion Comet: Intermediate Representation of Query Plan\n\nThis crate contains the protocol buffer definitions of Spark physical query plans\nand is intended to be used as part of the Apache DataFusion Comet project.\n"
  },
  {
    "path": "native/proto/build.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Build script for generating codes from .proto files.\n\nuse std::{fs, io::Result, path::Path};\n\nfn main() -> Result<()> {\n    println!(\"cargo:rerun-if-changed=src/proto/\");\n\n    let out_dir = \"src/generated\";\n    if !Path::new(out_dir).is_dir() {\n        fs::create_dir(out_dir)?;\n    }\n\n    prost_build::Config::new().out_dir(out_dir).compile_protos(\n        &[\n            \"src/proto/expr.proto\",\n            \"src/proto/metric.proto\",\n            \"src/proto/partitioning.proto\",\n            \"src/proto/operator.proto\",\n            \"src/proto/config.proto\",\n        ],\n        &[\"src/proto\"],\n    )?;\n    Ok(())\n}\n"
  },
  {
    "path": "native/proto/src/lib.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n// The clippy throws an error if the reference clone not wrapped into `Arc::clone`\n// The lint makes easier for code reader/reviewer separate references clones from more heavyweight ones\n#![deny(clippy::clone_on_ref_ptr)]\n\n// Include generated modules from .proto files.\n#[allow(missing_docs)]\n#[allow(clippy::large_enum_variant)]\npub mod spark_expression {\n    include!(concat!(\"generated\", \"/spark.spark_expression.rs\"));\n}\n\n// Include generated modules from .proto files.\n#[allow(missing_docs)]\npub mod spark_partitioning {\n    include!(concat!(\"generated\", \"/spark.spark_partitioning.rs\"));\n}\n\n// Include generated modules from .proto files.\n#[allow(missing_docs)]\n#[allow(clippy::large_enum_variant)]\npub mod spark_operator {\n    include!(concat!(\"generated\", \"/spark.spark_operator.rs\"));\n}\n\n// Include generated modules from .proto files.\n#[allow(missing_docs)]\npub mod spark_metric {\n    include!(concat!(\"generated\", \"/spark.spark_metric.rs\"));\n}\n\n// Include generated modules from .proto files.\n#[allow(missing_docs)]\npub mod spark_config {\n    include!(concat!(\"generated\", \"/spark.spark_config.rs\"));\n}\n"
  },
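  {
    "path": "native/proto/examples/expr_roundtrip_sketch.rs",
    "content": "// Editor's illustrative sketch: a hypothetical example file, not part of the\n// repository. It shows the generated modules from lib.rs in use: build a\n// spark_expression::Expr, encode it with prost, and decode it back. Message\n// and field names come from expr.proto; only this file name is invented.\n\nuse datafusion_comet_proto::spark_expression::{expr::ExprStruct, BoundReference, Expr};\nuse prost::Message;\n\nfn main() {\n    // An Expr that is a BoundReference to column 0 of the input batch\n    // (the `datatype` submessage is left unset here for brevity).\n    let expr = Expr {\n        expr_struct: Some(ExprStruct::Bound(BoundReference {\n            index: 0,\n            datatype: None,\n        })),\n        ..Default::default()\n    };\n\n    // Round-trip through the protobuf wire format.\n    let bytes = expr.encode_to_vec();\n    let decoded = Expr::decode(bytes.as_slice()).expect(\"valid protobuf\");\n    assert_eq!(expr, decoded);\n    println!(\"encoded {} bytes\", bytes.len());\n}\n"
  },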
  {
    "path": "native/proto/src/proto/config.proto",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nsyntax = \"proto3\";\n\npackage spark.spark_config;\n\noption java_package = \"org.apache.comet.serde\";\n\nmessage ConfigMap {\n  map<string, string> entries = 1;\n}"
  },
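  {
    "path": "native/proto/examples/config_map_sketch.rs",
    "content": "// Editor's illustrative sketch: a hypothetical example file, not part of the\n// repository. prost renders the `map<string, string> entries` field of\n// ConfigMap as a std HashMap; the config key below is just an example value.\n\nuse std::collections::HashMap;\n\nuse datafusion_comet_proto::spark_config::ConfigMap;\nuse prost::Message;\n\nfn main() {\n    let mut entries = HashMap::new();\n    entries.insert(\"spark.comet.enabled\".to_string(), \"true\".to_string());\n\n    // Encode the map and decode it back, as it would travel over the wire.\n    let config = ConfigMap { entries };\n    let bytes = config.encode_to_vec();\n    let decoded = ConfigMap::decode(bytes.as_slice()).expect(\"valid protobuf\");\n\n    assert_eq!(\n        decoded.entries.get(\"spark.comet.enabled\").map(String::as_str),\n        Some(\"true\")\n    );\n}\n"
  },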
  {
    "path": "native/proto/src/proto/expr.proto",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n\n\nsyntax = \"proto3\";\n\npackage spark.spark_expression;\n\nimport \"literal.proto\";\nimport \"types.proto\";\n\noption java_package = \"org.apache.comet.serde\";\n\n// The basic message representing a Spark expression.\nmessage Expr {\n  oneof expr_struct {\n    Literal literal = 2;\n    BoundReference bound = 3;\n    MathExpr add = 4;\n    MathExpr subtract = 5;\n    MathExpr multiply = 6;\n    MathExpr divide = 7;\n    Cast cast = 8;\n    BinaryExpr eq = 9;\n    BinaryExpr neq = 10;\n    BinaryExpr gt = 11;\n    BinaryExpr gt_eq = 12;\n    BinaryExpr lt = 13;\n    BinaryExpr lt_eq = 14;\n    UnaryExpr is_null = 15;\n    UnaryExpr is_not_null = 16;\n    BinaryExpr and = 17;\n    BinaryExpr or = 18;\n    SortOrder sort_order = 19;\n    Substring substring = 20;\n    Hour hour = 22;\n    Minute minute = 23;\n    Second second = 24;\n    CheckOverflow check_overflow = 25;\n    BinaryExpr like = 26;\n    BinaryExpr rlike = 30;\n    ScalarFunc scalarFunc = 31;\n    BinaryExpr eqNullSafe = 32;\n    BinaryExpr neqNullSafe = 33;\n    BinaryExpr bitwiseAnd = 34;\n    BinaryExpr bitwiseOr = 35;\n    BinaryExpr bitwiseXor = 36;\n    MathExpr remainder = 37;\n    CaseWhen caseWhen = 38;\n    In in = 39;\n    UnaryExpr not = 40;\n    UnaryMinus unary_minus = 41;\n    BinaryExpr bitwiseShiftRight = 42;\n    BinaryExpr bitwiseShiftLeft = 43;\n    IfExpr if = 44;\n    NormalizeNaNAndZero normalize_nan_and_zero = 45;\n    TruncTimestamp truncTimestamp = 47;\n    Subquery subquery = 50;\n    UnboundReference unbound = 51;\n    BloomFilterMightContain bloom_filter_might_contain = 52;\n    CreateNamedStruct create_named_struct = 53;\n    GetStructField get_struct_field = 54;\n    ToJson to_json = 55;\n    ListExtract list_extract = 56;\n    GetArrayStructFields get_array_struct_fields = 57;\n    ArrayInsert array_insert = 58;\n    MathExpr integral_divide = 59;\n    ToPrettyString to_pretty_string = 60;\n    Rand rand = 61;\n    Rand randn = 62;\n    EmptyExpr spark_partition_id = 63;\n    EmptyExpr monotonically_increasing_id = 64;\n    UnixTimestamp unix_timestamp = 65;\n    FromJson from_json = 66;\n    ToCsv to_csv = 67;\n    HoursTransform hours_transform = 68;\n  }\n\n  // Optional QueryContext for error reporting (contains SQL text and position)\n  optional QueryContext query_context = 90;\n\n  // Unique expression ID for context lookup during error creation\n  optional uint64 expr_id = 91;\n}\n\n// QueryContext provides SQL query context for error messages.\n// Mirrors Spark's SQLQueryContext for rich error reporting.\nmessage QueryContext {\n  // Full SQL query text\n  string sql_text = 1;\n\n  // Character offset where expression starts (0-based)\n  int32 start_index = 
2;\n\n  // Character offset where expression ends (0-based, inclusive)\n  int32 stop_index = 3;\n\n  // Type of SQL object (e.g., \"VIEW\", \"Project\", \"Filter\")\n  optional string object_type = 4;\n\n  // Name of object (e.g., view name, column name)\n  optional string object_name = 5;\n\n  // Line number in SQL query (1-based)\n  int32 line = 6;\n\n  // Column position within the line (0-based)\n  int32 start_position = 7;\n}\n\nmessage AggExpr {\n  oneof expr_struct {\n    Count count = 2;\n    Sum sum = 3;\n    Min min = 4;\n    Max max = 5;\n    Avg avg = 6;\n    First first = 7;\n    Last last = 8;\n    BitAndAgg bitAndAgg = 9;\n    BitOrAgg bitOrAgg = 10;\n    BitXorAgg bitXorAgg = 11;\n    Covariance covariance = 12;\n    Variance variance = 13;\n    Stddev stddev = 14;\n    Correlation correlation = 15;\n    BloomFilterAgg bloomFilterAgg = 16;\n  }\n\n  // Optional filter expression for SQL FILTER (WHERE ...) clause.\n  // Only set in Partial aggregation mode; absent in Final/PartialMerge.\n  optional Expr filter = 89;\n\n  // Optional QueryContext for error reporting (contains SQL text and position)\n  optional QueryContext query_context = 90;\n\n  // Unique expression ID for context lookup during error creation\n  optional uint64 expr_id = 91;\n}\n\nenum StatisticsType {\n  SAMPLE = 0;\n  POPULATION = 1;\n}\n\nmessage Count {\n  repeated Expr children = 1;\n}\n\nmessage Sum {\n  Expr child = 1;\n  DataType datatype = 2;\n  EvalMode eval_mode = 3;\n}\n\nmessage Min {\n  Expr child = 1;\n  DataType datatype = 2;\n}\n\nmessage Max {\n  Expr child = 1;\n  DataType datatype = 2;\n}\n\nmessage Avg {\n  Expr child = 1;\n  DataType datatype = 2;\n  DataType sum_datatype = 3;\n  EvalMode eval_mode = 4;\n}\n\nmessage First {\n  Expr child = 1;\n  DataType datatype = 2;\n  bool ignore_nulls = 3;\n}\n\nmessage Last {\n  Expr child = 1;\n  DataType datatype = 2;\n  bool ignore_nulls = 3;\n}\n\nmessage BitAndAgg {\n  Expr child = 1;\n  DataType datatype = 2;\n}\n\nmessage BitOrAgg {\n  Expr child = 1;\n  DataType datatype = 2;\n}\n\nmessage BitXorAgg {\n  Expr child = 1;\n  DataType datatype = 2;\n}\n\nmessage Covariance {\n  Expr child1 = 1;\n  Expr child2 = 2;\n  bool null_on_divide_by_zero = 3;\n  DataType datatype = 4;\n  StatisticsType stats_type = 5;\n}\n\nmessage Variance {\n  Expr child = 1;\n  bool null_on_divide_by_zero = 2;\n  DataType datatype = 3;\n  StatisticsType stats_type = 4;\n}\n\nmessage Stddev {\n  Expr child = 1;\n  bool null_on_divide_by_zero = 2;\n  DataType datatype = 3;\n  StatisticsType stats_type = 4;\n}\n\nmessage Correlation {\n  Expr child1 = 1;\n  Expr child2 = 2;\n  bool null_on_divide_by_zero = 3;\n  DataType datatype = 4;\n}\n\nmessage BloomFilterAgg {\n  Expr child = 1;\n  Expr numItems = 2;\n  Expr numBits = 3;\n  DataType datatype = 4;\n}\n\nenum EvalMode {\n  LEGACY = 0;\n  TRY = 1;\n  ANSI = 2;\n}\n\nmessage MathExpr {\n  Expr left = 1;\n  Expr right = 2;\n  DataType return_type = 4;\n  EvalMode eval_mode = 5;\n}\n\nmessage Cast {\n  Expr child = 1;\n  DataType datatype = 2;\n  string timezone = 3;\n  EvalMode eval_mode = 4;\n  bool allow_incompat = 5;\n  // True when running against Spark 4.0+. 
Controls version-specific cast behavior\n  // such as the handling of leading whitespace before T-prefixed time-only strings.\n  bool is_spark4_plus = 6;\n}\n\nmessage BinaryExpr {\n  Expr left = 1;\n  Expr right = 2;\n}\n\nmessage UnaryExpr {\n  Expr child = 1;\n}\n\nmessage EmptyExpr {\n}\n\n// Bound to a particular vector array in input batch.\nmessage BoundReference {\n  int32 index = 1;\n  DataType datatype = 2;\n}\n\nmessage UnboundReference {\n  string name = 1;\n  DataType datatype = 2;\n}\n\nmessage SortOrder {\n  Expr child = 1;\n  SortDirection direction = 2;\n  NullOrdering null_ordering = 3;\n}\n\nmessage Substring {\n  Expr child = 1;\n  int32 start = 2;\n  int32 len = 3;\n}\n\nmessage ToJson {\n  Expr child = 1;\n  string timezone = 2;\n  string date_format = 3;\n  string timestamp_format = 4;\n  string timestamp_ntz_format = 5;\n  bool ignore_null_fields = 6;\n}\n\nmessage FromJson {\n  Expr child = 1;\n  DataType schema = 2;\n  string timezone = 3;\n}\n\nmessage ToCsv {\n  Expr child = 1;\n  CsvWriteOptions options = 2;\n}\n\nmessage CsvWriteOptions {\n  string delimiter = 1;\n  string quote = 2;\n  string escape = 3;\n  string null_value = 4;\n  bool quote_all = 5;\n  bool ignore_leading_white_space = 6;\n  bool ignore_trailing_white_space = 7;\n  string timezone = 8;\n}\n\nenum BinaryOutputStyle {\n  UTF8 = 0;\n  BASIC = 1;\n  BASE64 = 2;\n  HEX = 3;\n  HEX_DISCRETE = 4;\n}\n\nmessage ToPrettyString {\n  Expr child = 1;\n  string timezone = 2;\n  BinaryOutputStyle binaryOutputStyle = 3;\n}\n\nmessage Hour {\n  Expr child = 1;\n  string timezone = 2;\n}\n\nmessage HoursTransform {\n  Expr child = 1;\n}\n\nmessage Minute {\n  Expr child = 1;\n  string timezone = 2;\n}\n\nmessage Second {\n  Expr child = 1;\n  string timezone = 2;\n}\n\nmessage UnixTimestamp {\n  Expr child = 1;\n  string timezone = 2;\n}\n\nmessage CheckOverflow {\n  Expr child = 1;\n  DataType datatype = 2;\n  bool fail_on_error = 3;\n}\n\nmessage ScalarFunc {\n  string func = 1;\n  repeated Expr args = 2;\n  DataType return_type = 3;\n  bool fail_on_error = 4;\n}\n\nmessage CaseWhen {\n  // The expr field is added to be consistent with the CaseExpr definition in DataFusion.\n  // This field is not really used: when constructing a CaseExpr, it is always set to\n  // None, because the Spark parser converts the expr into EqualTo conditions. After that\n  // conversion the expr is no longer present, so it is always None.\n  Expr expr = 1;\n  repeated Expr when = 2;\n  repeated Expr then = 3;\n  Expr else_expr = 4;\n}\n\nmessage In {\n  Expr in_value = 1;\n  repeated Expr lists = 2;\n  bool negated = 3;\n}\n\nmessage NormalizeNaNAndZero {\n  Expr child = 1;\n  DataType datatype = 2;\n}\n\nmessage UnaryMinus {\n  Expr child = 1;\n  bool fail_on_error = 2;\n}\n\nmessage IfExpr {\n  Expr if_expr = 1;\n  Expr true_expr = 2;\n  Expr false_expr = 3;\n}\n\nmessage TruncTimestamp {\n  Expr format = 1;\n  Expr child = 2;\n  string timezone = 3;\n}\n\nmessage Subquery {\n  int64 id = 1;\n  DataType datatype = 2;\n}\n\nmessage BloomFilterMightContain {\n  Expr bloom_filter = 1;\n  Expr value = 2;\n}\n\nmessage CreateNamedStruct {\n  repeated Expr values = 1;\n  repeated string names = 2;\n}\n\nmessage GetStructField {\n  Expr child = 1;\n  int32 ordinal = 2;\n}\n\nmessage ListExtract {\n  Expr child = 1;\n  Expr ordinal = 2;\n  Expr default_value = 3;\n  bool one_based = 4;\n  bool fail_on_error = 5;\n}\n\nmessage GetArrayStructFields {\n  Expr child = 1;\n  int32 ordinal = 2;\n}\n\nenum SortDirection {\n  Ascending = 0;\n  Descending = 1;\n}\n\nenum NullOrdering {\n  NullsFirst = 0;\n  NullsLast = 1;\n}\n\n// Array functions\nmessage ArrayInsert {\n  Expr src_array_expr = 1;\n  Expr pos_expr = 2;\n  Expr item_expr = 3;\n  bool legacy_negative_index = 4;\n}\n\nmessage ArrayJoin {\n  Expr array_expr = 1;\n  Expr delimiter_expr = 2;\n  Expr null_replacement_expr = 3;\n}\n\nmessage Rand {\n  int64 seed = 1;\n}\n"
  },
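The `query_context` and `expr_id` fields on `Expr` exist so the native side can raise Spark-style errors that point back at the offending SQL fragment. A minimal sketch of wiring them up, assuming prost-generated Rust bindings for this file (the `spark_expression` module path and exact generated names are assumptions, not the actual Comet API):

```rust
use spark_expression::{expr, literal, Expr, Literal, QueryContext};

// Build a literal Expr carrying a QueryContext for ANSI error reporting.
fn literal_with_context() -> Expr {
    Expr {
        // The expr_struct oneof becomes a generated Rust enum.
        expr_struct: Some(expr::ExprStruct::Literal(Literal {
            value: Some(literal::Value::IntVal(7)),
            datatype: None, // a DataType with type_id = INT32 would normally go here
            is_null: false,
        })),
        query_context: Some(QueryContext {
            sql_text: "SELECT 7 + c1 FROM t".to_string(),
            start_index: 7, // 0-based character offset where the expression starts
            stop_index: 12, // 0-based, inclusive
            object_type: None,
            object_name: None,
            line: 1,           // 1-based
            start_position: 7, // 0-based column within the line
        }),
        expr_id: Some(1), // unique ID for context lookup during error creation
    }
}
```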
  {
    "path": "native/proto/src/proto/literal.proto",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n\n\nsyntax = \"proto3\";\n\npackage spark.spark_expression;\n\nimport \"types.proto\";\n\noption java_package = \"org.apache.comet.serde\";\n\nmessage Literal {\n  oneof value {\n    bool bool_val = 1;\n    // Protobuf doesn't provide int8 and int16, we put them into int32 and convert\n    // to int8 and int16 when deserializing.\n    int32 byte_val = 2;\n    int32 short_val = 3;\n    int32 int_val = 4;\n    int64 long_val = 5;\n    float float_val = 6;\n    double double_val = 7;\n    string string_val = 8;\n    bytes bytes_val = 9;\n    bytes decimal_val = 10;\n    ListLiteral list_val = 11;\n  }\n\n  DataType datatype = 12;\n  bool is_null = 13;\n}\n"
  },
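As the comment in the `value` oneof notes, byte and short literals travel as `int32` on the wire and are narrowed back to their logical types on the receiving side. A sketch of that narrowing, again assuming prost-generated bindings rather than quoting Comet's actual deserializer:

```rust
use spark_expression::literal::Value;

// Narrow a wire-level literal back to the logical Spark type's range.
fn narrow(value: &Value) -> Option<i64> {
    match value {
        Value::ByteVal(v) => Some(i64::from(*v as i8)),   // int32 on the wire -> int8
        Value::ShortVal(v) => Some(i64::from(*v as i16)), // int32 on the wire -> int16
        Value::IntVal(v) => Some(i64::from(*v)),
        Value::LongVal(v) => Some(*v),
        _ => None, // non-integral literals handled elsewhere
    }
}
```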
  {
    "path": "native/proto/src/proto/metric.proto",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n\n\nsyntax = \"proto3\";\n\npackage spark.spark_metric;\n\noption java_package = \"org.apache.comet.serde\";\n\nmessage NativeMetricNode {\n  map<string, int64> metrics = 1;\n  repeated NativeMetricNode children = 2;\n}\n"
  },
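`NativeMetricNode` mirrors the native operator tree: each node holds its own name-to-value map plus one child node per child operator. Purely for illustration (this is not Comet's metrics plumbing), aggregating a single metric over the whole tree is a short recursion, assuming prost-generated bindings:

```rust
use spark_metric::NativeMetricNode;

// Sum a named metric over a node and all of its descendants.
fn sum_metric(node: &NativeMetricNode, name: &str) -> i64 {
    node.metrics.get(name).copied().unwrap_or(0)
        + node
            .children
            .iter()
            .map(|child| sum_metric(child, name))
            .sum::<i64>()
}
```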
  {
    "path": "native/proto/src/proto/operator.proto",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n\n\nsyntax = \"proto3\";\n\npackage spark.spark_operator;\n\nimport \"expr.proto\";\nimport \"partitioning.proto\";\nimport \"types.proto\";\n\noption java_package = \"org.apache.comet.serde\";\n\n// The basic message representing a Spark operator.\nmessage Operator {\n  // The child operators of this\n  repeated Operator children = 1;\n\n  // Spark plan ID\n  uint32 plan_id = 2;\n\n  oneof op_struct {\n    Scan scan = 100;\n    Projection projection = 101;\n    Filter filter = 102;\n    Sort sort = 103;\n    HashAggregate hash_agg = 104;\n    Limit limit = 105;\n    ShuffleWriter shuffle_writer = 106;\n    Expand expand = 107;\n    SortMergeJoin sort_merge_join = 108;\n    HashJoin hash_join = 109;\n    Window window = 110;\n    NativeScan native_scan = 111;\n    IcebergScan iceberg_scan = 112;\n    ParquetWriter parquet_writer = 113;\n    Explode explode = 114;\n    CsvScan csv_scan = 115;\n    ShuffleScan shuffle_scan = 116;\n  }\n}\n\nmessage SparkPartitionedFile {\n  string file_path = 1;\n  int64 start = 2;\n  int64 length = 3;\n  int64 file_size = 4;\n  repeated spark.spark_expression.Expr partition_values = 5;\n}\n\n// This name and the one above are not great, but they correspond to the (unfortunate) Spark names.\n// I prepended \"Spark\" since I think there's a name collision on the native side, but we can revisit.\nmessage SparkFilePartition {\n  repeated SparkPartitionedFile partitioned_file = 1;\n}\n\nmessage SparkStructField {\n  string name = 1;\n  spark.spark_expression.DataType data_type = 2;\n  bool nullable = 3;\n}\n\nmessage Scan {\n  repeated spark.spark_expression.DataType fields = 1;\n  // The source of the scan (e.g. file scan, broadcast exchange, shuffle, etc). 
This\n  // is purely for informational purposes when viewing native query plans in\n  // debug mode.\n  string source = 2;\n  // Whether native code can assume ownership of batches that it receives\n  bool arrow_ffi_safe = 3;\n}\n\nmessage ShuffleScan {\n  repeated spark.spark_expression.DataType fields = 1;\n  // Informational label for debug output (e.g., \"CometShuffleExchangeExec [id=5]\")\n  string source = 2;\n}\n\n// Common data shared by all partitions in split mode (sent once at planning)\nmessage NativeScanCommon {\n  repeated SparkStructField required_schema = 1;\n  repeated SparkStructField data_schema = 2;\n  repeated SparkStructField partition_schema = 3;\n  repeated spark.spark_expression.Expr data_filters = 4;\n  repeated int64 projection_vector = 5;\n  string session_timezone = 6;\n  repeated spark.spark_expression.Expr default_values = 7;\n  repeated int64 default_values_indexes = 8;\n  bool case_sensitive = 9;\n  map<string, string> object_store_options = 10;\n  bool encryption_enabled = 11;\n  string source = 12;\n  repeated spark.spark_expression.DataType fields = 13;\n}\n\nmessage NativeScan {\n  // Common data shared across partitions (schemas, filters, projections, config)\n  NativeScanCommon common = 1;\n\n  // Single partition's file list (injected at execution time)\n  SparkFilePartition file_partition = 2;\n}\n\nmessage CsvScan {\n  repeated SparkStructField data_schema = 1;\n  repeated SparkStructField partition_schema = 2;\n  repeated int32 projection_vector = 3;\n  repeated SparkFilePartition file_partitions = 4;\n  map<string, string> object_store_options = 5;\n  CsvOptions csv_options = 6;\n}\n\nmessage CsvOptions {\n  bool has_header = 1;\n  string delimiter = 2;\n  string quote = 3;\n  string escape = 4;\n  optional string comment = 5;\n  string terminator = 7;\n  bool truncated_rows = 8;\n}\n\n// Partition value for Iceberg partition data\nmessage PartitionValue {\n  int32 field_id = 1;\n  oneof value {\n    int32 int_val = 2;\n    int64 long_val = 3;\n    int64 date_val = 4;           // days since epoch\n    int64 timestamp_val = 5;       // microseconds since epoch\n    int64 timestamp_tz_val = 6;    // microseconds with timezone\n    string string_val = 7;\n    double double_val = 8;\n    float float_val = 9;\n    bytes decimal_val = 10;        // unscaled BigInteger bytes\n    bool bool_val = 11;\n    bytes uuid_val = 12;\n    bytes fixed_val = 13;\n    bytes binary_val = 14;\n  }\n  bool is_null = 15;\n}\n\n// Collection of partition values for a single partition\nmessage PartitionData {\n  repeated PartitionValue values = 1;\n}\n\n// Common data shared by all partitions in split mode (sent once, captured in closure)\nmessage IcebergScanCommon {\n  // Catalog-specific configuration for FileIO (credentials, S3/GCS config, etc.)\n  map<string, string> catalog_properties = 1;\n\n  // Table metadata file path for FileIO initialization\n  string metadata_location = 2;\n\n  // Schema to read\n  repeated SparkStructField required_schema = 3;\n\n  // Deduplication pools (must contain all entries for cross-partition deduplication)\n  repeated string schema_pool = 4;\n  repeated string partition_type_pool = 5;\n  repeated string partition_spec_pool = 6;\n  repeated string name_mapping_pool = 7;\n  repeated ProjectFieldIdList project_field_ids_pool = 8;\n  repeated PartitionData partition_data_pool = 9;\n  repeated DeleteFileList delete_files_pool = 10;\n  repeated spark.spark_expression.Expr residual_pool = 11;\n\n  // Number of data files to read 
concurrently within a single task\n  uint32 data_file_concurrency_limit = 12;\n}\n\nmessage IcebergScan {\n  // Common data shared across partitions (pools, metadata, catalog props)\n  IcebergScanCommon common = 1;\n\n  // Single partition's file scan tasks\n  repeated IcebergFileScanTask file_scan_tasks = 2;\n}\n\n// Helper message for deduplicating field ID lists\nmessage ProjectFieldIdList {\n  repeated int32 field_ids = 1;\n}\n\n// Helper message for deduplicating delete file lists\nmessage DeleteFileList {\n  repeated IcebergDeleteFile delete_files = 1;\n}\n\n// Iceberg FileScanTask containing data file, delete files, and residual filter\nmessage IcebergFileScanTask {\n  // Data file path (e.g., s3://bucket/warehouse/db/table/data/00000-0-abc.parquet)\n  string data_file_path = 1;\n\n  // Byte range to read (for split files)\n  uint64 start = 2;\n  uint64 length = 3;\n\n  // Record count if reading entire file\n  optional uint64 record_count = 4;\n\n  // Total file size from the manifest entry, used to skip stat/HEAD calls\n  uint64 file_size_in_bytes = 5;\n\n  // Indices into IcebergScan deduplication pools\n  uint32 schema_idx = 15;\n  optional uint32 partition_type_idx = 16;\n  optional uint32 partition_spec_idx = 17;\n  optional uint32 name_mapping_idx = 18;\n  uint32 project_field_ids_idx = 19;\n  optional uint32 partition_data_idx = 20;\n  optional uint32 delete_files_idx = 21;\n  optional uint32 residual_idx = 22;\n}\n\n// Iceberg delete file for MOR tables (positional or equality deletes)\n// Positional: (file_path, row_position) pairs to skip\n// Equality: Column values to filter out (specified by equality_ids)\nmessage IcebergDeleteFile {\n  // Delete file path\n  string file_path = 1;\n\n  // POSITION_DELETES or EQUALITY_DELETES\n  string content_type = 2;\n\n  // Partition spec ID\n  int32 partition_spec_id = 3;\n\n  // Equality field IDs (empty for positional deletes)\n  repeated int32 equality_ids = 4;\n\n  // Total file size from the manifest entry, used to skip stat/HEAD calls\n  uint64 file_size_in_bytes = 5;\n}\n\nmessage Projection {\n  repeated spark.spark_expression.Expr project_list = 1;\n}\n\nmessage Filter {\n  spark.spark_expression.Expr predicate = 1;\n}\n\nmessage Sort {\n  repeated spark.spark_expression.Expr sort_orders = 1;\n  optional int32 fetch = 3;\n  optional int32 skip = 4;\n}\n\nmessage HashAggregate {\n  repeated spark.spark_expression.Expr grouping_exprs = 1;\n  repeated spark.spark_expression.AggExpr agg_exprs = 2;\n  repeated spark.spark_expression.Expr result_exprs = 3;\n  AggregateMode mode = 5;\n}\n\nmessage Limit {\n  int32 limit = 1;\n  int32 offset = 2;\n}\n\nenum CompressionCodec {\n  None = 0;\n  Zstd = 1;\n  Lz4 = 2;\n  Snappy = 3;\n}\n\nmessage ShuffleWriter {\n  spark.spark_partitioning.Partitioning partitioning = 1;\n  string output_data_file = 3;\n  string output_index_file = 4;\n  CompressionCodec codec = 5;\n  int32 compression_level = 6;\n  bool tracing_enabled = 7;\n  // Size of the write buffer in bytes used when writing shuffle data to disk.\n  // Larger values may improve write performance but use more memory.\n  int32 write_buffer_size = 8;\n}\n\nmessage ParquetWriter {\n  string output_path = 1;\n  CompressionCodec compression = 2;\n  repeated string column_names = 4;\n  // Working directory for temporary files (used by FileCommitProtocol)\n  // If not set, files are written directly to output_path\n  optional string work_dir = 5;\n  // Job ID for tracking this write operation\n  optional string job_id = 6;\n  // Task 
attempt ID for this specific task\n  optional int32 task_attempt_id = 7;\n  // Options for configuring object stores such as AWS S3, GCS, etc. The key-value pairs are taken\n  // from Hadoop configuration for compatibility with Hadoop FileSystem implementations of object\n  // stores.\n  // The configuration values have hadoop. or spark.hadoop. prefix trimmed. For instance, the\n  // configuration value \"spark.hadoop.fs.s3a.access.key\" will be stored as \"fs.s3a.access.key\" in\n  // the map.\n  map<string, string> object_store_options = 8;\n}\n\nenum AggregateMode {\n  Partial = 0;\n  Final = 1;\n}\n\nmessage Expand {\n  repeated spark.spark_expression.Expr project_list = 1;\n  int32 num_expr_per_project = 3;\n}\n\nmessage Explode {\n  // The array expression to explode into multiple rows\n  spark.spark_expression.Expr child = 1;\n  // Whether this is explode_outer (produces null row for empty/null arrays)\n  bool outer = 2;\n  // Expressions for other columns to project alongside the exploded values\n  repeated spark.spark_expression.Expr project_list = 3;\n}\n\nmessage HashJoin {\n  repeated spark.spark_expression.Expr left_join_keys = 1;\n  repeated spark.spark_expression.Expr right_join_keys = 2;\n  JoinType join_type = 3;\n  optional spark.spark_expression.Expr condition = 4;\n  BuildSide build_side = 5;\n}\n\nmessage SortMergeJoin {\n  repeated spark.spark_expression.Expr left_join_keys = 1;\n  repeated spark.spark_expression.Expr right_join_keys = 2;\n  JoinType join_type = 3;\n  repeated spark.spark_expression.Expr sort_options = 4;\n  optional spark.spark_expression.Expr condition = 5;\n}\n\nenum JoinType {\n  Inner = 0;\n  LeftOuter = 1;\n  RightOuter = 2;\n  FullOuter = 3;\n  LeftSemi = 4;\n  LeftAnti = 5;\n}\n\nenum BuildSide {\n  BuildLeft = 0;\n  BuildRight = 1;\n}\n\nmessage WindowExpr {\n  spark.spark_expression.Expr built_in_window_function = 1;\n  spark.spark_expression.AggExpr agg_func = 2;\n  WindowSpecDefinition spec = 3;\n  bool ignore_nulls = 4;\n}\n\nenum WindowFrameType {\n  Rows = 0;\n  Range = 1;\n}\n\nmessage WindowFrame {\n  WindowFrameType frame_type = 1;\n  LowerWindowFrameBound lower_bound = 2;\n  UpperWindowFrameBound upper_bound = 3;\n}\n\nmessage LowerWindowFrameBound {\n  oneof lower_frame_bound_struct {\n    UnboundedPreceding unboundedPreceding = 1;\n    Preceding preceding = 2;\n    CurrentRow currentRow = 3;\n  }\n}\n\nmessage UpperWindowFrameBound {\n  oneof upper_frame_bound_struct {\n    UnboundedFollowing unboundedFollowing = 1;\n    Following following = 2;\n    CurrentRow currentRow = 3;\n  }\n}\n\nmessage Preceding {\n  int64 offset = 1;\n}\n\nmessage Following {\n  int64 offset = 1;\n}\n\nmessage UnboundedPreceding {}\nmessage UnboundedFollowing {}\nmessage CurrentRow {}\n\nmessage WindowSpecDefinition {\n  repeated spark.spark_expression.Expr partitionSpec = 1;\n  repeated spark.spark_expression.Expr orderSpec = 2;\n  WindowFrame frameSpecification = 3;\n}\n\nmessage Window {\n  repeated WindowExpr window_expr = 1;\n  repeated spark.spark_expression.Expr order_by_list = 2;\n  repeated spark.spark_expression.Expr partition_by_list = 3;\n  Operator child = 4;\n}\n"
  },
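Operators form a tree through the repeated `children` field, with each node's own payload in the `op_struct` oneof. A hedged sketch of a `Filter` wrapping a `Scan`, under the same prost-generated-bindings assumption as above:

```rust
use spark_expression::Expr;
use spark_operator::{operator, Filter, Operator, Scan};

// Assemble Filter(predicate) -> Scan as a two-node operator tree.
fn filter_over_scan(predicate: Expr) -> Operator {
    let scan = Operator {
        children: vec![], // leaf operator: receives batches from the JVM side
        plan_id: 1,
        op_struct: Some(operator::OpStruct::Scan(Scan {
            fields: vec![],                // column types of the input batch
            source: "example".to_string(), // informational label for debug output
            arrow_ffi_safe: false,
        })),
    };
    Operator {
        children: vec![scan],
        plan_id: 2,
        op_struct: Some(operator::OpStruct::Filter(Filter {
            predicate: Some(predicate),
        })),
    }
}
```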
  {
    "path": "native/proto/src/proto/partitioning.proto",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n\n\nsyntax = \"proto3\";\n\npackage spark.spark_partitioning;\n\nimport \"expr.proto\";\n\noption java_package = \"org.apache.comet.serde\";\n\n// The basic message representing a Spark partitioning.\nmessage Partitioning {\n  oneof partitioning_struct {\n    HashPartition hash_partition = 1;\n    SinglePartition single_partition = 2;\n    RangePartition range_partition = 3;\n    RoundRobinPartition round_robin_partition = 4;\n  }\n}\n\nmessage HashPartition {\n  repeated spark.spark_expression.Expr hash_expression = 1;\n  int32 num_partitions = 2;\n}\n\nmessage SinglePartition {\n}\n\nmessage BoundaryRow {\n  repeated spark.spark_expression.Expr partition_bounds = 1;\n}\n\nmessage RangePartition {\n  repeated spark.spark_expression.Expr sort_orders = 1;\n  int32 num_partitions = 2;\n  repeated BoundaryRow boundary_rows = 4;\n}\n\nmessage RoundRobinPartition {\n  int32 num_partitions = 1;\n  // Maximum number of columns to hash. 0 means no limit (hash all columns).\n  int32 max_hash_columns = 2;\n}\n"
  },
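Each variant of the `partitioning_struct` oneof selects one scheme. For example, hash partitioning on a set of key expressions might be assembled like this (prost-style names assumed, not Comet's actual serde code):

```rust
use spark_expression::Expr;
use spark_partitioning::{partitioning, HashPartition, Partitioning};

// Hash-partition rows on `keys` into `num_partitions` shuffle partitions.
fn hash_partitioning(keys: Vec<Expr>, num_partitions: i32) -> Partitioning {
    Partitioning {
        partitioning_struct: Some(partitioning::PartitioningStruct::HashPartition(
            HashPartition {
                hash_expression: keys,
                num_partitions,
            },
        )),
    }
}
```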
  {
    "path": "native/proto/src/proto/types.proto",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n\n\nsyntax = \"proto3\";\n\npackage spark.spark_expression;\n\noption java_package = \"org.apache.comet.serde\";\n\nmessage ListLiteral {\n  // Only one of these fields should be populated based on the array type\n  repeated bool boolean_values = 1;\n  repeated int32 byte_values = 2;\n  repeated int32 short_values = 3;\n  repeated int32 int_values = 4;\n  repeated int64 long_values = 5;\n  repeated float float_values = 6;\n  repeated double double_values = 7;\n  repeated string string_values = 8;\n  repeated bytes bytes_values = 9;\n  repeated bytes decimal_values = 10;\n  repeated ListLiteral list_values = 11;\n\n  repeated bool null_mask = 12;\n}\n\nmessage DataType {\n  enum DataTypeId {\n    BOOL = 0;\n    INT8 = 1;\n    INT16 = 2;\n    INT32 = 3;\n    INT64 = 4;\n    FLOAT = 5;\n    DOUBLE = 6;\n    STRING = 7;\n    BYTES = 8;\n    TIMESTAMP = 9;\n    DECIMAL = 10;\n    TIMESTAMP_NTZ = 11;\n    DATE = 12;\n    NULL = 13;\n    LIST = 14;\n    MAP = 15;\n    STRUCT = 16;\n  }\n  DataTypeId type_id = 1;\n\n  message DataTypeInfo {\n    oneof datatype_struct {\n      DecimalInfo decimal = 2;\n      ListInfo list = 3;\n      MapInfo map = 4;\n      StructInfo struct = 5;\n    }\n  }\n\n  message DecimalInfo {\n    int32 precision = 1;\n    int32 scale = 2;\n  }\n\n  message ListInfo {\n    DataType element_type = 1;\n    bool contains_null = 2;\n  }\n\n  message MapInfo {\n    DataType key_type = 1;\n    DataType value_type = 2;\n    bool value_contains_null = 3;\n  }\n\n  message StructInfo {\n    repeated string field_names = 1;\n    repeated DataType field_datatypes = 2;\n    repeated bool field_nullable = 3;\n  }\n\n  DataTypeInfo type_info = 2;\n}\n"
  },
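Scalar types need only `type_id`; parameterized types additionally carry their parameters in `type_info`. A sketch of `Decimal(11, 2)` under the usual prost naming conventions (the exact generated names, and whether recursive fields end up boxed, may differ in the real bindings):

```rust
use spark_expression::data_type::{data_type_info, DataTypeId, DataTypeInfo, DecimalInfo};
use spark_expression::DataType;

// A DECIMAL DataType with its precision and scale attached.
fn decimal_11_2() -> DataType {
    DataType {
        // prost stores enum fields as i32
        type_id: DataTypeId::Decimal as i32,
        type_info: Some(DataTypeInfo {
            datatype_struct: Some(data_type_info::DatatypeStruct::Decimal(DecimalInfo {
                precision: 11,
                scale: 2,
            })),
        }),
    }
}
```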
  {
    "path": "native/rustfmt.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\nedition = \"2021\"\nmax_width = 100\n"
  },
  {
    "path": "native/shuffle/Cargo.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[package]\nname = \"datafusion-comet-shuffle\"\ndescription = \"Apache DataFusion Comet: shuffle writer and reader\"\nversion = { workspace = true }\nhomepage = { workspace = true }\nrepository = { workspace = true }\nauthors = { workspace = true }\nreadme = { workspace = true }\nlicense = { workspace = true }\nedition = { workspace = true }\n\npublish = false\n\n[dependencies]\narrow = { workspace = true }\nasync-trait = { workspace = true }\nbytes = { workspace = true }\nclap = { version = \"4\", features = [\"derive\"], optional = true }\ncrc32c = \"0.6.8\"\ncrc32fast = \"1.3.2\"\ndatafusion = { workspace = true }\ndatafusion-comet-common = { workspace = true }\ndatafusion-comet-jni-bridge = { workspace = true }\ndatafusion-comet-spark-expr = { workspace = true }\nfutures = { workspace = true }\nitertools = \"0.14.0\"\njni = \"0.21\"\nlog = \"0.4\"\nlz4_flex = { version = \"0.13.0\", default-features = false, features = [\"frame\"] }\n# parquet is only used by the shuffle_bench binary (shuffle-bench feature)\nparquet = { workspace = true, optional = true }\nsimd-adler32 = \"0.3.9\"\nsnap = \"1.1\"\ntokio = { version = \"1\", features = [\"rt-multi-thread\"] }\nzstd = \"0.13.3\"\n\n[dev-dependencies]\ncriterion = { version = \"0.7\", features = [\"async\", \"async_tokio\", \"async_std\"] }\ndatafusion = { workspace = true, features = [\"parquet_encryption\", \"sql\"] }\nitertools = \"0.14.0\"\ntempfile = \"3.26.0\"\n\n[features]\nshuffle-bench = [\"clap\", \"parquet\"]\n\n[lib]\nname = \"datafusion_comet_shuffle\"\npath = \"src/lib.rs\"\n\n[[bin]]\nname = \"shuffle_bench\"\npath = \"src/bin/shuffle_bench.rs\"\nrequired-features = [\"shuffle-bench\"]\n\n[[bench]]\nname = \"shuffle_writer\"\nharness = false\n\n[[bench]]\nname = \"row_columnar\"\nharness = false\n"
  },
  {
    "path": "native/shuffle/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# datafusion-comet-shuffle: Shuffle Writer and Reader\n\nThis crate provides the shuffle writer and reader implementation for Apache DataFusion Comet and is maintained as part\nof the [Apache DataFusion Comet] subproject.\n\n[Apache DataFusion Comet]: https://github.com/apache/datafusion-comet/\n\n## Shuffle Benchmark Tool\n\nA standalone benchmark binary (`shuffle_bench`) is included for profiling shuffle write\nperformance outside of Spark. It streams input data directly from Parquet files.\n\n### Basic usage\n\n```sh\ncargo run --release --features shuffle-bench --bin shuffle_bench -- \\\n  --input /data/tpch-sf100/lineitem/ \\\n  --partitions 200 \\\n  --codec lz4 \\\n  --hash-columns 0,3\n```\n\n### Options\n\n| Option                | Default                    | Description                                            |\n| --------------------- | -------------------------- | ------------------------------------------------------ |\n| `--input`             | _(required)_               | Path to a Parquet file or directory of Parquet files   |\n| `--partitions`        | `200`                      | Number of output shuffle partitions                    |\n| `--partitioning`      | `hash`                     | Partitioning scheme: `hash`, `single`, `round-robin`   |\n| `--hash-columns`      | `0`                        | Comma-separated column indices to hash on (e.g. `0,3`) |\n| `--codec`             | `lz4`                      | Compression codec: `none`, `lz4`, `zstd`, `snappy`     |\n| `--zstd-level`        | `1`                        | Zstd compression level (1–22)                          |\n| `--batch-size`        | `8192`                     | Batch size for reading Parquet data                    |\n| `--memory-limit`      | _(none)_                   | Memory limit in bytes; triggers spilling when exceeded |\n| `--write-buffer-size` | `1048576`                  | Write buffer size in bytes                             |\n| `--limit`             | `0`                        | Limit rows processed per iteration (0 = no limit)      |\n| `--iterations`        | `1`                        | Number of timed iterations                             |\n| `--warmup`            | `0`                        | Number of warmup iterations before timing              |\n| `--output-dir`        | `/tmp/comet_shuffle_bench` | Directory for temporary shuffle output files           |\n\n### Profiling with flamegraph\n\n```sh\ncargo flamegraph --release --features shuffle-bench --bin shuffle_bench -- \\\n  --input /data/tpch-sf100/lineitem/ \\\n  --partitions 200 --codec lz4\n```\n"
  },
  {
    "path": "native/shuffle/benches/row_columnar.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Benchmarks for JVM shuffle row-to-columnar conversion.\n//!\n//! Measures `process_sorted_row_partition()` performance for converting Spark\n//! UnsafeRow data to Arrow arrays, covering primitive, struct (flat/nested),\n//! list, and map types.\n\nuse arrow::datatypes::{DataType as ArrowDataType, Field, Fields};\nuse criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};\nuse datafusion_comet_shuffle::spark_unsafe::row::{process_sorted_row_partition, SparkUnsafeRow};\nuse datafusion_comet_shuffle::spark_unsafe::unsafe_object::SparkUnsafeObject;\nuse datafusion_comet_shuffle::CompressionCodec;\nuse std::sync::Arc;\nuse tempfile::Builder;\n\nconst BATCH_SIZE: usize = 5000;\n\n/// Size of an Int64 value in bytes.\nconst INT64_SIZE: usize = 8;\n\n/// Size of a pointer in Spark's UnsafeRow format. Encodes a 32-bit offset\n/// (upper bits) and 32-bit size (lower bits) — always 8 bytes regardless of\n/// hardware architecture.\nconst UNSAFE_ROW_POINTER_SIZE: usize = 8;\n\n/// Size of the element-count field in UnsafeRow array/map headers.\nconst ARRAY_HEADER_SIZE: usize = 8;\n\n// ─── UnsafeRow helpers ──────────────────────────────────────────────────────\n\n/// Write an UnsafeRow offset+size pointer at `pos` in `data`.\nfn write_pointer(data: &mut [u8], pos: usize, offset: usize, size: usize) {\n    let value = ((offset as i64) << 32) | (size as i64);\n    data[pos..pos + UNSAFE_ROW_POINTER_SIZE].copy_from_slice(&value.to_le_bytes());\n}\n\n/// Byte size of a null-bitset for `n` elements (64-bit words, rounded up).\nfn null_bitset_size(n: usize) -> usize {\n    n.div_ceil(64) * 8\n}\n\n// ─── Schema builders ────────────────────────────────────────────────────────\n\n/// Create a struct schema with `depth` nesting levels and `num_leaf_fields`\n/// Int64 leaf fields.\n///\n/// - depth=1: `Struct<f0: Int64, f1: Int64, …>`\n/// - depth=2: `Struct<nested: Struct<f0: Int64, …>>`\n/// - depth=3: `Struct<nested: Struct<nested: Struct<f0: Int64, …>>>`\nfn make_struct_schema(depth: usize, num_leaf_fields: usize) -> ArrowDataType {\n    let leaf_fields: Vec<Field> = (0..num_leaf_fields)\n        .map(|i| Field::new(format!(\"f{i}\"), ArrowDataType::Int64, true))\n        .collect();\n    let mut dt = ArrowDataType::Struct(Fields::from(leaf_fields));\n    for _ in 0..depth - 1 {\n        dt = ArrowDataType::Struct(Fields::from(vec![Field::new(\"nested\", dt, true)]));\n    }\n    dt\n}\n\nfn make_list_schema() -> ArrowDataType {\n    ArrowDataType::List(Arc::new(Field::new(\"item\", ArrowDataType::Int64, true)))\n}\n\nfn make_map_schema() -> ArrowDataType {\n    let entries = Field::new(\n        \"entries\",\n        ArrowDataType::Struct(Fields::from(vec![\n     
       Field::new(\"key\", ArrowDataType::Int64, false),\n            Field::new(\"value\", ArrowDataType::Int64, true),\n        ])),\n        false,\n    );\n    ArrowDataType::Map(Arc::new(entries), false)\n}\n\n// ─── Row data builders ──────────────────────────────────────────────────────\n\n/// Build a binary UnsafeRow containing a struct column with `depth` nesting\n/// levels and `num_leaf_fields` Int64 fields at the innermost level.\nfn build_struct_row(depth: usize, num_leaf_fields: usize) -> Vec<u8> {\n    let top_bitset = SparkUnsafeRow::get_row_bitset_width(1);\n    let inter_bitset = SparkUnsafeRow::get_row_bitset_width(1);\n    let leaf_bitset = SparkUnsafeRow::get_row_bitset_width(num_leaf_fields);\n\n    let inter_level_size = inter_bitset + UNSAFE_ROW_POINTER_SIZE;\n    let leaf_level_size = leaf_bitset + num_leaf_fields * INT64_SIZE;\n\n    let total =\n        top_bitset + UNSAFE_ROW_POINTER_SIZE + (depth - 1) * inter_level_size + leaf_level_size;\n    let mut data = vec![0u8; total];\n\n    // Absolute start position of each struct level in the buffer\n    let mut struct_starts = Vec::with_capacity(depth);\n    let mut pos = top_bitset + UNSAFE_ROW_POINTER_SIZE;\n    for level in 0..depth {\n        struct_starts.push(pos);\n        if level < depth - 1 {\n            pos += inter_level_size;\n        }\n    }\n\n    // Top-level pointer → first struct (absolute offset from row start)\n    let first_size = if depth == 1 {\n        leaf_level_size\n    } else {\n        inter_level_size\n    };\n    write_pointer(&mut data, top_bitset, struct_starts[0], first_size);\n\n    // Intermediate struct pointers (offsets relative to their own struct start)\n    for level in 0..depth - 1 {\n        let next_size = if level + 1 == depth - 1 {\n            leaf_level_size\n        } else {\n            inter_level_size\n        };\n        write_pointer(\n            &mut data,\n            struct_starts[level] + inter_bitset,\n            struct_starts[level + 1] - struct_starts[level],\n            next_size,\n        );\n    }\n\n    // Fill leaf struct with sample data\n    let leaf_start = *struct_starts.last().unwrap();\n    for i in 0..num_leaf_fields {\n        let off = leaf_start + leaf_bitset + i * INT64_SIZE;\n        data[off..off + INT64_SIZE].copy_from_slice(&((i as i64) * 100).to_le_bytes());\n    }\n\n    data\n}\n\n/// Build a binary UnsafeRow containing a `List<Int64>` column.\nfn build_list_row(num_elements: usize) -> Vec<u8> {\n    let top_bitset = SparkUnsafeRow::get_row_bitset_width(1);\n    let elem_null_bitset = null_bitset_size(num_elements);\n    let list_size = ARRAY_HEADER_SIZE + elem_null_bitset + num_elements * INT64_SIZE;\n    let total = top_bitset + UNSAFE_ROW_POINTER_SIZE + list_size;\n    let mut data = vec![0u8; total];\n\n    let list_offset = top_bitset + UNSAFE_ROW_POINTER_SIZE;\n    write_pointer(&mut data, top_bitset, list_offset, list_size);\n\n    // Element count\n    data[list_offset..list_offset + ARRAY_HEADER_SIZE]\n        .copy_from_slice(&(num_elements as i64).to_le_bytes());\n\n    // Element values\n    let data_start = list_offset + ARRAY_HEADER_SIZE + elem_null_bitset;\n    for i in 0..num_elements {\n        let off = data_start + i * INT64_SIZE;\n        data[off..off + INT64_SIZE].copy_from_slice(&((i as i64) * 100).to_le_bytes());\n    }\n\n    data\n}\n\n/// Build a binary UnsafeRow containing a `Map<Int64, Int64>` column.\nfn build_map_row(num_entries: usize) -> Vec<u8> {\n    let top_bitset = 
SparkUnsafeRow::get_row_bitset_width(1);\n    let entry_null_bitset = null_bitset_size(num_entries);\n    let array_size = ARRAY_HEADER_SIZE + entry_null_bitset + num_entries * INT64_SIZE;\n    // Map layout: [key_array_size header] [key_array] [value_array]\n    let map_size = ARRAY_HEADER_SIZE + 2 * array_size;\n    let total = top_bitset + UNSAFE_ROW_POINTER_SIZE + map_size;\n    let mut data = vec![0u8; total];\n\n    let map_offset = top_bitset + UNSAFE_ROW_POINTER_SIZE;\n    write_pointer(&mut data, top_bitset, map_offset, map_size);\n\n    // Key array size header\n    data[map_offset..map_offset + ARRAY_HEADER_SIZE]\n        .copy_from_slice(&(array_size as i64).to_le_bytes());\n\n    // Key array: [element count] [null bitset] [data]\n    let key_offset = map_offset + ARRAY_HEADER_SIZE;\n    data[key_offset..key_offset + ARRAY_HEADER_SIZE]\n        .copy_from_slice(&(num_entries as i64).to_le_bytes());\n    let key_data = key_offset + ARRAY_HEADER_SIZE + entry_null_bitset;\n    for i in 0..num_entries {\n        let off = key_data + i * INT64_SIZE;\n        data[off..off + INT64_SIZE].copy_from_slice(&(i as i64).to_le_bytes());\n    }\n\n    // Value array: [element count] [null bitset] [data]\n    let val_offset = key_offset + array_size;\n    data[val_offset..val_offset + ARRAY_HEADER_SIZE]\n        .copy_from_slice(&(num_entries as i64).to_le_bytes());\n    let val_data = val_offset + ARRAY_HEADER_SIZE + entry_null_bitset;\n    for i in 0..num_entries {\n        let off = val_data + i * INT64_SIZE;\n        data[off..off + INT64_SIZE].copy_from_slice(&((i as i64) * 100).to_le_bytes());\n    }\n\n    data\n}\n\n// ─── Benchmark runner ───────────────────────────────────────────────────────\n\n/// Common benchmark harness: wraps raw row bytes in SparkUnsafeRow and runs\n/// `process_sorted_row_partition` under Criterion.\nfn run_benchmark(\n    group: &mut criterion::BenchmarkGroup<criterion::measurement::WallTime>,\n    name: &str,\n    param: &str,\n    schema: &[ArrowDataType],\n    rows: &[Vec<u8>],\n    num_top_level_fields: usize,\n) {\n    let num_rows = rows.len();\n\n    let spark_rows: Vec<SparkUnsafeRow> = rows\n        .iter()\n        .map(|data| {\n            let mut row = SparkUnsafeRow::new_with_num_fields(num_top_level_fields);\n            row.point_to_slice(data);\n            for i in 0..num_top_level_fields {\n                row.set_not_null_at(i);\n            }\n            row\n        })\n        .collect();\n\n    let mut addrs: Vec<i64> = spark_rows.iter().map(|r| r.get_row_addr()).collect();\n    let mut sizes: Vec<i32> = spark_rows.iter().map(|r| r.get_row_size()).collect();\n    let addr_ptr = addrs.as_mut_ptr();\n    let size_ptr = sizes.as_mut_ptr();\n\n    group.bench_with_input(BenchmarkId::new(name, param), &num_rows, |b, &n| {\n        b.iter(|| {\n            let tmp = Builder::new().tempfile().unwrap();\n            process_sorted_row_partition(\n                n,\n                BATCH_SIZE,\n                addr_ptr,\n                size_ptr,\n                schema,\n                tmp.path().to_str().unwrap().to_string(),\n                1.0,\n                false,\n                0,\n                None,\n                &CompressionCodec::Zstd(1),\n            )\n            .unwrap();\n        });\n    });\n\n    drop(spark_rows);\n}\n\n// ─── Benchmarks ─────────────────────────────────────────────────────────────\n\n/// 100 primitive Int64 columns — baseline without complex-type overhead.\nfn benchmark_primitive_columns(c: 
&mut Criterion) {\n    let mut group = c.benchmark_group(\"primitive_columns\");\n    const NUM_COLS: usize = 100;\n    let bitset = SparkUnsafeRow::get_row_bitset_width(NUM_COLS);\n    let row_size = bitset + NUM_COLS * INT64_SIZE;\n\n    for num_rows in [1000, 10000] {\n        let schema = vec![ArrowDataType::Int64; NUM_COLS];\n        let rows: Vec<Vec<u8>> = (0..num_rows)\n            .map(|_| {\n                let mut data = vec![0u8; row_size];\n                for (i, byte) in data.iter_mut().enumerate().take(row_size).skip(bitset) {\n                    *byte = i as u8;\n                }\n                data\n            })\n            .collect();\n\n        run_benchmark(\n            &mut group,\n            \"cols_100\",\n            &format!(\"rows_{num_rows}\"),\n            &schema,\n            &rows,\n            NUM_COLS,\n        );\n    }\n\n    group.finish();\n}\n\n/// Struct columns at varying nesting depths (1 = flat, 2 = nested, 3 = deeply nested).\nfn benchmark_struct_conversion(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"struct_conversion\");\n\n    for (depth, label) in [(1, \"flat\"), (2, \"nested\"), (3, \"deeply_nested\")] {\n        for num_fields in [5, 10, 20] {\n            for num_rows in [1000, 10000] {\n                let schema = vec![make_struct_schema(depth, num_fields)];\n                let rows: Vec<Vec<u8>> = (0..num_rows)\n                    .map(|_| build_struct_row(depth, num_fields))\n                    .collect();\n\n                run_benchmark(\n                    &mut group,\n                    &format!(\"{label}_fields_{num_fields}\"),\n                    &format!(\"rows_{num_rows}\"),\n                    &schema,\n                    &rows,\n                    1,\n                );\n            }\n        }\n    }\n\n    group.finish();\n}\n\n/// List<Int64> columns with varying element counts.\nfn benchmark_list_conversion(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"list_conversion\");\n\n    for num_elements in [10, 100] {\n        for num_rows in [1000, 10000] {\n            let schema = vec![make_list_schema()];\n            let rows: Vec<Vec<u8>> = (0..num_rows)\n                .map(|_| build_list_row(num_elements))\n                .collect();\n\n            run_benchmark(\n                &mut group,\n                &format!(\"elements_{num_elements}\"),\n                &format!(\"rows_{num_rows}\"),\n                &schema,\n                &rows,\n                1,\n            );\n        }\n    }\n\n    group.finish();\n}\n\n/// Map<Int64, Int64> columns with varying entry counts.\nfn benchmark_map_conversion(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"map_conversion\");\n\n    for num_entries in [10, 100] {\n        for num_rows in [1000, 10000] {\n            let schema = vec![make_map_schema()];\n            let rows: Vec<Vec<u8>> = (0..num_rows).map(|_| build_map_row(num_entries)).collect();\n\n            run_benchmark(\n                &mut group,\n                &format!(\"entries_{num_entries}\"),\n                &format!(\"rows_{num_rows}\"),\n                &schema,\n                &rows,\n                1,\n            );\n        }\n    }\n\n    group.finish();\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n}\n\ncriterion_group! 
{\n    name = benches;\n    config = config();\n    targets = benchmark_primitive_columns,\n              benchmark_struct_conversion,\n              benchmark_list_conversion,\n              benchmark_map_conversion\n}\ncriterion_main!(benches);\n"
  },
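The `write_pointer` helper in this benchmark packs a 32-bit offset into the upper half and a 32-bit size into the lower half of a little-endian `i64`, matching Spark's UnsafeRow pointer layout. The inverse, shown here only to make the encoding explicit (it is not part of the benchmark), would be:

```rust
/// Decode an UnsafeRow offset+size pointer written by `write_pointer`.
fn read_pointer(data: &[u8], pos: usize) -> (usize, usize) {
    let raw = i64::from_le_bytes(data[pos..pos + 8].try_into().unwrap());
    let offset = (raw >> 32) as usize;       // upper 32 bits
    let size = (raw & 0xFFFF_FFFF) as usize; // lower 32 bits
    (offset, size)
}
```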
  {
    "path": "native/shuffle/benches/shuffle_writer.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::builder::{Date32Builder, Decimal128Builder, Int32Builder};\nuse arrow::array::{builder::StringBuilder, Array, Int32Array, RecordBatch};\nuse arrow::datatypes::{DataType, Field, Schema};\nuse arrow::row::{RowConverter, SortField};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::datasource::memory::MemorySourceConfig;\nuse datafusion::datasource::source::DataSourceExec;\nuse datafusion::physical_expr::expressions::{col, Column};\nuse datafusion::physical_expr::{LexOrdering, PhysicalSortExpr};\nuse datafusion::physical_plan::metrics::Time;\nuse datafusion::{\n    physical_plan::{common::collect, ExecutionPlan},\n    prelude::SessionContext,\n};\nuse datafusion_comet_shuffle::{\n    CometPartitioning, CompressionCodec, ShuffleBlockWriter, ShuffleWriterExec,\n};\nuse itertools::Itertools;\nuse std::io::Cursor;\nuse std::sync::Arc;\nuse tokio::runtime::Runtime;\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let batch = create_batch(8192, true);\n    let mut group = c.benchmark_group(\"shuffle_writer\");\n    for compression_codec in &[\n        CompressionCodec::None,\n        CompressionCodec::Lz4Frame,\n        CompressionCodec::Snappy,\n        CompressionCodec::Zstd(1),\n        CompressionCodec::Zstd(6),\n    ] {\n        let name = format!(\"shuffle_writer: write encoded (compression={compression_codec:?})\");\n        group.bench_function(name, |b| {\n            let mut buffer = vec![];\n            let ipc_time = Time::default();\n            let w =\n                ShuffleBlockWriter::try_new(&batch.schema(), compression_codec.clone()).unwrap();\n            b.iter(|| {\n                buffer.clear();\n                let mut cursor = Cursor::new(&mut buffer);\n                w.write_batch(&batch, &mut cursor, &ipc_time).unwrap();\n            });\n        });\n    }\n\n    for compression_codec in [\n        CompressionCodec::None,\n        CompressionCodec::Lz4Frame,\n        CompressionCodec::Snappy,\n        CompressionCodec::Zstd(1),\n        CompressionCodec::Zstd(6),\n    ] {\n        group.bench_function(\n            format!(\"shuffle_writer: end to end (compression = {compression_codec:?})\"),\n            |b| {\n                let ctx = SessionContext::new();\n                let exec = create_shuffle_writer_exec(\n                    compression_codec.clone(),\n                    CometPartitioning::Hash(vec![Arc::new(Column::new(\"a\", 0))], 16),\n                );\n                b.iter(|| {\n                    let task_ctx = ctx.task_ctx();\n                    let stream = exec.execute(0, task_ctx).unwrap();\n                    let rt = Runtime::new().unwrap();\n                    
rt.block_on(collect(stream)).unwrap();\n                });\n            },\n        );\n    }\n\n    let lex_ordering = LexOrdering::new(vec![PhysicalSortExpr::new_default(\n        col(\"c0\", batch.schema().as_ref()).unwrap(),\n    )])\n    .unwrap();\n\n    let sort_fields: Vec<SortField> = batch\n        .columns()\n        .iter()\n        .zip(&lex_ordering)\n        .map(|(array, sort_expr)| {\n            SortField::new_with_options(array.data_type().clone(), sort_expr.options)\n        })\n        .collect();\n    let row_converter = RowConverter::new(sort_fields).unwrap();\n\n    // These are hard-coded values based on the benchmark params of 8192 rows per batch, and 16\n    // partitions. If these change, these values need to be recalculated, or bring over the\n    // bounds-finding logic from shuffle_write_test in shuffle_writer.rs.\n    let bounds_ints = vec![\n        512, 1024, 1536, 2048, 2560, 3072, 3584, 4096, 4608, 5120, 5632, 6144, 6656, 7168, 7680,\n    ];\n    let bounds_array: Arc<dyn Array> = Arc::new(Int32Array::from(bounds_ints));\n    let bounds_rows = row_converter\n        .convert_columns(vec![bounds_array].as_slice())\n        .unwrap();\n\n    let owned_rows = bounds_rows.iter().map(|row| row.owned()).collect_vec();\n\n    for partitioning in [\n        CometPartitioning::Hash(vec![Arc::new(Column::new(\"a\", 0))], 16),\n        CometPartitioning::RangePartitioning(lex_ordering, 16, Arc::new(row_converter), owned_rows),\n    ] {\n        let compression_codec = CompressionCodec::None;\n        group.bench_function(\n            format!(\"shuffle_writer: end to end (partitioning={partitioning:?})\"),\n            |b| {\n                let ctx = SessionContext::new();\n                let exec =\n                    create_shuffle_writer_exec(compression_codec.clone(), partitioning.clone());\n                b.iter(|| {\n                    let task_ctx = ctx.task_ctx();\n                    let stream = exec.execute(0, task_ctx).unwrap();\n                    let rt = Runtime::new().unwrap();\n                    rt.block_on(collect(stream)).unwrap();\n                });\n            },\n        );\n    }\n}\n\nfn create_shuffle_writer_exec(\n    compression_codec: CompressionCodec,\n    partitioning: CometPartitioning,\n) -> ShuffleWriterExec {\n    let batches = create_batches(8192, 10);\n    let schema = batches[0].schema();\n    let partitions = &[batches];\n    ShuffleWriterExec::try_new(\n        Arc::new(DataSourceExec::new(Arc::new(\n            MemorySourceConfig::try_new(partitions, Arc::clone(&schema), None).unwrap(),\n        ))),\n        partitioning,\n        compression_codec,\n        \"/tmp/data.out\".to_string(),\n        \"/tmp/index.out\".to_string(),\n        false,\n        1024 * 1024,\n    )\n    .unwrap()\n}\n\nfn create_batches(size: usize, count: usize) -> Vec<RecordBatch> {\n    let batch = create_batch(size, true);\n    let mut batches = Vec::new();\n    for _ in 0..count {\n        batches.push(batch.clone());\n    }\n    batches\n}\n\nfn create_batch(num_rows: usize, allow_nulls: bool) -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![\n        Field::new(\"c0\", DataType::Int32, true),\n        Field::new(\"c1\", DataType::Utf8, true),\n        Field::new(\"c2\", DataType::Date32, true),\n        Field::new(\"c3\", DataType::Decimal128(11, 2), true),\n    ]));\n    let mut a = Int32Builder::new();\n    let mut b = StringBuilder::new();\n    let mut c = Date32Builder::new();\n    let mut d = 
Decimal128Builder::new()\n        .with_precision_and_scale(11, 2)\n        .unwrap();\n    for i in 0..num_rows {\n        a.append_value(i as i32);\n        c.append_value(i as i32);\n        d.append_value((i * 1000000) as i128);\n        if allow_nulls && i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(format!(\"this is string number {i}\"));\n        }\n    }\n    let a = a.finish();\n    let b = b.finish();\n    let c = c.finish();\n    let d = d.finish();\n    RecordBatch::try_new(\n        schema.clone(),\n        vec![Arc::new(a), Arc::new(b), Arc::new(c), Arc::new(d)],\n    )\n    .unwrap()\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
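The hard-coded `bounds_ints` in this benchmark assume 8192 evenly spaced row values split across 16 range partitions, so boundaries fall every 8192 / 16 = 512 values. If those parameters change, equivalent bounds could be derived with a small helper like the following (illustrative only; the benchmark intentionally keeps the literals):

```rust
// Evenly spaced range-partition boundaries for values 0..num_rows.
fn range_bounds(num_rows: i32, num_partitions: i32) -> Vec<i32> {
    let step = num_rows / num_partitions; // 8192 / 16 = 512
    (1..num_partitions).map(|i| i * step).collect()
}

// range_bounds(8192, 16) == [512, 1024, ..., 7680]
```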
  {
    "path": "native/shuffle/src/bin/shuffle_bench.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Standalone shuffle benchmark tool for profiling Comet shuffle write\n//! performance outside of Spark. Streams input directly from Parquet files.\n//!\n//! # Usage\n//!\n//! ```sh\n//! cargo run --release --bin shuffle_bench -- \\\n//!   --input /data/tpch-sf100/lineitem/ \\\n//!   --partitions 200 \\\n//!   --codec lz4 \\\n//!   --hash-columns 0,3\n//! ```\n//!\n//! Profile with flamegraph:\n//! ```sh\n//! cargo flamegraph --release --bin shuffle_bench -- \\\n//!   --input /data/tpch-sf100/lineitem/ \\\n//!   --partitions 200 --codec lz4\n//! ```\n\nuse arrow::datatypes::{DataType, SchemaRef};\nuse clap::Parser;\nuse datafusion::execution::config::SessionConfig;\nuse datafusion::execution::runtime_env::RuntimeEnvBuilder;\nuse datafusion::physical_expr::expressions::Column;\nuse datafusion::physical_plan::coalesce_partitions::CoalescePartitionsExec;\nuse datafusion::physical_plan::common::collect;\nuse datafusion::physical_plan::metrics::{MetricValue, MetricsSet};\nuse datafusion::physical_plan::ExecutionPlan;\nuse datafusion::prelude::{ParquetReadOptions, SessionContext};\nuse datafusion_comet_shuffle::{CometPartitioning, CompressionCodec, ShuffleWriterExec};\nuse parquet::arrow::arrow_reader::ParquetRecordBatchReaderBuilder;\nuse std::fs;\nuse std::path::{Path, PathBuf};\nuse std::sync::Arc;\nuse std::time::Instant;\n\n#[derive(Parser, Debug)]\n#[command(\n    name = \"shuffle_bench\",\n    about = \"Standalone benchmark for Comet shuffle write performance\"\n)]\nstruct Args {\n    /// Path to input Parquet file or directory of Parquet files\n    #[arg(long)]\n    input: PathBuf,\n\n    /// Batch size for reading Parquet data\n    #[arg(long, default_value_t = 8192)]\n    batch_size: usize,\n\n    /// Number of output shuffle partitions\n    #[arg(long, default_value_t = 200)]\n    partitions: usize,\n\n    /// Partitioning scheme: hash, single, round-robin\n    #[arg(long, default_value = \"hash\")]\n    partitioning: String,\n\n    /// Column indices to hash on (comma-separated, e.g. 
\"0,3\")\n    #[arg(long, default_value = \"0\")]\n    hash_columns: String,\n\n    /// Compression codec: none, lz4, zstd, snappy\n    #[arg(long, default_value = \"lz4\")]\n    codec: String,\n\n    /// Zstd compression level (1-22)\n    #[arg(long, default_value_t = 1)]\n    zstd_level: i32,\n\n    /// Memory limit in bytes (triggers spilling when exceeded)\n    #[arg(long)]\n    memory_limit: Option<usize>,\n\n    /// Number of iterations to run\n    #[arg(long, default_value_t = 1)]\n    iterations: usize,\n\n    /// Number of warmup iterations before timing\n    #[arg(long, default_value_t = 0)]\n    warmup: usize,\n\n    /// Output directory for shuffle data/index files\n    #[arg(long, default_value = \"/tmp/comet_shuffle_bench\")]\n    output_dir: PathBuf,\n\n    /// Write buffer size in bytes\n    #[arg(long, default_value_t = 1048576)]\n    write_buffer_size: usize,\n\n    /// Limit rows processed per iteration (0 = no limit)\n    #[arg(long, default_value_t = 0)]\n    limit: usize,\n\n    /// Number of concurrent shuffle tasks to simulate executor parallelism.\n    /// Each task reads the same input and writes to its own output files.\n    #[arg(long, default_value_t = 1)]\n    concurrent_tasks: usize,\n}\n\nfn main() {\n    let args = Args::parse();\n\n    // Create output directory\n    fs::create_dir_all(&args.output_dir).expect(\"Failed to create output directory\");\n    let data_file = args.output_dir.join(\"data.out\");\n    let index_file = args.output_dir.join(\"index.out\");\n\n    let (schema, total_rows) = read_parquet_metadata(&args.input, args.limit);\n\n    let codec = parse_codec(&args.codec, args.zstd_level);\n    let hash_col_indices = parse_hash_columns(&args.hash_columns);\n\n    println!(\"=== Shuffle Benchmark ===\");\n    println!(\"Input:          {}\", args.input.display());\n    println!(\n        \"Schema:         {} columns ({})\",\n        schema.fields().len(),\n        describe_schema(&schema)\n    );\n    println!(\"Total rows:     {}\", format_number(total_rows as usize));\n    println!(\"Batch size:     {}\", format_number(args.batch_size));\n    println!(\"Partitioning:   {}\", args.partitioning);\n    println!(\"Partitions:     {}\", args.partitions);\n    println!(\"Codec:          {:?}\", codec);\n    println!(\"Hash columns:   {:?}\", hash_col_indices);\n    if let Some(mem_limit) = args.memory_limit {\n        println!(\"Memory limit:   {}\", format_bytes(mem_limit));\n    }\n    if args.concurrent_tasks > 1 {\n        println!(\"Concurrent:     {} tasks\", args.concurrent_tasks);\n    }\n    println!(\n        \"Iterations:     {} (warmup: {})\",\n        args.iterations, args.warmup\n    );\n    println!();\n\n    let total_iters = args.warmup + args.iterations;\n    let mut write_times = Vec::with_capacity(args.iterations);\n    let mut data_file_sizes = Vec::with_capacity(args.iterations);\n    let mut last_metrics: Option<MetricsSet> = None;\n    let mut last_input_metrics: Option<MetricsSet> = None;\n\n    for i in 0..total_iters {\n        let is_warmup = i < args.warmup;\n        let label = if is_warmup {\n            format!(\"warmup {}/{}\", i + 1, args.warmup)\n        } else {\n            format!(\"iter {}/{}\", i - args.warmup + 1, args.iterations)\n        };\n\n        let (write_elapsed, metrics, input_metrics) = if args.concurrent_tasks > 1 {\n            let elapsed = run_concurrent_shuffle_writes(\n                &args.input,\n                &schema,\n                &codec,\n                &hash_col_indices,\n     
           &args,\n            );\n            (elapsed, None, None)\n        } else {\n            run_shuffle_write(\n                &args.input,\n                &schema,\n                &codec,\n                &hash_col_indices,\n                &args,\n                data_file.to_str().unwrap(),\n                index_file.to_str().unwrap(),\n            )\n        };\n        let data_size = fs::metadata(&data_file).map(|m| m.len()).unwrap_or(0);\n\n        if !is_warmup {\n            write_times.push(write_elapsed);\n            data_file_sizes.push(data_size);\n            last_metrics = metrics;\n            last_input_metrics = input_metrics;\n        }\n\n        print!(\"  [{label}] write: {:.3}s\", write_elapsed);\n        if args.concurrent_tasks <= 1 {\n            print!(\"  output: {}\", format_bytes(data_size as usize));\n        }\n\n        println!();\n    }\n\n    if args.iterations > 0 {\n        println!();\n        println!(\"=== Results ===\");\n\n        let avg_write = write_times.iter().sum::<f64>() / write_times.len() as f64;\n        let write_throughput_rows = (total_rows as f64 * args.concurrent_tasks as f64) / avg_write;\n\n        println!(\"Write:\");\n        println!(\"  avg time:         {:.3}s\", avg_write);\n        if write_times.len() > 1 {\n            let min = write_times.iter().cloned().fold(f64::INFINITY, f64::min);\n            let max = write_times\n                .iter()\n                .cloned()\n                .fold(f64::NEG_INFINITY, f64::max);\n            println!(\"  min/max:          {:.3}s / {:.3}s\", min, max);\n        }\n        println!(\n            \"  throughput:       {} rows/s (total across {} tasks)\",\n            format_number(write_throughput_rows as usize),\n            args.concurrent_tasks\n        );\n        if args.concurrent_tasks <= 1 {\n            let avg_data_size = data_file_sizes.iter().sum::<u64>() / data_file_sizes.len() as u64;\n            println!(\n                \"  output size:      {}\",\n                format_bytes(avg_data_size as usize)\n            );\n        }\n\n        if let Some(ref metrics) = last_input_metrics {\n            println!();\n            println!(\"Input Metrics (last iteration):\");\n            print_input_metrics(metrics);\n        }\n\n        if let Some(ref metrics) = last_metrics {\n            println!();\n            println!(\"Shuffle Metrics (last iteration):\");\n            print_shuffle_metrics(metrics, avg_write);\n        }\n    }\n\n    let _ = fs::remove_file(&data_file);\n    let _ = fs::remove_file(&index_file);\n}\n\nfn print_shuffle_metrics(metrics: &MetricsSet, total_wall_time_secs: f64) {\n    let get_metric = |name: &str| -> Option<usize> {\n        metrics\n            .iter()\n            .find(|m| m.value().name() == name)\n            .map(|m| m.value().as_usize())\n    };\n\n    let total_ns = (total_wall_time_secs * 1e9) as u64;\n    let fmt_time = |nanos: usize| -> String {\n        let secs = nanos as f64 / 1e9;\n        let pct = if total_ns > 0 {\n            (nanos as f64 / total_ns as f64) * 100.0\n        } else {\n            0.0\n        };\n        format!(\"{:.3}s ({:.1}%)\", secs, pct)\n    };\n\n    if let Some(input_batches) = get_metric(\"input_batches\") {\n        println!(\"  input batches:    {}\", format_number(input_batches));\n    }\n    if let Some(nanos) = get_metric(\"repart_time\") {\n        println!(\"  repart time:      {}\", fmt_time(nanos));\n    }\n    if let Some(nanos) = get_metric(\"encode_time\") 
{\n        println!(\"  encode time:      {}\", fmt_time(nanos));\n    }\n    if let Some(nanos) = get_metric(\"write_time\") {\n        println!(\"  write time:       {}\", fmt_time(nanos));\n    }\n\n    if let Some(spill_count) = get_metric(\"spill_count\") {\n        if spill_count > 0 {\n            println!(\"  spill count:      {}\", format_number(spill_count));\n        }\n    }\n    if let Some(spilled_bytes) = get_metric(\"spilled_bytes\") {\n        if spilled_bytes > 0 {\n            println!(\"  spilled bytes:    {}\", format_bytes(spilled_bytes));\n        }\n    }\n    if let Some(data_size) = get_metric(\"data_size\") {\n        if data_size > 0 {\n            println!(\"  data size:        {}\", format_bytes(data_size));\n        }\n    }\n}\n\nfn print_input_metrics(metrics: &MetricsSet) {\n    let aggregated = metrics.aggregate_by_name();\n    for m in aggregated.iter() {\n        let value = m.value();\n        let name = value.name();\n        let v = value.as_usize();\n        if v == 0 {\n            continue;\n        }\n        // Format time metrics as seconds, everything else as a number\n        // Skip start/end timestamps — not useful in benchmark output\n        if matches!(\n            value,\n            MetricValue::StartTimestamp(_) | MetricValue::EndTimestamp(_)\n        ) {\n            continue;\n        }\n        let is_time = matches!(\n            value,\n            MetricValue::ElapsedCompute(_) | MetricValue::Time { .. }\n        );\n        if is_time {\n            println!(\"  {name}: {:.3}s\", v as f64 / 1e9);\n        } else if name.contains(\"bytes\") || name.contains(\"size\") {\n            println!(\"  {name}: {}\", format_bytes(v));\n        } else {\n            println!(\"  {name}: {}\", format_number(v));\n        }\n    }\n}\n\n/// Read schema and total row count from Parquet metadata without loading any data.\nfn read_parquet_metadata(path: &Path, limit: usize) -> (SchemaRef, u64) {\n    let paths = collect_parquet_paths(path);\n    let mut schema = None;\n    let mut total_rows = 0u64;\n\n    for file_path in &paths {\n        let file = fs::File::open(file_path)\n            .unwrap_or_else(|e| panic!(\"Failed to open {}: {}\", file_path.display(), e));\n        let builder = ParquetRecordBatchReaderBuilder::try_new(file).unwrap_or_else(|e| {\n            panic!(\n                \"Failed to read Parquet metadata from {}: {}\",\n                file_path.display(),\n                e\n            )\n        });\n        if schema.is_none() {\n            schema = Some(Arc::clone(builder.schema()));\n        }\n        total_rows += builder.metadata().file_metadata().num_rows() as u64;\n        if limit > 0 && total_rows >= limit as u64 {\n            total_rows = total_rows.min(limit as u64);\n            break;\n        }\n    }\n\n    (schema.expect(\"No parquet files found\"), total_rows)\n}\n\nfn collect_parquet_paths(path: &Path) -> Vec<PathBuf> {\n    if path.is_dir() {\n        let mut files: Vec<PathBuf> = fs::read_dir(path)\n            .unwrap_or_else(|e| panic!(\"Failed to read directory {}: {}\", path.display(), e))\n            .filter_map(|entry| {\n                let p = entry.ok()?.path();\n                if p.extension().and_then(|e| e.to_str()) == Some(\"parquet\") {\n                    Some(p)\n                } else {\n                    None\n                }\n            })\n            .collect();\n        files.sort();\n        if files.is_empty() {\n            panic!(\"No .parquet files found in 
{}\", path.display());\n        }\n        files\n    } else {\n        vec![path.to_path_buf()]\n    }\n}\n\nfn run_shuffle_write(\n    input_path: &Path,\n    schema: &SchemaRef,\n    codec: &CompressionCodec,\n    hash_col_indices: &[usize],\n    args: &Args,\n    data_file: &str,\n    index_file: &str,\n) -> (f64, Option<MetricsSet>, Option<MetricsSet>) {\n    let partitioning = build_partitioning(\n        &args.partitioning,\n        args.partitions,\n        hash_col_indices,\n        schema,\n    );\n\n    let rt = tokio::runtime::Runtime::new().unwrap();\n    rt.block_on(async {\n        let start = Instant::now();\n        let (shuffle_metrics, input_metrics) = execute_shuffle_write(\n            input_path.to_str().unwrap(),\n            codec.clone(),\n            partitioning,\n            args.batch_size,\n            args.memory_limit,\n            args.write_buffer_size,\n            args.limit,\n            data_file.to_string(),\n            index_file.to_string(),\n        )\n        .await\n        .unwrap();\n        (\n            start.elapsed().as_secs_f64(),\n            Some(shuffle_metrics),\n            Some(input_metrics),\n        )\n    })\n}\n\n/// Core async shuffle write logic shared by single and concurrent paths.\n#[allow(clippy::too_many_arguments)]\nasync fn execute_shuffle_write(\n    input_path: &str,\n    codec: CompressionCodec,\n    partitioning: CometPartitioning,\n    batch_size: usize,\n    memory_limit: Option<usize>,\n    write_buffer_size: usize,\n    limit: usize,\n    data_file: String,\n    index_file: String,\n) -> datafusion::common::Result<(MetricsSet, MetricsSet)> {\n    let config = SessionConfig::new().with_batch_size(batch_size);\n    let mut runtime_builder = RuntimeEnvBuilder::new();\n    if let Some(mem_limit) = memory_limit {\n        runtime_builder = runtime_builder.with_memory_limit(mem_limit, 1.0);\n    }\n    let runtime_env = Arc::new(runtime_builder.build().unwrap());\n    let ctx = SessionContext::new_with_config_rt(config, runtime_env);\n\n    let mut df = ctx\n        .read_parquet(input_path, ParquetReadOptions::default())\n        .await\n        .expect(\"Failed to create Parquet scan\");\n    if limit > 0 {\n        df = df.limit(0, Some(limit)).unwrap();\n    }\n\n    let parquet_plan = df\n        .create_physical_plan()\n        .await\n        .expect(\"Failed to create physical plan\");\n\n    let input: Arc<dyn ExecutionPlan> = if parquet_plan\n        .properties()\n        .output_partitioning()\n        .partition_count()\n        > 1\n    {\n        Arc::new(CoalescePartitionsExec::new(parquet_plan))\n    } else {\n        parquet_plan\n    };\n\n    let exec = ShuffleWriterExec::try_new(\n        input,\n        partitioning,\n        codec,\n        data_file,\n        index_file,\n        false,\n        write_buffer_size,\n    )\n    .expect(\"Failed to create ShuffleWriterExec\");\n\n    let task_ctx = ctx.task_ctx();\n    let stream = exec.execute(0, task_ctx).unwrap();\n    collect(stream).await.unwrap();\n\n    // Collect metrics from the input plan (Parquet scan + optional coalesce)\n    let input_metrics = collect_input_metrics(&exec);\n\n    Ok((exec.metrics().unwrap_or_default(), input_metrics))\n}\n\n/// Walk the plan tree and aggregate all metrics from input plans (everything below shuffle writer).\nfn collect_input_metrics(exec: &ShuffleWriterExec) -> MetricsSet {\n    let mut all_metrics = MetricsSet::new();\n    fn gather(plan: &dyn ExecutionPlan, out: &mut MetricsSet) {\n        if let 
Some(metrics) = plan.metrics() {\n            for m in metrics.iter() {\n                out.push(Arc::clone(m));\n            }\n        }\n        for child in plan.children() {\n            gather(child.as_ref(), out);\n        }\n    }\n    for child in exec.children() {\n        gather(child.as_ref(), &mut all_metrics);\n    }\n    all_metrics\n}\n\n/// Run N concurrent shuffle writes to simulate executor parallelism.\n/// Returns wall-clock time for all tasks to complete.\nfn run_concurrent_shuffle_writes(\n    input_path: &Path,\n    schema: &SchemaRef,\n    codec: &CompressionCodec,\n    hash_col_indices: &[usize],\n    args: &Args,\n) -> f64 {\n    let rt = tokio::runtime::Runtime::new().unwrap();\n    rt.block_on(async {\n        let start = Instant::now();\n\n        let mut handles = Vec::with_capacity(args.concurrent_tasks);\n        for task_id in 0..args.concurrent_tasks {\n            let task_dir = args.output_dir.join(format!(\"task_{task_id}\"));\n            fs::create_dir_all(&task_dir).expect(\"Failed to create task output directory\");\n            let data_file = task_dir.join(\"data.out\").to_str().unwrap().to_string();\n            let index_file = task_dir.join(\"index.out\").to_str().unwrap().to_string();\n\n            let input_str = input_path.to_str().unwrap().to_string();\n            let codec = codec.clone();\n            let partitioning = build_partitioning(\n                &args.partitioning,\n                args.partitions,\n                hash_col_indices,\n                schema,\n            );\n            let batch_size = args.batch_size;\n            let memory_limit = args.memory_limit;\n            let write_buffer_size = args.write_buffer_size;\n            let limit = args.limit;\n\n            handles.push(tokio::spawn(async move {\n                execute_shuffle_write(\n                    &input_str,\n                    codec,\n                    partitioning,\n                    batch_size,\n                    memory_limit,\n                    write_buffer_size,\n                    limit,\n                    data_file,\n                    index_file,\n                )\n                .await\n                .unwrap()\n            }));\n        }\n\n        for handle in handles {\n            handle.await.expect(\"Task panicked\");\n        }\n\n        for task_id in 0..args.concurrent_tasks {\n            let task_dir = args.output_dir.join(format!(\"task_{task_id}\"));\n            let _ = fs::remove_dir_all(&task_dir);\n        }\n\n        start.elapsed().as_secs_f64()\n    })\n}\n\nfn build_partitioning(\n    scheme: &str,\n    num_partitions: usize,\n    hash_col_indices: &[usize],\n    schema: &SchemaRef,\n) -> CometPartitioning {\n    match scheme {\n        \"single\" => CometPartitioning::SinglePartition,\n        \"round-robin\" => CometPartitioning::RoundRobin(num_partitions, 0),\n        \"hash\" => {\n            let exprs: Vec<Arc<dyn datafusion::physical_expr::PhysicalExpr>> = hash_col_indices\n                .iter()\n                .map(|&idx| {\n                    let field = schema.field(idx);\n                    Arc::new(Column::new(field.name(), idx))\n                        as Arc<dyn datafusion::physical_expr::PhysicalExpr>\n                })\n                .collect();\n            CometPartitioning::Hash(exprs, num_partitions)\n        }\n        other => {\n            eprintln!(\"Unknown partitioning scheme: {other}. 
Using hash.\");\n            build_partitioning(\"hash\", num_partitions, hash_col_indices, schema)\n        }\n    }\n}\n\nfn parse_codec(codec: &str, zstd_level: i32) -> CompressionCodec {\n    match codec.to_lowercase().as_str() {\n        \"none\" => CompressionCodec::None,\n        \"lz4\" => CompressionCodec::Lz4Frame,\n        \"zstd\" => CompressionCodec::Zstd(zstd_level),\n        \"snappy\" => CompressionCodec::Snappy,\n        other => {\n            eprintln!(\"Unknown codec: {other}. Using zstd.\");\n            CompressionCodec::Zstd(zstd_level)\n        }\n    }\n}\n\nfn parse_hash_columns(s: &str) -> Vec<usize> {\n    s.split(',')\n        .filter(|s| !s.is_empty())\n        .map(|s| s.trim().parse::<usize>().expect(\"Invalid column index\"))\n        .collect()\n}\n\nfn describe_schema(schema: &arrow::datatypes::Schema) -> String {\n    let mut counts = std::collections::HashMap::new();\n    for field in schema.fields() {\n        let type_name = match field.data_type() {\n            DataType::Int8\n            | DataType::Int16\n            | DataType::Int32\n            | DataType::Int64\n            | DataType::UInt8\n            | DataType::UInt16\n            | DataType::UInt32\n            | DataType::UInt64 => \"int\",\n            DataType::Float16 | DataType::Float32 | DataType::Float64 => \"float\",\n            DataType::Utf8 | DataType::LargeUtf8 => \"string\",\n            DataType::Boolean => \"bool\",\n            DataType::Date32 | DataType::Date64 => \"date\",\n            DataType::Decimal128(_, _) | DataType::Decimal256(_, _) => \"decimal\",\n            DataType::Timestamp(_, _) => \"timestamp\",\n            DataType::Binary | DataType::LargeBinary | DataType::FixedSizeBinary(_) => \"binary\",\n            _ => \"other\",\n        };\n        *counts.entry(type_name).or_insert(0) += 1;\n    }\n    let mut parts: Vec<String> = counts\n        .into_iter()\n        .map(|(k, v)| format!(\"{}x{}\", v, k))\n        .collect();\n    parts.sort();\n    parts.join(\", \")\n}\n\nfn format_number(n: usize) -> String {\n    let s = n.to_string();\n    let mut result = String::new();\n    for (i, c) in s.chars().rev().enumerate() {\n        if i > 0 && i % 3 == 0 {\n            result.push(',');\n        }\n        result.push(c);\n    }\n    result.chars().rev().collect()\n}\n\nfn format_bytes(bytes: usize) -> String {\n    if bytes >= 1024 * 1024 * 1024 {\n        format!(\"{:.2} GiB\", bytes as f64 / (1024.0 * 1024.0 * 1024.0))\n    } else if bytes >= 1024 * 1024 {\n        format!(\"{:.2} MiB\", bytes as f64 / (1024.0 * 1024.0))\n    } else if bytes >= 1024 {\n        format!(\"{:.2} KiB\", bytes as f64 / 1024.0)\n    } else {\n        format!(\"{bytes} B\")\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/comet_partitioning.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::row::{OwnedRow, RowConverter};\nuse datafusion::physical_expr::{LexOrdering, PhysicalExpr};\nuse std::sync::Arc;\n\n/// Partitioning scheme for distributing rows across shuffle output partitions.\n#[derive(Debug, Clone)]\npub enum CometPartitioning {\n    SinglePartition,\n    /// Allocate rows based on a hash of one of more expressions and the specified number of\n    /// partitions. Args are 1) the expression to hash on, and 2) the number of partitions.\n    Hash(Vec<Arc<dyn PhysicalExpr>>, usize),\n    /// Allocate rows based on the lexical order of one of more expressions and the specified number of\n    /// partitions. Args are 1) the LexOrdering to use to compare values and split into partitions,\n    /// 2) the number of partitions, 3) the RowConverter used to view incoming RecordBatches as Arrow\n    /// Rows for comparing to 4) OwnedRows that represent the boundaries of each partition, used with\n    /// LexOrdering to bin each value in the RecordBatch to a partition.\n    RangePartitioning(LexOrdering, usize, Arc<RowConverter>, Vec<OwnedRow>),\n    /// Round robin partitioning. Distributes rows across partitions by sorting them by hash\n    /// (computed from columns) and then assigning partitions sequentially. Args are:\n    /// 1) number of partitions, 2) max columns to hash (0 means no limit).\n    RoundRobin(usize, usize),\n}\n\nimpl CometPartitioning {\n    pub fn partition_count(&self) -> usize {\n        use CometPartitioning::*;\n        match self {\n            SinglePartition => 1,\n            Hash(_, n) | RangePartitioning(_, n, _, _) | RoundRobin(n, _) => *n,\n        }\n    }\n}\n\npub(crate) fn pmod(hash: u32, n: usize) -> usize {\n    let hash = hash as i32;\n    let n = n as i32;\n    let r = hash % n;\n    let result = if r < 0 { (r + n) % n } else { r };\n    result as usize\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_pmod() {\n        let i: Vec<u32> = vec![0x99f0149d, 0x9c67b85d, 0xc8008529, 0xa05b5d7b, 0xcd1e64fb];\n        let result = i.into_iter().map(|i| pmod(i, 200)).collect::<Vec<usize>>();\n\n        // expected partition from Spark with n=200\n        let expected = vec![69, 5, 193, 171, 115];\n        assert_eq!(result, expected);\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/ipc.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::RecordBatch;\nuse arrow::ipc::reader::StreamReader;\nuse datafusion::common::DataFusionError;\nuse datafusion::error::Result;\n\npub fn read_ipc_compressed(bytes: &[u8]) -> Result<RecordBatch> {\n    match &bytes[0..4] {\n        b\"SNAP\" => {\n            let decoder = snap::read::FrameDecoder::new(&bytes[4..]);\n            let mut reader =\n                unsafe { StreamReader::try_new(decoder, None)?.with_skip_validation(true) };\n            reader.next().unwrap().map_err(|e| e.into())\n        }\n        b\"LZ4_\" => {\n            let decoder = lz4_flex::frame::FrameDecoder::new(&bytes[4..]);\n            let mut reader =\n                unsafe { StreamReader::try_new(decoder, None)?.with_skip_validation(true) };\n            reader.next().unwrap().map_err(|e| e.into())\n        }\n        b\"ZSTD\" => {\n            let decoder = zstd::Decoder::new(&bytes[4..])?;\n            let mut reader =\n                unsafe { StreamReader::try_new(decoder, None)?.with_skip_validation(true) };\n            reader.next().unwrap().map_err(|e| e.into())\n        }\n        b\"NONE\" => {\n            let mut reader =\n                unsafe { StreamReader::try_new(&bytes[4..], None)?.with_skip_validation(true) };\n            reader.next().unwrap().map_err(|e| e.into())\n        }\n        other => Err(DataFusionError::Execution(format!(\n            \"Failed to decode batch: invalid compression codec: {other:?}\"\n        ))),\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/lib.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub(crate) mod comet_partitioning;\npub mod ipc;\npub(crate) mod metrics;\npub(crate) mod partitioners;\nmod shuffle_writer;\nmod spark_crc32c_hasher;\npub mod spark_unsafe;\npub(crate) mod writers;\n\npub use comet_partitioning::CometPartitioning;\npub use ipc::read_ipc_compressed;\npub use shuffle_writer::ShuffleWriterExec;\npub use writers::{CompressionCodec, ShuffleBlockWriter};\n"
  },
  {
    "path": "native/shuffle/src/metrics.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse datafusion::physical_plan::metrics::{\n    BaselineMetrics, Count, ExecutionPlanMetricsSet, MetricBuilder, Time,\n};\n\n/// Execution metrics for a shuffle partition operation.\npub(crate) struct ShufflePartitionerMetrics {\n    /// metrics\n    pub(crate) baseline: BaselineMetrics,\n\n    /// Time to perform repartitioning\n    pub(crate) repart_time: Time,\n\n    /// Time encoding batches to IPC format\n    pub(crate) encode_time: Time,\n\n    /// Time spent writing to disk. Maps to \"shuffleWriteTime\" in Spark SQL Metrics.\n    pub(crate) write_time: Time,\n\n    /// Number of input batches\n    pub(crate) input_batches: Count,\n\n    /// count of spills during the execution of the operator\n    pub(crate) spill_count: Count,\n\n    /// total spilled bytes during the execution of the operator\n    pub(crate) spilled_bytes: Count,\n\n    /// The original size of spilled data. Different to `spilled_bytes` because of compression.\n    pub(crate) data_size: Count,\n}\n\nimpl ShufflePartitionerMetrics {\n    pub(crate) fn new(metrics: &ExecutionPlanMetricsSet, partition: usize) -> Self {\n        Self {\n            baseline: BaselineMetrics::new(metrics, partition),\n            repart_time: MetricBuilder::new(metrics).subset_time(\"repart_time\", partition),\n            encode_time: MetricBuilder::new(metrics).subset_time(\"encode_time\", partition),\n            write_time: MetricBuilder::new(metrics).subset_time(\"write_time\", partition),\n            input_batches: MetricBuilder::new(metrics).counter(\"input_batches\", partition),\n            spill_count: MetricBuilder::new(metrics).spill_count(partition),\n            spilled_bytes: MetricBuilder::new(metrics).spilled_bytes(partition),\n            data_size: MetricBuilder::new(metrics).counter(\"data_size\", partition),\n        }\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/partitioners/empty_schema.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::metrics::ShufflePartitionerMetrics;\nuse crate::partitioners::ShufflePartitioner;\nuse crate::ShuffleBlockWriter;\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::SchemaRef;\nuse datafusion::common::DataFusionError;\nuse std::fs::OpenOptions;\nuse std::io::{BufWriter, Seek, Write};\nuse tokio::time::Instant;\n\n/// A partitioner for zero-column schemas (e.g. queries where ColumnPruning removes all columns).\n/// This handles shuffles for operations like COUNT(*) that produce empty-schema record batches\n/// but contain a valid row count. Accumulates the total row count and writes a single\n/// zero-column IPC batch to partition 0. All other partitions get empty entries in the index file.\npub(crate) struct EmptySchemaShufflePartitioner {\n    output_data_file: String,\n    output_index_file: String,\n    schema: SchemaRef,\n    shuffle_block_writer: ShuffleBlockWriter,\n    num_output_partitions: usize,\n    total_rows: usize,\n    metrics: ShufflePartitionerMetrics,\n}\n\nimpl EmptySchemaShufflePartitioner {\n    pub(crate) fn try_new(\n        output_data_file: String,\n        output_index_file: String,\n        schema: SchemaRef,\n        num_output_partitions: usize,\n        metrics: ShufflePartitionerMetrics,\n        codec: crate::CompressionCodec,\n    ) -> datafusion::common::Result<Self> {\n        debug_assert!(\n            schema.fields().is_empty(),\n            \"EmptySchemaShufflePartitioner requires a zero-column schema\"\n        );\n        let shuffle_block_writer = ShuffleBlockWriter::try_new(schema.as_ref(), codec)?;\n        Ok(Self {\n            output_data_file,\n            output_index_file,\n            schema,\n            shuffle_block_writer,\n            num_output_partitions,\n            total_rows: 0,\n            metrics,\n        })\n    }\n}\n\n#[async_trait::async_trait]\nimpl ShufflePartitioner for EmptySchemaShufflePartitioner {\n    async fn insert_batch(&mut self, batch: RecordBatch) -> datafusion::common::Result<()> {\n        let start_time = Instant::now();\n        let num_rows = batch.num_rows();\n        if num_rows > 0 {\n            self.total_rows += num_rows;\n            self.metrics.baseline.record_output(num_rows);\n        }\n        self.metrics.input_batches.add(1);\n        self.metrics\n            .baseline\n            .elapsed_compute()\n            .add_duration(start_time.elapsed());\n        Ok(())\n    }\n\n    fn shuffle_write(&mut self) -> datafusion::common::Result<()> {\n        let start_time = Instant::now();\n\n        let output_data = OpenOptions::new()\n            .write(true)\n            .create(true)\n            .truncate(true)\n            
.open(&self.output_data_file)\n            .map_err(|e| DataFusionError::Execution(format!(\"shuffle write error: {e:?}\")))?;\n        let mut output_data = BufWriter::new(output_data);\n\n        // Write a single zero-column batch with the accumulated row count to partition 0\n        if self.total_rows > 0 {\n            let batch = RecordBatch::try_new_with_options(\n                self.schema.clone(),\n                vec![],\n                &arrow::array::RecordBatchOptions::new().with_row_count(Some(self.total_rows)),\n            )?;\n            self.shuffle_block_writer.write_batch(\n                &batch,\n                &mut output_data,\n                &self.metrics.encode_time,\n            )?;\n        }\n\n        let mut write_timer = self.metrics.write_time.timer();\n        output_data.flush()?;\n        let data_file_length = output_data.stream_position()?;\n\n        // Write index file: partition 0 spans [0, data_file_length), all others are empty\n        let index_file = OpenOptions::new()\n            .write(true)\n            .create(true)\n            .truncate(true)\n            .open(&self.output_index_file)\n            .map_err(|e| DataFusionError::Execution(format!(\"shuffle write error: {e:?}\")))?;\n        let mut index_writer = BufWriter::new(index_file);\n        index_writer.write_all(&0i64.to_le_bytes())?;\n        for _ in 0..self.num_output_partitions {\n            index_writer.write_all(&(data_file_length as i64).to_le_bytes())?;\n        }\n        index_writer.flush()?;\n        write_timer.stop();\n\n        self.metrics\n            .baseline\n            .elapsed_compute()\n            .add_duration(start_time.elapsed());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/partitioners/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod empty_schema;\nmod multi_partition;\nmod partitioned_batch_iterator;\nmod single_partition;\nmod traits;\n\npub(crate) use empty_schema::EmptySchemaShufflePartitioner;\npub(crate) use multi_partition::MultiPartitionShuffleRepartitioner;\npub(crate) use partitioned_batch_iterator::PartitionedBatchIterator;\npub(crate) use single_partition::SinglePartitionShufflePartitioner;\npub(crate) use traits::ShufflePartitioner;\n"
  },
  {
    "path": "native/shuffle/src/partitioners/multi_partition.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::metrics::ShufflePartitionerMetrics;\nuse crate::partitioners::partitioned_batch_iterator::{\n    PartitionedBatchIterator, PartitionedBatchesProducer,\n};\nuse crate::partitioners::ShufflePartitioner;\nuse crate::writers::{BufBatchWriter, PartitionWriter};\nuse crate::{comet_partitioning, CometPartitioning, CompressionCodec, ShuffleBlockWriter};\nuse arrow::array::{ArrayRef, RecordBatch};\nuse arrow::datatypes::SchemaRef;\nuse datafusion::common::utils::proxy::VecAllocExt;\nuse datafusion::common::DataFusionError;\nuse datafusion::execution::memory_pool::{MemoryConsumer, MemoryReservation};\nuse datafusion::execution::runtime_env::RuntimeEnv;\nuse datafusion::physical_plan::metrics::Time;\nuse datafusion_comet_common::tracing::{with_trace, with_trace_async};\nuse datafusion_comet_spark_expr::murmur3::create_murmur3_hashes;\nuse itertools::Itertools;\nuse std::fmt;\nuse std::fmt::{Debug, Formatter};\nuse std::fs::{File, OpenOptions};\nuse std::io::{BufWriter, Seek, Write};\nuse std::sync::Arc;\nuse tokio::time::Instant;\n\n/// Reusable scratch buffers for computing row-to-partition assignments.\n#[derive(Default)]\nstruct ScratchSpace {\n    /// Hashes for each row in the current batch.\n    hashes_buf: Vec<u32>,\n    /// Partition ids for each row in the current batch.\n    partition_ids: Vec<u32>,\n    /// The row indices of the rows in each partition. This array is conceptually divided into\n    /// partitions, where each partition contains the row indices of the rows in that partition.\n    /// The length of this array is the same as the number of rows in the batch.\n    partition_row_indices: Vec<u32>,\n    /// The start indices of partitions in partition_row_indices. partition_starts[K] and\n    /// partition_starts[K + 1] are the start and end indices of partition K in partition_row_indices.\n    /// The length of this array is 1 + the number of partitions.\n    partition_starts: Vec<u32>,\n}\n\nimpl ScratchSpace {\n    fn map_partition_ids_to_starts_and_indices(\n        &mut self,\n        num_output_partitions: usize,\n        num_rows: usize,\n    ) {\n        let partition_ids = &mut self.partition_ids[..num_rows];\n\n        // count each partition size, while leaving the last extra element as 0\n        let partition_counters = &mut self.partition_starts;\n        partition_counters.resize(num_output_partitions + 1, 0);\n        partition_counters.fill(0);\n        partition_ids\n            .iter()\n            .for_each(|partition_id| partition_counters[*partition_id as usize] += 1);\n\n        // accumulate partition counters into partition ends\n        // e.g. 
partition counter: [1, 3, 2, 1, 0] => [1, 4, 6, 7, 7]\n        let partition_ends = partition_counters;\n        let mut accum = 0;\n        partition_ends.iter_mut().for_each(|v| {\n            *v += accum;\n            accum = *v;\n        });\n\n        // calculate partition row indices and partition starts\n        // e.g. partition ids: [3, 1, 1, 1, 2, 2, 0] will produce the following partition_row_indices\n        // and partition_starts arrays:\n        //\n        //  partition_row_indices: [6, 1, 2, 3, 4, 5, 0]\n        //  partition_starts: [0, 1, 4, 6, 7]\n        //\n        // partition_starts conceptually splits partition_row_indices into smaller slices.\n        // Each slice partition_row_indices[partition_starts[K]..partition_starts[K + 1]] contains the\n        // row indices of the input batch that are partitioned into partition K. For example,\n        // partition 0 has the single row index [6], partition 1 has row indices [1, 2, 3], etc.\n        let partition_row_indices = &mut self.partition_row_indices;\n        partition_row_indices.resize(num_rows, 0);\n        for (index, partition_id) in partition_ids.iter().enumerate().rev() {\n            partition_ends[*partition_id as usize] -= 1;\n            let end = partition_ends[*partition_id as usize];\n            partition_row_indices[end as usize] = index as u32;\n        }\n\n        // after this loop, the decremented partition ends have become the partition starts\n    }\n}\n\n/// A partitioner that distributes rows into multiple output partitions using hash, range, or\n/// round-robin partitioning\npub(crate) struct MultiPartitionShuffleRepartitioner {\n    output_data_file: String,\n    output_index_file: String,\n    buffered_batches: Vec<RecordBatch>,\n    partition_indices: Vec<Vec<(u32, u32)>>,\n    partition_writers: Vec<PartitionWriter>,\n    shuffle_block_writer: ShuffleBlockWriter,\n    /// Partitioning scheme to use\n    partitioning: CometPartitioning,\n    runtime: Arc<RuntimeEnv>,\n    metrics: ShufflePartitionerMetrics,\n    /// Reused scratch space for computing partition indices\n    scratch: ScratchSpace,\n    /// The configured batch size\n    batch_size: usize,\n    /// Reservation for repartitioning\n    reservation: MemoryReservation,\n    tracing_enabled: bool,\n    /// Size of the write buffer in bytes\n    write_buffer_size: usize,\n}\n\nimpl MultiPartitionShuffleRepartitioner {\n    #[allow(clippy::too_many_arguments)]\n    pub(crate) fn try_new(\n        partition: usize,\n        output_data_file: String,\n        output_index_file: String,\n        schema: SchemaRef,\n        partitioning: CometPartitioning,\n        metrics: ShufflePartitionerMetrics,\n        runtime: Arc<RuntimeEnv>,\n        batch_size: usize,\n        codec: CompressionCodec,\n        tracing_enabled: bool,\n        write_buffer_size: usize,\n    ) -> datafusion::common::Result<Self> {\n        let num_output_partitions = partitioning.partition_count();\n        assert_ne!(\n            num_output_partitions, 1,\n            \"Use SinglePartitionShufflePartitioner for 1 output partition.\"\n        );\n\n        // Vectors in the scratch space will be filled with valid values before being used; this\n        // initialization simply sizes the vectors appropriately.\n        // The initial values are not used.\n        let scratch = ScratchSpace {\n            hashes_buf: match partitioning {\n                // Allocate hashes_buf for hash and round robin partitioning.\n                // Round robin hashes all columns to achieve even, deterministic 
distribution.\n                CometPartitioning::Hash(_, _) | CometPartitioning::RoundRobin(_, _) => {\n                    vec![0; batch_size]\n                }\n                _ => vec![],\n            },\n            partition_ids: vec![0; batch_size],\n            partition_row_indices: vec![0; batch_size],\n            partition_starts: vec![0; num_output_partitions + 1],\n        };\n\n        let shuffle_block_writer = ShuffleBlockWriter::try_new(schema.as_ref(), codec.clone())?;\n\n        let partition_writers = (0..num_output_partitions)\n            .map(|_| PartitionWriter::try_new(shuffle_block_writer.clone()))\n            .collect::<datafusion::common::Result<Vec<_>>>()?;\n\n        let reservation = MemoryConsumer::new(format!(\"ShuffleRepartitioner[{partition}]\"))\n            .with_can_spill(true)\n            .register(&runtime.memory_pool);\n\n        Ok(Self {\n            output_data_file,\n            output_index_file,\n            buffered_batches: vec![],\n            partition_indices: vec![vec![]; num_output_partitions],\n            partition_writers,\n            shuffle_block_writer,\n            partitioning,\n            runtime,\n            metrics,\n            scratch,\n            batch_size,\n            reservation,\n            tracing_enabled,\n            write_buffer_size,\n        })\n    }\n\n    /// Shuffles rows in the input batch into the corresponding partition buffers.\n    /// This function first calculates hashes for the rows and then gathers the rows of each\n    /// partition into a record batch that is appended to that partition's buffer.\n    /// This should not be called directly. Use `insert_batch` instead.\n    async fn partitioning_batch(&mut self, input: RecordBatch) -> datafusion::common::Result<()> {\n        if input.num_rows() == 0 {\n            // skip empty batch\n            return Ok(());\n        }\n\n        if input.num_rows() > self.batch_size {\n            return Err(DataFusionError::Internal(\n                \"Input batch size exceeds configured batch size. 
Call `insert_batch` instead.\"\n                    .to_string(),\n            ));\n        }\n\n        // Update data size metric\n        self.metrics.data_size.add(input.get_array_memory_size());\n\n        // NOTE: in shuffle writer exec, the output_rows metric represents the\n        // number of rows that are written to the output data file.\n        self.metrics.baseline.record_output(input.num_rows());\n\n        match &self.partitioning {\n            CometPartitioning::Hash(exprs, num_output_partitions) => {\n                let mut scratch = std::mem::take(&mut self.scratch);\n                let (partition_starts, partition_row_indices): (&Vec<u32>, &Vec<u32>) = {\n                    let mut timer = self.metrics.repart_time.timer();\n\n                    // Evaluate the partition expressions to get the arrays to partition on.\n                    let arrays = exprs\n                        .iter()\n                        .map(|expr| expr.evaluate(&input)?.into_array(input.num_rows()))\n                        .collect::<datafusion::common::Result<Vec<_>>>()?;\n\n                    let num_rows = arrays[0].len();\n\n                    // Use identical seed as Spark hash partitioning.\n                    let hashes_buf = &mut scratch.hashes_buf[..num_rows];\n                    hashes_buf.fill(42_u32);\n\n                    // Generate partition ids for every row.\n                    {\n                        // Hash arrays and compute partition ids based on number of partitions.\n                        let partition_ids = &mut scratch.partition_ids[..num_rows];\n                        create_murmur3_hashes(&arrays, hashes_buf)?\n                            .iter()\n                            .enumerate()\n                            .for_each(|(idx, hash)| {\n                                partition_ids[idx] =\n                                    comet_partitioning::pmod(*hash, *num_output_partitions) as u32;\n                            });\n                    }\n\n                    // We now have partition ids for every input row; map them to partition starts\n                    // and partition indices to eventually write these rows to partition buffers.\n                    scratch\n                        .map_partition_ids_to_starts_and_indices(*num_output_partitions, num_rows);\n\n                    timer.stop();\n                    Ok::<(&Vec<u32>, &Vec<u32>), DataFusionError>((\n                        &scratch.partition_starts,\n                        &scratch.partition_row_indices,\n                    ))\n                }?;\n\n                self.buffer_partitioned_batch_may_spill(\n                    input,\n                    partition_row_indices,\n                    partition_starts,\n                )\n                .await?;\n                self.scratch = scratch;\n            }\n            CometPartitioning::RangePartitioning(\n                lex_ordering,\n                num_output_partitions,\n                row_converter,\n                bounds,\n            ) => {\n                let mut scratch = std::mem::take(&mut self.scratch);\n                let (partition_starts, partition_row_indices): (&Vec<u32>, &Vec<u32>) = {\n                    let mut timer = self.metrics.repart_time.timer();\n\n                    // Evaluate the partition expressions to get the values to partition on.\n                    let arrays = lex_ordering\n                        .iter()\n                        .map(|expr| expr.expr.evaluate(&input)?.into_array(input.num_rows()))\n                        .collect::<datafusion::common::Result<Vec<_>>>()?;\n\n                    let num_rows = arrays[0].len();\n\n                    // Generate partition ids for every row, first by converting the partition\n                    // arrays to Rows, and then doing binary search for each Row against the\n                    // bounds Rows.\n                    {\n                        let row_batch = row_converter.convert_columns(arrays.as_slice())?;\n                        let partition_ids = &mut scratch.partition_ids[..num_rows];\n\n                        
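// partition_point returns the number of boundary rows that compare <= to this\n                        // row, which is used directly as the partition id: rows below the first\n                        // bound map to partition 0, rows at or above the last bound map to the\n                        // last partition.\n                        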
row_batch.iter().enumerate().for_each(|(row_idx, row)| {\n                            partition_ids[row_idx] = bounds\n                                .as_slice()\n                                .partition_point(|bound| bound.row() <= row)\n                                as u32\n                        });\n                    }\n\n                    // We now have partition ids for every input row; map them to partition starts\n                    // and partition indices to eventually write these rows to partition buffers.\n                    scratch\n                        .map_partition_ids_to_starts_and_indices(*num_output_partitions, num_rows);\n\n                    timer.stop();\n                    Ok::<(&Vec<u32>, &Vec<u32>), DataFusionError>((\n                        &scratch.partition_starts,\n                        &scratch.partition_row_indices,\n                    ))\n                }?;\n\n                self.buffer_partitioned_batch_may_spill(\n                    input,\n                    partition_row_indices,\n                    partition_starts,\n                )\n                .await?;\n                self.scratch = scratch;\n            }\n            CometPartitioning::RoundRobin(num_output_partitions, max_hash_columns) => {\n                // Comet implements \"round robin\" as hash partitioning on columns.\n                // This achieves the same goal as Spark's round robin (even distribution\n                // without semantic grouping) while being deterministic for fault tolerance.\n                //\n                // Note: This produces different partition assignments than Spark's round robin,\n                // which sorts by UnsafeRow binary representation before assigning partitions.\n                // However, both approaches provide even distribution and determinism.\n                let mut scratch = std::mem::take(&mut self.scratch);\n                let (partition_starts, partition_row_indices): (&Vec<u32>, &Vec<u32>) = {\n                    let mut timer = self.metrics.repart_time.timer();\n\n                    let num_rows = input.num_rows();\n\n                    // Collect columns for hashing, respecting max_hash_columns limit\n                    // max_hash_columns of 0 means no limit (hash all columns)\n                    // Negative values are normalized to 0 in the planner\n                    let num_columns_to_hash = if *max_hash_columns == 0 {\n                        input.num_columns()\n                    } else {\n                        (*max_hash_columns).min(input.num_columns())\n                    };\n                    let columns_to_hash: Vec<ArrayRef> = (0..num_columns_to_hash)\n                        .map(|i| Arc::clone(input.column(i)))\n                        .collect();\n\n                    // Use identical seed as Spark hash partitioning.\n                    
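// The hashes buffer is both the per-row seed and the output of\n                    // create_murmur3_hashes, so pre-filling it with 42 matches Spark's\n                    // fixed Murmur3 seed.\n                    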
let hashes_buf = &mut scratch.hashes_buf[..num_rows];\n                    hashes_buf.fill(42_u32);\n\n                    // Compute hash for selected columns\n                    create_murmur3_hashes(&columns_to_hash, hashes_buf)?;\n\n                    // Assign partition IDs based on hash (same as hash partitioning)\n                    let partition_ids = &mut scratch.partition_ids[..num_rows];\n                    hashes_buf.iter().enumerate().for_each(|(idx, hash)| {\n                        partition_ids[idx] =\n                            comet_partitioning::pmod(*hash, *num_output_partitions) as u32;\n                    });\n\n                    // We now have partition ids for every input row; map them to partition starts\n                    // and partition indices to eventually write these rows to partition buffers.\n                    scratch\n                        .map_partition_ids_to_starts_and_indices(*num_output_partitions, num_rows);\n\n                    timer.stop();\n                    Ok::<(&Vec<u32>, &Vec<u32>), DataFusionError>((\n                        &scratch.partition_starts,\n                        &scratch.partition_row_indices,\n                    ))\n                }?;\n\n                self.buffer_partitioned_batch_may_spill(\n                    input,\n                    partition_row_indices,\n                    partition_starts,\n                )\n                .await?;\n                self.scratch = scratch;\n            }\n            other => {\n                // this should be unreachable as long as the validation logic\n                // in the constructor is kept up-to-date\n                return Err(DataFusionError::NotImplemented(format!(\n                    \"Unsupported shuffle partitioning scheme {other:?}\"\n                )));\n            }\n        }\n        Ok(())\n    }\n\n    async fn buffer_partitioned_batch_may_spill(\n        &mut self,\n        input: RecordBatch,\n        partition_row_indices: &[u32],\n        partition_starts: &[u32],\n    ) -> datafusion::common::Result<()> {\n        let mut mem_growth: usize = input.get_array_memory_size();\n        let buffered_partition_idx = self.buffered_batches.len() as u32;\n        self.buffered_batches.push(input);\n\n        // partition_starts conceptually slices partition_row_indices into smaller slices;\n        // each slice contains the indices of rows in input that will go into the corresponding\n        // partition. 
The following loop iterates over the slices and puts the row indices into\n        // the indices array of the corresponding partition.\n        for (partition_id, (&start, &end)) in partition_starts\n            .iter()\n            .tuple_windows()\n            .enumerate()\n            .filter(|(_, (start, end))| start < end)\n        {\n            let row_indices = &partition_row_indices[start as usize..end as usize];\n\n            // Put row indices for the current partition into the indices array of that partition.\n            // This indices array will be used for calling interleave_record_batch to produce\n            // shuffled batches.\n            let indices = &mut self.partition_indices[partition_id];\n            let before_size = indices.allocated_size();\n            indices.reserve(row_indices.len());\n            for row_idx in row_indices {\n                indices.push((buffered_partition_idx, *row_idx));\n            }\n            let after_size = indices.allocated_size();\n            mem_growth += after_size.saturating_sub(before_size);\n        }\n\n        if self.reservation.try_grow(mem_growth).is_err() {\n            self.spill()?;\n        }\n\n        Ok(())\n    }\n\n    fn shuffle_write_partition(\n        partition_iter: &mut PartitionedBatchIterator,\n        shuffle_block_writer: &mut ShuffleBlockWriter,\n        output_data: &mut BufWriter<File>,\n        encode_time: &Time,\n        write_time: &Time,\n        write_buffer_size: usize,\n        batch_size: usize,\n    ) -> datafusion::common::Result<()> {\n        let mut buf_batch_writer = BufBatchWriter::new(\n            shuffle_block_writer,\n            output_data,\n            write_buffer_size,\n            batch_size,\n        );\n        for batch in partition_iter {\n            let batch = batch?;\n            buf_batch_writer.write(&batch, encode_time, write_time)?;\n        }\n        buf_batch_writer.flush(encode_time, write_time)?;\n        Ok(())\n    }\n\n    fn used(&self) -> usize {\n        self.reservation.size()\n    }\n\n    fn spilled_bytes(&self) -> usize {\n        self.metrics.spilled_bytes.value()\n    }\n\n    fn spill_count(&self) -> usize {\n        self.metrics.spill_count.value()\n    }\n\n    fn data_size(&self) -> usize {\n        self.metrics.data_size.value()\n    }\n\n    /// This function transfers the ownership of the buffered batches and partition indices from the\n    /// ShuffleRepartitioner to a new PartitionedBatchesProducer. 
The returned producer\n    /// can be used to produce the shuffled batches.\n    fn partitioned_batches(&mut self) -> PartitionedBatchesProducer {\n        let num_output_partitions = self.partition_indices.len();\n        let buffered_batches = std::mem::take(&mut self.buffered_batches);\n        let indices = std::mem::replace(\n            &mut self.partition_indices,\n            vec![vec![]; num_output_partitions],\n        );\n        PartitionedBatchesProducer::new(buffered_batches, indices, self.batch_size)\n    }\n\n    pub(crate) fn spill(&mut self) -> datafusion::common::Result<()> {\n        log::info!(\n            \"ShuffleRepartitioner spilling shuffle data of {} to disk while inserting ({} time(s) so far)\",\n            self.used(),\n            self.spill_count()\n        );\n\n        // nothing to spill unless we are actually holding some buffered batches\n        if self.buffered_batches.is_empty() {\n            return Ok(());\n        }\n\n        with_trace(\"shuffle_spill\", self.tracing_enabled, || {\n            let num_output_partitions = self.partition_writers.len();\n            let mut partitioned_batches = self.partitioned_batches();\n            let mut spilled_bytes = 0;\n\n            for partition_id in 0..num_output_partitions {\n                let partition_writer = &mut self.partition_writers[partition_id];\n                let mut iter = partitioned_batches.produce(partition_id);\n                spilled_bytes += partition_writer.spill(\n                    &mut iter,\n                    &self.runtime,\n                    &self.metrics,\n                    self.write_buffer_size,\n                    self.batch_size,\n                )?;\n            }\n\n            self.reservation.free();\n            self.metrics.spill_count.add(1);\n            self.metrics.spilled_bytes.add(spilled_bytes);\n            Ok(())\n        })\n    }\n\n    #[cfg(test)]\n    pub(crate) fn partition_writers(&self) -> &[PartitionWriter] {\n        &self.partition_writers\n    }\n}\n\n#[async_trait::async_trait]\nimpl ShufflePartitioner for MultiPartitionShuffleRepartitioner {\n    /// Shuffles rows in the input batch into the corresponding partition buffers.\n    /// This function slices the input batch according to the configured batch size and then\n    /// shuffles rows into the corresponding partition buffers.\n    async fn insert_batch(&mut self, batch: RecordBatch) -> datafusion::common::Result<()> {\n        with_trace_async(\"shuffle_insert_batch\", self.tracing_enabled, || async {\n            let start_time = Instant::now();\n            let mut start = 0;\n            while start < batch.num_rows() {\n                let end = (start + self.batch_size).min(batch.num_rows());\n                let batch = batch.slice(start, end - start);\n                self.partitioning_batch(batch).await?;\n                start = end;\n            }\n            self.metrics.input_batches.add(1);\n            self.metrics\n                .baseline\n                .elapsed_compute()\n                .add_duration(start_time.elapsed());\n            Ok(())\n        })\n        .await\n    }\n\n    /// Writes buffered shuffled record batches into Arrow IPC bytes.\n    fn shuffle_write(&mut self) -> datafusion::common::Result<()> {\n        with_trace(\"shuffle_write\", self.tracing_enabled, || {\n            let start_time = Instant::now();\n\n            let mut partitioned_batches = 
self.partitioned_batches();\n            let num_output_partitions = self.partition_indices.len();\n            let mut offsets = vec![0; num_output_partitions + 1];\n\n            let data_file = self.output_data_file.clone();\n            let index_file = self.output_index_file.clone();\n\n            let output_data = OpenOptions::new()\n                .write(true)\n                .create(true)\n                .truncate(true)\n                .open(data_file)\n                .map_err(|e| DataFusionError::Execution(format!(\"shuffle write error: {e:?}\")))?;\n\n            let mut output_data = BufWriter::new(output_data);\n\n            #[allow(clippy::needless_range_loop)]\n            for i in 0..num_output_partitions {\n                offsets[i] = output_data.stream_position()?;\n\n                // if we wrote a spill file for this partition then copy the\n                // contents into the shuffle file\n                if let Some(spill_path) = self.partition_writers[i].path() {\n                    // Use raw File handle (not BufReader) so that std::io::copy\n                    // can use copy_file_range/sendfile for zero-copy on Linux.\n                    let mut spill_file = File::open(spill_path)?;\n                    let mut write_timer = self.metrics.write_time.timer();\n                    std::io::copy(&mut spill_file, &mut output_data)?;\n                    write_timer.stop();\n                }\n\n                // Write in memory batches to output data file\n                let mut partition_iter = partitioned_batches.produce(i);\n                Self::shuffle_write_partition(\n                    &mut partition_iter,\n                    &mut self.shuffle_block_writer,\n                    &mut output_data,\n                    &self.metrics.encode_time,\n                    &self.metrics.write_time,\n                    self.write_buffer_size,\n                    self.batch_size,\n                )?;\n            }\n\n            let mut write_timer = self.metrics.write_time.timer();\n            output_data.flush()?;\n            write_timer.stop();\n\n            // add one extra offset at last to ease partition length computation\n            offsets[num_output_partitions] = output_data.stream_position()?;\n\n            let mut write_timer = self.metrics.write_time.timer();\n            let mut output_index =\n                BufWriter::new(File::create(index_file).map_err(|e| {\n                    DataFusionError::Execution(format!(\"shuffle write error: {e:?}\"))\n                })?);\n            for offset in offsets {\n                output_index.write_all(&(offset as i64).to_le_bytes()[..])?;\n            }\n            output_index.flush()?;\n            write_timer.stop();\n\n            self.metrics\n                .baseline\n                .elapsed_compute()\n                .add_duration(start_time.elapsed());\n\n            Ok(())\n        })\n    }\n}\n\nimpl Debug for MultiPartitionShuffleRepartitioner {\n    fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {\n        f.debug_struct(\"ShuffleRepartitioner\")\n            .field(\"memory_used\", &self.used())\n            .field(\"spilled_bytes\", &self.spilled_bytes())\n            .field(\"spilled_count\", &self.spill_count())\n            .field(\"data_size\", &self.data_size())\n            .finish()\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/partitioners/partitioned_batch_iterator.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::RecordBatch;\nuse arrow::compute::interleave_record_batch;\nuse datafusion::common::DataFusionError;\n\n/// A helper struct to produce shuffled batches.\n/// This struct takes ownership of the buffered batches and partition indices from the\n/// ShuffleRepartitioner, and provides an iterator over the batches in the specified partitions.\npub(super) struct PartitionedBatchesProducer {\n    buffered_batches: Vec<RecordBatch>,\n    partition_indices: Vec<Vec<(u32, u32)>>,\n    batch_size: usize,\n}\n\nimpl PartitionedBatchesProducer {\n    pub(super) fn new(\n        buffered_batches: Vec<RecordBatch>,\n        indices: Vec<Vec<(u32, u32)>>,\n        batch_size: usize,\n    ) -> Self {\n        Self {\n            partition_indices: indices,\n            buffered_batches,\n            batch_size,\n        }\n    }\n\n    pub(super) fn produce(&mut self, partition_id: usize) -> PartitionedBatchIterator<'_> {\n        PartitionedBatchIterator::new(\n            &self.partition_indices[partition_id],\n            &self.buffered_batches,\n            self.batch_size,\n        )\n    }\n}\n\n/// Iterates over the shuffled record batches belonging to a single output partition.\npub(crate) struct PartitionedBatchIterator<'a> {\n    record_batches: Vec<&'a RecordBatch>,\n    batch_size: usize,\n    indices: Vec<(usize, usize)>,\n    pos: usize,\n}\n\nimpl<'a> PartitionedBatchIterator<'a> {\n    fn new(\n        indices: &'a [(u32, u32)],\n        buffered_batches: &'a [RecordBatch],\n        batch_size: usize,\n    ) -> Self {\n        if indices.is_empty() {\n            // Avoid unnecessary allocations when the partition is empty\n            return Self {\n                record_batches: vec![],\n                batch_size,\n                indices: vec![],\n                pos: 0,\n            };\n        }\n        let record_batches = buffered_batches.iter().collect::<Vec<_>>();\n        let current_indices = indices\n            .iter()\n            .map(|(i_batch, i_row)| (*i_batch as usize, *i_row as usize))\n            .collect::<Vec<_>>();\n        Self {\n            record_batches,\n            batch_size,\n            indices: current_indices,\n            pos: 0,\n        }\n    }\n}\n\nimpl Iterator for PartitionedBatchIterator<'_> {\n    type Item = datafusion::common::Result<RecordBatch>;\n\n    fn next(&mut self) -> Option<Self::Item> {\n        if self.pos >= self.indices.len() {\n            return None;\n        }\n\n        let indices_end = std::cmp::min(self.pos + self.batch_size, self.indices.len());\n        let indices = &self.indices[self.pos..indices_end];\n        match interleave_record_batch(&self.record_batches, indices) {\n  
          Ok(batch) => {\n                self.pos = indices_end;\n                Some(Ok(batch))\n            }\n            Err(e) => Some(Err(DataFusionError::ArrowError(\n                Box::from(e),\n                Some(DataFusionError::get_back_trace()),\n            ))),\n        }\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/partitioners/single_partition.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::metrics::ShufflePartitionerMetrics;\nuse crate::partitioners::ShufflePartitioner;\nuse crate::writers::BufBatchWriter;\nuse crate::{CompressionCodec, ShuffleBlockWriter};\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::SchemaRef;\nuse datafusion::common::DataFusionError;\nuse std::fs::{File, OpenOptions};\nuse std::io::{BufWriter, Write};\nuse tokio::time::Instant;\n\n/// A partitioner that writes all shuffle data to a single file and a single index file\npub(crate) struct SinglePartitionShufflePartitioner {\n    // output_data_file: File,\n    output_data_writer: BufBatchWriter<ShuffleBlockWriter, File>,\n    output_index_path: String,\n    /// Batches that are smaller than the batch size and to be concatenated\n    buffered_batches: Vec<RecordBatch>,\n    /// Number of rows in the concatenating batches\n    num_buffered_rows: usize,\n    /// Metrics for the repartitioner\n    metrics: ShufflePartitionerMetrics,\n    /// The configured batch size\n    batch_size: usize,\n}\n\nimpl SinglePartitionShufflePartitioner {\n    pub(crate) fn try_new(\n        output_data_path: String,\n        output_index_path: String,\n        schema: SchemaRef,\n        metrics: ShufflePartitionerMetrics,\n        batch_size: usize,\n        codec: CompressionCodec,\n        write_buffer_size: usize,\n    ) -> datafusion::common::Result<Self> {\n        let shuffle_block_writer = ShuffleBlockWriter::try_new(schema.as_ref(), codec.clone())?;\n\n        let output_data_file = OpenOptions::new()\n            .write(true)\n            .create(true)\n            .truncate(true)\n            .open(output_data_path)?;\n\n        let output_data_writer = BufBatchWriter::new(\n            shuffle_block_writer,\n            output_data_file,\n            write_buffer_size,\n            batch_size,\n        );\n\n        Ok(Self {\n            output_data_writer,\n            output_index_path,\n            buffered_batches: vec![],\n            num_buffered_rows: 0,\n            metrics,\n            batch_size,\n        })\n    }\n\n    /// Add a batch to the buffer of the partitioner, these buffered batches will be concatenated\n    /// and written to the output data file when the number of rows in the buffer reaches the batch size.\n    fn add_buffered_batch(&mut self, batch: RecordBatch) {\n        self.num_buffered_rows += batch.num_rows();\n        self.buffered_batches.push(batch);\n    }\n\n    /// Consumes buffered batches and return a concatenated batch if successful\n    fn concat_buffered_batches(&mut self) -> datafusion::common::Result<Option<RecordBatch>> {\n        if self.buffered_batches.is_empty() {\n            Ok(None)\n        } else if 
self.buffered_batches.len() == 1 {\n            let batch = self.buffered_batches.remove(0);\n            self.num_buffered_rows = 0;\n            Ok(Some(batch))\n        } else {\n            let schema = &self.buffered_batches[0].schema();\n            match arrow::compute::concat_batches(schema, self.buffered_batches.iter()) {\n                Ok(concatenated) => {\n                    self.buffered_batches.clear();\n                    self.num_buffered_rows = 0;\n                    Ok(Some(concatenated))\n                }\n                Err(e) => Err(DataFusionError::ArrowError(\n                    Box::from(e),\n                    Some(DataFusionError::get_back_trace()),\n                )),\n            }\n        }\n    }\n}\n\n#[async_trait::async_trait]\nimpl ShufflePartitioner for SinglePartitionShufflePartitioner {\n    async fn insert_batch(&mut self, batch: RecordBatch) -> datafusion::common::Result<()> {\n        let start_time = Instant::now();\n        let num_rows = batch.num_rows();\n\n        if num_rows > 0 {\n            self.metrics.data_size.add(batch.get_array_memory_size());\n            self.metrics.baseline.record_output(num_rows);\n\n            if num_rows >= self.batch_size || num_rows + self.num_buffered_rows > self.batch_size {\n                let concatenated_batch = self.concat_buffered_batches()?;\n\n                // Write the concatenated buffered batch\n                if let Some(batch) = concatenated_batch {\n                    self.output_data_writer.write(\n                        &batch,\n                        &self.metrics.encode_time,\n                        &self.metrics.write_time,\n                    )?;\n                }\n\n                if num_rows >= self.batch_size {\n                    // Write the new batch\n                    self.output_data_writer.write(\n                        &batch,\n                        &self.metrics.encode_time,\n                        &self.metrics.write_time,\n                    )?;\n                } else {\n                    // Add the new batch to the buffer\n                    self.add_buffered_batch(batch);\n                }\n            } else {\n                self.add_buffered_batch(batch);\n            }\n        }\n\n        self.metrics.input_batches.add(1);\n        self.metrics\n            .baseline\n            .elapsed_compute()\n            .add_duration(start_time.elapsed());\n        Ok(())\n    }\n\n    fn shuffle_write(&mut self) -> datafusion::common::Result<()> {\n        let start_time = Instant::now();\n        let concatenated_batch = self.concat_buffered_batches()?;\n\n        // Write the concatenated buffered batch\n        if let Some(batch) = concatenated_batch {\n            self.output_data_writer.write(\n                &batch,\n                &self.metrics.encode_time,\n                &self.metrics.write_time,\n            )?;\n        }\n        self.output_data_writer\n            .flush(&self.metrics.encode_time, &self.metrics.write_time)?;\n\n        // Write index file. 
It should only contain 2 entries: 0 and the total number of bytes written\n        let index_file = OpenOptions::new()\n            .write(true)\n            .create(true)\n            .truncate(true)\n            .open(self.output_index_path.clone())\n            .map_err(|e| DataFusionError::Execution(format!(\"shuffle write error: {e:?}\")))?;\n        let mut index_buf_writer = BufWriter::new(index_file);\n        let data_file_length = self.output_data_writer.writer_stream_position()?;\n        for offset in [0, data_file_length] {\n            index_buf_writer.write_all(&(offset as i64).to_le_bytes()[..])?;\n        }\n        index_buf_writer.flush()?;\n\n        self.metrics\n            .baseline\n            .elapsed_compute()\n            .add_duration(start_time.elapsed());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/partitioners/traits.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::record_batch::RecordBatch;\nuse datafusion::common::Result;\n\n#[async_trait::async_trait]\npub(crate) trait ShufflePartitioner: Send + Sync {\n    /// Insert a batch into the partitioner\n    async fn insert_batch(&mut self, batch: RecordBatch) -> Result<()>;\n    /// Write shuffle data and shuffle index file to disk\n    fn shuffle_write(&mut self) -> Result<()>;\n}\n"
  },
  {
    "path": "native/shuffle/src/shuffle_writer.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Defines the External shuffle repartition plan.\n\nuse crate::metrics::ShufflePartitionerMetrics;\nuse crate::partitioners::{\n    EmptySchemaShufflePartitioner, MultiPartitionShuffleRepartitioner, ShufflePartitioner,\n    SinglePartitionShufflePartitioner,\n};\nuse crate::{CometPartitioning, CompressionCodec};\nuse async_trait::async_trait;\nuse datafusion::common::exec_datafusion_err;\nuse datafusion::physical_expr::{EquivalenceProperties, Partitioning};\nuse datafusion::physical_plan::execution_plan::{Boundedness, EmissionType};\nuse datafusion::physical_plan::EmptyRecordBatchStream;\nuse datafusion::{\n    arrow::{datatypes::SchemaRef, error::ArrowError},\n    error::Result,\n    execution::context::TaskContext,\n    physical_plan::{\n        metrics::{ExecutionPlanMetricsSet, MetricsSet},\n        stream::RecordBatchStreamAdapter,\n        DisplayAs, DisplayFormatType, ExecutionPlan, PlanProperties, SendableRecordBatchStream,\n    },\n};\nuse futures::{StreamExt, TryFutureExt, TryStreamExt};\nuse std::{\n    any::Any,\n    fmt,\n    fmt::{Debug, Formatter},\n    sync::Arc,\n};\n\n/// The shuffle writer operator maps each input partition to M output partitions based on a\n/// partitioning scheme. 
No guarantees are made about the order of the resulting partitions.\n#[derive(Debug)]\npub struct ShuffleWriterExec {\n    /// Input execution plan\n    input: Arc<dyn ExecutionPlan>,\n    /// Partitioning scheme to use\n    partitioning: CometPartitioning,\n    /// Output data file path\n    output_data_file: String,\n    /// Output index file path\n    output_index_file: String,\n    /// Metrics\n    metrics: ExecutionPlanMetricsSet,\n    /// Cache for expensive-to-compute plan properties\n    cache: Arc<PlanProperties>,\n    /// The compression codec to use when compressing shuffle blocks\n    codec: CompressionCodec,\n    tracing_enabled: bool,\n    /// Size of the write buffer in bytes\n    write_buffer_size: usize,\n}\n\nimpl ShuffleWriterExec {\n    /// Create a new ShuffleWriterExec\n    #[allow(clippy::too_many_arguments)]\n    pub fn try_new(\n        input: Arc<dyn ExecutionPlan>,\n        partitioning: CometPartitioning,\n        codec: CompressionCodec,\n        output_data_file: String,\n        output_index_file: String,\n        tracing_enabled: bool,\n        write_buffer_size: usize,\n    ) -> Result<Self> {\n        let cache = Arc::new(PlanProperties::new(\n            EquivalenceProperties::new(Arc::clone(&input.schema())),\n            Partitioning::UnknownPartitioning(1),\n            EmissionType::Final,\n            Boundedness::Bounded,\n        ));\n\n        Ok(ShuffleWriterExec {\n            input,\n            partitioning,\n            metrics: ExecutionPlanMetricsSet::new(),\n            output_data_file,\n            output_index_file,\n            cache,\n            codec,\n            tracing_enabled,\n            write_buffer_size,\n        })\n    }\n}\n\nimpl DisplayAs for ShuffleWriterExec {\n    fn fmt_as(&self, t: DisplayFormatType, f: &mut Formatter) -> fmt::Result {\n        match t {\n            DisplayFormatType::Default | DisplayFormatType::Verbose => {\n                write!(\n                    f,\n                    \"ShuffleWriterExec: partitioning={:?}, compression={:?}\",\n                    self.partitioning, self.codec\n                )\n            }\n            DisplayFormatType::TreeRender => unimplemented!(),\n        }\n    }\n}\n\n#[async_trait]\nimpl ExecutionPlan for ShuffleWriterExec {\n    /// Return a reference to Any that can be used for downcasting\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"ShuffleWriterExec\"\n    }\n\n    fn metrics(&self) -> Option<MetricsSet> {\n        Some(self.metrics.clone_inner())\n    }\n\n    fn properties(&self) -> &Arc<PlanProperties> {\n        &self.cache\n    }\n\n    /// Get the schema for this execution plan\n    fn schema(&self) -> SchemaRef {\n        self.input.schema()\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn ExecutionPlan>> {\n        vec![&self.input]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn ExecutionPlan>>,\n    ) -> Result<Arc<dyn ExecutionPlan>> {\n        match children.len() {\n            1 => Ok(Arc::new(ShuffleWriterExec::try_new(\n                Arc::clone(&children[0]),\n                self.partitioning.clone(),\n                self.codec.clone(),\n                self.output_data_file.clone(),\n                self.output_index_file.clone(),\n                self.tracing_enabled,\n                self.write_buffer_size,\n            )?)),\n            _ => panic!(\"ShuffleWriterExec wrong number of children\"),\n        }\n    }\n\n    fn 
execute(\n        &self,\n        partition: usize,\n        context: Arc<TaskContext>,\n    ) -> Result<SendableRecordBatchStream> {\n        let input = self.input.execute(partition, Arc::clone(&context))?;\n        let metrics = ShufflePartitionerMetrics::new(&self.metrics, 0);\n\n        Ok(Box::pin(RecordBatchStreamAdapter::new(\n            self.schema(),\n            futures::stream::once(\n                external_shuffle(\n                    input,\n                    partition,\n                    self.output_data_file.clone(),\n                    self.output_index_file.clone(),\n                    self.partitioning.clone(),\n                    metrics,\n                    context,\n                    self.codec.clone(),\n                    self.tracing_enabled,\n                    self.write_buffer_size,\n                )\n                .map_err(|e| ArrowError::ExternalError(Box::new(e))),\n            )\n            .try_flatten(),\n        )))\n    }\n}\n\n#[allow(clippy::too_many_arguments)]\nasync fn external_shuffle(\n    mut input: SendableRecordBatchStream,\n    partition: usize,\n    output_data_file: String,\n    output_index_file: String,\n    partitioning: CometPartitioning,\n    metrics: ShufflePartitionerMetrics,\n    context: Arc<TaskContext>,\n    codec: CompressionCodec,\n    tracing_enabled: bool,\n    write_buffer_size: usize,\n) -> Result<SendableRecordBatchStream> {\n    let schema = input.schema();\n\n    let mut repartitioner: Box<dyn ShufflePartitioner> = match &partitioning {\n        _ if schema.fields().is_empty() => {\n            log::debug!(\"found empty schema, overriding {partitioning:?} partitioning with EmptySchemaShufflePartitioner\");\n            Box::new(EmptySchemaShufflePartitioner::try_new(\n                output_data_file,\n                output_index_file,\n                Arc::clone(&schema),\n                partitioning.partition_count(),\n                metrics,\n                codec,\n            )?)\n        }\n        any if any.partition_count() == 1 => Box::new(SinglePartitionShufflePartitioner::try_new(\n            output_data_file,\n            output_index_file,\n            Arc::clone(&schema),\n            metrics,\n            context.session_config().batch_size(),\n            codec,\n            write_buffer_size,\n        )?),\n        _ => Box::new(MultiPartitionShuffleRepartitioner::try_new(\n            partition,\n            output_data_file,\n            output_index_file,\n            Arc::clone(&schema),\n            partitioning,\n            metrics,\n            context.runtime_env(),\n            context.session_config().batch_size(),\n            codec,\n            tracing_enabled,\n            write_buffer_size,\n        )?),\n    };\n\n    while let Some(batch) = input.next().await {\n        // Wait for the repartitioner to finish inserting the batch and shuffling its\n        // rows into the corresponding partition buffers. Otherwise, pulling the next\n        // batch from the input stream might overwrite the current batch in the\n        // repartitioner.\n        repartitioner\n            .insert_batch(batch?)\n            .await\n            .map_err(|err| exec_datafusion_err!(\"Error inserting batch: {err}\"))?;\n    }\n\n    repartitioner\n        .shuffle_write()\n        .map_err(|err| exec_datafusion_err!(\"Error in shuffle write: {err}\"))?;\n\n    // shuffle writer always has empty output\n
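    // (The JVM side consumes the shuffled rows through the data and index files\n    // written above; the ExecutionPlan contract still requires returning a stream,\n    // so hand back an empty one with the input schema.)\n    Ok(Box::pin(EmptyRecordBatchStream::new(Arc::clone(&schema))) as 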
SendableRecordBatchStream)\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use crate::{read_ipc_compressed, ShuffleBlockWriter};\n    use arrow::array::{Array, StringArray, StringBuilder};\n    use arrow::datatypes::{DataType, Field, Schema};\n    use arrow::record_batch::RecordBatch;\n    use arrow::row::{RowConverter, SortField};\n    use datafusion::datasource::memory::MemorySourceConfig;\n    use datafusion::datasource::source::DataSourceExec;\n    use datafusion::execution::config::SessionConfig;\n    use datafusion::execution::runtime_env::{RuntimeEnv, RuntimeEnvBuilder};\n    use datafusion::physical_expr::expressions::{col, Column};\n    use datafusion::physical_expr::{LexOrdering, PhysicalSortExpr};\n    use datafusion::physical_plan::common::collect;\n    use datafusion::physical_plan::metrics::Time;\n    use datafusion::prelude::SessionContext;\n    use itertools::Itertools;\n    use std::io::Cursor;\n    use tokio::runtime::Runtime;\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `ZSTD_createCCtx`\n    fn roundtrip_ipc() {\n        let batch = create_batch(8192);\n        for codec in &[\n            CompressionCodec::None,\n            CompressionCodec::Zstd(1),\n            CompressionCodec::Snappy,\n            CompressionCodec::Lz4Frame,\n        ] {\n            let mut output = vec![];\n            let mut cursor = Cursor::new(&mut output);\n            let writer =\n                ShuffleBlockWriter::try_new(batch.schema().as_ref(), codec.clone()).unwrap();\n            let length = writer\n                .write_batch(&batch, &mut cursor, &Time::default())\n                .unwrap();\n            assert_eq!(length, output.len());\n\n            let ipc_without_length_prefix = &output[16..];\n            let batch2 = read_ipc_compressed(ipc_without_length_prefix).unwrap();\n            assert_eq!(batch, batch2);\n        }\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `ZSTD_createCCtx`\n    fn test_single_partition_shuffle_writer() {\n        shuffle_write_test(1000, 100, 1, None);\n        shuffle_write_test(10000, 10, 1, None);\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `ZSTD_createCCtx`\n    fn test_insert_larger_batch() {\n        shuffle_write_test(10000, 1, 16, None);\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `ZSTD_createCCtx`\n    fn test_insert_smaller_batch() {\n        shuffle_write_test(1000, 1, 16, None);\n        shuffle_write_test(1000, 10, 16, None);\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `ZSTD_createCCtx`\n    fn test_large_number_of_partitions() {\n        shuffle_write_test(10000, 10, 200, Some(10 * 1024 * 1024));\n        shuffle_write_test(10000, 10, 2000, Some(10 * 1024 * 1024));\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // miri can't call foreign function `ZSTD_createCCtx`\n    fn test_large_number_of_partitions_spilling() {\n        shuffle_write_test(10000, 100, 200, Some(10 * 1024 * 1024));\n    }\n\n    #[tokio::test]\n    async fn shuffle_partitioner_memory() {\n        let batch = create_batch(900);\n        // NOTE: this exact size depends on the Arrow version and may need updating\n        // when Arrow is upgraded\n        assert_eq!(8316, batch.get_array_memory_size());\n\n        let memory_limit = 512 * 1024;\n        let num_partitions = 2;\n        let runtime_env = create_runtime(memory_limit);\n        let metrics_set = ExecutionPlanMetricsSet::new();\n
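        // spill() is invoked explicitly below; the assertions then check that spill\n        // files appear for both partitions and that the repartitioner keeps accepting\n        // batches after its memory reservation has been freed.\n        let mut repartitioner 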
= MultiPartitionShuffleRepartitioner::try_new(\n            0,\n            \"/tmp/data.out\".to_string(),\n            \"/tmp/index.out\".to_string(),\n            batch.schema(),\n            CometPartitioning::Hash(vec![Arc::new(Column::new(\"a\", 0))], num_partitions),\n            ShufflePartitionerMetrics::new(&metrics_set, 0),\n            runtime_env,\n            1024,\n            CompressionCodec::Lz4Frame,\n            false,\n            1024 * 1024, // write_buffer_size: 1MB default\n        )\n        .unwrap();\n\n        repartitioner.insert_batch(batch.clone()).await.unwrap();\n\n        {\n            let partition_writers = repartitioner.partition_writers();\n            assert_eq!(partition_writers.len(), 2);\n\n            assert!(!partition_writers[0].has_spill_file());\n            assert!(!partition_writers[1].has_spill_file());\n        }\n\n        repartitioner.spill().unwrap();\n\n        // after spill, there should be spill files\n        {\n            let partition_writers = repartitioner.partition_writers();\n            assert!(partition_writers[0].has_spill_file());\n            assert!(partition_writers[1].has_spill_file());\n        }\n\n        // insert another batch after spilling\n        repartitioner.insert_batch(batch.clone()).await.unwrap();\n    }\n\n    fn create_runtime(memory_limit: usize) -> Arc<RuntimeEnv> {\n        Arc::new(\n            RuntimeEnvBuilder::new()\n                .with_memory_limit(memory_limit, 1.0)\n                .build()\n                .unwrap(),\n        )\n    }\n\n    fn shuffle_write_test(\n        batch_size: usize,\n        num_batches: usize,\n        num_partitions: usize,\n        memory_limit: Option<usize>,\n    ) {\n        let batch = create_batch(batch_size);\n\n        let lex_ordering = LexOrdering::new(vec![PhysicalSortExpr::new_default(\n            col(\"a\", batch.schema().as_ref()).unwrap(),\n        )])\n        .unwrap();\n\n        let sort_fields: Vec<SortField> = batch\n            .columns()\n            .iter()\n            .zip(&lex_ordering)\n            .map(|(array, sort_expr)| {\n                SortField::new_with_options(array.data_type().clone(), sort_expr.options)\n            })\n            .collect();\n        let row_converter = RowConverter::new(sort_fields).unwrap();\n\n        let owned_rows = if num_partitions == 1 {\n            vec![]\n        } else {\n            // Determine range boundaries based on create_batch implementation. 
We just divide the\n            // domain of values in the batch equally to find partition bounds.\n            let bounds_strings = {\n                let mut boundaries = Vec::with_capacity(num_partitions - 1);\n                let step = batch_size as f64 / num_partitions as f64;\n\n                for i in 1..(num_partitions) {\n                    boundaries.push(Some((step * i as f64).round().to_string()));\n                }\n                boundaries\n            };\n            let bounds_array: Arc<dyn Array> = Arc::new(StringArray::from(bounds_strings));\n            let bounds_rows = row_converter\n                .convert_columns(vec![bounds_array].as_slice())\n                .unwrap();\n\n            let owned_rows_vec = bounds_rows.iter().map(|row| row.owned()).collect_vec();\n            owned_rows_vec\n        };\n\n        for partitioning in [\n            CometPartitioning::Hash(vec![Arc::new(Column::new(\"a\", 0))], num_partitions),\n            CometPartitioning::RangePartitioning(\n                lex_ordering,\n                num_partitions,\n                Arc::new(row_converter),\n                owned_rows,\n            ),\n            CometPartitioning::RoundRobin(num_partitions, 0),\n        ] {\n            let batches = (0..num_batches).map(|_| batch.clone()).collect::<Vec<_>>();\n\n            let partitions = &[batches];\n            let exec = ShuffleWriterExec::try_new(\n                Arc::new(DataSourceExec::new(Arc::new(\n                    MemorySourceConfig::try_new(partitions, batch.schema(), None).unwrap(),\n                ))),\n                partitioning,\n                CompressionCodec::Zstd(1),\n                \"/tmp/data.out\".to_string(),\n                \"/tmp/index.out\".to_string(),\n                false,\n                1024 * 1024, // write_buffer_size: 1MB default\n            )\n            .unwrap();\n\n            // 10MB memory should be enough for running this test\n            let config = SessionConfig::new();\n            let mut runtime_env_builder = RuntimeEnvBuilder::new();\n            runtime_env_builder = match memory_limit {\n                Some(limit) => runtime_env_builder.with_memory_limit(limit, 1.0),\n                None => runtime_env_builder,\n            };\n            let runtime_env = Arc::new(runtime_env_builder.build().unwrap());\n            let ctx = SessionContext::new_with_config_rt(config, runtime_env);\n            let task_ctx = ctx.task_ctx();\n            let stream = exec.execute(0, task_ctx).unwrap();\n            let rt = Runtime::new().unwrap();\n            rt.block_on(collect(stream)).unwrap();\n        }\n    }\n\n    fn create_batch(batch_size: usize) -> RecordBatch {\n        let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Utf8, true)]));\n        let mut b = StringBuilder::new();\n        for i in 0..batch_size {\n            b.append_value(format!(\"{i}\"));\n        }\n        let array = b.finish();\n        RecordBatch::try_new(Arc::clone(&schema), vec![Arc::new(array)]).unwrap()\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)]\n    fn test_round_robin_deterministic() {\n        // Test that round robin partitioning produces identical results when run multiple times\n        use std::fs;\n        use std::io::Read;\n\n        let batch_size = 1000;\n        let num_batches = 10;\n        let num_partitions = 8;\n\n        let batch = create_batch(batch_size);\n        let batches = (0..num_batches).map(|_| batch.clone()).collect::<Vec<_>>();\n\n    
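    // The premise here: Spark can re-run a shuffle map task after a failure, so\n        // repeated runs of the same shuffle must produce identical bytes; the check\n        // below pins that down by running the same RoundRobin shuffle twice and\n        // comparing the raw data and index files.\n\n    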
    // Run shuffle twice and compare results\n        for run in 0..2 {\n            let data_file = format!(\"/tmp/rr_data_{}.out\", run);\n            let index_file = format!(\"/tmp/rr_index_{}.out\", run);\n\n            let partitions = std::slice::from_ref(&batches);\n            let exec = ShuffleWriterExec::try_new(\n                Arc::new(DataSourceExec::new(Arc::new(\n                    MemorySourceConfig::try_new(partitions, batch.schema(), None).unwrap(),\n                ))),\n                CometPartitioning::RoundRobin(num_partitions, 0),\n                CompressionCodec::Zstd(1),\n                data_file.clone(),\n                index_file.clone(),\n                false,\n                1024 * 1024,\n            )\n            .unwrap();\n\n            let config = SessionConfig::new();\n            let runtime_env = Arc::new(\n                RuntimeEnvBuilder::new()\n                    .with_memory_limit(10 * 1024 * 1024, 1.0)\n                    .build()\n                    .unwrap(),\n            );\n            let session_ctx = Arc::new(SessionContext::new_with_config_rt(config, runtime_env));\n            let task_ctx = Arc::new(TaskContext::from(session_ctx.as_ref()));\n\n            // Execute the shuffle\n            futures::executor::block_on(async {\n                let mut stream = exec.execute(0, Arc::clone(&task_ctx)).unwrap();\n                while stream.next().await.is_some() {}\n            });\n\n            if run == 1 {\n                // Compare data files\n                let mut data0 = Vec::new();\n                fs::File::open(\"/tmp/rr_data_0.out\")\n                    .unwrap()\n                    .read_to_end(&mut data0)\n                    .unwrap();\n                let mut data1 = Vec::new();\n                fs::File::open(\"/tmp/rr_data_1.out\")\n                    .unwrap()\n                    .read_to_end(&mut data1)\n                    .unwrap();\n                assert_eq!(\n                    data0, data1,\n                    \"Round robin shuffle data should be identical across runs\"\n                );\n\n                // Compare index files\n                let mut index0 = Vec::new();\n                fs::File::open(\"/tmp/rr_index_0.out\")\n                    .unwrap()\n                    .read_to_end(&mut index0)\n                    .unwrap();\n                let mut index1 = Vec::new();\n                fs::File::open(\"/tmp/rr_index_1.out\")\n                    .unwrap()\n                    .read_to_end(&mut index1)\n                    .unwrap();\n                assert_eq!(\n                    index0, index1,\n                    \"Round robin shuffle index should be identical across runs\"\n                );\n            }\n        }\n\n        // Clean up\n        let _ = fs::remove_file(\"/tmp/rr_data_0.out\");\n        let _ = fs::remove_file(\"/tmp/rr_index_0.out\");\n        let _ = fs::remove_file(\"/tmp/rr_data_1.out\");\n        let _ = fs::remove_file(\"/tmp/rr_index_1.out\");\n    }\n\n    /// Test that batch coalescing in BufBatchWriter reduces output size by\n    /// writing fewer, larger IPC blocks instead of many small ones.\n    #[test]\n    #[cfg_attr(miri, ignore)]\n    fn test_batch_coalescing_reduces_size() {\n        use crate::writers::BufBatchWriter;\n        use arrow::array::Int32Array;\n\n        // Create a wide schema to amplify per-block schema overhead\n        let fields: Vec<Field> = (0..20)\n            .map(|i| Field::new(format!(\"col_{i}\"), 
DataType::Int32, false))\n            .collect();\n        let schema = Arc::new(Schema::new(fields));\n\n        // Create many small batches (50 rows each)\n        let small_batches: Vec<RecordBatch> = (0..100)\n            .map(|batch_idx| {\n                let columns: Vec<Arc<dyn Array>> = (0..20)\n                    .map(|col_idx| {\n                        let values: Vec<i32> = (0..50)\n                            .map(|row| batch_idx * 50 + row + col_idx * 1000)\n                            .collect();\n                        Arc::new(Int32Array::from(values)) as Arc<dyn Array>\n                    })\n                    .collect();\n                RecordBatch::try_new(Arc::clone(&schema), columns).unwrap()\n            })\n            .collect();\n\n        let codec = CompressionCodec::Lz4Frame;\n        let encode_time = Time::default();\n        let write_time = Time::default();\n\n        // Write with coalescing (batch_size=8192)\n        let mut coalesced_output = Vec::new();\n        {\n            let mut writer = ShuffleBlockWriter::try_new(schema.as_ref(), codec.clone()).unwrap();\n            let mut buf_writer = BufBatchWriter::new(\n                &mut writer,\n                Cursor::new(&mut coalesced_output),\n                1024 * 1024,\n                8192,\n            );\n            for batch in &small_batches {\n                buf_writer.write(batch, &encode_time, &write_time).unwrap();\n            }\n            buf_writer.flush(&encode_time, &write_time).unwrap();\n        }\n\n        // Write without coalescing (batch_size=1)\n        let mut uncoalesced_output = Vec::new();\n        {\n            let mut writer = ShuffleBlockWriter::try_new(schema.as_ref(), codec.clone()).unwrap();\n            let mut buf_writer = BufBatchWriter::new(\n                &mut writer,\n                Cursor::new(&mut uncoalesced_output),\n                1024 * 1024,\n                1,\n            );\n            for batch in &small_batches {\n                buf_writer.write(batch, &encode_time, &write_time).unwrap();\n            }\n            buf_writer.flush(&encode_time, &write_time).unwrap();\n        }\n\n        // Coalesced output should be smaller due to fewer IPC schema blocks\n        assert!(\n            coalesced_output.len() < uncoalesced_output.len(),\n            \"Coalesced output ({} bytes) should be smaller than uncoalesced ({} bytes)\",\n            coalesced_output.len(),\n            uncoalesced_output.len()\n        );\n\n        // Verify both roundtrip correctly by reading all IPC blocks\n        let coalesced_rows = read_all_ipc_blocks(&coalesced_output);\n        let uncoalesced_rows = read_all_ipc_blocks(&uncoalesced_output);\n        assert_eq!(\n            coalesced_rows, 5000,\n            \"Coalesced should contain all 5000 rows\"\n        );\n        assert_eq!(\n            uncoalesced_rows, 5000,\n            \"Uncoalesced should contain all 5000 rows\"\n        );\n    }\n\n    /// Read all IPC blocks from a byte buffer written by BufBatchWriter/ShuffleBlockWriter,\n    /// returning the total number of rows.\n    fn read_all_ipc_blocks(data: &[u8]) -> usize {\n        let mut offset = 0;\n        let mut total_rows = 0;\n        while offset < data.len() {\n            // First 8 bytes are the IPC length (little-endian u64)\n            let ipc_length =\n                u64::from_le_bytes(data[offset..offset + 8].try_into().unwrap()) as usize;\n            // Skip the 8-byte length prefix; the next 8 bytes are 
field_count + codec header\n            let block_start = offset + 8;\n            let block_end = block_start + ipc_length;\n            // read_ipc_compressed expects data starting after the 16-byte header\n            // (i.e., after length + field_count), at the codec tag\n            let ipc_data = &data[block_start + 8..block_end];\n            let batch = read_ipc_compressed(ipc_data).unwrap();\n            total_rows += batch.num_rows();\n            offset = block_end;\n        }\n        total_rows\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)]\n    fn test_empty_schema_shuffle_writer() {\n        use std::fs;\n        use std::io::Read;\n\n        let num_rows = 1000;\n        let num_batches = 5;\n        let num_partitions = 10;\n\n        let schema = Arc::new(Schema::new(Vec::<Field>::new()));\n        let batch = RecordBatch::try_new_with_options(\n            Arc::clone(&schema),\n            vec![],\n            &arrow::array::RecordBatchOptions::new().with_row_count(Some(num_rows)),\n        )\n        .unwrap();\n\n        let batches = (0..num_batches).map(|_| batch.clone()).collect::<Vec<_>>();\n        let partitions = &[batches];\n\n        let dir = tempfile::tempdir().unwrap();\n        let data_file = dir.path().join(\"data.out\");\n        let index_file = dir.path().join(\"index.out\");\n\n        let exec = ShuffleWriterExec::try_new(\n            Arc::new(DataSourceExec::new(Arc::new(\n                MemorySourceConfig::try_new(partitions, Arc::clone(&schema), None).unwrap(),\n            ))),\n            CometPartitioning::RoundRobin(num_partitions, 0),\n            CompressionCodec::Zstd(1),\n            data_file.to_str().unwrap().to_string(),\n            index_file.to_str().unwrap().to_string(),\n            false,\n            1024 * 1024,\n        )\n        .unwrap();\n\n        let config = SessionConfig::new();\n        let runtime_env = Arc::new(RuntimeEnvBuilder::new().build().unwrap());\n        let ctx = SessionContext::new_with_config_rt(config, runtime_env);\n        let task_ctx = ctx.task_ctx();\n        let stream = exec.execute(0, task_ctx).unwrap();\n        let rt = Runtime::new().unwrap();\n        rt.block_on(collect(stream)).unwrap();\n\n        // Verify data file is non-empty (contains IPC batch with row count)\n        let mut data = Vec::new();\n        fs::File::open(&data_file)\n            .unwrap()\n            .read_to_end(&mut data)\n            .unwrap();\n        assert!(!data.is_empty(), \"Data file should contain IPC data\");\n\n        // Verify row count survives roundtrip\n        let total_rows = read_all_ipc_blocks(&data);\n        assert_eq!(\n            total_rows,\n            num_rows * num_batches,\n            \"Row count should survive roundtrip\"\n        );\n\n        // Verify index file structure: num_partitions + 1 offsets\n        let mut index_data = Vec::new();\n        fs::File::open(&index_file)\n            .unwrap()\n            .read_to_end(&mut index_data)\n            .unwrap();\n        let expected_index_size = (num_partitions + 1) * 8;\n        assert_eq!(index_data.len(), expected_index_size);\n\n        // First offset should be 0\n        let first_offset = i64::from_le_bytes(index_data[0..8].try_into().unwrap());\n        assert_eq!(first_offset, 0);\n\n        // Second offset should equal data file length (partition 0 holds all data)\n        let data_len = data.len() as i64;\n        let second_offset = i64::from_le_bytes(index_data[8..16].try_into().unwrap());\n        
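// A partition's byte length is offsets[i + 1] - offsets[i] (this is what the\n        // extra trailing offset in the index file is for), so a second offset equal\n        // to the file length means partition 0 holds every byte written.\n        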
assert_eq!(second_offset, data_len);\n\n        // All remaining offsets should equal data file length (empty partitions)\n        for i in 2..=num_partitions {\n            let offset = i64::from_le_bytes(index_data[i * 8..(i + 1) * 8].try_into().unwrap());\n            assert_eq!(\n                offset, data_len,\n                \"Partition {i} offset should equal data length\"\n            );\n        }\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)]\n    fn test_empty_schema_shuffle_writer_zero_rows() {\n        use std::fs;\n        use std::io::Read;\n\n        let num_partitions = 4;\n\n        let schema = Arc::new(Schema::new(Vec::<Field>::new()));\n        let batch = RecordBatch::try_new_with_options(\n            Arc::clone(&schema),\n            vec![],\n            &arrow::array::RecordBatchOptions::new().with_row_count(Some(0)),\n        )\n        .unwrap();\n\n        let batches = vec![batch];\n        let partitions = &[batches];\n\n        let dir = tempfile::tempdir().unwrap();\n        let data_file = dir.path().join(\"data.out\");\n        let index_file = dir.path().join(\"index.out\");\n\n        let exec = ShuffleWriterExec::try_new(\n            Arc::new(DataSourceExec::new(Arc::new(\n                MemorySourceConfig::try_new(partitions, Arc::clone(&schema), None).unwrap(),\n            ))),\n            CometPartitioning::RoundRobin(num_partitions, 0),\n            CompressionCodec::Zstd(1),\n            data_file.to_str().unwrap().to_string(),\n            index_file.to_str().unwrap().to_string(),\n            false,\n            1024 * 1024,\n        )\n        .unwrap();\n\n        let config = SessionConfig::new();\n        let runtime_env = Arc::new(RuntimeEnvBuilder::new().build().unwrap());\n        let ctx = SessionContext::new_with_config_rt(config, runtime_env);\n        let task_ctx = ctx.task_ctx();\n        let stream = exec.execute(0, task_ctx).unwrap();\n        let rt = Runtime::new().unwrap();\n        rt.block_on(collect(stream)).unwrap();\n\n        // Data file should be empty (no rows to write)\n        let mut data = Vec::new();\n        fs::File::open(&data_file)\n            .unwrap()\n            .read_to_end(&mut data)\n            .unwrap();\n        assert!(data.is_empty(), \"Data file should be empty with zero rows\");\n\n        // Index file should have all-zero offsets\n        let mut index_data = Vec::new();\n        fs::File::open(&index_file)\n            .unwrap()\n            .read_to_end(&mut index_data)\n            .unwrap();\n        let expected_index_size = (num_partitions + 1) * 8;\n        assert_eq!(index_data.len(), expected_index_size);\n        for i in 0..=num_partitions {\n            let offset = i64::from_le_bytes(index_data[i * 8..(i + 1) * 8].try_into().unwrap());\n            assert_eq!(offset, 0, \"All offsets should be 0 with zero rows\");\n        }\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/spark_crc32c_hasher.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Provide a CRC-32C implementor of [Hasher].\nuse std::hash::Hasher;\n\nuse crc32c::crc32c_append;\n\n/// Implementor of [Hasher] for CRC-32C.\n///\n/// Note that CRC-32C produces a 32-bit hash (as [u32]),\n/// but the trait requires that the output value be [u64].\n///\n/// This implementation is necessary because the existing [Hasher] implementation does not support\n/// [Clone].\n#[derive(Default, Clone)]\npub struct SparkCrc32cHasher {\n    checksum: u32,\n}\n\nimpl SparkCrc32cHasher {\n    /// Create the [Hasher] pre-loaded with a particular checksum.\n    ///\n    /// Use the [Default::default()] constructor for a clean start.\n    pub fn new(initial: u32) -> Self {\n        Self { checksum: initial }\n    }\n\n    pub fn finalize(&self) -> u32 {\n        self.checksum\n    }\n}\n\nimpl Hasher for SparkCrc32cHasher {\n    fn finish(&self) -> u64 {\n        self.checksum as u64\n    }\n\n    fn write(&mut self, bytes: &[u8]) {\n        self.checksum = crc32c_append(self.checksum, bytes);\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    const TEST_STRING: &[u8] =\n        b\"This is a very long string which is used to test the CRC-32-Castagnoli function.\";\n    const CHECKSUM: u32 = 0x20_CB_1E_59;\n\n    #[test]\n    fn can_hash() {\n        let mut hasher = SparkCrc32cHasher::default();\n        hasher.write(TEST_STRING);\n        assert_eq!(hasher.finish(), CHECKSUM as u64);\n    }\n\n    /// Demonstrate writing in multiple chunks by splitting the [TEST_STRING] and getting the same\n    /// [CHECKSUM].\n    #[test]\n    fn can_hash_in_chunks() {\n        let (head, tail) = TEST_STRING.split_at(20);\n\n        let mut hasher = SparkCrc32cHasher::default();\n        hasher.write(head);\n        hasher.write(tail);\n        assert_eq!(hasher.finish(), CHECKSUM as u64);\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/spark_unsafe/list.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::spark_unsafe::{\n    map::append_map_elements,\n    row::{append_field, downcast_builder_ref, SparkUnsafeRow},\n    unsafe_object::{impl_primitive_accessors, SparkUnsafeObject},\n};\nuse arrow::array::{\n    builder::{\n        ArrayBuilder, BinaryBuilder, BooleanBuilder, Date32Builder, Decimal128Builder,\n        Float32Builder, Float64Builder, Int16Builder, Int32Builder, Int64Builder, Int8Builder,\n        ListBuilder, StringBuilder, StructBuilder, TimestampMicrosecondBuilder,\n    },\n    MapBuilder,\n};\nuse arrow::datatypes::{DataType, TimeUnit};\nuse datafusion_comet_jni_bridge::errors::CometError;\n\n/// Generates bulk append methods for primitive types in SparkUnsafeArray.\n///\n/// # Safety invariants for all generated methods:\n/// - `element_offset` points to contiguous element data of length `num_elements`\n/// - `null_bitset_ptr()` returns a pointer to `ceil(num_elements/64)` i64 words\n/// - These invariants are guaranteed by the SparkUnsafeArray layout from the JVM\nmacro_rules! 
impl_append_to_builder {\n    ($method_name:ident, $builder_type:ty, $element_type:ty) => {\n        pub(crate) fn $method_name<const NULLABLE: bool>(&self, builder: &mut $builder_type) {\n            let num_elements = self.num_elements;\n            if num_elements == 0 {\n                return;\n            }\n\n            if NULLABLE {\n                let mut ptr = self.element_offset as *const $element_type;\n                let null_words = self.null_bitset_ptr();\n                debug_assert!(!null_words.is_null(), \"null_bitset_ptr is null\");\n                debug_assert!(!ptr.is_null(), \"element_offset pointer is null\");\n                for idx in 0..num_elements {\n                    // SAFETY: null_words has ceil(num_elements/64) words, idx < num_elements\n                    let is_null = unsafe { Self::is_null_in_bitset(null_words, idx) };\n\n                    if is_null {\n                        builder.append_null();\n                    } else {\n                        // SAFETY: ptr is within element data bounds\n                        builder.append_value(unsafe { ptr.read_unaligned() });\n                    }\n                    // SAFETY: ptr stays within bounds, iterating num_elements times\n                    ptr = unsafe { ptr.add(1) };\n                }\n            } else {\n                // SAFETY: element_offset points to contiguous data of length num_elements\n                debug_assert!(self.element_offset != 0, \"element_offset is null\");\n                let ptr = self.element_offset as *const $element_type;\n                // Use bulk copy when data is properly aligned, fall back to\n                // per-element unaligned reads otherwise\n                if (ptr as usize).is_multiple_of(std::mem::align_of::<$element_type>()) {\n                    let slice = unsafe { std::slice::from_raw_parts(ptr, num_elements) };\n                    builder.append_slice(slice);\n                } else {\n                    let mut ptr = ptr;\n                    for _ in 0..num_elements {\n                        builder.append_value(unsafe { ptr.read_unaligned() });\n                        ptr = unsafe { ptr.add(1) };\n                    }\n                }\n            }\n        }\n    };\n}\n\n/// A Spark `UnsafeArray` backed by JVM-allocated memory, providing element access by index.\npub struct SparkUnsafeArray {\n    row_addr: i64,\n    num_elements: usize,\n    element_offset: i64,\n}\n\nimpl SparkUnsafeObject for SparkUnsafeArray {\n    #[inline]\n    fn get_row_addr(&self) -> i64 {\n        self.row_addr\n    }\n\n    #[inline]\n    fn get_element_offset(&self, index: usize, element_size: usize) -> *const u8 {\n        (self.element_offset + (index * element_size) as i64) as *const u8\n    }\n\n    // SparkUnsafeArray base address may be unaligned when nested within a row's variable-length\n    // region, so we must use ptr::read_unaligned() for all typed accesses.\n    impl_primitive_accessors!(read_unaligned);\n}\n\nimpl SparkUnsafeArray {\n    /// Creates a `SparkUnsafeArray` which points to the given address and size in bytes.\n    pub fn new(addr: i64) -> Self {\n        // SAFETY: addr points to valid Spark UnsafeArray data from the JVM.\n        // The first 8 bytes contain the element count as a little-endian i64.\n        debug_assert!(addr != 0, \"SparkUnsafeArray::new: null address\");\n        let slice: &[u8] = unsafe { std::slice::from_raw_parts(addr as *const u8, 8) };\n        let num_elements = 
i64::from_le_bytes(slice.try_into().unwrap());\n\n        if num_elements < 0 {\n            panic!(\"Negative number of elements: {num_elements}\");\n        }\n\n        if num_elements > i32::MAX as i64 {\n            panic!(\"Number of elements should <= i32::MAX: {num_elements}\");\n        }\n\n        Self {\n            row_addr: addr,\n            num_elements: num_elements as usize,\n            element_offset: addr + Self::get_header_portion_in_bytes(num_elements),\n        }\n    }\n\n    pub(crate) fn get_num_elements(&self) -> usize {\n        self.num_elements\n    }\n\n    /// Returns the size of array header in bytes.\n    #[inline]\n    const fn get_header_portion_in_bytes(num_fields: i64) -> i64 {\n        8 + ((num_fields + 63) / 64) * 8\n    }\n\n    /// Returns true if the null bit at the given index of the array is set.\n    #[inline]\n    pub(crate) fn is_null_at(&self, index: usize) -> bool {\n        // SAFETY: row_addr points to valid Spark UnsafeArray data. The null bitset starts\n        // at offset 8 and contains ceil(num_elements/64) * 8 bytes. The caller ensures\n        // index < num_elements, so word_offset is within the bitset region.\n        debug_assert!(\n            index < self.num_elements,\n            \"is_null_at: index {index} >= num_elements {}\",\n            self.num_elements\n        );\n        unsafe {\n            let mask: i64 = 1i64 << (index & 0x3f);\n            let word_offset = (self.row_addr + 8 + (((index >> 6) as i64) << 3)) as *const i64;\n            let word: i64 = word_offset.read_unaligned();\n            (word & mask) != 0\n        }\n    }\n\n    /// Returns the null bitset pointer (starts at row_addr + 8).\n    #[inline]\n    fn null_bitset_ptr(&self) -> *const i64 {\n        (self.row_addr + 8) as *const i64\n    }\n\n    /// Checks whether the null bit at `idx` is set in the given null bitset pointer.\n    ///\n    /// # Safety\n    /// `null_words` must point to at least `ceil((idx+1)/64)` i64 words.\n    #[inline]\n    unsafe fn is_null_in_bitset(null_words: *const i64, idx: usize) -> bool {\n        let word_idx = idx >> 6;\n        let bit_idx = idx & 0x3f;\n        (null_words.add(word_idx).read_unaligned() & (1i64 << bit_idx)) != 0\n    }\n\n    impl_append_to_builder!(append_ints_to_builder, Int32Builder, i32);\n    impl_append_to_builder!(append_longs_to_builder, Int64Builder, i64);\n    impl_append_to_builder!(append_shorts_to_builder, Int16Builder, i16);\n    impl_append_to_builder!(append_bytes_to_builder, Int8Builder, i8);\n    impl_append_to_builder!(append_floats_to_builder, Float32Builder, f32);\n    impl_append_to_builder!(append_doubles_to_builder, Float64Builder, f64);\n\n    /// Bulk append boolean values to builder.\n    /// Booleans are stored as 1 byte each in SparkUnsafeArray, requiring special handling.\n    pub(crate) fn append_booleans_to_builder<const NULLABLE: bool>(\n        &self,\n        builder: &mut BooleanBuilder,\n    ) {\n        let num_elements = self.num_elements;\n        if num_elements == 0 {\n            return;\n        }\n\n        let mut ptr = self.element_offset as *const u8;\n        debug_assert!(\n            !ptr.is_null(),\n            \"append_booleans: element_offset pointer is null\"\n        );\n\n        if NULLABLE {\n            let null_words = self.null_bitset_ptr();\n            debug_assert!(\n                !null_words.is_null(),\n                \"append_booleans: null_bitset_ptr is null\"\n            );\n            for idx in 0..num_elements {\n   
             // SAFETY: null_words has ceil(num_elements/64) words, idx < num_elements\n                let is_null = unsafe { Self::is_null_in_bitset(null_words, idx) };\n\n                if is_null {\n                    builder.append_null();\n                } else {\n                    // SAFETY: ptr is within element data bounds\n                    builder.append_value(unsafe { *ptr != 0 });\n                }\n                // SAFETY: ptr stays within bounds, iterating num_elements times\n                ptr = unsafe { ptr.add(1) };\n            }\n        } else {\n            for _ in 0..num_elements {\n                // SAFETY: ptr is within element data bounds\n                builder.append_value(unsafe { *ptr != 0 });\n                ptr = unsafe { ptr.add(1) };\n            }\n        }\n    }\n\n    /// Bulk append timestamp values to builder (stored as i64 microseconds).\n    pub(crate) fn append_timestamps_to_builder<const NULLABLE: bool>(\n        &self,\n        builder: &mut TimestampMicrosecondBuilder,\n    ) {\n        let num_elements = self.num_elements;\n        if num_elements == 0 {\n            return;\n        }\n\n        if NULLABLE {\n            let mut ptr = self.element_offset as *const i64;\n            let null_words = self.null_bitset_ptr();\n            debug_assert!(\n                !null_words.is_null(),\n                \"append_timestamps: null_bitset_ptr is null\"\n            );\n            debug_assert!(\n                !ptr.is_null(),\n                \"append_timestamps: element_offset pointer is null\"\n            );\n            for idx in 0..num_elements {\n                // SAFETY: null_words has ceil(num_elements/64) words, idx < num_elements\n                let is_null = unsafe { Self::is_null_in_bitset(null_words, idx) };\n\n                if is_null {\n                    builder.append_null();\n                } else {\n                    // SAFETY: ptr is within element data bounds\n                    builder.append_value(unsafe { ptr.read_unaligned() });\n                }\n                // SAFETY: ptr stays within bounds, iterating num_elements times\n                ptr = unsafe { ptr.add(1) };\n            }\n        } else {\n            // SAFETY: element_offset points to contiguous i64 data of length num_elements\n            debug_assert!(\n                self.element_offset != 0,\n                \"append_timestamps: element_offset is null\"\n            );\n            let ptr = self.element_offset as *const i64;\n            if (ptr as usize).is_multiple_of(std::mem::align_of::<i64>()) {\n                let slice = unsafe { std::slice::from_raw_parts(ptr, num_elements) };\n                builder.append_slice(slice);\n            } else {\n                let mut ptr = ptr;\n                for _ in 0..num_elements {\n                    builder.append_value(unsafe { ptr.read_unaligned() });\n                    ptr = unsafe { ptr.add(1) };\n                }\n            }\n        }\n    }\n\n    /// Bulk append date values to builder (stored as i32 days since epoch).\n    pub(crate) fn append_dates_to_builder<const NULLABLE: bool>(\n        &self,\n        builder: &mut Date32Builder,\n    ) {\n        let num_elements = self.num_elements;\n        if num_elements == 0 {\n            return;\n        }\n\n        if NULLABLE {\n            let mut ptr = self.element_offset as *const i32;\n            let null_words = self.null_bitset_ptr();\n            debug_assert!(\n                
!null_words.is_null(),\n                \"append_dates: null_bitset_ptr is null\"\n            );\n            debug_assert!(\n                !ptr.is_null(),\n                \"append_dates: element_offset pointer is null\"\n            );\n            for idx in 0..num_elements {\n                // SAFETY: null_words has ceil(num_elements/64) words, idx < num_elements\n                let is_null = unsafe { Self::is_null_in_bitset(null_words, idx) };\n\n                if is_null {\n                    builder.append_null();\n                } else {\n                    // SAFETY: ptr is within element data bounds\n                    builder.append_value(unsafe { ptr.read_unaligned() });\n                }\n                // SAFETY: ptr stays within bounds, iterating num_elements times\n                ptr = unsafe { ptr.add(1) };\n            }\n        } else {\n            // SAFETY: element_offset points to contiguous i32 data of length num_elements\n            debug_assert!(\n                self.element_offset != 0,\n                \"append_dates: element_offset is null\"\n            );\n            let ptr = self.element_offset as *const i32;\n            if (ptr as usize).is_multiple_of(std::mem::align_of::<i32>()) {\n                let slice = unsafe { std::slice::from_raw_parts(ptr, num_elements) };\n                builder.append_slice(slice);\n            } else {\n                let mut ptr = ptr;\n                for _ in 0..num_elements {\n                    builder.append_value(unsafe { ptr.read_unaligned() });\n                    ptr = unsafe { ptr.add(1) };\n                }\n            }\n        }\n    }\n}\n\npub fn append_to_builder<const NULLABLE: bool>(\n    data_type: &DataType,\n    builder: &mut dyn ArrayBuilder,\n    array: &SparkUnsafeArray,\n) -> Result<(), CometError> {\n    macro_rules! 
add_values {\n        ($builder_type:ty, $add_value:expr, $add_null:expr) => {\n            let builder = downcast_builder_ref!($builder_type, builder);\n            for idx in 0..array.get_num_elements() {\n                if NULLABLE && array.is_null_at(idx) {\n                    $add_null(builder);\n                } else {\n                    $add_value(builder, array, idx);\n                }\n            }\n        };\n    }\n\n    match data_type {\n        DataType::Boolean => {\n            let builder = downcast_builder_ref!(BooleanBuilder, builder);\n            array.append_booleans_to_builder::<NULLABLE>(builder);\n        }\n        DataType::Int8 => {\n            let builder = downcast_builder_ref!(Int8Builder, builder);\n            array.append_bytes_to_builder::<NULLABLE>(builder);\n        }\n        DataType::Int16 => {\n            let builder = downcast_builder_ref!(Int16Builder, builder);\n            array.append_shorts_to_builder::<NULLABLE>(builder);\n        }\n        DataType::Int32 => {\n            let builder = downcast_builder_ref!(Int32Builder, builder);\n            array.append_ints_to_builder::<NULLABLE>(builder);\n        }\n        DataType::Int64 => {\n            let builder = downcast_builder_ref!(Int64Builder, builder);\n            array.append_longs_to_builder::<NULLABLE>(builder);\n        }\n        DataType::Float32 => {\n            let builder = downcast_builder_ref!(Float32Builder, builder);\n            array.append_floats_to_builder::<NULLABLE>(builder);\n        }\n        DataType::Float64 => {\n            let builder = downcast_builder_ref!(Float64Builder, builder);\n            array.append_doubles_to_builder::<NULLABLE>(builder);\n        }\n        DataType::Timestamp(TimeUnit::Microsecond, _) => {\n            let builder = downcast_builder_ref!(TimestampMicrosecondBuilder, builder);\n            array.append_timestamps_to_builder::<NULLABLE>(builder);\n        }\n        DataType::Date32 => {\n            let builder = downcast_builder_ref!(Date32Builder, builder);\n            array.append_dates_to_builder::<NULLABLE>(builder);\n        }\n        DataType::Binary => {\n            add_values!(\n                BinaryBuilder,\n                |builder: &mut BinaryBuilder, values: &SparkUnsafeArray, idx: usize| builder\n                    .append_value(values.get_binary(idx)),\n                |builder: &mut BinaryBuilder| builder.append_null()\n            );\n        }\n        DataType::Utf8 => {\n            add_values!(\n                StringBuilder,\n                |builder: &mut StringBuilder, values: &SparkUnsafeArray, idx: usize| builder\n                    .append_value(values.get_string(idx)),\n                |builder: &mut StringBuilder| builder.append_null()\n            );\n        }\n        DataType::List(field) => {\n            let builder = downcast_builder_ref!(ListBuilder<Box<dyn ArrayBuilder>>, builder);\n            for idx in 0..array.get_num_elements() {\n                if NULLABLE && array.is_null_at(idx) {\n                    builder.append_null();\n                } else {\n                    let nested_array = array.get_array(idx);\n                    append_list_element(field.data_type(), builder, &nested_array)?;\n                };\n            }\n        }\n        DataType::Struct(fields) => {\n            let builder = downcast_builder_ref!(StructBuilder, builder);\n            for idx in 0..array.get_num_elements() {\n                let nested_row = if NULLABLE && 
array.is_null_at(idx) {\n                    builder.append_null();\n                    SparkUnsafeRow::default()\n                } else {\n                    builder.append(true);\n                    array.get_struct(idx, fields.len())\n                };\n\n                for (field_idx, field) in fields.into_iter().enumerate() {\n                    append_field(field.data_type(), builder, &nested_row, field_idx)?;\n                }\n            }\n        }\n        DataType::Decimal128(p, _) => {\n            add_values!(\n                Decimal128Builder,\n                |builder: &mut Decimal128Builder, values: &SparkUnsafeArray, idx: usize| builder\n                    .append_value(values.get_decimal(idx, *p)),\n                |builder: &mut Decimal128Builder| builder.append_null()\n            );\n        }\n        DataType::Map(field, _) => {\n            let builder = downcast_builder_ref!(\n                MapBuilder<Box<dyn ArrayBuilder>, Box<dyn ArrayBuilder>>,\n                builder\n            );\n            for idx in 0..array.get_num_elements() {\n                if NULLABLE && array.is_null_at(idx) {\n                    builder.append(false)?;\n                } else {\n                    let nested_map = array.get_map(idx);\n                    append_map_elements(field, builder, &nested_map)?;\n                };\n            }\n        }\n        _ => {\n            return Err(CometError::Internal(format!(\n                \"Unsupported array element data type: {:?}\",\n                data_type\n            )))\n        }\n    }\n\n    Ok(())\n}\n\n/// Appends the given list stored in `SparkUnsafeArray` to the given `ListBuilder`.\n/// `element_dt` is the data type of the list element. `list_builder` is the list builder.\n/// `list` is the list stored in `SparkUnsafeArray`.\npub fn append_list_element(\n    element_dt: &DataType,\n    list_builder: &mut ListBuilder<Box<dyn ArrayBuilder>>,\n    list: &SparkUnsafeArray,\n) -> Result<(), CometError> {\n    append_to_builder::<true>(element_dt, list_builder.values(), list)?;\n    list_builder.append(true);\n\n    Ok(())\n}\n"
  },
  {
    "path": "native/shuffle/src/spark_unsafe/map.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::spark_unsafe::list::{append_to_builder, SparkUnsafeArray};\nuse arrow::array::builder::{ArrayBuilder, MapBuilder, MapFieldNames};\nuse arrow::datatypes::{DataType, FieldRef};\nuse datafusion_comet_jni_bridge::errors::CometError;\n\n/// A Spark `UnsafeMap` backed by JVM-allocated memory, containing parallel keys and values arrays.\npub struct SparkUnsafeMap {\n    pub(crate) keys: SparkUnsafeArray,\n    pub(crate) values: SparkUnsafeArray,\n}\n\nimpl SparkUnsafeMap {\n    /// Creates a `SparkUnsafeMap` which points to the given address and size in bytes.\n    pub(crate) fn new(addr: i64, size: i32) -> Self {\n        // SAFETY: addr points to valid Spark UnsafeMap data from the JVM.\n        // The first 8 bytes contain the key array size as a little-endian i64.\n        debug_assert!(addr != 0, \"SparkUnsafeMap::new: null address\");\n        debug_assert!(size >= 0, \"SparkUnsafeMap::new: negative size {size}\");\n        let slice: &[u8] = unsafe { std::slice::from_raw_parts(addr as *const u8, 8) };\n        let key_array_size = i64::from_le_bytes(slice.try_into().unwrap());\n\n        if key_array_size < 0 {\n            panic!(\"Negative key size in bytes of map: {key_array_size}\");\n        }\n\n        if key_array_size > i32::MAX as i64 {\n            panic!(\"Number of key size in bytes should <= i32::MAX: {key_array_size}\");\n        }\n\n        let value_array_size = size - key_array_size as i32 - 8;\n        if value_array_size < 0 {\n            panic!(\"Negative value size in bytes of map: {value_array_size}\");\n        }\n\n        let keys = SparkUnsafeArray::new(addr + 8);\n        let values = SparkUnsafeArray::new(addr + 8 + key_array_size);\n\n        if keys.get_num_elements() != values.get_num_elements() {\n            panic!(\n                \"Number of elements of keys and values should be the same: {} vs {}\",\n                keys.get_num_elements(),\n                values.get_num_elements()\n            );\n        }\n\n        Self { keys, values }\n    }\n}\n\n/// Appending the given map stored in `SparkUnsafeMap` into `MapBuilder`.\n/// `field` includes data types of the map element. 
`map_builder` is the map builder.\n/// `map` is the map stored in `SparkUnsafeMap`.\npub fn append_map_elements(\n    field: &FieldRef,\n    map_builder: &mut MapBuilder<Box<dyn ArrayBuilder>, Box<dyn ArrayBuilder>>,\n    map: &SparkUnsafeMap,\n) -> Result<(), CometError> {\n    let (key_field, value_field, _) = get_map_key_value_fields(field)?;\n\n    let keys = &map.keys;\n    let values = &map.values;\n\n    append_to_builder::<false>(key_field.data_type(), map_builder.keys(), keys)?;\n\n    append_to_builder::<true>(value_field.data_type(), map_builder.values(), values)?;\n\n    map_builder.append(true)?;\n\n    Ok(())\n}\n\npub fn get_map_key_value_fields(\n    field: &FieldRef,\n) -> Result<(&FieldRef, &FieldRef, MapFieldNames), CometError> {\n    let mut map_fieldnames = MapFieldNames {\n        entry: field.name().to_string(),\n        ..Default::default()\n    };\n\n    let (key_field, value_field) = match field.data_type() {\n        DataType::Struct(fields) => {\n            if fields.len() != 2 {\n                return Err(CometError::Internal(format!(\n                    \"Map field should have 2 fields, but got {}\",\n                    fields.len()\n                )));\n            }\n\n            let key = &fields[0];\n            let value = &fields[1];\n\n            map_fieldnames.key = key.name().to_string();\n            map_fieldnames.value = value.name().to_string();\n\n            (key, value)\n        }\n        _ => {\n            return Err(CometError::Internal(format!(\n                \"Map field should be a struct, but got {:?}\",\n                field.data_type()\n            )));\n        }\n    };\n\n    Ok((key_field, value_field, map_fieldnames))\n}\n"
  },
  {
    "path": "native/shuffle/src/spark_unsafe/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub mod list;\nmod map;\npub mod row;\npub mod unsafe_object;\n"
  },
  {
    "path": "native/shuffle/src/spark_unsafe/row.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Utils for supporting native sort-based columnar shuffle.\n\nuse crate::spark_unsafe::unsafe_object::{impl_primitive_accessors, SparkUnsafeObject};\nuse crate::spark_unsafe::{\n    list::append_list_element,\n    map::{append_map_elements, get_map_key_value_fields},\n};\nuse crate::writers::Checksum;\nuse crate::writers::ShuffleBlockWriter;\nuse arrow::array::{\n    builder::{\n        ArrayBuilder, BinaryBuilder, BinaryDictionaryBuilder, BooleanBuilder, Date32Builder,\n        Decimal128Builder, Float32Builder, Float64Builder, Int16Builder, Int32Builder,\n        Int64Builder, Int8Builder, ListBuilder, MapBuilder, StringBuilder, StringDictionaryBuilder,\n        StructBuilder, TimestampMicrosecondBuilder,\n    },\n    types::Int32Type,\n    Array, ArrayRef, RecordBatch, RecordBatchOptions,\n};\nuse arrow::compute::cast;\nuse arrow::datatypes::{DataType, Field, Schema, TimeUnit};\nuse arrow::error::ArrowError;\nuse datafusion::physical_plan::metrics::Time;\nuse datafusion_comet_jni_bridge::errors::CometError;\nuse jni::sys::{jint, jlong};\nuse std::{\n    fs::OpenOptions,\n    io::{Cursor, Write},\n    sync::Arc,\n};\n\nconst NESTED_TYPE_BUILDER_CAPACITY: usize = 100;\n\n/// A Spark `UnsafeRow` backed by JVM-allocated memory, providing field access by index.\npub struct SparkUnsafeRow {\n    row_addr: i64,\n    row_size: i32,\n    row_bitset_width: i64,\n}\n\nimpl SparkUnsafeObject for SparkUnsafeRow {\n    fn get_row_addr(&self) -> i64 {\n        self.row_addr\n    }\n\n    fn get_element_offset(&self, index: usize, element_size: usize) -> *const u8 {\n        let offset = self.row_bitset_width + (index * 8) as i64;\n        debug_assert!(\n            self.row_size >= 0 && offset + element_size as i64 <= self.row_size as i64,\n            \"get_element_offset: access at offset {offset} with size {element_size} \\\n             exceeds row_size {} for index {index}\",\n            self.row_size\n        );\n        (self.row_addr + offset) as *const u8\n    }\n\n    // SparkUnsafeRow field offsets are always 8-byte aligned: the base address is 8-byte\n    // aligned (JVM guarantee), bitset_width is a multiple of 8, and each field slot is\n    // 8 bytes. 
This means we can safely use aligned ptr::read() for all typed accesses.\n    impl_primitive_accessors!(read);\n}\n\nimpl Default for SparkUnsafeRow {\n    fn default() -> Self {\n        Self {\n            row_addr: -1,\n            row_size: -1,\n            row_bitset_width: -1,\n        }\n    }\n}\n\nimpl SparkUnsafeRow {\n    fn new(schema: &[DataType]) -> Self {\n        Self {\n            row_addr: -1,\n            row_size: -1,\n            row_bitset_width: Self::get_row_bitset_width(schema.len()) as i64,\n        }\n    }\n\n    /// Returns true if the row is a null row.\n    pub fn is_null_row(&self) -> bool {\n        self.row_addr == -1 && self.row_size == -1 && self.row_bitset_width == -1\n    }\n\n    /// Calculates the width of the bitset for the row in bytes.\n    /// The logic is from Spark `UnsafeRow.calculateBitSetWidthInBytes`.\n    #[inline]\n    pub const fn get_row_bitset_width(num_fields: usize) -> usize {\n        num_fields.div_ceil(64) * 8\n    }\n\n    pub fn new_with_num_fields(num_fields: usize) -> Self {\n        Self {\n            row_addr: -1,\n            row_size: -1,\n            row_bitset_width: Self::get_row_bitset_width(num_fields) as i64,\n        }\n    }\n\n    /// Points the row to the given slice.\n    pub fn point_to_slice(&mut self, slice: &[u8]) {\n        self.row_addr = slice.as_ptr() as i64;\n        self.row_size = slice.len() as i32;\n    }\n\n    /// Points the row to the given address with the specified row size.\n    pub(crate) fn point_to(&mut self, row_addr: i64, row_size: i32) {\n        self.row_addr = row_addr;\n        self.row_size = row_size;\n    }\n\n    pub fn get_row_size(&self) -> i32 {\n        self.row_size\n    }\n\n    /// Returns true if the null bit at the given index of the row is set.\n    #[inline]\n    pub(crate) fn is_null_at(&self, index: usize) -> bool {\n        // SAFETY: row_addr points to valid Spark UnsafeRow data with at least\n        // ceil(num_fields/64) * 8 bytes of null bitset. The caller ensures index < num_fields.\n        // word_offset is within the bitset region since (index >> 6) << 3 < bitset size.\n        // The bitset starts at row_addr (8-byte aligned) and each word is at offset 8*k,\n        // so word_offset is always 8-byte aligned — we can use aligned ptr::read().\n        debug_assert!(self.row_addr != -1, \"is_null_at: row not initialized\");\n        unsafe {\n            let mask: i64 = 1i64 << (index & 0x3f);\n            let word_offset = (self.row_addr + (((index >> 6) as i64) << 3)) as *const i64;\n            let word: i64 = word_offset.read();\n            (word & mask) != 0\n        }\n    }\n\n    /// Unsets the null bit at the given index of the row, i.e., sets the bit to 0 (not null).\n    pub fn set_not_null_at(&mut self, index: usize) {\n        // SAFETY: row_addr points to valid Spark UnsafeRow data with at least\n        // ceil(num_fields/64) * 8 bytes of null bitset. 
The caller ensures index < num_fields.\n        // word_offset is within the bitset region since (index >> 6) << 3 < bitset size.\n        // Writing is safe because we have mutable access and the memory is owned by the JVM.\n        // The bitset is always 8-byte aligned — we can use aligned ptr::read()/write().\n        debug_assert!(self.row_addr != -1, \"set_not_null_at: row not initialized\");\n        unsafe {\n            let mask: i64 = 1i64 << (index & 0x3f);\n            let word_offset = (self.row_addr + (((index >> 6) as i64) << 3)) as *mut i64;\n            let word: i64 = word_offset.read();\n            word_offset.write(word & !mask);\n        }\n    }\n}\n\n/// Downcasts a dynamic `ArrayBuilder` to the given concrete builder type, returning a\n/// `CometError::Internal` from the enclosing function if the downcast fails.\nmacro_rules! downcast_builder_ref {\n    ($builder_type:ty, $builder:expr) => {{\n        let actual_type_id = $builder.as_any().type_id();\n        $builder\n            .as_any_mut()\n            .downcast_mut::<$builder_type>()\n            .ok_or_else(|| {\n                CometError::Internal(format!(\n                    \"Failed to downcast builder: expected {}, got {:?}\",\n                    stringify!($builder_type),\n                    actual_type_id\n                ))\n            })?\n    }};\n}\n\n/// Gets the field builder at the given index from a `StructBuilder`, returning a\n/// `CometError::Internal` from the enclosing function if the builder cannot be downcast\n/// to the requested type.\nmacro_rules! get_field_builder {\n    ($struct_builder:expr, $builder_type:ty, $idx:expr) => {\n        $struct_builder\n            .field_builder::<$builder_type>($idx)\n            .ok_or_else(|| {\n                CometError::Internal(format!(\n                    \"Failed to get field builder at index {}: expected {}\",\n                    $idx,\n                    stringify!($builder_type)\n                ))\n            })?\n    };\n}\n\nuse crate::CompressionCodec;\n\n// Expose the macro for other modules.\npub(crate) use downcast_builder_ref;\n\n/// Appends a field of the row to the given struct builder. `dt` is the data type of the field.\n/// `struct_builder` is the struct builder of the row. `row` is the row that contains the field.\n/// `idx` is the index of the field in the row. The caller is responsible for ensuring that\n/// `struct_builder.append` is called before or after this function to append the null buffer\n/// of the struct array.\n#[allow(clippy::redundant_closure_call)]\npub(super) fn append_field(\n    dt: &DataType,\n    struct_builder: &mut StructBuilder,\n    row: &SparkUnsafeRow,\n    idx: usize,\n) -> Result<(), CometError> {\n    /// A macro for generating code that appends a value to the field builder of an Arrow struct builder.\n    macro_rules! 
append_field_to_builder {\n        ($builder_type:ty, $accessor:expr) => {{\n            let field_builder = get_field_builder!(struct_builder, $builder_type, idx);\n\n            if row.is_null_row() {\n                // The row is null.\n                field_builder.append_null();\n            } else {\n                let is_null = row.is_null_at(idx);\n\n                if is_null {\n                    // The field in the row is null.\n                    // Append a null value to the field builder.\n                    field_builder.append_null();\n                } else {\n                    $accessor(field_builder);\n                }\n            }\n        }};\n    }\n\n    match dt {\n        DataType::Boolean => {\n            append_field_to_builder!(BooleanBuilder, |builder: &mut BooleanBuilder| builder\n                .append_value(row.get_boolean(idx)));\n        }\n        DataType::Int8 => {\n            append_field_to_builder!(Int8Builder, |builder: &mut Int8Builder| builder\n                .append_value(row.get_byte(idx)));\n        }\n        DataType::Int16 => {\n            append_field_to_builder!(Int16Builder, |builder: &mut Int16Builder| builder\n                .append_value(row.get_short(idx)));\n        }\n        DataType::Int32 => {\n            append_field_to_builder!(Int32Builder, |builder: &mut Int32Builder| builder\n                .append_value(row.get_int(idx)));\n        }\n        DataType::Int64 => {\n            append_field_to_builder!(Int64Builder, |builder: &mut Int64Builder| builder\n                .append_value(row.get_long(idx)));\n        }\n        DataType::Float32 => {\n            append_field_to_builder!(Float32Builder, |builder: &mut Float32Builder| builder\n                .append_value(row.get_float(idx)));\n        }\n        DataType::Float64 => {\n            append_field_to_builder!(Float64Builder, |builder: &mut Float64Builder| builder\n                .append_value(row.get_double(idx)));\n        }\n        DataType::Date32 => {\n            append_field_to_builder!(Date32Builder, |builder: &mut Date32Builder| builder\n                .append_value(row.get_date(idx)));\n        }\n        DataType::Timestamp(TimeUnit::Microsecond, _) => {\n            append_field_to_builder!(\n                TimestampMicrosecondBuilder,\n                |builder: &mut TimestampMicrosecondBuilder| builder\n                    .append_value(row.get_timestamp(idx))\n            );\n        }\n        DataType::Binary => {\n            append_field_to_builder!(BinaryBuilder, |builder: &mut BinaryBuilder| builder\n                .append_value(row.get_binary(idx)));\n        }\n        DataType::Utf8 => {\n            append_field_to_builder!(StringBuilder, |builder: &mut StringBuilder| builder\n                .append_value(row.get_string(idx)));\n        }\n        DataType::Decimal128(p, _) => {\n            append_field_to_builder!(Decimal128Builder, |builder: &mut Decimal128Builder| builder\n                .append_value(row.get_decimal(idx, *p)));\n        }\n        DataType::Struct(fields) => {\n            // Appending value into struct field builder of Arrow struct builder.\n            let field_builder = get_field_builder!(struct_builder, StructBuilder, idx);\n\n            let nested_row = if row.is_null_row() || row.is_null_at(idx) {\n                // The row is null, or the field in the row is null, i.e., a null nested row.\n                // Append a null value to the row builder.\n                
field_builder.append_null();\n                SparkUnsafeRow::default()\n            } else {\n                field_builder.append(true);\n                row.get_struct(idx, fields.len())\n            };\n\n            for (field_idx, field) in fields.into_iter().enumerate() {\n                append_field(field.data_type(), field_builder, &nested_row, field_idx)?;\n            }\n        }\n        DataType::Map(field, _) => {\n            let field_builder = get_field_builder!(\n                struct_builder,\n                MapBuilder<Box<dyn ArrayBuilder>, Box<dyn ArrayBuilder>>,\n                idx\n            );\n\n            if row.is_null_row() {\n                // The row is null.\n                field_builder.append(false)?;\n            } else {\n                let is_null = row.is_null_at(idx);\n\n                if is_null {\n                    // The field in the row is null.\n                    // Append a null value to the map builder.\n                    field_builder.append(false)?;\n                } else {\n                    append_map_elements(field, field_builder, &row.get_map(idx))?;\n                }\n            }\n        }\n        DataType::List(field) => {\n            let field_builder =\n                get_field_builder!(struct_builder, ListBuilder<Box<dyn ArrayBuilder>>, idx);\n\n            if row.is_null_row() {\n                // The row is null.\n                field_builder.append_null();\n            } else {\n                let is_null = row.is_null_at(idx);\n\n                if is_null {\n                    // The field in the row is null.\n                    // Append a null value to the list builder.\n                    field_builder.append_null();\n                } else {\n                    append_list_element(field.data_type(), field_builder, &row.get_array(idx))?\n                }\n            }\n        }\n        _ => {\n            unreachable!(\"Unsupported data type of struct field: {:?}\", dt)\n        }\n    }\n\n    Ok(())\n}\n\n/// Appends nested struct fields to the struct builder using field-major order.\n/// This is a helper function for processing nested struct fields recursively.\n///\n/// Unlike `append_struct_fields_field_major`, this function takes slices of row addresses,\n/// sizes, and null flags directly, without needing to navigate from a parent row.\n#[allow(clippy::redundant_closure_call)]\nfn append_nested_struct_fields_field_major(\n    row_addresses: &[jlong],\n    row_sizes: &[jint],\n    struct_is_null: &[bool],\n    struct_builder: &mut StructBuilder,\n    fields: &arrow::datatypes::Fields,\n) -> Result<(), CometError> {\n    let num_rows = row_addresses.len();\n    let mut row = SparkUnsafeRow::new_with_num_fields(fields.len());\n\n    // Helper macro for processing primitive fields\n    macro_rules! 
process_field {\n        ($builder_type:ty, $field_idx:expr, $get_value:expr) => {{\n            let field_builder = get_field_builder!(struct_builder, $builder_type, $field_idx);\n\n            for row_idx in 0..num_rows {\n                if struct_is_null[row_idx] {\n                    // Struct is null, field is also null\n                    field_builder.append_null();\n                } else {\n                    let row_addr = row_addresses[row_idx];\n                    let row_size = row_sizes[row_idx];\n                    row.point_to(row_addr, row_size);\n\n                    if row.is_null_at($field_idx) {\n                        field_builder.append_null();\n                    } else {\n                        field_builder.append_value($get_value(&row, $field_idx));\n                    }\n                }\n            }\n        }};\n    }\n\n    // Process each field across all rows\n    for (field_idx, field) in fields.iter().enumerate() {\n        match field.data_type() {\n            DataType::Boolean => {\n                process_field!(BooleanBuilder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_boolean(idx));\n            }\n            DataType::Int8 => {\n                process_field!(Int8Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_byte(idx));\n            }\n            DataType::Int16 => {\n                process_field!(Int16Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_short(idx));\n            }\n            DataType::Int32 => {\n                process_field!(Int32Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_int(idx));\n            }\n            DataType::Int64 => {\n                process_field!(Int64Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_long(idx));\n            }\n            DataType::Float32 => {\n                process_field!(Float32Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_float(idx));\n            }\n            DataType::Float64 => {\n                process_field!(Float64Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_double(idx));\n            }\n            DataType::Date32 => {\n                process_field!(Date32Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_date(idx));\n            }\n            DataType::Timestamp(TimeUnit::Microsecond, _) => {\n                process_field!(\n                    TimestampMicrosecondBuilder,\n                    field_idx,\n                    |row: &SparkUnsafeRow, idx| row.get_timestamp(idx)\n                );\n            }\n            DataType::Binary => {\n                let field_builder = get_field_builder!(struct_builder, BinaryBuilder, field_idx);\n\n                for row_idx in 0..num_rows {\n                    if struct_is_null[row_idx] {\n                        field_builder.append_null();\n                    } else {\n                        let row_addr = row_addresses[row_idx];\n                        let row_size = row_sizes[row_idx];\n                        row.point_to(row_addr, row_size);\n\n                        if row.is_null_at(field_idx) {\n                            field_builder.append_null();\n                        } else {\n                            field_builder.append_value(row.get_binary(field_idx));\n                        }\n                    }\n                }\n            }\n           
 DataType::Utf8 => {\n                let field_builder = get_field_builder!(struct_builder, StringBuilder, field_idx);\n\n                for row_idx in 0..num_rows {\n                    if struct_is_null[row_idx] {\n                        field_builder.append_null();\n                    } else {\n                        let row_addr = row_addresses[row_idx];\n                        let row_size = row_sizes[row_idx];\n                        row.point_to(row_addr, row_size);\n\n                        if row.is_null_at(field_idx) {\n                            field_builder.append_null();\n                        } else {\n                            field_builder.append_value(row.get_string(field_idx));\n                        }\n                    }\n                }\n            }\n            DataType::Decimal128(p, _) => {\n                let p = *p;\n                let field_builder =\n                    get_field_builder!(struct_builder, Decimal128Builder, field_idx);\n\n                for row_idx in 0..num_rows {\n                    if struct_is_null[row_idx] {\n                        field_builder.append_null();\n                    } else {\n                        let row_addr = row_addresses[row_idx];\n                        let row_size = row_sizes[row_idx];\n                        row.point_to(row_addr, row_size);\n\n                        if row.is_null_at(field_idx) {\n                            field_builder.append_null();\n                        } else {\n                            field_builder.append_value(row.get_decimal(field_idx, p));\n                        }\n                    }\n                }\n            }\n            DataType::Struct(nested_fields) => {\n                let nested_builder = get_field_builder!(struct_builder, StructBuilder, field_idx);\n\n                // Collect nested struct addresses and sizes in one pass, building validity\n                let mut nested_addresses: Vec<jlong> = Vec::with_capacity(num_rows);\n                let mut nested_sizes: Vec<jint> = Vec::with_capacity(num_rows);\n                let mut nested_is_null: Vec<bool> = Vec::with_capacity(num_rows);\n\n                for row_idx in 0..num_rows {\n                    if struct_is_null[row_idx] {\n                        // Parent struct is null, nested struct is also null\n                        nested_builder.append_null();\n                        nested_is_null.push(true);\n                        nested_addresses.push(0);\n                        nested_sizes.push(0);\n                    } else {\n                        let row_addr = row_addresses[row_idx];\n                        let row_size = row_sizes[row_idx];\n                        row.point_to(row_addr, row_size);\n\n                        if row.is_null_at(field_idx) {\n                            nested_builder.append_null();\n                            nested_is_null.push(true);\n                            nested_addresses.push(0);\n                            nested_sizes.push(0);\n                        } else {\n                            nested_builder.append(true);\n                            nested_is_null.push(false);\n                            // Get nested struct address and size\n                            let nested_row = row.get_struct(field_idx, nested_fields.len());\n                            nested_addresses.push(nested_row.get_row_addr());\n                            nested_sizes.push(nested_row.get_row_size());\n                        }\n       
             }\n                }\n\n                // Recursively process nested struct fields in field-major order\n                append_nested_struct_fields_field_major(\n                    &nested_addresses,\n                    &nested_sizes,\n                    &nested_is_null,\n                    nested_builder,\n                    nested_fields,\n                )?;\n            }\n            // For list and map, fall back to append_field since they have variable-length elements\n            dt @ (DataType::List(_) | DataType::Map(_, _)) => {\n                for row_idx in 0..num_rows {\n                    if struct_is_null[row_idx] {\n                        let null_row = SparkUnsafeRow::default();\n                        append_field(dt, struct_builder, &null_row, field_idx)?;\n                    } else {\n                        let row_addr = row_addresses[row_idx];\n                        let row_size = row_sizes[row_idx];\n                        row.point_to(row_addr, row_size);\n                        append_field(dt, struct_builder, &row, field_idx)?;\n                    }\n                }\n            }\n            _ => {\n                unreachable!(\n                    \"Unsupported data type of struct field: {:?}\",\n                    field.data_type()\n                )\n            }\n        }\n    }\n\n    Ok(())\n}\n\n/// Reads row address and size from JVM-provided pointer arrays and points the row to that data.\n///\n/// # Safety\n/// Caller must ensure row_addresses_ptr and row_sizes_ptr are valid for index i.\n/// This is guaranteed when called from append_columns with indices in [row_start, row_end).\nmacro_rules! read_row_at {\n    ($row:expr, $row_addresses_ptr:expr, $row_sizes_ptr:expr, $i:expr) => {{\n        // SAFETY: Caller guarantees pointers are valid for this index (see macro doc)\n        debug_assert!(\n            !$row_addresses_ptr.is_null(),\n            \"read_row_at: null row_addresses_ptr\"\n        );\n        debug_assert!(!$row_sizes_ptr.is_null(), \"read_row_at: null row_sizes_ptr\");\n        let row_addr = unsafe { *$row_addresses_ptr.add($i) };\n        let row_size = unsafe { *$row_sizes_ptr.add($i) };\n        $row.point_to(row_addr, row_size);\n    }};\n}\n\n/// Appends a batch of list values to the list builder with a single type dispatch.\n/// This moves type dispatch from O(rows) to O(1), significantly improving performance\n/// for large batches.\n#[allow(clippy::too_many_arguments)]\nfn append_list_column_batch(\n    row_addresses_ptr: *mut jlong,\n    row_sizes_ptr: *mut jint,\n    row_start: usize,\n    row_end: usize,\n    schema: &[DataType],\n    column_idx: usize,\n    element_type: &DataType,\n    list_builder: &mut ListBuilder<Box<dyn ArrayBuilder>>,\n) -> Result<(), CometError> {\n    let mut row = SparkUnsafeRow::new(schema);\n\n    // Helper macro for primitive element types - gets builder fresh each iteration\n    // to avoid borrow conflicts with list_builder.append()\n    macro_rules! 
process_primitive_lists {\n        ($builder_type:ty, $append_fn:ident) => {{\n            for i in row_start..row_end {\n                read_row_at!(row, row_addresses_ptr, row_sizes_ptr, i);\n\n                if row.is_null_at(column_idx) {\n                    list_builder.append_null();\n                } else {\n                    let array = row.get_array(column_idx);\n                    // Get values builder fresh each iteration to avoid borrow conflict\n                    let values_builder = list_builder\n                        .values()\n                        .as_any_mut()\n                        .downcast_mut::<$builder_type>()\n                        .expect(stringify!($builder_type));\n                    array.$append_fn::<true>(values_builder);\n                    list_builder.append(true);\n                }\n            }\n        }};\n    }\n\n    match element_type {\n        DataType::Boolean => {\n            process_primitive_lists!(BooleanBuilder, append_booleans_to_builder);\n        }\n        DataType::Int8 => {\n            process_primitive_lists!(Int8Builder, append_bytes_to_builder);\n        }\n        DataType::Int16 => {\n            process_primitive_lists!(Int16Builder, append_shorts_to_builder);\n        }\n        DataType::Int32 => {\n            process_primitive_lists!(Int32Builder, append_ints_to_builder);\n        }\n        DataType::Int64 => {\n            process_primitive_lists!(Int64Builder, append_longs_to_builder);\n        }\n        DataType::Float32 => {\n            process_primitive_lists!(Float32Builder, append_floats_to_builder);\n        }\n        DataType::Float64 => {\n            process_primitive_lists!(Float64Builder, append_doubles_to_builder);\n        }\n        DataType::Date32 => {\n            process_primitive_lists!(Date32Builder, append_dates_to_builder);\n        }\n        DataType::Timestamp(TimeUnit::Microsecond, _) => {\n            process_primitive_lists!(TimestampMicrosecondBuilder, append_timestamps_to_builder);\n        }\n        // For complex element types, fall back to per-row dispatch\n        _ => {\n            for i in row_start..row_end {\n                read_row_at!(row, row_addresses_ptr, row_sizes_ptr, i);\n\n                if row.is_null_at(column_idx) {\n                    list_builder.append_null();\n                } else {\n                    append_list_element(element_type, list_builder, &row.get_array(column_idx))?;\n                }\n            }\n        }\n    }\n\n    Ok(())\n}\n\n/// Appends a batch of map values to the map builder with a single type dispatch.\n/// This moves type dispatch from O(rows × 2) to O(2), improving performance for maps.\n#[allow(clippy::too_many_arguments)]\nfn append_map_column_batch(\n    row_addresses_ptr: *mut jlong,\n    row_sizes_ptr: *mut jint,\n    row_start: usize,\n    row_end: usize,\n    schema: &[DataType],\n    column_idx: usize,\n    field: &arrow::datatypes::FieldRef,\n    map_builder: &mut MapBuilder<Box<dyn ArrayBuilder>, Box<dyn ArrayBuilder>>,\n) -> Result<(), CometError> {\n    let mut row = SparkUnsafeRow::new(schema);\n    let (key_field, value_field, _) = get_map_key_value_fields(field)?;\n    let key_type = key_field.data_type();\n    let value_type = value_field.data_type();\n\n    // Helper macro for processing maps with primitive key/value types\n    // Uses scoped borrows to avoid borrow checker conflicts\n    macro_rules! 
process_primitive_maps {\n        ($key_builder:ty, $key_append:ident, $val_builder:ty, $val_append:ident) => {{\n            for i in row_start..row_end {\n                read_row_at!(row, row_addresses_ptr, row_sizes_ptr, i);\n\n                if row.is_null_at(column_idx) {\n                    map_builder.append(false)?;\n                } else {\n                    let map = row.get_map(column_idx);\n                    // Process keys in a scope so borrow ends\n                    {\n                        let keys_builder = map_builder\n                            .keys()\n                            .as_any_mut()\n                            .downcast_mut::<$key_builder>()\n                            .expect(stringify!($key_builder));\n                        map.keys.$key_append::<false>(keys_builder);\n                    }\n                    // Process values in a scope so borrow ends\n                    {\n                        let values_builder = map_builder\n                            .values()\n                            .as_any_mut()\n                            .downcast_mut::<$val_builder>()\n                            .expect(stringify!($val_builder));\n                        map.values.$val_append::<true>(values_builder);\n                    }\n                    map_builder.append(true)?;\n                }\n            }\n        }};\n    }\n\n    // Optimize common map type combinations\n    match (key_type, value_type) {\n        // Map<Int64, Int64>\n        (DataType::Int64, DataType::Int64) => {\n            process_primitive_maps!(\n                Int64Builder,\n                append_longs_to_builder,\n                Int64Builder,\n                append_longs_to_builder\n            );\n        }\n        // Map<Int64, Float64>\n        (DataType::Int64, DataType::Float64) => {\n            process_primitive_maps!(\n                Int64Builder,\n                append_longs_to_builder,\n                Float64Builder,\n                append_doubles_to_builder\n            );\n        }\n        // Map<Int32, Int32>\n        (DataType::Int32, DataType::Int32) => {\n            process_primitive_maps!(\n                Int32Builder,\n                append_ints_to_builder,\n                Int32Builder,\n                append_ints_to_builder\n            );\n        }\n        // Map<Int32, Int64>\n        (DataType::Int32, DataType::Int64) => {\n            process_primitive_maps!(\n                Int32Builder,\n                append_ints_to_builder,\n                Int64Builder,\n                append_longs_to_builder\n            );\n        }\n        // For other types, fall back to per-row dispatch\n        _ => {\n            for i in row_start..row_end {\n                read_row_at!(row, row_addresses_ptr, row_sizes_ptr, i);\n\n                if row.is_null_at(column_idx) {\n                    map_builder.append(false)?;\n                } else {\n                    append_map_elements(field, map_builder, &row.get_map(column_idx))?;\n                }\n            }\n        }\n    }\n\n    Ok(())\n}\n\n/// Appends struct fields to the struct builder using field-major order.\n/// This processes one field at a time across all rows, which moves type dispatch\n/// outside the row loop (O(fields) dispatches instead of O(rows × fields)).\n#[allow(clippy::redundant_closure_call, clippy::too_many_arguments)]\nfn append_struct_fields_field_major(\n    row_addresses_ptr: *mut jlong,\n    row_sizes_ptr: *mut jint,\n    row_start: 
usize,\n    row_end: usize,\n    parent_row: &mut SparkUnsafeRow,\n    column_idx: usize,\n    struct_builder: &mut StructBuilder,\n    fields: &arrow::datatypes::Fields,\n) -> Result<(), CometError> {\n    let num_rows = row_end - row_start;\n    let num_fields = fields.len();\n\n    // First pass: Build struct validity and collect which structs are null\n    // We use a Vec<bool> for simplicity; could use a bitset for better memory\n    let mut struct_is_null = Vec::with_capacity(num_rows);\n\n    for i in row_start..row_end {\n        read_row_at!(parent_row, row_addresses_ptr, row_sizes_ptr, i);\n\n        let is_null = parent_row.is_null_at(column_idx);\n        struct_is_null.push(is_null);\n\n        if is_null {\n            struct_builder.append_null();\n        } else {\n            struct_builder.append(true);\n        }\n    }\n\n    // Helper macro for processing primitive fields\n    macro_rules! process_field {\n        ($builder_type:ty, $field_idx:expr, $get_value:expr) => {{\n            let field_builder = get_field_builder!(struct_builder, $builder_type, $field_idx);\n\n            for (row_idx, i) in (row_start..row_end).enumerate() {\n                if struct_is_null[row_idx] {\n                    // Struct is null, field is also null\n                    field_builder.append_null();\n                } else {\n                    read_row_at!(parent_row, row_addresses_ptr, row_sizes_ptr, i);\n                    let nested_row = parent_row.get_struct(column_idx, num_fields);\n\n                    if nested_row.is_null_at($field_idx) {\n                        field_builder.append_null();\n                    } else {\n                        field_builder.append_value($get_value(&nested_row, $field_idx));\n                    }\n                }\n            }\n        }};\n    }\n\n    // Second pass: Process each field across all rows\n    for (field_idx, field) in fields.iter().enumerate() {\n        match field.data_type() {\n            DataType::Boolean => {\n                process_field!(BooleanBuilder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_boolean(idx));\n            }\n            DataType::Int8 => {\n                process_field!(Int8Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_byte(idx));\n            }\n            DataType::Int16 => {\n                process_field!(Int16Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_short(idx));\n            }\n            DataType::Int32 => {\n                process_field!(Int32Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_int(idx));\n            }\n            DataType::Int64 => {\n                process_field!(Int64Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_long(idx));\n            }\n            DataType::Float32 => {\n                process_field!(Float32Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_float(idx));\n            }\n            DataType::Float64 => {\n                process_field!(Float64Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_double(idx));\n            }\n            DataType::Date32 => {\n                process_field!(Date32Builder, field_idx, |row: &SparkUnsafeRow, idx| row\n                    .get_date(idx));\n            }\n            DataType::Timestamp(TimeUnit::Microsecond, _) => {\n                process_field!(\n                    
TimestampMicrosecondBuilder,\n                    field_idx,\n                    |row: &SparkUnsafeRow, idx| row.get_timestamp(idx)\n                );\n            }\n            DataType::Binary => {\n                let field_builder = get_field_builder!(struct_builder, BinaryBuilder, field_idx);\n\n                for (row_idx, i) in (row_start..row_end).enumerate() {\n                    if struct_is_null[row_idx] {\n                        field_builder.append_null();\n                    } else {\n                        read_row_at!(parent_row, row_addresses_ptr, row_sizes_ptr, i);\n                        let nested_row = parent_row.get_struct(column_idx, num_fields);\n\n                        if nested_row.is_null_at(field_idx) {\n                            field_builder.append_null();\n                        } else {\n                            field_builder.append_value(nested_row.get_binary(field_idx));\n                        }\n                    }\n                }\n            }\n            DataType::Utf8 => {\n                let field_builder = get_field_builder!(struct_builder, StringBuilder, field_idx);\n\n                for (row_idx, i) in (row_start..row_end).enumerate() {\n                    if struct_is_null[row_idx] {\n                        field_builder.append_null();\n                    } else {\n                        read_row_at!(parent_row, row_addresses_ptr, row_sizes_ptr, i);\n                        let nested_row = parent_row.get_struct(column_idx, num_fields);\n\n                        if nested_row.is_null_at(field_idx) {\n                            field_builder.append_null();\n                        } else {\n                            field_builder.append_value(nested_row.get_string(field_idx));\n                        }\n                    }\n                }\n            }\n            DataType::Decimal128(p, _) => {\n                let p = *p;\n                let field_builder =\n                    get_field_builder!(struct_builder, Decimal128Builder, field_idx);\n\n                for (row_idx, i) in (row_start..row_end).enumerate() {\n                    if struct_is_null[row_idx] {\n                        field_builder.append_null();\n                    } else {\n                        read_row_at!(parent_row, row_addresses_ptr, row_sizes_ptr, i);\n                        let nested_row = parent_row.get_struct(column_idx, num_fields);\n\n                        if nested_row.is_null_at(field_idx) {\n                            field_builder.append_null();\n                        } else {\n                            field_builder.append_value(nested_row.get_decimal(field_idx, p));\n                        }\n                    }\n                }\n            }\n            // For nested structs, apply field-major processing recursively\n            DataType::Struct(nested_fields) => {\n                let nested_builder = get_field_builder!(struct_builder, StructBuilder, field_idx);\n\n                // Collect nested struct addresses and sizes in one pass, building validity\n                let mut nested_addresses: Vec<jlong> = Vec::with_capacity(num_rows);\n                let mut nested_sizes: Vec<jint> = Vec::with_capacity(num_rows);\n                let mut nested_is_null: Vec<bool> = Vec::with_capacity(num_rows);\n\n                for (row_idx, i) in (row_start..row_end).enumerate() {\n                    if struct_is_null[row_idx] {\n                        // Parent struct is null, nested struct is 
also null\n                        nested_builder.append_null();\n                        nested_is_null.push(true);\n                        nested_addresses.push(0);\n                        nested_sizes.push(0);\n                    } else {\n                        read_row_at!(parent_row, row_addresses_ptr, row_sizes_ptr, i);\n                        let parent_struct = parent_row.get_struct(column_idx, num_fields);\n\n                        if parent_struct.is_null_at(field_idx) {\n                            nested_builder.append_null();\n                            nested_is_null.push(true);\n                            nested_addresses.push(0);\n                            nested_sizes.push(0);\n                        } else {\n                            nested_builder.append(true);\n                            nested_is_null.push(false);\n                            // Get nested struct address and size\n                            let nested_row =\n                                parent_struct.get_struct(field_idx, nested_fields.len());\n                            nested_addresses.push(nested_row.get_row_addr());\n                            nested_sizes.push(nested_row.get_row_size());\n                        }\n                    }\n                }\n\n                // Recursively process nested struct fields in field-major order\n                append_nested_struct_fields_field_major(\n                    &nested_addresses,\n                    &nested_sizes,\n                    &nested_is_null,\n                    nested_builder,\n                    nested_fields,\n                )?;\n            }\n            // For list and map, fall back to append_field since they have variable-length elements\n            dt @ (DataType::List(_) | DataType::Map(_, _)) => {\n                for (row_idx, i) in (row_start..row_end).enumerate() {\n                    if struct_is_null[row_idx] {\n                        let null_row = SparkUnsafeRow::default();\n                        append_field(dt, struct_builder, &null_row, field_idx)?;\n                    } else {\n                        read_row_at!(parent_row, row_addresses_ptr, row_sizes_ptr, i);\n                        let nested_row = parent_row.get_struct(column_idx, num_fields);\n                        append_field(dt, struct_builder, &nested_row, field_idx)?;\n                    }\n                }\n            }\n            _ => {\n                unreachable!(\n                    \"Unsupported data type of struct field: {:?}\",\n                    field.data_type()\n                )\n            }\n        }\n    }\n\n    Ok(())\n}\n\n/// Appends column of top rows to the given array builder.\n///\n/// # Safety\n///\n/// The caller must ensure:\n/// - `row_addresses_ptr` points to an array of at least `row_end` jlong values\n/// - `row_sizes_ptr` points to an array of at least `row_end` jint values\n/// - Each address in `row_addresses_ptr[row_start..row_end]` points to valid Spark UnsafeRow data\n/// - The memory remains valid for the duration of this function call\n///\n/// These invariants are guaranteed when called from JNI with arrays provided by the JVM.\n#[allow(clippy::redundant_closure_call, clippy::too_many_arguments)]\nfn append_columns(\n    row_addresses_ptr: *mut jlong,\n    row_sizes_ptr: *mut jint,\n    row_start: usize,\n    row_end: usize,\n    schema: &[DataType],\n    column_idx: usize,\n    builder: &mut Box<dyn ArrayBuilder>,\n    prefer_dictionary_ratio: f64,\n) -> Result<(), 
CometError> {\n    /// A macro for generating code of appending values into Arrow array builders.\n    macro_rules! append_column_to_builder {\n        ($builder_type:ty, $accessor:expr) => {{\n            let element_builder = builder\n                .as_any_mut()\n                .downcast_mut::<$builder_type>()\n                .expect(stringify!($builder_type));\n            let mut row = SparkUnsafeRow::new(schema);\n\n            for i in row_start..row_end {\n                read_row_at!(row, row_addresses_ptr, row_sizes_ptr, i);\n\n                let is_null = row.is_null_at(column_idx);\n\n                if is_null {\n                    // The element value is null.\n                    // Append a null value to the element builder.\n                    element_builder.append_null();\n                } else {\n                    $accessor(element_builder, &row, column_idx);\n                }\n            }\n        }};\n    }\n\n    let dt = &schema[column_idx];\n\n    match dt {\n        DataType::Boolean => {\n            append_column_to_builder!(\n                BooleanBuilder,\n                |builder: &mut BooleanBuilder, row: &SparkUnsafeRow, idx| builder\n                    .append_value(row.get_boolean(idx))\n            );\n        }\n        DataType::Int8 => {\n            append_column_to_builder!(\n                Int8Builder,\n                |builder: &mut Int8Builder, row: &SparkUnsafeRow, idx| builder\n                    .append_value(row.get_byte(idx))\n            );\n        }\n        DataType::Int16 => {\n            append_column_to_builder!(\n                Int16Builder,\n                |builder: &mut Int16Builder, row: &SparkUnsafeRow, idx| builder\n                    .append_value(row.get_short(idx))\n            );\n        }\n        DataType::Int32 => {\n            append_column_to_builder!(\n                Int32Builder,\n                |builder: &mut Int32Builder, row: &SparkUnsafeRow, idx| builder\n                    .append_value(row.get_int(idx))\n            );\n        }\n        DataType::Int64 => {\n            append_column_to_builder!(\n                Int64Builder,\n                |builder: &mut Int64Builder, row: &SparkUnsafeRow, idx| builder\n                    .append_value(row.get_long(idx))\n            );\n        }\n        DataType::Float32 => {\n            append_column_to_builder!(\n                Float32Builder,\n                |builder: &mut Float32Builder, row: &SparkUnsafeRow, idx| builder\n                    .append_value(row.get_float(idx))\n            );\n        }\n        DataType::Float64 => {\n            append_column_to_builder!(\n                Float64Builder,\n                |builder: &mut Float64Builder, row: &SparkUnsafeRow, idx| builder\n                    .append_value(row.get_double(idx))\n            );\n        }\n        DataType::Decimal128(p, _) => {\n            append_column_to_builder!(\n                Decimal128Builder,\n                |builder: &mut Decimal128Builder, row: &SparkUnsafeRow, idx| builder\n                    .append_value(row.get_decimal(idx, *p))\n            );\n        }\n        DataType::Utf8 => {\n            if prefer_dictionary_ratio > 1.0 {\n                append_column_to_builder!(\n                    StringDictionaryBuilder<Int32Type>,\n                    |builder: &mut StringDictionaryBuilder<Int32Type>,\n                     row: &SparkUnsafeRow,\n                     idx| builder.append_value(row.get_string(idx))\n                );\n        
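    // Note: builder_to_array() keeps this dictionary only if the number of keys\n            // exceeds prefer_dictionary_ratio times the number of distinct values;\n            // otherwise the result is cast back to a plain string array.\n        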
    } else {\n                append_column_to_builder!(\n                    StringBuilder,\n                    |builder: &mut StringBuilder, row: &SparkUnsafeRow, idx| builder\n                        .append_value(row.get_string(idx))\n                );\n            }\n        }\n        DataType::Binary => {\n            if prefer_dictionary_ratio > 1.0 {\n                append_column_to_builder!(\n                    BinaryDictionaryBuilder<Int32Type>,\n                    |builder: &mut BinaryDictionaryBuilder<Int32Type>,\n                     row: &SparkUnsafeRow,\n                     idx| builder.append_value(row.get_binary(idx))\n                );\n            } else {\n                append_column_to_builder!(\n                    BinaryBuilder,\n                    |builder: &mut BinaryBuilder, row: &SparkUnsafeRow, idx| builder\n                        .append_value(row.get_binary(idx))\n                );\n            }\n        }\n        DataType::Date32 => {\n            append_column_to_builder!(\n                Date32Builder,\n                |builder: &mut Date32Builder, row: &SparkUnsafeRow, idx| builder\n                    .append_value(row.get_date(idx))\n            );\n        }\n        DataType::Timestamp(TimeUnit::Microsecond, _) => {\n            append_column_to_builder!(\n                TimestampMicrosecondBuilder,\n                |builder: &mut TimestampMicrosecondBuilder, row: &SparkUnsafeRow, idx| builder\n                    .append_value(row.get_timestamp(idx))\n            );\n        }\n        DataType::Map(field, _) => {\n            let map_builder = downcast_builder_ref!(\n                MapBuilder<Box<dyn ArrayBuilder>, Box<dyn ArrayBuilder>>,\n                builder\n            );\n            // Use batched processing for better performance\n            append_map_column_batch(\n                row_addresses_ptr,\n                row_sizes_ptr,\n                row_start,\n                row_end,\n                schema,\n                column_idx,\n                field,\n                map_builder,\n            )?;\n        }\n        DataType::List(field) => {\n            let list_builder = downcast_builder_ref!(ListBuilder<Box<dyn ArrayBuilder>>, builder);\n            // Use batched processing for better performance\n            append_list_column_batch(\n                row_addresses_ptr,\n                row_sizes_ptr,\n                row_start,\n                row_end,\n                schema,\n                column_idx,\n                field.data_type(),\n                list_builder,\n            )?;\n        }\n        DataType::Struct(fields) => {\n            let struct_builder = builder\n                .as_any_mut()\n                .downcast_mut::<StructBuilder>()\n                .expect(\"StructBuilder\");\n            let mut row = SparkUnsafeRow::new(schema);\n\n            // Use field-major processing to avoid per-row type dispatch\n            append_struct_fields_field_major(\n                row_addresses_ptr,\n                row_sizes_ptr,\n                row_start,\n                row_end,\n                &mut row,\n                column_idx,\n                struct_builder,\n                fields,\n            )?;\n        }\n        _ => {\n            unreachable!(\"Unsupported data type of column: {:?}\", dt)\n        }\n    }\n\n    Ok(())\n}\n\nfn make_builders(\n    dt: &DataType,\n    row_num: usize,\n    prefer_dictionary_ratio: f64,\n) -> Result<Box<dyn ArrayBuilder>, CometError> {\n 
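   // A prefer_dictionary_ratio greater than 1.0 enables dictionary builders for\n    // top-level Utf8/Binary columns; the recursive calls below pass 1.0 so that\n    // nested map entries, list elements, and struct fields stay plain-encoded.\n 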
   let builder: Box<dyn ArrayBuilder> = match dt {\n        DataType::Boolean => Box::new(BooleanBuilder::with_capacity(row_num)),\n        DataType::Int8 => Box::new(Int8Builder::with_capacity(row_num)),\n        DataType::Int16 => Box::new(Int16Builder::with_capacity(row_num)),\n        DataType::Int32 => Box::new(Int32Builder::with_capacity(row_num)),\n        DataType::Int64 => Box::new(Int64Builder::with_capacity(row_num)),\n        DataType::Float32 => Box::new(Float32Builder::with_capacity(row_num)),\n        DataType::Float64 => Box::new(Float64Builder::with_capacity(row_num)),\n        DataType::Decimal128(_, _) => {\n            Box::new(Decimal128Builder::with_capacity(row_num).with_data_type(dt.clone()))\n        }\n        DataType::Utf8 => {\n            if prefer_dictionary_ratio > 1.0 {\n                Box::new(StringDictionaryBuilder::<Int32Type>::with_capacity(\n                    row_num / 2,\n                    row_num,\n                    1024,\n                ))\n            } else {\n                Box::new(StringBuilder::with_capacity(row_num, 1024))\n            }\n        }\n        DataType::Binary => {\n            if prefer_dictionary_ratio > 1.0 {\n                Box::new(BinaryDictionaryBuilder::<Int32Type>::with_capacity(\n                    row_num / 2,\n                    row_num,\n                    1024,\n                ))\n            } else {\n                Box::new(BinaryBuilder::with_capacity(row_num, 1024))\n            }\n        }\n        DataType::Date32 => Box::new(Date32Builder::with_capacity(row_num)),\n        DataType::Timestamp(TimeUnit::Microsecond, _) => {\n            Box::new(TimestampMicrosecondBuilder::with_capacity(row_num).with_data_type(dt.clone()))\n        }\n        DataType::Map(field, _) => {\n            let (key_field, value_field, map_field_names) = get_map_key_value_fields(field)?;\n            let key_dt = key_field.data_type();\n            let value_dt = value_field.data_type();\n            let key_builder = make_builders(key_dt, NESTED_TYPE_BUILDER_CAPACITY, 1.0)?;\n            let value_builder = make_builders(value_dt, NESTED_TYPE_BUILDER_CAPACITY, 1.0)?;\n\n            Box::new(\n                MapBuilder::new(Some(map_field_names), key_builder, value_builder)\n                    .with_values_field(Arc::clone(value_field)),\n            )\n        }\n        DataType::List(field) => {\n            // Disable dictionary encoding for array element\n            let value_builder =\n                make_builders(field.data_type(), NESTED_TYPE_BUILDER_CAPACITY, 1.0)?;\n\n            // Needed to overwrite default ListBuilder creation having the incoming field schema to be driving\n            let value_field = Arc::clone(field);\n\n            Box::new(ListBuilder::new(value_builder).with_field(value_field))\n        }\n        DataType::Struct(fields) => {\n            let field_builders = fields\n                .iter()\n                // Disable dictionary encoding for struct fields\n                .map(|field| make_builders(field.data_type(), row_num, 1.0))\n                .collect::<Result<Vec<_>, _>>()?;\n\n            Box::new(StructBuilder::new(fields.clone(), field_builders))\n        }\n        _ => return Err(CometError::Internal(format!(\"Unsupported type: {dt:?}\"))),\n    };\n\n    Ok(builder)\n}\n\n/// Processes a sorted row partition and writes the result to the given output path.\n#[allow(clippy::too_many_arguments)]\npub fn process_sorted_row_partition(\n    row_num: usize,\n    
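// Upper bound on the number of rows encoded into each output batch\n    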
batch_size: usize,\n    row_addresses_ptr: *mut jlong,\n    row_sizes_ptr: *mut jint,\n    schema: &[DataType],\n    output_path: String,\n    prefer_dictionary_ratio: f64,\n    checksum_enabled: bool,\n    checksum_algo: i32,\n    // This is the checksum value passed in from Spark side, and is getting updated for\n    // each shuffle partition Spark processes. It is called \"initial\" here to indicate\n    // this is the initial checksum for this method, as it also gets updated iteratively\n    // inside the loop within the method across batches.\n    initial_checksum: Option<u32>,\n    codec: &CompressionCodec,\n) -> Result<(i64, Option<u32>), CometError> {\n    // The current row number we are reading\n    let mut current_row = 0;\n    // Total number of bytes written\n    let mut written = 0;\n    // The current checksum value. This is updated incrementally in the following loop.\n    let mut current_checksum = if checksum_enabled {\n        Some(Checksum::try_new(checksum_algo, initial_checksum)?)\n    } else {\n        None\n    };\n\n    // Create builders once and reuse them across batches.\n    // After finish() is called, builders are reset and can be reused.\n    let mut data_builders: Vec<Box<dyn ArrayBuilder>> = vec![];\n    schema.iter().try_for_each(|dt| {\n        make_builders(dt, batch_size, prefer_dictionary_ratio)\n            .map(|builder| data_builders.push(builder))?;\n        Ok::<(), CometError>(())\n    })?;\n\n    // Open the output file once and reuse it across batches\n    let mut output_data = OpenOptions::new()\n        .create(true)\n        .append(true)\n        .open(&output_path)?;\n\n    // Reusable buffer for serialized batch data\n    let mut frozen: Vec<u8> = Vec::new();\n\n    while current_row < row_num {\n        let n = std::cmp::min(batch_size, row_num - current_row);\n\n        // Appends rows to the array builders.\n        // For each column, iterating over rows and appending values to corresponding array\n        // builder.\n        for (idx, builder) in data_builders.iter_mut().enumerate() {\n            append_columns(\n                row_addresses_ptr,\n                row_sizes_ptr,\n                current_row,\n                current_row + n,\n                schema,\n                idx,\n                builder,\n                prefer_dictionary_ratio,\n            )?;\n        }\n\n        // Writes a record batch generated from the array builders to the output file.\n        // Note: builder_to_array calls finish() which resets the builder, making it reusable for the next batch.\n        let array_refs: Result<Vec<ArrayRef>, _> = data_builders\n            .iter_mut()\n            .zip(schema.iter())\n            .map(|(builder, datatype)| builder_to_array(builder, datatype, prefer_dictionary_ratio))\n            .collect();\n        let batch = make_batch(array_refs?, n)?;\n\n        frozen.clear();\n        let mut cursor = Cursor::new(&mut frozen);\n\n        // we do not collect metrics in Native_writeSortedFileNative\n        let ipc_time = Time::default();\n        let block_writer = ShuffleBlockWriter::try_new(batch.schema().as_ref(), codec.clone())?;\n        written += block_writer.write_batch(&batch, &mut cursor, &ipc_time)?;\n\n        if let Some(checksum) = &mut current_checksum {\n            checksum.update(&mut cursor)?;\n        }\n\n        output_data.write_all(&frozen)?;\n        current_row += n;\n    }\n\n    Ok((written as i64, current_checksum.map(|c| c.finalize())))\n}\n\nfn builder_to_array(\n    
builder: &mut Box<dyn ArrayBuilder>,\n    datatype: &DataType,\n    prefer_dictionary_ratio: f64,\n) -> Result<ArrayRef, CometError> {\n    match datatype {\n        // We don't have redundant dictionary values which are not referenced by any key.\n        // So the reasonable ratio must be larger than 1.0.\n        DataType::Utf8 if prefer_dictionary_ratio > 1.0 => {\n            let builder = builder\n                .as_any_mut()\n                .downcast_mut::<StringDictionaryBuilder<Int32Type>>()\n                .expect(\"StringDictionaryBuilder<Int32Type>\");\n\n            let dict_array = builder.finish();\n            let num_keys = dict_array.keys().len();\n            let num_values = dict_array.values().len();\n\n            if num_keys as f64 > num_values as f64 * prefer_dictionary_ratio {\n                // The number of keys is larger than the number of distinct values times the\n                // ratio. The dictionary is efficient, so we return it directly.\n                Ok(Arc::new(dict_array))\n            } else {\n                // If the dictionary is not efficient, we convert it to a plain string array.\n                Ok(cast(&dict_array, &DataType::Utf8)?)\n            }\n        }\n        DataType::Binary if prefer_dictionary_ratio > 1.0 => {\n            let builder = builder\n                .as_any_mut()\n                .downcast_mut::<BinaryDictionaryBuilder<Int32Type>>()\n                .expect(\"BinaryDictionaryBuilder<Int32Type>\");\n\n            let dict_array = builder.finish();\n            let num_keys = dict_array.keys().len();\n            let num_values = dict_array.values().len();\n\n            if num_keys as f64 > num_values as f64 * prefer_dictionary_ratio {\n                // The number of keys is larger than the number of distinct values times the\n                // ratio. The dictionary is efficient, so we return it directly.\n                Ok(Arc::new(dict_array))\n            } else {\n                // If the dictionary is not efficient, we convert it to a plain binary array.\n                Ok(cast(&dict_array, &DataType::Binary)?)\n            }\n        }\n        _ => Ok(builder.finish()),\n    }\n}\n\nfn make_batch(arrays: Vec<ArrayRef>, row_count: usize) -> Result<RecordBatch, ArrowError> {\n    let fields = arrays\n        .iter()\n        .enumerate()\n        .map(|(i, array)| Field::new(format!(\"c{i}\"), array.data_type().clone(), true))\n        .collect::<Vec<_>>();\n    let schema = Arc::new(Schema::new(fields));\n    let options = RecordBatchOptions::new().with_row_count(Option::from(row_count));\n    RecordBatch::try_new_with_options(schema, arrays, &options)\n}\n\n#[cfg(test)]\nmod test {\n    use arrow::datatypes::Fields;\n\n    use super::*;\n\n    #[test]\n    fn test_append_null_row_to_struct_builder() {\n        let data_type = DataType::Struct(Fields::from(vec![\n            Field::new(\"a\", DataType::Boolean, true),\n            Field::new(\"b\", DataType::Boolean, true),\n        ]));\n        let fields = Fields::from(vec![Field::new(\"st\", data_type.clone(), true)]);\n        let mut struct_builder = StructBuilder::from_fields(fields, 1);\n        let row = SparkUnsafeRow::default();\n        append_field(&data_type, &mut struct_builder, &row, 0).expect(\"append field\");\n        struct_builder.append_null();\n        let struct_array = struct_builder.finish();\n        assert_eq!(struct_array.len(), 1);\n        assert!(struct_array.is_null(0));\n    }\n\n    #[test]\n    fn test_append_null_struct_field_to_struct_builder() {\n        let data_type = DataType::Struct(Fields::from(vec![\n            Field::new(\"a\", DataType::Boolean, true),\n            Field::new(\"b\", DataType::Boolean, true),\n        ]));\n        let fields = Fields::from(vec![Field::new(\"st\", data_type.clone(), true)]);\n        let mut struct_builder = StructBuilder::from_fields(fields, 1);\n        let mut row = SparkUnsafeRow::new_with_num_fields(1);\n        // 8 bytes null bitset + 8 bytes field value = 16 bytes\n        // Set bit 0 in the null bitset to mark field 0 as null\n        // Use aligned buffer to match real Spark UnsafeRow layout (8-byte aligned)\n        #[repr(align(8))]\n        struct Aligned([u8; 16]);\n        let mut data = Aligned([0u8; 16]);\n        data.0[0] = 1;\n        row.point_to_slice(&data.0);\n        append_field(&data_type, &mut struct_builder, &row, 0).expect(\"append field\");\n        struct_builder.append_null();\n        let struct_array = struct_builder.finish();\n        assert_eq!(struct_array.len(), 1);\n        assert!(struct_array.is_null(0));\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/spark_unsafe/unsafe_object.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse super::list::SparkUnsafeArray;\nuse super::map::SparkUnsafeMap;\nuse super::row::SparkUnsafeRow;\nuse datafusion_comet_common::bytes_to_i128;\nuse std::str::from_utf8;\n\nconst MAX_LONG_DIGITS: u8 = 18;\n\n/// A common trait for Spark Unsafe classes that can be used to access the underlying data,\n/// e.g., `UnsafeRow` and `UnsafeArray`. This defines a set of methods that can be used to\n/// access the underlying data with index.\n///\n/// # Safety\n///\n/// Implementations must ensure that:\n/// - `get_row_addr()` returns a valid pointer to JVM-allocated memory\n/// - `get_element_offset()` returns a valid pointer within the row/array data region\n/// - The memory layout follows Spark's UnsafeRow/UnsafeArray format\n/// - The memory remains valid for the lifetime of the object (guaranteed by JVM ownership)\n///\n/// All accessor methods (get_boolean, get_int, etc.) use unsafe pointer operations but are\n/// safe to call as long as:\n/// - The index is within bounds (caller's responsibility)\n/// - The object was constructed from valid Spark UnsafeRow/UnsafeArray data\n///\n/// # Alignment\n///\n/// Primitive accessor methods are implemented separately for each type because they have\n/// different alignment guarantees:\n/// - `SparkUnsafeRow`: All field offsets are 8-byte aligned (bitset width is a multiple of 8,\n///   and each field slot is 8 bytes), so accessors use aligned `ptr::read()`.\n/// - `SparkUnsafeArray`: The array base address may be unaligned when nested within a row's\n///   variable-length region, so accessors use `ptr::read_unaligned()`.\npub trait SparkUnsafeObject {\n    /// Returns the address of the row.\n    fn get_row_addr(&self) -> i64;\n\n    /// Returns the offset of the element at the given index.\n    fn get_element_offset(&self, index: usize, element_size: usize) -> *const u8;\n\n    fn get_boolean(&self, index: usize) -> bool;\n    fn get_byte(&self, index: usize) -> i8;\n    fn get_short(&self, index: usize) -> i16;\n    fn get_int(&self, index: usize) -> i32;\n    fn get_long(&self, index: usize) -> i64;\n    fn get_float(&self, index: usize) -> f32;\n    fn get_double(&self, index: usize) -> f64;\n    fn get_date(&self, index: usize) -> i32;\n    fn get_timestamp(&self, index: usize) -> i64;\n\n    /// Returns the offset and length of the element at the given index.\n    #[inline]\n    fn get_offset_and_len(&self, index: usize) -> (i32, i32) {\n        let offset_and_size = self.get_long(index);\n        let offset = (offset_and_size >> 32) as i32;\n        let len = offset_and_size as i32;\n        (offset, len)\n    }\n\n    /// Returns string value at the given index of the object.\n    fn get_string(&self, index: usize) -> 
&str {\n        let (offset, len) = self.get_offset_and_len(index);\n        let addr = self.get_row_addr() + offset as i64;\n        // SAFETY: addr points to valid UTF-8 string data within the variable-length region.\n        // Offset and length are read from the fixed-length portion of the row/array.\n        debug_assert!(addr != 0, \"get_string: null address at index {index}\");\n        debug_assert!(\n            len >= 0,\n            \"get_string: negative length {len} at index {index}\"\n        );\n        let slice: &[u8] = unsafe { std::slice::from_raw_parts(addr as *const u8, len as usize) };\n\n        from_utf8(slice).unwrap()\n    }\n\n    /// Returns binary value at the given index of the object.\n    fn get_binary(&self, index: usize) -> &[u8] {\n        let (offset, len) = self.get_offset_and_len(index);\n        let addr = self.get_row_addr() + offset as i64;\n        // SAFETY: addr points to valid binary data within the variable-length region.\n        // Offset and length are read from the fixed-length portion of the row/array.\n        debug_assert!(addr != 0, \"get_binary: null address at index {index}\");\n        debug_assert!(\n            len >= 0,\n            \"get_binary: negative length {len} at index {index}\"\n        );\n        unsafe { std::slice::from_raw_parts(addr as *const u8, len as usize) }\n    }\n\n    /// Returns decimal value at the given index of the object.\n    fn get_decimal(&self, index: usize, precision: u8) -> i128 {\n        if precision <= MAX_LONG_DIGITS {\n            self.get_long(index) as i128\n        } else {\n            let slice = self.get_binary(index);\n            bytes_to_i128(slice)\n        }\n    }\n\n    /// Returns struct value at the given index of the object.\n    fn get_struct(&self, index: usize, num_fields: usize) -> SparkUnsafeRow {\n        let (offset, len) = self.get_offset_and_len(index);\n        let mut row = SparkUnsafeRow::new_with_num_fields(num_fields);\n        row.point_to(self.get_row_addr() + offset as i64, len);\n\n        row\n    }\n\n    /// Returns array value at the given index of the object.\n    fn get_array(&self, index: usize) -> SparkUnsafeArray {\n        let (offset, _) = self.get_offset_and_len(index);\n        SparkUnsafeArray::new(self.get_row_addr() + offset as i64)\n    }\n\n    fn get_map(&self, index: usize) -> SparkUnsafeMap {\n        let (offset, len) = self.get_offset_and_len(index);\n        SparkUnsafeMap::new(self.get_row_addr() + offset as i64, len)\n    }\n}\n\n/// Generates primitive accessor implementations for `SparkUnsafeObject`.\n///\n/// Uses `$read_method` to read typed values from raw pointers:\n/// - `read` for aligned access (SparkUnsafeRow — all offsets are 8-byte aligned)\n/// - `read_unaligned` for potentially unaligned access (SparkUnsafeArray)\nmacro_rules! 
impl_primitive_accessors {\n    ($read_method:ident) => {\n        #[inline]\n        fn get_boolean(&self, index: usize) -> bool {\n            let addr = self.get_element_offset(index, 1);\n            debug_assert!(\n                !addr.is_null(),\n                \"get_boolean: null pointer at index {index}\"\n            );\n            // SAFETY: addr points to valid element data within the row/array region.\n            unsafe { *addr != 0 }\n        }\n\n        #[inline]\n        fn get_byte(&self, index: usize) -> i8 {\n            let addr = self.get_element_offset(index, 1);\n            debug_assert!(!addr.is_null(), \"get_byte: null pointer at index {index}\");\n            // SAFETY: addr points to valid element data (1 byte) within the row/array region.\n            unsafe { *(addr as *const i8) }\n        }\n\n        #[inline]\n        fn get_short(&self, index: usize) -> i16 {\n            let addr = self.get_element_offset(index, 2) as *const i16;\n            debug_assert!(!addr.is_null(), \"get_short: null pointer at index {index}\");\n            // SAFETY: addr points to valid element data (2 bytes) within the row/array region.\n            unsafe { addr.$read_method() }\n        }\n\n        #[inline]\n        fn get_int(&self, index: usize) -> i32 {\n            let addr = self.get_element_offset(index, 4) as *const i32;\n            debug_assert!(!addr.is_null(), \"get_int: null pointer at index {index}\");\n            // SAFETY: addr points to valid element data (4 bytes) within the row/array region.\n            unsafe { addr.$read_method() }\n        }\n\n        #[inline]\n        fn get_long(&self, index: usize) -> i64 {\n            let addr = self.get_element_offset(index, 8) as *const i64;\n            debug_assert!(!addr.is_null(), \"get_long: null pointer at index {index}\");\n            // SAFETY: addr points to valid element data (8 bytes) within the row/array region.\n            unsafe { addr.$read_method() }\n        }\n\n        #[inline]\n        fn get_float(&self, index: usize) -> f32 {\n            let addr = self.get_element_offset(index, 4) as *const f32;\n            debug_assert!(!addr.is_null(), \"get_float: null pointer at index {index}\");\n            // SAFETY: addr points to valid element data (4 bytes) within the row/array region.\n            unsafe { addr.$read_method() }\n        }\n\n        #[inline]\n        fn get_double(&self, index: usize) -> f64 {\n            let addr = self.get_element_offset(index, 8) as *const f64;\n            debug_assert!(!addr.is_null(), \"get_double: null pointer at index {index}\");\n            // SAFETY: addr points to valid element data (8 bytes) within the row/array region.\n            unsafe { addr.$read_method() }\n        }\n\n        #[inline]\n        fn get_date(&self, index: usize) -> i32 {\n            let addr = self.get_element_offset(index, 4) as *const i32;\n            debug_assert!(!addr.is_null(), \"get_date: null pointer at index {index}\");\n            // SAFETY: addr points to valid element data (4 bytes) within the row/array region.\n            unsafe { addr.$read_method() }\n        }\n\n        #[inline]\n        fn get_timestamp(&self, index: usize) -> i64 {\n            let addr = self.get_element_offset(index, 8) as *const i64;\n            debug_assert!(\n                !addr.is_null(),\n                \"get_timestamp: null pointer at index {index}\"\n            );\n            // SAFETY: addr points to valid element data (8 bytes) within the row/array 
region.\n            unsafe { addr.$read_method() }\n        }\n    };\n}\npub(crate) use impl_primitive_accessors;\n"
  },
  {
    "path": "native/shuffle/src/writers/buf_batch_writer.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse super::ShuffleBlockWriter;\nuse arrow::array::RecordBatch;\nuse arrow::compute::kernels::coalesce::BatchCoalescer;\nuse datafusion::physical_plan::metrics::Time;\nuse std::borrow::Borrow;\nuse std::io::{Cursor, Seek, SeekFrom, Write};\n\n/// Write batches to writer while using a buffer to avoid frequent system calls.\n/// The record batches were first written by ShuffleBlockWriter into an internal buffer.\n/// Once the buffer exceeds the max size, the buffer will be flushed to the writer.\n///\n/// Small batches are coalesced using Arrow's [`BatchCoalescer`] before serialization,\n/// producing exactly `batch_size`-row output batches to reduce per-block IPC schema overhead.\n/// The coalescer is lazily initialized on the first write.\npub(crate) struct BufBatchWriter<S: Borrow<ShuffleBlockWriter>, W: Write> {\n    shuffle_block_writer: S,\n    writer: W,\n    buffer: Vec<u8>,\n    buffer_max_size: usize,\n    /// Coalesces small batches into target_batch_size before serialization.\n    /// Lazily initialized on first write to capture the schema.\n    coalescer: Option<BatchCoalescer>,\n    /// Target batch size for coalescing\n    batch_size: usize,\n}\n\nimpl<S: Borrow<ShuffleBlockWriter>, W: Write> BufBatchWriter<S, W> {\n    pub(crate) fn new(\n        shuffle_block_writer: S,\n        writer: W,\n        buffer_max_size: usize,\n        batch_size: usize,\n    ) -> Self {\n        Self {\n            shuffle_block_writer,\n            writer,\n            buffer: vec![],\n            buffer_max_size,\n            coalescer: None,\n            batch_size,\n        }\n    }\n\n    pub(crate) fn write(\n        &mut self,\n        batch: &RecordBatch,\n        encode_time: &Time,\n        write_time: &Time,\n    ) -> datafusion::common::Result<usize> {\n        let coalescer = self\n            .coalescer\n            .get_or_insert_with(|| BatchCoalescer::new(batch.schema(), self.batch_size));\n        coalescer.push_batch(batch.clone())?;\n\n        // Drain completed batches into a local vec so the coalescer borrow ends\n        // before we call write_batch_to_buffer (which borrows &mut self).\n        let mut completed = Vec::new();\n        while let Some(batch) = coalescer.next_completed_batch() {\n            completed.push(batch);\n        }\n\n        let mut bytes_written = 0;\n        for batch in &completed {\n            bytes_written += self.write_batch_to_buffer(batch, encode_time, write_time)?;\n        }\n        Ok(bytes_written)\n    }\n\n    /// Serialize a single batch into the byte buffer, flushing to the writer if needed.\n    fn write_batch_to_buffer(\n        &mut self,\n        batch: &RecordBatch,\n        encode_time: &Time,\n        
write_time: &Time,\n    ) -> datafusion::common::Result<usize> {\n        let mut cursor = Cursor::new(&mut self.buffer);\n        cursor.seek(SeekFrom::End(0))?;\n        let bytes_written =\n            self.shuffle_block_writer\n                .borrow()\n                .write_batch(batch, &mut cursor, encode_time)?;\n        let pos = cursor.position();\n        if pos >= self.buffer_max_size as u64 {\n            let mut write_timer = write_time.timer();\n            self.writer.write_all(&self.buffer)?;\n            write_timer.stop();\n            self.buffer.clear();\n        }\n        Ok(bytes_written)\n    }\n\n    pub(crate) fn flush(\n        &mut self,\n        encode_time: &Time,\n        write_time: &Time,\n    ) -> datafusion::common::Result<()> {\n        // Finish any remaining buffered rows in the coalescer\n        let mut remaining = Vec::new();\n        if let Some(coalescer) = &mut self.coalescer {\n            coalescer.finish_buffered_batch()?;\n            while let Some(batch) = coalescer.next_completed_batch() {\n                remaining.push(batch);\n            }\n        }\n        for batch in &remaining {\n            self.write_batch_to_buffer(batch, encode_time, write_time)?;\n        }\n\n        // Flush the byte buffer to the underlying writer\n        let mut write_timer = write_time.timer();\n        if !self.buffer.is_empty() {\n            self.writer.write_all(&self.buffer)?;\n        }\n        self.writer.flush()?;\n        write_timer.stop();\n        self.buffer.clear();\n        Ok(())\n    }\n}\n\nimpl<S: Borrow<ShuffleBlockWriter>, W: Write + Seek> BufBatchWriter<S, W> {\n    pub(crate) fn writer_stream_position(&mut self) -> datafusion::common::Result<u64> {\n        self.writer.stream_position().map_err(Into::into)\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/writers/checksum.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::spark_crc32c_hasher::SparkCrc32cHasher;\nuse bytes::Buf;\nuse datafusion_comet_jni_bridge::errors::{CometError, CometResult};\nuse simd_adler32::Adler32;\nuse std::default::Default;\nuse std::hash::Hasher;\nuse std::io::{Cursor, SeekFrom};\n\n/// Checksum algorithms for writing IPC bytes.\n#[derive(Clone)]\npub(crate) enum Checksum {\n    /// CRC32 checksum algorithm.\n    CRC32(crc32fast::Hasher),\n    /// Adler32 checksum algorithm.\n    Adler32(Adler32),\n    /// CRC32C checksum algorithm.\n    CRC32C(SparkCrc32cHasher),\n}\n\nimpl Checksum {\n    pub(crate) fn try_new(algo: i32, initial_opt: Option<u32>) -> CometResult<Self> {\n        match algo {\n            0 => {\n                let hasher = if let Some(initial) = initial_opt {\n                    crc32fast::Hasher::new_with_initial(initial)\n                } else {\n                    crc32fast::Hasher::new()\n                };\n                Ok(Checksum::CRC32(hasher))\n            }\n            1 => {\n                let hasher = if let Some(initial) = initial_opt {\n                    // Note that Adler32 initial state is not zero.\n                    // i.e., `Adler32::from_checksum(0)` is not the same as `Adler32::new()`.\n                    Adler32::from_checksum(initial)\n                } else {\n                    Adler32::new()\n                };\n                Ok(Checksum::Adler32(hasher))\n            }\n            2 => {\n                let hasher = if let Some(initial) = initial_opt {\n                    SparkCrc32cHasher::new(initial)\n                } else {\n                    Default::default()\n                };\n                Ok(Checksum::CRC32C(hasher))\n            }\n            _ => Err(CometError::Internal(\n                \"Unsupported checksum algorithm\".to_string(),\n            )),\n        }\n    }\n\n    pub(crate) fn update(&mut self, cursor: &mut Cursor<&mut Vec<u8>>) -> CometResult<()> {\n        match self {\n            Checksum::CRC32(hasher) => {\n                std::io::Seek::seek(cursor, SeekFrom::Start(0))?;\n                hasher.update(cursor.chunk());\n                Ok(())\n            }\n            Checksum::Adler32(hasher) => {\n                std::io::Seek::seek(cursor, SeekFrom::Start(0))?;\n                hasher.write(cursor.chunk());\n                Ok(())\n            }\n            Checksum::CRC32C(hasher) => {\n                std::io::Seek::seek(cursor, SeekFrom::Start(0))?;\n                hasher.write(cursor.chunk());\n                Ok(())\n            }\n        }\n    }\n\n    pub(crate) fn finalize(self) -> u32 {\n        match self {\n            Checksum::CRC32(hasher) => hasher.finalize(),\n         
   Checksum::Adler32(hasher) => hasher.finish(),\n            Checksum::CRC32C(hasher) => hasher.finalize(),\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use std::io::Cursor;\n\n    #[test]\n    fn test_crc32() {\n        let mut checksum = Checksum::try_new(0, None).unwrap();\n        let message = b\"123456789\";\n\n        let mut vector: Vec<u8> = message.to_vec();\n        let mut buff = Cursor::new(&mut vector);\n\n        checksum.update(&mut buff).unwrap();\n        let result = checksum.finalize();\n\n        let expected_crc = 0xcbf43926u32;\n        assert_eq!(result, expected_crc)\n    }\n\n    #[test]\n    fn test_adler32() {\n        let mut checksum = Checksum::try_new(1, None).unwrap();\n        let message = b\"123456789\";\n\n        let mut vector: Vec<u8> = message.to_vec();\n        let mut buff = Cursor::new(&mut vector);\n\n        checksum.update(&mut buff).unwrap();\n        let result = checksum.finalize();\n\n        let expected_crc = 0x091e01deu32;\n        assert_eq!(result, expected_crc)\n    }\n\n    #[test]\n    fn test_crc32c() {\n        let mut checksum = Checksum::try_new(2, None).unwrap();\n        let message = b\"123456789\";\n\n        let mut vector: Vec<u8> = message.to_vec();\n        let mut buff = Cursor::new(&mut vector);\n\n        checksum.update(&mut buff).unwrap();\n        let result = checksum.finalize();\n\n        let expected_crc = 0xe3069283u32;\n        assert_eq!(result, expected_crc)\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/writers/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod buf_batch_writer;\nmod checksum;\nmod shuffle_block_writer;\nmod spill;\n\npub(crate) use buf_batch_writer::BufBatchWriter;\npub(crate) use checksum::Checksum;\npub use shuffle_block_writer::{CompressionCodec, ShuffleBlockWriter};\npub(crate) use spill::PartitionWriter;\n"
  },
  {
    "path": "native/shuffle/src/writers/shuffle_block_writer.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::Schema;\nuse arrow::ipc::writer::StreamWriter;\nuse datafusion::common::DataFusionError;\nuse datafusion::error::Result;\nuse datafusion::physical_plan::metrics::Time;\nuse std::io::{Cursor, Seek, SeekFrom, Write};\n\n/// Compression algorithm applied to shuffle IPC blocks.\n#[derive(Debug, Clone)]\npub enum CompressionCodec {\n    None,\n    Lz4Frame,\n    Zstd(i32),\n    Snappy,\n}\n\n/// Writes a record batch as a length-prefixed, compressed Arrow IPC block.\n#[derive(Clone)]\npub struct ShuffleBlockWriter {\n    codec: CompressionCodec,\n    header_bytes: Vec<u8>,\n}\n\nimpl ShuffleBlockWriter {\n    pub fn try_new(schema: &Schema, codec: CompressionCodec) -> Result<Self> {\n        let header_bytes = Vec::with_capacity(20);\n        let mut cursor = Cursor::new(header_bytes);\n\n        // leave space for compressed message length\n        cursor.seek_relative(8)?;\n\n        // write number of columns because JVM side needs to know how many addresses to allocate\n        let field_count = schema.fields().len();\n        cursor.write_all(&field_count.to_le_bytes())?;\n\n        // write compression codec to header\n        let codec_header = match &codec {\n            CompressionCodec::Snappy => b\"SNAP\",\n            CompressionCodec::Lz4Frame => b\"LZ4_\",\n            CompressionCodec::Zstd(_) => b\"ZSTD\",\n            CompressionCodec::None => b\"NONE\",\n        };\n        cursor.write_all(codec_header)?;\n\n        let header_bytes = cursor.into_inner();\n\n        Ok(Self {\n            codec,\n            header_bytes,\n        })\n    }\n\n    /// Writes given record batch as Arrow IPC bytes into given writer.\n    /// Returns number of bytes written.\n    pub fn write_batch<W: Write + Seek>(\n        &self,\n        batch: &RecordBatch,\n        output: &mut W,\n        ipc_time: &Time,\n    ) -> Result<usize> {\n        if batch.num_rows() == 0 {\n            return Ok(0);\n        }\n\n        let mut timer = ipc_time.timer();\n        let start_pos = output.stream_position()?;\n\n        // write header\n        output.write_all(&self.header_bytes)?;\n\n        let output = match &self.codec {\n            CompressionCodec::None => {\n                let mut arrow_writer = StreamWriter::try_new(output, &batch.schema())?;\n                arrow_writer.write(batch)?;\n                arrow_writer.finish()?;\n                arrow_writer.into_inner()?\n            }\n            CompressionCodec::Lz4Frame => {\n                let mut wtr = lz4_flex::frame::FrameEncoder::new(output);\n                let mut arrow_writer = StreamWriter::try_new(&mut wtr, &batch.schema())?;\n                
arrow_writer.write(batch)?;\n                arrow_writer.finish()?;\n                wtr.finish().map_err(|e| {\n                    DataFusionError::Execution(format!(\"lz4 compression error: {e}\"))\n                })?\n            }\n\n            CompressionCodec::Zstd(level) => {\n                let encoder = zstd::Encoder::new(output, *level)?;\n                let mut arrow_writer = StreamWriter::try_new(encoder, &batch.schema())?;\n                arrow_writer.write(batch)?;\n                arrow_writer.finish()?;\n                let zstd_encoder = arrow_writer.into_inner()?;\n                zstd_encoder.finish()?\n            }\n\n            CompressionCodec::Snappy => {\n                let mut wtr = snap::write::FrameEncoder::new(output);\n                let mut arrow_writer = StreamWriter::try_new(&mut wtr, &batch.schema())?;\n                arrow_writer.write(batch)?;\n                arrow_writer.finish()?;\n                wtr.into_inner().map_err(|e| {\n                    DataFusionError::Execution(format!(\"snappy compression error: {e}\"))\n                })?\n            }\n        };\n\n        // compute the ipc length and reject blocks larger than i32::MAX\n        let end_pos = output.stream_position()?;\n        let ipc_length = end_pos - start_pos - 8;\n        let max_size = i32::MAX as u64;\n        if ipc_length > max_size {\n            return Err(DataFusionError::Execution(format!(\n                \"Shuffle block size {ipc_length} exceeds maximum size of {max_size}. \\\n                Try reducing batch size or increasing compression level\"\n            )));\n        }\n\n        // write the ipc length into the reserved 8-byte prefix\n        output.seek(SeekFrom::Start(start_pos))?;\n        output.write_all(&ipc_length.to_le_bytes())?;\n        output.seek(SeekFrom::Start(end_pos))?;\n\n        timer.stop();\n\n        Ok((end_pos - start_pos) as usize)\n    }\n}\n"
  },
  {
    "path": "native/shuffle/src/writers/spill.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse super::ShuffleBlockWriter;\nuse crate::metrics::ShufflePartitionerMetrics;\nuse crate::partitioners::PartitionedBatchIterator;\nuse crate::writers::buf_batch_writer::BufBatchWriter;\nuse datafusion::common::DataFusionError;\nuse datafusion::execution::disk_manager::RefCountedTempFile;\nuse datafusion::execution::runtime_env::RuntimeEnv;\nuse std::fs::{File, OpenOptions};\n\n/// A temporary disk file for spilling a partition's intermediate shuffle data.\nstruct SpillFile {\n    temp_file: RefCountedTempFile,\n    file: File,\n}\n\n/// Manages encoding and optional disk spilling for a single shuffle partition.\npub(crate) struct PartitionWriter {\n    /// Spill file for intermediate shuffle output for this partition. Each spill event\n    /// will append to this file and the contents will be copied to the shuffle file at\n    /// the end of processing.\n    spill_file: Option<SpillFile>,\n    /// Writer that performs encoding and compression\n    shuffle_block_writer: ShuffleBlockWriter,\n}\n\nimpl PartitionWriter {\n    pub(crate) fn try_new(\n        shuffle_block_writer: ShuffleBlockWriter,\n    ) -> datafusion::common::Result<Self> {\n        Ok(Self {\n            spill_file: None,\n            shuffle_block_writer,\n        })\n    }\n\n    fn ensure_spill_file_created(\n        &mut self,\n        runtime: &RuntimeEnv,\n    ) -> datafusion::common::Result<()> {\n        if self.spill_file.is_none() {\n            // Spill file is not yet created, create it\n            let spill_file = runtime\n                .disk_manager\n                .create_tmp_file(\"shuffle writer spill\")?;\n            let spill_data = OpenOptions::new()\n                .write(true)\n                .create(true)\n                .truncate(true)\n                .open(spill_file.path())\n                .map_err(|e| {\n                    DataFusionError::Execution(format!(\"Error occurred while spilling {e}\"))\n                })?;\n            self.spill_file = Some(SpillFile {\n                temp_file: spill_file,\n                file: spill_data,\n            });\n        }\n        Ok(())\n    }\n\n    pub(crate) fn spill(\n        &mut self,\n        iter: &mut PartitionedBatchIterator,\n        runtime: &RuntimeEnv,\n        metrics: &ShufflePartitionerMetrics,\n        write_buffer_size: usize,\n        batch_size: usize,\n    ) -> datafusion::common::Result<usize> {\n        if let Some(batch) = iter.next() {\n            self.ensure_spill_file_created(runtime)?;\n\n            let total_bytes_written = {\n                let mut buf_batch_writer = BufBatchWriter::new(\n                    &mut self.shuffle_block_writer,\n                    &mut 
self.spill_file.as_mut().unwrap().file,\n                    write_buffer_size,\n                    batch_size,\n                );\n                let mut bytes_written =\n                    buf_batch_writer.write(&batch?, &metrics.encode_time, &metrics.write_time)?;\n                for batch in iter {\n                    let batch = batch?;\n                    bytes_written += buf_batch_writer.write(\n                        &batch,\n                        &metrics.encode_time,\n                        &metrics.write_time,\n                    )?;\n                }\n                buf_batch_writer.flush(&metrics.encode_time, &metrics.write_time)?;\n                bytes_written\n            };\n\n            Ok(total_bytes_written)\n        } else {\n            Ok(0)\n        }\n    }\n\n    pub(crate) fn path(&self) -> Option<&std::path::Path> {\n        self.spill_file\n            .as_ref()\n            .map(|spill_file| spill_file.temp_file.path())\n    }\n\n    #[cfg(test)]\n    pub(crate) fn has_spill_file(&self) -> bool {\n        self.spill_file.is_some()\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/Cargo.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[package]\nname = \"datafusion-comet-spark-expr\"\ndescription = \"DataFusion expressions that emulate Apache Spark's behavior\"\nversion = { workspace = true }\nhomepage = { workspace = true }\nrepository = { workspace = true }\nauthors = { workspace = true }\nreadme = { workspace = true }\nlicense = { workspace = true }\nedition = { workspace = true }\n\n[dependencies]\narrow = { workspace = true }\nchrono = { workspace = true }\ndatafusion = { workspace = true }\nchrono-tz = { workspace = true }\nnum = { workspace = true }\nregex = { workspace = true }\n# preserve_order: needed for get_json_object to match Spark's JSON key ordering\nserde_json = { version = \"1.0\", features = [\"preserve_order\"] }\ndatafusion-comet-common = { workspace = true }\nfutures = { workspace = true }\ntwox-hash = \"2.1.2\"\nrand = { workspace = true }\nhex = \"0.4.3\"\nbase64 = \"0.22.1\"\n\n[dev-dependencies]\narrow = {workspace = true}\ncriterion = { version = \"0.7\", features = [\"async\", \"async_tokio\", \"async_std\"] }\nrand = { workspace = true}\ntokio = { version = \"1\", features = [\"rt-multi-thread\"] }\ndatafusion = { workspace = true, features = [\"sql\"] }\n\n[lib]\nname = \"datafusion_comet_spark_expr\"\npath = \"src/lib.rs\"\n\n[[bench]]\nname = \"cast_from_string\"\nharness = false\n\n[[bench]]\nname = \"cast_numeric\"\nharness = false\n\n[[bench]]\nname = \"conditional\"\nharness = false\n\n[[bench]]\nname = \"decimal_div\"\nharness = false\n\n[[bench]]\nname = \"aggregate\"\nharness = false\n\n[[bench]]\nname = \"bloom_filter_agg\"\nharness = false\n\n[[bench]]\nname = \"padding\"\nharness = false\n\n[[bench]]\nname = \"date_trunc\"\nharness = false\n\n[[bench]]\nname = \"normalize_nan\"\nharness = false\n\n[[bench]]\nname = \"to_csv\"\nharness = false\n\n[[bench]]\nname = \"cast_int_to_timestamp\"\nharness = false\n\n[[test]]\nname = \"test_udf_registration\"\npath = \"tests/spark_expr_reg.rs\"\n\n[[bench]]\nname = \"cast_from_boolean\"\nharness = false\n\n[[bench]]\nname = \"wide_decimal\"\nharness = false\n\n[[bench]]\nname = \"cast_non_int_numeric_timestamp\"\nharness = false\n"
  },
  {
    "path": "native/spark-expr/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# datafusion-comet-spark-expr: Spark-compatible Expressions\n\nThis crate provides Apache Spark-compatible expressions for use with DataFusion and is maintained as part of the\n[Apache DataFusion Comet] subproject.\n\n[Apache DataFusion Comet]: https://github.com/apache/datafusion-comet/\n\n# Deprecation Notice\n\nThe expressions in this crate are being migrated to the [datafusion-spark] crate, which is part of the core DataFusion\nproject.\n\nThe DataFusion Comet project will stop publishing new versions of the `datafusion-comet-spark-expr` crate in the future.\n\nThere is a GitHub issue (https://github.com/apache/datafusion-comet/issues/2084) to track the work of migrating\nfunctionality from this crate to `datafusion-spark`.\n\n[datafusion-spark]: https://docs.rs/datafusion-spark/50.0.0/datafusion_spark/\n"
  },
  {
    "path": "native/spark-expr/benches/aggregate.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.use arrow::array::{ArrayRef, BooleanBuilder, Int32Builder, RecordBatch, StringBuilder};\n\nuse arrow::array::builder::{Decimal128Builder, Int64Builder, StringBuilder};\nuse arrow::array::{ArrayRef, Int64Array, RecordBatch};\nuse arrow::datatypes::SchemaRef;\nuse arrow::datatypes::{DataType, Field, Schema};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::datasource::memory::MemorySourceConfig;\nuse datafusion::datasource::source::DataSourceExec;\nuse datafusion::execution::TaskContext;\nuse datafusion::functions_aggregate::average::avg_udaf;\nuse datafusion::functions_aggregate::sum::sum_udaf;\nuse datafusion::logical_expr::function::AccumulatorArgs;\nuse datafusion::logical_expr::{AggregateUDF, AggregateUDFImpl, EmitTo};\nuse datafusion::physical_expr::aggregate::AggregateExprBuilder;\nuse datafusion::physical_expr::expressions::Column;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::aggregates::{AggregateExec, AggregateMode, PhysicalGroupBy};\nuse datafusion::physical_plan::ExecutionPlan;\nuse datafusion_comet_spark_expr::{AvgDecimal, EvalMode, SumDecimal, SumInteger};\nuse futures::StreamExt;\nuse std::hint::black_box;\nuse std::sync::Arc;\nuse std::time::Duration;\nuse tokio::runtime::Runtime;\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"aggregate\");\n    let num_rows = 8192;\n    let batch = create_record_batch(num_rows);\n    let mut batches = Vec::new();\n    for _ in 0..10 {\n        batches.push(batch.clone());\n    }\n    let partitions = &[batches];\n    let c0: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"c0\", 0));\n    let c1: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"c1\", 1));\n\n    let rt = Runtime::new().unwrap();\n\n    group.bench_function(\"avg_decimal_datafusion\", |b| {\n        let datafusion_sum_decimal = avg_udaf();\n        b.to_async(&rt).iter(|| {\n            black_box(agg_test(\n                partitions,\n                c0.clone(),\n                c1.clone(),\n                datafusion_sum_decimal.clone(),\n                \"avg\",\n            ))\n        })\n    });\n\n    group.bench_function(\"avg_decimal_comet\", |b| {\n        let comet_avg_decimal = Arc::new(AggregateUDF::new_from_impl(AvgDecimal::new(\n            DataType::Decimal128(38, 10),\n            DataType::Decimal128(38, 10),\n            datafusion_comet_spark_expr::EvalMode::Legacy,\n            None,\n            datafusion_comet_spark_expr::create_query_context_map(),\n        )));\n        b.to_async(&rt).iter(|| {\n            black_box(agg_test(\n                partitions,\n                c0.clone(),\n                c1.clone(),\n                
comet_avg_decimal.clone(),\n                \"avg\",\n            ))\n        })\n    });\n\n    group.bench_function(\"sum_decimal_datafusion\", |b| {\n        let datafusion_sum_decimal = sum_udaf();\n        b.to_async(&rt).iter(|| {\n            black_box(agg_test(\n                partitions,\n                c0.clone(),\n                c1.clone(),\n                datafusion_sum_decimal.clone(),\n                \"sum\",\n            ))\n        })\n    });\n\n    group.bench_function(\"sum_decimal_comet\", |b| {\n        let comet_sum_decimal = Arc::new(AggregateUDF::new_from_impl(\n            SumDecimal::try_new(\n                DataType::Decimal128(38, 10),\n                EvalMode::Legacy,\n                None,\n                datafusion_comet_spark_expr::create_query_context_map(),\n            )\n            .unwrap(),\n        ));\n        b.to_async(&rt).iter(|| {\n            black_box(agg_test(\n                partitions,\n                c0.clone(),\n                c1.clone(),\n                comet_sum_decimal.clone(),\n                \"sum\",\n            ))\n        })\n    });\n\n    group.finish();\n\n    // SumInteger benchmarks\n    let mut group = c.benchmark_group(\"sum_integer\");\n    let int_batch = create_int64_record_batch(num_rows);\n    let mut int_batches = Vec::new();\n    for _ in 0..10 {\n        int_batches.push(int_batch.clone());\n    }\n    let int_partitions = &[int_batches];\n    let int_c0: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"c0\", 0));\n    let int_c1: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"c1\", 1));\n\n    group.bench_function(\"sum_int64_datafusion\", |b| {\n        let datafusion_sum = sum_udaf();\n        b.to_async(&rt).iter(|| {\n            black_box(agg_test(\n                int_partitions,\n                int_c0.clone(),\n                int_c1.clone(),\n                datafusion_sum.clone(),\n                \"sum\",\n            ))\n        })\n    });\n\n    group.bench_function(\"sum_int64_comet_legacy\", |b| {\n        let comet_sum = Arc::new(AggregateUDF::new_from_impl(\n            SumInteger::try_new(DataType::Int64, EvalMode::Legacy).unwrap(),\n        ));\n        b.to_async(&rt).iter(|| {\n            black_box(agg_test(\n                int_partitions,\n                int_c0.clone(),\n                int_c1.clone(),\n                comet_sum.clone(),\n                \"sum\",\n            ))\n        })\n    });\n\n    group.bench_function(\"sum_int64_comet_ansi\", |b| {\n        let comet_sum = Arc::new(AggregateUDF::new_from_impl(\n            SumInteger::try_new(DataType::Int64, EvalMode::Ansi).unwrap(),\n        ));\n        b.to_async(&rt).iter(|| {\n            black_box(agg_test(\n                int_partitions,\n                int_c0.clone(),\n                int_c1.clone(),\n                comet_sum.clone(),\n                \"sum\",\n            ))\n        })\n    });\n\n    group.bench_function(\"sum_int64_comet_try\", |b| {\n        let comet_sum = Arc::new(AggregateUDF::new_from_impl(\n            SumInteger::try_new(DataType::Int64, EvalMode::Try).unwrap(),\n        ));\n        b.to_async(&rt).iter(|| {\n            black_box(agg_test(\n                int_partitions,\n                int_c0.clone(),\n                int_c1.clone(),\n                comet_sum.clone(),\n                \"sum\",\n            ))\n        })\n    });\n\n    group.finish();\n\n    // Direct accumulator benchmarks (bypassing execution framework)\n    let mut group = 
c.benchmark_group(\"sum_integer_accumulator\");\n    let int64_array: ArrayRef = Arc::new(Int64Array::from_iter_values(0..8192i64));\n    let arrays: Vec<ArrayRef> = vec![int64_array];\n\n    let return_field = Arc::new(Field::new(\"sum\", DataType::Int64, true));\n    let schema = Schema::new(vec![Field::new(\"c0\", DataType::Int64, true)]);\n    let expr_field = Arc::new(Field::new(\"c0\", DataType::Int64, true));\n    let expr_fields: Vec<Arc<Field>> = vec![expr_field];\n\n    // Single-row Accumulator benchmarks\n    for (name, eval_mode) in [\n        (\"row_legacy\", EvalMode::Legacy),\n        (\"row_ansi\", EvalMode::Ansi),\n        (\"row_try\", EvalMode::Try),\n    ] {\n        let return_field = return_field.clone();\n        let expr_fields = expr_fields.clone();\n        group.bench_function(name, |b| {\n            let udf = SumInteger::try_new(DataType::Int64, eval_mode).unwrap();\n            b.iter(|| {\n                let acc_args = AccumulatorArgs {\n                    return_field: return_field.clone(),\n                    schema: &schema,\n                    ignore_nulls: false,\n                    order_bys: &[],\n                    name: \"sum\",\n                    is_distinct: false,\n                    is_reversed: false,\n                    exprs: &[],\n                    expr_fields: &expr_fields,\n                };\n                let mut acc = udf.accumulator(acc_args).unwrap();\n                for _ in 0..10 {\n                    acc.update_batch(&arrays).unwrap();\n                }\n                black_box(acc.evaluate().unwrap())\n            })\n        });\n    }\n\n    // GroupsAccumulator benchmarks\n    let group_indices: Vec<usize> = (0..8192).map(|i| i % 1024).collect();\n    for (name, eval_mode) in [\n        (\"groups_legacy\", EvalMode::Legacy),\n        (\"groups_ansi\", EvalMode::Ansi),\n        (\"groups_try\", EvalMode::Try),\n    ] {\n        let return_field = return_field.clone();\n        let expr_fields = expr_fields.clone();\n        group.bench_function(name, |b| {\n            let udf = SumInteger::try_new(DataType::Int64, eval_mode).unwrap();\n            b.iter(|| {\n                let acc_args = AccumulatorArgs {\n                    return_field: return_field.clone(),\n                    schema: &schema,\n                    ignore_nulls: false,\n                    order_bys: &[],\n                    name: \"sum\",\n                    is_distinct: false,\n                    is_reversed: false,\n                    exprs: &[],\n                    expr_fields: &expr_fields,\n                };\n                let mut acc = udf.create_groups_accumulator(acc_args).unwrap();\n                for _ in 0..10 {\n                    acc.update_batch(&arrays, &group_indices, None, 1024)\n                        .unwrap();\n                }\n                black_box(acc.evaluate(EmitTo::All).unwrap())\n            })\n        });\n    }\n\n    group.finish();\n}\n\nasync fn agg_test(\n    partitions: &[Vec<RecordBatch>],\n    c0: Arc<dyn PhysicalExpr>,\n    c1: Arc<dyn PhysicalExpr>,\n    aggregate_udf: Arc<AggregateUDF>,\n    alias: &str,\n) {\n    let schema = &partitions[0][0].schema();\n    let scan: Arc<dyn ExecutionPlan> = Arc::new(DataSourceExec::new(Arc::new(\n        MemorySourceConfig::try_new(partitions, Arc::clone(schema), None).unwrap(),\n    )));\n    let aggregate = create_aggregate(scan, c0.clone(), c1.clone(), schema, aggregate_udf, alias);\n    let mut stream = aggregate\n        .execute(0, 
Arc::new(TaskContext::default()))\n        .unwrap();\n    while let Some(batch) = stream.next().await {\n        let _batch = batch.unwrap();\n    }\n}\n\nfn create_aggregate(\n    scan: Arc<dyn ExecutionPlan>,\n    c0: Arc<dyn PhysicalExpr>,\n    c1: Arc<dyn PhysicalExpr>,\n    schema: &SchemaRef,\n    aggregate_udf: Arc<AggregateUDF>,\n    alias: &str,\n) -> Arc<AggregateExec> {\n    let aggr_expr = AggregateExprBuilder::new(aggregate_udf, vec![c1])\n        .schema(schema.clone())\n        .alias(alias)\n        .with_ignore_nulls(false)\n        .with_distinct(false)\n        .build()\n        .unwrap();\n\n    Arc::new(\n        AggregateExec::try_new(\n            AggregateMode::Partial,\n            PhysicalGroupBy::new_single(vec![(c0, \"c0\".to_string())]),\n            vec![aggr_expr.into()],\n            vec![None], // no filter expressions\n            scan,\n            Arc::clone(schema),\n        )\n        .unwrap(),\n    )\n}\n\nfn create_record_batch(num_rows: usize) -> RecordBatch {\n    let mut decimal_builder = Decimal128Builder::with_capacity(num_rows);\n    let mut string_builder = StringBuilder::with_capacity(num_rows, num_rows * 32);\n    for i in 0..num_rows {\n        decimal_builder.append_value(i as i128);\n        string_builder.append_value(format!(\"this is string #{}\", i % 1024));\n    }\n    let decimal_array = Arc::new(decimal_builder.finish());\n    let string_array = Arc::new(string_builder.finish());\n\n    let mut fields = vec![];\n    let mut columns: Vec<ArrayRef> = vec![];\n\n    // string column\n    fields.push(Field::new(\"c0\", DataType::Utf8, false));\n    columns.push(string_array);\n\n    // decimal column\n    fields.push(Field::new(\"c1\", DataType::Decimal128(38, 10), false));\n    columns.push(decimal_array);\n\n    let schema = Schema::new(fields);\n    RecordBatch::try_new(Arc::new(schema), columns).unwrap()\n}\n\nfn create_int64_record_batch(num_rows: usize) -> RecordBatch {\n    let mut int64_builder = Int64Builder::with_capacity(num_rows);\n    let mut string_builder = StringBuilder::with_capacity(num_rows, num_rows * 32);\n    for i in 0..num_rows {\n        int64_builder.append_value(i as i64);\n        string_builder.append_value(format!(\"group_{}\", i % 1024));\n    }\n    let int64_array = Arc::new(int64_builder.finish());\n    let string_array = Arc::new(string_builder.finish());\n\n    let mut fields = vec![];\n    let mut columns: Vec<ArrayRef> = vec![];\n\n    // string column for grouping\n    fields.push(Field::new(\"c0\", DataType::Utf8, false));\n    columns.push(string_array);\n\n    // int64 column for summing\n    fields.push(Field::new(\"c1\", DataType::Int64, false));\n    columns.push(int64_array);\n\n    let schema = Schema::new(fields);\n    RecordBatch::try_new(Arc::new(schema), columns).unwrap()\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n        .measurement_time(Duration::from_millis(500))\n        .warm_up_time(Duration::from_millis(500))\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/bloom_filter_agg.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.use arrow::array::{ArrayRef, BooleanBuilder, Int32Builder, RecordBatch, StringBuilder};\n\nuse arrow::array::builder::Int64Builder;\nuse arrow::array::{ArrayRef, RecordBatch};\nuse arrow::datatypes::SchemaRef;\nuse arrow::datatypes::{DataType, Field, Schema};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::common::ScalarValue;\nuse datafusion::datasource::memory::MemorySourceConfig;\nuse datafusion::datasource::source::DataSourceExec;\nuse datafusion::execution::TaskContext;\nuse datafusion::logical_expr::AggregateUDF;\nuse datafusion::physical_expr::aggregate::AggregateExprBuilder;\nuse datafusion::physical_expr::expressions::{Column, Literal};\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::aggregates::{AggregateExec, AggregateMode, PhysicalGroupBy};\nuse datafusion::physical_plan::ExecutionPlan;\nuse datafusion_comet_spark_expr::BloomFilterAgg;\nuse futures::StreamExt;\nuse std::hint::black_box;\nuse std::sync::Arc;\nuse std::time::Duration;\nuse tokio::runtime::Runtime;\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"bloom_filter_agg\");\n    let num_rows = 8192;\n    let batch = create_record_batch(num_rows);\n    let mut batches = Vec::new();\n    for _ in 0..10 {\n        batches.push(batch.clone());\n    }\n    let partitions = &[batches];\n    let c0: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"c0\", 0));\n    // spark.sql.optimizer.runtime.bloomFilter.expectedNumItems\n    let num_items_sv = ScalarValue::Int64(Some(1000000_i64));\n    let num_items: Arc<dyn PhysicalExpr> = Arc::new(Literal::new(num_items_sv));\n    //spark.sql.optimizer.runtime.bloomFilter.numBits\n    let num_bits_sv = ScalarValue::Int64(Some(8388608_i64));\n    let num_bits: Arc<dyn PhysicalExpr> = Arc::new(Literal::new(num_bits_sv));\n\n    let rt = Runtime::new().unwrap();\n\n    for agg_mode in [\n        (\"partial_agg\", AggregateMode::Partial),\n        (\"single_agg\", AggregateMode::Single),\n    ] {\n        group.bench_function(agg_mode.0, |b| {\n            let comet_bloom_filter_agg =\n                Arc::new(AggregateUDF::new_from_impl(BloomFilterAgg::new(\n                    Arc::clone(&num_items),\n                    Arc::clone(&num_bits),\n                    DataType::Binary,\n                )));\n            b.to_async(&rt).iter(|| {\n                black_box(agg_test(\n                    partitions,\n                    c0.clone(),\n                    comet_bloom_filter_agg.clone(),\n                    \"bloom_filter_agg\",\n                    agg_mode.1,\n                ))\n            })\n        });\n    }\n\n    group.finish();\n}\n\nasync fn agg_test(\n    
partitions: &[Vec<RecordBatch>],\n    c0: Arc<dyn PhysicalExpr>,\n    aggregate_udf: Arc<AggregateUDF>,\n    alias: &str,\n    mode: AggregateMode,\n) {\n    let schema = &partitions[0][0].schema();\n    let scan: Arc<dyn ExecutionPlan> = Arc::new(DataSourceExec::new(Arc::new(\n        MemorySourceConfig::try_new(partitions, Arc::clone(schema), None).unwrap(),\n    )));\n    let aggregate = create_aggregate(scan, c0.clone(), schema, aggregate_udf, alias, mode);\n    let mut stream = aggregate\n        .execute(0, Arc::new(TaskContext::default()))\n        .unwrap();\n    while let Some(batch) = stream.next().await {\n        let _batch = batch.unwrap();\n    }\n}\n\nfn create_aggregate(\n    scan: Arc<dyn ExecutionPlan>,\n    c0: Arc<dyn PhysicalExpr>,\n    schema: &SchemaRef,\n    aggregate_udf: Arc<AggregateUDF>,\n    alias: &str,\n    mode: AggregateMode,\n) -> Arc<AggregateExec> {\n    let aggr_expr = AggregateExprBuilder::new(aggregate_udf, vec![c0.clone()])\n        .schema(schema.clone())\n        .alias(alias)\n        .with_ignore_nulls(false)\n        .with_distinct(false)\n        .build()\n        .unwrap();\n\n    Arc::new(\n        AggregateExec::try_new(\n            mode,\n            PhysicalGroupBy::new_single(vec![]),\n            vec![aggr_expr.into()],\n            vec![None],\n            scan,\n            Arc::clone(schema),\n        )\n        .unwrap(),\n    )\n}\n\nfn create_record_batch(num_rows: usize) -> RecordBatch {\n    let mut int64_builder = Int64Builder::with_capacity(num_rows);\n    for i in 0..num_rows {\n        int64_builder.append_value(i as i64);\n    }\n    let int64_array = Arc::new(int64_builder.finish());\n\n    let mut fields = vec![];\n    let mut columns: Vec<ArrayRef> = vec![];\n\n    // int64 column\n    fields.push(Field::new(\"c0\", DataType::Int64, false));\n    columns.push(int64_array);\n\n    let schema = Schema::new(fields);\n    RecordBatch::try_new(Arc::new(schema), columns).unwrap()\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n        .measurement_time(Duration::from_millis(500))\n        .warm_up_time(Duration::from_millis(500))\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/cast_from_boolean.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{BooleanBuilder, RecordBatch};\nuse arrow::datatypes::{DataType, Field, Schema};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::physical_expr::expressions::Column;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion_comet_spark_expr::{Cast, EvalMode, SparkCastOptions};\nuse std::sync::Arc;\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let expr = Arc::new(Column::new(\"a\", 0));\n    let boolean_batch = create_boolean_batch();\n    let spark_cast_options = SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false);\n    let cast_to_i8 = Cast::new(\n        expr.clone(),\n        DataType::Int8,\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    let cast_to_i16 = Cast::new(\n        expr.clone(),\n        DataType::Int16,\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    let cast_to_i32 = Cast::new(\n        expr.clone(),\n        DataType::Int32,\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    let cast_to_i64 = Cast::new(\n        expr.clone(),\n        DataType::Int64,\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    let cast_to_f32 = Cast::new(\n        expr.clone(),\n        DataType::Float32,\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    let cast_to_f64 = Cast::new(\n        expr.clone(),\n        DataType::Float64,\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    let cast_to_str = Cast::new(\n        expr.clone(),\n        DataType::Utf8,\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    let cast_to_decimal = Cast::new(\n        expr,\n        DataType::Decimal128(10, 4),\n        spark_cast_options,\n        None,\n        None,\n    );\n\n    let mut group = c.benchmark_group(\"cast_bool\".to_string());\n    group.bench_function(\"i8\", |b| {\n        b.iter(|| cast_to_i8.evaluate(&boolean_batch).unwrap());\n    });\n    group.bench_function(\"i16\", |b| {\n        b.iter(|| cast_to_i16.evaluate(&boolean_batch).unwrap());\n    });\n    group.bench_function(\"i32\", |b| {\n        b.iter(|| cast_to_i32.evaluate(&boolean_batch).unwrap());\n    });\n    group.bench_function(\"i64\", |b| {\n        b.iter(|| cast_to_i64.evaluate(&boolean_batch).unwrap());\n    });\n    group.bench_function(\"f32\", |b| {\n        b.iter(|| cast_to_f32.evaluate(&boolean_batch).unwrap());\n    });\n    group.bench_function(\"f64\", |b| {\n        b.iter(|| cast_to_f64.evaluate(&boolean_batch).unwrap());\n    });\n    group.bench_function(\"str\", |b| {\n        b.iter(|| 
cast_to_str.evaluate(&boolean_batch).unwrap());\n    });\n    group.bench_function(\"decimal\", |b| {\n        b.iter(|| cast_to_decimal.evaluate(&boolean_batch).unwrap());\n    });\n}\n\nfn create_boolean_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Boolean, true)]));\n    let mut b = BooleanBuilder::with_capacity(1000);\n    for i in 0..1000 {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(rand::random::<bool>());\n        }\n    }\n    let array = b.finish();\n    RecordBatch::try_new(schema, vec![Arc::new(array)]).unwrap()\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/cast_from_string.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{builder::StringBuilder, RecordBatch};\nuse arrow::datatypes::{DataType, Field, Schema};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::physical_expr::{expressions::Column, PhysicalExpr};\nuse datafusion_comet_spark_expr::{Cast, EvalMode, SparkCastOptions};\nuse std::sync::Arc;\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let small_int_batch = create_small_int_string_batch();\n    let int_batch = create_int_string_batch();\n    let decimal_batch = create_decimal_string_batch();\n    let expr = Arc::new(Column::new(\"a\", 0));\n\n    for (mode, mode_name) in [\n        (EvalMode::Legacy, \"legacy\"),\n        (EvalMode::Ansi, \"ansi\"),\n        (EvalMode::Try, \"try\"),\n    ] {\n        let spark_cast_options = SparkCastOptions::new(mode, \"\", false);\n        let cast_to_i8 = Cast::new(\n            expr.clone(),\n            DataType::Int8,\n            spark_cast_options.clone(),\n            None,\n            None,\n        );\n        let cast_to_i16 = Cast::new(\n            expr.clone(),\n            DataType::Int16,\n            spark_cast_options.clone(),\n            None,\n            None,\n        );\n        let cast_to_i32 = Cast::new(\n            expr.clone(),\n            DataType::Int32,\n            spark_cast_options.clone(),\n            None,\n            None,\n        );\n        let cast_to_i64 = Cast::new(\n            expr.clone(),\n            DataType::Int64,\n            spark_cast_options,\n            None,\n            None,\n        );\n\n        let mut group = c.benchmark_group(format!(\"cast_string_to_int/{}\", mode_name));\n        group.bench_function(\"i8\", |b| {\n            b.iter(|| cast_to_i8.evaluate(&small_int_batch).unwrap());\n        });\n        group.bench_function(\"i16\", |b| {\n            b.iter(|| cast_to_i16.evaluate(&small_int_batch).unwrap());\n        });\n        group.bench_function(\"i32\", |b| {\n            b.iter(|| cast_to_i32.evaluate(&int_batch).unwrap());\n        });\n        group.bench_function(\"i64\", |b| {\n            b.iter(|| cast_to_i64.evaluate(&int_batch).unwrap());\n        });\n        group.finish();\n    }\n\n    // Benchmark decimal truncation (Legacy mode only)\n    let spark_cast_options = SparkCastOptions::new(EvalMode::Legacy, \"\", false);\n    let cast_to_i32 = Cast::new(\n        expr.clone(),\n        DataType::Int32,\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    let cast_to_i64 = Cast::new(\n        expr.clone(),\n        DataType::Int64,\n        spark_cast_options,\n        None,\n        None,\n    );\n\n    let mut group = 
c.benchmark_group(\"cast_string_to_int/legacy_decimals\");\n    group.bench_function(\"i32\", |b| {\n        b.iter(|| cast_to_i32.evaluate(&decimal_batch).unwrap());\n    });\n    group.bench_function(\"i64\", |b| {\n        b.iter(|| cast_to_i64.evaluate(&decimal_batch).unwrap());\n    });\n    group.finish();\n\n    // str -> decimal benchmark\n    let decimal_string_batch = create_decimal_cast_string_batch();\n    for (mode, mode_name) in [\n        (EvalMode::Legacy, \"legacy\"),\n        (EvalMode::Ansi, \"ansi\"),\n        (EvalMode::Try, \"try\"),\n    ] {\n        let spark_cast_options = SparkCastOptions::new(mode, \"\", false);\n        let cast_to_decimal_38_10 = Cast::new(\n            expr.clone(),\n            DataType::Decimal128(38, 10),\n            spark_cast_options,\n            None,\n            None,\n        );\n\n        let mut group = c.benchmark_group(format!(\"cast_string_to_decimal/{}\", mode_name));\n        group.bench_function(\"decimal_38_10\", |b| {\n            b.iter(|| {\n                cast_to_decimal_38_10\n                    .evaluate(&decimal_string_batch)\n                    .unwrap()\n            });\n        });\n        group.finish();\n    }\n}\n\n/// Create batch with small integer strings that fit in i8 range (for i8/i16 benchmarks)\nfn create_small_int_string_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Utf8, true)]));\n    let mut b = StringBuilder::new();\n    for i in 0..1000 {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(format!(\"{}\", rand::random::<i8>()));\n        }\n    }\n    let array = b.finish();\n    RecordBatch::try_new(schema, vec![Arc::new(array)]).unwrap()\n}\n\n/// Create batch with valid integer strings (works for all eval modes)\nfn create_int_string_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Utf8, true)]));\n    let mut b = StringBuilder::new();\n    for i in 0..1000 {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(format!(\"{}\", rand::random::<i32>()));\n        }\n    }\n    let array = b.finish();\n    RecordBatch::try_new(schema, vec![Arc::new(array)]).unwrap()\n}\n\n/// Create batch with decimal strings (for Legacy mode decimal truncation)\nfn create_decimal_string_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Utf8, true)]));\n    let mut b = StringBuilder::new();\n    for i in 0..1000 {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            // Generate integers with decimal portions to test truncation\n            let int_part: i32 = rand::random();\n            let dec_part: u32 = rand::random::<u32>() % 1000;\n            b.append_value(format!(\"{}.{}\", int_part, dec_part));\n        }\n    }\n    let array = b.finish();\n    RecordBatch::try_new(schema, vec![Arc::new(array)]).unwrap()\n}\n\n/// Create batch with decimal strings for string-to-decimal cast perf evaluation\nfn create_decimal_cast_string_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Utf8, true)]));\n    let mut b = StringBuilder::new();\n    for i in 0..1000 {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            // Generate various decimal formats\n            match i % 5 {\n                0 => {\n                    // gen simple decimals (ex :  \"123.45\"\n  
                  let int_part: u32 = rand::random::<u32>() % 1000000;\n                    let dec_part: u32 = rand::random::<u32>() % 100000;\n                    b.append_value(format!(\"{}.{}\", int_part, dec_part));\n                }\n                1 => {\n                    // gen scientific notation like \"123e5\"\n                    let mantissa: u32 = rand::random::<u32>() % 1000;\n                    let exp: i8 = (rand::random::<i8>() % 10).abs();\n                    b.append_value(format!(\"{}.{}E{}\", mantissa / 100, mantissa % 100, exp));\n                }\n                2 => {\n                    // Negative numbers\n                    let int_part: u32 = rand::random::<u32>() % 1000000;\n                    let dec_part: u32 = rand::random::<u32>() % 100000;\n                    b.append_value(format!(\"-{}.{}\", int_part, dec_part));\n                }\n                3 => {\n                    // Ints only\n                    let val: i32 = rand::random::<i32>() % 1000000;\n                    b.append_value(format!(\"{}\", val));\n                }\n                _ => {\n                    // Small decimals (ex : 0.001)\n                    let dec_part: u32 = rand::random::<u32>() % 100000;\n                    b.append_value(format!(\"0.{:05}\", dec_part));\n                }\n            }\n        }\n    }\n    let array = b.finish();\n    RecordBatch::try_new(schema, vec![Arc::new(array)]).unwrap()\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/cast_int_to_timestamp.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::builder::{Int16Builder, Int32Builder, Int64Builder, Int8Builder};\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::{DataType, Field, Schema, TimeUnit};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::physical_expr::{expressions::Column, PhysicalExpr};\nuse datafusion_comet_spark_expr::{Cast, EvalMode, SparkCastOptions};\nuse std::sync::Arc;\n\nconst BATCH_SIZE: usize = 8192;\n\nfn criterion_benchmark(c: &mut Criterion) {\n    // Test with UTC timezone\n    let spark_cast_options = SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false);\n    let timestamp_type = DataType::Timestamp(TimeUnit::Microsecond, Some(\"UTC\".into()));\n\n    let mut group = c.benchmark_group(\"cast_int_to_timestamp\");\n\n    // Int8 -> Timestamp\n    let batch_i8 = create_int8_batch();\n    let expr_i8 = Arc::new(Column::new(\"a\", 0));\n    let cast_i8_to_ts = Cast::new(\n        expr_i8,\n        timestamp_type.clone(),\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    group.bench_function(\"cast_i8_to_timestamp\", |b| {\n        b.iter(|| cast_i8_to_ts.evaluate(&batch_i8).unwrap());\n    });\n\n    // Int16 -> Timestamp\n    let batch_i16 = create_int16_batch();\n    let expr_i16 = Arc::new(Column::new(\"a\", 0));\n    let cast_i16_to_ts = Cast::new(\n        expr_i16,\n        timestamp_type.clone(),\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    group.bench_function(\"cast_i16_to_timestamp\", |b| {\n        b.iter(|| cast_i16_to_ts.evaluate(&batch_i16).unwrap());\n    });\n\n    // Int32 -> Timestamp\n    let batch_i32 = create_int32_batch();\n    let expr_i32 = Arc::new(Column::new(\"a\", 0));\n    let cast_i32_to_ts = Cast::new(\n        expr_i32,\n        timestamp_type.clone(),\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    group.bench_function(\"cast_i32_to_timestamp\", |b| {\n        b.iter(|| cast_i32_to_ts.evaluate(&batch_i32).unwrap());\n    });\n\n    // Int64 -> Timestamp\n    let batch_i64 = create_int64_batch();\n    let expr_i64 = Arc::new(Column::new(\"a\", 0));\n    let cast_i64_to_ts = Cast::new(\n        expr_i64,\n        timestamp_type.clone(),\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    group.bench_function(\"cast_i64_to_timestamp\", |b| {\n        b.iter(|| cast_i64_to_ts.evaluate(&batch_i64).unwrap());\n    });\n\n    group.finish();\n}\n\nfn create_int8_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Int8, true)]));\n    let mut b = Int8Builder::with_capacity(BATCH_SIZE);\n    for i in 0..BATCH_SIZE {\n        if i % 10 == 0 {\n   
         b.append_null();\n        } else {\n            b.append_value(rand::random::<i8>());\n        }\n    }\n    RecordBatch::try_new(schema, vec![Arc::new(b.finish())]).unwrap()\n}\n\nfn create_int16_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Int16, true)]));\n    let mut b = Int16Builder::with_capacity(BATCH_SIZE);\n    for i in 0..BATCH_SIZE {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(rand::random::<i16>());\n        }\n    }\n    RecordBatch::try_new(schema, vec![Arc::new(b.finish())]).unwrap()\n}\n\nfn create_int32_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Int32, true)]));\n    let mut b = Int32Builder::with_capacity(BATCH_SIZE);\n    for i in 0..BATCH_SIZE {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(rand::random::<i32>());\n        }\n    }\n    RecordBatch::try_new(schema, vec![Arc::new(b.finish())]).unwrap()\n}\n\nfn create_int64_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Int64, true)]));\n    let mut b = Int64Builder::with_capacity(BATCH_SIZE);\n    for i in 0..BATCH_SIZE {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(rand::random::<i64>());\n        }\n    }\n    RecordBatch::try_new(schema, vec![Arc::new(b.finish())]).unwrap()\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/cast_non_int_numeric_timestamp.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::builder::{BooleanBuilder, Decimal128Builder, Float32Builder, Float64Builder};\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::{DataType, Field, Schema, TimeUnit};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::physical_expr::{expressions::Column, PhysicalExpr};\nuse datafusion_comet_spark_expr::{Cast, EvalMode, SparkCastOptions};\nuse std::sync::Arc;\n\nconst BATCH_SIZE: usize = 8192;\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let spark_cast_options = SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false);\n    let timestamp_type = DataType::Timestamp(TimeUnit::Microsecond, Some(\"UTC\".into()));\n\n    let mut group = c.benchmark_group(\"cast_non_int_numeric_to_timestamp\");\n\n    // Float32 -> Timestamp\n    let batch_f32 = create_float32_batch();\n    let expr_f32 = Arc::new(Column::new(\"a\", 0));\n    let cast_f32_to_ts = Cast::new(\n        expr_f32,\n        timestamp_type.clone(),\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    group.bench_function(\"cast_f32_to_timestamp\", |b| {\n        b.iter(|| cast_f32_to_ts.evaluate(&batch_f32).unwrap());\n    });\n\n    // Float64 -> Timestamp\n    let batch_f64 = create_float64_batch();\n    let expr_f64 = Arc::new(Column::new(\"a\", 0));\n    let cast_f64_to_ts = Cast::new(\n        expr_f64,\n        timestamp_type.clone(),\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    group.bench_function(\"cast_f64_to_timestamp\", |b| {\n        b.iter(|| cast_f64_to_ts.evaluate(&batch_f64).unwrap());\n    });\n\n    // Boolean -> Timestamp\n    let batch_bool = create_boolean_batch();\n    let expr_bool = Arc::new(Column::new(\"a\", 0));\n    let cast_bool_to_ts = Cast::new(\n        expr_bool,\n        timestamp_type.clone(),\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    group.bench_function(\"cast_bool_to_timestamp\", |b| {\n        b.iter(|| cast_bool_to_ts.evaluate(&batch_bool).unwrap());\n    });\n\n    // Decimal128 -> Timestamp\n    let batch_decimal = create_decimal128_batch();\n    let expr_decimal = Arc::new(Column::new(\"a\", 0));\n    let cast_decimal_to_ts = Cast::new(\n        expr_decimal,\n        timestamp_type.clone(),\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    group.bench_function(\"cast_decimal_to_timestamp\", |b| {\n        b.iter(|| cast_decimal_to_ts.evaluate(&batch_decimal).unwrap());\n    });\n\n    group.finish();\n}\n\nfn create_float32_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Float32, true)]));\n    let mut b = 
Float32Builder::with_capacity(BATCH_SIZE);\n    for i in 0..BATCH_SIZE {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(rand::random::<f32>());\n        }\n    }\n    RecordBatch::try_new(schema, vec![Arc::new(b.finish())]).unwrap()\n}\n\nfn create_float64_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Float64, true)]));\n    let mut b = Float64Builder::with_capacity(BATCH_SIZE);\n    for i in 0..BATCH_SIZE {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(rand::random::<f64>());\n        }\n    }\n    RecordBatch::try_new(schema, vec![Arc::new(b.finish())]).unwrap()\n}\n\nfn create_boolean_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Boolean, true)]));\n    let mut b = BooleanBuilder::with_capacity(BATCH_SIZE);\n    for i in 0..BATCH_SIZE {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(rand::random::<bool>());\n        }\n    }\n    RecordBatch::try_new(schema, vec![Arc::new(b.finish())]).unwrap()\n}\n\nfn create_decimal128_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\n        \"a\",\n        DataType::Decimal128(18, 6),\n        true,\n    )]));\n    let mut b = Decimal128Builder::with_capacity(BATCH_SIZE);\n    for i in 0..BATCH_SIZE {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(i as i128 * 1_000_000);\n        }\n    }\n    let array = b.finish().with_precision_and_scale(18, 6).unwrap();\n    RecordBatch::try_new(schema, vec![Arc::new(array)]).unwrap()\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/cast_numeric.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{builder::Int32Builder, RecordBatch};\nuse arrow::datatypes::{DataType, Field, Schema};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::physical_expr::{expressions::Column, PhysicalExpr};\nuse datafusion_comet_spark_expr::{Cast, EvalMode, SparkCastOptions};\nuse std::sync::Arc;\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let batch = create_int32_batch();\n    let expr = Arc::new(Column::new(\"a\", 0));\n    let spark_cast_options = SparkCastOptions::new_without_timezone(EvalMode::Legacy, false);\n    let cast_i32_to_i8 = Cast::new(\n        expr.clone(),\n        DataType::Int8,\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    let cast_i32_to_i16 = Cast::new(\n        expr.clone(),\n        DataType::Int16,\n        spark_cast_options.clone(),\n        None,\n        None,\n    );\n    let cast_i32_to_i64 = Cast::new(expr, DataType::Int64, spark_cast_options, None, None);\n\n    let mut group = c.benchmark_group(\"cast_int_to_int\");\n    group.bench_function(\"cast_i32_to_i8\", |b| {\n        b.iter(|| cast_i32_to_i8.evaluate(&batch).unwrap());\n    });\n    group.bench_function(\"cast_i32_to_i16\", |b| {\n        b.iter(|| cast_i32_to_i16.evaluate(&batch).unwrap());\n    });\n    group.bench_function(\"cast_i32_to_i64\", |b| {\n        b.iter(|| cast_i32_to_i64.evaluate(&batch).unwrap());\n    });\n}\n\nfn create_int32_batch() -> RecordBatch {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"a\", DataType::Int32, true)]));\n    let mut b = Int32Builder::new();\n    for i in 0..1000 {\n        if i % 10 == 0 {\n            b.append_null();\n        } else {\n            b.append_value(rand::random::<i32>());\n        }\n    }\n    let array = b.finish();\n\n    RecordBatch::try_new(schema.clone(), vec![Arc::new(array)]).unwrap()\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/conditional.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::builder::{Int32Builder, StringBuilder};\nuse arrow::datatypes::DataType;\nuse arrow::datatypes::{Field, Schema};\nuse arrow::record_batch::RecordBatch;\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::common::ScalarValue;\nuse datafusion::logical_expr::Operator;\nuse datafusion::physical_expr::expressions::Column;\nuse datafusion::physical_expr::expressions::Literal;\nuse datafusion::physical_expr::expressions::{BinaryExpr, CaseExpr};\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion_comet_spark_expr::IfExpr;\nuse std::hint::black_box;\nuse std::sync::Arc;\n\nfn make_col(name: &str, index: usize) -> Arc<dyn PhysicalExpr> {\n    Arc::new(Column::new(name, index))\n}\n\nfn make_lit_i32(n: i32) -> Arc<dyn PhysicalExpr> {\n    Arc::new(Literal::new(ScalarValue::Int32(Some(n))))\n}\n\nfn make_null_lit() -> Arc<dyn PhysicalExpr> {\n    Arc::new(Literal::new(ScalarValue::Utf8(None)))\n}\n\nfn criterion_benchmark(c: &mut Criterion) {\n    // create input data\n    let mut c1 = Int32Builder::new();\n    let mut c2 = StringBuilder::new();\n    let mut c3 = StringBuilder::new();\n    for i in 0..1000 {\n        c1.append_value(i);\n        if i % 7 == 0 {\n            c2.append_null();\n        } else {\n            c2.append_value(format!(\"string {i}\"));\n        }\n        if i % 9 == 0 {\n            c3.append_null();\n        } else {\n            c3.append_value(format!(\"other string {i}\"));\n        }\n    }\n    let c1 = Arc::new(c1.finish());\n    let c2 = Arc::new(c2.finish());\n    let c3 = Arc::new(c3.finish());\n    let schema = Schema::new(vec![\n        Field::new(\"c1\", DataType::Int32, true),\n        Field::new(\"c2\", DataType::Utf8, true),\n        Field::new(\"c3\", DataType::Utf8, true),\n    ]);\n    let batch = RecordBatch::try_new(Arc::new(schema), vec![c1, c2, c3]).unwrap();\n\n    // use same predicate for all benchmarks\n    let predicate = Arc::new(BinaryExpr::new(\n        make_col(\"c1\", 0),\n        Operator::LtEq,\n        make_lit_i32(500),\n    ));\n\n    // CASE WHEN c1 <= 500 THEN 1 ELSE 0 END\n    c.bench_function(\"case_when: scalar or scalar\", |b| {\n        let expr = Arc::new(\n            CaseExpr::try_new(\n                None,\n                vec![(predicate.clone(), make_lit_i32(1))],\n                Some(make_lit_i32(0)),\n            )\n            .unwrap(),\n        );\n        b.iter(|| black_box(expr.evaluate(black_box(&batch)).unwrap()))\n    });\n    c.bench_function(\"if: scalar or scalar\", |b| {\n        let expr = Arc::new(IfExpr::new(\n            predicate.clone(),\n            make_lit_i32(1),\n            make_lit_i32(0),\n        ));\n        
b.iter(|| black_box(expr.evaluate(black_box(&batch)).unwrap()))\n    });\n\n    // CASE WHEN c1 <= 500 THEN c2 [ELSE NULL] END\n    c.bench_function(\"case_when: column or null\", |b| {\n        let expr = Arc::new(\n            CaseExpr::try_new(None, vec![(predicate.clone(), make_col(\"c2\", 1))], None).unwrap(),\n        );\n        b.iter(|| black_box(expr.evaluate(black_box(&batch)).unwrap()))\n    });\n    c.bench_function(\"if: column or null\", |b| {\n        let expr = Arc::new(IfExpr::new(\n            predicate.clone(),\n            make_col(\"c2\", 1),\n            make_null_lit(),\n        ));\n        b.iter(|| black_box(expr.evaluate(black_box(&batch)).unwrap()))\n    });\n\n    // CASE WHEN c1 <= 500 THEN c2 ELSE c3 END\n    c.bench_function(\"case_when: expr or expr\", |b| {\n        let expr = Arc::new(\n            CaseExpr::try_new(\n                None,\n                vec![(predicate.clone(), make_col(\"c2\", 1))],\n                Some(make_col(\"c3\", 2)),\n            )\n            .unwrap(),\n        );\n        b.iter(|| black_box(expr.evaluate(black_box(&batch)).unwrap()))\n    });\n    c.bench_function(\"if: expr or expr\", |b| {\n        let expr = Arc::new(IfExpr::new(\n            predicate.clone(),\n            make_col(\"c2\", 1),\n            make_col(\"c3\", 2),\n        ));\n        b.iter(|| black_box(expr.evaluate(black_box(&batch)).unwrap()))\n    });\n}\n\ncriterion_group!(benches, criterion_benchmark);\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/date_trunc.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{ArrayRef, Date32Array};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion_comet_spark_expr::date_trunc_dyn;\nuse std::sync::Arc;\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let date_array = create_date_array();\n\n    let mut group = c.benchmark_group(\"date_trunc\");\n\n    // Benchmark each truncation format\n    for format in [\"YEAR\", \"QUARTER\", \"MONTH\", \"WEEK\"] {\n        let array_ref: ArrayRef = Arc::new(date_array.clone());\n        group.bench_function(format!(\"date_trunc_{}\", format.to_lowercase()), |b| {\n            b.iter(|| date_trunc_dyn(&array_ref, format.to_string()).unwrap());\n        });\n    }\n\n    group.finish();\n}\n\nfn create_date_array() -> Date32Array {\n    // Create 10000 dates spanning several years (more realistic workload)\n    // Days since Unix epoch: range from 0 (1970-01-01) to ~19000 (2022)\n    let dates: Vec<i32> = (0..10000).map(|i| (i * 2) % 19000).collect();\n    Date32Array::from(dates)\n}\n\nfn config() -> Criterion {\n    Criterion::default()\n}\n\ncriterion_group! {\n    name = benches;\n    config = config();\n    targets = criterion_benchmark\n}\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/decimal_div.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::builder::Decimal128Builder;\nuse arrow::compute::cast;\nuse arrow::datatypes::DataType;\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::physical_plan::ColumnarValue;\nuse datafusion_comet_spark_expr::{spark_decimal_div, spark_decimal_integral_div, EvalMode};\nuse std::hint::black_box;\nuse std::sync::Arc;\n\nfn criterion_benchmark(c: &mut Criterion) {\n    // create input data\n    let mut c1 = Decimal128Builder::new();\n    let mut c2 = Decimal128Builder::new();\n    for i in 0..1000 {\n        c1.append_value(99999999 + i);\n        c2.append_value(88888888 - i);\n    }\n    let c1 = Arc::new(c1.finish());\n    let c2 = Arc::new(c2.finish());\n\n    let c1_type = DataType::Decimal128(10, 4);\n    let c1 = cast(c1.as_ref(), &c1_type).unwrap();\n    let c2_type = DataType::Decimal128(10, 3);\n    let c2 = cast(c2.as_ref(), &c2_type).unwrap();\n\n    let args = [ColumnarValue::Array(c1), ColumnarValue::Array(c2)];\n\n    let mut group = c.benchmark_group(\"decimal div\");\n    group.bench_function(\"decimal_div\", |b| {\n        b.iter(|| {\n            black_box(spark_decimal_div(\n                black_box(&args),\n                black_box(&DataType::Decimal128(10, 4)),\n                EvalMode::Legacy,\n            ))\n        })\n    });\n\n    group.bench_function(\"decimal_integral_div\", |b| {\n        b.iter(|| {\n            black_box(spark_decimal_integral_div(\n                black_box(&args),\n                black_box(&DataType::Decimal128(10, 4)),\n                EvalMode::Legacy,\n            ))\n        })\n    });\n}\n\ncriterion_group!(benches, criterion_benchmark);\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/normalize_nan.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Benchmarks for NormalizeNaNAndZero expression\n\nuse arrow::array::Float64Array;\nuse arrow::datatypes::{DataType, Field, Schema};\nuse arrow::record_batch::RecordBatch;\nuse criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};\nuse datafusion::physical_expr::expressions::Column;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion_comet_spark_expr::NormalizeNaNAndZero;\nuse std::hint::black_box;\nuse std::sync::Arc;\n\nconst BATCH_SIZE: usize = 8192;\n\nfn make_col(name: &str, index: usize) -> Arc<dyn PhysicalExpr> {\n    Arc::new(Column::new(name, index))\n}\n\n/// Create a batch with float64 column containing various values including NaN and -0.0\nfn create_float_batch(nan_pct: usize, neg_zero_pct: usize, null_pct: usize) -> RecordBatch {\n    let mut values: Vec<Option<f64>> = Vec::with_capacity(BATCH_SIZE);\n\n    for i in 0..BATCH_SIZE {\n        if null_pct > 0 && i % (100 / null_pct.max(1)) == 0 {\n            values.push(None);\n        } else if nan_pct > 0 && i % (100 / nan_pct.max(1)) == 1 {\n            values.push(Some(f64::NAN));\n        } else if neg_zero_pct > 0 && i % (100 / neg_zero_pct.max(1)) == 2 {\n            values.push(Some(-0.0));\n        } else {\n            values.push(Some(i as f64 * 1.5));\n        }\n    }\n\n    let array = Float64Array::from(values);\n    let schema = Schema::new(vec![Field::new(\"c1\", DataType::Float64, true)]);\n\n    RecordBatch::try_new(Arc::new(schema), vec![Arc::new(array)]).unwrap()\n}\n\nfn bench_normalize_nan_and_zero(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"normalize_nan_and_zero\");\n\n    // Test with different percentages of special values\n    let test_cases = [\n        (\"no_special\", 0, 0, 0),\n        (\"10pct_nan\", 10, 0, 0),\n        (\"10pct_neg_zero\", 0, 10, 0),\n        (\"10pct_null\", 0, 0, 10),\n        (\"mixed_10pct\", 5, 5, 5),\n        (\"all_normal\", 0, 0, 0),\n    ];\n\n    for (name, nan_pct, neg_zero_pct, null_pct) in test_cases {\n        let batch = create_float_batch(nan_pct, neg_zero_pct, null_pct);\n\n        let normalize_expr = Arc::new(NormalizeNaNAndZero::new(\n            DataType::Float64,\n            make_col(\"c1\", 0),\n        ));\n\n        group.bench_with_input(BenchmarkId::new(\"float64\", name), &batch, |b, batch| {\n            b.iter(|| black_box(normalize_expr.evaluate(black_box(batch)).unwrap()));\n        });\n    }\n\n    group.finish();\n}\n\ncriterion_group!(benches, bench_normalize_nan_and_zero);\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/padding.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::builder::StringBuilder;\nuse arrow::array::ArrayRef;\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion::common::ScalarValue;\nuse datafusion::physical_plan::ColumnarValue;\nuse datafusion_comet_spark_expr::{spark_lpad, spark_rpad};\nuse std::hint::black_box;\nuse std::sync::Arc;\n\nfn create_string_array(size: usize) -> ArrayRef {\n    let mut builder = StringBuilder::new();\n    for i in 0..size {\n        if i % 10 == 0 {\n            builder.append_null();\n        } else {\n            builder.append_value(format!(\"string{}\", i % 100));\n        }\n    }\n    Arc::new(builder.finish())\n}\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let size = 8192;\n    let string_array = create_string_array(size);\n\n    // lpad with default padding (space)\n    c.bench_function(\"spark_lpad: default padding\", |b| {\n        let args = vec![\n            ColumnarValue::Array(Arc::clone(&string_array)),\n            ColumnarValue::Scalar(ScalarValue::Int32(Some(20))),\n        ];\n        b.iter(|| black_box(spark_lpad(black_box(&args)).unwrap()))\n    });\n\n    // lpad with custom padding character\n    c.bench_function(\"spark_lpad: custom padding\", |b| {\n        let args = vec![\n            ColumnarValue::Array(Arc::clone(&string_array)),\n            ColumnarValue::Scalar(ScalarValue::Int32(Some(20))),\n            ColumnarValue::Scalar(ScalarValue::Utf8(Some(\"*\".to_string()))),\n        ];\n        b.iter(|| black_box(spark_lpad(black_box(&args)).unwrap()))\n    });\n\n    // rpad with default padding (space)\n    c.bench_function(\"spark_rpad: default padding\", |b| {\n        let args = vec![\n            ColumnarValue::Array(Arc::clone(&string_array)),\n            ColumnarValue::Scalar(ScalarValue::Int32(Some(20))),\n        ];\n        b.iter(|| black_box(spark_rpad(black_box(&args)).unwrap()))\n    });\n\n    // rpad with custom padding character\n    c.bench_function(\"spark_rpad: custom padding\", |b| {\n        let args = vec![\n            ColumnarValue::Array(Arc::clone(&string_array)),\n            ColumnarValue::Scalar(ScalarValue::Int32(Some(20))),\n            ColumnarValue::Scalar(ScalarValue::Utf8(Some(\"*\".to_string()))),\n        ];\n        b.iter(|| black_box(spark_rpad(black_box(&args)).unwrap()))\n    });\n\n    // lpad with multi-character padding string\n    c.bench_function(\"spark_lpad: multi-char padding\", |b| {\n        let args = vec![\n            ColumnarValue::Array(Arc::clone(&string_array)),\n            ColumnarValue::Scalar(ScalarValue::Int32(Some(20))),\n            ColumnarValue::Scalar(ScalarValue::Utf8(Some(\"abc\".to_string()))),\n        ];\n        b.iter(|| 
black_box(spark_lpad(black_box(&args)).unwrap()))\n    });\n\n    // rpad with multi-character padding string\n    c.bench_function(\"spark_rpad: multi-char padding\", |b| {\n        let args = vec![\n            ColumnarValue::Array(Arc::clone(&string_array)),\n            ColumnarValue::Scalar(ScalarValue::Int32(Some(20))),\n            ColumnarValue::Scalar(ScalarValue::Utf8(Some(\"abc\".to_string()))),\n        ];\n        b.iter(|| black_box(spark_rpad(black_box(&args)).unwrap()))\n    });\n\n    // lpad with truncation (target length shorter than some strings)\n    c.bench_function(\"spark_lpad: with truncation\", |b| {\n        let args = vec![\n            ColumnarValue::Array(Arc::clone(&string_array)),\n            ColumnarValue::Scalar(ScalarValue::Int32(Some(5))),\n        ];\n        b.iter(|| black_box(spark_lpad(black_box(&args)).unwrap()))\n    });\n\n    // rpad with truncation (target length shorter than some strings)\n    c.bench_function(\"spark_rpad: with truncation\", |b| {\n        let args = vec![\n            ColumnarValue::Array(Arc::clone(&string_array)),\n            ColumnarValue::Scalar(ScalarValue::Int32(Some(5))),\n        ];\n        b.iter(|| black_box(spark_rpad(black_box(&args)).unwrap()))\n    });\n}\n\ncriterion_group!(benches, criterion_benchmark);\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/to_csv.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{\n    BooleanBuilder, Int16Builder, Int32Builder, Int64Builder, Int8Builder, StringBuilder,\n    StructArray, StructBuilder,\n};\nuse arrow::datatypes::{DataType, Field};\nuse criterion::{criterion_group, criterion_main, Criterion};\nuse datafusion_comet_spark_expr::{to_csv_inner, CsvWriteOptions, EvalMode, SparkCastOptions};\nuse std::hint::black_box;\n\nfn create_struct_array(array_size: usize) -> StructArray {\n    let fields = vec![\n        Field::new(\"f1\", DataType::Boolean, true),\n        Field::new(\"f2\", DataType::Int8, true),\n        Field::new(\"f3\", DataType::Int16, true),\n        Field::new(\"f4\", DataType::Int32, true),\n        Field::new(\"f5\", DataType::Int64, true),\n        Field::new(\"f6\", DataType::Utf8, true),\n    ];\n    let mut struct_builder = StructBuilder::from_fields(fields, array_size);\n    for i in 0..array_size {\n        struct_builder\n            .field_builder::<BooleanBuilder>(0)\n            .unwrap()\n            .append_option(if i % 10 == 0 { None } else { Some(i % 2 == 0) });\n\n        struct_builder\n            .field_builder::<Int8Builder>(1)\n            .unwrap()\n            .append_option(if i % 10 == 0 {\n                None\n            } else {\n                Some((i % 128) as i8)\n            });\n\n        struct_builder\n            .field_builder::<Int16Builder>(2)\n            .unwrap()\n            .append_option(if i % 10 == 0 { None } else { Some(i as i16) });\n\n        struct_builder\n            .field_builder::<Int32Builder>(3)\n            .unwrap()\n            .append_option(if i % 10 == 0 { None } else { Some(i as i32) });\n\n        struct_builder\n            .field_builder::<Int64Builder>(4)\n            .unwrap()\n            .append_option(if i % 10 == 0 { None } else { Some(i as i64) });\n\n        struct_builder\n            .field_builder::<StringBuilder>(5)\n            .unwrap()\n            .append_option(if i % 10 == 0 {\n                None\n            } else {\n                Some(format!(\"string_{}\", i))\n            });\n\n        struct_builder.append(true);\n    }\n    struct_builder.finish()\n}\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let array_size = 8192;\n    let timezone = \"UTC\";\n    let struct_array = create_struct_array(array_size);\n    let default_delimiter = \",\";\n    let default_null_value = \"\";\n    let default_quote = \"\\\"\";\n    let default_escape = \"\\\\\";\n    let mut cast_options = SparkCastOptions::new(EvalMode::Legacy, timezone, false);\n    cast_options.null_string = default_null_value.to_string();\n    let csv_write_options = CsvWriteOptions::new(\n        default_delimiter.to_string(),\n        
default_quote.to_string(),\n        default_escape.to_string(),\n        default_null_value.to_string(),\n        false,\n        true,\n        true,\n    );\n    c.bench_function(\"to_csv\", |b| {\n        b.iter(|| {\n            black_box(to_csv_inner(&struct_array, &cast_options, &csv_write_options).unwrap())\n        })\n    });\n}\n\ncriterion_group!(benches, criterion_benchmark);\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/benches/wide_decimal.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Benchmarks comparing the old Cast->BinaryExpr->Cast chain vs the fused WideDecimalBinaryExpr\n//! for Decimal128 arithmetic that requires wider intermediate precision.\n\nuse arrow::array::builder::Decimal128Builder;\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::{DataType, Field, Schema};\nuse criterion::{criterion_group, criterion_main, BenchmarkId, Criterion};\nuse datafusion::logical_expr::Operator;\nuse datafusion::physical_expr::expressions::{BinaryExpr, Column};\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion_comet_spark_expr::{\n    Cast, EvalMode, SparkCastOptions, WideDecimalBinaryExpr, WideDecimalOp,\n};\nuse std::sync::Arc;\n\nconst BATCH_SIZE: usize = 8192;\n\n/// Build a RecordBatch with two Decimal128 columns.\nfn make_decimal_batch(p1: u8, s1: i8, p2: u8, s2: i8) -> RecordBatch {\n    let mut left = Decimal128Builder::new();\n    let mut right = Decimal128Builder::new();\n    for i in 0..BATCH_SIZE as i128 {\n        left.append_value(123456789012345_i128 + i * 1000);\n        right.append_value(987654321098765_i128 - i * 1000);\n    }\n    let left = left.finish().with_data_type(DataType::Decimal128(p1, s1));\n    let right = right.finish().with_data_type(DataType::Decimal128(p2, s2));\n    let schema = Schema::new(vec![\n        Field::new(\"left\", DataType::Decimal128(p1, s1), false),\n        Field::new(\"right\", DataType::Decimal128(p2, s2), false),\n    ]);\n    RecordBatch::try_new(Arc::new(schema), vec![Arc::new(left), Arc::new(right)]).unwrap()\n}\n\n/// Old approach: Cast(Decimal128->Decimal256) both sides, BinaryExpr, Cast(Decimal256->Decimal128).\nfn build_old_expr(\n    p1: u8,\n    s1: i8,\n    p2: u8,\n    s2: i8,\n    op: Operator,\n    out_type: DataType,\n) -> Arc<dyn PhysicalExpr> {\n    let left_col: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"left\", 0));\n    let right_col: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"right\", 1));\n    let cast_opts = SparkCastOptions::new_without_timezone(EvalMode::Legacy, false);\n    let left_cast = Arc::new(Cast::new(\n        left_col,\n        DataType::Decimal256(p1, s1),\n        cast_opts.clone(),\n        None,\n        None,\n    ));\n    let right_cast = Arc::new(Cast::new(\n        right_col,\n        DataType::Decimal256(p2, s2),\n        cast_opts.clone(),\n        None,\n        None,\n    ));\n    let binary = Arc::new(BinaryExpr::new(left_cast, op, right_cast));\n    Arc::new(Cast::new(binary, out_type, cast_opts, None, None))\n}\n\n/// New approach: single fused WideDecimalBinaryExpr.\nfn build_new_expr(op: WideDecimalOp, p_out: u8, s_out: i8) -> Arc<dyn PhysicalExpr> {\n    let left_col: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"left\", 
0));\n    let right_col: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"right\", 1));\n    Arc::new(WideDecimalBinaryExpr::new(\n        left_col,\n        right_col,\n        op,\n        p_out,\n        s_out,\n        EvalMode::Legacy,\n    ))\n}\n\nfn bench_case(\n    group: &mut criterion::BenchmarkGroup<criterion::measurement::WallTime>,\n    name: &str,\n    batch: &RecordBatch,\n    old_expr: &Arc<dyn PhysicalExpr>,\n    new_expr: &Arc<dyn PhysicalExpr>,\n) {\n    group.bench_with_input(BenchmarkId::new(\"old\", name), batch, |b, batch| {\n        b.iter(|| old_expr.evaluate(batch).unwrap());\n    });\n    group.bench_with_input(BenchmarkId::new(\"fused\", name), batch, |b, batch| {\n        b.iter(|| new_expr.evaluate(batch).unwrap());\n    });\n}\n\nfn criterion_benchmark(c: &mut Criterion) {\n    let mut group = c.benchmark_group(\"wide_decimal\");\n\n    // Case 1: Add with same scale - Decimal128(38,10) + Decimal128(38,10) -> Decimal128(38,10)\n    // Triggers wide path because max(s1,s2) + max(p1-s1, p2-s2) = 10 + 28 = 38 >= 38\n    {\n        let batch = make_decimal_batch(38, 10, 38, 10);\n        let old = build_old_expr(38, 10, 38, 10, Operator::Plus, DataType::Decimal128(38, 10));\n        let new = build_new_expr(WideDecimalOp::Add, 38, 10);\n        bench_case(&mut group, \"add_same_scale\", &batch, &old, &new);\n    }\n\n    // Case 2: Add with different scales - Decimal128(38,6) + Decimal128(38,4) -> Decimal128(38,6)\n    {\n        let batch = make_decimal_batch(38, 6, 38, 4);\n        let old = build_old_expr(38, 6, 38, 4, Operator::Plus, DataType::Decimal128(38, 6));\n        let new = build_new_expr(WideDecimalOp::Add, 38, 6);\n        bench_case(&mut group, \"add_diff_scale\", &batch, &old, &new);\n    }\n\n    // Case 3: Multiply - Decimal128(20,10) * Decimal128(20,10) -> Decimal128(38,6)\n    // Triggers wide path because p1 + p2 = 40 >= 38\n    {\n        let batch = make_decimal_batch(20, 10, 20, 10);\n        let old = build_old_expr(\n            20,\n            10,\n            20,\n            10,\n            Operator::Multiply,\n            DataType::Decimal128(38, 6),\n        );\n        let new = build_new_expr(WideDecimalOp::Multiply, 38, 6);\n        bench_case(&mut group, \"multiply\", &batch, &old, &new);\n    }\n\n    // Case 4: Subtract with same scale - Decimal128(38,18) - Decimal128(38,18) -> Decimal128(38,18)\n    {\n        let batch = make_decimal_batch(38, 18, 38, 18);\n        let old = build_old_expr(\n            38,\n            18,\n            38,\n            18,\n            Operator::Minus,\n            DataType::Decimal128(38, 18),\n        );\n        let new = build_new_expr(WideDecimalOp::Subtract, 38, 18);\n        bench_case(&mut group, \"subtract\", &batch, &old, &new);\n    }\n\n    group.finish();\n}\n\ncriterion_group!(benches, criterion_benchmark);\ncriterion_main!(benches);\n"
  },
  {
    "path": "native/spark-expr/src/agg_funcs/avg.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{\n    builder::PrimitiveBuilder,\n    cast::AsArray,\n    types::{Float64Type, Int64Type},\n    Array, ArrayRef, ArrowNumericType, Int64Array, PrimitiveArray,\n};\nuse arrow::compute::sum;\nuse arrow::datatypes::{DataType, Field, FieldRef};\nuse datafusion::common::{not_impl_err, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    Accumulator, AggregateUDFImpl, EmitTo, GroupsAccumulator, ReversedUDAF, Signature,\n};\nuse datafusion::physical_expr::expressions::format_state_name;\nuse std::{any::Any, sync::Arc};\n\nuse arrow::array::ArrowNativeTypeOp;\nuse datafusion::logical_expr::function::{AccumulatorArgs, StateFieldsArgs};\nuse datafusion::logical_expr::Volatility::Immutable;\nuse DataType::*;\n\nfn avg_return_type(_name: &str, data_type: &DataType) -> Result<DataType> {\n    match data_type {\n        Float64 => Ok(Float64),\n        _ => not_impl_err!(\"Avg return type for {data_type}\"),\n    }\n}\n\n/// AVG aggregate expression\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub struct Avg {\n    name: String,\n    signature: Signature,\n    // expr: Arc<dyn PhysicalExpr>,\n    input_data_type: DataType,\n    result_data_type: DataType,\n}\n\nimpl Avg {\n    /// Create a new AVG aggregate function\n    pub fn new(name: impl Into<String>, data_type: DataType) -> Self {\n        let result_data_type = avg_return_type(\"avg\", &data_type).unwrap();\n\n        Self {\n            name: name.into(),\n            signature: Signature::user_defined(Immutable),\n            input_data_type: data_type,\n            result_data_type,\n        }\n    }\n}\n\nimpl AggregateUDFImpl for Avg {\n    /// Return a reference to Any that can be used for downcasting\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn accumulator(&self, _acc_args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {\n        // All numeric types use Float64 accumulation after casting\n        match (&self.input_data_type, &self.result_data_type) {\n            (Float64, Float64) => Ok(Box::<AvgAccumulator>::default()),\n            _ => not_impl_err!(\n                \"AvgAccumulator for ({} --> {})\",\n                self.input_data_type,\n                self.result_data_type\n            ),\n        }\n    }\n\n    fn state_fields(&self, _args: StateFieldsArgs) -> Result<Vec<FieldRef>> {\n        Ok(vec![\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"sum\"),\n                self.input_data_type.clone(),\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"count\"),\n                DataType::Int64,\n                true,\n            )),\n        
])\n    }\n\n    fn name(&self) -> &str {\n        &self.name\n    }\n\n    fn reverse_expr(&self) -> ReversedUDAF {\n        ReversedUDAF::Identical\n    }\n\n    fn groups_accumulator_supported(&self, _args: AccumulatorArgs) -> bool {\n        true\n    }\n\n    fn create_groups_accumulator(\n        &self,\n        _args: AccumulatorArgs,\n    ) -> Result<Box<dyn GroupsAccumulator>> {\n        match (&self.input_data_type, &self.result_data_type) {\n            (Float64, Float64) => Ok(Box::new(AvgGroupsAccumulator::<Float64Type, _>::new(\n                &self.input_data_type,\n                |sum: f64, count: i64| Ok(sum / count as f64),\n            ))),\n\n            _ => not_impl_err!(\n                \"AvgGroupsAccumulator for ({} --> {})\",\n                self.input_data_type,\n                self.result_data_type\n            ),\n        }\n    }\n\n    fn default_value(&self, _data_type: &DataType) -> Result<ScalarValue> {\n        Ok(ScalarValue::Float64(None))\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, arg_types: &[DataType]) -> Result<DataType> {\n        avg_return_type(self.name(), &arg_types[0])\n    }\n}\n\n/// An accumulator to compute the average\n#[derive(Debug, Default)]\npub struct AvgAccumulator {\n    sum: Option<f64>,\n    count: i64,\n}\n\nimpl Accumulator for AvgAccumulator {\n    fn state(&mut self) -> Result<Vec<ScalarValue>> {\n        Ok(vec![\n            ScalarValue::Float64(self.sum),\n            ScalarValue::from(self.count),\n        ])\n    }\n\n    fn update_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        let values = values[0].as_primitive::<Float64Type>();\n        self.count += (values.len() - values.null_count()) as i64;\n        let v = self.sum.get_or_insert(0.);\n        if let Some(x) = sum(values) {\n            *v += x;\n        }\n        Ok(())\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> Result<()> {\n        // counts are summed\n        self.count += sum(states[1].as_primitive::<Int64Type>()).unwrap_or_default();\n\n        // sums are summed - no overflow checking in any eval mode\n        if let Some(x) = sum(states[0].as_primitive::<Float64Type>()) {\n            let v = self.sum.get_or_insert(0.);\n            *v += x;\n        }\n        Ok(())\n    }\n\n    fn evaluate(&mut self) -> Result<ScalarValue> {\n        if self.count == 0 {\n            // If all inputs are null, count will be 0, and we will get null after the division.\n            // This is consistent with Spark's Average implementation.\n            Ok(ScalarValue::Float64(None))\n        } else {\n            Ok(ScalarValue::Float64(\n                self.sum.map(|f| f / self.count as f64),\n            ))\n        }\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n}\n\n/// An accumulator to compute the average of `[PrimitiveArray<T>]`.\n/// Stores values as native types (no overflow check in any eval mode,\n/// since infinity is a perfectly valid value in the Spark implementation).\n///\n/// F: Function that calculates the average value from a sum of\n/// T::Native and a total count\n#[derive(Debug)]\nstruct AvgGroupsAccumulator<T, F>\nwhere\n    T: ArrowNumericType + Send,\n    F: Fn(T::Native, i64) -> Result<T::Native> + Send,\n{\n    /// The type of the returned average\n    return_data_type: DataType,\n\n    /// Count per group (use i64 to make Int64Array)\n    counts: Vec<i64>,\n\n    /// Sums per group, stored as the native type\n    
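/// (e.g. f64 when T is Float64Type)\n    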
sums: Vec<T::Native>,\n\n    /// Function that computes the final average (value / count)\n    avg_fn: F,\n}\n\nimpl<T, F> AvgGroupsAccumulator<T, F>\nwhere\n    T: ArrowNumericType + Send,\n    F: Fn(T::Native, i64) -> Result<T::Native> + Send,\n{\n    pub fn new(return_data_type: &DataType, avg_fn: F) -> Self {\n        Self {\n            return_data_type: return_data_type.clone(),\n            counts: vec![],\n            sums: vec![],\n            avg_fn,\n        }\n    }\n}\n\nimpl<T, F> GroupsAccumulator for AvgGroupsAccumulator<T, F>\nwhere\n    T: ArrowNumericType + Send,\n    F: Fn(T::Native, i64) -> Result<T::Native> + Send,\n{\n    fn update_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        opt_filter: Option<&arrow::array::BooleanArray>,\n        total_num_groups: usize,\n    ) -> Result<()> {\n        assert_eq!(values.len(), 1, \"single argument to update_batch\");\n        let values = values[0].as_primitive::<T>();\n        let data = values.values();\n\n        // increment counts, update sums\n        self.counts.resize(total_num_groups, 0);\n        self.sums.resize(total_num_groups, T::default_value());\n\n        let iter = group_indices.iter().zip(data.iter());\n        if opt_filter.is_none() && values.null_count() == 0 {\n            for (&group_index, &value) in iter {\n                let sum = &mut self.sums[group_index];\n                // No overflow checking - Infinity is a valid result\n                *sum = (*sum).add_wrapping(value);\n                self.counts[group_index] += 1;\n            }\n        } else {\n            for (idx, (&group_index, &value)) in iter.enumerate() {\n                if let Some(f) = opt_filter {\n                    if !f.is_valid(idx) || !f.value(idx) {\n                        continue;\n                    }\n                }\n                if values.is_null(idx) {\n                    continue;\n                }\n                let sum = &mut self.sums[group_index];\n                *sum = (*sum).add_wrapping(value);\n\n                self.counts[group_index] += 1;\n            }\n        }\n\n        Ok(())\n    }\n\n    fn merge_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        _opt_filter: Option<&arrow::array::BooleanArray>,\n        total_num_groups: usize,\n    ) -> Result<()> {\n        assert_eq!(values.len(), 2, \"two arguments to merge_batch\");\n        // first batch is partial sums, second is counts\n        let partial_sums = values[0].as_primitive::<T>();\n        let partial_counts = values[1].as_primitive::<Int64Type>();\n        // update counts with partial counts\n        self.counts.resize(total_num_groups, 0);\n        let iter1 = group_indices.iter().zip(partial_counts.values().iter());\n        for (&group_index, &partial_count) in iter1 {\n            self.counts[group_index] += partial_count;\n        }\n\n        // update sums - no overflow checking (in all eval modes)\n        self.sums.resize(total_num_groups, T::default_value());\n        let iter2 = group_indices.iter().zip(partial_sums.values().iter());\n        for (&group_index, &new_value) in iter2 {\n            let sum = &mut self.sums[group_index];\n            *sum = sum.add_wrapping(new_value);\n        }\n\n        Ok(())\n    }\n\n    fn evaluate(&mut self, emit_to: EmitTo) -> Result<ArrayRef> {\n        let counts = emit_to.take_needed(&mut self.counts);\n        let sums = emit_to.take_needed(&mut self.sums);\n        
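// e.g. a group with sum 3.0 and count 2 emits 1.5; a group with count 0 emits null\n        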
let mut builder = PrimitiveBuilder::<T>::with_capacity(sums.len());\n        let iter = sums.into_iter().zip(counts);\n\n        for (sum, count) in iter {\n            if count != 0 {\n                builder.append_value((self.avg_fn)(sum, count)?)\n            } else {\n                builder.append_null();\n            }\n        }\n        let array: PrimitiveArray<T> = builder.finish();\n\n        Ok(Arc::new(array))\n    }\n\n    fn state(&mut self, emit_to: EmitTo) -> Result<Vec<ArrayRef>> {\n        let counts = emit_to.take_needed(&mut self.counts);\n        let counts = Int64Array::new(counts.into(), None);\n\n        let sums = emit_to.take_needed(&mut self.sums);\n        let sums = PrimitiveArray::<T>::new(sums.into(), None)\n            .with_data_type(self.return_data_type.clone());\n\n        Ok(vec![\n            Arc::new(sums) as ArrayRef,\n            Arc::new(counts) as ArrayRef,\n        ])\n    }\n\n    fn size(&self) -> usize {\n        self.counts.capacity() * std::mem::size_of::<i64>()\n            + self.sums.capacity() * std::mem::size_of::<T>()\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/agg_funcs/avg_decimal.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{\n    builder::PrimitiveBuilder,\n    cast::AsArray,\n    types::{Decimal128Type, Int64Type},\n    Array, ArrayRef, Decimal128Array, Int64Array, PrimitiveArray,\n};\nuse arrow::datatypes::{DataType, Field, FieldRef};\nuse arrow::{array::BooleanBufferBuilder, buffer::NullBuffer, compute::sum};\nuse datafusion::common::{not_impl_err, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    Accumulator, AggregateUDFImpl, EmitTo, GroupsAccumulator, ReversedUDAF, Signature,\n};\nuse datafusion::physical_expr::expressions::format_state_name;\nuse std::{any::Any, sync::Arc};\n\nuse crate::utils::{build_bool_state, is_valid_decimal_precision, unlikely};\nuse crate::{decimal_sum_overflow_error, EvalMode, SparkErrorWithContext};\nuse arrow::array::ArrowNativeTypeOp;\nuse arrow::datatypes::{\n    DECIMAL128_MAX_PRECISION, DECIMAL128_MAX_SCALE, MAX_DECIMAL128_FOR_EACH_PRECISION,\n    MIN_DECIMAL128_FOR_EACH_PRECISION,\n};\nuse datafusion::logical_expr::function::{AccumulatorArgs, StateFieldsArgs};\nuse datafusion::logical_expr::Volatility::Immutable;\nuse num::{integer::div_ceil, Integer};\nuse DataType::*;\n\nfn avg_return_type(_name: &str, data_type: &DataType) -> Result<DataType> {\n    match data_type {\n        Decimal128(precision, scale) => {\n            // In the spark, the result type is DECIMAL(min(38,precision+4), min(38,scale+4)).\n            // Ref: https://github.com/apache/spark/blob/fcf636d9eb8d645c24be3db2d599aba2d7e2955a/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/aggregate/Average.scala#L66\n            let new_precision = DECIMAL128_MAX_PRECISION.min(*precision + 4);\n            let new_scale = DECIMAL128_MAX_SCALE.min(*scale + 4);\n            Ok(Decimal128(new_precision, new_scale))\n        }\n        _ => not_impl_err!(\"Avg return type for {data_type}\"),\n    }\n}\n\n/// AVG aggregate expression\n#[derive(Debug, Clone)]\npub struct AvgDecimal {\n    signature: Signature,\n    sum_data_type: DataType,\n    result_data_type: DataType,\n    eval_mode: EvalMode,\n    expr_id: Option<u64>,\n    registry: Arc<crate::QueryContextMap>,\n}\n\n// Manually implement PartialEq, Eq, and Hash excluding the registry field\nimpl PartialEq for AvgDecimal {\n    fn eq(&self, other: &Self) -> bool {\n        self.sum_data_type == other.sum_data_type\n            && self.result_data_type == other.result_data_type\n            && self.eval_mode == other.eval_mode\n            && self.expr_id == other.expr_id\n    }\n}\n\nimpl Eq for AvgDecimal {}\n\nimpl std::hash::Hash for AvgDecimal {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.sum_data_type.hash(state);\n        
self.result_data_type.hash(state);\n        self.eval_mode.hash(state);\n        self.expr_id.hash(state);\n    }\n}\n\nimpl AvgDecimal {\n    /// Create a new AVG aggregate function\n    pub fn new(\n        result_type: DataType,\n        sum_type: DataType,\n        eval_mode: EvalMode,\n        expr_id: Option<u64>,\n        registry: Arc<crate::QueryContextMap>,\n    ) -> Self {\n        Self {\n            signature: Signature::user_defined(Immutable),\n            result_data_type: result_type,\n            sum_data_type: sum_type,\n            eval_mode,\n            expr_id,\n            registry,\n        }\n    }\n}\n\nimpl AggregateUDFImpl for AvgDecimal {\n    /// Return a reference to Any that can be used for downcasting\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn accumulator(&self, _acc_args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {\n        match (&self.sum_data_type, &self.result_data_type) {\n            (Decimal128(sum_precision, sum_scale), Decimal128(target_precision, target_scale)) => {\n                Ok(Box::new(AvgDecimalAccumulator::new(\n                    *sum_scale,\n                    *sum_precision,\n                    *target_precision,\n                    *target_scale,\n                    self.eval_mode,\n                    self.expr_id,\n                    Arc::clone(&self.registry),\n                )))\n            }\n            _ => not_impl_err!(\n                \"AvgDecimalAccumulator for ({} --> {})\",\n                self.sum_data_type,\n                self.result_data_type\n            ),\n        }\n    }\n\n    fn state_fields(&self, _args: StateFieldsArgs) -> Result<Vec<FieldRef>> {\n        Ok(vec![\n            Arc::new(Field::new(\n                format_state_name(self.name(), \"sum\"),\n                self.sum_data_type.clone(),\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(self.name(), \"count\"),\n                DataType::Int64,\n                true,\n            )),\n        ])\n    }\n\n    fn name(&self) -> &str {\n        \"avg\"\n    }\n\n    fn reverse_expr(&self) -> ReversedUDAF {\n        ReversedUDAF::Identical\n    }\n\n    fn groups_accumulator_supported(&self, _args: AccumulatorArgs) -> bool {\n        true\n    }\n\n    fn create_groups_accumulator(\n        &self,\n        _args: AccumulatorArgs,\n    ) -> Result<Box<dyn GroupsAccumulator>> {\n        // instantiate the specialized accumulator based on the type\n        match (&self.sum_data_type, &self.result_data_type) {\n            (Decimal128(sum_precision, sum_scale), Decimal128(target_precision, target_scale)) => {\n                Ok(Box::new(AvgDecimalGroupsAccumulator::new(\n                    &self.result_data_type,\n                    &self.sum_data_type,\n                    *target_precision,\n                    *target_scale,\n                    *sum_precision,\n                    *sum_scale,\n                    self.eval_mode,\n                    self.expr_id,\n                    Arc::clone(&self.registry),\n                )))\n            }\n            _ => not_impl_err!(\n                \"AvgDecimalGroupsAccumulator for ({} --> {})\",\n                self.sum_data_type,\n                self.result_data_type\n            ),\n        }\n    }\n\n    fn default_value(&self, _data_type: &DataType) -> Result<ScalarValue> {\n        match &self.result_data_type {\n            Decimal128(target_precision, target_scale) => {\n                
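// with no input rows, AVG evaluates to a null decimal of the target precision and scale\n                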
Ok(make_decimal128(None, *target_precision, *target_scale))\n            }\n            _ => not_impl_err!(\n                \"The result_data_type of AvgDecimal should be Decimal128 but got {}\",\n                self.result_data_type\n            ),\n        }\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, arg_types: &[DataType]) -> Result<DataType> {\n        avg_return_type(self.name(), &arg_types[0])\n    }\n\n    fn is_nullable(&self) -> bool {\n        // In Spark, Sum.nullable and Average.nullable both return true irrespective of ANSI mode.\n        // AvgDecimal is always nullable because overflows can cause null values.\n        true\n    }\n}\n\n/// An accumulator to compute the average for decimals\n#[derive(Debug)]\nstruct AvgDecimalAccumulator {\n    sum: Option<i128>,\n    count: i64,\n    is_empty: bool,\n    is_not_null: bool,\n    sum_scale: i8,\n    sum_precision: u8,\n    target_precision: u8,\n    target_scale: i8,\n    eval_mode: EvalMode,\n    expr_id: Option<u64>,\n    registry: Arc<crate::QueryContextMap>,\n}\n\nimpl AvgDecimalAccumulator {\n    pub fn new(\n        sum_scale: i8,\n        sum_precision: u8,\n        target_precision: u8,\n        target_scale: i8,\n        eval_mode: EvalMode,\n        expr_id: Option<u64>,\n        registry: Arc<crate::QueryContextMap>,\n    ) -> Self {\n        Self {\n            sum: None,\n            count: 0,\n            is_empty: true,\n            is_not_null: true,\n            sum_scale,\n            sum_precision,\n            target_precision,\n            target_scale,\n            eval_mode,\n            expr_id,\n            registry,\n        }\n    }\n\n    /// Wrap a SparkError with QueryContext if expr_id is available\n    fn wrap_error_with_context(\n        &self,\n        error: crate::SparkError,\n    ) -> datafusion::common::DataFusionError {\n        if let Some(expr_id) = self.expr_id {\n            if let Some(query_ctx) = self.registry.get(expr_id) {\n                let wrapped = SparkErrorWithContext::with_context(error, query_ctx);\n                return datafusion::common::DataFusionError::External(Box::new(wrapped));\n            }\n        }\n        datafusion::common::DataFusionError::from(error)\n    }\n\n    fn update_single(&mut self, values: &Decimal128Array, idx: usize) -> Result<()> {\n        let v = unsafe { values.value_unchecked(idx) };\n        let (new_sum, is_overflow) = match self.sum {\n            Some(sum) => sum.overflowing_add(v),\n            None => (v, false),\n        };\n\n        if is_overflow || !is_valid_decimal_precision(new_sum, self.sum_precision) {\n            // Overflow: set to null. Error will be thrown during evaluate in ANSI mode.\n            // This matches Spark's DecimalAddNoOverflowCheck behavior.\n            self.is_not_null = false;\n            return Ok(());\n        }\n\n        self.sum = Some(new_sum);\n\n        if let Some(new_count) = self.count.checked_add(1) {\n            self.count = new_count;\n        } else {\n            // Count overflow: set to null. 
Error will be thrown during evaluate in ANSI mode.\n            self.is_not_null = false;\n            return Ok(());\n        }\n\n        self.is_not_null = true;\n        Ok(())\n    }\n}\n\nfn make_decimal128(value: Option<i128>, precision: u8, scale: i8) -> ScalarValue {\n    ScalarValue::Decimal128(value, precision, scale)\n}\n\nimpl Accumulator for AvgDecimalAccumulator {\n    fn state(&mut self) -> Result<Vec<ScalarValue>> {\n        Ok(vec![\n            ScalarValue::Decimal128(self.sum, self.sum_precision, self.sum_scale),\n            ScalarValue::from(self.count),\n        ])\n    }\n\n    fn update_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        if !self.is_empty && !self.is_not_null {\n            // This means there's an overflow in the decimal sum, so we will just skip the rest\n            // of the computation\n            return Ok(());\n        }\n        let values = &values[0];\n        let data = values.as_primitive::<Decimal128Type>();\n        self.is_empty = self.is_empty && values.len() == values.null_count();\n        if values.null_count() == 0 {\n            for i in 0..data.len() {\n                self.update_single(data, i)?;\n            }\n        } else {\n            for i in 0..data.len() {\n                if data.is_null(i) {\n                    continue;\n                }\n                self.update_single(data, i)?;\n            }\n        }\n        Ok(())\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> Result<()> {\n        let partial_sums = states[0].as_primitive::<Decimal128Type>();\n        let partial_counts = states[1].as_primitive::<Int64Type>();\n\n        // Update is_empty: if any partial state has data, we're not empty\n        if self.is_empty {\n            self.is_empty = partial_counts.len() == partial_counts.null_count();\n        }\n\n        // counts are summed\n        self.count += sum(partial_counts).unwrap_or_default();\n\n        // sums are summed\n        if let Some(x) = sum(partial_sums) {\n            let v = self.sum.get_or_insert(0);\n            let (result, overflowed) = v.overflowing_add(x);\n\n            if overflowed || !is_valid_decimal_precision(result, self.sum_precision) {\n                // Overflow during merge: set to null, error will be thrown during evaluate in ANSI mode\n                self.is_not_null = false;\n                self.sum = None;\n            } else {\n                *v = result;\n            }\n        }\n        Ok(())\n    }\n\n    fn evaluate(&mut self) -> Result<ScalarValue> {\n        // Check for overflow during sum accumulation in ANSI mode.\n        // This matches Spark's DecimalDivideWithOverflowCheck behavior.\n        if self.sum.is_none() && !self.is_empty && self.eval_mode == EvalMode::Ansi {\n            let error = decimal_sum_overflow_error();\n            return Err(self.wrap_error_with_context(error));\n        }\n\n        // Also check if is_not_null is false (indicates overflow)\n        if !self.is_not_null && self.count > 0 && self.eval_mode == EvalMode::Ansi {\n            let error = decimal_sum_overflow_error();\n            return Err(self.wrap_error_with_context(error));\n        }\n\n        let scaler = 10_i128.pow(self.target_scale.saturating_sub(self.sum_scale) as u32);\n        let target_min = MIN_DECIMAL128_FOR_EACH_PRECISION[self.target_precision as usize];\n        let target_max = MAX_DECIMAL128_FOR_EACH_PRECISION[self.target_precision as usize];\n\n        let result = self\n            .sum\n            .map(|v| 
avg(v, self.count as i128, target_min, target_max, scaler));\n\n        match result {\n            Some(value) => Ok(make_decimal128(\n                value,\n                self.target_precision,\n                self.target_scale,\n            )),\n            _ => Ok(make_decimal128(\n                None,\n                self.target_precision,\n                self.target_scale,\n            )),\n        }\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n}\n\n#[derive(Debug)]\nstruct AvgDecimalGroupsAccumulator {\n    /// Tracks if the value is null\n    is_not_null: BooleanBufferBuilder,\n\n    /// The type of the avg return type\n    return_data_type: DataType,\n    target_precision: u8,\n    target_scale: i8,\n\n    /// Count per group (use i64 to make Int64Array)\n    counts: Vec<i64>,\n\n    /// Sums per group, stored as i128\n    sums: Vec<i128>,\n\n    /// The type of the sum\n    sum_data_type: DataType,\n    /// This is input_precision + 10 to be consistent with Spark\n    sum_precision: u8,\n    sum_scale: i8,\n\n    /// Evaluation mode for error handling\n    eval_mode: EvalMode,\n    /// Optional expression ID for query context lookup during error creation\n    expr_id: Option<u64>,\n    /// Session-scoped query context registry for error reporting\n    registry: Arc<crate::QueryContextMap>,\n}\n\nimpl AvgDecimalGroupsAccumulator {\n    #[allow(clippy::too_many_arguments)]\n    pub fn new(\n        return_data_type: &DataType,\n        sum_data_type: &DataType,\n        target_precision: u8,\n        target_scale: i8,\n        sum_precision: u8,\n        sum_scale: i8,\n        eval_mode: EvalMode,\n        expr_id: Option<u64>,\n        registry: Arc<crate::QueryContextMap>,\n    ) -> Self {\n        Self {\n            is_not_null: BooleanBufferBuilder::new(0),\n            return_data_type: return_data_type.clone(),\n            target_precision,\n            target_scale,\n            sum_data_type: sum_data_type.clone(),\n            sum_precision,\n            sum_scale,\n            counts: vec![],\n            sums: vec![],\n            eval_mode,\n            expr_id,\n            registry,\n        }\n    }\n\n    /// Wrap a SparkError with QueryContext if expr_id is available\n    fn wrap_error_with_context(\n        &self,\n        error: crate::SparkError,\n    ) -> datafusion::common::DataFusionError {\n        if let Some(expr_id) = self.expr_id {\n            if let Some(query_ctx) = self.registry.get(expr_id) {\n                let wrapped = SparkErrorWithContext::with_context(error, query_ctx);\n                return datafusion::common::DataFusionError::External(Box::new(wrapped));\n            }\n        }\n        datafusion::common::DataFusionError::from(error)\n    }\n\n    #[inline]\n    fn update_single(&mut self, group_index: usize, value: i128) -> Result<()> {\n        let (new_sum, is_overflow) = self.sums[group_index].overflowing_add(value);\n        self.counts[group_index] += 1;\n        self.sums[group_index] = new_sum;\n\n        if unlikely(is_overflow || !is_valid_decimal_precision(new_sum, self.sum_precision)) {\n            // Overflow: set to null. 
Error will be thrown during evaluate in ANSI mode.\n            // This matches Spark's DecimalAddNoOverflowCheck behavior.\n            self.is_not_null.set_bit(group_index, false);\n        }\n        Ok(())\n    }\n}\n\nfn ensure_bit_capacity(builder: &mut BooleanBufferBuilder, capacity: usize) {\n    if builder.len() < capacity {\n        let additional = capacity - builder.len();\n        builder.append_n(additional, true);\n    }\n}\n\nimpl GroupsAccumulator for AvgDecimalGroupsAccumulator {\n    fn update_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        opt_filter: Option<&arrow::array::BooleanArray>,\n        total_num_groups: usize,\n    ) -> Result<()> {\n        assert_eq!(values.len(), 1, \"single argument to update_batch\");\n        let values = values[0].as_primitive::<Decimal128Type>();\n        let data = values.values();\n\n        // increment counts, update sums\n        self.counts.resize(total_num_groups, 0);\n        self.sums.resize(total_num_groups, 0);\n        ensure_bit_capacity(&mut self.is_not_null, total_num_groups);\n\n        let iter = group_indices.iter().zip(data.iter());\n        if opt_filter.is_none() && values.null_count() == 0 {\n            for (&group_index, &value) in iter {\n                self.update_single(group_index, value)?;\n            }\n        } else {\n            for (idx, (&group_index, &value)) in iter.enumerate() {\n                if let Some(f) = opt_filter {\n                    if !f.is_valid(idx) || !f.value(idx) {\n                        continue;\n                    }\n                }\n                if values.is_null(idx) {\n                    continue;\n                }\n                self.update_single(group_index, value)?;\n            }\n        }\n        Ok(())\n    }\n\n    fn merge_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        _opt_filter: Option<&arrow::array::BooleanArray>,\n        total_num_groups: usize,\n    ) -> Result<()> {\n        assert_eq!(values.len(), 2, \"two arguments to merge_batch\");\n        // first batch is partial sums, second is counts\n        let partial_sums = values[0].as_primitive::<Decimal128Type>();\n        let partial_counts = values[1].as_primitive::<Int64Type>();\n        // update counts with partial counts\n        self.counts.resize(total_num_groups, 0);\n        let iter1 = group_indices.iter().zip(partial_counts.values().iter());\n        for (&group_index, &partial_count) in iter1 {\n            self.counts[group_index] += partial_count;\n        }\n\n        // update sums\n        self.sums.resize(total_num_groups, 0);\n        // Ensure bit capacity BEFORE setting any bits\n        ensure_bit_capacity(&mut self.is_not_null, total_num_groups);\n\n        let iter2 = group_indices.iter().zip(partial_sums.values().iter());\n        for (idx, (&group_index, &new_value)) in iter2.enumerate() {\n            // Check if partial sum is null (indicates overflow in that partition)\n            if partial_sums.is_null(idx) {\n                self.is_not_null.set_bit(group_index, false);\n                continue;\n            }\n\n            let sum = self.sums[group_index];\n            let (new_sum, is_overflow) = sum.overflowing_add(new_value);\n\n            if is_overflow || !is_valid_decimal_precision(new_sum, self.sum_precision) {\n                if self.eval_mode == EvalMode::Ansi {\n                    let error = decimal_sum_overflow_error();\n               
     return Err(self.wrap_error_with_context(error));\n                }\n                self.is_not_null.set_bit(group_index, false);\n            } else {\n                self.sums[group_index] = new_sum;\n            }\n        }\n        if partial_counts.null_count() != 0 {\n            for (index, &group_index) in group_indices.iter().enumerate() {\n                if partial_counts.is_null(index) {\n                    self.is_not_null.set_bit(group_index, false);\n                }\n            }\n        }\n\n        Ok(())\n    }\n\n    fn evaluate(&mut self, emit_to: EmitTo) -> Result<ArrayRef> {\n        let nulls = build_bool_state(&mut self.is_not_null, &emit_to);\n        let counts = emit_to.take_needed(&mut self.counts);\n        let sums = emit_to.take_needed(&mut self.sums);\n\n        let mut builder = PrimitiveBuilder::<Decimal128Type>::with_capacity(sums.len())\n            .with_data_type(self.return_data_type.clone());\n        let iter = sums.into_iter().zip(counts);\n\n        let scaler = 10_i128.pow(self.target_scale.saturating_sub(self.sum_scale) as u32);\n        let target_min = MIN_DECIMAL128_FOR_EACH_PRECISION[self.target_precision as usize];\n        let target_max = MAX_DECIMAL128_FOR_EACH_PRECISION[self.target_precision as usize];\n\n        for (is_not_null, (sum, count)) in nulls.into_iter().zip(iter) {\n            // Check for overflow during sum accumulation in ANSI mode.\n            // This matches Spark's DecimalDivideWithOverflowCheck behavior.\n            if !is_not_null && count > 0 && self.eval_mode == EvalMode::Ansi {\n                let error = decimal_sum_overflow_error();\n                return Err(self.wrap_error_with_context(error));\n            }\n\n            if !is_not_null || count == 0 {\n                builder.append_null();\n                continue;\n            }\n\n            match avg(sum, count as i128, target_min, target_max, scaler) {\n                Some(value) => {\n                    builder.append_value(value);\n                }\n                _ => {\n                    builder.append_null();\n                }\n            }\n        }\n        let array: PrimitiveArray<Decimal128Type> = builder.finish();\n\n        Ok(Arc::new(array))\n    }\n\n    // return arrays for sums and counts\n    fn state(&mut self, emit_to: EmitTo) -> Result<Vec<ArrayRef>> {\n        let nulls = build_bool_state(&mut self.is_not_null, &emit_to);\n        let nulls = Some(NullBuffer::new(nulls));\n\n        let counts = emit_to.take_needed(&mut self.counts);\n        let counts = Int64Array::new(counts.into(), nulls.clone());\n\n        let sums = emit_to.take_needed(&mut self.sums);\n        let sums =\n            Decimal128Array::new(sums.into(), nulls).with_data_type(self.sum_data_type.clone());\n\n        Ok(vec![\n            Arc::new(sums) as ArrayRef,\n            Arc::new(counts) as ArrayRef,\n        ])\n    }\n\n    fn size(&self) -> usize {\n        self.counts.capacity() * std::mem::size_of::<i64>()\n            + self.sums.capacity() * std::mem::size_of::<i128>()\n    }\n}\n\n/// Returns `sum`/`count` as an i128 Decimal128 with\n/// target_scale and target_precision, returning None on overflow.\n///\n/// * sum: The total sum value stored as Decimal128 with sum_scale\n/// * count: total count, stored as an i128 (*NOT* a Decimal128 value)\n/// * target_min: The minimum output value possible to represent with the target precision\n/// * target_max: The maximum output value possible to represent with the target 
precision\n/// * scaler: 10^(target_scale - sum_scale), used to rescale the sum to the target scale\n#[inline(always)]\nfn avg(sum: i128, count: i128, target_min: i128, target_max: i128, scaler: i128) -> Option<i128> {\n    if let Some(value) = sum.checked_mul(scaler) {\n        // `sum / count` with ROUND_HALF_UP\n        let (div, rem) = value.div_rem(&count);\n        let half = div_ceil(count, 2);\n        let half_neg = half.neg_wrapping();\n        let new_value = match value >= 0 {\n            true if rem >= half => div.add_wrapping(1),\n            false if rem <= half_neg => div.sub_wrapping(1),\n            _ => div,\n        };\n        if new_value >= target_min && new_value <= target_max {\n            Some(new_value)\n        } else {\n            None\n        }\n    } else {\n        None\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/agg_funcs/correlation.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::compute::{and, filter, is_not_null};\n\nuse std::{any::Any, sync::Arc};\n\nuse crate::agg_funcs::covariance::CovarianceAccumulator;\nuse crate::agg_funcs::stddev::StddevAccumulator;\nuse arrow::datatypes::FieldRef;\nuse arrow::{\n    array::ArrayRef,\n    datatypes::{DataType, Field},\n};\nuse datafusion::common::{Result, ScalarValue};\nuse datafusion::logical_expr::function::{AccumulatorArgs, StateFieldsArgs};\nuse datafusion::logical_expr::type_coercion::aggregates::NUMERICS;\nuse datafusion::logical_expr::{Accumulator, AggregateUDFImpl, Signature, Volatility};\nuse datafusion::physical_expr::expressions::format_state_name;\nuse datafusion::physical_expr::expressions::StatsType;\n\n/// CORR aggregate expression\n/// The implementation mostly is the same as the DataFusion's implementation. The reason\n/// we have our own implementation is that DataFusion has UInt64 for state_field `count`,\n/// while Spark has Double for count. 
Also we have added `null_on_divide_by_zero`\n/// to be consistent with Spark's implementation.\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct Correlation {\n    name: String,\n    signature: Signature,\n    null_on_divide_by_zero: bool,\n}\n\nimpl Correlation {\n    pub fn new(name: impl Into<String>, data_type: DataType, null_on_divide_by_zero: bool) -> Self {\n        // the result of correlation only supports the FLOAT64 data type.\n        assert!(matches!(data_type, DataType::Float64));\n        Self {\n            name: name.into(),\n            signature: Signature::uniform(2, NUMERICS.to_vec(), Volatility::Immutable),\n            null_on_divide_by_zero,\n        }\n    }\n}\n\nimpl AggregateUDFImpl for Correlation {\n    /// Return a reference to Any that can be used for downcasting\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        &self.name\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Float64)\n    }\n    fn default_value(&self, _data_type: &DataType) -> Result<ScalarValue> {\n        Ok(ScalarValue::Float64(None))\n    }\n\n    fn accumulator(&self, _acc_args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {\n        Ok(Box::new(CorrelationAccumulator::try_new(\n            self.null_on_divide_by_zero,\n        )?))\n    }\n\n    fn state_fields(&self, _args: StateFieldsArgs) -> Result<Vec<FieldRef>> {\n        Ok(vec![\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"count\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"mean1\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"mean2\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"algo_const\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"m2_1\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"m2_2\"),\n                DataType::Float64,\n                true,\n            )),\n        ])\n    }\n}\n\n/// An accumulator to compute correlation\n#[derive(Debug)]\npub struct CorrelationAccumulator {\n    covar: CovarianceAccumulator,\n    stddev1: StddevAccumulator,\n    stddev2: StddevAccumulator,\n    null_on_divide_by_zero: bool,\n}\n\nimpl CorrelationAccumulator {\n    /// Creates a new `CorrelationAccumulator`\n    pub fn try_new(null_on_divide_by_zero: bool) -> Result<Self> {\n        Ok(Self {\n            covar: CovarianceAccumulator::try_new(StatsType::Population, null_on_divide_by_zero)?,\n            stddev1: StddevAccumulator::try_new(StatsType::Population, null_on_divide_by_zero)?,\n            stddev2: StddevAccumulator::try_new(StatsType::Population, null_on_divide_by_zero)?,\n            null_on_divide_by_zero,\n        })\n    }\n}\n\nimpl Accumulator for CorrelationAccumulator {\n    fn state(&mut self) -> Result<Vec<ScalarValue>> {\n        Ok(vec![\n            ScalarValue::from(self.covar.get_count()),\n            
ScalarValue::from(self.covar.get_mean1()),\n            ScalarValue::from(self.covar.get_mean2()),\n            ScalarValue::from(self.covar.get_algo_const()),\n            ScalarValue::from(self.stddev1.get_m2()),\n            ScalarValue::from(self.stddev2.get_m2()),\n        ])\n    }\n\n    fn update_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        let values = if values[0].null_count() != 0 || values[1].null_count() != 0 {\n            let mask = and(&is_not_null(&values[0])?, &is_not_null(&values[1])?)?;\n            let values1 = filter(&values[0], &mask)?;\n            let values2 = filter(&values[1], &mask)?;\n\n            vec![values1, values2]\n        } else {\n            values.to_vec()\n        };\n\n        if !values[0].is_empty() && !values[1].is_empty() {\n            self.covar.update_batch(&values)?;\n            self.stddev1.update_batch(&values[0..1])?;\n            self.stddev2.update_batch(&values[1..2])?;\n        }\n\n        Ok(())\n    }\n\n    fn retract_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        let values = if values[0].null_count() != 0 || values[1].null_count() != 0 {\n            let mask = and(&is_not_null(&values[0])?, &is_not_null(&values[1])?)?;\n            let values1 = filter(&values[0], &mask)?;\n            let values2 = filter(&values[1], &mask)?;\n\n            vec![values1, values2]\n        } else {\n            values.to_vec()\n        };\n\n        self.covar.retract_batch(&values)?;\n        self.stddev1.retract_batch(&values[0..1])?;\n        self.stddev2.retract_batch(&values[1..2])?;\n        Ok(())\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> Result<()> {\n        let states_c = [\n            Arc::clone(&states[0]),\n            Arc::clone(&states[1]),\n            Arc::clone(&states[2]),\n            Arc::clone(&states[3]),\n        ];\n        let states_s1 = [\n            Arc::clone(&states[0]),\n            Arc::clone(&states[1]),\n            Arc::clone(&states[4]),\n        ];\n        let states_s2 = [\n            Arc::clone(&states[0]),\n            Arc::clone(&states[2]),\n            Arc::clone(&states[5]),\n        ];\n\n        if !states[0].is_empty() && !states[1].is_empty() && !states[2].is_empty() {\n            self.covar.merge_batch(&states_c)?;\n            self.stddev1.merge_batch(&states_s1)?;\n            self.stddev2.merge_batch(&states_s2)?;\n        }\n        Ok(())\n    }\n\n    fn evaluate(&mut self) -> Result<ScalarValue> {\n        let covar = self.covar.evaluate()?;\n        let stddev1 = self.stddev1.evaluate()?;\n        let stddev2 = self.stddev2.evaluate()?;\n\n        if self.covar.get_count() == 0.0 {\n            return Ok(ScalarValue::Float64(None));\n        } else if self.covar.get_count() == 1.0 {\n            if self.null_on_divide_by_zero {\n                return Ok(ScalarValue::Float64(None));\n            } else {\n                return Ok(ScalarValue::Float64(Some(f64::NAN)));\n            }\n        }\n        match (covar, stddev1, stddev2) {\n            (\n                ScalarValue::Float64(Some(c)),\n                ScalarValue::Float64(Some(s1)),\n                ScalarValue::Float64(Some(s2)),\n            ) if s1 != 0.0 && s2 != 0.0 => Ok(ScalarValue::Float64(Some(c / (s1 * s2)))),\n            _ => Ok(ScalarValue::Float64(None)),\n        }\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self) - std::mem::size_of_val(&self.covar) + self.covar.size()\n            - 
std::mem::size_of_val(&self.stddev1)\n            + self.stddev1.size()\n            - std::mem::size_of_val(&self.stddev2)\n            + self.stddev2.size()\n    }\n}\n"
  },
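The `evaluate` above reduces to Pearson's population correlation, `corr = covar_pop / (stddev_pop1 * stddev_pop2)`, with Spark's edge cases layered on top: zero rows yields NULL, a single row yields NaN (or NULL when `null_on_divide_by_zero` is set), and a zero standard deviation on either side yields NULL. A minimal standalone sketch of those same branches, using a hypothetical `corr_pop` helper rather than the Comet types:

```rust
// Illustration only: mirrors the branch structure of
// CorrelationAccumulator::evaluate on plain slices. `corr_pop` is a
// hypothetical helper, not part of the Comet crate.
fn corr_pop(xs: &[f64], ys: &[f64], null_on_divide_by_zero: bool) -> Option<f64> {
    assert_eq!(xs.len(), ys.len());
    let n = xs.len() as f64;
    if n == 0.0 {
        return None; // count == 0.0 => NULL
    }
    if n == 1.0 {
        // count == 1.0 => NULL or NaN depending on the flag
        return if null_on_divide_by_zero { None } else { Some(f64::NAN) };
    }
    let mean_x = xs.iter().sum::<f64>() / n;
    let mean_y = ys.iter().sum::<f64>() / n;
    let covar = xs
        .iter()
        .zip(ys)
        .map(|(x, y)| (x - mean_x) * (y - mean_y))
        .sum::<f64>()
        / n;
    let s1 = (xs.iter().map(|x| (x - mean_x).powi(2)).sum::<f64>() / n).sqrt();
    let s2 = (ys.iter().map(|y| (y - mean_y).powi(2)).sum::<f64>() / n).sqrt();
    if s1 != 0.0 && s2 != 0.0 {
        Some(covar / (s1 * s2)) // same final division as evaluate()
    } else {
        None // zero stddev on either side => NULL, as in the match arm
    }
}

fn main() {
    // Perfectly linear data: correlation is exactly 1.
    let (xs, ys) = ([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0]);
    assert!((corr_pop(&xs, &ys, true).unwrap() - 1.0).abs() < 1e-12);
    assert_eq!(corr_pop(&[], &[], true), None);
    assert!(corr_pop(&[1.0], &[2.0], false).unwrap().is_nan());
}
```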
  {
    "path": "native/spark-expr/src/agg_funcs/covariance.rs",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\nuse arrow::datatypes::FieldRef;\nuse arrow::{\n    array::{ArrayRef, Float64Array},\n    compute::cast,\n    datatypes::{DataType, Field},\n};\nuse datafusion::common::{downcast_value, unwrap_or_internal_err, Result, ScalarValue};\nuse datafusion::logical_expr::function::{AccumulatorArgs, StateFieldsArgs};\nuse datafusion::logical_expr::type_coercion::aggregates::NUMERICS;\nuse datafusion::logical_expr::{Accumulator, AggregateUDFImpl, Signature, Volatility};\nuse datafusion::physical_expr::expressions::format_state_name;\nuse datafusion::physical_expr::expressions::StatsType;\nuse std::any::Any;\nuse std::sync::Arc;\n\n/// COVAR_SAMP and COVAR_POP aggregate expression\n/// The implementation mostly is the same as the DataFusion's implementation. The reason\n/// we have our own implementation is that DataFusion has UInt64 for state_field count,\n/// while Spark has Double for count.\n#[derive(Debug, Clone, PartialEq, Eq)]\npub struct Covariance {\n    name: String,\n    signature: Signature,\n    stats_type: StatsType,\n    null_on_divide_by_zero: bool,\n}\n\nimpl std::hash::Hash for Covariance {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.name.hash(state);\n        self.signature.hash(state);\n        (self.stats_type as u8).hash(state);\n        self.null_on_divide_by_zero.hash(state);\n    }\n}\n\nimpl Covariance {\n    /// Create a new COVAR aggregate function\n    pub fn new(\n        name: impl Into<String>,\n        data_type: DataType,\n        stats_type: StatsType,\n        null_on_divide_by_zero: bool,\n    ) -> Self {\n        // the result of covariance just support FLOAT64 data type.\n        assert!(matches!(data_type, DataType::Float64));\n        Self {\n            name: name.into(),\n            signature: Signature::uniform(2, NUMERICS.to_vec(), Volatility::Immutable),\n            stats_type,\n            null_on_divide_by_zero,\n        }\n    }\n}\n\nimpl AggregateUDFImpl for Covariance {\n    /// Return a reference to Any that can be used for downcasting\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        &self.name\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Float64)\n    }\n    fn default_value(&self, _data_type: &DataType) -> Result<ScalarValue> {\n        Ok(ScalarValue::Float64(None))\n    }\n\n    fn accumulator(&self, _acc_args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {\n        Ok(Box::new(CovarianceAccumulator::try_new(\n            self.stats_type,\n            self.null_on_divide_by_zero,\n        )?))\n    
}\n\n    fn state_fields(&self, _args: StateFieldsArgs) -> Result<Vec<FieldRef>> {\n        Ok(vec![\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"count\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"mean1\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"mean2\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"algo_const\"),\n                DataType::Float64,\n                true,\n            )),\n        ])\n    }\n}\n\n/// An accumulator to compute covariance\n#[derive(Debug)]\npub struct CovarianceAccumulator {\n    algo_const: f64,\n    mean1: f64,\n    mean2: f64,\n    count: f64,\n    stats_type: StatsType,\n    null_on_divide_by_zero: bool,\n}\n\nimpl CovarianceAccumulator {\n    /// Creates a new `CovarianceAccumulator`\n    pub fn try_new(s_type: StatsType, null_on_divide_by_zero: bool) -> Result<Self> {\n        Ok(Self {\n            algo_const: 0_f64,\n            mean1: 0_f64,\n            mean2: 0_f64,\n            count: 0_f64,\n            stats_type: s_type,\n            null_on_divide_by_zero,\n        })\n    }\n\n    pub fn get_count(&self) -> f64 {\n        self.count\n    }\n\n    pub fn get_mean1(&self) -> f64 {\n        self.mean1\n    }\n\n    pub fn get_mean2(&self) -> f64 {\n        self.mean2\n    }\n\n    pub fn get_algo_const(&self) -> f64 {\n        self.algo_const\n    }\n}\n\nimpl Accumulator for CovarianceAccumulator {\n    fn state(&mut self) -> Result<Vec<ScalarValue>> {\n        Ok(vec![\n            ScalarValue::from(self.count),\n            ScalarValue::from(self.mean1),\n            ScalarValue::from(self.mean2),\n            ScalarValue::from(self.algo_const),\n        ])\n    }\n\n    fn update_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        let values1 = &cast(&values[0], &DataType::Float64)?;\n        let values2 = &cast(&values[1], &DataType::Float64)?;\n\n        let mut arr1 = downcast_value!(values1, Float64Array).iter().flatten();\n        let mut arr2 = downcast_value!(values2, Float64Array).iter().flatten();\n\n        for i in 0..values1.len() {\n            let value1 = if values1.is_valid(i) {\n                arr1.next()\n            } else {\n                None\n            };\n            let value2 = if values2.is_valid(i) {\n                arr2.next()\n            } else {\n                None\n            };\n\n            if value1.is_none() || value2.is_none() {\n                continue;\n            }\n\n            let value1 = unwrap_or_internal_err!(value1);\n            let value2 = unwrap_or_internal_err!(value2);\n            let new_count = self.count + 1.0;\n            let delta1 = value1 - self.mean1;\n            let new_mean1 = delta1 / new_count + self.mean1;\n            let delta2 = value2 - self.mean2;\n            let new_mean2 = delta2 / new_count + self.mean2;\n            let new_c = delta1 * (value2 - new_mean2) + self.algo_const;\n\n            self.count += 1.0;\n            self.mean1 = new_mean1;\n            self.mean2 = new_mean2;\n            self.algo_const = new_c;\n        }\n\n        Ok(())\n    }\n\n    fn retract_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        let values1 = 
&cast(&values[0], &DataType::Float64)?;\n        let values2 = &cast(&values[1], &DataType::Float64)?;\n        let mut arr1 = downcast_value!(values1, Float64Array).iter().flatten();\n        let mut arr2 = downcast_value!(values2, Float64Array).iter().flatten();\n\n        for i in 0..values1.len() {\n            let value1 = if values1.is_valid(i) {\n                arr1.next()\n            } else {\n                None\n            };\n            let value2 = if values2.is_valid(i) {\n                arr2.next()\n            } else {\n                None\n            };\n\n            if value1.is_none() || value2.is_none() {\n                continue;\n            }\n\n            let value1 = unwrap_or_internal_err!(value1);\n            let value2 = unwrap_or_internal_err!(value2);\n\n            let new_count = self.count - 1.0;\n            let delta1 = self.mean1 - value1;\n            let new_mean1 = delta1 / new_count + self.mean1;\n            let delta2 = self.mean2 - value2;\n            let new_mean2 = delta2 / new_count + self.mean2;\n            let new_c = self.algo_const - delta1 * (new_mean2 - value2);\n\n            self.count -= 1.0;\n            self.mean1 = new_mean1;\n            self.mean2 = new_mean2;\n            self.algo_const = new_c;\n        }\n\n        Ok(())\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> Result<()> {\n        let counts = downcast_value!(states[0], Float64Array);\n        let means1 = downcast_value!(states[1], Float64Array);\n        let means2 = downcast_value!(states[2], Float64Array);\n        let cs = downcast_value!(states[3], Float64Array);\n\n        for i in 0..counts.len() {\n            let c = counts.value(i);\n            if c == 0.0 {\n                continue;\n            }\n            let new_count = self.count + c;\n            let new_mean1 = self.mean1 * self.count / new_count + means1.value(i) * c / new_count;\n            let new_mean2 = self.mean2 * self.count / new_count + means2.value(i) * c / new_count;\n            let delta1 = self.mean1 - means1.value(i);\n            let delta2 = self.mean2 - means2.value(i);\n            let new_c =\n                self.algo_const + cs.value(i) + delta1 * delta2 * self.count * c / new_count;\n\n            self.count = new_count;\n            self.mean1 = new_mean1;\n            self.mean2 = new_mean2;\n            self.algo_const = new_c;\n        }\n        Ok(())\n    }\n\n    fn evaluate(&mut self) -> Result<ScalarValue> {\n        if self.count == 0.0 {\n            return Ok(ScalarValue::Float64(None));\n        }\n\n        let count = match self.stats_type {\n            StatsType::Population => self.count,\n            StatsType::Sample if self.count > 1.0 => self.count - 1.0,\n            StatsType::Sample => {\n                // self.count == 1.0\n                return if self.null_on_divide_by_zero {\n                    Ok(ScalarValue::Float64(None))\n                } else {\n                    Ok(ScalarValue::Float64(Some(f64::NAN)))\n                };\n            }\n        };\n\n        Ok(ScalarValue::Float64(Some(self.algo_const / count)))\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n}\n"
  },
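The `merge_batch` above combines per-partition states with the textbook pairwise co-moment formula (Chan et al.): `C = C_a + C_b + d1 * d2 * n_a * n_b / (n_a + n_b)`, where `d1` and `d2` are the differences between the two partitions' means. A self-contained sketch (plain structs, not the Comet accumulator) showing that merging two partial states reproduces the single-pass co-moment:

```rust
// Minimal sketch of the update/merge recurrences used by
// CovarianceAccumulator; not Comet API.
#[derive(Clone, Copy, Default)]
struct CovarState {
    count: f64,
    mean1: f64,
    mean2: f64,
    c: f64, // running co-moment ("algo_const")
}

impl CovarState {
    fn update(&mut self, x: f64, y: f64) {
        // Same incremental update as update_batch: d1 uses the old mean1,
        // the co-moment uses the freshly updated mean2.
        let new_count = self.count + 1.0;
        let d1 = x - self.mean1;
        self.mean1 += d1 / new_count;
        self.mean2 += (y - self.mean2) / new_count;
        self.c += d1 * (y - self.mean2);
        self.count = new_count;
    }

    fn merge(self, other: CovarState) -> CovarState {
        // The real merge_batch skips zero-count states, which also avoids
        // a 0/0 here when both sides are empty.
        let n = self.count + other.count;
        let d1 = self.mean1 - other.mean1;
        let d2 = self.mean2 - other.mean2;
        CovarState {
            count: n,
            mean1: (self.mean1 * self.count + other.mean1 * other.count) / n,
            mean2: (self.mean2 * self.count + other.mean2 * other.count) / n,
            c: self.c + other.c + d1 * d2 * self.count * other.count / n,
        }
    }
}

fn main() {
    let data = [(1.0, 2.0), (2.0, 1.0), (3.0, 5.0), (4.0, 4.0)];
    let mut whole = CovarState::default();
    for &(x, y) in &data {
        whole.update(x, y);
    }
    let (mut a, mut b) = (CovarState::default(), CovarState::default());
    for &(x, y) in &data[..2] {
        a.update(x, y);
    }
    for &(x, y) in &data[2..] {
        b.update(x, y);
    }
    let merged = a.merge(b);
    // Merging partial states reproduces the single-pass co-moment.
    assert!((merged.c - whole.c).abs() < 1e-12);
    // COVAR_POP = C / n (COVAR_SAMP would divide by n - 1).
    assert!((merged.c / merged.count - 1.25).abs() < 1e-12);
}
```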
  {
    "path": "native/spark-expr/src/agg_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod avg;\nmod avg_decimal;\nmod correlation;\nmod covariance;\nmod stddev;\nmod sum_decimal;\nmod sum_int;\nmod variance;\n\npub use avg::Avg;\npub use avg_decimal::AvgDecimal;\npub use correlation::Correlation;\npub use covariance::Covariance;\npub use stddev::Stddev;\npub use sum_decimal::SumDecimal;\npub use sum_int::SumInteger;\npub use variance::Variance;\n"
  },
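For orientation, each re-export here is an `AggregateUDFImpl` that gets wrapped in an `AggregateUDF` before planning, the same pattern the `sum_decimal` tests use via `AggregateUDF::new_from_impl`. A hedged sketch of that wiring, assuming it lives inside this crate so `crate::agg_funcs::Covariance` resolves (the `covar_samp_udaf` helper is illustrative, not an existing function):

```rust
// Sketch: constructing covar_samp as a DataFusion AggregateUDF. Constructor
// arguments follow Covariance::new as defined in covariance.rs.
use std::sync::Arc;

use arrow::datatypes::DataType;
use datafusion::logical_expr::AggregateUDF;
use datafusion::physical_expr::expressions::StatsType;

use crate::agg_funcs::Covariance;

fn covar_samp_udaf() -> Arc<AggregateUDF> {
    // null_on_divide_by_zero = true mirrors Spark's default
    // (spark.sql.legacy.statisticalAggregate=false) behavior.
    Arc::new(AggregateUDF::new_from_impl(Covariance::new(
        "covar_samp",
        DataType::Float64,
        StatsType::Sample,
        true,
    )))
}
```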
  {
    "path": "native/spark-expr/src/agg_funcs/stddev.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{any::Any, sync::Arc};\n\nuse crate::agg_funcs::variance::VarianceAccumulator;\nuse arrow::datatypes::FieldRef;\nuse arrow::{\n    array::ArrayRef,\n    datatypes::{DataType, Field},\n};\nuse datafusion::common::types::NativeType;\nuse datafusion::common::{internal_err, Result, ScalarValue};\nuse datafusion::logical_expr::function::{AccumulatorArgs, StateFieldsArgs};\nuse datafusion::logical_expr::{Accumulator, AggregateUDFImpl, Coercion, Signature, Volatility};\nuse datafusion::logical_expr_common::signature;\nuse datafusion::physical_expr::expressions::format_state_name;\nuse datafusion::physical_expr::expressions::StatsType;\n\n/// STDDEV and STDDEV_SAMP (standard deviation) aggregate expression\n/// The implementation mostly is the same as the DataFusion's implementation. The reason\n/// we have our own implementation is that DataFusion has UInt64 for state_field `count`,\n/// while Spark has Double for count. Also we have added `null_on_divide_by_zero`\n/// to be consistent with Spark's implementation.\n#[derive(Debug, PartialEq, Eq)]\npub struct Stddev {\n    name: String,\n    signature: Signature,\n    stats_type: StatsType,\n    null_on_divide_by_zero: bool,\n}\n\nimpl std::hash::Hash for Stddev {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.name.hash(state);\n        self.signature.hash(state);\n        (self.stats_type as u8).hash(state);\n        self.null_on_divide_by_zero.hash(state);\n    }\n}\n\nimpl Stddev {\n    /// Create a new STDDEV aggregate function\n    pub fn new(\n        name: impl Into<String>,\n        data_type: DataType,\n        stats_type: StatsType,\n        null_on_divide_by_zero: bool,\n    ) -> Self {\n        // the result of stddev just support FLOAT64.\n        assert!(matches!(data_type, DataType::Float64));\n        Self {\n            name: name.into(),\n            signature: Signature::coercible(\n                vec![Coercion::new_exact(signature::TypeSignatureClass::Native(\n                    Arc::new(NativeType::Float64),\n                ))],\n                Volatility::Immutable,\n            ),\n            stats_type,\n            null_on_divide_by_zero,\n        }\n    }\n}\n\nimpl AggregateUDFImpl for Stddev {\n    /// Return a reference to Any that can be used for downcasting\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        &self.name\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Float64)\n    }\n\n    fn accumulator(&self, _acc_args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {\n        
Ok(Box::new(StddevAccumulator::try_new(\n            self.stats_type,\n            self.null_on_divide_by_zero,\n        )?))\n    }\n\n    fn create_sliding_accumulator(\n        &self,\n        _acc_args: AccumulatorArgs,\n    ) -> Result<Box<dyn Accumulator>> {\n        Ok(Box::new(StddevAccumulator::try_new(\n            self.stats_type,\n            self.null_on_divide_by_zero,\n        )?))\n    }\n\n    fn state_fields(&self, _args: StateFieldsArgs) -> Result<Vec<FieldRef>> {\n        Ok(vec![\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"count\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"mean\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"m2\"),\n                DataType::Float64,\n                true,\n            )),\n        ])\n    }\n\n    fn default_value(&self, _data_type: &DataType) -> Result<ScalarValue> {\n        Ok(ScalarValue::Float64(None))\n    }\n}\n\n/// An accumulator to compute the standard deviation\n#[derive(Debug)]\npub struct StddevAccumulator {\n    variance: VarianceAccumulator,\n}\n\nimpl StddevAccumulator {\n    /// Creates a new `StddevAccumulator`\n    pub fn try_new(s_type: StatsType, null_on_divide_by_zero: bool) -> Result<Self> {\n        Ok(Self {\n            variance: VarianceAccumulator::try_new(s_type, null_on_divide_by_zero)?,\n        })\n    }\n\n    pub fn get_m2(&self) -> f64 {\n        self.variance.get_m2()\n    }\n}\n\nimpl Accumulator for StddevAccumulator {\n    fn state(&mut self) -> Result<Vec<ScalarValue>> {\n        Ok(vec![\n            ScalarValue::from(self.variance.get_count()),\n            ScalarValue::from(self.variance.get_mean()),\n            ScalarValue::from(self.variance.get_m2()),\n        ])\n    }\n\n    fn update_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        self.variance.update_batch(values)\n    }\n\n    fn retract_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        self.variance.retract_batch(values)\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> Result<()> {\n        self.variance.merge_batch(states)\n    }\n\n    fn evaluate(&mut self) -> Result<ScalarValue> {\n        let variance = self.variance.evaluate()?;\n        match variance {\n            ScalarValue::Float64(Some(e)) => Ok(ScalarValue::Float64(Some(e.sqrt()))),\n            ScalarValue::Float64(None) => Ok(ScalarValue::Float64(None)),\n            _ => internal_err!(\"Variance should be f64\"),\n        }\n    }\n\n    fn size(&self) -> usize {\n        // Account for the wrapper's own bytes plus the inner accumulator's\n        // reported size (size_of_val, not align_of_val, as in correlation.rs).\n        std::mem::size_of_val(self) - std::mem::size_of_val(&self.variance) + self.variance.size()\n    }\n}\n"
  },
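`StddevAccumulator` is a thin wrapper that square-roots whatever `VarianceAccumulator` produces, and the `(count, mean, m2)` state above is the classic Welford triple. A standalone sketch (plain function, not the Comet types) of the same recurrence and final division, handling the single-row sample case the way the covariance `evaluate` in this module does:

```rust
// Illustration only: Welford's recurrence for (count, mean, m2), then
// stddev = sqrt(m2 / n) for population or sqrt(m2 / (n - 1)) for sample.
fn stddev(values: &[f64], sample: bool, null_on_divide_by_zero: bool) -> Option<f64> {
    let (mut count, mut mean, mut m2) = (0.0_f64, 0.0_f64, 0.0_f64);
    for &v in values {
        // m2 accumulates the sum of squared deviations incrementally.
        count += 1.0;
        let delta = v - mean;
        mean += delta / count;
        m2 += delta * (v - mean);
    }
    if count == 0.0 {
        return None; // no rows => NULL
    }
    let denom = if sample {
        if count == 1.0 {
            // Sample variance of one row divides by zero; Spark returns
            // NULL when null_on_divide_by_zero is set, NaN otherwise.
            return if null_on_divide_by_zero { None } else { Some(f64::NAN) };
        }
        count - 1.0
    } else {
        count
    };
    Some((m2 / denom).sqrt())
}

fn main() {
    let v = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0];
    // Population stddev of this classic example is exactly 2.0.
    assert!((stddev(&v, false, true).unwrap() - 2.0).abs() < 1e-12);
    assert_eq!(stddev(&[1.0], true, true), None);
}
```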
  {
    "path": "native/spark-expr/src/agg_funcs/sum_decimal.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::utils::is_valid_decimal_precision;\nuse crate::{decimal_sum_overflow_error, EvalMode, SparkErrorWithContext};\nuse arrow::array::{\n    cast::AsArray, types::Decimal128Type, Array, ArrayRef, BooleanArray, Decimal128Array,\n};\nuse arrow::datatypes::{DataType, Field, FieldRef};\nuse datafusion::common::{DataFusionError, Result as DFResult, ScalarValue};\nuse datafusion::logical_expr::function::{AccumulatorArgs, StateFieldsArgs};\nuse datafusion::logical_expr::Volatility::Immutable;\nuse datafusion::logical_expr::{\n    Accumulator, AggregateUDFImpl, EmitTo, GroupsAccumulator, ReversedUDAF, Signature,\n};\nuse std::{any::Any, sync::Arc};\n\n#[derive(Debug)]\npub struct SumDecimal {\n    /// Aggregate function signature\n    signature: Signature,\n    /// The data type of the SUM result. This will always be a decimal type\n    /// with the same precision and scale as specified in this struct\n    result_type: DataType,\n    /// Decimal precision\n    precision: u8,\n    /// Decimal scale\n    scale: i8,\n    eval_mode: EvalMode,\n    /// Optional expression ID for query context lookup during error creation\n    expr_id: Option<u64>,\n    /// Session-scoped query context registry for error reporting\n    registry: Arc<crate::QueryContextMap>,\n}\n\n// Manually implement PartialEq, Eq, and Hash excluding the registry field\n// since registry is only for error reporting and doesn't affect function behavior\nimpl PartialEq for SumDecimal {\n    fn eq(&self, other: &Self) -> bool {\n        self.precision == other.precision\n            && self.scale == other.scale\n            && self.eval_mode == other.eval_mode\n            && self.expr_id == other.expr_id\n            && self.result_type == other.result_type\n    }\n}\n\nimpl Eq for SumDecimal {}\n\nimpl std::hash::Hash for SumDecimal {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.precision.hash(state);\n        self.scale.hash(state);\n        self.eval_mode.hash(state);\n        self.expr_id.hash(state);\n        self.result_type.hash(state);\n    }\n}\n\nimpl SumDecimal {\n    pub fn try_new(\n        data_type: DataType,\n        eval_mode: EvalMode,\n        expr_id: Option<u64>,\n        registry: Arc<crate::QueryContextMap>,\n    ) -> DFResult<Self> {\n        let (precision, scale) = match data_type {\n            DataType::Decimal128(p, s) => (p, s),\n            _ => {\n                return Err(DataFusionError::Internal(\n                    \"Invalid data type for SumDecimal\".into(),\n                ))\n            }\n        };\n        Ok(Self {\n            signature: Signature::user_defined(Immutable),\n            result_type: data_type,\n            precision,\n 
           scale,\n            eval_mode,\n            expr_id,\n            registry,\n        })\n    }\n}\n\nimpl AggregateUDFImpl for SumDecimal {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn accumulator(&self, _args: AccumulatorArgs) -> DFResult<Box<dyn Accumulator>> {\n        Ok(Box::new(SumDecimalAccumulator::new(\n            self.precision,\n            self.scale,\n            self.eval_mode,\n            self.expr_id,\n            Arc::clone(&self.registry),\n        )))\n    }\n\n    fn state_fields(&self, _args: StateFieldsArgs) -> DFResult<Vec<FieldRef>> {\n        // For decimal sum, we always track is_empty regardless of eval_mode\n        // This matches Spark's behavior where DecimalType always uses shouldTrackIsEmpty = true\n        let data_type = self.result_type.clone();\n        Ok(vec![\n            Arc::new(Field::new(\"sum\", data_type, true)),\n            Arc::new(Field::new(\"is_empty\", DataType::Boolean, false)),\n        ])\n    }\n\n    fn name(&self) -> &str {\n        \"sum\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> DFResult<DataType> {\n        Ok(self.result_type.clone())\n    }\n\n    fn groups_accumulator_supported(&self, _args: AccumulatorArgs) -> bool {\n        true\n    }\n\n    fn create_groups_accumulator(\n        &self,\n        _args: AccumulatorArgs,\n    ) -> DFResult<Box<dyn GroupsAccumulator>> {\n        Ok(Box::new(SumDecimalGroupsAccumulator::new(\n            self.result_type.clone(),\n            self.precision,\n            self.eval_mode,\n            self.expr_id,\n            Arc::clone(&self.registry),\n        )))\n    }\n\n    fn default_value(&self, _data_type: &DataType) -> DFResult<ScalarValue> {\n        ScalarValue::new_primitive::<Decimal128Type>(\n            None,\n            &DataType::Decimal128(self.precision, self.scale),\n        )\n    }\n\n    fn reverse_expr(&self) -> ReversedUDAF {\n        ReversedUDAF::Identical\n    }\n\n    fn is_nullable(&self) -> bool {\n        // In Spark, Sum.nullable and Average.nullable both return true irrespective of ANSI mode.\n        // SumDecimal is always nullable because overflows can cause null values.\n        true\n    }\n}\n\n#[derive(Debug)]\nstruct SumDecimalAccumulator {\n    sum: Option<i128>,\n    is_empty: bool,\n    precision: u8,\n    scale: i8,\n    eval_mode: EvalMode,\n    expr_id: Option<u64>,\n    registry: Arc<crate::QueryContextMap>,\n}\n\nimpl SumDecimalAccumulator {\n    fn new(\n        precision: u8,\n        scale: i8,\n        eval_mode: EvalMode,\n        expr_id: Option<u64>,\n        registry: Arc<crate::QueryContextMap>,\n    ) -> Self {\n        // For decimal sum, always track is_empty regardless of eval_mode\n        // This matches Spark's behavior where DecimalType always uses shouldTrackIsEmpty = true\n        Self {\n            sum: Some(0),\n            is_empty: true,\n            precision,\n            scale,\n            eval_mode,\n            expr_id,\n            registry,\n        }\n    }\n\n    fn update_single(&mut self, values: &Decimal128Array, idx: usize) -> DFResult<()> {\n        // If already overflowed (sum is None but not empty), stay in overflow state\n        if !self.is_empty && self.sum.is_none() {\n            return Ok(());\n        }\n\n        let v = unsafe { values.value_unchecked(idx) };\n        let running_sum = self.sum.unwrap_or(0);\n        let (new_sum, is_overflow) = 
running_sum.overflowing_add(v);\n\n        if is_overflow || !is_valid_decimal_precision(new_sum, self.precision) {\n            if self.eval_mode == EvalMode::Ansi {\n                let error = decimal_sum_overflow_error();\n                return Err(self.wrap_error_with_context(error));\n            }\n            self.sum = None;\n            self.is_empty = false;\n            return Ok(());\n        }\n\n        self.sum = Some(new_sum);\n        self.is_empty = false;\n        Ok(())\n    }\n\n    /// Wrap a SparkError with QueryContext if expr_id is available\n    fn wrap_error_with_context(&self, error: crate::SparkError) -> DataFusionError {\n        if let Some(expr_id) = self.expr_id {\n            if let Some(query_ctx) = self.registry.get(expr_id) {\n                let wrapped = SparkErrorWithContext::with_context(error, query_ctx);\n                return DataFusionError::External(Box::new(wrapped));\n            }\n        }\n        DataFusionError::from(error)\n    }\n}\n\nimpl Accumulator for SumDecimalAccumulator {\n    fn update_batch(&mut self, values: &[ArrayRef]) -> DFResult<()> {\n        assert_eq!(\n            values.len(),\n            1,\n            \"Expect only one element in 'values' but found {}\",\n            values.len()\n        );\n\n        // For decimal sum, always check for overflow regardless of eval_mode (per Spark's expectation)\n        if !self.is_empty && self.sum.is_none() {\n            return Ok(());\n        }\n\n        let values = &values[0];\n        let data = values.as_primitive::<Decimal128Type>();\n\n        // Update is_empty: it remains true only if it was true AND all values are null\n        self.is_empty = self.is_empty && values.len() == values.null_count();\n\n        if self.is_empty {\n            return Ok(());\n        }\n\n        for i in 0..data.len() {\n            if data.is_null(i) {\n                continue;\n            }\n            self.update_single(data, i)?;\n        }\n        Ok(())\n    }\n\n    fn evaluate(&mut self) -> DFResult<ScalarValue> {\n        if self.is_empty {\n            ScalarValue::new_primitive::<Decimal128Type>(\n                None,\n                &DataType::Decimal128(self.precision, self.scale),\n            )\n        } else {\n            match self.sum {\n                Some(sum_value) if is_valid_decimal_precision(sum_value, self.precision) => {\n                    ScalarValue::try_new_decimal128(sum_value, self.precision, self.scale)\n                }\n                _ => ScalarValue::new_primitive::<Decimal128Type>(\n                    None,\n                    &DataType::Decimal128(self.precision, self.scale),\n                ),\n            }\n        }\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n\n    fn state(&mut self) -> DFResult<Vec<ScalarValue>> {\n        let sum = match self.sum {\n            Some(sum_value) => {\n                ScalarValue::try_new_decimal128(sum_value, self.precision, self.scale)?\n            }\n            None => ScalarValue::new_primitive::<Decimal128Type>(\n                None,\n                &DataType::Decimal128(self.precision, self.scale),\n            )?,\n        };\n\n        // For decimal sum, always return 2 state values regardless of eval_mode\n        Ok(vec![sum, ScalarValue::from(self.is_empty)])\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> DFResult<()> {\n        // For decimal sum, always expect 2 state arrays regardless of eval_mode\n        
assert_eq!(\n            states.len(),\n            2,\n            \"Expect two elements in 'states' but found {}\",\n            states.len()\n        );\n        assert_eq!(states[0].len(), 1);\n        assert_eq!(states[1].len(), 1);\n\n        let that_sum_array = states[0].as_primitive::<Decimal128Type>();\n        let that_sum = if that_sum_array.is_null(0) {\n            None\n        } else {\n            Some(that_sum_array.value(0))\n        };\n\n        let that_is_empty = states[1].as_boolean().value(0);\n        let that_overflowed = !that_is_empty && that_sum.is_none();\n        let this_overflowed = !self.is_empty && self.sum.is_none();\n\n        if that_overflowed || this_overflowed {\n            self.sum = None;\n            self.is_empty = false;\n            return Ok(());\n        }\n\n        if that_is_empty {\n            return Ok(());\n        }\n\n        if self.is_empty {\n            self.sum = that_sum;\n            self.is_empty = false;\n            return Ok(());\n        }\n\n        let left = self.sum.unwrap();\n        let right = that_sum.unwrap();\n        let (new_sum, is_overflow) = left.overflowing_add(right);\n\n        if is_overflow || !is_valid_decimal_precision(new_sum, self.precision) {\n            if self.eval_mode == EvalMode::Ansi {\n                let error = decimal_sum_overflow_error();\n                return Err(self.wrap_error_with_context(error));\n            } else {\n                self.sum = None;\n                self.is_empty = false;\n            }\n        } else {\n            self.sum = Some(new_sum);\n        }\n\n        Ok(())\n    }\n}\n\nstruct SumDecimalGroupsAccumulator {\n    sum: Vec<Option<i128>>,\n    is_empty: Vec<bool>,\n    result_type: DataType,\n    precision: u8,\n    eval_mode: EvalMode,\n    expr_id: Option<u64>,\n    registry: Arc<crate::QueryContextMap>,\n}\n\nimpl SumDecimalGroupsAccumulator {\n    fn new(\n        result_type: DataType,\n        precision: u8,\n        eval_mode: EvalMode,\n        expr_id: Option<u64>,\n        registry: Arc<crate::QueryContextMap>,\n    ) -> Self {\n        Self {\n            sum: Vec::new(),\n            is_empty: Vec::new(),\n            result_type,\n            precision,\n            eval_mode,\n            expr_id,\n            registry,\n        }\n    }\n\n    fn resize_helper(&mut self, total_num_groups: usize) {\n        // For decimal sum, always initialize properly regardless of eval_mode\n        self.sum.resize(total_num_groups, Some(0));\n        self.is_empty.resize(total_num_groups, true);\n    }\n\n    /// Wrap a SparkError with QueryContext if expr_id is available\n    fn wrap_error_with_context(&self, error: crate::SparkError) -> DataFusionError {\n        if let Some(expr_id) = self.expr_id {\n            if let Some(query_ctx) = self.registry.get(expr_id) {\n                let wrapped = SparkErrorWithContext::with_context(error, query_ctx);\n                return DataFusionError::External(Box::new(wrapped));\n            }\n        }\n        DataFusionError::from(error)\n    }\n\n    #[inline]\n    fn update_single(&mut self, group_index: usize, value: i128) -> DFResult<()> {\n        // For decimal sum, always check for overflow regardless of eval_mode\n        if !self.is_empty[group_index] && self.sum[group_index].is_none() {\n            return Ok(());\n        }\n\n        let running_sum = self.sum[group_index].unwrap_or(0);\n        let (new_sum, is_overflow) = running_sum.overflowing_add(value);\n\n        if is_overflow || 
!is_valid_decimal_precision(new_sum, self.precision) {\n            if self.eval_mode == EvalMode::Ansi {\n                let error = decimal_sum_overflow_error();\n                return Err(self.wrap_error_with_context(error));\n            }\n            self.sum[group_index] = None;\n        } else {\n            self.sum[group_index] = Some(new_sum);\n        }\n        self.is_empty[group_index] = false;\n        Ok(())\n    }\n}\n\nimpl GroupsAccumulator for SumDecimalGroupsAccumulator {\n    fn update_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        opt_filter: Option<&BooleanArray>,\n        total_num_groups: usize,\n    ) -> DFResult<()> {\n        assert_eq!(values.len(), 1);\n        let values = values[0].as_primitive::<Decimal128Type>();\n        let data = values.values();\n\n        self.resize_helper(total_num_groups);\n\n        let iter = group_indices.iter().zip(data.iter());\n        if opt_filter.is_none() && values.null_count() == 0 {\n            for (&group_index, &value) in iter {\n                self.update_single(group_index, value)?;\n            }\n        } else {\n            for (idx, (&group_index, &value)) in iter.enumerate() {\n                if let Some(f) = opt_filter {\n                    if !f.is_valid(idx) || !f.value(idx) {\n                        continue;\n                    }\n                }\n                if values.is_null(idx) {\n                    continue;\n                }\n                self.update_single(group_index, value)?;\n            }\n        }\n\n        Ok(())\n    }\n\n    fn evaluate(&mut self, emit_to: EmitTo) -> DFResult<ArrayRef> {\n        match emit_to {\n            EmitTo::All => {\n                let result =\n                    Decimal128Array::from_iter(self.sum.iter().zip(self.is_empty.iter()).map(\n                        |(&sum, &empty)| {\n                            if empty {\n                                None\n                            } else {\n                                match sum {\n                                    Some(v) if is_valid_decimal_precision(v, self.precision) => {\n                                        Some(v)\n                                    }\n                                    _ => None,\n                                }\n                            }\n                        },\n                    ))\n                    .with_data_type(self.result_type.clone());\n\n                self.sum.clear();\n                self.is_empty.clear();\n                Ok(Arc::new(result))\n            }\n            EmitTo::First(n) => {\n                let result = Decimal128Array::from_iter(\n                    self.sum\n                        .drain(..n)\n                        .zip(self.is_empty.drain(..n))\n                        .map(|(sum, empty)| {\n                            if empty {\n                                None\n                            } else {\n                                match sum {\n                                    Some(v) if is_valid_decimal_precision(v, self.precision) => {\n                                        Some(v)\n                                    }\n                                    _ => None,\n                                }\n                            }\n                        }),\n                )\n                .with_data_type(self.result_type.clone());\n\n                Ok(Arc::new(result))\n            }\n        }\n    }\n\n    fn state(&mut self, 
emit_to: EmitTo) -> DFResult<Vec<ArrayRef>> {\n        let sums = emit_to.take_needed(&mut self.sum);\n\n        let sum_array = Decimal128Array::from_iter(sums.iter().copied())\n            .with_data_type(self.result_type.clone());\n\n        // For decimal sum, always return 2 state arrays regardless of eval_mode\n        let is_empty = emit_to.take_needed(&mut self.is_empty);\n        Ok(vec![\n            Arc::new(sum_array),\n            Arc::new(BooleanArray::from(is_empty)),\n        ])\n    }\n\n    fn merge_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        opt_filter: Option<&BooleanArray>,\n        total_num_groups: usize,\n    ) -> DFResult<()> {\n        debug_assert!(\n            opt_filter.is_none(),\n            \"opt_filter is not supported in merge_batch\"\n        );\n\n        self.resize_helper(total_num_groups);\n\n        // For decimal sum, always expect 2 arrays regardless of eval_mode\n        assert_eq!(\n            values.len(),\n            2,\n            \"Expected two arrays: 'sum' and 'is_empty', but found {}\",\n            values.len()\n        );\n\n        let that_sum = values[0].as_primitive::<Decimal128Type>();\n        let that_is_empty = values[1].as_boolean();\n\n        for (idx, &group_index) in group_indices.iter().enumerate() {\n            let that_sum_val = if that_sum.is_null(idx) {\n                None\n            } else {\n                Some(that_sum.value(idx))\n            };\n\n            let that_is_empty_val = that_is_empty.value(idx);\n            let that_overflowed = !that_is_empty_val && that_sum_val.is_none();\n            let this_overflowed = !self.is_empty[group_index] && self.sum[group_index].is_none();\n\n            if that_overflowed || this_overflowed {\n                self.sum[group_index] = None;\n                self.is_empty[group_index] = false;\n                continue;\n            }\n\n            if that_is_empty_val {\n                continue;\n            }\n\n            if self.is_empty[group_index] {\n                self.sum[group_index] = that_sum_val;\n                self.is_empty[group_index] = false;\n                continue;\n            }\n\n            let left = self.sum[group_index].unwrap();\n            let right = that_sum_val.unwrap();\n            let (new_sum, is_overflow) = left.overflowing_add(right);\n\n            if is_overflow || !is_valid_decimal_precision(new_sum, self.precision) {\n                if self.eval_mode == EvalMode::Ansi {\n                    let error = decimal_sum_overflow_error();\n                    return Err(self.wrap_error_with_context(error));\n                } else {\n                    self.sum[group_index] = None;\n                    self.is_empty[group_index] = false;\n                }\n            } else {\n                self.sum[group_index] = Some(new_sum);\n            }\n        }\n\n        Ok(())\n    }\n\n    fn size(&self) -> usize {\n        self.sum.capacity() * std::mem::size_of::<Option<i128>>()\n            + self.is_empty.capacity() * std::mem::size_of::<bool>()\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::builder::{Decimal128Builder, StringBuilder};\n    use arrow::array::RecordBatch;\n    use arrow::datatypes::*;\n    use datafusion::common::Result;\n    use datafusion::datasource::memory::MemorySourceConfig;\n    use datafusion::datasource::source::DataSourceExec;\n    use datafusion::execution::TaskContext;\n    use 
datafusion::logical_expr::AggregateUDF;\n    use datafusion::physical_expr::aggregate::AggregateExprBuilder;\n    use datafusion::physical_expr::expressions::Column;\n    use datafusion::physical_expr::PhysicalExpr;\n    use datafusion::physical_plan::aggregates::{AggregateExec, AggregateMode, PhysicalGroupBy};\n    use datafusion::physical_plan::ExecutionPlan;\n    use futures::StreamExt;\n\n    #[test]\n    fn invalid_data_type() {\n        assert!(SumDecimal::try_new(\n            DataType::Int32,\n            EvalMode::Legacy,\n            None,\n            crate::create_query_context_map(),\n        )\n        .is_err());\n    }\n\n    #[tokio::test]\n    async fn sum_no_overflow() -> Result<()> {\n        let num_rows = 8192;\n        let batch = create_record_batch(num_rows);\n        let mut batches = Vec::new();\n        for _ in 0..10 {\n            batches.push(batch.clone());\n        }\n        let partitions = &[batches];\n        let c0: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"c0\", 0));\n        let c1: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"c1\", 1));\n\n        let data_type = DataType::Decimal128(8, 2);\n        let schema = Arc::clone(&partitions[0][0].schema());\n        let scan: Arc<dyn ExecutionPlan> = Arc::new(DataSourceExec::new(Arc::new(\n            MemorySourceConfig::try_new(partitions, Arc::clone(&schema), None).unwrap(),\n        )));\n\n        let aggregate_udf = Arc::new(AggregateUDF::new_from_impl(SumDecimal::try_new(\n            data_type.clone(),\n            EvalMode::Legacy,\n            None,\n            crate::create_query_context_map(),\n        )?));\n\n        let aggr_expr = AggregateExprBuilder::new(aggregate_udf, vec![c1])\n            .schema(Arc::clone(&schema))\n            .alias(\"sum\")\n            .with_ignore_nulls(false)\n            .with_distinct(false)\n            .build()?;\n\n        let aggregate = Arc::new(AggregateExec::try_new(\n            AggregateMode::Partial,\n            PhysicalGroupBy::new_single(vec![(c0, \"c0\".to_string())]),\n            vec![aggr_expr.into()],\n            vec![None], // no filter expressions\n            scan,\n            Arc::clone(&schema),\n        )?);\n\n        let mut stream = aggregate\n            .execute(0, Arc::new(TaskContext::default()))\n            .unwrap();\n        while let Some(batch) = stream.next().await {\n            let _batch = batch?;\n        }\n\n        Ok(())\n    }\n\n    fn create_record_batch(num_rows: usize) -> RecordBatch {\n        let mut decimal_builder = Decimal128Builder::with_capacity(num_rows);\n        let mut string_builder = StringBuilder::with_capacity(num_rows, num_rows * 32);\n        for i in 0..num_rows {\n            decimal_builder.append_value(i as i128);\n            string_builder.append_value(format!(\"this is string #{}\", i % 1024));\n        }\n        let decimal_array = Arc::new(decimal_builder.finish());\n        let string_array = Arc::new(string_builder.finish());\n\n        let mut fields = vec![];\n        let mut columns: Vec<ArrayRef> = vec![];\n\n        // string column\n        fields.push(Field::new(\"c0\", DataType::Utf8, false));\n        columns.push(string_array);\n\n        // decimal column\n        fields.push(Field::new(\"c1\", DataType::Decimal128(38, 10), false));\n        columns.push(decimal_array);\n\n        let schema = Schema::new(fields);\n        RecordBatch::try_new(Arc::new(schema), columns).unwrap()\n    }\n\n    #[test]\n    fn test_update_batch_with_filter() {\n        use 
arrow::array::Decimal128Array;\n        use datafusion::logical_expr::{EmitTo, GroupsAccumulator};\n\n        let data_type = DataType::Decimal128(10, 2);\n        let mut acc = SumDecimalGroupsAccumulator::new(\n            data_type.clone(),\n            10,\n            EvalMode::Legacy,\n            None,\n            crate::create_query_context_map(),\n        );\n\n        // values: [100, 200, 300, 400], filter: [T, F, T, F] => sum = 100+300 = 400\n        let values: ArrayRef = Arc::new(\n            Decimal128Array::from(vec![100i128, 200, 300, 400]).with_data_type(data_type.clone()),\n        );\n        let filter = BooleanArray::from(vec![true, false, true, false]);\n        acc.update_batch(&[values], &[0, 0, 0, 0], Some(&filter), 1)\n            .unwrap();\n\n        let result = acc.evaluate(EmitTo::All).unwrap();\n        let result = result.as_any().downcast_ref::<Decimal128Array>().unwrap();\n        assert_eq!(result.value(0), 400);\n    }\n\n    #[test]\n    fn test_update_batch_filter_null_treated_as_exclude() {\n        use arrow::array::Decimal128Array;\n        use datafusion::logical_expr::{EmitTo, GroupsAccumulator};\n\n        let data_type = DataType::Decimal128(10, 2);\n        let mut acc = SumDecimalGroupsAccumulator::new(\n            data_type.clone(),\n            10,\n            EvalMode::Legacy,\n            None,\n            crate::create_query_context_map(),\n        );\n\n        let values: ArrayRef =\n            Arc::new(Decimal128Array::from(vec![10i128, 20, 30]).with_data_type(data_type.clone()));\n        let filter = BooleanArray::from(vec![Some(true), None, Some(true)]);\n        acc.update_batch(&[values], &[0, 0, 0], Some(&filter), 1)\n            .unwrap();\n\n        let result = acc.evaluate(EmitTo::All).unwrap();\n        let result = result.as_any().downcast_ref::<Decimal128Array>().unwrap();\n        assert_eq!(result.value(0), 40); // 10 + 30 = 40\n    }\n}\n"
  },
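The `(sum, is_empty)` pair above encodes three states: `is_empty = true` means no input rows yet (result NULL), `sum = None` with `is_empty = false` is a sticky legacy-mode overflow (result NULL), and anything else is a valid running sum. A standalone sketch of that encoding for legacy mode, assuming `is_valid_decimal_precision` checks that `|sum| <= 10^precision - 1`:

```rust
// Minimal sketch of the legacy-mode decimal sum state machine; not the
// Comet types. `max_for_precision` stands in for the precision check that
// is_valid_decimal_precision performs in the real code.
fn max_for_precision(precision: u32) -> i128 {
    10_i128.pow(precision) - 1
}

fn legacy_sum(values: &[i128], precision: u32) -> (Option<i128>, bool) {
    let (mut sum, mut is_empty) = (Some(0_i128), true);
    for &v in values {
        if !is_empty && sum.is_none() {
            break; // already overflowed; the overflow state is sticky
        }
        let (new_sum, wrapped) = sum.unwrap_or(0).overflowing_add(v);
        sum = if wrapped || new_sum.abs() > max_for_precision(precision) {
            None // overflow under legacy mode becomes NULL, not an error
        } else {
            Some(new_sum)
        };
        is_empty = false;
    }
    (sum, is_empty)
}

fn main() {
    // Decimal with precision 4 can hold at most 9999.
    assert_eq!(legacy_sum(&[5000, 4000], 4), (Some(9000), false));
    assert_eq!(legacy_sum(&[5000, 5000], 4), (None, false)); // overflow -> NULL
    assert_eq!(legacy_sum(&[], 4), (Some(0), true)); // empty input -> NULL result
}
```

In ANSI mode the overflow branch raises `decimal_sum_overflow_error` instead of sticking at `None`, as the accumulators above show.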
  {
    "path": "native/spark-expr/src/agg_funcs/sum_int.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::{arithmetic_overflow_error, EvalMode};\nuse arrow::array::{\n    as_primitive_array, cast::AsArray, Array, ArrayRef, ArrowNativeTypeOp, ArrowPrimitiveType,\n    BooleanArray, Int64Array, PrimitiveArray,\n};\nuse arrow::datatypes::{\n    ArrowNativeType, DataType, Field, FieldRef, Int16Type, Int32Type, Int64Type, Int8Type,\n};\nuse datafusion::common::{DataFusionError, Result as DFResult, ScalarValue};\nuse datafusion::logical_expr::function::{AccumulatorArgs, StateFieldsArgs};\nuse datafusion::logical_expr::Volatility::Immutable;\nuse datafusion::logical_expr::{\n    Accumulator, AggregateUDFImpl, EmitTo, GroupsAccumulator, ReversedUDAF, Signature,\n};\nuse std::{any::Any, sync::Arc};\n\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SumInteger {\n    signature: Signature,\n    eval_mode: EvalMode,\n}\n\nimpl SumInteger {\n    pub fn try_new(data_type: DataType, eval_mode: EvalMode) -> DFResult<Self> {\n        match data_type {\n            DataType::Int8 | DataType::Int16 | DataType::Int32 | DataType::Int64 => Ok(Self {\n                signature: Signature::user_defined(Immutable),\n                eval_mode,\n            }),\n            _ => Err(DataFusionError::Internal(\n                \"Invalid data type for SumInteger\".into(),\n            )),\n        }\n    }\n}\n\nimpl AggregateUDFImpl for SumInteger {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"sum\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> DFResult<DataType> {\n        Ok(DataType::Int64)\n    }\n\n    fn accumulator(&self, _acc_args: AccumulatorArgs) -> DFResult<Box<dyn Accumulator>> {\n        match self.eval_mode {\n            EvalMode::Legacy => Ok(Box::new(SumIntegerAccumulatorLegacy::new())),\n            EvalMode::Ansi => Ok(Box::new(SumIntegerAccumulatorAnsi::new())),\n            EvalMode::Try => Ok(Box::new(SumIntegerAccumulatorTry::new())),\n        }\n    }\n\n    fn state_fields(&self, _args: StateFieldsArgs) -> DFResult<Vec<FieldRef>> {\n        if self.eval_mode == EvalMode::Try {\n            Ok(vec![\n                Arc::new(Field::new(\"sum\", DataType::Int64, true)),\n                Arc::new(Field::new(\"has_all_nulls\", DataType::Boolean, false)),\n            ])\n        } else {\n            Ok(vec![Arc::new(Field::new(\"sum\", DataType::Int64, true))])\n        }\n    }\n\n    fn groups_accumulator_supported(&self, _args: AccumulatorArgs) -> bool {\n        true\n    }\n\n    fn create_groups_accumulator(\n        &self,\n        _args: AccumulatorArgs,\n    ) -> DFResult<Box<dyn GroupsAccumulator>> {\n        
match self.eval_mode {\n            EvalMode::Legacy => Ok(Box::new(SumIntGroupsAccumulatorLegacy::new())),\n            EvalMode::Ansi => Ok(Box::new(SumIntGroupsAccumulatorAnsi::new())),\n            EvalMode::Try => Ok(Box::new(SumIntGroupsAccumulatorTry::new())),\n        }\n    }\n\n    fn reverse_expr(&self) -> ReversedUDAF {\n        ReversedUDAF::Identical\n    }\n}\n\n#[derive(Debug)]\nstruct SumIntegerAccumulatorLegacy {\n    sum: Option<i64>,\n}\n\nimpl SumIntegerAccumulatorLegacy {\n    fn new() -> Self {\n        Self { sum: None }\n    }\n}\n\nimpl Accumulator for SumIntegerAccumulatorLegacy {\n    fn update_batch(&mut self, values: &[ArrayRef]) -> DFResult<()> {\n        fn update_sum<T>(int_array: &PrimitiveArray<T>, mut sum: i64) -> DFResult<i64>\n        where\n            T: ArrowPrimitiveType,\n        {\n            for i in 0..int_array.len() {\n                if !int_array.is_null(i) {\n                    let v = int_array.value(i).to_i64().ok_or_else(|| {\n                        DataFusionError::Internal(format!(\n                            \"Failed to convert value {:?} to i64\",\n                            int_array.value(i)\n                        ))\n                    })?;\n                    sum = v.add_wrapping(sum);\n                }\n            }\n            Ok(sum)\n        }\n\n        let values = &values[0];\n        if values.len() == values.null_count() {\n            return Ok(());\n        }\n\n        let running_sum = self.sum.unwrap_or(0);\n        let sum = match values.data_type() {\n            DataType::Int64 => update_sum(as_primitive_array::<Int64Type>(values), running_sum)?,\n            DataType::Int32 => update_sum(as_primitive_array::<Int32Type>(values), running_sum)?,\n            DataType::Int16 => update_sum(as_primitive_array::<Int16Type>(values), running_sum)?,\n            DataType::Int8 => update_sum(as_primitive_array::<Int8Type>(values), running_sum)?,\n            _ => {\n                return Err(DataFusionError::Internal(format!(\n                    \"unsupported data type: {:?}\",\n                    values.data_type()\n                )));\n            }\n        };\n        self.sum = Some(sum);\n        Ok(())\n    }\n\n    fn evaluate(&mut self) -> DFResult<ScalarValue> {\n        Ok(ScalarValue::Int64(self.sum))\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n\n    fn state(&mut self) -> DFResult<Vec<ScalarValue>> {\n        Ok(vec![ScalarValue::Int64(self.sum)])\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> DFResult<()> {\n        if states.len() != 1 {\n            return Err(DataFusionError::Internal(format!(\n                \"Invalid state while merging batch. 
Expected 1 element but found {}\",\n                states.len()\n            )));\n        }\n\n        let that_sum_array = states[0].as_primitive::<Int64Type>();\n        let that_sum = if that_sum_array.is_null(0) {\n            None\n        } else {\n            Some(that_sum_array.value(0))\n        };\n\n        if that_sum.is_none() {\n            return Ok(());\n        }\n        if self.sum.is_none() {\n            self.sum = that_sum;\n            return Ok(());\n        }\n\n        self.sum = Some(self.sum.unwrap().add_wrapping(that_sum.unwrap()));\n        Ok(())\n    }\n}\n\n#[derive(Debug)]\nstruct SumIntegerAccumulatorAnsi {\n    sum: Option<i64>,\n}\n\nimpl SumIntegerAccumulatorAnsi {\n    fn new() -> Self {\n        Self { sum: None }\n    }\n}\n\nimpl Accumulator for SumIntegerAccumulatorAnsi {\n    fn update_batch(&mut self, values: &[ArrayRef]) -> DFResult<()> {\n        fn update_sum<T>(int_array: &PrimitiveArray<T>, mut sum: i64) -> DFResult<i64>\n        where\n            T: ArrowPrimitiveType,\n        {\n            for i in 0..int_array.len() {\n                if !int_array.is_null(i) {\n                    let v = int_array.value(i).to_i64().ok_or_else(|| {\n                        DataFusionError::Internal(format!(\n                            \"Failed to convert value {:?} to i64\",\n                            int_array.value(i)\n                        ))\n                    })?;\n                    sum = v\n                        .add_checked(sum)\n                        .map_err(|_| DataFusionError::from(arithmetic_overflow_error(\"integer\")))?;\n                }\n            }\n            Ok(sum)\n        }\n\n        let values = &values[0];\n        if values.len() == values.null_count() {\n            return Ok(());\n        }\n\n        let running_sum = self.sum.unwrap_or(0);\n        let sum = match values.data_type() {\n            DataType::Int64 => update_sum(as_primitive_array::<Int64Type>(values), running_sum)?,\n            DataType::Int32 => update_sum(as_primitive_array::<Int32Type>(values), running_sum)?,\n            DataType::Int16 => update_sum(as_primitive_array::<Int16Type>(values), running_sum)?,\n            DataType::Int8 => update_sum(as_primitive_array::<Int8Type>(values), running_sum)?,\n            _ => {\n                return Err(DataFusionError::Internal(format!(\n                    \"unsupported data type: {:?}\",\n                    values.data_type()\n                )));\n            }\n        };\n        self.sum = Some(sum);\n        Ok(())\n    }\n\n    fn evaluate(&mut self) -> DFResult<ScalarValue> {\n        Ok(ScalarValue::Int64(self.sum))\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n\n    fn state(&mut self) -> DFResult<Vec<ScalarValue>> {\n        Ok(vec![ScalarValue::Int64(self.sum)])\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> DFResult<()> {\n        if states.len() != 1 {\n            return Err(DataFusionError::Internal(format!(\n                \"Invalid state while merging batch. 
Expected 1 element but found {}\",\n                states.len()\n            )));\n        }\n\n        let that_sum_array = states[0].as_primitive::<Int64Type>();\n        let that_sum = if that_sum_array.is_null(0) {\n            None\n        } else {\n            Some(that_sum_array.value(0))\n        };\n\n        if that_sum.is_none() {\n            return Ok(());\n        }\n        if self.sum.is_none() {\n            self.sum = that_sum;\n            return Ok(());\n        }\n\n        self.sum = Some(\n            self.sum\n                .unwrap()\n                .add_checked(that_sum.unwrap())\n                .map_err(|_| DataFusionError::from(arithmetic_overflow_error(\"integer\")))?,\n        );\n        Ok(())\n    }\n}\n\n#[derive(Debug)]\nstruct SumIntegerAccumulatorTry {\n    sum: Option<i64>,\n    has_all_nulls: bool,\n}\n\nimpl SumIntegerAccumulatorTry {\n    fn new() -> Self {\n        Self {\n            // Try mode starts with 0: if this were initialized to None, we could not\n            // tell whether None means all inputs were null or an overflow occurred.\n            sum: Some(0),\n            has_all_nulls: true,\n        }\n    }\n\n    fn overflowed(&self) -> bool {\n        !self.has_all_nulls && self.sum.is_none()\n    }\n}\n\nimpl Accumulator for SumIntegerAccumulatorTry {\n    fn update_batch(&mut self, values: &[ArrayRef]) -> DFResult<()> {\n        /// Returns Ok(Some(sum)) on success, Ok(None) on overflow\n        fn update_sum<T>(int_array: &PrimitiveArray<T>, mut sum: i64) -> DFResult<Option<i64>>\n        where\n            T: ArrowPrimitiveType,\n        {\n            for i in 0..int_array.len() {\n                if !int_array.is_null(i) {\n                    let v = int_array.value(i).to_i64().ok_or_else(|| {\n                        DataFusionError::Internal(format!(\n                            \"Failed to convert value {:?} to i64\",\n                            int_array.value(i)\n                        ))\n                    })?;\n                    match v.add_checked(sum) {\n                        Ok(new_sum) => sum = new_sum,\n                        Err(_) => return Ok(None),\n                    }\n                }\n            }\n            Ok(Some(sum))\n        }\n\n        // Skip if we already saw an overflow\n        if self.overflowed() {\n            return Ok(());\n        }\n\n        let values = &values[0];\n        if values.len() == values.null_count() {\n            return Ok(());\n        }\n\n        let running_sum = self.sum.unwrap_or(0);\n        let sum = match values.data_type() {\n            DataType::Int64 => update_sum(as_primitive_array::<Int64Type>(values), running_sum)?,\n            DataType::Int32 => update_sum(as_primitive_array::<Int32Type>(values), running_sum)?,\n            DataType::Int16 => update_sum(as_primitive_array::<Int16Type>(values), running_sum)?,\n            DataType::Int8 => update_sum(as_primitive_array::<Int8Type>(values), running_sum)?,\n            _ => {\n                return Err(DataFusionError::Internal(format!(\n                    \"unsupported data type: {:?}\",\n                    values.data_type()\n                )));\n            }\n        };\n        self.sum = sum;\n        self.has_all_nulls = false;\n        Ok(())\n    }\n\n    fn evaluate(&mut self) -> DFResult<ScalarValue> {\n        if self.has_all_nulls {\n            Ok(ScalarValue::Int64(None))\n        } else {\n            Ok(ScalarValue::Int64(self.sum))\n        }\n    }\n\n    fn size(&self) -> usize {\n        
std::mem::size_of_val(self)\n    }\n\n    fn state(&mut self) -> DFResult<Vec<ScalarValue>> {\n        Ok(vec![\n            ScalarValue::Int64(self.sum),\n            ScalarValue::Boolean(Some(self.has_all_nulls)),\n        ])\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> DFResult<()> {\n        if states.len() != 2 {\n            return Err(DataFusionError::Internal(format!(\n                \"Invalid state while merging batch. Expected 2 elements but found {}\",\n                states.len()\n            )));\n        }\n\n        let that_sum_array = states[0].as_primitive::<Int64Type>();\n        let that_sum = if that_sum_array.is_null(0) {\n            None\n        } else {\n            Some(that_sum_array.value(0))\n        };\n        let that_has_all_nulls = states[1].as_boolean().value(0);\n\n        let that_overflowed = !that_has_all_nulls && that_sum.is_none();\n        if that_overflowed || self.overflowed() {\n            self.sum = None;\n            self.has_all_nulls = false;\n            return Ok(());\n        }\n\n        if that_has_all_nulls {\n            return Ok(());\n        }\n        if self.has_all_nulls {\n            self.sum = that_sum;\n            self.has_all_nulls = false;\n            return Ok(());\n        }\n\n        // Both sides have non-null values\n        match self.sum.unwrap().add_checked(that_sum.unwrap()) {\n            Ok(v) => self.sum = Some(v),\n            Err(_) => {\n                self.sum = None;\n                self.has_all_nulls = false;\n            }\n        }\n        Ok(())\n    }\n}\n\nstruct SumIntGroupsAccumulatorLegacy {\n    sums: Vec<Option<i64>>,\n}\n\nimpl SumIntGroupsAccumulatorLegacy {\n    fn new() -> Self {\n        Self { sums: Vec::new() }\n    }\n}\n\nimpl GroupsAccumulator for SumIntGroupsAccumulatorLegacy {\n    fn update_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        opt_filter: Option<&BooleanArray>,\n        total_num_groups: usize,\n    ) -> DFResult<()> {\n        fn update_groups_sum<T>(\n            int_array: &PrimitiveArray<T>,\n            group_indices: &[usize],\n            sums: &mut [Option<i64>],\n            opt_filter: Option<&BooleanArray>,\n        ) -> DFResult<()>\n        where\n            T: ArrowPrimitiveType,\n            T::Native: ArrowNativeType,\n        {\n            for (i, &group_index) in group_indices.iter().enumerate() {\n                if let Some(f) = opt_filter {\n                    if !f.is_valid(i) || !f.value(i) {\n                        continue;\n                    }\n                }\n                if !int_array.is_null(i) {\n                    let v = int_array.value(i).to_i64().ok_or_else(|| {\n                        DataFusionError::Internal(\"Failed to convert value to i64\".to_string())\n                    })?;\n                    sums[group_index] = Some(sums[group_index].unwrap_or(0).add_wrapping(v));\n                }\n            }\n            Ok(())\n        }\n\n        let values = &values[0];\n        self.sums.resize(total_num_groups, None);\n\n        match values.data_type() {\n            DataType::Int64 => update_groups_sum(\n                as_primitive_array::<Int64Type>(values),\n                group_indices,\n                &mut self.sums,\n                opt_filter,\n            )?,\n            DataType::Int32 => update_groups_sum(\n                as_primitive_array::<Int32Type>(values),\n                group_indices,\n                &mut 
self.sums,\n                opt_filter,\n            )?,\n            DataType::Int16 => update_groups_sum(\n                as_primitive_array::<Int16Type>(values),\n                group_indices,\n                &mut self.sums,\n                opt_filter,\n            )?,\n            DataType::Int8 => update_groups_sum(\n                as_primitive_array::<Int8Type>(values),\n                group_indices,\n                &mut self.sums,\n                opt_filter,\n            )?,\n            _ => {\n                return Err(DataFusionError::Internal(format!(\n                    \"Unsupported data type for SumIntGroupsAccumulatorLegacy: {:?}\",\n                    values.data_type()\n                )))\n            }\n        };\n        Ok(())\n    }\n\n    fn evaluate(&mut self, emit_to: EmitTo) -> DFResult<ArrayRef> {\n        match emit_to {\n            EmitTo::All => {\n                let result = Arc::new(Int64Array::from(std::mem::take(&mut self.sums))) as ArrayRef;\n                Ok(result)\n            }\n            EmitTo::First(n) => {\n                let result = Arc::new(Int64Array::from(self.sums.drain(..n).collect::<Vec<_>>()))\n                    as ArrayRef;\n                Ok(result)\n            }\n        }\n    }\n\n    fn state(&mut self, emit_to: EmitTo) -> DFResult<Vec<ArrayRef>> {\n        let sums = emit_to.take_needed(&mut self.sums);\n        Ok(vec![Arc::new(Int64Array::from(sums))])\n    }\n\n    fn merge_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        opt_filter: Option<&BooleanArray>,\n        total_num_groups: usize,\n    ) -> DFResult<()> {\n        debug_assert!(\n            opt_filter.is_none(),\n            \"opt_filter is not supported in merge_batch\"\n        );\n\n        if values.len() != 1 {\n            return Err(DataFusionError::Internal(format!(\n                \"Invalid state while merging batch. 
Expected 1 element but found {}\",\n                values.len()\n            )));\n        }\n        let that_sums = values[0].as_primitive::<Int64Type>();\n\n        self.sums.resize(total_num_groups, None);\n\n        for (idx, &group_index) in group_indices.iter().enumerate() {\n            if that_sums.is_null(idx) {\n                continue;\n            }\n            let that_sum = that_sums.value(idx);\n\n            if self.sums[group_index].is_none() {\n                self.sums[group_index] = Some(that_sum);\n            } else {\n                self.sums[group_index] =\n                    Some(self.sums[group_index].unwrap().add_wrapping(that_sum));\n            }\n        }\n        Ok(())\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n}\n\nstruct SumIntGroupsAccumulatorAnsi {\n    sums: Vec<Option<i64>>,\n}\n\nimpl SumIntGroupsAccumulatorAnsi {\n    fn new() -> Self {\n        Self { sums: Vec::new() }\n    }\n}\n\nimpl GroupsAccumulator for SumIntGroupsAccumulatorAnsi {\n    fn update_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        opt_filter: Option<&BooleanArray>,\n        total_num_groups: usize,\n    ) -> DFResult<()> {\n        fn update_groups_sum<T>(\n            int_array: &PrimitiveArray<T>,\n            group_indices: &[usize],\n            sums: &mut [Option<i64>],\n            opt_filter: Option<&BooleanArray>,\n        ) -> DFResult<()>\n        where\n            T: ArrowPrimitiveType,\n            T::Native: ArrowNativeType,\n        {\n            for (i, &group_index) in group_indices.iter().enumerate() {\n                if let Some(f) = opt_filter {\n                    if !f.is_valid(i) || !f.value(i) {\n                        continue;\n                    }\n                }\n                if !int_array.is_null(i) {\n                    let v = int_array.value(i).to_i64().ok_or_else(|| {\n                        DataFusionError::Internal(\"Failed to convert value to i64\".to_string())\n                    })?;\n                    sums[group_index] =\n                        Some(sums[group_index].unwrap_or(0).add_checked(v).map_err(|_| {\n                            DataFusionError::from(arithmetic_overflow_error(\"integer\"))\n                        })?);\n                }\n            }\n            Ok(())\n        }\n\n        let values = &values[0];\n        self.sums.resize(total_num_groups, None);\n\n        match values.data_type() {\n            DataType::Int64 => update_groups_sum(\n                as_primitive_array::<Int64Type>(values),\n                group_indices,\n                &mut self.sums,\n                opt_filter,\n            )?,\n            DataType::Int32 => update_groups_sum(\n                as_primitive_array::<Int32Type>(values),\n                group_indices,\n                &mut self.sums,\n                opt_filter,\n            )?,\n            DataType::Int16 => update_groups_sum(\n                as_primitive_array::<Int16Type>(values),\n                group_indices,\n                &mut self.sums,\n                opt_filter,\n            )?,\n            DataType::Int8 => update_groups_sum(\n                as_primitive_array::<Int8Type>(values),\n                group_indices,\n                &mut self.sums,\n                opt_filter,\n            )?,\n            _ => {\n                return Err(DataFusionError::Internal(format!(\n                    \"Unsupported data type for 
SumIntGroupsAccumulatorAnsi: {:?}\",\n                    values.data_type()\n                )))\n            }\n        };\n        Ok(())\n    }\n\n    fn evaluate(&mut self, emit_to: EmitTo) -> DFResult<ArrayRef> {\n        match emit_to {\n            EmitTo::All => {\n                let result = Arc::new(Int64Array::from(std::mem::take(&mut self.sums))) as ArrayRef;\n                Ok(result)\n            }\n            EmitTo::First(n) => {\n                let result = Arc::new(Int64Array::from(self.sums.drain(..n).collect::<Vec<_>>()))\n                    as ArrayRef;\n                Ok(result)\n            }\n        }\n    }\n\n    fn state(&mut self, emit_to: EmitTo) -> DFResult<Vec<ArrayRef>> {\n        let sums = emit_to.take_needed(&mut self.sums);\n        Ok(vec![Arc::new(Int64Array::from(sums))])\n    }\n\n    fn merge_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        opt_filter: Option<&BooleanArray>,\n        total_num_groups: usize,\n    ) -> DFResult<()> {\n        debug_assert!(\n            opt_filter.is_none(),\n            \"opt_filter is not supported in merge_batch\"\n        );\n\n        if values.len() != 1 {\n            return Err(DataFusionError::Internal(format!(\n                \"Invalid state while merging batch. Expected 1 element but found {}\",\n                values.len()\n            )));\n        }\n        let that_sums = values[0].as_primitive::<Int64Type>();\n\n        self.sums.resize(total_num_groups, None);\n\n        for (idx, &group_index) in group_indices.iter().enumerate() {\n            if that_sums.is_null(idx) {\n                continue;\n            }\n            let that_sum = that_sums.value(idx);\n\n            if self.sums[group_index].is_none() {\n                self.sums[group_index] = Some(that_sum);\n            } else {\n                self.sums[group_index] = Some(\n                    self.sums[group_index]\n                        .unwrap()\n                        .add_checked(that_sum)\n                        .map_err(|_| DataFusionError::from(arithmetic_overflow_error(\"integer\")))?,\n                );\n            }\n        }\n        Ok(())\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n}\n\nstruct SumIntGroupsAccumulatorTry {\n    sums: Vec<Option<i64>>,\n    has_all_nulls: Vec<bool>,\n}\n\nimpl SumIntGroupsAccumulatorTry {\n    fn new() -> Self {\n        Self {\n            sums: Vec::new(),\n            has_all_nulls: Vec::new(),\n        }\n    }\n\n    fn group_overflowed(&self, group_index: usize) -> bool {\n        !self.has_all_nulls[group_index] && self.sums[group_index].is_none()\n    }\n}\n\nimpl GroupsAccumulator for SumIntGroupsAccumulatorTry {\n    fn update_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        opt_filter: Option<&BooleanArray>,\n        total_num_groups: usize,\n    ) -> DFResult<()> {\n        fn update_groups_sum<T>(\n            int_array: &PrimitiveArray<T>,\n            group_indices: &[usize],\n            sums: &mut [Option<i64>],\n            has_all_nulls: &mut [bool],\n            opt_filter: Option<&BooleanArray>,\n        ) -> DFResult<()>\n        where\n            T: ArrowPrimitiveType,\n            T::Native: ArrowNativeType,\n        {\n            for (i, &group_index) in group_indices.iter().enumerate() {\n                if let Some(f) = opt_filter {\n                    if !f.is_valid(i) || !f.value(i) {\n              
          continue;\n                    }\n                }\n                if !int_array.is_null(i) {\n                    // Skip if this group already overflowed\n                    if !has_all_nulls[group_index] && sums[group_index].is_none() {\n                        continue;\n                    }\n                    let v = int_array.value(i).to_i64().ok_or_else(|| {\n                        DataFusionError::Internal(\"Failed to convert value to i64\".to_string())\n                    })?;\n                    match sums[group_index].unwrap_or(0).add_checked(v) {\n                        Ok(new_sum) => sums[group_index] = Some(new_sum),\n                        Err(_) => sums[group_index] = None,\n                    };\n                    has_all_nulls[group_index] = false;\n                }\n            }\n            Ok(())\n        }\n        let values = &values[0];\n        self.sums.resize(total_num_groups, Some(0));\n        self.has_all_nulls.resize(total_num_groups, true);\n\n        match values.data_type() {\n            DataType::Int64 => update_groups_sum(\n                as_primitive_array::<Int64Type>(values),\n                group_indices,\n                &mut self.sums,\n                &mut self.has_all_nulls,\n                opt_filter,\n            )?,\n            DataType::Int32 => update_groups_sum(\n                as_primitive_array::<Int32Type>(values),\n                group_indices,\n                &mut self.sums,\n                &mut self.has_all_nulls,\n                opt_filter,\n            )?,\n            DataType::Int16 => update_groups_sum(\n                as_primitive_array::<Int16Type>(values),\n                group_indices,\n                &mut self.sums,\n                &mut self.has_all_nulls,\n                opt_filter,\n            )?,\n            DataType::Int8 => update_groups_sum(\n                as_primitive_array::<Int8Type>(values),\n                group_indices,\n                &mut self.sums,\n                &mut self.has_all_nulls,\n                opt_filter,\n            )?,\n            _ => {\n                return Err(DataFusionError::Internal(format!(\n                    \"Unsupported data type for SumIntGroupsAccumulatorTry: {:?}\",\n                    values.data_type()\n                )))\n            }\n        };\n        Ok(())\n    }\n\n    fn evaluate(&mut self, emit_to: EmitTo) -> DFResult<ArrayRef> {\n        match emit_to {\n            EmitTo::All => {\n                let result = Arc::new(Int64Array::from_iter(\n                    self.sums\n                        .iter()\n                        .zip(self.has_all_nulls.iter())\n                        .map(|(&sum, &is_null)| if is_null { None } else { sum }),\n                )) as ArrayRef;\n                self.sums.clear();\n                self.has_all_nulls.clear();\n                Ok(result)\n            }\n            EmitTo::First(n) => {\n                let result = Arc::new(Int64Array::from_iter(\n                    self.sums\n                        .drain(..n)\n                        .zip(self.has_all_nulls.drain(..n))\n                        .map(|(sum, is_null)| if is_null { None } else { sum }),\n                )) as ArrayRef;\n                Ok(result)\n            }\n        }\n    }\n\n    fn state(&mut self, emit_to: EmitTo) -> DFResult<Vec<ArrayRef>> {\n        let sums = emit_to.take_needed(&mut self.sums);\n        let has_all_nulls = emit_to.take_needed(&mut self.has_all_nulls);\n        Ok(vec![\n  
          Arc::new(Int64Array::from(sums)),\n            Arc::new(BooleanArray::from(has_all_nulls)),\n        ])\n    }\n\n    fn merge_batch(\n        &mut self,\n        values: &[ArrayRef],\n        group_indices: &[usize],\n        opt_filter: Option<&BooleanArray>,\n        total_num_groups: usize,\n    ) -> DFResult<()> {\n        debug_assert!(\n            opt_filter.is_none(),\n            \"opt_filter is not supported in merge_batch\"\n        );\n\n        if values.len() != 2 {\n            return Err(DataFusionError::Internal(format!(\n                \"Invalid state while merging batch. Expected 2 elements but found {}\",\n                values.len()\n            )));\n        }\n        let that_sums = values[0].as_primitive::<Int64Type>();\n        let that_has_all_nulls_array = values[1].as_boolean();\n\n        self.sums.resize(total_num_groups, Some(0));\n        self.has_all_nulls.resize(total_num_groups, true);\n\n        for (idx, &group_index) in group_indices.iter().enumerate() {\n            let that_sum = if that_sums.is_null(idx) {\n                None\n            } else {\n                Some(that_sums.value(idx))\n            };\n            let that_has_all_nulls = that_has_all_nulls_array.value(idx);\n\n            let that_overflowed = !that_has_all_nulls && that_sum.is_none();\n            if that_overflowed || self.group_overflowed(group_index) {\n                self.sums[group_index] = None;\n                self.has_all_nulls[group_index] = false;\n                continue;\n            }\n\n            if that_has_all_nulls {\n                continue;\n            }\n\n            if self.has_all_nulls[group_index] {\n                self.sums[group_index] = that_sum;\n                self.has_all_nulls[group_index] = false;\n                continue;\n            }\n\n            // Both sides have non-null values\n            match self.sums[group_index]\n                .unwrap()\n                .add_checked(that_sum.unwrap())\n            {\n                Ok(v) => self.sums[group_index] = Some(v),\n                Err(_) => {\n                    self.sums[group_index] = None;\n                    self.has_all_nulls[group_index] = false;\n                }\n            }\n        }\n        Ok(())\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::Int64Array;\n    use datafusion::logical_expr::{EmitTo, GroupsAccumulator};\n\n    fn run_update_batch_with_filter(\n        acc: &mut dyn GroupsAccumulator,\n        values: Vec<i64>,\n        groups: Vec<usize>,\n        filter: Vec<bool>,\n        num_groups: usize,\n    ) -> Vec<Option<i64>> {\n        let values: ArrayRef = Arc::new(Int64Array::from(values));\n        let filter = BooleanArray::from(filter);\n        acc.update_batch(&[values], &groups, Some(&filter), num_groups)\n            .unwrap();\n        acc.evaluate(EmitTo::All)\n            .unwrap()\n            .as_primitive::<Int64Type>()\n            .iter()\n            .collect()\n    }\n\n    #[test]\n    fn test_legacy_update_batch_with_filter() {\n        let mut acc = SumIntGroupsAccumulatorLegacy::new();\n        // values: [1, 2, 3, 4, 5], filter: [T, F, T, F, T] => sum = 1+3+5 = 9\n        let result = run_update_batch_with_filter(\n            &mut acc,\n            vec![1, 2, 3, 4, 5],\n            vec![0, 0, 0, 0, 0],\n            vec![true, false, true, false, true],\n            1,\n        );\n        
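// A single group, so evaluate(EmitTo::All) yields exactly one entry.\n        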
assert_eq!(result, vec![Some(9)]);\n    }\n\n    #[test]\n    fn test_legacy_update_batch_filter_null_treated_as_exclude() {\n        let mut acc = SumIntGroupsAccumulatorLegacy::new();\n        let values: ArrayRef = Arc::new(Int64Array::from(vec![10i64, 20, 30]));\n        // null filter entry should be treated as exclude\n        let filter = BooleanArray::from(vec![Some(true), None, Some(true)]);\n        acc.update_batch(&[values], &[0, 0, 0], Some(&filter), 1)\n            .unwrap();\n        let result: Vec<Option<i64>> = acc\n            .evaluate(EmitTo::All)\n            .unwrap()\n            .as_primitive::<Int64Type>()\n            .iter()\n            .collect();\n        assert_eq!(result, vec![Some(40)]); // 10 + 30 = 40\n    }\n\n    #[test]\n    fn test_ansi_update_batch_with_filter() {\n        let mut acc = SumIntGroupsAccumulatorAnsi::new();\n        let result = run_update_batch_with_filter(\n            &mut acc,\n            vec![10, 20, 30, 40],\n            vec![0, 1, 0, 1],\n            vec![true, true, false, true],\n            2,\n        );\n        // group 0: 10 (30 filtered out); group 1: 20+40 = 60\n        assert_eq!(result, vec![Some(10), Some(60)]);\n    }\n\n    #[test]\n    fn test_try_update_batch_with_filter() {\n        let mut acc = SumIntGroupsAccumulatorTry::new();\n        let result = run_update_batch_with_filter(\n            &mut acc,\n            vec![1, 2, 3, 4, 5],\n            vec![0, 0, 0, 0, 0],\n            vec![true, false, true, false, true],\n            1,\n        );\n        assert_eq!(result, vec![Some(9)]); // 1+3+5 = 9\n    }\n\n    #[test]\n    fn test_no_filter_still_works() {\n        let mut acc = SumIntGroupsAccumulatorLegacy::new();\n        let values: ArrayRef = Arc::new(Int64Array::from(vec![1i64, 2, 3]));\n        acc.update_batch(&[values], &[0, 0, 0], None, 1).unwrap();\n        let result: Vec<Option<i64>> = acc\n            .evaluate(EmitTo::All)\n            .unwrap()\n            .as_primitive::<Int64Type>()\n            .iter()\n            .collect();\n        assert_eq!(result, vec![Some(6)]);\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/agg_funcs/variance.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::datatypes::FieldRef;\nuse arrow::{\n    array::{ArrayRef, Float64Array},\n    datatypes::{DataType, Field},\n};\nuse datafusion::common::{downcast_value, Result, ScalarValue};\nuse datafusion::logical_expr::function::{AccumulatorArgs, StateFieldsArgs};\nuse datafusion::logical_expr::Volatility::Immutable;\nuse datafusion::logical_expr::{Accumulator, AggregateUDFImpl, Signature};\nuse datafusion::physical_expr::expressions::format_state_name;\nuse datafusion::physical_expr::expressions::StatsType;\nuse std::any::Any;\nuse std::sync::Arc;\n\n/// VAR_SAMP and VAR_POP aggregate expression\n/// The implementation mostly is the same as the DataFusion's implementation. The reason\n/// we have our own implementation is that DataFusion has UInt64 for state_field `count`,\n/// while Spark has Double for count. Also we have added `null_on_divide_by_zero`\n/// to be consistent with Spark's implementation.\n#[derive(Debug, PartialEq, Eq)]\npub struct Variance {\n    name: String,\n    signature: Signature,\n    stats_type: StatsType,\n    null_on_divide_by_zero: bool,\n}\n\nimpl std::hash::Hash for Variance {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.name.hash(state);\n        self.signature.hash(state);\n        (self.stats_type as u8).hash(state);\n        self.null_on_divide_by_zero.hash(state);\n    }\n}\n\nimpl Variance {\n    /// Create a new VARIANCE aggregate function\n    pub fn new(\n        name: impl Into<String>,\n        data_type: DataType,\n        stats_type: StatsType,\n        null_on_divide_by_zero: bool,\n    ) -> Self {\n        // the result of variance just support FLOAT64 data type.\n        assert!(matches!(data_type, DataType::Float64));\n        Self {\n            name: name.into(),\n            signature: Signature::numeric(1, Immutable),\n            stats_type,\n            null_on_divide_by_zero,\n        }\n    }\n}\n\nimpl AggregateUDFImpl for Variance {\n    /// Return a reference to Any that can be used for downcasting\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        &self.name\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Float64)\n    }\n\n    fn accumulator(&self, _acc_args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {\n        Ok(Box::new(VarianceAccumulator::try_new(\n            self.stats_type,\n            self.null_on_divide_by_zero,\n        )?))\n    }\n\n    fn create_sliding_accumulator(&self, _args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {\n        Ok(Box::new(VarianceAccumulator::try_new(\n   
         self.stats_type,\n            self.null_on_divide_by_zero,\n        )?))\n    }\n\n    fn state_fields(&self, _args: StateFieldsArgs) -> Result<Vec<FieldRef>> {\n        Ok(vec![\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"count\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"mean\"),\n                DataType::Float64,\n                true,\n            )),\n            Arc::new(Field::new(\n                format_state_name(&self.name, \"m2\"),\n                DataType::Float64,\n                true,\n            )),\n        ])\n    }\n\n    fn default_value(&self, _data_type: &DataType) -> Result<ScalarValue> {\n        Ok(ScalarValue::Float64(None))\n    }\n}\n\n/// An accumulator to compute variance\n#[derive(Debug)]\npub struct VarianceAccumulator {\n    m2: f64,\n    mean: f64,\n    count: f64,\n    stats_type: StatsType,\n    null_on_divide_by_zero: bool,\n}\n\nimpl VarianceAccumulator {\n    /// Creates a new `VarianceAccumulator`\n    pub fn try_new(s_type: StatsType, null_on_divide_by_zero: bool) -> Result<Self> {\n        Ok(Self {\n            m2: 0_f64,\n            mean: 0_f64,\n            count: 0_f64,\n            stats_type: s_type,\n            null_on_divide_by_zero,\n        })\n    }\n\n    pub fn get_count(&self) -> f64 {\n        self.count\n    }\n\n    pub fn get_mean(&self) -> f64 {\n        self.mean\n    }\n\n    pub fn get_m2(&self) -> f64 {\n        self.m2\n    }\n}\n\nimpl Accumulator for VarianceAccumulator {\n    fn state(&mut self) -> Result<Vec<ScalarValue>> {\n        Ok(vec![\n            ScalarValue::from(self.count),\n            ScalarValue::from(self.mean),\n            ScalarValue::from(self.m2),\n        ])\n    }\n\n    fn update_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        let arr = downcast_value!(&values[0], Float64Array).iter().flatten();\n\n        for value in arr {\n            let new_count = self.count + 1.0;\n            let delta1 = value - self.mean;\n            let new_mean = delta1 / new_count + self.mean;\n            let delta2 = value - new_mean;\n            let new_m2 = self.m2 + delta1 * delta2;\n\n            self.count += 1.0;\n            self.mean = new_mean;\n            self.m2 = new_m2;\n        }\n\n        Ok(())\n    }\n\n    fn retract_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        let arr = downcast_value!(&values[0], Float64Array).iter().flatten();\n\n        for value in arr {\n            let new_count = self.count - 1.0;\n            let delta1 = self.mean - value;\n            let new_mean = delta1 / new_count + self.mean;\n            let delta2 = new_mean - value;\n            let new_m2 = self.m2 - delta1 * delta2;\n\n            self.count -= 1.0;\n            self.mean = new_mean;\n            self.m2 = new_m2;\n        }\n\n        Ok(())\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> Result<()> {\n        let counts = downcast_value!(states[0], Float64Array);\n        let means = downcast_value!(states[1], Float64Array);\n        let m2s = downcast_value!(states[2], Float64Array);\n\n        for i in 0..counts.len() {\n            let c = counts.value(i);\n            if c == 0_f64 {\n                continue;\n            }\n            let new_count = self.count + c;\n            let new_mean = self.mean * self.count / new_count + means.value(i) * c / new_count;\n            let 
delta = self.mean - means.value(i);\n            let new_m2 = self.m2 + m2s.value(i) + delta * delta * self.count * c / new_count;\n\n            self.count = new_count;\n            self.mean = new_mean;\n            self.m2 = new_m2;\n        }\n        Ok(())\n    }\n\n    fn evaluate(&mut self) -> Result<ScalarValue> {\n        let count = match self.stats_type {\n            StatsType::Population => self.count,\n            StatsType::Sample => {\n                if self.count > 0.0 {\n                    self.count - 1.0\n                } else {\n                    self.count\n                }\n            }\n        };\n\n        Ok(ScalarValue::Float64(match self.count {\n            0.0 => None,\n            count if count == 1.0 && StatsType::Sample == self.stats_type => {\n                if self.null_on_divide_by_zero {\n                    None\n                } else {\n                    Some(f64::NAN)\n                }\n            }\n            _ => Some(self.m2 / count),\n        }))\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/array_funcs/array_compact.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n// Spark-compatible array_compact: removes null elements from an array.\n//\n// DataFusion's array_remove_all(arr, null) returns NULL for the entire row\n// when the element-to-remove is NULL (DF 53, PR #21013). Spark's array_compact\n// needs to actually remove null elements, so we implement it directly.\n//\n// TODO: upstream this to datafusion-spark crate\n\nuse arrow::array::{\n    make_array, Array, ArrayRef, Capacities, GenericListArray, MutableArrayData, NullBufferBuilder,\n    OffsetSizeTrait,\n};\nuse arrow::buffer::OffsetBuffer;\nuse arrow::datatypes::{DataType, FieldRef};\nuse datafusion::common::{exec_err, utils::take_function_args, Result};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, TypeSignature, Volatility,\n};\nuse std::any::Any;\nuse std::sync::Arc;\n\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SparkArrayCompact {\n    signature: Signature,\n}\n\nimpl Default for SparkArrayCompact {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl SparkArrayCompact {\n    pub fn new() -> Self {\n        Self {\n            signature: Signature::new(TypeSignature::Any(1), Volatility::Immutable),\n        }\n    }\n}\n\nimpl ScalarUDFImpl for SparkArrayCompact {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"spark_array_compact\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        datafusion::common::internal_err!(\"return_field_from_args should be used instead\")\n    }\n\n    fn return_field_from_args(\n        &self,\n        args: datafusion::logical_expr::ReturnFieldArgs,\n    ) -> Result<FieldRef> {\n        Ok(Arc::clone(&args.arg_fields[0]))\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        let [array] = take_function_args(self.name(), &args.args)?;\n        match array {\n            ColumnarValue::Array(array) => match array.data_type() {\n                DataType::List(_) => Ok(ColumnarValue::Array(compact_list::<i32>(\n                    array.as_any().downcast_ref().unwrap(),\n                )?)),\n                DataType::LargeList(_) => Ok(ColumnarValue::Array(compact_list::<i64>(\n                    array.as_any().downcast_ref().unwrap(),\n                )?)),\n                other => exec_err!(\"spark_array_compact does not support type '{other}'\"),\n            },\n            ColumnarValue::Scalar(scalar) => {\n                let array = scalar.to_array()?;\n                let result = match array.data_type() {\n                    
DataType::List(_) => {\n                        compact_list::<i32>(array.as_any().downcast_ref().unwrap())?\n                    }\n                    DataType::LargeList(_) => {\n                        compact_list::<i64>(array.as_any().downcast_ref().unwrap())?\n                    }\n                    other => {\n                        return exec_err!(\"spark_array_compact does not support type '{other}'\")\n                    }\n                };\n                Ok(ColumnarValue::Array(result))\n            }\n        }\n    }\n}\n\n/// Remove null elements from each row of a list array.\nfn compact_list<OffsetSize: OffsetSizeTrait>(\n    list_array: &GenericListArray<OffsetSize>,\n) -> Result<ArrayRef> {\n    let list_field = match list_array.data_type() {\n        DataType::List(field) | DataType::LargeList(field) => field,\n        other => {\n            return exec_err!(\"Expected List or LargeList, got {other:?}\");\n        }\n    };\n\n    let values = list_array.values();\n    let original_data = values.to_data();\n    let mut offsets = Vec::<OffsetSize>::with_capacity(list_array.len() + 1);\n    offsets.push(OffsetSize::zero());\n    let mut mutable = MutableArrayData::with_capacities(\n        vec![&original_data],\n        false,\n        Capacities::Array(original_data.len()),\n    );\n    let mut valid = NullBufferBuilder::new(list_array.len());\n\n    for (row_index, offset_window) in list_array.offsets().windows(2).enumerate() {\n        if list_array.is_null(row_index) {\n            offsets.push(offsets[row_index]);\n            valid.append_null();\n            continue;\n        }\n\n        let start = offset_window[0].to_usize().unwrap();\n        let end = offset_window[1].to_usize().unwrap();\n        let mut copied = 0usize;\n\n        for i in start..end {\n            if !values.is_null(i) {\n                mutable.extend(0, i, i + 1);\n                copied += 1;\n            }\n        }\n\n        offsets.push(offsets[row_index] + OffsetSize::usize_as(copied));\n        valid.append_non_null();\n    }\n\n    let new_values = make_array(mutable.freeze());\n    Ok(Arc::new(GenericListArray::<OffsetSize>::try_new(\n        Arc::clone(list_field),\n        OffsetBuffer::new(offsets.into()),\n        new_values,\n        valid.finish(),\n    )?))\n}\n"
  },
  {
    "path": "native/spark-expr/src/array_funcs/array_insert.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{make_array, Array, ArrayRef, GenericListArray, Int32Array, OffsetSizeTrait};\nuse arrow::datatypes::{DataType, Schema};\nuse arrow::{\n    array::{as_primitive_array, Capacities, MutableArrayData},\n    buffer::{NullBuffer, OffsetBuffer},\n    record_batch::RecordBatch,\n};\nuse datafusion::common::{\n    cast::{as_large_list_array, as_list_array},\n    internal_err, DataFusionError, Result as DataFusionResult,\n};\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::hash::Hash;\nuse std::{\n    any::Any,\n    fmt::{Debug, Display, Formatter},\n    sync::Arc,\n};\n\n// 2147483632 == java.lang.Integer.MAX_VALUE - 15\n// It is a value of ByteArrayUtils.MAX_ROUNDED_ARRAY_LENGTH\n// https://github.com/apache/spark/blob/master/common/utils/src/main/java/org/apache/spark/unsafe/array/ByteArrayUtils.java\nconst MAX_ROUNDED_ARRAY_LENGTH: usize = 2147483632;\n\n#[derive(Debug, Eq)]\npub struct ArrayInsert {\n    src_array_expr: Arc<dyn PhysicalExpr>,\n    pos_expr: Arc<dyn PhysicalExpr>,\n    item_expr: Arc<dyn PhysicalExpr>,\n    legacy_negative_index: bool,\n}\n\nimpl Hash for ArrayInsert {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.src_array_expr.hash(state);\n        self.pos_expr.hash(state);\n        self.item_expr.hash(state);\n        self.legacy_negative_index.hash(state);\n    }\n}\nimpl PartialEq for ArrayInsert {\n    fn eq(&self, other: &Self) -> bool {\n        self.src_array_expr.eq(&other.src_array_expr)\n            && self.pos_expr.eq(&other.pos_expr)\n            && self.item_expr.eq(&other.item_expr)\n            && self.legacy_negative_index.eq(&other.legacy_negative_index)\n    }\n}\n\nimpl ArrayInsert {\n    pub fn new(\n        src_array_expr: Arc<dyn PhysicalExpr>,\n        pos_expr: Arc<dyn PhysicalExpr>,\n        item_expr: Arc<dyn PhysicalExpr>,\n        legacy_negative_index: bool,\n    ) -> Self {\n        Self {\n            src_array_expr,\n            pos_expr,\n            item_expr,\n            legacy_negative_index,\n        }\n    }\n\n    pub fn array_type(&self, data_type: &DataType) -> DataFusionResult<DataType> {\n        match data_type {\n            DataType::List(field) => Ok(DataType::List(Arc::clone(field))),\n            DataType::LargeList(field) => Ok(DataType::LargeList(Arc::clone(field))),\n            data_type => Err(DataFusionError::Internal(format!(\n                \"Unexpected src array type in ArrayInsert: {data_type:?}\"\n            ))),\n        }\n    }\n}\n\nimpl PhysicalExpr for ArrayInsert {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result 
{\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> DataFusionResult<DataType> {\n        self.array_type(&self.src_array_expr.data_type(input_schema)?)\n    }\n\n    fn nullable(&self, input_schema: &Schema) -> DataFusionResult<bool> {\n        Ok(self.src_array_expr.nullable(input_schema)? || self.pos_expr.nullable(input_schema)?)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> DataFusionResult<ColumnarValue> {\n        let pos_value = self\n            .pos_expr\n            .evaluate(batch)?\n            .into_array(batch.num_rows())?;\n\n        // Spark supports only IntegerType (Int32):\n        // https://github.com/apache/spark/blob/branch-3.5/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/collectionOperations.scala#L4737\n        if !matches!(pos_value.data_type(), DataType::Int32) {\n            return Err(DataFusionError::Internal(format!(\n                \"Unexpected index data type in ArrayInsert: {:?}, expected type is Int32\",\n                pos_value.data_type()\n            )));\n        }\n\n        // Check that the src array is actually an array and get its element type\n        let src_value = self\n            .src_array_expr\n            .evaluate(batch)?\n            .into_array(batch.num_rows())?;\n\n        let src_element_type = match self.array_type(src_value.data_type())? {\n            DataType::List(field) => &field.data_type().clone(),\n            DataType::LargeList(field) => &field.data_type().clone(),\n            _ => unreachable!(),\n        };\n\n        // Check that the inserted value has the same type as the array elements\n        let item_value = self\n            .item_expr\n            .evaluate(batch)?\n            .into_array(batch.num_rows())?;\n        if item_value.data_type() != src_element_type {\n            return Err(DataFusionError::Internal(format!(\n                \"Type mismatch in ArrayInsert: array type is {:?} but item type is {:?}\",\n                src_element_type,\n                item_value.data_type()\n            )));\n        }\n\n        match src_value.data_type() {\n            DataType::List(_) => {\n                let list_array = as_list_array(&src_value)?;\n                array_insert(\n                    list_array,\n                    &item_value,\n                    &pos_value,\n                    self.legacy_negative_index,\n                )\n            }\n            DataType::LargeList(_) => {\n                let list_array = as_large_list_array(&src_value)?;\n                array_insert(\n                    list_array,\n                    &item_value,\n                    &pos_value,\n                    self.legacy_negative_index,\n                )\n            }\n            _ => unreachable!(), // This case is checked already\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.src_array_expr, &self.pos_expr, &self.item_expr]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> DataFusionResult<Arc<dyn PhysicalExpr>> {\n        match children.len() {\n            3 => Ok(Arc::new(ArrayInsert::new(\n                Arc::clone(&children[0]),\n                Arc::clone(&children[1]),\n                Arc::clone(&children[2]),\n                self.legacy_negative_index,\n            ))),\n            _ => internal_err!(\"ArrayInsert should have exactly three children\"),\n        }\n    }\n}\n\nfn array_insert<O: 
OffsetSizeTrait>(\n    list_array: &GenericListArray<O>,\n    items_array: &ArrayRef,\n    pos_array: &ArrayRef,\n    legacy_mode: bool,\n) -> DataFusionResult<ColumnarValue> {\n    // Implementation aligned with Arrow's half-open offset ranges and Spark semantics.\n\n    let values = list_array.values();\n    let offsets = list_array.offsets();\n    let values_data = values.to_data();\n    let item_data = items_array.to_data();\n\n    // Estimate capacity (original values + inserted items upper bound)\n    let new_capacity = Capacities::Array(values_data.len() + item_data.len());\n\n    let mut mutable_values =\n        MutableArrayData::with_capacities(vec![&values_data, &item_data], true, new_capacity);\n\n    // New offsets and top-level list validity bitmap\n    let mut new_offsets = Vec::with_capacity(list_array.len() + 1);\n    new_offsets.push(O::usize_as(0));\n    let mut list_valid = Vec::<bool>::with_capacity(list_array.len());\n\n    // Spark supports only Int32 position indices\n    let pos_data: &Int32Array = as_primitive_array(&pos_array);\n\n    for (row_index, window) in offsets.windows(2).enumerate() {\n        let start = window[0].as_usize();\n        let end = window[1].as_usize();\n        let len = end - start;\n\n        // Return null for the entire row when pos is null (consistent with Spark's behavior)\n        if pos_data.is_null(row_index) {\n            new_offsets.push(new_offsets[row_index]);\n            list_valid.push(false);\n            continue;\n        }\n        let pos = pos_data.value(row_index);\n\n        if list_array.is_null(row_index) {\n            // Top-level list row is NULL: do not write any child values and do not advance offset\n            new_offsets.push(new_offsets[row_index]);\n            list_valid.push(false);\n            continue;\n        }\n\n        if pos == 0 {\n            return Err(DataFusionError::Internal(\n                \"Position for array_insert should be greater or less than zero\".to_string(),\n            ));\n        }\n\n        let final_len: usize;\n\n        if pos > 0 {\n            // Positive index (1-based)\n            let pos1 = pos as usize;\n            if pos1 <= len + 1 {\n                // In-range insertion (including appending to end)\n                let corrected = pos1 - 1; // 0-based insertion point\n                mutable_values.extend(0, start, start + corrected);\n                mutable_values.extend(1, row_index, row_index + 1);\n                mutable_values.extend(0, start + corrected, end);\n                final_len = len + 1;\n            } else {\n                // Beyond end: pad with nulls then insert\n                let corrected = pos1 - 1;\n                let padding = corrected - len;\n                mutable_values.extend(0, start, end);\n                mutable_values.extend_nulls(padding);\n                mutable_values.extend(1, row_index, row_index + 1);\n                final_len = corrected + 1; // equals pos1\n            }\n        } else {\n            // Negative index (1-based from the end)\n            let k = (-pos) as usize;\n\n            if k <= len {\n                // In-range negative insertion\n                // Non-legacy: -1 behaves like append to end (corrected = len - k + 1)\n                // Legacy:     -1 behaves like insert before the last element (corrected = len - k)\n                let base_offset = if legacy_mode { 0 } else { 1 };\n                let corrected = len - k + base_offset;\n                mutable_values.extend(0, 
start, start + corrected);\n                mutable_values.extend(1, row_index, row_index + 1);\n                mutable_values.extend(0, start + corrected, end);\n                final_len = len + 1;\n            } else {\n                // Negative index beyond the start (Spark-specific behavior):\n                // Place item first, then pad with nulls, then append the original array.\n                // Final length = k + base_offset, where base_offset = 1 in legacy mode, otherwise 0.\n                let base_offset = if legacy_mode { 1 } else { 0 };\n                let target_len = k + base_offset;\n                let padding = target_len.saturating_sub(len + 1);\n                mutable_values.extend(1, row_index, row_index + 1); // insert item first\n                mutable_values.extend_nulls(padding); // pad nulls\n                mutable_values.extend(0, start, end); // append original values\n                final_len = target_len;\n            }\n        }\n\n        if final_len > MAX_ROUNDED_ARRAY_LENGTH {\n            return Err(DataFusionError::Internal(format!(\n                \"Max array length in Spark is {MAX_ROUNDED_ARRAY_LENGTH}, but got {final_len}\"\n            )));\n        }\n\n        let prev = new_offsets[row_index].as_usize();\n        new_offsets.push(O::usize_as(prev + final_len));\n        list_valid.push(true);\n    }\n\n    let child = make_array(mutable_values.freeze());\n\n    // Reuse the original list element field (name/type/nullability)\n    let elem_field = match list_array.data_type() {\n        DataType::List(field) => Arc::clone(field),\n        DataType::LargeList(field) => Arc::clone(field),\n        _ => unreachable!(),\n    };\n\n    // Build the resulting list array\n    let new_list = GenericListArray::<O>::try_new(\n        elem_field,\n        OffsetBuffer::new(new_offsets.into()),\n        child,\n        Some(NullBuffer::new(list_valid.into())),\n    )?;\n\n    Ok(ColumnarValue::Array(Arc::new(new_list)))\n}\n\nimpl Display for ArrayInsert {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"ArrayInsert [array: {:?}, pos: {:?}, item: {:?}]\",\n            self.src_array_expr, self.pos_expr, self.item_expr\n        )\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use arrow::array::{Array, ArrayRef, Int32Array, ListArray};\n    use arrow::datatypes::Int32Type;\n    use datafusion::common::Result;\n    use datafusion::physical_plan::ColumnarValue;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_array_insert() -> Result<()> {\n        // Test inserting an item into a list array\n        // Inputs and expected values are taken from the Spark results\n        let list = ListArray::from_iter_primitive::<Int32Type, _, _>(vec![\n            Some(vec![Some(1), Some(2), Some(3)]),\n            Some(vec![Some(4), Some(5)]),\n            Some(vec![None]),\n            Some(vec![Some(1), Some(2), Some(3)]),\n            Some(vec![Some(1), Some(2), Some(3)]),\n            None,\n        ]);\n\n        let positions = Int32Array::from(vec![2, 1, 1, 5, 6, 1]);\n        let items = Int32Array::from(vec![\n            Some(10),\n            Some(20),\n            Some(30),\n            Some(100),\n            Some(100),\n            Some(40),\n        ]);\n\n        let ColumnarValue::Array(result) = array_insert(\n            &list,\n            &(Arc::new(items) as ArrayRef),\n            &(Arc::new(positions) as ArrayRef),\n            false,\n        )?\n        
else {\n            unreachable!()\n        };\n\n        let expected = ListArray::from_iter_primitive::<Int32Type, _, _>(vec![\n            Some(vec![Some(1), Some(10), Some(2), Some(3)]),\n            Some(vec![Some(20), Some(4), Some(5)]),\n            Some(vec![Some(30), None]),\n            Some(vec![Some(1), Some(2), Some(3), None, Some(100)]),\n            Some(vec![Some(1), Some(2), Some(3), None, None, Some(100)]),\n            None,\n        ]);\n\n        assert_eq!(&result.to_data(), &expected.to_data());\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_array_insert_negative_index() -> Result<()> {\n        // Test insert with negative index\n        // Inputs and expected values are taken from the Spark results\n        let list = ListArray::from_iter_primitive::<Int32Type, _, _>(vec![\n            Some(vec![Some(1), Some(2), Some(3)]),\n            Some(vec![Some(4), Some(5)]),\n            Some(vec![Some(1)]),\n            None,\n        ]);\n\n        let positions = Int32Array::from(vec![-2, -1, -3, -1]);\n        let items = Int32Array::from(vec![Some(10), Some(20), Some(100), Some(30)]);\n\n        let ColumnarValue::Array(result) = array_insert(\n            &list,\n            &(Arc::new(items) as ArrayRef),\n            &(Arc::new(positions) as ArrayRef),\n            false,\n        )?\n        else {\n            unreachable!()\n        };\n\n        let expected = ListArray::from_iter_primitive::<Int32Type, _, _>(vec![\n            Some(vec![Some(1), Some(2), Some(10), Some(3)]),\n            Some(vec![Some(4), Some(5), Some(20)]),\n            Some(vec![Some(100), None, Some(1)]),\n            None,\n        ]);\n\n        assert_eq!(&result.to_data(), &expected.to_data());\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_array_insert_legacy_mode() -> Result<()> {\n        // Test the so-called \"legacy\" mode existing in Spark\n        let list = ListArray::from_iter_primitive::<Int32Type, _, _>(vec![\n            Some(vec![Some(1), Some(2), Some(3)]),\n            Some(vec![Some(4), Some(5)]),\n            None,\n        ]);\n\n        let positions = Int32Array::from(vec![-1, -1, -1]);\n        let items = Int32Array::from(vec![Some(10), Some(20), Some(30)]);\n\n        let ColumnarValue::Array(result) = array_insert(\n            &list,\n            &(Arc::new(items) as ArrayRef),\n            &(Arc::new(positions) as ArrayRef),\n            true,\n        )?\n        else {\n            unreachable!()\n        };\n\n        let expected = ListArray::from_iter_primitive::<Int32Type, _, _>(vec![\n            Some(vec![Some(1), Some(2), Some(10), Some(3)]),\n            Some(vec![Some(4), Some(20), Some(5)]),\n            None,\n        ]);\n\n        assert_eq!(&result.to_data(), &expected.to_data());\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_array_insert_bug_repro_null_item_pos1_fixed() -> Result<()> {\n        use arrow::array::{Array, ArrayRef, Int32Array, ListArray};\n        use arrow::datatypes::Int32Type;\n\n        // row0 = [0, null, 0]\n        // row1 = [1, null, 1]\n        let list = ListArray::from_iter_primitive::<Int32Type, _, _>(vec![\n            Some(vec![Some(0), None, Some(0)]),\n            Some(vec![Some(1), None, Some(1)]),\n        ]);\n\n        let positions = Int32Array::from(vec![1, 1]);\n        let items = Int32Array::from(vec![None, None]);\n\n        let ColumnarValue::Array(result) = array_insert(\n            &list,\n            &(Arc::new(items) as ArrayRef),\n            &(Arc::new(positions) as 
ArrayRef),\n            false, // legacy_mode = false\n        )?\n        else {\n            unreachable!()\n        };\n\n        let expected = ListArray::from_iter_primitive::<Int32Type, _, _>(vec![\n            Some(vec![None, Some(0), None, Some(0)]),\n            Some(vec![None, Some(1), None, Some(1)]),\n        ]);\n        assert_eq!(&result.to_data(), &expected.to_data());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/array_funcs/get_array_struct_fields.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{make_array, Array, GenericListArray, OffsetSizeTrait, StructArray};\nuse arrow::buffer::NullBuffer;\nuse arrow::datatypes::{DataType, FieldRef, Schema};\nuse arrow::record_batch::RecordBatch;\nuse datafusion::common::{\n    cast::{as_large_list_array, as_list_array},\n    internal_err, DataFusionError, Result as DataFusionResult,\n};\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::hash::Hash;\nuse std::{\n    any::Any,\n    fmt::{Debug, Display, Formatter},\n    sync::Arc,\n};\n\n#[derive(Debug, Eq)]\npub struct GetArrayStructFields {\n    child: Arc<dyn PhysicalExpr>,\n    ordinal: usize,\n}\n\nimpl Hash for GetArrayStructFields {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n        self.ordinal.hash(state);\n    }\n}\nimpl PartialEq for GetArrayStructFields {\n    fn eq(&self, other: &Self) -> bool {\n        self.child.eq(&other.child) && self.ordinal.eq(&other.ordinal)\n    }\n}\n\nimpl GetArrayStructFields {\n    pub fn new(child: Arc<dyn PhysicalExpr>, ordinal: usize) -> Self {\n        Self { child, ordinal }\n    }\n\n    fn list_field(&self, input_schema: &Schema) -> DataFusionResult<FieldRef> {\n        match self.child.data_type(input_schema)? {\n            DataType::List(field) | DataType::LargeList(field) => Ok(field),\n            data_type => Err(DataFusionError::Internal(format!(\n                \"Unexpected data type in GetArrayStructFields: {data_type:?}\"\n            ))),\n        }\n    }\n\n    fn child_field(&self, input_schema: &Schema) -> DataFusionResult<FieldRef> {\n        match self.list_field(input_schema)?.data_type() {\n            DataType::Struct(fields) => Ok(Arc::clone(&fields[self.ordinal])),\n            data_type => Err(DataFusionError::Internal(format!(\n                \"Unexpected data type in GetArrayStructFields: {data_type:?}\"\n            ))),\n        }\n    }\n}\n\nimpl PhysicalExpr for GetArrayStructFields {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> DataFusionResult<DataType> {\n        let struct_field = self.child_field(input_schema)?;\n        match self.child.data_type(input_schema)? 
{\n            DataType::List(_) => Ok(DataType::List(struct_field)),\n            DataType::LargeList(_) => Ok(DataType::LargeList(struct_field)),\n            data_type => Err(DataFusionError::Internal(format!(\n                \"Unexpected data type in GetArrayStructFields: {data_type:?}\"\n            ))),\n        }\n    }\n\n    fn nullable(&self, input_schema: &Schema) -> DataFusionResult<bool> {\n        Ok(self.list_field(input_schema)?.is_nullable()\n            || self.child_field(input_schema)?.is_nullable())\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> DataFusionResult<ColumnarValue> {\n        let child_value = self.child.evaluate(batch)?.into_array(batch.num_rows())?;\n\n        match child_value.data_type() {\n            DataType::List(_) => {\n                let list_array = as_list_array(&child_value)?;\n\n                get_array_struct_fields(list_array, self.ordinal)\n            }\n            DataType::LargeList(_) => {\n                let list_array = as_large_list_array(&child_value)?;\n\n                get_array_struct_fields(list_array, self.ordinal)\n            }\n            data_type => Err(DataFusionError::Internal(format!(\n                \"Unexpected child type for GetArrayStructFields: {data_type:?}\"\n            ))),\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        match children.len() {\n            1 => Ok(Arc::new(GetArrayStructFields::new(\n                Arc::clone(&children[0]),\n                self.ordinal,\n            ))),\n            _ => internal_err!(\"GetArrayStructFields should have exactly one child\"),\n        }\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n}\n\nfn get_array_struct_fields<O: OffsetSizeTrait>(\n    list_array: &GenericListArray<O>,\n    ordinal: usize,\n) -> DataFusionResult<ColumnarValue> {\n    let values = list_array\n        .values()\n        .as_any()\n        .downcast_ref::<StructArray>()\n        .expect(\"A StructType is expected\");\n\n    let field = Arc::clone(&values.fields()[ordinal]);\n    // Get struct column by ordinal\n    let extracted_column = values.column(ordinal);\n\n    let data = if values.null_count() == extracted_column.null_count() {\n        Arc::clone(extracted_column)\n    } else {\n        // In some cases the column obtained from the struct by ordinal does not\n        // reflect all of the nulls imposed by the parent values.\n        // This may be caused by a low-level reader bug and needs more investigation.\n        // For this specific case we patch the column's null buffer by merging the\n        // null buffers from the parent and the column.\n        let merged_nulls = NullBuffer::union(values.nulls(), extracted_column.nulls());\n        make_array(\n            extracted_column\n                .into_data()\n                .into_builder()\n                .nulls(merged_nulls)\n                .build()?,\n        )\n    };\n\n    let array = GenericListArray::new(\n        field,\n        list_array.offsets().clone(),\n        data,\n        list_array.nulls().cloned(),\n    );\n\n    Ok(ColumnarValue::Array(Arc::new(array)))\n}\n\nimpl Display for GetArrayStructFields {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            
\"GetArrayStructFields [child: {:?}, ordinal: {:?}]\",\n            self.child, self.ordinal\n        )\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/array_funcs/list_extract.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, GenericListArray, Int32Array, OffsetSizeTrait};\nuse arrow::datatypes::{DataType, FieldRef, Schema};\nuse arrow::{array::MutableArrayData, datatypes::ArrowNativeType, record_batch::RecordBatch};\nuse datafusion::common::{\n    cast::{as_int32_array, as_large_list_array, as_list_array},\n    internal_err, DataFusionError, Result as DataFusionResult, ScalarValue,\n};\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::hash::Hash;\nuse std::{\n    any::Any,\n    fmt::{Debug, Display, Formatter},\n    sync::Arc,\n};\n\nuse crate::SparkError;\n\n#[derive(Debug, Clone)]\npub struct ListExtract {\n    child: Arc<dyn PhysicalExpr>,\n    ordinal: Arc<dyn PhysicalExpr>,\n    default_value: Option<Arc<dyn PhysicalExpr>>,\n    one_based: bool,\n    fail_on_error: bool,\n    expr_id: Option<u64>,\n    registry: Arc<crate::QueryContextMap>,\n}\n\nimpl Hash for ListExtract {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n        self.ordinal.hash(state);\n        self.default_value.hash(state);\n        self.one_based.hash(state);\n        self.fail_on_error.hash(state);\n        self.expr_id.hash(state);\n        // Exclude registry from hash\n    }\n}\n\nimpl PartialEq for ListExtract {\n    fn eq(&self, other: &Self) -> bool {\n        self.child.eq(&other.child)\n            && self.ordinal.eq(&other.ordinal)\n            && self.default_value.eq(&other.default_value)\n            && self.one_based.eq(&other.one_based)\n            && self.fail_on_error.eq(&other.fail_on_error)\n            && self.expr_id.eq(&other.expr_id)\n        // Exclude registry from equality check\n    }\n}\n\nimpl Eq for ListExtract {}\n\nimpl ListExtract {\n    pub fn new(\n        child: Arc<dyn PhysicalExpr>,\n        ordinal: Arc<dyn PhysicalExpr>,\n        default_value: Option<Arc<dyn PhysicalExpr>>,\n        one_based: bool,\n        fail_on_error: bool,\n        expr_id: Option<u64>,\n        registry: Arc<crate::QueryContextMap>,\n    ) -> Self {\n        Self {\n            child,\n            ordinal,\n            default_value,\n            one_based,\n            fail_on_error,\n            expr_id,\n            registry,\n        }\n    }\n\n    fn child_field(&self, input_schema: &Schema) -> DataFusionResult<FieldRef> {\n        match self.child.data_type(input_schema)? 
{\n            DataType::List(field) | DataType::LargeList(field) => Ok(field),\n            data_type => Err(DataFusionError::Internal(format!(\n                \"Unexpected data type in ListExtract: {data_type:?}\"\n            ))),\n        }\n    }\n\n    /// Wrap a SparkError with QueryContext if expr_id is available\n    fn wrap_error_with_context(&self, error: SparkError) -> DataFusionError {\n        if let Some(expr_id) = self.expr_id {\n            if let Some(query_ctx) = self.registry.get(expr_id) {\n                let wrapped = crate::SparkErrorWithContext::with_context(error, query_ctx);\n                return DataFusionError::External(Box::new(wrapped));\n            }\n        }\n        DataFusionError::from(error)\n    }\n}\n\nimpl PhysicalExpr for ListExtract {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> DataFusionResult<DataType> {\n        Ok(self.child_field(input_schema)?.data_type().clone())\n    }\n\n    fn nullable(&self, input_schema: &Schema) -> DataFusionResult<bool> {\n        // Only non-nullable if fail_on_error is enabled and the element is non-nullable\n        Ok(!self.fail_on_error || self.child_field(input_schema)?.is_nullable())\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> DataFusionResult<ColumnarValue> {\n        let child_value = self.child.evaluate(batch)?.into_array(batch.num_rows())?;\n        let ordinal_value = self.ordinal.evaluate(batch)?.into_array(batch.num_rows())?;\n\n        let default_value = self\n            .default_value\n            .as_ref()\n            .map(|d| {\n                d.evaluate(batch).map(|value| match value {\n                    ColumnarValue::Scalar(scalar)\n                        if !scalar.data_type().equals_datatype(child_value.data_type()) =>\n                    {\n                        scalar.cast_to(child_value.data_type())\n                    }\n                    ColumnarValue::Scalar(scalar) => Ok(scalar),\n                    v => Err(DataFusionError::Execution(format!(\n                        \"Expected scalar default value for ListExtract, got {v:?}\"\n                    ))),\n                })\n            })\n            .transpose()?\n            .unwrap_or(self.data_type(&batch.schema())?.try_into())?;\n\n        // Create error wrapper closure that has access to self\n        let error_wrapper = |error: SparkError| self.wrap_error_with_context(error);\n\n        let adjust_index: Box<dyn Fn(i32, usize) -> DataFusionResult<Option<usize>>> =\n            if self.one_based {\n                Box::new(|idx, len| one_based_index(idx, len, &error_wrapper))\n            } else {\n                Box::new(|idx, len| zero_based_index(idx, len, &error_wrapper))\n            };\n\n        match child_value.data_type() {\n            DataType::List(_) => {\n                let list_array = as_list_array(&child_value)?;\n                let index_array = as_int32_array(&ordinal_value)?;\n\n                list_extract(\n                    list_array,\n                    index_array,\n                    &default_value,\n                    self.fail_on_error,\n                    self.one_based,\n                    adjust_index,\n                    &error_wrapper,\n                )\n            }\n            DataType::LargeList(_) => {\n                let list_array = as_large_list_array(&child_value)?;\n    
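            // Note: the ordinal is evaluated as an Int32 array for both List and LargeList inputs; only the offset width differs between the two match arms.\n    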
            let index_array = as_int32_array(&ordinal_value)?;\n\n                list_extract(\n                    list_array,\n                    index_array,\n                    &default_value,\n                    self.fail_on_error,\n                    self.one_based,\n                    adjust_index,\n                    &error_wrapper,\n                )\n            }\n            data_type => Err(DataFusionError::Internal(format!(\n                \"Unexpected child type for ListExtract: {data_type:?}\"\n            ))),\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child, &self.ordinal]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        match children.len() {\n            2 => Ok(Arc::new(ListExtract::new(\n                Arc::clone(&children[0]),\n                Arc::clone(&children[1]),\n                self.default_value.clone(),\n                self.one_based,\n                self.fail_on_error,\n                self.expr_id,\n                Arc::clone(&self.registry),\n            ))),\n            _ => internal_err!(\"ListExtract should have exactly two children\"),\n        }\n    }\n}\n\nfn one_based_index(\n    index: i32,\n    len: usize,\n    error_wrapper: &impl Fn(SparkError) -> DataFusionError,\n) -> DataFusionResult<Option<usize>> {\n    if index == 0 {\n        return Err(error_wrapper(SparkError::InvalidIndexOfZero));\n    }\n\n    let abs_index = index.abs().as_usize();\n    if abs_index <= len {\n        if index > 0 {\n            Ok(Some(abs_index - 1))\n        } else {\n            Ok(Some(len - abs_index))\n        }\n    } else {\n        Ok(None)\n    }\n}\n\nfn zero_based_index(\n    index: i32,\n    len: usize,\n    _error_wrapper: &impl Fn(SparkError) -> DataFusionError,\n) -> DataFusionResult<Option<usize>> {\n    if index < 0 {\n        Ok(None)\n    } else {\n        let positive_index = index.as_usize();\n        if positive_index < len {\n            Ok(Some(positive_index))\n        } else {\n            Ok(None)\n        }\n    }\n}\n\nfn list_extract<O: OffsetSizeTrait>(\n    list_array: &GenericListArray<O>,\n    index_array: &Int32Array,\n    default_value: &ScalarValue,\n    fail_on_error: bool,\n    one_based: bool,\n    adjust_index: impl Fn(i32, usize) -> DataFusionResult<Option<usize>>,\n    error_wrapper: &impl Fn(SparkError) -> DataFusionError,\n) -> DataFusionResult<ColumnarValue> {\n    let values = list_array.values();\n    let offsets = list_array.offsets();\n\n    let data = values.to_data();\n\n    let default_data = default_value.to_array()?.to_data();\n\n    let mut mutable = MutableArrayData::new(vec![&data, &default_data], true, index_array.len());\n\n    for (row, (offset_window, index)) in offsets.windows(2).zip(index_array.iter()).enumerate() {\n        let start = offset_window[0].as_usize();\n        let len = offset_window[1].as_usize() - start;\n\n        if list_array.is_null(row) {\n            mutable.extend_nulls(1);\n        } else if let Some(index) = index {\n            if let Some(i) = adjust_index(index, len)? 
{\n                mutable.extend(0, start + i, start + i + 1);\n            } else if fail_on_error {\n                // Throw appropriate error based on whether this is element_at (one_based=true)\n                // or GetArrayItem (one_based=false)\n                let error = if one_based {\n                    // element_at function\n                    SparkError::InvalidElementAtIndex {\n                        index_value: index,\n                        array_size: len as i32,\n                    }\n                } else {\n                    // GetArrayItem (arr[index])\n                    SparkError::InvalidArrayIndex {\n                        index_value: index,\n                        array_size: len as i32,\n                    }\n                };\n                return Err(error_wrapper(error));\n            } else {\n                mutable.extend(1, 0, 1);\n            }\n        } else {\n            // index is NULL → result is NULL\n            mutable.extend_nulls(1);\n        }\n    }\n\n    let data = mutable.freeze();\n    Ok(ColumnarValue::Array(arrow::array::make_array(data)))\n}\n\nimpl Display for ListExtract {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"ListExtract [child: {:?}, ordinal: {:?}, default_value: {:?}, one_based: {:?}, fail_on_error: {:?}]\",\n            self.child, self.ordinal, self.default_value, self.one_based, self.fail_on_error\n        )\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use arrow::array::{Array, Int32Array, ListArray};\n    use arrow::datatypes::Int32Type;\n    use datafusion::common::{Result, ScalarValue};\n    use datafusion::physical_plan::ColumnarValue;\n\n    #[test]\n    fn test_list_extract_default_value() -> Result<()> {\n        let list = ListArray::from_iter_primitive::<Int32Type, _, _>(vec![\n            Some(vec![Some(1)]),\n            None,\n            Some(vec![]),\n        ]);\n        let indices = Int32Array::from(vec![0, 0, 0]);\n\n        let null_default = ScalarValue::Int32(None);\n\n        // Simple error wrapper for tests - just converts SparkError to DataFusionError\n        let error_wrapper = |error: SparkError| DataFusionError::from(error);\n\n        let ColumnarValue::Array(result) = list_extract(\n            &list,\n            &indices,\n            &null_default,\n            false,\n            false,\n            |idx, len| zero_based_index(idx, len, &error_wrapper),\n            &error_wrapper,\n        )?\n        else {\n            unreachable!()\n        };\n\n        assert_eq!(\n            &result.to_data(),\n            &Int32Array::from(vec![Some(1), None, None]).to_data()\n        );\n\n        let zero_default = ScalarValue::Int32(Some(0));\n\n        let ColumnarValue::Array(result) = list_extract(\n            &list,\n            &indices,\n            &zero_default,\n            false,\n            false,\n            |idx, len| zero_based_index(idx, len, &error_wrapper),\n            &error_wrapper,\n        )?\n        else {\n            unreachable!()\n        };\n\n        assert_eq!(\n            &result.to_data(),\n            &Int32Array::from(vec![Some(1), None, Some(0)]).to_data()\n        );\n        Ok(())\n    }\n\n    #[test]\n    fn test_list_extract_null_index() -> Result<()> {\n        // Guards against GetArrayItem returning incorrect results when a dynamic\n        // (column) index contains nulls\n        let list = ListArray::from_iter_primitive::<Int32Type, _, _>(vec![\n            
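// Rows 0-2 probe positions 0, 1, and 2 of identical lists; row 4 is a NULL list and the final row uses a NULL index.\n            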
Some(vec![Some(10), Some(20), Some(30)]),\n            Some(vec![Some(10), Some(20), Some(30)]),\n            Some(vec![Some(10), Some(20), Some(30)]),\n            Some(vec![Some(1)]),\n            None,\n            Some(vec![Some(10), Some(20)]),\n        ]);\n        let indices = Int32Array::from(vec![Some(0), Some(1), Some(2), Some(0), Some(0), None]);\n\n        let null_default = ScalarValue::Int32(None);\n        let error_wrapper = |error: SparkError| DataFusionError::from(error);\n\n        let ColumnarValue::Array(result) = list_extract(\n            &list,\n            &indices,\n            &null_default,\n            false,\n            false,\n            |idx, len| zero_based_index(idx, len, &error_wrapper),\n            &error_wrapper,\n        )?\n        else {\n            unreachable!()\n        };\n\n        assert_eq!(\n            &result.to_data(),\n            &Int32Array::from(vec![Some(10), Some(20), Some(30), Some(1), None, None]).to_data()\n        );\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/array_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod array_compact;\nmod array_insert;\nmod get_array_struct_fields;\nmod list_extract;\nmod size;\n\npub use array_compact::SparkArrayCompact;\npub use array_insert::ArrayInsert;\npub use get_array_struct_fields::GetArrayStructFields;\npub use list_extract::ListExtract;\npub use size::{spark_size, SparkSizeFunc};\n"
  },
  {
    "path": "native/spark-expr/src/array_funcs/size.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, ArrayRef, Int32Array};\nuse arrow::datatypes::{DataType, Field};\nuse datafusion::common::{exec_err, DataFusionError, Result as DataFusionResult, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse std::any::Any;\nuse std::sync::Arc;\n\n/// Spark size() function that returns the size of arrays or maps.\n/// Returns -1 for null inputs (Spark behavior differs from standard SQL).\npub fn spark_size(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    if args.len() != 1 {\n        return exec_err!(\"size function takes exactly one argument\");\n    }\n\n    match &args[0] {\n        ColumnarValue::Array(array) => {\n            let result = spark_size_array(array)?;\n            Ok(ColumnarValue::Array(result))\n        }\n        ColumnarValue::Scalar(scalar) => {\n            let result = spark_size_scalar(scalar)?;\n            Ok(ColumnarValue::Scalar(result))\n        }\n    }\n}\n\n#[derive(Debug, Hash, Eq, PartialEq)]\npub struct SparkSizeFunc {\n    signature: Signature,\n}\n\nimpl Default for SparkSizeFunc {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl SparkSizeFunc {\n    pub fn new() -> Self {\n        use DataType::*;\n        Self {\n            signature: Signature::uniform(\n                1,\n                vec![\n                    List(Arc::new(Field::new(\"item\", Null, true))),\n                    LargeList(Arc::new(Field::new(\"item\", Null, true))),\n                    FixedSizeList(Arc::new(Field::new(\"item\", Null, true)), -1),\n                    Map(Arc::new(Field::new(\"entries\", Null, true)), false),\n                ],\n                Volatility::Immutable,\n            ),\n        }\n    }\n}\n\nimpl ScalarUDFImpl for SparkSizeFunc {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"size\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> DataFusionResult<DataType> {\n        Ok(DataType::Int32)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> DataFusionResult<ColumnarValue> {\n        spark_size(&args.args)\n    }\n}\n\nfn spark_size_array(array: &ArrayRef) -> Result<ArrayRef, DataFusionError> {\n    let mut builder = Int32Array::builder(array.len());\n\n    match array.data_type() {\n        DataType::List(_) => {\n            let list_array = array\n                .as_any()\n                .downcast_ref::<arrow::array::ListArray>()\n                .ok_or_else(|| DataFusionError::Internal(\"Expected 
ListArray\".to_string()))?;\n\n            for i in 0..list_array.len() {\n                if list_array.is_null(i) {\n                    builder.append_value(-1); // Spark behavior: return -1 for null\n                } else {\n                    let list_len = list_array.value(i).len() as i32;\n                    builder.append_value(list_len);\n                }\n            }\n        }\n        DataType::LargeList(_) => {\n            let list_array = array\n                .as_any()\n                .downcast_ref::<arrow::array::LargeListArray>()\n                .ok_or_else(|| DataFusionError::Internal(\"Expected LargeListArray\".to_string()))?;\n\n            for i in 0..list_array.len() {\n                if list_array.is_null(i) {\n                    builder.append_value(-1); // Spark behavior: return -1 for null\n                } else {\n                    let list_len = list_array.value(i).len() as i32;\n                    builder.append_value(list_len);\n                }\n            }\n        }\n        DataType::FixedSizeList(_, size) => {\n            let fixed_list_array = array\n                .as_any()\n                .downcast_ref::<arrow::array::FixedSizeListArray>()\n                .ok_or_else(|| {\n                    DataFusionError::Internal(\"Expected FixedSizeListArray\".to_string())\n                })?;\n\n            for i in 0..fixed_list_array.len() {\n                if fixed_list_array.is_null(i) {\n                    builder.append_value(-1); // Spark behavior: return -1 for null\n                } else {\n                    builder.append_value(*size);\n                }\n            }\n        }\n        DataType::Map(_, _) => {\n            let map_array = array\n                .as_any()\n                .downcast_ref::<arrow::array::MapArray>()\n                .ok_or_else(|| DataFusionError::Internal(\"Expected MapArray\".to_string()))?;\n\n            for i in 0..map_array.len() {\n                if map_array.is_null(i) {\n                    builder.append_value(-1); // Spark behavior: return -1 for null\n                } else {\n                    let map_len = map_array.value_length(i);\n                    builder.append_value(map_len);\n                }\n            }\n        }\n        _ => {\n            return exec_err!(\n                \"size function only supports arrays and maps, got: {:?}\",\n                array.data_type()\n            );\n        }\n    }\n\n    Ok(Arc::new(builder.finish()))\n}\n\nfn spark_size_scalar(scalar: &ScalarValue) -> Result<ScalarValue, DataFusionError> {\n    match scalar {\n        ScalarValue::List(array) => {\n            // ScalarValue::List contains a ListArray with exactly one row.\n            // We need the length of that row's contents, not the row count.\n            if array.is_null(0) {\n                Ok(ScalarValue::Int32(Some(-1))) // Spark behavior: return -1 for null\n            } else {\n                let len = array.value(0).len() as i32;\n                Ok(ScalarValue::Int32(Some(len)))\n            }\n        }\n        ScalarValue::LargeList(array) => {\n            if array.is_null(0) {\n                Ok(ScalarValue::Int32(Some(-1)))\n            } else {\n                let len = array.value(0).len() as i32;\n                Ok(ScalarValue::Int32(Some(len)))\n            }\n        }\n        ScalarValue::FixedSizeList(array) => {\n            if array.is_null(0) {\n                Ok(ScalarValue::Int32(Some(-1)))\n            } else {\n                
let len = array.value(0).len() as i32;\n                Ok(ScalarValue::Int32(Some(len)))\n            }\n        }\n        ScalarValue::Null => {\n            Ok(ScalarValue::Int32(Some(-1))) // Spark behavior: return -1 for null\n        }\n        _ => {\n            exec_err!(\n                \"size function only supports arrays and maps, got: {:?}\",\n                scalar\n            )\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{Int32Array, ListArray, NullBufferBuilder};\n    use arrow::datatypes::{DataType, Field};\n    use std::sync::Arc;\n\n    #[test]\n    fn test_spark_size_array() {\n        // Create test data: [[1, 2, 3], [4, 5], null, []]\n        let value_data = Int32Array::from(vec![1, 2, 3, 4, 5]);\n        let value_offsets = arrow::buffer::OffsetBuffer::new(vec![0, 3, 5, 5, 5].into());\n        let field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n\n        let mut null_buffer = NullBufferBuilder::new(4);\n        null_buffer.append(true); // [1, 2, 3] - not null\n        null_buffer.append(true); // [4, 5] - not null\n        null_buffer.append(false); // null\n        null_buffer.append(true); // [] - not null but empty\n\n        let list_array = ListArray::try_new(\n            field,\n            value_offsets,\n            Arc::new(value_data),\n            null_buffer.finish(),\n        )\n        .unwrap();\n\n        let array_ref: ArrayRef = Arc::new(list_array);\n        let result = spark_size_array(&array_ref).unwrap();\n        let result = result.as_any().downcast_ref::<Int32Array>().unwrap();\n\n        // Expected: [3, 2, -1, 0]\n        assert_eq!(result.value(0), 3); // [1, 2, 3] has 3 elements\n        assert_eq!(result.value(1), 2); // [4, 5] has 2 elements\n        assert_eq!(result.value(2), -1); // null returns -1\n        assert_eq!(result.value(3), 0); // [] has 0 elements\n    }\n\n    #[test]\n    fn test_spark_size_scalar() {\n        // Test non-null list with 3 elements\n        let values = Int32Array::from(vec![1, 2, 3]);\n        let field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n        let offsets = arrow::buffer::OffsetBuffer::new(vec![0, 3].into());\n        let list_array = ListArray::try_new(field, offsets, Arc::new(values), None).unwrap();\n        let scalar = ScalarValue::List(Arc::new(list_array));\n        let result = spark_size_scalar(&scalar).unwrap();\n        assert_eq!(result, ScalarValue::Int32(Some(3))); // The array [1,2,3] has 3 elements\n\n        // Test empty list\n        let empty_values = Int32Array::from(vec![] as Vec<i32>);\n        let field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n        let offsets = arrow::buffer::OffsetBuffer::new(vec![0, 0].into());\n        let empty_list_array =\n            ListArray::try_new(field, offsets, Arc::new(empty_values), None).unwrap();\n        let scalar = ScalarValue::List(Arc::new(empty_list_array));\n        let result = spark_size_scalar(&scalar).unwrap();\n        assert_eq!(result, ScalarValue::Int32(Some(0))); // Empty array has 0 elements\n\n        // Test null handling\n        let scalar = ScalarValue::Null;\n        let result = spark_size_scalar(&scalar).unwrap();\n        assert_eq!(result, ScalarValue::Int32(Some(-1)));\n    }\n\n    // TODO: Add map array test once Arrow MapArray API constraints are resolved\n    // Currently MapArray doesn't allow nulls in entries which makes testing complex\n    // The core size() implementation supports maps 
correctly\n    #[ignore]\n    #[test]\n    fn test_spark_size_map_array() {\n        use arrow::array::{MapArray, StringArray};\n\n        // Create a simpler test with maps:\n        // [{\"key1\": \"value1\", \"key2\": \"value2\"}, {\"key3\": \"value3\"}, {}, null]\n\n        // Create keys array for all entries (no nulls)\n        let keys = StringArray::from(vec![\"key1\", \"key2\", \"key3\"]);\n\n        // Create values array for all entries (no nulls)\n        let values = StringArray::from(vec![\"value1\", \"value2\", \"value3\"]);\n\n        // Create entry offsets: [0, 2, 3, 3] representing:\n        // - Map 1: entries 0-1 (2 key-value pairs)\n        // - Map 2: entries 2-2 (1 key-value pair)\n        // - Map 3: entries 3-2 (0 key-value pairs, empty map)\n        // - Map 4: null (handled by null buffer)\n        let entry_offsets = arrow::buffer::OffsetBuffer::new(vec![0, 2, 3, 3, 3].into());\n\n        let key_field = Arc::new(Field::new(\"key\", DataType::Utf8, false));\n        let value_field = Arc::new(Field::new(\"value\", DataType::Utf8, false)); // Make values non-nullable too\n\n        // Create the entries struct array\n        let entries = arrow::array::StructArray::new(\n            arrow::datatypes::Fields::from(vec![key_field, value_field]),\n            vec![Arc::new(keys), Arc::new(values)],\n            None, // No nulls in the entries struct array itself\n        );\n\n        // Create null buffer for the map array (fourth map is null)\n        let mut null_buffer = NullBufferBuilder::new(4);\n        null_buffer.append(true); // Map with 2 entries - not null\n        null_buffer.append(true); // Map with 1 entry - not null\n        null_buffer.append(true); // Empty map - not null\n        null_buffer.append(false); // null map\n\n        let map_data_type = DataType::Map(\n            Arc::new(Field::new(\n                \"entries\",\n                DataType::Struct(arrow::datatypes::Fields::from(vec![\n                    Field::new(\"key\", DataType::Utf8, false),\n                    Field::new(\"value\", DataType::Utf8, false), // Make values non-nullable too\n                ])),\n                false,\n            )),\n            false, // keys are not sorted\n        );\n\n        let map_field = Arc::new(Field::new(\"map\", map_data_type, true));\n\n        let map_array = MapArray::new(\n            map_field,\n            entry_offsets,\n            entries,\n            null_buffer.finish(),\n            false, // keys are not sorted\n        );\n\n        let array_ref: ArrayRef = Arc::new(map_array);\n        let result = spark_size_array(&array_ref).unwrap();\n        let result = result.as_any().downcast_ref::<Int32Array>().unwrap();\n\n        // Expected: [2, 1, 0, -1]\n        assert_eq!(result.value(0), 2); // Map with 2 key-value pairs\n        assert_eq!(result.value(1), 1); // Map with 1 key-value pair\n        assert_eq!(result.value(2), 0); // empty map has 0 pairs\n        assert_eq!(result.value(3), -1); // null map returns -1\n    }\n\n    #[test]\n    fn test_spark_size_fixed_size_list_array() {\n        use arrow::array::FixedSizeListArray;\n\n        // Create test data: fixed-size arrays of size 3\n        // [[1, 2, 3], [4, 5, 6], null]\n        let values = Int32Array::from(vec![1, 2, 3, 4, 5, 6, 0, 0, 0]); // Last 3 values are for the null entry\n        let list_size = 3;\n\n        let mut null_buffer = NullBufferBuilder::new(3);\n        null_buffer.append(true); // [1, 2, 3] - not null\n        
null_buffer.append(true); // [4, 5, 6] - not null\n        null_buffer.append(false); // null\n\n        let list_field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n\n        let fixed_list_array = FixedSizeListArray::new(\n            list_field,\n            list_size,\n            Arc::new(values),\n            null_buffer.finish(),\n        );\n\n        let array_ref: ArrayRef = Arc::new(fixed_list_array);\n        let result = spark_size_array(&array_ref).unwrap();\n        let result = result.as_any().downcast_ref::<Int32Array>().unwrap();\n\n        // Expected: [3, 3, -1]\n        assert_eq!(result.value(0), 3); // Fixed-size list always has size 3\n        assert_eq!(result.value(1), 3); // Fixed-size list always has size 3\n        assert_eq!(result.value(2), -1); // null returns -1\n    }\n\n    #[test]\n    fn test_spark_size_large_list_array() {\n        use arrow::array::LargeListArray;\n\n        // Create test data: [[1, 2, 3, 4], [5], null, []]\n        let value_data = Int32Array::from(vec![1, 2, 3, 4, 5]);\n        let value_offsets = arrow::buffer::OffsetBuffer::new(vec![0i64, 4, 5, 5, 5].into());\n        let field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n\n        let mut null_buffer = NullBufferBuilder::new(4);\n        null_buffer.append(true); // [1, 2, 3, 4] - not null\n        null_buffer.append(true); // [5] - not null\n        null_buffer.append(false); // null\n        null_buffer.append(true); // [] - not null but empty\n\n        let large_list_array = LargeListArray::try_new(\n            field,\n            value_offsets,\n            Arc::new(value_data),\n            null_buffer.finish(),\n        )\n        .unwrap();\n\n        let array_ref: ArrayRef = Arc::new(large_list_array);\n        let result = spark_size_array(&array_ref).unwrap();\n        let result = result.as_any().downcast_ref::<Int32Array>().unwrap();\n\n        // Expected: [4, 1, -1, 0]\n        assert_eq!(result.value(0), 4); // [1, 2, 3, 4] has 4 elements\n        assert_eq!(result.value(1), 1); // [5] has 1 element\n        assert_eq!(result.value(2), -1); // null returns -1\n        assert_eq!(result.value(3), 0); // [] has 0 elements\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/bloom_filter/bit.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n/// Similar to the `read_num_bytes` but read nums from bytes in big-endian order\n/// This is used to read bytes from Java's OutputStream which writes bytes in big-endian\n/// Reads `$size` of bytes from `$src`, and reinterprets them as type `$ty`, in\n/// big-endian order.\n/// This is copied and modified datafusion_comet::common::bit.\nmacro_rules! read_num_be_bytes {\n    ($ty:ty, $size:expr, $src:expr) => {{\n        debug_assert!($size <= $src.len());\n        let mut buffer = [0u8; std::mem::size_of::<$ty>()];\n        buffer.as_mut()[..$size].copy_from_slice(&$src[..$size]);\n        <$ty>::from_be_bytes(buffer)\n    }};\n}\n"
  },
  {
    "path": "native/spark-expr/src/bloom_filter/bloom_filter_agg.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::datatypes::{Field, FieldRef};\nuse datafusion::{arrow::datatypes::DataType, logical_expr::Volatility};\nuse std::{any::Any, sync::Arc};\n\nuse crate::bloom_filter::spark_bloom_filter;\nuse crate::bloom_filter::spark_bloom_filter::SparkBloomFilter;\n\nuse arrow::array::ArrayRef;\nuse arrow::array::BinaryArray;\nuse datafusion::common::{downcast_value, ScalarValue};\nuse datafusion::error::Result;\nuse datafusion::logical_expr::function::{AccumulatorArgs, StateFieldsArgs};\nuse datafusion::logical_expr::{AggregateUDFImpl, Signature};\nuse datafusion::physical_expr::expressions::Literal;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::Accumulator;\n\n#[derive(Debug, Clone, PartialEq, Eq, Hash)]\npub struct BloomFilterAgg {\n    signature: Signature,\n    num_items: i32,\n    num_bits: i32,\n}\n\n#[inline]\nfn extract_i32_from_literal(expr: Arc<dyn PhysicalExpr>) -> i32 {\n    match expr.as_any().downcast_ref::<Literal>().unwrap().value() {\n        ScalarValue::Int64(scalar_value) => scalar_value.unwrap() as i32,\n        _ => {\n            unreachable!()\n        }\n    }\n}\n\nimpl BloomFilterAgg {\n    pub fn new(\n        num_items: Arc<dyn PhysicalExpr>,\n        num_bits: Arc<dyn PhysicalExpr>,\n        data_type: DataType,\n    ) -> Self {\n        assert!(matches!(data_type, DataType::Binary));\n        Self {\n            signature: Signature::uniform(\n                1,\n                vec![\n                    DataType::Int8,\n                    DataType::Int16,\n                    DataType::Int32,\n                    DataType::Int64,\n                    DataType::Utf8,\n                ],\n                Volatility::Immutable,\n            ),\n            num_items: extract_i32_from_literal(num_items),\n            num_bits: extract_i32_from_literal(num_bits),\n        }\n    }\n}\n\nimpl AggregateUDFImpl for BloomFilterAgg {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"bloom_filter_agg\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Binary)\n    }\n\n    fn accumulator(&self, _acc_args: AccumulatorArgs) -> Result<Box<dyn Accumulator>> {\n        Ok(Box::new(SparkBloomFilter::from((\n            spark_bloom_filter::optimal_num_hash_functions(self.num_items, self.num_bits),\n            self.num_bits,\n        ))))\n    }\n\n    fn state_fields(&self, _args: StateFieldsArgs) -> Result<Vec<FieldRef>> {\n        Ok(vec![Arc::new(Field::new(\"bits\", DataType::Binary, false))])\n    }\n\n    fn 
groups_accumulator_supported(&self, _args: AccumulatorArgs) -> bool {\n        false\n    }\n}\n\nimpl Accumulator for SparkBloomFilter {\n    fn update_batch(&mut self, values: &[ArrayRef]) -> Result<()> {\n        if values.is_empty() {\n            return Ok(());\n        }\n        let arr = &values[0];\n        (0..arr.len()).try_for_each(|index| {\n            let v = ScalarValue::try_from_array(arr, index)?;\n\n            match v {\n                ScalarValue::Int8(Some(value)) => {\n                    self.put_long(value as i64);\n                }\n                ScalarValue::Int16(Some(value)) => {\n                    self.put_long(value as i64);\n                }\n                ScalarValue::Int32(Some(value)) => {\n                    self.put_long(value as i64);\n                }\n                ScalarValue::Int64(Some(value)) => {\n                    self.put_long(value);\n                }\n                ScalarValue::Utf8(Some(value)) => {\n                    self.put_binary(value.as_bytes());\n                }\n                _ => {\n                    unreachable!()\n                }\n            }\n            Ok(())\n        })\n    }\n\n    fn evaluate(&mut self) -> Result<ScalarValue> {\n        Ok(ScalarValue::Binary(Some(self.spark_serialization())))\n    }\n\n    fn size(&self) -> usize {\n        std::mem::size_of_val(self)\n    }\n\n    fn state(&mut self) -> Result<Vec<ScalarValue>> {\n        // There might be a more efficient way to do this by transmuting since calling state() on an\n        // Accumulator is considered destructive.\n        let state_sv = ScalarValue::Binary(Some(self.state_as_bytes()));\n        Ok(vec![state_sv])\n    }\n\n    fn merge_batch(&mut self, states: &[ArrayRef]) -> Result<()> {\n        assert_eq!(\n            states.len(),\n            1,\n            \"Expect one element in 'states' but found {}\",\n            states.len()\n        );\n        assert_eq!(states[0].len(), 1);\n        let state_sv = downcast_value!(states[0], BinaryArray);\n        self.merge_filter(state_sv.value_data());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/bloom_filter/bloom_filter_might_contain.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, Int64Array, RecordBatch};\nuse arrow::datatypes::{DataType, Schema};\nuse datafusion::common::{internal_err, Result, ScalarValue};\nuse datafusion::error::DataFusionError;\nuse datafusion::logical_expr::{ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility};\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::ColumnarValue;\nuse std::any::Any;\nuse std::sync::Arc;\n\nuse crate::bloom_filter::spark_bloom_filter::SparkBloomFilter;\n\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct BloomFilterMightContain {\n    signature: Signature,\n    bloom_filter: Option<SparkBloomFilter>,\n}\n\nimpl BloomFilterMightContain {\n    pub fn try_new(bloom_filter_expr: Arc<dyn PhysicalExpr>) -> Result<Self> {\n        // early evaluate the bloom_filter_expr to get the actual bloom filter\n        let bloom_filter = evaluate_bloom_filter(&bloom_filter_expr)?;\n        Ok(Self {\n            bloom_filter,\n            signature: Signature::exact(\n                vec![DataType::Binary, DataType::Int64],\n                Volatility::Immutable,\n            ),\n        })\n    }\n}\n\nfn evaluate_bloom_filter(\n    bloom_filter_expr: &Arc<dyn PhysicalExpr>,\n) -> Result<Option<SparkBloomFilter>> {\n    // bloom_filter_expr must be a literal/scalar subquery expression, so we can evaluate it\n    // with an empty batch with empty schema\n    let batch = RecordBatch::new_empty(Arc::new(Schema::empty()));\n    let bloom_filter_bytes = bloom_filter_expr.evaluate(&batch)?;\n    match bloom_filter_bytes {\n        ColumnarValue::Scalar(ScalarValue::Binary(v)) => {\n            Ok(v.map(|v| SparkBloomFilter::from(v.as_slice())))\n        }\n        _ => internal_err!(\"Bloom filter expression should be evaluated as a scalar binary value\"),\n    }\n}\n\nimpl ScalarUDFImpl for BloomFilterMightContain {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"might_contain\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Boolean)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        execute_might_contain(&self.bloom_filter, &args.args)\n    }\n}\n\nfn execute_might_contain(\n    bloom_filter: &Option<SparkBloomFilter>,\n    args: &[ColumnarValue],\n) -> Result<ColumnarValue> {\n    match &args[0] {\n        ColumnarValue::Scalar(ScalarValue::Int64(optional_value)) => {\n            let result = bloom_filter\n                .as_ref()\n                .and_then(|filter| optional_value.map(|value| 
filter.might_contain_long(value)));\n            Ok(ColumnarValue::Scalar(ScalarValue::Boolean(result)))\n        }\n        ColumnarValue::Array(values_array) => {\n            let values = values_array\n                .as_any()\n                .downcast_ref::<Int64Array>()\n                .ok_or_else(|| {\n                    DataFusionError::Execution(\n                        \"Expected values Array to be an Int64Array\".to_string(),\n                    )\n                })?;\n\n            bloom_filter\n                .as_ref()\n                .map(|filter| {\n                    Ok(ColumnarValue::Array(Arc::new(\n                        filter.might_contain_longs(values),\n                    )))\n                })\n                .unwrap_or_else(|| Ok(ColumnarValue::Scalar(ScalarValue::Boolean(None))))\n        }\n        _ => internal_err!(\"Expected Int64Array or Int64 Scalar as arguments\"),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::BooleanArray;\n\n    fn assert_result_eq<T: Into<Option<bool>>>(result: ColumnarValue, expected: Vec<T>) {\n        let array = result.to_array(1).unwrap();\n        let booleans = array.as_any().downcast_ref::<BooleanArray>().unwrap();\n        let expected_optionals: Vec<Option<bool>> =\n            expected.into_iter().map(|b| b.into()).collect();\n        assert_eq!(booleans, &BooleanArray::from(expected_optionals));\n    }\n\n    fn assert_all_null(result: ColumnarValue) {\n        let array = result.to_array(1).unwrap();\n        let booleans = array.as_any().downcast_ref::<BooleanArray>().unwrap();\n        assert_eq!(booleans.null_count(), booleans.len());\n    }\n\n    #[test]\n    fn test_execute_scalar_contained() {\n        let mut filter = SparkBloomFilter::from((4, 64));\n        filter.put_long(123);\n        filter.put_long(456);\n        filter.put_long(789);\n\n        let args = [ColumnarValue::Scalar(ScalarValue::Int64(Some(123)))];\n\n        let result = execute_might_contain(&Some(filter), &args).unwrap();\n        assert_result_eq(result, vec![true]);\n    }\n\n    #[test]\n    fn test_execute_scalar_not_contained() {\n        let mut filter = SparkBloomFilter::from((4, 64));\n        filter.put_long(123);\n        filter.put_long(456);\n        filter.put_long(789);\n\n        let args = [ColumnarValue::Scalar(ScalarValue::Int64(Some(999)))];\n\n        let result = execute_might_contain(&Some(filter), &args).unwrap();\n        assert_result_eq(result, vec![false]);\n    }\n\n    #[test]\n    fn test_execute_scalar_null() {\n        let mut filter = SparkBloomFilter::from((4, 64));\n        filter.put_long(123);\n        filter.put_long(456);\n        filter.put_long(789);\n\n        let args = [ColumnarValue::Scalar(ScalarValue::Int64(None))];\n\n        let result = execute_might_contain(&Some(filter), &args).unwrap();\n        assert_all_null(result);\n    }\n\n    #[test]\n    fn test_execute_array() {\n        let mut filter = SparkBloomFilter::from((4, 64));\n        filter.put_long(123);\n        filter.put_long(456);\n        filter.put_long(789);\n\n        let values = Int64Array::from(vec![123, 999, 789]);\n\n        let args = [ColumnarValue::Array(Arc::new(values))];\n\n        let result = execute_might_contain(&Some(filter), &args).unwrap();\n        assert_result_eq(result, vec![true, false, true]);\n    }\n\n    #[test]\n    fn test_execute_array_partially_null() {\n        let mut filter = SparkBloomFilter::from((4, 64));\n        filter.put_long(123);\n      
  filter.put_long(456);\n        filter.put_long(789);\n\n        let values = Int64Array::from(vec![Some(123), None, Some(555)]);\n\n        let args = [ColumnarValue::Array(Arc::new(values))];\n\n        let result = execute_might_contain(&Some(filter), &args).unwrap();\n        assert_result_eq(result, vec![Some(true), None, Some(false)]);\n    }\n\n    #[test]\n    fn test_execute_scalar_missing_filter() {\n        let args = [ColumnarValue::Scalar(ScalarValue::Int64(Some(123)))];\n\n        let result = execute_might_contain(&None, &args).unwrap();\n        assert_all_null(result);\n    }\n\n    #[test]\n    fn test_execute_array_missing_filter() {\n        let values = Int64Array::from(vec![123, 999, 789]);\n\n        let args = [ColumnarValue::Array(Arc::new(values))];\n\n        let result = execute_might_contain(&None, &args).unwrap();\n        assert_all_null(result);\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/bloom_filter/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n#[macro_use]\nmod bit;\n\nmod spark_bit_array;\nmod spark_bloom_filter;\n\npub mod bloom_filter_agg;\npub use bloom_filter_might_contain::BloomFilterMightContain;\n\npub mod bloom_filter_might_contain;\npub use bloom_filter_agg::BloomFilterAgg;\n"
  },
  {
    "path": "native/spark-expr/src/bloom_filter/spark_bit_array.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::datatypes::ToByteSlice;\nuse std::iter::zip;\n\n/// A simple bit array implementation that simulates the behavior of Spark's BitArray which is\n/// used in the BloomFilter implementation. Some methods are not implemented as they are not\n/// required for the current use case.\n#[derive(Debug, Hash, PartialEq, Eq)]\npub struct SparkBitArray {\n    data: Vec<u64>,\n    bit_count: usize,\n}\n\nimpl SparkBitArray {\n    pub fn new(buf: Vec<u64>) -> Self {\n        let num_bits = buf.iter().map(|x| x.count_ones() as usize).sum();\n        Self {\n            data: buf,\n            bit_count: num_bits,\n        }\n    }\n\n    pub fn set(&mut self, index: usize) -> bool {\n        if !self.get(index) {\n            // see the get method for the explanation of the shift operators\n            self.data[index >> 6] |= 1u64 << (index & 0x3f);\n            self.bit_count += 1;\n            true\n        } else {\n            false\n        }\n    }\n\n    pub fn get(&self, index: usize) -> bool {\n        // Java version: (data[(int) (index >> 6)] & (1L << (index))) != 0\n        // Rust and Java have different semantics for the shift operators. 
Java's shift operators\n        // explicitly mask the right-hand operand with 0x3f [1], while Rust's shift operators do\n        // not; they panic with an \"attempt to shift left with overflow\" error for a large right-hand operand.\n        // To fix this, we mask the right-hand operand with 0x3f on the Rust side.\n        // [1]: https://docs.oracle.com/javase/specs/jls/se7/html/jls-15.html#jls-15.19\n        (self.data[index >> 6] & (1u64 << (index & 0x3f))) != 0\n    }\n\n    pub fn bit_size(&self) -> u64 {\n        self.word_size() as u64 * 64\n    }\n\n    pub fn byte_size(&self) -> usize {\n        self.word_size() * 8\n    }\n\n    pub fn word_size(&self) -> usize {\n        self.data.len()\n    }\n\n    #[allow(dead_code)] // this is only called from tests\n    pub fn cardinality(&self) -> usize {\n        self.bit_count\n    }\n\n    pub fn to_bytes(&self) -> Vec<u8> {\n        Vec::from(self.data.to_byte_slice())\n    }\n\n    pub fn data(&self) -> Vec<u64> {\n        self.data.clone()\n    }\n\n    // Combines SparkBitArrays; other is a &[u8] because we expect it to come from an\n    // Arrow ScalarValue::Binary, which is a byte vector underneath rather than a word vector.\n    pub fn merge_bits(&mut self, other: &[u8]) {\n        assert_eq!(self.byte_size(), other.len());\n        let mut bit_count: usize = 0;\n        // For each word, merge the bits into self, and accumulate a new bit_count.\n        for i in zip(\n            self.data.iter_mut(),\n            other\n                .chunks(8)\n                .map(|chunk| u64::from_ne_bytes(chunk.try_into().unwrap())),\n        ) {\n            *i.0 |= i.1;\n            bit_count += i.0.count_ones() as usize;\n        }\n        self.bit_count = bit_count;\n    }\n}\n\npub fn num_words(num_bits: usize) -> usize {\n    num_bits.div_ceil(64)\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n\n    #[test]\n    fn test_spark_bit_array() {\n        let buf = vec![0u64; 4];\n        let mut array = SparkBitArray::new(buf);\n        assert_eq!(array.bit_size(), 256);\n        assert_eq!(array.cardinality(), 0);\n\n        assert!(!array.get(0));\n        assert!(!array.get(1));\n        assert!(!array.get(63));\n        assert!(!array.get(64));\n        assert!(!array.get(65));\n        assert!(!array.get(127));\n        assert!(!array.get(128));\n        assert!(!array.get(129));\n\n        assert!(array.set(0));\n        assert!(array.set(1));\n        assert!(array.set(63));\n        assert!(array.set(64));\n        assert!(array.set(65));\n        assert!(array.set(127));\n        assert!(array.set(128));\n        assert!(array.set(129));\n\n        assert_eq!(array.cardinality(), 8);\n        assert_eq!(array.bit_size(), 256);\n\n        assert!(array.get(0));\n        // already set so should return false\n        assert!(!array.set(0));\n\n        // not set values should return false for get\n        assert!(!array.get(2));\n        assert!(!array.get(62));\n    }\n\n    #[test]\n    fn test_spark_bit_with_non_empty_buffer() {\n        let buf = vec![8u64; 4];\n        let mut array = SparkBitArray::new(buf);\n        assert_eq!(array.bit_size(), 256);\n        assert_eq!(array.cardinality(), 4);\n\n        // already set bits should return true\n        assert!(array.get(3));\n        assert!(array.get(67));\n        assert!(array.get(131));\n        assert!(array.get(195));\n\n        // other unset bits should return false\n        assert!(!array.get(0));\n        assert!(!array.get(1));\n\n        // set bits\n 
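       // set() returns true because these bits were previously unset\n 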
       assert!(array.set(0));\n        assert!(array.set(1));\n\n        // check cardinality\n        assert_eq!(array.cardinality(), 6);\n    }\n\n    #[test]\n    fn test_spark_bit_with_empty_buffer() {\n        let buf = vec![0u64; 4];\n        let array = SparkBitArray::new(buf);\n\n        assert_eq!(array.bit_size(), 256);\n        assert_eq!(array.cardinality(), 0);\n\n        for n in 0..256 {\n            assert!(!array.get(n));\n        }\n    }\n\n    #[test]\n    fn test_spark_bit_with_full_buffer() {\n        let buf = vec![u64::MAX; 4];\n        let array = SparkBitArray::new(buf);\n\n        assert_eq!(array.bit_size(), 256);\n        assert_eq!(array.cardinality(), 256);\n\n        for n in 0..256 {\n            assert!(array.get(n));\n        }\n    }\n\n    #[test]\n    fn test_spark_bit_merge() {\n        let buf1 = vec![0u64; 4];\n        let mut array1 = SparkBitArray::new(buf1);\n        let buf2 = vec![0u64; 4];\n        let mut array2 = SparkBitArray::new(buf2);\n\n        let primes = [\n            2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37, 41, 43, 47, 53, 59, 61, 67, 71, 73, 79, 83,\n            89, 97, 101, 103, 107, 109, 113, 127, 131, 137, 139, 149, 151, 157, 163, 167, 173, 179,\n            181, 191, 193, 197, 199, 211, 223, 227, 229, 233, 239, 241, 251,\n        ];\n        let fibs = [1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233];\n\n        for n in fibs {\n            array1.set(n);\n        }\n\n        for n in primes {\n            array2.set(n);\n        }\n\n        assert_eq!(array1.cardinality(), fibs.len());\n        assert_eq!(array2.cardinality(), primes.len());\n\n        array1.merge_bits(array2.to_bytes().as_slice());\n\n        for n in fibs {\n            assert!(array1.get(n));\n        }\n\n        for n in primes {\n            assert!(array1.get(n));\n        }\n        assert_eq!(array1.cardinality(), 60);\n    }\n\n    #[test]\n    fn test_num_words() {\n        let cases = [\n            (0, 0),\n            (32, 1),\n            (63, 1),\n            (64, 1),\n            (96, 2),\n            (127, 2),\n            (128, 2),\n            (129, 3),\n            (639, 10),\n            (640, 10),\n            (653, 11),\n        ];\n\n        for (num_bits, expected) in cases.into_iter() {\n            let actual = num_words(num_bits);\n            assert!(\n                actual == expected,\n                \"num_words({num_bits}) = {actual} but expected to equal {expected}\"\n            );\n        }\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/bloom_filter/spark_bloom_filter.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{ArrowNativeTypeOp, BooleanArray, Int64Array};\nuse arrow::datatypes::ToByteSlice;\nuse std::cmp;\n\nuse crate::bloom_filter::spark_bit_array;\nuse crate::bloom_filter::spark_bit_array::SparkBitArray;\nuse crate::hash_funcs::murmur3::spark_compatible_murmur3_hash;\n\nconst SPARK_BLOOM_FILTER_VERSION_1: i32 = 1;\n\n/// A Bloom filter implementation that simulates the behavior of Spark's BloomFilter.\n/// It's not a complete implementation of Spark's BloomFilter, but just add the minimum\n/// methods to support mightContainsLong in the native side.\n#[derive(Debug, Hash, PartialEq, Eq)]\npub struct SparkBloomFilter {\n    bits: SparkBitArray,\n    num_hash_functions: u32,\n}\n\npub fn optimal_num_hash_functions(expected_items: i32, num_bits: i32) -> i32 {\n    cmp::max(\n        1,\n        ((num_bits as f64 / expected_items as f64) * 2.0_f64.ln()).round() as i32,\n    )\n}\n\nimpl From<(i32, i32)> for SparkBloomFilter {\n    /// Creates an empty SparkBloomFilter given number of hash functions and bits.\n    fn from((num_hash_functions, num_bits): (i32, i32)) -> Self {\n        let num_words = spark_bit_array::num_words(num_bits as usize);\n        let bits = vec![0u64; num_words];\n        Self {\n            bits: SparkBitArray::new(bits),\n            num_hash_functions: num_hash_functions as u32,\n        }\n    }\n}\n\nimpl From<&[u8]> for SparkBloomFilter {\n    /// Creates a SparkBloomFilter from a serialized byte array conforming to Spark's BloomFilter\n    /// binary format version 1.\n    fn from(buf: &[u8]) -> Self {\n        let mut offset = 0;\n        let version = read_num_be_bytes!(i32, 4, buf[offset..]);\n        offset += 4;\n        assert_eq!(\n            version, SPARK_BLOOM_FILTER_VERSION_1,\n            \"Unsupported BloomFilter version: {version}, expecting version: {SPARK_BLOOM_FILTER_VERSION_1}\"\n        );\n        let num_hash_functions = read_num_be_bytes!(i32, 4, buf[offset..]);\n        offset += 4;\n        let num_words = read_num_be_bytes!(i32, 4, buf[offset..]);\n        offset += 4;\n        let mut bits = vec![0u64; num_words as usize];\n        for i in 0..num_words {\n            bits[i as usize] = read_num_be_bytes!(i64, 8, buf[offset..]) as u64;\n            offset += 8;\n        }\n        Self {\n            bits: SparkBitArray::new(bits),\n            num_hash_functions: num_hash_functions as u32,\n        }\n    }\n}\n\nimpl SparkBloomFilter {\n    /// Serializes a SparkBloomFilter to a byte array conforming to Spark's BloomFilter\n    /// binary format version 1.\n    pub fn spark_serialization(&self) -> Vec<u8> {\n        // There might be a more efficient way to do this, even with all the endianness 
stuff.\n        let mut spark_bloom_filter: Vec<u8> = 1_u32.to_be_bytes().to_vec();\n        spark_bloom_filter.append(&mut self.num_hash_functions.to_be_bytes().to_vec());\n        spark_bloom_filter.append(&mut (self.bits.word_size() as u32).to_be_bytes().to_vec());\n        let mut filter_state: Vec<u64> = self.bits.data();\n        for i in filter_state.iter_mut() {\n            *i = i.to_be();\n        }\n        // Does it make sense to do a std::mem::take of filter_state here? Unclear to me if a deep\n        // copy of filter_state as a Vec<u64> to a Vec<u8> is happening here.\n        spark_bloom_filter.append(&mut Vec::from(filter_state.to_byte_slice()));\n        spark_bloom_filter\n    }\n\n    pub fn put_long(&mut self, item: i64) -> bool {\n        // Here we first hash the input long element into 2 int hash values, h1 and h2, then produce\n        // n hash values by `h1 + i * h2` with 1 <= i <= num_hash_functions.\n        let h1 = spark_compatible_murmur3_hash(item.to_le_bytes(), 0);\n        let h2 = spark_compatible_murmur3_hash(item.to_le_bytes(), h1);\n        let bit_size = self.bits.bit_size() as i32;\n        let mut bit_changed = false;\n        for i in 1..=self.num_hash_functions {\n            let mut combined_hash = (h1 as i32).add_wrapping((i as i32).mul_wrapping(h2 as i32));\n            if combined_hash < 0 {\n                combined_hash = !combined_hash;\n            }\n            bit_changed |= self.bits.set((combined_hash % bit_size) as usize)\n        }\n        bit_changed\n    }\n\n    pub fn put_binary(&mut self, item: &[u8]) -> bool {\n        // Here we first hash the input binary element into 2 int hash values, h1 and h2, then produce\n        // n hash values by `h1 + i * h2` with 1 <= i <= num_hash_functions.\n        let h1 = spark_compatible_murmur3_hash(item, 0);\n        let h2 = spark_compatible_murmur3_hash(item, h1);\n        let bit_size = self.bits.bit_size() as i32;\n        let mut bit_changed = false;\n        for i in 1..=self.num_hash_functions {\n            let mut combined_hash = (h1 as i32).add_wrapping((i as i32).mul_wrapping(h2 as i32));\n            if combined_hash < 0 {\n                combined_hash = !combined_hash;\n            }\n            bit_changed |= self.bits.set((combined_hash % bit_size) as usize)\n        }\n        bit_changed\n    }\n\n    pub fn might_contain_long(&self, item: i64) -> bool {\n        let h1 = spark_compatible_murmur3_hash(item.to_le_bytes(), 0);\n        let h2 = spark_compatible_murmur3_hash(item.to_le_bytes(), h1);\n        let bit_size = self.bits.bit_size() as i32;\n        for i in 1..=self.num_hash_functions {\n            let mut combined_hash = (h1 as i32).add_wrapping((i as i32).mul_wrapping(h2 as i32));\n            if combined_hash < 0 {\n                combined_hash = !combined_hash;\n            }\n            if !self.bits.get((combined_hash % bit_size) as usize) {\n                return false;\n            }\n        }\n        true\n    }\n\n    pub fn might_contain_longs(&self, items: &Int64Array) -> BooleanArray {\n        items\n            .iter()\n            .map(|v| v.map(|x| self.might_contain_long(x)))\n            .collect()\n    }\n\n    pub fn state_as_bytes(&self) -> Vec<u8> {\n        self.bits.to_bytes()\n    }\n\n    pub fn merge_filter(&mut self, other: &[u8]) {\n        assert_eq!(\n            other.len(),\n            self.bits.byte_size(),\n            \"Cannot merge SparkBloomFilters with different lengths.\"\n        );\n        
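// Byte length was validated above, so the word-wise OR in merge_bits (which also\n        // recomputes bit_count) is safe here.\n        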
self.bits.merge_bits(other);\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/comet_scalar_funcs.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::hash_funcs::*;\nuse crate::math_funcs::abs::abs;\nuse crate::math_funcs::checked_arithmetic::{checked_add, checked_div, checked_mul, checked_sub};\nuse crate::math_funcs::log::spark_log;\nuse crate::math_funcs::modulo_expr::spark_modulo;\nuse crate::{\n    spark_ceil, spark_decimal_div, spark_decimal_integral_div, spark_floor, spark_isnan,\n    spark_lpad, spark_make_decimal, spark_read_side_padding, spark_round, spark_rpad, spark_unhex,\n    spark_unscaled_value, EvalMode, SparkArrayCompact, SparkContains, SparkDateDiff,\n    SparkDateFromUnixDate, SparkDateTrunc, SparkMakeDate, SparkSizeFunc,\n};\nuse arrow::datatypes::DataType;\nuse datafusion::common::{DataFusionError, Result as DataFusionResult};\nuse datafusion::execution::FunctionRegistry;\nuse datafusion::logical_expr::{\n    ScalarFunctionArgs, ScalarFunctionImplementation, ScalarUDF, ScalarUDFImpl, Signature,\n    Volatility,\n};\nuse datafusion::physical_plan::ColumnarValue;\nuse std::any::Any;\nuse std::fmt::Debug;\nuse std::sync::Arc;\n\nmacro_rules! 
make_comet_scalar_udf {\n    ($name:expr, $func:ident, $data_type:ident) => {{\n        let scalar_func = CometScalarFunction::new(\n            $name.to_string(),\n            Signature::variadic_any(Volatility::Immutable),\n            $data_type.clone(),\n            Arc::new(move |args| $func(args, &$data_type)),\n        );\n        Ok(Arc::new(ScalarUDF::new_from_impl(scalar_func)))\n    }};\n    ($name:expr, $func:expr, without $data_type:ident) => {{\n        let scalar_func = CometScalarFunction::new(\n            $name.to_string(),\n            Signature::variadic_any(Volatility::Immutable),\n            $data_type,\n            $func,\n        );\n        Ok(Arc::new(ScalarUDF::new_from_impl(scalar_func)))\n    }};\n    ($name:expr, $func:ident, without $data_type:ident, $fail_on_error:ident) => {{\n        let scalar_func = CometScalarFunction::new(\n            $name.to_string(),\n            Signature::variadic_any(Volatility::Immutable),\n            $data_type,\n            Arc::new(move |args| $func(args, $fail_on_error)),\n        );\n        Ok(Arc::new(ScalarUDF::new_from_impl(scalar_func)))\n    }};\n    ($name:expr, $func:ident, $data_type:ident, $eval_mode:ident) => {{\n        let scalar_func = CometScalarFunction::new(\n            $name.to_string(),\n            Signature::variadic_any(Volatility::Immutable),\n            $data_type.clone(),\n            Arc::new(move |args| $func(args, &$data_type, $eval_mode)),\n        );\n        Ok(Arc::new(ScalarUDF::new_from_impl(scalar_func)))\n    }};\n}\n\n/// Create a physical scalar function.\npub fn create_comet_physical_fun(\n    fun_name: &str,\n    data_type: DataType,\n    registry: &dyn FunctionRegistry,\n    fail_on_error: Option<bool>,\n) -> Result<Arc<ScalarUDF>, DataFusionError> {\n    create_comet_physical_fun_with_eval_mode(\n        fun_name,\n        data_type,\n        registry,\n        fail_on_error,\n        EvalMode::Legacy,\n    )\n}\n\n/// Create a physical scalar function with eval mode. 
The goal is to deprecate the above function once all operators have ANSI support.\npub fn create_comet_physical_fun_with_eval_mode(\n    fun_name: &str,\n    data_type: DataType,\n    registry: &dyn FunctionRegistry,\n    fail_on_error: Option<bool>,\n    eval_mode: EvalMode,\n) -> Result<Arc<ScalarUDF>, DataFusionError> {\n    let fail_on_error = fail_on_error.unwrap_or(false);\n    match fun_name {\n        \"ceil\" => {\n            make_comet_scalar_udf!(\"ceil\", spark_ceil, data_type)\n        }\n        \"floor\" => {\n            make_comet_scalar_udf!(\"floor\", spark_floor, data_type)\n        }\n        \"read_side_padding\" => {\n            let func = Arc::new(spark_read_side_padding);\n            make_comet_scalar_udf!(\"read_side_padding\", func, without data_type)\n        }\n        \"rpad\" => {\n            let func = Arc::new(spark_rpad);\n            make_comet_scalar_udf!(\"rpad\", func, without data_type)\n        }\n        \"lpad\" => {\n            let func = Arc::new(spark_lpad);\n            make_comet_scalar_udf!(\"lpad\", func, without data_type)\n        }\n        \"round\" => {\n            make_comet_scalar_udf!(\"round\", spark_round, data_type, fail_on_error)\n        }\n        \"unscaled_value\" => {\n            let func = Arc::new(spark_unscaled_value);\n            make_comet_scalar_udf!(\"unscaled_value\", func, without data_type)\n        }\n        \"make_decimal\" => {\n            make_comet_scalar_udf!(\"make_decimal\", spark_make_decimal, data_type)\n        }\n        \"unhex\" => {\n            let func = Arc::new(spark_unhex);\n            make_comet_scalar_udf!(\"unhex\", func, without data_type)\n        }\n        \"decimal_div\" => {\n            make_comet_scalar_udf!(\"decimal_div\", spark_decimal_div, data_type, eval_mode)\n        }\n        \"decimal_integral_div\" => {\n            make_comet_scalar_udf!(\n                \"decimal_integral_div\",\n                spark_decimal_integral_div,\n                data_type,\n                eval_mode\n            )\n        }\n        \"checked_add\" => {\n            make_comet_scalar_udf!(\"checked_add\", checked_add, data_type, eval_mode)\n        }\n        \"checked_sub\" => {\n            make_comet_scalar_udf!(\"checked_sub\", checked_sub, data_type, eval_mode)\n        }\n        \"checked_mul\" => {\n            make_comet_scalar_udf!(\"checked_mul\", checked_mul, data_type, eval_mode)\n        }\n        \"checked_div\" => {\n            make_comet_scalar_udf!(\"checked_div\", checked_div, data_type, eval_mode)\n        }\n        \"murmur3_hash\" => {\n            let func = Arc::new(spark_murmur3_hash);\n            make_comet_scalar_udf!(\"murmur3_hash\", func, without data_type)\n        }\n        \"xxhash64\" => {\n            let func = Arc::new(spark_xxhash64);\n            make_comet_scalar_udf!(\"xxhash64\", func, without data_type)\n        }\n        \"isnan\" => {\n            let func = Arc::new(spark_isnan);\n            make_comet_scalar_udf!(\"isnan\", func, without data_type)\n        }\n        \"spark_modulo\" => {\n            let func = Arc::new(spark_modulo);\n            make_comet_scalar_udf!(\"spark_modulo\", func, without data_type, fail_on_error)\n        }\n        \"abs\" => {\n            let func = Arc::new(abs);\n            make_comet_scalar_udf!(\"abs\", func, without data_type)\n        }\n        \"spark_log\" => {\n            let func = Arc::new(spark_log);\n            make_comet_scalar_udf!(\"spark_log\", func, without data_type)\n        
}\n        \"split\" => {\n            let func = Arc::new(crate::string_funcs::spark_split);\n            make_comet_scalar_udf!(\"split\", func, without data_type)\n        }\n        \"get_json_object\" => {\n            let func = Arc::new(crate::string_funcs::spark_get_json_object);\n            make_comet_scalar_udf!(\"get_json_object\", func, without data_type)\n        }\n        _ => registry.udf(fun_name).map_err(|e| {\n            DataFusionError::Execution(format!(\n                \"Function {fun_name} not found in the registry: {e}\",\n            ))\n        }),\n    }\n}\n\nfn all_scalar_functions() -> Vec<Arc<ScalarUDF>> {\n    vec![\n        Arc::new(ScalarUDF::new_from_impl(SparkArrayCompact::default())),\n        Arc::new(ScalarUDF::new_from_impl(SparkContains::default())),\n        Arc::new(ScalarUDF::new_from_impl(SparkDateDiff::default())),\n        Arc::new(ScalarUDF::new_from_impl(SparkDateFromUnixDate::default())),\n        Arc::new(ScalarUDF::new_from_impl(SparkDateTrunc::default())),\n        Arc::new(ScalarUDF::new_from_impl(SparkMakeDate::default())),\n        Arc::new(ScalarUDF::new_from_impl(SparkSizeFunc::default())),\n    ]\n}\n\n/// Registers all custom UDFs\npub fn register_all_comet_functions(registry: &mut dyn FunctionRegistry) -> DataFusionResult<()> {\n    // This will override existing UDFs with the same name\n    all_scalar_functions()\n        .into_iter()\n        .try_for_each(|udf| registry.register_udf(udf).map(|_| ()))?;\n\n    Ok(())\n}\n\nstruct CometScalarFunction {\n    name: String,\n    signature: Signature,\n    data_type: DataType,\n    func: ScalarFunctionImplementation,\n}\n\nimpl PartialEq for CometScalarFunction {\n    fn eq(&self, other: &Self) -> bool {\n        self.name == other.name\n            && self.signature == other.signature\n            && self.data_type == other.data_type\n        // Note: we do not test ScalarFunctionImplementation equality, relying on function metadata.\n    }\n}\n\nimpl Eq for CometScalarFunction {}\n\nimpl std::hash::Hash for CometScalarFunction {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.name.hash(state);\n        self.signature.hash(state);\n        self.data_type.hash(state);\n        // Note: we do not hash ScalarFunctionImplementation, relying on function metadata.\n    }\n}\n\nimpl Debug for CometScalarFunction {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        f.debug_struct(\"CometScalarFunction\")\n            .field(\"name\", &self.name)\n            .field(\"signature\", &self.signature)\n            .field(\"data_type\", &self.data_type)\n            .finish()\n    }\n}\n\nimpl CometScalarFunction {\n    fn new(\n        name: String,\n        signature: Signature,\n        data_type: DataType,\n        func: ScalarFunctionImplementation,\n    ) -> Self {\n        Self {\n            name,\n            signature,\n            data_type,\n            func,\n        }\n    }\n}\n\nimpl ScalarUDFImpl for CometScalarFunction {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        self.name.as_str()\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _: &[DataType]) -> DataFusionResult<DataType> {\n        Ok(self.data_type.clone())\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> DataFusionResult<ColumnarValue> {\n        (self.func)(&args.args)\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/conditional_funcs/if_expr.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::{\n    datatypes::{DataType, Schema},\n    record_batch::RecordBatch,\n};\nuse datafusion::common::Result;\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::{expressions::CaseExpr, PhysicalExpr};\nuse std::fmt::{Display, Formatter};\nuse std::hash::Hash;\nuse std::{any::Any, sync::Arc};\n\n/// IfExpr is a wrapper around CaseExpr, because `IF(a, b, c)` is semantically equivalent to\n/// `CASE WHEN a THEN b ELSE c END`.\n#[derive(Debug, Eq)]\npub struct IfExpr {\n    if_expr: Arc<dyn PhysicalExpr>,\n    true_expr: Arc<dyn PhysicalExpr>,\n    false_expr: Arc<dyn PhysicalExpr>,\n    // we delegate to case_expr for evaluation\n    case_expr: Arc<CaseExpr>,\n}\n\nimpl Hash for IfExpr {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.if_expr.hash(state);\n        self.true_expr.hash(state);\n        self.false_expr.hash(state);\n        self.case_expr.hash(state);\n    }\n}\nimpl PartialEq for IfExpr {\n    fn eq(&self, other: &Self) -> bool {\n        self.if_expr.eq(&other.if_expr)\n            && self.true_expr.eq(&other.true_expr)\n            && self.false_expr.eq(&other.false_expr)\n            && self.case_expr.eq(&other.case_expr)\n    }\n}\n\nimpl std::fmt::Display for IfExpr {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        write!(\n            f,\n            \"If [if: {}, true_expr: {}, false_expr: {}]\",\n            self.if_expr, self.true_expr, self.false_expr\n        )\n    }\n}\n\nimpl IfExpr {\n    /// Create a new IF expression\n    pub fn new(\n        if_expr: Arc<dyn PhysicalExpr>,\n        true_expr: Arc<dyn PhysicalExpr>,\n        false_expr: Arc<dyn PhysicalExpr>,\n    ) -> Self {\n        Self {\n            if_expr: Arc::clone(&if_expr),\n            true_expr: Arc::clone(&true_expr),\n            false_expr: Arc::clone(&false_expr),\n            case_expr: Arc::new(\n                CaseExpr::try_new(None, vec![(if_expr, true_expr)], Some(false_expr)).unwrap(),\n            ),\n        }\n    }\n}\n\nimpl PhysicalExpr for IfExpr {\n    /// Return a reference to Any that can be used for down-casting\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> Result<DataType> {\n        let data_type = self.true_expr.data_type(input_schema)?;\n        Ok(data_type)\n    }\n\n    fn nullable(&self, _input_schema: &Schema) -> Result<bool> {\n        if self.true_expr.nullable(_input_schema)? || self.false_expr.nullable(_input_schema)? 
{\n            Ok(true)\n        } else {\n            Ok(false)\n        }\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> Result<ColumnarValue> {\n        self.case_expr.evaluate(batch)\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.if_expr, &self.true_expr, &self.false_expr]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(IfExpr::new(\n            Arc::clone(&children[0]),\n            Arc::clone(&children[1]),\n            Arc::clone(&children[2]),\n        )))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use arrow::array::Int32Array;\n    use arrow::{array::StringArray, datatypes::*};\n    use datafusion::common::cast::as_int32_array;\n    use datafusion::logical_expr::Operator;\n    use datafusion::physical_expr::expressions::{binary, col, lit};\n\n    use super::*;\n\n    /// Create an If expression\n    fn if_fn(\n        if_expr: Arc<dyn PhysicalExpr>,\n        true_expr: Arc<dyn PhysicalExpr>,\n        false_expr: Arc<dyn PhysicalExpr>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(IfExpr::new(if_expr, true_expr, false_expr)))\n    }\n\n    #[test]\n    fn test_if_1() -> Result<()> {\n        let schema = Schema::new(vec![Field::new(\"a\", DataType::Utf8, true)]);\n        let a = StringArray::from(vec![Some(\"foo\"), Some(\"baz\"), None, Some(\"bar\")]);\n        let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(a)])?;\n        let schema_ref = batch.schema();\n\n        // if a = 'foo' 123 else 999\n        let if_expr = binary(\n            col(\"a\", &schema_ref)?,\n            Operator::Eq,\n            lit(\"foo\"),\n            &schema_ref,\n        )?;\n        let true_expr = lit(123i32);\n        let false_expr = lit(999i32);\n\n        let expr = if_fn(if_expr, true_expr, false_expr);\n        let result = expr?.evaluate(&batch)?.into_array(batch.num_rows())?;\n        let result = as_int32_array(&result)?;\n\n        let expected = &Int32Array::from(vec![Some(123), Some(999), Some(999), Some(999)]);\n\n        assert_eq!(expected, result);\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_if_2() -> Result<()> {\n        let schema = Schema::new(vec![Field::new(\"a\", DataType::Int32, true)]);\n        let a = Int32Array::from(vec![Some(1), Some(0), None, Some(5)]);\n        let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(a)])?;\n        let schema_ref = batch.schema();\n\n        // if a >=1 123 else 999\n        let if_expr = binary(col(\"a\", &schema_ref)?, Operator::GtEq, lit(1), &schema_ref)?;\n        let true_expr = lit(123i32);\n        let false_expr = lit(999i32);\n\n        let expr = if_fn(if_expr, true_expr, false_expr);\n        let result = expr?.evaluate(&batch)?.into_array(batch.num_rows())?;\n        let result = as_int32_array(&result)?;\n\n        let expected = &Int32Array::from(vec![Some(123), Some(999), Some(999), Some(123)]);\n        assert_eq!(expected, result);\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_if_children() {\n        let if_expr = lit(true);\n        let true_expr = lit(123i32);\n        let false_expr = lit(999i32);\n\n        let expr = if_fn(if_expr, true_expr, false_expr).unwrap();\n        let children = expr.children();\n        assert_eq!(children.len(), 3);\n        assert_eq!(children[0].to_string(), \"true\");\n        assert_eq!(children[1].to_string(), \"123\");\n        
assert_eq!(children[2].to_string(), \"999\");\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/conditional_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod if_expr;\n\npub use if_expr::IfExpr;\n"
  },
  {
    "path": "native/spark-expr/src/conversion_funcs/boolean.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::{SparkError, SparkResult};\nuse arrow::array::{Array, ArrayRef, AsArray, Decimal128Array, TimestampMicrosecondBuilder};\nuse arrow::datatypes::DataType;\nuse std::sync::Arc;\n\npub fn is_df_cast_from_bool_spark_compatible(to_type: &DataType) -> bool {\n    use DataType::*;\n    matches!(\n        to_type,\n        Int8 | Int16 | Int32 | Int64 | Float32 | Float64 | Utf8\n    )\n}\n\npub fn cast_boolean_to_decimal(\n    array: &ArrayRef,\n    precision: u8,\n    scale: i8,\n) -> SparkResult<ArrayRef> {\n    let bool_array = array.as_boolean();\n    let scaled_val = 10_i128.pow(scale as u32);\n    let result: Decimal128Array = bool_array\n        .iter()\n        .map(|v| v.map(|b| if b { scaled_val } else { 0 }))\n        .collect();\n\n    // Convert Arrow decimal overflow errors to SparkError\n    let decimal_array = result\n        .with_precision_and_scale(precision, scale)\n        .map_err(|e| {\n            if matches!(e, arrow::error::ArrowError::InvalidArgumentError(_))\n                && e.to_string().contains(\"too large to store in a Decimal128\")\n            {\n                // Use the scaled value as it's the only non-zero value that could overflow\n                crate::error::decimal_overflow_error(scaled_val, precision, scale)\n            } else {\n                SparkError::Arrow(Arc::new(e))\n            }\n        })?;\n\n    Ok(Arc::new(decimal_array))\n}\n\npub(crate) fn cast_boolean_to_timestamp(\n    array_ref: &ArrayRef,\n    target_tz: &Option<Arc<str>>,\n) -> SparkResult<ArrayRef> {\n    let bool_array = array_ref.as_boolean();\n    let mut builder = TimestampMicrosecondBuilder::with_capacity(bool_array.len());\n\n    for i in 0..bool_array.len() {\n        if bool_array.is_null(i) {\n            builder.append_null();\n        } else {\n            let micros = if bool_array.value(i) { 1 } else { 0 };\n            builder.append_value(micros);\n        }\n    }\n\n    Ok(Arc::new(builder.finish().with_timezone_opt(target_tz.clone())) as ArrayRef)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::cast::cast_array;\n    use crate::{EvalMode, SparkCastOptions};\n    use arrow::array::{\n        Array, ArrayRef, BooleanArray, Float32Array, Float64Array, Int16Array, Int32Array,\n        Int64Array, Int8Array, StringArray,\n    };\n    use arrow::datatypes::DataType::Decimal128;\n    use arrow::datatypes::TimestampMicrosecondType;\n    use std::sync::Arc;\n\n    fn test_input_bool_array() -> ArrayRef {\n        Arc::new(BooleanArray::from(vec![Some(true), Some(false), None]))\n    }\n\n    fn test_input_spark_opts() -> SparkCastOptions {\n        SparkCastOptions::new(EvalMode::Legacy, \"Asia/Kolkata\", false)\n  
  }\n\n    #[test]\n    fn test_is_df_cast_from_bool_spark_compatible() {\n        assert!(!is_df_cast_from_bool_spark_compatible(&DataType::Boolean));\n        assert!(is_df_cast_from_bool_spark_compatible(&DataType::Int8));\n        assert!(is_df_cast_from_bool_spark_compatible(&DataType::Int16));\n        assert!(is_df_cast_from_bool_spark_compatible(&DataType::Int32));\n        assert!(is_df_cast_from_bool_spark_compatible(&DataType::Int64));\n        assert!(is_df_cast_from_bool_spark_compatible(&DataType::Float32));\n        assert!(is_df_cast_from_bool_spark_compatible(&DataType::Float64));\n        assert!(is_df_cast_from_bool_spark_compatible(&DataType::Utf8));\n        assert!(!is_df_cast_from_bool_spark_compatible(\n            &DataType::Decimal128(10, 4)\n        ));\n        assert!(!is_df_cast_from_bool_spark_compatible(&DataType::Null));\n    }\n\n    #[test]\n    fn test_bool_to_int8_cast() {\n        let result = cast_array(\n            test_input_bool_array(),\n            &DataType::Int8,\n            &test_input_spark_opts(),\n        )\n        .unwrap();\n        let arr = result.as_any().downcast_ref::<Int8Array>().unwrap();\n        assert_eq!(arr.value(0), 1);\n        assert_eq!(arr.value(1), 0);\n        assert!(arr.is_null(2));\n    }\n\n    #[test]\n    fn test_bool_to_int16_cast() {\n        let result = cast_array(\n            test_input_bool_array(),\n            &DataType::Int16,\n            &test_input_spark_opts(),\n        )\n        .unwrap();\n        let arr = result.as_any().downcast_ref::<Int16Array>().unwrap();\n        assert_eq!(arr.value(0), 1);\n        assert_eq!(arr.value(1), 0);\n        assert!(arr.is_null(2));\n    }\n\n    #[test]\n    fn test_bool_to_int32_cast() {\n        let result = cast_array(\n            test_input_bool_array(),\n            &DataType::Int32,\n            &test_input_spark_opts(),\n        )\n        .unwrap();\n        let arr = result.as_any().downcast_ref::<Int32Array>().unwrap();\n        assert_eq!(arr.value(0), 1);\n        assert_eq!(arr.value(1), 0);\n        assert!(arr.is_null(2));\n    }\n\n    #[test]\n    fn test_bool_to_int64_cast() {\n        let result = cast_array(\n            test_input_bool_array(),\n            &DataType::Int64,\n            &test_input_spark_opts(),\n        )\n        .unwrap();\n        let arr = result.as_any().downcast_ref::<Int64Array>().unwrap();\n        assert_eq!(arr.value(0), 1);\n        assert_eq!(arr.value(1), 0);\n        assert!(arr.is_null(2));\n    }\n\n    #[test]\n    fn test_bool_to_float32_cast() {\n        let result = cast_array(\n            test_input_bool_array(),\n            &DataType::Float32,\n            &test_input_spark_opts(),\n        )\n        .unwrap();\n        let arr = result.as_any().downcast_ref::<Float32Array>().unwrap();\n        assert_eq!(arr.value(0), 1.0);\n        assert_eq!(arr.value(1), 0.0);\n        assert!(arr.is_null(2));\n    }\n\n    #[test]\n    fn test_bool_to_float64_cast() {\n        let result = cast_array(\n            test_input_bool_array(),\n            &DataType::Float64,\n            &test_input_spark_opts(),\n        )\n        .unwrap();\n        let arr = result.as_any().downcast_ref::<Float64Array>().unwrap();\n        assert_eq!(arr.value(0), 1.0);\n        assert_eq!(arr.value(1), 0.0);\n        assert!(arr.is_null(2));\n    }\n\n    #[test]\n    fn test_bool_to_string_cast() {\n        let result = cast_array(\n            test_input_bool_array(),\n            &DataType::Utf8,\n            
&test_input_spark_opts(),\n        )\n        .unwrap();\n        let arr = result.as_any().downcast_ref::<StringArray>().unwrap();\n        assert_eq!(arr.value(0), \"true\");\n        assert_eq!(arr.value(1), \"false\");\n        assert!(arr.is_null(2));\n    }\n\n    #[test]\n    fn test_bool_to_decimal_cast() {\n        let result = cast_array(\n            test_input_bool_array(),\n            &Decimal128(10, 4),\n            &test_input_spark_opts(),\n        )\n        .unwrap();\n        let expected_arr = Decimal128Array::from(vec![10000_i128, 0_i128])\n            .with_precision_and_scale(10, 4)\n            .unwrap();\n        let arr = result.as_any().downcast_ref::<Decimal128Array>().unwrap();\n        assert_eq!(arr.value(0), expected_arr.value(0));\n        assert_eq!(arr.value(1), expected_arr.value(1));\n        assert!(arr.is_null(2));\n    }\n\n    #[test]\n    fn test_cast_boolean_to_timestamp() {\n        let timezones: [Option<Arc<str>>; 3] = [\n            Some(Arc::from(\"UTC\")),\n            Some(Arc::from(\"America/Los_Angeles\")),\n            None,\n        ];\n\n        for tz in &timezones {\n            let bool_array: ArrayRef =\n                Arc::new(BooleanArray::from(vec![Some(true), Some(false), None]));\n\n            let result = cast_boolean_to_timestamp(&bool_array, tz).unwrap();\n            let ts_array = result.as_primitive::<TimestampMicrosecondType>();\n\n            assert_eq!(ts_array.value(0), 1); // true -> 1 microsecond\n            assert_eq!(ts_array.value(1), 0); // false -> 0 (epoch)\n            assert!(ts_array.is_null(2));\n            assert_eq!(ts_array.timezone(), tz.as_ref().map(|s| s.as_ref()));\n        }\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/conversion_funcs/cast.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::conversion_funcs::boolean::{\n    cast_boolean_to_decimal, cast_boolean_to_timestamp, is_df_cast_from_bool_spark_compatible,\n};\nuse crate::conversion_funcs::numeric::{\n    cast_decimal128_to_utf8, cast_decimal_to_timestamp, cast_float32_to_decimal128,\n    cast_float64_to_decimal128, cast_float_to_timestamp, cast_int_to_decimal128,\n    cast_int_to_timestamp, is_df_cast_from_decimal_spark_compatible,\n    is_df_cast_from_float_spark_compatible, is_df_cast_from_int_spark_compatible,\n    spark_cast_decimal_to_boolean, spark_cast_float32_to_utf8, spark_cast_float64_to_utf8,\n    spark_cast_int_to_int, spark_cast_nonintegral_numeric_to_integral,\n};\nuse crate::conversion_funcs::string::{\n    cast_string_to_date, cast_string_to_decimal, cast_string_to_float, cast_string_to_int,\n    cast_string_to_timestamp, is_df_cast_from_string_spark_compatible, spark_cast_utf8_to_boolean,\n};\nuse crate::conversion_funcs::temporal::{\n    cast_date_to_timestamp, is_df_cast_from_date_spark_compatible,\n    is_df_cast_from_timestamp_spark_compatible,\n};\nuse crate::conversion_funcs::utils::spark_cast_postprocess;\nuse crate::utils::array_with_timezone;\nuse crate::EvalMode::Legacy;\nuse crate::{cast_whole_num_to_binary, BinaryOutputStyle};\nuse crate::{EvalMode, SparkError};\nuse arrow::array::builder::StringBuilder;\nuse arrow::array::{\n    new_null_array, BinaryBuilder, DictionaryArray, GenericByteArray, ListArray, MapArray,\n    StringArray, StructArray,\n};\nuse arrow::datatypes::{ArrowDictionaryKeyType, ArrowNativeType, DataType, Schema};\nuse arrow::datatypes::{Field, Fields, GenericBinaryType};\nuse arrow::error::ArrowError;\nuse arrow::{\n    array::{\n        cast::AsArray, types::Int32Type, Array, ArrayRef, GenericStringArray, Int16Array,\n        Int32Array, Int64Array, Int8Array, OffsetSizeTrait, PrimitiveArray,\n    },\n    compute::{cast_with_options, take, CastOptions},\n    record_batch::RecordBatch,\n    util::display::FormatOptions,\n};\nuse base64::prelude::BASE64_STANDARD_NO_PAD;\nuse base64::Engine;\nuse datafusion::common::{internal_err, DataFusionError, Result as DataFusionResult, ScalarValue};\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::ColumnarValue;\nuse std::{\n    any::Any,\n    fmt::{Debug, Display, Formatter},\n    hash::Hash,\n    sync::Arc,\n};\n\nstatic TIMESTAMP_FORMAT: Option<&str> = Some(\"%Y-%m-%d %H:%M:%S%.f\");\n\nstatic CAST_OPTIONS: CastOptions = CastOptions {\n    safe: true,\n    format_options: FormatOptions::new()\n        .with_timestamp_tz_format(TIMESTAMP_FORMAT)\n        .with_timestamp_format(TIMESTAMP_FORMAT),\n};\n\n#[derive(Debug, Eq)]\npub struct Cast {\n    pub child: Arc<dyn 
PhysicalExpr>,\n    pub data_type: DataType,\n    pub cast_options: SparkCastOptions,\n    pub expr_id: Option<u64>,\n    pub query_context: Option<Arc<crate::QueryContext>>,\n}\n\nimpl PartialEq for Cast {\n    fn eq(&self, other: &Self) -> bool {\n        self.child.eq(&other.child)\n            && self.data_type.eq(&other.data_type)\n            && self.cast_options.eq(&other.cast_options)\n    }\n}\n\nimpl Hash for Cast {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n        self.data_type.hash(state);\n        self.cast_options.hash(state);\n    }\n}\n\nimpl Cast {\n    pub fn new(\n        child: Arc<dyn PhysicalExpr>,\n        data_type: DataType,\n        cast_options: SparkCastOptions,\n        expr_id: Option<u64>,\n        query_context: Option<Arc<crate::QueryContext>>,\n    ) -> Self {\n        Self {\n            child,\n            data_type,\n            cast_options,\n            expr_id,\n            query_context,\n        }\n    }\n}\n\n/// Spark cast options\n#[derive(Debug, Clone, Hash, PartialEq, Eq)]\npub struct SparkCastOptions {\n    /// Spark evaluation mode\n    pub eval_mode: EvalMode,\n    /// When cast from/to timezone related types, we need timezone, which will be resolved with\n    /// session local timezone by an analyzer in Spark.\n    // TODO we should change timezone to Tz to avoid repeated parsing\n    pub timezone: String,\n    /// Allow casts that are supported but not guaranteed to be 100% compatible\n    pub allow_incompat: bool,\n    /// True when running against Spark 4.0+. Enables version-specific cast behaviour\n    /// such as the handling of leading whitespace before T-prefixed time-only strings.\n    pub is_spark4_plus: bool,\n    /// Support casting unsigned ints to signed ints (used by Parquet SchemaAdapter)\n    pub allow_cast_unsigned_ints: bool,\n    /// We also use the cast logic for adapting Parquet schemas, so this flag is used\n    /// for that use case\n    pub is_adapting_schema: bool,\n    /// String to use to represent null values\n    pub null_string: String,\n    /// SparkSQL's binaryOutputStyle\n    pub binary_output_style: Option<BinaryOutputStyle>,\n}\n\nimpl SparkCastOptions {\n    pub fn new(eval_mode: EvalMode, timezone: &str, allow_incompat: bool) -> Self {\n        Self {\n            eval_mode,\n            timezone: timezone.to_string(),\n            allow_incompat,\n            is_spark4_plus: false,\n            allow_cast_unsigned_ints: false,\n            is_adapting_schema: false,\n            null_string: \"null\".to_string(),\n            binary_output_style: None,\n        }\n    }\n\n    pub fn new_without_timezone(eval_mode: EvalMode, allow_incompat: bool) -> Self {\n        Self {\n            eval_mode,\n            timezone: \"\".to_string(),\n            allow_incompat,\n            is_spark4_plus: false,\n            allow_cast_unsigned_ints: false,\n            is_adapting_schema: false,\n            null_string: \"null\".to_string(),\n            binary_output_style: None,\n        }\n    }\n\n    pub fn new_with_version(\n        eval_mode: EvalMode,\n        timezone: &str,\n        allow_incompat: bool,\n        is_spark4_plus: bool,\n    ) -> Self {\n        Self {\n            is_spark4_plus,\n            ..Self::new(eval_mode, timezone, allow_incompat)\n        }\n    }\n}\n\n/// Spark-compatible cast implementation. 
Defers to DataFusion's cast where that is known\n/// to be compatible, and returns an error when the requested cast is neither supported\n/// natively nor DataFusion-compatible.\npub fn spark_cast(\n    arg: ColumnarValue,\n    data_type: &DataType,\n    cast_options: &SparkCastOptions,\n) -> DataFusionResult<ColumnarValue> {\n    let result = match arg {\n        ColumnarValue::Array(array) => {\n            let result_array = cast_array(array, data_type, cast_options)?;\n            ColumnarValue::Array(result_array)\n        }\n        ColumnarValue::Scalar(scalar) => {\n            // Note that normally CAST(scalar) should be folded on the Spark JVM side. However, for\n            // some cases e.g., scalar subquery, Spark will not fold it, so we need to handle it\n            // here.\n            let array = scalar.to_array()?;\n            let scalar =\n                ScalarValue::try_from_array(&cast_array(array, data_type, cast_options)?, 0)?;\n            ColumnarValue::Scalar(scalar)\n        }\n    };\n\n    Ok(result)\n}\n\n// copied from datafusion common scalar/mod.rs\nfn dict_from_values<K: ArrowDictionaryKeyType>(\n    values_array: ArrayRef,\n) -> datafusion::common::Result<ArrayRef> {\n    // Create a key array with `size` elements of 0..array_len for all\n    // non-null value elements\n    let key_array: PrimitiveArray<K> = (0..values_array.len())\n        .map(|index| {\n            if values_array.is_valid(index) {\n                let native_index = K::Native::from_usize(index).ok_or_else(|| {\n                    DataFusionError::Internal(format!(\n                        \"Can not create index of type {} from value {}\",\n                        K::DATA_TYPE,\n                        index\n                    ))\n                })?;\n                Ok(Some(native_index))\n            } else {\n                Ok(None)\n            }\n        })\n        .collect::<datafusion::common::Result<Vec<_>>>()?\n        .into_iter()\n        .collect();\n\n    // create a new DictionaryArray\n    //\n    // Note: this path could be made faster by using the ArrayData\n    // APIs and skipping validation, if it ever comes up in\n    // performance traces.\n    let dict_array = DictionaryArray::<K>::try_new(key_array, values_array)?;\n    Ok(Arc::new(dict_array))\n}\n\npub(crate) fn cast_array(\n    array: ArrayRef,\n    to_type: &DataType,\n    cast_options: &SparkCastOptions,\n) -> DataFusionResult<ArrayRef> {\n    use DataType::*;\n    let from_type = array.data_type().clone();\n\n    if &from_type == to_type {\n        return Ok(Arc::new(array));\n    }\n\n    let array = array_with_timezone(array, cast_options.timezone.clone(), Some(to_type))?;\n    let eval_mode = cast_options.eval_mode;\n\n    let native_cast_options: CastOptions = CastOptions {\n        safe: !matches!(cast_options.eval_mode, EvalMode::Ansi), // take safe mode from cast_options passed\n        format_options: FormatOptions::new()\n            .with_timestamp_tz_format(TIMESTAMP_FORMAT)\n            .with_timestamp_format(TIMESTAMP_FORMAT),\n    };\n\n    let array = match &from_type {\n        Dictionary(key_type, value_type)\n            if key_type.as_ref() == &Int32\n                && (value_type.as_ref() == &Utf8\n                    || value_type.as_ref() == &LargeUtf8\n                    || value_type.as_ref() == &Binary\n                    || value_type.as_ref() == &LargeBinary) =>\n        {\n            let dict_array = array\n                .as_any()\n                
.downcast_ref::<DictionaryArray<Int32Type>>()\n                .expect(\"Expected a dictionary array\");\n\n            let casted_result = match to_type {\n                Dictionary(_, to_value_type) => {\n                    let casted_dictionary = DictionaryArray::<Int32Type>::new(\n                        dict_array.keys().clone(),\n                        cast_array(Arc::clone(dict_array.values()), to_value_type, cast_options)?,\n                    );\n                    Arc::new(casted_dictionary.clone())\n                }\n                _ => {\n                    let casted_dictionary = DictionaryArray::<Int32Type>::new(\n                        dict_array.keys().clone(),\n                        cast_array(Arc::clone(dict_array.values()), to_type, cast_options)?,\n                    );\n                    take(casted_dictionary.values().as_ref(), dict_array.keys(), None)?\n                }\n            };\n            return Ok(spark_cast_postprocess(casted_result, &from_type, to_type));\n        }\n        _ => {\n            if let Dictionary(_, _) = to_type {\n                let dict_array = dict_from_values::<Int32Type>(array)?;\n                let casted_result = cast_array(dict_array, to_type, cast_options)?;\n                return Ok(spark_cast_postprocess(casted_result, &from_type, to_type));\n            } else {\n                array\n            }\n        }\n    };\n\n    let cast_result = match (&from_type, to_type) {\n        // Null arrays carry no concrete values, so Arrow's native cast can change only the\n        // logical type while preserving length and nullness.\n        (Null, _) => Ok(cast_with_options(&array, to_type, &native_cast_options)?),\n        (Utf8, Boolean) => spark_cast_utf8_to_boolean::<i32>(&array, eval_mode),\n        (LargeUtf8, Boolean) => spark_cast_utf8_to_boolean::<i64>(&array, eval_mode),\n        (Utf8, Timestamp(_, _)) => cast_string_to_timestamp(\n            &array,\n            to_type,\n            eval_mode,\n            &cast_options.timezone,\n            cast_options.is_spark4_plus,\n        ),\n        (Utf8, Date32) => cast_string_to_date(&array, to_type, eval_mode),\n        (Date32, Int32) => {\n            // Date32 is stored as days since epoch (i32), so this is a simple reinterpret cast\n            Ok(cast_with_options(&array, to_type, &CAST_OPTIONS)?)\n        }\n        (Utf8, Float32 | Float64) => cast_string_to_float(&array, to_type, eval_mode),\n        (Utf8 | LargeUtf8, Decimal128(precision, scale)) => {\n            cast_string_to_decimal(&array, to_type, precision, scale, eval_mode)\n        }\n        (Utf8 | LargeUtf8, Decimal256(precision, scale)) => {\n            cast_string_to_decimal(&array, to_type, precision, scale, eval_mode)\n        }\n        (Int64, Int32)\n        | (Int64, Int16)\n        | (Int64, Int8)\n        | (Int32, Int16)\n        | (Int32, Int8)\n        | (Int16, Int8)\n            if eval_mode != EvalMode::Try =>\n        {\n            spark_cast_int_to_int(&array, eval_mode, &from_type, to_type)\n        }\n        (Int8 | Int16 | Int32 | Int64, Decimal128(precision, scale)) => {\n            cast_int_to_decimal128(&array, eval_mode, &from_type, to_type, *precision, *scale)\n        }\n        (Utf8, Int8 | Int16 | Int32 | Int64) => {\n            cast_string_to_int::<i32>(to_type, &array, eval_mode)\n        }\n        (LargeUtf8, Int8 | Int16 | Int32 | Int64) => {\n            cast_string_to_int::<i64>(to_type, &array, eval_mode)\n        }\n        (Float64, Utf8) 
=> spark_cast_float64_to_utf8::<i32>(&array, eval_mode),\n        (Float64, LargeUtf8) => spark_cast_float64_to_utf8::<i64>(&array, eval_mode),\n        (Float32, Utf8) => spark_cast_float32_to_utf8::<i32>(&array, eval_mode),\n        (Float32, LargeUtf8) => spark_cast_float32_to_utf8::<i64>(&array, eval_mode),\n        (Float32, Decimal128(precision, scale)) => {\n            cast_float32_to_decimal128(&array, *precision, *scale, eval_mode)\n        }\n        (Float64, Decimal128(precision, scale)) => {\n            cast_float64_to_decimal128(&array, *precision, *scale, eval_mode)\n        }\n        (Float32, Int8)\n        | (Float32, Int16)\n        | (Float32, Int32)\n        | (Float32, Int64)\n        | (Float64, Int8)\n        | (Float64, Int16)\n        | (Float64, Int32)\n        | (Float64, Int64)\n        | (Decimal128(_, _), Int8)\n        | (Decimal128(_, _), Int16)\n        | (Decimal128(_, _), Int32)\n        | (Decimal128(_, _), Int64)\n            if eval_mode != EvalMode::Try =>\n        {\n            spark_cast_nonintegral_numeric_to_integral(&array, eval_mode, &from_type, to_type)\n        }\n        (Decimal128(_p, _s), Boolean) => spark_cast_decimal_to_boolean(&array),\n        // Spark LEGACY cast uses Java BigDecimal.toString() which produces scientific notation\n        // when adjusted_exponent < -6 (e.g. \"0E-18\" for zero with scale=18).\n        // TRY and ANSI use plain notation (\"0.000000000000000000\") so DataFusion handles those.\n        (Decimal128(_, scale), Utf8) if eval_mode == EvalMode::Legacy => {\n            cast_decimal128_to_utf8(&array, *scale)\n        }\n        (Utf8View, Utf8) => Ok(cast_with_options(&array, to_type, &CAST_OPTIONS)?),\n        (Struct(_), Utf8) => Ok(casts_struct_to_string(array.as_struct(), cast_options)?),\n        (Struct(_), Struct(_)) => Ok(cast_struct_to_struct(\n            array.as_struct(),\n            &from_type,\n            to_type,\n            cast_options,\n        )?),\n        (List(_), Utf8) => Ok(cast_array_to_string(array.as_list(), cast_options)?),\n        (List(_), List(to)) => {\n            // Cast list elements recursively so nested array casts follow Spark semantics\n            // instead of relying on Arrow's top-level cast support.\n            let list_array = array.as_list::<i32>();\n            let casted_values = match (list_array.values().data_type(), to.data_type()) {\n                // Spark legacy array casts produce null elements for array<Date> -> array<Int>.\n                (Date32, Int32) => new_null_array(to.data_type(), list_array.values().len()),\n                _ => cast_array(\n                    Arc::clone(list_array.values()),\n                    to.data_type(),\n                    cast_options,\n                )?,\n            };\n            Ok(Arc::new(ListArray::new(\n                Arc::clone(to),\n                list_array.offsets().clone(),\n                casted_values,\n                list_array.nulls().cloned(),\n            )) as ArrayRef)\n        }\n        (Map(_, _), Map(_, _)) => Ok(cast_map_to_map(&array, &from_type, to_type, cast_options)?),\n        (UInt8 | UInt16 | UInt32 | UInt64, Int8 | Int16 | Int32 | Int64)\n            if cast_options.allow_cast_unsigned_ints =>\n        {\n            Ok(cast_with_options(&array, to_type, &CAST_OPTIONS)?)\n        }\n        (Binary, Utf8) => Ok(cast_binary_to_string::<i32>(&array, cast_options)?),\n        (Date32, Timestamp(_, tz)) => Ok(cast_date_to_timestamp(&array, cast_options, tz)?),\n        
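// In LEGACY mode, Spark casts integral values to binary as their fixed-width\n        // big-endian byte representation (1, 2, 4, or 8 bytes); other eval modes fall\n        // through to the catch-all arms below.\n        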
(Int8, Binary) if (eval_mode == Legacy) => cast_whole_num_to_binary!(&array, Int8Array, 1),\n        (Int16, Binary) if (eval_mode == Legacy) => {\n            cast_whole_num_to_binary!(&array, Int16Array, 2)\n        }\n        (Int32, Binary) if (eval_mode == Legacy) => {\n            cast_whole_num_to_binary!(&array, Int32Array, 4)\n        }\n        (Int64, Binary) if (eval_mode == Legacy) => {\n            cast_whole_num_to_binary!(&array, Int64Array, 8)\n        }\n        (Boolean, Decimal128(precision, scale)) => {\n            cast_boolean_to_decimal(&array, *precision, *scale)\n        }\n        (Int8 | Int16 | Int32 | Int64, Timestamp(_, tz)) => cast_int_to_timestamp(&array, tz),\n        (Float32 | Float64, Timestamp(_, tz)) => cast_float_to_timestamp(&array, tz, eval_mode),\n        (Boolean, Timestamp(_, tz)) => cast_boolean_to_timestamp(&array, tz),\n        (Decimal128(_, scale), Timestamp(_, tz)) => cast_decimal_to_timestamp(&array, tz, *scale),\n        _ if cast_options.is_adapting_schema\n            || is_datafusion_spark_compatible(&from_type, to_type) =>\n        {\n            // use DataFusion cast only when we know that it is compatible with Spark\n            Ok(cast_with_options(&array, to_type, &native_cast_options)?)\n        }\n        _ => {\n            // we should never reach this code because the Scala code should be checking\n            // for supported cast operations and falling back to Spark for anything that\n            // is not yet supported\n            Err(SparkError::Internal(format!(\n                \"Native cast invoked for unsupported cast from {from_type:?} to {to_type:?}\"\n            )))\n        }\n    };\n\n    Ok(spark_cast_postprocess(cast_result?, &from_type, to_type))\n}\n\n/// Determines if DataFusion supports the given cast in a way that is\n/// compatible with Spark\nfn is_datafusion_spark_compatible(from_type: &DataType, to_type: &DataType) -> bool {\n    if from_type == to_type {\n        return true;\n    }\n    match from_type {\n        DataType::Null => {\n            matches!(to_type, DataType::List(_))\n        }\n        DataType::Boolean => is_df_cast_from_bool_spark_compatible(to_type),\n        DataType::Int8 | DataType::Int16 | DataType::Int32 | DataType::Int64 => {\n            is_df_cast_from_int_spark_compatible(to_type)\n        }\n        DataType::Float32 | DataType::Float64 => is_df_cast_from_float_spark_compatible(to_type),\n        DataType::Decimal128(_, _) | DataType::Decimal256(_, _) => {\n            is_df_cast_from_decimal_spark_compatible(to_type)\n        }\n        DataType::Utf8 => is_df_cast_from_string_spark_compatible(to_type),\n        DataType::Date32 => is_df_cast_from_date_spark_compatible(to_type),\n        DataType::Timestamp(_, _) => is_df_cast_from_timestamp_spark_compatible(to_type),\n        DataType::Binary => {\n            // note that this is not completely Spark compatible because\n            // DataFusion only supports binary data containing valid UTF-8 strings\n            matches!(to_type, DataType::Utf8)\n        }\n        _ => false,\n    }\n}\n\n/// Cast between struct types based on logic in\n/// `org.apache.spark.sql.catalyst.expressions.Cast#castStruct`.\nfn cast_struct_to_struct(\n    array: &StructArray,\n    from_type: &DataType,\n    to_type: &DataType,\n    cast_options: &SparkCastOptions,\n) -> DataFusionResult<ArrayRef> {\n    match (from_type, to_type) {\n        (DataType::Struct(from_fields), DataType::Struct(to_fields)) => {\n            let cast_fields: 
Vec<ArrayRef> = from_fields\n                .iter()\n                .enumerate()\n                .zip(to_fields.iter())\n                .map(|((idx, _from), to)| {\n                    let from_field = Arc::clone(array.column(idx));\n                    let array_length = from_field.len();\n                    // propagate cast failures instead of panicking on unwrap\n                    let cast_result = spark_cast(\n                        ColumnarValue::from(from_field),\n                        to.data_type(),\n                        cast_options,\n                    )?;\n                    cast_result.to_array(array_length)\n                })\n                .collect::<DataFusionResult<Vec<_>>>()?;\n\n            Ok(Arc::new(StructArray::new(\n                to_fields.clone(),\n                cast_fields,\n                array.nulls().cloned(),\n            )))\n        }\n        _ => unreachable!(),\n    }\n}\n\n/// Cast between map types, handling field name differences between Parquet (\"key_value\")\n/// and Spark (\"entries\") while preserving the map's structure.\nfn cast_map_to_map(\n    array: &ArrayRef,\n    from_type: &DataType,\n    to_type: &DataType,\n    cast_options: &SparkCastOptions,\n) -> DataFusionResult<ArrayRef> {\n    let map_array = array\n        .as_any()\n        .downcast_ref::<MapArray>()\n        .expect(\"Expected a MapArray\");\n\n    match (from_type, to_type) {\n        (\n            DataType::Map(from_entries_field, from_sorted),\n            DataType::Map(to_entries_field, _to_sorted),\n        ) => {\n            // Get the struct types for entries\n            let from_struct_type = from_entries_field.data_type();\n            let to_struct_type = to_entries_field.data_type();\n\n            match (from_struct_type, to_struct_type) {\n                (DataType::Struct(from_fields), DataType::Struct(to_fields)) => {\n                    // Get the key and value types\n                    let from_key_type = from_fields[0].data_type();\n                    let from_value_type = from_fields[1].data_type();\n                    let to_key_type = to_fields[0].data_type();\n                    let to_value_type = to_fields[1].data_type();\n\n                    // Cast keys if needed\n                    let keys = map_array.keys();\n                    let cast_keys = if from_key_type != to_key_type {\n                        cast_array(Arc::clone(keys), to_key_type, cast_options)?\n                    } else {\n                        Arc::clone(keys)\n                    };\n\n                    // Cast values if needed\n                    let values = map_array.values();\n                    let cast_values = if from_value_type != to_value_type {\n                        cast_array(Arc::clone(values), to_value_type, cast_options)?\n                    } else {\n                        Arc::clone(values)\n                    };\n\n                    // Build the new entries struct with the target field names\n                    let new_key_field = Arc::new(Field::new(\n                        to_fields[0].name(),\n                        to_key_type.clone(),\n                        to_fields[0].is_nullable(),\n                    ));\n                    let new_value_field = Arc::new(Field::new(\n                        to_fields[1].name(),\n                        to_value_type.clone(),\n                        to_fields[1].is_nullable(),\n                    ));\n\n                    let struct_fields = Fields::from(vec![new_key_field, new_value_field]);
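\n                    // The entries struct is built without a validity buffer of its\n                    // own; map-level nulls are reattached below via MapArray::new.\n                    let entries_struct 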
=\n                        StructArray::new(struct_fields, vec![cast_keys, cast_values], None);\n\n                    // Create the new map field with the target name\n                    let new_entries_field = Arc::new(Field::new(\n                        to_entries_field.name(),\n                        DataType::Struct(entries_struct.fields().clone()),\n                        to_entries_field.is_nullable(),\n                    ));\n\n                    // Build the new MapArray\n                    let new_map = MapArray::new(\n                        new_entries_field,\n                        map_array.offsets().clone(),\n                        entries_struct,\n                        map_array.nulls().cloned(),\n                        *from_sorted,\n                    );\n\n                    Ok(Arc::new(new_map))\n                }\n                _ => Err(DataFusionError::Internal(format!(\n                    \"Map entries must be structs, got {:?} and {:?}\",\n                    from_struct_type, to_struct_type\n                ))),\n            }\n        }\n        _ => unreachable!(\"cast_map_to_map called with non-Map types\"),\n    }\n}\n\nfn cast_array_to_string(\n    array: &ListArray,\n    spark_cast_options: &SparkCastOptions,\n) -> DataFusionResult<ArrayRef> {\n    let mut builder = StringBuilder::with_capacity(array.len(), array.len() * 16);\n    let mut str = String::with_capacity(array.len() * 16);\n\n    let casted_values = cast_array(\n        Arc::clone(array.values()),\n        &DataType::Utf8,\n        spark_cast_options,\n    )?;\n    let string_values = casted_values\n        .as_any()\n        .downcast_ref::<StringArray>()\n        .expect(\"Casted values should be StringArray\");\n\n    let offsets = array.offsets();\n    for row_index in 0..array.len() {\n        if array.is_null(row_index) {\n            builder.append_null();\n        } else {\n            str.clear();\n            let start = offsets[row_index] as usize;\n            let end = offsets[row_index + 1] as usize;\n\n            str.push('[');\n            let mut first = true;\n            for idx in start..end {\n                if !first {\n                    str.push_str(\", \");\n                }\n                if string_values.is_null(idx) {\n                    str.push_str(&spark_cast_options.null_string);\n                } else {\n                    str.push_str(string_values.value(idx));\n                }\n                first = false;\n            }\n            str.push(']');\n            builder.append_value(&str);\n        }\n    }\n    Ok(Arc::new(builder.finish()))\n}\n\nfn casts_struct_to_string(\n    array: &StructArray,\n    spark_cast_options: &SparkCastOptions,\n) -> DataFusionResult<ArrayRef> {\n    // cast each field to a string\n    let string_arrays: Vec<ArrayRef> = array\n        .columns()\n        .iter()\n        .map(|arr| {\n            spark_cast(\n                ColumnarValue::Array(Arc::clone(arr)),\n                &DataType::Utf8,\n                spark_cast_options,\n            )\n            .and_then(|cv| cv.into_array(arr.len()))\n        })\n        .collect::<DataFusionResult<Vec<_>>>()?;\n    let string_arrays: Vec<&StringArray> =\n        string_arrays.iter().map(|arr| arr.as_string()).collect();\n    // build the struct string in Spark's format `{value1, value2, ...}`; field\n    // names are not included (see test_cast_struct_to_utf8 below)\n    let mut builder = StringBuilder::with_capacity(array.len(), array.len() * 16);
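\n    // Reuse a single String buffer across rows to avoid one allocation per value\n    // (the same pattern is used in cast_array_to_string above).\n    let mut str = String::with_capacity(array.len() * 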
16);\n    for row_index in 0..array.len() {\n        if array.is_null(row_index) {\n            builder.append_null();\n        } else {\n            str.clear();\n            let mut any_fields_written = false;\n            str.push('{');\n            for field in &string_arrays {\n                if any_fields_written {\n                    str.push_str(\", \");\n                }\n                if field.is_null(row_index) {\n                    str.push_str(&spark_cast_options.null_string);\n                } else {\n                    str.push_str(field.value(row_index));\n                }\n                any_fields_written = true;\n            }\n            str.push('}');\n            builder.append_value(&str);\n        }\n    }\n    Ok(Arc::new(builder.finish()))\n}\n\nimpl Display for Cast {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"Cast [data_type: {}, timezone: {}, child: {}, eval_mode: {:?}]\",\n            self.data_type, self.cast_options.timezone, self.child, &self.cast_options.eval_mode\n        )\n    }\n}\n\nimpl PhysicalExpr for Cast {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, _: &Schema) -> DataFusionResult<DataType> {\n        Ok(self.data_type.clone())\n    }\n\n    fn nullable(&self, _: &Schema) -> DataFusionResult<bool> {\n        Ok(true)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> DataFusionResult<ColumnarValue> {\n        let arg = self.child.evaluate(batch)?;\n        let result = spark_cast(arg, &self.data_type, &self.cast_options);\n\n        // If there's an error and we have query_context, wrap it\n        match result {\n            Err(DataFusionError::External(e)) if self.query_context.is_some() => {\n                if let Some(spark_err) = e.downcast_ref::<crate::SparkError>() {\n                    let wrapped = crate::SparkErrorWithContext::with_context(\n                        spark_err.clone(),\n                        Arc::clone(self.query_context.as_ref().unwrap()),\n                    );\n                    Err(DataFusionError::External(Box::new(wrapped)))\n                } else {\n                    Err(DataFusionError::External(e))\n                }\n            }\n            other => other,\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        match children.len() {\n            1 => Ok(Arc::new(Cast::new(\n                Arc::clone(&children[0]),\n                self.data_type.clone(),\n                self.cast_options.clone(),\n                self.expr_id,\n                self.query_context.clone(),\n            ))),\n            _ => internal_err!(\"Cast should have exactly one child\"),\n        }\n    }\n}\n\nfn cast_binary_to_string<O: OffsetSizeTrait>(\n    array: &dyn Array,\n    spark_cast_options: &SparkCastOptions,\n) -> Result<ArrayRef, ArrowError> {\n    let input = array\n        .as_any()\n        .downcast_ref::<GenericByteArray<GenericBinaryType<O>>>()\n        .unwrap();\n\n    fn binary_formatter(value: &[u8], spark_cast_options: &SparkCastOptions) -> String {\n        match spark_cast_options.binary_output_style {\n            Some(s) => 
spark_binary_formatter(value, s),\n            None => cast_binary_formatter(value),\n        }\n    }\n\n    let output_array = input\n        .iter()\n        .map(|value| match value {\n            Some(value) => Ok(Some(binary_formatter(value, spark_cast_options))),\n            _ => Ok(None),\n        })\n        .collect::<Result<GenericStringArray<O>, ArrowError>>()?;\n    Ok(Arc::new(output_array))\n}\n\n/// This function mimics the [BinaryFormatter](https://github.com/apache/spark/blob/v4.0.0/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/ToStringBase.scala#L449-L468)\n/// used by SparkSQL's ToPrettyString expression.\n/// The BinaryFormatter was [introduced](https://issues.apache.org/jira/browse/SPARK-47911) in Spark 4.0.0.\n/// Before Spark 4.0.0, the default is SPACE_DELIMITED_UPPERCASE_HEX.\nfn spark_binary_formatter(value: &[u8], binary_output_style: BinaryOutputStyle) -> String {\n    match binary_output_style {\n        // Lossy conversion cannot panic on invalid UTF-8; Java string decoding\n        // likewise substitutes U+FFFD for malformed byte sequences.\n        BinaryOutputStyle::Utf8 => String::from_utf8_lossy(value).into_owned(),\n        BinaryOutputStyle::Basic => {\n            format!(\n                \"{:?}\",\n                value\n                    .iter()\n                    .map(|v| i8::from_ne_bytes([*v]))\n                    .collect::<Vec<i8>>()\n            )\n        }\n        BinaryOutputStyle::Base64 => BASE64_STANDARD_NO_PAD.encode(value),\n        BinaryOutputStyle::Hex => value\n            .iter()\n            .map(|v| hex::encode_upper([*v]))\n            .collect::<String>(),\n        BinaryOutputStyle::HexDiscrete => {\n            // Spark's default SPACE_DELIMITED_UPPERCASE_HEX\n            format!(\n                \"[{}]\",\n                value\n                    .iter()\n                    .map(|v| hex::encode_upper([*v]))\n                    .collect::<Vec<String>>()\n                    .join(\" \")\n            )\n        }\n    }\n}\n\nfn cast_binary_formatter(value: &[u8]) -> String {\n    match String::from_utf8(value.to_vec()) {\n        Ok(value) => value,\n        Err(_) => unsafe { String::from_utf8_unchecked(value.to_vec()) },\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{ListArray, NullArray, StringArray};\n    use arrow::buffer::OffsetBuffer;\n    use arrow::datatypes::TimestampMicrosecondType;\n    use arrow::datatypes::{Field, Fields};\n\n    #[test]\n    fn test_cast_unsupported_timestamp_to_date() {\n        // Since DataFusion uses chrono::DateTime internally, not all dates representable by TimestampMicrosecondType are supported\n        let timestamps: PrimitiveArray<TimestampMicrosecondType> = vec![i64::MAX].into();\n        let cast_options = SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false);\n        let result = cast_array(\n            Arc::new(timestamps.with_timezone(\"Europe/Copenhagen\")),\n            &DataType::Date32,\n            &cast_options,\n        );\n        assert!(result.is_err())\n    }\n\n    #[test]\n    fn test_cast_invalid_timezone() {\n        let timestamps: PrimitiveArray<TimestampMicrosecondType> = vec![i64::MAX].into();\n        let cast_options = SparkCastOptions::new(EvalMode::Legacy, \"Not a valid timezone\", false);\n        let result = cast_array(\n            Arc::new(timestamps.with_timezone(\"Europe/Copenhagen\")),\n            &DataType::Date32,\n            &cast_options,\n        );\n        assert!(result.is_err())\n    }\n
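\n    // Minimal illustrative sketch (not exhaustive): spot-check two\n    // spark_binary_formatter output styles; HexDiscrete is Spark's pre-4.0\n    // default SPACE_DELIMITED_UPPERCASE_HEX rendering.\n    #[test]\n    fn test_spark_binary_formatter_hex_styles() {\n        assert_eq!(\n            \"[DE AD]\",\n            spark_binary_formatter(&[0xDE, 0xAD], BinaryOutputStyle::HexDiscrete)\n        );\n        assert_eq!(\n            \"DEAD\",\n            spark_binary_formatter(&[0xDE, 0xAD], BinaryOutputStyle::Hex)\n        );\n    }\n\n    #[test]\n    fn test_cast_struct_to_utf8() {\n        let a: ArrayRef = Arc::new(Int32Array::from(vec![\n            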
Some(1),\n            Some(2),\n            None,\n            Some(4),\n            Some(5),\n        ]));\n        let b: ArrayRef = Arc::new(StringArray::from(vec![\"a\", \"b\", \"c\", \"d\", \"e\"]));\n        let c: ArrayRef = Arc::new(StructArray::from(vec![\n            (Arc::new(Field::new(\"a\", DataType::Int32, true)), a),\n            (Arc::new(Field::new(\"b\", DataType::Utf8, true)), b),\n        ]));\n        let string_array = cast_array(\n            c,\n            &DataType::Utf8,\n            &SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false),\n        )\n        .unwrap();\n        let string_array = string_array.as_string::<i32>();\n        assert_eq!(5, string_array.len());\n        assert_eq!(r#\"{1, a}\"#, string_array.value(0));\n        assert_eq!(r#\"{2, b}\"#, string_array.value(1));\n        assert_eq!(r#\"{null, c}\"#, string_array.value(2));\n        assert_eq!(r#\"{4, d}\"#, string_array.value(3));\n        assert_eq!(r#\"{5, e}\"#, string_array.value(4));\n    }\n\n    #[test]\n    fn test_cast_struct_to_struct() {\n        let a: ArrayRef = Arc::new(Int32Array::from(vec![\n            Some(1),\n            Some(2),\n            None,\n            Some(4),\n            Some(5),\n        ]));\n        let b: ArrayRef = Arc::new(StringArray::from(vec![\"a\", \"b\", \"c\", \"d\", \"e\"]));\n        let c: ArrayRef = Arc::new(StructArray::from(vec![\n            (Arc::new(Field::new(\"a\", DataType::Int32, true)), a),\n            (Arc::new(Field::new(\"b\", DataType::Utf8, true)), b),\n        ]));\n        // change type of \"a\" from Int32 to Utf8\n        let fields = Fields::from(vec![\n            Field::new(\"a\", DataType::Utf8, true),\n            Field::new(\"b\", DataType::Utf8, true),\n        ]);\n        let cast_array = spark_cast(\n            ColumnarValue::Array(c),\n            &DataType::Struct(fields),\n            &SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false),\n        )\n        .unwrap();\n        if let ColumnarValue::Array(cast_array) = cast_array {\n            assert_eq!(5, cast_array.len());\n            let a = cast_array.as_struct().column(0).as_string::<i32>();\n            assert_eq!(\"1\", a.value(0));\n        } else {\n            unreachable!()\n        }\n    }\n\n    #[test]\n    fn test_cast_struct_to_struct_drop_column() {\n        let a: ArrayRef = Arc::new(Int32Array::from(vec![\n            Some(1),\n            Some(2),\n            None,\n            Some(4),\n            Some(5),\n        ]));\n        let b: ArrayRef = Arc::new(StringArray::from(vec![\"a\", \"b\", \"c\", \"d\", \"e\"]));\n        let c: ArrayRef = Arc::new(StructArray::from(vec![\n            (Arc::new(Field::new(\"a\", DataType::Int32, true)), a),\n            (Arc::new(Field::new(\"b\", DataType::Utf8, true)), b),\n        ]));\n        // change type of \"a\" from Int32 to Utf8 and drop \"b\"\n        let fields = Fields::from(vec![Field::new(\"a\", DataType::Utf8, true)]);\n        let cast_array = spark_cast(\n            ColumnarValue::Array(c),\n            &DataType::Struct(fields),\n            &SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false),\n        )\n        .unwrap();\n        if let ColumnarValue::Array(cast_array) = cast_array {\n            assert_eq!(5, cast_array.len());\n            let struct_array = cast_array.as_struct();\n            assert_eq!(1, struct_array.columns().len());\n            let a = struct_array.column(0).as_string::<i32>();\n            assert_eq!(\"1\", a.value(0));\n        } 
else {\n            unreachable!()\n        }\n    }\n\n    #[test]\n    fn test_cast_string_array_to_string() {\n        let values_array =\n            StringArray::from(vec![Some(\"a\"), Some(\"b\"), Some(\"c\"), Some(\"a\"), None, None]);\n        let offsets_buffer = OffsetBuffer::<i32>::new(vec![0, 3, 5, 6, 6].into());\n        let item_field = Arc::new(Field::new(\"item\", DataType::Utf8, true));\n        let list_array = Arc::new(ListArray::new(\n            item_field,\n            offsets_buffer,\n            Arc::new(values_array),\n            None,\n        ));\n        let string_array = cast_array_to_string(\n            &list_array,\n            &SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false),\n        )\n        .unwrap();\n        let string_array = string_array.as_string::<i32>();\n        assert_eq!(r#\"[a, b, c]\"#, string_array.value(0));\n        assert_eq!(r#\"[a, null]\"#, string_array.value(1));\n        assert_eq!(r#\"[null]\"#, string_array.value(2));\n        assert_eq!(r#\"[]\"#, string_array.value(3));\n    }\n\n    #[test]\n    fn test_cast_i32_array_to_string() {\n        let values_array = Int32Array::from(vec![Some(1), Some(2), Some(3), Some(1), None, None]);\n        let offsets_buffer = OffsetBuffer::<i32>::new(vec![0, 3, 5, 6, 6].into());\n        let item_field = Arc::new(Field::new(\"item\", DataType::Int32, true));\n        let list_array = Arc::new(ListArray::new(\n            item_field,\n            offsets_buffer,\n            Arc::new(values_array),\n            None,\n        ));\n        let string_array = cast_array_to_string(\n            &list_array,\n            &SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false),\n        )\n        .unwrap();\n        let string_array = string_array.as_string::<i32>();\n        assert_eq!(r#\"[1, 2, 3]\"#, string_array.value(0));\n        assert_eq!(r#\"[1, null]\"#, string_array.value(1));\n        assert_eq!(r#\"[null]\"#, string_array.value(2));\n        assert_eq!(r#\"[]\"#, string_array.value(3));\n    }\n\n    #[test]\n    fn test_cast_array_of_nulls_to_array() {\n        let offsets_buffer = OffsetBuffer::<i32>::new(vec![0, 2, 3, 3].into());\n        let from_item_field = Arc::new(Field::new(\"item\", DataType::Null, true));\n        let from_array: ArrayRef = Arc::new(ListArray::new(\n            from_item_field,\n            offsets_buffer,\n            Arc::new(NullArray::new(3)),\n            None,\n        ));\n\n        let to_type = DataType::List(Arc::new(Field::new(\"item\", DataType::Int32, true)));\n        let to_array = cast_array(\n            from_array,\n            &to_type,\n            &SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false),\n        )\n        .unwrap();\n\n        let result = to_array.as_list::<i32>();\n        assert_eq!(3, result.len());\n        assert_eq!(result.value_offsets(), &[0, 2, 3, 3]);\n\n        let values = result.values().as_primitive::<Int32Type>();\n        assert_eq!(3, values.len());\n        assert_eq!(3, values.null_count());\n        assert!(values.iter().all(|value| value.is_none()));\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/conversion_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod boolean;\npub mod cast;\nmod numeric;\nmod string;\nmod temporal;\nmod utils;\n"
  },
  {
    "path": "native/spark-expr/src/conversion_funcs/numeric.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::conversion_funcs::utils::cast_overflow;\nuse crate::conversion_funcs::utils::MICROS_PER_SECOND;\nuse crate::{EvalMode, SparkError, SparkResult};\nuse arrow::array::{\n    Array, ArrayRef, AsArray, BooleanBuilder, Decimal128Array, Decimal128Builder, Float32Array,\n    Float64Array, GenericStringArray, Int16Array, Int32Array, Int64Array, Int8Array,\n    OffsetSizeTrait, PrimitiveArray, StringBuilder, TimestampMicrosecondBuilder,\n};\nuse arrow::datatypes::{\n    i256, is_validate_decimal_precision, ArrowPrimitiveType, DataType, Decimal128Type, Float32Type,\n    Float64Type, Int16Type, Int32Type, Int64Type, Int8Type,\n};\nuse num::{cast::AsPrimitive, ToPrimitive, Zero};\nuse std::sync::Arc;\n\n/// Check if DataFusion cast from integer types is Spark compatible\npub(crate) fn is_df_cast_from_int_spark_compatible(to_type: &DataType) -> bool {\n    matches!(\n        to_type,\n        DataType::Boolean\n            | DataType::Int8\n            | DataType::Int16\n            | DataType::Int32\n            | DataType::Int64\n            | DataType::Float32\n            | DataType::Float64\n            | DataType::Utf8\n    )\n}\n\n/// Check if DataFusion cast from float types is Spark compatible\npub(crate) fn is_df_cast_from_float_spark_compatible(to_type: &DataType) -> bool {\n    matches!(\n        to_type,\n        DataType::Boolean\n            | DataType::Int8\n            | DataType::Int16\n            | DataType::Int32\n            | DataType::Int64\n            | DataType::Float32\n            | DataType::Float64\n    )\n}\n\n/// Check if DataFusion cast from decimal types is Spark compatible\npub(crate) fn is_df_cast_from_decimal_spark_compatible(to_type: &DataType) -> bool {\n    matches!(\n        to_type,\n        DataType::Int8\n            | DataType::Int16\n            | DataType::Int32\n            | DataType::Int64\n            | DataType::Float32 // DataFusion divides i128 by 10^scale in f64, then narrows to\n            | DataType::Float64 // f32 if needed; empirically matches Spark's BigDecimal.doubleValue\n                                // / floatValue for all tested values\n            | DataType::Decimal128(_, _)\n            | DataType::Decimal256(_, _)\n            // DataFusion's Decimal128→Utf8 cast uses plain notation (toPlainString semantics),\n            // matching Spark's TRY and ANSI modes. LEGACY mode is handled by a separate match\n            // arm in cast_array that applies Java BigDecimal.toString() (scientific notation\n            // for values where adjusted_exponent < -6, e.g. \"0E-18\" for zero with scale=18).\n            | DataType::Utf8\n    )\n    // Note: Boolean is intentionally absent. 
Decimal-to-boolean uses a dedicated\n    // spark_cast_decimal_to_boolean function (in cast.rs) that checks the raw i128\n    // value, bypassing the DataFusion cast kernel entirely.\n}\n\nmacro_rules! cast_float_to_timestamp_impl {\n    ($array:expr, $builder:expr, $primitive_type:ty, $eval_mode:expr) => {{\n        let arr = $array.as_primitive::<$primitive_type>();\n        for i in 0..arr.len() {\n            if arr.is_null(i) {\n                $builder.append_null();\n            } else {\n                let val = arr.value(i) as f64;\n                // Path 1: NaN/Infinity check - error says TIMESTAMP\n                if val.is_nan() || val.is_infinite() {\n                    if $eval_mode == EvalMode::Ansi {\n                        return Err(SparkError::CastInvalidValue {\n                            value: val.to_string(),\n                            from_type: \"DOUBLE\".to_string(),\n                            to_type: \"TIMESTAMP\".to_string(),\n                        });\n                    }\n                    $builder.append_null();\n                } else {\n                    // Path 2: Multiply then check overflow - error says BIGINT\n                    let micros = val * MICROS_PER_SECOND as f64;\n                    if micros.floor() <= i64::MAX as f64 && micros.ceil() >= i64::MIN as f64 {\n                        $builder.append_value(micros as i64);\n                    } else {\n                        if $eval_mode == EvalMode::Ansi {\n                            let value_str = if micros.is_infinite() {\n                                if micros.is_sign_positive() {\n                                    \"Infinity\".to_string()\n                                } else {\n                                    \"-Infinity\".to_string()\n                                }\n                            } else if micros.is_nan() {\n                                \"NaN\".to_string()\n                            } else {\n                                format!(\"{:e}\", micros).to_uppercase() + \"D\"\n                            };\n                            return Err(SparkError::CastOverFlow {\n                                value: value_str,\n                                from_type: \"DOUBLE\".to_string(),\n                                to_type: \"BIGINT\".to_string(),\n                            });\n                        }\n                        $builder.append_null();\n                    }\n                }\n            }\n        }\n    }};\n}\n\nmacro_rules! cast_float_to_string {\n    ($from:expr, $eval_mode:expr, $type:ty, $output_type:ty, $offset_type:ty) => {{\n\n        fn cast<OffsetSize>(\n            from: &dyn Array,\n            _eval_mode: EvalMode,\n        ) -> SparkResult<ArrayRef>\n        where\n            OffsetSize: OffsetSizeTrait, {\n                let array = from.as_any().downcast_ref::<$output_type>().unwrap();\n\n                // If the absolute number is less than 10,000,000 and greater than or equal to 0.001, the\n                // result is expressed without scientific notation with at least one digit on either side of\n                // the decimal point. Otherwise, Spark uses a mantissa followed by E and an\n                // exponent. The mantissa has an optional leading minus sign followed by one digit to the\n                // left of the decimal point, and the minimal number of digits greater than zero to the\n                // right. The exponent has an optional leading minus sign.\n                // source: https://docs.databricks.com/en/sql/language-manual/functions/cast.html\n\n                const LOWER_SCIENTIFIC_BOUND: $type = 0.001;\n                const UPPER_SCIENTIFIC_BOUND: $type = 10000000.0;\n\n                let output_array = array\n                    .iter()\n                    .map(|value| match value {\n                        Some(value) if value == <$type>::INFINITY => Ok(Some(\"Infinity\".to_string())),\n                        Some(value) if value == <$type>::NEG_INFINITY => Ok(Some(\"-Infinity\".to_string())),\n                        Some(value)\n                            if (value.abs() < UPPER_SCIENTIFIC_BOUND\n                                && value.abs() >= LOWER_SCIENTIFIC_BOUND)\n                                || value.abs() == 0.0 =>\n                        {\n                            let trailing_zero = if value.fract() == 0.0 { \".0\" } else { \"\" };\n\n                            Ok(Some(format!(\"{value}{trailing_zero}\")))\n                        }\n                        Some(value)\n                            if value.abs() >= UPPER_SCIENTIFIC_BOUND\n                                || value.abs() < LOWER_SCIENTIFIC_BOUND =>\n                        {\n                            let formatted = format!(\"{value:E}\");\n\n                            if formatted.contains(\".\") {\n                                Ok(Some(formatted))\n                            } else {\n                                // `formatted` is already in scientific notation; split it at E to\n                                // restore the \".0\" that gets dropped when the fraction is 0.0\n                                let prepare_number: Vec<&str> = formatted.split(\"E\").collect();\n\n                                let coefficient = prepare_number[0];\n\n                                let exponent = prepare_number[1];\n\n                                Ok(Some(format!(\"{coefficient}.0E{exponent}\")))\n                            }\n                        }\n                        Some(value) => Ok(Some(value.to_string())),\n                        _ => Ok(None),\n                    })\n                    .collect::<Result<GenericStringArray<OffsetSize>, SparkError>>()?;\n\n                Ok(Arc::new(output_array))\n            }\n\n        cast::<$offset_type>($from, $eval_mode)\n    }};\n}\n\n// eval mode is not needed: every integer value has an exact big-endian byte\n// representation, so this cast can never fail or overflow\n#[macro_export]\nmacro_rules! cast_whole_num_to_binary {\n    ($array:expr, $primitive_type:ty, $byte_size:expr) => {{\n        let input_arr = $array\n            .as_any()\n            .downcast_ref::<$primitive_type>()\n            .ok_or_else(|| SparkError::Internal(\"Expected numeric array\".to_string()))?;\n\n        let len = input_arr.len();\n        let mut builder = BinaryBuilder::with_capacity(len, len * $byte_size);\n\n        for i in 0..input_arr.len() {\n            if input_arr.is_null(i) {\n                builder.append_null();\n            } else {\n                builder.append_value(input_arr.value(i).to_be_bytes());\n            }\n        }\n\n        Ok(Arc::new(builder.finish()) as ArrayRef)\n    }};\n}\n\n
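// The formatting rules above are easiest to see with concrete values. A minimal\n// illustrative sketch (in-module test only; exercises spark_cast_float32_to_utf8,\n// which is defined later in this file):\n#[cfg(test)]\nmod float_to_string_sketch_tests {\n    use super::*;\n\n    #[test]\n    fn float_to_string_worked_examples() {\n        let arr = Float32Array::from(vec![2.0_f32, 0.0005, 12345678.0]);\n        let result = spark_cast_float32_to_utf8::<i32>(&arr, EvalMode::Legacy).unwrap();\n        let strings = result.as_string::<i32>();\n        // 0.001 <= |v| < 1e7: plain notation, with \".0\" appended to whole numbers\n        assert_eq!(\"2.0\", strings.value(0));\n        // |v| < 0.001: scientific notation, with \".0\" added to a bare coefficient\n        assert_eq!(\"5.0E-4\", strings.value(1));\n        // |v| >= 1e7: scientific notation\n        assert_eq!(\"1.2345678E7\", strings.value(2));\n    }\n}\n\nmacro_rules! 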
cast_int_to_timestamp_impl {\n    ($array:expr, $builder:expr, $primitive_type:ty) => {{\n        let arr = $array.as_primitive::<$primitive_type>();\n        for i in 0..arr.len() {\n            if arr.is_null(i) {\n                $builder.append_null();\n            } else {\n                // saturating_mul clamps to i64::MIN/MAX instead of panicking on\n                // overflow, which can occur for extreme values (e.g., Long.MIN_VALUE);\n                // this matches Spark behavior irrespective of EvalMode\n                let micros = (arr.value(i) as i64).saturating_mul(MICROS_PER_SECOND);\n                $builder.append_value(micros);\n            }\n        }\n    }};\n}\n\nmacro_rules! cast_int_to_int_macro {\n    (\n        $array: expr,\n        $eval_mode:expr,\n        $from_arrow_primitive_type: ty,\n        $to_arrow_primitive_type: ty,\n        $from_data_type: expr,\n        $to_native_type: ty,\n        $spark_from_data_type_name: expr,\n        $spark_to_data_type_name: expr\n    ) => {{\n        let cast_array = $array\n            .as_any()\n            .downcast_ref::<PrimitiveArray<$from_arrow_primitive_type>>()\n            .unwrap();\n        let spark_int_literal_suffix = match $from_data_type {\n            &DataType::Int64 => \"L\",\n            &DataType::Int16 => \"S\",\n            &DataType::Int8 => \"T\",\n            _ => \"\",\n        };\n\n        let output_array = match $eval_mode {\n            EvalMode::Legacy => cast_array\n                .iter()\n                .map(|value| match value {\n                    Some(value) => {\n                        Ok::<Option<$to_native_type>, SparkError>(Some(value as $to_native_type))\n                    }\n                    _ => Ok(None),\n                })\n                .collect::<Result<PrimitiveArray<$to_arrow_primitive_type>, _>>(),\n            _ => cast_array\n                .iter()\n                .map(|value| match value {\n                    Some(value) => {\n                        let res = <$to_native_type>::try_from(value);\n                        if res.is_err() {\n                            Err(cast_overflow(\n                                &(value.to_string() + spark_int_literal_suffix),\n                                $spark_from_data_type_name,\n                                $spark_to_data_type_name,\n                            ))\n                        } else {\n                            Ok::<Option<$to_native_type>, SparkError>(Some(res.unwrap()))\n                        }\n                    }\n                    _ => Ok(None),\n                })\n                .collect::<Result<PrimitiveArray<$to_arrow_primitive_type>, _>>(),\n        }?;\n        let result: SparkResult<ArrayRef> = Ok(Arc::new(output_array) as ArrayRef);\n        result\n    }};\n}\n\n
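// A minimal illustrative sketch (in-module test only) of the two eval modes for\n// integer narrowing: ANSI errors on overflow, LEGACY truncates like a Java\n// primitive cast. The second test demonstrates the Int-first behavior described\n// in the comment below.\n#[cfg(test)]\nmod int_narrowing_sketch_tests {\n    use super::*;\n\n    #[test]\n    fn int_to_int_overflow_examples() {\n        let arr = Int64Array::from(vec![i64::MAX]);\n        let legacy =\n            spark_cast_int_to_int(&arr, EvalMode::Legacy, &DataType::Int64, &DataType::Int32)\n                .unwrap();\n        // i64::MAX as i32 keeps only the low 32 bits, i.e. -1\n        assert_eq!(-1, legacy.as_primitive::<Int32Type>().value(0));\n        assert!(\n            spark_cast_int_to_int(&arr, EvalMode::Ansi, &DataType::Int64, &DataType::Int32)\n                .is_err()\n        );\n    }\n\n    #[test]\n    fn float_to_smallint_goes_through_int_first() {\n        // 1e10_f32 saturates to i32::MAX in the first step and then wraps to\n        // -1_i16 in LEGACY mode, matching Spark.\n        let arr = Float32Array::from(vec![1e10_f32]);\n        let out = spark_cast_nonintegral_numeric_to_integral(\n            &arr,\n            EvalMode::Legacy,\n            &DataType::Float32,\n            &DataType::Int16,\n        )\n        .unwrap();\n        assert_eq!(-1, out.as_primitive::<Int16Type>().value(0));\n    }\n}\n\n// When Spark casts to Byte/Short Types, it does not cast directly to Byte/Short.\n// It casts to Int first and then to Byte/Short. Because of potential overflows in the Int cast,\n// this can cause unexpected Short/Byte cast results. Replicate this behavior.\nmacro_rules! 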
cast_float_to_int16_down {\n    (\n        $array:expr,\n        $eval_mode:expr,\n        $src_array_type:ty,\n        $dest_array_type:ty,\n        $rust_src_type:ty,\n        $rust_dest_type:ty,\n        $src_type_str:expr,\n        $dest_type_str:expr,\n        $format_str:expr\n    ) => {{\n        let cast_array = $array\n            .as_any()\n            .downcast_ref::<$src_array_type>()\n            .expect(concat!(\"Expected a \", stringify!($src_array_type)));\n\n        let output_array = match $eval_mode {\n            EvalMode::Ansi => cast_array\n                .iter()\n                .map(|value| match value {\n                    Some(value) => {\n                        let is_overflow = value.is_nan() || value.abs() as i32 == i32::MAX;\n                        if is_overflow {\n                            return Err(cast_overflow(\n                                &format!($format_str, value).replace(\"e\", \"E\"),\n                                $src_type_str,\n                                $dest_type_str,\n                            ));\n                        }\n                        let i32_value = value as i32;\n                        <$rust_dest_type>::try_from(i32_value)\n                            .map_err(|_| {\n                                cast_overflow(\n                                    &format!($format_str, value).replace(\"e\", \"E\"),\n                                    $src_type_str,\n                                    $dest_type_str,\n                                )\n                            })\n                            .map(Some)\n                    }\n                    None => Ok(None),\n                })\n                .collect::<Result<$dest_array_type, _>>()?,\n            _ => cast_array\n                .iter()\n                .map(|value| match value {\n                    Some(value) => {\n                        let i32_value = value as i32;\n                        Ok::<Option<$rust_dest_type>, SparkError>(Some(\n                            i32_value as $rust_dest_type,\n                        ))\n                    }\n                    None => Ok(None),\n                })\n                .collect::<Result<$dest_array_type, _>>()?,\n        };\n        Ok(Arc::new(output_array) as ArrayRef)\n    }};\n}\n\nmacro_rules! 
cast_float_to_int32_up {\n    (\n        $array:expr,\n        $eval_mode:expr,\n        $src_array_type:ty,\n        $dest_array_type:ty,\n        $rust_src_type:ty,\n        $rust_dest_type:ty,\n        $src_type_str:expr,\n        $dest_type_str:expr,\n        $max_dest_val:expr,\n        $format_str:expr\n    ) => {{\n        let cast_array = $array\n            .as_any()\n            .downcast_ref::<$src_array_type>()\n            .expect(concat!(\"Expected a \", stringify!($src_array_type)));\n\n        let output_array = match $eval_mode {\n            EvalMode::Ansi => cast_array\n                .iter()\n                .map(|value| match value {\n                    Some(value) => {\n                        let is_overflow =\n                            value.is_nan() || value.abs() as $rust_dest_type == $max_dest_val;\n                        if is_overflow {\n                            return Err(cast_overflow(\n                                &format!($format_str, value).replace(\"e\", \"E\"),\n                                $src_type_str,\n                                $dest_type_str,\n                            ));\n                        }\n                        Ok(Some(value as $rust_dest_type))\n                    }\n                    None => Ok(None),\n                })\n                .collect::<Result<$dest_array_type, _>>()?,\n            _ => cast_array\n                .iter()\n                .map(|value| match value {\n                    Some(value) => {\n                        Ok::<Option<$rust_dest_type>, SparkError>(Some(value as $rust_dest_type))\n                    }\n                    None => Ok(None),\n                })\n                .collect::<Result<$dest_array_type, _>>()?,\n        };\n        Ok(Arc::new(output_array) as ArrayRef)\n    }};\n}\n\n// When Spark casts to Byte/Short Types, it does not cast directly to Byte/Short.\n// It casts to Int first and then to Byte/Short. Because of potential overflows in the Int cast,\n// this can cause unexpected Short/Byte cast results. Replicate this behavior.\nmacro_rules! 
cast_decimal_to_int16_down {\n    (\n        $array:expr,\n        $eval_mode:expr,\n        $dest_array_type:ty,\n        $rust_dest_type:ty,\n        $dest_type_str:expr,\n        $precision:expr,\n        $scale:expr\n    ) => {{\n        let cast_array = $array\n            .as_any()\n            .downcast_ref::<Decimal128Array>()\n            .expect(\"Expected a Decimal128ArrayType\");\n\n        let output_array = match $eval_mode {\n            EvalMode::Ansi => cast_array\n                .iter()\n                .map(|value| match value {\n                    Some(value) => {\n                        let divisor = 10_i128.pow($scale as u32);\n                        let truncated = value / divisor;\n                        let is_overflow = truncated.abs() > i32::MAX.into();\n                        if is_overflow {\n                            return Err(cast_overflow(\n                                &format!(\n                                    \"{}BD\",\n                                    format_decimal_str(\n                                        &value.to_string(),\n                                        $precision as usize,\n                                        $scale\n                                    )\n                                ),\n                                &format!(\"DECIMAL({},{})\", $precision, $scale),\n                                $dest_type_str,\n                            ));\n                        }\n                        let i32_value = truncated as i32;\n                        <$rust_dest_type>::try_from(i32_value)\n                            .map_err(|_| {\n                                cast_overflow(\n                                    &format!(\n                                        \"{}BD\",\n                                        format_decimal_str(\n                                            &value.to_string(),\n                                            $precision as usize,\n                                            $scale\n                                        )\n                                    ),\n                                    &format!(\"DECIMAL({},{})\", $precision, $scale),\n                                    $dest_type_str,\n                                )\n                            })\n                            .map(Some)\n                    }\n                    None => Ok(None),\n                })\n                .collect::<Result<$dest_array_type, _>>()?,\n            _ => cast_array\n                .iter()\n                .map(|value| match value {\n                    Some(value) => {\n                        let divisor = 10_i128.pow($scale as u32);\n                        let i32_value = (value / divisor) as i32;\n                        Ok::<Option<$rust_dest_type>, SparkError>(Some(\n                            i32_value as $rust_dest_type,\n                        ))\n                    }\n                    None => Ok(None),\n                })\n                .collect::<Result<$dest_array_type, _>>()?,\n        };\n        Ok(Arc::new(output_array) as ArrayRef)\n    }};\n}\n\nmacro_rules! 
cast_decimal_to_int32_up {\n    (\n        $array:expr,\n        $eval_mode:expr,\n        $dest_array_type:ty,\n        $rust_dest_type:ty,\n        $dest_type_str:expr,\n        $max_dest_val:expr,\n        $precision:expr,\n        $scale:expr\n    ) => {{\n        let cast_array = $array\n            .as_any()\n            .downcast_ref::<Decimal128Array>()\n            .expect(\"Expected a Decimal128ArrayType\");\n\n        let output_array = match $eval_mode {\n            EvalMode::Ansi => cast_array\n                .iter()\n                .map(|value| match value {\n                    Some(value) => {\n                        let divisor = 10_i128.pow($scale as u32);\n                        let truncated = value / divisor;\n                        let is_overflow = truncated.abs() > $max_dest_val.into();\n                        if is_overflow {\n                            return Err(cast_overflow(\n                                &format!(\n                                    \"{}BD\",\n                                    format_decimal_str(\n                                        &value.to_string(),\n                                        $precision as usize,\n                                        $scale\n                                    )\n                                ),\n                                &format!(\"DECIMAL({},{})\", $precision, $scale),\n                                $dest_type_str,\n                            ));\n                        }\n                        Ok(Some(truncated as $rust_dest_type))\n                    }\n                    None => Ok(None),\n                })\n                .collect::<Result<$dest_array_type, _>>()?,\n            _ => cast_array\n                .iter()\n                .map(|value| match value {\n                    Some(value) => {\n                        let divisor = 10_i128.pow($scale as u32);\n                        let truncated = value / divisor;\n                        Ok::<Option<$rust_dest_type>, SparkError>(Some(\n                            truncated as $rust_dest_type,\n                        ))\n                    }\n                    None => Ok(None),\n                })\n                .collect::<Result<$dest_array_type, _>>()?,\n        };\n        Ok(Arc::new(output_array) as ArrayRef)\n    }};\n}\n\n// copied from arrow::datatypes::Decimal128Type since Decimal128Type::format_decimal can't be called directly\npub(crate) fn format_decimal_str(value_str: &str, precision: usize, scale: i8) -> String {\n    let (sign, rest) = match value_str.strip_prefix('-') {\n        Some(stripped) => (\"-\", stripped),\n        None => (\"\", value_str),\n    };\n    let bound = precision.min(rest.len()) + sign.len();\n    let value_str = &value_str[0..bound];\n\n    if scale == 0 {\n        value_str.to_string()\n    } else if scale < 0 {\n        let padding = value_str.len() + scale.unsigned_abs() as usize;\n        format!(\"{value_str:0<padding$}\")\n    } else if rest.len() > scale as usize {\n        // Decimal separator is in the middle of the string\n        let (whole, decimal) = value_str.split_at(value_str.len() - scale as usize);\n        format!(\"{whole}.{decimal}\")\n    } else {\n        // String has to be padded\n        format!(\"{}0.{:0>width$}\", sign, rest, width = scale as usize)\n    }\n}\n\n
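// Worked examples of format_decimal_str (illustrative in-module sketch):\n#[cfg(test)]\nmod format_decimal_sketch_tests {\n    use super::*;\n\n    #[test]\n    fn format_decimal_str_worked_examples() {\n        // decimal separator lands inside the digits\n        assert_eq!(\"123.45\", format_decimal_str(\"12345\", 10, 2));\n        // fewer digits than the scale: zero-padded after \"0.\"\n        assert_eq!(\"-0.045\", format_decimal_str(\"-45\", 10, 3));\n        // negative scale appends zeros\n        assert_eq!(\"1200\", format_decimal_str(\"12\", 10, -2));\n    }\n}\n\n/// Casts a Decimal128 array to string using Java's BigDecimal.toString() semantics,\n/// which is Spark's LEGACY eval mode behavior. 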
Plain notation when scale >= 0 and\n/// adjusted_exponent >= -6, otherwise scientific notation (e.g. \"0E-18\" for zero\n/// with scale=18, since adjusted_exponent = -18 + 0 = -18 < -6).\n///\n/// TRY and ANSI modes produce plain notation via DataFusion's cast instead.\npub(crate) fn cast_decimal128_to_utf8(array: &ArrayRef, scale: i8) -> SparkResult<ArrayRef> {\n    let decimal_array = array\n        .as_any()\n        .downcast_ref::<Decimal128Array>()\n        .expect(\"Expected a Decimal128Array\");\n    let mut builder = StringBuilder::with_capacity(decimal_array.len(), decimal_array.len() * 16);\n    // Reuse a single String buffer across rows to avoid one allocation per value.\n    let mut buf = String::with_capacity(40);\n    for opt_val in decimal_array.iter() {\n        match opt_val {\n            None => builder.append_null(),\n            Some(unscaled) => {\n                buf.clear();\n                decimal128_to_java_string(unscaled, scale, &mut buf);\n                builder.append_value(&buf);\n            }\n        }\n    }\n    Ok(Arc::new(builder.finish()))\n}\n\n/// Formats a Decimal128 unscaled value into `out` matching Java's BigDecimal.toString():\n/// - Plain notation when scale >= 0 and adjusted_exponent >= -6\n/// - Scientific notation otherwise\n///\n/// adjusted_exponent = -scale + (numDigits - 1)\nfn decimal128_to_java_string(unscaled: i128, scale: i8, out: &mut String) {\n    use std::fmt::Write;\n    let negative = unscaled < 0;\n    let sign = if negative { \"-\" } else { \"\" };\n    let coeff = unscaled.unsigned_abs().to_string();\n    let num_digits = coeff.len() as i64;\n    let adj_exp = -(scale as i64) + (num_digits - 1);\n\n    if scale >= 0 && adj_exp >= -6 {\n        let scale_u = scale as usize;\n        let num_digits_u = num_digits as usize;\n        if scale_u == 0 {\n            write!(out, \"{sign}{coeff}\").unwrap();\n        } else if num_digits_u > scale_u {\n            let (int_part, frac_part) = coeff.split_at(num_digits_u - scale_u);\n            write!(out, \"{sign}{int_part}.{frac_part}\").unwrap();\n        } else {\n            let leading = scale_u - num_digits_u;\n            write!(out, \"{sign}0.{}{coeff}\", \"0\".repeat(leading)).unwrap();\n        }\n    } else {\n        if num_digits > 1 {\n            write!(out, \"{sign}{}.{}\", &coeff[..1], &coeff[1..]).unwrap();\n        } else {\n            write!(out, \"{sign}{coeff}\").unwrap();\n        }\n        if adj_exp > 0 {\n            write!(out, \"E+{adj_exp}\").unwrap();\n        } else {\n            write!(out, \"E{adj_exp}\").unwrap();\n        }\n    }\n}\n\npub(crate) fn spark_cast_float64_to_utf8<OffsetSize>(\n    from: &dyn Array,\n    _eval_mode: EvalMode,\n) -> SparkResult<ArrayRef>\nwhere\n    OffsetSize: OffsetSizeTrait,\n{\n    cast_float_to_string!(from, _eval_mode, f64, Float64Array, OffsetSize)\n}\n\npub(crate) fn spark_cast_float32_to_utf8<OffsetSize>(\n    from: &dyn Array,\n    _eval_mode: EvalMode,\n) -> SparkResult<ArrayRef>\nwhere\n    OffsetSize: OffsetSizeTrait,\n{\n    cast_float_to_string!(from, _eval_mode, f32, Float32Array, OffsetSize)\n}\n\nfn cast_int_to_decimal128_internal<T>(\n    array: &PrimitiveArray<T>,\n    precision: u8,\n    scale: i8,\n    eval_mode: EvalMode,\n) -> SparkResult<ArrayRef>\nwhere\n    T: ArrowPrimitiveType,\n    T::Native: Into<i128>,\n{\n    let mut builder = Decimal128Builder::with_capacity(array.len());\n    let multiplier = 10_i128.pow(scale as u32);\n\n    for i in 0..array.len() {\n        if 
array.is_null(i) {\n            builder.append_null();\n        } else {\n            let v = array.value(i).into();\n            let scaled = v.checked_mul(multiplier);\n            match scaled {\n                Some(scaled) => {\n                    if !is_validate_decimal_precision(scaled, precision) {\n                        match eval_mode {\n                            EvalMode::Ansi => {\n                                return Err(SparkError::NumericValueOutOfRange {\n                                    value: v.to_string(),\n                                    precision,\n                                    scale,\n                                });\n                            }\n                            EvalMode::Try | EvalMode::Legacy => builder.append_null(),\n                        }\n                    } else {\n                        builder.append_value(scaled);\n                    }\n                }\n                _ => match eval_mode {\n                    EvalMode::Ansi => {\n                        return Err(SparkError::NumericValueOutOfRange {\n                            value: v.to_string(),\n                            precision,\n                            scale,\n                        })\n                    }\n                    EvalMode::Legacy | EvalMode::Try => builder.append_null(),\n                },\n            }\n        }\n    }\n    Ok(Arc::new(\n        builder.with_precision_and_scale(precision, scale)?.finish(),\n    ))\n}\n\npub(crate) fn cast_int_to_decimal128(\n    array: &dyn Array,\n    eval_mode: EvalMode,\n    from_type: &DataType,\n    to_type: &DataType,\n    precision: u8,\n    scale: i8,\n) -> SparkResult<ArrayRef> {\n    match (from_type, to_type) {\n        (DataType::Int8, DataType::Decimal128(_p, _s)) => {\n            cast_int_to_decimal128_internal::<Int8Type>(\n                array.as_primitive::<Int8Type>(),\n                precision,\n                scale,\n                eval_mode,\n            )\n        }\n        (DataType::Int16, DataType::Decimal128(_p, _s)) => {\n            cast_int_to_decimal128_internal::<Int16Type>(\n                array.as_primitive::<Int16Type>(),\n                precision,\n                scale,\n                eval_mode,\n            )\n        }\n        (DataType::Int32, DataType::Decimal128(_p, _s)) => {\n            cast_int_to_decimal128_internal::<Int32Type>(\n                array.as_primitive::<Int32Type>(),\n                precision,\n                scale,\n                eval_mode,\n            )\n        }\n        (DataType::Int64, DataType::Decimal128(_p, _s)) => {\n            cast_int_to_decimal128_internal::<Int64Type>(\n                array.as_primitive::<Int64Type>(),\n                precision,\n                scale,\n                eval_mode,\n            )\n        }\n        _ => Err(SparkError::Internal(format!(\n            \"Unsupported cast from datatype: {}\",\n            from_type\n        ))),\n    }\n}\n\npub(crate) fn spark_cast_int_to_int(\n    array: &dyn Array,\n    eval_mode: EvalMode,\n    from_type: &DataType,\n    to_type: &DataType,\n) -> SparkResult<ArrayRef> {\n    match (from_type, to_type) {\n        (DataType::Int64, DataType::Int32) => cast_int_to_int_macro!(\n            array, eval_mode, Int64Type, Int32Type, from_type, i32, \"BIGINT\", \"INT\"\n        ),\n        (DataType::Int64, DataType::Int16) => cast_int_to_int_macro!(\n            array, eval_mode, Int64Type, Int16Type, from_type, i16, \"BIGINT\", 
\"SMALLINT\"\n        ),\n        (DataType::Int64, DataType::Int8) => cast_int_to_int_macro!(\n            array, eval_mode, Int64Type, Int8Type, from_type, i8, \"BIGINT\", \"TINYINT\"\n        ),\n        (DataType::Int32, DataType::Int16) => cast_int_to_int_macro!(\n            array, eval_mode, Int32Type, Int16Type, from_type, i16, \"INT\", \"SMALLINT\"\n        ),\n        (DataType::Int32, DataType::Int8) => cast_int_to_int_macro!(\n            array, eval_mode, Int32Type, Int8Type, from_type, i8, \"INT\", \"TINYINT\"\n        ),\n        (DataType::Int16, DataType::Int8) => cast_int_to_int_macro!(\n            array, eval_mode, Int16Type, Int8Type, from_type, i8, \"SMALLINT\", \"TINYINT\"\n        ),\n        _ => unreachable!(\n            \"{}\",\n            format!(\"invalid integer type {to_type} in cast from {from_type}\")\n        ),\n    }\n}\n\npub(crate) fn spark_cast_decimal_to_boolean(array: &dyn Array) -> SparkResult<ArrayRef> {\n    let decimal_array = array.as_primitive::<Decimal128Type>();\n    let mut result = BooleanBuilder::with_capacity(decimal_array.len());\n    for i in 0..decimal_array.len() {\n        if decimal_array.is_null(i) {\n            result.append_null()\n        } else {\n            result.append_value(!decimal_array.value(i).is_zero());\n        }\n    }\n    Ok(Arc::new(result.finish()))\n}\n\npub(crate) fn cast_float64_to_decimal128(\n    array: &dyn Array,\n    precision: u8,\n    scale: i8,\n    eval_mode: EvalMode,\n) -> SparkResult<ArrayRef> {\n    cast_floating_point_to_decimal128::<Float64Type>(array, precision, scale, eval_mode)\n}\n\npub(crate) fn cast_float32_to_decimal128(\n    array: &dyn Array,\n    precision: u8,\n    scale: i8,\n    eval_mode: EvalMode,\n) -> SparkResult<ArrayRef> {\n    cast_floating_point_to_decimal128::<Float32Type>(array, precision, scale, eval_mode)\n}\n\nfn cast_floating_point_to_decimal128<T: ArrowPrimitiveType>(\n    array: &dyn Array,\n    precision: u8,\n    scale: i8,\n    eval_mode: EvalMode,\n) -> SparkResult<ArrayRef>\nwhere\n    <T as ArrowPrimitiveType>::Native: AsPrimitive<f64>,\n{\n    let input = array.as_any().downcast_ref::<PrimitiveArray<T>>().unwrap();\n    let mut cast_array = PrimitiveArray::<Decimal128Type>::builder(input.len());\n\n    let mul = 10_f64.powi(scale as i32);\n\n    for i in 0..input.len() {\n        if input.is_null(i) {\n            cast_array.append_null();\n            continue;\n        }\n\n        let input_value = input.value(i).as_();\n        if let Some(v) = (input_value * mul).round().to_i128() {\n            if is_validate_decimal_precision(v, precision) {\n                cast_array.append_value(v);\n                continue;\n            }\n        };\n\n        if eval_mode == EvalMode::Ansi {\n            return Err(SparkError::NumericValueOutOfRange {\n                value: input_value.to_string(),\n                precision,\n                scale,\n            });\n        }\n        cast_array.append_null();\n    }\n\n    let res = Arc::new(\n        cast_array\n            .with_precision_and_scale(precision, scale)?\n            .finish(),\n    ) as ArrayRef;\n    Ok(res)\n}\n\npub(crate) fn spark_cast_nonintegral_numeric_to_integral(\n    array: &dyn Array,\n    eval_mode: EvalMode,\n    from_type: &DataType,\n    to_type: &DataType,\n) -> SparkResult<ArrayRef> {\n    match (from_type, to_type) {\n        (DataType::Float32, DataType::Int8) => cast_float_to_int16_down!(\n            array,\n            eval_mode,\n            Float32Array,\n          
  Int8Array,\n            f32,\n            i8,\n            \"FLOAT\",\n            \"TINYINT\",\n            \"{:e}\"\n        ),\n        (DataType::Float32, DataType::Int16) => cast_float_to_int16_down!(\n            array,\n            eval_mode,\n            Float32Array,\n            Int16Array,\n            f32,\n            i16,\n            \"FLOAT\",\n            \"SMALLINT\",\n            \"{:e}\"\n        ),\n        (DataType::Float32, DataType::Int32) => cast_float_to_int32_up!(\n            array,\n            eval_mode,\n            Float32Array,\n            Int32Array,\n            f32,\n            i32,\n            \"FLOAT\",\n            \"INT\",\n            i32::MAX,\n            \"{:e}\"\n        ),\n        (DataType::Float32, DataType::Int64) => cast_float_to_int32_up!(\n            array,\n            eval_mode,\n            Float32Array,\n            Int64Array,\n            f32,\n            i64,\n            \"FLOAT\",\n            \"BIGINT\",\n            i64::MAX,\n            \"{:e}\"\n        ),\n        (DataType::Float64, DataType::Int8) => cast_float_to_int16_down!(\n            array,\n            eval_mode,\n            Float64Array,\n            Int8Array,\n            f64,\n            i8,\n            \"DOUBLE\",\n            \"TINYINT\",\n            \"{:e}D\"\n        ),\n        (DataType::Float64, DataType::Int16) => cast_float_to_int16_down!(\n            array,\n            eval_mode,\n            Float64Array,\n            Int16Array,\n            f64,\n            i16,\n            \"DOUBLE\",\n            \"SMALLINT\",\n            \"{:e}D\"\n        ),\n        (DataType::Float64, DataType::Int32) => cast_float_to_int32_up!(\n            array,\n            eval_mode,\n            Float64Array,\n            Int32Array,\n            f64,\n            i32,\n            \"DOUBLE\",\n            \"INT\",\n            i32::MAX,\n            \"{:e}D\"\n        ),\n        (DataType::Float64, DataType::Int64) => cast_float_to_int32_up!(\n            array,\n            eval_mode,\n            Float64Array,\n            Int64Array,\n            f64,\n            i64,\n            \"DOUBLE\",\n            \"BIGINT\",\n            i64::MAX,\n            \"{:e}D\"\n        ),\n        (DataType::Decimal128(precision, scale), DataType::Int8) => {\n            cast_decimal_to_int16_down!(\n                array, eval_mode, Int8Array, i8, \"TINYINT\", *precision, *scale\n            )\n        }\n        (DataType::Decimal128(precision, scale), DataType::Int16) => {\n            cast_decimal_to_int16_down!(\n                array, eval_mode, Int16Array, i16, \"SMALLINT\", *precision, *scale\n            )\n        }\n        (DataType::Decimal128(precision, scale), DataType::Int32) => {\n            cast_decimal_to_int32_up!(\n                array,\n                eval_mode,\n                Int32Array,\n                i32,\n                \"INT\",\n                i32::MAX,\n                *precision,\n                *scale\n            )\n        }\n        (DataType::Decimal128(precision, scale), DataType::Int64) => {\n            cast_decimal_to_int32_up!(\n                array,\n                eval_mode,\n                Int64Array,\n                i64,\n                \"BIGINT\",\n                i64::MAX,\n                *precision,\n                *scale\n            )\n        }\n        _ => unreachable!(\n            \"{}\",\n            format!(\"invalid cast from non-integral numeric type: {from_type} to integral numeric type: 
{to_type}\")\n        ),\n    }\n}\n\npub(crate) fn cast_int_to_timestamp(\n    array_ref: &ArrayRef,\n    target_tz: &Option<Arc<str>>,\n) -> SparkResult<ArrayRef> {\n    // Input is seconds since epoch, multiply by MICROS_PER_SECOND to get microseconds.\n    let mut builder = TimestampMicrosecondBuilder::with_capacity(array_ref.len());\n\n    match array_ref.data_type() {\n        DataType::Int8 => cast_int_to_timestamp_impl!(array_ref, builder, Int8Type),\n        DataType::Int16 => cast_int_to_timestamp_impl!(array_ref, builder, Int16Type),\n        DataType::Int32 => cast_int_to_timestamp_impl!(array_ref, builder, Int32Type),\n        DataType::Int64 => cast_int_to_timestamp_impl!(array_ref, builder, Int64Type),\n        dt => {\n            return Err(SparkError::Internal(format!(\n                \"Unsupported type for cast_int_to_timestamp: {:?}\",\n                dt\n            )))\n        }\n    }\n\n    Ok(Arc::new(builder.finish().with_timezone_opt(target_tz.clone())) as ArrayRef)\n}\n\npub(crate) fn cast_decimal_to_timestamp(\n    array_ref: &ArrayRef,\n    target_tz: &Option<Arc<str>>,\n    scale: i8,\n) -> SparkResult<ArrayRef> {\n    let arr = array_ref.as_primitive::<Decimal128Type>();\n    let scale_factor = 10_i128.pow(scale as u32);\n    let mut builder = TimestampMicrosecondBuilder::with_capacity(arr.len());\n\n    for i in 0..arr.len() {\n        if arr.is_null(i) {\n            builder.append_null();\n        } else {\n            let value = arr.value(i);\n            // Note: spark's big decimal truncates to\n            // long value and does not throw error (in all leval modes)\n            let value_256 = i256::from_i128(value);\n            let micros_256 = value_256 * i256::from_i128(MICROS_PER_SECOND as i128);\n            let ts = micros_256 / i256::from_i128(scale_factor);\n            builder.append_value(ts.as_i128() as i64);\n        }\n    }\n\n    Ok(Arc::new(builder.finish().with_timezone_opt(target_tz.clone())) as ArrayRef)\n}\n\npub(crate) fn cast_float_to_timestamp(\n    array_ref: &ArrayRef,\n    target_tz: &Option<Arc<str>>,\n    eval_mode: EvalMode,\n) -> SparkResult<ArrayRef> {\n    let mut builder = TimestampMicrosecondBuilder::with_capacity(array_ref.len());\n\n    match array_ref.data_type() {\n        DataType::Float32 => {\n            cast_float_to_timestamp_impl!(array_ref, builder, Float32Type, eval_mode)\n        }\n        DataType::Float64 => {\n            cast_float_to_timestamp_impl!(array_ref, builder, Float64Type, eval_mode)\n        }\n        dt => {\n            return Err(SparkError::Internal(format!(\n                \"Unsupported type for cast_float_to_timestamp: {:?}\",\n                dt\n            )))\n        }\n    }\n\n    Ok(Arc::new(builder.finish().with_timezone_opt(target_tz.clone())) as ArrayRef)\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::AsArray;\n    use arrow::datatypes::TimestampMicrosecondType;\n    use core::f64;\n\n    #[test]\n    fn test_spark_cast_int_to_int_overflow() {\n        // Test Int64 -> Int32 overflow\n        let array: ArrayRef = Arc::new(Int64Array::from(vec![\n            Some(i64::MAX),\n            Some(i64::MIN),\n            Some(100),\n        ]));\n\n        // Legacy mode should wrap around\n        let result =\n            spark_cast_int_to_int(&array, EvalMode::Legacy, &DataType::Int64, &DataType::Int32)\n                .unwrap();\n        let int32_array = result.as_primitive::<Int32Type>();\n        assert_eq!(int32_array.value(2), 
100);\n\n        // Ansi mode should error on overflow\n        let result =\n            spark_cast_int_to_int(&array, EvalMode::Ansi, &DataType::Int64, &DataType::Int32);\n        assert!(result.is_err());\n    }\n\n    #[test]\n    fn test_spark_cast_decimal_to_boolean() {\n        let array: ArrayRef = Arc::new(\n            Decimal128Array::from(vec![Some(0), Some(100), Some(-100), None])\n                .with_precision_and_scale(10, 2)\n                .unwrap(),\n        );\n        let result = spark_cast_decimal_to_boolean(&array).unwrap();\n        let bool_array = result.as_boolean();\n        assert!(!bool_array.value(0)); // 0 -> false\n        assert!(bool_array.value(1)); // 100 -> true\n        assert!(bool_array.value(2)); // -100 -> true\n        assert!(bool_array.is_null(3)); // null -> null\n    }\n\n    #[test]\n    fn test_cast_int_to_decimal128() {\n        let array: ArrayRef = Arc::new(Int32Array::from(vec![Some(100), Some(-100), None]));\n        let result = cast_int_to_decimal128(\n            &array,\n            EvalMode::Legacy,\n            &DataType::Int32,\n            &DataType::Decimal128(10, 2),\n            10,\n            2,\n        )\n        .unwrap();\n        let decimal_array = result.as_primitive::<Decimal128Type>();\n        assert_eq!(decimal_array.value(0), 10000); // 100 * 10^2\n        assert_eq!(decimal_array.value(1), -10000); // -100 * 10^2\n        assert!(decimal_array.is_null(2));\n    }\n    #[test]\n    fn test_cast_int_to_timestamp() {\n        let timezones: [Option<Arc<str>>; 6] = [\n            Some(Arc::from(\"UTC\")),\n            Some(Arc::from(\"America/New_York\")),\n            Some(Arc::from(\"America/Los_Angeles\")),\n            Some(Arc::from(\"Europe/London\")),\n            Some(Arc::from(\"Asia/Tokyo\")),\n            Some(Arc::from(\"Australia/Sydney\")),\n        ];\n\n        for tz in &timezones {\n            let int8_array: ArrayRef = Arc::new(Int8Array::from(vec![\n                Some(0),\n                Some(1),\n                Some(-1),\n                Some(127),\n                Some(-128),\n                None,\n            ]));\n\n            let result = cast_int_to_timestamp(&int8_array, tz).unwrap();\n            let ts_array = result.as_primitive::<TimestampMicrosecondType>();\n\n            assert_eq!(ts_array.value(0), 0);\n            assert_eq!(ts_array.value(1), 1_000_000);\n            assert_eq!(ts_array.value(2), -1_000_000);\n            assert_eq!(ts_array.value(3), 127_000_000);\n            assert_eq!(ts_array.value(4), -128_000_000);\n            assert!(ts_array.is_null(5));\n            assert_eq!(ts_array.timezone(), tz.as_ref().map(|s| s.as_ref()));\n\n            let int16_array: ArrayRef = Arc::new(Int16Array::from(vec![\n                Some(0),\n                Some(1),\n                Some(-1),\n                Some(32767),\n                Some(-32768),\n                None,\n            ]));\n\n            let result = cast_int_to_timestamp(&int16_array, tz).unwrap();\n            let ts_array = result.as_primitive::<TimestampMicrosecondType>();\n\n            assert_eq!(ts_array.value(0), 0);\n            assert_eq!(ts_array.value(1), 1_000_000);\n            assert_eq!(ts_array.value(2), -1_000_000);\n            assert_eq!(ts_array.value(3), 32_767_000_000_i64);\n            assert_eq!(ts_array.value(4), -32_768_000_000_i64);\n            assert!(ts_array.is_null(5));\n            assert_eq!(ts_array.timezone(), tz.as_ref().map(|s| s.as_ref()));\n\n            
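// 1704067200 seconds == 2024-01-01T00:00:00 UTC (19723 days * 86400 s).\n            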
let int32_array: ArrayRef = Arc::new(Int32Array::from(vec![\n                Some(0),\n                Some(1),\n                Some(-1),\n                Some(1704067200),\n                None,\n            ]));\n\n            let result = cast_int_to_timestamp(&int32_array, tz).unwrap();\n            let ts_array = result.as_primitive::<TimestampMicrosecondType>();\n\n            assert_eq!(ts_array.value(0), 0);\n            assert_eq!(ts_array.value(1), 1_000_000);\n            assert_eq!(ts_array.value(2), -1_000_000);\n            assert_eq!(ts_array.value(3), 1_704_067_200_000_000_i64);\n            assert!(ts_array.is_null(4));\n            assert_eq!(ts_array.timezone(), tz.as_ref().map(|s| s.as_ref()));\n\n            let int64_array: ArrayRef = Arc::new(Int64Array::from(vec![\n                Some(0),\n                Some(1),\n                Some(-1),\n                Some(i64::MAX),\n                Some(i64::MIN),\n            ]));\n\n            let result = cast_int_to_timestamp(&int64_array, tz).unwrap();\n            let ts_array = result.as_primitive::<TimestampMicrosecondType>();\n\n            assert_eq!(ts_array.value(0), 0);\n            assert_eq!(ts_array.value(1), 1_000_000_i64);\n            assert_eq!(ts_array.value(2), -1_000_000_i64);\n            assert_eq!(ts_array.value(3), i64::MAX);\n            assert_eq!(ts_array.value(4), i64::MIN);\n            assert_eq!(ts_array.timezone(), tz.as_ref().map(|s| s.as_ref()));\n        }\n    }\n\n    #[test]\n    // Currently the cast function depends on `f64::powi`, which has unspecified precision according to the docs:\n    // https://doc.rust-lang.org/std/primitive.f64.html#unspecified-precision.\n    // Miri deliberately applies random floating-point errors to these operations to expose bugs:\n    // https://github.com/rust-lang/miri/issues/4395.\n    // The random errors may interfere with test cases at rounding edges, so we ignore this test under Miri for now.\n    // Once https://github.com/apache/datafusion-comet/issues/1371 is fixed, this should no longer be an issue.\n    #[cfg_attr(miri, ignore)]\n    fn test_cast_float_to_decimal() {\n        let a: ArrayRef = Arc::new(Float64Array::from(vec![\n            Some(42.),\n            Some(0.5153125),\n            Some(-42.4242415),\n            Some(42e-314),\n            Some(0.),\n            Some(-4242.424242),\n            Some(f64::INFINITY),\n            Some(f64::NEG_INFINITY),\n            Some(f64::NAN),\n            None,\n        ]));\n        let b =\n            cast_floating_point_to_decimal128::<Float64Type>(&a, 8, 6, EvalMode::Legacy).unwrap();\n        assert_eq!(b.len(), a.len());\n        let casted = b.as_primitive::<Decimal128Type>();\n        assert_eq!(casted.value(0), 42000000);\n        // https://github.com/apache/datafusion-comet/issues/1371\n        // assert_eq!(casted.value(1), 515313);\n        assert_eq!(casted.value(2), -42424242);\n        assert_eq!(casted.value(3), 0);\n        assert_eq!(casted.value(4), 0);\n        assert!(casted.is_null(5));\n        assert!(casted.is_null(6));\n        assert!(casted.is_null(7));\n        assert!(casted.is_null(8));\n        assert!(casted.is_null(9));\n    }\n\n    #[test]\n    fn test_cast_decimal_to_timestamp() {\n        let timezones: [Option<Arc<str>>; 3] = [\n            Some(Arc::from(\"UTC\")),\n            Some(Arc::from(\"America/Los_Angeles\")),\n            None,\n        ];\n\n        for tz in &timezones {\n            // Decimal128 with scale 6\n            let decimal_array: 
ArrayRef = Arc::new(\n                Decimal128Array::from(vec![\n                    Some(0_i128),\n                    Some(1_000_000_i128),\n                    Some(-1_000_000_i128),\n                    Some(1_500_000_i128),\n                    Some(123_456_789_i128),\n                    None,\n                ])\n                .with_precision_and_scale(18, 6)\n                .unwrap(),\n            );\n\n            let result = cast_decimal_to_timestamp(&decimal_array, tz, 6).unwrap();\n            let ts_array = result.as_primitive::<TimestampMicrosecondType>();\n\n            assert_eq!(ts_array.value(0), 0);\n            assert_eq!(ts_array.value(1), 1_000_000);\n            assert_eq!(ts_array.value(2), -1_000_000);\n            assert_eq!(ts_array.value(3), 1_500_000);\n            assert_eq!(ts_array.value(4), 123_456_789);\n            assert!(ts_array.is_null(5));\n            assert_eq!(ts_array.timezone(), tz.as_ref().map(|s| s.as_ref()));\n\n            // Test with scale 2\n            let decimal_array: ArrayRef = Arc::new(\n                Decimal128Array::from(vec![Some(100_i128), Some(150_i128), Some(-250_i128)])\n                    .with_precision_and_scale(10, 2)\n                    .unwrap(),\n            );\n\n            let result = cast_decimal_to_timestamp(&decimal_array, tz, 2).unwrap();\n            let ts_array = result.as_primitive::<TimestampMicrosecondType>();\n\n            assert_eq!(ts_array.value(0), 1_000_000);\n            assert_eq!(ts_array.value(1), 1_500_000);\n            assert_eq!(ts_array.value(2), -2_500_000);\n        }\n    }\n\n    #[test]\n    fn test_cast_float_to_timestamp() {\n        let timezones: [Option<Arc<str>>; 3] = [\n            Some(Arc::from(\"UTC\")),\n            Some(Arc::from(\"America/Los_Angeles\")),\n            None,\n        ];\n        let eval_modes = [EvalMode::Legacy, EvalMode::Ansi, EvalMode::Try];\n\n        for tz in &timezones {\n            for eval_mode in &eval_modes {\n                // Float64 tests\n                let f64_array: ArrayRef = Arc::new(Float64Array::from(vec![\n                    Some(0.0),\n                    Some(1.0),\n                    Some(-1.0),\n                    Some(1.5),\n                    Some(0.000001),\n                    None,\n                ]));\n\n                let result = cast_float_to_timestamp(&f64_array, tz, *eval_mode).unwrap();\n                let ts_array = result.as_primitive::<TimestampMicrosecondType>();\n\n                assert_eq!(ts_array.value(0), 0);\n                assert_eq!(ts_array.value(1), 1_000_000);\n                assert_eq!(ts_array.value(2), -1_000_000);\n                assert_eq!(ts_array.value(3), 1_500_000);\n                assert_eq!(ts_array.value(4), 1);\n                assert!(ts_array.is_null(5));\n                assert_eq!(ts_array.timezone(), tz.as_ref().map(|s| s.as_ref()));\n\n                // Float32 tests\n                let f32_array: ArrayRef = Arc::new(Float32Array::from(vec![\n                    Some(0.0_f32),\n                    Some(1.0_f32),\n                    Some(-1.0_f32),\n                    None,\n                ]));\n\n                let result = cast_float_to_timestamp(&f32_array, tz, *eval_mode).unwrap();\n                let ts_array = result.as_primitive::<TimestampMicrosecondType>();\n\n                assert_eq!(ts_array.value(0), 0);\n                assert_eq!(ts_array.value(1), 1_000_000);\n                assert_eq!(ts_array.value(2), -1_000_000);\n                
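// Null inputs should remain null regardless of eval mode.\n                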
assert!(ts_array.is_null(3));\n            }\n        }\n\n        // ANSI mode errors on NaN/Infinity\n        let tz = &Some(Arc::from(\"UTC\"));\n        let f64_nan: ArrayRef = Arc::new(Float64Array::from(vec![Some(f64::NAN)]));\n        assert!(cast_float_to_timestamp(&f64_nan, tz, EvalMode::Ansi).is_err());\n\n        let f64_inf: ArrayRef = Arc::new(Float64Array::from(vec![Some(f64::INFINITY)]));\n        assert!(cast_float_to_timestamp(&f64_inf, tz, EvalMode::Ansi).is_err());\n    }\n\n    #[test]\n    fn test_decimal128_to_java_string() {\n        fn fmt(unscaled: i128, scale: i8) -> String {\n            let mut buf = String::new();\n            decimal128_to_java_string(unscaled, scale, &mut buf);\n            buf\n        }\n        // scale >= 0, adj_exp >= -6 → plain notation\n        assert_eq!(fmt(0, 0), \"0\");\n        assert_eq!(fmt(0, 2), \"0.00\");\n        assert_eq!(fmt(12345, 2), \"123.45\");\n        assert_eq!(fmt(-12345, 2), \"-123.45\");\n        assert_eq!(fmt(1, 2), \"0.01\");\n        assert_eq!(fmt(42, 0), \"42\");\n        assert_eq!(fmt(-42, 0), \"-42\");\n        assert_eq!(fmt(1, 6), \"0.000001\"); // adj_exp = -6 (boundary)\n\n        // scale >= 0, adj_exp < -6 → scientific notation (Spark LEGACY mode)\n        assert_eq!(fmt(0, 18), \"0E-18\"); // adj_exp = -18\n        assert_eq!(fmt(0, 7), \"0E-7\"); // adj_exp = -7\n        assert_eq!(fmt(1, 7), \"1E-7\");\n        assert_eq!(fmt(1, 18), \"1E-18\");\n\n        // scale < 0 → scientific notation\n        assert_eq!(fmt(0, -2), \"0E+2\");\n        assert_eq!(fmt(1, -2), \"1E+2\");\n        assert_eq!(fmt(123, -2), \"1.23E+4\");\n        assert_eq!(fmt(-123, -2), \"-1.23E+4\");\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/conversion_funcs/string.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::{timezone, EvalMode, SparkError, SparkResult};\nuse arrow::array::{\n    Array, ArrayRef, ArrowPrimitiveType, BooleanArray, Decimal128Builder, GenericStringArray,\n    OffsetSizeTrait, PrimitiveArray, PrimitiveBuilder, StringArray,\n};\nuse arrow::compute::DecimalCast;\nuse arrow::datatypes::{\n    i256, is_validate_decimal_precision, DataType, Date32Type, Decimal256Type, Float32Type,\n    Float64Type, Int16Type, Int32Type, Int64Type, Int8Type, TimestampMicrosecondType,\n};\nuse chrono::{DateTime, LocalResult, NaiveDate, NaiveTime, Offset, TimeZone, Timelike};\nuse num::traits::CheckedNeg;\nuse num::{CheckedSub, Integer};\nuse regex::Regex;\nuse std::num::Wrapping;\nuse std::str::FromStr;\nuse std::sync::{Arc, LazyLock};\n\nmacro_rules! cast_utf8_to_timestamp {\n    // $tz is a Timezone:Tz object and contains the session timezone.\n    // $to_tz_str is a string containing the to_type timezone\n    ($array:expr, $eval_mode:expr, $array_type:ty, $cast_method:ident, $tz:expr, $to_tz_str:expr, $is_spark4_plus:expr) => {{\n        let len = $array.len();\n        let mut cast_array = PrimitiveArray::<$array_type>::builder(len).with_timezone($to_tz_str);\n        let mut cast_err: Option<SparkError> = None;\n        for i in 0..len {\n            if $array.is_null(i) {\n                cast_array.append_null()\n            } else {\n                // we use trim_end instead of trim because strings with leading spaces are interpreted differently\n                // by Spark in cases where the string has only the time component starting with T.\n                // The string \" T2\" results in null while \"T2\" results in a valid timestamp.\n                match $cast_method($array.value(i).trim_end(), $eval_mode, $tz, $is_spark4_plus) {\n                    Ok(Some(cast_value)) => cast_array.append_value(cast_value),\n                    Ok(None) => cast_array.append_null(),\n                    Err(e) => {\n                        if $eval_mode == EvalMode::Ansi {\n                            // Replace the error value with the raw (untrimmed) input to match\n                            // Spark's behavior: Spark reports the original string in CAST_INVALID_INPUT.\n                            let raw_value = $array.value(i).to_string();\n                            let e = match e {\n                                SparkError::InvalidInputInCastToDatetime {\n                                    from_type,\n                                    to_type,\n                                    ..\n                                } => SparkError::InvalidInputInCastToDatetime {\n                                    value: raw_value,\n                                    
from_type,\n                                    to_type,\n                                },\n                                other => other,\n                            };\n                            cast_err = Some(e);\n                            break;\n                        }\n                        cast_array.append_null()\n                    }\n                }\n            }\n        }\n        if let Some(e) = cast_err {\n            Err(e)\n        } else {\n            Ok(Arc::new(cast_array.finish()) as ArrayRef)\n        }\n    }};\n}\n\nmacro_rules! cast_utf8_to_int {\n    ($array:expr, $array_type:ty, $parse_fn:expr) => {{\n        let len = $array.len();\n        let mut cast_array = PrimitiveArray::<$array_type>::builder(len);\n        let parse_fn = $parse_fn;\n        if $array.null_count() == 0 {\n            for i in 0..len {\n                if let Some(cast_value) = parse_fn($array.value(i))? {\n                    cast_array.append_value(cast_value);\n                } else {\n                    cast_array.append_null()\n                }\n            }\n        } else {\n            for i in 0..len {\n                if $array.is_null(i) {\n                    cast_array.append_null()\n                } else if let Some(cast_value) = parse_fn($array.value(i))? {\n                    cast_array.append_value(cast_value);\n                } else {\n                    cast_array.append_null()\n                }\n            }\n        }\n        let result: SparkResult<ArrayRef> = Ok(Arc::new(cast_array.finish()) as ArrayRef);\n        result\n    }};\n}\n\nstruct TimeStampInfo {\n    year: i32,\n    month: u32,\n    day: u32,\n    hour: u32,\n    minute: u32,\n    second: u32,\n    microsecond: u32,\n}\n\nimpl Default for TimeStampInfo {\n    fn default() -> Self {\n        TimeStampInfo {\n            year: 1,\n            month: 1,\n            day: 1,\n            hour: 0,\n            minute: 0,\n            second: 0,\n            microsecond: 0,\n        }\n    }\n}\n\nimpl TimeStampInfo {\n    fn with_year(&mut self, year: i32) -> &mut Self {\n        self.year = year;\n        self\n    }\n\n    fn with_month(&mut self, month: u32) -> &mut Self {\n        self.month = month;\n        self\n    }\n\n    fn with_day(&mut self, day: u32) -> &mut Self {\n        self.day = day;\n        self\n    }\n\n    fn with_hour(&mut self, hour: u32) -> &mut Self {\n        self.hour = hour;\n        self\n    }\n\n    fn with_minute(&mut self, minute: u32) -> &mut Self {\n        self.minute = minute;\n        self\n    }\n\n    fn with_second(&mut self, second: u32) -> &mut Self {\n        self.second = second;\n        self\n    }\n\n    fn with_microsecond(&mut self, microsecond: u32) -> &mut Self {\n        self.microsecond = microsecond;\n        self\n    }\n}\n\npub(crate) fn is_df_cast_from_string_spark_compatible(to_type: &DataType) -> bool {\n    matches!(to_type, DataType::Binary)\n}\n\npub(crate) fn cast_string_to_float(\n    array: &ArrayRef,\n    to_type: &DataType,\n    eval_mode: EvalMode,\n) -> SparkResult<ArrayRef> {\n    match to_type {\n        DataType::Float32 => cast_string_to_float_impl::<Float32Type>(array, eval_mode, \"FLOAT\"),\n        DataType::Float64 => cast_string_to_float_impl::<Float64Type>(array, eval_mode, \"DOUBLE\"),\n        _ => Err(SparkError::Internal(format!(\n            \"Unsupported cast to float type: {:?}\",\n            to_type\n        ))),\n    }\n}\n\nfn cast_string_to_float_impl<T: ArrowPrimitiveType>(\n    
array: &ArrayRef,\n    eval_mode: EvalMode,\n    type_name: &str,\n) -> SparkResult<ArrayRef>\nwhere\n    T::Native: FromStr + num::Float,\n{\n    let arr = array\n        .as_any()\n        .downcast_ref::<StringArray>()\n        .ok_or_else(|| SparkError::Internal(\"Expected string array\".to_string()))?;\n\n    let mut builder = PrimitiveBuilder::<T>::with_capacity(arr.len());\n\n    for i in 0..arr.len() {\n        if arr.is_null(i) {\n            builder.append_null();\n        } else {\n            let str_value = arr.value(i).trim();\n            match parse_string_to_float(str_value) {\n                Some(v) => builder.append_value(v),\n                None => {\n                    if eval_mode == EvalMode::Ansi {\n                        return Err(invalid_value(arr.value(i), \"STRING\", type_name));\n                    }\n                    builder.append_null();\n                }\n            }\n        }\n    }\n\n    Ok(Arc::new(builder.finish()))\n}\n\n/// Helper to parse floats from string inputs\nfn parse_string_to_float<F>(s: &str) -> Option<F>\nwhere\n    F: FromStr + num::Float,\n{\n    // Handle inf / infinity (with optional sign)\n    if s.eq_ignore_ascii_case(\"inf\")\n        || s.eq_ignore_ascii_case(\"+inf\")\n        || s.eq_ignore_ascii_case(\"infinity\")\n        || s.eq_ignore_ascii_case(\"+infinity\")\n    {\n        return Some(F::infinity());\n    }\n    if s.eq_ignore_ascii_case(\"-inf\") || s.eq_ignore_ascii_case(\"-infinity\") {\n        return Some(F::neg_infinity());\n    }\n    if s.eq_ignore_ascii_case(\"nan\") {\n        return Some(F::nan());\n    }\n    // Remove D/F suffix if present\n    let pruned_float_str =\n        if s.ends_with('d') || s.ends_with('D') || s.ends_with('f') || s.ends_with('F') {\n            &s[..s.len() - 1]\n        } else {\n            s\n        };\n    // Rust's parse logic already handles scientific notation, so we just rely on it\n    pruned_float_str.parse::<F>().ok()\n}\n\npub(crate) fn spark_cast_utf8_to_boolean<OffsetSize>(\n    from: &dyn Array,\n    eval_mode: EvalMode,\n) -> SparkResult<ArrayRef>\nwhere\n    OffsetSize: OffsetSizeTrait,\n{\n    let array = from\n        .as_any()\n        .downcast_ref::<GenericStringArray<OffsetSize>>()\n        .unwrap();\n\n    let output_array = array\n        .iter()\n        .map(|value| match value {\n            Some(value) => match value.to_ascii_lowercase().trim() {\n                \"t\" | \"true\" | \"y\" | \"yes\" | \"1\" => Ok(Some(true)),\n                \"f\" | \"false\" | \"n\" | \"no\" | \"0\" => Ok(Some(false)),\n                _ if eval_mode == EvalMode::Ansi => Err(SparkError::CastInvalidValue {\n                    value: value.to_string(),\n                    from_type: \"STRING\".to_string(),\n                    to_type: \"BOOLEAN\".to_string(),\n                }),\n                _ => Ok(None),\n            },\n            _ => Ok(None),\n        })\n        .collect::<Result<BooleanArray, _>>()?;\n\n    Ok(Arc::new(output_array))\n}\n\npub(crate) fn cast_string_to_decimal(\n    array: &ArrayRef,\n    to_type: &DataType,\n    precision: &u8,\n    scale: &i8,\n    eval_mode: EvalMode,\n) -> SparkResult<ArrayRef> {\n    match to_type {\n        DataType::Decimal128(_, _) => {\n            cast_string_to_decimal128_impl(array, eval_mode, *precision, *scale)\n        }\n        DataType::Decimal256(_, _) => {\n            cast_string_to_decimal256_impl(array, eval_mode, *precision, *scale)\n        }\n        _ => Err(SparkError::Internal(format!(\n            
\"Unexpected type in cast_string_to_decimal: {:?}\",\n            to_type\n        ))),\n    }\n}\n\nfn cast_string_to_decimal128_impl(\n    array: &ArrayRef,\n    eval_mode: EvalMode,\n    precision: u8,\n    scale: i8,\n) -> SparkResult<ArrayRef> {\n    let string_array = array\n        .as_any()\n        .downcast_ref::<StringArray>()\n        .ok_or_else(|| SparkError::Internal(\"Expected string array\".to_string()))?;\n\n    let mut decimal_builder = Decimal128Builder::with_capacity(string_array.len());\n\n    for i in 0..string_array.len() {\n        if string_array.is_null(i) {\n            decimal_builder.append_null();\n        } else {\n            let str_value = string_array.value(i);\n            match parse_string_to_decimal(str_value, precision, scale) {\n                Ok(Some(decimal_value)) => {\n                    decimal_builder.append_value(decimal_value);\n                }\n                Ok(None) => {\n                    if eval_mode == EvalMode::Ansi {\n                        return Err(invalid_value(\n                            string_array.value(i),\n                            \"STRING\",\n                            &format!(\"DECIMAL({},{})\", precision, scale),\n                        ));\n                    }\n                    decimal_builder.append_null();\n                }\n                Err(e) => {\n                    if eval_mode == EvalMode::Ansi {\n                        return Err(e);\n                    }\n                    decimal_builder.append_null();\n                }\n            }\n        }\n    }\n\n    Ok(Arc::new(\n        decimal_builder\n            .with_precision_and_scale(precision, scale)\n            .map_err(|e| {\n                if matches!(e, arrow::error::ArrowError::InvalidArgumentError(_))\n                    && e.to_string().contains(\"too large to store in a Decimal128\")\n                {\n                    // Fallback error handling\n                    SparkError::NumericValueOutOfRange {\n                        value: \"overflow\".to_string(),\n                        precision,\n                        scale,\n                    }\n                } else {\n                    SparkError::Arrow(Arc::new(e))\n                }\n            })?\n            .finish(),\n    ))\n}\n\nfn cast_string_to_decimal256_impl(\n    array: &ArrayRef,\n    eval_mode: EvalMode,\n    precision: u8,\n    scale: i8,\n) -> SparkResult<ArrayRef> {\n    let string_array = array\n        .as_any()\n        .downcast_ref::<StringArray>()\n        .ok_or_else(|| SparkError::Internal(\"Expected string array\".to_string()))?;\n\n    let mut decimal_builder = PrimitiveBuilder::<Decimal256Type>::with_capacity(string_array.len());\n\n    for i in 0..string_array.len() {\n        if string_array.is_null(i) {\n            decimal_builder.append_null();\n        } else {\n            let str_value = string_array.value(i);\n            match parse_string_to_decimal(str_value, precision, scale) {\n                Ok(Some(decimal_value)) => {\n                    // Convert i128 to i256\n                    let i256_value = i256::from_i128(decimal_value);\n                    decimal_builder.append_value(i256_value);\n                }\n                Ok(None) => {\n                    if eval_mode == EvalMode::Ansi {\n                        return Err(invalid_value(\n                            str_value,\n                            \"STRING\",\n                            &format!(\"DECIMAL({},{})\", precision, scale),\n     
                   ));\n                    }\n                    decimal_builder.append_null();\n                }\n                Err(e) => {\n                    if eval_mode == EvalMode::Ansi {\n                        return Err(e);\n                    }\n                    decimal_builder.append_null();\n                }\n            }\n        }\n    }\n\n    Ok(Arc::new(\n        decimal_builder\n            .with_precision_and_scale(precision, scale)\n            .map_err(|e| {\n                if matches!(e, arrow::error::ArrowError::InvalidArgumentError(_))\n                    && e.to_string().contains(\"too large to store in a Decimal128\")\n                {\n                    // Fallback error handling\n                    SparkError::NumericValueOutOfRange {\n                        value: \"overflow\".to_string(),\n                        precision,\n                        scale,\n                    }\n                } else {\n                    SparkError::Arrow(Arc::new(e))\n                }\n            })?\n            .finish(),\n    ))\n}\n\n/// Normalize fullwidth Unicode digits (U+FF10–U+FF19) to their ASCII equivalents.\n///\n/// Spark's UTF8String parser treats fullwidth digits as numerically equivalent to\n/// ASCII digits, e.g. \"１２３.４５\" parses as 123.45. Each fullwidth digit encodes\n/// to exactly three UTF-8 bytes: [0xEF, 0xBC, 0x90+n] for digit n. The ASCII\n/// equivalent is 0x30+n, so the conversion is: third_byte - 0x60.\n///\n/// All other bytes (ASCII or other multi-byte sequences) are passed through\n/// unchanged, so the output is valid UTF-8 whenever the input is.\nfn normalize_fullwidth_digits(s: &str) -> String {\n    let bytes = s.as_bytes();\n    let mut out = Vec::with_capacity(s.len());\n    let mut i = 0;\n    while i < bytes.len() {\n        if i + 2 < bytes.len()\n            && bytes[i] == 0xEF\n            && bytes[i + 1] == 0xBC\n            && bytes[i + 2] >= 0x90\n            && bytes[i + 2] <= 0x99\n        {\n            // e.g. 0x91 - 0x60 = 0x31 = b'1'\n            out.push(bytes[i + 2] - 0x60);\n            i += 3;\n        } else {\n            out.push(bytes[i]);\n            i += 1;\n        }\n    }\n    // SAFETY: we only replace valid 3-byte UTF-8 sequences [EF BC 9X] with a\n    // single ASCII byte; all other bytes are copied unchanged, preserving the\n    // UTF-8 invariant of the input.\n    unsafe { String::from_utf8_unchecked(out) }\n}\n\n/// Parse a string to decimal following Spark's behavior.\nfn parse_string_to_decimal(input_str: &str, precision: u8, scale: i8) -> SparkResult<Option<i128>> {\n    let string_bytes = input_str.as_bytes();\n    let mut start = 0;\n    let mut end = string_bytes.len();\n\n    // Trim ASCII whitespace and null bytes from both ends. Spark's UTF8String\n    // trims null bytes the same way it trims whitespace: \"123\\u0000\" and\n    // \"\\u0000123\" both parse as 123. Null bytes in the middle are not trimmed\n    // and will fail the digit validation in parse_decimal_str, producing NULL.\n    while start < end && (string_bytes[start].is_ascii_whitespace() || string_bytes[start] == 0) {\n        start += 1;\n    }\n    while end > start && (string_bytes[end - 1].is_ascii_whitespace() || string_bytes[end - 1] == 0)\n    {\n        end -= 1;\n    }\n\n    let trimmed = &input_str[start..end];\n\n    // Normalize fullwidth digits to ASCII. 
Fast path skips the allocation for\n    // pure-ASCII strings, which is the common case.\n    let normalized;\n    let trimmed = if trimmed.bytes().any(|b| b > 0x7F) {\n        normalized = normalize_fullwidth_digits(trimmed);\n        normalized.as_str()\n    } else {\n        trimmed\n    };\n\n    if trimmed.is_empty() {\n        return Ok(None);\n    }\n    // Handle special values (inf, nan, etc.)\n    if trimmed.eq_ignore_ascii_case(\"inf\")\n        || trimmed.eq_ignore_ascii_case(\"+inf\")\n        || trimmed.eq_ignore_ascii_case(\"infinity\")\n        || trimmed.eq_ignore_ascii_case(\"+infinity\")\n        || trimmed.eq_ignore_ascii_case(\"-inf\")\n        || trimmed.eq_ignore_ascii_case(\"-infinity\")\n        || trimmed.eq_ignore_ascii_case(\"nan\")\n    {\n        return Ok(None);\n    }\n\n    // Validate and parse mantissa and exponent, or bubble up the error\n    let (mantissa, exponent) = parse_decimal_str(trimmed, input_str, precision, scale)?;\n\n    // Early return for mantissa 0; Spark still checks that the exponent fits and throws an error in ANSI mode\n    if mantissa == 0 {\n        if exponent < -37 {\n            return Err(SparkError::NumericOutOfRange {\n                value: input_str.to_string(),\n            });\n        }\n        return Ok(Some(0));\n    }\n\n    // Scale adjustment\n    let target_scale = scale as i32;\n    let scale_adjustment = target_scale - exponent;\n\n    let scaled_value = if scale_adjustment >= 0 {\n        // Need to multiply (increase scale) but return None if scale is too high to fit i128\n        if scale_adjustment > 38 {\n            return Ok(None);\n        }\n        mantissa.checked_mul(10_i128.pow(scale_adjustment as u32))\n    } else {\n        // Need to divide (decrease scale)\n        let abs_scale_adjustment = (-scale_adjustment) as u32;\n        if abs_scale_adjustment > 38 {\n            return Ok(Some(0));\n        }\n\n        let divisor = 10_i128.pow(abs_scale_adjustment);\n        let quotient_opt = mantissa.checked_div(divisor);\n        // checked_div returns None only on division by zero; the divisor is a\n        // nonzero power of 10, so this check is purely defensive.\n        if quotient_opt.is_none() {\n            return Ok(None);\n        }\n        let quotient = quotient_opt.unwrap();\n        let remainder = mantissa % divisor;\n\n        // Round half up: if abs(remainder) >= divisor/2, round away from zero\n        let half_divisor = divisor / 2;\n        let rounded = if remainder.abs() >= half_divisor {\n            if mantissa >= 0 {\n                quotient + 1\n            } else {\n                quotient - 1\n            }\n        } else {\n            quotient\n        };\n        Some(rounded)\n    };\n\n    match scaled_value {\n        Some(value) => {\n            if is_validate_decimal_precision(value, precision) {\n                Ok(Some(value))\n            } else {\n                // Value parsed but exceeds the requested precision. Throw an error\n                Err(SparkError::NumericValueOutOfRange {\n                    value: trimmed.to_string(),\n                    precision,\n                    scale,\n                })\n            }\n        }\n        None => {\n            // Overflow while scaling: raise an exception\n            Err(SparkError::NumericValueOutOfRange {\n                value: trimmed.to_string(),\n                precision,\n                scale,\n            })\n        }\n    }\n}\n\nfn invalid_decimal_cast(value: &str, precision: u8, scale: i8) -> SparkError {\n    invalid_value(\n        value,\n        \"STRING\",\n        &format!(\"DECIMAL({},{})\", precision, scale),\n    )\n}\n\n/// Parse a decimal string into mantissa and scale\n/// e.g., \"123.45\" -> (12345, 2), \"-0.001\" -> (-1, 3), \"0e50\" -> (0, -50), etc.\nfn parse_decimal_str(\n    s: &str,\n    original_str: &str,\n    precision: u8,\n    scale: i8,\n) -> SparkResult<(i128, i32)> {\n    if s.is_empty() {\n        return Err(invalid_decimal_cast(original_str, precision, scale));\n    }\n\n    let (mantissa_str, exponent) = if let Some(e_pos) = s.find(|c| ['e', 'E'].contains(&c)) {\n        let mantissa_part = &s[..e_pos];\n        let exponent_part = &s[e_pos + 1..];\n        // Parse exponent\n        let exp: i32 = exponent_part\n            .parse()\n            .map_err(|_| invalid_decimal_cast(original_str, precision, scale))?;\n\n        (mantissa_part, exp)\n    } else {\n        (s, 0)\n    };\n\n    let negative = mantissa_str.starts_with('-');\n    let mantissa_str = if negative || mantissa_str.starts_with('+') {\n        &mantissa_str[1..]\n    } else {\n        mantissa_str\n    };\n\n    if mantissa_str.starts_with('+') || mantissa_str.starts_with('-') {\n        return Err(invalid_decimal_cast(original_str, precision, scale));\n    }\n\n    let (integral_part, fractional_part) = match mantissa_str.find('.') {\n        Some(dot_pos) => {\n            if mantissa_str[dot_pos + 1..].contains('.') {\n                return Err(invalid_decimal_cast(original_str, precision, scale));\n            }\n            (&mantissa_str[..dot_pos], &mantissa_str[dot_pos + 1..])\n        }\n        None => (mantissa_str, \"\"),\n    };\n\n    if integral_part.is_empty() && fractional_part.is_empty() {\n        return Err(invalid_decimal_cast(original_str, precision, scale));\n    }\n\n    if !integral_part.is_empty() && !integral_part.bytes().all(|b| b.is_ascii_digit()) {\n        return Err(invalid_decimal_cast(original_str, precision, scale));\n    }\n\n    if !fractional_part.is_empty() && !fractional_part.bytes().all(|b| b.is_ascii_digit()) {\n        return Err(invalid_decimal_cast(original_str, precision, scale));\n    }\n\n    // Parse integral part\n    let integral_value: i128 = if integral_part.is_empty() {\n        // Empty integral part is valid (e.g., \".5\" or \"-.7e9\")\n        0\n    } else {\n        integral_part\n            .parse()\n            .map_err(|_| invalid_decimal_cast(original_str, precision, scale))?\n    };\n\n    // Parse fractional part\n    let fractional_scale = fractional_part.len() as i32;\n    let fractional_value: i128 = if fractional_part.is_empty() {\n        0\n    } else {\n        fractional_part\n            .parse()\n            .map_err(|_| invalid_decimal_cast(original_str, precision, scale))?\n    };\n\n    // Combine: value = integral * 10^fractional_scale + fractional\n    let mantissa = integral_value\n        .checked_mul(10_i128.pow(fractional_scale as u32))\n        
.and_then(|v| v.checked_add(fractional_value))\n        .ok_or_else(|| invalid_decimal_cast(original_str, precision, scale))?;\n\n    let final_mantissa = if negative { -mantissa } else { mantissa };\n    // final scale = fractional_scale - exponent\n    // For example : \"1.23E-5\" has fractional_scale=2, exponent=-5, so scale = 2 - (-5) = 7\n    let final_scale = fractional_scale - exponent;\n    Ok((final_mantissa, final_scale))\n}\n\npub(crate) fn cast_string_to_date(\n    array: &ArrayRef,\n    to_type: &DataType,\n    eval_mode: EvalMode,\n) -> SparkResult<ArrayRef> {\n    let string_array = array\n        .as_any()\n        .downcast_ref::<GenericStringArray<i32>>()\n        .expect(\"Expected a string array\");\n\n    if to_type != &DataType::Date32 {\n        unreachable!(\"Invalid data type {:?} in cast from string\", to_type);\n    }\n\n    let len = string_array.len();\n    let mut cast_array = PrimitiveArray::<Date32Type>::builder(len);\n\n    for i in 0..len {\n        let value = if string_array.is_null(i) {\n            None\n        } else {\n            match date_parser(string_array.value(i), eval_mode) {\n                Ok(Some(cast_value)) => Some(cast_value),\n                Ok(None) => None,\n                Err(e) => return Err(e),\n            }\n        };\n\n        match value {\n            Some(cast_value) => cast_array.append_value(cast_value),\n            None => cast_array.append_null(),\n        }\n    }\n\n    Ok(Arc::new(cast_array.finish()) as ArrayRef)\n}\n\npub(crate) fn cast_string_to_timestamp(\n    array: &ArrayRef,\n    to_type: &DataType,\n    eval_mode: EvalMode,\n    timezone_str: &str,\n    is_spark4_plus: bool,\n) -> SparkResult<ArrayRef> {\n    let string_array = array\n        .as_any()\n        .downcast_ref::<GenericStringArray<i32>>()\n        .expect(\"Expected a string array\");\n\n    let tz = &timezone::Tz::from_str(timezone_str)\n        .map_err(|_| SparkError::Internal(format!(\"Invalid timezone string: {timezone_str}\")))?;\n\n    let cast_array: ArrayRef = match to_type {\n        DataType::Timestamp(_, tz_opt) => {\n            let to_tz = tz_opt.as_deref().unwrap_or(\"UTC\");\n            cast_utf8_to_timestamp!(\n                string_array,\n                eval_mode,\n                TimestampMicrosecondType,\n                timestamp_parser,\n                tz,\n                to_tz,\n                is_spark4_plus\n            )?\n        }\n        _ => unreachable!(\"Invalid data type {:?} in cast from string\", to_type),\n    };\n    Ok(cast_array)\n}\n\npub(crate) fn cast_string_to_int<OffsetSize: OffsetSizeTrait>(\n    to_type: &DataType,\n    array: &ArrayRef,\n    eval_mode: EvalMode,\n) -> SparkResult<ArrayRef> {\n    let string_array = array\n        .as_any()\n        .downcast_ref::<GenericStringArray<OffsetSize>>()\n        .expect(\"cast_string_to_int expected a string array\");\n\n    // Select parse function once per batch based on eval_mode\n    let cast_array: ArrayRef =\n        match (to_type, eval_mode) {\n            (DataType::Int8, EvalMode::Legacy) => {\n                cast_utf8_to_int!(string_array, Int8Type, parse_string_to_i8_legacy)?\n            }\n            (DataType::Int8, EvalMode::Ansi) => {\n                cast_utf8_to_int!(string_array, Int8Type, parse_string_to_i8_ansi)?\n            }\n            (DataType::Int8, EvalMode::Try) => {\n                cast_utf8_to_int!(string_array, Int8Type, parse_string_to_i8_try)?\n            }\n            (DataType::Int16, 
EvalMode::Legacy) => {\n                cast_utf8_to_int!(string_array, Int16Type, parse_string_to_i16_legacy)?\n            }\n            (DataType::Int16, EvalMode::Ansi) => {\n                cast_utf8_to_int!(string_array, Int16Type, parse_string_to_i16_ansi)?\n            }\n            (DataType::Int16, EvalMode::Try) => {\n                cast_utf8_to_int!(string_array, Int16Type, parse_string_to_i16_try)?\n            }\n            (DataType::Int32, EvalMode::Legacy) => cast_utf8_to_int!(\n                string_array,\n                Int32Type,\n                |s| do_parse_string_to_int_legacy::<i32>(s, i32::MIN)\n            )?,\n            (DataType::Int32, EvalMode::Ansi) => {\n                cast_utf8_to_int!(string_array, Int32Type, |s| do_parse_string_to_int_ansi::<\n                    i32,\n                >(\n                    s, \"INT\", i32::MIN\n                ))?\n            }\n            (DataType::Int32, EvalMode::Try) => {\n                cast_utf8_to_int!(\n                    string_array,\n                    Int32Type,\n                    |s| do_parse_string_to_int_try::<i32>(s, i32::MIN)\n                )?\n            }\n            (DataType::Int64, EvalMode::Legacy) => cast_utf8_to_int!(\n                string_array,\n                Int64Type,\n                |s| do_parse_string_to_int_legacy::<i64>(s, i64::MIN)\n            )?,\n            (DataType::Int64, EvalMode::Ansi) => {\n                cast_utf8_to_int!(string_array, Int64Type, |s| do_parse_string_to_int_ansi::<\n                    i64,\n                >(\n                    s, \"BIGINT\", i64::MIN\n                ))?\n            }\n            (DataType::Int64, EvalMode::Try) => {\n                cast_utf8_to_int!(\n                    string_array,\n                    Int64Type,\n                    |s| do_parse_string_to_int_try::<i64>(s, i64::MIN)\n                )?\n            }\n            (dt, _) => unreachable!(\n                \"{}\",\n                format!(\"invalid integer type {dt} in cast from string\")\n            ),\n        };\n    Ok(cast_array)\n}\n\n/// Finalizes the result by applying the sign. Returns None if overflow would occur.\nfn finalize_int_result<T: Integer + CheckedNeg + Copy>(result: T, negative: bool) -> Option<T> {\n    if negative {\n        Some(result)\n    } else {\n        result.checked_neg().filter(|&n| n >= T::zero())\n    }\n}\n\n/// Equivalent to\n/// - org.apache.spark.unsafe.types.UTF8String.toInt(IntWrapper intWrapper, boolean allowDecimal)\n/// - org.apache.spark.unsafe.types.UTF8String.toLong(LongWrapper longWrapper, boolean allowDecimal)\nfn do_parse_string_to_int_legacy<T: Integer + CheckedSub + CheckedNeg + From<u8> + Copy>(\n    str: &str,\n    min_value: T,\n) -> SparkResult<Option<T>> {\n    let trimmed_bytes = str.as_bytes().trim_ascii();\n\n    let (negative, digits) = match parse_sign(trimmed_bytes) {\n        Some(result) => result,\n        None => return Ok(None),\n    };\n\n    let mut result: T = T::zero();\n    let radix = T::from(10_u8);\n    let stop_value = min_value / radix;\n\n    let mut iter = digits.iter();\n\n    // Parse integer portion until '.' or end\n    for &ch in iter.by_ref() {\n        if ch == b'.' 
{\n            break;\n        }\n\n        if !ch.is_ascii_digit() {\n            return Ok(None);\n        }\n\n        if result < stop_value {\n            return Ok(None);\n        }\n        let v = result * radix;\n        let digit: T = T::from(ch - b'0');\n        match v.checked_sub(&digit) {\n            Some(x) if x <= T::zero() => result = x,\n            _ => return Ok(None),\n        }\n    }\n\n    // Validate decimal portion (digits only, values ignored)\n    for &ch in iter {\n        if !ch.is_ascii_digit() {\n            return Ok(None);\n        }\n    }\n\n    Ok(finalize_int_result(result, negative))\n}\n\nfn do_parse_string_to_int_ansi<T: Integer + CheckedSub + CheckedNeg + From<u8> + Copy>(\n    str: &str,\n    type_name: &str,\n    min_value: T,\n) -> SparkResult<Option<T>> {\n    let error = || Err(invalid_value(str, \"STRING\", type_name));\n\n    let trimmed_bytes = str.as_bytes().trim_ascii();\n\n    let (negative, digits) = match parse_sign(trimmed_bytes) {\n        Some(result) => result,\n        None => return error(),\n    };\n\n    let mut result: T = T::zero();\n    let radix = T::from(10_u8);\n    let stop_value = min_value / radix;\n\n    for &ch in digits {\n        if ch == b'.' || !ch.is_ascii_digit() {\n            return error();\n        }\n\n        if result < stop_value {\n            return error();\n        }\n        let v = result * radix;\n        let digit: T = T::from(ch - b'0');\n        match v.checked_sub(&digit) {\n            Some(x) if x <= T::zero() => result = x,\n            _ => return error(),\n        }\n    }\n\n    finalize_int_result(result, negative)\n        .map(Some)\n        .ok_or_else(|| invalid_value(str, \"STRING\", type_name))\n}\n\nfn do_parse_string_to_int_try<T: Integer + CheckedSub + CheckedNeg + From<u8> + Copy>(\n    str: &str,\n    min_value: T,\n) -> SparkResult<Option<T>> {\n    let trimmed_bytes = str.as_bytes().trim_ascii();\n\n    let (negative, digits) = match parse_sign(trimmed_bytes) {\n        Some(result) => result,\n        None => return Ok(None),\n    };\n\n    let mut result: T = T::zero();\n    let radix = T::from(10_u8);\n    let stop_value = min_value / radix;\n\n    for &ch in digits {\n        if ch == b'.' || !ch.is_ascii_digit() {\n            return Ok(None);\n        }\n\n        if result < stop_value {\n            return Ok(None);\n        }\n        let v = result * radix;\n        let digit: T = T::from(ch - b'0');\n        match v.checked_sub(&digit) {\n            Some(x) if x <= T::zero() => result = x,\n            _ => return Ok(None),\n        }\n    }\n\n    Ok(finalize_int_result(result, negative))\n}\n\nfn parse_string_to_i8_legacy(str: &str) -> SparkResult<Option<i8>> {\n    match do_parse_string_to_int_legacy::<i32>(str, i32::MIN)? {\n        Some(v) if v >= i8::MIN as i32 && v <= i8::MAX as i32 => Ok(Some(v as i8)),\n        _ => Ok(None),\n    }\n}\n\nfn parse_string_to_i8_ansi(str: &str) -> SparkResult<Option<i8>> {\n    match do_parse_string_to_int_ansi::<i32>(str, \"TINYINT\", i32::MIN)? {\n        Some(v) if v >= i8::MIN as i32 && v <= i8::MAX as i32 => Ok(Some(v as i8)),\n        _ => Err(invalid_value(str, \"STRING\", \"TINYINT\")),\n    }\n}\n\nfn parse_string_to_i8_try(str: &str) -> SparkResult<Option<i8>> {\n    match do_parse_string_to_int_try::<i32>(str, i32::MIN)? 
{\n        Some(v) if v >= i8::MIN as i32 && v <= i8::MAX as i32 => Ok(Some(v as i8)),\n        _ => Ok(None),\n    }\n}\n\nfn parse_string_to_i16_legacy(str: &str) -> SparkResult<Option<i16>> {\n    match do_parse_string_to_int_legacy::<i32>(str, i32::MIN)? {\n        Some(v) if v >= i16::MIN as i32 && v <= i16::MAX as i32 => Ok(Some(v as i16)),\n        _ => Ok(None),\n    }\n}\n\nfn parse_string_to_i16_ansi(str: &str) -> SparkResult<Option<i16>> {\n    match do_parse_string_to_int_ansi::<i32>(str, \"SMALLINT\", i32::MIN)? {\n        Some(v) if v >= i16::MIN as i32 && v <= i16::MAX as i32 => Ok(Some(v as i16)),\n        _ => Err(invalid_value(str, \"STRING\", \"SMALLINT\")),\n    }\n}\n\nfn parse_string_to_i16_try(str: &str) -> SparkResult<Option<i16>> {\n    match do_parse_string_to_int_try::<i32>(str, i32::MIN)? {\n        Some(v) if v >= i16::MIN as i32 && v <= i16::MAX as i32 => Ok(Some(v as i16)),\n        _ => Ok(None),\n    }\n}\n\n/// Parses sign and returns (is_negative, remaining_bytes after sign)\n/// Returns None if invalid (empty input, or just \"+\" or \"-\")\nfn parse_sign(bytes: &[u8]) -> Option<(bool, &[u8])> {\n    let (&first, rest) = bytes.split_first()?;\n    match first {\n        b'-' if !rest.is_empty() => Some((true, rest)),\n        b'+' if !rest.is_empty() => Some((false, rest)),\n        _ => Some((false, bytes)),\n    }\n}\n\n#[inline]\npub fn invalid_value(value: &str, from_type: &str, to_type: &str) -> SparkError {\n    SparkError::CastInvalidValue {\n        value: value.to_string(),\n        from_type: from_type.to_string(),\n        to_type: to_type.to_string(),\n    }\n}\n\nfn get_timestamp_values<T: TimeZone>(\n    value: &str,\n    timestamp_type: &str,\n    tz: &T,\n) -> SparkResult<Option<i64>> {\n    // Handle negative year: strip leading '-' and remember the sign.\n    let (sign, date_part) = if let Some(stripped) = value.strip_prefix('-') {\n        (-1i32, stripped)\n    } else {\n        (1i32, value)\n    };\n    let mut parts = date_part.split(['T', ' ', '-', ':', '.']);\n    let year = sign\n        * parts\n            .next()\n            .unwrap_or(\"\")\n            .parse::<i32>()\n            .unwrap_or_default();\n\n    // Guard against years that cannot produce a valid i64 microsecond timestamp.\n    // The Long.MaxValue/MinValue boundaries correspond to years 294247 / -290308.\n    // We allow a slightly wider range and let parse_timestamp_to_micros perform the\n    // exact overflow check via checked arithmetic.\n    if !(-290309..=294248).contains(&year) {\n        return Ok(None);\n    }\n\n    let month = parts.next().map_or(1, |m| m.parse::<u32>().unwrap_or(1));\n    let day = parts.next().map_or(1, |d| d.parse::<u32>().unwrap_or(1));\n    let hour = parts.next().map_or(0, |h| h.parse::<u32>().unwrap_or(0));\n    let minute = parts.next().map_or(0, |m| m.parse::<u32>().unwrap_or(0));\n    let second = parts.next().map_or(0, |s| s.parse::<u32>().unwrap_or(0));\n    let microsecond = parts.next().map_or(0, |ms| {\n        // Truncate to at most 6 digits then scale to fill the microsecond field.\n        // E.g. 
\".123\" -> 123 * 10^3 = 123_000 µs; \".1234567\" -> truncated to 123_456 µs.\n        let ms = &ms[..ms.len().min(6)];\n        let n = ms.len();\n        ms.parse::<u32>().unwrap_or(0) * 10u32.pow((6 - n) as u32)\n    });\n\n    let mut timestamp_info = TimeStampInfo::default();\n\n    let timestamp_info = match timestamp_type {\n        \"year\" => timestamp_info.with_year(year),\n        \"month\" => timestamp_info.with_year(year).with_month(month),\n        \"day\" => timestamp_info\n            .with_year(year)\n            .with_month(month)\n            .with_day(day),\n        \"hour\" => timestamp_info\n            .with_year(year)\n            .with_month(month)\n            .with_day(day)\n            .with_hour(hour),\n        \"minute\" => timestamp_info\n            .with_year(year)\n            .with_month(month)\n            .with_day(day)\n            .with_hour(hour)\n            .with_minute(minute),\n        \"second\" => timestamp_info\n            .with_year(year)\n            .with_month(month)\n            .with_day(day)\n            .with_hour(hour)\n            .with_minute(minute)\n            .with_second(second),\n        \"microsecond\" => timestamp_info\n            .with_year(year)\n            .with_month(month)\n            .with_day(day)\n            .with_hour(hour)\n            .with_minute(minute)\n            .with_second(second)\n            .with_microsecond(microsecond),\n        _ => {\n            return Err(SparkError::InvalidInputInCastToDatetime {\n                value: value.to_string(),\n                from_type: \"STRING\".to_string(),\n                to_type: \"TIMESTAMP\".to_string(),\n            })\n        }\n    };\n    parse_timestamp_to_micros(timestamp_info, tz)\n}\n\n/// Howard Hinnant's algorithm: proleptic Gregorian days since 1970-01-01 for any i64 year.\n/// Works correctly for positive and negative years via Euclidean floor division.\n/// Spark uses Java's equivalent [LocalDate.toEpochDay](https://github.com/openjdk/jdk/blob/cddee6d6eb3e048635c380a32bd2f6ebfd2c18b5/src/java.base/share/classes/java/time/LocalDate.java#L1954)\nfn days_from_civil(y: i64, m: i64, d: i64) -> i64 {\n    let (y, m) = if m <= 2 { (y - 1, m + 9) } else { (y, m - 3) };\n    let era = if y >= 0 { y / 400 } else { (y - 399) / 400 };\n    let yoe = y - era * 400; // year of era [0, 399]\n    let doy = (153 * m + 2) / 5 + d - 1; // day of year [0, 365]\n    let doe = yoe * 365 + yoe / 4 - yoe / 100 + doy; // day of era [0, 146096]\n    era * 146097 + doe - 719468\n}\n\nfn parse_timestamp_to_micros<T: TimeZone>(\n    timestamp_info: &TimeStampInfo,\n    tz: &T,\n) -> SparkResult<Option<i64>> {\n    // Build NaiveDateTime explicitly so we can pattern-match LocalResult variants and\n    // handle the DST spring-forward gap case.\n    let naive_date_opt = NaiveDate::from_ymd_opt(\n        timestamp_info.year,\n        timestamp_info.month,\n        timestamp_info.day,\n    );\n\n    // NaiveTime is used for the common path; also validates hour/min/sec.\n    let naive_time = match NaiveTime::from_hms_opt(\n        timestamp_info.hour,\n        timestamp_info.minute,\n        timestamp_info.second,\n    ) {\n        Some(t) => t,\n        None => return Ok(None), // invalid time components\n    };\n\n    if let Some(naive_date) = naive_date_opt {\n        let local_naive = naive_date.and_time(naive_time);\n\n        // Resolve local datetime to UTC, handling DST transitions.\n        // We compute base_micros with second precision (local_naive has no 
sub-second component),\n        // then add microseconds at the end to avoid calling with_nanosecond(), which internally\n        // calls from_local_datetime().single() and returns None for ambiguous (fall-back) times.\n        let base_micros: Option<i64> = match tz.from_local_datetime(&local_naive) {\n            // Unambiguous local time.\n            LocalResult::Single(dt) => Some(dt.timestamp_micros()),\n            // DST fall-back overlap: Spark picks the earlier UTC instant (pre-transition offset).\n            LocalResult::Ambiguous(earlier, _) => Some(earlier.timestamp_micros()),\n            // DST spring-forward gap: the local time does not exist.\n            // Java's ZonedDateTime.of() advances by the gap length, which is equivalent to\n            //   utc = local_naive − pre_gap_offset\n            LocalResult::None => {\n                let probe = local_naive - chrono::Duration::hours(3);\n                let pre_offset = match tz.from_local_datetime(&probe) {\n                    LocalResult::Single(dt) => dt.offset().fix(),\n                    LocalResult::Ambiguous(dt, _) => dt.offset().fix(),\n                    LocalResult::None => return Ok(None),\n                };\n                let offset_secs = pre_offset.local_minus_utc() as i64;\n                let utc_naive = local_naive - chrono::Duration::seconds(offset_secs);\n                Some(utc_naive.and_utc().timestamp_micros())\n            }\n        };\n\n        Ok(base_micros.map(|m| m + timestamp_info.microsecond as i64))\n    } else {\n        // NaiveDate::from_ymd_opt returned None. This means either:\n        //   (a) invalid calendar date (Feb 29 on non-leap year, month 13, etc.)\n        //   (b) year outside chrono's representable range (> 262143 or < -262144)\n        //\n        // For case (b) we fall back to Howard Hinnant's direct arithmetic, which works\n        // for any year that fits in i64.  This covers the Long.MaxValue / Long.MinValue\n        // boundary timestamps (year 294247 / -290308).\n        let year = timestamp_info.year as i64;\n        if (-262144..=262143).contains(&year) {\n            // Year is in chrono's range but date was rejected -> truly invalid date.\n            return Ok(None);\n        }\n        // Validate month and day manually for extreme years.\n        let m = timestamp_info.month;\n        let d = timestamp_info.day;\n        if !(1..=12).contains(&m) {\n            return Ok(None);\n        }\n        let max_day = match m {\n            1 | 3 | 5 | 7 | 8 | 10 | 12 => 31,\n            4 | 6 | 9 | 11 => 30,\n            2 => {\n                let leap = year % 4 == 0 && (year % 100 != 0 || year % 400 == 0);\n                if leap {\n                    29\n                } else {\n                    28\n                }\n            }\n            _ => return Ok(None),\n        };\n        if d < 1 || d > max_day {\n            return Ok(None);\n        }\n        // Compute the timezone offset using epoch as a surrogate probe point.\n        // Extreme-year timestamps are only valid with a UTC-like fixed offset (any DST\n        // zone would overflow).  
Using epoch gives us the standard offset.\n        let epoch_probe = NaiveDate::from_ymd_opt(1970, 1, 1)\n            .unwrap()\n            .and_hms_opt(0, 0, 0)\n            .unwrap();\n        let tz_offset_secs: i64 = match tz.from_local_datetime(&epoch_probe) {\n            LocalResult::Single(dt) => dt.offset().fix().local_minus_utc() as i64,\n            LocalResult::Ambiguous(dt, _) => dt.offset().fix().local_minus_utc() as i64,\n            LocalResult::None => 0,\n        };\n        // Compute seconds since epoch via direct calendar arithmetic.\n        // Use i128 for the intermediate multiply-by-1_000_000 step: the seconds value can be\n        // just outside the i64 range while the final microseconds result is still within range\n        // (e.g., Long.MinValue boundary: seconds = -9_223_372_036_855, result = i64::MIN).\n        let days = days_from_civil(year, m as i64, d as i64);\n        let time_secs = timestamp_info.hour as i64 * 3600\n            + timestamp_info.minute as i64 * 60\n            + timestamp_info.second as i64;\n        let total_secs = days\n            .checked_mul(86400)\n            .and_then(|s| s.checked_add(time_secs))\n            .and_then(|s| s.checked_sub(tz_offset_secs));\n        let utc_micros = total_secs.and_then(|s| {\n            let micros128 = s as i128 * 1_000_000 + timestamp_info.microsecond as i128;\n            i64::try_from(micros128).ok()\n        });\n        Ok(utc_micros)\n    }\n}\n\nfn parse_str_to_year_timestamp<T: TimeZone>(value: &str, tz: &T) -> SparkResult<Option<i64>> {\n    get_timestamp_values(value, \"year\", tz)\n}\n\nfn parse_str_to_month_timestamp<T: TimeZone>(value: &str, tz: &T) -> SparkResult<Option<i64>> {\n    get_timestamp_values(value, \"month\", tz)\n}\n\nfn parse_str_to_day_timestamp<T: TimeZone>(value: &str, tz: &T) -> SparkResult<Option<i64>> {\n    get_timestamp_values(value, \"day\", tz)\n}\n\nfn parse_str_to_hour_timestamp<T: TimeZone>(value: &str, tz: &T) -> SparkResult<Option<i64>> {\n    get_timestamp_values(value, \"hour\", tz)\n}\n\nfn parse_str_to_minute_timestamp<T: TimeZone>(value: &str, tz: &T) -> SparkResult<Option<i64>> {\n    get_timestamp_values(value, \"minute\", tz)\n}\n\nfn parse_str_to_second_timestamp<T: TimeZone>(value: &str, tz: &T) -> SparkResult<Option<i64>> {\n    get_timestamp_values(value, \"second\", tz)\n}\n\nfn parse_str_to_microsecond_timestamp<T: TimeZone>(\n    value: &str,\n    tz: &T,\n) -> SparkResult<Option<i64>> {\n    get_timestamp_values(value, \"microsecond\", tz)\n}\n\nfn timestamp_parser<T: TimeZone>(\n    value: &str,\n    eval_mode: EvalMode,\n    tz: &T,\n    is_spark4_plus: bool,\n) -> SparkResult<Option<i64>> {\n    let trimmed = value.trim();\n    if trimmed.is_empty() {\n        return Ok(None);\n    }\n    // Spark 4.0+ rejects leading whitespace for ALL T-prefixed time-only strings\n    // (T<h>, T<h>:<m>, T<h>:<m>:<s>, T<h>:<m>:<s>.<f>), but accepts trailing whitespace.\n    // Spark 3.x trims all whitespace first, so leading whitespace is accepted there.\n    // Check the raw (pre-trim) value for leading whitespace before any T-time-only match.\n    if is_spark4_plus\n        && value.len() > value.trim_start().len()\n        && (RE_TIME_ONLY_H.is_match(trimmed)\n            || RE_TIME_ONLY_HM.is_match(trimmed)\n            || RE_TIME_ONLY_HMS.is_match(trimmed)\n            || RE_TIME_ONLY_HMSU.is_match(trimmed))\n    {\n        return if eval_mode == EvalMode::Ansi {\n            Err(SparkError::InvalidInputInCastToDatetime {\n                
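// Report the raw (pre-trim) input: Spark's CAST_INVALID_INPUT error quotes the\n                // original string, as the trimmed-value ANSI test below asserts.\n                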
value: value.to_string(),\n                from_type: \"STRING\".to_string(),\n                to_type: \"TIMESTAMP\".to_string(),\n            })\n        } else {\n            Ok(None)\n        };\n    }\n    let value = trimmed;\n    // Spark accepts a leading '+' year sign on full date-time strings (e.g. \"+2020-01-01T12:34:56\")\n    // but rejects it on time-only strings (e.g. \"+12:12:12\" -> null).\n    // Detect: '+' followed by at least one digit and then a '-' separator -> year prefix -> strip '+'.\n    // Anything else starting with '+' (time-only, bare number, etc.) -> null.\n    let value = if let Some(rest) = value.strip_prefix('+') {\n        let first_non_digit = rest.find(|c: char| !c.is_ascii_digit());\n        match first_non_digit {\n            Some(i) if i >= 1 && rest.as_bytes()[i] == b'-' => rest,\n            _ => return Ok(None),\n        }\n    } else {\n        value\n    };\n\n    // Only attempt offset-suffix extraction when the value does not already match a\n    // base pattern.  This prevents the '-' in plain date strings like \"2015-03-18\"\n    // from being misidentified as a negative-offset sign.\n    let has_direct_match = RE_YEAR.is_match(value)\n        || RE_MONTH.is_match(value)\n        || RE_DAY.is_match(value)\n        || RE_HOUR.is_match(value)\n        || RE_MINUTE.is_match(value)\n        || RE_SECOND.is_match(value)\n        || RE_MICROSECOND.is_match(value)\n        || RE_TIME_ONLY_H.is_match(value)\n        || RE_TIME_ONLY_HM.is_match(value)\n        || RE_TIME_ONLY_HMS.is_match(value)\n        || RE_TIME_ONLY_HMSU.is_match(value)\n        || RE_BARE_HM.is_match(value)\n        || RE_BARE_HMS.is_match(value)\n        || RE_BARE_HMSU.is_match(value);\n\n    if !has_direct_match {\n        if let Some((stripped, suffix_tz)) = extract_offset_suffix(value) {\n            return timestamp_parser_with_tz(stripped, eval_mode, &suffix_tz);\n        }\n    }\n\n    timestamp_parser_with_tz(value, eval_mode, tz)\n}\n\n/// Parses the portion of an offset string AFTER any \"UTC\"/\"GMT\"/\"UT\" prefix (or the\n/// full bare +/- offset including its sign character).  Returns the offset in whole seconds,\n/// or `None` for any malformed, out-of-range, or trailing-garbage input.\n///\n/// Accepted formats (H = 1–2 digit hour, M = 1–2 digit minute):\n///   \"\"        -> 0          (bare \"UTC\" / \"GMT\" / \"UT\")\n///   \"+H\"      -> +H*3600    (hour-only, e.g. \"+0\" from \"UTC+0\")\n///   \"+HH\"     -> same\n///   \"+HHMM\"   -> +H*3600+M*60  (4 digits, no colon)\n///   \"+H:M\"    -> same (with colon, any digit count 1-2 each)\n///   \"+HH:MM\"  -> same\n///   (negative with '-' analogously)\n///\n/// Hours must be 0–18 and minutes 0–59.  
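For example, \"+18:59\" parses to 68_340 seconds\n/// while \"+19:00\" is rejected as None. 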
A trailing colon (\"+8:\") is rejected.\nfn parse_sign_offset(s: &str) -> Option<i32> {\n    if s.is_empty() {\n        return Some(0);\n    }\n    let (sign, rest) = match s.as_bytes().first() {\n        Some(&b'+') => (1i32, &s[1..]),\n        Some(&b'-') => (-1i32, &s[1..]),\n        _ => return None,\n    };\n    if rest.is_empty() {\n        return None; // lone '+' or '-'\n    }\n    let (h, m) = if let Some(colon_pos) = rest.find(':') {\n        let h_str = &rest[..colon_pos];\n        let m_str = &rest[colon_pos + 1..];\n        if m_str.is_empty() {\n            return None; // trailing colon: \"+8:\"\n        }\n        let h: i32 = h_str.parse().ok()?;\n        // Note: \"+HH:MM:SS\" (with seconds) is not handled; Spark accepts it but it is rare.\n        let m: i32 = m_str.parse().ok()?;\n        (h, m)\n    } else {\n        match rest.len() {\n            1 | 2 => (rest.parse::<i32>().ok()?, 0),\n            4 => (\n                rest[..2].parse::<i32>().ok()?,\n                rest[2..].parse::<i32>().ok()?,\n            ),\n            _ => return None,\n        }\n    };\n    if !(0..=18).contains(&h) || !(0..=59).contains(&m) {\n        return None;\n    }\n    Some(sign * (h * 3600 + m * 60))\n}\n\n/// Constructs a `timezone::Tz` from an offset measured in seconds.\n/// E.g. `+7*3600 + 30*60` -> `\"+07:30\"`.\nfn tz_from_offset_secs(secs: i32) -> Option<timezone::Tz> {\n    let abs = secs.abs();\n    let h = abs / 3600;\n    let m = (abs % 3600) / 60;\n    let sign = if secs >= 0 { '+' } else { '-' };\n    timezone::Tz::from_str(&format!(\"{}{:02}:{:02}\", sign, h, m)).ok()\n}\n\n/// Returns the last (rightmost) byte position where `needle` starts inside `haystack`.\nfn rfind_str(haystack: &str, needle: &str) -> Option<usize> {\n    let hb = haystack.as_bytes();\n    let nb = needle.as_bytes();\n    if nb.len() > hb.len() {\n        return None;\n    }\n    (0..=(hb.len() - nb.len()))\n        .rev()\n        .find(|&i| hb[i..].starts_with(nb))\n}\n\n/// If `value` ends with a recognised timezone suffix, returns `(datetime_prefix, Tz)`.\n/// Returns `None` when no suffix is found.\n///\n/// Recognised forms (in matching priority order):\n///   Z                  -> UTC+0\n///   UTC / \" UTC\"       -> UTC+0  (or UTC +/- offset, e.g. \"UTC+0\", \" UTC+07:30\")\n///   GMT / \" GMT\"       -> UTC+0  (or GMT +/- offset)\n///   UT  / \" UT\"        -> UTC+0  (or UT +/- offset)\n///   Named IANA zone    -> e.g. \" Europe/Moscow\"\n///   Bare +/-offset       -> e.g. \"+07:30\", \"-1:0\", \"+0730\"\n///\n/// **The caller must ensure the value does not already match a base timestamp pattern.**\n/// Without that guard a bare '-' in \"2015-03-18\" would be misread as a -18:00 offset.\nfn extract_offset_suffix(value: &str) -> Option<(&str, timezone::Tz)> {\n    // 1. Z suffix\n    if let Some(stripped) = value.strip_suffix('Z') {\n        return Some((stripped, tz_from_offset_secs(0)?));\n    }\n\n    // 2. Named text-prefix forms: \"UTC\", \"GMT\", \"UT\" (optionally space-prefixed),\n    //    each optionally followed by a bare +/-offset.\n    //    Longest first so \" UTC\" is tried before \" UT\", etc.\n    for prefix in &[\" UTC\", \"UTC\", \" GMT\", \"GMT\", \" UT\", \"UT\"] {\n        if let Some(pos) = rfind_str(value, prefix) {\n            let offset_str = &value[pos + prefix.len()..];\n            if let Some(secs) = parse_sign_offset(offset_str) {\n                return Some((&value[..pos], tz_from_offset_secs(secs)?));\n            }\n        }\n    }\n\n    // 3. 
Java SHORT_IDS fixed-offset abbreviations recognised by ZoneId.of() via SHORT_IDS map.\n    //    Only three have purely fixed offsets (no '/'):\n    //      EST -> -05:00  (-18 000 s)\n    //      MST -> -07:00  (-25 200 s)\n    //      HST -> -10:00  (-36 000 s)\n    //    These may appear with or without a leading space; no sub-offset is allowed after them.\n    for (abbr, offset_secs) in &[\n        (\" EST\", -18_000i32),\n        (\"EST\", -18_000),\n        (\" MST\", -25_200),\n        (\"MST\", -25_200),\n        (\" HST\", -36_000),\n        (\"HST\", -36_000),\n    ] {\n        if let Some(pos) = rfind_str(value, abbr) {\n            if pos + abbr.len() == value.len() {\n                return Some((&value[..pos], tz_from_offset_secs(*offset_secs)?));\n            }\n        }\n    }\n\n    // 4. Named IANA timezone: a space followed by a slash-containing word at the end.\n    //    e.g. \"2015-03-18T12:03:17.123456 Europe/Moscow\"\n    if let Some(space_pos) = value.rfind(' ') {\n        let tz_name = &value[space_pos + 1..];\n        if tz_name.contains('/') {\n            if let Ok(tz) = timezone::Tz::from_str(tz_name) {\n                return Some((&value[..space_pos], tz));\n            }\n        }\n    }\n\n    // 5. Bare +/-offset: find the rightmost '+' or '-' and try to parse everything\n    //    from that position to the end as a complete valid offset.\n    let last_sign = {\n        let p = value.rfind('+');\n        let m = value.rfind('-');\n        match (p, m) {\n            (Some(p), Some(m)) => Some(p.max(m)),\n            (a, b) => a.or(b),\n        }\n    };\n    if let Some(pos) = last_sign {\n        let offset_str = &value[pos..];\n        if let Some(secs) = parse_sign_offset(offset_str) {\n            return Some((&value[..pos], tz_from_offset_secs(secs)?));\n        }\n    }\n\n    None\n}\n\ntype TimestampParsePattern<T> = (&'static Regex, fn(&str, &T) -> SparkResult<Option<i64>>);\n\nstatic RE_YEAR: LazyLock<Regex> = LazyLock::new(|| Regex::new(r\"^-?\\d{4,7}$\").unwrap());\nstatic RE_MONTH: LazyLock<Regex> = LazyLock::new(|| Regex::new(r\"^-?\\d{4,7}-\\d{2}$\").unwrap());\nstatic RE_DAY: LazyLock<Regex> = LazyLock::new(|| Regex::new(r\"^-?\\d{4,7}-\\d{2}-\\d{2}$\").unwrap());\nstatic RE_HOUR: LazyLock<Regex> =\n    LazyLock::new(|| Regex::new(r\"^-?\\d{4,7}-\\d{2}-\\d{2}[T ]\\d{1,2}$\").unwrap());\nstatic RE_MINUTE: LazyLock<Regex> =\n    LazyLock::new(|| Regex::new(r\"^-?\\d{4,7}-\\d{2}-\\d{2}[T ]\\d{2}:\\d{2}$\").unwrap());\nstatic RE_SECOND: LazyLock<Regex> =\n    LazyLock::new(|| Regex::new(r\"^-?\\d{4,7}-\\d{2}-\\d{2}[T ]\\d{2}:\\d{2}:\\d{2}$\").unwrap());\nstatic RE_MICROSECOND: LazyLock<Regex> =\n    LazyLock::new(|| Regex::new(r\"^-?\\d{4,7}-\\d{2}-\\d{2}[T ]\\d{2}:\\d{2}:\\d{2}\\.\\d+$\").unwrap());\nstatic RE_TIME_ONLY_H: LazyLock<Regex> = LazyLock::new(|| Regex::new(r\"^T\\d{1,2}$\").unwrap());\nstatic RE_TIME_ONLY_HM: LazyLock<Regex> =\n    LazyLock::new(|| Regex::new(r\"^T\\d{1,2}:\\d{1,2}$\").unwrap());\nstatic RE_TIME_ONLY_HMS: LazyLock<Regex> =\n    LazyLock::new(|| Regex::new(r\"^T\\d{1,2}:\\d{1,2}:\\d{1,2}$\").unwrap());\nstatic RE_TIME_ONLY_HMSU: LazyLock<Regex> =\n    LazyLock::new(|| Regex::new(r\"^T\\d{1,2}:\\d{1,2}:\\d{1,2}\\.\\d+$\").unwrap());\nstatic RE_BARE_HM: LazyLock<Regex> = LazyLock::new(|| Regex::new(r\"^\\d{1,2}:\\d{1,2}$\").unwrap());\nstatic RE_BARE_HMS: LazyLock<Regex> =\n    LazyLock::new(|| Regex::new(r\"^\\d{1,2}:\\d{1,2}:\\d{1,2}$\").unwrap());\nstatic RE_BARE_HMSU: LazyLock<Regex> =\n    LazyLock::new(|| 
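/* bare h:m:s.f, e.g. \"12:34:56.123\" */ 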
Regex::new(r\"^\\d{1,2}:\\d{1,2}:\\d{1,2}\\.\\d+$\").unwrap());\n\nfn timestamp_parser_with_tz<T: TimeZone>(\n    value: &str,\n    eval_mode: EvalMode,\n    tz: &T,\n) -> SparkResult<Option<i64>> {\n    // Both T-separator and space-separator date-time forms are supported.\n    // Negative years are handled by get_timestamp_values detecting a leading '-'.\n    let patterns: &[TimestampParsePattern<T>] = &[\n        // Year only: 4-7 digits, optionally negative\n        (\n            &RE_YEAR,\n            parse_str_to_year_timestamp as fn(&str, &T) -> SparkResult<Option<i64>>,\n        ),\n        // Year-month\n        (&RE_MONTH, parse_str_to_month_timestamp),\n        // Year-month-day\n        (&RE_DAY, parse_str_to_day_timestamp),\n        // Date T-or-space hour (1 or 2 digits)\n        (&RE_HOUR, parse_str_to_hour_timestamp),\n        // Date T-or-space hour:minute\n        (&RE_MINUTE, parse_str_to_minute_timestamp),\n        // Date T-or-space hour:minute:second\n        (&RE_SECOND, parse_str_to_second_timestamp),\n        // Date T-or-space hour:minute:second.fraction\n        (&RE_MICROSECOND, parse_str_to_microsecond_timestamp),\n        // Time-only: T hour (1 or 2 digits, no colon)\n        (&RE_TIME_ONLY_H, parse_str_to_time_only_timestamp),\n        // Time-only: T hour:minute\n        (&RE_TIME_ONLY_HM, parse_str_to_time_only_timestamp),\n        // Time-only: T hour:minute:second\n        (&RE_TIME_ONLY_HMS, parse_str_to_time_only_timestamp),\n        // Time-only: T hour:minute:second.fraction\n        (&RE_TIME_ONLY_HMSU, parse_str_to_time_only_timestamp),\n        // Bare time-only: hour:minute (without T prefix)\n        (&RE_BARE_HM, parse_str_to_time_only_timestamp),\n        // Bare time-only: hour:minute:second\n        (&RE_BARE_HMS, parse_str_to_time_only_timestamp),\n        // Bare time-only: hour:minute:second.fraction\n        (&RE_BARE_HMSU, parse_str_to_time_only_timestamp),\n    ];\n\n    let mut timestamp = None;\n\n    // Iterate through patterns and try matching\n    for (pattern, parse_func) in patterns {\n        if pattern.is_match(value) {\n            timestamp = parse_func(value, tz)?;\n            break;\n        }\n    }\n\n    if timestamp.is_none() {\n        return if eval_mode == EvalMode::Ansi {\n            Err(SparkError::InvalidInputInCastToDatetime {\n                value: value.to_string(),\n                from_type: \"STRING\".to_string(),\n                to_type: \"TIMESTAMP\".to_string(),\n            })\n        } else {\n            Ok(None)\n        };\n    }\n\n    Ok(timestamp)\n}\n\nfn parse_str_to_time_only_timestamp<T: TimeZone>(value: &str, tz: &T) -> SparkResult<Option<i64>> {\n    // The 'T' is optional in the time format; strip it if specified.\n    let time_part = value.strip_prefix('T').unwrap_or(value);\n\n    // Parse time components: hour[:minute[:second[.fraction]]]\n    // Use splitn(3) so \"12:34:56.789\" splits into [\"12\", \"34\", \"56.789\"].\n    let colon_parts: Vec<&str> = time_part.splitn(3, ':').collect();\n    let hour: u32 = colon_parts[0].parse().unwrap_or(0);\n    let minute: u32 = colon_parts.get(1).and_then(|s| s.parse().ok()).unwrap_or(0);\n    let (second, nanosecond) = if let Some(sec_frac) = colon_parts.get(2) {\n        let dot_idx = sec_frac.find('.');\n        let sec: u32 = sec_frac[..dot_idx.unwrap_or(sec_frac.len())]\n            .parse()\n            .unwrap_or(0);\n        let ns: u32 = if let Some(dot) = dot_idx {\n            let frac = &sec_frac[dot + 1..];\n            // 
Interpret up to 6 digits as microseconds, padding with trailing zeros.\n            let trimmed = &frac[..frac.len().min(6)];\n            let padded = format!(\"{:0<6}\", trimmed);\n            padded.parse::<u32>().unwrap_or(0) * 1000\n        } else {\n            0\n        };\n        (sec, ns)\n    } else {\n        (0, 0)\n    };\n\n    let datetime = tz.from_utc_datetime(&chrono::Utc::now().naive_utc());\n    let result = datetime\n        .with_timezone(tz)\n        .with_hour(hour)\n        .and_then(|dt| dt.with_minute(minute))\n        .and_then(|dt| dt.with_second(second))\n        .and_then(|dt| dt.with_nanosecond(nanosecond))\n        .map(|dt| dt.timestamp_micros());\n\n    Ok(result)\n}\n\n// A string-to-date parser; a port of Spark's SparkDateTimeUtils#stringToDate.\nfn date_parser(date_str: &str, eval_mode: EvalMode) -> SparkResult<Option<i32>> {\n    // local functions\n    fn get_trimmed_start(bytes: &[u8]) -> usize {\n        let mut start = 0;\n        while start < bytes.len() && is_whitespace_or_iso_control(bytes[start]) {\n            start += 1;\n        }\n        start\n    }\n\n    fn get_trimmed_end(start: usize, bytes: &[u8]) -> usize {\n        let mut end = bytes.len() - 1;\n        while end > start && is_whitespace_or_iso_control(bytes[end]) {\n            end -= 1;\n        }\n        end + 1\n    }\n\n    fn is_whitespace_or_iso_control(byte: u8) -> bool {\n        byte.is_ascii_whitespace() || byte.is_ascii_control()\n    }\n\n    fn is_valid_digits(segment: i32, digits: usize) -> bool {\n        // NaiveDate is bounded to [-262143, 262142] (6 digits). We allow up to 7 digits to support\n        // leading-zero year strings like \"0002020\" (= year 2020), matching Spark's\n        // isValidDigits. Values outside the bounds are caught by an explicit bounds\n        // check below.\n        let max_digits_year = 7;\n        // year (segment 0) can be between 4 to 7 digits,\n        // month and day (segment 1 and 2) can be between 1 to 2 digits\n        (segment == 0 && digits >= 4 && digits <= max_digits_year)\n            || (segment != 0 && digits > 0 && digits <= 2)\n    }\n\n    fn return_result(date_str: &str, eval_mode: EvalMode) -> SparkResult<Option<i32>> {\n        if eval_mode == EvalMode::Ansi {\n            Err(SparkError::InvalidInputInCastToDatetime {\n                value: date_str.to_string(),\n                from_type: \"STRING\".to_string(),\n                to_type: \"DATE\".to_string(),\n            })\n        } else {\n            Ok(None)\n        }\n    }\n    // end local functions\n\n    if date_str.is_empty() {\n        return return_result(date_str, eval_mode);\n    }\n\n    // Values of the date segments (year, month, day), each defaulting to 1\n    let mut date_segments = [1, 1, 1];\n    let mut sign = 1;\n    let mut current_segment = 0;\n    let mut current_segment_value = Wrapping(0);\n    let mut current_segment_digits = 0;\n    let bytes = date_str.as_bytes();\n\n    let mut j = get_trimmed_start(bytes);\n    let str_end_trimmed = get_trimmed_end(j, bytes);\n\n    if j == str_end_trimmed {\n        return return_result(date_str, eval_mode);\n    }\n\n    // assign a sign to the date; both '-' and '+' are accepted (Spark stringToDate lines 357-360)\n    if bytes[j] == b'-' {\n        sign = -1;\n        j += 1;\n    } else if bytes[j] == b'+' {\n        // sign remains 1 (positive)\n        j += 1;\n    }\n\n    // Loop to the end of the string until we have processed 3 segments,\n    // exiting on any space ' ' or 'T' after the 3rd segment\n    while j < str_end_trimmed && (current_segment < 3 && !(bytes[j] == b' ' || bytes[j] == b'T')) {\n        let b = bytes[j];\n        if current_segment < 2 && b == b'-' {\n            // Check validity of the year and month segments when the current byte is a separator\n            if !is_valid_digits(current_segment, current_segment_digits) {\n                return return_result(date_str, eval_mode);\n            }\n            // If valid, update the corresponding segment with the current segment value.\n            date_segments[current_segment as usize] = current_segment_value.0;\n            current_segment_value = Wrapping(0);\n            current_segment_digits = 0;\n            current_segment += 1;\n        } else if !b.is_ascii_digit() {\n            return return_result(date_str, eval_mode);\n        } else {\n            // Append the next digit to the current segment value (value = value * 10 + digit)\n            let parsed_value = Wrapping((b - b'0') as i32);\n            current_segment_value = current_segment_value * Wrapping(10) + parsed_value;\n            current_segment_digits += 1;\n        }\n        j += 1;\n    }\n\n    // Check validity of the last segment\n    if !is_valid_digits(current_segment, current_segment_digits) {\n        return return_result(date_str, eval_mode);\n    }\n\n    if current_segment < 2 && j < str_end_trimmed {\n        // For the `yyyy` and `yyyy-[m]m` formats, entire input must be consumed.\n        return return_result(date_str, eval_mode);\n    }\n\n    date_segments[current_segment as usize] = current_segment_value.0;\n\n    // Reject out-of-range years explicitly\n    let year = sign * date_segments[0];\n    if !(-262143..=262142).contains(&year) {\n        return Ok(None);\n    }\n\n    match NaiveDate::from_ymd_opt(year, date_segments[1] as u32, date_segments[2] as u32) {\n        Some(date) => {\n            let duration_since_epoch = date\n                .signed_duration_since(DateTime::UNIX_EPOCH.naive_utc().date())\n                .num_days();\n            Ok(Some(duration_since_epoch.to_i32().unwrap()))\n        }\n        None => Ok(None),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use crate::cast::cast_array;\n    use crate::SparkCastOptions;\n    use arrow::array::{DictionaryArray, Int32Array, StringArray};\n    use arrow::datatypes::TimeUnit;\n    use datafusion::common::Result as DataFusionResult;\n\n    /// Test helper that wraps the mode-specific parse functions\n    fn cast_string_to_i8(str: &str, eval_mode: EvalMode) -> SparkResult<Option<i8>> {\n        match eval_mode {\n            EvalMode::Legacy => parse_string_to_i8_legacy(str),\n            EvalMode::Ansi => parse_string_to_i8_ansi(str),\n            EvalMode::Try => parse_string_to_i8_try(str),\n        }\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // test takes too long with miri\n    fn test_cast_string_to_timestamp() {\n        let array: ArrayRef = Arc::new(StringArray::from(vec![\n            Some(\"2020-01-01T12:34:56.123456\"),\n            Some(\"T2\"),\n            Some(\"0100-01-01T12:34:56.123456\"),\n            Some(\"10000-01-01T12:34:56.123456\"),\n        ]));\n        let tz = &timezone::Tz::from_str(\"UTC\").unwrap();\n\n        let string_array = array\n            .as_any()\n            .downcast_ref::<GenericStringArray<i32>>()\n            .expect(\"Expected a string array\");\n\n        let eval_mode = EvalMode::Legacy;\n        let result = cast_utf8_to_timestamp!(\n            &string_array,\n            eval_mode,\n      
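      // remaining macro args: the Arrow timestamp type, the per-row parser fn, the\n            // session tz, the tz string, and (per timestamp_parser's signature) is_spark4_plus\n      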
      TimestampMicrosecondType,\n            timestamp_parser,\n            tz,\n            \"UTC\",\n            true\n        )\n        .unwrap();\n\n        assert_eq!(\n            result.data_type(),\n            &DataType::Timestamp(TimeUnit::Microsecond, Some(\"UTC\".into()))\n        );\n        assert_eq!(result.len(), 4);\n    }\n\n    #[test]\n    fn test_cast_string_to_timestamp_ansi_error() {\n        // In ANSI mode, an invalid timestamp string must produce an error rather than null.\n        let array: ArrayRef = Arc::new(StringArray::from(vec![\n            Some(\"2020-01-01T12:34:56.123456\"),\n            Some(\"not_a_timestamp\"),\n        ]));\n        let tz = &timezone::Tz::from_str(\"UTC\").unwrap();\n        let string_array = array\n            .as_any()\n            .downcast_ref::<GenericStringArray<i32>>()\n            .expect(\"Expected a string array\");\n\n        let eval_mode = EvalMode::Ansi;\n        let result = cast_utf8_to_timestamp!(\n            &string_array,\n            eval_mode,\n            TimestampMicrosecondType,\n            timestamp_parser,\n            tz,\n            \"UTC\",\n            true\n        );\n        assert!(\n            result.is_err(),\n            \"ANSI mode should return Err for an invalid timestamp string\"\n        );\n    }\n\n    #[test]\n    fn test_cast_string_to_timestamp_ansi_error_trimmed_value() {\n        // The error value in InvalidInputInCastToDatetime must match the raw input\n        // (including trailing whitespace) to match Spark's CAST_INVALID_INPUT behavior.\n        let array: ArrayRef = Arc::new(StringArray::from(vec![\n            Some(\"91\\n3       \"), // trailing spaces after a newline in the middle\n        ]));\n        let tz = &timezone::Tz::from_str(\"UTC\").unwrap();\n        let string_array = array\n            .as_any()\n            .downcast_ref::<GenericStringArray<i32>>()\n            .expect(\"Expected a string array\");\n\n        let eval_mode = EvalMode::Ansi;\n        let result = cast_utf8_to_timestamp!(\n            &string_array,\n            eval_mode,\n            TimestampMicrosecondType,\n            timestamp_parser,\n            tz,\n            \"UTC\",\n            true\n        );\n        match result {\n            Err(SparkError::InvalidInputInCastToDatetime { value, .. 
}) => {\n                assert_eq!(\n                    value, \"91\\n3       \",\n                    \"ANSI error value should match the raw (untrimmed) input to match Spark behavior\"\n                );\n            }\n            other => panic!(\"Expected InvalidInputInCastToDatetime error, got {other:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_cast_dict_string_to_timestamp() -> DataFusionResult<()> {\n        // prepare input data\n        let keys = Int32Array::from(vec![0, 1]);\n        let values: ArrayRef = Arc::new(StringArray::from(vec![\n            Some(\"2020-01-01T12:34:56.123456\"),\n            Some(\"T2\"),\n        ]));\n        let dict_array = Arc::new(DictionaryArray::new(keys, values));\n\n        let timezone = \"UTC\".to_string();\n        // test casting string dictionary array to timestamp array\n        let cast_options = SparkCastOptions::new(EvalMode::Legacy, &timezone, false);\n        let result = cast_array(\n            dict_array,\n            &DataType::Timestamp(TimeUnit::Microsecond, Some(timezone.clone().into())),\n            &cast_options,\n        )?;\n        assert_eq!(\n            *result.data_type(),\n            DataType::Timestamp(TimeUnit::Microsecond, Some(timezone.into()))\n        );\n        assert_eq!(result.len(), 2);\n\n        Ok(())\n    }\n\n    #[test]\n    fn extreme_year_boundary_test() {\n        let tz = &timezone::Tz::from_str(\"UTC\").unwrap();\n        // Long.MaxValue = 9223372036854775807 μs -> 294247-01-10T04:00:54.775807Z\n        assert_eq!(\n            timestamp_parser(\"294247-01-10T04:00:54.775807Z\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(i64::MAX),\n        );\n        // Long.MinValue = -9223372036854775808 μs -> -290308-12-21T19:59:05.224192Z\n        assert_eq!(\n            timestamp_parser(\"-290308-12-21T19:59:05.224192Z\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(i64::MIN),\n        );\n        // One beyond Long.MaxValue -> null (overflow)\n        assert_eq!(\n            timestamp_parser(\"294247-01-10T04:00:54.775808Z\", EvalMode::Legacy, tz, true).unwrap(),\n            None,\n        );\n        // One before Long.MinValue -> null (overflow)\n        assert_eq!(\n            timestamp_parser(\"-290308-12-21T19:59:05.224191Z\", EvalMode::Legacy, tz, true).unwrap(),\n            None,\n        );\n    }\n\n    #[test]\n    fn test_leading_whitespace_t_hm() {\n        let tz = &timezone::Tz::from_str(\"UTC\").unwrap();\n        // Spark 4.0+ rejects leading whitespace for ALL T-prefixed time-only patterns.\n        for ws_input in &[\" T2:30\", \"\\tT2:30\", \"\\nT2:30\", \" T2\", \"\\tT2\", \"\\nT2\"] {\n            assert!(\n                timestamp_parser(ws_input, EvalMode::Legacy, tz, true)\n                    .unwrap()\n                    .is_none(),\n                \"'{ws_input}' should be null in Legacy mode on Spark 4.0+\"\n            );\n            // In ANSI mode the same inputs must raise an error (not silently return null).\n            assert!(\n                timestamp_parser(ws_input, EvalMode::Ansi, tz, true).is_err(),\n                \"'{ws_input}' should error in ANSI mode on Spark 4.0+\"\n            );\n            // Spark 3.x trims all whitespace first, so leading whitespace is valid.\n            assert!(\n                timestamp_parser(ws_input, EvalMode::Legacy, tz, false)\n                    .unwrap()\n                    .is_some(),\n                \"'{ws_input}' should be valid in Legacy mode on Spark 3.x\"\n          
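  // (Spark 3.x trims all whitespace first, as noted in timestamp_parser)\n          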
  );\n        }\n        // Without leading whitespace, these must be valid on all versions.\n        for ok_input in &[\"T2:30\", \"T2\"] {\n            assert!(\n                timestamp_parser(ok_input, EvalMode::Legacy, tz, true)\n                    .unwrap()\n                    .is_some(),\n                \"'{ok_input}' should be valid\"\n            );\n        }\n    }\n\n    #[test]\n    fn plus_sign_year_test() {\n        let tz = &timezone::Tz::from_str(\"UTC\").unwrap();\n        // Spark accepts '+year' prefix on full date-time strings for TIMESTAMP casts.\n        // \"+2020-01-01T12:34:56\" -> 2020-01-01T12:34:56 UTC = 1577882096 seconds.\n        assert_eq!(\n            timestamp_parser(\"+2020-01-01T12:34:56\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577882096000000),\n            \"+year on full datetime should parse the same as without the + prefix\"\n        );\n        // But '+' on a time-only string is rejected (Spark returns null).\n        assert_eq!(\n            timestamp_parser(\"+12:12:12\", EvalMode::Legacy, tz, true).unwrap(),\n            None,\n            \"+hour:min:sec must return null\"\n        );\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // test takes too long with miri\n    fn timestamp_parser_test() {\n        let tz = &timezone::Tz::from_str(\"UTC\").unwrap();\n        // exercise every supported format, from year-only down to microsecond precision\n        assert_eq!(\n            timestamp_parser(\"2020\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577836800000000) // microseconds since the epoch\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577836800000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577836800000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577880000000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577882040000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577882096000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56.123456\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577882096123456)\n        );\n        assert_eq!(\n            timestamp_parser(\"0100\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(-59011459200000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"0100-01\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(-59011459200000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"0100-01-01\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(-59011459200000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"0100-01-01T12\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(-59011416000000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"0100-01-01T12:34\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(-59011413960000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"0100-01-01T12:34:56\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(-59011413904000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"0100-01-01T12:34:56.123456\", EvalMode::Legacy, 
tz, true).unwrap(),\n            Some(-59011413903876544)\n        );\n        assert_eq!(\n            timestamp_parser(\"10000\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(253402300800000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"10000-01\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(253402300800000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"10000-01-01\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(253402300800000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"10000-01-01T12\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(253402344000000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"10000-01-01T12:34\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(253402346040000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"10000-01-01T12:34:56\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(253402346096000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"10000-01-01T12:34:56.123456\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(253402346096123456)\n        );\n        // Space separator (same values as T separator)\n        assert_eq!(\n            timestamp_parser(\"2020-01-01 12\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577880000000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01 12:34\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577882040000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01 12:34:56\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577882096000000)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01 12:34:56.123456\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577882096123456)\n        );\n        // Z suffix (UTC)\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56Z\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577882096000000)\n        );\n        // Positive offset suffix\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56+05:30\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577862296000000) // 12:34:56 UTC+5:30 = 07:04:56 UTC\n        );\n        // T-prefixed time-only with colon\n        assert!(timestamp_parser(\"T12:34\", EvalMode::Legacy, tz, true)\n            .unwrap()\n            .is_some());\n        assert!(timestamp_parser(\"T12:34:56\", EvalMode::Legacy, tz, true)\n            .unwrap()\n            .is_some());\n        assert!(\n            timestamp_parser(\"T12:34:56.123456\", EvalMode::Legacy, tz, true)\n                .unwrap()\n                .is_some()\n        );\n        // Bare time-only (hour:minute without T prefix)\n        assert!(timestamp_parser(\"12:34\", EvalMode::Legacy, tz, true)\n            .unwrap()\n            .is_some());\n        assert!(timestamp_parser(\"12:34:56\", EvalMode::Legacy, tz, true)\n            .unwrap()\n            .is_some());\n        // Negative year\n        assert!(timestamp_parser(\"-0001\", EvalMode::Legacy, tz, true)\n            .unwrap()\n            .is_some());\n        assert!(\n            timestamp_parser(\"-0001-01-01T12:34:56\", EvalMode::Legacy, tz, true)\n                .unwrap()\n                .is_some()\n        );\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)]\n    fn timestamp_parser_fraction_scaling_test() {\n        let tz = 
&timezone::Tz::from_str(\"UTC\").unwrap();\n        // Base: \"2020-01-01T12:34:56\" = 1577882096000000 µs (confirmed by timestamp_parser_test)\n        let base = 1577882096000000i64;\n\n        // 3-digit fraction: \".123\" -> 123_000 µs\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56.123\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(base + 123_000)\n        );\n        // 1-digit fraction: \".1\" -> 100_000 µs\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56.1\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(base + 100_000)\n        );\n        // 4-digit fraction: \".1000\" -> 100_000 µs  (trailing zeros not extra precision)\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56.1000\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(base + 100_000)\n        );\n        // 5-digit fraction: \".12312\" -> 123_120 µs\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56.12312\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(base + 123_120)\n        );\n        // 6-digit fraction (exact): unchanged\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56.123456\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(base + 123_456)\n        );\n        // >6 digits: truncated to 6\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56.123456789\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(base + 123_456)\n        );\n        // Fraction after Z-stripped offset\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56.123Z\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(base + 123_000)\n        );\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)]\n    fn timestamp_parser_tz_offset_formats_test() {\n        let tz = &timezone::Tz::from_str(\"UTC\").unwrap();\n        // All of these represent 2020-01-01T12:34:56 UTC = 1577882096000000 µs.\n        let utc = 1577882096000000i64;\n        // +05:30 offset -> UTC = 12:34:56 − 5h30m = 07:04:56 UTC = 1577862296000000 µs\n        let plus530 = 1577862296000000i64;\n\n        // +/-HHMM (no colon)\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56+0000\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(utc)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56+0530\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(plus530)\n        );\n        // +/-H:MM (single-digit hour)\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56+5:30\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(plus530)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56+0:00\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(utc)\n        );\n        // +/-H:M (single-digit both)\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56+5:3\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577863916000000) // 12:34:56 − 5h3m = 07:31:56 UTC = 1577836800+27116\n        );\n        // bare UTC / \" UTC\"\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56UTC\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(utc)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56 UTC\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(utc)\n        );\n        // UTC+offset\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56 UTC+5:30\", 
EvalMode::Legacy, tz, true).unwrap(),\n            Some(plus530)\n        );\n        // UTC+0 (single-digit zero)\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56 UTC+0\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(utc)\n        );\n        // GMT+/-HH:MM (no space)\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56GMT+00:00\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(utc)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56GMT+05:30\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(plus530)\n        );\n        // \" GMT+/-...\" (space-prefixed)\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56 GMT+05:30\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(plus530)\n        );\n        // \" GMT+/-HHMM\" (space + GMT + no colon)\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56 GMT+0530\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(plus530)\n        );\n        // \" UT+/-HH:MM\"\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56 UT+05:30\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(plus530)\n        );\n        // Bare \"UT\" (no leading space) — Spark accepts \"UT\" as a UTC alias.\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56UT\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(utc)\n        );\n        // Java SHORT_IDS: EST (-05:00), MST (-07:00), HST (-10:00)\n        // 2020-01-01T12:34:56 EST = 2020-01-01T17:34:56 UTC = 1577896496 s\n        let est_utc = utc + 5 * 3600 * 1_000_000;\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56 EST\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(est_utc)\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56EST\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(est_utc)\n        );\n        // 2020-01-01T12:34:56 MST = 2020-01-01T19:34:56 UTC\n        let mst_utc = utc + 7 * 3600 * 1_000_000;\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56 MST\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(mst_utc)\n        );\n        // 2020-01-01T12:34:56 HST = 2020-01-01T22:34:56 UTC\n        let hst_utc = utc + 10 * 3600 * 1_000_000;\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56 HST\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(hst_utc)\n        );\n        // Named IANA zone \" Europe/Moscow\" (UTC+3 in winter 2020)\n        // 2020-01-01T12:34:56 Europe/Moscow = 2020-01-01T09:34:56 UTC = 1577871296000000 µs\n        assert_eq!(\n            timestamp_parser(\n                \"2020-01-01T12:34:56 Europe/Moscow\",\n                EvalMode::Legacy,\n                tz,\n                true\n            )\n            .unwrap(),\n            Some(1577871296000000)\n        );\n        // Plain date strings must NOT be affected by the offset-extraction logic.\n        assert_eq!(\n            timestamp_parser(\"2020-01-01\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577836800000000)\n        );\n        // Invalid offset formats -> null\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56-8:\", EvalMode::Legacy, tz, true).unwrap(),\n            None\n        );\n        assert_eq!(\n            timestamp_parser(\"2020-01-01T12:34:56-20:0\", EvalMode::Legacy, tz, true).unwrap(),\n            None // h=20 > 18 invalid\n        
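    // (out-of-range minutes, e.g. \"+05:71\", are rejected the same way)\n        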
);\n        // Positive year-sign prefix is accepted for timestamps (see plus_sign_year_test)\n        assert_eq!(\n            timestamp_parser(\"+2020-01-01T12:34:56\", EvalMode::Legacy, tz, true).unwrap(),\n            Some(1577882096000000)\n        );\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)]\n    fn timestamp_parser_dst_test() {\n        // DST spring-forward: America/New_York springs forward 2020-03-08 02:00 -> 03:00.\n        // 02:30 does not exist; Spark advances to 03:30 EDT (UTC-4) = 07:30 UTC.\n        // 2020-03-08T07:30:00Z = 1577836800 + 67*86400 + 27000 = 1583652600 seconds.\n        let ny_tz = &timezone::Tz::from_str(\"America/New_York\").unwrap();\n        assert_eq!(\n            timestamp_parser(\"2020-03-08 02:30:00\", EvalMode::Legacy, ny_tz, true).unwrap(),\n            Some(1583652600000000)\n        );\n        // Just before gap: 01:59:59 EST (UTC-5) = 06:59:59 UTC = 1583650799 seconds.\n        assert_eq!(\n            timestamp_parser(\"2020-03-08 01:59:59\", EvalMode::Legacy, ny_tz, true).unwrap(),\n            Some(1583650799000000)\n        );\n        // Just after gap: 03:00:00 EDT (UTC-4) = 07:00:00 UTC = 1583650800 seconds.\n        assert_eq!(\n            timestamp_parser(\"2020-03-08 03:00:00\", EvalMode::Legacy, ny_tz, true).unwrap(),\n            Some(1583650800000000)\n        );\n\n        // DST fall-back: 2020-11-01 02:00 EDT -> 01:00 EST. Ambiguous: [01:00, 02:00).\n        // Spark picks the earlier UTC instant (pre-transition = EDT = UTC-4).\n        // 01:30 EDT (UTC-4) = 05:30 UTC.\n        // 2020-11-01 = 2020-01-01 + 305 days = 1577836800 + 305*86400 = 1604188800 seconds.\n        assert_eq!(\n            timestamp_parser(\"2020-11-01 01:30:00\", EvalMode::Legacy, ny_tz, true).unwrap(),\n            Some(1604208600000000) // 1604188800 + 5*3600 + 30*60 = 1604208600\n        );\n    }\n\n    #[test]\n    fn date_parser_test() {\n        for date in &[\n            \"2020\",\n            \"2020-01\",\n            \"2020-01-01\",\n            \"+2020-01-01\", // Spark accepts '+' year prefix on dates\n            \"02020-01-01\",\n            \"002020-01-01\",\n            \"0002020-01-01\",\n            \"2020-1-1\",\n            \"2020-01-01 \",\n            \"2020-01-01T\",\n        ] {\n            for eval_mode in &[EvalMode::Legacy, EvalMode::Ansi, EvalMode::Try] {\n                assert_eq!(date_parser(date, *eval_mode).unwrap(), Some(18262));\n            }\n        }\n\n        // dates in invalid formats\n        for date in &[\n            \"abc\",\n            \"\",\n            \"not_a_date\",\n            \"3/\",\n            \"3/12\",\n            \"3/12/2020\",\n            \"3/12/2002 T\",\n            \"202\",\n            \"2020-010-01\",\n            \"2020-10-010\",\n            \"2020-10-010T\",\n            \"--262143-12-31\",\n            \"--262143-12-31 \",\n        ] {\n            for eval_mode in &[EvalMode::Legacy, EvalMode::Try] {\n                assert_eq!(date_parser(date, *eval_mode).unwrap(), None);\n            }\n            assert!(date_parser(date, EvalMode::Ansi).is_err());\n        }\n\n        for date in &[\"-3638-5\"] {\n            for eval_mode in &[EvalMode::Legacy, EvalMode::Try, EvalMode::Ansi] {\n                assert_eq!(date_parser(date, *eval_mode).unwrap(), Some(-2048160));\n            }\n        }\n\n        // date_parser only accepts years in [-262143, 262142] (backed by chrono's\n        // NaiveDate range) and returns None for dates outside that range.\n        for date in &[\n            
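// first case: one year below the accepted minimum of -262143\n            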
\"-262144-1-1\",\n            \"262143-01-1\",\n            \"262143-1-1\",\n            \"262143-01-1 \",\n            \"262143-01-01T \",\n            \"262143-1-01T 1234\",\n            \"-0973250\",\n        ] {\n            for eval_mode in &[EvalMode::Legacy, EvalMode::Try, EvalMode::Ansi] {\n                assert_eq!(date_parser(date, *eval_mode).unwrap(), None);\n            }\n        }\n    }\n\n    #[test]\n    fn test_cast_string_to_date() {\n        let array: ArrayRef = Arc::new(StringArray::from(vec![\n            Some(\"2020\"),\n            Some(\"2020-01\"),\n            Some(\"2020-01-01\"),\n            Some(\"2020-01-01T\"),\n        ]));\n\n        let result = cast_string_to_date(&array, &DataType::Date32, EvalMode::Legacy).unwrap();\n\n        let date32_array = result\n            .as_any()\n            .downcast_ref::<arrow::array::Date32Array>()\n            .unwrap();\n        assert_eq!(date32_array.len(), 4);\n        date32_array\n            .iter()\n            .for_each(|v| assert_eq!(v.unwrap(), 18262));\n    }\n\n    #[test]\n    fn test_cast_string_array_with_valid_dates() {\n        let array_with_invalid_date: ArrayRef = Arc::new(StringArray::from(vec![\n            Some(\"-262143-12-31\"),\n            Some(\"\\n -262143-12-31 \"),\n            Some(\"-262143-12-31T \\t\\n\"),\n            Some(\"\\n\\t-262143-12-31T\\r\"),\n            Some(\"-262143-12-31T 123123123\"),\n            Some(\"\\r\\n-262143-12-31T \\r123123123\"),\n            Some(\"\\n -262143-12-31T \\n\\t\"),\n        ]));\n\n        for eval_mode in &[EvalMode::Legacy, EvalMode::Try, EvalMode::Ansi] {\n            let result =\n                cast_string_to_date(&array_with_invalid_date, &DataType::Date32, *eval_mode)\n                    .unwrap();\n\n            let date32_array = result\n                .as_any()\n                .downcast_ref::<arrow::array::Date32Array>()\n                .unwrap();\n            assert_eq!(result.len(), 7);\n            date32_array\n                .iter()\n                .for_each(|v| assert_eq!(v.unwrap(), -96464928));\n        }\n    }\n\n    #[test]\n    fn test_cast_string_array_with_invalid_dates() {\n        let array_with_invalid_date: ArrayRef = Arc::new(StringArray::from(vec![\n            Some(\"2020\"),\n            Some(\"2020-01\"),\n            Some(\"2020-01-01\"),\n            //4 invalid dates\n            Some(\"2020-010-01T\"),\n            Some(\"202\"),\n            Some(\" 202 \"),\n            Some(\"\\n 2020-\\r8 \"),\n            Some(\"2020-01-01T\"),\n            // Overflows i32\n            Some(\"-4607172990231812908\"),\n        ]));\n\n        for eval_mode in &[EvalMode::Legacy, EvalMode::Try] {\n            let result =\n                cast_string_to_date(&array_with_invalid_date, &DataType::Date32, *eval_mode)\n                    .unwrap();\n\n            let date32_array = result\n                .as_any()\n                .downcast_ref::<arrow::array::Date32Array>()\n                .unwrap();\n            assert_eq!(\n                date32_array.iter().collect::<Vec<_>>(),\n                vec![\n                    Some(18262),\n                    Some(18262),\n                    Some(18262),\n                    None,\n                    None,\n                    None,\n                    None,\n                    Some(18262),\n                    None\n                ]\n            );\n        }\n\n        let result =\n            cast_string_to_date(&array_with_invalid_date, 
&DataType::Date32, EvalMode::Ansi);\n        match result {\n            Err(e) => assert!(\n                e.to_string().contains(\n                    \"[CAST_INVALID_INPUT] The value '2020-010-01T' of the type \\\"STRING\\\" cannot be cast to \\\"DATE\\\" because it is malformed\")\n            ),\n            _ => panic!(\"Expected error\"),\n        }\n    }\n\n    #[test]\n    fn test_cast_string_as_i8() {\n        // basic\n        assert_eq!(\n            cast_string_to_i8(\"127\", EvalMode::Legacy).unwrap(),\n            Some(127_i8)\n        );\n        assert_eq!(cast_string_to_i8(\"128\", EvalMode::Legacy).unwrap(), None);\n        assert!(cast_string_to_i8(\"128\", EvalMode::Ansi).is_err());\n        // decimals\n        assert_eq!(\n            cast_string_to_i8(\"0.2\", EvalMode::Legacy).unwrap(),\n            Some(0_i8)\n        );\n        assert_eq!(\n            cast_string_to_i8(\".\", EvalMode::Legacy).unwrap(),\n            Some(0_i8)\n        );\n        // TRY should always return null for decimals\n        assert_eq!(cast_string_to_i8(\"0.2\", EvalMode::Try).unwrap(), None);\n        assert_eq!(cast_string_to_i8(\".\", EvalMode::Try).unwrap(), None);\n        // ANSI mode should throw error on decimal\n        assert!(cast_string_to_i8(\"0.2\", EvalMode::Ansi).is_err());\n        assert!(cast_string_to_i8(\".\", EvalMode::Ansi).is_err());\n    }\n}\n"
  },
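A quick cross-check of the DST arithmetic asserted in `timestamp_parser_dst_test` above. This is a standalone sketch using only `std`; the constants are taken from the test comments.

```rust
// Recompute the spring-forward expectation from timestamp_parser_dst_test.
fn main() {
    let jan1_2020_utc: i64 = 1_577_836_800; // 2020-01-01T00:00:00Z
    let days_to_mar8: i64 = 31 + 29 + 7; // Jan (31) + Feb (29, leap year) + 7 days
    let mar8_utc = jan1_2020_utc + days_to_mar8 * 86_400;
    // 02:30 local does not exist on 2020-03-08 in America/New_York; Spark
    // resolves it to 03:30 EDT, i.e. 07:30 UTC.
    let expected_secs = mar8_utc + 7 * 3_600 + 30 * 60;
    assert_eq!(expected_secs, 1_583_652_600);
    // timestamp_parser returns microseconds, hence the factor of 1_000_000 in the test.
    assert_eq!(expected_secs * 1_000_000, 1_583_652_600_000_000);
}
```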
  {
    "path": "native/spark-expr/src/conversion_funcs/temporal.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::{timezone, SparkCastOptions, SparkResult};\nuse arrow::array::{ArrayRef, AsArray, TimestampMicrosecondBuilder};\nuse arrow::datatypes::{DataType, Date32Type};\nuse chrono::{NaiveDate, TimeZone};\nuse std::str::FromStr;\nuse std::sync::Arc;\n\npub(crate) fn is_df_cast_from_date_spark_compatible(to_type: &DataType) -> bool {\n    matches!(to_type, DataType::Int32 | DataType::Utf8)\n}\n\npub(crate) fn is_df_cast_from_timestamp_spark_compatible(to_type: &DataType) -> bool {\n    matches!(\n        to_type,\n        DataType::Int64 | DataType::Date32 | DataType::Utf8 | DataType::Timestamp(_, _)\n    )\n}\n\npub(crate) fn cast_date_to_timestamp(\n    array_ref: &ArrayRef,\n    cast_options: &SparkCastOptions,\n    target_tz: &Option<Arc<str>>,\n) -> SparkResult<ArrayRef> {\n    let tz_str = if cast_options.timezone.is_empty() {\n        \"UTC\"\n    } else {\n        cast_options.timezone.as_str()\n    };\n    // safe to unwrap since we are falling back to UTC above\n    let tz = timezone::Tz::from_str(tz_str)?;\n    let epoch = NaiveDate::from_ymd_opt(1970, 1, 1).unwrap();\n    let date_array = array_ref.as_primitive::<Date32Type>();\n\n    let mut builder = TimestampMicrosecondBuilder::with_capacity(date_array.len());\n\n    for date in date_array.iter() {\n        match date {\n            Some(date) => {\n                // safe to unwrap since chrono's range ( 262,143 yrs) is higher than\n                // number of years possible with days as i32 (~ 6 mil yrs)\n                // convert date in session timezone to timestamp in UTC\n                let naive_date = epoch + chrono::Duration::days(date as i64);\n                let local_midnight = naive_date.and_hms_opt(0, 0, 0).unwrap();\n                let local_midnight_in_microsec = tz\n                    .from_local_datetime(&local_midnight)\n                    // return earliest possible time (edge case with spring / fall DST changes)\n                    .earliest()\n                    .map(|dt| dt.timestamp_micros())\n                    // in case there is an issue with DST and returns None , we fall back to UTC\n                    .unwrap_or((date as i64) * 86_400 * 1_000_000);\n                builder.append_value(local_midnight_in_microsec);\n            }\n            None => {\n                builder.append_null();\n            }\n        }\n    }\n    Ok(Arc::new(\n        builder.finish().with_timezone_opt(target_tz.clone()),\n    ))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use std::sync::Arc;\n    #[test]\n    fn test_cast_date_to_timestamp() {\n        use crate::EvalMode;\n        use arrow::array::Date32Array;\n        use arrow::array::{Array, ArrayRef};\n        
use arrow::datatypes::TimestampMicrosecondType;\n\n        // verifying epoch , DST change dates (US) and a null value (comprehensive tests on spark side)\n        let dates: ArrayRef = Arc::new(Date32Array::from(vec![\n            Some(0),\n            Some(19723),\n            Some(19793),\n            None,\n        ]));\n\n        let non_dst_date = 1704067200000000i64;\n        let dst_date = 1710115200000000i64;\n        let seven_hours_ts = 25200000000i64;\n        let eight_hours_ts = 28800000000i64;\n\n        // validate UTC\n        let target_tz: Option<Arc<str>> = Some(\"UTC\".into());\n        let result = cast_date_to_timestamp(\n            &dates,\n            &SparkCastOptions::new(EvalMode::Legacy, \"UTC\", false),\n            &target_tz,\n        )\n        .unwrap();\n        let ts = result.as_primitive::<TimestampMicrosecondType>();\n        assert_eq!(ts.value(0), 0);\n        assert_eq!(ts.value(1), non_dst_date);\n        assert_eq!(ts.value(2), dst_date);\n        assert!(ts.is_null(3));\n\n        // validate LA timezone (follows Daylight savings)\n        let result = cast_date_to_timestamp(\n            &dates,\n            &SparkCastOptions::new(EvalMode::Legacy, \"America/Los_Angeles\", false),\n            &target_tz,\n        )\n        .unwrap();\n        let ts = result.as_primitive::<TimestampMicrosecondType>();\n        assert_eq!(ts.value(0), eight_hours_ts);\n        assert_eq!(ts.value(1), non_dst_date + eight_hours_ts);\n        // should adjust for DST\n        assert_eq!(ts.value(2), dst_date + seven_hours_ts);\n        assert!(ts.is_null(3));\n\n        // Phoenix timezone (does not follow Daylight savings)\n        let result = cast_date_to_timestamp(\n            &dates,\n            &SparkCastOptions::new(EvalMode::Legacy, \"America/Phoenix\", false),\n            &target_tz,\n        )\n        .unwrap();\n        let ts = result.as_primitive::<TimestampMicrosecondType>();\n        assert_eq!(ts.value(0), seven_hours_ts);\n        assert_eq!(ts.value(1), non_dst_date + seven_hours_ts);\n        assert_eq!(ts.value(2), dst_date + seven_hours_ts);\n        assert!(ts.is_null(3));\n    }\n}\n"
  },
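For reference, a minimal sketch of the local-midnight resolution that `cast_date_to_timestamp` performs, written against chrono/chrono-tz directly rather than the crate's own `timezone::Tz` (the chrono-tz dependency and the concrete day value are illustrative assumptions):

```rust
use chrono::{Duration, NaiveDate, TimeZone};
use chrono_tz::America::Los_Angeles;

fn main() {
    // Date32 value 19793 = 2024-03-11, the day after the US spring-forward.
    let epoch = NaiveDate::from_ymd_opt(1970, 1, 1).unwrap();
    let date = epoch + Duration::days(19793);
    let midnight = date.and_hms_opt(0, 0, 0).unwrap();
    // Resolve local midnight in the session timezone, then take UTC microseconds.
    let micros = Los_Angeles
        .from_local_datetime(&midnight)
        .earliest() // same DST tie-break as the implementation above
        .map(|dt| dt.timestamp_micros())
        .unwrap();
    // Midnight PDT (UTC-7) lands at 07:00 UTC.
    assert_eq!(micros, 1_710_115_200_000_000 + 7 * 3_600 * 1_000_000);
}
```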
  {
    "path": "native/spark-expr/src/conversion_funcs/utils.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::SparkError;\nuse arrow::array::{\n    Array, ArrayRef, ArrowPrimitiveType, AsArray, GenericStringArray, PrimitiveArray,\n};\nuse arrow::compute::unary;\nuse arrow::datatypes::{DataType, Int64Type};\nuse arrow::error::ArrowError;\nuse datafusion::common::cast::as_generic_string_array;\nuse num::integer::div_floor;\nuse std::sync::Arc;\n\npub(crate) const MICROS_PER_SECOND: i64 = 1000000;\n\n/// A fork & modified version of Arrow's `unary_dyn` which is being deprecated\npub fn unary_dyn<F, T>(array: &ArrayRef, op: F) -> Result<ArrayRef, ArrowError>\nwhere\n    T: ArrowPrimitiveType,\n    F: Fn(T::Native) -> T::Native,\n{\n    if let Some(d) = array.as_any_dictionary_opt() {\n        let new_values = unary_dyn::<F, T>(d.values(), op)?;\n        return Ok(Arc::new(d.with_values(Arc::new(new_values))));\n    }\n\n    match array.as_primitive_opt::<T>() {\n        Some(a) if PrimitiveArray::<T>::is_compatible(a.data_type()) => {\n            Ok(Arc::new(unary::<T, F, T>(\n                array.as_any().downcast_ref::<PrimitiveArray<T>>().unwrap(),\n                op,\n            )))\n        }\n        _ => Err(ArrowError::NotYetImplemented(format!(\n            \"Cannot perform unary operation of type {} on array of type {}\",\n            T::DATA_TYPE,\n            array.data_type()\n        ))),\n    }\n}\n\n/// This takes for special casting cases of Spark. E.g., Timestamp to Long.\n/// This function runs as a post process of the DataFusion cast(). By the time it arrives here,\n/// Dictionary arrays are already unpacked by the DataFusion cast() since Spark cannot specify\n/// Dictionary as to_type. 
The from_type is taken before the DataFusion cast() runs in\n/// expressions/cast.rs, so it can be still Dictionary.\npub fn spark_cast_postprocess(\n    array: ArrayRef,\n    from_type: &DataType,\n    to_type: &DataType,\n) -> ArrayRef {\n    match (from_type, to_type) {\n        (DataType::Timestamp(_, _), DataType::Int64) => {\n            // See Spark's `Cast` expression\n            unary_dyn::<_, Int64Type>(&array, |v| div_floor(v, MICROS_PER_SECOND)).unwrap()\n        }\n        (DataType::Dictionary(_, value_type), DataType::Int64)\n            if matches!(value_type.as_ref(), &DataType::Timestamp(_, _)) =>\n        {\n            // See Spark's `Cast` expression\n            unary_dyn::<_, Int64Type>(&array, |v| div_floor(v, MICROS_PER_SECOND)).unwrap()\n        }\n        (DataType::Timestamp(_, _), DataType::Utf8) => remove_trailing_zeroes(array),\n        (DataType::Dictionary(_, value_type), DataType::Utf8)\n            if matches!(value_type.as_ref(), &DataType::Timestamp(_, _)) =>\n        {\n            remove_trailing_zeroes(array)\n        }\n        _ => array,\n    }\n}\n\n/// Remove any trailing zeroes in the string if they occur after in the fractional seconds,\n/// to match Spark behavior\n/// example:\n/// \"1970-01-01 05:29:59.900\" => \"1970-01-01 05:29:59.9\"\n/// \"1970-01-01 05:29:59.990\" => \"1970-01-01 05:29:59.99\"\n/// \"1970-01-01 05:29:59.999\" => \"1970-01-01 05:29:59.999\"\n/// \"1970-01-01 05:30:00\"     => \"1970-01-01 05:30:00\"\n/// \"1970-01-01 05:30:00.001\" => \"1970-01-01 05:30:00.001\"\nfn remove_trailing_zeroes(array: ArrayRef) -> ArrayRef {\n    let string_array = as_generic_string_array::<i32>(&array).unwrap();\n    let result = string_array\n        .iter()\n        .map(|s| s.map(trim_end))\n        .collect::<GenericStringArray<i32>>();\n    Arc::new(result) as ArrayRef\n}\n\nfn trim_end(s: &str) -> &str {\n    if s.rfind('.').is_some() {\n        s.trim_end_matches('0')\n    } else {\n        s\n    }\n}\n\n#[inline]\npub fn cast_overflow(value: &str, from_type: &str, to_type: &str) -> SparkError {\n    SparkError::CastOverFlow {\n        value: value.to_string(),\n        from_type: from_type.to_string(),\n        to_type: to_type.to_string(),\n    }\n}\n"
  },
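The use of `div_floor` in `spark_cast_postprocess` matters for pre-epoch timestamps: Rust's `/` truncates toward zero, while Spark's timestamp-to-long cast floors. A minimal sketch, assuming the `num` crate as above:

```rust
use num::integer::div_floor;

fn main() {
    const MICROS_PER_SECOND: i64 = 1_000_000;
    // Half a second before the epoch:
    let t: i64 = -500_000;
    assert_eq!(t / MICROS_PER_SECOND, 0); // truncating division: not Spark's answer
    assert_eq!(div_floor(t, MICROS_PER_SECOND), -1); // floors like Spark
}
```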
  {
    "path": "native/spark-expr/src/csv_funcs/csv_write_options.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::fmt::{Display, Formatter};\n\n#[derive(Debug, Clone, Hash, PartialEq, Eq)]\npub struct CsvWriteOptions {\n    pub delimiter: String,\n    pub quote: String,\n    pub escape: String,\n    pub null_value: String,\n    pub quote_all: bool,\n    pub ignore_leading_white_space: bool,\n    pub ignore_trailing_white_space: bool,\n}\n\nimpl Display for CsvWriteOptions {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"csv_write_options(delimiter={}, quote={}, escape={}, null_value={}, quote_all={}, ignore_leading_white_space={}, ignore_trailing_white_space={})\",\n            self.delimiter, self.quote, self.escape, self.null_value, self.quote_all, self.ignore_leading_white_space, self.ignore_trailing_white_space\n        )\n    }\n}\n\nimpl CsvWriteOptions {\n    pub fn new(\n        delimiter: String,\n        quote: String,\n        escape: String,\n        null_value: String,\n        quote_all: bool,\n        ignore_leading_white_space: bool,\n        ignore_trailing_white_space: bool,\n    ) -> Self {\n        Self {\n            delimiter,\n            quote,\n            escape,\n            null_value,\n            quote_all,\n            ignore_leading_white_space,\n            ignore_trailing_white_space,\n        }\n    }\n}\n"
  },
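A construction sketch for `CsvWriteOptions`, assuming a crate-internal context where `csv_funcs` re-exports the type; the field values below are illustrative placeholders, not defaults taken from Spark's CSV options:

```rust
use crate::csv_funcs::CsvWriteOptions;

fn example_options() -> CsvWriteOptions {
    // Values are illustrative only.
    CsvWriteOptions::new(
        ",".to_string(),  // delimiter
        "\"".to_string(), // quote
        "\\".to_string(), // escape
        "".to_string(),   // null_value
        false,            // quote_all
        false,            // ignore_leading_white_space
        false,            // ignore_trailing_white_space
    )
}
```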
  {
    "path": "native/spark-expr/src/csv_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod csv_write_options;\nmod to_csv;\n\npub use csv_write_options::CsvWriteOptions;\npub use to_csv::{to_csv_inner, ToCsv};\n"
  },
  {
    "path": "native/spark-expr/src/csv_funcs/to_csv.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::csv_funcs::csv_write_options::CsvWriteOptions;\nuse crate::{spark_cast, EvalMode, SparkCastOptions};\nuse arrow::array::{as_string_array, as_struct_array, Array, ArrayRef, StringArray, StringBuilder};\nuse arrow::array::{RecordBatch, StructArray};\nuse arrow::datatypes::{DataType, Schema};\nuse datafusion::common::Result;\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::any::Any;\nuse std::fmt::{Display, Formatter};\nuse std::hash::Hash;\nuse std::sync::Arc;\n\n/// to_csv spark function\n#[derive(Debug, Eq)]\npub struct ToCsv {\n    expr: Arc<dyn PhysicalExpr>,\n    timezone: String,\n    csv_write_options: CsvWriteOptions,\n}\n\nimpl Hash for ToCsv {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.expr.hash(state);\n        self.timezone.hash(state);\n        self.csv_write_options.hash(state);\n    }\n}\n\nimpl PartialEq for ToCsv {\n    fn eq(&self, other: &Self) -> bool {\n        self.expr.eq(&other.expr)\n            && self.timezone.eq(&other.timezone)\n            && self.csv_write_options.eq(&other.csv_write_options)\n    }\n}\n\nimpl ToCsv {\n    pub fn new(\n        expr: Arc<dyn PhysicalExpr>,\n        timezone: &str,\n        csv_write_options: CsvWriteOptions,\n    ) -> Self {\n        Self {\n            expr,\n            timezone: timezone.to_owned(),\n            csv_write_options,\n        }\n    }\n}\n\nimpl Display for ToCsv {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"to_csv({}, timezone={}, csv_write_options={})\",\n            self.expr, self.timezone, self.csv_write_options\n        )\n    }\n}\n\nimpl PhysicalExpr for ToCsv {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn data_type(&self, _: &Schema) -> Result<DataType> {\n        Ok(DataType::Utf8)\n    }\n\n    fn nullable(&self, input_schema: &Schema) -> Result<bool> {\n        self.expr.nullable(input_schema)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> Result<ColumnarValue> {\n        let input_array = self.expr.evaluate(batch)?.into_array(batch.num_rows())?;\n        let mut cast_options = SparkCastOptions::new(EvalMode::Legacy, &self.timezone, false);\n        cast_options.null_string = self.csv_write_options.null_value.clone();\n        let struct_array = as_struct_array(&input_array);\n\n        let csv_array = to_csv_inner(struct_array, &cast_options, &self.csv_write_options)?;\n\n        Ok(ColumnarValue::Array(csv_array))\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.expr]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: 
Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(Self::new(\n            Arc::clone(&children[0]),\n            &self.timezone,\n            self.csv_write_options.clone(),\n        )))\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n}\n\npub fn to_csv_inner(\n    array: &StructArray,\n    cast_options: &SparkCastOptions,\n    write_options: &CsvWriteOptions,\n) -> Result<ArrayRef> {\n    let string_arrays: Vec<ArrayRef> = as_struct_array(&array)\n        .columns()\n        .iter()\n        .map(|array| {\n            spark_cast(\n                ColumnarValue::Array(Arc::clone(array)),\n                &DataType::Utf8,\n                cast_options,\n            )?\n            .into_array(array.len())\n        })\n        .collect::<Result<Vec<_>>>()?;\n    let string_arrays: Vec<&StringArray> = string_arrays\n        .iter()\n        .map(|array| as_string_array(array))\n        .collect();\n    let is_string: Vec<bool> = array\n        .fields()\n        .iter()\n        .map(|f| matches!(f.data_type(), DataType::Utf8 | DataType::LargeUtf8))\n        .collect();\n\n    let mut builder = StringBuilder::with_capacity(array.len(), array.len() * 16);\n    let mut csv_string = String::with_capacity(array.len() * 16);\n\n    let quote_char = write_options.quote.chars().next().unwrap_or('\"');\n    let escape_char = write_options.escape.chars().next().unwrap_or('\\\\');\n    for row_idx in 0..array.len() {\n        if array.is_null(row_idx) {\n            builder.append_null();\n        } else {\n            csv_string.clear();\n            for (col_idx, column) in string_arrays.iter().enumerate() {\n                if col_idx > 0 {\n                    csv_string.push_str(&write_options.delimiter);\n                }\n                if column.is_null(row_idx) {\n                    if write_options.quote_all {\n                        csv_string.push(quote_char);\n                    }\n                    csv_string.push_str(&write_options.null_value);\n                    if write_options.quote_all {\n                        csv_string.push(quote_char);\n                    }\n                } else {\n                    let mut value = column.value(row_idx);\n                    let is_string_field = is_string[col_idx];\n\n                    if is_string_field {\n                        if write_options.ignore_leading_white_space {\n                            value = value.trim_start();\n                        }\n                        if write_options.ignore_trailing_white_space {\n                            value = value.trim_end();\n                        }\n                    }\n\n                    let needs_quoting = write_options.quote_all\n                        || (is_string_field\n                            && (value.contains(&write_options.delimiter)\n                                || value.contains(quote_char)\n                                || value.contains('\\n')\n                                || value.contains('\\r'))\n                            || value.is_empty());\n\n                    let needs_escaping = needs_quoting\n                        && (value.contains(quote_char) || value.contains(escape_char));\n\n                    if needs_quoting {\n                        csv_string.push(quote_char);\n                    }\n                    if needs_escaping {\n                        escape_value(value, quote_char, escape_char, &mut 
csv_string);\n                    } else {\n                        csv_string.push_str(value);\n                    }\n                    if needs_quoting {\n                        csv_string.push(quote_char);\n                    }\n                }\n            }\n            builder.append_value(&csv_string);\n        }\n    }\n    Ok(Arc::new(builder.finish()))\n}\n\n#[inline]\nfn escape_value(value: &str, quote_char: char, escape_char: char, output: &mut String) {\n    for ch in value.chars() {\n        if ch == quote_char || ch == escape_char {\n            output.push(escape_char);\n        }\n        output.push(ch);\n    }\n}\n"
  },
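The quoting rules in `to_csv_inner` can be summarized with a pure-string sketch of the `needs_quoting` / `needs_escaping` decisions for string fields; this is an illustration of the rules, not the implementation itself:

```rust
// Render a single string field the way to_csv_inner would.
fn render(value: &str, delimiter: &str, quote: char, escape: char, quote_all: bool) -> String {
    let needs_quoting = quote_all
        || value.contains(delimiter)
        || value.contains(quote)
        || value.contains('\n')
        || value.contains('\r')
        || value.is_empty();
    let needs_escaping = needs_quoting && (value.contains(quote) || value.contains(escape));
    let mut out = String::new();
    if needs_quoting {
        out.push(quote);
    }
    for ch in value.chars() {
        if needs_escaping && (ch == quote || ch == escape) {
            out.push(escape);
        }
        out.push(ch);
    }
    if needs_quoting {
        out.push(quote);
    }
    out
}

fn main() {
    assert_eq!(render("plain", ",", '"', '\\', false), "plain");
    assert_eq!(render("a,b", ",", '"', '\\', false), "\"a,b\""); // embedded delimiter
    assert_eq!(render("say \"hi\"", ",", '"', '\\', false), "\"say \\\"hi\\\"\""); // escaped quotes
    assert_eq!(render("", ",", '"', '\\', false), "\"\""); // empty string is quoted
}
```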
  {
    "path": "native/spark-expr/src/datetime_funcs/date_diff.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, Date32Array, Int32Array};\nuse arrow::compute::kernels::arity::binary;\nuse arrow::datatypes::DataType;\nuse datafusion::common::{utils::take_function_args, DataFusionError, Result};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse std::any::Any;\nuse std::sync::Arc;\n\n/// Spark-compatible date_diff function.\n/// Returns the number of days from startDate to endDate (endDate - startDate).\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SparkDateDiff {\n    signature: Signature,\n    aliases: Vec<String>,\n}\n\nimpl SparkDateDiff {\n    pub fn new() -> Self {\n        Self {\n            signature: Signature::exact(\n                vec![DataType::Date32, DataType::Date32],\n                Volatility::Immutable,\n            ),\n            aliases: vec![\"datediff\".to_string()],\n        }\n    }\n}\n\nimpl Default for SparkDateDiff {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl ScalarUDFImpl for SparkDateDiff {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"date_diff\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Int32)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        let [end_date, start_date] = take_function_args(self.name(), args.args)?;\n\n        // Determine the batch size from array arguments (scalars have no inherent size)\n        let num_rows = [&end_date, &start_date]\n            .iter()\n            .find_map(|arg| match arg {\n                ColumnarValue::Array(array) => Some(array.len()),\n                ColumnarValue::Scalar(_) => None,\n            })\n            .unwrap_or(1);\n\n        // Convert scalars to arrays for uniform processing, using the correct batch size\n        let end_arr = end_date.into_array(num_rows)?;\n        let start_arr = start_date.into_array(num_rows)?;\n\n        let end_date_array = end_arr\n            .as_any()\n            .downcast_ref::<Date32Array>()\n            .ok_or_else(|| {\n                DataFusionError::Execution(\"date_diff expects Date32Array for end_date\".to_string())\n            })?;\n\n        let start_date_array = start_arr\n            .as_any()\n            .downcast_ref::<Date32Array>()\n            .ok_or_else(|| {\n                DataFusionError::Execution(\n                    \"date_diff expects Date32Array for start_date\".to_string(),\n                )\n            })?;\n\n        // Date32 stores days since epoch, so difference 
is just subtraction\n        let result: Int32Array =\n            binary(end_date_array, start_date_array, |end, start| end - start)?;\n\n        Ok(ColumnarValue::Array(Arc::new(result)))\n    }\n\n    fn aliases(&self) -> &[String] {\n        &self.aliases\n    }\n}\n"
  },
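Since `Date32` stores days since the epoch, `date_diff` reduces to the `binary` kernel shown above, computing `end - start`; a self-contained sketch of that kernel call:

```rust
use arrow::array::{Array, Date32Array, Int32Array};
use arrow::compute::kernels::arity::binary;

fn main() {
    // datediff('2024-01-01', '1970-01-01') = 19723 days.
    let end = Date32Array::from(vec![Some(19723), None]);
    let start = Date32Array::from(vec![Some(0), Some(0)]);
    let diff: Int32Array = binary(&end, &start, |e, s| e - s).unwrap();
    assert_eq!(diff.value(0), 19723);
    assert!(diff.is_null(1)); // nulls propagate through the kernel
}
```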
  {
    "path": "native/spark-expr/src/datetime_funcs/date_from_unix_date.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, Date32Array, Int32Array};\nuse arrow::datatypes::DataType;\nuse datafusion::common::{utils::take_function_args, DataFusionError, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse std::any::Any;\nuse std::sync::Arc;\n\n/// Spark-compatible date_from_unix_date function.\n/// Converts an integer representing days since Unix epoch (1970-01-01) to a Date32 value.\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SparkDateFromUnixDate {\n    signature: Signature,\n    aliases: Vec<String>,\n}\n\nimpl SparkDateFromUnixDate {\n    pub fn new() -> Self {\n        Self {\n            signature: Signature::exact(vec![DataType::Int32], Volatility::Immutable),\n            aliases: vec![],\n        }\n    }\n}\n\nimpl Default for SparkDateFromUnixDate {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl ScalarUDFImpl for SparkDateFromUnixDate {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"date_from_unix_date\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Date32)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        let [unix_date] = take_function_args(self.name(), args.args)?;\n        match unix_date {\n            ColumnarValue::Array(arr) => {\n                let int_array = arr.as_any().downcast_ref::<Int32Array>().ok_or_else(|| {\n                    DataFusionError::Execution(\n                        \"date_from_unix_date expects Int32Array input\".to_string(),\n                    )\n                })?;\n\n                // Date32 and Int32 both represent days since epoch, so we can directly\n                // reinterpret the values. 
The only operation needed is creating a Date32Array\n                // from the same underlying i32 values.\n                let date_array =\n                    Date32Array::new(int_array.values().clone(), int_array.nulls().cloned());\n\n                Ok(ColumnarValue::Array(Arc::new(date_array)))\n            }\n            ColumnarValue::Scalar(scalar) => match scalar {\n                ScalarValue::Int32(Some(days)) => {\n                    Ok(ColumnarValue::Scalar(ScalarValue::Date32(Some(days))))\n                }\n                ScalarValue::Int32(None) | ScalarValue::Null => {\n                    Ok(ColumnarValue::Scalar(ScalarValue::Date32(None)))\n                }\n                _ => Err(DataFusionError::Execution(\n                    \"date_from_unix_date expects Int32 scalar input\".to_string(),\n                )),\n            },\n        }\n    }\n\n    fn aliases(&self) -> &[String] {\n        &self.aliases\n    }\n}\n"
  },
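Because `Int32` and `Date32` share the same physical layout (an `i32` day count), the array path above is a zero-copy buffer reuse; a standalone sketch of the same reinterpretation:

```rust
use arrow::array::{Array, Date32Array, Int32Array};

fn main() {
    let days = Int32Array::from(vec![Some(0), Some(19723), None]);
    // Reuse the values buffer and null buffer without copying any data.
    let dates = Date32Array::new(days.values().clone(), days.nulls().cloned());
    assert_eq!(dates.value(1), 19723); // still days since 1970-01-01
    assert!(dates.is_null(2));
}
```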
  {
    "path": "native/spark-expr/src/datetime_funcs/date_trunc.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::datatypes::DataType;\nuse datafusion::common::{\n    utils::take_function_args, DataFusionError, Result, ScalarValue, ScalarValue::Utf8,\n};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse std::any::Any;\n\nuse crate::kernels::temporal::{date_trunc_array_fmt_dyn, date_trunc_dyn};\n\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SparkDateTrunc {\n    signature: Signature,\n    aliases: Vec<String>,\n}\n\nimpl SparkDateTrunc {\n    pub fn new() -> Self {\n        Self {\n            signature: Signature::exact(\n                vec![DataType::Date32, DataType::Utf8],\n                Volatility::Immutable,\n            ),\n            aliases: vec![],\n        }\n    }\n}\n\nimpl Default for SparkDateTrunc {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl ScalarUDFImpl for SparkDateTrunc {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"date_trunc\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Date32)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        let [date, format] = take_function_args(self.name(), args.args)?;\n        match (date, format) {\n            (ColumnarValue::Array(date), ColumnarValue::Scalar(Utf8(Some(format)))) => {\n                let result = date_trunc_dyn(&date, format)?;\n                Ok(ColumnarValue::Array(result))\n            }\n            (ColumnarValue::Array(date), ColumnarValue::Array(formats)) => {\n                let result = date_trunc_array_fmt_dyn(&date, &formats)?;\n                Ok(ColumnarValue::Array(result))\n            }\n            (ColumnarValue::Scalar(date_scalar), ColumnarValue::Scalar(Utf8(Some(format)))) => {\n                let date_arr = date_scalar.to_array()?;\n                let result = date_trunc_dyn(&date_arr, format)?;\n                let scalar = ScalarValue::try_from_array(&result, 0)?;\n                Ok(ColumnarValue::Scalar(scalar))\n            }\n            _ => Err(DataFusionError::Execution(\n                \"Invalid input to function DateTrunc. Expected (Date32, Utf8)\".to_string(),\n            )),\n        }\n    }\n\n    fn aliases(&self) -> &[String] {\n        &self.aliases\n    }\n}\n"
  },
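The scalar branch of `SparkDateTrunc` works by round-tripping through a one-element array; a minimal sketch of that round-trip, with the trunc kernel itself elided:

```rust
use datafusion::common::ScalarValue;

fn main() -> datafusion::common::Result<()> {
    let scalar = ScalarValue::Date32(Some(19737)); // 2024-01-15
    let arr = scalar.to_array()?; // one-element Date32 array
    // ... date_trunc_dyn(&arr, format) would run here ...
    let back = ScalarValue::try_from_array(&arr, 0)?;
    assert_eq!(back, ScalarValue::Date32(Some(19737)));
    Ok(())
}
```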
  {
    "path": "native/spark-expr/src/datetime_funcs/extract_date_part.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::utils::array_with_timezone;\nuse arrow::compute::{date_part, DatePart};\nuse arrow::datatypes::{DataType, TimeUnit::Microsecond};\nuse datafusion::common::{internal_datafusion_err, DataFusionError};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse std::{any::Any, fmt::Debug};\n\nmacro_rules! extract_date_part {\n    ($struct_name:ident, $fn_name:expr, $date_part_variant:ident) => {\n        #[derive(Debug, PartialEq, Eq, Hash)]\n        pub struct $struct_name {\n            signature: Signature,\n            aliases: Vec<String>,\n            timezone: String,\n        }\n\n        impl $struct_name {\n            pub fn new(timezone: String) -> Self {\n                Self {\n                    signature: Signature::user_defined(Volatility::Immutable),\n                    aliases: vec![],\n                    timezone,\n                }\n            }\n        }\n\n        impl ScalarUDFImpl for $struct_name {\n            fn as_any(&self) -> &dyn Any {\n                self\n            }\n\n            fn name(&self) -> &str {\n                $fn_name\n            }\n\n            fn signature(&self) -> &Signature {\n                &self.signature\n            }\n\n            fn return_type(&self, arg_types: &[DataType]) -> datafusion::common::Result<DataType> {\n                Ok(match &arg_types[0] {\n                    DataType::Dictionary(_, _) => {\n                        DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Int32))\n                    }\n                    _ => DataType::Int32,\n                })\n            }\n\n            fn invoke_with_args(\n                &self,\n                args: ScalarFunctionArgs,\n            ) -> datafusion::common::Result<ColumnarValue> {\n                let args: [ColumnarValue; 1] = args.args.try_into().map_err(|_| {\n                    internal_datafusion_err!(concat!($fn_name, \" expects exactly one argument\"))\n                })?;\n\n                match args {\n                    [ColumnarValue::Array(array)] => {\n                        let array = array_with_timezone(\n                            array,\n                            self.timezone.clone(),\n                            Some(&DataType::Timestamp(\n                                Microsecond,\n                                Some(self.timezone.clone().into()),\n                            )),\n                        )?;\n                        let result = date_part(&array, DatePart::$date_part_variant)?;\n                        Ok(ColumnarValue::Array(result))\n                    }\n                    _ => 
Err(DataFusionError::Execution(\n                        concat!($fn_name, \"(scalar) should be folded on the Spark JVM side.\").to_string(),\n                    )),\n                }\n            }\n\n            fn aliases(&self) -> &[String] {\n                &self.aliases\n            }\n        }\n    };\n}\n\nextract_date_part!(SparkHour, \"hour\", Hour);\nextract_date_part!(SparkMinute, \"minute\", Minute);\nextract_date_part!(SparkSecond, \"second\", Second);\n"
  },
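The generated UDFs lean on arrow's `date_part` kernel, which reads the timezone that `array_with_timezone` stamps onto the array; a standalone sketch using a fixed offset (the offset value is an arbitrary example):

```rust
use arrow::array::{Array, Int32Array, TimestampMicrosecondArray};
use arrow::compute::{date_part, DatePart};

fn main() {
    // Epoch (micros = 0) viewed at UTC+05:30 is 1970-01-01 05:30:00,
    // so extracting the hour yields 5.
    let ts = TimestampMicrosecondArray::from(vec![Some(0i64)]).with_timezone("+05:30");
    let hours = date_part(&ts, DatePart::Hour).unwrap();
    let hours = hours.as_any().downcast_ref::<Int32Array>().unwrap();
    assert_eq!(hours.value(0), 5);
}
```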
  {
    "path": "native/spark-expr/src/datetime_funcs/hours.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Spark-compatible `hours` V2 partition transform.\n//!\n//! Computes the number of hours since the Unix epoch (1970-01-01 00:00:00 UTC).\n//!\n//! Both `TimestampType` and `TimestampNTZType` are computationally identical. They\n//! extract the absolute hours since the epoch by directly dividing the microsecond\n//! value by the number of microseconds in an hour, ignoring session timezone offsets.\n\nuse arrow::array::cast::as_primitive_array;\nuse arrow::array::types::TimestampMicrosecondType;\nuse arrow::array::{Array, Int32Array};\nuse arrow::datatypes::{DataType, TimeUnit::Microsecond};\nuse datafusion::common::{internal_datafusion_err, DataFusionError};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse num::integer::div_floor;\nuse std::{any::Any, fmt::Debug, sync::Arc};\n\nconst MICROS_PER_HOUR: i64 = 3_600_000_000;\n\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SparkHoursTransform {\n    signature: Signature,\n}\n\nimpl SparkHoursTransform {\n    pub fn new() -> Self {\n        Self {\n            signature: Signature::user_defined(Volatility::Immutable),\n        }\n    }\n}\n\nimpl Default for SparkHoursTransform {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl ScalarUDFImpl for SparkHoursTransform {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"hours_transform\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> datafusion::common::Result<DataType> {\n        Ok(DataType::Int32)\n    }\n\n    fn invoke_with_args(\n        &self,\n        args: ScalarFunctionArgs,\n    ) -> datafusion::common::Result<ColumnarValue> {\n        let args: [ColumnarValue; 1] = args.args.try_into().map_err(|_| {\n            internal_datafusion_err!(\"hours_transform expects exactly one argument\")\n        })?;\n\n        match args {\n            [ColumnarValue::Array(array)] => {\n                let result: Int32Array = match array.data_type() {\n                    DataType::Timestamp(Microsecond, _) => {\n                        let ts_array = as_primitive_array::<TimestampMicrosecondType>(&array);\n                        arrow::compute::kernels::arity::unary(ts_array, |micros| {\n                            div_floor(micros, MICROS_PER_HOUR) as i32\n                        })\n                    }\n                    other => {\n                        return Err(DataFusionError::Execution(format!(\n                            \"hours_transform does not support input type: {:?}\",\n                            other\n                   
     )));\n                    }\n                };\n                Ok(ColumnarValue::Array(Arc::new(result)))\n            }\n            _ => Err(DataFusionError::Execution(\n                \"hours_transform(scalar) should be folded on the Spark JVM side.\".to_string(),\n            )),\n        }\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::TimestampMicrosecondArray;\n    use arrow::datatypes::Field;\n    use datafusion::config::ConfigOptions;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_hours_transform_utc() {\n        let udf = SparkHoursTransform::new();\n        // 2023-10-01 14:50:00 UTC = 1696171800 seconds = 1696171800000000 micros\n        // Expected hours since epoch = 1696171800000000 / 3600000000 = 471158\n        let ts = TimestampMicrosecondArray::from(vec![Some(1_696_171_800_000_000i64)])\n            .with_timezone(\"UTC\");\n        let return_field = Arc::new(Field::new(\"hours_transform\", DataType::Int32, true));\n        let args = ScalarFunctionArgs {\n            args: vec![ColumnarValue::Array(Arc::new(ts))],\n            number_rows: 1,\n            return_field,\n            config_options: Arc::new(ConfigOptions::default()),\n            arg_fields: vec![],\n        };\n        let result = udf.invoke_with_args(args).unwrap();\n        match result {\n            ColumnarValue::Array(arr) => {\n                let int_arr = arr.as_any().downcast_ref::<Int32Array>().unwrap();\n                assert_eq!(int_arr.value(0), 471158);\n            }\n            _ => panic!(\"Expected array\"),\n        }\n    }\n\n    #[test]\n    fn test_hours_transform_ntz() {\n        let udf = SparkHoursTransform::new();\n        // Same timestamp but NTZ (no timezone on array)\n        let ts = TimestampMicrosecondArray::from(vec![Some(1_696_171_800_000_000i64)]);\n        let return_field = Arc::new(Field::new(\"hours_transform\", DataType::Int32, true));\n        let args = ScalarFunctionArgs {\n            args: vec![ColumnarValue::Array(Arc::new(ts))],\n            number_rows: 1,\n            return_field,\n            config_options: Arc::new(ConfigOptions::default()),\n            arg_fields: vec![],\n        };\n        let result = udf.invoke_with_args(args).unwrap();\n        match result {\n            ColumnarValue::Array(arr) => {\n                let int_arr = arr.as_any().downcast_ref::<Int32Array>().unwrap();\n                assert_eq!(int_arr.value(0), 471158);\n            }\n            _ => panic!(\"Expected array\"),\n        }\n    }\n\n    #[test]\n    fn test_hours_transform_negative_epoch() {\n        let udf = SparkHoursTransform::new();\n        // 1969-12-31 23:30:00 UTC = -1800 seconds = -1800000000 micros\n        // Expected: floor_div(-1800000000, 3600000000) = -1\n        let ts =\n            TimestampMicrosecondArray::from(vec![Some(-1_800_000_000i64)]).with_timezone(\"UTC\");\n        let return_field = Arc::new(Field::new(\"hours_transform\", DataType::Int32, true));\n        let args = ScalarFunctionArgs {\n            args: vec![ColumnarValue::Array(Arc::new(ts))],\n            number_rows: 1,\n            return_field,\n            config_options: Arc::new(ConfigOptions::default()),\n            arg_fields: vec![],\n        };\n        let result = udf.invoke_with_args(args).unwrap();\n        match result {\n            ColumnarValue::Array(arr) => {\n                let int_arr = arr.as_any().downcast_ref::<Int32Array>().unwrap();\n                assert_eq!(int_arr.value(0), -1);\n           
 }\n            _ => panic!(\"Expected array\"),\n        }\n    }\n\n    #[test]\n    fn test_hours_transform_null() {\n        let udf = SparkHoursTransform::new();\n        let ts = TimestampMicrosecondArray::from(vec![None as Option<i64>]).with_timezone(\"UTC\");\n        let return_field = Arc::new(Field::new(\"hours_transform\", DataType::Int32, true));\n        let args = ScalarFunctionArgs {\n            args: vec![ColumnarValue::Array(Arc::new(ts))],\n            number_rows: 1,\n            return_field,\n            config_options: Arc::new(ConfigOptions::default()),\n            arg_fields: vec![],\n        };\n        let result = udf.invoke_with_args(args).unwrap();\n        match result {\n            ColumnarValue::Array(arr) => {\n                let int_arr = arr.as_any().downcast_ref::<Int32Array>().unwrap();\n                assert!(int_arr.is_null(0));\n            }\n            _ => panic!(\"Expected array\"),\n        }\n    }\n\n    #[test]\n    fn test_hours_transform_epoch_zero() {\n        let udf = SparkHoursTransform::new();\n        let ts = TimestampMicrosecondArray::from(vec![Some(0i64)]).with_timezone(\"UTC\");\n        let return_field = Arc::new(Field::new(\"hours_transform\", DataType::Int32, true));\n        let args = ScalarFunctionArgs {\n            args: vec![ColumnarValue::Array(Arc::new(ts))],\n            number_rows: 1,\n            return_field,\n            config_options: Arc::new(ConfigOptions::default()),\n            arg_fields: vec![],\n        };\n        let result = udf.invoke_with_args(args).unwrap();\n        match result {\n            ColumnarValue::Array(arr) => {\n                let int_arr = arr.as_any().downcast_ref::<Int32Array>().unwrap();\n                assert_eq!(int_arr.value(0), 0);\n            }\n            _ => panic!(\"Expected array\"),\n        }\n    }\n\n    #[test]\n    fn test_hours_transform_non_utc_timezone() {\n        // Spark's Hours partition transform evaluates absolute hours since epoch. 
Thus, a UTC\n        // timestamp of 1970-01-01 00:00:00 UTC (micros=0) maps to 0 hours, even if the\n        // timestamp array itself contains timezone metadata like Asia/Tokyo.\n        let udf = SparkHoursTransform::new();\n        let ts = TimestampMicrosecondArray::from(vec![Some(0i64)]).with_timezone(\"Asia/Tokyo\");\n        let return_field = Arc::new(Field::new(\"hours_transform\", DataType::Int32, true));\n        let args = ScalarFunctionArgs {\n            args: vec![ColumnarValue::Array(Arc::new(ts))],\n            number_rows: 1,\n            return_field,\n            config_options: Arc::new(ConfigOptions::default()),\n            arg_fields: vec![],\n        };\n        let result = udf.invoke_with_args(args).unwrap();\n        match result {\n            ColumnarValue::Array(arr) => {\n                let int_arr = arr.as_any().downcast_ref::<Int32Array>().unwrap();\n                assert_eq!(int_arr.value(0), 0);\n            }\n            _ => panic!(\"Expected array\"),\n        }\n    }\n\n    #[test]\n    fn test_hours_transform_ntz_ignores_timezone() {\n        // NTZ with micros=0 always returns 0 because NTZ is pure wall-clock time.\n        // There is no timezone offset logic applied to either TimestampType or NTZ.\n        let udf = SparkHoursTransform::new();\n        let ts = TimestampMicrosecondArray::from(vec![Some(0i64)]); // No timezone on array\n        let return_field = Arc::new(Field::new(\"hours_transform\", DataType::Int32, true));\n        let args = ScalarFunctionArgs {\n            args: vec![ColumnarValue::Array(Arc::new(ts))],\n            number_rows: 1,\n            return_field,\n            config_options: Arc::new(ConfigOptions::default()),\n            arg_fields: vec![],\n        };\n        let result = udf.invoke_with_args(args).unwrap();\n        match result {\n            ColumnarValue::Array(arr) => {\n                let int_arr = arr.as_any().downcast_ref::<Int32Array>().unwrap();\n                assert_eq!(int_arr.value(0), 0); // NOT 9, because NTZ ignores timezone\n            }\n            _ => panic!(\"Expected array\"),\n        }\n    }\n}\n"
  },
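A worked check of the floor-division arithmetic in the tests above, using only `std` (`div_euclid` agrees with `num`'s `div_floor` for a positive divisor):

```rust
fn main() {
    const MICROS_PER_HOUR: i64 = 3_600_000_000;
    let t: i64 = 1_696_171_800_000_000; // 2023-10-01 14:50:00 UTC
    assert_eq!(t.div_euclid(MICROS_PER_HOUR), 471_158);
    // Pre-epoch values floor toward negative infinity, not toward zero:
    assert_eq!((-1_800_000_000i64).div_euclid(MICROS_PER_HOUR), -1);
}
```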
  {
    "path": "native/spark-expr/src/datetime_funcs/make_date.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, Date32Array, Int32Array};\nuse arrow::compute::cast;\nuse arrow::datatypes::DataType;\nuse chrono::NaiveDate;\nuse datafusion::common::{utils::take_function_args, DataFusionError, Result};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse std::any::Any;\nuse std::sync::Arc;\n\n/// Spark-compatible make_date function.\n/// Creates a date from year, month, and day columns.\n/// Returns NULL for invalid dates (e.g., Feb 30, month 13, etc.) instead of throwing an error.\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SparkMakeDate {\n    signature: Signature,\n}\n\nimpl SparkMakeDate {\n    pub fn new() -> Self {\n        Self {\n            // Accept any numeric type - we'll cast to Int32 internally\n            signature: Signature::any(3, Volatility::Immutable),\n        }\n    }\n}\n\nimpl Default for SparkMakeDate {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\n/// Cast an array to Int32Array if it's not already Int32.\nfn cast_to_int32(arr: &Arc<dyn Array>) -> Result<Arc<dyn Array>> {\n    if arr.data_type() == &DataType::Int32 {\n        Ok(Arc::clone(arr))\n    } else {\n        cast(arr.as_ref(), &DataType::Int32)\n            .map_err(|e| DataFusionError::Execution(format!(\"Failed to cast to Int32: {e}\")))\n    }\n}\n\n/// Convert year, month, day to days since Unix epoch (1970-01-01).\n/// Returns None if the date is invalid.\nfn make_date(year: i32, month: i32, day: i32) -> Option<i32> {\n    // Validate month and day ranges first\n    if !(1..=12).contains(&month) || !(1..=31).contains(&day) {\n        return None;\n    }\n\n    // Try to create a valid date\n    NaiveDate::from_ymd_opt(year, month as u32, day as u32).map(|date| {\n        date.signed_duration_since(NaiveDate::from_ymd_opt(1970, 1, 1).unwrap())\n            .num_days() as i32\n    })\n}\n\nimpl ScalarUDFImpl for SparkMakeDate {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"make_date\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Date32)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        let [year, month, day] = take_function_args(self.name(), args.args)?;\n\n        // Determine the batch size from array arguments (scalars have no inherent size)\n        let num_rows = [&year, &month, &day]\n            .iter()\n            .find_map(|arg| match arg {\n                ColumnarValue::Array(array) => Some(array.len()),\n                
ColumnarValue::Scalar(_) => None,\n            })\n            .unwrap_or(1);\n\n        // Convert scalars to arrays for uniform processing, using the correct batch size\n        let year_arr = year.into_array(num_rows)?;\n        let month_arr = month.into_array(num_rows)?;\n        let day_arr = day.into_array(num_rows)?;\n\n        // Cast to Int32 if needed (handles Int64 literals from SQL)\n        let year_arr = cast_to_int32(&year_arr)?;\n        let month_arr = cast_to_int32(&month_arr)?;\n        let day_arr = cast_to_int32(&day_arr)?;\n\n        let year_array = year_arr\n            .as_any()\n            .downcast_ref::<Int32Array>()\n            .ok_or_else(|| {\n                DataFusionError::Execution(\"make_date: failed to cast year to Int32\".to_string())\n            })?;\n\n        let month_array = month_arr\n            .as_any()\n            .downcast_ref::<Int32Array>()\n            .ok_or_else(|| {\n                DataFusionError::Execution(\"make_date: failed to cast month to Int32\".to_string())\n            })?;\n\n        let day_array = day_arr\n            .as_any()\n            .downcast_ref::<Int32Array>()\n            .ok_or_else(|| {\n                DataFusionError::Execution(\"make_date: failed to cast day to Int32\".to_string())\n            })?;\n\n        let len = year_array.len();\n        let mut builder = Date32Array::builder(len);\n\n        for i in 0..len {\n            if year_array.is_null(i) || month_array.is_null(i) || day_array.is_null(i) {\n                builder.append_null();\n            } else {\n                let y = year_array.value(i);\n                let m = month_array.value(i);\n                let d = day_array.value(i);\n\n                match make_date(y, m, d) {\n                    Some(days) => builder.append_value(days),\n                    None => builder.append_null(),\n                }\n            }\n        }\n\n        Ok(ColumnarValue::Array(Arc::new(builder.finish())))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_make_date_valid() {\n        // Unix epoch\n        assert_eq!(make_date(1970, 1, 1), Some(0));\n        // Day after epoch\n        assert_eq!(make_date(1970, 1, 2), Some(1));\n        // Day before epoch\n        assert_eq!(make_date(1969, 12, 31), Some(-1));\n        // Leap years - just verify they return Some (valid dates)\n        assert!(make_date(2000, 2, 29).is_some()); // 2000 is a leap year\n        assert!(make_date(2004, 2, 29).is_some()); // 2004 is a leap year\n                                                   // Regular date\n        assert!(make_date(2023, 6, 15).is_some());\n    }\n\n    #[test]\n    fn test_make_date_invalid_month() {\n        assert_eq!(make_date(2023, 0, 15), None);\n        assert_eq!(make_date(2023, 13, 15), None);\n        assert_eq!(make_date(2023, -1, 15), None);\n    }\n\n    #[test]\n    fn test_make_date_invalid_day() {\n        assert_eq!(make_date(2023, 6, 0), None);\n        assert_eq!(make_date(2023, 6, 32), None);\n        assert_eq!(make_date(2023, 6, -1), None);\n    }\n\n    #[test]\n    fn test_make_date_invalid_dates() {\n        // Feb 30 never exists\n        assert_eq!(make_date(2023, 2, 30), None);\n        // Feb 29 on non-leap year\n        assert_eq!(make_date(2023, 2, 29), None);\n        // 1900 is not a leap year (divisible by 100 but not 400)\n        assert_eq!(make_date(1900, 2, 29), None);\n        // 2100 will not be a leap year\n        assert_eq!(make_date(2100, 2, 29), None);\n    
    // April has 30 days\n        assert_eq!(make_date(2023, 4, 31), None);\n    }\n\n    #[test]\n    fn test_make_date_extreme_years() {\n        // Spark supports dates from 0001-01-01 to 9999-12-31 (Proleptic Gregorian calendar)\n\n        // Minimum valid date in Spark: 0001-01-01\n        assert!(make_date(1, 1, 1).is_some(), \"Year 1 should be valid\");\n\n        // Maximum valid date in Spark: 9999-12-31\n        assert!(\n            make_date(9999, 12, 31).is_some(),\n            \"Year 9999 should be valid\"\n        );\n\n        // Year 0 - In Proleptic Gregorian calendar, year 0 = 1 BCE\n        // Spark returns NULL for year 0 in make_date\n        // chrono supports year 0, but we should match Spark's behavior\n        // For now, chrono allows it - this may need adjustment for full Spark compatibility\n        let year_0_result = make_date(0, 1, 1);\n        // chrono allows year 0 (1 BCE in proleptic Gregorian)\n        assert!(year_0_result.is_some(), \"chrono allows year 0\");\n\n        // Negative years - Spark returns NULL for negative years\n        // chrono supports negative years (BCE dates)\n        let negative_year_result = make_date(-1, 1, 1);\n        // chrono allows negative years\n        assert!(\n            negative_year_result.is_some(),\n            \"chrono allows negative years\"\n        );\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/datetime_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod date_diff;\nmod date_from_unix_date;\nmod date_trunc;\nmod extract_date_part;\nmod hours;\nmod make_date;\nmod timestamp_trunc;\nmod unix_timestamp;\n\npub use date_diff::SparkDateDiff;\npub use date_from_unix_date::SparkDateFromUnixDate;\npub use date_trunc::SparkDateTrunc;\npub use extract_date_part::SparkHour;\npub use extract_date_part::SparkMinute;\npub use extract_date_part::SparkSecond;\npub use hours::SparkHoursTransform;\npub use make_date::SparkMakeDate;\npub use timestamp_trunc::TimestampTruncExpr;\npub use unix_timestamp::SparkUnixTimestamp;\n"
  },
  {
    "path": "native/spark-expr/src/datetime_funcs/timestamp_trunc.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::utils::array_with_timezone;\nuse arrow::datatypes::{DataType, Schema, TimeUnit::Microsecond};\nuse arrow::record_batch::RecordBatch;\nuse datafusion::common::{DataFusionError, ScalarValue, ScalarValue::Utf8};\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::hash::Hash;\nuse std::{\n    any::Any,\n    fmt::{Debug, Display, Formatter},\n    sync::Arc,\n};\n\nuse crate::kernels::temporal::{timestamp_trunc_array_fmt_dyn, timestamp_trunc_dyn};\n\n#[derive(Debug, Eq)]\npub struct TimestampTruncExpr {\n    /// An array with DataType::Timestamp(TimeUnit::Microsecond, None)\n    child: Arc<dyn PhysicalExpr>,\n    /// Scalar UTF8 string matching the valid values in Spark SQL: https://spark.apache.org/docs/latest/api/sql/index.html#date_trunc\n    format: Arc<dyn PhysicalExpr>,\n    /// String containing a timezone name. The name must be found in the standard timezone\n    /// database (https://en.wikipedia.org/wiki/List_of_tz_database_time_zones). The string is\n    /// later parsed into a chrono::TimeZone.\n    /// Timestamp arrays in this implementation are kept in arrays of UTC timestamps (in micros)\n    /// along with a single value for the associated TimeZone. The timezone offset is applied\n    /// just before any operations on the timestamp\n    timezone: String,\n}\n\nimpl Hash for TimestampTruncExpr {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n        self.format.hash(state);\n        self.timezone.hash(state);\n    }\n}\nimpl PartialEq for TimestampTruncExpr {\n    fn eq(&self, other: &Self) -> bool {\n        self.child.eq(&other.child)\n            && self.format.eq(&other.format)\n            && self.timezone.eq(&other.timezone)\n    }\n}\n\nimpl TimestampTruncExpr {\n    pub fn new(\n        child: Arc<dyn PhysicalExpr>,\n        format: Arc<dyn PhysicalExpr>,\n        timezone: String,\n    ) -> Self {\n        TimestampTruncExpr {\n            child,\n            format,\n            timezone,\n        }\n    }\n}\n\nimpl Display for TimestampTruncExpr {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"TimestampTrunc [child:{}, format:{}, timezone: {}]\",\n            self.child, self.format, self.timezone\n        )\n    }\n}\n\nimpl PhysicalExpr for TimestampTruncExpr {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> datafusion::common::Result<DataType> {\n        match self.child.data_type(input_schema)? 
{\n            DataType::Dictionary(key_type, _) => Ok(DataType::Dictionary(\n                key_type,\n                Box::new(DataType::Timestamp(Microsecond, None)),\n            )),\n            _ => Ok(DataType::Timestamp(Microsecond, None)),\n        }\n    }\n\n    fn nullable(&self, _: &Schema) -> datafusion::common::Result<bool> {\n        Ok(true)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> datafusion::common::Result<ColumnarValue> {\n        let timestamp = self.child.evaluate(batch)?;\n        let format = self.format.evaluate(batch)?;\n        let tz = self.timezone.clone();\n        match (timestamp, format) {\n            (ColumnarValue::Array(ts), ColumnarValue::Scalar(Utf8(Some(format)))) => {\n                let ts = array_with_timezone(\n                    ts,\n                    tz.clone(),\n                    Some(&DataType::Timestamp(Microsecond, Some(tz.into()))),\n                )?;\n                let result = timestamp_trunc_dyn(&ts, format)?;\n                Ok(ColumnarValue::Array(result))\n            }\n            (ColumnarValue::Array(ts), ColumnarValue::Array(formats)) => {\n                let ts = array_with_timezone(\n                    ts,\n                    tz.clone(),\n                    Some(&DataType::Timestamp(Microsecond, Some(tz.into()))),\n                )?;\n                let result = timestamp_trunc_array_fmt_dyn(&ts, &formats)?;\n                Ok(ColumnarValue::Array(result))\n            }\n            (ColumnarValue::Scalar(ts_scalar), ColumnarValue::Scalar(Utf8(Some(format)))) => {\n                let ts_arr = ts_scalar.to_array()?;\n                let ts = array_with_timezone(\n                    ts_arr,\n                    tz.clone(),\n                    Some(&DataType::Timestamp(Microsecond, Some(tz.into()))),\n                )?;\n                let result = timestamp_trunc_dyn(&ts, format)?;\n                let scalar = ScalarValue::try_from_array(&result, 0)?;\n                Ok(ColumnarValue::Scalar(scalar))\n            }\n            _ => Err(DataFusionError::Execution(\n                \"Invalid input to function TimestampTrunc. \\\n                    Expected (Timestamp, Utf8)\"\n                    .to_string(),\n            )),\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>, DataFusionError> {\n        Ok(Arc::new(TimestampTruncExpr::new(\n            Arc::clone(&children[0]),\n            Arc::clone(&self.format),\n            self.timezone.clone(),\n        )))\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/datetime_funcs/unix_timestamp.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::utils::array_with_timezone;\nuse arrow::array::{Array, AsArray, PrimitiveArray};\nuse arrow::compute::cast;\nuse arrow::datatypes::{DataType, Int64Type, TimeUnit::Microsecond};\nuse datafusion::common::{internal_datafusion_err, DataFusionError};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse num::integer::div_floor;\nuse std::{any::Any, fmt::Debug, sync::Arc};\n\nconst MICROS_PER_SECOND: i64 = 1_000_000;\n\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SparkUnixTimestamp {\n    signature: Signature,\n    aliases: Vec<String>,\n    timezone: String,\n}\n\nimpl SparkUnixTimestamp {\n    pub fn new(timezone: String) -> Self {\n        Self {\n            signature: Signature::user_defined(Volatility::Immutable),\n            aliases: vec![],\n            timezone,\n        }\n    }\n}\n\nimpl ScalarUDFImpl for SparkUnixTimestamp {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"unix_timestamp\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, arg_types: &[DataType]) -> datafusion::common::Result<DataType> {\n        Ok(match &arg_types[0] {\n            DataType::Dictionary(_, _) => {\n                DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Int64))\n            }\n            _ => DataType::Int64,\n        })\n    }\n\n    fn invoke_with_args(\n        &self,\n        args: ScalarFunctionArgs,\n    ) -> datafusion::common::Result<ColumnarValue> {\n        let args: [ColumnarValue; 1] = args\n            .args\n            .try_into()\n            .map_err(|_| internal_datafusion_err!(\"unix_timestamp expects exactly one argument\"))?;\n\n        match args {\n            [ColumnarValue::Array(array)] => match array.data_type() {\n                DataType::Timestamp(_, _) => {\n                    let is_utc = self.timezone == \"UTC\";\n                    let array = if is_utc\n                        && matches!(array.data_type(), DataType::Timestamp(Microsecond, Some(tz)) if tz.as_ref() == \"UTC\")\n                    {\n                        array\n                    } else {\n                        array_with_timezone(\n                            array,\n                            self.timezone.clone(),\n                            Some(&DataType::Timestamp(Microsecond, Some(\"UTC\".into()))),\n                        )?\n                    };\n\n                    let timestamp_array =\n                        array.as_primitive::<arrow::datatypes::TimestampMicrosecondType>();\n\n                    let result: 
PrimitiveArray<Int64Type> = if timestamp_array.null_count() == 0 {\n                        timestamp_array\n                            .values()\n                            .iter()\n                            .map(|&micros| div_floor(micros, MICROS_PER_SECOND))\n                            .collect()\n                    } else {\n                        timestamp_array\n                            .iter()\n                            .map(|v| v.map(|micros| div_floor(micros, MICROS_PER_SECOND)))\n                            .collect()\n                    };\n\n                    Ok(ColumnarValue::Array(Arc::new(result)))\n                }\n                DataType::Date32 => {\n                    let timestamp_array = cast(&array, &DataType::Timestamp(Microsecond, None))?;\n\n                    let is_utc = self.timezone == \"UTC\";\n                    let array = if is_utc {\n                        timestamp_array\n                    } else {\n                        array_with_timezone(\n                            timestamp_array,\n                            self.timezone.clone(),\n                            Some(&DataType::Timestamp(Microsecond, Some(\"UTC\".into()))),\n                        )?\n                    };\n\n                    let timestamp_array =\n                        array.as_primitive::<arrow::datatypes::TimestampMicrosecondType>();\n\n                    let result: PrimitiveArray<Int64Type> = if timestamp_array.null_count() == 0 {\n                        timestamp_array\n                            .values()\n                            .iter()\n                            .map(|&micros| div_floor(micros, MICROS_PER_SECOND))\n                            .collect()\n                    } else {\n                        timestamp_array\n                            .iter()\n                            .map(|v| v.map(|micros| div_floor(micros, MICROS_PER_SECOND)))\n                            .collect()\n                    };\n\n                    Ok(ColumnarValue::Array(Arc::new(result)))\n                }\n                _ => Err(DataFusionError::Execution(format!(\n                    \"unix_timestamp does not support input type: {:?}\",\n                    array.data_type()\n                ))),\n            },\n            _ => Err(DataFusionError::Execution(\n                \"unix_timestamp(scalar) should be folded on the Spark JVM side.\".to_string(),\n            )),\n        }\n    }\n\n    fn aliases(&self) -> &[String] {\n        &self.aliases\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{Array, Date32Array, TimestampMicrosecondArray};\n    use arrow::datatypes::Field;\n    use datafusion::config::ConfigOptions;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_unix_timestamp_from_timestamp() {\n        // Test with known timestamp value\n        // 2020-01-01 00:00:00 UTC = 1577836800 seconds = 1577836800000000 microseconds\n        let input = TimestampMicrosecondArray::from(vec![Some(1577836800000000)]);\n        let udf = SparkUnixTimestamp::new(\"UTC\".to_string());\n\n        let return_field = Arc::new(Field::new(\"unix_timestamp\", DataType::Int64, true));\n        let args = ScalarFunctionArgs {\n            args: vec![ColumnarValue::Array(Arc::new(input))],\n            number_rows: 1,\n            return_field,\n            config_options: Arc::new(ConfigOptions::default()),\n            arg_fields: vec![],\n        };\n\n        let result = udf.invoke_with_args(args).unwrap();\n        if let 
ColumnarValue::Array(result_array) = result {\n            let int64_array = result_array.as_primitive::<arrow::datatypes::Int64Type>();\n            assert_eq!(int64_array.value(0), 1577836800);\n        } else {\n            panic!(\"Expected array result\");\n        }\n    }\n\n    #[test]\n    fn test_unix_timestamp_from_date() {\n        // Test with Date32\n        // Date32(18262) = 2020-01-01 = 1577836800 seconds\n        let input = Date32Array::from(vec![Some(18262)]);\n        let udf = SparkUnixTimestamp::new(\"UTC\".to_string());\n\n        let return_field = Arc::new(Field::new(\"unix_timestamp\", DataType::Int64, true));\n        let args = ScalarFunctionArgs {\n            args: vec![ColumnarValue::Array(Arc::new(input))],\n            number_rows: 1,\n            return_field,\n            config_options: Arc::new(ConfigOptions::default()),\n            arg_fields: vec![],\n        };\n\n        let result = udf.invoke_with_args(args).unwrap();\n        if let ColumnarValue::Array(result_array) = result {\n            let int64_array = result_array.as_primitive::<arrow::datatypes::Int64Type>();\n            assert_eq!(int64_array.value(0), 1577836800);\n        } else {\n            panic!(\"Expected array result\");\n        }\n    }\n\n    #[test]\n    fn test_unix_timestamp_with_nulls() {\n        let input = TimestampMicrosecondArray::from(vec![Some(1577836800000000), None]);\n        let udf = SparkUnixTimestamp::new(\"UTC\".to_string());\n\n        let return_field = Arc::new(Field::new(\"unix_timestamp\", DataType::Int64, true));\n        let args = ScalarFunctionArgs {\n            args: vec![ColumnarValue::Array(Arc::new(input))],\n            number_rows: 2,\n            return_field,\n            config_options: Arc::new(ConfigOptions::default()),\n            arg_fields: vec![],\n        };\n\n        let result = udf.invoke_with_args(args).unwrap();\n        if let ColumnarValue::Array(result_array) = result {\n            let int64_array = result_array.as_primitive::<arrow::datatypes::Int64Type>();\n            assert_eq!(int64_array.value(0), 1577836800);\n            assert!(int64_array.is_null(1));\n        } else {\n            panic!(\"Expected array result\");\n        }\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/error.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n// Re-export all error types from the common crate\npub use datafusion_comet_common::{\n    decimal_overflow_error, SparkError, SparkErrorWithContext, SparkResult,\n};\n"
  },
  {
    "path": "native/spark-expr/src/hash_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub mod murmur3;\npub(super) mod utils;\nmod xxhash64;\n\npub use murmur3::spark_murmur3_hash;\npub use xxhash64::spark_xxhash64;\n"
  },
  {
    "path": "native/spark-expr/src/hash_funcs/murmur3.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::create_hashes_internal;\nuse arrow::array::types::ArrowDictionaryKeyType;\nuse arrow::array::{Array, ArrayRef, ArrowNativeTypeOp, DictionaryArray, Int32Array};\nuse arrow::compute::take;\nuse arrow::datatypes::ArrowNativeType;\nuse datafusion::common::{internal_err, DataFusionError, ScalarValue};\nuse datafusion::physical_plan::ColumnarValue;\nuse std::sync::Arc;\n\n/// Spark compatible murmur3 hash (just `hash` in Spark) in vectorized execution fashion\npub fn spark_murmur3_hash(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    let length = args.len();\n    let seed = &args[length - 1];\n    match seed {\n        ColumnarValue::Scalar(ScalarValue::Int32(Some(seed))) => {\n            // iterate over the arguments to find out the length of the array\n            let num_rows = args[0..args.len() - 1]\n                .iter()\n                .find_map(|arg| match arg {\n                    ColumnarValue::Array(array) => Some(array.len()),\n                    ColumnarValue::Scalar(_) => None,\n                })\n                .unwrap_or(1);\n            let mut hashes: Vec<u32> = vec![0_u32; num_rows];\n            hashes.fill(*seed as u32);\n            let arrays = args[0..args.len() - 1]\n                .iter()\n                .map(|arg| match arg {\n                    ColumnarValue::Array(array) => Arc::clone(array),\n                    ColumnarValue::Scalar(scalar) => {\n                        scalar.clone().to_array_of_size(num_rows).unwrap()\n                    }\n                })\n                .collect::<Vec<ArrayRef>>();\n            create_murmur3_hashes(&arrays, &mut hashes)?;\n            if num_rows == 1 {\n                Ok(ColumnarValue::Scalar(ScalarValue::Int32(Some(\n                    hashes[0] as i32,\n                ))))\n            } else {\n                let hashes: Vec<i32> = hashes.into_iter().map(|x| x as i32).collect();\n                Ok(ColumnarValue::Array(Arc::new(Int32Array::from(hashes))))\n            }\n        }\n        _ => {\n            internal_err!(\n                \"The seed of function murmur3_hash must be an Int32 scalar value, but got: {:?}.\",\n                seed\n            )\n        }\n    }\n}\n\n/// Spark-compatible murmur3 hash function\n#[inline]\npub fn spark_compatible_murmur3_hash<T: AsRef<[u8]>>(data: T, seed: u32) -> u32 {\n    #[inline]\n    fn mix_k1(mut k1: i32) -> i32 {\n        k1 = k1.mul_wrapping(0xcc9e2d51u32 as i32);\n        k1 = k1.rotate_left(15);\n        k1 = k1.mul_wrapping(0x1b873593u32 as i32);\n        k1\n    }\n\n    #[inline]\n    fn mix_h1(mut h1: i32, k1: i32) -> i32 {\n        h1 ^= k1;\n        h1 = h1.rotate_left(13);\n       
 h1 = h1.mul_wrapping(5).add_wrapping(0xe6546b64u32 as i32);\n        h1\n    }\n\n    #[inline]\n    fn fmix(mut h1: i32, len: i32) -> i32 {\n        h1 ^= len;\n        h1 ^= (h1 as u32 >> 16) as i32;\n        h1 = h1.mul_wrapping(0x85ebca6bu32 as i32);\n        h1 ^= (h1 as u32 >> 13) as i32;\n        h1 = h1.mul_wrapping(0xc2b2ae35u32 as i32);\n        h1 ^= (h1 as u32 >> 16) as i32;\n        h1\n    }\n\n    #[inline]\n    unsafe fn hash_bytes_by_int(data: &[u8], seed: u32) -> i32 {\n        // safety: data length must be a multiple of 4 bytes\n        let mut h1 = seed as i32;\n        for i in (0..data.len()).step_by(4) {\n            let ints = data.as_ptr().add(i) as *const i32;\n            let mut half_word = ints.read_unaligned();\n            if cfg!(target_endian = \"big\") {\n                // convert the big-endian read to the little-endian interpretation Spark expects\n                half_word = half_word.swap_bytes();\n            }\n            h1 = mix_h1(h1, mix_k1(half_word));\n        }\n        h1\n    }\n    let data = data.as_ref();\n    let len = data.len();\n    let len_aligned = len - len % 4;\n\n    // safety:\n    // avoid boundary checking in performance-critical code.\n    // all operations are guaranteed to be safe\n    // data is &[u8] so we do not need to check for proper alignment\n    unsafe {\n        let mut h1 = if len_aligned > 0 {\n            hash_bytes_by_int(&data[0..len_aligned], seed)\n        } else {\n            seed as i32\n        };\n\n        for i in len_aligned..len {\n            let half_word = *data.get_unchecked(i) as i8 as i32;\n            h1 = mix_h1(h1, mix_k1(half_word));\n        }\n        fmix(h1, len as i32) as u32\n    }\n}\n\n/// Hash the values in a dictionary array\nfn create_hashes_dictionary<K: ArrowDictionaryKeyType>(\n    array: &ArrayRef,\n    hashes_buffer: &mut [u32],\n    first_col: bool,\n) -> datafusion::common::Result<()> {\n    let dict_array = array.as_any().downcast_ref::<DictionaryArray<K>>().unwrap();\n    if !first_col {\n        // unpack the dictionary array as each row may have a different hash input\n        let unpacked = take(dict_array.values().as_ref(), dict_array.keys(), None)?;\n        create_murmur3_hashes(&[unpacked], hashes_buffer)?;\n    } else {\n        // For the first column, hash each dictionary value once, and then use\n        // that computed hash for each key value to avoid a potentially\n        // expensive redundant hashing for large dictionary elements (e.g. 
strings)\n        let dict_values = Arc::clone(dict_array.values());\n        // same initial seed as Spark\n        let mut dict_hashes = vec![42; dict_values.len()];\n        create_murmur3_hashes(&[dict_values], &mut dict_hashes)?;\n        for (hash, key) in hashes_buffer.iter_mut().zip(dict_array.keys().iter()) {\n            if let Some(key) = key {\n                let idx = key.to_usize().ok_or_else(|| {\n                    DataFusionError::Internal(format!(\n                        \"Can not convert key value {:?} to usize in dictionary of type {:?}\",\n                        key,\n                        dict_array.data_type()\n                    ))\n                })?;\n                *hash = dict_hashes[idx]\n            } // no update for Null, consistent with other hashes\n        }\n    }\n    Ok(())\n}\n\n/// Creates hash values for every row, based on the values in the\n/// columns.\n///\n/// The number of rows to hash is determined by `hashes_buffer.len()`.\n/// `hashes_buffer` should be pre-sized appropriately\npub fn create_murmur3_hashes<'a>(\n    arrays: &[ArrayRef],\n    hashes_buffer: &'a mut [u32],\n) -> datafusion::common::Result<&'a mut [u32]> {\n    create_hashes_internal!(\n        arrays,\n        hashes_buffer,\n        spark_compatible_murmur3_hash,\n        create_hashes_dictionary,\n        create_murmur3_hashes\n    );\n    Ok(hashes_buffer)\n}\n\n#[cfg(test)]\nmod tests {\n    use arrow::array::{Float32Array, Float64Array};\n    use std::sync::Arc;\n\n    use crate::murmur3::create_murmur3_hashes;\n    use crate::test_hashes_with_nulls;\n    use datafusion::arrow::array::{ArrayRef, Int32Array, Int64Array, Int8Array, StringArray};\n\n    fn test_murmur3_hash<I: Clone, T: arrow::array::Array + From<Vec<Option<I>>> + 'static>(\n        values: Vec<Option<I>>,\n        expected: Vec<u32>,\n    ) {\n        test_hashes_with_nulls!(create_murmur3_hashes, T, values, expected, u32);\n    }\n\n    #[test]\n    fn test_i8() {\n        test_murmur3_hash::<i8, Int8Array>(\n            vec![Some(1), Some(0), Some(-1), Some(i8::MAX), Some(i8::MIN)],\n            vec![0xdea578e3, 0x379fae8f, 0xa0590e3d, 0x43b4d8ed, 0x422a1365],\n        );\n    }\n\n    #[test]\n    fn test_i32() {\n        test_murmur3_hash::<i32, Int32Array>(\n            vec![Some(1), Some(0), Some(-1), Some(i32::MAX), Some(i32::MIN)],\n            vec![0xdea578e3, 0x379fae8f, 0xa0590e3d, 0x07fb67e7, 0x2b1f0fc6],\n        );\n    }\n\n    #[test]\n    fn test_i64() {\n        test_murmur3_hash::<i64, Int64Array>(\n            vec![Some(1), Some(0), Some(-1), Some(i64::MAX), Some(i64::MIN)],\n            vec![0x99f0149d, 0x9c67b85d, 0xc8008529, 0xa05b5d7b, 0xcd1e64fb],\n        );\n    }\n\n    #[test]\n    fn test_f32() {\n        test_murmur3_hash::<f32, Float32Array>(\n            vec![\n                Some(1.0),\n                Some(0.0),\n                Some(-0.0),\n                Some(-1.0),\n                Some(99999999999.99999999999),\n                Some(-99999999999.99999999999),\n            ],\n            vec![\n                0xe434cc39, 0x379fae8f, 0x379fae8f, 0xdc0da8eb, 0xcbdc340f, 0xc0361c86,\n            ],\n        );\n    }\n\n    #[test]\n    fn test_f64() {\n        test_murmur3_hash::<f64, Float64Array>(\n            vec![\n                Some(1.0),\n                Some(0.0),\n                Some(-0.0),\n                Some(-1.0),\n                Some(99999999999.99999999999),\n                Some(-99999999999.99999999999),\n            ],\n            
vec![\n                0xe4876492, 0x9c67b85d, 0x9c67b85d, 0x13d81357, 0xb87e1595, 0xa0eef9f9,\n            ],\n        );\n    }\n\n    #[test]\n    fn test_str() {\n        let input = [\n            \"hello\", \"bar\", \"\", \"😁\", \"天地\", \"a\", \"ab\", \"abc\", \"abcd\", \"abcde\",\n        ]\n        .iter()\n        .map(|s| Some(s.to_string()))\n        .collect::<Vec<Option<String>>>();\n        let expected: Vec<u32> = vec![\n            3286402344, 2486176763, 142593372, 885025535, 2395000894, 1485273170, 4197913979,\n            1322437556, 3898664396, 814637928,\n        ];\n\n        test_murmur3_hash::<String, StringArray>(input.clone(), expected);\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/hash_funcs/utils.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! This includes utilities for hashing and murmur3 hashing.\n\n#[macro_export]\nmacro_rules! hash_array {\n    ($array_type: ident, $column: ident, $hashes: ident, $hash_method: ident) => {\n        let array = $column\n            .as_any()\n            .downcast_ref::<$array_type>()\n            .unwrap_or_else(|| {\n                panic!(\n                    \"Failed to downcast column to {}. Actual data type: {:?}.\",\n                    stringify!($array_type),\n                    $column.data_type()\n                )\n            });\n        if array.null_count() == 0 {\n            // Fast path: no nulls, use direct indexing\n            for i in 0..$hashes.len() {\n                $hashes[i] = $hash_method(&array.value(i), $hashes[i]);\n            }\n        } else {\n            // Slow path: check nulls\n            for i in 0..$hashes.len() {\n                if !array.is_null(i) {\n                    $hashes[i] = $hash_method(&array.value(i), $hashes[i]);\n                }\n            }\n        }\n    };\n}\n\n#[macro_export]\nmacro_rules! hash_array_boolean {\n    ($array_type: ident, $column: ident, $hash_input_type: ident, $hashes: ident, $hash_method: ident) => {\n        let array = $column\n            .as_any()\n            .downcast_ref::<$array_type>()\n            .unwrap_or_else(|| {\n                panic!(\n                    \"Failed to downcast column to {}. Actual data type: {:?}.\",\n                    stringify!($array_type),\n                    $column.data_type()\n                )\n            });\n        if array.null_count() == 0 {\n            // Fast path: no nulls, use direct indexing\n            for i in 0..$hashes.len() {\n                $hashes[i] = $hash_method(\n                    $hash_input_type::from(array.value(i)).to_le_bytes(),\n                    $hashes[i],\n                );\n            }\n        } else {\n            // Slow path: check nulls\n            for i in 0..$hashes.len() {\n                if !array.is_null(i) {\n                    $hashes[i] = $hash_method(\n                        $hash_input_type::from(array.value(i)).to_le_bytes(),\n                        $hashes[i],\n                    );\n                }\n            }\n        }\n    };\n}\n\n#[macro_export]\nmacro_rules! hash_array_primitive {\n    ($array_type: ident, $column: ident, $ty: ident, $hashes: ident, $hash_method: ident) => {\n        let array = $column\n            .as_any()\n            .downcast_ref::<$array_type>()\n            .unwrap_or_else(|| {\n                panic!(\n                    \"Failed to downcast column to {}. 
Actual data type: {:?}.\",\n                    stringify!($array_type),\n                    $column.data_type()\n                )\n            });\n        let values = array.values();\n\n        if array.null_count() == 0 {\n            // Fast path: no nulls, use direct indexing\n            for i in 0..values.len() {\n                $hashes[i] = $hash_method((values[i] as $ty).to_le_bytes(), $hashes[i]);\n            }\n        } else {\n            // Slow path: check nulls\n            for i in 0..values.len() {\n                if !array.is_null(i) {\n                    $hashes[i] = $hash_method((values[i] as $ty).to_le_bytes(), $hashes[i]);\n                }\n            }\n        }\n    };\n}\n\n#[macro_export]\nmacro_rules! hash_array_primitive_float {\n    ($array_type: ident, $column: ident, $ty: ident, $ty2: ident, $hashes: ident, $hash_method: ident) => {\n        let array = $column\n            .as_any()\n            .downcast_ref::<$array_type>()\n            .unwrap_or_else(|| {\n                panic!(\n                    \"Failed to downcast column to {}. Actual data type: {:?}.\",\n                    stringify!($array_type),\n                    $column.data_type()\n                )\n            });\n        let values = array.values();\n\n        if array.null_count() == 0 {\n            // Fast path: no nulls, use direct indexing\n            for i in 0..values.len() {\n                let value = values[i];\n                // Spark uses 0 as hash for -0.0, see `Murmur3Hash` expression.\n                if value == 0.0 && value.is_sign_negative() {\n                    $hashes[i] = $hash_method((0 as $ty2).to_le_bytes(), $hashes[i]);\n                } else {\n                    $hashes[i] = $hash_method((value as $ty).to_le_bytes(), $hashes[i]);\n                }\n            }\n        } else {\n            // Slow path: check nulls\n            for i in 0..values.len() {\n                if !array.is_null(i) {\n                    let value = values[i];\n                    // Spark uses 0 as hash for -0.0, see `Murmur3Hash` expression.\n                    if value == 0.0 && value.is_sign_negative() {\n                        $hashes[i] = $hash_method((0 as $ty2).to_le_bytes(), $hashes[i]);\n                    } else {\n                        $hashes[i] = $hash_method((value as $ty).to_le_bytes(), $hashes[i]);\n                    }\n                }\n            }\n        }\n    };\n}\n\n#[macro_export]\nmacro_rules! hash_array_small_decimal {\n    ($array_type:ident, $column: ident, $hashes: ident, $hash_method: ident) => {\n        let array = $column\n            .as_any()\n            .downcast_ref::<$array_type>()\n            .unwrap_or_else(|| {\n                panic!(\n                    \"Failed to downcast column to {}. 
Actual data type: {:?}.\",\n                    stringify!($array_type),\n                    $column.data_type()\n                )\n            });\n\n        if array.null_count() == 0 {\n            // Fast path: no nulls, use direct indexing\n            for i in 0..$hashes.len() {\n                $hashes[i] = $hash_method(\n                    i64::try_from(array.value(i))\n                        .map(|v| v.to_le_bytes())\n                        .map_err(|e| DataFusionError::Execution(e.to_string()))?,\n                    $hashes[i],\n                );\n            }\n        } else {\n            // Slow path: check nulls\n            for i in 0..$hashes.len() {\n                if !array.is_null(i) {\n                    $hashes[i] = $hash_method(\n                        i64::try_from(array.value(i))\n                            .map(|v| v.to_le_bytes())\n                            .map_err(|e| DataFusionError::Execution(e.to_string()))?,\n                        $hashes[i],\n                    );\n                }\n            }\n        }\n    };\n}\n\n#[macro_export]\nmacro_rules! hash_array_decimal {\n    ($array_type:ident, $column: ident, $hashes: ident, $hash_method: ident) => {\n        let array = $column\n            .as_any()\n            .downcast_ref::<$array_type>()\n            .unwrap_or_else(|| {\n                panic!(\n                    \"Failed to downcast column to {}. Actual data type: {:?}.\",\n                    stringify!($array_type),\n                    $column.data_type()\n                )\n            });\n\n        if array.null_count() == 0 {\n            // Fast path: no nulls, use direct indexing\n            for i in 0..$hashes.len() {\n                $hashes[i] = $hash_method(array.value(i).to_le_bytes(), $hashes[i]);\n            }\n        } else {\n            // Slow path: check nulls\n            for i in 0..$hashes.len() {\n                if !array.is_null(i) {\n                    $hashes[i] = $hash_method(array.value(i).to_le_bytes(), $hashes[i]);\n                }\n            }\n        }\n    };\n}\n\n/// Hash a list array with primitive elements by directly accessing the underlying buffer.\n/// This avoids the overhead of slicing and recursive calls for common cases.\n/// Supports both variable-length lists (with offsets) and fixed-size lists.\n#[macro_export]\nmacro_rules! 
hash_list_primitive {\n    // Variable-length list variant (List/LargeList)\n    (offsets: $offsets:expr, $list_array:ident, $elem_array:ident, $hashes:ident, $hash_method:ident, $value_transform:expr) => {\n        if $list_array.null_count() == 0 && $elem_array.null_count() == 0 {\n            for (row_idx, hash) in $hashes.iter_mut().enumerate() {\n                let start = $offsets[row_idx] as usize;\n                let end = $offsets[row_idx + 1] as usize;\n                for elem_idx in start..end {\n                    let value = $elem_array.value(elem_idx);\n                    *hash = $hash_method($value_transform(value), *hash);\n                }\n            }\n        } else {\n            for (row_idx, hash) in $hashes.iter_mut().enumerate() {\n                if !$list_array.is_null(row_idx) {\n                    let start = $offsets[row_idx] as usize;\n                    let end = $offsets[row_idx + 1] as usize;\n                    for elem_idx in start..end {\n                        if !$elem_array.is_null(elem_idx) {\n                            let value = $elem_array.value(elem_idx);\n                            *hash = $hash_method($value_transform(value), *hash);\n                        }\n                    }\n                }\n            }\n        }\n    };\n    // Fixed-size list variant\n    (fixed_size: $list_size:expr, $list_array:ident, $elem_array:ident, $hashes:ident, $hash_method:ident, $value_transform:expr) => {\n        if $list_array.null_count() == 0 && $elem_array.null_count() == 0 {\n            for (row_idx, hash) in $hashes.iter_mut().enumerate() {\n                let start = row_idx * $list_size;\n                for elem_idx in 0..$list_size {\n                    let value = $elem_array.value(start + elem_idx);\n                    *hash = $hash_method($value_transform(value), *hash);\n                }\n            }\n        } else {\n            for (row_idx, hash) in $hashes.iter_mut().enumerate() {\n                if !$list_array.is_null(row_idx) {\n                    let start = row_idx * $list_size;\n                    for elem_idx in 0..$list_size {\n                        if !$elem_array.is_null(start + elem_idx) {\n                            let value = $elem_array.value(start + elem_idx);\n                            *hash = $hash_method($value_transform(value), *hash);\n                        }\n                    }\n                }\n            }\n        }\n    };\n}\n\n/// Dispatches hash operations for List/LargeList/FixedSizeList arrays with primitive element\n/// types, handling the type-to-array mapping for all supported primitives to eliminate\n/// duplication.\n///\n/// For each row, every element in the list is hashed. Spark hashes arrays by recursively\n/// hashing each element, where each element's hash is computed using the previous element's\n/// hash as the seed. This creates a chain:\n/// hash(elem_n, hash(elem_n-1, ... hash(elem_0, seed)...))\n#[macro_export]\nmacro_rules! 
hash_list_with_primitive_elements {\n    // Variant for List/LargeList with offsets\n    (offsets: $list_array_type:ident, $list_array:ident, $values:ident, $offsets:ident, $field:expr, $hashes_buffer:ident, $hash_method:ident, $recursive_hash_method:ident, $fallback_offset_type:ty, $col:ident) => {\n        match $field.data_type() {\n            DataType::Int8 => {\n                let elem_array = $values.as_any().downcast_ref::<Int8Array>().unwrap();\n                $crate::hash_list_primitive!(offsets: $offsets, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i8| (v as i32).to_le_bytes());\n            }\n            DataType::Int16 => {\n                let elem_array = $values.as_any().downcast_ref::<Int16Array>().unwrap();\n                $crate::hash_list_primitive!(offsets: $offsets, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i16| (v as i32).to_le_bytes());\n            }\n            DataType::Int32 => {\n                let elem_array = $values.as_any().downcast_ref::<Int32Array>().unwrap();\n                $crate::hash_list_primitive!(offsets: $offsets, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i32| v.to_le_bytes());\n            }\n            DataType::Int64 => {\n                let elem_array = $values.as_any().downcast_ref::<Int64Array>().unwrap();\n                $crate::hash_list_primitive!(offsets: $offsets, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i64| v.to_le_bytes());\n            }\n            DataType::Float32 => {\n                let elem_array = $values.as_any().downcast_ref::<Float32Array>().unwrap();\n                $crate::hash_list_primitive!(offsets: $offsets, $list_array, elem_array, $hashes_buffer, $hash_method,\n                    |v: f32| if v == 0.0 && v.is_sign_negative() { (0_i32).to_le_bytes() } else { v.to_le_bytes() });\n            }\n            DataType::Float64 => {\n                let elem_array = $values.as_any().downcast_ref::<Float64Array>().unwrap();\n                $crate::hash_list_primitive!(offsets: $offsets, $list_array, elem_array, $hashes_buffer, $hash_method,\n                    |v: f64| if v == 0.0 && v.is_sign_negative() { (0_i64).to_le_bytes() } else { v.to_le_bytes() });\n            }\n            DataType::Boolean => {\n                let elem_array = $values.as_any().downcast_ref::<BooleanArray>().unwrap();\n                $crate::hash_list_primitive!(offsets: $offsets, $list_array, elem_array, $hashes_buffer, $hash_method, |v: bool| (i32::from(v)).to_le_bytes());\n            }\n            DataType::Utf8 => {\n                let elem_array = $values.as_any().downcast_ref::<StringArray>().unwrap();\n                if $list_array.null_count() == 0 && elem_array.null_count() == 0 {\n                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                        let start = $offsets[row_idx] as usize;\n                        let end = $offsets[row_idx + 1] as usize;\n                        for elem_idx in start..end {\n                            *hash = $hash_method(elem_array.value(elem_idx), *hash);\n                        }\n                    }\n                } else {\n                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                        if !$list_array.is_null(row_idx) {\n                            let start = $offsets[row_idx] as usize;\n                            let end = $offsets[row_idx + 1] as usize;\n                            for elem_idx in start..end {\n        
                        if !elem_array.is_null(elem_idx) {\n                                    *hash = $hash_method(elem_array.value(elem_idx), *hash);\n                                }\n                            }\n                        }\n                    }\n                }\n            }\n            DataType::Binary => {\n                let elem_array = $values.as_any().downcast_ref::<BinaryArray>().unwrap();\n                if $list_array.null_count() == 0 && elem_array.null_count() == 0 {\n                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                        let start = $offsets[row_idx] as usize;\n                        let end = $offsets[row_idx + 1] as usize;\n                        for elem_idx in start..end {\n                            *hash = $hash_method(elem_array.value(elem_idx), *hash);\n                        }\n                    }\n                } else {\n                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                        if !$list_array.is_null(row_idx) {\n                            let start = $offsets[row_idx] as usize;\n                            let end = $offsets[row_idx + 1] as usize;\n                            for elem_idx in start..end {\n                                if !elem_array.is_null(elem_idx) {\n                                    *hash = $hash_method(elem_array.value(elem_idx), *hash);\n                                }\n                            }\n                        }\n                    }\n                }\n            }\n            DataType::Date32 => {\n                let elem_array = $values.as_any().downcast_ref::<Date32Array>().unwrap();\n                $crate::hash_list_primitive!(offsets: $offsets, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i32| v.to_le_bytes());\n            }\n            DataType::Timestamp(TimeUnit::Microsecond, _) => {\n                let elem_array = $values.as_any().downcast_ref::<TimestampMicrosecondArray>().unwrap();\n                $crate::hash_list_primitive!(offsets: $offsets, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i64| v.to_le_bytes());\n            }\n            _ => {\n                // Fall back to recursive approach for complex element types\n                $crate::hash_list_array!($list_array_type, $fallback_offset_type, $col, $hashes_buffer, $recursive_hash_method);\n            }\n        }\n    };\n    // Variant for FixedSizeList with fixed size\n    (fixed_size: $list_array:ident, $values:ident, $list_size:ident, $field:expr, $hashes_buffer:ident, $hash_method:ident, $recursive_hash_method:ident) => {\n        match $field.data_type() {\n            DataType::Int8 => {\n                let elem_array = $values.as_any().downcast_ref::<Int8Array>().unwrap();\n                $crate::hash_list_primitive!(fixed_size: $list_size, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i8| (v as i32).to_le_bytes());\n            }\n            DataType::Int16 => {\n                let elem_array = $values.as_any().downcast_ref::<Int16Array>().unwrap();\n                $crate::hash_list_primitive!(fixed_size: $list_size, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i16| (v as i32).to_le_bytes());\n            }\n            DataType::Int32 => {\n                let elem_array = $values.as_any().downcast_ref::<Int32Array>().unwrap();\n                $crate::hash_list_primitive!(fixed_size: $list_size, $list_array, elem_array, 
$hashes_buffer, $hash_method, |v: i32| v.to_le_bytes());\n            }\n            DataType::Int64 => {\n                let elem_array = $values.as_any().downcast_ref::<Int64Array>().unwrap();\n                $crate::hash_list_primitive!(fixed_size: $list_size, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i64| v.to_le_bytes());\n            }\n            DataType::Float32 => {\n                let elem_array = $values.as_any().downcast_ref::<Float32Array>().unwrap();\n                $crate::hash_list_primitive!(fixed_size: $list_size, $list_array, elem_array, $hashes_buffer, $hash_method,\n                    |v: f32| if v == 0.0 && v.is_sign_negative() { (0_i32).to_le_bytes() } else { v.to_le_bytes() });\n            }\n            DataType::Float64 => {\n                let elem_array = $values.as_any().downcast_ref::<Float64Array>().unwrap();\n                $crate::hash_list_primitive!(fixed_size: $list_size, $list_array, elem_array, $hashes_buffer, $hash_method,\n                    |v: f64| if v == 0.0 && v.is_sign_negative() { (0_i64).to_le_bytes() } else { v.to_le_bytes() });\n            }\n            DataType::Boolean => {\n                let elem_array = $values.as_any().downcast_ref::<BooleanArray>().unwrap();\n                $crate::hash_list_primitive!(fixed_size: $list_size, $list_array, elem_array, $hashes_buffer, $hash_method, |v: bool| (i32::from(v)).to_le_bytes());\n            }\n            DataType::Utf8 => {\n                let elem_array = $values.as_any().downcast_ref::<StringArray>().unwrap();\n                if $list_array.null_count() == 0 && elem_array.null_count() == 0 {\n                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                        let start = row_idx * $list_size;\n                        for elem_idx in 0..$list_size {\n                            *hash = $hash_method(elem_array.value(start + elem_idx), *hash);\n                        }\n                    }\n                } else {\n                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                        if !$list_array.is_null(row_idx) {\n                            let start = row_idx * $list_size;\n                            for elem_idx in 0..$list_size {\n                                if !elem_array.is_null(start + elem_idx) {\n                                    *hash = $hash_method(elem_array.value(start + elem_idx), *hash);\n                                }\n                            }\n                        }\n                    }\n                }\n            }\n            DataType::Binary => {\n                let elem_array = $values.as_any().downcast_ref::<BinaryArray>().unwrap();\n                if $list_array.null_count() == 0 && elem_array.null_count() == 0 {\n                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                        let start = row_idx * $list_size;\n                        for elem_idx in 0..$list_size {\n                            *hash = $hash_method(elem_array.value(start + elem_idx), *hash);\n                        }\n                    }\n                } else {\n                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                        if !$list_array.is_null(row_idx) {\n                            let start = row_idx * $list_size;\n                            for elem_idx in 0..$list_size {\n                                if !elem_array.is_null(start + elem_idx) {\n          
                          *hash = $hash_method(elem_array.value(start + elem_idx), *hash);\n                                }\n                            }\n                        }\n                    }\n                }\n            }\n            DataType::Date32 => {\n                let elem_array = $values.as_any().downcast_ref::<Date32Array>().unwrap();\n                $crate::hash_list_primitive!(fixed_size: $list_size, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i32| v.to_le_bytes());\n            }\n            DataType::Timestamp(TimeUnit::Microsecond, _) => {\n                let elem_array = $values.as_any().downcast_ref::<TimestampMicrosecondArray>().unwrap();\n                $crate::hash_list_primitive!(fixed_size: $list_size, $list_array, elem_array, $hashes_buffer, $hash_method, |v: i64| v.to_le_bytes());\n            }\n            _ => {\n                // Fall back to recursive approach for complex element types\n                if $list_array.null_count() == 0 {\n                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                        let start = row_idx * $list_size;\n                        for elem_idx in 0..$list_size {\n                            let elem_array = $values.slice(start + elem_idx, 1);\n                            let mut single_hash = [*hash];\n                            $recursive_hash_method(&[elem_array], &mut single_hash)?;\n                            *hash = single_hash[0];\n                        }\n                    }\n                } else {\n                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                        if !$list_array.is_null(row_idx) {\n                            let start = row_idx * $list_size;\n                            for elem_idx in 0..$list_size {\n                                let elem_array = $values.slice(start + elem_idx, 1);\n                                let mut single_hash = [*hash];\n                                $recursive_hash_method(&[elem_array], &mut single_hash)?;\n                                *hash = single_hash[0];\n                            }\n                        }\n                    }\n                }\n            }\n        }\n    };\n}\n\n#[macro_export]\nmacro_rules! hash_list_array {\n    ($array_type:ident, $offset_type:ty, $column: ident, $hashes: ident, $recursive_hash_method: ident) => {\n        let list_array = $column\n            .as_any()\n            .downcast_ref::<$array_type>()\n            .unwrap_or_else(|| {\n                panic!(\n                    \"Failed to downcast column to {}. 
Actual data type: {:?}.\",\n                    stringify!($array_type),\n                    $column.data_type()\n                )\n            });\n\n        let values = list_array.values();\n        let offsets = list_array.offsets();\n\n        if list_array.null_count() == 0 {\n            // Fast path: no nulls, skip null checks\n            for (row_idx, hash) in $hashes.iter_mut().enumerate() {\n                let start = offsets[row_idx] as usize;\n                let end = offsets[row_idx + 1] as usize;\n                let len = end - start;\n                // Hash each element in sequence, chaining the hash values\n                for elem_idx in 0..len {\n                    let elem_array = values.slice(start + elem_idx, 1);\n                    let mut single_hash = [*hash];\n                    $recursive_hash_method(&[elem_array], &mut single_hash)?;\n                    *hash = single_hash[0];\n                }\n            }\n        } else {\n            // Slow path: array has nulls, check each row\n            for (row_idx, hash) in $hashes.iter_mut().enumerate() {\n                if !list_array.is_null(row_idx) {\n                    let start = offsets[row_idx] as usize;\n                    let end = offsets[row_idx + 1] as usize;\n                    let len = end - start;\n                    // Hash each element in sequence, chaining the hash values\n                    for elem_idx in 0..len {\n                        let elem_array = values.slice(start + elem_idx, 1);\n                        let mut single_hash = [*hash];\n                        $recursive_hash_method(&[elem_array], &mut single_hash)?;\n                        *hash = single_hash[0];\n                    }\n                }\n            }\n        }\n    };\n}\n\n/// Creates hash values for every row, based on the values in the\n/// columns.\n///\n/// The number of rows to hash is determined by `hashes_buffer.len()`.\n/// `hashes_buffer` should be pre-sized appropriately\n///\n/// `hash_method` is the hash function to use.\n/// `create_dictionary_hash_method` is the function to create hashes for dictionary array input.\n/// `recursive_hash_method` is the function to call for recursive hashing of complex types.\n#[macro_export]\nmacro_rules! 
create_hashes_internal {\n    ($arrays: ident, $hashes_buffer: ident, $hash_method: ident, $create_dictionary_hash_method: ident, $recursive_hash_method: ident) => {\n        use arrow::datatypes::{DataType, TimeUnit};\n        use arrow::array::{types::*, *};\n\n        for (i, col) in $arrays.iter().enumerate() {\n            let first_col = i == 0;\n            match col.data_type() {\n                DataType::Boolean => {\n                    $crate::hash_array_boolean!(\n                        BooleanArray,\n                        col,\n                        i32,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Int8 => {\n                    $crate::hash_array_primitive!(\n                        Int8Array,\n                        col,\n                        i32,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Int16 => {\n                    $crate::hash_array_primitive!(\n                        Int16Array,\n                        col,\n                        i32,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Int32 => {\n                    $crate::hash_array_primitive!(\n                        Int32Array,\n                        col,\n                        i32,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Int64 => {\n                    $crate::hash_array_primitive!(\n                        Int64Array,\n                        col,\n                        i64,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Float32 => {\n                    $crate::hash_array_primitive_float!(\n                        Float32Array,\n                        col,\n                        f32,\n                        i32,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Float64 => {\n                    $crate::hash_array_primitive_float!(\n                        Float64Array,\n                        col,\n                        f64,\n                        i64,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Timestamp(TimeUnit::Second, _) => {\n                    $crate::hash_array_primitive!(\n                        TimestampSecondArray,\n                        col,\n                        i64,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Timestamp(TimeUnit::Millisecond, _) => {\n                    $crate::hash_array_primitive!(\n                        TimestampMillisecondArray,\n                        col,\n                        i64,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Timestamp(TimeUnit::Microsecond, _) => {\n                    $crate::hash_array_primitive!(\n                        TimestampMicrosecondArray,\n                        col,\n                    
    i64,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Timestamp(TimeUnit::Nanosecond, _) => {\n                    $crate::hash_array_primitive!(\n                        TimestampNanosecondArray,\n                        col,\n                        i64,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Date32 => {\n                    $crate::hash_array_primitive!(\n                        Date32Array,\n                        col,\n                        i32,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Date64 => {\n                    $crate::hash_array_primitive!(\n                        Date64Array,\n                        col,\n                        i64,\n                        $hashes_buffer,\n                        $hash_method\n                    );\n                }\n                DataType::Utf8 => {\n                    $crate::hash_array!(StringArray, col, $hashes_buffer, $hash_method);\n                }\n                DataType::LargeUtf8 => {\n                    $crate::hash_array!(LargeStringArray, col, $hashes_buffer, $hash_method);\n                }\n                DataType::Binary => {\n                    $crate::hash_array!(BinaryArray, col, $hashes_buffer, $hash_method);\n                }\n                DataType::LargeBinary => {\n                    $crate::hash_array!(LargeBinaryArray, col, $hashes_buffer, $hash_method);\n                }\n                DataType::FixedSizeBinary(_) => {\n                    $crate::hash_array!(FixedSizeBinaryArray, col, $hashes_buffer, $hash_method);\n                }\n                // Apache Spark: if it's a small decimal, i.e. 
precision <= 18, turn it into long and hash it.\n                // Else, turn it into bytes and hash it.\n                DataType::Decimal128(precision, _) if *precision <= 18 => {\n                    $crate::hash_array_small_decimal!(Decimal128Array, col, $hashes_buffer, $hash_method);\n                }\n                DataType::Decimal128(_, _) => {\n                    $crate::hash_array_decimal!(Decimal128Array, col, $hashes_buffer, $hash_method);\n                }\n                DataType::Dictionary(index_type, _) => match **index_type {\n                    DataType::Int8 => {\n                        $create_dictionary_hash_method::<Int8Type>(col, $hashes_buffer, first_col)?;\n                    }\n                    DataType::Int16 => {\n                        $create_dictionary_hash_method::<Int16Type>(\n                            col,\n                            $hashes_buffer,\n                            first_col,\n                        )?;\n                    }\n                    DataType::Int32 => {\n                        $create_dictionary_hash_method::<Int32Type>(\n                            col,\n                            $hashes_buffer,\n                            first_col,\n                        )?;\n                    }\n                    DataType::Int64 => {\n                        $create_dictionary_hash_method::<Int64Type>(\n                            col,\n                            $hashes_buffer,\n                            first_col,\n                        )?;\n                    }\n                    DataType::UInt8 => {\n                        $create_dictionary_hash_method::<UInt8Type>(\n                            col,\n                            $hashes_buffer,\n                            first_col,\n                        )?;\n                    }\n                    DataType::UInt16 => {\n                        $create_dictionary_hash_method::<UInt16Type>(\n                            col,\n                            $hashes_buffer,\n                            first_col,\n                        )?;\n                    }\n                    DataType::UInt32 => {\n                        $create_dictionary_hash_method::<UInt32Type>(\n                            col,\n                            $hashes_buffer,\n                            first_col,\n                        )?;\n                    }\n                    DataType::UInt64 => {\n                        $create_dictionary_hash_method::<UInt64Type>(\n                            col,\n                            $hashes_buffer,\n                            first_col,\n                        )?;\n                    }\n                    _ => {\n                        return Err(DataFusionError::Internal(format!(\n                            \"Unsupported dictionary type in hasher hashing: {}\",\n                            col.data_type(),\n                        )))\n                    }\n                },\n                DataType::List(field) => {\n                    let list_array = col.as_any().downcast_ref::<ListArray>().unwrap();\n                    let values = list_array.values();\n                    let offsets = list_array.offsets();\n\n                    $crate::hash_list_with_primitive_elements!(offsets: ListArray, list_array, values, offsets, field, $hashes_buffer, $hash_method, $recursive_hash_method, i32, col);\n                }\n                DataType::LargeList(field) => {\n                    let list_array = 
col.as_any().downcast_ref::<LargeListArray>().unwrap();\n                    let values = list_array.values();\n                    let offsets = list_array.offsets();\n\n                    $crate::hash_list_with_primitive_elements!(offsets: LargeListArray, list_array, values, offsets, field, $hashes_buffer, $hash_method, $recursive_hash_method, i64, col);\n                }\n                DataType::FixedSizeList(field, size) => {\n                    let list_array = col.as_any().downcast_ref::<FixedSizeListArray>().unwrap();\n                    let values = list_array.values();\n                    let list_size = *size as usize;\n\n                    $crate::hash_list_with_primitive_elements!(fixed_size: list_array, values, list_size, field, $hashes_buffer, $hash_method, $recursive_hash_method);\n                }\n                DataType::Struct(_) => {\n                    let struct_array = col.as_any().downcast_ref::<StructArray>().unwrap();\n                    // Hash each field of the struct - Spark hashes all fields recursively\n                    let columns: Vec<ArrayRef> = struct_array.columns().to_vec();\n                    if !columns.is_empty() {\n                        $recursive_hash_method(&columns, $hashes_buffer)?;\n                    }\n                }\n                DataType::Map(field, _) => {\n                    let map_array = col.as_any().downcast_ref::<MapArray>().unwrap();\n                    let keys = map_array.keys();\n                    let values = map_array.values();\n                    let offsets = map_array.offsets();\n\n                    // Get key and value types from the struct field\n                    if let DataType::Struct(fields) = field.data_type() {\n                        let key_type = &fields[0].data_type();\n                        let value_type = &fields[1].data_type();\n\n                        // Specialize for common map key/value combinations\n                        match (key_type, value_type) {\n                            (DataType::Utf8, DataType::Int32) => {\n                                let key_array = keys.as_any().downcast_ref::<StringArray>().unwrap();\n                                let value_array = values.as_any().downcast_ref::<Int32Array>().unwrap();\n                                if map_array.null_count() == 0 && key_array.null_count() == 0 && value_array.null_count() == 0 {\n                                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                                        let start = offsets[row_idx] as usize;\n                                        let end = offsets[row_idx + 1] as usize;\n                                        for entry_idx in start..end {\n                                            *hash = $hash_method(key_array.value(entry_idx), *hash);\n                                            *hash = $hash_method(value_array.value(entry_idx).to_le_bytes(), *hash);\n                                        }\n                                    }\n                                } else {\n                                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                                        if !map_array.is_null(row_idx) {\n                                            let start = offsets[row_idx] as usize;\n                                            let end = offsets[row_idx + 1] as usize;\n                                            for entry_idx in start..end {\n                                      
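          // Spark-compat: null keys and values are skipped, leaving the running hash unchanged.\n                                      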
          if !key_array.is_null(entry_idx) {\n                                                    *hash = $hash_method(key_array.value(entry_idx), *hash);\n                                                }\n                                                if !value_array.is_null(entry_idx) {\n                                                    *hash = $hash_method(value_array.value(entry_idx).to_le_bytes(), *hash);\n                                                }\n                                            }\n                                        }\n                                    }\n                                }\n                            }\n                            (DataType::Int32, DataType::Utf8) => {\n                                let key_array = keys.as_any().downcast_ref::<Int32Array>().unwrap();\n                                let value_array = values.as_any().downcast_ref::<StringArray>().unwrap();\n                                if map_array.null_count() == 0 && key_array.null_count() == 0 && value_array.null_count() == 0 {\n                                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                                        let start = offsets[row_idx] as usize;\n                                        let end = offsets[row_idx + 1] as usize;\n                                        for entry_idx in start..end {\n                                            *hash = $hash_method(key_array.value(entry_idx).to_le_bytes(), *hash);\n                                            *hash = $hash_method(value_array.value(entry_idx), *hash);\n                                        }\n                                    }\n                                } else {\n                                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                                        if !map_array.is_null(row_idx) {\n                                            let start = offsets[row_idx] as usize;\n                                            let end = offsets[row_idx + 1] as usize;\n                                            for entry_idx in start..end {\n                                                if !key_array.is_null(entry_idx) {\n                                                    *hash = $hash_method(key_array.value(entry_idx).to_le_bytes(), *hash);\n                                                }\n                                                if !value_array.is_null(entry_idx) {\n                                                    *hash = $hash_method(value_array.value(entry_idx), *hash);\n                                                }\n                                            }\n                                        }\n                                    }\n                                }\n                            }\n                            (DataType::Utf8, DataType::Utf8) => {\n                                let key_array = keys.as_any().downcast_ref::<StringArray>().unwrap();\n                                let value_array = values.as_any().downcast_ref::<StringArray>().unwrap();\n                                if map_array.null_count() == 0 && key_array.null_count() == 0 && value_array.null_count() == 0 {\n                                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                                        let start = offsets[row_idx] as usize;\n                                        let end = offsets[row_idx + 1] as usize;\n               
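                         // Hash the key then the value for each entry, matching Spark's map entry order.\n               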
                         for entry_idx in start..end {\n                                            *hash = $hash_method(key_array.value(entry_idx), *hash);\n                                            *hash = $hash_method(value_array.value(entry_idx), *hash);\n                                        }\n                                    }\n                                } else {\n                                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                                        if !map_array.is_null(row_idx) {\n                                            let start = offsets[row_idx] as usize;\n                                            let end = offsets[row_idx + 1] as usize;\n                                            for entry_idx in start..end {\n                                                if !key_array.is_null(entry_idx) {\n                                                    *hash = $hash_method(key_array.value(entry_idx), *hash);\n                                                }\n                                                if !value_array.is_null(entry_idx) {\n                                                    *hash = $hash_method(value_array.value(entry_idx), *hash);\n                                                }\n                                            }\n                                        }\n                                    }\n                                }\n                            }\n                            (DataType::Int32, DataType::Int32) => {\n                                let key_array = keys.as_any().downcast_ref::<Int32Array>().unwrap();\n                                let value_array = values.as_any().downcast_ref::<Int32Array>().unwrap();\n                                if map_array.null_count() == 0 && key_array.null_count() == 0 && value_array.null_count() == 0 {\n                                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                                        let start = offsets[row_idx] as usize;\n                                        let end = offsets[row_idx + 1] as usize;\n                                        for entry_idx in start..end {\n                                            *hash = $hash_method(key_array.value(entry_idx).to_le_bytes(), *hash);\n                                            *hash = $hash_method(value_array.value(entry_idx).to_le_bytes(), *hash);\n                                        }\n                                    }\n                                } else {\n                                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                                        if !map_array.is_null(row_idx) {\n                                            let start = offsets[row_idx] as usize;\n                                            let end = offsets[row_idx + 1] as usize;\n                                            for entry_idx in start..end {\n                                                if !key_array.is_null(entry_idx) {\n                                                    *hash = $hash_method(key_array.value(entry_idx).to_le_bytes(), *hash);\n                                                }\n                                                if !value_array.is_null(entry_idx) {\n                                                    *hash = $hash_method(value_array.value(entry_idx).to_le_bytes(), *hash);\n                                                }\n                      
                      }\n                                        }\n                                    }\n                                }\n                            }\n                            _ => {\n                                // Fall back to recursive approach for other type combinations\n                                if map_array.null_count() == 0 {\n                                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                                        let start = offsets[row_idx] as usize;\n                                        let end = offsets[row_idx + 1] as usize;\n                                        for entry_idx in start..end {\n                                            let key_array = keys.slice(entry_idx, 1);\n                                            let mut single_hash = [*hash];\n                                            $recursive_hash_method(&[key_array], &mut single_hash)?;\n                                            *hash = single_hash[0];\n\n                                            let value_array = values.slice(entry_idx, 1);\n                                            single_hash = [*hash];\n                                            $recursive_hash_method(&[value_array], &mut single_hash)?;\n                                            *hash = single_hash[0];\n                                        }\n                                    }\n                                } else {\n                                    for (row_idx, hash) in $hashes_buffer.iter_mut().enumerate() {\n                                        if !map_array.is_null(row_idx) {\n                                            let start = offsets[row_idx] as usize;\n                                            let end = offsets[row_idx + 1] as usize;\n                                            for entry_idx in start..end {\n                                                let key_array = keys.slice(entry_idx, 1);\n                                                let mut single_hash = [*hash];\n                                                $recursive_hash_method(&[key_array], &mut single_hash)?;\n                                                *hash = single_hash[0];\n\n                                                let value_array = values.slice(entry_idx, 1);\n                                                single_hash = [*hash];\n                                                $recursive_hash_method(&[value_array], &mut single_hash)?;\n                                                *hash = single_hash[0];\n                                            }\n                                        }\n                                    }\n                                }\n                            }\n                        }\n                    } else {\n                        return Err(DataFusionError::Internal(format!(\n                            \"Map field type must be a struct, got: {}\",\n                            field.data_type()\n                        )));\n                    }\n                }\n                _ => {\n                    // This is internal because we should have caught this before.\n                    return Err(DataFusionError::Internal(format!(\n                        \"Unsupported data type in hasher: {}\",\n                        col.data_type()\n                    )));\n                }\n            }\n        }\n    };\n}\n\npub(crate) mod test_utils {\n\n    #[macro_export]\n    
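// Usage sketch (illustrative names, not part of this crate): given a hash\n    // function with signature fn(&[ArrayRef], &mut [u64]) -> Result<&mut [u64]>,\n    // a caller might write:\n    //\n    //   test_hashes_internal!(my_hash_fn, input_array, initial_seeds, expected);\n    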
macro_rules! test_hashes_internal {\n        ($hash_method: ident, $input: expr, $initial_seeds: expr, $expected: expr) => {\n            let i = $input;\n            let mut hashes = $initial_seeds.clone();\n            $hash_method(&[i], &mut hashes).unwrap();\n            assert_eq!(hashes, $expected);\n        };\n    }\n\n    #[macro_export]\n    macro_rules! test_hashes_with_nulls {\n        ($method: ident, $t: ty, $values: ident, $expected: ident, $seed_type: ty) => {\n            // copied before inserting nulls\n            let mut input_with_nulls = $values.clone();\n            let mut expected_with_nulls = $expected.clone();\n            // test before inserting nulls\n            let len = $values.len();\n            let initial_seeds = vec![42 as $seed_type; len];\n            let i = Arc::new(<$t>::from($values)) as ArrayRef;\n            $crate::test_hashes_internal!($method, i, initial_seeds, $expected);\n\n            // test with nulls\n            let median = len / 2;\n            input_with_nulls.insert(0, None);\n            input_with_nulls.insert(median, None);\n            expected_with_nulls.insert(0, 42 as $seed_type);\n            expected_with_nulls.insert(median, 42 as $seed_type);\n            let len_with_nulls = len + 2;\n            let initial_seeds_with_nulls = vec![42 as $seed_type; len_with_nulls];\n            let nullable_input = Arc::new(<$t>::from(input_with_nulls)) as ArrayRef;\n            $crate::test_hashes_internal!(\n                $method,\n                nullable_input,\n                initial_seeds_with_nulls,\n                expected_with_nulls\n            );\n        };\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/hash_funcs/xxhash64.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::compute::take;\nuse twox_hash::XxHash64;\n\nuse datafusion::{\n    arrow::{\n        array::*,\n        datatypes::{ArrowDictionaryKeyType, ArrowNativeType},\n    },\n    common::{internal_err, ScalarValue},\n    error::{DataFusionError, Result},\n};\n\nuse crate::create_hashes_internal;\nuse arrow::array::{Array, ArrayRef, Int64Array};\nuse datafusion::physical_plan::ColumnarValue;\nuse std::sync::Arc;\n\n/// Spark compatible xxhash64 in vectorized execution fashion\npub fn spark_xxhash64(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    let length = args.len();\n    let seed = &args[length - 1];\n    match seed {\n        ColumnarValue::Scalar(ScalarValue::Int64(Some(seed))) => {\n            // iterate over the arguments to find out the length of the array\n            let num_rows = args[0..args.len() - 1]\n                .iter()\n                .find_map(|arg| match arg {\n                    ColumnarValue::Array(array) => Some(array.len()),\n                    ColumnarValue::Scalar(_) => None,\n                })\n                .unwrap_or(1);\n            let mut hashes: Vec<u64> = vec![0_u64; num_rows];\n            hashes.fill(*seed as u64);\n            let arrays = args[0..args.len() - 1]\n                .iter()\n                .map(|arg| match arg {\n                    ColumnarValue::Array(array) => Arc::clone(array),\n                    ColumnarValue::Scalar(scalar) => {\n                        scalar.clone().to_array_of_size(num_rows).unwrap()\n                    }\n                })\n                .collect::<Vec<ArrayRef>>();\n            create_xxhash64_hashes(&arrays, &mut hashes)?;\n            if num_rows == 1 {\n                Ok(ColumnarValue::Scalar(ScalarValue::Int64(Some(\n                    hashes[0] as i64,\n                ))))\n            } else {\n                let hashes: Vec<i64> = hashes.into_iter().map(|x| x as i64).collect();\n                Ok(ColumnarValue::Array(Arc::new(Int64Array::from(hashes))))\n            }\n        }\n        _ => {\n            internal_err!(\n                \"The seed of function xxhash64 must be an Int64 scalar value, but got: {:?}.\",\n                seed\n            )\n        }\n    }\n}\n\n#[inline]\nfn spark_compatible_xxhash64<T: AsRef<[u8]>>(data: T, seed: u64) -> u64 {\n    XxHash64::oneshot(seed, data.as_ref())\n}\n\n// Hash the values in a dictionary array using xxhash64\nfn create_xxhash64_hashes_dictionary<K: ArrowDictionaryKeyType>(\n    array: &ArrayRef,\n    hashes_buffer: &mut [u64],\n    first_col: bool,\n) -> Result<()> {\n    let dict_array = array.as_any().downcast_ref::<DictionaryArray<K>>().unwrap();\n    if !first_col {\n      
  let unpacked = take(dict_array.values().as_ref(), dict_array.keys(), None)?;\n        create_xxhash64_hashes(&[unpacked], hashes_buffer)?;\n    } else {\n        // Hash each dictionary value once, and then use that computed\n        // hash for each key value to avoid a potentially expensive\n        // redundant hashing for large dictionary elements (e.g. strings)\n        let dict_values = Arc::clone(dict_array.values());\n        // same initial seed as Spark\n        let mut dict_hashes = vec![42u64; dict_values.len()];\n        create_xxhash64_hashes(&[dict_values], &mut dict_hashes)?;\n\n        for (hash, key) in hashes_buffer.iter_mut().zip(dict_array.keys().iter()) {\n            if let Some(key) = key {\n                let idx = key.to_usize().ok_or_else(|| {\n                    DataFusionError::Internal(format!(\n                        \"Can not convert key value {:?} to usize in dictionary of type {:?}\",\n                        key,\n                        dict_array.data_type()\n                    ))\n                })?;\n                *hash = dict_hashes[idx]\n            } // no update for Null, consistent with other hashes\n        }\n    }\n    Ok(())\n}\n\n/// Creates xxhash64 hash values for every row, based on the values in the\n/// columns.\n///\n/// The number of rows to hash is determined by `hashes_buffer.len()`.\n/// `hashes_buffer` should be pre-sized appropriately\nfn create_xxhash64_hashes<'a>(\n    arrays: &[ArrayRef],\n    hashes_buffer: &'a mut [u64],\n) -> Result<&'a mut [u64]> {\n    create_hashes_internal!(\n        arrays,\n        hashes_buffer,\n        spark_compatible_xxhash64,\n        create_xxhash64_hashes_dictionary,\n        create_xxhash64_hashes\n    );\n    Ok(hashes_buffer)\n}\n\n#[cfg(test)]\nmod tests {\n    use arrow::array::{Float32Array, Float64Array};\n    use std::sync::Arc;\n\n    use super::create_xxhash64_hashes;\n    use crate::test_hashes_with_nulls;\n    use datafusion::arrow::array::{ArrayRef, Int32Array, Int64Array, Int8Array, StringArray};\n\n    fn test_xxhash64_hash<I: Clone, T: arrow::array::Array + From<Vec<Option<I>>> + 'static>(\n        values: Vec<Option<I>>,\n        expected: Vec<u64>,\n    ) {\n        test_hashes_with_nulls!(create_xxhash64_hashes, T, values, expected, u64);\n    }\n\n    #[test]\n    fn test_i8() {\n        test_xxhash64_hash::<i8, Int8Array>(\n            vec![Some(1), Some(0), Some(-1), Some(i8::MAX), Some(i8::MIN)],\n            vec![\n                0xa309b38455455929,\n                0x3229fbc4681e48f3,\n                0x1bfdda8861c06e45,\n                0x77cc15d9f9f2cdc2,\n                0x39bc22b9e94d81d0,\n            ],\n        );\n    }\n\n    #[test]\n    fn test_i32() {\n        test_xxhash64_hash::<i32, Int32Array>(\n            vec![Some(1), Some(0), Some(-1), Some(i32::MAX), Some(i32::MIN)],\n            vec![\n                0xa309b38455455929,\n                0x3229fbc4681e48f3,\n                0x1bfdda8861c06e45,\n                0x14f0ac009c21721c,\n                0x1cc7cb8d034769cd,\n            ],\n        );\n    }\n\n    #[test]\n    fn test_i64() {\n        test_xxhash64_hash::<i64, Int64Array>(\n            vec![Some(1), Some(0), Some(-1), Some(i64::MAX), Some(i64::MIN)],\n            vec![\n                0x9ed50fd59358d232,\n                0xb71b47ebda15746c,\n                0x358ae035bfb46fd2,\n                0xd2f1c616ae7eb306,\n                0x88608019c494c1f4,\n            ],\n        );\n    }\n\n    #[test]\n    fn test_f32() {\n       
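 // 0.0 and -0.0 are expected to hash identically: negative zero is normalized\n        // before hashing, matching Spark.\n       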
 test_xxhash64_hash::<f32, Float32Array>(\n            vec![\n                Some(1.0),\n                Some(0.0),\n                Some(-0.0),\n                Some(-1.0),\n                Some(99999999999.99999999999),\n                Some(-99999999999.99999999999),\n            ],\n            vec![\n                0x9b92689757fcdbd,\n                0x3229fbc4681e48f3,\n                0x3229fbc4681e48f3,\n                0xa2becc0e61bb3823,\n                0x8f20ab82d4f3687f,\n                0xdce4982d97f7ac4,\n            ],\n        )\n    }\n\n    #[test]\n    fn test_f64() {\n        test_xxhash64_hash::<f64, Float64Array>(\n            vec![\n                Some(1.0),\n                Some(0.0),\n                Some(-0.0),\n                Some(-1.0),\n                Some(99999999999.99999999999),\n                Some(-99999999999.99999999999),\n            ],\n            vec![\n                0xe1fd6e07fee8ad53,\n                0xb71b47ebda15746c,\n                0xb71b47ebda15746c,\n                0x8cdde022746f8f1f,\n                0x793c5c88d313eac7,\n                0xc5e60e7b75d9b232,\n            ],\n        )\n    }\n\n    #[test]\n    fn test_str() {\n        let input = [\n            \"hello\", \"bar\", \"\", \"😁\", \"天地\", \"a\", \"ab\", \"abc\", \"abcd\", \"abcde\",\n        ]\n        .iter()\n        .map(|s| Some(s.to_string()))\n        .collect::<Vec<Option<String>>>();\n\n        test_xxhash64_hash::<String, StringArray>(\n            input,\n            vec![\n                0xc3629e6318d53932,\n                0xe7097b6a54378d8a,\n                0x98b1582b0977e704,\n                0xa80d9d5a6a523bd5,\n                0xfcba5f61ac666c61,\n                0x88e4fe59adf7b0cc,\n                0x259dd873209a3fe3,\n                0x13c1d910702770e6,\n                0xa17b5eb5dc364dff,\n                0xf241303e4a90f299,\n            ],\n        )\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/json_funcs/from_json.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{\n    Array, ArrayRef, BooleanBuilder, Float32Builder, Float64Builder, Int32Builder, Int64Builder,\n    RecordBatch, StringBuilder, StructArray,\n};\nuse arrow::datatypes::{DataType, Field, Schema};\nuse datafusion::common::Result;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::ColumnarValue;\nuse std::any::Any;\nuse std::fmt::{Debug, Display, Formatter};\nuse std::sync::Arc;\n\n/// from_json function - parses JSON strings into structured types\n#[derive(Debug, Eq)]\npub struct FromJson {\n    /// The JSON string input expression\n    expr: Arc<dyn PhysicalExpr>,\n    /// Target schema for parsing\n    schema: DataType,\n    /// Timezone for timestamp parsing (future use)\n    timezone: String,\n}\n\nimpl PartialEq for FromJson {\n    fn eq(&self, other: &Self) -> bool {\n        self.expr.eq(&other.expr) && self.schema == other.schema && self.timezone == other.timezone\n    }\n}\n\nimpl std::hash::Hash for FromJson {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.expr.hash(state);\n        // Note: DataType doesn't implement Hash, so we hash its debug representation\n        format!(\"{:?}\", self.schema).hash(state);\n        self.timezone.hash(state);\n    }\n}\n\nimpl FromJson {\n    pub fn new(expr: Arc<dyn PhysicalExpr>, schema: DataType, timezone: &str) -> Self {\n        Self {\n            expr,\n            schema,\n            timezone: timezone.to_owned(),\n        }\n    }\n}\n\nimpl Display for FromJson {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"from_json({}, schema={:?}, timezone={})\",\n            self.expr, self.schema, self.timezone\n        )\n    }\n}\n\nimpl PartialEq<dyn Any> for FromJson {\n    fn eq(&self, other: &dyn Any) -> bool {\n        if let Some(other) = other.downcast_ref::<FromJson>() {\n            self.expr.eq(&other.expr)\n                && self.schema == other.schema\n                && self.timezone == other.timezone\n        } else {\n            false\n        }\n    }\n}\n\nimpl PhysicalExpr for FromJson {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, _: &Schema) -> Result<DataType> {\n        Ok(self.schema.clone())\n    }\n\n    fn nullable(&self, _input_schema: &Schema) -> Result<bool> {\n        // Always nullable - parse errors return null in PERMISSIVE mode\n        Ok(true)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> Result<ColumnarValue> {\n        let input = 
self.expr.evaluate(batch)?.into_array(batch.num_rows())?;\n        Ok(ColumnarValue::Array(json_string_to_struct(\n            &input,\n            &self.schema,\n        )?))\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.expr]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        assert!(children.len() == 1);\n        Ok(Arc::new(Self::new(\n            Arc::clone(&children[0]),\n            self.schema.clone(),\n            &self.timezone,\n        )))\n    }\n}\n\n/// Parse JSON string array into struct array\nfn json_string_to_struct(arr: &Arc<dyn Array>, schema: &DataType) -> Result<ArrayRef> {\n    use arrow::array::StringArray;\n    use arrow::buffer::NullBuffer;\n\n    let string_array = arr.as_any().downcast_ref::<StringArray>().ok_or_else(|| {\n        datafusion::common::DataFusionError::Execution(\"from_json expects string input\".to_string())\n    })?;\n\n    let DataType::Struct(fields) = schema else {\n        return Err(datafusion::common::DataFusionError::Execution(\n            \"from_json requires struct schema\".to_string(),\n        ));\n    };\n\n    let num_rows = string_array.len();\n    let mut field_builders = create_field_builders(fields, num_rows)?;\n    let mut struct_nulls = vec![true; num_rows];\n    for (row_idx, struct_null) in struct_nulls.iter_mut().enumerate() {\n        if string_array.is_null(row_idx) {\n            // Null input -> null struct\n            *struct_null = false;\n            append_null_to_all_builders(&mut field_builders);\n        } else {\n            let json_str = string_array.value(row_idx);\n\n            // Parse JSON (PERMISSIVE mode: return null fields on error)\n            match serde_json::from_str::<serde_json::Value>(json_str) {\n                Ok(json_value) => {\n                    if let serde_json::Value::Object(obj) = json_value {\n                        // Struct is not null, extract each field\n                        *struct_null = true;\n                        for (field, builder) in fields.iter().zip(field_builders.iter_mut()) {\n                            let field_value = obj.get(field.name());\n                            append_field_value(builder, field, field_value)?;\n                        }\n                    } else {\n                        // Not an object -> struct with null fields\n                        *struct_null = true;\n                        append_null_to_all_builders(&mut field_builders);\n                    }\n                }\n                Err(_) => {\n                    // Parse error -> struct with null fields (PERMISSIVE mode)\n                    *struct_null = true;\n                    append_null_to_all_builders(&mut field_builders);\n                }\n            }\n        }\n    }\n\n    let arrays: Vec<ArrayRef> = field_builders\n        .into_iter()\n        .map(finish_builder)\n        .collect::<Result<Vec<_>>>()?;\n    let null_buffer = NullBuffer::from(struct_nulls);\n    Ok(Arc::new(StructArray::new(\n        fields.clone(),\n        arrays,\n        Some(null_buffer),\n    )))\n}\n\n/// Builder enum for different data types\nenum FieldBuilder {\n    Int32(Int32Builder),\n    Int64(Int64Builder),\n    Float32(Float32Builder),\n    Float64(Float64Builder),\n    Boolean(BooleanBuilder),\n    String(StringBuilder),\n    Struct {\n        fields: arrow::datatypes::Fields,\n        builders: Vec<FieldBuilder>,\n   
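     // Row validity for this nested struct: true = non-null row, false = null row.\n   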
     null_buffer: Vec<bool>,\n    },\n}\n\nfn create_field_builders(\n    fields: &arrow::datatypes::Fields,\n    capacity: usize,\n) -> Result<Vec<FieldBuilder>> {\n    fields\n        .iter()\n        .map(|field| match field.data_type() {\n            DataType::Int32 => Ok(FieldBuilder::Int32(Int32Builder::with_capacity(capacity))),\n            DataType::Int64 => Ok(FieldBuilder::Int64(Int64Builder::with_capacity(capacity))),\n            DataType::Float32 => Ok(FieldBuilder::Float32(Float32Builder::with_capacity(\n                capacity,\n            ))),\n            DataType::Float64 => Ok(FieldBuilder::Float64(Float64Builder::with_capacity(\n                capacity,\n            ))),\n            DataType::Boolean => Ok(FieldBuilder::Boolean(BooleanBuilder::with_capacity(\n                capacity,\n            ))),\n            DataType::Utf8 => Ok(FieldBuilder::String(StringBuilder::with_capacity(\n                capacity,\n                capacity * 16,\n            ))),\n            DataType::Struct(nested_fields) => {\n                let nested_builders = create_field_builders(nested_fields, capacity)?;\n                Ok(FieldBuilder::Struct {\n                    fields: nested_fields.clone(),\n                    builders: nested_builders,\n                    null_buffer: Vec::with_capacity(capacity),\n                })\n            }\n            dt => Err(datafusion::common::DataFusionError::Execution(format!(\n                \"Unsupported field type in from_json: {:?}\",\n                dt\n            ))),\n        })\n        .collect()\n}\n\nfn append_null_to_all_builders(builders: &mut [FieldBuilder]) {\n    for builder in builders {\n        match builder {\n            FieldBuilder::Int32(b) => b.append_null(),\n            FieldBuilder::Int64(b) => b.append_null(),\n            FieldBuilder::Float32(b) => b.append_null(),\n            FieldBuilder::Float64(b) => b.append_null(),\n            FieldBuilder::Boolean(b) => b.append_null(),\n            FieldBuilder::String(b) => b.append_null(),\n            FieldBuilder::Struct {\n                builders: nested_builders,\n                null_buffer,\n                ..\n            } => {\n                // Append null to nested struct\n                null_buffer.push(false);\n                append_null_to_all_builders(nested_builders);\n            }\n        }\n    }\n}\n\nfn append_field_value(\n    builder: &mut FieldBuilder,\n    field: &Field,\n    json_value: Option<&serde_json::Value>,\n) -> Result<()> {\n    use serde_json::Value;\n\n    let value = match json_value {\n        Some(Value::Null) | None => {\n            // Missing field or explicit null -> append null\n            match builder {\n                FieldBuilder::Int32(b) => b.append_null(),\n                FieldBuilder::Int64(b) => b.append_null(),\n                FieldBuilder::Float32(b) => b.append_null(),\n                FieldBuilder::Float64(b) => b.append_null(),\n                FieldBuilder::Boolean(b) => b.append_null(),\n                FieldBuilder::String(b) => b.append_null(),\n                FieldBuilder::Struct {\n                    builders: nested_builders,\n                    null_buffer,\n                    ..\n                } => {\n                    null_buffer.push(false);\n                    append_null_to_all_builders(nested_builders);\n                }\n            }\n            return Ok(());\n        }\n        Some(v) => v,\n    };\n\n    match (builder, field.data_type()) {\n        
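// Lenient extraction, mirroring Spark's PERMISSIVE mode: a type mismatch or\n        // overflow yields a null field rather than an error.\n        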
(FieldBuilder::Int32(b), DataType::Int32) => {\n            if let Some(i) = value.as_i64() {\n                if i >= i32::MIN as i64 && i <= i32::MAX as i64 {\n                    b.append_value(i as i32);\n                } else {\n                    b.append_null(); // Overflow\n                }\n            } else {\n                b.append_null(); // Type mismatch\n            }\n        }\n        (FieldBuilder::Int64(b), DataType::Int64) => {\n            if let Some(i) = value.as_i64() {\n                b.append_value(i);\n            } else {\n                b.append_null();\n            }\n        }\n        (FieldBuilder::Float32(b), DataType::Float32) => {\n            if let Some(f) = value.as_f64() {\n                b.append_value(f as f32);\n            } else {\n                b.append_null();\n            }\n        }\n        (FieldBuilder::Float64(b), DataType::Float64) => {\n            if let Some(f) = value.as_f64() {\n                b.append_value(f);\n            } else {\n                b.append_null();\n            }\n        }\n        (FieldBuilder::Boolean(b), DataType::Boolean) => {\n            if let Some(bool_val) = value.as_bool() {\n                b.append_value(bool_val);\n            } else {\n                b.append_null();\n            }\n        }\n        (FieldBuilder::String(b), DataType::Utf8) => {\n            if let Some(s) = value.as_str() {\n                b.append_value(s);\n            } else {\n                // Stringify non-string values\n                b.append_value(value.to_string());\n            }\n        }\n        (\n            FieldBuilder::Struct {\n                fields: nested_fields,\n                builders: nested_builders,\n                null_buffer,\n            },\n            DataType::Struct(_),\n        ) => {\n            // Handle nested struct\n            if let Some(obj) = value.as_object() {\n                // Non-null nested struct\n                null_buffer.push(true);\n                for (nested_field, nested_builder) in\n                    nested_fields.iter().zip(nested_builders.iter_mut())\n                {\n                    let nested_value = obj.get(nested_field.name());\n                    append_field_value(nested_builder, nested_field, nested_value)?;\n                }\n            } else {\n                // Not an object -> null nested struct\n                null_buffer.push(false);\n                append_null_to_all_builders(nested_builders);\n            }\n        }\n        _ => {\n            return Err(datafusion::common::DataFusionError::Execution(\n                \"Type mismatch in from_json\".to_string(),\n            ));\n        }\n    }\n\n    Ok(())\n}\n\nfn finish_builder(builder: FieldBuilder) -> Result<ArrayRef> {\n    Ok(match builder {\n        FieldBuilder::Int32(mut b) => Arc::new(b.finish()),\n        FieldBuilder::Int64(mut b) => Arc::new(b.finish()),\n        FieldBuilder::Float32(mut b) => Arc::new(b.finish()),\n        FieldBuilder::Float64(mut b) => Arc::new(b.finish()),\n        FieldBuilder::Boolean(mut b) => Arc::new(b.finish()),\n        FieldBuilder::String(mut b) => Arc::new(b.finish()),\n        FieldBuilder::Struct {\n            fields,\n            builders,\n            null_buffer,\n        } => {\n            let nested_arrays: Vec<ArrayRef> = builders\n                .into_iter()\n                .map(finish_builder)\n                .collect::<Result<Vec<_>>>()?;\n            let null_buf = 
arrow::buffer::NullBuffer::from(null_buffer);\n            Arc::new(StructArray::new(fields, nested_arrays, Some(null_buf)))\n        }\n    })\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{Int32Array, StringArray};\n    use arrow::datatypes::Fields;\n\n    #[test]\n    fn test_simple_struct() -> Result<()> {\n        let schema = DataType::Struct(Fields::from(vec![\n            Field::new(\"a\", DataType::Int32, true),\n            Field::new(\"b\", DataType::Utf8, true),\n        ]));\n\n        let input: Arc<dyn Array> = Arc::new(StringArray::from(vec![\n            Some(r#\"{\"a\": 123, \"b\": \"hello\"}\"#),\n            Some(r#\"{\"a\": 456}\"#),\n            Some(r#\"invalid json\"#),\n            None,\n        ]));\n\n        let result = json_string_to_struct(&input, &schema)?;\n        let struct_array = result.as_any().downcast_ref::<StructArray>().unwrap();\n\n        assert_eq!(struct_array.len(), 4);\n\n        // First row\n        let a_array = struct_array\n            .column(0)\n            .as_any()\n            .downcast_ref::<Int32Array>()\n            .unwrap();\n        assert_eq!(a_array.value(0), 123);\n        let b_array = struct_array\n            .column(1)\n            .as_any()\n            .downcast_ref::<StringArray>()\n            .unwrap();\n        assert_eq!(b_array.value(0), \"hello\");\n\n        // Second row (missing field b)\n        assert_eq!(a_array.value(1), 456);\n        assert!(b_array.is_null(1));\n\n        // Third row (parse error -> struct NOT null, all fields null)\n        assert!(!struct_array.is_null(2), \"Struct should not be null\");\n        assert!(a_array.is_null(2));\n        assert!(b_array.is_null(2));\n\n        // Fourth row (null input -> struct IS null)\n        assert!(struct_array.is_null(3), \"Struct itself should be null\");\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_all_primitive_types() -> Result<()> {\n        let schema = DataType::Struct(Fields::from(vec![\n            Field::new(\"i32\", DataType::Int32, true),\n            Field::new(\"i64\", DataType::Int64, true),\n            Field::new(\"f32\", DataType::Float32, true),\n            Field::new(\"f64\", DataType::Float64, true),\n            Field::new(\"bool\", DataType::Boolean, true),\n            Field::new(\"str\", DataType::Utf8, true),\n        ]));\n\n        let input: Arc<dyn Array> = Arc::new(StringArray::from(vec![Some(\n            r#\"{\"i32\":123,\"i64\":9999999999,\"f32\":1.5,\"f64\":2.5,\"bool\":true,\"str\":\"test\"}\"#,\n        )]));\n\n        let result = json_string_to_struct(&input, &schema)?;\n        let struct_array = result.as_any().downcast_ref::<StructArray>().unwrap();\n\n        assert_eq!(struct_array.len(), 1);\n\n        // Verify all types\n        let i32_array = struct_array\n            .column(0)\n            .as_any()\n            .downcast_ref::<Int32Array>()\n            .unwrap();\n        assert_eq!(i32_array.value(0), 123);\n\n        let i64_array = struct_array\n            .column(1)\n            .as_any()\n            .downcast_ref::<arrow::array::Int64Array>()\n            .unwrap();\n        assert_eq!(i64_array.value(0), 9999999999);\n\n        let f32_array = struct_array\n            .column(2)\n            .as_any()\n            .downcast_ref::<arrow::array::Float32Array>()\n            .unwrap();\n        assert_eq!(f32_array.value(0), 1.5);\n\n        let f64_array = struct_array\n            .column(3)\n            .as_any()\n            
.downcast_ref::<arrow::array::Float64Array>()\n            .unwrap();\n        assert_eq!(f64_array.value(0), 2.5);\n\n        let bool_array = struct_array\n            .column(4)\n            .as_any()\n            .downcast_ref::<arrow::array::BooleanArray>()\n            .unwrap();\n        assert!(bool_array.value(0));\n\n        let str_array = struct_array\n            .column(5)\n            .as_any()\n            .downcast_ref::<StringArray>()\n            .unwrap();\n        assert_eq!(str_array.value(0), \"test\");\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_empty_and_null_json() -> Result<()> {\n        let schema = DataType::Struct(Fields::from(vec![\n            Field::new(\"a\", DataType::Int32, true),\n            Field::new(\"b\", DataType::Utf8, true),\n        ]));\n\n        let input: Arc<dyn Array> = Arc::new(StringArray::from(vec![\n            Some(r#\"{}\"#),   // Empty object\n            Some(r#\"null\"#), // JSON null\n            Some(r#\"[]\"#),   // Array (not object)\n            Some(r#\"123\"#),  // Number (not object)\n        ]));\n\n        let result = json_string_to_struct(&input, &schema)?;\n        let struct_array = result.as_any().downcast_ref::<StructArray>().unwrap();\n\n        assert_eq!(struct_array.len(), 4);\n\n        let a_array = struct_array\n            .column(0)\n            .as_any()\n            .downcast_ref::<Int32Array>()\n            .unwrap();\n        let b_array = struct_array\n            .column(1)\n            .as_any()\n            .downcast_ref::<StringArray>()\n            .unwrap();\n\n        // All rows should have non-null structs with null field values\n        for i in 0..4 {\n            assert!(\n                !struct_array.is_null(i),\n                \"Row {} struct should not be null\",\n                i\n            );\n            assert!(a_array.is_null(i), \"Row {} field a should be null\", i);\n            assert!(b_array.is_null(i), \"Row {} field b should be null\", i);\n        }\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_nested_struct() -> Result<()> {\n        let schema = DataType::Struct(Fields::from(vec![\n            Field::new(\n                \"outer\",\n                DataType::Struct(Fields::from(vec![\n                    Field::new(\"inner_a\", DataType::Int32, true),\n                    Field::new(\"inner_b\", DataType::Utf8, true),\n                ])),\n                true,\n            ),\n            Field::new(\"top_level\", DataType::Int32, true),\n        ]));\n\n        let input: Arc<dyn Array> = Arc::new(StringArray::from(vec![\n            Some(r#\"{\"outer\":{\"inner_a\":123,\"inner_b\":\"hello\"},\"top_level\":999}\"#),\n            Some(r#\"{\"outer\":{\"inner_a\":456},\"top_level\":888}\"#), // Missing nested field\n            Some(r#\"{\"outer\":null,\"top_level\":777}\"#),            // Null nested struct\n            Some(r#\"{\"top_level\":666}\"#),                         // Missing nested struct\n        ]));\n\n        let result = json_string_to_struct(&input, &schema)?;\n        let struct_array = result.as_any().downcast_ref::<StructArray>().unwrap();\n\n        assert_eq!(struct_array.len(), 4);\n\n        // Check outer struct\n        let outer_array = struct_array\n            .column(0)\n            .as_any()\n            .downcast_ref::<StructArray>()\n            .unwrap();\n        let top_level_array = struct_array\n            .column(1)\n            .as_any()\n            .downcast_ref::<Int32Array>()\n            
.unwrap();\n\n        // Row 0: Valid nested struct\n        assert!(!outer_array.is_null(0), \"Nested struct should not be null\");\n        let inner_a_array = outer_array\n            .column(0)\n            .as_any()\n            .downcast_ref::<Int32Array>()\n            .unwrap();\n        let inner_b_array = outer_array\n            .column(1)\n            .as_any()\n            .downcast_ref::<StringArray>()\n            .unwrap();\n        assert_eq!(inner_a_array.value(0), 123);\n        assert_eq!(inner_b_array.value(0), \"hello\");\n        assert_eq!(top_level_array.value(0), 999);\n\n        // Row 1: Missing nested field\n        assert!(!outer_array.is_null(1));\n        assert_eq!(inner_a_array.value(1), 456);\n        assert!(inner_b_array.is_null(1));\n        assert_eq!(top_level_array.value(1), 888);\n\n        // Row 2: Null nested struct\n        assert!(outer_array.is_null(2), \"Nested struct should be null\");\n        assert_eq!(top_level_array.value(2), 777);\n\n        // Row 3: Missing nested struct\n        assert!(outer_array.is_null(3), \"Nested struct should be null\");\n        assert_eq!(top_level_array.value(3), 666);\n\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/json_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod from_json;\nmod to_json;\n\npub use from_json::FromJson;\npub use to_json::ToJson;\n"
  },
  {
    "path": "native/spark-expr/src/json_funcs/to_json.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n// TODO upstream this to DataFusion as long as we have a way to specify all\n// of the Spark-specific compatibility features that we need (including\n// being able to specify Spark-compatible cast from all types to string)\n\nuse crate::SparkCastOptions;\nuse crate::{spark_cast, EvalMode};\nuse arrow::array::builder::StringBuilder;\nuse arrow::array::{Array, ArrayRef, RecordBatch, StringArray, StructArray};\nuse arrow::datatypes::{DataType, Schema};\nuse datafusion::common::Result;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::ColumnarValue;\nuse std::any::Any;\nuse std::fmt::{Debug, Display, Formatter};\nuse std::hash::Hash;\nuse std::sync::Arc;\n\n/// to_json function\n#[derive(Debug, Eq)]\npub struct ToJson {\n    /// The input to convert to JSON\n    expr: Arc<dyn PhysicalExpr>,\n    /// Timezone to use when converting timestamps to JSON\n    timezone: String,\n}\n\nimpl Hash for ToJson {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.expr.hash(state);\n        self.timezone.hash(state);\n    }\n}\nimpl PartialEq for ToJson {\n    fn eq(&self, other: &Self) -> bool {\n        self.expr.eq(&other.expr) && self.timezone.eq(&other.timezone)\n    }\n}\n\nimpl ToJson {\n    pub fn new(expr: Arc<dyn PhysicalExpr>, timezone: &str) -> Self {\n        Self {\n            expr,\n            timezone: timezone.to_owned(),\n        }\n    }\n}\n\nimpl Display for ToJson {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"to_json({}, timezone={})\", self.expr, self.timezone)\n    }\n}\n\nimpl PartialEq<dyn Any> for ToJson {\n    fn eq(&self, other: &dyn Any) -> bool {\n        if let Some(other) = other.downcast_ref::<ToJson>() {\n            self.expr.eq(&other.expr) && self.timezone.eq(&other.timezone)\n        } else {\n            false\n        }\n    }\n}\n\nimpl PhysicalExpr for ToJson {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, _: &Schema) -> Result<DataType> {\n        Ok(DataType::Utf8)\n    }\n\n    fn nullable(&self, input_schema: &Schema) -> Result<bool> {\n        self.expr.nullable(input_schema)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> Result<ColumnarValue> {\n        let input = self.expr.evaluate(batch)?.into_array(batch.num_rows())?;\n        Ok(ColumnarValue::Array(array_to_json_string(\n            &input,\n            &self.timezone,\n        )?))\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.expr]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n  
      children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        assert!(children.len() == 1);\n        Ok(Arc::new(Self::new(\n            Arc::clone(&children[0]),\n            &self.timezone,\n        )))\n    }\n}\n\n/// Convert an array into a JSON value string representation\nfn array_to_json_string(arr: &Arc<dyn Array>, timezone: &str) -> Result<ArrayRef> {\n    if let Some(struct_array) = arr.as_any().downcast_ref::<StructArray>() {\n        struct_to_json(struct_array, timezone)\n    } else {\n        spark_cast(\n            ColumnarValue::Array(Arc::clone(arr)),\n            &DataType::Utf8,\n            &SparkCastOptions::new(EvalMode::Legacy, timezone, false),\n        )?\n        .into_array(arr.len())\n    }\n}\n\nfn escape_string(input: &str) -> String {\n    let mut escaped_string = String::with_capacity(input.len());\n    let mut is_escaped = false;\n    for c in input.chars() {\n        match c {\n            '\\\"' | '\\\\' if !is_escaped => {\n                escaped_string.push('\\\\');\n                escaped_string.push(c);\n                is_escaped = false;\n            }\n            '\\t' => {\n                escaped_string.push('\\\\');\n                escaped_string.push('t');\n                is_escaped = false;\n            }\n            '\\r' => {\n                escaped_string.push('\\\\');\n                escaped_string.push('r');\n                is_escaped = false;\n            }\n            '\\n' => {\n                escaped_string.push('\\\\');\n                escaped_string.push('n');\n                is_escaped = false;\n            }\n            '\\x0C' => {\n                escaped_string.push('\\\\');\n                escaped_string.push('f');\n                is_escaped = false;\n            }\n            '\\x08' => {\n                escaped_string.push('\\\\');\n                escaped_string.push('b');\n                is_escaped = false;\n            }\n            '\\\\' => {\n                escaped_string.push('\\\\');\n                is_escaped = true;\n            }\n            _ => {\n                escaped_string.push(c);\n                is_escaped = false;\n            }\n        }\n    }\n    escaped_string\n}\n\nfn struct_to_json(array: &StructArray, timezone: &str) -> Result<ArrayRef> {\n    // get field names and escape any quotes\n    let field_names: Vec<String> = array\n        .fields()\n        .iter()\n        .map(|f| escape_string(f.name().as_str()))\n        .collect();\n    // determine which fields need to have their values quoted\n    let is_string: Vec<bool> = array\n        .fields()\n        .iter()\n        .map(|f| match f.data_type() {\n            DataType::Utf8 | DataType::LargeUtf8 => true,\n            DataType::Dictionary(_, dt) => {\n                matches!(dt.as_ref(), DataType::Utf8 | DataType::LargeUtf8)\n            }\n            _ => false,\n        })\n        .collect();\n    // create JSON string representation of each column\n    let string_arrays: Vec<ArrayRef> = array\n        .columns()\n        .iter()\n        .map(|arr| array_to_json_string(arr, timezone))\n        .collect::<Result<Vec<_>>>()?;\n    let string_arrays: Vec<&StringArray> = string_arrays\n        .iter()\n        .map(|arr| {\n            arr.as_any()\n                .downcast_ref::<StringArray>()\n                .expect(\"string array\")\n        })\n        .collect();\n    // build the JSON string containing entries in the format `\"field_name\":field_value`\n    let mut 
builder = StringBuilder::with_capacity(array.len(), array.len() * 16);\n    let mut json = String::with_capacity(array.len() * 16);\n    for row_index in 0..array.len() {\n        if array.is_null(row_index) {\n            builder.append_null();\n        } else {\n            json.clear();\n            let mut any_fields_written = false;\n            json.push('{');\n            for col_index in 0..string_arrays.len() {\n                if !string_arrays[col_index].is_null(row_index) {\n                    if any_fields_written {\n                        json.push(',');\n                    }\n                    // quoted field name\n                    json.push('\"');\n                    json.push_str(&field_names[col_index]);\n                    json.push_str(\"\\\":\");\n                    // value\n                    let string_value = string_arrays[col_index].value(row_index);\n                    if is_string[col_index] {\n                        json.push('\"');\n                        json.push_str(&escape_string(string_value));\n                        json.push('\"');\n                    } else {\n                        json.push_str(string_value);\n                    }\n                    any_fields_written = true;\n                }\n            }\n            json.push('}');\n            builder.append_value(&json);\n        }\n    }\n    Ok(Arc::new(builder.finish()))\n}\n\n#[cfg(test)]\nmod test {\n    use crate::json_funcs::to_json::struct_to_json;\n    use arrow::array::types::Int32Type;\n    use arrow::array::{Array, PrimitiveArray, StringArray};\n    use arrow::array::{ArrayRef, BooleanArray, Int32Array, StructArray};\n    use arrow::datatypes::{DataType, Field};\n    use datafusion::common::Result;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_primitives() -> Result<()> {\n        let bools: ArrayRef = create_bools();\n        let ints: ArrayRef = create_ints();\n        let strings: ArrayRef = create_strings();\n        let struct_array = StructArray::from(vec![\n            (Arc::new(Field::new(\"a\", DataType::Boolean, true)), bools),\n            (Arc::new(Field::new(\"b\", DataType::Int32, true)), ints),\n            (Arc::new(Field::new(\"c\", DataType::Utf8, true)), strings),\n        ]);\n        let json = struct_to_json(&struct_array, \"UTC\")?;\n        let json = json\n            .as_any()\n            .downcast_ref::<StringArray>()\n            .expect(\"string array\");\n        assert_eq!(4, json.len());\n        assert_eq!(r#\"{\"b\":123}\"#, json.value(0));\n        assert_eq!(r#\"{\"a\":true,\"c\":\"foo\"}\"#, json.value(1));\n        assert_eq!(r#\"{\"a\":false,\"b\":2147483647,\"c\":\"bar\"}\"#, json.value(2));\n        assert_eq!(r#\"{\"a\":false,\"b\":-2147483648,\"c\":\"\"}\"#, json.value(3));\n        Ok(())\n    }\n\n    #[test]\n    fn test_nested_struct() -> Result<()> {\n        let bools: ArrayRef = create_bools();\n        let ints: ArrayRef = create_ints();\n\n        // create first struct array\n        let struct_fields = vec![\n            Arc::new(Field::new(\"a\", DataType::Boolean, true)),\n            Arc::new(Field::new(\"b\", DataType::Int32, true)),\n        ];\n        let struct_values = vec![bools, ints];\n        let struct_array = StructArray::from(\n            struct_fields\n                .clone()\n                .into_iter()\n                .zip(struct_values)\n                .collect::<Vec<_>>(),\n        );\n\n        // create second struct array containing the first struct array\n        let 
struct_fields2 = vec![Arc::new(Field::new(\n            \"a\",\n            DataType::Struct(struct_fields.into()),\n            true,\n        ))];\n        let struct_values2: Vec<ArrayRef> = vec![Arc::new(struct_array.clone())];\n        let struct_array2 = StructArray::from(\n            struct_fields2\n                .into_iter()\n                .zip(struct_values2)\n                .collect::<Vec<_>>(),\n        );\n\n        let json = struct_to_json(&struct_array2, \"UTC\")?;\n        let json = json\n            .as_any()\n            .downcast_ref::<StringArray>()\n            .expect(\"string array\");\n        assert_eq!(4, json.len());\n        assert_eq!(r#\"{\"a\":{\"b\":123}}\"#, json.value(0));\n        assert_eq!(r#\"{\"a\":{\"a\":true}}\"#, json.value(1));\n        assert_eq!(r#\"{\"a\":{\"a\":false,\"b\":2147483647}}\"#, json.value(2));\n        assert_eq!(r#\"{\"a\":{\"a\":false,\"b\":-2147483648}}\"#, json.value(3));\n        Ok(())\n    }\n\n    fn create_ints() -> Arc<PrimitiveArray<Int32Type>> {\n        Arc::new(Int32Array::from(vec![\n            Some(123),\n            None,\n            Some(i32::MAX),\n            Some(i32::MIN),\n        ]))\n    }\n\n    fn create_bools() -> Arc<BooleanArray> {\n        Arc::new(BooleanArray::from(vec![\n            None,\n            Some(true),\n            Some(false),\n            Some(false),\n        ]))\n    }\n\n    fn create_strings() -> Arc<StringArray> {\n        Arc::new(StringArray::from(vec![\n            None,\n            Some(\"foo\"),\n            Some(\"bar\"),\n            Some(\"\"),\n        ]))\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/kernels/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Kernels\n\npub mod strings;\npub mod temporal;\n"
  },
  {
    "path": "native/spark-expr/src/kernels/strings.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! String kernels\n\nuse std::sync::Arc;\n\nuse arrow::{\n    array::*,\n    compute::kernels::substring::{substring as arrow_substring, substring_by_char},\n    datatypes::{DataType, Int32Type},\n};\nuse datafusion::common::DataFusionError;\n\npub fn substring(array: &dyn Array, start: i64, length: u64) -> Result<ArrayRef, DataFusionError> {\n    match array.data_type() {\n        DataType::LargeUtf8 => substring_by_char(\n            array\n                .as_any()\n                .downcast_ref::<LargeStringArray>()\n                .expect(\"A large string is expected\"),\n            start,\n            Some(length),\n        )\n        .map_err(|e| e.into())\n        .map(|t| make_array(t.into_data())),\n        DataType::Utf8 => substring_by_char(\n            array\n                .as_any()\n                .downcast_ref::<StringArray>()\n                .expect(\"A string is expected\"),\n            start,\n            Some(length),\n        )\n        .map_err(|e| e.into())\n        .map(|t| make_array(t.into_data())),\n        DataType::Binary | DataType::LargeBinary => {\n            arrow_substring(array, start, Some(length)).map_err(|e| e.into())\n        }\n        DataType::Dictionary(_, _) => {\n            let dict = as_dictionary_array::<Int32Type>(array);\n            let values = substring(dict.values(), start, length)?;\n            let result = DictionaryArray::try_new(dict.keys().clone(), values)?;\n            Ok(Arc::new(result))\n        }\n        dt => panic!(\"Unsupported input type for function 'substring': {dt:?}\"),\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/kernels/temporal.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! temporal kernels\n\nuse chrono::{DateTime, Datelike, Duration, NaiveDate, Timelike, Utc};\n\nuse std::sync::Arc;\n\nuse arrow::array::{\n    downcast_dictionary_array, downcast_temporal_array,\n    temporal_conversions::*,\n    timezone::Tz,\n    types::{ArrowDictionaryKeyType, ArrowTemporalType, TimestampMicrosecondType},\n    ArrowNumericType,\n};\nuse arrow::{\n    array::*,\n    datatypes::{DataType, TimeUnit},\n};\n\nuse crate::SparkError;\n\n// Copied from arrow_arith/temporal.rs\nmacro_rules! return_compute_error_with {\n    ($msg:expr, $param:expr) => {\n        return { Err(SparkError::Internal(format!(\"{}: {:?}\", $msg, $param))) }\n    };\n}\n\n// The number of days between the beginning of the proleptic gregorian calendar (0001-01-01)\n// and the beginning of the Unix Epoch (1970-01-01)\nconst DAYS_TO_UNIX_EPOCH: i32 = 719_163;\n\n// Optimized date truncation functions that work directly with days since epoch\n// These avoid the overhead of converting to/from NaiveDateTime\n\n/// Convert days since Unix epoch to NaiveDate\n#[inline]\nfn days_to_date(days: i32) -> Option<NaiveDate> {\n    NaiveDate::from_num_days_from_ce_opt(days + DAYS_TO_UNIX_EPOCH)\n}\n\n/// Truncate date to first day of year - optimized version\n/// Uses ordinal (day of year) to avoid creating a new date\n#[inline]\nfn trunc_days_to_year(days: i32) -> Option<i32> {\n    let date = days_to_date(days)?;\n    let day_of_year_offset = date.ordinal() as i32 - 1;\n    Some(days - day_of_year_offset)\n}\n\n/// Truncate date to first day of quarter - optimized version\n/// Computes offset from first day of quarter without creating a new date\n#[inline]\nfn trunc_days_to_quarter(days: i32) -> Option<i32> {\n    let date = days_to_date(days)?;\n    let month = date.month(); // 1-12\n    let quarter = (month - 1) / 3; // 0-3\n    let first_month_of_quarter = quarter * 3 + 1; // 1, 4, 7, or 10\n\n    // Find day of year for first day of quarter\n    let first_day_of_quarter = NaiveDate::from_ymd_opt(date.year(), first_month_of_quarter, 1)?;\n    let quarter_start_ordinal = first_day_of_quarter.ordinal() as i32;\n    let current_ordinal = date.ordinal() as i32;\n\n    Some(days - (current_ordinal - quarter_start_ordinal))\n}\n\n/// Truncate date to first day of month - optimized version\n/// Instead of creating a new date, just subtract day offset\n#[inline]\nfn trunc_days_to_month(days: i32) -> Option<i32> {\n    let date = days_to_date(days)?;\n    let day_offset = date.day() as i32 - 1;\n    Some(days - day_offset)\n}\n\n/// Truncate date to first day of week (Monday) - optimized version\n#[inline]\nfn trunc_days_to_week(days: i32) -> Option<i32> {\n    let date = days_to_date(days)?;\n    
// weekday().num_days_from_monday() gives 0 for Monday, 1 for Tuesday, etc.\n    let days_since_monday = date.weekday().num_days_from_monday() as i32;\n    Some(days - days_since_monday)\n}\n\n// Based on arrow_arith/temporal.rs:extract_component_from_datetime_array\n// Transforms an array of DateTime<Tz> to an array of TimestampMicrosecond after applying an\n// operation\nfn as_timestamp_tz_with_op<A: ArrayAccessor<Item = T::Native>, T: ArrowTemporalType, F>(\n    iter: ArrayIter<A>,\n    mut builder: PrimitiveBuilder<TimestampMicrosecondType>,\n    tz: &str,\n    op: F,\n) -> Result<TimestampMicrosecondArray, SparkError>\nwhere\n    F: Fn(DateTime<Tz>) -> i64,\n    i64: From<T::Native>,\n{\n    let tz: Tz = tz.parse()?;\n    for value in iter {\n        match value {\n            Some(value) => match as_datetime_with_timezone::<T>(value.into(), tz) {\n                Some(time) => builder.append_value(op(time)),\n                _ => {\n                    return Err(SparkError::Internal(\n                        \"Unable to read value as datetime\".to_string(),\n                    ));\n                }\n            },\n            None => builder.append_null(),\n        }\n    }\n    Ok(builder.finish())\n}\n\nfn as_timestamp_tz_with_op_single<T: ArrowTemporalType, F>(\n    value: Option<T::Native>,\n    builder: &mut PrimitiveBuilder<TimestampMicrosecondType>,\n    tz: &Tz,\n    op: F,\n) -> Result<(), SparkError>\nwhere\n    F: Fn(DateTime<Tz>) -> i64,\n    i64: From<T::Native>,\n{\n    match value {\n        Some(value) => match as_datetime_with_timezone::<T>(value.into(), *tz) {\n            Some(time) => builder.append_value(op(time)),\n            _ => {\n                return Err(SparkError::Internal(\n                    \"Unable to read value as datetime\".to_string(),\n                ));\n            }\n        },\n        None => builder.append_null(),\n    }\n    Ok(())\n}\n\n// Apply the Tz to the NaiveDateTime, convert to UTC, and return as microseconds in Unix epoch\n#[inline]\nfn as_micros_from_unix_epoch_utc(dt: Option<DateTime<Tz>>) -> i64 {\n    dt.unwrap().with_timezone(&Utc).timestamp_micros()\n}\n\n#[inline]\nfn trunc_date_to_year<T: Datelike + Timelike>(dt: T) -> Option<T> {\n    Some(dt)\n        .and_then(|d| d.with_nanosecond(0))\n        .and_then(|d| d.with_second(0))\n        .and_then(|d| d.with_minute(0))\n        .and_then(|d| d.with_hour(0))\n        .and_then(|d| d.with_day0(0))\n        .and_then(|d| d.with_month0(0))\n}\n\n/// returns the month of the beginning of the quarter\n#[inline]\nfn quarter_month<T: Datelike>(dt: &T) -> u32 {\n    1 + 3 * ((dt.month() - 1) / 3)\n}\n\n#[inline]\nfn trunc_date_to_quarter<T: Datelike + Timelike>(dt: T) -> Option<T> {\n    Some(dt)\n        .and_then(|d| d.with_nanosecond(0))\n        .and_then(|d| d.with_second(0))\n        .and_then(|d| d.with_minute(0))\n        .and_then(|d| d.with_hour(0))\n        .and_then(|d| d.with_day0(0))\n        .and_then(|d| d.with_month(quarter_month(&d)))\n}\n\n#[inline]\nfn trunc_date_to_month<T: Datelike + Timelike>(dt: T) -> Option<T> {\n    Some(dt)\n        .and_then(|d| d.with_nanosecond(0))\n        .and_then(|d| d.with_second(0))\n        .and_then(|d| d.with_minute(0))\n        .and_then(|d| d.with_hour(0))\n        .and_then(|d| d.with_day0(0))\n}\n\n#[inline]\nfn trunc_date_to_week<T>(dt: T) -> Option<T>\nwhere\n    T: Datelike + Timelike + std::ops::Sub<Duration, Output = T> + Copy,\n{\n    Some(dt)\n        .map(|d| d - Duration::try_seconds(60 * 60 * 24 * 
d.weekday() as i64).unwrap())\n        .and_then(|d| d.with_nanosecond(0))\n        .and_then(|d| d.with_second(0))\n        .and_then(|d| d.with_minute(0))\n        .and_then(|d| d.with_hour(0))\n}\n\n#[inline]\nfn trunc_date_to_day<T: Timelike>(dt: T) -> Option<T> {\n    Some(dt)\n        .and_then(|d| d.with_nanosecond(0))\n        .and_then(|d| d.with_second(0))\n        .and_then(|d| d.with_minute(0))\n        .and_then(|d| d.with_hour(0))\n}\n\n#[inline]\nfn trunc_date_to_hour<T: Timelike>(dt: T) -> Option<T> {\n    Some(dt)\n        .and_then(|d| d.with_nanosecond(0))\n        .and_then(|d| d.with_second(0))\n        .and_then(|d| d.with_minute(0))\n}\n\n#[inline]\nfn trunc_date_to_minute<T: Timelike>(dt: T) -> Option<T> {\n    Some(dt)\n        .and_then(|d| d.with_nanosecond(0))\n        .and_then(|d| d.with_second(0))\n}\n\n#[inline]\nfn trunc_date_to_second<T: Timelike>(dt: T) -> Option<T> {\n    Some(dt).and_then(|d| d.with_nanosecond(0))\n}\n\n#[inline]\nfn trunc_date_to_ms<T: Timelike>(dt: T) -> Option<T> {\n    Some(dt).and_then(|d| d.with_nanosecond(1_000_000 * (d.nanosecond() / 1_000_000)))\n}\n\n#[inline]\nfn trunc_date_to_microsec<T: Timelike>(dt: T) -> Option<T> {\n    Some(dt).and_then(|d| d.with_nanosecond(1_000 * (d.nanosecond() / 1_000)))\n}\n\n///\n/// Implements the spark [TRUNC](https://spark.apache.org/docs/latest/api/sql/index.html#trunc)\n/// function where the specified format is a scalar value\n///\n///   array is an array of Date32 values. The array may be a dictionary array.\n///\n///   format is a scalar string specifying the format to apply to the timestamp value.\npub fn date_trunc_dyn(array: &dyn Array, format: String) -> Result<ArrayRef, SparkError> {\n    match array.data_type().clone() {\n        DataType::Dictionary(_, _) => {\n            downcast_dictionary_array!(\n                array => {\n                    let truncated_values = date_trunc_dyn(array.values(), format)?;\n                    Ok(Arc::new(array.with_values(truncated_values)))\n                }\n                dt => return_compute_error_with!(\"date_trunc does not support\", dt),\n            )\n        }\n        _ => {\n            downcast_temporal_array!(\n                array => {\n                   date_trunc(array, format)\n                    .map(|a| Arc::new(a) as ArrayRef)\n                }\n                dt => return_compute_error_with!(\"date_trunc does not support\", dt),\n            )\n        }\n    }\n}\n\npub(crate) fn date_trunc<T>(\n    array: &PrimitiveArray<T>,\n    format: String,\n) -> Result<Date32Array, SparkError>\nwhere\n    T: ArrowTemporalType + ArrowNumericType,\n    i64: From<T::Native>,\n{\n    match array.data_type() {\n        DataType::Date32 => {\n            // Use optimized path for Date32 that works directly with days\n            date_trunc_date32(\n                array\n                    .as_any()\n                    .downcast_ref::<Date32Array>()\n                    .expect(\"Date32 type mismatch\"),\n                format,\n            )\n        }\n        dt => return_compute_error_with!(\n            \"Unsupported input type '{:?}' for function 'date_trunc'\",\n            dt\n        ),\n    }\n}\n\n/// Optimized date truncation for Date32 arrays\n/// Works directly with days since epoch instead of converting to/from NaiveDateTime\nfn date_trunc_date32(array: &Date32Array, format: String) -> Result<Date32Array, SparkError> {\n    // Select the truncation function based on format\n    let trunc_fn: fn(i32) -> 
Option<i32> = match format.to_uppercase().as_str() {\n        \"YEAR\" | \"YYYY\" | \"YY\" => trunc_days_to_year,\n        \"QUARTER\" => trunc_days_to_quarter,\n        \"MONTH\" | \"MON\" | \"MM\" => trunc_days_to_month,\n        \"WEEK\" => trunc_days_to_week,\n        _ => {\n            return Err(SparkError::Internal(format!(\n                \"Unsupported format: {format:?} for function 'date_trunc'\"\n            )))\n        }\n    };\n\n    // Apply truncation to each element\n    let result: Date32Array = array\n        .iter()\n        .map(|opt_days| opt_days.and_then(trunc_fn))\n        .collect();\n\n    Ok(result)\n}\n\n///\n/// Implements the spark [TRUNC](https://spark.apache.org/docs/latest/api/sql/index.html#trunc)\n/// function where the specified format may be an array\n///\n///   array is an array of Date32 values. The array may be a dictionary array.\n///\n///   format is an array of strings specifying the format to apply to the corresponding date value.\n///             The array may be a dictionary array.\npub(crate) fn date_trunc_array_fmt_dyn(\n    array: &dyn Array,\n    formats: &dyn Array,\n) -> Result<ArrayRef, SparkError> {\n    match (array.data_type().clone(), formats.data_type().clone()) {\n        (DataType::Dictionary(_, v), DataType::Dictionary(_, f)) => {\n            if !matches!(*v, DataType::Date32) {\n                return_compute_error_with!(\"date_trunc does not support\", v)\n            }\n            if !matches!(*f, DataType::Utf8) {\n                return_compute_error_with!(\"date_trunc does not support format type \", f)\n            }\n            downcast_dictionary_array!(\n                formats => {\n                    downcast_dictionary_array!(\n                        array => {\n                            date_trunc_array_fmt_dict_dict(\n                                    &array.downcast_dict::<Date32Array>().unwrap(),\n                                    &formats.downcast_dict::<StringArray>().unwrap())\n                            .map(|a| Arc::new(a) as ArrayRef)\n                        }\n                        dt => return_compute_error_with!(\"date_trunc does not support\", dt)\n                    )\n                }\n                fmt => return_compute_error_with!(\"date_trunc does not support format type\", fmt),\n            )\n        }\n        (DataType::Dictionary(_, v), DataType::Utf8) => {\n            if !matches!(*v, DataType::Date32) {\n                return_compute_error_with!(\"date_trunc does not support\", v)\n            }\n            downcast_dictionary_array!(\n                array => {\n                  date_trunc_array_fmt_dict_plain(\n                        &array.downcast_dict::<Date32Array>().unwrap(),\n                        formats.as_any().downcast_ref::<StringArray>()\n                            .expect(\"Unexpected value type in formats\"))\n                  .map(|a| Arc::new(a) as ArrayRef)\n                }\n                dt => return_compute_error_with!(\"date_trunc does not support\", dt),\n            )\n        }\n        (DataType::Date32, DataType::Dictionary(_, f)) => {\n            if !matches!(*f, DataType::Utf8) {\n                return_compute_error_with!(\"date_trunc does not support format type \", f)\n            }\n            downcast_dictionary_array!(\n                formats => {\n                downcast_temporal_array!(array => {\n                        date_trunc_array_fmt_plain_dict(\n                            
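// plain Date32 values with dictionary-encoded format strings\n                            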
array.as_any().downcast_ref::<Date32Array>()\n                                .expect(\"Unexpected error in casting date array\"),\n                            &formats.downcast_dict::<StringArray>().unwrap())\n                        .map(|a| Arc::new(a) as ArrayRef)\n                    }\n                    dt => return_compute_error_with!(\"date_trunc does not support\", dt),\n                    )\n                }\n                fmt => return_compute_error_with!(\"date_trunc does not support format type\", fmt),\n            )\n        }\n        (DataType::Date32, DataType::Utf8) => date_trunc_array_fmt_plain_plain(\n            array\n                .as_any()\n                .downcast_ref::<Date32Array>()\n                .expect(\"Unexpected error in casting date array\"),\n            formats\n                .as_any()\n                .downcast_ref::<StringArray>()\n                .expect(\"Unexpected value type in formats\"),\n        )\n        .map(|a| Arc::new(a) as ArrayRef),\n        (dt, fmt) => Err(SparkError::Internal(format!(\n            \"Unsupported datatype: {dt:}, format: {fmt:?} for function 'date_trunc'\"\n        ))),\n    }\n}\n\nmacro_rules! date_trunc_array_fmt_helper {\n    ($array: ident, $formats: ident, $datatype: ident) => {{\n        let mut builder = Date32Builder::with_capacity($array.len());\n        let iter = $array.into_iter();\n        match $datatype {\n            DataType::Date32 => {\n                for (index, val) in iter.enumerate() {\n                    let trunc_fn: fn(i32) -> Option<i32> =\n                        match $formats.value(index).to_uppercase().as_str() {\n                            \"YEAR\" | \"YYYY\" | \"YY\" => trunc_days_to_year,\n                            \"QUARTER\" => trunc_days_to_quarter,\n                            \"MONTH\" | \"MON\" | \"MM\" => trunc_days_to_month,\n                            \"WEEK\" => trunc_days_to_week,\n                            _ => {\n                                return Err(SparkError::Internal(format!(\n                                    \"Unsupported format: {:?} for function 'date_trunc'\",\n                                    $formats.value(index)\n                                )))\n                            }\n                        };\n                    match val.and_then(trunc_fn) {\n                        Some(days) => builder.append_value(days),\n                        None => builder.append_null(),\n                    }\n                }\n                Ok(builder.finish())\n            }\n            dt => return_compute_error_with!(\n                \"Unsupported input type '{:?}' for function 'date_trunc'\",\n                dt\n            ),\n        }\n    }};\n}\n\nfn date_trunc_array_fmt_plain_plain(\n    array: &Date32Array,\n    formats: &StringArray,\n) -> Result<Date32Array, SparkError>\nwhere\n{\n    let data_type = array.data_type();\n    date_trunc_array_fmt_helper!(array, formats, data_type)\n}\n\nfn date_trunc_array_fmt_plain_dict<K>(\n    array: &Date32Array,\n    formats: &TypedDictionaryArray<K, StringArray>,\n) -> Result<Date32Array, SparkError>\nwhere\n    K: ArrowDictionaryKeyType,\n{\n    let data_type = array.data_type();\n    date_trunc_array_fmt_helper!(array, formats, data_type)\n}\n\nfn date_trunc_array_fmt_dict_plain<K>(\n    array: &TypedDictionaryArray<K, Date32Array>,\n    formats: &StringArray,\n) -> Result<Date32Array, SparkError>\nwhere\n    K: ArrowDictionaryKeyType,\n{\n    let data_type = 
array.values().data_type();\n    date_trunc_array_fmt_helper!(array, formats, data_type)\n}\n\nfn date_trunc_array_fmt_dict_dict<K, F>(\n    array: &TypedDictionaryArray<K, Date32Array>,\n    formats: &TypedDictionaryArray<F, StringArray>,\n) -> Result<Date32Array, SparkError>\nwhere\n    K: ArrowDictionaryKeyType,\n    F: ArrowDictionaryKeyType,\n{\n    let data_type = array.values().data_type();\n    date_trunc_array_fmt_helper!(array, formats, data_type)\n}\n\n///\n/// Implements the spark [DATE_TRUNC](https://spark.apache.org/docs/latest/api/sql/index.html#date_trunc)\n/// function where the specified format is a scalar value\n///\n///   array is an array of Timestamp(Microsecond) values. Timestamp values must have a valid\n///            timezone or no timezone. The array may be a dictionary array.\n///\n///   format is a scalar string specifying the format to apply to the timestamp value.\npub(crate) fn timestamp_trunc_dyn(\n    array: &dyn Array,\n    format: String,\n) -> Result<ArrayRef, SparkError> {\n    match array.data_type().clone() {\n        DataType::Dictionary(_, _) => {\n            downcast_dictionary_array!(\n                array => {\n                    let truncated_values = timestamp_trunc_dyn(array.values(), format)?;\n                    Ok(Arc::new(array.with_values(truncated_values)))\n                }\n                dt => return_compute_error_with!(\"timestamp_trunc does not support\", dt),\n            )\n        }\n        _ => {\n            downcast_temporal_array!(\n                array => {\n                   timestamp_trunc(array, format)\n                    .map(|a| Arc::new(a) as ArrayRef)\n                }\n                dt => return_compute_error_with!(\"timestamp_trunc does not support\", dt),\n            )\n        }\n    }\n}\n\npub(crate) fn timestamp_trunc<T>(\n    array: &PrimitiveArray<T>,\n    format: String,\n) -> Result<TimestampMicrosecondArray, SparkError>\nwhere\n    T: ArrowTemporalType + ArrowNumericType,\n    i64: From<T::Native>,\n{\n    let builder = TimestampMicrosecondBuilder::with_capacity(array.len());\n    let iter = ArrayIter::new(array);\n    match array.data_type() {\n        DataType::Timestamp(TimeUnit::Microsecond, Some(tz)) => {\n            match format.to_uppercase().as_str() {\n                \"YEAR\" | \"YYYY\" | \"YY\" => {\n                    as_timestamp_tz_with_op::<&PrimitiveArray<T>, T, _>(iter, builder, tz, |dt| {\n                        as_micros_from_unix_epoch_utc(trunc_date_to_year(dt))\n                    })\n                }\n                \"QUARTER\" => {\n                    as_timestamp_tz_with_op::<&PrimitiveArray<T>, T, _>(iter, builder, tz, |dt| {\n                        as_micros_from_unix_epoch_utc(trunc_date_to_quarter(dt))\n                    })\n                }\n                \"MONTH\" | \"MON\" | \"MM\" => {\n                    as_timestamp_tz_with_op::<&PrimitiveArray<T>, T, _>(iter, builder, tz, |dt| {\n                        as_micros_from_unix_epoch_utc(trunc_date_to_month(dt))\n                    })\n                }\n                \"WEEK\" => {\n                    as_timestamp_tz_with_op::<&PrimitiveArray<T>, T, _>(iter, builder, tz, |dt| {\n                        as_micros_from_unix_epoch_utc(trunc_date_to_week(dt))\n                    })\n                }\n                \"DAY\" | \"DD\" => {\n                    as_timestamp_tz_with_op::<&PrimitiveArray<T>, T, _>(iter, builder, tz, |dt| {\n                        
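// zero out the time-of-day fields, then convert back to UTC microseconds\n                        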
as_micros_from_unix_epoch_utc(trunc_date_to_day(dt))\n                    })\n                }\n                \"HOUR\" => {\n                    as_timestamp_tz_with_op::<&PrimitiveArray<T>, T, _>(iter, builder, tz, |dt| {\n                        as_micros_from_unix_epoch_utc(trunc_date_to_hour(dt))\n                    })\n                }\n                \"MINUTE\" => {\n                    as_timestamp_tz_with_op::<&PrimitiveArray<T>, T, _>(iter, builder, tz, |dt| {\n                        as_micros_from_unix_epoch_utc(trunc_date_to_minute(dt))\n                    })\n                }\n                \"SECOND\" => {\n                    as_timestamp_tz_with_op::<&PrimitiveArray<T>, T, _>(iter, builder, tz, |dt| {\n                        as_micros_from_unix_epoch_utc(trunc_date_to_second(dt))\n                    })\n                }\n                \"MILLISECOND\" => {\n                    as_timestamp_tz_with_op::<&PrimitiveArray<T>, T, _>(iter, builder, tz, |dt| {\n                        as_micros_from_unix_epoch_utc(trunc_date_to_ms(dt))\n                    })\n                }\n                \"MICROSECOND\" => {\n                    as_timestamp_tz_with_op::<&PrimitiveArray<T>, T, _>(iter, builder, tz, |dt| {\n                        as_micros_from_unix_epoch_utc(trunc_date_to_microsec(dt))\n                    })\n                }\n                _ => Err(SparkError::Internal(format!(\n                    \"Unsupported format: {format:?} for function 'timestamp_trunc'\"\n                ))),\n            }\n        }\n        dt => return_compute_error_with!(\n            \"Unsupported input type '{:?}' for function 'timestamp_trunc'\",\n            dt\n        ),\n    }\n}\n\n///\n/// Implements the spark [DATE_TRUNC](https://spark.apache.org/docs/latest/api/sql/index.html#date_trunc)\n/// function where the specified format may be an array\n///\n///   array is an array of Timestamp(Microsecond) values. Timestamp values must have a valid\n///            timezone or no timezone. The array may be a dictionary array.\n///\n///   format is an array of strings specifying the format to apply to the corresponding timestamp\n///             value. 
The array may be a dictionary array.\npub(crate) fn timestamp_trunc_array_fmt_dyn(\n    array: &dyn Array,\n    formats: &dyn Array,\n) -> Result<ArrayRef, SparkError> {\n    match (array.data_type().clone(), formats.data_type().clone()) {\n        (DataType::Dictionary(_, _), DataType::Dictionary(_, _)) => {\n            downcast_dictionary_array!(\n                formats => {\n                    downcast_dictionary_array!(\n                        array => {\n                            timestamp_trunc_array_fmt_dict_dict(\n                                    &array.downcast_dict::<TimestampMicrosecondArray>().unwrap(),\n                                    &formats.downcast_dict::<StringArray>().unwrap())\n                            .map(|a| Arc::new(a) as ArrayRef)\n                        }\n                        dt => return_compute_error_with!(\"timestamp_trunc does not support\", dt)\n                    )\n                }\n                fmt => return_compute_error_with!(\"timestamp_trunc does not support format type\", fmt),\n            )\n        }\n        (DataType::Dictionary(_, _), DataType::Utf8) => {\n            downcast_dictionary_array!(\n                array => {\n                  timestamp_trunc_array_fmt_dict_plain(\n                        &array.downcast_dict::<PrimitiveArray<TimestampMicrosecondType>>().unwrap(),\n                        formats.as_any().downcast_ref::<StringArray>()\n                            .expect(\"Unexpected value type in formats\"))\n                  .map(|a| Arc::new(a) as ArrayRef)\n                }\n                dt => return_compute_error_with!(\"timestamp_trunc does not support\", dt),\n            )\n        }\n        (DataType::Timestamp(TimeUnit::Microsecond, _), DataType::Dictionary(_, _)) => {\n            downcast_dictionary_array!(\n                formats => {\n                downcast_temporal_array!(array => {\n                        timestamp_trunc_array_fmt_plain_dict(\n                                array,\n                                &formats.downcast_dict::<StringArray>().unwrap())\n                        .map(|a| Arc::new(a) as ArrayRef)\n                    }\n                    dt => return_compute_error_with!(\"timestamp_trunc does not support\", dt),\n                    )\n                }\n                fmt => return_compute_error_with!(\"timestamp_trunc does not support format type\", fmt),\n            )\n        }\n        (DataType::Timestamp(TimeUnit::Microsecond, _), DataType::Utf8) => {\n            downcast_temporal_array!(\n                array => {\n                    timestamp_trunc_array_fmt_plain_plain(array,\n                        formats.as_any().downcast_ref::<StringArray>().expect(\"Unexpected value type in formats\"))\n                    .map(|a| Arc::new(a) as ArrayRef)\n                },\n                dt => return_compute_error_with!(\"timestamp_trunc does not support\", dt),\n            )\n        }\n        (dt, fmt) => Err(SparkError::Internal(format!(\n            \"Unsupported datatype: {dt:}, format: {fmt:?} for function 'timestamp_trunc'\"\n        ))),\n    }\n}\n\nmacro_rules! 
timestamp_trunc_array_fmt_helper {\n    ($array: ident, $formats: ident, $datatype: ident) => {{\n        let mut builder = TimestampMicrosecondBuilder::with_capacity($array.len());\n        let iter = $array.into_iter();\n        assert_eq!(\n            $array.len(),\n            $formats.len(),\n            \"lengths of values array and format array must be the same\"\n        );\n        match $datatype {\n            DataType::Timestamp(TimeUnit::Microsecond, Some(tz)) => {\n                let tz: Tz = tz.parse()?;\n                for (index, val) in iter.enumerate() {\n                    let op_result = match $formats.value(index).to_uppercase().as_str() {\n                        \"YEAR\" | \"YYYY\" | \"YY\" => {\n                            as_timestamp_tz_with_op_single::<T, _>(val, &mut builder, &tz, |dt| {\n                                as_micros_from_unix_epoch_utc(trunc_date_to_year(dt))\n                            })\n                        }\n                        \"QUARTER\" => {\n                            as_timestamp_tz_with_op_single::<T, _>(val, &mut builder, &tz, |dt| {\n                                as_micros_from_unix_epoch_utc(trunc_date_to_quarter(dt))\n                            })\n                        }\n                        \"MONTH\" | \"MON\" | \"MM\" => {\n                            as_timestamp_tz_with_op_single::<T, _>(val, &mut builder, &tz, |dt| {\n                                as_micros_from_unix_epoch_utc(trunc_date_to_month(dt))\n                            })\n                        }\n                        \"WEEK\" => {\n                            as_timestamp_tz_with_op_single::<T, _>(val, &mut builder, &tz, |dt| {\n                                as_micros_from_unix_epoch_utc(trunc_date_to_week(dt))\n                            })\n                        }\n                        \"DAY\" | \"DD\" => {\n                            as_timestamp_tz_with_op_single::<T, _>(val, &mut builder, &tz, |dt| {\n                                as_micros_from_unix_epoch_utc(trunc_date_to_day(dt))\n                            })\n                        }\n                        \"HOUR\" => {\n                            as_timestamp_tz_with_op_single::<T, _>(val, &mut builder, &tz, |dt| {\n                                as_micros_from_unix_epoch_utc(trunc_date_to_hour(dt))\n                            })\n                        }\n                        \"MINUTE\" => {\n                            as_timestamp_tz_with_op_single::<T, _>(val, &mut builder, &tz, |dt| {\n                                as_micros_from_unix_epoch_utc(trunc_date_to_minute(dt))\n                            })\n                        }\n                        \"SECOND\" => {\n                            as_timestamp_tz_with_op_single::<T, _>(val, &mut builder, &tz, |dt| {\n                                as_micros_from_unix_epoch_utc(trunc_date_to_second(dt))\n                            })\n                        }\n                        \"MILLISECOND\" => {\n                            as_timestamp_tz_with_op_single::<T, _>(val, &mut builder, &tz, |dt| {\n                                as_micros_from_unix_epoch_utc(trunc_date_to_ms(dt))\n                            })\n                        }\n                        \"MICROSECOND\" => {\n                            as_timestamp_tz_with_op_single::<T, _>(val, &mut builder, &tz, |dt| {\n                                as_micros_from_unix_epoch_utc(trunc_date_to_microsec(dt))\n                     
       })\n                        }\n                        _ => Err(SparkError::Internal(format!(\n                            \"Unsupported format: {:?} for function 'timestamp_trunc'\",\n                            $formats.value(index)\n                        ))),\n                    };\n                    op_result?\n                }\n                Ok(builder.finish())\n            }\n            dt => {\n                return_compute_error_with!(\n                    \"Unsupported input type '{:?}' for function 'timestamp_trunc'\",\n                    dt\n                )\n            }\n        }\n    }};\n}\n\nfn timestamp_trunc_array_fmt_plain_plain<T>(\n    array: &PrimitiveArray<T>,\n    formats: &StringArray,\n) -> Result<TimestampMicrosecondArray, SparkError>\nwhere\n    T: ArrowTemporalType + ArrowNumericType,\n    i64: From<T::Native>,\n{\n    let data_type = array.data_type();\n    timestamp_trunc_array_fmt_helper!(array, formats, data_type)\n}\nfn timestamp_trunc_array_fmt_plain_dict<T, K>(\n    array: &PrimitiveArray<T>,\n    formats: &TypedDictionaryArray<K, StringArray>,\n) -> Result<TimestampMicrosecondArray, SparkError>\nwhere\n    T: ArrowTemporalType + ArrowNumericType,\n    i64: From<T::Native>,\n    K: ArrowDictionaryKeyType,\n{\n    let data_type = array.data_type();\n    timestamp_trunc_array_fmt_helper!(array, formats, data_type)\n}\n\nfn timestamp_trunc_array_fmt_dict_plain<T, K>(\n    array: &TypedDictionaryArray<K, PrimitiveArray<T>>,\n    formats: &StringArray,\n) -> Result<TimestampMicrosecondArray, SparkError>\nwhere\n    T: ArrowTemporalType + ArrowNumericType,\n    i64: From<T::Native>,\n    K: ArrowDictionaryKeyType,\n{\n    let data_type = array.values().data_type();\n    timestamp_trunc_array_fmt_helper!(array, formats, data_type)\n}\n\nfn timestamp_trunc_array_fmt_dict_dict<T, K, F>(\n    array: &TypedDictionaryArray<K, PrimitiveArray<T>>,\n    formats: &TypedDictionaryArray<F, StringArray>,\n) -> Result<TimestampMicrosecondArray, SparkError>\nwhere\n    T: ArrowTemporalType + ArrowNumericType,\n    i64: From<T::Native>,\n    K: ArrowDictionaryKeyType,\n    F: ArrowDictionaryKeyType,\n{\n    let data_type = array.values().data_type();\n    timestamp_trunc_array_fmt_helper!(array, formats, data_type)\n}\n\n#[cfg(test)]\nmod tests {\n    use crate::kernels::temporal::{\n        date_trunc, date_trunc_array_fmt_dyn, timestamp_trunc, timestamp_trunc_array_fmt_dyn,\n    };\n    use arrow::array::{\n        builder::{PrimitiveDictionaryBuilder, StringDictionaryBuilder},\n        iterator::ArrayIter,\n        types::{Date32Type, Int32Type, TimestampMicrosecondType},\n        Array, Date32Array, PrimitiveArray, StringArray, TimestampMicrosecondArray,\n    };\n    use std::sync::Arc;\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // test takes too long with miri\n    fn test_date_trunc() {\n        let size = 1000;\n        let mut vec: Vec<i32> = Vec::with_capacity(size);\n        for i in 0..size {\n            vec.push(i as i32);\n        }\n        let array = Date32Array::from(vec);\n        for fmt in [\n            \"YEAR\", \"YYYY\", \"YY\", \"QUARTER\", \"MONTH\", \"MON\", \"MM\", \"WEEK\",\n        ] {\n            match date_trunc(&array, fmt.to_string()) {\n                Ok(a) => {\n                    for i in 0..size {\n                        assert!(array.values().get(i) >= a.values().get(i))\n                    }\n                }\n                _ => unreachable!(),\n            }\n        }\n    }\n\n    #[test]\n    // This 
test only verifies that the various input array types work. Actual correctness (that this\n    // produces the same results as Spark) is verified in the JVM tests\n    fn test_date_trunc_array_fmt_dyn() {\n        let size = 10;\n        let formats = [\n            \"YEAR\", \"YYYY\", \"YY\", \"QUARTER\", \"MONTH\", \"MON\", \"MM\", \"WEEK\",\n        ];\n        let mut vec: Vec<i32> = Vec::with_capacity(size * formats.len());\n        let mut fmt_vec: Vec<&str> = Vec::with_capacity(size * formats.len());\n        for i in 0..size {\n            for fmt_value in &formats {\n                vec.push(i as i32 * 1_000_001);\n                fmt_vec.push(fmt_value);\n            }\n        }\n\n        // date array\n        let array = Date32Array::from(vec);\n\n        // formats array\n        let fmt_array = StringArray::from(fmt_vec);\n\n        // date dictionary array\n        let mut date_dict_builder = PrimitiveDictionaryBuilder::<Int32Type, Date32Type>::new();\n        for v in array.iter() {\n            date_dict_builder\n                .append(v.unwrap())\n                .expect(\"Error in building date array\");\n        }\n        let mut array_dict = date_dict_builder.finish();\n        // rebuild the dictionary values (dates have no timezone to apply)\n        array_dict = array_dict.with_values(Arc::new(\n            array_dict\n                .values()\n                .as_any()\n                .downcast_ref::<Date32Array>()\n                .unwrap()\n                .clone(),\n        ));\n\n        // formats dictionary array\n        let mut formats_dict_builder = StringDictionaryBuilder::<Int32Type>::new();\n        for v in fmt_array.iter() {\n            formats_dict_builder\n                .append(v.unwrap())\n                .expect(\"Error in building formats array\");\n        }\n        let fmt_dict = formats_dict_builder.finish();\n\n        // verify input arrays\n        let iter = ArrayIter::new(&array);\n        let mut dict_iter = array_dict\n            .downcast_dict::<PrimitiveArray<Date32Type>>()\n            .unwrap()\n            .into_iter();\n        for val in iter {\n            assert_eq!(\n                dict_iter\n                    .next()\n                    .expect(\"array and dictionary array do not match\"),\n                val\n            )\n        }\n\n        // verify input format arrays\n        let fmt_iter = ArrayIter::new(&fmt_array);\n        let mut fmt_dict_iter = fmt_dict.downcast_dict::<StringArray>().unwrap().into_iter();\n        for val in fmt_iter {\n            assert_eq!(\n                fmt_dict_iter\n                    .next()\n                    .expect(\"formats and dictionary formats do not match\"),\n                val\n            )\n        }\n\n        // test cases\n        if let Ok(a) = date_trunc_array_fmt_dyn(&array, &fmt_array) {\n            for i in 0..array.len() {\n                assert!(\n                    array.value(i) >= a.as_any().downcast_ref::<Date32Array>().unwrap().value(i)\n                )\n            }\n        } else {\n            unreachable!()\n        }\n        if let Ok(a) = date_trunc_array_fmt_dyn(&array_dict, &fmt_array) {\n            for i in 0..array.len() {\n                assert!(\n                    array.value(i) >= a.as_any().downcast_ref::<Date32Array>().unwrap().value(i)\n                )\n            }\n        } else {\n            unreachable!()\n        }\n        if let Ok(a) = date_trunc_array_fmt_dyn(&array, &fmt_dict) {\n            for i in 0..array.len() {\n        
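        // truncation can only move a date backwards in time, never forwards\n        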
        assert!(\n                    array.value(i) >= a.as_any().downcast_ref::<Date32Array>().unwrap().value(i)\n                )\n            }\n        } else {\n            unreachable!()\n        }\n        if let Ok(a) = date_trunc_array_fmt_dyn(&array_dict, &fmt_dict) {\n            for i in 0..array.len() {\n                assert!(\n                    array.value(i) >= a.as_any().downcast_ref::<Date32Array>().unwrap().value(i)\n                )\n            }\n        } else {\n            unreachable!()\n        }\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // test takes too long with miri\n    fn test_timestamp_trunc() {\n        let size = 1000;\n        let mut vec: Vec<i64> = Vec::with_capacity(size);\n        for i in 0..size {\n            vec.push(i as i64);\n        }\n        let array = TimestampMicrosecondArray::from(vec).with_timezone_utc();\n        for fmt in [\n            \"YEAR\",\n            \"YYYY\",\n            \"YY\",\n            \"QUARTER\",\n            \"MONTH\",\n            \"MON\",\n            \"MM\",\n            \"WEEK\",\n            \"DAY\",\n            \"DD\",\n            \"HOUR\",\n            \"MINUTE\",\n            \"SECOND\",\n            \"MILLISECOND\",\n            \"MICROSECOND\",\n        ] {\n            match timestamp_trunc(&array, fmt.to_string()) {\n                Ok(a) => {\n                    for i in 0..size {\n                        assert!(array.values().get(i) >= a.values().get(i))\n                    }\n                }\n                _ => unreachable!(),\n            }\n        }\n    }\n\n    #[test]\n    // test takes too long with miri\n    #[cfg_attr(miri, ignore)]\n    // This test only verifies that the various input array types work. Actual correctness (that this\n    // produces the same results as Spark) is verified in the JVM tests\n    fn test_timestamp_trunc_array_fmt_dyn() {\n        let size = 10;\n        let formats = [\n            \"YEAR\",\n            \"YYYY\",\n            \"YY\",\n            \"QUARTER\",\n            \"MONTH\",\n            \"MON\",\n            \"MM\",\n            \"WEEK\",\n            \"DAY\",\n            \"DD\",\n            \"HOUR\",\n            \"MINUTE\",\n            \"SECOND\",\n            \"MILLISECOND\",\n            \"MICROSECOND\",\n        ];\n        let mut vec: Vec<i64> = Vec::with_capacity(size * formats.len());\n        let mut fmt_vec: Vec<&str> = Vec::with_capacity(size * formats.len());\n        for i in 0..size {\n            for fmt_value in &formats {\n                vec.push(i as i64 * 1_000_000_001);\n                fmt_vec.push(fmt_value);\n            }\n        }\n\n        // timestamp array\n        let array = TimestampMicrosecondArray::from(vec).with_timezone_utc();\n\n        // formats array\n        let fmt_array = StringArray::from(fmt_vec);\n\n        // timestamp dictionary array\n        let mut timestamp_dict_builder =\n            PrimitiveDictionaryBuilder::<Int32Type, TimestampMicrosecondType>::new();\n        for v in array.iter() {\n            timestamp_dict_builder\n                .append(v.unwrap())\n                .expect(\"Error in building timestamp array\");\n        }\n        let mut array_dict = timestamp_dict_builder.finish();\n        // apply timezone\n        array_dict = array_dict.with_values(Arc::new(\n            array_dict\n                .values()\n                .as_any()\n                .downcast_ref::<TimestampMicrosecondArray>()\n                .unwrap()\n                
.clone()\n                .with_timezone_utc(),\n        ));\n\n        // formats dictionary array\n        let mut formats_dict_builder = StringDictionaryBuilder::<Int32Type>::new();\n        for v in fmt_array.iter() {\n            formats_dict_builder\n                .append(v.unwrap())\n                .expect(\"Error in building formats array\");\n        }\n        let fmt_dict = formats_dict_builder.finish();\n\n        // verify input arrays\n        let iter = ArrayIter::new(&array);\n        let mut dict_iter = array_dict\n            .downcast_dict::<PrimitiveArray<TimestampMicrosecondType>>()\n            .unwrap()\n            .into_iter();\n        for val in iter {\n            assert_eq!(\n                dict_iter\n                    .next()\n                    .expect(\"array and dictionary array do not match\"),\n                val\n            )\n        }\n\n        // verify input format arrays\n        let fmt_iter = ArrayIter::new(&fmt_array);\n        let mut fmt_dict_iter = fmt_dict.downcast_dict::<StringArray>().unwrap().into_iter();\n        for val in fmt_iter {\n            assert_eq!(\n                fmt_dict_iter\n                    .next()\n                    .expect(\"formats and dictionary formats do not match\"),\n                val\n            )\n        }\n\n        // test cases\n        if let Ok(a) = timestamp_trunc_array_fmt_dyn(&array, &fmt_array) {\n            for i in 0..array.len() {\n                assert!(\n                    array.value(i)\n                        >= a.as_any()\n                            .downcast_ref::<TimestampMicrosecondArray>()\n                            .unwrap()\n                            .value(i)\n                )\n            }\n        } else {\n            unreachable!()\n        }\n        if let Ok(a) = timestamp_trunc_array_fmt_dyn(&array_dict, &fmt_array) {\n            for i in 0..array.len() {\n                assert!(\n                    array.value(i)\n                        >= a.as_any()\n                            .downcast_ref::<TimestampMicrosecondArray>()\n                            .unwrap()\n                            .value(i)\n                )\n            }\n        } else {\n            unreachable!()\n        }\n        if let Ok(a) = timestamp_trunc_array_fmt_dyn(&array, &fmt_dict) {\n            for i in 0..array.len() {\n                assert!(\n                    array.value(i)\n                        >= a.as_any()\n                            .downcast_ref::<TimestampMicrosecondArray>()\n                            .unwrap()\n                            .value(i)\n                )\n            }\n        } else {\n            unreachable!()\n        }\n        if let Ok(a) = timestamp_trunc_array_fmt_dyn(&array_dict, &fmt_dict) {\n            for i in 0..array.len() {\n                assert!(\n                    array.value(i)\n                        >= a.as_any()\n                            .downcast_ref::<TimestampMicrosecondArray>()\n                            .unwrap()\n                            .value(i)\n                )\n            }\n        } else {\n            unreachable!()\n        }\n    }\n}\n"
  },
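  {
    "path": "native/spark-expr/src/kernels/temporal_usage_example.rs",
    "content": "// NOTE: Hypothetical, illustration-only file; it is not part of the Comet source tree.\n// A minimal sketch of how the truncation kernels exercised by the tests above are\n// called. The signature of `timestamp_trunc` is inferred from those tests, and the\n// module path in the import is an assumption.\n\nuse arrow::array::TimestampMicrosecondArray;\n\nuse crate::kernels::temporal::timestamp_trunc; // assumed module path\n\nfn timestamp_trunc_demo() {\n    // 2020-01-01T12:34:56.789Z expressed in microseconds since the epoch\n    let ts = TimestampMicrosecondArray::from(vec![1_577_882_096_789_000_i64]).with_timezone_utc();\n\n    // Formats mirror the units accepted by Spark's date_trunc (YEAR, QUARTER, HOUR, ...)\n    let truncated = timestamp_trunc(&ts, \"HOUR\".to_string()).expect(\"valid format\");\n\n    // Truncation never moves a timestamp forward, which is exactly the invariant\n    // the unit tests above assert for every supported format.\n    assert!(ts.value(0) >= truncated.value(0));\n}\n"
  },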
  {
    "path": "native/spark-expr/src/lib.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n// The clippy throws an error if the reference clone not wrapped into `Arc::clone`\n// The lint makes easier for code reader/reviewer separate references clones from more heavyweight ones\n#![deny(clippy::clone_on_ref_ptr)]\n\nmod error;\nmod query_context;\n\npub mod kernels;\npub use kernels::temporal::date_trunc_dyn;\nmod static_invoke;\npub use static_invoke::*;\n\nmod struct_funcs;\npub use struct_funcs::{CreateNamedStruct, GetStructField};\n\nmod csv_funcs;\nmod json_funcs;\npub mod test_common;\npub mod timezone;\nmod unbound;\npub use unbound::UnboundColumn;\nmod predicate_funcs;\npub mod utils;\npub use predicate_funcs::{spark_isnan, RLike};\n\nmod agg_funcs;\nmod array_funcs;\nmod comet_scalar_funcs;\npub mod hash_funcs;\n\nmod string_funcs;\n\nmod datetime_funcs;\npub use agg_funcs::*;\n\npub use cast::{spark_cast, Cast, SparkCastOptions};\n\nmod bloom_filter;\npub use bloom_filter::{BloomFilterAgg, BloomFilterMightContain};\n\nmod conditional_funcs;\nmod conversion_funcs;\nmod math_funcs;\nmod nondetermenistic_funcs;\n\npub use array_funcs::*;\npub use conditional_funcs::*;\npub use conversion_funcs::*;\npub use nondetermenistic_funcs::*;\n\npub use comet_scalar_funcs::{\n    create_comet_physical_fun, create_comet_physical_fun_with_eval_mode,\n    register_all_comet_functions,\n};\npub use csv_funcs::*;\npub use datetime_funcs::{\n    SparkDateDiff, SparkDateFromUnixDate, SparkDateTrunc, SparkHour, SparkHoursTransform,\n    SparkMakeDate, SparkMinute, SparkSecond, SparkUnixTimestamp, TimestampTruncExpr,\n};\npub use error::{decimal_overflow_error, SparkError, SparkErrorWithContext, SparkResult};\npub use hash_funcs::*;\npub use json_funcs::{FromJson, ToJson};\npub use math_funcs::{\n    create_modulo_expr, create_negate_expr, spark_ceil, spark_decimal_div,\n    spark_decimal_integral_div, spark_floor, spark_log, spark_make_decimal, spark_round,\n    spark_unhex, spark_unscaled_value, CheckOverflow, DecimalRescaleCheckOverflow, NegativeExpr,\n    NormalizeNaNAndZero, WideDecimalBinaryExpr, WideDecimalOp,\n};\npub use query_context::{create_query_context_map, QueryContext, QueryContextMap};\npub use string_funcs::*;\n\n/// Spark supports three evaluation modes when evaluating expressions, which affect\n/// the behavior when processing input values that are invalid or would result in an\n/// error, such as divide by zero errors, and also affects behavior when converting\n/// between types.\n#[derive(Debug, Hash, PartialEq, Eq, Clone, Copy)]\npub enum EvalMode {\n    /// Legacy is the default behavior in Spark prior to Spark 4.0. This mode silently ignores\n    /// or replaces errors during SQL operations. 
Operations resulting in errors (like\n    /// division by zero) will produce NULL values instead of failing. Legacy mode also\n    /// enables implicit type conversions.\n    Legacy,\n    /// Adheres to the ANSI SQL standard for error handling by throwing exceptions for\n    /// operations that result in errors. Does not perform implicit type conversions.\n    Ansi,\n    /// Same as Ansi mode, except that it converts errors to NULL values without\n    /// failing the entire query.\n    Try,\n}\n\n#[derive(Debug, Hash, PartialEq, Eq, Clone, Copy)]\npub enum BinaryOutputStyle {\n    Utf8,\n    Basic,\n    Base64,\n    Hex,\n    HexDiscrete,\n}\n\npub(crate) fn arithmetic_overflow_error(from_type: &str) -> SparkError {\n    SparkError::ArithmeticOverflow {\n        from_type: from_type.to_string(),\n    }\n}\n\npub(crate) fn decimal_sum_overflow_error() -> SparkError {\n    SparkError::DecimalSumOverflow\n}\n\npub(crate) fn divide_by_zero_error() -> SparkError {\n    SparkError::DivideByZero\n}\n"
  },
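  {
    "path": "native/spark-expr/src/lib_eval_mode_example.rs",
    "content": "// NOTE: Hypothetical, illustration-only file; it is not part of the Comet source tree.\n// A minimal sketch of the per-mode error contract documented on `EvalMode` above:\n// Ansi surfaces runtime errors, while Legacy and Try turn them into NULL results.\n\nuse crate::EvalMode; // assumed module path\n\n/// Fold a fallible per-row computation into the documented per-mode behavior,\n/// where `Ok(None)` stands in for a NULL output value.\nfn apply_eval_mode(mode: EvalMode, row: Result<i64, String>) -> Result<Option<i64>, String> {\n    match row {\n        Ok(v) => Ok(Some(v)),\n        // Ansi: propagate the error and fail the query\n        Err(e) if mode == EvalMode::Ansi => Err(e),\n        // Legacy / Try: replace the error with NULL\n        Err(_) => Ok(None),\n    }\n}\n\nfn eval_mode_demo() {\n    let overflow: Result<i64, String> = Err(\"ARITHMETIC_OVERFLOW\".to_string());\n    assert_eq!(apply_eval_mode(EvalMode::Legacy, overflow.clone()), Ok(None));\n    assert_eq!(apply_eval_mode(EvalMode::Try, overflow.clone()), Ok(None));\n    assert!(apply_eval_mode(EvalMode::Ansi, overflow).is_err());\n}\n"
  },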
  {
    "path": "native/spark-expr/src/math_funcs/abs.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::arithmetic_overflow_error;\nuse arrow::array::*;\nuse arrow::datatypes::*;\nuse arrow::error::ArrowError;\nuse datafusion::common::{exec_err, DataFusionError, Result, ScalarValue};\nuse datafusion::logical_expr::ColumnarValue;\nuse std::sync::Arc;\n\nmacro_rules! legacy_compute_op {\n    ($ARRAY:expr, $FUNC:ident, $TYPE:ident, $RESULT:ident) => {{\n        let n = $ARRAY.as_any().downcast_ref::<$TYPE>();\n        match n {\n            Some(array) => {\n                let res: $RESULT = arrow::compute::kernels::arity::unary(array, |x| x.$FUNC());\n                Ok(res)\n            }\n            _ => Err(DataFusionError::Internal(format!(\n                \"Invalid data type for abs\"\n            ))),\n        }\n    }};\n}\n\nmacro_rules! ansi_compute_op {\n    ($ARRAY:expr, $FUNC:ident, $TYPE:ident, $RESULT:ident, $NATIVE:ident, $FROM_TYPE:expr) => {{\n        let n = $ARRAY.as_any().downcast_ref::<$TYPE>();\n        match n {\n            Some(array) => {\n                match arrow::compute::kernels::arity::try_unary(array, |x| {\n                    if x == $NATIVE::MIN {\n                        Err(ArrowError::ArithmeticOverflow($FROM_TYPE.to_string()))\n                    } else {\n                        Ok(x.$FUNC())\n                    }\n                }) {\n                    Ok(res) => Ok(ColumnarValue::Array(Arc::<PrimitiveArray<$RESULT>>::new(\n                        res,\n                    ))),\n                    Err(_) => Err(arithmetic_overflow_error($FROM_TYPE).into()),\n                }\n            }\n            _ => Err(DataFusionError::Internal(\"Invalid data type\".to_string())),\n        }\n    }};\n}\n\n/// This function mimics SparkSQL's [Abs]: https://github.com/apache/spark/blob/v4.0.1/sql/catalyst/src/main/scala/org/apache/spark/sql/catalyst/expressions/arithmetic.scala#L148\n/// Spark's [ANSI-compliant]: https://spark.apache.org/docs/latest/sql-ref-ansi-compliance.html#arithmetic-operations dialect mode throws org.apache.spark.SparkArithmeticException\n/// when abs causes overflow.\npub fn abs(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    if args.is_empty() || args.len() > 2 {\n        return exec_err!(\"abs takes 1 or 2 arguments, but got: {}\", args.len());\n    }\n\n    let fail_on_error = if args.len() == 2 {\n        match &args[1] {\n            ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error))) => *fail_on_error,\n            _ => {\n                return exec_err!(\n                    \"The second argument must be boolean scalar, but got: {:?}\",\n                    args[1]\n                );\n            }\n        }\n    } else {\n        false\n    
};\n\n    match &args[0] {\n        ColumnarValue::Array(array) => match array.data_type() {\n            DataType::Null\n            | DataType::UInt8\n            | DataType::UInt16\n            | DataType::UInt32\n            | DataType::UInt64 => Ok(args[0].clone()),\n            DataType::Int8 => {\n                if !fail_on_error {\n                    let result = legacy_compute_op!(array, wrapping_abs, Int8Array, Int8Array);\n                    Ok(ColumnarValue::Array(Arc::new(result?)))\n                } else {\n                    ansi_compute_op!(array, abs, Int8Array, Int8Type, i8, \"Int8\")\n                }\n            }\n            DataType::Int16 => {\n                if !fail_on_error {\n                    let result = legacy_compute_op!(array, wrapping_abs, Int16Array, Int16Array);\n                    Ok(ColumnarValue::Array(Arc::new(result?)))\n                } else {\n                    ansi_compute_op!(array, abs, Int16Array, Int16Type, i16, \"Int16\")\n                }\n            }\n            DataType::Int32 => {\n                if !fail_on_error {\n                    let result = legacy_compute_op!(array, wrapping_abs, Int32Array, Int32Array);\n                    Ok(ColumnarValue::Array(Arc::new(result?)))\n                } else {\n                    ansi_compute_op!(array, abs, Int32Array, Int32Type, i32, \"Int32\")\n                }\n            }\n            DataType::Int64 => {\n                if !fail_on_error {\n                    let result = legacy_compute_op!(array, wrapping_abs, Int64Array, Int64Array);\n                    Ok(ColumnarValue::Array(Arc::new(result?)))\n                } else {\n                    ansi_compute_op!(array, abs, Int64Array, Int64Type, i64, \"Int64\")\n                }\n            }\n            DataType::Float32 => {\n                let result = legacy_compute_op!(array, abs, Float32Array, Float32Array);\n                Ok(ColumnarValue::Array(Arc::new(result?)))\n            }\n            DataType::Float64 => {\n                let result = legacy_compute_op!(array, abs, Float64Array, Float64Array);\n                Ok(ColumnarValue::Array(Arc::new(result?)))\n            }\n            DataType::Decimal128(precision, scale) => {\n                if !fail_on_error {\n                    let result =\n                        legacy_compute_op!(array, wrapping_abs, Decimal128Array, Decimal128Array)?;\n                    let result = result.with_data_type(DataType::Decimal128(*precision, *scale));\n                    Ok(ColumnarValue::Array(Arc::new(result)))\n                } else {\n                    // Need to pass precision and scale from input, so not using ansi_compute_op\n                    let input = array.as_any().downcast_ref::<Decimal128Array>();\n                    match input {\n                        Some(i) => {\n                            match arrow::compute::kernels::arity::try_unary(i, |x| {\n                                if x == i128::MIN {\n                                    Err(ArrowError::ArithmeticOverflow(\"Decimal128\".to_string()))\n                                } else {\n                                    Ok(x.abs())\n                                }\n                            }) {\n                                Ok(res) => Ok(ColumnarValue::Array(Arc::<\n                                    PrimitiveArray<Decimal128Type>,\n                                >::new(\n                                    res.with_data_type(DataType::Decimal128(*precision, 
*scale)),\n                                ))),\n                                Err(_) => Err(arithmetic_overflow_error(\"Decimal128\").into()),\n                            }\n                        }\n                        _ => Err(DataFusionError::Internal(\"Invalid data type\".to_string())),\n                    }\n                }\n            }\n            DataType::Decimal256(precision, scale) => {\n                if !fail_on_error {\n                    let result =\n                        legacy_compute_op!(array, wrapping_abs, Decimal256Array, Decimal256Array)?;\n                    let result = result.with_data_type(DataType::Decimal256(*precision, *scale));\n                    Ok(ColumnarValue::Array(Arc::new(result)))\n                } else {\n                    // Need to pass precision and scale from input, so not using ansi_compute_op\n                    let input = array.as_any().downcast_ref::<Decimal256Array>();\n                    match input {\n                        Some(i) => {\n                            match arrow::compute::kernels::arity::try_unary(i, |x| {\n                                if x == i256::MIN {\n                                    Err(ArrowError::ArithmeticOverflow(\"Decimal256\".to_string()))\n                                } else {\n                                    Ok(x.wrapping_abs()) // i256 doesn't define abs() method\n                                }\n                            }) {\n                                Ok(res) => Ok(ColumnarValue::Array(Arc::<\n                                    PrimitiveArray<Decimal256Type>,\n                                >::new(\n                                    res.with_data_type(DataType::Decimal256(*precision, *scale)),\n                                ))),\n                                Err(_) => Err(arithmetic_overflow_error(\"Decimal256\").into()),\n                            }\n                        }\n                        _ => Err(DataFusionError::Internal(\"Invalid data type\".to_string())),\n                    }\n                }\n            }\n            dt => exec_err!(\"Not supported datatype for ABS: {dt}\"),\n        },\n        ColumnarValue::Scalar(sv) => match sv {\n            ScalarValue::Null\n            | ScalarValue::UInt8(_)\n            | ScalarValue::UInt16(_)\n            | ScalarValue::UInt32(_)\n            | ScalarValue::UInt64(_) => Ok(args[0].clone()),\n            ScalarValue::Int8(a) => match a {\n                None => Ok(args[0].clone()),\n                Some(v) => match v.checked_abs() {\n                    Some(abs_val) => Ok(ColumnarValue::Scalar(ScalarValue::Int8(Some(abs_val)))),\n                    None => {\n                        if !fail_on_error {\n                            // return the original value\n                            Ok(ColumnarValue::Scalar(ScalarValue::Int8(Some(*v))))\n                        } else {\n                            Err(arithmetic_overflow_error(\"Int8\").into())\n                        }\n                    }\n                },\n            },\n            ScalarValue::Int16(a) => match a {\n                None => Ok(args[0].clone()),\n                Some(v) => match v.checked_abs() {\n                    Some(abs_val) => Ok(ColumnarValue::Scalar(ScalarValue::Int16(Some(abs_val)))),\n                    None => {\n                        if !fail_on_error {\n                            // return the original value\n                            
Ok(ColumnarValue::Scalar(ScalarValue::Int16(Some(*v))))\n                        } else {\n                            Err(arithmetic_overflow_error(\"Int16\").into())\n                        }\n                    }\n                },\n            },\n            ScalarValue::Int32(a) => match a {\n                None => Ok(args[0].clone()),\n                Some(v) => match v.checked_abs() {\n                    Some(abs_val) => Ok(ColumnarValue::Scalar(ScalarValue::Int32(Some(abs_val)))),\n                    None => {\n                        if !fail_on_error {\n                            // return the original value\n                            Ok(ColumnarValue::Scalar(ScalarValue::Int32(Some(*v))))\n                        } else {\n                            Err(arithmetic_overflow_error(\"Int32\").into())\n                        }\n                    }\n                },\n            },\n            ScalarValue::Int64(a) => match a {\n                None => Ok(args[0].clone()),\n                Some(v) => match v.checked_abs() {\n                    Some(abs_val) => Ok(ColumnarValue::Scalar(ScalarValue::Int64(Some(abs_val)))),\n                    None => {\n                        if !fail_on_error {\n                            // return the original value\n                            Ok(ColumnarValue::Scalar(ScalarValue::Int64(Some(*v))))\n                        } else {\n                            Err(arithmetic_overflow_error(\"Int64\").into())\n                        }\n                    }\n                },\n            },\n            ScalarValue::Float32(a) => Ok(ColumnarValue::Scalar(ScalarValue::Float32(\n                a.map(|x| x.abs()),\n            ))),\n            ScalarValue::Float64(a) => Ok(ColumnarValue::Scalar(ScalarValue::Float64(\n                a.map(|x| x.abs()),\n            ))),\n            ScalarValue::Decimal128(a, precision, scale) => match a {\n                None => Ok(args[0].clone()),\n                Some(v) => match v.checked_abs() {\n                    Some(abs_val) => Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(\n                        Some(abs_val),\n                        *precision,\n                        *scale,\n                    ))),\n                    None => {\n                        if !fail_on_error {\n                            // return the original value\n                            Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(\n                                Some(*v),\n                                *precision,\n                                *scale,\n                            )))\n                        } else {\n                            Err(arithmetic_overflow_error(\"Decimal128\").into())\n                        }\n                    }\n                },\n            },\n            ScalarValue::Decimal256(a, precision, scale) => match a {\n                None => Ok(args[0].clone()),\n                Some(v) => match v.checked_abs() {\n                    Some(abs_val) => Ok(ColumnarValue::Scalar(ScalarValue::Decimal256(\n                        Some(abs_val),\n                        *precision,\n                        *scale,\n                    ))),\n                    None => {\n                        if !fail_on_error {\n                            // return the original value\n                            Ok(ColumnarValue::Scalar(ScalarValue::Decimal256(\n                                Some(*v),\n                                *precision,\n                         
       *scale,\n                            )))\n                        } else {\n                            Err(arithmetic_overflow_error(\"Decimal256\").into())\n                        }\n                    }\n                },\n            },\n            dt => exec_err!(\"Not supported datatype for ABS: {dt}\"),\n        },\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use datafusion::common::cast::{\n        as_decimal128_array, as_decimal256_array, as_float32_array, as_float64_array,\n        as_int16_array, as_int32_array, as_int64_array, as_int8_array, as_uint64_array,\n    };\n\n    fn with_fail_on_error<F: Fn(bool) -> Result<()>>(test_fn: F) {\n        for fail_on_error in [true, false] {\n            test_fn(fail_on_error).expect(\"test should pass on error successfully\");\n        }\n    }\n\n    // Unsigned types, return as is\n    #[test]\n    fn test_abs_u8_scalar() {\n        with_fail_on_error(|fail_on_error| {\n            let args = ColumnarValue::Scalar(ScalarValue::UInt8(Some(u8::MAX)));\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::UInt8(Some(result)))) => {\n                    assert_eq!(result, u8::MAX);\n                    Ok(())\n                }\n                Err(e) => {\n                    unreachable!(\"Didn't expect error, but got: {e:?}\")\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_i8_scalar() {\n        with_fail_on_error(|fail_on_error| {\n            let args = ColumnarValue::Scalar(ScalarValue::Int8(Some(i8::MIN)));\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Int8(Some(result)))) => {\n                    assert_eq!(result, i8::MIN);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        unreachable!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_i16_scalar() {\n        with_fail_on_error(|fail_on_error| {\n            let args = ColumnarValue::Scalar(ScalarValue::Int16(Some(i16::MIN)));\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Int16(Some(result)))) => {\n                    assert_eq!(result, i16::MIN);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. 
Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        unreachable!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_i32_scalar() {\n        with_fail_on_error(|fail_on_error| {\n            let args = ColumnarValue::Scalar(ScalarValue::Int32(Some(i32::MIN)));\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Int32(Some(result)))) => {\n                    assert_eq!(result, i32::MIN);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_i64_scalar() {\n        with_fail_on_error(|fail_on_error| {\n            let args = ColumnarValue::Scalar(ScalarValue::Int64(Some(i64::MIN)));\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Int64(Some(result)))) => {\n                    assert_eq!(result, i64::MIN);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_decimal128_scalar() {\n        with_fail_on_error(|fail_on_error| {\n            let args = ColumnarValue::Scalar(ScalarValue::Decimal128(Some(i128::MIN), 18, 10));\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(\n                    Some(result),\n                    precision,\n                    scale,\n                ))) => {\n                    assert_eq!(result, i128::MIN);\n                    assert_eq!(precision, 18);\n                    assert_eq!(scale, 10);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. 
Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_decimal256_scalar() {\n        with_fail_on_error(|fail_on_error| {\n            let args = ColumnarValue::Scalar(ScalarValue::Decimal256(Some(i256::MIN), 10, 2));\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Decimal256(\n                    Some(result),\n                    precision,\n                    scale,\n                ))) => {\n                    assert_eq!(result, i256::MIN);\n                    assert_eq!(precision, 10);\n                    assert_eq!(scale, 2);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_i8_array() {\n        with_fail_on_error(|fail_on_error| {\n            let input = Int8Array::from(vec![Some(-1), Some(i8::MIN), Some(i8::MAX), None]);\n            let args = ColumnarValue::Array(Arc::new(input));\n            let expected = Int8Array::from(vec![Some(1), Some(i8::MIN), Some(i8::MAX), None]);\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Array(result)) => {\n                    let actual = as_int8_array(&result)?;\n                    assert_eq!(actual, &expected);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. 
Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_i16_array() {\n        with_fail_on_error(|fail_on_error| {\n            let input = Int16Array::from(vec![Some(-1), Some(i16::MIN), Some(i16::MAX), None]);\n            let args = ColumnarValue::Array(Arc::new(input));\n            let expected = Int16Array::from(vec![Some(1), Some(i16::MIN), Some(i16::MAX), None]);\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Array(result)) => {\n                    let actual = as_int16_array(&result)?;\n                    assert_eq!(actual, &expected);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_i32_array() {\n        with_fail_on_error(|fail_on_error| {\n            let input = Int32Array::from(vec![Some(-1), Some(i32::MIN), Some(i32::MAX), None]);\n            let args = ColumnarValue::Array(Arc::new(input));\n            let expected = Int32Array::from(vec![Some(1), Some(i32::MIN), Some(i32::MAX), None]);\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Array(result)) => {\n                    let actual = as_int32_array(&result)?;\n                    assert_eq!(actual, &expected);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. 
Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_i64_array() {\n        with_fail_on_error(|fail_on_error| {\n            let input = Int64Array::from(vec![Some(-1), Some(i64::MIN), Some(i64::MAX), None]);\n            let args = ColumnarValue::Array(Arc::new(input));\n            let expected = Int64Array::from(vec![Some(1), Some(i64::MIN), Some(i64::MAX), None]);\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Array(result)) => {\n                    let actual = as_int64_array(&result)?;\n                    assert_eq!(actual, &expected);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_f32_array() {\n        with_fail_on_error(|fail_on_error| {\n            let input = Float32Array::from(vec![\n                Some(-1f32),\n                Some(f32::MIN),\n                Some(f32::MAX),\n                None,\n                Some(f32::NAN),\n                Some(f32::NEG_INFINITY),\n                Some(f32::INFINITY),\n                Some(-0.0),\n                Some(0.0),\n            ]);\n            let args = ColumnarValue::Array(Arc::new(input));\n            let expected = Float32Array::from(vec![\n                Some(1f32),\n                Some(f32::MAX),\n                Some(f32::MAX),\n                None,\n                Some(f32::NAN),\n                Some(f32::INFINITY),\n                Some(f32::INFINITY),\n                Some(0.0),\n                Some(0.0),\n            ]);\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Array(result)) => {\n                    let actual = as_float32_array(&result)?;\n                    assert_eq!(actual, &expected);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. 
Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_f64_array() {\n        with_fail_on_error(|fail_on_error| {\n            let input = Float64Array::from(vec![Some(-1f64), Some(f64::MIN), Some(f64::MAX), None]);\n            let args = ColumnarValue::Array(Arc::new(input));\n            let expected =\n                Float64Array::from(vec![Some(1f64), Some(f64::MAX), Some(f64::MAX), None]);\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Array(result)) => {\n                    let actual = as_float64_array(&result)?;\n                    assert_eq!(actual, &expected);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_decimal128_array() {\n        with_fail_on_error(|fail_on_error| {\n            let input = Decimal128Array::from(vec![Some(i128::MIN), None])\n                .with_precision_and_scale(38, 37)?;\n            let args = ColumnarValue::Array(Arc::new(input));\n            let expected = Decimal128Array::from(vec![Some(i128::MIN), None])\n                .with_precision_and_scale(38, 37)?;\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Array(result)) => {\n                    let actual = as_decimal128_array(&result)?;\n                    assert_eq!(actual, &expected);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. 
Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_decimal256_array() {\n        with_fail_on_error(|fail_on_error| {\n            let input = Decimal256Array::from(vec![Some(i256::MIN), None])\n                .with_precision_and_scale(5, 2)?;\n            let args = ColumnarValue::Array(Arc::new(input));\n            let expected = Decimal256Array::from(vec![Some(i256::MIN), None])\n                .with_precision_and_scale(5, 2)?;\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Array(result)) => {\n                    let actual = as_decimal256_array(&result)?;\n                    assert_eq!(actual, &expected);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_u64_array() {\n        with_fail_on_error(|fail_on_error| {\n            let input = UInt64Array::from(vec![Some(u64::MIN), Some(u64::MAX), None]);\n            let args = ColumnarValue::Array(Arc::new(input));\n            let expected = UInt64Array::from(vec![Some(u64::MIN), Some(u64::MAX), None]);\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n\n            match abs(&[args, fail_on_error_arg]) {\n                Ok(ColumnarValue::Array(result)) => {\n                    let actual = as_uint64_array(&result)?;\n                    assert_eq!(actual, &expected);\n                    Ok(())\n                }\n                Err(e) => {\n                    if fail_on_error {\n                        assert!(\n                            e.to_string().contains(\"ARITHMETIC_OVERFLOW\"),\n                            \"Error message did not match. 
Actual message: {e}\"\n                        );\n                        Ok(())\n                    } else {\n                        panic!(\"Didn't expect error, but got: {e:?}\")\n                    }\n                }\n                _ => unreachable!(),\n            }\n        });\n    }\n\n    #[test]\n    fn test_abs_null_scalars() {\n        // Test that NULL scalars return NULL (no panic) for all signed types\n        with_fail_on_error(|fail_on_error| {\n            let fail_on_error_arg =\n                ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error)));\n\n            // Test Int8\n            let args = ColumnarValue::Scalar(ScalarValue::Int8(None));\n            match abs(&[args.clone(), fail_on_error_arg.clone()]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Int8(None))) => {}\n                _ => panic!(\"Expected NULL Int8, got different result\"),\n            }\n\n            // Test Int16\n            let args = ColumnarValue::Scalar(ScalarValue::Int16(None));\n            match abs(&[args.clone(), fail_on_error_arg.clone()]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Int16(None))) => {}\n                _ => panic!(\"Expected NULL Int16, got different result\"),\n            }\n\n            // Test Int32\n            let args = ColumnarValue::Scalar(ScalarValue::Int32(None));\n            match abs(&[args.clone(), fail_on_error_arg.clone()]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Int32(None))) => {}\n                _ => panic!(\"Expected NULL Int32, got different result\"),\n            }\n\n            // Test Int64\n            let args = ColumnarValue::Scalar(ScalarValue::Int64(None));\n            match abs(&[args.clone(), fail_on_error_arg.clone()]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Int64(None))) => {}\n                _ => panic!(\"Expected NULL Int64, got different result\"),\n            }\n\n            // Test Decimal128\n            let args = ColumnarValue::Scalar(ScalarValue::Decimal128(None, 10, 2));\n            match abs(&[args.clone(), fail_on_error_arg.clone()]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(None, 10, 2))) => {}\n                _ => panic!(\"Expected NULL Decimal128, got different result\"),\n            }\n\n            // Test Decimal256\n            let args = ColumnarValue::Scalar(ScalarValue::Decimal256(None, 10, 2));\n            match abs(&[args.clone(), fail_on_error_arg.clone()]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Decimal256(None, 10, 2))) => {}\n                _ => panic!(\"Expected NULL Decimal256, got different result\"),\n            }\n\n            // Test Float32\n            let args = ColumnarValue::Scalar(ScalarValue::Float32(None));\n            match abs(&[args.clone(), fail_on_error_arg.clone()]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Float32(None))) => {}\n                _ => panic!(\"Expected NULL Float32, got different result\"),\n            }\n\n            // Test Float64\n            let args = ColumnarValue::Scalar(ScalarValue::Float64(None));\n            match abs(&[args.clone(), fail_on_error_arg.clone()]) {\n                Ok(ColumnarValue::Scalar(ScalarValue::Float64(None))) => {}\n                _ => panic!(\"Expected NULL Float64, got different result\"),\n            }\n\n            Ok(())\n        });\n    }\n}\n"
  },
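  {
    "path": "native/spark-expr/src/math_funcs/abs_usage_example.rs",
    "content": "// NOTE: Hypothetical, illustration-only file; it is not part of the Comet source tree.\n// A minimal sketch of driving the `abs` kernel defined in abs.rs, the same way its\n// unit tests do: args[0] is the input and the optional args[1] is Spark's ANSI\n// `failOnError` flag. The module path in the import is an assumption.\n\nuse crate::math_funcs::abs::abs; // assumed module path\nuse datafusion::common::ScalarValue;\nuse datafusion::logical_expr::ColumnarValue;\n\nfn abs_legacy_vs_ansi() {\n    let min = ColumnarValue::Scalar(ScalarValue::Int32(Some(i32::MIN)));\n\n    // Legacy mode (failOnError = false): abs(i32::MIN) overflows and the input\n    // value is returned unchanged, matching Spark's legacy semantics.\n    let legacy = abs(&[\n        min.clone(),\n        ColumnarValue::Scalar(ScalarValue::Boolean(Some(false))),\n    ]);\n    assert!(matches!(\n        legacy,\n        Ok(ColumnarValue::Scalar(ScalarValue::Int32(Some(i32::MIN))))\n    ));\n\n    // ANSI mode (failOnError = true): the same input raises ARITHMETIC_OVERFLOW.\n    let ansi = abs(&[min, ColumnarValue::Scalar(ScalarValue::Boolean(Some(true)))]);\n    assert!(ansi.is_err());\n}\n"
  },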
  {
    "path": "native/spark-expr/src/math_funcs/ceil.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::downcast_compute_op;\nuse crate::math_funcs::utils::{get_precision_scale, make_decimal_array, make_decimal_scalar};\nuse arrow::array::{Array, ArrowNativeTypeOp};\nuse arrow::array::{Float32Array, Float64Array, Int64Array};\nuse arrow::datatypes::DataType;\nuse datafusion::common::{DataFusionError, ScalarValue};\nuse datafusion::physical_plan::ColumnarValue;\nuse num::integer::div_ceil;\nuse std::sync::Arc;\n\n/// `ceil` function that simulates Spark `ceil` expression\npub fn spark_ceil(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n) -> Result<ColumnarValue, DataFusionError> {\n    let value = &args[0];\n    match value {\n        ColumnarValue::Array(array) => match array.data_type() {\n            DataType::Float32 => {\n                let result = downcast_compute_op!(array, \"ceil\", ceil, Float32Array, Int64Array);\n                Ok(ColumnarValue::Array(result?))\n            }\n            DataType::Float64 => {\n                let result = downcast_compute_op!(array, \"ceil\", ceil, Float64Array, Int64Array);\n                Ok(ColumnarValue::Array(result?))\n            }\n            DataType::Int64 => {\n                let result = array.as_any().downcast_ref::<Int64Array>().unwrap();\n                Ok(ColumnarValue::Array(Arc::new(result.clone())))\n            }\n            DataType::Decimal128(_, scale) if *scale > 0 => {\n                let f = decimal_ceil_f(scale);\n                let (precision, scale) = get_precision_scale(data_type);\n                make_decimal_array(array, precision, scale, &f)\n            }\n            other => Err(DataFusionError::Internal(format!(\n                \"Unsupported data type {other:?} for function ceil\",\n            ))),\n        },\n        ColumnarValue::Scalar(a) => match a {\n            ScalarValue::Float32(a) => Ok(ColumnarValue::Scalar(ScalarValue::Int64(\n                a.map(|x| x.ceil() as i64),\n            ))),\n            ScalarValue::Float64(a) => Ok(ColumnarValue::Scalar(ScalarValue::Int64(\n                a.map(|x| x.ceil() as i64),\n            ))),\n            ScalarValue::Int64(a) => Ok(ColumnarValue::Scalar(ScalarValue::Int64(a.map(|x| x)))),\n            ScalarValue::Decimal128(a, _, scale) if *scale > 0 => {\n                let f = decimal_ceil_f(scale);\n                let (precision, scale) = get_precision_scale(data_type);\n                make_decimal_scalar(a, precision, scale, &f)\n            }\n            _ => Err(DataFusionError::Internal(format!(\n                \"Unsupported data type {:?} for function ceil\",\n                value.data_type(),\n            ))),\n        },\n    }\n}\n\n#[inline]\nfn decimal_ceil_f(scale: &i8) -> impl 
Fn(i128) -> i128 {\n    let div = 10_i128.pow_wrapping(*scale as u32);\n    move |x: i128| div_ceil(x, div)\n}\n\n#[cfg(test)]\nmod test {\n    use crate::spark_ceil;\n    use arrow::array::{Decimal128Array, Float32Array, Float64Array, Int64Array};\n    use arrow::datatypes::DataType;\n    use datafusion::common::cast::as_int64_array;\n    use datafusion::common::{Result, ScalarValue};\n    use datafusion::physical_plan::ColumnarValue;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_ceil_f32_array() -> Result<()> {\n        let input = Float32Array::from(vec![\n            Some(125.2345),\n            Some(15.0001),\n            Some(0.1),\n            Some(-0.9),\n            Some(-1.1),\n            Some(123.0),\n            None,\n        ]);\n        let args = vec![ColumnarValue::Array(Arc::new(input))];\n        let ColumnarValue::Array(result) = spark_ceil(&args, &DataType::Float32)? else {\n            unreachable!()\n        };\n        let actual = as_int64_array(&result)?;\n        let expected = Int64Array::from(vec![\n            Some(126),\n            Some(16),\n            Some(1),\n            Some(0),\n            Some(-1),\n            Some(123),\n            None,\n        ]);\n        assert_eq!(actual, &expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_ceil_f64_array() -> Result<()> {\n        let input = Float64Array::from(vec![\n            Some(125.2345),\n            Some(15.0001),\n            Some(0.1),\n            Some(-0.9),\n            Some(-1.1),\n            Some(123.0),\n            None,\n        ]);\n        let args = vec![ColumnarValue::Array(Arc::new(input))];\n        let ColumnarValue::Array(result) = spark_ceil(&args, &DataType::Float64)? else {\n            unreachable!()\n        };\n        let actual = as_int64_array(&result)?;\n        let expected = Int64Array::from(vec![\n            Some(126),\n            Some(16),\n            Some(1),\n            Some(0),\n            Some(-1),\n            Some(123),\n            None,\n        ]);\n        assert_eq!(actual, &expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_ceil_i64_array() -> Result<()> {\n        let input = Int64Array::from(vec![Some(-1), Some(0), Some(1), None]);\n        let args = vec![ColumnarValue::Array(Arc::new(input))];\n        let ColumnarValue::Array(result) = spark_ceil(&args, &DataType::Int64)? else {\n            unreachable!()\n        };\n        let actual = as_int64_array(&result)?;\n        let expected = Int64Array::from(vec![Some(-1), Some(0), Some(1), None]);\n        assert_eq!(actual, &expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_ceil_decimal128_array() -> Result<()> {\n        let array = Decimal128Array::from(vec![\n            Some(12345),  // 123.45\n            Some(12500),  // 125.00\n            Some(-12999), // -129.99\n            None,\n        ])\n        .with_precision_and_scale(5, 2)?;\n        let args = vec![ColumnarValue::Array(Arc::new(array))];\n        let ColumnarValue::Array(result) = spark_ceil(&args, &DataType::Decimal128(4, 0))? 
else {\n            unreachable!()\n        };\n        let expected = Decimal128Array::from(vec![\n            Some(124),  // 124.00\n            Some(125),  // 125.00\n            Some(-129), // -129.00\n            None,\n        ])\n        .with_precision_and_scale(4, 0)?;\n        let actual = result.as_any().downcast_ref::<Decimal128Array>().unwrap();\n        assert_eq!(actual, &expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_ceil_f32_scalar() -> Result<()> {\n        let args = vec![ColumnarValue::Scalar(ScalarValue::Float32(Some(125.2345)))];\n        let ColumnarValue::Scalar(ScalarValue::Int64(Some(result))) =\n            spark_ceil(&args, &DataType::Float32)?\n        else {\n            unreachable!()\n        };\n        assert_eq!(result, 126);\n        Ok(())\n    }\n\n    #[test]\n    fn test_ceil_f64_scalar() -> Result<()> {\n        let args = vec![ColumnarValue::Scalar(ScalarValue::Float64(Some(-1.1)))];\n        let ColumnarValue::Scalar(ScalarValue::Int64(Some(result))) =\n            spark_ceil(&args, &DataType::Float64)?\n        else {\n            unreachable!()\n        };\n        assert_eq!(result, -1);\n        Ok(())\n    }\n\n    #[test]\n    fn test_ceil_i64_scalar() -> Result<()> {\n        let args = vec![ColumnarValue::Scalar(ScalarValue::Int64(Some(48)))];\n        let ColumnarValue::Scalar(ScalarValue::Int64(Some(result))) =\n            spark_ceil(&args, &DataType::Int64)?\n        else {\n            unreachable!()\n        };\n        assert_eq!(result, 48);\n        Ok(())\n    }\n\n    #[test]\n    fn test_ceil_decimal128_scalar() -> Result<()> {\n        let args = vec![ColumnarValue::Scalar(ScalarValue::Decimal128(\n            Some(567),\n            3,\n            1,\n        ))]; // 56.7\n        let ColumnarValue::Scalar(ScalarValue::Decimal128(Some(result), 3, 0)) =\n            spark_ceil(&args, &DataType::Decimal128(3, 0))?\n        else {\n            unreachable!()\n        };\n        assert_eq!(result, 57); // 57.0\n        Ok(())\n    }\n}\n"
  },
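  {
    "path": "native/spark-expr/src/math_funcs/ceil_usage_example.rs",
    "content": "// NOTE: Hypothetical, illustration-only file; it is not part of the Comet source tree.\n// A minimal sketch of calling `spark_ceil` (re-exported from lib.rs). Spark's ceil on\n// floating-point input returns LongType, so the kernel yields Int64; for decimal input\n// the `data_type` argument carries the result precision and scale. The module path in\n// the import is an assumption.\n\nuse arrow::datatypes::DataType;\nuse datafusion::common::ScalarValue;\nuse datafusion::physical_plan::ColumnarValue;\n\nuse crate::spark_ceil; // assumed module path\n\nfn ceil_demo() {\n    let args = vec![ColumnarValue::Scalar(ScalarValue::Float64(Some(-0.5)))];\n    match spark_ceil(&args, &DataType::Float64) {\n        // ceil(-0.5) == 0, returned as a LongType (Int64) scalar\n        Ok(ColumnarValue::Scalar(ScalarValue::Int64(Some(v)))) => assert_eq!(v, 0),\n        other => panic!(\"unexpected result: {other:?}\"),\n    }\n}\n"
  },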
  {
    "path": "native/spark-expr/src/math_funcs/checked_arithmetic.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, ArrowNativeTypeOp, PrimitiveArray, PrimitiveBuilder};\nuse arrow::array::{ArrayRef, AsArray};\n\nuse crate::{divide_by_zero_error, EvalMode, SparkError};\nuse arrow::datatypes::{\n    ArrowPrimitiveType, DataType, Float16Type, Float32Type, Float64Type, Int16Type, Int32Type,\n    Int64Type, Int8Type,\n};\nuse datafusion::common::DataFusionError;\nuse datafusion::physical_plan::ColumnarValue;\nuse std::sync::Arc;\n\npub fn try_arithmetic_kernel<T>(\n    left: &PrimitiveArray<T>,\n    right: &PrimitiveArray<T>,\n    op: &str,\n    is_ansi_mode: bool,\n) -> Result<ArrayRef, DataFusionError>\nwhere\n    T: ArrowPrimitiveType,\n{\n    let len = left.len();\n    let mut builder = PrimitiveBuilder::<T>::with_capacity(len);\n    match op {\n        \"checked_add\" => {\n            for i in 0..len {\n                if left.is_null(i) || right.is_null(i) {\n                    builder.append_null();\n                } else {\n                    match left.value(i).add_checked(right.value(i)) {\n                        Ok(v) => builder.append_value(v),\n                        Err(_e) => {\n                            if is_ansi_mode {\n                                return Err(SparkError::ArithmeticOverflow {\n                                    from_type: String::from(\"integer\"),\n                                }\n                                .into());\n                            } else {\n                                builder.append_null();\n                            }\n                        }\n                    }\n                }\n            }\n        }\n        \"checked_sub\" => {\n            for i in 0..len {\n                if left.is_null(i) || right.is_null(i) {\n                    builder.append_null();\n                } else {\n                    match left.value(i).sub_checked(right.value(i)) {\n                        Ok(v) => builder.append_value(v),\n                        Err(_e) => {\n                            if is_ansi_mode {\n                                return Err(SparkError::ArithmeticOverflow {\n                                    from_type: String::from(\"integer\"),\n                                }\n                                .into());\n                            } else {\n                                builder.append_null();\n                            }\n                        }\n                    }\n                }\n            }\n        }\n        \"checked_mul\" => {\n            for i in 0..len {\n                if left.is_null(i) || right.is_null(i) {\n                    builder.append_null();\n                } else {\n                    match 
left.value(i).mul_checked(right.value(i)) {\n                        Ok(v) => builder.append_value(v),\n                        Err(_e) => {\n                            if is_ansi_mode {\n                                return Err(SparkError::ArithmeticOverflow {\n                                    from_type: String::from(\"integer\"),\n                                }\n                                .into());\n                            } else {\n                                builder.append_null();\n                            }\n                        }\n                    }\n                }\n            }\n        }\n        \"checked_div\" => {\n            for i in 0..len {\n                if left.is_null(i) || right.is_null(i) {\n                    builder.append_null();\n                } else {\n                    match left.value(i).div_checked(right.value(i)) {\n                        Ok(v) => builder.append_value(v),\n                        Err(_e) => {\n                            if is_ansi_mode {\n                                return if right.value(i).is_zero() {\n                                    Err(divide_by_zero_error().into())\n                                } else {\n                                    Err(SparkError::ArithmeticOverflow {\n                                        from_type: String::from(\"integer\"),\n                                    }\n                                    .into())\n                                };\n                            } else {\n                                builder.append_null();\n                            }\n                        }\n                    }\n                }\n            }\n        }\n        _ => {\n            return Err(DataFusionError::Internal(format!(\n                \"Unsupported operation: {:?}\",\n                op\n            )))\n        }\n    }\n\n    Ok(Arc::new(builder.finish()) as ArrayRef)\n}\n\npub fn checked_add(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n    eval_mode: EvalMode,\n) -> Result<ColumnarValue, DataFusionError> {\n    checked_arithmetic_internal(args, data_type, \"checked_add\", eval_mode)\n}\n\npub fn checked_sub(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n    eval_mode: EvalMode,\n) -> Result<ColumnarValue, DataFusionError> {\n    checked_arithmetic_internal(args, data_type, \"checked_sub\", eval_mode)\n}\n\npub fn checked_mul(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n    eval_mode: EvalMode,\n) -> Result<ColumnarValue, DataFusionError> {\n    checked_arithmetic_internal(args, data_type, \"checked_mul\", eval_mode)\n}\n\npub fn checked_div(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n    eval_mode: EvalMode,\n) -> Result<ColumnarValue, DataFusionError> {\n    checked_arithmetic_internal(args, data_type, \"checked_div\", eval_mode)\n}\n\nfn checked_arithmetic_internal(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n    op: &str,\n    eval_mode: EvalMode,\n) -> Result<ColumnarValue, DataFusionError> {\n    let left = &args[0];\n    let right = &args[1];\n\n    let is_ansi_mode = match eval_mode {\n        EvalMode::Try => false,\n        EvalMode::Ansi => true,\n        _ => {\n            return Err(DataFusionError::Internal(format!(\n                \"Unsupported mode: {:?}\",\n                eval_mode\n            )))\n        }\n    };\n\n    let (left_arr, right_arr): (ArrayRef, ArrayRef) = match (left, right) {\n        (ColumnarValue::Array(l), 
ColumnarValue::Array(r)) => (Arc::clone(l), Arc::clone(r)),\n        (ColumnarValue::Scalar(l), ColumnarValue::Array(r)) => {\n            (l.to_array_of_size(r.len())?, Arc::clone(r))\n        }\n        (ColumnarValue::Array(l), ColumnarValue::Scalar(r)) => {\n            (Arc::clone(l), r.to_array_of_size(l.len())?)\n        }\n        (ColumnarValue::Scalar(l), ColumnarValue::Scalar(r)) => (l.to_array()?, r.to_array()?),\n    };\n\n    // try_arithmetic_kernel is only defined for primitive numeric types\n    let result_array = match data_type {\n        DataType::Int8 => try_arithmetic_kernel::<Int8Type>(\n            left_arr.as_primitive::<Int8Type>(),\n            right_arr.as_primitive::<Int8Type>(),\n            op,\n            is_ansi_mode,\n        ),\n        DataType::Int16 => try_arithmetic_kernel::<Int16Type>(\n            left_arr.as_primitive::<Int16Type>(),\n            right_arr.as_primitive::<Int16Type>(),\n            op,\n            is_ansi_mode,\n        ),\n        DataType::Int32 => try_arithmetic_kernel::<Int32Type>(\n            left_arr.as_primitive::<Int32Type>(),\n            right_arr.as_primitive::<Int32Type>(),\n            op,\n            is_ansi_mode,\n        ),\n        DataType::Int64 => try_arithmetic_kernel::<Int64Type>(\n            left_arr.as_primitive::<Int64Type>(),\n            right_arr.as_primitive::<Int64Type>(),\n            op,\n            is_ansi_mode,\n        ),\n        // Spark casts the operands of the division operator to double, so\n        // floating-point inputs only occur for checked_div\n        DataType::Float16 if (op == \"checked_div\") => try_arithmetic_kernel::<Float16Type>(\n            left_arr.as_primitive::<Float16Type>(),\n            right_arr.as_primitive::<Float16Type>(),\n            op,\n            is_ansi_mode,\n        ),\n        DataType::Float32 if (op == \"checked_div\") => try_arithmetic_kernel::<Float32Type>(\n            left_arr.as_primitive::<Float32Type>(),\n            right_arr.as_primitive::<Float32Type>(),\n            op,\n            is_ansi_mode,\n        ),\n        DataType::Float64 if (op == \"checked_div\") => try_arithmetic_kernel::<Float64Type>(\n            left_arr.as_primitive::<Float64Type>(),\n            right_arr.as_primitive::<Float64Type>(),\n            op,\n            is_ansi_mode,\n        ),\n        _ => Err(DataFusionError::Internal(format!(\n            \"Unsupported data type: {:?}\",\n            data_type\n        ))),\n    };\n\n    Ok(ColumnarValue::Array(result_array?))\n}\n"
  },
  {
    "path": "native/spark-expr/src/math_funcs/div.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::math_funcs::utils::get_precision_scale;\nuse crate::{divide_by_zero_error, EvalMode};\nuse arrow::array::{Array, Decimal128Array};\nuse arrow::datatypes::{DataType, DECIMAL128_MAX_PRECISION};\nuse arrow::error::ArrowError;\nuse arrow::{\n    array::{ArrayRef, AsArray},\n    datatypes::Decimal128Type,\n};\nuse datafusion::common::DataFusionError;\nuse datafusion::physical_plan::ColumnarValue;\nuse num::{BigInt, Signed, ToPrimitive, Zero};\nuse std::sync::Arc;\n\npub fn spark_decimal_div(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n    eval_mode: EvalMode,\n) -> Result<ColumnarValue, DataFusionError> {\n    spark_decimal_div_internal(args, data_type, false, eval_mode)\n}\n\npub fn spark_decimal_integral_div(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n    eval_mode: EvalMode,\n) -> Result<ColumnarValue, DataFusionError> {\n    spark_decimal_div_internal(args, data_type, true, eval_mode)\n}\n\n// Let Decimal(p3, s3) as return type i.e. Decimal(p1, s1) / Decimal(p2, s2) = Decimal(p3, s3).\n// Conversely, Decimal(p1, s1) = Decimal(p2, s2) * Decimal(p3, s3). This means that, in order to\n// get enough scale that matches with Spark behavior, it requires to widen s1 to s2 + s3 + 1. Since\n// both s2 and s3 are 38 at max., s1 is 77 at max. DataFusion division cannot handle such scale >\n// Decimal256Type::MAX_SCALE. 
Therefore, we need to implement this decimal division using BigInt.\nfn spark_decimal_div_internal(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n    is_integral_div: bool,\n    eval_mode: EvalMode,\n) -> Result<ColumnarValue, DataFusionError> {\n    let left = &args[0];\n    let right = &args[1];\n    let (p3, s3) = get_precision_scale(data_type);\n\n    let (left, right): (ArrayRef, ArrayRef) = match (left, right) {\n        (ColumnarValue::Array(l), ColumnarValue::Array(r)) => (Arc::clone(l), Arc::clone(r)),\n        (ColumnarValue::Scalar(l), ColumnarValue::Array(r)) => {\n            (l.to_array_of_size(r.len())?, Arc::clone(r))\n        }\n        (ColumnarValue::Array(l), ColumnarValue::Scalar(r)) => {\n            (Arc::clone(l), r.to_array_of_size(l.len())?)\n        }\n        (ColumnarValue::Scalar(l), ColumnarValue::Scalar(r)) => (l.to_array()?, r.to_array()?),\n    };\n    let left = left.as_primitive::<Decimal128Type>();\n    let right = right.as_primitive::<Decimal128Type>();\n    let (p1, s1) = get_precision_scale(left.data_type());\n    let (p2, s2) = get_precision_scale(right.data_type());\n\n    let l_exp = ((s2 + s3 + 1) as u32).saturating_sub(s1 as u32);\n    let r_exp = (s1 as u32).saturating_sub((s2 + s3 + 1) as u32);\n    let result: Decimal128Array = if p1 as u32 + l_exp > DECIMAL128_MAX_PRECISION as u32\n        || p2 as u32 + r_exp > DECIMAL128_MAX_PRECISION as u32\n    {\n        let ten = BigInt::from(10);\n        let l_mul = ten.pow(l_exp);\n        let r_mul = ten.pow(r_exp);\n        let five = BigInt::from(5);\n        let zero = BigInt::from(0);\n        arrow::compute::kernels::arity::try_binary(left, right, |l, r| {\n            let l = BigInt::from(l) * &l_mul;\n            let r = BigInt::from(r) * &r_mul;\n            // Previously this check included `&& is_integral_div`, so regular decimal `/`\n            // silently returned 0 for a zero divisor in ANSI mode instead of throwing.\n            // Spark throws DIVIDE_BY_ZERO for both `/` and `div` when ANSI is enabled, so\n            // the `is_integral_div` guard was wrong and has been removed.\n            if eval_mode == EvalMode::Ansi && r.is_zero() {\n                return Err(ArrowError::ComputeError(divide_by_zero_error().to_string()));\n            }\n            // Non-ANSI: zero divisors have already been replaced with null by the\n            // `nullIfWhenPrimitive` wrapper applied in the Scala serde layer, so\n            // `try_binary` will never invoke this closure for a zero `r` in legacy/try mode.\n            // The fallback `zero.clone()` is therefore unreachable in practice.\n            let div = if r.eq(&zero) { zero.clone() } else { &l / &r };\n            let res = if is_integral_div {\n                div\n            } else if div.is_negative() {\n                div - &five\n            } else {\n                div + &five\n            } / &ten;\n            Ok(res.to_i128().unwrap_or(i128::MAX))\n        })?\n    } else {\n        let l_mul = 10_i128.pow(l_exp);\n        let r_mul = 10_i128.pow(r_exp);\n        arrow::compute::kernels::arity::try_binary(left, right, |l, r| {\n            let l = l * l_mul;\n            let r = r * r_mul;\n            // Previously this check included `&& is_integral_div`, so regular decimal `/`\n            // silently returned 0 for a zero divisor in ANSI mode instead of throwing.\n            // Spark throws DIVIDE_BY_ZERO for both `/` and `div` when ANSI is enabled, so\n            // the `is_integral_div` guard was 
wrong and has been removed.\n            if eval_mode == EvalMode::Ansi && r == 0 {\n                return Err(ArrowError::ComputeError(divide_by_zero_error().to_string()));\n            }\n            // Non-ANSI: zero divisors have already been replaced with null by the\n            // `nullIfWhenPrimitive` wrapper applied in the Scala serde layer, so\n            // `try_binary` will never invoke this closure for a zero `r` in legacy/try mode.\n            // The fallback `0` is therefore unreachable in practice.\n            let div = if r == 0 { 0 } else { l / r };\n            let res = if is_integral_div {\n                div\n            } else if div.is_negative() {\n                div - 5\n            } else {\n                div + 5\n            } / 10;\n            Ok(res.to_i128().unwrap_or(i128::MAX))\n        })?\n    };\n    let result = result.with_data_type(DataType::Decimal128(p3, s3));\n    Ok(ColumnarValue::Array(Arc::new(result)))\n}\n"
  },
  {
    "path": "native/spark-expr/src/math_funcs/floor.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::downcast_compute_op;\nuse crate::math_funcs::utils::{get_precision_scale, make_decimal_array, make_decimal_scalar};\nuse arrow::array::{Array, ArrowNativeTypeOp};\nuse arrow::array::{Float32Array, Float64Array, Int64Array};\nuse arrow::datatypes::DataType;\nuse datafusion::common::{DataFusionError, ScalarValue};\nuse datafusion::physical_plan::ColumnarValue;\nuse num::integer::div_floor;\nuse std::sync::Arc;\n\n/// `floor` function that simulates Spark `floor` expression\npub fn spark_floor(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n) -> Result<ColumnarValue, DataFusionError> {\n    let value = &args[0];\n    match value {\n        ColumnarValue::Array(array) => match array.data_type() {\n            DataType::Float32 => {\n                let result = downcast_compute_op!(array, \"floor\", floor, Float32Array, Int64Array);\n                Ok(ColumnarValue::Array(result?))\n            }\n            DataType::Float64 => {\n                let result = downcast_compute_op!(array, \"floor\", floor, Float64Array, Int64Array);\n                Ok(ColumnarValue::Array(result?))\n            }\n            DataType::Int64 => {\n                let result = array.as_any().downcast_ref::<Int64Array>().unwrap();\n                Ok(ColumnarValue::Array(Arc::new(result.clone())))\n            }\n            DataType::Decimal128(_, scale) if *scale > 0 => {\n                let f = decimal_floor_f(scale);\n                let (precision, scale) = get_precision_scale(data_type);\n                make_decimal_array(array, precision, scale, &f)\n            }\n            other => Err(DataFusionError::Internal(format!(\n                \"Unsupported data type {other:?} for function floor\",\n            ))),\n        },\n        ColumnarValue::Scalar(a) => match a {\n            ScalarValue::Float32(a) => Ok(ColumnarValue::Scalar(ScalarValue::Int64(\n                a.map(|x| x.floor() as i64),\n            ))),\n            ScalarValue::Float64(a) => Ok(ColumnarValue::Scalar(ScalarValue::Int64(\n                a.map(|x| x.floor() as i64),\n            ))),\n            ScalarValue::Int64(a) => Ok(ColumnarValue::Scalar(ScalarValue::Int64(a.map(|x| x)))),\n            ScalarValue::Decimal128(a, _, scale) if *scale > 0 => {\n                let f = decimal_floor_f(scale);\n                let (precision, scale) = get_precision_scale(data_type);\n                make_decimal_scalar(a, precision, scale, &f)\n            }\n            _ => Err(DataFusionError::Internal(format!(\n                \"Unsupported data type {:?} for function floor\",\n                value.data_type(),\n            ))),\n        },\n    }\n}\n\n#[inline]\nfn 
decimal_floor_f(scale: &i8) -> impl Fn(i128) -> i128 {\n    let div = 10_i128.pow_wrapping(*scale as u32);\n    move |x: i128| div_floor(x, div)\n}\n\n#[cfg(test)]\nmod test {\n    use crate::spark_floor;\n    use arrow::array::{Decimal128Array, Float32Array, Float64Array, Int64Array};\n    use arrow::datatypes::DataType;\n    use datafusion::common::cast::as_int64_array;\n    use datafusion::common::{Result, ScalarValue};\n    use datafusion::physical_plan::ColumnarValue;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_floor_f32_array() -> Result<()> {\n        let input = Float32Array::from(vec![\n            Some(125.9345),\n            Some(15.9999),\n            Some(0.9),\n            Some(-0.1),\n            Some(-1.999),\n            Some(123.0),\n            None,\n        ]);\n        let args = vec![ColumnarValue::Array(Arc::new(input))];\n        let ColumnarValue::Array(result) = spark_floor(&args, &DataType::Float32)? else {\n            unreachable!()\n        };\n        let actual = as_int64_array(&result)?;\n        let expected = Int64Array::from(vec![\n            Some(125),\n            Some(15),\n            Some(0),\n            Some(-1),\n            Some(-2),\n            Some(123),\n            None,\n        ]);\n        assert_eq!(actual, &expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_floor_f64_array() -> Result<()> {\n        let input = Float64Array::from(vec![\n            Some(125.9345),\n            Some(15.9999),\n            Some(0.9),\n            Some(-0.1),\n            Some(-1.999),\n            Some(123.0),\n            None,\n        ]);\n        let args = vec![ColumnarValue::Array(Arc::new(input))];\n        let ColumnarValue::Array(result) = spark_floor(&args, &DataType::Float64)? else {\n            unreachable!()\n        };\n        let actual = as_int64_array(&result)?;\n        let expected = Int64Array::from(vec![\n            Some(125),\n            Some(15),\n            Some(0),\n            Some(-1),\n            Some(-2),\n            Some(123),\n            None,\n        ]);\n        assert_eq!(actual, &expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_floor_i64_array() -> Result<()> {\n        let input = Int64Array::from(vec![Some(-1), Some(0), Some(1), None]);\n        let args = vec![ColumnarValue::Array(Arc::new(input))];\n        let ColumnarValue::Array(result) = spark_floor(&args, &DataType::Int64)? else {\n            unreachable!()\n        };\n        let actual = as_int64_array(&result)?;\n        let expected = Int64Array::from(vec![Some(-1), Some(0), Some(1), None]);\n        assert_eq!(actual, &expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_floor_decimal128_array() -> Result<()> {\n        let array = Decimal128Array::from(vec![\n            Some(12345),  // 123.45\n            Some(12500),  // 125.00\n            Some(-12999), // -129.99\n            None,\n        ])\n        .with_precision_and_scale(5, 2)?;\n        let args = vec![ColumnarValue::Array(Arc::new(array))];\n        let ColumnarValue::Array(result) = spark_floor(&args, &DataType::Decimal128(4, 0))? 
else {\n            unreachable!()\n        };\n        let expected = Decimal128Array::from(vec![\n            Some(123),  // 123.00\n            Some(125),  // 125.00\n            Some(-130), // -130.00\n            None,\n        ])\n        .with_precision_and_scale(4, 0)?;\n        let actual = result.as_any().downcast_ref::<Decimal128Array>().unwrap();\n        assert_eq!(actual, &expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_floor_f32_scalar() -> Result<()> {\n        let args = vec![ColumnarValue::Scalar(ScalarValue::Float32(Some(125.9345)))];\n        let ColumnarValue::Scalar(ScalarValue::Int64(Some(result))) =\n            spark_floor(&args, &DataType::Float32)?\n        else {\n            unreachable!()\n        };\n        assert_eq!(result, 125);\n        Ok(())\n    }\n\n    #[test]\n    fn test_floor_f64_scalar() -> Result<()> {\n        let args = vec![ColumnarValue::Scalar(ScalarValue::Float64(Some(-1.999)))];\n        let ColumnarValue::Scalar(ScalarValue::Int64(Some(result))) =\n            spark_floor(&args, &DataType::Float64)?\n        else {\n            unreachable!()\n        };\n        assert_eq!(result, -2);\n        Ok(())\n    }\n\n    #[test]\n    fn test_floor_i64_scalar() -> Result<()> {\n        let args = vec![ColumnarValue::Scalar(ScalarValue::Int64(Some(48)))];\n        let ColumnarValue::Scalar(ScalarValue::Int64(Some(result))) =\n            spark_floor(&args, &DataType::Int64)?\n        else {\n            unreachable!()\n        };\n        assert_eq!(result, 48);\n        Ok(())\n    }\n\n    #[test]\n    fn test_floor_decimal128_scalar() -> Result<()> {\n        let args = vec![ColumnarValue::Scalar(ScalarValue::Decimal128(\n            Some(567),\n            3,\n            1,\n        ))]; // 56.7\n        let ColumnarValue::Scalar(ScalarValue::Decimal128(Some(result), 3, 0)) =\n            spark_floor(&args, &DataType::Decimal128(3, 0))?\n        else {\n            unreachable!()\n        };\n        assert_eq!(result, 56); // 56.0\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/math_funcs/internal/checkoverflow.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::datatypes::{DataType, Schema};\nuse arrow::{\n    array::{as_primitive_array, Array, ArrayRef, Decimal128Array},\n    datatypes::{Decimal128Type, DecimalType},\n    record_batch::RecordBatch,\n};\nuse datafusion::common::{DataFusionError, ScalarValue};\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::hash::Hash;\n\nuse crate::SparkError;\nuse std::{\n    any::Any,\n    fmt::{Display, Formatter},\n    sync::Arc,\n};\n\n/// This is from Spark `CheckOverflow` expression. Spark `CheckOverflow` expression rounds decimals\n/// to given scale and check if the decimals can fit in given precision. As `cast` kernel rounds\n/// decimals already, Comet `CheckOverflow` expression only checks if the decimals can fit in the\n/// precision.\n#[derive(Debug, Eq)]\npub struct CheckOverflow {\n    pub child: Arc<dyn PhysicalExpr>,\n    pub data_type: DataType,\n    pub fail_on_error: bool,\n    pub expr_id: Option<u64>,\n    pub query_context: Option<Arc<crate::QueryContext>>,\n}\n\nimpl Hash for CheckOverflow {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n        self.data_type.hash(state);\n        self.fail_on_error.hash(state);\n    }\n}\n\nimpl PartialEq for CheckOverflow {\n    fn eq(&self, other: &Self) -> bool {\n        self.child.eq(&other.child)\n            && self.data_type.eq(&other.data_type)\n            && self.fail_on_error.eq(&other.fail_on_error)\n    }\n}\n\nimpl CheckOverflow {\n    pub fn new(\n        child: Arc<dyn PhysicalExpr>,\n        data_type: DataType,\n        fail_on_error: bool,\n        expr_id: Option<u64>,\n        query_context: Option<Arc<crate::QueryContext>>,\n    ) -> Self {\n        Self {\n            child,\n            data_type,\n            fail_on_error,\n            expr_id,\n            query_context,\n        }\n    }\n}\n\nimpl Display for CheckOverflow {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"CheckOverflow [datatype: {}, fail_on_error: {}, child: {}]\",\n            self.data_type, self.fail_on_error, self.child\n        )\n    }\n}\n\nimpl PhysicalExpr for CheckOverflow {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, _: &Schema) -> datafusion::common::Result<DataType> {\n        Ok(self.data_type.clone())\n    }\n\n    fn nullable(&self, _: &Schema) -> datafusion::common::Result<bool> {\n        Ok(true)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> datafusion::common::Result<ColumnarValue> {\n  
      let arg = self.child.evaluate(batch)?;\n        match arg {\n            ColumnarValue::Array(array)\n                if matches!(array.data_type(), DataType::Decimal128(_, _)) =>\n            {\n                let (precision, scale) = match &self.data_type {\n                    DataType::Decimal128(p, s) => (p, s),\n                    dt => {\n                        return Err(DataFusionError::Execution(format!(\n                            \"CheckOverflow expects only Decimal128, but got {dt:?}\"\n                        )))\n                    }\n                };\n\n                let decimal_array = as_primitive_array::<Decimal128Type>(&array);\n\n                let casted_array = if self.fail_on_error {\n                    // Returning error if overflow - convert decimal overflow to SparkError\n                    decimal_array\n                        .validate_decimal_precision(*precision)\n                        .map_err(|e| {\n                            if matches!(e, arrow::error::ArrowError::InvalidArgumentError(_))\n                                && e.to_string().contains(\"too large to store in a Decimal128\") {\n                                // Find the first overflowing value\n                                let overflow_value = decimal_array\n                                    .iter()\n                                    .find(|v| {\n                                        if let Some(val) = v {\n                                            arrow::array::types::Decimal128Type::validate_decimal_precision(\n                                                *val, *precision, *scale\n                                            ).is_err()\n                                        } else {\n                                            false\n                                        }\n                                    })\n                                    .and_then(|v| v)\n                                    .unwrap_or(0);\n\n                                let spark_error = crate::error::decimal_overflow_error(overflow_value, *precision, *scale);\n\n                                // Wrap with query_context if present\n                                if let Some(ctx) = &self.query_context {\n                                    DataFusionError::External(Box::new(\n                                        crate::SparkErrorWithContext::with_context(spark_error, Arc::clone(ctx))\n                                    ))\n                                } else {\n                                    DataFusionError::External(Box::new(spark_error))\n                                }\n                            } else {\n                                DataFusionError::ArrowError(Box::new(e), None)\n                            }\n                        })?;\n                    decimal_array\n                } else {\n                    // Overflowing gets null value\n                    &decimal_array.null_if_overflow_precision(*precision)\n                };\n\n                let new_array = Decimal128Array::from(casted_array.into_data())\n                    .with_precision_and_scale(*precision, *scale)\n                    .map(|a| Arc::new(a) as ArrayRef)\n                    .map_err(|e| {\n                        if matches!(e, arrow::error::ArrowError::InvalidArgumentError(_))\n                            && e.to_string().contains(\"too large to store in a Decimal128\")\n                        {\n                            // Fallback error handling\n      
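                      // This branch is only reachable if `with_precision_and_scale` itself\n                            // rejects the data; the validation above normally prevents that.\n      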
                      let spark_error = SparkError::NumericValueOutOfRange {\n                                value: \"overflow\".to_string(),\n                                precision: *precision,\n                                scale: *scale,\n                            };\n\n                            // Wrap with query_context if present\n                            if let Some(ctx) = &self.query_context {\n                                DataFusionError::External(Box::new(\n                                    crate::SparkErrorWithContext::with_context(\n                                        spark_error,\n                                        Arc::clone(ctx),\n                                    ),\n                                ))\n                            } else {\n                                DataFusionError::External(Box::new(spark_error))\n                            }\n                        } else {\n                            DataFusionError::ArrowError(Box::new(e), None)\n                        }\n                    })?;\n\n                Ok(ColumnarValue::Array(new_array))\n            }\n            ColumnarValue::Scalar(ScalarValue::Decimal128(v, precision, scale)) => {\n                if self.fail_on_error {\n                    if let Some(val) = v {\n                        Decimal128Type::validate_decimal_precision(val, precision, scale).map_err(\n                            |_| {\n                                let spark_error =\n                                    crate::error::decimal_overflow_error(val, precision, scale);\n                                if let Some(ctx) = &self.query_context {\n                                    DataFusionError::External(Box::new(\n                                        crate::SparkErrorWithContext::with_context(\n                                            spark_error,\n                                            Arc::clone(ctx),\n                                        ),\n                                    ))\n                                } else {\n                                    DataFusionError::External(Box::new(spark_error))\n                                }\n                            },\n                        )?;\n                    }\n                    Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(\n                        v, precision, scale,\n                    )))\n                } else {\n                    let new_v: Option<i128> = v.and_then(|v| {\n                        Decimal128Type::validate_decimal_precision(v, precision, scale)\n                            .map(|_| v)\n                            .ok()\n                    });\n                    Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(\n                        new_v, precision, scale,\n                    )))\n                }\n            }\n            v => Err(DataFusionError::Execution(format!(\n                \"CheckOverflow's child expression should be decimal array, but found {v:?}\"\n            ))),\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(CheckOverflow::new(\n            Arc::clone(&children[0]),\n            self.data_type.clone(),\n            self.fail_on_error,\n            self.expr_id,\n            self.query_context.clone(),\n        
)))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::datatypes::{Field, Schema};\n    use arrow::record_batch::RecordBatch;\n    use std::fmt::{Display, Formatter};\n\n    /// Helper that always returns a fixed Decimal128 scalar.\n    #[derive(Debug, Eq, PartialEq, Hash)]\n    struct ScalarChild(Option<i128>, u8, i8);\n\n    impl Display for ScalarChild {\n        fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n            write!(f, \"ScalarChild({:?})\", self.0)\n        }\n    }\n\n    impl PhysicalExpr for ScalarChild {\n        fn as_any(&self) -> &dyn Any {\n            self\n        }\n        fn data_type(&self, _: &Schema) -> datafusion::common::Result<DataType> {\n            Ok(DataType::Decimal128(self.1, self.2))\n        }\n        fn nullable(&self, _: &Schema) -> datafusion::common::Result<bool> {\n            Ok(true)\n        }\n        fn evaluate(&self, _: &RecordBatch) -> datafusion::common::Result<ColumnarValue> {\n            Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(\n                self.0, self.1, self.2,\n            )))\n        }\n        fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n            vec![]\n        }\n        fn with_new_children(\n            self: Arc<Self>,\n            _: Vec<Arc<dyn PhysicalExpr>>,\n        ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n            Ok(self)\n        }\n        fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n            Display::fmt(self, f)\n        }\n    }\n\n    fn empty_batch() -> RecordBatch {\n        let schema = Schema::new(vec![Field::new(\"x\", DataType::Decimal128(38, 0), true)]);\n        RecordBatch::new_empty(Arc::new(schema))\n    }\n\n    fn make_check_overflow(\n        value: Option<i128>,\n        precision: u8,\n        scale: i8,\n        fail_on_error: bool,\n    ) -> CheckOverflow {\n        CheckOverflow::new(\n            Arc::new(ScalarChild(value, precision, scale)),\n            DataType::Decimal128(precision, scale),\n            fail_on_error,\n            None,\n            None,\n        )\n    }\n\n    // --- scalar, fail_on_error = false (legacy mode) ---\n\n    #[test]\n    fn test_scalar_no_overflow_legacy() {\n        // 999 fits in precision 3, scale 0 → returned as-is\n        let expr = make_check_overflow(Some(999), 3, 0, false);\n        let result = expr.evaluate(&empty_batch()).unwrap();\n        match result {\n            ColumnarValue::Scalar(ScalarValue::Decimal128(v, 3, 0)) => assert_eq!(v, Some(999)),\n            other => panic!(\"unexpected: {other:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_scalar_overflow_returns_null_in_legacy_mode() {\n        // 1000 does not fit in precision 3 → null, no error\n        let expr = make_check_overflow(Some(1000), 3, 0, false);\n        let result = expr.evaluate(&empty_batch()).unwrap();\n        match result {\n            ColumnarValue::Scalar(ScalarValue::Decimal128(v, 3, 0)) => assert_eq!(v, None),\n            other => panic!(\"unexpected: {other:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_scalar_null_passthrough_legacy() {\n        let expr = make_check_overflow(None, 3, 0, false);\n        let result = expr.evaluate(&empty_batch()).unwrap();\n        match result {\n            ColumnarValue::Scalar(ScalarValue::Decimal128(v, 3, 0)) => assert_eq!(v, None),\n            other => panic!(\"unexpected: {other:?}\"),\n        }\n    }\n\n    // --- scalar, fail_on_error = true (ANSI mode) ---\n\n    #[test]\n    fn 
test_scalar_no_overflow_ansi() {\n        // 999 fits in precision 3 → returned as-is, no error\n        let expr = make_check_overflow(Some(999), 3, 0, true);\n        let result = expr.evaluate(&empty_batch()).unwrap();\n        match result {\n            ColumnarValue::Scalar(ScalarValue::Decimal128(v, 3, 0)) => assert_eq!(v, Some(999)),\n            other => panic!(\"unexpected: {other:?}\"),\n        }\n    }\n\n    #[test]\n    fn test_scalar_overflow_returns_error_in_ansi_mode() {\n        // 1000 does not fit in precision 3 → error, not Ok(None)\n        // This is the case that previously panicked with \"fail_on_error (ANSI mode) is not\n        // supported yet\".\n        let expr = make_check_overflow(Some(1000), 3, 0, true);\n        let result = expr.evaluate(&empty_batch());\n        assert!(result.is_err(), \"expected error on overflow in ANSI mode\");\n    }\n\n    #[test]\n    fn test_scalar_null_passthrough_ansi() {\n        // None input → None output even in ANSI mode (no value to overflow)\n        let expr = make_check_overflow(None, 3, 0, true);\n        let result = expr.evaluate(&empty_batch()).unwrap();\n        match result {\n            ColumnarValue::Scalar(ScalarValue::Decimal128(v, 3, 0)) => assert_eq!(v, None),\n            other => panic!(\"unexpected: {other:?}\"),\n        }\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/math_funcs/internal/decimal_rescale_check.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Fused decimal rescale + overflow check expression.\n//!\n//! Replaces the pattern `CheckOverflow(Cast(expr, Decimal128(p2,s2)), Decimal128(p2,s2))`\n//! with a single expression that rescales and validates precision in one pass.\n\nuse arrow::array::{as_primitive_array, Array, ArrayRef, Decimal128Array};\nuse arrow::datatypes::{DataType, Decimal128Type, Schema};\nuse arrow::error::ArrowError;\nuse arrow::record_batch::RecordBatch;\nuse datafusion::common::{DataFusionError, ScalarValue};\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::hash::Hash;\nuse std::{\n    any::Any,\n    fmt::{Display, Formatter},\n    sync::Arc,\n};\n\n/// A fused expression that rescales a Decimal128 value (changing scale) and checks\n/// for precision overflow in a single pass. Replaces the two-step\n/// `CheckOverflow(Cast(expr, Decimal128(p,s)))` pattern.\n#[derive(Debug, Eq)]\npub struct DecimalRescaleCheckOverflow {\n    child: Arc<dyn PhysicalExpr>,\n    input_scale: i8,\n    output_precision: u8,\n    output_scale: i8,\n    fail_on_error: bool,\n}\n\nimpl Hash for DecimalRescaleCheckOverflow {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n        self.input_scale.hash(state);\n        self.output_precision.hash(state);\n        self.output_scale.hash(state);\n        self.fail_on_error.hash(state);\n    }\n}\n\nimpl PartialEq for DecimalRescaleCheckOverflow {\n    fn eq(&self, other: &Self) -> bool {\n        self.child.eq(&other.child)\n            && self.input_scale == other.input_scale\n            && self.output_precision == other.output_precision\n            && self.output_scale == other.output_scale\n            && self.fail_on_error == other.fail_on_error\n    }\n}\n\nimpl DecimalRescaleCheckOverflow {\n    pub fn new(\n        child: Arc<dyn PhysicalExpr>,\n        input_scale: i8,\n        output_precision: u8,\n        output_scale: i8,\n        fail_on_error: bool,\n    ) -> Self {\n        Self {\n            child,\n            input_scale,\n            output_precision,\n            output_scale,\n            fail_on_error,\n        }\n    }\n}\n\nimpl Display for DecimalRescaleCheckOverflow {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"DecimalRescaleCheckOverflow [child: {}, input_scale: {}, output: Decimal128({}, {}), fail_on_error: {}]\",\n            self.child, self.input_scale, self.output_precision, self.output_scale, self.fail_on_error\n        )\n    }\n}\n\n/// Maximum absolute value for a given decimal precision: 10^p - 1.\n/// Precision must be <= 38 (max for Decimal128).\n#[inline]\nfn 
precision_bound(precision: u8) -> i128 {\n    assert!(\n        precision <= 38,\n        \"precision_bound: precision {precision} exceeds maximum 38\"\n    );\n    10i128.pow(precision as u32) - 1\n}\n\n/// Rescale a single i128 value by the given delta (output_scale - input_scale)\n/// and check precision bounds. Returns `Ok(value)` or `Ok(i128::MAX)` as sentinel\n/// for overflow in legacy mode, or `Err` in ANSI mode.\n#[inline]\nfn rescale_and_check(\n    value: i128,\n    delta: i8,\n    scale_factor: i128,\n    bound: i128,\n    fail_on_error: bool,\n) -> Result<i128, ArrowError> {\n    let rescaled = if delta > 0 {\n        // Scale up: multiply. Check for overflow.\n        match value.checked_mul(scale_factor) {\n            Some(v) => v,\n            None => {\n                if fail_on_error {\n                    return Err(ArrowError::ComputeError(\n                        \"Decimal overflow during rescale\".to_string(),\n                    ));\n                }\n                return Ok(i128::MAX); // sentinel\n            }\n        }\n    } else if delta < 0 {\n        // Scale down with HALF_UP rounding\n        // divisor = 10^(-delta), half = divisor / 2\n        let divisor = scale_factor; // already 10^abs(delta)\n        let half = divisor / 2;\n        let sign = value.signum();\n        (value + sign * half) / divisor\n    } else {\n        value\n    };\n\n    // Precision check\n    if rescaled.abs() > bound {\n        if fail_on_error {\n            return Err(ArrowError::ComputeError(\n                \"Decimal overflow: value does not fit in precision\".to_string(),\n            ));\n        }\n        Ok(i128::MAX) // sentinel for null_if_overflow_precision\n    } else {\n        Ok(rescaled)\n    }\n}\n\nimpl PhysicalExpr for DecimalRescaleCheckOverflow {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, _: &Schema) -> datafusion::common::Result<DataType> {\n        Ok(DataType::Decimal128(\n            self.output_precision,\n            self.output_scale,\n        ))\n    }\n\n    fn nullable(&self, _: &Schema) -> datafusion::common::Result<bool> {\n        Ok(true)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> datafusion::common::Result<ColumnarValue> {\n        let arg = self.child.evaluate(batch)?;\n        let delta = self.output_scale - self.input_scale;\n        let abs_delta = delta.unsigned_abs();\n        // If abs_delta > 38, the scale factor overflows i128. 
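Note that\n        // 10^39 already exceeds i128::MAX (about 1.7 * 10^38). 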
In that case we do not attempt\n        // the rescale at all and instead reject it with an execution error.\n        if abs_delta > 38 {\n            return Err(DataFusionError::Execution(format!(\n                \"DecimalRescaleCheckOverflow: scale delta {delta} exceeds maximum supported range\"\n            )));\n        }\n        let scale_factor = 10i128.pow(abs_delta as u32);\n        let bound = precision_bound(self.output_precision);\n        let fail_on_error = self.fail_on_error;\n        let p_out = self.output_precision;\n        let s_out = self.output_scale;\n\n        match arg {\n            ColumnarValue::Array(array)\n                if matches!(array.data_type(), DataType::Decimal128(_, _)) =>\n            {\n                let decimal_array = as_primitive_array::<Decimal128Type>(&array);\n\n                let result: Decimal128Array =\n                    arrow::compute::kernels::arity::try_unary(decimal_array, |value| {\n                        rescale_and_check(value, delta, scale_factor, bound, fail_on_error)\n                    })?;\n\n                let result = if !fail_on_error {\n                    result.null_if_overflow_precision(p_out)\n                } else {\n                    result\n                };\n\n                let result = result\n                    .with_precision_and_scale(p_out, s_out)\n                    .map(|a| Arc::new(a) as ArrayRef)?;\n\n                Ok(ColumnarValue::Array(result))\n            }\n            ColumnarValue::Scalar(ScalarValue::Decimal128(v, _precision, _scale)) => {\n                let new_v = match v {\n                    Some(val) => {\n                        let r = rescale_and_check(val, delta, scale_factor, bound, fail_on_error)\n                            .map_err(|e| DataFusionError::ArrowError(Box::new(e), None))?;\n                        if r == i128::MAX {\n                            None\n                        } else {\n                            Some(r)\n                        }\n                    }\n                    None => None,\n                };\n                Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(\n                    new_v, p_out, s_out,\n                )))\n            }\n            v => Err(DataFusionError::Execution(format!(\n                \"DecimalRescaleCheckOverflow expects Decimal128, but found {v:?}\"\n            ))),\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        if children.len() != 1 {\n            return Err(DataFusionError::Internal(format!(\n                \"DecimalRescaleCheckOverflow expects 1 child, got {}\",\n                children.len()\n            )));\n        }\n        Ok(Arc::new(DecimalRescaleCheckOverflow::new(\n            Arc::clone(&children[0]),\n            self.input_scale,\n            self.output_precision,\n            self.output_scale,\n            self.fail_on_error,\n        )))\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{AsArray, Decimal128Array};\n    use arrow::datatypes::{Field, Schema};\n    use arrow::record_batch::RecordBatch;\n    use datafusion::physical_expr::expressions::Column;\n\n    fn make_batch(values: Vec<Option<i128>>, precision: u8, scale: i8) -> RecordBatch {\n        let arr =\n         
   Decimal128Array::from(values).with_data_type(DataType::Decimal128(precision, scale));\n        let schema = Schema::new(vec![Field::new(\"col\", arr.data_type().clone(), true)]);\n        RecordBatch::try_new(Arc::new(schema), vec![Arc::new(arr)]).unwrap()\n    }\n\n    fn eval_expr(\n        batch: &RecordBatch,\n        input_scale: i8,\n        output_precision: u8,\n        output_scale: i8,\n        fail_on_error: bool,\n    ) -> datafusion::common::Result<ArrayRef> {\n        let child: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"col\", 0));\n        let expr = DecimalRescaleCheckOverflow::new(\n            child,\n            input_scale,\n            output_precision,\n            output_scale,\n            fail_on_error,\n        );\n        match expr.evaluate(batch)? {\n            ColumnarValue::Array(arr) => Ok(arr),\n            _ => panic!(\"expected array\"),\n        }\n    }\n\n    #[test]\n    fn test_scale_up() {\n        // Decimal128(10,2) -> Decimal128(10,4): 1.50 (150) -> 1.5000 (15000)\n        let batch = make_batch(vec![Some(150), Some(-300)], 10, 2);\n        let result = eval_expr(&batch, 2, 10, 4, false).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 15000); // 1.5000\n        assert_eq!(arr.value(1), -30000); // -3.0000\n    }\n\n    #[test]\n    fn test_scale_down_with_half_up_rounding() {\n        // Decimal128(10,4) -> Decimal128(10,2)\n        // 1.2350 (12350) -> round to 1.24 (124) with HALF_UP\n        // 1.2349 (12349) -> round to 1.23 (123) with HALF_UP\n        // -1.2350 (-12350) -> round to -1.24 (-124) with HALF_UP\n        let batch = make_batch(vec![Some(12350), Some(12349), Some(-12350)], 10, 4);\n        let result = eval_expr(&batch, 4, 10, 2, false).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 124); // 1.24\n        assert_eq!(arr.value(1), 123); // 1.23\n        assert_eq!(arr.value(2), -124); // -1.24\n    }\n\n    #[test]\n    fn test_same_scale_precision_check_only() {\n        // Same scale, just check precision. 
Value 999 fits in precision 3, 1000 does not.\n        let batch = make_batch(vec![Some(999), Some(1000)], 38, 0);\n        let result = eval_expr(&batch, 0, 3, 0, false).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 999);\n        assert!(arr.is_null(1)); // overflow -> null in legacy mode\n    }\n\n    #[test]\n    fn test_overflow_null_in_legacy_mode() {\n        // Scale up causes a precision overflow (not an i128 overflow):\n        // value 10 at scale 0, scaled up to scale 2 -> 1000, which overflows precision 3\n        let batch = make_batch(vec![Some(10)], 38, 0);\n        let result = eval_expr(&batch, 0, 3, 2, false).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert!(arr.is_null(0)); // 10 * 100 = 1000 > 999 (max for precision 3)\n    }\n\n    #[test]\n    fn test_overflow_error_in_ansi_mode() {\n        let batch = make_batch(vec![Some(10)], 38, 0);\n        let result = eval_expr(&batch, 0, 3, 2, true);\n        assert!(result.is_err());\n    }\n\n    #[test]\n    fn test_null_propagation() {\n        let batch = make_batch(vec![Some(100), None, Some(200)], 10, 2);\n        let result = eval_expr(&batch, 2, 10, 4, false).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert!(!arr.is_null(0));\n        assert!(arr.is_null(1));\n        assert!(!arr.is_null(2));\n    }\n\n    #[test]\n    fn test_scalar_path() {\n        let schema = Schema::new(vec![Field::new(\"col\", DataType::Decimal128(10, 2), true)]);\n        let batch = RecordBatch::new_empty(Arc::new(schema));\n\n        let scalar_expr = DecimalRescaleCheckOverflow::new(\n            Arc::new(ScalarChild(Some(150), 10, 2)),\n            2,\n            10,\n            4,\n            false,\n        );\n        let result = scalar_expr.evaluate(&batch).unwrap();\n        match result {\n            ColumnarValue::Scalar(ScalarValue::Decimal128(v, p, s)) => {\n                assert_eq!(v, Some(15000));\n                assert_eq!(p, 10);\n                assert_eq!(s, 4);\n            }\n            _ => panic!(\"expected decimal scalar\"),\n        }\n    }\n\n    /// Helper expression that always returns a Decimal128 scalar.\n    #[derive(Debug, Eq, PartialEq, Hash)]\n    struct ScalarChild(Option<i128>, u8, i8);\n\n    impl Display for ScalarChild {\n        fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n            write!(f, \"ScalarChild({:?})\", self.0)\n        }\n    }\n\n    impl PhysicalExpr for ScalarChild {\n        fn as_any(&self) -> &dyn Any {\n            self\n        }\n        fn data_type(&self, _: &Schema) -> datafusion::common::Result<DataType> {\n            Ok(DataType::Decimal128(self.1, self.2))\n        }\n        fn nullable(&self, _: &Schema) -> datafusion::common::Result<bool> {\n            Ok(true)\n        }\n        fn evaluate(&self, _batch: &RecordBatch) -> datafusion::common::Result<ColumnarValue> {\n            Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(\n                self.0, self.1, self.2,\n            )))\n        }\n        fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n            vec![]\n        }\n        fn with_new_children(\n            self: Arc<Self>,\n            _children: Vec<Arc<dyn PhysicalExpr>>,\n        ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n            Ok(self)\n        }\n        fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n      
      Display::fmt(self, f)\n        }\n    }\n\n    #[test]\n    fn test_scalar_null() {\n        let schema = Schema::new(vec![Field::new(\"col\", DataType::Decimal128(10, 2), true)]);\n        let batch = RecordBatch::new_empty(Arc::new(schema));\n        let expr =\n            DecimalRescaleCheckOverflow::new(Arc::new(ScalarChild(None, 10, 2)), 2, 10, 4, false);\n        let result = expr.evaluate(&batch).unwrap();\n        match result {\n            ColumnarValue::Scalar(ScalarValue::Decimal128(v, _, _)) => {\n                assert_eq!(v, None);\n            }\n            _ => panic!(\"expected decimal scalar\"),\n        }\n    }\n\n    #[test]\n    fn test_scalar_overflow_legacy() {\n        let schema = Schema::new(vec![Field::new(\"col\", DataType::Decimal128(38, 0), true)]);\n        let batch = RecordBatch::new_empty(Arc::new(schema));\n        let expr = DecimalRescaleCheckOverflow::new(\n            Arc::new(ScalarChild(Some(10), 38, 0)),\n            0,\n            3,\n            2,\n            false,\n        );\n        let result = expr.evaluate(&batch).unwrap();\n        match result {\n            ColumnarValue::Scalar(ScalarValue::Decimal128(v, _, _)) => {\n                assert_eq!(v, None); // 10 * 100 = 1000 > 999\n            }\n            _ => panic!(\"expected decimal scalar\"),\n        }\n    }\n\n    #[test]\n    fn test_scalar_overflow_ansi_returns_error() {\n        // fail_on_error=true must propagate the error, not silently return None\n        let schema = Schema::new(vec![Field::new(\"col\", DataType::Decimal128(38, 0), true)]);\n        let batch = RecordBatch::new_empty(Arc::new(schema));\n        let expr = DecimalRescaleCheckOverflow::new(\n            Arc::new(ScalarChild(Some(10), 38, 0)),\n            0,\n            3,\n            2,\n            true, // fail_on_error = true\n        );\n        let result = expr.evaluate(&batch);\n        assert!(result.is_err()); // must be error, not Ok(None)\n    }\n\n    #[test]\n    fn test_large_scale_delta_returns_error() {\n        // delta = output_scale - input_scale = 38 - (-1) = 39\n        // 10i128.pow(39) would overflow, so we must reject gracefully\n        let batch = make_batch(vec![Some(1)], 38, -1);\n        let result = eval_expr(&batch, -1, 38, 38, false);\n        assert!(result.is_err());\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/math_funcs/internal/make_decimal.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::math_funcs::utils::get_precision_scale;\nuse arrow::datatypes::DataType;\nuse arrow::{\n    array::{AsArray, Decimal128Builder},\n    datatypes::{validate_decimal_precision, Int64Type},\n};\nuse datafusion::common::{internal_err, Result as DataFusionResult, ScalarValue};\nuse datafusion::physical_plan::ColumnarValue;\nuse std::sync::Arc;\n\n/// Spark-compatible `MakeDecimal` expression (internal to Spark optimizer)\npub fn spark_make_decimal(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n) -> DataFusionResult<ColumnarValue> {\n    let (precision, scale) = get_precision_scale(data_type);\n    match &args[0] {\n        ColumnarValue::Scalar(v) => match v {\n            ScalarValue::Int64(n) => Ok(ColumnarValue::Scalar(ScalarValue::Decimal128(\n                long_to_decimal(n, precision, scale),\n                precision,\n                scale,\n            ))),\n            sv => internal_err!(\"Expected Int64 but found {sv:?}\"),\n        },\n        ColumnarValue::Array(a) => match a.data_type() {\n            DataType::Int64 => {\n                let arr = a.as_primitive::<Int64Type>();\n                let mut result = Decimal128Builder::new();\n                for v in arr.into_iter() {\n                    result.append_option(long_to_decimal(&v, precision, scale))\n                }\n                let result_type = DataType::Decimal128(precision, scale);\n\n                Ok(ColumnarValue::Array(Arc::new(\n                    result.finish().with_data_type(result_type),\n                )))\n            }\n            av => internal_err!(\"Expected Int64 but found {av:?}\"),\n        },\n    }\n}\n\n/// Convert the input long to decimal with the given maximum precision. If overflows, returns null\n/// instead.\n#[inline]\nfn long_to_decimal(v: &Option<i64>, precision: u8, scale: i8) -> Option<i128> {\n    match v {\n        Some(v) if validate_decimal_precision(*v as i128, precision, scale).is_ok() => {\n            Some(*v as i128)\n        }\n        _ => None,\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/math_funcs/internal/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod checkoverflow;\nmod decimal_rescale_check;\nmod make_decimal;\nmod normalize_nan;\nmod unscaled_value;\n\npub use checkoverflow::CheckOverflow;\npub use decimal_rescale_check::DecimalRescaleCheckOverflow;\npub use make_decimal::spark_make_decimal;\npub use normalize_nan::NormalizeNaNAndZero;\npub use unscaled_value::spark_unscaled_value;\n"
  },
  {
    "path": "native/spark-expr/src/math_funcs/internal/normalize_nan.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::compute::unary;\nuse arrow::datatypes::{DataType, Schema};\nuse arrow::{\n    array::{as_primitive_array, Float32Array, Float64Array},\n    datatypes::{Float32Type, Float64Type},\n    record_batch::RecordBatch,\n};\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::hash::Hash;\nuse std::{\n    any::Any,\n    fmt::{Display, Formatter},\n    sync::Arc,\n};\n\n#[derive(Debug, Eq)]\npub struct NormalizeNaNAndZero {\n    pub data_type: DataType,\n    pub child: Arc<dyn PhysicalExpr>,\n}\n\nimpl PartialEq for NormalizeNaNAndZero {\n    fn eq(&self, other: &Self) -> bool {\n        self.child.eq(&other.child) && self.data_type.eq(&other.data_type)\n    }\n}\n\nimpl Hash for NormalizeNaNAndZero {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n        self.data_type.hash(state);\n    }\n}\n\nimpl NormalizeNaNAndZero {\n    pub fn new(data_type: DataType, child: Arc<dyn PhysicalExpr>) -> Self {\n        Self { data_type, child }\n    }\n}\n\nimpl PhysicalExpr for NormalizeNaNAndZero {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> datafusion::common::Result<DataType> {\n        self.child.data_type(input_schema)\n    }\n\n    fn nullable(&self, input_schema: &Schema) -> datafusion::common::Result<bool> {\n        self.child.nullable(input_schema)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> datafusion::common::Result<ColumnarValue> {\n        let cv = self.child.evaluate(batch)?;\n        let array = cv.into_array(batch.num_rows())?;\n\n        match &self.data_type {\n            DataType::Float32 => {\n                let input = as_primitive_array::<Float32Type>(&array);\n                // Use unary which operates directly on values buffer without intermediate allocation\n                let result: Float32Array = unary(input, normalize_float);\n                Ok(ColumnarValue::Array(Arc::new(result)))\n            }\n            DataType::Float64 => {\n                let input = as_primitive_array::<Float64Type>(&array);\n                // Use unary which operates directly on values buffer without intermediate allocation\n                let result: Float64Array = unary(input, normalize_float);\n                Ok(ColumnarValue::Array(Arc::new(result)))\n            }\n            dt => panic!(\"Unexpected data type {dt:?}\"),\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: 
Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(NormalizeNaNAndZero::new(\n            self.data_type.clone(),\n            Arc::clone(&children[0]),\n        )))\n    }\n}\n\n/// Normalize a floating point value by converting all NaN representations to a canonical NaN\n/// and negative zero to positive zero. This is used for Spark's comparison semantics.\n#[inline]\nfn normalize_float<T: num::Float>(v: T) -> T {\n    if v.is_nan() {\n        T::nan()\n    } else if v == T::neg_zero() {\n        T::zero()\n    } else {\n        v\n    }\n}\n\nimpl Display for NormalizeNaNAndZero {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"FloatNormalize [child: {}]\", self.child)\n    }\n}\n"
  },
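  {
    "path": "native/spark-expr/examples/normalize_nan_sketch.rs",
    "content": "// NOTE: hypothetical illustrative sketch, not a file from the Comet source tree.\n// It demonstrates the normalization rule implemented by NormalizeNaNAndZero in\n// native/spark-expr/src/math_funcs/internal/normalize_nan.rs: Spark normalizes\n// floats before comparisons, joins, and aggregations so that every NaN bit\n// pattern maps to one canonical NaN and negative zero maps to positive zero.\n// Without this, rows that Spark considers equal could hash into different groups.\n\nfn normalize(v: f64) -> f64 {\n    if v.is_nan() {\n        f64::NAN // canonical NaN\n    } else if v == -0.0 {\n        // In IEEE 754, -0.0 == 0.0, so this arm also matches +0.0 (harmless).\n        0.0\n    } else {\n        v\n    }\n}\n\nfn main() {\n    // A NaN with a non-canonical payload.\n    let odd_nan = f64::from_bits(0x7ff0_0000_0000_0001);\n    assert!(odd_nan.is_nan());\n    assert_ne!(odd_nan.to_bits(), f64::NAN.to_bits());\n    // After normalization, both NaNs share a single bit pattern.\n    assert_eq!(normalize(odd_nan).to_bits(), normalize(f64::NAN).to_bits());\n    // Negative zero compares equal to zero but has a different bit pattern...\n    assert_ne!((-0.0f64).to_bits(), 0.0f64.to_bits());\n    // ...and normalization erases that difference too.\n    assert_eq!(normalize(-0.0).to_bits(), 0.0f64.to_bits());\n}\n"
  },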
  {
    "path": "native/spark-expr/src/math_funcs/internal/unscaled_value.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::{\n    array::{AsArray, Int64Builder},\n    datatypes::Decimal128Type,\n};\nuse datafusion::common::{internal_err, Result as DataFusionResult, ScalarValue};\nuse datafusion::physical_plan::ColumnarValue;\nuse std::sync::Arc;\n\n/// Spark-compatible `UnscaledValue` expression (internal to Spark optimizer)\npub fn spark_unscaled_value(args: &[ColumnarValue]) -> DataFusionResult<ColumnarValue> {\n    match &args[0] {\n        ColumnarValue::Scalar(v) => match v {\n            ScalarValue::Decimal128(d, _, _) => Ok(ColumnarValue::Scalar(ScalarValue::Int64(\n                d.map(|n| n as i64),\n            ))),\n            dt => internal_err!(\"Expected Decimal128 but found {dt:}\"),\n        },\n        ColumnarValue::Array(a) => {\n            let arr = a.as_primitive::<Decimal128Type>();\n            let mut result = Int64Builder::new();\n            for v in arr.into_iter() {\n                result.append_option(v.map(|v| v as i64));\n            }\n            Ok(ColumnarValue::Array(Arc::new(result.finish())))\n        }\n    }\n}\n"
  },
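  {
    "path": "native/spark-expr/examples/unscaled_value_sketch.rs",
    "content": "// NOTE: hypothetical illustrative sketch, not a file from the Comet source tree.\n// It shows what \"unscaled value\" means for spark_unscaled_value in\n// native/spark-expr/src/math_funcs/internal/unscaled_value.rs. A decimal with\n// scale s is stored as the integer value * 10^s; Spark's optimizer rewrites some\n// decimal aggregations to operate on that raw integer. Spark only applies\n// UnscaledValue to decimals with precision <= 18, so the i128 -> i64 cast in the\n// real implementation cannot lose digits.\n\nfn main() {\n    // The decimal 1.2345 at scale 4 is stored as the unscaled integer 12345.\n    let unscaled: i128 = 12345;\n    let scale: i32 = 4;\n    assert_eq!(unscaled as f64 / 10f64.powi(scale), 1.2345);\n\n    // UnscaledValue simply reinterprets that stored integer as a 64-bit long.\n    assert_eq!(unscaled as i64, 12345i64);\n\n    // Negative decimals keep their sign in unscaled form: -0.0001 at scale 4 -> -1.\n    assert_eq!((-1i128) as i64, -1i64);\n}\n"
  },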
  {
    "path": "native/spark-expr/src/math_funcs/log.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, Float64Array};\nuse datafusion::common::{DataFusionError, ScalarValue};\nuse datafusion::physical_plan::ColumnarValue;\nuse std::sync::Arc;\n\n/// Spark-compatible two-argument logarithm: `log(base, value)`.\n///\n/// Returns `log(value) / log(base)`, matching Spark's `Logarithm` expression.\n/// Returns null when `base <= 0` or `value <= 0`, matching Spark's `nullSafeEval`.\npub fn spark_log(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    if args.len() != 2 {\n        return Err(DataFusionError::Internal(format!(\n            \"spark_log requires 2 arguments, got {}\",\n            args.len()\n        )));\n    }\n\n    // Spark's Logarithm: log(base, value) = ln(value) / ln(base)\n    // Returns null when base <= 0 or value <= 0\n    fn compute(base: f64, value: f64) -> Option<f64> {\n        if base <= 0.0 || value <= 0.0 {\n            None\n        } else {\n            Some(value.ln() / base.ln())\n        }\n    }\n\n    match (&args[0], &args[1]) {\n        (ColumnarValue::Array(base_arr), ColumnarValue::Array(val_arr)) => {\n            let bases = base_arr\n                .as_any()\n                .downcast_ref::<Float64Array>()\n                .ok_or_else(|| {\n                    DataFusionError::Internal(format!(\n                        \"spark_log expected Float64 for base, got {:?}\",\n                        base_arr.data_type()\n                    ))\n                })?;\n            let values = val_arr\n                .as_any()\n                .downcast_ref::<Float64Array>()\n                .ok_or_else(|| {\n                    DataFusionError::Internal(format!(\n                        \"spark_log expected Float64 for value, got {:?}\",\n                        val_arr.data_type()\n                    ))\n                })?;\n            let result: Float64Array = bases\n                .iter()\n                .zip(values.iter())\n                .map(|(b, v)| match (b, v) {\n                    (Some(base), Some(value)) => compute(base, value),\n                    _ => None,\n                })\n                .collect();\n            Ok(ColumnarValue::Array(Arc::new(result)))\n        }\n        (ColumnarValue::Scalar(base_scalar), ColumnarValue::Array(val_arr)) => {\n            let base = match base_scalar {\n                ScalarValue::Float64(Some(b)) => *b,\n                ScalarValue::Float64(None) => {\n                    let result = Float64Array::new_null(val_arr.len());\n                    return Ok(ColumnarValue::Array(Arc::new(result)));\n                }\n                _ => {\n                    return Err(DataFusionError::Internal(format!(\n                     
   \"spark_log expected Float64 scalar for base, got {base_scalar:?}\",\n                    )));\n                }\n            };\n            let values = val_arr\n                .as_any()\n                .downcast_ref::<Float64Array>()\n                .ok_or_else(|| {\n                    DataFusionError::Internal(format!(\n                        \"spark_log expected Float64 for value, got {:?}\",\n                        val_arr.data_type()\n                    ))\n                })?;\n            let result: Float64Array = values\n                .iter()\n                .map(|v| v.and_then(|value| compute(base, value)))\n                .collect();\n            Ok(ColumnarValue::Array(Arc::new(result)))\n        }\n        (ColumnarValue::Array(base_arr), ColumnarValue::Scalar(val_scalar)) => {\n            let value = match val_scalar {\n                ScalarValue::Float64(Some(v)) => *v,\n                ScalarValue::Float64(None) => {\n                    let result = Float64Array::new_null(base_arr.len());\n                    return Ok(ColumnarValue::Array(Arc::new(result)));\n                }\n                _ => {\n                    return Err(DataFusionError::Internal(format!(\n                        \"spark_log expected Float64 scalar for value, got {val_scalar:?}\",\n                    )));\n                }\n            };\n            let bases = base_arr\n                .as_any()\n                .downcast_ref::<Float64Array>()\n                .ok_or_else(|| {\n                    DataFusionError::Internal(format!(\n                        \"spark_log expected Float64 for base, got {:?}\",\n                        base_arr.data_type()\n                    ))\n                })?;\n            let result: Float64Array = bases\n                .iter()\n                .map(|b| b.and_then(|base| compute(base, value)))\n                .collect();\n            Ok(ColumnarValue::Array(Arc::new(result)))\n        }\n        (ColumnarValue::Scalar(base_scalar), ColumnarValue::Scalar(val_scalar)) => {\n            let result = match (base_scalar, val_scalar) {\n                (ScalarValue::Float64(Some(base)), ScalarValue::Float64(Some(value))) => {\n                    ScalarValue::Float64(compute(*base, *value))\n                }\n                (ScalarValue::Float64(_), ScalarValue::Float64(_)) => ScalarValue::Float64(None),\n                _ => {\n                    return Err(DataFusionError::Internal(format!(\n                        \"spark_log expected Float64 scalars, got {base_scalar:?} and {val_scalar:?}\",\n                    )));\n                }\n            };\n            Ok(ColumnarValue::Scalar(result))\n        }\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::*;\n    use arrow::array::Array;\n\n    #[test]\n    fn test_spark_log_basic() {\n        let bases = Float64Array::from(vec![10.0, 2.0, 10.0]);\n        let values = Float64Array::from(vec![100.0, 8.0, 1.0]);\n        let result = spark_log(&[\n            ColumnarValue::Array(Arc::new(bases)),\n            ColumnarValue::Array(Arc::new(values)),\n        ])\n        .unwrap();\n        if let ColumnarValue::Array(arr) = result {\n            let arr = arr.as_any().downcast_ref::<Float64Array>().unwrap();\n            assert!((arr.value(0) - 2.0).abs() < 1e-10);\n            assert!((arr.value(1) - 3.0).abs() < 1e-10);\n            assert!((arr.value(2) - 0.0).abs() < 1e-10);\n        } else {\n            panic!(\"expected array result\");\n        }\n    }\n\n    
#[test]\n    fn test_spark_log_non_positive_returns_null() {\n        let bases = Float64Array::from(vec![Some(0.0), Some(-1.0), Some(10.0), Some(10.0)]);\n        let values = Float64Array::from(vec![Some(10.0), Some(10.0), Some(0.0), Some(-1.0)]);\n        let result = spark_log(&[\n            ColumnarValue::Array(Arc::new(bases)),\n            ColumnarValue::Array(Arc::new(values)),\n        ])\n        .unwrap();\n        if let ColumnarValue::Array(arr) = result {\n            let arr = arr.as_any().downcast_ref::<Float64Array>().unwrap();\n            assert!(arr.is_null(0));\n            assert!(arr.is_null(1));\n            assert!(arr.is_null(2));\n            assert!(arr.is_null(3));\n        } else {\n            panic!(\"expected array result\");\n        }\n    }\n\n    #[test]\n    fn test_spark_log_null_propagation() {\n        let bases = Float64Array::from(vec![Some(10.0), None]);\n        let values = Float64Array::from(vec![None, Some(10.0)]);\n        let result = spark_log(&[\n            ColumnarValue::Array(Arc::new(bases)),\n            ColumnarValue::Array(Arc::new(values)),\n        ])\n        .unwrap();\n        if let ColumnarValue::Array(arr) = result {\n            let arr = arr.as_any().downcast_ref::<Float64Array>().unwrap();\n            assert!(arr.is_null(0));\n            assert!(arr.is_null(1));\n        } else {\n            panic!(\"expected array result\");\n        }\n    }\n\n    #[test]\n    fn test_spark_log_base_one_returns_nan() {\n        // log(1, 1) = ln(1) / ln(1) = 0/0 = NaN\n        let bases = Float64Array::from(vec![1.0]);\n        let values = Float64Array::from(vec![1.0]);\n        let result = spark_log(&[\n            ColumnarValue::Array(Arc::new(bases)),\n            ColumnarValue::Array(Arc::new(values)),\n        ])\n        .unwrap();\n        if let ColumnarValue::Array(arr) = result {\n            let arr = arr.as_any().downcast_ref::<Float64Array>().unwrap();\n            assert!(arr.value(0).is_nan());\n        } else {\n            panic!(\"expected array result\");\n        }\n    }\n}\n"
  },
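  {
    "path": "native/spark-expr/examples/spark_log_sketch.rs",
    "content": "// NOTE: hypothetical illustrative sketch, not a file from the Comet source tree.\n// It isolates the core rule behind spark_log in\n// native/spark-expr/src/math_funcs/log.rs: Spark's two-argument Logarithm\n// evaluates log(base, value) = ln(value) / ln(base) and returns NULL (None here)\n// whenever base <= 0 or value <= 0, rather than the NaN or -inf a bare ln would\n// produce.\n\nfn spark_log2(base: f64, value: f64) -> Option<f64> {\n    if base <= 0.0 || value <= 0.0 {\n        None\n    } else {\n        Some(value.ln() / base.ln())\n    }\n}\n\nfn main() {\n    assert!((spark_log2(10.0, 100.0).unwrap() - 2.0).abs() < 1e-10);\n    assert!((spark_log2(2.0, 8.0).unwrap() - 3.0).abs() < 1e-10);\n    assert_eq!(spark_log2(-1.0, 10.0), None); // non-positive base -> NULL\n    assert_eq!(spark_log2(10.0, 0.0), None); // non-positive value -> NULL\n    // base == 1 is not filtered out: ln(1) == 0, so 0/0 yields NaN, matching the\n    // test_spark_log_base_one_returns_nan test above.\n    assert!(spark_log2(1.0, 1.0).unwrap().is_nan());\n}\n"
  },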
  {
    "path": "native/spark-expr/src/math_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub(crate) mod abs;\nmod ceil;\npub(crate) mod checked_arithmetic;\nmod div;\nmod floor;\npub mod internal;\npub(crate) mod log;\npub mod modulo_expr;\nmod negative;\nmod round;\npub(crate) mod unhex;\nmod utils;\nmod wide_decimal_binary_expr;\n\npub use ceil::spark_ceil;\npub use div::spark_decimal_div;\npub use div::spark_decimal_integral_div;\npub use floor::spark_floor;\npub use internal::*;\npub use log::spark_log;\npub use modulo_expr::create_modulo_expr;\npub use negative::{create_negate_expr, NegativeExpr};\npub use round::spark_round;\npub use unhex::spark_unhex;\npub use wide_decimal_binary_expr::{WideDecimalBinaryExpr, WideDecimalOp};\n"
  },
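  {
    "path": "native/spark-expr/examples/call_math_funcs_sketch.rs",
    "content": "// NOTE: hypothetical usage sketch, not a file from the Comet source tree. The\n// functions re-exported by math_funcs/mod.rs take DataFusion ColumnarValue\n// arguments, so they can be exercised directly without building a physical plan.\n// The crate-root import path below is an assumption; it presumes the crate\n// re-exports the math_funcs items at its root.\n\nuse datafusion::common::ScalarValue;\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion_comet_spark_expr::spark_log; // assumed re-export path\n\nfn main() -> datafusion::common::Result<()> {\n    let args = [\n        ColumnarValue::Scalar(ScalarValue::Float64(Some(2.0))), // base\n        ColumnarValue::Scalar(ScalarValue::Float64(Some(32.0))), // value\n    ];\n    match spark_log(&args)? {\n        ColumnarValue::Scalar(ScalarValue::Float64(Some(v))) => {\n            assert!((v - 5.0).abs() < 1e-10); // log(2, 32) == 5\n        }\n        other => panic!(\"unexpected result: {other:?}\"),\n    }\n    Ok(())\n}\n"
  },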
  {
    "path": "native/spark-expr/src/math_funcs/modulo_expr.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::{create_comet_physical_fun, IfExpr};\nuse crate::{divide_by_zero_error, Cast, EvalMode, SparkCastOptions};\nuse arrow::compute::kernels::numeric::rem;\nuse arrow::datatypes::*;\nuse datafusion::common::{exec_err, internal_err, DataFusionError, Result, ScalarValue};\nuse datafusion::config::ConfigOptions;\nuse datafusion::execution::FunctionRegistry;\nuse datafusion::physical_expr::expressions::{lit, BinaryExpr};\nuse datafusion::physical_expr::ScalarFunctionExpr;\nuse datafusion::physical_expr_common::datum::{apply, apply_cmp_for_nested};\nuse datafusion::{\n    logical_expr::{ColumnarValue, Operator},\n    physical_expr::PhysicalExpr,\n};\nuse std::cmp::max;\nuse std::sync::Arc;\n\n/// Spark-compliant modulo function. If `fail_on_error` is true, then this function computes modulo\n/// in ANSI mode and returns an error on division by zero, otherwise it returns `NULL` for such\n/// cases.\npub fn spark_modulo(args: &[ColumnarValue], fail_on_error: bool) -> Result<ColumnarValue> {\n    if args.len() != 2 {\n        return exec_err!(\"modulo expects exactly two arguments\");\n    }\n\n    let lhs = &args[0];\n    let rhs = &args[1];\n\n    let left_data_type = lhs.data_type();\n    let right_data_type = rhs.data_type();\n\n    if left_data_type.is_nested() {\n        if right_data_type != left_data_type {\n            return internal_err!(\"Type mismatch for spark modulo operation\");\n        }\n        return apply_cmp_for_nested(Operator::Modulo, lhs, rhs);\n    }\n\n    match apply(lhs, rhs, rem) {\n        Ok(result) => Ok(result),\n        Err(e) if e.to_string().contains(\"Divide by zero\") && fail_on_error => {\n            // Return Spark-compliant divide by zero error.\n            Err(divide_by_zero_error().into())\n        }\n        Err(e) => Err(e),\n    }\n}\n\npub fn create_modulo_expr(\n    left: Arc<dyn PhysicalExpr>,\n    right: Arc<dyn PhysicalExpr>,\n    data_type: DataType,\n    input_schema: SchemaRef,\n    fail_on_error: bool,\n    registry: &dyn FunctionRegistry,\n) -> Result<Arc<dyn PhysicalExpr>, DataFusionError> {\n    // For non-ANSI mode, wrap the right expression such that any zero value is replaced with `NULL`\n    // to prevent divide by zero error.\n    let right_non_ansi_safe = if !fail_on_error {\n        null_if_zero_primitive(right, &input_schema)?\n    } else {\n        right\n    };\n\n    // If the data type is `Decimal128` and the (scale + integral part) exceeds the maximum allowed\n    // for `Decimal128`, then cast both operands to `Decimal256` before creating the modulo scalar\n    // expression, otherwise, create the modulo scalar expression directly.\n    match (\n        left.data_type(&input_schema),\n        
right_non_ansi_safe.data_type(&input_schema),\n    ) {\n        (Ok(DataType::Decimal128(p1, s1)), Ok(DataType::Decimal128(p2, s2)))\n            if max(s1, s2) as u8 + max(p1 - s1 as u8, p2 - s2 as u8) > DECIMAL128_MAX_PRECISION =>\n        {\n            let left_256 = Arc::new(Cast::new(\n                left,\n                DataType::Decimal256(p1, s1),\n                SparkCastOptions::new_without_timezone(EvalMode::Legacy, false),\n                None,\n                None,\n            ));\n            let right_256 = Arc::new(Cast::new(\n                right_non_ansi_safe,\n                DataType::Decimal256(p2, s2),\n                SparkCastOptions::new_without_timezone(EvalMode::Legacy, false),\n                None,\n                None,\n            ));\n\n            // The UDF's return type must match what Arrow's rem function will actually return.\n            // Since we're operating on Decimal256 inputs, rem will return Decimal256.\n            let decimal256_return_type = match &data_type {\n                DataType::Decimal128(p, s) => DataType::Decimal256(*p, *s),\n                other => other.clone(),\n            };\n            let modulo_scalar_func = create_modulo_scalar_function(\n                left_256,\n                right_256,\n                &decimal256_return_type,\n                registry,\n                fail_on_error,\n            )?;\n\n            Ok(Arc::new(Cast::new(\n                modulo_scalar_func,\n                data_type,\n                SparkCastOptions::new_without_timezone(EvalMode::Legacy, false),\n                None,\n                None,\n            )))\n        }\n        _ => create_modulo_scalar_function(\n            left,\n            right_non_ansi_safe,\n            &data_type,\n            registry,\n            fail_on_error,\n        ),\n    }\n}\n\nfn null_if_zero_primitive(\n    expression: Arc<dyn PhysicalExpr>,\n    input_schema: &Schema,\n) -> Result<Arc<dyn PhysicalExpr>, DataFusionError> {\n    let expr_data_type = expression.data_type(input_schema)?;\n\n    if is_primitive_datatype(&expr_data_type) {\n        let zero = match expr_data_type {\n            DataType::Int8 => ScalarValue::Int8(Some(0)),\n            DataType::Int16 => ScalarValue::Int16(Some(0)),\n            DataType::Int32 => ScalarValue::Int32(Some(0)),\n            DataType::Int64 => ScalarValue::Int64(Some(0)),\n            DataType::UInt8 => ScalarValue::UInt8(Some(0)),\n            DataType::UInt16 => ScalarValue::UInt16(Some(0)),\n            DataType::UInt32 => ScalarValue::UInt32(Some(0)),\n            DataType::UInt64 => ScalarValue::UInt64(Some(0)),\n            DataType::Float32 => ScalarValue::Float32(Some(0.0)),\n            DataType::Float64 => ScalarValue::Float64(Some(0.0)),\n            DataType::Decimal128(p, s) => ScalarValue::Decimal128(Some(0), p, s),\n            DataType::Decimal256(p, s) => ScalarValue::Decimal256(Some(i256::from(0)), p, s),\n            _ => return Ok(expression),\n        };\n\n        // Create an expression of the form `if (eval(expr) == Literal(0)) then NULL else eval(expr)`.\n        // It evaluates to null for rows whose divisor is zero, preventing the divide by zero\n        // error.\n        let eq_expr = Arc::new(BinaryExpr::new(\n            Arc::<dyn PhysicalExpr>::clone(&expression),\n            Operator::Eq,\n            lit(zero),\n        ));\n        let null_literal = lit(ScalarValue::try_new_null(&expr_data_type)?);\n        let if_expr = Arc::new(IfExpr::new(eq_expr, 
null_literal, expression));\n        Ok(if_expr)\n    } else {\n        Ok(expression)\n    }\n}\n\nfn is_primitive_datatype(dt: &DataType) -> bool {\n    matches!(\n        dt,\n        DataType::Int8\n            | DataType::Int16\n            | DataType::Int32\n            | DataType::Int64\n            | DataType::UInt8\n            | DataType::UInt16\n            | DataType::UInt32\n            | DataType::UInt64\n            | DataType::Float32\n            | DataType::Float64\n            | DataType::Decimal128(_, _)\n            | DataType::Decimal256(_, _)\n    )\n}\n\nfn create_modulo_scalar_function(\n    left: Arc<dyn PhysicalExpr>,\n    right: Arc<dyn PhysicalExpr>,\n    data_type: &DataType,\n    registry: &dyn FunctionRegistry,\n    fail_on_error: bool,\n) -> Result<Arc<dyn PhysicalExpr>, DataFusionError> {\n    let func_name = \"spark_modulo\";\n    let modulo_expr =\n        create_comet_physical_fun(func_name, data_type.clone(), registry, Some(fail_on_error))?;\n    Ok(Arc::new(ScalarFunctionExpr::new(\n        func_name,\n        modulo_expr,\n        vec![left, right],\n        Arc::new(Field::new(func_name, data_type.clone(), true)),\n        Arc::new(ConfigOptions::default()),\n    )))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{\n        Array, ArrayRef, Decimal128Array, Decimal128Builder, Int32Array, PrimitiveArray,\n        RecordBatch,\n    };\n    use datafusion::logical_expr::ColumnarValue;\n    use datafusion::physical_expr::expressions::{Column, Literal};\n    use datafusion::prelude::SessionContext;\n\n    fn with_fail_on_error<F: Fn(bool)>(test_fn: F) {\n        for fail_on_error in [true, false] {\n            test_fn(fail_on_error);\n        }\n    }\n\n    pub fn verify_result<T>(\n        expr: Arc<dyn PhysicalExpr>,\n        batch: RecordBatch,\n        should_fail: bool,\n        expected_result: Option<Arc<PrimitiveArray<T>>>,\n    ) where\n        T: ArrowPrimitiveType,\n    {\n        let actual_result = expr.evaluate(&batch);\n\n        if should_fail {\n            match actual_result {\n                Err(error) => {\n                    assert!(\n                        error\n                            .to_string()\n                            .contains(\"[DIVIDE_BY_ZERO] Division by zero\"),\n                        \"Error message did not match. 
Actual message: {error}\"\n                    );\n                }\n                Ok(value) => {\n                    panic!(\"Expected error, but got: {value:?}\");\n                }\n            }\n        } else {\n            match (actual_result, expected_result) {\n                (Ok(ColumnarValue::Array(ref actual)), Some(expected)) => {\n                    assert_eq!(actual.len(), expected.len(), \"Array length mismatch\");\n\n                    let actual_arr = actual.as_any().downcast_ref::<PrimitiveArray<T>>().unwrap();\n                    let expected_arr = expected\n                        .as_any()\n                        .downcast_ref::<PrimitiveArray<T>>()\n                        .unwrap();\n\n                    for i in 0..actual_arr.len() {\n                        assert_eq!(\n                            actual_arr.is_null(i),\n                            expected_arr.is_null(i),\n                            \"Nullity mismatch at index {i}\"\n                        );\n                        if !actual_arr.is_null(i) {\n                            let actual_value = actual_arr.value(i);\n                            let expected_value = expected_arr.value(i);\n                            assert_eq!(\n                                actual_value, expected_value,\n                                \"Mismatch at index {i}, actual {actual_value:?}, expected {expected_value:?}\"\n                            );\n                        }\n                    }\n                }\n                (actual, expected) => {\n                    panic!(\"Actual: {actual:?}, expected: {expected:?}\");\n                }\n            }\n        }\n    }\n\n    #[test]\n    fn test_modulo_basic_int() {\n        with_fail_on_error(|fail_on_error| {\n            let schema = Arc::new(Schema::new(vec![\n                Field::new(\"a\", DataType::Int32, false),\n                Field::new(\"b\", DataType::Int32, false),\n            ]));\n\n            let a_array = Arc::new(Int32Array::from(vec![3, 2, i32::MIN]));\n            let b_array = Arc::new(Int32Array::from(vec![1, 5, -1]));\n            let batch = RecordBatch::try_new(Arc::clone(&schema), vec![a_array, b_array]).unwrap();\n\n            let left_expr = Arc::new(Column::new(\"a\", 0));\n            let right_expr = Arc::new(Column::new(\"b\", 1));\n\n            let session_ctx = SessionContext::new();\n            let modulo_expr = create_modulo_expr(\n                left_expr,\n                right_expr,\n                DataType::Int32,\n                schema,\n                fail_on_error,\n                &session_ctx.state(),\n            )\n            .unwrap();\n\n            // This test case should not fail as there is no division by zero.\n            let should_fail = false;\n            let expected_result = Arc::new(Int32Array::from(vec![0, 2, 0]));\n            verify_result(modulo_expr, batch, should_fail, Some(expected_result));\n        })\n    }\n\n    #[test]\n    fn test_modulo_basic_decimal() {\n        with_fail_on_error(|fail_on_error| {\n            let schema = Arc::new(Schema::new(vec![\n                Field::new(\"a\", DataType::Decimal128(18, 4), false),\n                Field::new(\"b\", DataType::Decimal128(18, 4), false),\n            ]));\n\n            let mut a_builder =\n                Decimal128Builder::with_capacity(2).with_data_type(DataType::Decimal128(18, 4));\n            a_builder.append_value(3000000000000000000);\n            
a_builder.append_value(2000000000000000000);\n            let a_array: ArrayRef = Arc::new(a_builder.finish());\n\n            let mut b_builder =\n                Decimal128Builder::with_capacity(2).with_data_type(DataType::Decimal128(18, 4));\n            b_builder.append_value(1000000000000000000);\n            b_builder.append_value(5000000000000000000);\n            let b_array: ArrayRef = Arc::new(b_builder.finish());\n\n            let batch = RecordBatch::try_new(Arc::clone(&schema), vec![a_array, b_array]).unwrap();\n\n            let left_expr = Arc::new(Column::new(\"a\", 0));\n            let right_expr = Arc::new(Column::new(\"b\", 1));\n\n            let session_ctx = SessionContext::new();\n            let modulo_expr = create_modulo_expr(\n                left_expr,\n                right_expr,\n                DataType::Decimal128(18, 4),\n                schema,\n                fail_on_error,\n                &session_ctx.state(),\n            )\n            .unwrap();\n\n            // This test case should not fail as there is no division by zero.\n            let should_fail = false;\n            let expected_result = Arc::new(Decimal128Array::from(vec![\n                Some(0),\n                Some(2000000000000000000),\n            ]));\n            verify_result(modulo_expr, batch, should_fail, Some(expected_result));\n        })\n    }\n\n    #[test]\n    fn test_modulo_divide_by_zero_int() {\n        with_fail_on_error(|fail_on_error| {\n            let schema = Arc::new(Schema::new(vec![\n                Field::new(\"a\", DataType::Int32, false),\n                Field::new(\"b\", DataType::Int32, false),\n            ]));\n\n            let a_array = Arc::new(Int32Array::from(vec![3]));\n            let b_array = Arc::new(Int32Array::from(vec![0]));\n            let batch = RecordBatch::try_new(Arc::clone(&schema), vec![a_array, b_array]).unwrap();\n\n            let left_expr = Arc::new(Column::new(\"a\", 0));\n            let right_expr = Arc::new(Column::new(\"b\", 1));\n\n            let session_ctx = SessionContext::new();\n            let modulo_expr = create_modulo_expr(\n                left_expr,\n                right_expr,\n                DataType::Int32,\n                schema,\n                fail_on_error,\n                &session_ctx.state(),\n            )\n            .unwrap();\n\n            // Expected result in non-ANSI mode.\n            let expected_result = Arc::new(Int32Array::from(vec![None]));\n            verify_result(modulo_expr, batch, fail_on_error, Some(expected_result));\n        })\n    }\n\n    #[test]\n    fn test_division_by_zero_with_complex_int_expr() {\n        with_fail_on_error(|fail_on_error| {\n            let schema = Arc::new(Schema::new(vec![\n                Field::new(\"a\", DataType::Int32, false),\n                Field::new(\"b\", DataType::Int32, false),\n                Field::new(\"c\", DataType::Int32, false),\n            ]));\n\n            let a_array = Arc::new(Int32Array::from(vec![3, 0]));\n            let b_array = Arc::new(Int32Array::from(vec![2, 4]));\n            let c_array = Arc::new(Int32Array::from(vec![4, 5]));\n            let batch =\n                RecordBatch::try_new(Arc::clone(&schema), vec![a_array, b_array, c_array]).unwrap();\n\n            let left_expr = Arc::new(BinaryExpr::new(\n                Arc::new(Column::new(\"a\", 0)),\n                Operator::Divide,\n                Arc::new(Column::new(\"b\", 1)),\n            ));\n            let right_expr = 
Arc::new(BinaryExpr::new(\n                Arc::new(Literal::new(ScalarValue::Int32(Some(0)))),\n                Operator::Divide,\n                Arc::new(Column::new(\"c\", 2)),\n            ));\n\n            // Computes modulo of (a / b) % (0 / c).\n            let session_ctx = SessionContext::new();\n            let modulo_expr = create_modulo_expr(\n                left_expr,\n                right_expr,\n                DataType::Int32,\n                schema,\n                fail_on_error,\n                &session_ctx.state(),\n            )\n            .unwrap();\n\n            // Expected result in non-ANSI mode.\n            let expected_result = Arc::new(Int32Array::from(vec![None, None]));\n            verify_result(modulo_expr, batch, fail_on_error, Some(expected_result));\n        })\n    }\n}\n"
  },
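  {
    "path": "native/spark-expr/examples/modulo_null_if_zero_sketch.rs",
    "content": "// NOTE: hypothetical illustrative sketch, not a file from the Comet source tree.\n// It models the rewrite performed by null_if_zero_primitive in\n// native/spark-expr/src/math_funcs/modulo_expr.rs. In non-ANSI mode Comet must\n// not let a % 0 surface Arrow's divide-by-zero error, so the divisor is first\n// wrapped as `if (b == 0) NULL else b`; the NULL divisor then propagates to a\n// NULL result, matching Spark's non-ANSI % semantics.\n\nfn null_if_zero(b: i32) -> Option<i32> {\n    if b == 0 {\n        None\n    } else {\n        Some(b)\n    }\n}\n\nfn spark_mod_non_ansi(a: i32, b: i32) -> Option<i32> {\n    // None propagates through the arithmetic, exactly like SQL NULL.\n    null_if_zero(b).map(|divisor| a % divisor)\n}\n\nfn main() {\n    assert_eq!(spark_mod_non_ansi(7, 3), Some(1));\n    assert_eq!(spark_mod_non_ansi(-7, 3), Some(-1)); // sign follows the dividend\n    assert_eq!(spark_mod_non_ansi(7, 0), None); // no error, just NULL\n    // In ANSI mode (fail_on_error = true) the guard is skipped and a % 0\n    // surfaces as Spark's [DIVIDE_BY_ZERO] error instead.\n}\n"
  },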
  {
    "path": "native/spark-expr/src/math_funcs/negative.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::arithmetic_overflow_error;\nuse crate::SparkError;\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::IntervalDayTime;\nuse arrow::datatypes::{DataType, Schema};\nuse arrow::{compute::kernels::numeric::neg_wrapping, datatypes::IntervalDayTimeType};\nuse datafusion::common::{DataFusionError, Result, ScalarValue};\nuse datafusion::logical_expr::sort_properties::ExprProperties;\nuse datafusion::{\n    logical_expr::{interval_arithmetic::Interval, ColumnarValue},\n    physical_expr::PhysicalExpr,\n};\nuse std::fmt::{Display, Formatter};\nuse std::hash::Hash;\nuse std::{any::Any, sync::Arc};\n\npub fn create_negate_expr(\n    expr: Arc<dyn PhysicalExpr>,\n    fail_on_error: bool,\n) -> Result<Arc<dyn PhysicalExpr>, DataFusionError> {\n    Ok(Arc::new(NegativeExpr::new(expr, fail_on_error)))\n}\n\n/// Negative expression\n#[derive(Debug, Eq)]\npub struct NegativeExpr {\n    /// Input expression\n    arg: Arc<dyn PhysicalExpr>,\n    fail_on_error: bool,\n}\n\nimpl Hash for NegativeExpr {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.arg.hash(state);\n        self.fail_on_error.hash(state);\n    }\n}\n\nimpl PartialEq for NegativeExpr {\n    fn eq(&self, other: &Self) -> bool {\n        self.arg.eq(&other.arg) && self.fail_on_error.eq(&other.fail_on_error)\n    }\n}\n\nmacro_rules! 
check_overflow {\n    ($array:expr, $array_type:ty, $min_val:expr, $type_name:expr) => {{\n        let typed_array = $array\n            .as_any()\n            .downcast_ref::<$array_type>()\n            .expect(concat!(stringify!($array_type), \" expected\"));\n        for i in 0..typed_array.len() {\n            if typed_array.value(i) == $min_val {\n                if $type_name == \"byte\" || $type_name == \"short\" {\n                    let value = format!(\"{:?} caused\", typed_array.value(i));\n                    return Err(arithmetic_overflow_error(value.as_str()).into());\n                }\n                return Err(arithmetic_overflow_error($type_name).into());\n            }\n        }\n    }};\n}\n\nimpl NegativeExpr {\n    /// Create a new negative expression\n    pub fn new(arg: Arc<dyn PhysicalExpr>, fail_on_error: bool) -> Self {\n        Self { arg, fail_on_error }\n    }\n\n    /// Get the input expression\n    pub fn arg(&self) -> &Arc<dyn PhysicalExpr> {\n        &self.arg\n    }\n}\n\nimpl std::fmt::Display for NegativeExpr {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        write!(f, \"(- {})\", self.arg)\n    }\n}\n\nimpl PhysicalExpr for NegativeExpr {\n    /// Return a reference to Any that can be used for downcasting\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> Result<DataType> {\n        self.arg.data_type(input_schema)\n    }\n\n    fn nullable(&self, input_schema: &Schema) -> Result<bool> {\n        self.arg.nullable(input_schema)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> Result<ColumnarValue> {\n        let arg = self.arg.evaluate(batch)?;\n\n        // overflow checks only apply in ANSI mode\n        // datatypes supported are byte, short, integer, long, float, interval\n        match arg {\n            ColumnarValue::Array(array) => {\n                if self.fail_on_error {\n                    match array.data_type() {\n                        DataType::Int8 => {\n                            check_overflow!(array, arrow::array::Int8Array, i8::MIN, \"byte\")\n                        }\n                        DataType::Int16 => {\n                            check_overflow!(array, arrow::array::Int16Array, i16::MIN, \"short\")\n                        }\n                        DataType::Int32 => {\n                            check_overflow!(array, arrow::array::Int32Array, i32::MIN, \"integer\")\n                        }\n                        DataType::Int64 => {\n                            check_overflow!(array, arrow::array::Int64Array, i64::MIN, \"long\")\n                        }\n                        DataType::Interval(value) => match value {\n                            arrow::datatypes::IntervalUnit::YearMonth => check_overflow!(\n                                array,\n                                arrow::array::IntervalYearMonthArray,\n                                i32::MIN,\n                                \"interval\"\n                            ),\n                            arrow::datatypes::IntervalUnit::DayTime => check_overflow!(\n                                array,\n                                arrow::array::IntervalDayTimeArray,\n                                IntervalDayTime::MIN,\n                                \"interval\"\n                            ),\n                            arrow::datatypes::IntervalUnit::MonthDayNano => {\n                                // Overflow checks are not supported
\n                            }\n                        },\n                        _ => {\n                            // Overflow checks are not supported for other datatypes\n                        }\n                    }\n                }\n                let result = neg_wrapping(array.as_ref())?;\n                Ok(ColumnarValue::Array(result))\n            }\n            ColumnarValue::Scalar(scalar) => {\n                if self.fail_on_error {\n                    match scalar {\n                        ScalarValue::Int8(Some(i8::MIN)) => {\n                            return Err(arithmetic_overflow_error(\" caused\").into());\n                        }\n                        ScalarValue::Int16(Some(i16::MIN)) => {\n                            return Err(arithmetic_overflow_error(\" caused\").into());\n                        }\n                        ScalarValue::Int32(Some(i32::MIN)) => {\n                            return Err(arithmetic_overflow_error(\"integer\").into());\n                        }\n                        ScalarValue::Int64(Some(i64::MIN)) => {\n                            return Err(arithmetic_overflow_error(\"long\").into());\n                        }\n                        ScalarValue::IntervalDayTime(value) => {\n                            let (days, ms) =\n                                IntervalDayTimeType::to_parts(value.unwrap_or_default());\n                            if days == i32::MIN || ms == i32::MIN {\n                                return Err(arithmetic_overflow_error(\"interval\").into());\n                            }\n                        }\n                        ScalarValue::IntervalYearMonth(Some(i32::MIN)) => {\n                            return Err(arithmetic_overflow_error(\"interval\").into());\n                        }\n                        ScalarValue::IntervalYearMonth(_) => {}\n                        _ => {\n                            // Overflow checks are not supported for other datatypes\n                        }\n                    }\n                }\n                Ok(ColumnarValue::Scalar((scalar.arithmetic_negate())?))\n            }\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.arg]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(NegativeExpr::new(\n            Arc::clone(&children[0]),\n            self.fail_on_error,\n        )))\n    }\n\n    /// Given the child interval of a NegativeExpr, it calculates the NegativeExpr's interval.\n    /// It replaces the upper and lower bounds after multiplying them by -1.\n    /// Ex: `(a, b]` => `[-b, -a)`\n    fn evaluate_bounds(&self, children: &[&Interval]) -> Result<Interval> {\n        Interval::try_new(\n            children[0].upper().arithmetic_negate()?,\n            children[0].lower().arithmetic_negate()?,\n        )\n    }\n\n    /// Given this NegativeExpr's own interval `interval` and the child's known interval in\n    /// `children`, returns the refined child interval obtained by intersecting `children[0]`\n    /// with the negation of `interval`.\n    fn propagate_constraints(\n        &self,\n        interval: &Interval,\n        children: &[&Interval],\n    ) -> Result<Option<Vec<Interval>>> {\n        let child_interval = children[0];\n\n        if child_interval.lower() == &ScalarValue::Int32(Some(i32::MIN))\n            || child_interval.upper() == &ScalarValue::Int32(Some(i32::MIN))\n            || child_interval.lower() == 
&ScalarValue::Int64(Some(i64::MIN))\n            || child_interval.upper() == &ScalarValue::Int64(Some(i64::MIN))\n        {\n            return Err(SparkError::ArithmeticOverflow {\n                from_type: \"long\".to_string(),\n            }\n            .into());\n        }\n\n        let negated_interval = Interval::try_new(\n            interval.upper().arithmetic_negate()?,\n            interval.lower().arithmetic_negate()?,\n        )?;\n\n        Ok(child_interval\n            .intersect(negated_interval)?\n            .map(|result| vec![result]))\n    }\n\n    /// The ordering of a [`NegativeExpr`] is simply the reverse of its child.\n    fn get_properties(&self, children: &[ExprProperties]) -> Result<ExprProperties> {\n        let properties = children[0].clone().with_order(children[0].sort_properties);\n        Ok(properties)\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n}\n"
  },
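  {
    "path": "native/spark-expr/examples/negate_overflow_sketch.rs",
    "content": "// NOTE: hypothetical illustrative sketch, not a file from the Comet source tree.\n// It shows the edge case that NegativeExpr (negative.rs above) guards against in\n// ANSI mode. Two's-complement integers have one more negative value than\n// positive, so the negation of MIN is unrepresentable: wrapping negation hands\n// back MIN itself. Spark's ANSI mode must therefore raise an arithmetic overflow\n// error, while legacy mode silently keeps the wrapped value.\n\nfn main() {\n    // Legacy (non-ANSI) behaviour, analogous to the neg_wrapping kernel.\n    assert_eq!(i32::MIN.wrapping_neg(), i32::MIN);\n\n    // ANSI behaviour, analogous to the check_overflow! guard: detect and fail.\n    assert_eq!(i32::MIN.checked_neg(), None);\n    assert_eq!((i32::MIN + 1).checked_neg(), Some(i32::MAX));\n\n    // Every signed width has exactly one such problem value.\n    assert_eq!(i8::MIN.checked_neg(), None);\n    assert_eq!(i16::MIN.checked_neg(), None);\n    assert_eq!(i64::MIN.checked_neg(), None);\n}\n"
  },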
  {
    "path": "native/spark-expr/src/math_funcs/round.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::arithmetic_overflow_error;\nuse crate::math_funcs::utils::{get_precision_scale, make_decimal_array, make_decimal_scalar};\nuse arrow::array::{Array, ArrowNativeTypeOp};\nuse arrow::array::{Int16Array, Int32Array, Int64Array, Int8Array};\nuse arrow::datatypes::{DataType, Field};\nuse arrow::error::ArrowError;\nuse datafusion::common::config::ConfigOptions;\nuse datafusion::common::{exec_err, internal_err, DataFusionError, ScalarValue};\nuse datafusion::functions::math::round::RoundFunc;\nuse datafusion::logical_expr::{ScalarFunctionArgs, ScalarUDFImpl};\nuse datafusion::physical_plan::ColumnarValue;\nuse std::{cmp::min, sync::Arc};\n\nmacro_rules! integer_round {\n    ($X:expr, $DIV:expr, $HALF:expr, $FAIL_ON_ERROR:expr) => {{\n        let rem = $X % $DIV;\n        if rem <= -$HALF {\n            if $FAIL_ON_ERROR {\n                ($X - rem).sub_checked($DIV).map_err(|_| {\n                    ArrowError::ComputeError(arithmetic_overflow_error(\"integer\").to_string())\n                })\n            } else {\n                Ok(($X - rem).sub_wrapping($DIV))\n            }\n        } else if rem >= $HALF {\n            if $FAIL_ON_ERROR {\n                ($X - rem).add_checked($DIV).map_err(|_| {\n                    ArrowError::ComputeError(arithmetic_overflow_error(\"integer\").to_string())\n                })\n            } else {\n                Ok(($X - rem).add_wrapping($DIV))\n            }\n        } else {\n            if $FAIL_ON_ERROR {\n                $X.sub_checked(rem).map_err(|_| {\n                    ArrowError::ComputeError(arithmetic_overflow_error(\"integer\").to_string())\n                })\n            } else {\n                Ok($X.sub_wrapping(rem))\n            }\n        }\n    }};\n}\n\nmacro_rules! round_integer_array {\n    ($ARRAY:expr, $POINT:expr, $TYPE:ty, $NATIVE:ty, $FAIL_ON_ERROR:expr) => {{\n        let array = $ARRAY.as_any().downcast_ref::<$TYPE>().unwrap();\n        let ten: $NATIVE = 10;\n        let result: $TYPE = if let Some(div) = ten.checked_pow((-(*$POINT)) as u32) {\n            let half = div / 2;\n            arrow::compute::kernels::arity::try_unary(array, |x| {\n                integer_round!(x, div, half, $FAIL_ON_ERROR)\n            })?\n        } else {\n            arrow::compute::kernels::arity::try_unary(array, |_| Ok(0))?\n        };\n        Ok(ColumnarValue::Array(Arc::new(result)))\n    }};\n}\n\nmacro_rules! 
round_integer_scalar {\n    ($SCALAR:expr, $POINT:expr, $TYPE:expr, $NATIVE:ty, $FAIL_ON_ERROR:expr) => {{\n        let ten: $NATIVE = 10;\n        if let Some(div) = ten.checked_pow((-(*$POINT)) as u32) {\n            let half = div / 2;\n            let scalar_opt = match $SCALAR {\n                Some(x) => match integer_round!(x, div, half, $FAIL_ON_ERROR) {\n                    Ok(v) => Some(v),\n                    Err(e) => {\n                        return Err(DataFusionError::ArrowError(\n                            Box::from(e),\n                            Some(DataFusionError::get_back_trace()),\n                        ))\n                    }\n                },\n                None => None,\n            };\n            Ok(ColumnarValue::Scalar($TYPE(scalar_opt)))\n        } else {\n            Ok(ColumnarValue::Scalar($TYPE(Some(0))))\n        }\n    }};\n}\n\n/// `round` function that simulates Spark `round` expression\npub fn spark_round(\n    args: &[ColumnarValue],\n    data_type: &DataType,\n    fail_on_error: bool,\n) -> Result<ColumnarValue, DataFusionError> {\n    let value = &args[0];\n    let point = &args[1];\n    let ColumnarValue::Scalar(ScalarValue::Int64(Some(point))) = point else {\n        return internal_err!(\"Invalid point argument for Round(): {:#?}\", point);\n    };\n    // DataFusion's RoundFunc expects Int32 for decimal_places\n    let point_i32 = ColumnarValue::Scalar(ScalarValue::Int32(Some(*point as i32)));\n    match value {\n        ColumnarValue::Array(array) => match array.data_type() {\n            DataType::Int64 if *point < 0 => {\n                round_integer_array!(array, point, Int64Array, i64, fail_on_error)\n            }\n            DataType::Int32 if *point < 0 => {\n                round_integer_array!(array, point, Int32Array, i32, fail_on_error)\n            }\n            DataType::Int16 if *point < 0 => {\n                round_integer_array!(array, point, Int16Array, i16, fail_on_error)\n            }\n            DataType::Int8 if *point < 0 => {\n                round_integer_array!(array, point, Int8Array, i8, fail_on_error)\n            }\n            DataType::Decimal128(_, scale) if *scale >= 0 => {\n                let f = decimal_round_f(scale, point);\n                let (precision, scale) = get_precision_scale(data_type);\n                make_decimal_array(array, precision, scale, &f)\n            }\n            DataType::Float32 | DataType::Float64 => {\n                let round_udf = RoundFunc::new();\n                let return_field = Arc::new(Field::new(\"round\", array.data_type().clone(), true));\n                let args_for_round = ScalarFunctionArgs {\n                    args: vec![ColumnarValue::Array(Arc::clone(array)), point_i32.clone()],\n                    number_rows: array.len(),\n                    return_field,\n                    arg_fields: vec![],\n                    config_options: Arc::new(ConfigOptions::default()),\n                };\n                round_udf.invoke_with_args(args_for_round)\n            }\n            dt => exec_err!(\"Not supported datatype for ROUND: {dt}\"),\n        },\n        ColumnarValue::Scalar(a) => match a {\n            ScalarValue::Int64(a) if *point < 0 => {\n                round_integer_scalar!(a, point, ScalarValue::Int64, i64, fail_on_error)\n            }\n            ScalarValue::Int32(a) if *point < 0 => {\n                round_integer_scalar!(a, point, ScalarValue::Int32, i32, fail_on_error)\n            }\n            
ScalarValue::Int16(a) if *point < 0 => {\n                round_integer_scalar!(a, point, ScalarValue::Int16, i16, fail_on_error)\n            }\n            ScalarValue::Int8(a) if *point < 0 => {\n                round_integer_scalar!(a, point, ScalarValue::Int8, i8, fail_on_error)\n            }\n            ScalarValue::Decimal128(a, _, scale) if *scale >= 0 => {\n                let f = decimal_round_f(scale, point);\n                let (precision, scale) = get_precision_scale(data_type);\n                make_decimal_scalar(a, precision, scale, &f)\n            }\n            ScalarValue::Float32(_) | ScalarValue::Float64(_) => {\n                let round_udf = RoundFunc::new();\n                let data_type = a.data_type();\n                let return_field = Arc::new(Field::new(\"round\", data_type, true));\n                let args_for_round = ScalarFunctionArgs {\n                    args: vec![ColumnarValue::Scalar(a.clone()), point_i32.clone()],\n                    number_rows: 1,\n                    return_field,\n                    arg_fields: vec![],\n                    config_options: Arc::new(ConfigOptions::default()),\n                };\n                round_udf.invoke_with_args(args_for_round)\n            }\n            dt => exec_err!(\"Not supported datatype for ROUND: {dt}\"),\n        },\n    }\n}\n\n// Spark uses BigDecimal (see the RoundBase implementation in Spark). We achieve the same result by\n// 1) adding half of the divisor, 2) rounding down by division, 3) adjusting precision by multiplication\n#[inline]\nfn decimal_round_f(scale: &i8, point: &i64) -> Box<dyn Fn(i128) -> i128> {\n    if *point < 0 {\n        if let Some(div) = 10_i128.checked_pow((-(*point) as u32) + (*scale as u32)) {\n            let half = div / 2;\n            let mul = 10_i128.pow_wrapping((-(*point)) as u32);\n            // i128 can hold 39 digits of a base 10 number, adding half will not cause overflow\n            Box::new(move |x: i128| (x + x.signum() * half) / div * mul)\n        } else {\n            Box::new(move |_: i128| 0)\n        }\n    } else {\n        let div = 10_i128.pow_wrapping((*scale as u32) - min(*scale as u32, *point as u32));\n        let half = div / 2;\n        Box::new(move |x: i128| (x + x.signum() * half) / div)\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use std::sync::Arc;\n\n    use crate::spark_round;\n\n    use arrow::array::{Float32Array, Float64Array};\n    use arrow::datatypes::DataType;\n    use datafusion::common::cast::{as_float32_array, as_float64_array};\n    use datafusion::common::{Result, ScalarValue};\n    use datafusion::physical_plan::ColumnarValue;\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // rounding does not work when miri enabled\n    fn test_round_f32_array() -> Result<()> {\n        let args = vec![\n            ColumnarValue::Array(Arc::new(Float32Array::from(vec![\n                125.2345, 15.3455, 0.1234, 0.125, 0.785, 123.123,\n            ]))),\n            ColumnarValue::Scalar(ScalarValue::Int64(Some(2))),\n        ];\n        let ColumnarValue::Array(result) = spark_round(&args, &DataType::Float32, false)? 
else {\n            unreachable!()\n        };\n        let floats = as_float32_array(&result)?;\n        let expected = Float32Array::from(vec![125.23, 15.35, 0.12, 0.13, 0.79, 123.12]);\n        assert_eq!(floats, &expected);\n        Ok(())\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // rounding does not work when miri enabled\n    fn test_round_f64_array() -> Result<()> {\n        let args = vec![\n            ColumnarValue::Array(Arc::new(Float64Array::from(vec![\n                125.2345, 15.3455, 0.1234, 0.125, 0.785, 123.123,\n            ]))),\n            ColumnarValue::Scalar(ScalarValue::Int64(Some(2))),\n        ];\n        let ColumnarValue::Array(result) = spark_round(&args, &DataType::Float64, false)? else {\n            unreachable!()\n        };\n        let floats = as_float64_array(&result)?;\n        let expected = Float64Array::from(vec![125.23, 15.35, 0.12, 0.13, 0.79, 123.12]);\n        assert_eq!(floats, &expected);\n        Ok(())\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // rounding does not work when miri enabled\n    fn test_round_f32_scalar() -> Result<()> {\n        let args = vec![\n            ColumnarValue::Scalar(ScalarValue::Float32(Some(125.2345))),\n            ColumnarValue::Scalar(ScalarValue::Int64(Some(2))),\n        ];\n        let ColumnarValue::Scalar(ScalarValue::Float32(Some(result))) =\n            spark_round(&args, &DataType::Float32, false)?\n        else {\n            unreachable!()\n        };\n        assert_eq!(result, 125.23);\n        Ok(())\n    }\n\n    #[test]\n    #[cfg_attr(miri, ignore)] // rounding does not work when miri enabled\n    fn test_round_f64_scalar() -> Result<()> {\n        let args = vec![\n            ColumnarValue::Scalar(ScalarValue::Float64(Some(125.2345))),\n            ColumnarValue::Scalar(ScalarValue::Int64(Some(2))),\n        ];\n        let ColumnarValue::Scalar(ScalarValue::Float64(Some(result))) =\n            spark_round(&args, &DataType::Float64, false)?\n        else {\n            unreachable!()\n        };\n        assert_eq!(result, 125.23);\n        Ok(())\n    }\n}\n"
  },
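  {
    "path": "native/spark-expr/examples/round_half_up_sketch.rs",
    "content": "// NOTE: hypothetical illustrative sketch, not a file from the Comet source tree.\n// It isolates the \"add half, divide, multiply back\" trick used by the\n// integer_round! macro and decimal_round_f in math_funcs/round.rs above. Spark\n// rounds HALF_UP (ties away from zero), and x.signum() makes the half-adjustment\n// point away from zero for negative inputs. The real code additionally uses\n// checked/wrapping arithmetic so that ANSI mode can report overflow; this sketch\n// ignores overflow for clarity.\n\nfn round_int_half_up(x: i64, digits_to_drop: u32) -> i64 {\n    // Equivalent to Spark's round(x, -digits_to_drop) for integer inputs.\n    let div = 10i64.pow(digits_to_drop);\n    let half = div / 2;\n    // Add half away from zero, truncate toward zero, then scale back up.\n    (x + x.signum() * half) / div * div\n}\n\nfn main() {\n    assert_eq!(round_int_half_up(12345, 2), 12300); // round(12345, -2)\n    assert_eq!(round_int_half_up(12350, 2), 12400); // tie rounds away from zero\n    assert_eq!(round_int_half_up(-12350, 2), -12400); // ...in both directions\n    assert_eq!(round_int_half_up(9999, 3), 10000); // carries across digit groups\n}\n"
  },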
  {
    "path": "native/spark-expr/src/math_funcs/unhex.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::sync::Arc;\n\nuse arrow::array::OffsetSizeTrait;\nuse arrow::datatypes::DataType;\nuse datafusion::common::{cast::as_generic_string_array, exec_err, DataFusionError, ScalarValue};\nuse datafusion::logical_expr::ColumnarValue;\n\n/// Helper function to convert a hex digit to a binary value.\nfn unhex_digit(c: u8) -> Result<u8, DataFusionError> {\n    match c {\n        b'0'..=b'9' => Ok(c - b'0'),\n        b'A'..=b'F' => Ok(10 + c - b'A'),\n        b'a'..=b'f' => Ok(10 + c - b'a'),\n        _ => Err(DataFusionError::Execution(\n            \"Input to unhex_digit is not a valid hex digit\".to_string(),\n        )),\n    }\n}\n\n/// Convert a hex string to binary and store the result in `result`. Returns an error if the input\n/// is not a valid hex string.\nfn unhex(hex_str: &str, result: &mut Vec<u8>) -> Result<(), DataFusionError> {\n    let bytes = hex_str.as_bytes();\n\n    let mut i = 0;\n\n    if (bytes.len() & 0x01) != 0 {\n        let v = unhex_digit(bytes[0])?;\n\n        result.push(v);\n        i += 1;\n    }\n\n    while i < bytes.len() {\n        let first = unhex_digit(bytes[i])?;\n        let second = unhex_digit(bytes[i + 1])?;\n        result.push((first << 4) | second);\n\n        i += 2;\n    }\n\n    Ok(())\n}\n\nfn spark_unhex_inner<T: OffsetSizeTrait>(\n    array: &ColumnarValue,\n    fail_on_error: bool,\n) -> Result<ColumnarValue, DataFusionError> {\n    match array {\n        ColumnarValue::Array(array) => {\n            let string_array = as_generic_string_array::<T>(array)?;\n\n            let mut encoded = Vec::new();\n            let mut builder = arrow::array::BinaryBuilder::new();\n\n            for item in string_array.iter() {\n                if let Some(s) = item {\n                    if unhex(s, &mut encoded).is_ok() {\n                        builder.append_value(encoded.as_slice());\n                    } else if fail_on_error {\n                        return exec_err!(\"Input to unhex is not a valid hex string: {s}\");\n                    } else {\n                        builder.append_null();\n                    }\n                    encoded.clear();\n                } else {\n                    builder.append_null();\n                }\n            }\n            Ok(ColumnarValue::Array(Arc::new(builder.finish())))\n        }\n        ColumnarValue::Scalar(ScalarValue::Utf8(Some(string))) => {\n            let mut encoded = Vec::new();\n\n            if unhex(string, &mut encoded).is_ok() {\n                Ok(ColumnarValue::Scalar(ScalarValue::Binary(Some(encoded))))\n            } else if fail_on_error {\n                exec_err!(\"Input to unhex is not a valid hex string: {string}\")\n            } else {\n   
             Ok(ColumnarValue::Scalar(ScalarValue::Binary(None)))\n            }\n        }\n        ColumnarValue::Scalar(ScalarValue::Utf8(None)) => {\n            Ok(ColumnarValue::Scalar(ScalarValue::Binary(None)))\n        }\n        _ => {\n            exec_err!(\n                \"The first argument must be a string scalar or array, but got: {:?}\",\n                array\n            )\n        }\n    }\n}\n\n/// Spark-compatible `unhex` expression\npub fn spark_unhex(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    if args.len() > 2 {\n        return exec_err!(\"unhex takes at most 2 arguments, but got: {}\", args.len());\n    }\n\n    let val_to_unhex = &args[0];\n    let fail_on_error = if args.len() == 2 {\n        match &args[1] {\n            ColumnarValue::Scalar(ScalarValue::Boolean(Some(fail_on_error))) => *fail_on_error,\n            _ => {\n                return exec_err!(\n                    \"The second argument must be boolean scalar, but got: {:?}\",\n                    args[1]\n                );\n            }\n        }\n    } else {\n        false\n    };\n\n    match val_to_unhex.data_type() {\n        DataType::Utf8 => spark_unhex_inner::<i32>(val_to_unhex, fail_on_error),\n        DataType::LargeUtf8 => spark_unhex_inner::<i64>(val_to_unhex, fail_on_error),\n        other => exec_err!(\n            \"The first argument must be a Utf8 or LargeUtf8: {:?}\",\n            other\n        ),\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use std::sync::Arc;\n\n    use arrow::array::make_array;\n    use arrow::array::ArrayData;\n    use arrow::array::{BinaryBuilder, StringBuilder};\n    use datafusion::common::ScalarValue;\n    use datafusion::logical_expr::ColumnarValue;\n\n    use super::unhex;\n\n    #[test]\n    fn test_spark_unhex_null() -> Result<(), Box<dyn std::error::Error>> {\n        let input = ArrayData::new_null(&arrow::datatypes::DataType::Utf8, 2);\n        let output = ArrayData::new_null(&arrow::datatypes::DataType::Binary, 2);\n\n        let input = ColumnarValue::Array(Arc::new(make_array(input)));\n        let expected = ColumnarValue::Array(Arc::new(make_array(output)));\n\n        let result = super::spark_unhex(&[input])?;\n\n        match (result, expected) {\n            (ColumnarValue::Array(result), ColumnarValue::Array(expected)) => {\n                assert_eq!(*result, *expected);\n                Ok(())\n            }\n            _ => Err(\"Unexpected result type\".into()),\n        }\n    }\n\n    #[test]\n    fn test_partial_error() -> Result<(), Box<dyn std::error::Error>> {\n        let mut input = StringBuilder::new();\n\n        input.append_value(\"1CGG\"); // 1C is ok, but GG is invalid\n        input.append_value(\"537061726B2053514C\"); // followed by valid\n\n        let input = ColumnarValue::Array(Arc::new(input.finish()));\n        let fail_on_error = ColumnarValue::Scalar(ScalarValue::Boolean(Some(false)));\n\n        let result = super::spark_unhex(&[input, fail_on_error])?;\n\n        let mut expected = BinaryBuilder::new();\n        expected.append_null();\n        expected.append_value(\"Spark SQL\".as_bytes());\n\n        match (result, ColumnarValue::Array(Arc::new(expected.finish()))) {\n            (ColumnarValue::Array(result), ColumnarValue::Array(expected)) => {\n                assert_eq!(*result, *expected);\n\n                Ok(())\n            }\n            _ => Err(\"Unexpected result type\".into()),\n        }\n    }\n\n    #[test]\n    fn test_unhex_valid() -> Result<(), 
Box<dyn std::error::Error>> {\n        let mut result = Vec::new();\n\n        unhex(\"537061726B2053514C\", &mut result)?;\n        let result_str = std::str::from_utf8(&result)?;\n        assert_eq!(result_str, \"Spark SQL\");\n        result.clear();\n\n        unhex(\"1C\", &mut result)?;\n        assert_eq!(result, vec![28]);\n        result.clear();\n\n        unhex(\"737472696E67\", &mut result)?;\n        assert_eq!(result, \"string\".as_bytes());\n        result.clear();\n\n        unhex(\"1\", &mut result)?;\n        assert_eq!(result, vec![1]);\n        result.clear();\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_odd_length() -> Result<(), Box<dyn std::error::Error>> {\n        let mut result = Vec::new();\n\n        unhex(\"A1B\", &mut result)?;\n        assert_eq!(result, vec![10, 27]);\n        result.clear();\n\n        unhex(\"0A1B\", &mut result)?;\n        assert_eq!(result, vec![10, 27]);\n        result.clear();\n\n        Ok(())\n    }\n\n    #[test]\n    fn test_unhex_empty() {\n        let mut result = Vec::new();\n\n        // Empty hex string\n        unhex(\"\", &mut result).unwrap();\n        assert!(result.is_empty());\n    }\n\n    #[test]\n    fn test_unhex_invalid() {\n        let mut result = Vec::new();\n\n        // Invalid hex strings\n        assert!(unhex(\"##\", &mut result).is_err());\n        assert!(unhex(\"G123\", &mut result).is_err());\n        assert!(unhex(\"hello\", &mut result).is_err());\n        assert!(unhex(\"\\0\", &mut result).is_err());\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/math_funcs/utils.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::cast::AsArray;\nuse arrow::array::types::Decimal128Type;\nuse arrow::array::{ArrayRef, Decimal128Array};\nuse arrow::datatypes::DataType;\nuse datafusion::common::{DataFusionError, ScalarValue};\nuse datafusion::physical_plan::ColumnarValue;\nuse std::sync::Arc;\n\n#[macro_export]\nmacro_rules! downcast_compute_op {\n    ($ARRAY:expr, $NAME:expr, $FUNC:ident, $TYPE:ident, $RESULT:ident) => {{\n        let n = $ARRAY.as_any().downcast_ref::<$TYPE>();\n        match n {\n            Some(array) => {\n                let res: $RESULT =\n                    arrow::compute::kernels::arity::unary(array, |x| x.$FUNC() as i64);\n                Ok(Arc::new(res))\n            }\n            _ => Err(DataFusionError::Internal(format!(\n                \"Invalid data type for {}\",\n                $NAME\n            ))),\n        }\n    }};\n}\n\n#[inline]\npub(crate) fn make_decimal_scalar(\n    a: &Option<i128>,\n    precision: u8,\n    scale: i8,\n    f: &dyn Fn(i128) -> i128,\n) -> Result<ColumnarValue, DataFusionError> {\n    let result = ScalarValue::Decimal128(a.map(f), precision, scale);\n    Ok(ColumnarValue::Scalar(result))\n}\n\n#[inline]\npub(crate) fn make_decimal_array(\n    array: &ArrayRef,\n    precision: u8,\n    scale: i8,\n    f: &dyn Fn(i128) -> i128,\n) -> Result<ColumnarValue, DataFusionError> {\n    let array = array.as_primitive::<Decimal128Type>();\n    let result: Decimal128Array = arrow::compute::kernels::arity::unary(array, f);\n    let result = result.with_data_type(DataType::Decimal128(precision, scale));\n    Ok(ColumnarValue::Array(Arc::new(result)))\n}\n\n#[inline]\npub(crate) fn get_precision_scale(data_type: &DataType) -> (u8, i8) {\n    let DataType::Decimal128(precision, scale) = data_type else {\n        unreachable!()\n    };\n    (*precision, *scale)\n}\n"
  },
  {
    "path": "native/spark-expr/src/math_funcs/wide_decimal_binary_expr.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Fused wide-decimal binary expression for Decimal128 add/sub/mul that may overflow.\n//!\n//! Instead of building a 4-node expression tree (Cast→BinaryExpr→Cast→Cast), this performs\n//! i256 intermediate arithmetic in a single expression, producing only one output array.\n\nuse crate::math_funcs::utils::get_precision_scale;\nuse crate::EvalMode;\nuse arrow::array::{Array, ArrayRef, AsArray, Decimal128Array};\nuse arrow::datatypes::{i256, DataType, Decimal128Type, Schema};\nuse arrow::error::ArrowError;\nuse arrow::record_batch::RecordBatch;\nuse datafusion::common::Result;\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::fmt::{Display, Formatter};\nuse std::hash::Hash;\nuse std::{any::Any, sync::Arc};\n\n/// The arithmetic operation to perform.\n#[derive(Debug, Hash, PartialEq, Eq, Clone, Copy)]\npub enum WideDecimalOp {\n    Add,\n    Subtract,\n    Multiply,\n}\n\nimpl Display for WideDecimalOp {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        match self {\n            WideDecimalOp::Add => write!(f, \"+\"),\n            WideDecimalOp::Subtract => write!(f, \"-\"),\n            WideDecimalOp::Multiply => write!(f, \"*\"),\n        }\n    }\n}\n\n/// A fused expression that evaluates Decimal128 add/sub/mul using i256 intermediate arithmetic,\n/// applies scale adjustment with HALF_UP rounding, checks precision bounds, and outputs\n/// a single Decimal128 array.\n#[derive(Debug, Eq)]\npub struct WideDecimalBinaryExpr {\n    left: Arc<dyn PhysicalExpr>,\n    right: Arc<dyn PhysicalExpr>,\n    op: WideDecimalOp,\n    output_precision: u8,\n    output_scale: i8,\n    eval_mode: EvalMode,\n}\n\nimpl Hash for WideDecimalBinaryExpr {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.left.hash(state);\n        self.right.hash(state);\n        self.op.hash(state);\n        self.output_precision.hash(state);\n        self.output_scale.hash(state);\n        self.eval_mode.hash(state);\n    }\n}\n\nimpl PartialEq for WideDecimalBinaryExpr {\n    fn eq(&self, other: &Self) -> bool {\n        self.left.eq(&other.left)\n            && self.right.eq(&other.right)\n            && self.op == other.op\n            && self.output_precision == other.output_precision\n            && self.output_scale == other.output_scale\n            && self.eval_mode == other.eval_mode\n    }\n}\n\nimpl WideDecimalBinaryExpr {\n    pub fn new(\n        left: Arc<dyn PhysicalExpr>,\n        right: Arc<dyn PhysicalExpr>,\n        op: WideDecimalOp,\n        output_precision: u8,\n        output_scale: i8,\n        eval_mode: EvalMode,\n    ) -> Self {\n        Self {\n            left,\n            
right,\n            op,\n            output_precision,\n            output_scale,\n            eval_mode,\n        }\n    }\n}\n\nimpl Display for WideDecimalBinaryExpr {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"WideDecimalBinaryExpr [{} {} {}, output: Decimal128({}, {})]\",\n            self.left, self.op, self.right, self.output_precision, self.output_scale\n        )\n    }\n}\n\n/// Compute `value / divisor` with HALF_UP rounding.\n#[inline]\nfn div_round_half_up(value: i256, divisor: i256) -> i256 {\n    let (quot, rem) = (value / divisor, value % divisor);\n    // HALF_UP: if |remainder| * 2 >= |divisor|, round away from zero\n    let abs_rem_x2 = if rem < i256::ZERO {\n        rem.wrapping_neg()\n    } else {\n        rem\n    }\n    .wrapping_mul(i256::from_i128(2));\n    let abs_divisor = if divisor < i256::ZERO {\n        divisor.wrapping_neg()\n    } else {\n        divisor\n    };\n    if abs_rem_x2 >= abs_divisor {\n        if (value < i256::ZERO) != (divisor < i256::ZERO) {\n            quot.wrapping_sub(i256::ONE)\n        } else {\n            quot.wrapping_add(i256::ONE)\n        }\n    } else {\n        quot\n    }\n}\n\n/// i256 constant for 10.\nconst I256_TEN: i256 = i256::from_i128(10);\n\n/// Compute 10^exp as i256. Panics if exp > 76 (max representable power of 10 in i256).\n#[inline]\nfn i256_pow10(exp: u32) -> i256 {\n    assert!(exp <= 76, \"i256_pow10: exponent {exp} exceeds maximum 76\");\n    let mut result = i256::ONE;\n    for _ in 0..exp {\n        result = result.wrapping_mul(I256_TEN);\n    }\n    result\n}\n\n/// Maximum unscaled value (as i256) for a given decimal precision.\n/// Precision p allows values in [-10^p + 1, 10^p - 1].\n#[inline]\nfn max_for_precision(precision: u8) -> i256 {\n    i256_pow10(precision as u32).wrapping_sub(i256::ONE)\n}\n\nimpl PhysicalExpr for WideDecimalBinaryExpr {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn data_type(&self, _input_schema: &Schema) -> Result<DataType> {\n        Ok(DataType::Decimal128(\n            self.output_precision,\n            self.output_scale,\n        ))\n    }\n\n    fn nullable(&self, _input_schema: &Schema) -> Result<bool> {\n        Ok(true)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> Result<ColumnarValue> {\n        let left_val = self.left.evaluate(batch)?;\n        let right_val = self.right.evaluate(batch)?;\n\n        let (left_arr, right_arr): (ArrayRef, ArrayRef) = match (&left_val, &right_val) {\n            (ColumnarValue::Array(l), ColumnarValue::Array(r)) => (Arc::clone(l), Arc::clone(r)),\n            (ColumnarValue::Scalar(l), ColumnarValue::Array(r)) => {\n                (l.to_array_of_size(r.len())?, Arc::clone(r))\n            }\n            (ColumnarValue::Array(l), ColumnarValue::Scalar(r)) => {\n                (Arc::clone(l), r.to_array_of_size(l.len())?)\n            }\n            (ColumnarValue::Scalar(l), ColumnarValue::Scalar(r)) => (l.to_array()?, r.to_array()?),\n        };\n\n        let left = left_arr.as_primitive::<Decimal128Type>();\n        let right = right_arr.as_primitive::<Decimal128Type>();\n        let (_p1, s1) = get_precision_scale(left.data_type());\n        let (_p2, s2) = get_precision_scale(right.data_type());\n\n        let p_out = self.output_precision;\n        let s_out = self.output_scale;\n        let op = self.op;\n        let eval_mode = self.eval_mode;\n\n        let bound = max_for_precision(p_out);\n        let neg_bound = 
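// illustrative example: output precision 4 gives bound = 9_999 and neg_bound = -9_999\n        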
i256::ZERO.wrapping_sub(bound);\n\n        let result: Decimal128Array = match op {\n            WideDecimalOp::Add | WideDecimalOp::Subtract => {\n                let max_scale = std::cmp::max(s1, s2);\n                let l_scale_up = i256_pow10((max_scale - s1) as u32);\n                let r_scale_up = i256_pow10((max_scale - s2) as u32);\n                // After add/sub at max_scale, we may need to rescale to s_out\n                let scale_diff = max_scale as i16 - s_out as i16;\n                let (need_scale_down, need_scale_up) = (scale_diff > 0, scale_diff < 0);\n                let rescale_divisor = if need_scale_down {\n                    i256_pow10(scale_diff as u32)\n                } else {\n                    i256::ONE\n                };\n                let scale_up_factor = if need_scale_up {\n                    i256_pow10((-scale_diff) as u32)\n                } else {\n                    i256::ONE\n                };\n\n                arrow::compute::kernels::arity::try_binary(left, right, |l, r| {\n                    let l256 = i256::from_i128(l).wrapping_mul(l_scale_up);\n                    let r256 = i256::from_i128(r).wrapping_mul(r_scale_up);\n                    let raw = match op {\n                        WideDecimalOp::Add => l256.wrapping_add(r256),\n                        WideDecimalOp::Subtract => l256.wrapping_sub(r256),\n                        _ => unreachable!(),\n                    };\n                    let result = if need_scale_down {\n                        div_round_half_up(raw, rescale_divisor)\n                    } else if need_scale_up {\n                        raw.wrapping_mul(scale_up_factor)\n                    } else {\n                        raw\n                    };\n                    check_overflow_and_convert(result, bound, neg_bound, eval_mode)\n                })?\n            }\n            WideDecimalOp::Multiply => {\n                let natural_scale = s1 + s2;\n                let scale_diff = natural_scale as i16 - s_out as i16;\n                let (need_scale_down, need_scale_up) = (scale_diff > 0, scale_diff < 0);\n                let rescale_divisor = if need_scale_down {\n                    i256_pow10(scale_diff as u32)\n                } else {\n                    i256::ONE\n                };\n                let scale_up_factor = if need_scale_up {\n                    i256_pow10((-scale_diff) as u32)\n                } else {\n                    i256::ONE\n                };\n\n                arrow::compute::kernels::arity::try_binary(left, right, |l, r| {\n                    let raw = i256::from_i128(l).wrapping_mul(i256::from_i128(r));\n                    let result = if need_scale_down {\n                        div_round_half_up(raw, rescale_divisor)\n                    } else if need_scale_up {\n                        raw.wrapping_mul(scale_up_factor)\n                    } else {\n                        raw\n                    };\n                    check_overflow_and_convert(result, bound, neg_bound, eval_mode)\n                })?\n            }\n        };\n\n        let result = if eval_mode != EvalMode::Ansi {\n            result.null_if_overflow_precision(p_out)\n        } else {\n            result\n        };\n        let result = result.with_data_type(DataType::Decimal128(p_out, s_out));\n        Ok(ColumnarValue::Array(Arc::new(result)))\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.left, &self.right]\n    }\n\n    fn 
with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        if children.len() != 2 {\n            return Err(datafusion::common::DataFusionError::Internal(format!(\n                \"WideDecimalBinaryExpr expects 2 children, got {}\",\n                children.len()\n            )));\n        }\n        Ok(Arc::new(WideDecimalBinaryExpr::new(\n            Arc::clone(&children[0]),\n            Arc::clone(&children[1]),\n            self.op,\n            self.output_precision,\n            self.output_scale,\n            self.eval_mode,\n        )))\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n}\n\n/// Check if the i256 result fits in the output precision. In Ansi mode, return an error\n/// on overflow. In Legacy/Try mode, return i128::MAX as a sentinel value that will be\n/// nullified by `null_if_overflow_precision`.\n#[inline]\nfn check_overflow_and_convert(\n    result: i256,\n    bound: i256,\n    neg_bound: i256,\n    eval_mode: EvalMode,\n) -> Result<i128, ArrowError> {\n    if result > bound || result < neg_bound {\n        if eval_mode == EvalMode::Ansi {\n            return Err(ArrowError::ComputeError(\"Arithmetic overflow\".to_string()));\n        }\n        // Sentinel value — will be nullified by null_if_overflow_precision\n        Ok(i128::MAX)\n    } else {\n        Ok(result.to_i128().unwrap())\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::Decimal128Array;\n    use arrow::datatypes::{Field, Schema};\n    use arrow::record_batch::RecordBatch;\n    use datafusion::physical_expr::expressions::Column;\n\n    fn make_batch(\n        left_values: Vec<Option<i128>>,\n        left_precision: u8,\n        left_scale: i8,\n        right_values: Vec<Option<i128>>,\n        right_precision: u8,\n        right_scale: i8,\n    ) -> RecordBatch {\n        let left_arr = Decimal128Array::from(left_values)\n            .with_data_type(DataType::Decimal128(left_precision, left_scale));\n        let right_arr = Decimal128Array::from(right_values)\n            .with_data_type(DataType::Decimal128(right_precision, right_scale));\n        let schema = Schema::new(vec![\n            Field::new(\"left\", left_arr.data_type().clone(), true),\n            Field::new(\"right\", right_arr.data_type().clone(), true),\n        ]);\n        RecordBatch::try_new(\n            Arc::new(schema),\n            vec![Arc::new(left_arr), Arc::new(right_arr)],\n        )\n        .unwrap()\n    }\n\n    fn eval_expr(\n        batch: &RecordBatch,\n        op: WideDecimalOp,\n        output_precision: u8,\n        output_scale: i8,\n        eval_mode: EvalMode,\n    ) -> Result<ArrayRef> {\n        let left: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"left\", 0));\n        let right: Arc<dyn PhysicalExpr> = Arc::new(Column::new(\"right\", 1));\n        let expr =\n            WideDecimalBinaryExpr::new(left, right, op, output_precision, output_scale, eval_mode);\n        match expr.evaluate(batch)? 
{\n            ColumnarValue::Array(arr) => Ok(arr),\n            _ => panic!(\"expected array\"),\n        }\n    }\n\n    #[test]\n    fn test_add_same_scale() {\n        // Decimal128(38, 10) + Decimal128(38, 10) -> Decimal128(38, 10)\n        let batch = make_batch(\n            vec![Some(1000000000), Some(2500000000)], // 0.1, 0.25 (scale 10 → divide by 10^10 mentally)\n            38,\n            10,\n            vec![Some(2000000000), Some(7500000000)],\n            38,\n            10,\n        );\n        let result = eval_expr(&batch, WideDecimalOp::Add, 38, 10, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 3000000000); // 0.1 + 0.2\n        assert_eq!(arr.value(1), 10000000000); // 0.25 + 0.75\n    }\n\n    #[test]\n    fn test_subtract_same_scale() {\n        let batch = make_batch(\n            vec![Some(5000), Some(1000)],\n            38,\n            2,\n            vec![Some(3000), Some(2000)],\n            38,\n            2,\n        );\n        let result = eval_expr(&batch, WideDecimalOp::Subtract, 38, 2, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 2000); // 50.00 - 30.00\n        assert_eq!(arr.value(1), -1000); // 10.00 - 20.00\n    }\n\n    #[test]\n    fn test_add_different_scales() {\n        // Decimal128(10, 2) + Decimal128(10, 4) -> output scale 4\n        let batch = make_batch(\n            vec![Some(150)], // 1.50\n            10,\n            2,\n            vec![Some(2500)], // 0.2500\n            10,\n            4,\n        );\n        let result = eval_expr(&batch, WideDecimalOp::Add, 38, 4, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 17500); // 1.5000 + 0.2500 = 1.7500\n    }\n\n    #[test]\n    fn test_multiply_with_scale_reduction() {\n        // Decimal128(20, 5) * Decimal128(20, 5) -> natural scale 10, output scale 6\n        // 1.00000 * 2.00000 = 2.000000\n        let batch = make_batch(\n            vec![Some(100000)], // 1.00000\n            20,\n            5,\n            vec![Some(200000)], // 2.00000\n            20,\n            5,\n        );\n        let result = eval_expr(&batch, WideDecimalOp::Multiply, 38, 6, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 2000000); // 2.000000\n    }\n\n    #[test]\n    fn test_multiply_half_up_rounding() {\n        // Test HALF_UP rounding: 1.5 * 1.5 = 2.25, but if output scale=1, should round to 2.3\n        // Input: scale 1, values 15 (1.5) * 15 (1.5) = natural scale 2, raw = 225\n        // Output scale 1: 225 / 10 = 22 remainder 5 -> HALF_UP rounds to 23\n        let batch = make_batch(\n            vec![Some(15)], // 1.5\n            10,\n            1,\n            vec![Some(15)], // 1.5\n            10,\n            1,\n        );\n        let result = eval_expr(&batch, WideDecimalOp::Multiply, 38, 1, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 23); // 2.3\n    }\n\n    #[test]\n    fn test_multiply_half_up_rounding_negative() {\n        // -1.5 * 1.5 = -2.25, output scale 1: -225/10 => -22 rem -5 -> HALF_UP rounds to -23\n        let batch = make_batch(\n            vec![Some(-15)], // -1.5\n            10,\n            1,\n            vec![Some(15)], // 1.5\n            10,\n            1,\n        );\n        
let result = eval_expr(&batch, WideDecimalOp::Multiply, 38, 1, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), -23); // -2.3\n    }\n\n    #[test]\n    fn test_overflow_legacy_mode_returns_null() {\n        // Use precision 1 (max value 9), so 5 + 5 = 10 overflows\n        let batch = make_batch(vec![Some(5)], 38, 0, vec![Some(5)], 38, 0);\n        let result = eval_expr(&batch, WideDecimalOp::Add, 1, 0, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert!(arr.is_null(0));\n    }\n\n    #[test]\n    fn test_overflow_ansi_mode_returns_error() {\n        let batch = make_batch(vec![Some(5)], 38, 0, vec![Some(5)], 38, 0);\n        let result = eval_expr(&batch, WideDecimalOp::Add, 1, 0, EvalMode::Ansi);\n        assert!(result.is_err());\n    }\n\n    #[test]\n    fn test_null_propagation() {\n        let batch = make_batch(vec![Some(100), None], 10, 2, vec![None, Some(200)], 10, 2);\n        let result = eval_expr(&batch, WideDecimalOp::Add, 38, 2, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert!(arr.is_null(0));\n        assert!(arr.is_null(1));\n    }\n\n    #[test]\n    fn test_zeros() {\n        let batch = make_batch(vec![Some(0)], 38, 10, vec![Some(0)], 38, 10);\n        let result = eval_expr(&batch, WideDecimalOp::Multiply, 38, 10, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 0);\n    }\n\n    #[test]\n    fn test_max_precision_values() {\n        // Max Decimal128(38,0) value: 10^38 - 1\n        let max_val = 10i128.pow(38) - 1;\n        let batch = make_batch(vec![Some(max_val)], 38, 0, vec![Some(0)], 38, 0);\n        let result = eval_expr(&batch, WideDecimalOp::Add, 38, 0, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), max_val);\n    }\n\n    #[test]\n    fn test_add_scale_up_to_output() {\n        // When s_out > max(s1, s2), result must be scaled UP\n        // Decimal128(10, 2) + Decimal128(10, 2) with output scale 4\n        // 1.50 + 0.25 = 1.75, at scale 4 = 17500\n        let batch = make_batch(\n            vec![Some(150)], // 1.50\n            10,\n            2,\n            vec![Some(25)], // 0.25\n            10,\n            2,\n        );\n        let result = eval_expr(&batch, WideDecimalOp::Add, 38, 4, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 17500); // 1.7500\n    }\n\n    #[test]\n    fn test_subtract_scale_up_to_output() {\n        // s_out (4) > max(s1, s2) (2) — verify scale-up path for subtract\n        let batch = make_batch(\n            vec![Some(300)], // 3.00\n            10,\n            2,\n            vec![Some(100)], // 1.00\n            10,\n            2,\n        );\n        let result = eval_expr(&batch, WideDecimalOp::Subtract, 38, 4, EvalMode::Legacy).unwrap();\n        let arr = result.as_primitive::<Decimal128Type>();\n        assert_eq!(arr.value(0), 20000); // 2.0000\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/nondetermenistic_funcs/internal/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod rand_utils;\n\npub use rand_utils::evaluate_batch_for_rand;\npub use rand_utils::StatefulSeedValueGenerator;\n"
  },
  {
    "path": "native/spark-expr/src/nondetermenistic_funcs/internal/rand_utils.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Float64Array, Float64Builder};\nuse datafusion::logical_expr::ColumnarValue;\nuse std::ops::Deref;\nuse std::sync::{Arc, Mutex};\n\npub fn evaluate_batch_for_rand<R, S>(\n    state_holder: &Arc<Mutex<Option<S>>>,\n    seed: i64,\n    num_rows: usize,\n) -> datafusion::common::Result<ColumnarValue>\nwhere\n    R: StatefulSeedValueGenerator<S, f64>,\n    S: Copy,\n{\n    let seed_state = state_holder.lock().unwrap();\n    let mut rnd = R::from_state_ref(seed_state, seed);\n    let mut arr_builder = Float64Builder::with_capacity(num_rows);\n    std::iter::repeat_with(|| rnd.next_value())\n        .take(num_rows)\n        .for_each(|v| arr_builder.append_value(v));\n    let array_ref = Arc::new(Float64Array::from(arr_builder.finish()));\n    let mut seed_state = state_holder.lock().unwrap();\n    seed_state.replace(rnd.get_current_state());\n    Ok(ColumnarValue::Array(array_ref))\n}\n\npub trait StatefulSeedValueGenerator<State: Copy, Value>: Sized {\n    fn from_init_seed(init_seed: i64) -> Self;\n\n    fn from_stored_state(stored_state: State) -> Self;\n\n    fn next_value(&mut self) -> Value;\n\n    fn get_current_state(&self) -> State;\n\n    fn from_state_ref(state: impl Deref<Target = Option<State>>, init_value: i64) -> Self {\n        if state.is_none() {\n            Self::from_init_seed(init_value)\n        } else {\n            Self::from_stored_state(state.unwrap())\n        }\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/nondetermenistic_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\npub mod internal;\npub mod monotonically_increasing_id;\npub mod rand;\npub mod randn;\n\npub use rand::RandExpr;\npub use randn::RandnExpr;\n"
  },
  {
    "path": "native/spark-expr/src/nondetermenistic_funcs/monotonically_increasing_id.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Int64Array, RecordBatch};\nuse arrow::datatypes::{DataType, Schema};\nuse datafusion::common::Result;\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::any::Any;\nuse std::fmt::{Debug, Display, Formatter};\nuse std::hash::{Hash, Hasher};\nuse std::sync::atomic::{AtomicI64, Ordering};\nuse std::sync::Arc;\n\n#[derive(Debug)]\npub struct MonotonicallyIncreasingId {\n    initial_offset: i64,\n    current_offset: AtomicI64,\n}\n\nimpl MonotonicallyIncreasingId {\n    pub fn from_offset(offset: i64) -> Self {\n        Self {\n            initial_offset: offset,\n            current_offset: AtomicI64::new(offset),\n        }\n    }\n\n    pub fn from_partition_id(partition: i32) -> Self {\n        Self::from_offset((partition as i64) << 33)\n    }\n}\n\nimpl Display for MonotonicallyIncreasingId {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(f, \"monotonically_increasing_id()\")\n    }\n}\n\nimpl PartialEq for MonotonicallyIncreasingId {\n    fn eq(&self, other: &Self) -> bool {\n        self.initial_offset == other.initial_offset\n    }\n}\n\nimpl Eq for MonotonicallyIncreasingId {}\n\nimpl Hash for MonotonicallyIncreasingId {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.initial_offset.hash(state);\n    }\n}\n\nimpl PhysicalExpr for MonotonicallyIncreasingId {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> Result<ColumnarValue> {\n        let start = self\n            .current_offset\n            .fetch_add(batch.num_rows() as i64, Ordering::Relaxed);\n        let end = start + batch.num_rows() as i64;\n        let array_ref = Arc::new(Int64Array::from_iter_values(start..end));\n        Ok(ColumnarValue::Array(array_ref))\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        _: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        Ok(self)\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, _input_schema: &Schema) -> Result<DataType> {\n        Ok(DataType::Int64)\n    }\n\n    fn nullable(&self, _input_schema: &Schema) -> Result<bool> {\n        Ok(false)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{Array, Int64Array};\n    use arrow::compute::concat;\n    use arrow::{array::StringArray, datatypes::*};\n    use datafusion::common::cast::as_int64_array;\n\n    #[test]\n    fn test_monotonically_increasing_id_single_batch() -> Result<()> {\n       
 let schema = Schema::new(vec![Field::new(\"a\", DataType::Utf8, true)]);\n        let data = StringArray::from(vec![Some(\"foo\"), None, None, Some(\"bar\"), None]);\n        let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(data)])?;\n        let mid_expr = MonotonicallyIncreasingId::from_offset(0);\n        let result = mid_expr.evaluate(&batch)?.into_array(batch.num_rows())?;\n        let result = as_int64_array(&result)?;\n        let expected = &Int64Array::from_iter_values(0..batch.num_rows() as i64);\n        assert_eq!(result, expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_monotonically_increasing_id_multi_batch() -> Result<()> {\n        let first_batch_schema = Schema::new(vec![Field::new(\"a\", DataType::Int64, true)]);\n        let first_batch_data = Int64Array::from(vec![Some(42), None]);\n        let second_batch_schema = first_batch_schema.clone();\n        let second_batch_data = Int64Array::from(vec![None, Some(-42), None]);\n        let starting_offset: i64 = 100;\n        let mid_expr = MonotonicallyIncreasingId::from_offset(starting_offset);\n        let first_batch = RecordBatch::try_new(\n            Arc::new(first_batch_schema),\n            vec![Arc::new(first_batch_data)],\n        )?;\n        let first_batch_result = mid_expr\n            .evaluate(&first_batch)?\n            .into_array(first_batch.num_rows())?;\n        let second_batch = RecordBatch::try_new(\n            Arc::new(second_batch_schema),\n            vec![Arc::new(second_batch_data)],\n        )?;\n        let second_batch_result = mid_expr\n            .evaluate(&second_batch)?\n            .into_array(second_batch.num_rows())?;\n        let result_arrays: Vec<&dyn Array> = vec![\n            as_int64_array(&first_batch_result)?,\n            as_int64_array(&second_batch_result)?,\n        ];\n        let result_arrays = &concat(&result_arrays)?;\n        let final_result = as_int64_array(result_arrays)?;\n        let range_start = starting_offset;\n        let range_end =\n            starting_offset + first_batch.num_rows() as i64 + second_batch.num_rows() as i64;\n        let expected = &Int64Array::from_iter_values(range_start..range_end);\n        assert_eq!(final_result, expected);\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/nondetermenistic_funcs/rand.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::hash_funcs::murmur3::spark_compatible_murmur3_hash;\n\nuse crate::internal::{evaluate_batch_for_rand, StatefulSeedValueGenerator};\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::{DataType, Schema};\nuse datafusion::common::Result;\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::any::Any;\nuse std::fmt::{Display, Formatter};\nuse std::hash::{Hash, Hasher};\nuse std::sync::{Arc, Mutex};\n\n/// Adoption of the XOR-shift algorithm used in Apache Spark.\n/// See: https://github.com/apache/spark/blob/91f3fdd25852b43095dd5273358fc394ffd11b66/core/src/main/scala/org/apache/spark/util/random/XORShiftRandom.scala\n/// Normalization multiplier used in mapping from a random i64 value to the f64 interval [0.0, 1.0).\n/// Corresponds to the java implementation: https://github.com/openjdk/jdk/blob/07c9f7138affdf0d42ecdc30adcb854515569985/src/java.base/share/classes/java/util/Random.java#L302\n/// Due to the lack of hexadecimal float literals support in rust, the scientific notation is used instead.\nconst DOUBLE_UNIT: f64 = 1.1102230246251565e-16;\n\n/// Spark-compatible initial seed which is actually a part of the scala standard library murmurhash3 implementation.\n/// The references:\n/// https://github.com/apache/spark/blob/91f3fdd25852b43095dd5273358fc394ffd11b66/core/src/main/scala/org/apache/spark/util/random/XORShiftRandom.scala#L63\n/// https://github.com/scala/scala/blob/360d5da544d84b821c40e4662ad08703b51a44e1/src/library/scala/util/hashing/MurmurHash3.scala#L331\nconst SPARK_MURMUR_ARRAY_SEED: u32 = 0x3c074a61;\n\n#[derive(Debug, Clone)]\npub(crate) struct XorShiftRandom {\n    pub(crate) seed: i64,\n}\n\nimpl XorShiftRandom {\n    fn next(&mut self, bits: u8) -> i32 {\n        let mut next_seed = self.seed ^ (self.seed << 21);\n        next_seed ^= ((next_seed as u64) >> 35) as i64;\n        next_seed ^= next_seed << 4;\n        self.seed = next_seed;\n        (next_seed & ((1i64 << bits) - 1)) as i32\n    }\n\n    pub fn next_f64(&mut self) -> f64 {\n        let a = self.next(26) as i64;\n        let b = self.next(27) as i64;\n        ((a << 27) + b) as f64 * DOUBLE_UNIT\n    }\n}\n\nimpl StatefulSeedValueGenerator<i64, f64> for XorShiftRandom {\n    fn from_init_seed(init_seed: i64) -> Self {\n        let bytes_repr = init_seed.to_be_bytes();\n        let low_bits = spark_compatible_murmur3_hash(bytes_repr, SPARK_MURMUR_ARRAY_SEED);\n        let high_bits = spark_compatible_murmur3_hash(bytes_repr, low_bits);\n        let init_seed = ((high_bits as i64) << 32) | (low_bits as i64 & 0xFFFFFFFFi64);\n        XorShiftRandom { seed: init_seed }\n    }\n\n    fn from_stored_state(stored_state: i64) -> 
Self {\n        XorShiftRandom { seed: stored_state }\n    }\n\n    fn next_value(&mut self) -> f64 {\n        self.next_f64()\n    }\n\n    fn get_current_state(&self) -> i64 {\n        self.seed\n    }\n}\n\n#[derive(Debug)]\npub struct RandExpr {\n    seed: i64,\n    state_holder: Arc<Mutex<Option<i64>>>,\n}\n\nimpl RandExpr {\n    pub fn new(seed: i64) -> Self {\n        Self {\n            seed,\n            state_holder: Arc::new(Mutex::new(None::<i64>)),\n        }\n    }\n}\n\nimpl Display for RandExpr {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        write!(f, \"RAND({})\", self.seed)\n    }\n}\n\nimpl PartialEq for RandExpr {\n    fn eq(&self, other: &Self) -> bool {\n        self.seed.eq(&other.seed)\n    }\n}\n\nimpl Eq for RandExpr {}\n\nimpl Hash for RandExpr {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.children().hash(state);\n    }\n}\n\nimpl PhysicalExpr for RandExpr {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn data_type(&self, _input_schema: &Schema) -> Result<DataType> {\n        Ok(DataType::Float64)\n    }\n\n    fn nullable(&self, _input_schema: &Schema) -> Result<bool> {\n        Ok(false)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> Result<ColumnarValue> {\n        evaluate_batch_for_rand::<XorShiftRandom, i64>(\n            &self.state_holder,\n            self.seed,\n            batch.num_rows(),\n        )\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![]\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        _children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(RandExpr::new(self.seed)))\n    }\n}\n\npub fn rand(seed: i64) -> Arc<dyn PhysicalExpr> {\n    Arc::new(RandExpr::new(seed))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{Array, Float64Array, Int64Array};\n    use arrow::{array::StringArray, compute::concat, datatypes::*};\n    use datafusion::common::cast::as_float64_array;\n\n    const SPARK_SEED_42_FIRST_5: [f64; 5] = [\n        0.619189370225301,\n        0.5096018842446481,\n        0.8325259388871524,\n        0.26322809041172357,\n        0.6702867696264135,\n    ];\n\n    #[test]\n    fn test_rand_single_batch() -> Result<()> {\n        let schema = Schema::new(vec![Field::new(\"a\", DataType::Utf8, true)]);\n        let data = StringArray::from(vec![Some(\"foo\"), None, None, Some(\"bar\"), None]);\n        let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(data)])?;\n        let rand_expr = rand(42);\n        let result = rand_expr.evaluate(&batch)?.into_array(batch.num_rows())?;\n        let result = as_float64_array(&result)?;\n        let expected = &Float64Array::from(Vec::from(SPARK_SEED_42_FIRST_5));\n        assert_eq!(result, expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_rand_multi_batch() -> Result<()> {\n        let first_batch_schema = Schema::new(vec![Field::new(\"a\", DataType::Int64, true)]);\n        let first_batch_data = Int64Array::from(vec![Some(42), None]);\n        let second_batch_schema = first_batch_schema.clone();\n        let second_batch_data = Int64Array::from(vec![None, Some(-42), None]);\n        let rand_expr = rand(42);\n        let first_batch = RecordBatch::try_new(\n            Arc::new(first_batch_schema),\n            vec![Arc::new(first_batch_data)],\n        )?;\n  
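      // generator state is kept across evaluate() calls, so consecutive batches\n        // continue the same random sequence as a single five-row batch\n      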
      let first_batch_result = rand_expr\n            .evaluate(&first_batch)?\n            .into_array(first_batch.num_rows())?;\n        let second_batch = RecordBatch::try_new(\n            Arc::new(second_batch_schema),\n            vec![Arc::new(second_batch_data)],\n        )?;\n        let second_batch_result = rand_expr\n            .evaluate(&second_batch)?\n            .into_array(second_batch.num_rows())?;\n        let result_arrays: Vec<&dyn Array> = vec![\n            as_float64_array(&first_batch_result)?,\n            as_float64_array(&second_batch_result)?,\n        ];\n        let result_arrays = &concat(&result_arrays)?;\n        let final_result = as_float64_array(result_arrays)?;\n        let expected = &Float64Array::from(Vec::from(SPARK_SEED_42_FIRST_5));\n        assert_eq!(final_result, expected);\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/nondetermenistic_funcs/randn.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::nondetermenistic_funcs::rand::XorShiftRandom;\n\nuse crate::internal::{evaluate_batch_for_rand, StatefulSeedValueGenerator};\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::{DataType, Schema};\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::any::Any;\nuse std::fmt::{Display, Formatter};\nuse std::hash::{Hash, Hasher};\nuse std::sync::{Arc, Mutex};\n\n/// Stateful extension of the Marsaglia polar method (https://en.wikipedia.org/wiki/Marsaglia_polar_method)\n/// to convert uniform distribution to the standard normal one used by Apache Spark.\n/// For correct processing of batches having odd number of elements, we need to keep not used yet generated value as a part of the state.\n/// Note about Comet <-> Spark equivalence:\n/// Under the hood, the spark algorithm refers to java.util.Random relying on a module StrictMath. The latter uses\n/// native implementations of floating-point operations (ln, exp, sin, cos) and ensures\n/// they are stable across different platforms.\n/// See: https://github.com/openjdk/jdk/blob/07c9f7138affdf0d42ecdc30adcb854515569985/src/java.base/share/classes/java/util/Random.java#L745\n/// Yet, for the Rust standard library this stability is not guaranteed (https://doc.rust-lang.org/std/primitive.f64.html#method.ln)\n/// Moreover, potential usage of external library like rug (https://docs.rs/rug/latest/rug/) doesn't help because still there is no\n/// guarantee it matches the StrictMath jvm implementation.\n/// So, we can ensure only equivalence with some error tolerance between rust and spark(jvm).\n\n#[derive(Debug, Clone)]\nstruct XorShiftRandomForGaussian {\n    base_generator: XorShiftRandom,\n    next_gaussian: Option<f64>,\n}\n\nimpl XorShiftRandomForGaussian {\n    pub fn next_gaussian(&mut self) -> f64 {\n        if let Some(stored_value) = self.next_gaussian {\n            self.next_gaussian = None;\n            return stored_value;\n        }\n        let mut v1: f64;\n        let mut v2: f64;\n        let mut s: f64;\n        loop {\n            v1 = 2f64 * self.base_generator.next_f64() - 1f64;\n            v2 = 2f64 * self.base_generator.next_f64() - 1f64;\n            s = v1 * v1 + v2 * v2;\n            if s < 1f64 && s != 0f64 {\n                break;\n            }\n        }\n        let multiplier = (-2f64 * s.ln() / s).sqrt();\n        self.next_gaussian = Some(v2 * multiplier);\n        v1 * multiplier\n    }\n}\n\ntype RandomGaussianState = (i64, Option<f64>);\n\nimpl StatefulSeedValueGenerator<RandomGaussianState, f64> for XorShiftRandomForGaussian {\n    fn from_init_seed(init_value: i64) -> Self {\n        XorShiftRandomForGaussian {\n            
base_generator: XorShiftRandom::from_init_seed(init_value),\n            next_gaussian: None,\n        }\n    }\n\n    fn from_stored_state(stored_state: RandomGaussianState) -> Self {\n        XorShiftRandomForGaussian {\n            base_generator: XorShiftRandom::from_stored_state(stored_state.0),\n            next_gaussian: stored_state.1,\n        }\n    }\n\n    fn next_value(&mut self) -> f64 {\n        self.next_gaussian()\n    }\n\n    fn get_current_state(&self) -> RandomGaussianState {\n        (self.base_generator.seed, self.next_gaussian)\n    }\n}\n\n#[derive(Debug, Clone)]\npub struct RandnExpr {\n    seed: i64,\n    state_holder: Arc<Mutex<Option<RandomGaussianState>>>,\n}\n\nimpl RandnExpr {\n    pub fn new(seed: i64) -> Self {\n        Self {\n            seed,\n            state_holder: Arc::new(Mutex::new(None)),\n        }\n    }\n}\n\nimpl Display for RandnExpr {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        write!(f, \"RANDN({})\", self.seed)\n    }\n}\n\nimpl PartialEq for RandnExpr {\n    fn eq(&self, other: &Self) -> bool {\n        self.seed.eq(&other.seed)\n    }\n}\n\nimpl Eq for RandnExpr {}\n\nimpl Hash for RandnExpr {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.children().hash(state);\n    }\n}\n\nimpl PhysicalExpr for RandnExpr {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn data_type(&self, _input_schema: &Schema) -> datafusion::common::Result<DataType> {\n        Ok(DataType::Float64)\n    }\n\n    fn nullable(&self, _input_schema: &Schema) -> datafusion::common::Result<bool> {\n        Ok(false)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> datafusion::common::Result<ColumnarValue> {\n        evaluate_batch_for_rand::<XorShiftRandomForGaussian, RandomGaussianState>(\n            &self.state_holder,\n            self.seed,\n            batch.num_rows(),\n        )\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![]\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        _children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(RandnExpr::new(self.seed)))\n    }\n}\n\npub fn randn(seed: i64) -> Arc<dyn PhysicalExpr> {\n    Arc::new(RandnExpr::new(seed))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::{Array, Float64Array, Int64Array};\n    use arrow::{array::StringArray, compute::concat, datatypes::*};\n    use datafusion::common::cast::as_float64_array;\n\n    const PRECISION_TOLERANCE: f64 = 1e-6;\n\n    const SPARK_SEED_42_FIRST_5_GAUSSIAN: [f64; 5] = [\n        2.384479054241165,\n        0.1920934041293524,\n        0.7337336533286575,\n        -0.5224480195716871,\n        2.060084179317831,\n    ];\n\n    #[test]\n    fn test_rand_single_batch() -> datafusion::common::Result<()> {\n        let schema = Schema::new(vec![Field::new(\"a\", DataType::Utf8, true)]);\n        let data = StringArray::from(vec![Some(\"foo\"), None, None, Some(\"bar\"), None]);\n        let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(data)])?;\n        let randn_expr = randn(42);\n        let result = randn_expr.evaluate(&batch)?.into_array(batch.num_rows())?;\n        let result = as_float64_array(&result)?;\n        let expected = &Float64Array::from(Vec::from(SPARK_SEED_42_FIRST_5_GAUSSIAN));\n        assert_eq_with_tolerance(result, 
expected);\n        Ok(())\n    }\n\n    #[test]\n    fn test_rand_multi_batch() -> datafusion::common::Result<()> {\n        let first_batch_schema = Schema::new(vec![Field::new(\"a\", DataType::Int64, true)]);\n        let first_batch_data = Int64Array::from(vec![Some(24), None, None]);\n        let second_batch_schema = first_batch_schema.clone();\n        let second_batch_data = Int64Array::from(vec![None, Some(22)]);\n        let randn_expr = randn(42);\n        let first_batch = RecordBatch::try_new(\n            Arc::new(first_batch_schema),\n            vec![Arc::new(first_batch_data)],\n        )?;\n        let first_batch_result = randn_expr\n            .evaluate(&first_batch)?\n            .into_array(first_batch.num_rows())?;\n        let second_batch = RecordBatch::try_new(\n            Arc::new(second_batch_schema),\n            vec![Arc::new(second_batch_data)],\n        )?;\n        let second_batch_result = randn_expr\n            .evaluate(&second_batch)?\n            .into_array(second_batch.num_rows())?;\n        let result_arrays: Vec<&dyn Array> = vec![\n            as_float64_array(&first_batch_result)?,\n            as_float64_array(&second_batch_result)?,\n        ];\n        let result_arrays = &concat(&result_arrays)?;\n        let final_result = as_float64_array(result_arrays)?;\n        let expected = &Float64Array::from(Vec::from(SPARK_SEED_42_FIRST_5_GAUSSIAN));\n        assert_eq_with_tolerance(final_result, expected);\n        Ok(())\n    }\n\n    fn assert_eq_with_tolerance(left: &Float64Array, right: &Float64Array) {\n        assert_eq!(left.len(), right.len());\n        left.iter().zip(right.iter()).for_each(|(l, r)| {\n            assert!(\n                (l.unwrap() - r.unwrap()).abs() < PRECISION_TOLERANCE,\n                \"difference between {:?} and {:?} is larger than acceptable precision\",\n                l.unwrap(),\n                r.unwrap()\n            )\n        })\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/predicate_funcs/is_nan.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, BooleanArray};\nuse arrow::array::{Float32Array, Float64Array};\nuse arrow::datatypes::DataType;\nuse datafusion::common::{DataFusionError, ScalarValue};\nuse datafusion::physical_plan::ColumnarValue;\nuse std::sync::Arc;\n\n/// Spark-compatible `isnan` expression\npub fn spark_isnan(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    fn set_nulls_to_false(is_nan: BooleanArray) -> ColumnarValue {\n        match is_nan.nulls() {\n            Some(nulls) => {\n                let is_not_null = nulls.inner();\n                ColumnarValue::Array(Arc::new(BooleanArray::new(\n                    is_nan.values() & is_not_null,\n                    None,\n                )))\n            }\n            None => ColumnarValue::Array(Arc::new(is_nan)),\n        }\n    }\n    let value = &args[0];\n    match value {\n        ColumnarValue::Array(array) => match array.data_type() {\n            DataType::Float64 => {\n                let array = array.as_any().downcast_ref::<Float64Array>().unwrap();\n                let is_nan = BooleanArray::from_unary(array, |x| x.is_nan());\n                Ok(set_nulls_to_false(is_nan))\n            }\n            DataType::Float32 => {\n                let array = array.as_any().downcast_ref::<Float32Array>().unwrap();\n                let is_nan = BooleanArray::from_unary(array, |x| x.is_nan());\n                Ok(set_nulls_to_false(is_nan))\n            }\n            other => Err(DataFusionError::Internal(format!(\n                \"Unsupported data type {other:?} for function isnan\",\n            ))),\n        },\n        ColumnarValue::Scalar(a) => match a {\n            ScalarValue::Float64(a) => Ok(ColumnarValue::Scalar(ScalarValue::Boolean(Some(\n                a.map(|x| x.is_nan()).unwrap_or(false),\n            )))),\n            ScalarValue::Float32(a) => Ok(ColumnarValue::Scalar(ScalarValue::Boolean(Some(\n                a.map(|x| x.is_nan()).unwrap_or(false),\n            )))),\n            _ => Err(DataFusionError::Internal(format!(\n                \"Unsupported data type {:?} for function isnan\",\n                value.data_type(),\n            ))),\n        },\n    }\n}\n"
  },
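  {
    "path": "native/spark-expr/examples/isnan_semantics_sketch.rs",
    "content": "// NOTE: hypothetical usage sketch added for illustration; not part of the Comet source tree.\n// It shows the Spark-compatible null handling of `spark_isnan` implemented above: a null\n// input row yields `false`, never null. It assumes `spark_isnan` is re-exported from the\n// `datafusion_comet_spark_expr` crate root.\n\nuse arrow::array::{Array, BooleanArray, Float64Array};\nuse datafusion::common::DataFusionError;\nuse datafusion::physical_plan::ColumnarValue;\nuse datafusion_comet_spark_expr::spark_isnan;\nuse std::sync::Arc;\n\nfn main() -> Result<(), DataFusionError> {\n    // A column containing a NaN, an ordinary value, and a null\n    let input = ColumnarValue::Array(Arc::new(Float64Array::from(vec![\n        Some(f64::NAN),\n        Some(1.0),\n        None,\n    ])));\n    if let ColumnarValue::Array(array) = spark_isnan(&[input])? {\n        let booleans = array.as_any().downcast_ref::<BooleanArray>().unwrap();\n        assert!(booleans.value(0)); // NaN => true\n        assert!(!booleans.value(1)); // 1.0 => false\n        assert!(!booleans.value(2)); // null input => false, not null\n        assert_eq!(booleans.null_count(), 0); // the result carries no validity mask\n    }\n    Ok(())\n}\n"
  },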
  {
    "path": "native/spark-expr/src/predicate_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod is_nan;\nmod rlike;\n\npub use is_nan::spark_isnan;\npub use rlike::RLike;\n"
  },
  {
    "path": "native/spark-expr/src/predicate_funcs/rlike.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse crate::SparkError;\nuse arrow::array::builder::BooleanBuilder;\nuse arrow::array::types::Int32Type;\nuse arrow::array::{Array, BooleanArray, DictionaryArray, RecordBatch, StringArray};\nuse arrow::compute::take;\nuse arrow::datatypes::{DataType, Schema};\nuse datafusion::common::{internal_err, Result, ScalarValue};\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::ColumnarValue;\nuse regex::Regex;\nuse std::any::Any;\nuse std::fmt::{Display, Formatter};\nuse std::hash::{Hash, Hasher};\nuse std::sync::Arc;\n\n/// Implementation of RLIKE operator.\n///\n/// Note that this implementation is not yet Spark-compatible and simply delegates to\n/// the Rust regexp crate. It will match Spark behavior for some simple cases but has\n/// differences in whitespace handling and does not support all the features of Java's\n/// regular expression engine, which are documented at:\n///\n/// https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html\n#[derive(Debug)]\npub struct RLike {\n    child: Arc<dyn PhysicalExpr>,\n    // Only scalar patterns are supported\n    pattern_str: String,\n    pattern: Regex,\n}\n\nimpl PartialEq for RLike {\n    fn eq(&self, other: &Self) -> bool {\n        *(self.child) == *(other.child) && self.pattern_str == other.pattern_str\n    }\n}\n\nimpl Eq for RLike {}\n\nimpl Hash for RLike {\n    fn hash<H: Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n        self.pattern_str.hash(state);\n    }\n}\n\nimpl RLike {\n    pub fn try_new(child: Arc<dyn PhysicalExpr>, pattern: &str) -> Result<Self> {\n        Ok(Self {\n            child,\n            pattern_str: pattern.to_string(),\n            pattern: Regex::new(pattern).map_err(|e| {\n                SparkError::Internal(format!(\"Failed to compile pattern {pattern}: {e}\"))\n            })?,\n        })\n    }\n\n    fn is_match(&self, inputs: &StringArray) -> BooleanArray {\n        let mut builder = BooleanBuilder::with_capacity(inputs.len());\n        if inputs.is_nullable() {\n            for i in 0..inputs.len() {\n                if inputs.is_null(i) {\n                    builder.append_null();\n                } else {\n                    builder.append_value(self.pattern.is_match(inputs.value(i)));\n                }\n            }\n        } else {\n            for i in 0..inputs.len() {\n                builder.append_value(self.pattern.is_match(inputs.value(i)));\n            }\n        }\n        builder.finish()\n    }\n}\n\nimpl Display for RLike {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"RLike [child: {}, pattern: {}] \",\n            self.child, 
self.pattern_str\n        )\n    }\n}\n\nimpl PhysicalExpr for RLike {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn data_type(&self, _input_schema: &Schema) -> Result<DataType> {\n        Ok(DataType::Boolean)\n    }\n\n    fn nullable(&self, input_schema: &Schema) -> Result<bool> {\n        self.child.nullable(input_schema)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> Result<ColumnarValue> {\n        match self.child.evaluate(batch)? {\n            ColumnarValue::Array(array) if array.as_any().is::<DictionaryArray<Int32Type>>() => {\n                let dict_array = array\n                    .as_any()\n                    .downcast_ref::<DictionaryArray<Int32Type>>()\n                    .expect(\"dict array\");\n                let dict_values = dict_array\n                    .values()\n                    .as_any()\n                    .downcast_ref::<StringArray>()\n                    .expect(\"strings\");\n                // evaluate the regexp pattern against the dictionary values\n                let new_values = self.is_match(dict_values);\n                // convert to conventional (not dictionary-encoded) array\n                let result = take(&new_values, dict_array.keys(), None)?;\n                Ok(ColumnarValue::Array(result))\n            }\n            ColumnarValue::Array(array) => {\n                let inputs = array\n                    .as_any()\n                    .downcast_ref::<StringArray>()\n                    .expect(\"string array\");\n                let array = self.is_match(inputs);\n                Ok(ColumnarValue::Array(Arc::new(array)))\n            }\n            ColumnarValue::Scalar(scalar) => {\n                if scalar.is_null() {\n                    return Ok(ColumnarValue::Scalar(ScalarValue::Boolean(None)));\n                }\n\n                let is_match = match scalar {\n                    ScalarValue::Utf8(Some(s))\n                    | ScalarValue::LargeUtf8(Some(s))\n                    | ScalarValue::Utf8View(Some(s)) => self.pattern.is_match(&s),\n                    _ => {\n                        return internal_err!(\n                            \"RLike requires string type for input, got {:?}\",\n                            scalar.data_type()\n                        );\n                    }\n                };\n\n                Ok(ColumnarValue::Scalar(ScalarValue::Boolean(Some(is_match))))\n            }\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        assert!(children.len() == 1);\n        Ok(Arc::new(RLike::try_new(\n            Arc::clone(&children[0]),\n            &self.pattern_str,\n        )?))\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use datafusion::physical_expr::expressions::Literal;\n\n    #[test]\n    fn test_rlike_scalar_string_variants() {\n        let pattern = \"R[a-z]+\";\n        let scalars = [\n            ScalarValue::Utf8(Some(\"Rose\".to_string())),\n            ScalarValue::LargeUtf8(Some(\"Rose\".to_string())),\n            ScalarValue::Utf8View(Some(\"Rose\".to_string())),\n        ];\n\n        for scalar in scalars {\n            let expr = RLike::try_new(Arc::new(Literal::new(scalar.clone())), pattern).unwrap();\n     
       let result = expr\n                .evaluate(&RecordBatch::new_empty(Arc::new(Schema::empty())))\n                .unwrap();\n            let ColumnarValue::Scalar(result) = result else {\n                panic!(\"expected scalar result\");\n            };\n            assert_eq!(result, ScalarValue::Boolean(Some(true)));\n        }\n\n        // Null input should produce a null boolean result\n        let expr =\n            RLike::try_new(Arc::new(Literal::new(ScalarValue::Utf8(None))), pattern).unwrap();\n        let result = expr\n            .evaluate(&RecordBatch::new_empty(Arc::new(Schema::empty())))\n            .unwrap();\n        let ColumnarValue::Scalar(result) = result else {\n            panic!(\"expected scalar result\");\n        };\n        assert_eq!(result, ScalarValue::Boolean(None));\n    }\n\n    #[test]\n    fn test_rlike_scalar_non_string_error() {\n        let expr = RLike::try_new(\n            Arc::new(Literal::new(ScalarValue::Boolean(Some(true)))),\n            \"R[a-z]+\",\n        )\n        .unwrap();\n\n        let result = expr.evaluate(&RecordBatch::new_empty(Arc::new(Schema::empty())));\n        assert!(result.is_err());\n    }\n}\n"
  },
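  {
    "path": "native/spark-expr/examples/rlike_usage_sketch.rs",
    "content": "// NOTE: hypothetical usage sketch added for illustration; not part of the Comet source tree.\n// It evaluates the `RLike` physical expression above against a string column; the column\n// name and pattern are invented for the example, and `RLike` is assumed to be re-exported\n// from the `datafusion_comet_spark_expr` crate root.\n\nuse arrow::array::{Array, BooleanArray, RecordBatch, StringArray};\nuse arrow::datatypes::{DataType, Field, Schema};\nuse datafusion::common::Result;\nuse datafusion::physical_expr::expressions::Column;\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion_comet_spark_expr::RLike;\nuse std::sync::Arc;\n\nfn main() -> Result<()> {\n    let schema = Arc::new(Schema::new(vec![Field::new(\"s\", DataType::Utf8, true)]));\n    let batch = RecordBatch::try_new(\n        Arc::clone(&schema),\n        vec![Arc::new(StringArray::from(vec![Some(\"Rust\"), Some(\"spark\"), None]))],\n    )?;\n    // The pattern is compiled once in try_new; only scalar patterns are supported\n    let expr = RLike::try_new(Arc::new(Column::new(\"s\", 0)), \"R[a-z]+\")?;\n    let result = expr.evaluate(&batch)?.into_array(batch.num_rows())?;\n    let booleans = result.as_any().downcast_ref::<BooleanArray>().unwrap();\n    assert!(booleans.value(0)); // \"Rust\" matches R[a-z]+\n    assert!(!booleans.value(1)); // \"spark\" does not match\n    assert!(booleans.is_null(2)); // null input stays null\n    Ok(())\n}\n"
  },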
  {
    "path": "native/spark-expr/src/query_context.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n// Re-export all query context types from the common crate\npub use datafusion_comet_common::{create_query_context_map, QueryContext, QueryContextMap};\n"
  },
  {
    "path": "native/spark-expr/src/static_invoke/char_varchar_utils/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod read_side_padding;\n\npub use read_side_padding::{spark_lpad, spark_read_side_padding, spark_rpad};\n"
  },
  {
    "path": "native/spark-expr/src/static_invoke/char_varchar_utils/read_side_padding.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::builder::GenericStringBuilder;\nuse arrow::array::cast::as_dictionary_array;\nuse arrow::array::types::Int32Type;\nuse arrow::array::{make_array, Array, AsArray, DictionaryArray};\nuse arrow::array::{ArrayRef, OffsetSizeTrait};\nuse arrow::datatypes::DataType;\nuse datafusion::common::{cast::as_generic_string_array, DataFusionError, ScalarValue};\nuse datafusion::physical_plan::ColumnarValue;\nuse std::sync::Arc;\n\nconst SPACE: &str = \" \";\n/// Similar to DataFusion `rpad`, but not to truncate when the string is already longer than length\npub fn spark_read_side_padding(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    spark_read_side_padding2(args, false, false)\n}\n\n/// Custom `rpad` because DataFusion's `rpad` has differences in unicode handling\npub fn spark_rpad(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    spark_read_side_padding2(args, true, false)\n}\n\n/// Custom `lpad` because DataFusion's `lpad` has differences in unicode handling\npub fn spark_lpad(args: &[ColumnarValue]) -> Result<ColumnarValue, DataFusionError> {\n    spark_read_side_padding2(args, true, true)\n}\n\nfn spark_read_side_padding2(\n    args: &[ColumnarValue],\n    truncate: bool,\n    is_left_pad: bool,\n) -> Result<ColumnarValue, DataFusionError> {\n    match args {\n        [ColumnarValue::Array(array), ColumnarValue::Scalar(ScalarValue::Int32(Some(length)))] => {\n            match array.data_type() {\n                DataType::Utf8 => spark_read_side_padding_internal::<i32>(\n                    array,\n                    truncate,\n                    ColumnarValue::Scalar(ScalarValue::Int32(Some(*length))),\n                    SPACE,\n                    is_left_pad,\n                ),\n                DataType::LargeUtf8 => spark_read_side_padding_internal::<i64>(\n                    array,\n                    truncate,\n                    ColumnarValue::Scalar(ScalarValue::Int32(Some(*length))),\n                    SPACE,\n                    is_left_pad,\n                ),\n                // Dictionary support required for SPARK-48498\n                DataType::Dictionary(_, value_type) => {\n                    let dict = as_dictionary_array::<Int32Type>(array);\n                    let col = if value_type.as_ref() == &DataType::Utf8 {\n                        spark_read_side_padding_internal::<i32>(\n                            dict.values(),\n                            truncate,\n                            ColumnarValue::Scalar(ScalarValue::Int32(Some(*length))),\n                            SPACE,\n                            is_left_pad,\n                        )?\n                    } else 
{\n                        spark_read_side_padding_internal::<i64>(\n                            dict.values(),\n                            truncate,\n                            ColumnarValue::Scalar(ScalarValue::Int32(Some(*length))),\n                            SPACE,\n                            is_left_pad,\n                        )?\n                    };\n                    // `col` holds a single array, so the row count passed to `to_array()` is ignored\n                    let values = col.to_array(0)?;\n                    let result = DictionaryArray::try_new(dict.keys().clone(), values)?;\n                    Ok(ColumnarValue::Array(make_array(result.into())))\n                }\n                other => Err(DataFusionError::Internal(format!(\n                    \"Unsupported data type {other:?} for function rpad/lpad/read_side_padding\",\n                ))),\n            }\n        }\n        [ColumnarValue::Array(array), ColumnarValue::Scalar(ScalarValue::Int32(Some(length))), ColumnarValue::Scalar(ScalarValue::Utf8(Some(string)))] =>\n        {\n            match array.data_type() {\n                DataType::Utf8 => spark_read_side_padding_internal::<i32>(\n                    array,\n                    truncate,\n                    ColumnarValue::Scalar(ScalarValue::Int32(Some(*length))),\n                    string,\n                    is_left_pad,\n                ),\n                DataType::LargeUtf8 => spark_read_side_padding_internal::<i64>(\n                    array,\n                    truncate,\n                    ColumnarValue::Scalar(ScalarValue::Int32(Some(*length))),\n                    string,\n                    is_left_pad,\n                ),\n                // Dictionary support required for SPARK-48498\n                DataType::Dictionary(_, value_type) => {\n                    let dict = as_dictionary_array::<Int32Type>(array);\n                    let col = if value_type.as_ref() == &DataType::Utf8 {\n                        spark_read_side_padding_internal::<i32>(\n                            dict.values(),\n                            truncate,\n                            ColumnarValue::Scalar(ScalarValue::Int32(Some(*length))),\n                            string,\n                            is_left_pad,\n                        )?\n                    } else {\n                        spark_read_side_padding_internal::<i64>(\n                            dict.values(),\n                            truncate,\n                            ColumnarValue::Scalar(ScalarValue::Int32(Some(*length))),\n                            string,\n                            is_left_pad,\n                        )?\n                    };\n                    // `col` holds a single array, so the row count passed to `to_array()` is ignored
\n                    let values = col.to_array(0)?;\n                    let result = DictionaryArray::try_new(dict.keys().clone(), values)?;\n                    Ok(ColumnarValue::Array(make_array(result.into())))\n                }\n                other => Err(DataFusionError::Internal(format!(\n                    \"Unsupported data type {other:?} for function rpad/lpad/read_side_padding\",\n                ))),\n            }\n        }\n        [ColumnarValue::Array(array), ColumnarValue::Array(array_int)] => match array.data_type() {\n            DataType::Utf8 => spark_read_side_padding_internal::<i32>(\n                array,\n                truncate,\n                ColumnarValue::Array(Arc::<dyn Array>::clone(array_int)),\n                SPACE,\n                is_left_pad,\n            ),\n            DataType::LargeUtf8 => spark_read_side_padding_internal::<i64>(\n                array,\n                truncate,\n                ColumnarValue::Array(Arc::<dyn Array>::clone(array_int)),\n                SPACE,\n                is_left_pad,\n            ),\n            other => Err(DataFusionError::Internal(format!(\n                \"Unsupported data type {other:?} for function rpad/lpad/read_side_padding\",\n            ))),\n        },\n        [ColumnarValue::Array(array), ColumnarValue::Array(array_int), ColumnarValue::Scalar(ScalarValue::Utf8(Some(string)))] => {\n            match array.data_type() {\n                DataType::Utf8 => spark_read_side_padding_internal::<i32>(\n                    array,\n                    truncate,\n                    ColumnarValue::Array(Arc::<dyn Array>::clone(array_int)),\n                    string,\n                    is_left_pad,\n                ),\n                DataType::LargeUtf8 => spark_read_side_padding_internal::<i64>(\n                    array,\n                    truncate,\n                    ColumnarValue::Array(Arc::<dyn Array>::clone(array_int)),\n                    string,\n                    is_left_pad,\n                ),\n                other => Err(DataFusionError::Internal(format!(\n                    \"Unsupported data type {other:?} for function rpad/lpad/read_side_padding\",\n                ))),\n            }\n        }\n        other => Err(DataFusionError::Internal(format!(\n            \"Unsupported arguments {other:?} for function rpad/lpad/read_side_padding\",\n        ))),\n    }\n}\n\nfn spark_read_side_padding_internal<T: OffsetSizeTrait>(\n    array: &ArrayRef,\n    truncate: bool,\n    pad_type: ColumnarValue,\n    pad_string: &str,\n    is_left_pad: bool,\n) -> Result<ColumnarValue, DataFusionError> {\n    let string_array = as_generic_string_array::<T>(array)?;\n\n    // Pre-compute pad characters once to avoid repeated iteration\n    let pad_chars: Vec<char> = pad_string.chars().collect();\n\n    match pad_type {\n        ColumnarValue::Array(array_int) => {\n            let int_pad_array = array_int.as_primitive::<Int32Type>();\n\n            let mut builder = GenericStringBuilder::<T>::with_capacity(\n                string_array.len(),\n                string_array.len() * int_pad_array.len(),\n            );\n\n            // Reusable buffer to avoid per-element allocations\n            let mut buffer = String::with_capacity(pad_chars.len());\n\n            for (string, length) in string_array.iter().zip(int_pad_array) {\n                // Spark returns null when the pad length itself is null\n                let Some(length) = length else {\n                    builder.append_null();\n                    continue;\n                };\n                match string {\n                    Some(string) => {\n                        if 
length >= 0 {\n                            buffer.clear();\n                            write_padded_string(\n                                &mut buffer,\n                                string,\n                                length as usize,\n                                truncate,\n                                &pad_chars,\n                                is_left_pad,\n                            );\n                            builder.append_value(&buffer);\n                        } else {\n                            builder.append_value(\"\");\n                        }\n                    }\n                    _ => builder.append_null(),\n                }\n            }\n            Ok(ColumnarValue::Array(Arc::new(builder.finish())))\n        }\n        ColumnarValue::Scalar(const_pad_length) => {\n            let length = 0.max(i32::try_from(const_pad_length)?) as usize;\n\n            let mut builder = GenericStringBuilder::<T>::with_capacity(\n                string_array.len(),\n                string_array.len() * length,\n            );\n\n            // Reusable buffer to avoid per-element allocations\n            let mut buffer = String::with_capacity(length);\n\n            for string in string_array.iter() {\n                match string {\n                    Some(string) => {\n                        buffer.clear();\n                        write_padded_string(\n                            &mut buffer,\n                            string,\n                            length,\n                            truncate,\n                            &pad_chars,\n                            is_left_pad,\n                        );\n                        builder.append_value(&buffer);\n                    }\n                    _ => builder.append_null(),\n                }\n            }\n            Ok(ColumnarValue::Array(Arc::new(builder.finish())))\n        }\n    }\n}\n\n/// Writes a padded string to the provided buffer, avoiding allocations.\n///\n/// The buffer is assumed to be cleared before calling this function.\n/// Padding characters are written directly to the buffer without intermediate allocations.\n#[inline]\nfn write_padded_string(\n    buffer: &mut String,\n    string: &str,\n    length: usize,\n    truncate: bool,\n    pad_chars: &[char],\n    is_left_pad: bool,\n) {\n    // Spark's UTF8String uses char count, not grapheme count\n    // https://stackoverflow.com/a/46290728\n    let char_len = string.chars().count();\n\n    if length <= char_len {\n        if truncate {\n            // Find byte index for the truncation point\n            let idx = string\n                .char_indices()\n                .nth(length)\n                .map(|(i, _)| i)\n                .unwrap_or(string.len());\n            buffer.push_str(&string[..idx]);\n        } else {\n            buffer.push_str(string);\n        }\n    } else {\n        let pad_needed = length - char_len;\n\n        if is_left_pad {\n            // Write padding first, then string\n            write_padding_chars(buffer, pad_chars, pad_needed);\n            buffer.push_str(string);\n        } else {\n            // Write string first, then padding\n            buffer.push_str(string);\n            write_padding_chars(buffer, pad_chars, pad_needed);\n        }\n    }\n}\n\n/// Writes `count` characters from the cycling pad pattern directly to the buffer.\n#[inline]\nfn write_padding_chars(buffer: &mut String, pad_chars: &[char], count: usize) {\n    if pad_chars.is_empty() {\n        return;\n  
  }\n\n    // Optimize for the common single-character padding case\n    if pad_chars.len() == 1 {\n        let ch = pad_chars[0];\n        for _ in 0..count {\n            buffer.push(ch);\n        }\n    } else {\n        // Multi-character padding: cycle through pad_chars\n        let mut remaining = count;\n        while remaining > 0 {\n            for &ch in pad_chars {\n                if remaining == 0 {\n                    break;\n                }\n                buffer.push(ch);\n                remaining -= 1;\n            }\n        }\n    }\n}\n"
  },
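  {
    "path": "native/spark-expr/examples/padding_semantics_sketch.rs",
    "content": "// NOTE: hypothetical usage sketch added for illustration; not part of the Comet source tree.\n// It contrasts `spark_rpad` (pads or truncates to the target length) with\n// `spark_read_side_padding` (pads but never truncates), and shows that lengths count\n// characters rather than bytes. Both functions are assumed to be re-exported from the\n// `datafusion_comet_spark_expr` crate root.\n\nuse arrow::array::{Array, ArrayRef, StringArray};\nuse datafusion::common::{DataFusionError, ScalarValue};\nuse datafusion::physical_plan::ColumnarValue;\nuse datafusion_comet_spark_expr::{spark_read_side_padding, spark_rpad};\nuse std::sync::Arc;\n\nfn utf8_values(value: ColumnarValue) -> Result<Vec<String>, DataFusionError> {\n    let array: ArrayRef = value.to_array(1)?;\n    let strings = array.as_any().downcast_ref::<StringArray>().unwrap();\n    Ok((0..strings.len()).map(|i| strings.value(i).to_string()).collect())\n}\n\nfn main() -> Result<(), DataFusionError> {\n    let input = || ColumnarValue::Array(Arc::new(StringArray::from(vec![\"héllo\", \"hi\"])) as ArrayRef);\n    let len = || ColumnarValue::Scalar(ScalarValue::Int32(Some(3)));\n\n    // rpad truncates \"héllo\" to three characters (not bytes) and right-pads \"hi\" with a space\n    assert_eq!(utf8_values(spark_rpad(&[input(), len()])?)?, vec![\"hél\", \"hi \"]);\n\n    // read-side padding never truncates, so the longer string passes through unchanged\n    assert_eq!(\n        utf8_values(spark_read_side_padding(&[input(), len()])?)?,\n        vec![\"héllo\", \"hi \"]\n    );\n    Ok(())\n}\n"
  },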
  {
    "path": "native/spark-expr/src/static_invoke/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod char_varchar_utils;\n\npub use char_varchar_utils::{spark_lpad, spark_read_side_padding, spark_rpad};\n"
  },
  {
    "path": "native/spark-expr/src/string_funcs/contains.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! Optimized `contains` string function for Spark compatibility.\n//!\n//! Optimized for scalar pattern case by passing scalar directly to arrow_contains\n//! instead of expanding to arrays like DataFusion's built-in contains.\n\nuse arrow::array::{Array, ArrayRef, BooleanArray, StringArray};\nuse arrow::compute::kernels::comparison::contains as arrow_contains;\nuse arrow::datatypes::DataType;\nuse datafusion::common::{exec_err, Result, ScalarValue};\nuse datafusion::logical_expr::{\n    ColumnarValue, ScalarFunctionArgs, ScalarUDFImpl, Signature, Volatility,\n};\nuse std::any::Any;\nuse std::sync::Arc;\n\n/// Spark-optimized contains function.\n/// Returns true if the first string argument contains the second string argument.\n/// Optimized for scalar pattern constants.\n#[derive(Debug, PartialEq, Eq, Hash)]\npub struct SparkContains {\n    signature: Signature,\n}\n\nimpl Default for SparkContains {\n    fn default() -> Self {\n        Self::new()\n    }\n}\n\nimpl SparkContains {\n    pub fn new() -> Self {\n        Self {\n            signature: Signature::variadic_any(Volatility::Immutable),\n        }\n    }\n}\n\nimpl ScalarUDFImpl for SparkContains {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn name(&self) -> &str {\n        \"contains\"\n    }\n\n    fn signature(&self) -> &Signature {\n        &self.signature\n    }\n\n    fn return_type(&self, _arg_types: &[DataType]) -> Result<DataType> {\n        Ok(DataType::Boolean)\n    }\n\n    fn invoke_with_args(&self, args: ScalarFunctionArgs) -> Result<ColumnarValue> {\n        if args.args.len() != 2 {\n            return exec_err!(\"contains function requires exactly 2 arguments\");\n        }\n        spark_contains(&args.args[0], &args.args[1])\n    }\n}\n\n/// Execute contains function with optimized scalar pattern handling.\nfn spark_contains(haystack: &ColumnarValue, needle: &ColumnarValue) -> Result<ColumnarValue> {\n    match (haystack, needle) {\n        // Both arrays - use arrow's contains directly\n        (ColumnarValue::Array(haystack_array), ColumnarValue::Array(needle_array)) => {\n            let result = arrow_contains(haystack_array, needle_array)?;\n            Ok(ColumnarValue::Array(Arc::new(result)))\n        }\n\n        // Array haystack, scalar needle - OPTIMIZED PATH\n        (ColumnarValue::Array(haystack_array), ColumnarValue::Scalar(needle_scalar)) => {\n            let result = contains_with_arrow_scalar(haystack_array, needle_scalar)?;\n            Ok(ColumnarValue::Array(result))\n        }\n\n        // Scalar haystack, array needle - less common\n        (ColumnarValue::Scalar(haystack_scalar), ColumnarValue::Array(needle_array)) => {\n            let 
haystack_array = haystack_scalar.to_array_of_size(needle_array.len())?;\n            let result = arrow_contains(&haystack_array, needle_array)?;\n            Ok(ColumnarValue::Array(Arc::new(result)))\n        }\n\n        // Both scalars - compute single result\n        (ColumnarValue::Scalar(haystack_scalar), ColumnarValue::Scalar(needle_scalar)) => {\n            let result = contains_scalar_scalar(haystack_scalar, needle_scalar)?;\n            Ok(ColumnarValue::Scalar(result))\n        }\n    }\n}\n\n/// Optimized contains for array haystack with scalar needle.\n/// Uses Arrow's native scalar handling for better performance.\nfn contains_with_arrow_scalar(\n    haystack_array: &ArrayRef,\n    needle_scalar: &ScalarValue,\n) -> Result<ArrayRef> {\n    // Handle null needle\n    if needle_scalar.is_null() {\n        return Ok(Arc::new(BooleanArray::new_null(haystack_array.len())));\n    }\n\n    // Extract the needle string\n    let needle_str = match needle_scalar {\n        ScalarValue::Utf8(Some(s))\n        | ScalarValue::LargeUtf8(Some(s))\n        | ScalarValue::Utf8View(Some(s)) => s.clone(),\n        _ => {\n            return exec_err!(\n                \"contains function requires string type for needle, got {:?}\",\n                needle_scalar.data_type()\n            )\n        }\n    };\n\n    // Create scalar array for needle - tells Arrow to use optimized paths\n    let needle_scalar_array = StringArray::new_scalar(needle_str);\n\n    // Use Arrow's contains which detects scalar case and uses optimized paths\n    let result = arrow_contains(haystack_array, &needle_scalar_array)?;\n    Ok(Arc::new(result))\n}\n\n/// Contains for two scalar values.\nfn contains_scalar_scalar(\n    haystack_scalar: &ScalarValue,\n    needle_scalar: &ScalarValue,\n) -> Result<ScalarValue> {\n    // Handle nulls\n    if haystack_scalar.is_null() || needle_scalar.is_null() {\n        return Ok(ScalarValue::Boolean(None));\n    }\n\n    let haystack_str = match haystack_scalar {\n        ScalarValue::Utf8(Some(s))\n        | ScalarValue::LargeUtf8(Some(s))\n        | ScalarValue::Utf8View(Some(s)) => s.as_str(),\n        _ => {\n            return exec_err!(\n                \"contains function requires string type for haystack, got {:?}\",\n                haystack_scalar.data_type()\n            )\n        }\n    };\n\n    let needle_str = match needle_scalar {\n        ScalarValue::Utf8(Some(s))\n        | ScalarValue::LargeUtf8(Some(s))\n        | ScalarValue::Utf8View(Some(s)) => s.as_str(),\n        _ => {\n            return exec_err!(\n                \"contains function requires string type for needle, got {:?}\",\n                needle_scalar.data_type()\n            )\n        }\n    };\n\n    Ok(ScalarValue::Boolean(Some(\n        haystack_str.contains(needle_str),\n    )))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::StringArray;\n\n    #[test]\n    fn test_contains_array_scalar() {\n        let haystack = Arc::new(StringArray::from(vec![\n            Some(\"hello world\"),\n            Some(\"foo bar\"),\n            Some(\"testing\"),\n            None,\n        ])) as ArrayRef;\n        let needle = ScalarValue::Utf8(Some(\"world\".to_string()));\n\n        let result = contains_with_arrow_scalar(&haystack, &needle).unwrap();\n        let bool_array = result.as_any().downcast_ref::<BooleanArray>().unwrap();\n\n        assert!(bool_array.value(0)); // \"hello world\" contains \"world\"\n        assert!(!bool_array.value(1)); // \"foo bar\" does not 
contain \"world\"\n        assert!(!bool_array.value(2)); // \"testing\" does not contain \"world\"\n        assert!(bool_array.is_null(3)); // null input => null output\n    }\n\n    #[test]\n    fn test_contains_scalar_scalar() {\n        let haystack = ScalarValue::Utf8(Some(\"hello world\".to_string()));\n        let needle = ScalarValue::Utf8(Some(\"world\".to_string()));\n\n        let result = contains_scalar_scalar(&haystack, &needle).unwrap();\n        assert_eq!(result, ScalarValue::Boolean(Some(true)));\n\n        let needle_not_found = ScalarValue::Utf8(Some(\"xyz\".to_string()));\n        let result = contains_scalar_scalar(&haystack, &needle_not_found).unwrap();\n        assert_eq!(result, ScalarValue::Boolean(Some(false)));\n    }\n\n    #[test]\n    fn test_contains_null_needle() {\n        let haystack = Arc::new(StringArray::from(vec![\n            Some(\"hello world\"),\n            Some(\"foo bar\"),\n        ])) as ArrayRef;\n        let needle = ScalarValue::Utf8(None);\n\n        let result = contains_with_arrow_scalar(&haystack, &needle).unwrap();\n        let bool_array = result.as_any().downcast_ref::<BooleanArray>().unwrap();\n\n        // Null needle should produce null results\n        assert!(bool_array.is_null(0));\n        assert!(bool_array.is_null(1));\n    }\n\n    #[test]\n    fn test_contains_empty_needle() {\n        let haystack = Arc::new(StringArray::from(vec![Some(\"hello world\"), Some(\"\")])) as ArrayRef;\n        let needle = ScalarValue::Utf8(Some(\"\".to_string()));\n\n        let result = contains_with_arrow_scalar(&haystack, &needle).unwrap();\n        let bool_array = result.as_any().downcast_ref::<BooleanArray>().unwrap();\n\n        // Empty string is contained in any string\n        assert!(bool_array.value(0));\n        assert!(bool_array.value(1));\n    }\n}\n"
  },
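  {
    "path": "native/spark-expr/examples/contains_scalar_datum_sketch.rs",
    "content": "// NOTE: hypothetical sketch added for illustration; not part of the Comet source tree.\n// It shows the Arrow kernel call underlying the optimized array/scalar path above:\n// wrapping the needle with `StringArray::new_scalar` passes a `Scalar` datum to the\n// kernel, so the needle is not materialized once per row.\n\nuse arrow::array::{Array, StringArray};\nuse arrow::compute::kernels::comparison::contains;\nuse arrow::error::ArrowError;\n\nfn main() -> Result<(), ArrowError> {\n    let haystack = StringArray::from(vec![Some(\"hello world\"), Some(\"foo\"), None]);\n    let needle = StringArray::new_scalar(\"world\");\n    let result = contains(&haystack, &needle)?;\n    assert!(result.value(0)); // \"hello world\" contains \"world\"\n    assert!(!result.value(1)); // \"foo\" does not\n    assert!(result.is_null(2)); // a null haystack row stays null\n    Ok(())\n}\n"
  },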
  {
    "path": "native/spark-expr/src/string_funcs/get_json_object.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, ArrayRef, StringArray, StringBuilder};\nuse datafusion::common::{\n    cast::as_generic_string_array, exec_err, Result as DataFusionResult, ScalarValue,\n};\nuse datafusion::logical_expr::ColumnarValue;\nuse serde_json::Value;\nuse std::sync::Arc;\n\n/// Extracts a string from a ScalarValue, returning Ok(None) for null values.\nfn scalar_to_str(scalar: &ScalarValue, arg_name: &str) -> DataFusionResult<Option<String>> {\n    match scalar {\n        ScalarValue::Utf8(s) | ScalarValue::LargeUtf8(s) => Ok(s.clone()),\n        _ => exec_err!(\"get_json_object {arg_name} must be a string\"),\n    }\n}\n\n/// Spark-compatible `get_json_object` function.\n///\n/// Extracts a JSON value from a JSON string using a JSONPath expression.\n/// Returns the result as a string, or null if the path doesn't match or input is invalid.\n///\n/// Supported JSONPath syntax:\n/// - `$` — root element\n/// - `.name` or `['name']` — named child\n/// - `[n]` — array index (0-based)\n/// - `[*]` — array wildcard (iterates over array elements)\npub fn spark_get_json_object(args: &[ColumnarValue]) -> DataFusionResult<ColumnarValue> {\n    if args.len() != 2 {\n        return exec_err!(\n            \"get_json_object expects 2 arguments (json, path), got {}\",\n            args.len()\n        );\n    }\n\n    match (&args[0], &args[1]) {\n        // Column json, scalar path (most common case)\n        (ColumnarValue::Array(json_array), ColumnarValue::Scalar(path_scalar)) => {\n            let path_str = match scalar_to_str(path_scalar, \"path\")? 
{\n                Some(p) => p,\n                None => {\n                    let null_array: ArrayRef = Arc::new(StringArray::new_null(json_array.len()));\n                    return Ok(ColumnarValue::Array(null_array));\n                }\n            };\n\n            let parsed_path = match parse_json_path(&path_str) {\n                Some(p) => p,\n                None => {\n                    let null_array: ArrayRef = Arc::new(StringArray::new_null(json_array.len()));\n                    return Ok(ColumnarValue::Array(null_array));\n                }\n            };\n\n            let json_strings = as_generic_string_array::<i32>(json_array)?;\n            let mut builder = StringBuilder::new();\n\n            for i in 0..json_strings.len() {\n                if json_strings.is_null(i) {\n                    builder.append_null();\n                } else {\n                    let json_str = json_strings.value(i);\n                    match evaluate_path(json_str, &parsed_path) {\n                        Some(result) => builder.append_value(&result),\n                        None => builder.append_null(),\n                    }\n                }\n            }\n\n            Ok(ColumnarValue::Array(Arc::new(builder.finish())))\n        }\n        // Scalar json, scalar path\n        (ColumnarValue::Scalar(json_scalar), ColumnarValue::Scalar(path_scalar)) => {\n            let json_str = match scalar_to_str(json_scalar, \"json\")? {\n                Some(s) => s,\n                None => return Ok(ColumnarValue::Scalar(ScalarValue::Utf8(None))),\n            };\n            let path_str = match scalar_to_str(path_scalar, \"path\")? {\n                Some(p) => p,\n                None => return Ok(ColumnarValue::Scalar(ScalarValue::Utf8(None))),\n            };\n\n            let parsed_path = match parse_json_path(&path_str) {\n                Some(p) => p,\n                None => return Ok(ColumnarValue::Scalar(ScalarValue::Utf8(None))),\n            };\n\n            let result = evaluate_path(&json_str, &parsed_path);\n            Ok(ColumnarValue::Scalar(ScalarValue::Utf8(result)))\n        }\n        // Column json, column path\n        (ColumnarValue::Array(json_array), ColumnarValue::Array(path_array)) => {\n            let json_strings = as_generic_string_array::<i32>(json_array)?;\n            let path_strings = as_generic_string_array::<i32>(path_array)?;\n            let mut builder = StringBuilder::new();\n\n            for i in 0..json_strings.len() {\n                if json_strings.is_null(i) || path_strings.is_null(i) {\n                    builder.append_null();\n                } else {\n                    let json_str = json_strings.value(i);\n                    let path_str = path_strings.value(i);\n                    match parse_json_path(path_str) {\n                        Some(parsed_path) => match evaluate_path(json_str, &parsed_path) {\n                            Some(result) => builder.append_value(&result),\n                            None => builder.append_null(),\n                        },\n                        None => builder.append_null(),\n                    }\n                }\n            }\n\n            Ok(ColumnarValue::Array(Arc::new(builder.finish())))\n        }\n        _ => exec_err!(\"get_json_object: unsupported argument types\"),\n    }\n}\n\n/// A parsed JSONPath segment.\n#[derive(Debug, Clone)]\nenum PathSegment {\n    /// Named field: `.name` or `['name']`\n    Field(String),\n    /// Array index: `[n]`\n    
Index(usize),\n    /// Wildcard: `[*]` (iterates over array elements)\n    Wildcard,\n}\n\n/// A parsed JSONPath expression with precomputed metadata.\nstruct ParsedPath {\n    segments: Vec<PathSegment>,\n    has_wildcard: bool,\n}\n\n/// Parse a Spark-compatible JSONPath expression.\n/// Returns None for invalid paths.\nfn parse_json_path(path: &str) -> Option<ParsedPath> {\n    let mut chars = path.chars().peekable();\n\n    // Must start with '$'\n    if chars.next()? != '$' {\n        return None;\n    }\n\n    let mut segments = Vec::new();\n    let mut has_wildcard = false;\n\n    while chars.peek().is_some() {\n        match chars.peek()? {\n            '.' => {\n                chars.next();\n                if chars.peek() == Some(&'.') {\n                    // Recursive descent not supported\n                    return None;\n                }\n                if chars.peek() == Some(&'*') {\n                    chars.next();\n                    segments.push(PathSegment::Wildcard);\n                    has_wildcard = true;\n                } else {\n                    // Read field name\n                    let mut name = String::new();\n                    while let Some(&c) = chars.peek() {\n                        if c == '.' || c == '[' {\n                            break;\n                        }\n                        name.push(c);\n                        chars.next();\n                    }\n                    if name.is_empty() {\n                        return None;\n                    }\n                    segments.push(PathSegment::Field(name));\n                }\n            }\n            '[' => {\n                chars.next();\n                if chars.peek() == Some(&'\\'') {\n                    // Bracket notation with quotes: ['name'] or ['*']\n                    chars.next();\n                    let mut name = String::new();\n                    loop {\n                        match chars.next()? {\n                            '\\'' => break,\n                            c => name.push(c),\n                        }\n                    }\n                    if chars.next()? != ']' {\n                        return None;\n                    }\n                    if name == \"*\" {\n                        segments.push(PathSegment::Wildcard);\n                        has_wildcard = true;\n                    } else {\n                        segments.push(PathSegment::Field(name));\n                    }\n                } else if chars.peek() == Some(&'*') {\n                    // [*]\n                    chars.next();\n                    if chars.next()? != ']' {\n                        return None;\n                    }\n                    segments.push(PathSegment::Wildcard);\n                    has_wildcard = true;\n                } else {\n                    // [n] — numeric index\n                    let mut num_str = String::new();\n                    while let Some(&c) = chars.peek() {\n                        if c == ']' {\n                            break;\n                        }\n                        num_str.push(c);\n                        chars.next();\n                    }\n                    if chars.next()? 
!= ']' {\n                        return None;\n                    }\n                    let idx: usize = num_str.parse().ok()?;\n                    segments.push(PathSegment::Index(idx));\n                }\n            }\n            _ => {\n                // Unexpected character\n                return None;\n            }\n        }\n    }\n\n    Some(ParsedPath {\n        segments,\n        has_wildcard,\n    })\n}\n\n/// Evaluate a parsed JSONPath against a JSON string.\n/// Returns the result as a string, or None if no match.\nfn evaluate_path(json_str: &str, path: &ParsedPath) -> Option<String> {\n    let value: Value = serde_json::from_str(json_str).ok()?;\n\n    if !path.has_wildcard {\n        // Fast path: no wildcards, no Vec allocations\n        let result = evaluate_no_wildcard(&value, &path.segments)?;\n        return value_to_string(result);\n    }\n\n    // Wildcard path: may return multiple results\n    let results = evaluate_with_wildcard(&value, &path.segments);\n\n    match results.len() {\n        0 => None,\n        1 => {\n            // Single wildcard match: Spark preserves JSON serialization format\n            // (strings keep their quotes, numbers don't)\n            let s = serde_json::to_string(results[0]).ok()?;\n            if s == \"null\" {\n                None\n            } else {\n                Some(s)\n            }\n        }\n        _ => {\n            // Multiple results: wrap in JSON array\n            let arr = Value::Array(results.into_iter().cloned().collect());\n            Some(arr.to_string())\n        }\n    }\n}\n\n/// Fast-path evaluation for paths without wildcards.\n/// Returns a reference to the matched value, or None if no match.\nfn evaluate_no_wildcard<'a>(value: &'a Value, segments: &[PathSegment]) -> Option<&'a Value> {\n    if segments.is_empty() {\n        return Some(value);\n    }\n\n    match &segments[0] {\n        PathSegment::Field(name) => {\n            let child = value.as_object()?.get(name)?;\n            evaluate_no_wildcard(child, &segments[1..])\n        }\n        PathSegment::Index(idx) => {\n            let child = value.as_array()?.get(*idx)?;\n            evaluate_no_wildcard(child, &segments[1..])\n        }\n        PathSegment::Wildcard => unreachable!(\"wildcard in no-wildcard path\"),\n    }\n}\n\n/// Evaluation for paths containing wildcards.\n/// Returns references to all matching values.\nfn evaluate_with_wildcard<'a>(value: &'a Value, segments: &[PathSegment]) -> Vec<&'a Value> {\n    if segments.is_empty() {\n        return vec![value];\n    }\n\n    let rest = &segments[1..];\n\n    match &segments[0] {\n        PathSegment::Field(name) => match value {\n            Value::Object(map) => match map.get(name) {\n                Some(v) => evaluate_with_wildcard(v, rest),\n                None => vec![],\n            },\n            _ => vec![],\n        },\n        PathSegment::Index(idx) => match value {\n            Value::Array(arr) => match arr.get(*idx) {\n                Some(v) => evaluate_with_wildcard(v, rest),\n                None => vec![],\n            },\n            _ => vec![],\n        },\n        PathSegment::Wildcard => match value {\n            Value::Array(arr) => arr\n                .iter()\n                .flat_map(|v| evaluate_with_wildcard(v, rest))\n                .collect(),\n            _ => vec![],\n        },\n    }\n}\n\n/// Convert a JSON value to its string representation matching Spark behavior.\n/// - Strings are returned without quotes\n/// - null 
returns None\n/// - Numbers, booleans, objects, arrays are serialized as JSON\nfn value_to_string(value: &Value) -> Option<String> {\n    match value {\n        Value::Null => None,\n        Value::String(s) => Some(s.clone()),\n        _ => Some(value.to_string()),\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_parse_json_path() {\n        // Root only\n        let path = parse_json_path(\"$\").unwrap();\n        assert!(path.segments.is_empty());\n        assert!(!path.has_wildcard);\n\n        // Simple field\n        let path = parse_json_path(\"$.name\").unwrap();\n        assert!(matches!(&path.segments[0], PathSegment::Field(n) if n == \"name\"));\n        assert!(!path.has_wildcard);\n\n        // Array index\n        let path = parse_json_path(\"$[0]\").unwrap();\n        assert!(matches!(&path.segments[0], PathSegment::Index(0)));\n\n        // Bracket notation\n        let path = parse_json_path(\"$['key with spaces']\").unwrap();\n        assert!(matches!(&path.segments[0], PathSegment::Field(n) if n == \"key with spaces\"));\n\n        // Wildcard\n        let path = parse_json_path(\"$[*]\").unwrap();\n        assert!(matches!(&path.segments[0], PathSegment::Wildcard));\n        assert!(path.has_wildcard);\n\n        let path = parse_json_path(\"$.*\").unwrap();\n        assert!(matches!(&path.segments[0], PathSegment::Wildcard));\n        assert!(path.has_wildcard);\n\n        // Recursive descent not supported\n        assert!(parse_json_path(\"$..name\").is_none());\n\n        // Must start with $\n        assert!(parse_json_path(\"name\").is_none());\n        assert!(parse_json_path(\"[0]\").is_none());\n    }\n\n    #[test]\n    fn test_evaluate_simple_field() {\n        let path = parse_json_path(\"$.name\").unwrap();\n        assert_eq!(\n            evaluate_path(r#\"{\"name\":\"John\",\"age\":30}\"#, &path),\n            Some(\"John\".to_string())\n        );\n        let path = parse_json_path(\"$.age\").unwrap();\n        assert_eq!(\n            evaluate_path(r#\"{\"name\":\"John\",\"age\":30}\"#, &path),\n            Some(\"30\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_evaluate_nested() {\n        let json = r#\"{\"user\":{\"profile\":{\"name\":\"Alice\"}}}\"#;\n        let path = parse_json_path(\"$.user.profile.name\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(\"Alice\".to_string()));\n    }\n\n    #[test]\n    fn test_evaluate_array_index() {\n        let path = parse_json_path(\"$[0]\").unwrap();\n        assert_eq!(evaluate_path(r#\"[1,2,3]\"#, &path), Some(\"1\".to_string()));\n        let path = parse_json_path(\"$[3]\").unwrap();\n        assert_eq!(evaluate_path(r#\"[1,2,3]\"#, &path), None);\n    }\n\n    #[test]\n    fn test_evaluate_root() {\n        let path = parse_json_path(\"$\").unwrap();\n        assert_eq!(\n            evaluate_path(r#\"{\"a\":\"b\"}\"#, &path),\n            Some(r#\"{\"a\":\"b\"}\"#.to_string())\n        );\n    }\n\n    #[test]\n    fn test_evaluate_null_value() {\n        let path = parse_json_path(\"$.a\").unwrap();\n        assert_eq!(evaluate_path(r#\"{\"a\":null}\"#, &path), None);\n    }\n\n    #[test]\n    fn test_evaluate_missing_field() {\n        let path = parse_json_path(\"$.c\").unwrap();\n        assert_eq!(evaluate_path(r#\"{\"a\":\"b\"}\"#, &path), None);\n    }\n\n    #[test]\n    fn test_evaluate_invalid_json() {\n        let path = parse_json_path(\"$.a\").unwrap();\n        assert_eq!(evaluate_path(\"not json\", &path), 
None);\n    }\n\n    #[test]\n    fn test_evaluate_wildcard() {\n        let json = r#\"[{\"a\":\"b\"},{\"a\":\"c\"}]\"#;\n        let path = parse_json_path(\"$[*].a\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(r#\"[\"b\",\"c\"]\"#.to_string()));\n    }\n\n    #[test]\n    fn test_evaluate_string_unquoted() {\n        // Strings should be returned without quotes for non-wildcard paths\n        let path = parse_json_path(\"$[1]\").unwrap();\n        assert_eq!(evaluate_path(r#\"[\"a\",\"b\"]\"#, &path), Some(\"b\".to_string()));\n    }\n\n    #[test]\n    fn test_evaluate_nested_array_field() {\n        let json = r#\"{\"items\":[\"apple\",\"banana\",\"cherry\"]}\"#;\n        let path = parse_json_path(\"$.items[1]\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(\"banana\".to_string()));\n    }\n\n    #[test]\n    fn test_evaluate_bracket_notation_with_spaces() {\n        let json = r#\"{\"key with spaces\":\"it works\"}\"#;\n        let path = parse_json_path(\"$['key with spaces']\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(\"it works\".to_string()));\n    }\n\n    #[test]\n    fn test_evaluate_boolean_and_nested_object() {\n        let json = r#\"{\"a\":true,\"b\":{\"c\":1}}\"#;\n        let path_a = parse_json_path(\"$.a\").unwrap();\n        assert_eq!(evaluate_path(json, &path_a), Some(\"true\".to_string()));\n        let path_b = parse_json_path(\"$.b\").unwrap();\n        assert_eq!(evaluate_path(json, &path_b), Some(r#\"{\"c\":1}\"#.to_string()));\n    }\n\n    #[test]\n    fn test_object_key_order_preserved() {\n        // Depends on serde_json \"preserve_order\" feature (see Cargo.toml)\n        let json = r#\"{\"z\":1,\"a\":2}\"#;\n        let path = parse_json_path(\"$\").unwrap();\n        assert_eq!(\n            evaluate_path(json, &path),\n            Some(r#\"{\"z\":1,\"a\":2}\"#.to_string())\n        );\n    }\n\n    #[test]\n    fn test_wildcard_single_match() {\n        // Single wildcard match on string: Spark preserves JSON quotes\n        let json = r#\"[{\"a\":\"only\"}]\"#;\n        let path = parse_json_path(\"$[*].a\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(r#\"\"only\"\"#.to_string()));\n\n        // Single wildcard match on number: no quotes\n        let json = r#\"[{\"a\":42}]\"#;\n        assert_eq!(evaluate_path(json, &path), Some(\"42\".to_string()));\n    }\n\n    #[test]\n    fn test_wildcard_missing_fields() {\n        // Wildcard should skip elements where the field is missing\n        let json = r#\"[{\"a\":1},{\"b\":2},{\"a\":3}]\"#;\n        let path = parse_json_path(\"$[*].a\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(\"[1,3]\".to_string()));\n    }\n\n    #[test]\n    fn test_field_with_colon() {\n        let json = r#\"{\"fb:testid\":\"123\"}\"#;\n        let path = parse_json_path(\"$.fb:testid\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(\"123\".to_string()));\n    }\n\n    #[test]\n    fn test_dot_bracket_invalid() {\n        // $.[0] is not valid path syntax in Spark\n        assert!(parse_json_path(\"$.[0]\").is_none());\n    }\n\n    #[test]\n    fn test_object_wildcard() {\n        // Spark returns null for $.* on objects (wildcard only works in array contexts)\n        let json = r#\"{\"a\":1,\"b\":2,\"c\":3}\"#;\n        let path = parse_json_path(\"$.*\").unwrap();\n        assert_eq!(evaluate_path(json, &path), None);\n    }\n\n    #[test]\n    fn test_unicode_field_names() {\n        let json = 
r#\"{\"名前\":\"太郎\",\"年齢\":25}\"#;\n        let path = parse_json_path(\"$.名前\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(\"太郎\".to_string()));\n    }\n\n    #[test]\n    fn test_unicode_values() {\n        let json = r#\"{\"greeting\":\"こんにちは世界\"}\"#;\n        let path = parse_json_path(\"$.greeting\").unwrap();\n        assert_eq!(\n            evaluate_path(json, &path),\n            Some(\"こんにちは世界\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_unicode_emoji() {\n        let json = r#\"{\"emoji\":\"🎉🚀\",\"nested\":{\"flag\":\"🇺🇸\"}}\"#;\n        let path = parse_json_path(\"$.emoji\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(\"🎉🚀\".to_string()));\n        let path = parse_json_path(\"$.nested.flag\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(\"🇺🇸\".to_string()));\n    }\n\n    #[test]\n    fn test_unicode_bracket_notation() {\n        let json = r#\"{\"键\":\"值\"}\"#;\n        let path = parse_json_path(\"$['键']\").unwrap();\n        assert_eq!(evaluate_path(json, &path), Some(\"值\".to_string()));\n    }\n\n    #[test]\n    fn test_unicode_mixed_scripts() {\n        let json = r#\"{\"data\":\"café résumé naïve\"}\"#;\n        let path = parse_json_path(\"$.data\").unwrap();\n        assert_eq!(\n            evaluate_path(json, &path),\n            Some(\"café résumé naïve\".to_string())\n        );\n    }\n\n    #[test]\n    fn test_unicode_wildcard() {\n        let json = r#\"[{\"名\":\"Alice\"},{\"名\":\"太郎\"}]\"#;\n        let path = parse_json_path(\"$[*].名\").unwrap();\n        assert_eq!(\n            evaluate_path(json, &path),\n            Some(r#\"[\"Alice\",\"太郎\"]\"#.to_string())\n        );\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/string_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod contains;\nmod get_json_object;\nmod split;\nmod substring;\n\npub use contains::SparkContains;\npub use get_json_object::spark_get_json_object;\npub use split::spark_split;\npub use substring::SubstringExpr;\n"
  },
  {
    "path": "native/spark-expr/src/string_funcs/split.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, ArrayRef, GenericStringArray, ListArray};\nuse arrow::datatypes::{DataType, Field};\nuse datafusion::common::{\n    cast::as_generic_string_array, exec_err, DataFusionError, Result as DataFusionResult,\n    ScalarValue,\n};\nuse datafusion::logical_expr::ColumnarValue;\nuse regex::Regex;\nuse std::sync::Arc;\n\n/// Spark-compatible split function\n/// Splits a string around matches of a regex pattern with optional limit\n///\n/// Arguments:\n/// - string: The string to split\n/// - pattern: The regex pattern to split on\n/// - limit (optional): Controls the number of splits\n///   - limit > 0: At most limit-1 splits, array length <= limit\n///   - limit = 0: As many splits as possible, trailing empty strings removed\n///   - limit < 0: As many splits as possible, trailing empty strings kept\npub fn spark_split(args: &[ColumnarValue]) -> DataFusionResult<ColumnarValue> {\n    if args.len() < 2 || args.len() > 3 {\n        return exec_err!(\n            \"split expects 2 or 3 arguments (string, pattern, [limit]), got {}\",\n            args.len()\n        );\n    }\n\n    // Get limit parameter (default to -1 if not provided)\n    let limit = if args.len() == 3 {\n        match &args[2] {\n            ColumnarValue::Scalar(ScalarValue::Int32(Some(l))) => *l,\n            ColumnarValue::Scalar(ScalarValue::Int32(None)) => {\n                // NULL limit, return NULL\n                return Ok(ColumnarValue::Scalar(ScalarValue::Null));\n            }\n            _ => {\n                return exec_err!(\"split limit argument must be an Int32 scalar\");\n            }\n        }\n    } else {\n        -1\n    };\n\n    match (&args[0], &args[1]) {\n        (ColumnarValue::Array(string_array), ColumnarValue::Scalar(ScalarValue::Utf8(pattern)))\n        | (\n            ColumnarValue::Array(string_array),\n            ColumnarValue::Scalar(ScalarValue::LargeUtf8(pattern)),\n        ) => {\n            if pattern.is_none() {\n                // NULL pattern returns NULL\n                let null_array = new_null_list_array(string_array.len());\n                return Ok(ColumnarValue::Array(null_array));\n            }\n\n            let pattern_str = pattern.as_ref().unwrap();\n            split_array(string_array.as_ref(), pattern_str, limit)\n        }\n        (ColumnarValue::Scalar(ScalarValue::Utf8(string)), ColumnarValue::Scalar(pattern_val))\n        | (\n            ColumnarValue::Scalar(ScalarValue::LargeUtf8(string)),\n            ColumnarValue::Scalar(pattern_val),\n        ) => {\n            if string.is_none() {\n                return Ok(ColumnarValue::Scalar(ScalarValue::Null));\n            }\n\n            let pattern_str = 
match pattern_val {\n                ScalarValue::Utf8(Some(p)) | ScalarValue::LargeUtf8(Some(p)) => p,\n                ScalarValue::Utf8(None) | ScalarValue::LargeUtf8(None) => {\n                    return Ok(ColumnarValue::Scalar(ScalarValue::Null));\n                }\n                _ => {\n                    return exec_err!(\"split pattern must be a string\");\n                }\n            };\n\n            let result = split_string(string.as_ref().unwrap(), pattern_str, limit)?;\n            let string_array = GenericStringArray::<i32>::from(result);\n            let list_array = create_list_array(Arc::new(string_array));\n\n            Ok(ColumnarValue::Scalar(ScalarValue::List(Arc::new(\n                list_array,\n            ))))\n        }\n        _ => exec_err!(\"split expects (array, scalar) or (scalar, scalar) arguments\"),\n    }\n}\n\nfn split_array(\n    string_array: &dyn arrow::array::Array,\n    pattern: &str,\n    limit: i32,\n) -> DataFusionResult<ColumnarValue> {\n    // Compile regex once for the entire array\n    let regex = Regex::new(pattern).map_err(|e| {\n        DataFusionError::Execution(format!(\"Invalid regex pattern '{}': {}\", pattern, e))\n    })?;\n\n    let string_array = match string_array.data_type() {\n        DataType::Utf8 => as_generic_string_array::<i32>(string_array)?,\n        DataType::LargeUtf8 => {\n            // Convert LargeUtf8 to Utf8 for processing\n            let large_array = as_generic_string_array::<i64>(string_array)?;\n            return split_large_string_array(large_array, &regex, limit);\n        }\n        _ => {\n            return exec_err!(\n                \"split expects Utf8 or LargeUtf8 string array, got {:?}\",\n                string_array.data_type()\n            );\n        }\n    };\n\n    // Build the result ListArray\n    let mut offsets: Vec<i32> = Vec::with_capacity(string_array.len() + 1);\n    let mut values: Vec<String> = Vec::new();\n    let mut null_buffer_builder = arrow::array::BooleanBufferBuilder::new(string_array.len());\n    offsets.push(0);\n\n    for i in 0..string_array.len() {\n        if string_array.is_null(i) {\n            // NULL input produces NULL in result (Spark behavior)\n            offsets.push(offsets[i]);\n            null_buffer_builder.append(false); // false = NULL\n        } else {\n            let string_val = string_array.value(i);\n            let parts = split_with_regex(string_val, &regex, limit);\n            values.extend(parts);\n            offsets.push(values.len() as i32);\n            null_buffer_builder.append(true); // true = valid\n        }\n    }\n\n    let values_array = Arc::new(GenericStringArray::<i32>::from(values)) as ArrayRef;\n    let field = Arc::new(Field::new(\"item\", DataType::Utf8, false));\n    let nulls = arrow::buffer::NullBuffer::new(null_buffer_builder.finish());\n    let list_array = ListArray::new(\n        field,\n        arrow::buffer::OffsetBuffer::new(offsets.into()),\n        values_array,\n        Some(nulls),\n    );\n\n    Ok(ColumnarValue::Array(Arc::new(list_array)))\n}\n\nfn split_large_string_array(\n    string_array: &GenericStringArray<i64>,\n    regex: &Regex,\n    limit: i32,\n) -> DataFusionResult<ColumnarValue> {\n    let mut offsets: Vec<i32> = Vec::with_capacity(string_array.len() + 1);\n    let mut values: Vec<String> = Vec::new();\n    let mut null_buffer_builder = arrow::array::BooleanBufferBuilder::new(string_array.len());\n    offsets.push(0);\n\n    for i in 0..string_array.len() {\n        if 
string_array.is_null(i) {\n            // NULL input produces NULL in result (Spark behavior)\n            offsets.push(offsets[i]);\n            null_buffer_builder.append(false); // false = NULL\n        } else {\n            let string_val = string_array.value(i);\n            let parts = split_with_regex(string_val, regex, limit);\n            values.extend(parts);\n            offsets.push(values.len() as i32);\n            null_buffer_builder.append(true); // true = valid\n        }\n    }\n\n    let values_array = Arc::new(GenericStringArray::<i32>::from(values)) as ArrayRef;\n    let field = Arc::new(Field::new(\"item\", DataType::Utf8, false));\n    let nulls = arrow::buffer::NullBuffer::new(null_buffer_builder.finish());\n    let list_array = ListArray::new(\n        field,\n        arrow::buffer::OffsetBuffer::new(offsets.into()),\n        values_array,\n        Some(nulls),\n    );\n\n    Ok(ColumnarValue::Array(Arc::new(list_array)))\n}\n\nfn split_string(string: &str, pattern: &str, limit: i32) -> DataFusionResult<Vec<String>> {\n    let regex = Regex::new(pattern).map_err(|e| {\n        DataFusionError::Execution(format!(\"Invalid regex pattern '{}': {}\", pattern, e))\n    })?;\n\n    Ok(split_with_regex(string, &regex, limit))\n}\n\nfn split_with_regex(string: &str, regex: &Regex, limit: i32) -> Vec<String> {\n    if limit == 0 {\n        // limit = 0: split as many times as possible, discard trailing empty strings\n        let mut parts: Vec<String> = regex.split(string).map(|s| s.to_string()).collect();\n        // Remove trailing empty strings\n        while parts.last().is_some_and(|s| s.is_empty()) {\n            parts.pop();\n        }\n        if parts.is_empty() {\n            vec![\"\".to_string()]\n        } else {\n            parts\n        }\n    } else if limit > 0 {\n        // limit > 0: at most limit-1 splits (array length <= limit)\n        let mut parts: Vec<String> = Vec::new();\n        let mut last_end = 0;\n\n        for (count, mat) in regex.find_iter(string).enumerate() {\n            if count >= (limit - 1) as usize {\n                break;\n            }\n            parts.push(string[last_end..mat.start()].to_string());\n            last_end = mat.end();\n        }\n        // Add the remaining string\n        parts.push(string[last_end..].to_string());\n        parts\n    } else {\n        // limit < 0: split as many times as possible, keep trailing empty strings\n        regex.split(string).map(|s| s.to_string()).collect()\n    }\n}\n\nfn create_list_array(values: ArrayRef) -> ListArray {\n    let field = Arc::new(Field::new(\"item\", DataType::Utf8, false));\n    let offsets = vec![0i32, values.len() as i32];\n    ListArray::new(\n        field,\n        arrow::buffer::OffsetBuffer::new(offsets.into()),\n        values,\n        None,\n    )\n}\n\nfn new_null_list_array(len: usize) -> ArrayRef {\n    let field = Arc::new(Field::new(\"item\", DataType::Utf8, false));\n    let values = Arc::new(GenericStringArray::<i32>::from(Vec::<String>::new())) as ArrayRef;\n    let offsets = vec![0i32; len + 1];\n    let nulls = arrow::buffer::NullBuffer::new_null(len);\n\n    Arc::new(ListArray::new(\n        field,\n        arrow::buffer::OffsetBuffer::new(offsets.into()),\n        values,\n        Some(nulls),\n    ))\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n    use arrow::array::StringArray;\n\n    #[test]\n    fn test_split_basic() {\n        let string_array = Arc::new(StringArray::from(vec![\"a,b,c\", \"x,y,z\"])) as ArrayRef;\n        
let pattern = ColumnarValue::Scalar(ScalarValue::Utf8(Some(\",\".to_string())));\n        let args = vec![ColumnarValue::Array(string_array), pattern];\n\n        let result = spark_split(&args).unwrap();\n        // Should produce [[\"a\", \"b\", \"c\"], [\"x\", \"y\", \"z\"]]\n        assert!(matches!(result, ColumnarValue::Array(_)));\n    }\n\n    #[test]\n    fn test_split_with_limit() {\n        let string_array = Arc::new(StringArray::from(vec![\"a,b,c,d\"])) as ArrayRef;\n        let pattern = ColumnarValue::Scalar(ScalarValue::Utf8(Some(\",\".to_string())));\n        let limit = ColumnarValue::Scalar(ScalarValue::Int32(Some(2)));\n        let args = vec![ColumnarValue::Array(string_array), pattern, limit];\n\n        let result = spark_split(&args).unwrap();\n        // Should produce [[\"a\", \"b,c,d\"]]\n        assert!(matches!(result, ColumnarValue::Array(_)));\n    }\n\n    #[test]\n    fn test_split_regex() {\n        let parts = split_string(\"foo123bar456baz\", r\"\\d+\", -1).unwrap();\n        assert_eq!(parts, vec![\"foo\", \"bar\", \"baz\"]);\n    }\n\n    #[test]\n    fn test_split_limit_positive() {\n        let parts = split_string(\"a,b,c,d,e\", \",\", 3).unwrap();\n        assert_eq!(parts, vec![\"a\", \"b\", \"c,d,e\"]);\n    }\n\n    #[test]\n    fn test_split_limit_zero() {\n        let parts = split_string(\"a,b,c,,\", \",\", 0).unwrap();\n        assert_eq!(parts, vec![\"a\", \"b\", \"c\"]);\n    }\n\n    #[test]\n    fn test_split_limit_negative() {\n        let parts = split_string(\"a,b,c,,\", \",\", -1).unwrap();\n        assert_eq!(parts, vec![\"a\", \"b\", \"c\", \"\", \"\"]);\n    }\n\n    #[test]\n    fn test_split_with_nulls() {\n        // Test that NULL inputs produce NULL outputs (not empty arrays)\n        let string_array = Arc::new(StringArray::from(vec![\n            Some(\"a,b,c\"),\n            None,\n            Some(\"x,y\"),\n            None,\n        ])) as ArrayRef;\n        let pattern = ColumnarValue::Scalar(ScalarValue::Utf8(Some(\",\".to_string())));\n        let args = vec![ColumnarValue::Array(string_array), pattern];\n\n        let result = spark_split(&args).unwrap();\n        match result {\n            ColumnarValue::Array(arr) => {\n                let list_array = arr.as_any().downcast_ref::<ListArray>().unwrap();\n                assert_eq!(list_array.len(), 4);\n                // First row: valid [\"a\", \"b\", \"c\"]\n                assert!(!list_array.is_null(0));\n                // Second row: NULL\n                assert!(list_array.is_null(1));\n                // Third row: valid [\"x\", \"y\"]\n                assert!(!list_array.is_null(2));\n                // Fourth row: NULL\n                assert!(list_array.is_null(3));\n            }\n            _ => panic!(\"Expected Array result\"),\n        }\n    }\n\n    #[test]\n    fn test_split_empty_string() {\n        // Test that empty string input produces array with single empty string\n        let parts = split_string(\"\", \",\", -1).unwrap();\n        assert_eq!(parts, vec![\"\"]);\n    }\n}\n"
  },
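  {
    "path": "native/spark-expr/src/string_funcs/split_limit_example.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! A minimal standalone sketch of the Spark `split` limit semantics that\n//! `split_with_regex` in `split.rs` implements. This is an illustrative file\n//! (hypothetical name, not referenced from `mod.rs`) that assumes only the\n//! `regex` crate already used by this module; it mirrors the documented\n//! behavior rather than exercising the crate's public API.\n\nuse regex::Regex;\n\n/// Splits `s` around matches of `pattern` with Spark's limit semantics:\n/// limit > 0 keeps the remainder in the last element, limit = 0 drops\n/// trailing empty strings, and limit < 0 keeps them.\nfn split_like_spark(s: &str, pattern: &str, limit: i32) -> Vec<String> {\n    let re = Regex::new(pattern).unwrap();\n    if limit > 0 {\n        // At most limit-1 splits; the final element keeps the rest of the string.\n        let mut parts = Vec::new();\n        let mut last = 0;\n        for (n, m) in re.find_iter(s).enumerate() {\n            if n >= (limit - 1) as usize {\n                break;\n            }\n            parts.push(s[last..m.start()].to_string());\n            last = m.end();\n        }\n        parts.push(s[last..].to_string());\n        parts\n    } else {\n        let mut parts: Vec<String> = re.split(s).map(|p| p.to_string()).collect();\n        if limit == 0 {\n            // limit = 0: drop trailing empty strings, keeping at least one element.\n            while parts.len() > 1 && parts.last().is_some_and(|p| p.is_empty()) {\n                parts.pop();\n            }\n        }\n        parts\n    }\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    #[test]\n    fn test_limit_semantics() {\n        assert_eq!(split_like_spark(\"a,b,c,d\", \",\", 2), vec![\"a\", \"b,c,d\"]);\n        assert_eq!(split_like_spark(\"a,b,c,,\", \",\", 0), vec![\"a\", \"b\", \"c\"]);\n        assert_eq!(\n            split_like_spark(\"a,b,c,,\", \",\", -1),\n            vec![\"a\", \"b\", \"c\", \"\", \"\"]\n        );\n    }\n}\n"
  },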
  {
    "path": "native/spark-expr/src/string_funcs/substring.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n#![allow(deprecated)]\n\nuse crate::kernels::strings::substring;\nuse arrow::datatypes::{DataType, Schema};\nuse arrow::record_batch::RecordBatch;\nuse datafusion::common::DataFusionError;\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::{\n    any::Any,\n    fmt::{Display, Formatter},\n    hash::Hash,\n    sync::Arc,\n};\n\n#[derive(Debug, Eq)]\npub struct SubstringExpr {\n    pub child: Arc<dyn PhysicalExpr>,\n    pub start: i64,\n    pub len: u64,\n}\n\nimpl Hash for SubstringExpr {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n        self.start.hash(state);\n        self.len.hash(state);\n    }\n}\n\nimpl PartialEq for SubstringExpr {\n    fn eq(&self, other: &Self) -> bool {\n        self.child.eq(&other.child) && self.start.eq(&other.start) && self.len.eq(&other.len)\n    }\n}\n\nimpl SubstringExpr {\n    pub fn new(child: Arc<dyn PhysicalExpr>, start: i64, len: u64) -> Self {\n        Self { child, start, len }\n    }\n}\n\nimpl Display for SubstringExpr {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"Substring [start: {}, len: {}, child: {}]\",\n            self.start, self.len, self.child\n        )\n    }\n}\n\nimpl PhysicalExpr for SubstringExpr {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> datafusion::common::Result<DataType> {\n        self.child.data_type(input_schema)\n    }\n\n    fn nullable(&self, _: &Schema) -> datafusion::common::Result<bool> {\n        Ok(true)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> datafusion::common::Result<ColumnarValue> {\n        let arg = self.child.evaluate(batch)?;\n        match arg {\n            ColumnarValue::Array(array) => {\n                let result = substring(&array, self.start, self.len)?;\n\n                Ok(ColumnarValue::Array(result))\n            }\n            _ => Err(DataFusionError::Execution(\n                \"Substring(scalar) should be fold in Spark JVM side.\".to_string(),\n            )),\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(SubstringExpr::new(\n            Arc::clone(&children[0]),\n            self.start,\n            self.len,\n        )))\n    }\n}\n"
  },
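  {
    "path": "native/spark-expr/src/string_funcs/substring_example.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! A minimal usage sketch for `SubstringExpr` (hypothetical file, not wired\n//! into `mod.rs`). Following the crate's own test style, it only checks the\n//! shape of the result rather than exact values; the expression is reached\n//! through the `crate::string_funcs` re-export.\n\n#[cfg(test)]\nmod tests {\n    use crate::string_funcs::SubstringExpr;\n    use arrow::array::{Array, RecordBatch, StringArray};\n    use arrow::datatypes::{DataType, Field, Schema};\n    use datafusion::common::Result;\n    use datafusion::physical_expr::expressions::Column;\n    use datafusion::physical_expr::PhysicalExpr;\n    use datafusion::physical_plan::ColumnarValue;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_substring_of_string_column() -> Result<()> {\n        let schema = Schema::new(vec![Field::new(\"s\", DataType::Utf8, true)]);\n        let array = StringArray::from(vec![Some(\"hello\"), None, Some(\"comet\")]);\n        let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(array)])?;\n\n        // Spark substring is 1-based: start = 1, len = 3 takes a 3-char prefix.\n        let expr = SubstringExpr::new(Arc::new(Column::new(\"s\", 0)), 1, 3);\n        let ColumnarValue::Array(out) = expr.evaluate(&batch)? else {\n            unreachable!()\n        };\n        assert_eq!(out.len(), 3);\n        Ok(())\n    }\n}\n"
  },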
  {
    "path": "native/spark-expr/src/struct_funcs/create_named_struct.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::StructArray;\nuse arrow::datatypes::{DataType, Field, Schema};\nuse arrow::record_batch::RecordBatch;\nuse datafusion::common::Result as DataFusionResult;\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::{\n    any::Any,\n    fmt::{Display, Formatter},\n    hash::Hash,\n    sync::Arc,\n};\n\n#[derive(Debug, Hash, PartialEq, Eq)]\npub struct CreateNamedStruct {\n    values: Vec<Arc<dyn PhysicalExpr>>,\n    names: Vec<String>,\n}\n\nimpl CreateNamedStruct {\n    pub fn new(values: Vec<Arc<dyn PhysicalExpr>>, names: Vec<String>) -> Self {\n        Self { values, names }\n    }\n\n    fn fields(&self, schema: &Schema) -> DataFusionResult<Vec<Field>> {\n        self.values\n            .iter()\n            .zip(&self.names)\n            .map(|(expr, name)| {\n                let data_type = expr.data_type(schema)?;\n                let nullable = expr.nullable(schema)?;\n                Ok(Field::new(name, data_type, nullable))\n            })\n            .collect()\n    }\n}\n\nimpl PhysicalExpr for CreateNamedStruct {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> DataFusionResult<DataType> {\n        let fields = self.fields(input_schema)?;\n        Ok(DataType::Struct(fields.into()))\n    }\n\n    fn nullable(&self, _input_schema: &Schema) -> DataFusionResult<bool> {\n        Ok(false)\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> DataFusionResult<ColumnarValue> {\n        let values = self\n            .values\n            .iter()\n            .map(|expr| expr.evaluate(batch))\n            .collect::<datafusion::common::Result<Vec<_>>>()?;\n        let arrays = ColumnarValue::values_to_arrays(&values)?;\n        let fields = self.fields(&batch.schema())?;\n        Ok(ColumnarValue::Array(Arc::new(StructArray::new(\n            fields.into(),\n            arrays,\n            None,\n        ))))\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        self.values.iter().collect()\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(CreateNamedStruct::new(\n            children.clone(),\n            self.names.clone(),\n        )))\n    }\n}\n\nimpl Display for CreateNamedStruct {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"CreateNamedStruct [values: {:?}, names: {:?}]\",\n            self.values, 
self.names\n        )\n    }\n}\n\n#[cfg(test)]\nmod test {\n    use super::CreateNamedStruct;\n    use arrow::array::{Array, DictionaryArray, Int32Array, RecordBatch, StringArray};\n    use arrow::datatypes::{DataType, Field, Schema};\n    use datafusion::common::Result;\n    use datafusion::physical_expr::expressions::Column;\n    use datafusion::physical_expr::PhysicalExpr;\n    use datafusion::physical_plan::ColumnarValue;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_create_struct_from_dict_encoded_i32() -> Result<()> {\n        let keys = Int32Array::from(vec![0, 1, 2]);\n        let values = Int32Array::from(vec![0, 111, 233]);\n        let dict = DictionaryArray::try_new(keys, Arc::new(values))?;\n        let data_type = DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Int32));\n        let schema = Schema::new(vec![Field::new(\"a\", data_type, false)]);\n        let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(dict)])?;\n        let field_names = vec![\"a\".to_string()];\n        let x = CreateNamedStruct::new(vec![Arc::new(Column::new(\"a\", 0))], field_names);\n        let ColumnarValue::Array(x) = x.evaluate(&batch)? else {\n            unreachable!()\n        };\n        assert_eq!(3, x.len());\n        Ok(())\n    }\n\n    #[test]\n    fn test_create_struct_from_dict_encoded_string() -> Result<()> {\n        let keys = Int32Array::from(vec![0, 1, 2]);\n        let values = StringArray::from(vec![\"a\".to_string(), \"b\".to_string(), \"c\".to_string()]);\n        let dict = DictionaryArray::try_new(keys, Arc::new(values))?;\n        let data_type = DataType::Dictionary(Box::new(DataType::Int32), Box::new(DataType::Utf8));\n        let schema = Schema::new(vec![Field::new(\"a\", data_type, false)]);\n        let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(dict)])?;\n        let field_names = vec![\"a\".to_string()];\n        let x = CreateNamedStruct::new(vec![Arc::new(Column::new(\"a\", 0))], field_names);\n        let ColumnarValue::Array(x) = x.evaluate(&batch)? else {\n            unreachable!()\n        };\n        assert_eq!(3, x.len());\n        Ok(())\n    }\n}\n"
  },
  {
    "path": "native/spark-expr/src/struct_funcs/get_struct_field.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::{Array, StructArray};\nuse arrow::datatypes::{DataType, Field, Schema};\nuse arrow::record_batch::RecordBatch;\nuse datafusion::common::{DataFusionError, Result as DataFusionResult, ScalarValue};\nuse datafusion::logical_expr::ColumnarValue;\nuse datafusion::physical_expr::PhysicalExpr;\nuse std::{\n    any::Any,\n    fmt::{Display, Formatter},\n    hash::Hash,\n    sync::Arc,\n};\n\n#[derive(Debug, Eq)]\npub struct GetStructField {\n    child: Arc<dyn PhysicalExpr>,\n    ordinal: usize,\n}\n\nimpl Hash for GetStructField {\n    fn hash<H: std::hash::Hasher>(&self, state: &mut H) {\n        self.child.hash(state);\n        self.ordinal.hash(state);\n    }\n}\nimpl PartialEq for GetStructField {\n    fn eq(&self, other: &Self) -> bool {\n        self.child.eq(&other.child) && self.ordinal.eq(&other.ordinal)\n    }\n}\n\nimpl GetStructField {\n    pub fn new(child: Arc<dyn PhysicalExpr>, ordinal: usize) -> Self {\n        Self { child, ordinal }\n    }\n\n    fn child_field(&self, input_schema: &Schema) -> DataFusionResult<Arc<Field>> {\n        match self.child.data_type(input_schema)? 
{\n            DataType::Struct(fields) => Ok(Arc::clone(&fields[self.ordinal])),\n            data_type => Err(DataFusionError::Plan(format!(\n                \"Expect struct field, got {data_type:?}\"\n            ))),\n        }\n    }\n}\n\nimpl PhysicalExpr for GetStructField {\n    fn as_any(&self) -> &dyn Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    fn data_type(&self, input_schema: &Schema) -> DataFusionResult<DataType> {\n        Ok(self.child_field(input_schema)?.data_type().clone())\n    }\n\n    fn nullable(&self, input_schema: &Schema) -> DataFusionResult<bool> {\n        Ok(self.child_field(input_schema)?.is_nullable())\n    }\n\n    fn evaluate(&self, batch: &RecordBatch) -> DataFusionResult<ColumnarValue> {\n        let child_value = self.child.evaluate(batch)?;\n\n        match child_value {\n            ColumnarValue::Array(array) => {\n                let struct_array = array\n                    .as_any()\n                    .downcast_ref::<StructArray>()\n                    .expect(\"A struct is expected\");\n\n                Ok(ColumnarValue::Array(Arc::clone(\n                    struct_array.column(self.ordinal),\n                )))\n            }\n            ColumnarValue::Scalar(ScalarValue::Struct(struct_array)) => Ok(ColumnarValue::Array(\n                Arc::clone(struct_array.column(self.ordinal)),\n            )),\n            value => Err(DataFusionError::Execution(format!(\n                \"Expected a struct array, got {value:?}\"\n            ))),\n        }\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![&self.child]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> datafusion::common::Result<Arc<dyn PhysicalExpr>> {\n        Ok(Arc::new(GetStructField::new(\n            Arc::clone(&children[0]),\n            self.ordinal,\n        )))\n    }\n}\n\nimpl Display for GetStructField {\n    fn fmt(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        write!(\n            f,\n            \"GetStructField [child: {:?}, ordinal: {:?}]\",\n            self.child, self.ordinal\n        )\n    }\n}\n"
  },
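  {
    "path": "native/spark-expr/src/struct_funcs/get_struct_field_example.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! A minimal usage sketch for `GetStructField` (hypothetical file, not wired\n//! into `mod.rs`): builds a two-field struct column and extracts the field at\n//! ordinal 1, checking only the shape of the result in the style of the tests\n//! in `create_named_struct.rs`.\n\n#[cfg(test)]\nmod tests {\n    use crate::struct_funcs::GetStructField;\n    use arrow::array::{Array, ArrayRef, Int32Array, RecordBatch, StringArray, StructArray};\n    use arrow::datatypes::{DataType, Field, Fields, Schema};\n    use datafusion::common::Result;\n    use datafusion::physical_expr::expressions::Column;\n    use datafusion::physical_expr::PhysicalExpr;\n    use datafusion::physical_plan::ColumnarValue;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_get_second_field_of_struct() -> Result<()> {\n        let fields = Fields::from(vec![\n            Field::new(\"x\", DataType::Int32, true),\n            Field::new(\"y\", DataType::Utf8, true),\n        ]);\n        let struct_array = StructArray::new(\n            fields.clone(),\n            vec![\n                Arc::new(Int32Array::from(vec![1, 2, 3])) as ArrayRef,\n                Arc::new(StringArray::from(vec![\"a\", \"b\", \"c\"])) as ArrayRef,\n            ],\n            None,\n        );\n        let schema = Schema::new(vec![Field::new(\"s\", DataType::Struct(fields), true)]);\n        let batch = RecordBatch::try_new(Arc::new(schema), vec![Arc::new(struct_array)])?;\n\n        // Ordinal 1 selects the \"y\" field.\n        let expr = GetStructField::new(Arc::new(Column::new(\"s\", 0)), 1);\n        let ColumnarValue::Array(out) = expr.evaluate(&batch)? else {\n            unreachable!()\n        };\n        assert_eq!(out.len(), 3);\n        assert_eq!(out.data_type(), &DataType::Utf8);\n        Ok(())\n    }\n}\n"
  },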
  {
    "path": "native/spark-expr/src/struct_funcs/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nmod create_named_struct;\nmod get_struct_field;\n\npub use create_named_struct::CreateNamedStruct;\npub use get_struct_field::GetStructField;\n"
  },
  {
    "path": "native/spark-expr/src/test_common/file_util.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse std::{env, fs, io::Write, path::PathBuf};\n\n/// Returns file handle for a temp file in 'target' directory with a provided content\npub fn get_temp_file(file_name: &str, content: &[u8]) -> fs::File {\n    // build tmp path to a file in \"target/debug/testdata\"\n    let mut path_buf = env::current_dir().unwrap();\n    path_buf.push(\"target\");\n    path_buf.push(\"debug\");\n    path_buf.push(\"testdata\");\n    fs::create_dir_all(&path_buf).unwrap();\n    path_buf.push(file_name);\n\n    // write file content\n    let mut tmp_file = fs::File::create(path_buf.as_path()).unwrap();\n    tmp_file.write_all(content).unwrap();\n    tmp_file.sync_all().unwrap();\n\n    // return file handle for both read and write\n    let file = fs::OpenOptions::new()\n        .read(true)\n        .write(true)\n        .open(path_buf.as_path());\n    assert!(file.is_ok());\n    file.unwrap()\n}\n\npub fn get_temp_filename() -> PathBuf {\n    let mut path_buf = env::current_dir().unwrap();\n    path_buf.push(\"target\");\n    path_buf.push(\"debug\");\n    path_buf.push(\"testdata\");\n    fs::create_dir_all(&path_buf).unwrap();\n    path_buf.push(rand::random::<i16>().to_string());\n\n    path_buf\n}\n"
  },
  {
    "path": "native/spark-expr/src/test_common/mod.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\npub mod file_util;\n"
  },
  {
    "path": "native/spark-expr/src/timezone.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n/// Utils for timezone. This is basically from arrow-array::timezone (private).\nuse arrow::error::ArrowError;\nuse chrono::{\n    format::{parse, Parsed, StrftimeItems},\n    offset::TimeZone,\n    FixedOffset, LocalResult, NaiveDate, NaiveDateTime, Offset,\n};\nuse std::str::FromStr;\n\n/// Parses a fixed offset of the form \"+09:00\"\nfn parse_fixed_offset(tz: &str) -> Result<FixedOffset, ArrowError> {\n    let mut parsed = Parsed::new();\n\n    if let Ok(fixed_offset) =\n        parse(&mut parsed, tz, StrftimeItems::new(\"%:z\")).and_then(|_| parsed.to_fixed_offset())\n    {\n        return Ok(fixed_offset);\n    }\n\n    if let Ok(fixed_offset) =\n        parse(&mut parsed, tz, StrftimeItems::new(\"%#z\")).and_then(|_| parsed.to_fixed_offset())\n    {\n        return Ok(fixed_offset);\n    }\n\n    Err(ArrowError::ParseError(format!(\n        \"Invalid timezone \\\"{tz}\\\": Expected format [+-]XX:XX, [+-]XX, or [+-]XXXX\"\n    )))\n}\n\n/// An [`Offset`] for [`Tz`]\n#[derive(Debug, Copy, Clone)]\npub struct TzOffset {\n    tz: Tz,\n    offset: FixedOffset,\n}\n\nimpl std::fmt::Display for TzOffset {\n    fn fmt(&self, f: &mut std::fmt::Formatter<'_>) -> std::fmt::Result {\n        self.offset.fmt(f)\n    }\n}\n\nimpl Offset for TzOffset {\n    fn fix(&self) -> FixedOffset {\n        self.offset\n    }\n}\n\n/// An Arrow [`TimeZone`]\n#[derive(Debug, Copy, Clone)]\npub struct Tz(TzInner);\n\n#[derive(Debug, Copy, Clone)]\nenum TzInner {\n    Timezone(chrono_tz::Tz),\n    Offset(FixedOffset),\n}\n\nimpl FromStr for Tz {\n    type Err = ArrowError;\n\n    fn from_str(tz: &str) -> Result<Self, Self::Err> {\n        if tz.starts_with('+') || tz.starts_with('-') {\n            Ok(Self(TzInner::Offset(parse_fixed_offset(tz)?)))\n        } else {\n            Ok(Self(TzInner::Timezone(tz.parse().map_err(|e| {\n                ArrowError::ParseError(format!(\"Invalid timezone \\\"{tz}\\\": {e}\"))\n            })?)))\n        }\n    }\n}\n\nmacro_rules! 
tz {\n    ($s:ident, $tz:ident, $b:block) => {\n        match $s.0 {\n            TzInner::Timezone($tz) => $b,\n            TzInner::Offset($tz) => $b,\n        }\n    };\n}\n\nimpl TimeZone for Tz {\n    type Offset = TzOffset;\n\n    fn from_offset(offset: &Self::Offset) -> Self {\n        offset.tz\n    }\n\n    fn offset_from_local_date(&self, local: &NaiveDate) -> LocalResult<Self::Offset> {\n        tz!(self, tz, {\n            tz.offset_from_local_date(local).map(|x| TzOffset {\n                tz: *self,\n                offset: x.fix(),\n            })\n        })\n    }\n\n    fn offset_from_local_datetime(&self, local: &NaiveDateTime) -> LocalResult<Self::Offset> {\n        tz!(self, tz, {\n            tz.offset_from_local_datetime(local).map(|x| TzOffset {\n                tz: *self,\n                offset: x.fix(),\n            })\n        })\n    }\n\n    fn offset_from_utc_date(&self, utc: &NaiveDate) -> Self::Offset {\n        tz!(self, tz, {\n            TzOffset {\n                tz: *self,\n                offset: tz.offset_from_utc_date(utc).fix(),\n            }\n        })\n    }\n\n    fn offset_from_utc_datetime(&self, utc: &NaiveDateTime) -> Self::Offset {\n        tz!(self, tz, {\n            TzOffset {\n                tz: *self,\n                offset: tz.offset_from_utc_datetime(utc).fix(),\n            }\n        })\n    }\n}\n"
  },
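  {
    "path": "native/spark-expr/src/timezone_example.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! A minimal usage sketch for the `Tz` type in `timezone.rs` (hypothetical\n//! file, not registered as a module): shows that both fixed offsets and named\n//! IANA timezones parse, and that the resulting offsets behave like `chrono`\n//! zones.\n\n#[cfg(test)]\nmod tests {\n    use crate::timezone::Tz;\n    use chrono::{NaiveDate, Offset, TimeZone};\n\n    #[test]\n    fn test_parses_fixed_offset_and_named_zone() {\n        let utc = NaiveDate::from_ymd_opt(2024, 1, 1)\n            .unwrap()\n            .and_hms_opt(0, 0, 0)\n            .unwrap();\n\n        // Fixed offset of the form [+-]XX:XX.\n        let tz: Tz = \"+09:00\".parse().unwrap();\n        let offset = tz.offset_from_utc_datetime(&utc);\n        assert_eq!(offset.fix().local_minus_utc(), 9 * 3600);\n\n        // Named IANA timezone; January is UTC-8 in Los Angeles.\n        let tz: Tz = \"America/Los_Angeles\".parse().unwrap();\n        let offset = tz.offset_from_utc_datetime(&utc);\n        assert_eq!(offset.fix().local_minus_utc(), -8 * 3600);\n    }\n}\n"
  },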
  {
    "path": "native/spark-expr/src/unbound.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::array::RecordBatch;\nuse arrow::datatypes::{DataType, Schema};\nuse datafusion::common::{internal_err, Result};\nuse datafusion::physical_expr::PhysicalExpr;\nuse datafusion::physical_plan::ColumnarValue;\nuse std::fmt::{Display, Formatter};\nuse std::{hash::Hash, sync::Arc};\n\n/// This is similar to `UnKnownColumn` in DataFusion, but it has data type.\n/// This is only used when the column is not bound to a schema, for example, the\n/// inputs to aggregation functions in final aggregation. In the case, we cannot\n/// bind the aggregation functions to the input schema which is grouping columns\n/// and aggregate buffer attributes in Spark (DataFusion has different design).\n/// But when creating certain aggregation functions, we need to know its input\n/// data types. As `UnKnownColumn` doesn't have data type, we implement this\n/// `UnboundColumn` to carry the data type.\n#[derive(Debug, Hash, PartialEq, Eq, Clone)]\npub struct UnboundColumn {\n    name: String,\n    datatype: DataType,\n}\n\nimpl UnboundColumn {\n    /// Create a new unbound column expression\n    pub fn new(name: &str, datatype: DataType) -> Self {\n        Self {\n            name: name.to_owned(),\n            datatype,\n        }\n    }\n\n    /// Get the column name\n    pub fn name(&self) -> &str {\n        &self.name\n    }\n}\n\nimpl std::fmt::Display for UnboundColumn {\n    fn fmt(&self, f: &mut std::fmt::Formatter) -> std::fmt::Result {\n        write!(f, \"{}, datatype: {}\", self.name, self.datatype)\n    }\n}\n\nimpl PhysicalExpr for UnboundColumn {\n    /// Return a reference to Any that can be used for downcasting\n    fn as_any(&self) -> &dyn std::any::Any {\n        self\n    }\n\n    fn fmt_sql(&self, f: &mut Formatter<'_>) -> std::fmt::Result {\n        Display::fmt(self, f)\n    }\n\n    /// Get the data type of this expression, given the schema of the input\n    fn data_type(&self, _input_schema: &Schema) -> Result<DataType> {\n        Ok(self.datatype.clone())\n    }\n\n    /// Decide whether this expression is nullable, given the schema of the input\n    fn nullable(&self, _input_schema: &Schema) -> Result<bool> {\n        Ok(true)\n    }\n\n    /// Evaluate the expression\n    fn evaluate(&self, _batch: &RecordBatch) -> Result<ColumnarValue> {\n        internal_err!(\"UnboundColumn::evaluate() should not be called\")\n    }\n\n    fn children(&self) -> Vec<&Arc<dyn PhysicalExpr>> {\n        vec![]\n    }\n\n    fn with_new_children(\n        self: Arc<Self>,\n        _children: Vec<Arc<dyn PhysicalExpr>>,\n    ) -> Result<Arc<dyn PhysicalExpr>> {\n        Ok(self)\n    }\n}\n"
  },
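  {
    "path": "native/spark-expr/src/unbound_example.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n//! A minimal usage sketch for `UnboundColumn` (hypothetical file, not\n//! registered as a module): the expression reports its own data type without\n//! a schema binding, and evaluating it is an error by design.\n\n#[cfg(test)]\nmod tests {\n    use crate::unbound::UnboundColumn;\n    use arrow::array::RecordBatch;\n    use arrow::datatypes::{DataType, Schema};\n    use datafusion::physical_expr::PhysicalExpr;\n    use std::sync::Arc;\n\n    #[test]\n    fn test_carries_its_own_data_type() {\n        let col = UnboundColumn::new(\"agg_input\", DataType::Int64);\n        let empty = Schema::empty();\n        // The data type comes from the expression itself, not the schema.\n        assert_eq!(col.data_type(&empty).unwrap(), DataType::Int64);\n        assert!(col.nullable(&empty).unwrap());\n\n        // Evaluation is intentionally unsupported.\n        let batch = RecordBatch::new_empty(Arc::new(empty));\n        assert!(col.evaluate(&batch).is_err());\n    }\n}\n"
  },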
  {
    "path": "native/spark-expr/src/utils.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nuse arrow::datatypes::{DataType, TimeUnit, DECIMAL128_MAX_PRECISION};\nuse arrow::{\n    array::{\n        cast::as_primitive_array,\n        types::{Int32Type, TimestampMicrosecondType},\n        BooleanBufferBuilder,\n    },\n    buffer::BooleanBuffer,\n};\nuse datafusion::logical_expr::EmitTo;\nuse std::sync::Arc;\n\nuse crate::timezone::Tz;\nuse arrow::array::types::TimestampMillisecondType;\nuse arrow::array::TimestampMicrosecondArray;\nuse arrow::datatypes::{MAX_DECIMAL128_FOR_EACH_PRECISION, MIN_DECIMAL128_FOR_EACH_PRECISION};\nuse arrow::error::ArrowError;\nuse arrow::{\n    array::{as_dictionary_array, Array, ArrayRef, PrimitiveArray},\n    temporal_conversions::as_datetime,\n};\nuse chrono::{DateTime, LocalResult, NaiveDateTime, Offset, TimeZone};\n\n/// Preprocesses input arrays to add timezone information from Spark to Arrow array datatype or\n/// to apply timezone offset.\n//\n//  We consider the following cases:\n//\n//  | --------------------- | ------------ | ----------------- | -------------------------------- |\n//  | Conversion            | Input array  | Timezone          | Output array                     |\n//  | --------------------- | ------------ | ----------------- | -------------------------------- |\n//  | Timestamp ->          | Array in UTC | Timezone of input | A timestamp with the timezone    |\n//  |  Utf8 or Date32       |              |                   | offset applied and timezone      |\n//  |                       |              |                   | removed                          |\n//  | --------------------- | ------------ | ----------------- | -------------------------------- |\n//  | Timestamp ->          | Array in UTC | Timezone of input | Same as input array              |\n//  |  Timestamp  w/Timezone|              |                   |                                  |\n//  | --------------------- | ------------ | ----------------- | -------------------------------- |\n//  | Timestamp_ntz ->      | Array in     | Timezone of input | Same as input array              |\n//  |   Utf8 or Date32      | timezone     |                   |                                  |\n//  |                       | session local|                   |                                  |\n//  |                       | timezone     |                   |                                  |\n//  | --------------------- | ------------ | ----------------- | -------------------------------- |\n//  | Timestamp_ntz ->      | Array in     | Timezone of input |  Array in UTC and timezone       |\n//  |  Timestamp w/Timezone | session local|                   |  specified in input              |\n//  |                       | timezone     |          
         |                                  |\n//  | --------------------- | ------------ | ----------------- | -------------------------------- |\n//  | Timestamp(_ntz) ->    |                                                                     |\n//  |        Any other type |              Not Supported                                          |\n//  | --------------------- | ------------ | ----------------- | -------------------------------- |\n//\npub fn array_with_timezone(\n    array: ArrayRef,\n    timezone: String,\n    to_type: Option<&DataType>,\n) -> Result<ArrayRef, ArrowError> {\n    match array.data_type() {\n        DataType::Timestamp(TimeUnit::Millisecond, None) => {\n            assert!(!timezone.is_empty());\n            match to_type {\n                Some(DataType::Utf8) | Some(DataType::Date32) => Ok(array),\n                Some(DataType::Timestamp(_, Some(_))) => {\n                    timestamp_ntz_to_timestamp(array, timezone.as_str(), Some(timezone.as_str()))\n                }\n                Some(DataType::Timestamp(TimeUnit::Microsecond, None)) => {\n                    // Convert from Timestamp(Millisecond, None) to Timestamp(Microsecond, None)\n                    let millis_array = as_primitive_array::<TimestampMillisecondType>(&array);\n                    let micros_array: TimestampMicrosecondArray =\n                        arrow::compute::kernels::arity::unary(millis_array, |v| v * 1000);\n                    Ok(Arc::new(micros_array))\n                }\n                _ => {\n                    // Not supported\n                    Err(ArrowError::CastError(format!(\n                        \"Cannot convert from {:?} to {:?}\",\n                        array.data_type(),\n                        to_type.unwrap()\n                    )))\n                }\n            }\n        }\n        DataType::Timestamp(TimeUnit::Microsecond, None) => {\n            assert!(!timezone.is_empty());\n            match to_type {\n                Some(DataType::Utf8) | Some(DataType::Date32) => Ok(array),\n                Some(DataType::Timestamp(_, Some(_))) => {\n                    timestamp_ntz_to_timestamp(array, timezone.as_str(), Some(timezone.as_str()))\n                }\n                _ => {\n                    // Not supported\n                    Err(ArrowError::CastError(format!(\n                        \"Cannot convert from {:?} to {:?}\",\n                        array.data_type(),\n                        to_type.unwrap()\n                    )))\n                }\n            }\n        }\n        DataType::Timestamp(_, None) => {\n            assert!(!timezone.is_empty());\n            match to_type {\n                Some(DataType::Utf8) | Some(DataType::Date32) => Ok(array),\n                Some(DataType::Timestamp(_, Some(_))) => {\n                    timestamp_ntz_to_timestamp(array, timezone.as_str(), Some(timezone.as_str()))\n                }\n                _ => {\n                    // Not supported\n                    Err(ArrowError::CastError(format!(\n                        \"Cannot convert from {:?} to {:?}\",\n                        array.data_type(),\n                        to_type.unwrap()\n                    )))\n                }\n            }\n        }\n        DataType::Timestamp(TimeUnit::Microsecond, Some(_)) => {\n            assert!(!timezone.is_empty());\n            let array = as_primitive_array::<TimestampMicrosecondType>(&array);\n            let array_with_timezone = 
array.clone().with_timezone(timezone.clone());\n            let array = Arc::new(array_with_timezone) as ArrayRef;\n            match to_type {\n                Some(DataType::Utf8) | Some(DataType::Date32) => {\n                    pre_timestamp_cast(array, timezone)\n                }\n                _ => Ok(array),\n            }\n        }\n        DataType::Timestamp(TimeUnit::Millisecond, Some(_)) => {\n            assert!(!timezone.is_empty());\n            let array = as_primitive_array::<TimestampMillisecondType>(&array);\n            let array_with_timezone = array.clone().with_timezone(timezone.clone());\n            let array = Arc::new(array_with_timezone) as ArrayRef;\n            match to_type {\n                Some(DataType::Utf8) | Some(DataType::Date32) => {\n                    pre_timestamp_cast(array, timezone)\n                }\n                _ => Ok(array),\n            }\n        }\n        DataType::Dictionary(_, value_type)\n            if matches!(value_type.as_ref(), &DataType::Timestamp(_, _)) =>\n        {\n            let dict = as_dictionary_array::<Int32Type>(&array);\n            let array = as_primitive_array::<TimestampMicrosecondType>(dict.values());\n            let array_with_timezone =\n                array_with_timezone(Arc::new(array.clone()) as ArrayRef, timezone, to_type)?;\n            let dict = dict.with_values(array_with_timezone);\n            Ok(Arc::new(dict))\n        }\n        _ => Ok(array),\n    }\n}\n\nfn datetime_cast_err(value: i64) -> ArrowError {\n    ArrowError::CastError(format!(\n        \"Cannot convert TimestampMicrosecondType {value} to datetime. Comet only supports dates between Jan 1, 262145 BCE and Dec 31, 262143 CE\",\n    ))\n}\n\n/// Resolves a local datetime in the given timezone to an absolute DateTime,\n/// handling DST ambiguity and spring-forward gaps.\n/// Parameters:\n///     tz - timezone used to interpret local_datetime\n///     local_datetime - a naive local datetime to resolve\nfn resolve_local_datetime(tz: &Tz, local_datetime: NaiveDateTime) -> DateTime<Tz> {\n    match tz.from_local_datetime(&local_datetime) {\n        LocalResult::Single(dt) => dt,\n        LocalResult::Ambiguous(dt, _) => dt,\n        LocalResult::None => {\n            // Determine offset before time-change\n            let probe = local_datetime - chrono::Duration::hours(3);\n            let pre_offset = match tz.from_local_datetime(&probe) {\n                LocalResult::Single(dt) => dt.offset().fix(),\n                LocalResult::Ambiguous(dt, _) => dt.offset().fix(),\n                LocalResult::None => {\n                    // Cannot determine offset; fall back to UTC interpretation\n                    return local_datetime.and_utc().with_timezone(tz);\n                }\n            };\n            let offset_secs = pre_offset.local_minus_utc() as i64;\n\n            let utc_naive = local_datetime - chrono::Duration::seconds(offset_secs);\n            utc_naive.and_utc().with_timezone(tz)\n        }\n    }\n}\n\n/// Takes in a Timestamp(Microsecond, None) array and a timezone id, and returns\n/// a Timestamp(Microsecond, Some<_>) array.\n/// The understanding is that the input array has time in the timezone specified in the second\n/// argument.\n/// Parameters:\n///     array - input array of timestamp without timezone\n///     tz - timezone of the values in the input array\n///     to_timezone - timezone to change the input values to\nfn timestamp_ntz_to_timestamp(\n    array: ArrayRef,\n    tz: &str,\n    
to_timezone: Option<&str>,\n) -> Result<ArrayRef, ArrowError> {\n    assert!(!tz.is_empty());\n    match array.data_type() {\n        DataType::Timestamp(TimeUnit::Microsecond, None) => {\n            let array = as_primitive_array::<TimestampMicrosecondType>(&array);\n            let tz: Tz = tz.parse()?;\n            let array: PrimitiveArray<TimestampMicrosecondType> = array.try_unary(|value| {\n                as_datetime::<TimestampMicrosecondType>(value)\n                    .ok_or_else(|| datetime_cast_err(value))\n                    .map(|local_datetime| {\n                        let datetime = resolve_local_datetime(&tz, local_datetime);\n\n                        datetime.timestamp_micros()\n                    })\n            })?;\n            let array_with_tz = if let Some(to_tz) = to_timezone {\n                array.with_timezone(to_tz)\n            } else {\n                array\n            };\n            Ok(Arc::new(array_with_tz))\n        }\n        DataType::Timestamp(TimeUnit::Millisecond, None) => {\n            let array = as_primitive_array::<TimestampMillisecondType>(&array);\n            let tz: Tz = tz.parse()?;\n            let array: PrimitiveArray<TimestampMillisecondType> = array.try_unary(|value| {\n                as_datetime::<TimestampMillisecondType>(value)\n                    .ok_or_else(|| datetime_cast_err(value))\n                    .map(|local_datetime| {\n                        let datetime = resolve_local_datetime(&tz, local_datetime);\n\n                        datetime.timestamp_millis()\n                    })\n            })?;\n            let array_with_tz = if let Some(to_tz) = to_timezone {\n                array.with_timezone(to_tz)\n            } else {\n                array\n            };\n            Ok(Arc::new(array_with_tz))\n        }\n        _ => Ok(array),\n    }\n}\n\n/// This handles special pre-casting cases for Spark, e.g., Timestamp to String.\nfn pre_timestamp_cast(array: ArrayRef, timezone: String) -> Result<ArrayRef, ArrowError> {\n    assert!(!timezone.is_empty());\n    match array.data_type() {\n        DataType::Timestamp(_, _) => {\n            // Spark doesn't output timezone while casting timestamp to string, but arrow's cast\n            // kernel does if timezone exists. 
So we need to apply the timezone offset to the array's\n            // timestamp values and remove the timezone from the array datatype.\n            let array = as_primitive_array::<TimestampMicrosecondType>(&array);\n\n            let tz: Tz = timezone.parse()?;\n            let array: PrimitiveArray<TimestampMicrosecondType> = array.try_unary(|value| {\n                as_datetime::<TimestampMicrosecondType>(value)\n                    .ok_or_else(|| datetime_cast_err(value))\n                    .map(|datetime| {\n                        let offset = tz.offset_from_utc_datetime(&datetime).fix();\n                        let datetime = datetime + offset;\n                        datetime.and_utc().timestamp_micros()\n                    })\n            })?;\n\n            Ok(Arc::new(array))\n        }\n        _ => Ok(array),\n    }\n}\n\n/// Adapted from arrow-rs `validate_decimal_precision`, but returns bool\n/// instead of Err to avoid the cost of formatting the error strings, and is\n/// optimized to remove a memcpy that exists in the original function.\n/// We can remove this code once we upgrade to a version of arrow-rs that\n/// includes https://github.com/apache/arrow-rs/pull/6419\n#[inline]\npub fn is_valid_decimal_precision(value: i128, precision: u8) -> bool {\n    precision <= DECIMAL128_MAX_PRECISION\n        && value >= MIN_DECIMAL128_FOR_EACH_PRECISION[precision as usize]\n        && value <= MAX_DECIMAL128_FOR_EACH_PRECISION[precision as usize]\n}\n\n/// Build a boolean buffer from the state and reset the state, based on the emit_to\n/// strategy.\npub fn build_bool_state(state: &mut BooleanBufferBuilder, emit_to: &EmitTo) -> BooleanBuffer {\n    let bool_state: BooleanBuffer = state.finish();\n\n    match emit_to {\n        EmitTo::All => bool_state,\n        EmitTo::First(n) => {\n            state.append_buffer(&bool_state.slice(*n, bool_state.len() - n));\n            bool_state.slice(0, *n)\n        }\n    }\n}\n\n// These are borrowed from hashbrown crate:\n//   https://github.com/rust-lang/hashbrown/blob/master/src/raw/mod.rs\n\n// On stable we can use #[cold] to get an equivalent effect: this attribute\n// suggests that the function is unlikely to be called\n#[inline]\n#[cold]\npub fn cold() {}\n\n#[inline]\npub fn likely(b: bool) -> bool {\n    if !b {\n        cold();\n    }\n    b\n}\n#[inline]\npub fn unlikely(b: bool) -> bool {\n    if b {\n        cold();\n    }\n    b\n}\n\n#[cfg(test)]\nmod tests {\n    use super::*;\n\n    fn array_containing(local_datetime: &str) -> ArrayRef {\n        let dt = NaiveDateTime::parse_from_str(local_datetime, \"%Y-%m-%d %H:%M:%S\").unwrap();\n        let ts = dt.and_utc().timestamp_micros();\n        Arc::new(TimestampMicrosecondArray::from(vec![ts]))\n    }\n\n    fn micros_for(datetime: &str) -> i64 {\n        NaiveDateTime::parse_from_str(datetime, \"%Y-%m-%d %H:%M:%S\")\n            .unwrap()\n            .and_utc()\n            .timestamp_micros()\n    }\n\n    #[test]\n    fn test_build_bool_state() {\n        let mut builder = BooleanBufferBuilder::new(0);\n        builder.append_packed_range(0..16, &[0x42u8, 0x39u8]);\n\n        let mut first_nine = BooleanBufferBuilder::new(0);\n        first_nine.append_packed_range(0..9, &[0x42u8, 0x01u8]);\n        let first_nine = first_nine.finish();\n        let mut last = BooleanBufferBuilder::new(0);\n        last.append_packed_range(0..7, &[0x1cu8]);\n        let last = last.finish();\n\n        assert_eq!(\n            first_nine,\n            build_bool_state(&mut builder, 
&EmitTo::First(9))\n        );\n        assert_eq!(last, build_bool_state(&mut builder, &EmitTo::All));\n    }\n\n    #[test]\n    fn test_timestamp_ntz_to_timestamp_handles_non_existent_time() {\n        let output = timestamp_ntz_to_timestamp(\n            array_containing(\"2024-03-31 01:30:00\"),\n            \"Europe/London\",\n            None,\n        )\n        .unwrap();\n\n        assert_eq!(\n            as_primitive_array::<TimestampMicrosecondType>(&output).value(0),\n            micros_for(\"2024-03-31 01:30:00\")\n        );\n    }\n\n    #[test]\n    fn test_timestamp_ntz_to_timestamp_handles_ambiguous_time() {\n        let output = timestamp_ntz_to_timestamp(\n            array_containing(\"2024-10-27 01:30:00\"),\n            \"Europe/London\",\n            None,\n        )\n        .unwrap();\n\n        assert_eq!(\n            as_primitive_array::<TimestampMicrosecondType>(&output).value(0),\n            micros_for(\"2024-10-27 00:30:00\")\n        );\n    }\n}\n"
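The packed-bits test above is easiest to read through the `EmitTo` contract it exercises: `First(n)` drains the first `n` bits and re-queues the remainder in the builder, while `All` drains whatever is left. A minimal sketch of the same behavior with unpacked bits, assuming `build_bool_state` from the module above is in scope (the import paths for `BooleanBufferBuilder` and `EmitTo` are assumptions based on this crate's arrow and DataFusion dependencies):

```rust
use arrow::array::BooleanBufferBuilder;
use datafusion::logical_expr::EmitTo;

fn main() {
    let mut state = BooleanBufferBuilder::new(0);
    for bit in [true, false, true, true] {
        state.append(bit);
    }
    // First(2) drains [true, false]; [true, true] stays queued in `state`.
    let first_two = build_bool_state(&mut state, &EmitTo::First(2));
    assert_eq!(first_two.iter().collect::<Vec<_>>(), vec![true, false]);
    // All drains the remainder.
    let rest = build_bool_state(&mut state, &EmitTo::All);
    assert_eq!(rest.iter().collect::<Vec<_>>(), vec![true, true]);
}
```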
  },
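The `likely`/`unlikely` helpers above are meant to wrap branch conditions so the rare arm routes through the out-of-line `cold()` call. A minimal sketch of the intended pattern paired with `is_valid_decimal_precision` (the `checked_add_decimal` wrapper is hypothetical, not part of this crate):

```rust
// Hypothetical caller, assuming `is_valid_decimal_precision` and `unlikely`
// from the module above are in scope.
fn checked_add_decimal(a: i128, b: i128, precision: u8) -> Option<i128> {
    let sum = a.checked_add(b)?;
    // Overflow past the declared precision is the rare path, so hint it cold.
    if unlikely(!is_valid_decimal_precision(sum, precision)) {
        return None;
    }
    Some(sum)
}
```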
  {
    "path": "native/spark-expr/tests/spark_expr_reg.rs",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\n#[cfg(test)]\nmod tests {\n    use arrow::datatypes::DataType;\n    use datafusion::error::Result;\n    use datafusion::execution::session_state::SessionStateBuilder;\n    use datafusion::execution::FunctionRegistry;\n    use datafusion::prelude::SessionContext;\n    use datafusion_comet_spark_expr::create_comet_physical_fun;\n    use datafusion_comet_spark_expr::register_all_comet_functions;\n\n    #[tokio::test]\n    async fn test_udf_registration() -> Result<()> {\n        // 1. Setup session with UDF registration of existing Spark-compatible expression\n        let mut session_state = SessionStateBuilder::new().build();\n        let _ = session_state.register_udf(create_comet_physical_fun(\n            \"xxhash64\",\n            DataType::Int64,\n            &session_state,\n            None,\n        )?);\n        let ctx = SessionContext::new_with_state(session_state);\n\n        // 2. Execute SQL with literal values\n        let df = ctx\n            .sql(\"SELECT xxhash64('valid',64) AS hash_value\")\n            .await\n            .unwrap();\n        let results = df.collect().await?;\n\n        // 3. Ensure results are returned, i.e UDF is registered. 
no need to validate the actual value\n        assert!(!results.is_empty(), \"Results should not be empty\");\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_make_date_returns_null_for_invalid_input() -> Result<()> {\n        // Setup session with all Comet functions registered\n        let mut ctx = SessionContext::new();\n        register_all_comet_functions(&mut ctx)?;\n\n        // Test that make_date returns NULL for invalid month (0)\n        // DataFusion's built-in make_date would throw an error\n        let df = ctx.sql(\"SELECT make_date(2023, 0, 15)\").await?;\n        let results = df.collect().await?;\n\n        // Should return one row with NULL\n        assert_eq!(results.len(), 1);\n        assert_eq!(results[0].num_rows(), 1);\n\n        // The result should be NULL for invalid input\n        let column = results[0].column(0);\n        assert!(column.is_null(0), \"Expected NULL for invalid month\");\n\n        Ok(())\n    }\n\n    #[tokio::test]\n    async fn test_make_date_valid_input() -> Result<()> {\n        // Setup session with all Comet functions registered\n        let mut ctx = SessionContext::new();\n        register_all_comet_functions(&mut ctx)?;\n\n        // Test that make_date works for valid input\n        let df = ctx.sql(\"SELECT make_date(1970, 1, 1)\").await?;\n        let results = df.collect().await?;\n\n        assert_eq!(results.len(), 1);\n        assert_eq!(results[0].num_rows(), 1);\n\n        // Should return epoch date (1970-01-01 = day 0)\n        let column = results[0].column(0);\n        assert!(!column.is_null(0), \"Expected valid date for epoch\");\n\n        Ok(())\n    }\n}\n"
  },
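Because registration goes through DataFusion's `FunctionRegistry`, a Comet UDF registered under a name is also reachable from the DataFrame API, not just SQL. A minimal sketch under that assumption (`hash_column` is a hypothetical helper; API names are per recent DataFusion releases):

```rust
use datafusion::error::Result;
use datafusion::execution::FunctionRegistry;
use datafusion::prelude::*;

// Hypothetical helper: call the `xxhash64` UDF registered in the test above
// through the DataFrame API instead of a SQL string.
async fn hash_column(ctx: &SessionContext) -> Result<DataFrame> {
    let udf = ctx.udf("xxhash64")?; // look the UDF up by name
    let df = ctx.sql("SELECT 'valid' AS s").await?;
    df.select(vec![udf.call(vec![col("s"), lit(64i64)]).alias("hash_value")])
}
```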
  {
    "path": "pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>org.apache</groupId>\n    <artifactId>apache</artifactId>\n    <version>23</version>\n  </parent>\n  <groupId>org.apache.datafusion</groupId>\n  <artifactId>comet-parent-spark${spark.version.short}_${scala.binary.version}</artifactId>\n  <version>0.15.0-SNAPSHOT</version>\n  <packaging>pom</packaging>\n  <name>Comet Project Parent POM</name>\n\n  <modules>\n    <module>common</module>\n    <module>spark</module>\n    <module>spark-integration</module>\n    <module>fuzz-testing</module>\n  </modules>\n\n  <properties>\n    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>\n    <project.reporting.outputEncoding>UTF-8</project.reporting.outputEncoding>\n    <java.version>11</java.version>\n    <maven.compiler.source>${java.version}</maven.compiler.source>\n    <maven.compiler.target>${java.version}</maven.compiler.target>\n    <maven-compiler-plugin.version>3.11.0</maven-compiler-plugin.version>\n    <maven-assembly-plugin.version>3.6.0</maven-assembly-plugin.version>\n    <maven-shade-plugin.version>3.2.4</maven-shade-plugin.version>\n    <maven-surefire-plugin.version>3.5.4</maven-surefire-plugin.version>\n    <maven-source-plugin.version>3.3.0</maven-source-plugin.version>\n    <maven-enforcer-plugin.version>3.3.0</maven-enforcer-plugin.version>\n    <maven-failsafe-plugin.version>3.1.0</maven-failsafe-plugin.version>\n    <asm.version>9.1</asm.version>\n    <build-helper-maven-plugin.version>3.4.0</build-helper-maven-plugin.version>\n    <flatten-maven-plugin.version>1.3.0</flatten-maven-plugin.version>\n    <scalastyle-maven-plugin.version>1.0.0</scalastyle-maven-plugin.version>\n    <git-commit-id-maven-plugin.version>9.0.2</git-commit-id-maven-plugin.version>\n    <exec-maven-plugin.version>3.1.0</exec-maven-plugin.version>\n    <protoc-jar-maven-plugin.version>3.11.4</protoc-jar-maven-plugin.version>\n    <scalafix-maven-plugin.version>0.1.7_0.10.4</scalafix-maven-plugin.version>\n    <extra-enforcer-rules.version>1.7.0</extra-enforcer-rules.version>\n    <scalafmt.version>3.6.1</scalafmt.version>\n    <apache-rat-plugin.version>0.16.1</apache-rat-plugin.version>\n    <scala.version>2.12.18</scala.version>\n    <scala.binary.version>2.12</scala.binary.version>\n    <scala.plugin.version>4.9.6</scala.plugin.version>\n    <scalatest.version>3.2.16</scalatest.version>\n    <scalatest-maven-plugin.version>2.2.0</scalatest-maven-plugin.version>\n    
<spark.version>3.5.8</spark.version>\n    <spark.version.short>3.5</spark.version.short>\n    <spark.maven.scope>provided</spark.maven.scope>\n    <protobuf.version>3.25.5</protobuf.version>\n    <parquet.version>1.13.1</parquet.version>\n    <parquet.maven.scope>provided</parquet.maven.scope>\n    <hadoop.version>3.3.4</hadoop.version>\n    <arrow.version>18.3.0</arrow.version>\n    <codehaus.jackson.version>1.9.13</codehaus.jackson.version>\n    <spotless.version>2.43.0</spotless.version>\n    <jacoco.version>0.8.11</jacoco.version>\n    <semanticdb.version>4.8.8</semanticdb.version>\n    <slf4j.version>2.0.7</slf4j.version>\n    <guava.version>33.2.1-jre</guava.version>\n    <testcontainers.version>1.21.0</testcontainers.version>\n    <amazon-awssdk-v2.version>2.31.51</amazon-awssdk-v2.version>\n    <jni.dir>${project.basedir}/../native/target/debug</jni.dir>\n    <platform>darwin</platform>\n    <arch>x86_64</arch>\n    <comet.shade.packageName>org.apache.comet.shaded</comet.shade.packageName>\n    <!-- Used by some tests inherited from Spark to get project root directory -->\n    <spark.test.home>${session.executionRootDirectory}</spark.test.home>\n\n    <!-- Reverse default (skip installation), and then enable only for child modules -->\n    <maven.deploy.skip>true</maven.deploy.skip>\n\n    <!-- For testing with JDK 17 -->\n    <extraJavaTestArgs>\n      -XX:+IgnoreUnrecognizedVMOptions\n      --add-opens=java.base/java.lang=ALL-UNNAMED\n      --add-opens=java.base/java.lang.invoke=ALL-UNNAMED\n      --add-opens=java.base/java.lang.reflect=ALL-UNNAMED\n      --add-opens=java.base/java.io=ALL-UNNAMED\n      --add-opens=java.base/java.net=ALL-UNNAMED\n      --add-opens=java.base/java.nio=ALL-UNNAMED\n      --add-opens=java.base/java.util=ALL-UNNAMED\n      --add-opens=java.base/java.util.concurrent=ALL-UNNAMED\n      --add-opens=java.base/java.util.concurrent.atomic=ALL-UNNAMED\n      --add-opens=java.base/jdk.internal.ref=ALL-UNNAMED\n      --add-opens=java.base/sun.nio.ch=ALL-UNNAMED\n      --add-opens=java.base/sun.nio.cs=ALL-UNNAMED\n      --add-opens=java.base/sun.security.action=ALL-UNNAMED\n      --add-opens=java.base/sun.util.calendar=ALL-UNNAMED\n      -Djdk.reflect.useDirectMethodHandle=false\n    </extraJavaTestArgs>\n    <argLine>-ea -Xmx4g -Xss4m ${extraJavaTestArgs}</argLine>\n    <shims.majorVerSrc>spark-3.x</shims.majorVerSrc>\n    <shims.minorVerSrc>spark-3.5</shims.minorVerSrc>\n  </properties>\n\n  <dependencyManagement>\n    <dependencies>\n      <!-- Spark dependencies -->\n      <dependency>\n        <groupId>org.apache.spark</groupId>\n        <artifactId>spark-sql_${scala.binary.version}</artifactId>\n        <version>${spark.version}</version>\n        <scope>${spark.maven.scope}</scope>\n        <exclusions>\n          <exclusion>\n            <groupId>org.apache.parquet</groupId>\n            <artifactId>parquet-hadoop</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>org.apache.parquet</groupId>\n            <artifactId>parquet-column</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>org.apache.parquet</groupId>\n            <artifactId>parquet-format-structures</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>com.google.guava</groupId>\n            <artifactId>guava</artifactId>\n          </exclusion>\n\n          <!-- We're using \"org.slf4j:jcl-over-slf4j\" -->\n          <exclusion>\n            <groupId>commons-logging</groupId>\n            
<artifactId>commons-logging</artifactId>\n          </exclusion>\n\n          <exclusion>\n            <groupId>org.apache.arrow</groupId>\n            <artifactId>*</artifactId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n      <dependency>\n        <groupId>org.apache.spark</groupId>\n        <artifactId>spark-catalyst_${scala.binary.version}</artifactId>\n        <version>${spark.version}</version>\n        <scope>${spark.maven.scope}</scope>\n        <exclusions>\n\n          <!-- We're using \"org.slf4j:jcl-over-slf4j\" -->\n          <exclusion>\n            <groupId>commons-logging</groupId>\n            <artifactId>commons-logging</artifactId>\n          </exclusion>\n\n          <!-- Comet uses arrow-memory-unsafe -->\n          <exclusion>\n            <groupId>org.apache.arrow</groupId>\n            <artifactId>*</artifactId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n\n      <!-- Arrow dependencies -->\n      <dependency>\n        <groupId>org.apache.arrow</groupId>\n        <artifactId>arrow-vector</artifactId>\n        <version>${arrow.version}</version>\n        <!-- Exclude the following in favor of those from Spark -->\n        <exclusions>\n          <exclusion>\n            <groupId>com.fasterxml.jackson.core</groupId>\n            <artifactId>jackson-annotations</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>com.fasterxml.jackson.core</groupId>\n            <artifactId>jackson-core</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>io.netty</groupId>\n            <artifactId>netty-common</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>com.google.code.findbugs</groupId>\n            <artifactId>jsr305</artifactId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n      <dependency>\n        <groupId>org.apache.arrow</groupId>\n        <artifactId>arrow-memory-unsafe</artifactId>\n        <version>${arrow.version}</version>\n      </dependency>\n      <dependency>\n        <groupId>org.apache.arrow</groupId>\n        <artifactId>arrow-c-data</artifactId>\n        <version>${arrow.version}</version>\n      </dependency>\n\n      <!-- Parquet dependencies -->\n      <dependency>\n        <groupId>org.apache.parquet</groupId>\n        <artifactId>parquet-column</artifactId>\n        <version>${parquet.version}</version>\n        <scope>${parquet.maven.scope}</scope>\n      </dependency>\n      <dependency>\n        <groupId>org.apache.parquet</groupId>\n        <artifactId>parquet-format-structures</artifactId>\n        <version>${parquet.version}</version>\n        <scope>${parquet.maven.scope}</scope>\n      </dependency>\n      <dependency>\n        <groupId>org.apache.parquet</groupId>\n        <artifactId>parquet-hadoop</artifactId>\n        <version>${parquet.version}</version>\n        <scope>${parquet.maven.scope}</scope>\n        <exclusions>\n          <!-- Exclude the following in favor of jakarta.annotation:jakarta.annotation-api -->\n          <exclusion>\n            <groupId>javax.annotation</groupId>\n            <artifactId>javax.annotation-api</artifactId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n      <dependency>\n        <groupId>org.apache.parquet</groupId>\n        <artifactId>parquet-avro</artifactId>\n        <version>${parquet.version}</version>\n        <scope>test</scope>\n      </dependency>\n      <dependency>\n        
<groupId>org.apache.parquet</groupId>\n        <artifactId>parquet-hadoop</artifactId>\n        <version>${parquet.version}</version>\n        <classifier>tests</classifier>\n        <scope>test</scope>\n        <exclusions>\n          <!-- Exclude the following in favor of jakarta.annotation:jakarta.annotation-api -->\n          <exclusion>\n            <groupId>javax.annotation</groupId>\n            <artifactId>javax.annotation-api</artifactId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n\n      <!-- Others -->\n      <dependency>\n        <groupId>org.scala-lang</groupId>\n        <artifactId>scala-library</artifactId>\n        <version>${scala.version}</version>\n      </dependency>\n      <dependency>\n        <groupId>org.scala-lang.modules</groupId>\n        <artifactId>scala-collection-compat_${scala.binary.version}</artifactId>\n        <version>2.12.0</version>\n      </dependency>\n      <dependency>\n        <groupId>com.google.protobuf</groupId>\n        <artifactId>protobuf-java</artifactId>\n        <version>${protobuf.version}</version>\n      </dependency>\n      <dependency>\n        <groupId>org.slf4j</groupId>\n        <artifactId>slf4j-api</artifactId>\n        <version>${slf4j.version}</version>\n        <scope>${spark.maven.scope}</scope>\n      </dependency>\n\n      <!-- Shaded deps marked as provided. These are promoted to compile scope\n           in the modules where we want the shaded classes to appear in the\n           associated jar. -->\n      <dependency>\n        <groupId>com.google.guava</groupId>\n        <artifactId>guava</artifactId>\n        <version>${guava.version}</version>\n      </dependency>\n      <!-- End of shaded deps -->\n\n      <!-- Test dependencies -->\n      <dependency>\n        <groupId>org.apache.spark</groupId>\n        <artifactId>spark-core_${scala.binary.version}</artifactId>\n        <version>${spark.version}</version>\n        <classifier>tests</classifier>\n        <scope>test</scope>\n        <exclusions>\n          <exclusion>\n            <groupId>org.apache.datafusion</groupId>\n            <artifactId>*</artifactId>\n          </exclusion>\n\n          <!-- We are using arrow-memory-unsafe -->\n          <exclusion>\n            <groupId>org.apache.arrow</groupId>\n            <artifactId>*</artifactId>\n          </exclusion>\n\n          <!-- We're using \"org.slf4j:jcl-over-slf4j\" -->\n          <exclusion>\n            <groupId>commons-logging</groupId>\n            <artifactId>commons-logging</artifactId>\n          </exclusion>\n\n          <exclusion>\n            <groupId>com.google.code.findbugs</groupId>\n            <artifactId>jsr305</artifactId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n      <dependency>\n        <groupId>org.apache.spark</groupId>\n        <artifactId>spark-catalyst_${scala.binary.version}</artifactId>\n        <version>${spark.version}</version>\n        <classifier>tests</classifier>\n        <scope>test</scope>\n        <exclusions>\n\n          <!-- We are using arrow-memory-unsafe -->\n          <exclusion>\n            <groupId>org.apache.arrow</groupId>\n            <artifactId>*</artifactId>\n          </exclusion>\n\n          <!-- We're using \"org.slf4j:jcl-over-slf4j\" -->\n          <exclusion>\n            <groupId>commons-logging</groupId>\n            <artifactId>commons-logging</artifactId>\n          </exclusion>\n\n          <exclusion>\n            <groupId>com.google.code.findbugs</groupId>\n            
<artifactId>jsr305</artifactId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n      <dependency>\n        <groupId>org.apache.spark</groupId>\n        <artifactId>spark-sql_${scala.binary.version}</artifactId>\n        <version>${spark.version}</version>\n        <classifier>tests</classifier>\n        <scope>test</scope>\n        <exclusions>\n          <exclusion>\n            <groupId>org.apache.parquet</groupId>\n            <artifactId>parquet-hadoop</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>org.apache.parquet</groupId>\n            <artifactId>parquet-column</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>org.apache.datafusion</groupId>\n            <artifactId>*</artifactId>\n          </exclusion>\n\n          <!-- We're using \"org.slf4j:jcl-over-slf4j\" -->\n          <exclusion>\n            <groupId>commons-logging</groupId>\n            <artifactId>commons-logging</artifactId>\n          </exclusion>\n\n          <exclusion>\n            <groupId>org.apache.arrow</groupId>\n            <artifactId>*</artifactId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n      <dependency>\n        <groupId>org.apache.spark</groupId>\n        <artifactId>spark-hive_${scala.binary.version}</artifactId>\n        <version>${spark.version}</version>\n        <scope>test</scope>\n        <exclusions>\n          <exclusion>\n            <groupId>org.apache.datafusion</groupId>\n            <artifactId>*</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>commons-logging</groupId>\n            <artifactId>commons-logging</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>org.apache.arrow</groupId>\n            <artifactId>*</artifactId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n      <dependency>\n        <groupId>junit</groupId>\n        <artifactId>junit</artifactId>\n        <version>4.13.2</version>\n        <scope>test</scope>\n      </dependency>\n      <dependency>\n        <groupId>org.assertj</groupId>\n        <artifactId>assertj-core</artifactId>\n        <version>3.27.7</version>\n        <scope>test</scope>\n      </dependency>\n      <dependency>\n        <groupId>org.scalatest</groupId>\n        <artifactId>scalatest_${scala.binary.version}</artifactId>\n        <version>${scalatest.version}</version>\n        <scope>test</scope>\n      </dependency>\n      <dependency>\n        <groupId>org.scala-lang</groupId>\n        <artifactId>scala-reflect</artifactId>\n        <version>${scala.version}</version>\n        <scope>test</scope>\n      </dependency>\n      <dependency>\n        <groupId>org.scalatestplus</groupId>\n        <artifactId>junit-4-13_${scala.binary.version}</artifactId>\n        <version>3.2.16.0</version>\n        <scope>test</scope>\n      </dependency>\n\n      <!-- For benchmarks to access S3 -->\n      <dependency>\n        <groupId>org.apache.spark</groupId>\n        <artifactId>spark-hadoop-cloud_${scala.binary.version}</artifactId>\n        <version>${spark.version}</version>\n        <classifier>tests</classifier>\n        <scope>test</scope>\n        <exclusions>\n          <!-- We're using hadoop-client -->\n          <exclusion>\n            <groupId>org.apache.hadoop.thirdparty</groupId>\n            <artifactId>hadoop-shaded-guava</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>org.apache.hadoop</groupId>\n           
 <artifactId>hadoop-annotations</artifactId>\n          </exclusion>\n          <exclusion>\n            <groupId>javax.xml.bind</groupId>\n            <artifactId>jaxb-api</artifactId>\n          </exclusion>\n\n          <!-- We're using \"org.slf4j:jcl-over-slf4j\" -->\n          <exclusion>\n            <groupId>commons-logging</groupId>\n            <artifactId>commons-logging</artifactId>\n          </exclusion>\n\n          <exclusion>\n            <groupId>org.apache.arrow</groupId>\n            <artifactId>*</artifactId>\n          </exclusion>\n        </exclusions>\n      </dependency>\n\n      <!-- TestContainers for testing reading Parquet on S3 -->\n      <dependency>\n        <groupId>org.testcontainers</groupId>\n        <artifactId>minio</artifactId>\n        <version>${testcontainers.version}</version>\n        <scope>test</scope>\n      </dependency>\n      <!--\n        AWS SDK modules for Iceberg REST catalog + S3 tests.\n        iceberg-spark-runtime treats the AWS SDK as provided scope, so tests\n        that exercise Iceberg's S3FileIO (via ResolvingFileIO) must supply these.\n        AwsProperties references all service client types in method signatures,\n        and Java serialization introspection resolves them at class-load time.\n      -->\n      <dependency>\n        <groupId>software.amazon.awssdk</groupId>\n        <artifactId>s3</artifactId>\n        <version>${amazon-awssdk-v2.version}</version>\n        <scope>test</scope>\n      </dependency>\n      <dependency>\n        <groupId>software.amazon.awssdk</groupId>\n        <artifactId>sts</artifactId>\n        <version>${amazon-awssdk-v2.version}</version>\n        <scope>test</scope>\n      </dependency>\n      <dependency>\n        <groupId>software.amazon.awssdk</groupId>\n        <artifactId>dynamodb</artifactId>\n        <version>${amazon-awssdk-v2.version}</version>\n        <scope>test</scope>\n      </dependency>\n      <dependency>\n        <groupId>software.amazon.awssdk</groupId>\n        <artifactId>glue</artifactId>\n        <version>${amazon-awssdk-v2.version}</version>\n        <scope>test</scope>\n      </dependency>\n      <dependency>\n        <groupId>software.amazon.awssdk</groupId>\n        <artifactId>kms</artifactId>\n        <version>${amazon-awssdk-v2.version}</version>\n        <scope>test</scope>\n      </dependency>\n\n      <dependency>\n        <groupId>org.codehaus.jackson</groupId>\n        <artifactId>jackson-mapper-asl</artifactId>\n        <version>${codehaus.jackson.version}</version>\n        <scope>test</scope>\n      </dependency>\n\n      <dependency>\n        <groupId>org.rogach</groupId>\n        <artifactId>scallop_${scala.binary.version}</artifactId>\n        <version>5.1.0</version>\n      </dependency>\n\n      <dependency>\n        <groupId>org.apache.hadoop</groupId>\n        <artifactId>hadoop-client-minicluster</artifactId>\n        <version>${hadoop.version}</version>\n        <scope>test</scope>\n      </dependency>\n\n    </dependencies>\n\n  </dependencyManagement>\n\n  <profiles>\n    <profile>\n      <id>release</id>\n      <properties>\n        <jni.dir>${project.basedir}/../native/target/release</jni.dir>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>Win-x86</id>\n      <activation>\n        <os>\n          <family>Windows</family>\n          <arch>x86</arch>\n        </os>\n      </activation>\n      <properties>\n        <platform>win32</platform>\n        <arch>x86_64</arch>\n      </properties>\n    </profile>\n\n    
<profile>\n      <id>Win-amd64</id>\n      <activation>\n        <os>\n          <family>Windows</family>\n          <arch>amd64</arch>\n        </os>\n      </activation>\n      <properties>\n        <platform>win32</platform>\n        <arch>amd64</arch>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>Darwin-x86</id>\n      <activation>\n        <os>\n          <family>mac</family>\n          <arch>x86</arch>\n        </os>\n      </activation>\n      <properties>\n        <platform>darwin</platform>\n        <arch>x86_64</arch>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>Darwin-aarch64</id>\n      <activation>\n        <os>\n          <family>mac</family>\n          <arch>aarch64</arch>\n        </os>\n      </activation>\n      <properties>\n        <platform>darwin</platform>\n        <arch>aarch64</arch>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>Linux-amd64</id>\n      <activation>\n        <os>\n          <family>Linux</family>\n          <arch>amd64</arch>\n        </os>\n      </activation>\n      <properties>\n        <platform>linux</platform>\n        <arch>amd64</arch>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>Linux-aarch64</id>\n      <activation>\n        <os>\n          <family>Linux</family>\n          <arch>aarch64</arch>\n        </os>\n      </activation>\n      <properties>\n        <platform>linux</platform>\n        <arch>aarch64</arch>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>spark-3.4</id>\n      <properties>\n        <scala.version>2.12.17</scala.version>\n        <spark.version>3.4.3</spark.version>\n        <spark.version.short>3.4</spark.version.short>\n        <parquet.version>1.13.1</parquet.version>\n        <slf4j.version>2.0.6</slf4j.version>\n        <shims.minorVerSrc>spark-3.4</shims.minorVerSrc>\n        <java.version>11</java.version>\n        <maven.compiler.source>${java.version}</maven.compiler.source>\n        <maven.compiler.target>${java.version}</maven.compiler.target>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>spark-3.5</id>\n      <properties>\n        <scala.version>2.12.18</scala.version>\n        <spark.version>3.5.8</spark.version>\n        <spark.version.short>3.5</spark.version.short>\n        <parquet.version>1.13.1</parquet.version>\n        <slf4j.version>2.0.7</slf4j.version>\n        <shims.minorVerSrc>spark-3.5</shims.minorVerSrc>\n        <java.version>11</java.version>\n        <maven.compiler.source>${java.version}</maven.compiler.source>\n        <maven.compiler.target>${java.version}</maven.compiler.target>\n      </properties>\n    </profile>\n\n    <profile>\n      <!-- FIXME: this is WIP. 
Tests may fail https://github.com/apache/datafusion-comet/issues/551 -->\n      <id>spark-4.0</id>\n      <properties>\n        <!-- Use Scala 2.13 by default -->\n        <scala.version>2.13.16</scala.version>\n        <scala.binary.version>2.13</scala.binary.version>\n        <spark.version>4.0.1</spark.version>\n        <spark.version.short>4.0</spark.version.short>\n        <parquet.version>1.15.2</parquet.version>\n        <semanticdb.version>4.13.6</semanticdb.version>\n        <slf4j.version>2.0.16</slf4j.version>\n        <shims.majorVerSrc>spark-4.0</shims.majorVerSrc>\n        <shims.minorVerSrc>not-needed-yet</shims.minorVerSrc>\n        <!-- Use jdk17 by default -->\n        <java.version>17</java.version>\n        <maven.compiler.source>${java.version}</maven.compiler.source>\n        <maven.compiler.target>${java.version}</maven.compiler.target>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>scala-2.12</id>\n    </profile>\n\n    <profile>\n      <id>scala-2.13</id>\n      <properties>\n        <scala.version>2.13.16</scala.version>\n        <scala.binary.version>2.13</scala.binary.version>\n        <semanticdb.version>4.13.6</semanticdb.version>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>jdk11</id>\n      <activation>\n        <jdk>11</jdk>\n      </activation>\n      <properties>\n        <java.version>11</java.version>\n        <maven.compiler.source>${java.version}</maven.compiler.source>\n        <maven.compiler.target>${java.version}</maven.compiler.target>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>jdk17</id>\n      <activation>\n        <jdk>17</jdk>\n      </activation>\n      <properties>\n        <java.version>17</java.version>\n        <maven.compiler.source>${java.version}</maven.compiler.source>\n        <maven.compiler.target>${java.version}</maven.compiler.target>\n      </properties>\n    </profile>\n\n    <profile>\n      <id>semanticdb</id>\n      <properties>\n        <scalastyle.skip>true</scalastyle.skip>\n        <spotless.check.skip>true</spotless.check.skip>\n        <enforcer.skip>true</enforcer.skip>\n      </properties>\n      <build>\n        <pluginManagement>\n          <plugins>\n            <plugin>\n              <groupId>net.alchim31.maven</groupId>\n              <artifactId>scala-maven-plugin</artifactId>\n              <version>${scala.plugin.version}</version>\n              <executions>\n                <execution>\n                  <goals>\n                    <goal>compile</goal>\n                    <goal>testCompile</goal>\n                  </goals>\n                </execution>\n              </executions>\n              <configuration>\n                <args>\n                  <arg>-Ywarn-unused</arg> <!-- if you need exactly RemoveUnused -->\n                </args>\n                <javacArgs>\n                  <javacArg>-source</javacArg>\n                  <javacArg>${java.version}</javacArg>\n                  <javacArg>-target</javacArg>\n                  <javacArg>${java.version}</javacArg>\n                  <javacArg>-Xlint:all,-serial,-path,-try</javacArg>\n                </javacArgs>\n                <compilerPlugins>\n                  <compilerPlugin>\n                    <groupId>org.scalameta</groupId>\n                    <artifactId>semanticdb-scalac_${scala.version}</artifactId>\n                    <version>${semanticdb.version}</version>\n                  </compilerPlugin>\n                </compilerPlugins>\n              </configuration>\n   
         </plugin>\n            <plugin>\n              <groupId>io.github.evis</groupId>\n              <artifactId>scalafix-maven-plugin_${scala.binary.version}</artifactId>\n              <version>${scalafix-maven-plugin.version}</version>\n            </plugin>\n          </plugins>\n        </pluginManagement>\n      </build>\n    </profile>\n    <profile>\n          <id>strict-warnings</id>\n          <build>\n              <plugins>\n                  <plugin>\n                      <groupId>net.alchim31.maven</groupId>\n                      <artifactId>scala-maven-plugin</artifactId>\n                      <configuration>\n                          <args>\n                              <arg>-deprecation</arg>\n                              <arg>-unchecked</arg>\n                              <arg>-feature</arg>\n                              <arg>-Xlint:_</arg>\n                              <arg>-Ywarn-dead-code</arg>\n                              <arg>-Ywarn-numeric-widen</arg>\n                              <arg>-Ywarn-value-discard</arg>\n                              <arg>-Ywarn-unused:imports,patvars,privates,locals,params,-implicits</arg>\n                              <arg>-Xfatal-warnings</arg>\n                          </args>\n                      </configuration>\n                  </plugin>\n              </plugins>\n          </build>\n    </profile>\n  </profiles>\n\n  <build>\n    <pluginManagement>\n      <plugins>\n        <plugin>\n          <groupId>net.alchim31.maven</groupId>\n          <artifactId>scala-maven-plugin</artifactId>\n          <version>${scala.plugin.version}</version>\n          <executions>\n            <execution>\n              <id>eclipse-add-source</id>\n              <goals>\n                <goal>add-source</goal>\n              </goals>\n            </execution>\n            <execution>\n              <id>scala-compile-first</id>\n              <phase>process-resources</phase>\n              <goals>\n                <goal>compile</goal>\n                <goal>add-source</goal>\n              </goals>\n            </execution>\n            <execution>\n              <id>scala-test-compile-first</id>\n              <phase>process-test-resources</phase>\n              <goals>\n                <goal>testCompile</goal>\n              </goals>\n            </execution>\n          </executions>\n          <configuration>\n            <scalaVersion>${scala.version}</scalaVersion>\n            <checkMultipleScalaVersions>true</checkMultipleScalaVersions>\n            <failOnMultipleScalaVersions>true</failOnMultipleScalaVersions>\n            <recompileMode>incremental</recompileMode>\n            <args>\n              <arg>-unchecked</arg>\n              <arg>-deprecation</arg>\n              <arg>-feature</arg>\n              <arg>-explaintypes</arg>\n              <arg>-Xlint:adapted-args</arg>\n            </args>\n            <jvmArgs>\n              <jvmArg>-Xms1024m</jvmArg>\n              <jvmArg>-Xmx1024m</jvmArg>\n            </jvmArgs>\n            <javacArgs>\n              <javacArg>-source</javacArg>\n              <javacArg>${maven.compiler.source}</javacArg>\n              <javacArg>-target</javacArg>\n              <javacArg>${maven.compiler.target}</javacArg>\n              <javacArg>-Xlint:all,-serial,-path,-try</javacArg>\n            </javacArgs>\n          </configuration>\n        </plugin>\n        <plugin>\n          <groupId>org.scalatest</groupId>\n          <artifactId>scalatest-maven-plugin</artifactId>\n          
<version>${scalatest-maven-plugin.version}</version>\n          <configuration>\n            <reportsDirectory>${project.build.directory}/surefire-reports</reportsDirectory>\n            <junitxml>.</junitxml>\n            <filereports>SparkTestSuite.txt</filereports>\n            <stdout>D</stdout>\n            <stderr/>\n            <tagsToExclude>org.apache.comet.IntegrationTestSuite</tagsToExclude>\n            <systemProperties>\n              <!-- emit test logs to target/unit-tests.log -->\n              <log4j.configurationFile>file:src/test/resources/log4j2.properties</log4j.configurationFile>\n              <java.awt.headless>true</java.awt.headless>\n              <java.io.tmpdir>${project.build.directory}/tmp</java.io.tmpdir>\n            </systemProperties>\n          </configuration>\n          <executions>\n            <execution>\n              <id>test</id>\n              <goals>\n                <goal>test</goal>\n              </goals>\n            </execution>\n          </executions>\n        </plugin>\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-shade-plugin</artifactId>\n          <version>${maven-shade-plugin.version}</version>\n          <dependencies>\n            <dependency>\n              <groupId>org.ow2.asm</groupId>\n              <artifactId>asm</artifactId>\n              <version>${asm.version}</version>\n            </dependency>\n            <dependency>\n              <groupId>org.ow2.asm</groupId>\n              <artifactId>asm-commons</artifactId>\n              <version>${asm.version}</version>\n            </dependency>\n          </dependencies>\n        </plugin>\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-surefire-plugin</artifactId>\n          <version>${maven-surefire-plugin.version}</version>\n          <configuration>\n            <systemPropertyVariables>\n              <log4j.configurationFile>file:src/test/resources/log4j2.properties</log4j.configurationFile>\n            </systemPropertyVariables>\n            <failIfNoSpecifiedTests>false</failIfNoSpecifiedTests>\n          </configuration>\n        </plugin>\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-source-plugin</artifactId>\n          <version>${maven-source-plugin.version}</version>\n          <configuration>\n            <attach>true</attach>\n          </configuration>\n          <executions>\n            <execution>\n              <id>create-source-jar</id>\n              <goals>\n                <goal>jar-no-fork</goal>\n                <goal>test-jar-no-fork</goal>\n              </goals>\n            </execution>\n          </executions>\n        </plugin>\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-compiler-plugin</artifactId>\n          <version>${maven-compiler-plugin.version}</version>\n          <configuration>\n            <source>${java.version}</source>\n            <target>${java.version}</target>\n            <skipMain>true</skipMain>\n            <skip>true</skip>\n          </configuration>\n        </plugin>\n        <plugin>\n          <groupId>org.apache.maven.plugins</groupId>\n          <artifactId>maven-failsafe-plugin</artifactId>\n          <version>${maven-failsafe-plugin.version}</version>\n        </plugin>\n        <plugin>\n          <groupId>com.diffplug.spotless</groupId>\n          
<artifactId>spotless-maven-plugin</artifactId>\n          <version>${spotless.version}</version>\n          <configuration>\n            <java>\n              <toggleOffOn />\n              <googleJavaFormat />\n              <removeUnusedImports />\n              <importOrder>\n                <order>java|javax,scala,org,org.apache,com,org.apache.comet,\\#,\\#org.apache.comet</order>\n              </importOrder>\n              <licenseHeader>\n                <file>${maven.multiModuleProjectDirectory}/dev/copyright/java-header.txt</file>\n              </licenseHeader>\n            </java>\n            <scala>\n              <includes>\n                <include>src/main/scala/**/*.scala</include>\n                <include>src/test/scala/**/*.scala</include>\n                <!-- Include spark shim sources -->\n                <include>src/main/spark-*/**/*.scala</include>\n                <include>src/test/spark-*/**/*.scala</include>\n              </includes>\n              <toggleOffOn />\n              <scalafmt>\n                <version>${scalafmt.version}</version>\n                <file>${maven.multiModuleProjectDirectory}/scalafmt.conf</file>\n              </scalafmt>\n              <licenseHeader>\n                <file>${maven.multiModuleProjectDirectory}/dev/copyright/scala-header.txt</file>\n              </licenseHeader>\n            </scala>\n          </configuration>\n        </plugin>\n        <plugin>\n          <groupId>org.codehaus.mojo</groupId>\n          <artifactId>flatten-maven-plugin</artifactId>\n          <version>${flatten-maven-plugin.version}</version>\n        </plugin>\n        <plugin>\n          <groupId>org.jacoco</groupId>\n          <artifactId>jacoco-maven-plugin</artifactId>\n          <version>${jacoco.version}</version>\n        </plugin>\n        <plugin>\n          <groupId>org.codehaus.mojo</groupId>\n          <artifactId>build-helper-maven-plugin</artifactId>\n          <version>${build-helper-maven-plugin.version}</version>\n        </plugin>\n      </plugins>\n    </pluginManagement>\n    <plugins>\n      <plugin>\n        <groupId>org.scalastyle</groupId>\n        <artifactId>scalastyle-maven-plugin</artifactId>\n        <version>${scalastyle-maven-plugin.version}</version>\n        <configuration>\n          <verbose>false</verbose>\n          <failOnViolation>true</failOnViolation>\n          <includeTestSourceDirectory>false</includeTestSourceDirectory>\n          <failOnWarning>false</failOnWarning>\n          <sourceDirectory>${basedir}/src/main/scala</sourceDirectory>\n          <testSourceDirectory>${basedir}/src/test/scala</testSourceDirectory>\n          <configLocation>${maven.multiModuleProjectDirectory}/dev/scalastyle-config.xml</configLocation>\n          <outputFile>${basedir}/target/scalastyle-output.xml</outputFile>\n          <inputEncoding>${project.build.sourceEncoding}</inputEncoding>\n          <outputEncoding>${project.reporting.outputEncoding}</outputEncoding>\n        </configuration>\n        <executions>\n          <execution>\n            <goals>\n              <goal>check</goal>\n            </goals>\n            <phase>compile</phase>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>com.diffplug.spotless</groupId>\n        <artifactId>spotless-maven-plugin</artifactId>\n        <executions>\n          <execution>\n            <goals>\n              <goal>check</goal>\n            </goals>\n            <phase>compile</phase>\n          </execution>\n        
</executions>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-source-plugin</artifactId>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-failsafe-plugin</artifactId>\n        <executions>\n          <execution>\n            <goals>\n              <goal>integration-test</goal>\n              <goal>verify</goal>\n            </goals>\n            <configuration>\n              <trimStackTrace>false</trimStackTrace>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.rat</groupId>\n        <artifactId>apache-rat-plugin</artifactId>\n        <version>${apache-rat-plugin.version}</version>\n        <executions>\n          <execution>\n            <phase>verify</phase>\n            <goals>\n              <goal>check</goal>\n            </goals>\n          </execution>\n        </executions>\n        <configuration>\n          <consoleOutput>true</consoleOutput>\n          <excludes>\n            <exclude>**/*.iml</exclude>\n            <exclude>**/*.log</exclude>\n            <exclude>**/*.md.vm</exclude>\n            <exclude>**/.classpath</exclude>\n            <exclude>**/.project</exclude>\n            <exclude>**/.settings/**</exclude>\n            <exclude>**/build/**</exclude>\n            <exclude>**/target/**</exclude>\n            <exclude>**/apache-spark/**</exclude>\n            <exclude>**/apache-iceberg/**</exclude>\n            <exclude>.dockerignore</exclude>\n            <exclude>.git/**</exclude>\n            <exclude>.github/**</exclude>\n            <exclude>.gitignore</exclude>\n            <exclude>.gitmodules</exclude>\n            <exclude>**/.idea/**</exclude>\n            <exclude>**/dependency-reduced-pom.xml</exclude>\n            <exclude>**/testdata/**</exclude>\n            <exclude>**/.lldbinit</exclude>\n            <exclude>rust-toolchain</exclude>\n            <exclude>Makefile</exclude>\n            <exclude>dev/Dockerfile*</exclude>\n            <exclude>dev/diffs/**</exclude>\n            <exclude>dev/deploy-file</exclude>\n            <exclude>**/test/resources/**</exclude>\n            <exclude>**/benchmarks/*.txt</exclude>\n            <exclude>**/inspections/*.txt</exclude>\n            <exclude>tpcds-kit/**</exclude>\n            <exclude>tpcds-sf-1/**</exclude>\n            <exclude>tpch/**</exclude>\n            <exclude>docs/*.txt</exclude>\n            <exclude>docs/logos/*.png</exclude>\n            <exclude>docs/logos/*.svg</exclude>\n            <exclude>docs/source/_static/images/**</exclude>\n            <exclude>docs/source/contributor-guide/*.svg</exclude>\n            <exclude>dev/release/rat_exclude_files.txt</exclude>\n            <exclude>dev/release/requirements.txt</exclude>\n            <exclude>native/proto/src/generated/**</exclude>\n            <exclude>benchmarks/tpc/queries/**</exclude>\n            <exclude>.claude/**</exclude>\n          </excludes>\n        </configuration>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-enforcer-plugin</artifactId>\n        <version>${maven-enforcer-plugin.version}</version>\n        <executions>\n          <execution>\n            <id>no-duplicate-declared-dependencies</id>\n            <goals>\n              <goal>enforce</goal>\n            </goals>\n            <configuration>\n              <rules>\n             
   <banCircularDependencies/>\n                <banDuplicatePomDependencyVersions/>\n                <banDuplicateClasses>\n                  <scopes>\n                    <scope>compile</scope>\n                    <scope>provided</scope>\n                  </scopes>\n                  <ignoreClasses>\n                    <ignoreClass>org.apache.spark.unused.UnusedStubClass</ignoreClass>\n                  </ignoreClasses>\n                  <dependencies>\n                    <dependency>\n                      <groupId>org.apache.spark</groupId>\n                      <artifactId>spark-sql_${scala.binary.version}</artifactId>\n                      <ignoreClasses>\n                        <!-- the following classes conflict between spark-sql and spark-sql:test -->\n                        <ignoreClass>javax.annotation.meta.TypeQualifier</ignoreClass>\n                        <ignoreClass>javax.annotation.Nonnull</ignoreClass>\n                        <ignoreClass>javax.annotation.meta.When</ignoreClass>\n                        <ignoreClass>javax.annotation.Nonnull$Checker</ignoreClass>\n                        <ignoreClass>javax.annotation.meta.TypeQualifierValidator</ignoreClass>\n                        <!-- this class is not properly excluded from comet-spark right now -->\n                        <ignoreClass>org.apache.parquet.filter2.predicate.SparkFilterApi</ignoreClass>\n                        <!-- we explicitly include a duplicate to allow older versions of Spark -->\n                        <!-- this can be removed once we no longer support Spark 3.x -->\n                        <ignoreClass>org.apache.spark.sql.ExtendedExplainGenerator</ignoreClass>\n                      </ignoreClasses>\n                    </dependency>\n                    <dependency>\n                      <groupId>com.google.code.findbugs</groupId>\n                      <artifactId>jsr305</artifactId>\n                      <ignoreClasses>\n                        <!-- the following classes conflict between spark-sql and findbugs -->\n                        <ignoreClass>javax.annotation.meta.TypeQualifier</ignoreClass>\n                        <ignoreClass>javax.annotation.Nonnull</ignoreClass>\n                        <ignoreClass>javax.annotation.meta.When</ignoreClass>\n                        <ignoreClass>javax.annotation.Nonnull$Checker</ignoreClass>\n                        <ignoreClass>javax.annotation.meta.TypeQualifierValidator</ignoreClass>\n                        <ignoreClass>javax.annotation.Nullable</ignoreClass>\n                        <ignoreClass>javax.annotation.meta.TypeQualifierNickname</ignoreClass>\n                      </ignoreClasses>\n                    </dependency>\n                    <dependency>\n                      <groupId>com.google.guava</groupId>\n                      <artifactId>guava</artifactId>\n                      <ignoreClasses>\n                        <ignoreClass>com.google.thirdparty.publicsuffix.TrieParser</ignoreClass>\n                        <ignoreClass>com.google.thirdparty.publicsuffix.PublicSuffixPatterns</ignoreClass>\n                        <ignoreClass>com.google.thirdparty.publicsuffix.PublicSuffixType</ignoreClass>\n                      </ignoreClasses>\n                    </dependency>\n                  </dependencies>\n                  <findAllDuplicates>true</findAllDuplicates>\n                  <ignoreWhenIdentical>true</ignoreWhenIdentical>\n\n                </banDuplicateClasses>\n              </rules>\n            
</configuration>\n          </execution>\n        </executions>\n        <dependencies>\n          <dependency>\n            <groupId>org.codehaus.mojo</groupId>\n            <artifactId>extra-enforcer-rules</artifactId>\n            <version>${extra-enforcer-rules.version}</version>\n          </dependency>\n        </dependencies>\n      </plugin>\n      <plugin>\n        <groupId>org.codehaus.mojo</groupId>\n        <artifactId>flatten-maven-plugin</artifactId>\n        <executions>\n          <!-- enable flattening -->\n          <execution>\n            <id>flatten</id>\n            <phase>process-resources</phase>\n            <goals>\n              <goal>flatten</goal>\n            </goals>\n          </execution>\n          <!-- ensure proper cleanup -->\n          <execution>\n            <id>flatten.clean</id>\n            <phase>clean</phase>\n            <goals>\n              <goal>clean</goal>\n            </goals>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>org.jacoco</groupId>\n        <artifactId>jacoco-maven-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>default-prepare-agent</id>\n            <goals>\n              <goal>prepare-agent</goal>\n            </goals>\n          </execution>\n          <execution>\n            <id>report</id>\n            <phase>test</phase>\n            <goals>\n              <goal>report</goal>\n            </goals>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n</project>\n"
  },
  {
    "path": "rust-toolchain.toml",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n#\n#   http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n[toolchain]\nchannel = \"stable\"\ncomponents = [\"rustfmt\", \"clippy\", \"rust-analyzer\"]\n"
  },
  {
    "path": "scalafmt.conf",
    "content": "// Licensed to the Apache Software Foundation (ASF) under one\n// or more contributor license agreements.  See the NOTICE file\n// distributed with this work for additional information\n// regarding copyright ownership.  The ASF licenses this file\n// to you under the Apache License, Version 2.0 (the\n// \"License\"); you may not use this file except in compliance\n// with the License.  You may obtain a copy of the License at\n//\n//   http://www.apache.org/licenses/LICENSE-2.0\n//\n// Unless required by applicable law or agreed to in writing,\n// software distributed under the License is distributed on an\n// \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n// KIND, either express or implied.  See the License for the\n// specific language governing permissions and limitations\n// under the License.\n\nalign = none\nalign.openParenDefnSite = false\nalign.openParenCallSite = false\nalign.tokens = []\nimportSelectors = \"singleLine\"\noptIn = {\n  configStyleArguments = false\n}\ndanglingParentheses.preset = false\ndocstrings.style = Asterisk\nmaxColumn = 98\nrunner.dialect = scala212\nversion = 3.6.1\n\nrewrite.rules = [Imports]\nrewrite.imports.sort = scalastyle\nrewrite.imports.groups = [\n  [\"java\\\\..*\", \"javax\\\\..*\"],\n  [\"scala\\\\..*\"],\n  [\"org\\\\..*\"],\n  [\"org\\\\.apache\\\\..*\"],\n  [\"com\\\\..*\"],\n  [\"org\\\\.apache\\\\.comet\\\\..*\"],\n]\n"
  },
  {
    "path": "spark/README.md",
    "content": "<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n# Apache Spark Plugin for Comet\n\nThis folder implements a plugin for Apache Spark to use Comet native library, using Spark's extension framework.\n"
  },
  {
    "path": "spark/inspections/CometTPCDSQueriesList-results.txt",
    "content": "Query: q1. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q1: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   +- BroadcastHashJoin\n      :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :  +- BroadcastHashJoin\n      :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :     :        +- HashAggregate\n      :     :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n      :     :     :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :     :     :                    +- BroadcastHashJoin\n      :     :     :                       :-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n      :     :     :                       :  :  +- Subquery\n      :     :     :                       :  :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n      :     :     :                       :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :                       :  :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n      :     :     :                       :  :              +- CometProject\n      :     :     :                       :  :                 +- CometFilter\n      :     :     :                       :  :                    +- CometScan parquet spark_catalog.default.store\n      :     :     :                       :  +- CometScan parquet spark_catalog.default.store_returns\n      :     :     :                       +- BroadcastExchange\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometScan parquet spark_catalog.default.date_dim\n      :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n      :     :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :           +- HashAggregate\n      :     :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n      :     :                    +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                          +- CometHashAggregate\n      :     :                             +- CometProject\n      :     :                                +- CometBroadcastHashJoin\n      :     :                                   :- CometFilter\n      :     :                                   :  +- CometScan parquet spark_catalog.default.store_returns\n      :     :                                   +- CometBroadcastExchange\n      :     :                                      +- CometProject\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- CometProject\n      :           +- CometFilter\n      :              +- CometScan parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- CometProject\n            +- CometFilter\n               +- CometScan parquet spark_catalog.default.customer\n\n\nQuery: q2. 
Comet Exec: Enabled (CometFilter, CometProject, CometUnion)\nQuery: q2: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n         :-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n         :  +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :     +- BroadcastHashJoin\n         :        :- HashAggregate\n         :        :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :        :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         :        :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :        :           +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n         :        :              :- CometUnion\n         :        :              :  :- CometProject\n         :        :              :  :  +- CometFilter\n         :        :              :  :     +- CometScan parquet spark_catalog.default.web_sales\n         :        :              :  +- CometProject\n         :        :              :     +- CometFilter\n         :        :              :        +- CometScan parquet spark_catalog.default.catalog_sales\n         :        :              +- BroadcastExchange\n         :        :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n         :        :                    +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n         :        :                       :  +- Subquery\n         :        :                       :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :        :                       :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :        :                       :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :        :                       :              +- CometProject\n         :        :                       :                 +- CometFilter\n         :        :                       :                    +- CometScan parquet spark_catalog.default.date_dim\n         :        :                       +- CometScan parquet spark_catalog.default.date_dim\n         :        +- BroadcastExchange\n         :           +- CometProject\n         :              +- CometFilter\n         :                 +- CometScan parquet spark_catalog.default.date_dim\n         +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  +- BroadcastHashJoin\n                     :- HashAggregate\n                     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                     :           +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                     :              :- CometUnion\n                     :              :  :- CometProject\n                     :              :  :  +- CometFilter\n                     :              :  :     +- CometScan parquet spark_catalog.default.web_sales\n                     :              :  +- CometProject\n                     :              :     +- CometFilter\n                     :              :        +- CometScan parquet spark_catalog.default.catalog_sales\n                     :              +- BroadcastExchange\n                     :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                     :                    +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n                     :                       :  +- Subquery\n                     :                       :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                     :                       :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     :                       :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                     :                       :              +- CometProject\n                     :                       :                 +- CometFilter\n                     :                       :                    +- CometScan parquet spark_catalog.default.date_dim\n                     :                       +- CometScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q3. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q3: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometBroadcastExchange\n               :     :  +- CometProject\n               :     :     +- CometFilter\n               :     :        +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometFilter\n               :        +- CometScan parquet spark_catalog.default.store_sales\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q4. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q4: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :     :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :     :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n      :     :     :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :     :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :     :  :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :     :     :  :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     :     :  :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :     :  :              +- CometHashAggregate\n      :     :     :     :  :                 +- CometProject\n      :     :     :     :  :                    +- CometBroadcastHashJoin\n      :     :     :     :  :                       :- CometProject\n      :     :     :     :  :                       :  +- CometBroadcastHashJoin\n      :     :     :     :  :                       :     :- CometBroadcastExchange\n      :     :     :     :  :                       :     :  +- CometProject\n      :     :     :     :  :                       :     :     +- CometFilter\n      :     :     :     :  :                       :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :     :     :  :                       :     +- CometFilter\n      :     :     :     :  :                       :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :     :     :  :                       +- CometBroadcastExchange\n      :     :     :     :  :                          +- CometFilter\n      :     :     :     :  :                             +- CometScan parquet spark_catalog.default.date_dim\n      :     :     :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled 
is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :     :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :     :              +- CometHashAggregate\n      :     :     :     :                 +- CometProject\n      :     :     :     :                    +- CometBroadcastHashJoin\n      :     :     :     :                       :- CometProject\n      :     :     :     :                       :  +- CometBroadcastHashJoin\n      :     :     :     :                       :     :- CometBroadcastExchange\n      :     :     :     :                       :     :  +- CometProject\n      :     :     :     :                       :     :     +- CometFilter\n      :     :     :     :                       :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :     :     :                       :     +- CometFilter\n      :     :     :     :                       :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :     :     :                       +- CometBroadcastExchange\n      :     :     :     :                          +- CometFilter\n      :     :     :     :                             +- CometScan parquet spark_catalog.default.date_dim\n      :     :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :     :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :                    +- CometHashAggregate\n      :     :     :                       +- CometProject\n      :     :     :                          +- CometBroadcastHashJoin\n      :     :     :                             :- CometProject\n      :     :     :                             :  +- CometBroadcastHashJoin\n      :     :     :                             :     :- CometBroadcastExchange\n      :     :     :                             :     :  +- CometProject\n      :     :     :                             :     :     +- CometFilter\n      :     :     :                             :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :     :                             :     +- CometFilter\n      :     :     :                             :        +- CometScan parquet spark_catalog.default.catalog_sales\n      :     :     :                             +- CometBroadcastExchange\n      :     :     :                                +- CometFilter\n      :     :     :                                   +- CometScan parquet spark_catalog.default.date_dim\n      :     :     +-  Sort [COMET: Sort is not native because the following children are not native 
(Exchange)]\n      :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                 +- CometHashAggregate\n      :     :                    +- CometProject\n      :     :                       +- CometBroadcastHashJoin\n      :     :                          :- CometProject\n      :     :                          :  +- CometBroadcastHashJoin\n      :     :                          :     :- CometBroadcastExchange\n      :     :                          :     :  +- CometProject\n      :     :                          :     :     +- CometFilter\n      :     :                          :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :                          :     +- CometFilter\n      :     :                          :        +- CometScan parquet spark_catalog.default.catalog_sales\n      :     :                          +- CometBroadcastExchange\n      :     :                             +- CometFilter\n      :     :                                +- CometScan parquet spark_catalog.default.date_dim\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                    +- CometHashAggregate\n      :                       +- CometProject\n      :                          +- CometBroadcastHashJoin\n      :                             :- CometProject\n      :                             :  +- CometBroadcastHashJoin\n      :                             :     :- CometBroadcastExchange\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometScan parquet spark_catalog.default.customer\n      :                             :     +- CometFilter\n      :                             :        +- CometScan parquet spark_catalog.default.web_sales\n      :                             +- CometBroadcastExchange\n      :                                +- CometFilter\n      :                                   +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not 
native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastExchange\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometScan parquet spark_catalog.default.customer\n                           :     +- CometFilter\n                           :        +- CometScan parquet spark_catalog.default.web_sales\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q5. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject, CometUnion)\nQuery: q5: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Expand)]\n         +-  Expand [COMET: Expand is not native because the following children are not native (Union)]\n            +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n               :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometUnion\n               :              :     :  :- CometProject\n               :              :     :  :  +- CometFilter\n               :              :     :  :     +- CometScan parquet spark_catalog.default.store_sales\n               :              :     :  +- CometProject\n               :              :     :     +- CometFilter\n               :              :     :        +- CometScan parquet spark_catalog.default.store_returns\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan parquet spark_catalog.default.date_dim\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometFilter\n               :                       +- CometScan parquet spark_catalog.default.store\n               :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   
            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometUnion\n               :              :     :  :- CometProject\n               :              :     :  :  +- CometFilter\n               :              :     :  :     +- CometScan parquet spark_catalog.default.catalog_sales\n               :              :     :  +- CometProject\n               :              :     :     +- CometFilter\n               :              :     :        +- CometScan parquet spark_catalog.default.catalog_returns\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan parquet spark_catalog.default.date_dim\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometFilter\n               :                       +- CometScan parquet spark_catalog.default.catalog_page\n               +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometUnion\n                              :     :  :- CometProject\n                              :     :  :  +- CometFilter\n                              :     :  :     +- CometScan parquet spark_catalog.default.web_sales\n                              :     :  +- CometProject\n                              :     :     +- CometBroadcastHashJoin\n                              :     :        :- CometBroadcastExchange\n                              :     :        :  +- CometFilter\n                              :     :        :     +- CometScan parquet spark_catalog.default.web_returns\n                              :     :        +- CometFilter\n                              :     :           +- CometScan parquet spark_catalog.default.web_sales\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.date_dim\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan parquet spark_catalog.default.web_site\n\n\nQuery: q6. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q6: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n   +- HashAggregate\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometBroadcastExchange\n                  :     :     :     :  +- CometProject\n                  :     :     :     :     +- CometFilter\n                  :     :     :     :        +- CometScan parquet spark_catalog.default.customer_address\n                  :     :     :     +- CometFilter\n                  :     :     :        +- CometScan parquet spark_catalog.default.customer\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan parquet spark_catalog.default.store_sales\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              :  +- Subquery\n                  :              :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  :              :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :              :           +- CometHashAggregate\n                  :              :              +- CometProject\n                  :              :                 +- CometFilter\n                  :              :                    +- CometScan parquet spark_catalog.default.date_dim\n                  :              +- CometScan parquet spark_catalog.default.date_dim\n                  +- BroadcastExchange\n                     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                           :- CometFilter\n                           :  +- CometScan parquet spark_catalog.default.item\n                           +- BroadcastExchange\n                              +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                               
     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       +- CometHashAggregate\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q7. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometFilter\n               :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometProject\n               :     :     :           +- CometFilter\n               :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.item\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.promotion\n\n\nQuery: q8. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q8: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.store\n               +- BroadcastExchange\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (SortMergeJoin)]\n                           +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                              :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                              :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              :     +- CometProject\n                              :        +- CometFilter\n                              :           +- CometScan parquet spark_catalog.default.customer_address\n                              +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                                       +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                                          +- HashAggregate\n                                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not 
enabled]\n                                                +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                                   +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                                      +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                                                         :- BroadcastExchange\n                                                         :  +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                                                         :     +-  Filter [COMET: getstructfield is not supported, xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n                                                         :        :  :- Subquery\n                                                         :        :  :  +-  Project [COMET: Project is not native because the following children are not native (ObjectHashAggregate)]\n                                                         :        :  :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                                                         :        :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                         :        :  :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                                                         :        :  :              +- CometProject\n                                                         :        :  :                 +- CometFilter\n                                                         :        :  :                    +- CometScan parquet spark_catalog.default.customer_address\n                                                         :        :  +- Subquery\n                                                         :        :     +-  Project [COMET: Project is not native because the following children are not native (ObjectHashAggregate)]\n                                                         :        :        +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                                                         :        :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                         :        :              +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                                                         :        :                 +- CometProject\n                                                         :        :                    +- CometFilter\n                                                         :        :                       +- CometScan parquet spark_catalog.default.customer_address\n                                                         :        +- CometScan parquet spark_catalog.default.customer_address\n                                                         +- CometProject\n                                                            +- 
CometFilter\n                                                               +- CometScan parquet spark_catalog.default.customer\n\n\nQuery: q9. Comet Exec: Enabled (CometFilter)\nQuery: q9: ExplainInfo:\n Project [COMET: getstructfield is not supported]\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native 
because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  
:  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  :- Subquery\n:  :  +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometFilter\n:  :                    +- CometScan parquet spark_catalog.default.store_sales\n:  +- Subquery\n:     +-  Project [COMET: Project is not native because the following children are not native (HashAggregate)]\n:        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:              +- CometHashAggregate\n:                 +- CometProject\n:                    +- CometFilter\n:                       +- CometScan parquet spark_catalog.default.store_sales\n+- CometFilter\n   +- CometScan parquet spark_catalog.default.reason\n\n\nQuery: q10. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q10: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :  +- BroadcastHashJoin\n               :     :-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               :     :  +-  Filter [COMET: Filter is not native because the following children are not native (SortMergeJoin)]\n               :     :     +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :        :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :        :  :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :        :  :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :  :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :  :  :     +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n               :     :        :  :  :        :  +- Subquery\n               :     :        :  :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n               :     :        :  :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :  :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n               :     :        :  :  :        :              +- CometProject\n               :     :        :  :  :        :                 +- CometFilter\n               :     :        :  :  :        :                    +- CometScan parquet spark_catalog.default.customer_address\n               :     :        :  :  :        +- CometScan parquet spark_catalog.default.customer\n               :     :        :  :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :  :        +- CometProject\n               :     :        :  :           +- CometBroadcastHashJoin\n               :     :        :  :              :- CometFilter\n               :     :        :  :              :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :        :  :              +- CometBroadcastExchange\n               :     :        :  :                 +- CometProject\n               :     :        :  :                    +- CometFilter\n               :     :        :  :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     :        :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :        +- CometProject\n               :     :        :           +- CometBroadcastHashJoin\n               :     :        :              :- CometFilter\n               :     :        :              :  +- CometScan parquet spark_catalog.default.web_sales\n               :     :        :              +- CometBroadcastExchange\n               :     :        :                 +- CometProject\n               :     :        :                    +- CometFilter\n               :     :        :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     :        +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :              +- CometProject\n               :     :                 +- CometBroadcastHashJoin\n               :     :                    :- CometFilter\n               :     :                    :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :                    +- CometBroadcastExchange\n              
 :     :                       +- CometProject\n               :     :                          +- CometFilter\n               :     :                             +- CometScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.customer_address\n               +- BroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.customer_demographics\n\n\nQuery: q11. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q11: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :     :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :              +- CometHashAggregate\n      :     :     :                 +- CometProject\n      :     :     :                    +- CometBroadcastHashJoin\n      :     :     :                       :- CometProject\n      :     :     :                       :  +- CometBroadcastHashJoin\n      :     :     :                       :     :- CometBroadcastExchange\n      :     :     :                       :     :  +- CometProject\n      :     :     :                       :     :     +- CometFilter\n      :     :     :                       :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :     :                       :     +- CometFilter\n      :     :     :                       :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :     :                       +- CometBroadcastExchange\n      :     :     :                          +- CometFilter\n      :     :     :                             +- CometScan parquet spark_catalog.default.date_dim\n      :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :        +-  Exchange [COMET: Comet shuffle is not 
enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                 +- CometHashAggregate\n      :     :                    +- CometProject\n      :     :                       +- CometBroadcastHashJoin\n      :     :                          :- CometProject\n      :     :                          :  +- CometBroadcastHashJoin\n      :     :                          :     :- CometBroadcastExchange\n      :     :                          :     :  +- CometProject\n      :     :                          :     :     +- CometFilter\n      :     :                          :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :                          :     +- CometFilter\n      :     :                          :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :                          +- CometBroadcastExchange\n      :     :                             +- CometFilter\n      :     :                                +- CometScan parquet spark_catalog.default.date_dim\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                    +- CometHashAggregate\n      :                       +- CometProject\n      :                          +- CometBroadcastHashJoin\n      :                             :- CometProject\n      :                             :  +- CometBroadcastHashJoin\n      :                             :     :- CometBroadcastExchange\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometScan parquet spark_catalog.default.customer\n      :                             :     +- CometFilter\n      :                             :        +- CometScan parquet spark_catalog.default.web_sales\n      :                             +- CometBroadcastExchange\n      :                                +- CometFilter\n      :                                   +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               
+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastExchange\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometScan parquet spark_catalog.default.customer\n                           :     +- CometFilter\n                           :        +- CometScan parquet spark_catalog.default.web_sales\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q12. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q12: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometFilter\n                           :     :  +- CometScan parquet spark_catalog.default.web_sales\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.item\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q13. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q13: ExplainInfo:\n HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- CometHashAggregate\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometBroadcastHashJoin\n            :     :     :- CometProject\n            :     :     :  +- CometBroadcastHashJoin\n            :     :     :     :- CometProject\n            :     :     :     :  +- CometBroadcastHashJoin\n            :     :     :     :     :- CometFilter\n            :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :     :     :     :     +- CometBroadcastExchange\n            :     :     :     :        +- CometFilter\n            :     :     :     :           +- CometScan parquet spark_catalog.default.store\n            :     :     :     +- CometBroadcastExchange\n            :     :     :        +- CometProject\n            :     :     :           +- CometFilter\n            :     :     :              +- CometScan parquet spark_catalog.default.customer_address\n            :     :     +- CometBroadcastExchange\n            :     :        +- CometProject\n            :     :           +- CometFilter\n            :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometScan parquet spark_catalog.default.customer_demographics\n            +- CometBroadcastExchange\n               +- CometFilter\n                  +- CometScan parquet spark_catalog.default.household_demographics\n\n\nQuery: q14a. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q14a: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Expand)]\n         +-  Expand [COMET: Expand is not native because the following children are not native (Union)]\n            +-  Union [COMET: Union is not enabled because the following children are not native (Project, Project, Project)]\n               :-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               :  +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n               :     :  +- Subquery\n               :     :     +- HashAggregate\n               :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n               :     :              +- CometUnion\n               :     :                 :- CometProject\n               :     :                 :  +- CometBroadcastHashJoin\n               :     :                 :     :- CometFilter\n               :     :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :                 :     +- CometBroadcastExchange\n               :     :                 :        +- CometProject\n               :     :                 :           +- CometFilter\n               :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :                 :- CometProject\n               :     :                 :  +- CometBroadcastHashJoin\n               :     :                 :     :- CometFilter\n               :     :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :                 :     +- CometBroadcastExchange\n               :     :                 :        +- CometProject\n               :     :                 :           +- CometFilter\n               :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :                 +- CometProject\n               :     :                    +- CometBroadcastHashJoin\n               :     :                       :- CometFilter\n               :     :                       :  +- CometScan parquet spark_catalog.default.web_sales\n               :     :                       +- CometBroadcastExchange\n               :     :                          +- CometProject\n               :     :                             +- CometFilter\n               :     :                                +- CometScan parquet spark_catalog.default.date_dim\n               :     +- HashAggregate\n               :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                 +- BroadcastHashJoin\n               :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n               :                    :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n               :                    :     :  :        :  +- Subquery\n               :                    :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n               :                    :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n               :                    :     :  :        :              +- CometProject\n               :                    :     :  :        :                 +- CometFilter\n               :                    :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n               :                    :     :  :        +- CometScan parquet spark_catalog.default.store_sales\n               :                    :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :     :           +- BroadcastHashJoin\n               :                    :     :              :- BroadcastExchange\n               :                    :     :              :  +- CometFilter\n               :                    :     :              :     +- CometScan parquet spark_catalog.default.item\n               :                    :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :                    :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               :                    :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :     :                 :                 +- BroadcastHashJoin\n               :                    :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :     
               :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n               :                    :     :                 :                    :     :- CometFilter\n               :                    :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                    :     :                 :                    :     +- BroadcastExchange\n               :                    :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :                 :                    :           :     +- CometFilter\n               :                    :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n               :                    :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :                 :                    :                 +- CometProject\n               :                    :     :                 :                    :                    +- CometBroadcastHashJoin\n               :                    :     :                 :                    :                       :- CometProject\n               :                    :     :                 :                    :                       :  +- CometBroadcastHashJoin\n               :                    :     :                 :                    :                       :     :- CometFilter\n               :                    :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :                    :     :                 :                    :                       :     +- CometBroadcastExchange\n               :                    :     :                 :                    :                       :        +- CometFilter\n               :                    :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n               :                    :     :                 :                    :                       +- CometBroadcastExchange\n               :                    :     :                 :                    :                          +- CometProject\n               :                    :     :                 :                    :                             +- CometFilter\n       
        :                    :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n               :                    :     :                 :                    +- BroadcastExchange\n               :                    :     :                 :                       +- CometProject\n               :                    :     :                 :                          +- CometFilter\n               :                    :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n               :                    :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :                       +- CometProject\n               :                    :     :                          +- CometBroadcastHashJoin\n               :                    :     :                             :- CometProject\n               :                    :     :                             :  +- CometBroadcastHashJoin\n               :                    :     :                             :     :- CometFilter\n               :                    :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n               :                    :     :                             :     +- CometBroadcastExchange\n               :                    :     :                             :        +- CometFilter\n               :                    :     :                             :           +- CometScan parquet spark_catalog.default.item\n               :                    :     :                             +- CometBroadcastExchange\n               :                    :     :                                +- CometProject\n               :                    :     :                                   +- CometFilter\n               :                    :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n               :                    :     +- BroadcastExchange\n               :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :           :     +- CometFilter\n               :                    :           :        +- CometScan parquet spark_catalog.default.item\n               :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                 +-  Project [COMET: 
Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :                    +- BroadcastHashJoin\n               :                    :                       :- BroadcastExchange\n               :                    :                       :  +- CometFilter\n               :                    :                       :     +- CometScan parquet spark_catalog.default.item\n               :                    :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :                    :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               :                    :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :                          :                 +- BroadcastHashJoin\n               :                    :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n               :                    :                          :                    :     :- CometFilter\n               :                    :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                    :                          :                    :     +- BroadcastExchange\n               :                    :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                          :                    :           :     +- CometFilter\n               :                    :                          :            
        :           :        +- CometScan parquet spark_catalog.default.item\n               :                    :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                          :                    :                 +- CometProject\n               :                    :                          :                    :                    +- CometBroadcastHashJoin\n               :                    :                          :                    :                       :- CometProject\n               :                    :                          :                    :                       :  +- CometBroadcastHashJoin\n               :                    :                          :                    :                       :     :- CometFilter\n               :                    :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :                    :                          :                    :                       :     +- CometBroadcastExchange\n               :                    :                          :                    :                       :        +- CometFilter\n               :                    :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n               :                    :                          :                    :                       +- CometBroadcastExchange\n               :                    :                          :                    :                          +- CometProject\n               :                    :                          :                    :                             +- CometFilter\n               :                    :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n               :                    :                          :                    +- BroadcastExchange\n               :                    :                          :                       +- CometProject\n               :                    :                          :                          +- CometFilter\n               :                    :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n               :                    :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                                +- CometProject\n               :                    :                                   +- CometBroadcastHashJoin\n               :                    :                                      :- CometProject\n               :                    :                        
              :  +- CometBroadcastHashJoin\n               :                    :                                      :     :- CometFilter\n               :                    :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n               :                    :                                      :     +- CometBroadcastExchange\n               :                    :                                      :        +- CometFilter\n               :                    :                                      :           +- CometScan parquet spark_catalog.default.item\n               :                    :                                      +- CometBroadcastExchange\n               :                    :                                         +- CometProject\n               :                    :                                            +- CometFilter\n               :                    :                                               +- CometScan parquet spark_catalog.default.date_dim\n               :                    +- BroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan parquet spark_catalog.default.date_dim\n               :-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               :  +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n               :     :  +- Subquery\n               :     :     +- HashAggregate\n               :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n               :     :              +- CometUnion\n               :     :                 :- CometProject\n               :     :                 :  +- CometBroadcastHashJoin\n               :     :                 :     :- CometFilter\n               :     :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :                 :     +- CometBroadcastExchange\n               :     :                 :        +- CometProject\n               :     :                 :           +- CometFilter\n               :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :                 :- CometProject\n               :     :                 :  +- CometBroadcastHashJoin\n               :     :                 :     :- CometFilter\n               :     :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :                 :     +- CometBroadcastExchange\n               :     :                 :        +- CometProject\n               :     :                 :           +- CometFilter\n               :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :                 +- CometProject\n               :     :                    +- CometBroadcastHashJoin\n               :     :                       :- CometFilter\n               :     :                       :  +- CometScan parquet spark_catalog.default.web_sales\n               :     :                       +- CometBroadcastExchange\n               :     :                          +- CometProject\n               :     :                             +- CometFilter\n               :     :                                +- CometScan parquet spark_catalog.default.date_dim\n               :     +- HashAggregate\n               :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                 +- BroadcastHashJoin\n               :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n               :                    :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n               :                    :     :  :        :  +- Subquery\n               :                    :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n               :                    :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n               :                    :     :  :        :              +- CometProject\n               :                    :     :  :        :                 +- CometFilter\n               :                    :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n               :                    :     :  :        +- CometScan parquet spark_catalog.default.catalog_sales\n               :                    :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :     :           +- BroadcastHashJoin\n               :                    :     :              :- BroadcastExchange\n               :                    :     :              :  +- CometFilter\n               :                    :     :              :     +- CometScan parquet spark_catalog.default.item\n               :                    :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :                    :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               :                    :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :     :                 :                 +- BroadcastHashJoin\n               :                    :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :   
                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n               :                    :     :                 :                    :     :- CometFilter\n               :                    :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                    :     :                 :                    :     +- BroadcastExchange\n               :                    :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :                 :                    :           :     +- CometFilter\n               :                    :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n               :                    :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :                 :                    :                 +- CometProject\n               :                    :     :                 :                    :                    +- CometBroadcastHashJoin\n               :                    :     :                 :                    :                       :- CometProject\n               :                    :     :                 :                    :                       :  +- CometBroadcastHashJoin\n               :                    :     :                 :                    :                       :     :- CometFilter\n               :                    :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :                    :     :                 :                    :                       :     +- CometBroadcastExchange\n               :                    :     :                 :                    :                       :        +- CometFilter\n               :                    :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n               :                    :     :                 :                    :                       +- CometBroadcastExchange\n               :                    :     :                 :                    :                          +- CometProject\n               :                    :     :                 :                    :                             +- CometFilter\n     
          :                    :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n               :                    :     :                 :                    +- BroadcastExchange\n               :                    :     :                 :                       +- CometProject\n               :                    :     :                 :                          +- CometFilter\n               :                    :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n               :                    :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :     :                       +- CometProject\n               :                    :     :                          +- CometBroadcastHashJoin\n               :                    :     :                             :- CometProject\n               :                    :     :                             :  +- CometBroadcastHashJoin\n               :                    :     :                             :     :- CometFilter\n               :                    :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n               :                    :     :                             :     +- CometBroadcastExchange\n               :                    :     :                             :        +- CometFilter\n               :                    :     :                             :           +- CometScan parquet spark_catalog.default.item\n               :                    :     :                             +- CometBroadcastExchange\n               :                    :     :                                +- CometProject\n               :                    :     :                                   +- CometFilter\n               :                    :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n               :                    :     +- BroadcastExchange\n               :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :           :     +- CometFilter\n               :                    :           :        +- CometScan parquet spark_catalog.default.item\n               :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                 +-  Project [COMET: 
Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :                    +- BroadcastHashJoin\n               :                    :                       :- BroadcastExchange\n               :                    :                       :  +- CometFilter\n               :                    :                       :     +- CometScan parquet spark_catalog.default.item\n               :                    :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :                    :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               :                    :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :                          :                 +- BroadcastHashJoin\n               :                    :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :                    :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n               :                    :                          :                    :     :- CometFilter\n               :                    :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                    :                          :                    :     +- BroadcastExchange\n               :                    :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                    :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                          :                    :           :     +- CometFilter\n               :                    :                          :            
        :           :        +- CometScan parquet spark_catalog.default.item\n               :                    :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                          :                    :                 +- CometProject\n               :                    :                          :                    :                    +- CometBroadcastHashJoin\n               :                    :                          :                    :                       :- CometProject\n               :                    :                          :                    :                       :  +- CometBroadcastHashJoin\n               :                    :                          :                    :                       :     :- CometFilter\n               :                    :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :                    :                          :                    :                       :     +- CometBroadcastExchange\n               :                    :                          :                    :                       :        +- CometFilter\n               :                    :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n               :                    :                          :                    :                       +- CometBroadcastExchange\n               :                    :                          :                    :                          +- CometProject\n               :                    :                          :                    :                             +- CometFilter\n               :                    :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n               :                    :                          :                    +- BroadcastExchange\n               :                    :                          :                       +- CometProject\n               :                    :                          :                          +- CometFilter\n               :                    :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n               :                    :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    :                                +- CometProject\n               :                    :                                   +- CometBroadcastHashJoin\n               :                    :                                      :- CometProject\n               :                    :                        
              :  +- CometBroadcastHashJoin\n               :                    :                                      :     :- CometFilter\n               :                    :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n               :                    :                                      :     +- CometBroadcastExchange\n               :                    :                                      :        +- CometFilter\n               :                    :                                      :           +- CometScan parquet spark_catalog.default.item\n               :                    :                                      +- CometBroadcastExchange\n               :                    :                                         +- CometProject\n               :                    :                                            +- CometFilter\n               :                    :                                               +- CometScan parquet spark_catalog.default.date_dim\n               :                    +- BroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan parquet spark_catalog.default.date_dim\n               +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                  +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                     :  +- Subquery\n                     :     +- HashAggregate\n                     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                     :              +- CometUnion\n                     :                 :- CometProject\n                     :                 :  +- CometBroadcastHashJoin\n                     :                 :     :- CometFilter\n                     :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n                     :                 :     +- CometBroadcastExchange\n                     :                 :        +- CometProject\n                     :                 :           +- CometFilter\n                     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                     :                 :- CometProject\n                     :                 :  +- CometBroadcastHashJoin\n                     :                 :     :- CometFilter\n                     :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                     :                 :     +- CometBroadcastExchange\n                     :                 :        +- CometProject\n                     :                 :           +- CometFilter\n                     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                     :                 +- CometProject\n                     :                    +- CometBroadcastHashJoin\n                     :                       :- CometFilter\n                     :                       :  +- CometScan parquet spark_catalog.default.web_sales\n                     :                       +- CometBroadcastExchange\n                     :                          +- CometProject\n                     :                             +- CometFilter\n                     :                                +- CometScan parquet spark_catalog.default.date_dim\n                     +- HashAggregate\n                        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 +- BroadcastHashJoin\n                                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n                                    :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                    :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n                                    :     :  :        :  +- Subquery\n                                    :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                                    :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                                    :     :  :        :              +- CometProject\n                                    :     :  :        :                 +- CometFilter\n                                    :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n                                    :     :  :        +- CometScan parquet spark_catalog.default.web_sales\n                                    :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                    :     :           +- BroadcastHashJoin\n                                    :     :              :- BroadcastExchange\n                                    :     :              :  +- CometFilter\n                                    :     :              :     +- CometScan parquet spark_catalog.default.item\n                                    :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                    :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                    :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                    :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                    :     :                 :                 +- BroadcastHashJoin\n                                    :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                       
             :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                                    :     :                 :                    :     :- CometFilter\n                                    :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                    :     :                 :                    :     +- BroadcastExchange\n                                    :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                    :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :     :                 :                    :           :     +- CometFilter\n                                    :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n                                    :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :     :                 :                    :                 +- CometProject\n                                    :     :                 :                    :                    +- CometBroadcastHashJoin\n                                    :     :                 :                    :                       :- CometProject\n                                    :     :                 :                    :                       :  +- CometBroadcastHashJoin\n                                    :     :                 :                    :                       :     :- CometFilter\n                                    :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                    :     :                 :                    :                       :     +- CometBroadcastExchange\n                                    :     :                 :                    :                       :        +- CometFilter\n                                    :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                                    :     :                 :                    :                       +- CometBroadcastExchange\n                                    :     :                 :                    :                          +- CometProject\n                                    :     :                 :                    :                             +- CometFilter\n         
                           :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                                    :     :                 :                    +- BroadcastExchange\n                                    :     :                 :                       +- CometProject\n                                    :     :                 :                          +- CometFilter\n                                    :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n                                    :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :     :                       +- CometProject\n                                    :     :                          +- CometBroadcastHashJoin\n                                    :     :                             :- CometProject\n                                    :     :                             :  +- CometBroadcastHashJoin\n                                    :     :                             :     :- CometFilter\n                                    :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                    :     :                             :     +- CometBroadcastExchange\n                                    :     :                             :        +- CometFilter\n                                    :     :                             :           +- CometScan parquet spark_catalog.default.item\n                                    :     :                             +- CometBroadcastExchange\n                                    :     :                                +- CometProject\n                                    :     :                                   +- CometFilter\n                                    :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :           :     +- CometFilter\n                                    :           :        +- CometScan parquet spark_catalog.default.item\n                                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                 +-  Project [COMET: 
Project is not native because the following children are not native (BroadcastHashJoin)]\n                                    :                    +- BroadcastHashJoin\n                                    :                       :- BroadcastExchange\n                                    :                       :  +- CometFilter\n                                    :                       :     +- CometScan parquet spark_catalog.default.item\n                                    :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                    :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                    :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                    :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                    :                          :                 +- BroadcastHashJoin\n                                    :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                    :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                                    :                          :                    :     :- CometFilter\n                                    :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                    :                          :                    :     +- BroadcastExchange\n                                    :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                    :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                          :                    :           :     +- CometFilter\n                                    :                          :            
        :           :        +- CometScan parquet spark_catalog.default.item\n                                    :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                          :                    :                 +- CometProject\n                                    :                          :                    :                    +- CometBroadcastHashJoin\n                                    :                          :                    :                       :- CometProject\n                                    :                          :                    :                       :  +- CometBroadcastHashJoin\n                                    :                          :                    :                       :     :- CometFilter\n                                    :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                    :                          :                    :                       :     +- CometBroadcastExchange\n                                    :                          :                    :                       :        +- CometFilter\n                                    :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                                    :                          :                    :                       +- CometBroadcastExchange\n                                    :                          :                    :                          +- CometProject\n                                    :                          :                    :                             +- CometFilter\n                                    :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                                    :                          :                    +- BroadcastExchange\n                                    :                          :                       +- CometProject\n                                    :                          :                          +- CometFilter\n                                    :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n                                    :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                                +- CometProject\n                                    :                                   +- CometBroadcastHashJoin\n                                    :                                      :- CometProject\n                                    :                        
              :  +- CometBroadcastHashJoin\n                                    :                                      :     :- CometFilter\n                                    :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                    :                                      :     +- CometBroadcastExchange\n                                    :                                      :        +- CometFilter\n                                    :                                      :           +- CometScan parquet spark_catalog.default.item\n                                    :                                      +- CometBroadcastExchange\n                                    :                                         +- CometProject\n                                    :                                            +- CometFilter\n                                    :                                               +- CometScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q14b. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q14b: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n   :  +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n   :     :  +- Subquery\n   :     :     +- HashAggregate\n   :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n   :     :              +- CometUnion\n   :     :                 :- CometProject\n   :     :                 :  +- CometBroadcastHashJoin\n   :     :                 :     :- CometFilter\n   :     :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :     :                 :     +- CometBroadcastExchange\n   :     :                 :        +- CometProject\n   :     :                 :           +- CometFilter\n   :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n   :     :                 :- CometProject\n   :     :                 :  +- CometBroadcastHashJoin\n   :     :                 :     :- CometFilter\n   :     :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n   :     :                 :     +- CometBroadcastExchange\n   :     :                 :        +- CometProject\n   :     :                 :           +- CometFilter\n   :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n   :     :                 +- CometProject\n   :     :                    +- CometBroadcastHashJoin\n   :     :                       :- CometFilter\n   :     :                       :  +- CometScan parquet spark_catalog.default.web_sales\n   :     :                       +- CometBroadcastExchange\n   :     :                          +- CometProject\n   :     :                             +- CometFilter\n   :     :                                +- CometScan parquet spark_catalog.default.date_dim\n   :     +- HashAggregate\n   :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n   :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                 +- BroadcastHashJoin\n   :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n   :                    :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :                    :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n   :                    :     :  :        :  +- Subquery\n   :                    :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n   :                    :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n   :                    :     :  :        :              +- CometProject\n   :                    :     :  :        :                 +- CometFilter\n   :                    :     :  :        :                    :  +- Subquery\n   :                    :     :  :        :                    :     +- CometProject\n   :                    :     :  :        :                    :        +- CometFilter\n   :                    :     :  :        :                    :           +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     :  :        +- CometScan parquet spark_catalog.default.store_sales\n   :                    :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :     :           +- BroadcastHashJoin\n   :                    :     :              :- BroadcastExchange\n   :                    :     :              :  +- CometFilter\n   :                    :     :              :     +- CometScan parquet spark_catalog.default.item\n   :                    :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :                    :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :                    :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n   :                    :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :     :                 :                 +- BroadcastHashJoin\n   :                    :     :                 :                    :-  Project [COMET: Project is not 
native because the following children are not native (BroadcastHashJoin)]\n   :                    :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n   :                    :     :                 :                    :     :- CometFilter\n   :                    :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :                    :     :                 :                    :     +- BroadcastExchange\n   :                    :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :                    :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :                 :                    :           :     +- CometFilter\n   :                    :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n   :                    :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :                 :                    :                 +- CometProject\n   :                    :     :                 :                    :                    +- CometBroadcastHashJoin\n   :                    :     :                 :                    :                       :- CometProject\n   :                    :     :                 :                    :                       :  +- CometBroadcastHashJoin\n   :                    :     :                 :                    :                       :     :- CometFilter\n   :                    :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n   :                    :     :                 :                    :                       :     +- CometBroadcastExchange\n   :                    :     :                 :                    :                       :        +- CometFilter\n   :                    :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n   :                    :     :                 :                    :                       +- CometBroadcastExchange\n   :                    :     :                 :                    :                          +- CometProject\n   :                    :     :                 :                    :                             +- CometFilter\n   :                    :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     : 
                :                    +- BroadcastExchange\n   :                    :     :                 :                       +- CometProject\n   :                    :     :                 :                          +- CometFilter\n   :                    :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :                       +- CometProject\n   :                    :     :                          +- CometBroadcastHashJoin\n   :                    :     :                             :- CometProject\n   :                    :     :                             :  +- CometBroadcastHashJoin\n   :                    :     :                             :     :- CometFilter\n   :                    :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n   :                    :     :                             :     +- CometBroadcastExchange\n   :                    :     :                             :        +- CometFilter\n   :                    :     :                             :           +- CometScan parquet spark_catalog.default.item\n   :                    :     :                             +- CometBroadcastExchange\n   :                    :     :                                +- CometProject\n   :                    :     :                                   +- CometFilter\n   :                    :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     +- BroadcastExchange\n   :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :           :     +- CometFilter\n   :                    :           :        +- CometScan parquet spark_catalog.default.item\n   :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :                    +- BroadcastHashJoin\n   :                    :                       :- BroadcastExchange\n   :                    :                       :  +- CometFilter\n   :                    :                       :     +- CometScan parquet spark_catalog.default.item\n   :                    :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children 
are not native (Sort, Sort)]\n   :                    :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :                    :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n   :                    :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :                          :                 +- BroadcastHashJoin\n   :                    :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n   :                    :                          :                    :     :- CometFilter\n   :                    :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :                    :                          :                    :     +- BroadcastExchange\n   :                    :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :                    :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                          :                    :           :     +- CometFilter\n   :                    :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n   :                    :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                          :                    :                 +- CometProject\n   :                    :                          :                    :                    +- CometBroadcastHashJoin\n   :                    :                          :   
                 :                       :- CometProject\n   :                    :                          :                    :                       :  +- CometBroadcastHashJoin\n   :                    :                          :                    :                       :     :- CometFilter\n   :                    :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n   :                    :                          :                    :                       :     +- CometBroadcastExchange\n   :                    :                          :                    :                       :        +- CometFilter\n   :                    :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n   :                    :                          :                    :                       +- CometBroadcastExchange\n   :                    :                          :                    :                          +- CometProject\n   :                    :                          :                    :                             +- CometFilter\n   :                    :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n   :                    :                          :                    +- BroadcastExchange\n   :                    :                          :                       +- CometProject\n   :                    :                          :                          +- CometFilter\n   :                    :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n   :                    :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                                +- CometProject\n   :                    :                                   +- CometBroadcastHashJoin\n   :                    :                                      :- CometProject\n   :                    :                                      :  +- CometBroadcastHashJoin\n   :                    :                                      :     :- CometFilter\n   :                    :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n   :                    :                                      :     +- CometBroadcastExchange\n   :                    :                                      :        +- CometFilter\n   :                    :                                      :           +- CometScan parquet spark_catalog.default.item\n   :                    :                                      +- CometBroadcastExchange\n   :                    :                                         +- CometProject\n   :                    :                                            +- CometFilter\n   :                    :                                               +- CometScan parquet spark_catalog.default.date_dim\n   :                    +- BroadcastExchange\n   :                       +- CometProject\n   :                          +- CometFilter\n   :         
                    :  +- Subquery\n   :                             :     +- CometProject\n   :                             :        +- CometFilter\n   :                             :           +- CometScan parquet spark_catalog.default.date_dim\n   :                             +- CometScan parquet spark_catalog.default.date_dim\n   +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n      +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n         :  +- Subquery\n         :     +- HashAggregate\n         :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n         :              +- CometUnion\n         :                 :- CometProject\n         :                 :  +- CometBroadcastHashJoin\n         :                 :     :- CometFilter\n         :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n         :                 :     +- CometBroadcastExchange\n         :                 :        +- CometProject\n         :                 :           +- CometFilter\n         :                 :              +- CometScan parquet spark_catalog.default.date_dim\n         :                 :- CometProject\n         :                 :  +- CometBroadcastHashJoin\n         :                 :     :- CometFilter\n         :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n         :                 :     +- CometBroadcastExchange\n         :                 :        +- CometProject\n         :                 :           +- CometFilter\n         :                 :              +- CometScan parquet spark_catalog.default.date_dim\n         :                 +- CometProject\n         :                    +- CometBroadcastHashJoin\n         :                       :- CometFilter\n         :                       :  +- CometScan parquet spark_catalog.default.web_sales\n         :                       +- CometBroadcastExchange\n         :                          +- CometProject\n         :                             +- CometFilter\n         :                                +- CometScan parquet spark_catalog.default.date_dim\n         +- HashAggregate\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                  +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                     +- BroadcastHashJoin\n                        :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n                        :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following 
children are not native (Sort, Sort)]\n                        :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n                        :     :  :        :  +- Subquery\n                        :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                        :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                        :     :  :        :              +- CometProject\n                        :     :  :        :                 +- CometFilter\n                        :     :  :        :                    :  +- Subquery\n                        :     :  :        :                    :     +- CometProject\n                        :     :  :        :                    :        +- CometFilter\n                        :     :  :        :                    :           +- CometScan parquet spark_catalog.default.date_dim\n                        :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n                        :     :  :        +- CometScan parquet spark_catalog.default.store_sales\n                        :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :     :           +- BroadcastHashJoin\n                        :     :              :- BroadcastExchange\n                        :     :              :  +- CometFilter\n                        :     :              :     +- CometScan parquet spark_catalog.default.item\n                        :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                        :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :                 :           +-  
HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                        :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :     :                 :                 +- BroadcastHashJoin\n                        :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                        :     :                 :                    :     :- CometFilter\n                        :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :     :                 :                    :     +- BroadcastExchange\n                        :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :                 :                    :           :     +- CometFilter\n                        :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n                        :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :                 :                    :                 +- CometProject\n                        :     :                 :                    :                    +- CometBroadcastHashJoin\n                        :     :                 :                    :                       :- CometProject\n                        :     :                 :                    :                       :  +- CometBroadcastHashJoin\n                        :     :                 :                    :                       :     :- CometFilter\n                        :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                        :     :                 :                    :                       :     +- CometBroadcastExchange\n                        :     :                 :                    :                       :        +- CometFilter\n                        :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                        :     :                 :             
       :                       +- CometBroadcastExchange\n                        :     :                 :                    :                          +- CometProject\n                        :     :                 :                    :                             +- CometFilter\n                        :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                        :     :                 :                    +- BroadcastExchange\n                        :     :                 :                       +- CometProject\n                        :     :                 :                          +- CometFilter\n                        :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n                        :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :                       +- CometProject\n                        :     :                          +- CometBroadcastHashJoin\n                        :     :                             :- CometProject\n                        :     :                             :  +- CometBroadcastHashJoin\n                        :     :                             :     :- CometFilter\n                        :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n                        :     :                             :     +- CometBroadcastExchange\n                        :     :                             :        +- CometFilter\n                        :     :                             :           +- CometScan parquet spark_catalog.default.item\n                        :     :                             +- CometBroadcastExchange\n                        :     :                                +- CometProject\n                        :     :                                   +- CometFilter\n                        :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n                        :     +- BroadcastExchange\n                        :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :           :     +- CometFilter\n                        :           :        +- CometScan parquet spark_catalog.default.item\n                        :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                 +-  Project [COMET: Project is not native because the following children are 
not native (BroadcastHashJoin)]\n                        :                    +- BroadcastHashJoin\n                        :                       :- BroadcastExchange\n                        :                       :  +- CometFilter\n                        :                       :     +- CometScan parquet spark_catalog.default.item\n                        :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                        :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                        :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :                          :                 +- BroadcastHashJoin\n                        :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                        :                          :                    :     :- CometFilter\n                        :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :                          :                    :     +- BroadcastExchange\n                        :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                          :                    :           :     +- CometFilter\n                        :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n                        :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :                          :            
        :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                          :                    :                 +- CometProject\n                        :                          :                    :                    +- CometBroadcastHashJoin\n                        :                          :                    :                       :- CometProject\n                        :                          :                    :                       :  +- CometBroadcastHashJoin\n                        :                          :                    :                       :     :- CometFilter\n                        :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                        :                          :                    :                       :     +- CometBroadcastExchange\n                        :                          :                    :                       :        +- CometFilter\n                        :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                        :                          :                    :                       +- CometBroadcastExchange\n                        :                          :                    :                          +- CometProject\n                        :                          :                    :                             +- CometFilter\n                        :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                        :                          :                    +- BroadcastExchange\n                        :                          :                       +- CometProject\n                        :                          :                          +- CometFilter\n                        :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n                        :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                                +- CometProject\n                        :                                   +- CometBroadcastHashJoin\n                        :                                      :- CometProject\n                        :                                      :  +- CometBroadcastHashJoin\n                        :                                      :     :- CometFilter\n                        :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n                        :                                      :     +- CometBroadcastExchange\n                        :                                      :        +- CometFilter\n                        :                                      :           +- CometScan parquet spark_catalog.default.item\n                        :                                 
     +- CometBroadcastExchange\n                        :                                         +- CometProject\n                        :                                            +- CometFilter\n                        :                                               +- CometScan parquet spark_catalog.default.date_dim\n                        +- BroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 :  +- Subquery\n                                 :     +- CometProject\n                                 :        +- CometFilter\n                                 :           +- CometScan parquet spark_catalog.default.date_dim\n                                 +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q15. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q15: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometFilter\n               :     :           +- CometScan parquet spark_catalog.default.customer\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.customer_address\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q16. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q16: ExplainInfo:\nHashAggregate\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- HashAggregate\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  +- BroadcastHashJoin\n                     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                     :  +- BroadcastHashJoin\n                     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- BroadcastHashJoin\n                     :     :     :  :- CometProject\n                     :     :     :  :  +- CometBroadcastHashJoin\n                     :     :     :  :     :- CometFilter\n                     :     :     :  :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                     :     :     :  :     +- CometBroadcastExchange\n                     :     :     :  :        +- CometScan parquet spark_catalog.default.catalog_sales\n                     :     :     :  +- BroadcastExchange\n                     :     :     :     +- CometScan parquet spark_catalog.default.catalog_returns\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan parquet spark_catalog.default.customer_address\n                     +- BroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan parquet spark_catalog.default.call_center\n\n\nQuery: q17. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q17: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometProject\n               :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :- CometProject\n               :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :- CometFilter\n               :     :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :     :        +- CometFilter\n               :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n               :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :        +- CometFilter\n               :     :     :     :     :           +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :        +- CometProject\n               :     :     :     :           +- CometFilter\n               :     :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometProject\n               :     :     :           +- CometFilter\n               :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.store\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q18. 
Comet Exec: Enabled (CometExpand, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q18: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true, Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n         +- CometExpand\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometProject\n                  :     :     :     :     :           +- CometFilter\n                  :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan parquet spark_catalog.default.customer\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet spark_catalog.default.customer_address\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.date_dim\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q19. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q19: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometProject\n               :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :- CometBroadcastExchange\n               :     :     :     :     :  +- CometProject\n               :     :     :     :     :     +- CometFilter\n               :     :     :     :     :        +- CometScan parquet spark_catalog.default.date_dim\n               :     :     :     :     +- CometFilter\n               :     :     :     :        +- CometScan parquet spark_catalog.default.store_sales\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometProject\n               :     :     :           +- CometFilter\n               :     :     :              +- CometScan parquet spark_catalog.default.item\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometFilter\n               :     :           +- CometScan parquet spark_catalog.default.customer\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.customer_address\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q20. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q20: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometFilter\n                           :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.item\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q21. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q21: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometFilter\n                  :     :     :  +- CometScan parquet spark_catalog.default.inventory\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan parquet spark_catalog.default.warehouse\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q22. 
Comet Exec: Enabled (CometHashAggregate, CometExpand, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q22: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometExpand\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometFilter\n                  :     :     :  +- CometScan parquet spark_catalog.default.inventory\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.warehouse\n\n\nQuery: q23a. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q23a: ExplainInfo:\nHashAggregate\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n      +-  Union [COMET: Union is not enabled because the following children are not native (Project, Project)]\n         :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :  +- BroadcastHashJoin\n         :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n         :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :     :     +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         :     :     :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n         :     :     :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :     :     :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :     :           :     +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n         :     :     :           :        :  +- Subquery\n         :     :     :           :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :     :     :           :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :     :           :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :     :     :           :        :              +- CometProject\n         :     :     :           :        :                 +- CometFilter\n         :     :     :           :        :                    +- CometScan parquet spark_catalog.default.date_dim\n         :     :     :           :        +- CometScan parquet spark_catalog.default.catalog_sales\n         :     :     :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :     :     :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :     :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n         :     :     :                    +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n         :     :     :                       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :     :     :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :     :                             +- CometHashAggregate\n         :     :     :                                +- CometProject\n         :     :     :                                   +- CometBroadcastHashJoin\n         :     :     :                                      :- CometProject\n         :     :     :                                      :  +- CometBroadcastHashJoin\n         :     :     :                                      :     :- CometFilter\n         :     :     :                                      :     :  +- CometScan parquet spark_catalog.default.store_sales\n         :     :     :                                      :     +- CometBroadcastExchange\n         :     :     :                                      :        +- CometProject\n         :     :     :                                      :           +- CometFilter\n         :     :     :                                      :              +- CometScan parquet spark_catalog.default.date_dim\n         :     :     :                                      +- CometBroadcastExchange\n         :     :     :                                         +- CometFilter\n         :     :     :                                            +- CometScan parquet spark_catalog.default.item\n         :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n         :     :        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n         :     :           +-  Filter [COMET: Filter is not native because the following children are 
not native (HashAggregate)]\n         :     :              :  +- Subquery\n         :     :              :     +- HashAggregate\n         :     :              :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :              :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n         :     :              :              +- HashAggregate\n         :     :              :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :              :                    +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n         :     :              :                       +- CometProject\n         :     :              :                          +- CometBroadcastHashJoin\n         :     :              :                             :- CometProject\n         :     :              :                             :  +- CometBroadcastHashJoin\n         :     :              :                             :     :- CometFilter\n         :     :              :                             :     :  +- CometScan parquet spark_catalog.default.store_sales\n         :     :              :                             :     +- CometBroadcastExchange\n         :     :              :                             :        +- CometFilter\n         :     :              :                             :           +- CometScan parquet spark_catalog.default.customer\n         :     :              :                             +- CometBroadcastExchange\n         :     :              :                                +- CometProject\n         :     :              :                                   +- CometFilter\n         :     :              :                                      +- CometScan parquet spark_catalog.default.date_dim\n         :     :              +- HashAggregate\n         :     :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :                    +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n         :     :                       +- CometProject\n         :     :                          +- CometBroadcastHashJoin\n         :     :                             :- CometFilter\n         :     :                             :  +- CometScan parquet spark_catalog.default.store_sales\n         :     :                             +- CometBroadcastExchange\n         :     :                                +- CometFilter\n         :     :                                   +- CometScan parquet spark_catalog.default.customer\n         :     +- BroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan parquet spark_catalog.default.date_dim\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n               :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n               :     :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :           :     +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n               :     :           :        :  +- Subquery\n               :     :           :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n               :     :           :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :           :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n               :     :           :        :              +- CometProject\n               :     :           :        :                 +- CometFilter\n               :     :           :        :                    +- CometScan parquet spark_catalog.default.date_dim\n               :     :           :        +- CometScan parquet spark_catalog.default.web_sales\n               :     :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               :     :                    +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n               :     :                       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :     :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :                             +- CometHashAggregate\n               :     :                                +- CometProject\n               :     :                                   +- CometBroadcastHashJoin\n               :     :                                      :- CometProject\n               :     :                                      :  +- CometBroadcastHashJoin\n               :     :                                      :     :- CometFilter\n               :     :                                      :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :                                      :     +- CometBroadcastExchange\n               :     :                                      :        +- CometProject\n               :     :                                      :           +- CometFilter\n               :     :                                      :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :                                      +- CometBroadcastExchange\n               :     :                                         +- CometFilter\n               :     :                                            +- CometScan parquet spark_catalog.default.item\n               :     +-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n               :        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               :           +-  Filter [COMET: Filter is not native because the following children are not 
native (HashAggregate)]\n               :              :  +- Subquery\n               :              :     +- HashAggregate\n               :              :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n               :              :              +- HashAggregate\n               :              :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              :                    +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n               :              :                       +- CometProject\n               :              :                          +- CometBroadcastHashJoin\n               :              :                             :- CometProject\n               :              :                             :  +- CometBroadcastHashJoin\n               :              :                             :     :- CometFilter\n               :              :                             :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :              :                             :     +- CometBroadcastExchange\n               :              :                             :        +- CometFilter\n               :              :                             :           +- CometScan parquet spark_catalog.default.customer\n               :              :                             +- CometBroadcastExchange\n               :              :                                +- CometProject\n               :              :                                   +- CometFilter\n               :              :                                      +- CometScan parquet spark_catalog.default.date_dim\n               :              +- HashAggregate\n               :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                    +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometScan parquet spark_catalog.default.store_sales\n               :                             +- CometBroadcastExchange\n               :                                +- CometFilter\n               :                                   +- CometScan parquet spark_catalog.default.customer\n               +- BroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q23b. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q23b: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate)]\n   :- HashAggregate\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n   :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :           +- BroadcastHashJoin\n   :              :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :              :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n   :              :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :              :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :              :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :     :  :     +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   :              :     :  :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :              :     :  :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :              :     :  :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :     :  :           :     +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n   :              :     :  :           :        :  +- Subquery\n   :              :     :  :           :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n   :              :     :  :           :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :     :  :           :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n   :              :     :  :           :        :              +- CometProject\n   :              :     :  :           :        :                 +- CometFilter\n   :              :     :  :           :        :                    +- CometScan parquet spark_catalog.default.date_dim\n   :              :     :  :           :        +- CometScan parquet spark_catalog.default.catalog_sales\n   :              :     :  :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :              :     :  :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :     :  :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n   :              :     :  :                    +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n   :              :     :  :                       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :              :     :  :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :     :  :                             +- CometHashAggregate\n   :              :     :  :                                +- CometProject\n   :              :     :  :                                   +- CometBroadcastHashJoin\n   :              :     :  :                                      :- CometProject\n   :              :     :  :                                      :  +- CometBroadcastHashJoin\n   :              :     :  :                                      :     :- CometFilter\n   :              :     :  :                                      :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :              :     :  :                                      :     +- CometBroadcastExchange\n   :              :     :  :                                      :        +- CometProject\n   :              :     :  :                                      :           +- CometFilter\n   :              :     :  :                                      :              +- CometScan parquet spark_catalog.default.date_dim\n   :              :     :  :                                      +- CometBroadcastExchange\n   :              :     :  :                                         +- CometFilter\n   :              :     :  :                                            +- CometScan parquet spark_catalog.default.item\n   :              :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n   :              :     :     +-  Project [COMET: 
Project is not native because the following children are not native (Filter)]\n   :              :     :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n   :              :     :           :  +- Subquery\n   :              :     :           :     +- HashAggregate\n   :              :     :           :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :     :           :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n   :              :     :           :              +- HashAggregate\n   :              :     :           :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :     :           :                    +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n   :              :     :           :                       +- CometProject\n   :              :     :           :                          +- CometBroadcastHashJoin\n   :              :     :           :                             :- CometProject\n   :              :     :           :                             :  +- CometBroadcastHashJoin\n   :              :     :           :                             :     :- CometFilter\n   :              :     :           :                             :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :              :     :           :                             :     +- CometBroadcastExchange\n   :              :     :           :                             :        +- CometFilter\n   :              :     :           :                             :           +- CometScan parquet spark_catalog.default.customer\n   :              :     :           :                             +- CometBroadcastExchange\n   :              :     :           :                                +- CometProject\n   :              :     :           :                                   +- CometFilter\n   :              :     :           :                                      +- CometScan parquet spark_catalog.default.date_dim\n   :              :     :           +- HashAggregate\n   :              :     :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :     :                 +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n   :              :     :                    +- CometProject\n   :              :     :                       +- CometBroadcastHashJoin\n   :              :     :                          :- CometFilter\n   :              :     :                          :  +- CometScan parquet spark_catalog.default.store_sales\n   :              :     :                          +- CometBroadcastExchange\n   :              :     :                             +- CometFilter\n   :              :     :                                +- CometScan parquet spark_catalog.default.customer\n   :              :     +- BroadcastExchange\n   :              :        +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   :              :           +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :              :              :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :              :              :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :              :     +- CometFilter\n   :              :              :        +- CometScan parquet spark_catalog.default.customer\n   :              :              +-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n   :              :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n   :              :                    +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n   :              :                       :  +- Subquery\n   :              :                       :     +- HashAggregate\n   :              :                       :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :                       :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n   :              :                       :              +- HashAggregate\n   :              :                       :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :                       :                    +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n   :              :                       :                       +- CometProject\n   :              :                       :                          +- CometBroadcastHashJoin\n   :              :                       :                             :- CometProject\n   :              :                       :                             :  +- CometBroadcastHashJoin\n   :              :                       :                             :     :- CometFilter\n   :              :                       :                             :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :              :                       :                             :     +- CometBroadcastExchange\n   :              :                       :                             :        +- CometFilter\n   :              :                       :                             :           +- CometScan parquet spark_catalog.default.customer\n   :              :                       :                             +- CometBroadcastExchange\n   :              :                       :                                +- CometProject\n   :              :                       :                                   +- CometFilter\n   :              :                       :                                      +- CometScan parquet spark_catalog.default.date_dim\n   :              :                       +- HashAggregate\n   :              :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              :                             +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n   :              :                                +- CometProject\n   :              :                                   +- CometBroadcastHashJoin\n   :              :                                      :- CometFilter\n   :              :                                      :  +- CometScan parquet spark_catalog.default.store_sales\n   :              :                                      +- CometBroadcastExchange\n   :              :                                         +- CometFilter\n   :              :                                            +- CometScan parquet spark_catalog.default.customer\n   :              +- BroadcastExchange\n   :                 +- CometProject\n   :                    +- CometFilter\n   :                       +- CometScan parquet spark_catalog.default.date_dim\n   +- HashAggregate\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               +- BroadcastHashJoin\n                  :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n                  :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                  :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :  :     +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                  :     :  :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                  :     :  :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :     :  :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :  :           :     +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n                  :     :  :           :        :  +- Subquery\n                  :     :  :           :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                  :     :  :           :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :  :           :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                  :     :  :           :        :              +- CometProject\n                  :     :  :           :        :                 +- CometFilter\n                  :     :  :           :        :                    +- CometScan parquet spark_catalog.default.date_dim\n                  :     :  :           :        +- CometScan parquet spark_catalog.default.web_sales\n                  :     :  :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :     :  :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :  :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                  :     :  :                    +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                  :     :  :                       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  :     :  :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :  :                             +- CometHashAggregate\n                  :     :  :                                +- CometProject\n                  :     :  :                                   +- CometBroadcastHashJoin\n                  :     :  :                                      :- CometProject\n                  :     :  :                                      :  +- CometBroadcastHashJoin\n                  :     :  :                                      :     :- CometFilter\n                  :     :  :                                      :     :  +- CometScan parquet spark_catalog.default.store_sales\n                  :     :  :                                      :     +- CometBroadcastExchange\n                  :     :  :                                      :        +- CometProject\n                  :     :  :                                      :           +- CometFilter\n                  :     :  :                                      :              +- CometScan parquet spark_catalog.default.date_dim\n                  :     :  :                                      +- CometBroadcastExchange\n                  :     :  :                                         +- CometFilter\n                  :     :  :                                            +- CometScan parquet spark_catalog.default.item\n                  :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n                  :     :     +-  Project [COMET: 
Project is not native because the following children are not native (Filter)]\n                  :     :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                  :     :           :  +- Subquery\n                  :     :           :     +- HashAggregate\n                  :     :           :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :           :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                  :     :           :              +- HashAggregate\n                  :     :           :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :           :                    +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                  :     :           :                       +- CometProject\n                  :     :           :                          +- CometBroadcastHashJoin\n                  :     :           :                             :- CometProject\n                  :     :           :                             :  +- CometBroadcastHashJoin\n                  :     :           :                             :     :- CometFilter\n                  :     :           :                             :     :  +- CometScan parquet spark_catalog.default.store_sales\n                  :     :           :                             :     +- CometBroadcastExchange\n                  :     :           :                             :        +- CometFilter\n                  :     :           :                             :           +- CometScan parquet spark_catalog.default.customer\n                  :     :           :                             +- CometBroadcastExchange\n                  :     :           :                                +- CometProject\n                  :     :           :                                   +- CometFilter\n                  :     :           :                                      +- CometScan parquet spark_catalog.default.date_dim\n                  :     :           +- HashAggregate\n                  :     :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :                 +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                  :     :                    +- CometProject\n                  :     :                       +- CometBroadcastHashJoin\n                  :     :                          :- CometFilter\n                  :     :                          :  +- CometScan parquet spark_catalog.default.store_sales\n                  :     :                          +- CometBroadcastExchange\n                  :     :                             +- CometFilter\n                  :     :                                +- CometScan parquet spark_catalog.default.customer\n                  :     +- BroadcastExchange\n                  :        +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                  :           +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                  :              :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :              :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :              :     +- CometFilter\n                  :              :        +- CometScan parquet spark_catalog.default.customer\n                  :              +-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n                  :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                  :                    +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                  :                       :  +- Subquery\n                  :                       :     +- HashAggregate\n                  :                       :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :                       :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                  :                       :              +- HashAggregate\n                  :                       :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :                       :                    +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                  :                       :                       +- CometProject\n                  :                       :                          +- CometBroadcastHashJoin\n                  :                       :                             :- CometProject\n                  :                       :                             :  +- CometBroadcastHashJoin\n                  :                       :                             :     :- CometFilter\n                  :                       :                             :     :  +- CometScan parquet spark_catalog.default.store_sales\n                  :                       :                             :     +- CometBroadcastExchange\n                  :                       :                             :        +- CometFilter\n                  :                       :                             :           +- CometScan parquet spark_catalog.default.customer\n                  :                       :                             +- CometBroadcastExchange\n                  :                       :                                +- CometProject\n                  :                       :                                   +- CometFilter\n                  :                       :                                      +- CometScan parquet spark_catalog.default.date_dim\n                  :                       +- HashAggregate\n                  :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :                             +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                  :                                +- CometProject\n                  :                                   +- CometBroadcastHashJoin\n                  :                                      :- CometFilter\n                  :                                      :  +- CometScan parquet spark_catalog.default.store_sales\n                  :                                      +- CometBroadcastExchange\n                  :                                         +- CometFilter\n                  :                                            +- CometScan parquet spark_catalog.default.customer\n                  +- BroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q24a. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q24a: ExplainInfo:\n Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n:  +- Subquery\n:     +- HashAggregate\n:        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n:              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:                    +- CometHashAggregate\n:                       +- CometProject\n:                          +- CometBroadcastHashJoin\n:                             :- CometProject\n:                             :  +- CometBroadcastHashJoin\n:                             :     :- CometProject\n:                             :     :  +- CometBroadcastHashJoin\n:                             :     :     :- CometProject\n:                             :     :     :  +- CometBroadcastHashJoin\n:                             :     :     :     :- CometProject\n:                             :     :     :     :  +- CometBroadcastHashJoin\n:                             :     :     :     :     :- CometFilter\n:                             :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n:                             :     :     :     :     +- CometBroadcastExchange\n:                             :     :     :     :        +- CometFilter\n:                             :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n:                             :     :     :     +- CometBroadcastExchange\n:                             :     :     :        +- CometProject\n:                             :     :     :           +- CometFilter\n:                             :     :     :              +- CometScan parquet spark_catalog.default.store\n:                             :     :     +- CometBroadcastExchange\n:                             :     :        +- CometProject\n:                             :     :           +- CometFilter\n:                             :     :              +- CometScan parquet spark_catalog.default.item\n:                             :     +- CometBroadcastExchange\n:                             :        +- CometProject\n:                             :           +- CometFilter\n:                             :              +- CometScan parquet spark_catalog.default.customer\n:                             +- CometBroadcastExchange\n:                                +- CometProject\n:                                   +- CometFilter\n:                                      +- CometScan parquet spark_catalog.default.customer_address\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native 
(Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometBroadcastHashJoin\n                        :     :     :     :- CometProject\n                        :     :     :     :  +- CometBroadcastHashJoin\n                        :     :     :     :     :- CometFilter\n                        :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :     :     :     :     +- CometBroadcastExchange\n                        :     :     :     :        +- CometFilter\n                        :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n                        :     :     :     +- CometBroadcastExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometFilter\n                        :     :     :              +- CometScan parquet spark_catalog.default.store\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan parquet spark_catalog.default.item\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan parquet spark_catalog.default.customer\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan parquet spark_catalog.default.customer_address\n\n\nQuery: q24b. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q24b: ExplainInfo:\n Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n:  +- Subquery\n:     +- HashAggregate\n:        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n:              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:                    +- CometHashAggregate\n:                       +- CometProject\n:                          +- CometBroadcastHashJoin\n:                             :- CometProject\n:                             :  +- CometBroadcastHashJoin\n:                             :     :- CometProject\n:                             :     :  +- CometBroadcastHashJoin\n:                             :     :     :- CometProject\n:                             :     :     :  +- CometBroadcastHashJoin\n:                             :     :     :     :- CometProject\n:                             :     :     :     :  +- CometBroadcastHashJoin\n:                             :     :     :     :     :- CometFilter\n:                             :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n:                             :     :     :     :     +- CometBroadcastExchange\n:                             :     :     :     :        +- CometFilter\n:                             :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n:                             :     :     :     +- CometBroadcastExchange\n:                             :     :     :        +- CometProject\n:                             :     :     :           +- CometFilter\n:                             :     :     :              +- CometScan parquet spark_catalog.default.store\n:                             :     :     +- CometBroadcastExchange\n:                             :     :        +- CometProject\n:                             :     :           +- CometFilter\n:                             :     :              +- CometScan parquet spark_catalog.default.item\n:                             :     +- CometBroadcastExchange\n:                             :        +- CometProject\n:                             :           +- CometFilter\n:                             :              +- CometScan parquet spark_catalog.default.customer\n:                             +- CometBroadcastExchange\n:                                +- CometProject\n:                                   +- CometFilter\n:                                      +- CometScan parquet spark_catalog.default.customer_address\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native 
(Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometBroadcastHashJoin\n                        :     :     :     :- CometProject\n                        :     :     :     :  +- CometBroadcastHashJoin\n                        :     :     :     :     :- CometFilter\n                        :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :     :     :     :     +- CometBroadcastExchange\n                        :     :     :     :        +- CometFilter\n                        :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n                        :     :     :     +- CometBroadcastExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometFilter\n                        :     :     :              +- CometScan parquet spark_catalog.default.store\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan parquet spark_catalog.default.item\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan parquet spark_catalog.default.customer\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan parquet spark_catalog.default.customer_address\n\n\nQuery: q25. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q25: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometProject\n               :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :- CometProject\n               :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :- CometFilter\n               :     :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :     :        +- CometFilter\n               :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n               :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :        +- CometFilter\n               :     :     :     :     :           +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :        +- CometProject\n               :     :     :     :           +- CometFilter\n               :     :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometProject\n               :     :     :           +- CometFilter\n               :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.store\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q26. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q26: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometFilter\n               :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometProject\n               :     :     :           +- CometFilter\n               :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.item\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.promotion\n\n\nQuery: q27. 
Comet Exec: Enabled (CometHashAggregate, CometExpand, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q27: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometExpand\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q28. 
Comet Exec: Enabled (CometHashAggregate, CometFilter, CometProject)\nQuery: q28: ExplainInfo:\n BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (BroadcastNestedLoopJoin, BroadcastExchange)]\n:-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (BroadcastNestedLoopJoin, BroadcastExchange)]\n:  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (BroadcastNestedLoopJoin, BroadcastExchange)]\n:  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (BroadcastNestedLoopJoin, BroadcastExchange)]\n:  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (HashAggregate, BroadcastExchange)]\n:  :  :  :  :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :  :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :  :  :     +- HashAggregate\n:  :  :  :  :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :  :  :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :                 +- CometProject\n:  :  :  :  :                    +- CometFilter\n:  :  :  :  :                       +- CometScan parquet spark_catalog.default.store_sales\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :  :           +- HashAggregate\n:  :  :  :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :  :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :  :                    +- CometHashAggregate\n:  :  :  :                       +- CometProject\n:  :  :  :                          +- CometFilter\n:  :  :  :                             +- CometScan parquet spark_catalog.default.store_sales\n:  :  :  +- BroadcastExchange\n:  :  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :           +- HashAggregate\n:  :  :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :                    +- 
CometHashAggregate\n:  :  :                       +- CometProject\n:  :  :                          +- CometFilter\n:  :  :                             +- CometScan parquet spark_catalog.default.store_sales\n:  :  +- BroadcastExchange\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- HashAggregate\n:  :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :                    +- CometHashAggregate\n:  :                       +- CometProject\n:  :                          +- CometFilter\n:  :                             +- CometScan parquet spark_catalog.default.store_sales\n:  +- BroadcastExchange\n:     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:           +- HashAggregate\n:              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:                    +- CometHashAggregate\n:                       +- CometProject\n:                          +- CometFilter\n:                             +- CometScan parquet spark_catalog.default.store_sales\n+- BroadcastExchange\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- HashAggregate\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet spark_catalog.default.store_sales\n\n\nQuery: q29. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q29: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometProject\n               :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :- CometProject\n               :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :- CometFilter\n               :     :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :     :        +- CometFilter\n               :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n               :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :        +- CometFilter\n               :     :     :     :     :           +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :        +- CometProject\n               :     :     :     :           +- CometFilter\n               :     :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometProject\n               :     :     :           +- CometFilter\n               :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.store\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q30. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q30: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   +- BroadcastHashJoin\n      :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (Project, BroadcastExchange)]\n      :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :     :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :              +- CometHashAggregate\n      :     :     :                 +- CometProject\n      :     :     :                    +- CometBroadcastHashJoin\n      :     :     :                       :- CometProject\n      :     :     :                       :  +- CometBroadcastHashJoin\n      :     :     :                       :     :- CometFilter\n      :     :     :                       :     :  +- CometScan parquet spark_catalog.default.web_returns\n      :     :     :                       :     +- CometBroadcastExchange\n      :     :     :                       :        +- CometProject\n      :     :     :                       :           +- CometFilter\n      :     :     :                       :              +- CometScan parquet spark_catalog.default.date_dim\n      :     :     :                       +- CometBroadcastExchange\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometScan parquet spark_catalog.default.customer_address\n      :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n      :     :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :           +- HashAggregate\n      :     :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n      :     :                    +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :                       +-  Exchange [COMET: 
Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                          +- CometHashAggregate\n      :     :                             +- CometProject\n      :     :                                +- CometBroadcastHashJoin\n      :     :                                   :- CometProject\n      :     :                                   :  +- CometBroadcastHashJoin\n      :     :                                   :     :- CometFilter\n      :     :                                   :     :  +- CometScan parquet spark_catalog.default.web_returns\n      :     :                                   :     +- CometBroadcastExchange\n      :     :                                   :        +- CometProject\n      :     :                                   :           +- CometFilter\n      :     :                                   :              +- CometScan parquet spark_catalog.default.date_dim\n      :     :                                   +- CometBroadcastExchange\n      :     :                                      +- CometProject\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :           +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n      :              :  +- Subquery\n      :              :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n      :              :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :              :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n      :              :              +- CometProject\n      :              :                 +- CometFilter\n      :              :                    +- CometScan parquet spark_catalog.default.customer_address\n      :              +- CometScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometProject\n            +- CometFilter\n               +- CometScan parquet spark_catalog.default.customer_address\n\n\nQuery: q31. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q31: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n         :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n         :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n         :     :  :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         :     :  :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n         :     :  :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n         :     :  :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :     :  :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :  :     :  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :     :  :     :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :  :     :  :           +- CometHashAggregate\n         :     :  :     :  :              +- CometProject\n         :     :  :     :  :                 +- CometBroadcastHashJoin\n         :     :  :     :  :                    :- CometProject\n         :     :  :     :  :                    :  +- CometBroadcastHashJoin\n         :     :  :     :  :                    :     :- CometFilter\n         :     :  :     :  :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n         :     :  :     :  :                    :     +- CometBroadcastExchange\n         :     :  :     :  :                    :        +- CometFilter\n         :     :  :     :  :                    :           +- CometScan parquet spark_catalog.default.date_dim\n         :     :  :     :  :                    +- CometBroadcastExchange\n         :     :  :     :  :                       +- CometFilter\n         :     :  :     :  :                          +- CometScan parquet spark_catalog.default.customer_address\n         :     :  :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :     :  :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :  :     :        +-  HashAggregate [COMET: HashAggregate is not native because the following 
children are not native (Exchange)]\n         :     :  :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :  :     :              +- CometHashAggregate\n         :     :  :     :                 +- CometProject\n         :     :  :     :                    +- CometBroadcastHashJoin\n         :     :  :     :                       :- CometProject\n         :     :  :     :                       :  +- CometBroadcastHashJoin\n         :     :  :     :                       :     :- CometFilter\n         :     :  :     :                       :     :  +- CometScan parquet spark_catalog.default.store_sales\n         :     :  :     :                       :     +- CometBroadcastExchange\n         :     :  :     :                       :        +- CometFilter\n         :     :  :     :                       :           +- CometScan parquet spark_catalog.default.date_dim\n         :     :  :     :                       +- CometBroadcastExchange\n         :     :  :     :                          +- CometFilter\n         :     :  :     :                             +- CometScan parquet spark_catalog.default.customer_address\n         :     :  :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :     :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :  :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :     :  :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :  :                 +- CometHashAggregate\n         :     :  :                    +- CometProject\n         :     :  :                       +- CometBroadcastHashJoin\n         :     :  :                          :- CometProject\n         :     :  :                          :  +- CometBroadcastHashJoin\n         :     :  :                          :     :- CometFilter\n         :     :  :                          :     :  +- CometScan parquet spark_catalog.default.store_sales\n         :     :  :                          :     +- CometBroadcastExchange\n         :     :  :                          :        +- CometFilter\n         :     :  :                          :           +- CometScan parquet spark_catalog.default.date_dim\n         :     :  :                          +- CometBroadcastExchange\n         :     :  :                             +- CometFilter\n         :     :  :                                +- CometScan parquet spark_catalog.default.customer_address\n         :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :              +- CometHashAggregate\n         :     :                 +- CometProject\n         :     :                    +- CometBroadcastHashJoin\n         :     :                       :- CometProject\n         :     :                       :  +- CometBroadcastHashJoin\n         :     :                       :     :- CometFilter\n         :     :                       :     :  +- CometScan parquet spark_catalog.default.web_sales\n         :     :                       :     +- CometBroadcastExchange\n         :     :                       :        +- CometFilter\n         :     :                       :           +- CometScan parquet spark_catalog.default.date_dim\n         :     :                       +- CometBroadcastExchange\n         :     :                          +- CometFilter\n         :     :                             +- CometScan parquet spark_catalog.default.customer_address\n         :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometFilter\n         :                          :     :  +- CometScan parquet spark_catalog.default.web_sales\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan parquet spark_catalog.default.date_dim\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan parquet spark_catalog.default.customer_address\n         +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan parquet spark_catalog.default.web_sales\n                              :     +- CometBroadcastExchange\n           
                   :        +- CometFilter\n                              :           +- CometScan parquet spark_catalog.default.date_dim\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.customer_address\n\n\nQuery: q32. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q32: ExplainInfo:\nHashAggregate\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n      +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         +- BroadcastHashJoin\n            :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n            :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     :     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :     :        +- BroadcastHashJoin\n            :     :           :-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :     :           :  :  +- Subquery\n            :     :           :  :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :     :           :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     :           :  :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :     :           :  :              +- CometProject\n            :     :           :  :                 +- CometFilter\n            :     :           :  :                    +- CometScan parquet spark_catalog.default.date_dim\n            :     :           :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :     :           +- BroadcastExchange\n            :     :              +- CometProject\n            :     :                 +- CometFilter\n            :     :                    +- CometScan parquet spark_catalog.default.item\n            :     +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n            :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                 +- CometHashAggregate\n            :                    +- CometProject\n            :                  
     +- CometBroadcastHashJoin\n            :                          :- CometFilter\n            :                          :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                          +- CometBroadcastExchange\n            :                             +- CometProject\n            :                                +- CometFilter\n            :                                   +- CometScan parquet spark_catalog.default.date_dim\n            +- BroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q33. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q33: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n         +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n            :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +- CometHashAggregate\n            :        +- CometProject\n            :           +- CometBroadcastHashJoin\n            :              :- CometProject\n            :              :  +- CometBroadcastHashJoin\n            :              :     :- CometProject\n            :              :     :  +- CometBroadcastHashJoin\n            :              :     :     :- CometFilter\n            :              :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :              :     :     +- CometBroadcastExchange\n            :              :     :        +- CometProject\n            :              :     :           +- CometFilter\n            :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :              :     +- CometBroadcastExchange\n            :              :        +- CometProject\n            :              :           +- CometFilter\n            :              :              +- CometScan parquet spark_catalog.default.customer_address\n            :              +- CometBroadcastExchange\n            :                 +- CometBroadcastHashJoin\n            :                    :- CometFilter\n            :                    :  +- CometScan parquet spark_catalog.default.item\n            :                    +- CometBroadcastExchange\n            :                       +- CometProject\n            :                          +- CometFilter\n            :                             +- CometScan parquet spark_catalog.default.item\n            :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +- CometHashAggregate\n            :       
 +- CometProject\n            :           +- CometBroadcastHashJoin\n            :              :- CometProject\n            :              :  +- CometBroadcastHashJoin\n            :              :     :- CometProject\n            :              :     :  +- CometBroadcastHashJoin\n            :              :     :     :- CometFilter\n            :              :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :              :     :     +- CometBroadcastExchange\n            :              :     :        +- CometProject\n            :              :     :           +- CometFilter\n            :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :              :     +- CometBroadcastExchange\n            :              :        +- CometProject\n            :              :           +- CometFilter\n            :              :              +- CometScan parquet spark_catalog.default.customer_address\n            :              +- CometBroadcastExchange\n            :                 +- CometBroadcastHashJoin\n            :                    :- CometFilter\n            :                    :  +- CometScan parquet spark_catalog.default.item\n            :                    +- CometBroadcastExchange\n            :                       +- CometProject\n            :                          +- CometFilter\n            :                             +- CometScan parquet spark_catalog.default.item\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometBroadcastHashJoin\n                           :     :     :- CometFilter\n                           :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n                           :     :     +- CometBroadcastExchange\n                           :     :        +- CometProject\n                           :     :           +- CometFilter\n                           :     :              +- CometScan parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.customer_address\n                           +- CometBroadcastExchange\n                              +- CometBroadcastHashJoin\n                                 :- CometFilter\n                                 :  +- CometScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q34. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q34: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      +- BroadcastHashJoin\n         :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n         :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :        +- CometHashAggregate\n         :           +- CometProject\n         :              +- CometBroadcastHashJoin\n         :                 :- CometProject\n         :                 :  +- CometBroadcastHashJoin\n         :                 :     :- CometProject\n         :                 :     :  +- CometBroadcastHashJoin\n         :                 :     :     :- CometFilter\n         :                 :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n         :                 :     :     +- CometBroadcastExchange\n         :                 :     :        +- CometProject\n         :                 :     :           +- CometFilter\n         :                 :     :              +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     +- CometBroadcastExchange\n         :                 :        +- CometProject\n         :                 :           +- CometFilter\n         :                 :              +- CometScan parquet spark_catalog.default.store\n         :                 +- CometBroadcastExchange\n         :                    +- CometProject\n         :                       +- CometFilter\n         :                          +- CometScan parquet spark_catalog.default.household_demographics\n         +- BroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan parquet spark_catalog.default.customer\n\n\nQuery: q35. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q35: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :  +- BroadcastHashJoin\n               :     :-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               :     :  +-  Filter [COMET: Filter is not native because the following children are not native (SortMergeJoin)]\n               :     :     +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :        :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :        :  :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :        :  :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :  :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :  :  :     +- CometFilter\n               :     :        :  :  :        +- CometScan parquet spark_catalog.default.customer\n               :     :        :  :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :  :        +- CometProject\n               :     :        :  :           +- CometBroadcastHashJoin\n               :     :        :  :              :- CometFilter\n               :     :        :  :              :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :        :  :              +- CometBroadcastExchange\n               :     :        :  :                 +- CometProject\n               :     :        :  :                    +- CometFilter\n               :     :        :  :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     :        :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :        +- CometProject\n               :     :        :           +- CometBroadcastHashJoin\n               :     :        :              :- CometFilter\n               :     :        :              :  +- CometScan 
parquet spark_catalog.default.web_sales\n               :     :        :              +- CometBroadcastExchange\n               :     :        :                 +- CometProject\n               :     :        :                    +- CometFilter\n               :     :        :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     :        +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :              +- CometProject\n               :     :                 +- CometBroadcastHashJoin\n               :     :                    :- CometFilter\n               :     :                    :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :                    +- CometBroadcastExchange\n               :     :                       +- CometProject\n               :     :                          +- CometFilter\n               :     :                             +- CometScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.customer_address\n               +- BroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.customer_demographics\n\n\nQuery: q36. Comet Exec: Enabled (CometHashAggregate, CometExpand, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q36: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometExpand\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan parquet spark_catalog.default.date_dim\n                              :     +- 
CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q37. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q37: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometBroadcastExchange\n               :     :     :  +- CometProject\n               :     :     :     +- CometFilter\n               :     :     :        +- CometScan parquet spark_catalog.default.item\n               :     :     +- CometProject\n               :     :        +- CometFilter\n               :     :           +- CometScan parquet spark_catalog.default.inventory\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.date_dim\n               +- CometBroadcastExchange\n                  +- CometFilter\n                     +- CometScan parquet spark_catalog.default.catalog_sales\n\n\nQuery: q38. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q38: ExplainInfo:\nHashAggregate\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n      +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n            :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :  :           +- CometHashAggregate\n            :  :              +- CometProject\n            :  :                 +- CometBroadcastHashJoin\n            :  :                    :- CometProject\n            :  :                    :  +- CometBroadcastHashJoin\n            :  :                    :     :- CometFilter\n            :  :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :  :                    :     +- CometBroadcastExchange\n            :  :                    :        +- CometProject\n            :  :                    :           +- CometFilter\n            :  :                    :              +- CometScan parquet spark_catalog.default.date_dim\n            :  :                    +- CometBroadcastExchange\n            :  :                       +- CometProject\n            :  :                          +- CometFilter\n            :  :                             +- CometScan parquet spark_catalog.default.customer\n            :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :              +- CometHashAggregate\n            :                 +- CometProject\n            :                    +- CometBroadcastHashJoin\n            :                       :- CometProject\n            :                       :  +- CometBroadcastHashJoin\n            :                       :     :- CometFilter\n            :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                       :     +- 
CometBroadcastExchange\n            :                       :        +- CometProject\n            :                       :           +- CometFilter\n            :                       :              +- CometScan parquet spark_catalog.default.date_dim\n            :                       +- CometBroadcastExchange\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometScan parquet spark_catalog.default.customer\n            +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometFilter\n                                 :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan parquet spark_catalog.default.customer\n\n\nQuery: q39a. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q39a: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                 +- CometHashAggregate\n      :                    +- CometProject\n      :                       +- CometBroadcastHashJoin\n      :                          :- CometProject\n      :                          :  +- CometBroadcastHashJoin\n      :                          :     :- CometProject\n      :                          :     :  +- CometBroadcastHashJoin\n      :                          :     :     :- CometFilter\n      :                          :     :     :  +- CometScan parquet spark_catalog.default.inventory\n      :                          :     :     +- CometBroadcastExchange\n      :                          :     :        +- CometFilter\n      :                          :     :           +- CometScan parquet spark_catalog.default.item\n      :                          :     +- CometBroadcastExchange\n      :                          :        +- CometFilter\n      :                          :           +- CometScan parquet spark_catalog.default.warehouse\n      :                          +- CometBroadcastExchange\n      :                             +- CometProject\n      :                                +- CometFilter\n      :                                   +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- 
CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan parquet spark_catalog.default.inventory\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan parquet spark_catalog.default.item\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometFilter\n                                 :           +- CometScan parquet spark_catalog.default.warehouse\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q39b. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q39b: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                 +- CometHashAggregate\n      :                    +- CometProject\n      :                       +- CometBroadcastHashJoin\n      :                          :- CometProject\n      :                          :  +- CometBroadcastHashJoin\n      :                          :     :- CometProject\n      :                          :     :  +- CometBroadcastHashJoin\n      :                          :     :     :- CometFilter\n      :                          :     :     :  +- CometScan parquet spark_catalog.default.inventory\n      :                          :     :     +- CometBroadcastExchange\n      :                          :     :        +- CometFilter\n      :                          :     :           +- CometScan parquet spark_catalog.default.item\n      :                          :     +- CometBroadcastExchange\n      :                          :        +- CometFilter\n      :                          :           +- CometScan parquet spark_catalog.default.warehouse\n      :                          +- 
CometBroadcastExchange\n      :                             +- CometProject\n      :                                +- CometFilter\n      :                                   +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan parquet spark_catalog.default.inventory\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan parquet spark_catalog.default.item\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometFilter\n                                 :           +- CometScan parquet spark_catalog.default.warehouse\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q40. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q40: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: ]\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometFilter\n               :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometFilter\n               :     :     :           +- CometScan parquet spark_catalog.default.catalog_returns\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.warehouse\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.item\n               +- CometBroadcastExchange\n                  +- CometFilter\n                     +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q41. Comet Exec: Enabled (CometFilter, CometProject)\nQuery: q41: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n               :- CometProject\n               :  +- CometFilter\n               :     +- CometScan parquet spark_catalog.default.item\n               +- BroadcastExchange\n                  +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                        +- HashAggregate\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                                    +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n                                       :  +- Subquery\n                                       :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                                       :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                                       :              +- CometProject\n                                       :                 +- CometFilter\n                                       :                    +- CometScan parquet spark_catalog.default.item\n                                       +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q42. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q42: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometBroadcastExchange\n               :     :  +- CometProject\n               :     :     +- CometFilter\n               :     :        +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometFilter\n               :        +- CometScan parquet spark_catalog.default.store_sales\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q43. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q43: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometBroadcastExchange\n               :     :  +- CometProject\n               :     :     +- CometFilter\n               :     :        +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometFilter\n               :        +- CometScan parquet spark_catalog.default.store_sales\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q44. 
Comet Exec: Enabled (CometHashAggregate, CometFilter, CometProject)\nQuery: q44: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   +- BroadcastHashJoin\n      :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :  +- BroadcastHashJoin\n      :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :     :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (Project, BroadcastExchange)]\n      :     :     :-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :     :     :  +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      :     :     :     +-  Window [COMET: Window is not supported]\n      :     :     :        +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :              +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :     :                 :  +- Subquery\n      :     :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :                 :           +- CometHashAggregate\n      :     :     :                 :              +- CometProject\n      :     :     :                 :                 +- CometFilter\n      :     :     :                 :                    +- CometScan parquet spark_catalog.default.store_sales\n      :     :     :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :                       +- CometHashAggregate\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometScan parquet spark_catalog.default.store_sales\n      :     :     +- BroadcastExchange\n      :     :        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :     :           +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      :     :              +-  Window [COMET: Window is not supported]\n      :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                       +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :                          :  +- Subquery\n      :     :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                          :           +- CometHashAggregate\n      :     :                          :              +- CometProject\n      :     :                          :                 +- CometFilter\n      :     :                          :                    +- CometScan parquet spark_catalog.default.store_sales\n      :     :                          +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                                +- CometHashAggregate\n      :     :                                   +- CometProject\n      :     :                                      +- CometFilter\n      :     :                                         +- CometScan parquet spark_catalog.default.store_sales\n      :     +- BroadcastExchange\n      :        +- CometProject\n      :           +- CometFilter\n      :              +- CometScan parquet spark_catalog.default.item\n      +- BroadcastExchange\n         +- CometProject\n            +- CometFilter\n               +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q45. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q45: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n            +-  Filter [COMET: Filter is not native because the following children are not native (BroadcastHashJoin)]\n               +- BroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometFilter\n                  :     :     :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometFilter\n                  :     :     :     :           +- CometScan parquet spark_catalog.default.customer\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan parquet spark_catalog.default.customer_address\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.item\n                  +- BroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q46. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q46: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   +- BroadcastHashJoin\n      :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :  +- BroadcastHashJoin\n      :     :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     +- CometHashAggregate\n      :     :        +- CometProject\n      :     :           +- CometBroadcastHashJoin\n      :     :              :- CometProject\n      :     :              :  +- CometBroadcastHashJoin\n      :     :              :     :- CometProject\n      :     :              :     :  +- CometBroadcastHashJoin\n      :     :              :     :     :- CometProject\n      :     :              :     :     :  +- CometBroadcastHashJoin\n      :     :              :     :     :     :- CometFilter\n      :     :              :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n      :     :              :     :     :     +- CometBroadcastExchange\n      :     :              :     :     :        +- CometProject\n      :     :              :     :     :           +- CometFilter\n      :     :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n      :     :              :     :     +- CometBroadcastExchange\n      :     :              :     :        +- CometProject\n      :     :              :     :           +- CometFilter\n      :     :              :     :              +- CometScan parquet spark_catalog.default.store\n      :     :              :     +- CometBroadcastExchange\n      :     :              :        +- CometProject\n      :     :              :           +- CometFilter\n      :     :              :              +- CometScan parquet spark_catalog.default.household_demographics\n      :     :              +- CometBroadcastExchange\n      :     :                 +- CometFilter\n      :     :                    +- CometScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometProject\n      :           +- CometFilter\n      :              +- CometScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometFilter\n            +- CometScan parquet spark_catalog.default.customer_address\n\n\nQuery: q47. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q47: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :     :        +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      :     :           +-  Window [COMET: Window is not supported]\n      :     :              +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      :     :                 +-  Window [COMET: Window is not supported]\n      :     :                    +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                          +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                                +- CometHashAggregate\n      :     :                                   +- CometProject\n      :     :                                      +- CometBroadcastHashJoin\n      :     :                                         :- CometProject\n      :     :                                         :  +- CometBroadcastHashJoin\n      :     :                                         :     :- CometProject\n      :     :                                         :     :  +- CometBroadcastHashJoin\n      :     :                                         :     :     :- CometBroadcastExchange\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- CometFilter\n      :     :                                         :     :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :                                         :     +- CometBroadcastExchange\n      :     :                                         :        +- CometFilter\n      :     :                                         :           +- CometScan parquet 
spark_catalog.default.date_dim\n      :     :                                         +- CometBroadcastExchange\n      :     :                                            +- CometFilter\n      :     :                                               +- CometScan parquet spark_catalog.default.store\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  Project [COMET: Project is not native because the following children are not native (Window)]\n      :              +-  Window [COMET: Window is not supported]\n      :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometBroadcastExchange\n      :                                      :     :     :  +- CometProject\n      :                                      :     :     :     +- CometFilter\n      :                                      :     :     :        +- CometScan parquet spark_catalog.default.item\n      :                                      :     :     +- CometFilter\n      :                                      :     :        +- CometScan parquet spark_catalog.default.store_sales\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan parquet spark_catalog.default.store\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Project [COMET: Project is not native because the following children are not native (Window)]\n               +-  Window [COMET: Window is not supported]\n                  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet 
shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometBroadcastExchange\n                                       :     :     :  +- CometProject\n                                       :     :     :     +- CometFilter\n                                       :     :     :        +- CometScan parquet spark_catalog.default.item\n                                       :     :     +- CometFilter\n                                       :     :        +- CometScan parquet spark_catalog.default.store_sales\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q48. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q48: ExplainInfo:\n HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- CometHashAggregate\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometBroadcastHashJoin\n            :     :     :- CometProject\n            :     :     :  +- CometBroadcastHashJoin\n            :     :     :     :- CometFilter\n            :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :     :     :     +- CometBroadcastExchange\n            :     :     :        +- CometFilter\n            :     :     :           +- CometScan parquet spark_catalog.default.store\n            :     :     +- CometBroadcastExchange\n            :     :        +- CometProject\n            :     :           +- CometFilter\n            :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometScan parquet spark_catalog.default.customer_address\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q49. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q49: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n         +-  Union [COMET: Union is not enabled because the following children are not native (Project, Project, Project)]\n            :-  Project [COMET: Project is not native because the following children are not native (Filter)]\n            :  +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n            :     +-  Window [COMET: Window is not supported]\n            :        +-  Sort [COMET: Sort is not native because the following children are not native (Window)]\n            :           +-  Window [COMET: Window is not supported]\n            :              +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    +- HashAggregate\n            :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                          +-  HashAggregate [COMET: Comet does not guarantee 
correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                             +- CometProject\n            :                                +- CometBroadcastHashJoin\n            :                                   :- CometProject\n            :                                   :  +- CometBroadcastHashJoin\n            :                                   :     :- CometProject\n            :                                   :     :  +- CometFilter\n            :                                   :     :     +- CometScan parquet spark_catalog.default.web_sales\n            :                                   :     +- CometBroadcastExchange\n            :                                   :        +- CometFilter\n            :                                   :           +- CometScan parquet spark_catalog.default.web_returns\n            :                                   +- CometBroadcastExchange\n            :                                      +- CometProject\n            :                                         +- CometFilter\n            :                                            +- CometScan parquet spark_catalog.default.date_dim\n            :-  Project [COMET: Project is not native because the following children are not native (Filter)]\n            :  +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n            :     +-  Window [COMET: Window is not supported]\n            :        +-  Sort [COMET: Sort is not native because the following children are not native (Window)]\n            :           +-  Window [COMET: Window is not supported]\n            :              +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    +- HashAggregate\n            :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                          +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                             +- CometProject\n            :                                +- CometBroadcastHashJoin\n            :                                   :- CometProject\n            :                                   :  +- CometBroadcastHashJoin\n            :                                   :     :- CometProject\n            :                                   :     :  +- CometFilter\n            :                                   :     :     +- CometScan parquet spark_catalog.default.catalog_sales\n            :                                   :     +- CometBroadcastExchange\n            :                                   :        +- CometFilter\n            :                                   :           +- CometScan parquet spark_catalog.default.catalog_returns\n            :                                   +- CometBroadcastExchange\n            :                                      +- CometProject\n            :                                         +- CometFilter\n            :                                            +- CometScan parquet spark_catalog.default.date_dim\n            +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n                  +-  Window [COMET: Window is not supported]\n                     +-  Sort [COMET: Sort is not native because the following children are not native (Window)]\n                        +-  Window [COMET: Window is not supported]\n                           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 +- HashAggregate\n                                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                                          +- CometProject\n                                             +- CometBroadcastHashJoin\n                                                :- CometProject\n                                                :  +- CometBroadcastHashJoin\n                                                :     :- CometProject\n                                                :     :  +- CometFilter\n                                                :     :     +- CometScan parquet spark_catalog.default.store_sales\n                                                :     +- CometBroadcastExchange\n                                                :        +- CometFilter\n                                                :           +- CometScan parquet spark_catalog.default.store_returns\n                                                +- CometBroadcastExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q50. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q50: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometFilter\n               :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometFilter\n               :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.store\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan parquet spark_catalog.default.date_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q51. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q51: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n               +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     +-  Project [COMET: Project is not native because the following children are not native (Window)]\n                  :        +-  Window [COMET: Window is not supported]\n                  :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :                       +- CometHashAggregate\n                  :                          +- CometProject\n                  :                             +- CometBroadcastHashJoin\n                  :                                :- CometFilter\n                  :                                :  +- CometScan parquet spark_catalog.default.web_sales\n                  :                                +- CometBroadcastExchange\n                  :                                   +- CometProject\n                  :                                      +- CometFilter\n                  :                                         +- CometScan parquet spark_catalog.default.date_dim\n                  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +-  Project [COMET: Project is not native because the following children are not native (Window)]\n                           +-  Window [COMET: Window is not supported]\n                              +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometFilter\n                                                   :  +- CometScan parquet spark_catalog.default.store_sales\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q52. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q52: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometBroadcastExchange\n               :     :  +- CometProject\n               :     :     +- CometFilter\n               :     :        +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometFilter\n               :        +- CometScan parquet spark_catalog.default.store_sales\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q53. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q53: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Filter)]\n   +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      +-  Window [COMET: Window is not supported]\n         +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometBroadcastExchange\n                              :     :     :  +- CometProject\n                              :     :     :     +- CometFilter\n                              :     :     :        +- CometScan parquet spark_catalog.default.item\n                              :     :     +- CometFilter\n                              :     :        +- CometScan parquet spark_catalog.default.store_sales\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.date_dim\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q54. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject, CometUnion)\nQuery: q54: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n         +- HashAggregate\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  +- BroadcastHashJoin\n                     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                     :  +- BroadcastHashJoin\n                     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                     :     :  +- BroadcastHashJoin\n                     :     :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                     :     :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                     :     :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                     :     :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     :     :     :     :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     :     :     :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     :     :     :     :           +- CometHashAggregate\n                     :     :     :     :              +- CometProject\n                     :     :     :     :                 +- CometBroadcastHashJoin\n                     :     :     :     :                    :- CometProject\n                     :     :     :     :                    :  +- CometBroadcastHashJoin\n                     :     :     :     :                    :     :- CometProject\n                     :     :     :     :                    :     :  +- CometBroadcastHashJoin\n                     :     :     :     :                    :     :     :- CometUnion\n                     :     :     :     :                    :     :     :  :- CometProject\n                     :     :     :     :                    :     :     :  :  +- CometFilter\n                     :     :     :     :                    :     :     :  :     +- CometScan parquet spark_catalog.default.catalog_sales\n                     :     :     :     :                    :     :     :  +- CometProject\n                     :     :     :     :                    :     :     :     +- CometFilter\n                     :     :     :     :                    :     :     :        +- CometScan parquet spark_catalog.default.web_sales\n                     : 
    :     :     :                    :     :     +- CometBroadcastExchange\n                     :     :     :     :                    :     :        +- CometProject\n                     :     :     :     :                    :     :           +- CometFilter\n                     :     :     :     :                    :     :              +- CometScan parquet spark_catalog.default.item\n                     :     :     :     :                    :     +- CometBroadcastExchange\n                     :     :     :     :                    :        +- CometProject\n                     :     :     :     :                    :           +- CometFilter\n                     :     :     :     :                    :              +- CometScan parquet spark_catalog.default.date_dim\n                     :     :     :     :                    +- CometBroadcastExchange\n                     :     :     :     :                       +- CometFilter\n                     :     :     :     :                          +- CometScan parquet spark_catalog.default.customer\n                     :     :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                     :     :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     :     :     :           +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n                     :     :     :              :  +- Subquery\n                     :     :     :              :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                     :     :     :              :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     :     :     :              :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                     :     :     :              :              +- CometProject\n                     :     :     :              :                 +- CometFilter\n                     :     :     :              :                    :  :- Subquery\n                     :     :     :              :                    :  :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     :     :     :              :                    :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     :     :     :              :                    :  :        +- CometHashAggregate\n                     :     :     :              :                    :  :           +- CometProject\n                     :     :     :              :                    :  :              +- CometFilter\n                     :     :     :              :                    :  :                 +- CometScan parquet spark_catalog.default.date_dim\n                     :     :     :              :                    :  +- Subquery\n                     :     :     :              :                    :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     :     :     : 
             :                    :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     :     :     :              :                    :           +- CometHashAggregate\n                     :     :     :              :                    :              +- CometProject\n                     :     :     :              :                    :                 +- CometFilter\n                     :     :     :              :                    :                    +- CometScan parquet spark_catalog.default.date_dim\n                     :     :     :              :                    +- CometScan parquet spark_catalog.default.date_dim\n                     :     :     :              +- CometScan parquet spark_catalog.default.store_sales\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan parquet spark_catalog.default.customer_address\n                     :     +- BroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan parquet spark_catalog.default.store\n                     +- BroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              :  :- Subquery\n                              :  :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                              :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              :  :        +- CometHashAggregate\n                              :  :           +- CometProject\n                              :  :              +- CometFilter\n                              :  :                 +- CometScan parquet spark_catalog.default.date_dim\n                              :  +- Subquery\n                              :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                              :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              :           +- CometHashAggregate\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometScan parquet spark_catalog.default.date_dim\n                              +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q55. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q55: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometBroadcastExchange\n               :     :  +- CometProject\n               :     :     +- CometFilter\n               :     :        +- CometScan parquet spark_catalog.default.date_dim\n               :     +- CometFilter\n               :        +- CometScan parquet spark_catalog.default.store_sales\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q56. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q56: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n         +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n            :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +- CometHashAggregate\n            :        +- CometProject\n            :           +- CometBroadcastHashJoin\n            :              :- CometProject\n            :              :  +- CometBroadcastHashJoin\n            :              :     :- CometProject\n            :              :     :  +- CometBroadcastHashJoin\n            :              :     :     :- CometFilter\n            :              :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :              :     :     +- CometBroadcastExchange\n            :              :     :        +- CometProject\n            :              :     :           +- CometFilter\n            :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :              :     +- CometBroadcastExchange\n            :              :        +- CometProject\n            :              :           +- CometFilter\n            :              :              +- CometScan parquet spark_catalog.default.customer_address\n            :              +- CometBroadcastExchange\n            :                 +- CometProject\n            :                    +- CometBroadcastHashJoin\n            :                       :- CometFilter\n            :                       :  +- CometScan parquet spark_catalog.default.item\n            :                       +- 
CometBroadcastExchange\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometScan parquet spark_catalog.default.item\n            :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +- CometHashAggregate\n            :        +- CometProject\n            :           +- CometBroadcastHashJoin\n            :              :- CometProject\n            :              :  +- CometBroadcastHashJoin\n            :              :     :- CometProject\n            :              :     :  +- CometBroadcastHashJoin\n            :              :     :     :- CometFilter\n            :              :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :              :     :     +- CometBroadcastExchange\n            :              :     :        +- CometProject\n            :              :     :           +- CometFilter\n            :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :              :     +- CometBroadcastExchange\n            :              :        +- CometProject\n            :              :           +- CometFilter\n            :              :              +- CometScan parquet spark_catalog.default.customer_address\n            :              +- CometBroadcastExchange\n            :                 +- CometProject\n            :                    +- CometBroadcastHashJoin\n            :                       :- CometFilter\n            :                       :  +- CometScan parquet spark_catalog.default.item\n            :                       +- CometBroadcastExchange\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometScan parquet spark_catalog.default.item\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometBroadcastHashJoin\n                           :     :     :- CometFilter\n                           :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n                           :     :     +- CometBroadcastExchange\n                           :     :        +- CometProject\n                           :     :           +- CometFilter\n                           :     :              +- CometScan parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.customer_address\n                       
    +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometFilter\n                                    :  +- CometScan parquet spark_catalog.default.item\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q57. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q57: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :     :        +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      :     :           +-  Window [COMET: Window is not supported]\n      :     :              +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      :     :                 +-  Window [COMET: Window is not supported]\n      :     :                    +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                          +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                                +- CometHashAggregate\n      :     :                                   +- CometProject\n      :     :                                      +- CometBroadcastHashJoin\n      :     :                                         :- CometProject\n      :     :                                         :  +- CometBroadcastHashJoin\n      :     :                                         :     :- CometProject\n      :     :                                         :     :  +- CometBroadcastHashJoin\n      :     :                                         :     :     :- CometBroadcastExchange\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                 
                        :     :     :        +- CometScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- CometFilter\n      :     :                                         :     :        +- CometScan parquet spark_catalog.default.catalog_sales\n      :     :                                         :     +- CometBroadcastExchange\n      :     :                                         :        +- CometFilter\n      :     :                                         :           +- CometScan parquet spark_catalog.default.date_dim\n      :     :                                         +- CometBroadcastExchange\n      :     :                                            +- CometFilter\n      :     :                                               +- CometScan parquet spark_catalog.default.call_center\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  Project [COMET: Project is not native because the following children are not native (Window)]\n      :              +-  Window [COMET: Window is not supported]\n      :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometBroadcastExchange\n      :                                      :     :     :  +- CometProject\n      :                                      :     :     :     +- CometFilter\n      :                                      :     :     :        +- CometScan parquet spark_catalog.default.item\n      :                                      :     :     +- CometFilter\n      :                                      :     :        +- CometScan parquet spark_catalog.default.catalog_sales\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan parquet spark_catalog.default.call_center\n      +-  Sort [COMET: Sort is not native 
because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Project [COMET: Project is not native because the following children are not native (Window)]\n               +-  Window [COMET: Window is not supported]\n                  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometBroadcastExchange\n                                       :     :     :  +- CometProject\n                                       :     :     :     +- CometFilter\n                                       :     :     :        +- CometScan parquet spark_catalog.default.item\n                                       :     :     +- CometFilter\n                                       :     :        +- CometScan parquet spark_catalog.default.catalog_sales\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan parquet spark_catalog.default.call_center\n\n\nQuery: q58. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q58: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n      :     :  +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :           +- CometHashAggregate\n      :     :              +- CometProject\n      :     :                 +- CometBroadcastHashJoin\n      :     :                    :- CometProject\n      :     :                    :  +- CometBroadcastHashJoin\n      :     :                    :     :- CometFilter\n      :     :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n      :     :                    :     +- CometBroadcastExchange\n      :     :                    :        +- CometProject\n      :     :                    :           +- CometFilter\n      :     :                    :              +- CometScan parquet spark_catalog.default.item\n      :     :                    +- CometBroadcastExchange\n      :     :                       +- CometProject\n      :     :                          +- CometBroadcastHashJoin\n      :     :                             :- CometFilter\n      :     :                             :  +- CometScan parquet spark_catalog.default.date_dim\n      :     :                             +- CometBroadcastExchange\n      :     :                                +- CometProject\n      :     :                                   +- CometFilter\n      :     :                                      :  +- Subquery\n      :     :                                      :     +- CometProject\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan parquet spark_catalog.default.date_dim\n      :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n      :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                 +- CometHashAggregate\n      :                    +- CometProject\n      :                       +- 
CometBroadcastHashJoin\n      :                          :- CometProject\n      :                          :  +- CometBroadcastHashJoin\n      :                          :     :- CometFilter\n      :                          :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n      :                          :     +- CometBroadcastExchange\n      :                          :        +- CometProject\n      :                          :           +- CometFilter\n      :                          :              +- CometScan parquet spark_catalog.default.item\n      :                          +- CometBroadcastExchange\n      :                             +- CometProject\n      :                                +- CometBroadcastHashJoin\n      :                                   :- CometFilter\n      :                                   :  +- CometScan parquet spark_catalog.default.date_dim\n      :                                   +- CometBroadcastExchange\n      :                                      +- CometProject\n      :                                         +- CometFilter\n      :                                            :  +- Subquery\n      :                                            :     +- CometProject\n      :                                            :        +- CometFilter\n      :                                            :           +- CometScan parquet spark_catalog.default.date_dim\n      :                                            +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n         +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometFilter\n                           :     :  +- CometScan parquet spark_catalog.default.web_sales\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.item\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometFilter\n                                    :  +- CometScan parquet spark_catalog.default.date_dim\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             :  +- Subquery\n                                             :     +- CometProject\n                                             :        +- CometFilter\n                                             :           +- CometScan parquet spark_catalog.default.date_dim\n                                             +- CometScan parquet 
spark_catalog.default.date_dim\n\n\nQuery: q59. Comet Exec: Enabled (CometFilter, CometProject)\nQuery: q59: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :        +- BroadcastHashJoin\n      :           :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :           :  +- BroadcastHashJoin\n      :           :     :- HashAggregate\n      :           :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           :     :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n      :           :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :           :     :           +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n      :           :     :              :- CometFilter\n      :           :     :              :  +- CometScan parquet spark_catalog.default.store_sales\n      :           :     :              +- BroadcastExchange\n      :           :     :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :           :     :                    +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n      :           :     :                       :  +- Subquery\n      :           :     :                       :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n      :           :     :                       :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           :     :                       :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n      :           :     :                       :              +- CometProject\n      :           :     :                       :                 +- CometFilter\n      :           :     :                       :                    +- CometScan parquet spark_catalog.default.date_dim\n      :           :     :                       +- CometScan parquet spark_catalog.default.date_dim\n      :           :     +- BroadcastExchange\n      :           :        +- CometProject\n      :           :           +- CometFilter\n      :           :              +- CometScan parquet spark_catalog.default.store\n      :           +- BroadcastExchange\n      :              +- CometProject\n      :                 +- CometFilter\n      :                    +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               +- BroadcastHashJoin\n                  :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  :  +- BroadcastHashJoin\n                  :     :- HashAggregate\n                  :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                  :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  :     :           +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                  :     :              :- CometFilter\n                  :     :              :  +- CometScan parquet spark_catalog.default.store_sales\n                  :     :              +- BroadcastExchange\n                  :     :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                  :     :                    +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n                  :     :                       :  +- Subquery\n                  :     :                       :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                  :     :                       :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :                       :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                  :     :                       :              +- CometProject\n                  :     :                       :                 +- CometFilter\n                  :     :                       :                    +- CometScan parquet spark_catalog.default.date_dim\n                  :     :                       +- CometScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q60. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q60: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n         +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n            :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +- CometHashAggregate\n            :        +- CometProject\n            :           +- CometBroadcastHashJoin\n            :              :- CometProject\n            :              :  +- CometBroadcastHashJoin\n            :              :     :- CometProject\n            :              :     :  +- CometBroadcastHashJoin\n            :              :     :     :- CometFilter\n            :              :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :              :     :     +- CometBroadcastExchange\n            :              :     :        +- CometProject\n            :              :     :           +- CometFilter\n            :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :              :     +- CometBroadcastExchange\n            :              :        +- CometProject\n            :              :           +- CometFilter\n            :              :              +- CometScan parquet spark_catalog.default.customer_address\n            :              +- CometBroadcastExchange\n            :                 +- CometProject\n            :      
              +- CometBroadcastHashJoin\n            :                       :- CometFilter\n            :                       :  +- CometScan parquet spark_catalog.default.item\n            :                       +- CometBroadcastExchange\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometScan parquet spark_catalog.default.item\n            :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +- CometHashAggregate\n            :        +- CometProject\n            :           +- CometBroadcastHashJoin\n            :              :- CometProject\n            :              :  +- CometBroadcastHashJoin\n            :              :     :- CometProject\n            :              :     :  +- CometBroadcastHashJoin\n            :              :     :     :- CometFilter\n            :              :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :              :     :     +- CometBroadcastExchange\n            :              :     :        +- CometProject\n            :              :     :           +- CometFilter\n            :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :              :     +- CometBroadcastExchange\n            :              :        +- CometProject\n            :              :           +- CometFilter\n            :              :              +- CometScan parquet spark_catalog.default.customer_address\n            :              +- CometBroadcastExchange\n            :                 +- CometProject\n            :                    +- CometBroadcastHashJoin\n            :                       :- CometFilter\n            :                       :  +- CometScan parquet spark_catalog.default.item\n            :                       +- CometBroadcastExchange\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometScan parquet spark_catalog.default.item\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometBroadcastHashJoin\n                           :     :     :- CometFilter\n                           :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n                           :     :     +- CometBroadcastExchange\n                           :     :        +- CometProject\n                           :     :           +- CometFilter\n                           :     :              +- CometScan parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n              
             :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.customer_address\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometFilter\n                                    :  +- CometScan parquet spark_catalog.default.item\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q61. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q61: ExplainInfo:\n Project [COMET: Project is not native because the following children are not native (BroadcastNestedLoopJoin)]\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (HashAggregate, BroadcastExchange)]\n   :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +- CometHashAggregate\n   :        +- CometProject\n   :           +- CometBroadcastHashJoin\n   :              :- CometProject\n   :              :  +- CometBroadcastHashJoin\n   :              :     :- CometProject\n   :              :     :  +- CometBroadcastHashJoin\n   :              :     :     :- CometProject\n   :              :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :- CometProject\n   :              :     :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :     :- CometProject\n   :              :     :     :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :     :     :- CometFilter\n   :              :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :              :     :     :     :     :     +- CometBroadcastExchange\n   :              :     :     :     :     :        +- CometProject\n   :              :     :     :     :     :           +- CometFilter\n   :              :     :     :     :     :              +- CometScan parquet spark_catalog.default.store\n   :              :     :     :     :     +- CometBroadcastExchange\n   :              :     :     :     :        +- CometProject\n   :              :     :     :     :           +- CometFilter\n   :              :     :     :     :              +- CometScan parquet spark_catalog.default.promotion\n   :              :     :     :     +- CometBroadcastExchange\n   :              :     :     :        +- CometProject\n   :              :     :     :           +- CometFilter\n   :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n   :              :     :     +- CometBroadcastExchange\n   :              :     :        +- CometFilter\n   :              :     :           +- CometScan parquet spark_catalog.default.customer\n   :              :     +- CometBroadcastExchange\n   :              :        +- CometProject\n   :              :           +- CometFilter\n   :              :              +- CometScan 
parquet spark_catalog.default.customer_address\n   :              +- CometBroadcastExchange\n   :                 +- CometProject\n   :                    +- CometFilter\n   :                       +- CometScan parquet spark_catalog.default.item\n   +- BroadcastExchange\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :- CometFilter\n                     :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :        +- CometProject\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometScan parquet spark_catalog.default.store\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometProject\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometFilter\n                     :     :           +- CometScan parquet spark_catalog.default.customer\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan parquet spark_catalog.default.customer_address\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q62. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q62: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometFilter\n               :     :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometFilter\n               :     :     :           +- CometScan parquet spark_catalog.default.warehouse\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.ship_mode\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan parquet spark_catalog.default.web_site\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q63. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q63: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Filter)]\n   +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      +-  Window [COMET: Window is not supported]\n         +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometBroadcastExchange\n                              :     :     :  +- CometProject\n                              :     :     :     +- CometFilter\n                              :     :     :        +- CometScan parquet spark_catalog.default.item\n                              :     :     +- CometFilter\n                              :     :        +- CometScan parquet spark_catalog.default.store_sales\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.date_dim\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q64. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q64: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n         :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     +- HashAggregate\n         :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :              +- BroadcastHashJoin\n         :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :  +- BroadcastHashJoin\n         :                 :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :  +- BroadcastHashJoin\n         :                 :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native 
(BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :- Subquery\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :  +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :        +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :           +- CometProject\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :              +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :                 +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  +- Subquery\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :              +- CometProject\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :                 +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :                    +- CometScan parquet spark_catalog.default.item\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometScan parquet spark_catalog.default.store_sales\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan parquet 
spark_catalog.default.store_returns\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometBroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometScan parquet spark_catalog.default.catalog_sales\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometBroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometScan parquet spark_catalog.default.catalog_returns\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan parquet 
spark_catalog.default.store\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.customer\n         :                 :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :        +- CometProject\n         :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n         :                 :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :        +- CometProject\n         :                 :     :     :     :     :     :     :     :           +- CometFilter\n         :                 :     :     :     :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n         :                 :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.promotion\n         :                 :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.household_demographics\n         :                 :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :           +- CometScan parquet spark_catalog.default.household_demographics\n         :                 :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :        +- CometProject\n         :                 :     :     :     :           +- CometFilter\n         :                 :     :     :     :              +- CometScan parquet spark_catalog.default.customer_address\n         :                 :     :     :     +- BroadcastExchange\n         :                 :     :     :        +- CometProject\n         :                 :     :     :           +- CometFilter\n         :                 :     :     :              
+- CometScan parquet spark_catalog.default.customer_address\n         :                 :     :     +- BroadcastExchange\n         :                 :     :        +- CometFilter\n         :                 :     :           +- CometScan parquet spark_catalog.default.income_band\n         :                 :     +- BroadcastExchange\n         :                 :        +- CometFilter\n         :                 :           +- CometScan parquet spark_catalog.default.income_band\n         :                 +- BroadcastExchange\n         :                    +- CometProject\n         :                       +- CometFilter\n         :                          +- CometScan parquet spark_catalog.default.item\n         +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +- HashAggregate\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        +- BroadcastHashJoin\n                           :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :  +- BroadcastHashJoin\n                           :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :  +- BroadcastHashJoin\n                           :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :  +- BroadcastHashJoin\n                           :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :- 
 Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :- Subquery\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :  +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :        +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :           +- CometProject\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :              +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :                 +- CometScan parquet spark_catalog.default.date_dim\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  +- Subquery\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :              +- CometProject\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :                 +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :                    +- CometScan parquet spark_catalog.default.item\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometScan parquet spark_catalog.default.store_sales\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan parquet 
spark_catalog.default.store_returns\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometBroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometScan parquet spark_catalog.default.catalog_sales\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometBroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometScan parquet spark_catalog.default.catalog_returns\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                           :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan parquet 
spark_catalog.default.store\n                           :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.customer\n                           :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n                           :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n                           :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :        +- CometProject\n                           :     :     :     :     :     :     :     :     :           +- CometFilter\n                           :     :     :     :     :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n                           :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :        +- CometProject\n                           :     :     :     :     :     :     :     :           +- CometFilter\n                           :     :     :     :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n                           :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.promotion\n                           :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.household_demographics\n                           :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :           +- CometScan parquet spark_catalog.default.household_demographics\n                           :     :     :     :     +- BroadcastExchange\n                           :     :     :     :        +- CometProject\n                           :     :     :     :           +- CometFilter\n                           :     :     :     :              +- CometScan parquet spark_catalog.default.customer_address\n                           :     :     :     +- BroadcastExchange\n                           :     :     :        +- CometProject\n                           :     :     :           +- CometFilter\n                           :     :     :              
+- CometScan parquet spark_catalog.default.customer_address\n                           :     :     +- BroadcastExchange\n                           :     :        +- CometFilter\n                           :     :           +- CometScan parquet spark_catalog.default.income_band\n                           :     +- BroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan parquet spark_catalog.default.income_band\n                           +- BroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q65. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q65: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :        +- BroadcastHashJoin\n      :           :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :           :  +- BroadcastHashJoin\n      :           :     :- BroadcastExchange\n      :           :     :  +- CometFilter\n      :           :     :     +- CometScan parquet spark_catalog.default.store\n      :           :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :           :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :           :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           :              +- CometHashAggregate\n      :           :                 +- CometProject\n      :           :                    +- CometBroadcastHashJoin\n      :           :                       :- CometFilter\n      :           :                       :  +- CometScan parquet spark_catalog.default.store_sales\n      :           :                       +- CometBroadcastExchange\n      :           :                          +- CometProject\n      :           :                             +- CometFilter\n      :           :                                +- CometScan parquet spark_catalog.default.date_dim\n      :           +- BroadcastExchange\n      :              +- CometProject\n      :                 +- CometFilter\n      :                    +- CometScan parquet spark_catalog.default.item\n      +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n         +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            +- HashAggregate\n               +-  Exchange [COMET: Comet shuffle is not enabled: 
spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           +- CometHashAggregate\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometFilter\n                                    :  +- CometScan parquet spark_catalog.default.store_sales\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q66. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q66: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n         +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate)]\n            :- HashAggregate\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: ]\n            :        +- CometProject\n            :           +- CometBroadcastHashJoin\n            :              :- CometProject\n            :              :  +- CometBroadcastHashJoin\n            :              :     :- CometProject\n            :              :     :  +- CometBroadcastHashJoin\n            :              :     :     :- CometProject\n            :              :     :     :  +- CometBroadcastHashJoin\n            :              :     :     :     :- CometFilter\n            :              :     :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :              :     :     :     +- CometBroadcastExchange\n            :              :     :     :        +- CometProject\n            :              :     :     :           +- CometFilter\n            :              :     :     :              +- CometScan parquet spark_catalog.default.warehouse\n            :              :     :     +- CometBroadcastExchange\n            :              :     :        +- CometFilter\n            :              :     :           +- CometScan parquet spark_catalog.default.date_dim\n            :              :     +- CometBroadcastExchange\n            :              :        +- CometProject\n            :              :           +- CometFilter\n            :              :              +- CometScan parquet spark_catalog.default.time_dim\n            :              +- CometBroadcastExchange\n            :                 +- CometProject\n           
 :                    +- CometFilter\n            :                       +- CometScan parquet spark_catalog.default.ship_mode\n            +- HashAggregate\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: ]\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometBroadcastHashJoin\n                           :     :     :- CometProject\n                           :     :     :  +- CometBroadcastHashJoin\n                           :     :     :     :- CometFilter\n                           :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                           :     :     :     +- CometBroadcastExchange\n                           :     :     :        +- CometProject\n                           :     :     :           +- CometFilter\n                           :     :     :              +- CometScan parquet spark_catalog.default.warehouse\n                           :     :     +- CometBroadcastExchange\n                           :     :        +- CometFilter\n                           :     :           +- CometScan parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.time_dim\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.ship_mode\n\n\nQuery: q67. 
Comet Exec: Enabled (CometExpand, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q67: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +- HashAggregate\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: ]\n                     +- CometExpand\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.store\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q68. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q68: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   +- BroadcastHashJoin\n      :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :  +- BroadcastHashJoin\n      :     :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     +- CometHashAggregate\n      :     :        +- CometProject\n      :     :           +- CometBroadcastHashJoin\n      :     :              :- CometProject\n      :     :              :  +- CometBroadcastHashJoin\n      :     :              :     :- CometProject\n      :     :              :     :  +- CometBroadcastHashJoin\n      :     :              :     :     :- CometProject\n      :     :              :     :     :  +- CometBroadcastHashJoin\n      :     :              :     :     :     :- CometFilter\n      :     :              :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n      :     :              :     :     :     +- CometBroadcastExchange\n      :     :              :     :     :        +- CometProject\n      :     :              :     :     :           +- CometFilter\n      :     :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n      :     :              :     :     +- CometBroadcastExchange\n      :     :              :     :        +- CometProject\n      :     :              :     :           +- CometFilter\n      :     :              :     :              +- CometScan parquet spark_catalog.default.store\n      :     :              :     +- CometBroadcastExchange\n      :     :              :        +- CometProject\n      :     :              :           +- CometFilter\n      :     :              :              +- CometScan parquet spark_catalog.default.household_demographics\n      :     :              +- CometBroadcastExchange\n      :     :                 +- CometFilter\n      :     :                    +- CometScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometProject\n      :           +- CometFilter\n      :              +- CometScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometFilter\n            +- CometScan parquet spark_catalog.default.customer_address\n\n\nQuery: q69. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q69: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :  +- BroadcastHashJoin\n               :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n               :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :     :  :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :     :  :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :     :  :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     :  :  :     +- CometFilter\n               :     :     :  :  :        +- CometScan parquet spark_catalog.default.customer\n               :     :     :  :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :     :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     :  :        +- CometProject\n               :     :     :  :           +- CometBroadcastHashJoin\n               :     :     :  :              :- CometFilter\n               :     :     :  :              :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :     :  :              +- CometBroadcastExchange\n               :     :     :  :                 +- CometProject\n               :     :     :  :                    +- CometFilter\n               :     :     :  :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     :        +- CometProject\n               :     :     :           +- CometBroadcastHashJoin\n               :     :     :              :- CometFilter\n               :     :     :              :  +- CometScan parquet spark_catalog.default.web_sales\n               :     :     :              +- CometBroadcastExchange\n               :     :     :                 +- CometProject\n               :   
  :     :                    +- CometFilter\n               :     :     :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :           +- CometProject\n               :     :              +- CometBroadcastHashJoin\n               :     :                 :- CometFilter\n               :     :                 :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :                 +- CometBroadcastExchange\n               :     :                    +- CometProject\n               :     :                       +- CometFilter\n               :     :                          +- CometScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.customer_address\n               +- BroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.customer_demographics\n\n\nQuery: q70. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q70: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +- HashAggregate\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Expand)]\n                     +-  Expand [COMET: Expand is not native because the following children are not native (Project)]\n                        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan parquet spark_catalog.default.store_sales\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.date_dim\n                              +- BroadcastExchange\n                                 +-  Project [COMET: Project is not native because the following children are not native 
(SortMergeJoin)]\n                                    +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Project)]\n                                       :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                       :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       :     +- CometFilter\n                                       :        +- CometScan parquet spark_catalog.default.store\n                                       +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                                          +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n                                             +-  Window [COMET: Window is not supported]\n                                                +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n                                                   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                                      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                         +- CometHashAggregate\n                                                            +- CometProject\n                                                               +- CometBroadcastHashJoin\n                                                                  :- CometProject\n                                                                  :  +- CometBroadcastHashJoin\n                                                                  :     :- CometFilter\n                                                                  :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                                                  :     +- CometBroadcastExchange\n                                                                  :        +- CometProject\n                                                                  :           +- CometFilter\n                                                                  :              +- CometScan parquet spark_catalog.default.store\n                                                                  +- CometBroadcastExchange\n                                                                     +- CometProject\n                                                                        +- CometFilter\n                                                                           +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q71. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject, CometUnion)\nQuery: q71: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometBroadcastExchange\n                  :     :  +- CometProject\n                  :     :     +- CometFilter\n                  :     :        +- CometScan parquet spark_catalog.default.item\n                  :     +- CometUnion\n                  :        :- CometProject\n                  :        :  +- CometBroadcastHashJoin\n                  :        :     :- CometFilter\n                  :        :     :  +- CometScan parquet spark_catalog.default.web_sales\n                  :        :     +- CometBroadcastExchange\n                  :        :        +- CometProject\n                  :        :           +- CometFilter\n                  :        :              +- CometScan parquet spark_catalog.default.date_dim\n                  :        :- CometProject\n                  :        :  +- CometBroadcastHashJoin\n                  :        :     :- CometFilter\n                  :        :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                  :        :     +- CometBroadcastExchange\n                  :        :        +- CometProject\n                  :        :           +- CometFilter\n                  :        :              +- CometScan parquet spark_catalog.default.date_dim\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometFilter\n                  :              :  +- CometScan parquet spark_catalog.default.store_sales\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan parquet spark_catalog.default.date_dim\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet spark_catalog.default.time_dim\n\n\nQuery: q72. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q72: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :  +- BroadcastHashJoin\n               :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :     :  +- BroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometProject\n               :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :- CometProject\n               :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :     :     :     :- CometBroadcastExchange\n               :     :     :     :     :     :     :     :     :     :  +- CometFilter\n               :     :     :     :     :     :     :     :     :     :     +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :     :     :     :     :     :     :     :     +- CometFilter\n               :     :     :     :     :     :     :     :     :        +- CometScan parquet spark_catalog.default.inventory\n               :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :     :     :     :        +- CometFilter\n               :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.warehouse\n               :     :     :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :     :     :        +- CometFilter\n               :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.item\n               :     :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :     :        +- CometProject\n               :     :     :     :     :     :           +- CometFilter\n               :     :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n               :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :        +- CometProject\n               :     :     
:     :     :           +- CometFilter\n               :     :     :     :     :              +- CometScan parquet spark_catalog.default.household_demographics\n               :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :        +- CometProject\n               :     :     :     :           +- CometFilter\n               :     :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometFilter\n               :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n               :     :     +- BroadcastExchange\n               :     :        +- CometFilter\n               :     :           +- CometScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan parquet spark_catalog.default.promotion\n               +- BroadcastExchange\n                  +- CometFilter\n                     +- CometScan parquet spark_catalog.default.catalog_returns\n\n\nQuery: q73. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q73: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      +- BroadcastHashJoin\n         :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n         :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :        +- CometHashAggregate\n         :           +- CometProject\n         :              +- CometBroadcastHashJoin\n         :                 :- CometProject\n         :                 :  +- CometBroadcastHashJoin\n         :                 :     :- CometProject\n         :                 :     :  +- CometBroadcastHashJoin\n         :                 :     :     :- CometFilter\n         :                 :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n         :                 :     :     +- CometBroadcastExchange\n         :                 :     :        +- CometProject\n         :                 :     :           +- CometFilter\n         :                 :     :              +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     +- CometBroadcastExchange\n         :                 :        +- CometProject\n         :                 :           +- CometFilter\n         :                 :              +- CometScan parquet spark_catalog.default.store\n         :                 +- CometBroadcastExchange\n         :                    +- CometProject\n         :                       +- CometFilter\n         :                          +- CometScan parquet spark_catalog.default.household_demographics\n         +- BroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan parquet 
spark_catalog.default.customer\n\n\nQuery: q74. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q74: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n      :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :  :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :  :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :  :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :  :              +- CometHashAggregate\n      :     :  :                 +- CometProject\n      :     :  :                    +- CometBroadcastHashJoin\n      :     :  :                       :- CometProject\n      :     :  :                       :  +- CometBroadcastHashJoin\n      :     :  :                       :     :- CometBroadcastExchange\n      :     :  :                       :     :  +- CometProject\n      :     :  :                       :     :     +- CometFilter\n      :     :  :                       :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :  :                       :     +- CometFilter\n      :     :  :                       :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :  :                       +- CometBroadcastExchange\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometScan parquet spark_catalog.default.date_dim\n      :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :              +- CometHashAggregate\n      :     :                 +- CometProject\n      :     :                    +- CometBroadcastHashJoin\n      :     :                       :- CometProject\n      :     :                       :  +- CometBroadcastHashJoin\n      :     :                       :     :- 
CometBroadcastExchange\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :                       :     +- CometFilter\n      :     :                       :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :                       +- CometBroadcastExchange\n      :     :                          +- CometFilter\n      :     :                             +- CometScan parquet spark_catalog.default.date_dim\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                    +- CometHashAggregate\n      :                       +- CometProject\n      :                          +- CometBroadcastHashJoin\n      :                             :- CometProject\n      :                             :  +- CometBroadcastHashJoin\n      :                             :     :- CometBroadcastExchange\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometScan parquet spark_catalog.default.customer\n      :                             :     +- CometFilter\n      :                             :        +- CometScan parquet spark_catalog.default.web_sales\n      :                             +- CometBroadcastExchange\n      :                                +- CometFilter\n      :                                   +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastExchange\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometScan parquet spark_catalog.default.customer\n                           :     +- CometFilter\n                           :        +- CometScan parquet spark_catalog.default.web_sales\n                       
    +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q75. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject, CometUnion)\nQuery: q75: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :        +- HashAggregate\n      :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n      :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                       +- CometHashAggregate\n      :                          +- CometUnion\n      :                             :- CometProject\n      :                             :  +- CometBroadcastHashJoin\n      :                             :     :- CometProject\n      :                             :     :  +- CometBroadcastHashJoin\n      :                             :     :     :- CometProject\n      :                             :     :     :  +- CometBroadcastHashJoin\n      :                             :     :     :     :- CometFilter\n      :                             :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n      :                             :     :     :     +- CometBroadcastExchange\n      :                             :     :     :        +- CometProject\n      :                             :     :     :           +- CometFilter\n      :                             :     :     :              +- CometScan parquet spark_catalog.default.item\n      :                             :     :     +- CometBroadcastExchange\n      :                             :     :        +- CometFilter\n      :                             :     :           +- CometScan parquet spark_catalog.default.date_dim\n      :                             :     +- CometBroadcastExchange\n      :                             :        +- CometFilter\n      :                             :           +- CometScan parquet spark_catalog.default.catalog_returns\n      :                             :- CometProject\n      :                             :  +- CometBroadcastHashJoin\n      :                             :     :- CometProject\n      :                             :     :  +- CometBroadcastHashJoin\n      :                             :     :     :- CometProject\n      :                
             :     :     :  +- CometBroadcastHashJoin\n      :                             :     :     :     :- CometFilter\n      :                             :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n      :                             :     :     :     +- CometBroadcastExchange\n      :                             :     :     :        +- CometProject\n      :                             :     :     :           +- CometFilter\n      :                             :     :     :              +- CometScan parquet spark_catalog.default.item\n      :                             :     :     +- CometBroadcastExchange\n      :                             :     :        +- CometFilter\n      :                             :     :           +- CometScan parquet spark_catalog.default.date_dim\n      :                             :     +- CometBroadcastExchange\n      :                             :        +- CometFilter\n      :                             :           +- CometScan parquet spark_catalog.default.store_returns\n      :                             +- CometProject\n      :                                +- CometBroadcastHashJoin\n      :                                   :- CometProject\n      :                                   :  +- CometBroadcastHashJoin\n      :                                   :     :- CometProject\n      :                                   :     :  +- CometBroadcastHashJoin\n      :                                   :     :     :- CometFilter\n      :                                   :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n      :                                   :     :     +- CometBroadcastExchange\n      :                                   :     :        +- CometProject\n      :                                   :     :           +- CometFilter\n      :                                   :     :              +- CometScan parquet spark_catalog.default.item\n      :                                   :     +- CometBroadcastExchange\n      :                                   :        +- CometFilter\n      :                                   :           +- CometScan parquet spark_catalog.default.date_dim\n      :                                   +- CometBroadcastExchange\n      :                                      +- CometFilter\n      :                                         +- CometScan parquet spark_catalog.default.web_returns\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n               +- HashAggregate\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n  
                            +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometProject\n                                    :  +- CometBroadcastHashJoin\n                                    :     :- CometProject\n                                    :     :  +- CometBroadcastHashJoin\n                                    :     :     :- CometProject\n                                    :     :     :  +- CometBroadcastHashJoin\n                                    :     :     :     :- CometFilter\n                                    :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                    :     :     :     +- CometBroadcastExchange\n                                    :     :     :        +- CometProject\n                                    :     :     :           +- CometFilter\n                                    :     :     :              +- CometScan parquet spark_catalog.default.item\n                                    :     :     +- CometBroadcastExchange\n                                    :     :        +- CometFilter\n                                    :     :           +- CometScan parquet spark_catalog.default.date_dim\n                                    :     +- CometBroadcastExchange\n                                    :        +- CometFilter\n                                    :           +- CometScan parquet spark_catalog.default.catalog_returns\n                                    :- CometProject\n                                    :  +- CometBroadcastHashJoin\n                                    :     :- CometProject\n                                    :     :  +- CometBroadcastHashJoin\n                                    :     :     :- CometProject\n                                    :     :     :  +- CometBroadcastHashJoin\n                                    :     :     :     :- CometFilter\n                                    :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                    :     :     :     +- CometBroadcastExchange\n                                    :     :     :        +- CometProject\n                                    :     :     :           +- CometFilter\n                                    :     :     :              +- CometScan parquet spark_catalog.default.item\n                                    :     :     +- CometBroadcastExchange\n                                    :     :        +- CometFilter\n                                    :     :           +- CometScan parquet spark_catalog.default.date_dim\n                                    :     +- CometBroadcastExchange\n                                    :        +- CometFilter\n                                    :           +- CometScan parquet spark_catalog.default.store_returns\n                                    +- CometProject\n                                       +- CometBroadcastHashJoin\n                                          :- CometProject\n                                          :  +- CometBroadcastHashJoin\n                                          :     :- CometProject\n                                          :     :  +- CometBroadcastHashJoin\n                                          :     :     :- CometFilter\n                                          :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                          :     :     +- 
CometBroadcastExchange\n                                          :     :        +- CometProject\n                                          :     :           +- CometFilter\n                                          :     :              +- CometScan parquet spark_catalog.default.item\n                                          :     +- CometBroadcastExchange\n                                          :        +- CometFilter\n                                          :           +- CometScan parquet spark_catalog.default.date_dim\n                                          +- CometBroadcastExchange\n                                             +- CometFilter\n                                                +- CometScan parquet spark_catalog.default.web_returns\n\n\nQuery: q76. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject, CometUnion)\nQuery: q76: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometUnion\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometBroadcastHashJoin\n            :     :     :- CometFilter\n            :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :     :     +- CometBroadcastExchange\n            :     :        +- CometProject\n            :     :           +- CometFilter\n            :     :              +- CometScan parquet spark_catalog.default.item\n            :     +- CometBroadcastExchange\n            :        +- CometFilter\n            :           +- CometScan parquet spark_catalog.default.date_dim\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometBroadcastHashJoin\n            :     :     :- CometFilter\n            :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :     :     +- CometBroadcastExchange\n            :     :        +- CometProject\n            :     :           +- CometFilter\n            :     :              +- CometScan parquet spark_catalog.default.item\n            :     +- CometBroadcastExchange\n            :        +- CometFilter\n            :           +- CometScan parquet spark_catalog.default.date_dim\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometFilter\n                  :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q77. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q77: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Expand)]\n         +-  Expand [COMET: Expand is not native because the following children are not native (Union)]\n            +-  Union [COMET: Union is not enabled because the following children are not native (Project, Project, Project)]\n               :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n               :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n               :     :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        +- CometHashAggregate\n               :     :           +- CometProject\n               :     :              +- CometBroadcastHashJoin\n               :     :                 :- CometProject\n               :     :                 :  +- CometBroadcastHashJoin\n               :     :                 :     :- CometFilter\n               :     :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :                 :     +- CometBroadcastExchange\n               :     :                 :        +- CometProject\n               :     :                 :           +- CometFilter\n               :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :                 +- CometBroadcastExchange\n               :     :                    +- CometFilter\n               :     :                       +- CometScan parquet spark_catalog.default.store\n               :     +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n               :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              +- CometHashAggregate\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometFilter\n               :                       :     :  +- CometScan parquet spark_catalog.default.store_returns\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- 
CometFilter\n               :                       :              +- CometScan parquet spark_catalog.default.date_dim\n               :                       +- CometBroadcastExchange\n               :                          +- CometFilter\n               :                             +- CometScan parquet spark_catalog.default.store\n               :-  Project [COMET: Project is not native because the following children are not native (BroadcastNestedLoopJoin)]\n               :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (HashAggregate, BroadcastExchange)]\n               :     :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     +- CometHashAggregate\n               :     :        +- CometProject\n               :     :           +- CometBroadcastHashJoin\n               :     :              :- CometFilter\n               :     :              :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :              +- CometBroadcastExchange\n               :     :                 +- CometProject\n               :     :                    +- CometFilter\n               :     :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              +- CometHashAggregate\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan parquet spark_catalog.default.catalog_returns\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan parquet spark_catalog.default.date_dim\n               +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                     :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n                     :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     :        +- CometHashAggregate\n                     :           +- CometProject\n                     :              +- CometBroadcastHashJoin\n                     :                 :- CometProject\n                     :                 :  +- CometBroadcastHashJoin\n                
     :                 :     :- CometFilter\n                     :                 :     :  +- CometScan parquet spark_catalog.default.web_sales\n                     :                 :     +- CometBroadcastExchange\n                     :                 :        +- CometProject\n                     :                 :           +- CometFilter\n                     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                     :                 +- CometBroadcastExchange\n                     :                    +- CometFilter\n                     :                       +- CometScan parquet spark_catalog.default.web_page\n                     +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n                        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometFilter\n                                       :     :  +- CometScan parquet spark_catalog.default.web_returns\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometProject\n                                       :           +- CometFilter\n                                       :              +- CometScan parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan parquet spark_catalog.default.web_page\n\n\nQuery: q78. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q78: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n      :     :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :        +- CometHashAggregate\n      :     :           +- CometProject\n      :     :              +- CometBroadcastHashJoin\n      :     :                 :- CometProject\n      :     :                 :  +- CometFilter\n      :     :                 :     +- CometBroadcastHashJoin\n      :     :                 :        :- CometFilter\n      :     :                 :        :  +- CometScan parquet spark_catalog.default.store_sales\n      :     :                 :        +- CometBroadcastExchange\n      :     :                 :           +- CometFilter\n      :     :                 :              +- CometScan parquet spark_catalog.default.store_returns\n      :     :                 +- CometBroadcastExchange\n      :     :                    +- CometFilter\n      :     :                       +- CometScan parquet spark_catalog.default.date_dim\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n      :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                 +- CometHashAggregate\n      :                    +- CometProject\n      :                       +- CometBroadcastHashJoin\n      :                          :- CometProject\n      :                          :  +- CometFilter\n      :                          :     +- CometBroadcastHashJoin\n      :                          :        :- CometFilter\n      :                          :        :  +- CometScan parquet spark_catalog.default.web_sales\n      :                          :        +- CometBroadcastExchange\n      :                          :           +- CometFilter\n      :                          :              +- CometScan parquet spark_catalog.default.web_returns\n      :                          +- CometBroadcastExchange\n      :                             +- CometFilter\n      :                                +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n         +-  Filter [COMET: 
Filter is not native because the following children are not native (HashAggregate)]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometFilter\n                           :     +- CometBroadcastHashJoin\n                           :        :- CometFilter\n                           :        :  +- CometScan parquet spark_catalog.default.catalog_sales\n                           :        +- CometBroadcastExchange\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.catalog_returns\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q79. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q79: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   +- BroadcastHashJoin\n      :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometFilter\n      :              :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan parquet spark_catalog.default.store\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan parquet spark_catalog.default.household_demographics\n      +- BroadcastExchange\n         +- CometProject\n            +- CometFilter\n               +- CometScan parquet spark_catalog.default.customer\n\n\nQuery: q80. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q80: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Expand)]\n         +-  Expand [COMET: Expand is not native because the following children are not native (Union)]\n            +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n               :- HashAggregate\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometProject\n               :              :     :     :  +- CometBroadcastHashJoin\n               :              :     :     :     :- CometProject\n               :              :     :     :     :  +- CometBroadcastHashJoin\n               :              :     :     :     :     :- CometFilter\n               :              :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :              :     :     :     :     +- CometBroadcastExchange\n               :              :     :     :     :        +- CometFilter\n               :              :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n               :              :     :     :     +- CometBroadcastExchange\n               :              :     :     :        +- CometProject\n               :              :     :     :           +- CometFilter\n               :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan parquet spark_catalog.default.store\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan parquet spark_catalog.default.item\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometFilter\n               :                       +- CometScan parquet spark_catalog.default.promotion\n               :- HashAggregate\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: 
spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometProject\n               :              :     :     :  +- CometBroadcastHashJoin\n               :              :     :     :     :- CometProject\n               :              :     :     :     :  +- CometBroadcastHashJoin\n               :              :     :     :     :     :- CometFilter\n               :              :     :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :              :     :     :     :     +- CometBroadcastExchange\n               :              :     :     :     :        +- CometFilter\n               :              :     :     :     :           +- CometScan parquet spark_catalog.default.catalog_returns\n               :              :     :     :     +- CometBroadcastExchange\n               :              :     :     :        +- CometProject\n               :              :     :     :           +- CometFilter\n               :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan parquet spark_catalog.default.catalog_page\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan parquet spark_catalog.default.item\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometFilter\n               :                       +- CometScan parquet spark_catalog.default.promotion\n               +- HashAggregate\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometFilter\n                              :     :     :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometFilter\n                              :     :     :     :           +- CometScan parquet spark_catalog.default.web_returns\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan parquet spark_catalog.default.web_site\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan parquet spark_catalog.default.promotion\n\n\nQuery: q81. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q81: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   +- BroadcastHashJoin\n      :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (Project, BroadcastExchange)]\n      :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :     :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :              +- CometHashAggregate\n      :     :     :                 +- CometProject\n      :     :     :                    +- CometBroadcastHashJoin\n      :     :     :                       :- CometProject\n      :     :     :                       :  +- CometBroadcastHashJoin\n      :     :     :                       :     :- CometFilter\n      :     :     :                       :     :  +- CometScan parquet spark_catalog.default.catalog_returns\n      :     :     :                       :     +- CometBroadcastExchange\n      :     :     :                       :        +- CometProject\n      :     :     :                       :           +- CometFilter\n      :     :     :                       :              +- CometScan parquet spark_catalog.default.date_dim\n      :     :     :                       +- CometBroadcastExchange\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometScan parquet spark_catalog.default.customer_address\n      :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n      :     :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :           +- HashAggregate\n      :     :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n      :     :                    +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :                       +-  Exchange 
[COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                          +- CometHashAggregate\n      :     :                             +- CometProject\n      :     :                                +- CometBroadcastHashJoin\n      :     :                                   :- CometProject\n      :     :                                   :  +- CometBroadcastHashJoin\n      :     :                                   :     :- CometFilter\n      :     :                                   :     :  +- CometScan parquet spark_catalog.default.catalog_returns\n      :     :                                   :     +- CometBroadcastExchange\n      :     :                                   :        +- CometProject\n      :     :                                   :           +- CometFilter\n      :     :                                   :              +- CometScan parquet spark_catalog.default.date_dim\n      :     :                                   +- CometBroadcastExchange\n      :     :                                      +- CometProject\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :           +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n      :              :  +- Subquery\n      :              :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n      :              :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :              :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n      :              :              +- CometProject\n      :              :                 +- CometFilter\n      :              :                    +- CometScan parquet spark_catalog.default.customer_address\n      :              +- CometScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometProject\n            +- CometFilter\n               +- CometScan parquet spark_catalog.default.customer_address\n\n\nQuery: q82. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q82: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometBroadcastExchange\n               :     :     :  +- CometProject\n               :     :     :     +- CometFilter\n               :     :     :        +- CometScan parquet spark_catalog.default.item\n               :     :     +- CometProject\n               :     :        +- CometFilter\n               :     :           +- CometScan parquet spark_catalog.default.inventory\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.date_dim\n               +- CometBroadcastExchange\n                  +- CometFilter\n                     +- CometScan parquet spark_catalog.default.store_sales\n\n\nQuery: q83. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q83: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n      :     :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :        +- CometHashAggregate\n      :     :           +- CometProject\n      :     :              +- CometBroadcastHashJoin\n      :     :                 :- CometProject\n      :     :                 :  +- CometBroadcastHashJoin\n      :     :                 :     :- CometFilter\n      :     :                 :     :  +- CometScan parquet spark_catalog.default.store_returns\n      :     :                 :     +- CometBroadcastExchange\n      :     :                 :        +- CometProject\n      :     :                 :           +- CometFilter\n      :     :                 :              +- CometScan parquet spark_catalog.default.item\n      :     :                 +- CometBroadcastExchange\n      :     :                    +- CometProject\n      :     :                       +- CometBroadcastHashJoin\n      :     :                          :- CometFilter\n      :     :        
                  :  +- CometScan parquet spark_catalog.default.date_dim\n      :     :                          +- CometBroadcastExchange\n      :     :                             +- CometProject\n      :     :                                +- CometBroadcastHashJoin\n      :     :                                   :- CometScan parquet spark_catalog.default.date_dim\n      :     :                                   +- CometBroadcastExchange\n      :     :                                      +- CometProject\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan parquet spark_catalog.default.date_dim\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n      :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :              +- CometHashAggregate\n      :                 +- CometProject\n      :                    +- CometBroadcastHashJoin\n      :                       :- CometProject\n      :                       :  +- CometBroadcastHashJoin\n      :                       :     :- CometFilter\n      :                       :     :  +- CometScan parquet spark_catalog.default.catalog_returns\n      :                       :     +- CometBroadcastExchange\n      :                       :        +- CometProject\n      :                       :           +- CometFilter\n      :                       :              +- CometScan parquet spark_catalog.default.item\n      :                       +- CometBroadcastExchange\n      :                          +- CometProject\n      :                             +- CometBroadcastHashJoin\n      :                                :- CometFilter\n      :                                :  +- CometScan parquet spark_catalog.default.date_dim\n      :                                +- CometBroadcastExchange\n      :                                   +- CometProject\n      :                                      +- CometBroadcastHashJoin\n      :                                         :- CometScan parquet spark_catalog.default.date_dim\n      :                                         +- CometBroadcastExchange\n      :                                            +- CometProject\n      :                                               +- CometFilter\n      :                                                  +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometFilter\n                        :     :  +- CometScan parquet spark_catalog.default.web_returns\n                        :     +- CometBroadcastExchange\n       
                 :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan parquet spark_catalog.default.item\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometFilter\n                                 :  +- CometScan parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometBroadcastHashJoin\n                                          :- CometScan parquet spark_catalog.default.date_dim\n                                          +- CometBroadcastExchange\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q84. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q84: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: concat is not supported]\n   +- CometBroadcastHashJoin\n      :- CometProject\n      :  +- CometBroadcastHashJoin\n      :     :- CometProject\n      :     :  +- CometBroadcastHashJoin\n      :     :     :- CometProject\n      :     :     :  +- CometBroadcastHashJoin\n      :     :     :     :- CometProject\n      :     :     :     :  +- CometBroadcastHashJoin\n      :     :     :     :     :- CometProject\n      :     :     :     :     :  +- CometFilter\n      :     :     :     :     :     +- CometScan parquet spark_catalog.default.customer\n      :     :     :     :     +- CometBroadcastExchange\n      :     :     :     :        +- CometProject\n      :     :     :     :           +- CometFilter\n      :     :     :     :              +- CometScan parquet spark_catalog.default.customer_address\n      :     :     :     +- CometBroadcastExchange\n      :     :     :        +- CometFilter\n      :     :     :           +- CometScan parquet spark_catalog.default.customer_demographics\n      :     :     +- CometBroadcastExchange\n      :     :        +- CometFilter\n      :     :           +- CometScan parquet spark_catalog.default.household_demographics\n      :     +- CometBroadcastExchange\n      :        +- CometProject\n      :           +- CometFilter\n      :              +- CometScan parquet spark_catalog.default.income_band\n      +- CometBroadcastExchange\n         +- CometFilter\n            +- CometScan parquet spark_catalog.default.store_returns\n\n\nQuery: q85. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q85: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometProject\n               :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :- CometProject\n               :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :- CometFilter\n               :     :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n               :     :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :     :        +- CometFilter\n               :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.web_returns\n               :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :        +- CometFilter\n               :     :     :     :     :           +- CometScan parquet spark_catalog.default.web_page\n               :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :        +- CometProject\n               :     :     :     :           +- CometFilter\n               :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometProject\n               :     :     :           +- CometFilter\n               :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.customer_address\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.date_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.reason\n\n\nQuery: q86. 
Comet Exec: Enabled (CometHashAggregate, CometExpand, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q86: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometExpand\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan parquet spark_catalog.default.web_sales\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.date_dim\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q87. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q87: ExplainInfo:\nHashAggregate\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n      +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n            :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :  :           +- CometHashAggregate\n            :  :              +- CometProject\n            :  :                 +- CometBroadcastHashJoin\n            :  :                    :- CometProject\n            :  :                    :  +- CometBroadcastHashJoin\n            :  :                    :     :- CometFilter\n            :  :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :  :                    :     +- CometBroadcastExchange\n            :  :                    :        +- CometProject\n            :  :                    :           +- CometFilter\n            :  :                    :              +- CometScan parquet spark_catalog.default.date_dim\n            :  :                    +- CometBroadcastExchange\n            :  :                       +- CometProject\n            :  :                          +- CometFilter\n            :  :                             +- CometScan parquet spark_catalog.default.customer\n            :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :              +- CometHashAggregate\n            :                 +- CometProject\n            :                    +- CometBroadcastHashJoin\n            :                       :- CometProject\n            :                       :  +- CometBroadcastHashJoin\n            :                       :     :- CometFilter\n            :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                       :     +- 
CometBroadcastExchange\n            :                       :        +- CometProject\n            :                       :           +- CometFilter\n            :                       :              +- CometScan parquet spark_catalog.default.date_dim\n            :                       +- CometBroadcastExchange\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometScan parquet spark_catalog.default.customer\n            +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometFilter\n                                 :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan parquet spark_catalog.default.customer\n\n\nQuery: q88. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q88: ExplainInfo:\n BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (BroadcastNestedLoopJoin, BroadcastExchange)]\n:-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (BroadcastNestedLoopJoin, BroadcastExchange)]\n:  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (BroadcastNestedLoopJoin, BroadcastExchange)]\n:  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (BroadcastNestedLoopJoin, BroadcastExchange)]\n:  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (BroadcastNestedLoopJoin, BroadcastExchange)]\n:  :  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (BroadcastNestedLoopJoin, BroadcastExchange)]\n:  :  :  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (HashAggregate, BroadcastExchange)]\n:  :  :  :  :  :  :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :  :  :  :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :  :  :  :  :     +- CometHashAggregate\n:  :  :  :  :  :  :        +- CometProject\n:  :  :  :  :  :  :           +- CometBroadcastHashJoin\n:  :  :  :  :  :  :              :- CometProject\n:  :  :  :  :  :  :              :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :              :     :- CometProject\n:  :  :  :  :  :  :              :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :              :     :     :- CometFilter\n:  :  :  :  :  :  :              :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n:  :  :  :  :  :  :              :     :     +- CometBroadcastExchange\n:  :  :  :  :  :  :              :     :        +- CometProject\n:  :  :  :  :  :  :              :     :           +- CometFilter\n:  :  :  :  :  :  :              :     :              +- CometScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :  :              :     +- CometBroadcastExchange\n:  :  :  :  :  :  :              :        +- CometProject\n:  :  :  :  :  :  :              :           +- CometFilter\n:  :  :  :  :  :  :              :              +- CometScan parquet spark_catalog.default.time_dim\n:  :  :  :  :  :  :              +- CometBroadcastExchange\n:  :  :  :  :  :  :                 +- CometProject\n:  :  :  :  :  :  :                    +- CometFilter\n:  :  :  :  :  :  :                       +- CometScan parquet spark_catalog.default.store\n:  :  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :  :  :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :  :  :  :           +- CometHashAggregate\n:  :  :  :  :  :              +- CometProject\n:  :  :  :  :  :           
      +- CometBroadcastHashJoin\n:  :  :  :  :  :                    :- CometProject\n:  :  :  :  :  :                    :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                    :     :- CometProject\n:  :  :  :  :  :                    :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                    :     :     :- CometFilter\n:  :  :  :  :  :                    :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n:  :  :  :  :  :                    :     :     +- CometBroadcastExchange\n:  :  :  :  :  :                    :     :        +- CometProject\n:  :  :  :  :  :                    :     :           +- CometFilter\n:  :  :  :  :  :                    :     :              +- CometScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :                    :     +- CometBroadcastExchange\n:  :  :  :  :  :                    :        +- CometProject\n:  :  :  :  :  :                    :           +- CometFilter\n:  :  :  :  :  :                    :              +- CometScan parquet spark_catalog.default.time_dim\n:  :  :  :  :  :                    +- CometBroadcastExchange\n:  :  :  :  :  :                       +- CometProject\n:  :  :  :  :  :                          +- CometFilter\n:  :  :  :  :  :                             +- CometScan parquet spark_catalog.default.store\n:  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :  :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :  :  :           +- CometHashAggregate\n:  :  :  :  :              +- CometProject\n:  :  :  :  :                 +- CometBroadcastHashJoin\n:  :  :  :  :                    :- CometProject\n:  :  :  :  :                    :  +- CometBroadcastHashJoin\n:  :  :  :  :                    :     :- CometProject\n:  :  :  :  :                    :     :  +- CometBroadcastHashJoin\n:  :  :  :  :                    :     :     :- CometFilter\n:  :  :  :  :                    :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n:  :  :  :  :                    :     :     +- CometBroadcastExchange\n:  :  :  :  :                    :     :        +- CometProject\n:  :  :  :  :                    :     :           +- CometFilter\n:  :  :  :  :                    :     :              +- CometScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :                    :     +- CometBroadcastExchange\n:  :  :  :  :                    :        +- CometProject\n:  :  :  :  :                    :           +- CometFilter\n:  :  :  :  :                    :              +- CometScan parquet spark_catalog.default.time_dim\n:  :  :  :  :                    +- CometBroadcastExchange\n:  :  :  :  :                       +- CometProject\n:  :  :  :  :                          +- CometFilter\n:  :  :  :  :                             +- CometScan parquet spark_catalog.default.store\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :  :           +- CometHashAggregate\n:  :  :  :         
     +- CometProject\n:  :  :  :                 +- CometBroadcastHashJoin\n:  :  :  :                    :- CometProject\n:  :  :  :                    :  +- CometBroadcastHashJoin\n:  :  :  :                    :     :- CometProject\n:  :  :  :                    :     :  +- CometBroadcastHashJoin\n:  :  :  :                    :     :     :- CometFilter\n:  :  :  :                    :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n:  :  :  :                    :     :     +- CometBroadcastExchange\n:  :  :  :                    :     :        +- CometProject\n:  :  :  :                    :     :           +- CometFilter\n:  :  :  :                    :     :              +- CometScan parquet spark_catalog.default.household_demographics\n:  :  :  :                    :     +- CometBroadcastExchange\n:  :  :  :                    :        +- CometProject\n:  :  :  :                    :           +- CometFilter\n:  :  :  :                    :              +- CometScan parquet spark_catalog.default.time_dim\n:  :  :  :                    +- CometBroadcastExchange\n:  :  :  :                       +- CometProject\n:  :  :  :                          +- CometFilter\n:  :  :  :                             +- CometScan parquet spark_catalog.default.store\n:  :  :  +- BroadcastExchange\n:  :  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :  :           +- CometHashAggregate\n:  :  :              +- CometProject\n:  :  :                 +- CometBroadcastHashJoin\n:  :  :                    :- CometProject\n:  :  :                    :  +- CometBroadcastHashJoin\n:  :  :                    :     :- CometProject\n:  :  :                    :     :  +- CometBroadcastHashJoin\n:  :  :                    :     :     :- CometFilter\n:  :  :                    :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n:  :  :                    :     :     +- CometBroadcastExchange\n:  :  :                    :     :        +- CometProject\n:  :  :                    :     :           +- CometFilter\n:  :  :                    :     :              +- CometScan parquet spark_catalog.default.household_demographics\n:  :  :                    :     +- CometBroadcastExchange\n:  :  :                    :        +- CometProject\n:  :  :                    :           +- CometFilter\n:  :  :                    :              +- CometScan parquet spark_catalog.default.time_dim\n:  :  :                    +- CometBroadcastExchange\n:  :  :                       +- CometProject\n:  :  :                          +- CometFilter\n:  :  :                             +- CometScan parquet spark_catalog.default.store\n:  :  +- BroadcastExchange\n:  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:  :           +- CometHashAggregate\n:  :              +- CometProject\n:  :                 +- CometBroadcastHashJoin\n:  :                    :- CometProject\n:  :                    :  +- CometBroadcastHashJoin\n:  :                    :     :- CometProject\n:  :                    :  
   :  +- CometBroadcastHashJoin\n:  :                    :     :     :- CometFilter\n:  :                    :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n:  :                    :     :     +- CometBroadcastExchange\n:  :                    :     :        +- CometProject\n:  :                    :     :           +- CometFilter\n:  :                    :     :              +- CometScan parquet spark_catalog.default.household_demographics\n:  :                    :     +- CometBroadcastExchange\n:  :                    :        +- CometProject\n:  :                    :           +- CometFilter\n:  :                    :              +- CometScan parquet spark_catalog.default.time_dim\n:  :                    +- CometBroadcastExchange\n:  :                       +- CometProject\n:  :                          +- CometFilter\n:  :                             +- CometScan parquet spark_catalog.default.store\n:  +- BroadcastExchange\n:     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n:        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n:           +- CometHashAggregate\n:              +- CometProject\n:                 +- CometBroadcastHashJoin\n:                    :- CometProject\n:                    :  +- CometBroadcastHashJoin\n:                    :     :- CometProject\n:                    :     :  +- CometBroadcastHashJoin\n:                    :     :     :- CometFilter\n:                    :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n:                    :     :     +- CometBroadcastExchange\n:                    :     :        +- CometProject\n:                    :     :           +- CometFilter\n:                    :     :              +- CometScan parquet spark_catalog.default.household_demographics\n:                    :     +- CometBroadcastExchange\n:                    :        +- CometProject\n:                    :           +- CometFilter\n:                    :              +- CometScan parquet spark_catalog.default.time_dim\n:                    +- CometBroadcastExchange\n:                       +- CometProject\n:                          +- CometFilter\n:                             +- CometScan parquet spark_catalog.default.store\n+- BroadcastExchange\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometFilter\n                  :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet spark_catalog.default.household_demographics\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :          
 +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.time_dim\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q89. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q89: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Filter)]\n   +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      +-  Window [COMET: Window is not supported]\n         +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometBroadcastExchange\n                              :     :     :  +- CometProject\n                              :     :     :     +- CometFilter\n                              :     :     :        +- CometScan parquet spark_catalog.default.item\n                              :     :     +- CometFilter\n                              :     :        +- CometScan parquet spark_catalog.default.store_sales\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.date_dim\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q90. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q90: ExplainInfo:\n Project [COMET: Project is not native because the following children are not native (BroadcastNestedLoopJoin)]\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (HashAggregate, BroadcastExchange)]\n   :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +- CometHashAggregate\n   :        +- CometProject\n   :           +- CometBroadcastHashJoin\n   :              :- CometProject\n   :              :  +- CometBroadcastHashJoin\n   :              :     :- CometProject\n   :              :     :  +- CometBroadcastHashJoin\n   :              :     :     :- CometFilter\n   :              :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n   :              :     :     +- CometBroadcastExchange\n   :              :     :        +- CometProject\n   :              :     :           +- CometFilter\n   :              :     :              +- CometScan parquet spark_catalog.default.household_demographics\n   :              :     +- CometBroadcastExchange\n   :              :        +- CometProject\n   :              :           +- CometFilter\n   :              :              +- CometScan parquet spark_catalog.default.time_dim\n   :              +- CometBroadcastExchange\n   :                 +- CometProject\n   :                    +- CometFilter\n   :                       +- CometScan parquet spark_catalog.default.web_page\n   +- BroadcastExchange\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometFilter\n                     :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan parquet spark_catalog.default.household_demographics\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan parquet spark_catalog.default.time_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan parquet spark_catalog.default.web_page\n\n\nQuery: q91. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q91: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometBroadcastExchange\n                  :     :     :     :     :     :  +- CometProject\n                  :     :     :     :     :     :     +- CometFilter\n                  :     :     :     :     :     :        +- CometScan parquet spark_catalog.default.call_center\n                  :     :     :     :     :     +- CometFilter\n                  :     :     :     :     :        +- CometScan parquet spark_catalog.default.catalog_returns\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan parquet spark_catalog.default.customer\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet spark_catalog.default.customer_address\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.customer_demographics\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet spark_catalog.default.household_demographics\n\n\nQuery: q92. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q92: ExplainInfo:\nHashAggregate\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n      +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         +- BroadcastHashJoin\n            :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n            :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     :     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :     :        +- BroadcastHashJoin\n            :     :           :-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :     :           :  :  +- Subquery\n            :     :           :  :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :     :           :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     :           :  :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :     :           :  :              +- CometProject\n            :     :           :  :                 +- CometFilter\n            :     :           :  :                    +- CometScan parquet spark_catalog.default.date_dim\n            :     :           :  +- CometScan parquet spark_catalog.default.web_sales\n            :     :           +- BroadcastExchange\n            :     :              +- CometProject\n            :     :                 +- CometFilter\n            :     :                    +- CometScan parquet spark_catalog.default.item\n            :     +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n            :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                 +- CometHashAggregate\n            :                    +- CometProject\n            :                       +- CometBroadcastHashJoin\n            :                          :- CometFilter\n            :                          :  +- CometScan parquet spark_catalog.default.web_sales\n            :                          +- CometBroadcastExchange\n            :                             +- CometProject\n            :                                +- 
CometFilter\n            :                                   +- CometScan parquet spark_catalog.default.date_dim\n            +- BroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q93. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q93: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: ]\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometScan parquet spark_catalog.default.store_sales\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan parquet spark_catalog.default.store_returns\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.reason\n\n\nQuery: q94. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q94: ExplainInfo:\nHashAggregate\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- HashAggregate\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  +- BroadcastHashJoin\n                     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                     :  +- BroadcastHashJoin\n                     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- BroadcastHashJoin\n                     :     :     :  :- CometProject\n                     :     :     :  :  +- CometBroadcastHashJoin\n                     :     :     :  :     :- CometFilter\n                     :     :     :  :     :  +- CometScan parquet spark_catalog.default.web_sales\n                     :     :     :  :     +- CometBroadcastExchange\n                     :     :     :  :        +- CometScan parquet spark_catalog.default.web_sales\n                     :     :     :  +- BroadcastExchange\n                     :     :     :     +- CometScan parquet spark_catalog.default.web_returns\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan parquet 
spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan parquet spark_catalog.default.customer_address\n                     +- BroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan parquet spark_catalog.default.web_site\n\n\nQuery: q95. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q95: ExplainInfo:\nHashAggregate\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- HashAggregate\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               +- BroadcastHashJoin\n                  :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  :  +- BroadcastHashJoin\n                  :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  :     :  +- BroadcastHashJoin\n                  :     :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n                  :     :     :  :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                  :     :     :  :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :     :     :  :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :     :  :  :     +- CometFilter\n                  :     :     :  :  :        +- CometScan parquet spark_catalog.default.web_sales\n                  :     :     :  :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :     :     :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :     :  :        +- CometProject\n                  :     :     :  :           +- CometBroadcastHashJoin\n                  :     :     :  :              :- CometFilter\n                  :     :     :  :              :  +- CometScan parquet spark_catalog.default.web_sales\n                  :     :     :  :              +- CometBroadcastExchange\n                  :     :     :  :                 +- CometFilter\n                  :     :     :  :                    +- CometScan parquet spark_catalog.default.web_sales\n                  :     :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :     :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometBroadcastHashJoin\n                  :     :     :              :- CometBroadcastExchange\n                  :     :     :              :  +- CometFilter\n                  :     :     :              :     +- CometScan parquet spark_catalog.default.web_returns\n                  :     :     :              +- CometProject\n                  :     :     :                 +- CometBroadcastHashJoin\n                  :     :     :                    :- CometFilter\n                  :     :     :                    :  +- CometScan parquet spark_catalog.default.web_sales\n                  :     :     :                    +- CometBroadcastExchange\n                  :     :     :                       +- CometFilter\n                  :     :     :                          +- CometScan parquet spark_catalog.default.web_sales\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet spark_catalog.default.web_site\n\n\nQuery: q96. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q96: ExplainInfo:\n HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- CometHashAggregate\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometBroadcastHashJoin\n            :     :     :- CometFilter\n            :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :     :     +- CometBroadcastExchange\n            :     :        +- CometProject\n            :     :           +- CometFilter\n            :     :              +- CometScan parquet spark_catalog.default.household_demographics\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometScan parquet spark_catalog.default.time_dim\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q97. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q97: ExplainInfo:\nHashAggregate\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n      +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n            :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometFilter\n            :                 :  +- CometScan parquet spark_catalog.default.store_sales\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan parquet spark_catalog.default.date_dim\n            +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n               +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometFilter\n                              :  +- CometScan parquet spark_catalog.default.catalog_sales\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q98. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q98: ExplainInfo:\n Project [COMET: Project is not native because the following children are not native (Sort)]\n+-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  Project [COMET: Project is not native because the following children are not native (Window)]\n         +-  Window [COMET: Window is not supported]\n            +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometFilter\n                                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q99. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q99: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometFilter\n               :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometFilter\n               :     :     :           +- CometScan parquet spark_catalog.default.warehouse\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan parquet spark_catalog.default.ship_mode\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan parquet spark_catalog.default.call_center\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q5a-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject, CometUnion)\nQuery: q5a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n         +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n            :- HashAggregate\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n            :        +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n            :           :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :     +- CometHashAggregate\n            :           :        +- CometProject\n            :           :           +- CometBroadcastHashJoin\n            :           :              :- CometProject\n            :           :              :  +- CometBroadcastHashJoin\n            :           :              :     :- CometUnion\n            :           :              :     :  :- CometProject\n            :           :              :     :  :  +- CometFilter\n            :           :              :     :  :     +- CometScan parquet spark_catalog.default.store_sales\n            :           :              :     :  +- CometProject\n            :           :              :     :     +- CometFilter\n            :           :              :     :        +- CometScan parquet spark_catalog.default.store_returns\n            :           :              :     +- CometBroadcastExchange\n            :           :              :        +- CometProject\n            :           :              :           +- CometFilter\n            :           :              :              +- CometScan parquet spark_catalog.default.date_dim\n            :           :              +- CometBroadcastExchange\n            :           :                 +- CometProject\n            :           :                    +- CometFilter\n            :           :                       +- CometScan parquet spark_catalog.default.store\n            :           :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :     +- CometHashAggregate\n            :           :        +- CometProject\n            :           :           +- CometBroadcastHashJoin\n            :           :     
         :- CometProject\n            :           :              :  +- CometBroadcastHashJoin\n            :           :              :     :- CometUnion\n            :           :              :     :  :- CometProject\n            :           :              :     :  :  +- CometFilter\n            :           :              :     :  :     +- CometScan parquet spark_catalog.default.catalog_sales\n            :           :              :     :  +- CometProject\n            :           :              :     :     +- CometFilter\n            :           :              :     :        +- CometScan parquet spark_catalog.default.catalog_returns\n            :           :              :     +- CometBroadcastExchange\n            :           :              :        +- CometProject\n            :           :              :           +- CometFilter\n            :           :              :              +- CometScan parquet spark_catalog.default.date_dim\n            :           :              +- CometBroadcastExchange\n            :           :                 +- CometProject\n            :           :                    +- CometFilter\n            :           :                       +- CometScan parquet spark_catalog.default.catalog_page\n            :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                 +- CometHashAggregate\n            :                    +- CometProject\n            :                       +- CometBroadcastHashJoin\n            :                          :- CometProject\n            :                          :  +- CometBroadcastHashJoin\n            :                          :     :- CometUnion\n            :                          :     :  :- CometProject\n            :                          :     :  :  +- CometFilter\n            :                          :     :  :     +- CometScan parquet spark_catalog.default.web_sales\n            :                          :     :  +- CometProject\n            :                          :     :     +- CometBroadcastHashJoin\n            :                          :     :        :- CometBroadcastExchange\n            :                          :     :        :  +- CometFilter\n            :                          :     :        :     +- CometScan parquet spark_catalog.default.web_returns\n            :                          :     :        +- CometFilter\n            :                          :     :           +- CometScan parquet spark_catalog.default.web_sales\n            :                          :     +- CometBroadcastExchange\n            :                          :        +- CometProject\n            :                          :           +- CometFilter\n            :                          :              +- CometScan parquet spark_catalog.default.date_dim\n            :                          +- CometBroadcastExchange\n            :                             +- CometProject\n            :                                +- CometFilter\n            :                                   +- CometScan parquet spark_catalog.default.web_site\n            :- HashAggregate\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n            :        +- HashAggregate\n            :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n            :                 +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n            :                    :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :     +- CometHashAggregate\n            :                    :        +- CometProject\n            :                    :           +- CometBroadcastHashJoin\n            :                    :              :- CometProject\n            :                    :              :  +- CometBroadcastHashJoin\n            :                    :              :     :- CometUnion\n            :                    :              :     :  :- CometProject\n            :                    :              :     :  :  +- CometFilter\n            :                    :              :     :  :     +- CometScan parquet spark_catalog.default.store_sales\n            :                    :              :     :  +- CometProject\n            :                    :              :     :     +- CometFilter\n            :                    :              :     :        +- CometScan parquet spark_catalog.default.store_returns\n            :                    :              :     +- CometBroadcastExchange\n            :                    :              :        +- CometProject\n            :                    :              :           +- CometFilter\n            :                    :              :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :              +- CometBroadcastExchange\n            :                    :                 +- CometProject\n            :                    :                    +- CometFilter\n            :                    :                       +- CometScan parquet spark_catalog.default.store\n            :                    :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :     +- CometHashAggregate\n            :                    :        +- CometProject\n            :                    :           +- CometBroadcastHashJoin\n            :                    :              :- CometProject\n            :                    :              :  +- CometBroadcastHashJoin\n            :                    :              :     :- CometUnion\n            :                    :              :     :  :- CometProject\n            :                    :   
           :     :  :  +- CometFilter\n            :                    :              :     :  :     +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :              :     :  +- CometProject\n            :                    :              :     :     +- CometFilter\n            :                    :              :     :        +- CometScan parquet spark_catalog.default.catalog_returns\n            :                    :              :     +- CometBroadcastExchange\n            :                    :              :        +- CometProject\n            :                    :              :           +- CometFilter\n            :                    :              :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :              +- CometBroadcastExchange\n            :                    :                 +- CometProject\n            :                    :                    +- CometFilter\n            :                    :                       +- CometScan parquet spark_catalog.default.catalog_page\n            :                    +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                          +- CometHashAggregate\n            :                             +- CometProject\n            :                                +- CometBroadcastHashJoin\n            :                                   :- CometProject\n            :                                   :  +- CometBroadcastHashJoin\n            :                                   :     :- CometUnion\n            :                                   :     :  :- CometProject\n            :                                   :     :  :  +- CometFilter\n            :                                   :     :  :     +- CometScan parquet spark_catalog.default.web_sales\n            :                                   :     :  +- CometProject\n            :                                   :     :     +- CometBroadcastHashJoin\n            :                                   :     :        :- CometBroadcastExchange\n            :                                   :     :        :  +- CometFilter\n            :                                   :     :        :     +- CometScan parquet spark_catalog.default.web_returns\n            :                                   :     :        +- CometFilter\n            :                                   :     :           +- CometScan parquet spark_catalog.default.web_sales\n            :                                   :     +- CometBroadcastExchange\n            :                                   :        +- CometProject\n            :                                   :           +- CometFilter\n            :                                   :              +- CometScan parquet spark_catalog.default.date_dim\n            :                                   +- CometBroadcastExchange\n            :                                      +- CometProject\n            :                                         +- CometFilter\n            :                                            +- CometScan parquet spark_catalog.default.web_site\n            +- HashAggregate\n               +-  Exchange [COMET: Comet shuffle is not enabled: 
spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                     +- HashAggregate\n                        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n                              +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n                                 :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :     +- CometHashAggregate\n                                 :        +- CometProject\n                                 :           +- CometBroadcastHashJoin\n                                 :              :- CometProject\n                                 :              :  +- CometBroadcastHashJoin\n                                 :              :     :- CometUnion\n                                 :              :     :  :- CometProject\n                                 :              :     :  :  +- CometFilter\n                                 :              :     :  :     +- CometScan parquet spark_catalog.default.store_sales\n                                 :              :     :  +- CometProject\n                                 :              :     :     +- CometFilter\n                                 :              :     :        +- CometScan parquet spark_catalog.default.store_returns\n                                 :              :     +- CometBroadcastExchange\n                                 :              :        +- CometProject\n                                 :              :           +- CometFilter\n                                 :              :              +- CometScan parquet spark_catalog.default.date_dim\n                                 :              +- CometBroadcastExchange\n                                 :                 +- CometProject\n                                 :                    +- CometFilter\n                                 :                       +- CometScan parquet spark_catalog.default.store\n                                 :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :     +- CometHashAggregate\n                                 :        +- CometProject\n                                 :           +- CometBroadcastHashJoin\n                                 :              :- CometProject\n                                 :              :  +- CometBroadcastHashJoin\n                                 :              :     :- CometUnion\n                                 :              : 
    :  :- CometProject\n                                 :              :     :  :  +- CometFilter\n                                 :              :     :  :     +- CometScan parquet spark_catalog.default.catalog_sales\n                                 :              :     :  +- CometProject\n                                 :              :     :     +- CometFilter\n                                 :              :     :        +- CometScan parquet spark_catalog.default.catalog_returns\n                                 :              :     +- CometBroadcastExchange\n                                 :              :        +- CometProject\n                                 :              :           +- CometFilter\n                                 :              :              +- CometScan parquet spark_catalog.default.date_dim\n                                 :              +- CometBroadcastExchange\n                                 :                 +- CometProject\n                                 :                    +- CometFilter\n                                 :                       +- CometScan parquet spark_catalog.default.catalog_page\n                                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       +- CometHashAggregate\n                                          +- CometProject\n                                             +- CometBroadcastHashJoin\n                                                :- CometProject\n                                                :  +- CometBroadcastHashJoin\n                                                :     :- CometUnion\n                                                :     :  :- CometProject\n                                                :     :  :  +- CometFilter\n                                                :     :  :     +- CometScan parquet spark_catalog.default.web_sales\n                                                :     :  +- CometProject\n                                                :     :     +- CometBroadcastHashJoin\n                                                :     :        :- CometBroadcastExchange\n                                                :     :        :  +- CometFilter\n                                                :     :        :     +- CometScan parquet spark_catalog.default.web_returns\n                                                :     :        +- CometFilter\n                                                :     :           +- CometScan parquet spark_catalog.default.web_sales\n                                                :     +- CometBroadcastExchange\n                                                :        +- CometProject\n                                                :           +- CometFilter\n                                                :              +- CometScan parquet spark_catalog.default.date_dim\n                                                +- CometBroadcastExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan parquet spark_catalog.default.web_site\n\n\nQuery: q6-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q6-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n   +- HashAggregate\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometBroadcastExchange\n                  :     :     :     :  +- CometProject\n                  :     :     :     :     +- CometFilter\n                  :     :     :     :        +- CometScan parquet spark_catalog.default.customer_address\n                  :     :     :     +- CometFilter\n                  :     :     :        +- CometScan parquet spark_catalog.default.customer\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan parquet spark_catalog.default.store_sales\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              :  +- Subquery\n                  :              :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  :              :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :              :           +- CometHashAggregate\n                  :              :              +- CometProject\n                  :              :                 +- CometFilter\n                  :              :                    +- CometScan parquet spark_catalog.default.date_dim\n                  :              +- CometScan parquet spark_catalog.default.date_dim\n                  +- BroadcastExchange\n                     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                           :- CometFilter\n                           :  +- CometScan parquet spark_catalog.default.item\n                           +- BroadcastExchange\n                              +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                          
          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       +- CometHashAggregate\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q10a-v2.7. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject, CometUnion)\nQuery: q10a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :  +- BroadcastHashJoin\n               :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n               :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     :  :     +- CometFilter\n               :     :     :  :        +- CometScan parquet spark_catalog.default.customer\n               :     :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     :        +- CometProject\n               :     :     :           +- CometBroadcastHashJoin\n               :     :     :              :- CometFilter\n               :     :     :              :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :     :              +- CometBroadcastExchange\n               :     :     :                 +- CometProject\n               :     :     :                    +- CometFilter\n               :     :     :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :           +- CometUnion\n               :     :              :- CometProject\n               :     :        
      :  +- CometBroadcastHashJoin\n               :     :              :     :- CometFilter\n               :     :              :     :  +- CometScan parquet spark_catalog.default.web_sales\n               :     :              :     +- CometBroadcastExchange\n               :     :              :        +- CometProject\n               :     :              :           +- CometFilter\n               :     :              :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :              +- CometProject\n               :     :                 +- CometBroadcastHashJoin\n               :     :                    :- CometFilter\n               :     :                    :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :                    +- CometBroadcastExchange\n               :     :                       +- CometProject\n               :     :                          +- CometFilter\n               :     :                             +- CometScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.customer_address\n               +- BroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.customer_demographics\n\n\nQuery: q11-v2.7. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q11-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n      :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :  :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :  :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :  :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :  :              +- CometHashAggregate\n      :     :  :                 +- CometProject\n      :     :  :                    +- CometBroadcastHashJoin\n      :     :  :                       :- CometProject\n      :     :  :                       :  +- CometBroadcastHashJoin\n      :     :  :                       :     :- CometBroadcastExchange\n      :     :  :                       :     :  +- CometProject\n      :     :  :                       :     :     
+- CometFilter\n      :     :  :                       :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :  :                       :     +- CometFilter\n      :     :  :                       :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :  :                       +- CometBroadcastExchange\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometScan parquet spark_catalog.default.date_dim\n      :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :              +- CometHashAggregate\n      :     :                 +- CometProject\n      :     :                    +- CometBroadcastHashJoin\n      :     :                       :- CometProject\n      :     :                       :  +- CometBroadcastHashJoin\n      :     :                       :     :- CometBroadcastExchange\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :                       :     +- CometFilter\n      :     :                       :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :                       +- CometBroadcastExchange\n      :     :                          +- CometFilter\n      :     :                             +- CometScan parquet spark_catalog.default.date_dim\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                    +- CometHashAggregate\n      :                       +- CometProject\n      :                          +- CometBroadcastHashJoin\n      :                             :- CometProject\n      :                             :  +- CometBroadcastHashJoin\n      :                             :     :- CometBroadcastExchange\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometScan parquet spark_catalog.default.customer\n      :                             :     +- CometFilter\n      :                             :        +- CometScan parquet spark_catalog.default.web_sales\n  
    :                             +- CometBroadcastExchange\n      :                                +- CometFilter\n      :                                   +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastExchange\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometScan parquet spark_catalog.default.customer\n                           :     +- CometFilter\n                           :        +- CometScan parquet spark_catalog.default.web_sales\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q12-v2.7. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q12-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometFilter\n                           :     :  +- CometScan parquet spark_catalog.default.web_sales\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.item\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q14-v2.7. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q14-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n   :  +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n   :     :  +- Subquery\n   :     :     +- HashAggregate\n   :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n   :     :              +- CometUnion\n   :     :                 :- CometProject\n   :     :                 :  +- CometBroadcastHashJoin\n   :     :                 :     :- CometFilter\n   :     :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :     :                 :     +- CometBroadcastExchange\n   :     :                 :        +- CometProject\n   :     :                 :           +- CometFilter\n   :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n   :     :                 :- CometProject\n   :     :                 :  +- CometBroadcastHashJoin\n   :     :                 :     :- CometFilter\n   :     :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n   :     :                 :     +- CometBroadcastExchange\n   :     :                 :        +- CometProject\n   :     :                 :           +- CometFilter\n   :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n   :     :                 +- CometProject\n   :     :                    +- CometBroadcastHashJoin\n   :     :                       :- CometFilter\n   :     :                       :  +- CometScan parquet spark_catalog.default.web_sales\n   :     :                       +- CometBroadcastExchange\n   :     :                          +- CometProject\n   :     :                             +- CometFilter\n   :     :                                +- CometScan parquet spark_catalog.default.date_dim\n   :     +- HashAggregate\n   :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n   :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                 +- BroadcastHashJoin\n   :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n   :                    :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native 
(Sort, Sort)]\n   :                    :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n   :                    :     :  :        :  +- Subquery\n   :                    :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n   :                    :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n   :                    :     :  :        :              +- CometProject\n   :                    :     :  :        :                 +- CometFilter\n   :                    :     :  :        :                    :  +- Subquery\n   :                    :     :  :        :                    :     +- CometProject\n   :                    :     :  :        :                    :        +- CometFilter\n   :                    :     :  :        :                    :           +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     :  :        +- CometScan parquet spark_catalog.default.store_sales\n   :                    :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :     :           +- BroadcastHashJoin\n   :                    :     :              :- BroadcastExchange\n   :                    :     :              :  +- CometFilter\n   :                    :     :              :     +- CometScan parquet spark_catalog.default.item\n   :                    :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :                    :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :                    :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :                 :           +-  HashAggregate [COMET: HashAggregate 
is not native because the following children are not native (Project)]\n   :                    :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :     :                 :                 +- BroadcastHashJoin\n   :                    :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n   :                    :     :                 :                    :     :- CometFilter\n   :                    :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :                    :     :                 :                    :     +- BroadcastExchange\n   :                    :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :                    :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :                 :                    :           :     +- CometFilter\n   :                    :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n   :                    :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :                 :                    :                 +- CometProject\n   :                    :     :                 :                    :                    +- CometBroadcastHashJoin\n   :                    :     :                 :                    :                       :- CometProject\n   :                    :     :                 :                    :                       :  +- CometBroadcastHashJoin\n   :                    :     :                 :                    :                       :     :- CometFilter\n   :                    :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n   :                    :     :                 :                    :                       :     +- CometBroadcastExchange\n   :                    :     :                 :                    :                       :        +- CometFilter\n   :                    :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n   :                    :     :                 :                    :                       +- 
CometBroadcastExchange\n   :                    :     :                 :                    :                          +- CometProject\n   :                    :     :                 :                    :                             +- CometFilter\n   :                    :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                    +- BroadcastExchange\n   :                    :     :                 :                       +- CometProject\n   :                    :     :                 :                          +- CometFilter\n   :                    :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :     :                       +- CometProject\n   :                    :     :                          +- CometBroadcastHashJoin\n   :                    :     :                             :- CometProject\n   :                    :     :                             :  +- CometBroadcastHashJoin\n   :                    :     :                             :     :- CometFilter\n   :                    :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n   :                    :     :                             :     +- CometBroadcastExchange\n   :                    :     :                             :        +- CometFilter\n   :                    :     :                             :           +- CometScan parquet spark_catalog.default.item\n   :                    :     :                             +- CometBroadcastExchange\n   :                    :     :                                +- CometProject\n   :                    :     :                                   +- CometFilter\n   :                    :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     +- BroadcastExchange\n   :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :           :     +- CometFilter\n   :                    :           :        +- CometScan parquet spark_catalog.default.item\n   :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   
:                    :                    +- BroadcastHashJoin\n   :                    :                       :- BroadcastExchange\n   :                    :                       :  +- CometFilter\n   :                    :                       :     +- CometScan parquet spark_catalog.default.item\n   :                    :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :                    :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :                    :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n   :                    :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :                          :                 +- BroadcastHashJoin\n   :                    :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   :                    :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n   :                    :                          :                    :     :- CometFilter\n   :                    :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :                    :                          :                    :     +- BroadcastExchange\n   :                    :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n   :                    :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                          :                    :           :     +- CometFilter\n   :                    :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n   :                    :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :                          :                    :              +-  Exchange 
[COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                          :                    :                 +- CometProject\n   :                    :                          :                    :                    +- CometBroadcastHashJoin\n   :                    :                          :                    :                       :- CometProject\n   :                    :                          :                    :                       :  +- CometBroadcastHashJoin\n   :                    :                          :                    :                       :     :- CometFilter\n   :                    :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n   :                    :                          :                    :                       :     +- CometBroadcastExchange\n   :                    :                          :                    :                       :        +- CometFilter\n   :                    :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n   :                    :                          :                    :                       +- CometBroadcastExchange\n   :                    :                          :                    :                          +- CometProject\n   :                    :                          :                    :                             +- CometFilter\n   :                    :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n   :                    :                          :                    +- BroadcastExchange\n   :                    :                          :                       +- CometProject\n   :                    :                          :                          +- CometFilter\n   :                    :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n   :                    :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n   :                    :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :                    :                                +- CometProject\n   :                    :                                   +- CometBroadcastHashJoin\n   :                    :                                      :- CometProject\n   :                    :                                      :  +- CometBroadcastHashJoin\n   :                    :                                      :     :- CometFilter\n   :                    :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n   :                    :                                      :     +- CometBroadcastExchange\n   :                    :                                      :        +- CometFilter\n   :                    :                                      :           +- CometScan parquet spark_catalog.default.item\n   :                    :                                      +- CometBroadcastExchange\n   
:                    :                                         +- CometProject\n   :                    :                                            +- CometFilter\n   :                    :                                               +- CometScan parquet spark_catalog.default.date_dim\n   :                    +- BroadcastExchange\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             :  +- Subquery\n   :                             :     +- CometProject\n   :                             :        +- CometFilter\n   :                             :           +- CometScan parquet spark_catalog.default.date_dim\n   :                             +- CometScan parquet spark_catalog.default.date_dim\n   +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n      +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n         :  +- Subquery\n         :     +- HashAggregate\n         :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n         :              +- CometUnion\n         :                 :- CometProject\n         :                 :  +- CometBroadcastHashJoin\n         :                 :     :- CometFilter\n         :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n         :                 :     +- CometBroadcastExchange\n         :                 :        +- CometProject\n         :                 :           +- CometFilter\n         :                 :              +- CometScan parquet spark_catalog.default.date_dim\n         :                 :- CometProject\n         :                 :  +- CometBroadcastHashJoin\n         :                 :     :- CometFilter\n         :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n         :                 :     +- CometBroadcastExchange\n         :                 :        +- CometProject\n         :                 :           +- CometFilter\n         :                 :              +- CometScan parquet spark_catalog.default.date_dim\n         :                 +- CometProject\n         :                    +- CometBroadcastHashJoin\n         :                       :- CometFilter\n         :                       :  +- CometScan parquet spark_catalog.default.web_sales\n         :                       +- CometBroadcastExchange\n         :                          +- CometProject\n         :                             +- CometFilter\n         :                                +- CometScan parquet spark_catalog.default.date_dim\n         +- HashAggregate\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                  +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  
   +- BroadcastHashJoin\n                        :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n                        :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n                        :     :  :        :  +- Subquery\n                        :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                        :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                        :     :  :        :              +- CometProject\n                        :     :  :        :                 +- CometFilter\n                        :     :  :        :                    :  +- Subquery\n                        :     :  :        :                    :     +- CometProject\n                        :     :  :        :                    :        +- CometFilter\n                        :     :  :        :                    :           +- CometScan parquet spark_catalog.default.date_dim\n                        :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n                        :     :  :        +- CometScan parquet spark_catalog.default.store_sales\n                        :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :     :           +- BroadcastHashJoin\n                        :     :              :- BroadcastExchange\n                        :     :              :  +- CometFilter\n                        :     :              :     +- CometScan parquet spark_catalog.default.item\n                        :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     : 
                :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                        :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                        :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :     :                 :                 +- BroadcastHashJoin\n                        :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                        :     :                 :                    :     :- CometFilter\n                        :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :     :                 :                    :     +- BroadcastExchange\n                        :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :                 :                    :           :     +- CometFilter\n                        :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n                        :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :                 :                    :                 +- CometProject\n                        :     :                 :                    :                    +- CometBroadcastHashJoin\n                        :     :                 :                    :                       :- CometProject\n                        :     :                 :                    :                       :  +- CometBroadcastHashJoin\n                        :     :                 :                    :                       :     :- CometFilter\n                        :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                        :     :    
             :                    :                       :     +- CometBroadcastExchange\n                        :     :                 :                    :                       :        +- CometFilter\n                        :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                        :     :                 :                    :                       +- CometBroadcastExchange\n                        :     :                 :                    :                          +- CometProject\n                        :     :                 :                    :                             +- CometFilter\n                        :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                        :     :                 :                    +- BroadcastExchange\n                        :     :                 :                       +- CometProject\n                        :     :                 :                          +- CometFilter\n                        :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n                        :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     :                       +- CometProject\n                        :     :                          +- CometBroadcastHashJoin\n                        :     :                             :- CometProject\n                        :     :                             :  +- CometBroadcastHashJoin\n                        :     :                             :     :- CometFilter\n                        :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n                        :     :                             :     +- CometBroadcastExchange\n                        :     :                             :        +- CometFilter\n                        :     :                             :           +- CometScan parquet spark_catalog.default.item\n                        :     :                             +- CometBroadcastExchange\n                        :     :                                +- CometProject\n                        :     :                                   +- CometFilter\n                        :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n                        :     +- BroadcastExchange\n                        :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :           :     +- CometFilter\n                        :           :        +- CometScan parquet spark_catalog.default.item\n                        :          
 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :                    +- BroadcastHashJoin\n                        :                       :- BroadcastExchange\n                        :                       :  +- CometFilter\n                        :                       :     +- CometScan parquet spark_catalog.default.item\n                        :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                        :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                        :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :                          :                 +- BroadcastHashJoin\n                        :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                        :                          :                    :     :- CometFilter\n                        :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :                          :                    :     +- BroadcastExchange\n                        :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                        :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                          :                
    :           :     +- CometFilter\n                        :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n                        :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                          :                    :                 +- CometProject\n                        :                          :                    :                    +- CometBroadcastHashJoin\n                        :                          :                    :                       :- CometProject\n                        :                          :                    :                       :  +- CometBroadcastHashJoin\n                        :                          :                    :                       :     :- CometFilter\n                        :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                        :                          :                    :                       :     +- CometBroadcastExchange\n                        :                          :                    :                       :        +- CometFilter\n                        :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                        :                          :                    :                       +- CometBroadcastExchange\n                        :                          :                    :                          +- CometProject\n                        :                          :                    :                             +- CometFilter\n                        :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                        :                          :                    +- BroadcastExchange\n                        :                          :                       +- CometProject\n                        :                          :                          +- CometFilter\n                        :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n                        :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                                +- CometProject\n                        :                                   +- CometBroadcastHashJoin\n                        :                                      :- CometProject\n                        :                                      :  +- CometBroadcastHashJoin\n                        :                                      :     :- CometFilter\n                        :                                      :     
:  +- CometScan parquet spark_catalog.default.web_sales\n                        :                                      :     +- CometBroadcastExchange\n                        :                                      :        +- CometFilter\n                        :                                      :           +- CometScan parquet spark_catalog.default.item\n                        :                                      +- CometBroadcastExchange\n                        :                                         +- CometProject\n                        :                                            +- CometFilter\n                        :                                               +- CometScan parquet spark_catalog.default.date_dim\n                        +- BroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 :  +- Subquery\n                                 :     +- CometProject\n                                 :        +- CometFilter\n                                 :           +- CometScan parquet spark_catalog.default.date_dim\n                                 +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q14a-v2.7. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q14a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n         +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate, HashAggregate, HashAggregate)]\n            :- HashAggregate\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n            :        +-  Union [COMET: Union is not enabled because the following children are not native (Filter, Filter, Filter)]\n            :           :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :           :  :  +- Subquery\n            :           :  :     +- HashAggregate\n            :           :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :  :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :           :  :              +- CometUnion\n            :           :  :                 :- CometProject\n            :           :  :                 :  +- CometBroadcastHashJoin\n            :           :  :                 :     :- CometFilter\n            :           :  :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :           :  :                 :     +- CometBroadcastExchange\n            :           :  :                 :        +- CometProject\n            :           :  :                 :           +- CometFilter\n            :           :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :           :  :                 :- CometProject\n            :           :  :                 :  +- CometBroadcastHashJoin\n            :           :  :                 :     :- CometFilter\n            :           :  :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :           :  :                 :     +- CometBroadcastExchange\n            :           :  :                 :        +- CometProject\n            :           :  :                 :           +- CometFilter\n            :           :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :           :  :                 +- CometProject\n            :           :  :                    +- CometBroadcastHashJoin\n            :           :  :                       :- CometFilter\n            :           :  :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :           :  :                       +- CometBroadcastExchange\n            :           :  :                          +- CometProject\n            :           :  :                             +- CometFilter\n            :           :  :                                +- CometScan parquet spark_catalog.default.date_dim\n            :           :  +- HashAggregate\n            :           :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :           :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :              +- BroadcastHashJoin\n            :           :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :           :                 :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :                 :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not 
enabled]\n            :           :                 :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :           :                 :     :  :        :  +- Subquery\n            :           :                 :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :           :                 :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :           :                 :     :  :        :              +- CometProject\n            :           :                 :     :  :        :                 +- CometFilter\n            :           :                 :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :     :  :        +- CometScan parquet spark_catalog.default.store_sales\n            :           :                 :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :     :           +- BroadcastHashJoin\n            :           :                 :     :              :- BroadcastExchange\n            :           :                 :     :              :  +- CometFilter\n            :           :                 :     :              :     +- CometScan parquet spark_catalog.default.item\n            :           :                 :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :                 :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           :                 :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :           :                 :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :     :            
     :                 +- BroadcastHashJoin\n            :           :                 :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :           :                 :     :                 :                    :     :- CometFilter\n            :           :                 :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :           :                 :     :                 :                    :     +- BroadcastExchange\n            :           :                 :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :                 :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :                 :                    :           :     +- CometFilter\n            :           :                 :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :           :                 :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :                 :                    :                 +- CometProject\n            :           :                 :     :                 :                    :                    +- CometBroadcastHashJoin\n            :           :                 :     :                 :                    :                       :- CometProject\n            :           :                 :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :           :                 :     :                 :                    :                       :     :- CometFilter\n            :           :                 :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :           :                 :     :                 :                    :                       :     +- CometBroadcastExchange\n            :           :                 :     :                 :                    :                       :        +- CometFilter\n            :           :                 :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n        
    :           :                 :     :                 :                    :                       +- CometBroadcastExchange\n            :           :                 :     :                 :                    :                          +- CometProject\n            :           :                 :     :                 :                    :                             +- CometFilter\n            :           :                 :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :     :                 :                    +- BroadcastExchange\n            :           :                 :     :                 :                       +- CometProject\n            :           :                 :     :                 :                          +- CometFilter\n            :           :                 :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :                       +- CometProject\n            :           :                 :     :                          +- CometBroadcastHashJoin\n            :           :                 :     :                             :- CometProject\n            :           :                 :     :                             :  +- CometBroadcastHashJoin\n            :           :                 :     :                             :     :- CometFilter\n            :           :                 :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :           :                 :     :                             :     +- CometBroadcastExchange\n            :           :                 :     :                             :        +- CometFilter\n            :           :                 :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :           :                 :     :                             +- CometBroadcastExchange\n            :           :                 :     :                                +- CometProject\n            :           :                 :     :                                   +- CometFilter\n            :           :                 :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :     +- BroadcastExchange\n            :           :                 :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :                 :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :           :     +- 
CometFilter\n            :           :                 :           :        +- CometScan parquet spark_catalog.default.item\n            :           :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :                    +- BroadcastHashJoin\n            :           :                 :                       :- BroadcastExchange\n            :           :                 :                       :  +- CometFilter\n            :           :                 :                       :     +- CometScan parquet spark_catalog.default.item\n            :           :                 :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :                 :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           :                 :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :           :                 :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :                          :                 +- BroadcastHashJoin\n            :           :                 :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :           :                 :                          :                    :     :- CometFilter\n            :           :                 :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :           :                 :                          :                    :     +- BroadcastExchange\n            :           :                 :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following 
children are not native (Sort, Sort)]\n            :           :                 :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                          :                    :           :     +- CometFilter\n            :           :                 :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :           :                 :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                          :                    :                 +- CometProject\n            :           :                 :                          :                    :                    +- CometBroadcastHashJoin\n            :           :                 :                          :                    :                       :- CometProject\n            :           :                 :                          :                    :                       :  +- CometBroadcastHashJoin\n            :           :                 :                          :                    :                       :     :- CometFilter\n            :           :                 :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :           :                 :                          :                    :                       :     +- CometBroadcastExchange\n            :           :                 :                          :                    :                       :        +- CometFilter\n            :           :                 :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :           :                 :                          :                    :                       +- CometBroadcastExchange\n            :           :                 :                          :                    :                          +- CometProject\n            :           :                 :                          :                    :                             +- CometFilter\n            :           :                 :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :                          :                    +- BroadcastExchange\n            :           :                 :                          :                       +- CometProject\n            :           :                 :                          :                          +- CometFilter\n            :           :                 :          
                :                             +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                                +- CometProject\n            :           :                 :                                   +- CometBroadcastHashJoin\n            :           :                 :                                      :- CometProject\n            :           :                 :                                      :  +- CometBroadcastHashJoin\n            :           :                 :                                      :     :- CometFilter\n            :           :                 :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :           :                 :                                      :     +- CometBroadcastExchange\n            :           :                 :                                      :        +- CometFilter\n            :           :                 :                                      :           +- CometScan parquet spark_catalog.default.item\n            :           :                 :                                      +- CometBroadcastExchange\n            :           :                 :                                         +- CometProject\n            :           :                 :                                            +- CometFilter\n            :           :                 :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 +- BroadcastExchange\n            :           :                    +- CometProject\n            :           :                       +- CometFilter\n            :           :                          +- CometScan parquet spark_catalog.default.date_dim\n            :           :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :           :  :  +- Subquery\n            :           :  :     +- HashAggregate\n            :           :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :  :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :           :  :              +- CometUnion\n            :           :  :                 :- CometProject\n            :           :  :                 :  +- CometBroadcastHashJoin\n            :           :  :                 :     :- CometFilter\n            :           :  :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :           :  :                 :     +- CometBroadcastExchange\n            :           :  :                 :        +- CometProject\n            :           :  :                 :           +- CometFilter\n            :           :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :           :  :                 :- CometProject\n            :           :  :                 :  +- CometBroadcastHashJoin\n            :           :  :                 :     :- CometFilter\n            :           :  :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :           :  :                 :     +- CometBroadcastExchange\n            :           :  :                 :        +- CometProject\n            :           :  :                 :           +- CometFilter\n            :           :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :           :  :                 +- CometProject\n            :           :  :                    +- CometBroadcastHashJoin\n            :           :  :                       :- CometFilter\n            :           :  :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :           :  :                       +- CometBroadcastExchange\n            :           :  :                          +- CometProject\n            :           :  :                             +- CometFilter\n            :           :  :                                +- CometScan parquet spark_catalog.default.date_dim\n            :           :  +- HashAggregate\n            :           :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :           :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :              +- BroadcastHashJoin\n            :           :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :           :                 :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :                 :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not 
enabled]\n            :           :                 :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :           :                 :     :  :        :  +- Subquery\n            :           :                 :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :           :                 :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :           :                 :     :  :        :              +- CometProject\n            :           :                 :     :  :        :                 +- CometFilter\n            :           :                 :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :     :  :        +- CometScan parquet spark_catalog.default.catalog_sales\n            :           :                 :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :     :           +- BroadcastHashJoin\n            :           :                 :     :              :- BroadcastExchange\n            :           :                 :     :              :  +- CometFilter\n            :           :                 :     :              :     +- CometScan parquet spark_catalog.default.item\n            :           :                 :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :                 :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           :                 :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :           :                 :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :     :          
       :                 +- BroadcastHashJoin\n            :           :                 :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :           :                 :     :                 :                    :     :- CometFilter\n            :           :                 :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :           :                 :     :                 :                    :     +- BroadcastExchange\n            :           :                 :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :                 :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :                 :                    :           :     +- CometFilter\n            :           :                 :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :           :                 :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :                 :                    :                 +- CometProject\n            :           :                 :     :                 :                    :                    +- CometBroadcastHashJoin\n            :           :                 :     :                 :                    :                       :- CometProject\n            :           :                 :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :           :                 :     :                 :                    :                       :     :- CometFilter\n            :           :                 :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :           :                 :     :                 :                    :                       :     +- CometBroadcastExchange\n            :           :                 :     :                 :                    :                       :        +- CometFilter\n            :           :                 :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n      
      :           :                 :     :                 :                    :                       +- CometBroadcastExchange\n            :           :                 :     :                 :                    :                          +- CometProject\n            :           :                 :     :                 :                    :                             +- CometFilter\n            :           :                 :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :     :                 :                    +- BroadcastExchange\n            :           :                 :     :                 :                       +- CometProject\n            :           :                 :     :                 :                          +- CometFilter\n            :           :                 :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :     :                       +- CometProject\n            :           :                 :     :                          +- CometBroadcastHashJoin\n            :           :                 :     :                             :- CometProject\n            :           :                 :     :                             :  +- CometBroadcastHashJoin\n            :           :                 :     :                             :     :- CometFilter\n            :           :                 :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :           :                 :     :                             :     +- CometBroadcastExchange\n            :           :                 :     :                             :        +- CometFilter\n            :           :                 :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :           :                 :     :                             +- CometBroadcastExchange\n            :           :                 :     :                                +- CometProject\n            :           :                 :     :                                   +- CometFilter\n            :           :                 :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :     +- BroadcastExchange\n            :           :                 :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :                 :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :           :     +- 
CometFilter\n            :           :                 :           :        +- CometScan parquet spark_catalog.default.item\n            :           :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :                    +- BroadcastHashJoin\n            :           :                 :                       :- BroadcastExchange\n            :           :                 :                       :  +- CometFilter\n            :           :                 :                       :     +- CometScan parquet spark_catalog.default.item\n            :           :                 :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :                 :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           :                 :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :           :                 :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :                          :                 +- BroadcastHashJoin\n            :           :                 :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :           :                 :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :           :                 :                          :                    :     :- CometFilter\n            :           :                 :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :           :                 :                          :                    :     +- BroadcastExchange\n            :           :                 :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following 
children are not native (Sort, Sort)]\n            :           :                 :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                          :                    :           :     +- CometFilter\n            :           :                 :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :           :                 :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                          :                    :                 +- CometProject\n            :           :                 :                          :                    :                    +- CometBroadcastHashJoin\n            :           :                 :                          :                    :                       :- CometProject\n            :           :                 :                          :                    :                       :  +- CometBroadcastHashJoin\n            :           :                 :                          :                    :                       :     :- CometFilter\n            :           :                 :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :           :                 :                          :                    :                       :     +- CometBroadcastExchange\n            :           :                 :                          :                    :                       :        +- CometFilter\n            :           :                 :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :           :                 :                          :                    :                       +- CometBroadcastExchange\n            :           :                 :                          :                    :                          +- CometProject\n            :           :                 :                          :                    :                             +- CometFilter\n            :           :                 :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :                          :                    +- BroadcastExchange\n            :           :                 :                          :                       +- CometProject\n            :           :                 :                          :                          +- CometFilter\n            :           :                 :          
                :                             +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :           :                 :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :                 :                                +- CometProject\n            :           :                 :                                   +- CometBroadcastHashJoin\n            :           :                 :                                      :- CometProject\n            :           :                 :                                      :  +- CometBroadcastHashJoin\n            :           :                 :                                      :     :- CometFilter\n            :           :                 :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :           :                 :                                      :     +- CometBroadcastExchange\n            :           :                 :                                      :        +- CometFilter\n            :           :                 :                                      :           +- CometScan parquet spark_catalog.default.item\n            :           :                 :                                      +- CometBroadcastExchange\n            :           :                 :                                         +- CometProject\n            :           :                 :                                            +- CometFilter\n            :           :                 :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :           :                 +- BroadcastExchange\n            :           :                    +- CometProject\n            :           :                       +- CometFilter\n            :           :                          +- CometScan parquet spark_catalog.default.date_dim\n            :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :              :  +- Subquery\n            :              :     +- HashAggregate\n            :              :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :              :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :              :              +- CometUnion\n            :              :                 :- CometProject\n            :              :                 :  +- CometBroadcastHashJoin\n            :              :                 :     :- CometFilter\n            :              :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :              :                 :     +- CometBroadcastExchange\n            :              :                 :        +- CometProject\n            :              :                 :           +- CometFilter\n            :              :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :              :                 :- CometProject\n            :              :                 :  +- CometBroadcastHashJoin\n            :              :                 :     :- CometFilter\n            :              :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :              :                 :     +- CometBroadcastExchange\n            :              :                 :        +- CometProject\n            :              :                 :           +- CometFilter\n            :              :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :              :                 +- CometProject\n            :              :                    +- CometBroadcastHashJoin\n            :              :                       :- CometFilter\n            :              :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :              :                       +- CometBroadcastExchange\n            :              :                          +- CometProject\n            :              :                             +- CometFilter\n            :              :                                +- CometScan parquet spark_catalog.default.date_dim\n            :              +- HashAggregate\n            :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                       +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                          +- BroadcastHashJoin\n            :                             :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                             :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :                             :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                             :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not 
enabled]\n            :                             :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :                             :     :  :        :  +- Subquery\n            :                             :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                             :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                             :     :  :        :              +- CometProject\n            :                             :     :  :        :                 +- CometFilter\n            :                             :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n            :                             :     :  :        +- CometScan parquet spark_catalog.default.web_sales\n            :                             :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                             :     :           +- BroadcastHashJoin\n            :                             :     :              :- BroadcastExchange\n            :                             :     :              :  +- CometFilter\n            :                             :     :              :     +- CometScan parquet spark_catalog.default.item\n            :                             :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                             :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                             :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                             :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                             :     :              
   :                 +- BroadcastHashJoin\n            :                             :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                             :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                             :     :                 :                    :     :- CometFilter\n            :                             :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                             :     :                 :                    :     +- BroadcastExchange\n            :                             :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                             :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :     :                 :                    :           :     +- CometFilter\n            :                             :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                             :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :     :                 :                    :                 +- CometProject\n            :                             :     :                 :                    :                    +- CometBroadcastHashJoin\n            :                             :     :                 :                    :                       :- CometProject\n            :                             :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :                             :     :                 :                    :                       :     :- CometFilter\n            :                             :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                             :     :                 :                    :                       :     +- CometBroadcastExchange\n            :                             :     :                 :                    :                       :        +- CometFilter\n            :                             :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n          
  :                             :     :                 :                    :                       +- CometBroadcastExchange\n            :                             :     :                 :                    :                          +- CometProject\n            :                             :     :                 :                    :                             +- CometFilter\n            :                             :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                             :     :                 :                    +- BroadcastExchange\n            :                             :     :                 :                       +- CometProject\n            :                             :     :                 :                          +- CometFilter\n            :                             :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                             :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :     :                       +- CometProject\n            :                             :     :                          +- CometBroadcastHashJoin\n            :                             :     :                             :- CometProject\n            :                             :     :                             :  +- CometBroadcastHashJoin\n            :                             :     :                             :     :- CometFilter\n            :                             :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                             :     :                             :     +- CometBroadcastExchange\n            :                             :     :                             :        +- CometFilter\n            :                             :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :                             :     :                             +- CometBroadcastExchange\n            :                             :     :                                +- CometProject\n            :                             :     :                                   +- CometFilter\n            :                             :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :                             :     +- BroadcastExchange\n            :                             :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                             :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :           :     +- 
CometFilter\n            :                             :           :        +- CometScan parquet spark_catalog.default.item\n            :                             :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                             :                    +- BroadcastHashJoin\n            :                             :                       :- BroadcastExchange\n            :                             :                       :  +- CometFilter\n            :                             :                       :     +- CometScan parquet spark_catalog.default.item\n            :                             :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                             :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                             :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                             :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                             :                          :                 +- BroadcastHashJoin\n            :                             :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                             :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                             :                          :                    :     :- CometFilter\n            :                             :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                             :                          :                    :     +- BroadcastExchange\n            :                             :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following 
children are not native (Sort, Sort)]\n            :                             :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :                          :                    :           :     +- CometFilter\n            :                             :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                             :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :                          :                    :                 +- CometProject\n            :                             :                          :                    :                    +- CometBroadcastHashJoin\n            :                             :                          :                    :                       :- CometProject\n            :                             :                          :                    :                       :  +- CometBroadcastHashJoin\n            :                             :                          :                    :                       :     :- CometFilter\n            :                             :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                             :                          :                    :                       :     +- CometBroadcastExchange\n            :                             :                          :                    :                       :        +- CometFilter\n            :                             :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                             :                          :                    :                       +- CometBroadcastExchange\n            :                             :                          :                    :                          +- CometProject\n            :                             :                          :                    :                             +- CometFilter\n            :                             :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                             :                          :                    +- BroadcastExchange\n            :                             :                          :                       +- CometProject\n            :                             :                          :                          +- CometFilter\n            :                             :          
                :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                             :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                             :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             :                                +- CometProject\n            :                             :                                   +- CometBroadcastHashJoin\n            :                             :                                      :- CometProject\n            :                             :                                      :  +- CometBroadcastHashJoin\n            :                             :                                      :     :- CometFilter\n            :                             :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                             :                                      :     +- CometBroadcastExchange\n            :                             :                                      :        +- CometFilter\n            :                             :                                      :           +- CometScan parquet spark_catalog.default.item\n            :                             :                                      +- CometBroadcastExchange\n            :                             :                                         +- CometProject\n            :                             :                                            +- CometFilter\n            :                             :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :                             +- BroadcastExchange\n            :                                +- CometProject\n            :                                   +- CometFilter\n            :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :- HashAggregate\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n            :        +- HashAggregate\n            :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n            :                 +-  Union [COMET: Union is not enabled because the following children are not native (Filter, Filter, Filter)]\n            :                    :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :                    :  :  +- Subquery\n            :                    :  :     +- HashAggregate\n            :                    :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :  :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                    :  :              +- CometUnion\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 +- CometProject\n            :                    :  :                    +- CometBroadcastHashJoin\n            :                    :  :                       :- CometFilter\n            :                    :  :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :  :                       +- CometBroadcastExchange\n            :                    :  :                          +- CometProject\n            :                    :  :                             +- CometFilter\n            :                    :  :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  +- HashAggregate\n            :                    :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :              +- BroadcastHashJoin\n            :                    :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following 
children are not native (SortMergeJoin, BroadcastExchange)]\n            :                    :                 :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :                    :                 :     :  :        :  +- Subquery\n            :                    :                 :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                    :                 :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                    :                 :     :  :        :              +- CometProject\n            :                    :                 :     :  :        :                 +- CometFilter\n            :                    :                 :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :  :        +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :           +- BroadcastHashJoin\n            :                    :                 :     :              :- BroadcastExchange\n            :                    :                 :     :              :  +- CometFilter\n            :                    :                 :     :              :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :     +-  
HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                 +- BroadcastHashJoin\n            :                    :                 :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :     :                 :                    :     :- CometFilter\n            :                    :                 :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :     :                 :                    :     +- BroadcastExchange\n            :                    :                 :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :           :     +- CometFilter\n            :                    :                 :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :                 +- CometProject\n            :                    :                 :     :                 :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :     :                 :       
             :                       :- CometProject\n            :                    :                 :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :                       :     :- CometFilter\n            :                    :                 :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :     :                 :                    :                       :     +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                       :        +- CometFilter\n            :                    :                 :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :                       +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                          +- CometProject\n            :                    :                 :     :                 :                    :                             +- CometFilter\n            :                    :                 :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 :                    +- BroadcastExchange\n            :                    :                 :     :                 :                       +- CometProject\n            :                    :                 :     :                 :                          +- CometFilter\n            :                    :                 :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                       +- CometProject\n            :                    :                 :     :                          +- CometBroadcastHashJoin\n            :                    :                 :     :                             :- CometProject\n            :                    :                 :     :                             :  +- CometBroadcastHashJoin\n            :                    :                 :     :                             :     :- CometFilter\n            :                    :                 :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :     :                             :     +- CometBroadcastExchange\n            :                    :                 :     :                             :        +- CometFilter\n            :                    :                 :     :                          
   :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                             +- CometBroadcastExchange\n            :                    :                 :     :                                +- CometProject\n            :                    :                 :     :                                   +- CometFilter\n            :                    :                 :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     +- BroadcastExchange\n            :                    :                 :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :           :     +- CometFilter\n            :                    :                 :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                    +- BroadcastHashJoin\n            :                    :                 :                       :- BroadcastExchange\n            :                    :                 :                       :  +- CometFilter\n            :                    :                 :                       :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :           +-  
HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                          :                 +- BroadcastHashJoin\n            :                    :                 :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :                          :                    :     :- CometFilter\n            :                    :                 :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :                          :                    :     +- BroadcastExchange\n            :                    :                 :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :           :     +- CometFilter\n            :                    :                 :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :                 +- CometProject\n            :                    :                 :                          :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :- CometProject\n            :                    :                 :                          :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :     :- CometFilter\n            :                    :                
 :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :                          :                    :                       :     +- CometBroadcastExchange\n            :                    :                 :                          :                    :                       :        +- CometFilter\n            :                    :                 :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :                       +- CometBroadcastExchange\n            :                    :                 :                          :                    :                          +- CometProject\n            :                    :                 :                          :                    :                             +- CometFilter\n            :                    :                 :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          :                    +- BroadcastExchange\n            :                    :                 :                          :                       +- CometProject\n            :                    :                 :                          :                          +- CometFilter\n            :                    :                 :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                                +- CometProject\n            :                    :                 :                                   +- CometBroadcastHashJoin\n            :                    :                 :                                      :- CometProject\n            :                    :                 :                                      :  +- CometBroadcastHashJoin\n            :                    :                 :                                      :     :- CometFilter\n            :                    :                 :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :                                      :     +- CometBroadcastExchange\n            :                    :                 :                                      :        +- CometFilter\n            :                    :                 :                                      :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                                      +- CometBroadcastExchange\n            :                    :                 :                                         +- CometProject\n            :                    :  
               :                                            +- CometFilter\n            :                    :                 :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 +- BroadcastExchange\n            :                    :                    +- CometProject\n            :                    :                       +- CometFilter\n            :                    :                          +- CometScan parquet spark_catalog.default.date_dim\n            :                    :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :                    :  :  +- Subquery\n            :                    :  :     +- HashAggregate\n            :                    :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :  :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                    :  :              +- CometUnion\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 +- CometProject\n            :                    :  :                    +- CometBroadcastHashJoin\n            :                    :  :                       :- CometFilter\n            :                    :  :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :  :                       +- CometBroadcastExchange\n            :                    :  :                          +- CometProject\n            :                    :  :                             +- CometFilter\n            :                    :  :                                +- CometScan parquet 
spark_catalog.default.date_dim\n            :                    :  +- HashAggregate\n            :                    :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :              +- BroadcastHashJoin\n            :                    :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :                    :                 :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :                    :                 :     :  :        :  +- Subquery\n            :                    :                 :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                    :                 :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                    :                 :     :  :        :              +- CometProject\n            :                    :                 :     :  :        :                 +- CometFilter\n            :                    :                 :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :  :        +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :           +- BroadcastHashJoin\n            :                   
 :                 :     :              :- BroadcastExchange\n            :                    :                 :     :              :  +- CometFilter\n            :                    :                 :     :              :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                 +- BroadcastHashJoin\n            :                    :                 :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :     :                 :                    :     :- CometFilter\n            :                    :                 :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :     :                 :                    :     +- BroadcastExchange\n            :                    :                 :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :           :     +- CometFilter\n            :                    :   
              :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :                 +- CometProject\n            :                    :                 :     :                 :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :                       :- CometProject\n            :                    :                 :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :                       :     :- CometFilter\n            :                    :                 :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :     :                 :                    :                       :     +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                       :        +- CometFilter\n            :                    :                 :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :                       +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                          +- CometProject\n            :                    :                 :     :                 :                    :                             +- CometFilter\n            :                    :                 :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 :                    +- BroadcastExchange\n            :                    :                 :     :                 :                       +- CometProject\n            :                    :                 :     :                 :                          +- CometFilter\n            :                    :                 :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    : 
                :     :                       +- CometProject\n            :                    :                 :     :                          +- CometBroadcastHashJoin\n            :                    :                 :     :                             :- CometProject\n            :                    :                 :     :                             :  +- CometBroadcastHashJoin\n            :                    :                 :     :                             :     :- CometFilter\n            :                    :                 :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :     :                             :     +- CometBroadcastExchange\n            :                    :                 :     :                             :        +- CometFilter\n            :                    :                 :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                             +- CometBroadcastExchange\n            :                    :                 :     :                                +- CometProject\n            :                    :                 :     :                                   +- CometFilter\n            :                    :                 :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     +- BroadcastExchange\n            :                    :                 :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :           :     +- CometFilter\n            :                    :                 :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                    +- BroadcastHashJoin\n            :                    :                 :                       :- BroadcastExchange\n            :                    :                 :                       :  +- CometFilter\n            :                    :                 :                       :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n          
  :                    :                 :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                          :                 +- BroadcastHashJoin\n            :                    :                 :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :                          :                    :     :- CometFilter\n            :                    :                 :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :                          :                    :     +- BroadcastExchange\n            :                    :                 :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :           :     +- CometFilter\n            :                    :                 :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :             
       :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :                 +- CometProject\n            :                    :                 :                          :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :- CometProject\n            :                    :                 :                          :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :     :- CometFilter\n            :                    :                 :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :                          :                    :                       :     +- CometBroadcastExchange\n            :                    :                 :                          :                    :                       :        +- CometFilter\n            :                    :                 :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :                       +- CometBroadcastExchange\n            :                    :                 :                          :                    :                          +- CometProject\n            :                    :                 :                          :                    :                             +- CometFilter\n            :                    :                 :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          :                    +- BroadcastExchange\n            :                    :                 :                          :                       +- CometProject\n            :                    :                 :                          :                          +- CometFilter\n            :                    :                 :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                                +- CometProject\n            :                    :                 :                                   +- CometBroadcastHashJoin\n            :                    :                 :                                      :- CometProject\n            :                    :                 :                         
             :  +- CometBroadcastHashJoin\n            :                    :                 :                                      :     :- CometFilter\n            :                    :                 :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :                                      :     +- CometBroadcastExchange\n            :                    :                 :                                      :        +- CometFilter\n            :                    :                 :                                      :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                                      +- CometBroadcastExchange\n            :                    :                 :                                         +- CometProject\n            :                    :                 :                                            +- CometFilter\n            :                    :                 :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 +- BroadcastExchange\n            :                    :                    +- CometProject\n            :                    :                       +- CometFilter\n            :                    :                          +- CometScan parquet spark_catalog.default.date_dim\n            :                    +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :                       :  +- Subquery\n            :                       :     +- HashAggregate\n            :                       :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                       :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                       :              +- CometUnion\n            :                       :                 :- CometProject\n            :                       :                 :  +- CometBroadcastHashJoin\n            :                       :                 :     :- CometFilter\n            :                       :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                       :                 :     +- CometBroadcastExchange\n            :                       :                 :        +- CometProject\n            :                       :                 :           +- CometFilter\n            :                       :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                       :                 :- CometProject\n            :                       :                 :  +- CometBroadcastHashJoin\n            :                       :                 :     :- CometFilter\n            :                       :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                       :                 :     +- CometBroadcastExchange\n            :                       :                 :        +- CometProject\n            :                       :                 :           +- CometFilter\n            :                       :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                       :                 +- CometProject\n            :                       :                    +- CometBroadcastHashJoin\n            :                       :                       :- CometFilter\n            :                       :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :                       :                       +- CometBroadcastExchange\n            :                       :                          +- CometProject\n            :                       :                             +- CometFilter\n            :                       :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                       +- HashAggregate\n            :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                                +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                   +- BroadcastHashJoin\n            :                                      :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :                                      :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :     :  :-  Sort 
[COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :                                      :     :  :        :  +- Subquery\n            :                                      :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                                      :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                                      :     :  :        :              +- CometProject\n            :                                      :     :  :        :                 +- CometFilter\n            :                                      :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :     :  :        +- CometScan parquet spark_catalog.default.web_sales\n            :                                      :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :     :           +- BroadcastHashJoin\n            :                                      :     :              :- BroadcastExchange\n            :                                      :     :              :  +- CometFilter\n            :                                      :     :              :     +- CometScan parquet spark_catalog.default.item\n            :                                      :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                                      :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :     :                 :                 +- BroadcastHashJoin\n            :                                      :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                                      :     :                 :                    :     :- CometFilter\n            :                                      :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                                      :     :                 :                    :     +- BroadcastExchange\n            :                                      :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :                    :           :     +- CometFilter\n            :                                      :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                                      :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :                    :                 +- CometProject\n            :                                      :     :                 :                    :                    +- CometBroadcastHashJoin\n            :                                      :     :                 :                    :                       :- CometProject\n            :                                      :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :                                      :     :                 :                    :    
                   :     :- CometFilter\n            :                                      :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                                      :     :                 :                    :                       :     +- CometBroadcastExchange\n            :                                      :     :                 :                    :                       :        +- CometFilter\n            :                                      :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                                      :     :                 :                    :                       +- CometBroadcastExchange\n            :                                      :     :                 :                    :                          +- CometProject\n            :                                      :     :                 :                    :                             +- CometFilter\n            :                                      :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :     :                 :                    +- BroadcastExchange\n            :                                      :     :                 :                       +- CometProject\n            :                                      :     :                 :                          +- CometFilter\n            :                                      :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                       +- CometProject\n            :                                      :     :                          +- CometBroadcastHashJoin\n            :                                      :     :                             :- CometProject\n            :                                      :     :                             :  +- CometBroadcastHashJoin\n            :                                      :     :                             :     :- CometFilter\n            :                                      :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                                      :     :                             :     +- CometBroadcastExchange\n            :                                      :     :                             :        +- CometFilter\n            :                                      :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :                                      :     :                             +- CometBroadcastExchange\n            :                                      :     :                                +- CometProject\n            :       
                               :     :                                   +- CometFilter\n            :                                      :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :     +- BroadcastExchange\n            :                                      :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :           :     +- CometFilter\n            :                                      :           :        +- CometScan parquet spark_catalog.default.item\n            :                                      :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :                    +- BroadcastHashJoin\n            :                                      :                       :- BroadcastExchange\n            :                                      :                       :  +- CometFilter\n            :                                      :                       :     +- CometScan parquet spark_catalog.default.item\n            :                                      :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                                      :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                                      :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n    
        :                                      :                          :                 +- BroadcastHashJoin\n            :                                      :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                                      :                          :                    :     :- CometFilter\n            :                                      :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                                      :                          :                    :     +- BroadcastExchange\n            :                                      :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :                    :           :     +- CometFilter\n            :                                      :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                                      :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :                    :                 +- CometProject\n            :                                      :                          :                    :                    +- CometBroadcastHashJoin\n            :                                      :                          :                    :                       :- CometProject\n            :                                      :                          :                    :                       :  +- CometBroadcastHashJoin\n            :                                      :                          :                    :                       :     :- CometFilter\n            :                                      :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                                      :                          :                    :                       :     +- CometBroadcastExchange\n            :      
                                :                          :                    :                       :        +- CometFilter\n            :                                      :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                                      :                          :                    :                       +- CometBroadcastExchange\n            :                                      :                          :                    :                          +- CometProject\n            :                                      :                          :                    :                             +- CometFilter\n            :                                      :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :                          :                    +- BroadcastExchange\n            :                                      :                          :                       +- CometProject\n            :                                      :                          :                          +- CometFilter\n            :                                      :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                                +- CometProject\n            :                                      :                                   +- CometBroadcastHashJoin\n            :                                      :                                      :- CometProject\n            :                                      :                                      :  +- CometBroadcastHashJoin\n            :                                      :                                      :     :- CometFilter\n            :                                      :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                                      :                                      :     +- CometBroadcastExchange\n            :                                      :                                      :        +- CometFilter\n            :                                      :                                      :           +- CometScan parquet spark_catalog.default.item\n            :                                      :                                      +- CometBroadcastExchange\n            :                                      :                                         +- CometProject\n            :                                      :                                            +- CometFilter\n            :                                      :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :                                      +- BroadcastExchange\n            : 
                                        +- CometProject\n            :                                            +- CometFilter\n            :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :- HashAggregate\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n            :        +- HashAggregate\n            :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n            :                 +-  Union [COMET: Union is not enabled because the following children are not native (Filter, Filter, Filter)]\n            :                    :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :                    :  :  +- Subquery\n            :                    :  :     +- HashAggregate\n            :                    :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :  :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                    :  :              +- CometUnion\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 +- CometProject\n            :                    :  :                    +- CometBroadcastHashJoin\n            :                    :  :                       :- CometFilter\n            :                    :  :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :  :                       +- CometBroadcastExchange\n            :                    :  :                          +- CometProject\n            :                    :  :                             +- CometFilter\n            :                    :  :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  +- HashAggregate\n            :                    :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :              +- BroadcastHashJoin\n            :                    :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :                    :                 :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :  :-  Sort 
[COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :                    :                 :     :  :        :  +- Subquery\n            :                    :                 :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                    :                 :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                    :                 :     :  :        :              +- CometProject\n            :                    :                 :     :  :        :                 +- CometFilter\n            :                    :                 :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :  :        +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :           +- BroadcastHashJoin\n            :                    :                 :     :              :- BroadcastExchange\n            :                    :                 :     :              :  +- CometFilter\n            :                    :                 :     :              :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                 +- BroadcastHashJoin\n            :                    :                 :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :     :                 :                    :     :- CometFilter\n            :                    :                 :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :     :                 :                    :     +- BroadcastExchange\n            :                    :                 :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :           :     +- CometFilter\n            :                    :                 :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :                 +- CometProject\n            :                    :                 :     :                 :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :                       :- CometProject\n            :                    :                 :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :    
                   :     :- CometFilter\n            :                    :                 :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :     :                 :                    :                       :     +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                       :        +- CometFilter\n            :                    :                 :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :                       +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                          +- CometProject\n            :                    :                 :     :                 :                    :                             +- CometFilter\n            :                    :                 :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 :                    +- BroadcastExchange\n            :                    :                 :     :                 :                       +- CometProject\n            :                    :                 :     :                 :                          +- CometFilter\n            :                    :                 :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                       +- CometProject\n            :                    :                 :     :                          +- CometBroadcastHashJoin\n            :                    :                 :     :                             :- CometProject\n            :                    :                 :     :                             :  +- CometBroadcastHashJoin\n            :                    :                 :     :                             :     :- CometFilter\n            :                    :                 :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :     :                             :     +- CometBroadcastExchange\n            :                    :                 :     :                             :        +- CometFilter\n            :                    :                 :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                             +- CometBroadcastExchange\n            :                    :                 :     :                                +- CometProject\n            :       
             :                 :     :                                   +- CometFilter\n            :                    :                 :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     +- BroadcastExchange\n            :                    :                 :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :           :     +- CometFilter\n            :                    :                 :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                    +- BroadcastHashJoin\n            :                    :                 :                       :- BroadcastExchange\n            :                    :                 :                       :  +- CometFilter\n            :                    :                 :                       :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n    
        :                    :                 :                          :                 +- BroadcastHashJoin\n            :                    :                 :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :                          :                    :     :- CometFilter\n            :                    :                 :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :                          :                    :     +- BroadcastExchange\n            :                    :                 :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :           :     +- CometFilter\n            :                    :                 :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :                 +- CometProject\n            :                    :                 :                          :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :- CometProject\n            :                    :                 :                          :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :     :- CometFilter\n            :                    :                 :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :                          :                    :                       :     +- CometBroadcastExchange\n            :      
              :                 :                          :                    :                       :        +- CometFilter\n            :                    :                 :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :                       +- CometBroadcastExchange\n            :                    :                 :                          :                    :                          +- CometProject\n            :                    :                 :                          :                    :                             +- CometFilter\n            :                    :                 :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          :                    +- BroadcastExchange\n            :                    :                 :                          :                       +- CometProject\n            :                    :                 :                          :                          +- CometFilter\n            :                    :                 :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                                +- CometProject\n            :                    :                 :                                   +- CometBroadcastHashJoin\n            :                    :                 :                                      :- CometProject\n            :                    :                 :                                      :  +- CometBroadcastHashJoin\n            :                    :                 :                                      :     :- CometFilter\n            :                    :                 :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :                                      :     +- CometBroadcastExchange\n            :                    :                 :                                      :        +- CometFilter\n            :                    :                 :                                      :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                                      +- CometBroadcastExchange\n            :                    :                 :                                         +- CometProject\n            :                    :                 :                                            +- CometFilter\n            :                    :                 :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 +- BroadcastExchange\n            : 
                   :                    +- CometProject\n            :                    :                       +- CometFilter\n            :                    :                          +- CometScan parquet spark_catalog.default.date_dim\n            :                    :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :                    :  :  +- Subquery\n            :                    :  :     +- HashAggregate\n            :                    :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :  :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                    :  :              +- CometUnion\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 +- CometProject\n            :                    :  :                    +- CometBroadcastHashJoin\n            :                    :  :                       :- CometFilter\n            :                    :  :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :  :                       +- CometBroadcastExchange\n            :                    :  :                          +- CometProject\n            :                    :  :                             +- CometFilter\n            :                    :  :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  +- HashAggregate\n            :                    :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    : 
       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :              +- BroadcastHashJoin\n            :                    :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :                    :                 :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :                    :                 :     :  :        :  +- Subquery\n            :                    :                 :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                    :                 :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                    :                 :     :  :        :              +- CometProject\n            :                    :                 :     :  :        :                 +- CometFilter\n            :                    :                 :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :  :        +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :           +- BroadcastHashJoin\n            :                    :                 :     :              :- BroadcastExchange\n            :                    :                 :     :              :  +- CometFilter\n            :                    :                 :     :              :     +- CometScan parquet spark_catalog.default.item\n            :                    :               
  :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                 +- BroadcastHashJoin\n            :                    :                 :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :     :                 :                    :     :- CometFilter\n            :                    :                 :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :     :                 :                    :     +- BroadcastExchange\n            :                    :                 :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :           :     +- CometFilter\n            :                    :                 :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native 
(Exchange)]\n            :                    :                 :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :                 +- CometProject\n            :                    :                 :     :                 :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :                       :- CometProject\n            :                    :                 :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :                       :     :- CometFilter\n            :                    :                 :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :     :                 :                    :                       :     +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                       :        +- CometFilter\n            :                    :                 :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :                       +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                          +- CometProject\n            :                    :                 :     :                 :                    :                             +- CometFilter\n            :                    :                 :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 :                    +- BroadcastExchange\n            :                    :                 :     :                 :                       +- CometProject\n            :                    :                 :     :                 :                          +- CometFilter\n            :                    :                 :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                       +- CometProject\n            :                    :                 :     :                          +- CometBroadcastHashJoin\n            :                    :                 :     :                             :- CometProject\n            :                    :       
          :     :                             :  +- CometBroadcastHashJoin\n            :                    :                 :     :                             :     :- CometFilter\n            :                    :                 :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :     :                             :     +- CometBroadcastExchange\n            :                    :                 :     :                             :        +- CometFilter\n            :                    :                 :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                             +- CometBroadcastExchange\n            :                    :                 :     :                                +- CometProject\n            :                    :                 :     :                                   +- CometFilter\n            :                    :                 :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     +- BroadcastExchange\n            :                    :                 :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :           :     +- CometFilter\n            :                    :                 :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                    +- BroadcastHashJoin\n            :                    :                 :                       :- BroadcastExchange\n            :                    :                 :                       :  +- CometFilter\n            :                    :                 :                       :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: 
spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                          :                 +- BroadcastHashJoin\n            :                    :                 :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :                          :                    :     :- CometFilter\n            :                    :                 :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :                          :                    :     +- BroadcastExchange\n            :                    :                 :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :           :     +- CometFilter\n            :                    :                 :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :    
             +- CometProject\n            :                    :                 :                          :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :- CometProject\n            :                    :                 :                          :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :     :- CometFilter\n            :                    :                 :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :                          :                    :                       :     +- CometBroadcastExchange\n            :                    :                 :                          :                    :                       :        +- CometFilter\n            :                    :                 :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :                       +- CometBroadcastExchange\n            :                    :                 :                          :                    :                          +- CometProject\n            :                    :                 :                          :                    :                             +- CometFilter\n            :                    :                 :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          :                    +- BroadcastExchange\n            :                    :                 :                          :                       +- CometProject\n            :                    :                 :                          :                          +- CometFilter\n            :                    :                 :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                                +- CometProject\n            :                    :                 :                                   +- CometBroadcastHashJoin\n            :                    :                 :                                      :- CometProject\n            :                    :                 :                                      :  +- CometBroadcastHashJoin\n            :                    :                 :                                      :     :- CometFilter\n            :                    :                 :                                      :     :  +- CometScan parquet 
spark_catalog.default.web_sales\n            :                    :                 :                                      :     +- CometBroadcastExchange\n            :                    :                 :                                      :        +- CometFilter\n            :                    :                 :                                      :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                                      +- CometBroadcastExchange\n            :                    :                 :                                         +- CometProject\n            :                    :                 :                                            +- CometFilter\n            :                    :                 :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 +- BroadcastExchange\n            :                    :                    +- CometProject\n            :                    :                       +- CometFilter\n            :                    :                          +- CometScan parquet spark_catalog.default.date_dim\n            :                    +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :                       :  +- Subquery\n            :                       :     +- HashAggregate\n            :                       :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                       :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                       :              +- CometUnion\n            :                       :                 :- CometProject\n            :                       :                 :  +- CometBroadcastHashJoin\n            :                       :                 :     :- CometFilter\n            :                       :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                       :                 :     +- CometBroadcastExchange\n            :                       :                 :        +- CometProject\n            :                       :                 :           +- CometFilter\n            :                       :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                       :                 :- CometProject\n            :                       :                 :  +- CometBroadcastHashJoin\n            :                       :                 :     :- CometFilter\n            :                       :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                       :                 :     +- CometBroadcastExchange\n            :                       :                 :        +- CometProject\n            :                       :                 :           +- CometFilter\n            :                       :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                       :                 +- CometProject\n            :                       :                    +- CometBroadcastHashJoin\n            :                       :                       :- CometFilter\n            :                       :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :                       :                       +- CometBroadcastExchange\n            :                       :                          +- CometProject\n            :                       :                             +- CometFilter\n            :                       :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                       +- HashAggregate\n            :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                                +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                   +- BroadcastHashJoin\n            :                                      :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :                                      :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :     :  :-  Sort 
[COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :                                      :     :  :        :  +- Subquery\n            :                                      :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                                      :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                                      :     :  :        :              +- CometProject\n            :                                      :     :  :        :                 +- CometFilter\n            :                                      :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :     :  :        +- CometScan parquet spark_catalog.default.web_sales\n            :                                      :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :     :           +- BroadcastHashJoin\n            :                                      :     :              :- BroadcastExchange\n            :                                      :     :              :  +- CometFilter\n            :                                      :     :              :     +- CometScan parquet spark_catalog.default.item\n            :                                      :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                                      :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :     :                 :                 +- BroadcastHashJoin\n            :                                      :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                                      :     :                 :                    :     :- CometFilter\n            :                                      :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                                      :     :                 :                    :     +- BroadcastExchange\n            :                                      :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :                    :           :     +- CometFilter\n            :                                      :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                                      :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :                    :                 +- CometProject\n            :                                      :     :                 :                    :                    +- CometBroadcastHashJoin\n            :                                      :     :                 :                    :                       :- CometProject\n            :                                      :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :                                      :     :                 :                    :    
                   :     :- CometFilter\n            :                                      :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                                      :     :                 :                    :                       :     +- CometBroadcastExchange\n            :                                      :     :                 :                    :                       :        +- CometFilter\n            :                                      :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                                      :     :                 :                    :                       +- CometBroadcastExchange\n            :                                      :     :                 :                    :                          +- CometProject\n            :                                      :     :                 :                    :                             +- CometFilter\n            :                                      :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :     :                 :                    +- BroadcastExchange\n            :                                      :     :                 :                       +- CometProject\n            :                                      :     :                 :                          +- CometFilter\n            :                                      :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                       +- CometProject\n            :                                      :     :                          +- CometBroadcastHashJoin\n            :                                      :     :                             :- CometProject\n            :                                      :     :                             :  +- CometBroadcastHashJoin\n            :                                      :     :                             :     :- CometFilter\n            :                                      :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                                      :     :                             :     +- CometBroadcastExchange\n            :                                      :     :                             :        +- CometFilter\n            :                                      :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :                                      :     :                             +- CometBroadcastExchange\n            :                                      :     :                                +- CometProject\n            :       
                               :     :                                   +- CometFilter\n            :                                      :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :     +- BroadcastExchange\n            :                                      :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :           :     +- CometFilter\n            :                                      :           :        +- CometScan parquet spark_catalog.default.item\n            :                                      :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :                    +- BroadcastHashJoin\n            :                                      :                       :- BroadcastExchange\n            :                                      :                       :  +- CometFilter\n            :                                      :                       :     +- CometScan parquet spark_catalog.default.item\n            :                                      :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                                      :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                                      :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n    
        :                                      :                          :                 +- BroadcastHashJoin\n            :                                      :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                                      :                          :                    :     :- CometFilter\n            :                                      :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                                      :                          :                    :     +- BroadcastExchange\n            :                                      :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :                    :           :     +- CometFilter\n            :                                      :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                                      :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :                    :                 +- CometProject\n            :                                      :                          :                    :                    +- CometBroadcastHashJoin\n            :                                      :                          :                    :                       :- CometProject\n            :                                      :                          :                    :                       :  +- CometBroadcastHashJoin\n            :                                      :                          :                    :                       :     :- CometFilter\n            :                                      :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                                      :                          :                    :                       :     +- CometBroadcastExchange\n            :      
                                :                          :                    :                       :        +- CometFilter\n            :                                      :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                                      :                          :                    :                       +- CometBroadcastExchange\n            :                                      :                          :                    :                          +- CometProject\n            :                                      :                          :                    :                             +- CometFilter\n            :                                      :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :                          :                    +- BroadcastExchange\n            :                                      :                          :                       +- CometProject\n            :                                      :                          :                          +- CometFilter\n            :                                      :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                                +- CometProject\n            :                                      :                                   +- CometBroadcastHashJoin\n            :                                      :                                      :- CometProject\n            :                                      :                                      :  +- CometBroadcastHashJoin\n            :                                      :                                      :     :- CometFilter\n            :                                      :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                                      :                                      :     +- CometBroadcastExchange\n            :                                      :                                      :        +- CometFilter\n            :                                      :                                      :           +- CometScan parquet spark_catalog.default.item\n            :                                      :                                      +- CometBroadcastExchange\n            :                                      :                                         +- CometProject\n            :                                      :                                            +- CometFilter\n            :                                      :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :                                      +- BroadcastExchange\n            : 
                                        +- CometProject\n            :                                            +- CometFilter\n            :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :- HashAggregate\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n            :        +- HashAggregate\n            :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n            :                 +-  Union [COMET: Union is not enabled because the following children are not native (Filter, Filter, Filter)]\n            :                    :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :                    :  :  +- Subquery\n            :                    :  :     +- HashAggregate\n            :                    :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :  :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                    :  :              +- CometUnion\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 +- CometProject\n            :                    :  :                    +- CometBroadcastHashJoin\n            :                    :  :                       :- CometFilter\n            :                    :  :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :  :                       +- CometBroadcastExchange\n            :                    :  :                          +- CometProject\n            :                    :  :                             +- CometFilter\n            :                    :  :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  +- HashAggregate\n            :                    :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :              +- BroadcastHashJoin\n            :                    :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :                    :                 :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :  :-  Sort 
[COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :     +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n            :                    :                 :     :  :        :  +- Subquery\n            :                    :                 :     :  :        :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                    :                 :     :  :        :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :        :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n            :                    :                 :     :  :        :              +- CometProject\n            :                    :                 :     :  :        :                 +- CometFilter\n            :                    :                 :     :  :        :                    +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :  :        +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :           +- BroadcastHashJoin\n            :                    :                 :     :              :- BroadcastExchange\n            :                    :                 :     :              :  +- CometFilter\n            :                    :                 :     :              :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                 +- BroadcastHashJoin\n            :                    :                 :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :     :                 :                    :     :- CometFilter\n            :                    :                 :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :     :                 :                    :     +- BroadcastExchange\n            :                    :                 :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :           :     +- CometFilter\n            :                    :                 :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :                 +- CometProject\n            :                    :                 :     :                 :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :                       :- CometProject\n            :                    :                 :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :    
                   :     :- CometFilter\n            :                    :                 :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :     :                 :                    :                       :     +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                       :        +- CometFilter\n            :                    :                 :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :                       +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                          +- CometProject\n            :                    :                 :     :                 :                    :                             +- CometFilter\n            :                    :                 :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 :                    +- BroadcastExchange\n            :                    :                 :     :                 :                       +- CometProject\n            :                    :                 :     :                 :                          +- CometFilter\n            :                    :                 :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                       +- CometProject\n            :                    :                 :     :                          +- CometBroadcastHashJoin\n            :                    :                 :     :                             :- CometProject\n            :                    :                 :     :                             :  +- CometBroadcastHashJoin\n            :                    :                 :     :                             :     :- CometFilter\n            :                    :                 :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :     :                             :     +- CometBroadcastExchange\n            :                    :                 :     :                             :        +- CometFilter\n            :                    :                 :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                             +- CometBroadcastExchange\n            :                    :                 :     :                                +- CometProject\n            :       
             :                 :     :                                   +- CometFilter\n            :                    :                 :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     +- BroadcastExchange\n            :                    :                 :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :           :     +- CometFilter\n            :                    :                 :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                    +- BroadcastHashJoin\n            :                    :                 :                       :- BroadcastExchange\n            :                    :                 :                       :  +- CometFilter\n            :                    :                 :                       :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n    
        :                    :                 :                          :                 +- BroadcastHashJoin\n            :                    :                 :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :                          :                    :     :- CometFilter\n            :                    :                 :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :                          :                    :     +- BroadcastExchange\n            :                    :                 :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :           :     +- CometFilter\n            :                    :                 :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :                 +- CometProject\n            :                    :                 :                          :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :- CometProject\n            :                    :                 :                          :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :     :- CometFilter\n            :                    :                 :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :                          :                    :                       :     +- CometBroadcastExchange\n            :      
              :                 :                          :                    :                       :        +- CometFilter\n            :                    :                 :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :                       +- CometBroadcastExchange\n            :                    :                 :                          :                    :                          +- CometProject\n            :                    :                 :                          :                    :                             +- CometFilter\n            :                    :                 :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          :                    +- BroadcastExchange\n            :                    :                 :                          :                       +- CometProject\n            :                    :                 :                          :                          +- CometFilter\n            :                    :                 :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                                +- CometProject\n            :                    :                 :                                   +- CometBroadcastHashJoin\n            :                    :                 :                                      :- CometProject\n            :                    :                 :                                      :  +- CometBroadcastHashJoin\n            :                    :                 :                                      :     :- CometFilter\n            :                    :                 :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :                                      :     +- CometBroadcastExchange\n            :                    :                 :                                      :        +- CometFilter\n            :                    :                 :                                      :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                                      +- CometBroadcastExchange\n            :                    :                 :                                         +- CometProject\n            :                    :                 :                                            +- CometFilter\n            :                    :                 :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 +- BroadcastExchange\n            : 
                   :                    +- CometProject\n            :                    :                       +- CometFilter\n            :                    :                          +- CometScan parquet spark_catalog.default.date_dim\n            :                    :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :                    :  :  +- Subquery\n            :                    :  :     +- HashAggregate\n            :                    :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :  :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                    :  :              +- CometUnion\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 :- CometProject\n            :                    :  :                 :  +- CometBroadcastHashJoin\n            :                    :  :                 :     :- CometFilter\n            :                    :  :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :  :                 :     +- CometBroadcastExchange\n            :                    :  :                 :        +- CometProject\n            :                    :  :                 :           +- CometFilter\n            :                    :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  :                 +- CometProject\n            :                    :  :                    +- CometBroadcastHashJoin\n            :                    :  :                       :- CometFilter\n            :                    :  :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :  :                       +- CometBroadcastExchange\n            :                    :  :                          +- CometProject\n            :                    :  :                             +- CometFilter\n            :                    :  :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :  +- HashAggregate\n            :                    :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    : 
       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :              +- BroadcastHashJoin\n            :                    :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :                    :                 :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :  :     +- CometFilter\n            :                    :                 :     :  :        +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :           +- BroadcastHashJoin\n            :                    :                 :     :              :- BroadcastExchange\n            :                    :                 :     :              :  +- CometFilter\n            :                    :                 :     :              :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :      
           :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                 +- BroadcastHashJoin\n            :                    :                 :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :     :                 :                    :     :- CometFilter\n            :                    :                 :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :     :                 :                    :     +- BroadcastExchange\n            :                    :                 :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :           :     +- CometFilter\n            :                    :                 :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                 :                    :                 +- CometProject\n            :                    :                 :     :                 :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :                       :- CometProject\n            :                    :                 :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :     :                 :                    :                       :     :- CometFilter\n            :                    :                 :     :                 :    
                :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :     :                 :                    :                       :     +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                       :        +- CometFilter\n            :                    :                 :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                 :                    :                       +- CometBroadcastExchange\n            :                    :                 :     :                 :                    :                          +- CometProject\n            :                    :                 :     :                 :                    :                             +- CometFilter\n            :                    :                 :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 :                    +- BroadcastExchange\n            :                    :                 :     :                 :                       +- CometProject\n            :                    :                 :     :                 :                          +- CometFilter\n            :                    :                 :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :     :                       +- CometProject\n            :                    :                 :     :                          +- CometBroadcastHashJoin\n            :                    :                 :     :                             :- CometProject\n            :                    :                 :     :                             :  +- CometBroadcastHashJoin\n            :                    :                 :     :                             :     :- CometFilter\n            :                    :                 :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :     :                             :     +- CometBroadcastExchange\n            :                    :                 :     :                             :        +- CometFilter\n            :                    :                 :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :     :                             +- CometBroadcastExchange\n            :                    :                 :     :                                +- CometProject\n            :                    :                 :     :                                   +- CometFilter\n            :                   
 :                 :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :     +- BroadcastExchange\n            :                    :                 :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :           :     +- CometFilter\n            :                    :                 :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                    +- BroadcastHashJoin\n            :                    :                 :                       :- BroadcastExchange\n            :                    :                 :                       :  +- CometFilter\n            :                    :                 :                       :     +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                    :                 :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                          :                 +- BroadcastHashJoin\n       
     :                    :                 :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                    :                 :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                    :                 :                          :                    :     :- CometFilter\n            :                    :                 :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :                 :                          :                    :     +- BroadcastExchange\n            :                    :                 :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :                 :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :           :     +- CometFilter\n            :                    :                 :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                          :                    :                 +- CometProject\n            :                    :                 :                          :                    :                    +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :- CometProject\n            :                    :                 :                          :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                 :                          :                    :                       :     :- CometFilter\n            :                    :                 :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :                 :                          :                    :                       :     +- CometBroadcastExchange\n            :                    :                 :                          :                    :                       :        +- 
CometFilter\n            :                    :                 :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                          :                    :                       +- CometBroadcastExchange\n            :                    :                 :                          :                    :                          +- CometProject\n            :                    :                 :                          :                    :                             +- CometFilter\n            :                    :                 :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          :                    +- BroadcastExchange\n            :                    :                 :                          :                       +- CometProject\n            :                    :                 :                          :                          +- CometFilter\n            :                    :                 :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                    :                 :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :                 :                                +- CometProject\n            :                    :                 :                                   +- CometBroadcastHashJoin\n            :                    :                 :                                      :- CometProject\n            :                    :                 :                                      :  +- CometBroadcastHashJoin\n            :                    :                 :                                      :     :- CometFilter\n            :                    :                 :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                    :                 :                                      :     +- CometBroadcastExchange\n            :                    :                 :                                      :        +- CometFilter\n            :                    :                 :                                      :           +- CometScan parquet spark_catalog.default.item\n            :                    :                 :                                      +- CometBroadcastExchange\n            :                    :                 :                                         +- CometProject\n            :                    :                 :                                            +- CometFilter\n            :                    :                 :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                 +- BroadcastExchange\n            :                    :                    +- CometProject\n            :                    :                       
+- CometFilter\n            :                    :                          +- CometScan parquet spark_catalog.default.date_dim\n            :                    +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :                       :  +- Subquery\n            :                       :     +- HashAggregate\n            :                       :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                       :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                       :              +- CometUnion\n            :                       :                 :- CometProject\n            :                       :                 :  +- CometBroadcastHashJoin\n            :                       :                 :     :- CometFilter\n            :                       :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                       :                 :     +- CometBroadcastExchange\n            :                       :                 :        +- CometProject\n            :                       :                 :           +- CometFilter\n            :                       :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                       :                 :- CometProject\n            :                       :                 :  +- CometBroadcastHashJoin\n            :                       :                 :     :- CometFilter\n            :                       :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                       :                 :     +- CometBroadcastExchange\n            :                       :                 :        +- CometProject\n            :                       :                 :           +- CometFilter\n            :                       :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                       :                 +- CometProject\n            :                       :                    +- CometBroadcastHashJoin\n            :                       :                       :- CometFilter\n            :                       :                       :  +- CometScan parquet spark_catalog.default.web_sales\n            :                       :                       +- CometBroadcastExchange\n            :                       :                          +- CometProject\n            :                       :                             +- CometFilter\n            :                       :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                       +- HashAggregate\n            :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native 
(Project)]\n            :                                +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                   +- BroadcastHashJoin\n            :                                      :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n            :                                      :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :  :     +- CometFilter\n            :                                      :     :  :        +- CometScan parquet spark_catalog.default.web_sales\n            :                                      :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :     :           +- BroadcastHashJoin\n            :                                      :     :              :- BroadcastExchange\n            :                                      :     :              :  +- CometFilter\n            :                                      :     :              :     +- CometScan parquet spark_catalog.default.item\n            :                                      :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children 
are not native (Project)]\n            :                                      :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :     :                 :                 +- BroadcastHashJoin\n            :                                      :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                                      :     :                 :                    :     :- CometFilter\n            :                                      :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                                      :     :                 :                    :     +- BroadcastExchange\n            :                                      :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :                    :           :     +- CometFilter\n            :                                      :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                                      :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                 :                    :                 +- CometProject\n            :                                      :     :                 :                    :                    +- CometBroadcastHashJoin\n            :                                      :     :                 :                    :                       :- CometProject\n            :                                      :     :                 :                    :                       :  +- CometBroadcastHashJoin\n            :                                      :     :                 :                    :                       :     :- CometFilter\n            :                                      :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n 
           :                                      :     :                 :                    :                       :     +- CometBroadcastExchange\n            :                                      :     :                 :                    :                       :        +- CometFilter\n            :                                      :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                                      :     :                 :                    :                       +- CometBroadcastExchange\n            :                                      :     :                 :                    :                          +- CometProject\n            :                                      :     :                 :                    :                             +- CometFilter\n            :                                      :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :     :                 :                    +- BroadcastExchange\n            :                                      :     :                 :                       +- CometProject\n            :                                      :     :                 :                          +- CometFilter\n            :                                      :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :     :                       +- CometProject\n            :                                      :     :                          +- CometBroadcastHashJoin\n            :                                      :     :                             :- CometProject\n            :                                      :     :                             :  +- CometBroadcastHashJoin\n            :                                      :     :                             :     :- CometFilter\n            :                                      :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                                      :     :                             :     +- CometBroadcastExchange\n            :                                      :     :                             :        +- CometFilter\n            :                                      :     :                             :           +- CometScan parquet spark_catalog.default.item\n            :                                      :     :                             +- CometBroadcastExchange\n            :                                      :     :                                +- CometProject\n            :                                      :     :                                   +- CometFilter\n            :                                      :     :                                      +- CometScan parquet 
spark_catalog.default.date_dim\n            :                                      :     +- BroadcastExchange\n            :                                      :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :           :     +- CometFilter\n            :                                      :           :        +- CometScan parquet spark_catalog.default.item\n            :                                      :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :                    +- BroadcastHashJoin\n            :                                      :                       :- BroadcastExchange\n            :                                      :                       :  +- CometFilter\n            :                                      :                       :     +- CometScan parquet spark_catalog.default.item\n            :                                      :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                                      :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            :                                      :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :                          :                 +- BroadcastHashJoin\n            :                                      :                          :             
       :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :                                      :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :                                      :                          :                    :     :- CometFilter\n            :                                      :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                                      :                          :                    :     +- BroadcastExchange\n            :                                      :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                                      :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :                    :           :     +- CometFilter\n            :                                      :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n            :                                      :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                          :                    :                 +- CometProject\n            :                                      :                          :                    :                    +- CometBroadcastHashJoin\n            :                                      :                          :                    :                       :- CometProject\n            :                                      :                          :                    :                       :  +- CometBroadcastHashJoin\n            :                                      :                          :                    :                       :     :- CometFilter\n            :                                      :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                                      :                          :                    :                       :     +- CometBroadcastExchange\n            :                                      :                          :                    :                       :        +- CometFilter\n            :                                      :                         
 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n            :                                      :                          :                    :                       +- CometBroadcastExchange\n            :                                      :                          :                    :                          +- CometProject\n            :                                      :                          :                    :                             +- CometFilter\n            :                                      :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :                          :                    +- BroadcastExchange\n            :                                      :                          :                       +- CometProject\n            :                                      :                          :                          +- CometFilter\n            :                                      :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n            :                                      :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                                      :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                      :                                +- CometProject\n            :                                      :                                   +- CometBroadcastHashJoin\n            :                                      :                                      :- CometProject\n            :                                      :                                      :  +- CometBroadcastHashJoin\n            :                                      :                                      :     :- CometFilter\n            :                                      :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                                      :                                      :     +- CometBroadcastExchange\n            :                                      :                                      :        +- CometFilter\n            :                                      :                                      :           +- CometScan parquet spark_catalog.default.item\n            :                                      :                                      +- CometBroadcastExchange\n            :                                      :                                         +- CometProject\n            :                                      :                                            +- CometFilter\n            :                                      :                                               +- CometScan parquet spark_catalog.default.date_dim\n            :                                      +- BroadcastExchange\n            :                                         +- CometProject\n            :                                            +- CometFilter\n            :                                               +- CometScan 
parquet spark_catalog.default.date_dim\n            +- HashAggregate\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                     +- HashAggregate\n                        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n                              +-  Union [COMET: Union is not enabled because the following children are not native (Filter, Filter, Filter)]\n                                 :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                                 :  :  +- Subquery\n                                 :  :     +- HashAggregate\n                                 :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :  :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                                 :  :              +- CometUnion\n                                 :  :                 :- CometProject\n                                 :  :                 :  +- CometBroadcastHashJoin\n                                 :  :                 :     :- CometFilter\n                                 :  :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                 :  :                 :     +- CometBroadcastExchange\n                                 :  :                 :        +- CometProject\n                                 :  :                 :           +- CometFilter\n                                 :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                                 :  :                 :- CometProject\n                                 :  :                 :  +- CometBroadcastHashJoin\n                                 :  :                 :     :- CometFilter\n                                 :  :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                 :  :                 :     +- CometBroadcastExchange\n                                 :  :                 :        +- CometProject\n                                 :  :                 :           +- CometFilter\n                                 :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                                 :  :                 +- CometProject\n                                 :  :                    +- CometBroadcastHashJoin\n                                 :  :                       :- CometFilter\n                                 :  :                       :  +- CometScan parquet spark_catalog.default.web_sales\n          
                       :  :                       +- CometBroadcastExchange\n                                 :  :                          +- CometProject\n                                 :  :                             +- CometFilter\n                                 :  :                                +- CometScan parquet spark_catalog.default.date_dim\n                                 :  +- HashAggregate\n                                 :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                 :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :              +- BroadcastHashJoin\n                                 :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n                                 :                 :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :  :     +- CometFilter\n                                 :                 :     :  :        +- CometScan parquet spark_catalog.default.store_sales\n                                 :                 :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :     :           +- BroadcastHashJoin\n                                 :                 :     :              :- BroadcastExchange\n                                 :                 :     :              :  +- CometFilter\n                                 :                 :     :              :     +- CometScan parquet spark_catalog.default.item\n                                 :                 :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :                 :  +-  Exchange [COMET: 
Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                 :                 :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                 :                 :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :     :                 :                 +- BroadcastHashJoin\n                                 :                 :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                                 :                 :     :                 :                    :     :- CometFilter\n                                 :                 :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                 :                 :     :                 :                    :     +- BroadcastExchange\n                                 :                 :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :                 :                    :           :     +- CometFilter\n                                 :                 :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n                                 :                 :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :                 :                    :                 +- 
CometProject\n                                 :                 :     :                 :                    :                    +- CometBroadcastHashJoin\n                                 :                 :     :                 :                    :                       :- CometProject\n                                 :                 :     :                 :                    :                       :  +- CometBroadcastHashJoin\n                                 :                 :     :                 :                    :                       :     :- CometFilter\n                                 :                 :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                 :                 :     :                 :                    :                       :     +- CometBroadcastExchange\n                                 :                 :     :                 :                    :                       :        +- CometFilter\n                                 :                 :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                                 :                 :     :                 :                    :                       +- CometBroadcastExchange\n                                 :                 :     :                 :                    :                          +- CometProject\n                                 :                 :     :                 :                    :                             +- CometFilter\n                                 :                 :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 :     :                 :                    +- BroadcastExchange\n                                 :                 :     :                 :                       +- CometProject\n                                 :                 :     :                 :                          +- CometFilter\n                                 :                 :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :                       +- CometProject\n                                 :                 :     :                          +- CometBroadcastHashJoin\n                                 :                 :     :                             :- CometProject\n                                 :                 :     :                             :  +- CometBroadcastHashJoin\n                                 :                 :     :                             :     :- CometFilter\n                                 :                 :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                 :                 :     :                 
            :     +- CometBroadcastExchange\n                                 :                 :     :                             :        +- CometFilter\n                                 :                 :     :                             :           +- CometScan parquet spark_catalog.default.item\n                                 :                 :     :                             +- CometBroadcastExchange\n                                 :                 :     :                                +- CometProject\n                                 :                 :     :                                   +- CometFilter\n                                 :                 :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 :     +- BroadcastExchange\n                                 :                 :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :           :     +- CometFilter\n                                 :                 :           :        +- CometScan parquet spark_catalog.default.item\n                                 :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :                    +- BroadcastHashJoin\n                                 :                 :                       :- BroadcastExchange\n                                 :                 :                       :  +- CometFilter\n                                 :                 :                       :     +- CometScan parquet spark_catalog.default.item\n                                 :                 :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                 :                 :                          :        +-  Exchange [COMET: Comet 
shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                 :                 :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :                          :                 +- BroadcastHashJoin\n                                 :                 :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                                 :                 :                          :                    :     :- CometFilter\n                                 :                 :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                 :                 :                          :                    :     +- BroadcastExchange\n                                 :                 :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                          :                    :           :     +- CometFilter\n                                 :                 :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n                                 :                 :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                          :                    :                 +- CometProject\n                                 :                 :                          :                    :                    +- CometBroadcastHashJoin\n                                 :                 :                          :                    :                       :- CometProject\n                                 :                 :                          :                    :            
           :  +- CometBroadcastHashJoin\n                                 :                 :                          :                    :                       :     :- CometFilter\n                                 :                 :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                 :                 :                          :                    :                       :     +- CometBroadcastExchange\n                                 :                 :                          :                    :                       :        +- CometFilter\n                                 :                 :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                                 :                 :                          :                    :                       +- CometBroadcastExchange\n                                 :                 :                          :                    :                          +- CometProject\n                                 :                 :                          :                    :                             +- CometFilter\n                                 :                 :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 :                          :                    +- BroadcastExchange\n                                 :                 :                          :                       +- CometProject\n                                 :                 :                          :                          +- CometFilter\n                                 :                 :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                                +- CometProject\n                                 :                 :                                   +- CometBroadcastHashJoin\n                                 :                 :                                      :- CometProject\n                                 :                 :                                      :  +- CometBroadcastHashJoin\n                                 :                 :                                      :     :- CometFilter\n                                 :                 :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                 :                 :                                      :     +- CometBroadcastExchange\n                                 :                 :                                      :        +- CometFilter\n                                 :                 :                                      :           +- CometScan parquet spark_catalog.default.item\n                           
      :                 :                                      +- CometBroadcastExchange\n                                 :                 :                                         +- CometProject\n                                 :                 :                                            +- CometFilter\n                                 :                 :                                               +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 +- BroadcastExchange\n                                 :                    +- CometProject\n                                 :                       +- CometFilter\n                                 :                          +- CometScan parquet spark_catalog.default.date_dim\n                                 :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                                 :  :  +- Subquery\n                                 :  :     +- HashAggregate\n                                 :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :  :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                                 :  :              +- CometUnion\n                                 :  :                 :- CometProject\n                                 :  :                 :  +- CometBroadcastHashJoin\n                                 :  :                 :     :- CometFilter\n                                 :  :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                 :  :                 :     +- CometBroadcastExchange\n                                 :  :                 :        +- CometProject\n                                 :  :                 :           +- CometFilter\n                                 :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                                 :  :                 :- CometProject\n                                 :  :                 :  +- CometBroadcastHashJoin\n                                 :  :                 :     :- CometFilter\n                                 :  :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                 :  :                 :     +- CometBroadcastExchange\n                                 :  :                 :        +- CometProject\n                                 :  :                 :           +- CometFilter\n                                 :  :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                                 :  :                 +- CometProject\n                                 :  :                    +- CometBroadcastHashJoin\n                                 :  :                       :- CometFilter\n                                 :  :                       :  +- CometScan parquet spark_catalog.default.web_sales\n                                 :  :                       +- CometBroadcastExchange\n                              
   :  :                          +- CometProject\n                                 :  :                             +- CometFilter\n                                 :  :                                +- CometScan parquet spark_catalog.default.date_dim\n                                 :  +- HashAggregate\n                                 :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                 :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :              +- BroadcastHashJoin\n                                 :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n                                 :                 :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :  :     +- CometFilter\n                                 :                 :     :  :        +- CometScan parquet spark_catalog.default.catalog_sales\n                                 :                 :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :     :           +- BroadcastHashJoin\n                                 :                 :     :              :- BroadcastExchange\n                                 :                 :     :              :  +- CometFilter\n                                 :                 :     :              :     +- CometScan parquet spark_catalog.default.item\n                                 :                 :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                 :                 :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                 :                 :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :     :                 :                 +- BroadcastHashJoin\n                                 :                 :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                                 :                 :     :                 :                    :     :- CometFilter\n                                 :                 :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                 :                 :     :                 :                    :     +- BroadcastExchange\n                                 :                 :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :                 :                    :           :     +- CometFilter\n                                 :                 :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n                                 :                 :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :                 :                    :                 +- CometProject\n                                 :                 :     :                 :         
           :                    +- CometBroadcastHashJoin\n                                 :                 :     :                 :                    :                       :- CometProject\n                                 :                 :     :                 :                    :                       :  +- CometBroadcastHashJoin\n                                 :                 :     :                 :                    :                       :     :- CometFilter\n                                 :                 :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                 :                 :     :                 :                    :                       :     +- CometBroadcastExchange\n                                 :                 :     :                 :                    :                       :        +- CometFilter\n                                 :                 :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                                 :                 :     :                 :                    :                       +- CometBroadcastExchange\n                                 :                 :     :                 :                    :                          +- CometProject\n                                 :                 :     :                 :                    :                             +- CometFilter\n                                 :                 :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 :     :                 :                    +- BroadcastExchange\n                                 :                 :     :                 :                       +- CometProject\n                                 :                 :     :                 :                          +- CometFilter\n                                 :                 :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :     :                       +- CometProject\n                                 :                 :     :                          +- CometBroadcastHashJoin\n                                 :                 :     :                             :- CometProject\n                                 :                 :     :                             :  +- CometBroadcastHashJoin\n                                 :                 :     :                             :     :- CometFilter\n                                 :                 :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                 :                 :     :                             :     +- CometBroadcastExchange\n                                 :                 :  
   :                             :        +- CometFilter\n                                 :                 :     :                             :           +- CometScan parquet spark_catalog.default.item\n                                 :                 :     :                             +- CometBroadcastExchange\n                                 :                 :     :                                +- CometProject\n                                 :                 :     :                                   +- CometFilter\n                                 :                 :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 :     +- BroadcastExchange\n                                 :                 :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :           :     +- CometFilter\n                                 :                 :           :        +- CometScan parquet spark_catalog.default.item\n                                 :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :                    +- BroadcastHashJoin\n                                 :                 :                       :- BroadcastExchange\n                                 :                 :                       :  +- CometFilter\n                                 :                 :                       :     +- CometScan parquet spark_catalog.default.item\n                                 :                 :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                 :                 :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                 :                 :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :                          :                 +- BroadcastHashJoin\n                                 :                 :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                 :                 :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                                 :                 :                          :                    :     :- CometFilter\n                                 :                 :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                 :                 :                          :                    :     +- BroadcastExchange\n                                 :                 :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :                 :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                          :                    :           :     +- CometFilter\n                                 :                 :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n                                 :                 :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                          :                    :                 +- CometProject\n                                 :                 :                          :                    :                    +- CometBroadcastHashJoin\n                                 :                 :                          :                    :                       :- CometProject\n                                 :                 :                          :                    :                       :  +- CometBroadcastHashJoin\n                                 :          
       :                          :                    :                       :     :- CometFilter\n                                 :                 :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                 :                 :                          :                    :                       :     +- CometBroadcastExchange\n                                 :                 :                          :                    :                       :        +- CometFilter\n                                 :                 :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                                 :                 :                          :                    :                       +- CometBroadcastExchange\n                                 :                 :                          :                    :                          +- CometProject\n                                 :                 :                          :                    :                             +- CometFilter\n                                 :                 :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 :                          :                    +- BroadcastExchange\n                                 :                 :                          :                       +- CometProject\n                                 :                 :                          :                          +- CometFilter\n                                 :                 :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                 :                 :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :                 :                                +- CometProject\n                                 :                 :                                   +- CometBroadcastHashJoin\n                                 :                 :                                      :- CometProject\n                                 :                 :                                      :  +- CometBroadcastHashJoin\n                                 :                 :                                      :     :- CometFilter\n                                 :                 :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                 :                 :                                      :     +- CometBroadcastExchange\n                                 :                 :                                      :        +- CometFilter\n                                 :                 :                                      :           +- CometScan parquet spark_catalog.default.item\n                                 :                 :                                      +- 
CometBroadcastExchange\n                                 :                 :                                         +- CometProject\n                                 :                 :                                            +- CometFilter\n                                 :                 :                                               +- CometScan parquet spark_catalog.default.date_dim\n                                 :                 +- BroadcastExchange\n                                 :                    +- CometProject\n                                 :                       +- CometFilter\n                                 :                          +- CometScan parquet spark_catalog.default.date_dim\n                                 +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                                    :  +- Subquery\n                                    :     +- HashAggregate\n                                    :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                                    :              +- CometUnion\n                                    :                 :- CometProject\n                                    :                 :  +- CometBroadcastHashJoin\n                                    :                 :     :- CometFilter\n                                    :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                    :                 :     +- CometBroadcastExchange\n                                    :                 :        +- CometProject\n                                    :                 :           +- CometFilter\n                                    :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                                    :                 :- CometProject\n                                    :                 :  +- CometBroadcastHashJoin\n                                    :                 :     :- CometFilter\n                                    :                 :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                    :                 :     +- CometBroadcastExchange\n                                    :                 :        +- CometProject\n                                    :                 :           +- CometFilter\n                                    :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                                    :                 +- CometProject\n                                    :                    +- CometBroadcastHashJoin\n                                    :                       :- CometFilter\n                                    :                       :  +- CometScan parquet spark_catalog.default.web_sales\n                                    :                       +- CometBroadcastExchange\n                                    :                          +- CometProject\n                
                    :                             +- CometFilter\n                                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                                    +- HashAggregate\n                                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                          +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                             +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                                +- BroadcastHashJoin\n                                                   :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                                   :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (SortMergeJoin, BroadcastExchange)]\n                                                   :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                                   :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :     :  :     +- CometFilter\n                                                   :     :  :        +- CometScan parquet spark_catalog.default.web_sales\n                                                   :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :     :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                                   :     :           +- BroadcastHashJoin\n                                                   :     :              :- BroadcastExchange\n                                                   :     :              :  +- CometFilter\n                                                   :     :              :     +- CometScan parquet spark_catalog.default.item\n                                                   :     :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                                   :     :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :     :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              
                     :     :                 :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                                   :     :                 :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :     :                 :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                                   :     :                 :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                                   :     :                 :                 +- BroadcastHashJoin\n                                                   :     :                 :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                                   :     :                 :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                                                   :     :                 :                    :     :- CometFilter\n                                                   :     :                 :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                                   :     :                 :                    :     +- BroadcastExchange\n                                                   :     :                 :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                                   :     :                 :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :     :                 :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :     :                 :                    :           :     +- CometFilter\n                                                   :     :                 :                    :           :        +- CometScan parquet spark_catalog.default.item\n                                                   :     :                 :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :     :                 :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :     :                 :                    :                 +- CometProject\n                                                   :     :                 :                    :                    +- CometBroadcastHashJoin\n                            
                       :     :                 :                    :                       :- CometProject\n                                                   :     :                 :                    :                       :  +- CometBroadcastHashJoin\n                                                   :     :                 :                    :                       :     :- CometFilter\n                                                   :     :                 :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                                   :     :                 :                    :                       :     +- CometBroadcastExchange\n                                                   :     :                 :                    :                       :        +- CometFilter\n                                                   :     :                 :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                                                   :     :                 :                    :                       +- CometBroadcastExchange\n                                                   :     :                 :                    :                          +- CometProject\n                                                   :     :                 :                    :                             +- CometFilter\n                                                   :     :                 :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                                                   :     :                 :                    +- BroadcastExchange\n                                                   :     :                 :                       +- CometProject\n                                                   :     :                 :                          +- CometFilter\n                                                   :     :                 :                             +- CometScan parquet spark_catalog.default.date_dim\n                                                   :     :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :     :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :     :                       +- CometProject\n                                                   :     :                          +- CometBroadcastHashJoin\n                                                   :     :                             :- CometProject\n                                                   :     :                             :  +- CometBroadcastHashJoin\n                                                   :     :                             :     :- CometFilter\n                                                   :     :                             :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                                   :     :                             :     +- CometBroadcastExchange\n                                                   :     :                             :        +- CometFilter\n                             
                      :     :                             :           +- CometScan parquet spark_catalog.default.item\n                                                   :     :                             +- CometBroadcastExchange\n                                                   :     :                                +- CometProject\n                                                   :     :                                   +- CometFilter\n                                                   :     :                                      +- CometScan parquet spark_catalog.default.date_dim\n                                                   :     +- BroadcastExchange\n                                                   :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                                   :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :           :     +- CometFilter\n                                                   :           :        +- CometScan parquet spark_catalog.default.item\n                                                   :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                                   :                    +- BroadcastHashJoin\n                                                   :                       :- BroadcastExchange\n                                                   :                       :  +- CometFilter\n                                                   :                       :     +- CometScan parquet spark_catalog.default.item\n                                                   :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                                   :                          :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :                          :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :                          :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                                   :                          :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                 
  :                          :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                                   :                          :              +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                                   :                          :                 +- BroadcastHashJoin\n                                                   :                          :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                                   :                          :                    :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                                                   :                          :                    :     :- CometFilter\n                                                   :                          :                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                                   :                          :                    :     +- BroadcastExchange\n                                                   :                          :                    :        +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                                   :                          :                    :           :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :                          :                    :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :                          :                    :           :     +- CometFilter\n                                                   :                          :                    :           :        +- CometScan parquet spark_catalog.default.item\n                                                   :                          :                    :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :                          :                    :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :                          :                    :                 +- CometProject\n                                                   :                          :                    :                    +- CometBroadcastHashJoin\n                                                   :                          :                    :                       :- CometProject\n                                                   :                          :                    :                       :  +- CometBroadcastHashJoin\n                                                   :                          :                    :                       :     :- CometFilter\n     
                                              :                          :                    :                       :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                                   :                          :                    :                       :     +- CometBroadcastExchange\n                                                   :                          :                    :                       :        +- CometFilter\n                                                   :                          :                    :                       :           +- CometScan parquet spark_catalog.default.item\n                                                   :                          :                    :                       +- CometBroadcastExchange\n                                                   :                          :                    :                          +- CometProject\n                                                   :                          :                    :                             +- CometFilter\n                                                   :                          :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n                                                   :                          :                    +- BroadcastExchange\n                                                   :                          :                       +- CometProject\n                                                   :                          :                          +- CometFilter\n                                                   :                          :                             +- CometScan parquet spark_catalog.default.date_dim\n                                                   :                          +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                   :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                   :                                +- CometProject\n                                                   :                                   +- CometBroadcastHashJoin\n                                                   :                                      :- CometProject\n                                                   :                                      :  +- CometBroadcastHashJoin\n                                                   :                                      :     :- CometFilter\n                                                   :                                      :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                                   :                                      :     +- CometBroadcastExchange\n                                                   :                                      :        +- CometFilter\n                                                   :                                      :           +- CometScan parquet spark_catalog.default.item\n                                                   :                                      +- CometBroadcastExchange\n                                                   :                                         +- 
CometProject\n                                                   :                                            +- CometFilter\n                                                   :                                               +- CometScan parquet spark_catalog.default.date_dim\n                                                   +- BroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q18a-v2.7. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q18a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate, HashAggregate, HashAggregate)]\n   :- HashAggregate\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n   :        +-  Project [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true, Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n   :           +- CometBroadcastHashJoin\n   :              :- CometProject\n   :              :  +- CometBroadcastHashJoin\n   :              :     :- CometProject\n   :              :     :  +- CometBroadcastHashJoin\n   :              :     :     :- CometProject\n   :              :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :- CometProject\n   :              :     :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :     :- CometProject\n   :              :     :     :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :     :     :- CometFilter\n   :              :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n   :              :     :     :     :     :     +- CometBroadcastExchange\n   :              :     :     :     :     :        +- CometProject\n   :              :     :     :     :     :           +- CometFilter\n   :              :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n   :              :     :     :     :     +- CometBroadcastExchange\n   :              :     :     :     :        +- CometProject\n   :              :     :     :     :           +- CometFilter\n   :              :     :     :     :              +- CometScan parquet spark_catalog.default.customer\n   :              :     :     :     +- CometBroadcastExchange\n   :              :     :     :        +- CometFilter\n   :              :     :     :           +- CometScan parquet spark_catalog.default.customer_demographics\n   :              :     :     +- CometBroadcastExchange\n   :              :     :        +- CometProject\n   :              :   
  :           +- CometFilter\n   :              :     :              +- CometScan parquet spark_catalog.default.customer_address\n   :              :     +- CometBroadcastExchange\n   :              :        +- CometProject\n   :              :           +- CometFilter\n   :              :              +- CometScan parquet spark_catalog.default.date_dim\n   :              +- CometBroadcastExchange\n   :                 +- CometProject\n   :                    +- CometFilter\n   :                       +- CometScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n   :        +-  Project [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true, Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n   :           +- CometBroadcastHashJoin\n   :              :- CometProject\n   :              :  +- CometBroadcastHashJoin\n   :              :     :- CometProject\n   :              :     :  +- CometBroadcastHashJoin\n   :              :     :     :- CometProject\n   :              :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :- CometProject\n   :              :     :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :     :- CometProject\n   :              :     :     :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :     :     :- CometFilter\n   :              :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n   :              :     :     :     :     :     +- CometBroadcastExchange\n   :              :     :     :     :     :        +- CometProject\n   :              :     :     :     :     :           +- CometFilter\n   :              :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n   :              :     :     :     :     +- CometBroadcastExchange\n   :              :     :     :     :        +- CometProject\n   :              :     :     :     :           +- CometFilter\n   :              :     :     :     :              +- CometScan parquet spark_catalog.default.customer\n   :              :     :     :     +- CometBroadcastExchange\n   :              :     :     :        +- CometFilter\n   :              :     :     :           +- CometScan parquet spark_catalog.default.customer_demographics\n   :              :     :     +- CometBroadcastExchange\n   :              :     :        +- CometProject\n   :              :     :           +- CometFilter\n   :              :     :              +- CometScan parquet spark_catalog.default.customer_address\n   :              :     +- CometBroadcastExchange\n   :              :        +- CometProject\n   :              :           +- CometFilter\n   :              :              +- CometScan parquet spark_catalog.default.date_dim\n   :              +- CometBroadcastExchange\n   :                 +- 
CometProject\n   :                    +- CometFilter\n   :                       +- CometScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n   :        +-  Project [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true, Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n   :           +- CometBroadcastHashJoin\n   :              :- CometProject\n   :              :  +- CometBroadcastHashJoin\n   :              :     :- CometProject\n   :              :     :  +- CometBroadcastHashJoin\n   :              :     :     :- CometProject\n   :              :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :- CometProject\n   :              :     :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :     :- CometProject\n   :              :     :     :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :     :     :- CometFilter\n   :              :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n   :              :     :     :     :     :     +- CometBroadcastExchange\n   :              :     :     :     :     :        +- CometProject\n   :              :     :     :     :     :           +- CometFilter\n   :              :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n   :              :     :     :     :     +- CometBroadcastExchange\n   :              :     :     :     :        +- CometProject\n   :              :     :     :     :           +- CometFilter\n   :              :     :     :     :              +- CometScan parquet spark_catalog.default.customer\n   :              :     :     :     +- CometBroadcastExchange\n   :              :     :     :        +- CometFilter\n   :              :     :     :           +- CometScan parquet spark_catalog.default.customer_demographics\n   :              :     :     +- CometBroadcastExchange\n   :              :     :        +- CometProject\n   :              :     :           +- CometFilter\n   :              :     :              +- CometScan parquet spark_catalog.default.customer_address\n   :              :     +- CometBroadcastExchange\n   :              :        +- CometProject\n   :              :           +- CometFilter\n   :              :              +- CometScan parquet spark_catalog.default.date_dim\n   :              +- CometBroadcastExchange\n   :                 +- CometProject\n   :                    +- CometFilter\n   :                       +- CometScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not 
native (Project)]\n   :        +-  Project [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true, Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n   :           +- CometBroadcastHashJoin\n   :              :- CometProject\n   :              :  +- CometBroadcastHashJoin\n   :              :     :- CometProject\n   :              :     :  +- CometBroadcastHashJoin\n   :              :     :     :- CometProject\n   :              :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :- CometProject\n   :              :     :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :     :- CometProject\n   :              :     :     :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :     :     :- CometFilter\n   :              :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n   :              :     :     :     :     :     +- CometBroadcastExchange\n   :              :     :     :     :     :        +- CometProject\n   :              :     :     :     :     :           +- CometFilter\n   :              :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n   :              :     :     :     :     +- CometBroadcastExchange\n   :              :     :     :     :        +- CometProject\n   :              :     :     :     :           +- CometFilter\n   :              :     :     :     :              +- CometScan parquet spark_catalog.default.customer\n   :              :     :     :     +- CometBroadcastExchange\n   :              :     :     :        +- CometFilter\n   :              :     :     :           +- CometScan parquet spark_catalog.default.customer_demographics\n   :              :     :     +- CometBroadcastExchange\n   :              :     :        +- CometProject\n   :              :     :           +- CometFilter\n   :              :     :              +- CometScan parquet spark_catalog.default.customer_address\n   :              :     +- CometBroadcastExchange\n   :              :        +- CometProject\n   :              :           +- CometFilter\n   :              :              +- CometScan parquet spark_catalog.default.date_dim\n   :              +- CometBroadcastExchange\n   :                 +- CometProject\n   :                    +- CometFilter\n   :                       +- CometScan parquet spark_catalog.default.item\n   +- HashAggregate\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            +-  Project [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true, Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometProject\n                  :     :     :     :     :           +- CometFilter\n                  :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan parquet spark_catalog.default.customer\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet spark_catalog.default.customer_address\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.date_dim\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q20-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q20-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometFilter\n                           :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.item\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q22-v2.7. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q22-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Expand)]\n         +-  Expand [COMET: Expand is not native because the following children are not native (Project)]\n            +-  Project [COMET: Project is not native because the following children are not native (BroadcastNestedLoopJoin)]\n               +- BroadcastNestedLoopJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometFilter\n                  :     :     :  +- CometScan parquet spark_catalog.default.inventory\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.item\n                  +- BroadcastExchange\n                     +- CometScan parquet spark_catalog.default.warehouse\n\n\nQuery: q22a-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q22a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate, HashAggregate, HashAggregate)]\n   :- HashAggregate\n   :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n   :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :           +- CometHashAggregate\n   :              +- CometProject\n   :                 +- CometBroadcastHashJoin\n   :                    :- CometProject\n   :                    :  +- CometBroadcastHashJoin\n   :                    :     :- CometProject\n   :                    :     :  +- CometBroadcastHashJoin\n   :                    :     :     :- CometFilter\n   :                    :     :     :  +- CometScan parquet spark_catalog.default.inventory\n   :                    :     :     +- CometBroadcastExchange\n   :                    :     :        +- CometProject\n   :                    :     :           +- CometFilter\n   :                    :     :              +- CometScan parquet spark_catalog.default.date_dim\n   :                    :     +- CometBroadcastExchange\n   :                    :        +- CometProject\n   :                    :           +- CometFilter\n   :                    :              +- CometScan parquet spark_catalog.default.item\n   :                    +- CometBroadcastExchange\n   :                       +- CometFilter\n   :                          +- CometScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n   :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              +- CometHashAggregate\n   :                 +- CometProject\n   :                    +- CometBroadcastHashJoin\n   :                       :- CometProject\n   :                       :  +- CometBroadcastHashJoin\n   :                       :     :- CometProject\n   :                       :     :  +- CometBroadcastHashJoin\n   :                       :     :     :- CometFilter\n   :                       :     :     :  +- CometScan parquet spark_catalog.default.inventory\n   :                       :     :     +- CometBroadcastExchange\n   :                       :     :        +- CometProject\n   :                       :     :           +- CometFilter\n   :                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n   :                       :     +- CometBroadcastExchange\n   :                       :        +- CometProject\n   :                       :          
 +- CometFilter\n   :                       :              +- CometScan parquet spark_catalog.default.item\n   :                       +- CometBroadcastExchange\n   :                          +- CometFilter\n   :                             +- CometScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n   :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              +- CometHashAggregate\n   :                 +- CometProject\n   :                    +- CometBroadcastHashJoin\n   :                       :- CometProject\n   :                       :  +- CometBroadcastHashJoin\n   :                       :     :- CometProject\n   :                       :     :  +- CometBroadcastHashJoin\n   :                       :     :     :- CometFilter\n   :                       :     :     :  +- CometScan parquet spark_catalog.default.inventory\n   :                       :     :     +- CometBroadcastExchange\n   :                       :     :        +- CometProject\n   :                       :     :           +- CometFilter\n   :                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n   :                       :     +- CometBroadcastExchange\n   :                       :        +- CometProject\n   :                       :           +- CometFilter\n   :                       :              +- CometScan parquet spark_catalog.default.item\n   :                       +- CometBroadcastExchange\n   :                          +- CometFilter\n   :                             +- CometScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n   :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :              +- CometHashAggregate\n   :                 +- CometProject\n   :                    +- CometBroadcastHashJoin\n   :                       :- CometProject\n   :                       :  +- CometBroadcastHashJoin\n   :                       :     :- CometProject\n   :                       :     :  +- CometBroadcastHashJoin\n   :                       :     :     :- CometFilter\n   :                       :     :     :  +- CometScan parquet spark_catalog.default.inventory\n   :                       :     :     +- CometBroadcastExchange\n   :                       :     :        +- CometProject\n   :                       :     :           +- CometFilter\n   :                       :     :              +- CometScan parquet 
spark_catalog.default.date_dim\n   :                       :     +- CometBroadcastExchange\n   :                       :        +- CometProject\n   :                       :           +- CometFilter\n   :                       :              +- CometScan parquet spark_catalog.default.item\n   :                       +- CometBroadcastExchange\n   :                          +- CometFilter\n   :                             +- CometScan parquet spark_catalog.default.warehouse\n   +- HashAggregate\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometBroadcastHashJoin\n                           :     :     :- CometFilter\n                           :     :     :  +- CometScan parquet spark_catalog.default.inventory\n                           :     :     +- CometBroadcastExchange\n                           :     :        +- CometProject\n                           :     :           +- CometFilter\n                           :     :              +- CometScan parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.item\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan parquet spark_catalog.default.warehouse\n\n\nQuery: q24-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q24-v2.7: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :  +- Subquery\n      :     +- HashAggregate\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n      :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                    +- CometHashAggregate\n      :                       +- CometProject\n      :                          +- CometBroadcastHashJoin\n      :                             :- CometProject\n      :                             :  +- CometBroadcastHashJoin\n      :                             :     :- CometProject\n      :                             :     :  +- CometBroadcastHashJoin\n      :                             :     :     :- CometProject\n      :                             :     :     :  +- CometBroadcastHashJoin\n      :                             :     :     :     :- CometProject\n      :                             :     :     :     :  +- CometBroadcastHashJoin\n      :                             :     :     :     :     :- CometFilter\n      :                             :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n      :                             :     :     :     :     +- CometBroadcastExchange\n      :                             :     :     :     :        +- CometFilter\n      :                             :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n      :                             :     :     :     +- CometBroadcastExchange\n      :                             :     :     :        +- CometProject\n      :                             :     :     :           +- CometFilter\n      :                             :     :     :              +- CometScan parquet spark_catalog.default.store\n      :                             :     :     +- CometBroadcastExchange\n      :                             :     :        +- CometProject\n      :                             :     :           +- CometFilter\n      :                             :     :              +- CometScan parquet spark_catalog.default.item\n      :                             :     +- CometBroadcastExchange\n      :                             :        +- CometProject\n      :                             :           +- CometFilter\n      :                             :              +- CometScan parquet spark_catalog.default.customer\n      :                             +- CometBroadcastExchange\n      :                                +- CometProject\n      :                                   +- CometFilter\n      :                            
          +- CometScan parquet spark_catalog.default.customer_address\n      +- HashAggregate\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n               +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometFilter\n                              :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometFilter\n                              :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan parquet spark_catalog.default.store\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan parquet spark_catalog.default.item\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.customer\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan parquet spark_catalog.default.customer_address\n\n\nQuery: q27a-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q27a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n   :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +- CometHashAggregate\n   :        +- CometProject\n   :           +- CometBroadcastHashJoin\n   :              :- CometProject\n   :              :  +- CometBroadcastHashJoin\n   :              :     :- CometProject\n   :              :     :  +- CometBroadcastHashJoin\n   :              :     :     :- CometProject\n   :              :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :- CometFilter\n   :              :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :              :     :     :     +- CometBroadcastExchange\n   :              :     :     :        +- CometProject\n   :              :     :     :           +- CometFilter\n   :              :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n   :              :     :     +- CometBroadcastExchange\n   :              :     :        +- CometProject\n   :              :     :           +- CometFilter\n   :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n   :              :     +- CometBroadcastExchange\n   :              :        +- CometProject\n   :              :           +- CometFilter\n   :              :              +- CometScan parquet spark_catalog.default.store\n   :              +- CometBroadcastExchange\n   :                 +- CometProject\n   :                    +- CometFilter\n   :                       +- CometScan parquet spark_catalog.default.item\n   :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   :     +- CometHashAggregate\n   :        +- CometProject\n   :           +- CometBroadcastHashJoin\n   :              :- CometProject\n   :              :  +- CometBroadcastHashJoin\n   :              :     :- CometProject\n   :              :     :  +- CometBroadcastHashJoin\n   :              :     :     :- CometProject\n   :              :     :     :  +- CometBroadcastHashJoin\n   :              :     :     :     :- CometFilter\n   :              :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n   :              :     :     :     +- CometBroadcastExchange\n   :              :     :     :        +- CometProject\n   :              :     :     :           +- CometFilter\n   :              :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n   :              :     :     +- CometBroadcastExchange\n   :              :     :        +- CometProject\n   :              :     :           +- CometFilter\n   :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n   :              :     +- CometBroadcastExchange\n   :   
           :        +- CometProject\n   :              :           +- CometFilter\n   :              :              +- CometScan parquet spark_catalog.default.store\n   :              +- CometBroadcastExchange\n   :                 +- CometProject\n   :                    +- CometFilter\n   :                       +- CometScan parquet spark_catalog.default.item\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q34-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q34-v2.7: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      +- BroadcastHashJoin\n         :-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n         :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :        +- CometHashAggregate\n         :           +- CometProject\n         :              +- CometBroadcastHashJoin\n         :                 :- CometProject\n         :                 :  +- CometBroadcastHashJoin\n         :                 :     :- CometProject\n         :                 :     :  +- CometBroadcastHashJoin\n         :                 :     :     :- CometFilter\n         :                 :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n         :                 :     :     +- CometBroadcastExchange\n         :                 :     :        +- CometProject\n         :                 :     :           +- CometFilter\n         :                 :     :              +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     +- CometBroadcastExchange\n         :                 :        +- CometProject\n         :                 :           +- CometFilter\n         :                 :              +- CometScan parquet spark_catalog.default.store\n         :                 +- CometBroadcastExchange\n         :                    +- CometProject\n         :                       +- CometFilter\n         :                          +- CometScan parquet spark_catalog.default.household_demographics\n         +- BroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan parquet spark_catalog.default.customer\n\n\nQuery: q35-v2.7. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q35-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :  +- BroadcastHashJoin\n               :     :-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               :     :  +-  Filter [COMET: Filter is not native because the following children are not native (SortMergeJoin)]\n               :     :     +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :        :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :        :  :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :        :  :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :  :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :  :  :     +- CometFilter\n               :     :        :  :  :        +- CometScan parquet spark_catalog.default.customer\n               :     :        :  :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :  :        +- CometProject\n               :     :        :  :           +- CometBroadcastHashJoin\n               :     :        :  :              :- CometFilter\n               :     :        :  :              :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :        :  :              +- CometBroadcastExchange\n               :     :        :  :                 +- CometProject\n               :     :        :  :                    +- CometFilter\n               :     :        :  :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     :        :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :        +- CometProject\n               :     :        :           +- CometBroadcastHashJoin\n               :     :        :              :- CometFilter\n               :     :        :              :  +- 
CometScan parquet spark_catalog.default.web_sales\n               :     :        :              +- CometBroadcastExchange\n               :     :        :                 +- CometProject\n               :     :        :                    +- CometFilter\n               :     :        :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     :        +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :              +- CometProject\n               :     :                 +- CometBroadcastHashJoin\n               :     :                    :- CometFilter\n               :     :                    :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :                    +- CometBroadcastExchange\n               :     :                       +- CometProject\n               :     :                          +- CometFilter\n               :     :                             +- CometScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.customer_address\n               +- BroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.customer_demographics\n\n\nQuery: q35a-v2.7. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject, CometUnion)\nQuery: q35a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :  +- BroadcastHashJoin\n               :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n               :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     :  :     +- CometFilter\n               :     :     :  :        +- CometScan parquet spark_catalog.default.customer\n               :     :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n    
           :     :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     :        +- CometProject\n               :     :     :           +- CometBroadcastHashJoin\n               :     :     :              :- CometFilter\n               :     :     :              :  +- CometScan parquet spark_catalog.default.store_sales\n               :     :     :              +- CometBroadcastExchange\n               :     :     :                 +- CometProject\n               :     :     :                    +- CometFilter\n               :     :     :                       +- CometScan parquet spark_catalog.default.date_dim\n               :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :           +- CometUnion\n               :     :              :- CometProject\n               :     :              :  +- CometBroadcastHashJoin\n               :     :              :     :- CometFilter\n               :     :              :     :  +- CometScan parquet spark_catalog.default.web_sales\n               :     :              :     +- CometBroadcastExchange\n               :     :              :        +- CometProject\n               :     :              :           +- CometFilter\n               :     :              :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :              +- CometProject\n               :     :                 +- CometBroadcastHashJoin\n               :     :                    :- CometFilter\n               :     :                    :  +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :                    +- CometBroadcastExchange\n               :     :                       +- CometProject\n               :     :                          +- CometFilter\n               :     :                             +- CometScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet spark_catalog.default.customer_address\n               +- BroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet spark_catalog.default.customer_demographics\n\n\nQuery: q36a-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q36a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n                     +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n                        :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                        :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     +- CometHashAggregate\n                        :        +- CometProject\n                        :           +- CometBroadcastHashJoin\n                        :              :- CometProject\n                        :              :  +- CometBroadcastHashJoin\n                        :              :     :- CometProject\n                        :              :     :  +- CometBroadcastHashJoin\n                        :              :     :     :- CometFilter\n                        :              :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :              :     :     +- CometBroadcastExchange\n                        :              :     :        +- CometProject\n                        :              :     :           +- CometFilter\n                        :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n                        :              :     +- CometBroadcastExchange\n                        :              :        +- CometProject\n                        :              :           +- CometFilter\n                        :              :              +- CometScan parquet spark_catalog.default.item\n                        :              +- CometBroadcastExchange\n                        :                 +- CometProject\n                        :                    +- CometFilter\n                        :                       +- CometScan parquet spark_catalog.default.store\n                        :- HashAggregate\n                        :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                        :        +-  HashAggregate [COMET: HashAggregate is 
not native because the following children are not native (Exchange)]\n                        :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :              +- CometHashAggregate\n                        :                 +- CometProject\n                        :                    +- CometBroadcastHashJoin\n                        :                       :- CometProject\n                        :                       :  +- CometBroadcastHashJoin\n                        :                       :     :- CometProject\n                        :                       :     :  +- CometBroadcastHashJoin\n                        :                       :     :     :- CometFilter\n                        :                       :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :                       :     :     +- CometBroadcastExchange\n                        :                       :     :        +- CometProject\n                        :                       :     :           +- CometFilter\n                        :                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n                        :                       :     +- CometBroadcastExchange\n                        :                       :        +- CometProject\n                        :                       :           +- CometFilter\n                        :                       :              +- CometScan parquet spark_catalog.default.item\n                        :                       +- CometBroadcastExchange\n                        :                          +- CometProject\n                        :                             +- CometFilter\n                        :                                +- CometScan parquet spark_catalog.default.store\n                        +- HashAggregate\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       +- CometHashAggregate\n                                          +- CometProject\n                                             +- CometBroadcastHashJoin\n                                                :- CometProject\n                                                :  +- CometBroadcastHashJoin\n                                                :     :- CometProject\n                                                :     :  +- CometBroadcastHashJoin\n                                                :     :     :- CometFilter\n                                                :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                                :     :     +- CometBroadcastExchange\n                                                : 
    :        +- CometProject\n                                                :     :           +- CometFilter\n                                                :     :              +- CometScan parquet spark_catalog.default.date_dim\n                                                :     +- CometBroadcastExchange\n                                                :        +- CometProject\n                                                :           +- CometFilter\n                                                :              +- CometScan parquet spark_catalog.default.item\n                                                +- CometBroadcastExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q47-v2.7. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q47-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :     :        +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      :     :           +-  Window [COMET: Window is not supported]\n      :     :              +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      :     :                 +-  Window [COMET: Window is not supported]\n      :     :                    +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                          +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                                +- CometHashAggregate\n      :     :                                   +- CometProject\n      :     :                                      +- CometBroadcastHashJoin\n      :     :                                         :- CometProject\n      :     :                                         :  +- CometBroadcastHashJoin\n      :     :                                         :     :- CometProject\n      :     :                
                         :     :  +- CometBroadcastHashJoin\n      :     :                                         :     :     :- CometBroadcastExchange\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- CometFilter\n      :     :                                         :     :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :                                         :     +- CometBroadcastExchange\n      :     :                                         :        +- CometFilter\n      :     :                                         :           +- CometScan parquet spark_catalog.default.date_dim\n      :     :                                         +- CometBroadcastExchange\n      :     :                                            +- CometFilter\n      :     :                                               +- CometScan parquet spark_catalog.default.store\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  Project [COMET: Project is not native because the following children are not native (Window)]\n      :              +-  Window [COMET: Window is not supported]\n      :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometBroadcastExchange\n      :                                      :     :     :  +- CometProject\n      :                                      :     :     :     +- CometFilter\n      :                                      :     :     :        +- CometScan parquet spark_catalog.default.item\n      :                                      :     :     +- CometFilter\n      :                                      :     :        +- CometScan parquet spark_catalog.default.store_sales\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :     
      +- CometScan parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan parquet spark_catalog.default.store\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Project [COMET: Project is not native because the following children are not native (Window)]\n               +-  Window [COMET: Window is not supported]\n                  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometBroadcastExchange\n                                       :     :     :  +- CometProject\n                                       :     :     :     +- CometFilter\n                                       :     :     :        +- CometScan parquet spark_catalog.default.item\n                                       :     :     +- CometFilter\n                                       :     :        +- CometScan parquet spark_catalog.default.store_sales\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan parquet spark_catalog.default.store\n\n\nQuery: q49-v2.7. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q49-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n         +-  Union [COMET: Union is not enabled because the following children are not native (Project, Project, Project)]\n            :-  Project [COMET: Project is not native because the following children are not native (Filter)]\n            :  +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n            :     +-  Window [COMET: Window is not supported]\n            :        +-  Sort [COMET: Sort is not native because the following children are not native (Window)]\n            :           +-  Window [COMET: Window is not supported]\n            :              +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    +- HashAggregate\n            :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                          +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                             +- CometProject\n            :                                +- CometBroadcastHashJoin\n            :                                   :- CometProject\n            :                                   :  +- CometBroadcastHashJoin\n            :                                   :     :- CometProject\n            :                                   :     :  +- CometFilter\n            :                                   :     :     +- CometScan parquet spark_catalog.default.web_sales\n            :                                   :     +- CometBroadcastExchange\n            :                                   :        +- CometFilter\n            :                                   :           +- CometScan parquet spark_catalog.default.web_returns\n            :                                   +- CometBroadcastExchange\n            :                                      +- CometProject\n            :                                         +- CometFilter\n            :                                            +- CometScan parquet spark_catalog.default.date_dim\n            :-  Project [COMET: Project is not native because the following children are not native (Filter)]\n            :  +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n            :     +-  Window [COMET: Window is not supported]\n            :        +-  Sort [COMET: Sort is not native because the following children are not native (Window)]\n            :           +-  Window [COMET: Window is not supported]\n            :              +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    +- HashAggregate\n            :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                          +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                             +- CometProject\n            :                                +- CometBroadcastHashJoin\n            :                                   :- CometProject\n            :                                   :  +- CometBroadcastHashJoin\n            :                                   :     :- CometProject\n            :                                   :     :  +- CometFilter\n            :                                   :     :     +- CometScan parquet spark_catalog.default.catalog_sales\n            :                                   :     +- CometBroadcastExchange\n            :                                   :        +- CometFilter\n            :                                   :           +- CometScan parquet spark_catalog.default.catalog_returns\n            :                                   +- CometBroadcastExchange\n            :                                      +- CometProject\n            :                                         +- CometFilter\n            :                                            +- CometScan parquet spark_catalog.default.date_dim\n            +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n                  +-  Window [COMET: Window is not supported]\n                     +-  Sort [COMET: Sort is not native because the following children are not native (Window)]\n                        +-  Window [COMET: Window is not supported]\n                           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 +- HashAggregate\n                                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                                          +- CometProject\n                                             +- CometBroadcastHashJoin\n                                                :- CometProject\n                                                :  +- CometBroadcastHashJoin\n                                                :     :- CometProject\n                                                :     :  +- CometFilter\n                                                :     :     +- CometScan parquet spark_catalog.default.store_sales\n                                                :     +- CometBroadcastExchange\n                                                :        +- CometFilter\n                                                :           +- CometScan parquet spark_catalog.default.store_returns\n                                                +- CometBroadcastExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q51a-v2.7. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q51a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n   +- HashAggregate\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n            +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Window, Project)]\n               :-  Window [COMET: Window is not supported]\n               :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n               :           +-  Filter [COMET: Filter is not native because the following children are not native (SortMergeJoin)]\n               :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                 :     +- HashAggregate\n               :                 :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               :                 :           +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n               :                 :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, 
Sort)]\n               :                 :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                 :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                 :                 :     +-  Project [COMET: Project is not native because the following children are not native (Window)]\n               :                 :                 :        +-  Window [COMET: Window is not supported]\n               :                 :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                 :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                 :                 :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :                 :                 :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                 :                 :                       +- CometHashAggregate\n               :                 :                 :                          +- CometProject\n               :                 :                 :                             +- CometBroadcastHashJoin\n               :                 :                 :                                :- CometFilter\n               :                 :                 :                                :  +- CometScan parquet spark_catalog.default.web_sales\n               :                 :                 :                                +- CometBroadcastExchange\n               :                 :                 :                                   +- CometProject\n               :                 :                 :                                      +- CometFilter\n               :                 :                 :                                         +- CometScan parquet spark_catalog.default.date_dim\n               :                 :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                 :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                 :                       +-  Project [COMET: Project is not native because the following children are not native (Window)]\n               :                 :                          +-  Window [COMET: Window is not supported]\n               :                 :                             +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                 :                                +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                 :                                   
+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :                 :                                      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                 :                                         +- CometHashAggregate\n               :                 :                                            +- CometProject\n               :                 :                                               +- CometBroadcastHashJoin\n               :                 :                                                  :- CometFilter\n               :                 :                                                  :  +- CometScan parquet spark_catalog.default.web_sales\n               :                 :                                                  +- CometBroadcastExchange\n               :                 :                                                     +- CometProject\n               :                 :                                                        +- CometFilter\n               :                 :                                                           +- CometScan parquet spark_catalog.default.date_dim\n               :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                       +- HashAggregate\n               :                          +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n               :                             +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n               :                                +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :                                   :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                                   :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                                   :     +-  Project [COMET: Project is not native because the following children are not native (Window)]\n               :                                   :        +-  Window [COMET: Window is not supported]\n               :                                   :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                                   :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                                   :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :                                   :                    +-  Exchange [COMET: Comet shuffle is not enabled: 
spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                                   :                       +- CometHashAggregate\n               :                                   :                          +- CometProject\n               :                                   :                             +- CometBroadcastHashJoin\n               :                                   :                                :- CometFilter\n               :                                   :                                :  +- CometScan parquet spark_catalog.default.store_sales\n               :                                   :                                +- CometBroadcastExchange\n               :                                   :                                   +- CometProject\n               :                                   :                                      +- CometFilter\n               :                                   :                                         +- CometScan parquet spark_catalog.default.date_dim\n               :                                   +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                                      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                                         +-  Project [COMET: Project is not native because the following children are not native (Window)]\n               :                                            +-  Window [COMET: Window is not supported]\n               :                                               +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :                                                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                                                     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               :                                                        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :                                                           +- CometHashAggregate\n               :                                                              +- CometProject\n               :                                                                 +- CometBroadcastHashJoin\n               :                                                                    :- CometFilter\n               :                                                                    :  +- CometScan parquet spark_catalog.default.store_sales\n               :                                                                    +- CometBroadcastExchange\n               :                                                                       +- CometProject\n               :                                                                          +- CometFilter\n               :                                                                             +- CometScan parquet 
spark_catalog.default.date_dim\n               +-  Project [COMET: Project is not native because the following children are not native (Window)]\n                  +-  Window [COMET: Window is not supported]\n                     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                              +-  Filter [COMET: Filter is not native because the following children are not native (SortMergeJoin)]\n                                 +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                    :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :     +- HashAggregate\n                                    :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                    :           +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                                    :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                    :                 :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                 :     +-  Project [COMET: Project is not native because the following children are not native (Window)]\n                                    :                 :        +-  Window [COMET: Window is not supported]\n                                    :                 :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :                 :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                 :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                    :                 :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                 :                       +- CometHashAggregate\n                                    :                 :                          +- CometProject\n                                    :                 :                           
  +- CometBroadcastHashJoin\n                                    :                 :                                :- CometFilter\n                                    :                 :                                :  +- CometScan parquet spark_catalog.default.web_sales\n                                    :                 :                                +- CometBroadcastExchange\n                                    :                 :                                   +- CometProject\n                                    :                 :                                      +- CometFilter\n                                    :                 :                                         +- CometScan parquet spark_catalog.default.date_dim\n                                    :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                       +-  Project [COMET: Project is not native because the following children are not native (Window)]\n                                    :                          +-  Window [COMET: Window is not supported]\n                                    :                             +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                    :                                +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                                   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                    :                                      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    :                                         +- CometHashAggregate\n                                    :                                            +- CometProject\n                                    :                                               +- CometBroadcastHashJoin\n                                    :                                                  :- CometFilter\n                                    :                                                  :  +- CometScan parquet spark_catalog.default.web_sales\n                                    :                                                  +- CometBroadcastExchange\n                                    :                                                     +- CometProject\n                                    :                                                        +- CometFilter\n                                    :                                                           +- CometScan parquet spark_catalog.default.date_dim\n                                    +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                          +- HashAggregate\n                                             +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                                +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                                                   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                                      :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                      :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                      :     +-  Project [COMET: Project is not native because the following children are not native (Window)]\n                                                      :        +-  Window [COMET: Window is not supported]\n                                                      :           +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                      :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                      :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                                      :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                      :                       +- CometHashAggregate\n                                                      :                          +- CometProject\n                                                      :                             +- CometBroadcastHashJoin\n                                                      :                                :- CometFilter\n                                                      :                                :  +- CometScan parquet spark_catalog.default.store_sales\n                                                      :                                +- CometBroadcastExchange\n                                                      :                                   +- CometProject\n                                                      :                                      +- CometFilter\n                                                      :                                         +- CometScan parquet spark_catalog.default.date_dim\n                                                      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                            +-  Project 
[COMET: Project is not native because the following children are not native (Window)]\n                                                               +-  Window [COMET: Window is not supported]\n                                                                  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                                        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                                                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                                              +- CometHashAggregate\n                                                                                 +- CometProject\n                                                                                    +- CometBroadcastHashJoin\n                                                                                       :- CometFilter\n                                                                                       :  +- CometScan parquet spark_catalog.default.store_sales\n                                                                                       +- CometBroadcastExchange\n                                                                                          +- CometProject\n                                                                                             +- CometFilter\n                                                                                                +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q57-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q57-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n      :     :        +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      :     :           +-  Window [COMET: Window is not supported]\n      :     :              +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n      :     :                 +-  Window [COMET: Window is not supported]\n      :     :                    +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                          +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :                                +- CometHashAggregate\n      :     :                                   +- CometProject\n      :     :                                      +- CometBroadcastHashJoin\n      :     :                                         :- CometProject\n      :     :                                         :  +- CometBroadcastHashJoin\n      :     :                                         :     :- CometProject\n      :     :                                         :     :  +- CometBroadcastHashJoin\n      :     :                                         :     :     :- CometBroadcastExchange\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- CometFilter\n      :     :                                         :     :        +- CometScan parquet spark_catalog.default.catalog_sales\n      :     :                                         :     +- CometBroadcastExchange\n      :     :                                         :        +- CometFilter\n      :     :                                         :           +- CometScan parquet 
spark_catalog.default.date_dim\n      :     :                                         +- CometBroadcastExchange\n      :     :                                            +- CometFilter\n      :     :                                               +- CometScan parquet spark_catalog.default.call_center\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  Project [COMET: Project is not native because the following children are not native (Window)]\n      :              +-  Window [COMET: Window is not supported]\n      :                 +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometBroadcastExchange\n      :                                      :     :     :  +- CometProject\n      :                                      :     :     :     +- CometFilter\n      :                                      :     :     :        +- CometScan parquet spark_catalog.default.item\n      :                                      :     :     +- CometFilter\n      :                                      :     :        +- CometScan parquet spark_catalog.default.catalog_sales\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan parquet spark_catalog.default.call_center\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Project [COMET: Project is not native because the following children are not native (Window)]\n               +-  Window [COMET: Window is not supported]\n                  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                     +-  Exchange 
[COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometBroadcastExchange\n                                       :     :     :  +- CometProject\n                                       :     :     :     +- CometFilter\n                                       :     :     :        +- CometScan parquet spark_catalog.default.item\n                                       :     :     +- CometFilter\n                                       :     :        +- CometScan parquet spark_catalog.default.catalog_sales\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan parquet spark_catalog.default.call_center\n\n\nQuery: q64-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q64-v2.7: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n         :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     +- HashAggregate\n         :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         :           +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :              +- BroadcastHashJoin\n         :                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :  +- BroadcastHashJoin\n         :                 :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :  +- BroadcastHashJoin\n         :                 :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native 
(BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :- Subquery\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :  +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :        +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :           +- CometProject\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :              +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :                 +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  +- Subquery\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :              +- CometProject\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :                 +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :                    +- CometScan parquet spark_catalog.default.item\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometScan parquet spark_catalog.default.store_sales\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan parquet 
spark_catalog.default.store_returns\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometBroadcastHashJoin\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometScan parquet spark_catalog.default.catalog_sales\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometBroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometScan parquet spark_catalog.default.catalog_returns\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan parquet 
spark_catalog.default.store\n         :                 :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.customer\n         :                 :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n         :                 :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :     :        +- CometProject\n         :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n         :                 :     :     :     :     :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n         :                 :     :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :     :        +- CometProject\n         :                 :     :     :     :     :     :     :     :           +- CometFilter\n         :                 :     :     :     :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n         :                 :     :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.promotion\n         :                 :     :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.household_demographics\n         :                 :     :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :     :        +- CometFilter\n         :                 :     :     :     :     :           +- CometScan parquet spark_catalog.default.household_demographics\n         :                 :     :     :     :     +- BroadcastExchange\n         :                 :     :     :     :        +- CometProject\n         :                 :     :     :     :           +- CometFilter\n         :                 :     :     :     :              +- CometScan parquet spark_catalog.default.customer_address\n         :                 :     :     :     +- BroadcastExchange\n         :                 :     :     :        +- CometProject\n         :                 :     :     :           +- CometFilter\n         :                 :     :     :              
+- CometScan parquet spark_catalog.default.customer_address\n         :                 :     :     +- BroadcastExchange\n         :                 :     :        +- CometFilter\n         :                 :     :           +- CometScan parquet spark_catalog.default.income_band\n         :                 :     +- BroadcastExchange\n         :                 :        +- CometFilter\n         :                 :           +- CometScan parquet spark_catalog.default.income_band\n         :                 +- BroadcastExchange\n         :                    +- CometProject\n         :                       +- CometFilter\n         :                          +- CometScan parquet spark_catalog.default.item\n         +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +- HashAggregate\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        +- BroadcastHashJoin\n                           :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :  +- BroadcastHashJoin\n                           :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :  +- BroadcastHashJoin\n                           :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :  +- BroadcastHashJoin\n                           :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :- 
 Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :- Subquery\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :  +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :        +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :           +- CometProject\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :              +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  :                 +- CometScan parquet spark_catalog.default.date_dim\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :  +- Subquery\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :              +- CometProject\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :                 +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  :                    +- CometScan parquet spark_catalog.default.item\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometScan parquet spark_catalog.default.store_sales\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan parquet 
spark_catalog.default.store_returns\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Project)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometBroadcastHashJoin\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometScan parquet spark_catalog.default.catalog_sales\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometBroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometScan parquet spark_catalog.default.catalog_returns\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n                           :     :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                           :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan parquet 
spark_catalog.default.store\n                           :     :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.customer\n                           :     :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n                           :     :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n                           :     :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :     :        +- CometProject\n                           :     :     :     :     :     :     :     :     :           +- CometFilter\n                           :     :     :     :     :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n                           :     :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :     :        +- CometProject\n                           :     :     :     :     :     :     :     :           +- CometFilter\n                           :     :     :     :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n                           :     :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.promotion\n                           :     :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.household_demographics\n                           :     :     :     :     :     +- BroadcastExchange\n                           :     :     :     :     :        +- CometFilter\n                           :     :     :     :     :           +- CometScan parquet spark_catalog.default.household_demographics\n                           :     :     :     :     +- BroadcastExchange\n                           :     :     :     :        +- CometProject\n                           :     :     :     :           +- CometFilter\n                           :     :     :     :              +- CometScan parquet spark_catalog.default.customer_address\n                           :     :     :     +- BroadcastExchange\n                           :     :     :        +- CometProject\n                           :     :     :           +- CometFilter\n                           :     :     :              
+- CometScan parquet spark_catalog.default.customer_address\n                           :     :     +- BroadcastExchange\n                           :     :        +- CometFilter\n                           :     :           +- CometScan parquet spark_catalog.default.income_band\n                           :     +- BroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan parquet spark_catalog.default.income_band\n                           +- BroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q67a-v2.7. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q67a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate, HashAggregate, HashAggregate, HashAggregate, HashAggregate, HashAggregate, HashAggregate)]\n               :- HashAggregate\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +-  HashAggregate [COMET: ]\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan parquet spark_catalog.default.store\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometFilter\n               :                       +- CometScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n               :        +- HashAggregate\n 
              :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              +-  HashAggregate [COMET: ]\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometProject\n               :                       :     :  +- CometBroadcastHashJoin\n               :                       :     :     :- CometFilter\n               :                       :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                       :     :     +- CometBroadcastExchange\n               :                       :     :        +- CometProject\n               :                       :     :           +- CometFilter\n               :                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- CometFilter\n               :                       :              +- CometScan parquet spark_catalog.default.store\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n               :        +- HashAggregate\n               :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              +-  HashAggregate [COMET: ]\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometProject\n               :                       :     :  +- CometBroadcastHashJoin\n               :                       :     :     :- CometFilter\n               :                       :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                       :     :     +- CometBroadcastExchange\n               :                       :     :        +- CometProject\n               :                       :     :           +- CometFilter\n               :                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- CometFilter\n               :                       :              +- CometScan parquet 
spark_catalog.default.store\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n               :        +- HashAggregate\n               :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              +-  HashAggregate [COMET: ]\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometProject\n               :                       :     :  +- CometBroadcastHashJoin\n               :                       :     :     :- CometFilter\n               :                       :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                       :     :     +- CometBroadcastExchange\n               :                       :     :        +- CometProject\n               :                       :     :           +- CometFilter\n               :                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- CometFilter\n               :                       :              +- CometScan parquet spark_catalog.default.store\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n               :        +- HashAggregate\n               :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              +-  HashAggregate [COMET: ]\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometProject\n               :                       :     :  +- CometBroadcastHashJoin\n               :                       :     :     :- CometFilter\n           
    :                       :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                       :     :     +- CometBroadcastExchange\n               :                       :     :        +- CometProject\n               :                       :     :           +- CometFilter\n               :                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- CometFilter\n               :                       :              +- CometScan parquet spark_catalog.default.store\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n               :        +- HashAggregate\n               :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              +-  HashAggregate [COMET: ]\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometProject\n               :                       :     :  +- CometBroadcastHashJoin\n               :                       :     :     :- CometFilter\n               :                       :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                       :     :     +- CometBroadcastExchange\n               :                       :     :        +- CometProject\n               :                       :     :           +- CometFilter\n               :                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- CometFilter\n               :                       :              +- CometScan parquet spark_catalog.default.store\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n               :        +- 
HashAggregate\n               :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              +-  HashAggregate [COMET: ]\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometProject\n               :                       :     :  +- CometBroadcastHashJoin\n               :                       :     :     :- CometFilter\n               :                       :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                       :     :     +- CometBroadcastExchange\n               :                       :     :        +- CometProject\n               :                       :     :           +- CometFilter\n               :                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- CometFilter\n               :                       :              +- CometScan parquet spark_catalog.default.store\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n               :        +- HashAggregate\n               :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :              +-  HashAggregate [COMET: ]\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometProject\n               :                       :     :  +- CometBroadcastHashJoin\n               :                       :     :     :- CometFilter\n               :                       :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n               :                       :     :     +- CometBroadcastExchange\n               :                       :     :        +- CometProject\n               :                       :     :           +- CometFilter\n               :                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- CometFilter\n               :                       :              +- 
CometScan parquet spark_catalog.default.store\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan parquet spark_catalog.default.item\n               +- HashAggregate\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                        +- HashAggregate\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +-  HashAggregate [COMET: ]\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometFilter\n                                       :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometProject\n                                       :     :           +- CometFilter\n                                       :     :              +- CometScan parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometProject\n                                       :           +- CometFilter\n                                       :              +- CometScan parquet spark_catalog.default.store\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q70a-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q70a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n                     +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n                        :- HashAggregate\n                        :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                        :        +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :           +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                        :              :- CometProject\n                        :              :  +- CometBroadcastHashJoin\n                        :              :     :- CometFilter\n                        :              :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :              :     +- CometBroadcastExchange\n                        :              :        +- CometProject\n                        :              :           +- CometFilter\n                        :              :              +- CometScan parquet spark_catalog.default.date_dim\n                        :              +- BroadcastExchange\n                        :                 +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                        :                    +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Project)]\n                        :                       :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :                       :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                       :     +- CometFilter\n                        :                       :        +- CometScan parquet spark_catalog.default.store\n                        :                       +-  Project [COMET: Project is not native because the following children 
are not native (Filter)]\n                        :                          +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n                        :                             +-  Window [COMET: Window is not supported]\n                        :                                +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n                        :                                   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                        :                                      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                                         +- CometHashAggregate\n                        :                                            +- CometProject\n                        :                                               +- CometBroadcastHashJoin\n                        :                                                  :- CometProject\n                        :                                                  :  +- CometBroadcastHashJoin\n                        :                                                  :     :- CometFilter\n                        :                                                  :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :                                                  :     +- CometBroadcastExchange\n                        :                                                  :        +- CometProject\n                        :                                                  :           +- CometFilter\n                        :                                                  :              +- CometScan parquet spark_catalog.default.store\n                        :                                                  +- CometBroadcastExchange\n                        :                                                     +- CometProject\n                        :                                                        +- CometFilter\n                        :                                                           +- CometScan parquet spark_catalog.default.date_dim\n                        :- HashAggregate\n                        :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                        :        +- HashAggregate\n                        :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                        :                 +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        :                    +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n         
               :                       :- CometProject\n                        :                       :  +- CometBroadcastHashJoin\n                        :                       :     :- CometFilter\n                        :                       :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :                       :     +- CometBroadcastExchange\n                        :                       :        +- CometProject\n                        :                       :           +- CometFilter\n                        :                       :              +- CometScan parquet spark_catalog.default.date_dim\n                        :                       +- BroadcastExchange\n                        :                          +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                        :                             +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Project)]\n                        :                                :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                        :                                :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                                :     +- CometFilter\n                        :                                :        +- CometScan parquet spark_catalog.default.store\n                        :                                +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                        :                                   +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n                        :                                      +-  Window [COMET: Window is not supported]\n                        :                                         +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n                        :                                            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                        :                                               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :                                                  +- CometHashAggregate\n                        :                                                     +- CometProject\n                        :                                                        +- CometBroadcastHashJoin\n                        :                                                           :- CometProject\n                        :                                                           :  +- CometBroadcastHashJoin\n                        :                                                           :     :- CometFilter\n                        :                                                           :     :  +- CometScan parquet spark_catalog.default.store_sales\n                        :                                                           :     +- CometBroadcastExchange\n                 
       :                                                           :        +- CometProject\n                        :                                                           :           +- CometFilter\n                        :                                                           :              +- CometScan parquet spark_catalog.default.store\n                        :                                                           +- CometBroadcastExchange\n                        :                                                              +- CometProject\n                        :                                                                 +- CometFilter\n                        :                                                                    +- CometScan parquet spark_catalog.default.date_dim\n                        +- HashAggregate\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                                 +- HashAggregate\n                                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                                          +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                                             +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                                                :- CometProject\n                                                :  +- CometBroadcastHashJoin\n                                                :     :- CometFilter\n                                                :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                                :     +- CometBroadcastExchange\n                                                :        +- CometProject\n                                                :           +- CometFilter\n                                                :              +- CometScan parquet spark_catalog.default.date_dim\n                                                +- BroadcastExchange\n                                                   +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                                                      +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Project)]\n                                                         :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                                                         :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                         :     +- CometFilter\n                                                         :        +- CometScan 
parquet spark_catalog.default.store\n                                                         +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                                                            +-  Filter [COMET: Filter is not native because the following children are not native (Window)]\n                                                               +-  Window [COMET: Window is not supported]\n                                                                  +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n                                                                     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                                                        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                                           +- CometHashAggregate\n                                                                              +- CometProject\n                                                                                 +- CometBroadcastHashJoin\n                                                                                    :- CometProject\n                                                                                    :  +- CometBroadcastHashJoin\n                                                                                    :     :- CometFilter\n                                                                                    :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                                                                    :     +- CometBroadcastExchange\n                                                                                    :        +- CometProject\n                                                                                    :           +- CometFilter\n                                                                                    :              +- CometScan parquet spark_catalog.default.store\n                                                                                    +- CometBroadcastExchange\n                                                                                       +- CometProject\n                                                                                          +- CometFilter\n                                                                                             +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q72-v2.7. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q72-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :  +- BroadcastHashJoin\n               :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :     :  +- BroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometProject\n               :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :- CometProject\n               :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :     :     :- CometProject\n               :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :     :     :     :     :     :- CometBroadcastExchange\n               :     :     :     :     :     :     :     :     :     :  +- CometFilter\n               :     :     :     :     :     :     :     :     :     :     +- CometScan parquet spark_catalog.default.catalog_sales\n               :     :     :     :     :     :     :     :     :     +- CometFilter\n               :     :     :     :     :     :     :     :     :        +- CometScan parquet spark_catalog.default.inventory\n               :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :     :     :     :        +- CometFilter\n               :     :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.warehouse\n               :     :     :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :     :     :        +- CometFilter\n               :     :     :     :     :     :     :           +- CometScan parquet spark_catalog.default.item\n               :     :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :     :        +- CometProject\n               :     :     :     :     :     :           +- CometFilter\n               :     :     :     :     :     :              +- CometScan parquet spark_catalog.default.customer_demographics\n               :     :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :     :        +- CometProject\n               :     : 
    :     :     :           +- CometFilter\n               :     :     :     :     :              +- CometScan parquet spark_catalog.default.household_demographics\n               :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :        +- CometProject\n               :     :     :     :           +- CometFilter\n               :     :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometFilter\n               :     :     :           +- CometScan parquet spark_catalog.default.date_dim\n               :     :     +- BroadcastExchange\n               :     :        +- CometFilter\n               :     :           +- CometScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan parquet spark_catalog.default.promotion\n               +- BroadcastExchange\n                  +- CometFilter\n                     +- CometScan parquet spark_catalog.default.catalog_returns\n\n\nQuery: q74-v2.7. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q74-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n      :     :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :  :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :  :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :  :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :  :              +- CometHashAggregate\n      :     :  :                 +- CometProject\n      :     :  :                    +- CometBroadcastHashJoin\n      :     :  :                       :- CometProject\n      :     :  :                       :  +- CometBroadcastHashJoin\n      :     :  :                       :     :- CometBroadcastExchange\n      :     :  :                       :     :  +- CometProject\n      :     :  :                       :     :     +- CometFilter\n      :     :  :                       :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :  :                       :     +- CometFilter\n      :     :  :                       :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :  :                       +- 
CometBroadcastExchange\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometScan parquet spark_catalog.default.date_dim\n      :     :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :              +- CometHashAggregate\n      :     :                 +- CometProject\n      :     :                    +- CometBroadcastHashJoin\n      :     :                       :- CometProject\n      :     :                       :  +- CometBroadcastHashJoin\n      :     :                       :     :- CometBroadcastExchange\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometScan parquet spark_catalog.default.customer\n      :     :                       :     +- CometFilter\n      :     :                       :        +- CometScan parquet spark_catalog.default.store_sales\n      :     :                       +- CometBroadcastExchange\n      :     :                          +- CometFilter\n      :     :                             +- CometScan parquet spark_catalog.default.date_dim\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                    +- CometHashAggregate\n      :                       +- CometProject\n      :                          +- CometBroadcastHashJoin\n      :                             :- CometProject\n      :                             :  +- CometBroadcastHashJoin\n      :                             :     :- CometBroadcastExchange\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometScan parquet spark_catalog.default.customer\n      :                             :     +- CometFilter\n      :                             :        +- CometScan parquet spark_catalog.default.web_sales\n      :                             +- CometBroadcastExchange\n      :                                +- CometFilter\n      :                                   +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  
Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastExchange\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometScan parquet spark_catalog.default.customer\n                           :     +- CometFilter\n                           :        +- CometScan parquet spark_catalog.default.web_sales\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q75-v2.7. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject, CometUnion)\nQuery: q75-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :        +- HashAggregate\n      :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n      :                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                       +- CometHashAggregate\n      :                          +- CometUnion\n      :                             :- CometProject\n      :                             :  +- CometBroadcastHashJoin\n      :                             :     :- CometProject\n      :                             :     :  +- CometBroadcastHashJoin\n      :                             :     :     :- CometProject\n      :                             :     :     :  +- CometBroadcastHashJoin\n      :                             :     :     :     :- CometFilter\n      :                             :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n      :                     
        :     :     :     +- CometBroadcastExchange\n      :                             :     :     :        +- CometProject\n      :                             :     :     :           +- CometFilter\n      :                             :     :     :              +- CometScan parquet spark_catalog.default.item\n      :                             :     :     +- CometBroadcastExchange\n      :                             :     :        +- CometFilter\n      :                             :     :           +- CometScan parquet spark_catalog.default.date_dim\n      :                             :     +- CometBroadcastExchange\n      :                             :        +- CometFilter\n      :                             :           +- CometScan parquet spark_catalog.default.catalog_returns\n      :                             :- CometProject\n      :                             :  +- CometBroadcastHashJoin\n      :                             :     :- CometProject\n      :                             :     :  +- CometBroadcastHashJoin\n      :                             :     :     :- CometProject\n      :                             :     :     :  +- CometBroadcastHashJoin\n      :                             :     :     :     :- CometFilter\n      :                             :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n      :                             :     :     :     +- CometBroadcastExchange\n      :                             :     :     :        +- CometProject\n      :                             :     :     :           +- CometFilter\n      :                             :     :     :              +- CometScan parquet spark_catalog.default.item\n      :                             :     :     +- CometBroadcastExchange\n      :                             :     :        +- CometFilter\n      :                             :     :           +- CometScan parquet spark_catalog.default.date_dim\n      :                             :     +- CometBroadcastExchange\n      :                             :        +- CometFilter\n      :                             :           +- CometScan parquet spark_catalog.default.store_returns\n      :                             +- CometProject\n      :                                +- CometBroadcastHashJoin\n      :                                   :- CometProject\n      :                                   :  +- CometBroadcastHashJoin\n      :                                   :     :- CometProject\n      :                                   :     :  +- CometBroadcastHashJoin\n      :                                   :     :     :- CometFilter\n      :                                   :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n      :                                   :     :     +- CometBroadcastExchange\n      :                                   :     :        +- CometProject\n      :                                   :     :           +- CometFilter\n      :                                   :     :              +- CometScan parquet spark_catalog.default.item\n      :                                   :     +- CometBroadcastExchange\n      :                                   :        +- CometFilter\n      :                                   :           +- CometScan parquet spark_catalog.default.date_dim\n      :                                   +- CometBroadcastExchange\n      :                                      +- CometFilter\n      :                                         +- 
CometScan parquet spark_catalog.default.web_returns\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n               +- HashAggregate\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometProject\n                                    :  +- CometBroadcastHashJoin\n                                    :     :- CometProject\n                                    :     :  +- CometBroadcastHashJoin\n                                    :     :     :- CometProject\n                                    :     :     :  +- CometBroadcastHashJoin\n                                    :     :     :     :- CometFilter\n                                    :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                    :     :     :     +- CometBroadcastExchange\n                                    :     :     :        +- CometProject\n                                    :     :     :           +- CometFilter\n                                    :     :     :              +- CometScan parquet spark_catalog.default.item\n                                    :     :     +- CometBroadcastExchange\n                                    :     :        +- CometFilter\n                                    :     :           +- CometScan parquet spark_catalog.default.date_dim\n                                    :     +- CometBroadcastExchange\n                                    :        +- CometFilter\n                                    :           +- CometScan parquet spark_catalog.default.catalog_returns\n                                    :- CometProject\n                                    :  +- CometBroadcastHashJoin\n                                    :     :- CometProject\n                                    :     :  +- CometBroadcastHashJoin\n                                    :     :     :- CometProject\n                                    :     :     :  +- CometBroadcastHashJoin\n                                    :     :     :     :- CometFilter\n                                    :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                    :     :     :     +- CometBroadcastExchange\n                                    :     :     :        +- CometProject\n                                    :     :     :           +- CometFilter\n                                    :     :     :              +- CometScan parquet 
spark_catalog.default.item\n                                    :     :     +- CometBroadcastExchange\n                                    :     :        +- CometFilter\n                                    :     :           +- CometScan parquet spark_catalog.default.date_dim\n                                    :     +- CometBroadcastExchange\n                                    :        +- CometFilter\n                                    :           +- CometScan parquet spark_catalog.default.store_returns\n                                    +- CometProject\n                                       +- CometBroadcastHashJoin\n                                          :- CometProject\n                                          :  +- CometBroadcastHashJoin\n                                          :     :- CometProject\n                                          :     :  +- CometBroadcastHashJoin\n                                          :     :     :- CometFilter\n                                          :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                          :     :     +- CometBroadcastExchange\n                                          :     :        +- CometProject\n                                          :     :           +- CometFilter\n                                          :     :              +- CometScan parquet spark_catalog.default.item\n                                          :     +- CometBroadcastExchange\n                                          :        +- CometFilter\n                                          :           +- CometScan parquet spark_catalog.default.date_dim\n                                          +- CometBroadcastExchange\n                                             +- CometFilter\n                                                +- CometScan parquet spark_catalog.default.web_returns\n\n\nQuery: q77a-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q77a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n         +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n            :- HashAggregate\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n            :        +-  Union [COMET: Union is not enabled because the following children are not native (Project, Project, Project)]\n            :           :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n            :           :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :           :     :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n            :           :     :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :     :        +- CometHashAggregate\n            :           :     :           +- CometProject\n            :           :     :              +- CometBroadcastHashJoin\n            :           :     :                 :- CometProject\n            :           :     :                 :  +- CometBroadcastHashJoin\n            :           :     :                 :     :- CometFilter\n            :           :     :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :           :     :                 :     +- CometBroadcastExchange\n            :           :     :                 :        +- CometProject\n            :           :     :                 :           +- CometFilter\n            :           :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :           :     :                 +- CometBroadcastExchange\n            :           :     :                    +- CometFilter\n            :           :     :                       +- CometScan parquet spark_catalog.default.store\n            :           :     +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n            :           :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and 
spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :              +- CometHashAggregate\n            :           :                 +- CometProject\n            :           :                    +- CometBroadcastHashJoin\n            :           :                       :- CometProject\n            :           :                       :  +- CometBroadcastHashJoin\n            :           :                       :     :- CometFilter\n            :           :                       :     :  +- CometScan parquet spark_catalog.default.store_returns\n            :           :                       :     +- CometBroadcastExchange\n            :           :                       :        +- CometProject\n            :           :                       :           +- CometFilter\n            :           :                       :              +- CometScan parquet spark_catalog.default.date_dim\n            :           :                       +- CometBroadcastExchange\n            :           :                          +- CometFilter\n            :           :                             +- CometScan parquet spark_catalog.default.store\n            :           :-  Project [COMET: Project is not native because the following children are not native (BroadcastNestedLoopJoin)]\n            :           :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (HashAggregate, BroadcastExchange)]\n            :           :     :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :     :     +- CometHashAggregate\n            :           :     :        +- CometProject\n            :           :     :           +- CometBroadcastHashJoin\n            :           :     :              :- CometFilter\n            :           :     :              :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :           :     :              +- CometBroadcastExchange\n            :           :     :                 +- CometProject\n            :           :     :                    +- CometFilter\n            :           :     :                       +- CometScan parquet spark_catalog.default.date_dim\n            :           :     +- BroadcastExchange\n            :           :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :           :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :              +- CometHashAggregate\n            :           :                 +- CometProject\n            :           :                    +- CometBroadcastHashJoin\n            :           :                       :- CometFilter\n            :           :                       :  +- CometScan parquet spark_catalog.default.catalog_returns\n            :           :                       +- CometBroadcastExchange\n            :           :                          +- CometProject\n            :           :                             +- CometFilter\n            :           :                                +- 
CometScan parquet spark_catalog.default.date_dim\n            :           +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n            :              +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                 :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n            :                 :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                 :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                 :        +- CometHashAggregate\n            :                 :           +- CometProject\n            :                 :              +- CometBroadcastHashJoin\n            :                 :                 :- CometProject\n            :                 :                 :  +- CometBroadcastHashJoin\n            :                 :                 :     :- CometFilter\n            :                 :                 :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                 :                 :     +- CometBroadcastExchange\n            :                 :                 :        +- CometProject\n            :                 :                 :           +- CometFilter\n            :                 :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                 :                 +- CometBroadcastExchange\n            :                 :                    +- CometFilter\n            :                 :                       +- CometScan parquet spark_catalog.default.web_page\n            :                 +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n            :                    +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                          +- CometHashAggregate\n            :                             +- CometProject\n            :                                +- CometBroadcastHashJoin\n            :                                   :- CometProject\n            :                                   :  +- CometBroadcastHashJoin\n            :                                   :     :- CometFilter\n            :                                   :     :  +- CometScan parquet spark_catalog.default.web_returns\n            :                                   :     +- CometBroadcastExchange\n            :                                   :        +- CometProject\n            :                                   :           +- CometFilter\n            :                                   :              +- CometScan parquet spark_catalog.default.date_dim\n            :                                   +- CometBroadcastExchange\n            :                                      +- CometFilter\n            :                                         +- CometScan parquet spark_catalog.default.web_page\n            :- HashAggregate\n            
:  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n            :        +- HashAggregate\n            :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n            :                 +-  Union [COMET: Union is not enabled because the following children are not native (Project, Project, Project)]\n            :                    :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n            :                    :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                    :     :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n            :                    :     :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :     :        +- CometHashAggregate\n            :                    :     :           +- CometProject\n            :                    :     :              +- CometBroadcastHashJoin\n            :                    :     :                 :- CometProject\n            :                    :     :                 :  +- CometBroadcastHashJoin\n            :                    :     :                 :     :- CometFilter\n            :                    :     :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :     :                 :     +- CometBroadcastExchange\n            :                    :     :                 :        +- CometProject\n            :                    :     :                 :           +- CometFilter\n            :                    :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :     :                 +- CometBroadcastExchange\n            :                    :     :                    +- CometFilter\n            :                    :     :                       +- CometScan parquet spark_catalog.default.store\n            :                    :     +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n            :                    :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :              +- CometHashAggregate\n            :                    :                 +- CometProject\n            :                    :               
     +- CometBroadcastHashJoin\n            :                    :                       :- CometProject\n            :                    :                       :  +- CometBroadcastHashJoin\n            :                    :                       :     :- CometFilter\n            :                    :                       :     :  +- CometScan parquet spark_catalog.default.store_returns\n            :                    :                       :     +- CometBroadcastExchange\n            :                    :                       :        +- CometProject\n            :                    :                       :           +- CometFilter\n            :                    :                       :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :                       +- CometBroadcastExchange\n            :                    :                          +- CometFilter\n            :                    :                             +- CometScan parquet spark_catalog.default.store\n            :                    :-  Project [COMET: Project is not native because the following children are not native (BroadcastNestedLoopJoin)]\n            :                    :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (HashAggregate, BroadcastExchange)]\n            :                    :     :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :     :     +- CometHashAggregate\n            :                    :     :        +- CometProject\n            :                    :     :           +- CometBroadcastHashJoin\n            :                    :     :              :- CometFilter\n            :                    :     :              :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :     :              +- CometBroadcastExchange\n            :                    :     :                 +- CometProject\n            :                    :     :                    +- CometFilter\n            :                    :     :                       +- CometScan parquet spark_catalog.default.date_dim\n            :                    :     +- BroadcastExchange\n            :                    :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                    :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :              +- CometHashAggregate\n            :                    :                 +- CometProject\n            :                    :                    +- CometBroadcastHashJoin\n            :                    :                       :- CometFilter\n            :                    :                       :  +- CometScan parquet spark_catalog.default.catalog_returns\n            :                    :                       +- CometBroadcastExchange\n            :                    :                          +- CometProject\n            :                    :                 
            +- CometFilter\n            :                    :                                +- CometScan parquet spark_catalog.default.date_dim\n            :                    +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n            :                       +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :                          :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n            :                          :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                          :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                          :        +- CometHashAggregate\n            :                          :           +- CometProject\n            :                          :              +- CometBroadcastHashJoin\n            :                          :                 :- CometProject\n            :                          :                 :  +- CometBroadcastHashJoin\n            :                          :                 :     :- CometFilter\n            :                          :                 :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                          :                 :     +- CometBroadcastExchange\n            :                          :                 :        +- CometProject\n            :                          :                 :           +- CometFilter\n            :                          :                 :              +- CometScan parquet spark_catalog.default.date_dim\n            :                          :                 +- CometBroadcastExchange\n            :                          :                    +- CometFilter\n            :                          :                       +- CometScan parquet spark_catalog.default.web_page\n            :                          +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n            :                             +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                                +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                                   +- CometHashAggregate\n            :                                      +- CometProject\n            :                                         +- CometBroadcastHashJoin\n            :                                            :- CometProject\n            :                                            :  +- CometBroadcastHashJoin\n            :                                            :     :- CometFilter\n            :                                            :     :  +- CometScan parquet spark_catalog.default.web_returns\n            :                                            :     +- CometBroadcastExchange\n            :                                            :        +- CometProject\n            :                                            :           +- CometFilter\n            :         
                                   :              +- CometScan parquet spark_catalog.default.date_dim\n            :                                            +- CometBroadcastExchange\n            :                                               +- CometFilter\n            :                                                  +- CometScan parquet spark_catalog.default.web_page\n            +- HashAggregate\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                     +- HashAggregate\n                        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n                              +-  Union [COMET: Union is not enabled because the following children are not native (Project, Project, Project)]\n                                 :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                                 :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                 :     :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n                                 :     :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                 :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :     :        +- CometHashAggregate\n                                 :     :           +- CometProject\n                                 :     :              +- CometBroadcastHashJoin\n                                 :     :                 :- CometProject\n                                 :     :                 :  +- CometBroadcastHashJoin\n                                 :     :                 :     :- CometFilter\n                                 :     :                 :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                 :     :                 :     +- CometBroadcastExchange\n                                 :     :                 :        +- CometProject\n                                 :     :                 :           +- CometFilter\n                                 :     :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                                 :     :                 +- CometBroadcastExchange\n                                 :     :                    +- CometFilter\n                                 :     :                       +- CometScan parquet spark_catalog.default.store\n                                 :     +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n                                 :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not 
native (Exchange)]\n                                 :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :              +- CometHashAggregate\n                                 :                 +- CometProject\n                                 :                    +- CometBroadcastHashJoin\n                                 :                       :- CometProject\n                                 :                       :  +- CometBroadcastHashJoin\n                                 :                       :     :- CometFilter\n                                 :                       :     :  +- CometScan parquet spark_catalog.default.store_returns\n                                 :                       :     +- CometBroadcastExchange\n                                 :                       :        +- CometProject\n                                 :                       :           +- CometFilter\n                                 :                       :              +- CometScan parquet spark_catalog.default.date_dim\n                                 :                       +- CometBroadcastExchange\n                                 :                          +- CometFilter\n                                 :                             +- CometScan parquet spark_catalog.default.store\n                                 :-  Project [COMET: Project is not native because the following children are not native (BroadcastNestedLoopJoin)]\n                                 :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not native because the following children are not native (HashAggregate, BroadcastExchange)]\n                                 :     :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                 :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :     :     +- CometHashAggregate\n                                 :     :        +- CometProject\n                                 :     :           +- CometBroadcastHashJoin\n                                 :     :              :- CometFilter\n                                 :     :              :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                 :     :              +- CometBroadcastExchange\n                                 :     :                 +- CometProject\n                                 :     :                    +- CometFilter\n                                 :     :                       +- CometScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                 :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :              +- CometHashAggregate\n                                 :                 +- CometProject\n                                 :                    +- 
CometBroadcastHashJoin\n                                 :                       :- CometFilter\n                                 :                       :  +- CometScan parquet spark_catalog.default.catalog_returns\n                                 :                       +- CometBroadcastExchange\n                                 :                          +- CometProject\n                                 :                             +- CometFilter\n                                 :                                +- CometScan parquet spark_catalog.default.date_dim\n                                 +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                                    +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                                       :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n                                       :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                       :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       :        +- CometHashAggregate\n                                       :           +- CometProject\n                                       :              +- CometBroadcastHashJoin\n                                       :                 :- CometProject\n                                       :                 :  +- CometBroadcastHashJoin\n                                       :                 :     :- CometFilter\n                                       :                 :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                       :                 :     +- CometBroadcastExchange\n                                       :                 :        +- CometProject\n                                       :                 :           +- CometFilter\n                                       :                 :              +- CometScan parquet spark_catalog.default.date_dim\n                                       :                 +- CometBroadcastExchange\n                                       :                    +- CometFilter\n                                       :                       +- CometScan parquet spark_catalog.default.web_page\n                                       +-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n                                          +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                             +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                                +- CometHashAggregate\n                                                   +- CometProject\n                                                      +- CometBroadcastHashJoin\n                                                         :- CometProject\n                                                         :  +- CometBroadcastHashJoin\n                                                    
     :     :- CometFilter\n                                                         :     :  +- CometScan parquet spark_catalog.default.web_returns\n                                                         :     +- CometBroadcastExchange\n                                                         :        +- CometProject\n                                                         :           +- CometFilter\n                                                         :              +- CometScan parquet spark_catalog.default.date_dim\n                                                         +- CometBroadcastExchange\n                                                            +- CometFilter\n                                                               +- CometScan parquet spark_catalog.default.web_page\n\n\nQuery: q78-v2.7. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q78-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n   +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Project, Sort)]\n      :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :-  Sort [COMET: Sort is not native because the following children are not native (HashAggregate)]\n      :     :  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :        +- CometHashAggregate\n      :     :           +- CometProject\n      :     :              +- CometBroadcastHashJoin\n      :     :                 :- CometProject\n      :     :                 :  +- CometFilter\n      :     :                 :     +- CometBroadcastHashJoin\n      :     :                 :        :- CometFilter\n      :     :                 :        :  +- CometScan parquet spark_catalog.default.store_sales\n      :     :                 :        +- CometBroadcastExchange\n      :     :                 :           +- CometFilter\n      :     :                 :              +- CometScan parquet spark_catalog.default.store_returns\n      :     :                 +- CometBroadcastExchange\n      :     :                    +- CometFilter\n      :     :                       +- CometScan parquet spark_catalog.default.date_dim\n      :     +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n      :        +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :                 +- CometHashAggregate\n      :                    +- CometProject\n      :                       +- CometBroadcastHashJoin\n      :                          :- CometProject\n      : 
                         :  +- CometFilter\n      :                          :     +- CometBroadcastHashJoin\n      :                          :        :- CometFilter\n      :                          :        :  +- CometScan parquet spark_catalog.default.web_sales\n      :                          :        +- CometBroadcastExchange\n      :                          :           +- CometFilter\n      :                          :              +- CometScan parquet spark_catalog.default.web_returns\n      :                          +- CometBroadcastExchange\n      :                             +- CometFilter\n      :                                +- CometScan parquet spark_catalog.default.date_dim\n      +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n         +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometFilter\n                           :     +- CometBroadcastHashJoin\n                           :        :- CometFilter\n                           :        :  +- CometScan parquet spark_catalog.default.catalog_sales\n                           :        +- CometBroadcastExchange\n                           :           +- CometFilter\n                           :              +- CometScan parquet spark_catalog.default.catalog_returns\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan parquet spark_catalog.default.date_dim\n\n\nQuery: q80a-v2.7. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q80a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n         +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n            :- HashAggregate\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n            :        +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n            :           :- HashAggregate\n            :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :     +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :           :        +- CometProject\n            :           :           +- CometBroadcastHashJoin\n            :           :              :- CometProject\n            :           :              :  +- CometBroadcastHashJoin\n            :           :              :     :- CometProject\n            :           :              :     :  +- CometBroadcastHashJoin\n            :           :              :     :     :- CometProject\n            :           :              :     :     :  +- CometBroadcastHashJoin\n            :           :              :     :     :     :- CometProject\n            :           :              :     :     :     :  +- CometBroadcastHashJoin\n            :           :              :     :     :     :     :- CometFilter\n            :           :              :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :           :              :     :     :     :     +- CometBroadcastExchange\n            :           :              :     :     :     :        +- CometFilter\n            :           :              :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n            :           :              :     :     :     +- CometBroadcastExchange\n            :           :              :     :     :        +- CometProject\n            :           :              :     :     :           +- CometFilter\n            :           :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :           :              :     :     +- CometBroadcastExchange\n            :           :              :     :        +- CometProject\n            :           :              :     :           +- CometFilter\n            :           :      
        :     :              +- CometScan parquet spark_catalog.default.store\n            :           :              :     +- CometBroadcastExchange\n            :           :              :        +- CometProject\n            :           :              :           +- CometFilter\n            :           :              :              +- CometScan parquet spark_catalog.default.item\n            :           :              +- CometBroadcastExchange\n            :           :                 +- CometProject\n            :           :                    +- CometFilter\n            :           :                       +- CometScan parquet spark_catalog.default.promotion\n            :           :- HashAggregate\n            :           :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           :     +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :           :        +- CometProject\n            :           :           +- CometBroadcastHashJoin\n            :           :              :- CometProject\n            :           :              :  +- CometBroadcastHashJoin\n            :           :              :     :- CometProject\n            :           :              :     :  +- CometBroadcastHashJoin\n            :           :              :     :     :- CometProject\n            :           :              :     :     :  +- CometBroadcastHashJoin\n            :           :              :     :     :     :- CometProject\n            :           :              :     :     :     :  +- CometBroadcastHashJoin\n            :           :              :     :     :     :     :- CometFilter\n            :           :              :     :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :           :              :     :     :     :     +- CometBroadcastExchange\n            :           :              :     :     :     :        +- CometFilter\n            :           :              :     :     :     :           +- CometScan parquet spark_catalog.default.catalog_returns\n            :           :              :     :     :     +- CometBroadcastExchange\n            :           :              :     :     :        +- CometProject\n            :           :              :     :     :           +- CometFilter\n            :           :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :           :              :     :     +- CometBroadcastExchange\n            :           :              :     :        +- CometProject\n            :           :              :     :           +- CometFilter\n            :           :              :     :              +- CometScan parquet spark_catalog.default.catalog_page\n            :           :              :     +- CometBroadcastExchange\n            :           :              :        +- CometProject\n            :           :              :           +- CometFilter\n            :           :              :              +- CometScan parquet spark_catalog.default.item\n            :           :              +- CometBroadcastExchange\n            :           :                 +- CometProject\n            :           :         
           +- CometFilter\n            :           :                       +- CometScan parquet spark_catalog.default.promotion\n            :           +- HashAggregate\n            :              +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                 +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                    +- CometProject\n            :                       +- CometBroadcastHashJoin\n            :                          :- CometProject\n            :                          :  +- CometBroadcastHashJoin\n            :                          :     :- CometProject\n            :                          :     :  +- CometBroadcastHashJoin\n            :                          :     :     :- CometProject\n            :                          :     :     :  +- CometBroadcastHashJoin\n            :                          :     :     :     :- CometProject\n            :                          :     :     :     :  +- CometBroadcastHashJoin\n            :                          :     :     :     :     :- CometFilter\n            :                          :     :     :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                          :     :     :     :     +- CometBroadcastExchange\n            :                          :     :     :     :        +- CometFilter\n            :                          :     :     :     :           +- CometScan parquet spark_catalog.default.web_returns\n            :                          :     :     :     +- CometBroadcastExchange\n            :                          :     :     :        +- CometProject\n            :                          :     :     :           +- CometFilter\n            :                          :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :                          :     :     +- CometBroadcastExchange\n            :                          :     :        +- CometProject\n            :                          :     :           +- CometFilter\n            :                          :     :              +- CometScan parquet spark_catalog.default.web_site\n            :                          :     +- CometBroadcastExchange\n            :                          :        +- CometProject\n            :                          :           +- CometFilter\n            :                          :              +- CometScan parquet spark_catalog.default.item\n            :                          +- CometBroadcastExchange\n            :                             +- CometProject\n            :                                +- CometFilter\n            :                                   +- CometScan parquet spark_catalog.default.promotion\n            :- HashAggregate\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n            :        +- HashAggregate\n            :           +-  Exchange 
[COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n            :                 +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n            :                    :- HashAggregate\n            :                    :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :     +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                    :        +- CometProject\n            :                    :           +- CometBroadcastHashJoin\n            :                    :              :- CometProject\n            :                    :              :  +- CometBroadcastHashJoin\n            :                    :              :     :- CometProject\n            :                    :              :     :  +- CometBroadcastHashJoin\n            :                    :              :     :     :- CometProject\n            :                    :              :     :     :  +- CometBroadcastHashJoin\n            :                    :              :     :     :     :- CometProject\n            :                    :              :     :     :     :  +- CometBroadcastHashJoin\n            :                    :              :     :     :     :     :- CometFilter\n            :                    :              :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n            :                    :              :     :     :     :     +- CometBroadcastExchange\n            :                    :              :     :     :     :        +- CometFilter\n            :                    :              :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n            :                    :              :     :     :     +- CometBroadcastExchange\n            :                    :              :     :     :        +- CometProject\n            :                    :              :     :     :           +- CometFilter\n            :                    :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :              :     :     +- CometBroadcastExchange\n            :                    :              :     :        +- CometProject\n            :                    :              :     :           +- CometFilter\n            :                    :              :     :              +- CometScan parquet spark_catalog.default.store\n            :                    :              :     +- CometBroadcastExchange\n            :                    :              :        +- CometProject\n            :                    :              :           +- CometFilter\n            :                    :              :              +- CometScan parquet spark_catalog.default.item\n            :                    :              +- CometBroadcastExchange\n            :                    :         
        +- CometProject\n            :                    :                    +- CometFilter\n            :                    :                       +- CometScan parquet spark_catalog.default.promotion\n            :                    :- HashAggregate\n            :                    :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    :     +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                    :        +- CometProject\n            :                    :           +- CometBroadcastHashJoin\n            :                    :              :- CometProject\n            :                    :              :  +- CometBroadcastHashJoin\n            :                    :              :     :- CometProject\n            :                    :              :     :  +- CometBroadcastHashJoin\n            :                    :              :     :     :- CometProject\n            :                    :              :     :     :  +- CometBroadcastHashJoin\n            :                    :              :     :     :     :- CometProject\n            :                    :              :     :     :     :  +- CometBroadcastHashJoin\n            :                    :              :     :     :     :     :- CometFilter\n            :                    :              :     :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n            :                    :              :     :     :     :     +- CometBroadcastExchange\n            :                    :              :     :     :     :        +- CometFilter\n            :                    :              :     :     :     :           +- CometScan parquet spark_catalog.default.catalog_returns\n            :                    :              :     :     :     +- CometBroadcastExchange\n            :                    :              :     :     :        +- CometProject\n            :                    :              :     :     :           +- CometFilter\n            :                    :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :                    :              :     :     +- CometBroadcastExchange\n            :                    :              :     :        +- CometProject\n            :                    :              :     :           +- CometFilter\n            :                    :              :     :              +- CometScan parquet spark_catalog.default.catalog_page\n            :                    :              :     +- CometBroadcastExchange\n            :                    :              :        +- CometProject\n            :                    :              :           +- CometFilter\n            :                    :              :              +- CometScan parquet spark_catalog.default.item\n            :                    :              +- CometBroadcastExchange\n            :                    :                 +- CometProject\n            :                    :                    +- CometFilter\n            :                    :                       +- CometScan parquet spark_catalog.default.promotion\n            :                    
+- HashAggregate\n            :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                          +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n            :                             +- CometProject\n            :                                +- CometBroadcastHashJoin\n            :                                   :- CometProject\n            :                                   :  +- CometBroadcastHashJoin\n            :                                   :     :- CometProject\n            :                                   :     :  +- CometBroadcastHashJoin\n            :                                   :     :     :- CometProject\n            :                                   :     :     :  +- CometBroadcastHashJoin\n            :                                   :     :     :     :- CometProject\n            :                                   :     :     :     :  +- CometBroadcastHashJoin\n            :                                   :     :     :     :     :- CometFilter\n            :                                   :     :     :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n            :                                   :     :     :     :     +- CometBroadcastExchange\n            :                                   :     :     :     :        +- CometFilter\n            :                                   :     :     :     :           +- CometScan parquet spark_catalog.default.web_returns\n            :                                   :     :     :     +- CometBroadcastExchange\n            :                                   :     :     :        +- CometProject\n            :                                   :     :     :           +- CometFilter\n            :                                   :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n            :                                   :     :     +- CometBroadcastExchange\n            :                                   :     :        +- CometProject\n            :                                   :     :           +- CometFilter\n            :                                   :     :              +- CometScan parquet spark_catalog.default.web_site\n            :                                   :     +- CometBroadcastExchange\n            :                                   :        +- CometProject\n            :                                   :           +- CometFilter\n            :                                   :              +- CometScan parquet spark_catalog.default.item\n            :                                   +- CometBroadcastExchange\n            :                                      +- CometProject\n            :                                         +- CometFilter\n            :                                            +- CometScan parquet spark_catalog.default.promotion\n            +- HashAggregate\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not 
native because the following children are not native (HashAggregate)]\n                     +- HashAggregate\n                        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n                              +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n                                 :- HashAggregate\n                                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :     +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                                 :        +- CometProject\n                                 :           +- CometBroadcastHashJoin\n                                 :              :- CometProject\n                                 :              :  +- CometBroadcastHashJoin\n                                 :              :     :- CometProject\n                                 :              :     :  +- CometBroadcastHashJoin\n                                 :              :     :     :- CometProject\n                                 :              :     :     :  +- CometBroadcastHashJoin\n                                 :              :     :     :     :- CometProject\n                                 :              :     :     :     :  +- CometBroadcastHashJoin\n                                 :              :     :     :     :     :- CometFilter\n                                 :              :     :     :     :     :  +- CometScan parquet spark_catalog.default.store_sales\n                                 :              :     :     :     :     +- CometBroadcastExchange\n                                 :              :     :     :     :        +- CometFilter\n                                 :              :     :     :     :           +- CometScan parquet spark_catalog.default.store_returns\n                                 :              :     :     :     +- CometBroadcastExchange\n                                 :              :     :     :        +- CometProject\n                                 :              :     :     :           +- CometFilter\n                                 :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n                                 :              :     :     +- CometBroadcastExchange\n                                 :              :     :        +- CometProject\n                                 :              :     :           +- CometFilter\n                                 :              :     :              +- CometScan parquet spark_catalog.default.store\n                                 :              :     +- CometBroadcastExchange\n                                 :              :        +- CometProject\n                                 :              :           +- CometFilter\n                                 :              :              +- CometScan 
parquet spark_catalog.default.item\n                                 :              +- CometBroadcastExchange\n                                 :                 +- CometProject\n                                 :                    +- CometFilter\n                                 :                       +- CometScan parquet spark_catalog.default.promotion\n                                 :- HashAggregate\n                                 :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                 :     +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                                 :        +- CometProject\n                                 :           +- CometBroadcastHashJoin\n                                 :              :- CometProject\n                                 :              :  +- CometBroadcastHashJoin\n                                 :              :     :- CometProject\n                                 :              :     :  +- CometBroadcastHashJoin\n                                 :              :     :     :- CometProject\n                                 :              :     :     :  +- CometBroadcastHashJoin\n                                 :              :     :     :     :- CometProject\n                                 :              :     :     :     :  +- CometBroadcastHashJoin\n                                 :              :     :     :     :     :- CometFilter\n                                 :              :     :     :     :     :  +- CometScan parquet spark_catalog.default.catalog_sales\n                                 :              :     :     :     :     +- CometBroadcastExchange\n                                 :              :     :     :     :        +- CometFilter\n                                 :              :     :     :     :           +- CometScan parquet spark_catalog.default.catalog_returns\n                                 :              :     :     :     +- CometBroadcastExchange\n                                 :              :     :     :        +- CometProject\n                                 :              :     :     :           +- CometFilter\n                                 :              :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n                                 :              :     :     +- CometBroadcastExchange\n                                 :              :     :        +- CometProject\n                                 :              :     :           +- CometFilter\n                                 :              :     :              +- CometScan parquet spark_catalog.default.catalog_page\n                                 :              :     +- CometBroadcastExchange\n                                 :              :        +- CometProject\n                                 :              :           +- CometFilter\n                                 :              :              +- CometScan parquet spark_catalog.default.item\n                                 :              +- CometBroadcastExchange\n                                 :                 +- CometProject\n                                 :                    +- 
CometFilter\n                                 :                       +- CometScan parquet spark_catalog.default.promotion\n                                 +- HashAggregate\n                                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from DecimalType(7,2) to DecimalType(12,2) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                                          +- CometProject\n                                             +- CometBroadcastHashJoin\n                                                :- CometProject\n                                                :  +- CometBroadcastHashJoin\n                                                :     :- CometProject\n                                                :     :  +- CometBroadcastHashJoin\n                                                :     :     :- CometProject\n                                                :     :     :  +- CometBroadcastHashJoin\n                                                :     :     :     :- CometProject\n                                                :     :     :     :  +- CometBroadcastHashJoin\n                                                :     :     :     :     :- CometFilter\n                                                :     :     :     :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                                :     :     :     :     +- CometBroadcastExchange\n                                                :     :     :     :        +- CometFilter\n                                                :     :     :     :           +- CometScan parquet spark_catalog.default.web_returns\n                                                :     :     :     +- CometBroadcastExchange\n                                                :     :     :        +- CometProject\n                                                :     :     :           +- CometFilter\n                                                :     :     :              +- CometScan parquet spark_catalog.default.date_dim\n                                                :     :     +- CometBroadcastExchange\n                                                :     :        +- CometProject\n                                                :     :           +- CometFilter\n                                                :     :              +- CometScan parquet spark_catalog.default.web_site\n                                                :     +- CometBroadcastExchange\n                                                :        +- CometProject\n                                                :           +- CometFilter\n                                                :              +- CometScan parquet spark_catalog.default.item\n                                                +- CometBroadcastExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan parquet spark_catalog.default.promotion\n\n\nQuery: q86a-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q86a-v2.7: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (Window)]\n   +-  Window [COMET: Window is not supported]\n      +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Union)]\n                     +-  Union [COMET: Union is not enabled because the following children are not native (HashAggregate, HashAggregate, HashAggregate)]\n                        :-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                        :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     +- CometHashAggregate\n                        :        +- CometProject\n                        :           +- CometBroadcastHashJoin\n                        :              :- CometProject\n                        :              :  +- CometBroadcastHashJoin\n                        :              :     :- CometFilter\n                        :              :     :  +- CometScan parquet spark_catalog.default.web_sales\n                        :              :     +- CometBroadcastExchange\n                        :              :        +- CometProject\n                        :              :           +- CometFilter\n                        :              :              +- CometScan parquet spark_catalog.default.date_dim\n                        :              +- CometBroadcastExchange\n                        :                 +- CometProject\n                        :                    +- CometFilter\n                        :                       +- CometScan parquet spark_catalog.default.item\n                        :- HashAggregate\n                        :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                        :        +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                        :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        :              +- CometHashAggregate\n                        :                 +- CometProject\n                        :                    +- CometBroadcastHashJoin\n                
        :                       :- CometProject\n                        :                       :  +- CometBroadcastHashJoin\n                        :                       :     :- CometFilter\n                        :                       :     :  +- CometScan parquet spark_catalog.default.web_sales\n                        :                       :     +- CometBroadcastExchange\n                        :                       :        +- CometProject\n                        :                       :           +- CometFilter\n                        :                       :              +- CometScan parquet spark_catalog.default.date_dim\n                        :                       +- CometBroadcastExchange\n                        :                          +- CometProject\n                        :                             +- CometFilter\n                        :                                +- CometScan parquet spark_catalog.default.item\n                        +- HashAggregate\n                           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n                                 +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                    +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                       +- CometHashAggregate\n                                          +- CometProject\n                                             +- CometBroadcastHashJoin\n                                                :- CometProject\n                                                :  +- CometBroadcastHashJoin\n                                                :     :- CometFilter\n                                                :     :  +- CometScan parquet spark_catalog.default.web_sales\n                                                :     +- CometBroadcastExchange\n                                                :        +- CometProject\n                                                :           +- CometFilter\n                                                :              +- CometScan parquet spark_catalog.default.date_dim\n                                                +- CometBroadcastExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan parquet spark_catalog.default.item\n\n\nQuery: q98-v2.7. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q98-v2.7: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Project [COMET: Project is not native because the following children are not native (Window)]\n      +-  Window [COMET: Window is not supported]\n         +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan parquet spark_catalog.default.store_sales\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan parquet spark_catalog.default.date_dim\n\n\n"
  },
  {
    "path": "spark/inspections/CometTPCHQueriesList-results.txt",
    "content": "Query: q1 TPCH Snappy. Comet Exec: Enabled (CometHashAggregate, CometFilter, CometProject)\nQuery: q1 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- CometHashAggregate\n            +- CometProject\n               +- CometFilter\n                  +- CometScan parquet \n\n\nQuery: q2 TPCH Snappy. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q2 TPCH Snappy: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n   +- BroadcastHashJoin\n      :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (Project, BroadcastExchange)]\n      :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      :     :  +- BroadcastHashJoin\n      :     :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n      :     :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n      :     :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :     :     +- CometProject\n      :     :     :     :        +- CometBroadcastHashJoin\n      :     :     :     :           :- CometBroadcastExchange\n      :     :     :     :           :  +- CometProject\n      :     :     :     :           :     +- CometFilter\n      :     :     :     :           :        +- CometScan parquet \n      :     :     :     :           +- CometFilter\n      :     :     :     :              +- CometScan parquet \n      :     :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n      :     :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :     :     :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      :     :     :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :     :     :                    +- CometHashAggregate\n      :     :     : 
                      +- CometProject\n      :     :     :                          +- CometBroadcastHashJoin\n      :     :     :                             :- CometProject\n      :     :     :                             :  +- CometBroadcastHashJoin\n      :     :     :                             :     :- CometProject\n      :     :     :                             :     :  +- CometBroadcastHashJoin\n      :     :     :                             :     :     :- CometFilter\n      :     :     :                             :     :     :  +- CometScan parquet \n      :     :     :                             :     :     +- CometBroadcastExchange\n      :     :     :                             :     :        +- CometFilter\n      :     :     :                             :     :           +- CometScan parquet \n      :     :     :                             :     +- CometBroadcastExchange\n      :     :     :                             :        +- CometFilter\n      :     :     :                             :           +- CometScan parquet \n      :     :     :                             +- CometBroadcastExchange\n      :     :     :                                +- CometProject\n      :     :     :                                   +- CometFilter\n      :     :     :                                      +- CometScan parquet \n      :     :     +- BroadcastExchange\n      :     :        +- CometFilter\n      :     :           +- CometScan parquet \n      :     +- BroadcastExchange\n      :        +-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n      :           :  +- Subquery\n      :           :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n      :           :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n      :           :              +- CometProject\n      :           :                 +- CometFilter\n      :           :                    +- CometScan parquet \n      :           +- CometScan parquet \n      +- BroadcastExchange\n         +- CometProject\n            +- CometFilter\n               +- CometScan parquet \n\n\nQuery: q3 TPCH Snappy. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q3 TPCH Snappy: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n      +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +- CometProject\n            :        +- CometBroadcastHashJoin\n            :           :- CometBroadcastExchange\n            :           :  +- CometProject\n            :           :     +- CometFilter\n            :           :        +- CometScan parquet \n            :           +- CometFilter\n            :              +- CometScan parquet \n            +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet \n\n\nQuery: q4 TPCH Snappy. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q4 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometFilter\n                  :     +- CometScan parquet \n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet \n\n\nQuery: q5 TPCH Snappy. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q5 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- HashAggregate\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               +- BroadcastHashJoin\n                  :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (Project, BroadcastExchange)]\n                  :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  :     :  +- BroadcastHashJoin\n                  :     :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                  :     :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                  :     :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :     :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :     :     :     +- CometProject\n                  :     :     :     :        +- CometBroadcastHashJoin\n                  :     :     :     :           :- CometBroadcastExchange\n                  :     :     :     :           :  +- CometFilter\n                  :     :     :     :           :     +- CometScan parquet \n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n                  :     :     :     :                 +- CometScan parquet \n                  :     :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :     :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan parquet \n                  :     :     +- BroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan parquet \n                  :     +- BroadcastExchange\n                  :        +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n                  :           :  +- Subquery\n                  :           :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                  :           :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :           :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n                  :           :              +- CometProject\n                  :           :                 +- CometFilter\n                  :           :                    +- CometScan parquet \n                  :           +- CometScan parquet \n                  +- BroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet \n\n\nQuery: q6 TPCH Snappy. Comet Exec: Enabled (CometHashAggregate, CometFilter, CometProject)\nQuery: q6 TPCH Snappy: ExplainInfo:\n HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- CometHashAggregate\n      +- CometProject\n         +- CometFilter\n            +- CometScan parquet \n\n\nQuery: q7 TPCH Snappy. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q7 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometBroadcastExchange\n                  :     :     :     :     :  +- CometFilter\n                  :     :     :     :     :     +- CometScan parquet \n                  :     :     :     :     +- CometFilter\n                  :     :     :     :        +- CometScan parquet \n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan parquet \n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan parquet \n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan parquet \n                  +- CometBroadcastExchange\n  
                   +- CometFilter\n                        +- CometScan parquet \n\n\nQuery: q8 TPCH Snappy. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q8 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometBroadcastExchange\n                  :     :     :     :     :     :     :  +- CometProject\n                  :     :     :     :     :     :     :     +- CometFilter\n                  :     :     :     :     :     :     :        +- CometScan parquet \n                  :     :     :     :     :     :     +- CometFilter\n                  :     :     :     :     :     :        +- CometScan parquet \n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan parquet \n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometFilter\n                  :     :     :     :           +- CometScan parquet \n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan parquet \n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan parquet \n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan parquet \n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet \n\n\nQuery: q9 TPCH Snappy. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q9 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- HashAggregate\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               +- BroadcastHashJoin\n                  :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                  :  +- BroadcastHashJoin\n                  :     :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n                  :     :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n                  :     :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :     :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :     :     +- CometProject\n                  :     :     :        +- CometBroadcastHashJoin\n                  :     :     :           :- CometProject\n                  :     :     :           :  +- CometBroadcastHashJoin\n                  :     :     :           :     :- CometBroadcastExchange\n                  :     :     :           :     :  +- CometProject\n                  :     :     :           :     :     +- CometFilter\n                  :     :     :           :     :        +- CometScan parquet \n                  :     :     :           :     +- CometFilter\n                  :     :     :           :        +- CometScan parquet \n                  :     :     :           +- CometBroadcastExchange\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometScan parquet \n                  :     :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n                  :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :     :           +- CometFilter\n                  :     :              +- CometScan parquet \n                  :     +- BroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan parquet \n                  +- BroadcastExchange\n                     +- CometFilter\n                        +- CometScan parquet \n\n\nQuery: q10 TPCH Snappy. 
Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q10 TPCH Snappy: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n               :  +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     +- CometProject\n               :     :        +- CometBroadcastHashJoin\n               :     :           :- CometBroadcastExchange\n               :     :           :  +- CometFilter\n               :     :           :     +- CometScan parquet \n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometScan parquet \n               :     +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometScan parquet \n               +- BroadcastExchange\n                  +- CometFilter\n                     +- CometScan parquet \n\n\nQuery: q11 TPCH Snappy. Comet Exec: Enabled (CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q11 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n      :  +- Subquery\n      :     +- HashAggregate\n      :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      :           +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). 
To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n      :              +- CometProject\n      :                 +- CometBroadcastHashJoin\n      :                    :- CometProject\n      :                    :  +- CometBroadcastHashJoin\n      :                    :     :- CometFilter\n      :                    :     :  +- CometScan parquet \n      :                    :     +- CometBroadcastExchange\n      :                    :        +- CometFilter\n      :                    :           +- CometScan parquet \n      :                    +- CometBroadcastExchange\n      :                       +- CometProject\n      :                          +- CometFilter\n      :                             +- CometScan parquet \n      +- HashAggregate\n         +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            +-  HashAggregate [COMET: Comet does not guarantee correct results for cast from IntegerType to DecimalType(10,0) with timezone Some(America/Los_Angeles) and evalMode LEGACY (No overflow check). To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometFilter\n                     :     :  +- CometScan parquet \n                     :     +- CometBroadcastExchange\n                     :        +- CometFilter\n                     :           +- CometScan parquet \n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan parquet \n\n\nQuery: q12 TPCH Snappy. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q12 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometBroadcastExchange\n                  :  +- CometFilter\n                  :     +- CometScan parquet \n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet \n\n\nQuery: q13 TPCH Snappy. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q13 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- HashAggregate\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometScan parquet \n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometScan parquet \n\n\nQuery: q14 TPCH Snappy. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q14 TPCH Snappy: ExplainInfo:\n HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- CometHashAggregate\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometFilter\n            :     +- CometScan parquet \n            +- CometBroadcastExchange\n               +- CometFilter\n                  +- CometScan parquet \n\n\nQuery: q15 TPCH Snappy. 
Comet Exec: Enabled (CometHashAggregate, CometFilter, CometProject)\nQuery: q15 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      +- BroadcastHashJoin\n         :- BroadcastExchange\n         :  +- CometFilter\n         :     +- CometScan parquet \n         +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :  +- Subquery\n            :     +- HashAggregate\n            :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :           +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n            :              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                    +- CometHashAggregate\n            :                       +- CometProject\n            :                          +- CometFilter\n            :                             +- CometScan parquet \n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan parquet \n\n\nQuery: q16 TPCH Snappy. 
Comet Exec: Enabled (CometFilter, CometProject)\nQuery: q16 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- HashAggregate\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (HashAggregate)]\n            +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n                     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n                        +- BroadcastHashJoin\n                           :- BroadcastHashJoin\n                           :  :- CometFilter\n                           :  :  +- CometScan parquet \n                           :  +- BroadcastExchange\n                           :     +- CometProject\n                           :        +- CometFilter\n                           :           +- CometScan parquet \n                           +- BroadcastExchange\n                              +- CometFilter\n                                 +- CometScan parquet \n\n\nQuery: q17 TPCH Snappy. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q17 TPCH Snappy: ExplainInfo:\nHashAggregate\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n      +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +- CometProject\n            :        +- CometBroadcastHashJoin\n            :           :- CometFilter\n            :           :  +- CometScan parquet \n            :           +- CometBroadcastExchange\n            :              +- CometProject\n            :                 +- CometFilter\n            :                    +- CometScan parquet \n            +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n               +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                  +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                        +- CometHashAggregate\n                           +- CometFilter\n                              +- CometScan parquet \n\n\nQuery: q18 TPCH Snappy. 
Comet Exec: Enabled (CometHashAggregate, CometFilter)\nQuery: q18 TPCH Snappy: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n      +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n            :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n            :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :     +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            :        +- BroadcastHashJoin\n            :           :- BroadcastExchange\n            :           :  +- CometFilter\n            :           :     +- CometScan parquet \n            :           +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n            :              :- CometFilter\n            :              :  +- CometScan parquet \n            :              +- BroadcastExchange\n            :                 +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n            :                    +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n            :                       +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n            :                          +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n            :                             +- CometHashAggregate\n            :                                +- CometScan parquet \n            +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange)]\n                     :- CometFilter\n                     :  +- CometScan parquet \n                     +- BroadcastExchange\n                        +-  Project [COMET: Project is not native because the following children are not native (Filter)]\n                           +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n                              +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                                 +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                                    +- CometHashAggregate\n                                       +- CometScan parquet \n\n\nQuery: q19 TPCH Snappy. 
Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q19 TPCH Snappy: ExplainInfo:\n HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- CometHashAggregate\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometFilter\n            :     +- CometScan parquet \n            +- CometBroadcastExchange\n               +- CometFilter\n                  +- CometScan parquet \n\n\nQuery: q20 TPCH Snappy. Comet Exec: Enabled (CometHashAggregate, CometBroadcastHashJoin, CometFilter, CometProject)\nQuery: q20 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n      +- BroadcastHashJoin\n         :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n         :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (Filter, BroadcastExchange)]\n         :     :-  Filter [COMET: xxhash64 is disabled by default. Set spark.comet.xxhash64.enabled=true to enable it.]\n         :     :  :  +- Subquery\n         :     :  :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :     :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :     :  :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n         :     :  :              +- CometProject\n         :     :  :                 +- CometFilter\n         :     :  :                    +- CometScan parquet \n         :     :  +- CometScan parquet \n         :     +- BroadcastExchange\n         :        +-  Project [COMET: Project is not native because the following children are not native (SortMergeJoin)]\n         :           +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n         :              :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n         :              :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :              :     +- CometBroadcastHashJoin\n         :              :        :- CometFilter\n         :              :        :  +- CometScan parquet \n         :              :        +- CometBroadcastExchange\n         :              :           +- CometProject\n         :              :              +- CometFilter\n         :              :                 +- CometScan parquet \n         :              +-  Sort [COMET: Sort is not native because the following children are not native (Filter)]\n         :                 +-  Filter [COMET: Filter is not native because the following children are not native (HashAggregate)]\n       
  :                    +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n         :                       +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         :                          +- CometHashAggregate\n         :                             +- CometBroadcastHashJoin\n         :                                :- CometProject\n         :                                :  +- CometFilter\n         :                                :     +- CometScan parquet \n         :                                +- CometBroadcastExchange\n         :                                   +- CometProject\n         :                                      +- CometFilter\n         :                                         +- CometScan parquet \n         +- BroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan parquet \n\n\nQuery: q21 TPCH Snappy. Comet Exec: Enabled (CometFilter, CometProject)\nQuery: q21 TPCH Snappy: ExplainInfo:\n TakeOrderedAndProject [COMET: TakeOrderedAndProject requires shuffle to be enabled]\n+- HashAggregate\n   +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n      +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n         +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n            +- BroadcastHashJoin\n               :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :  +- BroadcastHashJoin\n               :     :-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               :     :  +-  BroadcastHashJoin [COMET: BroadcastHashJoin is not native because the following children are not native (BroadcastExchange, SortMergeJoin)]\n               :     :     :- BroadcastExchange\n               :     :     :  +-  Filter [COMET: xxhash64 is disabled by default. 
Set spark.comet.xxhash64.enabled=true to enable it.]\n               :     :     :     :  +- Subquery\n               :     :     :     :     +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n               :     :     :     :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :     :     :           +-  ObjectHashAggregate [COMET: ObjectHashAggregate is not supported]\n               :     :     :     :              +- CometProject\n               :     :     :     :                 +- CometFilter\n               :     :     :     :                    +- CometScan parquet \n               :     :     :     +- CometScan parquet \n               :     :     +-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (SortMergeJoin, Sort)]\n               :     :        :-  SortMergeJoin [COMET: SortMergeJoin is not enabled because the following children are not native (Sort, Sort)]\n               :     :        :  :-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :  :  +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :  :     +- CometProject\n               :     :        :  :        +- CometFilter\n               :     :        :  :           +- CometScan parquet \n               :     :        :  +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :        :     +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :        :        +- CometScan parquet \n               :     :        +-  Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n               :     :           +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n               :     :              +- CometProject\n               :     :                 +- CometFilter\n               :     :                    +- CometScan parquet \n               :     +- BroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan parquet \n               +- BroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan parquet \n\n\nQuery: q22 TPCH Snappy. 
Comet Exec: Disabled\nQuery: q22 TPCH Snappy: ExplainInfo:\n Sort [COMET: Sort is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- HashAggregate\n      +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n         +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Project)]\n            +-  Project [COMET: Project is not native because the following children are not native (BroadcastHashJoin)]\n               +- BroadcastHashJoin\n                  :-  Filter [COMET: Comet does not guarantee correct results for cast from DecimalType(12,2) to DecimalType(16,6) with timezone Some(America/Los_Angeles) and evalMode LEGACY. To enable all incompatible casts, set spark.comet.cast.allowIncompatible=true]\n                  :  :  +- Subquery\n                  :  :     +-  HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n                  :  :        +-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n                  :  :           +- CometHashAggregate\n                  :  :              +- CometProject\n                  :  :                 +- CometFilter\n                  :  :                    +- CometScan parquet \n                  :  +- CometScan parquet \n                  +- BroadcastExchange\n                     +- CometScan parquet \n\n\nQuery: q1 TPCH Extended Snappy. Comet Exec: Enabled (CometHashAggregate, CometFilter, CometProject)\nQuery: q1 TPCH Extended Snappy: ExplainInfo:\n HashAggregate [COMET: HashAggregate is not native because the following children are not native (Exchange)]\n+-  Exchange [COMET: Comet shuffle is not enabled: spark.sql.adaptive.coalescePartitions.enabled is enabled and spark.comet.shuffle.enforceMode.enabled is not enabled]\n   +- CometHashAggregate\n      +- CometProject\n         +- CometFilter\n            +- CometScan parquet \n\n\n"
  },
  {
    "path": "spark/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n  xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n  <modelVersion>4.0.0</modelVersion>\n  <parent>\n    <groupId>org.apache.datafusion</groupId>\n    <artifactId>comet-parent-spark${spark.version.short}_${scala.binary.version}</artifactId>\n    <version>0.15.0-SNAPSHOT</version>\n    <relativePath>../pom.xml</relativePath>\n  </parent>\n\n  <artifactId>comet-spark-spark${spark.version.short}_${scala.binary.version}</artifactId>\n  <name>comet-spark</name>\n\n  <properties>\n    <!-- Reverse default (skip installation), and then enable only for child modules -->\n    <maven.deploy.skip>false</maven.deploy.skip>\n  </properties>\n\n  <dependencies>\n    <dependency>\n      <groupId>org.apache.datafusion</groupId>\n      <artifactId>comet-common-spark${spark.version.short}_${scala.binary.version}</artifactId>\n      <version>${project.version}</version>\n      <exclusions>\n        <exclusion>\n          <groupId>org.apache.arrow</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.spark</groupId>\n      <artifactId>spark-sql_${scala.binary.version}</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.scala-lang</groupId>\n      <artifactId>scala-library</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.scala-lang</groupId>\n      <artifactId>scala-reflect</artifactId>\n      <scope>provided</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.google.protobuf</groupId>\n      <artifactId>protobuf-java</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.scalatest</groupId>\n      <artifactId>scalatest_${scala.binary.version}</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.scalatestplus</groupId>\n      <artifactId>junit-4-13_${scala.binary.version}</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.spark</groupId>\n      <artifactId>spark-core_${scala.binary.version}</artifactId>\n      <classifier>tests</classifier>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.spark</groupId>\n      <artifactId>spark-catalyst_${scala.binary.version}</artifactId>\n      <classifier>tests</classifier>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.spark</groupId>\n      <artifactId>spark-sql_${scala.binary.version}</artifactId>\n      <classifier>tests</classifier>\n    </dependency>\n    <dependency>\n      
<groupId>org.apache.spark</groupId>\n      <artifactId>spark-hadoop-cloud_${scala.binary.version}</artifactId>\n      <classifier>tests</classifier>\n    </dependency>\n    <dependency>\n      <groupId>junit</groupId>\n      <artifactId>junit</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>com.google.guava</groupId>\n      <artifactId>guava</artifactId>\n      <exclusions>\n        <exclusion>\n          <groupId>*</groupId>\n          <artifactId>*</artifactId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>org.codehaus.jackson</groupId>\n      <artifactId>jackson-mapper-asl</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.parquet</groupId>\n      <artifactId>parquet-hadoop</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.parquet</groupId>\n      <artifactId>parquet-hadoop</artifactId>\n      <classifier>tests</classifier>\n      <!-- Note we don't use test scope for this artifact. This is because it's only needed\n         to provide InMemoryKMS class that is shaded below, to make Spark test happy. -->\n    </dependency>\n    <!-- We shade & relocate Arrow dependencies in comet-common, so comet-spark module no longer\n         depends on Arrow. However, when running `mvn test` we still need Arrow classes in the\n         classpath, since the Maven shading happens in 'package' phase which is after 'test' -->\n    <dependency>\n      <groupId>org.apache.arrow</groupId>\n      <artifactId>arrow-memory-unsafe</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.arrow</groupId>\n      <artifactId>arrow-c-data</artifactId>\n      <scope>test</scope>\n    </dependency>\n    <dependency>\n      <groupId>org.apache.hadoop</groupId>\n      <artifactId>hadoop-client-minicluster</artifactId>\n      <scope>test</scope>\n      <exclusions>\n        <!-- hadoop clients are provided by spark -->\n        <exclusion>\n          <artifactId>hadoop-client-api</artifactId>\n          <groupId>org.apache.hadoop</groupId>\n        </exclusion>\n        <exclusion>\n          <artifactId>hadoop-client-runtime</artifactId>\n          <groupId>org.apache.hadoop</groupId>\n        </exclusion>\n        <exclusion>\n          <artifactId>snappy-java</artifactId>\n          <groupId>org.xerial.snappy</groupId>\n        </exclusion>\n        <exclusion>\n          <artifactId>junit</artifactId>\n          <groupId>junit</groupId>\n        </exclusion>\n      </exclusions>\n    </dependency>\n    <dependency>\n      <groupId>org.testcontainers</groupId>\n      <artifactId>minio</artifactId>\n    </dependency>\n    <!-- AWS SDK modules required by Iceberg's S3FileIO (see parent pom for details) -->\n    <dependency>\n      <groupId>software.amazon.awssdk</groupId>\n      <artifactId>s3</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>software.amazon.awssdk</groupId>\n      <artifactId>sts</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>software.amazon.awssdk</groupId>\n      <artifactId>dynamodb</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>software.amazon.awssdk</groupId>\n      <artifactId>glue</artifactId>\n    </dependency>\n    <dependency>\n      <groupId>software.amazon.awssdk</groupId>\n      <artifactId>kms</artifactId>\n    </dependency>\n    <!-- Jetty and Iceberg dependencies for testing native Iceberg scan -->\n    <!-- Note: 
The specific versions are defined in profiles below based on Spark version -->\n  </dependencies>\n\n  <profiles>\n    <!-- Iceberg dependencies vary by Spark version (Iceberg 1.5.2 for Spark 3.4, 1.8.1 for Spark 3.5, 1.10.0 for Spark 4.0) -->\n    <profile>\n      <id>spark-3.4</id>\n      <dependencies>\n        <dependency>\n          <groupId>org.apache.iceberg</groupId>\n          <artifactId>iceberg-spark-runtime-${spark.version.short}_${scala.binary.version}</artifactId>\n          <version>1.5.2</version>\n          <scope>test</scope>\n        </dependency>\n        <!-- Jetty 9.4.x for Spark 3.4 (JDK 11, javax.* packages) -->\n        <dependency>\n          <groupId>org.eclipse.jetty</groupId>\n          <artifactId>jetty-server</artifactId>\n          <version>9.4.53.v20231009</version>\n          <scope>test</scope>\n        </dependency>\n        <dependency>\n          <groupId>org.eclipse.jetty</groupId>\n          <artifactId>jetty-servlet</artifactId>\n          <version>9.4.53.v20231009</version>\n          <scope>test</scope>\n        </dependency>\n      </dependencies>\n    </profile>\n\n    <profile>\n      <id>spark-3.5</id>\n      <activation>\n        <activeByDefault>true</activeByDefault>\n      </activation>\n      <dependencies>\n        <dependency>\n          <groupId>org.apache.iceberg</groupId>\n          <artifactId>iceberg-spark-runtime-${spark.version.short}_${scala.binary.version}</artifactId>\n          <version>1.8.1</version>\n          <scope>test</scope>\n        </dependency>\n        <!-- Jetty 9.4.x for Spark 3.5 (JDK 11, javax.* packages) -->\n        <dependency>\n          <groupId>org.eclipse.jetty</groupId>\n          <artifactId>jetty-server</artifactId>\n          <version>9.4.53.v20231009</version>\n          <scope>test</scope>\n        </dependency>\n        <dependency>\n          <groupId>org.eclipse.jetty</groupId>\n          <artifactId>jetty-servlet</artifactId>\n          <version>9.4.53.v20231009</version>\n          <scope>test</scope>\n        </dependency>\n      </dependencies>\n    </profile>\n\n    <profile>\n      <id>spark-4.0</id>\n      <dependencies>\n        <dependency>\n          <groupId>org.apache.iceberg</groupId>\n          <artifactId>iceberg-spark-runtime-${spark.version.short}_${scala.binary.version}</artifactId>\n          <version>1.10.0</version>\n          <scope>test</scope>\n        </dependency>\n        <!-- Jetty 11.x for Spark 4.0 (jakarta.servlet) -->\n        <dependency>\n          <groupId>org.eclipse.jetty</groupId>\n          <artifactId>jetty-server</artifactId>\n          <version>11.0.24</version>\n          <scope>test</scope>\n        </dependency>\n        <dependency>\n          <groupId>org.eclipse.jetty</groupId>\n          <artifactId>jetty-servlet</artifactId>\n          <version>11.0.24</version>\n          <scope>test</scope>\n        </dependency>\n      </dependencies>\n    </profile>\n    <profile>\n      <id>generate-docs</id>\n      <build>\n        <plugins>\n          <plugin>\n            <groupId>org.codehaus.mojo</groupId>\n            <artifactId>exec-maven-plugin</artifactId>\n            <version>${exec-maven-plugin.version}</version>\n            <executions>\n              <execution>\n                <id>generate-user-guide-reference-docs</id>\n                <phase>package</phase>\n                <goals>\n                  <goal>java</goal>\n                </goals>\n                <configuration>\n                  <mainClass>org.apache.comet.GenerateDocs</mainClass>\n           
       <arguments>\n                    <argument>${project.parent.basedir}/docs/source/user-guide/latest/</argument>\n                  </arguments>\n                  <classpathScope>compile</classpathScope>\n                </configuration>\n              </execution>\n            </executions>\n          </plugin>\n        </plugins>\n      </build>\n    </profile>\n  </profiles>\n\n  <build>\n    <plugins>\n      <plugin>\n        <groupId>com.github.os72</groupId>\n        <artifactId>protoc-jar-maven-plugin</artifactId>\n        <version>${protoc-jar-maven-plugin.version}</version>\n        <executions>\n          <execution>\n            <phase>generate-sources</phase>\n            <goals>\n              <goal>run</goal>\n            </goals>\n            <configuration>\n              <protocArtifact>com.google.protobuf:protoc:${protobuf.version}</protocArtifact>\n              <inputDirectories>\n                <include>../native/proto/src/proto</include>\n              </inputDirectories>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>org.scalatest</groupId>\n        <artifactId>scalatest-maven-plugin</artifactId>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-shade-plugin</artifactId>\n        <executions>\n          <execution>\n            <phase>package</phase>\n            <goals>\n              <goal>shade</goal>\n            </goals>\n            <configuration>\n              <createSourcesJar>true</createSourcesJar>\n              <shadeSourcesContent>true</shadeSourcesContent>\n              <shadedArtifactAttached>false</shadedArtifactAttached>\n              <createDependencyReducedPom>true</createDependencyReducedPom>\n              <artifactSet>\n                <includes>\n                  <include>org.apache.datafusion:comet-common-spark${spark.version.short}_${scala.binary.version}</include>\n                  <!-- Relocate Protobuf since Spark uses 2.5.0 while Comet uses 3.x -->\n                  <include>com.google.protobuf:protobuf-java</include>\n                  <include>com.google.guava:guava</include>\n                  <include>org.scala-lang.modules:scala-collection-compat_${scala.binary.version}</include>\n                </includes>\n              </artifactSet>\n              <filters>\n                <filter>\n                  <artifact>*:*</artifact>\n                  <excludes>\n                    <exclude>**/*.proto</exclude>\n                    <exclude>**/*.thrift</exclude>\n                    <exclude>git.properties</exclude>\n                    <exclude>log4j.properties</exclude>\n                    <exclude>log4j2.properties</exclude>\n                    <exclude>**/SparkFilterApi.*</exclude>\n                  </excludes>\n                </filter>\n                <filter>\n                  <artifact>org.apache.parquet:parquet-hadoop:tests</artifact>\n                  <includes>\n                    <!-- Used by Spark test `ParquetEncryptionSuite` -->\n                    <include>org/apache/parquet/crypto/keytools/mocks/InMemoryKMS*</include>\n                  </includes>\n                </filter>\n              </filters>\n              <relocations>\n                <relocation>\n                  <pattern>com.google.protobuf</pattern>\n                  <shadedPattern>${comet.shade.packageName}.protobuf</shadedPattern>\n                </relocation>\n                
<relocation>\n                  <pattern>com.google.common</pattern>\n                  <shadedPattern>${comet.shade.packageName}.guava</shadedPattern>\n                </relocation>\n                <relocation>\n                  <pattern>com.google.thirdparty</pattern>\n                  <shadedPattern>${comet.shade.packageName}.guava.thirdparty</shadedPattern>\n                </relocation>\n              </relocations>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>org.apache.maven.plugins</groupId>\n        <artifactId>maven-failsafe-plugin</artifactId>\n        <executions>\n          <execution>\n            <goals>\n              <goal>integration-test</goal>\n              <goal>verify</goal>\n            </goals>\n            <configuration>\n              <trimStackTrace>false</trimStackTrace>\n              <argLine>-ea ${extraJavaTestArgs}</argLine>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n      <plugin>\n        <groupId>net.alchim31.maven</groupId>\n        <artifactId>scala-maven-plugin</artifactId>\n      </plugin>\n      <plugin>\n        <groupId>org.codehaus.mojo</groupId>\n        <artifactId>build-helper-maven-plugin</artifactId>\n        <executions>\n          <execution>\n            <id>add-test-source</id>\n            <phase>generate-test-sources</phase>\n            <goals>\n              <goal>add-test-source</goal>\n            </goals>\n            <configuration>\n              <sources>\n                <source>src/test/${shims.majorVerSrc}</source>\n                <source>src/test/${shims.minorVerSrc}</source>\n              </sources>\n            </configuration>\n          </execution>\n          <execution>\n            <id>add-shim-source</id>\n            <phase>generate-sources</phase>\n            <goals>\n              <goal>add-source</goal>\n            </goals>\n            <configuration>\n              <sources>\n                <source>src/main/${shims.majorVerSrc}</source>\n                <source>src/main/${shims.minorVerSrc}</source>\n              </sources>\n            </configuration>\n          </execution>\n        </executions>\n      </plugin>\n    </plugins>\n  </build>\n\n</project>\n"
  },
  {
    "path": "spark/src/main/java/org/apache/comet/CometBatchIterator.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\nimport scala.collection.Iterator;\n\nimport org.apache.spark.sql.vectorized.ColumnarBatch;\n\nimport org.apache.comet.vector.CometSelectionVector;\nimport org.apache.comet.vector.NativeUtil;\n\n/**\n * Iterator for fetching batches from JVM to native code. Usually called via JNI from native\n * ScanExec.\n *\n * <p>Batches are owned by the JVM. Native code can safely access the batch after calling `next` but\n * the native code must not retain references to the batch because the next call to `hasNext` will\n * signal to the JVM that the batch can be closed.\n */\npublic class CometBatchIterator {\n  private final Iterator<ColumnarBatch> input;\n  private final NativeUtil nativeUtil;\n  private ColumnarBatch previousBatch = null;\n  private ColumnarBatch currentBatch = null;\n\n  CometBatchIterator(Iterator<ColumnarBatch> input, NativeUtil nativeUtil) {\n    this.input = input;\n    this.nativeUtil = nativeUtil;\n  }\n\n  /**\n   * Fetch the next input batch and allow the previous batch to be closed (this may not happen\n   * immediately).\n   *\n   * @return Number of rows in next batch or -1 if no batches left.\n   */\n  public int hasNext() {\n\n    // release reference to previous batch\n    previousBatch = null;\n\n    if (currentBatch == null) {\n      if (input.hasNext()) {\n        currentBatch = input.next();\n      }\n    }\n    if (currentBatch == null) {\n      return -1;\n    } else {\n      return currentBatch.numRows();\n    }\n  }\n\n  /**\n   * Get the next batch of Arrow arrays.\n   *\n   * @param arrayAddrs The addresses of the ArrowArray structures.\n   * @param schemaAddrs The addresses of the ArrowSchema structures.\n   * @return the number of rows of the current batch. 
-1 if there are no more batches.\n   */\n  public int next(long[] arrayAddrs, long[] schemaAddrs) {\n    if (currentBatch == null) {\n      return -1;\n    }\n\n    // export the batch using the Arrow C Data Interface\n    int numRows = nativeUtil.exportBatch(arrayAddrs, schemaAddrs, currentBatch);\n\n    // keep a reference to the exported batch so that it doesn't get garbage collected\n    // while the native code is still processing it\n    previousBatch = currentBatch;\n\n    currentBatch = null;\n\n    return numRows;\n  }\n\n  /**\n   * Check if the current batch has selection vectors for all columns.\n   *\n   * @return true if all columns are CometSelectionVector instances, false otherwise\n   */\n  public boolean hasSelectionVectors() {\n    if (currentBatch == null) {\n      return false;\n    }\n\n    // Check if all columns are CometSelectionVector instances\n    for (int i = 0; i < currentBatch.numCols(); i++) {\n      if (!(currentBatch.column(i) instanceof CometSelectionVector)) {\n        return false;\n      }\n    }\n    return true;\n  }\n\n  /**\n   * Export selection indices for all columns when they are selection vectors.\n   *\n   * @param arrayAddrs The addresses of the ArrowArray structures for indices\n   * @param schemaAddrs The addresses of the ArrowSchema structures for indices\n   * @return Number of selection indices arrays exported\n   */\n  public int exportSelectionIndices(long[] arrayAddrs, long[] schemaAddrs) {\n    if (currentBatch == null) {\n      return 0;\n    }\n\n    int exportCount = 0;\n    for (int i = 0; i < currentBatch.numCols(); i++) {\n      if (currentBatch.column(i) instanceof CometSelectionVector) {\n        CometSelectionVector selectionVector = (CometSelectionVector) currentBatch.column(i);\n\n        // Export the indices vector\n        nativeUtil.exportSingleVector(\n            selectionVector.getIndices(), arrayAddrs[exportCount], schemaAddrs[exportCount]);\n        exportCount++;\n      }\n    }\n    return exportCount;\n  }\n}\n"
  },
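  {
    "path": "examples/sketches/BatchHandoffSketch.java",
    "content": "/*\n * Hypothetical example file, not part of the Comet source tree; added for illustration only.\n *\n * A minimal sketch of the two-phase batch ownership protocol that CometBatchIterator implements\n * for native code: hasNext() drops the reference to the previously exported batch and peeks the\n * next one, while next() \"exports\" the current batch but retains a reference so it cannot be\n * garbage collected while the native side may still be reading it. The Batch class and the export\n * step are stand-ins for ColumnarBatch and NativeUtil.exportBatch.\n */\nimport java.util.Arrays;\nimport java.util.Iterator;\n\npublic class BatchHandoffSketch {\n  static final class Batch {\n    final int numRows;\n\n    Batch(int numRows) {\n      this.numRows = numRows;\n    }\n  }\n\n  private final Iterator<Batch> input;\n  // may still be referenced by \"native\" code until the next hasNext() call\n  private Batch previousBatch = null;\n  private Batch currentBatch = null;\n\n  BatchHandoffSketch(Iterator<Batch> input) {\n    this.input = input;\n  }\n\n  /** Returns the row count of the next batch, or -1 when the input is exhausted. */\n  int hasNext() {\n    // the caller has promised it is done with the previously exported batch\n    previousBatch = null;\n    if (currentBatch == null && input.hasNext()) {\n      currentBatch = input.next();\n    }\n    return currentBatch == null ? -1 : currentBatch.numRows;\n  }\n\n  /** \"Exports\" the current batch and retains it until the next hasNext() call. */\n  int next() {\n    if (currentBatch == null) {\n      return -1;\n    }\n    int numRows = currentBatch.numRows; // stand-in for the Arrow C Data export\n    previousBatch = currentBatch; // keep alive while the native side reads it\n    currentBatch = null;\n    return numRows;\n  }\n\n  public static void main(String[] args) {\n    BatchHandoffSketch it =\n        new BatchHandoffSketch(Arrays.asList(new Batch(3), new Batch(5)).iterator());\n    while (it.hasNext() != -1) {\n      System.out.println(\"exported rows: \" + it.next());\n    }\n  }\n}\n"
  },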
  {
    "path": "spark/src/main/java/org/apache/comet/CometShuffleBlockIterator.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\nimport java.io.Closeable;\nimport java.io.EOFException;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.nio.ByteBuffer;\nimport java.nio.ByteOrder;\nimport java.nio.channels.Channels;\nimport java.nio.channels.ReadableByteChannel;\n\n/**\n * Provides raw compressed shuffle blocks to native code via JNI.\n *\n * <p>Reads block headers (compressed length + field count) from a shuffle InputStream and loads the\n * compressed body into a DirectByteBuffer. Native code pulls blocks by calling hasNext() and\n * getBuffer().\n *\n * <p>The DirectByteBuffer returned by getBuffer() is only valid until the next hasNext() call.\n * Native code must fully consume it (via read_ipc_compressed which allocates new memory for the\n * decompressed data) before pulling the next block.\n */\npublic class CometShuffleBlockIterator implements Closeable {\n\n  private static final int INITIAL_BUFFER_SIZE = 128 * 1024;\n\n  private final ReadableByteChannel channel;\n  private final InputStream inputStream;\n  private final ByteBuffer headerBuf = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);\n  private ByteBuffer dataBuf = ByteBuffer.allocateDirect(INITIAL_BUFFER_SIZE);\n  private boolean closed = false;\n  private int currentBlockLength = 0;\n\n  public CometShuffleBlockIterator(InputStream in) {\n    this.inputStream = in;\n    this.channel = Channels.newChannel(in);\n  }\n\n  /**\n   * Reads the next block header and loads the compressed body into the internal buffer. 
Called by\n   * native code via JNI.\n   *\n   * <p>Header format: 8-byte compressedLength (includes field count but not itself) + 8-byte\n   * fieldCount (discarded, schema comes from protobuf).\n   *\n   * @return the compressed body length in bytes (codec prefix + compressed IPC), or -1 if EOF\n   */\n  public int hasNext() throws IOException {\n    if (closed) {\n      return -1;\n    }\n\n    // Read 16-byte header: clear() resets position=0, limit=capacity,\n    // preparing the buffer for channel.read() to fill it\n    headerBuf.clear();\n    while (headerBuf.hasRemaining()) {\n      int bytesRead = channel.read(headerBuf);\n      if (bytesRead < 0) {\n        if (headerBuf.position() == 0) {\n          close();\n          return -1;\n        }\n        throw new EOFException(\"Data corrupt: unexpected EOF while reading batch header\");\n      }\n    }\n    headerBuf.flip();\n    long compressedLength = headerBuf.getLong();\n    // Field count discarded - schema determined by ShuffleScan protobuf fields\n    headerBuf.getLong();\n\n    // Subtract 8 because compressedLength includes the 8-byte field count we already read\n    long bytesToRead = compressedLength - 8;\n    if (bytesToRead > Integer.MAX_VALUE) {\n      throw new IllegalStateException(\n          \"Native shuffle block size of \"\n              + bytesToRead\n              + \" exceeds maximum of \"\n              + Integer.MAX_VALUE\n              + \". Try reducing spark.comet.columnar.shuffle.batch.size.\");\n    }\n\n    currentBlockLength = (int) bytesToRead;\n\n    if (dataBuf.capacity() < currentBlockLength) {\n      int newCapacity = (int) Math.min(bytesToRead * 2L, Integer.MAX_VALUE);\n      dataBuf = ByteBuffer.allocateDirect(newCapacity);\n    }\n\n    dataBuf.clear();\n    dataBuf.limit(currentBlockLength);\n    while (dataBuf.hasRemaining()) {\n      int bytesRead = channel.read(dataBuf);\n      if (bytesRead < 0) {\n        throw new EOFException(\"Data corrupt: unexpected EOF while reading compressed batch\");\n      }\n    }\n    // Note: native side uses get_direct_buffer_address (base pointer) + currentBlockLength,\n    // not the buffer's position/limit. No flip needed.\n\n    return currentBlockLength;\n  }\n\n  /**\n   * Returns the DirectByteBuffer containing the current block's compressed bytes (4-byte codec\n   * prefix + compressed IPC data). Called by native code via JNI.\n   */\n  public ByteBuffer getBuffer() {\n    return dataBuf;\n  }\n\n  /** Returns the length of the current block in bytes. Called by native code via JNI. */\n  public int getCurrentBlockLength() {\n    return currentBlockLength;\n  }\n\n  @Override\n  public void close() throws IOException {\n    if (!closed) {\n      closed = true;\n      inputStream.close();\n    }\n  }\n}\n"
  },
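  {
    "path": "examples/sketches/ShuffleBlockFramingSketch.java",
    "content": "/*\n * Hypothetical example file, not part of the Comet source tree; added for illustration only.\n *\n * A self-contained sketch of the block framing that CometShuffleBlockIterator expects on its\n * input stream: an 8-byte little-endian compressedLength (which counts the 8-byte field count but\n * not itself), then the 8-byte field count, then compressedLength - 8 bytes of body (codec prefix\n * plus compressed IPC data). The frame/readBody helpers are illustrative names, not Comet APIs.\n */\nimport java.io.ByteArrayInputStream;\nimport java.io.ByteArrayOutputStream;\nimport java.io.IOException;\nimport java.nio.ByteBuffer;\nimport java.nio.ByteOrder;\nimport java.nio.charset.StandardCharsets;\n\npublic class ShuffleBlockFramingSketch {\n  /** Writes one block: a 16-byte header followed by the body bytes. */\n  static byte[] frame(byte[] body, long fieldCount) {\n    ByteBuffer header = ByteBuffer.allocate(16).order(ByteOrder.LITTLE_ENDIAN);\n    header.putLong(body.length + 8L); // compressedLength includes the field count\n    header.putLong(fieldCount);\n    ByteArrayOutputStream out = new ByteArrayOutputStream();\n    out.writeBytes(header.array());\n    out.writeBytes(body);\n    return out.toByteArray();\n  }\n\n  /** Reads one block back using the same header math as the iterator. */\n  static byte[] readBody(ByteArrayInputStream in) throws IOException {\n    ByteBuffer header = ByteBuffer.wrap(in.readNBytes(16)).order(ByteOrder.LITTLE_ENDIAN);\n    long compressedLength = header.getLong();\n    header.getLong(); // field count, discarded just like the iterator does\n    // subtract 8 because compressedLength includes the field count we just read\n    return in.readNBytes((int) (compressedLength - 8));\n  }\n\n  public static void main(String[] args) throws IOException {\n    byte[] body = \"codec-prefix+compressed-ipc\".getBytes(StandardCharsets.UTF_8);\n    byte[] block = frame(body, 4);\n    byte[] roundTripped = readBody(new ByteArrayInputStream(block));\n    System.out.println(new String(roundTripped, StandardCharsets.UTF_8));\n  }\n}\n"
  },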
  {
    "path": "spark/src/main/java/org/apache/comet/NativeColumnarToRowInfo.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\n/**\n * Container for the result of native columnar to row conversion.\n *\n * <p>This class holds the memory address of the converted row data buffer and metadata about each\n * row (offsets and lengths). The native side allocates and owns the memory buffer, and this class\n * provides the JVM with the information needed to read the UnsafeRow data.\n *\n * <p>Memory Layout of the buffer:\n *\n * <pre>\n * ┌─────────────────────────────────────────────────────────────┐\n * │ Row 0: [null bitset][fixed-width values][variable-length]  │\n * ├─────────────────────────────────────────────────────────────┤\n * │ Row 1: [null bitset][fixed-width values][variable-length]  │\n * ├─────────────────────────────────────────────────────────────┤\n * │ ...                                                         │\n * └─────────────────────────────────────────────────────────────┘\n * </pre>\n *\n * <p>The offsets array provides the byte offset from memoryAddress where each row starts. The\n * lengths array provides the total byte length of each row.\n */\npublic class NativeColumnarToRowInfo {\n  /** The memory address of the buffer containing all converted rows. */\n  public final long memoryAddress;\n\n  /** The byte offset from memoryAddress where each row starts. */\n  public final int[] offsets;\n\n  /** The total byte length of each row. */\n  public final int[] lengths;\n\n  /**\n   * Constructs a NativeColumnarToRowInfo with the given memory address and row metadata.\n   *\n   * @param memoryAddress The memory address of the buffer containing converted row data.\n   * @param offsets The byte offset for each row from the base memory address.\n   * @param lengths The byte length of each row.\n   */\n  public NativeColumnarToRowInfo(long memoryAddress, int[] offsets, int[] lengths) {\n    this.memoryAddress = memoryAddress;\n    this.offsets = offsets;\n    this.lengths = lengths;\n  }\n\n  /**\n   * Returns the number of rows in this result.\n   *\n   * @return The number of rows.\n   */\n  public int numRows() {\n    return offsets != null ? offsets.length : 0;\n  }\n}\n"
  },
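  {
    "path": "examples/sketches/RowAddressingSketch.java",
    "content": "/*\n * Hypothetical example file, not part of the Comet source tree; added for illustration only.\n *\n * A sketch of how the metadata carried by NativeColumnarToRowInfo addresses individual rows in\n * the native buffer: row i occupies the byte range [memoryAddress + offsets[i],\n * memoryAddress + offsets[i] + lengths[i]). The values below are dummies and no native memory is\n * dereferenced.\n */\npublic class RowAddressingSketch {\n  public static void main(String[] args) {\n    long memoryAddress = 0x7f0000001000L; // pretend base address returned by native code\n    int[] offsets = {0, 40, 104}; // byte offset of each row from the base address\n    int[] lengths = {40, 64, 32}; // total byte length of each row\n\n    for (int i = 0; i < offsets.length; i++) {\n      long rowAddress = memoryAddress + offsets[i];\n      // a Spark-side consumer would point an UnsafeRow at this range, roughly:\n      // unsafeRow.pointTo(null, rowAddress, lengths[i]);\n      System.out.printf(\"row %d: address=0x%x length=%d%n\", i, rowAddress, lengths[i]);\n    }\n  }\n}\n"
  },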
  {
    "path": "spark/src/main/java/org/apache/parquet/filter2/predicate/SparkFilterApi.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.parquet.filter2.predicate;\n\nimport org.apache.parquet.filter2.predicate.Operators.*;\nimport org.apache.parquet.hadoop.metadata.ColumnPath;\n\n/**\n * Copied from Spark 3.2, in order to fix Parquet shading issue.\n *\n * <p>TODO: find a way to remove this duplication\n */\npublic final class SparkFilterApi {\n  public static IntColumn intColumn(String[] path) {\n    return new IntColumn(ColumnPath.get(path));\n  }\n\n  public static LongColumn longColumn(String[] path) {\n    return new LongColumn(ColumnPath.get(path));\n  }\n\n  public static FloatColumn floatColumn(String[] path) {\n    return new FloatColumn(ColumnPath.get(path));\n  }\n\n  public static DoubleColumn doubleColumn(String[] path) {\n    return new DoubleColumn(ColumnPath.get(path));\n  }\n\n  public static BooleanColumn booleanColumn(String[] path) {\n    return new BooleanColumn(ColumnPath.get(path));\n  }\n\n  public static BinaryColumn binaryColumn(String[] path) {\n    return new BinaryColumn(ColumnPath.get(path));\n  }\n}\n"
  },
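  {
    "path": "examples/sketches/FilterPredicateSketch.java",
    "content": "/*\n * Hypothetical example file, not part of the Comet source tree; added for illustration only.\n *\n * A sketch of why SparkFilterApi exists: it builds the typed Column objects from a raw String[]\n * path so that callers survive Parquet shading, and the resulting columns compose with the stock\n * Parquet FilterApi combinators. Requires parquet-hadoop on the classpath.\n */\nimport org.apache.parquet.filter2.predicate.FilterApi;\nimport org.apache.parquet.filter2.predicate.FilterPredicate;\nimport org.apache.parquet.filter2.predicate.Operators.IntColumn;\nimport org.apache.parquet.filter2.predicate.SparkFilterApi;\n\npublic class FilterPredicateSketch {\n  public static void main(String[] args) {\n    // build a column reference through the shading-safe helper\n    IntColumn id = SparkFilterApi.intColumn(new String[] {\"id\"});\n    // combine it with the standard Parquet predicate builders: 10 <= id < 20\n    FilterPredicate predicate = FilterApi.and(FilterApi.gtEq(id, 10), FilterApi.lt(id, 20));\n    System.out.println(predicate);\n  }\n}\n"
  },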
  {
    "path": "spark/src/main/java/org/apache/spark/CometTaskMemoryManager.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark;\n\nimport java.io.IOException;\nimport java.util.concurrent.atomic.AtomicLong;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.spark.memory.MemoryConsumer;\nimport org.apache.spark.memory.MemoryMode;\nimport org.apache.spark.memory.TaskMemoryManager;\n\n/**\n * A adapter class that is used by Comet native to acquire & release memory through Spark's unified\n * memory manager. This assumes Spark's off-heap memory mode is enabled.\n */\npublic class CometTaskMemoryManager {\n\n  private static final Logger logger = LoggerFactory.getLogger(CometTaskMemoryManager.class);\n\n  /** The id uniquely identifies the native plan this memory manager is associated to */\n  private final long id;\n\n  private final long taskAttemptId;\n\n  public final TaskMemoryManager internal;\n  private final NativeMemoryConsumer nativeMemoryConsumer;\n  private final AtomicLong used = new AtomicLong();\n\n  public CometTaskMemoryManager(long id, long taskAttemptId) {\n    this.id = id;\n    this.taskAttemptId = taskAttemptId;\n    this.internal = TaskContext$.MODULE$.get().taskMemoryManager();\n    this.nativeMemoryConsumer = new NativeMemoryConsumer();\n  }\n\n  // Called by Comet native through JNI.\n  // Returns the actual amount of memory (in bytes) granted.\n  public long acquireMemory(long size) {\n    if (logger.isTraceEnabled()) {\n      logger.trace(\"Task {} requested {} bytes\", taskAttemptId, size);\n    }\n    long acquired = internal.acquireExecutionMemory(size, nativeMemoryConsumer);\n    long newUsed = used.addAndGet(acquired);\n    if (acquired < size) {\n      logger.warn(\n          \"Task {} requested {} bytes but only received {} bytes. 
Current allocation is {} and \"\n              + \"the total memory consumption is {} bytes.\",\n          taskAttemptId,\n          size,\n          acquired,\n          newUsed,\n          internal.getMemoryConsumptionForThisTask());\n      // If memory manager is not able to acquire the requested size, log memory usage\n      internal.showMemoryUsage();\n    }\n    return acquired;\n  }\n\n  // Called by Comet native through JNI\n  public void releaseMemory(long size) {\n    if (logger.isTraceEnabled()) {\n      logger.trace(\"Task {} released {} bytes\", taskAttemptId, size);\n    }\n    long newUsed = used.addAndGet(-size);\n    if (newUsed < 0) {\n      logger.error(\n          \"Task {} used memory is negative ({}) after releasing {} bytes\",\n          taskAttemptId,\n          newUsed,\n          size);\n    }\n    internal.releaseExecutionMemory(size, nativeMemoryConsumer);\n  }\n\n  public long getUsed() {\n    return used.get();\n  }\n\n  /**\n   * A dummy memory consumer that does nothing when spilling. At the moment, Comet native doesn't\n   * share the same API as Spark and cannot trigger spill when acquire memory. Therefore, when\n   * acquiring memory from native or JVM, spilling can only be triggered from JVM operators.\n   */\n  private class NativeMemoryConsumer extends MemoryConsumer {\n    protected NativeMemoryConsumer() {\n      super(CometTaskMemoryManager.this.internal, 0, MemoryMode.OFF_HEAP);\n    }\n\n    @Override\n    public long spill(long size, MemoryConsumer trigger) throws IOException {\n      // No spilling\n      return 0;\n    }\n\n    @Override\n    public String toString() {\n      return String.format(\"NativeMemoryConsumer(id=%)\", id);\n    }\n  }\n}\n"
  },
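  {
    "path": "examples/sketches/MemoryAccountingSketch.java",
    "content": "/*\n * Hypothetical example file, not part of the Comet source tree; added for illustration only.\n *\n * A sketch of the bookkeeping pattern in CometTaskMemoryManager: grants may be smaller than\n * requests, cumulative usage is tracked with an AtomicLong, and a negative balance after a\n * release signals mismatched acquire/release calls from native code. The fixed pool below stands\n * in for Spark's TaskMemoryManager.\n */\nimport java.util.concurrent.atomic.AtomicLong;\n\npublic class MemoryAccountingSketch {\n  private final AtomicLong used = new AtomicLong();\n  private long available;\n\n  MemoryAccountingSketch(long poolSize) {\n    this.available = poolSize;\n  }\n\n  /** Grants up to the requested size, mirroring a partial acquireExecutionMemory grant. */\n  synchronized long acquire(long size) {\n    long granted = Math.min(size, available);\n    available -= granted;\n    long newUsed = used.addAndGet(granted);\n    if (granted < size) {\n      System.out.println(\"requested \" + size + \" but granted \" + granted + \"; used \" + newUsed);\n    }\n    return granted;\n  }\n\n  synchronized void release(long size) {\n    available += size;\n    long newUsed = used.addAndGet(-size);\n    if (newUsed < 0) {\n      System.out.println(\"warning: used memory went negative (\" + newUsed + \")\");\n    }\n  }\n\n  public static void main(String[] args) {\n    MemoryAccountingSketch manager = new MemoryAccountingSketch(100);\n    long first = manager.acquire(80); // fully granted\n    manager.acquire(40); // only 20 bytes left in the pool: partial grant\n    manager.release(first);\n    System.out.println(\"used after release: \" + manager.used.get());\n  }\n}\n"
  },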
  {
    "path": "spark/src/main/java/org/apache/spark/shuffle/comet/CometBoundedShuffleMemoryAllocator.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.comet;\n\nimport java.io.IOException;\nimport java.util.BitSet;\n\nimport org.apache.spark.SparkConf;\nimport org.apache.spark.memory.MemoryConsumer;\nimport org.apache.spark.memory.MemoryMode;\nimport org.apache.spark.memory.SparkOutOfMemoryError;\nimport org.apache.spark.memory.TaskMemoryManager;\nimport org.apache.spark.sql.internal.SQLConf;\nimport org.apache.spark.unsafe.array.LongArray;\nimport org.apache.spark.unsafe.memory.MemoryBlock;\nimport org.apache.spark.unsafe.memory.UnsafeMemoryAllocator;\n\nimport org.apache.comet.CometSparkSessionExtensions$;\n\n/**\n * A simple memory allocator used by `CometShuffleExternalSorter` to allocate memory blocks which\n * store serialized rows. We don't rely on Spark memory allocator because we need to allocate\n * off-heap memory no matter memory mode is on-heap or off-heap. This allocator is configured with\n * fixed size of memory, and it will throw `SparkOutOfMemoryError` if the memory is not enough.\n *\n * <p>Some methods are copied from `org.apache.spark.unsafe.memory.TaskMemoryManager` with\n * modifications. Most modifications are to remove the dependency on the configured memory mode.\n *\n * <p>This allocator is only used by Comet Columnar Shuffle when running in on-heap mode. It is used\n * when users run in on-heap mode as well as in the Spark tests which require on-heap memory\n * configuration.\n *\n * <p>Thus, this allocator is used to allocate separate off-heap memory allocation for Comet\n * Columnar Shuffle and execution apart from Spark's on-heap memory configuration.\n */\npublic final class CometBoundedShuffleMemoryAllocator extends CometShuffleMemoryAllocatorTrait {\n  private final UnsafeMemoryAllocator allocator = new UnsafeMemoryAllocator();\n\n  private final long pageSize;\n  private final long totalMemory;\n  private long allocatedMemory = 0L;\n\n  /** The number of bits used to address the page table. */\n  private static final int PAGE_NUMBER_BITS = 13;\n\n  /** The number of entries in the page table. 
*/\n  private static final int PAGE_TABLE_SIZE = 1 << PAGE_NUMBER_BITS;\n\n  private final MemoryBlock[] pageTable = new MemoryBlock[PAGE_TABLE_SIZE];\n  private final BitSet allocatedPages = new BitSet(PAGE_TABLE_SIZE);\n\n  private static final int OFFSET_BITS = 51;\n  private static final long MASK_LONG_LOWER_51_BITS = 0x7FFFFFFFFFFFFL;\n\n  CometBoundedShuffleMemoryAllocator(\n      SparkConf conf, TaskMemoryManager taskMemoryManager, long pageSize) {\n    super(taskMemoryManager, pageSize, MemoryMode.OFF_HEAP);\n    this.pageSize = pageSize;\n    this.totalMemory =\n        CometSparkSessionExtensions$.MODULE$.getCometShuffleMemorySize(conf, SQLConf.get());\n  }\n\n  private synchronized long _acquireMemory(long size) {\n    if (allocatedMemory >= totalMemory) {\n      throw new SparkOutOfMemoryError(\n          \"UNABLE_TO_ACQUIRE_MEMORY\",\n          java.util.Map.of(\n              \"requestedBytes\", String.valueOf(size),\n              \"receivedBytes\", String.valueOf(totalMemory - allocatedMemory)));\n    }\n    long allocationSize = Math.min(size, totalMemory - allocatedMemory);\n    allocatedMemory += allocationSize;\n    return allocationSize;\n  }\n\n  public long spill(long l, MemoryConsumer memoryConsumer) throws IOException {\n    return 0;\n  }\n\n  public synchronized LongArray allocateArray(long size) {\n    long required = size * 8L;\n    MemoryBlock page = allocateMemoryBlock(required);\n    return new LongArray(page);\n  }\n\n  public synchronized void freeArray(LongArray array) {\n    if (array == null) {\n      return;\n    }\n    free(array.memoryBlock());\n  }\n\n  public synchronized MemoryBlock allocate(long required) {\n    long size = Math.max(pageSize, required);\n    return allocateMemoryBlock(size);\n  }\n\n  private synchronized MemoryBlock allocateMemoryBlock(long required) {\n    if (required > TaskMemoryManager.MAXIMUM_PAGE_SIZE_BYTES) {\n      throw new TooLargePageException(required);\n    }\n\n    long got = _acquireMemory(required);\n\n    if (got < required) {\n      allocatedMemory -= got;\n\n      throw new SparkOutOfMemoryError(\n          \"UNABLE_TO_ACQUIRE_MEMORY\",\n          java.util.Map.of(\n              \"requestedBytes\", String.valueOf(required),\n              \"receivedBytes\", String.valueOf(totalMemory - allocatedMemory)));\n    }\n\n    int pageNumber = allocatedPages.nextClearBit(0);\n    if (pageNumber >= PAGE_TABLE_SIZE) {\n      allocatedMemory -= got;\n\n      throw new IllegalStateException(\n          \"Have already allocated a maximum of \" + PAGE_TABLE_SIZE + \" pages\");\n    }\n\n    MemoryBlock block = allocator.allocate(got);\n\n    block.pageNumber = pageNumber;\n    pageTable[pageNumber] = block;\n    allocatedPages.set(pageNumber);\n\n    return block;\n  }\n\n  public synchronized long free(MemoryBlock block) {\n    if (block.pageNumber == MemoryBlock.FREED_IN_ALLOCATOR_PAGE_NUMBER\n        || block.pageNumber == MemoryBlock.FREED_IN_TMM_PAGE_NUMBER) {\n      // Already freed block\n      return 0;\n    }\n    long blockSize = block.size();\n    allocatedMemory -= blockSize;\n\n    pageTable[block.pageNumber] = null;\n    allocatedPages.clear(block.pageNumber);\n    block.pageNumber = MemoryBlock.FREED_IN_TMM_PAGE_NUMBER;\n\n    allocator.free(block);\n    return blockSize;\n  }\n\n  /**\n   * Returns the offset in the page for the given page plus base offset address. 
Note that this\n   * method assumes that the page number is valid.\n   */\n  public long getOffsetInPage(long pagePlusOffsetAddress) {\n    long offsetInPage = decodeOffset(pagePlusOffsetAddress);\n    int pageNumber = TaskMemoryManager.decodePageNumber(pagePlusOffsetAddress);\n    assert (pageNumber >= 0 && pageNumber < PAGE_TABLE_SIZE);\n    MemoryBlock page = pageTable[pageNumber];\n    assert (page != null);\n    return page.getBaseOffset() + offsetInPage;\n  }\n\n  public long decodeOffset(long pagePlusOffsetAddress) {\n    return pagePlusOffsetAddress & MASK_LONG_LOWER_51_BITS;\n  }\n\n  public long encodePageNumberAndOffset(int pageNumber, long offsetInPage) {\n    assert (pageNumber >= 0);\n    return ((long) pageNumber) << OFFSET_BITS | offsetInPage & MASK_LONG_LOWER_51_BITS;\n  }\n\n  public long encodePageNumberAndOffset(MemoryBlock page, long offsetInPage) {\n    return encodePageNumberAndOffset(page.pageNumber, offsetInPage - page.getBaseOffset());\n  }\n}\n"
  },
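  {
    "path": "examples/sketches/PageAddressEncodingSketch.java",
    "content": "/*\n * Hypothetical example file, not part of the Comet source tree; added for illustration only.\n *\n * A sketch of the address encoding used by CometBoundedShuffleMemoryAllocator: a 13-bit page\n * number packed into the upper bits of a long and a 51-bit offset in the lower bits, matching\n * the PAGE_NUMBER_BITS / OFFSET_BITS / MASK_LONG_LOWER_51_BITS constants in that class.\n */\npublic class PageAddressEncodingSketch {\n  private static final int OFFSET_BITS = 51;\n  private static final long MASK_LONG_LOWER_51_BITS = 0x7FFFFFFFFFFFFL;\n\n  static long encodePageNumberAndOffset(int pageNumber, long offsetInPage) {\n    return ((long) pageNumber) << OFFSET_BITS | (offsetInPage & MASK_LONG_LOWER_51_BITS);\n  }\n\n  static int decodePageNumber(long pagePlusOffsetAddress) {\n    return (int) (pagePlusOffsetAddress >>> OFFSET_BITS);\n  }\n\n  static long decodeOffset(long pagePlusOffsetAddress) {\n    return pagePlusOffsetAddress & MASK_LONG_LOWER_51_BITS;\n  }\n\n  public static void main(String[] args) {\n    long address = encodePageNumberAndOffset(42, 1_000_000L);\n    System.out.println(decodePageNumber(address)); // 42\n    System.out.println(decodeOffset(address)); // 1000000\n  }\n}\n"
  },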
  {
    "path": "spark/src/main/java/org/apache/spark/shuffle/comet/CometShuffleChecksumSupport.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.comet;\n\nimport org.apache.spark.SparkConf;\nimport org.apache.spark.internal.config.package$;\n\npublic interface CometShuffleChecksumSupport {\n  long[] EMPTY_CHECKSUM_VALUE = new long[0];\n\n  default long[] createPartitionChecksums(int numPartitions, SparkConf conf) {\n    if ((boolean) conf.get(package$.MODULE$.SHUFFLE_CHECKSUM_ENABLED())) {\n      long[] checksum = new long[numPartitions];\n\n      // Initialize checksums to Long.MIN_VALUE to indicate that they have not been computed.\n      for (int i = 0; i < numPartitions; i++) {\n        checksum[i] = Long.MIN_VALUE;\n      }\n\n      return checksum;\n    } else {\n      return EMPTY_CHECKSUM_VALUE;\n    }\n  }\n\n  default String getChecksumAlgorithm(SparkConf conf) {\n    if ((boolean) conf.get(package$.MODULE$.SHUFFLE_CHECKSUM_ENABLED())) {\n      return conf.get(package$.MODULE$.SHUFFLE_CHECKSUM_ALGORITHM());\n    } else {\n      return null;\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/shuffle/comet/CometShuffleMemoryAllocator.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.comet;\n\nimport org.apache.spark.SparkConf;\nimport org.apache.spark.memory.MemoryMode;\nimport org.apache.spark.memory.TaskMemoryManager;\n\n/**\n * An interface to instantiate either CometBoundedShuffleMemoryAllocator (on-heap mode) or\n * CometUnifiedShuffleMemoryAllocator (off-heap mode).\n */\npublic final class CometShuffleMemoryAllocator {\n  private static CometShuffleMemoryAllocatorTrait INSTANCE;\n\n  /**\n   * Returns the singleton instance of `CometShuffleMemoryAllocator`. This method should be used\n   * instead of the constructor to ensure that only one instance of `CometShuffleMemoryAllocator` is\n   * created. For on-heap mode (Spark tests), this returns `CometBoundedShuffleMemoryAllocator`.\n   */\n  public static CometShuffleMemoryAllocatorTrait getInstance(\n      SparkConf conf, TaskMemoryManager taskMemoryManager, long pageSize) {\n\n    if (taskMemoryManager.getTungstenMemoryMode() == MemoryMode.OFF_HEAP) {\n      // CometShuffleMemoryAllocator stores pages in TaskMemoryManager which is not singleton,\n      // but one instance per task. So we need to create a new instance for each task.\n      return new CometUnifiedShuffleMemoryAllocator(taskMemoryManager, pageSize);\n    }\n\n    synchronized (CometShuffleMemoryAllocator.class) {\n      if (INSTANCE == null) {\n        // CometBoundedShuffleMemoryAllocator handles pages by itself so it can be a singleton.\n        INSTANCE = new CometBoundedShuffleMemoryAllocator(conf, taskMemoryManager, pageSize);\n      }\n    }\n    return INSTANCE;\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/shuffle/comet/CometShuffleMemoryAllocatorTrait.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.comet;\n\nimport org.apache.spark.memory.MemoryConsumer;\nimport org.apache.spark.memory.MemoryMode;\nimport org.apache.spark.memory.TaskMemoryManager;\nimport org.apache.spark.unsafe.memory.MemoryBlock;\n\n/** The base class for Comet JVM shuffle memory allocators. */\npublic abstract class CometShuffleMemoryAllocatorTrait extends MemoryConsumer {\n  protected CometShuffleMemoryAllocatorTrait(\n      TaskMemoryManager taskMemoryManager, long pageSize, MemoryMode mode) {\n    super(taskMemoryManager, pageSize, mode);\n  }\n\n  public abstract MemoryBlock allocate(long required);\n\n  public abstract long free(MemoryBlock block);\n\n  public abstract long getOffsetInPage(long pagePlusOffsetAddress);\n\n  public abstract long encodePageNumberAndOffset(MemoryBlock page, long offsetInPage);\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/shuffle/comet/CometUnifiedShuffleMemoryAllocator.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.comet;\n\nimport java.io.IOException;\n\nimport org.apache.spark.memory.MemoryConsumer;\nimport org.apache.spark.memory.MemoryMode;\nimport org.apache.spark.memory.TaskMemoryManager;\nimport org.apache.spark.unsafe.memory.MemoryBlock;\n\n/**\n * A simple memory allocator used by `CometShuffleExternalSorter` to allocate memory blocks which\n * store serialized rows. This class is simply an implementation of `MemoryConsumer` that delegates\n * memory allocation to the `TaskMemoryManager`. This requires that the `TaskMemoryManager` is\n * configured with `MemoryMode.OFF_HEAP`, i.e. it is using off-heap memory.\n *\n * <p>If the user does not enable off-heap memory then we want to use\n * CometBoundedShuffleMemoryAllocator. The tests also need to default to using this because off-heap\n * is not enabled when running the Spark SQL tests.\n */\npublic final class CometUnifiedShuffleMemoryAllocator extends CometShuffleMemoryAllocatorTrait {\n\n  CometUnifiedShuffleMemoryAllocator(TaskMemoryManager taskMemoryManager, long pageSize) {\n    super(taskMemoryManager, pageSize, MemoryMode.OFF_HEAP);\n    if (taskMemoryManager.getTungstenMemoryMode() != MemoryMode.OFF_HEAP) {\n      throw new IllegalArgumentException(\n          \"CometUnifiedShuffleMemoryAllocator should be used with off-heap \"\n              + \"memory mode, but got \"\n              + taskMemoryManager.getTungstenMemoryMode());\n    }\n  }\n\n  public long spill(long l, MemoryConsumer memoryConsumer) throws IOException {\n    // JVM shuffle writer does not support spilling for other memory consumers\n    return 0;\n  }\n\n  public synchronized MemoryBlock allocate(long required) {\n    return this.allocatePage(required);\n  }\n\n  public synchronized long free(MemoryBlock block) {\n    if (block.pageNumber == MemoryBlock.FREED_IN_ALLOCATOR_PAGE_NUMBER\n        || block.pageNumber == MemoryBlock.FREED_IN_TMM_PAGE_NUMBER) {\n      // Already freed block\n      return 0;\n    }\n    long blockSize = block.size();\n    this.freePage(block);\n    return blockSize;\n  }\n\n  /**\n   * Returns the offset in the page for the given page plus base offset address. 
Note that this\n   * method assumes that the page number is valid.\n   */\n  public long getOffsetInPage(long pagePlusOffsetAddress) {\n    return taskMemoryManager.getOffsetInPage(pagePlusOffsetAddress);\n  }\n\n  public long encodePageNumberAndOffset(int pageNumber, long offsetInPage) {\n    return TaskMemoryManager.encodePageNumberAndOffset(pageNumber, offsetInPage);\n  }\n\n  public long encodePageNumberAndOffset(MemoryBlock page, long offsetInPage) {\n    return encodePageNumberAndOffset(page.pageNumber, offsetInPage - page.getBaseOffset());\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/shuffle/comet/TooLargePageException.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.comet;\n\npublic class TooLargePageException extends RuntimeException {\n  TooLargePageException(long size) {\n    super(\"Cannot allocate a page of \" + size + \" bytes.\");\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/shuffle/sort/CometShuffleExternalSorter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.sort;\n\nimport java.io.IOException;\n\nimport org.apache.spark.SparkConf;\nimport org.apache.spark.TaskContext;\nimport org.apache.spark.shuffle.ShuffleWriteMetricsReporter;\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocatorTrait;\nimport org.apache.spark.sql.comet.execution.shuffle.SpillInfo;\nimport org.apache.spark.sql.types.StructType;\nimport org.apache.spark.storage.BlockManager;\n\nimport org.apache.comet.CometConf$;\n\n/**\n * An external sorter interface specialized for sort-based shuffle.\n *\n * <p>Incoming records are appended to data pages. When all records have been inserted (or when the\n * current thread's shuffle memory limit is reached), the in-memory records are sorted according to\n * their partition ids using native sorter. The sorted records are then written to a single output\n * file (or multiple files, if we've spilled).\n *\n * <p>Unlike {@link org.apache.spark.util.collection.ExternalSorter}, this sorter does not merge its\n * spill files. Instead, this merging is performed in {@link\n * org.apache.spark.sql.comet.execution.shuffle.CometUnsafeShuffleWriter}, which uses a specialized\n * merge procedure that avoids extra serialization/deserialization.\n */\npublic interface CometShuffleExternalSorter {\n\n  int MAXIMUM_PAGE_SIZE_BYTES = PackedRecordPointer.MAXIMUM_PAGE_SIZE_BYTES;\n\n  /** Returns the checksums for each partition. */\n  long[] getChecksums();\n\n  /** Sort and spill the current records in response to memory pressure. */\n  void spill() throws IOException;\n\n  /** Return the peak memory used so far, in bytes. */\n  long getPeakMemoryUsedBytes();\n\n  /** Force all memory and spill files to be deleted; called by shuffle error-handling code. */\n  void cleanupResources();\n\n  /**\n   * Writes a record to the shuffle sorter. This copies the record data into this external sorter's\n   * managed memory, which may trigger spilling if the copy would exceed the memory limit. It\n   * inserts a pointer for the record and record's partition id into the in-memory sorter.\n   */\n  void insertRecord(Object recordBase, long recordOffset, int length, int partitionId)\n      throws IOException;\n\n  /**\n   * Close the sorter, causing any buffered data to be sorted and written out to disk.\n   *\n   * @return metadata for the spill files written by this sorter. 
If no records were ever inserted\n   *     into this sorter, then this will return an empty array.\n   */\n  SpillInfo[] closeAndGetSpills() throws IOException;\n\n  /**\n   * Factory method to create the appropriate sorter implementation based on configuration.\n   *\n   * @return either a sync or async implementation based on the COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED\n   *     configuration\n   */\n  static CometShuffleExternalSorter create(\n      CometShuffleMemoryAllocatorTrait allocator,\n      BlockManager blockManager,\n      TaskContext taskContext,\n      int initialSize,\n      int numPartitions,\n      SparkConf conf,\n      ShuffleWriteMetricsReporter writeMetrics,\n      StructType schema) {\n\n    boolean isAsync = (boolean) CometConf$.MODULE$.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED().get();\n\n    if (isAsync) {\n      return new CometShuffleExternalSorterAsync(\n          allocator,\n          blockManager,\n          taskContext,\n          initialSize,\n          numPartitions,\n          conf,\n          writeMetrics,\n          schema);\n    } else {\n      return new CometShuffleExternalSorterSync(\n          allocator,\n          blockManager,\n          taskContext,\n          initialSize,\n          numPartitions,\n          conf,\n          writeMetrics,\n          schema);\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/shuffle/sort/CometShuffleExternalSorterAsync.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.sort;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.util.LinkedList;\nimport java.util.concurrent.*;\n\nimport scala.Tuple2;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.spark.SparkConf;\nimport org.apache.spark.TaskContext;\nimport org.apache.spark.memory.SparkOutOfMemoryError;\nimport org.apache.spark.shuffle.ShuffleWriteMetricsReporter;\nimport org.apache.spark.shuffle.comet.CometShuffleChecksumSupport;\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocatorTrait;\nimport org.apache.spark.shuffle.comet.TooLargePageException;\nimport org.apache.spark.sql.comet.execution.shuffle.ShuffleThreadPool;\nimport org.apache.spark.sql.comet.execution.shuffle.SpillInfo;\nimport org.apache.spark.sql.types.StructType;\nimport org.apache.spark.storage.BlockManager;\nimport org.apache.spark.storage.TempShuffleBlockId;\nimport org.apache.spark.unsafe.UnsafeAlignedOffset;\nimport org.apache.spark.unsafe.array.LongArray;\nimport org.apache.spark.util.Utils;\n\nimport org.apache.comet.CometConf$;\n\n/**\n * Asynchronous implementation of the external sorter for sort-based shuffle.\n *\n * <p>Incoming records are appended to data pages. When all records have been inserted (or when the\n * current thread's shuffle memory limit is reached), the in-memory records are sorted according to\n * their partition ids using native sorter. The sorted records are then written to a single output\n * file (or multiple files, if we've spilled).\n *\n * <p>Unlike {@link org.apache.spark.util.collection.ExternalSorter}, this sorter does not merge its\n * spill files. Instead, this merging is performed in {@link\n * org.apache.spark.sql.comet.execution.shuffle.CometUnsafeShuffleWriter}, which uses a specialized\n * merge procedure that avoids extra serialization/deserialization.\n *\n * <p>This sorter provides async spilling write mode. When spilling, it will submit a task to thread\n * pool to write shuffle spilling file. After submitting the task, it will continue to buffer, sort\n * incoming records and submit another spilling task once spilling threshold reached again or memory\n * is not enough to buffer incoming records. Each spilling task will write a shuffle spilling file\n * separately. 
After all records have been sorted and spilled, all spill files will be merged by\n * {@link org.apache.spark.sql.comet.execution.shuffle.CometUnsafeShuffleWriter}.\n */\npublic final class CometShuffleExternalSorterAsync\n    implements CometShuffleExternalSorter, CometShuffleChecksumSupport {\n\n  private static final Logger logger =\n      LoggerFactory.getLogger(CometShuffleExternalSorterAsync.class);\n\n  private final int numPartitions;\n  private final BlockManager blockManager;\n  private final TaskContext taskContext;\n  private final ShuffleWriteMetricsReporter writeMetrics;\n\n  private final StructType schema;\n\n  /** Force this sorter to spill when there are this many elements in memory. */\n  private final int numElementsForSpillThreshold;\n\n  // When this external sorter allocates memory of `sorterArray`, we need to keep its\n  // assigned initial size. After spilling, we will reset the array to its initial size.\n  // See `sorterArray` comment for more details.\n  private int initialSize;\n\n  /** All sorters with memory pages used by the sorters. */\n  private final ConcurrentLinkedQueue<SpillSorter> spillingSorters = new ConcurrentLinkedQueue<>();\n\n  private SpillSorter activeSpillSorter;\n\n  private final LinkedList<SpillInfo> spills = new LinkedList<>();\n\n  /** Peak memory used by this sorter so far, in bytes. */\n  private long peakMemoryUsedBytes;\n\n  // Checksum calculator for each partition. Empty when shuffle checksum disabled.\n  private final long[] partitionChecksums;\n\n  private final String checksumAlgorithm;\n  private final String compressionCodec;\n  private final int compressionLevel;\n\n  // The memory allocator for this sorter. It is used to allocate/free memory pages for this sorter.\n  // Because we need to allocate off-heap memory regardless of configured Spark memory mode\n  // (on-heap/off-heap), we need a separate memory allocator.\n  private final CometShuffleMemoryAllocatorTrait allocator;\n\n  /** Thread pool shared for async spilling write */\n  private final ExecutorService threadPool;\n\n  private final int threadNum;\n\n  private ConcurrentLinkedQueue<Future<Void>> asyncSpillTasks = new ConcurrentLinkedQueue<>();\n\n  private boolean spilling = false;\n\n  private final int uaoSize = UnsafeAlignedOffset.getUaoSize();\n  private final double preferDictionaryRatio;\n  private final boolean tracingEnabled;\n\n  public CometShuffleExternalSorterAsync(\n      CometShuffleMemoryAllocatorTrait allocator,\n      BlockManager blockManager,\n      TaskContext taskContext,\n      int initialSize,\n      int numPartitions,\n      SparkConf conf,\n      ShuffleWriteMetricsReporter writeMetrics,\n      StructType schema) {\n    this.allocator = allocator;\n    this.blockManager = blockManager;\n    this.taskContext = taskContext;\n    this.numPartitions = numPartitions;\n    this.schema = schema;\n    this.numElementsForSpillThreshold =\n        (int) CometConf$.MODULE$.COMET_COLUMNAR_SHUFFLE_SPILL_THRESHOLD().get();\n    this.writeMetrics = writeMetrics;\n\n    this.peakMemoryUsedBytes = getMemoryUsage();\n    this.partitionChecksums = createPartitionChecksums(numPartitions, conf);\n    this.checksumAlgorithm = getChecksumAlgorithm(conf);\n    this.compressionCodec = CometConf$.MODULE$.COMET_EXEC_SHUFFLE_COMPRESSION_CODEC().get();\n    this.compressionLevel =\n        (int) CometConf$.MODULE$.COMET_EXEC_SHUFFLE_COMPRESSION_ZSTD_LEVEL().get();\n\n    this.initialSize = initialSize;\n    this.tracingEnabled = (boolean) 
CometConf$.MODULE$.COMET_TRACING_ENABLED().get();\n\n    this.threadNum = (int) CometConf$.MODULE$.COMET_COLUMNAR_SHUFFLE_ASYNC_THREAD_NUM().get();\n    assert (this.threadNum > 0);\n    this.threadPool = ShuffleThreadPool.getThreadPool();\n\n    this.preferDictionaryRatio =\n        (double) CometConf$.MODULE$.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO().get();\n\n    this.activeSpillSorter = createSpillSorter();\n  }\n\n  /** Creates a new SpillSorter with all required dependencies. */\n  private SpillSorter createSpillSorter() {\n    return new SpillSorter(\n        allocator,\n        initialSize,\n        schema,\n        uaoSize,\n        preferDictionaryRatio,\n        compressionCodec,\n        compressionLevel,\n        checksumAlgorithm,\n        partitionChecksums,\n        writeMetrics,\n        taskContext,\n        spills,\n        this::spill);\n  }\n\n  @Override\n  public long[] getChecksums() {\n    return partitionChecksums;\n  }\n\n  /** Sort and spill the current records in response to memory pressure. */\n  @Override\n  public void spill() throws IOException {\n    if (spilling || activeSpillSorter == null || activeSpillSorter.numRecords() == 0) {\n      return;\n    }\n\n    // In async mode, if the new in-memory sorter cannot allocate the required array, it triggers a\n    // spill here. This method would then initiate a new sorter following the normal spill logic\n    // and eventually cause a stack overflow. So we need to avoid triggering spilling again while\n    // spilling. But we cannot make this method \"synchronized\" because it would block the caller\n    // thread.\n    spilling = true;\n\n    logger.info(\n        \"Thread {} spilling sort data of {} to disk ({} {} so far)\",\n        Thread.currentThread().getId(),\n        Utils.bytesToString(getMemoryUsage()),\n        spills.size(),\n        spills.size() > 1 ? \" times\" : \" time\");\n\n    final Tuple2<TempShuffleBlockId, File> spilledFileInfo =\n        blockManager.diskBlockManager().createTempShuffleBlock();\n    final File file = spilledFileInfo._2();\n    final TempShuffleBlockId blockId = spilledFileInfo._1();\n    final SpillInfo spillInfo = new SpillInfo(numPartitions, file, blockId);\n\n    activeSpillSorter.setSpillInfo(spillInfo);\n\n    SpillSorter spillingSorter = activeSpillSorter;\n    Callable<Void> task =\n        () -> {\n          spillingSorter.writeSortedFileNative(false, tracingEnabled);\n          final long spillSize = spillingSorter.freeMemory();\n          spillingSorter.freeArray();\n          spillingSorters.remove(spillingSorter);\n\n          // Reset the in-memory sorter's pointer array only after freeing up the memory pages\n          // holding the records. 
Otherwise, if the task is over allocated memory, then without\n          // freeing the memory pages, we might not be able to get memory for the pointer array.\n          synchronized (CometShuffleExternalSorterAsync.this) {\n            taskContext.taskMetrics().incMemoryBytesSpilled(spillSize);\n          }\n\n          return null;\n        };\n\n    spillingSorters.add(spillingSorter);\n    asyncSpillTasks.add(threadPool.submit(task));\n\n    while (asyncSpillTasks.size() == threadNum) {\n      for (Future<Void> spillingTask : asyncSpillTasks) {\n        if (spillingTask.isDone()) {\n          asyncSpillTasks.remove(spillingTask);\n          break;\n        }\n      }\n    }\n\n    activeSpillSorter = createSpillSorter();\n\n    spilling = false;\n  }\n\n  private long getMemoryUsage() {\n    long totalPageSize = 0;\n    for (SpillSorter sorter : spillingSorters) {\n      totalPageSize += sorter.getMemoryUsage();\n    }\n    if (activeSpillSorter != null) {\n      totalPageSize += activeSpillSorter.getMemoryUsage();\n    }\n    return totalPageSize;\n  }\n\n  private void updatePeakMemoryUsed() {\n    long mem = getMemoryUsage();\n    if (mem > peakMemoryUsedBytes) {\n      peakMemoryUsedBytes = mem;\n    }\n  }\n\n  /** Return the peak memory used so far, in bytes. */\n  @Override\n  public long getPeakMemoryUsedBytes() {\n    updatePeakMemoryUsed();\n    return peakMemoryUsedBytes;\n  }\n\n  private long freeMemory() {\n    updatePeakMemoryUsed();\n    long memoryFreed = 0;\n    for (SpillSorter sorter : spillingSorters) {\n      memoryFreed += sorter.freeMemory();\n      sorter.freeArray();\n    }\n    memoryFreed += activeSpillSorter.freeMemory();\n    activeSpillSorter.freeArray();\n\n    return memoryFreed;\n  }\n\n  /** Force all memory and spill files to be deleted; called by shuffle error-handling code. */\n  @Override\n  public void cleanupResources() {\n    freeMemory();\n\n    for (SpillInfo spill : spills) {\n      if (spill.file.exists() && !spill.file.delete()) {\n        logger.error(\"Unable to delete spill file {}\", spill.file.getPath());\n      }\n    }\n  }\n\n  /**\n   * Checks whether there is enough space to insert an additional record into the sort pointer\n   * array and grows the array if additional space is required. 
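The replacement array is allocated with {@code used / 8 * 2} entries, where {@code used} is the sorter's current memory usage in bytes. 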
If the required space cannot be\n   * obtained, then the in-memory data will be spilled to disk.\n   */\n  private void growPointerArrayIfNecessary() throws IOException {\n    assert (activeSpillSorter != null);\n    if (!activeSpillSorter.hasSpaceForAnotherRecord()) {\n      long used = activeSpillSorter.getMemoryUsage();\n      LongArray array;\n      try {\n        // could trigger spilling\n        array = allocator.allocateArray(used / 8 * 2);\n      } catch (TooLargePageException e) {\n        // The pointer array is too big to fit in a single page, spill.\n        spill();\n        return;\n      } catch (SparkOutOfMemoryError e) {\n        // Cannot allocate enough memory, spill and reset pointer array.\n        try {\n          spill();\n        } catch (SparkOutOfMemoryError e2) {\n          // Cannot allocate memory even after spilling, throw the error.\n          if (!activeSpillSorter.hasSpaceForAnotherRecord()) {\n            logger.error(\"Unable to grow the pointer array\");\n            throw e2;\n          }\n        }\n        return;\n      }\n      // check if spilling is triggered or not\n      if (activeSpillSorter.hasSpaceForAnotherRecord()) {\n        allocator.freeArray(array);\n      } else {\n        activeSpillSorter.expandPointerArray(array);\n      }\n    }\n  }\n\n  /**\n   * Writes a record to the shuffle sorter. This copies the record data into this external sorter's\n   * managed memory, which may trigger spilling if the copy would exceed the memory limit. It\n   * inserts a pointer for the record and record's partition id into the in-memory sorter.\n   */\n  @Override\n  public void insertRecord(Object recordBase, long recordOffset, int length, int partitionId)\n      throws IOException {\n\n    assert (activeSpillSorter != null);\n    int threshold = numElementsForSpillThreshold;\n    if (activeSpillSorter.numRecords() >= threshold) {\n      logger.info(\n          \"Spilling data because number of spilledRecords crossed the threshold \" + threshold);\n      spill();\n    }\n\n    growPointerArrayIfNecessary();\n\n    // Need 4 or 8 bytes to store the record length.\n    final int required = length + uaoSize;\n    // Acquire enough memory to store the record.\n    // If we cannot acquire enough memory, we will spill current writers.\n    if (!activeSpillSorter.acquireNewPageIfNecessary(required)) {\n      // Spilling has happened; initiate a new memory page for the new writer.\n      activeSpillSorter.initialCurrentPage(required);\n    }\n\n    activeSpillSorter.insertRecord(recordBase, recordOffset, length, partitionId);\n  }\n\n  /**\n   * Close the sorter, causing any buffered data to be sorted and written out to disk.\n   *\n   * @return metadata for the spill files written by this sorter. 
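This includes files written by completed async spill tasks as well as the final sorted output. 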
If no records were ever inserted\n   *     into this sorter, then this will return an empty array.\n   */\n  @Override\n  public SpillInfo[] closeAndGetSpills() throws IOException {\n    if (activeSpillSorter != null) {\n      // Do not count the final file towards the spill count.\n      final Tuple2<TempShuffleBlockId, File> spilledFileInfo =\n          blockManager.diskBlockManager().createTempShuffleBlock();\n      final File file = spilledFileInfo._2();\n      final TempShuffleBlockId blockId = spilledFileInfo._1();\n      final SpillInfo spillInfo = new SpillInfo(numPartitions, file, blockId);\n\n      // Waits for all async tasks to finish.\n      for (Future<Void> task : asyncSpillTasks) {\n        try {\n          task.get();\n        } catch (Exception e) {\n          throw new IOException(e);\n        }\n      }\n\n      asyncSpillTasks.clear();\n\n      activeSpillSorter.setSpillInfo(spillInfo);\n      activeSpillSorter.writeSortedFileNative(true, tracingEnabled);\n\n      freeMemory();\n    }\n\n    return spills.toArray(new SpillInfo[spills.size()]);\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/shuffle/sort/CometShuffleExternalSorterSync.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.sort;\n\nimport java.io.File;\nimport java.io.IOException;\nimport java.util.LinkedList;\n\nimport scala.Tuple2;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.spark.SparkConf;\nimport org.apache.spark.TaskContext;\nimport org.apache.spark.memory.SparkOutOfMemoryError;\nimport org.apache.spark.shuffle.ShuffleWriteMetricsReporter;\nimport org.apache.spark.shuffle.comet.CometShuffleChecksumSupport;\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocatorTrait;\nimport org.apache.spark.shuffle.comet.TooLargePageException;\nimport org.apache.spark.sql.comet.execution.shuffle.SpillInfo;\nimport org.apache.spark.sql.types.StructType;\nimport org.apache.spark.storage.BlockManager;\nimport org.apache.spark.storage.TempShuffleBlockId;\nimport org.apache.spark.unsafe.UnsafeAlignedOffset;\nimport org.apache.spark.unsafe.array.LongArray;\nimport org.apache.spark.util.Utils;\n\nimport org.apache.comet.CometConf$;\n\n/**\n * Synchronous implementation of the external sorter for sort-based shuffle.\n *\n * <p>Incoming records are appended to data pages. When all records have been inserted (or when the\n * current thread's shuffle memory limit is reached), the in-memory records are sorted according to\n * their partition ids using native sorter. The sorted records are then written to a single output\n * file (or multiple files, if we've spilled).\n *\n * <p>Unlike {@link org.apache.spark.util.collection.ExternalSorter}, this sorter does not merge its\n * spill files. Instead, this merging is performed in {@link\n * org.apache.spark.sql.comet.execution.shuffle.CometUnsafeShuffleWriter}, which uses a specialized\n * merge procedure that avoids extra serialization/deserialization.\n */\npublic final class CometShuffleExternalSorterSync\n    implements CometShuffleExternalSorter, CometShuffleChecksumSupport {\n\n  private static final Logger logger =\n      LoggerFactory.getLogger(CometShuffleExternalSorterSync.class);\n\n  private final int numPartitions;\n  private final BlockManager blockManager;\n  private final TaskContext taskContext;\n  private final ShuffleWriteMetricsReporter writeMetrics;\n\n  private final StructType schema;\n\n  /** Force this sorter to spill when there are this many elements in memory. */\n  private final int numElementsForSpillThreshold;\n\n  // When this external sorter allocates memory of `sorterArray`, we need to keep its\n  // assigned initial size. 
After spilling, we will reset the array to its initial size.\n  // See `sorterArray` comment for more details.\n  private int initialSize;\n\n  private SpillSorter activeSpillSorter;\n\n  private final LinkedList<SpillInfo> spills = new LinkedList<>();\n\n  /** Peak memory used by this sorter so far, in bytes. */\n  private long peakMemoryUsedBytes;\n\n  // Checksum calculator for each partition. Empty when shuffle checksum disabled.\n  private final long[] partitionChecksums;\n\n  private final String checksumAlgorithm;\n  private final String compressionCodec;\n  private final int compressionLevel;\n\n  // The memory allocator for this sorter. It is used to allocate/free memory pages for this sorter.\n  // Because we need to allocate off-heap memory regardless of configured Spark memory mode\n  // (on-heap/off-heap), we need a separate memory allocator.\n  private final CometShuffleMemoryAllocatorTrait allocator;\n\n  private boolean spilling = false;\n\n  private final int uaoSize = UnsafeAlignedOffset.getUaoSize();\n  private final double preferDictionaryRatio;\n  private final boolean tracingEnabled;\n\n  public CometShuffleExternalSorterSync(\n      CometShuffleMemoryAllocatorTrait allocator,\n      BlockManager blockManager,\n      TaskContext taskContext,\n      int initialSize,\n      int numPartitions,\n      SparkConf conf,\n      ShuffleWriteMetricsReporter writeMetrics,\n      StructType schema) {\n    this.allocator = allocator;\n    this.blockManager = blockManager;\n    this.taskContext = taskContext;\n    this.numPartitions = numPartitions;\n    this.schema = schema;\n    this.numElementsForSpillThreshold =\n        (int) CometConf$.MODULE$.COMET_COLUMNAR_SHUFFLE_SPILL_THRESHOLD().get();\n    this.writeMetrics = writeMetrics;\n\n    this.peakMemoryUsedBytes = getMemoryUsage();\n    this.partitionChecksums = createPartitionChecksums(numPartitions, conf);\n    this.checksumAlgorithm = getChecksumAlgorithm(conf);\n    this.compressionCodec = CometConf$.MODULE$.COMET_EXEC_SHUFFLE_COMPRESSION_CODEC().get();\n    this.compressionLevel =\n        (int) CometConf$.MODULE$.COMET_EXEC_SHUFFLE_COMPRESSION_ZSTD_LEVEL().get();\n\n    this.initialSize = initialSize;\n    this.tracingEnabled = (boolean) CometConf$.MODULE$.COMET_TRACING_ENABLED().get();\n\n    this.preferDictionaryRatio =\n        (double) CometConf$.MODULE$.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO().get();\n\n    this.activeSpillSorter = createSpillSorter();\n  }\n\n  /** Creates a new SpillSorter with all required dependencies. */\n  private SpillSorter createSpillSorter() {\n    return new SpillSorter(\n        allocator,\n        initialSize,\n        schema,\n        uaoSize,\n        preferDictionaryRatio,\n        compressionCodec,\n        compressionLevel,\n        checksumAlgorithm,\n        partitionChecksums,\n        writeMetrics,\n        taskContext,\n        spills,\n        this::spill);\n  }\n\n  @Override\n  public long[] getChecksums() {\n    return partitionChecksums;\n  }\n\n  /** Sort and spill the current records in response to memory pressure. */\n  @Override\n  public void spill() throws IOException {\n    if (spilling || activeSpillSorter == null || activeSpillSorter.numRecords() == 0) {\n      return;\n    }\n\n    spilling = true;\n\n    logger.info(\n        \"Thread {} spilling sort data of {} to disk ({} {} so far)\",\n        Thread.currentThread().getId(),\n        Utils.bytesToString(getMemoryUsage()),\n        spills.size(),\n        spills.size() > 1 ? 
\" times\" : \" time\");\n\n    final Tuple2<TempShuffleBlockId, File> spilledFileInfo =\n        blockManager.diskBlockManager().createTempShuffleBlock();\n    final File file = spilledFileInfo._2();\n    final TempShuffleBlockId blockId = spilledFileInfo._1();\n    final SpillInfo spillInfo = new SpillInfo(numPartitions, file, blockId);\n\n    activeSpillSorter.setSpillInfo(spillInfo);\n\n    activeSpillSorter.writeSortedFileNative(false, tracingEnabled);\n    final long spillSize = activeSpillSorter.freeMemory();\n    activeSpillSorter.reset();\n\n    // Reset the in-memory sorter's pointer array only after freeing up the memory pages holding\n    // the records. Otherwise, if the task is over allocated memory, then without freeing the\n    // memory pages, we might not be able to get memory for the pointer array.\n    taskContext.taskMetrics().incMemoryBytesSpilled(spillSize);\n\n    spilling = false;\n  }\n\n  private long getMemoryUsage() {\n    if (activeSpillSorter != null) {\n      return activeSpillSorter.getMemoryUsage();\n    }\n    return 0;\n  }\n\n  private void updatePeakMemoryUsed() {\n    long mem = getMemoryUsage();\n    if (mem > peakMemoryUsedBytes) {\n      peakMemoryUsedBytes = mem;\n    }\n  }\n\n  /** Return the peak memory used so far, in bytes. */\n  @Override\n  public long getPeakMemoryUsedBytes() {\n    updatePeakMemoryUsed();\n    return peakMemoryUsedBytes;\n  }\n\n  private long freeMemory() {\n    updatePeakMemoryUsed();\n    long memoryFreed = activeSpillSorter.freeMemory();\n    activeSpillSorter.freeArray();\n    return memoryFreed;\n  }\n\n  /** Force all memory and spill files to be deleted; called by shuffle error-handling code. */\n  @Override\n  public void cleanupResources() {\n    freeMemory();\n\n    for (SpillInfo spill : spills) {\n      if (spill.file.exists() && !spill.file.delete()) {\n        logger.error(\"Unable to delete spill file {}\", spill.file.getPath());\n      }\n    }\n  }\n\n  /**\n   * Checks whether there is enough space to insert an additional record in to the sort pointer\n   * array and grows the array if additional space is required. If the required space cannot be\n   * obtained, then the in-memory data will be spilled to disk.\n   */\n  private void growPointerArrayIfNecessary() throws IOException {\n    assert (activeSpillSorter != null);\n    if (!activeSpillSorter.hasSpaceForAnotherRecord()) {\n      long used = activeSpillSorter.getMemoryUsage();\n      LongArray array;\n      try {\n        // could trigger spilling\n        array = allocator.allocateArray(used / 8 * 2);\n      } catch (TooLargePageException e) {\n        // The pointer array is too big to fix in a single page, spill.\n        spill();\n        return;\n      } catch (SparkOutOfMemoryError e) {\n        // Cannot allocate enough memory, spill and reset pointer array.\n        try {\n          spill();\n        } catch (SparkOutOfMemoryError e2) {\n          // Cannot allocate memory even after spilling, throw the error.\n          if (!activeSpillSorter.hasSpaceForAnotherRecord()) {\n            logger.error(\"Unable to grow the pointer array\");\n            throw e2;\n          }\n        }\n        return;\n      }\n      // check if spilling is triggered or not\n      if (activeSpillSorter.hasSpaceForAnotherRecord()) {\n        allocator.freeArray(array);\n      } else {\n        activeSpillSorter.expandPointerArray(array);\n      }\n    }\n  }\n\n  /**\n   * Writes a record to the shuffle sorter. 
This copies the record data into this external sorter's\n   * managed memory, which may trigger spilling if the copy would exceed the memory limit. It\n   * inserts a pointer for the record and record's partition id into the in-memory sorter.\n   */\n  @Override\n  public void insertRecord(Object recordBase, long recordOffset, int length, int partitionId)\n      throws IOException {\n\n    assert (activeSpillSorter != null);\n    int threshold = numElementsForSpillThreshold;\n    if (activeSpillSorter.numRecords() >= threshold) {\n      logger.info(\n          \"Spilling data because number of spilledRecords crossed the threshold \" + threshold);\n      spill();\n    }\n\n    growPointerArrayIfNecessary();\n\n    // Need 4 or 8 bytes to store the record length.\n    final int required = length + uaoSize;\n    // Acquire enough memory to store the record.\n    // If we cannot acquire enough memory, we will spill current writers.\n    if (!activeSpillSorter.acquireNewPageIfNecessary(required)) {\n      // Spilling has happened; initiate a new memory page for the new writer.\n      activeSpillSorter.initialCurrentPage(required);\n    }\n\n    activeSpillSorter.insertRecord(recordBase, recordOffset, length, partitionId);\n  }\n\n  /**\n   * Close the sorter, causing any buffered data to be sorted and written out to disk.\n   *\n   * @return metadata for the spill files written by this sorter. If no records were ever inserted\n   *     into this sorter, then this will return an empty array.\n   */\n  @Override\n  public SpillInfo[] closeAndGetSpills() throws IOException {\n    if (activeSpillSorter != null) {\n      // Do not count the final file towards the spill count.\n      final Tuple2<TempShuffleBlockId, File> spilledFileInfo =\n          blockManager.diskBlockManager().createTempShuffleBlock();\n      final File file = spilledFileInfo._2();\n      final TempShuffleBlockId blockId = spilledFileInfo._1();\n      final SpillInfo spillInfo = new SpillInfo(numPartitions, file, blockId);\n\n      activeSpillSorter.setSpillInfo(spillInfo);\n      activeSpillSorter.writeSortedFileNative(true, tracingEnabled);\n\n      freeMemory();\n    }\n\n    return spills.toArray(new SpillInfo[spills.size()]);\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/shuffle/sort/SpillSorter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.sort;\n\nimport java.io.IOException;\nimport java.util.LinkedList;\nimport javax.annotation.Nullable;\n\nimport org.apache.spark.TaskContext;\nimport org.apache.spark.executor.ShuffleWriteMetrics;\nimport org.apache.spark.shuffle.ShuffleWriteMetricsReporter;\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocatorTrait;\nimport org.apache.spark.sql.comet.execution.shuffle.SpillInfo;\nimport org.apache.spark.sql.comet.execution.shuffle.SpillWriter;\nimport org.apache.spark.sql.types.StructType;\nimport org.apache.spark.unsafe.Platform;\nimport org.apache.spark.unsafe.UnsafeAlignedOffset;\nimport org.apache.spark.unsafe.array.LongArray;\n\nimport org.apache.comet.Native;\n\n/**\n * A spill sorter that buffers records in memory, sorts them by partition ID, and writes them to\n * disk. This class is used by CometShuffleExternalSorter to manage individual spill operations.\n *\n * <p>Each SpillSorter instance manages its own memory pages and pointer array. When spilling is\n * triggered, the records are sorted by partition ID using native code and written to a spill file.\n */\npublic class SpillSorter extends SpillWriter {\n\n  /** Callback interface for triggering spill operations in the parent sorter. 
*/\n  @FunctionalInterface\n  public interface SpillCallback {\n    void onSpillRequired() throws IOException;\n  }\n\n  // Configuration fields (immutable after construction)\n  private final int initialSize;\n  private final int uaoSize;\n  private final double preferDictionaryRatio;\n  private final String compressionCodec;\n  private final int compressionLevel;\n  private final String checksumAlgorithm;\n\n  // Shared state (mutable, passed by reference from parent)\n  private final long[] partitionChecksums;\n  private final ShuffleWriteMetricsReporter writeMetrics;\n  private final TaskContext taskContext;\n  private final LinkedList<SpillInfo> spills;\n  private final SpillCallback spillCallback;\n\n  // Internal state\n  private boolean freed = false;\n  private SpillInfo spillInfo;\n  @Nullable private ShuffleInMemorySorter inMemSorter;\n  private LongArray sorterArray;\n\n  /**\n   * Creates a new SpillSorter with explicit dependencies.\n   *\n   * @param allocator Memory allocator for pages and arrays\n   * @param initialSize Initial size for the pointer array\n   * @param schema Schema of the records being sorted\n   * @param uaoSize Size of UnsafeAlignedOffset (4 or 8 bytes)\n   * @param preferDictionaryRatio Dictionary encoding preference ratio\n   * @param compressionCodec Compression codec for spill files\n   * @param compressionLevel Compression level\n   * @param checksumAlgorithm Checksum algorithm (e.g., \"crc32\", \"adler32\")\n   * @param partitionChecksums Array to store partition checksums (shared with parent)\n   * @param writeMetrics Metrics reporter for shuffle writes\n   * @param taskContext Task context for metrics updates\n   * @param spills List to accumulate spill info (shared with parent)\n   * @param spillCallback Callback to trigger spill in parent sorter\n   */\n  public SpillSorter(\n      CometShuffleMemoryAllocatorTrait allocator,\n      int initialSize,\n      StructType schema,\n      int uaoSize,\n      double preferDictionaryRatio,\n      String compressionCodec,\n      int compressionLevel,\n      String checksumAlgorithm,\n      long[] partitionChecksums,\n      ShuffleWriteMetricsReporter writeMetrics,\n      TaskContext taskContext,\n      LinkedList<SpillInfo> spills,\n      SpillCallback spillCallback) {\n\n    this.initialSize = initialSize;\n    this.uaoSize = uaoSize;\n    this.preferDictionaryRatio = preferDictionaryRatio;\n    this.compressionCodec = compressionCodec;\n    this.compressionLevel = compressionLevel;\n    this.checksumAlgorithm = checksumAlgorithm;\n    this.partitionChecksums = partitionChecksums;\n    this.writeMetrics = writeMetrics;\n    this.taskContext = taskContext;\n    this.spills = spills;\n    this.spillCallback = spillCallback;\n\n    this.spillInfo = null;\n    this.allocator = allocator;\n\n    // Allocate array for in-memory sorter.\n    // Since we cannot access the address of the internal array in the sorter, we need to\n    // allocate the array manually and expand the pointer array in the sorter.\n    // We don't want the in-memory sorter to allocate memory, but the initial size cannot be zero.\n    try {\n      this.inMemSorter = new ShuffleInMemorySorter(allocator, 1, true);\n    } catch (java.lang.IllegalAccessError e) {\n      throw new java.lang.RuntimeException(\n          \"Error loading in-memory sorter; check the class path -- see \"\n              + \"https://github.com/apache/arrow-datafusion-comet?tab=readme-ov-file#enable-comet-shuffle\",\n          e);\n    }\n    sorterArray = 
allocator.allocateArray(initialSize);\n    this.inMemSorter.expandPointerArray(sorterArray);\n\n    this.allocatedPages = new LinkedList<>();\n\n    this.nativeLib = new Native();\n    this.dataTypes = serializeSchema(schema);\n  }\n\n  /** Frees allocated memory pages of this writer */\n  @Override\n  public long freeMemory() {\n    // We need to synchronize here because we may get the memory usage by calling\n    // this method in the task thread.\n    synchronized (this) {\n      return super.freeMemory();\n    }\n  }\n\n  @Override\n  public long getMemoryUsage() {\n    // We need to synchronize here because we may free the memory pages in another thread,\n    // i.e. when spilling, but this method may be called in the task thread.\n    synchronized (this) {\n      long totalPageSize = super.getMemoryUsage();\n\n      if (freed) {\n        return totalPageSize;\n      } else {\n        return ((inMemSorter == null) ? 0 : inMemSorter.getMemoryUsage()) + totalPageSize;\n      }\n    }\n  }\n\n  @Override\n  protected void spill(int required) throws IOException {\n    spillCallback.onSpillRequired();\n  }\n\n  /** Free the pointer array held by this sorter. */\n  public void freeArray() {\n    synchronized (this) {\n      inMemSorter.free();\n      freed = true;\n    }\n  }\n\n  /**\n   * Reset the in-memory sorter's pointer array only after freeing up the memory pages holding the\n   * records.\n   */\n  public void reset() {\n    synchronized (this) {\n      // We allocate the pointer array outside the sorter so that we can get the array address,\n      // which can be used by native code.\n      inMemSorter.reset();\n      sorterArray = allocator.allocateArray(initialSize);\n      inMemSorter.expandPointerArray(sorterArray);\n      freed = false;\n    }\n  }\n\n  void setSpillInfo(SpillInfo spillInfo) {\n    this.spillInfo = spillInfo;\n  }\n\n  public int numRecords() {\n    return this.inMemSorter.numRecords();\n  }\n\n  public void writeSortedFileNative(boolean isLastFile, boolean tracingEnabled) throws IOException {\n    // This call performs the actual sort.\n    long arrayAddr = this.sorterArray.getBaseOffset();\n    int pos = inMemSorter.numRecords();\n    nativeLib.sortRowPartitionsNative(arrayAddr, pos, tracingEnabled);\n    ShuffleInMemorySorter.ShuffleSorterIterator sortedRecords =\n        new ShuffleInMemorySorter.ShuffleSorterIterator(pos, this.sorterArray, 0);\n\n    // If there are no sorted records, we don't need to create an empty spill file.\n    if (!sortedRecords.hasNext()) {\n      return;\n    }\n\n    final ShuffleWriteMetricsReporter writeMetricsToUse;\n\n    if (isLastFile) {\n      // We're writing the final non-spill file, so we _do_ want to count this as shuffle bytes.\n      writeMetricsToUse = writeMetrics;\n    } else {\n      // We're spilling, so bytes written should be counted towards spill rather than write.\n      // Create a dummy WriteMetrics object to absorb these metrics, since we don't want to count\n      // them towards shuffle bytes written.\n      writeMetricsToUse = new ShuffleWriteMetrics();\n    }\n\n    int currentPartition = -1;\n\n    final RowPartition rowPartition = new RowPartition(initialSize);\n\n    while (sortedRecords.hasNext()) {\n      sortedRecords.loadNext();\n      final int partition = sortedRecords.packedRecordPointer.getPartitionId();\n      assert (partition >= currentPartition);\n      if (partition != currentPartition) {\n        // Switch to the new partition\n        if (currentPartition != -1) {\n\n          if 
(partitionChecksums.length > 0) {\n            // If checksum is enabled, we need to update the checksum for the current partition.\n            setChecksum(partitionChecksums[currentPartition]);\n            setChecksumAlgo(checksumAlgorithm);\n          }\n\n          long written =\n              doSpilling(\n                  dataTypes,\n                  spillInfo.file,\n                  rowPartition,\n                  writeMetricsToUse,\n                  preferDictionaryRatio,\n                  compressionCodec,\n                  compressionLevel,\n                  tracingEnabled);\n          spillInfo.partitionLengths[currentPartition] = written;\n\n          // Store the checksum for the current partition.\n          partitionChecksums[currentPartition] = getChecksum();\n        }\n        currentPartition = partition;\n      }\n\n      final long recordPointer = sortedRecords.packedRecordPointer.getRecordPointer();\n      final long recordOffsetInPage = allocator.getOffsetInPage(recordPointer);\n      // Note that we need to skip over the record key (partition id).\n      // Note that we already use off-heap memory for serialized rows, so recordPage is always\n      // null.\n      int recordSizeInBytes = UnsafeAlignedOffset.getSize(null, recordOffsetInPage) - 4;\n      long recordReadPosition = recordOffsetInPage + uaoSize + 4; // skip over record length too\n      rowPartition.addRow(recordReadPosition, recordSizeInBytes);\n    }\n\n    if (currentPartition != -1) {\n      if (partitionChecksums.length > 0) {\n        // If checksum is enabled, we need to update the checksum for the last partition.\n        setChecksum(partitionChecksums[currentPartition]);\n        setChecksumAlgo(checksumAlgorithm);\n      }\n\n      long written =\n          doSpilling(\n              dataTypes,\n              spillInfo.file,\n              rowPartition,\n              writeMetricsToUse,\n              preferDictionaryRatio,\n              compressionCodec,\n              compressionLevel,\n              tracingEnabled);\n      spillInfo.partitionLengths[currentPartition] = written;\n\n      // Store the checksum for the last partition.\n      if (partitionChecksums.length > 0) {\n        partitionChecksums[currentPartition] = getChecksum();\n      }\n\n      synchronized (spills) {\n        spills.add(spillInfo);\n      }\n    }\n\n    if (!isLastFile) { // i.e. this is a spill file\n      // The current semantics of `shuffleRecordsWritten` seem to be that it's updated when records\n      // are written to disk, not when they enter the shuffle sorting code. DiskBlockObjectWriter\n      // relies on its `recordWritten()` method being called in order to trigger periodic updates to\n      // `shuffleBytesWritten`. If we were to remove the `recordWritten()` call and increment that\n      // counter at a higher level, then the in-progress metrics for records written and bytes\n      // written would get out of sync.\n      //\n      // When writing the last file, we pass `writeMetrics` directly to the DiskBlockObjectWriter;\n      // in all other cases, we pass in a dummy write metrics to capture metrics, then copy those\n      // metrics to the true write metrics here. 
The reason for performing this copying is so that\n      // we can avoid reporting spilled bytes as shuffle write bytes.\n      //\n      // Note that we intentionally ignore the value of `writeMetricsToUse.shuffleWriteTime()`.\n      // Consistent with ExternalSorter, we do not count this IO towards shuffle write time.\n      // SPARK-3577 tracks the spill time separately.\n\n      // This is guaranteed to be a ShuffleWriteMetrics based on the if check in the beginning\n      // of this method.\n      synchronized (writeMetrics) {\n        writeMetrics.incRecordsWritten(((ShuffleWriteMetrics) writeMetricsToUse).recordsWritten());\n        taskContext\n            .taskMetrics()\n            .incDiskBytesSpilled(((ShuffleWriteMetrics) writeMetricsToUse).bytesWritten());\n      }\n    }\n  }\n\n  public boolean hasSpaceForAnotherRecord() {\n    return inMemSorter.hasSpaceForAnotherRecord();\n  }\n\n  public void expandPointerArray(LongArray newArray) {\n    inMemSorter.expandPointerArray(newArray);\n    this.sorterArray = newArray;\n  }\n\n  public void insertRecord(Object recordBase, long recordOffset, int length, int partitionId) {\n    final Object base = currentPage.getBaseObject();\n    final long recordAddress = allocator.encodePageNumberAndOffset(currentPage, pageCursor);\n    UnsafeAlignedOffset.putSize(base, pageCursor, length);\n    pageCursor += uaoSize;\n    Platform.copyMemory(recordBase, recordOffset, base, pageCursor, length);\n    pageCursor += length;\n    inMemSorter.insertRecord(recordAddress, partitionId);\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/sql/comet/CometScalarSubquery.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet;\n\nimport java.util.HashMap;\n\nimport org.apache.spark.sql.execution.ScalarSubquery;\nimport org.apache.spark.sql.types.Decimal;\nimport org.apache.spark.unsafe.types.UTF8String;\n\nimport org.apache.comet.CometRuntimeException;\n\n/** A helper class to execute scalar subqueries and retrieve subquery results from native code. */\npublic class CometScalarSubquery {\n  /**\n   * A map from (planId, subqueryId) to the corresponding ScalarSubquery. We cannot simply use\n   * `subqueryId` because same query plan may be executed multiple times in same executor (i.e., JVM\n   * instance). For such cases, if we delete the ScalarSubquery from the map after the first\n   * execution, the second execution will fail to find the ScalarSubquery if the native code is\n   * still running.\n   */\n  private static final HashMap<Long, HashMap<Long, ScalarSubquery>> subqueryMap = new HashMap<>();\n\n  public static synchronized void setSubquery(long planId, ScalarSubquery subquery) {\n    if (!subqueryMap.containsKey(planId)) {\n      subqueryMap.put(planId, new HashMap<>());\n    }\n\n    subqueryMap.get(planId).put(subquery.exprId().id(), subquery);\n  }\n\n  public static synchronized void removeSubquery(long planId, ScalarSubquery subquery) {\n    if (subqueryMap.containsKey(planId)) {\n      subqueryMap.get(planId).remove(subquery.exprId().id());\n\n      if (subqueryMap.get(planId).isEmpty()) {\n        subqueryMap.remove(planId);\n      }\n    }\n  }\n\n  /** Retrieve the result of subquery. */\n  private static Object getSubquery(Long planId, Long id) {\n    if (!subqueryMap.containsKey(planId)) {\n      throw new CometRuntimeException(\"Subquery \" + id + \" not found for plan \" + planId + \".\");\n    }\n\n    return subqueryMap.get(planId).get(id).eval(null);\n  }\n\n  /** Check if the result of a subquery is null. Called from native code. */\n  public static boolean isNull(long planId, long id) {\n    return getSubquery(planId, id) == null;\n  }\n\n  /** Get the result of a subquery as a boolean. Called from native code. */\n  public static boolean getBoolean(long planId, long id) {\n    return (boolean) getSubquery(planId, id);\n  }\n\n  /** Get the result of a subquery as a byte. Called from native code. */\n  public static byte getByte(long planId, long id) {\n    return (byte) getSubquery(planId, id);\n  }\n\n  /** Get the result of a subquery as a short. Called from native code. */\n  public static short getShort(long planId, long id) {\n    return (short) getSubquery(planId, id);\n  }\n\n  /** Get the result of a subquery as an integer. Called from native code. 
*/\n  public static int getInt(long planId, long id) {\n    return (int) getSubquery(planId, id);\n  }\n\n  /** Get the result of a subquery as a long. Called from native code. */\n  public static long getLong(long planId, long id) {\n    return (long) getSubquery(planId, id);\n  }\n\n  /** Get the result of a subquery as a float. Called from native code. */\n  public static float getFloat(long planId, long id) {\n    return (float) getSubquery(planId, id);\n  }\n\n  /** Get the result of a subquery as a double. Called from native code. */\n  public static double getDouble(long planId, long id) {\n    return (double) getSubquery(planId, id);\n  }\n\n  /** Get the result of a subquery as a decimal represented as bytes. Called from native code. */\n  public static byte[] getDecimal(long planId, long id) {\n    return ((Decimal) getSubquery(planId, id)).toJavaBigDecimal().unscaledValue().toByteArray();\n  }\n\n  /** Get the result of a subquery as a string. Called from native code. */\n  public static String getString(long planId, long id) {\n    return ((UTF8String) getSubquery(planId, id)).toString();\n  }\n\n  /** Get the result of a subquery as a binary. Called from native code. */\n  public static byte[] getBinary(long planId, long id) {\n    return (byte[]) getSubquery(planId, id);\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/sql/comet/execution/shuffle/CometBypassMergeSortShuffleWriter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle;\n\nimport java.io.File;\nimport java.io.FileInputStream;\nimport java.io.IOException;\nimport java.io.OutputStream;\nimport java.nio.channels.FileChannel;\nimport java.util.Optional;\nimport java.util.concurrent.ExecutorService;\n\nimport scala.*;\nimport scala.collection.Iterator;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.spark.Partitioner;\nimport org.apache.spark.ShuffleDependency;\nimport org.apache.spark.SparkConf;\nimport org.apache.spark.TaskContext;\nimport org.apache.spark.internal.config.package$;\nimport org.apache.spark.memory.TaskMemoryManager;\nimport org.apache.spark.network.shuffle.checksum.ShuffleChecksumHelper;\nimport org.apache.spark.scheduler.MapStatus;\nimport org.apache.spark.scheduler.MapStatus$;\nimport org.apache.spark.serializer.SerializerInstance;\nimport org.apache.spark.shuffle.ShuffleWriteMetricsReporter;\nimport org.apache.spark.shuffle.ShuffleWriter;\nimport org.apache.spark.shuffle.api.ShuffleExecutorComponents;\nimport org.apache.spark.shuffle.api.ShuffleMapOutputWriter;\nimport org.apache.spark.shuffle.api.ShufflePartitionWriter;\nimport org.apache.spark.shuffle.api.WritableByteChannelWrapper;\nimport org.apache.spark.shuffle.comet.CometShuffleChecksumSupport;\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocator;\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocatorTrait;\nimport org.apache.spark.shuffle.sort.CometShuffleExternalSorter;\nimport org.apache.spark.sql.catalyst.expressions.UnsafeRow;\nimport org.apache.spark.sql.types.StructType;\nimport org.apache.spark.storage.BlockManager;\nimport org.apache.spark.storage.FileSegment;\nimport org.apache.spark.storage.TempShuffleBlockId;\nimport org.apache.spark.util.Utils;\n\nimport com.google.common.io.Closeables;\n\nimport org.apache.comet.CometConf$;\nimport org.apache.comet.Native;\n\n/**\n * This is based on Spark `BypassMergeSortShuffleWriter`. Instead of `DiskBlockObjectWriter`, this\n * writes Spark Internal Rows through `DiskBlockArrowIPCWriter` as Arrow IPC bytes. 
Note that Spark\n * `DiskBlockObjectWriter` is a general writer for arbitrary objects, while `CometDiskBlockWriter`\n * is specialized for Spark Internal Rows and SQL workloads.\n */\nfinal class CometBypassMergeSortShuffleWriter<K, V> extends ShuffleWriter<K, V>\n    implements CometShuffleChecksumSupport {\n\n  private static final Logger logger =\n      LoggerFactory.getLogger(CometBypassMergeSortShuffleWriter.class);\n  private final int fileBufferSize;\n  private final boolean transferToEnabled;\n  private final int numPartitions;\n  private final BlockManager blockManager;\n  private final TaskMemoryManager memoryManager;\n  private CometShuffleMemoryAllocatorTrait allocator;\n  private final TaskContext taskContext;\n  private final SerializerInstance serializer;\n\n  private final Partitioner partitioner;\n  private final ShuffleWriteMetricsReporter writeMetrics;\n  private final int shuffleId;\n  private final long mapId;\n  private final ShuffleExecutorComponents shuffleExecutorComponents;\n  private final StructType schema;\n\n  /** Array of file writers, one for each partition. */\n  private CometDiskBlockWriter[] partitionWriters;\n\n  private FileSegment[] partitionWriterSegments;\n  private MapStatus mapStatus;\n  private long[] partitionLengths;\n\n  /** Checksum calculator for each partition. Empty when shuffle checksums are disabled. */\n  private final long[] partitionChecksums;\n\n  private final boolean isAsync;\n\n  private final int asyncThreadNum;\n\n  /** Thread pool shared across all partition writers, used for asynchronous batch writes. */\n  private final ExecutorService threadPool;\n\n  /**\n   * Are we in the process of stopping? Because map tasks can call stop() with success = true and\n   * then call stop() with success = false if they get an exception, we want to make sure we don't\n   * try deleting files, etc. twice.\n   */\n  private boolean stopping = false;\n\n  private final SparkConf conf;\n\n  private boolean tracingEnabled;\n\n  CometBypassMergeSortShuffleWriter(\n      BlockManager blockManager,\n      TaskMemoryManager memoryManager,\n      TaskContext taskContext,\n      CometBypassMergeSortShuffleHandle<K, V> handle,\n      long mapId,\n      SparkConf conf,\n      ShuffleWriteMetricsReporter writeMetrics,\n      ShuffleExecutorComponents shuffleExecutorComponents) {\n    // Use getSizeAsKb (not bytes) to maintain backwards compatibility if no units are provided\n    this.fileBufferSize = (int) (long) conf.get(package$.MODULE$.SHUFFLE_FILE_BUFFER_SIZE()) * 1024;\n    this.transferToEnabled = conf.getBoolean(\"spark.file.transferTo\", true);\n    this.conf = conf;\n    this.blockManager = blockManager;\n    this.memoryManager = memoryManager;\n    this.taskContext = taskContext;\n    final ShuffleDependency<K, V, V> dep = handle.dependency();\n    this.mapId = mapId;\n    this.serializer = dep.serializer().newInstance();\n    this.shuffleId = dep.shuffleId();\n    this.partitioner = dep.partitioner();\n    this.numPartitions = partitioner.numPartitions();\n    this.writeMetrics = writeMetrics;\n    this.shuffleExecutorComponents = shuffleExecutorComponents;\n    this.schema = ((CometShuffleDependency<?, ?, ?>) dep).schema().get();\n    this.partitionChecksums = createPartitionChecksums(numPartitions, conf);\n\n    this.isAsync = (boolean) CometConf$.MODULE$.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED().get();\n    this.asyncThreadNum = (int) CometConf$.MODULE$.COMET_COLUMNAR_SHUFFLE_ASYNC_THREAD_NUM().get();\n    this.tracingEnabled = (boolean) 
CometConf$.MODULE$.COMET_TRACING_ENABLED().get();\n\n    if (isAsync) {\n      logger.info(\"Async shuffle writer enabled\");\n      this.threadPool = ShuffleThreadPool.getThreadPool();\n    } else {\n      logger.info(\"Async shuffle writer disabled\");\n      this.threadPool = null;\n    }\n  }\n\n  @Override\n  public void write(Iterator<Product2<K, V>> records) throws IOException {\n    assert (partitionWriters == null);\n    ShuffleMapOutputWriter mapOutputWriter =\n        shuffleExecutorComponents.createMapOutputWriter(shuffleId, mapId, numPartitions);\n    try {\n      if (!records.hasNext()) {\n        partitionLengths =\n            mapOutputWriter\n                .commitAllPartitions(ShuffleChecksumHelper.EMPTY_CHECKSUM_VALUE)\n                .getPartitionLengths();\n        mapStatus =\n            MapStatus$.MODULE$.apply(blockManager.shuffleServerId(), partitionLengths, mapId);\n        return;\n      }\n      final long openStartTime = System.nanoTime();\n      partitionWriters = new CometDiskBlockWriter[numPartitions];\n      partitionWriterSegments = new FileSegment[numPartitions];\n\n      final String checksumAlgorithm = getChecksumAlgorithm(conf);\n\n      allocator =\n          CometShuffleMemoryAllocator.getInstance(\n              conf,\n              memoryManager,\n              Math.min(\n                  CometShuffleExternalSorter.MAXIMUM_PAGE_SIZE_BYTES,\n                  memoryManager.pageSizeBytes()));\n\n      // Allocate the disk writers, and open the files that we'll be writing to\n      for (int i = 0; i < numPartitions; i++) {\n        final Tuple2<TempShuffleBlockId, File> tempShuffleBlockIdPlusFile =\n            blockManager.diskBlockManager().createTempShuffleBlock();\n        final File file = tempShuffleBlockIdPlusFile._2();\n        CometDiskBlockWriter writer =\n            new CometDiskBlockWriter(\n                file,\n                allocator,\n                taskContext,\n                serializer,\n                schema,\n                writeMetrics,\n                conf,\n                isAsync,\n                asyncThreadNum,\n                threadPool,\n                tracingEnabled);\n        if (partitionChecksums.length > 0) {\n          writer.setChecksum(partitionChecksums[i]);\n          writer.setChecksumAlgo(checksumAlgorithm);\n        }\n        partitionWriters[i] = writer;\n      }\n      // Creating the file to write to and creating a disk writer both involve interacting with\n      // the disk, and can take a long time in aggregate when we open many files, so should be\n      // included in the shuffle write time.\n      writeMetrics.incWriteTime(System.nanoTime() - openStartTime);\n\n      long outputRows = 0;\n\n      while (records.hasNext()) {\n        outputRows += 1;\n\n        final Product2<K, V> record = records.next();\n        final K key = record._1();\n        // Safety: `CometBypassMergeSortShuffleWriter` is only used when dealing with Comet shuffle\n        // dependencies, whose records are always `UnsafeRow`s.\n        final int partitionId = partitioner.getPartition(key);\n        partitionWriters[partitionId].insertRow((UnsafeRow) record._2(), partitionId);\n      }\n\n      Native _native = new Native();\n      String shuffleMemKey = \"thread_\" + _native.getRustThreadId() + \"_comet_jvm_shuffle\";\n      if (tracingEnabled) {\n        _native.logMemoryUsage(shuffleMemKey, allocator.getUsed());\n      }\n\n      long spillRecords = 0;\n\n      for (int i = 0; i 
< numPartitions; i++) {\n        CometDiskBlockWriter writer = partitionWriters[i];\n        partitionWriterSegments[i] = writer.close();\n\n        spillRecords += writer.getOutputRecords();\n      }\n\n      if (tracingEnabled) {\n        _native.logMemoryUsage(shuffleMemKey, allocator.getUsed());\n      }\n\n      if (outputRows != spillRecords) {\n        throw new RuntimeException(\n            \"outputRows(\"\n                + outputRows\n                + \") != spillRecords(\"\n                + spillRecords\n                + \"). Please file a bug report.\");\n      }\n\n      // TODO: We probably can move checksum generation here when concatenating partition files\n      partitionLengths = writePartitionedData(mapOutputWriter);\n      mapStatus = MapStatus$.MODULE$.apply(blockManager.shuffleServerId(), partitionLengths, mapId);\n    } catch (Exception e) {\n      try {\n        mapOutputWriter.abort(e);\n      } catch (Exception e2) {\n        logger.error(\"Failed to abort the writer after failing to write map output.\", e2);\n        e.addSuppressed(e2);\n      }\n      throw e;\n    }\n  }\n\n  @Override\n  public long[] getPartitionLengths() {\n    return partitionLengths;\n  }\n\n  /**\n   * Concatenate all of the per-partition files into a single combined file.\n   *\n   * @return array of lengths, in bytes, of each partition of the file (used by map output tracker).\n   */\n  private long[] writePartitionedData(ShuffleMapOutputWriter mapOutputWriter) throws IOException {\n    // Track location of the partition starts in the output file\n    if (partitionWriters != null) {\n      final long writeStartTime = System.nanoTime();\n      final boolean encryptionEnabled = blockManager.serializerManager().encryptionEnabled();\n\n      // Gets computed checksums for each partition\n      for (int i = 0; i < partitionChecksums.length; i++) {\n        partitionChecksums[i] = partitionWriters[i].getChecksum();\n      }\n\n      try {\n        for (int i = 0; i < numPartitions; i++) {\n          final File file = partitionWriterSegments[i].file();\n          ShufflePartitionWriter writer = mapOutputWriter.getPartitionWriter(i);\n\n          if (file.exists()) {\n            if (transferToEnabled && !encryptionEnabled) {\n              // Using WritableByteChannelWrapper to make resource closing consistent between\n              // this implementation and UnsafeShuffleWriter.\n              Optional<WritableByteChannelWrapper> maybeOutputChannel = writer.openChannelWrapper();\n              if (maybeOutputChannel.isPresent()) {\n                writePartitionedDataWithChannel(file, maybeOutputChannel.get());\n              } else {\n                writePartitionedDataWithStream(file, writer);\n              }\n            } else {\n              writePartitionedDataWithStream(file, writer);\n            }\n            if (!file.delete()) {\n              logger.error(\"Unable to delete file for partition {}\", i);\n            }\n          }\n        }\n      } finally {\n        writeMetrics.incWriteTime(System.nanoTime() - writeStartTime);\n      }\n      partitionWriters = null;\n    }\n\n    return mapOutputWriter.commitAllPartitions(partitionChecksums).getPartitionLengths();\n  }\n\n  private void writePartitionedDataWithChannel(File file, WritableByteChannelWrapper outputChannel)\n      throws IOException {\n    boolean copyThrewException = true;\n    try {\n      FileInputStream in = new FileInputStream(file);\n      try (FileChannel inputChannel = in.getChannel()) {\n        
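// Copy the whole per-partition temporary file into the shuffle output channel in one NIO call.\n        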
Utils.copyFileStreamNIO(inputChannel, outputChannel.channel(), 0L, inputChannel.size());\n        copyThrewException = false;\n      } finally {\n        Closeables.close(in, copyThrewException);\n      }\n    } finally {\n      Closeables.close(outputChannel, copyThrewException);\n    }\n  }\n\n  private void writePartitionedDataWithStream(File file, ShufflePartitionWriter writer)\n      throws IOException {\n    boolean copyThrewException = true;\n    FileInputStream in = new FileInputStream(file);\n    OutputStream outputStream;\n    try {\n      outputStream = blockManager.serializerManager().wrapForEncryption(writer.openStream());\n\n      try {\n        Utils.copyStream(in, outputStream, false, false);\n        copyThrewException = false;\n      } finally {\n        Closeables.close(outputStream, copyThrewException);\n      }\n    } finally {\n      Closeables.close(in, copyThrewException);\n    }\n  }\n\n  @Override\n  public Option<MapStatus> stop(boolean success) {\n    if (stopping) {\n      return None$.empty();\n    } else {\n      stopping = true;\n\n      if (success) {\n        if (mapStatus == null) {\n          throw new IllegalStateException(\"Cannot call stop(true) without having called write()\");\n        }\n        return Option.apply(mapStatus);\n      } else {\n        // The map task failed, so delete our output data.\n        if (partitionWriters != null) {\n          try {\n            for (CometDiskBlockWriter writer : partitionWriters) {\n              writer.freeMemory();\n\n              File file = writer.getFile();\n              if (!file.delete()) {\n                logger.error(\"Error while deleting file {}\", file.getAbsolutePath());\n              }\n            }\n          } finally {\n            partitionWriters = null;\n          }\n        }\n        return None$.empty();\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/sql/comet/execution/shuffle/CometDiskBlockWriter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle;\n\nimport java.io.*;\nimport java.util.Collections;\nimport java.util.Comparator;\nimport java.util.LinkedList;\nimport java.util.concurrent.ConcurrentLinkedQueue;\nimport java.util.concurrent.ExecutorService;\nimport java.util.concurrent.Future;\n\nimport scala.reflect.ClassTag;\nimport scala.reflect.ClassTag$;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.spark.SparkConf;\nimport org.apache.spark.TaskContext;\nimport org.apache.spark.executor.ShuffleWriteMetrics;\nimport org.apache.spark.serializer.SerializationStream;\nimport org.apache.spark.serializer.SerializerInstance;\nimport org.apache.spark.shuffle.ShuffleWriteMetricsReporter;\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocatorTrait;\nimport org.apache.spark.shuffle.sort.RowPartition;\nimport org.apache.spark.sql.catalyst.expressions.UnsafeRow;\nimport org.apache.spark.sql.types.StructType;\nimport org.apache.spark.storage.FileSegment;\nimport org.apache.spark.unsafe.Platform;\nimport org.apache.spark.unsafe.UnsafeAlignedOffset;\n\nimport com.google.common.annotations.VisibleForTesting;\n\nimport org.apache.comet.CometConf$;\nimport org.apache.comet.Native;\n\n/**\n * This class has similar role of Spark `DiskBlockObjectWriter` class which is used to write shuffle\n * data to disk. For Comet, this is specialized to shuffle unsafe rows into disk in Arrow IPC format\n * using native code. Spark `DiskBlockObjectWriter` is not a `MemoryConsumer` as it can simply\n * stream the data to disk. However, for Comet, we need to buffer rows in memory page and then write\n * them to disk as batches in Arrow IPC format. So, we need to extend `MemoryConsumer` to be able to\n * spill the buffered rows to disk when memory pressure is high.\n *\n * <p>Similar to `CometShuffleExternalSorter`, this class also provides asynchronous spill\n * mechanism. But different from `CometShuffleExternalSorter`, as a writer class for Spark\n * hash-based shuffle, it writes all the rows for a partition into a single file, instead of each\n * file for each spill.\n */\npublic final class CometDiskBlockWriter {\n  private static final Logger logger = LoggerFactory.getLogger(CometDiskBlockWriter.class);\n  private static final ClassTag<Object> OBJECT_CLASS_TAG = ClassTag$.MODULE$.Object();\n\n  /** List of all `NativeDiskBlockArrowIPCWriter`s of same shuffle task. */\n  private static final LinkedList<CometDiskBlockWriter> currentWriters = new LinkedList<>();\n\n  /** Queue of pending asynchronous spill tasks. 
*/\n  private ConcurrentLinkedQueue<Future<Void>> asyncSpillTasks = new ConcurrentLinkedQueue<>();\n\n  /** List of `ArrowIPCWriter`s which are spilling. */\n  private final LinkedList<ArrowIPCWriter> spillingWriters = new LinkedList<>();\n\n  private final TaskContext taskContext;\n\n  @VisibleForTesting static final int DEFAULT_INITIAL_SER_BUFFER_SIZE = 1024 * 1024;\n\n  // Copied from Spark `org.apache.spark.shuffle.sort.PackedRecordPointer.MAXIMUM_PAGE_SIZE_BYTES`\n  static final int MAXIMUM_PAGE_SIZE_BYTES = 1 << 27;\n\n  /** The Comet allocator used to allocate pages. */\n  private final CometShuffleMemoryAllocatorTrait allocator;\n\n  /** The serializer used to write rows to memory page. */\n  private final SerializerInstance serializer;\n\n  /** The native library used to write rows to disk. */\n  private final Native nativeLib;\n\n  private final int uaoSize = UnsafeAlignedOffset.getUaoSize();\n  private final StructType schema;\n  private final ShuffleWriteMetricsReporter writeMetrics;\n  private final File file;\n  private long totalWritten = 0L;\n  private boolean initialized = false;\n  private final int columnarBatchSize;\n  private final String compressionCodec;\n  private final int compressionLevel;\n  private final boolean isAsync;\n  private final boolean tracingEnabled;\n  private final int asyncThreadNum;\n  private final ExecutorService threadPool;\n  private final int numElementsForSpillThreshold;\n\n  private final double preferDictionaryRatio;\n\n  /** The current active writer. All incoming rows will be inserted into it. */\n  private ArrowIPCWriter activeWriter;\n\n  /** A flag indicating whether we are in the process of spilling. */\n  private boolean spilling = false;\n\n  /** The buffer used to store serialized row. */\n  private ExposedByteArrayOutputStream serBuffer;\n\n  private SerializationStream serOutputStream;\n\n  private long outputRecords = 0;\n\n  private long insertRecords = 0;\n\n  CometDiskBlockWriter(\n      File file,\n      CometShuffleMemoryAllocatorTrait allocator,\n      TaskContext taskContext,\n      SerializerInstance serializer,\n      StructType schema,\n      ShuffleWriteMetricsReporter writeMetrics,\n      SparkConf conf,\n      boolean isAsync,\n      int asyncThreadNum,\n      ExecutorService threadPool,\n      boolean tracingEnabled) {\n    this.nativeLib = new Native();\n    this.allocator = allocator;\n    this.taskContext = taskContext;\n    this.serializer = serializer;\n    this.schema = schema;\n    this.writeMetrics = writeMetrics;\n    this.file = file;\n    this.isAsync = isAsync;\n    this.tracingEnabled = tracingEnabled;\n    this.asyncThreadNum = asyncThreadNum;\n    this.threadPool = threadPool;\n\n    this.columnarBatchSize = (int) CometConf$.MODULE$.COMET_COLUMNAR_SHUFFLE_BATCH_SIZE().get();\n    this.compressionCodec = CometConf$.MODULE$.COMET_EXEC_SHUFFLE_COMPRESSION_CODEC().get();\n    this.compressionLevel =\n        (int) CometConf$.MODULE$.COMET_EXEC_SHUFFLE_COMPRESSION_ZSTD_LEVEL().get();\n\n    this.numElementsForSpillThreshold =\n        (int) CometConf$.MODULE$.COMET_COLUMNAR_SHUFFLE_SPILL_THRESHOLD().get();\n\n    this.preferDictionaryRatio =\n        (double) CometConf$.MODULE$.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO().get();\n\n    this.activeWriter = new ArrowIPCWriter();\n\n    synchronized (currentWriters) {\n      currentWriters.add(this);\n    }\n  }\n\n  public void setChecksumAlgo(String checksumAlgo) {\n    this.activeWriter.setChecksumAlgo(checksumAlgo);\n  }\n\n  public void setChecksum(long 
checksum) {\n    this.activeWriter.setChecksum(checksum);\n  }\n\n  public long getChecksum() {\n    return this.activeWriter.getChecksum();\n  }\n\n  private void doSpill(boolean forceSync) throws IOException {\n    // We only allow spill requests from `ArrowIPCWriter`.\n    if (spilling || activeWriter.numRecords() == 0) {\n      return;\n    }\n\n    // Enter the spilling state first, so it cannot recursively trigger another spill on itself.\n    spilling = true;\n\n    if (isAsync && !forceSync) {\n      // Although we can continue to submit spill tasks to the thread pool, buffering more rows in\n      // memory pages will increase memory usage. So, we need to wait for at least one spilling\n      // task to finish.\n      while (asyncSpillTasks.size() == asyncThreadNum) {\n        for (Future<Void> task : asyncSpillTasks) {\n          if (task.isDone()) {\n            asyncSpillTasks.remove(task);\n            break;\n          }\n        }\n      }\n\n      final ArrowIPCWriter spillingWriter = activeWriter;\n      activeWriter = new ArrowIPCWriter();\n\n      spillingWriters.add(spillingWriter);\n\n      asyncSpillTasks.add(\n          threadPool.submit(\n              new Runnable() {\n                @Override\n                public void run() {\n                  try {\n                    long written = spillingWriter.doSpilling(false);\n                    totalWritten += written;\n                  } catch (IOException e) {\n                    throw new RuntimeException(e);\n                  } finally {\n                    spillingWriter.freeMemory();\n                    spillingWriters.remove(spillingWriter);\n                  }\n                }\n              },\n              null));\n\n    } else {\n      // Spill in a synchronous way.\n      // This spill could be triggered by another thread (i.e., another `CometDiskBlockWriter`),\n      // so we need to synchronize it.\n      synchronized (CometDiskBlockWriter.this) {\n        totalWritten += activeWriter.doSpilling(false);\n        activeWriter.freeMemory();\n      }\n    }\n\n    spilling = false;\n  }\n\n  public long getOutputRecords() {\n    return outputRecords;\n  }\n\n  /** Serializes the input row and inserts it into the currently allocated page. 
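Spills the active writer\n   * first if it has buffered enough rows to reach the batch size or the spill threshold. 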
*/\n  public void insertRow(UnsafeRow row, int partitionId) throws IOException {\n    insertRecords++;\n\n    if (!initialized) {\n      serBuffer = new ExposedByteArrayOutputStream(DEFAULT_INITIAL_SER_BUFFER_SIZE);\n      serOutputStream = serializer.serializeStream(serBuffer);\n\n      initialized = true;\n    }\n\n    serBuffer.reset();\n    serOutputStream.writeKey(partitionId, OBJECT_CLASS_TAG);\n    serOutputStream.writeValue(row, OBJECT_CLASS_TAG);\n    serOutputStream.flush();\n\n    final int serializedRecordSize = serBuffer.size();\n    assert (serializedRecordSize > 0);\n\n    // While proceeding with possible spilling and inserting the record, we need to synchronize\n    // it, because other threads may be spilling this writer at the same time.\n    synchronized (CometDiskBlockWriter.this) {\n      if (activeWriter.numRecords() >= numElementsForSpillThreshold\n          || activeWriter.numRecords() >= columnarBatchSize) {\n        int threshold = Math.min(numElementsForSpillThreshold, columnarBatchSize);\n        logger.info(\n            \"Spilling data because number of spilledRecords crossed the threshold \" + threshold);\n        // Spill the current writer\n        doSpill(false);\n        if (activeWriter.numRecords() != 0) {\n          throw new RuntimeException(\n              \"activeWriter.numRecords()(\" + activeWriter.numRecords() + \") != 0\");\n        }\n      }\n\n      // Need 4 or 8 bytes to store the record length.\n      final int required = serializedRecordSize + uaoSize;\n      // Acquire enough memory to store the record.\n      // If we cannot acquire enough memory, we will spill the current writers.\n      if (!activeWriter.acquireNewPageIfNecessary(required)) {\n        // Spilling has happened; initialize a new memory page for the new writer.\n        activeWriter.initialCurrentPage(required);\n      }\n      activeWriter.insertRecord(\n          serBuffer.getBuf(), Platform.BYTE_ARRAY_OFFSET, serializedRecordSize);\n    }\n  }\n\n  FileSegment close() throws IOException {\n    if (isAsync) {\n      for (Future<Void> task : asyncSpillTasks) {\n        try {\n          task.get();\n        } catch (Exception e) {\n          throw new RuntimeException(e);\n        }\n      }\n    }\n\n    totalWritten += activeWriter.doSpilling(true);\n\n    if (outputRecords != insertRecords) {\n      throw new RuntimeException(\n          \"outputRecords(\"\n              + outputRecords\n              + \") != insertRecords(\"\n              + insertRecords\n              + \"). Please file a bug report.\");\n    }\n\n    serBuffer = null;\n    serOutputStream = null;\n\n    activeWriter.freeMemory();\n\n    synchronized (currentWriters) {\n      currentWriters.remove(this);\n    }\n\n    return new FileSegment(file, 0, totalWritten);\n  }\n\n  File getFile() {\n    return file;\n  }\n\n  /** Returns the memory usage of the active writer. 
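Memory held by writers that are\n   * currently spilling asynchronously is not included. 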
*/\n  long getActiveMemoryUsage() {\n    return activeWriter.getMemoryUsage();\n  }\n\n  void freeMemory() {\n    for (ArrowIPCWriter writer : spillingWriters) {\n      writer.freeMemory();\n    }\n    activeWriter.freeMemory();\n  }\n\n  class ArrowIPCWriter extends SpillWriter {\n    /**\n     * The list of addresses and sizes of rows buffered in memory pages, waiting for\n     * spilling/shuffle.\n     */\n    private final RowPartition rowPartition;\n\n    ArrowIPCWriter() {\n      rowPartition = new RowPartition(columnarBatchSize);\n\n      this.allocatedPages = new LinkedList<>();\n      this.allocator = CometDiskBlockWriter.this.allocator;\n\n      this.nativeLib = CometDiskBlockWriter.this.nativeLib;\n      this.dataTypes = serializeSchema(schema);\n    }\n\n    /** Inserts a record into the currently allocated page. */\n    void insertRecord(Object recordBase, long recordOffset, int length) {\n      // This `ArrowIPCWriter` could be spilled by other threads; callers synchronize on the\n      // enclosing `CometDiskBlockWriter` before calling this.\n      final Object base = currentPage.getBaseObject();\n\n      // Add row addresses\n      final long recordAddress = allocator.encodePageNumberAndOffset(currentPage, pageCursor);\n      rowPartition.addRow(allocator.getOffsetInPage(recordAddress) + uaoSize + 4, length - 4);\n\n      // Write the record (row) size\n      UnsafeAlignedOffset.putSize(base, pageCursor, length);\n      pageCursor += uaoSize;\n      // Copy the record (row) data from serialized buffer to page\n      Platform.copyMemory(recordBase, recordOffset, base, pageCursor, length);\n      pageCursor += length;\n    }\n\n    int numRecords() {\n      return rowPartition.getNumRows();\n    }\n\n    /** Spills the current in-memory records of this `ArrowIPCWriter` to disk. */\n    long doSpilling(boolean isLast) throws IOException {\n      final ShuffleWriteMetricsReporter writeMetricsToUse;\n\n      if (isLast) {\n        // We're writing the final non-spill file, so we _do_ want to count this as shuffle bytes.\n        writeMetricsToUse = writeMetrics;\n      } else {\n        // We're spilling, so bytes written should be counted towards spill rather than write.\n        // Create a dummy WriteMetrics object to absorb these metrics, since we don't want to count\n        // them towards shuffle bytes written.\n        writeMetricsToUse = new ShuffleWriteMetrics();\n      }\n\n      final long written;\n\n      // All threads are writing to the same file, so we need to synchronize it.\n      synchronized (file) {\n        outputRecords += rowPartition.getNumRows();\n        written =\n            doSpilling(\n                dataTypes,\n                file,\n                rowPartition,\n                writeMetricsToUse,\n                preferDictionaryRatio,\n                compressionCodec,\n                compressionLevel,\n                tracingEnabled);\n      }\n\n      // Update metrics\n      // Other threads may be updating the metrics at the same time, so we need to synchronize it.\n      synchronized (writeMetrics) {\n        if (!isLast) {\n          writeMetrics.incRecordsWritten(\n              ((ShuffleWriteMetrics) writeMetricsToUse).recordsWritten());\n          taskContext\n              .taskMetrics()\n              .incDiskBytesSpilled(((ShuffleWriteMetrics) writeMetricsToUse).bytesWritten());\n        }\n      }\n\n      return written;\n    }\n\n    /**\n     * Spills the current in-memory records of all `CometDiskBlockWriter`s until required\n     * memory is acquired.\n     */\n    @Override\n    
protected void spill(int required) throws IOException {\n      // Cannot allocate enough memory, spill and try again\n      synchronized (currentWriters) {\n        // Spill from the largest writer first to maximize the amount of memory we can\n        // acquire\n        Collections.sort(\n            currentWriters,\n            new Comparator<CometDiskBlockWriter>() {\n              @Override\n              public int compare(CometDiskBlockWriter lhs, CometDiskBlockWriter rhs) {\n                long lhsMemoryUsage = lhs.getActiveMemoryUsage();\n                long rhsMemoryUsage = rhs.getActiveMemoryUsage();\n                return Long.compare(rhsMemoryUsage, lhsMemoryUsage);\n              }\n            });\n\n        long totalFreed = 0;\n        for (CometDiskBlockWriter writer : currentWriters) {\n          // Force the writer to spill in a synchronous way; otherwise, we may not be able to\n          // acquire enough memory.\n          long used = writer.getActiveMemoryUsage();\n\n          writer.doSpill(true);\n\n          totalFreed += used;\n\n          if (totalFreed >= required) {\n            break;\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/sql/comet/execution/shuffle/CometUnsafeShuffleWriter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle;\n\nimport java.io.FileInputStream;\nimport java.io.IOException;\nimport java.io.InputStream;\nimport java.io.OutputStream;\nimport java.nio.channels.Channels;\nimport java.nio.channels.FileChannel;\nimport java.nio.channels.WritableByteChannel;\nimport java.util.Arrays;\nimport java.util.Iterator;\nimport java.util.Optional;\nimport javax.annotation.Nullable;\n\nimport scala.Option;\nimport scala.Product2;\nimport scala.reflect.ClassTag;\nimport scala.reflect.ClassTag$;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.spark.Partitioner;\nimport org.apache.spark.ShuffleDependency;\nimport org.apache.spark.SparkConf;\nimport org.apache.spark.TaskContext;\nimport org.apache.spark.annotation.Private;\nimport org.apache.spark.internal.config.package$;\nimport org.apache.spark.io.NioBufferedFileInputStream;\nimport org.apache.spark.memory.TaskMemoryManager;\nimport org.apache.spark.network.shuffle.checksum.ShuffleChecksumHelper;\nimport org.apache.spark.network.util.LimitedInputStream;\nimport org.apache.spark.scheduler.MapStatus;\nimport org.apache.spark.scheduler.MapStatus$;\nimport org.apache.spark.serializer.SerializationStream;\nimport org.apache.spark.serializer.SerializerInstance;\nimport org.apache.spark.shuffle.BaseShuffleHandle;\nimport org.apache.spark.shuffle.ShuffleWriteMetricsReporter;\nimport org.apache.spark.shuffle.ShuffleWriter;\nimport org.apache.spark.shuffle.api.ShuffleExecutorComponents;\nimport org.apache.spark.shuffle.api.ShuffleMapOutputWriter;\nimport org.apache.spark.shuffle.api.ShufflePartitionWriter;\nimport org.apache.spark.shuffle.api.SingleSpillShuffleMapOutputWriter;\nimport org.apache.spark.shuffle.api.WritableByteChannelWrapper;\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocator;\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocatorTrait;\nimport org.apache.spark.shuffle.sort.CometShuffleExternalSorter;\nimport org.apache.spark.shuffle.sort.SortShuffleManager;\nimport org.apache.spark.shuffle.sort.UnsafeShuffleWriter;\nimport org.apache.spark.sql.catalyst.expressions.UnsafeRow;\nimport org.apache.spark.sql.types.StructType;\nimport org.apache.spark.storage.BlockManager;\nimport org.apache.spark.storage.TimeTrackingOutputStream;\nimport org.apache.spark.unsafe.Platform;\nimport org.apache.spark.util.Utils;\n\nimport com.google.common.annotations.VisibleForTesting;\nimport com.google.common.io.ByteStreams;\nimport com.google.common.io.Closeables;\n\nimport org.apache.comet.CometConf;\nimport org.apache.comet.Native;\n\nimport static scala.jdk.javaapi.CollectionConverters.*;\n\n/**\n * This is based on Spark {@link 
UnsafeShuffleWriter}; it writes shuffle rows in\n * Arrow format after sorting them by partition ID.\n *\n * <p>Incoming rows are inserted into {@link CometShuffleExternalSorter},\n * which buffers rows in memory (`ShuffleInMemorySorter`). When the memory buffer is full, it will\n * sort rows based on the partition ID and write them into a spill file. In Spark, sorting is\n * performed by `ShuffleInMemorySorter`. In Comet, if both off-heap memory and radix sort are\n * enabled, Comet will sort rows through native code. Sorting is based on a long array containing\n * compacted partition IDs and row pointers. The row pointers are the addresses of the rows in the\n * off-heap memory. After sorting, Comet will write the sorted rows into a spill file through native\n * code (off-heap memory must be enabled).\n *\n * <p>In native code, Comet converts UnsafeRows to Arrow arrays and writes arrays into the spill\n * file with Arrow IPC format.\n */\n@Private\npublic class CometUnsafeShuffleWriter<K, V> extends ShuffleWriter<K, V> {\n\n  @VisibleForTesting static final int DEFAULT_INITIAL_SER_BUFFER_SIZE = 1024 * 1024;\n  private static final Logger logger = LoggerFactory.getLogger(CometUnsafeShuffleWriter.class);\n  private static final ClassTag<Object> OBJECT_CLASS_TAG = ClassTag$.MODULE$.Object();\n  private final BlockManager blockManager;\n  private final TaskMemoryManager memoryManager;\n  private final SerializerInstance serializer;\n  private final Partitioner partitioner;\n  private final ShuffleWriteMetricsReporter writeMetrics;\n  private final ShuffleExecutorComponents shuffleExecutorComponents;\n  private final int shuffleId;\n  private final long mapId;\n  private final TaskContext taskContext;\n  private final SparkConf sparkConf;\n  private final boolean transferToEnabled;\n  private final int initialSortBufferSize;\n  private final int inputBufferSizeInBytes;\n  private final StructType schema;\n\n  @Nullable private MapStatus mapStatus;\n  @Nullable private CometShuffleExternalSorter sorter;\n\n  @Nullable private long[] partitionLengths;\n\n  private long peakMemoryUsedBytes = 0;\n  private ExposedByteArrayOutputStream serBuffer;\n  private SerializationStream serOutputStream;\n  private Native nativeLib = new Native();\n  private CometShuffleMemoryAllocatorTrait allocator;\n  private boolean tracingEnabled;\n\n  /**\n   * Are we in the process of stopping? 
Because map tasks can call stop() with success = true and\n   * then call stop() with success = false if they get an exception, we want to make sure we don't\n   * try deleting files, etc twice.\n   */\n  private boolean stopping = false;\n\n  public CometUnsafeShuffleWriter(\n      BlockManager blockManager,\n      TaskMemoryManager memoryManager,\n      BaseShuffleHandle<K, V, V> handle,\n      long mapId,\n      TaskContext taskContext,\n      SparkConf sparkConf,\n      ShuffleWriteMetricsReporter writeMetrics,\n      ShuffleExecutorComponents shuffleExecutorComponents) {\n    final int numPartitions = handle.dependency().partitioner().numPartitions();\n    if (numPartitions > SortShuffleManager.MAX_SHUFFLE_OUTPUT_PARTITIONS_FOR_SERIALIZED_MODE()) {\n      throw new IllegalArgumentException(\n          \"CometUnsafeShuffleWriter can only be used for shuffles with at most \"\n              + SortShuffleManager.MAX_SHUFFLE_OUTPUT_PARTITIONS_FOR_SERIALIZED_MODE()\n              + \" reduce partitions\");\n    }\n    this.blockManager = blockManager;\n    this.memoryManager = memoryManager;\n    this.mapId = mapId;\n    final ShuffleDependency<K, V, V> dep = handle.dependency();\n    this.shuffleId = dep.shuffleId();\n    this.serializer = dep.serializer().newInstance();\n    this.partitioner = dep.partitioner();\n    this.schema = (StructType) ((CometShuffleDependency) dep).schema().get();\n    this.writeMetrics = writeMetrics;\n    this.shuffleExecutorComponents = shuffleExecutorComponents;\n    this.taskContext = taskContext;\n    this.sparkConf = sparkConf;\n    this.transferToEnabled = sparkConf.getBoolean(\"spark.file.transferTo\", true);\n    this.initialSortBufferSize =\n        (int) (long) sparkConf.get(package$.MODULE$.SHUFFLE_SORT_INIT_BUFFER_SIZE());\n    this.inputBufferSizeInBytes =\n        (int) (long) sparkConf.get(package$.MODULE$.SHUFFLE_FILE_BUFFER_SIZE()) * 1024;\n    this.tracingEnabled = (boolean) CometConf.COMET_TRACING_ENABLED().get();\n    open();\n  }\n\n  private static OutputStream openStreamUnchecked(ShufflePartitionWriter writer) {\n    try {\n      return writer.openStream();\n    } catch (IOException e) {\n      throw new RuntimeException(e);\n    }\n  }\n\n  private void updatePeakMemoryUsed() {\n    // sorter can be null if this writer is closed\n    if (sorter != null) {\n      long mem = sorter.getPeakMemoryUsedBytes();\n      if (mem > peakMemoryUsedBytes) {\n        peakMemoryUsedBytes = mem;\n      }\n    }\n  }\n\n  /** Return the peak memory used so far, in bytes. */\n  public long getPeakMemoryUsedBytes() {\n    updatePeakMemoryUsed();\n    return peakMemoryUsedBytes;\n  }\n\n  /** This convenience method should only be called in test code. 
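It adapts the Java iterator\n   * to the Scala iterator expected by the main {@code write} path. 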
*/\n  @VisibleForTesting\n  public void write(Iterator<Product2<K, V>> records) throws IOException {\n    write(asScala(records));\n  }\n\n  @Override\n  public void write(scala.collection.Iterator<Product2<K, V>> records) throws IOException {\n    // Keep track of success so we know if we encountered an exception\n    // We do this rather than a standard try/catch/re-throw to handle\n    // generic throwables.\n    boolean success = false;\n    if (tracingEnabled) {\n      nativeLib.traceBegin(\"comet_unsafe_shuffle_writer\");\n    }\n    String offheapMemKey = \"thread_\" + nativeLib.getRustThreadId() + \"_comet_jvm_shuffle\";\n    try {\n      while (records.hasNext()) {\n        insertRecordIntoSorter(records.next());\n      }\n      if (tracingEnabled) {\n        nativeLib.logMemoryUsage(offheapMemKey, this.allocator.getUsed());\n      }\n      closeAndWriteOutput();\n      success = true;\n    } finally {\n      if (tracingEnabled) {\n        nativeLib.traceEnd(\"comet_unsafe_shuffle_writer\");\n      }\n      if (sorter != null) {\n        try {\n          sorter.cleanupResources();\n        } catch (Exception e) {\n          // Only throw this error if we won't be masking another\n          // error.\n          if (success) {\n            throw e;\n          } else {\n            logger.error(\n                \"In addition to a failure during writing, we failed during \" + \"cleanup.\", e);\n          }\n        }\n      }\n      if (tracingEnabled) {\n        nativeLib.logMemoryUsage(offheapMemKey, this.allocator.getUsed());\n      }\n    }\n  }\n\n  private void open() {\n    assert (allocator == null);\n    assert (sorter == null);\n    allocator =\n        CometShuffleMemoryAllocator.getInstance(\n            sparkConf,\n            memoryManager,\n            Math.min(\n                CometShuffleExternalSorter.MAXIMUM_PAGE_SIZE_BYTES, memoryManager.pageSizeBytes()));\n    sorter =\n        CometShuffleExternalSorter.create(\n            allocator,\n            blockManager,\n            taskContext,\n            initialSortBufferSize,\n            partitioner.numPartitions(),\n            sparkConf,\n            writeMetrics,\n            schema);\n    serBuffer = new ExposedByteArrayOutputStream(DEFAULT_INITIAL_SER_BUFFER_SIZE);\n    serOutputStream = serializer.serializeStream(serBuffer);\n  }\n\n  @VisibleForTesting\n  void closeAndWriteOutput() throws IOException {\n    assert (sorter != null);\n    updatePeakMemoryUsed();\n    serBuffer = null;\n    serOutputStream = null;\n    final SpillInfo[] spills = sorter.closeAndGetSpills();\n    try {\n      partitionLengths = mergeSpills(spills);\n    } finally {\n      sorter = null;\n      for (SpillInfo spill : spills) {\n        if (spill.file.exists() && !spill.file.delete()) {\n          logger.error(\"Error while deleting spill file {}\", spill.file.getPath());\n        }\n      }\n    }\n    mapStatus = MapStatus$.MODULE$.apply(blockManager.shuffleServerId(), partitionLengths, mapId);\n  }\n\n  @VisibleForTesting\n  /** Serializes records and inserts into external sorter */\n  void insertRecordIntoSorter(Product2<K, V> record) throws IOException {\n    assert (sorter != null);\n    final K key = record._1();\n    final int partitionId = partitioner.getPartition(key);\n    serBuffer.reset();\n    serOutputStream.writeKey(key, OBJECT_CLASS_TAG);\n    serOutputStream.writeValue((UnsafeRow) record._2(), OBJECT_CLASS_TAG);\n    serOutputStream.flush();\n\n    final int serializedRecordSize = serBuffer.size();\n    assert 
(serializedRecordSize > 0);\n\n    sorter.insertRecord(\n        serBuffer.getBuf(), Platform.BYTE_ARRAY_OFFSET, serializedRecordSize, partitionId);\n  }\n\n  @VisibleForTesting\n  void forceSorterToSpill() throws IOException {\n    assert (sorter != null);\n    sorter.spill();\n  }\n\n  /**\n   * Merge zero or more spill files together, choosing the fastest merging strategy based on the\n   * number of spills and the IO compression codec.\n   *\n   * @return the partition lengths in the merged file.\n   */\n  private long[] mergeSpills(SpillInfo[] spills) throws IOException {\n    final boolean encryptionEnabled = blockManager.serializerManager().encryptionEnabled();\n\n    long[] partitionLengths;\n    if (spills.length == 0) {\n      final ShuffleMapOutputWriter mapWriter =\n          shuffleExecutorComponents.createMapOutputWriter(\n              shuffleId, mapId, partitioner.numPartitions());\n      return mapWriter\n          .commitAllPartitions(ShuffleChecksumHelper.EMPTY_CHECKSUM_VALUE)\n          .getPartitionLengths();\n    } else if (spills.length == 1) {\n      Optional<SingleSpillShuffleMapOutputWriter> maybeSingleFileWriter =\n          shuffleExecutorComponents.createSingleFileMapOutputWriter(shuffleId, mapId);\n      if (maybeSingleFileWriter.isPresent() && !encryptionEnabled) {\n        // Comet native writer doesn't perform encryption. If encryption is enabled, we should\n        // perform spill merging which will perform encryption.\n\n        // Here, we don't need to perform any metrics updates because the bytes written to this\n        // output file would have already been counted as shuffle bytes written.\n        partitionLengths = spills[0].partitionLengths;\n        logger.debug(\n            \"Merge shuffle spills for mapId {} with length {}\", mapId, partitionLengths.length);\n        maybeSingleFileWriter\n            .get()\n            .transferMapSpillFile(spills[0].file, partitionLengths, sorter.getChecksums());\n      } else {\n        partitionLengths = mergeSpillsUsingStandardWriter(spills);\n      }\n    } else {\n      partitionLengths = mergeSpillsUsingStandardWriter(spills);\n    }\n    return partitionLengths;\n  }\n\n  private long[] mergeSpillsUsingStandardWriter(SpillInfo[] spills) throws IOException {\n    long[] partitionLengths;\n    final boolean fastMergeEnabled =\n        (boolean) sparkConf.get(package$.MODULE$.SHUFFLE_UNSAFE_FAST_MERGE_ENABLE());\n    final boolean encryptionEnabled = blockManager.serializerManager().encryptionEnabled();\n    final ShuffleMapOutputWriter mapWriter =\n        shuffleExecutorComponents.createMapOutputWriter(\n            shuffleId, mapId, partitioner.numPartitions());\n    try {\n      // There are multiple spills to merge, so none of these spill files' lengths were counted\n      // towards our shuffle write count or shuffle write time. If we use the slow merge path,\n      // then the final output file's size won't necessarily be equal to the sum of the spill\n      // files' sizes. To guard against this case, we look at the output file's actual size when\n      // computing shuffle bytes written.\n      //\n      // We allow the individual merge methods to report their own IO times since different merge\n      // strategies use different IO techniques.  
We count IO during merge towards the shuffle\n      // write time, which appears to be consistent with the \"not bypassing merge-sort\" branch in\n      // ExternalSorter.\n      if (fastMergeEnabled) {\n        // The Comet native shuffle writer uses its own compression codec instead of Spark's. So,\n        // unlike Spark, where fast spill merge is only supported when compression is disabled or\n        // when using a compression codec that supports concatenation of compressed streams, we\n        // can always perform a fast spill merge that doesn't need to interpret the spilled bytes.\n        if (transferToEnabled && !encryptionEnabled) {\n          logger.debug(\"Using transferTo-based fast merge\");\n          mergeSpillsWithTransferTo(spills, mapWriter);\n        } else {\n          logger.debug(\"Using fileStream-based fast merge\");\n          mergeSpillsWithFileStream(spills, mapWriter);\n        }\n      } else {\n        logger.debug(\"Using slow merge\");\n        mergeSpillsWithFileStream(spills, mapWriter);\n      }\n      // When closing an UnsafeShuffleExternalSorter that has already spilled once but also has\n      // in-memory records, we write out the in-memory records to a file but do not count that\n      // final write as bytes spilled (instead, it's accounted as shuffle write). The merge needs\n      // to be counted as shuffle write, but this will lead to double-counting of the final\n      // SpillInfo's bytes.\n      writeMetrics.decBytesWritten(spills[spills.length - 1].file.length());\n      partitionLengths = mapWriter.commitAllPartitions(sorter.getChecksums()).getPartitionLengths();\n    } catch (Exception e) {\n      try {\n        mapWriter.abort(e);\n      } catch (Exception e2) {\n        logger.warn(\"Failed to abort writing the map output.\", e2);\n        e.addSuppressed(e2);\n      }\n      throw e;\n    }\n    return partitionLengths;\n  }\n\n  /**\n   * Merges spill files using Java FileStreams. This code path is typically slower than the\n   * NIO-based merge, {@link CometUnsafeShuffleWriter#mergeSpillsWithTransferTo(SpillInfo[],\n   * ShuffleMapOutputWriter)}, and it's mostly used in cases where the IO compression codec does not\n   * support concatenation of compressed data, when encryption is enabled, or when users have\n   * explicitly disabled use of {@code transferTo} in order to work around kernel bugs. This code\n   * path might also be faster in cases where the individual partition size in a spill is small and\n   * the {@code mergeSpillsWithTransferTo} method performs many small disk I/Os, which is\n   * inefficient. 
In those cases, using large buffers for the input and output files helps reduce the\n   * number of disk I/Os, making the file merge faster.\n   *\n   * @param spills the spills to merge.\n   * @param mapWriter the map output writer to use for output.\n   */\n  private void mergeSpillsWithFileStream(SpillInfo[] spills, ShuffleMapOutputWriter mapWriter)\n      throws IOException {\n    logger.debug(\"Merge shuffle spills with FileStream for mapId {}\", mapId);\n    final int numPartitions = partitioner.numPartitions();\n    final InputStream[] spillInputStreams = new InputStream[spills.length];\n\n    boolean threwException = true;\n    try {\n      for (int i = 0; i < spills.length; i++) {\n        spillInputStreams[i] =\n            new NioBufferedFileInputStream(spills[i].file, inputBufferSizeInBytes);\n        // Only convert the partitionLengths when debug level is enabled.\n        if (logger.isDebugEnabled()) {\n          logger.debug(\n              \"Partition lengths for mapId {} in Spill {}: {}\",\n              mapId,\n              i,\n              Arrays.toString(spills[i].partitionLengths));\n        }\n      }\n      for (int partition = 0; partition < numPartitions; partition++) {\n        boolean copyThrewException = true;\n        ShufflePartitionWriter writer = mapWriter.getPartitionWriter(partition);\n        OutputStream partitionOutput = writer.openStream();\n        try {\n          partitionOutput = new TimeTrackingOutputStream(writeMetrics, partitionOutput);\n          partitionOutput = blockManager.serializerManager().wrapForEncryption(partitionOutput);\n          for (int i = 0; i < spills.length; i++) {\n            final long partitionLengthInSpill = spills[i].partitionLengths[partition];\n\n            if (partitionLengthInSpill > 0) {\n              InputStream partitionInputStream = null;\n              boolean copySpillThrewException = true;\n              try {\n                partitionInputStream =\n                    new LimitedInputStream(spillInputStreams[i], partitionLengthInSpill, false);\n                ByteStreams.copy(partitionInputStream, partitionOutput);\n                copySpillThrewException = false;\n              } finally {\n                Closeables.close(partitionInputStream, copySpillThrewException);\n              }\n            }\n          }\n          copyThrewException = false;\n        } finally {\n          Closeables.close(partitionOutput, copyThrewException);\n        }\n        long numBytesWritten = writer.getNumBytesWritten();\n        writeMetrics.incBytesWritten(numBytesWritten);\n      }\n      threwException = false;\n    } finally {\n      // To avoid masking exceptions that caused us to prematurely enter the finally block, only\n      // throw exceptions during cleanup if threwException == false.\n      for (InputStream stream : spillInputStreams) {\n        Closeables.close(stream, threwException);\n      }\n    }\n  }\n\n  /**\n   * Merges spill files by using NIO's transferTo to concatenate spill partitions' bytes. 
This is\n   * only safe when the IO compression codec and serializer support concatenation of serialized\n   * streams.\n   *\n   * @param spills the spills to merge.\n   * @param mapWriter the map output writer to use for output.\n   */\n  private void mergeSpillsWithTransferTo(SpillInfo[] spills, ShuffleMapOutputWriter mapWriter)\n      throws IOException {\n    logger.debug(\"Merge shuffle spills with TransferTo for mapId {}\", mapId);\n    final int numPartitions = partitioner.numPartitions();\n    final FileChannel[] spillInputChannels = new FileChannel[spills.length];\n    final long[] spillInputChannelPositions = new long[spills.length];\n\n    boolean threwException = true;\n    try {\n      for (int i = 0; i < spills.length; i++) {\n        spillInputChannels[i] = new FileInputStream(spills[i].file).getChannel();\n        // Only convert the partitionLengths when debug level is enabled.\n        if (logger.isDebugEnabled()) {\n          logger.debug(\n              \"Partition lengths for mapId {} in Spill {}: {}\",\n              mapId,\n              i,\n              Arrays.toString(spills[i].partitionLengths));\n        }\n      }\n      for (int partition = 0; partition < numPartitions; partition++) {\n        boolean copyThrewException = true;\n        ShufflePartitionWriter writer = mapWriter.getPartitionWriter(partition);\n        WritableByteChannelWrapper resolvedChannel =\n            writer\n                .openChannelWrapper()\n                .orElseGet(() -> new StreamFallbackChannelWrapper(openStreamUnchecked(writer)));\n        try {\n          for (int i = 0; i < spills.length; i++) {\n            long partitionLengthInSpill = spills[i].partitionLengths[partition];\n            final FileChannel spillInputChannel = spillInputChannels[i];\n            final long writeStartTime = System.nanoTime();\n            Utils.copyFileStreamNIO(\n                spillInputChannel,\n                resolvedChannel.channel(),\n                spillInputChannelPositions[i],\n                partitionLengthInSpill);\n            copyThrewException = false;\n            spillInputChannelPositions[i] += partitionLengthInSpill;\n            writeMetrics.incWriteTime(System.nanoTime() - writeStartTime);\n          }\n        } finally {\n          Closeables.close(resolvedChannel, copyThrewException);\n        }\n        long numBytes = writer.getNumBytesWritten();\n        writeMetrics.incBytesWritten(numBytes);\n      }\n      threwException = false;\n    } finally {\n      // To avoid masking exceptions that caused us to prematurely enter the finally block, only\n      // throw exceptions during cleanup if threwException == false.\n      for (int i = 0; i < spills.length; i++) {\n        assert (spillInputChannelPositions[i] == spills[i].file.length());\n        Closeables.close(spillInputChannels[i], threwException);\n      }\n    }\n  }\n\n  @Override\n  public Option<MapStatus> stop(boolean success) {\n    try {\n      taskContext.taskMetrics().incPeakExecutionMemory(getPeakMemoryUsedBytes());\n\n      if (stopping) {\n        return Option.apply(null);\n      } else {\n        stopping = true;\n        if (success) {\n          if (mapStatus == null) {\n            throw new IllegalStateException(\"Cannot call stop(true) without having called write()\");\n          }\n          return Option.apply(mapStatus);\n        } else {\n          return Option.apply(null);\n        }\n      }\n    } finally {\n      if (sorter 
!= null) {\n        // If sorter is non-null, then this implies that we called stop() in response to an error,\n        // so we need to clean up memory and spill files created by the sorter\n        sorter.cleanupResources();\n      }\n    }\n  }\n\n  @Override\n  public long[] getPartitionLengths() {\n    return new long[0];\n  }\n\n  private static final class StreamFallbackChannelWrapper implements WritableByteChannelWrapper {\n    private final WritableByteChannel channel;\n\n    StreamFallbackChannelWrapper(OutputStream fallbackStream) {\n      this.channel = Channels.newChannel(fallbackStream);\n    }\n\n    @Override\n    public WritableByteChannel channel() {\n      return channel;\n    }\n\n    @Override\n    public void close() throws IOException {\n      channel.close();\n    }\n  }\n}\n"
  },
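The two merge paths above trade generality for speed: the stream-based path works with any codec and with encryption, while the transferTo path simply concatenates each partition's byte ranges from the spill files. A minimal Scala sketch of that concatenation idea, with hypothetical `spillFiles`/`partitionLengths` standing in for `SpillInfo` (not Comet's API):

```scala
import java.io.{File, FileInputStream, FileOutputStream}

// Hypothetical stand-ins: one channel per spill file, plus the byte length of
// each partition within each spill (what SpillInfo.partitionLengths records).
def concatSpills(
    spillFiles: Seq[File],
    partitionLengths: Seq[Array[Long]],
    numPartitions: Int,
    output: File): Unit = {
  val out = new FileOutputStream(output).getChannel
  val ins = spillFiles.map(f => new FileInputStream(f).getChannel)
  val positions = Array.fill(spillFiles.length)(0L)
  try {
    // For each partition, append that partition's bytes from every spill in order.
    for (partition <- 0 until numPartitions; i <- spillFiles.indices) {
      val len = partitionLengths(i)(partition)
      var copied = 0L
      // transferTo may copy fewer bytes than requested, so loop until done.
      while (copied < len) {
        copied += ins(i).transferTo(positions(i) + copied, len - copied, out)
      }
      positions(i) += len
    }
  } finally {
    (out +: ins).foreach(_.close())
  }
}
```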
  {
    "path": "spark/src/main/java/org/apache/spark/sql/comet/execution/shuffle/ExposedByteArrayOutputStream.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle;\n\nimport java.io.ByteArrayOutputStream;\n\n/** Subclass of ByteArrayOutputStream that exposes `buf` directly. */\npublic final class ExposedByteArrayOutputStream extends ByteArrayOutputStream {\n  ExposedByteArrayOutputStream(int size) {\n    super(size);\n  }\n\n  public byte[] getBuf() {\n    return buf;\n  }\n}\n"
  },
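For context, the point of exposing `buf` is to avoid the defensive copy that `ByteArrayOutputStream.toByteArray` makes on every call. A small sketch of the same idea (the `ExposedStream` subclass here is illustrative, since the real class has a package-private constructor); only the first `size()` bytes of the raw buffer are valid:

```scala
import java.io.ByteArrayOutputStream
import java.nio.ByteBuffer

// Illustrative subclass: `buf` is protected in ByteArrayOutputStream, so a
// subclass can hand it out directly instead of copying it via toByteArray.
class ExposedStream(size: Int) extends ByteArrayOutputStream(size) {
  def getBuf: Array[Byte] = buf
}

val out = new ExposedStream(64)
out.write("hello".getBytes("UTF-8"))

// Zero-copy view: wrap only the first size() bytes; the rest of `buf` is slack.
val view: ByteBuffer = ByteBuffer.wrap(out.getBuf, 0, out.size())
assert(view.remaining() == 5)
```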
  {
    "path": "spark/src/main/java/org/apache/spark/sql/comet/execution/shuffle/ShuffleThreadPool.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle;\n\nimport java.util.concurrent.*;\n\nimport com.google.common.util.concurrent.ThreadFactoryBuilder;\n\nimport org.apache.comet.CometConf$;\n\npublic class ShuffleThreadPool {\n  private static ThreadPoolExecutor INSTANCE;\n\n  /** Get the thread pool instance for shuffle writer. */\n  public static synchronized ExecutorService getThreadPool() {\n    if (INSTANCE == null) {\n      boolean isAsync = (boolean) CometConf$.MODULE$.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED().get();\n\n      if (isAsync) {\n        ThreadFactory factory =\n            new ThreadFactoryBuilder().setNameFormat(\"async-shuffle-writer-%d\").build();\n\n        int threadNum =\n            (int) CometConf$.MODULE$.COMET_COLUMNAR_SHUFFLE_ASYNC_MAX_THREAD_NUM().get();\n        INSTANCE =\n            new ThreadPoolExecutor(\n                0, threadNum, 1L, TimeUnit.SECONDS, new ThreadPoolQueue(threadNum), factory);\n      }\n    }\n\n    return INSTANCE;\n  }\n}\n\n/**\n * A blocking queue for thread pool. This will block new task submission until there is space in the\n * queue.\n */\nfinal class ThreadPoolQueue extends ArrayBlockingQueue<Runnable> {\n  public ThreadPoolQueue(int capacity) {\n    super(capacity);\n  }\n\n  @Override\n  public boolean offer(Runnable e) {\n    try {\n      put(e);\n    } catch (InterruptedException e1) {\n      Thread.currentThread().interrupt();\n      return false;\n    }\n    return true;\n  }\n}\n"
  },
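The interesting detail above is the overridden `offer`: `ThreadPoolExecutor.execute` enqueues via `offer`, so delegating to the blocking `put` turns a full queue into back-pressure on the submitter instead of a `RejectedExecutionException`. A self-contained sketch of the same trick:

```scala
import java.util.concurrent.{ArrayBlockingQueue, ThreadPoolExecutor, TimeUnit}

// offer() normally fails fast on a full queue, which makes ThreadPoolExecutor
// reject the task; delegating to the blocking put() stalls the submitter instead.
class BackPressureQueue(capacity: Int) extends ArrayBlockingQueue[Runnable](capacity) {
  override def offer(r: Runnable): Boolean =
    try { put(r); true }
    catch { case _: InterruptedException => Thread.currentThread().interrupt(); false }
}

val pool = new ThreadPoolExecutor(1, 1, 1L, TimeUnit.SECONDS, new BackPressureQueue(2))
// With one worker and capacity 2, the fourth submission blocks here until a
// queued task finishes, rather than throwing RejectedExecutionException.
(1 to 4).foreach(_ => pool.execute(() => Thread.sleep(100)))
pool.shutdown()
```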
  {
    "path": "spark/src/main/java/org/apache/spark/sql/comet/execution/shuffle/SpillInfo.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle;\n\nimport java.io.File;\n\nimport org.apache.spark.storage.TempShuffleBlockId;\n\n/** Metadata for a block of data written by ShuffleExternalSorter. */\npublic final class SpillInfo {\n  public final long[] partitionLengths;\n  public final File file;\n  final TempShuffleBlockId blockId;\n\n  public SpillInfo(int numPartitions, File file, TempShuffleBlockId blockId) {\n    this.partitionLengths = new long[numPartitions];\n    this.file = file;\n    this.blockId = blockId;\n  }\n}\n"
  },
  {
    "path": "spark/src/main/java/org/apache/spark/sql/comet/execution/shuffle/SpillWriter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle;\n\nimport java.io.ByteArrayOutputStream;\nimport java.io.File;\nimport java.io.IOException;\nimport java.util.LinkedList;\nimport java.util.Locale;\nimport javax.annotation.Nullable;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.spark.memory.SparkOutOfMemoryError;\nimport org.apache.spark.shuffle.ShuffleWriteMetricsReporter;\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocatorTrait;\nimport org.apache.spark.shuffle.sort.RowPartition;\nimport org.apache.spark.sql.types.StructType;\nimport org.apache.spark.unsafe.memory.MemoryBlock;\n\nimport org.apache.comet.CometConf;\nimport org.apache.comet.Native;\nimport org.apache.comet.serde.QueryPlanSerde$;\n\n/**\n * The interface for writing records into disk. This interface takes input rows and stores them in\n * allocated memory pages. When certain condition is met, the writer will spill the content of\n * memory to disk.\n */\npublic abstract class SpillWriter {\n  private static final Logger logger = LoggerFactory.getLogger(SpillWriter.class);\n\n  /**\n   * Memory pages that hold the records being sorted. The pages in this list are freed when\n   * spilling, although in principle we could recycle these pages across spills (on the other hand,\n   * this might not be necessary if we maintained a pool of re-usable pages in the TaskMemoryManager\n   * itself).\n   */\n  protected LinkedList<MemoryBlock> allocatedPages;\n\n  @Nullable protected MemoryBlock currentPage = null;\n  protected long pageCursor = -1;\n\n  // The memory allocator for this sorter. It is used to allocate/free memory pages for this sorter.\n  // Because we need to allocate off-heap memory regardless of configured Spark memory mode\n  // (on-heap/off-heap), we need a separate memory allocator.\n  protected CometShuffleMemoryAllocatorTrait allocator;\n\n  protected Native nativeLib;\n\n  protected byte[][] dataTypes;\n\n  // 0: CRC32, 1: Adler32, or 2: CRC32C. Spark uses Adler32 by default.\n  protected int checksumAlgo = 1;\n  protected long checksum = -1;\n\n  /** Serialize row schema to byte array. 
*/\n  protected byte[][] serializeSchema(StructType schema) {\n    byte[][] dataTypes = new byte[schema.length()][];\n    for (int i = 0; i < schema.length(); i++) {\n      ByteArrayOutputStream outputStream = new ByteArrayOutputStream();\n      try {\n        QueryPlanSerde$.MODULE$\n            .serializeDataType(schema.apply(i).dataType())\n            .get()\n            .writeTo(outputStream);\n      } catch (IOException e) {\n        throw new RuntimeException(e);\n      }\n      dataTypes[i] = outputStream.toByteArray();\n    }\n\n    return dataTypes;\n  }\n\n  protected void setChecksumAlgo(String checksumAlgo) {\n    String algo = checksumAlgo.toLowerCase(Locale.ROOT);\n\n    if (algo.equals(\"crc32\")) {\n      this.checksumAlgo = 0;\n    } else if (algo.equals(\"adler32\")) {\n      this.checksumAlgo = 1;\n    } else if (algo.equals(\"crc32c\")) {\n      this.checksumAlgo = 2;\n    } else {\n      throw new UnsupportedOperationException(\n          \"Unsupported shuffle checksum algorithm: \" + checksumAlgo);\n    }\n  }\n\n  protected void setChecksum(long checksum) {\n    this.checksum = checksum;\n  }\n\n  protected long getChecksum() {\n    return checksum;\n  }\n\n  /**\n   * Spills the current in-memory records to disk.\n   *\n   * @param required the amount of required memory.\n   */\n  protected abstract void spill(int required) throws IOException;\n\n  /**\n   * Allocates more memory in order to insert an additional record. This will request additional\n   * memory from the memory manager and spill if the requested memory cannot be obtained.\n   *\n   * @param required the required space in the data page, in bytes, including space for storing the\n   *     record size. This must be less than or equal to the page size (records that exceed the page\n   *     size are handled via a different code path which uses special overflow pages).\n   * @return true if the memory is allocated successfully, false otherwise. If false is returned, it\n   *     means spilling is happening and the caller should not continue inserting into this writer.\n   */\n  public boolean acquireNewPageIfNecessary(int required) {\n    if (currentPage == null\n        || pageCursor + required > currentPage.getBaseOffset() + currentPage.size()) {\n      // TODO: try to find space in previous pages\n      try {\n        currentPage = allocator.allocate(required);\n      } catch (SparkOutOfMemoryError error) {\n        try {\n          // Cannot allocate enough memory, spill\n          spill(required);\n          return false;\n        } catch (IOException e) {\n          throw new RuntimeException(\"Unable to spill() in order to acquire \" + required, e);\n        }\n      }\n      assert (currentPage != null);\n      pageCursor = currentPage.getBaseOffset();\n      allocatedPages.add(currentPage);\n    }\n    return true;\n  }\n\n  /** Allocates the initial memory page */\n  public void initialCurrentPage(int required) {\n    assert (currentPage == null);\n    try {\n      currentPage = allocator.allocate(required);\n    } catch (SparkOutOfMemoryError e) {\n      logger.error(\"Unable to acquire {} bytes of memory\", required);\n      throw e;\n    }\n    assert (currentPage != null);\n    pageCursor = currentPage.getBaseOffset();\n    allocatedPages.add(currentPage);\n  }\n\n  /** The core logic of spilling buffered rows to disk. 
*/\n  protected long doSpilling(\n      byte[][] dataTypes,\n      File file,\n      RowPartition rowPartition,\n      ShuffleWriteMetricsReporter writeMetricsToUse,\n      double preferDictionaryRatio,\n      String compressionCodec,\n      int compressionLevel,\n      boolean tracingEnabled) {\n    long[] addresses = rowPartition.getRowAddresses();\n    int[] sizes = rowPartition.getRowSizes();\n\n    boolean checksumEnabled = checksum != -1;\n    long currentChecksum = checksumEnabled ? checksum : 0L;\n\n    long start = System.nanoTime();\n    int batchSize = (int) CometConf.COMET_COLUMNAR_SHUFFLE_BATCH_SIZE().get();\n    long[] results =\n        nativeLib.writeSortedFileNative(\n            addresses,\n            sizes,\n            dataTypes,\n            file.getAbsolutePath(),\n            preferDictionaryRatio,\n            batchSize,\n            checksumEnabled,\n            checksumAlgo,\n            currentChecksum,\n            compressionCodec,\n            compressionLevel,\n            tracingEnabled);\n\n    long written = results[0];\n    checksum = results[1];\n\n    rowPartition.reset();\n\n    // Update metrics\n    // Other threads may be updating the metrics at the same time, so we need to synchronize it.\n    synchronized (writeMetricsToUse) {\n      writeMetricsToUse.incWriteTime(System.nanoTime() - start);\n      writeMetricsToUse.incRecordsWritten(addresses.length);\n      writeMetricsToUse.incBytesWritten(written);\n    }\n\n    return written;\n  }\n\n  /** Frees allocated memory pages of this writer */\n  public long freeMemory() {\n    long freed = 0L;\n    for (MemoryBlock block : allocatedPages) {\n      freed += allocator.free(block);\n    }\n    allocatedPages.clear();\n    currentPage = null;\n    pageCursor = 0;\n\n    return freed;\n  }\n\n  /** Returns the amount of memory used by this writer, in bytes. */\n  public long getMemoryUsage() {\n    // Assume this method won't be called on a spilling writer, so we don't need to synchronize it.\n    long used = 0;\n\n    for (MemoryBlock block : allocatedPages) {\n      used += block.size();\n    }\n\n    return used;\n  }\n}\n"
  },
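A sketch of how a caller is expected to honor the `acquireNewPageIfNecessary` contract described above: a `false` return means the writer spilled instead of allocating, so the caller must stop inserting. `SpillWriterLike`, `writeRecord`, and `insertRecord` are hypothetical stand-ins, not Comet APIs:

```scala
// Hypothetical stand-ins for the contract, not Comet APIs.
trait SpillWriterLike {
  def acquireNewPageIfNecessary(required: Int): Boolean
  def writeRecord(bytes: Array[Byte]): Unit
}

def insertRecord(writer: SpillWriterLike, record: Array[Byte]): Boolean = {
  val required = record.length + 4 // record bytes plus a 4-byte length prefix
  if (writer.acquireNewPageIfNecessary(required)) {
    // Safe: the current page now has at least `required` free bytes.
    writer.writeRecord(record)
    true
  } else {
    // A spill happened instead of an allocation; stop inserting for now.
    false
  }
}
```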
  {
    "path": "spark/src/main/resources/log4j2.properties",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n# \n#   http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Set everything to be logged to the file target/unit-tests.log\nrootLogger.level = info\nrootLogger.appenderRef.file.ref = ${sys:test.appender:-File}\n\nappender.file.type = File\nappender.file.name = File\nappender.file.fileName = target/unit-tests.log\nappender.file.layout.type = PatternLayout\nappender.file.layout.pattern = %d{yy/MM/dd HH:mm:ss.SSS} %t %p %c{1}: %m%n\n\n# Tests that launch java subprocesses can set the \"test.appender\" system property to\n# \"console\" to avoid having the child process's logs overwrite the unit test's\n# log file.\nappender.console.type = Console\nappender.console.name = console\nappender.console.target = SYSTEM_ERR\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %t: %m%n\n\n# Ignore messages below warning level from Jetty, because it's a bit verbose\nlogger.jetty.name = org.sparkproject.jetty\nlogger.jetty.level = warn\n\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/CometExecIterator.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.lang.management.ManagementFactory\n\nimport org.apache.hadoop.conf.Configuration\nimport org.apache.spark._\nimport org.apache.spark.broadcast.Broadcast\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.network.util.ByteUnit\nimport org.apache.spark.sql.comet.CometMetricNode\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.vectorized._\nimport org.apache.spark.util.SerializableConfiguration\n\nimport org.apache.comet.CometConf._\nimport org.apache.comet.Tracing.withTrace\nimport org.apache.comet.exceptions.CometQueryExecutionException\nimport org.apache.comet.parquet.CometFileKeyUnwrapper\nimport org.apache.comet.serde.Config.ConfigMap\nimport org.apache.comet.vector.NativeUtil\n\n/**\n * An iterator class used to execute Comet native query. It takes an input iterator which comes\n * from Comet Scan and is expected to produce batches of Arrow Arrays. During consuming this\n * iterator, it will consume input iterator and pass Arrow Arrays to Comet native engine by\n * addresses. Even after the end of input iterator, this iterator still possibly continues\n * executing native query as there might be blocking operators such as Sort, Aggregate. The API\n * `hasNext` can be used to check if it is the end of this iterator (i.e. 
the native query is\n * done).\n *\n * @param inputs\n *   The input iterators producing sequence of batches of Arrow Arrays.\n * @param protobufQueryPlan\n *   The serialized bytes of Spark execution plan.\n * @param numParts\n *   The number of partitions.\n * @param partitionIndex\n *   The index of the partition.\n * @param encryptedFilePaths\n *   Paths to encrypted Parquet files that need key unwrapping.\n */\nclass CometExecIterator(\n    val id: Long,\n    inputs: Seq[Iterator[ColumnarBatch]],\n    numOutputCols: Int,\n    protobufQueryPlan: Array[Byte],\n    nativeMetrics: CometMetricNode,\n    numParts: Int,\n    partitionIndex: Int,\n    broadcastedHadoopConfForEncryption: Option[Broadcast[SerializableConfiguration]] = None,\n    encryptedFilePaths: Seq[String] = Seq.empty,\n    shuffleBlockIterators: Map[Int, CometShuffleBlockIterator] = Map.empty)\n    extends Iterator[ColumnarBatch]\n    with Logging {\n\n  private val tracingEnabled = CometConf.COMET_TRACING_ENABLED.get()\n  private val memoryMXBean = ManagementFactory.getMemoryMXBean\n  private val nativeLib = new Native()\n  private val nativeUtil = new NativeUtil()\n  private val taskAttemptId = TaskContext.get().taskAttemptId\n  private val taskCPUs = TaskContext.get().cpus()\n  private val cometTaskMemoryManager = new CometTaskMemoryManager(id, taskAttemptId)\n  // Build a mixed array of iterators: CometShuffleBlockIterator for shuffle\n  // scan indices, CometBatchIterator for regular scan indices.\n  private val inputIterators: Array[Object] = inputs.zipWithIndex.map {\n    case (_, idx) if shuffleBlockIterators.contains(idx) =>\n      shuffleBlockIterators(idx).asInstanceOf[Object]\n    case (iterator, _) =>\n      new CometBatchIterator(iterator, nativeUtil).asInstanceOf[Object]\n  }.toArray\n\n  private val plan = {\n    val conf = SparkEnv.get.conf\n    val localDiskDirs = SparkEnv.get.blockManager.getLocalDiskDirs\n\n    // serialize Comet related Spark configs in protobuf format\n    val protobufSparkConfigs = CometExecIterator.serializeCometSQLConfs()\n\n    // Create keyUnwrapper if encryption is enabled\n    val keyUnwrapper = if (encryptedFilePaths.nonEmpty) {\n      val unwrapper = new CometFileKeyUnwrapper()\n      val hadoopConf: Configuration = broadcastedHadoopConfForEncryption.get.value.value\n\n      encryptedFilePaths.foreach(filePath =>\n        unwrapper.storeDecryptionKeyRetriever(filePath, hadoopConf))\n\n      unwrapper\n    } else {\n      null\n    }\n\n    val memoryConfig = CometExecIterator.getMemoryConfig(conf)\n\n    nativeLib.createPlan(\n      id,\n      inputIterators,\n      protobufQueryPlan,\n      protobufSparkConfigs,\n      numParts,\n      nativeMetrics,\n      metricsUpdateInterval = COMET_METRICS_UPDATE_INTERVAL.get(),\n      cometTaskMemoryManager,\n      localDiskDirs,\n      batchSize = COMET_BATCH_SIZE.get(),\n      memoryConfig.offHeapMode,\n      memoryConfig.memoryPoolType,\n      memoryConfig.memoryLimit,\n      memoryConfig.memoryLimitPerTask,\n      taskAttemptId,\n      taskCPUs,\n      keyUnwrapper)\n  }\n\n  private var nextBatch: Option[ColumnarBatch] = None\n  private var prevBatch: ColumnarBatch = null\n  private var currentBatch: ColumnarBatch = null\n  private var closed: Boolean = false\n\n  // Register a task completion listener to ensure native resources are released\n  // when the task is done.\n  TaskContext.get().addTaskCompletionListener[Unit] { _ =>\n    this.close()\n  }\n\n  private def getNextBatch: Option[ColumnarBatch] = {\n    
assert(partitionIndex >= 0 && partitionIndex < numParts)\n\n    val ctx = TaskContext.get()\n\n    try {\n      val result = withTrace(\n        s\"getNextBatch[JVM] stage=${ctx.stageId()}\",\n        tracingEnabled, {\n          nativeUtil.getNextBatch(\n            numOutputCols,\n            (arrayAddrs, schemaAddrs) => {\n              nativeLib.executePlan(ctx.stageId(), partitionIndex, plan, arrayAddrs, schemaAddrs)\n            })\n        })\n\n      if (tracingEnabled) {\n        traceMemoryUsage()\n      }\n\n      result\n    } catch {\n      // Handle CometQueryExecutionException with JSON payload first\n      case e: CometQueryExecutionException =>\n        logError(s\"Native execution for task $taskAttemptId failed\", e)\n        throw SparkErrorConverter.convertToSparkException(e)\n\n      case e: CometNativeException =>\n        // it is generally considered bad practice to log and then rethrow an\n        // exception, but it really helps debugging to be able to see which task\n        // threw the exception, so we log the exception with taskAttemptId here\n        logError(s\"Native execution for task $taskAttemptId failed\", e)\n\n        val parquetError: scala.util.matching.Regex =\n          \"\"\"^Parquet error: (?:.*)$\"\"\".r\n        e.getMessage match {\n          case parquetError() =>\n            // See org.apache.spark.sql.errors.QueryExecutionErrors.failedToReadDataError\n            // See org.apache.parquet.hadoop.ParquetFileReader for error message.\n            throw new SparkException(\n              errorClass = \"_LEGACY_ERROR_TEMP_2254\",\n              messageParameters = Map(\"message\" -> e.getMessage),\n              cause = new SparkException(\"File is not a Parquet file.\", e))\n          case _ =>\n            throw e\n        }\n      case e: Throwable =>\n        throw e\n    }\n  }\n\n  override def hasNext: Boolean = {\n    if (closed) return false\n\n    if (nextBatch.isDefined) {\n      return true\n    }\n\n    // Close previous batch if any.\n    // This is to guarantee safety at the native side before we overwrite the buffer memory\n    // shared across batches in the native side.\n    if (prevBatch != null) {\n      prevBatch.close()\n      prevBatch = null\n    }\n\n    nextBatch = getNextBatch\n\n    logTrace(s\"Task $taskAttemptId memory pool usage is ${cometTaskMemoryManager.getUsed} bytes\")\n\n    if (nextBatch.isEmpty) {\n      close()\n      false\n    } else {\n      true\n    }\n  }\n\n  override def next(): ColumnarBatch = {\n    if (currentBatch != null) {\n      // Eagerly release Arrow Arrays in the previous batch\n      currentBatch.close()\n      currentBatch = null\n    }\n\n    if (nextBatch.isEmpty && !hasNext) {\n      throw new NoSuchElementException(\"No more element\")\n    }\n\n    currentBatch = nextBatch.get\n    prevBatch = currentBatch\n    nextBatch = None\n    currentBatch\n  }\n\n  def close(): Unit = synchronized {\n    if (!closed) {\n      if (currentBatch != null) {\n        currentBatch.close()\n        currentBatch = null\n      }\n      nativeUtil.close()\n      shuffleBlockIterators.values.foreach(_.close())\n      nativeLib.releasePlan(plan)\n\n      if (tracingEnabled) {\n        traceMemoryUsage()\n      }\n\n      val memInUse = cometTaskMemoryManager.getUsed\n      if (memInUse != 0) {\n        logWarning(s\"CometExecIterator closed with non-zero memory usage : $memInUse\")\n      }\n\n      closed = true\n    }\n  }\n\n  private def traceMemoryUsage(): Unit = {\n    
nativeLib.logMemoryUsage(\"jvm_heap_used\", memoryMXBean.getHeapMemoryUsage.getUsed)\n  }\n}\n\nobject CometExecIterator extends Logging {\n\n  private def cometSqlConfs: Map[String, String] =\n    SQLConf.get.getAllConfs.filter(_._1.startsWith(CometConf.COMET_PREFIX))\n\n  def serializeCometSQLConfs(): Array[Byte] = {\n    val builder = ConfigMap.newBuilder()\n    cometSqlConfs.foreach { case (k, v) =>\n      if (k.startsWith(s\"${CometConf.COMET_PREFIX}.datafusion.\")) {\n        if (CometConf.COMET_RESPECT_DATAFUSION_CONFIGS.get(SQLConf.get)) {\n          builder.putEntries(k, v)\n        }\n      } else {\n        builder.putEntries(k, v)\n      }\n    }\n    // Inject the resolved executor cores so the native side can use it\n    // for tokio runtime thread count\n    val executorCores = numDriverOrExecutorCores(SparkEnv.get.conf)\n    builder.putEntries(\"spark.executor.cores\", executorCores.toString)\n\n    builder.build().toByteArray\n  }\n\n  def getMemoryConfig(conf: SparkConf): MemoryConfig = {\n    val numCores = numDriverOrExecutorCores(conf)\n    val coresPerTask = conf.get(\"spark.task.cpus\", \"1\").toInt\n    // there are different paths for on-heap vs off-heap mode\n    val offHeapMode = CometSparkSessionExtensions.isOffHeapEnabled(conf)\n    if (offHeapMode) {\n      // in off-heap mode, Comet uses unified memory management to share off-heap memory with Spark\n      val offHeapSize = ByteUnit.MiB.toBytes(conf.getSizeAsMb(\"spark.memory.offHeap.size\"))\n      val memoryFraction = CometConf.COMET_OFFHEAP_MEMORY_POOL_FRACTION.get()\n      val memoryLimit = (offHeapSize * memoryFraction).toLong\n      val memoryLimitPerTask = (memoryLimit.toDouble * coresPerTask / numCores).toLong\n      val memoryPoolType = COMET_OFFHEAP_MEMORY_POOL_TYPE.get()\n      logInfo(\n        s\"memoryPoolType=$memoryPoolType, \" +\n          s\"offHeapSize=${toMB(offHeapSize)}, \" +\n          s\"memoryFraction=$memoryFraction, \" +\n          s\"memoryLimit=${toMB(memoryLimit)}, \" +\n          s\"memoryLimitPerTask=${toMB(memoryLimitPerTask)}\")\n      MemoryConfig(offHeapMode, memoryPoolType = memoryPoolType, memoryLimit, memoryLimitPerTask)\n    } else {\n      // we'll use the built-in memory pool from DF, initialized with `memory_limit`\n      // and `memory_fraction` below.\n      val memoryLimit = CometSparkSessionExtensions.getCometMemoryOverhead(conf)\n      // example: 16 GB maxMemory * 16 cores with 4 cores per task results\n      // in memory_limit_per_task = 16 GB * 4 / 16 = 4 GB\n      val memoryLimitPerTask = (memoryLimit.toDouble * coresPerTask / numCores).toLong\n      val memoryPoolType = COMET_ONHEAP_MEMORY_POOL_TYPE.get()\n      logInfo(\n        s\"memoryPoolType=$memoryPoolType, \" +\n          s\"memoryLimit=${toMB(memoryLimit)}, \" +\n          s\"memoryLimitPerTask=${toMB(memoryLimitPerTask)}\")\n      MemoryConfig(offHeapMode, memoryPoolType = memoryPoolType, memoryLimit, memoryLimitPerTask)\n    }\n  }\n\n  private def numDriverOrExecutorCores(conf: SparkConf): Int = {\n    def convertToInt(threads: String): Int = {\n      if (threads == \"*\") Runtime.getRuntime.availableProcessors() else threads.toInt\n    }\n\n    // If running in local mode, get number of threads from the spark.master setting.\n    // See https://spark.apache.org/docs/latest/submitting-applications.html#master-urls\n    // for supported formats\n\n    // `local[*]` means using all available cores and `local[2]` means using 2 cores.\n    val LOCAL_N_REGEX = 
\"\"\"local\\[([0-9]+|\\*)\\]\"\"\".r\n    // Also handle format `local[num-worker-threads, max-failures]\n    val LOCAL_N_FAILURES_REGEX = \"\"\"local\\[([0-9]+|\\*)\\s*,\\s*([0-9]+)\\]\"\"\".r\n\n    val master = conf.get(\"spark.master\")\n    master match {\n      case \"local\" => 1\n      case LOCAL_N_REGEX(threads) => convertToInt(threads)\n      case LOCAL_N_FAILURES_REGEX(threads, _) => convertToInt(threads)\n      case _ => conf.get(\"spark.executor.cores\", \"1\").toInt\n    }\n  }\n\n  private def toMB(n: Long): String = {\n    s\"${(n.toDouble / 1024.0 / 1024.0).toLong} MB\"\n  }\n}\n\ncase class MemoryConfig(\n    offHeapMode: Boolean,\n    memoryPoolType: String,\n    memoryLimit: Long,\n    memoryLimitPerTask: Long)\n"
  },
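The per-task split in `getMemoryConfig` is proportional to each task's CPU allocation. A worked version of the example from the code comment (numbers are illustrative):

```scala
// Worked example of memoryLimitPerTask from getMemoryConfig: each task's
// share of the pool is proportional to its CPU allocation.
val memoryLimit = 16L * 1024 * 1024 * 1024 // 16 GB pool
val numCores = 16                          // executor cores
val coresPerTask = 4                       // spark.task.cpus
val memoryLimitPerTask = (memoryLimit.toDouble * coresPerTask / numCores).toLong
assert(memoryLimitPerTask == 4L * 1024 * 1024 * 1024) // 16 GB * 4 / 16 = 4 GB
```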
  {
    "path": "spark/src/main/scala/org/apache/comet/CometFallback.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.apache.spark.sql.catalyst.trees.{TreeNode, TreeNodeTag}\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\n\n/**\n * Sticky fallback marker for shuffle / stage nodes.\n *\n * Comet's shuffle-support predicates (e.g. `CometShuffleExchangeExec.columnarShuffleSupported`)\n * run at both initial planning and AQE stage-prep. Some fallback decisions depend on the\n * surrounding plan shape - for example, the presence of a DPP scan below a shuffle. Between the\n * two passes AQE can reshape that subtree (a completed child stage becomes a\n * `ShuffleQueryStageExec`, a `LeafExecNode` whose `children` is empty), so a naive re-evaluation\n * can flip the decision.\n *\n * When a decision is made on the initial-plan pass, the deciding rule records a sticky tag via\n * [[markForFallback]]. On subsequent passes, callers short-circuit via [[isMarkedForFallback]]\n * and preserve the earlier decision instead of re-deriving it from the current plan shape.\n *\n * This tag is kept separate from `CometExplainInfo.EXTENSION_INFO` on purpose: the explain tag\n * accumulates informational reasons (including rolled-up child reasons), many of which are not a\n * full-fallback signal. Treating any presence of explain info as fallback is too coarse and\n * breaks legitimate conversions (e.g. a shuffle tagged \"Comet native shuffle not enabled\" should\n * still be eligible for columnar shuffle). The fallback tag exists only for decisions that should\n * remain sticky.\n */\nobject CometFallback {\n\n  val STAGE_FALLBACK_TAG: TreeNodeTag[Set[String]] =\n    new TreeNodeTag[Set[String]](\"CometStageFallback\")\n\n  /**\n   * Mark a node so that subsequent shuffle-support re-evaluations fall back to Spark without\n   * re-deriving the decision from the (possibly reshaped) subtree. Also records the reason in the\n   * usual explain channel so it surfaces in extended explain output.\n   */\n  def markForFallback[T <: TreeNode[_]](node: T, reason: String): T = {\n    val existing = node.getTagValue(STAGE_FALLBACK_TAG).getOrElse(Set.empty[String])\n    node.setTagValue(STAGE_FALLBACK_TAG, existing + reason)\n    withInfo(node, reason)\n    node\n  }\n\n  /** True if a prior rule pass marked this node for Spark fallback via [[markForFallback]]. */\n  def isMarkedForFallback(node: TreeNode[_]): Boolean =\n    node.getTagValue(STAGE_FALLBACK_TAG).exists(_.nonEmpty)\n}\n"
  },
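A sketch of the intended call pattern for the sticky tag, as described in the scaladoc above. `dependsOnPlanShape` is a hypothetical stand-in for any shape-sensitive check that could flip once AQE replaces a completed child stage with a `ShuffleQueryStageExec`:

```scala
import org.apache.spark.sql.execution.SparkPlan
import org.apache.comet.CometFallback

// `dependsOnPlanShape` is hypothetical, e.g. detecting a DPP scan below the shuffle.
def columnarShuffleSupported(
    shuffle: SparkPlan,
    dependsOnPlanShape: SparkPlan => Boolean): Boolean = {
  if (CometFallback.isMarkedForFallback(shuffle)) {
    false // decided on an earlier pass; do not re-derive from the reshaped tree
  } else if (dependsOnPlanShape(shuffle)) {
    CometFallback.markForFallback(shuffle, "plan shape rules out columnar shuffle")
    false
  } else {
    true
  }
}
```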
  {
    "path": "spark/src/main/scala/org/apache/comet/CometMetricsListener.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.apache.spark.CometSource\nimport org.apache.spark.sql.execution.QueryExecution\nimport org.apache.spark.sql.util.QueryExecutionListener\n\nclass CometMetricsListener extends QueryExecutionListener {\n\n  override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit = {\n    val stats = CometCoverageStats.forPlan(qe.executedPlan)\n    CometSource.recordStats(stats)\n  }\n\n  override def onFailure(funcName: String, qe: QueryExecution, exception: Exception): Unit = {\n    // Record stats even on failure since the query was still planned\n    val stats = CometCoverageStats.forPlan(qe.executedPlan)\n    CometSource.recordStats(stats)\n  }\n}\n"
  },
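For reference, `QueryExecutionListener` implementations like this one attach to a session via `listenerManager`; a minimal sketch, assuming Comet has not already registered the listener through its plugin:

```scala
import org.apache.spark.sql.SparkSession
import org.apache.comet.CometMetricsListener

val spark = SparkSession.builder().master("local[2]").getOrCreate()
// Coverage stats are then recorded after every successful or failed query.
spark.listenerManager.register(new CometMetricsListener)
```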
  {
    "path": "spark/src/main/scala/org/apache/comet/CometSparkSessionExtensions.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.nio.ByteOrder\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.network.util.ByteUnit\nimport org.apache.spark.sql.{SparkSession, SparkSessionExtensions}\nimport org.apache.spark.sql.catalyst.rules.Rule\nimport org.apache.spark.sql.catalyst.trees.TreeNode\nimport org.apache.spark.sql.comet._\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.CometConf._\nimport org.apache.comet.rules.{CometExecRule, CometScanRule, EliminateRedundantTransitions}\nimport org.apache.comet.shims.ShimCometSparkSessionExtensions\n\n/**\n * CometDriverPlugin will register an instance of this class with Spark.\n *\n * This class is responsible for injecting Comet rules and extensions into Spark.\n */\nclass CometSparkSessionExtensions\n    extends (SparkSessionExtensions => Unit)\n    with Logging\n    with ShimCometSparkSessionExtensions {\n  override def apply(extensions: SparkSessionExtensions): Unit = {\n    extensions.injectColumnar { session => CometScanColumnar(session) }\n    extensions.injectColumnar { session => CometExecColumnar(session) }\n    extensions.injectQueryStagePrepRule { session => CometScanRule(session) }\n    extensions.injectQueryStagePrepRule { session => CometExecRule(session) }\n  }\n\n  case class CometScanColumnar(session: SparkSession) extends ColumnarRule {\n    override def preColumnarTransitions: Rule[SparkPlan] = CometScanRule(session)\n  }\n\n  case class CometExecColumnar(session: SparkSession) extends ColumnarRule {\n    override def preColumnarTransitions: Rule[SparkPlan] = CometExecRule(session)\n\n    override def postColumnarTransitions: Rule[SparkPlan] =\n      EliminateRedundantTransitions(session)\n  }\n}\n\nobject CometSparkSessionExtensions extends Logging {\n  lazy val isBigEndian: Boolean = ByteOrder.nativeOrder().equals(ByteOrder.BIG_ENDIAN)\n\n  /**\n   * Checks whether Comet extension should be loaded for Spark.\n   */\n  private[comet] def isCometLoaded(conf: SQLConf): Boolean = {\n    if (isBigEndian) {\n      logInfo(\"Comet extension is disabled because platform is big-endian\")\n      return false\n    }\n    if (!COMET_ENABLED.get(conf)) {\n      logInfo(s\"Comet extension is disabled, please turn on ${COMET_ENABLED.key} to enable it\")\n      return false\n    }\n\n    // We don't support INT96 timestamps written by Apache Impala in a different timezone yet\n    if (conf.getConf(SQLConf.PARQUET_INT96_TIMESTAMP_CONVERSION)) {\n      logWarning(\n        \"Comet extension is disabled, because it currently doesn't support\" +\n          s\" 
${SQLConf.PARQUET_INT96_TIMESTAMP_CONVERSION} setting to true.\")\n      return false\n    }\n\n    try {\n      // This will load the Comet native lib on demand, and if success, should set\n      // `NativeBase.loaded` to true\n      NativeBase.isLoaded\n    } catch {\n      case e: Throwable =>\n        if (COMET_NATIVE_LOAD_REQUIRED.get(conf)) {\n          throw new CometRuntimeException(\n            \"Error when loading native library. Please fix the error and try again, or fallback \" +\n              s\"to Spark by setting ${COMET_ENABLED.key} to false\",\n            e)\n        } else {\n          logWarning(\n            \"Comet extension is disabled because of error when loading native lib. \" +\n              \"Falling back to Spark\",\n            e)\n        }\n        false\n    }\n  }\n\n  // Check whether Comet shuffle is enabled:\n  // 1. `COMET_EXEC_SHUFFLE_ENABLED` is true\n  // 2. `spark.shuffle.manager` is set to `CometShuffleManager`\n  // 3. Off-heap memory is enabled || Spark/Comet unit testing\n  def isCometShuffleEnabled(conf: SQLConf): Boolean =\n    COMET_EXEC_SHUFFLE_ENABLED.get(conf) && isCometShuffleManagerEnabled(conf)\n\n  def isCometShuffleManagerEnabled(conf: SQLConf): Boolean = {\n    conf.contains(\"spark.shuffle.manager\") && conf.getConfString(\"spark.shuffle.manager\") ==\n      \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\"\n  }\n\n  def isCometScan(op: SparkPlan): Boolean = {\n    op.isInstanceOf[CometBatchScanExec] || op.isInstanceOf[CometScanExec]\n  }\n\n  def isSpark35Plus: Boolean = {\n    org.apache.spark.SPARK_VERSION >= \"3.5\"\n  }\n\n  def isSpark40Plus: Boolean = {\n    org.apache.spark.SPARK_VERSION >= \"4.0\"\n  }\n\n  /**\n   * Whether we should override Spark memory configuration for Comet. 
This only returns true when\n   * Comet is enabled, Comet native execution and/or Comet shuffle is enabled, and Comet doesn't\n   * use off-heap mode (unified memory manager).\n   */\n  def shouldOverrideMemoryConf(conf: SparkConf): Boolean = {\n    val cometEnabled = getBooleanConf(conf, CometConf.COMET_ENABLED)\n    val cometShuffleEnabled = getBooleanConf(conf, CometConf.COMET_EXEC_SHUFFLE_ENABLED)\n    val cometExecEnabled = getBooleanConf(conf, CometConf.COMET_EXEC_ENABLED)\n    val offHeapMode = CometSparkSessionExtensions.isOffHeapEnabled(conf)\n    cometEnabled && (cometShuffleEnabled || cometExecEnabled) && !offHeapMode\n  }\n\n  /**\n   * Determines required memory overhead in MB per executor process for Comet when running in\n   * on-heap mode.\n   */\n  def getCometMemoryOverheadInMiB(sparkConf: SparkConf): Long = {\n    if (isOffHeapEnabled(sparkConf)) {\n      // when running in off-heap mode we use unified memory management to share\n      // off-heap memory with Spark so do not add overhead\n      return 0\n    }\n    ConfigHelpers.byteFromString(\n      sparkConf.get(\n        COMET_ONHEAP_MEMORY_OVERHEAD.key,\n        COMET_ONHEAP_MEMORY_OVERHEAD.defaultValueString),\n      ByteUnit.MiB)\n  }\n\n  private def getBooleanConf(conf: SparkConf, entry: ConfigEntry[Boolean]) =\n    conf.getBoolean(entry.key, entry.defaultValue.get)\n\n  /**\n   * Calculates required memory overhead in bytes per executor process for Comet when running in\n   * on-heap mode.\n   */\n  def getCometMemoryOverhead(sparkConf: SparkConf): Long = {\n    ByteUnit.MiB.toBytes(getCometMemoryOverheadInMiB(sparkConf))\n  }\n\n  /**\n   * Calculates required shuffle memory size in bytes per executor process for Comet when running\n   * in on-heap mode.\n   */\n  def getCometShuffleMemorySize(sparkConf: SparkConf, conf: SQLConf = SQLConf.get): Long = {\n    assert(!isOffHeapEnabled(sparkConf))\n\n    val cometMemoryOverhead = getCometMemoryOverheadInMiB(sparkConf)\n\n    val overheadFactor = COMET_ONHEAP_SHUFFLE_MEMORY_FACTOR.get(conf)\n\n    val shuffleMemorySize = (overheadFactor * cometMemoryOverhead).toLong\n    if (shuffleMemorySize > cometMemoryOverhead) {\n      logWarning(\n        s\"Configured shuffle memory size $shuffleMemorySize is larger than Comet memory overhead \" +\n          s\"$cometMemoryOverhead, using Comet memory overhead instead.\")\n      ByteUnit.MiB.toBytes(cometMemoryOverhead)\n    } else {\n      ByteUnit.MiB.toBytes(shuffleMemorySize)\n    }\n  }\n\n  def isOffHeapEnabled(sparkConf: SparkConf): Boolean = {\n    sparkConf.getBoolean(\"spark.memory.offHeap.enabled\", false)\n  }\n\n  /**\n   * Attaches explain information to a TreeNode, rolling up the corresponding information tags\n   * from any child nodes. For now, we are using this to attach the reasons why certain Spark\n   * operators or expressions are disabled.\n   *\n   * @param node\n   *   The node to attach the explain information to. Typically a SparkPlan\n   * @param info\n   *   Information text. Optional, may be null or empty. If not provided, then only information\n   *   from child nodes will be included.\n   * @param exprs\n   *   Child nodes. Information attached in these nodes will be included in the information\n   *   attached to @node\n   * @tparam T\n   *   The type of the TreeNode. 
Typically SparkPlan, AggregateExpression, or Expression\n   * @return\n   *   The node with information (if any) attached\n   */\n  def withInfo[T <: TreeNode[_]](node: T, info: String, exprs: T*): T = {\n    // support existing approach of passing in multiple infos in a newline-delimited string\n    val infoSet = if (info == null || info.isEmpty) {\n      Set.empty[String]\n    } else {\n      info.split(\"\\n\").toSet\n    }\n    withInfos(node, infoSet, exprs: _*)\n  }\n\n  /**\n   * Attaches explain information to a TreeNode, rolling up the corresponding information tags\n   * from any child nodes. For now, we are using this to attach the reasons why certain Spark\n   * operators or expressions are disabled.\n   *\n   * @param node\n   *   The node to attach the explain information to. Typically a SparkPlan\n   * @param info\n   *   Information text. May contain zero or more strings. If not provided, then only information\n   *   from child nodes will be included.\n   * @param exprs\n   *   Child nodes. Information attached in these nodes will be included in the information\n   *   attached to @node\n   * @tparam T\n   *   The type of the TreeNode. Typically SparkPlan, AggregateExpression, or Expression\n   * @return\n   *   The node with information (if any) attached\n   */\n  def withInfos[T <: TreeNode[_]](node: T, info: Set[String], exprs: T*): T = {\n    if (CometConf.COMET_LOG_FALLBACK_REASONS.get()) {\n      for (reason <- info) {\n        logWarning(s\"Comet cannot accelerate ${node.getClass.getSimpleName} because: $reason\")\n      }\n    }\n    val existingNodeInfos = node.getTagValue(CometExplainInfo.EXTENSION_INFO)\n    val newNodeInfo = (existingNodeInfos ++ exprs\n      .flatMap(_.getTagValue(CometExplainInfo.EXTENSION_INFO))).flatten.toSet\n    node.setTagValue(CometExplainInfo.EXTENSION_INFO, newNodeInfo ++ info)\n    node\n  }\n\n  /**\n   * Attaches explain information to a TreeNode, rolling up the corresponding information tags\n   * from any child nodes\n   *\n   * @param node\n   *   The node to attach the explain information to. Typically a SparkPlan\n   * @param exprs\n   *   Child nodes. Information attached in these nodes will be included in the information\n   *   attached to @node\n   * @tparam T\n   *   The type of the TreeNode. Typically SparkPlan, AggregateExpression, or Expression\n   * @return\n   *   The node with information (if any) attached\n   */\n  def withInfo[T <: TreeNode[_]](node: T, exprs: T*): T = {\n    withInfos(node, Set.empty, exprs: _*)\n  }\n\n  /**\n   * Checks whether a TreeNode has any explain information attached\n   */\n  def hasExplainInfo(node: TreeNode[_]): Boolean = {\n    node.getTagValue(CometExplainInfo.EXTENSION_INFO).exists(_.nonEmpty)\n  }\n\n}\n"
  },
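A minimal sketch of a session configured so that the checks above pass: the extension is injected via the standard `spark.sql.extensions` key, and the shuffle manager and off-heap settings mirror `isCometShuffleManagerEnabled` and `isOffHeapEnabled` (values for size and core counts are illustrative):

```scala
import org.apache.spark.sql.SparkSession

val spark = SparkSession
  .builder()
  .master("local[2]")
  .config("spark.sql.extensions", "org.apache.comet.CometSparkSessionExtensions")
  .config(
    "spark.shuffle.manager",
    "org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager")
  .config("spark.memory.offHeap.enabled", "true")
  .config("spark.memory.offHeap.size", "2g")
  .getOrCreate()
```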
  {
    "path": "spark/src/main/scala/org/apache/comet/DataTypeSupport.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.collection.mutable.ListBuffer\n\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.DataTypeSupport.{ARRAY_ELEMENT, MAP_KEY, MAP_VALUE}\n\ntrait DataTypeSupport {\n\n  /**\n   * Checks if this schema is supported by checking if each field in the schema is supported.\n   *\n   * @param schema\n   *   the schema to check the fields of\n   * @return\n   *   true if all fields in the schema are supported\n   */\n  def isSchemaSupported(schema: StructType, fallbackReasons: ListBuffer[String]): Boolean = {\n    schema.fields.forall(f => isTypeSupported(f.dataType, f.name, fallbackReasons))\n  }\n\n  /**\n   * Determine if Comet supports a data type. This method can be overridden by specific operators\n   * as needed.\n   */\n  def isTypeSupported(\n      dt: DataType,\n      name: String,\n      fallbackReasons: ListBuffer[String]): Boolean = {\n\n    dt match {\n      case BooleanType | ByteType | ShortType | IntegerType | LongType | FloatType | DoubleType |\n          BinaryType | StringType | _: DecimalType | DateType | TimestampType |\n          TimestampNTZType =>\n        true\n      case StructType(fields) =>\n        fields.nonEmpty && fields.forall(f =>\n          isTypeSupported(f.dataType, f.name, fallbackReasons))\n      case ArrayType(elementType, _) =>\n        isTypeSupported(elementType, ARRAY_ELEMENT, fallbackReasons)\n      case MapType(keyType, valueType, _) =>\n        isTypeSupported(keyType, MAP_KEY, fallbackReasons) && isTypeSupported(\n          valueType,\n          MAP_VALUE,\n          fallbackReasons)\n      case _ =>\n        fallbackReasons += s\"Unsupported ${name} of type ${dt}\"\n        false\n    }\n  }\n}\n\nobject DataTypeSupport {\n  val ARRAY_ELEMENT = \"array element\"\n  val MAP_KEY = \"map key\"\n  val MAP_VALUE = \"map value\"\n\n  def isComplexType(dt: DataType): Boolean = dt match {\n    case _: StructType | _: ArrayType | _: MapType => true\n    case _ => false\n  }\n\n  def hasTemporalType(t: DataType): Boolean = t match {\n    case DataTypes.DateType | DataTypes.TimestampType | DataTypes.TimestampNTZType =>\n      true\n    case t: StructType => t.exists(f => hasTemporalType(f.dataType))\n    case t: ArrayType => hasTemporalType(t.elementType)\n    case t: MapType => hasTemporalType(t.keyType) || hasTemporalType(t.valueType)\n    case _ => false\n  }\n\n}\n"
  },
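A small usage sketch of the recursive check: `CalendarIntervalType` does not match any supported case, so a map value of that type fails and a reason naming the `map value` path is appended to the buffer:

```scala
import scala.collection.mutable.ListBuffer
import org.apache.spark.sql.types._
import org.apache.comet.DataTypeSupport

val support = new DataTypeSupport {}
val reasons = ListBuffer[String]()
// MapType recurses into key and value; the interval value type is rejected.
val ok = support.isTypeSupported(MapType(StringType, CalendarIntervalType), "col", reasons)
assert(!ok)
assert(reasons.head.startsWith("Unsupported map value of type"))
```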
  {
    "path": "spark/src/main/scala/org/apache/comet/ExtendedExplainInfo.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.collection.mutable\n\nimport org.apache.spark.sql.ExtendedExplainGenerator\nimport org.apache.spark.sql.catalyst.trees.{TreeNode, TreeNodeTag}\nimport org.apache.spark.sql.comet.{CometColumnarToRowExec, CometNativeColumnarToRowExec, CometPlan, CometSparkToColumnarExec}\nimport org.apache.spark.sql.execution.{ColumnarToRowExec, InputAdapter, RowToColumnarExec, SparkPlan, WholeStageCodegenExec}\nimport org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanExec, AQEShuffleReadExec, QueryStageExec}\nimport org.apache.spark.sql.execution.exchange.ReusedExchangeExec\n\nimport org.apache.comet.CometExplainInfo.getActualPlan\n\nclass ExtendedExplainInfo extends ExtendedExplainGenerator {\n\n  override def title: String = \"Comet\"\n\n  def generateExtendedInfo(plan: SparkPlan): String = {\n    CometConf.COMET_EXTENDED_EXPLAIN_FORMAT.get() match {\n      case CometConf.COMET_EXTENDED_EXPLAIN_FORMAT_VERBOSE =>\n        // Generates the extended info in a verbose manner, printing each node along with the\n        // extended information in a tree display.\n        val planStats = new CometCoverageStats()\n        val outString = new StringBuilder()\n        generateTreeString(getActualPlan(plan), 0, Seq(), 0, outString, planStats)\n        s\"${outString.toString()}\\n$planStats\"\n      case CometConf.COMET_EXTENDED_EXPLAIN_FORMAT_FALLBACK =>\n        // Generates the extended info as a list of fallback reasons\n        getFallbackReasons(plan).mkString(\"\\n\").trim\n    }\n  }\n\n  def getFallbackReasons(plan: SparkPlan): Seq[String] = {\n    extensionInfo(plan).toSeq.sorted\n  }\n\n  private[comet] def extensionInfo(node: TreeNode[_]): Set[String] = {\n    var info = mutable.Seq[String]()\n    val sorted = sortup(node)\n    sorted.foreach { p =>\n      val all: Set[String] =\n        getActualPlan(p).getTagValue(CometExplainInfo.EXTENSION_INFO).getOrElse(Set.empty[String])\n      for (s <- all) {\n        info = info :+ s\n      }\n    }\n    info.toSet\n  }\n\n  // get all plan nodes, breadth first traversal, then returned the reversed list so\n  // leaf nodes are first\n  private def sortup(node: TreeNode[_]): mutable.Queue[TreeNode[_]] = {\n    val ordered = new mutable.Queue[TreeNode[_]]()\n    val traversed = mutable.Queue[TreeNode[_]](getActualPlan(node))\n    while (traversed.nonEmpty) {\n      val s = traversed.dequeue()\n      ordered += s\n      if (s.innerChildren.nonEmpty) {\n        s.innerChildren.foreach {\n          case c @ (_: TreeNode[_]) => traversed.enqueue(getActualPlan(c))\n          case _ =>\n        }\n      }\n      if (s.children.nonEmpty) {\n        s.children.foreach {\n          case c @ 
(_: TreeNode[_]) => traversed.enqueue(getActualPlan(c))\n          case _ =>\n        }\n      }\n    }\n    ordered.reverse\n  }\n\n  // Simplified generateTreeString from Spark TreeNode. Appends explain info to the node if any\n  def generateTreeString(\n      node: TreeNode[_],\n      depth: Int,\n      lastChildren: Seq[Boolean],\n      indent: Int,\n      outString: StringBuilder,\n      planStats: CometCoverageStats): Unit = {\n\n    node match {\n      case _: AdaptiveSparkPlanExec | _: InputAdapter | _: QueryStageExec |\n          _: WholeStageCodegenExec | _: ReusedExchangeExec | _: AQEShuffleReadExec =>\n      // ignore\n      case _: RowToColumnarExec | _: ColumnarToRowExec | _: CometColumnarToRowExec |\n          _: CometNativeColumnarToRowExec | _: CometSparkToColumnarExec =>\n        planStats.transitions += 1\n      case _: CometPlan =>\n        planStats.cometOperators += 1\n      case _ =>\n        planStats.sparkOperators += 1\n    }\n\n    outString.append(\"   \" * indent)\n    if (depth > 0) {\n      lastChildren.init.foreach { isLast =>\n        outString.append(if (isLast) \"   \" else \":  \")\n      }\n      outString.append(if (lastChildren.last) \"+- \" else \":- \")\n    }\n\n    val tagValue = node.getTagValue(CometExplainInfo.EXTENSION_INFO)\n    val str = if (tagValue.nonEmpty) {\n      s\" ${node.nodeName} [COMET: ${tagValue.get.mkString(\", \")}]\"\n    } else {\n      node.nodeName\n    }\n    outString.append(str)\n    outString.append(\"\\n\")\n\n    val innerChildrenLocal = node.innerChildren\n    if (innerChildrenLocal.nonEmpty) {\n      innerChildrenLocal.init.foreach {\n        case c @ (_: TreeNode[_]) =>\n          generateTreeString(\n            getActualPlan(c),\n            depth + 2,\n            lastChildren :+ node.children.isEmpty :+ false,\n            indent,\n            outString,\n            planStats)\n        case _ =>\n      }\n      generateTreeString(\n        getActualPlan(innerChildrenLocal.last),\n        depth + 2,\n        lastChildren :+ node.children.isEmpty :+ true,\n        indent,\n        outString,\n        planStats)\n    }\n    if (node.children.nonEmpty) {\n      node.children.init.foreach {\n        case c @ (_: TreeNode[_]) =>\n          generateTreeString(\n            getActualPlan(c),\n            depth + 1,\n            lastChildren :+ false,\n            indent,\n            outString,\n            planStats)\n        case _ =>\n      }\n      node.children.last match {\n        case c @ (_: TreeNode[_]) =>\n          generateTreeString(\n            getActualPlan(c),\n            depth + 1,\n            lastChildren :+ true,\n            indent,\n            outString,\n            planStats)\n        case _ =>\n      }\n    }\n  }\n}\n\nclass CometCoverageStats {\n  var sparkOperators: Int = 0\n  var cometOperators: Int = 0\n  var transitions: Int = 0\n\n  override def toString(): String = {\n    val eligible = sparkOperators + cometOperators\n    val converted =\n      if (eligible == 0) 0.0 else cometOperators.toDouble / eligible * 100.0\n    s\"Comet accelerated $cometOperators out of $eligible \" +\n      s\"eligible operators (${converted.toInt}%). 
\" +\n      s\"Final plan contains $transitions transitions between Spark and Comet.\"\n  }\n}\n\nobject CometCoverageStats {\n\n  /**\n   * Compute coverage stats for a plan without generating explain string.\n   */\n  def forPlan(plan: SparkPlan): CometCoverageStats = {\n    val stats = new CometCoverageStats()\n    val explainInfo = new ExtendedExplainInfo()\n    explainInfo.generateTreeString(\n      CometExplainInfo.getActualPlan(plan),\n      0,\n      Seq(),\n      0,\n      new StringBuilder(),\n      stats)\n    stats\n  }\n}\n\nobject CometExplainInfo {\n  val EXTENSION_INFO = new TreeNodeTag[Set[String]](\"CometExtensionInfo\")\n\n  def getActualPlan(node: TreeNode[_]): TreeNode[_] = {\n    node match {\n      case p: AdaptiveSparkPlanExec => getActualPlan(p.executedPlan)\n      case p: InputAdapter => getActualPlan(p.child)\n      case p: QueryStageExec => getActualPlan(p.plan)\n      case p: WholeStageCodegenExec => getActualPlan(p.child)\n      case p: ReusedExchangeExec => getActualPlan(p.child)\n      case p => p\n    }\n\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/GenerateDocs.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.io.{BufferedOutputStream, BufferedReader, FileOutputStream, FileReader}\n\nimport scala.collection.mutable\nimport scala.collection.mutable.ListBuffer\n\nimport org.apache.spark.sql.catalyst.expressions.Cast\n\nimport org.apache.comet.CometConf.COMET_ONHEAP_MEMORY_OVERHEAD\nimport org.apache.comet.expressions.{CometCast, CometEvalMode}\nimport org.apache.comet.serde.{Compatible, Incompatible, QueryPlanSerde, Unsupported}\n\n/**\n * Utility for generating markdown documentation from the configs.\n *\n * This is invoked when running `mvn clean package -DskipTests`.\n */\nobject GenerateDocs {\n\n  private val publicConfigs: Set[ConfigEntry[_]] = CometConf.allConfs.filter(_.isPublic).toSet\n\n  def main(args: Array[String]): Unit = {\n    val userGuideLocation = args(0)\n    generateConfigReference(s\"$userGuideLocation/configs.md\")\n    generateCompatibilityGuide(s\"$userGuideLocation/compatibility.md\")\n  }\n\n  private def generateConfigReference(filename: String): Unit = {\n    val pattern = \"<!--BEGIN:CONFIG_TABLE\\\\[(.*)]-->\".r\n    val lines = readFile(filename)\n    val w = new BufferedOutputStream(new FileOutputStream(filename))\n    for (line <- lines) {\n      w.write(s\"${line.stripTrailing()}\\n\".getBytes)\n      line match {\n        case pattern(category) =>\n          w.write(\"<!-- prettier-ignore-start -->\\n\".getBytes)\n          w.write(\"| Config | Description | Default Value |\\n\".getBytes)\n          w.write(\"|--------|-------------|---------------|\\n\".getBytes)\n          category match {\n            case \"enable_expr\" =>\n              for (expr <- QueryPlanSerde.exprSerdeMap.keys.map(_.getSimpleName).toList.sorted) {\n                val config = s\"spark.comet.expression.$expr.enabled\"\n                w.write(\n                  s\"| `$config` | Enable Comet acceleration for `$expr` | true |\\n\".getBytes)\n              }\n              w.write(\"<!-- prettier-ignore-end -->\\n\".getBytes)\n            case \"enable_agg_expr\" =>\n              for (expr <- QueryPlanSerde.aggrSerdeMap.keys.map(_.getSimpleName).toList.sorted) {\n                val config = s\"spark.comet.expression.$expr.enabled\"\n                w.write(\n                  s\"| `$config` | Enable Comet acceleration for `$expr` | true |\\n\".getBytes)\n              }\n              w.write(\"<!-- prettier-ignore-end -->\\n\".getBytes)\n            case _ =>\n              val urlPattern = \"\"\"Comet\\s+(Compatibility|Tuning|Tracing)\\s+Guide\\s+\\(\"\"\".r\n              val confs = publicConfigs.filter(_.category == category).toList.sortBy(_.key)\n              for (conf <- confs) {\n                // 
convert links to Markdown\n                val doc =\n                  urlPattern.replaceAllIn(conf.doc.trim, m => s\"[Comet ${m.group(1)} Guide](\")\n                // append env var info if present\n                val docWithEnvVar = conf.envVar match {\n                  case Some(envVarName) =>\n                    s\"$doc It can be overridden by the environment variable `$envVarName`.\"\n                  case None => doc\n                }\n                if (conf.defaultValue.isEmpty) {\n                  w.write(s\"| `${conf.key}` | $docWithEnvVar | |\\n\".getBytes)\n                } else {\n                  val isBytesConf = conf.key == COMET_ONHEAP_MEMORY_OVERHEAD.key\n                  if (isBytesConf) {\n                    val bytes = conf.defaultValue.get.asInstanceOf[Long]\n                    w.write(s\"| `${conf.key}` | $docWithEnvVar | $bytes MiB |\\n\".getBytes)\n                  } else {\n                    val defaultVal = conf.defaultValueString\n                    w.write(s\"| `${conf.key}` | $docWithEnvVar | $defaultVal |\\n\".getBytes)\n                  }\n                }\n              }\n              w.write(\"<!-- prettier-ignore-end -->\\n\".getBytes)\n          }\n        case _ =>\n      }\n    }\n    w.close()\n  }\n\n  private def generateCompatibilityGuide(filename: String): Unit = {\n    val lines = readFile(filename)\n    val w = new BufferedOutputStream(new FileOutputStream(filename))\n    for (line <- lines) {\n      w.write(s\"${line.stripTrailing()}\\n\".getBytes)\n      if (line.trim == \"<!--BEGIN:CAST_LEGACY_TABLE-->\") {\n        writeCastMatrixForMode(w, CometEvalMode.LEGACY)\n      } else if (line.trim == \"<!--BEGIN:CAST_TRY_TABLE-->\") {\n        writeCastMatrixForMode(w, CometEvalMode.TRY)\n      } else if (line.trim == \"<!--BEGIN:CAST_ANSI_TABLE-->\") {\n        writeCastMatrixForMode(w, CometEvalMode.ANSI)\n      }\n    }\n    w.close()\n  }\n\n  private def writeCastMatrixForMode(w: BufferedOutputStream, mode: CometEvalMode.Value): Unit = {\n    val sortedTypes = CometCast.supportedTypes.sortBy(_.typeName)\n    val typeNames = sortedTypes.map(_.typeName.replace(\"(10,2)\", \"\"))\n\n    // Collect annotations for meaningful notes\n    val annotations = mutable.ListBuffer[(String, String, String)]()\n\n    w.write(\"<!-- prettier-ignore-start -->\\n\".getBytes)\n\n    // Write header row\n    w.write(\"| |\".getBytes)\n    for (toTypeName <- typeNames) {\n      w.write(s\" $toTypeName |\".getBytes)\n    }\n    w.write(\"\\n\".getBytes)\n\n    // Write separator row\n    w.write(\"|---|\".getBytes)\n    for (_ <- typeNames) {\n      w.write(\"---|\".getBytes)\n    }\n    w.write(\"\\n\".getBytes)\n\n    // Write data rows\n    for ((fromType, fromTypeName) <- sortedTypes.zip(typeNames)) {\n      w.write(s\"| $fromTypeName |\".getBytes)\n      for ((toType, toTypeName) <- sortedTypes.zip(typeNames)) {\n        val cell = if (fromType == toType) {\n          \"-\"\n        } else if (!Cast.canCast(fromType, toType)) {\n          \"N/A\"\n        } else {\n          val supportLevel = CometCast.isSupported(fromType, toType, None, mode)\n          supportLevel match {\n            case Compatible(notes) =>\n              notes.filter(_.trim.nonEmpty).foreach { note =>\n                annotations += ((fromTypeName, toTypeName, note.trim.replace(\"(10,2)\", \"\")))\n              }\n              \"C\"\n            case Incompatible(notes) =>\n              notes.filter(_.trim.nonEmpty).foreach { note =>\n                
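// record the note so it is rendered in the footnotes below the matrix\n                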
annotations += ((fromTypeName, toTypeName, note.trim.replace(\"(10,2)\", \"\")))\n              }\n              \"I\"\n            case Unsupported(_) =>\n              \"U\"\n          }\n        }\n        w.write(s\" $cell |\".getBytes)\n      }\n      w.write(\"\\n\".getBytes)\n    }\n\n    // Write annotations if any\n    if (annotations.nonEmpty) {\n      w.write(\"\\n**Notes:**\\n\".getBytes)\n      for ((from, to, note) <- annotations.distinct) {\n        w.write(s\"- **$from -> $to**: $note\\n\".getBytes)\n      }\n    }\n\n    w.write(\"<!-- prettier-ignore-end -->\\n\".getBytes)\n  }\n\n  /** Read file into memory */\n  private def readFile(filename: String): Seq[String] = {\n    val r = new BufferedReader(new FileReader(filename))\n    val buffer = new ListBuffer[String]()\n    var line = r.readLine()\n    var skipping = false\n    while (line != null) {\n      if (line.startsWith(\"<!--BEGIN:\")) {\n        buffer += line\n        skipping = true\n      } else if (line.startsWith(\"<!--END:\")) {\n        buffer += line\n        skipping = false\n      } else if (!skipping) {\n        buffer += line\n      }\n      line = r.readLine()\n    }\n    r.close()\n    buffer.toSeq\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/MetricsSupport.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.apache.spark.sql.execution.metric.SQLMetric\n\n/**\n * A trait for Comet operators that support SQL metrics\n */\ntrait MetricsSupport {\n  protected var metrics: Map[String, SQLMetric] = Map.empty\n\n  def getMetrics: Map[String, SQLMetric] = metrics\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/Native.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.nio.ByteBuffer\n\nimport org.apache.spark.CometTaskMemoryManager\nimport org.apache.spark.sql.comet.CometMetricNode\n\nimport org.apache.comet.parquet.CometFileKeyUnwrapper\n\nclass Native extends NativeBase {\n\n  // scalastyle:off\n  /**\n   * Create a native query plan from execution SparkPlan serialized in bytes.\n   * @param id\n   *   The id of the query plan.\n   * @param configMap\n   *   The Java Map object for the configs of native engine.\n   * @param iterators\n   *   the input iterators to the native query plan. It should be the same number as the number of\n   *   scan nodes in the SparkPlan.\n   * @param plan\n   *   the bytes of serialized SparkPlan.\n   * @param metrics\n   *   the native metrics of SparkPlan.\n   * @param metricsUpdateInterval\n   *   the interval in milliseconds to update metrics, if interval is negative, metrics will be\n   *   updated upon task completion.\n   * @param taskMemoryManager\n   *   the task-level memory manager that is responsible for tracking memory usage across JVM and\n   *   native side.\n   * @return\n   *   the address to native query plan.\n   */\n  // scalastyle:off\n  @native def createPlan(\n      id: Long,\n      iterators: Array[Object],\n      plan: Array[Byte],\n      configMapProto: Array[Byte],\n      partitionCount: Int,\n      metrics: CometMetricNode,\n      metricsUpdateInterval: Long,\n      taskMemoryManager: CometTaskMemoryManager,\n      localDirs: Array[String],\n      batchSize: Int,\n      offHeapMode: Boolean,\n      memoryPoolType: String,\n      memoryLimit: Long,\n      memoryLimitPerTask: Long,\n      taskAttemptId: Long,\n      taskCPUs: Long,\n      keyUnwrapper: CometFileKeyUnwrapper): Long\n  // scalastyle:on\n\n  /**\n   * Execute a native query plan based on given input Arrow arrays.\n   *\n   * @param stage\n   *   the stage ID, for informational purposes\n   * @param partition\n   *   the partition ID, for informational purposes\n   * @param plan\n   *   the address to native query plan.\n   * @param arrayAddrs\n   *   the addresses of Arrow Array structures\n   * @param schemaAddrs\n   *   the addresses of Arrow Schema structures\n   * @return\n   *   the number of rows, if -1, it means end of the output.\n   */\n  @native def executePlan(\n      stage: Int,\n      partition: Int,\n      plan: Long,\n      arrayAddrs: Array[Long],\n      schemaAddrs: Array[Long]): Long\n\n  /**\n   * Release and drop the native query plan object and context object.\n   *\n   * @param plan\n   *   the address to native query plan.\n   */\n  @native def releasePlan(plan: Long): Unit\n\n  /**\n   * Used by Comet shuffle external sorter to write 
\n  /**\n   * Used by Comet shuffle external sorter to write sorted records to disk.\n   *\n   * @param addresses\n   *   the array of addresses of Spark unsafe rows.\n   * @param rowSizes\n   *   the row sizes of Spark unsafe rows.\n   * @param datatypes\n   *   the datatypes of fields in Spark unsafe rows.\n   * @param file\n   *   the file path to write to.\n   * @param preferDictionaryRatio\n   *   the threshold ratio of total values to distinct values in a string column; if the actual\n   *   ratio is larger than this value, dictionary encoding will be used when writing columns of\n   *   string type.\n   * @param batchSize\n   *   the batch size on the native side to buffer outputs during the row to columnar conversion\n   *   before writing them out to disk.\n   * @param checksumEnabled\n   *   whether to compute the checksum of the written file.\n   * @param checksumAlgo\n   *   the checksum algorithm to use. 0 for CRC32, 1 for Adler32.\n   * @param currentChecksum\n   *   the current checksum of the file. As the checksum is computed incrementally, this is used\n   *   to resume the checksum computation for previously written data.\n   * @param compressionCodec\n   *   the compression codec\n   * @param compressionLevel\n   *   the compression level\n   * @return\n   *   [the number of bytes written to disk, the checksum]\n   */\n  // scalastyle:off\n  @native def writeSortedFileNative(\n      addresses: Array[Long],\n      rowSizes: Array[Int],\n      datatypes: Array[Array[Byte]],\n      file: String,\n      preferDictionaryRatio: Double,\n      batchSize: Int,\n      checksumEnabled: Boolean,\n      checksumAlgo: Int,\n      currentChecksum: Long,\n      compressionCodec: String,\n      compressionLevel: Int,\n      tracingEnabled: Boolean): Array[Long]\n  // scalastyle:on\n\n  /**\n   * Sorts partition ids of Spark unsafe rows in place. Used by Comet shuffle external sorter.\n   *\n   * @param addr\n   *   the address of the array of compacted partition ids.\n   * @param size\n   *   the size of the array.\n   */\n  @native def sortRowPartitionsNative(addr: Long, size: Long, tracingEnabled: Boolean): Unit\n\n  /**\n   * Decompress and decode a native shuffle block.\n   * @param shuffleBlock\n   *   the encoded and compressed shuffle block.\n   * @param length\n   *   the limit of the byte buffer.\n   * @param arrayAddrs\n   *   the addresses of Arrow Array structures\n   * @param schemaAddrs\n   *   the addresses of Arrow Schema structures\n   */\n  @native def decodeShuffleBlock(\n      shuffleBlock: ByteBuffer,\n      length: Int,\n      arrayAddrs: Array[Long],\n      schemaAddrs: Array[Long],\n      tracingEnabled: Boolean): Long\n\n  /**\n   * Log the beginning of an event.\n   * @param name\n   *   The name of the event.\n   */\n  @native def traceBegin(name: String): Unit\n\n  /**\n   * Log the end of an event.\n   * @param name\n   *   The name of the event.\n   */\n  @native def traceEnd(name: String): Unit\n\n  /**\n   * Log the amount of memory currently in use.\n   *\n   * @param name\n   *   Type of memory e.g. 
jvm, native, off-heap pool\n   * @param memoryUsageBytes\n   *   Number of bytes in use\n   */\n  @native def logMemoryUsage(name: String, memoryUsageBytes: Long): Unit\n\n  /**\n   * Returns the Rust thread ID for the current thread.\n   */\n  @native def getRustThreadId(): Long\n\n  // Native Columnar to Row conversion methods\n\n  /**\n   * Initialize a native columnar to row converter.\n   *\n   * @param schema\n   *   Array of serialized data types (as byte arrays) for each column in the schema.\n   * @param batchSize\n   *   The maximum number of rows that will be converted in a single batch. Used to pre-allocate\n   *   the output buffer.\n   * @return\n   *   A handle to the native converter context. This handle must be passed to subsequent convert\n   *   and close calls.\n   */\n  @native def columnarToRowInit(schema: Array[Array[Byte]], batchSize: Int): Long\n\n  /**\n   * Convert Arrow columnar data to Spark UnsafeRow format.\n   *\n   * @param c2rHandle\n   *   The handle returned by columnarToRowInit.\n   * @param arrayAddrs\n   *   The addresses of Arrow Array structures for each column.\n   * @param schemaAddrs\n   *   The addresses of Arrow Schema structures for each column.\n   * @param numRows\n   *   The number of rows to convert.\n   * @return\n   *   A NativeColumnarToRowInfo containing the memory address of the row buffer and metadata\n   *   (offsets and lengths) for each row.\n   */\n  @native def columnarToRowConvert(\n      c2rHandle: Long,\n      arrayAddrs: Array[Long],\n      schemaAddrs: Array[Long],\n      numRows: Int): NativeColumnarToRowInfo\n\n  /**\n   * Close and release the native columnar to row converter.\n   *\n   * @param c2rHandle\n   *   The handle returned by columnarToRowInit.\n   */\n  @native def columnarToRowClose(c2rHandle: Long): Unit\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/NativeColumnarToRowConverter.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.UnsafeRow\nimport org.apache.spark.sql.types.StructType\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport org.apache.comet.serde.QueryPlanSerde\nimport org.apache.comet.vector.NativeUtil\n\n/**\n * Native converter that converts Arrow columnar data to Spark UnsafeRow format.\n *\n * This converter maintains a native handle that holds the conversion context and output buffer.\n * The buffer is reused across conversions to minimize allocations.\n *\n * Memory Management:\n *   - The native side owns the output buffer\n *   - UnsafeRow objects returned by convert() point directly to native memory (zero-copy)\n *   - The buffer is valid until the next convert() call or close()\n *   - Always call close() when done to release native resources\n *\n * @param schema\n *   The schema of the data to convert\n * @param batchSize\n *   Maximum number of rows per batch (used for buffer pre-allocation)\n */\nclass NativeColumnarToRowConverter(schema: StructType, batchSize: Int) extends AutoCloseable {\n\n  private val nativeLib = new Native()\n  private val nativeUtil = new NativeUtil()\n\n  // Serialize the schema for native initialization\n  private val serializedSchema: Array[Array[Byte]] = schema.fields.map { field =>\n    QueryPlanSerde.serializeDataType(field.dataType) match {\n      case Some(dataType) => dataType.toByteArray\n      case None =>\n        throw new UnsupportedOperationException(\n          s\"Data type ${field.dataType} is not supported for native columnar to row conversion\")\n    }\n  }\n\n  // Initialize native context - handle is 0 if initialization failed\n  private var c2rHandle: Long = nativeLib.columnarToRowInit(serializedSchema, batchSize)\n\n  // Reusable UnsafeRow for iteration\n  private val unsafeRow = new UnsafeRow(schema.fields.length)\n\n  /**\n   * Converts a ColumnarBatch to an iterator of InternalRows.\n   *\n   * The returned iterator yields UnsafeRow objects that point directly to native memory. 
\n   *\n   * @param batch\n   *   The columnar batch to convert\n   * @return\n   *   An iterator of InternalRows\n   */\n  def convert(batch: ColumnarBatch): Iterator[InternalRow] = {\n    if (c2rHandle == 0) {\n      throw new IllegalStateException(\"NativeColumnarToRowConverter has been closed\")\n    }\n\n    val numRows = batch.numRows()\n    if (numRows == 0) {\n      return Iterator.empty\n    }\n\n    // Export the batch to Arrow FFI and get memory addresses\n    val (arrayAddrs, schemaAddrs, exportedNumRows) = nativeUtil.exportBatchToAddresses(batch)\n\n    // Call native conversion\n    val info = nativeLib.columnarToRowConvert(c2rHandle, arrayAddrs, schemaAddrs, exportedNumRows)\n\n    // Return an iterator that copies each row out of the native buffer\n    new NativeRowIterator(info, unsafeRow)\n  }\n\n  /**\n   * Checks if this converter is still open and usable.\n   */\n  def isOpen: Boolean = c2rHandle != 0\n\n  /**\n   * Closes the converter and releases native resources.\n   */\n  override def close(): Unit = {\n    if (c2rHandle != 0) {\n      nativeLib.columnarToRowClose(c2rHandle)\n      c2rHandle = 0\n    }\n    nativeUtil.close()\n  }\n}\n\n/**\n * Iterator that yields UnsafeRows read from native memory.\n *\n * The reusable UnsafeRow is pointed at each row in the native buffer and then copied, so the\n * rows returned by next() do not depend on the native buffer remaining valid.\n */\nprivate class NativeRowIterator(info: NativeColumnarToRowInfo, unsafeRow: UnsafeRow)\n    extends Iterator[InternalRow] {\n\n  private var currentIdx = 0\n  private val numRows = info.numRows()\n\n  override def hasNext: Boolean = currentIdx < numRows\n\n  override def next(): InternalRow = {\n    if (!hasNext) {\n      throw new NoSuchElementException(\"No more rows\")\n    }\n\n    // Point the UnsafeRow to the native memory\n    val rowAddress = info.memoryAddress + info.offsets(currentIdx)\n    val rowSize = info.lengths(currentIdx)\n\n    unsafeRow.pointTo(null, rowAddress, rowSize)\n    currentIdx += 1\n\n    // Copy to the JVM heap so the returned row survives buffer reuse\n    unsafeRow.copy()\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/SparkErrorConverter.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.json4s._\nimport org.json4s.jackson.JsonMethods._\n\nimport org.apache.spark.{QueryContext, SparkException}\nimport org.apache.spark.sql.catalyst.trees.SQLQueryContext\nimport org.apache.spark.sql.comet.shims.ShimSparkErrorConverter\n\nimport com.fasterxml.jackson.core.JsonParseException\n\nimport org.apache.comet.exceptions.CometQueryExecutionException\n\n/**\n * Converts CometQueryExecutionException from native code (with JSON payload) to appropriate Spark\n * QueryExecutionErrors.* exceptions\n *\n * Parses the JSON-encoded error information from native execution and delegates to the\n * version-specific ShimSparkErrorConverter trait for conversion to proper Spark exception types.\n *\n * The ShimSparkErrorConverter handles all error cases using the correct QueryExecutionErrors API\n * for each Spark version.\n */\nobject SparkErrorConverter extends ShimSparkErrorConverter {\n\n  implicit val formats: DefaultFormats.type = DefaultFormats\n  private val UNKNOWN_ERROR = \"UNKNOWN_ERROR_TEMP_COMET\"\n\n  private case class QueryContextJson(\n      sqlText: String,\n      startIndex: Int,\n      stopIndex: Int,\n      objectType: Option[String],\n      objectName: Option[String],\n      line: Int,\n      startPosition: Int)\n\n  private case class ErrorJson(\n      errorType: String,\n      errorClass: Option[String],\n      params: Option[Map[String, Any]],\n      context: Option[QueryContextJson],\n      summary: Option[String])\n\n  /**\n   * Parse JSON from exception and convert to appropriate Spark exception.\n   *\n   * @param e\n   *   the CometQueryExecutionException with JSON message\n   * @return\n   *   the corresponding Spark exception, or the original exception if parsing fails\n   */\n  def convertToSparkException(e: CometQueryExecutionException): Throwable = {\n    try {\n      if (!e.isJsonMessage()) {\n        // Not JSON, return original exception\n        return e\n      }\n    } catch {\n      // Only catch JSON parsing/mapping exceptions - let conversion exceptions propagate\n      case _: MappingException | _: JsonParseException =>\n        return e\n    }\n\n    val json = parse(e.getMessage)\n    val errorJson = json.extract[ErrorJson]\n    val params = errorJson.params.getOrElse(Map.empty)\n    val errorClass =\n      errorJson.errorClass.map(_.trim).filter(_.nonEmpty).getOrElse(UNKNOWN_ERROR)\n\n    // Build Spark SQLQueryContext if context is present (Not all errors carry the query context)\n    val sparkContext: Array[QueryContext] = errorJson.context match {\n      case Some(ctx) =>\n        Array(\n          SQLQueryContext(\n            sqlText = Some(ctx.sqlText),\n            line = 
Some(ctx.line),\n            startPosition = Some(ctx.startPosition),\n            originStartIndex = Some(ctx.startIndex),\n            originStopIndex = Some(ctx.stopIndex),\n            originObjectType = ctx.objectType,\n            originObjectName = ctx.objectName))\n      case None => Array.empty[QueryContext] // No context\n    }\n\n    val summary: String = errorJson.summary.getOrElse(\"\")\n\n    // Delegate to version-specific shim - let conversion exceptions propagate\n    val optEx = convertErrorType(errorJson.errorType, errorClass, params, sparkContext, summary)\n    optEx match {\n      case Some(exception) =>\n        // successfully converted - return the proper typed exception\n        exception\n\n      case None =>\n        // Unknown error type - fallback to generic SparkException\n        val msgParams = paramsToStringMap(params)\n        if (errorClass.equals(UNKNOWN_ERROR)) {\n          new SparkException(msgParams.mkString(\", \"))\n        } else {\n          new SparkException(errorClass = errorClass, messageParameters = msgParams, cause = null)\n        }\n    }\n  }\n\n  /**\n   * Convert parameter map to string-keyed map for SparkException.\n   */\n  private[comet] def paramsToStringMap(params: Map[String, Any]): Map[String, String] = {\n    params.map { case (k, v) => (k, v.toString) }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/Tracing.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nobject Tracing {\n\n  private val nativeLib = new Native\n\n  def withTrace[T](label: String, tracingEnabled: Boolean, fun: => T): T = {\n    try {\n      if (tracingEnabled) {\n        nativeLib.traceBegin(label)\n      }\n      fun\n    } finally {\n      if (tracingEnabled) {\n        nativeLib.traceEnd(label)\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/expressions/CometCast.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.expressions\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, Cast, Expression, Literal}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{ArrayType, DataType, DataTypes, DecimalType, NullType, StructType, TimestampType}\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometSparkSessionExtensions.{isSpark40Plus, withInfo}\nimport org.apache.comet.serde.{CometExpressionSerde, Compatible, ExprOuterClass, Incompatible, SupportLevel, Unsupported}\nimport org.apache.comet.serde.ExprOuterClass.Expr\nimport org.apache.comet.serde.QueryPlanSerde.{evalModeToProto, exprToProtoInternal, serializeDataType}\nimport org.apache.comet.shims.CometExprShim\n\nobject CometCast extends CometExpressionSerde[Cast] with CometExprShim {\n\n  def supportedTypes: Seq[DataType] =\n    Seq(\n      DataTypes.BooleanType,\n      DataTypes.ByteType,\n      DataTypes.ShortType,\n      DataTypes.IntegerType,\n      DataTypes.LongType,\n      DataTypes.FloatType,\n      DataTypes.DoubleType,\n      DataTypes.createDecimalType(10, 2),\n      DataTypes.StringType,\n      DataTypes.BinaryType,\n      DataTypes.DateType,\n      DataTypes.TimestampType)\n  // TODO add DataTypes.TimestampNTZType for Spark 3.4 and later\n  // https://github.com/apache/datafusion-comet/issues/378\n\n  override def getSupportLevel(cast: Cast): SupportLevel = {\n    if (cast.child.isInstanceOf[Literal]) {\n      // casting from literal is compatible because we delegate to Spark\n      // further data type checks will be performed by CometLiteral\n      Compatible()\n    } else {\n      isSupported(cast.child.dataType, cast.dataType, cast.timeZoneId, evalMode(cast))\n    }\n  }\n\n  override def convert(\n      cast: Cast,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val cometEvalMode = evalMode(cast)\n    cast.child match {\n      case _: Literal =>\n        exprToProtoInternal(Literal.create(cast.eval(), cast.dataType), inputs, binding)\n      case _ =>\n        if (isAlwaysCastToNull(cast.child.dataType, cast.dataType, cometEvalMode)) {\n          exprToProtoInternal(Literal.create(null, cast.dataType), inputs, binding)\n        } else {\n          val childExpr = exprToProtoInternal(cast.child, inputs, binding)\n          if (childExpr.isDefined) {\n            castToProto(cast, cast.timeZoneId, cast.dataType, childExpr.get, cometEvalMode)\n          } else {\n            withInfo(cast, cast.child)\n            None\n          }\n        }\n    }\n  }\n\n//  Some casts like date -> int/byte / long are always null. 
Terminate early in planning\n  private def isAlwaysCastToNull(\n      fromType: DataType,\n      toType: DataType,\n      evalMode: CometEvalMode.Value): Boolean = {\n    (fromType, toType) match {\n      case (\n            DataTypes.DateType,\n            DataTypes.BooleanType | DataTypes.ByteType | DataTypes.ShortType |\n            DataTypes.IntegerType | DataTypes.LongType | DataTypes.FloatType |\n            DataTypes.DoubleType | _: DecimalType) if evalMode == CometEvalMode.LEGACY =>\n        true\n      case _ => false\n    }\n  }\n\n  /**\n   * Wrap an already serialized expression in a cast.\n   */\n  def castToProto(\n      expr: Expression,\n      timeZoneId: Option[String],\n      dt: DataType,\n      childExpr: Expr,\n      evalMode: CometEvalMode.Value): Option[Expr] = {\n    serializeDataType(dt) match {\n      case Some(dataType) =>\n        val castBuilder = ExprOuterClass.Cast.newBuilder()\n        castBuilder.setChild(childExpr)\n        castBuilder.setDatatype(dataType)\n        castBuilder.setEvalMode(evalModeToProto(evalMode))\n        castBuilder.setAllowIncompat(\n          SQLConf.get\n            .getConfString(CometConf.getExprAllowIncompatConfigKey(classOf[Cast]), \"false\")\n            .toBoolean)\n        castBuilder.setTimezone(timeZoneId.getOrElse(\"UTC\"))\n        castBuilder.setIsSpark4Plus(isSpark40Plus)\n        Some(\n          ExprOuterClass.Expr\n            .newBuilder()\n            .setCast(castBuilder)\n            .build())\n      case _ =>\n        withInfo(expr, s\"Unsupported datatype in castToProto: $dt\")\n        None\n    }\n  }\n\n  def isSupported(\n      fromType: DataType,\n      toType: DataType,\n      timeZoneId: Option[String],\n      evalMode: CometEvalMode.Value): SupportLevel = {\n\n    if (fromType == toType) {\n      return Compatible()\n    }\n\n    (fromType, toType) match {\n      case (dt: ArrayType, _: ArrayType) if dt.elementType == NullType => Compatible()\n      case (ArrayType(DataTypes.DateType, _), ArrayType(toElementType, _))\n          if toElementType != DataTypes.IntegerType && toElementType != DataTypes.StringType =>\n        unsupported(fromType, toType)\n      case (dt: ArrayType, DataTypes.StringType) if dt.elementType == DataTypes.BinaryType =>\n        Incompatible()\n      case (dt: ArrayType, DataTypes.StringType) =>\n        isSupported(dt.elementType, DataTypes.StringType, timeZoneId, evalMode)\n      case (dt: ArrayType, dt1: ArrayType) =>\n        isSupported(dt.elementType, dt1.elementType, timeZoneId, evalMode)\n      case (dt: DataType, _) if dt.typeName == \"timestamp_ntz\" =>\n        // https://github.com/apache/datafusion-comet/issues/378\n        // https://github.com/apache/datafusion-comet/issues/3179\n        toType match {\n          case DataTypes.TimestampType | DataTypes.DateType | DataTypes.StringType =>\n            Incompatible()\n          case _ =>\n            unsupported(fromType, toType)\n        }\n      case (_: DecimalType, _: DecimalType) =>\n        Compatible()\n      case (DataTypes.StringType, _) =>\n        canCastFromString(toType, timeZoneId, evalMode)\n      case (_, DataTypes.StringType) =>\n        canCastToString(fromType, timeZoneId, evalMode)\n      case (DataTypes.TimestampType, _) =>\n        canCastFromTimestamp(toType)\n      case (_: DecimalType, _) =>\n        canCastFromDecimal(toType)\n      case (DataTypes.BooleanType, _) =>\n        canCastFromBoolean(toType, evalMode)\n      case (DataTypes.ByteType, _) =>\n        canCastFromByte(toType, 
evalMode)\n      case (DataTypes.ShortType, _) =>\n        canCastFromShort(toType, evalMode)\n      case (DataTypes.IntegerType, _) =>\n        canCastFromInt(toType, evalMode)\n      case (DataTypes.LongType, _) =>\n        canCastFromLong(toType, evalMode)\n      case (DataTypes.FloatType, _) =>\n        canCastFromFloat(toType)\n      case (DataTypes.DoubleType, _) =>\n        canCastFromDouble(toType)\n      case (from_struct: StructType, to_struct: StructType) =>\n        from_struct.fields.zip(to_struct.fields).foreach { case (a, b) =>\n          isSupported(a.dataType, b.dataType, timeZoneId, evalMode) match {\n            case Compatible(_) =>\n            // all good\n            case other =>\n              return other\n          }\n        }\n        Compatible()\n      case (DataTypes.DateType, toType) => canCastFromDate(toType, evalMode)\n      case _ => unsupported(fromType, toType)\n    }\n  }\n\n  private def canCastFromString(\n      toType: DataType,\n      timeZoneId: Option[String],\n      evalMode: CometEvalMode.Value): SupportLevel = {\n    toType match {\n      case DataTypes.BooleanType =>\n        Compatible()\n      case DataTypes.ByteType | DataTypes.ShortType | DataTypes.IntegerType |\n          DataTypes.LongType =>\n        Compatible()\n      case DataTypes.BinaryType =>\n        Compatible()\n      case DataTypes.FloatType | DataTypes.DoubleType =>\n        Compatible()\n      case _: DecimalType =>\n        Compatible()\n      case DataTypes.DateType =>\n        // https://github.com/apache/datafusion-comet/issues/327\n        Compatible(Some(\"Only supports years between 262143 BC and 262142 AD\"))\n      case DataTypes.TimestampType =>\n        Compatible()\n      case _ =>\n        unsupported(DataTypes.StringType, toType)\n    }\n  }\n\n  private def canCastToString(\n      fromType: DataType,\n      timeZoneId: Option[String],\n      evalMode: CometEvalMode.Value): SupportLevel = {\n    fromType match {\n      case DataTypes.BooleanType => Compatible()\n      case DataTypes.ByteType | DataTypes.ShortType | DataTypes.IntegerType |\n          DataTypes.LongType =>\n        Compatible()\n      case DataTypes.DateType => Compatible()\n      case DataTypes.TimestampType => Compatible()\n      case DataTypes.FloatType | DataTypes.DoubleType =>\n        Compatible(\n          Some(\n            \"There can be differences in precision. \" +\n              \"For example, the input \\\"1.4E-45\\\" will produce 1.0E-45 \" +\n              \"instead of 1.4E-45\"))\n      case d: DecimalType if d.scale < 0 =>\n        // Negative-scale decimals require spark.sql.legacy.allowNegativeScaleOfDecimal=true.\n        // When that config is enabled, Spark formats them using Java BigDecimal.toString()\n        // which produces scientific notation (e.g. \"1.23E+4\"). 
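So unscaled 123 at scale -2,\n        // i.e. the value 12300, prints as \"1.23E+4\". 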
Comet matches this behavior.\n        // When the config is disabled, negative-scale decimals cannot be created in Spark,\n        // so we mark this as incompatible to avoid native execution on unexpected inputs.\n        val allowNegativeScale = SQLConf.get\n          .getConfString(\"spark.sql.legacy.allowNegativeScaleOfDecimal\", \"false\")\n          .toBoolean\n        if (allowNegativeScale) Compatible() else Incompatible()\n      case _: DecimalType =>\n        // Compatible across all eval modes: LEGACY uses cast_decimal128_to_utf8 which\n        // replicates Java BigDecimal.toString() (scientific notation when adj_exp < -6);\n        // TRY and ANSI fall through to DataFusion's plain-notation cast, which matches Spark.\n        Compatible()\n      case DataTypes.BinaryType =>\n        Compatible()\n      case StructType(fields) =>\n        for (field <- fields) {\n          isSupported(field.dataType, DataTypes.StringType, timeZoneId, evalMode) match {\n            case s: Incompatible =>\n              return s\n            case u: Unsupported =>\n              return u\n            case _ =>\n          }\n        }\n        Compatible()\n      case _ => unsupported(fromType, DataTypes.StringType)\n    }\n  }\n\n  private def canCastFromTimestamp(toType: DataType): SupportLevel = {\n    toType match {\n      case DataTypes.BooleanType | DataTypes.ByteType | DataTypes.ShortType |\n          DataTypes.IntegerType =>\n        // https://github.com/apache/datafusion-comet/issues/352\n        // this seems like an edge case that isn't important for us to support\n        unsupported(DataTypes.TimestampType, toType)\n      case DataTypes.LongType =>\n        // https://github.com/apache/datafusion-comet/issues/352\n        Compatible()\n      case DataTypes.StringType => Compatible()\n      case DataTypes.DateType => Compatible()\n      case _ => unsupported(DataTypes.TimestampType, toType)\n    }\n  }\n\n  private def canCastFromBoolean(toType: DataType, evalMode: CometEvalMode.Value): SupportLevel =\n    toType match {\n      case DataTypes.ByteType | DataTypes.ShortType | DataTypes.IntegerType | DataTypes.LongType |\n          DataTypes.FloatType | DataTypes.DoubleType | _: DecimalType =>\n        Compatible()\n      case _: TimestampType if evalMode == CometEvalMode.LEGACY =>\n        Compatible()\n      case _ => unsupported(DataTypes.BooleanType, toType)\n    }\n\n  private def canCastFromByte(toType: DataType, evalMode: CometEvalMode.Value): SupportLevel =\n    toType match {\n      case DataTypes.BooleanType =>\n        Compatible()\n      case DataTypes.ShortType | DataTypes.IntegerType | DataTypes.LongType =>\n        Compatible()\n      case DataTypes.FloatType | DataTypes.DoubleType | _: DecimalType =>\n        Compatible()\n      case DataTypes.BinaryType if (evalMode == CometEvalMode.LEGACY) =>\n        Compatible()\n      case DataTypes.TimestampType =>\n        Compatible()\n      case _ =>\n        unsupported(DataTypes.ByteType, toType)\n    }\n\n  private def canCastFromShort(toType: DataType, evalMode: CometEvalMode.Value): SupportLevel =\n    toType match {\n      case DataTypes.BooleanType =>\n        Compatible()\n      case DataTypes.ByteType | DataTypes.IntegerType | DataTypes.LongType =>\n        Compatible()\n      case DataTypes.FloatType | DataTypes.DoubleType | _: DecimalType =>\n        Compatible()\n      case DataTypes.BinaryType if (evalMode == CometEvalMode.LEGACY) =>\n        Compatible()\n      case DataTypes.TimestampType =>\n        
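// Spark treats the integral input as seconds since the epoch here\n        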
Compatible()\n      case _ =>\n        unsupported(DataTypes.ShortType, toType)\n    }\n\n  private def canCastFromInt(toType: DataType, evalMode: CometEvalMode.Value): SupportLevel =\n    toType match {\n      case DataTypes.BooleanType =>\n        Compatible()\n      case DataTypes.ByteType | DataTypes.ShortType | DataTypes.LongType =>\n        Compatible()\n      case DataTypes.FloatType | DataTypes.DoubleType =>\n        Compatible()\n      case _: DecimalType =>\n        Compatible()\n      case DataTypes.BinaryType if (evalMode == CometEvalMode.LEGACY) => Compatible()\n      case DataTypes.TimestampType =>\n        Compatible()\n      case _ =>\n        unsupported(DataTypes.IntegerType, toType)\n    }\n\n  private def canCastFromLong(toType: DataType, evalMode: CometEvalMode.Value): SupportLevel =\n    toType match {\n      case DataTypes.BooleanType =>\n        Compatible()\n      case DataTypes.ByteType | DataTypes.ShortType | DataTypes.IntegerType =>\n        Compatible()\n      case DataTypes.FloatType | DataTypes.DoubleType =>\n        Compatible()\n      case _: DecimalType =>\n        Compatible()\n      case DataTypes.BinaryType if (evalMode == CometEvalMode.LEGACY) => Compatible()\n      case DataTypes.TimestampType =>\n        Compatible()\n      case _ =>\n        unsupported(DataTypes.LongType, toType)\n    }\n\n  private def canCastFromFloat(toType: DataType): SupportLevel = toType match {\n    case DataTypes.BooleanType | DataTypes.DoubleType | DataTypes.ByteType | DataTypes.ShortType |\n        DataTypes.IntegerType | DataTypes.LongType | DataTypes.TimestampType =>\n      Compatible()\n    case _: DecimalType =>\n      // https://github.com/apache/datafusion-comet/issues/1371\n      Incompatible(Some(\"There can be rounding differences\"))\n    case _ =>\n      unsupported(DataTypes.FloatType, toType)\n  }\n\n  private def canCastFromDouble(toType: DataType): SupportLevel = toType match {\n    case DataTypes.BooleanType | DataTypes.FloatType | DataTypes.ByteType | DataTypes.ShortType |\n        DataTypes.IntegerType | DataTypes.LongType | DataTypes.TimestampType =>\n      Compatible()\n    case _: DecimalType =>\n      // https://github.com/apache/datafusion-comet/issues/1371\n      Incompatible(Some(\"There can be rounding differences\"))\n    case _ => unsupported(DataTypes.DoubleType, toType)\n  }\n\n  private def canCastFromDecimal(toType: DataType): SupportLevel = toType match {\n    case DataTypes.FloatType | DataTypes.DoubleType | DataTypes.ByteType | DataTypes.ShortType |\n        DataTypes.IntegerType | DataTypes.LongType | DataTypes.BooleanType |\n        DataTypes.TimestampType =>\n      Compatible()\n    case _ => Unsupported(Some(s\"Cast from DecimalType to $toType is not supported\"))\n  }\n\n  private def canCastFromDate(toType: DataType, evalMode: CometEvalMode.Value): SupportLevel =\n    toType match {\n      case DataTypes.TimestampType =>\n        Compatible()\n      case DataTypes.BooleanType | DataTypes.ByteType | DataTypes.ShortType |\n          DataTypes.IntegerType | DataTypes.LongType | DataTypes.FloatType |\n          DataTypes.DoubleType | _: DecimalType if evalMode == CometEvalMode.LEGACY =>\n        Compatible()\n      case _ => Unsupported(Some(s\"Cast from DateType to $toType is not supported\"))\n    }\n\n  private def unsupported(fromType: DataType, toType: DataType): Unsupported = {\n    Unsupported(Some(s\"Cast from $fromType to $toType is not supported\"))\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/expressions/CometEvalMode.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.expressions\n\n/**\n * We cannot reference Spark's EvalMode directly because the package is different between Spark\n * versions, so we copy it here.\n *\n * Expression evaluation modes.\n *   - LEGACY: the default evaluation mode, which is compliant to Hive SQL.\n *   - ANSI: a evaluation mode which is compliant to ANSI SQL standard.\n *   - TRY: a evaluation mode for `try_*` functions. It is identical to ANSI evaluation mode\n *     except for returning null result on errors.\n */\nobject CometEvalMode extends Enumeration {\n  val LEGACY, ANSI, TRY = Value\n\n  def fromBoolean(ansiEnabled: Boolean): Value = if (ansiEnabled) {\n    ANSI\n  } else {\n    LEGACY\n  }\n\n  def fromString(str: String): CometEvalMode.Value = CometEvalMode.withName(str)\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/expressions/RegExp.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.expressions\n\nobject RegExp {\n\n  /** Determine whether the regexp pattern is supported natively and compatible with Spark */\n  def isSupportedPattern(pattern: String): Boolean = {\n    // this is a placeholder for implementing logic to determine if the pattern\n    // is known to be compatible with Spark, so that we can enable regexp automatically\n    // for common cases and fallback to Spark for more complex cases\n    false\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/iceberg/IcebergReflection.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.iceberg\n\nimport org.apache.spark.internal.Logging\n\n/**\n * Shared reflection utilities for Iceberg operations.\n *\n * This object provides common reflection methods used across Comet for interacting with Iceberg\n * classes. These are needed because many Iceberg methods are protected or package-private.\n */\nobject IcebergReflection extends Logging {\n\n  /**\n   * Iceberg class names used throughout Comet.\n   */\n  object ClassNames {\n    val CONTENT_SCAN_TASK = \"org.apache.iceberg.ContentScanTask\"\n    val FILE_SCAN_TASK = \"org.apache.iceberg.FileScanTask\"\n    val CONTENT_FILE = \"org.apache.iceberg.ContentFile\"\n    val STRUCT_LIKE = \"org.apache.iceberg.StructLike\"\n    val PARTITION_SCAN_TASK = \"org.apache.iceberg.PartitionScanTask\"\n    val DELETE_FILE = \"org.apache.iceberg.DeleteFile\"\n    val LITERAL = \"org.apache.iceberg.expressions.Literal\"\n    val SCHEMA_PARSER = \"org.apache.iceberg.SchemaParser\"\n    val SCHEMA = \"org.apache.iceberg.Schema\"\n    val PARTITION_SPEC_PARSER = \"org.apache.iceberg.PartitionSpecParser\"\n    val PARTITION_SPEC = \"org.apache.iceberg.PartitionSpec\"\n    val PARTITION_FIELD = \"org.apache.iceberg.PartitionField\"\n    val UNBOUND_PREDICATE = \"org.apache.iceberg.expressions.UnboundPredicate\"\n  }\n\n  /**\n   * Iceberg content types.\n   */\n  object ContentTypes {\n    val POSITION_DELETES = \"POSITION_DELETES\"\n    val EQUALITY_DELETES = \"EQUALITY_DELETES\"\n  }\n\n  /**\n   * Iceberg file formats.\n   */\n  object FileFormats {\n    val PARQUET = \"PARQUET\"\n  }\n\n  /**\n   * Iceberg transform types.\n   */\n  object Transforms {\n    val IDENTITY = \"identity\"\n  }\n\n  /**\n   * Iceberg type names.\n   */\n  object TypeNames {\n    val UNKNOWN = \"unknown\"\n  }\n\n  /**\n   * Loads a class using the thread context classloader first, then falls back to the system\n   * classloader.\n   *\n   * @param className\n   *   Fully qualified class name to load\n   * @return\n   *   The loaded Class object\n   */\n  def loadClass(className: String): Class[_] = {\n    val classLoader = Thread.currentThread().getContextClassLoader\n    if (classLoader != null) {\n      // scalastyle:off classforname\n      Class.forName(className, true, classLoader)\n      // scalastyle:on classforname\n    } else {\n      // Fallback to default classloader if context classloader is null\n      // scalastyle:off classforname\n      Class.forName(className)\n      // scalastyle:on classforname\n    }\n  }\n\n  /**\n   * Searches through class hierarchy to find a method (including protected methods).\n   */\n  def findMethodInHierarchy(\n      clazz: Class[_],\n      methodName: String): 
Option[java.lang.reflect.Method] = {\n    var current: Class[_] = clazz\n    while (current != null) {\n      try {\n        val method = current.getDeclaredMethod(methodName)\n        method.setAccessible(true)\n        return Some(method)\n      } catch {\n        case _: NoSuchMethodException => current = current.getSuperclass\n      }\n    }\n    None\n  }\n\n  /**\n   * Extracts file location from Iceberg ContentFile, handling both location() and path().\n   *\n   * Different Iceberg versions expose file paths differently:\n   *   - Newer versions: location() returns String\n   *   - Older versions: path() returns CharSequence\n   */\n  def extractFileLocation(contentFileClass: Class[_], file: Any): Option[String] = {\n    try {\n      val locationMethod = contentFileClass.getMethod(\"location\")\n      Some(locationMethod.invoke(file).asInstanceOf[String])\n    } catch {\n      case _: NoSuchMethodException =>\n        try {\n          val pathMethod = contentFileClass.getMethod(\"path\")\n          Some(pathMethod.invoke(file).asInstanceOf[CharSequence].toString)\n        } catch {\n          case _: Exception => None\n        }\n      case _: Exception => None\n    }\n  }\n\n  /**\n   * Extracts file location from ContentFile instance using dynamic class lookup.\n   */\n  def extractFileLocation(file: Any): Option[String] = {\n    try {\n      val contentFileClass = loadClass(ClassNames.CONTENT_FILE)\n      extractFileLocation(contentFileClass, file)\n    } catch {\n      case _: Exception => None\n    }\n  }\n\n  /**\n   * Gets the Iceberg Table from a SparkScan.\n   *\n   * The table() method is protected in SparkScan, requiring reflection to access.\n   */\n  def getTable(scan: Any): Option[Any] = {\n    findMethodInHierarchy(scan.getClass, \"table\").flatMap { tableMethod =>\n      try {\n        Some(tableMethod.invoke(scan))\n      } catch {\n        case e: Exception =>\n          logError(\n            s\"Iceberg reflection failure: Failed to get table from SparkScan: ${e.getMessage}\")\n          None\n      }\n    }\n  }\n\n  /**\n   * Gets the tasks from a SparkScan.\n   *\n   * The tasks() method is protected in SparkScan, requiring reflection to access.\n   */\n  def getTasks(scan: Any): Option[java.util.List[_]] = {\n    try {\n      val tasksMethod = scan.getClass.getSuperclass\n        .getDeclaredMethod(\"tasks\")\n      tasksMethod.setAccessible(true)\n      Some(tasksMethod.invoke(scan).asInstanceOf[java.util.List[_]])\n    } catch {\n      case e: Exception =>\n        logError(\n          s\"Iceberg reflection failure: Failed to get tasks from SparkScan: ${e.getMessage}\")\n        None\n    }\n  }\n\n  /**\n   * Gets the filter expressions from a SparkScan.\n   *\n   * The filterExpressions() method is protected in SparkScan.\n   */\n  def getFilterExpressions(scan: Any): Option[java.util.List[_]] = {\n    try {\n      val filterExpressionsMethod = scan.getClass.getSuperclass.getSuperclass\n        .getDeclaredMethod(\"filterExpressions\")\n      filterExpressionsMethod.setAccessible(true)\n      Some(filterExpressionsMethod.invoke(scan).asInstanceOf[java.util.List[_]])\n    } catch {\n      case e: Exception =>\n        logError(\n          \"Iceberg reflection failure: Failed to get filter expressions from SparkScan: \" +\n            s\"${e.getMessage}\")\n        None\n    }\n  }\n\n  /**\n   * Gets the Iceberg table format version.\n   *\n   * Tries to get formatVersion() directly from table, falling back to\n   * operations().current().formatVersion() 
for older Iceberg versions.\n   */\n  def getFormatVersion(table: Any): Option[Int] = {\n    try {\n      val formatVersionMethod = table.getClass.getMethod(\"formatVersion\")\n      Some(formatVersionMethod.invoke(table).asInstanceOf[Int])\n    } catch {\n      case _: NoSuchMethodException =>\n        try {\n          // If not directly available, access via operations/metadata\n          val opsMethod = table.getClass.getDeclaredMethod(\"operations\")\n          opsMethod.setAccessible(true)\n          val ops = opsMethod.invoke(table)\n          val currentMethod = ops.getClass.getDeclaredMethod(\"current\")\n          currentMethod.setAccessible(true)\n          val metadata = currentMethod.invoke(ops)\n          val formatVersionMethod = metadata.getClass.getMethod(\"formatVersion\")\n          Some(formatVersionMethod.invoke(metadata).asInstanceOf[Int])\n        } catch {\n          case e: Exception =>\n            logError(s\"Iceberg reflection failure: Failed to get format version: ${e.getMessage}\")\n            None\n        }\n      case e: Exception =>\n        logError(s\"Iceberg reflection failure: Failed to get format version: ${e.getMessage}\")\n        None\n    }\n  }\n\n  /**\n   * Gets the FileIO from an Iceberg table.\n   */\n  def getFileIO(table: Any): Option[Any] = {\n    try {\n      val ioMethod = table.getClass.getMethod(\"io\")\n      Some(ioMethod.invoke(table))\n    } catch {\n      case e: Exception =>\n        logError(s\"Iceberg reflection failure: Failed to get FileIO from table: ${e.getMessage}\")\n        None\n    }\n  }\n\n  /**\n   * Gets storage properties from an Iceberg table's FileIO.\n   *\n   * This extracts credentials from the FileIO implementation, which is critical for REST catalog\n   * credential vending. The REST catalog returns temporary S3 credentials per-table via the\n   * loadTable response, stored in the table's FileIO (typically ResolvingFileIO).\n   *\n   * The properties() method is not on the FileIO interface -- it exists on specific\n   * implementations like ResolvingFileIO and S3FileIO. 
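The lookup below therefore walks the class hierarchy via
   * findMethodInHierarchy, so the method is found on the concrete implementation even though it
   * is not declared on the interface.
   * 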
Returns None gracefully when unavailable.\n   */\n  def getFileIOProperties(table: Any): Option[Map[String, String]] = {\n    import scala.jdk.CollectionConverters._\n    getFileIO(table).flatMap { fileIO =>\n      findMethodInHierarchy(fileIO.getClass, \"properties\").flatMap { propsMethod =>\n        propsMethod.invoke(fileIO) match {\n          case javaMap: java.util.Map[_, _] =>\n            val scalaMap = javaMap.asScala.collect { case (k: String, v: String) =>\n              k -> v\n            }.toMap\n            if (scalaMap.nonEmpty) Some(scalaMap) else None\n          case _ => None\n        }\n      }\n    }\n  }\n\n  /**\n   * Gets the schema from an Iceberg table.\n   */\n  def getSchema(table: Any): Option[Any] = {\n    try {\n      val schemaMethod = table.getClass.getMethod(\"schema\")\n      Some(schemaMethod.invoke(table))\n    } catch {\n      case e: Exception =>\n        logError(s\"Iceberg reflection failure: Failed to get schema from table: ${e.getMessage}\")\n        None\n    }\n  }\n\n  /**\n   * Gets the partition spec from an Iceberg table.\n   */\n  def getPartitionSpec(table: Any): Option[Any] = {\n    try {\n      val specMethod = table.getClass.getMethod(\"spec\")\n      Some(specMethod.invoke(table))\n    } catch {\n      case e: Exception =>\n        logError(\n          s\"Iceberg reflection failure: Failed to get partition spec from table: ${e.getMessage}\")\n        None\n    }\n  }\n\n  /**\n   * Gets the table metadata from an Iceberg table.\n   *\n   * @param table\n   *   The Iceberg table instance\n   * @return\n   *   The TableMetadata object from table.operations().current()\n   */\n  def getTableMetadata(table: Any): Option[Any] = {\n    try {\n      val operationsMethod = table.getClass.getDeclaredMethod(\"operations\")\n      operationsMethod.setAccessible(true)\n      val operations = operationsMethod.invoke(table)\n\n      val currentMethod = operations.getClass.getDeclaredMethod(\"current\")\n      currentMethod.setAccessible(true)\n      Some(currentMethod.invoke(operations))\n    } catch {\n      case e: Exception =>\n        logError(s\"Iceberg reflection failure: Failed to get table metadata: ${e.getMessage}\")\n        None\n    }\n  }\n\n  /**\n   * Gets the metadata file location from an Iceberg table.\n   *\n   * @param table\n   *   The Iceberg table instance\n   * @return\n   *   Path to the table metadata file\n   */\n  def getMetadataLocation(table: Any): Option[String] = {\n    getTableMetadata(table).flatMap { metadata =>\n      try {\n        val metadataFileLocationMethod = metadata.getClass.getMethod(\"metadataFileLocation\")\n        Some(metadataFileLocationMethod.invoke(metadata).asInstanceOf[String])\n      } catch {\n        case e: Exception =>\n          logError(\n            s\"Iceberg reflection failure: Failed to get metadata location: ${e.getMessage}\")\n          None\n      }\n    }\n  }\n\n  /**\n   * Gets the properties map from an Iceberg table's metadata.\n   *\n   * @param table\n   *   The Iceberg table instance\n   * @return\n   *   Map of table properties\n   */\n  def getTableProperties(table: Any): Option[java.util.Map[String, String]] = {\n    getTableMetadata(table).flatMap { metadata =>\n      try {\n        val propertiesMethod = metadata.getClass.getMethod(\"properties\")\n        Some(propertiesMethod.invoke(metadata).asInstanceOf[java.util.Map[String, String]])\n      } catch {\n        case e: Exception =>\n          logError(s\"Iceberg reflection failure: Failed to get table properties: 
${e.getMessage}\")\n          None\n      }\n    }\n  }\n\n  /**\n   * Gets delete files from a single FileScanTask.\n   *\n   * @param task\n   *   An Iceberg FileScanTask object\n   * @param fileScanTaskClass\n   *   The FileScanTask class (can be obtained via classforname or passed in if already loaded)\n   * @return\n   *   List of delete files for this task\n   * @throws Exception\n   *   if reflection fails (callers must handle appropriately based on context)\n   */\n  def getDeleteFilesFromTask(task: Any, fileScanTaskClass: Class[_]): java.util.List[_] = {\n    val deletesMethod = fileScanTaskClass.getMethod(\"deletes\")\n    val deletes = deletesMethod.invoke(task).asInstanceOf[java.util.List[_]]\n    if (deletes == null) new java.util.ArrayList[Any]() else deletes\n  }\n\n  /**\n   * Gets equality field IDs from a delete file.\n   *\n   * @param deleteFile\n   *   An Iceberg DeleteFile object\n   * @return\n   *   List of field IDs used in equality deletes, or empty list for position deletes\n   */\n  def getEqualityFieldIds(deleteFile: Any): java.util.List[_] = {\n    try {\n      val deleteFileClass = loadClass(ClassNames.DELETE_FILE)\n      val equalityFieldIdsMethod = deleteFileClass.getMethod(\"equalityFieldIds\")\n      val ids = equalityFieldIdsMethod.invoke(deleteFile).asInstanceOf[java.util.List[_]]\n      if (ids == null) new java.util.ArrayList[Any]() else ids\n    } catch {\n      case _: Exception =>\n        // Position delete files return null/empty for equalityFieldIds\n        new java.util.ArrayList[Any]()\n    }\n  }\n\n  /**\n   * Gets field name and type from schema by field ID.\n   *\n   * @param schema\n   *   Iceberg Schema object\n   * @param fieldId\n   *   Field ID to look up\n   * @return\n   *   Tuple of (field name, field type string)\n   */\n  def getFieldInfo(schema: Any, fieldId: Int): Option[(String, String)] = {\n    try {\n      val findFieldMethod = schema.getClass.getMethod(\"findField\", classOf[Int])\n      val field = findFieldMethod.invoke(schema, fieldId.asInstanceOf[AnyRef])\n      if (field != null) {\n        val nameMethod = field.getClass.getMethod(\"name\")\n        val typeMethod = field.getClass.getMethod(\"type\")\n        val fieldName = nameMethod.invoke(field).toString\n        val fieldType = typeMethod.invoke(field).toString\n        Some((fieldName, fieldType))\n      } else {\n        None\n      }\n    } catch {\n      case e: Exception =>\n        logError(\n          \"Iceberg reflection failure: Failed to get field info for ID \" +\n            s\"$fieldId: ${e.getMessage}\")\n        None\n    }\n  }\n\n  /**\n   * Gets the expected schema from a SparkScan.\n   *\n   * The expectedSchema() method is protected in SparkScan and returns the Iceberg Schema for this\n   * scan (which is the snapshot schema for VERSION AS OF queries).\n   *\n   * @param scan\n   *   The SparkScan object\n   * @return\n   *   The expected Iceberg Schema, or None if reflection fails\n   */\n  def getExpectedSchema(scan: Any): Option[Any] = {\n    findMethodInHierarchy(scan.getClass, \"expectedSchema\").flatMap { schemaMethod =>\n      try {\n        Some(schemaMethod.invoke(scan))\n      } catch {\n        case e: Exception =>\n          logError(s\"Failed to get expectedSchema from SparkScan: ${e.getMessage}\")\n          None\n      }\n    }\n  }\n\n  /**\n   * Builds a field ID mapping from an Iceberg schema.\n   *\n   * Extracts the mapping of column names to Iceberg field IDs from the schema's columns. 
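Note that only the top-level
   * columns returned by the schema's columns() method are inspected; nested struct fields are
   * not recursed into.
   * 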
This is\n   * used for schema evolution support where we need to map between column names and their\n   * corresponding field IDs.\n   *\n   * @param schema\n   *   Iceberg Schema object\n   * @return\n   *   Map from column name to field ID\n   */\n  def buildFieldIdMapping(schema: Any): Map[String, Int] = {\n    import scala.jdk.CollectionConverters._\n    try {\n      val columnsMethod = schema.getClass.getMethod(\"columns\")\n      val columns = columnsMethod.invoke(schema).asInstanceOf[java.util.List[_]]\n\n      columns.asScala.flatMap { column =>\n        try {\n          val nameMethod = column.getClass.getMethod(\"name\")\n          val name = nameMethod.invoke(column).asInstanceOf[String]\n\n          val fieldIdMethod = column.getClass.getMethod(\"fieldId\")\n          val fieldId = fieldIdMethod.invoke(column).asInstanceOf[Int]\n\n          Some(name -> fieldId)\n        } catch {\n          case e: Exception =>\n            logWarning(s\"Failed to extract field ID from column: ${e.getMessage}\")\n            None\n        }\n      }.toMap\n    } catch {\n      case e: Exception =>\n        logWarning(s\"Failed to build field ID mapping from schema: ${e.getMessage}\")\n        Map.empty[String, Int]\n    }\n  }\n\n  /**\n   * Validates partition column types for compatibility with iceberg-rust.\n   *\n   * iceberg-rust's Literal::try_from_json() has incomplete type support:\n   *   - Binary/fixed types: unimplemented\n   *   - Decimals: limited to precision <= 28 (rust_decimal crate limitation)\n   *\n   * @param partitionSpec\n   *   The Iceberg PartitionSpec\n   * @param schema\n   *   The Iceberg Schema to look up field types\n   * @return\n   *   List of unsupported partition types (empty if all supported). Each entry is (fieldName,\n   *   typeStr, reason)\n   */\n  def validatePartitionTypes(partitionSpec: Any, schema: Any): List[(String, String, String)] = {\n    import scala.jdk.CollectionConverters._\n\n    val fieldsMethod = partitionSpec.getClass.getMethod(\"fields\")\n    val fields = fieldsMethod.invoke(partitionSpec).asInstanceOf[java.util.List[_]]\n\n    val partitionFieldClass = loadClass(ClassNames.PARTITION_FIELD)\n    val sourceIdMethod = partitionFieldClass.getMethod(\"sourceId\")\n    val findFieldMethod = schema.getClass.getMethod(\"findField\", classOf[Int])\n\n    val unsupportedTypes = scala.collection.mutable.ListBuffer[(String, String, String)]()\n\n    fields.asScala.foreach { field =>\n      val sourceId = sourceIdMethod.invoke(field).asInstanceOf[Int]\n      val column = findFieldMethod.invoke(schema, sourceId.asInstanceOf[Object])\n\n      if (column != null) {\n        val nameMethod = column.getClass.getMethod(\"name\")\n        val fieldName = nameMethod.invoke(column).asInstanceOf[String]\n\n        val typeMethod = column.getClass.getMethod(\"type\")\n        val icebergType = typeMethod.invoke(column)\n        val typeStr = icebergType.toString\n\n        // iceberg-rust/crates/iceberg/src/spec/values.rs Literal::try_from_json()\n        if (typeStr.startsWith(\"decimal(\")) {\n          val precisionStr = typeStr.substring(8, typeStr.indexOf(','))\n          val precision = precisionStr.toInt\n          // rust_decimal crate maximum precision\n          if (precision > 28) {\n            unsupportedTypes += ((\n              fieldName,\n              typeStr,\n              s\"High-precision decimal (precision=$precision) exceeds maximum of 28 \" +\n                \"(rust_decimal limitation)\"))\n          }\n        } else if (typeStr == \"binary\" 
|| typeStr.startsWith(\"fixed[\")) {\n          unsupportedTypes += ((\n            fieldName,\n            typeStr,\n            \"Binary/fixed types not yet supported (Literal::try_from_json todo!())\"))\n        }\n      }\n    }\n\n    unsupportedTypes.toList\n  }\n}\n\n/**\n * Pre-extracted Iceberg metadata for native scan execution.\n *\n * This class holds all metadata extracted from Iceberg during the planning/validation phase in\n * CometScanRule. By extracting all metadata once during validation (where reflection failures\n * trigger fallback to Spark), we avoid redundant reflection during serialization (where failures\n * would be fatal runtime errors).\n *\n * @param table\n *   The Iceberg Table object\n * @param metadataLocation\n *   Path to the table metadata file\n * @param nameMapping\n *   Optional name mapping from table properties (for schema evolution)\n * @param tasks\n *   List of FileScanTask objects from Iceberg planning\n * @param scanSchema\n *   The expectedSchema from the SparkScan (for schema evolution / VERSION AS OF)\n * @param tableSchema\n *   The current table schema (from Table.schema())\n * @param globalFieldIdMapping\n *   Mapping from column names to Iceberg field IDs (built from scanSchema)\n * @param catalogProperties\n *   Catalog properties for FileIO (S3 credentials, regions, etc.)\n * @param fileFormat\n *   File format of the scan tasks (currently always PARQUET)\n */\ncase class CometIcebergNativeScanMetadata(\n    table: Any,\n    metadataLocation: String,\n    nameMapping: Option[String],\n    @transient tasks: java.util.List[_],\n    scanSchema: Any,\n    tableSchema: Any,\n    globalFieldIdMapping: Map[String, Int],\n    catalogProperties: Map[String, String],\n    fileFormat: String)\n\nobject CometIcebergNativeScanMetadata extends Logging {\n\n  /**\n   * Extracts all Iceberg metadata needed for native scan execution.\n   *\n   * This method performs all reflection operations once during planning/validation.\n   * 
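The individual lookups are chained
   * in a single for-comprehension, so the first failed lookup short-circuits the remaining ones.
   * 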
If any\n   * reflection operation fails, returns None to trigger fallback to Spark.\n   *\n   * @param scan\n   *   The Spark BatchScanExec.scan (SparkBatchQueryScan)\n   * @param metadataLocation\n   *   Path to the table metadata file (already extracted)\n   * @param catalogProperties\n   *   Catalog properties for FileIO (already extracted)\n   * @return\n   *   Some(metadata) if all reflection succeeds, None to trigger fallback\n   */\n  def extract(\n      scan: Any,\n      metadataLocation: String,\n      catalogProperties: Map[String, String]): Option[CometIcebergNativeScanMetadata] = {\n    import org.apache.comet.iceberg.IcebergReflection._\n\n    for {\n      table <- getTable(scan)\n      tasks <- getTasks(scan)\n      scanSchema <- getExpectedSchema(scan)\n      tableSchema <- getSchema(table)\n    } yield {\n      // nameMapping is optional - if it fails we just use None\n      val nameMapping = getTableProperties(table).flatMap { properties =>\n        val nameMappingKey = \"schema.name-mapping.default\"\n        if (properties.containsKey(nameMappingKey)) {\n          Some(properties.get(nameMappingKey))\n        } else {\n          None\n        }\n      }\n\n      val globalFieldIdMapping = buildFieldIdMapping(scanSchema)\n\n      // File format is always PARQUET,\n      // validated in CometScanRule.validateIcebergFileScanTasks()\n      // Hardcoded here for extensibility (future ORC/Avro support would add logic here)\n      CometIcebergNativeScanMetadata(\n        table = table,\n        metadataLocation = metadataLocation,\n        nameMapping = nameMapping,\n        tasks = tasks,\n        scanSchema = scanSchema,\n        tableSchema = tableSchema,\n        globalFieldIdMapping = globalFieldIdMapping,\n        catalogProperties = catalogProperties,\n        fileFormat = FileFormats.PARQUET)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/parquet/CometParquetFileFormat.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.hadoop.conf.Configuration\nimport org.apache.parquet.filter2.predicate.FilterApi\nimport org.apache.parquet.hadoop.ParquetInputFormat\nimport org.apache.parquet.hadoop.metadata.FileMetaData\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.util.CaseInsensitiveMap\nimport org.apache.spark.sql.catalyst.util.RebaseDateTime.RebaseSpec\nimport org.apache.spark.sql.comet.CometMetricNode\nimport org.apache.spark.sql.execution.datasources.DataSourceUtils\nimport org.apache.spark.sql.execution.datasources.PartitionedFile\nimport org.apache.spark.sql.execution.datasources.RecordReaderIterator\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetOptions\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetReadSupport\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.sources.Filter\nimport org.apache.spark.sql.types.{DateType, StructType, TimestampType}\nimport org.apache.spark.util.SerializableConfiguration\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.MetricsSupport\nimport org.apache.comet.shims.ShimSQLConf\nimport org.apache.comet.vector.CometVector\n\n/**\n * A Comet specific Parquet format. 
This mostly reuses the functionality of Spark's\n * [[ParquetFileFormat]], but overrides:\n *\n *   - `vectorTypes`, so Spark allocates [[CometVector]] instead of its own on-heap or off-heap\n *     column vector in the whole-stage codegen path.\n *   - `supportBatch`, which simply returns true since data types should have already been checked\n *     in [[org.apache.comet.CometSparkSessionExtensions]].\n *   - `buildReaderWithPartitionValues`, so Spark calls Comet's Parquet reader to read values.\n */\nclass CometParquetFileFormat(session: SparkSession)\n    extends ParquetFileFormat\n    with MetricsSupport\n    with ShimSQLConf {\n  metrics =\n    CometMetricNode.nativeScanMetrics(session.sparkContext) ++ CometMetricNode.parquetScanMetrics(\n      session.sparkContext)\n\n  override def shortName(): String = \"parquet\"\n  override def toString: String = \"CometParquet\"\n  override def hashCode(): Int = getClass.hashCode()\n  override def equals(other: Any): Boolean = other.isInstanceOf[CometParquetFileFormat]\n\n  override def vectorTypes(\n      requiredSchema: StructType,\n      partitionSchema: StructType,\n      sqlConf: SQLConf): Option[Seq[String]] = {\n    val length = requiredSchema.fields.length + partitionSchema.fields.length\n    Option(Seq.fill(length)(classOf[CometVector].getName))\n  }\n\n  override def supportBatch(sparkSession: SparkSession, schema: StructType): Boolean = true\n\n  override def buildReaderWithPartitionValues(\n      sparkSession: SparkSession,\n      dataSchema: StructType,\n      partitionSchema: StructType,\n      requiredSchema: StructType,\n      filters: Seq[Filter],\n      options: Map[String, String],\n      hadoopConf: Configuration): PartitionedFile => Iterator[InternalRow] = {\n    val sqlConf = sparkSession.sessionState.conf\n    CometParquetFileFormat.populateConf(sqlConf, hadoopConf)\n    val broadcastedHadoopConf =\n      sparkSession.sparkContext.broadcast(new SerializableConfiguration(hadoopConf))\n\n    val isCaseSensitive = sqlConf.caseSensitiveAnalysis\n    val useFieldId = CometParquetUtils.readFieldId(sqlConf)\n    val ignoreMissingIds = CometParquetUtils.ignoreMissingIds(sqlConf)\n    val pushDownDate = sqlConf.parquetFilterPushDownDate\n    val pushDownTimestamp = sqlConf.parquetFilterPushDownTimestamp\n    val pushDownDecimal = sqlConf.parquetFilterPushDownDecimal\n    val pushDownStringPredicate = sqlConf.parquetFilterPushDownStringPredicate\n    val pushDownInFilterThreshold = sqlConf.parquetFilterPushDownInFilterThreshold\n    val optionsMap = CaseInsensitiveMap[String](options)\n    val parquetOptions = new ParquetOptions(optionsMap, sqlConf)\n    val datetimeRebaseModeInRead = parquetOptions.datetimeRebaseModeInRead\n    val parquetFilterPushDown = sqlConf.parquetFilterPushDown &&\n      CometConf.COMET_RESPECT_PARQUET_FILTER_PUSHDOWN.get(sqlConf)\n\n    // Comet specific configurations\n    val capacity = CometConf.COMET_BATCH_SIZE.get(sqlConf)\n\n    (file: PartitionedFile) => {\n      val sharedConf = broadcastedHadoopConf.value.value\n      val footer = FooterReader.readFooter(sharedConf, file)\n      val footerFileMetaData = footer.getFileMetaData\n      val datetimeRebaseSpec = CometParquetFileFormat.getDatetimeRebaseSpec(\n        file,\n        requiredSchema,\n        sharedConf,\n        footerFileMetaData,\n        datetimeRebaseModeInRead)\n\n      val parquetSchema = footerFileMetaData.getSchema\n      val parquetFilters = new ParquetFilters(\n        parquetSchema,\n        dataSchema,\n        
pushDownDate,\n        pushDownTimestamp,\n        pushDownDecimal,\n        pushDownStringPredicate,\n        pushDownInFilterThreshold,\n        isCaseSensitive,\n        datetimeRebaseSpec)\n\n      val pushed = if (parquetFilterPushDown) {\n        filters\n          .flatMap(parquetFilters.createFilter)\n          .reduceOption(FilterApi.and)\n      } else {\n        None\n      }\n      pushed.foreach(p => ParquetInputFormat.setFilterPredicate(sharedConf, p))\n      val pushedNative = if (parquetFilterPushDown) {\n        parquetFilters.createNativeFilters(filters)\n      } else {\n        None\n      }\n      val recordBatchReader = new NativeBatchReader(\n        sharedConf,\n        file,\n        footer,\n        pushedNative.orNull,\n        capacity,\n        requiredSchema,\n        dataSchema,\n        isCaseSensitive,\n        useFieldId,\n        ignoreMissingIds,\n        datetimeRebaseSpec.mode == CORRECTED,\n        partitionSchema,\n        file.partitionValues,\n        metrics.asJava,\n        CometMetricNode(metrics))\n      try {\n        recordBatchReader.init()\n      } catch {\n        case e: Throwable =>\n          recordBatchReader.close()\n          throw e\n      }\n      val iter = new RecordReaderIterator(recordBatchReader)\n      try {\n        iter.asInstanceOf[Iterator[InternalRow]]\n      } catch {\n        case e: Throwable =>\n          iter.close()\n          throw e\n      }\n    }\n  }\n}\n\nobject CometParquetFileFormat extends Logging with ShimSQLConf {\n\n  /**\n   * Populates Parquet related configurations from the input `sqlConf` to the `hadoopConf`\n   */\n  def populateConf(sqlConf: SQLConf, hadoopConf: Configuration): Unit = {\n    hadoopConf.set(ParquetInputFormat.READ_SUPPORT_CLASS, classOf[ParquetReadSupport].getName)\n    hadoopConf.set(SQLConf.SESSION_LOCAL_TIMEZONE.key, sqlConf.sessionLocalTimeZone)\n    hadoopConf.setBoolean(\n      SQLConf.NESTED_SCHEMA_PRUNING_ENABLED.key,\n      sqlConf.nestedSchemaPruningEnabled)\n    hadoopConf.setBoolean(SQLConf.CASE_SENSITIVE.key, sqlConf.caseSensitiveAnalysis)\n\n    // Sets flags for `ParquetToSparkSchemaConverter`\n    hadoopConf.setBoolean(SQLConf.PARQUET_BINARY_AS_STRING.key, sqlConf.isParquetBinaryAsString)\n    hadoopConf.setBoolean(\n      SQLConf.PARQUET_INT96_AS_TIMESTAMP.key,\n      sqlConf.isParquetINT96AsTimestamp)\n\n    // Comet specific configs\n    hadoopConf.setBoolean(\n      CometConf.COMET_USE_DECIMAL_128.key,\n      CometConf.COMET_USE_DECIMAL_128.get())\n    hadoopConf.setBoolean(\n      CometConf.COMET_EXCEPTION_ON_LEGACY_DATE_TIMESTAMP.key,\n      CometConf.COMET_EXCEPTION_ON_LEGACY_DATE_TIMESTAMP.get())\n    hadoopConf.setInt(CometConf.COMET_BATCH_SIZE.key, CometConf.COMET_BATCH_SIZE.get())\n  }\n\n  def getDatetimeRebaseSpec(\n      file: PartitionedFile,\n      sparkSchema: StructType,\n      sharedConf: Configuration,\n      footerFileMetaData: FileMetaData,\n      datetimeRebaseModeInRead: String): RebaseSpec = {\n    val exceptionOnRebase = sharedConf.getBoolean(\n      CometConf.COMET_EXCEPTION_ON_LEGACY_DATE_TIMESTAMP.key,\n      CometConf.COMET_EXCEPTION_ON_LEGACY_DATE_TIMESTAMP.defaultValue.get)\n    var datetimeRebaseSpec = DataSourceUtils.datetimeRebaseSpec(\n      footerFileMetaData.getKeyValueMetaData.get,\n      datetimeRebaseModeInRead)\n    val hasDateOrTimestamp = sparkSchema.exists(f =>\n      f.dataType match {\n        case DateType | TimestampType => true\n        case _ => false\n      })\n\n    if (hasDateOrTimestamp && datetimeRebaseSpec.mode 
== LEGACY) {\n      if (exceptionOnRebase) {\n        logWarning(\n          s\"\"\"Found Parquet file $file that could potentially contain dates/timestamps that were\n              written in the legacy hybrid Julian/Gregorian calendar. Unlike Spark 3+, which will\n              rebase and return these according to the new Proleptic Gregorian calendar, Comet will\n              throw an exception when reading them. If you want to read them as-is according to the\n              hybrid Julian/Gregorian calendar, please set `spark.comet.exceptionOnDatetimeRebase`\n              to false. Otherwise, if you want to read them according to the new Proleptic\n              Gregorian calendar, please disable Comet for this query.\"\"\")\n      } else {\n        // do not throw an exception on rebase - read the values as-is\n        datetimeRebaseSpec = datetimeRebaseSpec.copy(CORRECTED)\n      }\n    }\n\n    datetimeRebaseSpec\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/parquet/ParquetFilters.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet\n\nimport java.lang.{Boolean => JBoolean, Byte => JByte, Double => JDouble, Float => JFloat, Long => JLong, Short => JShort}\nimport java.math.{BigDecimal => JBigDecimal}\nimport java.sql.{Date, Timestamp}\nimport java.time.{Duration, Instant, LocalDate, Period}\nimport java.util.Locale\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.parquet.column.statistics.{Statistics => ParquetStatistics}\nimport org.apache.parquet.filter2.predicate._\nimport org.apache.parquet.filter2.predicate.SparkFilterApi._\nimport org.apache.parquet.io.api.Binary\nimport org.apache.parquet.schema.{GroupType, LogicalTypeAnnotation, MessageType, PrimitiveComparator, PrimitiveType, Type}\nimport org.apache.parquet.schema.LogicalTypeAnnotation.{DecimalLogicalTypeAnnotation, TimeUnit}\nimport org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName\nimport org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName._\nimport org.apache.parquet.schema.Type.Repetition\nimport org.apache.spark.sql.catalyst.util.{quoteIfNeeded, CaseInsensitiveMap, DateTimeUtils, IntervalUtils}\nimport org.apache.spark.sql.catalyst.util.RebaseDateTime.{rebaseGregorianToJulianDays, rebaseGregorianToJulianMicros, RebaseSpec}\nimport org.apache.spark.sql.sources\nimport org.apache.spark.sql.types.StructType\nimport org.apache.spark.unsafe.types.UTF8String\n\nimport com.google.protobuf.CodedOutputStream\n\nimport org.apache.comet.parquet.SourceFilterSerde.{createBinaryExpr, createNameExpr, createUnaryExpr, createValueExpr}\nimport org.apache.comet.serde.ExprOuterClass\nimport org.apache.comet.serde.QueryPlanSerde.scalarFunctionExprToProto\nimport org.apache.comet.shims.ShimSQLConf\n\n/**\n * Copied from Spark 3.4, in order to fix Parquet shading issue. TODO: find a way to remove this\n * duplication\n *\n * Some utility function to convert Spark data source filters to Parquet filters.\n */\nclass ParquetFilters(\n    schema: MessageType,\n    dataSchema: StructType,\n    pushDownDate: Boolean,\n    pushDownTimestamp: Boolean,\n    pushDownDecimal: Boolean,\n    pushDownStringPredicate: Boolean,\n    pushDownInFilterThreshold: Int,\n    caseSensitive: Boolean,\n    datetimeRebaseSpec: RebaseSpec)\n    extends ShimSQLConf {\n  // A map which contains parquet field name and data type, if predicate push down applies.\n  //\n  // Each key in `nameToParquetField` represents a column; `dots` are used as separators for\n  // nested columns. 
If any part of the names contains `dots`, it is quoted to avoid confusion.\n  // See `org.apache.spark.sql.connector.catalog.quote` for implementation details.\n  private val nameToParquetField: Map[String, ParquetPrimitiveField] = {\n    // Recursively traverse the parquet schema to get primitive fields that can be pushed-down.\n    // `parentFieldNames` is used to keep track of the current nested level when traversing.\n    def getPrimitiveFields(\n        fields: Seq[Type],\n        parentFieldNames: Array[String] = Array.empty): Seq[ParquetPrimitiveField] = {\n      fields.flatMap {\n        // Parquet only supports predicate push-down for non-repeated primitive types.\n        // TODO(SPARK-39393): Remove extra condition when parquet added filter predicate support for\n        //                    repeated columns (https://issues.apache.org/jira/browse/PARQUET-34)\n        case p: PrimitiveType if p.getRepetition != Repetition.REPEATED =>\n          Some(\n            ParquetPrimitiveField(\n              fieldNames = parentFieldNames :+ p.getName,\n              fieldType = ParquetSchemaType(\n                p.getLogicalTypeAnnotation,\n                p.getPrimitiveTypeName,\n                p.getTypeLength)))\n        // Note that when g is a `Struct`, `g.getOriginalType` is `null`.\n        // When g is a `Map`, `g.getOriginalType` is `MAP`.\n        // When g is a `List`, `g.getOriginalType` is `LIST`.\n        case g: GroupType if g.getOriginalType == null =>\n          getPrimitiveFields(g.getFields.asScala.toSeq, parentFieldNames :+ g.getName)\n        // Parquet only supports push-down for primitive types; as a result, Map and List types\n        // are removed.\n        case _ => None\n      }\n    }\n\n    val primitiveFields = getPrimitiveFields(schema.getFields.asScala.toSeq).map { field =>\n      (field.fieldNames.toSeq.map(quoteIfNeeded).mkString(\".\"), field)\n    }\n    if (caseSensitive) {\n      primitiveFields.toMap\n    } else {\n      // Don't consider ambiguity here, i.e. 
more than one field is matched in case insensitive\n      // mode, just skip pushdown for these fields, they will trigger Exception when reading,\n      // See: SPARK-25132.\n      val dedupPrimitiveFields =\n        primitiveFields\n          .groupBy(_._1.toLowerCase(Locale.ROOT))\n          .filter(_._2.size == 1)\n          .mapValues(_.head._2)\n      CaseInsensitiveMap(dedupPrimitiveFields.toMap)\n    }\n  }\n\n  /**\n   * Holds a single primitive field information stored in the underlying parquet file.\n   *\n   * @param fieldNames\n   *   a field name as an array of string multi-identifier in parquet file\n   * @param fieldType\n   *   field type related info in parquet file\n   */\n  private case class ParquetPrimitiveField(\n      fieldNames: Array[String],\n      fieldType: ParquetSchemaType)\n\n  private case class ParquetSchemaType(\n      logicalTypeAnnotation: LogicalTypeAnnotation,\n      primitiveTypeName: PrimitiveTypeName,\n      length: Int)\n\n  private val ParquetBooleanType = ParquetSchemaType(null, BOOLEAN, 0)\n  private val ParquetByteType =\n    ParquetSchemaType(LogicalTypeAnnotation.intType(8, true), INT32, 0)\n  private val ParquetShortType =\n    ParquetSchemaType(LogicalTypeAnnotation.intType(16, true), INT32, 0)\n  private val ParquetIntegerType = ParquetSchemaType(null, INT32, 0)\n  private val ParquetLongType = ParquetSchemaType(null, INT64, 0)\n  private val ParquetFloatType = ParquetSchemaType(null, FLOAT, 0)\n  private val ParquetDoubleType = ParquetSchemaType(null, DOUBLE, 0)\n  private val ParquetStringType =\n    ParquetSchemaType(LogicalTypeAnnotation.stringType(), BINARY, 0)\n  private val ParquetBinaryType = ParquetSchemaType(null, BINARY, 0)\n  private val ParquetDateType =\n    ParquetSchemaType(LogicalTypeAnnotation.dateType(), INT32, 0)\n  private val ParquetTimestampMicrosType =\n    ParquetSchemaType(LogicalTypeAnnotation.timestampType(true, TimeUnit.MICROS), INT64, 0)\n  private val ParquetTimestampMillisType =\n    ParquetSchemaType(LogicalTypeAnnotation.timestampType(true, TimeUnit.MILLIS), INT64, 0)\n\n  private def dateToDays(date: Any): Int = {\n    val gregorianDays = date match {\n      case d: Date => DateTimeUtils.fromJavaDate(d)\n      case ld: LocalDate => DateTimeUtils.localDateToDays(ld)\n    }\n    datetimeRebaseSpec.mode match {\n      case LEGACY => rebaseGregorianToJulianDays(gregorianDays)\n      case _ => gregorianDays\n    }\n  }\n\n  private def timestampToMicros(v: Any): JLong = {\n    val gregorianMicros = v match {\n      case i: Instant => DateTimeUtils.instantToMicros(i)\n      case t: Timestamp => DateTimeUtils.fromJavaTimestamp(t)\n    }\n    datetimeRebaseSpec.mode match {\n      case LEGACY =>\n        rebaseGregorianToJulianMicros(datetimeRebaseSpec.timeZone, gregorianMicros)\n      case _ => gregorianMicros\n    }\n  }\n\n  private def decimalToInt32(decimal: JBigDecimal): Integer = decimal.unscaledValue().intValue()\n\n  private def decimalToInt64(decimal: JBigDecimal): JLong = decimal.unscaledValue().longValue()\n\n  private def decimalToByteArray(decimal: JBigDecimal, numBytes: Int): Binary = {\n    val decimalBuffer = new Array[Byte](numBytes)\n    val bytes = decimal.unscaledValue().toByteArray\n\n    val fixedLengthBytes = if (bytes.length == numBytes) {\n      bytes\n    } else {\n      val signByte = if (bytes.head < 0) -1: Byte else 0: Byte\n      java.util.Arrays.fill(decimalBuffer, 0, numBytes - bytes.length, signByte)\n      System.arraycopy(bytes, 0, decimalBuffer, numBytes - bytes.length, 
bytes.length)\n      decimalBuffer\n    }\n    Binary.fromConstantByteArray(fixedLengthBytes, 0, numBytes)\n  }\n\n  private def timestampToMillis(v: Any): JLong = {\n    val micros = timestampToMicros(v)\n    val millis = DateTimeUtils.microsToMillis(micros)\n    millis.asInstanceOf[JLong]\n  }\n\n  private def toIntValue(v: Any): Integer = {\n    Option(v)\n      .map {\n        case p: Period => IntervalUtils.periodToMonths(p)\n        case n => n.asInstanceOf[Number].intValue\n      }\n      .map(_.asInstanceOf[Integer])\n      .orNull\n  }\n\n  private def toLongValue(v: Any): JLong = v match {\n    case d: Duration => IntervalUtils.durationToMicros(d)\n    case l => l.asInstanceOf[JLong]\n  }\n\n  private val makeEq\n      : PartialFunction[ParquetSchemaType, (Array[String], Any) => FilterPredicate] = {\n    case ParquetBooleanType =>\n      (n: Array[String], v: Any) => FilterApi.eq(booleanColumn(n), v.asInstanceOf[JBoolean])\n    case ParquetByteType | ParquetShortType | ParquetIntegerType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.eq(\n          intColumn(n),\n          Option(v).map(_.asInstanceOf[Number].intValue.asInstanceOf[Integer]).orNull)\n    case ParquetLongType =>\n      (n: Array[String], v: Any) => FilterApi.eq(longColumn(n), v.asInstanceOf[JLong])\n    case ParquetFloatType =>\n      (n: Array[String], v: Any) => FilterApi.eq(floatColumn(n), v.asInstanceOf[JFloat])\n    case ParquetDoubleType =>\n      (n: Array[String], v: Any) => FilterApi.eq(doubleColumn(n), v.asInstanceOf[JDouble])\n\n    // Binary.fromString and Binary.fromByteArray don't accept null values\n    case ParquetStringType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.eq(\n          binaryColumn(n),\n          Option(v).map(s => Binary.fromString(s.asInstanceOf[String])).orNull)\n    case ParquetBinaryType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.eq(\n          binaryColumn(n),\n          Option(v).map(_ => Binary.fromReusedByteArray(v.asInstanceOf[Array[Byte]])).orNull)\n    case ParquetDateType if pushDownDate =>\n      (n: Array[String], v: Any) =>\n        FilterApi.eq(\n          intColumn(n),\n          Option(v).map(date => dateToDays(date).asInstanceOf[Integer]).orNull)\n    case ParquetTimestampMicrosType if pushDownTimestamp =>\n      (n: Array[String], v: Any) =>\n        FilterApi.eq(longColumn(n), Option(v).map(timestampToMicros).orNull)\n    case ParquetTimestampMillisType if pushDownTimestamp =>\n      (n: Array[String], v: Any) =>\n        FilterApi.eq(longColumn(n), Option(v).map(timestampToMillis).orNull)\n\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT32, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.eq(\n          intColumn(n),\n          Option(v).map(d => decimalToInt32(d.asInstanceOf[JBigDecimal])).orNull)\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT64, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.eq(\n          longColumn(n),\n          Option(v).map(d => decimalToInt64(d.asInstanceOf[JBigDecimal])).orNull)\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, FIXED_LEN_BYTE_ARRAY, length)\n        if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.eq(\n          binaryColumn(n),\n          Option(v).map(d => decimalToByteArray(d.asInstanceOf[JBigDecimal], length)).orNull)\n  }\n\n  private val makeNotEq\n      : PartialFunction[ParquetSchemaType, (Array[String], Any) => FilterPredicate] = 
{\n    case ParquetBooleanType =>\n      (n: Array[String], v: Any) => FilterApi.notEq(booleanColumn(n), v.asInstanceOf[JBoolean])\n    case ParquetByteType | ParquetShortType | ParquetIntegerType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.notEq(\n          intColumn(n),\n          Option(v).map(_.asInstanceOf[Number].intValue.asInstanceOf[Integer]).orNull)\n    case ParquetLongType =>\n      (n: Array[String], v: Any) => FilterApi.notEq(longColumn(n), v.asInstanceOf[JLong])\n    case ParquetFloatType =>\n      (n: Array[String], v: Any) => FilterApi.notEq(floatColumn(n), v.asInstanceOf[JFloat])\n    case ParquetDoubleType =>\n      (n: Array[String], v: Any) => FilterApi.notEq(doubleColumn(n), v.asInstanceOf[JDouble])\n\n    case ParquetStringType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.notEq(\n          binaryColumn(n),\n          Option(v).map(s => Binary.fromString(s.asInstanceOf[String])).orNull)\n    case ParquetBinaryType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.notEq(\n          binaryColumn(n),\n          Option(v).map(_ => Binary.fromReusedByteArray(v.asInstanceOf[Array[Byte]])).orNull)\n    case ParquetDateType if pushDownDate =>\n      (n: Array[String], v: Any) =>\n        FilterApi.notEq(\n          intColumn(n),\n          Option(v).map(date => dateToDays(date).asInstanceOf[Integer]).orNull)\n    case ParquetTimestampMicrosType if pushDownTimestamp =>\n      (n: Array[String], v: Any) =>\n        FilterApi.notEq(longColumn(n), Option(v).map(timestampToMicros).orNull)\n    case ParquetTimestampMillisType if pushDownTimestamp =>\n      (n: Array[String], v: Any) =>\n        FilterApi.notEq(longColumn(n), Option(v).map(timestampToMillis).orNull)\n\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT32, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.notEq(\n          intColumn(n),\n          Option(v).map(d => decimalToInt32(d.asInstanceOf[JBigDecimal])).orNull)\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT64, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.notEq(\n          longColumn(n),\n          Option(v).map(d => decimalToInt64(d.asInstanceOf[JBigDecimal])).orNull)\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, FIXED_LEN_BYTE_ARRAY, length)\n        if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.notEq(\n          binaryColumn(n),\n          Option(v).map(d => decimalToByteArray(d.asInstanceOf[JBigDecimal], length)).orNull)\n  }\n\n  private val makeLt\n      : PartialFunction[ParquetSchemaType, (Array[String], Any) => FilterPredicate] = {\n    case ParquetByteType | ParquetShortType | ParquetIntegerType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.lt(intColumn(n), v.asInstanceOf[Number].intValue.asInstanceOf[Integer])\n    case ParquetLongType =>\n      (n: Array[String], v: Any) => FilterApi.lt(longColumn(n), v.asInstanceOf[JLong])\n    case ParquetFloatType =>\n      (n: Array[String], v: Any) => FilterApi.lt(floatColumn(n), v.asInstanceOf[JFloat])\n    case ParquetDoubleType =>\n      (n: Array[String], v: Any) => FilterApi.lt(doubleColumn(n), v.asInstanceOf[JDouble])\n\n    case ParquetStringType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.lt(binaryColumn(n), Binary.fromString(v.asInstanceOf[String]))\n    case ParquetBinaryType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.lt(binaryColumn(n), 
Binary.fromReusedByteArray(v.asInstanceOf[Array[Byte]]))\n    case ParquetDateType if pushDownDate =>\n      (n: Array[String], v: Any) =>\n        FilterApi.lt(intColumn(n), dateToDays(v).asInstanceOf[Integer])\n    case ParquetTimestampMicrosType if pushDownTimestamp =>\n      (n: Array[String], v: Any) => FilterApi.lt(longColumn(n), timestampToMicros(v))\n    case ParquetTimestampMillisType if pushDownTimestamp =>\n      (n: Array[String], v: Any) => FilterApi.lt(longColumn(n), timestampToMillis(v))\n\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT32, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.lt(intColumn(n), decimalToInt32(v.asInstanceOf[JBigDecimal]))\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT64, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.lt(longColumn(n), decimalToInt64(v.asInstanceOf[JBigDecimal]))\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, FIXED_LEN_BYTE_ARRAY, length)\n        if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.lt(binaryColumn(n), decimalToByteArray(v.asInstanceOf[JBigDecimal], length))\n  }\n\n  private val makeLtEq\n      : PartialFunction[ParquetSchemaType, (Array[String], Any) => FilterPredicate] = {\n    case ParquetByteType | ParquetShortType | ParquetIntegerType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.ltEq(intColumn(n), v.asInstanceOf[Number].intValue.asInstanceOf[Integer])\n    case ParquetLongType =>\n      (n: Array[String], v: Any) => FilterApi.ltEq(longColumn(n), v.asInstanceOf[JLong])\n    case ParquetFloatType =>\n      (n: Array[String], v: Any) => FilterApi.ltEq(floatColumn(n), v.asInstanceOf[JFloat])\n    case ParquetDoubleType =>\n      (n: Array[String], v: Any) => FilterApi.ltEq(doubleColumn(n), v.asInstanceOf[JDouble])\n\n    case ParquetStringType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.ltEq(binaryColumn(n), Binary.fromString(v.asInstanceOf[String]))\n    case ParquetBinaryType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.ltEq(binaryColumn(n), Binary.fromReusedByteArray(v.asInstanceOf[Array[Byte]]))\n    case ParquetDateType if pushDownDate =>\n      (n: Array[String], v: Any) =>\n        FilterApi.ltEq(intColumn(n), dateToDays(v).asInstanceOf[Integer])\n    case ParquetTimestampMicrosType if pushDownTimestamp =>\n      (n: Array[String], v: Any) => FilterApi.ltEq(longColumn(n), timestampToMicros(v))\n    case ParquetTimestampMillisType if pushDownTimestamp =>\n      (n: Array[String], v: Any) => FilterApi.ltEq(longColumn(n), timestampToMillis(v))\n\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT32, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.ltEq(intColumn(n), decimalToInt32(v.asInstanceOf[JBigDecimal]))\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT64, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.ltEq(longColumn(n), decimalToInt64(v.asInstanceOf[JBigDecimal]))\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, FIXED_LEN_BYTE_ARRAY, length)\n        if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.ltEq(binaryColumn(n), decimalToByteArray(v.asInstanceOf[JBigDecimal], length))\n  }\n\n  private val makeGt\n      : PartialFunction[ParquetSchemaType, (Array[String], Any) => FilterPredicate] = {\n    case ParquetByteType | ParquetShortType | ParquetIntegerType =>\n      (n: Array[String], v: 
Any) =>\n        FilterApi.gt(intColumn(n), v.asInstanceOf[Number].intValue.asInstanceOf[Integer])\n    case ParquetLongType =>\n      (n: Array[String], v: Any) => FilterApi.gt(longColumn(n), v.asInstanceOf[JLong])\n    case ParquetFloatType =>\n      (n: Array[String], v: Any) => FilterApi.gt(floatColumn(n), v.asInstanceOf[JFloat])\n    case ParquetDoubleType =>\n      (n: Array[String], v: Any) => FilterApi.gt(doubleColumn(n), v.asInstanceOf[JDouble])\n\n    case ParquetStringType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gt(binaryColumn(n), Binary.fromString(v.asInstanceOf[String]))\n    case ParquetBinaryType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gt(binaryColumn(n), Binary.fromReusedByteArray(v.asInstanceOf[Array[Byte]]))\n    case ParquetDateType if pushDownDate =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gt(intColumn(n), dateToDays(v).asInstanceOf[Integer])\n    case ParquetTimestampMicrosType if pushDownTimestamp =>\n      (n: Array[String], v: Any) => FilterApi.gt(longColumn(n), timestampToMicros(v))\n    case ParquetTimestampMillisType if pushDownTimestamp =>\n      (n: Array[String], v: Any) => FilterApi.gt(longColumn(n), timestampToMillis(v))\n\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT32, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gt(intColumn(n), decimalToInt32(v.asInstanceOf[JBigDecimal]))\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT64, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gt(longColumn(n), decimalToInt64(v.asInstanceOf[JBigDecimal]))\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, FIXED_LEN_BYTE_ARRAY, length)\n        if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gt(binaryColumn(n), decimalToByteArray(v.asInstanceOf[JBigDecimal], length))\n  }\n\n  private val makeGtEq\n      : PartialFunction[ParquetSchemaType, (Array[String], Any) => FilterPredicate] = {\n    case ParquetByteType | ParquetShortType | ParquetIntegerType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gtEq(intColumn(n), v.asInstanceOf[Number].intValue.asInstanceOf[Integer])\n    case ParquetLongType =>\n      (n: Array[String], v: Any) => FilterApi.gtEq(longColumn(n), v.asInstanceOf[JLong])\n    case ParquetFloatType =>\n      (n: Array[String], v: Any) => FilterApi.gtEq(floatColumn(n), v.asInstanceOf[JFloat])\n    case ParquetDoubleType =>\n      (n: Array[String], v: Any) => FilterApi.gtEq(doubleColumn(n), v.asInstanceOf[JDouble])\n\n    case ParquetStringType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gtEq(binaryColumn(n), Binary.fromString(v.asInstanceOf[String]))\n    case ParquetBinaryType =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gtEq(binaryColumn(n), Binary.fromReusedByteArray(v.asInstanceOf[Array[Byte]]))\n    case ParquetDateType if pushDownDate =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gtEq(intColumn(n), dateToDays(v).asInstanceOf[Integer])\n    case ParquetTimestampMicrosType if pushDownTimestamp =>\n      (n: Array[String], v: Any) => FilterApi.gtEq(longColumn(n), timestampToMicros(v))\n    case ParquetTimestampMillisType if pushDownTimestamp =>\n      (n: Array[String], v: Any) => FilterApi.gtEq(longColumn(n), timestampToMillis(v))\n\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT32, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gtEq(intColumn(n), 
decimalToInt32(v.asInstanceOf[JBigDecimal]))\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT64, _) if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gtEq(longColumn(n), decimalToInt64(v.asInstanceOf[JBigDecimal]))\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, FIXED_LEN_BYTE_ARRAY, length)\n        if pushDownDecimal =>\n      (n: Array[String], v: Any) =>\n        FilterApi.gtEq(binaryColumn(n), decimalToByteArray(v.asInstanceOf[JBigDecimal], length))\n  }\n\n  private val makeInPredicate: PartialFunction[\n    ParquetSchemaType,\n    (Array[String], Array[Any], ParquetStatistics[_]) => FilterPredicate] = {\n    case ParquetByteType | ParquetShortType | ParquetIntegerType =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(toIntValue(_).toInt).foreach(statistics.updateStats)\n        FilterApi.and(\n          FilterApi.gtEq(intColumn(n), statistics.genericGetMin().asInstanceOf[Integer]),\n          FilterApi.ltEq(intColumn(n), statistics.genericGetMax().asInstanceOf[Integer]))\n\n    case ParquetLongType =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(toLongValue).foreach(statistics.updateStats(_))\n        FilterApi.and(\n          FilterApi.gtEq(longColumn(n), statistics.genericGetMin().asInstanceOf[JLong]),\n          FilterApi.ltEq(longColumn(n), statistics.genericGetMax().asInstanceOf[JLong]))\n\n    case ParquetFloatType =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(_.asInstanceOf[JFloat]).foreach(statistics.updateStats(_))\n        FilterApi.and(\n          FilterApi.gtEq(floatColumn(n), statistics.genericGetMin().asInstanceOf[JFloat]),\n          FilterApi.ltEq(floatColumn(n), statistics.genericGetMax().asInstanceOf[JFloat]))\n\n    case ParquetDoubleType =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(_.asInstanceOf[JDouble]).foreach(statistics.updateStats(_))\n        FilterApi.and(\n          FilterApi.gtEq(doubleColumn(n), statistics.genericGetMin().asInstanceOf[JDouble]),\n          FilterApi.ltEq(doubleColumn(n), statistics.genericGetMax().asInstanceOf[JDouble]))\n\n    case ParquetStringType =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(s => Binary.fromString(s.asInstanceOf[String])).foreach(statistics.updateStats)\n        FilterApi.and(\n          FilterApi.gtEq(binaryColumn(n), statistics.genericGetMin().asInstanceOf[Binary]),\n          FilterApi.ltEq(binaryColumn(n), statistics.genericGetMax().asInstanceOf[Binary]))\n\n    case ParquetBinaryType =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(b => Binary.fromReusedByteArray(b.asInstanceOf[Array[Byte]]))\n          .foreach(statistics.updateStats)\n        FilterApi.and(\n          FilterApi.gtEq(binaryColumn(n), statistics.genericGetMin().asInstanceOf[Binary]),\n          FilterApi.ltEq(binaryColumn(n), statistics.genericGetMax().asInstanceOf[Binary]))\n\n    case ParquetDateType if pushDownDate =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(dateToDays).map(_.asInstanceOf[Integer]).foreach(statistics.updateStats(_))\n        FilterApi.and(\n          FilterApi.gtEq(intColumn(n), statistics.genericGetMin().asInstanceOf[Integer]),\n          FilterApi.ltEq(intColumn(n), statistics.genericGetMax().asInstanceOf[Integer]))\n\n    case 
ParquetTimestampMicrosType if pushDownTimestamp =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(timestampToMicros).foreach(statistics.updateStats(_))\n        FilterApi.and(\n          FilterApi.gtEq(longColumn(n), statistics.genericGetMin().asInstanceOf[JLong]),\n          FilterApi.ltEq(longColumn(n), statistics.genericGetMax().asInstanceOf[JLong]))\n\n    case ParquetTimestampMillisType if pushDownTimestamp =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(timestampToMillis).foreach(statistics.updateStats(_))\n        FilterApi.and(\n          FilterApi.gtEq(longColumn(n), statistics.genericGetMin().asInstanceOf[JLong]),\n          FilterApi.ltEq(longColumn(n), statistics.genericGetMax().asInstanceOf[JLong]))\n\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT32, _) if pushDownDecimal =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(_.asInstanceOf[JBigDecimal]).map(decimalToInt32).foreach(statistics.updateStats(_))\n        FilterApi.and(\n          FilterApi.gtEq(intColumn(n), statistics.genericGetMin().asInstanceOf[Integer]),\n          FilterApi.ltEq(intColumn(n), statistics.genericGetMax().asInstanceOf[Integer]))\n\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, INT64, _) if pushDownDecimal =>\n      (n: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(_.asInstanceOf[JBigDecimal]).map(decimalToInt64).foreach(statistics.updateStats(_))\n        FilterApi.and(\n          FilterApi.gtEq(longColumn(n), statistics.genericGetMin().asInstanceOf[JLong]),\n          FilterApi.ltEq(longColumn(n), statistics.genericGetMax().asInstanceOf[JLong]))\n\n    case ParquetSchemaType(_: DecimalLogicalTypeAnnotation, FIXED_LEN_BYTE_ARRAY, length)\n        if pushDownDecimal =>\n      (path: Array[String], v: Array[Any], statistics: ParquetStatistics[_]) =>\n        v.map(d => decimalToByteArray(d.asInstanceOf[JBigDecimal], length))\n          .foreach(statistics.updateStats)\n        FilterApi.and(\n          FilterApi.gtEq(binaryColumn(path), statistics.genericGetMin().asInstanceOf[Binary]),\n          FilterApi.ltEq(binaryColumn(path), statistics.genericGetMax().asInstanceOf[Binary]))\n  }\n\n  // Returns filters that can be pushed down when reading Parquet files.\n  def convertibleFilters(filters: Seq[sources.Filter]): Seq[sources.Filter] = {\n    filters.flatMap(convertibleFiltersHelper(_, canPartialPushDown = true))\n  }\n\n  private def convertibleFiltersHelper(\n      predicate: sources.Filter,\n      canPartialPushDown: Boolean): Option[sources.Filter] = {\n    predicate match {\n      case sources.And(left, right) =>\n        val leftResultOptional = convertibleFiltersHelper(left, canPartialPushDown)\n        val rightResultOptional = convertibleFiltersHelper(right, canPartialPushDown)\n        (leftResultOptional, rightResultOptional) match {\n          case (Some(leftResult), Some(rightResult)) => Some(sources.And(leftResult, rightResult))\n          case (Some(leftResult), None) if canPartialPushDown => Some(leftResult)\n          case (None, Some(rightResult)) if canPartialPushDown => Some(rightResult)\n          case _ => None\n        }\n\n      case sources.Or(left, right) =>\n        val leftResultOptional = convertibleFiltersHelper(left, canPartialPushDown)\n        val rightResultOptional = convertibleFiltersHelper(right, canPartialPushDown)\n        if (leftResultOptional.isEmpty || 
rightResultOptional.isEmpty) {\n          None\n        } else {\n          Some(sources.Or(leftResultOptional.get, rightResultOptional.get))\n        }\n      case sources.Not(pred) =>\n        val resultOptional = convertibleFiltersHelper(pred, canPartialPushDown = false)\n        resultOptional.map(sources.Not)\n\n      case other =>\n        if (createFilter(other).isDefined) {\n          Some(other)\n        } else {\n          None\n        }\n    }\n  }\n\n  /**\n   * Converts data sources filters to Parquet filter predicates.\n   */\n  def createFilter(predicate: sources.Filter): Option[FilterPredicate] = {\n    createFilterHelper(predicate, canPartialPushDownConjuncts = true)\n  }\n\n  // The Parquet type in the given file must match the type of the value in the pushed\n  // filter in order for the filter to be pushed down to Parquet.\n  private def valueCanMakeFilterOn(name: String, value: Any): Boolean = {\n    value == null || (nameToParquetField(name).fieldType match {\n      case ParquetBooleanType => value.isInstanceOf[JBoolean]\n      case ParquetByteType | ParquetShortType | ParquetIntegerType =>\n        value match {\n          // Byte/Short/Int are all stored as INT32 in Parquet so filters are built using type\n          // Int. We don't create a filter if the value would overflow.\n          case _: JByte | _: JShort | _: Integer => true\n          case v: JLong => v.longValue() >= Int.MinValue && v.longValue() <= Int.MaxValue\n          case _ => false\n        }\n      case ParquetLongType => value.isInstanceOf[JLong]\n      case ParquetFloatType => value.isInstanceOf[JFloat]\n      case ParquetDoubleType => value.isInstanceOf[JDouble]\n      case ParquetStringType => value.isInstanceOf[String]\n      case ParquetBinaryType => value.isInstanceOf[Array[Byte]]\n      case ParquetDateType =>\n        value.isInstanceOf[Date] || value.isInstanceOf[LocalDate]\n      case ParquetTimestampMicrosType | ParquetTimestampMillisType =>\n        value.isInstanceOf[Timestamp] || value.isInstanceOf[Instant]\n      case ParquetSchemaType(decimalType: DecimalLogicalTypeAnnotation, INT32, _) =>\n        isDecimalMatched(value, decimalType)\n      case ParquetSchemaType(decimalType: DecimalLogicalTypeAnnotation, INT64, _) =>\n        isDecimalMatched(value, decimalType)\n      case ParquetSchemaType(\n            decimalType: DecimalLogicalTypeAnnotation,\n            FIXED_LEN_BYTE_ARRAY,\n            _) =>\n        isDecimalMatched(value, decimalType)\n      case _ => false\n    })\n  }\n\n  // For decimal types, the filter value's scale must match the scale in the file;\n  // a mismatch would cause data corruption.\n  private def isDecimalMatched(\n      value: Any,\n      decimalLogicalType: DecimalLogicalTypeAnnotation): Boolean = value match {\n    case decimal: JBigDecimal =>\n      decimal.scale == decimalLogicalType.getScale\n    case _ => false\n  }\n\n  private def canMakeFilterOn(name: String, value: Any): Boolean = {\n    nameToParquetField.contains(name) && valueCanMakeFilterOn(name, value)\n  }\n\n  /**\n   * @param predicate\n   *   the input filter predicate. Not all predicates can be pushed down.\n   * @param canPartialPushDownConjuncts\n   *   whether a subset of the predicate's conjuncts can be pushed down safely. 
Pushing ONLY one\n   *   side of AND down is safe to do at the top level or none of its ancestors is NOT and OR.\n   * @return\n   *   the Parquet-native filter predicates that are eligible for pushdown.\n   */\n  private def createFilterHelper(\n      predicate: sources.Filter,\n      canPartialPushDownConjuncts: Boolean): Option[FilterPredicate] = {\n    // NOTE:\n    //\n    // For any comparison operator `cmp`, both `a cmp NULL` and `NULL cmp a` evaluate to `NULL`,\n    // which can be casted to `false` implicitly. Please refer to the `eval` method of these\n    // operators and the `PruneFilters` rule for details.\n\n    // Hyukjin:\n    // I added [[EqualNullSafe]] with [[org.apache.parquet.filter2.predicate.Operators.Eq]].\n    // So, it performs equality comparison identically when given [[sources.Filter]] is [[EqualTo]].\n    // The reason why I did this is, that the actual Parquet filter checks null-safe equality\n    // comparison.\n    // So I added this and maybe [[EqualTo]] should be changed. It still seems fine though, because\n    // physical planning does not set `NULL` to [[EqualTo]] but changes it to [[IsNull]] and etc.\n    // Probably I missed something and obviously this should be changed.\n\n    predicate match {\n      case sources.IsNull(name) if canMakeFilterOn(name, null) =>\n        makeEq\n          .lift(nameToParquetField(name).fieldType)\n          .map(_(nameToParquetField(name).fieldNames, null))\n      case sources.IsNotNull(name) if canMakeFilterOn(name, null) =>\n        makeNotEq\n          .lift(nameToParquetField(name).fieldType)\n          .map(_(nameToParquetField(name).fieldNames, null))\n\n      case sources.EqualTo(name, value) if canMakeFilterOn(name, value) =>\n        makeEq\n          .lift(nameToParquetField(name).fieldType)\n          .map(_(nameToParquetField(name).fieldNames, value))\n      case sources.Not(sources.EqualTo(name, value)) if canMakeFilterOn(name, value) =>\n        makeNotEq\n          .lift(nameToParquetField(name).fieldType)\n          .map(_(nameToParquetField(name).fieldNames, value))\n\n      case sources.EqualNullSafe(name, value) if canMakeFilterOn(name, value) =>\n        makeEq\n          .lift(nameToParquetField(name).fieldType)\n          .map(_(nameToParquetField(name).fieldNames, value))\n      case sources.Not(sources.EqualNullSafe(name, value)) if canMakeFilterOn(name, value) =>\n        makeNotEq\n          .lift(nameToParquetField(name).fieldType)\n          .map(_(nameToParquetField(name).fieldNames, value))\n\n      case sources.LessThan(name, value) if (value != null) && canMakeFilterOn(name, value) =>\n        makeLt\n          .lift(nameToParquetField(name).fieldType)\n          .map(_(nameToParquetField(name).fieldNames, value))\n      case sources.LessThanOrEqual(name, value)\n          if (value != null) && canMakeFilterOn(name, value) =>\n        makeLtEq\n          .lift(nameToParquetField(name).fieldType)\n          .map(_(nameToParquetField(name).fieldNames, value))\n\n      case sources.GreaterThan(name, value) if (value != null) && canMakeFilterOn(name, value) =>\n        makeGt\n          .lift(nameToParquetField(name).fieldType)\n          .map(_(nameToParquetField(name).fieldNames, value))\n      case sources.GreaterThanOrEqual(name, value)\n          if (value != null) && canMakeFilterOn(name, value) =>\n        makeGtEq\n          .lift(nameToParquetField(name).fieldType)\n          .map(_(nameToParquetField(name).fieldNames, value))\n\n      case sources.And(lhs, rhs) =>\n        // At 
this point, it is not safe to just convert one side and remove the other side\n        // if we do not understand what the parent filters are.\n        //\n        // Here is an example used to explain the reason.\n        // Let's say we have NOT(a = 2 AND b in ('1')) and we do not understand how to\n        // convert b in ('1'). If we only convert a = 2, we will end up with a filter\n        // NOT(a = 2), which will generate wrong results.\n        //\n        // Pushing one side of an AND down is only safe at the top level, or in a child\n        // AND before hitting a NOT or an OR condition; in those cases, the unsupported\n        // predicate can be safely removed.\n        val lhsFilterOption =\n          createFilterHelper(lhs, canPartialPushDownConjuncts)\n        val rhsFilterOption =\n          createFilterHelper(rhs, canPartialPushDownConjuncts)\n\n        (lhsFilterOption, rhsFilterOption) match {\n          case (Some(lhsFilter), Some(rhsFilter)) => Some(FilterApi.and(lhsFilter, rhsFilter))\n          case (Some(lhsFilter), None) if canPartialPushDownConjuncts => Some(lhsFilter)\n          case (None, Some(rhsFilter)) if canPartialPushDownConjuncts => Some(rhsFilter)\n          case _ => None\n        }\n\n      case sources.Or(lhs, rhs) =>\n        // The Or predicate is convertible when both of its children can be pushed down.\n        // That is to say, if one/both of the children can be partially pushed down, the Or\n        // predicate can be partially pushed down as well.\n        //\n        // Here is an example used to explain the reason.\n        // Let's say we have\n        // (a1 AND a2) OR (b1 AND b2),\n        // where a1 and b1 are convertible, while a2 and b2 are not.\n        // The predicate can be converted to\n        // (a1 OR b1) AND (a1 OR b2) AND (a2 OR b1) AND (a2 OR b2)\n        // As per the logic in the And case above, we can push down (a1 OR b1).\n        for {\n          lhsFilter <- createFilterHelper(lhs, canPartialPushDownConjuncts)\n          rhsFilter <- createFilterHelper(rhs, canPartialPushDownConjuncts)\n        } yield FilterApi.or(lhsFilter, rhsFilter)\n\n      case sources.Not(pred) =>\n        createFilterHelper(pred, canPartialPushDownConjuncts = false)\n          .map(FilterApi.not)\n\n      case sources.In(name, values)\n          if pushDownInFilterThreshold > 0 && values.nonEmpty &&\n            canMakeFilterOn(name, values.head) =>\n        val fieldType = nameToParquetField(name).fieldType\n        val fieldNames = nameToParquetField(name).fieldNames\n        if (values.length <= pushDownInFilterThreshold) {\n          values.distinct\n            .flatMap { v =>\n              makeEq.lift(fieldType).map(_(fieldNames, v))\n            }\n            .reduceLeftOption(FilterApi.or)\n        } else if (canPartialPushDownConjuncts) {\n          val primitiveType = schema.getColumnDescription(fieldNames).getPrimitiveType\n          val statistics: ParquetStatistics[_] = ParquetStatistics.createStats(primitiveType)\n          if (values.contains(null)) {\n            Seq(\n              makeEq.lift(fieldType).map(_(fieldNames, null)),\n              makeInPredicate\n                .lift(fieldType)\n                .map(_(fieldNames, values.filter(_ != null), statistics))).flatten\n              .reduceLeftOption(FilterApi.or)\n          } else {\n            makeInPredicate.lift(fieldType).map(_(fieldNames, values, statistics))\n          }\n        } else {\n          None\n        }\n\n      case sources.StringStartsWith(name, prefix)\n    
      if pushDownStringPredicate && canMakeFilterOn(name, prefix) =>\n        Option(prefix).map { v =>\n          FilterApi.userDefined(\n            binaryColumn(nameToParquetField(name).fieldNames),\n            new UserDefinedPredicate[Binary] with Serializable {\n              private val strToBinary = Binary.fromReusedByteArray(v.getBytes)\n              private val size = strToBinary.length\n\n              override def canDrop(statistics: Statistics[Binary]): Boolean = {\n                val comparator = PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR\n                val max = statistics.getMax\n                val min = statistics.getMin\n                comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) < 0 ||\n                comparator.compare(min.slice(0, math.min(size, min.length)), strToBinary) > 0\n              }\n\n              override def inverseCanDrop(statistics: Statistics[Binary]): Boolean = {\n                val comparator = PrimitiveComparator.UNSIGNED_LEXICOGRAPHICAL_BINARY_COMPARATOR\n                val max = statistics.getMax\n                val min = statistics.getMin\n                comparator.compare(max.slice(0, math.min(size, max.length)), strToBinary) == 0 &&\n                comparator.compare(min.slice(0, math.min(size, min.length)), strToBinary) == 0\n              }\n\n              override def keep(value: Binary): Boolean = {\n                value != null && UTF8String\n                  .fromBytes(value.getBytes)\n                  .startsWith(UTF8String.fromBytes(strToBinary.getBytes))\n              }\n            })\n        }\n\n      case sources.StringEndsWith(name, suffix)\n          if pushDownStringPredicate && canMakeFilterOn(name, suffix) =>\n        Option(suffix).map { v =>\n          FilterApi.userDefined(\n            binaryColumn(nameToParquetField(name).fieldNames),\n            new UserDefinedPredicate[Binary] with Serializable {\n              private val suffixStr = UTF8String.fromString(v)\n              override def canDrop(statistics: Statistics[Binary]): Boolean = false\n              override def inverseCanDrop(statistics: Statistics[Binary]): Boolean = false\n              override def keep(value: Binary): Boolean = {\n                value != null && UTF8String.fromBytes(value.getBytes).endsWith(suffixStr)\n              }\n            })\n        }\n\n      case sources.StringContains(name, value)\n          if pushDownStringPredicate && canMakeFilterOn(name, value) =>\n        Option(value).map { v =>\n          FilterApi.userDefined(\n            binaryColumn(nameToParquetField(name).fieldNames),\n            new UserDefinedPredicate[Binary] with Serializable {\n              private val subStr = UTF8String.fromString(v)\n              override def canDrop(statistics: Statistics[Binary]): Boolean = false\n              override def inverseCanDrop(statistics: Statistics[Binary]): Boolean = false\n              override def keep(value: Binary): Boolean = {\n                value != null && UTF8String.fromBytes(value.getBytes).contains(subStr)\n              }\n            })\n        }\n\n      case _ => None\n    }\n  }\n\n  def createNativeFilters(predicates: Seq[sources.Filter]): Option[Array[Byte]] = {\n    predicates.reduceOption(sources.And).flatMap(createNativeFilter).map { expr =>\n      val size = expr.getSerializedSize\n      val bytes = new Array[Byte](size)\n      val codedOutput = CodedOutputStream.newInstance(bytes)\n      expr.writeTo(codedOutput)\n      
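// The buffer was allocated with getSerializedSize bytes, so checkNoSpaceLeft() verifies\n      // that serialization filled it exactly.\n      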
codedOutput.checkNoSpaceLeft()\n      bytes\n    }\n  }\n\n  private def createNativeFilter(predicate: sources.Filter): Option[ExprOuterClass.Expr] = {\n    def nameUnaryExpr(name: String)(\n        f: (ExprOuterClass.Expr.Builder, ExprOuterClass.UnaryExpr) => ExprOuterClass.Expr.Builder)\n        : Option[ExprOuterClass.Expr] = {\n      createNameExpr(name, dataSchema).map { case (_, childExpr) =>\n        createUnaryExpr(childExpr, f)\n      }\n    }\n\n    def nameValueBinaryExpr(name: String, value: Any)(\n        f: (\n            ExprOuterClass.Expr.Builder,\n            ExprOuterClass.BinaryExpr) => ExprOuterClass.Expr.Builder)\n        : Option[ExprOuterClass.Expr] = {\n      createNameExpr(name, dataSchema).flatMap { case (dataType, childExpr) =>\n        createValueExpr(value, dataType).map(createBinaryExpr(childExpr, _, f))\n      }\n    }\n\n    predicate match {\n      case sources.IsNull(name) if canMakeFilterOn(name, null) =>\n        nameUnaryExpr(name) { (builder, unaryExpr) =>\n          builder.setIsNull(unaryExpr)\n        }\n      case sources.IsNotNull(name) if canMakeFilterOn(name, null) =>\n        nameUnaryExpr(name) { (builder, unaryExpr) =>\n          builder.setIsNotNull(unaryExpr)\n        }\n\n      case sources.EqualTo(name, value) if canMakeFilterOn(name, value) =>\n        nameValueBinaryExpr(name, value) { (builder, binaryExpr) =>\n          builder.setEq(binaryExpr)\n        }\n\n      case sources.Not(sources.EqualTo(name, value)) if canMakeFilterOn(name, value) =>\n        nameValueBinaryExpr(name, value) { (builder, binaryExpr) =>\n          builder.setNeq(binaryExpr)\n        }\n\n      case sources.EqualNullSafe(name, value) if canMakeFilterOn(name, value) =>\n        nameValueBinaryExpr(name, value) { (builder, binaryExpr) =>\n          builder.setEqNullSafe(binaryExpr)\n        }\n\n      case sources.Not(sources.EqualNullSafe(name, value)) if canMakeFilterOn(name, value) =>\n        nameValueBinaryExpr(name, value) { (builder, binaryExpr) =>\n          builder.setNeqNullSafe(binaryExpr)\n        }\n\n      case sources.LessThan(name, value) if (value != null) && canMakeFilterOn(name, value) =>\n        nameValueBinaryExpr(name, value) { (builder, binaryExpr) =>\n          builder.setLt(binaryExpr)\n        }\n\n      case sources.LessThanOrEqual(name, value)\n          if (value != null) && canMakeFilterOn(name, value) =>\n        nameValueBinaryExpr(name, value) { (builder, binaryExpr) =>\n          builder.setLtEq(binaryExpr)\n        }\n\n      case sources.GreaterThan(name, value) if (value != null) && canMakeFilterOn(name, value) =>\n        nameValueBinaryExpr(name, value) { (builder, binaryExpr) =>\n          builder.setGt(binaryExpr)\n        }\n\n      case sources.GreaterThanOrEqual(name, value)\n          if (value != null) && canMakeFilterOn(name, value) =>\n        nameValueBinaryExpr(name, value) { (builder, binaryExpr) =>\n          builder.setGtEq(binaryExpr)\n        }\n\n      case sources.And(lhs, rhs) =>\n        (createNativeFilter(lhs), createNativeFilter(rhs)) match {\n          case (Some(leftExpr), Some(rightExpr)) =>\n            Some(\n              createBinaryExpr(\n                leftExpr,\n                rightExpr,\n                (builder, binaryExpr) => builder.setAnd(binaryExpr)))\n          case _ => None\n        }\n\n      case sources.Or(lhs, rhs) =>\n        (createNativeFilter(lhs), createNativeFilter(rhs)) match {\n          case (Some(leftExpr), Some(rightExpr)) =>\n            Some(\n              
createBinaryExpr(\n                leftExpr,\n                rightExpr,\n                (builder, binaryExpr) => builder.setOr(binaryExpr)))\n          case _ => None\n        }\n\n      case sources.Not(pred) =>\n        val childExpr = createNativeFilter(pred)\n        childExpr.map { expr =>\n          createUnaryExpr(expr, (builder, unaryExpr) => builder.setNot(unaryExpr))\n        }\n\n      case sources.In(name, values)\n          if pushDownInFilterThreshold > 0 && values.nonEmpty &&\n            canMakeFilterOn(name, values.head) =>\n        createNameExpr(name, dataSchema).flatMap { case (dataType, nameExpr) =>\n          val valueExprs = values.flatMap(createValueExpr(_, dataType))\n          if (valueExprs.length != values.length) {\n            None\n          } else {\n            val builder = ExprOuterClass.In.newBuilder()\n            builder.setInValue(nameExpr)\n            builder.addAllLists(valueExprs.toSeq.asJava)\n            builder.setNegated(false)\n            Some(\n              ExprOuterClass.Expr\n                .newBuilder()\n                .setIn(builder)\n                .build())\n          }\n        }\n\n      case sources.StringStartsWith(attribute, prefix)\n          if pushDownStringPredicate && canMakeFilterOn(attribute, prefix) =>\n        val attributeExpr = createNameExpr(attribute, dataSchema)\n        val prefixExpr = attributeExpr.flatMap { case (dataType, _) =>\n          createValueExpr(prefix, dataType)\n        }\n        scalarFunctionExprToProto(\"starts_with\", Some(attributeExpr.get._2), prefixExpr)\n\n      case sources.StringEndsWith(attribute, suffix)\n          if pushDownStringPredicate && canMakeFilterOn(attribute, suffix) =>\n        val attributeExpr = createNameExpr(attribute, dataSchema)\n        val suffixExpr = attributeExpr.flatMap { case (dataType, _) =>\n          createValueExpr(suffix, dataType)\n        }\n        scalarFunctionExprToProto(\"ends_with\", Some(attributeExpr.get._2), suffixExpr)\n\n      case sources.StringContains(attribute, value)\n          if pushDownStringPredicate && canMakeFilterOn(attribute, value) =>\n        val attributeExpr = createNameExpr(attribute, dataSchema)\n        val valueExpr = attributeExpr.flatMap { case (dataType, _) =>\n          createValueExpr(value, dataType)\n        }\n        scalarFunctionExprToProto(\"contains\", Some(attributeExpr.get._2), valueExpr)\n\n      case _ => None\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/parquet/SourceFilterSerde.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet\n\nimport java.math.{BigDecimal => JavaBigDecimal}\nimport java.sql.{Date, Timestamp}\nimport java.time.{Instant, LocalDate, LocalDateTime}\n\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.catalyst.util.DateTimeUtils\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.serde.ExprOuterClass\nimport org.apache.comet.serde.ExprOuterClass.Expr\nimport org.apache.comet.serde.LiteralOuterClass\nimport org.apache.comet.serde.QueryPlanSerde.serializeDataType\n\nobject SourceFilterSerde extends Logging {\n\n  def createNameExpr(\n      name: String,\n      schema: StructType): Option[(org.apache.spark.sql.types.DataType, ExprOuterClass.Expr)] = {\n    val filedWithIndex = schema.fields.zipWithIndex.find { case (field, _) =>\n      field.name == name\n    }\n    if (filedWithIndex.isDefined) {\n      val (field, index) = filedWithIndex.get\n      val dataType = serializeDataType(field.dataType)\n      if (dataType.isDefined) {\n        val boundExpr = ExprOuterClass.BoundReference\n          .newBuilder()\n          .setIndex(index)\n          .setDatatype(dataType.get)\n          .build()\n        Some(\n          field.dataType,\n          ExprOuterClass.Expr\n            .newBuilder()\n            .setBound(boundExpr)\n            .build())\n      } else {\n        None\n      }\n    } else {\n      None\n    }\n\n  }\n\n  /**\n   * create a literal value native expression for source filter value, the value is a scala value\n   */\n  def createValueExpr(\n      value: Any,\n      dataType: org.apache.spark.sql.types.DataType): Option[ExprOuterClass.Expr] = {\n    val exprBuilder = LiteralOuterClass.Literal.newBuilder()\n    var valueIsSet = true\n    if (value == null) {\n      exprBuilder.setIsNull(true)\n    } else {\n      exprBuilder.setIsNull(false)\n      // value is a scala value, not a catalyst value\n      // refer to org.apache.spark.sql.catalyst.CatalystTypeConverters.CatalystTypeConverter#toScala\n      dataType match {\n        case _: BooleanType => exprBuilder.setBoolVal(value.asInstanceOf[Boolean])\n        case _: ByteType => exprBuilder.setByteVal(value.asInstanceOf[Byte])\n        case _: ShortType => exprBuilder.setShortVal(value.asInstanceOf[Short])\n        case _: IntegerType => exprBuilder.setIntVal(value.asInstanceOf[Int])\n        case _: LongType => exprBuilder.setLongVal(value.asInstanceOf[Long])\n        case _: FloatType => exprBuilder.setFloatVal(value.asInstanceOf[Float])\n        case _: DoubleType => exprBuilder.setDoubleVal(value.asInstanceOf[Double])\n        case _: StringType => exprBuilder.setStringVal(value.asInstanceOf[String])\n        case _: TimestampType 
=>\n          value match {\n            case v: Timestamp => exprBuilder.setLongVal(DateTimeUtils.fromJavaTimestamp(v))\n            case v: Instant => exprBuilder.setLongVal(DateTimeUtils.instantToMicros(v))\n            case v: Long => exprBuilder.setLongVal(v)\n            case _ =>\n              valueIsSet = false\n              logWarning(s\"Unexpected timestamp type '${value.getClass}' for value '$value'\")\n          }\n        case _: TimestampNTZType =>\n          value match {\n            case v: LocalDateTime =>\n              exprBuilder.setLongVal(DateTimeUtils.localDateTimeToMicros(v))\n            case v: Long => exprBuilder.setLongVal(v)\n            case _ =>\n              valueIsSet = false\n              logWarning(s\"Unexpected timestamp type '${value.getClass}' for value '$value'\")\n          }\n        case _: DecimalType =>\n          // Pass decimal literal as bytes.\n          val unscaled = value.asInstanceOf[JavaBigDecimal].unscaledValue\n          exprBuilder.setDecimalVal(com.google.protobuf.ByteString.copyFrom(unscaled.toByteArray))\n        case _: BinaryType =>\n          val byteStr =\n            com.google.protobuf.ByteString.copyFrom(value.asInstanceOf[Array[Byte]])\n          exprBuilder.setBytesVal(byteStr)\n        case _: DateType =>\n          value match {\n            case v: LocalDate => exprBuilder.setIntVal(DateTimeUtils.localDateToDays(v))\n            case v: Date => exprBuilder.setIntVal(DateTimeUtils.fromJavaDate(v))\n            case v: Int => exprBuilder.setIntVal(v)\n            case _ =>\n              valueIsSet = false\n              logWarning(s\"Unexpected date type '${value.getClass}' for value '$value'\")\n          }\n        case dt =>\n          valueIsSet = false\n          logWarning(s\"Unexpected data type '$dt' for literal value '$value'\")\n      }\n    }\n\n    val dt = serializeDataType(dataType)\n\n    if (valueIsSet && dt.isDefined) {\n      exprBuilder.setDatatype(dt.get)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setLiteral(exprBuilder)\n          .build())\n    } else {\n      None\n    }\n  }\n\n  def createUnaryExpr(\n      childExpr: Expr,\n      f: (ExprOuterClass.Expr.Builder, ExprOuterClass.UnaryExpr) => ExprOuterClass.Expr.Builder)\n      : ExprOuterClass.Expr = {\n    // create the generic UnaryExpr message\n    val inner = ExprOuterClass.UnaryExpr\n      .newBuilder()\n      .setChild(childExpr)\n      .build()\n    f(\n      ExprOuterClass.Expr\n        .newBuilder(),\n      inner).build()\n  }\n\n  def createBinaryExpr(\n      leftExpr: Expr,\n      rightExpr: Expr,\n      f: (ExprOuterClass.Expr.Builder, ExprOuterClass.BinaryExpr) => ExprOuterClass.Expr.Builder)\n      : ExprOuterClass.Expr = {\n    // create the generic BinaryExpr message\n    val inner = ExprOuterClass.BinaryExpr\n      .newBuilder()\n      .setLeft(leftExpr)\n      .setRight(rightExpr)\n      .build()\n    f(\n      ExprOuterClass.Expr\n        .newBuilder(),\n      inner).build()\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/rules/CometExecRule.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.rules\n\nimport scala.collection.mutable.ListBuffer\n\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst.expressions.{Divide, DoubleLiteral, EqualNullSafe, EqualTo, Expression, FloatLiteral, GreaterThan, GreaterThanOrEqual, KnownFloatingPointNormalized, LessThan, LessThanOrEqual, NamedExpression, Remainder}\nimport org.apache.spark.sql.catalyst.optimizer.NormalizeNaNAndZero\nimport org.apache.spark.sql.catalyst.rules.Rule\nimport org.apache.spark.sql.catalyst.util.sideBySide\nimport org.apache.spark.sql.comet._\nimport org.apache.spark.sql.comet.execution.shuffle.{CometColumnarShuffle, CometNativeShuffle, CometShuffleExchangeExec}\nimport org.apache.spark.sql.comet.util.Utils\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanExec, AQEShuffleReadExec, BroadcastQueryStageExec, ShuffleQueryStageExec}\nimport org.apache.spark.sql.execution.aggregate.{HashAggregateExec, ObjectHashAggregateExec}\nimport org.apache.spark.sql.execution.command.{DataWritingCommandExec, ExecutedCommandExec}\nimport org.apache.spark.sql.execution.datasources.WriteFilesExec\nimport org.apache.spark.sql.execution.datasources.csv.CSVFileFormat\nimport org.apache.spark.sql.execution.datasources.json.JsonFileFormat\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat\nimport org.apache.spark.sql.execution.datasources.v2.{BatchScanExec, V2CommandExec}\nimport org.apache.spark.sql.execution.datasources.v2.csv.CSVScan\nimport org.apache.spark.sql.execution.datasources.v2.json.JsonScan\nimport org.apache.spark.sql.execution.datasources.v2.parquet.ParquetScan\nimport org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ReusedExchangeExec, ShuffleExchangeExec}\nimport org.apache.spark.sql.execution.joins.{BroadcastHashJoinExec, ShuffledHashJoinExec, SortMergeJoinExec}\nimport org.apache.spark.sql.execution.window.WindowExec\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.{CometConf, CometExplainInfo, ExtendedExplainInfo}\nimport org.apache.comet.CometConf.{COMET_SPARK_TO_ARROW_ENABLED, COMET_SPARK_TO_ARROW_SUPPORTED_OPERATOR_LIST}\nimport org.apache.comet.CometSparkSessionExtensions._\nimport org.apache.comet.rules.CometExecRule.allExecs\nimport org.apache.comet.serde._\nimport org.apache.comet.serde.operator._\n\nobject CometExecRule {\n\n  /**\n   * Fully native operators.\n   */\n  val nativeExecs: Map[Class[_ <: SparkPlan], CometOperatorSerde[_]] =\n    Map(\n      classOf[ProjectExec] -> CometProjectExec,\n      classOf[FilterExec] -> CometFilterExec,\n      classOf[LocalLimitExec] -> 
CometLocalLimitExec,\n      classOf[GlobalLimitExec] -> CometGlobalLimitExec,\n      classOf[ExpandExec] -> CometExpandExec,\n      classOf[GenerateExec] -> CometExplodeExec,\n      classOf[HashAggregateExec] -> CometHashAggregateExec,\n      classOf[ObjectHashAggregateExec] -> CometObjectHashAggregateExec,\n      classOf[BroadcastHashJoinExec] -> CometBroadcastHashJoinExec,\n      classOf[ShuffledHashJoinExec] -> CometHashJoinExec,\n      classOf[SortMergeJoinExec] -> CometSortMergeJoinExec,\n      classOf[SortExec] -> CometSortExec,\n      classOf[LocalTableScanExec] -> CometLocalTableScanExec,\n      classOf[WindowExec] -> CometWindowExec)\n\n  /**\n   * Sinks that have a native plan of ScanExec.\n   */\n  val sinks: Map[Class[_ <: SparkPlan], CometOperatorSerde[_]] =\n    Map(\n      classOf[CoalesceExec] -> CometCoalesceExec,\n      classOf[CollectLimitExec] -> CometCollectLimitExec,\n      classOf[TakeOrderedAndProjectExec] -> CometTakeOrderedAndProjectExec,\n      classOf[UnionExec] -> CometUnionExec)\n\n  val allExecs: Map[Class[_ <: SparkPlan], CometOperatorSerde[_]] = nativeExecs ++ sinks\n\n}\n\n/**\n * Spark physical optimizer rule for replacing Spark operators with Comet operators.\n */\ncase class CometExecRule(session: SparkSession) extends Rule[SparkPlan] {\n\n  private lazy val showTransformations = CometConf.COMET_EXPLAIN_TRANSFORMATIONS.get()\n\n  private def applyCometShuffle(plan: SparkPlan): SparkPlan = {\n    plan.transformUp {\n      case s: ShuffleExchangeExec if CometShuffleExchangeExec.nativeShuffleSupported(s) =>\n        // Switch to use Decimal128 regardless of precision, since Arrow native execution\n        // doesn't support Decimal32 and Decimal64 yet.\n        conf.setConfString(CometConf.COMET_USE_DECIMAL_128.key, \"true\")\n        CometShuffleExchangeExec(s, shuffleType = CometNativeShuffle)\n\n      case s: ShuffleExchangeExec if CometShuffleExchangeExec.columnarShuffleSupported(s) =>\n        // Columnar shuffle for regular Spark operators (not Comet) and Comet operators\n        // (if configured)\n        CometShuffleExchangeExec(s, shuffleType = CometColumnarShuffle)\n    }\n  }\n\n  private def isCometNative(op: SparkPlan): Boolean = op.isInstanceOf[CometNativeExec]\n\n  // spotless:off\n\n  /**\n   * Tries to transform a Spark physical plan into a Comet plan.\n   *\n   * This rule traverses bottom-up from the original Spark plan and for each plan node, there\n   * are a few cases to consider:\n   *\n   * 1. The child(ren) of the current node `p` cannot be converted to native\n   *   In this case, we'll simply return the original Spark plan, since Comet native\n   *   execution cannot start from an arbitrary Spark operator (unless it is a special node\n   *   such as a scan, or a sink such as a shuffle exchange or union, which are wrapped by\n   *   `CometScanWrapper` and `CometSinkPlaceHolder` respectively).\n   *\n   * 2. The child(ren) of the current node `p` can be converted to native\n   *   There are two sub-cases for this scenario: 1) This node `p` can also be converted to\n   *   native. In this case, we'll create a new native Comet operator for `p` and connect it with\n   *   its previously converted child(ren); 2) This node `p` cannot be converted to native. In\n   *   this case, similar to 1) above, we simply return `p` as it is. 
Its child(ren) would still\n   *   be native Comet operators.\n   *\n   * After this rule finishes, we'll do another pass on the final plan to convert all adjacent\n   * Comet native operators into a single native execution block. Please see where\n   * `convertBlock` is called below.\n   *\n   * Here are a few examples:\n   *\n   *     Scan                       ======>             CometScan\n   *      |                                                |\n   *     Filter                                         CometFilter\n   *      |                                                |\n   *     HashAggregate                                  CometHashAggregate\n   *      |                                                |\n   *     Exchange                                       CometExchange\n   *      |                                                |\n   *     HashAggregate                                  CometHashAggregate\n   *      |                                                |\n   *     UnsupportedOperator                            UnsupportedOperator\n   *\n   * Native execution doesn't necessarily have to start from `CometScan`:\n   *\n   *     Scan                       =======>            CometScan\n   *      |                                                |\n   *     UnsupportedOperator                            UnsupportedOperator\n   *      |                                                |\n   *     HashAggregate                                  HashAggregate\n   *      |                                                |\n   *     Exchange                                       CometExchange\n   *      |                                                |\n   *     HashAggregate                                  CometHashAggregate\n   *      |                                                |\n   *     UnsupportedOperator                            UnsupportedOperator\n   *\n   * A sink can also be Comet operators other than `CometExchange`, for instance `CometUnion`:\n   *\n   *     Scan   Scan                =======>          CometScan CometScan\n   *      |      |                                       |         |\n   *     Filter Filter                                CometFilter CometFilter\n   *      |      |                                       |         |\n   *        Union                                         CometUnion\n   *          |                                               |\n   *        Project                                       CometProject\n   */\n  // spotless:on\n  private def transform(plan: SparkPlan): SparkPlan = {\n    def convertNode(op: SparkPlan): SparkPlan = op match {\n      // Fully native scan for V1\n      case scan: CometScanExec if scan.scanImpl == CometConf.SCAN_NATIVE_DATAFUSION =>\n        convertToComet(scan, CometNativeScan).getOrElse(scan)\n\n      // Fully native Iceberg scan for V2 (iceberg-rust path)\n      // Only handle scans with native metadata; other scans fall through to isCometScan\n      // Config checks (COMET_ICEBERG_NATIVE_ENABLED, COMET_EXEC_ENABLED) are done in CometScanRule\n      case scan: CometBatchScanExec if scan.nativeIcebergScanMetadata.isDefined =>\n        convertToComet(scan, CometIcebergNativeScan).getOrElse(scan)\n\n      case scan: CometBatchScanExec if scan.wrapped.scan.isInstanceOf[CSVScan] =>\n        convertToComet(scan, CometCsvNativeScanExec).getOrElse(scan)\n\n      // Comet JVM + native scan for V1 and V2\n      case op if isCometScan(op) =>\n        convertToComet(op, 
CometScanWrapper).getOrElse(op)\n\n      case op if shouldApplySparkToColumnar(conf, op) =>\n        convertToComet(op, CometSparkToColumnarExec).getOrElse(op)\n\n      // AQE re-optimization looks for `DataWritingCommandExec` or `WriteFilesExec`;\n      // if it finds neither, it re-inserts write nodes. Since Comet remaps those nodes to their\n      // Comet counterparts, the write nodes would then appear twice in the plan.\n      // Check whether AQE inserted another write command on top of an existing write command.\n      case _ @DataWritingCommandExec(_, w: WriteFilesExec)\n          if w.child.isInstanceOf[CometNativeWriteExec] =>\n        w.child\n\n      case op: DataWritingCommandExec =>\n        convertToComet(op, CometDataWritingCommand).getOrElse(op)\n\n      // For AQE broadcast stage on a Comet broadcast exchange\n      case s @ BroadcastQueryStageExec(_, _: CometBroadcastExchangeExec, _) =>\n        convertToComet(s, CometExchangeSink).getOrElse(s)\n\n      case s @ BroadcastQueryStageExec(\n            _,\n            ReusedExchangeExec(_, _: CometBroadcastExchangeExec),\n            _) =>\n        convertToComet(s, CometExchangeSink).getOrElse(s)\n\n      // `CometBroadcastExchangeExec`'s broadcast output is not compatible with Spark's broadcast\n      // exchange. It is only used for Comet native execution. We only transform Spark broadcast\n      // exchange to Comet broadcast exchange if its downstream is a Comet native plan or if the\n      // broadcast exchange is forced to be enabled by Comet config.\n      case plan if plan.children.exists(_.isInstanceOf[BroadcastExchangeExec]) =>\n        val newChildren = plan.children.map {\n          case b: BroadcastExchangeExec if b.children.forall(_.isInstanceOf[CometNativeExec]) =>\n            convertToComet(b, CometBroadcastExchangeExec).getOrElse(b)\n          case other => other\n        }\n        if (!newChildren.exists(_.isInstanceOf[BroadcastExchangeExec])) {\n          val newPlan = convertNode(plan.withNewChildren(newChildren))\n          if (isCometNative(newPlan) || CometConf.COMET_EXEC_BROADCAST_FORCE_ENABLED.get(conf)) {\n            newPlan\n          } else {\n            // copy fallback reasons to the original plan\n            newPlan\n              .getTagValue(CometExplainInfo.EXTENSION_INFO)\n              .foreach(reasons => withInfos(plan, reasons))\n            // return the original plan\n            plan\n          }\n        } else {\n          plan\n        }\n\n      // For AQE shuffle stage on a Comet shuffle exchange\n      case s @ ShuffleQueryStageExec(_, _: CometShuffleExchangeExec, _) =>\n        convertToComet(s, CometExchangeSink).getOrElse(s)\n\n      // For AQE shuffle stage on a reused Comet shuffle exchange\n      // Note that we don't need to handle `ReusedExchangeExec` for the non-AQE case, because\n      // the query plan won't be re-optimized/planned in non-AQE mode.\n      case s @ ShuffleQueryStageExec(_, ReusedExchangeExec(_, _: CometShuffleExchangeExec), _) =>\n        convertToComet(s, CometExchangeSink).getOrElse(s)\n\n      case s: ShuffleExchangeExec =>\n        convertToComet(s, CometShuffleExchangeExec).getOrElse(s)\n\n      case op =>\n        // if all children are native (or if this is a leaf node) then see if there is a\n        // registered handler for creating a fully native plan\n        if (op.children.forall(_.isInstanceOf[CometNativeExec])) {\n          val handler = allExecs\n            .get(op.getClass)\n            .map(_.asInstanceOf[CometOperatorSerde[SparkPlan]])\n          
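// A registered serde handler means this operator type can potentially be converted to a\n          // fully native Comet operator.\n          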
handler match {\n            case Some(handler) =>\n              return convertToComet(op, handler).getOrElse(op)\n            case _ =>\n          }\n        }\n\n        op match {\n          case _: CometPlan | _: AQEShuffleReadExec | _: BroadcastExchangeExec |\n              _: BroadcastQueryStageExec | _: AdaptiveSparkPlanExec | _: ExecutedCommandExec |\n              _: V2CommandExec =>\n            // Some execs should never be replaced. We include\n            // these cases specially here so we do not add a misleading 'info' message\n            op\n          case _ =>\n            // The operator was not converted to a Comet plan. Possible reasons for this happening:\n            // 1. Comet does not support this operator.\n            // 2. The operator could not be supported based on query context and current\n            //    configs. In this case, it should have already been tagged with fallback\n            //    reasons.\n            // 3. The operator has children that could not be converted, so execution\n            //    has already fallen back to Spark.\n            if (op.children.forall(_.isInstanceOf[CometNativeExec]) && !hasExplainInfo(op)) {\n              withInfo(op, s\"${op.nodeName} is not supported\")\n            } else {\n              op\n            }\n        }\n    }\n\n    plan.transformUp { case op =>\n      convertNode(op)\n    }\n  }\n\n  private def normalizePlan(plan: SparkPlan): SparkPlan = {\n    plan.transformUp {\n      case p: ProjectExec =>\n        val newProjectList = p.projectList.map(normalize(_).asInstanceOf[NamedExpression])\n        ProjectExec(newProjectList, p.child)\n      case f: FilterExec =>\n        val newCondition = normalize(f.condition)\n        FilterExec(newCondition, f.child)\n    }\n  }\n\n  // Spark will normalize NaN and zero for floating point numbers for several cases.\n  // See `NormalizeFloatingNumbers` optimization rule in Spark.\n  // However, one exception is for comparison operators. Spark does not normalize NaN and zero\n  // because they are handled well in Spark (e.g., `SQLOrderingUtil.compareFloats`). But the\n  // comparison functions in arrow-rs do not normalize NaN and zero. 
So we need to normalize NaN\n  // and zero for comparison operators in Comet.\n  private def normalize(expr: Expression): Expression = {\n    expr.transformUp {\n      case EqualTo(left, right) =>\n        EqualTo(normalizeNaNAndZero(left), normalizeNaNAndZero(right))\n      case EqualNullSafe(left, right) =>\n        EqualNullSafe(normalizeNaNAndZero(left), normalizeNaNAndZero(right))\n      case GreaterThan(left, right) =>\n        GreaterThan(normalizeNaNAndZero(left), normalizeNaNAndZero(right))\n      case GreaterThanOrEqual(left, right) =>\n        GreaterThanOrEqual(normalizeNaNAndZero(left), normalizeNaNAndZero(right))\n      case LessThan(left, right) =>\n        LessThan(normalizeNaNAndZero(left), normalizeNaNAndZero(right))\n      case LessThanOrEqual(left, right) =>\n        LessThanOrEqual(normalizeNaNAndZero(left), normalizeNaNAndZero(right))\n      case Divide(left, right, evalMode) =>\n        Divide(left, normalizeNaNAndZero(right), evalMode)\n      case Remainder(left, right, evalMode) =>\n        Remainder(left, normalizeNaNAndZero(right), evalMode)\n    }\n  }\n\n  private def normalizeNaNAndZero(expr: Expression): Expression = {\n    expr match {\n      case _: KnownFloatingPointNormalized => expr\n      case FloatLiteral(f) if !f.equals(-0.0f) => expr\n      case DoubleLiteral(d) if !d.equals(-0.0d) => expr\n      case _ =>\n        expr.dataType match {\n          case _: FloatType | _: DoubleType =>\n            KnownFloatingPointNormalized(NormalizeNaNAndZero(expr))\n          case _ => expr\n        }\n    }\n  }\n\n  override def apply(plan: SparkPlan): SparkPlan = {\n    val newPlan = _apply(plan)\n    if (showTransformations && !newPlan.fastEquals(plan)) {\n      logInfo(s\"\"\"\n           |=== Applying Rule $ruleName ===\n           |${sideBySide(plan.treeString, newPlan.treeString).mkString(\"\\n\")}\n           |\"\"\".stripMargin)\n    }\n    newPlan\n  }\n\n  private def _apply(plan: SparkPlan): SparkPlan = {\n    // We shouldn't transform the Spark query plan if Comet is not loaded.\n    if (!isCometLoaded(conf)) return plan\n\n    if (!CometConf.COMET_EXEC_ENABLED.get(conf)) {\n      // Comet exec is disabled, but for Spark shuffle, we can still use Comet columnar shuffle\n      if (isCometShuffleEnabled(conf)) {\n        applyCometShuffle(plan)\n      } else {\n        plan\n      }\n    } else {\n      val normalizedPlan = normalizePlan(plan)\n\n      val planWithJoinRewritten = if (CometConf.COMET_REPLACE_SMJ.get()) {\n        normalizedPlan.transformUp { case p =>\n          RewriteJoin.rewrite(p)\n        }\n      } else {\n        normalizedPlan\n      }\n\n      var newPlan = transform(planWithJoinRewritten)\n\n      // if the plan cannot be run fully natively then explain why (when the appropriate\n      // config is enabled)\n      if (CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.get()) {\n        val info = new ExtendedExplainInfo()\n        if (info.extensionInfo(newPlan).nonEmpty) {\n          logWarning(\n            \"Comet cannot execute some parts of this plan natively \" +\n              s\"(set ${CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key}=false \" +\n              \"to disable this logging):\\n\" +\n              s\"${info.generateExtendedInfo(newPlan)}\")\n        }\n      }\n\n      // Remove placeholders\n      newPlan = newPlan.transform {\n        case CometSinkPlaceHolder(_, _, s) => s\n        case CometScanWrapper(_, s) => s\n      }\n\n      // Set up logical links\n      newPlan = newPlan.transform {\n        case op: CometExec =>\n   
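       // Keep the original plan's logical link for AQE; clear the plan tags when it is absent.\n   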
       if (op.originalPlan.logicalLink.isEmpty) {\n            op.unsetTagValue(SparkPlan.LOGICAL_PLAN_TAG)\n            op.unsetTagValue(SparkPlan.LOGICAL_PLAN_INHERITED_TAG)\n          } else {\n            op.originalPlan.logicalLink.foreach(op.setLogicalLink)\n          }\n          op\n        case op: CometShuffleExchangeExec =>\n          // The original Spark shuffle exchange operator might have an empty logical link.\n          // But the `setLogicalLink` call above on a downstream operator of\n          // `CometShuffleExchangeExec` will set its logical link to that of the downstream\n          // operators, which causes AQE behavior to be incorrect. So we need to unset\n          // the logical link here.\n          if (op.originalPlan.logicalLink.isEmpty) {\n            op.unsetTagValue(SparkPlan.LOGICAL_PLAN_TAG)\n            op.unsetTagValue(SparkPlan.LOGICAL_PLAN_INHERITED_TAG)\n          } else {\n            op.originalPlan.logicalLink.foreach(op.setLogicalLink)\n          }\n          op\n\n        case op: CometBroadcastExchangeExec =>\n          if (op.originalPlan.logicalLink.isEmpty) {\n            op.unsetTagValue(SparkPlan.LOGICAL_PLAN_TAG)\n            op.unsetTagValue(SparkPlan.LOGICAL_PLAN_INHERITED_TAG)\n          } else {\n            op.originalPlan.logicalLink.foreach(op.setLogicalLink)\n          }\n          op\n      }\n\n      // Convert native execution blocks by linking consecutive native operators.\n      var firstNativeOp = true\n      newPlan.transformDown {\n        case op: CometNativeExec =>\n          val newPlan = if (firstNativeOp) {\n            firstNativeOp = false\n            op.convertBlock()\n          } else {\n            op\n          }\n\n          // If we reach a leaf node, reset `firstNativeOp` to true\n          // because a new block will start in the next iteration.\n          if (op.children.isEmpty) {\n            firstNativeOp = true\n          }\n\n          // CometNativeWriteExec is special: it has two separate plans:\n          // 1. A protobuf plan (nativeOp) describing the write operation\n          // 2. A Spark plan (child) that produces the data to write\n          // The serializedPlanOpt is a def that always returns Some(...) by serializing\n          // nativeOp on-demand, so it doesn't need convertBlock(). However, its child\n          // (e.g., CometNativeScanExec) may need its own serialization. Reset the flag\n          // so children can start their own native execution blocks.\n          if (op.isInstanceOf[CometNativeWriteExec]) {\n            firstNativeOp = true\n          }\n\n          newPlan\n        case op =>\n          firstNativeOp = true\n          op\n      }\n    }\n  }\n\n  /** Convert a Spark plan to a Comet plan using the specified serde handler */\n  private def convertToComet(op: SparkPlan, handler: CometOperatorSerde[_]): Option[SparkPlan] = {\n    val serde = handler.asInstanceOf[CometOperatorSerde[SparkPlan]]\n    if (isOperatorEnabled(serde, op)) {\n      // For operators that require native children (like writes), check if all data-producing\n      // children are CometNativeExec. 
This prevents runtime failures when the native operator\n      // expects Arrow arrays but receives non-Arrow data (e.g., OnHeapColumnVector).\n      if (serde.requiresNativeChildren && op.children.nonEmpty) {\n        // Get the actual data-producing children (unwrap WriteFilesExec if present)\n        val dataProducingChildren = op.children.flatMap {\n          case writeFiles: WriteFilesExec => Seq(writeFiles.child)\n          case other => Seq(other)\n        }\n        if (!dataProducingChildren.forall(_.isInstanceOf[CometNativeExec])) {\n          withInfo(op, \"Cannot perform native operation because input is not in Arrow format\")\n          return None\n        }\n      }\n\n      val builder = OperatorOuterClass.Operator.newBuilder().setPlanId(op.id)\n      if (op.children.nonEmpty && op.children.forall(_.isInstanceOf[CometNativeExec])) {\n        val childOp = op.children.map(_.asInstanceOf[CometNativeExec].nativeOp)\n        childOp.foreach(builder.addChildren)\n        return serde\n          .convert(op, builder, childOp: _*)\n          .map(nativeOp => serde.createExec(nativeOp, op))\n      } else {\n        return serde\n          .convert(op, builder)\n          .map(nativeOp => serde.createExec(nativeOp, op))\n      }\n    }\n    None\n  }\n\n  private def isOperatorEnabled(\n      handler: CometOperatorSerde[SparkPlan],\n      op: SparkPlan): Boolean = {\n    val opName = op.getClass.getSimpleName\n    if (handler.enabledConfig.forall(_.get(op.conf))) {\n      handler.getSupportLevel(op) match {\n        case Unsupported(notes) =>\n          withInfo(op, notes.getOrElse(\"\"))\n          false\n        case Incompatible(notes) =>\n          val allowIncompat = CometConf.isOperatorAllowIncompat(opName)\n          val incompatConf = CometConf.getOperatorAllowIncompatConfigKey(opName)\n          if (allowIncompat) {\n            if (notes.isDefined) {\n              logWarning(\n                s\"Comet supports $opName when $incompatConf=true \" +\n                  s\"but has notes: ${notes.get}\")\n            }\n            true\n          } else {\n            val optionalNotes = notes.map(str => s\" ($str)\").getOrElse(\"\")\n            withInfo(\n              op,\n              s\"$opName is not fully compatible with Spark$optionalNotes. \" +\n                s\"To enable it anyway, set $incompatConf=true. \" +\n                s\"${CometConf.COMPAT_GUIDE}.\")\n            false\n          }\n        case Compatible(notes) =>\n          if (notes.isDefined) {\n            logWarning(s\"Comet supports $opName but has notes: ${notes.get}\")\n          }\n          true\n      }\n    } else {\n      withInfo(\n        op,\n        s\"Native support for operator $opName is disabled. \" +\n          s\"Set ${handler.enabledConfig.get.key}=true to enable it.\")\n      false\n    }\n  }\n\n  private def shouldApplySparkToColumnar(conf: SQLConf, op: SparkPlan): Boolean = {\n    // Only consider converting leaf nodes to columnar currently, so that all the following\n    // operators can have a chance to be converted to columnar. 
Leaf operators that output\n    // columnar batches, such as Spark's vectorized readers, will also be converted to native\n    // comet batches.\n    val fallbackReasons = new ListBuffer[String]()\n    if (CometSparkToColumnarExec.isSchemaSupported(op.schema, fallbackReasons)) {\n      op match {\n        // Convert Spark DS v1 scan to Arrow format\n        case scan: FileSourceScanExec =>\n          scan.relation.fileFormat match {\n            case _: CSVFileFormat => CometConf.COMET_CONVERT_FROM_CSV_ENABLED.get(conf)\n            case _: JsonFileFormat => CometConf.COMET_CONVERT_FROM_JSON_ENABLED.get(conf)\n            case _: ParquetFileFormat => CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.get(conf)\n            case _ => isSparkToArrowEnabled(conf, op)\n          }\n        // Convert Spark DS v2 scan to Arrow format\n        case scan: BatchScanExec =>\n          scan.scan match {\n            case _: CSVScan => CometConf.COMET_CONVERT_FROM_CSV_ENABLED.get(conf)\n            case _: JsonScan => CometConf.COMET_CONVERT_FROM_JSON_ENABLED.get(conf)\n            case _: ParquetScan => CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.get(conf)\n            case _ => isSparkToArrowEnabled(conf, op)\n          }\n        // other leaf nodes\n        case _: LeafExecNode =>\n          isSparkToArrowEnabled(conf, op)\n        case _ =>\n          // TODO: consider converting other intermediate operators to columnar.\n          false\n      }\n    } else {\n      false\n    }\n  }\n\n  private def isSparkToArrowEnabled(conf: SQLConf, op: SparkPlan) = {\n    COMET_SPARK_TO_ARROW_ENABLED.get(conf) && {\n      val simpleClassName = Utils.getSimpleName(op.getClass)\n      val nodeName = simpleClassName.replaceAll(\"Exec$\", \"\")\n      COMET_SPARK_TO_ARROW_SUPPORTED_OPERATOR_LIST.get(conf).contains(nodeName)\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/rules/CometScanRule.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.rules\n\nimport java.net.URI\n\nimport scala.collection.mutable\nimport scala.collection.mutable.ListBuffer\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.hadoop.conf.Configuration\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, DynamicPruningExpression, Expression, GenericInternalRow, InputFileBlockLength, InputFileBlockStart, InputFileName, PlanExpression}\nimport org.apache.spark.sql.catalyst.rules.Rule\nimport org.apache.spark.sql.catalyst.util.{sideBySide, ArrayBasedMapData, GenericArrayData, MetadataColumnHelper}\nimport org.apache.spark.sql.catalyst.util.ResolveDefaultColumns.getExistenceDefaultValues\nimport org.apache.spark.sql.comet.{CometBatchScanExec, CometScanExec}\nimport org.apache.spark.sql.execution.{FileSourceScanExec, InSubqueryExec, SparkPlan, SubqueryAdaptiveBroadcastExec}\nimport org.apache.spark.sql.execution.datasources.HadoopFsRelation\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetUtils\nimport org.apache.spark.sql.execution.datasources.v2.BatchScanExec\nimport org.apache.spark.sql.execution.datasources.v2.csv.CSVScan\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.{CometConf, CometNativeException, DataTypeSupport}\nimport org.apache.comet.CometConf._\nimport org.apache.comet.CometSparkSessionExtensions.{isCometLoaded, withInfo, withInfos}\nimport org.apache.comet.DataTypeSupport.isComplexType\nimport org.apache.comet.iceberg.{CometIcebergNativeScanMetadata, IcebergReflection}\nimport org.apache.comet.objectstore.NativeConfig\nimport org.apache.comet.parquet.CometParquetUtils.{encryptionEnabled, isEncryptionConfigSupported}\nimport org.apache.comet.parquet.Native\nimport org.apache.comet.serde.operator.{CometIcebergNativeScan, CometNativeScan}\nimport org.apache.comet.shims.{CometTypeShim, ShimFileFormat, ShimSubqueryBroadcast}\n\n/**\n * Spark physical optimizer rule for replacing Spark scans with Comet scans.\n */\ncase class CometScanRule(session: SparkSession)\n    extends Rule[SparkPlan]\n    with CometTypeShim\n    with ShimSubqueryBroadcast {\n\n  private lazy val showTransformations = CometConf.COMET_EXPLAIN_TRANSFORMATIONS.get()\n\n  override def apply(plan: SparkPlan): SparkPlan = {\n    val newPlan = _apply(plan)\n    if (showTransformations && !newPlan.fastEquals(plan)) {\n      logInfo(s\"\"\"\n           |=== Applying Rule $ruleName ===\n           |${sideBySide(plan.treeString, newPlan.treeString).mkString(\"\\n\")}\n           |\"\"\".stripMargin)\n    }\n    newPlan\n  }\n\n  private def _apply(plan: 
SparkPlan): SparkPlan = {\n    if (!isCometLoaded(conf)) return plan\n\n    def isSupportedScanNode(plan: SparkPlan): Boolean = plan match {\n      case _: FileSourceScanExec => true\n      case _: BatchScanExec => true\n      case _ => false\n    }\n\n    def hasMetadataCol(plan: SparkPlan): Boolean = {\n      plan.expressions.exists(_.exists {\n        case a: Attribute =>\n          a.isMetadataCol\n        case _ => false\n      })\n    }\n\n    def isIcebergMetadataTable(scanExec: BatchScanExec): Boolean = {\n      // List of Iceberg metadata tables:\n      // https://iceberg.apache.org/docs/latest/spark-queries/#inspecting-tables\n      val metadataTableSuffix = Set(\n        \"history\",\n        \"metadata_log_entries\",\n        \"snapshots\",\n        \"entries\",\n        \"files\",\n        \"manifests\",\n        \"partitions\",\n        \"position_deletes\",\n        \"all_data_files\",\n        \"all_delete_files\",\n        \"all_entries\",\n        \"all_manifests\")\n\n      metadataTableSuffix.exists(suffix => scanExec.table.name().endsWith(suffix))\n    }\n\n    val fullPlan = plan\n\n    def transformScan(scanNode: SparkPlan): SparkPlan = scanNode match {\n      case scan if !CometConf.COMET_NATIVE_SCAN_ENABLED.get(conf) =>\n        withInfo(scan, \"Comet Scan is not enabled\")\n\n      case scan if hasMetadataCol(scan) =>\n        withInfo(scan, \"Metadata column is not supported\")\n\n      // data source V1\n      case scanExec: FileSourceScanExec =>\n        transformV1Scan(fullPlan, scanExec)\n\n      // data source V2\n      case scanExec: BatchScanExec =>\n        if (isIcebergMetadataTable(scanExec)) {\n          withInfo(scanExec, \"Iceberg Metadata tables are not supported\")\n        } else {\n          transformV2Scan(scanExec)\n        }\n    }\n\n    plan.transform {\n      case scan if isSupportedScanNode(scan) => transformScan(scan)\n    }\n  }\n\n  private def transformV1Scan(plan: SparkPlan, scanExec: FileSourceScanExec): SparkPlan = {\n\n    if (COMET_DPP_FALLBACK_ENABLED.get() &&\n      scanExec.partitionFilters.exists(isDynamicPruningFilter)) {\n      return withInfo(scanExec, \"Dynamic Partition Pruning is not supported\")\n    }\n\n    scanExec.relation match {\n      case r: HadoopFsRelation =>\n        if (!CometScanExec.isFileFormatSupported(r.fileFormat)) {\n          return withInfo(scanExec, s\"Unsupported file format ${r.fileFormat}\")\n        }\n        val hadoopConf = r.sparkSession.sessionState.newHadoopConfWithOptions(r.options)\n\n        // TODO is this restriction valid for all native scan types?\n        val possibleDefaultValues = getExistenceDefaultValues(scanExec.requiredSchema)\n        if (possibleDefaultValues.exists(d => {\n            d != null && (d.isInstanceOf[ArrayBasedMapData] || d\n              .isInstanceOf[GenericInternalRow] || d.isInstanceOf[GenericArrayData])\n          })) {\n          // Spark already converted these to Java-native types, so we can't check SQL types.\n          // ArrayBasedMapData, GenericInternalRow, GenericArrayData correspond to maps, structs,\n          // and arrays respectively.\n          withInfo(\n            scanExec,\n            \"Full native scan disabled because default values for nested types are not supported\")\n          return scanExec\n        }\n\n        COMET_NATIVE_SCAN_IMPL.get() match {\n          case SCAN_AUTO | SCAN_NATIVE_DATAFUSION =>\n            nativeDataFusionScan(plan, session, scanExec, r, hadoopConf).getOrElse(scanExec)\n          case 
SCAN_NATIVE_ICEBERG_COMPAT =>\n            nativeIcebergCompatScan(session, scanExec, r, hadoopConf).getOrElse(scanExec)\n        }\n\n      case _ =>\n        withInfo(scanExec, s\"Unsupported relation ${scanExec.relation}\")\n    }\n  }\n\n  private def nativeDataFusionScan(\n      plan: SparkPlan,\n      session: SparkSession,\n      scanExec: FileSourceScanExec,\n      r: HadoopFsRelation,\n      hadoopConf: Configuration): Option[SparkPlan] = {\n    if (!COMET_EXEC_ENABLED.get()) {\n      withInfo(\n        scanExec,\n        s\"$SCAN_NATIVE_DATAFUSION scan requires ${COMET_EXEC_ENABLED.key} to be enabled\")\n      return None\n    }\n    if (!CometNativeScan.isSupported(scanExec)) {\n      return None\n    }\n    if (encryptionEnabled(hadoopConf) && !isEncryptionConfigSupported(hadoopConf)) {\n      withInfo(scanExec, s\"$SCAN_NATIVE_DATAFUSION does not support encryption\")\n      return None\n    }\n    if (scanExec.fileConstantMetadataColumns.nonEmpty) {\n      withInfo(scanExec, \"Native DataFusion scan does not support metadata columns\")\n      return None\n    }\n    // input_file_name, input_file_block_start, and input_file_block_length read from\n    // InputFileBlockHolder, a thread-local set by Spark's FileScanRDD. The native DataFusion\n    // scan does not use FileScanRDD, so these expressions would return empty/default values.\n    if (plan.exists(node =>\n        node.expressions.exists(_.exists {\n          case _: InputFileName | _: InputFileBlockStart | _: InputFileBlockLength => true\n          case _ => false\n        }))) {\n      withInfo(\n        scanExec,\n        \"Native DataFusion scan is not compatible with input_file_name, \" +\n          \"input_file_block_start, or input_file_block_length\")\n      return None\n    }\n    if (ShimFileFormat.findRowIndexColumnIndexInSchema(scanExec.requiredSchema) >= 0) {\n      withInfo(scanExec, \"Native DataFusion scan does not support row index generation\")\n      return None\n    }\n    if (session.sessionState.conf.getConf(SQLConf.PARQUET_FIELD_ID_READ_ENABLED) &&\n      ParquetUtils.hasFieldIds(scanExec.requiredSchema)) {\n      withInfo(scanExec, \"Native DataFusion scan does not support Parquet field ID matching\")\n      return None\n    }\n    if (!isSchemaSupported(scanExec, SCAN_NATIVE_DATAFUSION, r)) {\n      return None\n    }\n    Some(CometScanExec(scanExec, session, SCAN_NATIVE_DATAFUSION))\n  }\n\n  private def nativeIcebergCompatScan(\n      session: SparkSession,\n      scanExec: FileSourceScanExec,\n      r: HadoopFsRelation,\n      hadoopConf: Configuration): Option[SparkPlan] = {\n    if (encryptionEnabled(hadoopConf) && !isEncryptionConfigSupported(hadoopConf)) {\n      withInfo(scanExec, s\"$SCAN_NATIVE_ICEBERG_COMPAT does not support encryption\")\n      return None\n    }\n    if (!isSchemaSupported(scanExec, SCAN_NATIVE_ICEBERG_COMPAT, r)) {\n      return None\n    }\n    Some(CometScanExec(scanExec, session, SCAN_NATIVE_ICEBERG_COMPAT))\n  }\n\n  private def transformV2Scan(scanExec: BatchScanExec): SparkPlan = {\n\n    scanExec.scan match {\n      case scan: CSVScan if COMET_CSV_V2_NATIVE_ENABLED.get() =>\n        val fallbackReasons = new ListBuffer[String]()\n        val schemaSupported =\n          CometBatchScanExec.isSchemaSupported(scan.readDataSchema, fallbackReasons)\n        if (!schemaSupported) {\n          fallbackReasons += s\"Schema ${scan.readDataSchema} is not supported\"\n        }\n        val partitionSchemaSupported =\n          
CometBatchScanExec.isSchemaSupported(scan.readPartitionSchema, fallbackReasons)\n        if (!partitionSchemaSupported) {\n          fallbackReasons += s\"Partition schema ${scan.readPartitionSchema} is not supported\"\n        }\n        val corruptedRecordsColumnName =\n          SQLConf.get.getConf(SQLConf.COLUMN_NAME_OF_CORRUPT_RECORD)\n        val hasCorruptRecordColumn =\n          scan.readDataSchema.fieldNames.contains(corruptedRecordsColumnName)\n        if (hasCorruptRecordColumn) {\n          fallbackReasons += \"Comet doesn't support the processing of corrupted records\"\n        }\n        val isInferSchemaEnabled = scan.options.getBoolean(\"inferSchema\", false)\n        if (isInferSchemaEnabled) {\n          fallbackReasons += \"Comet doesn't support inferSchema=true option\"\n        }\n        val delimiter =\n          Option(scan.options.get(\"delimiter\"))\n            .orElse(Option(scan.options.get(\"sep\")))\n            .getOrElse(\",\")\n        val isSingleCharacterDelimiter = delimiter.length == 1\n        if (!isSingleCharacterDelimiter) {\n          fallbackReasons +=\n            s\"Comet supports only single-character delimiters, but got: '$delimiter'\"\n        }\n        if (schemaSupported && partitionSchemaSupported && !hasCorruptRecordColumn\n          && !isInferSchemaEnabled && isSingleCharacterDelimiter) {\n          CometBatchScanExec(\n            scanExec.clone().asInstanceOf[BatchScanExec],\n            runtimeFilters = scanExec.runtimeFilters)\n        } else {\n          withInfos(scanExec, fallbackReasons.toSet)\n        }\n\n      // Iceberg scan - detected by class name\n      case _\n          if scanExec.scan.getClass.getName ==\n            \"org.apache.iceberg.spark.source.SparkBatchQueryScan\" =>\n        val fallbackReasons = new ListBuffer[String]()\n\n        // Native Iceberg scan requires both configs to be enabled\n        if (!COMET_ICEBERG_NATIVE_ENABLED.get()) {\n          fallbackReasons += \"Native Iceberg scan disabled because \" +\n            s\"${COMET_ICEBERG_NATIVE_ENABLED.key} is not enabled\"\n          return withInfos(scanExec, fallbackReasons.toSet)\n        }\n\n        if (!COMET_EXEC_ENABLED.get()) {\n          fallbackReasons += \"Native Iceberg scan disabled because \" +\n            s\"${COMET_EXEC_ENABLED.key} is not enabled\"\n          return withInfos(scanExec, fallbackReasons.toSet)\n        }\n\n        val typeChecker = CometScanTypeChecker(SCAN_NATIVE_DATAFUSION)\n        val schemaSupported =\n          typeChecker.isSchemaSupported(scanExec.scan.readSchema(), fallbackReasons)\n\n        if (!schemaSupported) {\n          fallbackReasons += \"Comet extension is not enabled for \" +\n            s\"${scanExec.scan.getClass.getSimpleName}: Schema not supported\"\n        }\n\n        // Extract all Iceberg metadata once using reflection.\n        // If any required reflection fails, this returns None, and we fall back to Spark.\n        // First get metadataLocation and catalogProperties which are needed by the factory.\n        val tableOpt = IcebergReflection.getTable(scanExec.scan)\n\n        val metadataLocationOpt = tableOpt.flatMap { table =>\n          IcebergReflection.getMetadataLocation(table)\n        }\n\n        val metadataOpt = metadataLocationOpt.flatMap { metadataLocation =>\n          try {\n            val session = org.apache.spark.sql.SparkSession.active\n            val hadoopConf = session.sessionState.newHadoopConf()\n\n            // For REST catalogs, 
the metadata file may not exist on disk since metadata\n            // is fetched via HTTP. Check if file exists; if not, use table location instead.\n            val metadataUri = new java.net.URI(metadataLocation)\n\n            val metadataFile = new java.io.File(metadataUri.getPath)\n\n            val effectiveLocation =\n              if (!metadataFile.exists() && metadataUri.getScheme == \"file\") {\n                // Metadata file doesn't exist (REST catalog with InMemoryFileIO or similar)\n                // Use table location instead for FileIO initialization\n\n                tableOpt\n                  .flatMap { table =>\n                    try {\n                      val locationMethod = table.getClass.getMethod(\"location\")\n                      val tableLocation = locationMethod.invoke(table).asInstanceOf[String]\n                      Some(tableLocation)\n                    } catch {\n                      case _: Exception =>\n                        Some(metadataLocation)\n                    }\n                  }\n                  .getOrElse(metadataLocation)\n              } else {\n                metadataLocation\n              }\n\n            val effectiveUri = new java.net.URI(effectiveLocation)\n\n            val hadoopS3Options = NativeConfig.extractObjectStoreOptions(hadoopConf, effectiveUri)\n\n            val hadoopDerivedProperties =\n              CometIcebergNativeScan.hadoopToIcebergS3Properties(hadoopS3Options)\n\n            // Extract vended credentials from FileIO (REST catalog credential vending).\n            // FileIO properties take precedence over Hadoop-derived properties because\n            // they contain per-table credentials vended by the REST catalog.\n            val fileIOProperties = tableOpt\n              .flatMap(IcebergReflection.getFileIOProperties)\n              .map(CometIcebergNativeScan.filterStorageProperties)\n              .getOrElse(Map.empty)\n\n            val catalogProperties = hadoopDerivedProperties ++ fileIOProperties\n\n            val result = CometIcebergNativeScanMetadata\n              .extract(scanExec.scan, effectiveLocation, catalogProperties)\n\n            result\n          } catch {\n            case e: Exception =>\n              logError(\n                s\"Failed to extract catalog properties from Iceberg scan: ${e.getMessage}\",\n                e)\n              None\n          }\n        }\n\n        // If metadata extraction failed, fall back to Spark\n        val metadata = metadataOpt match {\n          case Some(m) => m\n          case None =>\n            fallbackReasons += \"Failed to extract Iceberg metadata via reflection\"\n            return withInfos(scanExec, fallbackReasons.toSet)\n        }\n\n        // Now perform all validation using the pre-extracted metadata\n        // Check if table uses a FileIO implementation compatible with iceberg-rust\n\n        val fileIOCompatible = IcebergReflection.getFileIO(metadata.table) match {\n          case Some(_) =>\n            // InMemoryFileIO is now supported with table location fallback for REST catalogs\n            true\n          case None =>\n            fallbackReasons += \"Could not check FileIO compatibility\"\n            false\n        }\n\n        // Check Iceberg table format version\n\n        val formatVersionSupported = IcebergReflection.getFormatVersion(metadata.table) match {\n          case Some(formatVersion) =>\n            if (formatVersion > 2) {\n              fallbackReasons += \"Iceberg table format version 
\" +\n                s\"$formatVersion is not supported. \" +\n                \"Comet only supports Iceberg table format V1 and V2\"\n              false\n            } else {\n              true\n            }\n          case None =>\n            fallbackReasons += \"Could not verify Iceberg table format version\"\n            false\n        }\n\n        // Single-pass validation of all FileScanTasks\n        val taskValidation =\n          try {\n            CometScanRule.validateIcebergFileScanTasks(metadata.tasks)\n          } catch {\n            case e: Exception =>\n              fallbackReasons += \"Iceberg reflection failure: Could not validate \" +\n                s\"FileScanTasks: ${e.getMessage}\"\n              return withInfos(scanExec, fallbackReasons.toSet)\n          }\n\n        // Check if all files are Parquet format and use supported filesystem schemes\n        val allSupportedFilesystems = if (taskValidation.unsupportedSchemes.isEmpty) {\n          true\n        } else {\n          fallbackReasons += \"Iceberg scan contains files with unsupported filesystem \" +\n            s\"schemes: ${taskValidation.unsupportedSchemes.mkString(\", \")}. \" +\n            \"Comet only supports: file, s3, s3a, gs, gcs, oss, abfss, abfs, wasbs, wasb\"\n          false\n        }\n\n        if (!taskValidation.allParquet) {\n          fallbackReasons += \"Iceberg scan contains non-Parquet files (ORC or Avro). \" +\n            \"Comet only supports Parquet files in Iceberg tables\"\n        }\n\n        // Partition values are deserialized via iceberg-rust's Literal::try_from_json()\n        // which has incomplete type support (binary/fixed unimplemented, decimals limited)\n        val partitionTypesSupported = (for {\n          partitionSpec <- IcebergReflection.getPartitionSpec(metadata.table)\n        } yield {\n          val unsupportedTypes =\n            IcebergReflection.validatePartitionTypes(partitionSpec, metadata.scanSchema)\n\n          if (unsupportedTypes.nonEmpty) {\n            unsupportedTypes.foreach { case (fieldName, typeStr, reason) =>\n              fallbackReasons +=\n                s\"Partition column '$fieldName' with type $typeStr is not yet supported by \" +\n                  s\"iceberg-rust: $reason\"\n            }\n            false\n          } else {\n            true\n          }\n        }).getOrElse {\n          // Fall back to Spark if reflection fails - cannot verify safety\n          val msg =\n            \"Iceberg reflection failure: Could not verify partition types compatibility\"\n          logError(msg)\n          fallbackReasons += msg\n          false\n        }\n\n        // Get filter expressions for complex predicates check\n        val filterExpressionsOpt = IcebergReflection.getFilterExpressions(scanExec.scan)\n\n        // IS NULL/NOT NULL on complex types fail because iceberg-rust's accessor creation\n        // only handles primitive fields. Nested field filters work because Iceberg Java\n        // pre-binds them to field IDs. 
Element/key access filters don't push down to FileScanTasks.\n        val complexTypePredicatesSupported = filterExpressionsOpt\n          .map { filters =>\n            // Empty filters can't trigger accessor issues\n            if (filters.isEmpty) {\n              true\n            } else {\n              val readSchema = scanExec.scan.readSchema()\n\n              // Identify complex type columns that would trigger accessor creation failures\n              val complexColumns = readSchema\n                .filter(field => isComplexType(field.dataType))\n                .map(_.name)\n                .toSet\n\n              // Detect IS NULL/NOT NULL on complex columns (pattern: is_null(ref(name=\"col\")))\n              // Nested field filters use different patterns and don't trigger this issue\n              val hasComplexNullCheck = filters.asScala.exists { expr =>\n                val exprStr = expr.toString\n                val isNullCheck = exprStr.contains(\"is_null\") || exprStr.contains(\"not_null\")\n                if (isNullCheck) {\n                  complexColumns.exists { colName =>\n                    exprStr.contains(s\"\"\"ref(name=\"$colName\")\"\"\")\n                  }\n                } else {\n                  false\n                }\n              }\n\n              if (hasComplexNullCheck) {\n                fallbackReasons += \"IS NULL / IS NOT NULL predicates on complex type columns \" +\n                  \"(struct/array/map) are not yet supported by iceberg-rust \" +\n                  \"(nested field filters like address.city = 'NYC' are supported)\"\n                false\n              } else {\n                true\n              }\n            }\n          }\n          .getOrElse {\n            // Fall back to Spark if reflection fails - cannot verify safety\n            val msg =\n              \"Iceberg reflection failure: Could not check for complex type predicates\"\n            logError(msg)\n            fallbackReasons += msg\n            false\n          }\n\n        // Check for unsupported transform functions in residual expressions\n        // iceberg-rust can only handle identity transforms in residuals; all other transforms\n        // (truncate, bucket, year, month, day, hour) must fall back to Spark\n        val transformFunctionsSupported = taskValidation.nonIdentityTransform match {\n          case Some(transformType) =>\n            fallbackReasons +=\n              s\"Iceberg transform function '$transformType' in residual expression \" +\n                \"is not yet supported by iceberg-rust. 
\" +\n                \"Only identity transforms are supported.\"\n            false\n          case None =>\n            true\n        }\n\n        // Check for unsupported struct types in delete files\n        val deleteFileTypesSupported = {\n          var hasUnsupportedDeletes = false\n\n          try {\n            if (!taskValidation.deleteFiles.isEmpty) {\n              taskValidation.deleteFiles.asScala.foreach { deleteFile =>\n                val equalityFieldIds = IcebergReflection.getEqualityFieldIds(deleteFile)\n\n                if (!equalityFieldIds.isEmpty) {\n                  // Look up field types\n                  equalityFieldIds.asScala.foreach { fieldId =>\n                    val fieldInfo = IcebergReflection.getFieldInfo(\n                      metadata.scanSchema,\n                      fieldId.asInstanceOf[Int])\n                    fieldInfo match {\n                      case Some((fieldName, fieldType)) =>\n                        if (fieldType.contains(\"struct\")) {\n                          hasUnsupportedDeletes = true\n                          fallbackReasons +=\n                            s\"Equality delete on unsupported column type '$fieldName' \" +\n                              s\"($fieldType) is not yet supported by iceberg-rust. \" +\n                              \"Struct types in equality deletes \" +\n                              \"require datum conversion support that is not yet implemented.\"\n                        }\n                      case None =>\n                    }\n                  }\n                }\n              }\n            }\n          } catch {\n            case e: Exception =>\n              // Reflection failure means we cannot verify safety - must fall back\n              hasUnsupportedDeletes = true\n              fallbackReasons += \"Iceberg reflection failure: Could not verify delete file \" +\n                s\"types for safety: ${e.getMessage}\"\n          }\n\n          !hasUnsupportedDeletes\n        }\n\n        // Check that all DPP subqueries use InSubqueryExec which we know how to handle.\n        // Future Spark versions might introduce new subquery types we haven't tested.\n        val dppSubqueriesSupported = {\n          val unsupportedSubqueries = scanExec.runtimeFilters.collect {\n            case DynamicPruningExpression(e) if !e.isInstanceOf[InSubqueryExec] =>\n              e.getClass.getSimpleName\n          }\n          // Check for multi-index DPP which we don't support yet.\n          // SPARK-46946 changed SubqueryAdaptiveBroadcastExec from index: Int to indices: Seq[Int]\n          // as a preparatory refactor for future features (Null Safe Equality DPP, multiple\n          // equality predicates). Currently indices always has one element, but future Spark\n          // versions might use multiple indices.\n          val multiIndexDpp = scanExec.runtimeFilters.exists {\n            case DynamicPruningExpression(e: InSubqueryExec) =>\n              e.plan match {\n                case sab: SubqueryAdaptiveBroadcastExec =>\n                  getSubqueryBroadcastIndices(sab).length > 1\n                case _ => false\n              }\n            case _ => false\n          }\n          if (unsupportedSubqueries.nonEmpty) {\n            fallbackReasons +=\n              s\"Unsupported DPP subquery types: ${unsupportedSubqueries.mkString(\", \")}. 
\" +\n                \"CometIcebergNativeScanExec only supports InSubqueryExec for DPP\"\n            false\n          } else if (multiIndexDpp) {\n            // See SPARK-46946 for context on multi-index DPP\n            fallbackReasons +=\n              \"Multi-index DPP (indices.length > 1) is not yet supported. \" +\n                \"See SPARK-46946 for context.\"\n            false\n          } else {\n            true\n          }\n        }\n\n        if (schemaSupported && fileIOCompatible && formatVersionSupported &&\n          taskValidation.allParquet && allSupportedFilesystems && partitionTypesSupported &&\n          complexTypePredicatesSupported && transformFunctionsSupported &&\n          deleteFileTypesSupported && dppSubqueriesSupported) {\n          CometBatchScanExec(\n            scanExec.clone().asInstanceOf[BatchScanExec],\n            runtimeFilters = scanExec.runtimeFilters,\n            nativeIcebergScanMetadata = Some(metadata))\n        } else {\n          withInfos(scanExec, fallbackReasons.toSet)\n        }\n\n      case other =>\n        withInfo(\n          scanExec,\n          s\"Unsupported scan: ${other.getClass.getName}. \" +\n            \"Comet Scan only supports Parquet and Iceberg Parquet file formats\")\n    }\n  }\n\n  private def isDynamicPruningFilter(e: Expression): Boolean =\n    e.exists(_.isInstanceOf[PlanExpression[_]])\n\n  private def isSchemaSupported(\n      scanExec: FileSourceScanExec,\n      scanImpl: String,\n      r: HadoopFsRelation): Boolean = {\n    val fallbackReasons = new ListBuffer[String]()\n    val typeChecker = CometScanTypeChecker(scanImpl)\n    val schemaSupported =\n      typeChecker.isSchemaSupported(scanExec.requiredSchema, fallbackReasons)\n    if (!schemaSupported) {\n      withInfo(\n        scanExec,\n        s\"Unsupported schema ${scanExec.requiredSchema} \" +\n          s\"for $scanImpl: ${fallbackReasons.mkString(\", \")}\")\n      return false\n    }\n    val partitionSchemaSupported =\n      typeChecker.isSchemaSupported(r.partitionSchema, fallbackReasons)\n    if (!partitionSchemaSupported) {\n      withInfo(\n        scanExec,\n        s\"Unsupported partitioning schema ${scanExec.requiredSchema} \" +\n          s\"for $scanImpl: ${fallbackReasons\n              .mkString(\", \")}\")\n      return false\n    }\n    true\n  }\n}\n\ncase class CometScanTypeChecker(scanImpl: String) extends DataTypeSupport with CometTypeShim {\n\n  // this class is intended to be used with a specific scan impl\n  assert(scanImpl != CometConf.SCAN_AUTO)\n\n  override def isTypeSupported(\n      dt: DataType,\n      name: String,\n      fallbackReasons: ListBuffer[String]): Boolean = {\n    dt match {\n      case ShortType if CometConf.COMET_PARQUET_UNSIGNED_SMALL_INT_CHECK.get() =>\n        fallbackReasons += s\"$scanImpl scan may not handle unsigned UINT_8 correctly for $dt. \" +\n          s\"Set ${CometConf.COMET_PARQUET_UNSIGNED_SMALL_INT_CHECK.key}=false to allow \" +\n          \"native execution if your data does not contain unsigned small integers. 
\" +\n          CometConf.COMPAT_GUIDE\n        false\n      case dt if isStringCollationType(dt) =>\n        // we don't need specific support for collation in scans, but this\n        // is a convenient place to force the whole query to fall back to Spark for now\n        false\n      case s: StructType if s.fields.isEmpty =>\n        false\n      case _ =>\n        super.isTypeSupported(dt, name, fallbackReasons)\n    }\n  }\n}\n\nobject CometScanRule extends Logging {\n\n  /**\n   * Validating object store configs can cause requests to be made to S3 APIs (such as when\n   * resolving the region for a bucket). We use a cache to reduce the number of S3 calls.\n   *\n   * The key is the config map converted to a string. The value is the reason that the config is\n   * not valid, or None if the config is valid.\n   */\n  val configValidityMap = new mutable.HashMap[String, Option[String]]()\n\n  /**\n   * We do not expect to see a large number of unique configs within the lifetime of a Spark\n   * session, but we reset the cache once it reaches a fixed size to prevent it growing\n   * indefinitely.\n   */\n  val configValidityMapMaxSize = 1024\n\n  def validateObjectStoreConfig(\n      filePath: String,\n      hadoopConf: Configuration,\n      fallbackReasons: mutable.ListBuffer[String]): Unit = {\n    val objectStoreConfigMap =\n      NativeConfig.extractObjectStoreOptions(hadoopConf, URI.create(filePath))\n\n    val cacheKey = objectStoreConfigMap\n      .map { case (k, v) =>\n        s\"$k=$v\"\n      }\n      .toList\n      .sorted\n      .mkString(\"\\n\")\n\n    if (configValidityMap.size >= configValidityMapMaxSize) {\n      logWarning(\"Resetting S3 object store validity cache\")\n      configValidityMap.clear()\n    }\n\n    configValidityMap.get(cacheKey) match {\n      case Some(Some(reason)) =>\n        fallbackReasons += reason\n      case Some(None) =>\n      // previously validated\n      case _ =>\n        try {\n          val objectStoreOptions = objectStoreConfigMap.asJava\n          Native.validateObjectStoreConfig(filePath, objectStoreOptions)\n        } catch {\n          case e: CometNativeException =>\n            val reason = \"Object store config not supported by \" +\n              s\"$SCAN_NATIVE_ICEBERG_COMPAT: ${e.getMessage}\"\n            fallbackReasons += reason\n            configValidityMap.put(cacheKey, Some(reason))\n        }\n    }\n\n  }\n\n  /**\n   * Single-pass validation of Iceberg FileScanTasks.\n   *\n   * Consolidates file format, filesystem scheme, residual transform, and delete file checks into\n   * one iteration for better performance with large tables.\n   */\n  def validateIcebergFileScanTasks(tasks: java.util.List[_]): IcebergTaskValidationResult = {\n    // scalastyle:off classforname\n    val contentScanTaskClass = Class.forName(IcebergReflection.ClassNames.CONTENT_SCAN_TASK)\n    val contentFileClass = Class.forName(IcebergReflection.ClassNames.CONTENT_FILE)\n    val fileScanTaskClass = Class.forName(IcebergReflection.ClassNames.FILE_SCAN_TASK)\n    val unboundPredicateClass = Class.forName(IcebergReflection.ClassNames.UNBOUND_PREDICATE)\n    // scalastyle:on classforname\n\n    // Cache all method lookups outside the loop\n    val fileMethod = contentScanTaskClass.getMethod(\"file\")\n    val formatMethod = contentFileClass.getMethod(\"format\")\n    val pathMethod = contentFileClass.getMethod(\"path\")\n    val residualMethod = contentScanTaskClass.getMethod(\"residual\")\n    val deletesMethod = 
fileScanTaskClass.getMethod(\"deletes\")\n    val termMethod = unboundPredicateClass.getMethod(\"term\")\n\n    val supportedSchemes =\n      Set(\"file\", \"s3\", \"s3a\", \"gs\", \"gcs\", \"oss\", \"abfss\", \"abfs\", \"wasbs\", \"wasb\")\n\n    var allParquet = true\n    val unsupportedSchemes = mutable.Set[String]()\n    var nonIdentityTransform: Option[String] = None\n    val deleteFiles = new java.util.ArrayList[Any]()\n\n    tasks.asScala.foreach { task =>\n      val dataFile = fileMethod.invoke(task)\n\n      // File format check\n      val fileFormat = formatMethod.invoke(dataFile).toString\n      if (fileFormat != IcebergReflection.FileFormats.PARQUET) {\n        allParquet = false\n      }\n\n      // Filesystem scheme check for data file\n      try {\n        val filePath = pathMethod.invoke(dataFile).toString\n        val uri = new URI(filePath)\n        val scheme = uri.getScheme\n        if (scheme != null && !supportedSchemes.contains(scheme)) {\n          unsupportedSchemes += scheme\n        }\n      } catch {\n        case _: java.net.URISyntaxException => // ignore\n      }\n\n      // Residual transform check (short-circuit if already found unsupported)\n      if (nonIdentityTransform.isEmpty && fileScanTaskClass.isInstance(task)) {\n        try {\n          val residual = residualMethod.invoke(task)\n          if (unboundPredicateClass.isInstance(residual)) {\n            val term = termMethod.invoke(residual)\n            try {\n              val transformMethod = term.getClass.getMethod(\"transform\")\n              transformMethod.setAccessible(true)\n              val transform = transformMethod.invoke(term)\n              val transformStr = transform.toString\n              if (transformStr != IcebergReflection.Transforms.IDENTITY) {\n                nonIdentityTransform = Some(transformStr)\n              }\n            } catch {\n              case _: NoSuchMethodException => // No transform = simple reference, OK\n            }\n          }\n        } catch {\n          case _: Exception => // Skip tasks where we can't get residual\n        }\n      }\n\n      // Collect delete files and check their schemes\n      if (fileScanTaskClass.isInstance(task)) {\n        try {\n          val deletes = deletesMethod.invoke(task).asInstanceOf[java.util.List[_]]\n          deleteFiles.addAll(deletes)\n\n          deletes.asScala.foreach { deleteFile =>\n            IcebergReflection.extractFileLocation(contentFileClass, deleteFile).foreach {\n              deletePath =>\n                try {\n                  val deleteUri = new URI(deletePath)\n                  val deleteScheme = deleteUri.getScheme\n                  if (deleteScheme != null && !supportedSchemes.contains(deleteScheme)) {\n                    unsupportedSchemes += deleteScheme\n                  }\n                } catch {\n                  case _: java.net.URISyntaxException => // ignore\n                }\n            }\n          }\n        } catch {\n          case _: Exception => // ignore errors accessing delete files\n        }\n      }\n    }\n\n    IcebergTaskValidationResult(\n      allParquet,\n      unsupportedSchemes.toSet,\n      nonIdentityTransform,\n      deleteFiles)\n  }\n}\n\n/**\n * Result of single-pass validation over Iceberg FileScanTasks.\n */\ncase class IcebergTaskValidationResult(\n    allParquet: Boolean,\n    unsupportedSchemes: Set[String],\n    nonIdentityTransform: Option[String],\n    deleteFiles: java.util.List[_])\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/rules/EliminateRedundantTransitions.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.rules\n\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst.rules.Rule\nimport org.apache.spark.sql.catalyst.util.sideBySide\nimport org.apache.spark.sql.comet.{CometCollectLimitExec, CometColumnarToRowExec, CometNativeColumnarToRowExec, CometNativeWriteExec, CometPlan, CometSparkToColumnarExec}\nimport org.apache.spark.sql.comet.execution.shuffle.{CometColumnarShuffle, CometShuffleExchangeExec}\nimport org.apache.spark.sql.execution.{ColumnarToRowExec, RowToColumnarExec, SparkPlan}\nimport org.apache.spark.sql.execution.adaptive.QueryStageExec\nimport org.apache.spark.sql.execution.exchange.ReusedExchangeExec\n\nimport org.apache.comet.CometConf\n\n// This rule is responsible for eliminating redundant transitions between row-based and\n// columnar-based operators for Comet. Currently, three potential redundant transitions are:\n// 1. `ColumnarToRowExec` on top of an ending `CometCollectLimitExec` operator, which is\n//    redundant as `CometCollectLimitExec` already wraps a `ColumnarToRowExec` for row-based\n//    output.\n// 2. Consecutive operators of `CometSparkToColumnarExec` and `ColumnarToRowExec`.\n// 3. AQE inserts an additional `CometSparkToColumnarExec` in addition to the one inserted in the\n//    original plan.\n//\n// Note about the first case: The `ColumnarToRowExec` was added during\n// ApplyColumnarRulesAndInsertTransitions' insertTransitions phase when Spark requests row-based\n// output such as a `collect` call. It's correct to add a redundant `ColumnarToRowExec` for\n// `CometExec`. However, for certain operators such as `CometCollectLimitExec` which overrides\n// `executeCollect`, the redundant `ColumnarToRowExec` makes the override ineffective.\n//\n// Note about the second case: When `spark.comet.sparkToColumnar.enabled` is set, Comet will add\n// `CometSparkToColumnarExec` on top of row-based operators first, but the downstream operator\n// only takes row-based input as it's a vanilla Spark operator(as Comet cannot convert it for\n// various reasons) or Spark requests row-based output such as a `collect` call. Spark will adds\n// another `ColumnarToRowExec` on top of `CometSparkToColumnarExec`. 
In this case, the pair could\n// be removed.\ncase class EliminateRedundantTransitions(session: SparkSession) extends Rule[SparkPlan] {\n\n  private lazy val showTransformations = CometConf.COMET_EXPLAIN_TRANSFORMATIONS.get()\n\n  override def apply(plan: SparkPlan): SparkPlan = {\n    val newPlan = _apply(plan)\n    if (showTransformations && !newPlan.fastEquals(plan)) {\n      logInfo(s\"\"\"\n           |=== Applying Rule $ruleName ===\n           |${sideBySide(plan.treeString, newPlan.treeString).mkString(\"\\n\")}\n           |\"\"\".stripMargin)\n    }\n    newPlan\n  }\n\n  private def _apply(plan: SparkPlan): SparkPlan = {\n    val eliminatedPlan = plan transformUp {\n      case ColumnarToRowExec(shuffleExchangeExec: CometShuffleExchangeExec)\n          if plan.conf.adaptiveExecutionEnabled =>\n        shuffleExchangeExec\n      case ColumnarToRowExec(sparkToColumnar: CometSparkToColumnarExec) =>\n        if (sparkToColumnar.child.supportsColumnar) {\n          // For Spark Columnar to Comet Columnar, we should keep the ColumnarToRowExec\n          ColumnarToRowExec(sparkToColumnar.child)\n        } else {\n          // For Spark Row to Comet Columnar, we should remove ColumnarToRowExec\n          // and CometSparkToColumnarExec\n          sparkToColumnar.child\n        }\n      // Remove unnecessary transition for native writes\n      // Write should be final operation in the plan\n      case ColumnarToRowExec(nativeWrite: CometNativeWriteExec) =>\n        nativeWrite\n      case c @ ColumnarToRowExec(child) if hasCometNativeChild(child) =>\n        val op = createColumnarToRowExec(child)\n        if (c.logicalLink.isEmpty) {\n          op.unsetTagValue(SparkPlan.LOGICAL_PLAN_TAG)\n          op.unsetTagValue(SparkPlan.LOGICAL_PLAN_INHERITED_TAG)\n        } else {\n          c.logicalLink.foreach(op.setLogicalLink)\n        }\n        op\n      case CometColumnarToRowExec(sparkToColumnar: CometSparkToColumnarExec) =>\n        sparkToColumnar.child\n      case CometNativeColumnarToRowExec(sparkToColumnar: CometSparkToColumnarExec) =>\n        sparkToColumnar.child\n      case CometSparkToColumnarExec(child: CometSparkToColumnarExec) => child\n      // Spark adds `RowToColumnar` under Comet columnar shuffle. But it's redundant as the\n      // shuffle takes row-based input.\n      case s @ CometShuffleExchangeExec(\n            _,\n            RowToColumnarExec(child),\n            _,\n            _,\n            CometColumnarShuffle,\n            _) =>\n        s.withNewChildren(Seq(child))\n    }\n\n    eliminatedPlan match {\n      case ColumnarToRowExec(child: CometCollectLimitExec) =>\n        child\n      case CometColumnarToRowExec(child: CometCollectLimitExec) =>\n        child\n      case CometNativeColumnarToRowExec(child: CometCollectLimitExec) =>\n        child\n      case other =>\n        other\n    }\n  }\n\n  private def hasCometNativeChild(op: SparkPlan): Boolean = {\n    op match {\n      case c: QueryStageExec => hasCometNativeChild(c.plan)\n      case c: ReusedExchangeExec => hasCometNativeChild(c.child)\n      case _ => op.exists(_.isInstanceOf[CometPlan])\n    }\n  }\n\n  /**\n   * Creates an appropriate columnar to row transition operator.\n   *\n   * If native columnar to row conversion is enabled and the schema is supported, uses\n   * CometNativeColumnarToRowExec. 
Otherwise falls back to CometColumnarToRowExec.\n   */\n  private def createColumnarToRowExec(child: SparkPlan): SparkPlan = {\n    val schema = child.schema\n    val useNative = CometConf.COMET_NATIVE_COLUMNAR_TO_ROW_ENABLED.get() &&\n      CometNativeColumnarToRowExec.supportsSchema(schema)\n\n    if (useNative) {\n      CometNativeColumnarToRowExec(child)\n    } else {\n      CometColumnarToRowExec(child)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/rules/RewriteJoin.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.rules\n\nimport org.apache.spark.sql.catalyst.optimizer.{BuildLeft, BuildRight, BuildSide, JoinSelectionHelper}\nimport org.apache.spark.sql.catalyst.plans.{LeftAnti, LeftSemi}\nimport org.apache.spark.sql.catalyst.plans.logical.Join\nimport org.apache.spark.sql.execution.{SortExec, SparkPlan}\nimport org.apache.spark.sql.execution.joins.{ShuffledHashJoinExec, SortMergeJoinExec}\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\n\n/**\n * Adapted from equivalent rule in Apache Gluten.\n *\n * This rule replaces [[SortMergeJoinExec]] with [[ShuffledHashJoinExec]].\n */\nobject RewriteJoin extends JoinSelectionHelper {\n\n  private def getSmjBuildSide(join: SortMergeJoinExec): Option[BuildSide] = {\n    val leftBuildable = canBuildShuffledHashJoinLeft(join.joinType)\n    val rightBuildable = canBuildShuffledHashJoinRight(join.joinType)\n    if (!leftBuildable && !rightBuildable) {\n      return None\n    }\n    if (!leftBuildable) {\n      return Some(BuildRight)\n    }\n    if (!rightBuildable) {\n      return Some(BuildLeft)\n    }\n    val side = join.logicalLink\n      .flatMap {\n        case join: Join => Some(getOptimalBuildSide(join))\n        case _ => None\n      }\n      .getOrElse {\n        // If smj has no logical link, or its logical link is not a join,\n        // then we always choose left as build side.\n        BuildLeft\n      }\n    Some(side)\n  }\n\n  private def removeSort(plan: SparkPlan) = plan match {\n    case _: SortExec => plan.children.head\n    case _ => plan\n  }\n\n  def rewrite(plan: SparkPlan): SparkPlan = plan match {\n    case smj: SortMergeJoinExec =>\n      getSmjBuildSide(smj) match {\n        case Some(BuildRight) if smj.joinType == LeftAnti || smj.joinType == LeftSemi =>\n          // LeftAnti https://github.com/apache/datafusion-comet/issues/457\n          // LeftSemi https://github.com/apache/datafusion-comet/issues/2667\n          withInfo(\n            smj,\n            \"Cannot rewrite SortMergeJoin to HashJoin: \" +\n              s\"BuildRight with ${smj.joinType} is not supported\")\n          plan\n        case Some(buildSide) =>\n          ShuffledHashJoinExec(\n            smj.leftKeys,\n            smj.rightKeys,\n            smj.joinType,\n            buildSide,\n            smj.condition,\n            removeSort(smj.left),\n            removeSort(smj.right),\n            smj.isSkewJoin)\n        case _ => plan\n      }\n    case _ => plan\n  }\n\n  def getOptimalBuildSide(join: Join): BuildSide = {\n    val leftSize = join.left.stats.sizeInBytes\n    val rightSize = join.right.stats.sizeInBytes\n    val leftRowCount = join.left.stats.rowCount\n    val rightRowCount 
= join.right.stats.rowCount\n    if (leftSize == rightSize && rightRowCount.isDefined && leftRowCount.isDefined) {\n      if (rightRowCount.get <= leftRowCount.get) {\n        return BuildRight\n      }\n      return BuildLeft\n    }\n    if (rightSize <= leftSize) {\n      return BuildRight\n    }\n    BuildLeft\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/CometAggregateExpressionSerde.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.Attribute\nimport org.apache.spark.sql.catalyst.expressions.aggregate.{AggregateExpression, AggregateFunction}\nimport org.apache.spark.sql.internal.SQLConf\n\n/**\n * Trait for providing serialization logic for aggregate expressions.\n */\ntrait CometAggregateExpressionSerde[T <: AggregateFunction] {\n\n  /**\n   * Get a short name for the expression that can be used as part of a config key related to the\n   * expression, such as enabling or disabling that expression.\n   *\n   * @param expr\n   *   The Spark expression.\n   * @return\n   *   Short name for the expression, defaulting to the Spark class name\n   */\n  def getExprConfigName(expr: T): String = expr.getClass.getSimpleName\n\n  /**\n   * Determine the support level of the expression based on its attributes.\n   *\n   * @param expr\n   *   The Spark expression.\n   * @return\n   *   Support level (Compatible, Incompatible, or Unsupported).\n   */\n  def getSupportLevel(expr: T): SupportLevel = Compatible(None)\n\n  /**\n   * Convert a Spark expression into a protocol buffer representation that can be passed into\n   * native code.\n   *\n   * @param aggExpr\n   *   The aggregate expression.\n   * @param expr\n   *   The aggregate function.\n   * @param inputs\n   *   The input attributes.\n   * @param binding\n   *   Whether the attributes are bound (this is only relevant in aggregate expressions).\n   * @param conf\n   *   SQLConf\n   * @return\n   *   Protocol buffer representation, or None if the expression could not be converted. In this\n   *   case it is expected that the input expression will have been tagged with reasons why it\n   *   could not be converted.\n   */\n  def convert(\n      aggExpr: AggregateExpression,\n      expr: T,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr]\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/CometBloomFilterMightContain.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, BloomFilterMightContain}\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde.exprToProtoInternal\n\nobject CometBloomFilterMightContain extends CometExpressionSerde[BloomFilterMightContain] {\n\n  override def convert(\n      expr: BloomFilterMightContain,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n\n    val bloomFilter = expr.left\n    val value = expr.right\n    val bloomFilterExpr = exprToProtoInternal(bloomFilter, inputs, binding)\n    val valueExpr = exprToProtoInternal(value, inputs, binding)\n    if (bloomFilterExpr.isDefined && valueExpr.isDefined) {\n      val builder = ExprOuterClass.BloomFilterMightContain.newBuilder()\n      builder.setBloomFilter(bloomFilterExpr.get)\n      builder.setValue(valueExpr.get)\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setBloomFilterMightContain(builder)\n          .build())\n    } else {\n      withInfo(expr, bloomFilter, value)\n      None\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/CometExpressionSerde.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, Expression}\n\n/**\n * Trait for providing serialization logic for expressions.\n */\ntrait CometExpressionSerde[T <: Expression] {\n\n  /**\n   * Get a short name for the expression that can be used as part of a config key related to the\n   * expression, such as enabling or disabling that expression.\n   *\n   * @param expr\n   *   The Spark expression.\n   * @return\n   *   Short name for the expression, defaulting to the Spark class name\n   */\n  def getExprConfigName(expr: T): String = expr.getClass.getSimpleName\n\n  /**\n   * Determine the support level of the expression based on its attributes.\n   *\n   * @param expr\n   *   The Spark expression.\n   * @return\n   *   Support level (Compatible, Incompatible, or Unsupported).\n   */\n  def getSupportLevel(expr: T): SupportLevel = Compatible(None)\n\n  /**\n   * Convert a Spark expression into a protocol buffer representation that can be passed into\n   * native code.\n   *\n   * @param expr\n   *   The Spark expression.\n   * @param inputs\n   *   The input attributes.\n   * @param binding\n   *   Whether the attributes are bound (this is only relevant in aggregate expressions).\n   * @return\n   *   Protocol buffer representation, or None if the expression could not be converted. In this\n   *   case it is expected that the input expression will have been tagged with reasons why it\n   *   could not be converted.\n   */\n  def convert(expr: T, inputs: Seq[Attribute], binding: Boolean): Option[ExprOuterClass.Expr]\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/CometOperatorSerde.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.comet.CometNativeExec\nimport org.apache.spark.sql.execution.SparkPlan\n\nimport org.apache.comet.ConfigEntry\nimport org.apache.comet.serde.OperatorOuterClass.Operator\n\n/**\n * Trait for providing serialization logic for operators.\n */\ntrait CometOperatorSerde[T <: SparkPlan] {\n\n  /**\n   * Get the optional Comet configuration entry that is used to enable or disable native support\n   * for this operator.\n   */\n  def enabledConfig: Option[ConfigEntry[Boolean]]\n\n  /**\n   * Indicates whether this operator requires all of its children to be CometNativeExec. If true\n   * and any child is not a native exec, conversion will be skipped and the operator will fall\n   * back to Spark. This is useful for operators like writes that require Arrow-formatted input.\n   */\n  def requiresNativeChildren: Boolean = false\n\n  /**\n   * Determine the support level of the operator based on its attributes.\n   *\n   * @param operator\n   *   The Spark operator.\n   * @return\n   *   Support level (Compatible, Incompatible, or Unsupported).\n   */\n  def getSupportLevel(operator: T): SupportLevel = Compatible(None)\n\n  /**\n   * Convert a Spark operator into a protocol buffer representation that can be passed into native\n   * code.\n   *\n   * @param op\n   *   The Spark operator.\n   * @param builder\n   *   The protobuf builder for the operator.\n   * @param childOp\n   *   Child operators that have already been converted to Comet.\n   * @return\n   *   Protocol buffer representation, or None if the operator could not be converted. In this\n   *   case it is expected that the input operator will have been tagged with reasons why it could\n   *   not be converted.\n   */\n  def convert(\n      op: T,\n      builder: Operator.Builder,\n      childOp: Operator*): Option[OperatorOuterClass.Operator]\n\n  def createExec(nativeOp: Operator, op: T): CometNativeExec\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/CometScalarFunction.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, Expression}\n\nimport org.apache.comet.serde.ExprOuterClass.Expr\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProtoInternal, optExprWithInfo, scalarFunctionExprToProto}\n\n/** Serde for scalar function. */\ncase class CometScalarFunction[T <: Expression](name: String) extends CometExpressionSerde[T] {\n  override def convert(expr: T, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {\n    val childExpr = expr.children.map(exprToProtoInternal(_, inputs, binding))\n    val optExpr = scalarFunctionExprToProto(name, childExpr: _*)\n    optExprWithInfo(optExpr, expr, expr.children: _*)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/CometScalarSubquery.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.Attribute\nimport org.apache.spark.sql.execution.ScalarSubquery\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde.{serializeDataType, supportedDataType}\n\nobject CometScalarSubquery extends CometExpressionSerde[ScalarSubquery] {\n  override def convert(\n      expr: ScalarSubquery,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (supportedDataType(expr.dataType)) {\n      val dataType = serializeDataType(expr.dataType)\n      if (dataType.isEmpty) {\n        withInfo(expr, s\"Failed to serialize datatype ${expr.dataType} for scalar subquery\")\n        return None\n      }\n\n      val builder = ExprOuterClass.Subquery\n        .newBuilder()\n        .setId(expr.exprId.id)\n        .setDatatype(dataType.get)\n      Some(ExprOuterClass.Expr.newBuilder().setSubquery(builder).build())\n    } else {\n      withInfo(expr, s\"Unsupported data type: ${expr.dataType}\")\n      None\n    }\n\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/CometSortOrder.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Ascending, Attribute, Descending, NullsFirst, NullsLast, SortOrder}\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde.exprToProtoInternal\n\nobject CometSortOrder extends CometExpressionSerde[SortOrder] {\n\n  override def getSupportLevel(expr: SortOrder): SupportLevel = {\n\n    def containsFloatingPoint(dt: DataType): Boolean = {\n      dt match {\n        case DataTypes.FloatType | DataTypes.DoubleType => true\n        case ArrayType(elementType, _) => containsFloatingPoint(elementType)\n        case StructType(fields) => fields.exists(f => containsFloatingPoint(f.dataType))\n        case MapType(keyType, valueType, _) =>\n          containsFloatingPoint(keyType) || containsFloatingPoint(valueType)\n        case _ => false\n      }\n    }\n\n    if (CometConf.COMET_EXEC_STRICT_FLOATING_POINT.get() &&\n      containsFloatingPoint(expr.child.dataType)) {\n      // https://github.com/apache/datafusion-comet/issues/2626\n      Incompatible(\n        Some(\n          \"Sorting on floating-point is not 100% compatible with Spark, and Comet is running \" +\n            s\"with ${CometConf.COMET_EXEC_STRICT_FLOATING_POINT.key}=true. \" +\n            s\"${CometConf.COMPAT_GUIDE}\"))\n    } else {\n      Compatible()\n    }\n  }\n\n  override def convert(\n      expr: SortOrder,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n\n    if (childExpr.isDefined) {\n      val sortOrderBuilder = ExprOuterClass.SortOrder.newBuilder()\n\n      sortOrderBuilder.setChild(childExpr.get)\n\n      expr.direction match {\n        case Ascending => sortOrderBuilder.setDirectionValue(0)\n        case Descending => sortOrderBuilder.setDirectionValue(1)\n      }\n\n      expr.nullOrdering match {\n        case NullsFirst => sortOrderBuilder.setNullOrderingValue(0)\n        case NullsLast => sortOrderBuilder.setNullOrderingValue(1)\n      }\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setSortOrder(sortOrderBuilder)\n          .build())\n    } else {\n      withInfo(expr, expr.child)\n      None\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/QueryPlanSerde.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport java.util.concurrent.atomic.AtomicLong\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.catalyst.expressions._\nimport org.apache.spark.sql.catalyst.expressions.aggregate._\nimport org.apache.spark.sql.catalyst.expressions.objects.StaticInvoke\nimport org.apache.spark.sql.comet.DecimalPrecision\nimport org.apache.spark.sql.execution.{ScalarSubquery, SparkPlan}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.expressions._\nimport org.apache.comet.serde.ExprOuterClass.{AggExpr, Expr, ScalarFunc}\nimport org.apache.comet.serde.Types.{DataType => ProtoDataType}\nimport org.apache.comet.serde.Types.DataType._\nimport org.apache.comet.serde.literals.CometLiteral\nimport org.apache.comet.shims.CometExprShim\n\n/**\n * An utility object for query plan and expression serialization.\n */\nobject QueryPlanSerde extends Logging with CometExprShim {\n\n  private val arrayExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    classOf[ArrayAppend] -> CometArrayAppend,\n    classOf[ArrayCompact] -> CometArrayCompact,\n    classOf[ArrayContains] -> CometArrayContains,\n    classOf[ArrayDistinct] -> CometScalarFunction(\"array_distinct\"),\n    classOf[ArrayExcept] -> CometArrayExcept,\n    classOf[ArrayFilter] -> CometArrayFilter,\n    classOf[ArrayInsert] -> CometArrayInsert,\n    classOf[ArrayIntersect] -> CometArrayIntersect,\n    classOf[ArrayJoin] -> CometArrayJoin,\n    classOf[ArrayMax] -> CometArrayMax,\n    classOf[ArrayMin] -> CometArrayMin,\n    classOf[ArrayRemove] -> CometArrayRemove,\n    classOf[ArrayRepeat] -> CometArrayRepeat,\n    classOf[SortArray] -> CometSortArray,\n    classOf[ArraysOverlap] -> CometArraysOverlap,\n    classOf[ArrayUnion] -> CometArrayUnion,\n    classOf[CreateArray] -> CometCreateArray,\n    classOf[ElementAt] -> CometElementAt,\n    classOf[Flatten] -> CometFlatten,\n    classOf[GetArrayItem] -> CometGetArrayItem,\n    classOf[Size] -> CometSize)\n\n  private val conditionalExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] =\n    Map(classOf[CaseWhen] -> CometCaseWhen, classOf[If] -> CometIf)\n\n  private val predicateExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    classOf[And] -> CometAnd,\n    classOf[EqualTo] -> CometEqualTo,\n    classOf[EqualNullSafe] -> CometEqualNullSafe,\n    classOf[GreaterThan] -> CometGreaterThan,\n    classOf[GreaterThanOrEqual] -> CometGreaterThanOrEqual,\n    
classOf[LessThan] -> CometLessThan,\n    classOf[LessThanOrEqual] -> CometLessThanOrEqual,\n    classOf[In] -> CometIn,\n    classOf[IsNotNull] -> CometIsNotNull,\n    classOf[IsNull] -> CometIsNull,\n    classOf[InSet] -> CometInSet,\n    classOf[Not] -> CometNot,\n    classOf[Or] -> CometOr)\n\n  private val mathExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    classOf[Acos] -> CometScalarFunction(\"acos\"),\n    classOf[Add] -> CometAdd,\n    classOf[Asin] -> CometScalarFunction(\"asin\"),\n    classOf[Atan] -> CometScalarFunction(\"atan\"),\n    classOf[Atan2] -> CometAtan2,\n    classOf[Ceil] -> CometCeil,\n    classOf[Cos] -> CometScalarFunction(\"cos\"),\n    classOf[Cosh] -> CometScalarFunction(\"cosh\"),\n    classOf[Divide] -> CometDivide,\n    classOf[Exp] -> CometScalarFunction(\"exp\"),\n    classOf[Expm1] -> CometScalarFunction(\"expm1\"),\n    classOf[Floor] -> CometFloor,\n    classOf[Hex] -> CometHex,\n    classOf[IntegralDivide] -> CometIntegralDivide,\n    classOf[IsNaN] -> CometIsNaN,\n    classOf[Log] -> CometLog,\n    classOf[Log2] -> CometLog2,\n    classOf[Log10] -> CometLog10,\n    classOf[Logarithm] -> CometLogarithm,\n    classOf[Multiply] -> CometMultiply,\n    classOf[Pow] -> CometScalarFunction(\"pow\"),\n    classOf[Rand] -> CometRand,\n    classOf[Randn] -> CometRandn,\n    classOf[Remainder] -> CometRemainder,\n    classOf[Round] -> CometRound,\n    classOf[Signum] -> CometScalarFunction(\"signum\"),\n    classOf[Sin] -> CometScalarFunction(\"sin\"),\n    classOf[Sinh] -> CometScalarFunction(\"sinh\"),\n    classOf[Sqrt] -> CometScalarFunction(\"sqrt\"),\n    classOf[Subtract] -> CometSubtract,\n    classOf[Tan] -> CometScalarFunction(\"tan\"),\n    classOf[Tanh] -> CometScalarFunction(\"tanh\"),\n    classOf[Cot] -> CometScalarFunction(\"cot\"),\n    classOf[UnaryMinus] -> CometUnaryMinus,\n    classOf[Unhex] -> CometUnhex,\n    classOf[Abs] -> CometAbs,\n    classOf[Bin] -> CometScalarFunction(\"bin\"))\n\n  private val mapExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    classOf[GetMapValue] -> CometMapExtract,\n    classOf[MapKeys] -> CometMapKeys,\n    classOf[MapEntries] -> CometMapEntries,\n    classOf[MapValues] -> CometMapValues,\n    classOf[MapFromArrays] -> CometMapFromArrays,\n    classOf[MapContainsKey] -> CometMapContainsKey,\n    classOf[MapFromEntries] -> CometMapFromEntries)\n\n  private val structExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    classOf[CreateNamedStruct] -> CometCreateNamedStruct,\n    classOf[GetArrayStructFields] -> CometGetArrayStructFields,\n    classOf[GetStructField] -> CometGetStructField,\n    classOf[JsonToStructs] -> CometJsonToStructs,\n    classOf[StructsToJson] -> CometStructsToJson,\n    classOf[StructsToCsv] -> CometStructsToCsv)\n\n  private val hashExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    classOf[Crc32] -> CometScalarFunction(\"crc32\"),\n    classOf[Md5] -> CometScalarFunction(\"md5\"),\n    classOf[Murmur3Hash] -> CometMurmur3Hash,\n    classOf[Sha2] -> CometSha2,\n    classOf[XxHash64] -> CometXxHash64,\n    classOf[Sha1] -> CometSha1)\n\n  private val stringExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    classOf[Ascii] -> CometScalarFunction(\"ascii\"),\n    classOf[BitLength] -> CometScalarFunction(\"bit_length\"),\n    classOf[Chr] -> CometScalarFunction(\"char\"),\n    classOf[ConcatWs] -> CometConcatWs,\n    classOf[Concat] -> CometConcat,\n    
classOf[Contains] -> CometScalarFunction(\"contains\"),\n    classOf[EndsWith] -> CometScalarFunction(\"ends_with\"),\n    classOf[GetJsonObject] -> CometGetJsonObject,\n    classOf[InitCap] -> CometInitCap,\n    classOf[Length] -> CometLength,\n    classOf[Like] -> CometLike,\n    classOf[Lower] -> CometLower,\n    classOf[OctetLength] -> CometScalarFunction(\"octet_length\"),\n    classOf[RegExpReplace] -> CometRegExpReplace,\n    classOf[Reverse] -> CometReverse,\n    classOf[RLike] -> CometRLike,\n    classOf[StartsWith] -> CometScalarFunction(\"starts_with\"),\n    classOf[StringInstr] -> CometScalarFunction(\"instr\"),\n    classOf[StringRepeat] -> CometStringRepeat,\n    classOf[StringReplace] -> CometScalarFunction(\"replace\"),\n    classOf[StringRPad] -> CometStringRPad,\n    classOf[StringLPad] -> CometStringLPad,\n    classOf[StringSpace] -> CometScalarFunction(\"space\"),\n    classOf[StringSplit] -> CometStringSplit,\n    classOf[StringTranslate] -> CometScalarFunction(\"translate\"),\n    classOf[StringTrim] -> CometScalarFunction(\"trim\"),\n    classOf[StringTrimBoth] -> CometScalarFunction(\"btrim\"),\n    classOf[StringTrimLeft] -> CometScalarFunction(\"ltrim\"),\n    classOf[StringTrimRight] -> CometScalarFunction(\"rtrim\"),\n    classOf[Left] -> CometLeft,\n    classOf[Right] -> CometRight,\n    classOf[Substring] -> CometSubstring,\n    classOf[Upper] -> CometUpper)\n\n  private val bitwiseExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    classOf[BitwiseAnd] -> CometBitwiseAnd,\n    classOf[BitwiseCount] -> CometBitwiseCount,\n    classOf[BitwiseGet] -> CometBitwiseGet,\n    classOf[BitwiseOr] -> CometBitwiseOr,\n    classOf[BitwiseNot] -> CometBitwiseNot,\n    classOf[BitwiseXor] -> CometBitwiseXor,\n    classOf[ShiftLeft] -> CometShiftLeft,\n    classOf[ShiftRight] -> CometShiftRight)\n\n  private val temporalExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    classOf[DateAdd] -> CometDateAdd,\n    classOf[DateDiff] -> CometDateDiff,\n    classOf[DateFormatClass] -> CometDateFormat,\n    classOf[DateFromUnixDate] -> CometDateFromUnixDate,\n    classOf[Days] -> CometDays,\n    classOf[Hours] -> CometHours,\n    classOf[DateSub] -> CometDateSub,\n    classOf[UnixDate] -> CometUnixDate,\n    classOf[FromUnixTime] -> CometFromUnixTime,\n    classOf[LastDay] -> CometLastDay,\n    classOf[Hour] -> CometHour,\n    classOf[MakeDate] -> CometMakeDate,\n    classOf[Minute] -> CometMinute,\n    classOf[NextDay] -> CometNextDay,\n    classOf[Second] -> CometSecond,\n    classOf[TruncDate] -> CometTruncDate,\n    classOf[TruncTimestamp] -> CometTruncTimestamp,\n    classOf[UnixTimestamp] -> CometUnixTimestamp,\n    classOf[Year] -> CometYear,\n    classOf[Month] -> CometMonth,\n    classOf[DayOfMonth] -> CometDayOfMonth,\n    classOf[DayOfWeek] -> CometDayOfWeek,\n    classOf[WeekDay] -> CometWeekDay,\n    classOf[DayOfYear] -> CometDayOfYear,\n    classOf[WeekOfYear] -> CometWeekOfYear,\n    classOf[Quarter] -> CometQuarter)\n\n  private val conversionExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    classOf[Cast] -> CometCast)\n\n  private val miscExpressions: Map[Class[_ <: Expression], CometExpressionSerde[_]] = Map(\n    // TODO PromotePrecision\n    classOf[Alias] -> CometAlias,\n    classOf[AttributeReference] -> CometAttributeReference,\n    classOf[BloomFilterMightContain] -> CometBloomFilterMightContain,\n    classOf[CheckOverflow] -> CometCheckOverflow,\n    classOf[Coalesce] -> 
CometCoalesce,\n    classOf[KnownFloatingPointNormalized] -> CometKnownFloatingPointNormalized,\n    classOf[Literal] -> CometLiteral,\n    classOf[MakeDecimal] -> CometMakeDecimal,\n    classOf[MonotonicallyIncreasingID] -> CometMonotonicallyIncreasingId,\n    classOf[ScalarSubquery] -> CometScalarSubquery,\n    classOf[SparkPartitionID] -> CometSparkPartitionId,\n    classOf[SortOrder] -> CometSortOrder,\n    classOf[StaticInvoke] -> CometStaticInvoke,\n    classOf[UnscaledValue] -> CometUnscaledValue)\n\n  /**\n   * Mapping of Spark expression class to Comet expression handler.\n   */\n  val exprSerdeMap: Map[Class[_ <: Expression], CometExpressionSerde[_]] =\n    mathExpressions ++ hashExpressions ++ stringExpressions ++\n      conditionalExpressions ++ mapExpressions ++ predicateExpressions ++\n      structExpressions ++ bitwiseExpressions ++ miscExpressions ++ arrayExpressions ++\n      temporalExpressions ++ conversionExpressions\n\n  /**\n   * Mapping of Spark aggregate expression class to Comet expression handler.\n   */\n  val aggrSerdeMap: Map[Class[_], CometAggregateExpressionSerde[_]] = Map(\n    classOf[Average] -> CometAverage,\n    classOf[BitAndAgg] -> CometBitAndAgg,\n    classOf[BitOrAgg] -> CometBitOrAgg,\n    classOf[BitXorAgg] -> CometBitXOrAgg,\n    classOf[BloomFilterAggregate] -> CometBloomFilterAggregate,\n    classOf[Corr] -> CometCorr,\n    classOf[Count] -> CometCount,\n    classOf[CovPopulation] -> CometCovPopulation,\n    classOf[CovSample] -> CometCovSample,\n    classOf[First] -> CometFirst,\n    classOf[Last] -> CometLast,\n    classOf[Max] -> CometMax,\n    classOf[Min] -> CometMin,\n    classOf[StddevPop] -> CometStddevPop,\n    classOf[StddevSamp] -> CometStddevSamp,\n    classOf[Sum] -> CometSum,\n    classOf[VariancePop] -> CometVariancePop,\n    classOf[VarianceSamp] -> CometVarianceSamp)\n\n  //  A unique id for each expression. 
Used to look up QueryContext during error creation.\n  private val exprIdCounter = new AtomicLong(0)\n\n  private def nextExprId(): Long = exprIdCounter.incrementAndGet()\n\n  /**\n   * Extract SQL context information (query text, line/position, object name) from the\n   * expression's origin.\n   *\n   * @param expr\n   *   The Spark expression to extract context from\n   * @return\n   *   Some(QueryContext) if origin is present, None otherwise\n   */\n  private def extractQueryContext(expr: Expression): Option[ExprOuterClass.QueryContext] = {\n    val contexts = expr.origin.getQueryContext\n    if (contexts != null && contexts.length > 0) {\n      try {\n        val ctx = contexts(0)\n        // Check if this is a SQLQueryContext with additional fields\n        ctx match {\n          case sqlCtx: org.apache.spark.sql.catalyst.trees.SQLQueryContext =>\n            val builder = ExprOuterClass.QueryContext\n              .newBuilder()\n              .setSqlText(sqlCtx.sqlText.getOrElse(\"\"))\n              .setStartIndex(sqlCtx.originStartIndex.getOrElse(ctx.startIndex))\n              .setStopIndex(sqlCtx.originStopIndex.getOrElse(ctx.stopIndex))\n              .setLine(sqlCtx.line.getOrElse(0))\n              .setStartPosition(sqlCtx.startPosition.getOrElse(0))\n\n            // Add optional fields if present\n            sqlCtx.originObjectType.foreach(builder.setObjectType)\n            sqlCtx.originObjectName.foreach(builder.setObjectName)\n\n            Some(builder.build())\n          case _ =>\n            // Fallback: use only QueryContext interface methods\n            val builder = ExprOuterClass.QueryContext\n              .newBuilder()\n              .setSqlText(ctx.fragment())\n              .setStartIndex(ctx.startIndex())\n              .setStopIndex(ctx.stopIndex())\n              .setLine(0)\n              .setStartPosition(0)\n\n            if (ctx.objectType() != null && !ctx.objectType().isEmpty) {\n              builder.setObjectType(ctx.objectType())\n            }\n            if (ctx.objectName() != null && !ctx.objectName().isEmpty) {\n              builder.setObjectName(ctx.objectName())\n            }\n\n            Some(builder.build())\n        }\n      } catch {\n        case _: Exception => None\n      }\n    } else {\n      None\n    }\n  }\n\n  def supportedDataType(dt: DataType, allowComplex: Boolean = false): Boolean = dt match {\n    case _: ByteType | _: ShortType | _: IntegerType | _: LongType | _: FloatType |\n        _: DoubleType | _: StringType | _: BinaryType | _: TimestampType | _: TimestampNTZType |\n        _: DecimalType | _: DateType | _: BooleanType | _: NullType =>\n      true\n    case s: StructType if allowComplex =>\n      s.fields.nonEmpty && s.fields.map(_.dataType).forall(supportedDataType(_, allowComplex))\n    case a: ArrayType if allowComplex =>\n      supportedDataType(a.elementType, allowComplex)\n    case m: MapType if allowComplex =>\n      supportedDataType(m.keyType, allowComplex) && supportedDataType(m.valueType, allowComplex)\n    case _ =>\n      false\n  }\n\n  
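// Example: supportedDataType(ArrayType(IntegerType)) returns false by default, but true with\n  // allowComplex = true; complex types are only supported where callers explicitly opt in.\n\n  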
/**\n   * Serializes a Spark datatype to protobuf. Note that the fact that a datatype can be\n   * serialized by this method doesn't mean it is supported by Comet native execution, i.e.,\n   * `supportedDataType` may return false for it.\n   */\n  def serializeDataType(dt: org.apache.spark.sql.types.DataType): Option[Types.DataType] = {\n    val typeId = dt match {\n      case _: BooleanType => 0\n      case _: ByteType => 1\n      case _: ShortType => 2\n      case _: IntegerType => 3\n      case _: LongType => 4\n      case _: FloatType => 5\n      case _: DoubleType => 6\n      case _: StringType => 7\n      case _: BinaryType => 8\n      case _: TimestampType => 9\n      case _: DecimalType => 10\n      case _: TimestampNTZType => 11\n      case _: DateType => 12\n      case _: NullType => 13\n      case _: ArrayType => 14\n      case _: MapType => 15\n      case _: StructType => 16\n      case dt =>\n        logWarning(s\"Cannot serialize Spark data type: $dt\")\n        return None\n    }\n\n    val builder = ProtoDataType.newBuilder()\n    builder.setTypeIdValue(typeId)\n\n    // Attach type-specific info where needed (decimal, array, map, struct)\n    val dataType = dt match {\n      case t: DecimalType =>\n        val info = DataTypeInfo.newBuilder()\n        val decimal = DecimalInfo.newBuilder()\n        decimal.setPrecision(t.precision)\n        decimal.setScale(t.scale)\n        info.setDecimal(decimal)\n        builder.setTypeInfo(info.build()).build()\n\n      case a: ArrayType =>\n        val elementType = serializeDataType(a.elementType)\n\n        if (elementType.isEmpty) {\n          return None\n        }\n\n        val info = DataTypeInfo.newBuilder()\n        val list = ListInfo.newBuilder()\n        list.setElementType(elementType.get)\n        list.setContainsNull(a.containsNull)\n\n        info.setList(list)\n        builder.setTypeInfo(info.build()).build()\n\n      case m: MapType =>\n        val keyType = serializeDataType(m.keyType)\n        if (keyType.isEmpty) {\n          return None\n        }\n\n        val valueType = serializeDataType(m.valueType)\n        if (valueType.isEmpty) {\n          return None\n        }\n\n        val info = DataTypeInfo.newBuilder()\n        val map = MapInfo.newBuilder()\n        map.setKeyType(keyType.get)\n        map.setValueType(valueType.get)\n        map.setValueContainsNull(m.valueContainsNull)\n\n        info.setMap(map)\n        builder.setTypeInfo(info.build()).build()\n\n      case s: StructType =>\n        val info = DataTypeInfo.newBuilder()\n        val struct = StructInfo.newBuilder()\n\n        val fieldNames = s.fields.map(_.name).toIterable.asJava\n        val fieldDatatypes = s.fields.map(f => serializeDataType(f.dataType)).toSeq\n        val fieldNullable = s.fields.map(f => Boolean.box(f.nullable)).toIterable.asJava\n\n        if (fieldDatatypes.exists(_.isEmpty)) {\n          return None\n        }\n\n        struct.addAllFieldNames(fieldNames)\n        struct.addAllFieldDatatypes(fieldDatatypes.map(_.get).asJava)\n        struct.addAllFieldNullable(fieldNullable)\n\n        info.setStruct(struct)\n        builder.setTypeInfo(info.build()).build()\n      case _ => builder.build()\n    }\n\n    Some(dataType)\n  }\n\n  def aggExprToProto(\n      aggExpr: AggregateExpression,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[AggExpr] = {\n\n    // Support Count(distinct single_value)\n    // COUNT(DISTINCT x) - supported\n    // COUNT(DISTINCT x, x) - supported through transition to COUNT(DISTINCT x)\n    // COUNT(DISTINCT x, y) - not supported\n    if (aggExpr.isDistinct &&\n      
!(aggExpr.aggregateFunction.prettyName == \"count\" &&\n        aggExpr.aggregateFunction.children.length == 1)) {\n      withInfo(aggExpr, s\"Distinct aggregate not supported for: $aggExpr\")\n      return None\n    }\n\n    val fn = aggExpr.aggregateFunction\n    val cometExpr = aggrSerdeMap.get(fn.getClass)\n    val protoAggExprOpt = cometExpr match {\n      case Some(handler) =>\n        val aggHandler = handler.asInstanceOf[CometAggregateExpressionSerde[AggregateFunction]]\n        val exprConfName = aggHandler.getExprConfigName(fn)\n        if (!CometConf.isExprEnabled(exprConfName)) {\n          withInfo(\n            aggExpr,\n            \"Expression support is disabled. Set \" +\n              s\"${CometConf.getExprEnabledConfigKey(exprConfName)}=true to enable it.\")\n          return None\n        }\n        aggHandler.getSupportLevel(fn) match {\n          case Unsupported(notes) =>\n            withInfo(fn, notes.getOrElse(\"\"))\n            None\n          case Incompatible(notes) =>\n            val exprAllowIncompat = CometConf.isExprAllowIncompat(exprConfName)\n            if (exprAllowIncompat) {\n              if (notes.isDefined) {\n                logWarning(\n                  s\"Comet supports $fn when \" +\n                    s\"${CometConf.getExprAllowIncompatConfigKey(exprConfName)}=true \" +\n                    s\"but has notes: ${notes.get}\")\n              }\n              aggHandler.convert(aggExpr, fn, inputs, binding, conf)\n            } else {\n              val optionalNotes = notes.map(str => s\" ($str)\").getOrElse(\"\")\n              withInfo(\n                fn,\n                s\"$fn is not fully compatible with Spark$optionalNotes. To enable it anyway, \" +\n                  s\"set ${CometConf.getExprAllowIncompatConfigKey(exprConfName)}=true. \" +\n                  s\"${CometConf.COMPAT_GUIDE}.\")\n              None\n            }\n          case Compatible(notes) =>\n            if (notes.isDefined) {\n              logWarning(s\"Comet supports $fn but has notes: ${notes.get}\")\n            }\n            aggHandler.convert(aggExpr, fn, inputs, binding, conf)\n        }\n      case _ =>\n        withInfo(\n          aggExpr,\n          s\"unsupported Spark aggregate function: ${fn.prettyName}\",\n          fn.children: _*)\n        None\n    }\n\n    // Attach QueryContext and expr_id to the aggregate expression\n    protoAggExprOpt.flatMap { protoAggExpr =>\n      val builder = protoAggExpr.toBuilder\n      builder.setExprId(nextExprId())\n\n      // Serialize FILTER (WHERE ...) 
clause if present.\n      // The filter is only meaningful in Partial mode; Final/PartialMerge never set it.\n      if (aggExpr.filter.isDefined && aggExpr.mode == Partial) {\n        val filterProto = exprToProto(aggExpr.filter.get, inputs, binding)\n        if (filterProto.isEmpty) {\n          withInfo(aggExpr, aggExpr.filter.get)\n          return None\n        }\n        builder.setFilter(filterProto.get)\n      }\n\n      extractQueryContext(fn).foreach { ctx =>\n        builder.setQueryContext(ctx)\n      }\n      Some(builder.build())\n    }\n  }\n\n  def evalModeToProto(evalMode: CometEvalMode.Value): ExprOuterClass.EvalMode = {\n    evalMode match {\n      case CometEvalMode.LEGACY => ExprOuterClass.EvalMode.LEGACY\n      case CometEvalMode.TRY => ExprOuterClass.EvalMode.TRY\n      case CometEvalMode.ANSI => ExprOuterClass.EvalMode.ANSI\n      case _ => throw new IllegalStateException(s\"Invalid evalMode $evalMode\")\n    }\n  }\n\n  /**\n   * Convert a Spark expression to a protocol-buffer representation of a native Comet/DataFusion\n   * expression.\n   *\n   * This method performs a transformation on the plan to handle decimal promotion and then calls\n   * into the recursive method [[exprToProtoInternal]].\n   *\n   * @param expr\n   *   The input expression\n   * @param inputs\n   *   The input attributes\n   * @param binding\n   *   Whether to bind the expression to the input attributes\n   * @return\n   *   The protobuf representation of the expression, or None if the expression is not supported.\n   *   In the case where None is returned, the expression will be tagged with the reason(s) why it\n   *   is not supported.\n   */\n  def exprToProto(\n      expr: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean = true): Option[Expr] = {\n\n    val conf = SQLConf.get\n    val newExpr =\n      DecimalPrecision.promote(conf.decimalOperationsAllowPrecisionLoss, expr, !conf.ansiEnabled)\n    exprToProtoInternal(newExpr, inputs, binding)\n  }\n\n  /**\n   * Convert a Spark expression to a protocol-buffer representation of a native Comet/DataFusion\n   * expression.\n   *\n   * @param expr\n   *   The input expression\n   * @param inputs\n   *   The input attributes\n   * @param binding\n   *   Whether to bind the expression to the input attributes\n   * @return\n   *   The protobuf representation of the expression, or None if the expression is not supported.\n   *   In the case where None is returned, the expression will be tagged with the reason(s) why it\n   *   is not supported.\n   */\n  def exprToProtoInternal(\n      expr: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n\n    def convert[T <: Expression](expr: T, handler: CometExpressionSerde[T]): Option[Expr] = {\n      val exprConfName = handler.getExprConfigName(expr)\n      if (!CometConf.isExprEnabled(exprConfName)) {\n        withInfo(\n          expr,\n          \"Expression support is disabled. 
Set \" +\n            s\"${CometConf.getExprEnabledConfigKey(exprConfName)}=true to enable it.\")\n        return None\n      }\n      handler.getSupportLevel(expr) match {\n        case Unsupported(notes) =>\n          withInfo(expr, notes.getOrElse(\"\"))\n          None\n        case Incompatible(notes) =>\n          val exprAllowIncompat = CometConf.isExprAllowIncompat(exprConfName)\n          if (exprAllowIncompat) {\n            if (notes.isDefined) {\n              logWarning(\n                s\"Comet supports $expr when \" +\n                  s\"${CometConf.getExprAllowIncompatConfigKey(exprConfName)}=true \" +\n                  s\"but has notes: ${notes.get}\")\n            }\n            handler.convert(expr, inputs, binding)\n          } else {\n            val optionalNotes = notes.map(str => s\" ($str)\").getOrElse(\"\")\n            withInfo(\n              expr,\n              s\"$expr is not fully compatible with Spark$optionalNotes. To enable it anyway, \" +\n                s\"set ${CometConf.getExprAllowIncompatConfigKey(exprConfName)}=true. \" +\n                s\"${CometConf.COMPAT_GUIDE}.\")\n            None\n          }\n        case Compatible(notes) =>\n          if (notes.isDefined) {\n            logWarning(s\"Comet supports $expr but has notes: ${notes.get}\")\n          }\n          handler.convert(expr, inputs, binding)\n      }\n    }\n\n    versionSpecificExprToProtoInternal(expr, inputs, binding)\n      .orElse(expr match {\n\n        case UnaryExpression(child) if expr.prettyName == \"promote_precision\" =>\n          // `UnaryExpression` includes `PromotePrecision` for Spark 3.3\n          // `PromotePrecision` is just a wrapper, don't need to serialize it.\n          exprToProtoInternal(child, inputs, binding)\n\n        case expr =>\n          QueryPlanSerde.exprSerdeMap.get(expr.getClass) match {\n            case Some(handler) =>\n              convert(expr, handler.asInstanceOf[CometExpressionSerde[Expression]])\n            case _ =>\n              withInfo(expr, s\"${expr.prettyName} is not supported\", expr.children: _*)\n              None\n          }\n      })\n      .map { protoExpr =>\n        // Attach QueryContext and expr_id to the expression\n        val builder = protoExpr.toBuilder\n        builder.setExprId(nextExprId())\n        extractQueryContext(expr).foreach { ctx =>\n          builder.setQueryContext(ctx)\n        }\n        builder.build()\n      }\n  }\n\n  /**\n   * Creates a UnaryExpr by calling exprToProtoInternal for the provided child expression and then\n   * invokes the supplied function to wrap this UnaryExpr in a top-level Expr.\n   *\n   * @param child\n   *   Spark expression\n   * @param inputs\n   *   Inputs to the expression\n   * @param f\n   *   Function that accepts an Expr.Builder and a UnaryExpr and builds the specific top-level\n   *   Expr\n   * @return\n   *   Some(Expr) or None if not supported\n   */\n  def createUnaryExpr(\n      expr: Expression,\n      child: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      f: (ExprOuterClass.Expr.Builder, ExprOuterClass.UnaryExpr) => ExprOuterClass.Expr.Builder)\n      : Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(child, inputs, binding) // TODO review\n    if (childExpr.isDefined) {\n      // create the generic UnaryExpr message\n      val inner = ExprOuterClass.UnaryExpr\n        .newBuilder()\n        .setChild(childExpr.get)\n        .build()\n      // call the user-supplied function to wrap UnaryExpr in 
a top-level Expr\n      // such as Expr.IsNull or Expr.IsNotNull\n      Some(\n        f(\n          ExprOuterClass.Expr\n            .newBuilder(),\n          inner).build())\n    } else {\n      withInfo(expr, child)\n      None\n    }\n  }\n\n  def createBinaryExpr(\n      expr: Expression,\n      left: Expression,\n      right: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      f: (ExprOuterClass.Expr.Builder, ExprOuterClass.BinaryExpr) => ExprOuterClass.Expr.Builder)\n      : Option[ExprOuterClass.Expr] = {\n    val leftExpr = exprToProtoInternal(left, inputs, binding)\n    val rightExpr = exprToProtoInternal(right, inputs, binding)\n    if (leftExpr.isDefined && rightExpr.isDefined) {\n      // create the generic BinaryExpr message\n      val inner = ExprOuterClass.BinaryExpr\n        .newBuilder()\n        .setLeft(leftExpr.get)\n        .setRight(rightExpr.get)\n        .build()\n      // call the user-supplied function to wrap BinaryExpr in a top-level Expr\n      // such as Expr.And or Expr.Or\n      Some(\n        f(\n          ExprOuterClass.Expr\n            .newBuilder(),\n          inner).build())\n    } else {\n      withInfo(expr, left, right)\n      None\n    }\n  }\n\n  def scalarFunctionExprToProtoWithReturnType(\n      funcName: String,\n      returnType: DataType,\n      failOnError: Boolean,\n      args: Option[Expr]*): Option[Expr] = {\n    val builder = ExprOuterClass.ScalarFunc.newBuilder()\n    builder.setFunc(funcName)\n    builder.setFailOnError(failOnError)\n    serializeDataType(returnType).flatMap { t =>\n      builder.setReturnType(t)\n      scalarFunctionExprToProto0(builder, args: _*)\n    }\n  }\n\n  def scalarFunctionExprToProto(funcName: String, args: Option[Expr]*): Option[Expr] = {\n    val builder = ExprOuterClass.ScalarFunc.newBuilder()\n    builder.setFunc(funcName)\n    builder.setFailOnError(false)\n    scalarFunctionExprToProto0(builder, args: _*)\n  }\n\n  private def scalarFunctionExprToProto0(\n      builder: ScalarFunc.Builder,\n      args: Option[Expr]*): Option[Expr] = {\n    args.foreach {\n      case Some(a) => builder.addArgs(a)\n      case _ =>\n        return None\n    }\n    Some(ExprOuterClass.Expr.newBuilder().setScalarFunc(builder).build())\n  }\n\n  // Utility method. 
Adds explain info if the result of calling exprToProto is None\n  def optExprWithInfo(\n      optExpr: Option[Expr],\n      expr: Expression,\n      childExpr: Expression*): Option[Expr] = {\n    optExpr match {\n      case None =>\n        withInfo(expr, childExpr: _*)\n        None\n      case o => o\n    }\n\n  }\n\n  // scalastyle:off\n  /**\n   * Align w/ Arrow's\n   * [[https://github.com/apache/arrow-rs/blob/55.2.0/arrow-ord/src/rank.rs#L30-L40 can_rank]] and\n   * [[https://github.com/apache/arrow-rs/blob/55.2.0/arrow-ord/src/sort.rs#L193-L215 can_sort_to_indices]]\n   *\n   * TODO: Include SparkSQL's [[YearMonthIntervalType]] and [[DayTimeIntervalType]]\n   */\n  // scalastyle:on\n  def supportedScalarSortElementType(dt: DataType): Boolean = {\n    dt match {\n      case _: ByteType | _: ShortType | _: IntegerType | _: LongType | _: FloatType |\n          _: DoubleType | _: DecimalType | _: DateType | _: TimestampType | _: TimestampNTZType |\n          _: BooleanType | _: BinaryType | _: StringType =>\n        true\n      case _ =>\n        false\n    }\n  }\n\n  def supportedSortType(op: SparkPlan, sortOrder: Seq[SortOrder]): Boolean = {\n    if (sortOrder.length == 1) {\n      val canSort = sortOrder.head.dataType match {\n        case ArrayType(elementType, _) => supportedScalarSortElementType(elementType)\n        case MapType(_, valueType, _) => supportedScalarSortElementType(valueType)\n        case _ => supportedScalarSortElementType(sortOrder.head.dataType)\n      }\n      if (!canSort) {\n        withInfo(op, s\"Sort on single column of type ${sortOrder.head.dataType} is not supported\")\n        false\n      } else {\n        true\n      }\n    } else {\n      true\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/SupportLevel.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nsealed trait SupportLevel\n\n/**\n * Comet either supports this feature with full compatibility with Spark, or may have known\n * differences in some specific edge cases that are unlikely to be an issue for most users.\n *\n * Any compatibility differences are noted in the\n * [[https://datafusion.apache.org/comet/user-guide/compatibility.html Comet Compatibility Guide]].\n */\ncase class Compatible(notes: Option[String] = None) extends SupportLevel\n\n/**\n * Comet supports this feature but results can be different from Spark.\n *\n * Any compatibility differences are noted in the\n * [[https://datafusion.apache.org/comet/user-guide/compatibility.html Comet Compatibility Guide]].\n */\ncase class Incompatible(notes: Option[String] = None) extends SupportLevel\n\n/** Comet does not support this feature */\ncase class Unsupported(notes: Option[String] = None) extends SupportLevel\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/aggregates.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.sql.catalyst.expressions.Attribute\nimport org.apache.spark.sql.catalyst.expressions.aggregate.{AggregateExpression, Average, BitAndAgg, BitOrAgg, BitXorAgg, BloomFilterAggregate, CentralMomentAgg, Corr, Count, Covariance, CovPopulation, CovSample, First, Last, Max, Min, StddevPop, StddevSamp, Sum, VariancePop, VarianceSamp}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{ByteType, DataTypes, DecimalType, IntegerType, LongType, ShortType, StringType}\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometConf.COMET_EXEC_STRICT_FLOATING_POINT\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde.{evalModeToProto, exprToProto, serializeDataType}\nimport org.apache.comet.shims.CometEvalModeUtil\n\nobject CometMin extends CometAggregateExpressionSerde[Min] {\n\n  override def convert(\n      aggExpr: AggregateExpression,\n      expr: Min,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    if (!AggSerde.minMaxDataTypeSupported(expr.dataType)) {\n      withInfo(aggExpr, s\"Unsupported data type: ${expr.dataType}\")\n      return None\n    }\n\n    if (expr.dataType == DataTypes.FloatType || expr.dataType == DataTypes.DoubleType) {\n      if (CometConf.COMET_EXEC_STRICT_FLOATING_POINT.get()) {\n        // https://github.com/apache/datafusion-comet/issues/2448\n        withInfo(\n          aggExpr,\n          s\"floating-point not supported when ${COMET_EXEC_STRICT_FLOATING_POINT.key}=true\")\n        return None\n      }\n    }\n\n    val child = expr.children.head\n    val childExpr = exprToProto(child, inputs, binding)\n    val dataType = serializeDataType(expr.dataType)\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.Min.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setDatatype(dataType.get)\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setMin(builder)\n          .build())\n    } else if (dataType.isEmpty) {\n      withInfo(aggExpr, s\"datatype ${expr.dataType} is not supported\", child)\n      None\n    } else {\n      withInfo(aggExpr, child)\n      None\n    }\n  }\n}\n\nobject CometMax extends CometAggregateExpressionSerde[Max] {\n\n  override def convert(\n      aggExpr: AggregateExpression,\n      expr: Max,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    if (!AggSerde.minMaxDataTypeSupported(expr.dataType)) {\n      
withInfo(aggExpr, s\"Unsupported data type: ${expr.dataType}\")\n      return None\n    }\n\n    if (expr.dataType == DataTypes.FloatType || expr.dataType == DataTypes.DoubleType) {\n      if (CometConf.COMET_EXEC_STRICT_FLOATING_POINT.get()) {\n        // https://github.com/apache/datafusion-comet/issues/2448\n        withInfo(\n          aggExpr,\n          s\"floating-point not supported when ${COMET_EXEC_STRICT_FLOATING_POINT.key}=true\")\n        return None\n      }\n    }\n\n    val child = expr.children.head\n    val childExpr = exprToProto(child, inputs, binding)\n    val dataType = serializeDataType(expr.dataType)\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.Max.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setDatatype(dataType.get)\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setMax(builder)\n          .build())\n    } else if (dataType.isEmpty) {\n      withInfo(aggExpr, s\"datatype ${expr.dataType} is not supported\", child)\n      None\n    } else {\n      withInfo(aggExpr, child)\n      None\n    }\n  }\n}\n\nobject CometCount extends CometAggregateExpressionSerde[Count] {\n  override def convert(\n      aggExpr: AggregateExpression,\n      expr: Count,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    val exprChildren = expr.children.map(exprToProto(_, inputs, binding))\n    if (exprChildren.forall(_.isDefined)) {\n      val builder = ExprOuterClass.Count.newBuilder()\n      builder.addAllChildren(exprChildren.map(_.get).asJava)\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setCount(builder)\n          .build())\n    } else {\n      withInfo(aggExpr, expr.children: _*)\n      None\n    }\n  }\n}\n\nobject CometAverage extends CometAggregateExpressionSerde[Average] {\n\n  override def convert(\n      aggExpr: AggregateExpression,\n      avg: Average,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n\n    if (!AggSerde.avgDataTypeSupported(avg.dataType)) {\n      withInfo(aggExpr, s\"Unsupported data type: ${avg.dataType}\")\n      return None\n    }\n\n    val child = avg.child\n    val childExpr = exprToProto(child, inputs, binding)\n    val dataType = serializeDataType(avg.dataType)\n\n    val sumDataType = child.dataType match {\n      case decimalType: DecimalType =>\n        // This is input precision + 10 to be consistent with Spark\n        val precision = Math.min(DecimalType.MAX_PRECISION, decimalType.precision + 10)\n        val newType =\n          DecimalType.apply(precision, decimalType.scale)\n        serializeDataType(newType)\n      case _ =>\n        serializeDataType(child.dataType)\n    }\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.Avg.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setDatatype(dataType.get)\n      builder.setEvalMode(evalModeToProto(CometEvalModeUtil.fromSparkEvalMode(avg.evalMode)))\n      builder.setSumDatatype(sumDataType.get)\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setAvg(builder)\n          .build())\n    } else if (dataType.isEmpty) {\n      withInfo(aggExpr, s\"datatype ${avg.dataType} is not supported\", child)\n      None\n    } else {\n      withInfo(aggExpr, child)\n      None\n    }\n  }\n}\n\nobject CometSum extends 
CometAggregateExpressionSerde[Sum] {\n\n  override def convert(\n      aggExpr: AggregateExpression,\n      sum: Sum,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n\n    if (!AggSerde.sumDataTypeSupported(sum.dataType)) {\n      withInfo(aggExpr, s\"Unsupported data type: ${sum.dataType}\")\n      return None\n    }\n\n    val evalMode = sum.evalMode\n\n    val childExpr = exprToProto(sum.child, inputs, binding)\n    val dataType = serializeDataType(sum.dataType)\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.Sum.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setDatatype(dataType.get)\n      builder.setEvalMode(evalModeToProto(CometEvalModeUtil.fromSparkEvalMode(evalMode)))\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setSum(builder)\n          .build())\n    } else {\n      if (dataType.isEmpty) {\n        withInfo(aggExpr, s\"datatype ${sum.dataType} is not supported\", sum.child)\n      } else {\n        withInfo(aggExpr, sum.child)\n      }\n      None\n    }\n  }\n}\n\nobject CometFirst extends CometAggregateExpressionSerde[First] {\n  override def convert(\n      aggExpr: AggregateExpression,\n      first: First,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    val child = first.children.head\n    val childExpr = exprToProto(child, inputs, binding)\n    val dataType = serializeDataType(first.dataType)\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.First.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setDatatype(dataType.get)\n      builder.setIgnoreNulls(first.ignoreNulls)\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setFirst(builder)\n          .build())\n    } else if (dataType.isEmpty) {\n      withInfo(aggExpr, s\"datatype ${first.dataType} is not supported\", child)\n      None\n    } else {\n      withInfo(aggExpr, child)\n      None\n    }\n  }\n}\n\nobject CometLast extends CometAggregateExpressionSerde[Last] {\n  override def convert(\n      aggExpr: AggregateExpression,\n      last: Last,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    val child = last.children.head\n    val childExpr = exprToProto(child, inputs, binding)\n    val dataType = serializeDataType(last.dataType)\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.Last.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setDatatype(dataType.get)\n      builder.setIgnoreNulls(last.ignoreNulls)\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setLast(builder)\n          .build())\n    } else if (dataType.isEmpty) {\n      withInfo(aggExpr, s\"datatype ${last.dataType} is not supported\", child)\n      None\n    } else {\n      withInfo(aggExpr, child)\n      None\n    }\n  }\n}\n\nobject CometBitAndAgg extends CometAggregateExpressionSerde[BitAndAgg] {\n  override def convert(\n      aggExpr: AggregateExpression,\n      bitAnd: BitAndAgg,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    if (!AggSerde.bitwiseAggTypeSupported(bitAnd.dataType)) {\n      withInfo(aggExpr, s\"Unsupported data type: ${bitAnd.dataType}\")\n      return None\n    }\n    
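// Serialize the child expression and the result type; both are required below\n    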
val child = bitAnd.child\n    val childExpr = exprToProto(child, inputs, binding)\n    val dataType = serializeDataType(bitAnd.dataType)\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.BitAndAgg.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setDatatype(dataType.get)\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setBitAndAgg(builder)\n          .build())\n    } else if (dataType.isEmpty) {\n      withInfo(aggExpr, s\"datatype ${bitAnd.dataType} is not supported\", child)\n      None\n    } else {\n      withInfo(aggExpr, child)\n      None\n    }\n  }\n}\n\nobject CometBitOrAgg extends CometAggregateExpressionSerde[BitOrAgg] {\n  override def convert(\n      aggExpr: AggregateExpression,\n      bitOr: BitOrAgg,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    if (!AggSerde.bitwiseAggTypeSupported(bitOr.dataType)) {\n      withInfo(aggExpr, s\"Unsupported data type: ${bitOr.dataType}\")\n      return None\n    }\n    val child = bitOr.child\n    val childExpr = exprToProto(child, inputs, binding)\n    val dataType = serializeDataType(bitOr.dataType)\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.BitOrAgg.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setDatatype(dataType.get)\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setBitOrAgg(builder)\n          .build())\n    } else if (dataType.isEmpty) {\n      withInfo(aggExpr, s\"datatype ${bitOr.dataType} is not supported\", child)\n      None\n    } else {\n      withInfo(aggExpr, child)\n      None\n    }\n  }\n}\n\nobject CometBitXOrAgg extends CometAggregateExpressionSerde[BitXorAgg] {\n  override def convert(\n      aggExpr: AggregateExpression,\n      bitXor: BitXorAgg,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    if (!AggSerde.bitwiseAggTypeSupported(bitXor.dataType)) {\n      withInfo(aggExpr, s\"Unsupported data type: ${bitXor.dataType}\")\n      return None\n    }\n    val child = bitXor.child\n    val childExpr = exprToProto(child, inputs, binding)\n    val dataType = serializeDataType(bitXor.dataType)\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.BitXorAgg.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setDatatype(dataType.get)\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setBitXorAgg(builder)\n          .build())\n    } else if (dataType.isEmpty) {\n      withInfo(aggExpr, s\"datatype ${bitXor.dataType} is not supported\", child)\n      None\n    } else {\n      withInfo(aggExpr, child)\n      None\n    }\n  }\n}\n\ntrait CometCovBase {\n  def convertCov(\n      aggExpr: AggregateExpression,\n      cov: Covariance,\n      nullOnDivideByZero: Boolean,\n      statsType: Int,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    val child1Expr = exprToProto(cov.left, inputs, binding)\n    val child2Expr = exprToProto(cov.right, inputs, binding)\n    val dataType = serializeDataType(cov.dataType)\n\n    if (child1Expr.isDefined && child2Expr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.Covariance.newBuilder()\n      builder.setChild1(child1Expr.get)\n      builder.setChild2(child2Expr.get)\n      
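// statsType selects the variant: CometCovSample passes 0, CometCovPopulation passes 1\n      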
builder.setNullOnDivideByZero(nullOnDivideByZero)\n      builder.setDatatype(dataType.get)\n      builder.setStatsTypeValue(statsType)\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setCovariance(builder)\n          .build())\n    } else {\n      withInfo(aggExpr, \"Child expression or data type not supported\")\n      None\n    }\n  }\n}\n\nobject CometCovSample extends CometAggregateExpressionSerde[CovSample] with CometCovBase {\n  override def convert(\n      aggExpr: AggregateExpression,\n      covSample: CovSample,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    convertCov(\n      aggExpr,\n      covSample,\n      covSample.nullOnDivideByZero,\n      0,\n      inputs,\n      binding,\n      conf: SQLConf)\n  }\n}\n\nobject CometCovPopulation extends CometAggregateExpressionSerde[CovPopulation] with CometCovBase {\n  override def convert(\n      aggExpr: AggregateExpression,\n      covPopulation: CovPopulation,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    convertCov(\n      aggExpr,\n      covPopulation,\n      covPopulation.nullOnDivideByZero,\n      1,\n      inputs,\n      binding,\n      conf: SQLConf)\n  }\n}\n\ntrait CometVariance {\n  def convertVariance(\n      aggExpr: AggregateExpression,\n      expr: CentralMomentAgg,\n      nullOnDivideByZero: Boolean,\n      statsType: Int,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.AggExpr] = {\n    val childExpr = exprToProto(expr.child, inputs, binding)\n    val dataType = serializeDataType(expr.dataType)\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.Variance.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setNullOnDivideByZero(nullOnDivideByZero)\n      builder.setDatatype(dataType.get)\n      builder.setStatsTypeValue(statsType)\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setVariance(builder)\n          .build())\n    } else {\n      withInfo(aggExpr, expr.child)\n      None\n    }\n  }\n\n}\n\nobject CometVarianceSamp extends CometAggregateExpressionSerde[VarianceSamp] with CometVariance {\n  override def convert(\n      aggExpr: AggregateExpression,\n      variance: VarianceSamp,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    convertVariance(aggExpr, variance, variance.nullOnDivideByZero, 0, inputs, binding)\n  }\n}\n\nobject CometVariancePop extends CometAggregateExpressionSerde[VariancePop] with CometVariance {\n  override def convert(\n      aggExpr: AggregateExpression,\n      variance: VariancePop,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    convertVariance(aggExpr, variance, variance.nullOnDivideByZero, 1, inputs, binding)\n  }\n}\n\ntrait CometStddev {\n  def convertStddev(\n      aggExpr: AggregateExpression,\n      stddev: CentralMomentAgg,\n      nullOnDivideByZero: Boolean,\n      statsType: Int,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    val child = stddev.child\n    val childExpr = exprToProto(child, inputs, binding)\n    val dataType = serializeDataType(stddev.dataType)\n\n    if (childExpr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.Stddev.newBuilder()\n   
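   // statsType selects the variant: CometStddevSamp passes 0, CometStddevPop passes 1\n   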
   builder.setChild(childExpr.get)\n      builder.setNullOnDivideByZero(nullOnDivideByZero)\n      builder.setDatatype(dataType.get)\n      builder.setStatsTypeValue(statsType)\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setStddev(builder)\n          .build())\n    } else {\n      withInfo(aggExpr, child)\n      None\n    }\n  }\n}\n\nobject CometStddevSamp extends CometAggregateExpressionSerde[StddevSamp] with CometStddev {\n  override def convert(\n      aggExpr: AggregateExpression,\n      stddev: StddevSamp,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    convertStddev(aggExpr, stddev, stddev.nullOnDivideByZero, 0, inputs, binding, conf: SQLConf)\n  }\n}\n\nobject CometStddevPop extends CometAggregateExpressionSerde[StddevPop] with CometStddev {\n  override def convert(\n      aggExpr: AggregateExpression,\n      stddev: StddevPop,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    convertStddev(aggExpr, stddev, stddev.nullOnDivideByZero, 1, inputs, binding, conf: SQLConf)\n  }\n}\n\nobject CometCorr extends CometAggregateExpressionSerde[Corr] {\n  override def convert(\n      aggExpr: AggregateExpression,\n      corr: Corr,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    val child1Expr = exprToProto(corr.x, inputs, binding)\n    val child2Expr = exprToProto(corr.y, inputs, binding)\n    val dataType = serializeDataType(corr.dataType)\n\n    if (child1Expr.isDefined && child2Expr.isDefined && dataType.isDefined) {\n      val builder = ExprOuterClass.Correlation.newBuilder()\n      builder.setChild1(child1Expr.get)\n      builder.setChild2(child2Expr.get)\n      builder.setNullOnDivideByZero(corr.nullOnDivideByZero)\n      builder.setDatatype(dataType.get)\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setCorrelation(builder)\n          .build())\n    } else {\n      withInfo(aggExpr, corr.x, corr.y)\n      None\n    }\n  }\n}\n\nobject CometBloomFilterAggregate extends CometAggregateExpressionSerde[BloomFilterAggregate] {\n\n  override def convert(\n      aggExpr: AggregateExpression,\n      bloomFilter: BloomFilterAggregate,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      conf: SQLConf): Option[ExprOuterClass.AggExpr] = {\n    // We ignore mutableAggBufferOffset and inputAggBufferOffset because they are\n    // implementation details for Spark's ObjectHashAggregate.\n    val childExpr = exprToProto(bloomFilter.child, inputs, binding)\n    val numItemsExpr = exprToProto(bloomFilter.estimatedNumItemsExpression, inputs, binding)\n    val numBitsExpr = exprToProto(bloomFilter.numBitsExpression, inputs, binding)\n    val dataType = serializeDataType(bloomFilter.dataType)\n\n    if (childExpr.isDefined &&\n      (bloomFilter.child.dataType\n        .isInstanceOf[ByteType] ||\n        bloomFilter.child.dataType\n          .isInstanceOf[ShortType] ||\n        bloomFilter.child.dataType\n          .isInstanceOf[IntegerType] ||\n        bloomFilter.child.dataType\n          .isInstanceOf[LongType] ||\n        bloomFilter.child.dataType\n          .isInstanceOf[StringType]) &&\n      numItemsExpr.isDefined &&\n      numBitsExpr.isDefined &&\n      dataType.isDefined) {\n      val builder = ExprOuterClass.BloomFilterAgg.newBuilder()\n      builder.setChild(childExpr.get)\n      
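// numItems and numBits come from estimatedNumItemsExpression and numBitsExpression\n      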
builder.setNumItems(numItemsExpr.get)\n      builder.setNumBits(numBitsExpr.get)\n      builder.setDatatype(dataType.get)\n\n      Some(\n        ExprOuterClass.AggExpr\n          .newBuilder()\n          .setBloomFilterAgg(builder)\n          .build())\n    } else {\n      withInfo(\n        aggExpr,\n        bloomFilter.child,\n        bloomFilter.estimatedNumItemsExpression,\n        bloomFilter.numBitsExpression)\n      None\n    }\n  }\n}\n\nobject AggSerde {\n  import org.apache.spark.sql.types._\n\n  def minMaxDataTypeSupported(dt: DataType): Boolean = {\n    dt match {\n      case BooleanType => true\n      case ByteType | ShortType | IntegerType | LongType => true\n      case FloatType | DoubleType => true\n      case _: DecimalType => true\n      case DateType | TimestampType => true\n      case _ => false\n    }\n  }\n\n  def avgDataTypeSupported(dt: DataType): Boolean = {\n    dt match {\n      case ByteType | ShortType | IntegerType | LongType => true\n      case FloatType | DoubleType => true\n      case _: DecimalType => true\n      case _ => false\n    }\n  }\n\n  def sumDataTypeSupported(dt: DataType): Boolean = {\n    dt match {\n      case ByteType | ShortType | IntegerType | LongType => true\n      case FloatType | DoubleType => true\n      case _: DecimalType => true\n      case _ => false\n    }\n  }\n\n  def bitwiseAggTypeSupported(dt: DataType): Boolean = {\n    dt match {\n      case ByteType | ShortType | IntegerType | LongType => true\n      case _ => false\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/arithmetic.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport scala.math.min\n\nimport org.apache.spark.sql.catalyst.expressions.{Add, Attribute, Cast, Divide, EmptyRow, EqualTo, EvalMode, Expression, If, IntegralDivide, Literal, Multiply, Remainder, Round, Subtract, UnaryMinus}\nimport org.apache.spark.sql.types.{ByteType, DataType, DecimalType, DoubleType, FloatType, IntegerType, LongType, ShortType}\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.expressions.{CometCast, CometEvalMode}\nimport org.apache.comet.serde.QueryPlanSerde.{evalModeToProto, exprToProtoInternal, optExprWithInfo, scalarFunctionExprToProtoWithReturnType, serializeDataType}\nimport org.apache.comet.shims.CometEvalModeUtil\n\ntrait MathBase {\n  def createMathExpression(\n      expr: Expression,\n      left: Expression,\n      right: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      dataType: DataType,\n      evalMode: EvalMode.Value,\n      f: (ExprOuterClass.Expr.Builder, ExprOuterClass.MathExpr) => ExprOuterClass.Expr.Builder)\n      : Option[ExprOuterClass.Expr] = {\n    val leftExpr = exprToProtoInternal(left, inputs, binding)\n    val rightExpr = exprToProtoInternal(right, inputs, binding)\n\n    if (leftExpr.isDefined && rightExpr.isDefined) {\n      // create the generic MathExpr message\n      val builder = ExprOuterClass.MathExpr.newBuilder()\n      builder.setLeft(leftExpr.get)\n      builder.setRight(rightExpr.get)\n      builder.setEvalMode(evalModeToProto(CometEvalModeUtil.fromSparkEvalMode(evalMode)))\n      serializeDataType(dataType).foreach { t =>\n        builder.setReturnType(t)\n      }\n      val inner = builder.build()\n      // call the user-supplied function to wrap MathExpr in a top-level Expr\n      // such as Expr.Add or Expr.Divide\n      Some(\n        f(\n          ExprOuterClass.Expr\n            .newBuilder(),\n          inner).build())\n    } else {\n      withInfo(expr, left, right)\n      None\n    }\n  }\n\n  def nullIfWhenPrimitive(expression: Expression): Expression = {\n    val zero = Literal.default(expression.dataType)\n    expression match {\n      case _: Literal if expression != zero => expression\n      case _ =>\n        If(EqualTo(expression, zero), Literal.create(null, expression.dataType), expression)\n    }\n  }\n\n  def supportedDataType(dt: DataType): Boolean = dt match {\n    case _: ByteType | _: ShortType | _: IntegerType | _: LongType | _: FloatType |\n        _: DoubleType | _: DecimalType =>\n      true\n    case _ =>\n      false\n  }\n\n}\n\nobject CometAdd extends CometExpressionSerde[Add] with MathBase {\n\n  override def convert(\n      expr: Add,\n      inputs: Seq[Attribute],\n 
     binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!supportedDataType(expr.left.dataType)) {\n      withInfo(expr, s\"Unsupported datatype ${expr.left.dataType}\")\n      return None\n    }\n    createMathExpression(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      expr.dataType,\n      expr.evalMode,\n      (builder, mathExpr) => builder.setAdd(mathExpr))\n  }\n}\n\nobject CometSubtract extends CometExpressionSerde[Subtract] with MathBase {\n\n  override def convert(\n      expr: Subtract,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!supportedDataType(expr.left.dataType)) {\n      withInfo(expr, s\"Unsupported datatype ${expr.left.dataType}\")\n      return None\n    }\n    createMathExpression(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      expr.dataType,\n      expr.evalMode,\n      (builder, mathExpr) => builder.setSubtract(mathExpr))\n  }\n}\n\nobject CometMultiply extends CometExpressionSerde[Multiply] with MathBase {\n\n  override def convert(\n      expr: Multiply,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!supportedDataType(expr.left.dataType)) {\n      withInfo(expr, s\"Unsupported datatype ${expr.left.dataType}\")\n      return None\n    }\n    createMathExpression(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      expr.dataType,\n      expr.evalMode,\n      (builder, mathExpr) => builder.setMultiply(mathExpr))\n  }\n}\n\nobject CometDivide extends CometExpressionSerde[Divide] with MathBase {\n\n  override def convert(\n      expr: Divide,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // Datafusion now throws an exception for dividing by zero\n    // See https://github.com/apache/arrow-datafusion/pull/6792\n    // For now, use NullIf to swap zeros with nulls.\n    val rightExpr =\n      if (expr.evalMode != EvalMode.ANSI) nullIfWhenPrimitive(expr.right) else expr.right\n    if (!supportedDataType(expr.left.dataType)) {\n      withInfo(expr, s\"Unsupported datatype ${expr.left.dataType}\")\n      return None\n    }\n    val divideExpr = createMathExpression(\n      expr,\n      expr.left,\n      rightExpr,\n      inputs,\n      binding,\n      expr.dataType,\n      expr.evalMode,\n      (builder, mathExpr) => builder.setDivide(mathExpr))\n\n    // For decimal division Spark applies CheckOverflow after dividing: in ANSI mode overflow\n    // throws NUMERIC_VALUE_OUT_OF_RANGE; in legacy/try mode it returns null.  
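For\n    // example (an illustrative case, not an exhaustive rule), dividing a Decimal(38, 0)\n    // value near 10^37 by a Decimal(38, 18) value near 10^-18 yields a quotient on the\n    // order of 10^55, which no Spark decimal precision can represent. 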
The Rust\n    // spark_decimal_div_internal uses i128::MAX as a sentinel for overflow, so without this\n    // wrapper an ANSI overflow would silently return a garbage value instead of throwing.\n    if (divideExpr.isDefined && expr.dataType.isInstanceOf[DecimalType] &&\n      serializeDataType(expr.dataType).isDefined) {\n      val builder = ExprOuterClass.CheckOverflow.newBuilder()\n      builder.setChild(divideExpr.get)\n      builder.setFailOnError(expr.evalMode == EvalMode.ANSI)\n      builder.setDatatype(serializeDataType(expr.dataType).get)\n      Some(ExprOuterClass.Expr.newBuilder().setCheckOverflow(builder).build())\n    } else {\n      divideExpr\n    }\n  }\n}\n\nobject CometIntegralDivide extends CometExpressionSerde[IntegralDivide] with MathBase {\n\n  override def convert(\n      expr: IntegralDivide,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!supportedDataType(expr.left.dataType)) {\n      withInfo(expr, s\"Unsupported datatype ${expr.left.dataType}\")\n      return None\n    }\n\n//    Precision is set to 19 (max precision for a numerical data type except DecimalType)\n\n    val left =\n      if (expr.left.dataType.isInstanceOf[DecimalType]) expr.left\n      else Cast(expr.left, DecimalType(19, 0))\n    val right =\n      if (expr.right.dataType.isInstanceOf[DecimalType]) expr.right\n      else Cast(expr.right, DecimalType(19, 0))\n\n    val rightExpr = if (expr.evalMode != EvalMode.ANSI) nullIfWhenPrimitive(right) else right\n\n    val dataType = (left.dataType, rightExpr.dataType) match {\n      case (l: DecimalType, r: DecimalType) =>\n        // copy from IntegralDivide.resultDecimalType\n        val intDig = l.precision - l.scale + r.scale\n        DecimalType(min(if (intDig == 0) 1 else intDig, DecimalType.MAX_PRECISION), 0)\n      case _ => left.dataType\n    }\n\n    val divideExpr = createMathExpression(\n      expr,\n      left,\n      rightExpr,\n      inputs,\n      binding,\n      dataType,\n      expr.evalMode,\n      (builder, mathExpr) => builder.setIntegralDivide(mathExpr))\n\n    if (divideExpr.isDefined) {\n      val childExpr = if (dataType.isInstanceOf[DecimalType]) {\n        // check overflow for decimal type\n        val builder = ExprOuterClass.CheckOverflow.newBuilder()\n        builder.setChild(divideExpr.get)\n        builder.setFailOnError(expr.evalMode == EvalMode.ANSI)\n        builder.setDatatype(serializeDataType(dataType).get)\n        Some(\n          ExprOuterClass.Expr\n            .newBuilder()\n            .setCheckOverflow(builder)\n            .build())\n      } else {\n        divideExpr\n      }\n\n      // cast result to long\n      CometCast.castToProto(expr, None, LongType, childExpr.get, CometEvalMode.LEGACY)\n    } else {\n      None\n    }\n  }\n}\n\nobject CometRemainder extends CometExpressionSerde[Remainder] with MathBase {\n\n  override def convert(\n      expr: Remainder,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!supportedDataType(expr.left.dataType)) {\n      withInfo(expr, s\"Unsupported datatype ${expr.left.dataType}\")\n      return None\n    }\n    if (expr.evalMode == EvalMode.TRY) {\n      withInfo(expr, s\"Eval mode ${expr.evalMode} is not supported\")\n      return None\n    }\n\n    createMathExpression(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      expr.dataType,\n      expr.evalMode,\n      (builder, mathExpr) => builder.setRemainder(mathExpr))\n  
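  // Note: as in Spark, the remainder result takes the sign of the dividend; only the\n    // LEGACY and ANSI eval modes reach this point because TRY was rejected above.\n  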
}\n}\n\nobject CometRound extends CometExpressionSerde[Round] {\n\n  override def convert(\n      r: Round,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // _scale is a constant, copied from Spark's RoundBase because it is a protected val\n    val scaleV: Any = r.scale.eval(EmptyRow)\n    val _scale: Int = scaleV.asInstanceOf[Int]\n\n    lazy val childExpr = exprToProtoInternal(r.child, inputs, binding)\n    r.child.dataType match {\n      case t: DecimalType if t.scale < 0 => // Spark disallows negative scale SPARK-30252\n        withInfo(r, \"Decimal type has negative scale\")\n        None\n      case _ if scaleV == null =>\n        exprToProtoInternal(Literal(null), inputs, binding)\n      case _: ByteType | ShortType | IntegerType | LongType if _scale >= 0 =>\n        childExpr // _scale (i.e. decimal places) >= 0 is a no-op for integer types in Spark\n      case _: FloatType | DoubleType =>\n        // We cannot properly match Spark's behavior for floating-point numbers.\n        // Spark uses BigDecimal for rounding float/double, and BigDecimal first converts a\n        // double to string internally in order to create its own internal representation.\n        // The problem is that BigDecimal uses java.lang.Double.toString(), which has a\n        // complicated rounding algorithm. E.g. -5.81855622136895E8 is actually\n        // -581855622.13689494132995605468750. Note the 5th fractional digit is 4 instead of\n        // 5. Java(Scala)'s toString() rounds it up to -581855622.136895. This makes a\n        // difference when rounding at the 5th digit, i.e. round(-5.81855622136895E8, 5)\n        // should be -5.818556221369E8, instead of -5.8185562213689E8. There is also an\n        // example where toString() does NOT round up. 6.1317116247283497E18 is\n        // 6131711624728349696. It can be rounded up to 6.13171162472835E18, which still\n        // represents the same double number, i.e. 6.13171162472835E18 ==\n        // 6.1317116247283497E18. However, toString() does not round it, which results in\n        // round(6.1317116247283497E18, -5) == 6.1317116247282995E18 instead of\n        // 6.1317116247283999E18.\n        withInfo(r, \"Comet does not support Spark's BigDecimal rounding\")\n        None\n      case _ =>\n        // `scale` must be Int64 type in DataFusion\n        val scaleExpr = exprToProtoInternal(Literal(_scale.toLong, LongType), inputs, binding)\n        val optExpr =\n          scalarFunctionExprToProtoWithReturnType(\n            \"round\",\n            r.dataType,\n            r.ansiEnabled,\n            childExpr,\n            scaleExpr)\n        optExprWithInfo(optExpr, r, r.child)\n    }\n  }\n}\n\nobject CometUnaryMinus extends CometExpressionSerde[UnaryMinus] {\n\n  override def convert(\n      expr: UnaryMinus,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    if (childExpr.isDefined) {\n      val builder = ExprOuterClass.UnaryMinus.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setFailOnError(expr.failOnError)\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setUnaryMinus(builder)\n          .build())\n    } else {\n      withInfo(expr, expr.child)\n      None\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/arrays.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport scala.annotation.tailrec\n\nimport org.apache.spark.sql.catalyst.expressions.{ArrayAppend, ArrayContains, ArrayExcept, ArrayFilter, ArrayInsert, ArrayIntersect, ArrayJoin, ArrayMax, ArrayMin, ArrayRemove, ArrayRepeat, ArraysOverlap, ArrayUnion, Attribute, CreateArray, ElementAt, Expression, Flatten, GetArrayItem, IsNotNull, Literal, Reverse, Size, SortArray}\nimport org.apache.spark.sql.catalyst.util.GenericArrayData\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde._\nimport org.apache.comet.shims.CometExprShim\n\nobject CometArrayRemove\n    extends CometExpressionSerde[ArrayRemove]\n    with CometExprShim\n    with ArraysBase {\n\n  override def convert(\n      expr: ArrayRemove,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val inputTypes: Set[DataType] = expr.children.map(_.dataType).toSet\n    for (dt <- inputTypes) {\n      if (!isTypeSupported(dt)) {\n        withInfo(expr, s\"data type not supported: $dt\")\n        return None\n      }\n    }\n    val arrayExprProto = exprToProto(expr.left, inputs, binding)\n    val keyExprProto = exprToProto(expr.right, inputs, binding)\n\n    scalarFunctionExprToProto(\"array_remove_all\", arrayExprProto, keyExprProto)\n  }\n}\n\nobject CometArrayAppend extends CometExpressionSerde[ArrayAppend] {\n\n  override def getSupportLevel(expr: ArrayAppend): SupportLevel = Compatible()\n\n  override def convert(\n      expr: ArrayAppend,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val child = expr.children.head\n    val elementType = child.dataType.asInstanceOf[ArrayType].elementType\n\n    val arrayExprProto = exprToProto(expr.children.head, inputs, binding)\n    val keyExprProto = exprToProto(expr.children(1), inputs, binding)\n\n    // DataFusion's array_append always returns a list with nullable elements,\n    // so we must promise ArrayType(elementType, containsNull = true) here even if\n    // Spark's expr.dataType has containsNull = false (e.g. 
for array(1,2,3)).\n    val arrayAppendScalarExpr =\n      scalarFunctionExprToProtoWithReturnType(\n        \"array_append\",\n        ArrayType(elementType, containsNull = true),\n        false,\n        arrayExprProto,\n        keyExprProto)\n\n    val isNotNullExpr = createUnaryExpr(\n      expr,\n      expr.children.head,\n      inputs,\n      binding,\n      (builder, unaryExpr) => builder.setIsNotNull(unaryExpr))\n\n    val nullLiteralProto = exprToProto(Literal(null, elementType), Seq.empty)\n\n    if (arrayAppendScalarExpr.isDefined && isNotNullExpr.isDefined && nullLiteralProto.isDefined) {\n      val caseWhenExpr = ExprOuterClass.CaseWhen\n        .newBuilder()\n        .addWhen(isNotNullExpr.get)\n        .addThen(arrayAppendScalarExpr.get)\n        .setElseExpr(nullLiteralProto.get)\n        .build()\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setCaseWhen(caseWhenExpr)\n          .build())\n    } else {\n      withInfo(expr, expr.children: _*)\n      None\n    }\n  }\n}\n\nobject CometArrayContains extends CometExpressionSerde[ArrayContains] {\n\n  override def convert(\n      expr: ArrayContains,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val arrayExprProto = exprToProto(expr.children.head, inputs, binding)\n    val keyExprProto = exprToProto(expr.children(1), inputs, binding)\n\n    scalarFunctionExprToProto(\"array_contains\", arrayExprProto, keyExprProto)\n  }\n}\n\nobject CometSortArray extends CometExpressionSerde[SortArray] {\n  private def containsFloatingPoint(dt: DataType): Boolean = {\n    dt match {\n      case FloatType | DoubleType => true\n      case ArrayType(elementType, _) => containsFloatingPoint(elementType)\n      case StructType(fields) => fields.exists(f => containsFloatingPoint(f.dataType))\n      case MapType(keyType, valueType, _) =>\n        containsFloatingPoint(keyType) || containsFloatingPoint(valueType)\n      case _ => false\n    }\n  }\n\n  private def supportedSortArrayElementType(\n      dt: DataType,\n      nestedInArray: Boolean = false): Boolean = {\n    dt match {\n      // DataFusion's array_sort compares nested arrays through Arrow's rank kernel.\n      // That kernel does not support Struct or Null child values,\n      // so array<array<struct<...>>> and array<array<null>> would fail at runtime.\n      case _: NullType if !nestedInArray =>\n        true\n      case ArrayType(elementType, _) =>\n        supportedSortArrayElementType(elementType, nestedInArray = true)\n      case StructType(fields) if !nestedInArray =>\n        fields.forall(f => supportedSortArrayElementType(f.dataType))\n      case _ =>\n        supportedScalarSortElementType(dt)\n    }\n  }\n\n  override def getSupportLevel(expr: SortArray): SupportLevel = {\n    val elementType = expr.base.dataType.asInstanceOf[ArrayType].elementType\n\n    if (!supportedSortArrayElementType(elementType)) {\n      Unsupported(Some(s\"Sort on array element type $elementType is not supported\"))\n    } else if (CometConf.COMET_EXEC_STRICT_FLOATING_POINT.get() &&\n      containsFloatingPoint(elementType)) {\n      Incompatible(\n        Some(\n          \"Sorting on floating-point is not 100% compatible with Spark, and Comet is running \" +\n            s\"with ${CometConf.COMET_EXEC_STRICT_FLOATING_POINT.key}=true. 
\" +\n            s\"${CometConf.COMPAT_GUIDE}\"))\n    } else {\n      Compatible()\n    }\n  }\n\n  override def convert(\n      expr: SortArray,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val arrayExprProto = exprToProtoInternal(expr.base, inputs, binding)\n    val (sortDirectionExprProto, nullOrderingExprProto) = expr.ascendingOrder match {\n      case Literal(value: Boolean, BooleanType) =>\n        val direction = if (value) \"ASC\" else \"DESC\"\n        val nullOrdering = if (value) \"NULLS FIRST\" else \"NULLS LAST\"\n        (\n          exprToProtoInternal(Literal(direction), inputs, binding),\n          exprToProtoInternal(Literal(nullOrdering), inputs, binding))\n      case other =>\n        withInfo(expr, s\"ascendingOrder must be a boolean literal: $other\")\n        (None, None)\n    }\n\n    val sortArrayScalarExpr =\n      scalarFunctionExprToProto(\n        \"array_sort\",\n        arrayExprProto,\n        sortDirectionExprProto,\n        nullOrderingExprProto)\n    optExprWithInfo(sortArrayScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometArrayIntersect extends CometExpressionSerde[ArrayIntersect] {\n\n  override def getSupportLevel(expr: ArrayIntersect): SupportLevel = Incompatible(None)\n\n  override def convert(\n      expr: ArrayIntersect,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val leftArrayExprProto = exprToProto(expr.children.head, inputs, binding)\n    val rightArrayExprProto = exprToProto(expr.children(1), inputs, binding)\n\n    val arraysIntersectScalarExpr =\n      scalarFunctionExprToProto(\"array_intersect\", leftArrayExprProto, rightArrayExprProto)\n    optExprWithInfo(arraysIntersectScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometArrayMax extends CometExpressionSerde[ArrayMax] {\n  override def convert(\n      expr: ArrayMax,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val arrayExprProto = exprToProto(expr.children.head, inputs, binding)\n\n    val arrayMaxScalarExpr =\n      scalarFunctionExprToProto(\"array_max\", arrayExprProto)\n    optExprWithInfo(arrayMaxScalarExpr, expr)\n  }\n}\n\nobject CometArrayMin extends CometExpressionSerde[ArrayMin] {\n  override def convert(\n      expr: ArrayMin,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val arrayExprProto = exprToProto(expr.children.head, inputs, binding)\n\n    val arrayMinScalarExpr = scalarFunctionExprToProto(\"array_min\", arrayExprProto)\n    optExprWithInfo(arrayMinScalarExpr, expr)\n  }\n}\n\nobject CometArraysOverlap extends CometExpressionSerde[ArraysOverlap] {\n\n  override def getSupportLevel(expr: ArraysOverlap): SupportLevel =\n    Incompatible(\n      Some(\n        \"Inconsistent behavior with NULL values\" +\n          \" (https://github.com/apache/datafusion-comet/issues/3645)\" +\n          \" (https://github.com/apache/datafusion-comet/issues/2036)\"))\n\n  override def convert(\n      expr: ArraysOverlap,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val leftArrayExprProto = exprToProto(expr.children.head, inputs, binding)\n    val rightArrayExprProto = exprToProto(expr.children(1), inputs, binding)\n\n    val arraysOverlapScalarExpr = scalarFunctionExprToProtoWithReturnType(\n      \"array_has_any\",\n      BooleanType,\n      false,\n      leftArrayExprProto,\n      rightArrayExprProto)\n    
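// Spark's arrays_overlap returns NULL rather than false when the arrays share no\n    // element but either input contains a null; array_has_any does not mirror that,\n    // hence the Incompatible support level above.\n    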
optExprWithInfo(arraysOverlapScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometArrayRepeat extends CometExpressionSerde[ArrayRepeat] {\n\n  override def convert(\n      expr: ArrayRepeat,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val elementProto = exprToProto(expr.left, inputs, binding)\n    val countProto = exprToProto(expr.right, inputs, binding)\n    val returnType = ArrayType(elementType = expr.left.dataType)\n    for {\n      countIsNotNullExpr <- countIsNotNullExpr(expr, inputs, binding)\n      arrayRepeatExprProto <- scalarFunctionExprToProto(\"array_repeat\", elementProto, countProto)\n      nullLiteralExprProto <- exprToProtoInternal(Literal(null, returnType), inputs, binding)\n    } yield {\n      val caseWhenProto = ExprOuterClass.CaseWhen\n        .newBuilder()\n        .addWhen(countIsNotNullExpr)\n        .addThen(arrayRepeatExprProto)\n        .setElseExpr(nullLiteralExprProto)\n        .build()\n      ExprOuterClass.Expr\n        .newBuilder()\n        .setCaseWhen(caseWhenProto)\n        .build()\n    }\n  }\n\n  private def countIsNotNullExpr(\n      expr: ArrayRepeat,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createUnaryExpr(\n      expr,\n      expr.right,\n      inputs,\n      binding,\n      (builder, countExpr) => builder.setIsNotNull(countExpr))\n  }\n}\n\nobject CometArrayCompact extends CometExpressionSerde[Expression] {\n\n  override def getSupportLevel(expr: Expression): SupportLevel = Compatible()\n\n  override def convert(\n      expr: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val child = expr.children.head\n    val elementType = child.dataType.asInstanceOf[ArrayType].elementType\n\n    val arrayExprProto = exprToProto(child, inputs, binding)\n\n    // Use Comet's SparkArrayCompact UDF instead of DataFusion's array_remove_all.\n    // DF 53 changed array_remove_all to return NULL when the element arg is NULL,\n    // which breaks the array_compact use case.\n    // TODO: upstream to datafusion-spark crate\n    val arrayCompactScalarExpr = scalarFunctionExprToProtoWithReturnType(\n      \"spark_array_compact\",\n      ArrayType(elementType = elementType),\n      false,\n      arrayExprProto)\n    optExprWithInfo(arrayCompactScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometArrayExcept extends CometExpressionSerde[ArrayExcept] with CometExprShim {\n\n  override def getSupportLevel(expr: ArrayExcept): SupportLevel = Incompatible(None)\n\n  @tailrec\n  def isTypeSupported(dt: DataType): Boolean = {\n    import DataTypes._\n    dt match {\n      case BooleanType | ByteType | ShortType | IntegerType | LongType | FloatType | DoubleType |\n          _: DecimalType | DateType | TimestampType | TimestampNTZType | StringType =>\n        true\n      case BinaryType => false\n      case ArrayType(elementType, _) => isTypeSupported(elementType)\n      case _: StructType =>\n        false\n      case _ => false\n    }\n  }\n\n  override def convert(\n      expr: ArrayExcept,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val inputTypes = expr.children.map(_.dataType).toSet\n    for (dt <- inputTypes) {\n      if (!isTypeSupported(dt)) {\n        withInfo(expr, s\"data type not supported: $dt\")\n        return None\n      }\n    }\n    val leftArrayExprProto = exprToProto(expr.left, inputs, binding)\n    val rightArrayExprProto = 
exprToProto(expr.right, inputs, binding)\n\n    val arrayExceptScalarExpr =\n      scalarFunctionExprToProto(\"array_except\", leftArrayExprProto, rightArrayExprProto)\n    optExprWithInfo(arrayExceptScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometArrayJoin extends CometExpressionSerde[ArrayJoin] {\n\n  override def getSupportLevel(expr: ArrayJoin): SupportLevel = Incompatible(None)\n\n  override def convert(\n      expr: ArrayJoin,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val arrayExpr = expr.asInstanceOf[ArrayJoin]\n    val arrayExprProto = exprToProto(arrayExpr.array, inputs, binding)\n    val delimiterExprProto = exprToProto(arrayExpr.delimiter, inputs, binding)\n\n    arrayExpr.nullReplacement match {\n      case Some(nullReplacementExpr) =>\n        val nullReplacementExprProto = exprToProto(nullReplacementExpr, inputs, binding)\n\n        val arrayJoinScalarExpr = scalarFunctionExprToProto(\n          \"array_to_string\",\n          arrayExprProto,\n          delimiterExprProto,\n          nullReplacementExprProto)\n\n        optExprWithInfo(\n          arrayJoinScalarExpr,\n          expr,\n          arrayExpr,\n          arrayExpr.delimiter,\n          nullReplacementExpr)\n      case None =>\n        val arrayJoinScalarExpr =\n          scalarFunctionExprToProto(\"array_to_string\", arrayExprProto, delimiterExprProto)\n\n        optExprWithInfo(arrayJoinScalarExpr, expr, arrayExpr, arrayExpr.delimiter)\n    }\n  }\n}\n\nobject CometArrayInsert extends CometExpressionSerde[ArrayInsert] {\n\n  override def getSupportLevel(expr: ArrayInsert): SupportLevel = Incompatible(None)\n\n  override def convert(\n      expr: ArrayInsert,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val srcExprProto = exprToProtoInternal(expr.children.head, inputs, binding)\n    val posExprProto = exprToProtoInternal(expr.children(1), inputs, binding)\n    val itemExprProto = exprToProtoInternal(expr.children(2), inputs, binding)\n    val legacyNegativeIndex =\n      SQLConf.get.getConfString(\"spark.sql.legacy.negativeIndexInArrayInsert\").toBoolean\n    if (srcExprProto.isDefined && posExprProto.isDefined && itemExprProto.isDefined) {\n      val arrayInsertBuilder = ExprOuterClass.ArrayInsert\n        .newBuilder()\n        .setSrcArrayExpr(srcExprProto.get)\n        .setPosExpr(posExprProto.get)\n        .setItemExpr(itemExprProto.get)\n        .setLegacyNegativeIndex(legacyNegativeIndex)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setArrayInsert(arrayInsertBuilder)\n          .build())\n    } else {\n      withInfo(\n        expr,\n        \"unsupported arguments for ArrayInsert\",\n        expr.children.head,\n        expr.children(1),\n        expr.children(2))\n      None\n    }\n  }\n}\n\nobject CometArrayUnion extends CometExpressionSerde[ArrayUnion] {\n\n  override def getSupportLevel(expr: ArrayUnion): SupportLevel =\n    Incompatible(\n      Some(\n        \"Correctness issue\" +\n          \" (https://github.com/apache/datafusion-comet/issues/3644)\"))\n\n  override def convert(\n      expr: ArrayUnion,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val leftArrayExprProto = exprToProto(expr.children.head, inputs, binding)\n    val rightArrayExprProto = exprToProto(expr.children(1), inputs, binding)\n\n    val arraysUnionScalarExpr =\n      scalarFunctionExprToProto(\"array_union\", leftArrayExprProto, 
rightArrayExprProto)\n    optExprWithInfo(arraysUnionScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometCreateArray extends CometExpressionSerde[CreateArray] {\n  override def convert(\n      expr: CreateArray,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val children = expr.children\n\n    // Handle empty array: return literal directly to avoid DataFusion coerce_types bug\n    // when make_array is called with 0 arguments (issue #3338)\n    if (children.isEmpty) {\n      val emptyArrayLiteral =\n        Literal.create(new GenericArrayData(Array.empty[Any]), expr.dataType)\n      return exprToProtoInternal(emptyArrayLiteral, inputs, binding)\n    }\n\n    val childExprs = children.map(exprToProtoInternal(_, inputs, binding))\n\n    if (childExprs.forall(_.isDefined)) {\n      scalarFunctionExprToProto(\"make_array\", childExprs: _*)\n    } else {\n      withInfo(expr, \"unsupported arguments for CreateArray\", children: _*)\n      None\n    }\n  }\n}\n\nobject CometGetArrayItem extends CometExpressionSerde[GetArrayItem] {\n\n  override def convert(\n      expr: GetArrayItem,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val ordinalExpr = exprToProtoInternal(expr.ordinal, inputs, binding)\n\n    if (childExpr.isDefined && ordinalExpr.isDefined) {\n      val listExtractBuilder = ExprOuterClass.ListExtract\n        .newBuilder()\n        .setChild(childExpr.get)\n        .setOrdinal(ordinalExpr.get)\n        .setOneBased(false)\n        .setFailOnError(expr.failOnError)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setListExtract(listExtractBuilder)\n          .build())\n    } else {\n      withInfo(expr, \"unsupported arguments for GetArrayItem\", expr.child, expr.ordinal)\n      None\n    }\n  }\n}\n\nobject CometArrayReverse extends CometExpressionSerde[Reverse] with ArraysBase {\n  val unsupportedReason = \"reverse on array containing binary is not supported\"\n\n  @tailrec\n  private def containsBinary(dt: DataType): Boolean = {\n    dt match {\n      case BinaryType => true\n      case ArrayType(elementType, _) => containsBinary(elementType)\n      case _ => false\n    }\n  }\n\n  override def getSupportLevel(expr: Reverse): SupportLevel = {\n    if (containsBinary(expr.child.dataType)) {\n      Incompatible(Some(unsupportedReason))\n    } else {\n      Compatible(None)\n    }\n  }\n\n  override def convert(\n      expr: Reverse,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!isTypeSupported(expr.child.dataType)) {\n      withInfo(expr, s\"child data type not supported: ${expr.child.dataType}\")\n      return None\n    }\n    val reverseExprProto = exprToProto(expr.child, inputs, binding)\n    val reverseScalarExpr = scalarFunctionExprToProto(\"array_reverse\", reverseExprProto)\n    optExprWithInfo(reverseScalarExpr, expr, expr.children: _*)\n  }\n\n}\n\nobject CometElementAt extends CometExpressionSerde[ElementAt] {\n\n  override def convert(\n      expr: ElementAt,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.left, inputs, binding)\n    val ordinalExpr = exprToProtoInternal(expr.right, inputs, binding)\n    val defaultExpr = expr.defaultValueOutOfBound.flatMap(exprToProtoInternal(_, inputs, binding))\n\n    if 
(!expr.left.dataType.isInstanceOf[ArrayType]) {\n      withInfo(expr, \"Input is not an array\")\n      return None\n    }\n\n    if (childExpr.isDefined && ordinalExpr.isDefined &&\n      defaultExpr.isDefined == expr.defaultValueOutOfBound.isDefined) {\n      val arrayExtractBuilder = ExprOuterClass.ListExtract\n        .newBuilder()\n        .setChild(childExpr.get)\n        .setOrdinal(ordinalExpr.get)\n        .setOneBased(true)\n        .setFailOnError(expr.failOnError)\n\n      defaultExpr.foreach(arrayExtractBuilder.setDefaultValue)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setListExtract(arrayExtractBuilder)\n          .build())\n    } else {\n      withInfo(expr, \"unsupported arguments for ElementAt\", expr.left, expr.right)\n      None\n    }\n  }\n}\n\nobject CometFlatten extends CometExpressionSerde[Flatten] with ArraysBase {\n\n  override def convert(\n      expr: Flatten,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val inputTypes = expr.children.map(_.dataType).toSet\n    for (dt <- inputTypes) {\n      if (!isTypeSupported(dt)) {\n        withInfo(expr, s\"data type not supported: $dt\")\n        return None\n      }\n    }\n    val flattenExprProto = exprToProto(expr.child, inputs, binding)\n    val flattenScalarExpr = scalarFunctionExprToProto(\"flatten\", flattenExprProto)\n    optExprWithInfo(flattenScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometArrayFilter extends CometExpressionSerde[ArrayFilter] {\n\n  override def getSupportLevel(expr: ArrayFilter): SupportLevel = {\n    expr.function.children.headOption match {\n      case Some(_: IsNotNull) => Compatible()\n      case _ => Unsupported()\n    }\n  }\n\n  override def convert(\n      expr: ArrayFilter,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    CometArrayCompact.convert(expr, inputs, binding)\n  }\n}\n\nobject CometSize extends CometExpressionSerde[Size] {\n\n  override def getSupportLevel(expr: Size): SupportLevel = {\n    expr.child.dataType match {\n      case _: ArrayType => Compatible()\n      case _: MapType => Unsupported(Some(\"size does not support map inputs\"))\n      case other =>\n        // this should be unreachable because Spark only supports map and array inputs\n        Unsupported(Some(s\"Unsupported child data type: $other\"))\n    }\n  }\n\n  override def convert(\n      expr: Size,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val arrayExprProto = exprToProto(expr.child, inputs, binding)\n    for {\n      isNotNullExprProto <- createIsNotNullExprProto(expr, inputs, binding)\n      sizeScalarExprProto <- scalarFunctionExprToProto(\"size\", arrayExprProto)\n      emptyLiteralExprProto <- createLiteralExprProto(SQLConf.get.legacySizeOfNull)\n    } yield {\n      val caseWhenExpr = ExprOuterClass.CaseWhen\n        .newBuilder()\n        .addWhen(isNotNullExprProto)\n        .addThen(sizeScalarExprProto)\n        .setElseExpr(emptyLiteralExprProto)\n        .build()\n      ExprOuterClass.Expr\n        .newBuilder()\n        .setCaseWhen(caseWhenExpr)\n        .build()\n    }\n  }\n\n  private def createIsNotNullExprProto(\n      expr: Size,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createUnaryExpr(\n      expr,\n      expr.child,\n      inputs,\n      binding,\n      (builder, unaryExpr) => builder.setIsNotNull(unaryExpr))\n  }\n\n  private def 
createLiteralExprProto(legacySizeOfNull: Boolean): Option[ExprOuterClass.Expr] = {\n    val value = if (legacySizeOfNull) -1 else null\n    exprToProto(Literal(value, IntegerType), Seq.empty)\n  }\n\n}\n\ntrait ArraysBase {\n\n  def isTypeSupported(dt: DataType): Boolean = {\n    import DataTypes._\n    dt match {\n      case BooleanType | ByteType | ShortType | IntegerType | LongType | FloatType | DoubleType |\n          _: DecimalType | DateType | TimestampType | TimestampNTZType | StringType =>\n        true\n      case BinaryType => false\n      case ArrayType(elementType, _) => isTypeSupported(elementType)\n      case _: StructType =>\n        // https://github.com/apache/datafusion-comet/issues/1307\n        false\n      case _ => false\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/bitwise.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions._\nimport org.apache.spark.sql.types.{ByteType, LongType}\n\nimport org.apache.comet.serde.QueryPlanSerde._\n\nobject CometBitwiseAnd extends CometExpressionSerde[BitwiseAnd] {\n  override def convert(\n      expr: BitwiseAnd,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setBitwiseAnd(binaryExpr))\n  }\n}\n\nobject CometBitwiseNot extends CometExpressionSerde[BitwiseNot] {\n  override def convert(\n      expr: BitwiseNot,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childProto = exprToProto(expr.child, inputs, binding)\n    val bitNotScalarExpr =\n      scalarFunctionExprToProto(\"bitwise_not\", childProto)\n    optExprWithInfo(bitNotScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometBitwiseOr extends CometExpressionSerde[BitwiseOr] {\n  override def convert(\n      expr: BitwiseOr,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setBitwiseOr(binaryExpr))\n  }\n}\n\nobject CometBitwiseXor extends CometExpressionSerde[BitwiseXor] {\n  override def convert(\n      expr: BitwiseXor,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setBitwiseXor(binaryExpr))\n  }\n}\n\nobject CometShiftRight extends CometExpressionSerde[ShiftRight] {\n  override def convert(\n      expr: ShiftRight,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // DataFusion bitwise shift right expression requires\n    // same data type between left and right side\n    val rightExpression = if (expr.left.dataType == LongType) {\n      Cast(expr.right, LongType)\n    } else {\n      expr.right\n    }\n\n    createBinaryExpr(\n      expr,\n      expr.left,\n      rightExpression,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setBitwiseShiftRight(binaryExpr))\n  }\n}\n\nobject CometShiftLeft extends CometExpressionSerde[ShiftLeft] {\n  override def convert(\n      expr: ShiftLeft,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // DataFusion bitwise shift left expression requires\n    // same 
data type between left and right side\n    val rightExpression = if (expr.left.dataType == LongType) {\n      Cast(expr.right, LongType)\n    } else {\n      expr.right\n    }\n\n    createBinaryExpr(\n      expr,\n      expr.left,\n      rightExpression,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setBitwiseShiftLeft(binaryExpr))\n  }\n}\n\nobject CometBitwiseGet extends CometExpressionSerde[BitwiseGet] {\n  override def convert(\n      expr: BitwiseGet,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val argProto = exprToProto(expr.left, inputs, binding)\n    val posProto = exprToProto(expr.right, inputs, binding)\n    val bitGetScalarExpr =\n      scalarFunctionExprToProtoWithReturnType(\"bit_get\", ByteType, false, argProto, posProto)\n    optExprWithInfo(bitGetScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometBitwiseCount extends CometScalarFunction[BitwiseCount](\"bit_count\")\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/collectionOperations.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, Reverse}\nimport org.apache.spark.sql.types.ArrayType\n\nimport org.apache.comet.serde.ExprOuterClass.Expr\n\nobject CometReverse extends CometScalarFunction[Reverse](\"reverse\") {\n\n  override def getSupportLevel(expr: Reverse): SupportLevel = {\n    if (expr.child.dataType.isInstanceOf[ArrayType]) {\n      CometArrayReverse.getSupportLevel(expr)\n    } else {\n      Compatible()\n    }\n  }\n\n  override def convert(expr: Reverse, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {\n    if (expr.child.dataType.isInstanceOf[ArrayType]) {\n      CometArrayReverse.convert(expr, inputs, binding)\n    } else {\n      super.convert(expr, inputs, binding)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/conditional.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, CaseWhen, Coalesce, Expression, If, IsNotNull}\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde.exprToProtoInternal\n\nobject CometIf extends CometExpressionSerde[If] {\n  override def convert(\n      expr: If,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val predicateExpr = exprToProtoInternal(expr.predicate, inputs, binding)\n    val trueExpr = exprToProtoInternal(expr.trueValue, inputs, binding)\n    val falseExpr = exprToProtoInternal(expr.falseValue, inputs, binding)\n    if (predicateExpr.isDefined && trueExpr.isDefined && falseExpr.isDefined) {\n      val builder = ExprOuterClass.IfExpr.newBuilder()\n      builder.setIfExpr(predicateExpr.get)\n      builder.setTrueExpr(trueExpr.get)\n      builder.setFalseExpr(falseExpr.get)\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setIf(builder)\n          .build())\n    } else {\n      withInfo(expr, expr.predicate, expr.trueValue, expr.falseValue)\n      None\n    }\n  }\n}\n\nobject CometCaseWhen extends CometExpressionSerde[CaseWhen] {\n  override def convert(\n      expr: CaseWhen,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    var allBranches: Seq[Expression] = Seq()\n    val whenSeq = expr.branches.map(elements => {\n      allBranches = allBranches :+ elements._1\n      exprToProtoInternal(elements._1, inputs, binding)\n    })\n    val thenSeq = expr.branches.map(elements => {\n      allBranches = allBranches :+ elements._2\n      exprToProtoInternal(elements._2, inputs, binding)\n    })\n    assert(whenSeq.length == thenSeq.length)\n    if (whenSeq.forall(_.isDefined) && thenSeq.forall(_.isDefined)) {\n      val builder = ExprOuterClass.CaseWhen.newBuilder()\n      builder.addAllWhen(whenSeq.map(_.get).asJava)\n      builder.addAllThen(thenSeq.map(_.get).asJava)\n      if (expr.elseValue.isDefined) {\n        val elseValueExpr =\n          exprToProtoInternal(expr.elseValue.get, inputs, binding)\n        if (elseValueExpr.isDefined) {\n          builder.setElseExpr(elseValueExpr.get)\n        } else {\n          withInfo(expr, expr.elseValue.get)\n          return None\n        }\n      }\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setCaseWhen(builder)\n          .build())\n    } else {\n      withInfo(expr, allBranches: _*)\n      None\n    }\n  }\n}\n\nobject CometCoalesce extends CometExpressionSerde[Coalesce] {\n  override def convert(\n      
expr: Coalesce,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val branches = expr.children.dropRight(1).map { child =>\n      (IsNotNull(child), child)\n    }\n    val elseValue = expr.children.last\n    val whenSeq = branches.map(elements => {\n      exprToProtoInternal(elements._1, inputs, binding)\n    })\n    val thenSeq = branches.map(elements => {\n      exprToProtoInternal(elements._2, inputs, binding)\n    })\n    assert(whenSeq.length == thenSeq.length)\n    if (whenSeq.forall(_.isDefined) && thenSeq.forall(_.isDefined)) {\n      val builder = ExprOuterClass.CaseWhen.newBuilder()\n      builder.addAllWhen(whenSeq.map(_.get).asJava)\n      builder.addAllThen(thenSeq.map(_.get).asJava)\n      val elseValueExpr = exprToProtoInternal(elseValue, inputs, binding)\n      if (elseValueExpr.isDefined) {\n        builder.setElseExpr(elseValueExpr.get)\n      } else {\n        withInfo(expr, elseValue)\n        return None\n      }\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setCaseWhen(builder)\n          .build())\n    } else {\n      withInfo(expr, branches.map(_._2): _*)\n      None\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/contraintExpressions.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, KnownFloatingPointNormalized}\nimport org.apache.spark.sql.catalyst.optimizer.NormalizeNaNAndZero\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProtoInternal, optExprWithInfo, serializeDataType}\n\nobject CometKnownFloatingPointNormalized\n    extends CometExpressionSerde[KnownFloatingPointNormalized] {\n\n  override def getSupportLevel(expr: KnownFloatingPointNormalized): SupportLevel = {\n    expr.child match {\n      case _: NormalizeNaNAndZero => Compatible()\n      case _ =>\n        Unsupported(\n          Some(\n            \"KnownFloatingPointNormalized only supports NormalizeNaNAndZero child expressions\"))\n    }\n  }\n\n  override def convert(\n      expr: KnownFloatingPointNormalized,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n\n    val wrapped = expr.child.asInstanceOf[NormalizeNaNAndZero].child\n\n    val dataType = serializeDataType(wrapped.dataType)\n    if (dataType.isEmpty) {\n      withInfo(wrapped, s\"Unsupported datatype ${wrapped.dataType}\")\n      return None\n    }\n    val ex = exprToProtoInternal(wrapped, inputs, binding)\n    val optExpr = ex.map { child =>\n      val builder = ExprOuterClass.NormalizeNaNAndZero\n        .newBuilder()\n        .setChild(child)\n        .setDatatype(dataType.get)\n      ExprOuterClass.Expr.newBuilder().setNormalizeNanAndZero(builder).build()\n    }\n    optExprWithInfo(optExpr, expr, wrapped)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/datetime.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport java.util.Locale\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, DateAdd, DateDiff, DateFormatClass, DateFromUnixDate, DateSub, DayOfMonth, DayOfWeek, DayOfYear, Days, GetDateField, Hour, Hours, LastDay, Literal, MakeDate, Minute, Month, NextDay, Quarter, Second, TruncDate, TruncTimestamp, UnixDate, UnixTimestamp, WeekDay, WeekOfYear, Year}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{DateType, IntegerType, StringType, TimestampNTZType, TimestampType}\nimport org.apache.spark.unsafe.types.UTF8String\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.expressions.{CometCast, CometEvalMode}\nimport org.apache.comet.serde.CometGetDateField.CometGetDateField\nimport org.apache.comet.serde.ExprOuterClass.Expr\nimport org.apache.comet.serde.QueryPlanSerde._\n\nprivate object CometGetDateField extends Enumeration {\n  type CometGetDateField = Value\n\n  // See: https://datafusion.apache.org/user-guide/sql/scalar_functions.html#date-part\n  val Year: Value = Value(\"year\")\n  val Month: Value = Value(\"month\")\n  val DayOfMonth: Value = Value(\"day\")\n  // Datafusion: day of the week where Sunday is 0, but spark sunday is 1 (1 = Sunday,\n  // 2 = Monday, ..., 7 = Saturday).\n  val DayOfWeek: Value = Value(\"dow\")\n  val DayOfYear: Value = Value(\"doy\")\n  val WeekDay: Value = Value(\"isodow\") // day of the week where Monday is 0\n  val WeekOfYear: Value = Value(\"week\")\n  val Quarter: Value = Value(\"quarter\")\n}\n\n/**\n * Convert spark [[org.apache.spark.sql.catalyst.expressions.GetDateField]] expressions to\n * Datafusion\n * [[https://datafusion.apache.org/user-guide/sql/scalar_functions.html#date-part datepart]]\n * function.\n */\ntrait CometExprGetDateField[T <: GetDateField] {\n  def getDateField(\n      expr: T,\n      field: CometGetDateField,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val periodType = exprToProtoInternal(Literal(field.toString), inputs, binding)\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val optExpr = scalarFunctionExprToProto(\"datepart\", Seq(periodType, childExpr): _*)\n      .map(e => {\n        Expr\n          .newBuilder()\n          .setCast(\n            ExprOuterClass.Cast\n              .newBuilder()\n              .setChild(e)\n              .setDatatype(serializeDataType(IntegerType).get)\n              .setEvalMode(ExprOuterClass.EvalMode.LEGACY)\n              .setAllowIncompat(false)\n              .build())\n          .build()\n      })\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\nobject 
CometYear extends CometExpressionSerde[Year] with CometExprGetDateField[Year] {\n  override def convert(\n      expr: Year,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    getDateField(expr, CometGetDateField.Year, inputs, binding)\n  }\n}\n\nobject CometMonth extends CometExpressionSerde[Month] with CometExprGetDateField[Month] {\n  override def convert(\n      expr: Month,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    getDateField(expr, CometGetDateField.Month, inputs, binding)\n  }\n}\n\nobject CometDayOfMonth\n    extends CometExpressionSerde[DayOfMonth]\n    with CometExprGetDateField[DayOfMonth] {\n  override def convert(\n      expr: DayOfMonth,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    getDateField(expr, CometGetDateField.DayOfMonth, inputs, binding)\n  }\n}\n\nobject CometDayOfWeek\n    extends CometExpressionSerde[DayOfWeek]\n    with CometExprGetDateField[DayOfWeek] {\n  override def convert(\n      expr: DayOfWeek,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // DataFusion's dow numbers the days of the week with Sunday = 0, while Spark's\n    // DayOfWeek returns 1 = Sunday, 2 = Monday, ..., 7 = Saturday, so we add 1 to the\n    // result of datepart(dow, ...).\n    val optExpr = getDateField(expr, CometGetDateField.DayOfWeek, inputs, binding)\n      .zip(exprToProtoInternal(Literal(1), inputs, binding))\n      .map { case (left, right) =>\n        Expr\n          .newBuilder()\n          .setAdd(\n            ExprOuterClass.MathExpr\n              .newBuilder()\n              .setLeft(left)\n              .setRight(right)\n              .setEvalMode(ExprOuterClass.EvalMode.LEGACY)\n              .setReturnType(serializeDataType(IntegerType).get)\n              .build())\n          .build()\n      }\n      .headOption\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\nobject CometWeekDay extends CometExpressionSerde[WeekDay] with CometExprGetDateField[WeekDay] {\n  override def convert(\n      expr: WeekDay,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    getDateField(expr, CometGetDateField.WeekDay, inputs, binding)\n  }\n}\n\nobject CometDayOfYear\n    extends CometExpressionSerde[DayOfYear]\n    with CometExprGetDateField[DayOfYear] {\n  override def convert(\n      expr: DayOfYear,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    getDateField(expr, CometGetDateField.DayOfYear, inputs, binding)\n  }\n}\n\nobject CometWeekOfYear\n    extends CometExpressionSerde[WeekOfYear]\n    with CometExprGetDateField[WeekOfYear] {\n  override def convert(\n      expr: WeekOfYear,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    getDateField(expr, CometGetDateField.WeekOfYear, inputs, binding)\n  }\n}\n\nobject CometQuarter extends CometExpressionSerde[Quarter] with CometExprGetDateField[Quarter] {\n  override def convert(\n      expr: Quarter,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    getDateField(expr, CometGetDateField.Quarter, inputs, binding)\n  }\n}\n\nobject CometHour extends CometExpressionSerde[Hour] {\n\n  override def getSupportLevel(expr: Hour): SupportLevel = {\n    if (expr.child.dataType.typeName == \"timestamp_ntz\") {\n      Incompatible(\n        Some(\n          \"Incorrectly applies timezone conversion to 
TimestampNTZ inputs\" +\n            \" (https://github.com/apache/datafusion-comet/issues/3180)\"))\n    } else {\n      Compatible()\n    }\n  }\n\n  override def convert(\n      expr: Hour,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n\n    if (childExpr.isDefined) {\n      val builder = ExprOuterClass.Hour.newBuilder()\n      builder.setChild(childExpr.get)\n\n      val timeZone = expr.timeZoneId.getOrElse(\"UTC\")\n      builder.setTimezone(timeZone)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setHour(builder)\n          .build())\n    } else {\n      withInfo(expr, expr.child)\n      None\n    }\n  }\n}\n\nobject CometMinute extends CometExpressionSerde[Minute] {\n\n  override def getSupportLevel(expr: Minute): SupportLevel = {\n    if (expr.child.dataType.typeName == \"timestamp_ntz\") {\n      Incompatible(\n        Some(\n          \"Incorrectly applies timezone conversion to TimestampNTZ inputs\" +\n            \" (https://github.com/apache/datafusion-comet/issues/3180)\"))\n    } else {\n      Compatible()\n    }\n  }\n\n  override def convert(\n      expr: Minute,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n\n    if (childExpr.isDefined) {\n      val builder = ExprOuterClass.Minute.newBuilder()\n      builder.setChild(childExpr.get)\n\n      val timeZone = expr.timeZoneId.getOrElse(\"UTC\")\n      builder.setTimezone(timeZone)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setMinute(builder)\n          .build())\n    } else {\n      withInfo(expr, expr.child)\n      None\n    }\n  }\n}\n\nobject CometSecond extends CometExpressionSerde[Second] {\n\n  override def getSupportLevel(expr: Second): SupportLevel = {\n    if (expr.child.dataType.typeName == \"timestamp_ntz\") {\n      Incompatible(\n        Some(\n          \"Incorrectly applies timezone conversion to TimestampNTZ inputs\" +\n            \" (https://github.com/apache/datafusion-comet/issues/3180)\"))\n    } else {\n      Compatible()\n    }\n  }\n\n  override def convert(\n      expr: Second,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n\n    if (childExpr.isDefined) {\n      val builder = ExprOuterClass.Second.newBuilder()\n      builder.setChild(childExpr.get)\n\n      val timeZone = expr.timeZoneId.getOrElse(\"UTC\")\n      builder.setTimezone(timeZone)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setSecond(builder)\n          .build())\n    } else {\n      withInfo(expr, expr.child)\n      None\n    }\n  }\n}\n\nobject CometUnixTimestamp extends CometExpressionSerde[UnixTimestamp] {\n\n  private def isSupportedInputType(expr: UnixTimestamp): Boolean = {\n    // Note: TimestampNTZType is not supported because Comet incorrectly applies\n    // timezone conversion to TimestampNTZ values. 
TimestampNTZ stores local time\n    // without timezone, so no conversion should be applied.\n    expr.children.head.dataType match {\n      case TimestampType | DateType => true\n      case _ => false\n    }\n  }\n\n  override def getSupportLevel(expr: UnixTimestamp): SupportLevel = {\n    if (isSupportedInputType(expr)) {\n      Compatible()\n    } else {\n      val inputType = expr.children.head.dataType\n      Unsupported(Some(s\"unix_timestamp does not support input type: $inputType\"))\n    }\n  }\n\n  override def convert(\n      expr: UnixTimestamp,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!isSupportedInputType(expr)) {\n      val inputType = expr.children.head.dataType\n      withInfo(expr, s\"unix_timestamp does not support input type: $inputType\")\n      return None\n    }\n\n    val childExpr = exprToProtoInternal(expr.children.head, inputs, binding)\n\n    if (childExpr.isDefined) {\n      val builder = ExprOuterClass.UnixTimestamp.newBuilder()\n      builder.setChild(childExpr.get)\n\n      val timeZone = expr.timeZoneId.getOrElse(\"UTC\")\n      builder.setTimezone(timeZone)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setUnixTimestamp(builder)\n          .build())\n    } else {\n      withInfo(expr, expr.children.head)\n      None\n    }\n  }\n}\n\nobject CometDateAdd extends CometScalarFunction[DateAdd](\"date_add\")\n\nobject CometDateSub extends CometScalarFunction[DateSub](\"date_sub\")\n\nobject CometNextDay extends CometScalarFunction[NextDay](\"next_day\")\n\nobject CometMakeDate extends CometScalarFunction[MakeDate](\"make_date\")\n\nobject CometLastDay extends CometScalarFunction[LastDay](\"last_day\")\n\nobject CometDateFromUnixDate extends CometScalarFunction[DateFromUnixDate](\"date_from_unix_date\")\n\nobject CometDateDiff extends CometScalarFunction[DateDiff](\"date_diff\")\n\n/**\n * Converts a date to the number of days since Unix epoch (1970-01-01). 
Since dates are internally\n * stored as days since epoch, this is a simple cast to integer.\n */\nobject CometUnixDate extends CometExpressionSerde[UnixDate] {\n  override def convert(\n      expr: UnixDate,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val optExpr = childExpr.map { child =>\n      Expr\n        .newBuilder()\n        .setCast(\n          ExprOuterClass.Cast\n            .newBuilder()\n            .setChild(child)\n            .setDatatype(serializeDataType(IntegerType).get)\n            .setEvalMode(ExprOuterClass.EvalMode.LEGACY)\n            .setAllowIncompat(false)\n            .build())\n        .build()\n    }\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\nobject CometTruncDate extends CometExpressionSerde[TruncDate] {\n\n  val supportedFormats: Seq[String] =\n    Seq(\"year\", \"yyyy\", \"yy\", \"quarter\", \"mon\", \"month\", \"mm\", \"week\")\n\n  override def getSupportLevel(expr: TruncDate): SupportLevel = {\n    expr.format match {\n      case Literal(fmt: UTF8String, _) =>\n        if (supportedFormats.contains(fmt.toString.toLowerCase(Locale.ROOT))) {\n          Compatible()\n        } else {\n          Unsupported(Some(s\"Format $fmt is not supported\"))\n        }\n      case _ =>\n        Incompatible(\n          Some(\"Invalid format strings will throw an exception instead of returning NULL\"))\n    }\n  }\n\n  override def convert(\n      expr: TruncDate,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.date, inputs, binding)\n    val formatExpr = exprToProtoInternal(expr.format, inputs, binding)\n    val optExpr =\n      scalarFunctionExprToProtoWithReturnType(\n        \"date_trunc\",\n        DateType,\n        false,\n        childExpr,\n        formatExpr)\n    optExprWithInfo(optExpr, expr, expr.date, expr.format)\n  }\n}\n\nobject CometTruncTimestamp extends CometExpressionSerde[TruncTimestamp] {\n\n  val supportedFormats: Seq[String] =\n    Seq(\n      \"year\",\n      \"yyyy\",\n      \"yy\",\n      \"quarter\",\n      \"mon\",\n      \"month\",\n      \"mm\",\n      \"week\",\n      \"day\",\n      \"dd\",\n      \"hour\",\n      \"minute\",\n      \"second\",\n      \"millisecond\",\n      \"microsecond\")\n\n  override def getSupportLevel(expr: TruncTimestamp): SupportLevel = {\n    val timezone = expr.timeZoneId.getOrElse(\"UTC\")\n    val isUtc = timezone == \"UTC\" || timezone == \"Etc/UTC\"\n    expr.format match {\n      case Literal(fmt: UTF8String, _) =>\n        if (supportedFormats.contains(fmt.toString.toLowerCase(Locale.ROOT))) {\n          if (isUtc) {\n            Compatible()\n          } else {\n            Incompatible(\n              Some(\n                s\"Incorrect results in non-UTC timezone '$timezone'\" +\n                  \" (https://github.com/apache/datafusion-comet/issues/2649)\"))\n          }\n        } else {\n          Unsupported(Some(s\"Format $fmt is not supported\"))\n        }\n      case _ =>\n        Incompatible(\n          Some(\"Invalid format strings will throw an exception instead of returning NULL\"))\n    }\n  }\n\n  override def convert(\n      expr: TruncTimestamp,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.timestamp, inputs, binding)\n    val formatExpr = exprToProtoInternal(expr.format, 
inputs, binding)\n\n    if (childExpr.isDefined && formatExpr.isDefined) {\n      val builder = ExprOuterClass.TruncTimestamp.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setFormat(formatExpr.get)\n\n      val timeZone = expr.timeZoneId.getOrElse(\"UTC\")\n      builder.setTimezone(timeZone)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setTruncTimestamp(builder)\n          .build())\n    } else {\n      withInfo(expr, expr.timestamp, expr.format)\n      None\n    }\n  }\n}\n\n/**\n * Converts Spark DateFormatClass expression to DataFusion's to_char function.\n *\n * Spark uses Java SimpleDateFormat patterns while DataFusion uses strftime patterns. This\n * implementation supports a whitelist of common format strings that can be reliably mapped\n * between the two systems.\n */\nobject CometDateFormat extends CometExpressionSerde[DateFormatClass] {\n\n  /**\n   * Mapping from Spark SimpleDateFormat patterns to strftime patterns. Only formats in this map\n   * are supported.\n   */\n  val supportedFormats: Map[String, String] = Map(\n    // Full date formats\n    \"yyyy-MM-dd\" -> \"%Y-%m-%d\",\n    \"yyyy/MM/dd\" -> \"%Y/%m/%d\",\n    \"yyyy-MM-dd HH:mm:ss\" -> \"%Y-%m-%d %H:%M:%S\",\n    \"yyyy/MM/dd HH:mm:ss\" -> \"%Y/%m/%d %H:%M:%S\",\n    // Date components\n    \"yyyy\" -> \"%Y\",\n    \"yy\" -> \"%y\",\n    \"MM\" -> \"%m\",\n    \"dd\" -> \"%d\",\n    // Time formats\n    \"HH:mm:ss\" -> \"%H:%M:%S\",\n    \"HH:mm\" -> \"%H:%M\",\n    \"HH\" -> \"%H\",\n    \"mm\" -> \"%M\",\n    \"ss\" -> \"%S\",\n    // Combined formats\n    \"yyyyMMdd\" -> \"%Y%m%d\",\n    \"yyyyMM\" -> \"%Y%m\",\n    // Month and day names\n    \"EEEE\" -> \"%A\",\n    \"EEE\" -> \"%a\",\n    \"MMMM\" -> \"%B\",\n    \"MMM\" -> \"%b\",\n    // 12-hour time\n    \"hh:mm:ss a\" -> \"%I:%M:%S %p\",\n    \"hh:mm a\" -> \"%I:%M %p\",\n    \"h:mm a\" -> \"%-I:%M %p\",\n    // ISO formats\n    \"yyyy-MM-dd'T'HH:mm:ss\" -> \"%Y-%m-%dT%H:%M:%S\")\n\n  override def getSupportLevel(expr: DateFormatClass): SupportLevel = {\n    // Check timezone - only UTC is fully compatible\n    val timezone = expr.timeZoneId.getOrElse(\"UTC\")\n    val isUtc = timezone == \"UTC\" || timezone == \"Etc/UTC\"\n\n    expr.right match {\n      case Literal(fmt: UTF8String, _) =>\n        val format = fmt.toString\n        if (supportedFormats.contains(format)) {\n          if (isUtc) {\n            Compatible()\n          } else {\n            Incompatible(Some(s\"Non-UTC timezone '$timezone' may produce different results\"))\n          }\n        } else {\n          Unsupported(\n            Some(\n              s\"Format '$format' is not supported. 
Supported formats: \" +\n                supportedFormats.keys.mkString(\", \")))\n        }\n      case _ =>\n        Unsupported(Some(\"Only literal format strings are supported\"))\n    }\n  }\n\n  override def convert(\n      expr: DateFormatClass,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // Get the format string - must be a literal for us to map it\n    val strftimeFormat = expr.right match {\n      case Literal(fmt: UTF8String, _) =>\n        supportedFormats.get(fmt.toString)\n      case _ => None\n    }\n\n    strftimeFormat match {\n      case Some(format) =>\n        val childExpr = exprToProtoInternal(expr.left, inputs, binding)\n        val formatExpr = exprToProtoInternal(Literal(format), inputs, binding)\n\n        val optExpr = scalarFunctionExprToProtoWithReturnType(\n          \"to_char\",\n          StringType,\n          false,\n          childExpr,\n          formatExpr)\n        optExprWithInfo(optExpr, expr, expr.left, expr.right)\n      case None =>\n        withInfo(expr, expr.left, expr.right)\n        None\n    }\n  }\n}\n\n/**\n * Converts a timestamp to the number of hours since Unix epoch (1970-01-01 00:00:00 UTC). This is\n * a V2 partition transform expression.\n *\n * Both TimestampType and TimestampNTZType use direct division of the raw microsecond value\n * without applying any session timezone offset.\n */\nobject CometHours extends CometExpressionSerde[Hours] {\n  override def convert(\n      expr: Hours,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val optExpr = expr.child.dataType match {\n      case TimestampType | TimestampNTZType =>\n        exprToProtoInternal(expr.child, inputs, binding).map { childExpr =>\n          val builder = ExprOuterClass.HoursTransform.newBuilder()\n          builder.setChild(childExpr)\n\n          ExprOuterClass.Expr\n            .newBuilder()\n            .setHoursTransform(builder)\n            .build()\n        }\n      case other =>\n        withInfo(expr, s\"Hours does not support input type: $other\")\n        None\n    }\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\n/**\n * Converts a timestamp or date to the number of days since Unix epoch (1970-01-01). 
This is a V2\n * partition transform expression.\n *\n * For DateType: dates are internally stored as days since epoch, so this is a simple cast to\n * integer (same as CometUnixDate).\n *\n * For TimestampType: uses a timezone-aware Cast(Timestamp to Date) followed by Cast(Date to Int).\n * The first cast respects the session timezone to correctly determine the date boundary.\n */\nobject CometDays extends CometExpressionSerde[Days] {\n  override def convert(\n      expr: Days,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n\n    // Normalize input to DateType (Timestamp converts to Date first)\n    val dateExprOpt = expr.child.dataType match {\n      case DateType => childExpr\n      case TimestampType =>\n        val timezone = SQLConf.get.sessionLocalTimeZone\n        childExpr.flatMap { child =>\n          CometCast.castToProto(expr, Some(timezone), DateType, child, CometEvalMode.LEGACY)\n        }\n      case other =>\n        withInfo(expr, s\"Days does not support input type: $other\")\n        None\n    }\n\n    // Convert DateType to IntegerType (days since epoch)\n    val optExpr = dateExprOpt.map { dateExpr =>\n      Expr\n        .newBuilder()\n        .setCast(\n          ExprOuterClass.Cast\n            .newBuilder()\n            .setChild(dateExpr)\n            .setDatatype(serializeDataType(IntegerType).get)\n            .setEvalMode(ExprOuterClass.EvalMode.LEGACY)\n            .setAllowIncompat(false)\n            .build())\n        .build()\n    }\n\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n"
  },
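The `CometDayOfWeek` serde above bridges two day-numbering conventions that are easy to get backwards. A minimal, self-contained sketch of the relationship, assuming DataFusion's `dow` follows the PostgreSQL convention of Sunday = 0 as the comments state (`DowSketch` and its helpers are illustrative, not Comet code):

```scala
import java.time.LocalDate

object DowSketch {
  // java.time: Monday = 1 .. Sunday = 7; "% 7" yields the assumed DataFusion dow (Sunday = 0).
  def dataFusionDow(d: LocalDate): Int = d.getDayOfWeek.getValue % 7

  // Spark's DayOfWeek: Sunday = 1 .. Saturday = 7, hence the "+ 1" that CometDayOfWeek adds.
  def sparkDayOfWeek(d: LocalDate): Int = dataFusionDow(d) + 1

  def main(args: Array[String]): Unit = {
    val sunday = LocalDate.of(2024, 1, 7) // 2024-01-07 was a Sunday
    assert(dataFusionDow(sunday) == 0 && sparkDayOfWeek(sunday) == 1)
  }
}
```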
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/decimalExpressions.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, MakeDecimal, UnscaledValue}\nimport org.apache.spark.sql.types.{DecimalType, LongType}\n\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProtoInternal, optExprWithInfo, scalarFunctionExprToProtoWithReturnType}\n\nobject CometUnscaledValue extends CometExpressionSerde[UnscaledValue] {\n  override def convert(\n      expr: UnscaledValue,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val optExpr =\n      scalarFunctionExprToProtoWithReturnType(\"unscaled_value\", LongType, false, childExpr)\n    optExprWithInfo(optExpr, expr, expr.child)\n\n  }\n}\n\nobject CometMakeDecimal extends CometExpressionSerde[MakeDecimal] {\n\n  override def getSupportLevel(expr: MakeDecimal): SupportLevel = {\n    expr.child.dataType match {\n      case LongType => Compatible()\n      case other => Unsupported(Some(s\"Unsupported input data type: $other\"))\n    }\n  }\n\n  override def convert(\n      expr: MakeDecimal,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val optExpr = scalarFunctionExprToProtoWithReturnType(\n      \"make_decimal\",\n      DecimalType(expr.precision, expr.scale),\n      failOnError = !expr.nullOnOverflow,\n      childExpr)\n    optExprWithInfo(optExpr, expr, expr.child)\n\n  }\n}\n"
  },
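For intuition on what the two serdes above compute: `UnscaledValue` strips the scale off a Long-backed decimal, and `MakeDecimal` reverses it. A small sketch using `java.math.BigDecimal` (illustrative only; the actual conversions run in the native `unscaled_value` and `make_decimal` functions):

```scala
import java.math.BigDecimal

object UnscaledRoundTrip {
  def main(args: Array[String]): Unit = {
    val d = new BigDecimal("12.34")                   // a DecimalType(4, 2) value
    val unscaled = d.unscaledValue().longValueExact() // 1234: what UnscaledValue yields
    val rebuilt = BigDecimal.valueOf(unscaled, 2)     // 12.34: what MakeDecimal(1234, 4, 2) rebuilds
    assert(unscaled == 1234L && rebuilt.compareTo(d) == 0)
  }
}
```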
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/hash.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, Expression, Murmur3Hash, Sha1, Sha2, XxHash64}\nimport org.apache.spark.sql.types.{ArrayType, DataType, DecimalType, IntegerType, LongType, MapType, StringType, StructType}\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProtoInternal, scalarFunctionExprToProtoWithReturnType, serializeDataType, supportedDataType}\n\nobject CometXxHash64 extends CometExpressionSerde[XxHash64] {\n  override def convert(\n      expr: XxHash64,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!HashUtils.isSupportedType(expr)) {\n      return None\n    }\n    val exprs = expr.children.map(exprToProtoInternal(_, inputs, binding))\n    val seedBuilder = LiteralOuterClass.Literal\n      .newBuilder()\n      .setDatatype(serializeDataType(LongType).get)\n      .setLongVal(expr.seed)\n    val seedExpr = Some(ExprOuterClass.Expr.newBuilder().setLiteral(seedBuilder).build())\n    // the seed is put at the end of the arguments\n    scalarFunctionExprToProtoWithReturnType(\"xxhash64\", LongType, false, exprs :+ seedExpr: _*)\n  }\n}\n\nobject CometMurmur3Hash extends CometExpressionSerde[Murmur3Hash] {\n  override def convert(\n      expr: Murmur3Hash,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!HashUtils.isSupportedType(expr)) {\n      return None\n    }\n    val exprs = expr.children.map(exprToProtoInternal(_, inputs, binding))\n    val seedBuilder = LiteralOuterClass.Literal\n      .newBuilder()\n      .setDatatype(serializeDataType(IntegerType).get)\n      .setIntVal(expr.seed)\n    val seedExpr = Some(ExprOuterClass.Expr.newBuilder().setLiteral(seedBuilder).build())\n    // the seed is put at the end of the arguments\n    scalarFunctionExprToProtoWithReturnType(\n      \"murmur3_hash\",\n      IntegerType,\n      false,\n      exprs :+ seedExpr: _*)\n  }\n}\n\nobject CometSha2 extends CometExpressionSerde[Sha2] {\n  override def convert(\n      expr: Sha2,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!HashUtils.isSupportedType(expr)) {\n      return None\n    }\n\n    // It's possible for spark to dynamically compute the number of bits from input\n    // expression, however DataFusion does not support that yet.\n    if (!expr.right.foldable) {\n      withInfo(expr, \"For Sha2, non literal numBits is not supported\")\n      return None\n    }\n\n    val leftExpr = exprToProtoInternal(expr.left, inputs, binding)\n    val numBitsExpr = exprToProtoInternal(expr.right, 
inputs, binding)\n    scalarFunctionExprToProtoWithReturnType(\"sha2\", StringType, false, leftExpr, numBitsExpr)\n  }\n}\n\nobject CometSha1 extends CometExpressionSerde[Sha1] {\n  override def convert(\n      expr: Sha1,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (!HashUtils.isSupportedType(expr)) {\n      withInfo(expr, s\"HashUtils doesn't support dataType: ${expr.child.dataType}\")\n      return None\n    }\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    scalarFunctionExprToProtoWithReturnType(\"sha1\", StringType, false, childExpr)\n  }\n}\n\nprivate object HashUtils {\n  def isSupportedType(expr: Expression): Boolean = {\n    for (child <- expr.children) {\n      if (!isSupportedDataType(expr, child.dataType)) {\n        return false\n      }\n    }\n    true\n  }\n\n  private def isSupportedDataType(expr: Expression, dt: DataType): Boolean = {\n    dt match {\n      case d: DecimalType if d.precision > 18 =>\n        // Spark converts decimals with precision > 18 into\n        // Java BigDecimal before hashing\n        withInfo(expr, s\"Unsupported datatype: $dt (precision > 18)\")\n        false\n      case s: StructType =>\n        s.fields.forall(f => isSupportedDataType(expr, f.dataType))\n      case a: ArrayType =>\n        isSupportedDataType(expr, a.elementType)\n      case m: MapType =>\n        isSupportedDataType(expr, m.keyType) && isSupportedDataType(expr, m.valueType)\n      case _ if !supportedDataType(dt, allowComplex = true) =>\n        withInfo(expr, s\"Unsupported datatype $dt\")\n        false\n      case _ =>\n        true\n    }\n  }\n}\n"
  },
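The precision > 18 cutoff in `HashUtils` exists because a decimal of at most 18 digits always fits in a signed 64-bit integer and can be hashed natively, whereas Spark routes wider decimals through `java.math.BigDecimal` before hashing, which Comet does not replicate. A quick sanity check of that boundary (an illustrative sketch, not Comet code):

```scala
object DecimalHashBoundary {
  def main(args: Array[String]): Unit = {
    // 18 nines fit in a Long (Long.MaxValue has 19 digits); 19 nines do not.
    assert(BigInt("9" * 18).isValidLong)
    assert(!BigInt("9" * 19).isValidLong)
  }
}
```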
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/literals.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde.literals\n\nimport java.lang\n\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, Literal}\nimport org.apache.spark.sql.catalyst.util.ArrayData\nimport org.apache.spark.sql.types.{ArrayType, BinaryType, BooleanType, ByteType, DateType, Decimal, DecimalType, DoubleType, FloatType, IntegerType, LongType, NullType, ShortType, StringType, TimestampNTZType, TimestampType}\nimport org.apache.spark.unsafe.types.UTF8String\n\nimport com.google.protobuf.ByteString\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.DataTypeSupport.isComplexType\nimport org.apache.comet.serde.{CometExpressionSerde, Compatible, ExprOuterClass, LiteralOuterClass, SupportLevel, Unsupported}\nimport org.apache.comet.serde.QueryPlanSerde.{serializeDataType, supportedDataType}\nimport org.apache.comet.serde.Types.ListLiteral\n\nobject CometLiteral extends CometExpressionSerde[Literal] with Logging {\n\n  override def getSupportLevel(expr: Literal): SupportLevel = {\n\n    if (supportedDataType(\n        expr.dataType,\n        allowComplex = expr.value == null ||\n\n          // Nested literal support for native reader\n          // can be tracked https://github.com/apache/datafusion-comet/issues/1937\n          (expr.dataType\n            .isInstanceOf[ArrayType] && (!isComplexType(\n            expr.dataType.asInstanceOf[ArrayType].elementType) || expr.dataType\n            .asInstanceOf[ArrayType]\n            .elementType\n            .isInstanceOf[ArrayType])))) {\n      Compatible(None)\n    } else {\n      Unsupported(Some(s\"Unsupported data type ${expr.dataType}\"))\n    }\n  }\n\n  override def convert(\n      expr: Literal,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val dataType = expr.dataType\n    val value = expr.value\n\n    val exprBuilder = LiteralOuterClass.Literal.newBuilder()\n\n    if (value == null) {\n      exprBuilder.setIsNull(true)\n    } else {\n      exprBuilder.setIsNull(false)\n      dataType match {\n        case _: BooleanType => exprBuilder.setBoolVal(value.asInstanceOf[Boolean])\n        case _: ByteType => exprBuilder.setByteVal(value.asInstanceOf[Byte])\n        case _: ShortType => exprBuilder.setShortVal(value.asInstanceOf[Short])\n        case _: IntegerType | _: DateType => exprBuilder.setIntVal(value.asInstanceOf[Int])\n        case _: LongType | _: TimestampType | _: TimestampNTZType =>\n          exprBuilder.setLongVal(value.asInstanceOf[Long])\n        case _: FloatType => exprBuilder.setFloatVal(value.asInstanceOf[Float])\n        case _: DoubleType => 
exprBuilder.setDoubleVal(value.asInstanceOf[Double])\n        case _: StringType =>\n          exprBuilder.setStringVal(value.asInstanceOf[UTF8String].toString)\n        case _: DecimalType =>\n          // Pass decimal literal as bytes.\n          val unscaled = value.asInstanceOf[Decimal].toBigDecimal.underlying.unscaledValue\n          exprBuilder.setDecimalVal(com.google.protobuf.ByteString.copyFrom(unscaled.toByteArray))\n        case _: BinaryType =>\n          val byteStr =\n            com.google.protobuf.ByteString.copyFrom(value.asInstanceOf[Array[Byte]])\n          exprBuilder.setBytesVal(byteStr)\n\n        case arr: ArrayType =>\n          val listLiteralBuilder: ListLiteral.Builder =\n            makeListLiteral(value.asInstanceOf[ArrayData].array, arr)\n          exprBuilder.setListVal(listLiteralBuilder.build())\n          exprBuilder.setDatatype(serializeDataType(dataType).get)\n        case dt =>\n          withInfo(expr, s\"Unexpected datatype '$dt' for literal value '$value'\")\n          return None\n      }\n    }\n\n    val dt = serializeDataType(dataType)\n\n    if (dt.isDefined) {\n      exprBuilder.setDatatype(dt.get)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setLiteral(exprBuilder)\n          .build())\n    } else {\n      withInfo(expr, s\"Unsupported datatype $dataType\")\n      None\n    }\n\n  }\n\n  private def makeListLiteral(array: Array[Any], arrayType: ArrayType): ListLiteral.Builder = {\n    val listLiteralBuilder = ListLiteral.newBuilder()\n    arrayType.elementType match {\n      case NullType =>\n        array.foreach(_ => listLiteralBuilder.addNullMask(true))\n      case BooleanType =>\n        array.foreach(v => {\n          val casted = v.asInstanceOf[lang.Boolean]\n          listLiteralBuilder.addBooleanValues(casted)\n          listLiteralBuilder.addNullMask(casted != null)\n        })\n      case ByteType =>\n        array.foreach(v => {\n          val casted = v.asInstanceOf[lang.Byte]\n          listLiteralBuilder.addByteValues(\n            if (casted != null) casted.intValue()\n            else null.asInstanceOf[Integer])\n          listLiteralBuilder.addNullMask(casted != null)\n        })\n      case ShortType =>\n        array.foreach(v => {\n          val casted = v.asInstanceOf[lang.Short]\n          listLiteralBuilder.addShortValues(\n            if (casted != null) casted.intValue()\n            else null.asInstanceOf[Integer])\n          listLiteralBuilder.addNullMask(casted != null)\n        })\n      case IntegerType | DateType =>\n        array.foreach(v => {\n          val casted = v.asInstanceOf[Integer]\n          listLiteralBuilder.addIntValues(casted)\n          listLiteralBuilder.addNullMask(casted != null)\n        })\n      case LongType | TimestampType | TimestampNTZType =>\n        array.foreach(v => {\n          val casted = v.asInstanceOf[lang.Long]\n          listLiteralBuilder.addLongValues(casted)\n          listLiteralBuilder.addNullMask(casted != null)\n        })\n      case FloatType =>\n        array.foreach(v => {\n          val casted = v.asInstanceOf[lang.Float]\n          listLiteralBuilder.addFloatValues(casted)\n          listLiteralBuilder.addNullMask(casted != null)\n        })\n      case DoubleType =>\n        array.foreach(v => {\n          val casted = v.asInstanceOf[lang.Double]\n          listLiteralBuilder.addDoubleValues(casted)\n          listLiteralBuilder.addNullMask(casted != null)\n        })\n      case StringType =>\n        array.foreach(v => {\n   
       val casted = v.asInstanceOf[UTF8String]\n          listLiteralBuilder.addStringValues(if (casted != null) casted.toString else \"\")\n          listLiteralBuilder.addNullMask(casted != null)\n        })\n      case _: DecimalType =>\n        array\n          .foreach(v => {\n            val casted =\n              v.asInstanceOf[Decimal]\n            listLiteralBuilder.addDecimalValues(if (casted != null) {\n              com.google.protobuf.ByteString\n                .copyFrom(casted.toBigDecimal.underlying.unscaledValue.toByteArray)\n            } else ByteString.EMPTY)\n            listLiteralBuilder.addNullMask(casted != null)\n          })\n      case _: BinaryType =>\n        array\n          .foreach(v => {\n            val casted =\n              v.asInstanceOf[Array[Byte]]\n            listLiteralBuilder.addBytesValues(if (casted != null) {\n              com.google.protobuf.ByteString.copyFrom(casted)\n            } else ByteString.EMPTY)\n            listLiteralBuilder.addNullMask(casted != null)\n          })\n      case a: ArrayType =>\n        array.foreach(v => {\n          val casted = v.asInstanceOf[ArrayData]\n          listLiteralBuilder.addListValues(if (casted != null) {\n            makeListLiteral(casted.array, a)\n          } else ListLiteral.newBuilder())\n          listLiteralBuilder.addNullMask(casted != null)\n        })\n    }\n    listLiteralBuilder\n  }\n}\n"
  },
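`makeListLiteral` above encodes an array literal as two parallel lists: a values list and a mask, where the serde emits `addNullMask(casted != null)` and gives null slots a placeholder value (`""` for strings, `ByteString.EMPTY` for decimals and binary). A toy model of that convention for integers (illustrative only; the zero placeholder is this sketch's choice, the protobuf layout is the serde's):

```scala
object NullMaskSketch {
  // Parallel-list encoding: values plus a validity mask, as in ListLiteral.
  def encode(xs: Seq[java.lang.Integer]): (Seq[Int], Seq[Boolean]) =
    (xs.map(x => if (x != null) x.intValue() else 0), // null slot -> placeholder value
     xs.map(_ != null))                               // null slot -> false in the mask

  def main(args: Array[String]): Unit = {
    assert(encode(Seq(1, null, 3)) == (Seq(1, 0, 3), Seq(true, false, true)))
  }
}
```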
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/maps.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions._\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.serde.QueryPlanSerde.{createBinaryExpr, exprToProtoInternal, optExprWithInfo, scalarFunctionExprToProto}\n\nobject CometMapKeys extends CometExpressionSerde[MapKeys] {\n\n  override def convert(\n      expr: MapKeys,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val mapKeysScalarExpr = scalarFunctionExprToProto(\"map_keys\", childExpr)\n    optExprWithInfo(mapKeysScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometMapEntries extends CometExpressionSerde[MapEntries] {\n\n  override def convert(\n      expr: MapEntries,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val mapEntriesScalarExpr = scalarFunctionExprToProto(\"map_entries\", childExpr)\n    optExprWithInfo(mapEntriesScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometMapValues extends CometExpressionSerde[MapValues] {\n\n  override def convert(\n      expr: MapValues,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val mapValuesScalarExpr = scalarFunctionExprToProto(\"map_values\", childExpr)\n    optExprWithInfo(mapValuesScalarExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometMapExtract extends CometExpressionSerde[GetMapValue] {\n\n  override def convert(\n      expr: GetMapValue,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val mapExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val keyExpr = exprToProtoInternal(expr.key, inputs, binding)\n    val mapExtractExpr = scalarFunctionExprToProto(\"map_extract\", mapExpr, keyExpr)\n    optExprWithInfo(mapExtractExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometMapFromArrays extends CometExpressionSerde[MapFromArrays] {\n\n  override def convert(\n      expr: MapFromArrays,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val keysExpr = exprToProtoInternal(expr.left, inputs, binding)\n    val valuesExpr = exprToProtoInternal(expr.right, inputs, binding)\n    val keyType = expr.left.dataType.asInstanceOf[ArrayType].elementType\n    val valueType = expr.right.dataType.asInstanceOf[ArrayType].elementType\n    val returnType = MapType(keyType = keyType, valueType = valueType)\n    for {\n      andBinaryExprProto <- createAndBinaryExpr(expr, 
inputs, binding)\n      mapFromArraysExprProto <- scalarFunctionExprToProto(\"map\", keysExpr, valuesExpr)\n      nullLiteralExprProto <- exprToProtoInternal(Literal(null, returnType), inputs, binding)\n    } yield {\n      val caseWhenExprProto = ExprOuterClass.CaseWhen\n        .newBuilder()\n        .addWhen(andBinaryExprProto)\n        .addThen(mapFromArraysExprProto)\n        .setElseExpr(nullLiteralExprProto)\n        .build()\n      ExprOuterClass.Expr\n        .newBuilder()\n        .setCaseWhen(caseWhenExprProto)\n        .build()\n    }\n  }\n\n  private def createAndBinaryExpr(\n      expr: MapFromArrays,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      IsNotNull(expr.left),\n      IsNotNull(expr.right),\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setAnd(binaryExpr))\n  }\n}\n\nobject CometMapContainsKey extends CometExpressionSerde[MapContainsKey] {\n\n  override def convert(\n      expr: MapContainsKey,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // Replace with array_has(map_keys(map), key)\n    val mapExpr = exprToProtoInternal(expr.left, inputs, binding)\n    val keyExpr = exprToProtoInternal(expr.right, inputs, binding)\n\n    val mapKeysExpr = scalarFunctionExprToProto(\"map_keys\", mapExpr)\n\n    val mapContainsKeyExpr = scalarFunctionExprToProto(\"array_has\", mapKeysExpr, keyExpr)\n    optExprWithInfo(mapContainsKeyExpr, expr, expr.children: _*)\n  }\n}\n\nobject CometMapFromEntries extends CometScalarFunction[MapFromEntries](\"map_from_entries\") {\n  val keyUnsupportedReason = \"Using BinaryType as Map keys is not allowed in map_from_entries\"\n  val valueUnsupportedReason = \"Using BinaryType as Map values is not allowed in map_from_entries\"\n\n  private def containsBinary(dataType: DataType): Boolean = {\n    dataType match {\n      case BinaryType => true\n      case StructType(fields) => fields.exists(field => containsBinary(field.dataType))\n      case ArrayType(elementType, _) => containsBinary(elementType)\n      case _ => false\n    }\n  }\n\n  override def getSupportLevel(expr: MapFromEntries): SupportLevel = {\n    if (containsBinary(expr.dataType.keyType)) {\n      return Incompatible(Some(keyUnsupportedReason))\n    }\n    if (containsBinary(expr.dataType.valueType)) {\n      return Incompatible(Some(valueUnsupportedReason))\n    }\n    Compatible(None)\n  }\n}\n"
  },
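Two of the conversions above are rewrites rather than one-to-one mappings: `MapContainsKey` becomes `array_has(map_keys(m), k)`, and `MapFromArrays` is wrapped in a CASE WHEN so that a null key or value array yields NULL instead of an error from DataFusion's `map`. A behavioral sketch of both rewrites over plain Scala collections (illustrative, not the serde itself):

```scala
object MapRewriteSketch {
  // MapContainsKey ~ array_has(map_keys(m), k): membership over the key array.
  def mapContainsKey[K, V](m: Map[K, V], k: K): Boolean = m.keys.exists(_ == k)

  // MapFromArrays ~ CASE WHEN ks IS NOT NULL AND vs IS NOT NULL THEN map(ks, vs) ELSE NULL.
  def mapFromArrays[K, V](ks: Seq[K], vs: Seq[V]): Option[Map[K, V]] =
    if (ks != null && vs != null) Some(ks.zip(vs).toMap) else None

  def main(args: Array[String]): Unit = {
    assert(mapContainsKey(Map("a" -> 1), "a"))
    assert(mapFromArrays(Seq("a"), null: Seq[Int]).isEmpty) // null input -> NULL result
  }
}
```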
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/math.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Abs, Add, Atan2, Attribute, Ceil, CheckOverflow, Expression, Floor, Hex, If, LessThanOrEqual, Literal, Log, Log10, Log2, Logarithm, Unhex}\nimport org.apache.spark.sql.types.{DecimalType, DoubleType, NumericType}\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProtoInternal, optExprWithInfo, scalarFunctionExprToProto, scalarFunctionExprToProtoWithReturnType, serializeDataType}\n\nobject CometAtan2 extends CometExpressionSerde[Atan2] {\n  override def convert(\n      expr: Atan2,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // Spark adds +0.0 to inputs in order to convert -0.0 to +0.0\n    val left = Add(expr.left, Literal.default(expr.left.dataType))\n    val right = Add(expr.right, Literal.default(expr.right.dataType))\n    val leftExpr = exprToProtoInternal(left, inputs, binding)\n    val rightExpr = exprToProtoInternal(right, inputs, binding)\n    val optExpr = scalarFunctionExprToProto(\"atan2\", leftExpr, rightExpr)\n    optExprWithInfo(optExpr, expr, expr.left, expr.right)\n  }\n}\n\nobject CometCeil extends CometExpressionSerde[Ceil] {\n  override def convert(\n      expr: Ceil,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    expr.child.dataType match {\n      case t: DecimalType if t.scale == 0 => // zero scale is no-op\n        childExpr\n      case t: DecimalType if t.scale < 0 => // Spark disallows negative scale SPARK-30252\n        withInfo(expr, s\"Decimal type $t has negative scale\")\n        None\n      case _ =>\n        val optExpr =\n          scalarFunctionExprToProtoWithReturnType(\"ceil\", expr.dataType, false, childExpr)\n        optExprWithInfo(optExpr, expr, expr.child)\n    }\n  }\n}\n\nobject CometFloor extends CometExpressionSerde[Floor] {\n  override def convert(\n      expr: Floor,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    expr.child.dataType match {\n      case t: DecimalType if t.scale == 0 => // zero scale is no-op\n        childExpr\n      case t: DecimalType if t.scale < 0 => // Spark disallows negative scale SPARK-30252\n        withInfo(expr, s\"Decimal type $t has negative scale\")\n        None\n      case _ =>\n        val optExpr =\n          scalarFunctionExprToProtoWithReturnType(\"floor\", expr.dataType, false, childExpr)\n        optExprWithInfo(optExpr, expr, expr.child)\n    }\n  
}\n}\n\n// The `log` family of expressions is defined as null for inputs less than or equal\n// to 0. This matches Spark and Hive behavior, where non-positive values evaluate to null\n// instead of NaN or -Infinity.\nobject CometLog extends CometExpressionSerde[Log] with MathExprBase {\n  override def convert(\n      expr: Log,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(nullIfNegative(expr.child), inputs, binding)\n    val optExpr = scalarFunctionExprToProto(\"ln\", childExpr)\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\nobject CometLog10 extends CometExpressionSerde[Log10] with MathExprBase {\n  override def convert(\n      expr: Log10,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(nullIfNegative(expr.child), inputs, binding)\n    val optExpr = scalarFunctionExprToProto(\"log10\", childExpr)\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\nobject CometLog2 extends CometExpressionSerde[Log2] with MathExprBase {\n  override def convert(\n      expr: Log2,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(nullIfNegative(expr.child), inputs, binding)\n    val optExpr = scalarFunctionExprToProto(\"log2\", childExpr)\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\nobject CometLogarithm extends CometExpressionSerde[Logarithm] {\n  override def convert(\n      expr: Logarithm,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    // Uses the custom spark_log UDF that returns null when base <= 0 or value <= 0,\n    // matching Spark's Logarithm.nullSafeEval behavior.\n    val leftExpr = exprToProtoInternal(expr.left, inputs, binding)\n    val rightExpr = exprToProtoInternal(expr.right, inputs, binding)\n    val optExpr =\n      scalarFunctionExprToProtoWithReturnType(\"spark_log\", DoubleType, false, leftExpr, rightExpr)\n    optExprWithInfo(optExpr, expr, expr.left, expr.right)\n  }\n}\n\nobject CometHex extends CometExpressionSerde[Hex] with MathExprBase {\n  override def convert(\n      expr: Hex,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val optExpr = scalarFunctionExprToProtoWithReturnType(\"hex\", expr.dataType, false, childExpr)\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\nobject CometUnhex extends CometExpressionSerde[Unhex] with MathExprBase {\n  override def convert(\n      expr: Unhex,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val failOnErrorExpr = exprToProtoInternal(Literal(expr.failOnError), inputs, binding)\n\n    val optExpr =\n      scalarFunctionExprToProtoWithReturnType(\n        \"unhex\",\n        expr.dataType,\n        false,\n        childExpr,\n        failOnErrorExpr)\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\nobject CometAbs extends CometExpressionSerde[Abs] with MathExprBase {\n\n  override def getSupportLevel(expr: Abs): SupportLevel = {\n    expr.child.dataType match {\n      case _: NumericType =>\n        Compatible()\n      case _ =>\n        // Spark supports NumericType, DayTimeIntervalType, and YearMonthIntervalType\n        Unsupported(Some(\"Only integral, floating-point, 
and decimal types are supported\"))\n    }\n  }\n\n  override def convert(\n      expr: Abs,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val failOnErrorExpr = exprToProtoInternal(Literal(expr.failOnError), inputs, binding)\n\n    val optExpr =\n      scalarFunctionExprToProtoWithReturnType(\n        \"abs\",\n        expr.dataType,\n        false,\n        childExpr,\n        failOnErrorExpr)\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\nsealed trait MathExprBase {\n  protected def nullIfNegative(expression: Expression): Expression = {\n    val zero = Literal.default(expression.dataType)\n    If(LessThanOrEqual(expression, zero), Literal.create(null, expression.dataType), expression)\n  }\n}\n\nobject CometCheckOverflow extends CometExpressionSerde[CheckOverflow] {\n\n  override def getSupportLevel(expr: CheckOverflow): SupportLevel = {\n    if (expr.dataType.isInstanceOf[DecimalType]) {\n      Compatible()\n    } else {\n      Unsupported(Some(\"dataType must be DecimalType\"))\n    }\n  }\n\n  override def convert(\n      expr: CheckOverflow,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n\n    if (childExpr.isDefined) {\n      val builder = ExprOuterClass.CheckOverflow.newBuilder()\n      builder.setChild(childExpr.get)\n      builder.setFailOnError(!expr.nullOnOverflow)\n\n      // `dataType` must be decimal type\n      assert(expr.dataType.isInstanceOf[DecimalType])\n      val dataType = serializeDataType(expr.dataType)\n      builder.setDatatype(dataType.get)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setCheckOverflow(builder)\n          .build())\n    } else {\n      withInfo(expr, expr.child)\n      None\n    }\n  }\n}\n"
  },
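The `nullIfNegative` rewrite above is what keeps the native `ln`/`log10`/`log2` aligned with Spark: the child is wrapped in `If(child <= 0, null, child)` before the function ever runs, so non-positive inputs come out as NULL rather than NaN or -Infinity. A runnable model of the resulting semantics (`lnOrNull` is this sketch's name, not a Comet function):

```scala
object LogNullSemanticsSketch {
  // Models If(child <= 0, null, child) feeding ln(): non-positive inputs become NULL.
  def lnOrNull(x: java.lang.Double): java.lang.Double =
    if (x == null || x <= 0.0) null else math.log(x)

  def main(args: Array[String]): Unit = {
    assert(lnOrNull(1.0) == 0.0)   // ln(1) = 0
    assert(lnOrNull(0.0) == null)  // Spark: NULL, not -Infinity
    assert(lnOrNull(-1.0) == null) // Spark: NULL, not NaN
  }
}
```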
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/namedExpressions.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Alias, Attribute, AttributeReference, BindReferences, BoundReference}\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProtoInternal, serializeDataType}\n\nobject CometAlias extends CometExpressionSerde[Alias] {\n  override def convert(\n      a: Alias,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val r = exprToProtoInternal(a.child, inputs, binding)\n    if (r.isEmpty) {\n      withInfo(a, a.child)\n    }\n    r\n  }\n}\n\nobject CometAttributeReference extends CometExpressionSerde[AttributeReference] {\n  override def convert(\n      attr: AttributeReference,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val dataType = serializeDataType(attr.dataType)\n\n    if (dataType.isDefined) {\n      if (binding) {\n        // Spark may produce unresolvable attributes in some cases,\n        // for example https://github.com/apache/datafusion-comet/issues/925.\n        // So, we allow the binding to fail.\n        val boundRef: Any = BindReferences\n          .bindReference(attr, inputs, allowFailures = true)\n\n        if (boundRef.isInstanceOf[AttributeReference]) {\n          withInfo(attr, s\"cannot resolve $attr among ${inputs.mkString(\", \")}\")\n          return None\n        }\n\n        val boundExpr = ExprOuterClass.BoundReference\n          .newBuilder()\n          .setIndex(boundRef.asInstanceOf[BoundReference].ordinal)\n          .setDatatype(dataType.get)\n          .build()\n\n        Some(\n          ExprOuterClass.Expr\n            .newBuilder()\n            .setBound(boundExpr)\n            .build())\n      } else {\n        val unboundRef = ExprOuterClass.UnboundReference\n          .newBuilder()\n          .setName(attr.name)\n          .setDatatype(dataType.get)\n          .build()\n\n        Some(\n          ExprOuterClass.Expr\n            .newBuilder()\n            .setUnbound(unboundRef)\n            .build())\n      }\n    } else {\n      withInfo(attr, s\"unsupported datatype: ${attr.dataType}\")\n      None\n    }\n\n  }\n}\n"
  },
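The `binding` flag above decides between a positional `BoundReference` (an ordinal into the input schema) and a name-based `UnboundReference`. A toy model of the bound case; note that real Spark binds by `exprId` rather than by name, and `Attr`/`bind` here are hypothetical sketch names:

```scala
object BindingSketch {
  final case class Attr(name: String)

  // ~ BindReferences with allowFailures = true: None models the unresolved case,
  // where CometAttributeReference tags the node with withInfo and returns None.
  def bind(attr: Attr, inputs: Seq[Attr]): Option[Int] = {
    val i = inputs.indexWhere(_.name == attr.name)
    if (i >= 0) Some(i) else None
  }

  def main(args: Array[String]): Unit = {
    val schema = Seq(Attr("a"), Attr("b"))
    assert(bind(Attr("b"), schema).contains(1)) // ~ BoundReference(index = 1)
    assert(bind(Attr("c"), schema).isEmpty)     // unresolvable -> fallback
  }
}
```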
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/nondetermenistic.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, Expression, Literal, MonotonicallyIncreasingID, Rand, Randn, SparkPartitionID}\n\nobject CometSparkPartitionId extends CometExpressionSerde[SparkPartitionID] {\n  override def convert(\n      expr: SparkPartitionID,\n      _inputs: Seq[Attribute],\n      _binding: Boolean): Option[ExprOuterClass.Expr] = {\n    Some(\n      ExprOuterClass.Expr\n        .newBuilder()\n        .setSparkPartitionId(ExprOuterClass.EmptyExpr.newBuilder())\n        .build())\n  }\n}\n\nobject CometMonotonicallyIncreasingId extends CometExpressionSerde[MonotonicallyIncreasingID] {\n  override def convert(\n      expr: MonotonicallyIncreasingID,\n      _inputs: Seq[Attribute],\n      _binding: Boolean): Option[ExprOuterClass.Expr] = {\n    Some(\n      ExprOuterClass.Expr\n        .newBuilder()\n        .setMonotonicallyIncreasingId(ExprOuterClass.EmptyExpr.newBuilder())\n        .build())\n  }\n}\n\nsealed abstract class CometRandCommonSerde[T <: Expression] extends CometExpressionSerde[T] {\n  protected def extractSeedFromExpr(expr: Expression): Option[Long] = {\n    expr match {\n      case Literal(seed: Long, _) => Some(seed)\n      case Literal(null, _) => Some(0L)\n      case _ => None\n    }\n  }\n}\n\nobject CometRand extends CometRandCommonSerde[Rand] {\n  override def convert(\n      expr: Rand,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    extractSeedFromExpr(expr.child).map { seed =>\n      ExprOuterClass.Expr\n        .newBuilder()\n        .setRand(ExprOuterClass.Rand.newBuilder().setSeed(seed))\n        .build()\n    }\n  }\n}\n\nobject CometRandn extends CometRandCommonSerde[Randn] {\n  override def convert(\n      expr: Randn,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    extractSeedFromExpr(expr.child).map { seed =>\n      ExprOuterClass.Expr\n        .newBuilder()\n        .setRandn(ExprOuterClass.Rand.newBuilder().setSeed(seed))\n        .build()\n    }\n  }\n}\n"
  },
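`extractSeedFromExpr` above is the whole gatekeeper for `Rand`/`Randn`: only literal seeds convert, a null literal normalizes to seed 0 (as Spark does), and anything else returns None so the expression falls back to Spark. A standalone model over a toy expression type (illustrative names only):

```scala
object SeedExtractionSketch {
  sealed trait Ex
  final case class LongLit(v: java.lang.Long) extends Ex
  final case class Col(name: String) extends Ex

  def extractSeed(e: Ex): Option[Long] = e match {
    case LongLit(v) if v != null => Some(v)  // ~ Literal(seed: Long, _)
    case LongLit(_)              => Some(0L) // ~ Literal(null, _): Spark treats a null seed as 0
    case _                       => None     // non-literal seed: fall back
  }

  def main(args: Array[String]): Unit = {
    assert(extractSeed(LongLit(42L)).contains(42L))
    assert(extractSeed(LongLit(null)).contains(0L))
    assert(extractSeed(Col("x")).isEmpty)
  }
}
```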
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/operator/CometDataWritingCommand.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde.operator\n\nimport java.net.URI\nimport java.util.Locale\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.SparkException\nimport org.apache.spark.sql.comet.{CometNativeExec, CometNativeWriteExec}\nimport org.apache.spark.sql.execution.command.DataWritingCommandExec\nimport org.apache.spark.sql.execution.datasources.{InsertIntoHadoopFsRelationCommand, WriteFilesExec}\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.{CometConf, ConfigEntry}\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.objectstore.NativeConfig\nimport org.apache.comet.serde.{CometOperatorSerde, Incompatible, OperatorOuterClass, SupportLevel, Unsupported}\nimport org.apache.comet.serde.OperatorOuterClass.Operator\nimport org.apache.comet.serde.QueryPlanSerde.serializeDataType\n\n/**\n * CometOperatorSerde implementation for DataWritingCommandExec that converts Parquet write\n * operations to use Comet's native Parquet writer.\n */\nobject CometDataWritingCommand extends CometOperatorSerde[DataWritingCommandExec] {\n\n  private val supportedCompressionCodes = Set(\"none\", \"snappy\", \"lz4\", \"zstd\")\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_NATIVE_PARQUET_WRITE_ENABLED)\n\n  // Native writes require Arrow-formatted input data. 
If the scan falls back to Spark\n  // (e.g., due to unsupported complex types), the write must also fall back.\n  override def requiresNativeChildren: Boolean = true\n\n  override def getSupportLevel(op: DataWritingCommandExec): SupportLevel = {\n    op.cmd match {\n      case cmd: InsertIntoHadoopFsRelationCommand =>\n        cmd.fileFormat match {\n          case _: ParquetFileFormat =>\n            if (!cmd.outputPath.toString.startsWith(\"file:\") && !cmd.outputPath.toString\n                .startsWith(\"hdfs:\")) {\n              return Unsupported(Some(\"Supported output filesystems: local, HDFS\"))\n            }\n\n            if (cmd.bucketSpec.isDefined) {\n              return Unsupported(Some(\"Bucketed writes are not supported\"))\n            }\n\n            if (cmd.partitionColumns.nonEmpty || cmd.staticPartitions.nonEmpty) {\n              return Unsupported(Some(\"Partitioned writes are not supported\"))\n            }\n\n            val codec = parseCompressionCodec(cmd)\n            if (!supportedCompressionCodes.contains(codec)) {\n              return Unsupported(Some(s\"Unsupported compression codec: $codec\"))\n            }\n\n            Incompatible(Some(\"Parquet write support is highly experimental\"))\n          case _ =>\n            Unsupported(Some(\"Only Parquet writes are supported\"))\n        }\n      case other =>\n        Unsupported(Some(s\"Unsupported write command: ${other.getClass}\"))\n    }\n  }\n\n  override def convert(\n      op: DataWritingCommandExec,\n      builder: Operator.Builder,\n      childOp: Operator*): Option[OperatorOuterClass.Operator] = {\n\n    try {\n      val cmd = op.cmd.asInstanceOf[InsertIntoHadoopFsRelationCommand]\n\n      val scanOp = OperatorOuterClass.Scan\n        .newBuilder()\n        .setSource(cmd.query.nodeName)\n        .setArrowFfiSafe(false)\n\n      // Add fields from the query output schema\n      val scanTypes = cmd.query.output.flatMap { attr =>\n        serializeDataType(attr.dataType)\n      }\n\n      if (scanTypes.length != cmd.query.output.length) {\n        withInfo(op, \"Cannot serialize data types for native write\")\n        return None\n      }\n\n      scanTypes.foreach(scanOp.addFields)\n\n      val scanOperator = Operator\n        .newBuilder()\n        .setPlanId(op.id)\n        .setScan(scanOp.build())\n        .build()\n\n      val outputPath = cmd.outputPath.toString\n\n      val codec = parseCompressionCodec(cmd) match {\n        case \"snappy\" => OperatorOuterClass.CompressionCodec.Snappy\n        case \"lz4\" => OperatorOuterClass.CompressionCodec.Lz4\n        case \"zstd\" => OperatorOuterClass.CompressionCodec.Zstd\n        case \"none\" => OperatorOuterClass.CompressionCodec.None\n        case other =>\n          withInfo(op, s\"Unsupported compression codec: $other\")\n          return None\n      }\n\n      val writerOpBuilder = OperatorOuterClass.ParquetWriter\n        .newBuilder()\n        .setOutputPath(outputPath)\n        .setCompression(codec)\n        .addAllColumnNames(cmd.query.output.map(_.name).asJava)\n      // Note: work_dir, job_id, and task_attempt_id will be set at execution time\n      // in CometNativeWriteExec, as they depend on the Spark task context\n\n      // Collect S3/cloud storage configurations\n      val session = op.session\n      val hadoopConf = session.sessionState.newHadoopConfWithOptions(cmd.options)\n      val objectStoreOptions =\n        NativeConfig.extractObjectStoreOptions(hadoopConf, URI.create(outputPath))\n      
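// Forward each option so the native writer can construct its own object store client\n      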
objectStoreOptions.foreach { case (key, value) =>\n        writerOpBuilder.putObjectStoreOptions(key, value)\n      }\n\n      val writerOp = writerOpBuilder.build()\n\n      val writerOperator = Operator\n        .newBuilder()\n        .setPlanId(op.id)\n        .addChildren(scanOperator)\n        .setParquetWriter(writerOp)\n        .build()\n\n      Some(writerOperator)\n    } catch {\n      case e: Exception =>\n        withInfo(\n          op,\n          \"Failed to convert DataWritingCommandExec to native execution: \" +\n            s\"${e.getMessage}\")\n        None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: DataWritingCommandExec): CometNativeExec = {\n    val cmd = op.cmd.asInstanceOf[InsertIntoHadoopFsRelationCommand]\n    val outputPath = cmd.outputPath.toString\n\n    // Get the child plan from the WriteFilesExec or use the child directly\n    val childPlan = op.child match {\n      case writeFiles: WriteFilesExec =>\n        // The WriteFilesExec child should already be a Comet operator\n        writeFiles.child\n      case other =>\n        // Fallback: use the child directly\n        other\n    }\n\n    // Create FileCommitProtocol for atomic writes\n    val jobId = java.util.UUID.randomUUID().toString\n    val committer =\n      try {\n        // Use Spark's SQLHadoopMapReduceCommitProtocol\n        val committerClass =\n          classOf[org.apache.spark.sql.execution.datasources.SQLHadoopMapReduceCommitProtocol]\n        val constructor =\n          committerClass.getConstructor(classOf[String], classOf[String], classOf[Boolean])\n        Some(\n          constructor\n            .newInstance(\n              jobId,\n              outputPath,\n              java.lang.Boolean.FALSE // dynamicPartitionOverwrite = false for now\n            )\n            .asInstanceOf[org.apache.spark.internal.io.FileCommitProtocol])\n      } catch {\n        case e: Exception =>\n          throw new SparkException(s\"Could not instantiate FileCommitProtocol: ${e.getMessage}\")\n      }\n\n    CometNativeWriteExec(nativeOp, childPlan, outputPath, committer, jobId)\n  }\n\n  private def parseCompressionCodec(cmd: InsertIntoHadoopFsRelationCommand) = {\n    cmd.options\n      .getOrElse(\n        \"compression\",\n        SQLConf.get.getConfString(\n          SQLConf.PARQUET_COMPRESSION.key,\n          SQLConf.PARQUET_COMPRESSION.defaultValueString))\n      .toLowerCase(Locale.ROOT)\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/operator/CometIcebergNativeScan.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde.operator\n\nimport scala.collection.mutable\nimport scala.jdk.CollectionConverters._\n\nimport org.json4s.JsonDSL._\nimport org.json4s.jackson.JsonMethods._\n\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.catalyst.expressions._\nimport org.apache.spark.sql.comet.{CometBatchScanExec, CometNativeExec}\nimport org.apache.spark.sql.execution.datasources.v2.{BatchScanExec, DataSourceRDD, DataSourceRDDPartition}\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.{CometConf, ConfigEntry}\nimport org.apache.comet.iceberg.{CometIcebergNativeScanMetadata, IcebergReflection}\nimport org.apache.comet.serde.{CometOperatorSerde, OperatorOuterClass}\nimport org.apache.comet.serde.ExprOuterClass.Expr\nimport org.apache.comet.serde.OperatorOuterClass.{Operator, SparkStructField}\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProto, serializeDataType}\n\nobject CometIcebergNativeScan extends CometOperatorSerde[CometBatchScanExec] with Logging {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = None\n\n  /**\n   * Constants specific to Iceberg expression conversion (not in shared IcebergReflection).\n   */\n  private object Constants {\n    // Iceberg expression operation names\n    object Operations {\n      val IS_NULL = \"IS_NULL\"\n      val IS_NOT_NULL = \"IS_NOT_NULL\"\n      val NOT_NULL = \"NOT_NULL\"\n      val EQ = \"EQ\"\n      val NOT_EQ = \"NOT_EQ\"\n      val LT = \"LT\"\n      val LT_EQ = \"LT_EQ\"\n      val GT = \"GT\"\n      val GT_EQ = \"GT_EQ\"\n      val IN = \"IN\"\n      val NOT_IN = \"NOT_IN\"\n    }\n\n    // Iceberg expression class name suffixes\n    object ExpressionTypes {\n      val UNBOUND_PREDICATE = \"UnboundPredicate\"\n      val AND = \"And\"\n      val OR = \"Or\"\n      val NOT = \"Not\"\n    }\n  }\n\n  /**\n   * Converts an Iceberg partition value to protobuf format. 
Protobuf is less verbose than JSON.\n   * Timestamps, dates, and field IDs are serialized as integer values, and decimals as unscaled\n   * bytes, rather than as strings.\n   */\n  private def partitionValueToProto(\n      fieldId: Int,\n      fieldTypeStr: String,\n      value: Any): OperatorOuterClass.PartitionValue = {\n    val builder = OperatorOuterClass.PartitionValue.newBuilder()\n    builder.setFieldId(fieldId)\n\n    if (value == null) {\n      builder.setIsNull(true)\n    } else {\n      builder.setIsNull(false)\n      fieldTypeStr match {\n        case t if t.startsWith(\"timestamp\") =>\n          val micros = value match {\n            case l: java.lang.Long => l.longValue()\n            case i: java.lang.Integer => i.longValue()\n            case _ => value.toString.toLong\n          }\n          if (t.contains(\"tz\")) {\n            builder.setTimestampTzVal(micros)\n          } else {\n            builder.setTimestampVal(micros)\n          }\n\n        case \"date\" =>\n          val days = value.asInstanceOf[java.lang.Integer].intValue()\n          builder.setDateVal(days)\n\n        case d if d.startsWith(\"decimal(\") =>\n          // Serialize as unscaled BigInteger bytes\n          val bigDecimal = value match {\n            case bd: java.math.BigDecimal => bd\n            case _ => new java.math.BigDecimal(value.toString)\n          }\n          val unscaledBytes = bigDecimal.unscaledValue().toByteArray\n          builder.setDecimalVal(com.google.protobuf.ByteString.copyFrom(unscaledBytes))\n\n        case \"string\" =>\n          builder.setStringVal(value.toString)\n\n        case \"int\" =>\n          val intVal = value match {\n            case i: java.lang.Integer => i.intValue()\n            case l: java.lang.Long => l.intValue()\n            case _ => value.toString.toInt\n          }\n          builder.setIntVal(intVal)\n\n        case \"long\" =>\n          val longVal = value match {\n            case l: java.lang.Long => l.longValue()\n            case i: java.lang.Integer => i.longValue()\n            case _ => value.toString.toLong\n          }\n          builder.setLongVal(longVal)\n\n        case \"float\" =>\n          val floatVal = value match {\n            case f: java.lang.Float => f.floatValue()\n            case d: java.lang.Double => d.floatValue()\n            case _ => value.toString.toFloat\n          }\n          builder.setFloatVal(floatVal)\n\n        case \"double\" =>\n          val doubleVal = value match {\n            case d: java.lang.Double => d.doubleValue()\n            case f: java.lang.Float => f.doubleValue()\n            case _ => value.toString.toDouble\n          }\n          builder.setDoubleVal(doubleVal)\n\n        case \"boolean\" =>\n          val boolVal = value match {\n            case b: java.lang.Boolean => b.booleanValue()\n            case _ => value.toString.toBoolean\n          }\n          builder.setBoolVal(boolVal)\n\n        case \"uuid\" =>\n          // UUID as bytes (16 bytes) or string\n          val uuidBytes = value match {\n            case uuid: java.util.UUID =>\n              val bb = java.nio.ByteBuffer.wrap(new Array[Byte](16))\n              bb.putLong(uuid.getMostSignificantBits)\n              bb.putLong(uuid.getLeastSignificantBits)\n              bb.array()\n            case _ =>\n              // Parse UUID string and convert to bytes\n              val uuid = java.util.UUID.fromString(value.toString)\n              val bb = java.nio.ByteBuffer.wrap(new Array[Byte](16))\n              
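// ByteBuffer defaults to big-endian, yielding the canonical 16-byte UUID layout\n              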
bb.putLong(uuid.getMostSignificantBits)\n              bb.putLong(uuid.getLeastSignificantBits)\n              bb.array()\n          }\n          builder.setUuidVal(com.google.protobuf.ByteString.copyFrom(uuidBytes))\n\n        case t if t.startsWith(\"fixed[\") || t.startsWith(\"binary\") =>\n          val bytes = value match {\n            case bytes: Array[Byte] => bytes\n            case _ => value.toString.getBytes(\"UTF-8\")\n          }\n          if (t.startsWith(\"fixed\")) {\n            builder.setFixedVal(com.google.protobuf.ByteString.copyFrom(bytes))\n          } else {\n            builder.setBinaryVal(com.google.protobuf.ByteString.copyFrom(bytes))\n          }\n\n        // Fallback: infer the value type from the runtime Java type\n        case _ =>\n          value match {\n            case s: String => builder.setStringVal(s)\n            case i: java.lang.Integer => builder.setIntVal(i.intValue())\n            case l: java.lang.Long => builder.setLongVal(l.longValue())\n            case d: java.lang.Double => builder.setDoubleVal(d.doubleValue())\n            case f: java.lang.Float => builder.setFloatVal(f.floatValue())\n            case b: java.lang.Boolean => builder.setBoolVal(b.booleanValue())\n            case other => builder.setStringVal(other.toString)\n          }\n      }\n    }\n\n    builder.build()\n  }\n\n  /**\n   * Helper to extract a literal from an Iceberg expression and build a binary predicate.\n   */\n  private def buildBinaryPredicate(\n      exprClass: Class[_],\n      icebergExpr: Any,\n      attribute: Attribute,\n      builder: (Expression, Expression) => Expression): Option[Expression] = {\n    try {\n      val literalMethod = exprClass.getMethod(\"literal\")\n      val literal = literalMethod.invoke(icebergExpr)\n      val value = convertIcebergLiteral(literal, attribute.dataType)\n      Some(builder(attribute, value))\n    } catch {\n      case _: Exception => None\n    }\n  }\n\n  /**\n   * Extracts delete files from an Iceberg FileScanTask as a list (for deduplication).\n   */\n  private def extractDeleteFilesList(\n      task: Any,\n      contentFileClass: Class[_],\n      fileScanTaskClass: Class[_],\n      fileIO: Option[Any]): Seq[OperatorOuterClass.IcebergDeleteFile] = {\n    try {\n      val deleteFileClass = IcebergReflection.loadClass(IcebergReflection.ClassNames.DELETE_FILE)\n\n      val deletes = IcebergReflection.getDeleteFilesFromTask(task, fileScanTaskClass)\n\n      deletes.asScala.flatMap { deleteFile =>\n        try {\n          IcebergReflection\n            .extractFileLocation(contentFileClass, deleteFile)\n            .map { deletePath =>\n              val deleteBuilder =\n                OperatorOuterClass.IcebergDeleteFile.newBuilder()\n              deleteBuilder.setFilePath(deletePath)\n\n              val contentType =\n                try {\n                  val contentMethod = deleteFileClass.getMethod(\"content\")\n                  val content = contentMethod.invoke(deleteFile)\n                  content.toString match {\n                    case IcebergReflection.ContentTypes.POSITION_DELETES =>\n                      IcebergReflection.ContentTypes.POSITION_DELETES\n                    case IcebergReflection.ContentTypes.EQUALITY_DELETES =>\n                      IcebergReflection.ContentTypes.EQUALITY_DELETES\n                    case other => other\n                  }\n                } catch {\n                  case _: Exception =>\n                    IcebergReflection.ContentTypes.POSITION_DELETES\n                }\n   
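           // Iceberg reports content() as POSITION_DELETES or EQUALITY_DELETES\n   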
           deleteBuilder.setContentType(contentType)\n\n              val specId =\n                try {\n                  val specIdMethod = deleteFileClass.getMethod(\"specId\")\n                  specIdMethod.invoke(deleteFile).asInstanceOf[Int]\n                } catch {\n                  case _: Exception => 0\n                }\n              deleteBuilder.setPartitionSpecId(specId)\n\n              // Workaround for https://github.com/apache/iceberg/issues/12554\n              // RewriteTablePath rewrites path references inside position delete files,\n              // making the copied file possibly differ in size, but does not update\n              // file_size_in_bytes in the manifest. The manifest value cannot be trusted;\n              // always use FileIO to get the actual size.\n              val inputFile = fileIO.get.getClass\n                .getMethod(\"newInputFile\", classOf[String])\n                .invoke(fileIO.get, deletePath)\n              val actualDeleteFileSizeInBytes =\n                inputFile.getClass\n                  .getMethod(\"getLength\")\n                  .invoke(inputFile)\n                  .asInstanceOf[Long]\n              deleteBuilder.setFileSizeInBytes(actualDeleteFileSizeInBytes)\n\n              try {\n                val equalityIdsMethod =\n                  deleteFileClass.getMethod(\"equalityFieldIds\")\n                val equalityIds = equalityIdsMethod\n                  .invoke(deleteFile)\n                  .asInstanceOf[java.util.List[Integer]]\n                equalityIds.forEach(id => deleteBuilder.addEqualityIds(id))\n              } catch {\n                case _: Exception =>\n              }\n\n              deleteBuilder.build()\n            }\n        } catch {\n          case e: Exception =>\n            logWarning(s\"Failed to serialize delete file: ${e.getMessage}\")\n            None\n        }\n      }.toSeq\n    } catch {\n      case e: Exception =>\n        val msg =\n          \"Iceberg reflection failure: Failed to extract deletes from FileScanTask: \" +\n            s\"${e.getMessage}\"\n        logError(msg)\n        throw new RuntimeException(msg, e)\n    }\n  }\n\n  /**\n   * Serializes partition spec and data from an Iceberg FileScanTask.\n   *\n   * Extracts partition specification (field definitions and transforms) and partition data\n   * (actual values) from the task. 
This information is used by the native execution engine to\n   * build a constants_map for identity-transformed partition columns and to handle\n   * partition-level filtering.\n   */\n  private def serializePartitionData(\n      task: Any,\n      contentScanTaskClass: Class[_],\n      fileScanTaskClass: Class[_],\n      taskBuilder: OperatorOuterClass.IcebergFileScanTask.Builder,\n      commonBuilder: OperatorOuterClass.IcebergScanCommon.Builder,\n      partitionTypeToPoolIndex: mutable.HashMap[String, Int],\n      partitionSpecToPoolIndex: mutable.HashMap[String, Int],\n      partitionDataToPoolIndex: mutable.HashMap[String, Int]): Unit = {\n    try {\n      val specMethod = fileScanTaskClass.getMethod(\"spec\")\n      val spec = specMethod.invoke(task)\n\n      if (spec != null) {\n        // Deduplicate partition spec\n        try {\n          val partitionSpecParserClass =\n            IcebergReflection.loadClass(IcebergReflection.ClassNames.PARTITION_SPEC_PARSER)\n          val toJsonMethod = partitionSpecParserClass.getMethod(\n            \"toJson\",\n            IcebergReflection.loadClass(IcebergReflection.ClassNames.PARTITION_SPEC))\n          val partitionSpecJson = toJsonMethod\n            .invoke(null, spec)\n            .asInstanceOf[String]\n\n          val specIdx = partitionSpecToPoolIndex.getOrElseUpdate(\n            partitionSpecJson, {\n              val idx = partitionSpecToPoolIndex.size\n              commonBuilder.addPartitionSpecPool(partitionSpecJson)\n              idx\n            })\n          taskBuilder.setPartitionSpecIdx(specIdx)\n        } catch {\n          case e: Exception =>\n            logWarning(s\"Failed to serialize partition spec to JSON: ${e.getMessage}\")\n        }\n\n        // Get partition data from the task (via file().partition())\n        val partitionMethod = contentScanTaskClass.getMethod(\"partition\")\n        val partitionData = partitionMethod.invoke(task)\n\n        if (partitionData != null) {\n          // Get the partition type/schema from the spec\n          val partitionTypeMethod = spec.getClass.getMethod(\"partitionType\")\n          val partitionType = partitionTypeMethod.invoke(spec)\n\n          // Check if partition type has any fields before serializing\n          val fieldsMethod = partitionType.getClass.getMethod(\"fields\")\n          val fields = fieldsMethod\n            .invoke(partitionType)\n            .asInstanceOf[java.util.List[_]]\n\n          // Helper to get field type string (shared by both type and data serialization)\n          def getFieldType(field: Any): String = {\n            val typeMethod = field.getClass.getMethod(\"type\")\n            typeMethod.invoke(field).toString\n          }\n\n          // Only serialize partition type if there are actual partition fields\n          if (!fields.isEmpty) {\n            try {\n              // Manually build StructType JSON to match iceberg-rust expectations.\n              // Using Iceberg's SchemaParser.toJson() would include schema-level\n              // metadata (e.g., \"schema-id\") that iceberg-rust's StructType\n              // deserializer rejects. We need pure StructType format:\n              // {\"type\":\"struct\",\"fields\":[...]}\n\n              // Filter out fields with unknown types (dropped partition fields).\n              // Unknown type fields represent partition columns that have been dropped\n              // from the schema. 
Per the Iceberg spec, unknown type fields are not\n              // stored in data files and iceberg-rust doesn't support deserializing\n              // them. Since these columns are dropped, we don't need to expose their\n              // partition values when reading.\n              val fieldsJson = fields.asScala.flatMap { field =>\n                val fieldTypeStr = getFieldType(field)\n\n                // Skip fields with unknown type (dropped partition columns)\n                if (fieldTypeStr == IcebergReflection.TypeNames.UNKNOWN) {\n                  None\n                } else {\n                  val fieldIdMethod = field.getClass.getMethod(\"fieldId\")\n                  val fieldId = fieldIdMethod.invoke(field).asInstanceOf[Int]\n\n                  val nameMethod = field.getClass.getMethod(\"name\")\n                  val fieldName = nameMethod.invoke(field).asInstanceOf[String]\n\n                  val isOptionalMethod = field.getClass.getMethod(\"isOptional\")\n                  val isOptional =\n                    isOptionalMethod.invoke(field).asInstanceOf[Boolean]\n                  val required = !isOptional\n\n                  Some(\n                    (\"id\" -> fieldId) ~\n                      (\"name\" -> fieldName) ~\n                      (\"required\" -> required) ~\n                      (\"type\" -> fieldTypeStr))\n                }\n              }.toList\n\n              // Only serialize if we have non-unknown fields\n              if (fieldsJson.nonEmpty) {\n                val partitionTypeJson = compact(\n                  render(\n                    (\"type\" -> \"struct\") ~\n                      (\"fields\" -> fieldsJson)))\n\n                val typeIdx = partitionTypeToPoolIndex.getOrElseUpdate(\n                  partitionTypeJson, {\n                    val idx = partitionTypeToPoolIndex.size\n                    commonBuilder.addPartitionTypePool(partitionTypeJson)\n                    idx\n                  })\n                taskBuilder.setPartitionTypeIdx(typeIdx)\n              }\n            } catch {\n              case e: Exception =>\n                logWarning(s\"Failed to serialize partition type to JSON: ${e.getMessage}\")\n            }\n          }\n\n          // Serialize partition data to protobuf for native execution.\n          // The native execution engine uses partition_data protobuf messages to\n          // build a constants_map, which provides partition values to identity-\n          // transformed partition columns. Non-identity transforms (bucket, truncate,\n          // days, etc.) 
read values from data files.\n          //\n          // IMPORTANT: Use partition field IDs (not source field IDs) to match\n          // the schema.\n\n          // Filter out fields with unknown type (same as partition type filtering)\n          val partitionValues: Seq[OperatorOuterClass.PartitionValue] =\n            fields.asScala.zipWithIndex.flatMap { case (field, idx) =>\n              val fieldTypeStr = getFieldType(field)\n\n              // Skip fields with unknown type (dropped partition columns)\n              if (fieldTypeStr == IcebergReflection.TypeNames.UNKNOWN) {\n                None\n              } else {\n                // Use the partition type's field ID (same as in partition_type_json)\n                val fieldIdMethod = field.getClass.getMethod(\"fieldId\")\n                val fieldId = fieldIdMethod.invoke(field).asInstanceOf[Int]\n\n                val getMethod =\n                  partitionData.getClass.getMethod(\"get\", classOf[Int], classOf[Class[_]])\n                val value = getMethod.invoke(partitionData, Integer.valueOf(idx), classOf[Object])\n\n                Some(partitionValueToProto(fieldId, fieldTypeStr, value))\n              }\n            }.toSeq\n\n          // Only serialize partition data if we have non-unknown fields\n          if (partitionValues.nonEmpty) {\n            val partitionDataProto = OperatorOuterClass.PartitionData\n              .newBuilder()\n              .addAllValues(partitionValues.asJava)\n              .build()\n\n            // Deduplicate by protobuf bytes (use Base64 string as key)\n            val partitionDataBytes = partitionDataProto.toByteArray\n            val partitionDataKey = java.util.Base64.getEncoder.encodeToString(partitionDataBytes)\n\n            val partitionDataIdx = partitionDataToPoolIndex.getOrElseUpdate(\n              partitionDataKey, {\n                val idx = partitionDataToPoolIndex.size\n                commonBuilder.addPartitionDataPool(partitionDataProto)\n                idx\n              })\n            taskBuilder.setPartitionDataIdx(partitionDataIdx)\n          }\n        }\n      }\n    } catch {\n      case e: Exception =>\n        val msg =\n          \"Iceberg reflection failure: Failed to extract partition data from FileScanTask: \" +\n            s\"${e.getMessage}\"\n        logError(msg, e)\n        throw new RuntimeException(msg, e)\n    }\n  }\n\n  /** Storage-related property prefixes passed through to native FileIO. */\n  private val storagePropertyPrefixes =\n    Seq(\"s3.\", \"gcs.\", \"adls.\", \"client.\")\n\n  /**\n   * Filters a properties map to only include storage-related keys. FileIO.properties() may\n   * contain catalog URIs, bearer tokens, and other non-storage settings that should not be passed\n   * to the native FileIO builder.\n   */\n  def filterStorageProperties(props: Map[String, String]): Map[String, String] = {\n    props.filter { case (key, _) =>\n      storagePropertyPrefixes.exists(prefix => key.startsWith(prefix))\n    }\n  }\n\n  /**\n   * Transforms Hadoop S3A configuration keys to Iceberg FileIO property keys.\n   *\n   * Iceberg-rust's FileIO expects Iceberg-format keys (e.g., s3.access-key-id), not Hadoop keys\n   * (e.g., fs.s3a.access.key). 
This function converts Hadoop keys extracted from Spark's\n   * configuration to the format expected by iceberg-rust.\n   */\n  def hadoopToIcebergS3Properties(hadoopProps: Map[String, String]): Map[String, String] = {\n    hadoopProps.flatMap { case (key, value) =>\n      key match {\n        // Global S3A configuration keys\n        case \"fs.s3a.access.key\" => Some(\"s3.access-key-id\" -> value)\n        case \"fs.s3a.secret.key\" => Some(\"s3.secret-access-key\" -> value)\n        case \"fs.s3a.session.token\" => Some(\"s3.session-token\" -> value)\n        case \"fs.s3a.endpoint\" => Some(\"s3.endpoint\" -> value)\n        case \"fs.s3a.path.style.access\" => Some(\"s3.path-style-access\" -> value)\n        case \"fs.s3a.endpoint.region\" => Some(\"s3.region\" -> value)\n\n        // Per-bucket configuration keys (e.g., fs.s3a.bucket.mybucket.access.key)\n        // Extract bucket name and property, then transform to s3.* format\n        case k if k.startsWith(\"fs.s3a.bucket.\") =>\n          val parts = k.stripPrefix(\"fs.s3a.bucket.\").split(\"\\\\.\", 2)\n          if (parts.length == 2) {\n            val bucket = parts(0)\n            val property = parts(1)\n            property match {\n              case \"access.key\" => Some(s\"s3.bucket.$bucket.access-key-id\" -> value)\n              case \"secret.key\" => Some(s\"s3.bucket.$bucket.secret-access-key\" -> value)\n              case \"session.token\" => Some(s\"s3.bucket.$bucket.session-token\" -> value)\n              case \"endpoint\" => Some(s\"s3.bucket.$bucket.endpoint\" -> value)\n              case \"path.style.access\" => Some(s\"s3.bucket.$bucket.path-style-access\" -> value)\n              case \"endpoint.region\" => Some(s\"s3.bucket.$bucket.region\" -> value)\n              case _ => None\n            }\n          } else {\n            None\n          }\n\n        // Pass through any keys that are already in Iceberg format\n        case k if k.startsWith(\"s3.\") => Some(key -> value)\n\n        // Ignore all other keys\n        case _ => None\n      }\n    }\n  }\n\n  /**\n   * Converts Iceberg Expression objects to Spark Catalyst expressions.\n   *\n   * This is used to extract per-file residual expressions from Iceberg FileScanTasks. Residuals\n   * are created by Iceberg's ResidualEvaluator through partial evaluation of scan filters against\n   * each file's partition data. 
These residuals enable row-group level filtering in the Parquet\n   * reader.\n   *\n   * The conversion uses reflection because Iceberg expressions are not directly accessible from\n   * Spark's classpath during query planning.\n   */\n  def convertIcebergExpression(icebergExpr: Any, output: Seq[Attribute]): Option[Expression] = {\n    try {\n      val exprClass = icebergExpr.getClass\n      val attributeMap = output.map(attr => attr.name -> attr).toMap\n\n      // Check for UnboundPredicate\n      if (exprClass.getName.endsWith(Constants.ExpressionTypes.UNBOUND_PREDICATE)) {\n        val opMethod = exprClass.getMethod(\"op\")\n        val termMethod = exprClass.getMethod(\"term\")\n        val operation = opMethod.invoke(icebergExpr)\n        val term = termMethod.invoke(icebergExpr)\n\n        // Get column name from term\n        val refMethod = term.getClass.getMethod(\"ref\")\n        val ref = refMethod.invoke(term)\n        val nameMethod = ref.getClass.getMethod(\"name\")\n        val columnName = nameMethod.invoke(ref).asInstanceOf[String]\n\n        val attr = attributeMap.get(columnName)\n\n        val opName = operation.toString\n\n        attr.flatMap { attribute =>\n          opName match {\n            case Constants.Operations.IS_NULL =>\n              Some(IsNull(attribute))\n\n            case Constants.Operations.IS_NOT_NULL | Constants.Operations.NOT_NULL =>\n              Some(IsNotNull(attribute))\n\n            case Constants.Operations.EQ =>\n              buildBinaryPredicate(exprClass, icebergExpr, attribute, EqualTo)\n\n            case Constants.Operations.NOT_EQ =>\n              buildBinaryPredicate(\n                exprClass,\n                icebergExpr,\n                attribute,\n                (a, v) => Not(EqualTo(a, v)))\n\n            case Constants.Operations.LT =>\n              buildBinaryPredicate(exprClass, icebergExpr, attribute, LessThan)\n\n            case Constants.Operations.LT_EQ =>\n              buildBinaryPredicate(exprClass, icebergExpr, attribute, LessThanOrEqual)\n\n            case Constants.Operations.GT =>\n              buildBinaryPredicate(exprClass, icebergExpr, attribute, GreaterThan)\n\n            case Constants.Operations.GT_EQ =>\n              buildBinaryPredicate(exprClass, icebergExpr, attribute, GreaterThanOrEqual)\n\n            case Constants.Operations.IN =>\n              val literalsMethod = exprClass.getMethod(\"literals\")\n              val literals = literalsMethod.invoke(icebergExpr).asInstanceOf[java.util.List[_]]\n              val values =\n                literals.asScala.map(lit => convertIcebergLiteral(lit, attribute.dataType))\n              Some(In(attribute, values.toSeq))\n\n            case Constants.Operations.NOT_IN =>\n              val literalsMethod = exprClass.getMethod(\"literals\")\n              val literals = literalsMethod.invoke(icebergExpr).asInstanceOf[java.util.List[_]]\n              val values =\n                literals.asScala.map(lit => convertIcebergLiteral(lit, attribute.dataType))\n              Some(Not(In(attribute, values.toSeq)))\n\n            case _ =>\n              None\n          }\n        }\n      } else if (exprClass.getName.endsWith(Constants.ExpressionTypes.AND)) {\n        val leftMethod = exprClass.getMethod(\"left\")\n        val rightMethod = exprClass.getMethod(\"right\")\n        val left = leftMethod.invoke(icebergExpr)\n        val right = rightMethod.invoke(icebergExpr)\n\n        (convertIcebergExpression(left, output), 
convertIcebergExpression(right, output)) match {\n          case (Some(l), Some(r)) => Some(And(l, r))\n          case _ => None\n        }\n      } else if (exprClass.getName.endsWith(Constants.ExpressionTypes.OR)) {\n        val leftMethod = exprClass.getMethod(\"left\")\n        val rightMethod = exprClass.getMethod(\"right\")\n        val left = leftMethod.invoke(icebergExpr)\n        val right = rightMethod.invoke(icebergExpr)\n\n        (convertIcebergExpression(left, output), convertIcebergExpression(right, output)) match {\n          case (Some(l), Some(r)) => Some(Or(l, r))\n          case _ => None\n        }\n      } else if (exprClass.getName.endsWith(Constants.ExpressionTypes.NOT)) {\n        val childMethod = exprClass.getMethod(\"child\")\n        val child = childMethod.invoke(icebergExpr)\n\n        convertIcebergExpression(child, output).map(Not)\n      } else {\n        None\n      }\n    } catch {\n      case _: Exception =>\n        None\n    }\n  }\n\n  /**\n   * Converts an Iceberg Literal to a Spark Literal\n   */\n  private def convertIcebergLiteral(icebergLiteral: Any, sparkType: DataType): Literal = {\n    // Load Literal interface to get value() method (use interface to avoid package-private issues)\n    val literalClass = IcebergReflection.loadClass(IcebergReflection.ClassNames.LITERAL)\n    val valueMethod = literalClass.getMethod(\"value\")\n    val value = valueMethod.invoke(icebergLiteral)\n\n    // Convert Java types to Spark internal types\n    val sparkValue = (value, sparkType) match {\n      case (s: String, _: StringType) =>\n        org.apache.spark.unsafe.types.UTF8String.fromString(s)\n      case (v, _) => v\n    }\n\n    Literal(sparkValue, sparkType)\n  }\n\n  /**\n   * Converts a CometBatchScanExec to a minimal placeholder IcebergScan operator.\n   *\n   * Returns a placeholder operator with only metadata_location for matching during partition\n   * injection. All other fields (catalog properties, required schema, pools, partition data) are\n   * set by serializePartitions() at execution time after DPP resolves.\n   */\n  override def convert(\n      scan: CometBatchScanExec,\n      builder: Operator.Builder,\n      childOp: Operator*): Option[OperatorOuterClass.Operator] = {\n\n    val metadata = scan.nativeIcebergScanMetadata.getOrElse {\n      throw new IllegalStateException(\n        \"Programming error: CometBatchScanExec.nativeIcebergScanMetadata is None. \" +\n          \"Metadata should have been extracted in CometScanRule.\")\n    }\n\n    val icebergScanBuilder = OperatorOuterClass.IcebergScan.newBuilder()\n    val commonBuilder = OperatorOuterClass.IcebergScanCommon.newBuilder()\n\n    // Only set metadata_location - used for matching in PlanDataInjector.\n    // All other fields (catalog_properties, required_schema, pools) are set by\n    // serializePartitions() at execution time, so setting them here would be wasted work.\n    commonBuilder.setMetadataLocation(metadata.metadataLocation)\n\n    icebergScanBuilder.setCommon(commonBuilder.build())\n    // partition field intentionally empty - will be populated at execution time\n\n    builder.clearChildren()\n    Some(builder.setIcebergScan(icebergScanBuilder).build())\n  }\n\n  /**\n   * Serializes partitions from inputRDD at execution time.\n   *\n   * Called after doPrepare() has resolved DPP subqueries. 
Builds pools and per-partition data in\n   * one pass from the DPP-filtered partitions.\n   *\n   * @param scanExec\n   *   The BatchScanExec whose inputRDD contains the DPP-filtered partitions\n   * @param output\n   *   The output attributes for the scan\n   * @param metadata\n   *   Pre-extracted Iceberg metadata from CometScanRule\n   * @return\n   *   Tuple of (commonBytes, perPartitionBytes) for native execution\n   */\n  def serializePartitions(\n      scanExec: BatchScanExec,\n      output: Seq[Attribute],\n      metadata: CometIcebergNativeScanMetadata): (Array[Byte], Array[Array[Byte]]) = {\n\n    val commonBuilder = OperatorOuterClass.IcebergScanCommon.newBuilder()\n\n    // Deduplication structures - map unique values to pool indices\n    val schemaToPoolIndex = mutable.HashMap[AnyRef, Int]()\n    val partitionTypeToPoolIndex = mutable.HashMap[String, Int]()\n    val partitionSpecToPoolIndex = mutable.HashMap[String, Int]()\n    val nameMappingToPoolIndex = mutable.HashMap[String, Int]()\n    val projectFieldIdsToPoolIndex = mutable.HashMap[Seq[Int], Int]()\n    val partitionDataToPoolIndex = mutable.HashMap[String, Int]()\n    val deleteFilesToPoolIndex =\n      mutable.HashMap[Seq[OperatorOuterClass.IcebergDeleteFile], Int]()\n    val residualToPoolIndex = mutable.HashMap[Option[Expr], Int]()\n\n    val fileIO = IcebergReflection.getFileIO(metadata.table)\n\n    val perPartitionBuilders = mutable.ArrayBuffer[OperatorOuterClass.IcebergScan]()\n\n    var totalTasks = 0\n\n    commonBuilder.setMetadataLocation(metadata.metadataLocation)\n    commonBuilder.setDataFileConcurrencyLimit(\n      CometConf.COMET_ICEBERG_DATA_FILE_CONCURRENCY_LIMIT.get())\n    metadata.catalogProperties.foreach { case (key, value) =>\n      commonBuilder.putCatalogProperties(key, value)\n    }\n\n    output.foreach { attr =>\n      val field = SparkStructField\n        .newBuilder()\n        .setName(attr.name)\n        .setNullable(attr.nullable)\n      serializeDataType(attr.dataType).foreach(field.setDataType)\n      commonBuilder.addRequiredSchema(field.build())\n    }\n\n    // Load Iceberg classes once (avoid repeated class loading in loop)\n    val contentScanTaskClass =\n      IcebergReflection.loadClass(IcebergReflection.ClassNames.CONTENT_SCAN_TASK)\n    val fileScanTaskClass =\n      IcebergReflection.loadClass(IcebergReflection.ClassNames.FILE_SCAN_TASK)\n    val contentFileClass =\n      IcebergReflection.loadClass(IcebergReflection.ClassNames.CONTENT_FILE)\n    val schemaParserClass =\n      IcebergReflection.loadClass(IcebergReflection.ClassNames.SCHEMA_PARSER)\n    val schemaClass =\n      IcebergReflection.loadClass(IcebergReflection.ClassNames.SCHEMA)\n\n    // Cache method lookups (avoid repeated getMethod in loop)\n    val fileMethod = contentScanTaskClass.getMethod(\"file\")\n    val startMethod = contentScanTaskClass.getMethod(\"start\")\n    val lengthMethod = contentScanTaskClass.getMethod(\"length\")\n    val residualMethod = contentScanTaskClass.getMethod(\"residual\")\n    val fileSizeInBytesMethod = contentFileClass.getMethod(\"fileSizeInBytes\")\n    val taskSchemaMethod = fileScanTaskClass.getMethod(\"schema\")\n    val toJsonMethod = schemaParserClass.getMethod(\"toJson\", schemaClass)\n    toJsonMethod.setAccessible(true)\n\n    // Access inputRDD - safe now, DPP is resolved\n    scanExec.inputRDD match {\n      case rdd: DataSourceRDD =>\n        val partitions = rdd.partitions\n        partitions.foreach { partition =>\n          val partitionBuilder = 
OperatorOuterClass.IcebergScan.newBuilder()\n\n          val inputPartitions = partition\n            .asInstanceOf[DataSourceRDDPartition]\n            .inputPartitions\n\n          inputPartitions.foreach { inputPartition =>\n            val inputPartClass = inputPartition.getClass\n\n            try {\n              val taskGroupMethod = inputPartClass.getDeclaredMethod(\"taskGroup\")\n              taskGroupMethod.setAccessible(true)\n              val taskGroup = taskGroupMethod.invoke(inputPartition)\n\n              val taskGroupClass = taskGroup.getClass\n              val tasksMethod = taskGroupClass.getMethod(\"tasks\")\n              val tasksCollection =\n                tasksMethod.invoke(taskGroup).asInstanceOf[java.util.Collection[_]]\n\n              tasksCollection.asScala.foreach { task =>\n                totalTasks += 1\n\n                val taskBuilder = OperatorOuterClass.IcebergFileScanTask.newBuilder()\n\n                val dataFile = fileMethod.invoke(task)\n\n                val filePathOpt =\n                  IcebergReflection.extractFileLocation(contentFileClass, dataFile)\n\n                filePathOpt match {\n                  case Some(filePath) =>\n                    taskBuilder.setDataFilePath(filePath)\n                  case None =>\n                    val msg =\n                      \"Iceberg reflection failure: Cannot extract file path from data file\"\n                    logError(msg)\n                    throw new RuntimeException(msg)\n                }\n\n                val start = startMethod.invoke(task).asInstanceOf[Long]\n                taskBuilder.setStart(start)\n\n                val length = lengthMethod.invoke(task).asInstanceOf[Long]\n                taskBuilder.setLength(length)\n\n                val fileSizeInBytes =\n                  fileSizeInBytesMethod.invoke(dataFile).asInstanceOf[Long]\n                taskBuilder.setFileSizeInBytes(fileSizeInBytes)\n\n                val taskSchema = taskSchemaMethod.invoke(task)\n\n                val deletes =\n                  IcebergReflection.getDeleteFilesFromTask(task, fileScanTaskClass)\n                val hasDeletes = !deletes.isEmpty\n\n                val schema: AnyRef =\n                  if (hasDeletes) {\n                    taskSchema\n                  } else {\n                    val scanSchemaFieldIds = IcebergReflection\n                      .buildFieldIdMapping(metadata.scanSchema)\n                      .values\n                      .toSet\n                    val tableSchemaFieldIds = IcebergReflection\n                      .buildFieldIdMapping(metadata.tableSchema)\n                      .values\n                      .toSet\n                    val hasHistoricalColumns =\n                      scanSchemaFieldIds.exists(id => !tableSchemaFieldIds.contains(id))\n\n                    if (hasHistoricalColumns) {\n                      metadata.scanSchema.asInstanceOf[AnyRef]\n                    } else {\n                      metadata.tableSchema.asInstanceOf[AnyRef]\n                    }\n                  }\n\n                val schemaIdx = schemaToPoolIndex.getOrElseUpdate(\n                  schema, {\n                    val idx = schemaToPoolIndex.size\n                    val schemaJson = toJsonMethod.invoke(null, schema).asInstanceOf[String]\n                    commonBuilder.addSchemaPool(schemaJson)\n                    idx\n                  })\n                taskBuilder.setSchemaIdx(schemaIdx)\n\n                val nameToFieldId = 
IcebergReflection.buildFieldIdMapping(schema)\n\n                val projectFieldIds = output.flatMap { attr =>\n                  nameToFieldId\n                    .get(attr.name)\n                    .orElse(metadata.globalFieldIdMapping.get(attr.name))\n                    .orElse {\n                      logWarning(s\"Column '${attr.name}' not found in task or scan schema, \" +\n                        \"skipping projection\")\n                      None\n                    }\n                }\n\n                val projectFieldIdsIdx = projectFieldIdsToPoolIndex.getOrElseUpdate(\n                  projectFieldIds, {\n                    val idx = projectFieldIdsToPoolIndex.size\n                    val listBuilder = OperatorOuterClass.ProjectFieldIdList.newBuilder()\n                    projectFieldIds.foreach(id => listBuilder.addFieldIds(id))\n                    commonBuilder.addProjectFieldIdsPool(listBuilder.build())\n                    idx\n                  })\n                taskBuilder.setProjectFieldIdsIdx(projectFieldIdsIdx)\n\n                val deleteFilesList =\n                  extractDeleteFilesList(task, contentFileClass, fileScanTaskClass, fileIO)\n                if (deleteFilesList.nonEmpty) {\n                  val deleteFilesIdx = deleteFilesToPoolIndex.getOrElseUpdate(\n                    deleteFilesList, {\n                      val idx = deleteFilesToPoolIndex.size\n                      val listBuilder = OperatorOuterClass.DeleteFileList.newBuilder()\n                      deleteFilesList.foreach(df => listBuilder.addDeleteFiles(df))\n                      commonBuilder.addDeleteFilesPool(listBuilder.build())\n                      idx\n                    })\n                  taskBuilder.setDeleteFilesIdx(deleteFilesIdx)\n                }\n\n                val residualExprOpt =\n                  try {\n                    val residualExpr = residualMethod.invoke(task)\n                    val catalystExpr = convertIcebergExpression(residualExpr, output)\n                    catalystExpr.flatMap { expr =>\n                      exprToProto(expr, output, binding = false)\n                    }\n                  } catch {\n                    case e: Exception =>\n                      logWarning(\n                        \"Failed to extract residual expression from FileScanTask: \" +\n                          s\"${e.getMessage}\")\n                      None\n                  }\n\n                residualExprOpt.foreach { residualExpr =>\n                  val residualIdx = residualToPoolIndex.getOrElseUpdate(\n                    Some(residualExpr), {\n                      val idx = residualToPoolIndex.size\n                      commonBuilder.addResidualPool(residualExpr)\n                      idx\n                    })\n                  taskBuilder.setResidualIdx(residualIdx)\n                }\n\n                serializePartitionData(\n                  task,\n                  contentScanTaskClass,\n                  fileScanTaskClass,\n                  taskBuilder,\n                  commonBuilder,\n                  partitionTypeToPoolIndex,\n                  partitionSpecToPoolIndex,\n                  partitionDataToPoolIndex)\n\n                metadata.nameMapping.foreach { nm =>\n                  val nmIdx = nameMappingToPoolIndex.getOrElseUpdate(\n                    nm, {\n                      val idx = nameMappingToPoolIndex.size\n                      commonBuilder.addNameMappingPool(nm)\n                      idx\n      
              })\n                  taskBuilder.setNameMappingIdx(nmIdx)\n                }\n\n                partitionBuilder.addFileScanTasks(taskBuilder.build())\n              }\n            }\n          }\n\n          perPartitionBuilders += partitionBuilder.build()\n        }\n      case _ =>\n        throw new IllegalStateException(\"Expected DataSourceRDD from BatchScanExec\")\n    }\n\n    // Log deduplication summary\n    val allPoolSizes = Seq(\n      schemaToPoolIndex.size,\n      partitionTypeToPoolIndex.size,\n      partitionSpecToPoolIndex.size,\n      nameMappingToPoolIndex.size,\n      projectFieldIdsToPoolIndex.size,\n      partitionDataToPoolIndex.size,\n      deleteFilesToPoolIndex.size,\n      residualToPoolIndex.size)\n\n    val avgDedup = if (totalTasks == 0) {\n      \"0.0\"\n    } else {\n      val nonEmptyPools = allPoolSizes.filter(_ > 0)\n      if (nonEmptyPools.isEmpty) {\n        \"0.0\"\n      } else {\n        val avgUnique = nonEmptyPools.sum.toDouble / nonEmptyPools.length\n        f\"${(1.0 - avgUnique / totalTasks) * 100}%.1f\"\n      }\n    }\n\n    val partitionDataPoolBytes = commonBuilder.getPartitionDataPoolList.asScala\n      .map(_.getSerializedSize)\n      .sum\n\n    logInfo(s\"IcebergScan: $totalTasks tasks, ${allPoolSizes.size} pools ($avgDedup% avg dedup)\")\n    if (partitionDataToPoolIndex.nonEmpty) {\n      logInfo(\n        s\"  Partition data pool: ${partitionDataToPoolIndex.size} unique values, \" +\n          s\"$partitionDataPoolBytes bytes (protobuf)\")\n    }\n\n    val commonBytes = commonBuilder.build().toByteArray\n    val perPartitionBytes = perPartitionBuilders.map(_.toByteArray).toArray\n\n    (commonBytes, perPartitionBytes)\n  }\n\n  override def createExec(nativeOp: Operator, op: CometBatchScanExec): CometNativeExec = {\n    import org.apache.spark.sql.comet.CometIcebergNativeScanExec\n\n    // Extract metadata - it must be present at this point\n    val metadata = op.nativeIcebergScanMetadata.getOrElse {\n      throw new IllegalStateException(\n        \"Programming error: CometBatchScanExec.nativeIcebergScanMetadata is None. \" +\n          \"Metadata should have been extracted in CometScanRule.\")\n    }\n\n    // Extract metadataLocation from the native operator's common data\n    val metadataLocation = nativeOp.getIcebergScan.getCommon.getMetadataLocation\n\n    // Pass BatchScanExec reference for deferred serialization (DPP support)\n    // Serialization happens at execution time after doPrepare() resolves DPP subqueries\n    CometIcebergNativeScanExec(nativeOp, op.wrapped, op.session, metadataLocation, metadata)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/operator/CometNativeScan.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde.operator\n\nimport scala.collection.mutable.ListBuffer\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.catalyst.expressions.{Expression, Literal, PlanExpression}\nimport org.apache.spark.sql.catalyst.util.ResolveDefaultColumns.getExistenceDefaultValues\nimport org.apache.spark.sql.comet.{CometNativeExec, CometNativeScanExec, CometScanExec}\nimport org.apache.spark.sql.execution.FileSourceScanExec\nimport org.apache.spark.sql.execution.datasources.PartitionedFile\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.{CometConf, ConfigEntry}\nimport org.apache.comet.CometConf.COMET_EXEC_ENABLED\nimport org.apache.comet.CometSparkSessionExtensions.{hasExplainInfo, withInfo}\nimport org.apache.comet.objectstore.NativeConfig\nimport org.apache.comet.parquet.CometParquetUtils\nimport org.apache.comet.serde.{CometOperatorSerde, Compatible, OperatorOuterClass, SupportLevel}\nimport org.apache.comet.serde.ExprOuterClass.Expr\nimport org.apache.comet.serde.OperatorOuterClass.Operator\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProto, serializeDataType}\n\n/**\n * Validation and serde logic for `native_datafusion` scans.\n */\nobject CometNativeScan extends CometOperatorSerde[CometScanExec] with Logging {\n\n  /** Determine whether the scan is supported and tag the Spark plan with any fallback reasons */\n  def isSupported(scanExec: FileSourceScanExec): Boolean = {\n\n    if (hasExplainInfo(scanExec)) {\n      // this node has already been tagged with fallback reasons\n      return false\n    }\n\n    if (!COMET_EXEC_ENABLED.get()) {\n      withInfo(scanExec, s\"Full native scan disabled because ${COMET_EXEC_ENABLED.key} disabled\")\n    }\n\n    // Native DataFusion doesn't support subqueries/dynamic pruning\n    if (scanExec.partitionFilters.exists(isDynamicPruningFilter)) {\n      withInfo(scanExec, \"Native DataFusion scan does not support subqueries/dynamic pruning\")\n    }\n\n    if (SQLConf.get.ignoreCorruptFiles ||\n      scanExec.relation.options\n        .get(\"ignorecorruptfiles\") // Spark sets this to lowercase.\n        .contains(\"true\")) {\n      withInfo(scanExec, \"Full native scan disabled because ignoreCorruptFiles enabled\")\n    }\n\n    if (SQLConf.get.ignoreMissingFiles ||\n      scanExec.relation.options\n        .get(\"ignoremissingfiles\") // Spark sets this to lowercase.\n        .contains(\"true\")) {\n\n      withInfo(scanExec, \"Full native scan disabled because ignoreMissingFiles enabled\")\n    }\n\n    // the scan is supported if no fallback reasons were added to the node\n    !hasExplainInfo(scanExec)\n  }\n\n 
 private def isDynamicPruningFilter(e: Expression): Boolean =\n    e.exists(_.isInstanceOf[PlanExpression[_]])\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = None\n\n  override def getSupportLevel(operator: CometScanExec): SupportLevel = {\n    // All checks happen in CometScanRule before ScanExec is converted to CometScanExec,\n    // so this serde object always reports Compatible for the already-converted plan\n    Compatible()\n  }\n\n  override def convert(\n      scan: CometScanExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    val nativeScanBuilder = OperatorOuterClass.NativeScan.newBuilder()\n    val commonBuilder = OperatorOuterClass.NativeScanCommon.newBuilder()\n\n    // Set source in common (used as part of injection key)\n    commonBuilder.setSource(scan.simpleStringWithNodeId())\n\n    val scanTypes = scan.output.flatMap { attr =>\n      serializeDataType(attr.dataType)\n    }\n\n    if (scanTypes.length == scan.output.length) {\n      commonBuilder.addAllFields(scanTypes.asJava)\n\n      // Scans are leaf operators and don't have children\n      builder.clearChildren()\n\n      if (scan.conf.getConf(SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED) &&\n        CometConf.COMET_RESPECT_PARQUET_FILTER_PUSHDOWN.get(scan.conf)) {\n\n        val dataFilters = new ListBuffer[Expr]()\n        for (filter <- scan.supportedDataFilters) {\n          exprToProto(filter, scan.output) match {\n            case Some(proto) => dataFilters += proto\n            case _ =>\n              logWarning(s\"Unsupported data filter $filter\")\n          }\n        }\n        commonBuilder.addAllDataFilters(dataFilters.asJava)\n      }\n\n      val possibleDefaultValues = getExistenceDefaultValues(scan.requiredSchema)\n      if (possibleDefaultValues.exists(_ != null)) {\n        // Our schema has default values. 
Serialize two lists, one with the default values\n        // and another with the indexes in the schema so the native side can map missing\n        // columns to these default values.\n        val (defaultValues, indexes) = possibleDefaultValues.zipWithIndex\n          .filter { case (expr, _) => expr != null }\n          .map { case (expr, index) =>\n            // ResolveDefaultColumnsUtil.getExistenceDefaultValues has evaluated these\n            // expressions and they should now just be literals.\n            (Literal(expr), index.toLong.asInstanceOf[java.lang.Long])\n          }\n          .unzip\n        commonBuilder.addAllDefaultValues(\n          defaultValues.flatMap(exprToProto(_, scan.output)).toIterable.asJava)\n        commonBuilder.addAllDefaultValuesIndexes(indexes.toIterable.asJava)\n      }\n\n      // Extract object store options from first file (S3 configs apply to all files in scan)\n      val filePartitions = scan.getFilePartitions()\n      val firstPartition: Option[PartitionedFile] =\n        filePartitions.flatMap(_.files.headOption).headOption\n\n      val partitionSchema = schema2Proto(scan.relation.partitionSchema.fields)\n      val requiredSchema = schema2Proto(scan.requiredSchema.fields)\n      val dataSchema = schema2Proto(scan.relation.dataSchema.fields)\n\n      val dataSchemaIndexes = scan.requiredSchema.fields.map(field => {\n        scan.relation.dataSchema.fieldIndex(field.name)\n      })\n      val partitionSchemaIndexes = Array\n        .range(\n          scan.relation.dataSchema.fields.length,\n          scan.relation.dataSchema.fields.length + scan.relation.partitionSchema.fields.length)\n\n      val projectionVector = (dataSchemaIndexes ++ partitionSchemaIndexes).map(idx =>\n        idx.toLong.asInstanceOf[java.lang.Long])\n\n      commonBuilder.addAllProjectionVector(projectionVector.toIterable.asJava)\n\n      // In `CometScanRule`, we ensure partitionSchema is supported.\n      assert(partitionSchema.length == scan.relation.partitionSchema.fields.length)\n\n      commonBuilder.addAllDataSchema(dataSchema.toIterable.asJava)\n      commonBuilder.addAllRequiredSchema(requiredSchema.toIterable.asJava)\n      commonBuilder.addAllPartitionSchema(partitionSchema.toIterable.asJava)\n      commonBuilder.setSessionTimezone(scan.conf.getConfString(\"spark.sql.session.timeZone\"))\n      commonBuilder.setCaseSensitive(scan.conf.getConf[Boolean](SQLConf.CASE_SENSITIVE))\n\n      // Collect S3/cloud storage configurations\n      val hadoopConf = scan.relation.sparkSession.sessionState\n        .newHadoopConfWithOptions(scan.relation.options)\n\n      commonBuilder.setEncryptionEnabled(CometParquetUtils.encryptionEnabled(hadoopConf))\n\n      firstPartition.foreach { partitionFile =>\n        val objectStoreOptions =\n          NativeConfig.extractObjectStoreOptions(hadoopConf, partitionFile.pathUri)\n        objectStoreOptions.foreach { case (key, value) =>\n          commonBuilder.putObjectStoreOptions(key, value)\n        }\n      }\n\n      // Set common data in NativeScan (file_partition will be populated at execution time)\n      nativeScanBuilder.setCommon(commonBuilder.build())\n\n      Some(builder.setNativeScan(nativeScanBuilder).build())\n\n    } else {\n      // There are unsupported scan types\n      withInfo(\n        scan,\n        s\"unsupported Comet operator: ${scan.nodeName}, due to unsupported data types above\")\n      None\n    }\n\n  }\n\n  override def createExec(nativeOp: Operator, op: CometScanExec): CometNativeExec = {\n 
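   // op.wrapped is the original FileSourceScanExec; its file partitions are injected at execution time\n 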
   CometNativeScanExec(nativeOp, op.wrapped, op.session, op)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/operator/CometSink.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde.operator\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.sql.comet.{CometNativeExec, CometSinkPlaceHolder}\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.execution.adaptive.ShuffleQueryStageExec\nimport org.apache.spark.sql.execution.exchange.ReusedExchangeExec\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.ConfigEntry\nimport org.apache.comet.serde.{CometOperatorSerde, OperatorOuterClass}\nimport org.apache.comet.serde.OperatorOuterClass.Operator\nimport org.apache.comet.serde.QueryPlanSerde.{serializeDataType, supportedDataType}\n\n/**\n * CometSink is the base class for transformations from a Spark operator to a Comet operator where\n * the native plan is a ScanExec that will read data from the Comet operator running the JVM.\n */\nabstract class CometSink[T <: SparkPlan] extends CometOperatorSerde[T] {\n\n  /** Whether the data produced by the Comet operator is FFI safe */\n  def isFfiSafe: Boolean = true\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = None\n\n  override def convert(\n      op: T,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    val supportedTypes =\n      op.output.forall(a => supportedDataType(a.dataType, allowComplex = true))\n\n    if (!supportedTypes) {\n      withInfo(op, \"Unsupported data type\")\n      return None\n    }\n\n    // These operators are source of Comet native execution chain\n    val scanBuilder = OperatorOuterClass.Scan.newBuilder()\n    val source = op.simpleStringWithNodeId()\n    if (source.isEmpty) {\n      scanBuilder.setSource(op.getClass.getSimpleName)\n    } else {\n      scanBuilder.setSource(source)\n    }\n    scanBuilder.setArrowFfiSafe(isFfiSafe)\n\n    val scanTypes = op.output.flatten { attr =>\n      serializeDataType(attr.dataType)\n    }\n\n    if (scanTypes.length == op.output.length) {\n      scanBuilder.addAllFields(scanTypes.asJava)\n\n      // Sink operators don't have children\n      builder.clearChildren()\n\n      Some(builder.setScan(scanBuilder).build())\n    } else {\n      // There are unsupported scan type\n      withInfo(\n        op,\n        s\"unsupported Comet operator: ${op.nodeName}, due to unsupported data types above\")\n      None\n    }\n  }\n}\n\nobject CometExchangeSink extends CometSink[SparkPlan] {\n\n  override def convert(\n      op: SparkPlan,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): 
Option[OperatorOuterClass.Operator] = {\n    if (shouldUseShuffleScan(op)) {\n      convertToShuffleScan(op, builder)\n    } else {\n      super.convert(op, builder, childOp: _*)\n    }\n  }\n\n  private def shouldUseShuffleScan(op: SparkPlan): Boolean = {\n    if (!CometConf.COMET_SHUFFLE_DIRECT_READ_ENABLED.get()) return false\n\n    // Extract the CometShuffleExchangeExec from the wrapper\n    val shuffleExec = op match {\n      case ShuffleQueryStageExec(_, s: CometShuffleExchangeExec, _) => Some(s)\n      case ShuffleQueryStageExec(_, ReusedExchangeExec(_, s: CometShuffleExchangeExec), _) =>\n        Some(s)\n      case s: CometShuffleExchangeExec => Some(s)\n      case _ => None\n    }\n\n    shuffleExec.isDefined\n  }\n\n  private def convertToShuffleScan(\n      op: SparkPlan,\n      builder: Operator.Builder): Option[OperatorOuterClass.Operator] = {\n    val supportedTypes =\n      op.output.forall(a => supportedDataType(a.dataType, allowComplex = true))\n\n    if (!supportedTypes) {\n      withInfo(op, \"Unsupported data type for shuffle direct read\")\n      return None\n    }\n\n    val scanBuilder = OperatorOuterClass.ShuffleScan.newBuilder()\n    val source = op.simpleStringWithNodeId()\n    if (source.isEmpty) {\n      scanBuilder.setSource(op.getClass.getSimpleName)\n    } else {\n      scanBuilder.setSource(source)\n    }\n\n    val scanTypes = op.output.flatMap { attr =>\n      serializeDataType(attr.dataType)\n    }\n\n    if (scanTypes.length == op.output.length) {\n      scanBuilder.addAllFields(scanTypes.asJava)\n      builder.clearChildren()\n      Some(builder.setShuffleScan(scanBuilder).build())\n    } else {\n      withInfo(op, s\"unsupported data types in ${op.nodeName} for shuffle direct read\")\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: SparkPlan): CometNativeExec =\n    CometSinkPlaceHolder(nativeOp, op, op)\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/operator/package.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.Literal\nimport org.apache.spark.sql.execution.datasources.FilePartition\nimport org.apache.spark.sql.types.{StructField, StructType}\n\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProto, serializeDataType}\n\npackage object operator {\n\n  def schema2Proto(fields: Array[StructField]): Array[OperatorOuterClass.SparkStructField] = {\n    val fieldBuilder = OperatorOuterClass.SparkStructField.newBuilder()\n    fields.map { field =>\n      fieldBuilder.setName(field.name)\n      fieldBuilder.setDataType(serializeDataType(field.dataType).get)\n      fieldBuilder.setNullable(field.nullable)\n      fieldBuilder.build()\n    }\n  }\n\n  def partition2Proto(\n      partition: FilePartition,\n      partitionSchema: StructType): OperatorOuterClass.SparkFilePartition = {\n    val partitionBuilder = OperatorOuterClass.SparkFilePartition.newBuilder()\n    partition.files.foreach(file => {\n      // Process the partition values\n      val partitionValues = file.partitionValues\n      assert(partitionValues.numFields == partitionSchema.length)\n      val partitionVals =\n        partitionValues.toSeq(partitionSchema).zipWithIndex.map { case (value, i) =>\n          val attr = partitionSchema(i)\n          val valueProto = exprToProto(Literal(value, attr.dataType), Seq.empty)\n          // In `CometScanRule`, we have already checked that all partition values are\n          // supported. So, we can safely use `get` here.\n          assert(\n            valueProto.isDefined,\n            s\"Unsupported partition value: $value, type: ${attr.dataType}\")\n          valueProto.get\n        }\n      val fileBuilder = OperatorOuterClass.SparkPartitionedFile.newBuilder()\n      partitionVals.foreach(fileBuilder.addPartitionValues)\n      fileBuilder\n        .setFilePath(file.filePath.toString)\n        .setStart(file.start)\n        .setLength(file.length)\n        .setFileSize(file.fileSize)\n      partitionBuilder.addPartitionedFile(fileBuilder.build())\n    })\n    partitionBuilder.build()\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/predicates.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.sql.catalyst.expressions.{And, Attribute, EqualNullSafe, EqualTo, Expression, GreaterThan, GreaterThanOrEqual, In, InSet, IsNaN, IsNotNull, IsNull, LessThan, LessThanOrEqual, Literal, Not, Or}\nimport org.apache.spark.sql.types.BooleanType\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.ExprOuterClass.Expr\nimport org.apache.comet.serde.QueryPlanSerde._\n\nobject CometNot extends CometExpressionSerde[Not] {\n  override def convert(\n      expr: Not,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n\n    expr.child match {\n      case expr: EqualTo =>\n        createBinaryExpr(\n          expr,\n          expr.left,\n          expr.right,\n          inputs,\n          binding,\n          (builder, binaryExpr) => builder.setNeq(binaryExpr))\n      case expr: EqualNullSafe =>\n        createBinaryExpr(\n          expr,\n          expr.left,\n          expr.right,\n          inputs,\n          binding,\n          (builder, binaryExpr) => builder.setNeqNullSafe(binaryExpr))\n      case expr: In =>\n        ComparisonUtils.in(expr, expr.value, expr.list, inputs, binding, negate = true)\n      case _ =>\n        createUnaryExpr(\n          expr,\n          expr.child,\n          inputs,\n          binding,\n          (builder, unaryExpr) => builder.setNot(unaryExpr))\n    }\n  }\n}\n\nobject CometAnd extends CometExpressionSerde[And] {\n  override def convert(\n      expr: And,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setAnd(binaryExpr))\n  }\n}\n\nobject CometOr extends CometExpressionSerde[Or] {\n  override def convert(\n      expr: Or,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setOr(binaryExpr))\n  }\n}\n\nobject CometEqualTo extends CometExpressionSerde[EqualTo] {\n  override def convert(\n      expr: EqualTo,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setEq(binaryExpr))\n  }\n}\n\nobject CometEqualNullSafe extends CometExpressionSerde[EqualNullSafe] {\n  override def convert(\n      expr: EqualNullSafe,\n      
inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setEqNullSafe(binaryExpr))\n  }\n}\n\nobject CometGreaterThan extends CometExpressionSerde[GreaterThan] {\n  override def convert(\n      expr: GreaterThan,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setGt(binaryExpr))\n  }\n}\n\nobject CometGreaterThanOrEqual extends CometExpressionSerde[GreaterThanOrEqual] {\n  override def convert(\n      expr: GreaterThanOrEqual,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setGtEq(binaryExpr))\n  }\n}\n\nobject CometLessThan extends CometExpressionSerde[LessThan] {\n  override def convert(\n      expr: LessThan,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setLt(binaryExpr))\n  }\n}\n\nobject CometLessThanOrEqual extends CometExpressionSerde[LessThanOrEqual] {\n  override def convert(\n      expr: LessThanOrEqual,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createBinaryExpr(\n      expr,\n      expr.left,\n      expr.right,\n      inputs,\n      binding,\n      (builder, binaryExpr) => builder.setLtEq(binaryExpr))\n  }\n}\n\nobject CometIsNull extends CometExpressionSerde[IsNull] {\n  override def convert(\n      expr: IsNull,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createUnaryExpr(\n      expr,\n      expr.child,\n      inputs,\n      binding,\n      (builder, unaryExpr) => builder.setIsNull(unaryExpr))\n  }\n}\n\nobject CometIsNotNull extends CometExpressionSerde[IsNotNull] {\n  override def convert(\n      expr: IsNotNull,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    createUnaryExpr(\n      expr,\n      expr.child,\n      inputs,\n      binding,\n      (builder, unaryExpr) => builder.setIsNotNull(unaryExpr))\n  }\n}\n\nobject CometIsNaN extends CometExpressionSerde[IsNaN] {\n  override def convert(\n      expr: IsNaN,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n    val optExpr = scalarFunctionExprToProtoWithReturnType(\"isnan\", BooleanType, false, childExpr)\n\n    optExprWithInfo(optExpr, expr, expr.child)\n  }\n}\n\nobject CometIn extends CometExpressionSerde[In] {\n  override def convert(\n      expr: In,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    ComparisonUtils.in(expr, expr.value, expr.list, inputs, binding, negate = false)\n  }\n}\n\nobject CometInSet extends CometExpressionSerde[InSet] {\n  override def convert(\n      expr: InSet,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val valueDataType = expr.child.dataType\n    val list = expr.hset.map { setVal =>\n      Literal(setVal, valueDataType)\n    
}.toSeq\n    // Convert the `InSet` expression to an equivalent `In` expression;\n    // Spark's `InSet` optimization is performed on the native (DataFusion) side.\n    ComparisonUtils.in(expr, expr.child, list, inputs, binding, negate = false)\n  }\n}\n\nobject ComparisonUtils {\n\n  def in(\n      expr: Expression,\n      value: Expression,\n      list: Seq[Expression],\n      inputs: Seq[Attribute],\n      binding: Boolean,\n      negate: Boolean): Option[Expr] = {\n    val valueExpr = exprToProtoInternal(value, inputs, binding)\n    val listExprs = list.map(exprToProtoInternal(_, inputs, binding))\n    if (valueExpr.isDefined && listExprs.forall(_.isDefined)) {\n      val builder = ExprOuterClass.In.newBuilder()\n      builder.setInValue(valueExpr.get)\n      builder.addAllLists(listExprs.map(_.get).asJava)\n      builder.setNegated(negate)\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setIn(builder)\n          .build())\n    } else {\n      val allExprs = list ++ Seq(value)\n      withInfo(expr, allExprs: _*)\n      None\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/statics.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, ExpressionImplUtils}\nimport org.apache.spark.sql.catalyst.expressions.objects.StaticInvoke\nimport org.apache.spark.sql.catalyst.util.CharVarcharCodegenUtils\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\n\nobject CometStaticInvoke extends CometExpressionSerde[StaticInvoke] {\n\n  // With Spark 3.4, CharVarcharCodegenUtils.readSidePadding gets called to pad spaces for\n  // char types.\n  // See https://github.com/apache/spark/pull/38151\n  private val staticInvokeExpressions\n      : Map[(String, Class[_]), CometExpressionSerde[StaticInvoke]] =\n    Map(\n      (\"readSidePadding\", classOf[CharVarcharCodegenUtils]) -> CometScalarFunction(\n        \"read_side_padding\"),\n      (\"isLuhnNumber\", classOf[ExpressionImplUtils]) -> CometScalarFunction(\"luhn_check\"))\n\n  override def convert(\n      expr: StaticInvoke,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    staticInvokeExpressions.get((expr.functionName, expr.staticObject)) match {\n      case Some(handler) =>\n        handler.convert(expr, inputs, binding)\n      case None =>\n        withInfo(\n          expr,\n          s\"Static invoke expression: ${expr.functionName} is not supported\",\n          expr.children: _*)\n        None\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/strings.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport java.util.Locale\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, Cast, Concat, ConcatWs, Expression, GetJsonObject, If, InitCap, IsNull, Left, Length, Like, Literal, Lower, RegExpReplace, Right, RLike, StringLPad, StringRepeat, StringRPad, StringSplit, Substring, Upper}\nimport org.apache.spark.sql.types.{BinaryType, DataTypes, LongType, StringType}\nimport org.apache.spark.unsafe.types.UTF8String\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.expressions.{CometCast, CometEvalMode, RegExp}\nimport org.apache.comet.serde.ExprOuterClass.Expr\nimport org.apache.comet.serde.QueryPlanSerde.{createBinaryExpr, exprToProtoInternal, optExprWithInfo, scalarFunctionExprToProto, scalarFunctionExprToProtoWithReturnType}\n\nobject CometStringRepeat extends CometExpressionSerde[StringRepeat] {\n\n  override def convert(\n      expr: StringRepeat,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val children = expr.children\n    val leftCast = Cast(children(0), StringType)\n    val rightCast = Cast(children(1), LongType)\n    val leftExpr = exprToProtoInternal(leftCast, inputs, binding)\n    val rightExpr = exprToProtoInternal(rightCast, inputs, binding)\n    val optExpr = scalarFunctionExprToProto(\"repeat\", leftExpr, rightExpr)\n    optExprWithInfo(optExpr, expr, leftCast, rightCast)\n  }\n}\n\nclass CometCaseConversionBase[T <: Expression](function: String)\n    extends CometScalarFunction[T](function) {\n\n  override def convert(expr: T, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {\n    if (!CometConf.COMET_CASE_CONVERSION_ENABLED.get()) {\n      withInfo(\n        expr,\n        \"Comet is not compatible with Spark for case conversion in \" +\n          s\"locale-specific cases. Set ${CometConf.COMET_CASE_CONVERSION_ENABLED.key}=true \" +\n          \"to enable it anyway.\")\n      return None\n    }\n    super.convert(expr, inputs, binding)\n  }\n}\n\nobject CometUpper extends CometCaseConversionBase[Upper](\"upper\")\n\nobject CometLower extends CometCaseConversionBase[Lower](\"lower\")\n\nobject CometLength extends CometScalarFunction[Length](\"length\") {\n  override def getSupportLevel(expr: Length): SupportLevel = expr.child.dataType match {\n    case _: BinaryType => Unsupported(Some(\"Length on BinaryType is not supported\"))\n    case _ => Compatible()\n  }\n}\n\nobject CometInitCap extends CometScalarFunction[InitCap](\"initcap\") {\n\n  override def getSupportLevel(expr: InitCap): SupportLevel = {\n    // Behavior differs from Spark. 
One example is that for the input \"robert rose-smith\", Spark\n    // will produce \"Robert Rose-smith\", but Comet will produce \"Robert Rose-Smith\".\n    // https://github.com/apache/datafusion-comet/issues/1052\n    Incompatible(None)\n  }\n\n  override def convert(expr: InitCap, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {\n    super.convert(expr, inputs, binding)\n  }\n}\n\nobject CometSubstring extends CometExpressionSerde[Substring] {\n\n  override def convert(\n      expr: Substring,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n    (expr.pos, expr.len) match {\n      case (Literal(pos, _), Literal(len, _)) =>\n        exprToProtoInternal(expr.str, inputs, binding) match {\n          case Some(strExpr) =>\n            val builder = ExprOuterClass.Substring.newBuilder()\n            builder.setChild(strExpr)\n            builder.setStart(pos.asInstanceOf[Int])\n            builder.setLen(len.asInstanceOf[Int])\n            Some(ExprOuterClass.Expr.newBuilder().setSubstring(builder).build())\n          case None =>\n            withInfo(expr, expr.str)\n            None\n        }\n      case _ =>\n        withInfo(expr, \"Substring pos and len must be literals\")\n        None\n    }\n  }\n}\n\nobject CometLeft extends CometExpressionSerde[Left] {\n\n  override def convert(expr: Left, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {\n    expr.len match {\n      case Literal(lenValue, _) =>\n        exprToProtoInternal(expr.str, inputs, binding) match {\n          case Some(strExpr) =>\n            val builder = ExprOuterClass.Substring.newBuilder()\n            builder.setChild(strExpr)\n            builder.setStart(1)\n            builder.setLen(lenValue.asInstanceOf[Int])\n            Some(ExprOuterClass.Expr.newBuilder().setSubstring(builder).build())\n          case None =>\n            withInfo(expr, expr.str)\n            None\n        }\n      case _ =>\n        withInfo(expr, \"LEFT len must be a literal\")\n        None\n    }\n  }\n\n  override def getSupportLevel(expr: Left): SupportLevel = {\n    expr.str.dataType match {\n      case _: BinaryType | _: StringType => Compatible()\n      case _ => Unsupported(Some(s\"LEFT does not support ${expr.str.dataType}\"))\n    }\n  }\n}\n\nobject CometRight extends CometExpressionSerde[Right] {\n\n  override def convert(expr: Right, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {\n    expr.len match {\n      case Literal(lenValue, _) =>\n        val lenInt = lenValue.asInstanceOf[Int]\n        if (lenInt <= 0) {\n          // Match Spark's behavior: If(IsNull(str), NULL, \"\")\n          // This ensures NULL propagation: RIGHT(NULL, 0) -> NULL, RIGHT(\"hello\", 0) -> \"\"\n          val isNullExpr = IsNull(expr.str)\n          val nullLiteral = Literal.create(null, StringType)\n          val emptyStringLiteral = Literal(UTF8String.EMPTY_UTF8, StringType)\n          val ifExpr = If(isNullExpr, nullLiteral, emptyStringLiteral)\n\n          // Serialize the If expression using existing infrastructure\n          exprToProtoInternal(ifExpr, inputs, binding)\n        } else {\n          exprToProtoInternal(expr.str, inputs, binding) match {\n            case Some(strExpr) =>\n              val builder = ExprOuterClass.Substring.newBuilder()\n              builder.setChild(strExpr)\n              builder.setStart(-lenInt)\n              builder.setLen(lenInt)\n              Some(ExprOuterClass.Expr.newBuilder().setSubstring(builder).build())\n            case None =>\n   
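           // the string child could not be converted to protobuf; withInfo records the\n              // reason so the fallback to Spark can be surfaced when explaining the plan\n   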
           withInfo(expr, expr.str)\n              None\n          }\n        }\n      case _ =>\n        withInfo(expr, \"RIGHT len must be a literal\")\n        None\n    }\n  }\n\n  override def getSupportLevel(expr: Right): SupportLevel = {\n    expr.str.dataType match {\n      case _: StringType => Compatible()\n      case _ => Unsupported(Some(s\"RIGHT does not support ${expr.str.dataType}\"))\n    }\n  }\n}\n\nobject CometConcat extends CometScalarFunction[Concat](\"concat\") {\n  val unsupportedReason = \"CONCAT supports only string input parameters\"\n\n  override def getSupportLevel(expr: Concat): SupportLevel = {\n    if (expr.children.forall(_.dataType == DataTypes.StringType)) {\n      Compatible()\n    } else {\n      Incompatible(Some(unsupportedReason))\n    }\n  }\n}\n\nobject CometConcatWs extends CometExpressionSerde[ConcatWs] {\n\n  override def convert(expr: ConcatWs, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {\n    expr.children.headOption match {\n      // Match Spark behavior: when the separator is NULL, the result of concat_ws is NULL.\n      case Some(Literal(null, _)) =>\n        val nullLiteral = Literal.create(null, expr.dataType)\n        exprToProtoInternal(nullLiteral, inputs, binding)\n\n      case _ if expr.children.forall(_.foldable) =>\n        // Fall back to Spark for all-literal args so ConstantFolding can handle it\n        withInfo(expr, \"all arguments are foldable\")\n        None\n\n      case _ =>\n        // For all other cases, use the generic scalar function implementation.\n        CometScalarFunction[ConcatWs](\"concat_ws\").convert(expr, inputs, binding)\n    }\n  }\n}\n\nobject CometLike extends CometExpressionSerde[Like] {\n\n  override def convert(expr: Like, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {\n    if (expr.escapeChar == '\\\\') {\n      createBinaryExpr(\n        expr,\n        expr.left,\n        expr.right,\n        inputs,\n        binding,\n        (builder, binaryExpr) => builder.setLike(binaryExpr))\n    } else {\n      withInfo(expr, s\"custom escape character ${expr.escapeChar} not supported in LIKE\")\n      None\n    }\n  }\n}\n\nobject CometRLike extends CometExpressionSerde[RLike] {\n\n  override def convert(expr: RLike, inputs: Seq[Attribute], binding: Boolean): Option[Expr] = {\n    expr.right match {\n      case Literal(pattern, DataTypes.StringType) =>\n        if (!RegExp.isSupportedPattern(pattern.toString) &&\n          !CometConf.isExprAllowIncompat(\"regexp\")) {\n          withInfo(\n            expr,\n            s\"Regexp pattern $pattern is not compatible with Spark. 
\" +\n              s\"Set ${CometConf.getExprAllowIncompatConfigKey(\"regexp\")}=true \" +\n              \"to allow it anyway.\")\n          None\n        } else {\n          createBinaryExpr(\n            expr,\n            expr.left,\n            expr.right,\n            inputs,\n            binding,\n            (builder, binaryExpr) => builder.setRlike(binaryExpr))\n        }\n      case _ =>\n        withInfo(expr, \"Only scalar regexp patterns are supported\")\n        None\n    }\n  }\n}\n\nobject CometStringRPad extends CometExpressionSerde[StringRPad] {\n\n  override def getSupportLevel(expr: StringRPad): SupportLevel = {\n    if (expr.str.isInstanceOf[Literal]) {\n      return Unsupported(Some(\"Scalar values are not supported for the str argument\"))\n    }\n    if (!expr.pad.isInstanceOf[Literal]) {\n      return Unsupported(Some(\"Only scalar values are supported for the pad argument\"))\n    }\n    Compatible()\n  }\n\n  override def convert(\n      expr: StringRPad,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n\n    scalarFunctionExprToProto(\n      \"rpad\",\n      exprToProtoInternal(expr.str, inputs, binding),\n      exprToProtoInternal(expr.len, inputs, binding),\n      exprToProtoInternal(expr.pad, inputs, binding))\n  }\n}\n\nobject CometStringLPad extends CometExpressionSerde[StringLPad] {\n\n  override def getSupportLevel(expr: StringLPad): SupportLevel = {\n    if (expr.str.isInstanceOf[Literal]) {\n      return Unsupported(Some(\"Scalar values are not supported for the str argument\"))\n    }\n    if (!expr.pad.isInstanceOf[Literal]) {\n      return Unsupported(Some(\"Only scalar values are supported for the pad argument\"))\n    }\n    Compatible()\n  }\n\n  override def convert(\n      expr: StringLPad,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n    scalarFunctionExprToProto(\n      \"lpad\",\n      exprToProtoInternal(expr.str, inputs, binding),\n      exprToProtoInternal(expr.len, inputs, binding),\n      exprToProtoInternal(expr.pad, inputs, binding))\n  }\n}\n\nobject CometRegExpReplace extends CometExpressionSerde[RegExpReplace] {\n  override def getSupportLevel(expr: RegExpReplace): SupportLevel = {\n    if (!RegExp.isSupportedPattern(expr.regexp.toString) &&\n      !CometConf.isExprAllowIncompat(\"regexp\")) {\n      withInfo(\n        expr,\n        s\"Regexp pattern ${expr.regexp} is not compatible with Spark. \" +\n          s\"Set ${CometConf.getExprAllowIncompatConfigKey(\"regexp\")}=true \" +\n          \"to allow it anyway.\")\n      return Incompatible()\n    }\n    expr.pos match {\n      case Literal(value, DataTypes.IntegerType) if value == 1 => Compatible()\n      case _ =>\n        Unsupported(Some(\"Comet only supports regexp_replace with an offset of 1 (no offset).\"))\n    }\n  }\n\n  override def convert(\n      expr: RegExpReplace,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n    val subjectExpr = exprToProtoInternal(expr.subject, inputs, binding)\n    val patternExpr = exprToProtoInternal(expr.regexp, inputs, binding)\n    val replacementExpr = exprToProtoInternal(expr.rep, inputs, binding)\n    // DataFusion's regexp_replace stops at the first match. 
We need to add the 'g' flag\n    // to apply the regex globally to match Spark behavior.\n    val flagsExpr = exprToProtoInternal(Literal(\"g\"), inputs, binding)\n    val optExpr = scalarFunctionExprToProto(\n      \"regexp_replace\",\n      subjectExpr,\n      patternExpr,\n      replacementExpr,\n      flagsExpr)\n    optExprWithInfo(optExpr, expr, expr.subject, expr.regexp, expr.rep, expr.pos)\n  }\n}\n\n/**\n * Serde for StringSplit expression. This is a custom Comet function (not a built-in DataFusion\n * function), so we need to include the return type in the protobuf to avoid DataFusion registry\n * lookup failures.\n */\nobject CometStringSplit extends CometExpressionSerde[StringSplit] {\n\n  override def getSupportLevel(expr: StringSplit): SupportLevel =\n    Incompatible(Some(\"Regex engine differences between Java and Rust\"))\n\n  override def convert(\n      expr: StringSplit,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n    val strExpr = exprToProtoInternal(expr.str, inputs, binding)\n    val regexExpr = exprToProtoInternal(expr.regex, inputs, binding)\n    val limitExpr = exprToProtoInternal(expr.limit, inputs, binding)\n    val optExpr = scalarFunctionExprToProtoWithReturnType(\n      \"split\",\n      expr.dataType,\n      false,\n      strExpr,\n      regexExpr,\n      limitExpr)\n    optExprWithInfo(optExpr, expr, expr.str, expr.regex, expr.limit)\n  }\n}\n\nobject CometGetJsonObject extends CometExpressionSerde[GetJsonObject] {\n\n  override def getSupportLevel(expr: GetJsonObject): SupportLevel =\n    Incompatible(\n      Some(\n        \"Spark allows single-quoted JSON and unescaped control characters \" +\n          \"which Comet does not support\"))\n\n  override def convert(\n      expr: GetJsonObject,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n    val jsonExpr = exprToProtoInternal(expr.json, inputs, binding)\n    val pathExpr = exprToProtoInternal(expr.path, inputs, binding)\n    val optExpr = scalarFunctionExprToProtoWithReturnType(\n      \"get_json_object\",\n      expr.dataType,\n      false,\n      jsonExpr,\n      pathExpr)\n    optExprWithInfo(optExpr, expr, expr.json, expr.path)\n  }\n}\n\ntrait CommonStringExprs {\n\n  def stringDecode(\n      expr: Expression,\n      charset: Expression,\n      bin: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n    charset match {\n      case Literal(str, DataTypes.StringType)\n          if str.toString.toLowerCase(Locale.ROOT) == \"utf-8\" =>\n        // decode(col, 'utf-8') can be treated as a cast with \"try\" eval mode that puts nulls\n        // for invalid strings.\n        // Left child is the binary expression.\n        val binExpr = exprToProtoInternal(bin, inputs, binding)\n        if (binExpr.isDefined) {\n          CometCast.castToProto(expr, None, DataTypes.StringType, binExpr.get, CometEvalMode.TRY)\n        } else {\n          withInfo(expr, bin)\n          None\n        }\n      case _ =>\n        withInfo(expr, \"Comet only supports decoding with 'utf-8'.\")\n        None\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/structs.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport scala.jdk.CollectionConverters._\nimport scala.util.Try\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, CreateNamedStruct, GetArrayStructFields, GetStructField, JsonToStructs, StructsToCsv, StructsToJson}\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.DataTypeSupport\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProtoInternal, serializeDataType}\n\nobject CometCreateNamedStruct extends CometExpressionSerde[CreateNamedStruct] {\n  override def convert(\n      expr: CreateNamedStruct,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (expr.names.length != expr.names.distinct.length) {\n      withInfo(expr, \"CreateNamedStruct with duplicate field names are not supported\")\n      return None\n    }\n\n    val valExprs = expr.valExprs.map(exprToProtoInternal(_, inputs, binding))\n\n    if (valExprs.forall(_.isDefined)) {\n      val structBuilder = ExprOuterClass.CreateNamedStruct.newBuilder()\n      structBuilder.addAllValues(valExprs.map(_.get).asJava)\n      structBuilder.addAllNames(expr.names.map(_.toString).asJava)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          .setCreateNamedStruct(structBuilder)\n          .build())\n    } else {\n      withInfo(expr, \"unsupported arguments for CreateNamedStruct\", expr.valExprs: _*)\n      None\n    }\n\n  }\n}\n\nobject CometGetStructField extends CometExpressionSerde[GetStructField] {\n  override def convert(\n      expr: GetStructField,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    exprToProtoInternal(expr.child, inputs, binding).map { childExpr =>\n      val getStructFieldBuilder = ExprOuterClass.GetStructField\n        .newBuilder()\n        .setChild(childExpr)\n        .setOrdinal(expr.ordinal)\n\n      ExprOuterClass.Expr\n        .newBuilder()\n        .setGetStructField(getStructFieldBuilder)\n        .build()\n    }\n  }\n}\n\nobject CometGetArrayStructFields extends CometExpressionSerde[GetArrayStructFields] {\n  override def convert(\n      expr: GetArrayStructFields,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val childExpr = exprToProtoInternal(expr.child, inputs, binding)\n\n    if (childExpr.isDefined) {\n      val arrayStructFieldsBuilder = ExprOuterClass.GetArrayStructFields\n        .newBuilder()\n        .setChild(childExpr.get)\n        .setOrdinal(expr.ordinal)\n\n      Some(\n        ExprOuterClass.Expr\n          .newBuilder()\n          
.setGetArrayStructFields(arrayStructFieldsBuilder)\n          .build())\n    } else {\n      withInfo(expr, \"unsupported arguments for GetArrayStructFields\", expr.child)\n      None\n    }\n  }\n}\n\nobject CometStructsToJson extends CometExpressionSerde[StructsToJson] {\n\n  override def getSupportLevel(expr: StructsToJson): SupportLevel =\n    Incompatible(\n      Some(\n        \"Does not support Infinity/-Infinity for numeric types\" +\n          \" (https://github.com/apache/datafusion-comet/issues/3016)\"))\n\n  override def convert(\n      expr: StructsToJson,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    if (expr.options.nonEmpty) {\n      withInfo(expr, \"StructsToJson with options is not supported\")\n      None\n    } else {\n      val isSupported = expr.child.dataType match {\n        case s: StructType =>\n          s.fields.forall(f => isSupportedType(f.dataType))\n        case _: MapType | _: ArrayType =>\n          // Spark supports map and array in StructsToJson but this is not yet\n          // implemented in Comet\n          false\n        case _ =>\n          false\n      }\n\n      if (isSupported) {\n        exprToProtoInternal(expr.child, inputs, binding) match {\n          case Some(p) =>\n            val toJson = ExprOuterClass.ToJson\n              .newBuilder()\n              .setChild(p)\n              .setTimezone(expr.timeZoneId.getOrElse(\"UTC\"))\n              .setIgnoreNullFields(true)\n              .build()\n            Some(\n              ExprOuterClass.Expr\n                .newBuilder()\n                .setToJson(toJson)\n                .build())\n          case _ =>\n            withInfo(expr, expr.child)\n            None\n        }\n      } else {\n        withInfo(expr, \"Unsupported data type\", expr.child)\n        None\n      }\n    }\n  }\n\n  def isSupportedType(dt: DataType): Boolean = {\n    dt match {\n      case StructType(fields) =>\n        fields.forall(f => isSupportedType(f.dataType))\n      case DataTypes.BooleanType | DataTypes.ByteType | DataTypes.ShortType |\n          DataTypes.IntegerType | DataTypes.LongType | DataTypes.FloatType |\n          DataTypes.DoubleType | DataTypes.StringType =>\n        true\n      case DataTypes.DateType | DataTypes.TimestampType =>\n        // TODO implement these types with tests for formatting options and timezone\n        false\n      case _: MapType | _: ArrayType =>\n        // Spark supports map and array in StructsToJson but this is not yet\n        // implemented in Comet\n        false\n      case _ => false\n    }\n  }\n}\n\nobject CometJsonToStructs extends CometExpressionSerde[JsonToStructs] {\n\n  override def getSupportLevel(expr: JsonToStructs): SupportLevel = {\n    // this feature is partially implemented and not comprehensively tested yet\n    Incompatible()\n  }\n\n  override def convert(\n      expr: JsonToStructs,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n\n    if (expr.schema == null) {\n      withInfo(expr, \"from_json requires explicit schema\")\n      return None\n    }\n\n    def isSupportedType(dt: DataType): Boolean = {\n      dt match {\n        case StructType(fields) =>\n          fields.nonEmpty && fields.forall(f => isSupportedType(f.dataType))\n        case DataTypes.IntegerType | DataTypes.LongType | DataTypes.FloatType |\n            DataTypes.DoubleType | DataTypes.BooleanType | DataTypes.StringType =>\n          true\n        case _ => false\n      }\n    }\n\n 
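   // Illustrative: a schema like 'a INT, b STRING' (a struct of primitive fields) can\n    // be converted, while schemas containing arrays or maps fall back to Spark.\n 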
   val schemaType = expr.schema\n    if (!isSupportedType(schemaType)) {\n      withInfo(expr, \"from_json: Unsupported schema type\")\n      return None\n    }\n\n    val options = expr.options\n    if (options.nonEmpty) {\n      val mode = options.getOrElse(\"mode\", \"PERMISSIVE\")\n      if (mode != \"PERMISSIVE\") {\n        withInfo(expr, s\"from_json: Only PERMISSIVE mode supported, got: $mode\")\n        return None\n      }\n      val knownOptions = Set(\"mode\")\n      val unknownOpts = options.keySet -- knownOptions\n      if (unknownOpts.nonEmpty) {\n        withInfo(expr, s\"from_json: Ignoring unsupported options: ${unknownOpts.mkString(\", \")}\")\n      }\n    }\n\n    // Convert child expression and schema to protobuf\n    for {\n      childProto <- exprToProtoInternal(expr.child, inputs, binding)\n      schemaProto <- serializeDataType(schemaType)\n    } yield {\n      val fromJson = ExprOuterClass.FromJson\n        .newBuilder()\n        .setChild(childProto)\n        .setSchema(schemaProto)\n        .setTimezone(expr.timeZoneId.getOrElse(\"UTC\"))\n        .build()\n      ExprOuterClass.Expr.newBuilder().setFromJson(fromJson).build()\n    }\n  }\n}\n\nobject CometStructsToCsv extends CometExpressionSerde[StructsToCsv] {\n\n  private val incompatibleDataTypes = Seq(DateType, TimestampType, TimestampNTZType, BinaryType)\n\n  override def getSupportLevel(expr: StructsToCsv): SupportLevel = {\n    val dataTypes = expr.inputSchema.fields.map(_.dataType)\n    val containsComplexType = dataTypes.exists(DataTypeSupport.isComplexType)\n    if (containsComplexType) {\n      return Unsupported(\n        Some(\n          s\"The schema ${expr.inputSchema} is not supported because it includes a complex type\"))\n    }\n    val containsIncompatibleDataTypes = dataTypes.exists(incompatibleDataTypes.contains)\n    if (containsIncompatibleDataTypes) {\n      return Incompatible(\n        Some(\n          s\"The schema ${expr.inputSchema} is not supported because \" +\n            s\"it includes incompatible data types: $incompatibleDataTypes\"))\n    }\n    // https://github.com/apache/datafusion-comet/issues/3232\n    Incompatible()\n  }\n\n  override def convert(\n      expr: StructsToCsv,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    for {\n      childProto <- exprToProtoInternal(expr.child, inputs, binding)\n    } yield {\n      val optionsProto = options2Proto(expr.options, expr.timeZoneId)\n      val toCsv = ExprOuterClass.ToCsv\n        .newBuilder()\n        .setChild(childProto)\n        .setOptions(optionsProto)\n        .build()\n      ExprOuterClass.Expr.newBuilder().setToCsv(toCsv).build()\n    }\n  }\n\n  private def options2Proto(\n      options: Map[String, String],\n      timeZoneId: Option[String]): ExprOuterClass.CsvWriteOptions = {\n    ExprOuterClass.CsvWriteOptions\n      .newBuilder()\n      .setDelimiter(options.getOrElse(\"delimiter\", \",\"))\n      .setQuote(options.getOrElse(\"quote\", \"\\\"\"))\n      .setEscape(options.getOrElse(\"escape\", \"\\\\\"))\n      .setNullValue(options.getOrElse(\"nullValue\", \"\"))\n      .setTimezone(timeZoneId.getOrElse(\"UTC\"))\n      .setIgnoreLeadingWhiteSpace(options\n        .get(\"ignoreLeadingWhiteSpace\")\n        .flatMap(ignoreLeadingWhiteSpace => Try(ignoreLeadingWhiteSpace.toBoolean).toOption)\n        .getOrElse(true))\n      .setIgnoreTrailingWhiteSpace(options\n        .get(\"ignoreTrailingWhiteSpace\")\n        .flatMap(ignoreTrailingWhiteSpace => 
Try(ignoreTrailingWhiteSpace.toBoolean).toOption)\n        .getOrElse(true))\n      .setQuoteAll(options\n        .get(\"quoteAll\")\n        .flatMap(quoteAll => Try(quoteAll.toBoolean).toOption)\n        .getOrElse(false))\n      .build()\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/serde/unixtime.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.serde\n\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, FromUnixTime, Literal}\nimport org.apache.spark.sql.catalyst.util.TimestampFormatter\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProtoInternal, optExprWithInfo, scalarFunctionExprToProto}\n\n// TODO: DataFusion supports only -8334601211038 <= sec <= 8210266876799\n// https://github.com/apache/datafusion/issues/16594\nobject CometFromUnixTime extends CometExpressionSerde[FromUnixTime] {\n\n  override def getSupportLevel(expr: FromUnixTime): SupportLevel = Incompatible(None)\n\n  override def convert(\n      expr: FromUnixTime,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[ExprOuterClass.Expr] = {\n    val secExpr = exprToProtoInternal(expr.sec, inputs, binding)\n    // TODO: DataFusion toChar does not support Spark datetime pattern format\n    // https://github.com/apache/datafusion/issues/16577\n    // https://github.com/apache/datafusion/issues/14536\n    // After fixing these issues, use provided `format` instead of the manual replacement below\n    val formatExpr = exprToProtoInternal(Literal(\"%Y-%m-%d %H:%M:%S\"), inputs, binding)\n    val timeZone = exprToProtoInternal(Literal(expr.timeZoneId.orNull), inputs, binding)\n\n    if (expr.format != Literal(TimestampFormatter.defaultPattern)) {\n      withInfo(expr, \"Datetime pattern format is unsupported\")\n      None\n    } else if (secExpr.isDefined && formatExpr.isDefined) {\n      val timestampExpr =\n        scalarFunctionExprToProto(\"from_unixtime\", Seq(secExpr, timeZone): _*)\n      val optExpr = scalarFunctionExprToProto(\"to_char\", Seq(timestampExpr, formatExpr): _*)\n      optExprWithInfo(optExpr, expr, expr.sec, expr.format)\n    } else {\n      withInfo(expr, expr.sec, expr.format)\n      None\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/testing/FuzzDataGenerator.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.testing\n\nimport java.math.{BigDecimal, RoundingMode}\nimport java.nio.charset.Charset\nimport java.sql.Timestamp\nimport java.text.SimpleDateFormat\nimport java.time.{Instant, LocalDateTime, ZoneId}\n\nimport scala.collection.mutable.ListBuffer\nimport scala.util.Random\n\nimport org.apache.commons.lang3.RandomStringUtils\nimport org.apache.spark.sql.{DataFrame, Row, SparkSession}\nimport org.apache.spark.sql.types._\n\nobject FuzzDataGenerator {\n\n  /**\n   * Date to use as base for generating temporal columns. Random integers will be added to or\n   * subtracted from this value.\n   *\n   * Date was chosen to trigger generating a timestamp that's larger than a 64-bit nanosecond\n   * timestamp can represent so that we test support for INT96 timestamps.\n   */\n  val defaultBaseDate: Long =\n    new SimpleDateFormat(\"YYYY-MM-DD hh:mm:ss\").parse(\"3333-05-25 12:34:56\").getTime\n\n  val floatNaNLiteral = \"FLOAT('NaN')\"\n  val doubleNaNLiteral = \"DOUBLE('NaN')\"\n\n  def generateSchema(options: SchemaGenOptions): StructType = {\n    val primitiveTypes = options.primitiveTypes\n    val dataTypes = ListBuffer[DataType]()\n    dataTypes.appendAll(primitiveTypes)\n\n    val arraysOfPrimitives = primitiveTypes.map(DataTypes.createArrayType)\n\n    if (options.generateStruct) {\n      dataTypes += StructType(\n        primitiveTypes.zipWithIndex.map(x => StructField(s\"c${x._2}\", x._1, nullable = true)))\n\n      if (options.generateArray) {\n        dataTypes += StructType(arraysOfPrimitives.zipWithIndex.map(x =>\n          StructField(s\"c${x._2}\", x._1, nullable = true)))\n      }\n    }\n\n    if (options.generateMap) {\n      dataTypes += MapType(DataTypes.IntegerType, DataTypes.StringType)\n    }\n\n    if (options.generateArray) {\n      dataTypes.appendAll(arraysOfPrimitives)\n\n      if (options.generateStruct) {\n        dataTypes += DataTypes.createArrayType(StructType(primitiveTypes.zipWithIndex.map(x =>\n          StructField(s\"c${x._2}\", x._1, nullable = true))))\n      }\n\n      if (options.generateMap) {\n        dataTypes += DataTypes.createArrayType(\n          MapType(DataTypes.IntegerType, DataTypes.StringType))\n      }\n    }\n\n    // generate schema using random data types\n    val fields = dataTypes.zipWithIndex\n      .map(i => StructField(s\"c${i._2}\", i._1, nullable = true))\n    StructType(fields.toSeq)\n  }\n\n  def generateNestedSchema(\n      r: Random,\n      numCols: Int,\n      minDepth: Int,\n      maxDepth: Int,\n      options: SchemaGenOptions): StructType = {\n    assert(numCols > 0)\n    assert(minDepth >= 0)\n    assert(maxDepth >= 0)\n    assert(minDepth <= maxDepth)\n    assert(\n     
 options.generateArray || options.generateStruct || options.generateMap,\n      \"cannot generate nested schema if options do not include generating complex types\")\n\n    var counter = 0\n\n    def generateFieldName() = {\n      val name = s\"c_$counter\"\n      counter += 1\n      name\n    }\n\n    def generateArray(depth: Int, name: String) = {\n      val element = genField(r, depth + 1)\n      StructField(name, DataTypes.createArrayType(element.dataType, true))\n    }\n\n    def generateStruct(depth: Int, name: String) = {\n      val fields =\n        Range(1, 2 + r.nextInt(10)).map(_ => genField(r, depth + 1)).toArray\n      StructField(name, DataTypes.createStructType(fields))\n    }\n\n    def generateMap(depth: Int, name: String) = {\n      val keyField = genField(r, depth + 1)\n      val valueField = genField(r, depth + 1)\n      StructField(name, DataTypes.createMapType(keyField.dataType, valueField.dataType))\n    }\n\n    def generatePrimitive(name: String) = {\n      StructField(name, randomChoice(options.primitiveTypes, r))\n    }\n\n    def genField(r: Random, depth: Int): StructField = {\n      val name = generateFieldName()\n      val generators = new ListBuffer[() => StructField]()\n      // the generate* helpers already call genField(r, depth + 1) for their children,\n      // so pass the current depth here rather than incrementing it twice\n      if (options.generateArray && depth < maxDepth) {\n        generators += (() => generateArray(depth, name))\n      }\n      if (options.generateStruct && depth < maxDepth) {\n        generators += (() => generateStruct(depth, name))\n      }\n      if (options.generateMap && depth < maxDepth) {\n        generators += (() => generateMap(depth, name))\n      }\n      if (depth >= minDepth) {\n        generators += (() => generatePrimitive(name))\n      }\n      randomChoice(generators.toSeq, r)()\n    }\n\n    StructType(Range(0, numCols).map(_ => genField(r, 0)))\n  }\n\n  def generateDataFrame(\n      r: Random,\n      spark: SparkSession,\n      schema: StructType,\n      numRows: Int,\n      options: DataGenOptions): DataFrame = {\n\n    // generate columnar data\n    val cols: Seq[Seq[Any]] =\n      schema.fields.map(f => generateColumn(r, f.dataType, numRows, options)).toSeq\n\n    // convert to rows\n    val rows = Range(0, numRows).map(rowIndex => {\n      Row.fromSeq(cols.map(_(rowIndex)))\n    })\n\n    spark.createDataFrame(spark.sparkContext.parallelize(rows), schema)\n  }\n\n  private def generateColumn(\n      r: Random,\n      dataType: DataType,\n      numRows: Int,\n      options: DataGenOptions): Seq[Any] = {\n    dataType match {\n      case ArrayType(elementType, _) =>\n        val values = generateColumn(r, elementType, numRows, options)\n        val list = ListBuffer[Any]()\n        for (i <- 0 until numRows) {\n          if (i % 10 == 0 && options.allowNull) {\n            list += null\n          } else {\n            list += Range(0, r.nextInt(5)).map(j => values((i + j) % values.length)).toArray\n          }\n        }\n        list.toSeq\n      case StructType(fields) =>\n        val values = fields.map(f => generateColumn(r, f.dataType, numRows, options))\n        Range(0, numRows).map(i => Row(values.indices.map(j => values(j)(i)): _*))\n      case MapType(keyType, valueType, _) =>\n        val mapOptions = options.copy(allowNull = false)\n        val k = generateColumn(r, keyType, numRows, mapOptions)\n        val v = generateColumn(r, valueType, numRows, mapOptions)\n        k.zip(v).map(x => Map(x._1 -> x._2))\n      case DataTypes.BooleanType =>\n        generateColumn(r, DataTypes.LongType, numRows, options)\n          
.map {\n            // preserve generated nulls: casting null to a primitive would silently yield 0\n            case null => null\n            case l: Long => l % 2 == 0\n          }\n      case DataTypes.ByteType =>\n        generateColumn(r, DataTypes.LongType, numRows, options)\n          .map {\n            case null => null\n            case l: Long => l.toByte\n          }\n      case DataTypes.ShortType =>\n        generateColumn(r, DataTypes.LongType, numRows, options)\n          .map {\n            case null => null\n            case l: Long => l.toShort\n          }\n      case DataTypes.IntegerType =>\n        generateColumn(r, DataTypes.LongType, numRows, options)\n          .map {\n            case null => null\n            case l: Long => l.toInt\n          }\n      case DataTypes.LongType =>\n        Range(0, numRows).map(_ => {\n          r.nextInt(50) match {\n            case 0 if options.allowNull => null\n            case 1 => 0L\n            case 2 => Byte.MinValue.toLong\n            case 3 => Byte.MaxValue.toLong\n            case 4 => Short.MinValue.toLong\n            case 5 => Short.MaxValue.toLong\n            case 6 => Int.MinValue.toLong\n            case 7 => Int.MaxValue.toLong\n            case 8 => Long.MinValue\n            case 9 => Long.MaxValue\n            case _ => r.nextLong()\n          }\n        })\n      case DataTypes.FloatType =>\n        Range(0, numRows).map(_ => {\n          r.nextInt(20) match {\n            case 0 if options.allowNull => null\n            case 1 if options.generateInfinity => Float.NegativeInfinity\n            case 2 if options.generateInfinity => Float.PositiveInfinity\n            case 3 => Float.MinValue\n            case 4 => Float.MaxValue\n            case 5 => 0.0f\n            case 6 if options.generateNegativeZero => -0.0f\n            case 7 if options.generateNaN => Float.NaN\n            case _ => r.nextFloat()\n          }\n        })\n      case DataTypes.DoubleType =>\n        Range(0, numRows).map(_ => {\n          r.nextInt(20) match {\n            case 0 if options.allowNull => null\n            case 1 if options.generateInfinity => Double.NegativeInfinity\n            case 2 if options.generateInfinity => Double.PositiveInfinity\n            case 3 => Double.MinValue\n            case 4 => Double.MaxValue\n            case 5 => 0.0\n            case 6 if options.generateNegativeZero => -0.0\n            case 7 if options.generateNaN => Double.NaN\n            case _ => r.nextDouble()\n          }\n        })\n      case dt: DecimalType =>\n        Range(0, numRows).map(_ =>\n          new BigDecimal(r.nextDouble()).setScale(dt.scale, RoundingMode.HALF_UP))\n      case DataTypes.StringType =>\n        Range(0, numRows).map(_ => {\n          r.nextInt(10) match {\n            case 0 if options.allowNull => null\n            case 1 => r.nextInt().toByte.toString\n            case 2 => r.nextLong().toString\n            case 3 => r.nextDouble().toString\n            case 4 => RandomStringUtils.randomAlphabetic(options.maxStringLength)\n            case 5 =>\n              // use a constant value to trigger dictionary encoding\n              \"dict_encode_me!\"\n            case 6 if options.customStrings.nonEmpty =>\n              randomChoice(options.customStrings, r)\n            case _ => r.nextString(options.maxStringLength)\n          }\n        })\n      case DataTypes.BinaryType =>\n        generateColumn(r, DataTypes.StringType, numRows, options)\n          .map {\n            case x: String =>\n              x.getBytes(Charset.defaultCharset())\n            case _ =>\n              null\n          }\n      case DataTypes.DateType =>\n        Range(0, numRows).map(_ => new java.sql.Date(options.baseDate + r.nextInt()))\n      case DataTypes.TimestampType =>\n        
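// offset the configurable base date by a random (possibly negative) number of\n        // milliseconds; see defaultBaseDate for why the base year is 3333\n        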
Range(0, numRows).map(_ => new Timestamp(options.baseDate + r.nextInt()))\n      case DataTypes.TimestampNTZType =>\n        Range(0, numRows).map(_ =>\n          LocalDateTime.ofInstant(\n            Instant.ofEpochMilli(options.baseDate + r.nextInt()),\n            ZoneId.systemDefault()))\n      case _ => throw new IllegalStateException(s\"Cannot generate data for $dataType yet\")\n    }\n  }\n\n  private def randomChoice[T](list: Seq[T], r: Random): T = {\n    list(r.nextInt(list.length))\n  }\n\n}\n\nobject SchemaGenOptions {\n  val defaultPrimitiveTypes: Seq[DataType] = Seq(\n    DataTypes.BooleanType,\n    DataTypes.ByteType,\n    DataTypes.ShortType,\n    DataTypes.IntegerType,\n    DataTypes.LongType,\n    DataTypes.FloatType,\n    DataTypes.DoubleType,\n    DataTypes.createDecimalType(10, 2),\n    DataTypes.createDecimalType(36, 18),\n    DataTypes.DateType,\n    DataTypes.TimestampType,\n    DataTypes.TimestampNTZType,\n    DataTypes.StringType,\n    DataTypes.BinaryType)\n}\n\ncase class SchemaGenOptions(\n    generateArray: Boolean = false,\n    generateStruct: Boolean = false,\n    generateMap: Boolean = false,\n    primitiveTypes: Seq[DataType] = SchemaGenOptions.defaultPrimitiveTypes)\n\ncase class DataGenOptions(\n    allowNull: Boolean = true,\n    generateNegativeZero: Boolean = true,\n    generateNaN: Boolean = true,\n    baseDate: Long = FuzzDataGenerator.defaultBaseDate,\n    customStrings: Seq[String] = Seq.empty,\n    maxStringLength: Int = 8,\n    generateInfinity: Boolean = true)\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/comet/testing/ParquetGenerator.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.testing\n\nimport scala.util.Random\n\nimport org.apache.spark.sql.{SaveMode, SparkSession}\nimport org.apache.spark.sql.types.StructType\n\nobject ParquetGenerator {\n\n  /** Generate a Parquet file using a generated schema */\n  def makeParquetFile(\n      r: Random,\n      spark: SparkSession,\n      filename: String,\n      numRows: Int,\n      schemaGenOptions: SchemaGenOptions,\n      dataGenOptions: DataGenOptions): Unit = {\n    val schema = FuzzDataGenerator.generateSchema(schemaGenOptions)\n    makeParquetFile(r, spark, filename, schema, numRows, dataGenOptions)\n  }\n\n  /** Generate a Parquet file using the provided schema */\n  def makeParquetFile(\n      r: Random,\n      spark: SparkSession,\n      filename: String,\n      schema: StructType,\n      numRows: Int,\n      options: DataGenOptions): Unit = {\n    val df = FuzzDataGenerator.generateDataFrame(r, spark, schema, numRows, options)\n    df.write.mode(SaveMode.Overwrite).parquet(filename)\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/CometSource.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark\n\nimport org.apache.spark.metrics.source.Source\n\nimport com.codahale.metrics.{Counter, Gauge, MetricRegistry}\n\nimport org.apache.comet.CometCoverageStats\n\n/**\n * Exposes following metrics (hooked from CometCoverageStats)\n *   - operators.native: Total operators executed natively\n *   - operators.spark: Total operators that fell back to Spark\n *   - queries.planned: Total queries processed\n *   - transitions: Total Spark-to-Comet transitions\n *   - acceleration.ratio: native / (native + spark)\n */\nobject CometSource extends Source {\n  override val sourceName = \"comet\"\n  override val metricRegistry = new MetricRegistry()\n\n  val NATIVE_OPERATORS: Counter =\n    metricRegistry.counter(MetricRegistry.name(\"operators\", \"native\"))\n  val SPARK_OPERATORS: Counter = metricRegistry.counter(MetricRegistry.name(\"operators\", \"spark\"))\n  val QUERIES_PLANNED: Counter = metricRegistry.counter(MetricRegistry.name(\"queries\", \"planned\"))\n  val TRANSITIONS: Counter = metricRegistry.counter(MetricRegistry.name(\"transitions\"))\n\n  metricRegistry.register(\n    MetricRegistry.name(\"acceleration\", \"ratio\"),\n    new Gauge[Double] {\n      override def getValue: Double = {\n        val native = NATIVE_OPERATORS.getCount\n        val total = native + SPARK_OPERATORS.getCount\n        if (total > 0) native.toDouble / total else 0.0\n      }\n    })\n\n  def recordStats(stats: CometCoverageStats): Unit = {\n    NATIVE_OPERATORS.inc(stats.cometOperators)\n    SPARK_OPERATORS.inc(stats.sparkOperators)\n    TRANSITIONS.inc(stats.transitions)\n    QUERIES_PLANNED.inc()\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/Plugins.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark\n\nimport java.{util => ju}\nimport java.util.Collections\n\nimport org.apache.spark.api.plugin.{DriverPlugin, ExecutorPlugin, PluginContext, SparkPlugin}\nimport org.apache.spark.comet.shims.ShimCometDriverPlugin\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.internal.config.{EXECUTOR_MEMORY, EXECUTOR_MEMORY_OVERHEAD, EXECUTOR_MEMORY_OVERHEAD_FACTOR}\nimport org.apache.spark.sql.internal.StaticSQLConf\n\nimport org.apache.comet.CometConf.{COMET_METRICS_ENABLED, COMET_ONHEAP_ENABLED}\nimport org.apache.comet.CometSparkSessionExtensions\n\n/**\n * Comet driver plugin. This class is loaded by Spark's plugin framework. It will be instantiated\n * on driver side only. It will update the SparkConf with the extra configuration provided by\n * Comet, e.g., Comet memory configurations.\n *\n * Note that `SparkContext.conf` is spark package only. So this plugin must be in spark package.\n * Although `SparkContext.getConf` is public, it returns a copy of the SparkConf, so it cannot\n * actually change Spark configs at runtime.\n *\n * To enable this plugin, set the config \"spark.plugins\" to `org.apache.spark.CometPlugin`.\n */\nclass CometDriverPlugin extends DriverPlugin with Logging with ShimCometDriverPlugin {\n  private val EXECUTOR_MEMORY_DEFAULT = \"1g\"\n\n  override def init(sc: SparkContext, pluginContext: PluginContext): ju.Map[String, String] = {\n    logInfo(\"CometDriverPlugin init\")\n\n    if (!CometSparkSessionExtensions.isOffHeapEnabled(sc.getConf) &&\n      !sc.getConf.getBoolean(COMET_ONHEAP_ENABLED.key, false)) {\n      logWarning(\"Comet plugin is disabled because Spark is not running in off-heap mode.\")\n      return Collections.emptyMap[String, String]\n    }\n\n    // register CometSparkSessionExtensions if it isn't already registered\n    CometDriverPlugin.registerCometSessionExtension(sc.conf)\n\n    // Register Comet metrics\n    CometDriverPlugin.registerCometMetrics(sc)\n\n    if (CometSparkSessionExtensions.shouldOverrideMemoryConf(sc.getConf)) {\n      val execMemOverhead = if (sc.getConf.contains(EXECUTOR_MEMORY_OVERHEAD.key)) {\n        sc.getConf.getSizeAsMb(EXECUTOR_MEMORY_OVERHEAD.key)\n      } else {\n        // By default, executorMemory * spark.executor.memoryOverheadFactor, with minimum of 384MB\n        val executorMemory =\n          sc.getConf.getSizeAsMb(EXECUTOR_MEMORY.key, EXECUTOR_MEMORY_DEFAULT)\n        val memoryOverheadFactor = sc.getConf.get(EXECUTOR_MEMORY_OVERHEAD_FACTOR)\n        val memoryOverheadMinMib = getMemoryOverheadMinMib(sc.getConf)\n\n        Math.max((executorMemory * memoryOverheadFactor).toLong, memoryOverheadMinMib)\n      }\n\n      val 
cometMemOverhead = CometSparkSessionExtensions.getCometMemoryOverheadInMiB(sc.getConf)\n      sc.conf.set(EXECUTOR_MEMORY_OVERHEAD.key, s\"${execMemOverhead + cometMemOverhead}M\")\n      val newExecMemOverhead = sc.getConf.getSizeAsMb(EXECUTOR_MEMORY_OVERHEAD.key)\n\n      logInfo(s\"\"\"\n         Overriding Spark memory configuration for Comet:\n           - Spark executor memory overhead: ${execMemOverhead}MB\n           - Comet memory overhead: ${cometMemOverhead}MB\n           - Updated Spark executor memory overhead: ${newExecMemOverhead}MB\n         \"\"\")\n    } else {\n      logInfo(\"Comet is running in unified memory mode and sharing off-heap memory with Spark\")\n    }\n\n    Collections.emptyMap[String, String]\n  }\n\n  override def receive(message: Any): AnyRef = super.receive(message)\n\n  override def shutdown(): Unit = {\n    logInfo(\"CometDriverPlugin shutdown\")\n\n    super.shutdown()\n  }\n\n  override def registerMetrics(appId: String, pluginContext: PluginContext): Unit =\n    super.registerMetrics(appId, pluginContext)\n\n}\n\nobject CometDriverPlugin extends Logging {\n  def registerCometMetrics(sc: SparkContext): Unit = {\n    if (sc.getConf.getBoolean(\n        COMET_METRICS_ENABLED.key,\n        COMET_METRICS_ENABLED.defaultValue.get)) {\n      sc.env.metricsSystem.registerSource(CometSource)\n\n      val listenerKey = \"spark.sql.queryExecutionListeners\"\n      val listenerClass = \"org.apache.comet.CometMetricsListener\"\n      val listeners = sc.conf.get(listenerKey, \"\")\n      if (listeners.isEmpty) {\n        logInfo(s\"Setting $listenerKey=$listenerClass\")\n        sc.conf.set(listenerKey, listenerClass)\n      } else {\n        val currentListeners = listeners.split(\",\").map(_.trim)\n        if (!currentListeners.contains(listenerClass)) {\n          val newValue = s\"$listeners,$listenerClass\"\n          logInfo(s\"Setting $listenerKey=$newValue\")\n          sc.conf.set(listenerKey, newValue)\n        }\n      }\n    } else {\n      logInfo(\n        \"Comet metrics reporting is disabled. Set spark.comet.metrics.enabled=true to enable.\")\n    }\n  }\n\n  def registerCometSessionExtension(conf: SparkConf): Unit = {\n    val extensionKey = StaticSQLConf.SPARK_SESSION_EXTENSIONS.key\n    val extensionClass = classOf[CometSparkSessionExtensions].getName\n    val extensions = conf.get(extensionKey, \"\")\n    if (extensions.isEmpty) {\n      logInfo(s\"Setting $extensionKey=$extensionClass\")\n      conf.set(extensionKey, extensionClass)\n    } else {\n      val currentExtensions = extensions.split(\",\").map(_.trim)\n      if (!currentExtensions.contains(extensionClass)) {\n        val newValue = s\"$extensions,$extensionClass\"\n        logInfo(s\"Setting $extensionKey=$newValue\")\n        conf.set(extensionKey, newValue)\n      }\n    }\n  }\n}\n\n/**\n * The Comet plugin for Spark. To enable this plugin, set the config \"spark.plugins\" to\n * `org.apache.spark.CometPlugin`\n */\nclass CometPlugin extends SparkPlugin with Logging {\n  override def driverPlugin(): DriverPlugin = new CometDriverPlugin\n\n  override def executorPlugin(): ExecutorPlugin = null\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/shuffle/sort/RowPartition.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.sort\n\nimport scala.collection.mutable.ArrayBuffer\n\nclass RowPartition(initialSize: Int) {\n  private var rowAddresses: ArrayBuffer[Long] = new ArrayBuffer[Long](initialSize)\n  private var rowSizes: ArrayBuffer[Int] = new ArrayBuffer[Int](initialSize)\n\n  def addRow(addr: Long, size: Int): Unit = {\n    rowAddresses += addr\n    rowSizes += size\n  }\n\n  def getNumRows: Int = if (rowAddresses == null) {\n    0\n  } else {\n    rowAddresses.size\n  }\n\n  def getRowAddresses: Array[Long] = {\n    val array = rowAddresses.toArray\n    rowAddresses = null\n    array\n  }\n\n  def getRowSizes: Array[Int] = {\n    val array = rowSizes.toArray\n    rowSizes = null\n    array\n  }\n\n  def reset(): Unit = {\n    rowAddresses = new ArrayBuffer[Long](initialSize)\n    rowSizes = new ArrayBuffer[Int](initialSize)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometBatchScanExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.collection.mutable.ListBuffer\n\nimport org.apache.spark.rdd._\nimport org.apache.spark.sql.catalyst._\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, DynamicPruningExpression, Expression, Literal, SortOrder}\nimport org.apache.spark.sql.catalyst.plans.QueryPlan\nimport org.apache.spark.sql.catalyst.util.truncatedString\nimport org.apache.spark.sql.connector.read._\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.datasources.v2._\nimport org.apache.spark.sql.execution.metric._\nimport org.apache.spark.sql.types.{ArrayType, DataType, MapType, StructType}\nimport org.apache.spark.sql.vectorized._\n\nimport com.google.common.base.Objects\n\nimport org.apache.comet.{DataTypeSupport, MetricsSupport}\nimport org.apache.comet.iceberg.CometIcebergNativeScanMetadata\n\ncase class CometBatchScanExec(\n    wrapped: BatchScanExec,\n    runtimeFilters: Seq[Expression],\n    @transient nativeIcebergScanMetadata: Option[CometIcebergNativeScanMetadata] = None)\n    extends DataSourceV2ScanExecBase\n    with CometPlan {\n  def ordering: Option[Seq[SortOrder]] = wrapped.ordering\n\n  wrapped.logicalLink.foreach(setLogicalLink)\n\n  def keyGroupedPartitioning: Option[Seq[Expression]] = wrapped.keyGroupedPartitioning\n\n  def inputPartitions: Seq[InputPartition] = wrapped.inputPartitions\n\n  override lazy val inputRDD: RDD[InternalRow] = wrappedScan.inputRDD\n\n  override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    val rdd = inputRDD.asInstanceOf[RDD[ColumnarBatch]]\n\n    // These metrics are important for streaming solutions.\n    // despite there being similar metrics published by the native reader.\n    val numOutputRows = longMetric(\"numOutputRows\")\n    val scanTime = longMetric(\"scanTime\")\n    rdd.mapPartitionsInternal { batches =>\n      new Iterator[ColumnarBatch] {\n\n        override def hasNext: Boolean = {\n          // The `FileScanRDD` returns an iterator which scans the file during the `hasNext` call.\n          val startNs = System.nanoTime()\n          val res = batches.hasNext\n          scanTime += System.nanoTime() - startNs\n          res\n        }\n\n        override def next(): ColumnarBatch = {\n          val batch = batches.next()\n          numOutputRows += batch.numRows()\n          batch\n        }\n      }\n    }\n  }\n\n  // `ReusedSubqueryExec` in Spark only call non-columnar execute.\n  override def doExecute(): RDD[InternalRow] = {\n    ColumnarToRowExec(this).doExecute()\n  }\n\n  override def executeCollect(): Array[InternalRow] = {\n    ColumnarToRowExec(this).executeCollect()\n  }\n\n  override def readerFactory: 
PartitionReaderFactory = wrappedScan.readerFactory\n\n  override def scan: Scan = wrapped.scan\n\n  override def output: Seq[Attribute] = wrapped.output\n\n  override def equals(other: Any): Boolean = other match {\n    case other: CometBatchScanExec =>\n      // `wrapped` in `this` and `other` could reference the same `BatchScanExec` object,\n      // so check `runtimeFilters` equality too.\n      this.wrappedScan == other.wrappedScan && this.runtimeFilters == other.runtimeFilters\n    case _ =>\n      false\n  }\n\n  override def hashCode(): Int = {\n    Objects.hashCode(wrappedScan, runtimeFilters)\n  }\n\n  override def doCanonicalize(): CometBatchScanExec = {\n    this.copy(\n      wrapped = wrappedScan.doCanonicalize(),\n      runtimeFilters = QueryPlan.normalizePredicates(\n        runtimeFilters.filterNot(_ == DynamicPruningExpression(Literal.TrueLiteral)),\n        output),\n      nativeIcebergScanMetadata = None)\n  }\n\n  override def nodeName: String = {\n    wrapped.nodeName.replace(\"BatchScan\", \"CometBatchScan\")\n  }\n\n  override def simpleString(maxFields: Int): String = {\n    val truncatedOutputString = truncatedString(output, \"[\", \", \", \"]\", maxFields)\n    val runtimeFiltersString =\n      s\"RuntimeFilters: ${runtimeFilters.mkString(\"[\", \",\", \"]\")}\"\n    val result = s\"$nodeName$truncatedOutputString ${scan.description()} $runtimeFiltersString\"\n    redact(result)\n  }\n\n  private def wrappedScan: BatchScanExec = {\n    // The runtime filters in this scan could be transformed by optimizer rules such as\n    // `PlanAdaptiveDynamicPruningFilters`, while those in the wrapped scan are not. Since\n    // `inputRDD` uses the latter, it will be incorrect if we don't set it here.\n    //\n    // There is, however, no good way to modify `wrapped.runtimeFilters` since it is immutable.\n    // It is not good to use `wrapped.copy` here since it will also re-initialize those lazy vals\n    // in the `BatchScanExec`, e.g., metrics.\n    //\n    // TODO: find a better approach than this hack\n    val f = classOf[BatchScanExec].getDeclaredField(\"runtimeFilters\")\n    f.setAccessible(true)\n    f.set(wrapped, runtimeFilters)\n    wrapped\n  }\n\n  override lazy val metrics: Map[String, SQLMetric] =\n    wrappedScan.customMetrics ++ CometMetricNode.baseScanMetrics(\n      session.sparkContext) ++ (scan match {\n      case s: MetricsSupport => s.getMetrics\n      case _ => Map.empty\n    })\n\n  @transient override lazy val partitions: Seq[Seq[InputPartition]] = wrappedScan.partitions\n\n  override def supportsColumnar: Boolean = true\n}\n\nobject CometBatchScanExec extends DataTypeSupport {\n  override def isTypeSupported(\n      dt: DataType,\n      name: String,\n      fallbackReasons: ListBuffer[String]): Boolean = dt match {\n    case _: StructType | _: ArrayType | _: MapType => false\n    case _ => super.isTypeSupported(dt, name, fallbackReasons)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometBroadcastExchangeExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport java.util.UUID\nimport java.util.concurrent.{Future, TimeoutException, TimeUnit}\n\nimport scala.concurrent.{ExecutionContext, Promise}\nimport scala.concurrent.duration.NANOSECONDS\nimport scala.util.control.NonFatal\n\nimport org.apache.spark.{broadcast, Partition, SparkContext, SparkException, TaskContext}\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.Attribute\nimport org.apache.spark.sql.catalyst.plans.logical.Statistics\nimport org.apache.spark.sql.catalyst.plans.physical.{BroadcastMode, BroadcastPartitioning, Partitioning}\nimport org.apache.spark.sql.comet.util.Utils\nimport org.apache.spark.sql.errors.QueryExecutionErrors\nimport org.apache.spark.sql.execution.{ColumnarToRowExec, SparkPlan, SQLExecution}\nimport org.apache.spark.sql.execution.adaptive.{AQEShuffleReadExec, ShuffleQueryStageExec}\nimport org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, BroadcastExchangeLike, ReusedExchangeExec}\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}\nimport org.apache.spark.sql.internal.{SQLConf, StaticSQLConf}\nimport org.apache.spark.sql.vectorized.ColumnarBatch\nimport org.apache.spark.util.{SparkFatalException, ThreadUtils}\nimport org.apache.spark.util.io.ChunkedByteBuffer\n\nimport com.google.common.base.Objects\n\nimport org.apache.comet.{CometConf, CometRuntimeException, ConfigEntry}\nimport org.apache.comet.serde.OperatorOuterClass\nimport org.apache.comet.serde.operator.CometSink\nimport org.apache.comet.shims.ShimCometBroadcastExchangeExec\n\n/**\n * A [[CometBroadcastExchangeExec]] collects, transforms and finally broadcasts the result of a\n * transformed SparkPlan. This is a copy of the [[BroadcastExchangeExec]] class with the necessary\n * changes to support the Comet operator.\n *\n * [[CometBroadcastExchangeExec]] will be used in broadcast join operator.\n *\n * Note that this class cannot extend `CometExec` as usual similar to other Comet operators. 
As\n * the trait `BroadcastExchangeLike` in Spark extends abstract class `Exchange`, it limits the\n * flexibility to extend `CometExec` and `Exchange` at the same time.\n */\ncase class CometBroadcastExchangeExec(\n    originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    mode: BroadcastMode,\n    override val child: SparkPlan)\n    extends BroadcastExchangeLike\n    with ShimCometBroadcastExchangeExec\n    with CometPlan {\n\n  override val runId: UUID = UUID.randomUUID\n\n  override lazy val metrics: Map[String, SQLMetric] = Map(\n    \"dataSize\" -> SQLMetrics.createSizeMetric(sparkContext, \"data size\"),\n    \"numOutputRows\" -> SQLMetrics.createMetric(sparkContext, \"number of output rows\"),\n    \"collectTime\" -> SQLMetrics.createTimingMetric(sparkContext, \"time to collect\"),\n    \"buildTime\" -> SQLMetrics.createTimingMetric(sparkContext, \"time to build\"),\n    \"broadcastTime\" -> SQLMetrics.createTimingMetric(sparkContext, \"time to broadcast\"),\n    \"numCoalescedBatches\" -> SQLMetrics.createMetric(\n      sparkContext,\n      \"number of coalesced batches for broadcast\"),\n    \"numCoalescedRows\" -> SQLMetrics.createMetric(\n      sparkContext,\n      \"number of coalesced rows for broadcast\"))\n\n  override def doCanonicalize(): SparkPlan = {\n    CometBroadcastExchangeExec(null, null, mode, child.canonicalized)\n  }\n\n  override def runtimeStatistics: Statistics = {\n    val dataSize = metrics(\"dataSize\").value\n    val rowCount = metrics(\"numOutputRows\").value\n    Statistics(dataSize, Some(rowCount))\n  }\n\n  override def outputPartitioning: Partitioning = BroadcastPartitioning(mode)\n\n  @transient\n  private lazy val promise = Promise[broadcast.Broadcast[Any]]()\n\n  @transient\n  override lazy val completionFuture: scala.concurrent.Future[broadcast.Broadcast[Any]] =\n    promise.future\n\n  @transient\n  private val timeout: Long = conf.broadcastTimeout\n\n  @transient\n  private lazy val maxBroadcastRows = 512000000\n\n  def getNumPartitions(): Int = {\n    child.executeColumnar().getNumPartitions\n  }\n\n  @transient\n  override lazy val relationFuture: Future[broadcast.Broadcast[Any]] = {\n    SQLExecution.withThreadLocalCaptured[broadcast.Broadcast[Any]](\n      session,\n      CometBroadcastExchangeExec.executionContext) {\n      try {\n        setJobGroupOrTag(sparkContext, this)\n        val beforeCollect = System.nanoTime()\n\n        val countsAndBytes = child match {\n          case c: CometPlan => CometExec.getByteArrayRdd(c).collect()\n          case AQEShuffleReadExec(s: ShuffleQueryStageExec, _)\n              if s.plan.isInstanceOf[CometPlan] =>\n            CometExec.getByteArrayRdd(s.plan.asInstanceOf[CometPlan]).collect()\n          case s: ShuffleQueryStageExec if s.plan.isInstanceOf[CometPlan] =>\n            CometExec.getByteArrayRdd(s.plan.asInstanceOf[CometPlan]).collect()\n          case ReusedExchangeExec(_, plan) if plan.isInstanceOf[CometPlan] =>\n            CometExec.getByteArrayRdd(plan.asInstanceOf[CometPlan]).collect()\n          case AQEShuffleReadExec(ShuffleQueryStageExec(_, ReusedExchangeExec(_, plan), _), _)\n              if plan.isInstanceOf[CometPlan] =>\n            CometExec.getByteArrayRdd(plan.asInstanceOf[CometPlan]).collect()\n          case ShuffleQueryStageExec(_, ReusedExchangeExec(_, plan), _)\n              if plan.isInstanceOf[CometPlan] =>\n            CometExec.getByteArrayRdd(plan.asInstanceOf[CometPlan]).collect()\n          case AQEShuffleReadExec(s: ShuffleQueryStageExec, _) 
=>\n            throw new CometRuntimeException(\n              \"Child of CometBroadcastExchangeExec should be CometExec, \" +\n                s\"but got: ${s.plan.getClass}\")\n          case _ =>\n            throw new CometRuntimeException(\n              \"Child of CometBroadcastExchangeExec should be CometExec, \" +\n                s\"but got: ${child.getClass}\")\n        }\n\n        val numRows = countsAndBytes.map(_._1).sum\n        val input = countsAndBytes.iterator.map(countAndBytes => countAndBytes._2)\n\n        longMetric(\"numOutputRows\") += numRows\n        if (numRows >= maxBroadcastRows) {\n          throw QueryExecutionErrors.cannotBroadcastTableOverMaxTableRowsError(\n            maxBroadcastRows,\n            numRows)\n        }\n\n        val beforeBuild = System.nanoTime()\n        longMetric(\"collectTime\") += NANOSECONDS.toMillis(beforeBuild - beforeCollect)\n\n        // Coalesce the many small per-shuffle-block buffers into a single buffer.\n        // Without this, each consumer task deserializes one Arrow IPC stream per\n        // shuffle block (one per writer task per partition), which is very expensive\n        // when there are hundreds of writer tasks and partitions. See the scaladoc\n        // on coalesceBroadcastBatches for details.\n        val (batches, coalescedBatches, coalescedRows) = Utils.coalesceBroadcastBatches(input)\n        longMetric(\"numCoalescedBatches\") += coalescedBatches\n        longMetric(\"numCoalescedRows\") += coalescedRows\n\n        val dataSize = batches.map(_.size).sum\n\n        longMetric(\"dataSize\") += dataSize\n        val maxBytes = maxBroadcastTableBytes(conf)\n        if (dataSize >= maxBytes) {\n          throw QueryExecutionErrors.cannotBroadcastTableOverMaxTableBytesError(\n            maxBytes,\n            dataSize)\n        }\n\n        val beforeBroadcast = System.nanoTime()\n        longMetric(\"buildTime\") += NANOSECONDS.toMillis(beforeBroadcast - beforeBuild)\n\n        // (3.4 only) SPARK-39983 - Broadcast the relation without caching the unserialized object.\n        val broadcasted = sparkContext\n          .broadcastInternal(batches, true)\n          .asInstanceOf[broadcast.Broadcast[Any]]\n        longMetric(\"broadcastTime\") += NANOSECONDS.toMillis(System.nanoTime() - beforeBroadcast)\n        val executionId = sparkContext.getLocalProperty(SQLExecution.EXECUTION_ID_KEY)\n        SQLMetrics.postDriverMetricUpdates(sparkContext, executionId, metrics.values.toSeq)\n        promise.trySuccess(broadcasted)\n        broadcasted\n      } catch {\n        // SPARK-24294: To bypass scala bug: https://github.com/scala/bug/issues/9554, we throw\n        // SparkFatalException, which is a subclass of Exception. 
ThreadUtils.awaitResult\n        // will catch this exception and re-throw the wrapped fatal throwable.\n        case oe: OutOfMemoryError =>\n          val ex = new SparkFatalException(oe)\n          promise.tryFailure(ex)\n          throw ex\n        case e if !NonFatal(e) =>\n          val ex = new SparkFatalException(e)\n          promise.tryFailure(ex)\n          throw ex\n        case e: Throwable =>\n          promise.tryFailure(e)\n          throw e\n      }\n    }\n  }\n\n  override protected def doPrepare(): Unit = {\n    // Materialize the future.\n    relationFuture\n  }\n\n  override protected def doExecute(): RDD[InternalRow] = {\n    throw QueryExecutionErrors.executeCodePathUnsupportedError(\"CometBroadcastExchangeExec\")\n  }\n\n  override def supportsColumnar: Boolean = true\n\n  // This is basically for unit test only.\n  override def executeCollect(): Array[InternalRow] =\n    ColumnarToRowExec(this).executeCollect()\n\n  // This is basically for unit test only, called by `executeCollect` indirectly.\n  override protected def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    val broadcasted = executeBroadcast[Array[ChunkedByteBuffer]]()\n\n    new CometBatchRDD(sparkContext, getNumPartitions(), broadcasted)\n  }\n\n  // After https://issues.apache.org/jira/browse/SPARK-48195, Spark plan will cache created RDD.\n  // Since we may change the number of partitions in CometBatchRDD,\n  // we need a method that always creates a new CometBatchRDD.\n  def executeColumnar(numPartitions: Int): RDD[ColumnarBatch] = {\n    if (isCanonicalizedPlan) {\n      throw SparkException.internalError(\"A canonicalized plan is not supposed to be executed.\")\n    }\n\n    val broadcasted = executeBroadcast[Array[ChunkedByteBuffer]]()\n    new CometBatchRDD(sparkContext, numPartitions, broadcasted)\n  }\n\n  override protected[sql] def doExecuteBroadcast[T](): broadcast.Broadcast[T] = {\n    try {\n      relationFuture.get(timeout, TimeUnit.SECONDS).asInstanceOf[broadcast.Broadcast[T]]\n    } catch {\n      case ex: TimeoutException =>\n        logError(s\"Could not execute broadcast in $timeout secs.\", ex)\n        if (!relationFuture.isDone) {\n          cancelJobGroup(sparkContext, this)\n          relationFuture.cancel(true)\n        }\n        throw QueryExecutionErrors.executeBroadcastTimeoutError(timeout, Some(ex))\n    }\n  }\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometBroadcastExchangeExec =>\n        this.originalPlan == other.originalPlan &&\n        this.child == other.child\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(child)\n\n  override def stringArgs: Iterator[Any] = Iterator(output, child)\n\n  override protected def withNewChildInternal(newChild: SparkPlan): CometBroadcastExchangeExec =\n    copy(child = newChild)\n}\n\nobject CometBroadcastExchangeExec extends CometSink[BroadcastExchangeExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_BROADCAST_EXCHANGE_ENABLED)\n\n  override def createExec(\n      nativeOp: OperatorOuterClass.Operator,\n      b: BroadcastExchangeExec): CometNativeExec = {\n    CometSinkPlaceHolder(nativeOp, b, CometBroadcastExchangeExec(b, b.output, b.mode, b.child))\n  }\n\n  private[comet] val executionContext = ExecutionContext.fromExecutorService(\n    ThreadUtils.newDaemonCachedThreadPool(\n      \"comet-broadcast-exchange\",\n      
SQLConf.get.getConf(StaticSQLConf.BROADCAST_EXCHANGE_MAX_THREAD_THRESHOLD)))\n}\n\n/**\n * [[CometBatchRDD]] is a [[RDD]] of [[ColumnarBatch]]s that are broadcasted to the executors. It\n * is only used by [[CometBroadcastExchangeExec]] to broadcast the result of a Comet operator.\n *\n * @param sc\n *   SparkContext\n * @param numPartitions\n *   number of partitions\n * @param value\n *   the broadcasted batches which are serialized into an array of [[ChunkedByteBuffer]]s\n */\nclass CometBatchRDD(\n    sc: SparkContext,\n    val numPartitions: Int,\n    value: broadcast.Broadcast[Array[ChunkedByteBuffer]])\n    extends RDD[ColumnarBatch](sc, Nil) {\n\n  override def getPartitions: Array[Partition] = (0 until numPartitions).toArray.map { i =>\n    new CometBatchPartition(i, value)\n  }\n\n  override def compute(split: Partition, context: TaskContext): Iterator[ColumnarBatch] = {\n    val partition = split.asInstanceOf[CometBatchPartition]\n    partition.value.value.toIterator\n      .flatMap(Utils.decodeBatches(_, this.getClass.getSimpleName))\n  }\n}\n\nclass CometBatchPartition(\n    override val index: Int,\n    val value: broadcast.Broadcast[Array[ChunkedByteBuffer]])\n    extends Partition {}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometCoalesceExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.catalyst.expressions.Attribute\nimport org.apache.spark.sql.catalyst.plans.physical.{Partitioning, SinglePartition, UnknownPartitioning}\nimport org.apache.spark.sql.execution.{CoalesceExec, SparkPlan, UnaryExecNode}\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport com.google.common.base.Objects\n\nimport org.apache.comet.{CometConf, ConfigEntry}\nimport org.apache.comet.serde.OperatorOuterClass\nimport org.apache.comet.serde.operator.CometSink\n\nobject CometCoalesceExec extends CometSink[CoalesceExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_COALESCE_ENABLED)\n\n  override def createExec(\n      nativeOp: OperatorOuterClass.Operator,\n      op: CoalesceExec): CometNativeExec = {\n    CometSinkPlaceHolder(\n      nativeOp,\n      op,\n      CometCoalesceExec(op, op.output, op.numPartitions, op.child))\n  }\n}\n\n/**\n * This is basically a copy of Spark's CoalesceExec, but supports columnar processing to make it\n * more efficient when including it in a Comet query plan.\n */\ncase class CometCoalesceExec(\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    numPartitions: Int,\n    child: SparkPlan)\n    extends CometExec\n    with UnaryExecNode {\n  protected override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    val rdd = child.executeColumnar()\n    if (numPartitions == 1 && rdd.getNumPartitions < 1) {\n      // Make sure we don't output an RDD with 0 partitions, when claiming that we have a\n      // `SinglePartition`.\n      CometExecUtils.emptyRDDWithPartitions(sparkContext, 1)\n    } else {\n      rdd.coalesce(numPartitions, shuffle = false)\n    }\n  }\n\n  override def outputPartitioning: Partitioning = {\n    if (numPartitions == 1) SinglePartition\n    else UnknownPartitioning(numPartitions)\n  }\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometCoalesceExec =>\n        this.numPartitions == other.numPartitions && this.child == other.child\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(numPartitions: java.lang.Integer, child)\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometCollectLimitExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.serializer.Serializer\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.Attribute\nimport org.apache.spark.sql.catalyst.plans.physical.{Partitioning, SinglePartition}\nimport org.apache.spark.sql.comet.execution.shuffle.{CometShuffledBatchRDD, CometShuffleExchangeExec}\nimport org.apache.spark.sql.execution.{CollectLimitExec, ColumnarToRowExec, SparkPlan, UnaryExecNode, UnsafeRowSerializer}\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics, SQLShuffleReadMetricsReporter, SQLShuffleWriteMetricsReporter}\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport com.google.common.base.Objects\n\nimport org.apache.comet.{CometConf, ConfigEntry}\nimport org.apache.comet.CometSparkSessionExtensions.isCometShuffleEnabled\nimport org.apache.comet.serde.{Compatible, OperatorOuterClass, SupportLevel, Unsupported}\nimport org.apache.comet.serde.operator.CometSink\n\nobject CometCollectLimitExec extends CometSink[CollectLimitExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_COLLECT_LIMIT_ENABLED)\n\n  override def getSupportLevel(op: CollectLimitExec): SupportLevel = {\n    if (!isCometShuffleEnabled(op.conf)) {\n      return Unsupported(Some(\"Comet shuffle is not enabled\"))\n    }\n    Compatible()\n  }\n\n  override def createExec(\n      nativeOp: OperatorOuterClass.Operator,\n      op: CollectLimitExec): CometNativeExec = {\n    CometSinkPlaceHolder(nativeOp, op, CometCollectLimitExec(op, op.limit, op.offset, op.child))\n  }\n}\n\n/**\n * Comet physical plan node for Spark `CollectLimitExec`.\n *\n * Similar to `CometTakeOrderedAndProjectExec`, it contains two native executions seperated by a\n * Comet shuffle.\n */\ncase class CometCollectLimitExec(\n    override val originalPlan: SparkPlan,\n    limit: Int,\n    offset: Int,\n    child: SparkPlan)\n    extends CometExec\n    with UnaryExecNode {\n  override def output: Seq[Attribute] = child.output\n  override def outputPartitioning: Partitioning = SinglePartition\n\n  private lazy val writeMetrics =\n    SQLShuffleWriteMetricsReporter.createShuffleWriteMetrics(sparkContext)\n  private lazy val readMetrics =\n    SQLShuffleReadMetricsReporter.createShuffleReadMetrics(sparkContext)\n  override lazy val metrics: Map[String, SQLMetric] = Map(\n    \"dataSize\" -> SQLMetrics.createSizeMetric(sparkContext, \"data size\"),\n    \"numPartitions\" -> SQLMetrics.createMetric(\n      sparkContext,\n      \"number of partitions\")) ++ readMetrics ++ writeMetrics ++ CometMetricNode.shuffleMetrics(\n    
sparkContext)\n\n  private lazy val serializer: Serializer =\n    new UnsafeRowSerializer(child.output.size, longMetric(\"dataSize\"))\n\n  override def executeCollect(): Array[InternalRow] = {\n    val rows =\n      if (limit >= 0) ColumnarToRowExec(child).executeTake(limit)\n      else ColumnarToRowExec(child).executeCollect()\n    if (offset > 0) rows.drop(offset) else rows\n  }\n\n  protected override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    val childRDD = child.executeColumnar()\n    if (childRDD.getNumPartitions == 0) {\n      CometExecUtils.emptyRDDWithPartitions(sparkContext, 1)\n    } else {\n      val singlePartitionRDD = if (childRDD.getNumPartitions == 1) {\n        childRDD\n      } else {\n        val localLimitedRDD = if (limit >= 0) {\n          CometExecUtils.getNativeLimitRDD(childRDD, output, limit)\n        } else {\n          childRDD\n        }\n        // Shuffle to Single Partition using Comet shuffle\n        val dep = CometShuffleExchangeExec.prepareShuffleDependency(\n          localLimitedRDD,\n          child.output,\n          outputPartitioning,\n          serializer,\n          metrics)\n        metrics(\"numPartitions\").set(dep.partitioner.numPartitions)\n\n        new CometShuffledBatchRDD(dep, readMetrics)\n      }\n      CometExecUtils.getNativeLimitRDD(singlePartitionRDD, output, limit, offset)\n    }\n  }\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def stringArgs: Iterator[Any] = Iterator(limit, offset, child)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometCollectLimitExec =>\n        this.limit == other.limit && this.offset == other.offset &&\n        this.child == other.child\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int =\n    Objects.hashCode(limit: java.lang.Integer, offset: java.lang.Integer, child)\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometColumnarToRowExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport java.util.UUID\nimport java.util.concurrent.{Future, TimeoutException, TimeUnit}\n\nimport scala.concurrent.Promise\nimport scala.jdk.CollectionConverters._\nimport scala.util.control.NonFatal\n\nimport org.apache.spark.{broadcast, SparkException}\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, SortOrder, UnsafeProjection}\nimport org.apache.spark.sql.catalyst.expressions.codegen._\nimport org.apache.spark.sql.catalyst.expressions.codegen.Block._\nimport org.apache.spark.sql.catalyst.plans.physical.Partitioning\nimport org.apache.spark.sql.comet.util.{Utils => CometUtils}\nimport org.apache.spark.sql.errors.QueryExecutionErrors\nimport org.apache.spark.sql.execution.{CodegenSupport, ColumnarToRowTransition, SparkPlan, SQLExecution}\nimport org.apache.spark.sql.execution.adaptive.BroadcastQueryStageExec\nimport org.apache.spark.sql.execution.exchange.ReusedExchangeExec\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}\nimport org.apache.spark.sql.execution.vectorized.{ConstantColumnVector, WritableColumnVector}\nimport org.apache.spark.sql.types._\nimport org.apache.spark.sql.vectorized.{ColumnarBatch, ColumnVector}\nimport org.apache.spark.util.{SparkFatalException, Utils}\nimport org.apache.spark.util.io.ChunkedByteBuffer\n\nimport org.apache.comet.vector.CometPlainVector\n\n/**\n * Copied from Spark `ColumnarToRowExec`. Comet needs the fix for SPARK-50235 but cannot wait for\n * the fix to be released in Spark versions. 
We copy the implementation here to apply the fix.\n */\ncase class CometColumnarToRowExec(child: SparkPlan)\n    extends ColumnarToRowTransition\n    with CometPlan\n    with CodegenSupport {\n  // supportsColumnar requires to be only called on driver side, see also SPARK-37779.\n  assert(Utils.isInRunningSparkTask || child.supportsColumnar)\n\n  override def output: Seq[Attribute] = child.output\n\n  override def outputPartitioning: Partitioning = child.outputPartitioning\n\n  override def outputOrdering: Seq[SortOrder] = child.outputOrdering\n\n  // `ColumnarToRowExec` processes the input RDD directly, which is kind of a leaf node in the\n  // codegen stage and needs to do the limit check.\n  protected override def canCheckLimitNotReached: Boolean = true\n\n  override lazy val metrics: Map[String, SQLMetric] = Map(\n    \"numOutputRows\" -> SQLMetrics.createMetric(sparkContext, \"number of output rows\"),\n    \"numInputBatches\" -> SQLMetrics.createMetric(sparkContext, \"number of input batches\"))\n\n  override def doExecute(): RDD[InternalRow] = {\n    val numOutputRows = longMetric(\"numOutputRows\")\n    val numInputBatches = longMetric(\"numInputBatches\")\n    // This avoids calling `output` in the RDD closure, so that we don't need to include the entire\n    // plan (this) in the closure.\n    val localOutput = this.output\n    child.executeColumnar().mapPartitionsInternal { batches =>\n      val toUnsafe = UnsafeProjection.create(localOutput, localOutput)\n      batches.flatMap { batch =>\n        numInputBatches += 1\n        numOutputRows += batch.numRows()\n        batch.rowIterator().asScala.map(toUnsafe)\n      }\n    }\n  }\n\n  @transient\n  private lazy val promise = Promise[broadcast.Broadcast[Any]]()\n\n  @transient\n  private val timeout: Long = conf.broadcastTimeout\n\n  private val runId: UUID = UUID.randomUUID\n\n  private lazy val cometBroadcastExchange = findCometBroadcastExchange(child)\n\n  @transient\n  lazy val relationFuture: Future[broadcast.Broadcast[Any]] = {\n    SQLExecution.withThreadLocalCaptured[broadcast.Broadcast[Any]](\n      session,\n      CometBroadcastExchangeExec.executionContext) {\n      try {\n        // Setup a job group here so later it may get cancelled by groupId if necessary.\n        sparkContext.setJobGroup(\n          runId.toString,\n          s\"CometColumnarToRow broadcast exchange (runId $runId)\",\n          interruptOnCancel = true)\n\n        val numOutputRows = longMetric(\"numOutputRows\")\n        val numInputBatches = longMetric(\"numInputBatches\")\n        val localOutput = this.output\n        val broadcastColumnar = child.executeBroadcast()\n        val serializedBatches = broadcastColumnar.value.asInstanceOf[Array[ChunkedByteBuffer]]\n        val toUnsafe = UnsafeProjection.create(localOutput, localOutput)\n        val rows = serializedBatches.iterator\n          .flatMap(CometUtils.decodeBatches(_, this.getClass.getSimpleName))\n          .flatMap { batch =>\n            numInputBatches += 1\n            numOutputRows += batch.numRows()\n            batch.rowIterator().asScala.map(toUnsafe)\n          }\n\n        val mode = cometBroadcastExchange.get.mode\n        val relation = mode.transform(rows, Some(numOutputRows.value))\n        val broadcasted = sparkContext.broadcastInternal(relation, serializedOnly = true)\n        val executionId = sparkContext.getLocalProperty(SQLExecution.EXECUTION_ID_KEY)\n        SQLMetrics.postDriverMetricUpdates(sparkContext, executionId, metrics.values.toSeq)\n        
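// as in Spark's BroadcastExchangeExec, record the result on the promise before returning it\n        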
promise.trySuccess(broadcasted)\n        broadcasted\n      } catch {\n        // SPARK-24294: To bypass scala bug: https://github.com/scala/bug/issues/9554, we throw\n        // SparkFatalException, which is a subclass of Exception. ThreadUtils.awaitResult\n        // will catch this exception and re-throw the wrapped fatal throwable.\n        case oe: OutOfMemoryError =>\n          val ex = new SparkFatalException(oe)\n          promise.tryFailure(ex)\n          throw ex\n        case e if !NonFatal(e) =>\n          val ex = new SparkFatalException(e)\n          promise.tryFailure(ex)\n          throw ex\n        case e: Throwable =>\n          promise.tryFailure(e)\n          throw e\n      }\n    }\n  }\n\n  override def doExecuteBroadcast[T](): broadcast.Broadcast[T] = {\n    if (cometBroadcastExchange.isEmpty) {\n      throw new SparkException(\n        \"ColumnarToRowExec only supports doExecuteBroadcast when child contains a \" +\n          \"CometBroadcastExchange, but got \" + child)\n    }\n\n    try {\n      relationFuture.get(timeout, TimeUnit.SECONDS).asInstanceOf[broadcast.Broadcast[T]]\n    } catch {\n      case ex: TimeoutException =>\n        logError(s\"Could not execute broadcast in $timeout secs.\", ex)\n        if (!relationFuture.isDone) {\n          sparkContext.cancelJobGroup(runId.toString)\n          relationFuture.cancel(true)\n        }\n        throw QueryExecutionErrors.executeBroadcastTimeoutError(timeout, Some(ex))\n    }\n  }\n\n  private def findCometBroadcastExchange(op: SparkPlan): Option[CometBroadcastExchangeExec] = {\n    op match {\n      case b: CometBroadcastExchangeExec => Some(b)\n      case b: BroadcastQueryStageExec => findCometBroadcastExchange(b.plan)\n      case b: ReusedExchangeExec => findCometBroadcastExchange(b.child)\n      case _ => op.children.collectFirst(Function.unlift(findCometBroadcastExchange))\n    }\n  }\n\n  /**\n   * Generate [[ColumnVector]] expressions for our parent to consume as rows. This is called once\n   * per [[ColumnVector]] in the batch.\n   */\n  private def genCodeColumnVector(\n      ctx: CodegenContext,\n      columnVar: String,\n      ordinal: String,\n      dataType: DataType,\n      nullable: Boolean): ExprCode = {\n    val javaType = CodeGenerator.javaType(dataType)\n    val value = CodeGenerator.getValueFromVector(columnVar, dataType, ordinal)\n    val isNullVar = if (nullable) {\n      JavaCode.isNullVariable(ctx.freshName(\"isNull\"))\n    } else {\n      FalseLiteral\n    }\n    val valueVar = ctx.freshName(\"value\")\n    val str = s\"columnVector[$columnVar, $ordinal, ${dataType.simpleString}]\"\n    val code = code\"${ctx.registerComment(str)}\" + (if (nullable) {\n                                                      code\"\"\"\n        boolean $isNullVar = $columnVar.isNullAt($ordinal);\n        $javaType $valueVar = $isNullVar ? ${CodeGenerator.defaultValue(dataType)} : ($value);\n      \"\"\"\n                                                    } else {\n                                                      code\"$javaType $valueVar = $value;\"\n                                                    })\n    ExprCode(code, isNullVar, JavaCode.variable(valueVar, dataType))\n  }\n\n  /**\n   * Produce code to process the input iterator as [[ColumnarBatch]]es. 
This produces an\n   * [[org.apache.spark.sql.catalyst.expressions.UnsafeRow]] for each row in each batch.\n   */\n  override protected def doProduce(ctx: CodegenContext): String = {\n    // PhysicalRDD always just has one input\n    val input = ctx.addMutableState(\"scala.collection.Iterator\", \"input\", v => s\"$v = inputs[0];\")\n\n    // metrics\n    val numOutputRows = metricTerm(ctx, \"numOutputRows\")\n    val numInputBatches = metricTerm(ctx, \"numInputBatches\")\n\n    val columnarBatchClz = classOf[ColumnarBatch].getName\n    val batch = ctx.addMutableState(columnarBatchClz, \"batch\")\n\n    val idx = ctx.addMutableState(CodeGenerator.JAVA_INT, \"batchIdx\") // init as batchIdx = 0\n    val columnVectorClzs =\n      child.vectorTypes.getOrElse(Seq.fill(output.indices.size)(classOf[ColumnVector].getName))\n    val (colVars, columnAssigns) = columnVectorClzs.zipWithIndex.map {\n      case (columnVectorClz, i) =>\n        val name = ctx.addMutableState(columnVectorClz, s\"colInstance$i\")\n        (name, s\"$name = ($columnVectorClz) $batch.column($i);\")\n    }.unzip\n\n    val nextBatch = ctx.freshName(\"nextBatch\")\n    val nextBatchFuncName = ctx.addNewFunction(\n      nextBatch,\n      s\"\"\"\n         |private void $nextBatch() throws java.io.IOException {\n         |  if ($input.hasNext()) {\n         |    $batch = ($columnarBatchClz)$input.next();\n         |    $numInputBatches.add(1);\n         |    $numOutputRows.add($batch.numRows());\n         |    $idx = 0;\n         |    ${columnAssigns.mkString(\"\", \"\\n\", \"\\n\")}\n         |  }\n         |}\"\"\".stripMargin)\n\n    ctx.currentVars = null\n    val rowidx = ctx.freshName(\"rowIdx\")\n    val columnsBatchInput = (output zip colVars).map { case (attr, colVar) =>\n      genCodeColumnVector(ctx, colVar, rowidx, attr.dataType, attr.nullable)\n    }\n    val localIdx = ctx.freshName(\"localIdx\")\n    val localEnd = ctx.freshName(\"localEnd\")\n    val numRows = ctx.freshName(\"numRows\")\n    val shouldStop = if (parent.needStopCheck) {\n      s\"if (shouldStop()) { $idx = $rowidx + 1; return; }\"\n    } else {\n      \"// shouldStop check is eliminated\"\n    }\n\n    val writableColumnVectorClz = classOf[WritableColumnVector].getName\n    val constantColumnVectorClz = classOf[ConstantColumnVector].getName\n    val cometPlainColumnVectorClz = classOf[CometPlainVector].getName\n\n    // scalastyle:off line.size.limit\n    s\"\"\"\n       |if ($batch == null) {\n       |  $nextBatchFuncName();\n       |}\n       |while ($limitNotReachedCond $batch != null) {\n       |  int $numRows = $batch.numRows();\n       |  int $localEnd = $numRows - $idx;\n       |  for (int $localIdx = 0; $localIdx < $localEnd; $localIdx++) {\n       |    int $rowidx = $idx + $localIdx;\n       |    ${consume(ctx, columnsBatchInput).trim}\n       |    $shouldStop\n       |  }\n       |  $idx = $numRows;\n       |\n       |  // Comet fix for SPARK-50235\n       |  for (int i = 0; i < ${colVars.length}; i++) {\n       |    if (!($batch.column(i) instanceof $writableColumnVectorClz || $batch.column(i) instanceof $constantColumnVectorClz || $batch.column(i) instanceof $cometPlainColumnVectorClz)) {\n       |      $batch.column(i).close();\n       |    } else if ($batch.column(i) instanceof $cometPlainColumnVectorClz) {\n       |      $cometPlainColumnVectorClz cometPlainColumnVector = ($cometPlainColumnVectorClz) $batch.column(i);\n       |      if (!cometPlainColumnVector.isReused()) {\n       |        cometPlainColumnVector.close();\n       | 
     }\n       |    }\n       |  }\n       |\n       |  $batch = null;\n       |  $nextBatchFuncName();\n       |}\n       |// Comet fix for SPARK-50235: clean up resources\n       |if ($batch != null) {\n       |  $batch.close();\n       |}\n     \"\"\".stripMargin\n    // scalastyle:on line.size.limit\n  }\n\n  override def inputRDDs(): Seq[RDD[InternalRow]] = {\n    Seq(child.executeColumnar().asInstanceOf[RDD[InternalRow]]) // Hack because of type erasure\n  }\n\n  override protected def withNewChildInternal(newChild: SparkPlan): CometColumnarToRowExec =\n    copy(child = newChild)\n}\n"
  },
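The `doProduce` codegen in the file above is easiest to audit as plain Scala. Below is a minimal conceptual sketch (not the emitted Java itself) of the loop it generates, including the SPARK-50235 fix that releases per-batch column vectors. The `CometPlainVector` import path is an assumption based on this file's context, and `consumeRow` stands in for the consumed downstream row pipeline.

```scala
import org.apache.spark.sql.execution.vectorized.{ConstantColumnVector, WritableColumnVector}
import org.apache.spark.sql.vectorized.ColumnarBatch

import org.apache.comet.vector.CometPlainVector // assumed package

// Conceptual equivalent of the generated loop: drain each batch row by row,
// then close per-batch vectors that are not reused across batches.
def consumeBatches(input: Iterator[ColumnarBatch])(
    consumeRow: (ColumnarBatch, Int) => Unit): Unit = {
  while (input.hasNext) {
    val batch = input.next()
    var rowIdx = 0
    while (rowIdx < batch.numRows()) {
      consumeRow(batch, rowIdx)
      rowIdx += 1
    }
    // Comet fix for SPARK-50235: writable/constant vectors are reused by the
    // producer, and a reused CometPlainVector must stay open; everything else
    // owns native buffers that should be freed before fetching the next batch.
    (0 until batch.numCols()).foreach { i =>
      batch.column(i) match {
        case v: CometPlainVector => if (!v.isReused) v.close()
        case _: WritableColumnVector | _: ConstantColumnVector => // reused
        case other => other.close()
      }
    }
  }
}
```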
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometCsvNativeScanExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.sql.catalyst.csv.CSVOptions\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, SortOrder}\nimport org.apache.spark.sql.catalyst.plans.physical.{Partitioning, UnknownPartitioning}\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.execution.datasources.FilePartition\nimport org.apache.spark.sql.execution.datasources.v2.BatchScanExec\nimport org.apache.spark.sql.execution.datasources.v2.csv.CSVScan\n\nimport com.google.common.base.Objects\n\nimport org.apache.comet.{CometConf, ConfigEntry}\nimport org.apache.comet.objectstore.NativeConfig\nimport org.apache.comet.serde.{CometOperatorSerde, OperatorOuterClass}\nimport org.apache.comet.serde.OperatorOuterClass.Operator\nimport org.apache.comet.serde.operator.{partition2Proto, schema2Proto}\n\n/*\n * Native CSV scan operator that delegates file reading to datafusion.\n */\ncase class CometCsvNativeScanExec(\n    override val nativeOp: Operator,\n    override val output: Seq[Attribute],\n    @transient override val originalPlan: BatchScanExec,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometLeafExec {\n  override val supportsColumnar: Boolean = true\n\n  override val nodeName: String = \"CometCsvNativeScan\"\n\n  override def outputPartitioning: Partitioning = UnknownPartitioning(\n    originalPlan.inputPartitions.length)\n\n  override def outputOrdering: Seq[SortOrder] = Nil\n\n  override protected def doCanonicalize(): SparkPlan = {\n    CometCsvNativeScanExec(nativeOp, output, originalPlan, serializedPlanOpt)\n  }\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometCsvNativeScanExec =>\n        this.output == other.output &&\n        this.serializedPlanOpt == other.serializedPlanOpt &&\n        this.originalPlan == other.originalPlan\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = {\n    Objects.hashCode(output, serializedPlanOpt, originalPlan)\n  }\n}\n\nobject CometCsvNativeScanExec extends CometOperatorSerde[CometBatchScanExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_CSV_V2_NATIVE_ENABLED)\n\n  override def convert(\n      op: CometBatchScanExec,\n      builder: Operator.Builder,\n      childOp: Operator*): Option[Operator] = {\n    val csvScanBuilder = OperatorOuterClass.CsvScan.newBuilder()\n    val csvScan = op.wrapped.scan.asInstanceOf[CSVScan]\n    val sessionState = op.session.sessionState\n    val options = {\n      val columnPruning = sessionState.conf.csvColumnPruning\n      val timeZone = 
sessionState.conf.sessionLocalTimeZone\n      new CSVOptions(csvScan.options.asScala.toMap, columnPruning, timeZone)\n    }\n    val filePartitions = op.inputPartitions.map(_.asInstanceOf[FilePartition])\n    val csvOptionsProto = csvOptions2Proto(options)\n    val dataSchemaProto = schema2Proto(csvScan.dataSchema.fields)\n    val readSchemaFieldNames = csvScan.readDataSchema.fieldNames\n    val projectionVector = csvScan.dataSchema.fields.zipWithIndex\n      .filter { case (field, _) =>\n        readSchemaFieldNames.contains(field.name)\n      }\n      .map(_._2.asInstanceOf[Integer])\n    val partitionSchemaProto = schema2Proto(csvScan.readPartitionSchema.fields)\n    val partitionsProto = filePartitions.map(partition2Proto(_, csvScan.readPartitionSchema))\n\n    val objectStoreOptions = filePartitions.headOption\n      .flatMap { partitionFile =>\n        val hadoopConf = sessionState\n          .newHadoopConfWithOptions(op.session.sparkContext.conf.getAll.toMap)\n        partitionFile.files.headOption\n          .map(file => NativeConfig.extractObjectStoreOptions(hadoopConf, file.pathUri))\n      }\n      .getOrElse(Map.empty)\n\n    csvScanBuilder.putAllObjectStoreOptions(objectStoreOptions.asJava)\n    csvScanBuilder.setCsvOptions(csvOptionsProto)\n    csvScanBuilder.addAllFilePartitions(partitionsProto.asJava)\n    csvScanBuilder.addAllDataSchema(dataSchemaProto.toIterable.asJava)\n    csvScanBuilder.addAllProjectionVector(projectionVector.toIterable.asJava)\n    csvScanBuilder.addAllPartitionSchema(partitionSchemaProto.toIterable.asJava)\n    Some(builder.setCsvScan(csvScanBuilder).build())\n  }\n\n  override def createExec(nativeOp: Operator, op: CometBatchScanExec): CometNativeExec = {\n    CometCsvNativeScanExec(nativeOp, op.output, op.wrapped, SerializedPlan(None))\n  }\n\n  private def csvOptions2Proto(options: CSVOptions): OperatorOuterClass.CsvOptions = {\n    val csvOptionsBuilder = OperatorOuterClass.CsvOptions.newBuilder()\n    csvOptionsBuilder.setDelimiter(options.delimiter)\n    csvOptionsBuilder.setHasHeader(options.headerFlag)\n    csvOptionsBuilder.setQuote(options.quote.toString)\n    csvOptionsBuilder.setEscape(options.escape.toString)\n    csvOptionsBuilder.setTerminator(options.lineSeparator.getOrElse(\"\\n\"))\n    csvOptionsBuilder.setTruncatedRows(options.multiLine)\n    if (options.isCommentSet) {\n      csvOptionsBuilder.setComment(options.comment.toString)\n    }\n    csvOptionsBuilder.build()\n  }\n}\n"
  },
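The projection vector built in `convert` above maps read-schema columns back to their ordinals in the full CSV data schema. A self-contained sketch of the same computation, using a hypothetical three-column schema:

```scala
import org.apache.spark.sql.types._

// Hypothetical schema with columns a, b, c, queried as SELECT a, c: the
// projection vector tells the native reader to emit only ordinals 0 and 2.
val dataSchema = StructType(Seq(
  StructField("a", IntegerType),
  StructField("b", StringType),
  StructField("c", DoubleType)))
val readSchemaFieldNames = Array("a", "c")

val projectionVector = dataSchema.fields.zipWithIndex
  .filter { case (field, _) => readSchemaFieldNames.contains(field.name) }
  .map(_._2)

assert(projectionVector.sameElements(Array(0, 2)))
```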
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometExecRDD.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport org.apache.spark._\nimport org.apache.spark.broadcast.Broadcast\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffledBatchRDD\nimport org.apache.spark.sql.execution.ScalarSubquery\nimport org.apache.spark.sql.vectorized.ColumnarBatch\nimport org.apache.spark.util.SerializableConfiguration\n\nimport org.apache.comet.CometExecIterator\nimport org.apache.comet.serde.OperatorOuterClass\n\n/**\n * Partition that carries per-partition planning data, avoiding closure capture of all partitions.\n */\nprivate[spark] class CometExecPartition(\n    override val index: Int,\n    val inputPartitions: Array[Partition],\n    val planDataByKey: Map[String, Array[Byte]])\n    extends Partition\n\n/**\n * Unified RDD for Comet native execution.\n *\n * Solves the closure capture problem: instead of capturing all partitions' data in the closure\n * (which gets serialized to every task), each Partition object carries only its own data.\n *\n * Handles three cases:\n *   - With inputs + per-partition data: injects planning data into operator tree\n *   - With inputs + no per-partition data: just zips inputs (no injection overhead)\n *   - No inputs: uses numPartitions to create partitions\n *\n * NOTE: This RDD does not handle DPP (InSubqueryExec), which is resolved in\n * CometIcebergNativeScanExec.serializedPartitionData before this RDD is created. 
It does, however, handle\n * ScalarSubquery expressions by registering them with CometScalarSubquery before execution.\n */\nprivate[spark] class CometExecRDD(\n    sc: SparkContext,\n    var inputRDDs: Seq[RDD[ColumnarBatch]],\n    commonByKey: Map[String, Array[Byte]],\n    @transient perPartitionByKey: Map[String, Array[Array[Byte]]],\n    serializedPlan: Array[Byte],\n    defaultNumPartitions: Int,\n    numOutputCols: Int,\n    nativeMetrics: CometMetricNode,\n    subqueries: Seq[ScalarSubquery],\n    broadcastedHadoopConfForEncryption: Option[Broadcast[SerializableConfiguration]] = None,\n    encryptedFilePaths: Seq[String] = Seq.empty,\n    shuffleScanIndices: Set[Int] = Set.empty)\n    extends RDD[ColumnarBatch](sc, inputRDDs.map(rdd => new OneToOneDependency(rdd))) {\n\n  // Determine partition count: from inputs if available, otherwise from parameter\n  private val numPartitions: Int = if (inputRDDs.nonEmpty) {\n    inputRDDs.head.partitions.length\n  } else if (perPartitionByKey.nonEmpty) {\n    perPartitionByKey.values.head.length\n  } else {\n    defaultNumPartitions\n  }\n\n  // Validate all per-partition arrays have the same length to prevent\n  // ArrayIndexOutOfBoundsException in getPartitions (e.g., from broadcast scans with\n  // different partition counts after DPP filtering)\n  require(\n    perPartitionByKey.values.forall(_.length == numPartitions),\n    s\"All per-partition arrays must have length $numPartitions, but found: \" +\n      perPartitionByKey.map { case (key, arr) => s\"$key -> ${arr.length}\" }.mkString(\", \"))\n\n  override protected def getPartitions: Array[Partition] = {\n    (0 until numPartitions).map { idx =>\n      val inputParts = inputRDDs.map(_.partitions(idx)).toArray\n      val planData = perPartitionByKey.map { case (key, arr) => key -> arr(idx) }\n      new CometExecPartition(idx, inputParts, planData)\n    }.toArray\n  }\n\n  override def compute(split: Partition, context: TaskContext): Iterator[ColumnarBatch] = {\n    val partition = split.asInstanceOf[CometExecPartition]\n\n    val inputs = inputRDDs.zip(partition.inputPartitions).map { case (rdd, part) =>\n      rdd.iterator(part, context)\n    }\n\n    // Only inject if there is planning data to inject\n    val actualPlan = if (commonByKey.nonEmpty) {\n      val basePlan = OperatorOuterClass.Operator.parseFrom(serializedPlan)\n      val injected =\n        PlanDataInjector.injectPlanData(basePlan, commonByKey, partition.planDataByKey)\n      PlanDataInjector.serializeOperator(injected)\n    } else {\n      serializedPlan\n    }\n\n    // Create shuffle block iterators for inputs that are CometShuffledBatchRDD\n    val shuffleBlockIters = shuffleScanIndices.flatMap { idx =>\n      inputRDDs(idx) match {\n        case rdd: CometShuffledBatchRDD =>\n          Some(idx -> rdd.computeAsShuffleBlockIterator(partition.inputPartitions(idx), context))\n        case _ => None\n      }\n    }.toMap\n\n    val it = new CometExecIterator(\n      CometExec.newIterId,\n      inputs,\n      numOutputCols,\n      actualPlan,\n      nativeMetrics,\n      numPartitions,\n      partition.index,\n      broadcastedHadoopConfForEncryption,\n      encryptedFilePaths,\n      shuffleBlockIters)\n\n    // Register ScalarSubqueries so native code can look them up\n    subqueries.foreach(sub => CometScalarSubquery.setSubquery(it.id, sub))\n\n    Option(context).foreach { ctx =>\n      ctx.addTaskCompletionListener[Unit] { _ =>\n        subqueries.foreach(sub => CometScalarSubquery.removeSubquery(it.id, sub))\n      }\n   
 }\n\n    it\n  }\n\n  // Duplicates logic from Spark's ZippedPartitionsBaseRDD.getPreferredLocations\n  override def getPreferredLocations(split: Partition): Seq[String] = {\n    if (inputRDDs == null || inputRDDs.isEmpty) return Nil\n\n    val idx = split.index\n    val prefs = inputRDDs.map(rdd => rdd.preferredLocations(rdd.partitions(idx)))\n    // Prefer nodes where all inputs are local; fall back to any input's preferred location\n    val intersection = prefs.reduce((a, b) => a.intersect(b))\n    if (intersection.nonEmpty) intersection else prefs.flatten.distinct\n  }\n\n  override def clearDependencies(): Unit = {\n    super.clearDependencies()\n    inputRDDs = null\n  }\n}\n\nobject CometExecRDD {\n\n  /**\n   * Creates an RDD for native execution with optional per-partition planning data.\n   */\n  // scalastyle:off\n  def apply(\n      sc: SparkContext,\n      inputRDDs: Seq[RDD[ColumnarBatch]],\n      commonByKey: Map[String, Array[Byte]],\n      perPartitionByKey: Map[String, Array[Array[Byte]]],\n      serializedPlan: Array[Byte],\n      numPartitions: Int,\n      numOutputCols: Int,\n      nativeMetrics: CometMetricNode,\n      subqueries: Seq[ScalarSubquery],\n      broadcastedHadoopConfForEncryption: Option[Broadcast[SerializableConfiguration]] = None,\n      encryptedFilePaths: Seq[String] = Seq.empty,\n      shuffleScanIndices: Set[Int] = Set.empty): CometExecRDD = {\n    // scalastyle:on\n\n    new CometExecRDD(\n      sc,\n      inputRDDs,\n      commonByKey,\n      perPartitionByKey,\n      serializedPlan,\n      numPartitions,\n      numOutputCols,\n      nativeMetrics,\n      subqueries,\n      broadcastedHadoopConfForEncryption,\n      encryptedFilePaths,\n      shuffleScanIndices)\n  }\n}\n"
  },
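The closure-capture fix described in the `CometExecRDD` scaladoc is a general RDD pattern. A minimal standalone sketch (the `SliceRDD` type is hypothetical, not part of Comet) showing per-partition data carried on the `Partition` object instead of in the task closure:

```scala
import org.apache.spark.{Partition, SparkContext, TaskContext}
import org.apache.spark.rdd.RDD

// Each partition carries only its own bytes, so a task deserializes one slice
// rather than the whole Array[Array[Byte]] a closure would capture.
private class SlicePartition(override val index: Int, val planData: Array[Byte])
    extends Partition

private class SliceRDD(sc: SparkContext, @transient slices: Array[Array[Byte]])
    extends RDD[Int](sc, Nil) {

  // Runs on the driver, where `slices` is available (hence @transient is safe).
  override protected def getPartitions: Array[Partition] =
    Array.tabulate[Partition](slices.length)(i => new SlicePartition(i, slices(i)))

  override def compute(split: Partition, context: TaskContext): Iterator[Int] = {
    val p = split.asInstanceOf[SlicePartition]
    Iterator(p.planData.length) // the task sees only its own slice
  }
}
```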
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometExecUtils.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.jdk.CollectionConverters._\nimport scala.reflect.ClassTag\n\nimport org.apache.spark.{Partition, SparkContext, TaskContext}\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, NamedExpression, SortOrder}\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport org.apache.comet.serde.OperatorOuterClass\nimport org.apache.comet.serde.OperatorOuterClass.Operator\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProto, serializeDataType}\n\nobject CometExecUtils {\n\n  /**\n   * Create an empty RDD with the given number of partitions.\n   */\n  def emptyRDDWithPartitions[T: ClassTag](\n      sparkContext: SparkContext,\n      numPartitions: Int): RDD[T] = {\n    new EmptyRDDWithPartitions(sparkContext, numPartitions)\n  }\n\n  /**\n   * Transform the given RDD into a new RDD that takes the first `limit` elements of each\n   * partition. 
The limit operation is performed on the native side.\n   */\n  def getNativeLimitRDD(\n      childPlan: RDD[ColumnarBatch],\n      outputAttribute: Seq[Attribute],\n      limit: Int,\n      offset: Int = 0): RDD[ColumnarBatch] = {\n    val numParts = childPlan.getNumPartitions\n    // Serialize the plan once before mapping to avoid repeated serialization per partition\n    val limitOp = CometExecUtils.getLimitNativePlan(outputAttribute, limit, offset).get\n    val serializedPlan = CometExec.serializeNativePlan(limitOp)\n    childPlan.mapPartitionsWithIndexInternal { case (idx, iter) =>\n      CometExec.getCometIterator(Seq(iter), outputAttribute.length, serializedPlan, numParts, idx)\n    }\n  }\n\n  /**\n   * Prepare Projection + TopK native plan for CometTakeOrderedAndProjectExec.\n   */\n  def getProjectionNativePlan(\n      projectList: Seq[NamedExpression],\n      outputAttributes: Seq[Attribute],\n      sortOrder: Seq[SortOrder],\n      child: SparkPlan,\n      limit: Int,\n      offset: Int = 0): Option[Operator] = {\n    getTopKNativePlan(outputAttributes, sortOrder, child, limit, offset).flatMap { topK =>\n      val exprs = projectList.map(exprToProto(_, child.output))\n\n      if (exprs.forall(_.isDefined)) {\n        val projectBuilder = OperatorOuterClass.Projection.newBuilder()\n        projectBuilder.addAllProjectList(exprs.map(_.get).asJava)\n        val opBuilder = OperatorOuterClass.Operator\n          .newBuilder()\n          .addChildren(topK)\n        Some(opBuilder.setProjection(projectBuilder).build())\n      } else {\n        None\n      }\n    }\n  }\n\n  /**\n   * Prepare Limit native plan for Comet operators which take the first `limit` elements of each\n   * child partition\n   */\n  def getLimitNativePlan(\n      outputAttributes: Seq[Attribute],\n      limit: Int,\n      offset: Int = 0): Option[Operator] = {\n    val scanBuilder = OperatorOuterClass.Scan.newBuilder().setSource(\"LimitInput\")\n    val scanOpBuilder = OperatorOuterClass.Operator.newBuilder()\n\n    val scanTypes = outputAttributes.flatten { attr =>\n      serializeDataType(attr.dataType)\n    }\n\n    if (scanTypes.length == outputAttributes.length) {\n      scanBuilder.addAllFields(scanTypes.asJava)\n\n      val limitBuilder = OperatorOuterClass.Limit.newBuilder()\n      limitBuilder.setLimit(limit)\n      limitBuilder.setOffset(offset)\n\n      val limitOpBuilder = OperatorOuterClass.Operator\n        .newBuilder()\n        .addChildren(scanOpBuilder.setScan(scanBuilder))\n      Some(limitOpBuilder.setLimit(limitBuilder).build())\n    } else {\n      None\n    }\n  }\n\n  /**\n   * Prepare TopK native plan for CometTakeOrderedAndProjectExec.\n   */\n  def getTopKNativePlan(\n      outputAttributes: Seq[Attribute],\n      sortOrder: Seq[SortOrder],\n      child: SparkPlan,\n      limit: Int,\n      offset: Int = 0): Option[Operator] = {\n    val scanBuilder = OperatorOuterClass.Scan.newBuilder().setSource(\"TopKInput\")\n    val scanOpBuilder = OperatorOuterClass.Operator.newBuilder()\n\n    val scanTypes = outputAttributes.flatten { attr =>\n      serializeDataType(attr.dataType)\n    }\n\n    if (scanTypes.length == outputAttributes.length) {\n      scanBuilder.addAllFields(scanTypes.asJava)\n\n      val sortOrders = sortOrder.map(exprToProto(_, child.output))\n\n      if (sortOrders.forall(_.isDefined)) {\n        val sortBuilder = OperatorOuterClass.Sort.newBuilder()\n        sortBuilder.addAllSortOrders(sortOrders.map(_.get).asJava)\n        sortBuilder.setFetch(limit)\n        
sortBuilder.setSkip(offset)\n\n        val sortOpBuilder = OperatorOuterClass.Operator\n          .newBuilder()\n          .addChildren(scanOpBuilder.setScan(scanBuilder))\n        Some(sortOpBuilder.setSort(sortBuilder).build())\n      } else {\n        None\n      }\n    } else {\n      None\n    }\n  }\n}\n\n/** A simple RDD with no data, but with the given number of partitions. */\nprivate class EmptyRDDWithPartitions[T: ClassTag](\n    @transient private val sc: SparkContext,\n    numPartitions: Int)\n    extends RDD[T](sc, Nil) {\n\n  override def getPartitions: Array[Partition] =\n    Array.tabulate(numPartitions)(i => EmptyPartition(i))\n\n  override def compute(split: Partition, context: TaskContext): Iterator[T] = {\n    Iterator.empty\n  }\n}\n\nprivate case class EmptyPartition(index: Int) extends Partition\n"
  },
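As a usage sketch of the Option-based fallback contract above: each helper in `CometExecUtils` returns `Some` only when every attribute and expression serializes to protobuf, so callers can fall back to Spark cleanly. Hypothetical caller code:

```scala
import org.apache.spark.sql.catalyst.expressions.AttributeReference
import org.apache.spark.sql.comet.CometExecUtils
import org.apache.spark.sql.types.IntegerType

// Build a per-partition "LIMIT 10 OFFSET 2" native plan for a single INT column.
val output = Seq(AttributeReference("a", IntegerType)())

CometExecUtils.getLimitNativePlan(output, limit = 10, offset = 2) match {
  case Some(plan) =>
    // serialize and run natively, e.g. via CometExec.serializeNativePlan(plan)
    ()
  case None =>
    // a data type failed to serialize: keep Spark's JVM implementation
    ()
}
```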
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometIcebergNativeScanExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, Cast, DynamicPruningExpression, SortOrder}\nimport org.apache.spark.sql.catalyst.plans.QueryPlan\nimport org.apache.spark.sql.catalyst.plans.physical.{Partitioning, UnknownPartitioning}\nimport org.apache.spark.sql.execution.{InSubqueryExec, SubqueryAdaptiveBroadcastExec}\nimport org.apache.spark.sql.execution.datasources.v2.BatchScanExec\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}\nimport org.apache.spark.sql.vectorized.ColumnarBatch\nimport org.apache.spark.util.AccumulatorV2\n\nimport com.google.common.base.Objects\n\nimport org.apache.comet.iceberg.CometIcebergNativeScanMetadata\nimport org.apache.comet.serde.OperatorOuterClass.Operator\nimport org.apache.comet.serde.operator.CometIcebergNativeScan\nimport org.apache.comet.shims.ShimSubqueryBroadcast\n\n/**\n * Native Iceberg scan operator that delegates file reading to iceberg-rust.\n *\n * Replaces Spark's Iceberg BatchScanExec to bypass the DataSource V2 API and enable native\n * execution. Iceberg's catalog and planning run in Spark to produce FileScanTasks, which are\n * serialized to protobuf for the native side to execute using iceberg-rust's FileIO and\n * ArrowReader. This provides better performance than reading through Spark's abstraction layers.\n *\n * Supports Dynamic Partition Pruning (DPP) by deferring partition serialization to execution\n * time. The doPrepare() method waits for DPP subqueries to resolve, then lazy\n * serializedPartitionData serializes the DPP-filtered partitions from inputRDD.\n */\ncase class CometIcebergNativeScanExec(\n    override val nativeOp: Operator,\n    override val output: Seq[Attribute],\n    @transient override val originalPlan: BatchScanExec,\n    override val serializedPlanOpt: SerializedPlan,\n    metadataLocation: String,\n    @transient nativeIcebergScanMetadata: CometIcebergNativeScanMetadata)\n    extends CometLeafExec\n    with ShimSubqueryBroadcast {\n\n  override val supportsColumnar: Boolean = true\n\n  override val nodeName: String = \"CometIcebergNativeScan\"\n\n  /**\n   * Prepare DPP subquery plans. Called by Spark's prepare() before doExecuteColumnar().\n   *\n   * This follows Spark's convention of preparing subqueries in doPrepare() rather than\n   * doExecuteColumnar(). 
While the actual waiting for DPP results happens later in\n   * serializedPartitionData, calling prepare() here ensures subquery plans are set up before\n   * execution begins.\n   */\n  override protected def doPrepare(): Unit = {\n    originalPlan.runtimeFilters.foreach {\n      case DynamicPruningExpression(e: InSubqueryExec) =>\n        e.plan.prepare()\n      case _ =>\n    }\n    super.doPrepare()\n  }\n\n  /**\n   * Lazy partition serialization - deferred until execution time for DPP support.\n   *\n   * Entry points: This lazy val may be triggered from either doExecuteColumnar() (via\n   * commonData/perPartitionData) or capturedMetricValues (for Iceberg metrics). Lazy val\n   * semantics ensure single evaluation regardless of entry point.\n   *\n   * DPP (Dynamic Partition Pruning) Flow:\n   *\n   * {{{\n   * Planning time:\n   *   CometIcebergNativeScanExec created\n   *     - serializedPartitionData not evaluated (lazy)\n   *     - No partition serialization yet\n   *\n   * Execution time:\n   *   1. Spark calls prepare() on the plan tree\n   *        - doPrepare() calls e.plan.prepare() for each DPP filter\n   *        - Subquery plans are set up (but not yet executed)\n   *\n   *   2. Spark calls doExecuteColumnar() (or metrics are accessed)\n   *        - Accesses perPartitionData (or capturedMetricValues)\n   *        - Forces serializedPartitionData evaluation (here)\n   *        - Waits for DPP values (updateResult or reflection)\n   *        - Calls serializePartitions with DPP-filtered inputRDD\n   *        - Only matching partitions are serialized\n   * }}}\n   */\n  @transient private lazy val serializedPartitionData: (Array[Byte], Array[Array[Byte]]) = {\n    // Ensure DPP subqueries are resolved before accessing inputRDD.\n    originalPlan.runtimeFilters.foreach {\n      case DynamicPruningExpression(e: InSubqueryExec) if e.values().isEmpty =>\n        e.plan match {\n          case sab: SubqueryAdaptiveBroadcastExec =>\n            // SubqueryAdaptiveBroadcastExec.executeCollect() throws, so we call\n            // child.executeCollect() directly. We use the index from SAB to find the\n            // right buildKey, then locate that key's column in child.output.\n            val rows = sab.child.executeCollect()\n            val indices = getSubqueryBroadcastIndices(sab)\n\n            // SPARK-46946 changed index: Int to indices: Seq[Int] as a preparatory refactor\n            // for future features (Null Safe Equality DPP, multiple equality predicates).\n            // Currently indices always has one element. CometScanRule checks for multi-index\n            // DPP and falls back, so this assertion should never fail.\n            assert(\n              indices.length == 1,\n              s\"Multi-index DPP not supported: indices=$indices. 
See SPARK-46946.\")\n            val buildKeyIndex = indices.head\n            val buildKey = sab.buildKeys(buildKeyIndex)\n\n            // Find column index in child.output by matching buildKey's exprId\n            val colIndex = buildKey match {\n              case attr: Attribute =>\n                sab.child.output.indexWhere(_.exprId == attr.exprId)\n              // DPP may cast partition column to match join key type\n              case Cast(attr: Attribute, _, _, _) =>\n                sab.child.output.indexWhere(_.exprId == attr.exprId)\n              case _ => buildKeyIndex\n            }\n            if (colIndex < 0) {\n              throw new IllegalStateException(\n                s\"DPP build key '$buildKey' not found in ${sab.child.output.map(_.name)}\")\n            }\n\n            setInSubqueryResult(e, rows.map(_.get(colIndex, e.child.dataType)))\n          case _ =>\n            e.updateResult()\n        }\n      case _ =>\n    }\n\n    CometIcebergNativeScan.serializePartitions(originalPlan, output, nativeIcebergScanMetadata)\n  }\n\n  /**\n   * Sets InSubqueryExec's private result field via reflection.\n   *\n   * Reflection is required because:\n   *   - SubqueryAdaptiveBroadcastExec.executeCollect() throws UnsupportedOperationException\n   *   - InSubqueryExec has no public setter for result, only updateResult() which calls\n   *     executeCollect()\n   *   - We can't replace e.plan since it's a val\n   */\n  private def setInSubqueryResult(e: InSubqueryExec, result: Array[_]): Unit = {\n    val fields = e.getClass.getDeclaredFields\n    // Field name is mangled by Scala compiler, e.g. \"org$apache$...$InSubqueryExec$$result\"\n    val resultField = fields\n      .find(f => f.getName.endsWith(\"$result\") && !f.getName.contains(\"Broadcast\"))\n      .getOrElse {\n        throw new IllegalStateException(\n          s\"Cannot find 'result' field in ${e.getClass.getName}. \" +\n            \"Spark version may be incompatible with Comet's DPP implementation.\")\n      }\n    resultField.setAccessible(true)\n    resultField.set(e, result)\n  }\n\n  def commonData: Array[Byte] = serializedPartitionData._1\n  def perPartitionData: Array[Array[Byte]] = serializedPartitionData._2\n\n  // numPartitions for execution - derived from actual DPP-filtered partitions\n  // Only accessed during execution, not planning\n  def numPartitions: Int = perPartitionData.length\n\n  override lazy val outputPartitioning: Partitioning = UnknownPartitioning(numPartitions)\n\n  override lazy val outputOrdering: Seq[SortOrder] = Nil\n\n  // Capture metric VALUES and TYPES (not objects!) in a serializable case class\n  // This survives serialization while SQLMetric objects get reset to 0\n  private case class MetricValue(name: String, value: Long, metricType: String)\n\n  /**\n   * Maps Iceberg V2 custom metric types to standard Spark metric types for better UI formatting.\n   *\n   * Iceberg uses V2 custom metrics which don't get formatted in Spark UI (they just show raw\n   * numbers). 
By mapping to standard Spark types, we get proper formatting:\n   *   - \"size\" metrics: formatted as KB/MB/GB (e.g., \"10.3 GB\" instead of \"11040868925\")\n   *   - \"timing\" metrics: formatted as ms/s (e.g., \"200 ms\" instead of \"200\")\n   *   - \"sum\" metrics: plain numbers with commas (e.g., \"1,000\")\n   *\n   * This provides better UX than vanilla Iceberg Java which shows raw numbers.\n   */\n  private def mapMetricType(name: String, originalType: String): String = {\n    import java.util.Locale\n\n    // Only remap V2 custom metrics; leave standard Spark metrics unchanged\n    if (!originalType.startsWith(\"v2Custom_\")) {\n      return originalType\n    }\n\n    // Map based on metric name patterns from Iceberg\n    val nameLower = name.toLowerCase(Locale.ROOT)\n    if (nameLower.contains(\"size\")) {\n      \"size\" // Will format as KB/MB/GB\n    } else if (nameLower.contains(\"duration\")) {\n      \"timing\" // Will format as ms/s (Iceberg durations are in milliseconds)\n    } else {\n      \"sum\" // Plain number formatting\n    }\n  }\n\n  /**\n   * Captures Iceberg planning metrics for display in Spark UI.\n   *\n   * This lazy val intentionally triggers serializedPartitionData evaluation because Iceberg\n   * populates metrics during planning (when inputRDD is accessed). Both this and\n   * doExecuteColumnar() may trigger serializedPartitionData, but lazy val semantics ensure it's\n   * evaluated only once.\n   */\n  @transient private lazy val capturedMetricValues: Seq[MetricValue] = {\n    // Guard against null originalPlan (from doCanonicalize)\n    if (originalPlan == null) {\n      Seq.empty\n    } else {\n      // Trigger serializedPartitionData to ensure Iceberg planning has run and\n      // metrics are populated\n      val _ = serializedPartitionData\n\n      originalPlan.metrics\n        .filterNot { case (name, _) =>\n          // Filter out metrics that are now runtime metrics incremented on the native side\n          name == \"numOutputRows\" || name == \"numDeletes\" || name == \"numSplits\"\n        }\n        .map { case (name, metric) =>\n          val mappedType = mapMetricType(name, metric.metricType)\n          MetricValue(name, metric.value, mappedType)\n        }\n        .toSeq\n    }\n  }\n\n  /**\n   * Immutable SQLMetric for planning metrics that don't change during execution.\n   *\n   * Regular SQLMetric extends AccumulatorV2, which means when execution completes, accumulator\n   * updates from executors (which are 0 since they don't update planning metrics) get merged back\n   * to the driver, overwriting the driver's values with 0.\n   *\n   * This class overrides the accumulator methods to make the metric truly immutable once set.\n   */\n  private class ImmutableSQLMetric(metricType: String) extends SQLMetric(metricType, 0) {\n\n    override def merge(other: AccumulatorV2[Long, Long]): Unit = {}\n\n    override def reset(): Unit = {}\n  }\n\n  override lazy val metrics: Map[String, SQLMetric] = {\n    val baseMetrics = Map(\n      \"output_rows\" -> SQLMetrics.createMetric(sparkContext, \"number of output rows\"))\n\n    // Create IMMUTABLE metrics with captured values AND types\n    // these won't be affected by accumulator merges\n    val icebergMetrics = capturedMetricValues.map { mv =>\n      // Create the immutable metric with initValue = 0 (Spark 4 requires initValue <= 0)\n      val metric = new ImmutableSQLMetric(mv.metricType)\n      // Set the actual value after creation\n      metric.set(mv.value)\n      // Register it 
with SparkContext to assign metadata (name, etc.)\n      sparkContext.register(metric, mv.name)\n      mv.name -> metric\n    }.toMap\n\n    // Add num_splits as a runtime metric (incremented on the native side during execution)\n    val numSplitsMetric = SQLMetrics.createMetric(sparkContext, \"number of file splits processed\")\n\n    baseMetrics ++ icebergMetrics + (\"num_splits\" -> numSplitsMetric)\n  }\n\n  /** Executes using CometExecRDD - planning data is computed lazily on first access. */\n  override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    val nativeMetrics = CometMetricNode.fromCometPlan(this)\n    val serializedPlan = CometExec.serializeNativePlan(nativeOp)\n    CometExecRDD(\n      sparkContext,\n      inputRDDs = Seq.empty,\n      commonByKey = Map(metadataLocation -> commonData),\n      perPartitionByKey = Map(metadataLocation -> perPartitionData),\n      serializedPlan = serializedPlan,\n      numPartitions = perPartitionData.length,\n      numOutputCols = output.length,\n      nativeMetrics = nativeMetrics,\n      subqueries = Seq.empty)\n  }\n\n  /**\n   * Override convertBlock to preserve @transient fields. The parent implementation uses\n   * makeCopy() which loses transient fields.\n   */\n  override def convertBlock(): CometIcebergNativeScanExec = {\n    // Serialize the native plan if not already done\n    val newSerializedPlan = if (serializedPlanOpt.isEmpty) {\n      val bytes = CometExec.serializeNativePlan(nativeOp)\n      SerializedPlan(Some(bytes))\n    } else {\n      serializedPlanOpt\n    }\n\n    // Create new instance preserving transient fields\n    CometIcebergNativeScanExec(\n      nativeOp,\n      output,\n      originalPlan,\n      newSerializedPlan,\n      metadataLocation,\n      nativeIcebergScanMetadata)\n  }\n\n  override protected def doCanonicalize(): CometIcebergNativeScanExec = {\n    CometIcebergNativeScanExec(\n      nativeOp,\n      output.map(QueryPlan.normalizeExpressions(_, output)),\n      null, // Don't need originalPlan for canonicalization\n      SerializedPlan(None),\n      metadataLocation,\n      null\n    ) // Don't need metadata for canonicalization\n  }\n\n  override def stringArgs: Iterator[Any] = {\n    // Use metadata task count to avoid triggering serializedPartitionData during planning\n    val hasMeta = nativeIcebergScanMetadata != null && nativeIcebergScanMetadata.tasks != null\n    val taskCount = if (hasMeta) nativeIcebergScanMetadata.tasks.size() else 0\n    val scanDesc = if (originalPlan != null) originalPlan.scan.description() else \"canonicalized\"\n    // Include runtime filters (DPP) in string representation\n    val runtimeFiltersStr = if (originalPlan != null && originalPlan.runtimeFilters.nonEmpty) {\n      s\", runtimeFilters=${originalPlan.runtimeFilters.mkString(\"[\", \", \", \"]\")}\"\n    } else {\n      \"\"\n    }\n    Iterator(output, s\"$metadataLocation, $scanDesc$runtimeFiltersStr\", taskCount)\n  }\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometIcebergNativeScanExec =>\n        this.metadataLocation == other.metadataLocation &&\n        this.output == other.output &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int =\n    Objects.hashCode(metadataLocation, output.asJava, serializedPlanOpt)\n}\n\nobject CometIcebergNativeScanExec {\n\n  /** Creates a CometIcebergNativeScanExec with deferred partition serialization. 
*/\n  def apply(\n      nativeOp: Operator,\n      scanExec: BatchScanExec,\n      session: SparkSession,\n      metadataLocation: String,\n      nativeIcebergScanMetadata: CometIcebergNativeScanMetadata): CometIcebergNativeScanExec = {\n\n    val exec = CometIcebergNativeScanExec(\n      nativeOp,\n      scanExec.output,\n      scanExec,\n      SerializedPlan(None),\n      metadataLocation,\n      nativeIcebergScanMetadata)\n\n    scanExec.logicalLink.foreach(exec.setLogicalLink)\n    exec\n  }\n}\n"
  },
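The reflection used by `setInSubqueryResult` above is a general trick worth isolating: the Scala compiler name-mangles private fields, so the field is located by name suffix rather than exact name. A standalone sketch against a hypothetical `Holder` class (not a Spark type):

```scala
// Locate a private field by name suffix and overwrite it.
def setFieldBySuffix(target: AnyRef, suffix: String, value: AnyRef): Unit = {
  val field = target.getClass.getDeclaredFields
    .find(_.getName.endsWith(suffix))
    .getOrElse(throw new IllegalStateException(s"no field ending in '$suffix'"))
  field.setAccessible(true)
  field.set(target, value)
}

class Holder {
  private var result: Array[Any] = _
  def read: Option[Array[Any]] = Option(result)
}

val h = new Holder
setFieldBySuffix(h, "result", Array[Any]("partition-1"))
assert(h.read.exists(_.sameElements(Array[Any]("partition-1"))))
```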
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometLocalTableScanExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport org.apache.spark.TaskContext\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, UnsafeProjection}\nimport org.apache.spark.sql.comet.CometLocalTableScanExec.createMetricsIterator\nimport org.apache.spark.sql.comet.execution.arrow.CometArrowConverters\nimport org.apache.spark.sql.execution.{LeafExecNode, LocalTableScanExec}\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport com.google.common.base.Objects\n\nimport org.apache.comet.{CometConf, ConfigEntry}\nimport org.apache.comet.serde.OperatorOuterClass.Operator\nimport org.apache.comet.serde.operator.CometSink\n\ncase class CometLocalTableScanExec(\n    originalPlan: LocalTableScanExec,\n    @transient rows: Seq[InternalRow],\n    override val output: Seq[Attribute])\n    extends CometExec\n    with LeafExecNode {\n\n  override lazy val metrics: Map[String, SQLMetric] = Map(\n    \"numOutputRows\" -> SQLMetrics.createMetric(sparkContext, \"number of output rows\"))\n\n  @transient private lazy val unsafeRows: Array[InternalRow] = {\n    if (rows.isEmpty) {\n      Array.empty\n    } else {\n      val proj = UnsafeProjection.create(output, output)\n      rows.map(r => proj(r).copy()).toArray\n    }\n  }\n\n  @transient private lazy val rdd: RDD[InternalRow] = {\n    if (rows.isEmpty) {\n      sparkContext.emptyRDD\n    } else {\n      val numSlices = math.min(unsafeRows.length, session.leafNodeDefaultParallelism)\n      sparkContext.parallelize(unsafeRows, numSlices)\n    }\n  }\n\n  override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    val numInputRows = longMetric(\"numOutputRows\")\n    val maxRecordsPerBatch = CometConf.COMET_BATCH_SIZE.get(conf)\n    // Use UTC to match native side expectations. 
See CometSparkToColumnarExec.\n    val timeZoneId = \"UTC\"\n    rdd.mapPartitionsInternal { sparkBatches =>\n      val context = TaskContext.get()\n      val batches = CometArrowConverters.rowToArrowBatchIter(\n        sparkBatches,\n        originalPlan.schema,\n        maxRecordsPerBatch,\n        timeZoneId,\n        context)\n      createMetricsIterator(batches, numInputRows)\n    }\n  }\n\n  override protected def stringArgs: Iterator[Any] = {\n    if (rows.isEmpty) {\n      Iterator(\"<empty>\", output)\n    } else {\n      Iterator(output)\n    }\n  }\n\n  override def supportsColumnar: Boolean = true\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometLocalTableScanExec =>\n        this.originalPlan == other.originalPlan &&\n        this.schema == other.schema &&\n        this.output == other.output\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(originalPlan, originalPlan.schema, output)\n}\n\nobject CometLocalTableScanExec extends CometSink[LocalTableScanExec] {\n\n  // uses CometArrowConverters, which re-uses arrays\n  override def isFfiSafe: Boolean = false\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED)\n\n  override def createExec(nativeOp: Operator, op: LocalTableScanExec): CometNativeExec = {\n    CometScanWrapper(nativeOp, CometLocalTableScanExec(op, op.rows, op.output))\n  }\n\n  private def createMetricsIterator(\n      it: Iterator[ColumnarBatch],\n      numInputRows: SQLMetric): Iterator[ColumnarBatch] = {\n    new Iterator[ColumnarBatch] {\n      override def hasNext: Boolean = it.hasNext\n\n      override def next(): ColumnarBatch = {\n        val batch = it.next()\n        numInputRows.add(batch.numRows())\n        batch\n      }\n    }\n  }\n}\n"
  },
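`createMetricsIterator` above is an instance of a small, reusable pattern: decorate a streaming iterator so a metric is updated as elements flow past, without buffering anything. A generic sketch of the same idea:

```scala
import org.apache.spark.sql.execution.metric.SQLMetric

// Generic form of createMetricsIterator: add each element's row count to the
// metric lazily, as the consumer pulls elements through.
def counting[T](it: Iterator[T], metric: SQLMetric)(rowsOf: T => Long): Iterator[T] =
  it.map { elem =>
    metric.add(rowsOf(elem))
    elem
  }

// e.g. counting(batches, numOutputRows)(_.numRows().toLong)
```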
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometMetricNode.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.{SparkContext, TaskContext}\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}\n\nimport org.apache.comet.serde.Metric\n\n/**\n * A node carrying SQL metrics from SparkPlan, and metrics of its children. Native code will call\n * [[getChildNode]] and [[set]] to update the metrics.\n *\n * @param metrics\n *   the mapping between metric name of native operator to `SQLMetric` of Spark operator. For\n *   example, `numOutputRows` -> `SQLMetrics(\"numOutputRows\")` means the native operator will\n *   update `numOutputRows` metric with the value of `SQLMetrics(\"numOutputRows\")` in Spark\n *   operator.\n */\ncase class CometMetricNode(metrics: Map[String, SQLMetric], children: Seq[CometMetricNode])\n    extends Logging {\n\n  /**\n   * Returns the leaf node (deepest single-child descendant). For a native scan plan like\n   * FilterExec -> DataSourceExec, this returns the DataSourceExec node which has the\n   * bytes_scanned and output_rows metrics from the Parquet reader.\n   */\n  def leafNode: CometMetricNode = {\n    if (children.isEmpty) this\n    else children.head.leafNode\n  }\n\n  /**\n   * Returns all leaf nodes (nodes with no children) in the metric tree. Unlike [[leafNode]] which\n   * only follows the first child, this finds all leaves, which is needed for plans with multiple\n   * scans (e.g., joins, unions).\n   */\n  def leafNodes: Seq[CometMetricNode] = {\n    if (children.isEmpty) Seq(this)\n    else children.flatMap(_.leafNodes)\n  }\n\n  /**\n   * Reports aggregated scan input metrics (bytesRead, recordsRead) to Spark's task metrics.\n   * Aggregates across all scan leaf nodes to handle plans with multiple scans (e.g., joins). 
Must\n   * be called in a TaskCompletionListener after the iterator is fully consumed.\n   */\n  def reportScanInputMetrics(ctx: TaskContext): Unit = {\n    ctx.addTaskCompletionListener[Unit] { _ =>\n      val scanLeaves = leafNodes.filter(_.metrics.contains(\"bytes_scanned\"))\n      if (scanLeaves.nonEmpty) {\n        val totalBytes = scanLeaves.map(_.metrics(\"bytes_scanned\").value).sum\n        val totalRows = scanLeaves.map { leaf =>\n          val outputRows =\n            leaf.metrics.get(\"output_rows\").map(_.value).getOrElse(0L)\n          val prunedRows =\n            leaf.metrics.get(\"pushdown_rows_pruned\").map(_.value).getOrElse(0L)\n          outputRows + prunedRows\n        }.sum\n        ctx.taskMetrics().inputMetrics.setBytesRead(totalBytes)\n        ctx.taskMetrics().inputMetrics.setRecordsRead(totalRows)\n      }\n    }\n  }\n\n  /**\n   * Gets a child node. Called from native.\n   */\n  def getChildNode(i: Int): CometMetricNode = {\n    if (i < 0 || i >= children.length) {\n      // TODO: throw an exception, e.g. IllegalArgumentException, instead?\n      return null\n    }\n    children(i)\n  }\n\n  /**\n   * Update the value of a metric. This method will typically be called multiple times for the\n   * same metric during multiple calls to executePlan.\n   *\n   * @param metricName\n   *   the name of the metric at native operator.\n   * @param v\n   *   the value to set.\n   */\n  def set(metricName: String, v: Long): Unit = {\n    metrics.get(metricName) match {\n      case Some(metric) => metric.set(v)\n      case None =>\n        // no-op\n        logDebug(s\"Non-existing metric: $metricName. Ignored\")\n    }\n  }\n\n  private def set_all(metricNode: Metric.NativeMetricNode): Unit = {\n    metricNode.getMetricsMap.forEach((name, value) => {\n      set(name, value)\n    })\n    metricNode.getChildrenList.asScala.zip(children).foreach { case (child, childNode) =>\n      childNode.set_all(child)\n    }\n  }\n\n  // Called via JNI from `comet_metric_node.rs`\n  def set_all_from_bytes(bytes: Array[Byte]): Unit = {\n    val metricNode = Metric.NativeMetricNode.parseFrom(bytes)\n    set_all(metricNode)\n  }\n}\n\nobject CometMetricNode {\n\n  /**\n   * The baseline SQL metrics for DataFusion `BaselineMetrics`.\n   */\n  def baselineMetrics(sc: SparkContext): Map[String, SQLMetric] = {\n    Map(\n      \"output_rows\" -> SQLMetrics.createMetric(sc, \"number of output rows\"),\n      \"elapsed_compute\" -> SQLMetrics.createNanoTimingMetric(\n        sc,\n        \"total time (in ms) spent in this operator\"))\n  }\n\n  /**\n   * Base SQL Metrics for ScanExec and BatchScanExec. These are needed for various Spark systems\n   * to function properly, and are calculated on the general iterator created by the scan\n   * operator, regardless of which scan implementation is used.\n   */\n  def baseScanMetrics(sc: SparkContext): Map[String, SQLMetric] = {\n    Map(\n      \"numOutputRows\" -> SQLMetrics.createMetric(sc, \"number of output rows\"),\n      \"scanTime\" -> SQLMetrics.createNanoTimingMetric(sc, \"scan time\"))\n  }\n\n  /**\n   * Metrics specific to Parquet format in scan operators. This provides some statistics about the\n   * read files, as well as some meta filtering occurring. 
These metrics are independent of the\n   * native reader metrics.\n   */\n  def parquetScanMetrics(sc: SparkContext): Map[String, SQLMetric] = {\n    Map(\n      \"ParquetRowGroups\" -> SQLMetrics.createMetric(sc, \"num of Parquet row groups read\"),\n      \"ParquetNativeDecodeTime\" -> SQLMetrics.createNanoTimingMetric(\n        sc,\n        \"time spent in Parquet native decoding\"),\n      \"ParquetNativeLoadTime\" -> SQLMetrics.createNanoTimingMetric(\n        sc,\n        \"time spent in loading Parquet native vectors\"),\n      \"ParquetLoadRowGroupTime\" -> SQLMetrics.createNanoTimingMetric(\n        sc,\n        \"time spent in loading Parquet row groups\"),\n      \"ParquetInputFileReadTime\" -> SQLMetrics.createNanoTimingMetric(\n        sc,\n        \"time spent in reading Parquet file from storage\"),\n      \"ParquetInputFileReadSize\" -> SQLMetrics.createSizeMetric(\n        sc,\n        \"read size when reading Parquet file from storage (MB)\"),\n      \"ParquetInputFileReadThroughput\" -> SQLMetrics.createAverageMetric(\n        sc,\n        \"read throughput when reading Parquet file from storage (MB/sec)\"))\n  }\n\n  /**\n   * SQL Metrics from the native DataFusion reader.\n   */\n  def nativeScanMetrics(sc: SparkContext): Map[String, SQLMetric] = {\n    Map(\n      \"output_rows\" -> SQLMetrics.createMetric(sc, \"number of output rows\"),\n      \"time_elapsed_opening\" ->\n        SQLMetrics.createNanoTimingMetric(sc, \"Wall clock time elapsed for file opening\"),\n      \"time_elapsed_scanning_until_data\" ->\n        SQLMetrics.createNanoTimingMetric(\n          sc,\n          \"Wall clock time elapsed for file scanning + \" +\n            \"first record batch of decompression + decoding\"),\n      \"time_elapsed_scanning_total\" ->\n        SQLMetrics.createNanoTimingMetric(\n          sc,\n          \"Elapsed wall clock time for scanning \" +\n            \"+ record batch decompression / decoding\"),\n      \"time_elapsed_processing\" ->\n        SQLMetrics.createNanoTimingMetric(\n          sc,\n          \"Wall clock time elapsed for data decompression + decoding\"),\n      \"file_open_errors\" ->\n        SQLMetrics.createMetric(sc, \"Count of errors opening file\"),\n      \"file_scan_errors\" ->\n        SQLMetrics.createMetric(sc, \"Count of errors scanning file\"),\n      \"predicate_evaluation_errors\" ->\n        SQLMetrics.createMetric(sc, \"Number of times the predicate could not be evaluated\"),\n      \"row_groups_matched_bloom_filter\" ->\n        SQLMetrics.createMetric(\n          sc,\n          \"Number of row groups whose bloom filters were checked and matched (not pruned)\"),\n      \"row_groups_pruned_bloom_filter\" ->\n        SQLMetrics.createMetric(sc, \"Number of row groups pruned by bloom filters\"),\n      \"row_groups_matched_statistics\" ->\n        SQLMetrics.createMetric(\n          sc,\n          \"Number of row groups whose statistics were checked and matched (not pruned)\"),\n      \"row_groups_pruned_statistics\" ->\n        SQLMetrics.createMetric(sc, \"Number of row groups pruned by statistics\"),\n      \"bytes_scanned\" ->\n        SQLMetrics.createSizeMetric(sc, \"Number of bytes scanned\"),\n      \"pushdown_rows_pruned\" ->\n        SQLMetrics.createMetric(sc, \"Rows filtered out by predicates pushed into parquet scan\"),\n      \"pushdown_rows_matched\" ->\n        SQLMetrics.createMetric(sc, \"Rows passed predicates pushed into parquet scan\"),\n      \"row_pushdown_eval_time\" ->\n        
SQLMetrics.createNanoTimingMetric(sc, \"Time spent evaluating row-level pushdown filters\"),\n      \"statistics_eval_time\" ->\n        SQLMetrics.createNanoTimingMetric(\n          sc,\n          \"Time spent evaluating row group-level statistics filters\"),\n      \"bloom_filter_eval_time\" ->\n        SQLMetrics.createNanoTimingMetric(sc, \"Time spent evaluating row group Bloom Filters\"),\n      \"page_index_rows_pruned\" ->\n        SQLMetrics.createMetric(sc, \"Rows filtered out by parquet page index\"),\n      \"page_index_rows_matched\" ->\n        SQLMetrics.createMetric(sc, \"Rows passed through the parquet page index\"),\n      \"page_index_eval_time\" ->\n        SQLMetrics.createNanoTimingMetric(sc, \"Time spent evaluating parquet page index filters\"),\n      \"metadata_load_time\" ->\n        SQLMetrics.createNanoTimingMetric(\n          sc,\n          \"Time spent reading and parsing metadata from the footer\"))\n  }\n\n  /**\n   * SQL Metrics for DataFusion HashJoin\n   */\n  def hashJoinMetrics(sc: SparkContext): Map[String, SQLMetric] = {\n    Map(\n      \"build_time\" ->\n        SQLMetrics.createNanoTimingMetric(sc, \"Total time for collecting build-side of join\"),\n      \"build_input_batches\" ->\n        SQLMetrics.createMetric(sc, \"Number of batches consumed by build-side\"),\n      \"build_input_rows\" ->\n        SQLMetrics.createMetric(sc, \"Number of rows consumed by build-side\"),\n      \"build_mem_used\" ->\n        SQLMetrics.createSizeMetric(sc, \"Memory used by build-side\"),\n      \"input_batches\" ->\n        SQLMetrics.createMetric(sc, \"Number of batches consumed by probe-side\"),\n      \"input_rows\" ->\n        SQLMetrics.createMetric(sc, \"Number of rows consumed by probe-side\"),\n      \"output_batches\" -> SQLMetrics.createMetric(sc, \"Number of batches produced\"),\n      \"output_rows\" -> SQLMetrics.createMetric(sc, \"Number of rows produced\"),\n      \"join_time\" -> SQLMetrics.createNanoTimingMetric(sc, \"Total time for joining\"))\n  }\n\n  /**\n   * SQL Metrics for DataFusion SortMergeJoin\n   */\n  def sortMergeJoinMetrics(sc: SparkContext): Map[String, SQLMetric] = {\n    Map(\n      \"peak_mem_used\" ->\n        SQLMetrics.createSizeMetric(sc, \"Memory used by build-side\"),\n      \"input_batches\" ->\n        SQLMetrics.createMetric(sc, \"Number of batches consumed by probe-side\"),\n      \"input_rows\" ->\n        SQLMetrics.createMetric(sc, \"Number of rows consumed by probe-side\"),\n      \"output_batches\" -> SQLMetrics.createMetric(sc, \"Number of batches produced\"),\n      \"output_rows\" -> SQLMetrics.createMetric(sc, \"Number of rows produced\"),\n      \"join_time\" -> SQLMetrics.createNanoTimingMetric(sc, \"Total time for joining\"),\n      \"spill_count\" -> SQLMetrics.createMetric(sc, \"Count of spills\"),\n      \"spilled_bytes\" -> SQLMetrics.createSizeMetric(sc, \"Total spilled bytes\"),\n      \"spilled_rows\" -> SQLMetrics.createMetric(sc, \"Total spilled rows\"))\n  }\n\n  def shuffleMetrics(sc: SparkContext): Map[String, SQLMetric] = {\n    Map(\n      \"elapsed_compute\" -> SQLMetrics.createNanoTimingMetric(sc, \"native shuffle writer time\"),\n      \"repart_time\" -> SQLMetrics.createNanoTimingMetric(sc, \"repartition time\"),\n      \"encode_time\" -> SQLMetrics.createNanoTimingMetric(sc, \"encoding and compression time\"),\n      \"decode_time\" -> SQLMetrics.createNanoTimingMetric(sc, \"decoding and decompression time\"),\n      \"spill_count\" -> SQLMetrics.createMetric(sc, \"number of spills\"),\n 
     \"spilled_bytes\" -> SQLMetrics.createSizeMetric(sc, \"spilled bytes\"),\n      \"input_batches\" -> SQLMetrics.createMetric(sc, \"number of input batches\"))\n  }\n\n  /**\n   * Creates a [[CometMetricNode]] from a [[CometPlan]].\n   */\n  def fromCometPlan(cometPlan: SparkPlan): CometMetricNode = {\n    val children = cometPlan.children.map(fromCometPlan)\n    CometMetricNode(cometPlan.metrics, children)\n  }\n\n  /**\n   * Creates a [[CometMetricNode]] from a map of [[SQLMetric]].\n   */\n  def apply(metrics: Map[String, SQLMetric]): CometMetricNode = {\n    CometMetricNode(metrics, Nil)\n  }\n}\n"
  },
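To make the native-to-JVM metric flow concrete, here is a small sketch of a two-level metric tree mirroring a native Filter over a Scan. Native code walks the same shape via `getChildNode(i)` and `set(name, value)`; the node and helper names below come from the file above, while `buildTree` is illustrative only.

```scala
import org.apache.spark.SparkContext
import org.apache.spark.sql.comet.CometMetricNode
import org.apache.spark.sql.execution.metric.SQLMetrics

def buildTree(sc: SparkContext): CometMetricNode = {
  val scanNode = CometMetricNode(
    Map("bytes_scanned" -> SQLMetrics.createSizeMetric(sc, "Number of bytes scanned")))
  // Filter node with DataFusion baseline metrics, scan as its only child.
  CometMetricNode(CometMetricNode.baselineMetrics(sc), Seq(scanNode))
}

// The native side (via JNI) effectively does:
//   filterNode.getChildNode(0).set("bytes_scanned", 1024L)
// after which leafNode resolves to the scan and reads back the value:
//   filterNode.leafNode.metrics("bytes_scanned").value == 1024L
```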
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometNativeColumnarToRowExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport java.util.UUID\nimport java.util.concurrent.{Future, TimeoutException, TimeUnit}\n\nimport scala.concurrent.Promise\nimport scala.util.control.NonFatal\n\nimport org.apache.spark.{broadcast, SparkException, TaskContext}\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, SortOrder}\nimport org.apache.spark.sql.catalyst.plans.physical.Partitioning\nimport org.apache.spark.sql.comet.util.{Utils => CometUtils}\nimport org.apache.spark.sql.errors.QueryExecutionErrors\nimport org.apache.spark.sql.execution.{ColumnarToRowTransition, SparkPlan, SQLExecution}\nimport org.apache.spark.sql.execution.adaptive.BroadcastQueryStageExec\nimport org.apache.spark.sql.execution.exchange.ReusedExchangeExec\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}\nimport org.apache.spark.sql.types.StructType\nimport org.apache.spark.util.{SparkFatalException, Utils}\n\nimport org.apache.comet.{CometConf, NativeColumnarToRowConverter}\n\n/**\n * Native implementation of ColumnarToRowExec that converts Arrow columnar data to Spark UnsafeRow\n * format using Rust.\n *\n * This feature is enabled by default and can be disabled by setting\n * `spark.comet.exec.columnarToRow.native.enabled=false`.\n *\n * Benefits over the JVM implementation:\n *   - Zero-copy for variable-length types (strings, binary)\n *   - Better CPU cache utilization through vectorized processing\n *   - Reduced GC pressure\n *\n * @param child\n *   The child plan that produces columnar batches\n */\ncase class CometNativeColumnarToRowExec(child: SparkPlan)\n    extends ColumnarToRowTransition\n    with CometPlan {\n\n  // supportsColumnar requires to be only called on driver side, see also SPARK-37779.\n  assert(Utils.isInRunningSparkTask || child.supportsColumnar)\n\n  override def output: Seq[Attribute] = child.output\n\n  override def outputPartitioning: Partitioning = child.outputPartitioning\n\n  override def outputOrdering: Seq[SortOrder] = child.outputOrdering\n\n  override lazy val metrics: Map[String, SQLMetric] = Map(\n    \"numOutputRows\" -> SQLMetrics.createMetric(sparkContext, \"number of output rows\"),\n    \"numInputBatches\" -> SQLMetrics.createMetric(sparkContext, \"number of input batches\"),\n    \"convertTime\" -> SQLMetrics.createNanoTimingMetric(sparkContext, \"time in conversion\"))\n\n  @transient\n  private lazy val promise = Promise[broadcast.Broadcast[Any]]()\n\n  @transient\n  private val timeout: Long = conf.broadcastTimeout\n\n  private val runId: UUID = UUID.randomUUID\n\n  private lazy val cometBroadcastExchange = 
findCometBroadcastExchange(child)\n\n  @transient\n  lazy val relationFuture: Future[broadcast.Broadcast[Any]] = {\n    SQLExecution.withThreadLocalCaptured[broadcast.Broadcast[Any]](\n      session,\n      CometBroadcastExchangeExec.executionContext) {\n      try {\n        // Setup a job group here so later it may get cancelled by groupId if necessary.\n        sparkContext.setJobGroup(\n          runId.toString,\n          s\"CometNativeColumnarToRow broadcast exchange (runId $runId)\",\n          interruptOnCancel = true)\n\n        val numOutputRows = longMetric(\"numOutputRows\")\n        val numInputBatches = longMetric(\"numInputBatches\")\n        val localSchema = this.schema\n        val batchSize = CometConf.COMET_BATCH_SIZE.get()\n        val broadcastColumnar = child.executeBroadcast()\n        val serializedBatches =\n          broadcastColumnar.value.asInstanceOf[Array[org.apache.spark.util.io.ChunkedByteBuffer]]\n\n        // Use native converter to convert columnar data to rows\n        val converter = new NativeColumnarToRowConverter(localSchema, batchSize)\n        try {\n          val rows = serializedBatches.iterator\n            .flatMap(CometUtils.decodeBatches(_, this.getClass.getSimpleName))\n            .flatMap { batch =>\n              numInputBatches += 1\n              numOutputRows += batch.numRows()\n              val result = converter.convert(batch)\n              // Wrap iterator to close batch after consumption\n              new Iterator[InternalRow] {\n                override def hasNext: Boolean = {\n                  val hasMore = result.hasNext\n                  if (!hasMore) {\n                    batch.close()\n                  }\n                  hasMore\n                }\n                override def next(): InternalRow = result.next()\n              }\n            }\n\n          val mode = cometBroadcastExchange.get.mode\n          val relation = mode.transform(rows, Some(numOutputRows.value))\n          val broadcasted = sparkContext.broadcastInternal(relation, serializedOnly = true)\n          val executionId = sparkContext.getLocalProperty(SQLExecution.EXECUTION_ID_KEY)\n          SQLMetrics.postDriverMetricUpdates(sparkContext, executionId, metrics.values.toSeq)\n          promise.trySuccess(broadcasted)\n          broadcasted\n        } finally {\n          converter.close()\n        }\n      } catch {\n        // SPARK-24294: To bypass scala bug: https://github.com/scala/bug/issues/9554, we throw\n        // SparkFatalException, which is a subclass of Exception. 
ThreadUtils.awaitResult\n        // will catch this exception and re-throw the wrapped fatal throwable.\n        case oe: OutOfMemoryError =>\n          val ex = new SparkFatalException(oe)\n          promise.tryFailure(ex)\n          throw ex\n        case e if !NonFatal(e) =>\n          val ex = new SparkFatalException(e)\n          promise.tryFailure(ex)\n          throw ex\n        case e: Throwable =>\n          promise.tryFailure(e)\n          throw e\n      }\n    }\n  }\n\n  override def doExecuteBroadcast[T](): broadcast.Broadcast[T] = {\n    if (cometBroadcastExchange.isEmpty) {\n      throw new SparkException(\n        \"CometNativeColumnarToRowExec only supports doExecuteBroadcast when child contains a \" +\n          \"CometBroadcastExchange, but got \" + child)\n    }\n\n    try {\n      relationFuture.get(timeout, TimeUnit.SECONDS).asInstanceOf[broadcast.Broadcast[T]]\n    } catch {\n      case ex: TimeoutException =>\n        logError(s\"Could not execute broadcast in $timeout secs.\", ex)\n        if (!relationFuture.isDone) {\n          sparkContext.cancelJobGroup(runId.toString)\n          relationFuture.cancel(true)\n        }\n        throw QueryExecutionErrors.executeBroadcastTimeoutError(timeout, Some(ex))\n    }\n  }\n\n  private def findCometBroadcastExchange(op: SparkPlan): Option[CometBroadcastExchangeExec] = {\n    op match {\n      case b: CometBroadcastExchangeExec => Some(b)\n      case b: BroadcastQueryStageExec => findCometBroadcastExchange(b.plan)\n      case b: ReusedExchangeExec => findCometBroadcastExchange(b.child)\n      case _ => op.children.collectFirst(Function.unlift(findCometBroadcastExchange))\n    }\n  }\n\n  override def doExecute(): RDD[InternalRow] = {\n    val numOutputRows = longMetric(\"numOutputRows\")\n    val numInputBatches = longMetric(\"numInputBatches\")\n    val convertTime = longMetric(\"convertTime\")\n\n    // Get the schema and batch size for native conversion\n    val localSchema = child.schema\n    val batchSize = CometConf.COMET_BATCH_SIZE.get()\n\n    child.executeColumnar().mapPartitionsInternal { batches =>\n      // Create native converter for this partition\n      val converter = new NativeColumnarToRowConverter(localSchema, batchSize)\n\n      // Register cleanup on task completion\n      TaskContext.get().addTaskCompletionListener[Unit] { _ =>\n        converter.close()\n      }\n\n      batches.flatMap { batch =>\n        numInputBatches += 1\n        val numRows = batch.numRows()\n        numOutputRows += numRows\n\n        val startTime = System.nanoTime()\n        val result = converter.convert(batch)\n        convertTime += System.nanoTime() - startTime\n\n        // Wrap iterator to close batch after consumption\n        new Iterator[InternalRow] {\n          override def hasNext: Boolean = {\n            val hasMore = result.hasNext\n            if (!hasMore) {\n              batch.close()\n            }\n            hasMore\n          }\n          override def next(): InternalRow = result.next()\n        }\n      }\n    }\n  }\n\n  override protected def withNewChildInternal(newChild: SparkPlan): CometNativeColumnarToRowExec =\n    copy(child = newChild)\n}\n\nobject CometNativeColumnarToRowExec {\n\n  /**\n   * Checks if the given schema is supported by native columnar to row conversion.\n   *\n   * Currently supported types:\n   *   - Primitive types: Boolean, Byte, Short, Int, Long, Float, Double\n   *   - Date and Timestamp (microseconds)\n   *   - Decimal (both inline and variable-length)\n   *   - 
String and Binary\n   *   - Struct, Array, Map (nested types)\n   */\n  def supportsSchema(schema: StructType): Boolean = {\n    import org.apache.spark.sql.types._\n\n    def isSupported(dataType: DataType): Boolean = dataType match {\n      case BooleanType | ByteType | ShortType | IntegerType | LongType => true\n      case FloatType | DoubleType => true\n      case DateType => true\n      case TimestampType => true\n      case _: DecimalType => true\n      case StringType | BinaryType => true\n      case StructType(fields) => fields.forall(f => isSupported(f.dataType))\n      case ArrayType(elementType, _) => isSupported(elementType)\n      case MapType(keyType, valueType, _) => isSupported(keyType) && isSupported(valueType)\n      case _ => false\n    }\n\n    schema.fields.forall(f => isSupported(f.dataType))\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometNativeScanExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.reflect.ClassTag\n\nimport org.apache.spark.{Partition, TaskContext}\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst._\nimport org.apache.spark.sql.catalyst.expressions._\nimport org.apache.spark.sql.catalyst.plans.QueryPlan\nimport org.apache.spark.sql.catalyst.plans.physical.{Partitioning, UnknownPartitioning}\nimport org.apache.spark.sql.comet.shims.ShimStreamSourceAwareSparkPlan\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.datasources._\nimport org.apache.spark.sql.execution.metric.SQLMetric\nimport org.apache.spark.sql.types._\nimport org.apache.spark.sql.vectorized.ColumnarBatch\nimport org.apache.spark.util.SerializableConfiguration\nimport org.apache.spark.util.collection._\n\nimport com.google.common.base.Objects\n\nimport org.apache.comet.parquet.{CometParquetFileFormat, CometParquetUtils}\nimport org.apache.comet.serde.OperatorOuterClass.Operator\n\n/**\n * Native scan operator for DataSource V1 Parquet files using DataFusion's ParquetExec.\n *\n * Replaces Spark's FileSourceScanExec to enable native execution. File planning runs in Spark to\n * produce FilePartitions (handling bucketing, partition pruning, etc.), which are serialized to\n * protobuf for DataFusion to execute using its ParquetExec. This provides better performance than\n * reading through Spark's FileFormat abstraction.\n *\n * Uses split-mode serialization introduced in PR #3349: common scan metadata (schemas, filters,\n * projections) is serialized once at planning time, while per-partition file lists are lazily\n * serialized at execution time. 
This reduces memory when scanning tables with many partitions, as\n * each executor task receives only its partition's file list rather than all files.\n */\ncase class CometNativeScanExec(\n    override val nativeOp: Operator,\n    @transient relation: HadoopFsRelation,\n    override val output: Seq[Attribute],\n    requiredSchema: StructType,\n    partitionFilters: Seq[Expression],\n    optionalBucketSet: Option[BitSet],\n    optionalNumCoalescedBuckets: Option[Int],\n    dataFilters: Seq[Expression],\n    tableIdentifier: Option[TableIdentifier],\n    disableBucketedScan: Boolean = false,\n    originalPlan: FileSourceScanExec,\n    override val serializedPlanOpt: SerializedPlan,\n    @transient scan: CometScanExec, // Lazy access to file partitions without serializing with plan\n    sourceKey: String) // Key for PlanDataInjector to match common+partition data at runtime\n    extends CometLeafExec\n    with DataSourceScanExec\n    with ShimStreamSourceAwareSparkPlan {\n\n  override lazy val metadata: Map[String, String] = originalPlan.metadata\n\n  override val nodeName: String =\n    s\"CometNativeScan $relation ${tableIdentifier.map(_.unquotedString).getOrElse(\"\")}\"\n\n  override def verboseStringWithOperatorId(): String = {\n    val metadataStr = metadata.toSeq.sorted\n      .filterNot {\n        case (_, value) if (value.isEmpty || value.equals(\"[]\")) => true\n        case (key, _) if (key.equals(\"DataFilters\") || key.equals(\"Format\")) => true\n        case (_, _) => false\n      }\n      .map {\n        case (key, _) if (key.equals(\"Location\")) =>\n          val location = relation.location\n          val numPaths = location.rootPaths.length\n          val abbreviatedLocation = if (numPaths <= 1) {\n            location.rootPaths.mkString(\"[\", \", \", \"]\")\n          } else {\n            \"[\" + location.rootPaths.head + s\", ... 
${numPaths - 1} entries]\"\n          }\n          s\"$key: ${location.getClass.getSimpleName} ${redact(abbreviatedLocation)}\"\n        case (key, value) => s\"$key: ${redact(value)}\"\n      }\n\n    s\"\"\"\n       |$formattedNodeName\n       |${ExplainUtils.generateFieldString(\"Output\", output)}\n       |${metadataStr.mkString(\"\\n\")}\n       |\"\"\".stripMargin\n  }\n\n  // exposed for testing\n  lazy val bucketedScan: Boolean = originalPlan.bucketedScan && !disableBucketedScan\n\n  override lazy val outputPartitioning: Partitioning = {\n    if (bucketedScan) {\n      originalPlan.outputPartitioning\n    } else {\n      UnknownPartitioning(originalPlan.inputRDD.getNumPartitions)\n    }\n  }\n\n  override lazy val outputOrdering: Seq[SortOrder] = originalPlan.outputOrdering\n\n  /**\n   * Lazy partition serialization - deferred until execution time to reduce driver memory.\n   *\n   * Split-mode serialization pattern:\n   * {{{\n   * Planning time:\n   *   - CometNativeScan.convert() serializes common data (schemas, filters, projections)\n   *   - commonData embedded in nativeOp protobuf\n   *   - File partitions NOT serialized yet\n   *\n   * Execution time:\n   *   - doExecuteColumnar() accesses commonData and perPartitionData\n   *   - Forces serializedPartitionData evaluation (here)\n   *   - Each partition's file list serialized separately\n   *   - CometExecRDD receives per-partition data and injects at runtime\n   * }}}\n   *\n   * This pattern reduces memory usage for tables with many partitions - instead of serializing\n   * all files for all partitions in the driver, we serialize only common metadata (once) and each\n   * partition's files (lazily, as tasks are scheduled).\n   */\n  @transient private lazy val serializedPartitionData: (Array[Byte], Array[Array[Byte]]) = {\n    // Extract common data from nativeOp\n    val commonBytes = nativeOp.getNativeScan.getCommon.toByteArray\n\n    // Get file partitions from CometScanExec (handles bucketing, etc.)\n    val filePartitions = scan.getFilePartitions()\n\n    // Serialize each partition's files\n    import org.apache.comet.serde.operator.partition2Proto\n    val perPartitionBytes = filePartitions.map { filePartition =>\n      val partitionProto = partition2Proto(filePartition, relation.partitionSchema)\n      val partitionNativeScan = org.apache.comet.serde.OperatorOuterClass.NativeScan\n        .newBuilder()\n        .setFilePartition(partitionProto)\n        .build()\n\n      partitionNativeScan.toByteArray\n    }.toArray\n\n    (commonBytes, perPartitionBytes)\n  }\n\n  def commonData: Array[Byte] = serializedPartitionData._1\n  def perPartitionData: Array[Array[Byte]] = serializedPartitionData._2\n\n  override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    val nativeMetrics = CometMetricNode.fromCometPlan(this)\n    val serializedPlan = CometExec.serializeNativePlan(nativeOp)\n\n    // Encryption config must be passed to each executor task\n    val hadoopConf = relation.sparkSession.sessionState\n      .newHadoopConfWithOptions(relation.options)\n    val encryptionEnabled = CometParquetUtils.encryptionEnabled(hadoopConf)\n    val (broadcastedHadoopConfForEncryption, encryptedFilePaths) = if (encryptionEnabled) {\n      val broadcastedConf = relation.sparkSession.sparkContext\n        .broadcast(new SerializableConfiguration(hadoopConf))\n      (Some(broadcastedConf), relation.inputFiles.toSeq)\n    } else {\n      (None, Seq.empty)\n    }\n\n    new CometExecRDD(\n      sparkContext,\n      Seq.empty,\n      
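// Key both maps by sourceKey so that PlanDataInjector can pair the common scan\n      // metadata with this scan's per-partition file lists at runtime.\n      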
Map(sourceKey -> commonData),\n      Map(sourceKey -> perPartitionData),\n      serializedPlan,\n      perPartitionData.length,\n      output.length,\n      nativeMetrics,\n      Seq.empty,\n      broadcastedHadoopConfForEncryption,\n      encryptedFilePaths) {\n      override def compute(split: Partition, context: TaskContext): Iterator[ColumnarBatch] = {\n        val res = super.compute(split, context)\n\n        // Report scan input metrics after the iterator is fully consumed.\n        Option(context).foreach(nativeMetrics.reportScanInputMetrics)\n\n        res\n      }\n    }\n  }\n\n  override def doCanonicalize(): CometNativeScanExec = {\n    CometNativeScanExec(\n      nativeOp,\n      relation,\n      output.map(QueryPlan.normalizeExpressions(_, output)),\n      requiredSchema,\n      QueryPlan.normalizePredicates(\n        CometScanUtils.filterUnusedDynamicPruningExpressions(partitionFilters),\n        output),\n      optionalBucketSet,\n      optionalNumCoalescedBuckets,\n      QueryPlan.normalizePredicates(dataFilters, output),\n      None,\n      disableBucketedScan,\n      originalPlan.doCanonicalize(),\n      SerializedPlan(None),\n      null, // Transient scan not needed for canonicalization\n      \"\"\n    ) // sourceKey not needed for canonicalization\n  }\n\n  override def stringArgs: Iterator[Any] = Iterator(output)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometNativeScanExec =>\n        this.originalPlan == other.originalPlan &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(originalPlan, serializedPlanOpt)\n\n  private val driverMetricKeys =\n    Set(\n      \"numFiles\",\n      \"filesSize\",\n      \"numPartitions\",\n      \"metadataTime\",\n      \"staticFilesNum\",\n      \"staticFilesSize\",\n      \"pruningTime\")\n\n  override lazy val metrics: Map[String, SQLMetric] = {\n    val nativeMetrics = CometMetricNode.nativeScanMetrics(session.sparkContext)\n    // Map native metric names to Spark metric names\n    val withAlias = nativeMetrics.get(\"output_rows\") match {\n      case Some(metric) => nativeMetrics + (\"numOutputRows\" -> metric)\n      case None => nativeMetrics\n    }\n    withAlias ++ scan.metrics.filterKeys(driverMetricKeys)\n  }\n\n  /**\n   * See [[org.apache.spark.sql.execution.DataSourceScanExec.inputRDDs]]. 
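Delegates to the wrapped\n   * [[FileSourceScanExec]] rather than building a native RDD. 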
Only used for tests.\n   */\n  override def inputRDDs(): Seq[RDD[InternalRow]] = originalPlan.inputRDDs()\n}\n\nobject CometNativeScanExec {\n  def apply(\n      nativeOp: Operator,\n      scanExec: FileSourceScanExec,\n      session: SparkSession,\n      scan: CometScanExec): CometNativeScanExec = {\n    // TreeNode.mapProductIterator is protected method.\n    def mapProductIterator[B: ClassTag](product: Product, f: Any => B): Array[B] = {\n      val arr = Array.ofDim[B](product.productArity)\n      var i = 0\n      while (i < arr.length) {\n        arr(i) = f(product.productElement(i))\n        i += 1\n      }\n      arr\n    }\n\n    // Generate unique key for this scan so PlanDataInjector can match common+partition data.\n    // Multiple scans of same table with different projections/filters get different keys.\n    val common = nativeOp.getNativeScan.getCommon\n    val source = common.getSource\n    val keyComponents = Seq(\n      common.getRequiredSchemaList.toString,\n      common.getDataFiltersList.toString,\n      common.getProjectionVectorList.toString,\n      common.getFieldsList.toString)\n    val hashCode = keyComponents.mkString(\"|\").hashCode\n    val sourceKey = s\"${source}_${hashCode}\"\n\n    // Replacing the relation in FileSourceScanExec by `copy` seems causing some issues\n    // on other Spark distributions if FileSourceScanExec constructor is changed.\n    // Using `makeCopy` to avoid the issue.\n    // https://github.com/apache/arrow-datafusion-comet/issues/190\n    def transform(arg: Any): AnyRef = arg match {\n      case _: HadoopFsRelation =>\n        scanExec.relation.copy(fileFormat = new CometParquetFileFormat(session))(session)\n      case other: AnyRef => other\n      case null => null\n    }\n\n    val newArgs = mapProductIterator(scanExec, transform)\n    val wrapped = scanExec.makeCopy(newArgs).asInstanceOf[FileSourceScanExec]\n    val batchScanExec = CometNativeScanExec(\n      nativeOp,\n      wrapped.relation,\n      wrapped.output,\n      wrapped.requiredSchema,\n      wrapped.partitionFilters,\n      wrapped.optionalBucketSet,\n      wrapped.optionalNumCoalescedBuckets,\n      wrapped.dataFilters,\n      wrapped.tableIdentifier,\n      wrapped.disableBucketedScan,\n      wrapped,\n      SerializedPlan(None),\n      scan,\n      sourceKey)\n    scanExec.logicalLink.foreach(batchScanExec.setLogicalLink)\n    batchScanExec\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometNativeWriteExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.hadoop.mapreduce.{Job, TaskAttemptContext, TaskAttemptID, TaskID, TaskType}\nimport org.apache.hadoop.mapreduce.task.TaskAttemptContextImpl\nimport org.apache.spark.internal.io.FileCommitProtocol\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.execution.{SparkPlan, UnaryExecNode}\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}\nimport org.apache.spark.sql.vectorized.ColumnarBatch\nimport org.apache.spark.util.Utils\n\nimport com.google.protobuf.CodedOutputStream\n\nimport org.apache.comet.CometExecIterator\nimport org.apache.comet.serde.OperatorOuterClass.Operator\n\n/**\n * Comet physical operator for native Parquet write operations with FileCommitProtocol support.\n *\n * This operator writes data to Parquet files using the native Comet engine. It integrates with\n * Spark's FileCommitProtocol to provide atomic writes with proper staging and commit semantics.\n *\n * The implementation includes support for Spark's file commit protocol through work_dir, job_id,\n * and task_attempt_id parameters that can be set in the operator. When work_dir is set, files are\n * written to a temporary location that can be atomically committed later.\n *\n * @param nativeOp\n *   The native operator representing the write operation (template, will be modified per task)\n * @param child\n *   The child operator providing the data to write\n * @param outputPath\n *   The path where the Parquet file will be written\n * @param committer\n *   FileCommitProtocol for atomic writes. 
If None, files are written directly.\n * @param jobTrackerID\n *   Unique identifier for this write job\n */\ncase class CometNativeWriteExec(\n    nativeOp: Operator,\n    child: SparkPlan,\n    outputPath: String,\n    committer: Option[FileCommitProtocol] = None,\n    jobTrackerID: String = Utils.createTempDir().getName)\n    extends CometNativeExec\n    with UnaryExecNode {\n\n  override def originalPlan: SparkPlan = child\n\n  // Accumulator to collect TaskCommitMessages from all tasks\n  // Must be eagerly initialized on driver, not lazy\n  @transient private val taskCommitMessagesAccum =\n    sparkContext.collectionAccumulator[FileCommitProtocol.TaskCommitMessage](\"taskCommitMessages\")\n\n  override def serializedPlanOpt: SerializedPlan = {\n    val size = nativeOp.getSerializedSize\n    val bytes = new Array[Byte](size)\n    val codedOutput = CodedOutputStream.newInstance(bytes)\n    nativeOp.writeTo(codedOutput)\n    codedOutput.checkNoSpaceLeft()\n    SerializedPlan(Some(bytes))\n  }\n\n  override def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    copy(child = newChild)\n\n  override def nodeName: String = \"CometNativeWrite\"\n\n  override lazy val metrics: Map[String, SQLMetric] = Map(\n    \"files_written\" -> SQLMetrics.createMetric(sparkContext, \"number of written data files\"),\n    \"bytes_written\" -> SQLMetrics.createSizeMetric(sparkContext, \"written data\"),\n    \"rows_written\" -> SQLMetrics.createMetric(sparkContext, \"number of written rows\"))\n\n  override def doExecute(): RDD[InternalRow] = {\n    // Setup job if committer is present\n    committer.foreach { c =>\n      val jobContext = createJobContext()\n      c.setupJob(jobContext)\n    }\n\n    // Execute the native write with commit protocol\n    val resultRDD = doExecuteColumnar()\n\n    // Force execution by consuming all batches\n    resultRDD\n      .mapPartitions { iter =>\n        iter.foreach(_.close())\n        Iterator.empty\n      }\n      .count()\n\n    // Extract write statistics from metrics\n    val filesWritten = metrics(\"files_written\").value\n    val bytesWritten = metrics(\"bytes_written\").value\n    val rowsWritten = metrics(\"rows_written\").value\n\n    // Collect TaskCommitMessages from accumulator\n    val commitMessages = taskCommitMessagesAccum.value.asScala.toSeq\n\n    // Commit job with collected TaskCommitMessages\n    committer.foreach { c =>\n      val jobContext = createJobContext()\n      try {\n        c.commitJob(jobContext, commitMessages)\n        logInfo(\n          s\"Successfully committed write job to $outputPath: \" +\n            s\"$filesWritten files, $bytesWritten bytes, $rowsWritten rows\")\n      } catch {\n        case e: Exception =>\n          logError(\"Failed to commit job, aborting\", e)\n          c.abortJob(jobContext)\n          throw e\n      }\n    }\n\n    // Return empty RDD as write operations don't return data\n    sparkContext.emptyRDD[InternalRow]\n  }\n\n  override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    // Get the input data from the child operator\n    val childRDD = if (child.supportsColumnar) {\n      child.executeColumnar()\n    } else {\n      // If child doesn't support columnar, convert to columnar\n      child.execute().mapPartitionsInternal { _ =>\n        // TODO this could delegate to CometRowToColumnar, but maybe Comet\n        // does not need to support this case?\n        throw new UnsupportedOperationException(\n          \"Row-based child operators not yet supported for native write\")\n      
}\n    }\n\n    // Capture metadata before the transformation\n    val numPartitions = childRDD.getNumPartitions\n    val numOutputCols = child.output.length\n    val capturedCommitter = committer\n    val capturedJobTrackerID = jobTrackerID\n    val capturedNativeOp = nativeOp\n    val capturedAccumulator = taskCommitMessagesAccum // Capture accumulator for use in tasks\n\n    // Execute native write operation with task-level commit protocol\n    childRDD.mapPartitionsInternal { iter =>\n      val partitionId = org.apache.spark.TaskContext.getPartitionId()\n      val taskAttemptId = org.apache.spark.TaskContext.get().taskAttemptId()\n\n      // Setup task-level commit protocol if provided\n      val (workDir, taskContext, commitMsg) = capturedCommitter\n        .map { committer =>\n          val taskContext =\n            createTaskContext(capturedJobTrackerID, partitionId, taskAttemptId.toInt)\n\n          // Setup task - this creates the temporary working directory\n          committer.setupTask(taskContext)\n\n          // Get the work directory for temp files\n          val workPath = committer.newTaskTempFile(taskContext, None, \"\")\n          val workDir = new Path(workPath).getParent.toString\n\n          (Some(workDir), Some((committer, taskContext)), null)\n        }\n        .getOrElse((None, None, null))\n\n      // Modify the native operator to include task-specific parameters\n      val modifiedNativeOp = if (workDir.isDefined) {\n        val parquetWriter = capturedNativeOp.getParquetWriter.toBuilder\n          .setWorkDir(workDir.get)\n          .setJobId(capturedJobTrackerID)\n          .setTaskAttemptId(taskAttemptId.toInt)\n          .build()\n\n        capturedNativeOp.toBuilder.setParquetWriter(parquetWriter).build()\n      } else {\n        capturedNativeOp\n      }\n\n      val nativeMetrics = CometMetricNode.fromCometPlan(this)\n\n      val size = modifiedNativeOp.getSerializedSize\n      val planBytes = new Array[Byte](size)\n      val codedOutput = CodedOutputStream.newInstance(planBytes)\n      modifiedNativeOp.writeTo(codedOutput)\n      codedOutput.checkNoSpaceLeft()\n\n      val execIterator = new CometExecIterator(\n        CometExec.newIterId,\n        Seq(iter),\n        numOutputCols,\n        planBytes,\n        nativeMetrics,\n        numPartitions,\n        partitionId,\n        None,\n        Seq.empty)\n\n      // Wrap the iterator to handle task commit/abort and capture TaskCommitMessage\n      new Iterator[ColumnarBatch] {\n        private var completed = false\n        private var thrownException: Option[Throwable] = None\n\n        override def hasNext: Boolean = {\n          val result =\n            try {\n              execIterator.hasNext\n            } catch {\n              case e: Throwable =>\n                thrownException = Some(e)\n                handleTaskEnd()\n                throw e\n            }\n\n          if (!result && !completed) {\n            handleTaskEnd()\n          }\n\n          result\n        }\n\n        override def next(): ColumnarBatch = {\n          try {\n            execIterator.next()\n          } catch {\n            case e: Throwable =>\n              thrownException = Some(e)\n              handleTaskEnd()\n              throw e\n          }\n        }\n\n        private def handleTaskEnd(): Unit = {\n          if (!completed) {\n            completed = true\n\n            // Handle commit or abort based on whether an exception was thrown\n            taskContext.foreach { case (committer, ctx) =>\n     
         try {\n                if (thrownException.isEmpty) {\n                  // Commit the task and add message to accumulator\n                  val message = committer.commitTask(ctx)\n                  capturedAccumulator.add(message)\n                  logInfo(s\"Task ${ctx.getTaskAttemptID} committed successfully\")\n                } else {\n                  // Abort the task\n                  committer.abortTask(ctx)\n                  val exMsg = thrownException.get.getMessage\n                  logWarning(s\"Task ${ctx.getTaskAttemptID} aborted due to exception: $exMsg\")\n                }\n              } catch {\n                case e: Exception =>\n                  // Log the commit/abort exception but don't mask the original exception\n                  logError(s\"Error during task commit/abort: ${e.getMessage}\", e)\n                  if (thrownException.isEmpty) {\n                    // If no original exception, propagate the commit/abort exception\n                    throw e\n                  }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  /** Create a JobContext for the write job */\n  private def createJobContext(): Job = {\n    val job = Job.getInstance()\n    job.setJobID(new org.apache.hadoop.mapreduce.JobID(jobTrackerID, 0))\n    job\n  }\n\n  /** Create a TaskAttemptContext for a specific task */\n  private def createTaskContext(\n      jobId: String,\n      partitionId: Int,\n      attemptNumber: Int): TaskAttemptContext = {\n    val job = Job.getInstance()\n    val taskAttemptID = new TaskAttemptID(\n      new TaskID(new org.apache.hadoop.mapreduce.JobID(jobId, 0), TaskType.REDUCE, partitionId),\n      attemptNumber)\n    new TaskAttemptContextImpl(job.getConfiguration, taskAttemptID)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometPlan.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport org.apache.spark.sql.execution.SparkPlan\n\n/**\n * The base trait for physical Comet operators.\n */\ntrait CometPlan extends SparkPlan\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometScanExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.collection.mutable.HashMap\nimport scala.concurrent.duration.NANOSECONDS\nimport scala.reflect.ClassTag\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst._\nimport org.apache.spark.sql.catalyst.catalog.BucketSpec\nimport org.apache.spark.sql.catalyst.expressions._\nimport org.apache.spark.sql.catalyst.plans.QueryPlan\nimport org.apache.spark.sql.catalyst.plans.physical._\nimport org.apache.spark.sql.catalyst.util.CaseInsensitiveMap\nimport org.apache.spark.sql.comet.shims.ShimCometScanExec\nimport org.apache.spark.sql.errors.QueryExecutionErrors\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.datasources._\nimport org.apache.spark.sql.execution.datasources.parquet.{ParquetFileFormat, ParquetOptions}\nimport org.apache.spark.sql.execution.metric._\nimport org.apache.spark.sql.types._\nimport org.apache.spark.sql.vectorized.ColumnarBatch\nimport org.apache.spark.util.collection._\n\nimport org.apache.comet.{CometConf, MetricsSupport}\nimport org.apache.comet.parquet.CometParquetFileFormat\n\n/**\n * Comet physical scan node for DataSource V1. Most of the code here follow Spark's\n * [[FileSourceScanExec]].\n *\n * This is a hybrid scan where the native plan will contain a `ScanExec` that reads batches of\n * data from the JVM via JNI. The ultimate source of data may be a JVM implementation such as\n * Spark readers, or could be the `native_iceberg_compat` native scan.\n *\n * Note that scanImpl can only be `native_datafusion` after CometScanRule runs and before\n * CometExecRule runs. 
It will never be set to `native_datafusion` at execution time\n */\ncase class CometScanExec(\n    scanImpl: String,\n    @transient relation: HadoopFsRelation,\n    output: Seq[Attribute],\n    requiredSchema: StructType,\n    partitionFilters: Seq[Expression],\n    optionalBucketSet: Option[BitSet],\n    optionalNumCoalescedBuckets: Option[Int],\n    dataFilters: Seq[Expression],\n    tableIdentifier: Option[TableIdentifier],\n    disableBucketedScan: Boolean = false,\n    wrapped: FileSourceScanExec)\n    extends DataSourceScanExec\n    with ShimCometScanExec\n    with CometPlan {\n\n  assert(scanImpl != CometConf.SCAN_AUTO)\n\n  override val nodeName: String =\n    s\"CometScan [$scanImpl] $relation ${tableIdentifier.map(_.unquotedString).getOrElse(\"\")}\"\n\n  // FIXME: ideally we should reuse wrapped.supportsColumnar, however that fails many tests\n  override lazy val supportsColumnar: Boolean =\n    relation.fileFormat.supportBatch(relation.sparkSession, schema)\n\n  override def vectorTypes: Option[Seq[String]] = wrapped.vectorTypes\n\n  private lazy val driverMetrics: HashMap[String, Long] = HashMap.empty\n\n  /**\n   * Send the driver-side metrics. Before calling this function, selectedPartitions has been\n   * initialized. See SPARK-26327 for more details.\n   */\n  private def sendDriverMetrics(): Unit = {\n    driverMetrics.foreach(e => metrics(e._1).set(e._2))\n    val executionId = sparkContext.getLocalProperty(SQLExecution.EXECUTION_ID_KEY)\n    SQLMetrics.postDriverMetricUpdates(\n      sparkContext,\n      executionId,\n      metrics.filter(e => driverMetrics.contains(e._1)).values.toSeq)\n  }\n\n  private def isDynamicPruningFilter(e: Expression): Boolean =\n    e.find(_.isInstanceOf[PlanExpression[_]]).isDefined\n\n  @transient lazy val selectedPartitions: Array[PartitionDirectory] = {\n    val optimizerMetadataTimeNs = relation.location.metadataOpsTimeNs.getOrElse(0L)\n    val startTime = System.nanoTime()\n    val ret =\n      relation.location.listFiles(partitionFilters.filterNot(isDynamicPruningFilter), dataFilters)\n    setFilesNumAndSizeMetric(ret, true)\n    val timeTakenMs =\n      NANOSECONDS.toMillis((System.nanoTime() - startTime) + optimizerMetadataTimeNs)\n    driverMetrics(\"metadataTime\") = timeTakenMs\n    ret\n  }.toArray\n\n  // We can only determine the actual partitions at runtime when a dynamic partition filter is\n  // present. 
This is because such a filter relies on information that is only available at run\n  // time (for instance the keys used in the other side of a join).\n  @transient private lazy val dynamicallySelectedPartitions: Array[PartitionDirectory] = {\n    val dynamicPartitionFilters = partitionFilters.filter(isDynamicPruningFilter)\n\n    if (dynamicPartitionFilters.nonEmpty) {\n      val startTime = System.nanoTime()\n      // call the file index for the files matching all filters except dynamic partition filters\n      val predicate = dynamicPartitionFilters.reduce(And)\n      val partitionColumns = relation.partitionSchema\n      val boundPredicate = Predicate.create(\n        predicate.transform { case a: AttributeReference =>\n          val index = partitionColumns.indexWhere(a.name == _.name)\n          BoundReference(index, partitionColumns(index).dataType, nullable = true)\n        },\n        Nil)\n      val ret = selectedPartitions.filter(p => boundPredicate.eval(p.values))\n      setFilesNumAndSizeMetric(ret, false)\n      val timeTakenMs = (System.nanoTime() - startTime) / 1000 / 1000\n      driverMetrics(\"pruningTime\") = timeTakenMs\n      ret\n    } else {\n      selectedPartitions\n    }\n  }\n\n  // exposed for testing\n  lazy val bucketedScan: Boolean = wrapped.bucketedScan\n\n  override lazy val (outputPartitioning, outputOrdering): (Partitioning, Seq[SortOrder]) = {\n    if (bucketedScan) {\n      (wrapped.outputPartitioning, wrapped.outputOrdering)\n    } else {\n      val files = selectedPartitions.flatMap(partition => partition.files)\n      val numPartitions = files.length\n      (UnknownPartitioning(numPartitions), wrapped.outputOrdering)\n    }\n  }\n\n  /**\n   * Returns the data filters that are supported for this scan implementation. For\n   * native_datafusion scans, this excludes dynamic pruning filters (subqueries) and null checks\n   * on array columns (see [[isNullCheckOnArrayColumn]]).\n   */\n  lazy val supportedDataFilters: Seq[Expression] = {\n    if (scanImpl == CometConf.SCAN_NATIVE_DATAFUSION) {\n      dataFilters\n        .filterNot(isDynamicPruningFilter)\n        .filterNot(isNullCheckOnArrayColumn)\n    } else {\n      dataFilters\n    }\n  }\n\n  /**\n   * Returns true for IsNotNull/IsNull predicates on ArrayType columns.\n   *\n   * These must be excluded from native scan data filters because:\n   *\n   *   1. Parquet does not support predicate pushdown on repeated columns. The Parquet library's\n   *      SchemaCompatibilityValidator rejects filter predicates on repeated fields entirely\n   *      (SPARK-39393, PARQUET-34). Spark's own ParquetFilters excludes REPEATED columns from\n   *      pushdown for the same reason.\n   *\n   * 2. When Comet attaches these filters via ParquetSource.with_predicate(), DataFusion's list\n   * predicate pushdown (PR #19545) considers IsNotNull on List columns a supported predicate and\n   * pushes it into the Parquet reader as a RowFilter. This triggers an arrow-rs bug where\n   * ListArrayReader crashes on bare repeated primitives (\"item_reader def levels are None\").\n   *\n   * 3. 
Even without the arrow-rs bug, the filter is redundant: a bare repeated field is never\n   * null (an empty repeated field means zero elements, not null), and DataFusion's optimizer\n   * would eliminate the filter if it went through the normal planning path.\n   *\n   * Filtering these out is safe -- the predicate is still evaluated after reading, so correctness\n   * is preserved.\n   */\n  private def isNullCheckOnArrayColumn(expr: Expression): Boolean = expr match {\n    case IsNotNull(child) => child.dataType.isInstanceOf[ArrayType]\n    case IsNull(child) => child.dataType.isInstanceOf[ArrayType]\n    case _ => false\n  }\n\n  @transient\n  private lazy val pushedDownFilters = {\n    getPushedDownFilters(relation, supportedDataFilters)\n  }\n\n  override lazy val metadata: Map[String, String] =\n    if (wrapped == null) Map.empty else wrapped.metadata\n\n  override def verboseStringWithOperatorId(): String = {\n    val metadataStr = metadata.toSeq.sorted\n      .filterNot {\n        case (_, value) if (value.isEmpty || value.equals(\"[]\")) => true\n        case (key, _) if (key.equals(\"DataFilters\") || key.equals(\"Format\")) => true\n        case (_, _) => false\n      }\n      .map {\n        case (key, _) if (key.equals(\"Location\")) =>\n          val location = relation.location\n          val numPaths = location.rootPaths.length\n          val abbreviatedLocation = if (numPaths <= 1) {\n            location.rootPaths.mkString(\"[\", \", \", \"]\")\n          } else {\n            \"[\" + location.rootPaths.head + s\", ... ${numPaths - 1} entries]\"\n          }\n          s\"$key: ${location.getClass.getSimpleName} ${redact(abbreviatedLocation)}\"\n        case (key, value) => s\"$key: ${redact(value)}\"\n      }\n\n    s\"\"\"\n       |$formattedNodeName\n       |${ExplainUtils.generateFieldString(\"Output\", output)}\n       |${metadataStr.mkString(\"\\n\")}\n       |\"\"\".stripMargin\n  }\n\n  lazy val inputRDD: RDD[InternalRow] = {\n    val options = relation.options +\n      (FileFormat.OPTION_RETURNING_BATCH -> supportsColumnar.toString)\n    val readFile: (PartitionedFile) => Iterator[InternalRow] =\n      relation.fileFormat.buildReaderWithPartitionValues(\n        sparkSession = relation.sparkSession,\n        dataSchema = relation.dataSchema,\n        partitionSchema = relation.partitionSchema,\n        requiredSchema = requiredSchema,\n        filters = pushedDownFilters,\n        options = options,\n        hadoopConf =\n          relation.sparkSession.sessionState.newHadoopConfWithOptions(relation.options))\n\n    val readRDD = if (bucketedScan) {\n      createBucketedReadRDD(\n        relation.bucketSpec.get,\n        readFile,\n        dynamicallySelectedPartitions,\n        relation)\n    } else {\n      createReadRDD(readFile, dynamicallySelectedPartitions, relation)\n    }\n    sendDriverMetrics()\n    readRDD\n  }\n\n  override def inputRDDs(): Seq[RDD[InternalRow]] = {\n    inputRDD :: Nil\n  }\n\n  /** Helper for computing total number and size of files in selected partitions. 
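\n   *\n   * The driver metric keys are chosen as follows (mirroring the branch below):\n   * {{{\n   *   static = true and dynamic pruning filters present -> staticFilesNum / staticFilesSize\n   *   otherwise                                          -> numFiles / filesSize\n   * }}}\n   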
*/\n  private def setFilesNumAndSizeMetric(\n      partitions: Seq[PartitionDirectory],\n      static: Boolean): Unit = {\n    val filesNum = partitions.map(_.files.size.toLong).sum\n    val filesSize = partitions.map(_.files.map(_.getLen).sum).sum\n    if (!static || !partitionFilters.exists(isDynamicPruningFilter)) {\n      driverMetrics(\"numFiles\") = filesNum\n      driverMetrics(\"filesSize\") = filesSize\n    } else {\n      driverMetrics(\"staticFilesNum\") = filesNum\n      driverMetrics(\"staticFilesSize\") = filesSize\n    }\n    if (relation.partitionSchema.nonEmpty) {\n      driverMetrics(\"numPartitions\") = partitions.length\n    }\n  }\n\n  override lazy val metrics: Map[String, SQLMetric] =\n    wrapped.driverMetrics ++ CometMetricNode.baseScanMetrics(\n      session.sparkContext) ++ (relation.fileFormat match {\n      case m: MetricsSupport => m.getMetrics\n      case _ => Map.empty\n    })\n\n  protected override def doExecute(): RDD[InternalRow] = {\n    ColumnarToRowExec(this).doExecute()\n  }\n\n  protected override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    val rdd = inputRDD.asInstanceOf[RDD[ColumnarBatch]]\n\n    // These metrics are important for streaming solutions,\n    // despite there being similar metrics published by the native reader.\n    val numOutputRows = longMetric(\"numOutputRows\")\n    val scanTime = longMetric(\"scanTime\")\n    rdd.mapPartitionsInternal { batches =>\n      new Iterator[ColumnarBatch] {\n\n        override def hasNext: Boolean = {\n          // The `FileScanRDD` returns an iterator which scans the file during the `hasNext` call.\n          val startNs = System.nanoTime()\n          val res = batches.hasNext\n          scanTime += System.nanoTime() - startNs\n          res\n        }\n\n        override def next(): ColumnarBatch = {\n          val batch = batches.next()\n          numOutputRows += batch.numRows()\n          batch\n        }\n      }\n    }\n  }\n\n  override def executeCollect(): Array[InternalRow] = {\n    ColumnarToRowExec(this).executeCollect()\n  }\n\n  /**\n   * Get the file partitions for this scan without instantiating readers or an RDD. 
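Bucketed and non-bucketed\n   * layouts are partitioned the same way as for [[inputRDD]]. 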
This is useful\n   * for native scans that only need partition metadata.\n   */\n  def getFilePartitions(): Seq[FilePartition] = {\n    val filePartitions = if (bucketedScan) {\n      createFilePartitionsForBucketedScan(\n        relation.bucketSpec.get,\n        dynamicallySelectedPartitions,\n        relation)\n    } else {\n      createFilePartitionsForNonBucketedScan(dynamicallySelectedPartitions, relation)\n    }\n    sendDriverMetrics()\n    filePartitions\n  }\n\n  /**\n   * Create file partitions for bucketed scans without instantiating readers.\n   *\n   * @param bucketSpec\n   *   the bucketing spec.\n   * @param selectedPartitions\n   *   Hive-style partition that are part of the read.\n   * @param fsRelation\n   *   [[HadoopFsRelation]] associated with the read.\n   */\n  private def createFilePartitionsForBucketedScan(\n      bucketSpec: BucketSpec,\n      selectedPartitions: Array[PartitionDirectory],\n      fsRelation: HadoopFsRelation): Seq[FilePartition] = {\n    logInfo(s\"Planning with ${bucketSpec.numBuckets} buckets\")\n    val filesGroupedToBuckets =\n      selectedPartitions\n        .flatMap { p =>\n          p.files.map { f =>\n            getPartitionedFile(f, p)\n          }\n        }\n        .groupBy { f =>\n          BucketingUtils\n            .getBucketId(new Path(f.filePath.toString()).getName)\n            .getOrElse(throw QueryExecutionErrors.invalidBucketFile(f.filePath.toString()))\n        }\n\n    val prunedFilesGroupedToBuckets = if (optionalBucketSet.isDefined) {\n      val bucketSet = optionalBucketSet.get\n      filesGroupedToBuckets.filter { f =>\n        bucketSet.get(f._1)\n      }\n    } else {\n      filesGroupedToBuckets\n    }\n\n    optionalNumCoalescedBuckets\n      .map { numCoalescedBuckets =>\n        logInfo(s\"Coalescing to ${numCoalescedBuckets} buckets\")\n        val coalescedBuckets = prunedFilesGroupedToBuckets.groupBy(_._1 % numCoalescedBuckets)\n        Seq.tabulate(numCoalescedBuckets) { bucketId =>\n          val partitionedFiles = coalescedBuckets\n            .get(bucketId)\n            .map {\n              _.values.flatten.toArray\n            }\n            .getOrElse(Array.empty)\n          FilePartition(bucketId, partitionedFiles)\n        }\n      }\n      .getOrElse {\n        Seq.tabulate(bucketSpec.numBuckets) { bucketId =>\n          FilePartition(bucketId, prunedFilesGroupedToBuckets.getOrElse(bucketId, Array.empty))\n        }\n      }\n  }\n\n  /**\n   * Create file partitions for non-bucketed scans without instantiating readers.\n   *\n   * @param selectedPartitions\n   *   Hive-style partition that are part of the read.\n   * @param fsRelation\n   *   [[HadoopFsRelation]] associated with the read.\n   */\n  private def createFilePartitionsForNonBucketedScan(\n      selectedPartitions: Array[PartitionDirectory],\n      fsRelation: HadoopFsRelation): Seq[FilePartition] = {\n    val openCostInBytes = fsRelation.sparkSession.sessionState.conf.filesOpenCostInBytes\n    val maxSplitBytes =\n      FilePartition.maxSplitBytes(fsRelation.sparkSession, selectedPartitions)\n    logInfo(\n      s\"Planning scan with bin packing, max size: $maxSplitBytes bytes, \" +\n        s\"open cost is considered as scanning $openCostInBytes bytes.\")\n\n    // Filter files with bucket pruning if possible\n    val bucketingEnabled = fsRelation.sparkSession.sessionState.conf.bucketingEnabled\n    val shouldProcess: Path => Boolean = optionalBucketSet match {\n      case Some(bucketSet) if bucketingEnabled =>\n        // Do not prune 
the file if bucket file name is invalid\n        filePath => BucketingUtils.getBucketId(filePath.getName).forall(bucketSet.get)\n      case _ =>\n        _ => true\n    }\n\n    val splitFiles = selectedPartitions\n      .flatMap { partition =>\n        partition.files.flatMap { file =>\n          // getPath() is very expensive so we only want to call it once in this block:\n          val filePath = file.getPath\n\n          if (shouldProcess(filePath)) {\n            val isSplitable = relation.fileFormat.isSplitable(\n              relation.sparkSession,\n              relation.options,\n              filePath) &&\n              // SPARK-39634: Allow file splitting in combination with row index generation once\n              // the fix for PARQUET-2161 is available.\n              !isNeededForSchema(requiredSchema)\n            super.splitFiles(\n              sparkSession = relation.sparkSession,\n              file = file,\n              filePath = filePath,\n              isSplitable = isSplitable,\n              maxSplitBytes = maxSplitBytes,\n              partitionValues = partition.values)\n          } else {\n            Seq.empty\n          }\n        }\n      }\n      .sortBy(_.length)(implicitly[Ordering[Long]].reverse)\n\n    FilePartition.getFilePartitions(relation.sparkSession, splitFiles, maxSplitBytes)\n  }\n\n  /**\n   * Create an RDD for bucketed reads. The non-bucketed variant of this function is\n   * [[createReadRDD]].\n   *\n   * Each RDD partition being returned should include all the files with the same bucket id from\n   * all the given Hive partitions.\n   *\n   * @param bucketSpec\n   *   the bucketing spec.\n   * @param readFile\n   *   a function to read each (part of a) file.\n   * @param selectedPartitions\n   *   Hive-style partition that are part of the read.\n   * @param fsRelation\n   *   [[HadoopFsRelation]] associated with the read.\n   */\n  private def createBucketedReadRDD(\n      bucketSpec: BucketSpec,\n      readFile: (PartitionedFile) => Iterator[InternalRow],\n      selectedPartitions: Array[PartitionDirectory],\n      fsRelation: HadoopFsRelation): RDD[InternalRow] = {\n    val filePartitions =\n      createFilePartitionsForBucketedScan(bucketSpec, selectedPartitions, fsRelation)\n    prepareRDD(fsRelation, readFile, filePartitions)\n  }\n\n  /**\n   * Create an RDD for non-bucketed reads. 
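Files are split and bin-packed\n   * into [[FilePartition]]s by [[createFilePartitionsForNonBucketedScan]]. 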
The bucketed variant of this function is\n   * [[createBucketedReadRDD]].\n   *\n   * @param readFile\n   *   a function to read each (part of a) file.\n   * @param selectedPartitions\n   *   Hive-style partition that are part of the read.\n   * @param fsRelation\n   *   [[HadoopFsRelation]] associated with the read.\n   */\n  private def createReadRDD(\n      readFile: (PartitionedFile) => Iterator[InternalRow],\n      selectedPartitions: Array[PartitionDirectory],\n      fsRelation: HadoopFsRelation): RDD[InternalRow] = {\n    val filePartitions = createFilePartitionsForNonBucketedScan(selectedPartitions, fsRelation)\n    prepareRDD(fsRelation, readFile, filePartitions)\n  }\n\n  private def prepareRDD(\n      fsRelation: HadoopFsRelation,\n      readFile: (PartitionedFile) => Iterator[InternalRow],\n      partitions: Seq[FilePartition]): RDD[InternalRow] = {\n    val sqlConf = fsRelation.sparkSession.sessionState.conf\n    newFileScanRDD(\n      fsRelation,\n      readFile,\n      partitions,\n      new StructType(requiredSchema.fields ++ fsRelation.partitionSchema.fields),\n      new ParquetOptions(CaseInsensitiveMap(relation.options), sqlConf))\n  }\n\n  override def doCanonicalize(): CometScanExec = {\n    CometScanExec(\n      scanImpl,\n      relation,\n      output.map(QueryPlan.normalizeExpressions(_, output)),\n      requiredSchema,\n      QueryPlan.normalizePredicates(\n        CometScanUtils.filterUnusedDynamicPruningExpressions(partitionFilters),\n        output),\n      optionalBucketSet,\n      optionalNumCoalescedBuckets,\n      QueryPlan.normalizePredicates(dataFilters, output),\n      None,\n      disableBucketedScan,\n      null)\n  }\n}\n\nobject CometScanExec {\n\n  def apply(\n      scanExec: FileSourceScanExec,\n      session: SparkSession,\n      scanImpl: String): CometScanExec = {\n    // TreeNode.mapProductIterator is protected method.\n    def mapProductIterator[B: ClassTag](product: Product, f: Any => B): Array[B] = {\n      val arr = Array.ofDim[B](product.productArity)\n      var i = 0\n      while (i < arr.length) {\n        arr(i) = f(product.productElement(i))\n        i += 1\n      }\n      arr\n    }\n\n    // Replacing the relation in FileSourceScanExec by `copy` seems causing some issues\n    // on other Spark distributions if FileSourceScanExec constructor is changed.\n    // Using `makeCopy` to avoid the issue.\n    // https://github.com/apache/arrow-datafusion-comet/issues/190\n    def transform(arg: Any): AnyRef = arg match {\n      case _: HadoopFsRelation =>\n        scanExec.relation.copy(fileFormat = new CometParquetFileFormat(session))(session)\n      case other: AnyRef => other\n      case null => null\n    }\n\n    val newArgs = mapProductIterator(scanExec, transform)\n    val wrapped = scanExec.makeCopy(newArgs).asInstanceOf[FileSourceScanExec]\n    val batchScanExec = CometScanExec(\n      scanImpl,\n      wrapped.relation,\n      wrapped.output,\n      wrapped.requiredSchema,\n      wrapped.partitionFilters,\n      wrapped.optionalBucketSet,\n      wrapped.optionalNumCoalescedBuckets,\n      wrapped.dataFilters,\n      wrapped.tableIdentifier,\n      wrapped.disableBucketedScan,\n      wrapped)\n    scanExec.logicalLink.foreach(batchScanExec.setLogicalLink)\n    batchScanExec\n  }\n\n  def isFileFormatSupported(fileFormat: FileFormat): Boolean = {\n    // Only support Spark's built-in Parquet scans, not others such as Delta which use a subclass\n    // of ParquetFileFormat.\n    fileFormat.getClass().equals(classOf[ParquetFileFormat])\n  
}\n\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometScanUtils.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport org.apache.spark.sql.catalyst.expressions.{DynamicPruningExpression, Expression, Literal}\n\nobject CometScanUtils {\n\n  /**\n   * Filters unused DynamicPruningExpression expressions - one which has been replaced with\n   * DynamicPruningExpression(Literal.TrueLiteral) during Physical Planning\n   */\n  def filterUnusedDynamicPruningExpressions(predicates: Seq[Expression]): Seq[Expression] = {\n    predicates.filterNot(_ == DynamicPruningExpression(Literal.TrueLiteral))\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometSparkToColumnarExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.collection.mutable.ListBuffer\n\nimport org.apache.spark.TaskContext\nimport org.apache.spark.broadcast.Broadcast\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, SortOrder}\nimport org.apache.spark.sql.catalyst.plans.physical.Partitioning\nimport org.apache.spark.sql.comet.execution.arrow.CometArrowConverters\nimport org.apache.spark.sql.execution.{RowToColumnarTransition, SparkPlan}\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}\nimport org.apache.spark.sql.types._\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport org.apache.comet.{CometConf, DataTypeSupport}\nimport org.apache.comet.serde.OperatorOuterClass\nimport org.apache.comet.serde.operator.CometSink\n\ncase class CometSparkToColumnarExec(child: SparkPlan)\n    extends RowToColumnarTransition\n    with CometPlan {\n  override def output: Seq[Attribute] = child.output\n\n  override def outputPartitioning: Partitioning = child.outputPartitioning\n\n  override def outputOrdering: Seq[SortOrder] = child.outputOrdering\n\n  override protected def doExecute(): RDD[InternalRow] = {\n    child.execute()\n  }\n\n  override def doExecuteBroadcast[T](): Broadcast[T] = {\n    child.executeBroadcast()\n  }\n\n  override def supportsColumnar: Boolean = true\n\n  override def nodeName: String = if (child.supportsColumnar) {\n    \"CometSparkColumnarToColumnar\"\n  } else {\n    \"CometSparkRowToColumnar\"\n  }\n\n  override lazy val metrics: Map[String, SQLMetric] = Map(\n    \"numInputRows\" -> SQLMetrics.createMetric(sparkContext, \"number of input rows\"),\n    \"numOutputBatches\" -> SQLMetrics.createMetric(sparkContext, \"number of output batches\"),\n    \"conversionTime\" -> SQLMetrics.createNanoTimingMetric(\n      sparkContext,\n      \"time converting Spark batches to Arrow batches\"))\n\n  // The conversion happens in next(), so wrap the call to measure time spent.\n  private def createTimingIter(\n      iter: Iterator[ColumnarBatch],\n      numInputRows: SQLMetric,\n      numOutputBatches: SQLMetric,\n      conversionTime: SQLMetric): Iterator[ColumnarBatch] = {\n    new Iterator[ColumnarBatch] {\n\n      override def hasNext: Boolean = {\n        iter.hasNext\n      }\n\n      override def next(): ColumnarBatch = {\n        val startNs = System.nanoTime()\n        val batch = iter.next()\n        conversionTime += System.nanoTime() - startNs\n        numInputRows += batch.numRows()\n        numOutputBatches += 1\n        batch\n      }\n    }\n  }\n\n  override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    val 
numInputRows = longMetric(\"numInputRows\")\n    val numOutputBatches = longMetric(\"numOutputBatches\")\n    val conversionTime = longMetric(\"conversionTime\")\n    val maxRecordsPerBatch = CometConf.COMET_BATCH_SIZE.get(conf)\n    // Use UTC for Arrow schema timezone to match the native side, which always\n    // deserializes Timestamp as Timestamp(Microsecond, Some(\"UTC\")). Spark's internal\n    // timestamp representation is always UTC microseconds, so the timezone here is\n    // purely schema metadata. Using session timezone would cause Arrow RowConverter\n    // schema mismatch errors in non-UTC sessions. See COMET-2720.\n    val timeZoneId = \"UTC\"\n    val schema = child.schema\n\n    if (child.supportsColumnar) {\n      child\n        .executeColumnar()\n        .mapPartitionsInternal { sparkBatches =>\n          val arrowBatches =\n            sparkBatches.flatMap { sparkBatch =>\n              val context = TaskContext.get()\n              CometArrowConverters.columnarBatchToArrowBatchIter(\n                sparkBatch,\n                schema,\n                maxRecordsPerBatch,\n                timeZoneId,\n                context)\n            }\n          createTimingIter(arrowBatches, numInputRows, numOutputBatches, conversionTime)\n        }\n    } else {\n      child\n        .execute()\n        .mapPartitionsInternal { sparkBatches =>\n          val context = TaskContext.get()\n          val arrowBatches =\n            CometArrowConverters.rowToArrowBatchIter(\n              sparkBatches,\n              schema,\n              maxRecordsPerBatch,\n              timeZoneId,\n              context)\n          createTimingIter(arrowBatches, numInputRows, numOutputBatches, conversionTime)\n        }\n    }\n  }\n\n  override protected def withNewChildInternal(newChild: SparkPlan): CometSparkToColumnarExec =\n    copy(child = newChild)\n\n}\n\nobject CometSparkToColumnarExec extends CometSink[SparkPlan] with DataTypeSupport {\n\n  // uses CometArrowConverters, which re-uses arrays\n  override def isFfiSafe: Boolean = false\n\n  override def createExec(\n      nativeOp: OperatorOuterClass.Operator,\n      op: SparkPlan): CometNativeExec = {\n    CometScanWrapper(nativeOp, CometSparkToColumnarExec(op))\n  }\n\n  override def isTypeSupported(\n      dt: DataType,\n      name: String,\n      fallbackReasons: ListBuffer[String]): Boolean = dt match {\n    case _: ArrayType | _: MapType => false\n    case _ => super.isTypeSupported(dt, name, fallbackReasons)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometTakeOrderedAndProjectExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport org.apache.spark.TaskContext\nimport org.apache.spark.rdd.{ParallelCollectionRDD, RDD}\nimport org.apache.spark.serializer.Serializer\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, AttributeSet, NamedExpression, SortOrder}\nimport org.apache.spark.sql.catalyst.util.truncatedString\nimport org.apache.spark.sql.comet.execution.shuffle.{CometShuffledBatchRDD, CometShuffleExchangeExec}\nimport org.apache.spark.sql.execution.{SparkPlan, TakeOrderedAndProjectExec, UnaryExecNode, UnsafeRowSerializer}\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics, SQLShuffleReadMetricsReporter, SQLShuffleWriteMetricsReporter}\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport org.apache.comet.{CometConf, ConfigEntry}\nimport org.apache.comet.CometSparkSessionExtensions.isCometShuffleEnabled\nimport org.apache.comet.serde.{Compatible, OperatorOuterClass, SupportLevel, Unsupported}\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProto, supportedSortType}\nimport org.apache.comet.serde.operator.CometSink\n\nobject CometTakeOrderedAndProjectExec extends CometSink[TakeOrderedAndProjectExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_TAKE_ORDERED_AND_PROJECT_ENABLED)\n\n  override def getSupportLevel(op: TakeOrderedAndProjectExec): SupportLevel = {\n    if (!isCometShuffleEnabled(op.conf)) {\n      return Unsupported(Some(\"TakeOrderedAndProject requires shuffle to be enabled\"))\n    }\n    op.projectList.foreach { p =>\n      val o = exprToProto(p, op.child.output)\n      if (o.isEmpty) {\n        return Unsupported(Some(s\"unsupported projection: $p\"))\n      }\n      o\n    }\n    op.sortOrder.foreach { s =>\n      val o = exprToProto(s, op.child.output)\n      if (o.isEmpty) {\n        return Unsupported(Some(s\"unsupported sort order: $s\"))\n      }\n      o\n    }\n    if (!supportedSortType(op, op.sortOrder)) {\n      return Unsupported(Some(\"unsupported data type in sort order\"))\n    }\n    Compatible()\n  }\n\n  override def createExec(\n      nativeOp: OperatorOuterClass.Operator,\n      op: TakeOrderedAndProjectExec): CometNativeExec = {\n    CometSinkPlaceHolder(\n      nativeOp,\n      op,\n      CometTakeOrderedAndProjectExec(\n        op,\n        op.output,\n        op.limit,\n        op.offset,\n        op.sortOrder,\n        op.projectList,\n        op.child))\n  }\n}\n\n/**\n * Comet physical plan node for Spark `TakeOrderedAndProjectExec`.\n *\n * It is used to execute a `TakeOrderedAndProjectExec` physical operator by using Comet native\n * engine. 
It is not like other physical plan nodes which are wrapped by `CometExec`, because it\n * contains two native executions separated by a Comet shuffle exchange.\n */\ncase class CometTakeOrderedAndProjectExec(\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    limit: Int,\n    offset: Int,\n    sortOrder: Seq[SortOrder],\n    projectList: Seq[NamedExpression],\n    child: SparkPlan)\n    extends CometExec\n    with UnaryExecNode {\n\n  override def producedAttributes: AttributeSet = outputSet ++ AttributeSet(projectList)\n\n  private lazy val writeMetrics =\n    SQLShuffleWriteMetricsReporter.createShuffleWriteMetrics(sparkContext)\n\n  private lazy val readMetrics =\n    SQLShuffleReadMetricsReporter.createShuffleReadMetrics(sparkContext)\n\n  override lazy val metrics: Map[String, SQLMetric] = Map(\n    \"dataSize\" -> SQLMetrics.createSizeMetric(sparkContext, \"data size\"),\n    \"numPartitions\" -> SQLMetrics.createMetric(\n      sparkContext,\n      \"number of partitions\")) ++ readMetrics ++ writeMetrics ++ CometMetricNode.shuffleMetrics(\n    sparkContext)\n\n  private lazy val serializer: Serializer =\n    new UnsafeRowSerializer(child.output.size, longMetric(\"dataSize\"))\n\n  // Exposed for testing.\n  lazy val orderingSatisfies: Boolean =\n    SortOrder.orderingSatisfies(child.outputOrdering, sortOrder)\n\n  protected override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    val childRDD = child.executeColumnar()\n    if (childRDD.getNumPartitions == 0 || limit == 0) {\n      new ParallelCollectionRDD(sparkContext, Seq.empty[ColumnarBatch], 1, Map.empty)\n    } else {\n      val singlePartitionRDD = if (childRDD.getNumPartitions == 1) {\n        childRDD\n      } else {\n        val localTopK = if (orderingSatisfies) {\n          CometExecUtils.getNativeLimitRDD(childRDD, child.output, limit)\n        } else {\n          val numParts = childRDD.getNumPartitions\n          // Serialize the plan once before mapping to avoid repeated serialization per partition\n          val topK =\n            CometExecUtils\n              .getTopKNativePlan(child.output, sortOrder, child, limit)\n              .get\n          val serializedTopK = CometExec.serializeNativePlan(topK)\n          val numOutputCols = child.output.length\n          childRDD.mapPartitionsWithIndexInternal { case (idx, iter) =>\n            CometExec.getCometIterator(Seq(iter), numOutputCols, serializedTopK, numParts, idx)\n          }\n        }\n\n        // Shuffle to Single Partition using Comet native shuffle\n        val dep = CometShuffleExchangeExec.prepareShuffleDependency(\n          localTopK,\n          child.output,\n          outputPartitioning,\n          serializer,\n          metrics)\n        metrics(\"numPartitions\").set(dep.partitioner.numPartitions)\n\n        new CometShuffledBatchRDD(dep, readMetrics)\n      }\n\n      // Serialize the plan once before mapping to avoid repeated serialization per partition\n      val topKAndProjection = CometExecUtils\n        .getProjectionNativePlan(projectList, child.output, sortOrder, child, limit, offset)\n        .get\n      val serializedTopKAndProjection = CometExec.serializeNativePlan(topKAndProjection)\n      val finalOutputLength = output.length\n      singlePartitionRDD.mapPartitionsInternal { iter =>\n        val it = CometExec.getCometIterator(\n          Seq(iter),\n          finalOutputLength,\n          serializedTopKAndProjection,\n          1,\n          0)\n        setSubqueries(it.id, this)\n\n      
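  // Register this plan's subqueries under the returned iterator's id so the
        // native plan can resolve them during execution; they are cleaned up in the
        // task completion listener below.\n      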
  Option(TaskContext.get()).foreach { context =>\n          context.addTaskCompletionListener[Unit] { _ =>\n            cleanSubqueries(it.id, this)\n          }\n        }\n\n        it\n      }\n    }\n  }\n\n  override def simpleString(maxFields: Int): String = {\n    val orderByString = truncatedString(sortOrder, \"[\", \",\", \"]\", maxFields)\n    val outputString = truncatedString(output, \"[\", \",\", \"]\", maxFields)\n\n    s\"CometTakeOrderedAndProjectExec(limit=$limit, offset=$offset, \" +\n      s\"orderBy=$orderByString, output=$outputString)\"\n  }\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/CometWindowExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.sql.catalyst.expressions.{Alias, Attribute, AttributeReference, AttributeSet, CurrentRow, Expression, FrameLessOffsetWindowFunction, Lag, Lead, NamedExpression, RangeFrame, RowFrame, SortOrder, SpecifiedWindowFrame, UnboundedFollowing, UnboundedPreceding, WindowExpression}\nimport org.apache.spark.sql.catalyst.expressions.aggregate.{AggregateExpression, Complete, Count, Max, Min, Sum}\nimport org.apache.spark.sql.catalyst.plans.physical.Partitioning\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}\nimport org.apache.spark.sql.execution.window.WindowExec\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.DecimalType\n\nimport com.google.common.base.Objects\n\nimport org.apache.comet.{CometConf, ConfigEntry}\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.serde.{AggSerde, CometOperatorSerde, Incompatible, OperatorOuterClass, SupportLevel}\nimport org.apache.comet.serde.OperatorOuterClass.Operator\nimport org.apache.comet.serde.QueryPlanSerde.{aggExprToProto, exprToProto, scalarFunctionExprToProto}\n\nobject CometWindowExec extends CometOperatorSerde[WindowExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_WINDOW_ENABLED)\n\n  override def getSupportLevel(op: WindowExec): SupportLevel = {\n    Incompatible(Some(\"Native WindowExec has known correctness issues\"))\n  }\n\n  override def convert(\n      op: WindowExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    val output = op.child.output\n\n    val winExprs: Array[WindowExpression] = op.windowExpression.flatMap { expr =>\n      expr match {\n        case alias: Alias =>\n          alias.child match {\n            case winExpr: WindowExpression =>\n              Some(winExpr)\n            case _ =>\n              None\n          }\n        case _ =>\n          None\n      }\n    }.toArray\n\n    if (winExprs.length != op.windowExpression.length) {\n      withInfo(op, \"Unsupported window expression(s)\")\n      return None\n    }\n\n    // Offset window functions (LAG, LEAD) support arbitrary partition and order specs, so skip\n    // the validatePartitionAndSortSpecsForWindowFunc check which requires partition columns to\n    // equal order columns. 
That stricter check is only needed for aggregate window functions.\n    val hasOnlyOffsetFunctions = winExprs.nonEmpty &&\n      winExprs.forall(e => e.windowFunction.isInstanceOf[FrameLessOffsetWindowFunction])\n    if (!hasOnlyOffsetFunctions && op.partitionSpec.nonEmpty && op.orderSpec.nonEmpty &&\n      !validatePartitionAndSortSpecsForWindowFunc(op.partitionSpec, op.orderSpec, op)) {\n      return None\n    }\n\n    val windowExprProto = winExprs.map(windowExprToProto(_, output, op.conf))\n    val partitionExprs = op.partitionSpec.map(exprToProto(_, op.child.output))\n\n    val sortOrders = op.orderSpec.map(exprToProto(_, op.child.output))\n\n    if (windowExprProto.forall(_.isDefined) && partitionExprs.forall(_.isDefined)\n      && sortOrders.forall(_.isDefined)) {\n      val windowBuilder = OperatorOuterClass.Window.newBuilder()\n      windowBuilder.addAllWindowExpr(windowExprProto.map(_.get).toIterable.asJava)\n      windowBuilder.addAllPartitionByList(partitionExprs.map(_.get).asJava)\n      windowBuilder.addAllOrderByList(sortOrders.map(_.get).asJava)\n      Some(builder.setWindow(windowBuilder).build())\n    } else {\n      None\n    }\n\n  }\n\n  private def windowExprToProto(\n      windowExpr: WindowExpression,\n      output: Seq[Attribute],\n      conf: SQLConf): Option[OperatorOuterClass.WindowExpr] = {\n\n    val aggregateExpressions: Array[AggregateExpression] = windowExpr.flatMap { expr =>\n      expr match {\n        case agg: AggregateExpression =>\n          agg.aggregateFunction match {\n            case _: Count =>\n              Some(agg)\n            case min: Min =>\n              if (AggSerde.minMaxDataTypeSupported(min.dataType)) {\n                Some(agg)\n              } else {\n                withInfo(windowExpr, s\"datatype ${min.dataType} is not supported\", expr)\n                None\n              }\n            case max: Max =>\n              if (AggSerde.minMaxDataTypeSupported(max.dataType)) {\n                Some(agg)\n              } else {\n                withInfo(windowExpr, s\"datatype ${max.dataType} is not supported\", expr)\n                None\n              }\n            case s: Sum =>\n              if (AggSerde.sumDataTypeSupported(s.dataType) && !s.dataType\n                  .isInstanceOf[DecimalType]) {\n                Some(agg)\n              } else {\n                withInfo(windowExpr, s\"datatype ${s.dataType} is not supported\", expr)\n                None\n              }\n            case _ =>\n              withInfo(\n                windowExpr,\n                s\"aggregate ${agg.aggregateFunction}\" +\n                  \" is not supported for window function\",\n                expr)\n              None\n          }\n        case _ =>\n          None\n      }\n    }.toArray\n\n    val (aggExpr, builtinFunc, ignoreNulls) = if (aggregateExpressions.nonEmpty) {\n      val modes = aggregateExpressions.map(_.mode).distinct\n      assert(modes.size == 1 && modes.head == Complete)\n      (aggExprToProto(aggregateExpressions.head, output, true, conf), None, false)\n    } else {\n      windowExpr.windowFunction match {\n        case lag: Lag =>\n          val inputExpr = exprToProto(lag.input, output)\n          val offsetExpr = exprToProto(lag.inputOffset, output)\n          val defaultExpr = exprToProto(lag.default, output)\n          val func = scalarFunctionExprToProto(\"lag\", inputExpr, offsetExpr, defaultExpr)\n          (None, func, lag.ignoreNulls)\n        case lead: Lead =>\n          val inputExpr = 
exprToProto(lead.input, output)\n          val offsetExpr = exprToProto(lead.offset, output)\n          val defaultExpr = exprToProto(lead.default, output)\n          val func = scalarFunctionExprToProto(\"lead\", inputExpr, offsetExpr, defaultExpr)\n          (None, func, lead.ignoreNulls)\n        case _ =>\n          (None, exprToProto(windowExpr.windowFunction, output), false)\n      }\n    }\n\n    if (aggExpr.isEmpty && builtinFunc.isEmpty) {\n      return None\n    }\n\n    val f = windowExpr.windowSpec.frameSpecification\n\n    val (frameType, lowerBound, upperBound) = f match {\n      case SpecifiedWindowFrame(frameType, lBound, uBound) =>\n        val frameProto = frameType match {\n          case RowFrame => OperatorOuterClass.WindowFrameType.Rows\n          case RangeFrame => OperatorOuterClass.WindowFrameType.Range\n        }\n\n        val lBoundProto = lBound match {\n          case UnboundedPreceding =>\n            OperatorOuterClass.LowerWindowFrameBound\n              .newBuilder()\n              .setUnboundedPreceding(OperatorOuterClass.UnboundedPreceding.newBuilder().build())\n              .build()\n          case CurrentRow =>\n            OperatorOuterClass.LowerWindowFrameBound\n              .newBuilder()\n              .setCurrentRow(OperatorOuterClass.CurrentRow.newBuilder().build())\n              .build()\n          case e if frameType == RowFrame =>\n            val offset = e.eval() match {\n              case i: Integer => i.toLong\n              case l: Long => l\n              case _ => return None\n            }\n            OperatorOuterClass.LowerWindowFrameBound\n              .newBuilder()\n              .setPreceding(\n                OperatorOuterClass.Preceding\n                  .newBuilder()\n                  .setOffset(offset)\n                  .build())\n              .build()\n          case _ =>\n            // TODO add support for numeric and temporal RANGE BETWEEN expressions\n            // see https://github.com/apache/datafusion-comet/issues/1246\n            return None\n        }\n\n        val uBoundProto = uBound match {\n          case UnboundedFollowing =>\n            OperatorOuterClass.UpperWindowFrameBound\n              .newBuilder()\n              .setUnboundedFollowing(OperatorOuterClass.UnboundedFollowing.newBuilder().build())\n              .build()\n          case CurrentRow =>\n            OperatorOuterClass.UpperWindowFrameBound\n              .newBuilder()\n              .setCurrentRow(OperatorOuterClass.CurrentRow.newBuilder().build())\n              .build()\n          case e if frameType == RowFrame =>\n            val offset = e.eval() match {\n              case i: Integer => i.toLong\n              case l: Long => l\n              case _ => return None\n            }\n            OperatorOuterClass.UpperWindowFrameBound\n              .newBuilder()\n              .setFollowing(\n                OperatorOuterClass.Following\n                  .newBuilder()\n                  .setOffset(offset)\n                  .build())\n              .build()\n          case _ =>\n            // TODO add support for numeric and temporal RANGE BETWEEN expressions\n            // see https://github.com/apache/datafusion-comet/issues/1246\n            return None\n        }\n\n        (frameProto, lBoundProto, uBoundProto)\n      case _ =>\n        (\n          OperatorOuterClass.WindowFrameType.Rows,\n          OperatorOuterClass.LowerWindowFrameBound\n            .newBuilder()\n            
.setUnboundedPreceding(OperatorOuterClass.UnboundedPreceding.newBuilder().build())\n            .build(),\n          OperatorOuterClass.UpperWindowFrameBound\n            .newBuilder()\n            .setUnboundedFollowing(OperatorOuterClass.UnboundedFollowing.newBuilder().build())\n            .build())\n    }\n\n    val frame = OperatorOuterClass.WindowFrame\n      .newBuilder()\n      .setFrameType(frameType)\n      .setLowerBound(lowerBound)\n      .setUpperBound(upperBound)\n      .build()\n\n    val spec =\n      OperatorOuterClass.WindowSpecDefinition.newBuilder().setFrameSpecification(frame).build()\n\n    if (builtinFunc.isDefined) {\n      Some(\n        OperatorOuterClass.WindowExpr\n          .newBuilder()\n          .setBuiltInWindowFunction(builtinFunc.get)\n          .setSpec(spec)\n          .setIgnoreNulls(ignoreNulls)\n          .build())\n    } else if (aggExpr.isDefined) {\n      Some(\n        OperatorOuterClass.WindowExpr\n          .newBuilder()\n          .setAggFunc(aggExpr.get)\n          .setSpec(spec)\n          .build())\n    } else {\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: WindowExec): CometNativeExec = {\n    CometWindowExec(\n      nativeOp,\n      op,\n      op.output,\n      op.windowExpression,\n      op.partitionSpec,\n      op.orderSpec,\n      op.child,\n      SerializedPlan(None))\n  }\n\n  private def validatePartitionAndSortSpecsForWindowFunc(\n      partitionSpec: Seq[Expression],\n      orderSpec: Seq[SortOrder],\n      op: SparkPlan): Boolean = {\n    if (partitionSpec.length != orderSpec.length) {\n      return false\n    }\n\n    val partitionColumnNames = partitionSpec.collect {\n      case a: AttributeReference => a.name\n      case other =>\n        withInfo(op, s\"Unsupported partition expression: ${other.getClass.getSimpleName}\")\n        return false\n    }\n\n    val orderColumnNames = orderSpec.collect { case s: SortOrder =>\n      s.child match {\n        case a: AttributeReference => a.name\n        case other =>\n          withInfo(op, s\"Unsupported sort expression: ${other.getClass.getSimpleName}\")\n          return false\n      }\n    }\n\n    if (partitionColumnNames.zip(orderColumnNames).exists { case (partCol, orderCol) =>\n        partCol != orderCol\n      }) {\n      withInfo(op, \"Partitioning and sorting specifications must be the same.\")\n      return false\n    }\n\n    true\n  }\n\n}\n\n/**\n * Comet physical plan node for Spark `WindowExec`.\n *\n * It is used to execute a `WindowExec` physical operator using the Comet native engine. 
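For example, a window expression\n * such as `sum(x) OVER (PARTITION BY a ORDER BY b)` is planned by Spark as a `WindowExec` that\n * this node can replace. 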
It is not\n * like other physical plan nodes which are wrapped by `CometExec`, because it contains two native\n * executions separated by a Comet shuffle exchange.\n */\ncase class CometWindowExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    windowExpression: Seq[NamedExpression],\n    partitionSpec: Seq[Expression],\n    orderSpec: Seq[SortOrder],\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec {\n\n  override def nodeName: String = \"CometWindowExec\"\n  override def producedAttributes: AttributeSet = outputSet ++ AttributeSet(windowExpression)\n\n  override lazy val metrics: Map[String, SQLMetric] = Map(\n    \"dataSize\" -> SQLMetrics.createSizeMetric(sparkContext, \"data size\"),\n    \"numPartitions\" -> SQLMetrics.createMetric(sparkContext, \"number of partitions\"))\n\n  override def outputOrdering: Seq[SortOrder] = child.outputOrdering\n\n  override def outputPartitioning: Partitioning = child.outputPartitioning\n\n  protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def stringArgs: Iterator[Any] =\n    Iterator(output, windowExpression, partitionSpec, orderSpec, child)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometWindowExec =>\n        this.output == other.output &&\n        this.windowExpression == other.windowExpression && this.child == other.child &&\n        this.partitionSpec == other.partitionSpec && this.orderSpec == other.orderSpec &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int =\n    Objects.hashCode(output, windowExpression, partitionSpec, orderSpec, child)\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/DecimalPrecision.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.math.{max, min}\n\nimport org.apache.spark.sql.catalyst.expressions._\nimport org.apache.spark.sql.types.DecimalType\n\n/**\n * This is mostly copied from the `decimalAndDecimal` method in Spark's [[DecimalPrecision]] which\n * existed before Spark 3.4.\n *\n * In Spark 3.4 and up, the method `decimalAndDecimal` is removed from Spark, and for binary\n * expressions with different decimal precisions from children, the difference is handled in the\n * expression evaluation instead (see SPARK-39316).\n *\n * However in Comet, we still have to rely on the type coercion to ensure the decimal precision is\n * the same for both children of a binary expression, since our arithmetic kernels do not yet\n * handle the case where precision is different. Therefore, this re-apply the logic in the\n * original rule, and rely on `Cast` and `CheckOverflow` for decimal binary operation.\n *\n * TODO: instead of relying on this rule, it's probably better to enhance arithmetic kernels to\n * handle different decimal precisions\n */\nobject DecimalPrecision {\n  def promote(\n      allowPrecisionLoss: Boolean,\n      expr: Expression,\n      nullOnOverflow: Boolean): Expression = {\n    expr.transformUp {\n      // This means the binary expression is already optimized with the rule in Spark. 
This can\n      // happen if the Spark version is < 3.4\n      case e: BinaryArithmetic if e.left.prettyName == \"promote_precision\" => e\n\n      case add @ Add(DecimalExpression(p1, s1), DecimalExpression(p2, s2), _) =>\n        val resultScale = max(s1, s2)\n        val resultType = if (allowPrecisionLoss) {\n          DecimalType.adjustPrecisionScale(max(p1 - s1, p2 - s2) + resultScale + 1, resultScale)\n        } else {\n          DecimalType.bounded(max(p1 - s1, p2 - s2) + resultScale + 1, resultScale)\n        }\n        CheckOverflow(add, resultType, nullOnOverflow)\n\n      case sub @ Subtract(DecimalExpression(p1, s1), DecimalExpression(p2, s2), _) =>\n        val resultScale = max(s1, s2)\n        val resultType = if (allowPrecisionLoss) {\n          DecimalType.adjustPrecisionScale(max(p1 - s1, p2 - s2) + resultScale + 1, resultScale)\n        } else {\n          DecimalType.bounded(max(p1 - s1, p2 - s2) + resultScale + 1, resultScale)\n        }\n        CheckOverflow(sub, resultType, nullOnOverflow)\n\n      case mul @ Multiply(DecimalExpression(p1, s1), DecimalExpression(p2, s2), _) =>\n        val resultType = if (allowPrecisionLoss) {\n          DecimalType.adjustPrecisionScale(p1 + p2 + 1, s1 + s2)\n        } else {\n          DecimalType.bounded(p1 + p2 + 1, s1 + s2)\n        }\n        CheckOverflow(mul, resultType, nullOnOverflow)\n\n      case div @ Divide(DecimalExpression(p1, s1), DecimalExpression(p2, s2), _) =>\n        val resultType = if (allowPrecisionLoss) {\n          // Precision: p1 - s1 + s2 + max(6, s1 + p2 + 1)\n          // Scale: max(6, s1 + p2 + 1)\n          val intDig = p1 - s1 + s2\n          val scale = max(DecimalType.MINIMUM_ADJUSTED_SCALE, s1 + p2 + 1)\n          val prec = intDig + scale\n          DecimalType.adjustPrecisionScale(prec, scale)\n        } else {\n          var intDig = min(DecimalType.MAX_SCALE, p1 - s1 + s2)\n          var decDig = min(DecimalType.MAX_SCALE, max(6, s1 + p2 + 1))\n          val diff = (intDig + decDig) - DecimalType.MAX_SCALE\n          if (diff > 0) {\n            decDig -= diff / 2 + 1\n            intDig = DecimalType.MAX_SCALE - decDig\n          }\n          DecimalType.bounded(intDig + decDig, decDig)\n        }\n        CheckOverflow(div, resultType, nullOnOverflow)\n\n      case rem @ Remainder(DecimalExpression(p1, s1), DecimalExpression(p2, s2), _) =>\n        val resultType = if (allowPrecisionLoss) {\n          DecimalType.adjustPrecisionScale(min(p1 - s1, p2 - s2) + max(s1, s2), max(s1, s2))\n        } else {\n          DecimalType.bounded(min(p1 - s1, p2 - s2) + max(s1, s2), max(s1, s2))\n        }\n        CheckOverflow(rem, resultType, nullOnOverflow)\n\n      case e => e\n    }\n  }\n\n  // TODO: consider using `org.apache.spark.sql.types.DecimalExpression` for Spark 3.5+\n  object DecimalExpression {\n    def unapply(e: Expression): Option[(Int, Int)] = e.dataType match {\n      case t: DecimalType => Some((t.precision, t.scale))\n      case _ => None\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/execution/shuffle/CometBlockStoreShuffleReader.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle\n\nimport java.io.InputStream\n\nimport org.apache.spark.{InterruptibleIterator, MapOutputTracker, SparkEnv, TaskContext}\nimport org.apache.spark.internal.{config, Logging}\nimport org.apache.spark.io.CompressionCodec\nimport org.apache.spark.serializer.SerializerManager\nimport org.apache.spark.shuffle.BaseShuffleHandle\nimport org.apache.spark.shuffle.ShuffleReader\nimport org.apache.spark.shuffle.ShuffleReadMetricsReporter\nimport org.apache.spark.sql.vectorized.ColumnarBatch\nimport org.apache.spark.storage.BlockId\nimport org.apache.spark.storage.BlockManager\nimport org.apache.spark.storage.BlockManagerId\nimport org.apache.spark.storage.ShuffleBlockFetcherIterator\nimport org.apache.spark.util.CompletionIterator\n\nimport org.apache.comet.{CometConf, Native}\nimport org.apache.comet.vector.NativeUtil\n\n/**\n * Shuffle reader that reads data from the block manager. 
It reads Arrow-serialized data (IPC\n * format) and returns an iterator of ColumnarBatch.\n */\nclass CometBlockStoreShuffleReader[K, C](\n    handle: BaseShuffleHandle[K, _, C],\n    blocksByAddress: Iterator[(BlockManagerId, scala.collection.Seq[(BlockId, Long, Int)])],\n    context: TaskContext,\n    readMetrics: ShuffleReadMetricsReporter,\n    serializerManager: SerializerManager = SparkEnv.get.serializerManager,\n    blockManager: BlockManager = SparkEnv.get.blockManager,\n    mapOutputTracker: MapOutputTracker = SparkEnv.get.mapOutputTracker,\n    shouldBatchFetch: Boolean = false)\n    extends ShuffleReader[K, C]\n    with Logging {\n\n  private val dep = handle.dependency.asInstanceOf[CometShuffleDependency[_, _, _]]\n\n  private def fetchIterator: Iterator[(BlockId, InputStream)] = {\n    new ShuffleBlockFetcherIterator(\n      context,\n      blockManager.blockStoreClient,\n      blockManager,\n      mapOutputTracker,\n      // To tackle the Scala issue between Seq and scala.collection.Seq\n      blocksByAddress.map(pair => (pair._1, pair._2.toSeq)),\n      (_, inputStream) => {\n        if (dep.shuffleType == CometColumnarShuffle) {\n          // Only columnar shuffle supports encryption\n          serializerManager.wrapForEncryption(inputStream)\n        } else {\n          inputStream\n        }\n      },\n      // Note: we use getSizeAsMb when no suffix is provided for backwards compatibility\n      SparkEnv.get.conf.get(config.REDUCER_MAX_SIZE_IN_FLIGHT) * 1024 * 1024,\n      SparkEnv.get.conf.get(config.REDUCER_MAX_REQS_IN_FLIGHT),\n      SparkEnv.get.conf.get(config.REDUCER_MAX_BLOCKS_IN_FLIGHT_PER_ADDRESS),\n      SparkEnv.get.conf.get(config.MAX_REMOTE_BLOCK_SIZE_FETCH_TO_MEM),\n      SparkEnv.get.conf.get(config.SHUFFLE_MAX_ATTEMPTS_ON_NETTY_OOM),\n      SparkEnv.get.conf.get(config.SHUFFLE_DETECT_CORRUPT),\n      SparkEnv.get.conf.get(config.SHUFFLE_DETECT_CORRUPT_MEMORY),\n      SparkEnv.get.conf.get(config.SHUFFLE_CHECKSUM_ENABLED),\n      SparkEnv.get.conf.get(config.SHUFFLE_CHECKSUM_ALGORITHM),\n      readMetrics,\n      fetchContinuousBlocksInBatch).toCompletionIterator\n  }\n\n  /** Read the combined key-values for this reduce task */\n  override def read(): Iterator[Product2[K, C]] = {\n    var currentReadIterator: NativeBatchDecoderIterator = null\n    val nativeLib = new Native()\n    val nativeUtil = new NativeUtil()\n    val tracingEnabled = CometConf.COMET_TRACING_ENABLED.get()\n\n    // Closes the last read iterator and shared resources after the task is finished.\n    // We need to close read iterators while iterating over the input streams,\n    // instead of registering one callback per read iterator. 
Otherwise if there are too many\n    // read iterators, it may blow up the call stack and cause OOM.\n    context.addTaskCompletionListener[Unit] { _ =>\n      if (currentReadIterator != null) {\n        currentReadIterator.close()\n      }\n      nativeUtil.close()\n    }\n\n    val recordIter: Iterator[(Int, ColumnarBatch)] = fetchIterator\n      .flatMap(blockIdAndStream => {\n        if (currentReadIterator != null) {\n          currentReadIterator.close()\n        }\n        currentReadIterator = NativeBatchDecoderIterator(\n          blockIdAndStream._2,\n          dep.decodeTime,\n          nativeLib,\n          nativeUtil,\n          tracingEnabled)\n        currentReadIterator\n      })\n      .map(b => (0, b))\n\n    // Update the context task metrics for each record read.\n    val metricIter = CompletionIterator[(Any, Any), Iterator[(Any, Any)]](\n      recordIter.map { record =>\n        readMetrics.incRecordsRead(record._2.numRows())\n        record\n      },\n      context.taskMetrics().mergeShuffleReadMetrics())\n\n    // An interruptible iterator must be used here in order to support task cancellation\n    val interruptibleIter = new InterruptibleIterator[(Any, Any)](context, metricIter)\n\n    val aggregatedIter: Iterator[Product2[K, C]] = if (dep.aggregator.isDefined) {\n      throw new UnsupportedOperationException(\"aggregate not allowed\")\n    } else {\n      interruptibleIter.asInstanceOf[Iterator[Product2[K, C]]]\n    }\n\n    // Sort the output if there is a sort ordering defined.\n    val resultIter = dep.keyOrdering match {\n      case Some(_: Ordering[K]) =>\n        throw new UnsupportedOperationException(\"order not allowed\")\n      case None =>\n        aggregatedIter\n    }\n\n    resultIter match {\n      case _: InterruptibleIterator[Product2[K, C]] => resultIter\n      case _ =>\n        // Use another interruptible iterator here to support task cancellation as aggregator\n        // or(and) sorter may have consumed previous interruptible iterator.\n        new InterruptibleIterator[Product2[K, C]](context, resultIter)\n    }\n  }\n\n  /**\n   * Returns the raw concatenated InputStream of all shuffle blocks, bypassing the decode step.\n   * Used by ShuffleScan direct read path.\n   */\n  def readAsRawStream(): InputStream = {\n    val streams = fetchIterator.map(_._2)\n    new java.io.SequenceInputStream(new java.util.Enumeration[InputStream] {\n      override def hasMoreElements: Boolean = streams.hasNext\n      override def nextElement(): InputStream = streams.next()\n    })\n  }\n\n  private def fetchContinuousBlocksInBatch: Boolean = {\n    val conf = SparkEnv.get.conf\n    val serializerRelocatable = dep.serializer.supportsRelocationOfSerializedObjects\n    val compressed = conf.get(config.SHUFFLE_COMPRESS)\n    val codecConcatenation = if (compressed) {\n      CompressionCodec.supportsConcatenationOfSerializedStreams(\n        CompressionCodec.createCodec(conf))\n    } else {\n      true\n    }\n    val useOldFetchProtocol = conf.get(config.SHUFFLE_USE_OLD_FETCH_PROTOCOL)\n\n    // SPARK-34790: Fetching continuous blocks in batch is incompatible with io encryption.\n    val ioEncryption = conf.get(config.IO_ENCRYPTION_ENABLED)\n\n    val doBatchFetch = shouldBatchFetch && serializerRelocatable &&\n      (!compressed || codecConcatenation) && !useOldFetchProtocol && !ioEncryption\n    if (shouldBatchFetch && !doBatchFetch) {\n      logDebug(\n        \"The feature tag of continuous shuffle block fetching is set to true, but \" +\n          \"we can 
not enable the feature because other conditions are not satisfied. \" +\n          s\"Shuffle compress: $compressed, serializer relocatable: $serializerRelocatable, \" +\n          s\"codec concatenation: $codecConcatenation, use old shuffle fetch protocol: \" +\n          s\"$useOldFetchProtocol, io encryption: $ioEncryption.\")\n    }\n    doBatchFetch\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/execution/shuffle/CometNativeShuffleWriter.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle\n\nimport java.nio.{ByteBuffer, ByteOrder}\nimport java.nio.file.{Files, Paths}\n\nimport scala.collection.mutable\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.{SparkEnv, TaskContext}\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.scheduler.MapStatus\nimport org.apache.spark.shuffle.{IndexShuffleBlockResolver, ShuffleWriteMetricsReporter, ShuffleWriter}\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, Expression, Literal}\nimport org.apache.spark.sql.catalyst.plans.physical.{HashPartitioning, Partitioning, RangePartitioning, RoundRobinPartitioning, SinglePartition}\nimport org.apache.spark.sql.comet.{CometExec, CometMetricNode}\nimport org.apache.spark.sql.execution.metric.SQLMetric\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.serde.{OperatorOuterClass, PartitioningOuterClass, QueryPlanSerde}\nimport org.apache.comet.serde.OperatorOuterClass.{CompressionCodec, Operator}\nimport org.apache.comet.serde.QueryPlanSerde.serializeDataType\n\n/**\n * A [[ShuffleWriter]] that will delegate shuffle write to native shuffle.\n */\nclass CometNativeShuffleWriter[K, V](\n    outputPartitioning: Partitioning,\n    outputAttributes: Seq[Attribute],\n    metrics: Map[String, SQLMetric],\n    numParts: Int,\n    shuffleId: Int,\n    mapId: Long,\n    context: TaskContext,\n    metricsReporter: ShuffleWriteMetricsReporter,\n    rangePartitionBounds: Option[Seq[InternalRow]] = None)\n    extends ShuffleWriter[K, V]\n    with Logging {\n\n  private val OFFSET_LENGTH = 8\n\n  var partitionLengths: Array[Long] = _\n  var mapStatus: MapStatus = _\n\n  override def write(inputs: Iterator[Product2[K, V]]): Unit = {\n    val shuffleBlockResolver =\n      SparkEnv.get.shuffleManager.shuffleBlockResolver.asInstanceOf[IndexShuffleBlockResolver]\n    val dataFile = shuffleBlockResolver.getDataFile(shuffleId, mapId)\n    val indexFile = shuffleBlockResolver.getIndexFile(shuffleId, mapId)\n    val tempDataFilename = dataFile.getPath.replace(\".data\", \".data.tmp\")\n    val tempIndexFilename = indexFile.getPath.replace(\".index\", \".index.tmp\")\n    val tempDataFilePath = Paths.get(tempDataFilename)\n    val tempIndexFilePath = Paths.get(tempIndexFilename)\n\n    // Call native shuffle write\n    val nativePlan = getNativePlan(tempDataFilename, tempIndexFilename)\n\n    val detailedMetrics = Seq(\n      \"elapsed_compute\",\n      \"encode_time\",\n      \"repart_time\",\n      \"input_batches\",\n      \"spill_count\",\n      \"spilled_bytes\")\n\n    // Maps 
native metrics to SQL metrics\n    val metricsOutputRows = new SQLMetric(\"outputRows\")\n    val metricsWriteTime = new SQLMetric(\"writeTime\")\n    val nativeSQLMetrics = Map(\n      \"output_rows\" -> metricsOutputRows,\n      \"data_size\" -> metrics(\"dataSize\"),\n      \"write_time\" -> metricsWriteTime) ++\n      metrics.filterKeys(detailedMetrics.contains)\n    val nativeMetrics = CometMetricNode(nativeSQLMetrics)\n\n    // Getting rid of the fake partitionId\n    val newInputs = inputs.asInstanceOf[Iterator[_ <: Product2[Any, Any]]].map(_._2)\n\n    val cometIter = CometExec.getCometIterator(\n      Seq(newInputs.asInstanceOf[Iterator[ColumnarBatch]]),\n      outputAttributes.length,\n      nativePlan,\n      nativeMetrics,\n      numParts,\n      context.partitionId(),\n      broadcastedHadoopConfForEncryption = None,\n      encryptedFilePaths = Seq.empty)\n\n    while (cometIter.hasNext) {\n      cometIter.next()\n    }\n    cometIter.close()\n\n    // get partition lengths from shuffle write output index file\n    var offset = 0L\n    partitionLengths = Files\n      .readAllBytes(tempIndexFilePath)\n      .grouped(OFFSET_LENGTH)\n      .drop(1) // first partition offset is always 0\n      .map(indexBytes => {\n        val partitionOffset =\n          ByteBuffer.wrap(indexBytes).order(ByteOrder.LITTLE_ENDIAN).getLong\n        val partitionLength = partitionOffset - offset\n        offset = partitionOffset\n        partitionLength\n      })\n      .toArray\n    Files.delete(tempIndexFilePath)\n\n    // Total written bytes at native\n    metricsReporter.incBytesWritten(Files.size(tempDataFilePath))\n    metricsReporter.incRecordsWritten(metricsOutputRows.value)\n    metricsReporter.incWriteTime(metricsWriteTime.value)\n\n    // Report spill metrics to Spark's task metrics so they appear in\n    // Spark UI task summaries (not just SQL metrics)\n    val spilledBytes = nativeSQLMetrics.get(\"spilled_bytes\").map(_.value).getOrElse(0L)\n    if (spilledBytes > 0) {\n      context.taskMetrics().incMemoryBytesSpilled(spilledBytes)\n      context.taskMetrics().incDiskBytesSpilled(spilledBytes)\n    }\n\n    // commit\n    shuffleBlockResolver.writeMetadataFileAndCommit(\n      shuffleId,\n      mapId,\n      partitionLengths,\n      Array.empty, // TODO: add checksums\n      tempDataFilePath.toFile)\n    mapStatus =\n      MapStatus.apply(SparkEnv.get.blockManager.shuffleServerId, partitionLengths, mapId)\n  }\n\n  private def isSinglePartitioning(p: Partitioning): Boolean = p match {\n    case SinglePartition => true\n    case rp: RangePartitioning =>\n      // Spark sometimes generates RangePartitioning schemes with numPartitions == 1,\n      // or the computed bounds results in a single target partition.\n      // In this case Comet just serializes a SinglePartition scheme to native.\n      rp.numPartitions == 1 || rangePartitionBounds.forall(_.isEmpty)\n    case hp: HashPartitioning => hp.numPartitions == 1\n    case _ => false\n  }\n\n  private def getNativePlan(dataFile: String, indexFile: String): Operator = {\n    val scanBuilder = OperatorOuterClass.Scan.newBuilder().setSource(\"ShuffleWriterInput\")\n    val opBuilder = OperatorOuterClass.Operator.newBuilder()\n\n    val scanTypes = outputAttributes.flatten { attr =>\n      serializeDataType(attr.dataType)\n    }\n\n    if (scanTypes.length == outputAttributes.length) {\n      scanBuilder.addAllFields(scanTypes.asJava)\n\n      val shuffleWriterBuilder = OperatorOuterClass.ShuffleWriter.newBuilder()\n      
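// The resulting native plan is a ShuffleWriter operator whose single child is a
      // Scan that consumes batches handed over from the JVM (source \"ShuffleWriterInput\").\n      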
shuffleWriterBuilder.setOutputDataFile(dataFile)\n      shuffleWriterBuilder.setOutputIndexFile(indexFile)\n\n      if (SparkEnv.get.conf.getBoolean(\"spark.shuffle.compress\", true)) {\n        val codec = CometConf.COMET_EXEC_SHUFFLE_COMPRESSION_CODEC.get() match {\n          case \"zstd\" => CompressionCodec.Zstd\n          case \"lz4\" => CompressionCodec.Lz4\n          case \"snappy\" => CompressionCodec.Snappy\n          case other => throw new UnsupportedOperationException(s\"invalid codec: $other\")\n        }\n        shuffleWriterBuilder.setCodec(codec)\n      } else {\n        shuffleWriterBuilder.setCodec(CompressionCodec.None)\n      }\n      shuffleWriterBuilder.setCompressionLevel(\n        CometConf.COMET_EXEC_SHUFFLE_COMPRESSION_ZSTD_LEVEL.get)\n      shuffleWriterBuilder.setWriteBufferSize(\n        CometConf.COMET_SHUFFLE_WRITE_BUFFER_SIZE.get().min(Int.MaxValue).toInt)\n\n      outputPartitioning match {\n        case p if isSinglePartitioning(p) =>\n          val partitioning = PartitioningOuterClass.SinglePartition.newBuilder()\n\n          val partitioningBuilder = PartitioningOuterClass.Partitioning.newBuilder()\n          shuffleWriterBuilder.setPartitioning(\n            partitioningBuilder.setSinglePartition(partitioning).build())\n        case _: HashPartitioning =>\n          val hashPartitioning = outputPartitioning.asInstanceOf[HashPartitioning]\n\n          val partitioning = PartitioningOuterClass.HashPartition.newBuilder()\n          partitioning.setNumPartitions(outputPartitioning.numPartitions)\n\n          val partitionExprs = hashPartitioning.expressions\n            .flatMap(e => QueryPlanSerde.exprToProto(e, outputAttributes))\n\n          if (partitionExprs.length != hashPartitioning.expressions.length) {\n            throw new UnsupportedOperationException(\n              s\"Partitioning $hashPartitioning is not supported.\")\n          }\n\n          partitioning.addAllHashExpression(partitionExprs.asJava)\n\n          val partitioningBuilder = PartitioningOuterClass.Partitioning.newBuilder()\n          shuffleWriterBuilder.setPartitioning(\n            partitioningBuilder.setHashPartition(partitioning).build())\n        case _: RangePartitioning =>\n          val rangePartitioning = outputPartitioning.asInstanceOf[RangePartitioning]\n\n          val partitioning = PartitioningOuterClass.RangePartition.newBuilder()\n          partitioning.setNumPartitions(outputPartitioning.numPartitions)\n\n          // Detect duplicates by tracking expressions directly, similar to DataFusion's LexOrdering\n          // DataFusion will deduplicate identical sort expressions in LexOrdering,\n          // so we need to transform boundary rows to match the deduplicated structure\n          val seenExprs = mutable.HashSet[Expression]()\n          val deduplicationMap = mutable.ArrayBuffer[(Int, Boolean)]() // (originalIndex, isKept)\n\n          rangePartitioning.ordering.zipWithIndex.foreach { case (sortOrder, idx) =>\n            if (seenExprs.contains(sortOrder.child)) {\n              deduplicationMap += (idx -> false) // Will be deduplicated by DataFusion\n            } else {\n              seenExprs += sortOrder.child\n              deduplicationMap += (idx -> true) // Will be kept by DataFusion\n            }\n          }\n\n          {\n            // Serialize the ordering expressions for comparisons\n            val orderingExprs = rangePartitioning.ordering\n              .flatMap(e => QueryPlanSerde.exprToProto(e, outputAttributes))\n            if 
(orderingExprs.length != rangePartitioning.ordering.length) {\n              throw new UnsupportedOperationException(\n                s\"Partitioning $rangePartitioning is not supported.\")\n            }\n            partitioning.addAllSortOrders(orderingExprs.asJava)\n          }\n\n          // Convert Spark's sequence of InternalRows that represent partitioning boundaries to\n          // sequences of Literals, where each outer entry represents a boundary row, and each\n          // internal entry is a value in that row. In other words, these are stored in row major\n          // order, not column major\n          val boundarySchema = rangePartitioning.ordering.flatMap(e => Some(e.dataType))\n\n          // Transform boundary rows to match DataFusion's deduplicated structure\n          val transformedBoundaryExprs: Seq[Seq[Literal]] =\n            rangePartitionBounds.get.map((row: InternalRow) => {\n              // For every InternalRow, map its values to Literals\n              val allLiterals =\n                row.toSeq(boundarySchema).zip(boundarySchema).map { case (value, valueType) =>\n                  Literal(value, valueType)\n                }\n\n              // Keep only the literals that correspond to non-deduplicated expressions\n              allLiterals\n                .zip(deduplicationMap)\n                .filter(_._2._2) // Keep only where isKept = true\n                .map(_._1) // Extract the literal\n            })\n\n          {\n            // Convert the sequences of Literals to a collection of serialized BoundaryRows\n            val boundaryRows: Seq[PartitioningOuterClass.BoundaryRow] = transformedBoundaryExprs\n              .map((rowLiterals: Seq[Literal]) => {\n                // Serialize each sequence of Literals as a BoundaryRow\n                val rowBuilder = PartitioningOuterClass.BoundaryRow.newBuilder();\n                val serializedExprs =\n                  rowLiterals.map(lit_value =>\n                    QueryPlanSerde.exprToProto(lit_value, outputAttributes).get)\n                rowBuilder.addAllPartitionBounds(serializedExprs.asJava)\n                rowBuilder.build()\n              })\n            partitioning.addAllBoundaryRows(boundaryRows.asJava)\n          }\n\n          val partitioningBuilder = PartitioningOuterClass.Partitioning.newBuilder()\n          shuffleWriterBuilder.setPartitioning(\n            partitioningBuilder.setRangePartition(partitioning).build())\n\n        case _: RoundRobinPartitioning =>\n          val partitioning = PartitioningOuterClass.RoundRobinPartition.newBuilder()\n          partitioning.setNumPartitions(outputPartitioning.numPartitions)\n          partitioning.setMaxHashColumns(\n            CometConf.COMET_EXEC_SHUFFLE_WITH_ROUND_ROBIN_PARTITIONING_MAX_HASH_COLUMNS.get())\n\n          val partitioningBuilder = PartitioningOuterClass.Partitioning.newBuilder()\n          shuffleWriterBuilder.setPartitioning(\n            partitioningBuilder.setRoundRobinPartition(partitioning).build())\n\n        case _ =>\n          throw new UnsupportedOperationException(\n            s\"Partitioning $outputPartitioning is not supported.\")\n      }\n\n      shuffleWriterBuilder.setTracingEnabled(CometConf.COMET_TRACING_ENABLED.get())\n\n      val shuffleWriterOpBuilder = OperatorOuterClass.Operator.newBuilder()\n      shuffleWriterOpBuilder\n        .setShuffleWriter(shuffleWriterBuilder)\n        .addChildren(opBuilder.setScan(scanBuilder).build())\n        .build()\n    } else {\n      // There are 
val transformedBoundaryExprs: Seq[Seq[Literal]] =\n            rangePartitionBounds.get.map((row: InternalRow) => {\n              // For every InternalRow, map its values to Literals\n              val allLiterals =\n                row.toSeq(boundarySchema).zip(boundarySchema).map { case (value, valueType) =>\n                  Literal(value, valueType)\n                }\n\n              // Keep only the literals that correspond to non-deduplicated expressions\n              allLiterals\n                .zip(deduplicationMap)\n                .filter(_._2._2) // Keep only where isKept = true\n                .map(_._1) // Extract the literal\n            })\n\n          {\n            // Convert the sequences of Literals to a collection of serialized BoundaryRows\n            val boundaryRows: Seq[PartitioningOuterClass.BoundaryRow] = transformedBoundaryExprs\n              .map((rowLiterals: Seq[Literal]) => {\n                // Serialize each sequence of Literals as a BoundaryRow\n                val rowBuilder = PartitioningOuterClass.BoundaryRow.newBuilder()\n                val serializedExprs =\n                  rowLiterals.map(litValue =>\n                    QueryPlanSerde.exprToProto(litValue, outputAttributes).get)\n                rowBuilder.addAllPartitionBounds(serializedExprs.asJava)\n                rowBuilder.build()\n              })\n            partitioning.addAllBoundaryRows(boundaryRows.asJava)\n          }\n\n          val partitioningBuilder = PartitioningOuterClass.Partitioning.newBuilder()\n          shuffleWriterBuilder.setPartitioning(\n            partitioningBuilder.setRangePartition(partitioning).build())\n\n        case _: RoundRobinPartitioning =>\n          val partitioning = PartitioningOuterClass.RoundRobinPartition.newBuilder()\n          partitioning.setNumPartitions(outputPartitioning.numPartitions)\n          partitioning.setMaxHashColumns(\n            CometConf.COMET_EXEC_SHUFFLE_WITH_ROUND_ROBIN_PARTITIONING_MAX_HASH_COLUMNS.get())\n\n          val partitioningBuilder = PartitioningOuterClass.Partitioning.newBuilder()\n          shuffleWriterBuilder.setPartitioning(\n            partitioningBuilder.setRoundRobinPartition(partitioning).build())\n\n        case _ =>\n          throw new UnsupportedOperationException(\n            s\"Partitioning $outputPartitioning is not supported.\")\n      }\n\n      shuffleWriterBuilder.setTracingEnabled(CometConf.COMET_TRACING_ENABLED.get())\n\n      val shuffleWriterOpBuilder = OperatorOuterClass.Operator.newBuilder()\n      shuffleWriterOpBuilder\n        .setShuffleWriter(shuffleWriterBuilder)\n        .addChildren(opBuilder.setScan(scanBuilder).build())\n        .build()\n    } else {\n      // There are unsupported data types\n      throw new UnsupportedOperationException(\n        s\"$outputAttributes contains unsupported data types for CometShuffleExchangeExec.\")\n    }\n  }\n\n  override def stop(success: Boolean): Option[MapStatus] = {\n    if (success) {\n      Some(mapStatus)\n    } else {\n      None\n    }\n  }\n\n  override def getPartitionLengths(): Array[Long] = partitionLengths\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/execution/shuffle/CometShuffleDependency.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle\n\nimport scala.reflect.ClassTag\n\nimport org.apache.spark.{Aggregator, Partitioner, ShuffleDependency, SparkEnv}\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.serializer.Serializer\nimport org.apache.spark.shuffle.ShuffleWriteProcessor\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.Attribute\nimport org.apache.spark.sql.catalyst.plans.physical.Partitioning\nimport org.apache.spark.sql.execution.metric.SQLMetric\nimport org.apache.spark.sql.types.StructType\n\n/**\n * A [[ShuffleDependency]] that allows us to identify the shuffle dependency as a Comet shuffle.\n */\nclass CometShuffleDependency[K: ClassTag, V: ClassTag, C: ClassTag](\n    @transient private val _rdd: RDD[_ <: Product2[K, V]],\n    override val partitioner: Partitioner,\n    override val serializer: Serializer = SparkEnv.get.serializer,\n    override val keyOrdering: Option[Ordering[K]] = None,\n    override val aggregator: Option[Aggregator[K, V, C]] = None,\n    override val mapSideCombine: Boolean = false,\n    override val shuffleWriterProcessor: ShuffleWriteProcessor = new ShuffleWriteProcessor,\n    val shuffleType: ShuffleType = CometNativeShuffle,\n    val schema: Option[StructType] = None,\n    val decodeTime: SQLMetric,\n    val outputPartitioning: Option[Partitioning] = None,\n    val outputAttributes: Seq[Attribute] = Seq.empty,\n    val shuffleWriteMetrics: Map[String, SQLMetric] = Map.empty,\n    val numParts: Int = 0,\n    val rangePartitionBounds: Option[Seq[InternalRow]] = None)\n    extends ShuffleDependency[K, V, C](\n      _rdd,\n      partitioner,\n      serializer,\n      keyOrdering,\n      aggregator,\n      mapSideCombine,\n      shuffleWriterProcessor) {}\n\n/** Indicates shuffle type */\nsealed trait ShuffleType\n\n/** Indicates that the shuffle is performed by Comet native library */\ncase object CometNativeShuffle extends ShuffleType\n\n/** Indicates that the shuffle is performed by Comet JVM class */\ncase object CometColumnarShuffle extends ShuffleType\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/execution/shuffle/CometShuffleExchangeExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle\n\nimport java.util.function.Supplier\n\nimport scala.concurrent.Future\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark._\nimport org.apache.spark.internal.config\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.serializer.Serializer\nimport org.apache.spark.shuffle.sort.SortShuffleManager\nimport org.apache.spark.sql.catalyst.{InternalRow, SQLConfHelper}\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, BoundReference, Expression, PlanExpression, UnsafeProjection, UnsafeRow}\nimport org.apache.spark.sql.catalyst.expressions.codegen.LazilyGeneratedOrdering\nimport org.apache.spark.sql.catalyst.plans.logical.Statistics\nimport org.apache.spark.sql.catalyst.plans.physical._\nimport org.apache.spark.sql.comet.{CometMetricNode, CometNativeExec, CometPlan, CometSinkPlaceHolder}\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.adaptive.ShuffleQueryStageExec\nimport org.apache.spark.sql.execution.exchange.{ENSURE_REQUIREMENTS, ShuffleExchangeExec, ShuffleExchangeLike, ShuffleOrigin}\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics, SQLShuffleReadMetricsReporter, SQLShuffleWriteMetricsReporter}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{ArrayType, BinaryType, BooleanType, ByteType, DataType, DateType, DecimalType, DoubleType, FloatType, IntegerType, LongType, MapType, ShortType, StringType, StructType, TimestampNTZType, TimestampType}\nimport org.apache.spark.sql.vectorized.ColumnarBatch\nimport org.apache.spark.util.MutablePair\nimport org.apache.spark.util.collection.unsafe.sort.{PrefixComparators, RecordComparator}\nimport org.apache.spark.util.random.XORShiftRandom\n\nimport com.google.common.base.Objects\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometConf.{COMET_EXEC_SHUFFLE_ENABLED, COMET_SHUFFLE_MODE}\nimport org.apache.comet.CometFallback.{isMarkedForFallback, markForFallback}\nimport org.apache.comet.CometSparkSessionExtensions.{isCometShuffleManagerEnabled, withInfo}\nimport org.apache.comet.serde.{Compatible, OperatorOuterClass, QueryPlanSerde, SupportLevel, Unsupported}\nimport org.apache.comet.serde.operator.CometSink\nimport org.apache.comet.shims.ShimCometShuffleExchangeExec\n\n/**\n * Performs a shuffle that will result in the desired partitioning.\n */\ncase class CometShuffleExchangeExec(\n    override val outputPartitioning: Partitioning,\n    child: SparkPlan,\n    originalPlan: ShuffleExchangeLike,\n    shuffleOrigin: ShuffleOrigin = ENSURE_REQUIREMENTS,\n    shuffleType: ShuffleType = CometNativeShuffle,\n    advisoryPartitionSize: Option[Long] 
= None)\n    extends ShuffleExchangeLike\n    with CometPlan\n    with ShimCometShuffleExchangeExec {\n\n  private lazy val writeMetrics =\n    SQLShuffleWriteMetricsReporter.createShuffleWriteMetrics(sparkContext)\n  private[sql] lazy val readMetrics =\n    SQLShuffleReadMetricsReporter.createShuffleReadMetrics(sparkContext)\n  override lazy val metrics: Map[String, SQLMetric] = Map(\n    \"dataSize\" -> SQLMetrics.createSizeMetric(sparkContext, \"data size\"),\n    \"numPartitions\" -> SQLMetrics.createMetric(\n      sparkContext,\n      \"number of partitions\")) ++ readMetrics ++ writeMetrics ++ CometMetricNode.shuffleMetrics(\n    sparkContext)\n\n  override def nodeName: String = if (shuffleType == CometNativeShuffle) {\n    \"CometExchange\"\n  } else {\n    \"CometColumnarExchange\"\n  }\n\n  private lazy val serializer: Serializer =\n    new UnsafeRowSerializer(child.output.size, longMetric(\"dataSize\"))\n\n  @transient lazy val inputRDD: RDD[_] = if (shuffleType == CometNativeShuffle) {\n    // CometNativeShuffle assumes that the input plan is a Comet plan.\n    child.executeColumnar()\n  } else if (shuffleType == CometColumnarShuffle) {\n    // CometColumnarShuffle uses Spark's row-based execute() API. For Spark row-based plans,\n    // rows flow directly. For Comet native plans, their doExecute() wraps with ColumnarToRowExec\n    // to convert columnar batches to rows.\n    child.execute()\n  } else {\n    throw new UnsupportedOperationException(\n      s\"Unsupported shuffle type: ${shuffleType.getClass.getName}\")\n  }\n\n  // 'mapOutputStatisticsFuture' is only needed when AQE is enabled.\n  @transient\n  override lazy val mapOutputStatisticsFuture: Future[MapOutputStatistics] = {\n    if (inputRDD.getNumPartitions == 0) {\n      Future.successful(null)\n    } else {\n      sparkContext.submitMapStage(shuffleDependency)\n    }\n  }\n\n  override def numMappers: Int = shuffleDependency.rdd.getNumPartitions\n\n  override def numPartitions: Int = shuffleDependency.partitioner.numPartitions\n\n  override def getShuffleRDD(partitionSpecs: Array[ShufflePartitionSpec]): RDD[_] =\n    new CometShuffledBatchRDD(shuffleDependency, readMetrics, partitionSpecs)\n\n  override def runtimeStatistics: Statistics = {\n    val dataSize =\n      metrics(\"dataSize\").value * Math.max(CometConf.COMET_EXCHANGE_SIZE_MULTIPLIER.get(conf), 1)\n    val rowCount = metrics(SQLShuffleWriteMetricsReporter.SHUFFLE_RECORDS_WRITTEN).value\n    Statistics(dataSize.toLong, Some(rowCount))\n  }\n\n  // TODO: add `override` keyword after dropping Spark 3.x support\n  def shuffleId: Int = getShuffleId(shuffleDependency)\n\n  /**\n   * A [[ShuffleDependency]] that will partition rows of its child based on the partitioning\n   * scheme defined in `newPartitioning`. 
Those partitions of the returned ShuffleDependency will\n   * be the input of shuffle.\n   */\n  @transient\n  lazy val shuffleDependency: ShuffleDependency[Int, _, _] =\n    if (shuffleType == CometNativeShuffle) {\n      val dep = CometShuffleExchangeExec.prepareShuffleDependency(\n        inputRDD.asInstanceOf[RDD[ColumnarBatch]],\n        child.output,\n        outputPartitioning,\n        serializer,\n        metrics)\n      metrics(\"numPartitions\").set(dep.partitioner.numPartitions)\n      val executionId = sparkContext.getLocalProperty(SQLExecution.EXECUTION_ID_KEY)\n      SQLMetrics.postDriverMetricUpdates(\n        sparkContext,\n        executionId,\n        metrics(\"numPartitions\") :: Nil)\n      dep\n    } else if (shuffleType == CometColumnarShuffle) {\n      val dep = CometShuffleExchangeExec.prepareJVMShuffleDependency(\n        inputRDD.asInstanceOf[RDD[InternalRow]],\n        child.output,\n        outputPartitioning,\n        serializer,\n        metrics)\n      metrics(\"numPartitions\").set(dep.partitioner.numPartitions)\n      val executionId = sparkContext.getLocalProperty(SQLExecution.EXECUTION_ID_KEY)\n      SQLMetrics.postDriverMetricUpdates(\n        sparkContext,\n        executionId,\n        metrics(\"numPartitions\") :: Nil)\n      dep\n    } else {\n      throw new UnsupportedOperationException(\n        s\"Unsupported shuffle type: ${shuffleType.getClass.getName}\")\n    }\n\n  protected override def doExecute(): RDD[InternalRow] = {\n    ColumnarToRowExec(this).doExecute()\n  }\n\n  /**\n   * Comet supports columnar execution.\n   */\n  override val supportsColumnar: Boolean = true\n\n  /**\n   * Caches the created CometShuffledBatchRDD so we can reuse that.\n   */\n  private var cachedShuffleRDD: CometShuffledBatchRDD = null\n\n  /**\n   * Comet returns RDD[ColumnarBatch] for columnar execution.\n   */\n  protected override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    // Returns the same CometShuffledBatchRDD if this plan is used by multiple plans.\n    if (cachedShuffleRDD == null) {\n      cachedShuffleRDD = new CometShuffledBatchRDD(shuffleDependency, readMetrics)\n    }\n    cachedShuffleRDD\n  }\n\n  override protected def withNewChildInternal(newChild: SparkPlan): CometShuffleExchangeExec =\n    copy(child = newChild)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometShuffleExchangeExec =>\n        this.outputPartitioning == other.outputPartitioning &&\n        this.shuffleOrigin == other.shuffleOrigin && this.child == other.child &&\n        this.shuffleType == other.shuffleType &&\n        this.advisoryPartitionSize == other.advisoryPartitionSize\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int =\n    Objects.hashCode(outputPartitioning, shuffleOrigin, shuffleType, advisoryPartitionSize, child)\n\n  override def stringArgs: Iterator[Any] =\n    Iterator(outputPartitioning, shuffleOrigin, shuffleType, child) ++ Iterator(s\"[plan_id=$id]\")\n}\n\nobject CometShuffleExchangeExec\n    extends CometSink[ShuffleExchangeExec]\n    with ShimCometShuffleExchangeExec\n    with SQLConfHelper {\n\n  override def getSupportLevel(op: ShuffleExchangeExec): SupportLevel = {\n    if (nativeShuffleSupported(op) || columnarShuffleSupported(op)) {\n      Compatible()\n    } else {\n      Unsupported()\n    }\n  }\n\n  override def createExec(\n      nativeOp: OperatorOuterClass.Operator,\n      op: ShuffleExchangeExec): CometNativeExec = {\n    if (nativeShuffleSupported(op) && 
op.children.forall(_.isInstanceOf[CometNativeExec])) {\n      // Switch to use Decimal128 regardless of precision, since Arrow native execution\n      // doesn't support Decimal32 and Decimal64 yet.\n      conf.setConfString(CometConf.COMET_USE_DECIMAL_128.key, \"true\")\n      CometSinkPlaceHolder(\n        nativeOp,\n        op,\n        CometShuffleExchangeExec(op, shuffleType = CometNativeShuffle))\n\n    } else if (columnarShuffleSupported(op)) {\n      CometSinkPlaceHolder(\n        nativeOp,\n        op,\n        CometShuffleExchangeExec(op, shuffleType = CometColumnarShuffle))\n    } else {\n      throw new IllegalStateException()\n    }\n  }\n\n  /**\n   * Whether the given Spark partitioning is supported by Comet native shuffle.\n   */\n  def nativeShuffleSupported(s: ShuffleExchangeExec): Boolean = {\n\n    /**\n     * Determine which data types are supported as partition columns in native shuffle.\n     *\n     * For HashPartitioning this defines the key that determines how data should be collocated for\n     * operations like `groupByKey`, `reduceByKey`, or `join`. Native code does not support\n     * hashing complex types, see hash_funcs/utils.rs\n     */\n    def supportedHashPartitioningDataType(dt: DataType): Boolean = dt match {\n      case _: BooleanType | _: ByteType | _: ShortType | _: IntegerType | _: LongType |\n          _: FloatType | _: DoubleType | _: StringType | _: BinaryType | _: TimestampType |\n          _: TimestampNTZType | _: DateType =>\n        true\n      case _: DecimalType =>\n        // TODO enforce this check\n        // https://github.com/apache/datafusion-comet/issues/3079\n        // Decimals with precision > 18 require Java BigDecimal conversion before hashing\n        // d.precision <= 18\n        true\n      case _ =>\n        false\n    }\n\n    /**\n     * Check if a data type contains a decimal with precision > 18. Such decimals require\n     * conversion to Java BigDecimal before hashing, which is not supported in native shuffle.\n     */\n    def containsHighPrecisionDecimal(dt: DataType): Boolean = dt match {\n      case d: DecimalType => d.precision > 18\n      case StructType(fields) => fields.exists(f => containsHighPrecisionDecimal(f.dataType))\n      case ArrayType(elementType, _) => containsHighPrecisionDecimal(elementType)\n      case MapType(keyType, valueType, _) =>\n        containsHighPrecisionDecimal(keyType) || containsHighPrecisionDecimal(valueType)\n      case _ => false\n    }\n\n    /**\n     * Determine which data types are supported as partition columns in native shuffle.\n     *\n     * For RangePartitioning this defines the key that determines how data should be collocated\n     * for operations like `orderBy`, `repartitionByRange`. 
Native code does not support sorting\n     * complex types.\n     */\n    def supportedRangePartitioningDataType(dt: DataType): Boolean = dt match {\n      case _: BooleanType | _: ByteType | _: ShortType | _: IntegerType | _: LongType |\n          _: FloatType | _: DoubleType | _: StringType | _: BinaryType | _: TimestampType |\n          _: TimestampNTZType | _: DecimalType | _: DateType =>\n        true\n      case _ =>\n        false\n    }\n\n    /**\n     * Determine which data types are supported as data columns in native shuffle.\n     *\n     * Native shuffle relies on the Arrow IPC writer to serialize batches to disk, so it should\n     * support all types that Comet supports.\n     */\n    def supportedSerializableDataType(dt: DataType): Boolean = dt match {\n      case _: BooleanType | _: ByteType | _: ShortType | _: IntegerType | _: LongType |\n          _: FloatType | _: DoubleType | _: StringType | _: BinaryType | _: TimestampType |\n          _: TimestampNTZType | _: DecimalType | _: DateType =>\n        true\n      case StructType(fields) =>\n        fields.nonEmpty && fields.forall(f => supportedSerializableDataType(f.dataType))\n      case ArrayType(elementType, _) =>\n        supportedSerializableDataType(elementType)\n      case MapType(keyType, valueType, _) =>\n        supportedSerializableDataType(keyType) && supportedSerializableDataType(valueType)\n      case _ =>\n        false\n    }\n\n    // Preserve any prior-pass fallback decision (see `CometFallback`).\n    if (isMarkedForFallback(s)) {\n      return false\n    }\n\n    if (!isCometShuffleEnabledWithInfo(s)) {\n      return false\n    }\n\n    if (!isCometNativeShuffleMode(s.conf)) {\n      withInfo(s, \"Comet native shuffle not enabled\")\n      return false\n    }\n\n    if (!isCometPlan(s.child)) {\n      // we do not need to report a fallback reason if the child plan is not a Comet plan\n      return false\n    }\n\n    val inputs = s.child.output\n\n    for (input <- inputs) {\n      if (!supportedSerializableDataType(input.dataType)) {\n        withInfo(s, s\"unsupported shuffle data type ${input.dataType} for input $input\")\n        return false\n      }\n    }\n\n    val partitioning = s.outputPartitioning\n    val conf = SQLConf.get\n    partitioning match {\n      case HashPartitioning(expressions, _) =>\n        var supported = true\n        if (!CometConf.COMET_EXEC_SHUFFLE_WITH_HASH_PARTITIONING_ENABLED.get(conf)) {\n          withInfo(\n            s,\n            s\"${CometConf.COMET_EXEC_SHUFFLE_WITH_HASH_PARTITIONING_ENABLED.key} is disabled\")\n          supported = false\n        }\n        for (expr <- expressions) {\n          if (QueryPlanSerde.exprToProto(expr, inputs).isEmpty) {\n            withInfo(s, s\"unsupported hash partitioning expression: $expr\")\n            supported = false\n            // We don't short-circuit in case there is more than one unsupported expression\n            // to provide info for.\n          }\n        }\n        for (dt <- expressions.map(_.dataType).distinct) {\n          if (!supportedHashPartitioningDataType(dt)) {\n            withInfo(s, s\"unsupported hash partitioning data type for native shuffle: $dt\")\n            supported = false\n          }\n        }\n        supported\n      case SinglePartition =>\n        // we already checked that the input types are supported\n        true\n      case RangePartitioning(orderings, _) =>\n        if (!CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.get(conf)) {\n          withInfo(\n  
          s,\n            s\"${CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key} is disabled\")\n          return false\n        }\n        var supported = true\n        for (o <- orderings) {\n          if (QueryPlanSerde.exprToProto(o, inputs).isEmpty) {\n            withInfo(s, s\"unsupported range partitioning sort order: $o\", o)\n            supported = false\n            // We don't short-circuit in case there is more than one unsupported expression\n            // to provide info for.\n          }\n        }\n        for (dt <- orderings.map(_.dataType).distinct) {\n          if (!supportedRangePartitioningDataType(dt)) {\n            withInfo(s, s\"unsupported range partitioning data type for native shuffle: $dt\")\n            supported = false\n          }\n        }\n        supported\n      case RoundRobinPartitioning(_) =>\n        val config = CometConf.COMET_EXEC_SHUFFLE_WITH_ROUND_ROBIN_PARTITIONING_ENABLED\n        if (!config.get(conf)) {\n          withInfo(s, s\"${config.key} is disabled\")\n          return false\n        }\n        // RoundRobin partitioning uses position-based distribution matching Spark's behavior\n        true\n      case _ =>\n        withInfo(\n          s,\n          s\"unsupported Spark partitioning for native shuffle: ${partitioning.getClass.getName}\")\n        false\n    }\n  }\n\n  /**\n   * Check if JVM-based columnar shuffle (CometColumnarExchange) can be used for this shuffle. JVM\n   * shuffle is used when the child plan is not a Comet native operator, or when native shuffle\n   * doesn't support the required partitioning type.\n   */\n  def columnarShuffleSupported(s: ShuffleExchangeExec): Boolean = {\n\n    /**\n     * Determine which data types are supported as data columns in columnar shuffle.\n     *\n     * Comet columnar shuffle uses native code to convert Spark unsafe rows to Arrow batches, see\n     * shuffle/row.rs\n     */\n    def supportedSerializableDataType(dt: DataType): Boolean = dt match {\n      case _: BooleanType | _: ByteType | _: ShortType | _: IntegerType | _: LongType |\n          _: FloatType | _: DoubleType | _: StringType | _: BinaryType | _: TimestampType |\n          _: TimestampNTZType | _: DecimalType | _: DateType =>\n        true\n      case StructType(fields) =>\n        fields.nonEmpty && fields.forall(f => supportedSerializableDataType(f.dataType)) &&\n        // Java Arrow stream reader cannot work on duplicate field names\n        fields.map(f => f.name).distinct.length == fields.length\n      case ArrayType(elementType, _) =>\n        supportedSerializableDataType(elementType)\n      case MapType(keyType, valueType, _) =>\n        supportedSerializableDataType(keyType) && supportedSerializableDataType(valueType)\n      case _ =>\n        false\n    }\n\n    // Preserve any prior-pass fallback decision (see `CometFallback`).\n    if (isMarkedForFallback(s)) {\n      return false\n    }\n\n    if (!isCometShuffleEnabledWithInfo(s)) {\n      return false\n    }\n\n    if (CometConf.COMET_DPP_FALLBACK_ENABLED.get() && stageContainsDPPScan(s)) {\n      markForFallback(s, \"Stage contains a scan with Dynamic Partition Pruning\")\n      return false\n    }\n\n    if (!isCometJVMShuffleMode(s.conf)) {\n      withInfo(s, \"Comet columnar shuffle not enabled\")\n      return false\n    }\n\n    if (isShuffleOperator(s.child)) {\n      withInfo(s, s\"Child ${s.child.getClass.getName} is a shuffle operator\")\n      return false\n    }\n\n    
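// The child must be either a row-based Spark plan or a Comet plan; other columnar\n    // operators are not supported here.\n    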
if (s.child.supportsColumnar && !isCometPlan(s.child)) {\n      withInfo(s, s\"Child ${s.child.getClass.getName} is neither a row-based nor a Comet operator\")\n      return false\n    }\n\n    val inputs = s.child.output\n\n    for (input <- inputs) {\n      if (!supportedSerializableDataType(input.dataType)) {\n        withInfo(s, s\"unsupported shuffle data type ${input.dataType} for input $input\")\n        return false\n      }\n    }\n\n    val partitioning = s.outputPartitioning\n    partitioning match {\n      case HashPartitioning(expressions, _) =>\n        var supported = true\n        for (expr <- expressions) {\n          if (QueryPlanSerde.exprToProto(expr, inputs).isEmpty) {\n            withInfo(s, s\"unsupported hash partitioning expression: $expr\")\n            supported = false\n            // We don't short-circuit in case there is more than one unsupported expression\n            // to provide info for.\n          }\n        }\n        supported\n      case SinglePartition =>\n        // we already checked that the input types are supported\n        true\n      case RoundRobinPartitioning(_) =>\n        // we already checked that the input types are supported\n        true\n      case RangePartitioning(orderings, _) =>\n        var supported = true\n        for (o <- orderings) {\n          if (QueryPlanSerde.exprToProto(o, inputs).isEmpty) {\n            withInfo(s, s\"unsupported range partitioning sort order: $o\")\n            supported = false\n            // We don't short-circuit in case there is more than one unsupported expression\n            // to provide info for.\n          }\n        }\n        supported\n      case _ =>\n        withInfo(\n          s,\n          s\"unsupported Spark partitioning for columnar shuffle: ${partitioning.getClass.getName}\")\n        false\n    }\n  }\n\n  private def isCometNativeShuffleMode(conf: SQLConf): Boolean = {\n    COMET_SHUFFLE_MODE.get(conf) match {\n      case \"native\" => true\n      case \"auto\" => true\n      case _ => false\n    }\n  }\n\n  private def isCometJVMShuffleMode(conf: SQLConf): Boolean = {\n    COMET_SHUFFLE_MODE.get(conf) match {\n      case \"jvm\" => true\n      case \"auto\" => true\n      case _ => false\n    }\n  }\n\n  private def isCometPlan(op: SparkPlan): Boolean = op.isInstanceOf[CometPlan]\n\n  /**\n   * Returns true if a given Spark plan is a Comet shuffle operator.\n   */\n  private def isShuffleOperator(op: SparkPlan): Boolean = {\n    op match {\n      case op: ShuffleQueryStageExec if op.plan.isInstanceOf[CometShuffleExchangeExec] => true\n      case _: CometShuffleExchangeExec => true\n      case op: CometSinkPlaceHolder => isShuffleOperator(op.child)\n      case _ => false\n    }\n  }\n\n  /**\n   * Returns true if the stage (the subtree rooted at this shuffle) contains a scan with Dynamic\n   * Partition Pruning (DPP). When DPP is present, the scan falls back to Spark, and wrapping the\n   * stage with Comet shuffle creates inefficient row-to-columnar transitions.\n   */\n  private def stageContainsDPPScan(s: ShuffleExchangeExec): Boolean = {\n    def isDynamicPruningFilter(e: Expression): Boolean =\n      e.exists(_.isInstanceOf[PlanExpression[_]])\n\n    
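// In Spark, DPP filters are injected as subquery expressions (such as\n    // DynamicPruningSubquery), which extend PlanExpression; that is what this check detects.\n    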
s.child.exists {\n      case scan: FileSourceScanExec =>\n        scan.partitionFilters.exists(isDynamicPruningFilter)\n      case _ => false\n    }\n  }\n\n  def isCometShuffleEnabledWithInfo(op: SparkPlan): Boolean = {\n    if (!COMET_EXEC_SHUFFLE_ENABLED.get(op.conf)) {\n      withInfo(\n        op,\n        s\"Comet shuffle is not enabled: ${COMET_EXEC_SHUFFLE_ENABLED.key} is not enabled\")\n      false\n    } else if (!isCometShuffleManagerEnabled(op.conf)) {\n      withInfo(op, s\"spark.shuffle.manager is not set to ${classOf[CometShuffleManager].getName}\")\n      false\n    } else {\n      true\n    }\n  }\n\n  def prepareShuffleDependency(\n      rdd: RDD[ColumnarBatch],\n      outputAttributes: Seq[Attribute],\n      outputPartitioning: Partitioning,\n      serializer: Serializer,\n      metrics: Map[String, SQLMetric]): ShuffleDependency[Int, ColumnarBatch, ColumnarBatch] = {\n    val numParts = rdd.getNumPartitions\n\n    // The code block below is mostly brought over from\n    // ShuffleExchangeExec::prepareShuffleDependency\n    val (partitioner, rangePartitionBounds) = outputPartitioning match {\n      case rangePartitioning: RangePartitioning =>\n        // Extract only fields used for sorting to avoid collecting large fields that do not\n        // affect the sorting result when deciding partition bounds in RangePartitioner\n        val rddForSampling = rdd.mapPartitionsInternal { iter =>\n          val projection =\n            UnsafeProjection.create(rangePartitioning.ordering.map(_.child), outputAttributes)\n          val mutablePair = new MutablePair[InternalRow, Null]()\n\n          // Internally, RangePartitioner runs a job on the RDD that samples keys to compute\n          // partition bounds. 
To get accurate samples, we need to copy the mutable keys.\n          iter.flatMap { batch =>\n            val rowIter = batch.rowIterator().asScala\n            rowIter.map { row =>\n              mutablePair.update(projection(row).copy(), null)\n            }\n          }\n        }\n\n        // Construct ordering on extracted sort key.\n        val orderingAttributes = rangePartitioning.ordering.zipWithIndex.map { case (ord, i) =>\n          ord.copy(child = BoundReference(i, ord.dataType, ord.nullable))\n        }\n        implicit val ordering = new LazilyGeneratedOrdering(orderingAttributes)\n        // Use Spark's RangePartitioner to compute bounds from global samples\n        val rangePartitioner = new RangePartitioner(\n          rangePartitioning.numPartitions,\n          rddForSampling,\n          ascending = true,\n          samplePointsPerPartitionHint = SQLConf.get.rangeExchangeSampleSizePerPartition)\n\n        // Use reflection to access the private rangeBounds field\n        val rangeBoundsField = rangePartitioner.getClass.getDeclaredField(\"rangeBounds\")\n        rangeBoundsField.setAccessible(true)\n        val rangeBounds =\n          rangeBoundsField.get(rangePartitioner).asInstanceOf[Array[InternalRow]].toSeq\n\n        (rangePartitioner.asInstanceOf[Partitioner], Some(rangeBounds))\n\n      case _ =>\n        (\n          new Partitioner {\n            override def numPartitions: Int = outputPartitioning.numPartitions\n\n            override def getPartition(key: Any): Int = key.asInstanceOf[Int]\n          },\n          None)\n    }\n\n    val dependency = new CometShuffleDependency[Int, ColumnarBatch, ColumnarBatch](\n      rdd.map(\n        (0, _)\n      ), // adding a fake partitionId that is always 0 because ShuffleDependency requires it\n      serializer = serializer,\n      shuffleWriterProcessor = ShuffleExchangeExec.createShuffleWriteProcessor(metrics),\n      shuffleType = CometNativeShuffle,\n      partitioner = partitioner,\n      decodeTime = metrics(\"decode_time\"),\n      outputPartitioning = Some(outputPartitioning),\n      outputAttributes = outputAttributes,\n      shuffleWriteMetrics = metrics,\n      numParts = numParts,\n      rangePartitionBounds = rangePartitionBounds)\n    dependency\n  }\n\n  /**\n   * This is copied from Spark `ShuffleExchangeExec.needToCopyObjectsBeforeShuffle`. The only\n   * difference is that we use `CometShuffleManager` instead of `SortShuffleManager`.\n
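   * In practice, when CometShuffleManager is installed every branch below returns false, so\n   * rows are copied before the shuffle only when a different ShuffleManager is in use.\n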
   */\n  private def needToCopyObjectsBeforeShuffle(partitioner: Partitioner): Boolean = {\n    // Note: even though we only use the partitioner's `numPartitions` field, we require it to be\n    // passed instead of directly passing the number of partitions in order to guard against\n    // corner-cases where a partitioner constructed with `numPartitions` partitions may output\n    // fewer partitions (like RangePartitioner, for example).\n    val conf = SparkEnv.get.conf\n    val shuffleManager = SparkEnv.get.shuffleManager\n    val sortBasedShuffleOn = shuffleManager.isInstanceOf[CometShuffleManager]\n    val bypassMergeThreshold = conf.get(config.SHUFFLE_SORT_BYPASS_MERGE_THRESHOLD)\n    val numParts = partitioner.numPartitions\n    if (sortBasedShuffleOn) {\n      if (numParts <= bypassMergeThreshold) {\n        // If we're using the original SortShuffleManager and the number of output partitions is\n        // sufficiently small, then Spark will fall back to the hash-based shuffle write path, which\n        // doesn't buffer deserialized records.\n        // Note that we'll have to remove this case if we fix SPARK-6026 and remove this bypass.\n        false\n      } else if (numParts <= SortShuffleManager.MAX_SHUFFLE_OUTPUT_PARTITIONS_FOR_SERIALIZED_MODE) {\n        // SPARK-4550 and SPARK-7081 extended sort-based shuffle to serialize individual records\n        // prior to sorting them. This optimization is only applied in cases where the shuffle\n        // dependency does not specify an aggregator or ordering and the record serializer has\n        // certain properties and the number of partitions doesn't exceed the limitation. If this\n        // optimization is enabled, we can safely avoid the copy.\n        //\n        // Exchange never configures its ShuffledRDDs with aggregators or key orderings, and the\n        // serializer in Spark SQL always satisfies the properties, so we only need to check whether\n        // the number of partitions exceeds the limitation.\n        false\n      } else {\n        // This differs from Spark's `SortShuffleManager`.\n        // Comet doesn't use Spark `ExternalSorter` to buffer records in memory, so we don't need to\n        // copy.\n        false\n      }\n    } else {\n      // Catch-all case to safely handle any future ShuffleManager implementations.\n      true\n    }\n  }\n\n  /**\n   * Returns a [[ShuffleDependency]] that will partition rows of its child based on the\n   * partitioning scheme defined in `newPartitioning`. 
Those partitions of the returned\n   * ShuffleDependency will be the input of shuffle.\n   */\n  def prepareJVMShuffleDependency(\n      rdd: RDD[InternalRow],\n      outputAttributes: Seq[Attribute],\n      newPartitioning: Partitioning,\n      serializer: Serializer,\n      writeMetrics: Map[String, SQLMetric]): ShuffleDependency[Int, InternalRow, InternalRow] = {\n    val part: Partitioner = newPartitioning match {\n      case RoundRobinPartitioning(numPartitions) => new HashPartitioner(numPartitions)\n      case HashPartitioning(_, n) =>\n        // For HashPartitioning, the partitioning key is already a valid partition ID, as we use\n        // `HashPartitioning.partitionIdExpression` to produce the partitioning key.\n        new PartitionIdPassthrough(n)\n      case RangePartitioning(sortingExpressions, numPartitions) =>\n        // Extract only fields used for sorting to avoid collecting large fields that do not\n        // affect the sorting result when deciding partition bounds in RangePartitioner\n        val rddForSampling = rdd.mapPartitionsInternal { iter =>\n          val projection =\n            UnsafeProjection.create(sortingExpressions.map(_.child), outputAttributes)\n          val mutablePair = new MutablePair[InternalRow, Null]()\n          // Internally, RangePartitioner runs a job on the RDD that samples keys to compute\n          // partition bounds. To get accurate samples, we need to copy the mutable keys.\n          iter.map(row => mutablePair.update(projection(row).copy(), null))\n        }\n        // Construct ordering on extracted sort key.\n        val orderingAttributes = sortingExpressions.zipWithIndex.map { case (ord, i) =>\n          ord.copy(child = BoundReference(i, ord.dataType, ord.nullable))\n        }\n        implicit val ordering = new LazilyGeneratedOrdering(orderingAttributes)\n        new RangePartitioner(\n          numPartitions,\n          rddForSampling,\n          ascending = true,\n          samplePointsPerPartitionHint = SQLConf.get.rangeExchangeSampleSizePerPartition)\n      case SinglePartition => new ConstantPartitioner\n      case _ => throw new IllegalStateException(s\"Exchange not implemented for $newPartitioning\")\n      // TODO: Handle BroadcastPartitioning.\n    }\n\n    def getPartitionKeyExtractor(): InternalRow => Any = newPartitioning match {\n      case RoundRobinPartitioning(numPartitions) =>\n        // Distributes elements evenly across output partitions, starting from a random partition.\n        // nextInt(numPartitions) implementation has a special case when bound is a power of 2,\n        // which is basically taking several highest bits from the initial seed, with only a\n        // minimal scrambling. Due to the deterministic seed, using the generator only once,\n        // and the lack of scrambling, the position values for power-of-two numPartitions always\n        // end up being almost the same regardless of the index. Substantially scrambling the\n        // seed by hashing will help. Refer to SPARK-21782 for more details.\n        
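// For example, with numPartitions = 8 a task may start at position 5 and then emit\n        // 6, 7, 8, ...; the HashPartitioner created above maps each value into [0, 8) via mod.\n        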
val partitionId = TaskContext.get().partitionId()\n        var position = new XORShiftRandom(partitionId).nextInt(numPartitions)\n        (_: InternalRow) => {\n          // The HashPartitioner will handle the `mod` by the number of partitions\n          position += 1\n          position\n        }\n      case h: HashPartitioning =>\n        val projection = UnsafeProjection.create(h.partitionIdExpression :: Nil, outputAttributes)\n        row => projection(row).getInt(0)\n      case RangePartitioning(sortingExpressions, _) =>\n        val projection =\n          UnsafeProjection.create(sortingExpressions.map(_.child), outputAttributes)\n        row => projection(row)\n      case SinglePartition => identity\n      case _ => throw new IllegalStateException(s\"Exchange not implemented for $newPartitioning\")\n    }\n\n    val isRoundRobin = newPartitioning.isInstanceOf[RoundRobinPartitioning] &&\n      newPartitioning.numPartitions > 1\n\n    val rddWithPartitionIds: RDD[Product2[Int, InternalRow]] = {\n      // [SPARK-23207] Have to make sure the generated RoundRobinPartitioning is deterministic,\n      // otherwise a retry task may output different rows and thus lead to data loss.\n      //\n      // Currently we follow the most straightforward approach and perform a local sort before\n      // partitioning.\n      //\n      // Note that we don't perform a local sort if the new partitioning has only 1 partition, in\n      // which case all output rows go to the same partition.\n      val newRdd = if (isRoundRobin && SQLConf.get.sortBeforeRepartition) {\n        rdd.mapPartitionsInternal { iter =>\n          val recordComparatorSupplier = new Supplier[RecordComparator] {\n            override def get: RecordComparator = new RecordBinaryComparator()\n          }\n          // The comparator for comparing row hashcode, which should always be Integer.\n          val prefixComparator = PrefixComparators.LONG\n\n          // The prefix computer generates row hashcode as the prefix, so we may decrease the\n          // probability that the prefixes are equal when input rows choose column values from a\n          // limited range.\n          val prefixComputer = new UnsafeExternalRowSorter.PrefixComputer {\n            private val result = new UnsafeExternalRowSorter.PrefixComputer.Prefix\n\n            override def computePrefix(\n                row: InternalRow): UnsafeExternalRowSorter.PrefixComputer.Prefix = {\n              // The hashcode generated from the binary form of a [[UnsafeRow]] should not be null.\n              result.isNull = false\n              result.value = row.hashCode()\n              result\n            }\n          }\n          val pageSize = SparkEnv.get.memoryManager.pageSizeBytes\n\n          val sorter = UnsafeExternalRowSorter.createWithRecordComparator(\n            fromAttributes(outputAttributes),\n            recordComparatorSupplier,\n            prefixComparator,\n            prefixComputer,\n            pageSize,\n            // We are comparing binary here, which does not support radix sort.\n            // See more details in SPARK-28699.\n            false)\n          sorter.sort(iter.asInstanceOf[Iterator[UnsafeRow]])\n        }\n      } else {\n        rdd\n      }\n\n      // The round-robin function is order sensitive if we don't sort the input.\n      val isOrderSensitive = isRoundRobin && !SQLConf.get.sortBeforeRepartition\n      if (CometShuffleExchangeExec.needToCopyObjectsBeforeShuffle(part)) {\n        
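// This branch copies each row before buffering: a non-Comet shuffle manager may buffer\n        // deserialized records while the iterator reuses the same underlying row object.\n        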
newRdd.mapPartitionsWithIndexInternal(\n          (_, iter) => {\n            val getPartitionKey = getPartitionKeyExtractor()\n            iter.map { row => (part.getPartition(getPartitionKey(row)), row.copy()) }\n          },\n          isOrderSensitive = isOrderSensitive)\n      } else {\n        newRdd.mapPartitionsWithIndexInternal(\n          (_, iter) => {\n            val getPartitionKey = getPartitionKeyExtractor()\n            val mutablePair = new MutablePair[Int, InternalRow]()\n            iter.map { row => mutablePair.update(part.getPartition(getPartitionKey(row)), row) }\n          },\n          isOrderSensitive = isOrderSensitive)\n      }\n    }\n\n    // Now, we manually create a ShuffleDependency. Pairs in rddWithPartitionIds are in the\n    // form of (partitionId, row) and every partitionId is in the expected range\n    // [0, part.numPartitions - 1], so the partitioner is a PartitionIdPassthrough.\n    val dependency =\n      new CometShuffleDependency[Int, InternalRow, InternalRow](\n        rddWithPartitionIds,\n        new PartitionIdPassthrough(part.numPartitions),\n        serializer,\n        shuffleWriterProcessor = ShuffleExchangeExec.createShuffleWriteProcessor(writeMetrics),\n        shuffleType = CometColumnarShuffle,\n        schema = Some(fromAttributes(outputAttributes)),\n        decodeTime = writeMetrics(\"decode_time\"))\n\n    dependency\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/execution/shuffle/CometShuffleManager.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle\n\nimport java.util.Collections\nimport java.util.concurrent.ConcurrentHashMap\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.ShuffleDependency\nimport org.apache.spark.SparkConf\nimport org.apache.spark.SparkEnv\nimport org.apache.spark.TaskContext\nimport org.apache.spark.internal.{config, Logging}\nimport org.apache.spark.shuffle._\nimport org.apache.spark.shuffle.api.ShuffleExecutorComponents\nimport org.apache.spark.shuffle.sort.{BypassMergeSortShuffleHandle, SerializedShuffleHandle, SortShuffleManager, SortShuffleWriter}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.util.collection.OpenHashSet\n\nimport org.apache.comet.CometConf\n\n/**\n * A [[ShuffleManager]] that uses Arrow format to shuffle data.\n */\nclass CometShuffleManager(conf: SparkConf) extends ShuffleManager with Logging {\n\n  import CometShuffleManager._\n  import SortShuffleManager._\n\n  if (!conf.getBoolean(\"spark.shuffle.spill\", true)) {\n    logWarning(\n      \"spark.shuffle.spill was set to false, but this configuration is ignored as of Spark 1.6+.\" +\n        \" Shuffle will continue to spill to disk when necessary.\")\n  }\n\n  private val sortShuffleManager = new SortShuffleManager(conf)\n\n  /**\n   * A mapping from shuffle ids to the task ids of mappers producing output for those shuffles.\n   */\n  private[this] val taskIdMapsForShuffle = new ConcurrentHashMap[Int, OpenHashSet[Long]]()\n\n  // Lazy initialization to avoid accessing SparkEnv.get during ShuffleManager construction,\n  // which can cause hangs when SparkEnv is not fully initialized (e.g., during Hive metastore ops)\n  // This is only initialized when getWriter/getReader is called (during task execution),\n  // at which point SparkEnv should be fully available\n  @volatile private var _shuffleExecutorComponents: ShuffleExecutorComponents = _\n\n  private def shuffleExecutorComponents: ShuffleExecutorComponents = {\n    if (_shuffleExecutorComponents == null) {\n      synchronized {\n        if (_shuffleExecutorComponents == null) {\n          val executorComponents = ShuffleDataIOUtils.loadShuffleDataIO(conf).executor()\n          val extraConfigs =\n            conf.getAllWithPrefix(ShuffleDataIOUtils.SHUFFLE_SPARK_CONF_PREFIX).toMap\n          // SparkEnv.get should be available when getWriter/getReader is called\n          // (during task execution), but check for null to avoid hangs\n          val env = SparkEnv.get\n          if (env == null) {\n            throw new IllegalStateException(\n              \"SparkEnv.get is null during shuffleExecutorComponents initialization. 
\" +\n                \"This may indicate a timing issue with SparkEnv initialization.\")\n          }\n          executorComponents.initializeExecutor(\n            conf.getAppId,\n            env.executorId,\n            extraConfigs.asJava)\n          _shuffleExecutorComponents = executorComponents\n        }\n      }\n    }\n    _shuffleExecutorComponents\n  }\n\n  override val shuffleBlockResolver: IndexShuffleBlockResolver = {\n    // The patch versions of Spark 3.4 have different constructor signatures:\n    // See https://github.com/apache/spark/commit/5180694705be3508bd21dd9b863a59b8cb8ba193\n    // We look for proper constructor by reflection.\n    classOf[IndexShuffleBlockResolver].getDeclaredConstructors\n      .filter(c => List(2, 3).contains(c.getParameterCount()))\n      .map { c =>\n        c.getParameterCount match {\n          case 2 =>\n            c.newInstance(conf, null).asInstanceOf[IndexShuffleBlockResolver]\n          case 3 =>\n            c.newInstance(conf, null, Collections.emptyMap())\n              .asInstanceOf[IndexShuffleBlockResolver]\n        }\n      }\n      .head\n  }\n\n  /**\n   * (override) Obtains a [[ShuffleHandle]] to pass to tasks.\n   */\n  def registerShuffle[K, V, C](\n      shuffleId: Int,\n      dependency: ShuffleDependency[K, V, C]): ShuffleHandle = {\n    dependency match {\n      case cometShuffleDependency: CometShuffleDependency[_, _, _] =>\n        // Comet shuffle dependency, which comes from `CometShuffleExchangeExec`.\n        cometShuffleDependency.shuffleType match {\n          case CometColumnarShuffle =>\n            // Comet columnar shuffle, which uses Arrow format to shuffle data.\n            if (shouldBypassMergeSort(conf, dependency) ||\n              !SortShuffleManager.canUseSerializedShuffle(dependency)) {\n              new CometBypassMergeSortShuffleHandle(\n                shuffleId,\n                dependency.asInstanceOf[ShuffleDependency[K, V, V]])\n            } else {\n              new CometSerializedShuffleHandle(\n                shuffleId,\n                dependency.asInstanceOf[ShuffleDependency[K, V, V]])\n            }\n          case CometNativeShuffle =>\n            new CometNativeShuffleHandle(\n              shuffleId,\n              dependency.asInstanceOf[ShuffleDependency[K, V, V]])\n          case _ =>\n            // Unsupported shuffle type.\n            throw new UnsupportedOperationException(\n              s\"Unsupported shuffle type: ${cometShuffleDependency.shuffleType}\")\n        }\n      case _ =>\n        // It is a Spark shuffle dependency, so we use Spark Sort Shuffle Manager.\n        if (SortShuffleWriter.shouldBypassMergeSort(conf, dependency)) {\n          // If there are fewer than spark.shuffle.sort.bypassMergeThreshold partitions and we don't\n          // need map-side aggregation, then write numPartitions files directly and just concatenate\n          // them at the end. This avoids doing serialization and deserialization twice to merge\n          // together the spilled files, which would happen with the normal code path. 
The downside\n          // is having multiple files open at a time and thus more memory allocated to buffers.\n          new BypassMergeSortShuffleHandle[K, V](\n            shuffleId,\n            dependency.asInstanceOf[ShuffleDependency[K, V, V]])\n        } else if (SortShuffleManager.canUseSerializedShuffle(dependency)) {\n          // Otherwise, try to buffer map outputs in a serialized form, since this is more\n          // efficient:\n          new SerializedShuffleHandle[K, V](\n            shuffleId,\n            dependency.asInstanceOf[ShuffleDependency[K, V, V]])\n        } else {\n          // Otherwise, buffer map outputs in a deserialized form:\n          new BaseShuffleHandle(shuffleId, dependency)\n        }\n    }\n  }\n\n  override def getReader[K, C](\n      handle: ShuffleHandle,\n      startMapIndex: Int,\n      endMapIndex: Int,\n      startPartition: Int,\n      endPartition: Int,\n      context: TaskContext,\n      metrics: ShuffleReadMetricsReporter): ShuffleReader[K, C] = {\n    val baseShuffleHandle = handle.asInstanceOf[BaseShuffleHandle[K, _, C]]\n    val (blocksByAddress, canEnableBatchFetch) =\n      if (baseShuffleHandle.dependency.shuffleMergeEnabled) {\n        val res = SparkEnv.get.mapOutputTracker.getPushBasedShuffleMapSizesByExecutorId(\n          handle.shuffleId,\n          startMapIndex,\n          endMapIndex,\n          startPartition,\n          endPartition)\n        (res.iter, res.enableBatchFetch)\n      } else {\n        val address = SparkEnv.get.mapOutputTracker.getMapSizesByExecutorId(\n          handle.shuffleId,\n          startMapIndex,\n          endMapIndex,\n          startPartition,\n          endPartition)\n        (address, true)\n      }\n\n    if (handle.isInstanceOf[CometBypassMergeSortShuffleHandle[_, _]] ||\n      handle.isInstanceOf[CometSerializedShuffleHandle[_, _]] ||\n      handle.isInstanceOf[CometNativeShuffleHandle[_, _]]) {\n      new CometBlockStoreShuffleReader(\n        handle.asInstanceOf[BaseShuffleHandle[K, _, C]],\n        blocksByAddress,\n        context,\n        metrics,\n        shouldBatchFetch =\n          canEnableBatchFetch && canUseBatchFetch(startPartition, endPartition, context))\n    } else {\n      // It is a Spark shuffle dependency, so we use Spark Sort Shuffle Reader.\n      sortShuffleManager.getReader(\n        handle,\n        startMapIndex,\n        endMapIndex,\n        startPartition,\n        endPartition,\n        context,\n        metrics)\n    }\n  }\n\n  /** Get a writer for a given partition. Called on executors by map tasks. 
*/\n  override def getWriter[K, V](\n      handle: ShuffleHandle,\n      mapId: Long,\n      context: TaskContext,\n      metrics: ShuffleWriteMetricsReporter): ShuffleWriter[K, V] = {\n    val mapTaskIds =\n      taskIdMapsForShuffle.computeIfAbsent(handle.shuffleId, _ => new OpenHashSet[Long](16))\n    mapTaskIds.synchronized {\n      mapTaskIds.add(context.taskAttemptId())\n    }\n    val env = SparkEnv.get\n    handle match {\n      case cometShuffleHandle: CometNativeShuffleHandle[K @unchecked, V @unchecked] =>\n        val dep = cometShuffleHandle.dependency.asInstanceOf[CometShuffleDependency[_, _, _]]\n        new CometNativeShuffleWriter(\n          dep.outputPartitioning.get,\n          dep.outputAttributes,\n          dep.shuffleWriteMetrics,\n          dep.numParts,\n          dep.shuffleId,\n          mapId,\n          context,\n          metrics,\n          dep.rangePartitionBounds)\n      case bypassMergeSortHandle: CometBypassMergeSortShuffleHandle[K @unchecked, V @unchecked] =>\n        new CometBypassMergeSortShuffleWriter(\n          env.blockManager,\n          context.taskMemoryManager(),\n          context,\n          bypassMergeSortHandle,\n          mapId,\n          env.conf,\n          metrics,\n          shuffleExecutorComponents)\n      case unsafeShuffleHandle: CometSerializedShuffleHandle[K @unchecked, V @unchecked] =>\n        new CometUnsafeShuffleWriter(\n          env.blockManager,\n          context.taskMemoryManager(),\n          unsafeShuffleHandle,\n          mapId,\n          context,\n          env.conf,\n          metrics,\n          shuffleExecutorComponents)\n      case _ =>\n        // It is a Spark shuffle dependency, so we use Spark Sort Shuffle Writer.\n        sortShuffleManager.getWriter(handle, mapId, context, metrics)\n    }\n  }\n\n  /** Remove a shuffle's metadata from the ShuffleManager. */\n  override def unregisterShuffle(shuffleId: Int): Boolean = {\n    Option(taskIdMapsForShuffle.remove(shuffleId)).foreach { mapTaskIds =>\n      mapTaskIds.iterator.foreach { mapTaskId =>\n        shuffleBlockResolver.removeDataByMap(shuffleId, mapTaskId)\n      }\n    }\n    true\n  }\n\n  /** Shut down this ShuffleManager. */\n  override def stop(): Unit = {\n    shuffleBlockResolver.stop()\n  }\n}\n\nobject CometShuffleManager extends Logging {\n\n  def shouldBypassMergeSort(conf: SparkConf, dep: ShuffleDependency[_, _, _]): Boolean = {\n    // We cannot bypass sorting if we need to do map-side aggregation.\n    if (dep.mapSideCombine) {\n      false\n    } else {\n      // Condition from Spark:\n      // Bypass merge sort if we have fewer partitions than `spark.shuffle.sort.bypassMergeThreshold`\n      val partitionCond = SortShuffleWriter.shouldBypassMergeSort(conf, dep)\n\n      // Bypass merge sort if partitions * cores is no more than\n      // `spark.comet.columnar.shuffle.async.max.thread.num`\n      val executorCores = conf.get(config.EXECUTOR_CORES)\n      val maxThreads = CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_MAX_THREAD_NUM.get(SQLConf.get)\n      val threadCond = dep.partitioner.numPartitions * executorCores <= maxThreads\n\n      // Comet columnar shuffle buffers rows in memory. If too many cores are used with a\n      // relatively high number of partitions, it may cause OOM when initializing the\n      // hash-based shuffle writers at the beginning of the task. For example, 10 cores\n      // with 100 partitions will allocate 1000 writers. 
Sort-based shuffle doesn't have\n      // this issue because it only allocates one writer per task.\n      partitionCond && threadCond\n    }\n  }\n}\n\n/**\n * Subclass of [[BaseShuffleHandle]], used to identify when we've chosen to use the bypass merge\n * sort shuffle path.\n */\nprivate[spark] class CometBypassMergeSortShuffleHandle[K, V](\n    shuffleId: Int,\n    dependency: ShuffleDependency[K, V, V])\n    extends BaseShuffleHandle(shuffleId, dependency) {}\n\n/**\n * Subclass of [[BaseShuffleHandle]], used to identify when we've chosen to use the serialized\n * shuffle.\n */\nprivate[spark] class CometSerializedShuffleHandle[K, V](\n    shuffleId: Int,\n    dependency: ShuffleDependency[K, V, V])\n    extends BaseShuffleHandle(shuffleId, dependency) {}\n\n/**\n * Subclass of [[BaseShuffleHandle]], used to identify when we've chosen to use the native shuffle\n * writer.\n */\nprivate[spark] class CometNativeShuffleHandle[K, V](\n    shuffleId: Int,\n    dependency: ShuffleDependency[K, V, V])\n    extends BaseShuffleHandle(shuffleId, dependency) {}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/execution/shuffle/CometShuffledRowRDD.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle\n\nimport org.apache.spark._\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.shuffle.sort.SortShuffleManager\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLShuffleReadMetricsReporter}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport org.apache.comet.CometShuffleBlockIterator\n\n/**\n * Different from [[org.apache.spark.sql.execution.ShuffledRowRDD]], this RDD is specialized for\n * reading shuffled data through [[CometBlockStoreShuffleReader]]. The shuffled data is read in an\n * iterator of [[Product2[Int, ColumnarBatch]]] instead of [[Product2[Int, InternalRow]]].\n */\nclass CometShuffledBatchRDD(\n    var dependency: ShuffleDependency[Int, _, _],\n    val metrics: Map[String, SQLMetric],\n    partitionSpecs: Array[ShufflePartitionSpec])\n    extends RDD[ColumnarBatch](dependency.rdd.context, Nil) {\n\n  def this(dependency: ShuffleDependency[Int, _, _], metrics: Map[String, SQLMetric]) = {\n    this(\n      dependency,\n      metrics,\n      Array.tabulate(dependency.partitioner.numPartitions)(i => CoalescedPartitionSpec(i, i + 1)))\n  }\n\n  dependency.rdd.context.setLocalProperty(\n    SortShuffleManager.FETCH_SHUFFLE_BLOCKS_IN_BATCH_ENABLED_KEY,\n    SQLConf.get.fetchShuffleBlocksInBatch.toString)\n\n  override def getDependencies: Seq[Dependency[_]] = List(dependency)\n\n  override val partitioner: Option[Partitioner] =\n    if (partitionSpecs.forall(_.isInstanceOf[CoalescedPartitionSpec])) {\n      val indices = partitionSpecs.map(_.asInstanceOf[CoalescedPartitionSpec].startReducerIndex)\n      // TODO this check is based on assumptions of callers' behavior but is sufficient for now.\n      if (indices.toSet.size == partitionSpecs.length) {\n        Some(new CoalescedPartitioner(dependency.partitioner, indices))\n      } else {\n        None\n      }\n    } else {\n      None\n    }\n\n  override def getPartitions: Array[Partition] = {\n    Array.tabulate[Partition](partitionSpecs.length) { i =>\n      ShuffledRowRDDPartition(i, partitionSpecs(i))\n    }\n  }\n\n  override def getPreferredLocations(partition: Partition): Seq[String] = {\n    val tracker = SparkEnv.get.mapOutputTracker.asInstanceOf[MapOutputTrackerMaster]\n    partition.asInstanceOf[ShuffledRowRDDPartition].spec match {\n      case CoalescedPartitionSpec(startReducerIndex, endReducerIndex, _) =>\n        // TODO order by partition size.\n        startReducerIndex.until(endReducerIndex).flatMap { reducerIndex =>\n          tracker.getPreferredLocationsForShuffle(dependency, reducerIndex)\n        }\n\n      
case PartialReducerPartitionSpec(_, startMapIndex, endMapIndex, _) =>\n        tracker.getMapLocation(dependency, startMapIndex, endMapIndex)\n\n      case PartialMapperPartitionSpec(mapIndex, _, _) =>\n        tracker.getMapLocation(dependency, mapIndex, mapIndex + 1)\n\n      case CoalescedMapperPartitionSpec(startMapIndex, endMapIndex, _) =>\n        tracker.getMapLocation(dependency, startMapIndex, endMapIndex)\n    }\n  }\n\n  private def createReader(\n      split: Partition,\n      context: TaskContext): CometBlockStoreShuffleReader[_, _] = {\n    val tempMetrics = context.taskMetrics().createTempShuffleReadMetrics()\n    // `SQLShuffleReadMetricsReporter` will update its own metrics for SQL exchange operator,\n    // as well as the `tempMetrics` for basic shuffle metrics.\n    val sqlMetricsReporter = new SQLShuffleReadMetricsReporter(tempMetrics, metrics)\n    split.asInstanceOf[ShuffledRowRDDPartition].spec match {\n      case CoalescedPartitionSpec(startReducerIndex, endReducerIndex, _) =>\n        SparkEnv.get.shuffleManager\n          .getReader(\n            dependency.shuffleHandle,\n            startReducerIndex,\n            endReducerIndex,\n            context,\n            sqlMetricsReporter)\n          .asInstanceOf[CometBlockStoreShuffleReader[_, _]]\n\n      case PartialReducerPartitionSpec(reducerIndex, startMapIndex, endMapIndex, _) =>\n        SparkEnv.get.shuffleManager\n          .getReader(\n            dependency.shuffleHandle,\n            startMapIndex,\n            endMapIndex,\n            reducerIndex,\n            reducerIndex + 1,\n            context,\n            sqlMetricsReporter)\n          .asInstanceOf[CometBlockStoreShuffleReader[_, _]]\n\n      case PartialMapperPartitionSpec(mapIndex, startReducerIndex, endReducerIndex) =>\n        SparkEnv.get.shuffleManager\n          .getReader(\n            dependency.shuffleHandle,\n            mapIndex,\n            mapIndex + 1,\n            startReducerIndex,\n            endReducerIndex,\n            context,\n            sqlMetricsReporter)\n          .asInstanceOf[CometBlockStoreShuffleReader[_, _]]\n\n      case CoalescedMapperPartitionSpec(startMapIndex, endMapIndex, numReducers) =>\n        SparkEnv.get.shuffleManager\n          .getReader(\n            dependency.shuffleHandle,\n            startMapIndex,\n            endMapIndex,\n            0,\n            numReducers,\n            context,\n            sqlMetricsReporter)\n          .asInstanceOf[CometBlockStoreShuffleReader[_, _]]\n    }\n  }\n\n  /**\n   * Creates a CometShuffleBlockIterator that provides raw compressed shuffle blocks for direct\n   * consumption by native code, bypassing Arrow FFI.\n   */\n  def computeAsShuffleBlockIterator(\n      split: Partition,\n      context: TaskContext): CometShuffleBlockIterator = {\n    val reader = createReader(split, context)\n    new CometShuffleBlockIterator(reader.readAsRawStream())\n  }\n\n  override def compute(split: Partition, context: TaskContext): Iterator[ColumnarBatch] = {\n    val reader = createReader(split, context)\n    // TODO: read IPC using native code\n    reader.read().asInstanceOf[Iterator[Product2[Int, ColumnarBatch]]].map(_._2)\n  }\n\n  override def clearDependencies(): Unit = {\n    super.clearDependencies()\n    dependency = null\n  }\n}\n\n/**\n * The [[Partition]] used by [[CometShuffledBatchRDD]].\n */\nfinal case class ShuffledRowRDDPartition(index: Int, spec: ShufflePartitionSpec) extends Partition\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/execution/shuffle/NativeBatchDecoderIterator.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.execution.shuffle\n\nimport java.io.{EOFException, InputStream}\nimport java.nio.{ByteBuffer, ByteOrder}\nimport java.nio.channels.{Channels, ReadableByteChannel}\n\nimport org.apache.spark.sql.execution.metric.SQLMetric\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nimport org.apache.comet.Native\nimport org.apache.comet.vector.NativeUtil\n\n/**\n * This iterator wraps a Spark input stream that is reading shuffle blocks generated by the Comet\n * native ShuffleWriterExec and then calls native code to decompress and decode the shuffle blocks\n * and use Arrow FFI to return the Arrow record batch.\n */\ncase class NativeBatchDecoderIterator(\n    in: InputStream,\n    decodeTime: SQLMetric,\n    nativeLib: Native,\n    nativeUtil: NativeUtil,\n    tracingEnabled: Boolean)\n    extends Iterator[ColumnarBatch] {\n\n  private var isClosed = false\n  private val longBuf = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN)\n  private var currentBatch: ColumnarBatch = null\n  private var batch = fetchNext()\n\n  import NativeBatchDecoderIterator._\n\n  private val channel: ReadableByteChannel = if (in != null) {\n    Channels.newChannel(in)\n  } else {\n    null\n  }\n\n  def hasNext(): Boolean = {\n    if (channel == null || isClosed) {\n      return false\n    }\n    if (batch.isDefined) {\n      return true\n    }\n\n    // Release the previous batch.\n    if (currentBatch != null) {\n      currentBatch.close()\n      currentBatch = null\n    }\n\n    batch = fetchNext()\n    if (batch.isEmpty) {\n      close()\n      return false\n    }\n    true\n  }\n\n  def next(): ColumnarBatch = {\n    if (!hasNext) {\n      throw new NoSuchElementException\n    }\n\n    val nextBatch = batch.get\n\n    currentBatch = nextBatch\n    batch = None\n    currentBatch\n  }\n\n  private def fetchNext(): Option[ColumnarBatch] = {\n    if (channel == null || isClosed) {\n      return None\n    }\n\n    // read compressed batch size from header\n    try {\n      longBuf.clear()\n      while (longBuf.hasRemaining && channel.read(longBuf) >= 0) {}\n    } catch {\n      case _: EOFException =>\n        close()\n        return None\n    }\n\n    // If we reach the end of the stream, we are done, or if we read partial length\n    // then the stream is corrupted.\n    if (longBuf.hasRemaining) {\n      if (longBuf.position() == 0) {\n        close()\n        return None\n      }\n      throw new EOFException(\"Data corrupt: unexpected EOF while reading compressed ipc lengths\")\n    }\n\n    // get compressed length (including headers)\n    longBuf.flip()\n    val compressedLength = longBuf.getLong\n\n    // read field count from header\n   
 longBuf.clear()\n    while (longBuf.hasRemaining && channel.read(longBuf) >= 0) {}\n    if (longBuf.hasRemaining) {\n      throw new EOFException(\"Data corrupt: unexpected EOF while reading field count\")\n    }\n    longBuf.flip()\n    val fieldCount = longBuf.getLong.toInt\n\n    // read body\n    val bytesToRead = compressedLength - 8\n    if (bytesToRead > Integer.MAX_VALUE) {\n      // very unlikely that a shuffle block will reach 2GB\n      throw new IllegalStateException(\n        s\"Native shuffle block size of $bytesToRead exceeds \" +\n          s\"maximum of ${Integer.MAX_VALUE}. Try reducing shuffle batch size.\")\n    }\n    var dataBuf = threadLocalDataBuf.get()\n    if (dataBuf.capacity() < bytesToRead) {\n      // it is unlikely that we would overflow here since it would\n      // require a 1GB compressed shuffle block but we check anyway\n      val newCapacity = (bytesToRead * 2L).min(Integer.MAX_VALUE).toInt\n      dataBuf = ByteBuffer.allocateDirect(newCapacity)\n      threadLocalDataBuf.set(dataBuf)\n    }\n    dataBuf.clear()\n    dataBuf.limit(bytesToRead.toInt)\n    while (dataBuf.hasRemaining && channel.read(dataBuf) >= 0) {}\n    if (dataBuf.hasRemaining) {\n      throw new EOFException(\"Data corrupt: unexpected EOF while reading compressed batch\")\n    }\n\n    // make native call to decode batch\n    val startTime = System.nanoTime()\n    val batch = nativeUtil.getNextBatch(\n      fieldCount,\n      (arrayAddrs, schemaAddrs) => {\n        nativeLib.decodeShuffleBlock(\n          dataBuf,\n          bytesToRead.toInt,\n          arrayAddrs,\n          schemaAddrs,\n          tracingEnabled)\n      })\n    decodeTime.add(System.nanoTime() - startTime)\n\n    batch\n  }\n\n  def close(): Unit = {\n    synchronized {\n      if (!isClosed) {\n        if (currentBatch != null) {\n          currentBatch.close()\n          currentBatch = null\n        }\n        in.close()\n        resetDataBuf()\n        isClosed = true\n      }\n    }\n  }\n}\n\nobject NativeBatchDecoderIterator {\n\n  private val INITIAL_BUFFER_SIZE = 128 * 1024\n\n  private val threadLocalDataBuf: ThreadLocal[ByteBuffer] = ThreadLocal.withInitial(() => {\n    ByteBuffer.allocateDirect(INITIAL_BUFFER_SIZE)\n  })\n\n  private def resetDataBuf(): Unit = {\n    if (threadLocalDataBuf.get().capacity() > INITIAL_BUFFER_SIZE) {\n      threadLocalDataBuf.set(ByteBuffer.allocateDirect(INITIAL_BUFFER_SIZE))\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/operators.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport java.util.Locale\n\nimport scala.collection.mutable\nimport scala.collection.mutable.ArrayBuffer\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.{Partition, TaskContext}\nimport org.apache.spark.broadcast.Broadcast\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.{Ascending, Attribute, AttributeSet, Expression, ExpressionSet, Generator, NamedExpression, SortOrder}\nimport org.apache.spark.sql.catalyst.expressions.aggregate.{AggregateExpression, AggregateMode, Final, Partial, PartialMerge}\nimport org.apache.spark.sql.catalyst.optimizer.{BuildLeft, BuildRight, BuildSide}\nimport org.apache.spark.sql.catalyst.plans._\nimport org.apache.spark.sql.catalyst.plans.physical._\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\nimport org.apache.spark.sql.comet.util.Utils\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.adaptive.{AQEShuffleReadExec, BroadcastQueryStageExec, ShuffleQueryStageExec}\nimport org.apache.spark.sql.execution.aggregate.{BaseAggregateExec, HashAggregateExec, ObjectHashAggregateExec}\nimport org.apache.spark.sql.execution.exchange.ReusedExchangeExec\nimport org.apache.spark.sql.execution.joins.{BroadcastHashJoinExec, HashJoin, ShuffledHashJoinExec, SortMergeJoinExec}\nimport org.apache.spark.sql.execution.metric.{SQLMetric, SQLMetrics}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{ArrayType, BooleanType, ByteType, DataType, DateType, DecimalType, DoubleType, FloatType, IntegerType, LongType, MapType, ShortType, StringType, TimestampNTZType}\nimport org.apache.spark.sql.vectorized.ColumnarBatch\nimport org.apache.spark.util.SerializableConfiguration\nimport org.apache.spark.util.io.ChunkedByteBuffer\n\nimport com.google.common.base.Objects\nimport com.google.protobuf.CodedOutputStream\n\nimport org.apache.comet.{CometConf, CometExecIterator, CometRuntimeException, ConfigEntry}\nimport org.apache.comet.CometSparkSessionExtensions.{isCometShuffleEnabled, withInfo}\nimport org.apache.comet.parquet.CometParquetUtils\nimport org.apache.comet.serde.{CometOperatorSerde, Compatible, Incompatible, OperatorOuterClass, SupportLevel, Unsupported}\nimport org.apache.comet.serde.OperatorOuterClass.{AggregateMode => CometAggregateMode, Operator}\nimport org.apache.comet.serde.QueryPlanSerde.{aggExprToProto, exprToProto, supportedSortType}\nimport org.apache.comet.serde.operator.CometSink\n\n/**\n * Trait for injecting per-partition planning data into operator nodes.\n *\n * Implementations handle specific operator types 
(e.g., Iceberg scans, Delta scans).\n */\nprivate[comet] trait PlanDataInjector {\n\n  /** Check if this injector can handle the given operator. */\n  def canInject(op: Operator): Boolean\n\n  /** Extract the key used to look up planning data for this operator. */\n  def getKey(op: Operator): Option[String]\n\n  /** Inject common + partition data into the operator node. */\n  def inject(op: Operator, commonBytes: Array[Byte], partitionBytes: Array[Byte]): Operator\n}\n\n/**\n * Registry and utilities for injecting per-partition planning data into operator trees.\n */\nprivate[comet] object PlanDataInjector {\n\n  // Registry of injectors for different operator types\n  private val injectors: Seq[PlanDataInjector] = Seq(\n    IcebergPlanDataInjector,\n    NativeScanPlanDataInjector\n    // Future: DeltaPlanDataInjector, HudiPlanDataInjector, etc.\n  )\n\n  /**\n   * Injects planning data into an Operator tree by finding nodes that need injection and applying\n   * the appropriate injector.\n   *\n   * Supports joins over multiple tables by matching each operator with its corresponding data\n   * based on a key (e.g., metadata_location for Iceberg).\n   */\n  def injectPlanData(\n      op: Operator,\n      commonByKey: Map[String, Array[Byte]],\n      partitionByKey: Map[String, Array[Byte]]): Operator = {\n    val builder = op.toBuilder\n\n    // Try each injector to see if it can handle this operator\n    for (injector <- injectors if injector.canInject(op)) {\n      injector.getKey(op) match {\n        case Some(key) =>\n          (commonByKey.get(key), partitionByKey.get(key)) match {\n            case (Some(commonBytes), Some(partitionBytes)) =>\n              val injectedOp = injector.inject(op, commonBytes, partitionBytes)\n              // Copy the injected operator's fields to our builder\n              builder.clear()\n              builder.mergeFrom(injectedOp)\n            case _ =>\n              throw new CometRuntimeException(s\"Missing planning data for key: $key\")\n          }\n        case None =>\n      }\n    }\n\n    // Recursively process children\n    builder.clearChildren()\n    op.getChildrenList.asScala.foreach { child =>\n      builder.addChildren(injectPlanData(child, commonByKey, partitionByKey))\n    }\n\n    builder.build()\n  }\n\n  def serializeOperator(op: Operator): Array[Byte] = {\n    val size = op.getSerializedSize\n    val bytes = new Array[Byte](size)\n    val codedOutput = CodedOutputStream.newInstance(bytes)\n    op.writeTo(codedOutput)\n    codedOutput.checkNoSpaceLeft()\n    bytes\n  }\n}\n\n/**\n * Injector for Iceberg scan operators.\n */\nprivate[comet] object IcebergPlanDataInjector extends PlanDataInjector {\n  import java.nio.ByteBuffer\n  import java.util.{LinkedHashMap, Map => JMap}\n\n  private final val maxCacheEntries = 16\n\n  // Cache parsed IcebergScanCommon so that Iceberg tables with large numbers of partitions\n  // (thousands or more) do not repeatedly reparse the same commonBytes.\n  // IcebergPlanDataInjector is a singleton, so we use an LRU cache to eventually evict old\n  // IcebergScanCommon objects. 16 seems like a reasonable starting point since these objects\n  // are not large. 
A LinkedHashMap with accessOrder=true provides LRU ordering; it is wrapped in\n  // Collections.synchronizedMap for thread safety.\n  private val commonCache = java.util.Collections.synchronizedMap(\n    new LinkedHashMap[ByteBuffer, OperatorOuterClass.IcebergScanCommon](4, 0.75f, true) {\n      override def removeEldestEntry(\n          eldest: JMap.Entry[ByteBuffer, OperatorOuterClass.IcebergScanCommon]): Boolean = {\n        size() > maxCacheEntries\n      }\n    })\n\n  override def canInject(op: Operator): Boolean =\n    op.hasIcebergScan &&\n      op.getIcebergScan.getFileScanTasksCount == 0 &&\n      op.getIcebergScan.hasCommon\n\n  override def getKey(op: Operator): Option[String] =\n    Some(op.getIcebergScan.getCommon.getMetadataLocation)\n\n  override def inject(\n      op: Operator,\n      commonBytes: Array[Byte],\n      partitionBytes: Array[Byte]): Operator = {\n    val scan = op.getIcebergScan\n\n    // Cache the parsed common data to avoid deserializing on every partition\n    val cacheKey = ByteBuffer.wrap(commonBytes)\n    val common = commonCache.synchronized {\n      Option(commonCache.get(cacheKey)).getOrElse {\n        val parsed = OperatorOuterClass.IcebergScanCommon.parseFrom(commonBytes)\n        commonCache.put(cacheKey, parsed)\n        parsed\n      }\n    }\n\n    val tasksOnly = OperatorOuterClass.IcebergScan.parseFrom(partitionBytes)\n\n    val scanBuilder = scan.toBuilder\n    scanBuilder.setCommon(common)\n    scanBuilder.addAllFileScanTasks(tasksOnly.getFileScanTasksList)\n\n    op.toBuilder.setIcebergScan(scanBuilder).build()\n  }\n}\n\n/**\n * Injector for NativeScan operators.\n */\nprivate[comet] object NativeScanPlanDataInjector extends PlanDataInjector {\n\n  override def canInject(op: Operator): Boolean =\n    op.hasNativeScan &&\n      op.getNativeScan.hasCommon &&\n      !op.getNativeScan.hasFilePartition\n\n  override def getKey(op: Operator): Option[String] = {\n    // Reconstruct the same sourceKey that was used when storing the data\n    val common = op.getNativeScan.getCommon\n    val source = common.getSource\n    val keyComponents = Seq(\n      common.getRequiredSchemaList.toString,\n      common.getDataFiltersList.toString,\n      common.getProjectionVectorList.toString,\n      common.getFieldsList.toString)\n    val hashCode = keyComponents.mkString(\"|\").hashCode\n    Some(s\"${source}_${hashCode}\")\n  }\n\n  override def inject(\n      op: Operator,\n      commonBytes: Array[Byte],\n      partitionBytes: Array[Byte]): Operator = {\n\n    val common = OperatorOuterClass.NativeScanCommon.parseFrom(commonBytes)\n    val partitionOnly = OperatorOuterClass.NativeScan.parseFrom(partitionBytes)\n\n    // Build complete NativeScan with common fields + this partition's file list\n    val scanBuilder = OperatorOuterClass.NativeScan.newBuilder()\n    scanBuilder.setCommon(common)\n    scanBuilder.setFilePartition(partitionOnly.getFilePartition)\n\n    op.toBuilder.setNativeScan(scanBuilder).build()\n  }\n}\n\n/**\n * A Comet physical operator\n */\nabstract class CometExec extends CometPlan {\n\n  /** The original Spark operator from which this Comet operator was converted */\n  def originalPlan: SparkPlan\n\n  /** Comet always supports columnar execution */\n  override def supportsColumnar: Boolean = true\n\n  override def output: Seq[Attribute] = originalPlan.output\n\n  override def doExecute(): RDD[InternalRow] =\n    ColumnarToRowExec(this).doExecute()\n\n  override def executeCollect(): Array[InternalRow] =\n    ColumnarToRowExec(this).executeCollect()\n\n  override def outputOrdering: 
Seq[SortOrder] = originalPlan.outputOrdering\n\n  // `CometExec` reuses the outputPartitioning of the original SparkPlan.\n  // Note that if the outputPartitioning of the original SparkPlan depends on its children,\n  // we should override this method in the specific CometExec, because Spark AQE may change the\n  // outputPartitioning of SparkPlan, e.g., AQEShuffleReadExec.\n  override def outputPartitioning: Partitioning = originalPlan.outputPartitioning\n\n  protected def setSubqueries(planId: Long, sparkPlan: SparkPlan): Unit = {\n    sparkPlan.children.foreach(setSubqueries(planId, _))\n\n    sparkPlan.expressions.foreach {\n      _.collect { case sub: ScalarSubquery =>\n        CometScalarSubquery.setSubquery(planId, sub)\n      }\n    }\n  }\n\n  protected def cleanSubqueries(planId: Long, sparkPlan: SparkPlan): Unit = {\n    sparkPlan.children.foreach(cleanSubqueries(planId, _))\n\n    sparkPlan.expressions.foreach {\n      _.collect { case sub: ScalarSubquery =>\n        CometScalarSubquery.removeSubquery(planId, sub)\n      }\n    }\n  }\n\n  /** Collects all ScalarSubquery expressions from a plan tree. */\n  protected def collectSubqueries(sparkPlan: SparkPlan): Seq[ScalarSubquery] = {\n    val childSubqueries = sparkPlan.children.flatMap(collectSubqueries)\n    val planSubqueries = sparkPlan.expressions.flatMap {\n      _.collect { case sub: ScalarSubquery => sub }\n    }\n    childSubqueries ++ planSubqueries\n  }\n}\n\nobject CometExec {\n  // A unique id for each CometExecIterator, used to identify the native query execution.\n  private val curId = new java.util.concurrent.atomic.AtomicLong()\n\n  def newIterId: Long = curId.getAndIncrement()\n\n  /**\n   * Serialize a native plan to bytes. Use this method to serialize the plan once before calling\n   * getCometIterator for each partition, avoiding repeated serialization.\n   */\n  def serializeNativePlan(nativePlan: Operator): Array[Byte] = {\n    val size = nativePlan.getSerializedSize\n    val bytes = new Array[Byte](size)\n    val codedOutput = CodedOutputStream.newInstance(bytes)\n    nativePlan.writeTo(codedOutput)\n    codedOutput.checkNoSpaceLeft()\n    bytes\n  }\n\n  def getCometIterator(\n      inputs: Seq[Iterator[ColumnarBatch]],\n      numOutputCols: Int,\n      nativePlan: Operator,\n      numParts: Int,\n      partitionIdx: Int): CometExecIterator = {\n    getCometIterator(\n      inputs,\n      numOutputCols,\n      nativePlan,\n      CometMetricNode(Map.empty),\n      numParts,\n      partitionIdx,\n      broadcastedHadoopConfForEncryption = None,\n      encryptedFilePaths = Seq.empty)\n  }\n\n  /**\n   * Create a CometExecIterator with a pre-serialized native plan. 
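\n   *\n   * A minimal sketch of the intended usage (`batchRDD`, `numOutputCols`, and `numParts` are\n   * illustrative placeholders, not members of this file):\n   *\n   * {{{\n   * val planBytes = CometExec.serializeNativePlan(nativePlan)\n   * batchRDD.mapPartitionsWithIndex { (idx, iter) =>\n   *   CometExec.getCometIterator(Seq(iter), numOutputCols, planBytes, numParts, idx)\n   * }\n   * }}}\n   *\n   * 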
Use this overload when\n   * executing the same plan across multiple partitions to avoid serializing the plan repeatedly.\n   */\n  def getCometIterator(\n      inputs: Seq[Iterator[ColumnarBatch]],\n      numOutputCols: Int,\n      serializedPlan: Array[Byte],\n      numParts: Int,\n      partitionIdx: Int): CometExecIterator = {\n    new CometExecIterator(\n      newIterId,\n      inputs,\n      numOutputCols,\n      serializedPlan,\n      CometMetricNode(Map.empty),\n      numParts,\n      partitionIdx,\n      broadcastedHadoopConfForEncryption = None,\n      encryptedFilePaths = Seq.empty)\n  }\n\n  def getCometIterator(\n      inputs: Seq[Iterator[ColumnarBatch]],\n      numOutputCols: Int,\n      nativePlan: Operator,\n      nativeMetrics: CometMetricNode,\n      numParts: Int,\n      partitionIdx: Int,\n      broadcastedHadoopConfForEncryption: Option[Broadcast[SerializableConfiguration]],\n      encryptedFilePaths: Seq[String]): CometExecIterator = {\n    val bytes = serializeNativePlan(nativePlan)\n    new CometExecIterator(\n      newIterId,\n      inputs,\n      numOutputCols,\n      bytes,\n      nativeMetrics,\n      numParts,\n      partitionIdx,\n      broadcastedHadoopConfForEncryption,\n      encryptedFilePaths)\n  }\n\n  /**\n   * Executes this Comet operator and serializes the output ColumnarBatches into bytes.\n   */\n  def getByteArrayRdd(cometPlan: CometPlan): RDD[(Long, ChunkedByteBuffer)] = {\n    cometPlan.executeColumnar().mapPartitionsInternal { iter =>\n      Utils.serializeBatches(iter)\n    }\n  }\n}\n\n/**\n * A Comet native physical operator.\n */\nabstract class CometNativeExec extends CometExec {\n\n  /**\n   * The serialized native query plan, optional. This is only defined when the current node is the\n   * \"boundary\" node between native and Spark.\n   */\n  def serializedPlanOpt: SerializedPlan\n\n  /** The Comet native operator */\n  def nativeOp: Operator\n\n  override protected def doPrepare(): Unit = prepareSubqueries(this)\n\n  override lazy val metrics: Map[String, SQLMetric] =\n    CometMetricNode.baselineMetrics(sparkContext)\n\n  private def prepareSubqueries(sparkPlan: SparkPlan): Unit = {\n    val runningSubqueries = new ArrayBuffer[ExecSubqueryExpression]\n\n    sparkPlan.children.foreach(prepareSubqueries)\n\n    sparkPlan.expressions.foreach {\n      _.collect { case e: ScalarSubquery =>\n        runningSubqueries += e\n      }\n    }\n\n    // fill in the result of subqueries\n    runningSubqueries.foreach { sub =>\n      sub.updateResult()\n    }\n\n    runningSubqueries.clear()\n  }\n\n  override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    serializedPlanOpt.plan match {\n      case None =>\n        // This is in the middle of a native execution, it should not be executed directly.\n        throw new CometRuntimeException(\n          s\"CometNativeExec should not be executed directly without a serialized plan: $this\")\n      case Some(serializedPlan) =>\n        // Switch to use Decimal128 regardless of precision, since Arrow native execution\n        // doesn't support Decimal32 and Decimal64 yet.\n        SQLConf.get.setConfString(CometConf.COMET_USE_DECIMAL_128.key, \"true\")\n\n        val serializedPlanCopy = serializedPlan\n        // TODO: support native metrics for all operators.\n        val nativeMetrics = CometMetricNode.fromCometPlan(this)\n\n        // Go over all the native scans to see if they need encryption options.\n        // For each relation in a CometNativeScan, generate a hadoopConf and associate\n        // 
each file path in the relation with that conf.\n        // This is done per native plan, so we only count scans until a Comet input is reached.\n        val encryptionOptions =\n          mutable.ArrayBuffer.empty[(Broadcast[SerializableConfiguration], Seq[String])]\n        foreachUntilCometInput(this) {\n          case scan: CometNativeScanExec =>\n            // This creates a hadoopConf that brings in any SQLConf \"spark.hadoop.*\" configs and\n            // per-relation configs since different tables might have different decryption\n            // properties.\n            val hadoopConf = scan.relation.sparkSession.sessionState\n              .newHadoopConfWithOptions(scan.relation.options)\n            val encryptionEnabled = CometParquetUtils.encryptionEnabled(hadoopConf)\n            if (encryptionEnabled) {\n              // hadoopConf isn't serializable, so we broadcast a SerializableConfiguration instead.\n              val broadcastedConf =\n                scan.relation.sparkSession.sparkContext\n                  .broadcast(new SerializableConfiguration(hadoopConf))\n\n              val optsTuple: (Broadcast[SerializableConfiguration], Seq[String]) =\n                (broadcastedConf, scan.relation.inputFiles.toSeq)\n              encryptionOptions += optsTuple\n            }\n          case _ => // no-op\n        }\n        assert(\n          encryptionOptions.size <= 1,\n          \"We expect at most one native scan that requires encrypted reading in a Comet plan,\" +\n            \" since we broadcast a single hadoopConf.\")\n        // If this assumption changes in the future, you can look at the commit history of #2447\n        // to see how there used to be a map of relations to broadcasted confs in case multiple\n        // relations in a single plan. The example that came up was UNION. 
See discussion at:\n        // https://github.com/apache/datafusion-comet/pull/2447#discussion_r2406118264\n        val (broadcastedHadoopConfForEncryption, encryptedFilePaths) =\n          encryptionOptions.headOption match {\n            case Some((conf, paths)) => (Some(conf), paths)\n            case None => (None, Seq.empty)\n          }\n\n        // Find planning data within this stage (stops at shuffle boundaries).\n        val (commonByKey, perPartitionByKey) = findAllPlanData(this)\n\n        // Collect the input ColumnarBatches from the child operators and create a CometExecIterator\n        // to execute the native plan.\n        val sparkPlans = ArrayBuffer.empty[SparkPlan]\n        val inputs = ArrayBuffer.empty[RDD[ColumnarBatch]]\n\n        foreachUntilCometInput(this)(sparkPlans += _)\n\n        // Find the first non-broadcast plan\n        val firstNonBroadcastPlan = sparkPlans.zipWithIndex.find {\n          case (_: CometBroadcastExchangeExec, _) => false\n          case (BroadcastQueryStageExec(_, _: CometBroadcastExchangeExec, _), _) => false\n          case (BroadcastQueryStageExec(_, _: ReusedExchangeExec, _), _) => false\n          case (ReusedExchangeExec(_, _: CometBroadcastExchangeExec), _) => false\n          case _ => true\n        }\n\n        val containsBroadcastInput = sparkPlans.exists {\n          case _: CometBroadcastExchangeExec => true\n          case BroadcastQueryStageExec(_, _: CometBroadcastExchangeExec, _) => true\n          case BroadcastQueryStageExec(_, _: ReusedExchangeExec, _) => true\n          case ReusedExchangeExec(_, _: CometBroadcastExchangeExec) => true\n          case _ => false\n        }\n\n        // If the first non-broadcast plan is not found, it means all the plans are broadcast plans.\n        // This is not expected, so throw an exception.\n        if (containsBroadcastInput && firstNonBroadcastPlan.isEmpty) {\n          throw new CometRuntimeException(s\"Cannot find the first non-broadcast plan: $this\")\n        }\n\n        // If the first non-broadcast plan is found, we need to adjust the partition number of\n        // the broadcast plans to make sure they have the same partition number as the first\n        // non-broadcast plan.\n        val (firstNonBroadcastPlanRDD, firstNonBroadcastPlanNumPartitions) =\n          firstNonBroadcastPlan.get._1 match {\n            case plan: CometNativeExec =>\n              (null, plan.outputPartitioning.numPartitions)\n            case plan =>\n              val rdd = plan.executeColumnar()\n              (rdd, rdd.getNumPartitions)\n          }\n\n        // Spark doesn't need to zip Broadcast RDDs, so it doesn't schedule Broadcast RDDs with the\n        // same partition number. 
But for Comet, we need to zip them so we need to adjust the\n        // partition number of Broadcast RDDs to make sure they have the same partition number.\n        sparkPlans.zipWithIndex.foreach { case (plan, idx) =>\n          plan match {\n            case c: CometBroadcastExchangeExec =>\n              inputs += c.executeColumnar(firstNonBroadcastPlanNumPartitions)\n            case BroadcastQueryStageExec(_, c: CometBroadcastExchangeExec, _) =>\n              inputs += c.executeColumnar(firstNonBroadcastPlanNumPartitions)\n            case ReusedExchangeExec(_, c: CometBroadcastExchangeExec) =>\n              inputs += c.executeColumnar(firstNonBroadcastPlanNumPartitions)\n            case BroadcastQueryStageExec(\n                  _,\n                  ReusedExchangeExec(_, c: CometBroadcastExchangeExec),\n                  _) =>\n              inputs += c.executeColumnar(firstNonBroadcastPlanNumPartitions)\n            case _: CometNativeExec =>\n            // no-op\n            case _ if idx == firstNonBroadcastPlan.get._2 =>\n              inputs += firstNonBroadcastPlanRDD\n            case _ =>\n              val rdd = plan.executeColumnar()\n              if (rdd.getNumPartitions != firstNonBroadcastPlanNumPartitions) {\n                throw new CometRuntimeException(\n                  s\"Partition number mismatch: ${rdd.getNumPartitions} != \" +\n                    s\"$firstNonBroadcastPlanNumPartitions\")\n              } else {\n                inputs += rdd\n              }\n          }\n        }\n\n        if (inputs.isEmpty && !sparkPlans.forall(_.isInstanceOf[CometNativeExec])) {\n          throw new CometRuntimeException(s\"No input for CometNativeExec:\\n $this\")\n        }\n\n        // Detect ShuffleScan indices for direct read in CometExecRDD\n        val shuffleScanIndices = findShuffleScanIndices(serializedPlanCopy)\n\n        // Unified RDD creation - CometExecRDD handles all cases\n        val subqueries = collectSubqueries(this)\n        val hasScanInput = sparkPlans.exists(_.isInstanceOf[CometNativeScanExec])\n        new CometExecRDD(\n          sparkContext,\n          inputs.toSeq,\n          commonByKey,\n          perPartitionByKey,\n          serializedPlanCopy,\n          firstNonBroadcastPlanNumPartitions,\n          output.length,\n          nativeMetrics,\n          subqueries,\n          broadcastedHadoopConfForEncryption,\n          encryptedFilePaths,\n          shuffleScanIndices) {\n          override def compute(\n              split: Partition,\n              context: TaskContext): Iterator[ColumnarBatch] = {\n            val res = super.compute(split, context)\n\n            // Report scan input metrics only when the native plan contains a scan.\n            if (hasScanInput) {\n              Option(context).foreach(nativeMetrics.reportScanInputMetrics)\n            }\n\n            res\n          }\n        }\n    }\n  }\n\n  /**\n   * Traverse the tree of Comet physical operators until reaching the input sources operators and\n   * apply the given function to each operator.\n   *\n   * The input sources include the following operators:\n   *   - CometScanExec - Comet scan node\n   *   - CometBatchScanExec - Comet scan node\n   *   - CometIcebergNativeScanExec - Native Iceberg scan node\n   *   - ShuffleQueryStageExec - AQE shuffle stage node on top of Comet shuffle\n   *   - AQEShuffleReadExec - AQE shuffle read node on top of Comet shuffle\n   *   - CometShuffleExchangeExec - Comet shuffle exchange node\n   *   - CometUnionExec, 
etc., which execute their children's native plans and produce ColumnarBatches\n   *\n   * @param plan\n   *   the root of the Comet physical plan tree (e.g., the root of the SparkPlan tree of a query)\n   *   to traverse\n   * @param func\n   *   the function to apply to each Comet physical operator\n   */\n  def foreachUntilCometInput(plan: SparkPlan)(func: SparkPlan => Unit): Unit = {\n    plan match {\n      case _: CometNativeScanExec | _: CometScanExec | _: CometBatchScanExec |\n          _: CometIcebergNativeScanExec | _: CometCsvNativeScanExec | _: ShuffleQueryStageExec |\n          _: AQEShuffleReadExec | _: CometShuffleExchangeExec | _: CometUnionExec |\n          _: CometTakeOrderedAndProjectExec | _: CometCoalesceExec | _: ReusedExchangeExec |\n          _: CometBroadcastExchangeExec | _: BroadcastQueryStageExec |\n          _: CometSparkToColumnarExec | _: CometLocalTableScanExec =>\n        func(plan)\n      case _: CometPlan =>\n        // Other Comet operators, continue to traverse the tree.\n        plan.children.foreach(foreachUntilCometInput(_)(func))\n      case _ =>\n      // no op\n    }\n  }\n\n  /**\n   * Walk the serialized protobuf plan depth-first to find which input indices correspond to\n   * ShuffleScan vs Scan leaf nodes. Each Scan or ShuffleScan leaf consumes one input in order.\n   */\n  private def findShuffleScanIndices(planBytes: Array[Byte]): Set[Int] = {\n    val plan = OperatorOuterClass.Operator.parseFrom(planBytes)\n    var scanIndex = 0\n    val indices = mutable.Set.empty[Int]\n    def walk(op: OperatorOuterClass.Operator): Unit = {\n      if (op.hasShuffleScan) {\n        indices += scanIndex\n        scanIndex += 1\n      } else if (op.hasScan) {\n        scanIndex += 1\n      } else {\n        op.getChildrenList.asScala.foreach(walk)\n      }\n    }\n    walk(plan)\n    indices.toSet\n  }\n\n  /**\n   * Find all plan nodes with per-partition planning data in the plan tree. Returns two maps keyed\n   * by a unique identifier: one for common data (shared across partitions) and one for\n   * per-partition data.\n   *\n   * Currently supports Iceberg scans (keyed by metadata_location). Additional scan types can be\n   * added by extending this method.\n   *\n   * Stops at stage boundaries (shuffle exchanges, etc.) 
because partition indices are only valid\n   * within the same stage.\n   *\n   * @return\n   *   (commonByKey, perPartitionByKey) - common data is shared, per-partition varies\n   */\n  private def findAllPlanData(\n      plan: SparkPlan): (Map[String, Array[Byte]], Map[String, Array[Array[Byte]]]) = {\n    plan match {\n      // Found an Iceberg scan with planning data\n      case iceberg: CometIcebergNativeScanExec\n          if iceberg.commonData.nonEmpty && iceberg.perPartitionData.nonEmpty =>\n        (\n          Map(iceberg.metadataLocation -> iceberg.commonData),\n          Map(iceberg.metadataLocation -> iceberg.perPartitionData))\n\n      // Found a NativeScan with planning data\n      case nativeScan: CometNativeScanExec =>\n        (\n          Map(nativeScan.sourceKey -> nativeScan.commonData),\n          Map(nativeScan.sourceKey -> nativeScan.perPartitionData))\n\n      // Broadcast stages are boundaries - don't collect per-partition data from inside them.\n      // After DPP filtering, broadcast scans may have different partition counts than the\n      // probe side, causing ArrayIndexOutOfBoundsException in CometExecRDD.getPartitions.\n      case _: BroadcastQueryStageExec | _: CometBroadcastExchangeExec =>\n        (Map.empty, Map.empty)\n\n      // Stage boundaries - stop searching (partition indices won't align after these)\n      case _: ShuffleQueryStageExec | _: AQEShuffleReadExec | _: CometShuffleExchangeExec |\n          _: CometUnionExec | _: CometTakeOrderedAndProjectExec | _: CometCoalesceExec |\n          _: ReusedExchangeExec | _: CometSparkToColumnarExec =>\n        (Map.empty, Map.empty)\n\n      // Continue searching through other operators, combining results from all children\n      case _ =>\n        val results = plan.children.map(findAllPlanData)\n        (results.flatMap(_._1).toMap, results.flatMap(_._2).toMap)\n    }\n  }\n\n  /**\n   * Converts this native Comet operator and its children into a native block which can be\n   * executed as a whole (i.e., in a single JNI call) from the native side.\n   */\n  def convertBlock(): CometNativeExec = {\n    def transform(arg: Any): AnyRef = arg match {\n      case serializedPlan: SerializedPlan if serializedPlan.isEmpty =>\n        val size = nativeOp.getSerializedSize\n        val bytes = new Array[Byte](size)\n        val codedOutput = CodedOutputStream.newInstance(bytes)\n        nativeOp.writeTo(codedOutput)\n        codedOutput.checkNoSpaceLeft()\n        SerializedPlan(Some(bytes))\n      case other: AnyRef => other\n      case null => null\n    }\n\n    val newArgs = mapProductIterator(transform)\n    makeCopy(newArgs).asInstanceOf[CometNativeExec]\n  }\n\n  /**\n   * Cleans the serialized plan from this native Comet operator. 
Used to canonicalize the plan.\n   */\n  def cleanBlock(): CometNativeExec = {\n    def transform(arg: Any): AnyRef = arg match {\n      case serializedPlan: SerializedPlan if serializedPlan.isDefined =>\n        SerializedPlan(None)\n      case other: AnyRef => other\n      case null => null\n    }\n\n    val newArgs = mapProductIterator(transform)\n    makeCopy(newArgs).asInstanceOf[CometNativeExec]\n  }\n\n  override protected def doCanonicalize(): SparkPlan = {\n    val canonicalizedPlan = super\n      .doCanonicalize()\n      .asInstanceOf[CometNativeExec]\n      .canonicalizePlans()\n\n    if (serializedPlanOpt.isDefined) {\n      // If the plan is a boundary node, we should remove the serialized plan.\n      canonicalizedPlan.cleanBlock()\n    } else {\n      canonicalizedPlan\n    }\n  }\n\n  /**\n   * Canonicalizes the plans of Product parameters in Comet native operators.\n   */\n  protected def canonicalizePlans(): CometNativeExec = {\n    def transform(arg: Any): AnyRef = arg match {\n      case sparkPlan: SparkPlan\n          if !sparkPlan.isInstanceOf[CometNativeExec] &&\n            children.forall(_ != sparkPlan) =>\n        // Unlike Spark, a Comet native query node might have a Spark plan as a Product element.\n        // We need to canonicalize that Spark plan. But it cannot be another Comet native query\n        // node, otherwise it would cause recursive canonicalization.\n        null\n      case other: AnyRef => other\n      case null => null\n    }\n\n    val newArgs = mapProductIterator(transform)\n    makeCopy(newArgs).asInstanceOf[CometNativeExec]\n  }\n}\n\nabstract class CometLeafExec extends CometNativeExec with LeafExecNode\n\nabstract class CometUnaryExec extends CometNativeExec with UnaryExecNode\n\nabstract class CometBinaryExec extends CometNativeExec with BinaryExecNode\n\n/**\n * Represents the serialized plan of Comet native operators. 
Only the first operator in a block of\n * contiguous Comet native operators has plan bytes defined, containing the serialization of\n * the block's plan tree.\n */\ncase class SerializedPlan(plan: Option[Array[Byte]]) {\n  def isDefined: Boolean = plan.isDefined\n\n  def isEmpty: Boolean = plan.isEmpty\n}\n\nobject CometProjectExec extends CometOperatorSerde[ProjectExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_EXEC_PROJECT_ENABLED)\n\n  override def convert(\n      op: ProjectExec,\n      builder: Operator.Builder,\n      childOp: Operator*): Option[OperatorOuterClass.Operator] = {\n    val exprs = op.projectList.map(exprToProto(_, op.child.output))\n\n    if (exprs.forall(_.isDefined) && childOp.nonEmpty) {\n      val projectBuilder = OperatorOuterClass.Projection\n        .newBuilder()\n        .addAllProjectList(exprs.map(_.get).asJava)\n      Some(builder.setProjection(projectBuilder).build())\n    } else {\n      withInfo(op, op.projectList: _*)\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: ProjectExec): CometNativeExec = {\n    CometProjectExec(nativeOp, op, op.output, op.projectList, op.child, SerializedPlan(None))\n  }\n}\n\ncase class CometProjectExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    projectList: Seq[NamedExpression],\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec\n    with PartitioningPreservingUnaryExecNode {\n  override def producedAttributes: AttributeSet = outputSet\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def stringArgs: Iterator[Any] = Iterator(output, projectList, child)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometProjectExec =>\n        this.output == other.output &&\n        this.projectList == other.projectList &&\n        this.child == other.child &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(output, projectList, child)\n\n  override protected def outputExpressions: Seq[NamedExpression] = projectList\n}\n\nobject CometFilterExec extends CometOperatorSerde[FilterExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_EXEC_FILTER_ENABLED)\n\n  override def convert(\n      op: FilterExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    val cond = exprToProto(op.condition, op.child.output)\n\n    if (cond.isDefined && childOp.nonEmpty) {\n      val filterBuilder = OperatorOuterClass.Filter\n        .newBuilder()\n        .setPredicate(cond.get)\n      Some(builder.setFilter(filterBuilder).build())\n    } else {\n      withInfo(op, op.condition, op.child)\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: FilterExec): CometNativeExec = {\n    CometFilterExec(nativeOp, op, op.output, op.condition, op.child, SerializedPlan(None))\n  }\n}\n\ncase class CometFilterExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    condition: Expression,\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec {\n\n  override def 
outputPartitioning: Partitioning = child.outputPartitioning\n\n  override def outputOrdering: Seq[SortOrder] = child.outputOrdering\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def stringArgs: Iterator[Any] =\n    Iterator(output, condition, child)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometFilterExec =>\n        this.output == other.output &&\n        this.condition == other.condition && this.child == other.child &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(output, condition, child)\n\n  override def verboseStringWithOperatorId(): String = {\n    s\"\"\"\n       |$formattedNodeName\n       |${ExplainUtils.generateFieldString(\"Input\", child.output)}\n       |Condition : ${condition}\n       |\"\"\".stripMargin\n  }\n}\n\nobject CometSortExec extends CometOperatorSerde[SortExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_EXEC_SORT_ENABLED)\n\n  override def convert(\n      op: SortExec,\n      builder: Operator.Builder,\n      childOp: Operator*): Option[OperatorOuterClass.Operator] = {\n    if (!supportedSortType(op, op.sortOrder)) {\n      withInfo(op, \"Unsupported data type in sort expressions\")\n      return None\n    }\n\n    val sortOrders = op.sortOrder.map(exprToProto(_, op.child.output))\n\n    if (sortOrders.forall(_.isDefined) && childOp.nonEmpty) {\n      val sortBuilder = OperatorOuterClass.Sort\n        .newBuilder()\n        .addAllSortOrders(sortOrders.map(_.get).asJava)\n      Some(builder.setSort(sortBuilder).build())\n    } else {\n      withInfo(op, \"sort order not supported\", op.sortOrder: _*)\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: SortExec): CometNativeExec = {\n    CometSortExec(\n      nativeOp,\n      op,\n      op.output,\n      op.outputOrdering,\n      op.sortOrder,\n      op.child,\n      SerializedPlan(None))\n  }\n}\n\ncase class CometSortExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    override val outputOrdering: Seq[SortOrder],\n    sortOrder: Seq[SortOrder],\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec {\n\n  override def outputPartitioning: Partitioning = child.outputPartitioning\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def stringArgs: Iterator[Any] =\n    Iterator(output, sortOrder, child)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometSortExec =>\n        this.output == other.output &&\n        this.sortOrder == other.sortOrder && this.child == other.child &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(output, sortOrder, child)\n\n  override lazy val metrics: Map[String, SQLMetric] =\n    CometMetricNode.baselineMetrics(sparkContext) ++\n      Map(\n        \"spill_count\" -> SQLMetrics.createMetric(sparkContext, \"number of spills\"),\n        \"spilled_bytes\" -> SQLMetrics.createSizeMetric(sparkContext, \"total spilled bytes\"))\n}\n\nobject CometLocalLimitExec extends CometOperatorSerde[LocalLimitExec] {\n\n  override def enabledConfig: 
Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_EXEC_LOCAL_LIMIT_ENABLED)\n\n  override def convert(\n      op: LocalLimitExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    if (childOp.nonEmpty) {\n      // LocalLimit doesn't use offset, but it shares the same operator serde class.\n      // Just set it to zero.\n      val limitBuilder = OperatorOuterClass.Limit\n        .newBuilder()\n        .setLimit(op.limit)\n        .setOffset(0)\n      Some(builder.setLimit(limitBuilder).build())\n    } else {\n      withInfo(op, \"No child operator\")\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: LocalLimitExec): CometNativeExec = {\n    CometLocalLimitExec(nativeOp, op, op.limit, op.child, SerializedPlan(None))\n  }\n}\n\ncase class CometLocalLimitExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    limit: Int,\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec {\n\n  override def output: Seq[Attribute] = child.output\n\n  override def outputPartitioning: Partitioning = child.outputPartitioning\n\n  override def outputOrdering: Seq[SortOrder] = child.outputOrdering\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def stringArgs: Iterator[Any] = Iterator(limit, child)\n\n  override lazy val metrics: Map[String, SQLMetric] = Map.empty\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometLocalLimitExec =>\n        this.output == other.output &&\n        this.limit == other.limit && this.child == other.child &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(output, limit: java.lang.Integer, child)\n}\n\nobject CometGlobalLimitExec extends CometOperatorSerde[GlobalLimitExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_EXEC_GLOBAL_LIMIT_ENABLED)\n\n  override def convert(\n      op: GlobalLimitExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    if (childOp.nonEmpty) {\n      val limitBuilder = OperatorOuterClass.Limit.newBuilder()\n\n      limitBuilder.setLimit(op.limit).setOffset(op.offset)\n\n      Some(builder.setLimit(limitBuilder).build())\n    } else {\n      withInfo(op, \"No child operator\")\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: GlobalLimitExec): CometNativeExec = {\n    CometGlobalLimitExec(nativeOp, op, op.limit, op.offset, op.child, SerializedPlan(None))\n  }\n}\n\ncase class CometGlobalLimitExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    limit: Int,\n    offset: Int,\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec {\n\n  override def output: Seq[Attribute] = child.output\n\n  override def outputPartitioning: Partitioning = child.outputPartitioning\n\n  override def outputOrdering: Seq[SortOrder] = child.outputOrdering\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def stringArgs: Iterator[Any] = Iterator(limit, offset, child)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: 
CometGlobalLimitExec =>\n        this.output == other.output &&\n        this.limit == other.limit &&\n        this.offset == other.offset &&\n        this.child == other.child &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int =\n    Objects.hashCode(output, limit: java.lang.Integer, offset: java.lang.Integer, child)\n}\n\nobject CometExpandExec extends CometOperatorSerde[ExpandExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_EXPAND_ENABLED)\n\n  override def convert(\n      op: ExpandExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    var allProjExprs: Seq[Expression] = Seq()\n    val projExprs = op.projections.flatMap(_.map(e => {\n      allProjExprs = allProjExprs :+ e\n      exprToProto(e, op.child.output)\n    }))\n\n    if (projExprs.forall(_.isDefined) && childOp.nonEmpty) {\n      val expandBuilder = OperatorOuterClass.Expand\n        .newBuilder()\n        .addAllProjectList(projExprs.map(_.get).asJava)\n        .setNumExprPerProject(op.projections.head.size)\n      Some(builder.setExpand(expandBuilder).build())\n    } else {\n      withInfo(op, allProjExprs: _*)\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: ExpandExec): CometNativeExec = {\n    CometExpandExec(nativeOp, op, op.output, op.projections, op.child, SerializedPlan(None))\n  }\n}\n\ncase class CometExpandExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    projections: Seq[Seq[Expression]],\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec {\n  override def outputPartitioning: Partitioning = UnknownPartitioning(0)\n\n  override def producedAttributes: AttributeSet = outputSet\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def stringArgs: Iterator[Any] = Iterator(projections, output, child)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometExpandExec =>\n        this.output == other.output &&\n        this.projections == other.projections && this.child == other.child &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(output, projections, child)\n\n  // TODO: support native Expand metrics\n  override lazy val metrics: Map[String, SQLMetric] = Map.empty\n}\n\nobject CometExplodeExec extends CometOperatorSerde[GenerateExec] {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_EXPLODE_ENABLED)\n\n  override def getSupportLevel(op: GenerateExec): SupportLevel = {\n    if (!op.generator.deterministic) {\n      return Unsupported(Some(\"Only deterministic generators are supported\"))\n    }\n    if (op.generator.children.length != 1) {\n      return Unsupported(Some(\"generators with multiple inputs are not supported\"))\n    }\n    if (op.generator.nodeName.toLowerCase(Locale.ROOT) != \"explode\") {\n      return Unsupported(Some(s\"Unsupported generator: ${op.generator.nodeName}\"))\n    }\n    if (op.outer) {\n      // DataFusion UnnestExec has different semantics to Spark for this case\n      // https://github.com/apache/datafusion/issues/19053\n      return 
Incompatible(Some(\"Empty arrays are not preserved as null outputs when outer=true\"))\n    }\n    op.generator.children.head.dataType match {\n      case _: ArrayType =>\n        Compatible()\n      case _: MapType =>\n        // TODO add support for map types\n        // https://github.com/apache/datafusion-comet/issues/2837\n        Unsupported(Some(\"Comet only supports explode/explode_outer for arrays, not maps\"))\n      case other =>\n        Unsupported(Some(s\"Unsupported data type: $other\"))\n    }\n  }\n\n  override def convert(\n      op: GenerateExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    val childExpr = op.generator.children.head\n    val childExprProto = exprToProto(childExpr, op.child.output)\n\n    if (childExprProto.isEmpty) {\n      withInfo(op, childExpr)\n      return None\n    }\n\n    // Convert the projection expressions (columns to carry forward)\n    // These are the non-generator output columns\n    val generatorOutput = op.generatorOutput.toSet\n    val projectExprs = op.output.filterNot(generatorOutput.contains).map { attr =>\n      exprToProto(attr, op.child.output)\n    }\n\n    if (projectExprs.exists(_.isEmpty) || childOp.isEmpty) {\n      withInfo(op, op.output: _*)\n      return None\n    }\n\n    val explodeBuilder = OperatorOuterClass.Explode\n      .newBuilder()\n      .setChild(childExprProto.get)\n      .setOuter(op.outer)\n      .addAllProjectList(projectExprs.map(_.get).asJava)\n\n    Some(builder.setExplode(explodeBuilder).build())\n  }\n\n  override def createExec(nativeOp: Operator, op: GenerateExec): CometNativeExec = {\n    CometExplodeExec(\n      nativeOp,\n      op,\n      op.output,\n      op.generator,\n      op.generatorOutput,\n      op.child,\n      SerializedPlan(None))\n  }\n}\n\ncase class CometExplodeExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    generator: Generator,\n    generatorOutput: Seq[Attribute],\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec {\n  override def outputPartitioning: Partitioning = child.outputPartitioning\n\n  override def producedAttributes: AttributeSet = AttributeSet(generatorOutput)\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def stringArgs: Iterator[Any] = Iterator(generator, generatorOutput, output, child)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometExplodeExec =>\n        this.output == other.output &&\n        this.generator == other.generator &&\n        this.generatorOutput == other.generatorOutput &&\n        this.child == other.child &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(output, generator, generatorOutput, child)\n\n  override lazy val metrics: Map[String, SQLMetric] =\n    CometMetricNode.baselineMetrics(sparkContext) ++\n      Map(\n        \"input_batches\" -> SQLMetrics.createMetric(sparkContext, \"number of input batches\"),\n        \"input_rows\" -> SQLMetrics.createMetric(sparkContext, \"number of input rows\"),\n        \"output_batches\" -> SQLMetrics.createMetric(sparkContext, \"number of output batches\"))\n}\n\nobject CometUnionExec extends CometSink[UnionExec] {\n\n  override def enabledConfig: 
Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_UNION_ENABLED)\n\n  override def createExec(\n      nativeOp: OperatorOuterClass.Operator,\n      op: UnionExec): CometNativeExec = {\n    CometSinkPlaceHolder(nativeOp, op, CometUnionExec(op, op.output, op.children))\n  }\n}\n\ncase class CometUnionExec(\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    children: Seq[SparkPlan])\n    extends CometExec {\n\n  override def doExecuteColumnar(): RDD[ColumnarBatch] = {\n    sparkContext.union(children.map(_.executeColumnar()))\n  }\n\n  override protected def withNewChildrenInternal(newChildren: IndexedSeq[SparkPlan]): SparkPlan =\n    this.copy(children = newChildren)\n\n  override def verboseStringWithOperatorId(): String = {\n    val childrenString = children.zipWithIndex\n      .map { case (child, index) =>\n        s\"Child $index ${ExplainUtils.generateFieldString(\"Input\", child.output)}\"\n      }\n      .mkString(\"\\n\")\n    s\"\"\"\n       |$formattedNodeName\n       |$childrenString\n       |\"\"\".stripMargin\n  }\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometUnionExec =>\n        this.output == other.output &&\n        this.children == other.children\n      case _ => false\n    }\n  }\n\n  override def hashCode(): Int = Objects.hashCode(output, children)\n}\n\ntrait CometBaseAggregate {\n\n  def doConvert(\n      aggregate: BaseAggregateExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n\n    val modes = aggregate.aggregateExpressions.map(_.mode).distinct\n    // In distinct aggregates there can be a combination of modes\n    val multiMode = modes.size > 1\n    // For a final mode HashAggregate, we only need to transform the HashAggregate\n    // if there is Comet partial aggregation.\n    val sparkFinalMode = modes.contains(Final) && findCometPartialAgg(aggregate.child).isEmpty\n\n    if (multiMode || sparkFinalMode) {\n      return None\n    }\n\n    val groupingExpressions = aggregate.groupingExpressions\n    val aggregateExpressions = aggregate.aggregateExpressions\n    val aggregateAttributes = aggregate.aggregateAttributes\n    val resultExpressions = aggregate.resultExpressions\n    val child = aggregate.child\n\n    if (groupingExpressions.isEmpty && aggregateExpressions.isEmpty) {\n      withInfo(aggregate, \"No group by or aggregation\")\n      return None\n    }\n\n    if (groupingExpressions.exists(expr =>\n        expr.dataType match {\n          case _: MapType => true\n          case _ => false\n        })) {\n      withInfo(aggregate, \"Grouping on map types is not supported\")\n      return None\n    }\n\n    val groupingExprsWithInput =\n      groupingExpressions.map(expr => expr.name -> exprToProto(expr, child.output))\n\n    val emptyExprs = groupingExprsWithInput.collect {\n      case (expr, proto) if proto.isEmpty => expr\n    }\n\n    if (emptyExprs.nonEmpty) {\n      withInfo(aggregate, s\"Unsupported group expressions: ${emptyExprs.mkString(\", \")}\")\n      return None\n    }\n\n    val groupingExprs = groupingExprsWithInput.map(_._2)\n\n    // In some cases, the aggregateExpressions could be empty.\n    // For example, if the aggregation only has grouping expressions, or only has\n    // distinct aggregate functions:\n    //\n    // SELECT COUNT(distinct col2), col1 FROM test group by col1\n    //  +- HashAggregate (keys =[col1#6], functions =[count 
(distinct col2#7)] )\n    //    +- Exchange hashpartitioning (col1#6, 10), ENSURE_REQUIREMENTS, [plan_id = 36]\n    //      +- HashAggregate (keys =[col1#6], functions =[partial_count (distinct col2#7)] )\n    //        +- HashAggregate (keys =[col1#6, col2#7], functions =[] )\n    //          +- Exchange hashpartitioning (col1#6, col2#7, 10), ENSURE_REQUIREMENTS, ...\n    //            +- HashAggregate (keys =[col1#6, col2#7], functions =[] )\n    //              +- FileScan parquet spark_catalog.default.test[col1#6, col2#7] ......\n    // If the aggregateExpressions is empty, we only want to build groupingExpressions,\n    // and skip processing of aggregateExpressions.\n    if (aggregateExpressions.isEmpty) {\n      val hashAggBuilder = OperatorOuterClass.HashAggregate.newBuilder()\n      hashAggBuilder.addAllGroupingExprs(groupingExprs.map(_.get).asJava)\n      val attributes = groupingExpressions.map(_.toAttribute) ++ aggregateAttributes\n      val resultExprs = resultExpressions.map(exprToProto(_, attributes))\n      if (resultExprs.exists(_.isEmpty)) {\n        withInfo(\n          aggregate,\n          s\"Unsupported result expressions found in: $resultExpressions\",\n          resultExpressions: _*)\n        return None\n      }\n      hashAggBuilder.addAllResultExprs(resultExprs.map(_.get).asJava)\n      Some(builder.setHashAgg(hashAggBuilder).build())\n    } else {\n      val modes = aggregateExpressions.map(_.mode).distinct\n\n      if (modes.size != 1) {\n        // This shouldn't happen, as all aggregation expressions should share the same mode.\n        // Nevertheless, fall back to Spark here.\n        withInfo(aggregate, \"All aggregate expressions do not have the same mode\")\n        return None\n      }\n\n      val mode = modes.head match {\n        case Partial => CometAggregateMode.Partial\n        case Final => CometAggregateMode.Final\n        case _ =>\n          withInfo(aggregate, s\"Unsupported aggregation mode ${modes.head}\")\n          return None\n      }\n\n      // In final mode, the aggregate expressions are bound to the output of the\n      // child, i.e. the buffer attributes produced by partial aggregation.\n      // This is done in Spark `HashAggregateExec` internally. 
In Comet,\n      // we don't have to do this because we don't use the merging expression.\n      val binding = mode != CometAggregateMode.Final\n      // `output` is only used when `binding` is true (i.e., non-Final)\n      val output = child.output\n\n      val aggExprs =\n        aggregateExpressions.map(aggExprToProto(_, output, binding, aggregate.conf))\n\n      if (aggExprs.exists(_.isEmpty)) {\n        withInfo(\n          aggregate,\n          \"Unsupported aggregate expression(s)\",\n          aggregateExpressions ++ aggregateExpressions.map(_.aggregateFunction): _*)\n        return None\n      }\n\n      if (childOp.nonEmpty && groupingExprs.forall(_.isDefined) &&\n        aggExprs.forall(_.isDefined)) {\n        val hashAggBuilder = OperatorOuterClass.HashAggregate.newBuilder()\n        hashAggBuilder.addAllGroupingExprs(groupingExprs.map(_.get).asJava)\n        hashAggBuilder.addAllAggExprs(aggExprs.map(_.get).asJava)\n        if (mode == CometAggregateMode.Final) {\n          val attributes = groupingExpressions.map(_.toAttribute) ++ aggregateAttributes\n          val resultExprs = resultExpressions.map(exprToProto(_, attributes))\n          if (resultExprs.exists(_.isEmpty)) {\n            withInfo(\n              aggregate,\n              s\"Unsupported result expressions found in: $resultExpressions\",\n              resultExpressions: _*)\n            return None\n          }\n          hashAggBuilder.addAllResultExprs(resultExprs.map(_.get).asJava)\n        }\n        hashAggBuilder.setModeValue(mode.getNumber)\n        Some(builder.setHashAgg(hashAggBuilder).build())\n      } else {\n        val allChildren: Seq[Expression] =\n          groupingExpressions ++ aggregateExpressions ++ aggregateAttributes\n        withInfo(aggregate, allChildren: _*)\n        None\n      }\n    }\n\n  }\n\n  /**\n   * Find the first Comet partial aggregate in the plan. 
If it reaches a Spark HashAggregate with\n   * partial mode, it will return None.\n   */\n  private def findCometPartialAgg(plan: SparkPlan): Option[CometHashAggregateExec] = {\n    plan.collectFirst {\n      case agg: CometHashAggregateExec if agg.aggregateExpressions.forall(_.mode == Partial) =>\n        Some(agg)\n      case agg: HashAggregateExec if agg.aggregateExpressions.forall(_.mode == Partial) => None\n      case agg: ObjectHashAggregateExec if agg.aggregateExpressions.forall(_.mode == Partial) =>\n        None\n      case a: AQEShuffleReadExec => findCometPartialAgg(a.child)\n      case s: ShuffleQueryStageExec => findCometPartialAgg(s.plan)\n    }.flatten\n  }\n\n}\n\nobject CometHashAggregateExec\n    extends CometOperatorSerde[HashAggregateExec]\n    with CometBaseAggregate {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_AGGREGATE_ENABLED)\n\n  override def getSupportLevel(op: HashAggregateExec): SupportLevel = {\n    // some unit tests need to disable partial or final hash aggregate support to test that\n    // CometExecRule does not allow mixed Spark/Comet aggregates\n    if (!CometConf.COMET_ENABLE_PARTIAL_HASH_AGGREGATE.get(op.conf) &&\n      op.aggregateExpressions.exists(expr => expr.mode == Partial || expr.mode == PartialMerge)) {\n      return Unsupported(Some(\"Partial aggregates disabled via test config\"))\n    }\n    if (!CometConf.COMET_ENABLE_FINAL_HASH_AGGREGATE.get(op.conf) &&\n      op.aggregateExpressions.exists(_.mode == Final)) {\n      return Unsupported(Some(\"Final aggregates disabled via test config\"))\n    }\n    Compatible()\n  }\n\n  override def convert(\n      aggregate: HashAggregateExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    doConvert(aggregate, builder, childOp: _*)\n  }\n\n  override def createExec(nativeOp: Operator, op: HashAggregateExec): CometNativeExec = {\n    CometHashAggregateExec(\n      nativeOp,\n      op,\n      op.output,\n      op.groupingExpressions,\n      op.aggregateExpressions,\n      op.resultExpressions,\n      op.child.output,\n      op.child,\n      SerializedPlan(None))\n  }\n}\n\nobject CometObjectHashAggregateExec\n    extends CometOperatorSerde[ObjectHashAggregateExec]\n    with CometBaseAggregate {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_AGGREGATE_ENABLED)\n\n  override def convert(\n      aggregate: ObjectHashAggregateExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n\n    if (!isCometShuffleEnabled(aggregate.conf)) {\n      // When Comet shuffle is disabled, we don't want to transform the HashAggregate\n      // to CometHashAggregate. 
Otherwise, we probably get partial Comet aggregation\n      // and final Spark aggregation.\n      return None\n    }\n\n    doConvert(aggregate, builder, childOp: _*)\n  }\n\n  override def createExec(nativeOp: Operator, op: ObjectHashAggregateExec): CometNativeExec = {\n    CometHashAggregateExec(\n      nativeOp,\n      op,\n      op.output,\n      op.groupingExpressions,\n      op.aggregateExpressions,\n      op.resultExpressions,\n      op.child.output,\n      op.child,\n      SerializedPlan(None))\n  }\n}\n\ncase class CometHashAggregateExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    groupingExpressions: Seq[NamedExpression],\n    aggregateExpressions: Seq[AggregateExpression],\n    resultExpressions: Seq[NamedExpression],\n    input: Seq[Attribute],\n    child: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometUnaryExec\n    with PartitioningPreservingUnaryExecNode {\n\n  // The aggExprs could be empty. For example, if the aggregate functions only have\n  // distinct aggregate functions or only have group by, the aggExprs is empty and\n  // modes is empty too. If aggExprs is not empty, we need to verify all the\n  // aggregates have the same mode.\n  val modes: Seq[AggregateMode] = aggregateExpressions.map(_.mode).distinct\n  assert(modes.length == 1 || modes.isEmpty)\n  val mode = modes.headOption\n\n  override def producedAttributes: AttributeSet = outputSet ++ AttributeSet(resultExpressions)\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan =\n    this.copy(child = newChild)\n\n  override def verboseStringWithOperatorId(): String = {\n    s\"\"\"\n       |$formattedNodeName\n       |${ExplainUtils.generateFieldString(\"Input\", child.output)}\n       |${ExplainUtils.generateFieldString(\"Keys\", groupingExpressions)}\n       |${ExplainUtils.generateFieldString(\"Functions\", aggregateExpressions)}\n       |\"\"\".stripMargin\n  }\n\n  override def stringArgs: Iterator[Any] =\n    Iterator(input, mode, groupingExpressions, aggregateExpressions, child)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometHashAggregateExec =>\n        this.output == other.output &&\n        this.groupingExpressions == other.groupingExpressions &&\n        this.aggregateExpressions == other.aggregateExpressions &&\n        this.input == other.input &&\n        this.mode == other.mode &&\n        this.child == other.child &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int =\n    Objects.hashCode(output, groupingExpressions, aggregateExpressions, input, mode, child)\n\n  override protected def outputExpressions: Seq[NamedExpression] = resultExpressions\n}\n\ntrait CometHashJoin {\n\n  def doConvert(\n      join: HashJoin,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    // `HashJoin` has only two implementations in Spark, but we check the type of the join to\n    // make sure we are handling the correct join type.\n    if (!(CometConf.COMET_EXEC_HASH_JOIN_ENABLED.get(join.conf) &&\n        join.isInstanceOf[ShuffledHashJoinExec]) &&\n      !(CometConf.COMET_EXEC_BROADCAST_HASH_JOIN_ENABLED.get(join.conf) &&\n        join.isInstanceOf[BroadcastHashJoinExec])) {\n      withInfo(join, s\"Invalid hash join type ${join.nodeName}\")\n      return None\n    }\n\n   
 if (join.buildSide == BuildRight && join.joinType == LeftAnti) {\n      // https://github.com/apache/datafusion-comet/issues/457\n      withInfo(join, \"BuildRight with LeftAnti is not supported\")\n      return None\n    }\n\n    val condition = join.condition.map { cond =>\n      val condProto = exprToProto(cond, join.left.output ++ join.right.output)\n      if (condProto.isEmpty) {\n        withInfo(join, cond)\n        return None\n      }\n      condProto.get\n    }\n\n    val joinType = {\n      import OperatorOuterClass.JoinType\n      join.joinType match {\n        case Inner => JoinType.Inner\n        case LeftOuter => JoinType.LeftOuter\n        case RightOuter => JoinType.RightOuter\n        case FullOuter => JoinType.FullOuter\n        case LeftSemi => JoinType.LeftSemi\n        case LeftAnti => JoinType.LeftAnti\n        case _ =>\n          // Spark doesn't support other join types\n          withInfo(join, s\"Unsupported join type ${join.joinType}\")\n          return None\n      }\n    }\n\n    val leftKeys = join.leftKeys.map(exprToProto(_, join.left.output))\n    val rightKeys = join.rightKeys.map(exprToProto(_, join.right.output))\n\n    if (leftKeys.forall(_.isDefined) &&\n      rightKeys.forall(_.isDefined) &&\n      childOp.nonEmpty) {\n      val joinBuilder = OperatorOuterClass.HashJoin\n        .newBuilder()\n        .setJoinType(joinType)\n        .addAllLeftJoinKeys(leftKeys.map(_.get).asJava)\n        .addAllRightJoinKeys(rightKeys.map(_.get).asJava)\n        .setBuildSide(if (join.buildSide == BuildLeft) OperatorOuterClass.BuildSide.BuildLeft\n        else OperatorOuterClass.BuildSide.BuildRight)\n      condition.foreach(joinBuilder.setCondition)\n      Some(builder.setHashJoin(joinBuilder).build())\n    } else {\n      val allExprs: Seq[Expression] = join.leftKeys ++ join.rightKeys\n      withInfo(join, allExprs: _*)\n      None\n    }\n  }\n}\n\nobject CometBroadcastHashJoinExec extends CometOperatorSerde[HashJoin] with CometHashJoin {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_EXEC_HASH_JOIN_ENABLED)\n\n  override def convert(\n      join: HashJoin,\n      builder: Operator.Builder,\n      childOp: Operator*): Option[Operator] =\n    doConvert(join, builder, childOp: _*)\n\n  override def createExec(nativeOp: Operator, op: HashJoin): CometNativeExec = {\n    CometBroadcastHashJoinExec(\n      nativeOp,\n      op,\n      op.output,\n      op.outputOrdering,\n      op.leftKeys,\n      op.rightKeys,\n      op.joinType,\n      op.condition,\n      op.buildSide,\n      op.left,\n      op.right,\n      SerializedPlan(None))\n  }\n}\n\nobject CometHashJoinExec extends CometOperatorSerde[HashJoin] with CometHashJoin {\n\n  override def enabledConfig: Option[ConfigEntry[Boolean]] =\n    Some(CometConf.COMET_EXEC_HASH_JOIN_ENABLED)\n\n  override def convert(\n      join: HashJoin,\n      builder: Operator.Builder,\n      childOp: Operator*): Option[Operator] =\n    doConvert(join, builder, childOp: _*)\n\n  override def createExec(nativeOp: Operator, op: HashJoin): CometNativeExec = {\n    CometHashJoinExec(\n      nativeOp,\n      op,\n      op.output,\n      op.outputOrdering,\n      op.leftKeys,\n      op.rightKeys,\n      op.joinType,\n      op.condition,\n      op.buildSide,\n      op.left,\n      op.right,\n      SerializedPlan(None))\n  }\n}\n\ncase class CometHashJoinExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    override val 
outputOrdering: Seq[SortOrder],\n    leftKeys: Seq[Expression],\n    rightKeys: Seq[Expression],\n    joinType: JoinType,\n    condition: Option[Expression],\n    buildSide: BuildSide,\n    override val left: SparkPlan,\n    override val right: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometBinaryExec {\n\n  override def outputPartitioning: Partitioning = joinType match {\n    case _: InnerLike =>\n      PartitioningCollection(Seq(left.outputPartitioning, right.outputPartitioning))\n    case LeftOuter => left.outputPartitioning\n    case RightOuter => right.outputPartitioning\n    case FullOuter => UnknownPartitioning(left.outputPartitioning.numPartitions)\n    case LeftExistence(_) => left.outputPartitioning\n    case x =>\n      throw new IllegalArgumentException(s\"ShuffledJoin should not take $x as the JoinType\")\n  }\n\n  override def withNewChildrenInternal(newLeft: SparkPlan, newRight: SparkPlan): SparkPlan =\n    this.copy(left = newLeft, right = newRight)\n\n  override def stringArgs: Iterator[Any] =\n    Iterator(leftKeys, rightKeys, joinType, buildSide, condition, left, right)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometHashJoinExec =>\n        this.output == other.output &&\n        this.leftKeys == other.leftKeys &&\n        this.rightKeys == other.rightKeys &&\n        this.condition == other.condition &&\n        this.buildSide == other.buildSide &&\n        this.left == other.left &&\n        this.right == other.right &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int =\n    Objects.hashCode(output, leftKeys, rightKeys, condition, buildSide, left, right)\n\n  override lazy val metrics: Map[String, SQLMetric] =\n    CometMetricNode.hashJoinMetrics(sparkContext)\n}\n\ncase class CometBroadcastHashJoinExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    override val outputOrdering: Seq[SortOrder],\n    leftKeys: Seq[Expression],\n    rightKeys: Seq[Expression],\n    joinType: JoinType,\n    condition: Option[Expression],\n    buildSide: BuildSide,\n    override val left: SparkPlan,\n    override val right: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometBinaryExec {\n\n  // The following logic of `outputPartitioning` is copied from Spark `BroadcastHashJoinExec`.\n  protected lazy val streamedPlan: SparkPlan = buildSide match {\n    case BuildLeft => right\n    case BuildRight => left\n  }\n\n  override lazy val outputPartitioning: Partitioning = {\n    joinType match {\n      case _: InnerLike if conf.broadcastHashJoinOutputPartitioningExpandLimit > 0 =>\n        streamedPlan.outputPartitioning match {\n          case h: HashPartitioning => expandOutputPartitioning(h)\n          case h: Expression if h.getClass.getName.contains(\"CoalescedHashPartitioning\") =>\n            expandOutputPartitioning(h)\n          case c: PartitioningCollection => expandOutputPartitioning(c)\n          case other => other\n        }\n      case _ => streamedPlan.outputPartitioning\n    }\n  }\n\n  protected lazy val (buildKeys, streamedKeys) = {\n    require(\n      leftKeys.length == rightKeys.length &&\n        leftKeys\n          .map(_.dataType)\n          .zip(rightKeys.map(_.dataType))\n          .forall(types => types._1.sameType(types._2)),\n      \"Join keys from two sides should have same length and 
types\")\n    buildSide match {\n      case BuildLeft => (leftKeys, rightKeys)\n      case BuildRight => (rightKeys, leftKeys)\n    }\n  }\n\n  // An one-to-many mapping from a streamed key to build keys.\n  private lazy val streamedKeyToBuildKeyMapping = {\n    val mapping = mutable.Map.empty[Expression, Seq[Expression]]\n    streamedKeys.zip(buildKeys).foreach { case (streamedKey, buildKey) =>\n      val key = streamedKey.canonicalized\n      mapping.get(key) match {\n        case Some(v) => mapping.put(key, v :+ buildKey)\n        case None => mapping.put(key, Seq(buildKey))\n      }\n    }\n    mapping.toMap\n  }\n\n  // Expands the given partitioning collection recursively.\n  private def expandOutputPartitioning(\n      partitioning: PartitioningCollection): PartitioningCollection = {\n    PartitioningCollection(partitioning.partitionings.flatMap {\n      case h: HashPartitioning => expandOutputPartitioning(h).partitionings\n      case h: Expression if h.getClass.getName.contains(\"CoalescedHashPartitioning\") =>\n        expandOutputPartitioning(h).partitionings\n      case c: PartitioningCollection => Seq(expandOutputPartitioning(c))\n      case other => Seq(other)\n    })\n  }\n\n  // Expands the given hash partitioning by substituting streamed keys with build keys.\n  // For example, if the expressions for the given partitioning are Seq(\"a\", \"b\", \"c\")\n  // where the streamed keys are Seq(\"b\", \"c\") and the build keys are Seq(\"x\", \"y\"),\n  // the expanded partitioning will have the following expressions:\n  // Seq(\"a\", \"b\", \"c\"), Seq(\"a\", \"b\", \"y\"), Seq(\"a\", \"x\", \"c\"), Seq(\"a\", \"x\", \"y\").\n  // The expanded expressions are returned as PartitioningCollection.\n  private def expandOutputPartitioning(\n      partitioning: Partitioning with Expression): PartitioningCollection = {\n    val maxNumCombinations = conf.broadcastHashJoinOutputPartitioningExpandLimit\n    var currentNumCombinations = 0\n\n    def generateExprCombinations(\n        current: Seq[Expression],\n        accumulated: Seq[Expression]): Seq[Seq[Expression]] = {\n      if (currentNumCombinations >= maxNumCombinations) {\n        Nil\n      } else if (current.isEmpty) {\n        currentNumCombinations += 1\n        Seq(accumulated)\n      } else {\n        val buildKeysOpt = streamedKeyToBuildKeyMapping.get(current.head.canonicalized)\n        generateExprCombinations(current.tail, accumulated :+ current.head) ++\n          buildKeysOpt\n            .map(_.flatMap(b => generateExprCombinations(current.tail, accumulated :+ b)))\n            .getOrElse(Nil)\n      }\n    }\n\n    val hashPartitioningLikeExpressions =\n      partitioning match {\n        case p: HashPartitioningLike => p.expressions\n        case _ => Seq()\n      }\n    PartitioningCollection(\n      generateExprCombinations(hashPartitioningLikeExpressions, Nil)\n        .map(exprs => partitioning.withNewChildren(exprs).asInstanceOf[Partitioning]))\n  }\n\n  override def withNewChildrenInternal(newLeft: SparkPlan, newRight: SparkPlan): SparkPlan =\n    this.copy(left = newLeft, right = newRight)\n\n  override def stringArgs: Iterator[Any] =\n    Iterator(leftKeys, rightKeys, joinType, condition, buildSide, left, right)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometBroadcastHashJoinExec =>\n        this.output == other.output &&\n        this.leftKeys == other.leftKeys &&\n        this.rightKeys == other.rightKeys &&\n        this.condition == other.condition &&\n        
this.buildSide == other.buildSide &&\n        this.left == other.left &&\n        this.right == other.right &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int =\n    Objects.hashCode(output, leftKeys, rightKeys, condition, buildSide, left, right)\n\n  override lazy val metrics: Map[String, SQLMetric] =\n    CometMetricNode.hashJoinMetrics(sparkContext)\n}\n\nobject CometSortMergeJoinExec extends CometOperatorSerde[SortMergeJoinExec] {\n  override def enabledConfig: Option[ConfigEntry[Boolean]] = Some(\n    CometConf.COMET_EXEC_SORT_MERGE_JOIN_ENABLED)\n\n  override def convert(\n      join: SortMergeJoinExec,\n      builder: Operator.Builder,\n      childOp: OperatorOuterClass.Operator*): Option[OperatorOuterClass.Operator] = {\n    // `requiredOrders` and `getKeyOrdering` are copied from Spark's SortMergeJoinExec.\n    def requiredOrders(keys: Seq[Expression]): Seq[SortOrder] = {\n      keys.map(SortOrder(_, Ascending))\n    }\n\n    def getKeyOrdering(\n        keys: Seq[Expression],\n        childOutputOrdering: Seq[SortOrder]): Seq[SortOrder] = {\n      val requiredOrdering = requiredOrders(keys)\n      if (SortOrder.orderingSatisfies(childOutputOrdering, requiredOrdering)) {\n        keys.zip(childOutputOrdering).map { case (key, childOrder) =>\n          val sameOrderExpressionsSet = ExpressionSet(childOrder.children) - key\n          SortOrder(key, Ascending, sameOrderExpressionsSet.toSeq)\n        }\n      } else {\n        requiredOrdering\n      }\n    }\n\n    if (join.condition.isDefined &&\n      !CometConf.COMET_EXEC_SORT_MERGE_JOIN_WITH_JOIN_FILTER_ENABLED\n        .get(join.conf)) {\n      withInfo(\n        join,\n        s\"${CometConf.COMET_EXEC_SORT_MERGE_JOIN_WITH_JOIN_FILTER_ENABLED.key} is not enabled\",\n        join.condition.get)\n      return None\n    }\n\n    val condition = join.condition.map { cond =>\n      val condProto = exprToProto(cond, join.left.output ++ join.right.output)\n      if (condProto.isEmpty) {\n        withInfo(join, cond)\n        return None\n      }\n      condProto.get\n    }\n\n    val joinType = {\n      import OperatorOuterClass.JoinType\n      join.joinType match {\n        case Inner => JoinType.Inner\n        case LeftOuter => JoinType.LeftOuter\n        case RightOuter => JoinType.RightOuter\n        case FullOuter => JoinType.FullOuter\n        case LeftSemi => JoinType.LeftSemi\n        case LeftAnti => JoinType.LeftAnti\n        case _ =>\n          // Spark doesn't support other join types\n          withInfo(join, s\"Unsupported join type ${join.joinType}\")\n          return None\n      }\n    }\n\n    // Checks if the join keys are supported by DataFusion SortMergeJoin.\n    val errorMsgs = join.leftKeys.flatMap { key =>\n      if (!supportedSortMergeJoinEqualType(key.dataType)) {\n        Some(s\"Unsupported join key type ${key.dataType} on key: ${key.sql}\")\n      } else {\n        None\n      }\n    }\n\n    if (errorMsgs.nonEmpty) {\n      withInfo(join, errorMsgs.mkString(\"\\n\"))\n      return None\n    }\n\n    val leftKeys = join.leftKeys.map(exprToProto(_, join.left.output))\n    val rightKeys = join.rightKeys.map(exprToProto(_, join.right.output))\n\n    val sortOptions = getKeyOrdering(join.leftKeys, join.left.outputOrdering)\n      .map(exprToProto(_, join.left.output))\n\n    if (sortOptions.forall(_.isDefined) &&\n      leftKeys.forall(_.isDefined) &&\n      rightKeys.forall(_.isDefined) &&\n      childOp.nonEmpty) {\n   
   val joinBuilder = OperatorOuterClass.SortMergeJoin\n        .newBuilder()\n        .setJoinType(joinType)\n        .addAllSortOptions(sortOptions.map(_.get).asJava)\n        .addAllLeftJoinKeys(leftKeys.map(_.get).asJava)\n        .addAllRightJoinKeys(rightKeys.map(_.get).asJava)\n      condition.foreach(joinBuilder.setCondition)\n      Some(builder.setSortMergeJoin(joinBuilder).build())\n    } else {\n      val allExprs: Seq[Expression] = join.leftKeys ++ join.rightKeys\n      withInfo(join, allExprs: _*)\n      None\n    }\n  }\n\n  override def createExec(nativeOp: Operator, op: SortMergeJoinExec): CometNativeExec = {\n    CometSortMergeJoinExec(\n      nativeOp,\n      op,\n      op.output,\n      op.outputOrdering,\n      op.leftKeys,\n      op.rightKeys,\n      op.joinType,\n      op.condition,\n      op.left,\n      op.right,\n      SerializedPlan(None))\n  }\n\n  /**\n   * Returns true if the given datatype is supported as a key in DataFusion sort merge join.\n   */\n  private def supportedSortMergeJoinEqualType(dataType: DataType): Boolean = dataType match {\n    case _: ByteType | _: ShortType | _: IntegerType | _: LongType | _: FloatType |\n        _: DoubleType | _: StringType | _: DateType | _: DecimalType | _: BooleanType =>\n      true\n    case TimestampNTZType => true\n    case _ => false\n  }\n\n}\n\ncase class CometSortMergeJoinExec(\n    override val nativeOp: Operator,\n    override val originalPlan: SparkPlan,\n    override val output: Seq[Attribute],\n    override val outputOrdering: Seq[SortOrder],\n    leftKeys: Seq[Expression],\n    rightKeys: Seq[Expression],\n    joinType: JoinType,\n    condition: Option[Expression],\n    override val left: SparkPlan,\n    override val right: SparkPlan,\n    override val serializedPlanOpt: SerializedPlan)\n    extends CometBinaryExec {\n\n  override def outputPartitioning: Partitioning = joinType match {\n    case _: InnerLike =>\n      PartitioningCollection(Seq(left.outputPartitioning, right.outputPartitioning))\n    case LeftOuter => left.outputPartitioning\n    case RightOuter => right.outputPartitioning\n    case FullOuter => UnknownPartitioning(left.outputPartitioning.numPartitions)\n    case LeftExistence(_) => left.outputPartitioning\n    case x =>\n      throw new IllegalArgumentException(s\"ShuffledJoin should not take $x as the JoinType\")\n  }\n\n  override def withNewChildrenInternal(newLeft: SparkPlan, newRight: SparkPlan): SparkPlan =\n    this.copy(left = newLeft, right = newRight)\n\n  override def stringArgs: Iterator[Any] =\n    Iterator(leftKeys, rightKeys, joinType, condition, left, right)\n\n  override def equals(obj: Any): Boolean = {\n    obj match {\n      case other: CometSortMergeJoinExec =>\n        this.output == other.output &&\n        this.leftKeys == other.leftKeys &&\n        this.rightKeys == other.rightKeys &&\n        this.condition == other.condition &&\n        this.left == other.left &&\n        this.right == other.right &&\n        this.serializedPlanOpt == other.serializedPlanOpt\n      case _ =>\n        false\n    }\n  }\n\n  override def hashCode(): Int =\n    Objects.hashCode(output, leftKeys, rightKeys, condition, left, right)\n\n  override lazy val metrics: Map[String, SQLMetric] =\n    CometMetricNode.sortMergeJoinMetrics(sparkContext)\n}\n\nobject CometScanWrapper extends CometSink[SparkPlan] {\n  override def createExec(nativeOp: Operator, op: SparkPlan): CometNativeExec = {\n    CometScanWrapper(nativeOp, op)\n  }\n}\n\ncase class CometScanWrapper(override val nativeOp: Operator, 
override val originalPlan: SparkPlan)\n    extends CometNativeExec\n    with LeafExecNode {\n  override val serializedPlanOpt: SerializedPlan = SerializedPlan(None)\n\n  override def stringArgs: Iterator[Any] = Iterator(originalPlan.output, originalPlan)\n}\n\n/**\n * A pseudo Comet physical scan node after Comet operators. This node serves as a placeholder\n * for chaining subsequent Comet native operators onto the preceding Comet operators. This node\n * will be removed after `CometExecRule`.\n *\n * This is very similar to `CometScanWrapper` above, except that it has a child.\n */\ncase class CometSinkPlaceHolder(\n    override val nativeOp: Operator, // Must be a Scan\n    override val originalPlan: SparkPlan,\n    child: SparkPlan)\n    extends CometUnaryExec {\n  override val serializedPlanOpt: SerializedPlan = SerializedPlan(None)\n\n  override protected def withNewChildInternal(newChild: SparkPlan): SparkPlan = {\n    this.copy(child = newChild)\n  }\n\n  override def stringArgs: Iterator[Any] = Iterator(originalPlan.output, child)\n}\n"
  },
  {
    "path": "spark/src/main/scala/org/apache/spark/sql/comet/plans/AliasAwareOutputExpression.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.plans\n\nimport scala.collection.mutable\n\nimport org.apache.spark.sql.catalyst.SQLConfHelper\nimport org.apache.spark.sql.catalyst.expressions.{Alias, Attribute, AttributeSet, Expression, NamedExpression}\nimport org.apache.spark.sql.internal.SQLConf.EXPRESSION_PROJECTION_CANDIDATE_LIMIT\n\n/**\n * A trait that provides functionality to handle aliases in the `outputExpressions`.\n */\ntrait AliasAwareOutputExpression extends SQLConfHelper {\n  protected val aliasCandidateLimit: Int = conf.getConf(EXPRESSION_PROJECTION_CANDIDATE_LIMIT)\n  protected def outputExpressions: Seq[NamedExpression]\n\n  /**\n   * This method can be used to strip expression which does not affect the result, for example:\n   * strip the expression which is ordering agnostic for output ordering.\n   */\n  protected def strip(expr: Expression): Expression = expr\n\n  // Build an `Expression` -> `Attribute` alias map.\n  // There can be multiple alias defined for the same expressions but it doesn't make sense to store\n  // more than `aliasCandidateLimit` attributes for an expression. 
In those cases the old logic\n  // handled only the last alias so we need to make sure that we give precedence to that.\n  // If the `outputExpressions` contain simple attributes we need to add those too to the map.\n  @transient\n  private lazy val aliasMap = {\n    val aliases = mutable.Map[Expression, mutable.ArrayBuffer[Attribute]]()\n    outputExpressions.reverse.foreach {\n      case a @ Alias(child, _) =>\n        val buffer =\n          aliases.getOrElseUpdate(strip(child).canonicalized, mutable.ArrayBuffer.empty)\n        if (buffer.size < aliasCandidateLimit) {\n          buffer += a.toAttribute\n        }\n      case _ =>\n    }\n    outputExpressions.foreach {\n      case a: Attribute if aliases.contains(a.canonicalized) =>\n        val buffer = aliases(a.canonicalized)\n        if (buffer.size < aliasCandidateLimit) {\n          buffer += a\n        }\n      case _ =>\n    }\n    aliases\n  }\n\n  protected def hasAlias: Boolean = aliasMap.nonEmpty\n\n  /**\n   * Return a stream of expressions in which the original expression is projected with `aliasMap`.\n   */\n  protected def projectExpression(expr: Expression): Stream[Expression] = {\n    val outputSet = AttributeSet(outputExpressions.map(_.toAttribute))\n    expr.multiTransformDown {\n      // Mapping with aliases\n      case e: Expression if aliasMap.contains(e.canonicalized) =>\n        aliasMap(e.canonicalized).toSeq ++ (if (e.containsChild.nonEmpty) Seq(e) else Seq.empty)\n\n      // Prune if we encounter an attribute that we can't map and it is not in output set.\n      // This prune will go up to the closest `multiTransformDown()` call and returns `Stream.empty`\n      // there.\n      case a: Attribute if !outputSet.contains(a) => Seq.empty\n    }.toStream\n  }\n\n  def generateCartesianProduct[T](elementSeqs: Seq[() => Seq[T]]): Stream[Seq[T]] = {\n    elementSeqs.foldRight(Stream(Seq.empty[T]))((elements, elementTails) =>\n      for {\n        elementTail <- elementTails\n        element <- elements()\n      } yield element +: elementTail)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.4/org/apache/comet/shims/CometExprShim.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.catalyst.expressions._\n\nimport org.apache.comet.expressions.CometEvalMode\nimport org.apache.comet.serde.CommonStringExprs\nimport org.apache.comet.serde.ExprOuterClass.{BinaryOutputStyle, Expr}\n\n/**\n * `CometExprShim` acts as a shim for parsing expressions from different Spark versions.\n */\ntrait CometExprShim extends CommonStringExprs {\n  protected def evalMode(c: Cast): CometEvalMode.Value =\n    CometEvalModeUtil.fromSparkEvalMode(c.evalMode)\n\n  protected def binaryOutputStyle: BinaryOutputStyle = BinaryOutputStyle.HEX_DISCRETE\n\n  def versionSpecificExprToProtoInternal(\n      expr: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n    expr match {\n      case s: StringDecode =>\n        // Right child is the encoding expression.\n        stringDecode(expr, s.charset, s.bin, inputs, binding)\n\n      case _ => None\n    }\n  }\n}\n\nobject CometEvalModeUtil {\n  def fromSparkEvalMode(evalMode: EvalMode.Value): CometEvalMode.Value = evalMode match {\n    case EvalMode.LEGACY => CometEvalMode.LEGACY\n    case EvalMode.TRY => CometEvalMode.TRY\n    case EvalMode.ANSI => CometEvalMode.ANSI\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.4/org/apache/comet/shims/ShimCometBroadcastExchangeExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.SparkContext\nimport org.apache.spark.network.util.JavaUtils\nimport org.apache.spark.sql.execution.exchange.BroadcastExchangeLike\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.shims.ShimCometBroadcastExchangeExec.SPARK_MAX_BROADCAST_TABLE_SIZE\n\ntrait ShimCometBroadcastExchangeExec {\n\n  def setJobGroupOrTag(sc: SparkContext, broadcastExchange: BroadcastExchangeLike): Unit = {\n    // Setup a job group here so later it may get cancelled by groupId if necessary.\n    sc.setJobGroup(\n      broadcastExchange.runId.toString,\n      s\"broadcast exchange (runId ${broadcastExchange.runId})\",\n      interruptOnCancel = true)\n  }\n\n  def cancelJobGroup(sc: SparkContext, broadcastExchange: BroadcastExchangeLike): Unit = {\n    sc.cancelJobGroup(broadcastExchange.runId.toString)\n  }\n\n  def maxBroadcastTableBytes(conf: SQLConf): Long = {\n    JavaUtils.byteStringAsBytes(conf.getConfString(SPARK_MAX_BROADCAST_TABLE_SIZE, \"8GB\"))\n  }\n\n}\n\nobject ShimCometBroadcastExchangeExec {\n  val SPARK_MAX_BROADCAST_TABLE_SIZE = \"spark.sql.maxBroadcastTableSize\"\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.4/org/apache/comet/shims/ShimSQLConf.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.internal.SQLConf.LegacyBehaviorPolicy\n\ntrait ShimSQLConf {\n  protected val LEGACY = LegacyBehaviorPolicy.LEGACY\n  protected val CORRECTED = LegacyBehaviorPolicy.CORRECTED\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.4/org/apache/comet/shims/ShimSubqueryBroadcast.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.execution.SubqueryAdaptiveBroadcastExec\n\ntrait ShimSubqueryBroadcast {\n\n  /**\n   * Gets the build key indices from SubqueryAdaptiveBroadcastExec. Spark 3.x has `index: Int`,\n   * Spark 4.x has `indices: Seq[Int]`.\n   */\n  def getSubqueryBroadcastIndices(sab: SubqueryAdaptiveBroadcastExec): Seq[Int] = {\n    Seq(sab.index)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.4/org/apache/spark/sql/comet/shims/ShimCometScanExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport org.apache.hadoop.fs.{FileStatus, Path}\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression}\nimport org.apache.spark.sql.execution.{FileSourceScanExec, PartitionedFileUtil}\nimport org.apache.spark.sql.execution.datasources._\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetOptions\nimport org.apache.spark.sql.sources.Filter\nimport org.apache.spark.sql.types.StructType\n\nimport org.apache.comet.shims.ShimFileFormat\n\ntrait ShimCometScanExec {\n  def wrapped: FileSourceScanExec\n\n  lazy val fileConstantMetadataColumns: Seq[AttributeReference] =\n    wrapped.fileConstantMetadataColumns\n\n  protected def newFileScanRDD(\n      fsRelation: HadoopFsRelation,\n      readFunction: PartitionedFile => Iterator[InternalRow],\n      filePartitions: Seq[FilePartition],\n      readSchema: StructType,\n      options: ParquetOptions): FileScanRDD = new FileScanRDD(\n    fsRelation.sparkSession,\n    readFunction,\n    filePartitions,\n    readSchema,\n    fileConstantMetadataColumns,\n    options)\n\n  protected def isNeededForSchema(sparkSchema: StructType): Boolean = {\n    // TODO: remove after PARQUET-2161 becomes available in Parquet (tracked in SPARK-39634)\n    ShimFileFormat.findRowIndexColumnIndexInSchema(sparkSchema) >= 0\n  }\n\n  protected def getPartitionedFile(f: FileStatus, p: PartitionDirectory): PartitionedFile =\n    PartitionedFileUtil.getPartitionedFile(f, f.getPath, p.values)\n\n  protected def splitFiles(\n      sparkSession: SparkSession,\n      file: FileStatus,\n      filePath: Path,\n      isSplitable: Boolean,\n      maxSplitBytes: Long,\n      partitionValues: InternalRow): Seq[PartitionedFile] =\n    PartitionedFileUtil.splitFiles(\n      sparkSession,\n      file,\n      filePath,\n      isSplitable,\n      maxSplitBytes,\n      partitionValues)\n\n  protected def getPushedDownFilters(\n      relation: HadoopFsRelation,\n      dataFilters: Seq[Expression]): Seq[Filter] = {\n    val supportNestedPredicatePushdown = DataSourceUtils.supportNestedPredicatePushdown(relation)\n    dataFilters.flatMap(DataSourceStrategy.translateFilter(_, supportNestedPredicatePushdown))\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.4/org/apache/spark/sql/comet/shims/ShimSparkErrorConverter.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport java.io.FileNotFoundException\n\nimport scala.util.matching.Regex\n\nimport org.apache.spark.{QueryContext, SparkDateTimeException, SparkException}\nimport org.apache.spark.sql.catalyst.trees.SQLQueryContext\nimport org.apache.spark.sql.errors.QueryExecutionErrors\nimport org.apache.spark.sql.types._\nimport org.apache.spark.unsafe.types.UTF8String\n\nobject ShimSparkErrorConverter {\n  val ObjectLocationPattern: Regex = \"Object at location (.+?) not found\".r\n}\n\n/**\n * Spark 3.4 implementation for converting error types to proper Spark exceptions.\n *\n * Handles all error cases using the Spark 3.4 QueryExecutionErrors API. Differs from the 3.5\n * version in 4 places where the API changed between 3.4 and 3.5.\n */\ntrait ShimSparkErrorConverter {\n\n  private def sqlCtx(context: Array[QueryContext]): SQLQueryContext =\n    context.headOption.map(_.asInstanceOf[SQLQueryContext]).getOrElse(null)\n\n  private def parseFloatLiteral(value: String): Float = {\n    value.toLowerCase match {\n      case \"inf\" | \"+inf\" | \"infinity\" | \"+infinity\" => Float.PositiveInfinity\n      case \"-inf\" | \"-infinity\" => Float.NegativeInfinity\n      case \"nan\" | \"+nan\" | \"-nan\" => Float.NaN\n      case _ => value.toFloat\n    }\n  }\n\n  private def parseDoubleLiteral(value: String): Double = {\n    val normalized = value.toLowerCase.stripSuffix(\"d\")\n    normalized match {\n      case \"inf\" | \"+inf\" | \"infinity\" | \"+infinity\" => Double.PositiveInfinity\n      case \"-inf\" | \"-infinity\" => Double.NegativeInfinity\n      case \"nan\" | \"+nan\" | \"-nan\" => Double.NaN\n      case _ => normalized.toDouble\n    }\n  }\n\n  def convertErrorType(\n      errorType: String,\n      errorClass: String,\n      params: Map[String, Any],\n      context: Array[QueryContext],\n      summary: String): Option[Throwable] = {\n    val _ = (errorClass, summary)\n\n    errorType match {\n\n      case \"DivideByZero\" =>\n        Some(QueryExecutionErrors.divideByZeroError(sqlCtx(context)))\n\n      case \"RemainderByZero\" =>\n        Some(\n          new SparkException(\n            errorClass = \"REMAINDER_BY_ZERO\",\n            messageParameters = params.map { case (k, v) => (k, v.toString) },\n            cause = null))\n\n      case \"IntervalDividedByZero\" =>\n        Some(QueryExecutionErrors.intervalDividedByZeroError(sqlCtx(context)))\n\n      case \"BinaryArithmeticOverflow\" =>\n        // Spark 3.x does not take functionName parameter\n        Some(\n          QueryExecutionErrors.binaryArithmeticCauseOverflowError(\n            params(\"value1\").toString.toShort,\n            
params(\"symbol\").toString,\n            params(\"value2\").toString.toShort))\n\n      case \"ArithmeticOverflow\" =>\n        val fromType = params(\"fromType\").toString\n        Some(\n          QueryExecutionErrors\n            .arithmeticOverflowError(fromType + \" overflow\", \"\", sqlCtx(context)))\n\n      case \"IntegralDivideOverflow\" =>\n        Some(QueryExecutionErrors.overflowInIntegralDivideError(sqlCtx(context)))\n\n      case \"DecimalSumOverflow\" =>\n        // Spark 3.x takes SQLQueryContext, not QueryContext\n        Some(QueryExecutionErrors.overflowInSumOfDecimalError(sqlCtx(context)))\n\n      case \"NumericValueOutOfRange\" =>\n        val decimal = Decimal(params(\"value\").toString)\n        Some(\n          QueryExecutionErrors.cannotChangeDecimalPrecisionError(\n            decimal,\n            params(\"precision\").toString.toInt,\n            params(\"scale\").toString.toInt,\n            sqlCtx(context)))\n\n      case \"DatetimeOverflow\" =>\n        Some(\n          new SparkException(\n            errorClass = \"DATETIME_OVERFLOW\",\n            messageParameters = params.map { case (k, v) => (k, v.toString) },\n            cause = null))\n\n      case \"InvalidArrayIndex\" =>\n        Some(\n          QueryExecutionErrors.invalidArrayIndexError(\n            params(\"indexValue\").toString.toInt,\n            params(\"arraySize\").toString.toInt,\n            sqlCtx(context)))\n\n      case \"InvalidElementAtIndex\" =>\n        Some(\n          QueryExecutionErrors.invalidElementAtIndexError(\n            params(\"indexValue\").toString.toInt,\n            params(\"arraySize\").toString.toInt,\n            sqlCtx(context)))\n\n      case \"InvalidIndexOfZero\" =>\n        Some(QueryExecutionErrors.invalidIndexOfZeroError(sqlCtx(context)))\n\n      case \"InvalidBitmapPosition\" =>\n        // invalidBitmapPositionError does not exist in Spark 3.4\n        Some(\n          new SparkException(\n            errorClass = \"INVALID_BITMAP_POSITION\",\n            messageParameters = params.map { case (k, v) => (k, v.toString) },\n            cause = null))\n\n      case \"DuplicatedMapKey\" =>\n        Some(QueryExecutionErrors.duplicateMapKeyFoundError(params(\"key\")))\n\n      case \"NullMapKey\" =>\n        Some(QueryExecutionErrors.nullAsMapKeyNotAllowedError())\n\n      case \"MapKeyValueDiffSizes\" =>\n        Some(QueryExecutionErrors.mapDataKeyArrayLengthDiffersFromValueArrayLengthError())\n\n      case \"ExceedMapSizeLimit\" =>\n        Some(QueryExecutionErrors.exceedMapSizeLimitError(params(\"size\").toString.toInt))\n\n      case \"CollectionSizeLimitExceeded\" =>\n        // createArrayWithElementsExceedLimitError takes (count: Any) in Spark 3.4\n        Some(\n          QueryExecutionErrors.createArrayWithElementsExceedLimitError(\n            params(\"numElements\").toString.toLong))\n\n      case \"NotNullAssertViolation\" =>\n        Some(\n          QueryExecutionErrors.foundNullValueForNotNullableFieldError(\n            params(\"fieldName\").toString))\n\n      case \"ValueIsNull\" =>\n        Some(\n          QueryExecutionErrors.fieldCannotBeNullError(\n            params.getOrElse(\"rowIndex\", 0).toString.toInt,\n            params(\"fieldName\").toString))\n\n      case \"CannotParseTimestamp\" =>\n        Some(\n          QueryExecutionErrors.ansiDateTimeParseError(new Exception(params(\"message\").toString)))\n\n      case \"InvalidFractionOfSecond\" =>\n        Some(QueryExecutionErrors.invalidFractionOfSecondError())\n\n      
case \"CastInvalidValue\" =>\n        val str = UTF8String.fromString(params(\"value\").toString)\n        val targetType = getDataType(params(\"toType\").toString)\n        Some(\n          QueryExecutionErrors\n            .invalidInputInCastToNumberError(targetType, str, sqlCtx(context)))\n\n      case \"InvalidInputInCastToDatetime\" =>\n        val expression =\n          s\"'${params(\"value\").toString.replace(\"\\\\\", \"\\\\\\\\\").replace(\"'\", \"\\\\'\")}'\"\n        val sourceType = s\"\"\"\"${params(\"fromType\").toString}\"\"\"\"\n        val targetType = s\"\"\"\"${params(\"toType\").toString}\"\"\"\"\n        Some(\n          new SparkDateTimeException(\n            errorClass = \"CAST_INVALID_INPUT\",\n            messageParameters = Map(\n              \"expression\" -> expression,\n              \"sourceType\" -> sourceType,\n              \"targetType\" -> targetType,\n              \"ansiConfig\" -> \"\\\"spark.sql.ansi.enabled\\\"\"),\n            context = context,\n            summary = summary))\n\n      case \"CastOverFlow\" =>\n        val fromType = getDataType(params(\"fromType\").toString)\n        val toType = getDataType(params(\"toType\").toString)\n        val valueStr = params(\"value\").toString\n\n        val typedValue: Any = fromType match {\n          case _: DecimalType =>\n            val cleanStr = if (valueStr.endsWith(\"BD\")) valueStr.dropRight(2) else valueStr\n            Decimal(cleanStr)\n          case ByteType =>\n            val cleanStr = if (valueStr.endsWith(\"T\")) valueStr.dropRight(1) else valueStr\n            cleanStr.toByte\n          case ShortType =>\n            val cleanStr = if (valueStr.endsWith(\"S\")) valueStr.dropRight(1) else valueStr\n            cleanStr.toShort\n          case IntegerType => valueStr.toInt\n          case LongType =>\n            val cleanStr = if (valueStr.endsWith(\"L\")) valueStr.dropRight(1) else valueStr\n            cleanStr.toLong\n          case FloatType => parseFloatLiteral(valueStr)\n          case DoubleType => parseDoubleLiteral(valueStr)\n          case StringType => UTF8String.fromString(valueStr)\n          case _ => valueStr\n        }\n\n        Some(QueryExecutionErrors.castingCauseOverflowError(typedValue, fromType, toType))\n\n      case \"CannotParseDecimal\" =>\n        Some(QueryExecutionErrors.cannotParseDecimalError())\n\n      case \"InvalidUtf8String\" =>\n        // invalidUTF8StringError does not exist in Spark 3.x; use generic fallback\n        Some(\n          new SparkException(\n            errorClass = \"INVALID_UTF8_STRING\",\n            messageParameters = params.map { case (k, v) => (k, v.toString) },\n            cause = null))\n\n      case \"UnexpectedPositiveValue\" =>\n        Some(\n          QueryExecutionErrors.unexpectedValueForStartInFunctionError(\n            params(\"parameterName\").toString))\n\n      case \"UnexpectedNegativeValue\" =>\n        Some(\n          QueryExecutionErrors.unexpectedValueForLengthInFunctionError(\n            params(\"parameterName\").toString))\n\n      case \"InvalidRegexGroupIndex\" =>\n        // invalidRegexGroupIndexError does not exist in Spark 3.4\n        Some(\n          new SparkException(\n            errorClass = \"INVALID_REGEX_GROUP_INDEX\",\n            messageParameters = params.map { case (k, v) => (k, v.toString) },\n            cause = null))\n\n      case \"DatatypeCannotOrder\" =>\n        // orderedOperationUnsupportedByDataTypeError takes DataType in Spark 3.4, not String\n        Some(\n        
  QueryExecutionErrors.orderedOperationUnsupportedByDataTypeError(\n            getDataType(params(\"dataType\").toString)))\n\n      case \"ScalarSubqueryTooManyRows\" =>\n        // Spark 3.x provides multipleRowSubqueryError; Spark 4.0 renamed it to multipleRowScalarSubqueryError\n        Some(QueryExecutionErrors.multipleRowSubqueryError(sqlCtx(context)))\n\n      case \"IntervalArithmeticOverflowWithSuggestion\" =>\n        // Spark 3.x uses a single intervalArithmeticOverflowError method\n        Some(\n          QueryExecutionErrors.intervalArithmeticOverflowError(\n            \"Interval arithmetic overflow\",\n            params.get(\"functionName\").map(_.toString).getOrElse(\"\"),\n            sqlCtx(context)))\n\n      case \"IntervalArithmeticOverflowWithoutSuggestion\" =>\n        Some(\n          QueryExecutionErrors\n            .intervalArithmeticOverflowError(\"Interval arithmetic overflow\", \"\", sqlCtx(context)))\n\n      case \"DuplicateFieldCaseInsensitive\" =>\n        Some(\n          QueryExecutionErrors.foundDuplicateFieldInCaseInsensitiveModeError(\n            params(\"requiredFieldName\").toString,\n            params(\"matchedOrcFields\").toString))\n\n      case \"FileNotFound\" =>\n        val msg = params(\"message\").toString\n        // Extract file path from native error message and format like Hadoop's\n        // FileNotFoundException: \"File <path> does not exist\"\n        val path = ShimSparkErrorConverter.ObjectLocationPattern\n          .findFirstMatchIn(msg)\n          .map(_.group(1))\n          .getOrElse(msg)\n        Some(\n          QueryExecutionErrors.readCurrentFileNotFoundError(\n            new FileNotFoundException(s\"File $path does not exist\")))\n\n      case _ =>\n        None\n    }\n  }\n\n  private def getDataType(typeName: String): DataType = {\n    typeName.toUpperCase match {\n      case \"BYTE\" | \"TINYINT\" => ByteType\n      case \"SHORT\" | \"SMALLINT\" => ShortType\n      case \"INT\" | \"INTEGER\" => IntegerType\n      case \"LONG\" | \"BIGINT\" => LongType\n      case \"FLOAT\" | \"REAL\" => FloatType\n      case \"DOUBLE\" => DoubleType\n      case \"DECIMAL\" => DecimalType.SYSTEM_DEFAULT\n      case \"STRING\" | \"VARCHAR\" => StringType\n      case \"BINARY\" => BinaryType\n      case \"BOOLEAN\" => BooleanType\n      case \"DATE\" => DateType\n      case \"TIMESTAMP\" => TimestampType\n      case _ =>\n        try {\n          DataType.fromDDL(typeName)\n        } catch {\n          case _: Exception =>\n            // fromDDL rejects types that are syntactically invalid in SQL DDL, such as\n            // DECIMAL(p,s) with a negative scale (valid when allowNegativeScaleOfDecimal=true).\n            // Parse those manually rather than silently falling back to StringType.\n            if (typeName.toUpperCase.startsWith(\"DECIMAL(\") && typeName.endsWith(\")\")) {\n              val inner = typeName.substring(\"DECIMAL(\".length, typeName.length - 1)\n              val parts = inner.split(\",\")\n              if (parts.length == 2) {\n                try {\n                  DataTypes.createDecimalType(parts(0).trim.toInt, parts(1).trim.toInt)\n                } catch {\n                  case _: Exception => StringType\n                }\n              } else StringType\n            } else StringType\n        }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.5/org/apache/comet/shims/CometExprShim.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.catalyst.expressions._\nimport org.apache.spark.sql.types.DataTypes\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.expressions.{CometCast, CometEvalMode}\nimport org.apache.comet.serde.{CommonStringExprs, Compatible, ExprOuterClass, Incompatible}\nimport org.apache.comet.serde.ExprOuterClass.{BinaryOutputStyle, Expr}\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProtoInternal, optExprWithInfo, scalarFunctionExprToProto}\n\n/**\n * `CometExprShim` acts as a shim for parsing expressions from different Spark versions.\n */\ntrait CometExprShim extends CommonStringExprs {\n  protected def evalMode(c: Cast): CometEvalMode.Value =\n    CometEvalModeUtil.fromSparkEvalMode(c.evalMode)\n\n  protected def binaryOutputStyle: BinaryOutputStyle = BinaryOutputStyle.HEX_DISCRETE\n\n  def versionSpecificExprToProtoInternal(\n      expr: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n    expr match {\n      case s: StringDecode =>\n        // Right child is the encoding expression.\n        stringDecode(expr, s.charset, s.bin, inputs, binding)\n\n      case expr @ ToPrettyString(child, timeZoneId) =>\n        val castSupported = CometCast.isSupported(\n          child.dataType,\n          DataTypes.StringType,\n          timeZoneId,\n          CometEvalMode.TRY)\n\n        val isCastSupported = castSupported match {\n          case Compatible(_) => true\n          case Incompatible(_) => true\n          case _ => false\n        }\n\n        if (isCastSupported) {\n          exprToProtoInternal(child, inputs, binding) match {\n            case Some(p) =>\n              val toPrettyString = ExprOuterClass.ToPrettyString\n                .newBuilder()\n                .setChild(p)\n                .setTimezone(timeZoneId.getOrElse(\"UTC\"))\n                .setBinaryOutputStyle(binaryOutputStyle)\n                .build()\n              Some(\n                ExprOuterClass.Expr\n                  .newBuilder()\n                  .setToPrettyString(toPrettyString)\n                  .build())\n            case _ =>\n              withInfo(expr, child)\n              None\n          }\n        } else {\n          None\n        }\n\n      case wb: WidthBucket =>\n        val childExprs = wb.children.map(exprToProtoInternal(_, inputs, binding))\n        val optExpr = scalarFunctionExprToProto(\"width_bucket\", childExprs: _*)\n        optExprWithInfo(optExpr, wb, wb.children: _*)\n\n      case _ => None\n    }\n  }\n}\n\nobject CometEvalModeUtil {\n  def fromSparkEvalMode(evalMode: EvalMode.Value): CometEvalMode.Value = evalMode 
match {\n    case EvalMode.LEGACY => CometEvalMode.LEGACY\n    case EvalMode.TRY => CometEvalMode.TRY\n    case EvalMode.ANSI => CometEvalMode.ANSI\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.5/org/apache/comet/shims/ShimCometBroadcastExchangeExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.SparkContext\nimport org.apache.spark.network.util.JavaUtils\nimport org.apache.spark.sql.execution.exchange.BroadcastExchangeLike\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.shims.ShimCometBroadcastExchangeExec.SPARK_MAX_BROADCAST_TABLE_SIZE\n\ntrait ShimCometBroadcastExchangeExec {\n\n  def setJobGroupOrTag(sc: SparkContext, broadcastExchange: BroadcastExchangeLike): Unit = {\n    // Setup a job tag here so later it may get cancelled by tag if necessary.\n    sc.addJobTag(broadcastExchange.jobTag)\n    sc.setInterruptOnCancel(true)\n  }\n\n  def cancelJobGroup(sc: SparkContext, broadcastExchange: BroadcastExchangeLike): Unit = {\n    sc.cancelJobsWithTag(broadcastExchange.jobTag)\n  }\n\n  def maxBroadcastTableBytes(conf: SQLConf): Long = {\n    JavaUtils.byteStringAsBytes(conf.getConfString(SPARK_MAX_BROADCAST_TABLE_SIZE, \"8GB\"))\n  }\n\n}\n\nobject ShimCometBroadcastExchangeExec {\n  val SPARK_MAX_BROADCAST_TABLE_SIZE = \"spark.sql.maxBroadcastTableSize\"\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.5/org/apache/comet/shims/ShimSQLConf.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.internal.LegacyBehaviorPolicy\n\ntrait ShimSQLConf {\n  protected val LEGACY = LegacyBehaviorPolicy.LEGACY\n  protected val CORRECTED = LegacyBehaviorPolicy.CORRECTED\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.5/org/apache/comet/shims/ShimSubqueryBroadcast.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.execution.SubqueryAdaptiveBroadcastExec\n\ntrait ShimSubqueryBroadcast {\n\n  /**\n   * Gets the build key indices from SubqueryAdaptiveBroadcastExec. Spark 3.x has `index: Int`,\n   * Spark 4.x has `indices: Seq[Int]`.\n   */\n  def getSubqueryBroadcastIndices(sab: SubqueryAdaptiveBroadcastExec): Seq[Int] = {\n    Seq(sab.index)\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.5/org/apache/spark/sql/comet/shims/ShimCometScanExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport scala.math.Ordering.Implicits._\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.SPARK_VERSION_SHORT\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression}\nimport org.apache.spark.sql.execution.{FileSourceScanExec, PartitionedFileUtil}\nimport org.apache.spark.sql.execution.datasources._\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetOptions\nimport org.apache.spark.sql.sources.Filter\nimport org.apache.spark.sql.types.StructType\nimport org.apache.spark.util.VersionUtils\n\ntrait ShimCometScanExec {\n  def wrapped: FileSourceScanExec\n\n  lazy val fileConstantMetadataColumns: Seq[AttributeReference] =\n    wrapped.fileConstantMetadataColumns\n\n  def isSparkVersionAtLeast355: Boolean = {\n    VersionUtils.majorMinorPatchVersion(SPARK_VERSION_SHORT) match {\n      case Some((major, minor, patch)) => (major, minor, patch) >= (3, 5, 5)\n      case None =>\n        throw new IllegalArgumentException(s\"Malformed Spark version: $SPARK_VERSION_SHORT\")\n    }\n  }\n\n  protected def newFileScanRDD(\n      fsRelation: HadoopFsRelation,\n      readFunction: PartitionedFile => Iterator[InternalRow],\n      filePartitions: Seq[FilePartition],\n      readSchema: StructType,\n      options: ParquetOptions): FileScanRDD = new FileScanRDD(\n    fsRelation.sparkSession,\n    readFunction,\n    filePartitions,\n    readSchema,\n    fileConstantMetadataColumns,\n    fsRelation.fileFormat.fileConstantMetadataExtractors,\n    options)\n\n  // see SPARK-39634\n  protected def isNeededForSchema(sparkSchema: StructType): Boolean = false\n\n  protected def getPartitionedFile(\n      f: FileStatusWithMetadata,\n      p: PartitionDirectory): PartitionedFile =\n    // Use reflection to invoke the relevant method according to the spark version\n    // See https://github.com/apache/datafusion-comet/issues/1572\n    if (isSparkVersionAtLeast355) {\n      PartitionedFileUtil.getClass\n        .getMethod(\n          \"getPartitionedFile\",\n          classOf[FileStatusWithMetadata],\n          classOf[Path],\n          classOf[InternalRow])\n        .invoke(PartitionedFileUtil, f, f.getPath, p.values)\n        .asInstanceOf[PartitionedFile]\n    } else {\n      PartitionedFileUtil.getClass\n        .getMethod(\"getPartitionedFile\", classOf[FileStatusWithMetadata], classOf[InternalRow])\n        .invoke(PartitionedFileUtil, f, p.values)\n        .asInstanceOf[PartitionedFile]\n    }\n\n  protected def splitFiles(\n      sparkSession: SparkSession,\n      file: FileStatusWithMetadata,\n    
  filePath: Path,\n      isSplitable: Boolean,\n      maxSplitBytes: Long,\n      partitionValues: InternalRow): Seq[PartitionedFile] = {\n    // Use reflection to invoke the relevant method according to the spark version\n    // See https://github.com/apache/datafusion-comet/issues/1572\n    if (isSparkVersionAtLeast355) {\n      PartitionedFileUtil.getClass\n        .getMethod(\n          \"splitFiles\",\n          classOf[SparkSession],\n          classOf[FileStatusWithMetadata],\n          classOf[Path],\n          java.lang.Boolean.TYPE,\n          java.lang.Long.TYPE,\n          classOf[InternalRow])\n        .invoke(\n          PartitionedFileUtil,\n          sparkSession,\n          file,\n          filePath,\n          java.lang.Boolean.valueOf(isSplitable),\n          java.lang.Long.valueOf(maxSplitBytes),\n          partitionValues)\n        .asInstanceOf[Seq[PartitionedFile]]\n    } else {\n      PartitionedFileUtil.getClass\n        .getMethod(\n          \"splitFiles\",\n          classOf[SparkSession],\n          classOf[FileStatusWithMetadata],\n          java.lang.Boolean.TYPE,\n          java.lang.Long.TYPE,\n          classOf[InternalRow])\n        .invoke(\n          PartitionedFileUtil,\n          sparkSession,\n          file,\n          java.lang.Boolean.valueOf(isSplitable),\n          java.lang.Long.valueOf(maxSplitBytes),\n          partitionValues)\n        .asInstanceOf[Seq[PartitionedFile]]\n    }\n  }\n\n  protected def getPushedDownFilters(\n      relation: HadoopFsRelation,\n      dataFilters: Seq[Expression]): Seq[Filter] = {\n    val supportNestedPredicatePushdown = DataSourceUtils.supportNestedPredicatePushdown(relation)\n    dataFilters.flatMap(DataSourceStrategy.translateFilter(_, supportNestedPredicatePushdown))\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.5/org/apache/spark/sql/comet/shims/ShimSparkErrorConverter.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport java.io.FileNotFoundException\n\nimport scala.util.matching.Regex\n\nimport org.apache.spark.{QueryContext, SparkDateTimeException, SparkException}\nimport org.apache.spark.sql.catalyst.trees.SQLQueryContext\nimport org.apache.spark.sql.errors.QueryExecutionErrors\nimport org.apache.spark.sql.types._\nimport org.apache.spark.unsafe.types.UTF8String\n\nobject ShimSparkErrorConverter {\n  val ObjectLocationPattern: Regex = \"Object at location (.+?) not found\".r\n}\n\n/**\n * Spark 3.5 implementation for converting error types to proper Spark exceptions.\n *\n * Handles all error cases using the Spark 3.5 QueryExecutionErrors API. The 4 cases with API\n * differences from Spark 4.0 are handled with Spark 3.x-specific calls.\n */\ntrait ShimSparkErrorConverter {\n\n  private def sqlCtx(context: Array[QueryContext]): SQLQueryContext =\n    context.headOption.map(_.asInstanceOf[SQLQueryContext]).getOrElse(null)\n\n  private def parseFloatLiteral(value: String): Float = {\n    value.toLowerCase match {\n      case \"inf\" | \"+inf\" | \"infinity\" | \"+infinity\" => Float.PositiveInfinity\n      case \"-inf\" | \"-infinity\" => Float.NegativeInfinity\n      case \"nan\" | \"+nan\" | \"-nan\" => Float.NaN\n      case _ => value.toFloat\n    }\n  }\n\n  private def parseDoubleLiteral(value: String): Double = {\n    val normalized = value.toLowerCase.stripSuffix(\"d\")\n    normalized match {\n      case \"inf\" | \"+inf\" | \"infinity\" | \"+infinity\" => Double.PositiveInfinity\n      case \"-inf\" | \"-infinity\" => Double.NegativeInfinity\n      case \"nan\" | \"+nan\" | \"-nan\" => Double.NaN\n      case _ => normalized.toDouble\n    }\n  }\n\n  def convertErrorType(\n      errorType: String,\n      errorClass: String,\n      params: Map[String, Any],\n      context: Array[QueryContext],\n      summary: String): Option[Throwable] = {\n    val _ = (errorClass, summary)\n\n    errorType match {\n\n      case \"DivideByZero\" =>\n        Some(QueryExecutionErrors.divideByZeroError(sqlCtx(context)))\n\n      case \"RemainderByZero\" =>\n        Some(\n          new SparkException(\n            errorClass = \"REMAINDER_BY_ZERO\",\n            messageParameters = params.map { case (k, v) => (k, v.toString) },\n            cause = null))\n\n      case \"IntervalDividedByZero\" =>\n        Some(QueryExecutionErrors.intervalDividedByZeroError(sqlCtx(context)))\n\n      case \"BinaryArithmeticOverflow\" =>\n        // Spark 3.x does not take functionName parameter\n        Some(\n          QueryExecutionErrors.binaryArithmeticCauseOverflowError(\n            params(\"value1\").toString.toShort,\n            
params(\"symbol\").toString,\n            params(\"value2\").toString.toShort))\n\n      case \"ArithmeticOverflow\" =>\n        val fromType = params(\"fromType\").toString\n        Some(\n          QueryExecutionErrors\n            .arithmeticOverflowError(fromType + \" overflow\", \"\", sqlCtx(context)))\n\n      case \"IntegralDivideOverflow\" =>\n        Some(QueryExecutionErrors.overflowInIntegralDivideError(sqlCtx(context)))\n\n      case \"DecimalSumOverflow\" =>\n        // Spark 3.x takes SQLQueryContext, not QueryContext\n        Some(QueryExecutionErrors.overflowInSumOfDecimalError(sqlCtx(context)))\n\n      case \"NumericValueOutOfRange\" =>\n        val decimal = Decimal(params(\"value\").toString)\n        Some(\n          QueryExecutionErrors.cannotChangeDecimalPrecisionError(\n            decimal,\n            params(\"precision\").toString.toInt,\n            params(\"scale\").toString.toInt,\n            sqlCtx(context)))\n\n      case \"DatetimeOverflow\" =>\n        Some(\n          new SparkException(\n            errorClass = \"DATETIME_OVERFLOW\",\n            messageParameters = params.map { case (k, v) => (k, v.toString) },\n            cause = null))\n\n      case \"InvalidArrayIndex\" =>\n        Some(\n          QueryExecutionErrors.invalidArrayIndexError(\n            params(\"indexValue\").toString.toInt,\n            params(\"arraySize\").toString.toInt,\n            sqlCtx(context)))\n\n      case \"InvalidElementAtIndex\" =>\n        Some(\n          QueryExecutionErrors.invalidElementAtIndexError(\n            params(\"indexValue\").toString.toInt,\n            params(\"arraySize\").toString.toInt,\n            sqlCtx(context)))\n\n      case \"InvalidIndexOfZero\" =>\n        Some(QueryExecutionErrors.invalidIndexOfZeroError(sqlCtx(context)))\n\n      case \"InvalidBitmapPosition\" =>\n        Some(\n          QueryExecutionErrors.invalidBitmapPositionError(\n            params(\"bitPosition\").toString.toLong,\n            params(\"bitmapNumBytes\").toString.toLong))\n\n      case \"DuplicatedMapKey\" =>\n        Some(QueryExecutionErrors.duplicateMapKeyFoundError(params(\"key\")))\n\n      case \"NullMapKey\" =>\n        Some(QueryExecutionErrors.nullAsMapKeyNotAllowedError())\n\n      case \"MapKeyValueDiffSizes\" =>\n        Some(QueryExecutionErrors.mapDataKeyArrayLengthDiffersFromValueArrayLengthError())\n\n      case \"ExceedMapSizeLimit\" =>\n        Some(QueryExecutionErrors.exceedMapSizeLimitError(params(\"size\").toString.toInt))\n\n      case \"CollectionSizeLimitExceeded\" =>\n        Some(\n          QueryExecutionErrors.createArrayWithElementsExceedLimitError(\n            \"array\",\n            params(\"numElements\").toString.toLong))\n\n      case \"NotNullAssertViolation\" =>\n        Some(\n          QueryExecutionErrors.foundNullValueForNotNullableFieldError(\n            params(\"fieldName\").toString))\n\n      case \"ValueIsNull\" =>\n        Some(\n          QueryExecutionErrors.fieldCannotBeNullError(\n            params.getOrElse(\"rowIndex\", 0).toString.toInt,\n            params(\"fieldName\").toString))\n\n      case \"CannotParseTimestamp\" =>\n        Some(\n          QueryExecutionErrors.ansiDateTimeParseError(new Exception(params(\"message\").toString)))\n\n      case \"InvalidFractionOfSecond\" =>\n        Some(QueryExecutionErrors.invalidFractionOfSecondError())\n\n      case \"CastInvalidValue\" =>\n        val str = UTF8String.fromString(params(\"value\").toString)\n        val targetType = 
getDataType(params(\"toType\").toString)\n        Some(\n          QueryExecutionErrors\n            .invalidInputInCastToNumberError(targetType, str, sqlCtx(context)))\n\n      case \"InvalidInputInCastToDatetime\" =>\n        val expression =\n          s\"'${params(\"value\").toString.replace(\"\\\\\", \"\\\\\\\\\").replace(\"'\", \"\\\\'\")}'\"\n        val sourceType = s\"\"\"\"${params(\"fromType\").toString}\"\"\"\"\n        val targetType = s\"\"\"\"${params(\"toType\").toString}\"\"\"\"\n        Some(\n          new SparkDateTimeException(\n            errorClass = \"CAST_INVALID_INPUT\",\n            messageParameters = Map(\n              \"expression\" -> expression,\n              \"sourceType\" -> sourceType,\n              \"targetType\" -> targetType,\n              \"ansiConfig\" -> \"\\\"spark.sql.ansi.enabled\\\"\"),\n            context = context,\n            summary = summary))\n\n      case \"CastOverFlow\" =>\n        val fromType = getDataType(params(\"fromType\").toString)\n        val toType = getDataType(params(\"toType\").toString)\n        val valueStr = params(\"value\").toString\n\n        val typedValue: Any = fromType match {\n          case _: DecimalType =>\n            val cleanStr = if (valueStr.endsWith(\"BD\")) valueStr.dropRight(2) else valueStr\n            Decimal(cleanStr)\n          case ByteType =>\n            val cleanStr = if (valueStr.endsWith(\"T\")) valueStr.dropRight(1) else valueStr\n            cleanStr.toByte\n          case ShortType =>\n            val cleanStr = if (valueStr.endsWith(\"S\")) valueStr.dropRight(1) else valueStr\n            cleanStr.toShort\n          case IntegerType => valueStr.toInt\n          case LongType =>\n            val cleanStr = if (valueStr.endsWith(\"L\")) valueStr.dropRight(1) else valueStr\n            cleanStr.toLong\n          case FloatType => parseFloatLiteral(valueStr)\n          case DoubleType => parseDoubleLiteral(valueStr)\n          case StringType => UTF8String.fromString(valueStr)\n          case _ => valueStr\n        }\n\n        Some(QueryExecutionErrors.castingCauseOverflowError(typedValue, fromType, toType))\n\n      case \"CannotParseDecimal\" =>\n        Some(QueryExecutionErrors.cannotParseDecimalError())\n\n      case \"InvalidUtf8String\" =>\n        // invalidUTF8StringError does not exist in Spark 3.x; use generic fallback\n        Some(\n          new SparkException(\n            errorClass = \"INVALID_UTF8_STRING\",\n            messageParameters = params.map { case (k, v) => (k, v.toString) },\n            cause = null))\n\n      case \"UnexpectedPositiveValue\" =>\n        Some(\n          QueryExecutionErrors.unexpectedValueForStartInFunctionError(\n            params(\"parameterName\").toString))\n\n      case \"UnexpectedNegativeValue\" =>\n        Some(\n          QueryExecutionErrors.unexpectedValueForLengthInFunctionError(\n            params(\"parameterName\").toString))\n\n      case \"InvalidRegexGroupIndex\" =>\n        Some(\n          QueryExecutionErrors.invalidRegexGroupIndexError(\n            params(\"functionName\").toString,\n            params(\"groupCount\").toString.toInt,\n            params(\"groupIndex\").toString.toInt))\n\n      case \"DatatypeCannotOrder\" =>\n        Some(\n          QueryExecutionErrors.orderedOperationUnsupportedByDataTypeError(\n            params(\"dataType\").toString))\n\n      case \"ScalarSubqueryTooManyRows\" =>\n        // multipleRowScalarSubqueryError was renamed to multipleRowSubqueryError in Spark 3.x\n        
Some(QueryExecutionErrors.multipleRowSubqueryError(sqlCtx(context)))\n\n      case \"IntervalArithmeticOverflowWithSuggestion\" =>\n        // Spark 3.x uses a single intervalArithmeticOverflowError method\n        Some(\n          QueryExecutionErrors.intervalArithmeticOverflowError(\n            \"Interval arithmetic overflow\",\n            params.get(\"functionName\").map(_.toString).getOrElse(\"\"),\n            sqlCtx(context)))\n\n      case \"IntervalArithmeticOverflowWithoutSuggestion\" =>\n        Some(\n          QueryExecutionErrors\n            .intervalArithmeticOverflowError(\"Interval arithmetic overflow\", \"\", sqlCtx(context)))\n\n      case \"DuplicateFieldCaseInsensitive\" =>\n        Some(\n          QueryExecutionErrors.foundDuplicateFieldInCaseInsensitiveModeError(\n            params(\"requiredFieldName\").toString,\n            params(\"matchedOrcFields\").toString))\n\n      case \"FileNotFound\" =>\n        val msg = params(\"message\").toString\n        // Extract file path from native error message and format like Hadoop's\n        // FileNotFoundException: \"File <path> does not exist\"\n        val path = ShimSparkErrorConverter.ObjectLocationPattern\n          .findFirstMatchIn(msg)\n          .map(_.group(1))\n          .getOrElse(msg)\n        Some(\n          QueryExecutionErrors.readCurrentFileNotFoundError(\n            new FileNotFoundException(s\"File $path does not exist\")))\n\n      case _ =>\n        None\n    }\n  }\n\n  private def getDataType(typeName: String): DataType = {\n    typeName.toUpperCase match {\n      case \"BYTE\" | \"TINYINT\" => ByteType\n      case \"SHORT\" | \"SMALLINT\" => ShortType\n      case \"INT\" | \"INTEGER\" => IntegerType\n      case \"LONG\" | \"BIGINT\" => LongType\n      case \"FLOAT\" | \"REAL\" => FloatType\n      case \"DOUBLE\" => DoubleType\n      case \"DECIMAL\" => DecimalType.SYSTEM_DEFAULT\n      case \"STRING\" | \"VARCHAR\" => StringType\n      case \"BINARY\" => BinaryType\n      case \"BOOLEAN\" => BooleanType\n      case \"DATE\" => DateType\n      case \"TIMESTAMP\" => TimestampType\n      case _ =>\n        try {\n          DataType.fromDDL(typeName)\n        } catch {\n          case _: Exception =>\n            // fromDDL rejects types that are syntactically invalid in SQL DDL, such as\n            // DECIMAL(p,s) with a negative scale (valid when allowNegativeScaleOfDecimal=true).\n            // Parse those manually rather than silently falling back to StringType.\n            if (typeName.toUpperCase.startsWith(\"DECIMAL(\") && typeName.endsWith(\")\")) {\n              val inner = typeName.substring(\"DECIMAL(\".length, typeName.length - 1)\n              val parts = inner.split(\",\")\n              if (parts.length == 2) {\n                try {\n                  DataTypes.createDecimalType(parts(0).trim.toInt, parts(1).trim.toInt)\n                } catch {\n                  case _: Exception => StringType\n                }\n              } else StringType\n            } else StringType\n        }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.x/org/apache/comet/shims/ShimCometShuffleExchangeExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.ShuffleDependency\nimport org.apache.spark.sql.catalyst.expressions.Attribute\nimport org.apache.spark.sql.comet.execution.shuffle.{CometShuffleExchangeExec, ShuffleType}\nimport org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\nimport org.apache.spark.sql.types.{StructField, StructType}\n\ntrait ShimCometShuffleExchangeExec {\n  // TODO: remove after dropping Spark 3.4 support\n  def apply(s: ShuffleExchangeExec, shuffleType: ShuffleType): CometShuffleExchangeExec = {\n    val advisoryPartitionSize = s.getClass.getDeclaredMethods\n      .filter(_.getName == \"advisoryPartitionSize\")\n      .flatMap(_.invoke(s).asInstanceOf[Option[Long]])\n      .headOption\n    CometShuffleExchangeExec(\n      s.outputPartitioning,\n      s.child,\n      s,\n      s.shuffleOrigin,\n      shuffleType,\n      advisoryPartitionSize)\n  }\n\n  // TODO: remove after dropping Spark 3.x support\n  protected def fromAttributes(attributes: Seq[Attribute]): StructType =\n    StructType(attributes.map(a => StructField(a.name, a.dataType, a.nullable, a.metadata)))\n\n  // TODO: remove after dropping Spark 3.x support\n  protected def getShuffleId(shuffleDependency: ShuffleDependency[Int, _, _]): Int = 0\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.x/org/apache/comet/shims/ShimCometSparkSessionExtensions.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.execution.{QueryExecution, SparkPlan}\n\ntrait ShimCometSparkSessionExtensions {\n\n  /**\n   * TODO: delete after dropping Spark 3.x support and directly call\n   * SQLConf.EXTENDED_EXPLAIN_PROVIDERS.key\n   */\n  protected val EXTENDED_EXPLAIN_PROVIDERS_KEY = \"spark.sql.extendedExplainProviders\"\n\n  // Extended info is available only since Spark 4.0.0\n  // (https://issues.apache.org/jira/browse/SPARK-47289)\n  def supportsExtendedExplainInfo(qe: QueryExecution): Boolean = {\n    try {\n      // Look for QueryExecution.extendedExplainInfo(scala.Function1[String, Unit], SparkPlan)\n      qe.getClass.getDeclaredMethod(\n        \"extendedExplainInfo\",\n        classOf[String => Unit],\n        classOf[SparkPlan])\n    } catch {\n      case _: NoSuchMethodException | _: SecurityException => return false\n    }\n    true\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.x/org/apache/spark/comet/shims/ShimCometDriverPlugin.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.comet.shims\n\nimport org.apache.spark.SparkConf\n\ntrait ShimCometDriverPlugin {\n  // `org.apache.spark.internal.config.EXECUTOR_MIN_MEMORY_OVERHEAD` was added since Spark 4.0.0\n  private val EXECUTOR_MIN_MEMORY_OVERHEAD = \"spark.executor.minMemoryOverhead\"\n  private val EXECUTOR_MIN_MEMORY_OVERHEAD_DEFAULT = 384L\n\n  def getMemoryOverheadMinMib(sc: SparkConf): Long =\n    sc.getLong(EXECUTOR_MIN_MEMORY_OVERHEAD, EXECUTOR_MIN_MEMORY_OVERHEAD_DEFAULT)\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.x/org/apache/spark/sql/ExtendedExplainGenerator.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport org.apache.spark.sql.execution.SparkPlan\n\n/**\n * A trait for a session extension to implement that provides addition explain plan information.\n * We copy this from Spark 4.0 since this trait is not available in Spark 3.x. We can remove this\n * after dropping Spark 3.x support.\n */\n\ntrait ExtendedExplainGenerator {\n  def title: String\n\n  def generateExtendedInfo(plan: SparkPlan): String\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.x/org/apache/spark/sql/comet/shims/ShimCometShuffleWriteProcessor.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport org.apache.spark.{Partition, ShuffleDependency, TaskContext}\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.scheduler.MapStatus\nimport org.apache.spark.shuffle.ShuffleWriteProcessor\n\ntrait ShimCometShuffleWriteProcessor extends ShuffleWriteProcessor {\n  override def write(\n      rdd: RDD[_],\n      dep: ShuffleDependency[_, _, _],\n      mapId: Long,\n      context: TaskContext,\n      partition: Partition): MapStatus = {\n    val rawIter = rdd.iterator(partition, context)\n    write(rawIter, dep, mapId, partition.index, context)\n  }\n\n  def write(\n      inputs: Iterator[_],\n      dep: ShuffleDependency[_, _, _],\n      mapId: Long,\n      mapIndex: Int,\n      context: TaskContext): MapStatus\n}\n"
  },
  {
    "path": "spark/src/main/spark-3.x/org/apache/spark/sql/comet/shims/ShimStreamSourceAwareSparkPlan.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\ntrait ShimStreamSourceAwareSparkPlan {}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/comet/shims/CometExprShim.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.catalyst.expressions._\nimport org.apache.spark.sql.catalyst.expressions.objects.StaticInvoke\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.internal.types.StringTypeWithCollation\nimport org.apache.spark.sql.types.{BinaryType, BooleanType, DataTypes, StringType}\n\nimport org.apache.comet.CometSparkSessionExtensions.withInfo\nimport org.apache.comet.expressions.{CometCast, CometEvalMode}\nimport org.apache.comet.serde.{CommonStringExprs, Compatible, ExprOuterClass, Incompatible}\nimport org.apache.comet.serde.ExprOuterClass.{BinaryOutputStyle, Expr}\nimport org.apache.comet.serde.QueryPlanSerde.{exprToProtoInternal, optExprWithInfo, scalarFunctionExprToProto}\n\n/**\n * `CometExprShim` acts as a shim for parsing expressions from different Spark versions.\n */\ntrait CometExprShim extends CommonStringExprs {\n  protected def evalMode(c: Cast): CometEvalMode.Value =\n    CometEvalModeUtil.fromSparkEvalMode(c.evalMode)\n\n  protected def binaryOutputStyle: BinaryOutputStyle = {\n    SQLConf.get\n      .getConf(SQLConf.BINARY_OUTPUT_STYLE)\n      .map(SQLConf.BinaryOutputStyle.withName) match {\n      case Some(SQLConf.BinaryOutputStyle.UTF8) => BinaryOutputStyle.UTF8\n      case Some(SQLConf.BinaryOutputStyle.BASIC) => BinaryOutputStyle.BASIC\n      case Some(SQLConf.BinaryOutputStyle.BASE64) => BinaryOutputStyle.BASE64\n      case Some(SQLConf.BinaryOutputStyle.HEX) => BinaryOutputStyle.HEX\n      case _ => BinaryOutputStyle.HEX_DISCRETE\n    }\n  }\n\n  def versionSpecificExprToProtoInternal(\n      expr: Expression,\n      inputs: Seq[Attribute],\n      binding: Boolean): Option[Expr] = {\n    expr match {\n      case s: StaticInvoke\n          if s.staticObject == classOf[StringDecode] &&\n            s.dataType.isInstanceOf[StringType] &&\n            s.functionName == \"decode\" &&\n            s.arguments.size == 4 &&\n            s.inputTypes == Seq(\n              BinaryType,\n              StringTypeWithCollation(supportsTrimCollation = true),\n              BooleanType,\n              BooleanType) =>\n        val Seq(bin, charset, _, _) = s.arguments\n        stringDecode(expr, charset, bin, inputs, binding)\n\n      case expr @ ToPrettyString(child, timeZoneId) =>\n        val castSupported = CometCast.isSupported(\n          child.dataType,\n          DataTypes.StringType,\n          timeZoneId,\n          CometEvalMode.TRY)\n\n        val isCastSupported = castSupported match {\n          case Compatible(_) => true\n          case Incompatible(_) => true\n          case _ => false\n        }\n\n        if (isCastSupported) {\n          
exprToProtoInternal(child, inputs, binding) match {\n            case Some(p) =>\n              val toPrettyString = ExprOuterClass.ToPrettyString\n                .newBuilder()\n                .setChild(p)\n                .setTimezone(timeZoneId.getOrElse(\"UTC\"))\n                .setBinaryOutputStyle(binaryOutputStyle)\n                .build()\n              Some(\n                ExprOuterClass.Expr\n                  .newBuilder()\n                  .setToPrettyString(toPrettyString)\n                  .build())\n            case _ =>\n              withInfo(expr, child)\n              None\n          }\n        } else {\n          None\n        }\n\n      case wb: WidthBucket =>\n        val childExprs = wb.children.map(exprToProtoInternal(_, inputs, binding))\n        val optExpr = scalarFunctionExprToProto(\"width_bucket\", childExprs: _*)\n        optExprWithInfo(optExpr, wb, wb.children: _*)\n\n      // KnownNotContainsNull is a TaggingExpression added in Spark 4.0 that only\n      // changes schema metadata (containsNull = false). It has no runtime effect,\n      // so we pass through to the child expression.\n      case k: KnownNotContainsNull =>\n        exprToProtoInternal(k.child, inputs, binding)\n\n      case _ => None\n    }\n  }\n}\n\nobject CometEvalModeUtil {\n  def fromSparkEvalMode(evalMode: EvalMode.Value): CometEvalMode.Value = evalMode match {\n    case EvalMode.LEGACY => CometEvalMode.LEGACY\n    case EvalMode.TRY => CometEvalMode.TRY\n    case EvalMode.ANSI => CometEvalMode.ANSI\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/comet/shims/ShimCometBroadcastExchangeExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.SparkContext\nimport org.apache.spark.network.util.JavaUtils\nimport org.apache.spark.sql.execution.exchange.BroadcastExchangeLike\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.shims.ShimCometBroadcastExchangeExec.SPARK_MAX_BROADCAST_TABLE_SIZE\n\ntrait ShimCometBroadcastExchangeExec {\n\n  def setJobGroupOrTag(sc: SparkContext, broadcastExchange: BroadcastExchangeLike): Unit = {\n    // Setup a job tag here so later it may get cancelled by tag if necessary.\n    sc.addJobTag(broadcastExchange.jobTag)\n    sc.setInterruptOnCancel(true)\n  }\n\n  def cancelJobGroup(sc: SparkContext, broadcastExchange: BroadcastExchangeLike): Unit = {\n    sc.cancelJobsWithTag(broadcastExchange.jobTag)\n  }\n\n  def maxBroadcastTableBytes(conf: SQLConf): Long = {\n    JavaUtils.byteStringAsBytes(conf.getConfString(SPARK_MAX_BROADCAST_TABLE_SIZE, \"8GB\"))\n  }\n\n}\n\nobject ShimCometBroadcastExchangeExec {\n  val SPARK_MAX_BROADCAST_TABLE_SIZE = \"spark.sql.maxBroadcastTableSize\"\n}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/comet/shims/ShimCometShuffleExchangeExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.ShuffleDependency\nimport org.apache.spark.sql.catalyst.expressions.Attribute\nimport org.apache.spark.sql.catalyst.types.DataTypeUtils\nimport org.apache.spark.sql.comet.execution.shuffle.{CometShuffleExchangeExec, ShuffleType}\nimport org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\nimport org.apache.spark.sql.types.StructType\n\ntrait ShimCometShuffleExchangeExec {\n  def apply(s: ShuffleExchangeExec, shuffleType: ShuffleType): CometShuffleExchangeExec = {\n    CometShuffleExchangeExec(\n      s.outputPartitioning,\n      s.child,\n      s,\n      s.shuffleOrigin,\n      shuffleType,\n      s.advisoryPartitionSize)\n  }\n\n  protected def fromAttributes(attributes: Seq[Attribute]): StructType =\n    DataTypeUtils.fromAttributes(attributes)\n\n  protected def getShuffleId(shuffleDependency: ShuffleDependency[Int, _, _]): Int =\n    shuffleDependency.shuffleId\n}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/comet/shims/ShimCometSparkSessionExtensions.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.connector.expressions.aggregate.Aggregation\nimport org.apache.spark.sql.execution.QueryExecution\nimport org.apache.spark.sql.execution.datasources.v2.parquet.ParquetScan\nimport org.apache.spark.sql.internal.SQLConf\n\ntrait ShimCometSparkSessionExtensions {\n  protected def getPushedAggregate(scan: ParquetScan): Option[Aggregation] = scan.pushedAggregate\n\n  protected def supportsExtendedExplainInfo(qe: QueryExecution): Boolean = true\n\n  protected val EXTENDED_EXPLAIN_PROVIDERS_KEY = SQLConf.EXTENDED_EXPLAIN_PROVIDERS.key\n}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/comet/shims/ShimSQLConf.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.internal.LegacyBehaviorPolicy\n\ntrait ShimSQLConf {\n  protected val LEGACY = LegacyBehaviorPolicy.LEGACY\n  protected val CORRECTED = LegacyBehaviorPolicy.CORRECTED\n}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/comet/shims/ShimSubqueryBroadcast.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.execution.SubqueryAdaptiveBroadcastExec\n\ntrait ShimSubqueryBroadcast {\n\n  /**\n   * Gets the build key indices from SubqueryAdaptiveBroadcastExec. Spark 3.x has `index: Int`,\n   * Spark 4.x has `indices: Seq[Int]`.\n   */\n  def getSubqueryBroadcastIndices(sab: SubqueryAdaptiveBroadcastExec): Seq[Int] = {\n    sab.indices\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/spark/comet/shims/ShimCometDriverPlugin.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.comet.shims\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.internal.config.EXECUTOR_MIN_MEMORY_OVERHEAD\n\ntrait ShimCometDriverPlugin {\n  protected def getMemoryOverheadMinMib(sparkConf: SparkConf): Long =\n    sparkConf.get(EXECUTOR_MIN_MEMORY_OVERHEAD)\n}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/spark/sql/comet/shims/ShimCometScanExec.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.{AttributeReference, Expression, FileSourceConstantMetadataAttribute, Literal}\nimport org.apache.spark.sql.execution.{FileSourceScanExec, PartitionedFileUtil, ScalarSubquery}\nimport org.apache.spark.sql.execution.datasources._\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetOptions\nimport org.apache.spark.sql.sources.Filter\nimport org.apache.spark.sql.types.StructType\n\ntrait ShimCometScanExec extends ShimStreamSourceAwareSparkPlan {\n  def wrapped: FileSourceScanExec\n\n  lazy val fileConstantMetadataColumns: Seq[AttributeReference] =\n    wrapped.fileConstantMetadataColumns\n\n  protected def newFileScanRDD(\n      fsRelation: HadoopFsRelation,\n      readFunction: PartitionedFile => Iterator[InternalRow],\n      filePartitions: Seq[FilePartition],\n      readSchema: StructType,\n      options: ParquetOptions): FileScanRDD = {\n    new FileScanRDD(\n      fsRelation.sparkSession,\n      readFunction,\n      filePartitions,\n      readSchema,\n      fileConstantMetadataColumns,\n      fsRelation.fileFormat.fileConstantMetadataExtractors,\n      options)\n  }\n\n  // see SPARK-39634\n  protected def isNeededForSchema(sparkSchema: StructType): Boolean = false\n\n  protected def getPartitionedFile(\n      f: FileStatusWithMetadata,\n      p: PartitionDirectory): PartitionedFile =\n    PartitionedFileUtil.getPartitionedFile(f, f.getPath, p.values, 0, f.getLen)\n\n  protected def splitFiles(\n      sparkSession: SparkSession,\n      file: FileStatusWithMetadata,\n      filePath: Path,\n      isSplitable: Boolean,\n      maxSplitBytes: Long,\n      partitionValues: InternalRow): Seq[PartitionedFile] =\n    PartitionedFileUtil.splitFiles(file, filePath, isSplitable, maxSplitBytes, partitionValues)\n\n  protected def getPushedDownFilters(\n      relation: HadoopFsRelation,\n      dataFilters: Seq[Expression]): Seq[Filter] = {\n    translateToV1Filters(relation, dataFilters, _.toLiteral)\n  }\n\n  // From Spark FileSourceScanLike\n  private def translateToV1Filters(\n      relation: HadoopFsRelation,\n      dataFilters: Seq[Expression],\n      scalarSubqueryToLiteral: ScalarSubquery => Literal): Seq[Filter] = {\n    val scalarSubqueryReplaced = dataFilters.map(_.transform {\n      // Replace scalar subquery to literal so that `DataSourceStrategy.translateFilter` can\n      // support translating it.\n      case scalarSubquery: ScalarSubquery => scalarSubqueryToLiteral(scalarSubquery)\n    })\n\n    val 
supportNestedPredicatePushdown = DataSourceUtils.supportNestedPredicatePushdown(relation)\n    // `dataFilters` should not include any constant metadata col filters\n    // because the metadata struct has been flattened in FileSourceStrategy\n    // and thus metadata col filters are invalid to be pushed down. Metadata that is generated\n    // during the scan can be used for filters.\n    scalarSubqueryReplaced\n      .filterNot(_.references.exists {\n        case FileSourceConstantMetadataAttribute(_) => true\n        case _ => false\n      })\n      .flatMap(DataSourceStrategy.translateFilter(_, supportNestedPredicatePushdown))\n  }\n\n}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/spark/sql/comet/shims/ShimCometShuffleWriteProcessor.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport org.apache.spark.shuffle.ShuffleWriteProcessor\n\ntrait ShimCometShuffleWriteProcessor extends ShuffleWriteProcessor {}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/spark/sql/comet/shims/ShimSparkErrorConverter.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport scala.util.matching.Regex\n\nimport org.apache.spark.QueryContext\nimport org.apache.spark.SparkException\nimport org.apache.spark.SparkFileNotFoundException\nimport org.apache.spark.sql.errors.QueryExecutionErrors\nimport org.apache.spark.sql.types._\nimport org.apache.spark.unsafe.types.UTF8String\n\nobject ShimSparkErrorConverter {\n  val ObjectLocationPattern: Regex = \"Object at location (.+?) not found\".r\n}\n\n/**\n * Spark 4.0-specific implementation for converting error types to proper Spark exceptions.\n */\ntrait ShimSparkErrorConverter {\n\n  private def parseFloatLiteral(value: String): Float = {\n    value.toLowerCase match {\n      case \"inf\" | \"+inf\" | \"infinity\" | \"+infinity\" => Float.PositiveInfinity\n      case \"-inf\" | \"-infinity\" => Float.NegativeInfinity\n      case \"nan\" | \"+nan\" | \"-nan\" => Float.NaN\n      case _ => value.toFloat\n    }\n  }\n\n  private def parseDoubleLiteral(value: String): Double = {\n    val normalized = value.toLowerCase.stripSuffix(\"d\")\n    normalized match {\n      case \"inf\" | \"+inf\" | \"infinity\" | \"+infinity\" => Double.PositiveInfinity\n      case \"-inf\" | \"-infinity\" => Double.NegativeInfinity\n      case \"nan\" | \"+nan\" | \"-nan\" => Double.NaN\n      case _ => normalized.toDouble\n    }\n  }\n\n  /**\n   * Convert error type string and parameters to appropriate Spark exception. 
Version-specific\n   * implementations call the correct QueryExecutionErrors.* methods.\n   *\n   * @param errorType\n   *   The error type from JSON (e.g., \"DivideByZero\")\n   * @param _errorClass\n   *   The Spark error class (e.g., \"DIVIDE_BY_ZERO\")\n   * @param params\n   *   Error parameters from JSON\n   * @param context\n   *   QueryContext array with SQL text and position information\n   * @param _summary\n   *   Formatted summary string showing error location\n   * @return\n   *   Some(exception) with the most specific Spark exception available, or None if the error\n   *   type is unknown\n   */\n  def convertErrorType(\n      errorType: String,\n      _errorClass: String,\n      params: Map[String, Any],\n      context: Array[QueryContext],\n      _summary: String): Option[Throwable] = {\n\n    errorType match {\n\n      case \"DivideByZero\" =>\n        Some(QueryExecutionErrors.divideByZeroError(context.headOption.orNull))\n\n      case \"RemainderByZero\" =>\n        // Spark 4.0 removed remainderByZeroError, so construct the SparkException directly\n        Some(\n          new SparkException(\n            errorClass = \"REMAINDER_BY_ZERO\",\n            messageParameters = params.map { case (k, v) => (k, v.toString) },\n            cause = null))\n\n      case \"IntervalDividedByZero\" =>\n        Some(QueryExecutionErrors.intervalDividedByZeroError(context.headOption.orNull))\n\n      case \"BinaryArithmeticOverflow\" =>\n        Some(\n          QueryExecutionErrors.binaryArithmeticCauseOverflowError(\n            params(\"value1\").toString.toShort,\n            params(\"symbol\").toString,\n            params(\"value2\").toString.toShort,\n            params(\"functionName\").toString))\n\n      case \"ArithmeticOverflow\" =>\n        val fromType = params(\"fromType\").toString\n        Some(\n          QueryExecutionErrors\n            .arithmeticOverflowError(fromType + \" overflow\", \"\", context.headOption.orNull))\n\n      case \"IntegralDivideOverflow\" =>\n        Some(QueryExecutionErrors.overflowInIntegralDivideError(context.headOption.orNull))\n\n      case \"DecimalSumOverflow\" =>\n        Some(QueryExecutionErrors.overflowInSumOfDecimalError(context.headOption.orNull, \"\"))\n\n      case \"NumericValueOutOfRange\" =>\n        val decimal = Decimal(params(\"value\").toString)\n        Some(\n          QueryExecutionErrors.cannotChangeDecimalPrecisionError(\n            decimal,\n            params(\"precision\").toString.toInt,\n            params(\"scale\").toString.toInt,\n            context.headOption.orNull))\n\n      case \"DatetimeOverflow\" =>\n        // Spark 4.0 doesn't have datetimeOverflowError\n        Some(\n          new SparkException(\n            errorClass = \"DATETIME_OVERFLOW\",\n            messageParameters = params.map { case (k, v) => (k, v.toString) },\n            cause = null))\n\n      case \"InvalidArrayIndex\" =>\n        Some(\n          QueryExecutionErrors.invalidArrayIndexError(\n            params(\"indexValue\").toString.toInt,\n            params(\"arraySize\").toString.toInt,\n            context.headOption.orNull))\n\n      case \"InvalidElementAtIndex\" =>\n        Some(\n          QueryExecutionErrors.invalidElementAtIndexError(\n            params(\"indexValue\").toString.toInt,\n            params(\"arraySize\").toString.toInt,\n            context.headOption.orNull))\n\n      case \"InvalidIndexOfZero\" =>\n        Some(QueryExecutionErrors.invalidIndexOfZeroError(context.headOption.orNull))\n\n      case \"InvalidBitmapPosition\" =>\n    
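    // Maps the native error's bit position and bitmap size (in bytes) onto\n        // Spark's invalidBitmapPositionError.\n    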
    Some(\n          QueryExecutionErrors.invalidBitmapPositionError(\n            params(\"bitPosition\").toString.toLong,\n            params(\"bitmapNumBytes\").toString.toLong))\n\n      case \"DuplicatedMapKey\" =>\n        Some(QueryExecutionErrors.duplicateMapKeyFoundError(params(\"key\")))\n\n      case \"NullMapKey\" =>\n        Some(QueryExecutionErrors.nullAsMapKeyNotAllowedError())\n\n      case \"MapKeyValueDiffSizes\" =>\n        Some(QueryExecutionErrors.mapDataKeyArrayLengthDiffersFromValueArrayLengthError())\n\n      case \"ExceedMapSizeLimit\" =>\n        Some(QueryExecutionErrors.exceedMapSizeLimitError(params(\"size\").toString.toInt))\n\n      case \"CollectionSizeLimitExceeded\" =>\n        Some(\n          QueryExecutionErrors.createArrayWithElementsExceedLimitError(\n            \"array\",\n            params(\"numElements\").toString.toLong))\n\n      case \"NotNullAssertViolation\" =>\n        Some(\n          QueryExecutionErrors.foundNullValueForNotNullableFieldError(\n            params(\"fieldName\").toString))\n\n      case \"ValueIsNull\" =>\n        Some(\n          QueryExecutionErrors.fieldCannotBeNullError(\n            params.getOrElse(\"rowIndex\", 0).toString.toInt,\n            params(\"fieldName\").toString))\n\n      case \"CannotParseTimestamp\" =>\n        Some(\n          QueryExecutionErrors.ansiDateTimeParseError(\n            new Exception(params(\"message\").toString),\n            params(\"suggestedFunc\").toString))\n\n      case \"InvalidFractionOfSecond\" =>\n        Some(QueryExecutionErrors.invalidFractionOfSecondError(params(\"value\").toString.toDouble))\n\n      case \"CastInvalidValue\" =>\n        val str = UTF8String.fromString(params(\"value\").toString)\n        val targetType = getDataType(params(\"toType\").toString)\n        Some(\n          QueryExecutionErrors\n            .invalidInputInCastToNumberError(targetType, str, context.headOption.orNull))\n\n      case \"InvalidInputInCastToDatetime\" =>\n        val str = UTF8String.fromString(params(\"value\").toString)\n        val targetType = getDataType(params(\"toType\").toString)\n        Some(\n          QueryExecutionErrors\n            .invalidInputInCastToDatetimeError(str, targetType, context.headOption.orNull))\n\n      case \"CastOverFlow\" =>\n        val fromType = getDataType(params(\"fromType\").toString)\n        val toType = getDataType(params(\"toType\").toString)\n        val valueStr = params(\"value\").toString\n\n        // Convert string value to appropriate type for toSQLValue\n        val typedValue: Any = fromType match {\n          case _: DecimalType =>\n            // Parse decimal string (may have \"BD\" suffix from BigDecimal.toString)\n            val cleanStr = if (valueStr.endsWith(\"BD\")) valueStr.dropRight(2) else valueStr\n            Decimal(cleanStr)\n          case ByteType =>\n            // Strip \"T\" suffix for TINYINT literals\n            val cleanStr = if (valueStr.endsWith(\"T\")) valueStr.dropRight(1) else valueStr\n            cleanStr.toByte\n          case ShortType =>\n            // Strip \"S\" suffix for SMALLINT literals\n            val cleanStr = if (valueStr.endsWith(\"S\")) valueStr.dropRight(1) else valueStr\n            cleanStr.toShort\n          case IntegerType => valueStr.toInt\n          case LongType =>\n            // Strip \"L\" suffix for BIGINT literals\n            val cleanStr = if (valueStr.endsWith(\"L\")) valueStr.dropRight(1) else valueStr\n            cleanStr.toLong\n          case FloatType => 
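\n            // parseFloatLiteral (defined above) also accepts the \"inf\"/\"infinity\"/\"nan\"\n            // spellings and their signed variants, not just numeric literals.\n            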
parseFloatLiteral(valueStr)\n          case DoubleType => parseDoubleLiteral(valueStr)\n          case StringType => UTF8String.fromString(valueStr)\n          case _ => valueStr // Fallback to string\n        }\n\n        Some(QueryExecutionErrors.castingCauseOverflowError(typedValue, fromType, toType))\n\n      case \"CannotParseDecimal\" =>\n        Some(QueryExecutionErrors.cannotParseDecimalError())\n\n      case \"InvalidUtf8String\" =>\n        val hexStr = UTF8String.fromString(params(\"hexString\").toString)\n        Some(QueryExecutionErrors.invalidUTF8StringError(hexStr))\n\n      case \"UnexpectedPositiveValue\" =>\n        Some(\n          QueryExecutionErrors.unexpectedValueForStartInFunctionError(\n            params(\"parameterName\").toString))\n\n      case \"UnexpectedNegativeValue\" =>\n        Some(\n          QueryExecutionErrors.unexpectedValueForLengthInFunctionError(\n            params(\"parameterName\").toString,\n            params(\"actualValue\").toString.toInt))\n\n      case \"InvalidRegexGroupIndex\" =>\n        Some(\n          QueryExecutionErrors.invalidRegexGroupIndexError(\n            params(\"functionName\").toString,\n            params(\"groupCount\").toString.toInt,\n            params(\"groupIndex\").toString.toInt))\n\n      case \"DatatypeCannotOrder\" =>\n        Some(\n          QueryExecutionErrors.orderedOperationUnsupportedByDataTypeError(\n            params(\"dataType\").toString))\n\n      case \"ScalarSubqueryTooManyRows\" =>\n        Some(QueryExecutionErrors.multipleRowScalarSubqueryError(context.headOption.orNull))\n\n      case \"IntervalArithmeticOverflowWithSuggestion\" =>\n        Some(\n          QueryExecutionErrors.withSuggestionIntervalArithmeticOverflowError(\n            params.get(\"functionName\").map(_.toString).getOrElse(\"\"),\n            context.headOption.orNull))\n\n      case \"IntervalArithmeticOverflowWithoutSuggestion\" =>\n        Some(\n          QueryExecutionErrors.withoutSuggestionIntervalArithmeticOverflowError(\n            context.headOption.orNull))\n\n      case \"DuplicateFieldCaseInsensitive\" =>\n        Some(\n          QueryExecutionErrors.foundDuplicateFieldInCaseInsensitiveModeError(\n            params(\"requiredFieldName\").toString,\n            params(\"matchedOrcFields\").toString))\n\n      case \"FileNotFound\" =>\n        val msg = params(\"message\").toString\n        // Extract file path from native error message and format like Hadoop's\n        // FileNotFoundException: \"File <path> does not exist\"\n        val path = ShimSparkErrorConverter.ObjectLocationPattern\n          .findFirstMatchIn(msg)\n          .map(_.group(1))\n          .getOrElse(msg)\n        // readCurrentFileNotFoundError was removed in Spark 4.0; construct directly\n        Some(\n          new SparkFileNotFoundException(\n            errorClass = \"_LEGACY_ERROR_TEMP_2055\",\n            messageParameters = Map(\"message\" -> s\"File $path does not exist\")))\n\n      case _ =>\n        // Unknown error type - return None to trigger fallback\n        None\n    }\n  }\n\n  private def getDataType(typeName: String): DataType = {\n    typeName.toUpperCase match {\n      case \"BYTE\" | \"TINYINT\" => ByteType\n      case \"SHORT\" | \"SMALLINT\" => ShortType\n      case \"INT\" | \"INTEGER\" => IntegerType\n      case \"LONG\" | \"BIGINT\" => LongType\n      case \"FLOAT\" | \"REAL\" => FloatType\n      case \"DOUBLE\" => DoubleType\n      case \"DECIMAL\" => DecimalType.SYSTEM_DEFAULT\n      case \"STRING\" | 
\"VARCHAR\" => StringType\n      case \"BINARY\" => BinaryType\n      case \"BOOLEAN\" => BooleanType\n      case \"DATE\" => DateType\n      case \"TIMESTAMP\" => TimestampType\n      case _ =>\n        try {\n          DataType.fromDDL(typeName)\n        } catch {\n          case _: Exception =>\n            // fromDDL rejects types that are syntactically invalid in SQL DDL, such as\n            // DECIMAL(p,s) with a negative scale (valid when allowNegativeScaleOfDecimal=true).\n            // Parse those manually rather than silently falling back to StringType.\n            if (typeName.toUpperCase.startsWith(\"DECIMAL(\") && typeName.endsWith(\")\")) {\n              val inner = typeName.substring(\"DECIMAL(\".length, typeName.length - 1)\n              val parts = inner.split(\",\")\n              if (parts.length == 2) {\n                try {\n                  DataTypes.createDecimalType(parts(0).trim.toInt, parts(1).trim.toInt)\n                } catch {\n                  case _: Exception => StringType\n                }\n              } else StringType\n            } else StringType\n        }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/main/spark-4.0/org/apache/spark/sql/comet/shims/ShimStreamSourceAwareSparkPlan.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\nimport org.apache.spark.sql.connector.read.streaming.SparkDataStream\nimport org.apache.spark.sql.execution.StreamSourceAwareSparkPlan\n\ntrait ShimStreamSourceAwareSparkPlan extends StreamSourceAwareSparkPlan {\n  override def getStream: Option[SparkDataStream] = None\n}\n"
  },
  {
    "path": "spark/src/test/java/org/apache/comet/IntegrationTestSuite.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet;\n\nimport java.lang.annotation.ElementType;\nimport java.lang.annotation.Retention;\nimport java.lang.annotation.RetentionPolicy;\nimport java.lang.annotation.Target;\n\nimport org.scalatest.TagAnnotation;\n\n/**\n * This annotation is used on integration test suites. So we can exclude these tests from execution\n * of scalatest-maven-plugin.\n */\n@TagAnnotation\n@Retention(RetentionPolicy.RUNTIME)\n@Target({ElementType.METHOD, ElementType.TYPE})\npublic @interface IntegrationTestSuite {}\n"
  },
  {
    "path": "spark/src/test/java/org/apache/comet/hadoop/fs/FakeHDFSFileSystem.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.hadoop.fs;\n\nimport java.net.URI;\n\nimport org.apache.hadoop.fs.RawLocalFileSystem;\n\npublic class FakeHDFSFileSystem extends RawLocalFileSystem {\n\n  public static final String PREFIX = \"fake://fake-bucket\";\n\n  public FakeHDFSFileSystem() {\n    // Avoid `URI scheme is not \"file\"` error on\n    // RawLocalFileSystem$DeprecatedRawLocalFileStatus.getOwner\n    RawLocalFileSystem.useStatIfAvailable();\n  }\n\n  @Override\n  public String getScheme() {\n    return \"fake\";\n  }\n\n  @Override\n  public URI getUri() {\n    return URI.create(PREFIX);\n  }\n}\n"
  },
  {
    "path": "spark/src/test/java/org/apache/iceberg/rest/RESTCatalogAdapter.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.iceberg.rest;\n\nimport java.io.IOException;\nimport java.util.List;\nimport java.util.Map;\nimport java.util.function.Consumer;\n\nimport org.apache.iceberg.BaseTable;\nimport org.apache.iceberg.BaseTransaction;\nimport org.apache.iceberg.Table;\nimport org.apache.iceberg.Transaction;\nimport org.apache.iceberg.Transactions;\nimport org.apache.iceberg.catalog.Catalog;\nimport org.apache.iceberg.catalog.Namespace;\nimport org.apache.iceberg.catalog.SupportsNamespaces;\nimport org.apache.iceberg.catalog.TableIdentifier;\nimport org.apache.iceberg.catalog.ViewCatalog;\nimport org.apache.iceberg.exceptions.AlreadyExistsException;\nimport org.apache.iceberg.exceptions.CommitFailedException;\nimport org.apache.iceberg.exceptions.CommitStateUnknownException;\nimport org.apache.iceberg.exceptions.ForbiddenException;\nimport org.apache.iceberg.exceptions.NamespaceNotEmptyException;\nimport org.apache.iceberg.exceptions.NoSuchIcebergTableException;\nimport org.apache.iceberg.exceptions.NoSuchNamespaceException;\nimport org.apache.iceberg.exceptions.NoSuchTableException;\nimport org.apache.iceberg.exceptions.NoSuchViewException;\nimport org.apache.iceberg.exceptions.NotAuthorizedException;\nimport org.apache.iceberg.exceptions.RESTException;\nimport org.apache.iceberg.exceptions.UnprocessableEntityException;\nimport org.apache.iceberg.exceptions.ValidationException;\nimport org.apache.iceberg.relocated.com.google.common.base.Splitter;\nimport org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;\nimport org.apache.iceberg.relocated.com.google.common.collect.Lists;\nimport org.apache.iceberg.rest.requests.CommitTransactionRequest;\nimport org.apache.iceberg.rest.requests.CreateNamespaceRequest;\nimport org.apache.iceberg.rest.requests.CreateTableRequest;\nimport org.apache.iceberg.rest.requests.CreateViewRequest;\nimport org.apache.iceberg.rest.requests.RegisterTableRequest;\nimport org.apache.iceberg.rest.requests.RenameTableRequest;\nimport org.apache.iceberg.rest.requests.ReportMetricsRequest;\nimport org.apache.iceberg.rest.requests.UpdateNamespacePropertiesRequest;\nimport org.apache.iceberg.rest.requests.UpdateTableRequest;\nimport org.apache.iceberg.rest.responses.ConfigResponse;\nimport org.apache.iceberg.rest.responses.CreateNamespaceResponse;\nimport org.apache.iceberg.rest.responses.ErrorResponse;\nimport org.apache.iceberg.rest.responses.GetNamespaceResponse;\nimport org.apache.iceberg.rest.responses.ListNamespacesResponse;\nimport org.apache.iceberg.rest.responses.ListTablesResponse;\nimport org.apache.iceberg.rest.responses.LoadTableResponse;\nimport org.apache.iceberg.rest.responses.LoadViewResponse;\nimport 
org.apache.iceberg.rest.responses.OAuthTokenResponse;\nimport org.apache.iceberg.rest.responses.UpdateNamespacePropertiesResponse;\nimport org.apache.iceberg.util.Pair;\nimport org.apache.iceberg.util.PropertyUtil;\n\n/** Adaptor class to translate REST requests into {@link Catalog} API calls. */\npublic class RESTCatalogAdapter implements RESTClient {\n  private static final Splitter SLASH = Splitter.on('/');\n\n  private static final Map<Class<? extends Exception>, Integer> EXCEPTION_ERROR_CODES =\n      ImmutableMap.<Class<? extends Exception>, Integer>builder()\n          .put(IllegalArgumentException.class, 400)\n          .put(ValidationException.class, 400)\n          .put(NamespaceNotEmptyException.class, 400) // TODO: should this be more specific?\n          .put(NotAuthorizedException.class, 401)\n          .put(ForbiddenException.class, 403)\n          .put(NoSuchNamespaceException.class, 404)\n          .put(NoSuchTableException.class, 404)\n          .put(NoSuchViewException.class, 404)\n          .put(NoSuchIcebergTableException.class, 404)\n          .put(UnsupportedOperationException.class, 406)\n          .put(AlreadyExistsException.class, 409)\n          .put(CommitFailedException.class, 409)\n          .put(UnprocessableEntityException.class, 422)\n          .put(CommitStateUnknownException.class, 500)\n          .buildOrThrow();\n\n  private final Catalog catalog;\n  private final SupportsNamespaces asNamespaceCatalog;\n  private final ViewCatalog asViewCatalog;\n\n  // Optional credentials to inject into loadTable responses, simulating REST catalog\n  // credential vending. When non-empty, these are added to LoadTableResponse.config().\n  private Map<String, String> vendedCredentials = ImmutableMap.of();\n\n  public void setVendedCredentials(Map<String, String> credentials) {\n    this.vendedCredentials = credentials;\n  }\n\n  public RESTCatalogAdapter(Catalog catalog) {\n    this.catalog = catalog;\n    this.asNamespaceCatalog =\n        catalog instanceof SupportsNamespaces ? (SupportsNamespaces) catalog : null;\n    this.asViewCatalog = catalog instanceof ViewCatalog ? 
(ViewCatalog) catalog : null;\n  }\n\n  enum HTTPMethod {\n    GET,\n    HEAD,\n    POST,\n    DELETE\n  }\n\n  enum Route {\n    TOKENS(HTTPMethod.POST, \"v1/oauth/tokens\", null, OAuthTokenResponse.class),\n    SEPARATE_AUTH_TOKENS_URI(\n        HTTPMethod.POST, \"https://auth-server.com/token\", null, OAuthTokenResponse.class),\n    CONFIG(HTTPMethod.GET, \"v1/config\", null, ConfigResponse.class),\n    LIST_NAMESPACES(HTTPMethod.GET, \"v1/namespaces\", null, ListNamespacesResponse.class),\n    CREATE_NAMESPACE(\n        HTTPMethod.POST,\n        \"v1/namespaces\",\n        CreateNamespaceRequest.class,\n        CreateNamespaceResponse.class),\n    LOAD_NAMESPACE(HTTPMethod.GET, \"v1/namespaces/{namespace}\", null, GetNamespaceResponse.class),\n    DROP_NAMESPACE(HTTPMethod.DELETE, \"v1/namespaces/{namespace}\"),\n    UPDATE_NAMESPACE(\n        HTTPMethod.POST,\n        \"v1/namespaces/{namespace}/properties\",\n        UpdateNamespacePropertiesRequest.class,\n        UpdateNamespacePropertiesResponse.class),\n    LIST_TABLES(HTTPMethod.GET, \"v1/namespaces/{namespace}/tables\", null, ListTablesResponse.class),\n    CREATE_TABLE(\n        HTTPMethod.POST,\n        \"v1/namespaces/{namespace}/tables\",\n        CreateTableRequest.class,\n        LoadTableResponse.class),\n    LOAD_TABLE(\n        HTTPMethod.GET, \"v1/namespaces/{namespace}/tables/{name}\", null, LoadTableResponse.class),\n    REGISTER_TABLE(\n        HTTPMethod.POST,\n        \"v1/namespaces/{namespace}/register\",\n        RegisterTableRequest.class,\n        LoadTableResponse.class),\n    UPDATE_TABLE(\n        HTTPMethod.POST,\n        \"v1/namespaces/{namespace}/tables/{name}\",\n        UpdateTableRequest.class,\n        LoadTableResponse.class),\n    DROP_TABLE(HTTPMethod.DELETE, \"v1/namespaces/{namespace}/tables/{name}\"),\n    RENAME_TABLE(HTTPMethod.POST, \"v1/tables/rename\", RenameTableRequest.class, null),\n    REPORT_METRICS(\n        HTTPMethod.POST,\n        \"v1/namespaces/{namespace}/tables/{name}/metrics\",\n        ReportMetricsRequest.class,\n        null),\n    COMMIT_TRANSACTION(\n        HTTPMethod.POST, \"v1/transactions/commit\", CommitTransactionRequest.class, null),\n    LIST_VIEWS(HTTPMethod.GET, \"v1/namespaces/{namespace}/views\", null, ListTablesResponse.class),\n    LOAD_VIEW(\n        HTTPMethod.GET, \"v1/namespaces/{namespace}/views/{name}\", null, LoadViewResponse.class),\n    CREATE_VIEW(\n        HTTPMethod.POST,\n        \"v1/namespaces/{namespace}/views\",\n        CreateViewRequest.class,\n        LoadViewResponse.class),\n    UPDATE_VIEW(\n        HTTPMethod.POST,\n        \"v1/namespaces/{namespace}/views/{name}\",\n        UpdateTableRequest.class,\n        LoadViewResponse.class),\n    RENAME_VIEW(HTTPMethod.POST, \"v1/views/rename\", RenameTableRequest.class, null),\n    DROP_VIEW(HTTPMethod.DELETE, \"v1/namespaces/{namespace}/views/{name}\");\n\n    private final HTTPMethod method;\n    private final int requiredLength;\n    private final Map<Integer, String> requirements;\n    private final Map<Integer, String> variables;\n    private final Class<? extends RESTRequest> requestClass;\n    private final Class<? extends RESTResponse> responseClass;\n\n    Route(HTTPMethod method, String pattern) {\n      this(method, pattern, null, null);\n    }\n\n    Route(\n        HTTPMethod method,\n        String pattern,\n        Class<? extends RESTRequest> requestClass,\n        Class<? 
extends RESTResponse> responseClass) {\n      this.method = method;\n\n      // parse the pattern into requirements and variables\n      List<String> parts = SLASH.splitToList(pattern);\n      ImmutableMap.Builder<Integer, String> requirementsBuilder = ImmutableMap.builder();\n      ImmutableMap.Builder<Integer, String> variablesBuilder = ImmutableMap.builder();\n      for (int pos = 0; pos < parts.size(); pos += 1) {\n        String part = parts.get(pos);\n        if (part.startsWith(\"{\") && part.endsWith(\"}\")) {\n          variablesBuilder.put(pos, part.substring(1, part.length() - 1));\n        } else {\n          requirementsBuilder.put(pos, part);\n        }\n      }\n\n      this.requestClass = requestClass;\n      this.responseClass = responseClass;\n\n      this.requiredLength = parts.size();\n      this.requirements = requirementsBuilder.build();\n      this.variables = variablesBuilder.build();\n    }\n\n    private boolean matches(HTTPMethod requestMethod, List<String> requestPath) {\n      return method == requestMethod\n          && requiredLength == requestPath.size()\n          && requirements.entrySet().stream()\n              .allMatch(\n                  requirement ->\n                      requirement\n                          .getValue()\n                          .equalsIgnoreCase(requestPath.get(requirement.getKey())));\n    }\n\n    private Map<String, String> variables(List<String> requestPath) {\n      ImmutableMap.Builder<String, String> vars = ImmutableMap.builder();\n      variables.forEach((key, value) -> vars.put(value, requestPath.get(key)));\n      return vars.build();\n    }\n\n    public static Pair<Route, Map<String, String>> from(HTTPMethod method, String path) {\n      List<String> parts = SLASH.splitToList(path);\n      for (Route candidate : Route.values()) {\n        if (candidate.matches(method, parts)) {\n          return Pair.of(candidate, candidate.variables(parts));\n        }\n      }\n\n      return null;\n    }\n\n    public Class<? extends RESTRequest> requestClass() {\n      return requestClass;\n    }\n\n    public Class<? extends RESTResponse> responseClass() {\n      return responseClass;\n    }\n  }\n\n  private static OAuthTokenResponse handleOAuthRequest(Object body) {\n    Map<String, String> request = (Map<String, String>) castRequest(Map.class, body);\n    String grantType = request.get(\"grant_type\");\n    switch (grantType) {\n      case \"client_credentials\":\n        return OAuthTokenResponse.builder()\n            .withToken(\"client-credentials-token:sub=\" + request.get(\"client_id\"))\n            .withTokenType(\"Bearer\")\n            .build();\n\n      case \"urn:ietf:params:oauth:grant-type:token-exchange\":\n        String actor = request.get(\"actor_token\");\n        String token =\n            String.format(\n                \"token-exchange-token:sub=%s%s\",\n                request.get(\"subject_token\"), actor != null ? 
\",act=\" + actor : \"\");\n        return OAuthTokenResponse.builder()\n            .withToken(token)\n            .withIssuedTokenType(\"urn:ietf:params:oauth:token-type:access_token\")\n            .withTokenType(\"Bearer\")\n            .build();\n\n      default:\n        throw new UnsupportedOperationException(\"Unsupported grant_type: \" + grantType);\n    }\n  }\n\n  @SuppressWarnings({\"MethodLength\", \"checkstyle:CyclomaticComplexity\"})\n  public <T extends RESTResponse> T handleRequest(\n      Route route, Map<String, String> vars, Object body, Class<T> responseType) {\n    T response = doHandleRequest(route, vars, body, responseType);\n    // Inject vended credentials into any LoadTableResponse, simulating REST catalog\n    // credential vending. This covers CREATE_TABLE, LOAD_TABLE, UPDATE_TABLE, etc.\n    if (!vendedCredentials.isEmpty() && response instanceof LoadTableResponse) {\n      LoadTableResponse original = (LoadTableResponse) response;\n      @SuppressWarnings(\"unchecked\")\n      T withCreds =\n          (T)\n              LoadTableResponse.builder()\n                  .withTableMetadata(original.tableMetadata())\n                  .addAllConfig(original.config())\n                  .addAllConfig(vendedCredentials)\n                  .build();\n      return withCreds;\n    }\n    return response;\n  }\n\n  private <T extends RESTResponse> T doHandleRequest(\n      Route route, Map<String, String> vars, Object body, Class<T> responseType) {\n    switch (route) {\n      case TOKENS:\n        return castResponse(responseType, handleOAuthRequest(body));\n\n      case CONFIG:\n        return castResponse(responseType, ConfigResponse.builder().build());\n\n      case LIST_NAMESPACES:\n        if (asNamespaceCatalog != null) {\n          Namespace ns;\n          if (vars.containsKey(\"parent\")) {\n            ns =\n                Namespace.of(\n                    RESTUtil.NAMESPACE_SPLITTER\n                        .splitToStream(vars.get(\"parent\"))\n                        .toArray(String[]::new));\n          } else {\n            ns = Namespace.empty();\n          }\n\n          return castResponse(responseType, CatalogHandlers.listNamespaces(asNamespaceCatalog, ns));\n        }\n        break;\n\n      case CREATE_NAMESPACE:\n        if (asNamespaceCatalog != null) {\n          CreateNamespaceRequest request = castRequest(CreateNamespaceRequest.class, body);\n          return castResponse(\n              responseType, CatalogHandlers.createNamespace(asNamespaceCatalog, request));\n        }\n        break;\n\n      case LOAD_NAMESPACE:\n        if (asNamespaceCatalog != null) {\n          Namespace namespace = namespaceFromPathVars(vars);\n          return castResponse(\n              responseType, CatalogHandlers.loadNamespace(asNamespaceCatalog, namespace));\n        }\n        break;\n\n      case DROP_NAMESPACE:\n        if (asNamespaceCatalog != null) {\n          CatalogHandlers.dropNamespace(asNamespaceCatalog, namespaceFromPathVars(vars));\n          return null;\n        }\n        break;\n\n      case UPDATE_NAMESPACE:\n        if (asNamespaceCatalog != null) {\n          Namespace namespace = namespaceFromPathVars(vars);\n          UpdateNamespacePropertiesRequest request =\n              castRequest(UpdateNamespacePropertiesRequest.class, body);\n          return castResponse(\n              responseType,\n              CatalogHandlers.updateNamespaceProperties(asNamespaceCatalog, namespace, request));\n        }\n        break;\n\n      case 
LIST_TABLES:\n        {\n          Namespace namespace = namespaceFromPathVars(vars);\n          return castResponse(responseType, CatalogHandlers.listTables(catalog, namespace));\n        }\n\n      case CREATE_TABLE:\n        {\n          Namespace namespace = namespaceFromPathVars(vars);\n          CreateTableRequest request = castRequest(CreateTableRequest.class, body);\n          request.validate();\n          if (request.stageCreate()) {\n            return castResponse(\n                responseType, CatalogHandlers.stageTableCreate(catalog, namespace, request));\n          } else {\n            return castResponse(\n                responseType, CatalogHandlers.createTable(catalog, namespace, request));\n          }\n        }\n\n      case DROP_TABLE:\n        {\n          if (PropertyUtil.propertyAsBoolean(vars, \"purgeRequested\", false)) {\n            CatalogHandlers.purgeTable(catalog, identFromPathVars(vars));\n          } else {\n            CatalogHandlers.dropTable(catalog, identFromPathVars(vars));\n          }\n          return null;\n        }\n\n      case LOAD_TABLE:\n        {\n          TableIdentifier ident = identFromPathVars(vars);\n          return castResponse(responseType, CatalogHandlers.loadTable(catalog, ident));\n        }\n\n      case REGISTER_TABLE:\n        {\n          Namespace namespace = namespaceFromPathVars(vars);\n          RegisterTableRequest request = castRequest(RegisterTableRequest.class, body);\n          return castResponse(\n              responseType, CatalogHandlers.registerTable(catalog, namespace, request));\n        }\n\n      case UPDATE_TABLE:\n        {\n          TableIdentifier ident = identFromPathVars(vars);\n          UpdateTableRequest request = castRequest(UpdateTableRequest.class, body);\n          return castResponse(responseType, CatalogHandlers.updateTable(catalog, ident, request));\n        }\n\n      case RENAME_TABLE:\n        {\n          RenameTableRequest request = castRequest(RenameTableRequest.class, body);\n          CatalogHandlers.renameTable(catalog, request);\n          return null;\n        }\n\n      case REPORT_METRICS:\n        {\n          // nothing to do here other than checking that we're getting the correct request\n          castRequest(ReportMetricsRequest.class, body);\n          return null;\n        }\n\n      case COMMIT_TRANSACTION:\n        {\n          CommitTransactionRequest request = castRequest(CommitTransactionRequest.class, body);\n          commitTransaction(catalog, request);\n          return null;\n        }\n\n      case LIST_VIEWS:\n        {\n          if (null != asViewCatalog) {\n            Namespace namespace = namespaceFromPathVars(vars);\n            return castResponse(responseType, CatalogHandlers.listViews(asViewCatalog, namespace));\n          }\n          break;\n        }\n\n      case CREATE_VIEW:\n        {\n          if (null != asViewCatalog) {\n            Namespace namespace = namespaceFromPathVars(vars);\n            CreateViewRequest request = castRequest(CreateViewRequest.class, body);\n            return castResponse(\n                responseType, CatalogHandlers.createView(asViewCatalog, namespace, request));\n          }\n          break;\n        }\n\n      case LOAD_VIEW:\n        {\n          if (null != asViewCatalog) {\n            TableIdentifier ident = identFromPathVars(vars);\n            return castResponse(responseType, CatalogHandlers.loadView(asViewCatalog, ident));\n          }\n          break;\n        }\n\n      case UPDATE_VIEW:\n   
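     // View commits reuse UpdateTableRequest (see the Route definitions above).\n   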
     {\n          if (null != asViewCatalog) {\n            TableIdentifier ident = identFromPathVars(vars);\n            UpdateTableRequest request = castRequest(UpdateTableRequest.class, body);\n            return castResponse(\n                responseType, CatalogHandlers.updateView(asViewCatalog, ident, request));\n          }\n          break;\n        }\n\n      case RENAME_VIEW:\n        {\n          if (null != asViewCatalog) {\n            RenameTableRequest request = castRequest(RenameTableRequest.class, body);\n            CatalogHandlers.renameView(asViewCatalog, request);\n            return null;\n          }\n          break;\n        }\n\n      case DROP_VIEW:\n        {\n          if (null != asViewCatalog) {\n            CatalogHandlers.dropView(asViewCatalog, identFromPathVars(vars));\n            return null;\n          }\n          break;\n        }\n\n      default:\n        if (responseType == OAuthTokenResponse.class) {\n          return castResponse(responseType, handleOAuthRequest(body));\n        }\n    }\n\n    return null;\n  }\n\n  /**\n   * This is a very simplistic approach that only validates the requirements for each table and does\n   * not do any other conflict detection. Therefore, it does not guarantee true transactional\n   * atomicity, which is left to the implementation details of a REST server.\n   */\n  private static void commitTransaction(Catalog catalog, CommitTransactionRequest request) {\n    List<Transaction> transactions = Lists.newArrayList();\n\n    for (UpdateTableRequest tableChange : request.tableChanges()) {\n      Table table = catalog.loadTable(tableChange.identifier());\n      if (table instanceof BaseTable) {\n        Transaction transaction =\n            Transactions.newTransaction(\n                tableChange.identifier().toString(), ((BaseTable) table).operations());\n        transactions.add(transaction);\n\n        BaseTransaction.TransactionTable txTable =\n            (BaseTransaction.TransactionTable) transaction.table();\n\n        // this performs validations and makes temporary commits that are in-memory\n        CatalogHandlers.commit(txTable.operations(), tableChange);\n      } else {\n        throw new IllegalStateException(\"Cannot wrap catalog that does not produce BaseTable\");\n      }\n    }\n\n    // only commit if validations passed previously\n    transactions.forEach(Transaction::commitTransaction);\n  }\n\n  public <T extends RESTResponse> T execute(\n      HTTPMethod method,\n      String path,\n      Map<String, String> queryParams,\n      Object body,\n      Class<T> responseType,\n      Map<String, String> headers,\n      Consumer<ErrorResponse> errorHandler) {\n    ErrorResponse.Builder errorBuilder = ErrorResponse.builder();\n    Pair<Route, Map<String, String>> routeAndVars = Route.from(method, path);\n    if (routeAndVars != null) {\n      try {\n        ImmutableMap.Builder<String, String> vars = ImmutableMap.builder();\n        if (queryParams != null) {\n          vars.putAll(queryParams);\n        }\n        vars.putAll(routeAndVars.second());\n\n        return handleRequest(routeAndVars.first(), vars.build(), body, responseType);\n\n      } catch (RuntimeException e) {\n        configureResponseFromException(e, errorBuilder);\n      }\n\n    } else {\n      errorBuilder\n          .responseCode(400)\n          .withType(\"BadRequestException\")\n          .withMessage(String.format(\"No route for request: %s %s\", method, path));\n    }\n\n    ErrorResponse error = errorBuilder.build();\n    
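// Give the caller-supplied error handler the first chance to throw a typed exception.\n    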
errorHandler.accept(error);\n\n    // if the error handler doesn't throw an exception, throw a generic one\n    throw new RESTException(\"Unhandled error: %s\", error);\n  }\n\n  @Override\n  public <T extends RESTResponse> T delete(\n      String path,\n      Class<T> responseType,\n      Map<String, String> headers,\n      Consumer<ErrorResponse> errorHandler) {\n    return execute(HTTPMethod.DELETE, path, null, null, responseType, headers, errorHandler);\n  }\n\n  @Override\n  public <T extends RESTResponse> T delete(\n      String path,\n      Map<String, String> queryParams,\n      Class<T> responseType,\n      Map<String, String> headers,\n      Consumer<ErrorResponse> errorHandler) {\n    return execute(HTTPMethod.DELETE, path, queryParams, null, responseType, headers, errorHandler);\n  }\n\n  @Override\n  public <T extends RESTResponse> T post(\n      String path,\n      RESTRequest body,\n      Class<T> responseType,\n      Map<String, String> headers,\n      Consumer<ErrorResponse> errorHandler) {\n    return execute(HTTPMethod.POST, path, null, body, responseType, headers, errorHandler);\n  }\n\n  @Override\n  public <T extends RESTResponse> T get(\n      String path,\n      Map<String, String> queryParams,\n      Class<T> responseType,\n      Map<String, String> headers,\n      Consumer<ErrorResponse> errorHandler) {\n    return execute(HTTPMethod.GET, path, queryParams, null, responseType, headers, errorHandler);\n  }\n\n  @Override\n  public void head(String path, Map<String, String> headers, Consumer<ErrorResponse> errorHandler) {\n    execute(HTTPMethod.HEAD, path, null, null, null, headers, errorHandler);\n  }\n\n  @Override\n  public <T extends RESTResponse> T postForm(\n      String path,\n      Map<String, String> formData,\n      Class<T> responseType,\n      Map<String, String> headers,\n      Consumer<ErrorResponse> errorHandler) {\n    return execute(HTTPMethod.POST, path, null, formData, responseType, headers, errorHandler);\n  }\n\n  @Override\n  public void close() throws IOException {\n    // The calling test is responsible for closing the underlying catalog backing this REST catalog\n    // so that the underlying backend catalog is not closed and reopened during the REST catalog's\n    // initialize method when fetching the server configuration.\n  }\n\n  private static class BadResponseType extends RuntimeException {\n    private BadResponseType(Class<?> responseType, Object response) {\n      super(\n          String.format(\"Invalid response object, not a %s: %s\", responseType.getName(), response));\n    }\n  }\n\n  private static class BadRequestType extends RuntimeException {\n    private BadRequestType(Class<?> requestType, Object request) {\n      super(String.format(\"Invalid request object, not a %s: %s\", requestType.getName(), request));\n    }\n  }\n\n  public static <T> T castRequest(Class<T> requestType, Object request) {\n    if (requestType.isInstance(request)) {\n      return requestType.cast(request);\n    }\n\n    throw new BadRequestType(requestType, request);\n  }\n\n  public static <T extends RESTResponse> T castResponse(Class<T> responseType, Object response) {\n    if (responseType.isInstance(response)) {\n      return responseType.cast(response);\n    }\n\n    throw new BadResponseType(responseType, response);\n  }\n\n  public static void configureResponseFromException(\n      Exception exc, ErrorResponse.Builder errorBuilder) {\n    errorBuilder\n        .responseCode(EXCEPTION_ERROR_CODES.getOrDefault(exc.getClass(), 500))\n        
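// Unmapped exception types fall back to HTTP 500 above; withType carries the\n        // exception's simple class name.\n        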
.withType(exc.getClass().getSimpleName())\n        .withMessage(exc.getMessage())\n        .withStackTrace(exc);\n  }\n\n  private static Namespace namespaceFromPathVars(Map<String, String> pathVars) {\n    return RESTUtil.decodeNamespace(pathVars.get(\"namespace\"));\n  }\n\n  private static TableIdentifier identFromPathVars(Map<String, String> pathVars) {\n    return TableIdentifier.of(\n        namespaceFromPathVars(pathVars), RESTUtil.decodeString(pathVars.get(\"name\")));\n  }\n}\n"
  },
  {
    "path": "spark/src/test/resources/log4j.properties",
    "content": "#\n# Licensed to the Apache Software Foundation (ASF) under one or more\n# contributor license agreements.  See the NOTICE file distributed with\n# this work for additional information regarding copyright ownership.\n# The ASF licenses this file to You under the Apache License, Version 2.0\n# (the \"License\"); you may not use this file except in compliance with\n# the License.  You may obtain a copy of the License at\n#\n#    http://www.apache.org/licenses/LICENSE-2.0\n#\n# Unless required by applicable law or agreed to in writing, software\n# distributed under the License is distributed on an \"AS IS\" BASIS,\n# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.\n# See the License for the specific language governing permissions and\n# limitations under the License.\n#\n\n# Set everything to be logged to the file target/unit-tests.log\ntest.appender=file\nlog4j.rootCategory=INFO, ${test.appender}\nlog4j.appender.file=org.apache.log4j.FileAppender\nlog4j.appender.file.append=true\nlog4j.appender.file.file=target/unit-tests.log\nlog4j.appender.file.layout=org.apache.log4j.PatternLayout\nlog4j.appender.file.layout.ConversionPattern=%d{yy/MM/dd HH:mm:ss.SSS} %t %p %c{1}: %m%n\n\n# Tests that launch java subprocesses can set the \"test.appender\" system property to\n# \"console\" to avoid having the child process's logs overwrite the unit test's\n# log file.\nlog4j.appender.console=org.apache.log4j.ConsoleAppender\nlog4j.appender.console.target=System.err\nlog4j.appender.console.layout=org.apache.log4j.PatternLayout\nlog4j.appender.console.layout.ConversionPattern=%t: %m%n\n\n# Ignore messages below warning level from Jetty, because it's a bit verbose\nlog4j.logger.org.sparkproject.jetty=WARN\n"
  },
  {
    "path": "spark/src/test/resources/log4j2.properties",
    "content": "# Licensed to the Apache Software Foundation (ASF) under one\n# or more contributor license agreements.  See the NOTICE file\n# distributed with this work for additional information\n# regarding copyright ownership.  The ASF licenses this file\n# to you under the Apache License, Version 2.0 (the\n# \"License\"); you may not use this file except in compliance\n# with the License.  You may obtain a copy of the License at\n# \n#   http://www.apache.org/licenses/LICENSE-2.0\n# \n# Unless required by applicable law or agreed to in writing,\n# software distributed under the License is distributed on an\n# \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n# KIND, either express or implied.  See the License for the\n# specific language governing permissions and limitations\n# under the License.\n\n# Set everything to be logged to the file target/unit-tests.log\nrootLogger.level = info\nrootLogger.appenderRef.file.ref = ${sys:test.appender:-File}\n\nappender.file.type = File\nappender.file.name = File\nappender.file.fileName = target/unit-tests.log\nappender.file.layout.type = PatternLayout\nappender.file.layout.pattern = %d{yy/MM/dd HH:mm:ss.SSS} %t %p %c{1}: %m%n\n\n# Tests that launch java subprocesses can set the \"test.appender\" system property to\n# \"console\" to avoid having the child process's logs overwrite the unit test's\n# log file.\nappender.console.type = Console\nappender.console.name = console\nappender.console.target = SYSTEM_ERR\nappender.console.layout.type = PatternLayout\nappender.console.layout.pattern = %t: %m%n\n\n# Ignore messages below warning level from Jetty, because it's a bit verbose\nlogger.jetty.name = org.sparkproject.jetty\nlogger.jetty.level = warn\n\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/aggregate_filter.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Tests for SQL aggregate FILTER (WHERE ...) clause support.\n-- See https://github.com/apache/datafusion-comet/issues/XXXX\n\nstatement\nCREATE TABLE test_agg_filter(\n  grp string,\n  i int,\n  l long,\n  d decimal(10, 2),\n  flag boolean\n) USING parquet\n\nstatement\nINSERT INTO test_agg_filter VALUES\n  ('a', 1,  10,  1.00, true),\n  ('a', 2,  20,  2.00, false),\n  ('a', 3,  30,  3.00, true),\n  ('b', 4,  40,  4.00, false),\n  ('b', 5,  50,  5.00, true),\n  ('b', NULL, NULL, NULL, true)\n\n-- Basic FILTER on SUM(int)\nquery\nSELECT SUM(i) FILTER (WHERE flag = true) FROM test_agg_filter\n\n-- FILTER on SUM with GROUP BY\nquery\nSELECT grp, SUM(i) FILTER (WHERE flag = true) FROM test_agg_filter GROUP BY grp ORDER BY grp\n\n-- FILTER on SUM(long)\nquery\nSELECT SUM(l) FILTER (WHERE flag = true) FROM test_agg_filter\n\n-- FILTER on SUM(decimal)\nquery\nSELECT SUM(d) FILTER (WHERE flag = true) FROM test_agg_filter\n\n-- Multiple aggregates: one with filter, one without\nquery\nSELECT SUM(i), SUM(i) FILTER (WHERE flag = true) FROM test_agg_filter\n\n-- FILTER with NULL rows: NULLs should not be included even when filter passes\nquery\nSELECT grp, SUM(i) FILTER (WHERE flag = true) FROM test_agg_filter GROUP BY grp ORDER BY grp\n\n-- FILTER with COUNT\nquery\nSELECT COUNT(*) FILTER (WHERE flag = true) FROM test_agg_filter\n\n-- FILTER with COUNT GROUP BY\nquery\nSELECT grp, COUNT(*) FILTER (WHERE flag = true) FROM test_agg_filter GROUP BY grp ORDER BY grp\n\n-- FILTER on AVG(int)\nquery\nSELECT AVG(i) FILTER (WHERE flag = true) FROM test_agg_filter\n\n-- FILTER on AVG with GROUP BY\nquery\nSELECT grp, AVG(i) FILTER (WHERE flag = true) FROM test_agg_filter GROUP BY grp ORDER BY grp\n\n-- FILTER on AVG(decimal): final cast back to decimal may differ from Spark due to rounding,\n-- so use spark_answer_only mode to validate correctness without checking operator coverage\nquery spark_answer_only\nSELECT AVG(d) FILTER (WHERE flag = true) FROM test_agg_filter\n\nquery spark_answer_only\nSELECT grp, AVG(d) FILTER (WHERE flag = true) FROM test_agg_filter GROUP BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/avg.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_avg(i int, l long, f float, d double, grp string) USING parquet\n\nstatement\nINSERT INTO test_avg VALUES (1, 10, 1.5, 1.5, 'a'), (2, 20, 2.5, 2.5, 'a'), (3, 30, 3.5, 3.5, 'b'), (NULL, NULL, NULL, NULL, 'b'), (0, 0, 0.0, 0.0, 'a')\n\nquery tolerance=1e-6\nSELECT avg(i), avg(l), avg(f), avg(d) FROM test_avg\n\nquery tolerance=1e-6\nSELECT grp, avg(d) FROM test_avg GROUP BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/bit_agg.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_bit_agg(i int, grp string) USING parquet\n\nstatement\nINSERT INTO test_bit_agg VALUES (1, 'a'), (2, 'a'), (3, 'a'), (4, 'b'), (5, 'b'), (NULL, 'b')\n\nquery\nSELECT bit_and(i), bit_or(i), bit_xor(i) FROM test_bit_agg\n\nquery\nSELECT grp, bit_and(i), bit_or(i), bit_xor(i) FROM test_bit_agg GROUP BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/corr.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n\nstatement\nCREATE TABLE test_corr(x double, y double, grp string) USING parquet\n\nstatement\nINSERT INTO test_corr VALUES (1.0, 2.0, 'a'), (2.0, 4.0, 'a'), (3.0, 6.0, 'a'), (1.0, 1.0, 'b'), (2.0, 3.0, 'b'), (NULL, 1.0, 'b')\n\nquery tolerance=1e-6\nSELECT corr(x, y) FROM test_corr\n\nquery tolerance=1e-6\nSELECT grp, corr(x, y) FROM test_corr GROUP BY grp ORDER BY grp\n\n-- Test permutations of NULL and NaN\nstatement\nCREATE TABLE test_corr_nan(x double, y double, grp string) USING parquet\n\nstatement\nINSERT INTO test_corr_nan VALUES (cast('NaN' as double), cast('NaN' as double), 'both_nan'), (cast('NaN' as double), 1.0, 'nan_val'), (1.0, cast('NaN' as double), 'val_nan'), (NULL, cast('NaN' as double), 'null_nan'), (cast('NaN' as double), NULL, 'nan_null'), (NULL, NULL, 'both_null'), (NULL, 1.0, 'null_val'), (1.0, NULL, 'val_null'), (cast('NaN' as double), cast('NaN' as double), 'mixed'), (1.0, 2.0, 'mixed'), (3.0, 4.0, 'mixed'), (cast('NaN' as double), cast('NaN' as double), 'multi_nan'), (cast('NaN' as double), cast('NaN' as double), 'multi_nan'), (cast('NaN' as double), cast('NaN' as double), 'multi_nan')\n\nquery tolerance=1e-6\nSELECT grp, corr(x, y) FROM test_corr_nan GROUP BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/count.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_count(i int, s string, grp string) USING parquet\n\nstatement\nINSERT INTO test_count VALUES (1, 'a', 'x'), (2, 'b', 'x'), (NULL, NULL, 'y'), (3, 'c', 'y'), (NULL, 'd', 'y')\n\nquery\nSELECT count(*), count(i), count(s) FROM test_count\n\nquery\nSELECT grp, count(*), count(i) FROM test_count GROUP BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/covariance.sql",
    "content": "\n-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_covar(x double, y double, grp string) USING parquet\n\nstatement\nINSERT INTO test_covar VALUES (1.0, 2.0, 'a'), (2.0, 4.0, 'a'), (3.0, 6.0, 'a'), (1.0, 1.0, 'b'), (2.0, 3.0, 'b'), (NULL, 1.0, 'b')\n\nquery tolerance=1e-6\nSELECT covar_samp(x, y), covar_pop(x, y) FROM test_covar\n\nquery tolerance=1e-6\nSELECT grp, covar_samp(x, y), covar_pop(x, y) FROM test_covar GROUP BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/first_last.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ConfigMatrix: parquet.enable.dictionary=false,true\n\n-- ============================================================\n-- Setup: shared tables\n-- ============================================================\n\nstatement\nCREATE TABLE test_first_last(i int, grp string) USING parquet\n\nstatement\nINSERT INTO test_first_last VALUES (1, 'a'), (2, 'a'), (3, 'a'), (NULL, 'b'), (4, 'b')\n\nstatement\nCREATE TABLE test_ignore_nulls(id int, val int, grp string) USING parquet\n\nstatement\nINSERT INTO test_ignore_nulls VALUES\n  (1, NULL, 'a'),\n  (2, 10,   'a'),\n  (3, 20,   'a'),\n  (4, NULL, 'b'),\n  (5, 30,   'b'),\n  (6, NULL, 'b')\n\nstatement\nCREATE TABLE test_all_nulls(val int, grp string) USING parquet\n\nstatement\nINSERT INTO test_all_nulls VALUES (NULL, 'a'), (NULL, 'a'), (NULL, 'b'), (1, 'b')\n\nstatement\nCREATE TABLE test_empty(val int) USING parquet\n\nstatement\nCREATE TABLE test_single_row(val int) USING parquet\n\nstatement\nINSERT INTO test_single_row VALUES (42)\n\nstatement\nCREATE TABLE test_single_null(val int) USING parquet\n\nstatement\nINSERT INTO test_single_null VALUES (NULL)\n\nstatement\nCREATE TABLE test_types(\n  i_val int,\n  l_val bigint,\n  d_val double,\n  s_val string,\n  b_val boolean,\n  grp string\n) USING parquet\n\nstatement\nINSERT INTO test_types VALUES\n  (NULL, NULL,  NULL,  NULL,  NULL,  'a'),\n  (1,    100,   1.5,   'foo', true,  'a'),\n  (2,    200,   2.5,   'bar', false, 'a'),\n  (NULL, NULL,  NULL,  NULL,  NULL,  'b'),\n  (NULL, NULL,  NULL,  NULL,  NULL,  'b')\n\nstatement\nCREATE TABLE test_null_positions(id int, val int, grp string) USING parquet\n\nstatement\nINSERT INTO test_null_positions VALUES\n  (1, NULL, 'start_null'),\n  (2, 10,   'start_null'),\n  (3, 20,   'start_null'),\n  (4, 10,   'mid_null'),\n  (5, NULL, 'mid_null'),\n  (6, 20,   'mid_null'),\n  (7, 10,   'end_null'),\n  (8, 20,   'end_null'),\n  (9, NULL, 'end_null')\n\nstatement\nCREATE TABLE test_expr_nulls(val int, grp string) USING parquet\n\nstatement\nINSERT INTO test_expr_nulls VALUES (1, 'a'), (2, 'a'), (3, 'a'), (4, 'b'), (5, 'b'), (6, 'b')\n\nstatement\nCREATE TABLE test_decimal(val DECIMAL(10,2), grp string) USING parquet\n\nstatement\nINSERT INTO test_decimal VALUES (NULL, 'a'), (1.23, 'a'), (4.56, 'a'), (NULL, 'b'), (NULL, 'b')\n\nstatement\nCREATE TABLE test_date(val DATE, grp string) USING parquet\n\nstatement\nINSERT INTO test_date VALUES (NULL, 'a'), (DATE '2024-01-01', 'a'), (DATE '2024-12-31', 'a'), (NULL, 'b'), (NULL, 'b')\n\nstatement\nCREATE TABLE test_large(val int, grp string) USING parquet\n\nstatement\nINSERT INTO test_large\nSELECT\n  CASE WHEN id % 10 = 0 THEN id ELSE NULL END as val,\n  CASE WHEN id < 500 THEN 'a' ELSE 'b' 
END as grp\nFROM (SELECT explode(sequence(1, 1000)) as id)\n\n-- ############################################################\n-- FIRST\n-- ############################################################\n\n-- ============================================================\n-- first: basic (default behavior includes nulls)\n-- ============================================================\n\nquery\nSELECT first(i) FROM test_first_last\n\nquery\nSELECT first(i, true) FROM test_first_last\n\nquery\nSELECT grp, first(i) FROM test_first_last GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- first IGNORE NULLS: basic\n-- ============================================================\n\n-- without grouping\nquery\nSELECT first(val) IGNORE NULLS FROM test_ignore_nulls\n\n-- with grouping\nquery\nSELECT grp, first(val) IGNORE NULLS FROM test_ignore_nulls GROUP BY grp ORDER BY grp\n\n-- contrast: default behavior (RESPECT NULLS) - null can appear as first\nquery\nSELECT grp, first(val) FROM test_ignore_nulls GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- first IGNORE NULLS: all values null in a group\n-- ============================================================\n\n-- group 'a' has all nulls -> should return null even with IGNORE NULLS\nquery\nSELECT grp, first(val) IGNORE NULLS FROM test_all_nulls GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- first IGNORE NULLS: empty table\n-- ============================================================\n\nquery\nSELECT first(val) IGNORE NULLS FROM test_empty\n\n-- ============================================================\n-- first IGNORE NULLS: single row\n-- ============================================================\n\nquery\nSELECT first(val) IGNORE NULLS FROM test_single_row\n\nquery\nSELECT first(val) IGNORE NULLS FROM test_single_null\n\n-- ============================================================\n-- first IGNORE NULLS: multiple data types\n-- ============================================================\n\nquery expect_fallback(SortAggregate is not supported)\nSELECT grp,\n  first(i_val) IGNORE NULLS,\n  first(l_val) IGNORE NULLS,\n  first(d_val) IGNORE NULLS,\n  first(s_val) IGNORE NULLS,\n  first(b_val) IGNORE NULLS\nFROM test_types GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- first IGNORE NULLS: nulls at beginning, middle, end\n-- ============================================================\n\nquery\nSELECT grp, first(val) IGNORE NULLS FROM test_null_positions GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- first IGNORE NULLS: boolean parameter form\n-- ============================================================\n\n-- first(val, true) is equivalent to first(val) IGNORE NULLS\nquery\nSELECT grp, first(val, true) FROM test_null_positions GROUP BY grp ORDER BY grp\n\n-- first(val, false) is equivalent to first(val) (default, respect nulls)\nquery\nSELECT grp, first(val, false) FROM test_null_positions GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- first IGNORE NULLS: with expressions producing nulls\n-- ============================================================\n\n-- IF expression produces nulls for even values\nquery\nSELECT grp,\n  first(IF(val % 2 = 0, NULL, val)) IGNORE NULLS\nFROM test_expr_nulls GROUP BY grp ORDER BY grp\n\n-- 
============================================================\n-- first IGNORE NULLS: mixed with other aggregations\n-- ============================================================\n\nquery\nSELECT grp,\n  first(val) IGNORE NULLS,\n  first(val),\n  count(val),\n  sum(val)\nFROM test_ignore_nulls GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- first IGNORE NULLS: with HAVING clause\n-- ============================================================\n\nquery\nSELECT grp, first(val) IGNORE NULLS as f\nFROM test_ignore_nulls\nGROUP BY grp\nHAVING first(val) IGNORE NULLS IS NOT NULL\nORDER BY grp\n\n-- ============================================================\n-- first IGNORE NULLS: decimal type\n-- ============================================================\n\nquery\nSELECT grp, first(val) IGNORE NULLS FROM test_decimal GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- first IGNORE NULLS: date type\n-- ============================================================\n\nquery\nSELECT grp, first(val) IGNORE NULLS FROM test_date GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- first IGNORE NULLS: large group (multi-batch)\n-- ============================================================\n\nquery\nSELECT grp, first(val) IGNORE NULLS FROM test_large GROUP BY grp ORDER BY grp\n\n-- ############################################################\n-- LAST\n-- ############################################################\n\n-- ============================================================\n-- last: basic (default behavior includes nulls)\n-- ============================================================\n\nquery\nSELECT last(i) FROM test_first_last\n\nquery\nSELECT last(i, true) FROM test_first_last\n\nquery\nSELECT grp, last(i) FROM test_first_last GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: basic\n-- ============================================================\n\n-- without grouping\nquery\nSELECT last(val) IGNORE NULLS FROM test_ignore_nulls\n\n-- with grouping\nquery\nSELECT grp, last(val) IGNORE NULLS FROM test_ignore_nulls GROUP BY grp ORDER BY grp\n\n-- contrast: default behavior (RESPECT NULLS) - null can appear as last\nquery\nSELECT grp, last(val) FROM test_ignore_nulls GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: all values null in a group\n-- ============================================================\n\n-- group 'a' has all nulls -> should return null even with IGNORE NULLS\nquery\nSELECT grp, last(val) IGNORE NULLS FROM test_all_nulls GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: empty table\n-- ============================================================\n\nquery\nSELECT last(val) IGNORE NULLS FROM test_empty\n\n-- ============================================================\n-- last IGNORE NULLS: single row\n-- ============================================================\n\nquery\nSELECT last(val) IGNORE NULLS FROM test_single_row\n\nquery\nSELECT last(val) IGNORE NULLS FROM test_single_null\n\n-- ============================================================\n-- last IGNORE NULLS: multiple data types\n-- ============================================================\n\nquery expect_fallback(SortAggregate is not supported)\nSELECT grp,\n  last(i_val) 
IGNORE NULLS,\n  last(l_val) IGNORE NULLS,\n  last(d_val) IGNORE NULLS,\n  last(s_val) IGNORE NULLS,\n  last(b_val) IGNORE NULLS\nFROM test_types GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: nulls at beginning, middle, end\n-- ============================================================\n\nquery\nSELECT grp, last(val) IGNORE NULLS FROM test_null_positions GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: boolean parameter form\n-- ============================================================\n\n-- last(val, true) is equivalent to last(val) IGNORE NULLS\nquery\nSELECT grp, last(val, true) FROM test_null_positions GROUP BY grp ORDER BY grp\n\n-- last(val, false) is equivalent to last(val) (default, respect nulls)\nquery\nSELECT grp, last(val, false) FROM test_null_positions GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: with expressions producing nulls\n-- ============================================================\n\n-- IF expression produces nulls for even values\nquery\nSELECT grp,\n  last(IF(val % 2 = 0, NULL, val)) IGNORE NULLS\nFROM test_expr_nulls GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: mixed with other aggregations\n-- ============================================================\n\nquery\nSELECT grp,\n  last(val) IGNORE NULLS,\n  last(val),\n  count(val),\n  sum(val)\nFROM test_ignore_nulls GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: with HAVING clause\n-- ============================================================\n\nquery\nSELECT grp, last(val) IGNORE NULLS as l\nFROM test_ignore_nulls\nGROUP BY grp\nHAVING last(val) IGNORE NULLS IS NOT NULL\nORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: decimal type\n-- ============================================================\n\nquery\nSELECT grp, last(val) IGNORE NULLS FROM test_decimal GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: date type\n-- ============================================================\n\nquery\nSELECT grp, last(val) IGNORE NULLS FROM test_date GROUP BY grp ORDER BY grp\n\n-- ============================================================\n-- last IGNORE NULLS: large group (multi-batch)\n-- ============================================================\n\nquery\nSELECT grp, last(val) IGNORE NULLS FROM test_large GROUP BY grp ORDER BY grp\n\n-- ############################################################\n-- FIRST + LAST combined\n-- ############################################################\n\n-- ============================================================\n-- first and last together in same query\n-- ============================================================\n\nquery\nSELECT first(i), last(i) FROM test_first_last\n\nquery\nSELECT first(i, true), last(i, true) FROM test_first_last\n\nquery\nSELECT grp, first(i), last(i) FROM test_first_last GROUP BY grp ORDER BY grp\n\nquery\nSELECT grp,\n  first(val) IGNORE NULLS,\n  last(val) IGNORE NULLS\nFROM test_ignore_nulls GROUP BY grp ORDER BY grp\n\nquery\nSELECT grp,\n  first(val) IGNORE NULLS,\n  last(val) IGNORE NULLS,\n  first(val),\n  last(val),\n  count(val),\n  sum(val)\nFROM test_ignore_nulls GROUP 
BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/min_max.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_min_max(i int, d double, s string, grp string) USING parquet\n\nstatement\nINSERT INTO test_min_max VALUES (1, 1.5, 'b', 'x'), (3, 3.5, 'a', 'x'), (2, 2.5, 'c', 'y'), (NULL, NULL, NULL, 'y'), (-1, -1.5, 'z', 'x')\n\nquery expect_fallback(SortAggregate is not supported)\nSELECT min(i), max(i), min(d), max(d), min(s), max(s) FROM test_min_max\n\nquery\nSELECT grp, min(i), max(i) FROM test_min_max GROUP BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/stddev.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_stddev(d double, grp string) USING parquet\n\nstatement\nINSERT INTO test_stddev VALUES (1.0, 'a'), (2.0, 'a'), (3.0, 'a'), (4.0, 'b'), (5.0, 'b'), (NULL, 'b')\n\nquery tolerance=1e-6\nSELECT stddev(d), stddev_samp(d), stddev_pop(d) FROM test_stddev\n\nquery tolerance=1e-6\nSELECT grp, stddev(d) FROM test_stddev GROUP BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/sum.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_sum(i int, l long, f float, d double, grp string) USING parquet\n\nstatement\nINSERT INTO test_sum VALUES (1, 10, 1.5, 1.5, 'a'), (2, 20, 2.5, 2.5, 'a'), (3, 30, 3.5, 3.5, 'b'), (NULL, NULL, NULL, NULL, 'b'), (2147483647, 9223372036854775807, cast('Infinity' as float), cast('Infinity' as double), 'c')\n\nquery\nSELECT sum(i), sum(l) FROM test_sum\n\nquery tolerance=1e-6\nSELECT sum(f), sum(d) FROM test_sum\n\nquery\nSELECT grp, sum(i) FROM test_sum GROUP BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/aggregate/variance.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_variance(d double, grp string) USING parquet\n\nstatement\nINSERT INTO test_variance VALUES (1.0, 'a'), (2.0, 'a'), (3.0, 'a'), (4.0, 'b'), (5.0, 'b'), (NULL, 'b')\n\nquery tolerance=1e-6\nSELECT variance(d), var_samp(d), var_pop(d) FROM test_variance\n\nquery tolerance=1e-6\nSELECT grp, variance(d) FROM test_variance GROUP BY grp ORDER BY grp\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_append.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- On Spark 4.0, array_append is a RuntimeReplaceable that rewrites to array_insert(-1),\n-- so we need to allow the incompatible array_insert to run natively there.\n-- Config: spark.comet.expression.ArrayInsert.allowIncompatible=true\n\nstatement\nCREATE TABLE test_array_append(arr array<int>, val int) USING parquet\n\nstatement\nINSERT INTO test_array_append VALUES (array(1, 2, 3), 4), (array(), 1), (NULL, 1), (array(1, 2), NULL)\n\nquery\nSELECT array_append(arr, val) FROM test_array_append\n\n-- column + literal\nquery\nSELECT array_append(arr, 99) FROM test_array_append\n\n-- literal + column\nquery\nSELECT array_append(array(1, 2, 3), val) FROM test_array_append\n\n-- literal + literal\nquery\nSELECT array_append(array(1, 2, 3), 4), array_append(array(), 1), array_append(cast(NULL as array<int>), 1)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_compact.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n\nstatement\nCREATE TABLE test_array_compact(arr array<int>) USING parquet\n\nstatement\nINSERT INTO test_array_compact VALUES (array(1, NULL, 2, NULL, 3)), (array()), (NULL), (array(NULL, NULL)), (array(1, 2, 3))\n\n-- column argument\nquery\nSELECT array_compact(arr) FROM test_array_compact\n\n-- literal arguments\nquery\nSELECT array_compact(array(1, NULL, 2, NULL, 3))\n\n-- string element type\nstatement\nCREATE TABLE test_array_compact_str(arr array<string>) USING parquet\n\nstatement\nINSERT INTO test_array_compact_str VALUES (array('a', NULL, 'b', NULL, 'c')), (array()), (NULL), (array(NULL, NULL)), (array('', NULL, '', NULL))\n\nquery\nSELECT array_compact(arr) FROM test_array_compact_str\n\n-- double element type\nquery\nSELECT array_compact(array(1.0, NULL, 2.0, NULL, 3.0))\n\n-- nested array type (removes null arrays from outer, preserves null elements in inner)\nquery\nSELECT array_compact(array(array(1, NULL, 3), NULL, array(NULL, 2, 3)))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_concat.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- migrated from CometExpressionSuite \"test concat function - arrays\"\n-- https://github.com/apache/datafusion-comet/issues/2647\n\nstatement\nCREATE TABLE test_array_concat(c1 array<int>, c2 array<int>, c3 array<int>, c4 array<int>, c5 array<int>) USING parquet\n\nstatement\nINSERT INTO test_array_concat VALUES (array(0, 1), array(2, 3), array(), array(null), null), (array(1, 2), array(3, 4), array(), array(null), null), (array(2, 3), array(4, 5), array(), array(null), null)\n\nquery expect_fallback(CONCAT supports only string input parameters)\nSELECT concat(c1, c2) AS x FROM test_array_concat\n\nquery expect_fallback(CONCAT supports only string input parameters)\nSELECT concat(c1, c1) AS x FROM test_array_concat\n\nquery expect_fallback(CONCAT supports only string input parameters)\nSELECT concat(c1, c2, c3) AS x FROM test_array_concat\n\nquery expect_fallback(CONCAT supports only string input parameters)\nSELECT concat(c1, c2, c3, c5) AS x FROM test_array_concat\n\nquery expect_fallback(CONCAT supports only string input parameters)\nSELECT concat(concat(c1, c2, c3), concat(c1, c3)) AS x FROM test_array_concat\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_contains.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_array_contains(arr array<int>, val int) USING parquet\n\nstatement\nINSERT INTO test_array_contains VALUES (array(1, 2, 3), 2), (array(1, 2, 3), 4), (array(1, NULL, 3), NULL), (array(), 1), (NULL, 1)\n\nquery\nSELECT array_contains(arr, val) FROM test_array_contains\n\n-- column + literal\nquery\nSELECT array_contains(arr, 2) FROM test_array_contains\n\n-- literal + column\nquery\nSELECT array_contains(array(1, 2, 3), val) FROM test_array_contains\n\n-- literal + literal\nquery\nSELECT array_contains(array(1, 2, 3), 2), array_contains(array(1, 2, 3), 4), array_contains(array(), 1), array_contains(cast(NULL as array<int>), 1)\n\n-- Additional NULL array tests (issue #3345 fix verification)\n-- NULL array with integer value\nquery\nSELECT array_contains(cast(NULL as array<int>), 1)\n\n-- NULL array with string value\nquery\nSELECT array_contains(cast(NULL as array<string>), 'test')\n\n-- NULL array with NULL value\nquery\nSELECT array_contains(cast(NULL as array<int>), cast(NULL as int))\n\n-- NULL array with column value\nquery\nSELECT array_contains(cast(NULL as array<int>), val) FROM test_array_contains\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_distinct.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ===== INT arrays =====\n\nstatement\nCREATE TABLE test_array_distinct_int(arr array<int>) USING parquet\n\nstatement\nINSERT INTO test_array_distinct_int VALUES\n  (array(1, 2, 2, 3, 3)),\n  (array()),\n  (NULL),\n  (array(NULL, 1, NULL, 2)),\n  (array(1)),\n  (array(NULL, NULL, NULL)),\n  (array(-2147483648, 2147483647, -2147483648, 0)),\n  (array(0, -1, -1, 0, 1))\n\n-- column argument\nquery\nSELECT array_distinct(arr) FROM test_array_distinct_int\n\n-- literal arguments\nquery\nSELECT array_distinct(array(1, 2, 2, 3, 3))\n\n-- all NULLs\nquery\nSELECT array_distinct(array(CAST(NULL AS INT), CAST(NULL AS INT)))\n\n-- NULL input\nquery\nSELECT array_distinct(CAST(NULL AS array<int>))\n\n-- boundary values\nquery\nSELECT array_distinct(array(-2147483648, 2147483647, -2147483648, 2147483647, 0))\n\n-- ===== LONG arrays =====\n\nstatement\nCREATE TABLE test_array_distinct_long(arr array<bigint>) USING parquet\n\nstatement\nINSERT INTO test_array_distinct_long VALUES\n  (array(1, 2, 2, 3, 3)),\n  (NULL),\n  (array(NULL, 1, NULL, 2)),\n  (array(-9223372036854775808, 9223372036854775807, -9223372036854775808))\n\nquery\nSELECT array_distinct(arr) FROM test_array_distinct_long\n\n-- boundary values\nquery\nSELECT array_distinct(array(CAST(-9223372036854775808 AS BIGINT), CAST(9223372036854775807 AS BIGINT), CAST(-9223372036854775808 AS BIGINT)))\n\n-- ===== STRING arrays =====\n\nstatement\nCREATE TABLE test_array_distinct_string(arr array<string>) USING parquet\n\nstatement\nINSERT INTO test_array_distinct_string VALUES\n  (array('b', 'a', 'a', 'c', 'b')),\n  (array('')),\n  (NULL),\n  (array(NULL, 'a', NULL, 'a')),\n  (array('', '', NULL, '')),\n  (array('hello', 'world', 'hello'))\n\nquery\nSELECT array_distinct(arr) FROM test_array_distinct_string\n\n-- empty string and NULL distinction\nquery\nSELECT array_distinct(array('', NULL, '', NULL, 'a'))\n\n-- ===== BOOLEAN arrays =====\n\nstatement\nCREATE TABLE test_array_distinct_bool(arr array<boolean>) USING parquet\n\nstatement\nINSERT INTO test_array_distinct_bool VALUES\n  (array(true, false, false, true)),\n  (array(true, true)),\n  (NULL),\n  (array(NULL, true, NULL, false))\n\nquery\nSELECT array_distinct(arr) FROM test_array_distinct_bool\n\n-- ===== DOUBLE arrays =====\n\nstatement\nCREATE TABLE test_array_distinct_double(arr array<double>) USING parquet\n\nstatement\nINSERT INTO test_array_distinct_double VALUES\n  (array(1.123, 0.1234, 1.121, 1.123, 0.1234)),\n  (NULL),\n  (array(NULL, 1.0, NULL, 2.0))\n\nquery\nSELECT array_distinct(arr) FROM test_array_distinct_double\n\n-- NaN deduplication\nquery\nSELECT array_distinct(array(CAST('NaN' AS DOUBLE), CAST('NaN' AS DOUBLE), 1.0, 1.0))\n\n-- NaN with 
NULL\nquery\nSELECT array_distinct(array(CAST('NaN' AS DOUBLE), NULL, CAST('NaN' AS DOUBLE), NULL, 1.0))\n\n-- Infinity\nquery\nSELECT array_distinct(array(CAST('Infinity' AS DOUBLE), CAST('-Infinity' AS DOUBLE), CAST('Infinity' AS DOUBLE), 0.0))\n\n-- negative zero\nquery\nSELECT array_distinct(array(0.0, -0.0, 1.0))\n\n-- ===== FLOAT arrays =====\n\nstatement\nCREATE TABLE test_array_distinct_float(arr array<float>) USING parquet\n\nstatement\nINSERT INTO test_array_distinct_float VALUES\n  (array(CAST(1.123 AS FLOAT), CAST(0.1234 AS FLOAT), CAST(1.121 AS FLOAT), CAST(1.123 AS FLOAT))),\n  (NULL),\n  (array(CAST(NULL AS FLOAT), CAST(1.0 AS FLOAT), CAST(NULL AS FLOAT)))\n\nquery\nSELECT array_distinct(arr) FROM test_array_distinct_float\n\n-- Float NaN deduplication\nquery\nSELECT array_distinct(array(CAST('NaN' AS FLOAT), CAST('NaN' AS FLOAT), CAST(1.0 AS FLOAT)))\n\n-- ===== DECIMAL arrays =====\n\nstatement\nCREATE TABLE test_array_distinct_decimal(arr array<decimal(10,2)>) USING parquet\n\nstatement\nINSERT INTO test_array_distinct_decimal VALUES\n  (array(1.10, 2.20, 1.10, 3.30)),\n  (NULL),\n  (array(NULL, 1.10, NULL, 1.10))\n\nquery\nSELECT array_distinct(arr) FROM test_array_distinct_decimal\n\n-- ===== Nested array (array of arrays) =====\n\nquery\nSELECT array_distinct(array(array(1, 2), array(3, 4), array(1, 2), array(3, 4)))\n\nquery\nSELECT array_distinct(array(array(1, 2), CAST(NULL AS array<int>), array(1, 2), CAST(NULL AS array<int>)))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_except.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.expression.ArrayExcept.allowIncompatible=true\n\nstatement\nCREATE TABLE test_array_except(a array<int>, b array<int>) USING parquet\n\nstatement\nINSERT INTO test_array_except VALUES (array(1, 2, 3), array(2, 3, 4)), (array(1, 2), array()), (array(), array(1)), (NULL, array(1)), (array(1, NULL), array(NULL))\n\nquery\nSELECT array_except(a, b) FROM test_array_except\n\n-- column + literal\nquery\nSELECT array_except(a, array(2, 3)) FROM test_array_except\n\n-- literal + column\nquery\nSELECT array_except(array(1, 2, 3), b) FROM test_array_except\n\n-- literal + literal\nquery ignore(https://github.com/apache/datafusion-comet/issues/3646)\nSELECT array_except(array(1, 2, 3), array(2, 3, 4)), array_except(array(1, 2), array()), array_except(array(), array(1)), array_except(cast(NULL as array<int>), array(1))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_filter.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_array_filter(arr array<int>) USING parquet\n\nstatement\nINSERT INTO test_array_filter VALUES (array(1, 2, 3, 4, 5)), (array(-1, 0, 1)), (array(10)), (NULL)\n\nquery spark_answer_only\nSELECT filter(arr, x -> x > 2) FROM test_array_filter\n\nquery spark_answer_only\nSELECT filter(arr, x -> x >= 0) FROM test_array_filter\n\nquery spark_answer_only\nSELECT filter(arr, (x, i) -> i > 0) FROM test_array_filter\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_insert.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ConfigMatrix: parquet.enable.dictionary=false,true\n-- Config: spark.comet.expression.ArrayInsert.allowIncompatible=true\n\n-- ============================================================\n-- Integer arrays with column arguments\n-- ============================================================\n\nstatement\nCREATE TABLE test_array_insert(arr array<int>, pos int, val int) USING parquet\n\nstatement\nINSERT INTO test_array_insert VALUES\n  (array(1, 2, 3), 2, 10),\n  (array(1, 2, 3), 1, 10),\n  (array(1, 2, 3), 4, 10),\n  (array(), 1, 10),\n  (NULL, 1, 10),\n  (array(1, 2, 3), NULL, 10),\n  (array(1, 2, 3), 3, NULL)\n\n-- basic column arguments\nquery\nSELECT array_insert(arr, pos, val) FROM test_array_insert\n\n-- ============================================================\n-- Literal arguments (all-literal queries test native eval\n-- because CometSqlFileTestSuite disables constant folding)\n-- ============================================================\n\n-- basic insert at various positions\nquery\nSELECT array_insert(array(1, 2, 3), 2, 10)\n\n-- prepend (pos=1)\nquery\nSELECT array_insert(array(1, 2, 3), 1, 10)\n\n-- append (pos = len+1)\nquery\nSELECT array_insert(array(1, 2, 3), 4, 10)\n\n-- insert into empty array\nquery\nSELECT array_insert(array(), 1, 10)\n\n-- ============================================================\n-- NULL handling\n-- ============================================================\n\n-- null array\nquery\nSELECT array_insert(CAST(NULL AS ARRAY<INT>), 1, 10)\n\n-- null position\nquery\nSELECT array_insert(array(1, 2, 3), CAST(NULL AS INT), 10)\n\n-- null value (insert null into array)\nquery\nSELECT array_insert(array(1, 2, 3), 2, CAST(NULL AS INT))\n\n-- null value appended beyond end\nquery\nSELECT array_insert(array(1, 2, 3, NULL), 4, CAST(NULL AS INT))\n\n-- array with existing nulls\nquery\nSELECT array_insert(array(1, NULL, 3), 2, 10)\n\n-- ============================================================\n-- Positive out-of-bounds (null padding)\n-- ============================================================\n\n-- one beyond end\nquery\nSELECT array_insert(array(1, 2, 3), 5, 99)\n\n-- far beyond end\nquery\nSELECT array_insert(array(1, 2, 3), 10, 99)\n\n-- ============================================================\n-- Negative indices (non-legacy mode, which is the default)\n-- ============================================================\n\n-- -1 appends after last element\nquery\nSELECT array_insert(array(1, 2, 3), -1, 10)\n\n-- -2 inserts before last element\nquery\nSELECT array_insert(array(1, 2, 3), -2, 10)\n\n-- -3 inserts before second-to-last\nquery\nSELECT array_insert(array(1, 2, 3), -3, 10)\n\n-- -4 inserts before first 
element (len = 3, so -4 = before start)\nquery\nSELECT array_insert(array(1, 2, 3), -4, 10)\n\n-- negative beyond start (null padding)\nquery\nSELECT array_insert(array(1, 2, 3), -5, 10)\n\n-- far negative beyond start\nquery\nSELECT array_insert(array(1, 2, 3), -10, 10)\n\n-- ============================================================\n-- pos=0 error\n-- Note: Spark throws INVALID_INDEX_OF_ZERO, Comet throws a different message.\n-- Cannot use expect_error because patterns differ. This is a known incompatibility.\n-- ============================================================\n\n-- ============================================================\n-- String arrays\n-- ============================================================\n\nstatement\nCREATE TABLE test_array_insert_str(arr array<string>, pos int, val string) USING parquet\n\nstatement\nINSERT INTO test_array_insert_str VALUES\n  (array('a', 'b', 'c'), 2, 'd'),\n  (array('a', 'b', 'c'), 1, 'd'),\n  (array('a', 'b', 'c'), 4, 'd'),\n  (array(), 1, 'x'),\n  (NULL, 1, 'x'),\n  (array('a', NULL, 'c'), 2, 'z'),\n  (array('a', 'b', 'c'), 3, NULL)\n\n-- column arguments with strings\nquery\nSELECT array_insert(arr, pos, val) FROM test_array_insert_str\n\n-- literal string arrays\nquery\nSELECT array_insert(array('hello', 'world'), 1, 'first')\n\nquery\nSELECT array_insert(array('hello', 'world'), 3, 'last')\n\n-- empty strings\nquery\nSELECT array_insert(array('', 'a', ''), 2, '')\n\n-- multibyte UTF-8 characters\nquery\nSELECT array_insert(array('hello', 'world'), 2, 'cafe\\u0301')\n\nquery\nSELECT array_insert(array('abc', 'def'), 1, '中文')\n\n-- ============================================================\n-- Boolean arrays\n-- ============================================================\n\nstatement\nCREATE TABLE test_array_insert_bool(arr array<boolean>, pos int, val boolean) USING parquet\n\nstatement\nINSERT INTO test_array_insert_bool VALUES\n  (array(true, false, true), 2, false),\n  (array(true, false), 3, true),\n  (NULL, 1, true),\n  (array(true), 1, NULL)\n\nquery\nSELECT array_insert(arr, pos, val) FROM test_array_insert_bool\n\nquery\nSELECT array_insert(array(true, false, true), 3, true)\n\n-- ============================================================\n-- Double arrays\n-- ============================================================\n\nstatement\nCREATE TABLE test_array_insert_double(arr array<double>, pos int, val double) USING parquet\n\nstatement\nINSERT INTO test_array_insert_double VALUES\n  (array(1.1, 2.2, 3.3), 2, 4.4),\n  (array(1.1, 2.2), 3, 0.0),\n  (NULL, 1, 1.0),\n  (array(1.1, 2.2), 1, NULL)\n\nquery\nSELECT array_insert(arr, pos, val) FROM test_array_insert_double\n\n-- special float values\nquery\nSELECT array_insert(array(CAST(1.0 AS DOUBLE), CAST(2.0 AS DOUBLE)), 2, CAST('NaN' AS DOUBLE))\n\nquery\nSELECT array_insert(array(CAST(1.0 AS DOUBLE), CAST(2.0 AS DOUBLE)), 2, CAST('Infinity' AS DOUBLE))\n\nquery\nSELECT array_insert(array(CAST(1.0 AS DOUBLE), CAST(2.0 AS DOUBLE)), 2, CAST('-Infinity' AS DOUBLE))\n\n-- negative zero\nquery\nSELECT array_insert(array(CAST(1.0 AS DOUBLE), CAST(2.0 AS DOUBLE)), 1, CAST(-0.0 AS DOUBLE))\n\n-- ============================================================\n-- Long arrays\n-- ============================================================\n\nquery\nSELECT array_insert(array(CAST(1 AS BIGINT), CAST(2 AS BIGINT)), 2, CAST(3 AS BIGINT))\n\n-- ============================================================\n-- Short arrays\n-- 
============================================================\n\nquery\nSELECT array_insert(array(CAST(1 AS SMALLINT), CAST(2 AS SMALLINT)), 2, CAST(3 AS SMALLINT))\n\n-- ============================================================\n-- Byte arrays\n-- ============================================================\n\nquery\nSELECT array_insert(array(CAST(1 AS TINYINT), CAST(2 AS TINYINT)), 2, CAST(3 AS TINYINT))\n\n-- ============================================================\n-- Float arrays\n-- ============================================================\n\nquery\nSELECT array_insert(array(CAST(1.1 AS FLOAT), CAST(2.2 AS FLOAT)), 2, CAST(3.3 AS FLOAT))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_insert_legacy.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Tests for array_insert with legacy negative index mode enabled.\n-- In legacy mode, -1 inserts BEFORE the last element (not after).\n\n-- ConfigMatrix: parquet.enable.dictionary=false,true\n-- Config: spark.comet.expression.ArrayInsert.allowIncompatible=true\n-- Config: spark.sql.legacy.negativeIndexInArrayInsert=true\n\n-- -1 inserts before last element in legacy mode\nquery\nSELECT array_insert(array(1, 2, 3), -1, 10)\n\n-- -2 inserts before second-to-last\nquery\nSELECT array_insert(array(1, 2, 3), -2, 10)\n\n-- -3 inserts before first element\nquery\nSELECT array_insert(array(1, 2, 3), -3, 10)\n\n-- negative beyond start with null padding (legacy mode pads differently)\nquery\nSELECT array_insert(array(1, 2, 3), -5, 10)\n\n-- far negative beyond start\nquery\nSELECT array_insert(array(1, 3, 4), -2, 2)\n\n-- column-based test\nstatement\nCREATE TABLE test_ai_legacy(arr array<int>, pos int, val int) USING parquet\n\nstatement\nINSERT INTO test_ai_legacy VALUES\n  (array(1, 2, 3), -1, 10),\n  (array(4, 5), -1, 20),\n  (array(1, 2, 3), -4, 10),\n  (NULL, -1, 10)\n\nquery\nSELECT array_insert(arr, pos, val) FROM test_ai_legacy\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_intersect.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.expression.ArrayIntersect.allowIncompatible=true\n\nstatement\nCREATE TABLE test_array_intersect(a array<int>, b array<int>) USING parquet\n\nstatement\nINSERT INTO test_array_intersect VALUES (array(1, 2, 3), array(2, 3, 4)), (array(1, 2), array(3, 4)), (array(), array(1)), (NULL, array(1)), (array(1, NULL), array(NULL, 2))\n\nquery\nSELECT array_intersect(a, b) FROM test_array_intersect\n\n-- column + literal\nquery\nSELECT array_intersect(a, array(2, 3)) FROM test_array_intersect\n\n-- literal + column\nquery\nSELECT array_intersect(array(1, 2, 3), b) FROM test_array_intersect\n\n-- literal + literal\nquery\nSELECT array_intersect(array(1, 2, 3), array(2, 3, 4)), array_intersect(array(1, 2), array(3, 4)), array_intersect(array(), array(1)), array_intersect(cast(NULL as array<int>), array(1))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_join.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_array_join(arr array<string>) USING parquet\n\nstatement\nINSERT INTO test_array_join VALUES (array('a', 'b', 'c')), (array('hello', 'world')), (array()), (NULL), (array('a', NULL, 'c'))\n\nquery spark_answer_only\nSELECT array_join(arr, ',') FROM test_array_join\n\nquery spark_answer_only\nSELECT array_join(arr, ',', 'NULL') FROM test_array_join\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_max.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_array_max(arr array<int>) USING parquet\n\nstatement\nINSERT INTO test_array_max VALUES (array(1, 2, 3)), (array(3, 1, 2)), (array()), (NULL), (array(NULL, 1, 2)), (array(-1, -2, -3))\n\nquery spark_answer_only\nSELECT array_max(arr) FROM test_array_max\n\n-- literal arguments\nquery\nSELECT array_max(array(1, 2, 3)), array_max(array()), array_max(cast(NULL as array<int>))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_min.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_array_min(arr array<int>) USING parquet\n\nstatement\nINSERT INTO test_array_min VALUES (array(1, 2, 3)), (array(3, 1, 2)), (array()), (NULL), (array(NULL, 1, 2)), (array(-1, -2, -3))\n\nquery spark_answer_only\nSELECT array_min(arr) FROM test_array_min\n\n-- literal arguments\nquery\nSELECT array_min(array(1, 2, 3)), array_min(array()), array_min(cast(NULL as array<int>))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_remove.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.expression.ArrayRemove.allowIncompatible=true\n\nstatement\nCREATE TABLE test_array_remove(arr array<int>, val int) USING parquet\n\nstatement\nINSERT INTO test_array_remove VALUES (array(1, 2, 3, 2), 2), (array(1, 2, 3), 4), (array(), 1), (NULL, 1), (array(1, NULL, 3), NULL)\n\nquery\nSELECT array_remove(arr, val) FROM test_array_remove\n\n-- column + literal\nquery\nSELECT array_remove(arr, 2) FROM test_array_remove\n\n-- literal + column\nquery\nSELECT array_remove(array(1, 2, 3, 2), val) FROM test_array_remove\n\n-- literal + literal\nquery\nSELECT array_remove(array(1, 2, 3, 2), 2), array_remove(array(1, 2, 3), 4), array_remove(array(), 1), array_remove(cast(NULL as array<int>), 1)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_repeat.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_array_repeat(val int, cnt int) USING parquet\n\nstatement\nINSERT INTO test_array_repeat VALUES (1, 3), (NULL, 3), (1, 0), (1, -1), (1, NULL)\n\nquery\nSELECT array_repeat(val, cnt) FROM test_array_repeat\n\n-- column + literal\nquery\nSELECT array_repeat(val, 3) FROM test_array_repeat\n\n-- literal + column\nquery\nSELECT array_repeat(1, cnt) FROM test_array_repeat\n\n-- literal + literal\nquery\nSELECT array_repeat(1, 3), array_repeat(NULL, 3), array_repeat(1, 0), array_repeat(1, -1)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/array_union.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.expression.ArrayUnion.allowIncompatible=true\n\nstatement\nCREATE TABLE test_array_union(a array<int>, b array<int>) USING parquet\n\nstatement\nINSERT INTO test_array_union VALUES (array(1, 2, 3), array(3, 4, 5)), (array(1, 2), array()), (array(), array(1)), (NULL, array(1)), (array(1, NULL), array(NULL, 2))\n\nquery ignore(https://github.com/apache/datafusion-comet/issues/3644)\nSELECT array_union(a, b) FROM test_array_union\n\n-- column + literal\nquery ignore(https://github.com/apache/datafusion-comet/issues/3644)\nSELECT array_union(a, array(3, 4, 5)) FROM test_array_union\n\n-- literal + column\nquery\nSELECT array_union(array(1, 2, 3), b) FROM test_array_union\n\n-- literal + literal\nquery\nSELECT array_union(array(1, 2, 3), array(3, 4, 5)), array_union(array(1, 2), array()), array_union(array(), array(1)), array_union(cast(NULL as array<int>), array(1))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/arrays_overlap.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.expression.ArraysOverlap.allowIncompatible=true\n\nstatement\nCREATE TABLE test_arrays_overlap(a array<int>, b array<int>) USING parquet\n\nstatement\nINSERT INTO test_arrays_overlap VALUES (array(1, 2, 3), array(3, 4, 5)), (array(1, 2), array(3, 4)), (array(), array(1)), (NULL, array(1)), (array(1, NULL), array(NULL, 2))\n\nquery ignore(https://github.com/apache/datafusion-comet/issues/3645)\nSELECT arrays_overlap(a, b) FROM test_arrays_overlap\n\n-- column + literal\nquery ignore(https://github.com/apache/datafusion-comet/issues/3645)\nSELECT arrays_overlap(a, array(3, 4, 5)) FROM test_arrays_overlap\n\n-- literal + column\nquery\nSELECT arrays_overlap(array(1, 2, 3), b) FROM test_arrays_overlap\n\n-- literal + literal\nquery\nSELECT arrays_overlap(array(1, 2, 3), array(3, 4, 5)), arrays_overlap(array(1, 2), array(3, 4)), arrays_overlap(array(), array(1)), arrays_overlap(cast(NULL as array<int>), array(1))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/create_array.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_create_array(a int, b int, c int) USING parquet\n\nstatement\nINSERT INTO test_create_array VALUES (1, 2, 3), (NULL, 2, 3), (NULL, NULL, NULL)\n\nquery\nSELECT array(a, b, c) FROM test_create_array\n\nquery\nSELECT array(1, 2, 3, NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/element_at.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_element_at(arr array<int>) USING parquet\n\nstatement\nINSERT INTO test_element_at VALUES (array(1, 2, 3)), (array(10)), (NULL)\n\nquery spark_answer_only\nSELECT element_at(arr, 1), element_at(arr, -1) FROM test_element_at\n\n-- literal arguments\nquery ignore(Spark codegen bug with literal element_at when constant folding is disabled)\nSELECT element_at(array(1, 2, 3), 1), element_at(array(1, 2, 3), -1)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/element_at_ansi.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ANSI mode element_at tests\n-- Tests that element_at throws exceptions for out-of-bounds access in ANSI mode\n-- Note: element_at uses 1-based indexing\n\n-- Config: spark.sql.ansi.enabled=true\n\n-- ============================================================================\n-- Test data setup\n-- ============================================================================\n\nstatement\nCREATE TABLE ansi_element_at_oob(arr array<int>) USING parquet\n\nstatement\nINSERT INTO ansi_element_at_oob VALUES (array(1, 2, 3))\n\n-- ============================================================================\n-- element_at index out of bounds (positive index)\n-- Spark throws: [INVALID_ARRAY_INDEX_IN_ELEMENT_AT] ...\n-- Comet throws: Index out of bounds for array\n-- See https://github.com/apache/datafusion-comet/issues/3375\n-- ============================================================================\n\n-- index beyond array length should throw (1-based indexing)\nquery ignore(https://github.com/apache/datafusion-comet/issues/3375)\nSELECT element_at(arr, 10) FROM ansi_element_at_oob\n\n-- literal array with out of bounds access\nquery ignore(https://github.com/apache/datafusion-comet/issues/3375)\nSELECT element_at(array(1, 2, 3), 5)\n\n-- ============================================================================\n-- element_at with index 0 (invalid)\n-- Spark throws: [INVALID_INDEX_OF_ZERO] The index 0 is invalid\n-- Comet throws: different error message\n-- See https://github.com/apache/datafusion-comet/issues/3375\n-- ============================================================================\n\n-- index 0 is not valid for element_at (1-based indexing)\nquery ignore(https://github.com/apache/datafusion-comet/issues/3375)\nSELECT element_at(arr, 0) FROM ansi_element_at_oob\n\n-- literal with index 0\nquery ignore(https://github.com/apache/datafusion-comet/issues/3375)\nSELECT element_at(array(1, 2, 3), 0)\n\n-- ============================================================================\n-- element_at index out of bounds (negative index beyond array)\n-- ============================================================================\n\n-- negative index beyond array size should throw\nquery ignore(https://github.com/apache/datafusion-comet/issues/3375)\nSELECT element_at(arr, -10) FROM ansi_element_at_oob\n\n-- literal with negative out of bounds\nquery ignore(https://github.com/apache/datafusion-comet/issues/3375)\nSELECT element_at(array(1, 2, 3), -5)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/flatten.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_flatten(arr array<array<int>>) USING parquet\n\nstatement\nINSERT INTO test_flatten VALUES (array(array(1, 2), array(3, 4))), (array(array(), array(1))), (array()), (NULL), (array(array(1, NULL), array(NULL)))\n\nquery spark_answer_only\nSELECT flatten(arr) FROM test_flatten\n\n-- literal arguments\nquery spark_answer_only\nSELECT flatten(array(array(1, 2), array(3, 4)))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/get_array_item.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_get_array_item(arr array<int>, idx int) USING parquet\n\nstatement\nINSERT INTO test_get_array_item VALUES (array(10, 20, 30), 0), (array(10, 20, 30), 1), (array(10, 20, 30), 2), (array(1), 0), (NULL, 0), (array(10, 20), NULL)\n\nquery\nSELECT arr[0], arr[1], arr[2] FROM test_get_array_item\n\nquery\nSELECT arr[idx] FROM test_get_array_item\n\n-- literal arguments\nquery\nSELECT array(10, 20, 30)[0], array(10, 20, 30)[2], array()[0]\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/get_array_item_ansi.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ANSI mode array index access tests\n-- Tests that array[index] throws exceptions for out-of-bounds access in ANSI mode\n\n-- Config: spark.sql.ansi.enabled=true\n\n-- ============================================================================\n-- Test data setup\n-- ============================================================================\n\nstatement\nCREATE TABLE ansi_array_oob(arr array<int>) USING parquet\n\nstatement\nINSERT INTO ansi_array_oob VALUES (array(1, 2, 3))\n\n-- ============================================================================\n-- Array index out of bounds (positive index)\n-- Spark throws: [INVALID_ARRAY_INDEX] The index X is out of bounds\n-- Comet throws: Index out of bounds for array\n-- See https://github.com/apache/datafusion-comet/issues/3375\n-- ============================================================================\n\n-- index beyond array length should throw (0-based indexing)\nquery ignore(https://github.com/apache/datafusion-comet/issues/3375)\nSELECT arr[10] FROM ansi_array_oob\n\n-- literal array with out of bounds access\nquery ignore(https://github.com/apache/datafusion-comet/issues/3375)\nSELECT array(1, 2, 3)[5]\n\n-- ============================================================================\n-- Array index out of bounds (negative index)\n-- ============================================================================\n\n-- negative index should throw\nquery ignore(https://github.com/apache/datafusion-comet/issues/3375)\nSELECT arr[-1] FROM ansi_array_oob\n\n-- literal with negative index\nquery ignore(https://github.com/apache/datafusion-comet/issues/3375)\nSELECT array(1, 2, 3)[-1]\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/get_array_struct_fields.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_arr_struct(arr array<struct<name: string, value: int>>) USING parquet\n\nstatement\nINSERT INTO test_arr_struct VALUES (array(named_struct('name', 'a', 'value', 1), named_struct('name', 'b', 'value', 2))), (array(named_struct('name', 'x', 'value', 10))), (NULL)\n\nquery spark_answer_only\nSELECT arr.name, arr.value FROM test_arr_struct\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/size.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_size(arr array<int>, m map<string, int>) USING parquet\n\nstatement\nINSERT INTO test_size VALUES (array(1, 2, 3), map('a', 1, 'b', 2)), (array(), map()), (NULL, NULL)\n\nquery spark_answer_only\nSELECT size(arr), size(m) FROM test_size\n\n-- literal arguments\nquery\nSELECT size(array(1, 2, 3)), size(array()), size(cast(NULL as array<int>))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/array/sort_array.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ConfigMatrix: parquet.enable.dictionary=false,true\n\nstatement\nCREATE TABLE test_sort_array_int(arr array<int>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_int VALUES\n  (array(3, 1, 4, 1, 5)),\n  (array(3, NULL, 1, NULL, 2)),\n  (array()),\n  (NULL)\n\nquery\nSELECT sort_array(arr) FROM test_sort_array_int\n\nquery\nSELECT sort_array(arr, true) FROM test_sort_array_int\n\nquery\nSELECT sort_array(arr, false) FROM test_sort_array_int\n\nstatement\nCREATE TABLE test_sort_array_string(arr array<string>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_string VALUES\n  (array('d', 'c', 'b', 'a')),\n  (array('b', NULL, 'a')),\n  (array()),\n  (NULL)\n\nquery\nSELECT sort_array(arr) FROM test_sort_array_string\n\nquery\nSELECT sort_array(arr, true) FROM test_sort_array_string\n\nquery\nSELECT sort_array(arr, false) FROM test_sort_array_string\n\nstatement\nCREATE TABLE test_sort_array_double(arr array<double>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_double VALUES\n  (array(\n    CAST('Infinity' AS DOUBLE),\n    CAST('-Infinity' AS DOUBLE),\n    CAST('NaN' AS DOUBLE),\n    3.0,\n    1.0,\n    NULL,\n    -0.0,\n    0.0)),\n  (array(\n    CAST('NaN' AS DOUBLE),\n    CAST('NaN' AS DOUBLE),\n    CAST('Infinity' AS DOUBLE),\n    CAST('-Infinity' AS DOUBLE),\n    -5.0,\n    2.0)),\n  (array()),\n  (NULL)\n\nquery\nSELECT sort_array(arr) FROM test_sort_array_double\n\nquery\nSELECT sort_array(arr, true) FROM test_sort_array_double\n\nquery\nSELECT sort_array(arr, false) FROM test_sort_array_double\n\nstatement\nCREATE TABLE test_sort_array_float(arr array<float>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_float VALUES\n  (array(\n    CAST('Infinity' AS FLOAT),\n    CAST('-Infinity' AS FLOAT),\n    CAST('NaN' AS FLOAT),\n    CAST(3.0 AS FLOAT),\n    CAST(1.0 AS FLOAT),\n    CAST(NULL AS FLOAT),\n    CAST(-0.0 AS FLOAT),\n    CAST(0.0 AS FLOAT))),\n  (array(\n    CAST('NaN' AS FLOAT),\n    CAST('Infinity' AS FLOAT),\n    CAST('-Infinity' AS FLOAT),\n    CAST(-5.0 AS FLOAT),\n    CAST(2.0 AS FLOAT))),\n  (array()),\n  (NULL)\n\nquery\nSELECT sort_array(arr) FROM test_sort_array_float\n\nquery\nSELECT sort_array(arr, true) FROM test_sort_array_float\n\nquery\nSELECT sort_array(arr, false) FROM test_sort_array_float\n\nstatement\nCREATE TABLE test_sort_array_decimal(arr array<decimal(12, 3)>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_decimal VALUES\n  (CAST(array(CAST(100 AS DECIMAL(10, 0)), CAST(10 AS DECIMAL(10, 0))) AS array<decimal(12, 3)>)),\n  (CAST(array(\n    CAST(1 AS DECIMAL(10, 0)),\n    CAST(1.0 AS DECIMAL(10, 1)),\n    CAST(1.00 AS DECIMAL(10, 2)),\n    CAST(1.000 AS DECIMAL(10, 3))) AS array<decimal(12, 
3)>)),\n  (array()),\n  (NULL)\n\nquery\nSELECT sort_array(arr) FROM test_sort_array_decimal\n\nquery\nSELECT sort_array(arr, true) FROM test_sort_array_decimal\n\nquery\nSELECT sort_array(arr, false) FROM test_sort_array_decimal\n\nstatement\nCREATE TABLE test_sort_array_boolean(arr array<boolean>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_boolean VALUES\n  (array(true, false, true, false)),\n  (array(true, false, true, NULL, false)),\n  (array()),\n  (NULL)\n\nquery\nSELECT sort_array(arr) FROM test_sort_array_boolean\n\nquery\nSELECT sort_array(arr, true) FROM test_sort_array_boolean\n\nquery\nSELECT sort_array(arr, false) FROM test_sort_array_boolean\n\nstatement\nCREATE TABLE test_sort_array_date(arr array<date>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_date VALUES\n  (array(DATE '2026-01-03', DATE '2026-01-01', DATE '2026-01-02')),\n  (array(DATE '2026-01-02', NULL, DATE '2026-01-01')),\n  (array()),\n  (NULL)\n\nquery\nSELECT sort_array(arr) FROM test_sort_array_date\n\nquery\nSELECT sort_array(arr, true) FROM test_sort_array_date\n\nquery\nSELECT sort_array(arr, false) FROM test_sort_array_date\n\nstatement\nCREATE TABLE test_sort_array_timestamp(arr array<timestamp>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_timestamp VALUES\n  (array(\n    TIMESTAMP '2026-01-03 01:00:00',\n    TIMESTAMP '2026-01-01 02:00:00',\n    TIMESTAMP '2026-01-02 03:00:00')),\n  (array(\n    TIMESTAMP '2026-01-02 00:00:00',\n    NULL,\n    TIMESTAMP '2026-01-01 00:00:00')),\n  (array()),\n  (NULL)\n\nquery\nSELECT sort_array(arr) FROM test_sort_array_timestamp\n\nquery\nSELECT sort_array(arr, true) FROM test_sort_array_timestamp\n\nquery\nSELECT sort_array(arr, false) FROM test_sort_array_timestamp\n\nstatement\nCREATE TABLE test_sort_array_binary(arr array<binary>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_binary VALUES\n  (array(unhex('FF'), unhex('00'), unhex('0A'))),\n  (array(unhex('0B'), NULL, unhex('01'))),\n  (array()),\n  (NULL)\n\nquery\nSELECT\n  hex(element_at(sorted_arr, 1)),\n  hex(element_at(sorted_arr, 2)),\n  hex(element_at(sorted_arr, 3))\nFROM (\n  SELECT sort_array(arr) AS sorted_arr\n  FROM test_sort_array_binary\n)\n\nquery\nSELECT\n  hex(element_at(sorted_arr, 1)),\n  hex(element_at(sorted_arr, 2)),\n  hex(element_at(sorted_arr, 3))\nFROM (\n  SELECT sort_array(arr, true) AS sorted_arr\n  FROM test_sort_array_binary\n)\n\nquery\nSELECT\n  hex(element_at(sorted_arr, 1)),\n  hex(element_at(sorted_arr, 2)),\n  hex(element_at(sorted_arr, 3))\nFROM (\n  SELECT sort_array(arr, false) AS sorted_arr\n  FROM test_sort_array_binary\n)\n\nstatement\nCREATE TABLE test_sort_array_struct(arr array<struct<a:int,b:string>>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_struct VALUES\n  (array(\n    named_struct('a', 2, 'b', 'b'),\n    named_struct('a', 1, 'b', 'c'),\n    named_struct('a', 1, 'b', 'a'))),\n  (array(\n    named_struct('a', 2, 'b', NULL),\n    named_struct('a', 1, 'b', 'z'),\n    named_struct('a', 1, 'b', NULL))),\n  (array()),\n  (NULL)\n\nquery\nSELECT sort_array(arr) FROM test_sort_array_struct\n\nquery\nSELECT sort_array(arr, false) FROM test_sort_array_struct\n\nstatement\nCREATE TABLE test_sort_array_nested(arr array<array<int>>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_nested VALUES\n  (array(array(2, 3), array(1), array(2, 1))),\n  (array(array(1, NULL), array(1), NULL)),\n  (array()),\n  (NULL)\n\nquery\nSELECT sort_array(arr) FROM test_sort_array_nested\n\nquery\nSELECT sort_array(arr, false) 
FROM test_sort_array_nested\n\nstatement\nCREATE TABLE test_sort_array_nested_struct(arr array<array<struct<a:int>>>) USING parquet\n\nstatement\nINSERT INTO test_sort_array_nested_struct VALUES\n  (array(\n    array(named_struct('a', 2)),\n    array(named_struct('a', 1)))),\n  (array()),\n  (NULL)\n\nquery expect_fallback(Sort on array element type ArrayType(StructType(StructField(a,IntegerType)\nSELECT sort_array(arr) FROM test_sort_array_nested_struct\n\nquery expect_fallback(Sort on array element type ArrayType(StructType(StructField(a,IntegerType)\nSELECT sort_array(arr, false) FROM test_sort_array_nested_struct\n\n-- literal arguments\nquery\nSELECT\n  sort_array(array(3, 1, 4, 1, 5)),\n  sort_array(array(3, 1, 4, 1, 5), true),\n  sort_array(array(3, NULL, 1, NULL, 2)),\n  sort_array(array(3, NULL, 1, NULL, 2), false),\n  sort_array(\n    array(\n      CAST('Infinity' AS DOUBLE),\n      CAST('-Infinity' AS DOUBLE),\n      CAST('NaN' AS DOUBLE),\n      1.0,\n      NULL,\n      -0.0,\n      0.0)),\n  sort_array(\n    array(\n      CAST('Infinity' AS DOUBLE),\n      CAST('-Infinity' AS DOUBLE),\n      CAST('NaN' AS DOUBLE),\n      1.0,\n      NULL,\n      -0.0,\n      0.0),\n    false),\n  sort_array(\n    array(\n      CAST('Infinity' AS FLOAT),\n      CAST('-Infinity' AS FLOAT),\n      CAST('NaN' AS FLOAT),\n      CAST(1.0 AS FLOAT),\n      CAST(NULL AS FLOAT),\n      CAST(-0.0 AS FLOAT),\n      CAST(0.0 AS FLOAT))),\n  sort_array(\n    array(\n      CAST('Infinity' AS FLOAT),\n      CAST('-Infinity' AS FLOAT),\n      CAST('NaN' AS FLOAT),\n      CAST(1.0 AS FLOAT),\n      CAST(NULL AS FLOAT),\n      CAST(-0.0 AS FLOAT),\n      CAST(0.0 AS FLOAT)),\n    false),\n  sort_array(\n    CAST(array(\n      CAST(100 AS DECIMAL(10, 0)),\n      CAST(10 AS DECIMAL(10, 0)),\n      CAST(1 AS DECIMAL(10, 0)),\n      CAST(1.00 AS DECIMAL(10, 2))) AS array<decimal(12, 3)>)),\n  sort_array(\n    CAST(array(\n      CAST(100 AS DECIMAL(10, 0)),\n      CAST(10 AS DECIMAL(10, 0)),\n      CAST(1 AS DECIMAL(10, 0)),\n      CAST(1.00 AS DECIMAL(10, 2))) AS array<decimal(12, 3)>),\n    false),\n  sort_array(array(true, false, true, false)),\n  sort_array(array(true, false, true, NULL, false)),\n  sort_array(array(true, false, true, NULL, false), false),\n  sort_array(array(DATE '2026-01-03', DATE '2026-01-01', DATE '2026-01-02')),\n  sort_array(array(DATE '2026-01-02', NULL, DATE '2026-01-01'), false),\n  sort_array(\n    array(\n      TIMESTAMP '2026-01-03 01:00:00',\n      TIMESTAMP '2026-01-01 02:00:00',\n      TIMESTAMP '2026-01-02 03:00:00')),\n  sort_array(\n    array(\n      TIMESTAMP '2026-01-02 00:00:00',\n      NULL,\n      TIMESTAMP '2026-01-01 00:00:00'),\n    false),\n  hex(element_at(sort_array(array(unhex('FF'), unhex('00'), unhex('0A'))), 1)),\n  hex(element_at(sort_array(array(unhex('FF'), unhex('00'), unhex('0A'))), 2)),\n  hex(element_at(sort_array(array(unhex('FF'), unhex('00'), unhex('0A'))), 3)),\n  hex(element_at(sort_array(array(unhex('0B'), NULL, unhex('01')), false), 1)),\n  hex(element_at(sort_array(array(unhex('0B'), NULL, unhex('01')), false), 2)),\n  hex(element_at(sort_array(array(unhex('0B'), NULL, unhex('01')), false), 3)),\n  sort_array(\n    array(\n      named_struct('a', 2, 'b', 'b'),\n      named_struct('a', 1, 'b', 'c'),\n      named_struct('a', 1, 'b', 'a'))),\n  sort_array(array(array(2, 3), array(1), array(2, 1))),\n  sort_array(array(array(1, NULL), array(1), NULL)),\n  sort_array(array(NULL, NULL)),\n  sort_array(cast(NULL as array<int>))\n\nquery 
expect_fallback(Sort on array element type ArrayType(StructType(StructField(a,IntegerType)\nSELECT sort_array(\n  array(\n    array(named_struct('a', 2)),\n    array(named_struct('a', 1))))\n\nquery expect_error(BOOLEAN)\nSELECT sort_array(array(3, 1, 4, 1, 5), 1)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/bitwise/bitwise.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Setup\nstatement\nCREATE TABLE test(col1 int, col2 int) USING parquet\n\nstatement\nINSERT INTO test VALUES(1111, 2)\n\nstatement\nINSERT INTO test VALUES(1111, 2)\n\nstatement\nINSERT INTO test VALUES(3333, 4)\n\nstatement\nINSERT INTO test VALUES(5555, 6)\n\n-- Queries\nquery\nSELECT col1 & col2, col1 | col2, col1 ^ col2 FROM test\n\nquery\nSELECT col1 & 1234, col1 | 1234, col1 ^ 1234 FROM test\n\nquery\nSELECT shiftright(col1, 2), shiftright(col1, col2) FROM test\n\nquery\nSELECT shiftleft(col1, 2), shiftleft(col1, col2) FROM test\n\nquery\nSELECT ~(11), ~col1, ~col2 FROM test\n\n-- BitwiseCount\nstatement\nCREATE TABLE test_bit_count(i int, l long) USING parquet\n\nstatement\nINSERT INTO test_bit_count VALUES (0, 0), (1, 1), (7, 7), (-1, -1), (2147483647, 9223372036854775807), (NULL, NULL)\n\nquery\nSELECT bit_count(i), bit_count(l) FROM test_bit_count\n\n-- BitwiseGet\nstatement\nCREATE TABLE test_bit_get(i int, pos int) USING parquet\n\nstatement\nINSERT INTO test_bit_get VALUES (11, 0), (11, 1), (11, 2), (11, 3), (0, 0), (NULL, 0), (11, NULL)\n\nquery spark_answer_only\nSELECT bit_get(i, pos) FROM test_bit_get\n\n-- literal arguments\nquery\nSELECT 1111 & 2, 1111 | 2, 1111 ^ 2\n\nquery\nSELECT bit_count(0), bit_count(7), bit_count(-1)\n\nquery spark_answer_only\nSELECT bit_get(11, 0), bit_get(11, 1), bit_get(11, 2), bit_get(11, 3)\n\nquery\nSELECT shiftright(1111, 2), shiftleft(1111, 2)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/cast/cast.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_cast(i int, l long, f float, d double, s string, b boolean) USING parquet\n\nstatement\nINSERT INTO test_cast VALUES (1, 1, 1.5, 1.5, '123', true), (0, 0, 0.0, 0.0, '0', false), (NULL, NULL, NULL, NULL, NULL, NULL), (-1, -1, -1.5, -1.5, '-1', true), (2147483647, 9223372036854775807, cast('NaN' as float), cast('Infinity' as double), 'abc', false)\n\nquery\nSELECT cast(i as long), cast(i as double), cast(i as string) FROM test_cast\n\nquery\nSELECT cast(l as int), cast(l as double), cast(l as string) FROM test_cast\n\nquery\nSELECT cast(f as double), cast(f as int), cast(f as string) FROM test_cast\n\nquery\nSELECT cast(d as float), cast(d as int), cast(d as string) FROM test_cast\n\nquery\nSELECT cast(s as int), cast(s as double) FROM test_cast\n\nquery\nSELECT cast(b as int), cast(b as string), cast(i as boolean) FROM test_cast\n\n-- literal arguments\nquery\nSELECT cast(1 as long), cast(1 as double), cast(1 as string), cast('123' as int), cast('3.14' as double), cast(true as int), cast(NULL as int)\n\n-- string to timestamp: common formats\nquery\nSELECT cast('2020-01-01' as timestamp)\n\nquery\nSELECT cast('2020-01-01 12:34:56' as timestamp)\n\nquery\nSELECT cast('2020-01-01T12:34:56' as timestamp)\n\nquery\nSELECT cast('2020-01-01T12:34:56.123456' as timestamp)\n\n-- string to timestamp: Z and offset suffixes\nquery\nSELECT cast('2020-01-01T12:34:56Z' as timestamp)\n\nquery\nSELECT cast('2020-01-01T12:34:56+05:30' as timestamp)\n\nquery\nSELECT cast('2020-01-01T12:34:56-07:00' as timestamp)\n\n-- string to timestamp: space separator and UTC/GMT zone suffixes\nquery\nSELECT cast('2020-01-01 12:34:56 UTC' as timestamp)\n\nquery\nSELECT cast('2020-01-01 12:34:56GMT+08:00' as timestamp)\n\n-- string to timestamp: time-only and T-prefixed time-only\nquery\nSELECT cast('T12:34:56' as timestamp)\n\n-- string to timestamp: year-only and year-month\nquery\nSELECT cast('2020' as timestamp)\n\nquery\nSELECT cast('2020-06' as timestamp)\n\n-- string to timestamp: negative year\nquery\nSELECT cast('-0100-01-01' as timestamp)\n\n-- string to timestamp: invalid values return null (non-ANSI)\nquery\nSELECT cast('not-a-timestamp' as timestamp)\n\nquery\nSELECT cast('2020-13-01' as timestamp)\n\nquery\nSELECT cast('2020-01-32' as timestamp)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/cast/cast_decimal_to_primitive.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ConfigMatrix: parquet.enable.dictionary=false,true\n\nstatement\nCREATE TABLE test_cast_decimal(d10 decimal(10,2), d5 decimal(5,0)) USING parquet\n\nstatement\nINSERT INTO test_cast_decimal VALUES\n  (123.45, 123),\n  (-67.89, -67),\n  (0.00, 0),\n  (0.01, 1),\n  (-0.01, -1),\n  (99999999.99, 99999),\n  (-99999999.99, -99999),\n  (NULL, NULL)\n\n-- decimal(10,2) column to FLOAT\nquery\nSELECT cast(d10 as float) FROM test_cast_decimal\n\n-- decimal(10,2) column to DOUBLE\nquery\nSELECT cast(d10 as double) FROM test_cast_decimal\n\n-- decimal(10,2) column to INT\nquery\nSELECT cast(d10 as int) FROM test_cast_decimal\n\n-- decimal(10,2) column to LONG\nquery\nSELECT cast(d10 as long) FROM test_cast_decimal\n\n-- decimal(10,2) column to BOOLEAN\nquery\nSELECT cast(d10 as boolean) FROM test_cast_decimal\n\n-- decimal(5,0) column to FLOAT\nquery\nSELECT cast(d5 as float) FROM test_cast_decimal\n\n-- decimal(5,0) column to DOUBLE\nquery\nSELECT cast(d5 as double) FROM test_cast_decimal\n\n-- decimal(5,0) column to INT\nquery\nSELECT cast(d5 as int) FROM test_cast_decimal\n\n-- decimal(5,0) column to LONG\nquery\nSELECT cast(d5 as long) FROM test_cast_decimal\n\n-- decimal(5,0) column to BOOLEAN\nquery\nSELECT cast(d5 as boolean) FROM test_cast_decimal\n\n-- decimal(38,18) table: covers boundary values that exercise the i128 code path\nstatement\nCREATE TABLE test_cast_decimal_high_precision(d38 decimal(38,18)) USING parquet\n\nstatement\nINSERT INTO test_cast_decimal_high_precision VALUES\n  (CAST('99999999999999999999.999999999999999999' AS decimal(38,18))),\n  (CAST('-99999999999999999999.999999999999999999' AS decimal(38,18))),\n  (CAST('9223372036854775807.000000000000000000' AS decimal(38,18))),\n  (CAST('-9223372036854775808.000000000000000000' AS decimal(38,18))),\n  (CAST('1.000000000000000000' AS decimal(38,18))),\n  (CAST('-1.000000000000000000' AS decimal(38,18))),\n  (CAST('0.000000000000000000' AS decimal(38,18))),\n  (NULL)\n\n-- decimal(38,18) column to FLOAT\nquery\nSELECT cast(d38 as float) FROM test_cast_decimal_high_precision\n\n-- decimal(38,18) column to DOUBLE\nquery\nSELECT cast(d38 as double) FROM test_cast_decimal_high_precision\n\n-- decimal(38,18) column to INT\nquery\nSELECT cast(d38 as int) FROM test_cast_decimal_high_precision\n\n-- decimal(38,18) column to LONG\nquery\nSELECT cast(d38 as long) FROM test_cast_decimal_high_precision\n\n-- decimal(38,18) column to BOOLEAN\nquery\nSELECT cast(d38 as boolean) FROM test_cast_decimal_high_precision\n\n-- additional precision/scale combinations: decimal(15,5) has fractional part with int overflow\n-- possible; decimal(20,0) has no fractional part with long overflow possible\nstatement\nCREATE TABLE 
test_cast_decimal_extra(\n  d15_5 decimal(15,5),\n  d20_0 decimal(20,0)\n) USING parquet\n\nstatement\nINSERT INTO test_cast_decimal_extra VALUES\n  (2147483648.12345, 9223372036854775808),    -- d15_5 overflows INT; d20_0 overflows LONG\n  (-2147483649.12345, -9223372036854775809),\n  (123.45678, 2147483648),                    -- fractional truncation; d20_0 overflows INT only\n  (0.00001, 1),\n  (-0.00001, -1),\n  (0.00000, 0),\n  (NULL, NULL)\n\n-- decimal(15,5) to INT (exercises fractional truncation and int overflow)\nquery\nSELECT cast(d15_5 as int) FROM test_cast_decimal_extra\n\n-- decimal(15,5) to LONG\nquery\nSELECT cast(d15_5 as long) FROM test_cast_decimal_extra\n\n-- decimal(15,5) to BOOLEAN\nquery\nSELECT cast(d15_5 as boolean) FROM test_cast_decimal_extra\n\n-- decimal(20,0) to INT\nquery\nSELECT cast(d20_0 as int) FROM test_cast_decimal_extra\n\n-- decimal(20,0) to LONG (exercises long overflow)\nquery\nSELECT cast(d20_0 as long) FROM test_cast_decimal_extra\n\n-- decimal(20,0) to BOOLEAN\nquery\nSELECT cast(d20_0 as boolean) FROM test_cast_decimal_extra\n\n-- literal casts: decimal(10,2) to float\nquery\nSELECT cast(cast(1.50 as decimal(10,2)) as float),\n       cast(cast(0.00 as decimal(10,2)) as float),\n       cast(cast(-1.50 as decimal(10,2)) as float),\n       cast(cast(NULL as decimal(10,2)) as float)\n\n-- literal casts: decimal(5,0) to float\nquery\nSELECT cast(cast(123 as decimal(5,0)) as float),\n       cast(cast(0 as decimal(5,0)) as float),\n       cast(cast(-123 as decimal(5,0)) as float),\n       cast(cast(NULL as decimal(5,0)) as float)\n\n-- literal casts: decimal(10,2) to boolean\nquery\nSELECT cast(cast(1.50 as decimal(10,2)) as boolean),\n       cast(cast(0.00 as decimal(10,2)) as boolean),\n       cast(cast(NULL as decimal(10,2)) as boolean)\n\n-- literal casts: decimal(5,0) to boolean\nquery\nSELECT cast(cast(1 as decimal(5,0)) as boolean),\n       cast(cast(0 as decimal(5,0)) as boolean),\n       cast(cast(NULL as decimal(5,0)) as boolean)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/cast/cast_double_to_string.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_double_to_string(d double, id int) USING parquet\n\nstatement\nINSERT INTO test_double_to_string VALUES\n  (-0.0, 1),\n  (0.0, 2),\n  (1.5, 3),\n  (-1.5, 4),\n  (cast('NaN' as double), 5),\n  (cast('Infinity' as double), 6),\n  (cast('-Infinity' as double), 7),\n  (NULL, 8),\n  (1.0E20, 9),\n  (1.0E-20, 10),\n  (-1.0E20, 11),\n  (0.001, 12),\n  (123456789.0, 13),\n  (1.23456789E10, 14)\n\nquery\nSELECT cast(d as string), id FROM test_double_to_string ORDER BY id\n\nquery\nSELECT cast(-0.0 as string), cast(0.0 as string)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/conditional/boolean.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- compare true/false to negative zero\nstatement\nCREATE TABLE test(col1 boolean, col2 float) USING parquet\n\nstatement\nINSERT INTO test VALUES(true, -0.0)\n\nstatement\nINSERT INTO test VALUES(false, -0.0)\n\nquery\nSELECT col1, negative(col2), cast(col1 as float), col1 = negative(col2) FROM test\n\n-- not\nstatement\nCREATE TABLE test_not(col1 int, col2 boolean) USING parquet\n\nstatement\nINSERT INTO test_not VALUES(1, false), (2, true), (3, true), (3, false)\n\nquery\nSELECT col1, col2, NOT(col2), !(col2) FROM test_not\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/conditional/case_when.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_case_when(i int, s string) USING parquet\n\nstatement\nINSERT INTO test_case_when VALUES (1, 'a'), (2, 'b'), (3, 'c'), (NULL, 'd'), (1, NULL), (NULL, NULL)\n\nquery\nSELECT CASE WHEN i = 1 THEN 'one' WHEN i = 2 THEN 'two' ELSE 'other' END FROM test_case_when\n\nquery\nSELECT CASE i WHEN 1 THEN 'one' WHEN 2 THEN 'two' END FROM test_case_when\n\nquery\nSELECT CASE WHEN s IS NULL THEN 'null_val' ELSE s END FROM test_case_when\n\n-- literal arguments\nquery\nSELECT CASE WHEN i = 1 THEN s WHEN i = 2 THEN 'fixed' ELSE s END FROM test_case_when\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/conditional/coalesce.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_coalesce(a int, b int, c int) USING parquet\n\nstatement\nINSERT INTO test_coalesce VALUES (1, 2, 3), (NULL, 2, 3), (NULL, NULL, 3), (NULL, NULL, NULL), (1, NULL, NULL)\n\nquery\nSELECT coalesce(a, b, c) FROM test_coalesce\n\nquery\nSELECT coalesce(a) FROM test_coalesce\n\nquery\nSELECT coalesce(a, 99) FROM test_coalesce\n\n-- literal arguments\nquery\nSELECT coalesce(NULL, NULL, 99), coalesce(1, NULL, 99), coalesce(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/conditional/if_expr.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_if(cond boolean, a int, b int) USING parquet\n\nstatement\nINSERT INTO test_if VALUES (true, 1, 2), (false, 1, 2), (NULL, 1, 2), (true, NULL, 2), (false, 1, NULL)\n\nquery\nSELECT IF(cond, a, b) FROM test_if\n\nquery\nSELECT IF(a > 0, 'positive', 'non-positive') FROM test_if\n\n-- literal arguments\nquery\nSELECT IF(true, 1, 2), IF(false, 1, 2), IF(NULL, 1, 2)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/conditional/in_set.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ConfigMatrix: spark.sql.optimizer.inSetConversionThreshold=100,0\n\n-- test in(set)/not in(set)\nstatement\nCREATE TABLE names(id int, name varchar(20)) USING parquet\n\nstatement\nINSERT INTO names VALUES(1, 'James'), (1, 'Jones'), (2, 'Smith'), (3, 'Smith'), (NULL, 'Jones'), (4, NULL)\n\nquery\nSELECT * FROM names WHERE id in (1, 2, 4, NULL)\n\nquery\nSELECT * FROM names WHERE name in ('Smith', 'Brown', NULL)\n\nquery\nSELECT * FROM names WHERE id not in (1)\n\nquery spark_answer_only\nSELECT * FROM names WHERE name not in ('Smith', 'Brown', NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/conditional/is_not_null.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_is_not_null(i int, s string, d double) USING parquet\n\nstatement\nINSERT INTO test_is_not_null VALUES (1, 'a', 1.0), (NULL, NULL, NULL), (0, '', 0.0), (NULL, 'b', cast('NaN' as double))\n\nquery\nSELECT i IS NOT NULL, s IS NOT NULL, d IS NOT NULL FROM test_is_not_null\n\nquery\nSELECT isnotnull(i), isnotnull(s), isnotnull(d) FROM test_is_not_null\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/conditional/is_null.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_is_null(i int, s string, d double) USING parquet\n\nstatement\nINSERT INTO test_is_null VALUES (1, 'a', 1.0), (NULL, NULL, NULL), (0, '', 0.0), (NULL, 'b', cast('NaN' as double))\n\nquery\nSELECT i IS NULL, s IS NULL, d IS NULL FROM test_is_null\n\nquery\nSELECT isnull(i), isnull(s), isnull(d) FROM test_is_null\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/conditional/predicates.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_pred(a int, b int) USING parquet\n\nstatement\nINSERT INTO test_pred VALUES (1, 2), (2, 2), (3, 1), (NULL, 1), (1, NULL), (NULL, NULL)\n\nquery\nSELECT a = b, a <=> b, a < b, a > b, a <= b, a >= b FROM test_pred\n\nquery\nSELECT a != b, NOT (a = b) FROM test_pred\n\nquery\nSELECT (a > 1) AND (b > 1), (a > 1) OR (b > 1), NOT (a > 1) FROM test_pred\n\n-- column op literal\nquery\nSELECT a = 1, a <=> 1, a < 2, a > 0, a <= 1, a >= 2 FROM test_pred\n\n-- literal op column\nquery\nSELECT 2 = a, 2 < a, 0 > a, 1 <= a, 3 >= a FROM test_pred\n\n-- literal op literal\nquery\nSELECT 1 = 1, 1 = 2, 1 <=> NULL, NULL <=> NULL, NULL = NULL, 1 < 2, 2 > 1\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/date_add.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_date_add(d date, n int) USING parquet\n\nstatement\nINSERT INTO test_date_add VALUES (date('2024-01-15'), 1), (date('2024-01-15'), -1), (date('2024-01-15'), 0), (date('2024-12-31'), 1), (NULL, 1), (date('2024-01-15'), NULL)\n\nquery\nSELECT date_add(d, n) FROM test_date_add\n\n-- column + literal\nquery\nSELECT date_add(d, 5) FROM test_date_add\n\n-- literal + column\nquery\nSELECT date_add(date('2024-01-15'), n) FROM test_date_add\n\n-- literal + literal\nquery\nSELECT date_add(date('2024-01-15'), 5), date_add(date('2024-12-31'), 1), date_add(NULL, 1)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/date_diff.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_datediff(d1 date, d2 date) USING parquet\n\nstatement\nINSERT INTO test_datediff VALUES (date('2024-01-15'), date('2024-01-10')), (date('2024-01-10'), date('2024-01-15')), (date('2024-01-15'), date('2024-01-15')), (NULL, date('2024-01-15')), (date('2024-01-15'), NULL)\n\nquery\nSELECT datediff(d1, d2) FROM test_datediff\n\n-- column + literal\nquery\nSELECT datediff(d1, date('2024-01-10')) FROM test_datediff\n\n-- literal + column\nquery\nSELECT datediff(date('2024-01-20'), d2) FROM test_datediff\n\n-- literal + literal\nquery\nSELECT datediff(date('2024-01-15'), date('2024-01-10')), datediff(date('2024-01-10'), date('2024-01-15')), datediff(NULL, date('2024-01-15'))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/date_format.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_date_format(ts timestamp) USING parquet\n\nstatement\nINSERT INTO test_date_format VALUES (timestamp('2024-06-15 10:30:45')), (timestamp('1970-01-01 00:00:00')), (NULL)\n\nquery expect_fallback(Non-UTC timezone)\nSELECT date_format(ts, 'yyyy-MM-dd') FROM test_date_format\n\nquery expect_fallback(Non-UTC timezone)\nSELECT date_format(ts, 'HH:mm:ss') FROM test_date_format\n\nquery expect_fallback(Non-UTC timezone)\nSELECT date_format(ts, 'yyyy-MM-dd HH:mm:ss') FROM test_date_format\n\n-- literal arguments\nquery expect_fallback(Non-UTC timezone)\nSELECT date_format(timestamp('2024-06-15 10:30:45'), 'yyyy-MM-dd')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/date_format_enabled.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Test date_format() with allowIncompatible enabled (happy path)\n-- Uses UTC timezone to ensure results match Spark\n-- Config: spark.comet.expression.DateFormatClass.allowIncompatible=true\n-- Config: spark.sql.session.timeZone=UTC\n\nstatement\nCREATE TABLE test_date_format_enabled(ts timestamp) USING parquet\n\nstatement\nINSERT INTO test_date_format_enabled VALUES (timestamp('2024-06-15 10:30:45')), (timestamp('1970-01-01 00:00:00')), (NULL)\n\nquery\nSELECT date_format(ts, 'yyyy-MM-dd') FROM test_date_format_enabled\n\nquery\nSELECT date_format(ts, 'HH:mm:ss') FROM test_date_format_enabled\n\nquery\nSELECT date_format(ts, 'yyyy-MM-dd HH:mm:ss') FROM test_date_format_enabled\n\n-- literal arguments\nquery\nSELECT date_format(timestamp('2024-06-15 10:30:45'), 'yyyy-MM-dd')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/date_from_unix_date.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_date_from_unix_date(i int) USING parquet\n\n-- -719162 = 0001-01-01 (Spark min date), 2932896 = 9999-12-31 (Spark max date)\nstatement\nINSERT INTO test_date_from_unix_date VALUES (0), (1), (-1), (18993), (-25567), (-719162), (2932896), (NULL)\n\nquery\nSELECT date_from_unix_date(i) FROM test_date_from_unix_date\n\n-- literal arguments\nquery\nSELECT date_from_unix_date(0), date_from_unix_date(1), date_from_unix_date(-1), date_from_unix_date(18993), date_from_unix_date(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/date_sub.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_date_sub(d date, n int) USING parquet\n\nstatement\nINSERT INTO test_date_sub VALUES (date('2024-01-15'), 1), (date('2024-01-15'), -1), (date('2024-01-15'), 0), (date('2024-01-01'), 1), (NULL, 1), (date('2024-01-15'), NULL)\n\nquery\nSELECT date_sub(d, n) FROM test_date_sub\n\n-- column + literal\nquery\nSELECT date_sub(d, 5) FROM test_date_sub\n\n-- literal + column\nquery\nSELECT date_sub(date('2024-01-15'), n) FROM test_date_sub\n\n-- literal + literal\nquery\nSELECT date_sub(date('2024-01-15'), 5), date_sub(date('2024-01-01'), 1), date_sub(NULL, 1)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/datetime.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- DatePart functions\nstatement\nCREATE TABLE test_dt(col timestamp) USING parquet\n\nstatement\nINSERT INTO test_dt VALUES (timestamp('2024-06-15 10:30:00')), (timestamp('1900-01-01')), (null)\n\nquery\nSELECT col, year(col), month(col), day(col), weekday(col), dayofweek(col), dayofyear(col), weekofyear(col), quarter(col) FROM test_dt\n\nquery\nSELECT hour(col), minute(col), second(col) FROM test_dt\n\n-- Midnight and end-of-day\nstatement\nCREATE TABLE test_dt_hms(ts timestamp) USING parquet\n\nstatement\nINSERT INTO test_dt_hms VALUES (timestamp('2024-01-01 00:00:00')), (timestamp('2024-01-01 23:59:59')), (timestamp('2024-06-15 12:30:45')), (NULL)\n\nquery\nSELECT hour(ts), minute(ts), second(ts) FROM test_dt_hms\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/from_unix_time.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_from_unix_time(t long) USING parquet\n\nstatement\nINSERT INTO test_from_unix_time VALUES (0), (1718451045), (-1), (NULL), (2147483647)\n\nquery expect_fallback(not fully compatible with Spark)\nSELECT from_unixtime(t) FROM test_from_unix_time\n\nquery expect_fallback(not fully compatible with Spark)\nSELECT from_unixtime(t, 'yyyy-MM-dd') FROM test_from_unix_time\n\n-- literal arguments\nquery expect_fallback(not fully compatible with Spark)\nSELECT from_unixtime(0)\n\nquery expect_fallback(not fully compatible with Spark)\nSELECT from_unixtime(1718451045, 'yyyy-MM-dd')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/from_unix_time_enabled.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Test from_unixtime() with allowIncompatible enabled (happy path)\n-- Uses UTC timezone to ensure results match Spark\n-- Config: spark.comet.expression.FromUnixTime.allowIncompatible=true\n-- Config: spark.sql.session.timeZone=UTC\n\nstatement\nCREATE TABLE test_from_unix_time_enabled(t long) USING parquet\n\nstatement\nINSERT INTO test_from_unix_time_enabled VALUES (0), (1718451045), (-1), (NULL), (2147483647)\n\n-- Even with allowIncompatible=true, the default datetime pattern is unsupported natively\nquery spark_answer_only\nSELECT from_unixtime(t) FROM test_from_unix_time_enabled\n\nquery spark_answer_only\nSELECT from_unixtime(t, 'yyyy-MM-dd') FROM test_from_unix_time_enabled\n\n-- literal arguments\nquery spark_answer_only\nSELECT from_unixtime(0)\n\nquery spark_answer_only\nSELECT from_unixtime(1718451045, 'yyyy-MM-dd')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/hour.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_hour(ts timestamp) USING parquet\n\nstatement\nINSERT INTO test_hour VALUES (timestamp('2024-01-15 00:00:00')), (timestamp('2024-01-15 12:30:45')), (timestamp('2024-01-15 23:59:59')), (NULL)\n\nquery\nSELECT hour(ts) FROM test_hour\n\n-- literal arguments\nquery ignore(https://github.com/apache/datafusion-comet/issues/3336)\nSELECT hour(timestamp('2024-01-15 00:00:00')), hour(timestamp('2024-01-15 12:30:45')), hour(timestamp('2024-01-15 23:59:59'))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/last_day.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_last_day(d date) USING parquet\n\nstatement\nINSERT INTO test_last_day VALUES (date('2024-01-15')), (date('2024-02-15')), (date('2023-02-15')), (date('2024-12-01')), (NULL)\n\nquery\nSELECT last_day(d) FROM test_last_day\n\n-- literal arguments\nquery\nSELECT last_day(date('2024-01-15')), last_day(date('2024-02-15')), last_day(date('2023-02-15')), last_day(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/make_date.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_make_date(year int, month int, day int) USING parquet\n\nstatement\nINSERT INTO test_make_date VALUES\n  (2023, 12, 25),\n  (1970, 1, 1),\n  (2000, 6, 15),\n  (1999, 12, 31),\n  (2024, 2, 29),\n  (NULL, 1, 1),\n  (2023, NULL, 1),\n  (2023, 1, NULL),\n  (NULL, NULL, NULL)\n\n-- column arguments\nquery\nSELECT year, month, day, make_date(year, month, day) FROM test_make_date ORDER BY year, month, day\n\n-- literal year, column month and day\nquery\nSELECT make_date(2023, month, day) FROM test_make_date ORDER BY month, day\n\n-- column year, literal month and day\nquery\nSELECT make_date(year, 6, 15) FROM test_make_date ORDER BY year\n\n-- column year and month, literal day\nquery\nSELECT make_date(year, month, 1) FROM test_make_date ORDER BY year, month\n\n-- literal values\nquery\nSELECT make_date(2023, 12, 25)\n\nquery\nSELECT make_date(1970, 1, 1)\n\n-- null handling with literals\nquery\nSELECT make_date(NULL, 1, 1)\n\nquery\nSELECT make_date(2023, NULL, 1)\n\nquery\nSELECT make_date(2023, 1, NULL)\n\n-- leap year edge cases\n-- 2000 WAS a leap year (divisible by 400)\nquery\nSELECT make_date(2000, 2, 29)\n\n-- 2004 was a leap year (divisible by 4, not by 100)\nquery\nSELECT make_date(2004, 2, 29)\n\n-- 2023 is NOT a leap year - Feb 29 should return NULL\nquery\nSELECT make_date(2023, 2, 29)\n\n-- 1900 was NOT a leap year (divisible by 100 but not 400) - Feb 29 should return NULL\nquery\nSELECT make_date(1900, 2, 29)\n\n-- 2100 will NOT be a leap year (divisible by 100 but not 400)\nquery\nSELECT make_date(2100, 2, 29)\n\n-- invalid date handling - should return NULL\nquery\nSELECT make_date(2023, 2, 30)\n\nquery\nSELECT make_date(2023, 2, 31)\n\nquery\nSELECT make_date(2023, 4, 31)\n\nquery\nSELECT make_date(2023, 6, 31)\n\nquery\nSELECT make_date(2023, 9, 31)\n\nquery\nSELECT make_date(2023, 11, 31)\n\n-- boundary values - invalid month/day values should return NULL\nquery\nSELECT make_date(2023, 0, 15)\n\nquery\nSELECT make_date(2023, 13, 15)\n\nquery\nSELECT make_date(2023, -1, 15)\n\nquery\nSELECT make_date(2023, 6, 0)\n\nquery\nSELECT make_date(2023, 6, 32)\n\nquery\nSELECT make_date(2023, 6, -1)\n\n-- extreme years\nquery\nSELECT make_date(1, 1, 1)\n\nquery\nSELECT make_date(9999, 12, 31)\n\nquery\nSELECT make_date(0, 1, 1)\n\nquery\nSELECT make_date(-1, 1, 1)\n\n-- month boundaries - last day of each month\nquery\nSELECT make_date(2023, 1, 31)\n\nquery\nSELECT make_date(2023, 3, 31)\n\nquery\nSELECT make_date(2023, 4, 30)\n\nquery\nSELECT make_date(2023, 5, 31)\n\nquery\nSELECT make_date(2023, 6, 30)\n\nquery\nSELECT make_date(2023, 7, 31)\n\nquery\nSELECT make_date(2023, 8, 31)\n\nquery\nSELECT make_date(2023, 9, 
30)\n\nquery\nSELECT make_date(2023, 10, 31)\n\nquery\nSELECT make_date(2023, 11, 30)\n\nquery\nSELECT make_date(2023, 12, 31)\n\nquery\nSELECT make_date(2024, 2, 29)\n\nquery\nSELECT make_date(2023, 2, 28)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/minute.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_minute(ts timestamp) USING parquet\n\nstatement\nINSERT INTO test_minute VALUES (timestamp('2024-01-15 10:00:00')), (timestamp('2024-01-15 10:30:00')), (timestamp('2024-01-15 10:59:59')), (NULL)\n\nquery\nSELECT minute(ts) FROM test_minute\n\n-- literal arguments\nquery ignore(https://github.com/apache/datafusion-comet/issues/3336)\nSELECT minute(timestamp('2024-01-15 10:00:00')), minute(timestamp('2024-01-15 10:30:00')), minute(timestamp('2024-01-15 10:59:59'))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/next_day.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_next_day(d date) USING parquet\n\nstatement\nINSERT INTO test_next_day VALUES (date('2023-01-01')), (date('2024-02-29')), (date('1969-12-31')), (date('2024-06-15')), (NULL)\n\n-- full day names\nquery\nSELECT next_day(d, 'Sunday') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Monday') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Tuesday') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Wednesday') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Thursday') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Friday') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Saturday') FROM test_next_day\n\n-- abbreviated day names\nquery\nSELECT next_day(d, 'Sun') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Mon') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Tue') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Wed') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Thu') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Fri') FROM test_next_day\n\nquery\nSELECT next_day(d, 'Sat') FROM test_next_day\n\n-- literal arguments\nquery\nSELECT next_day(date('2023-01-01'), 'Monday'), next_day(date('2023-01-01'), 'Sunday')\n\n-- null handling\nquery\nSELECT next_day(NULL, 'Monday'), next_day(date('2023-01-01'), NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/second.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_second(ts timestamp) USING parquet\n\nstatement\nINSERT INTO test_second VALUES (timestamp('2024-01-15 10:30:00')), (timestamp('2024-01-15 10:30:30')), (timestamp('2024-01-15 10:30:59')), (NULL)\n\nquery\nSELECT second(ts) FROM test_second\n\n-- literal arguments\nquery ignore(https://github.com/apache/datafusion-comet/issues/3336)\nSELECT second(timestamp('2024-01-15 10:30:00')), second(timestamp('2024-01-15 10:30:30')), second(timestamp('2024-01-15 10:30:59'))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/trunc_date.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_trunc_date(d date) USING parquet\n\nstatement\nINSERT INTO test_trunc_date VALUES (date('2024-06-15')), (date('2024-01-01')), (date('2024-12-31')), (NULL)\n\nquery\nSELECT trunc(d, 'year') FROM test_trunc_date\n\nquery\nSELECT trunc(d, 'month') FROM test_trunc_date\n\nquery\nSELECT trunc(d, 'quarter') FROM test_trunc_date\n\n-- literal arguments\nquery\nSELECT trunc(date('2024-06-15'), 'year'), trunc(date('2024-06-15'), 'month'), trunc(date('2024-06-15'), 'quarter')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/trunc_timestamp.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.expression.TruncTimestamp.allowIncompatible=true\n\nstatement\nCREATE TABLE test_trunc_ts(ts timestamp) USING parquet\n\nstatement\nINSERT INTO test_trunc_ts VALUES (timestamp('2024-06-15 10:30:45')), (timestamp('2024-01-01 00:00:00')), (NULL)\n\nquery\nSELECT date_trunc('year', ts) FROM test_trunc_ts\n\nquery\nSELECT date_trunc('month', ts) FROM test_trunc_ts\n\nquery\nSELECT date_trunc('day', ts) FROM test_trunc_ts\n\nquery\nSELECT date_trunc('hour', ts) FROM test_trunc_ts\n\n-- literal arguments\nquery\nSELECT date_trunc('year', timestamp('2024-06-15 10:30:45')), date_trunc('month', timestamp('2024-06-15 10:30:45')), date_trunc('day', timestamp('2024-06-15 10:30:45'))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/unix_date.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_unix_date(d date) USING parquet\n\nstatement\nINSERT INTO test_unix_date VALUES (date('1970-01-01')), (date('2024-01-15')), (date('1969-12-31')), (NULL)\n\nquery spark_answer_only\nSELECT unix_date(d) FROM test_unix_date\n\n-- literal arguments\nquery spark_answer_only\nSELECT unix_date(date('1970-01-01')), unix_date(date('2024-01-15')), unix_date(date('1969-12-31')), unix_date(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/datetime/unix_timestamp.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_unix_ts(ts timestamp) USING parquet\n\nstatement\nINSERT INTO test_unix_ts VALUES (timestamp('1970-01-01 00:00:00')), (timestamp('2024-06-15 10:30:45')), (NULL)\n\nquery\nSELECT unix_timestamp(ts) FROM test_unix_ts\n\n-- literal arguments\nquery ignore(https://github.com/apache/datafusion-comet/issues/3336)\nSELECT unix_timestamp(timestamp('1970-01-01 00:00:00')), unix_timestamp(timestamp('2024-06-15 10:30:45'))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/decimal/decimal_div.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Decimal division (/) and integral division (div) in legacy (non-ANSI) mode.\n-- Also covers try_divide (TRY mode semantics).\n-- See decimal_div_ansi.sql for ANSI mode tests.\n\n-- ConfigMatrix: parquet.enable.dictionary=false,true\n\n-- ============================================================================\n-- Basic divide and integral divide\n-- ============================================================================\n\nstatement\nCREATE TABLE test_decimal_div(a decimal(18,6), b decimal(18,6)) USING parquet\n\nstatement\nINSERT INTO test_decimal_div VALUES\n  (10.000000,  3.000000),\n  (7.500000,   2.500000),\n  (-10.000000, 3.000000),\n  (10.000000, -3.000000),\n  (-10.000000,-3.000000),\n  (0.000000,   5.000000),\n  (1.000000,   3.000000),\n  (NULL,       3.000000),\n  (10.000000,  NULL)\n\n-- column / column\nquery\nSELECT a / b FROM test_decimal_div\n\n-- column div column (integral division: drops fractional part)\nquery\nSELECT a div b FROM test_decimal_div\n\n-- ============================================================================\n-- Divide by zero: legacy mode returns NULL, not an error\n-- ============================================================================\n\nstatement\nCREATE TABLE test_decimal_div_zero(a decimal(18,6)) USING parquet\n\nstatement\nINSERT INTO test_decimal_div_zero VALUES\n  (10.000000),\n  (-5.000000),\n  (0.000000),\n  (NULL)\n\n-- a / 0 returns null in legacy mode\nquery\nSELECT a / cast(0 as decimal(18,6)) FROM test_decimal_div_zero\n\n-- a div 0 returns null in legacy mode\nquery\nSELECT a div cast(0 as decimal(18,6)) FROM test_decimal_div_zero\n\n-- ============================================================================\n-- TRY mode: try_divide always returns NULL on zero divisor\n-- ============================================================================\n\n-- try_divide by zero returns null even when ANSI mode is on externally\nquery\nSELECT try_divide(a, cast(0.000000 as decimal(18,6))) FROM test_decimal_div_zero\n\n-- try_divide with non-zero divisor returns the quotient\nquery\nSELECT try_divide(a, b) FROM test_decimal_div\n\n-- try_divide literal decimal by zero\nquery\nSELECT try_divide(cast(10.5 as decimal(10,2)), cast(0.0 as decimal(10,2)))\n\n-- try_divide with NULL inputs\nquery\nSELECT try_divide(NULL, cast(3.0 as decimal(10,2))),\n       try_divide(cast(10.0 as decimal(10,2)), NULL)\n\n-- ============================================================================\n-- Overflow in legacy mode produces NULL\n-- ============================================================================\n\nstatement\nCREATE TABLE test_decimal_overflow(a decimal(38,0)) USING 
parquet\n\nstatement\nINSERT INTO test_decimal_overflow VALUES\n  (99999999999999999999999999999999999999),\n  (-99999999999999999999999999999999999999)\n\n-- Dividing max decimal(38,0) by a small fraction overflows the result type -> null\nquery\nSELECT a / cast(0.01 as decimal(10,2)) FROM test_decimal_overflow\n\n-- Integral divide overflow -> null\nquery\nSELECT a div cast(0.40 as decimal(2,2)) FROM test_decimal_overflow\n\n-- ============================================================================\n-- Literal arguments (constant folding is disabled by the test runner)\n-- ============================================================================\n\n-- literal / literal\nquery\nSELECT cast(10.5 as decimal(10,2)) / cast(3.5 as decimal(10,2))\n\n-- literal div literal\nquery\nSELECT cast(10 as decimal(10,0)) div cast(3 as decimal(10,0))\n\n-- literal / zero\nquery\nSELECT cast(7.0 as decimal(10,2)) / cast(0.0 as decimal(10,2))\n\n-- literal div zero\nquery\nSELECT cast(7 as decimal(10,0)) div cast(0 as decimal(10,0))\n\n-- NULL literal\nquery\nSELECT cast(NULL as decimal(10,2)) / cast(3.0 as decimal(10,2)),\n       cast(5.0 as decimal(10,2)) / cast(NULL as decimal(10,2))\n\n-- ============================================================================\n-- Mixed precision and scale\n-- ============================================================================\n\nstatement\nCREATE TABLE test_decimal_mixed(a decimal(18,6), b decimal(10,2)) USING parquet\n\nstatement\nINSERT INTO test_decimal_mixed VALUES\n  (123456.789012, 99.99),\n  (0.000001, 1.00),\n  (-123456.789012, 0.01),\n  (NULL, NULL)\n\n-- spark_answer_only: mixed-precision division can trigger the precision-loss path in Spark's\n-- decimal arithmetic which Comet does not yet handle natively; see\n-- https://github.com/apache/datafusion-comet/issues/1526\nquery spark_answer_only\nSELECT a / b FROM test_decimal_mixed\n\nquery\nSELECT a div b FROM test_decimal_mixed\n\n-- ============================================================================\n-- Various precision/scale combinations\n-- ============================================================================\n\nstatement\nCREATE TABLE test_decimal_prec(\n  a5_2 decimal(5,2),\n  b5_2 decimal(5,2),\n  a38_4 decimal(38,4),\n  b38_4 decimal(38,4)\n) USING parquet\n\nstatement\nINSERT INTO test_decimal_prec VALUES\n  (10.50, 3.25, 9999999999999999.1234, 3.0001),\n  (-10.50, 3.25, -9999999999999999.1234, 3.0001),\n  (0.00, 1.00, 0.0000, 1.0000),\n  (NULL, NULL, NULL, NULL)\n\nquery\nSELECT a5_2 / b5_2 FROM test_decimal_prec\n\nquery\nSELECT a5_2 div b5_2 FROM test_decimal_prec\n\n-- spark_answer_only: decimal(38,4) / decimal(38,4) produces a result type that exceeds\n-- Decimal(38,x) precision, triggering a precision-loss path Comet does not yet handle natively;\n-- see https://github.com/apache/datafusion-comet/issues/1526\nquery spark_answer_only\nSELECT a38_4 / b38_4 FROM test_decimal_prec\n\nquery\nSELECT a38_4 div b38_4 FROM test_decimal_prec\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/decimal/decimal_div_ansi.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Decimal division (/) and integral division (div) in ANSI mode.\n-- Divide-by-zero throws for both operators; try_divide still returns NULL.\n-- Overflow throws NUMERIC_VALUE_OUT_OF_RANGE for both / and div.\n-- See decimal_div.sql for legacy-mode and try_divide tests.\n\n-- Config: spark.sql.ansi.enabled=true\n\n-- ============================================================================\n-- Setup\n-- ============================================================================\n\nstatement\nCREATE TABLE test_ansi_decimal(a decimal(18,6), b decimal(18,6)) USING parquet\n\nstatement\nINSERT INTO test_ansi_decimal VALUES\n  (10.000000,  3.000000),\n  (7.500000,   2.500000),\n  (-10.000000, 3.000000),\n  (10.000000, -3.000000),\n  (0.000000,   5.000000),\n  (NULL,       3.000000),\n  (10.000000,  NULL)\n\nstatement\nCREATE TABLE test_ansi_zero(a decimal(18,6)) USING parquet\n\nstatement\nINSERT INTO test_ansi_zero VALUES (10.000000), (-5.000000), (0.000000), (NULL)\n\n-- ============================================================================\n-- Normal division works in ANSI mode (no zero divisor, no overflow)\n-- ============================================================================\n\nquery\nSELECT a / b FROM test_ansi_decimal\n\nquery\nSELECT a div b FROM test_ansi_decimal\n\n-- ============================================================================\n-- ANSI mode: decimal / by zero throws DIVIDE_BY_ZERO\n-- ============================================================================\n\n-- column / zero column\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT a / cast(0.000000 as decimal(18,6)) FROM test_ansi_zero\n\n-- literal / zero literal\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT cast(10.0 as decimal(18,6)) / cast(0.000000 as decimal(18,6))\n\n-- ============================================================================\n-- ANSI mode: decimal div by zero throws DIVIDE_BY_ZERO\n-- ============================================================================\n\n-- column div zero column\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT a div cast(0.000000 as decimal(18,6)) FROM test_ansi_zero\n\n-- literal div zero literal\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT cast(10 as decimal(18,0)) div cast(0.000000 as decimal(18,6))\n\n-- ============================================================================\n-- TRY mode: try_divide returns NULL even when ANSI is enabled globally\n-- (divide-by-zero, overflow, and NULL inputs all produce NULL)\n-- ============================================================================\n\n-- try_divide by zero -> null (TRY semantics override ANSI)\nquery\nSELECT try_divide(a, cast(0.000000 as decimal(18,6))) FROM 
test_ansi_zero\n\n-- try_divide with normal values\nquery\nSELECT try_divide(a, b) FROM test_ansi_decimal\n\n-- try_divide NULL inputs\nquery\nSELECT try_divide(NULL, cast(3.000000 as decimal(18,6))),\n       try_divide(cast(10.000000 as decimal(18,6)), NULL)\n\n-- try_divide overflow -> null (TRY semantics override ANSI even for overflow)\n-- Uses the same overflowing values as the ANSI overflow table below.\nstatement\nCREATE TABLE test_try_div_overflow(a decimal(38,0), b decimal(2,2)) USING parquet\n\nstatement\nINSERT INTO test_try_div_overflow VALUES\n  ( 99999999999999999999999999999999999999,  0.01),\n  (-99999999999999999999999999999999999999,  0.01)\n\nquery\nSELECT try_divide(a, b) FROM test_try_div_overflow\n\n-- ============================================================================\n-- ANSI mode: decimal div overflow throws NUMERIC_VALUE_OUT_OF_RANGE\n-- All values produce a quotient > Decimal(38,0).max so every row overflows.\n-- ============================================================================\n\nstatement\nCREATE TABLE test_ansi_overflow(a decimal(38,0), b decimal(2,2)) USING parquet\n\nstatement\nINSERT INTO test_ansi_overflow VALUES\n  (-62672277069777110394022909049981876593, -0.40),\n  ( 54400354300704342908577384819323710194,  0.18)\n\nquery expect_error(NUMERIC_VALUE_OUT_OF_RANGE)\nSELECT a div b FROM test_ansi_overflow\n\n-- ============================================================================\n-- ANSI mode: decimal / (regular divide) overflow throws NUMERIC_VALUE_OUT_OF_RANGE\n-- The result type of decimal(38,0) / decimal(2,2) is decimal(38,0) which cannot\n-- hold values as large as 10^38 / 0.01 = 10^40.\n-- ============================================================================\n\nstatement\nCREATE TABLE test_ansi_div_overflow(a decimal(38,0), b decimal(2,2)) USING parquet\n\nstatement\nINSERT INTO test_ansi_div_overflow VALUES\n  ( 99999999999999999999999999999999999999,  0.01),\n  (-99999999999999999999999999999999999999,  0.01)\n\nquery expect_error(NUMERIC_VALUE_OUT_OF_RANGE)\nSELECT a / b FROM test_ansi_div_overflow\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/decimal/decimal_ops.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.expression.Cast.allowIncompatible=true\n\n-- Decimal arithmetic exercises CheckOverflow, MakeDecimal, UnscaledValue\nstatement\nCREATE TABLE test_decimal(a decimal(10,2), b decimal(10,2)) USING parquet\n\nstatement\nINSERT INTO test_decimal VALUES (10.50, 3.20), (0.00, 1.00), (-10.50, 3.20), (99999999.99, 0.01), (NULL, 1.00)\n\nquery\nSELECT a + b, a - b, a * b, a / b FROM test_decimal\n\nquery\nSELECT a % b FROM test_decimal\n\n-- column + literal\nquery\nSELECT a + cast(1.00 as decimal(10,2)), a - cast(1.00 as decimal(10,2)), a * cast(2.00 as decimal(10,2)) FROM test_decimal\n\n-- literal + column\nquery\nSELECT cast(100.00 as decimal(10,2)) + b, cast(100.00 as decimal(10,2)) - b, cast(100.00 as decimal(10,2)) * b FROM test_decimal\n\n-- literal + literal\nquery\nSELECT cast(10.50 as decimal(10,2)) + cast(3.20 as decimal(10,2)), cast(10.50 as decimal(10,2)) - cast(3.20 as decimal(10,2)), cast(10.50 as decimal(10,2)) * cast(3.20 as decimal(10,2)), cast(10.50 as decimal(10,2)) / cast(3.20 as decimal(10,2))\n\n-- Mixed precision\nstatement\nCREATE TABLE test_decimal_mix(a decimal(18,6), b decimal(10,2)) USING parquet\n\nstatement\nINSERT INTO test_decimal_mix VALUES (123456.789012, 99.99), (0.000001, 1.00), (-123456.789012, 0.01), (NULL, NULL)\n\nquery\nSELECT a + b, a - b, a * b FROM test_decimal_mix\n\nquery spark_answer_only\nSELECT a / b FROM test_decimal_mix\n\n-- Cast to decimal\nstatement\nCREATE TABLE test_dec_cast(i int, l long, d double, s string) USING parquet\n\nstatement\nINSERT INTO test_dec_cast VALUES (42, 123456789, 3.14159, '99.99'), (0, 0, 0.0, '0.00'), (NULL, NULL, NULL, NULL)\n\nquery\nSELECT cast(i as decimal(10,2)), cast(l as decimal(18,2)), cast(d as decimal(10,5)), cast(s as decimal(10,2)) FROM test_dec_cast\n\n-- literal arguments\nquery\nSELECT cast(42 as decimal(10,2)), cast(3.14 as decimal(10,5)), cast(NULL as decimal(10,2))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/hash/crc32.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- crc32 function\nstatement\nCREATE TABLE test(col string, a int, b float) USING parquet\n\nstatement\nINSERT INTO test VALUES ('Spark SQL  ', 10, 1.2), (NULL, NULL, NULL), ('', 0, 0.0), ('苹果手机', NULL, 3.999999), ('Spark SQL  ', 10, 1.2), (NULL, NULL, NULL), ('', 0, 0.0), ('苹果手机', NULL, 3.999999)\n\nquery\nSELECT crc32(col), crc32(cast(a as string)), crc32(cast(b as string)) FROM test\n\n-- literal arguments\nquery\nSELECT crc32('Spark SQL')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/hash/hash.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- hash functions\nstatement\nCREATE TABLE test(col string, a int, b float) USING parquet\n\nstatement\nINSERT INTO test VALUES ('Spark SQL  ', 10, 1.2), (NULL, NULL, NULL), ('', 0, 0.0), ('苹果手机', NULL, 3.999999), ('Spark SQL  ', 10, 1.2), (NULL, NULL, NULL), ('', 0, 0.0), ('苹果手机', NULL, 3.999999)\n\nquery\nSELECT md5(col), md5(cast(a as string)), md5(cast(b as string)), hash(col), hash(col, 1), hash(col, 0), hash(col, a, b), hash(b, a, col), xxhash64(col), xxhash64(col, 1), xxhash64(col, 0), xxhash64(col, a, b), xxhash64(b, a, col), sha2(col, 0), sha2(col, 256), sha2(col, 224), sha2(col, 384), sha2(col, 512), sha2(col, 128), sha2(col, -1), sha1(col), sha1(cast(a as string)), sha1(cast(b as string)) FROM test\n\n-- literal arguments\nquery ignore(https://github.com/apache/datafusion-comet/issues/3340)\nSELECT md5('Spark SQL'), sha1('test'), sha2('test', 256), hash('test'), xxhash64('test')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/map/get_map_value.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_map(m map<string, int>) USING parquet\n\nstatement\nINSERT INTO test_map VALUES (map('a', 1, 'b', 2, 'c', 3)), (map('x', 10)), (NULL)\n\nquery spark_answer_only\nSELECT m['a'], m['b'], m['c'] FROM test_map\n\nquery spark_answer_only\nSELECT m['x'], m['missing'] FROM test_map\n\n-- literal arguments\nquery spark_answer_only\nSELECT map('a', 1, 'b', 2)['a'], map('a', 1, 'b', 2)['missing'], map('a', 1, 'b', 2)[NULL]\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/map/map_contains_key.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.expression.ArrayContains.allowIncompatible=true\n\n-- TODO: replace map_from_arrays with map whenever map is supported in Comet\n\n-- Basic integer key tests with map literals\nquery\nselect map_contains_key(map_from_arrays(array(1, 2), array('a', 'b')), 5)\n\nquery\nselect map_contains_key(map_from_arrays(array(1, 2), array('a', 'b')), 1)\n\n-- Decimal type coercion tests\n-- TODO: requires map cast to be supported in Comet\nquery spark_answer_only\nselect map_contains_key(map_from_arrays(array(1, 2), array('a', 'b')), 5.0)\n\nquery spark_answer_only\nselect map_contains_key(map_from_arrays(array(1, 2), array('a', 'b')), 1.0)\n\nquery spark_answer_only\nselect map_contains_key(map_from_arrays(array(1.0, 2), array('a', 'b')), 5)\n\nquery spark_answer_only\nselect map_contains_key(map_from_arrays(array(1.0, 2), array('a', 'b')), 1)\n\n-- Empty map tests\n-- TODO: requires casting from NullType to be supported in Comet\nquery spark_answer_only\nselect map_contains_key(map_from_arrays(array(), array()), 0)\n\n-- Test with table data\nstatement\nCREATE TABLE test_map_contains_key(m map<string, int>) USING parquet\n\nstatement\nINSERT INTO test_map_contains_key VALUES (map_from_arrays(array('a', 'b', 'c'), array(1, 2, 3))), (map_from_arrays(array('x'), array(10))), (map_from_arrays(array(), array())), (NULL)\n\nquery\nSELECT map_contains_key(m, 'a') FROM test_map_contains_key\n\nquery\nSELECT map_contains_key(m, 'x') FROM test_map_contains_key\n\nquery\nSELECT map_contains_key(m, 'missing') FROM test_map_contains_key\n\n-- Test with integer key map\nstatement\nCREATE TABLE test_map_int_key(m map<int, string>) USING parquet\n\nstatement\nINSERT INTO test_map_int_key VALUES (map_from_arrays(array(1, 2), array('a', 'b'))), (map_from_arrays(array(), array())), (NULL)\n\nquery\nSELECT map_contains_key(m, 1) FROM test_map_int_key\n\nquery\nSELECT map_contains_key(m, 5) FROM test_map_int_key\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/map/map_entries.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_map_entries(m map<string, int>) USING parquet\n\nstatement\nINSERT INTO test_map_entries VALUES (map('a', 1, 'b', 2)), (map()), (NULL)\n\nquery spark_answer_only\nSELECT map_entries(m) FROM test_map_entries\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/map/map_from_arrays.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_map_from_arrays(k array<string>, v array<int>) USING parquet\n\nstatement\nINSERT INTO test_map_from_arrays VALUES\n  (array('a', 'b', 'c'), array(1, 2, 3)),\n  (array(), array()),\n  (NULL, NULL),\n  (array('x'), NULL),\n  (NULL, array(99))\n\n-- basic functionality\nquery spark_answer_only\nSELECT map_from_arrays(k, v) FROM test_map_from_arrays WHERE k IS NOT NULL AND v IS NOT NULL\n\n-- both inputs NULL should return NULL\nquery\nSELECT map_from_arrays(k, v) FROM test_map_from_arrays WHERE k IS NULL AND v IS NULL\n\n-- keys not null but values null should return NULL (Spark behavior)\nquery\nSELECT map_from_arrays(k, v) FROM test_map_from_arrays WHERE k IS NOT NULL AND v IS NULL\n\n-- keys null but values not null should return NULL (Spark behavior)\nquery\nSELECT map_from_arrays(k, v) FROM test_map_from_arrays WHERE k IS NULL AND v IS NOT NULL\n\n-- all rows including nulls\nquery spark_answer_only\nSELECT map_from_arrays(k, v) FROM test_map_from_arrays\n\n-- literal arguments\nquery spark_answer_only\nSELECT map_from_arrays(array('a', 'b'), array(1, 2))\n\n-- literal null arguments\nquery\nSELECT map_from_arrays(NULL, array(1, 2))\n\nquery\nSELECT map_from_arrays(array('a'), NULL)\n\nquery\nSELECT map_from_arrays(NULL, NULL)"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/map/map_from_entries.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_map_from_entries(entries array<struct<key:string, value:int>>) USING parquet\n\nstatement\nINSERT INTO test_map_from_entries VALUES (array(struct('a', 1), struct('b', 2), struct('c', 3))), (array()), (NULL)\n\nquery\nSELECT map_from_entries(entries) FROM test_map_from_entries\n\nquery expect_fallback(Using BinaryType as Map keys is not allowed in map_from_entries)\nSELECT map_from_entries(array(struct(cast('x' as binary), 10)))\n\nquery expect_fallback(Using BinaryType as Map values is not allowed in map_from_entries)\nSELECT map_from_entries(array(struct(10, cast('x' as binary))))\n\n-- literal arguments\nquery spark_answer_only\nSELECT map_from_entries(array(struct('x', 10), struct('y', 20), struct('z', 30)))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/map/map_keys.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_map_keys(m map<string, int>) USING parquet\n\nstatement\nINSERT INTO test_map_keys VALUES (map('a', 1, 'b', 2, 'c', 3)), (map()), (NULL)\n\nquery spark_answer_only\nSELECT map_keys(m) FROM test_map_keys\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/map/map_values.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_map_values(m map<string, int>) USING parquet\n\nstatement\nINSERT INTO test_map_values VALUES (map('a', 1, 'b', 2, 'c', 3)), (map()), (NULL)\n\nquery spark_answer_only\nSELECT map_values(m) FROM test_map_values\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/abs.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_abs(i int, l long, f float, d double) USING parquet\n\nstatement\nINSERT INTO test_abs VALUES (1, 1, 1.5, 1.5), (-1, -1, -1.5, -1.5), (0, 0, 0.0, 0.0), (NULL, NULL, NULL, NULL), (2147483647, 9223372036854775807, cast('Infinity' as float), cast('NaN' as double))\n\nquery\nSELECT abs(i), abs(l), abs(f), abs(d) FROM test_abs\n\n-- literal arguments\nquery\nSELECT abs(-5), abs(-1.5), abs(0), abs(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/abs_ansi.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ANSI mode abs function tests\n-- Tests that abs throws exceptions for overflow on minimum integer values\n\n-- Config: spark.sql.ansi.enabled=true\n\n-- ============================================================================\n-- Test data setup\n-- ============================================================================\n\nstatement\nCREATE TABLE ansi_test_abs_int(v int) USING parquet\n\nstatement\nINSERT INTO ansi_test_abs_int VALUES (-2147483648)\n\nstatement\nCREATE TABLE ansi_test_abs_long(v long) USING parquet\n\nstatement\nINSERT INTO ansi_test_abs_long VALUES (-9223372036854775808)\n\nstatement\nCREATE TABLE ansi_test_abs_short(v short) USING parquet\n\nstatement\nINSERT INTO ansi_test_abs_short VALUES (-32768)\n\nstatement\nCREATE TABLE ansi_test_abs_byte(v tinyint) USING parquet\n\nstatement\nINSERT INTO ansi_test_abs_byte VALUES (-128)\n\n-- ============================================================================\n-- abs(INT_MIN) overflow\n-- ============================================================================\n\n-- abs(-2147483648) cannot be represented as int (since INT_MAX = 2147483647)\nquery expect_error(overflow)\nSELECT abs(v) FROM ansi_test_abs_int\n\n-- literal\nquery expect_error(overflow)\nSELECT abs(-2147483648)\n\n-- ============================================================================\n-- abs(LONG_MIN) overflow\n-- ============================================================================\n\n-- abs(-9223372036854775808) cannot be represented as long\nquery expect_error(overflow)\nSELECT abs(v) FROM ansi_test_abs_long\n\n-- literal\nquery expect_error(overflow)\nSELECT abs(-9223372036854775808L)\n\n-- ============================================================================\n-- abs(SHORT_MIN) overflow\n-- ============================================================================\n\n-- abs(-32768) cannot be represented as short\nquery expect_error(overflow)\nSELECT abs(v) FROM ansi_test_abs_short\n\n-- literal\nquery expect_error(overflow)\nSELECT abs(cast(-32768 as short))\n\n-- ============================================================================\n-- abs(BYTE_MIN) overflow\n-- ============================================================================\n\n-- abs(-128) cannot be represented as tinyint\nquery expect_error(overflow)\nSELECT abs(v) FROM ansi_test_abs_byte\n\n-- literal\nquery expect_error(overflow)\nSELECT abs(cast(-128 as tinyint))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/acos.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_acos(d double) USING parquet\n\nstatement\nINSERT INTO test_acos VALUES (0.0), (1.0), (-1.0), (0.5), (NULL), (cast('NaN' as double)), (2.0), (-2.0)\n\nquery tolerance=1e-6\nSELECT acos(d) FROM test_acos\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT acos(0.5), acos(1.0), acos(-1.0), acos(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/arithmetic.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- negative\nstatement\nCREATE TABLE test_neg(col1 int) USING parquet\n\nstatement\nINSERT INTO test_neg VALUES(1), (2), (3), (3)\n\nquery\nSELECT negative(col1), -(col1) FROM test_neg\n\n-- integral division overflow\nstatement\nCREATE TABLE test_div(c1 long, c2 short) USING parquet\n\nstatement\nINSERT INTO test_div VALUES(-9223372036854775808, -1)\n\nquery\nSELECT c1 div c2 FROM test_div ORDER BY c1\n\n-- Add, Subtract, Multiply, Divide, Remainder\nstatement\nCREATE TABLE test_arith(a int, b int, la long, lb long, fa float, fb float, da double, db double) USING parquet\n\nstatement\nINSERT INTO test_arith VALUES (10, 3, 10, 3, 10.5, 3.2, 10.5, 3.2), (0, 1, 0, 1, 0.0, 1.0, 0.0, 1.0), (-10, 3, -10, 3, -10.5, 3.2, -10.5, 3.2), (2147483647, 1, 9223372036854775807, 1, cast('Infinity' as float), 1.0, cast('NaN' as double), 1.0), (NULL, 1, NULL, 1, NULL, 1.0, NULL, 1.0)\n\nquery\nSELECT a + b, a - b, a * b, a / b, a % b FROM test_arith\n\nquery\nSELECT la + lb, la - lb, la * lb, la / lb, la % lb FROM test_arith\n\nquery\nSELECT fa + fb, fa - fb, fa * fb, fa / fb FROM test_arith\n\nquery\nSELECT da + db, da - db, da * db, da / db FROM test_arith\n\n-- column + literal\nquery\nSELECT a + 3, a - 3, a * 3, a / 3, a % 3 FROM test_arith\n\n-- literal + column\nquery\nSELECT 10 + b, 10 - b, 10 * b, 10 / b, 10 % b FROM test_arith\n\n-- literal + literal\nquery\nSELECT 10 + 3, 10 - 3, 10 * 3, 10 / 3, 10 % 3\n\n-- unary negative with literal\nquery\nSELECT negative(5), negative(-5), negative(0), -(5)\n\n-- division by zero\nstatement\nCREATE TABLE test_div_zero(a int, b int, d double) USING parquet\n\nstatement\nINSERT INTO test_div_zero VALUES (1, 0, 0.0), (0, 0, 0.0)\n\nquery\nSELECT a / b, a % b, d / 0.0 FROM test_div_zero\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/arithmetic_ansi.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ANSI mode arithmetic tests\n-- Tests that ANSI mode throws exceptions for overflow and division by zero\n\n-- Config: spark.sql.ansi.enabled=true\n\n-- ============================================================================\n-- Test data setup for integer overflow\n-- ============================================================================\n\nstatement\nCREATE TABLE ansi_int_overflow(a int, b int) USING parquet\n\nstatement\nINSERT INTO ansi_int_overflow VALUES (2147483647, 1), (-2147483648, 1), (-2147483648, -1)\n\nstatement\nCREATE TABLE ansi_long_overflow(a long, b long) USING parquet\n\nstatement\nINSERT INTO ansi_long_overflow VALUES (9223372036854775807, 1), (-9223372036854775808, 1), (-9223372036854775808, -1)\n\nstatement\nCREATE TABLE ansi_div_zero(a int, b int, c long, d long) USING parquet\n\nstatement\nINSERT INTO ansi_div_zero VALUES (1, 0, 1, 0)\n\n-- ============================================================================\n-- Integer addition overflow\n-- ============================================================================\n\n-- INT_MAX + 1 should overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT a + b FROM ansi_int_overflow WHERE a = 2147483647\n\n-- literal overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT 2147483647 + 1\n\n-- ============================================================================\n-- Integer subtraction overflow\n-- ============================================================================\n\n-- INT_MIN - 1 should overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT a - b FROM ansi_int_overflow WHERE a = -2147483648\n\n-- literal overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT -2147483648 - 1\n\n-- ============================================================================\n-- Integer multiplication overflow\n-- ============================================================================\n\n-- INT_MAX * 2 should overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT a * 2 FROM ansi_int_overflow WHERE a = 2147483647\n\n-- literal overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT 2147483647 * 2\n\n-- ============================================================================\n-- Long addition overflow\n-- ============================================================================\n\n-- LONG_MAX + 1 should overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT a + b FROM ansi_long_overflow WHERE a = 9223372036854775807\n\n-- ============================================================================\n-- Long subtraction overflow\n-- ============================================================================\n\n-- LONG_MIN - 1 should overflow\nquery 
expect_error(ARITHMETIC_OVERFLOW)\nSELECT a - b FROM ansi_long_overflow WHERE a = -9223372036854775808\n\n-- ============================================================================\n-- Long multiplication overflow\n-- ============================================================================\n\n-- LONG_MAX * 2 should overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT a * 2 FROM ansi_long_overflow WHERE a = 9223372036854775807\n\n-- ============================================================================\n-- Integer division by zero\n-- ============================================================================\n\n-- column / 0 should throw\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT a / b FROM ansi_div_zero\n\n-- column div 0 (integral division) should throw\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT a div b FROM ansi_div_zero\n\n-- column % 0 (remainder) should throw\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT a % b FROM ansi_div_zero\n\n-- literal / 0 should throw\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT 1 / 0\n\n-- literal div 0 should throw\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT 1 div 0\n\n-- literal % 0 should throw\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT 1 % 0\n\n-- ============================================================================\n-- Long division by zero\n-- ============================================================================\n\n-- long column / 0 should throw\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT c / d FROM ansi_div_zero\n\n-- long column div 0 should throw\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT c div d FROM ansi_div_zero\n\n-- long column % 0 should throw\nquery expect_error(DIVIDE_BY_ZERO)\nSELECT c % d FROM ansi_div_zero\n\n-- ============================================================================\n-- Unary minus overflow\n-- ============================================================================\n\n-- negating INT_MIN should overflow (since INT_MAX is 2147483647, -(-2147483648) cannot fit)\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT -a FROM ansi_int_overflow WHERE a = -2147483648\n\n-- negating LONG_MIN should overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT -a FROM ansi_long_overflow WHERE a = -9223372036854775808\n\n-- literal negation overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT -(-2147483648)\n\n-- literal long negation overflow\nquery expect_error(ARITHMETIC_OVERFLOW)\nSELECT -(-9223372036854775808L)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/asin.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_asin(d double) USING parquet\n\nstatement\nINSERT INTO test_asin VALUES (0.0), (1.0), (-1.0), (0.5), (NULL), (cast('NaN' as double)), (2.0), (-2.0)\n\nquery tolerance=1e-6\nSELECT asin(d) FROM test_asin\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT asin(0.5), asin(1.0), asin(-1.0), asin(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/atan.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_atan(d double) USING parquet\n\nstatement\nINSERT INTO test_atan VALUES (0.0), (1.0), (-1.0), (100.0), (-100.0), (NULL), (cast('NaN' as double)), (cast('Infinity' as double)), (cast('-Infinity' as double))\n\nquery tolerance=1e-6\nSELECT atan(d) FROM test_atan\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT atan(1.0), atan(0.0), atan(-1.0), atan(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/atan2.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_atan2(y double, x double) USING parquet\n\nstatement\nINSERT INTO test_atan2 VALUES\n  (0.0, 0.0), (0.0, -0.0), (0.0, 1.0), (0.0, -1.0), (0.0, NULL), (0.0, cast('NaN' as double)), (0.0, cast('Infinity' as double)), (0.0, cast('-Infinity' as double)),\n  (-0.0, 0.0), (-0.0, -0.0), (-0.0, 1.0), (-0.0, -1.0), (-0.0, NULL), (-0.0, cast('NaN' as double)), (-0.0, cast('Infinity' as double)), (-0.0, cast('-Infinity' as double)),\n  (1.0, 0.0), (1.0, -0.0), (1.0, 1.0), (1.0, -1.0), (1.0, NULL), (1.0, cast('NaN' as double)), (1.0, cast('Infinity' as double)), (1.0, cast('-Infinity' as double)),\n  (-1.0, 0.0), (-1.0, -0.0), (-1.0, 1.0), (-1.0, -1.0), (-1.0, NULL), (-1.0, cast('NaN' as double)), (-1.0, cast('Infinity' as double)), (-1.0, cast('-Infinity' as double)),\n  (NULL, 0.0), (NULL, -0.0), (NULL, 1.0), (NULL, -1.0), (NULL, NULL), (NULL, cast('NaN' as double)), (NULL, cast('Infinity' as double)), (NULL, cast('-Infinity' as double)),\n  (cast('NaN' as double), 0.0), (cast('NaN' as double), -0.0), (cast('NaN' as double), 1.0), (cast('NaN' as double), -1.0), (cast('NaN' as double), NULL), (cast('NaN' as double), cast('NaN' as double)), (cast('NaN' as double), cast('Infinity' as double)), (cast('NaN' as double), cast('-Infinity' as double)),\n  (cast('Infinity' as double), 0.0), (cast('Infinity' as double), -0.0), (cast('Infinity' as double), 1.0), (cast('Infinity' as double), -1.0), (cast('Infinity' as double), NULL), (cast('Infinity' as double), cast('NaN' as double)), (cast('Infinity' as double), cast('Infinity' as double)), (cast('Infinity' as double), cast('-Infinity' as double)),\n  (cast('-Infinity' as double), 0.0), (cast('-Infinity' as double), -0.0), (cast('-Infinity' as double), 1.0), (cast('-Infinity' as double), -1.0), (cast('-Infinity' as double), NULL), (cast('-Infinity' as double), cast('NaN' as double)), (cast('-Infinity' as double), cast('Infinity' as double)), (cast('-Infinity' as double), cast('-Infinity' as double))\n\nquery tolerance=1e-6\nSELECT atan2(y, x) FROM test_atan2\n\n-- column + literal\nquery tolerance=1e-6\nSELECT atan2(y, 1.0) FROM test_atan2\n\n-- literal + column\nquery tolerance=1e-6\nSELECT atan2(1.0, x) FROM test_atan2\n\n-- literal permutations\nquery tolerance=1e-6\nSELECT atan2(0.0, 0.0), atan2(0.0, -0.0), atan2(0.0, 1.0), atan2(0.0, -1.0),\n  atan2(-0.0, 0.0), atan2(-0.0, -0.0), atan2(-0.0, 1.0), atan2(-0.0, -1.0),\n  atan2(1.0, 0.0), atan2(1.0, -0.0), atan2(1.0, 1.0), atan2(1.0, -1.0),\n  atan2(-1.0, 0.0), atan2(-1.0, -0.0), atan2(-1.0, 1.0), atan2(-1.0, -1.0)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/bin.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_bin(i int, l long, f float, d double) USING parquet\n\nstatement\nINSERT INTO test_bin VALUES\n(13, 1, 1.5, 1.5),\n(1, 1, 0.1, 0.1),\n(100, 100, 99.99, 99.99),\n(255, 255, 255.255, 255.255),\n(-13, -1, -1.5, -1.5),\n(-1, -1, -0.1, -0.1),\n(-100, -100, -99.99, -99.99),\n(-255, -255, -255.255, -255.255),\n(0, 0, 0.0, 0.0),\n(0, 0, -0.0, -0.0),\n(NULL, NULL, NULL, NULL),\n(2147483647, 9223372036854775807, cast('Infinity' as float), cast('Infinity' as double)),\n(-2147483648, -9223372036854775808, cast('-Infinity' as float), cast('-Infinity' as double)),\n(100, 100, cast('Infinity' as float), cast('Infinity' as double)),\n(200, 200, cast('-Infinity' as float), cast('-Infinity' as double)),\n(300, 300, cast('NaN' as float), cast('NaN' as double)),\n(5, 5, 1.401298464e-45, 4.9406564584124654e-324),\n(1000, 1000, 3.402823466e+38, 1.7976931348623157e+308),\n(7, 7, 0.1, 0.1),\n(8, 8, 0.2, 0.2),\n(9, 9, 0.3, 0.3),\n(10, 10, 0.3333333333333333, 0.3333333333333333),\n(500, 500, cast('NaN' as float), cast('Infinity' as double)),\n(501, 501, cast('Infinity' as float), cast('NaN' as double)),\n(502, 502, cast('-Infinity' as float), cast('NaN' as double)),\n(503, 503, cast('NaN' as float), cast('-Infinity' as double)),\n(123456789, 123456789, 123456.789, 123456789.123456789),\n(-123456789, -123456789, -123456.789, -123456789.123456789),\n(2147483646, 9223372036854775806, 999999.999, 999999999.999999999),\n(-2147483647, -9223372036854775807, -999999.999, -999999999.999999999),\n(999, NULL, NULL, 999.999),\n(NULL, 888, 888.888, NULL),\n(777, 777, NULL, 777.777),\n(666, 666, 666.666, NULL),\n(15, 15, 1.23456789, 1.2345678901234567),\n(16, 16, -1.23456789, -1.2345678901234567),\n(314, 314, 3.14159265, 3.14159265358979323846),\n(271, 271, 2.71828183, 2.71828182845904523536);\n\nquery\nSELECT bin(i), bin(l), bin(f), bin(d) FROM test_bin\n\n-- literal arguments\nquery\nSELECT bin(-5), bin(-1.5), bin(0), bin(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/ceil.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ConfigMatrix: parquet.enable.dictionary=false,true\n\nstatement\nCREATE TABLE test_ceil(f float, d double, dec DECIMAL(5, 2)) USING parquet\n\nstatement\nINSERT INTO test_ceil VALUES (1.1, 1.1, 1.10), (-1.1, -1.1, -1.10), (0.0, 0.0, 0.00), (1.0, 1.0, 1.00), (NULL, NULL, NULL), (cast('NaN' as float), cast('NaN' as double), NULL), (cast('Infinity' as float), cast('Infinity' as double), NULL)\n\nquery\nSELECT ceil(f), ceil(d), ceil(dec) FROM test_ceil\n\n-- literal arguments\nquery\nSELECT ceil(1.1), ceil(-1.1), ceil(0.0), ceil(NULL), ceil(cast(1.10 as DECIMAL(5, 2))), ceil(cast(-1.10 as DECIMAL(5, 2)))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/cos.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_cos(d double) USING parquet\n\nstatement\nINSERT INTO test_cos VALUES (0.0), (3.141592653589793), (1.5707963267948966), (-3.141592653589793), (NULL), (cast('NaN' as double)), (cast('Infinity' as double))\n\nquery tolerance=1e-6\nSELECT cos(d) FROM test_cos\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT cos(0.0), cos(3.141592653589793), cos(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/cosh.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_cosh(d double) USING parquet\n\nstatement\nINSERT INTO test_cosh VALUES (0.0), (1.0), (-1.0), (100.0), (NULL), (cast('NaN' as double))\n\nquery tolerance=1e-6\nSELECT cosh(d) FROM test_cosh\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT cosh(0.0), cosh(1.0), cosh(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/cot.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_cot(d double) USING parquet\n\nstatement\nINSERT INTO test_cot VALUES (0.7853981633974483), (1.0), (-1.0), (0.1), (NULL), (cast('NaN' as double))\n\nquery tolerance=1e-6\nSELECT cot(d) FROM test_cot\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT cot(1.0), cot(0.5), cot(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/exp.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_exp(d double) USING parquet\n\nstatement\nINSERT INTO test_exp VALUES (0.0), (1.0), (-1.0), (100.0), (-100.0), (NULL), (cast('NaN' as double)), (cast('-Infinity' as double))\n\nquery tolerance=1e-6\nSELECT exp(d) FROM test_exp\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT exp(0.0), exp(1.0), exp(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/expm1.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_expm1(d double) USING parquet\n\nstatement\nINSERT INTO test_expm1 VALUES (0.0), (1.0), (-1.0), (0.001), (NULL), (cast('NaN' as double)), (cast('-Infinity' as double))\n\nquery tolerance=1e-6\nSELECT expm1(d) FROM test_expm1\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT expm1(0.0), expm1(1.0), expm1(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/floor.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- ConfigMatrix: parquet.enable.dictionary=false,true\n\nstatement\nCREATE TABLE test_floor(f float, d double, dec DECIMAL(5, 2)) USING parquet\n\nstatement\nINSERT INTO test_floor VALUES (1.9, 1.9, 1.90), (-1.1, -1.1, -1.10), (0.0, 0.0, 0.00), (1.0, 1.0, 1.00), (NULL, NULL, NULL), (cast('NaN' as float), cast('NaN' as double), NULL), (cast('Infinity' as float), cast('Infinity' as double), NULL)\n\nquery\nSELECT floor(f), floor(d), floor(dec) FROM test_floor\n\n-- literal arguments\nquery\nSELECT floor(1.9), floor(-1.1), floor(0.0), floor(NULL), floor(cast(1.90 as DECIMAL(5, 2))), floor(cast(-1.10 as DECIMAL(5, 2)))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/isnan.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_isnan(f float, d double) USING parquet\n\nstatement\nINSERT INTO test_isnan VALUES (1.0, 1.0), (cast('NaN' as float), cast('NaN' as double)), (NULL, NULL), (cast('Infinity' as float), cast('Infinity' as double)), (0.0, 0.0)\n\nquery\nSELECT isnan(f), isnan(d) FROM test_isnan\n\n-- literal arguments\nquery\nSELECT isnan(cast('NaN' as double)), isnan(1.0), isnan(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/log.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_log(d double) USING parquet\n\nstatement\nINSERT INTO test_log VALUES (1.0), (2.718281828459045), (10.0), (0.5), (NULL), (cast('NaN' as double)), (cast('Infinity' as double)), (0.0), (-1.0)\n\nquery tolerance=1e-6\nSELECT ln(d) FROM test_log\n\nquery tolerance=1e-6\nSELECT log(10.0, d) FROM test_log\n\n-- column + literal (log base from column)\nquery tolerance=1e-6\nSELECT log(d, 10.0) FROM test_log\n\n-- literal (1-arg form)\nquery tolerance=1e-6\nSELECT ln(1.0), ln(2.718281828459045), ln(10.0), ln(NULL)\n\n-- literal + literal (2-arg form)\nquery tolerance=1e-6\nSELECT log(10.0, 100.0), log(2.0, 8.0), log(10.0, 1.0), log(NULL, 10.0)\n\n-- edge cases: base or value <= 0 should return null\nquery tolerance=1e-6\nSELECT log(0.0, 10.0), log(-1.0, 10.0), log(10.0, 0.0), log(10.0, -1.0), log(0.0, 0.0), log(-1.0, -1.0)\n\n-- edge case: log(1, 1) produces NaN (0/0) which Spark preserves as NaN\nquery tolerance=1e-6\nSELECT log(1.0, 1.0)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/log10.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_log10(d double) USING parquet\n\nstatement\nINSERT INTO test_log10 VALUES (1.0), (10.0), (100.0), (0.1), (NULL), (cast('NaN' as double)), (cast('Infinity' as double)), (0.0), (-1.0)\n\nquery tolerance=1e-6\nSELECT log10(d) FROM test_log10\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT log10(100.0), log10(1.0), log10(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/log2.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_log2(d double) USING parquet\n\nstatement\nINSERT INTO test_log2 VALUES (1.0), (2.0), (4.0), (8.0), (0.5), (NULL), (cast('NaN' as double)), (cast('Infinity' as double)), (0.0), (-1.0)\n\nquery tolerance=1e-6\nSELECT log2(d) FROM test_log2\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT log2(8.0), log2(1.0), log2(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/pow.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_pow(base double, exp double) USING parquet\n\nstatement\nINSERT INTO test_pow VALUES (2.0, 3.0), (0.0, 0.0), (-1.0, 2.0), (-1.0, 0.5), (2.0, -1.0), (NULL, 2.0), (2.0, NULL), (cast('NaN' as double), 2.0), (cast('Infinity' as double), 2.0), (2.0, cast('Infinity' as double))\n\nquery tolerance=1e-6\nSELECT pow(base, exp) FROM test_pow\n\n-- column + literal\nquery tolerance=1e-6\nSELECT pow(base, 2.0) FROM test_pow\n\n-- literal + column\nquery tolerance=1e-6\nSELECT pow(2.0, exp) FROM test_pow\n\n-- literal + literal\nquery tolerance=1e-6\nSELECT pow(2.0, 3.0), pow(0.0, 0.0), pow(-1.0, 2.0), pow(NULL, 2.0)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/round.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_round(d double, i int) USING parquet\n\nstatement\nINSERT INTO test_round VALUES (2.5, 0), (3.5, 0), (-2.5, 0), (123.456, 2), (123.456, -1), (NULL, 0), (cast('NaN' as double), 0), (cast('Infinity' as double), 0), (0.0, 0)\n\nquery expect_fallback(BigDecimal rounding)\nSELECT round(d, 0) FROM test_round WHERE i = 0\n\nquery expect_fallback(BigDecimal rounding)\nSELECT round(d, 2) FROM test_round WHERE i = 2\n\nquery expect_fallback(BigDecimal rounding)\nSELECT round(d, -1) FROM test_round WHERE i = -1\n\nquery expect_fallback(BigDecimal rounding)\nSELECT round(d) FROM test_round\n\n-- literal + literal\nquery expect_fallback(BigDecimal rounding)\nSELECT round(123.456, 2), round(2.5, 0), round(3.5, 0), round(-2.5, 0), round(NULL, 0)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/signum.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_signum(d double) USING parquet\n\nstatement\nINSERT INTO test_signum VALUES (5.0), (-5.0), (0.0), (NULL), (cast('NaN' as double)), (cast('Infinity' as double)), (cast('-Infinity' as double))\n\nquery\nSELECT signum(d) FROM test_signum\n\n-- literal arguments\nquery\nSELECT signum(-5.0), signum(5.0), signum(0.0), signum(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/sin.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_sin(d double) USING parquet\n\nstatement\nINSERT INTO test_sin VALUES (0.0), (3.141592653589793), (1.5707963267948966), (-1.5707963267948966), (NULL), (cast('NaN' as double)), (cast('Infinity' as double))\n\nquery tolerance=1e-6\nSELECT sin(d) FROM test_sin\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT sin(0.0), sin(1.5707963267948966), sin(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/sinh.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_sinh(d double) USING parquet\n\nstatement\nINSERT INTO test_sinh VALUES (0.0), (1.0), (-1.0), (NULL), (cast('NaN' as double))\n\nquery tolerance=1e-6\nSELECT sinh(d) FROM test_sinh\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT sinh(0.0), sinh(1.0), sinh(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/sqrt.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_sqrt(d double) USING parquet\n\nstatement\nINSERT INTO test_sqrt VALUES (0.0), (1.0), (4.0), (2.0), (-1.0), (NULL), (cast('NaN' as double)), (cast('Infinity' as double))\n\nquery tolerance=1e-6\nSELECT sqrt(d) FROM test_sqrt\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT sqrt(4.0), sqrt(2.0), sqrt(0.0), sqrt(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/tan.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.expression.Tan.allowIncompatible=true\n\nstatement\nCREATE TABLE test_tan(d double) USING parquet\n\nstatement\nINSERT INTO test_tan VALUES (0.0), (-0.0), (0.7853981633974483), (-0.7853981633974483), (1.0), (NULL), (cast('NaN' as double)), (cast('Infinity' as double))\n\nquery tolerance=1e-6\nSELECT tan(d) FROM test_tan\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT tan(0.0), tan(-0.0), tan(0.7853981633974483), tan(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/math/tanh.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_tanh(d double) USING parquet\n\nstatement\nINSERT INTO test_tanh VALUES (0.0), (1.0), (-1.0), (100.0), (-100.0), (NULL), (cast('NaN' as double))\n\nquery tolerance=1e-6\nSELECT tanh(d) FROM test_tanh\n\n-- literal arguments\nquery tolerance=1e-6\nSELECT tanh(0.0), tanh(1.0), tanh(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/misc/parquet_default_values.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- parquet default values\nstatement\nCREATE TABLE t1(col1 boolean) USING parquet\n\nstatement\nINSERT INTO t1 VALUES(true)\n\nstatement\nALTER TABLE t1 ADD COLUMN col2 string DEFAULT 'hello'\n\nquery\nSELECT * FROM t1\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/misc/scalar_subquery.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_subq(a int, b int) USING parquet\n\nstatement\nINSERT INTO test_subq VALUES (1, 10), (2, 20), (3, 30), (NULL, 40)\n\nquery spark_answer_only\nSELECT a, (SELECT max(b) FROM test_subq) FROM test_subq\n\nquery spark_answer_only\nSELECT a, (SELECT min(b) FROM test_subq) + a FROM test_subq\n\nquery spark_answer_only\nSELECT a, b, b > (SELECT avg(b) FROM test_subq) FROM test_subq\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/misc/width_bucket.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- MinSparkVersion: 3.5\n\nstatement\nCREATE TABLE test_wb(v double) USING parquet\n\nstatement\nINSERT INTO test_wb VALUES (0.0), (2.5), (5.0), (7.5), (10.0), (-1.0), (11.0), (NULL)\n\nquery\nSELECT v, width_bucket(v, 0, 10, 4) FROM test_wb\n\nquery\nSELECT v, width_bucket(v, 10, 0, 4) FROM test_wb\n\nquery\nSELECT v, width_bucket(v, 0, 10, 1) FROM test_wb\n\n-- literal arguments\nquery\nSELECT width_bucket(5.0, 0, 10, 4), width_bucket(0.0, 0, 10, 4), width_bucket(NULL, 0, 10, 4)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/ascii.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_ascii(s string) USING parquet\n\nstatement\nINSERT INTO test_ascii VALUES ('A'), ('z'), ('0'), (''), (NULL), ('hello')\n\nquery\nSELECT ascii(s) FROM test_ascii\n\n-- literal arguments\nquery\nSELECT ascii('A'), ascii(''), ascii(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/bit_length.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_bit_length(s string) USING parquet\n\nstatement\nINSERT INTO test_bit_length VALUES (''), ('a'), ('hello'), (NULL), ('café')\n\nquery\nSELECT bit_length(s) FROM test_bit_length\n\n-- literal arguments\nquery\nSELECT bit_length('hello'), bit_length(''), bit_length(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/chr.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_chr(i long) USING parquet\n\nstatement\nINSERT INTO test_chr VALUES (65), (97), (0), (NULL), (128522), (256), (48)\n\nquery\nSELECT chr(i) FROM test_chr\n\n-- literal arguments\nquery\nSELECT chr(65), chr(0), chr(-1), chr(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/concat.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_concat(a string, b string, c string, d string) USING parquet\n\nstatement\nINSERT INTO test_concat VALUES ('hello', ' ', 'world', NULL), ('', '', '', NULL), (NULL, 'b', 'c', NULL), ('a', NULL, 'c', NULL), (NULL, NULL, NULL, NULL)\n\nquery\nSELECT concat(a, b, c) FROM test_concat\n\nquery\nSELECT a || b || c FROM test_concat\n\n-- mixed: column + literal + column\nquery\nSELECT concat(a, ' ', c) FROM test_concat\n\n-- migrated from CometExpressionSuite \"test concat function - strings\"\n-- two arguments\nquery\nSELECT concat(a, b) FROM test_concat\n\n-- same column twice\nquery\nSELECT concat(a, a) FROM test_concat\n\n-- four arguments with null column\nquery\nSELECT concat(a, b, c, d) FROM test_concat\n\n-- nested concat\nquery\nSELECT concat(concat(a, b, c), concat(a, c)) FROM test_concat\n\n-- literal + literal + literal\nquery\nSELECT concat('hello', ' ', 'world'), concat('', '', ''), concat(NULL, 'b', 'c')\n\n-- migrated from CometExpressionSuite \"test concat function - binary\"\n-- https://github.com/apache/datafusion-comet/issues/2647\nstatement\nCREATE TABLE test_concat_binary USING parquet AS SELECT cast(uuid() as binary) c1, cast(uuid() as binary) c2, cast(uuid() as binary) c3, cast(uuid() as binary) c4, cast(null as binary) c5 FROM range(10)\n\nquery expect_fallback(CONCAT supports only string input parameters)\nSELECT concat(c1, c2) AS x FROM test_concat_binary\n\nquery expect_fallback(CONCAT supports only string input parameters)\nSELECT concat(c1, c1) AS x FROM test_concat_binary\n\nquery expect_fallback(CONCAT supports only string input parameters)\nSELECT concat(c1, c2, c3) AS x FROM test_concat_binary\n\nquery expect_fallback(CONCAT supports only string input parameters)\nSELECT concat(c1, c2, c3, c5) AS x FROM test_concat_binary\n\nquery expect_fallback(CONCAT supports only string input parameters)\nSELECT concat(concat(c1, c2, c3), concat(c1, c3)) AS x FROM test_concat_binary\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/concat_ws.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_concat_ws(a string, b string, c string) USING parquet\n\nstatement\nINSERT INTO test_concat_ws VALUES ('hello', 'beautiful', 'world'), ('', '', ''), (NULL, 'b', 'c'), ('a', NULL, 'c'), (NULL, NULL, NULL)\n\nquery\nSELECT concat_ws(',', a, b, c) FROM test_concat_ws\n\nquery\nSELECT concat_ws('', a, b, c) FROM test_concat_ws\n\nquery\nSELECT concat_ws(NULL, a, b, c) FROM test_concat_ws\n\n-- migrated from CometStringExpressionSuite \"string concat_ws\"\nstatement\nCREATE TABLE names(id int, first_name varchar(20), middle_initial char(1), last_name varchar(20)) USING parquet\n\nstatement\nINSERT INTO names VALUES(1, 'James', 'B', 'Taylor'), (2, 'Smith', 'C', 'Davis'), (3, NULL, NULL, NULL), (4, 'Smith', 'C', 'Davis')\n\nquery\nSELECT concat_ws(' ', first_name, middle_initial, last_name) FROM names\n\n-- literal + literal + literal (falls back to Spark when all args are foldable)\nquery spark_answer_only\nSELECT concat_ws(',', 'hello', 'world'), concat_ws(',', '', ''), concat_ws(',', NULL, 'b', 'c'), concat_ws(NULL, 'a', 'b')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/contains.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_contains(s string, sub string) USING parquet\n\nstatement\nINSERT INTO test_contains VALUES ('hello world', 'world'), ('hello', ''), ('', ''), ('hello', 'xyz'), (NULL, 'a'), ('hello', NULL)\n\nquery\nSELECT contains(s, sub) FROM test_contains\n\n-- column + literal\nquery\nSELECT contains(s, 'world') FROM test_contains\n\n-- literal + column\nquery\nSELECT contains('hello world', sub) FROM test_contains\n\n-- literal + literal\nquery\nSELECT contains('hello world', 'world'), contains('hello', 'xyz'), contains('', ''), contains(NULL, 'a')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/ends_with.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_ends_with(s string, suffix string) USING parquet\n\nstatement\nINSERT INTO test_ends_with VALUES ('hello world', 'world'), ('hello', ''), ('', ''), ('hello', 'xyz'), (NULL, 'a'), ('hello', NULL)\n\nquery\nSELECT endswith(s, suffix) FROM test_ends_with\n\n-- column + literal\nquery\nSELECT endswith(s, 'world') FROM test_ends_with\n\n-- literal + column\nquery\nSELECT endswith('hello world', suffix) FROM test_ends_with\n\n-- literal + literal\nquery\nSELECT endswith('hello world', 'world'), endswith('hello', 'xyz'), endswith('', ''), endswith(NULL, 'a')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/get_json_object.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.expression.GetJsonObject.allowIncompatible=true\n-- ConfigMatrix: parquet.enable.dictionary=false,true\n\nstatement\nCREATE TABLE test_get_json_object(json_str string, path string) USING parquet\n\nstatement\nINSERT INTO test_get_json_object VALUES ('{\"name\":\"John\",\"age\":30}', '$.name'), ('{\"a\":{\"b\":{\"c\":\"deep\"}}}', '$.a.b.c'), ('{\"items\":[1,2,3]}', '$.items[1]'), (NULL, '$.name'), ('{\"name\":\"Alice\"}', NULL), ('invalid json', '$.name'), ('{\"a\":\"b\"}', '$'), ('{\"key\":\"value\"}', '$.missing'), ('{\"arr\":[\"x\",\"y\",\"z\"]}', '$.arr[0]'), ('{\"a\":null}', '$.a')\n\n-- column json, column path\nquery\nSELECT get_json_object(json_str, path) FROM test_get_json_object\n\n-- column json, literal path\nquery\nSELECT get_json_object(json_str, '$.name') FROM test_get_json_object\n\n-- simple field extraction\nquery\nSELECT get_json_object('{\"name\":\"John\",\"age\":30}', '$.name')\n\n-- numeric value\nquery\nSELECT get_json_object('{\"name\":\"John\",\"age\":30}', '$.age')\n\n-- nested field\nquery\nSELECT get_json_object('{\"user\":{\"profile\":{\"name\":\"Alice\"}}}', '$.user.profile.name')\n\n-- array index\nquery\nSELECT get_json_object('[1,2,3]', '$[0]'), get_json_object('[1,2,3]', '$[2]')\n\n-- out of bounds array index\nquery\nSELECT get_json_object('[1,2,3]', '$[3]')\n\n-- nested array field\nquery\nSELECT get_json_object('{\"items\":[\"apple\",\"banana\",\"cherry\"]}', '$.items[1]')\n\n-- root path returns entire input\nquery\nSELECT get_json_object('{\"a\":\"b\"}', '$')\n\n-- null json\nquery\nSELECT get_json_object(NULL, '$.name')\n\n-- null path\nquery\nSELECT get_json_object('{\"a\":\"b\"}', NULL)\n\n-- invalid json returns null\nquery\nSELECT get_json_object('not json', '$.a')\n\n-- missing field returns null\nquery\nSELECT get_json_object('{\"a\":\"b\"}', '$.c')\n\n-- json null value returns null\nquery\nSELECT get_json_object('{\"a\":null}', '$.a')\n\n-- boolean value\nquery\nSELECT get_json_object('{\"flag\":true}', '$.flag')\n\n-- nested object returned as json\nquery\nSELECT get_json_object('{\"a\":{\"b\":1}}', '$.a')\n\n-- array returned as json\nquery\nSELECT get_json_object('{\"a\":[1,2,3]}', '$.a')\n\n-- bracket notation\nquery\nSELECT get_json_object('{\"key with spaces\":\"it works\"}', '$[''key with spaces'']')\n\n-- wildcard on array\nquery\nSELECT get_json_object('[{\"a\":\"b\"},{\"a\":\"c\"}]', '$[*].a')\n\n-- empty string json\nquery\nSELECT get_json_object('', '$.a')\n\n-- deeply nested\nquery\nSELECT get_json_object('{\"a\":{\"b\":{\"c\":{\"d\":\"found\"}}}}', '$.a.b.c.d')\n\n-- array of arrays\nquery\nSELECT get_json_object('[[1,2],[3,4]]', '$[0][1]')\n\n-- wildcard on simple array returns the 
whole array\nquery\nSELECT get_json_object('[1,2,3]', '$[*]')\n\n-- object wildcard (Spark returns null for $.* on objects)\nquery\nSELECT get_json_object('{\"a\":1,\"b\":2,\"c\":3}', '$.*')\n\n-- single wildcard match returns unwrapped value\nquery\nSELECT get_json_object('[{\"a\":\"only\"}]', '$[*].a')\n\n-- wildcard with missing fields in some elements\nquery\nSELECT get_json_object('[{\"a\":1},{\"b\":2},{\"a\":3}]', '$[*].a')\n\n-- invalid path: recursive descent\nquery\nSELECT get_json_object('{\"a\":1}', '$..a')\n\n-- invalid path: no dollar sign\nquery\nSELECT get_json_object('{\"a\":1}', 'a')\n\n-- invalid path: dot-bracket\nquery\nSELECT get_json_object('[1,2,3]', '$.[0]')\n\n-- field name with special chars\nquery\nSELECT get_json_object('{\"fb:testid\":\"123\"}', '$.fb:testid')\n\n-- escaped quotes in string values\nquery\nSELECT get_json_object('{\"a\":\"b\\\\\"c\"}', '$.a')\n\n-- numeric precision preserved\nquery\nSELECT get_json_object('{\"price\":8.95}', '$.price')\n\n-- object key ordering preserved in output\nquery\nSELECT get_json_object('{\"z\":1,\"a\":2}', '$')\n\n-- unicode field names and values\nquery\nSELECT get_json_object('{\"名前\":\"太郎\",\"年齢\":25}', '$.名前')\n\n-- unicode values\nquery\nSELECT get_json_object('{\"greeting\":\"こんにちは世界\"}', '$.greeting')\n\n-- emoji in values\nquery\nSELECT get_json_object('{\"emoji\":\"🎉🚀\"}', '$.emoji')\n\n-- unicode bracket notation\nquery\nSELECT get_json_object('{\"键\":\"值\"}', '$[''键'']')\n\n-- mixed scripts (Latin with accents)\nquery\nSELECT get_json_object('{\"data\":\"café résumé naïve\"}', '$.data')\n\n-- unicode in wildcard results\nquery\nSELECT get_json_object('[{\"名\":\"Alice\"},{\"名\":\"太郎\"}]', '$[*].名')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/hex.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_hex(i int, l long, s string) USING parquet\n\nstatement\nINSERT INTO test_hex VALUES (0, 0, ''), (255, 255, 'Spark'), (-1, -1, NULL), (NULL, NULL, 'ABC'), (2147483647, 9223372036854775807, 'a')\n\nquery\nSELECT hex(i), hex(l), hex(s) FROM test_hex\n\n-- literal arguments\nquery\nSELECT hex(255), hex('Spark'), hex(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/init_cap.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_initcap(s string) USING parquet\n\nstatement\nINSERT INTO test_initcap VALUES ('hello world'), ('HELLO WORLD'), (''), (NULL), ('hello-world'), ('123abc'), ('  spaces  ')\n\nquery expect_fallback(not fully compatible with Spark)\nSELECT initcap(s) FROM test_initcap\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/init_cap_enabled.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Test initcap() with allowIncompatible enabled (happy path)\n-- Config: spark.comet.expression.InitCap.allowIncompatible=true\n\nstatement\nCREATE TABLE test_initcap_enabled(s string) USING parquet\n\nstatement\nINSERT INTO test_initcap_enabled VALUES ('hello world'), ('HELLO WORLD'), (''), (NULL), ('123abc'), ('  spaces  ')\n\nquery\nSELECT initcap(s) FROM test_initcap_enabled\n\n-- literal arguments\nquery\nSELECT initcap('hello world'), initcap(''), initcap(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/left.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_str_left(s string, n int) USING parquet\n\nstatement\nINSERT INTO test_str_left VALUES ('hello', 3), ('hello', 0), ('hello', -1), ('hello', 10), ('', 3), (NULL, 3), ('hello', NULL)\n\nquery expect_fallback(Substring pos and len must be literals)\nSELECT left(s, n) FROM test_str_left\n\n-- column + literal\nquery\nSELECT left(s, 3) FROM test_str_left\n\n-- column + literal: edge cases\nquery\nSELECT left(s, 0) FROM test_str_left\n\nquery\nSELECT left(s, -1) FROM test_str_left\n\nquery\n-- n exceeds length of 'hello' (5 chars)\nSELECT left(s, 10) FROM test_str_left\n\n-- literal + column\nquery expect_fallback(Substring pos and len must be literals)\nSELECT left('hello', n) FROM test_str_left\n\n-- literal + literal\nquery ignore(https://github.com/apache/datafusion-comet/issues/3337)\nSELECT left('hello', 3), left('hello', 0), left('hello', -1), left('', 3), left(NULL, 3)\n\n-- unicode\nstatement\nCREATE TABLE test_str_left_unicode(s string) USING parquet\n\nstatement\nINSERT INTO test_str_left_unicode VALUES ('café'), ('hello世界'), ('😀emoji'), ('తెలుగు'), (NULL)\n\nquery\nSELECT s, left(s, 2) FROM test_str_left_unicode\n\nquery\nSELECT s, left(s, 4) FROM test_str_left_unicode\n\nquery\nSELECT s, left(s, 0) FROM test_str_left_unicode\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/length.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_length(s string) USING parquet\n\nstatement\nINSERT INTO test_length VALUES (''), ('a'), ('hello'), (NULL), ('café')\n\nquery\nSELECT length(s), char_length(s) FROM test_length\n\n-- literal arguments\nquery\nSELECT length('hello'), length(''), length(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/like.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_like(s string) USING parquet\n\nstatement\nINSERT INTO test_like VALUES ('hello'), ('world'), (''), (NULL), ('Hello'), ('h%llo'), ('h_llo')\n\nquery\nSELECT s LIKE 'h%' FROM test_like\n\nquery\nSELECT s LIKE '%llo' FROM test_like\n\nquery\nSELECT s LIKE 'h_llo' FROM test_like\n\nquery\nSELECT s LIKE '' FROM test_like\n\n-- literal arguments\nquery\nSELECT 'hello' LIKE 'h%', 'hello' LIKE 'xyz%', '' LIKE '', NULL LIKE 'a'\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/lower.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_lower(s string) USING parquet\n\nstatement\nINSERT INTO test_lower VALUES ('HELLO'), ('hello'), ('Hello World'), (''), (NULL), ('123ABC')\n\nquery expect_fallback(case conversion)\nSELECT lower(s) FROM test_lower\n\n-- literal arguments\nquery expect_fallback(case conversion)\nSELECT lower('HELLO'), lower(''), lower(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/lower_enabled.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Test lower() with case conversion enabled (happy path)\n-- Config: spark.comet.caseConversion.enabled=true\n\nstatement\nCREATE TABLE test_lower_enabled(s string) USING parquet\n\nstatement\nINSERT INTO test_lower_enabled VALUES ('HELLO'), ('hello'), ('Hello World'), (''), (NULL), ('123ABC')\n\nquery\nSELECT lower(s) FROM test_lower_enabled\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/luhn_check.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- MinSparkVersion: 3.5\n\nstatement\nCREATE TABLE test_luhn(s string) USING parquet\n\nstatement\nINSERT INTO test_luhn VALUES ('79927398710'), ('79927398713'), ('1234567812345670'), ('0'), (''), ('abc'), (NULL)\n\n-- column input\nquery\nSELECT luhn_check(s) FROM test_luhn\n\n-- literal arguments\nquery\nSELECT luhn_check('79927398713'), luhn_check('79927398710'), luhn_check(''), luhn_check(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/octet_length.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_octet_length(s string) USING parquet\n\nstatement\nINSERT INTO test_octet_length VALUES (''), ('a'), ('hello'), (NULL), ('café')\n\nquery\nSELECT octet_length(s) FROM test_octet_length\n\n-- literal arguments\nquery\nSELECT octet_length('hello'), octet_length(''), octet_length(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/regexp_replace.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_regexp_replace(s string) USING parquet\n\nstatement\nINSERT INTO test_regexp_replace VALUES ('100-200'), ('abc'), (''), (NULL), ('phone 123-456-7890')\n\nquery expect_fallback(Regexp pattern)\nSELECT regexp_replace(s, '(\\\\d+)', 'X') FROM test_regexp_replace\n\nquery expect_fallback(Regexp pattern)\nSELECT regexp_replace(s, '(\\\\d+)', 'X', 1) FROM test_regexp_replace\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/regexp_replace_enabled.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Test regexp_replace() with regexp allowIncompatible enabled (happy path)\n-- Config: spark.comet.expression.regexp.allowIncompatible=true\n\nstatement\nCREATE TABLE test_regexp_replace_enabled(s string) USING parquet\n\nstatement\nINSERT INTO test_regexp_replace_enabled VALUES ('100-200'), ('abc'), (''), (NULL), ('phone 123-456-7890')\n\nquery\nSELECT regexp_replace(s, '(\\d+)', 'X') FROM test_regexp_replace_enabled\n\nquery\nSELECT regexp_replace(s, '(\\d+)', 'X', 1) FROM test_regexp_replace_enabled\n\n-- literal + literal + literal\nquery\nSELECT regexp_replace('100-200', '(\\d+)', 'X'), regexp_replace('abc', '(\\d+)', 'X'), regexp_replace(NULL, '(\\d+)', 'X')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/reverse.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_reverse(s string) USING parquet\n\nstatement\nINSERT INTO test_reverse VALUES ('hello'), (''), (NULL), ('a'), ('abcde'), ('café')\n\nquery\nSELECT reverse(s) FROM test_reverse\n\n-- literal arguments\nquery\nSELECT reverse('hello'), reverse(''), reverse(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/right.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Note: Right is a RuntimeReplaceable expression. Spark replaces it with\n-- If(IsNull(str), null, If(len <= 0, \"\", Substring(str, -len, len)))\n-- before Comet sees it. CometRight handles the serde, but the optimizer\n-- may replace it first. We use spark_answer_only to verify correctness.\nstatement\nCREATE TABLE test_str_right(s string, n int) USING parquet\n\nstatement\nINSERT INTO test_str_right VALUES ('hello', 3), ('hello', 0), ('hello', -1), ('hello', 10), ('', 3), (NULL, 3), ('hello', NULL)\n\n-- both columns: len must be literal, falls back\nquery spark_answer_only\nSELECT right(s, n) FROM test_str_right\n\n-- column + literal: basic\nquery spark_answer_only\nSELECT right(s, 3) FROM test_str_right\n\n-- column + literal: edge cases\nquery spark_answer_only\nSELECT right(s, 0) FROM test_str_right\n\nquery spark_answer_only\nSELECT right(s, -1) FROM test_str_right\n\nquery spark_answer_only\n-- n exceeds length of 'hello' (5 chars)\nSELECT right(s, 10) FROM test_str_right\n\n-- literal + column: falls back\nquery spark_answer_only\nSELECT right('hello', n) FROM test_str_right\n\n-- literal + literal\nquery spark_answer_only\nSELECT right('hello', 3), right('hello', 0), right('hello', -1), right('', 3), right(NULL, 3)\n\n-- null propagation with len <= 0 (critical: NULL str with non-positive len must return NULL, not empty string)\nquery spark_answer_only\nSELECT right(CAST(NULL AS STRING), 0), right(CAST(NULL AS STRING), -1), right(CAST(NULL AS STRING), 2)\n\n-- mixed null and non-null values with len <= 0\nstatement\nCREATE TABLE test_str_right_nulls(s string) USING parquet\n\nstatement\nINSERT INTO test_str_right_nulls VALUES ('hello'), (NULL), (''), ('world')\n\nquery spark_answer_only\nSELECT s, right(s, 0) FROM test_str_right_nulls\n\nquery spark_answer_only\nSELECT s, right(s, -1) FROM test_str_right_nulls\n\nquery spark_answer_only\nSELECT s, right(s, 2) FROM test_str_right_nulls\n\n-- equivalence with substring\nquery spark_answer_only\nSELECT s, right(s, 3), substring(s, -3, 3) FROM test_str_right_nulls\n\n-- unicode\nstatement\nCREATE TABLE test_str_right_unicode(s string) USING parquet\n\nstatement\nINSERT INTO test_str_right_unicode VALUES ('café'), ('hello世界'), ('😀emoji'), ('తెలుగు'), (NULL)\n\nquery spark_answer_only\nSELECT s, right(s, 2) FROM test_str_right_unicode\n\nquery spark_answer_only\nSELECT s, right(s, 4) FROM test_str_right_unicode\n\nquery spark_answer_only\nSELECT s, right(s, 0) FROM test_str_right_unicode\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/rlike.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_rlike(s string) USING parquet\n\nstatement\nINSERT INTO test_rlike VALUES ('hello'), ('12345'), (''), (NULL), ('Hello World'), ('abc123')\n\nquery expect_fallback(Regexp pattern)\nSELECT s RLIKE '^[0-9]+$' FROM test_rlike\n\nquery expect_fallback(Regexp pattern)\nSELECT s RLIKE '^[a-z]+$' FROM test_rlike\n\nquery spark_answer_only\nSELECT s RLIKE '' FROM test_rlike\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/rlike_enabled.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Test RLIKE with regexp allowIncompatible enabled (happy path)\n-- Config: spark.comet.expression.regexp.allowIncompatible=true\n\nstatement\nCREATE TABLE test_rlike_enabled(s string) USING parquet\n\nstatement\nINSERT INTO test_rlike_enabled VALUES ('hello'), ('12345'), (''), (NULL), ('Hello World'), ('abc123')\n\nquery\nSELECT s RLIKE '^[0-9]+$' FROM test_rlike_enabled\n\nquery\nSELECT s RLIKE '^[a-z]+$' FROM test_rlike_enabled\n\nquery\nSELECT s RLIKE '' FROM test_rlike_enabled\n\n-- literal arguments\nquery\nSELECT 'hello' RLIKE '^[a-z]+$', '12345' RLIKE '^[a-z]+$', '' RLIKE '', NULL RLIKE 'a'\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/starts_with.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_starts_with(s string, prefix string) USING parquet\n\nstatement\nINSERT INTO test_starts_with VALUES ('hello world', 'hello'), ('hello', ''), ('', ''), ('hello', 'xyz'), (NULL, 'a'), ('hello', NULL)\n\nquery\nSELECT startswith(s, prefix) FROM test_starts_with\n\n-- column + literal\nquery\nSELECT startswith(s, 'hello') FROM test_starts_with\n\n-- literal + column\nquery\nSELECT startswith('hello world', prefix) FROM test_starts_with\n\n-- literal + literal\nquery\nSELECT startswith('hello world', 'hello'), startswith('hello', 'xyz'), startswith('', ''), startswith(NULL, 'a')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/string.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- substring with start < 1\nstatement\nCREATE TABLE t(col string) USING parquet\n\nstatement\nINSERT INTO t VALUES('123456')\n\nquery\nSELECT substring(col, 0) FROM t\n\nquery\nSELECT substring(col, -1) FROM t\n\n-- md5\nstatement\nCREATE TABLE test_md5(col String) USING parquet\n\nstatement\nINSERT INTO test_md5 VALUES ('test1'), ('test1'), ('test2'), ('test2'), (NULL), ('')\n\nquery\nSELECT md5(col) FROM test_md5\n\n-- unhex\nstatement\nCREATE TABLE unhex_table(col string) USING parquet\n\nstatement\nINSERT INTO unhex_table VALUES ('537061726B2053514C'), ('737472696E67'), ('\\0'), (''), ('###'), ('G123'), ('hello'), ('A1B'), ('0A1B')\n\nquery\nSELECT unhex(col) FROM unhex_table\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/string_instr.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_instr(s string, sub string) USING parquet\n\nstatement\nINSERT INTO test_instr VALUES ('hello world', 'world'), ('hello', 'xyz'), ('hello', ''), ('', ''), (NULL, 'a'), ('hello', NULL), ('abcabc', 'bc')\n\nquery\nSELECT instr(s, sub) FROM test_instr\n\n-- column + literal\nquery\nSELECT instr(s, 'world') FROM test_instr\n\n-- literal + column\nquery\nSELECT instr('hello world', sub) FROM test_instr\n\n-- literal + literal\nquery\nSELECT instr('hello world', 'world'), instr('hello', 'xyz'), instr('', ''), instr(NULL, 'a')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/string_lpad.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_lpad(s string, len int, pad string) USING parquet\n\nstatement\nINSERT INTO test_lpad VALUES ('hi', 5, 'x'), ('hello', 3, 'x'), ('hi', 5, 'xy'), ('', 3, 'a'), (NULL, 5, 'x'), ('hi', 0, 'x'), ('hi', -1, 'x')\n\nquery expect_fallback(Only scalar values are supported for the pad argument)\nSELECT lpad(s, len, pad) FROM test_lpad\n\nquery\nSELECT lpad(s, len) FROM test_lpad\n\n-- column + literal + literal\nquery\nSELECT lpad(s, 5, 'x') FROM test_lpad\n\n-- literal + literal + literal\nquery expect_fallback(Scalar values are not supported for the str argument)\nSELECT lpad('hi', 5, 'x'), lpad('hello', 3, 'x'), lpad('', 3, 'a'), lpad(NULL, 5, 'x')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/string_repeat.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_repeat(s string, n int) USING parquet\n\nstatement\nINSERT INTO test_repeat VALUES ('hi', 3), ('', 5), ('a', 0), ('a', -1), (NULL, 3), ('hi', NULL)\n\nquery\nSELECT repeat(s, n) FROM test_repeat\n\n-- column + literal\nquery\nSELECT repeat(s, 3) FROM test_repeat\n\n-- literal + column\nquery\nSELECT repeat('hi', n) FROM test_repeat\n\n-- literal + literal\nquery\nSELECT repeat('hi', 3), repeat('', 5), repeat('a', 0), repeat(NULL, 3)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/string_replace.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_str_replace(s string, search string, replace string) USING parquet\n\nstatement\nINSERT INTO test_str_replace VALUES ('hello world', 'world', 'there'), ('aaa', 'a', 'bb'), ('hello', 'xyz', 'abc'), ('', 'a', 'b'), (NULL, 'a', 'b')\n\nquery\nSELECT replace(s, search, replace) FROM test_str_replace\n\nquery ignore(https://github.com/apache/datafusion-comet/issues/3344)\nSELECT replace('hello', '', 'x')\n\n-- column + literal + literal\nquery\nSELECT replace(s, 'world', 'there') FROM test_str_replace\n\n-- literal + column + column\nquery\nSELECT replace('hello world', search, replace) FROM test_str_replace\n\n-- literal + literal + literal\nquery\nSELECT replace('hello world', 'world', 'there'), replace('aaa', 'a', 'bb'), replace('hello', 'xyz', 'abc'), replace(NULL, 'a', 'b')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/string_rpad.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_rpad(s string, len int, pad string) USING parquet\n\nstatement\nINSERT INTO test_rpad VALUES ('hi', 5, 'x'), ('hello', 3, 'x'), ('hi', 5, 'xy'), ('', 3, 'a'), (NULL, 5, 'x'), ('hi', 0, 'x'), ('hi', -1, 'x')\n\nquery expect_fallback(Only scalar values are supported for the pad argument)\nSELECT rpad(s, len, pad) FROM test_rpad\n\nquery\nSELECT rpad(s, len) FROM test_rpad\n\n-- column + literal + literal\nquery\nSELECT rpad(s, 5, 'x') FROM test_rpad\n\n-- literal + literal + literal\nquery expect_fallback(Scalar values are not supported for the str argument)\nSELECT rpad('hi', 5, 'x'), rpad('hello', 3, 'x'), rpad('', 3, 'a'), rpad(NULL, 5, 'x')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/string_space.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_space(n int) USING parquet\n\nstatement\nINSERT INTO test_space VALUES (0), (1), (5), (NULL), (-1)\n\nquery\nSELECT concat('[', space(n), ']') FROM test_space WHERE n >= 0 OR n IS NULL\n\nquery\nSELECT concat('[', space(n), ']') FROM test_space WHERE n < 0\n\n-- literal arguments\nquery\nSELECT concat('[', space(5), ']'), concat('[', space(0), ']'), space(-1), space(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/string_translate.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_translate(s string, from_str string, to_str string) USING parquet\n\nstatement\nINSERT INTO test_translate VALUES ('hello', 'el', 'ip'), ('hello', 'aeiou', '12345'), ('', 'a', 'b'), (NULL, 'a', 'b'), ('hello', '', ''), ('abc', 'abc', 'x')\n\nquery\nSELECT translate(s, from_str, to_str) FROM test_translate\n\n-- column + literal + literal\nquery\nSELECT translate(s, 'el', 'ip') FROM test_translate\n\n-- literal + column + column\nquery\nSELECT translate('hello', from_str, to_str) FROM test_translate\n\n-- literal + literal + literal\nquery\nSELECT translate('hello', 'el', 'ip'), translate('hello', 'aeiou', '12345'), translate('', 'a', 'b'), translate(NULL, 'a', 'b')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/string_trim.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_trim(s string) USING parquet\n\nstatement\nINSERT INTO test_trim VALUES ('  hello  '), ('hello'), (''), (NULL), ('  '), (' hello world ')\n\nquery\nSELECT trim(s), ltrim(s), rtrim(s) FROM test_trim\n\nquery\nSELECT trim(BOTH 'h' FROM s) FROM test_trim\n\n-- literal arguments\nquery\nSELECT trim('  hello  '), ltrim('  hello  '), rtrim('  hello  ')\n\nquery\nSELECT trim(BOTH 'h' FROM 'hello'), trim(LEADING ' ' FROM '  hello  '), trim(TRAILING ' ' FROM '  hello  ')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/substring.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_substring(s string) USING parquet\n\nstatement\nINSERT INTO test_substring VALUES ('hello world'), (''), (NULL), ('abc')\n\nquery\nSELECT substring(s, 1, 5) FROM test_substring\n\nquery\nSELECT substring(s, -3) FROM test_substring\n\nquery\nSELECT substring(s, 0, 3) FROM test_substring\n\nquery\nSELECT substring(s, 1, 0) FROM test_substring\n\nquery\nSELECT substring(s, 1, -1) FROM test_substring\n\nquery\nSELECT substring(s, 100) FROM test_substring\n\n-- literal + literal + literal\nquery ignore(https://github.com/apache/datafusion-comet/issues/3337)\nSELECT substring('hello world', 1, 5), substring('hello world', -3), substring('', 1, 5), substring(NULL, 1, 5)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/unhex.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_unhex(s string) USING parquet\n\nstatement\nINSERT INTO test_unhex VALUES ('537061726B2053514C'), ('41'), ('0A1B'), (''), (NULL), ('GG'), ('hello'), ('A1B')\n\nquery\nSELECT hex(unhex(s)) FROM test_unhex\n\nquery\nSELECT unhex(s) IS NULL FROM test_unhex\n\n-- literal arguments\nquery\nSELECT unhex('41'), unhex('GG'), unhex(''), unhex(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/upper.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_upper(s string) USING parquet\n\nstatement\nINSERT INTO test_upper VALUES ('hello'), ('HELLO'), ('Hello World'), (''), (NULL), ('123abc')\n\nquery expect_fallback(case conversion)\nSELECT upper(s) FROM test_upper\n\n-- literal arguments\nquery expect_fallback(case conversion)\nSELECT upper('hello'), upper(''), upper(NULL)\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/string/upper_enabled.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Test upper() with case conversion enabled (happy path)\n-- Config: spark.comet.caseConversion.enabled=true\n\nstatement\nCREATE TABLE test_upper_enabled(s string) USING parquet\n\nstatement\nINSERT INTO test_upper_enabled VALUES ('hello'), ('HELLO'), ('Hello World'), (''), (NULL), ('123abc')\n\nquery\nSELECT upper(s) FROM test_upper_enabled\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/struct/create_named_struct.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_named_struct(a int, b string, c double) USING parquet\n\nstatement\nINSERT INTO test_named_struct VALUES (1, 'hello', 1.5), (NULL, NULL, NULL), (0, '', 0.0)\n\nquery\nSELECT named_struct('x', a, 'y', b, 'z', c) FROM test_named_struct\n\nquery\nSELECT struct(a, b, c) FROM test_named_struct\n\n-- literal arguments\nquery\nSELECT named_struct('x', 1, 'y', 'hello', 'z', 3.14)\n\nquery\nSELECT named_struct('x', a, 'y', 'fixed_val', 'z', c) FROM test_named_struct\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/struct/get_struct_field.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_struct(s struct<name: string, age: int, score: double>) USING parquet\n\nstatement\nINSERT INTO test_struct VALUES (named_struct('name', 'Alice', 'age', 30, 'score', 95.5)), (named_struct('name', 'Bob', 'age', 25, 'score', 88.0)), (NULL)\n\nquery\nSELECT s.name, s.age, s.score FROM test_struct\n\nquery\nSELECT s.name, s.age + 1, s.score * 2 FROM test_struct\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/struct/json_to_structs.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_from_json(j string) USING parquet\n\nstatement\nINSERT INTO test_from_json VALUES ('{\"a\": 1, \"b\": \"hello\"}'), ('{}'), (NULL), ('{\"a\": null}'), ('invalid json')\n\nquery spark_answer_only\nSELECT from_json(j, 'a INT, b STRING') FROM test_from_json\n\n-- literal arguments\nquery spark_answer_only\nSELECT from_json('{\"a\": 1, \"b\": \"hello\"}', 'a INT, b STRING')\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/struct/structs_to_json.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nstatement\nCREATE TABLE test_to_json(a int, b string) USING parquet\n\nstatement\nINSERT INTO test_to_json VALUES (1, 'hello'), (NULL, NULL), (0, '')\n\nquery spark_answer_only\nSELECT to_json(named_struct('a', a, 'b', b)) FROM test_to_json\n\n-- literal arguments\nquery spark_answer_only\nSELECT to_json(named_struct('a', 1, 'b', 'hello'))\n"
  },
  {
    "path": "spark/src/test/resources/sql-tests/expressions/window/lag_lead.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n--   http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Config: spark.comet.operator.WindowExec.allowIncompatible=true\n\n-- ============================================================\n-- Setup: shared tables\n-- ============================================================\n\nstatement\nCREATE TABLE test_lag_lead(id int, val int, grp string) USING parquet\n\nstatement\nINSERT INTO test_lag_lead VALUES\n  (1, 10, 'a'),\n  (2, 20, 'a'),\n  (3, 30, 'a'),\n  (4, 40, 'b'),\n  (5, 50, 'b')\n\nstatement\nCREATE TABLE test_nulls(id int, val int, grp string) USING parquet\n\nstatement\nINSERT INTO test_nulls VALUES\n  (1, NULL, 'a'),\n  (2, 10,   'a'),\n  (3, NULL, 'a'),\n  (4, 20,   'a'),\n  (5, NULL, 'b'),\n  (6, 30,   'b'),\n  (7, NULL, 'b')\n\nstatement\nCREATE TABLE test_all_nulls(id int, val int, grp string) USING parquet\n\nstatement\nINSERT INTO test_all_nulls VALUES\n  (1, NULL, 'a'),\n  (2, NULL, 'a'),\n  (3, NULL, 'b'),\n  (4, 1,    'b')\n\nstatement\nCREATE TABLE test_single_row(id int, val int) USING parquet\n\nstatement\nINSERT INTO test_single_row VALUES (1, 42)\n\nstatement\nCREATE TABLE test_types(\n  id int,\n  i_val int,\n  l_val bigint,\n  d_val double,\n  s_val string,\n  grp string\n) USING parquet\n\nstatement\nINSERT INTO test_types VALUES\n  (1, NULL, NULL,  NULL,  NULL,  'a'),\n  (2, 1,    100,   1.5,   'foo', 'a'),\n  (3, 2,    200,   2.5,   'bar', 'a'),\n  (4, NULL, NULL,  NULL,  NULL,  'b'),\n  (5, 3,    300,   3.5,   'baz', 'b')\n\n-- ############################################################\n-- LAG\n-- ############################################################\n\n-- ============================================================\n-- lag: basic (default offset = 1)\n-- ============================================================\n\nquery\nSELECT id, val,\n  LAG(val) OVER (ORDER BY id) as lag_val\nFROM test_lag_lead\n\nquery\nSELECT grp, id, val,\n  LAG(val) OVER (PARTITION BY grp ORDER BY id) as lag_val\nFROM test_lag_lead\n\n-- ============================================================\n-- lag: with explicit offset\n-- ============================================================\n\nquery\nSELECT id, val,\n  LAG(val, 2) OVER (ORDER BY id) as lag_val_2\nFROM test_lag_lead\n\n-- ============================================================\n-- lag: with offset and default value\n-- ============================================================\n\nquery\nSELECT id, val,\n  LAG(val, 2, -1) OVER (ORDER BY id) as lag_val_2\nFROM test_lag_lead\n\n-- ============================================================\n-- lag IGNORE NULLS: basic\n-- ============================================================\n\nquery\nSELECT id, val,\n  LAG(val) IGNORE NULLS OVER (ORDER BY id) as lag_val\nFROM 
test_nulls\n\nquery\nSELECT grp, id, val,\n  LAG(val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id) as lag_val\nFROM test_nulls\n\n-- ============================================================\n-- lag IGNORE NULLS: all values null in a group\n-- ============================================================\n\nquery\nSELECT grp, id, val,\n  LAG(val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id) as lag_val\nFROM test_all_nulls\n\n-- ============================================================\n-- lag IGNORE NULLS: single row\n-- ============================================================\n\nquery\nSELECT id, val,\n  LAG(val) IGNORE NULLS OVER (ORDER BY id) as lag_val\nFROM test_single_row\n\n-- ============================================================\n-- lag IGNORE NULLS: multiple data types\n-- ============================================================\n\nquery\nSELECT grp, id,\n  LAG(i_val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id),\n  LAG(l_val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id),\n  LAG(d_val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id),\n  LAG(s_val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id)\nFROM test_types\n\n-- ============================================================\n-- lag IGNORE NULLS: with offset > 1\n-- ============================================================\n\nquery\nSELECT id, val,\n  LAG(val, 2) IGNORE NULLS OVER (ORDER BY id) as lag_val_2\nFROM test_nulls\n\n-- ============================================================\n-- lag: contrast IGNORE NULLS vs RESPECT NULLS\n-- ============================================================\n\nquery\nSELECT id, val,\n  LAG(val) OVER (ORDER BY id) as lag_respect,\n  LAG(val) IGNORE NULLS OVER (ORDER BY id) as lag_ignore\nFROM test_nulls\n\n-- ############################################################\n-- LEAD\n-- ############################################################\n\n-- ============================================================\n-- lead: basic (default offset = 1)\n-- ============================================================\n\nquery\nSELECT id, val,\n  LEAD(val) OVER (ORDER BY id) as lead_val\nFROM test_lag_lead\n\nquery\nSELECT grp, id, val,\n  LEAD(val) OVER (PARTITION BY grp ORDER BY id) as lead_val\nFROM test_lag_lead\n\n-- ============================================================\n-- lead: with explicit offset\n-- ============================================================\n\nquery\nSELECT id, val,\n  LEAD(val, 2) OVER (ORDER BY id) as lead_val_2\nFROM test_lag_lead\n\n-- ============================================================\n-- lead: with offset and default value\n-- ============================================================\n\nquery\nSELECT id, val,\n  LEAD(val, 2, -1) OVER (ORDER BY id) as lead_val_2\nFROM test_lag_lead\n\n-- ============================================================\n-- lead IGNORE NULLS: basic\n-- ============================================================\n\nquery\nSELECT id, val,\n  LEAD(val) IGNORE NULLS OVER (ORDER BY id) as lead_val\nFROM test_nulls\n\nquery\nSELECT grp, id, val,\n  LEAD(val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id) as lead_val\nFROM test_nulls\n\n-- ============================================================\n-- lead IGNORE NULLS: all values null in a group\n-- ============================================================\n\nquery\nSELECT grp, id, val,\n  LEAD(val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id) as lead_val\nFROM test_all_nulls\n\n-- 
============================================================\n-- lead IGNORE NULLS: single row\n-- ============================================================\n\nquery\nSELECT id, val,\n  LEAD(val) IGNORE NULLS OVER (ORDER BY id) as lead_val\nFROM test_single_row\n\n-- ============================================================\n-- lead IGNORE NULLS: multiple data types\n-- ============================================================\n\nquery\nSELECT grp, id,\n  LEAD(i_val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id),\n  LEAD(l_val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id),\n  LEAD(d_val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id),\n  LEAD(s_val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id)\nFROM test_types\n\n-- ============================================================\n-- lead IGNORE NULLS: with offset > 1\n-- ============================================================\n\nquery\nSELECT id, val,\n  LEAD(val, 2) IGNORE NULLS OVER (ORDER BY id) as lead_val_2\nFROM test_nulls\n\n-- ============================================================\n-- lead: contrast IGNORE NULLS vs RESPECT NULLS\n-- ============================================================\n\nquery\nSELECT id, val,\n  LEAD(val) OVER (ORDER BY id) as lead_respect,\n  LEAD(val) IGNORE NULLS OVER (ORDER BY id) as lead_ignore\nFROM test_nulls\n\n-- ############################################################\n-- LAG + LEAD combined\n-- ############################################################\n\nquery\nSELECT id, val,\n  LAG(val) OVER (ORDER BY id) as lag_val,\n  LEAD(val) OVER (ORDER BY id) as lead_val\nFROM test_lag_lead\n\nquery\nSELECT grp, id, val,\n  LAG(val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id) as lag_ignore,\n  LEAD(val) IGNORE NULLS OVER (PARTITION BY grp ORDER BY id) as lead_ignore\nFROM test_nulls\n"
  },
  {
    "path": "spark/src/test/resources/test-data/csv-test-1.csv",
    "content": "1,2,3\n4,5,6\n7,0,8"
  },
  {
    "path": "spark/src/test/resources/test-data/csv-test-2.csv",
    "content": "a,b,c\n1,2,3\n4,5,6\n7,0,8"
  },
  {
    "path": "spark/src/test/resources/test-data/json-test-1.ndjson",
    "content": "{ \"a\": [1], \"b\": { \"c\": \"foo\" }}\n{ \"a\": 2, \"b\": { \"d\": \"bar\" }}\n{ \"b\": { \"d\": \"bar\" }}\n{ \"a\": 3 }\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-extended/q72.sql",
    "content": "select  i_item_desc\n      ,w_warehouse_name\n      ,d1.d_week_seq\n      ,sum(case when p_promo_sk is null then 1 else 0 end) no_promo\n      ,sum(case when p_promo_sk is not null then 1 else 0 end) promo\n      ,count(*) total_cnt\nfrom catalog_sales\njoin date_dim d1 on (cs_sold_date_sk = d1.d_date_sk)\njoin customer_demographics on (cs_bill_cdemo_sk = cd_demo_sk)\njoin household_demographics on (cs_bill_hdemo_sk = hd_demo_sk)\njoin item on (i_item_sk = cs_item_sk)\njoin inventory on (cs_item_sk = inv_item_sk)\njoin warehouse on (w_warehouse_sk=inv_warehouse_sk)\njoin date_dim d2 on (inv_date_sk = d2.d_date_sk)\njoin date_dim d3 on (cs_ship_date_sk = d3.d_date_sk)\nleft outer join promotion on (cs_promo_sk=p_promo_sk)\nleft outer join catalog_returns on (cr_item_sk = cs_item_sk and cr_order_number = cs_order_number)\nwhere d1.d_week_seq = d2.d_week_seq\n  and inv_quantity_on_hand < cs_quantity \n  and d3.d_date > d1.d_date + 5\n  and hd_buy_potential = '501-1000'\n  and d1.d_year = 1999\n  and cd_marital_status = 'S'\ngroup by i_item_desc,w_warehouse_name,d1.d_week_seq\norder by total_cnt desc, i_item_desc, w_warehouse_name, d_week_seq\nLIMIT 100\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/add_decimals.sql",
    "content": "SELECT ss_net_profit + ss_net_profit\nFROM store_sales"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/add_many_decimals.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- This is testing the cost of a complex expression that will create many intermediate arrays in Comet\n\nselect sum(\n    ss_wholesale_cost+\n    ss_list_price+\n    ss_sales_price+\n    ss_ext_discount_amt+\n    ss_ext_sales_price+\n    ss_ext_wholesale_cost+\n    ss_ext_list_price+\n    ss_ext_tax+\n    ss_coupon_amt+\n    ss_net_paid+\n    ss_net_paid_inc_tax+\n    ss_net_profit\n)\nfrom store_sales;\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/add_many_integers.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- This is testing the cost of a complex expression that will create many intermediate arrays in Comet\n\nselect sum(\n    ss_sold_date_sk+\n    ss_sold_time_sk+\n    ss_item_sk+\n    ss_customer_sk+\n    ss_cdemo_sk+\n    ss_hdemo_sk+\n    ss_addr_sk+\n    ss_store_sk+\n    ss_promo_sk+\n    ss_ticket_number+\n    ss_quantity\n)\nfrom store_sales;\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/agg_high_cardinality.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n                  \nselect ws_item_sk, sum(ws_wholesale_cost) from web_sales group by ws_item_sk;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/agg_low_cardinality.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n                  \nselect ws_warehouse_sk, sum(ws_wholesale_cost) from web_sales group by ws_warehouse_sk;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/agg_stddev.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nselect w_warehouse_name,w_warehouse_sk,i_item_sk,d_moy\n     ,stddev_samp(inv_quantity_on_hand) stdev\nfrom inventory\n   ,item\n   ,warehouse\n   ,date_dim\nwhere inv_item_sk = i_item_sk\n  and inv_warehouse_sk = w_warehouse_sk\n  and inv_date_sk = d_date_sk\n  and d_year =2001\ngroup by w_warehouse_name,w_warehouse_sk,i_item_sk,d_moy;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/agg_sum_decimals_no_grouping.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nselect\n    sum(ss_wholesale_cost),\n    sum(ss_list_price),\n    sum(ss_sales_price),\n    sum(ss_ext_discount_amt),\n    sum(ss_ext_sales_price),\n    sum(ss_ext_wholesale_cost),\n    sum(ss_ext_list_price),\n    sum(ss_ext_tax),\n    sum(ss_coupon_amt),\n    sum(ss_net_paid),\n    sum(ss_net_paid_inc_tax),\n    sum(ss_net_profit)\nfrom store_sales;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/agg_sum_integers_no_grouping.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nselect\n    sum(ss_sold_date_sk),\n    sum(ss_sold_time_sk),\n    sum(ss_item_sk),\n    sum(ss_customer_sk),\n    sum(ss_cdemo_sk),\n    sum(ss_hdemo_sk),\n    sum(ss_addr_sk),\n    sum(ss_store_sk),\n    sum(ss_promo_sk),\n    sum(ss_ticket_number),\n    sum(ss_quantity)\nfrom store_sales;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/agg_sum_integers_with_grouping.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nselect\n    ss_quantity,\n    sum(ss_sold_date_sk),\n    sum(ss_sold_time_sk),\n    sum(ss_item_sk),\n    sum(ss_customer_sk),\n    sum(ss_cdemo_sk),\n    sum(ss_hdemo_sk),\n    sum(ss_addr_sk),\n    sum(ss_store_sk),\n    sum(ss_promo_sk),\n    sum(ss_ticket_number),\n    sum(ss_quantity)\nfrom store_sales\ngroup by ss_quantity;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/case_when_column_or_null.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nselect\n    sum(case when (ss_quantity=1) then ss_sales_price else null end) sun_sales,\n    sum(case when (ss_quantity=2) then ss_sales_price else null end) mon_sales,\n    sum(case when (ss_quantity=3) then ss_sales_price else null end) tue_sales,\n    sum(case when (ss_quantity=4) then ss_sales_price else null end) wed_sales,\n    sum(case when (ss_quantity=5) then ss_sales_price else null end) thu_sales,\n    sum(case when (ss_quantity=6) then ss_sales_price else null end) fri_sales,\n    sum(case when (ss_quantity=7) then ss_sales_price else null end) sat_sales\nfrom store_sales;\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/case_when_scalar.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n                  \nselect\n    sum(case when ws_wholesale_cost > 10 then 1 else 0 end) as a,\n    sum(case when ws_wholesale_cost > 20 then 1 else 0 end) as b,\n    sum(case when ws_wholesale_cost > 30 then 1 else 0 end) as c\nfrom web_sales;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/char_type.sql",
    "content": "SELECT\n    cd_gender\nFROM customer_demographics\nWHERE\n    cd_gender = 'M' AND\n    cd_marital_status = 'S' AND\n    cd_education_status = 'College'\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/explode.sql",
    "content": "SELECT i_item_sk, explode(array(i_brand_id, i_class_id, i_category_id, i_manufact_id, i_manager_id))\nFROM item\nORDER BY i_item_sk\nLIMIT 1000"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/filter_highly_selective.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nselect count(*) from web_sales where ws_wholesale_cost = 100;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/filter_less_selective.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n                  \nselect count(*) from web_sales where ws_wholesale_cost > 10;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/if_column_or_null.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nselect\n    sum(if(d_day_name='Sunday', ss_sales_price, null)) sun_sales,\n    sum(if(d_day_name='Monday', ss_sales_price, null)) mon_sales,\n    sum(if(d_day_name='Tuesday', ss_sales_price, null)) tue_sales,\n    sum(if(d_day_name='Wednesday', ss_sales_price, null)) wed_sales,\n    sum(if(d_day_name='Thursday', ss_sales_price, null)) thu_sales,\n    sum(if(d_day_name='Friday', ss_sales_price, null)) fri_sales,\n    sum(if(d_day_name='Saturday', ss_sales_price, null)) sat_sales\nfrom date_dim join store_sales on d_date_sk = ss_sold_date_sk\nwhere d_year = 2000;\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/join_anti.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Based on q94\n\nselect\n    count(distinct ws_order_number) as `order count`\nfrom\n    web_sales ws1\nwhere not exists(select *\n                 from web_returns wr1\n                 where ws1.ws_order_number = wr1.wr_order_number)\norder by count(distinct ws_order_number)\n    LIMIT 100;\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/join_condition.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- This is based on the first join in q72 when there is no join reordering\n\nselect count(*)\nfrom catalog_sales join inventory on cs_item_sk = inv_item_sk\nwhere cs_warehouse_sk = 1 and inv_quantity_on_hand = 666\nand inv_quantity_on_hand < cs_quantity;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/join_exploding_output.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- This is based on the first join in q72 when there is no join reordering\n\nselect cs_order_number, cs_quantity, inv_quantity_on_hand\nfrom catalog_sales join inventory on cs_item_sk = inv_item_sk\nwhere cs_warehouse_sk = 1 and inv_quantity_on_hand = 666;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/join_inner.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nselect ss_sold_date_sk, ss_sold_time_sk, ss_quantity, d_year, d_moy, d_dom\nfrom date_dim join store_sales on d_date_sk = ss_sold_date_sk\nwhere d_year = 2000;\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/join_left_outer.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\nselect count(*)\nfrom store_sales left outer join store_returns on\nss_item_sk = sr_item_sk and ss_ticket_number = sr_ticket_number"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/join_semi.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- Based on q94\n\nselect\n    count(distinct ws_order_number) as `order count`\nfrom\n    web_sales ws1\nwhere exists (select *\n              from web_sales ws2\n              where ws1.ws_order_number = ws2.ws_order_number\n                and ws1.ws_warehouse_sk <> ws2.ws_warehouse_sk)\norder by count(distinct ws_order_number)\n    LIMIT 100;\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/rlike.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- This is not part of TPC-DS but runs on TPC-DS data\n\nSELECT i_item_desc, i_item_desc RLIKE '[A-Z][aeiou]+' FROM item;"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/scan_decimal.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- This is testing the cost of reading a single decimal(7,2) column\n\nselect ss_net_profit from store_sales;\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-micro-benchmarks/to_json.sql",
    "content": "-- Licensed to the Apache Software Foundation (ASF) under one\n-- or more contributor license agreements.  See the NOTICE file\n-- distributed with this work for additional information\n-- regarding copyright ownership.  The ASF licenses this file\n-- to you under the Apache License, Version 2.0 (the\n-- \"License\"); you may not use this file except in compliance\n-- with the License.  You may obtain a copy of the License at\n--\n-- http://www.apache.org/licenses/LICENSE-2.0\n--\n-- Unless required by applicable law or agreed to in writing,\n-- software distributed under the License is distributed on an\n-- \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n-- KIND, either express or implied.  See the License for the\n-- specific language governing permissions and limitations\n-- under the License.\n\n-- This is not part of TPC-DS but runs on TPC-DS data\n\nSELECT to_json(named_struct('id', i_item_sk, 'desc', i_item_desc, 'color', i_color)) FROM item;"
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q1.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Filter\n      :     :     :                    :  +- ColumnarToRow\n      :     :     :                    :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :           +- SubqueryBroadcast\n      :     :     :                    :              +- BroadcastExchange\n      :     :     :                    :                 +- CometNativeColumnarToRow\n      :     :     :                    :                    +- CometProject\n      :     :     :                    :                       +- CometFilter\n      :     :     :                    :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Filter\n      :     :                                         :  +- ColumnarToRow\n      :     :                                         :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :           +- ReusedSubquery\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometProject\n      :     :                                                  +- CometFilter\n      :     :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- 
CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 49 eligible operators (36%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q1.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometFilter\n         :     :     :                 :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :     :                 :        +- SubqueryBroadcast\n         :     :     :                 :           +- BroadcastExchange\n         :     :     :                 :              +- CometNativeColumnarToRow\n         :     :     :                 :                 +- CometProject\n         :     :     :                 :                    +- CometFilter\n         :     :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometFilter\n         :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometHashAggregate\n         :     :                       +- CometExchange\n         :     :                          +- CometHashAggregate\n         :     :                             +- CometProject\n         :     :                                +- CometBroadcastHashJoin\n         :     :                                   :- CometFilter\n         :     :                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :                                   :        +- ReusedSubquery\n         :     :                                   +- CometBroadcastExchange\n         :     :                                      +- CometProject\n         :     :                                         +- CometFilter\n         :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 46 out of 49 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q10.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :- BroadcastHashJoin\n                  :     :        :  :- BroadcastHashJoin\n                  :     :        :  :  :- CometNativeColumnarToRow\n                  :     :        :  :  :  +- CometFilter\n                  :     :        :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :        :  :  +- BroadcastExchange\n                  :     :        :  :     +- Project\n                  :     :        :  :        +- BroadcastHashJoin\n                  :     :        :  :           :- ColumnarToRow\n                  :     :        :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :  :           :        +- SubqueryBroadcast\n                  :     :        :  :           :           +- BroadcastExchange\n                  :     :        :  :           :              +- CometNativeColumnarToRow\n                  :     :        :  :           :                 +- CometProject\n                  :     :        :  :           :                    +- CometFilter\n                  :     :        :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  :           +- BroadcastExchange\n                  :     :        :  :              +- CometNativeColumnarToRow\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- Project\n                  :     :        :        +- BroadcastHashJoin\n                  :     :        :           :- ColumnarToRow\n                  :     :        :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :           :        +- ReusedSubquery\n                  :     :        :           +- BroadcastExchange\n                  :     :        :              +- CometNativeColumnarToRow\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- 
BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 54 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q10.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                  :     :        :  :- CometNativeColumnarToRow\n                  :     :        :  :  +- CometBroadcastHashJoin\n                  :     :        :  :     :- CometFilter\n                  :     :        :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :        :  :     +- CometBroadcastExchange\n                  :     :        :  :        +- CometProject\n                  :     :        :  :           +- CometBroadcastHashJoin\n                  :     :        :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :        :  :              :     +- SubqueryBroadcast\n                  :     :        :  :              :        +- BroadcastExchange\n                  :     :        :  :              :           +- CometNativeColumnarToRow\n                  :     :        :  :              :              +- CometProject\n                  :     :        :  :              :                 +- CometFilter\n                  :     :        :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  :              +- CometBroadcastExchange\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- CometNativeColumnarToRow\n                  :     :        :        +- CometProject\n                  :     :        :           +- CometBroadcastHashJoin\n                  :     :        :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :        :              :     +- ReusedSubquery\n                  :     :        :              +- CometBroadcastExchange\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- CometNativeColumnarToRow\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n          
        :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 54 eligible operators (64%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q11.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Project\n      :     :     :                    :  +- BroadcastHashJoin\n      :     :     :                    :     :- CometNativeColumnarToRow\n      :     :     :                    :     :  +- CometProject\n      :     :     :                    :     :     +- CometFilter\n      :     :     :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :                    :     +- BroadcastExchange\n      :     :     :                    :        +- Filter\n      :     :     :                    :           +- ColumnarToRow\n      :     :     :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :                    +- SubqueryBroadcast\n      :     :     :                    :                       +- BroadcastExchange\n      :     :     :                    :                          +- CometNativeColumnarToRow\n      :     :     :                    :                             +- CometFilter\n      :     :     :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometFilter\n      :     :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     +- BroadcastExchange\n      :     :        +- HashAggregate\n      :     :           +- CometNativeColumnarToRow\n      :     :              +- CometColumnarExchange\n      :     :                 +- HashAggregate\n      :     :                    +- Project\n      :     :                       +- BroadcastHashJoin\n      :     :                          :- Project\n      :     :                          :  +- BroadcastHashJoin\n      :     :                          :     :- CometNativeColumnarToRow\n      :     :                          :     :  +- CometProject\n      :     :                          :     :     +- CometFilter\n      :     :                          :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                          :     +- BroadcastExchange\n      :     :                          :        +- Filter\n      :     :                          :           +- ColumnarToRow\n      :     :                          :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                          :                    +- SubqueryBroadcast\n      :     :                          :                       +- BroadcastExchange\n      :     :                          :                          +- CometNativeColumnarToRow\n      :     :                          :      
                       +- CometFilter\n      :     :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                          +- BroadcastExchange\n      :     :                             +- CometNativeColumnarToRow\n      :     :                                +- CometFilter\n      :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 86 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q11.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometProject\n         :     :     :                 :  +- CometBroadcastHashJoin\n         :     :     :                 :     :- CometProject\n         :     :     :                 :     :  +- CometFilter\n         :     :     :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :                 :     +- CometBroadcastExchange\n         :     :     :                 :        +- CometFilter\n         :     :     :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :                 :                 +- SubqueryBroadcast\n         :     :     :                 :                    +- BroadcastExchange\n         :     :     :                 :                       +- CometNativeColumnarToRow\n         :     :     :                 :                          +- CometFilter\n         :     :     :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometFilter\n         :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometExchange\n         :     :              +- CometHashAggregate\n         :     :                 +- CometProject\n         :     :                    +- CometBroadcastHashJoin\n         :     :                       :- CometProject\n         :     :                       :  +- CometBroadcastHashJoin\n         :     :                       :     :- CometProject\n         :     :                       :     :  +- CometFilter\n         :     :                       :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                       :     +- CometBroadcastExchange\n         :     :                       :        +- CometFilter\n         :     :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                       :                 +- SubqueryBroadcast\n         :     :                       :                    +- BroadcastExchange\n         :     :                       :                       +- CometNativeColumnarToRow\n         :     :                       :                          +- CometFilter\n         :     :                       :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                       +- CometBroadcastExchange\n         :     :                          +- CometFilter\n         :     :                             +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 80 out of 86 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q12.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q12.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q13.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Project\n               :     :     :  +- BroadcastHashJoin\n               :     :     :     :- Project\n               :     :     :     :  +- BroadcastHashJoin\n               :     :     :     :     :- Filter\n               :     :     :     :     :  +- ColumnarToRow\n               :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :     :     :           +- SubqueryBroadcast\n               :     :     :     :     :              +- BroadcastExchange\n               :     :     :     :     :                 +- CometNativeColumnarToRow\n               :     :     :     :     :                    +- CometProject\n               :     :     :     :     :                       +- CometFilter\n               :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     :     :     +- BroadcastExchange\n               :     :     :     :        +- CometNativeColumnarToRow\n               :     :     :     :           +- CometFilter\n               :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :     :     :     +- BroadcastExchange\n               :     :     :        +- CometNativeColumnarToRow\n               :     :     :           +- CometProject\n               :     :     :              +- CometFilter\n               :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometNativeColumnarToRow\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.household_demographics\n\nComet accelerated 17 out of 38 eligible operators (44%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q13.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometProject\n               :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :- CometFilter\n               :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     :     :     :        +- SubqueryBroadcast\n               :     :     :     :     :           +- BroadcastExchange\n               :     :     :     :     :              +- CometNativeColumnarToRow\n               :     :     :     :     :                 +- CometProject\n               :     :     :     :     :                    +- CometFilter\n               :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :        +- CometFilter\n               :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometProject\n               :     :     :           +- CometFilter\n               :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n               +- CometBroadcastExchange\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n\nComet accelerated 36 out of 38 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q14a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- Project\n                  :  +- Filter\n                  :     :  +- Subquery\n                  :     :     +- HashAggregate\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometColumnarExchange\n                  :     :              +- HashAggregate\n                  :     :                 +- Union\n                  :     :                    :- Project\n                  :     :                    :  +- BroadcastHashJoin\n                  :     :                    :     :- ColumnarToRow\n                  :     :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                    :     :        +- ReusedSubquery\n                  :     :                    :     +- BroadcastExchange\n                  :     :                    :        +- CometNativeColumnarToRow\n                  :     :                    :           +- CometProject\n                  :     :                    :              +- CometFilter\n                  :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                    :- Project\n                  :     :                    :  +- BroadcastHashJoin\n                  :     :                    :     :- ColumnarToRow\n                  :     :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                    :     :        +- ReusedSubquery\n                  :     :                    :     +- BroadcastExchange\n                  :     :                    :        +- CometNativeColumnarToRow\n                  :     :                    :           +- CometProject\n                  :     :                    :              +- CometFilter\n                  :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                    +- Project\n                  :     :                       +- BroadcastHashJoin\n                  :     :                          :- ColumnarToRow\n                  :     :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                          :        +- ReusedSubquery\n                  :     :                          +- BroadcastExchange\n                  :     :                             +- CometNativeColumnarToRow\n                  :     :                                +- CometProject\n                  :     :                                   +- CometFilter\n                  :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- HashAggregate\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometColumnarExchange\n                  :              +- HashAggregate\n                  :                 +- Project\n                  :                    +- BroadcastHashJoin\n       
           :                       :- Project\n                  :                       :  +- BroadcastHashJoin\n                  :                       :     :- BroadcastHashJoin\n                  :                       :     :  :- Filter\n                  :                       :     :  :  +- ColumnarToRow\n                  :                       :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :  :           +- SubqueryBroadcast\n                  :                       :     :  :              +- BroadcastExchange\n                  :                       :     :  :                 +- CometNativeColumnarToRow\n                  :                       :     :  :                    +- CometProject\n                  :                       :     :  :                       +- CometFilter\n                  :                       :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :  +- BroadcastExchange\n                  :                       :     :     +- Project\n                  :                       :     :        +- BroadcastHashJoin\n                  :                       :     :           :- CometNativeColumnarToRow\n                  :                       :     :           :  +- CometFilter\n                  :                       :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :           +- BroadcastExchange\n                  :                       :     :              +- BroadcastHashJoin\n                  :                       :     :                 :- CometNativeColumnarToRow\n                  :                       :     :                 :  +- CometHashAggregate\n                  :                       :     :                 :     +- CometColumnarExchange\n                  :                       :     :                 :        +- HashAggregate\n                  :                       :     :                 :           +- Project\n                  :                       :     :                 :              +- BroadcastHashJoin\n                  :                       :     :                 :                 :- Project\n                  :                       :     :                 :                 :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :     :- Filter\n                  :                       :     :                 :                 :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :     :           +- SubqueryBroadcast\n                  :                       :     :                 :                 :     :              +- BroadcastExchange\n                  :                       :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :     :                    +- CometProject\n                  :                       :     :             
    :                 :     :                       +- CometFilter\n                  :                       :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 :     +- BroadcastExchange\n                  :                       :     :                 :                 :        +- BroadcastHashJoin\n                  :                       :     :                 :                 :           :- CometNativeColumnarToRow\n                  :                       :     :                 :                 :           :  +- CometFilter\n                  :                       :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :           +- BroadcastExchange\n                  :                       :     :                 :                 :              +- Project\n                  :                       :     :                 :                 :                 +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :- Project\n                  :                       :     :                 :                 :                    :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :     :- Filter\n                  :                       :     :                 :                 :                    :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :                    :     :           +- ReusedSubquery\n                  :                       :     :                 :                 :                    :     +- BroadcastExchange\n                  :                       :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :                    :           +- CometFilter\n                  :                       :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :                    +- BroadcastExchange\n                  :                       :     :                 :                 :                       +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :                          +- CometProject\n                  :                       :     :                 :                 :                             +- CometFilter\n                  :                       :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 +- BroadcastExchange\n                  :                       :     :       
          :                    +- CometNativeColumnarToRow\n                  :                       :     :                 :                       +- CometProject\n                  :                       :     :                 :                          +- CometFilter\n                  :                       :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 +- BroadcastExchange\n                  :                       :     :                    +- Project\n                  :                       :     :                       +- BroadcastHashJoin\n                  :                       :     :                          :- Project\n                  :                       :     :                          :  +- BroadcastHashJoin\n                  :                       :     :                          :     :- Filter\n                  :                       :     :                          :     :  +- ColumnarToRow\n                  :                       :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                          :     :           +- ReusedSubquery\n                  :                       :     :                          :     +- BroadcastExchange\n                  :                       :     :                          :        +- CometNativeColumnarToRow\n                  :                       :     :                          :           +- CometFilter\n                  :                       :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                          +- BroadcastExchange\n                  :                       :     :                             +- CometNativeColumnarToRow\n                  :                       :     :                                +- CometProject\n                  :                       :     :                                   +- CometFilter\n                  :                       :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     +- BroadcastExchange\n                  :                       :        +- BroadcastHashJoin\n                  :                       :           :- CometNativeColumnarToRow\n                  :                       :           :  +- CometFilter\n                  :                       :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :           +- BroadcastExchange\n                  :                       :              +- Project\n                  :                       :                 +- BroadcastHashJoin\n                  :                       :                    :- CometNativeColumnarToRow\n                  :                       :                    :  +- CometFilter\n                  :                       :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                    +- BroadcastExchange\n                  :                       :                       +- BroadcastHashJoin\n                  :            
           :                          :- CometNativeColumnarToRow\n                  :                       :                          :  +- CometHashAggregate\n                  :                       :                          :     +- CometColumnarExchange\n                  :                       :                          :        +- HashAggregate\n                  :                       :                          :           +- Project\n                  :                       :                          :              +- BroadcastHashJoin\n                  :                       :                          :                 :- Project\n                  :                       :                          :                 :  +- BroadcastHashJoin\n                  :                       :                          :                 :     :- Filter\n                  :                       :                          :                 :     :  +- ColumnarToRow\n                  :                       :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :     :           +- SubqueryBroadcast\n                  :                       :                          :                 :     :              +- BroadcastExchange\n                  :                       :                          :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :                          :                 :     :                    +- CometProject\n                  :                       :                          :                 :     :                       +- CometFilter\n                  :                       :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 :     +- BroadcastExchange\n                  :                       :                          :                 :        +- BroadcastHashJoin\n                  :                       :                          :                 :           :- CometNativeColumnarToRow\n                  :                       :                          :                 :           :  +- CometFilter\n                  :                       :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :           +- BroadcastExchange\n                  :                       :                          :                 :              +- Project\n                  :                       :                          :                 :                 +- BroadcastHashJoin\n                  :                       :                          :                 :                    :- Project\n                  :                       :                          :                 :                    :  +- BroadcastHashJoin\n                  :                       :                          :                 :                    :     :- Filter\n                  :                       :                          :                 :                    :     :  +- 
ColumnarToRow\n                  :                       :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :                    :     :           +- ReusedSubquery\n                  :                       :                          :                 :                    :     +- BroadcastExchange\n                  :                       :                          :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :                          :                 :                    :           +- CometFilter\n                  :                       :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :                    +- BroadcastExchange\n                  :                       :                          :                 :                       +- CometNativeColumnarToRow\n                  :                       :                          :                 :                          +- CometProject\n                  :                       :                          :                 :                             +- CometFilter\n                  :                       :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 +- BroadcastExchange\n                  :                       :                          :                    +- CometNativeColumnarToRow\n                  :                       :                          :                       +- CometProject\n                  :                       :                          :                          +- CometFilter\n                  :                       :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          +- BroadcastExchange\n                  :                       :                             +- Project\n                  :                       :                                +- BroadcastHashJoin\n                  :                       :                                   :- Project\n                  :                       :                                   :  +- BroadcastHashJoin\n                  :                       :                                   :     :- Filter\n                  :                       :                                   :     :  +- ColumnarToRow\n                  :                       :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                                   :     :           +- ReusedSubquery\n                  :                       :                                   :     +- BroadcastExchange\n                  :                       :                                   :        +- CometNativeColumnarToRow\n          
        :                       :                                   :           +- CometFilter\n                  :                       :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                                   +- BroadcastExchange\n                  :                       :                                      +- CometNativeColumnarToRow\n                  :                       :                                         +- CometProject\n                  :                       :                                            +- CometFilter\n                  :                       :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       +- BroadcastExchange\n                  :                          +- CometNativeColumnarToRow\n                  :                             +- CometProject\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :- Project\n                  :  +- Filter\n                  :     :  +- ReusedSubquery\n                  :     +- HashAggregate\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometColumnarExchange\n                  :              +- HashAggregate\n                  :                 +- Project\n                  :                    +- BroadcastHashJoin\n                  :                       :- Project\n                  :                       :  +- BroadcastHashJoin\n                  :                       :     :- BroadcastHashJoin\n                  :                       :     :  :- Filter\n                  :                       :     :  :  +- ColumnarToRow\n                  :                       :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :  :           +- ReusedSubquery\n                  :                       :     :  +- BroadcastExchange\n                  :                       :     :     +- Project\n                  :                       :     :        +- BroadcastHashJoin\n                  :                       :     :           :- CometNativeColumnarToRow\n                  :                       :     :           :  +- CometFilter\n                  :                       :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :           +- BroadcastExchange\n                  :                       :     :              +- BroadcastHashJoin\n                  :                       :     :                 :- CometNativeColumnarToRow\n                  :                       :     :                 :  +- CometHashAggregate\n                  :                       :     :                 :     +- CometColumnarExchange\n                  :                       :     :                 :        +- HashAggregate\n                  :                       :     :                 :           +- Project\n                  :                       :     :                 :              +- BroadcastHashJoin\n                  :                       :     :                 :                 :- Project\n        
          :                       :     :                 :                 :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :     :- Filter\n                  :                       :     :                 :                 :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :     :           +- SubqueryBroadcast\n                  :                       :     :                 :                 :     :              +- BroadcastExchange\n                  :                       :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :     :                    +- CometProject\n                  :                       :     :                 :                 :     :                       +- CometFilter\n                  :                       :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 :     +- BroadcastExchange\n                  :                       :     :                 :                 :        +- BroadcastHashJoin\n                  :                       :     :                 :                 :           :- CometNativeColumnarToRow\n                  :                       :     :                 :                 :           :  +- CometFilter\n                  :                       :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :           +- BroadcastExchange\n                  :                       :     :                 :                 :              +- Project\n                  :                       :     :                 :                 :                 +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :- Project\n                  :                       :     :                 :                 :                    :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :     :- Filter\n                  :                       :     :                 :                 :                    :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :                    :     :           +- ReusedSubquery\n                  :                       :     :                 :                 :                    :     +- BroadcastExchange\n                  :                       :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :     :    
             :                 :                    :           +- CometFilter\n                  :                       :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :                    +- BroadcastExchange\n                  :                       :     :                 :                 :                       +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :                          +- CometProject\n                  :                       :     :                 :                 :                             +- CometFilter\n                  :                       :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 +- BroadcastExchange\n                  :                       :     :                 :                    +- CometNativeColumnarToRow\n                  :                       :     :                 :                       +- CometProject\n                  :                       :     :                 :                          +- CometFilter\n                  :                       :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 +- BroadcastExchange\n                  :                       :     :                    +- Project\n                  :                       :     :                       +- BroadcastHashJoin\n                  :                       :     :                          :- Project\n                  :                       :     :                          :  +- BroadcastHashJoin\n                  :                       :     :                          :     :- Filter\n                  :                       :     :                          :     :  +- ColumnarToRow\n                  :                       :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                          :     :           +- ReusedSubquery\n                  :                       :     :                          :     +- BroadcastExchange\n                  :                       :     :                          :        +- CometNativeColumnarToRow\n                  :                       :     :                          :           +- CometFilter\n                  :                       :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                          +- BroadcastExchange\n                  :                       :     :                             +- CometNativeColumnarToRow\n                  :                       :     :                                +- CometProject\n                  :                       :     :                                   +- CometFilter\n                  :                       :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :         
              :     +- BroadcastExchange\n                  :                       :        +- BroadcastHashJoin\n                  :                       :           :- CometNativeColumnarToRow\n                  :                       :           :  +- CometFilter\n                  :                       :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :           +- BroadcastExchange\n                  :                       :              +- Project\n                  :                       :                 +- BroadcastHashJoin\n                  :                       :                    :- CometNativeColumnarToRow\n                  :                       :                    :  +- CometFilter\n                  :                       :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                    +- BroadcastExchange\n                  :                       :                       +- BroadcastHashJoin\n                  :                       :                          :- CometNativeColumnarToRow\n                  :                       :                          :  +- CometHashAggregate\n                  :                       :                          :     +- CometColumnarExchange\n                  :                       :                          :        +- HashAggregate\n                  :                       :                          :           +- Project\n                  :                       :                          :              +- BroadcastHashJoin\n                  :                       :                          :                 :- Project\n                  :                       :                          :                 :  +- BroadcastHashJoin\n                  :                       :                          :                 :     :- Filter\n                  :                       :                          :                 :     :  +- ColumnarToRow\n                  :                       :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :     :           +- SubqueryBroadcast\n                  :                       :                          :                 :     :              +- BroadcastExchange\n                  :                       :                          :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :                          :                 :     :                    +- CometProject\n                  :                       :                          :                 :     :                       +- CometFilter\n                  :                       :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 :     +- BroadcastExchange\n                  :                       :                          :                 :        +- BroadcastHashJoin\n                  :                       :                          :                 :           :- CometNativeColumnarToRow\n                  :      
                 :                          :                 :           :  +- CometFilter\n                  :                       :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :           +- BroadcastExchange\n                  :                       :                          :                 :              +- Project\n                  :                       :                          :                 :                 +- BroadcastHashJoin\n                  :                       :                          :                 :                    :- Project\n                  :                       :                          :                 :                    :  +- BroadcastHashJoin\n                  :                       :                          :                 :                    :     :- Filter\n                  :                       :                          :                 :                    :     :  +- ColumnarToRow\n                  :                       :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :                    :     :           +- ReusedSubquery\n                  :                       :                          :                 :                    :     +- BroadcastExchange\n                  :                       :                          :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :                          :                 :                    :           +- CometFilter\n                  :                       :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :                    +- BroadcastExchange\n                  :                       :                          :                 :                       +- CometNativeColumnarToRow\n                  :                       :                          :                 :                          +- CometProject\n                  :                       :                          :                 :                             +- CometFilter\n                  :                       :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 +- BroadcastExchange\n                  :                       :                          :                    +- CometNativeColumnarToRow\n                  :                       :                          :                       +- CometProject\n                  :                       :                          :                          +- CometFilter\n                  :                       :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          +- BroadcastExchange\n                  :     
                  :                             +- Project\n                  :                       :                                +- BroadcastHashJoin\n                  :                       :                                   :- Project\n                  :                       :                                   :  +- BroadcastHashJoin\n                  :                       :                                   :     :- Filter\n                  :                       :                                   :     :  +- ColumnarToRow\n                  :                       :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                                   :     :           +- ReusedSubquery\n                  :                       :                                   :     +- BroadcastExchange\n                  :                       :                                   :        +- CometNativeColumnarToRow\n                  :                       :                                   :           +- CometFilter\n                  :                       :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                                   +- BroadcastExchange\n                  :                       :                                      +- CometNativeColumnarToRow\n                  :                       :                                         +- CometProject\n                  :                       :                                            +- CometFilter\n                  :                       :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       +- BroadcastExchange\n                  :                          +- CometNativeColumnarToRow\n                  :                             +- CometProject\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- Project\n                     +- Filter\n                        :  +- ReusedSubquery\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- BroadcastHashJoin\n                                          :     :  :- Filter\n                                          :     :  :  +- ColumnarToRow\n                                          :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :  :           +- ReusedSubquery\n                                          :     :  +- BroadcastExchange\n                                          :     :     +- Project\n                                          :     :        +- BroadcastHashJoin\n                      
                    :     :           :- CometNativeColumnarToRow\n                                          :     :           :  +- CometFilter\n                                          :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :           +- BroadcastExchange\n                                          :     :              +- BroadcastHashJoin\n                                          :     :                 :- CometNativeColumnarToRow\n                                          :     :                 :  +- CometHashAggregate\n                                          :     :                 :     +- CometColumnarExchange\n                                          :     :                 :        +- HashAggregate\n                                          :     :                 :           +- Project\n                                          :     :                 :              +- BroadcastHashJoin\n                                          :     :                 :                 :- Project\n                                          :     :                 :                 :  +- BroadcastHashJoin\n                                          :     :                 :                 :     :- Filter\n                                          :     :                 :                 :     :  +- ColumnarToRow\n                                          :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                 :                 :     :           +- SubqueryBroadcast\n                                          :     :                 :                 :     :              +- BroadcastExchange\n                                          :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                          :     :                 :                 :     :                    +- CometProject\n                                          :     :                 :                 :     :                       +- CometFilter\n                                          :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 :                 :     +- BroadcastExchange\n                                          :     :                 :                 :        +- BroadcastHashJoin\n                                          :     :                 :                 :           :- CometNativeColumnarToRow\n                                          :     :                 :                 :           :  +- CometFilter\n                                          :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :                 :                 :           +- BroadcastExchange\n                                          :     :                 :                 :              +- Project\n                                          :     :                 :                 :                 +- BroadcastHashJoin\n                                          :     :                 :                 :                    :- 
Project\n                                          :     :                 :                 :                    :  +- BroadcastHashJoin\n                                          :     :                 :                 :                    :     :- Filter\n                                          :     :                 :                 :                    :     :  +- ColumnarToRow\n                                          :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                 :                 :                    :     :           +- ReusedSubquery\n                                          :     :                 :                 :                    :     +- BroadcastExchange\n                                          :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                          :     :                 :                 :                    :           +- CometFilter\n                                          :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :                 :                 :                    +- BroadcastExchange\n                                          :     :                 :                 :                       +- CometNativeColumnarToRow\n                                          :     :                 :                 :                          +- CometProject\n                                          :     :                 :                 :                             +- CometFilter\n                                          :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 :                 +- BroadcastExchange\n                                          :     :                 :                    +- CometNativeColumnarToRow\n                                          :     :                 :                       +- CometProject\n                                          :     :                 :                          +- CometFilter\n                                          :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 +- BroadcastExchange\n                                          :     :                    +- Project\n                                          :     :                       +- BroadcastHashJoin\n                                          :     :                          :- Project\n                                          :     :                          :  +- BroadcastHashJoin\n                                          :     :                          :     :- Filter\n                                          :     :                          :     :  +- ColumnarToRow\n                                          :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                         
 :     :                          :     :           +- ReusedSubquery\n                                          :     :                          :     +- BroadcastExchange\n                                          :     :                          :        +- CometNativeColumnarToRow\n                                          :     :                          :           +- CometFilter\n                                          :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :                          +- BroadcastExchange\n                                          :     :                             +- CometNativeColumnarToRow\n                                          :     :                                +- CometProject\n                                          :     :                                   +- CometFilter\n                                          :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- BroadcastHashJoin\n                                          :           :- CometNativeColumnarToRow\n                                          :           :  +- CometFilter\n                                          :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :           +- BroadcastExchange\n                                          :              +- Project\n                                          :                 +- BroadcastHashJoin\n                                          :                    :- CometNativeColumnarToRow\n                                          :                    :  +- CometFilter\n                                          :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    +- BroadcastExchange\n                                          :                       +- BroadcastHashJoin\n                                          :                          :- CometNativeColumnarToRow\n                                          :                          :  +- CometHashAggregate\n                                          :                          :     +- CometColumnarExchange\n                                          :                          :        +- HashAggregate\n                                          :                          :           +- Project\n                                          :                          :              +- BroadcastHashJoin\n                                          :                          :                 :- Project\n                                          :                          :                 :  +- BroadcastHashJoin\n                                          :                          :                 :     :- Filter\n                                          :                          :                 :     :  +- ColumnarToRow\n                                          :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                          :                 :     :           +- 
SubqueryBroadcast\n                                          :                          :                 :     :              +- BroadcastExchange\n                                          :                          :                 :     :                 +- CometNativeColumnarToRow\n                                          :                          :                 :     :                    +- CometProject\n                                          :                          :                 :     :                       +- CometFilter\n                                          :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          :                 :     +- BroadcastExchange\n                                          :                          :                 :        +- BroadcastHashJoin\n                                          :                          :                 :           :- CometNativeColumnarToRow\n                                          :                          :                 :           :  +- CometFilter\n                                          :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                          :                 :           +- BroadcastExchange\n                                          :                          :                 :              +- Project\n                                          :                          :                 :                 +- BroadcastHashJoin\n                                          :                          :                 :                    :- Project\n                                          :                          :                 :                    :  +- BroadcastHashJoin\n                                          :                          :                 :                    :     :- Filter\n                                          :                          :                 :                    :     :  +- ColumnarToRow\n                                          :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                          :                 :                    :     :           +- ReusedSubquery\n                                          :                          :                 :                    :     +- BroadcastExchange\n                                          :                          :                 :                    :        +- CometNativeColumnarToRow\n                                          :                          :                 :                    :           +- CometFilter\n                                          :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                          :                 :                    +- BroadcastExchange\n                                          :                          :                 :                       +- CometNativeColumnarToRow\n                                          :     
                     :                 :                          +- CometProject\n                                          :                          :                 :                             +- CometFilter\n                                          :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          :                 +- BroadcastExchange\n                                          :                          :                    +- CometNativeColumnarToRow\n                                          :                          :                       +- CometProject\n                                          :                          :                          +- CometFilter\n                                          :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          +- BroadcastExchange\n                                          :                             +- Project\n                                          :                                +- BroadcastHashJoin\n                                          :                                   :- Project\n                                          :                                   :  +- BroadcastHashJoin\n                                          :                                   :     :- Filter\n                                          :                                   :     :  +- ColumnarToRow\n                                          :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                                   :     :           +- ReusedSubquery\n                                          :                                   :     +- BroadcastExchange\n                                          :                                   :        +- CometNativeColumnarToRow\n                                          :                                   :           +- CometFilter\n                                          :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                                   +- BroadcastExchange\n                                          :                                      +- CometNativeColumnarToRow\n                                          :                                         +- CometProject\n                                          :                                            +- CometFilter\n                                          :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 164 out of 458 eligible operators (35%). Final plan contains 93 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q14a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometProject\n                  :  +- CometFilter\n                  :     :  +- Subquery\n                  :     :     +- CometNativeColumnarToRow\n                  :     :        +- CometHashAggregate\n                  :     :           +- CometExchange\n                  :     :              +- CometHashAggregate\n                  :     :                 +- CometUnion\n                  :     :                    :- CometProject\n                  :     :                    :  +- CometBroadcastHashJoin\n                  :     :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :                    :     :     +- ReusedSubquery\n                  :     :                    :     +- CometBroadcastExchange\n                  :     :                    :        +- CometProject\n                  :     :                    :           +- CometFilter\n                  :     :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                    :- CometProject\n                  :     :                    :  +- CometBroadcastHashJoin\n                  :     :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     :     +- ReusedSubquery\n                  :     :                    :     +- CometBroadcastExchange\n                  :     :                    :        +- CometProject\n                  :     :                    :           +- CometFilter\n                  :     :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                    +- CometProject\n                  :     :                       +- CometBroadcastHashJoin\n                  :     :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :                          :     +- ReusedSubquery\n                  :     :                          +- CometBroadcastExchange\n                  :     :                             +- CometProject\n                  :     :                                +- CometFilter\n                  :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometHashAggregate\n                  :        +- CometExchange\n                  :           +- CometHashAggregate\n                  :              +- CometProject\n                  :                 +- CometBroadcastHashJoin\n                  :                    :- CometProject\n                  :                    :  +- CometBroadcastHashJoin\n                  :                    :     :- CometBroadcastHashJoin\n                  :                    :     :  :- CometFilter\n                  :                    :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :     :  :        +- SubqueryBroadcast\n                  :                    :     :  :           +- BroadcastExchange\n                  :   
                 :     :  :              +- CometNativeColumnarToRow\n                  :                    :     :  :                 +- CometProject\n                  :                    :     :  :                    +- CometFilter\n                  :                    :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :  +- CometBroadcastExchange\n                  :                    :     :     +- CometProject\n                  :                    :     :        +- CometBroadcastHashJoin\n                  :                    :     :           :- CometFilter\n                  :                    :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :           +- CometBroadcastExchange\n                  :                    :     :              +- CometBroadcastHashJoin\n                  :                    :     :                 :- CometHashAggregate\n                  :                    :     :                 :  +- CometExchange\n                  :                    :     :                 :     +- CometHashAggregate\n                  :                    :     :                 :        +- CometProject\n                  :                    :     :                 :           +- CometBroadcastHashJoin\n                  :                    :     :                 :              :- CometProject\n                  :                    :     :                 :              :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :     :- CometFilter\n                  :                    :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :     :                 :              :     :        +- SubqueryBroadcast\n                  :                    :     :                 :              :     :           +- BroadcastExchange\n                  :                    :     :                 :              :     :              +- CometNativeColumnarToRow\n                  :                    :     :                 :              :     :                 +- CometProject\n                  :                    :     :                 :              :     :                    +- CometFilter\n                  :                    :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              :     +- CometBroadcastExchange\n                  :                    :     :                 :              :        +- CometBroadcastHashJoin\n                  :                    :     :                 :              :           :- CometFilter\n                  :                    :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :           +- CometBroadcastExchange\n                  :                    :     :                 :              :              +- CometProject\n                  :                    :     :                 :              :                 +- CometBroadcastHashJoin\n           
       :                    :     :                 :              :                    :- CometProject\n                  :                    :     :                 :              :                    :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :                    :     :- CometFilter\n                  :                    :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :     :                 :              :                    :     :        +- ReusedSubquery\n                  :                    :     :                 :              :                    :     +- CometBroadcastExchange\n                  :                    :     :                 :              :                    :        +- CometFilter\n                  :                    :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :                    +- CometBroadcastExchange\n                  :                    :     :                 :              :                       +- CometProject\n                  :                    :     :                 :              :                          +- CometFilter\n                  :                    :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              +- CometBroadcastExchange\n                  :                    :     :                 :                 +- CometProject\n                  :                    :     :                 :                    +- CometFilter\n                  :                    :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 +- CometBroadcastExchange\n                  :                    :     :                    +- CometProject\n                  :                    :     :                       +- CometBroadcastHashJoin\n                  :                    :     :                          :- CometProject\n                  :                    :     :                          :  +- CometBroadcastHashJoin\n                  :                    :     :                          :     :- CometFilter\n                  :                    :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :     :                          :     :        +- ReusedSubquery\n                  :                    :     :                          :     +- CometBroadcastExchange\n                  :                    :     :                          :        +- CometFilter\n                  :                    :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                          +- CometBroadcastExchange\n                  :                    :     :                             +- CometProject\n                  :                    :     :    
                            +- CometFilter\n                  :                    :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     +- CometBroadcastExchange\n                  :                    :        +- CometBroadcastHashJoin\n                  :                    :           :- CometFilter\n                  :                    :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :           +- CometBroadcastExchange\n                  :                    :              +- CometProject\n                  :                    :                 +- CometBroadcastHashJoin\n                  :                    :                    :- CometFilter\n                  :                    :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                    +- CometBroadcastExchange\n                  :                    :                       +- CometBroadcastHashJoin\n                  :                    :                          :- CometHashAggregate\n                  :                    :                          :  +- CometExchange\n                  :                    :                          :     +- CometHashAggregate\n                  :                    :                          :        +- CometProject\n                  :                    :                          :           +- CometBroadcastHashJoin\n                  :                    :                          :              :- CometProject\n                  :                    :                          :              :  +- CometBroadcastHashJoin\n                  :                    :                          :              :     :- CometFilter\n                  :                    :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :                          :              :     :        +- SubqueryBroadcast\n                  :                    :                          :              :     :           +- BroadcastExchange\n                  :                    :                          :              :     :              +- CometNativeColumnarToRow\n                  :                    :                          :              :     :                 +- CometProject\n                  :                    :                          :              :     :                    +- CometFilter\n                  :                    :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              :     +- CometBroadcastExchange\n                  :                    :                          :              :        +- CometBroadcastHashJoin\n                  :                    :                          :              :           :- CometFilter\n                  :                    :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :           +- CometBroadcastExchange\n       
           :                    :                          :              :              +- CometProject\n                  :                    :                          :              :                 +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :- CometProject\n                  :                    :                          :              :                    :  +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :     :- CometFilter\n                  :                    :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :                          :              :                    :     :        +- ReusedSubquery\n                  :                    :                          :              :                    :     +- CometBroadcastExchange\n                  :                    :                          :              :                    :        +- CometFilter\n                  :                    :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :                    +- CometBroadcastExchange\n                  :                    :                          :              :                       +- CometProject\n                  :                    :                          :              :                          +- CometFilter\n                  :                    :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              +- CometBroadcastExchange\n                  :                    :                          :                 +- CometProject\n                  :                    :                          :                    +- CometFilter\n                  :                    :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          +- CometBroadcastExchange\n                  :                    :                             +- CometProject\n                  :                    :                                +- CometBroadcastHashJoin\n                  :                    :                                   :- CometProject\n                  :                    :                                   :  +- CometBroadcastHashJoin\n                  :                    :                                   :     :- CometFilter\n                  :                    :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :                                   :     :        +- ReusedSubquery\n                  :                    :                                   :     +- CometBroadcastExchange\n                  :                    :                                   :        +- CometFilter\n                  :                    :                                   : 
          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                                   +- CometBroadcastExchange\n                  :                    :                                      +- CometProject\n                  :                    :                                         +- CometFilter\n                  :                    :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    +- CometBroadcastExchange\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :- CometProject\n                  :  +- CometFilter\n                  :     :  +- ReusedSubquery\n                  :     +- CometHashAggregate\n                  :        +- CometExchange\n                  :           +- CometHashAggregate\n                  :              +- CometProject\n                  :                 +- CometBroadcastHashJoin\n                  :                    :- CometProject\n                  :                    :  +- CometBroadcastHashJoin\n                  :                    :     :- CometBroadcastHashJoin\n                  :                    :     :  :- CometFilter\n                  :                    :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :     :  :        +- ReusedSubquery\n                  :                    :     :  +- CometBroadcastExchange\n                  :                    :     :     +- CometProject\n                  :                    :     :        +- CometBroadcastHashJoin\n                  :                    :     :           :- CometFilter\n                  :                    :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :           +- CometBroadcastExchange\n                  :                    :     :              +- CometBroadcastHashJoin\n                  :                    :     :                 :- CometHashAggregate\n                  :                    :     :                 :  +- CometExchange\n                  :                    :     :                 :     +- CometHashAggregate\n                  :                    :     :                 :        +- CometProject\n                  :                    :     :                 :           +- CometBroadcastHashJoin\n                  :                    :     :                 :              :- CometProject\n                  :                    :     :                 :              :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :     :- CometFilter\n                  :                    :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :     :                 :              :     :        +- SubqueryBroadcast\n                  :                    :     :                 :              :     :           +- BroadcastExchange\n                  :                    :     :                 :              :     :              +- 
CometNativeColumnarToRow\n                  :                    :     :                 :              :     :                 +- CometProject\n                  :                    :     :                 :              :     :                    +- CometFilter\n                  :                    :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              :     +- CometBroadcastExchange\n                  :                    :     :                 :              :        +- CometBroadcastHashJoin\n                  :                    :     :                 :              :           :- CometFilter\n                  :                    :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :           +- CometBroadcastExchange\n                  :                    :     :                 :              :              +- CometProject\n                  :                    :     :                 :              :                 +- CometBroadcastHashJoin\n                  :                    :     :                 :              :                    :- CometProject\n                  :                    :     :                 :              :                    :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :                    :     :- CometFilter\n                  :                    :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :     :                 :              :                    :     :        +- ReusedSubquery\n                  :                    :     :                 :              :                    :     +- CometBroadcastExchange\n                  :                    :     :                 :              :                    :        +- CometFilter\n                  :                    :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :                    +- CometBroadcastExchange\n                  :                    :     :                 :              :                       +- CometProject\n                  :                    :     :                 :              :                          +- CometFilter\n                  :                    :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              +- CometBroadcastExchange\n                  :                    :     :                 :                 +- CometProject\n                  :                    :     :                 :                    +- CometFilter\n                  :                    :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 +- CometBroadcastExchange\n                  : 
                   :     :                    +- CometProject\n                  :                    :     :                       +- CometBroadcastHashJoin\n                  :                    :     :                          :- CometProject\n                  :                    :     :                          :  +- CometBroadcastHashJoin\n                  :                    :     :                          :     :- CometFilter\n                  :                    :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :     :                          :     :        +- ReusedSubquery\n                  :                    :     :                          :     +- CometBroadcastExchange\n                  :                    :     :                          :        +- CometFilter\n                  :                    :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                          +- CometBroadcastExchange\n                  :                    :     :                             +- CometProject\n                  :                    :     :                                +- CometFilter\n                  :                    :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     +- CometBroadcastExchange\n                  :                    :        +- CometBroadcastHashJoin\n                  :                    :           :- CometFilter\n                  :                    :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :           +- CometBroadcastExchange\n                  :                    :              +- CometProject\n                  :                    :                 +- CometBroadcastHashJoin\n                  :                    :                    :- CometFilter\n                  :                    :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                    +- CometBroadcastExchange\n                  :                    :                       +- CometBroadcastHashJoin\n                  :                    :                          :- CometHashAggregate\n                  :                    :                          :  +- CometExchange\n                  :                    :                          :     +- CometHashAggregate\n                  :                    :                          :        +- CometProject\n                  :                    :                          :           +- CometBroadcastHashJoin\n                  :                    :                          :              :- CometProject\n                  :                    :                          :              :  +- CometBroadcastHashJoin\n                  :                    :                          :              :     :- CometFilter\n                  :                    :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :                          :              :     :        +- SubqueryBroadcast\n             
     :                    :                          :              :     :           +- BroadcastExchange\n                  :                    :                          :              :     :              +- CometNativeColumnarToRow\n                  :                    :                          :              :     :                 +- CometProject\n                  :                    :                          :              :     :                    +- CometFilter\n                  :                    :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              :     +- CometBroadcastExchange\n                  :                    :                          :              :        +- CometBroadcastHashJoin\n                  :                    :                          :              :           :- CometFilter\n                  :                    :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :           +- CometBroadcastExchange\n                  :                    :                          :              :              +- CometProject\n                  :                    :                          :              :                 +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :- CometProject\n                  :                    :                          :              :                    :  +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :     :- CometFilter\n                  :                    :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :                          :              :                    :     :        +- ReusedSubquery\n                  :                    :                          :              :                    :     +- CometBroadcastExchange\n                  :                    :                          :              :                    :        +- CometFilter\n                  :                    :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :                    +- CometBroadcastExchange\n                  :                    :                          :              :                       +- CometProject\n                  :                    :                          :              :                          +- CometFilter\n                  :                    :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              +- CometBroadcastExchange\n                  :                    :                          :                 +- CometProject\n                  :                    :                          :                    
+- CometFilter\n                  :                    :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          +- CometBroadcastExchange\n                  :                    :                             +- CometProject\n                  :                    :                                +- CometBroadcastHashJoin\n                  :                    :                                   :- CometProject\n                  :                    :                                   :  +- CometBroadcastHashJoin\n                  :                    :                                   :     :- CometFilter\n                  :                    :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :                                   :     :        +- ReusedSubquery\n                  :                    :                                   :     +- CometBroadcastExchange\n                  :                    :                                   :        +- CometFilter\n                  :                    :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                                   +- CometBroadcastExchange\n                  :                    :                                      +- CometProject\n                  :                    :                                         +- CometFilter\n                  :                    :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    +- CometBroadcastExchange\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometProject\n                     +- CometFilter\n                        :  +- ReusedSubquery\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometBroadcastHashJoin\n                                       :     :  :- CometFilter\n                                       :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                       :     :  :        +- ReusedSubquery\n                                       :     :  +- CometBroadcastExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometFilter\n                                       :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       
:     :              +- CometBroadcastHashJoin\n                                       :     :                 :- CometHashAggregate\n                                       :     :                 :  +- CometExchange\n                                       :     :                 :     +- CometHashAggregate\n                                       :     :                 :        +- CometProject\n                                       :     :                 :           +- CometBroadcastHashJoin\n                                       :     :                 :              :- CometProject\n                                       :     :                 :              :  +- CometBroadcastHashJoin\n                                       :     :                 :              :     :- CometFilter\n                                       :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :                 :              :     :        +- SubqueryBroadcast\n                                       :     :                 :              :     :           +- BroadcastExchange\n                                       :     :                 :              :     :              +- CometNativeColumnarToRow\n                                       :     :                 :              :     :                 +- CometProject\n                                       :     :                 :              :     :                    +- CometFilter\n                                       :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :                 :              :     +- CometBroadcastExchange\n                                       :     :                 :              :        +- CometBroadcastHashJoin\n                                       :     :                 :              :           :- CometFilter\n                                       :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :                 :              :           +- CometBroadcastExchange\n                                       :     :                 :              :              +- CometProject\n                                       :     :                 :              :                 +- CometBroadcastHashJoin\n                                       :     :                 :              :                    :- CometProject\n                                       :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                       :     :                 :              :                    :     :- CometFilter\n                                       :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :                 :              :                    :     :        +- ReusedSubquery\n                                       :     :                 :              :                    :     +- CometBroadcastExchange\n                                       :     :                 :              :                    :        +- CometFilter\n      
                                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :                 :              :                    +- CometBroadcastExchange\n                                       :     :                 :              :                       +- CometProject\n                                       :     :                 :              :                          +- CometFilter\n                                       :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :                 :              +- CometBroadcastExchange\n                                       :     :                 :                 +- CometProject\n                                       :     :                 :                    +- CometFilter\n                                       :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :                 +- CometBroadcastExchange\n                                       :     :                    +- CometProject\n                                       :     :                       +- CometBroadcastHashJoin\n                                       :     :                          :- CometProject\n                                       :     :                          :  +- CometBroadcastHashJoin\n                                       :     :                          :     :- CometFilter\n                                       :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                       :     :                          :     :        +- ReusedSubquery\n                                       :     :                          :     +- CometBroadcastExchange\n                                       :     :                          :        +- CometFilter\n                                       :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :                          +- CometBroadcastExchange\n                                       :     :                             +- CometProject\n                                       :     :                                +- CometFilter\n                                       :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometBroadcastHashJoin\n                                       :           :- CometFilter\n                                       :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :           +- CometBroadcastExchange\n                                       :              +- CometProject\n                                       :                 +- CometBroadcastHashJoin\n                                       :                    :- CometFilter\n                                       :                    :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                    +- CometBroadcastExchange\n                                       :                       +- CometBroadcastHashJoin\n                                       :                          :- CometHashAggregate\n                                       :                          :  +- CometExchange\n                                       :                          :     +- CometHashAggregate\n                                       :                          :        +- CometProject\n                                       :                          :           +- CometBroadcastHashJoin\n                                       :                          :              :- CometProject\n                                       :                          :              :  +- CometBroadcastHashJoin\n                                       :                          :              :     :- CometFilter\n                                       :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :                          :              :     :        +- SubqueryBroadcast\n                                       :                          :              :     :           +- BroadcastExchange\n                                       :                          :              :     :              +- CometNativeColumnarToRow\n                                       :                          :              :     :                 +- CometProject\n                                       :                          :              :     :                    +- CometFilter\n                                       :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :                          :              :     +- CometBroadcastExchange\n                                       :                          :              :        +- CometBroadcastHashJoin\n                                       :                          :              :           :- CometFilter\n                                       :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                          :              :           +- CometBroadcastExchange\n                                       :                          :              :              +- CometProject\n                                       :                          :              :                 +- CometBroadcastHashJoin\n                                       :                          :              :                    :- CometProject\n                                       :                          :              :                    :  +- CometBroadcastHashJoin\n                                       :                          :              :                    :     :- CometFilter\n                                       :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :                          :              :                    :     :        +- 
ReusedSubquery\n                                       :                          :              :                    :     +- CometBroadcastExchange\n                                       :                          :              :                    :        +- CometFilter\n                                       :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                          :              :                    +- CometBroadcastExchange\n                                       :                          :              :                       +- CometProject\n                                       :                          :              :                          +- CometFilter\n                                       :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :                          :              +- CometBroadcastExchange\n                                       :                          :                 +- CometProject\n                                       :                          :                    +- CometFilter\n                                       :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :                          +- CometBroadcastExchange\n                                       :                             +- CometProject\n                                       :                                +- CometBroadcastHashJoin\n                                       :                                   :- CometProject\n                                       :                                   :  +- CometBroadcastHashJoin\n                                       :                                   :     :- CometFilter\n                                       :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                       :                                   :     :        +- ReusedSubquery\n                                       :                                   :     +- CometBroadcastExchange\n                                       :                                   :        +- CometFilter\n                                       :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                                   +- CometBroadcastExchange\n                                       :                                      +- CometProject\n                                       :                                         +- CometFilter\n                                       :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 424 out of 458 eligible operators (92%). 
Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q14b.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- BroadcastHashJoin\n   :- Filter\n   :  :  +- Subquery\n   :  :     +- HashAggregate\n   :  :        +- CometNativeColumnarToRow\n   :  :           +- CometColumnarExchange\n   :  :              +- HashAggregate\n   :  :                 +- Union\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    +- Project\n   :  :                       +- BroadcastHashJoin\n   :  :                          :- ColumnarToRow\n   :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                          :        +- ReusedSubquery\n   :  :                          +- BroadcastExchange\n   :  :                             +- CometNativeColumnarToRow\n   :  :                                +- CometProject\n   :  :                                   +- CometFilter\n   :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  +- HashAggregate\n   :     +- CometNativeColumnarToRow\n   :        +- CometColumnarExchange\n   :           +- HashAggregate\n   :              +- Project\n   :                 +- BroadcastHashJoin\n   :                    :- Project\n   :                    :  +- BroadcastHashJoin\n   :                    :     :- BroadcastHashJoin\n   :                    :     :  :- Filter\n   :                    :     :  :  +- ColumnarToRow\n   :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :  :           +- SubqueryBroadcast\n   :                    :     :  :              +- BroadcastExchange\n   :                    :     :  :                 +- CometNativeColumnarToRow\n   :                    :     :  :                    +- CometProject\n   :                    :     :  :                       +- CometFilter\n   :                    :     :  :                          :  +- Subquery\n   :                    :     :  :                          :     +- CometNativeColumnarToRow\n   :        
            :     :  :                          :        +- CometProject\n   :                    :     :  :                          :           +- CometFilter\n   :                    :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :  +- BroadcastExchange\n   :                    :     :     +- Project\n   :                    :     :        +- BroadcastHashJoin\n   :                    :     :           :- CometNativeColumnarToRow\n   :                    :     :           :  +- CometFilter\n   :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :           +- BroadcastExchange\n   :                    :     :              +- BroadcastHashJoin\n   :                    :     :                 :- CometNativeColumnarToRow\n   :                    :     :                 :  +- CometHashAggregate\n   :                    :     :                 :     +- CometColumnarExchange\n   :                    :     :                 :        +- HashAggregate\n   :                    :     :                 :           +- Project\n   :                    :     :                 :              +- BroadcastHashJoin\n   :                    :     :                 :                 :- Project\n   :                    :     :                 :                 :  +- BroadcastHashJoin\n   :                    :     :                 :                 :     :- Filter\n   :                    :     :                 :                 :     :  +- ColumnarToRow\n   :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :     :           +- SubqueryBroadcast\n   :                    :     :                 :                 :     :              +- BroadcastExchange\n   :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n   :                    :     :                 :                 :     :                    +- CometProject\n   :                    :     :                 :                 :     :                       +- CometFilter\n   :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 :     +- BroadcastExchange\n   :                    :     :                 :                 :        +- BroadcastHashJoin\n   :                    :     :                 :                 :           :- CometNativeColumnarToRow\n   :                    :     :                 :                 :           :  +- CometFilter\n   :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :           +- BroadcastExchange\n   :                    :     :                 :                 :              +- Project\n   :                    :     :                 :                 :                 +- BroadcastHashJoin\n   :                    :     :                 
:                 :                    :- Project\n   :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n   :                    :     :                 :                 :                    :     :- Filter\n   :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n   :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n   :                    :     :                 :                 :                    :     +- BroadcastExchange\n   :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                    :           +- CometFilter\n   :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :                    +- BroadcastExchange\n   :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                          +- CometProject\n   :                    :     :                 :                 :                             +- CometFilter\n   :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 +- BroadcastExchange\n   :                    :     :                 :                    +- CometNativeColumnarToRow\n   :                    :     :                 :                       +- CometProject\n   :                    :     :                 :                          +- CometFilter\n   :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 +- BroadcastExchange\n   :                    :     :                    +- Project\n   :                    :     :                       +- BroadcastHashJoin\n   :                    :     :                          :- Project\n   :                    :     :                          :  +- BroadcastHashJoin\n   :                    :     :                          :     :- Filter\n   :                    :     :                          :     :  +- ColumnarToRow\n   :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                          :     :           +- ReusedSubquery\n   :                    :     :                          :     +- BroadcastExchange\n   :                    :     :                          :        +- CometNativeColumnarToRow\n   :                    :     :                          :           +- CometFilter\n   :                    :     :                          :              +- CometNativeScan parquet 
spark_catalog.default.item\n   :                    :     :                          +- BroadcastExchange\n   :                    :     :                             +- CometNativeColumnarToRow\n   :                    :     :                                +- CometProject\n   :                    :     :                                   +- CometFilter\n   :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     +- BroadcastExchange\n   :                    :        +- BroadcastHashJoin\n   :                    :           :- CometNativeColumnarToRow\n   :                    :           :  +- CometFilter\n   :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :           +- BroadcastExchange\n   :                    :              +- Project\n   :                    :                 +- BroadcastHashJoin\n   :                    :                    :- CometNativeColumnarToRow\n   :                    :                    :  +- CometFilter\n   :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                    +- BroadcastExchange\n   :                    :                       +- BroadcastHashJoin\n   :                    :                          :- CometNativeColumnarToRow\n   :                    :                          :  +- CometHashAggregate\n   :                    :                          :     +- CometColumnarExchange\n   :                    :                          :        +- HashAggregate\n   :                    :                          :           +- Project\n   :                    :                          :              +- BroadcastHashJoin\n   :                    :                          :                 :- Project\n   :                    :                          :                 :  +- BroadcastHashJoin\n   :                    :                          :                 :     :- Filter\n   :                    :                          :                 :     :  +- ColumnarToRow\n   :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :     :           +- SubqueryBroadcast\n   :                    :                          :                 :     :              +- BroadcastExchange\n   :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n   :                    :                          :                 :     :                    +- CometProject\n   :                    :                          :                 :     :                       +- CometFilter\n   :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 :     +- BroadcastExchange\n   :                    :                          :                 :        +- BroadcastHashJoin\n   :                    :                          :                 :           :- CometNativeColumnarToRow\n   :                    :                          :                 :           :  +- CometFilter\n   : 
                   :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :           +- BroadcastExchange\n   :                    :                          :                 :              +- Project\n   :                    :                          :                 :                 +- BroadcastHashJoin\n   :                    :                          :                 :                    :- Project\n   :                    :                          :                 :                    :  +- BroadcastHashJoin\n   :                    :                          :                 :                    :     :- Filter\n   :                    :                          :                 :                    :     :  +- ColumnarToRow\n   :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :                    :     :           +- ReusedSubquery\n   :                    :                          :                 :                    :     +- BroadcastExchange\n   :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n   :                    :                          :                 :                    :           +- CometFilter\n   :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :                    +- BroadcastExchange\n   :                    :                          :                 :                       +- CometNativeColumnarToRow\n   :                    :                          :                 :                          +- CometProject\n   :                    :                          :                 :                             +- CometFilter\n   :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 +- BroadcastExchange\n   :                    :                          :                    +- CometNativeColumnarToRow\n   :                    :                          :                       +- CometProject\n   :                    :                          :                          +- CometFilter\n   :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          +- BroadcastExchange\n   :                    :                             +- Project\n   :                    :                                +- BroadcastHashJoin\n   :                    :                                   :- Project\n   :                    :                                   :  +- BroadcastHashJoin\n   :                    :                                   :     :- Filter\n   :                    :                                   :     :  +- ColumnarToRow\n   :                    :                                   :     :     +-  Scan parquet 
spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                                   :     :           +- ReusedSubquery\n   :                    :                                   :     +- BroadcastExchange\n   :                    :                                   :        +- CometNativeColumnarToRow\n   :                    :                                   :           +- CometFilter\n   :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                                   +- BroadcastExchange\n   :                    :                                      +- CometNativeColumnarToRow\n   :                    :                                         +- CometProject\n   :                    :                                            +- CometFilter\n   :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    +- BroadcastExchange\n   :                       +- CometNativeColumnarToRow\n   :                          +- CometProject\n   :                             +- CometFilter\n   :                                :  +- Subquery\n   :                                :     +- CometNativeColumnarToRow\n   :                                :        +- CometProject\n   :                                :           +- CometFilter\n   :                                :              +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   +- BroadcastExchange\n      +- Filter\n         :  +- ReusedSubquery\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- BroadcastHashJoin\n                           :     :  :- Filter\n                           :     :  :  +- ColumnarToRow\n                           :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :  :           +- SubqueryBroadcast\n                           :     :  :              +- BroadcastExchange\n                           :     :  :                 +- CometNativeColumnarToRow\n                           :     :  :                    +- CometProject\n                           :     :  :                       +- CometFilter\n                           :     :  :                          :  +- Subquery\n                           :     :  :                          :     +- CometNativeColumnarToRow\n                           :     :  :                          :        +- CometProject\n                           :     :  :                          :           +- CometFilter\n                           :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  +- BroadcastExchange\n                           :     :     +- Project\n  
                         :     :        +- BroadcastHashJoin\n                           :     :           :- CometNativeColumnarToRow\n                           :     :           :  +- CometFilter\n                           :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :           +- BroadcastExchange\n                           :     :              +- BroadcastHashJoin\n                           :     :                 :- CometNativeColumnarToRow\n                           :     :                 :  +- CometHashAggregate\n                           :     :                 :     +- CometColumnarExchange\n                           :     :                 :        +- HashAggregate\n                           :     :                 :           +- Project\n                           :     :                 :              +- BroadcastHashJoin\n                           :     :                 :                 :- Project\n                           :     :                 :                 :  +- BroadcastHashJoin\n                           :     :                 :                 :     :- Filter\n                           :     :                 :                 :     :  +- ColumnarToRow\n                           :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :     :           +- SubqueryBroadcast\n                           :     :                 :                 :     :              +- BroadcastExchange\n                           :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                           :     :                 :                 :     :                    +- CometProject\n                           :     :                 :                 :     :                       +- CometFilter\n                           :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 :     +- BroadcastExchange\n                           :     :                 :                 :        +- BroadcastHashJoin\n                           :     :                 :                 :           :- CometNativeColumnarToRow\n                           :     :                 :                 :           :  +- CometFilter\n                           :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :           +- BroadcastExchange\n                           :     :                 :                 :              +- Project\n                           :     :                 :                 :                 +- BroadcastHashJoin\n                           :     :                 :                 :                    :- Project\n                           :     :                 :                 :                    :  +- BroadcastHashJoin\n                           :     :                 :                 :                    :     :- Filter\n                           :     :                 :                 :                    :     :  +- ColumnarToRow\n                           :     : 
                :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :                    :     :           +- ReusedSubquery\n                           :     :                 :                 :                    :     +- BroadcastExchange\n                           :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                           :     :                 :                 :                    :           +- CometFilter\n                           :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :                    +- BroadcastExchange\n                           :     :                 :                 :                       +- CometNativeColumnarToRow\n                           :     :                 :                 :                          +- CometProject\n                           :     :                 :                 :                             +- CometFilter\n                           :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 +- BroadcastExchange\n                           :     :                 :                    +- CometNativeColumnarToRow\n                           :     :                 :                       +- CometProject\n                           :     :                 :                          +- CometFilter\n                           :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 +- BroadcastExchange\n                           :     :                    +- Project\n                           :     :                       +- BroadcastHashJoin\n                           :     :                          :- Project\n                           :     :                          :  +- BroadcastHashJoin\n                           :     :                          :     :- Filter\n                           :     :                          :     :  +- ColumnarToRow\n                           :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                          :     :           +- ReusedSubquery\n                           :     :                          :     +- BroadcastExchange\n                           :     :                          :        +- CometNativeColumnarToRow\n                           :     :                          :           +- CometFilter\n                           :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                          +- BroadcastExchange\n                           :     :                             +- CometNativeColumnarToRow\n                           :     :                                +- CometProject\n                           :     :                                   
+- CometFilter\n                           :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     +- BroadcastExchange\n                           :        +- BroadcastHashJoin\n                           :           :- CometNativeColumnarToRow\n                           :           :  +- CometFilter\n                           :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :           +- BroadcastExchange\n                           :              +- Project\n                           :                 +- BroadcastHashJoin\n                           :                    :- CometNativeColumnarToRow\n                           :                    :  +- CometFilter\n                           :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                    +- BroadcastExchange\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometHashAggregate\n                           :                          :     +- CometColumnarExchange\n                           :                          :        +- HashAggregate\n                           :                          :           +- Project\n                           :                          :              +- BroadcastHashJoin\n                           :                          :                 :- Project\n                           :                          :                 :  +- BroadcastHashJoin\n                           :                          :                 :     :- Filter\n                           :                          :                 :     :  +- ColumnarToRow\n                           :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :     :           +- SubqueryBroadcast\n                           :                          :                 :     :              +- BroadcastExchange\n                           :                          :                 :     :                 +- CometNativeColumnarToRow\n                           :                          :                 :     :                    +- CometProject\n                           :                          :                 :     :                       +- CometFilter\n                           :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 :     +- BroadcastExchange\n                           :                          :                 :        +- BroadcastHashJoin\n                           :                          :                 :           :- CometNativeColumnarToRow\n                           :                          :                 :           :  +- CometFilter\n                           :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :           +- 
BroadcastExchange\n                           :                          :                 :              +- Project\n                           :                          :                 :                 +- BroadcastHashJoin\n                           :                          :                 :                    :- Project\n                           :                          :                 :                    :  +- BroadcastHashJoin\n                           :                          :                 :                    :     :- Filter\n                           :                          :                 :                    :     :  +- ColumnarToRow\n                           :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :                    :     :           +- ReusedSubquery\n                           :                          :                 :                    :     +- BroadcastExchange\n                           :                          :                 :                    :        +- CometNativeColumnarToRow\n                           :                          :                 :                    :           +- CometFilter\n                           :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :                    +- BroadcastExchange\n                           :                          :                 :                       +- CometNativeColumnarToRow\n                           :                          :                 :                          +- CometProject\n                           :                          :                 :                             +- CometFilter\n                           :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 +- BroadcastExchange\n                           :                          :                    +- CometNativeColumnarToRow\n                           :                          :                       +- CometProject\n                           :                          :                          +- CometFilter\n                           :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- BroadcastHashJoin\n                           :                                   :- Project\n                           :                                   :  +- BroadcastHashJoin\n                           :                                   :     :- Filter\n                           :                                   :     :  +- ColumnarToRow\n                           :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :           
                        :     :           +- ReusedSubquery\n                           :                                   :     +- BroadcastExchange\n                           :                                   :        +- CometNativeColumnarToRow\n                           :                                   :           +- CometFilter\n                           :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                                   +- BroadcastExchange\n                           :                                      +- CometNativeColumnarToRow\n                           :                                         +- CometProject\n                           :                                            +- CometFilter\n                           :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometProject\n                                    +- CometFilter\n                                       :  +- Subquery\n                                       :     +- CometNativeColumnarToRow\n                                       :        +- CometProject\n                                       :           +- CometFilter\n                                       :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 128 out of 333 eligible operators (38%). Final plan contains 69 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q14b.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometBroadcastHashJoin\n      :- CometFilter\n      :  :  +- Subquery\n      :  :     +- CometNativeColumnarToRow\n      :  :        +- CometHashAggregate\n      :  :           +- CometExchange\n      :  :              +- CometHashAggregate\n      :  :                 +- CometUnion\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    +- CometProject\n      :  :                       +- CometBroadcastHashJoin\n      :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :  :                          :     +- ReusedSubquery\n      :  :                          +- CometBroadcastExchange\n      :  :                             +- CometProject\n      :  :                                +- CometFilter\n      :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  +- CometHashAggregate\n      :     +- CometExchange\n      :        +- CometHashAggregate\n      :           +- CometProject\n      :              +- CometBroadcastHashJoin\n      :                 :- CometProject\n      :                 :  +- CometBroadcastHashJoin\n      :                 :     :- CometBroadcastHashJoin\n      :                 :     :  :- CometFilter\n      :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :  :        +- SubqueryBroadcast\n      :                 :     :  :           +- BroadcastExchange\n      :                 :     :  :              +- CometNativeColumnarToRow\n      :                 :     :  :                 +- CometProject\n      :                 :     :  :                    +- CometFilter\n      :                 :     :  :                       :  +- Subquery\n      :                 :     :  :                       :     +- CometNativeColumnarToRow\n      :                 :     :  :                       :        +- CometProject\n      :                 :     :  :                       :           +- CometFilter\n      :                 :     :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n      :                 :     :  +- CometBroadcastExchange\n      :                 :     :     +- CometProject\n      :                 :     :        +- CometBroadcastHashJoin\n      :                 :     :           :- CometFilter\n      :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :           +- CometBroadcastExchange\n      :                 :     :              +- CometBroadcastHashJoin\n      :                 :     :                 :- CometHashAggregate\n      :                 :     :                 :  +- CometExchange\n      :                 :     :                 :     +- CometHashAggregate\n      :                 :     :                 :        +- CometProject\n      :                 :     :                 :           +- CometBroadcastHashJoin\n      :                 :     :                 :              :- CometProject\n      :                 :     :                 :              :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :     :- CometFilter\n      :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :                 :              :     :        +- SubqueryBroadcast\n      :                 :     :                 :              :     :           +- BroadcastExchange\n      :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n      :                 :     :                 :              :     :                 +- CometProject\n      :                 :     :                 :              :     :                    +- CometFilter\n      :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              :     +- CometBroadcastExchange\n      :                 :     :                 :              :        +- CometBroadcastHashJoin\n      :                 :     :                 :              :           :- CometFilter\n      :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :           +- CometBroadcastExchange\n      :                 :     :                 :              :              +- CometProject\n      :                 :     :                 :              :                 +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :- CometProject\n      :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :     :- CometFilter\n      :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :     :                 :              :                    :     :        +- ReusedSubquery\n      :                 :     :                 :              :                    :     +- CometBroadcastExchange\n      :                 :     :                 :              :      
              :        +- CometFilter\n      :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :                    +- CometBroadcastExchange\n      :                 :     :                 :              :                       +- CometProject\n      :                 :     :                 :              :                          +- CometFilter\n      :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              +- CometBroadcastExchange\n      :                 :     :                 :                 +- CometProject\n      :                 :     :                 :                    +- CometFilter\n      :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 +- CometBroadcastExchange\n      :                 :     :                    +- CometProject\n      :                 :     :                       +- CometBroadcastHashJoin\n      :                 :     :                          :- CometProject\n      :                 :     :                          :  +- CometBroadcastHashJoin\n      :                 :     :                          :     :- CometFilter\n      :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :     :                          :     :        +- ReusedSubquery\n      :                 :     :                          :     +- CometBroadcastExchange\n      :                 :     :                          :        +- CometFilter\n      :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                          +- CometBroadcastExchange\n      :                 :     :                             +- CometProject\n      :                 :     :                                +- CometFilter\n      :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     +- CometBroadcastExchange\n      :                 :        +- CometBroadcastHashJoin\n      :                 :           :- CometFilter\n      :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :           +- CometBroadcastExchange\n      :                 :              +- CometProject\n      :                 :                 +- CometBroadcastHashJoin\n      :                 :                    :- CometFilter\n      :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                    +- CometBroadcastExchange\n      :                 :                       +- CometBroadcastHashJoin\n      :                 :                          :- CometHashAggregate\n      :                 :                          :  +- CometExchange\n      :                 :                          :     +- CometHashAggregate\n      :                 :    
                      :        +- CometProject\n      :                 :                          :           +- CometBroadcastHashJoin\n      :                 :                          :              :- CometProject\n      :                 :                          :              :  +- CometBroadcastHashJoin\n      :                 :                          :              :     :- CometFilter\n      :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :                          :              :     :        +- SubqueryBroadcast\n      :                 :                          :              :     :           +- BroadcastExchange\n      :                 :                          :              :     :              +- CometNativeColumnarToRow\n      :                 :                          :              :     :                 +- CometProject\n      :                 :                          :              :     :                    +- CometFilter\n      :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          :              :     +- CometBroadcastExchange\n      :                 :                          :              :        +- CometBroadcastHashJoin\n      :                 :                          :              :           :- CometFilter\n      :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :           +- CometBroadcastExchange\n      :                 :                          :              :              +- CometProject\n      :                 :                          :              :                 +- CometBroadcastHashJoin\n      :                 :                          :              :                    :- CometProject\n      :                 :                          :              :                    :  +- CometBroadcastHashJoin\n      :                 :                          :              :                    :     :- CometFilter\n      :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :                          :              :                    :     :        +- ReusedSubquery\n      :                 :                          :              :                    :     +- CometBroadcastExchange\n      :                 :                          :              :                    :        +- CometFilter\n      :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :                    +- CometBroadcastExchange\n      :                 :                          :              :                       +- CometProject\n      :                 :                          :              :                          +- CometFilter\n      :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n      :                 :                          :              +- CometBroadcastExchange\n      :                 :                          :                 +- CometProject\n      :                 :                          :                    +- CometFilter\n      :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          +- CometBroadcastExchange\n      :                 :                             +- CometProject\n      :                 :                                +- CometBroadcastHashJoin\n      :                 :                                   :- CometProject\n      :                 :                                   :  +- CometBroadcastHashJoin\n      :                 :                                   :     :- CometFilter\n      :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :                                   :     :        +- ReusedSubquery\n      :                 :                                   :     +- CometBroadcastExchange\n      :                 :                                   :        +- CometFilter\n      :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                                   +- CometBroadcastExchange\n      :                 :                                      +- CometProject\n      :                 :                                         +- CometFilter\n      :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 +- CometBroadcastExchange\n      :                    +- CometProject\n      :                       +- CometFilter\n      :                          :  +- ReusedSubquery\n      :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      +- CometBroadcastExchange\n         +- CometFilter\n            :  +- ReusedSubquery\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastHashJoin\n                           :     :  :- CometFilter\n                           :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :  :        +- SubqueryBroadcast\n                           :     :  :           +- BroadcastExchange\n                           :     :  :              +- CometNativeColumnarToRow\n                           :     :  :                 +- CometProject\n                           :     :  :                    +- CometFilter\n                           :     :  :                       :  +- Subquery\n                           :     :  :                       :     +- CometNativeColumnarToRow\n                           :     :  :                       :        +- CometProject\n                           :     :  :                       :           +- CometFilter\n                           : 
    :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :  +- CometBroadcastExchange\n                           :     :     +- CometProject\n                           :     :        +- CometBroadcastHashJoin\n                           :     :           :- CometFilter\n                           :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :           +- CometBroadcastExchange\n                           :     :              +- CometBroadcastHashJoin\n                           :     :                 :- CometHashAggregate\n                           :     :                 :  +- CometExchange\n                           :     :                 :     +- CometHashAggregate\n                           :     :                 :        +- CometProject\n                           :     :                 :           +- CometBroadcastHashJoin\n                           :     :                 :              :- CometProject\n                           :     :                 :              :  +- CometBroadcastHashJoin\n                           :     :                 :              :     :- CometFilter\n                           :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :                 :              :     :        +- SubqueryBroadcast\n                           :     :                 :              :     :           +- BroadcastExchange\n                           :     :                 :              :     :              +- CometNativeColumnarToRow\n                           :     :                 :              :     :                 +- CometProject\n                           :     :                 :              :     :                    +- CometFilter\n                           :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              :     +- CometBroadcastExchange\n                           :     :                 :              :        +- CometBroadcastHashJoin\n                           :     :                 :              :           :- CometFilter\n                           :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :           +- CometBroadcastExchange\n                           :     :                 :              :              +- CometProject\n                           :     :                 :              :                 +- CometBroadcastHashJoin\n                           :     :                 :              :                    :- CometProject\n                           :     :                 :              :                    :  +- CometBroadcastHashJoin\n                           :     :                 :              :                    :     :- CometFilter\n                           :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.catalog_sales\n                           :     :                 :              :                    :     :        +- ReusedSubquery\n                           :     :                 :              :                    :     +- CometBroadcastExchange\n                           :     :                 :              :                    :        +- CometFilter\n                           :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :                    +- CometBroadcastExchange\n                           :     :                 :              :                       +- CometProject\n                           :     :                 :              :                          +- CometFilter\n                           :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              +- CometBroadcastExchange\n                           :     :                 :                 +- CometProject\n                           :     :                 :                    +- CometFilter\n                           :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 +- CometBroadcastExchange\n                           :     :                    +- CometProject\n                           :     :                       +- CometBroadcastHashJoin\n                           :     :                          :- CometProject\n                           :     :                          :  +- CometBroadcastHashJoin\n                           :     :                          :     :- CometFilter\n                           :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :     :                          :     :        +- ReusedSubquery\n                           :     :                          :     +- CometBroadcastExchange\n                           :     :                          :        +- CometFilter\n                           :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                          +- CometBroadcastExchange\n                           :     :                             +- CometProject\n                           :     :                                +- CometFilter\n                           :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n                           :        +- CometBroadcastHashJoin\n                           :           :- CometFilter\n                           :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :           +- CometBroadcastExchange\n                           :              +- CometProject\n                           :                 +- CometBroadcastHashJoin\n                           :                    :- CometFilter\n                           :                    :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n                           :                    +- CometBroadcastExchange\n                           :                       +- CometBroadcastHashJoin\n                           :                          :- CometHashAggregate\n                           :                          :  +- CometExchange\n                           :                          :     +- CometHashAggregate\n                           :                          :        +- CometProject\n                           :                          :           +- CometBroadcastHashJoin\n                           :                          :              :- CometProject\n                           :                          :              :  +- CometBroadcastHashJoin\n                           :                          :              :     :- CometFilter\n                           :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                          :              :     :        +- SubqueryBroadcast\n                           :                          :              :     :           +- BroadcastExchange\n                           :                          :              :     :              +- CometNativeColumnarToRow\n                           :                          :              :     :                 +- CometProject\n                           :                          :              :     :                    +- CometFilter\n                           :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              :     +- CometBroadcastExchange\n                           :                          :              :        +- CometBroadcastHashJoin\n                           :                          :              :           :- CometFilter\n                           :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :           +- CometBroadcastExchange\n                           :                          :              :              +- CometProject\n                           :                          :              :                 +- CometBroadcastHashJoin\n                           :                          :              :                    :- CometProject\n                           :                          :              :                    :  +- CometBroadcastHashJoin\n                           :                          :              :                    :     :- CometFilter\n                           :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :                          :              :                    :     :        +- ReusedSubquery\n                           :                          :              :                    :     +- CometBroadcastExchange\n                           :                          :              :                    :        +- CometFilter\n                           :                          :              :                    :           
+- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :                    +- CometBroadcastExchange\n                           :                          :              :                       +- CometProject\n                           :                          :              :                          +- CometFilter\n                           :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              +- CometBroadcastExchange\n                           :                          :                 +- CometProject\n                           :                          :                    +- CometFilter\n                           :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          +- CometBroadcastExchange\n                           :                             +- CometProject\n                           :                                +- CometBroadcastHashJoin\n                           :                                   :- CometProject\n                           :                                   :  +- CometBroadcastHashJoin\n                           :                                   :     :- CometFilter\n                           :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                                   :     :        +- ReusedSubquery\n                           :                                   :     +- CometBroadcastExchange\n                           :                                   :        +- CometFilter\n                           :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                                   +- CometBroadcastExchange\n                           :                                      +- CometProject\n                           :                                         +- CometFilter\n                           :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    :  +- ReusedSubquery\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 298 out of 327 eligible operators (91%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q15.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Filter\n                  :     :     :  +- ColumnarToRow\n                  :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           +- SubqueryBroadcast\n                  :     :     :              +- BroadcastExchange\n                  :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :                    +- CometProject\n                  :     :     :                       +- CometFilter\n                  :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 28 eligible operators (42%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q15.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometFilter\n                  :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :        +- SubqueryBroadcast\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 26 out of 28 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q16.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.call_center\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q16.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q17.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- Filter\n                  :     :     :     :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :     :        +- Filter\n                  :     :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                             +- CometProject\n                  :     :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- Filter\n                  :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :                    +- ReusedSubquery\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n            
      :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 22 out of 57 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q17.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :     :     :     :           +- BroadcastExchange\n                  :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                 +- CometProject\n                  :     :     :     :     :     :     :                    +- CometFilter\n                  :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :                 +- ReusedSubquery\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 52 out of 57 eligible operators (91%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q18.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Project\n                     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :- Project\n                     :     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :     :- Filter\n                     :     :     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :     :     :           +- SubqueryBroadcast\n                     :     :     :     :     :     :              +- BroadcastExchange\n                     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :     :     :     :                    +- CometProject\n                     :     :     :     :     :     :                       +- CometFilter\n                     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :     :           +- CometProject\n                     :     :     :     :     :              +- CometFilter\n                     :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :           +- CometProject\n                     :     :     :     :              +- CometFilter\n                     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- 
CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 21 out of 47 eligible operators (44%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q18.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :- CometProject\n                     :     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :     :- CometFilter\n                     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :     :     :     :     :     :        +- SubqueryBroadcast\n                     :     :     :     :     :     :           +- BroadcastExchange\n                     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :     :     :     :                 +- CometProject\n                     :     :     :     :     :     :                    +- CometFilter\n                     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :     :        +- CometProject\n                     :     :     :     :     :           +- CometFilter\n                     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :        +- CometProject\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 45 out of 47 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q19.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometFilter\n                  :     :     :     :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometFilter\n                  :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometNativeScan parquet spark_catalog.default.item\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometNativeScan parquet spark_catalog.default.customer\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 35 out of 35 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q19.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometFilter\n                  :     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometFilter\n                  :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 35 out of 35 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q2.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometHashAggregate\n            :     :  +- CometExchange\n            :     :     +- CometHashAggregate\n            :     :        +- CometProject\n            :     :           +- CometBroadcastHashJoin\n            :     :              :- CometUnion\n            :     :              :  :- CometProject\n            :     :              :  :  +- CometNativeScan parquet spark_catalog.default.web_sales\n            :     :              :  +- CometProject\n            :     :              :     +- CometNativeScan parquet spark_catalog.default.catalog_sales\n            :     :              +- CometBroadcastExchange\n            :     :                 +- CometProject\n            :     :                    +- CometFilter\n            :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometHashAggregate\n                     :  +- CometExchange\n                     :     +- CometHashAggregate\n                     :        +- CometProject\n                     :           +- CometBroadcastHashJoin\n                     :              :- CometUnion\n                     :              :  :- CometProject\n                     :              :  :  +- CometNativeScan parquet spark_catalog.default.web_sales\n                     :              :  +- CometProject\n                     :              :     +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                     :              +- CometBroadcastExchange\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 45 out of 45 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q2.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometHashAggregate\n            :     :  +- CometExchange\n            :     :     +- CometHashAggregate\n            :     :        +- CometProject\n            :     :           +- CometBroadcastHashJoin\n            :     :              :- CometUnion\n            :     :              :  :- CometProject\n            :     :              :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n            :     :              :  +- CometProject\n            :     :              :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :     :              +- CometBroadcastExchange\n            :     :                 +- CometProject\n            :     :                    +- CometFilter\n            :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometHashAggregate\n                     :  +- CometExchange\n                     :     +- CometHashAggregate\n                     :        +- CometProject\n                     :           +- CometBroadcastHashJoin\n                     :              :- CometUnion\n                     :              :  :- CometProject\n                     :              :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :              :  +- CometProject\n                     :              :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :              +- CometBroadcastExchange\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 45 out of 45 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q20.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q20.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q21.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Filter\n                     :     :     :  +- ColumnarToRow\n                     :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :           +- SubqueryBroadcast\n                     :     :     :              +- BroadcastExchange\n                     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :                    +- CometFilter\n                     :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometFilter\n                     :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 10 out of 27 eligible operators (37%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q21.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometFilter\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometFilter\n                     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                     :     :     :        +- SubqueryBroadcast\n                     :     :     :           +- BroadcastExchange\n                     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :                 +- CometFilter\n                     :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometFilter\n                     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 25 out of 27 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q22.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Filter\n                     :     :     :  +- ColumnarToRow\n                     :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :           +- SubqueryBroadcast\n                     :     :     :              +- BroadcastExchange\n                     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :                    +- CometProject\n                     :     :     :                       +- CometFilter\n                     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.warehouse\n\nComet accelerated 12 out of 29 eligible operators (41%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q22.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometFilter\n                     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                     :     :     :        +- SubqueryBroadcast\n                     :     :     :           +- BroadcastExchange\n                     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :                 +- CometProject\n                     :     :     :                    +- CometFilter\n                     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n\nComet accelerated 27 out of 29 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q23a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometUnion\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometSortMergeJoin\n            :     :     :- CometSort\n            :     :     :  +- CometColumnarExchange\n            :     :     :     +- Project\n            :     :     :        +- BroadcastHashJoin\n            :     :     :           :- ColumnarToRow\n            :     :     :           :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :     :           :        +- SubqueryBroadcast\n            :     :     :           :           +- BroadcastExchange\n            :     :     :           :              +- CometNativeColumnarToRow\n            :     :     :           :                 +- CometProject\n            :     :     :           :                    +- CometFilter\n            :     :     :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :     :           +- BroadcastExchange\n            :     :     :              +- Project\n            :     :     :                 +- Filter\n            :     :     :                    +- HashAggregate\n            :     :     :                       +- CometNativeColumnarToRow\n            :     :     :                          +- CometColumnarExchange\n            :     :     :                             +- HashAggregate\n            :     :     :                                +- Project\n            :     :     :                                   +- BroadcastHashJoin\n            :     :     :                                      :- Project\n            :     :     :                                      :  +- BroadcastHashJoin\n            :     :     :                                      :     :- Filter\n            :     :     :                                      :     :  +- ColumnarToRow\n            :     :     :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :     :                                      :     :           +- SubqueryBroadcast\n            :     :     :                                      :     :              +- BroadcastExchange\n            :     :     :                                      :     :                 +- CometNativeColumnarToRow\n            :     :     :                                      :     :                    +- CometProject\n            :     :     :                                      :     :                       +- CometFilter\n            :     :     :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :     :                                      :     +- BroadcastExchange\n            :     :     :                                      :        +- CometNativeColumnarToRow\n            :     :     :                                      :           +- CometProject\n            :     :     :                                      :              +- CometFilter\n            :     :     :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :    
 :                                      +- BroadcastExchange\n            :     :     :                                         +- CometNativeColumnarToRow\n            :     :     :                                            +- CometFilter\n            :     :     :                                               +- CometNativeScan parquet spark_catalog.default.item\n            :     :     +- CometSort\n            :     :        +- CometProject\n            :     :           +- CometFilter\n            :     :              :  +- Subquery\n            :     :              :     +- HashAggregate\n            :     :              :        +- CometNativeColumnarToRow\n            :     :              :           +- CometColumnarExchange\n            :     :              :              +- HashAggregate\n            :     :              :                 +- HashAggregate\n            :     :              :                    +- CometNativeColumnarToRow\n            :     :              :                       +- CometColumnarExchange\n            :     :              :                          +- HashAggregate\n            :     :              :                             +- Project\n            :     :              :                                +- BroadcastHashJoin\n            :     :              :                                   :- Project\n            :     :              :                                   :  +- BroadcastHashJoin\n            :     :              :                                   :     :- Filter\n            :     :              :                                   :     :  +- ColumnarToRow\n            :     :              :                                   :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :              :                                   :     :           +- SubqueryBroadcast\n            :     :              :                                   :     :              +- BroadcastExchange\n            :     :              :                                   :     :                 +- CometNativeColumnarToRow\n            :     :              :                                   :     :                    +- CometProject\n            :     :              :                                   :     :                       +- CometFilter\n            :     :              :                                   :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :              :                                   :     +- BroadcastExchange\n            :     :              :                                   :        +- CometNativeColumnarToRow\n            :     :              :                                   :           +- CometFilter\n            :     :              :                                   :              +- CometNativeScan parquet spark_catalog.default.customer\n            :     :              :                                   +- BroadcastExchange\n            :     :              :                                      +- CometNativeColumnarToRow\n            :     :              :                                         +- CometProject\n            :     :              :                                            +- CometFilter\n            :     :              :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n        
    :     :              +- CometHashAggregate\n            :     :                 +- CometExchange\n            :     :                    +- CometHashAggregate\n            :     :                       +- CometProject\n            :     :                          +- CometBroadcastHashJoin\n            :     :                             :- CometProject\n            :     :                             :  +- CometFilter\n            :     :                             :     +- CometNativeScan parquet spark_catalog.default.store_sales\n            :     :                             +- CometBroadcastExchange\n            :     :                                +- CometFilter\n            :     :                                   +- CometNativeScan parquet spark_catalog.default.customer\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometColumnarExchange\n                  :     :     +- Project\n                  :     :        +- BroadcastHashJoin\n                  :     :           :- ColumnarToRow\n                  :     :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :           :        +- ReusedSubquery\n                  :     :           +- BroadcastExchange\n                  :     :              +- Project\n                  :     :                 +- Filter\n                  :     :                    +- HashAggregate\n                  :     :                       +- CometNativeColumnarToRow\n                  :     :                          +- CometColumnarExchange\n                  :     :                             +- HashAggregate\n                  :     :                                +- Project\n                  :     :                                   +- BroadcastHashJoin\n                  :     :                                      :- Project\n                  :     :                                      :  +- BroadcastHashJoin\n                  :     :                                      :     :- Filter\n                  :     :                                      :     :  +- ColumnarToRow\n                  :     :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                                      :     :           +- SubqueryBroadcast\n                  :     :                                      :     :              +- BroadcastExchange\n                  :     :                                      :     :                 +- CometNativeColumnarToRow\n                  :     :                                      :     :                    +- CometProject\n                  :     :                                      :     :                       +- CometFilter\n                  :     :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                                      :     +- BroadcastExchange\n  
                :     :                                      :        +- CometNativeColumnarToRow\n                  :     :                                      :           +- CometProject\n                  :     :                                      :              +- CometFilter\n                  :     :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                                      +- BroadcastExchange\n                  :     :                                         +- CometNativeColumnarToRow\n                  :     :                                            +- CometFilter\n                  :     :                                               +- CometNativeScan parquet spark_catalog.default.item\n                  :     +- CometSort\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              :  +- ReusedSubquery\n                  :              +- CometHashAggregate\n                  :                 +- CometExchange\n                  :                    +- CometHashAggregate\n                  :                       +- CometProject\n                  :                          +- CometBroadcastHashJoin\n                  :                             :- CometProject\n                  :                             :  +- CometFilter\n                  :                             :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :                             +- CometBroadcastExchange\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.customer\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 83 out of 138 eligible operators (60%). Final plan contains 20 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q23a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometUnion\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometSortMergeJoin\n            :     :     :- CometSort\n            :     :     :  +- CometExchange\n            :     :     :     +- CometProject\n            :     :     :        +- CometBroadcastHashJoin\n            :     :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :     :     :           :     +- SubqueryBroadcast\n            :     :     :           :        +- BroadcastExchange\n            :     :     :           :           +- CometNativeColumnarToRow\n            :     :     :           :              +- CometProject\n            :     :     :           :                 +- CometFilter\n            :     :     :           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :     :           +- CometBroadcastExchange\n            :     :     :              +- CometProject\n            :     :     :                 +- CometFilter\n            :     :     :                    +- CometHashAggregate\n            :     :     :                       +- CometExchange\n            :     :     :                          +- CometHashAggregate\n            :     :     :                             +- CometProject\n            :     :     :                                +- CometBroadcastHashJoin\n            :     :     :                                   :- CometProject\n            :     :     :                                   :  +- CometBroadcastHashJoin\n            :     :     :                                   :     :- CometFilter\n            :     :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :     :                                   :     :        +- SubqueryBroadcast\n            :     :     :                                   :     :           +- BroadcastExchange\n            :     :     :                                   :     :              +- CometNativeColumnarToRow\n            :     :     :                                   :     :                 +- CometProject\n            :     :     :                                   :     :                    +- CometFilter\n            :     :     :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :     :                                   :     +- CometBroadcastExchange\n            :     :     :                                   :        +- CometProject\n            :     :     :                                   :           +- CometFilter\n            :     :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :     :                                   +- CometBroadcastExchange\n            :     :     :                                      +- CometFilter\n            :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n            :     :     +- CometSort\n            :     :        +- CometProject\n            :     :           +- CometFilter\n          
  :     :              :  +- Subquery\n            :     :              :     +- CometNativeColumnarToRow\n            :     :              :        +- CometHashAggregate\n            :     :              :           +- CometExchange\n            :     :              :              +- CometHashAggregate\n            :     :              :                 +- CometHashAggregate\n            :     :              :                    +- CometExchange\n            :     :              :                       +- CometHashAggregate\n            :     :              :                          +- CometProject\n            :     :              :                             +- CometBroadcastHashJoin\n            :     :              :                                :- CometProject\n            :     :              :                                :  +- CometBroadcastHashJoin\n            :     :              :                                :     :- CometFilter\n            :     :              :                                :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :              :                                :     :        +- SubqueryBroadcast\n            :     :              :                                :     :           +- BroadcastExchange\n            :     :              :                                :     :              +- CometNativeColumnarToRow\n            :     :              :                                :     :                 +- CometProject\n            :     :              :                                :     :                    +- CometFilter\n            :     :              :                                :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :              :                                :     +- CometBroadcastExchange\n            :     :              :                                :        +- CometFilter\n            :     :              :                                :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :     :              :                                +- CometBroadcastExchange\n            :     :              :                                   +- CometProject\n            :     :              :                                      +- CometFilter\n            :     :              :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :              +- CometHashAggregate\n            :     :                 +- CometExchange\n            :     :                    +- CometHashAggregate\n            :     :                       +- CometProject\n            :     :                          +- CometBroadcastHashJoin\n            :     :                             :- CometProject\n            :     :                             :  +- CometFilter\n            :     :                             :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :                             +- CometBroadcastExchange\n            :     :                                +- CometFilter\n            :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- 
CometFilter\n            :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometExchange\n                  :     :     +- CometProject\n                  :     :        +- CometBroadcastHashJoin\n                  :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :           :     +- ReusedSubquery\n                  :     :           +- CometBroadcastExchange\n                  :     :              +- CometProject\n                  :     :                 +- CometFilter\n                  :     :                    +- CometHashAggregate\n                  :     :                       +- CometExchange\n                  :     :                          +- CometHashAggregate\n                  :     :                             +- CometProject\n                  :     :                                +- CometBroadcastHashJoin\n                  :     :                                   :- CometProject\n                  :     :                                   :  +- CometBroadcastHashJoin\n                  :     :                                   :     :- CometFilter\n                  :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :                                   :     :        +- SubqueryBroadcast\n                  :     :                                   :     :           +- BroadcastExchange\n                  :     :                                   :     :              +- CometNativeColumnarToRow\n                  :     :                                   :     :                 +- CometProject\n                  :     :                                   :     :                    +- CometFilter\n                  :     :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                                   :     +- CometBroadcastExchange\n                  :     :                                   :        +- CometProject\n                  :     :                                   :           +- CometFilter\n                  :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                                   +- CometBroadcastExchange\n                  :     :                                      +- CometFilter\n                  :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :     +- CometSort\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              :  +- ReusedSubquery\n                  :              +- CometHashAggregate\n                  :                 +- CometExchange\n                  :                    +- CometHashAggregate\n                  :                       +- CometProject\n                  :                          +- CometBroadcastHashJoin\n                  :                             :- CometProject\n                  :             
                :  +- CometFilter\n                  :                             :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                             +- CometBroadcastExchange\n                  :                                +- CometFilter\n                  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 127 out of 138 eligible operators (92%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q23b.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometSortMergeJoin\n      :              :     :  :- CometSort\n      :              :     :  :  +- CometColumnarExchange\n      :              :     :  :     +- Project\n      :              :     :  :        +- BroadcastHashJoin\n      :              :     :  :           :- Filter\n      :              :     :  :           :  +- ColumnarToRow\n      :              :     :  :           :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :              :     :  :           :           +- SubqueryBroadcast\n      :              :     :  :           :              +- BroadcastExchange\n      :              :     :  :           :                 +- CometNativeColumnarToRow\n      :              :     :  :           :                    +- CometProject\n      :              :     :  :           :                       +- CometFilter\n      :              :     :  :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :  :           +- BroadcastExchange\n      :              :     :  :              +- Project\n      :              :     :  :                 +- Filter\n      :              :     :  :                    +- HashAggregate\n      :              :     :  :                       +- CometNativeColumnarToRow\n      :              :     :  :                          +- CometColumnarExchange\n      :              :     :  :                             +- HashAggregate\n      :              :     :  :                                +- Project\n      :              :     :  :                                   +- BroadcastHashJoin\n      :              :     :  :                                      :- Project\n      :              :     :  :                                      :  +- BroadcastHashJoin\n      :              :     :  :                                      :     :- Filter\n      :              :     :  :                                      :     :  +- ColumnarToRow\n      :              :     :  :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :              :     :  :                                      :     :           +- SubqueryBroadcast\n      :              :     :  :                                      :     :              +- BroadcastExchange\n      :              :     :  :                                      :     :                 +- CometNativeColumnarToRow\n      :              :     :  :                                      :     :                    +- CometProject\n      :              :     :  :                                      :     :                       +- CometFilter\n      :              :     :  :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :  :                                      :     +- BroadcastExchange\n      :              :     :  :                                   
   :        +- CometNativeColumnarToRow\n      :              :     :  :                                      :           +- CometProject\n      :              :     :  :                                      :              +- CometFilter\n      :              :     :  :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :  :                                      +- BroadcastExchange\n      :              :     :  :                                         +- CometNativeColumnarToRow\n      :              :     :  :                                            +- CometFilter\n      :              :     :  :                                               +- CometNativeScan parquet spark_catalog.default.item\n      :              :     :  +- CometSort\n      :              :     :     +- CometProject\n      :              :     :        +- CometFilter\n      :              :     :           :  +- Subquery\n      :              :     :           :     +- HashAggregate\n      :              :     :           :        +- CometNativeColumnarToRow\n      :              :     :           :           +- CometColumnarExchange\n      :              :     :           :              +- HashAggregate\n      :              :     :           :                 +- HashAggregate\n      :              :     :           :                    +- CometNativeColumnarToRow\n      :              :     :           :                       +- CometColumnarExchange\n      :              :     :           :                          +- HashAggregate\n      :              :     :           :                             +- Project\n      :              :     :           :                                +- BroadcastHashJoin\n      :              :     :           :                                   :- Project\n      :              :     :           :                                   :  +- BroadcastHashJoin\n      :              :     :           :                                   :     :- Filter\n      :              :     :           :                                   :     :  +- ColumnarToRow\n      :              :     :           :                                   :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :              :     :           :                                   :     :           +- SubqueryBroadcast\n      :              :     :           :                                   :     :              +- BroadcastExchange\n      :              :     :           :                                   :     :                 +- CometNativeColumnarToRow\n      :              :     :           :                                   :     :                    +- CometProject\n      :              :     :           :                                   :     :                       +- CometFilter\n      :              :     :           :                                   :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :           :                                   :     +- BroadcastExchange\n      :              :     :           :                                   :        +- CometNativeColumnarToRow\n      :              :     :           :                                   :           +- CometFilter\n      :              :     :           :                             
      :              +- CometNativeScan parquet spark_catalog.default.customer\n      :              :     :           :                                   +- BroadcastExchange\n      :              :     :           :                                      +- CometNativeColumnarToRow\n      :              :     :           :                                         +- CometProject\n      :              :     :           :                                            +- CometFilter\n      :              :     :           :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :           +- CometHashAggregate\n      :              :     :              +- CometExchange\n      :              :     :                 +- CometHashAggregate\n      :              :     :                    +- CometProject\n      :              :     :                       +- CometBroadcastHashJoin\n      :              :     :                          :- CometProject\n      :              :     :                          :  +- CometFilter\n      :              :     :                          :     +- CometNativeScan parquet spark_catalog.default.store_sales\n      :              :     :                          +- CometBroadcastExchange\n      :              :     :                             +- CometFilter\n      :              :     :                                +- CometNativeScan parquet spark_catalog.default.customer\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometSortMergeJoin\n      :              :              :- CometSort\n      :              :              :  +- CometExchange\n      :              :              :     +- CometFilter\n      :              :              :        +- CometNativeScan parquet spark_catalog.default.customer\n      :              :              +- CometSort\n      :              :                 +- CometProject\n      :              :                    +- CometFilter\n      :              :                       :  +- ReusedSubquery\n      :              :                       +- CometHashAggregate\n      :              :                          +- CometExchange\n      :              :                             +- CometHashAggregate\n      :              :                                +- CometProject\n      :              :                                   +- CometBroadcastHashJoin\n      :              :                                      :- CometProject\n      :              :                                      :  +- CometFilter\n      :              :                                      :     +- CometNativeScan parquet spark_catalog.default.store_sales\n      :              :                                      +- CometBroadcastExchange\n      :              :                                         +- CometFilter\n      :              :                                            +- CometNativeScan parquet spark_catalog.default.customer\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- 
CometBroadcastHashJoin\n                     :     :- CometSortMergeJoin\n                     :     :  :- CometSort\n                     :     :  :  +- CometColumnarExchange\n                     :     :  :     +- Project\n                     :     :  :        +- BroadcastHashJoin\n                     :     :  :           :- Filter\n                     :     :  :           :  +- ColumnarToRow\n                     :     :  :           :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :  :           :           +- ReusedSubquery\n                     :     :  :           +- BroadcastExchange\n                     :     :  :              +- Project\n                     :     :  :                 +- Filter\n                     :     :  :                    +- HashAggregate\n                     :     :  :                       +- CometNativeColumnarToRow\n                     :     :  :                          +- CometColumnarExchange\n                     :     :  :                             +- HashAggregate\n                     :     :  :                                +- Project\n                     :     :  :                                   +- BroadcastHashJoin\n                     :     :  :                                      :- Project\n                     :     :  :                                      :  +- BroadcastHashJoin\n                     :     :  :                                      :     :- Filter\n                     :     :  :                                      :     :  +- ColumnarToRow\n                     :     :  :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :  :                                      :     :           +- SubqueryBroadcast\n                     :     :  :                                      :     :              +- BroadcastExchange\n                     :     :  :                                      :     :                 +- CometNativeColumnarToRow\n                     :     :  :                                      :     :                    +- CometProject\n                     :     :  :                                      :     :                       +- CometFilter\n                     :     :  :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :  :                                      :     +- BroadcastExchange\n                     :     :  :                                      :        +- CometNativeColumnarToRow\n                     :     :  :                                      :           +- CometProject\n                     :     :  :                                      :              +- CometFilter\n                     :     :  :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :  :                                      +- BroadcastExchange\n                     :     :  :                                         +- CometNativeColumnarToRow\n                     :     :  :                                            +- CometFilter\n                     :     :  :                                               +- CometNativeScan parquet 
spark_catalog.default.item\n                     :     :  +- CometSort\n                     :     :     +- CometProject\n                     :     :        +- CometFilter\n                     :     :           :  +- ReusedSubquery\n                     :     :           +- CometHashAggregate\n                     :     :              +- CometExchange\n                     :     :                 +- CometHashAggregate\n                     :     :                    +- CometProject\n                     :     :                       +- CometBroadcastHashJoin\n                     :     :                          :- CometProject\n                     :     :                          :  +- CometFilter\n                     :     :                          :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                     :     :                          +- CometBroadcastExchange\n                     :     :                             +- CometFilter\n                     :     :                                +- CometNativeScan parquet spark_catalog.default.customer\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometSortMergeJoin\n                     :              :- CometSort\n                     :              :  +- CometExchange\n                     :              :     +- CometFilter\n                     :              :        +- CometNativeScan parquet spark_catalog.default.customer\n                     :              +- CometSort\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       :  +- ReusedSubquery\n                     :                       +- CometHashAggregate\n                     :                          +- CometExchange\n                     :                             +- CometHashAggregate\n                     :                                +- CometProject\n                     :                                   +- CometBroadcastHashJoin\n                     :                                      :- CometProject\n                     :                                      :  +- CometFilter\n                     :                                      :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                     :                                      +- CometBroadcastExchange\n                     :                                         +- CometFilter\n                     :                                            +- CometNativeScan parquet spark_catalog.default.customer\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 131 out of 190 eligible operators (68%). Final plan contains 20 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q23b.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometSortMergeJoin\n      :              :     :  :- CometSort\n      :              :     :  :  +- CometExchange\n      :              :     :  :     +- CometProject\n      :              :     :  :        +- CometBroadcastHashJoin\n      :              :     :  :           :- CometFilter\n      :              :     :  :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :  :           :        +- SubqueryBroadcast\n      :              :     :  :           :           +- BroadcastExchange\n      :              :     :  :           :              +- CometNativeColumnarToRow\n      :              :     :  :           :                 +- CometProject\n      :              :     :  :           :                    +- CometFilter\n      :              :     :  :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :  :           +- CometBroadcastExchange\n      :              :     :  :              +- CometProject\n      :              :     :  :                 +- CometFilter\n      :              :     :  :                    +- CometHashAggregate\n      :              :     :  :                       +- CometExchange\n      :              :     :  :                          +- CometHashAggregate\n      :              :     :  :                             +- CometProject\n      :              :     :  :                                +- CometBroadcastHashJoin\n      :              :     :  :                                   :- CometProject\n      :              :     :  :                                   :  +- CometBroadcastHashJoin\n      :              :     :  :                                   :     :- CometFilter\n      :              :     :  :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :  :                                   :     :        +- SubqueryBroadcast\n      :              :     :  :                                   :     :           +- BroadcastExchange\n      :              :     :  :                                   :     :              +- CometNativeColumnarToRow\n      :              :     :  :                                   :     :                 +- CometProject\n      :              :     :  :                                   :     :                    +- CometFilter\n      :              :     :  :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :  :                                   :     +- CometBroadcastExchange\n      :              :     :  :                                   :        +- CometProject\n      :              :     :  :                                   :           +- CometFilter\n      :              :     :  :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :  :                          
         +- CometBroadcastExchange\n      :              :     :  :                                      +- CometFilter\n      :              :     :  :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :              :     :  +- CometSort\n      :              :     :     +- CometProject\n      :              :     :        +- CometFilter\n      :              :     :           :  +- Subquery\n      :              :     :           :     +- CometNativeColumnarToRow\n      :              :     :           :        +- CometHashAggregate\n      :              :     :           :           +- CometExchange\n      :              :     :           :              +- CometHashAggregate\n      :              :     :           :                 +- CometHashAggregate\n      :              :     :           :                    +- CometExchange\n      :              :     :           :                       +- CometHashAggregate\n      :              :     :           :                          +- CometProject\n      :              :     :           :                             +- CometBroadcastHashJoin\n      :              :     :           :                                :- CometProject\n      :              :     :           :                                :  +- CometBroadcastHashJoin\n      :              :     :           :                                :     :- CometFilter\n      :              :     :           :                                :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :           :                                :     :        +- SubqueryBroadcast\n      :              :     :           :                                :     :           +- BroadcastExchange\n      :              :     :           :                                :     :              +- CometNativeColumnarToRow\n      :              :     :           :                                :     :                 +- CometProject\n      :              :     :           :                                :     :                    +- CometFilter\n      :              :     :           :                                :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :           :                                :     +- CometBroadcastExchange\n      :              :     :           :                                :        +- CometFilter\n      :              :     :           :                                :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :           :                                +- CometBroadcastExchange\n      :              :     :           :                                   +- CometProject\n      :              :     :           :                                      +- CometFilter\n      :              :     :           :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :           +- CometHashAggregate\n      :              :     :              +- CometExchange\n      :              :     :                 +- CometHashAggregate\n      :              :     :                    +- CometProject\n      :              :     :                       +- CometBroadcastHashJoin\n      :              :     :           
               :- CometProject\n      :              :     :                          :  +- CometFilter\n      :              :     :                          :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :                          +- CometBroadcastExchange\n      :              :     :                             +- CometFilter\n      :              :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometSortMergeJoin\n      :              :              :- CometSort\n      :              :              :  +- CometExchange\n      :              :              :     +- CometFilter\n      :              :              :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :              +- CometSort\n      :              :                 +- CometProject\n      :              :                    +- CometFilter\n      :              :                       :  +- ReusedSubquery\n      :              :                       +- CometHashAggregate\n      :              :                          +- CometExchange\n      :              :                             +- CometHashAggregate\n      :              :                                +- CometProject\n      :              :                                   +- CometBroadcastHashJoin\n      :              :                                      :- CometProject\n      :              :                                      :  +- CometFilter\n      :              :                                      :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :                                      +- CometBroadcastExchange\n      :              :                                         +- CometFilter\n      :              :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometSortMergeJoin\n                     :     :  :- CometSort\n                     :     :  :  +- CometExchange\n                     :     :  :     +- CometProject\n                     :     :  :        +- CometBroadcastHashJoin\n                     :     :  :           :- CometFilter\n                     :     :  :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :     :  :           :        +- ReusedSubquery\n                     :     :  :           +- CometBroadcastExchange\n                     :     :  :              +- CometProject\n                     :     :  :                 +- CometFilter\n                     :     :  :                    +- CometHashAggregate\n                     :     :  :                       +- CometExchange\n                  
   :     :  :                          +- CometHashAggregate\n                     :     :  :                             +- CometProject\n                     :     :  :                                +- CometBroadcastHashJoin\n                     :     :  :                                   :- CometProject\n                     :     :  :                                   :  +- CometBroadcastHashJoin\n                     :     :  :                                   :     :- CometFilter\n                     :     :  :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :  :                                   :     :        +- SubqueryBroadcast\n                     :     :  :                                   :     :           +- BroadcastExchange\n                     :     :  :                                   :     :              +- CometNativeColumnarToRow\n                     :     :  :                                   :     :                 +- CometProject\n                     :     :  :                                   :     :                    +- CometFilter\n                     :     :  :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :  :                                   :     +- CometBroadcastExchange\n                     :     :  :                                   :        +- CometProject\n                     :     :  :                                   :           +- CometFilter\n                     :     :  :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :  :                                   +- CometBroadcastExchange\n                     :     :  :                                      +- CometFilter\n                     :     :  :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     :     :  +- CometSort\n                     :     :     +- CometProject\n                     :     :        +- CometFilter\n                     :     :           :  +- ReusedSubquery\n                     :     :           +- CometHashAggregate\n                     :     :              +- CometExchange\n                     :     :                 +- CometHashAggregate\n                     :     :                    +- CometProject\n                     :     :                       +- CometBroadcastHashJoin\n                     :     :                          :- CometProject\n                     :     :                          :  +- CometFilter\n                     :     :                          :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :                          +- CometBroadcastExchange\n                     :     :                             +- CometFilter\n                     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometSortMergeJoin\n                     :              :- CometSort\n                     :              :  +- CometExchange\n                     :              :     +- 
CometFilter\n                     :              :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :              +- CometSort\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       :  +- ReusedSubquery\n                     :                       +- CometHashAggregate\n                     :                          +- CometExchange\n                     :                             +- CometHashAggregate\n                     :                                +- CometProject\n                     :                                   +- CometBroadcastHashJoin\n                     :                                      :- CometProject\n                     :                                      :  +- CometFilter\n                     :                                      :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :                                      +- CometBroadcastExchange\n                     :                                         +- CometFilter\n                     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 177 out of 190 eligible operators (93%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q24a.native_datafusion/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometNativeScan parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometNativeScan parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                                               +- CometNativeScan parquet 
spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometNativeScan parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q24a.native_iceberg_compat/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                
                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). 
Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q24b.native_datafusion/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometNativeScan parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometNativeScan parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                                               +- CometNativeScan parquet 
spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometNativeScan parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q24b.native_iceberg_compat/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                
                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). 
Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q25.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- Filter\n                  :     :     :     :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :     :        +- Filter\n                  :     :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                             +- CometProject\n                  :     :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- Filter\n                  :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :                    +- ReusedSubquery\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n            
      :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 22 out of 57 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q25.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :     :     :     :           +- BroadcastExchange\n                  :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                 +- CometProject\n                  :     :     :     :     :     :     :                    +- CometFilter\n                  :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :                 +- ReusedSubquery\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 52 out of 57 eligible operators (91%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q26.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Filter\n                  :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :              +- BroadcastExchange\n                  :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :                    +- CometProject\n                  :     :     :     :                       +- CometFilter\n                  :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.item\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 16 out of 35 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q26.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :           +- BroadcastExchange\n                  :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :                 +- CometProject\n                  :     :     :     :                    +- CometFilter\n                  :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 33 out of 35 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q27.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Filter\n                     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :           +- SubqueryBroadcast\n                     :     :     :     :              +- BroadcastExchange\n                     :     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :     :                    +- CometProject\n                     :     :     :     :                       +- CometFilter\n                     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometProject\n                     :     :     :              +- CometFilter\n                     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.store\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 16 out of 36 eligible operators (44%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q27.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometFilter\n                     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :     :     :        +- SubqueryBroadcast\n                     :     :     :     :           +- BroadcastExchange\n                     :     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :     :                 +- CometProject\n                     :     :     :     :                    +- CometFilter\n                     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometProject\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 34 out of 36 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q28.native_datafusion/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :     +- CometColumnarExchange\n:  :  :  :  :        +- HashAggregate\n:  :  :  :  :           +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :  :              +- CometNativeColumnarToRow\n:  :  :  :  :                 +- CometExchange\n:  :  :  :  :                    +- CometHashAggregate\n:  :  :  :  :                       +- CometProject\n:  :  :  :  :                          +- CometFilter\n:  :  :  :  :                             +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometColumnarExchange\n:  :  :  :              +- HashAggregate\n:  :  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :                    +- CometNativeColumnarToRow\n:  :  :  :                       +- CometExchange\n:  :  :  :                          +- CometHashAggregate\n:  :  :  :                             +- CometProject\n:  :  :  :                                +- CometFilter\n:  :  :  :                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometColumnarExchange\n:  :  :              +- HashAggregate\n:  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :                    +- CometNativeColumnarToRow\n:  :  :                       +- CometExchange\n:  :  :                          +- CometHashAggregate\n:  :  :                             +- CometProject\n:  :  :                                +- CometFilter\n:  :  :                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometColumnarExchange\n:  :              +- HashAggregate\n:  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :                    +- CometNativeColumnarToRow\n:  :                       +- CometExchange\n:  :                          +- CometHashAggregate\n:  :                             +- CometProject\n:  :                                +- CometFilter\n:  :                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:                    +- CometNativeColumnarToRow\n:                       +- CometExchange\n:                          +- CometHashAggregate\n:                             +- CometProject\n:                                +- CometFilter\n:                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n+- BroadcastExchange\n   +- CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometColumnarExchange\n            +- 
HashAggregate\n               +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n                  +- CometNativeColumnarToRow\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.store_sales\n\nComet accelerated 42 out of 64 eligible operators (65%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q28.native_iceberg_compat/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :     +- CometColumnarExchange\n:  :  :  :  :        +- HashAggregate\n:  :  :  :  :           +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :  :              +- CometNativeColumnarToRow\n:  :  :  :  :                 +- CometExchange\n:  :  :  :  :                    +- CometHashAggregate\n:  :  :  :  :                       +- CometProject\n:  :  :  :  :                          +- CometFilter\n:  :  :  :  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometColumnarExchange\n:  :  :  :              +- HashAggregate\n:  :  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :                    +- CometNativeColumnarToRow\n:  :  :  :                       +- CometExchange\n:  :  :  :                          +- CometHashAggregate\n:  :  :  :                             +- CometProject\n:  :  :  :                                +- CometFilter\n:  :  :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometColumnarExchange\n:  :  :              +- HashAggregate\n:  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :                    +- CometNativeColumnarToRow\n:  :  :                       +- CometExchange\n:  :  :                          +- CometHashAggregate\n:  :  :                             +- CometProject\n:  :  :                                +- CometFilter\n:  :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometColumnarExchange\n:  :              +- HashAggregate\n:  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :                    +- CometNativeColumnarToRow\n:  :                       +- CometExchange\n:  :                          +- CometHashAggregate\n:  :                             +- CometProject\n:  :                                +- CometFilter\n:  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:                    +- CometNativeColumnarToRow\n:                       +- CometExchange\n:                          +- CometHashAggregate\n:                             +- CometProject\n:                                +- CometFilter\n:                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n+- BroadcastExchange\n   +- 
CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometColumnarExchange\n            +- HashAggregate\n               +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n                  +- CometNativeColumnarToRow\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n\nComet accelerated 42 out of 64 eligible operators (65%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q29.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- Filter\n                  :     :     :     :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :     :        +- Filter\n                  :     :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                             +- CometProject\n                  :     :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- Filter\n                  :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :                             +- CometProject\n    
              :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n                  :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 25 out of 61 eligible operators (40%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q29.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :     :     :     :           +- BroadcastExchange\n                  :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                 +- CometProject\n                  :     :     :     :     :     :     :                    +- CometFilter\n                  :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- 
CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 55 out of 61 eligible operators (90%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q3.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q3.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q30.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Project\n      :     :     :                    :  +- BroadcastHashJoin\n      :     :     :                    :     :- Filter\n      :     :     :                    :     :  +- ColumnarToRow\n      :     :     :                    :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :     :           +- SubqueryBroadcast\n      :     :     :                    :     :              +- BroadcastExchange\n      :     :     :                    :     :                 +- CometNativeColumnarToRow\n      :     :     :                    :     :                    +- CometProject\n      :     :     :                    :     :                       +- CometFilter\n      :     :     :                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    :     +- BroadcastExchange\n      :     :     :                    :        +- CometNativeColumnarToRow\n      :     :     :                    :           +- CometProject\n      :     :     :                    :              +- CometFilter\n      :     :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Filter\n      :     :                                         :     :  +- ColumnarToRow\n      :     :                                         :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :           +- ReusedSubquery\n      :     :                                         :  
   +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometProject\n      :     :                                         :              +- CometFilter\n      :     :                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometProject\n      :     :                                                  +- CometFilter\n      :     :                                                     +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 24 out of 61 eligible operators (39%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q30.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometProject\n         :     :     :                 :  +- CometBroadcastHashJoin\n         :     :     :                 :     :- CometFilter\n         :     :     :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :     :     :                 :     :        +- SubqueryBroadcast\n         :     :     :                 :     :           +- BroadcastExchange\n         :     :     :                 :     :              +- CometNativeColumnarToRow\n         :     :     :                 :     :                 +- CometProject\n         :     :     :                 :     :                    +- CometFilter\n         :     :     :                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 :     +- CometBroadcastExchange\n         :     :     :                 :        +- CometProject\n         :     :     :                 :           +- CometFilter\n         :     :     :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometFilter\n         :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometHashAggregate\n         :     :                       +- CometExchange\n         :     :                          +- CometHashAggregate\n         :     :                             +- CometProject\n         :     :                                +- CometBroadcastHashJoin\n         :     :                                   :- CometProject\n         :     :                                   :  +- CometBroadcastHashJoin\n         :     :                                   :     :- CometFilter\n         :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :     :                                   :     :        +- ReusedSubquery\n         :     :                                   :     +- CometBroadcastExchange\n         :     :                                   :        +- CometProject\n         :     :                                   :           +- CometFilter\n         :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                   +- CometBroadcastExchange\n    
     :     :                                      +- CometProject\n         :     :                                         +- CometFilter\n         :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 58 out of 61 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q31.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Project\n            :  +- BroadcastHashJoin\n            :     :- BroadcastHashJoin\n            :     :  :- Project\n            :     :  :  +- BroadcastHashJoin\n            :     :  :     :- BroadcastHashJoin\n            :     :  :     :  :- HashAggregate\n            :     :  :     :  :  +- CometNativeColumnarToRow\n            :     :  :     :  :     +- CometColumnarExchange\n            :     :  :     :  :        +- HashAggregate\n            :     :  :     :  :           +- Project\n            :     :  :     :  :              +- BroadcastHashJoin\n            :     :  :     :  :                 :- Project\n            :     :  :     :  :                 :  +- BroadcastHashJoin\n            :     :  :     :  :                 :     :- Filter\n            :     :  :     :  :                 :     :  +- ColumnarToRow\n            :     :  :     :  :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :  :     :  :                 :     :           +- SubqueryBroadcast\n            :     :  :     :  :                 :     :              +- BroadcastExchange\n            :     :  :     :  :                 :     :                 +- CometNativeColumnarToRow\n            :     :  :     :  :                 :     :                    +- CometFilter\n            :     :  :     :  :                 :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :  :                 :     +- BroadcastExchange\n            :     :  :     :  :                 :        +- CometNativeColumnarToRow\n            :     :  :     :  :                 :           +- CometFilter\n            :     :  :     :  :                 :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :  :                 +- BroadcastExchange\n            :     :  :     :  :                    +- CometNativeColumnarToRow\n            :     :  :     :  :                       +- CometFilter\n            :     :  :     :  :                          +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     :  :     :  +- BroadcastExchange\n            :     :  :     :     +- HashAggregate\n            :     :  :     :        +- CometNativeColumnarToRow\n            :     :  :     :           +- CometColumnarExchange\n            :     :  :     :              +- HashAggregate\n            :     :  :     :                 +- Project\n            :     :  :     :                    +- BroadcastHashJoin\n            :     :  :     :                       :- Project\n            :     :  :     :                       :  +- BroadcastHashJoin\n            :     :  :     :                       :     :- Filter\n            :     :  :     :                       :     :  +- ColumnarToRow\n            :     :  :     :                       :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :  :     :                       :     :           +- SubqueryBroadcast\n            :     :  :     :                       :     :              +- BroadcastExchange\n            :     :  :     :                       :     :                 +- 
CometNativeColumnarToRow\n            :     :  :     :                       :     :                    +- CometFilter\n            :     :  :     :                       :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :                       :     +- BroadcastExchange\n            :     :  :     :                       :        +- CometNativeColumnarToRow\n            :     :  :     :                       :           +- CometFilter\n            :     :  :     :                       :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :                       +- BroadcastExchange\n            :     :  :     :                          +- CometNativeColumnarToRow\n            :     :  :     :                             +- CometFilter\n            :     :  :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     :  :     +- BroadcastExchange\n            :     :  :        +- HashAggregate\n            :     :  :           +- CometNativeColumnarToRow\n            :     :  :              +- CometColumnarExchange\n            :     :  :                 +- HashAggregate\n            :     :  :                    +- Project\n            :     :  :                       +- BroadcastHashJoin\n            :     :  :                          :- Project\n            :     :  :                          :  +- BroadcastHashJoin\n            :     :  :                          :     :- Filter\n            :     :  :                          :     :  +- ColumnarToRow\n            :     :  :                          :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :  :                          :     :           +- SubqueryBroadcast\n            :     :  :                          :     :              +- BroadcastExchange\n            :     :  :                          :     :                 +- CometNativeColumnarToRow\n            :     :  :                          :     :                    +- CometFilter\n            :     :  :                          :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :                          :     +- BroadcastExchange\n            :     :  :                          :        +- CometNativeColumnarToRow\n            :     :  :                          :           +- CometFilter\n            :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :                          +- BroadcastExchange\n            :     :  :                             +- CometNativeColumnarToRow\n            :     :  :                                +- CometFilter\n            :     :  :                                   +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     :  +- BroadcastExchange\n            :     :     +- HashAggregate\n            :     :        +- CometNativeColumnarToRow\n            :     :           +- CometColumnarExchange\n            :     :              +- HashAggregate\n            :     :                 +- Project\n            :     :                    +- BroadcastHashJoin\n            :     :                       :- Project\n            :     :                       :  +- BroadcastHashJoin\n            :    
 :                       :     :- Filter\n            :     :                       :     :  +- ColumnarToRow\n            :     :                       :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :                       :     :           +- ReusedSubquery\n            :     :                       :     +- BroadcastExchange\n            :     :                       :        +- CometNativeColumnarToRow\n            :     :                       :           +- CometFilter\n            :     :                       :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :                       +- BroadcastExchange\n            :     :                          +- CometNativeColumnarToRow\n            :     :                             +- CometFilter\n            :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     +- BroadcastExchange\n            :        +- HashAggregate\n            :           +- CometNativeColumnarToRow\n            :              +- CometColumnarExchange\n            :                 +- HashAggregate\n            :                    +- Project\n            :                       +- BroadcastHashJoin\n            :                          :- Project\n            :                          :  +- BroadcastHashJoin\n            :                          :     :- Filter\n            :                          :     :  +- ColumnarToRow\n            :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                          :     :           +- ReusedSubquery\n            :                          :     +- BroadcastExchange\n            :                          :        +- CometNativeColumnarToRow\n            :                          :           +- CometFilter\n            :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                          +- BroadcastExchange\n            :                             +- CometNativeColumnarToRow\n            :                                +- CometFilter\n            :                                   +- CometNativeScan parquet spark_catalog.default.customer_address\n            +- BroadcastExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- ReusedSubquery\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometFilter\n                                 :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                       
          +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 38 out of 120 eligible operators (31%). Final plan contains 28 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q31.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometBroadcastHashJoin\n            :     :  :- CometProject\n            :     :  :  +- CometBroadcastHashJoin\n            :     :  :     :- CometBroadcastHashJoin\n            :     :  :     :  :- CometHashAggregate\n            :     :  :     :  :  +- CometExchange\n            :     :  :     :  :     +- CometHashAggregate\n            :     :  :     :  :        +- CometProject\n            :     :  :     :  :           +- CometBroadcastHashJoin\n            :     :  :     :  :              :- CometProject\n            :     :  :     :  :              :  +- CometBroadcastHashJoin\n            :     :  :     :  :              :     :- CometFilter\n            :     :  :     :  :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :  :     :  :              :     :        +- SubqueryBroadcast\n            :     :  :     :  :              :     :           +- BroadcastExchange\n            :     :  :     :  :              :     :              +- CometNativeColumnarToRow\n            :     :  :     :  :              :     :                 +- CometFilter\n            :     :  :     :  :              :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :     :  :              :     +- CometBroadcastExchange\n            :     :  :     :  :              :        +- CometFilter\n            :     :  :     :  :              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :     :  :              +- CometBroadcastExchange\n            :     :  :     :  :                 +- CometFilter\n            :     :  :     :  :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     :  :     :  +- CometBroadcastExchange\n            :     :  :     :     +- CometHashAggregate\n            :     :  :     :        +- CometExchange\n            :     :  :     :           +- CometHashAggregate\n            :     :  :     :              +- CometProject\n            :     :  :     :                 +- CometBroadcastHashJoin\n            :     :  :     :                    :- CometProject\n            :     :  :     :                    :  +- CometBroadcastHashJoin\n            :     :  :     :                    :     :- CometFilter\n            :     :  :     :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :  :     :                    :     :        +- SubqueryBroadcast\n            :     :  :     :                    :     :           +- BroadcastExchange\n            :     :  :     :                    :     :              +- CometNativeColumnarToRow\n            :     :  :     :                    :     :                 +- CometFilter\n            :     :  :     :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :     :                    :     +- CometBroadcastExchange\n            :     :  :     :                    :        +- CometFilter\n            :     :  :     :                    :           +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n            :     :  :     :                    +- CometBroadcastExchange\n            :     :  :     :                       +- CometFilter\n            :     :  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     :  :     +- CometBroadcastExchange\n            :     :  :        +- CometHashAggregate\n            :     :  :           +- CometExchange\n            :     :  :              +- CometHashAggregate\n            :     :  :                 +- CometProject\n            :     :  :                    +- CometBroadcastHashJoin\n            :     :  :                       :- CometProject\n            :     :  :                       :  +- CometBroadcastHashJoin\n            :     :  :                       :     :- CometFilter\n            :     :  :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :  :                       :     :        +- SubqueryBroadcast\n            :     :  :                       :     :           +- BroadcastExchange\n            :     :  :                       :     :              +- CometNativeColumnarToRow\n            :     :  :                       :     :                 +- CometFilter\n            :     :  :                       :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :                       :     +- CometBroadcastExchange\n            :     :  :                       :        +- CometFilter\n            :     :  :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :                       +- CometBroadcastExchange\n            :     :  :                          +- CometFilter\n            :     :  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     :  +- CometBroadcastExchange\n            :     :     +- CometHashAggregate\n            :     :        +- CometExchange\n            :     :           +- CometHashAggregate\n            :     :              +- CometProject\n            :     :                 +- CometBroadcastHashJoin\n            :     :                    :- CometProject\n            :     :                    :  +- CometBroadcastHashJoin\n            :     :                    :     :- CometFilter\n            :     :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n            :     :                    :     :        +- ReusedSubquery\n            :     :                    :     +- CometBroadcastExchange\n            :     :                    :        +- CometFilter\n            :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :                    +- CometBroadcastExchange\n            :     :                       +- CometFilter\n            :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     +- CometBroadcastExchange\n            :        +- CometHashAggregate\n            :           +- CometExchange\n            :              +- CometHashAggregate\n            :                 +- CometProject\n            :                    +- CometBroadcastHashJoin\n            :  
                     :- CometProject\n            :                       :  +- CometBroadcastHashJoin\n            :                       :     :- CometFilter\n            :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n            :                       :     :        +- ReusedSubquery\n            :                       :     +- CometBroadcastExchange\n            :                       :        +- CometFilter\n            :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                       +- CometBroadcastExchange\n            :                          +- CometFilter\n            :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            +- CometBroadcastExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- ReusedSubquery\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 111 out of 120 eligible operators (92%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q32.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Filter\n               :     :     :  +- ColumnarToRow\n               :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :           +- SubqueryBroadcast\n               :     :     :              +- BroadcastExchange\n               :     :     :                 +- CometNativeColumnarToRow\n               :     :     :                    +- CometProject\n               :     :     :                       +- CometFilter\n               :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.item\n               :     +- BroadcastExchange\n               :        +- Filter\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Project\n               :                          +- BroadcastHashJoin\n               :                             :- Filter\n               :                             :  +- ColumnarToRow\n               :                             :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                             :           +- ReusedSubquery\n               :                             +- BroadcastExchange\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometProject\n               :                                      +- CometFilter\n               :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 38 eligible operators (36%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q32.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :     :     :        +- SubqueryBroadcast\n               :     :     :           +- BroadcastExchange\n               :     :     :              +- CometNativeColumnarToRow\n               :     :     :                 +- CometProject\n               :     :     :                    +- CometFilter\n               :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                          :        +- ReusedSubquery\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 35 out of 38 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q33.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- SubqueryBroadcast\n               :                 :     :     :              +- BroadcastExchange\n               :                 :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :                    +- CometProject\n               :                 :     :     :                       +- CometFilter\n               :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometNativeScan parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 : 
    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- ReusedSubquery\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometNativeScan parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.item\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :           +- ReusedSubquery\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometBroadcastHashJoin\n                                          :- CometFilter\n                                          
:  +- CometNativeScan parquet spark_catalog.default.item\n                                          +- CometBroadcastExchange\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 46 out of 93 eligible operators (49%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q33.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :     :     :        +- SubqueryBroadcast\n               :              :     :     :           +- BroadcastExchange\n               :              :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :                 +- CometProject\n               :              :     :     :                    +- CometFilter\n               :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometBroadcastHashJoin\n               :                    :- CometFilter\n               :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    +- CometBroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :     :        +- ReusedSubquery\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometBroadcastHashJoin\n               :                    :- CometFilter\n               :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    +- CometBroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :     :        +- ReusedSubquery\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              +- CometBroadcastExchange\n                                 +- CometBroadcastHashJoin\n                                    :- CometFilter\n                                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 89 out of 93 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q34.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Filter\n            :  +- HashAggregate\n            :     +- CometNativeColumnarToRow\n            :        +- CometColumnarExchange\n            :           +- HashAggregate\n            :              +- Project\n            :                 +- BroadcastHashJoin\n            :                    :- Project\n            :                    :  +- BroadcastHashJoin\n            :                    :     :- Project\n            :                    :     :  +- BroadcastHashJoin\n            :                    :     :     :- Filter\n            :                    :     :     :  +- ColumnarToRow\n            :                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                    :     :     :           +- SubqueryBroadcast\n            :                    :     :     :              +- BroadcastExchange\n            :                    :     :     :                 +- CometNativeColumnarToRow\n            :                    :     :     :                    +- CometProject\n            :                    :     :     :                       +- CometFilter\n            :                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     :     +- BroadcastExchange\n            :                    :     :        +- CometNativeColumnarToRow\n            :                    :     :           +- CometProject\n            :                    :     :              +- CometFilter\n            :                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     +- BroadcastExchange\n            :                    :        +- CometNativeColumnarToRow\n            :                    :           +- CometProject\n            :                    :              +- CometFilter\n            :                    :                 +- CometNativeScan parquet spark_catalog.default.store\n            :                    +- BroadcastExchange\n            :                       +- CometNativeColumnarToRow\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometNativeScan parquet spark_catalog.default.household_demographics\n            +- BroadcastExchange\n               +- CometNativeColumnarToRow\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 37 eligible operators (48%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q34.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometFilter\n            :  +- CometHashAggregate\n            :     +- CometExchange\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometFilter\n            :                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :        +- SubqueryBroadcast\n            :                 :     :     :           +- BroadcastExchange\n            :                 :     :     :              +- CometNativeColumnarToRow\n            :                 :     :     :                 +- CometProject\n            :                 :     :     :                    +- CometFilter\n            :                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometProject\n            :                 :     :           +- CometFilter\n            :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometProject\n            :                 :           +- CometFilter\n            :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 37 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q35.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :- BroadcastHashJoin\n                  :     :        :  :- BroadcastHashJoin\n                  :     :        :  :  :- CometNativeColumnarToRow\n                  :     :        :  :  :  +- CometFilter\n                  :     :        :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :        :  :  +- BroadcastExchange\n                  :     :        :  :     +- Project\n                  :     :        :  :        +- BroadcastHashJoin\n                  :     :        :  :           :- ColumnarToRow\n                  :     :        :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :  :           :        +- SubqueryBroadcast\n                  :     :        :  :           :           +- BroadcastExchange\n                  :     :        :  :           :              +- CometNativeColumnarToRow\n                  :     :        :  :           :                 +- CometProject\n                  :     :        :  :           :                    +- CometFilter\n                  :     :        :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  :           +- BroadcastExchange\n                  :     :        :  :              +- CometNativeColumnarToRow\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- Project\n                  :     :        :        +- BroadcastHashJoin\n                  :     :        :           :- ColumnarToRow\n                  :     :        :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :           :        +- ReusedSubquery\n                  :     :        :           +- BroadcastExchange\n                  :     :        :              +- CometNativeColumnarToRow\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- 
BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 54 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q35.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                  :     :        :  :- CometNativeColumnarToRow\n                  :     :        :  :  +- CometBroadcastHashJoin\n                  :     :        :  :     :- CometFilter\n                  :     :        :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :        :  :     +- CometBroadcastExchange\n                  :     :        :  :        +- CometProject\n                  :     :        :  :           +- CometBroadcastHashJoin\n                  :     :        :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :        :  :              :     +- SubqueryBroadcast\n                  :     :        :  :              :        +- BroadcastExchange\n                  :     :        :  :              :           +- CometNativeColumnarToRow\n                  :     :        :  :              :              +- CometProject\n                  :     :        :  :              :                 +- CometFilter\n                  :     :        :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  :              +- CometBroadcastExchange\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- CometNativeColumnarToRow\n                  :     :        :        +- CometProject\n                  :     :        :           +- CometBroadcastHashJoin\n                  :     :        :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :        :              :     +- ReusedSubquery\n                  :     :        :              +- CometBroadcastExchange\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- CometNativeColumnarToRow\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n          
        :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 54 eligible operators (64%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q36.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- Filter\n                                    :     :     :  +- ColumnarToRow\n                                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :     :           +- SubqueryBroadcast\n                                    :     :     :              +- BroadcastExchange\n                                    :     :     :                 +- CometNativeColumnarToRow\n                                    :     :     :                    +- CometProject\n                                    :     :     :                       +- CometFilter\n                                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometProject\n                                    :     :              +- CometFilter\n                                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.item\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 15 out of 34 eligible operators (44%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q36.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometExpand\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :     :        +- SubqueryBroadcast\n                                 :     :     :           +- BroadcastExchange\n                                 :     :     :              +- CometNativeColumnarToRow\n                                 :     :     :                 +- CometProject\n                                 :     :     :                    +- CometFilter\n                                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 29 out of 34 eligible operators (85%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q37.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- BroadcastExchange\n                  :  +- Project\n                  :     +- BroadcastHashJoin\n                  :        :- Project\n                  :        :  +- BroadcastHashJoin\n                  :        :     :- CometNativeColumnarToRow\n                  :        :     :  +- CometProject\n                  :        :     :     +- CometFilter\n                  :        :     :        +- CometNativeScan parquet spark_catalog.default.item\n                  :        :     +- BroadcastExchange\n                  :        :        +- Project\n                  :        :           +- Filter\n                  :        :              +- ColumnarToRow\n                  :        :                 +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :        :                       +- SubqueryBroadcast\n                  :        :                          +- BroadcastExchange\n                  :        :                             +- CometNativeColumnarToRow\n                  :        :                                +- CometProject\n                  :        :                                   +- CometFilter\n                  :        :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :        +- BroadcastExchange\n                  :           +- CometNativeColumnarToRow\n                  :              +- CometProject\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n\nComet accelerated 15 out of 30 eligible operators (50%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q37.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometBroadcastExchange\n                  :  +- CometProject\n                  :     +- CometBroadcastHashJoin\n                  :        :- CometProject\n                  :        :  +- CometBroadcastHashJoin\n                  :        :     :- CometProject\n                  :        :     :  +- CometFilter\n                  :        :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :        :     +- CometBroadcastExchange\n                  :        :        +- CometProject\n                  :        :           +- CometFilter\n                  :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :        :                    +- SubqueryBroadcast\n                  :        :                       +- BroadcastExchange\n                  :        :                          +- CometNativeColumnarToRow\n                  :        :                             +- CometProject\n                  :        :                                +- CometFilter\n                  :        :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        +- CometBroadcastExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n\nComet accelerated 28 out of 30 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q38.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometBroadcastHashJoin\n               :  :- CometHashAggregate\n               :  :  +- CometColumnarExchange\n               :  :     +- HashAggregate\n               :  :        +- Project\n               :  :           +- BroadcastHashJoin\n               :  :              :- Project\n               :  :              :  +- BroadcastHashJoin\n               :  :              :     :- Filter\n               :  :              :     :  +- ColumnarToRow\n               :  :              :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :  :              :     :           +- SubqueryBroadcast\n               :  :              :     :              +- BroadcastExchange\n               :  :              :     :                 +- CometNativeColumnarToRow\n               :  :              :     :                    +- CometProject\n               :  :              :     :                       +- CometFilter\n               :  :              :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :              :     +- BroadcastExchange\n               :  :              :        +- CometNativeColumnarToRow\n               :  :              :           +- CometProject\n               :  :              :              +- CometFilter\n               :  :              :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :              +- BroadcastExchange\n               :  :                 +- CometNativeColumnarToRow\n               :  :                    +- CometProject\n               :  :                       +- CometFilter\n               :  :                          +- CometNativeScan parquet spark_catalog.default.customer\n               :  +- CometBroadcastExchange\n               :     +- CometHashAggregate\n               :        +- CometColumnarExchange\n               :           +- HashAggregate\n               :              +- Project\n               :                 +- BroadcastHashJoin\n               :                    :- Project\n               :                    :  +- BroadcastHashJoin\n               :                    :     :- Filter\n               :                    :     :  +- ColumnarToRow\n               :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :     :           +- ReusedSubquery\n               :                    :     +- BroadcastExchange\n               :                    :        +- CometNativeColumnarToRow\n               :                    :           +- CometProject\n               :                    :              +- CometFilter\n               :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    +- BroadcastExchange\n               :                       +- CometNativeColumnarToRow\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometNativeScan parquet spark_catalog.default.customer\n           
    +- CometBroadcastExchange\n                  +- CometHashAggregate\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- ReusedSubquery\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 66 eligible operators (53%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q38.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometBroadcastHashJoin\n               :  :- CometHashAggregate\n               :  :  +- CometExchange\n               :  :     +- CometHashAggregate\n               :  :        +- CometProject\n               :  :           +- CometBroadcastHashJoin\n               :  :              :- CometProject\n               :  :              :  +- CometBroadcastHashJoin\n               :  :              :     :- CometFilter\n               :  :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :  :              :     :        +- SubqueryBroadcast\n               :  :              :     :           +- BroadcastExchange\n               :  :              :     :              +- CometNativeColumnarToRow\n               :  :              :     :                 +- CometProject\n               :  :              :     :                    +- CometFilter\n               :  :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :              :     +- CometBroadcastExchange\n               :  :              :        +- CometProject\n               :  :              :           +- CometFilter\n               :  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :              +- CometBroadcastExchange\n               :  :                 +- CometProject\n               :  :                    +- CometFilter\n               :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               :  +- CometBroadcastExchange\n               :     +- CometHashAggregate\n               :        +- CometExchange\n               :           +- CometHashAggregate\n               :              +- CometProject\n               :                 +- CometBroadcastHashJoin\n               :                    :- CometProject\n               :                    :  +- CometBroadcastHashJoin\n               :                    :     :- CometFilter\n               :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :     :        +- ReusedSubquery\n               :                    :     +- CometBroadcastExchange\n               :                    :        +- CometProject\n               :                    :           +- CometFilter\n               :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometBroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               +- CometBroadcastExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 
:     :- CometFilter\n                                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :        +- ReusedSubquery\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 62 out of 66 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q39a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- BroadcastHashJoin\n         :- Project\n         :  +- Filter\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- Project\n         :                    +- BroadcastHashJoin\n         :                       :- Project\n         :                       :  +- BroadcastHashJoin\n         :                       :     :- Project\n         :                       :     :  +- BroadcastHashJoin\n         :                       :     :     :- Filter\n         :                       :     :     :  +- ColumnarToRow\n         :                       :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                       :     :     :           +- SubqueryBroadcast\n         :                       :     :     :              +- BroadcastExchange\n         :                       :     :     :                 +- CometNativeColumnarToRow\n         :                       :     :     :                    +- CometProject\n         :                       :     :     :                       +- CometFilter\n         :                       :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                       :     :     +- BroadcastExchange\n         :                       :     :        +- CometNativeColumnarToRow\n         :                       :     :           +- CometFilter\n         :                       :     :              +- CometNativeScan parquet spark_catalog.default.item\n         :                       :     +- BroadcastExchange\n         :                       :        +- CometNativeColumnarToRow\n         :                       :           +- CometFilter\n         :                       :              +- CometNativeScan parquet spark_catalog.default.warehouse\n         :                       +- BroadcastExchange\n         :                          +- CometNativeColumnarToRow\n         :                             +- CometProject\n         :                                +- CometFilter\n         :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- BroadcastExchange\n            +- Project\n               +- Filter\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- Filter\n                                    :     :     :  +- ColumnarToRow\n                                    :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :     :           +- SubqueryBroadcast\n                                    :     :     :              +- BroadcastExchange\n                                    :     :     :                 +- CometNativeColumnarToRow\n                                    :     :     :                    +- CometProject\n                                    :     :     :                       +- CometFilter\n                                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometFilter\n                                    :     :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometFilter\n                                    :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 24 out of 60 eligible operators (40%). Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q39a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometFilter\n         :     +- CometHashAggregate\n         :        +- CometExchange\n         :           +- CometHashAggregate\n         :              +- CometProject\n         :                 +- CometBroadcastHashJoin\n         :                    :- CometProject\n         :                    :  +- CometBroadcastHashJoin\n         :                    :     :- CometProject\n         :                    :     :  +- CometBroadcastHashJoin\n         :                    :     :     :- CometFilter\n         :                    :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n         :                    :     :     :        +- SubqueryBroadcast\n         :                    :     :     :           +- BroadcastExchange\n         :                    :     :     :              +- CometNativeColumnarToRow\n         :                    :     :     :                 +- CometProject\n         :                    :     :     :                    +- CometFilter\n         :                    :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                    :     :     +- CometBroadcastExchange\n         :                    :     :        +- CometFilter\n         :                    :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                    :     +- CometBroadcastExchange\n         :                    :        +- CometFilter\n         :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n         :                    +- CometBroadcastExchange\n         :                       +- CometProject\n         :                          +- CometFilter\n         :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                                 :     :     :        +- SubqueryBroadcast\n                                 :     :     :           +- BroadcastExchange\n                                 :     :     :              +- CometNativeColumnarToRow\n                                 :     :     :                 +- CometProject\n                                 :     :     :                    +- CometFilter\n                                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometFilter\n                                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 56 out of 60 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q39b.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- BroadcastHashJoin\n         :- Project\n         :  +- Filter\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- Project\n         :                    +- BroadcastHashJoin\n         :                       :- Project\n         :                       :  +- BroadcastHashJoin\n         :                       :     :- Project\n         :                       :     :  +- BroadcastHashJoin\n         :                       :     :     :- Filter\n         :                       :     :     :  +- ColumnarToRow\n         :                       :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                       :     :     :           +- SubqueryBroadcast\n         :                       :     :     :              +- BroadcastExchange\n         :                       :     :     :                 +- CometNativeColumnarToRow\n         :                       :     :     :                    +- CometProject\n         :                       :     :     :                       +- CometFilter\n         :                       :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                       :     :     +- BroadcastExchange\n         :                       :     :        +- CometNativeColumnarToRow\n         :                       :     :           +- CometFilter\n         :                       :     :              +- CometNativeScan parquet spark_catalog.default.item\n         :                       :     +- BroadcastExchange\n         :                       :        +- CometNativeColumnarToRow\n         :                       :           +- CometFilter\n         :                       :              +- CometNativeScan parquet spark_catalog.default.warehouse\n         :                       +- BroadcastExchange\n         :                          +- CometNativeColumnarToRow\n         :                             +- CometProject\n         :                                +- CometFilter\n         :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- BroadcastExchange\n            +- Project\n               +- Filter\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- Filter\n                                    :     :     :  +- ColumnarToRow\n                                    :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :     :           +- SubqueryBroadcast\n                                    :     :     :              +- BroadcastExchange\n                                    :     :     :                 +- CometNativeColumnarToRow\n                                    :     :     :                    +- CometProject\n                                    :     :     :                       +- CometFilter\n                                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometFilter\n                                    :     :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometFilter\n                                    :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 24 out of 60 eligible operators (40%). Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q39b.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometFilter\n         :     +- CometHashAggregate\n         :        +- CometExchange\n         :           +- CometHashAggregate\n         :              +- CometProject\n         :                 +- CometBroadcastHashJoin\n         :                    :- CometProject\n         :                    :  +- CometBroadcastHashJoin\n         :                    :     :- CometProject\n         :                    :     :  +- CometBroadcastHashJoin\n         :                    :     :     :- CometFilter\n         :                    :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n         :                    :     :     :        +- SubqueryBroadcast\n         :                    :     :     :           +- BroadcastExchange\n         :                    :     :     :              +- CometNativeColumnarToRow\n         :                    :     :     :                 +- CometProject\n         :                    :     :     :                    +- CometFilter\n         :                    :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                    :     :     +- CometBroadcastExchange\n         :                    :     :        +- CometFilter\n         :                    :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                    :     +- CometBroadcastExchange\n         :                    :        +- CometFilter\n         :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n         :                    +- CometBroadcastExchange\n         :                       +- CometProject\n         :                          +- CometFilter\n         :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                                 :     :     :        +- SubqueryBroadcast\n                                 :     :     :           +- BroadcastExchange\n                                 :     :     :              +- CometNativeColumnarToRow\n                                 :     :     :                 +- CometProject\n                                 :     :     :                    +- CometFilter\n                                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometFilter\n                                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 56 out of 60 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q4.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Project\n      :     :     :  +- BroadcastHashJoin\n      :     :     :     :- BroadcastHashJoin\n      :     :     :     :  :- Filter\n      :     :     :     :  :  +- HashAggregate\n      :     :     :     :  :     +- CometNativeColumnarToRow\n      :     :     :     :  :        +- CometColumnarExchange\n      :     :     :     :  :           +- HashAggregate\n      :     :     :     :  :              +- Project\n      :     :     :     :  :                 +- BroadcastHashJoin\n      :     :     :     :  :                    :- Project\n      :     :     :     :  :                    :  +- BroadcastHashJoin\n      :     :     :     :  :                    :     :- CometNativeColumnarToRow\n      :     :     :     :  :                    :     :  +- CometProject\n      :     :     :     :  :                    :     :     +- CometFilter\n      :     :     :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :     :  :                    :     +- BroadcastExchange\n      :     :     :     :  :                    :        +- Filter\n      :     :     :     :  :                    :           +- ColumnarToRow\n      :     :     :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :     :  :                    :                    +- SubqueryBroadcast\n      :     :     :     :  :                    :                       +- BroadcastExchange\n      :     :     :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :     :     :  :                    :                             +- CometFilter\n      :     :     :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     :  :                    +- BroadcastExchange\n      :     :     :     :  :                       +- CometNativeColumnarToRow\n      :     :     :     :  :                          +- CometFilter\n      :     :     :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     :  +- BroadcastExchange\n      :     :     :     :     +- HashAggregate\n      :     :     :     :        +- CometNativeColumnarToRow\n      :     :     :     :           +- CometColumnarExchange\n      :     :     :     :              +- HashAggregate\n      :     :     :     :                 +- Project\n      :     :     :     :                    +- BroadcastHashJoin\n      :     :     :     :                       :- Project\n      :     :     :     :                       :  +- BroadcastHashJoin\n      :     :     :     :                       :     :- CometNativeColumnarToRow\n      :     :     :     :                       :     :  +- CometProject\n      :     :     :     :                       :     :     +- CometFilter\n      :     :     :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :     :                       :     +- BroadcastExchange\n      :     :     :     :                       :        +- Filter\n      :     :     :     :                       :           +- ColumnarToRow\n      :     :     :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :     :                       :                    +- SubqueryBroadcast\n      :     :     :     :                       :                       +- BroadcastExchange\n      :     :     :     :                       :                          +- CometNativeColumnarToRow\n      :     :     :     :                       :                             +- CometFilter\n      :     :     :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     :                       +- BroadcastExchange\n      :     :     :     :                          +- CometNativeColumnarToRow\n      :     :     :     :                             +- CometFilter\n      :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     +- BroadcastExchange\n      :     :     :        +- Filter\n      :     :     :           +- HashAggregate\n      :     :     :              +- CometNativeColumnarToRow\n      :     :     :                 +- CometColumnarExchange\n      :     :     :                    +- HashAggregate\n      :     :     :                       +- Project\n      :     :     :                          +- BroadcastHashJoin\n      :     :     :                             :- Project\n      :     :     :                             :  +- BroadcastHashJoin\n      :     :     :                             :     :- CometNativeColumnarToRow\n      :     :     :                             :     :  +- CometProject\n      :     :     :                             :     :     +- CometFilter\n      :     :     :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :                             :     +- BroadcastExchange\n      :     :     :                             :        +- Filter\n      :     :     :                             :           +- ColumnarToRow\n      :     :     :                             :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                             :                    +- ReusedSubquery\n      :     :     :                             +- BroadcastExchange\n      :     :     :                                +- CometNativeColumnarToRow\n      :     :     :                                   +- CometFilter\n      :     :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     +- BroadcastExchange\n      :     :        +- HashAggregate\n      :     :           +- CometNativeColumnarToRow\n      :     :              +- CometColumnarExchange\n      :     :                 +- HashAggregate\n      :     :                    +- Project\n      :     :                       +- BroadcastHashJoin\n      :     :                          :- Project\n      :     :                          :  +- BroadcastHashJoin\n      :     :                          :     :- CometNativeColumnarToRow\n      :     :                          :     :  +- CometProject\n      :     :                          :     :     +- CometFilter\n      :     :                          :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                          :     +- BroadcastExchange\n      :     :                          :        +- Filter\n      :     :                          :           +- ColumnarToRow\n      :     :                          :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                          :                    +- ReusedSubquery\n      :     :                          +- BroadcastExchange\n      :     :                             +- CometNativeColumnarToRow\n      :     :                                +- CometFilter\n      :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 40 out of 126 eligible operators (31%). Final plan contains 26 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q4.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometProject\n         :     :     :  +- CometBroadcastHashJoin\n         :     :     :     :- CometBroadcastHashJoin\n         :     :     :     :  :- CometFilter\n         :     :     :     :  :  +- CometHashAggregate\n         :     :     :     :  :     +- CometExchange\n         :     :     :     :  :        +- CometHashAggregate\n         :     :     :     :  :           +- CometProject\n         :     :     :     :  :              +- CometBroadcastHashJoin\n         :     :     :     :  :                 :- CometProject\n         :     :     :     :  :                 :  +- CometBroadcastHashJoin\n         :     :     :     :  :                 :     :- CometProject\n         :     :     :     :  :                 :     :  +- CometFilter\n         :     :     :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :     :  :                 :     +- CometBroadcastExchange\n         :     :     :     :  :                 :        +- CometFilter\n         :     :     :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :     :  :                 :                 +- SubqueryBroadcast\n         :     :     :     :  :                 :                    +- BroadcastExchange\n         :     :     :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :     :     :  :                 :                          +- CometFilter\n         :     :     :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     :  :                 +- CometBroadcastExchange\n         :     :     :     :  :                    +- CometFilter\n         :     :     :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     :  +- CometBroadcastExchange\n         :     :     :     :     +- CometHashAggregate\n         :     :     :     :        +- CometExchange\n         :     :     :     :           +- CometHashAggregate\n         :     :     :     :              +- CometProject\n         :     :     :     :                 +- CometBroadcastHashJoin\n         :     :     :     :                    :- CometProject\n         :     :     :     :                    :  +- CometBroadcastHashJoin\n         :     :     :     :                    :     :- CometProject\n         :     :     :     :                    :     :  +- CometFilter\n         :     :     :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :     :                    :     +- CometBroadcastExchange\n         :     :     :     :                    :        +- CometFilter\n         :     :     :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :     :                    :                 +- SubqueryBroadcast\n         :     :     :     :                    :                    +- BroadcastExchange\n         :     :     :     :                    :                       +- CometNativeColumnarToRow\n         :     :     :     :                    :                          +- CometFilter\n         :     :     :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     :                    +- CometBroadcastExchange\n         :     :     :     :                       +- CometFilter\n         :     :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     +- CometBroadcastExchange\n         :     :     :        +- CometFilter\n         :     :     :           +- CometHashAggregate\n         :     :     :              +- CometExchange\n         :     :     :                 +- CometHashAggregate\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometBroadcastHashJoin\n         :     :     :                          :- CometProject\n         :     :     :                          :  +- CometBroadcastHashJoin\n         :     :     :                          :     :- CometProject\n         :     :     :                          :     :  +- CometFilter\n         :     :     :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :                          :     +- CometBroadcastExchange\n         :     :     :                          :        +- CometFilter\n         :     :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :     :     :                          :                 +- ReusedSubquery\n         :     :     :                          +- CometBroadcastExchange\n         :     :     :                             +- CometFilter\n         :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometExchange\n         :     :              +- CometHashAggregate\n         :     :                 +- CometProject\n         :     :                    +- CometBroadcastHashJoin\n         :     :                       :- CometProject\n         :     :                       :  +- CometBroadcastHashJoin\n         :     :                       :     :- CometProject\n         :     :                       :     :  +- CometFilter\n         :     :                       :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                       :     +- CometBroadcastExchange\n         :     :                       :        +- CometFilter\n         :     :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :     :                       :                 +- ReusedSubquery\n         :     :                       +- CometBroadcastExchange\n         :     :                          +- CometFilter\n         :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 118 out of 126 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q40.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometSortMergeJoin\n                  :     :     :     :- CometSort\n                  :     :     :     :  +- CometColumnarExchange\n                  :     :     :     :     +- Filter\n                  :     :     :     :        +- ColumnarToRow\n                  :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :                          +- CometFilter\n                  :     :     :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometSort\n                  :     :     :        +- CometExchange\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 32 out of 36 eligible operators (88%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q40.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometSortMergeJoin\n                  :     :     :     :- CometSort\n                  :     :     :     :  +- CometExchange\n                  :     :     :     :     +- CometFilter\n                  :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :              +- SubqueryBroadcast\n                  :     :     :     :                 +- BroadcastExchange\n                  :     :     :     :                    +- CometNativeColumnarToRow\n                  :     :     :     :                       +- CometFilter\n                  :     :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometSort\n                  :     :     :        +- CometExchange\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 34 out of 36 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q41.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometFilter\n                  :     +- CometNativeScan parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q41.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometFilter\n                  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q42.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q42.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q43.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q43.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q44.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- SortMergeJoin\n      :     :     :- Sort\n      :     :     :  +- Project\n      :     :     :     +- Filter\n      :     :     :        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :     :           +- CometNativeColumnarToRow\n      :     :     :              +- CometSort\n      :     :     :                 +- CometExchange\n      :     :     :                    +- CometFilter\n      :     :     :                       :  +- Subquery\n      :     :     :                       :     +- CometNativeColumnarToRow\n      :     :     :                       :        +- CometHashAggregate\n      :     :     :                       :           +- CometExchange\n      :     :     :                       :              +- CometHashAggregate\n      :     :     :                       :                 +- CometProject\n      :     :     :                       :                    +- CometFilter\n      :     :     :                       :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n      :     :     :                       +- CometHashAggregate\n      :     :     :                          +- CometExchange\n      :     :     :                             +- CometHashAggregate\n      :     :     :                                +- CometProject\n      :     :     :                                   +- CometFilter\n      :     :     :                                      +- CometNativeScan parquet spark_catalog.default.store_sales\n      :     :     +- Sort\n      :     :        +- Project\n      :     :           +- Filter\n      :     :              +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :                 +- CometNativeColumnarToRow\n      :     :                    +- CometSort\n      :     :                       +- CometExchange\n      :     :                          +- CometFilter\n      :     :                             :  +- Subquery\n      :     :                             :     +- CometNativeColumnarToRow\n      :     :                             :        +- CometHashAggregate\n      :     :                             :           +- CometExchange\n      :     :                             :              +- CometHashAggregate\n      :     :                             :                 +- CometProject\n      :     :                             :                    +- CometFilter\n      :     :                             :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometExchange\n      :     :                                   +- CometHashAggregate\n      :     :                                      +- CometProject\n      :     :                                         +- CometFilter\n      :     :                                            +- CometNativeScan parquet spark_catalog.default.store_sales\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.item\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 36 out of 55 eligible operators (65%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q44.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- SortMergeJoin\n      :     :     :- Sort\n      :     :     :  +- Project\n      :     :     :     +- Filter\n      :     :     :        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :     :           +- CometNativeColumnarToRow\n      :     :     :              +- CometSort\n      :     :     :                 +- CometExchange\n      :     :     :                    +- CometFilter\n      :     :     :                       :  +- Subquery\n      :     :     :                       :     +- CometNativeColumnarToRow\n      :     :     :                       :        +- CometHashAggregate\n      :     :     :                       :           +- CometExchange\n      :     :     :                       :              +- CometHashAggregate\n      :     :     :                       :                 +- CometProject\n      :     :     :                       :                    +- CometFilter\n      :     :     :                       :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :     :                       +- CometHashAggregate\n      :     :     :                          +- CometExchange\n      :     :     :                             +- CometHashAggregate\n      :     :     :                                +- CometProject\n      :     :     :                                   +- CometFilter\n      :     :     :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :     +- Sort\n      :     :        +- Project\n      :     :           +- Filter\n      :     :              +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :                 +- CometNativeColumnarToRow\n      :     :                    +- CometSort\n      :     :                       +- CometExchange\n      :     :                          +- CometFilter\n      :     :                             :  +- Subquery\n      :     :                             :     +- CometNativeColumnarToRow\n      :     :                             :        +- CometHashAggregate\n      :     :                             :           +- CometExchange\n      :     :                             :              +- CometHashAggregate\n      :     :                             :                 +- CometProject\n      :     :                             :                    +- CometFilter\n      :     :                             :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometExchange\n      :     :                                   +- CometHashAggregate\n      :     :                                      +- CometProject\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 36 out of 55 eligible operators (65%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q45.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- Filter\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Project\n                     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :- Filter\n                     :     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :     :           +- SubqueryBroadcast\n                     :     :     :     :     :              +- BroadcastExchange\n                     :     :     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :     :     :                    +- CometProject\n                     :     :     :     :     :                       +- CometFilter\n                     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometProject\n                     :     :     :              +- CometFilter\n                     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 41 eligible operators (43%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q45.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- Filter\n                  +-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                     :- CometNativeColumnarToRow\n                     :  +- CometProject\n                     :     +- CometBroadcastHashJoin\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometProject\n                     :        :     :  +- CometBroadcastHashJoin\n                     :        :     :     :- CometProject\n                     :        :     :     :  +- CometBroadcastHashJoin\n                     :        :     :     :     :- CometFilter\n                     :        :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :        :     :     :     :        +- SubqueryBroadcast\n                     :        :     :     :     :           +- BroadcastExchange\n                     :        :     :     :     :              +- CometNativeColumnarToRow\n                     :        :     :     :     :                 +- CometProject\n                     :        :     :     :     :                    +- CometFilter\n                     :        :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :     :     :     +- CometBroadcastExchange\n                     :        :     :     :        +- CometFilter\n                     :        :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :        :     :     +- CometBroadcastExchange\n                     :        :     :        +- CometProject\n                     :        :     :           +- CometFilter\n                     :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        +- CometBroadcastExchange\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 32 out of 41 eligible operators (78%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q46.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- HashAggregate\n      :     :  +- CometNativeColumnarToRow\n      :     :     +- CometColumnarExchange\n      :     :        +- HashAggregate\n      :     :           +- Project\n      :     :              +- BroadcastHashJoin\n      :     :                 :- Project\n      :     :                 :  +- BroadcastHashJoin\n      :     :                 :     :- Project\n      :     :                 :     :  +- BroadcastHashJoin\n      :     :                 :     :     :- Project\n      :     :                 :     :     :  +- BroadcastHashJoin\n      :     :                 :     :     :     :- Filter\n      :     :                 :     :     :     :  +- ColumnarToRow\n      :     :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                 :     :     :     :           +- SubqueryBroadcast\n      :     :                 :     :     :     :              +- BroadcastExchange\n      :     :                 :     :     :     :                 +- CometNativeColumnarToRow\n      :     :                 :     :     :     :                    +- CometProject\n      :     :                 :     :     :     :                       +- CometFilter\n      :     :                 :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     :     +- BroadcastExchange\n      :     :                 :     :     :        +- CometNativeColumnarToRow\n      :     :                 :     :     :           +- CometProject\n      :     :                 :     :     :              +- CometFilter\n      :     :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     +- BroadcastExchange\n      :     :                 :     :        +- CometNativeColumnarToRow\n      :     :                 :     :           +- CometProject\n      :     :                 :     :              +- CometFilter\n      :     :                 :     :                 +- CometNativeScan parquet spark_catalog.default.store\n      :     :                 :     +- BroadcastExchange\n      :     :                 :        +- CometNativeColumnarToRow\n      :     :                 :           +- CometProject\n      :     :                 :              +- CometFilter\n      :     :                 :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n      :     :                 +- BroadcastExchange\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometFilter\n      :     :                          +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometFilter\n               +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 20 out of 45 eligible operators (44%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q46.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometHashAggregate\n         :     :  +- CometExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometProject\n         :     :           +- CometBroadcastHashJoin\n         :     :              :- CometProject\n         :     :              :  +- CometBroadcastHashJoin\n         :     :              :     :- CometProject\n         :     :              :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :- CometProject\n         :     :              :     :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :     :- CometFilter\n         :     :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :              :     :     :     :        +- SubqueryBroadcast\n         :     :              :     :     :     :           +- BroadcastExchange\n         :     :              :     :     :     :              +- CometNativeColumnarToRow\n         :     :              :     :     :     :                 +- CometProject\n         :     :              :     :     :     :                    +- CometFilter\n         :     :              :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     :     +- CometBroadcastExchange\n         :     :              :     :     :        +- CometProject\n         :     :              :     :     :           +- CometFilter\n         :     :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     +- CometBroadcastExchange\n         :     :              :     :        +- CometProject\n         :     :              :     :           +- CometFilter\n         :     :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     :              :     +- CometBroadcastExchange\n         :     :              :        +- CometProject\n         :     :              :           +- CometFilter\n         :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         :     :              +- CometBroadcastExchange\n         :     :                 +- CometFilter\n         :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 43 out of 45 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q47.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                          
        +- CometNativeScan parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q47.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q48.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Project\n               :     :     :  +- BroadcastHashJoin\n               :     :     :     :- Filter\n               :     :     :     :  +- ColumnarToRow\n               :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :     :           +- SubqueryBroadcast\n               :     :     :     :              +- BroadcastExchange\n               :     :     :     :                 +- CometNativeColumnarToRow\n               :     :     :     :                    +- CometProject\n               :     :     :     :                       +- CometFilter\n               :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     :     +- BroadcastExchange\n               :     :     :        +- CometNativeColumnarToRow\n               :     :     :           +- CometFilter\n               :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n               :     +- BroadcastExchange\n               :        +- CometNativeColumnarToRow\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 15 out of 33 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q48.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometFilter\n               :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     :     :        +- SubqueryBroadcast\n               :     :     :     :           +- BroadcastExchange\n               :     :     :     :              +- CometNativeColumnarToRow\n               :     :     :     :                 +- CometProject\n               :     :     :     :                    +- CometFilter\n               :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometFilter\n               :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 31 out of 33 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q49.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- SubqueryBroadcast\n               :                                         :     :                    +- BroadcastExchange\n               :                                         :     :                       +- CometNativeColumnarToRow\n               :                                         :     :                          +- CometProject\n               :                                         :     :                             +- CometFilter\n               :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n      
         :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- ReusedSubquery\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometColumnarExchange\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- BroadcastExchange\n                                                         :     :  +- Project\n                                                         :     :     +- Filter\n                                                         :     :        +- ColumnarToRow\n                                                         :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :                 +- ReusedSubquery\n                                                         :     +- CometNativeColumnarToRow\n                                                         :        +- CometProject\n                                                         :           +- CometFilter\n                                                         :              +- CometNativeScan parquet spark_catalog.default.store_returns\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 33 out of 87 eligible operators (37%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q49.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :              +- SubqueryBroadcast\n               :                                      :     :                 +- BroadcastExchange\n               :                                      :     :                    +- CometNativeColumnarToRow\n               :                                      :     :                       +- CometProject\n               :                                      :     :                          +- CometFilter\n               :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :              +- ReusedSubquery\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometExchange\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometBroadcastExchange\n                                                      :     :  +- CometProject\n                                                      :     :     +- CometFilter\n                                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :              +- ReusedSubquery\n                                                      :     +- CometProject\n                                                      :        +- CometFilter\n                                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 66 out of 87 eligible operators (75%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q5.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- Project\n                  :              +- BroadcastHashJoin\n                  :                 :- Project\n                  :                 :  +- BroadcastHashJoin\n                  :                 :     :- Union\n                  :                 :     :  :- Project\n                  :                 :     :  :  +- Filter\n                  :                 :     :  :     +- ColumnarToRow\n                  :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                 :     :  :              +- SubqueryBroadcast\n                  :                 :     :  :                 +- BroadcastExchange\n                  :                 :     :  :                    +- CometNativeColumnarToRow\n                  :                 :     :  :                       +- CometProject\n                  :                 :     :  :                          +- CometFilter\n                  :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 :     :  +- Project\n                  :                 :     :     +- Filter\n                  :                 :     :        +- ColumnarToRow\n                  :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                 :     :                 +- ReusedSubquery\n                  :                 :     +- BroadcastExchange\n                  :                 :        +- CometNativeColumnarToRow\n                  :                 :           +- CometProject\n                  :                 :              +- CometFilter\n                  :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 +- BroadcastExchange\n                  :                    +- CometNativeColumnarToRow\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometNativeScan parquet spark_catalog.default.store\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- Project\n                  :              +- BroadcastHashJoin\n                  :                 :- Project\n                  :                 :  +- BroadcastHashJoin\n                  :                 :     :- Union\n                  :                 :     :  :- Project\n                  :                 :     :  :  +- Filter\n                  :                 :     :  :     +- ColumnarToRow\n                  :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic 
pruning]\n                  :                 :     :  :              +- ReusedSubquery\n                  :                 :     :  +- Project\n                  :                 :     :     +- Filter\n                  :                 :     :        +- ColumnarToRow\n                  :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                 :     :                 +- ReusedSubquery\n                  :                 :     +- BroadcastExchange\n                  :                 :        +- CometNativeColumnarToRow\n                  :                 :           +- CometProject\n                  :                 :              +- CometFilter\n                  :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 +- BroadcastExchange\n                  :                    +- CometNativeColumnarToRow\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Union\n                                    :     :  :- Project\n                                    :     :  :  +- Filter\n                                    :     :  :     +- ColumnarToRow\n                                    :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :  :              +- ReusedSubquery\n                                    :     :  +- Project\n                                    :     :     +- BroadcastHashJoin\n                                    :     :        :- BroadcastExchange\n                                    :     :        :  +- ColumnarToRow\n                                    :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :        :           +- ReusedSubquery\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometProject\n                                    :     :              +- CometFilter\n                                    :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n 
                                         +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 28 out of 86 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q5.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometUnion\n                  :              :     :  :- CometProject\n                  :              :     :  :  +- CometFilter\n                  :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :              :     :  :           +- SubqueryBroadcast\n                  :              :     :  :              +- BroadcastExchange\n                  :              :     :  :                 +- CometNativeColumnarToRow\n                  :              :     :  :                    +- CometProject\n                  :              :     :  :                       +- CometFilter\n                  :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :  +- CometProject\n                  :              :     :     +- CometFilter\n                  :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :              :     :              +- ReusedSubquery\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometUnion\n                  :              :     :  :- CometProject\n                  :              :     :  :  +- CometFilter\n                  :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :              :     :  :           +- ReusedSubquery\n                  :              :     :  +- CometProject\n                  :              :     :     +- CometFilter\n                  :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :              :     :              +- ReusedSubquery\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- 
CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometUnion\n                                 :     :  :- CometProject\n                                 :     :  :  +- CometFilter\n                                 :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :  :           +- ReusedSubquery\n                                 :     :  +- CometProject\n                                 :     :     +- CometBroadcastHashJoin\n                                 :     :        :- CometBroadcastExchange\n                                 :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                 :     :        :        +- ReusedSubquery\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 79 out of 86 eligible operators (91%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q50.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- CometNativeColumnarToRow\n                  :     :     :     :  +- CometFilter\n                  :     :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- Filter\n                  :     :     :           +- ColumnarToRow\n                  :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :                    +- SubqueryBroadcast\n                  :     :     :                       +- BroadcastExchange\n                  :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :                             +- CometProject\n                  :     :     :                                +- CometFilter\n                  :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.store\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q50.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :                 +- SubqueryBroadcast\n                  :     :     :                    +- BroadcastExchange\n                  :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :                          +- CometProject\n                  :     :     :                             +- CometFilter\n                  :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 31 out of 33 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q51.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometProject\n                  +- CometSortMergeJoin\n                     :- CometSort\n                     :  +- CometColumnarExchange\n                     :     +- Project\n                     :        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                     :           +- CometNativeColumnarToRow\n                     :              +- CometSort\n                     :                 +- CometColumnarExchange\n                     :                    +- HashAggregate\n                     :                       +- CometNativeColumnarToRow\n                     :                          +- CometColumnarExchange\n                     :                             +- HashAggregate\n                     :                                +- Project\n                     :                                   +- BroadcastHashJoin\n                     :                                      :- Filter\n                     :                                      :  +- ColumnarToRow\n                     :                                      :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :                                      :           +- SubqueryBroadcast\n                     :                                      :              +- BroadcastExchange\n                     :                                      :                 +- CometNativeColumnarToRow\n                     :                                      :                    +- CometProject\n                     :                                      :                       +- CometFilter\n                     :                                      :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :                                      +- BroadcastExchange\n                     :                                         +- CometNativeColumnarToRow\n                     :                                            +- CometProject\n                     :                                               +- CometFilter\n                     :                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- CometSort\n                        +- CometColumnarExchange\n                           +- Project\n                              +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                 +- CometNativeColumnarToRow\n                                    +- CometSort\n                                       +- CometColumnarExchange\n                                          +- HashAggregate\n                                             +- CometNativeColumnarToRow\n                                                +- CometColumnarExchange\n                                                   +- HashAggregate\n                                                      +- Project\n                                                         +- BroadcastHashJoin\n                                                            :- Filter\n                                                            :  +- ColumnarToRow\n                                                            :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                            :           +- ReusedSubquery\n                                                            +- BroadcastExchange\n                                                               +- CometNativeColumnarToRow\n                                                                  +- CometProject\n                                                                     +- CometFilter\n                                                                        +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 23 out of 47 eligible operators (48%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q51.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometProject\n                  +- CometSortMergeJoin\n                     :- CometSort\n                     :  +- CometColumnarExchange\n                     :     +- Project\n                     :        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                     :           +- CometNativeColumnarToRow\n                     :              +- CometSort\n                     :                 +- CometExchange\n                     :                    +- CometHashAggregate\n                     :                       +- CometExchange\n                     :                          +- CometHashAggregate\n                     :                             +- CometProject\n                     :                                +- CometBroadcastHashJoin\n                     :                                   :- CometFilter\n                     :                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :                                   :        +- SubqueryBroadcast\n                     :                                   :           +- BroadcastExchange\n                     :                                   :              +- CometNativeColumnarToRow\n                     :                                   :                 +- CometProject\n                     :                                   :                    +- CometFilter\n                     :                                   :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :                                   +- CometBroadcastExchange\n                     :                                      +- CometProject\n                     :                                         +- CometFilter\n                     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometSort\n                        +- CometColumnarExchange\n                           +- Project\n                              +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                 +- CometNativeColumnarToRow\n                                    +- CometSort\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometExchange\n                                                +- CometHashAggregate\n                                                   +- CometProject\n                                                      +- CometBroadcastHashJoin\n                                                         :- CometFilter\n                                                         :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                         :        +- ReusedSubquery\n                                                         +- CometBroadcastExchange\n                                                            +- CometProject\n                                                               +- CometFilter\n                                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 37 out of 47 eligible operators (78%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q52.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q52.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q53.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- CometNativeColumnarToRow\n                                    :     :     :  +- CometProject\n                                    :     :     :     +- CometFilter\n                                    :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- Filter\n                                    :     :           +- ColumnarToRow\n                                    :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :                    +- SubqueryBroadcast\n                                    :     :                       +- BroadcastExchange\n                                    :     :                          +- CometNativeColumnarToRow\n                                    :     :                             +- CometProject\n                                    :     :                                +- CometFilter\n                                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q53.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometFilter\n                                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :                 +- SubqueryBroadcast\n                                 :     :                    +- BroadcastExchange\n                                 :     :                       +- CometNativeColumnarToRow\n                                 :     :                          +- CometProject\n                                 :     :                             +- CometFilter\n                                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 27 out of 33 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q54.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +- BroadcastHashJoin\n                              :- Project\n                              :  +- BroadcastHashJoin\n                              :     :- Project\n                              :     :  +- BroadcastHashJoin\n                              :     :     :- Project\n                              :     :     :  +- BroadcastHashJoin\n                              :     :     :     :- CometNativeColumnarToRow\n                              :     :     :     :  +- CometHashAggregate\n                              :     :     :     :     +- CometColumnarExchange\n                              :     :     :     :        +- HashAggregate\n                              :     :     :     :           +- Project\n                              :     :     :     :              +- BroadcastHashJoin\n                              :     :     :     :                 :- Project\n                              :     :     :     :                 :  +- BroadcastHashJoin\n                              :     :     :     :                 :     :- Project\n                              :     :     :     :                 :     :  +- BroadcastHashJoin\n                              :     :     :     :                 :     :     :- Union\n                              :     :     :     :                 :     :     :  :- Project\n                              :     :     :     :                 :     :     :  :  +- Filter\n                              :     :     :     :                 :     :     :  :     +- ColumnarToRow\n                              :     :     :     :                 :     :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :                 :     :     :  :              +- SubqueryBroadcast\n                              :     :     :     :                 :     :     :  :                 +- BroadcastExchange\n                              :     :     :     :                 :     :     :  :                    +- CometNativeColumnarToRow\n                              :     :     :     :                 :     :     :  :                       +- CometProject\n                              :     :     :     :                 :     :     :  :                          +- CometFilter\n                              :     :     :     :                 :     :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :                 :     :     :  +- Project\n                              :     :     :     :                 :     :     :     +- Filter\n                              :     :     :     :                 :     :     :        +- ColumnarToRow\n                              :     :     :     :                 :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :                 :     :     :                 +- ReusedSubquery\n              
                :     :     :     :                 :     :     +- BroadcastExchange\n                              :     :     :     :                 :     :        +- CometNativeColumnarToRow\n                              :     :     :     :                 :     :           +- CometProject\n                              :     :     :     :                 :     :              +- CometFilter\n                              :     :     :     :                 :     :                 +- CometNativeScan parquet spark_catalog.default.item\n                              :     :     :     :                 :     +- BroadcastExchange\n                              :     :     :     :                 :        +- CometNativeColumnarToRow\n                              :     :     :     :                 :           +- CometProject\n                              :     :     :     :                 :              +- CometFilter\n                              :     :     :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :                 +- BroadcastExchange\n                              :     :     :     :                    +- CometNativeColumnarToRow\n                              :     :     :     :                       +- CometFilter\n                              :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.customer\n                              :     :     :     +- BroadcastExchange\n                              :     :     :        +- Filter\n                              :     :     :           +- ColumnarToRow\n                              :     :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :                    +- SubqueryBroadcast\n                              :     :     :                       +- BroadcastExchange\n                              :     :     :                          +- CometNativeColumnarToRow\n                              :     :     :                             +- CometProject\n                              :     :     :                                +- CometFilter\n                              :     :     :                                   :  :- Subquery\n                              :     :     :                                   :  :  +- CometNativeColumnarToRow\n                              :     :     :                                   :  :     +- CometHashAggregate\n                              :     :     :                                   :  :        +- CometExchange\n                              :     :     :                                   :  :           +- CometHashAggregate\n                              :     :     :                                   :  :              +- CometProject\n                              :     :     :                                   :  :                 +- CometFilter\n                              :     :     :                                   :  :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :                                   :  +- Subquery\n                              :     :     :                                   :     +- CometNativeColumnarToRow\n                              :     :     :                                   :        +- 
CometHashAggregate\n                              :     :     :                                   :           +- CometExchange\n                              :     :     :                                   :              +- CometHashAggregate\n                              :     :     :                                   :                 +- CometProject\n                              :     :     :                                   :                    +- CometFilter\n                              :     :     :                                   :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     +- BroadcastExchange\n                              :     :        +- CometNativeColumnarToRow\n                              :     :           +- CometProject\n                              :     :              +- CometFilter\n                              :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     +- BroadcastExchange\n                              :        +- CometNativeColumnarToRow\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.store\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          :  :- Subquery\n                                          :  :  +- CometNativeColumnarToRow\n                                          :  :     +- CometHashAggregate\n                                          :  :        +- CometExchange\n                                          :  :           +- CometHashAggregate\n                                          :  :              +- CometProject\n                                          :  :                 +- CometFilter\n                                          :  :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  +- Subquery\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometHashAggregate\n                                          :           +- CometExchange\n                                          :              +- CometHashAggregate\n                                          :                 +- CometProject\n                                          :                    +- CometFilter\n                                          :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 51 out of 96 eligible operators (53%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q54.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometBroadcastHashJoin\n                           :     :     :- CometProject\n                           :     :     :  +- CometBroadcastHashJoin\n                           :     :     :     :- CometHashAggregate\n                           :     :     :     :  +- CometExchange\n                           :     :     :     :     +- CometHashAggregate\n                           :     :     :     :        +- CometProject\n                           :     :     :     :           +- CometBroadcastHashJoin\n                           :     :     :     :              :- CometProject\n                           :     :     :     :              :  +- CometBroadcastHashJoin\n                           :     :     :     :              :     :- CometProject\n                           :     :     :     :              :     :  +- CometBroadcastHashJoin\n                           :     :     :     :              :     :     :- CometUnion\n                           :     :     :     :              :     :     :  :- CometProject\n                           :     :     :     :              :     :     :  :  +- CometFilter\n                           :     :     :     :              :     :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :     :     :     :              :     :     :  :           +- SubqueryBroadcast\n                           :     :     :     :              :     :     :  :              +- BroadcastExchange\n                           :     :     :     :              :     :     :  :                 +- CometNativeColumnarToRow\n                           :     :     :     :              :     :     :  :                    +- CometProject\n                           :     :     :     :              :     :     :  :                       +- CometFilter\n                           :     :     :     :              :     :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :     :              :     :     :  +- CometProject\n                           :     :     :     :              :     :     :     +- CometFilter\n                           :     :     :     :              :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :     :     :     :              :     :     :              +- ReusedSubquery\n                           :     :     :     :              :     :     +- CometBroadcastExchange\n                           :     :     :     :              :     :        +- CometProject\n                           :     :     :     :              :     :           +- CometFilter\n                           :     :     :     :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :     :     :              :     +- 
CometBroadcastExchange\n                           :     :     :     :              :        +- CometProject\n                           :     :     :     :              :           +- CometFilter\n                           :     :     :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :     :              +- CometBroadcastExchange\n                           :     :     :     :                 +- CometFilter\n                           :     :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     :     :     +- CometBroadcastExchange\n                           :     :     :        +- CometFilter\n                           :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :     :                 +- SubqueryBroadcast\n                           :     :     :                    +- BroadcastExchange\n                           :     :     :                       +- CometNativeColumnarToRow\n                           :     :     :                          +- CometProject\n                           :     :     :                             +- CometFilter\n                           :     :     :                                :  :- Subquery\n                           :     :     :                                :  :  +- CometNativeColumnarToRow\n                           :     :     :                                :  :     +- CometHashAggregate\n                           :     :     :                                :  :        +- CometExchange\n                           :     :     :                                :  :           +- CometHashAggregate\n                           :     :     :                                :  :              +- CometProject\n                           :     :     :                                :  :                 +- CometFilter\n                           :     :     :                                :  :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :                                :  +- Subquery\n                           :     :     :                                :     +- CometNativeColumnarToRow\n                           :     :     :                                :        +- CometHashAggregate\n                           :     :     :                                :           +- CometExchange\n                           :     :     :                                :              +- CometHashAggregate\n                           :     :     :                                :                 +- CometProject\n                           :     :     :                                :                    +- CometFilter\n                           :     :     :                                :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     +- CometBroadcastExchange\n                           :     :        +- CometProject\n                           :     :           +- CometFilter\n                           :     :              +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.customer_address\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    :  :- ReusedSubquery\n                                    :  +- ReusedSubquery\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 75 out of 84 eligible operators (89%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q55.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q55.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q56.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- SubqueryBroadcast\n               :                 :     :     :              +- BroadcastExchange\n               :                 :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :                    +- CometProject\n               :                 :     :     :                       +- CometFilter\n               :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :        
         :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- ReusedSubquery\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :           +- ReusedSubquery\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n             
                          +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometFilter\n                                             :  +- CometNativeScan parquet spark_catalog.default.item\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 49 out of 96 eligible operators (51%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q56.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :     :     :        +- SubqueryBroadcast\n               :              :     :     :           +- BroadcastExchange\n               :              :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :                 +- CometProject\n               :              :     :     :                    +- CometFilter\n               :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :     :        +- ReusedSubquery\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              
+- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :     :        +- ReusedSubquery\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 92 out of 96 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q57.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                        
          +- CometNativeScan parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). 
To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.call_center\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q57.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q58.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Filter\n      :     :  +- HashAggregate\n      :     :     +- CometNativeColumnarToRow\n      :     :        +- CometColumnarExchange\n      :     :           +- HashAggregate\n      :     :              +- Project\n      :     :                 +- BroadcastHashJoin\n      :     :                    :- Project\n      :     :                    :  +- BroadcastHashJoin\n      :     :                    :     :- Filter\n      :     :                    :     :  +- ColumnarToRow\n      :     :                    :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                    :     :           +- SubqueryBroadcast\n      :     :                    :     :              +- BroadcastExchange\n      :     :                    :     :                 +- CometNativeColumnarToRow\n      :     :                    :     :                    +- CometProject\n      :     :                    :     :                       +- CometBroadcastHashJoin\n      :     :                    :     :                          :- CometFilter\n      :     :                    :     :                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                    :     :                          +- CometBroadcastExchange\n      :     :                    :     :                             +- CometProject\n      :     :                    :     :                                +- CometFilter\n      :     :                    :     :                                   :  +- Subquery\n      :     :                    :     :                                   :     +- CometNativeColumnarToRow\n      :     :                    :     :                                   :        +- CometProject\n      :     :                    :     :                                   :           +- CometFilter\n      :     :                    :     :                                   :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                    :     +- BroadcastExchange\n      :     :                    :        +- CometNativeColumnarToRow\n      :     :                    :           +- CometProject\n      :     :                    :              +- CometFilter\n      :     :                    :                 +- CometNativeScan parquet spark_catalog.default.item\n      :     :                    +- BroadcastExchange\n      :     :                       +- CometNativeColumnarToRow\n      :     :                          +- CometProject\n      :     :                             +- CometBroadcastHashJoin\n      :     :                                :- CometFilter\n      :     :                                :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                +- CometBroadcastExchange\n      :     :                                   +- CometProject\n      :     :                                      +- CometFilter\n      :     :                                         :  +- Subquery\n      :     :                                         :     +- CometNativeColumnarToRow\n      :     :                                         
:        +- CometProject\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- Filter\n      :                             :     :  +- ColumnarToRow\n      :                             :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :     :           +- SubqueryBroadcast\n      :                             :     :              +- BroadcastExchange\n      :                             :     :                 +- CometNativeColumnarToRow\n      :                             :     :                    +- CometProject\n      :                             :     :                       +- CometBroadcastHashJoin\n      :                             :     :                          :- CometFilter\n      :                             :     :                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                             :     :                          +- CometBroadcastExchange\n      :                             :     :                             +- CometProject\n      :                             :     :                                +- CometFilter\n      :                             :     :                                   :  +- ReusedSubquery\n      :                             :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                             :     +- BroadcastExchange\n      :                             :        +- CometNativeColumnarToRow\n      :                             :           +- CometProject\n      :                             :              +- CometFilter\n      :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometProject\n      :                                      +- CometBroadcastHashJoin\n      :                                         :- CometFilter\n      :                                         :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- CometBroadcastExchange\n      :                                            +- CometProject\n      :                                               +- CometFilter\n      :                                                  :  +- ReusedSubquery\n      :                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- Filter\n            +- HashAggregate\n               +- 
CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +- BroadcastHashJoin\n                              :- Project\n                              :  +- BroadcastHashJoin\n                              :     :- Filter\n                              :     :  +- ColumnarToRow\n                              :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :           +- ReusedSubquery\n                              :     +- BroadcastExchange\n                              :        +- CometNativeColumnarToRow\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.item\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometBroadcastHashJoin\n                                          :- CometFilter\n                                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- CometBroadcastExchange\n                                             +- CometProject\n                                                +- CometFilter\n                                                   :  +- ReusedSubquery\n                                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 58 out of 108 eligible operators (53%). Final plan contains 16 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q58.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometFilter\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometBroadcastHashJoin\n         :     :                 :     :- CometFilter\n         :     :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                 :     :        +- SubqueryBroadcast\n         :     :                 :     :           +- BroadcastExchange\n         :     :                 :     :              +- CometNativeColumnarToRow\n         :     :                 :     :                 +- CometProject\n         :     :                 :     :                    +- CometBroadcastHashJoin\n         :     :                 :     :                       :- CometFilter\n         :     :                 :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :     :                       +- CometBroadcastExchange\n         :     :                 :     :                          +- CometProject\n         :     :                 :     :                             +- CometFilter\n         :     :                 :     :                                :  +- Subquery\n         :     :                 :     :                                :     +- CometNativeColumnarToRow\n         :     :                 :     :                                :        +- CometProject\n         :     :                 :     :                                :           +- CometFilter\n         :     :                 :     :                                :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :     +- CometBroadcastExchange\n         :     :                 :        +- CometProject\n         :     :                 :           +- CometFilter\n         :     :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometProject\n         :     :                       +- CometBroadcastHashJoin\n         :     :                          :- CometFilter\n         :     :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                          +- CometBroadcastExchange\n         :     :                             +- CometProject\n         :     :                                +- CometFilter\n         :     :                                   :  +- Subquery\n         :     :                                   :     +- CometNativeColumnarToRow\n         :     :                                   :        +- CometProject\n         :     :                                   :           +- CometFilter\n         :     :                                   :              +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometFilter\n         :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :                          :     :        +- SubqueryBroadcast\n         :                          :     :           +- BroadcastExchange\n         :                          :     :              +- CometNativeColumnarToRow\n         :                          :     :                 +- CometProject\n         :                          :     :                    +- CometBroadcastHashJoin\n         :                          :     :                       :- CometFilter\n         :                          :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                          :     :                       +- CometBroadcastExchange\n         :                          :     :                          +- CometProject\n         :                          :     :                             +- CometFilter\n         :                          :     :                                :  +- ReusedSubquery\n         :                          :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometProject\n         :                          :           +- CometFilter\n         :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                          +- CometBroadcastExchange\n         :                             +- CometProject\n         :                                +- CometBroadcastHashJoin\n         :                                   :- CometFilter\n         :                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                   +- CometBroadcastExchange\n         :                                      +- CometProject\n         :                                         +- CometFilter\n         :                                            :  +- ReusedSubquery\n         :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- ReusedSubquery\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                :  +- ReusedSubquery\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 98 out of 108 eligible operators (90%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q59.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometHashAggregate\n         :     :     :  +- CometExchange\n         :     :     :     +- CometHashAggregate\n         :     :     :        +- CometProject\n         :     :     :           +- CometBroadcastHashJoin\n         :     :     :              :- CometFilter\n         :     :     :              :  +- CometNativeScan parquet spark_catalog.default.store_sales\n         :     :     :              +- CometBroadcastExchange\n         :     :     :                 +- CometProject\n         :     :     :                    +- CometFilter\n         :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometProject\n         :     :           +- CometFilter\n         :     :              +- CometNativeScan parquet spark_catalog.default.store\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometHashAggregate\n                  :     :  +- CometExchange\n                  :     :     +- CometHashAggregate\n                  :     :        +- CometProject\n                  :     :           +- CometBroadcastHashJoin\n                  :     :              :- CometFilter\n                  :     :              :  +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     :              +- CometBroadcastExchange\n                  :     :                 +- CometProject\n                  :     :                    +- CometFilter\n                  :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 50 out of 50 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q59.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometHashAggregate\n         :     :     :  +- CometExchange\n         :     :     :     +- CometHashAggregate\n         :     :     :        +- CometProject\n         :     :     :           +- CometBroadcastHashJoin\n         :     :     :              :- CometFilter\n         :     :     :              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :              +- CometBroadcastExchange\n         :     :     :                 +- CometProject\n         :     :     :                    +- CometFilter\n         :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometProject\n         :     :           +- CometFilter\n         :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometHashAggregate\n                  :     :  +- CometExchange\n                  :     :     +- CometHashAggregate\n                  :     :        +- CometProject\n                  :     :           +- CometBroadcastHashJoin\n                  :     :              :- CometFilter\n                  :     :              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :              +- CometBroadcastExchange\n                  :     :                 +- CometProject\n                  :     :                    +- CometFilter\n                  :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 50 out of 50 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q6.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- CometNativeColumnarToRow\n                     :     :     :  +- CometProject\n                     :     :     :     +- CometBroadcastHashJoin\n                     :     :     :        :- CometProject\n                     :     :     :        :  +- CometFilter\n                     :     :     :        :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     :     :        +- CometBroadcastExchange\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     +- BroadcastExchange\n                     :     :        +- Filter\n                     :     :           +- ColumnarToRow\n                     :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :                    +- SubqueryBroadcast\n                     :     :                       +- BroadcastExchange\n                     :     :                          +- CometNativeColumnarToRow\n                     :     :                             +- CometProject\n                     :     :                                +- CometFilter\n                     :     :                                   :  +- Subquery\n                     :     :                                   :     +- CometNativeColumnarToRow\n                     :     :                                   :        +- CometHashAggregate\n                     :     :                                   :           +- CometExchange\n                     :     :                                   :              +- CometHashAggregate\n                     :     :                                   :                 +- CometProject\n                     :     :                                   :                    +- CometFilter\n                     :     :                                   :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 :  +- Subquery\n                     :                 :     +- CometNativeColumnarToRow\n                     :                 :        +- CometHashAggregate\n                     :                 :           +- CometExchange\n                     :                 :              +- CometHashAggregate\n                     :                 :                 +- CometProject\n                     :                 :                    +- CometFilter\n                     :                 :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :       
          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometFilter\n                                 :  +- CometNativeScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 39 out of 58 eligible operators (67%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q6.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometFilter\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometFilter\n                     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometFilter\n                     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :                 +- SubqueryBroadcast\n                     :     :                    +- BroadcastExchange\n                     :     :                       +- CometNativeColumnarToRow\n                     :     :                          +- CometProject\n                     :     :                             +- CometFilter\n                     :     :                                :  +- Subquery\n                     :     :                                :     +- CometNativeColumnarToRow\n                     :     :                                :        +- CometHashAggregate\n                     :     :                                :           +- CometExchange\n                     :     :                                :              +- CometHashAggregate\n                     :     :                                :                 +- CometProject\n                     :     :                                :                    +- CometFilter\n                     :     :                                :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              :  +- ReusedSubquery\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometFilter\n                              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n    
                                            +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 48 out of 52 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q60.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- SubqueryBroadcast\n               :                 :     :     :              +- BroadcastExchange\n               :                 :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :                    +- CometProject\n               :                 :     :     :                       +- CometFilter\n               :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :        
         :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- ReusedSubquery\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :           +- ReusedSubquery\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n             
                          +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometFilter\n                                             :  +- CometNativeScan parquet spark_catalog.default.item\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 49 out of 96 eligible operators (51%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q60.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :     :     :        +- SubqueryBroadcast\n               :              :     :     :           +- BroadcastExchange\n               :              :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :                 +- CometProject\n               :              :     :     :                    +- CometFilter\n               :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :     :        +- ReusedSubquery\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              
+- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :     :        +- ReusedSubquery\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 92 out of 96 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q61.native_datafusion/extended.txt",
    "content": "Project\n+- BroadcastNestedLoopJoin\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- SubqueryBroadcast\n   :                 :     :     :     :     :     :              +- BroadcastExchange\n   :                 :     :     :     :     :     :                 +- CometNativeColumnarToRow\n   :                 :     :     :     :     :     :                    +- CometProject\n   :                 :     :     :     :     :     :                       +- CometFilter\n   :                 :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.promotion\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometProject\n   :                 :     :     :              +- CometFilter\n   :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometFilter\n   :                 :     :              +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :        
               +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   +- BroadcastExchange\n      +- HashAggregate\n         +- CometNativeColumnarToRow\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- Project\n                        :  +- BroadcastHashJoin\n                        :     :- Project\n                        :     :  +- BroadcastHashJoin\n                        :     :     :- Project\n                        :     :     :  +- BroadcastHashJoin\n                        :     :     :     :- Project\n                        :     :     :     :  +- BroadcastHashJoin\n                        :     :     :     :     :- Filter\n                        :     :     :     :     :  +- ColumnarToRow\n                        :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :     :     :     :     :           +- ReusedSubquery\n                        :     :     :     :     +- BroadcastExchange\n                        :     :     :     :        +- CometNativeColumnarToRow\n                        :     :     :     :           +- CometProject\n                        :     :     :     :              +- CometFilter\n                        :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store\n                        :     :     :     +- BroadcastExchange\n                        :     :     :        +- CometNativeColumnarToRow\n                        :     :     :           +- CometProject\n                        :     :     :              +- CometFilter\n                        :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     :     +- BroadcastExchange\n                        :     :        +- CometNativeColumnarToRow\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                        :     +- BroadcastExchange\n                        :        +- CometNativeColumnarToRow\n                        :           +- CometProject\n                        :              +- CometFilter\n                        :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- BroadcastExchange\n                           +- CometNativeColumnarToRow\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 36 out of 83 eligible operators (43%). Final plan contains 16 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q61.native_iceberg_compat/extended.txt",
    "content": "Project\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n   :- CometNativeColumnarToRow\n   :  +- CometHashAggregate\n   :     +- CometExchange\n   :        +- CometHashAggregate\n   :           +- CometProject\n   :              +- CometBroadcastHashJoin\n   :                 :- CometProject\n   :                 :  +- CometBroadcastHashJoin\n   :                 :     :- CometProject\n   :                 :     :  +- CometBroadcastHashJoin\n   :                 :     :     :- CometProject\n   :                 :     :     :  +- CometBroadcastHashJoin\n   :                 :     :     :     :- CometProject\n   :                 :     :     :     :  +- CometBroadcastHashJoin\n   :                 :     :     :     :     :- CometProject\n   :                 :     :     :     :     :  +- CometBroadcastHashJoin\n   :                 :     :     :     :     :     :- CometFilter\n   :                 :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n   :                 :     :     :     :     :     :        +- SubqueryBroadcast\n   :                 :     :     :     :     :     :           +- BroadcastExchange\n   :                 :     :     :     :     :     :              +- CometNativeColumnarToRow\n   :                 :     :     :     :     :     :                 +- CometProject\n   :                 :     :     :     :     :     :                    +- CometFilter\n   :                 :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n   :                 :     :     :     :     :     +- CometBroadcastExchange\n   :                 :     :     :     :     :        +- CometProject\n   :                 :     :     :     :     :           +- CometFilter\n   :                 :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n   :                 :     :     :     :     +- CometBroadcastExchange\n   :                 :     :     :     :        +- CometProject\n   :                 :     :     :     :           +- CometFilter\n   :                 :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n   :                 :     :     :     +- CometBroadcastExchange\n   :                 :     :     :        +- CometProject\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n   :                 :     :     +- CometBroadcastExchange\n   :                 :     :        +- CometFilter\n   :                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n   :                 :     +- CometBroadcastExchange\n   :                 :        +- CometProject\n   :                 :           +- CometFilter\n   :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n   :                 +- CometBroadcastExchange\n   :                    +- CometProject\n   :                       +- CometFilter\n   :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n   +- BroadcastExchange\n      +- CometNativeColumnarToRow\n         +- CometHashAggregate\n            +- CometExchange\n               +- 
CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometBroadcastHashJoin\n                        :     :     :     :- CometProject\n                        :     :     :     :  +- CometBroadcastHashJoin\n                        :     :     :     :     :- CometFilter\n                        :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                        :     :     :     :     :        +- ReusedSubquery\n                        :     :     :     :     +- CometBroadcastExchange\n                        :     :     :     :        +- CometProject\n                        :     :     :     :           +- CometFilter\n                        :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                        :     :     :     +- CometBroadcastExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometFilter\n                        :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometFilter\n                        :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 77 out of 83 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q62.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometNativeScan parquet spark_catalog.default.web_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometNativeScan parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.web_site\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q62.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q63.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- CometNativeColumnarToRow\n                                    :     :     :  +- CometProject\n                                    :     :     :     +- CometFilter\n                                    :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- Filter\n                                    :     :           +- ColumnarToRow\n                                    :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :                    +- SubqueryBroadcast\n                                    :     :                       +- BroadcastExchange\n                                    :     :                          +- CometNativeColumnarToRow\n                                    :     :                             +- CometProject\n                                    :     :                                +- CometFilter\n                                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q63.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometFilter\n                                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :                 +- SubqueryBroadcast\n                                 :     :                    +- BroadcastExchange\n                                 :     :                       +- CometNativeColumnarToRow\n                                 :     :                          +- CometProject\n                                 :     :                             +- CometFilter\n                                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 27 out of 33 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q64.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n            :                 :     :   
  :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     
                  +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     
:     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                
 :        +- CometFilter\n            :                 :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometNativeScan parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     
:     :     :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                           
   :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :   
  :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n             
                 :     :        +- CometFilter\n                              :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 228 out of 242 eligible operators (94%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q64.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n            :                 :     :     :     
:     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             
:- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometFilter\n            :       
          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :   
  :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     : 
    :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     : 
    :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometFilter\n                              :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 238 out of 242 eligible operators (98%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q65.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- CometNativeColumnarToRow\n      :     :     :  +- CometFilter\n      :     :     :     +- CometNativeScan parquet spark_catalog.default.store\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- Project\n      :     :                          +- BroadcastHashJoin\n      :     :                             :- Filter\n      :     :                             :  +- ColumnarToRow\n      :     :                             :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                             :           +- SubqueryBroadcast\n      :     :                             :              +- BroadcastExchange\n      :     :                             :                 +- CometNativeColumnarToRow\n      :     :                             :                    +- CometProject\n      :     :                             :                       +- CometFilter\n      :     :                             :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                             +- BroadcastExchange\n      :     :                                +- CometNativeColumnarToRow\n      :     :                                   +- CometProject\n      :     :                                      +- CometFilter\n      :     :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.item\n      +- BroadcastExchange\n         +- Filter\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Filter\n                                          :  +- ColumnarToRow\n                                          :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :           +- ReusedSubquery\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 17 out of 48 eligible operators (35%). 
Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q65.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometProject\n         :     :                       +- CometBroadcastHashJoin\n         :     :                          :- CometFilter\n         :     :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                          :        +- SubqueryBroadcast\n         :     :                          :           +- BroadcastExchange\n         :     :                          :              +- CometNativeColumnarToRow\n         :     :                          :                 +- CometProject\n         :     :                          :                    +- CometFilter\n         :     :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                          +- CometBroadcastExchange\n         :     :                             +- CometProject\n         :     :                                +- CometFilter\n         :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :        +- ReusedSubquery\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 45 out of 48 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q66.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Project\n               :                 :     :     :  +- BroadcastHashJoin\n               :                 :     :     :     :- Filter\n               :                 :     :     :     :  +- ColumnarToRow\n               :                 :     :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :     :           +- SubqueryBroadcast\n               :                 :     :     :     :              +- BroadcastExchange\n               :                 :     :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :     :                    +- CometFilter\n               :                 :     :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     :     +- BroadcastExchange\n               :                 :     :     :        +- CometNativeColumnarToRow\n               :                 :     :     :           +- CometProject\n               :                 :     :     :              +- CometFilter\n               :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.warehouse\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometFilter\n               :                 :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.time_dim\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometNativeScan parquet spark_catalog.default.ship_mode\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                
 :     :     :- Project\n                                 :     :     :  +- BroadcastHashJoin\n                                 :     :     :     :- Filter\n                                 :     :     :     :  +- ColumnarToRow\n                                 :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :     :           +- ReusedSubquery\n                                 :     :     :     +- BroadcastExchange\n                                 :     :     :        +- CometNativeColumnarToRow\n                                 :     :     :           +- CometProject\n                                 :     :     :              +- CometFilter\n                                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.warehouse\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometFilter\n                                 :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.time_dim\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.ship_mode\n\nComet accelerated 27 out of 66 eligible operators (40%). Final plan contains 14 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q66.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometProject\n               :              :     :     :  +- CometBroadcastHashJoin\n               :              :     :     :     :- CometFilter\n               :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :              :     :     :     :        +- SubqueryBroadcast\n               :              :     :     :     :           +- BroadcastExchange\n               :              :     :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :     :                 +- CometFilter\n               :              :     :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     :     +- CometBroadcastExchange\n               :              :     :     :        +- CometProject\n               :              :     :     :           +- CometFilter\n               :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometFilter\n               :              :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometFilter\n               :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometFilter\n                              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :     :     :        +- ReusedSubquery\n                              :     :     :     +- CometBroadcastExchange\n                              :  
   :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometFilter\n                              :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n\nComet accelerated 63 out of 66 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q67.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- Filter\n                                    :     :     :  +- ColumnarToRow\n                                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :     :           +- SubqueryBroadcast\n                                    :     :     :              +- BroadcastExchange\n                                    :     :     :                 +- CometNativeColumnarToRow\n                                    :     :     :                    +- CometProject\n                                    :     :     :                       +- CometFilter\n                                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometProject\n                                    :     :              +- CometFilter\n                                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.store\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 15 out of 34 eligible operators (44%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q67.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometExpand\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :     :        +- SubqueryBroadcast\n                                 :     :     :           +- BroadcastExchange\n                                 :     :     :              +- CometNativeColumnarToRow\n                                 :     :     :                 +- CometProject\n                                 :     :     :                    +- CometFilter\n                                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 29 out of 34 eligible operators (85%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q68.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- HashAggregate\n      :     :  +- CometNativeColumnarToRow\n      :     :     +- CometColumnarExchange\n      :     :        +- HashAggregate\n      :     :           +- Project\n      :     :              +- BroadcastHashJoin\n      :     :                 :- Project\n      :     :                 :  +- BroadcastHashJoin\n      :     :                 :     :- Project\n      :     :                 :     :  +- BroadcastHashJoin\n      :     :                 :     :     :- Project\n      :     :                 :     :     :  +- BroadcastHashJoin\n      :     :                 :     :     :     :- Filter\n      :     :                 :     :     :     :  +- ColumnarToRow\n      :     :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                 :     :     :     :           +- SubqueryBroadcast\n      :     :                 :     :     :     :              +- BroadcastExchange\n      :     :                 :     :     :     :                 +- CometNativeColumnarToRow\n      :     :                 :     :     :     :                    +- CometProject\n      :     :                 :     :     :     :                       +- CometFilter\n      :     :                 :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     :     +- BroadcastExchange\n      :     :                 :     :     :        +- CometNativeColumnarToRow\n      :     :                 :     :     :           +- CometProject\n      :     :                 :     :     :              +- CometFilter\n      :     :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     +- BroadcastExchange\n      :     :                 :     :        +- CometNativeColumnarToRow\n      :     :                 :     :           +- CometProject\n      :     :                 :     :              +- CometFilter\n      :     :                 :     :                 +- CometNativeScan parquet spark_catalog.default.store\n      :     :                 :     +- BroadcastExchange\n      :     :                 :        +- CometNativeColumnarToRow\n      :     :                 :           +- CometProject\n      :     :                 :              +- CometFilter\n      :     :                 :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n      :     :                 +- BroadcastExchange\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometFilter\n      :     :                          +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometFilter\n               +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 20 out of 45 eligible operators (44%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q68.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometHashAggregate\n         :     :  +- CometExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometProject\n         :     :           +- CometBroadcastHashJoin\n         :     :              :- CometProject\n         :     :              :  +- CometBroadcastHashJoin\n         :     :              :     :- CometProject\n         :     :              :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :- CometProject\n         :     :              :     :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :     :- CometFilter\n         :     :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :              :     :     :     :        +- SubqueryBroadcast\n         :     :              :     :     :     :           +- BroadcastExchange\n         :     :              :     :     :     :              +- CometNativeColumnarToRow\n         :     :              :     :     :     :                 +- CometProject\n         :     :              :     :     :     :                    +- CometFilter\n         :     :              :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     :     +- CometBroadcastExchange\n         :     :              :     :     :        +- CometProject\n         :     :              :     :     :           +- CometFilter\n         :     :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     +- CometBroadcastExchange\n         :     :              :     :        +- CometProject\n         :     :              :     :           +- CometFilter\n         :     :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     :              :     +- CometBroadcastExchange\n         :     :              :        +- CometProject\n         :     :              :           +- CometFilter\n         :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         :     :              +- CometBroadcastExchange\n         :     :                 +- CometFilter\n         :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 43 out of 45 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q69.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- BroadcastHashJoin\n                  :     :     :  :- BroadcastHashJoin\n                  :     :     :  :  :- CometNativeColumnarToRow\n                  :     :     :  :  :  +- CometFilter\n                  :     :     :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :     :  :  +- BroadcastExchange\n                  :     :     :  :     +- Project\n                  :     :     :  :        +- BroadcastHashJoin\n                  :     :     :  :           :- ColumnarToRow\n                  :     :     :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :  :           :        +- SubqueryBroadcast\n                  :     :     :  :           :           +- BroadcastExchange\n                  :     :     :  :           :              +- CometNativeColumnarToRow\n                  :     :     :  :           :                 +- CometProject\n                  :     :     :  :           :                    +- CometFilter\n                  :     :     :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :  :           +- BroadcastExchange\n                  :     :     :  :              +- CometNativeColumnarToRow\n                  :     :     :  :                 +- CometProject\n                  :     :     :  :                    +- CometFilter\n                  :     :     :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- Project\n                  :     :     :        +- BroadcastHashJoin\n                  :     :     :           :- ColumnarToRow\n                  :     :     :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           :        +- ReusedSubquery\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- Project\n                  :     :           +- BroadcastHashJoin\n                  :     :              :- ColumnarToRow\n                  :     :              :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :              :        +- ReusedSubquery\n                  :     :              +- BroadcastExchange\n                  :     :                 +- CometNativeColumnarToRow\n                  :     :                    +- CometProject\n                  :    
 :                       +- CometFilter\n                  :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 53 eligible operators (39%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q69.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :-  BroadcastHashJoin [COMET: BuildRight with LeftAnti is not supported]\n                  :     :     :  :- CometNativeColumnarToRow\n                  :     :     :  :  +- CometBroadcastHashJoin\n                  :     :     :  :     :- CometFilter\n                  :     :     :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :     :  :     +- CometBroadcastExchange\n                  :     :     :  :        +- CometProject\n                  :     :     :  :           +- CometBroadcastHashJoin\n                  :     :     :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :  :              :     +- SubqueryBroadcast\n                  :     :     :  :              :        +- BroadcastExchange\n                  :     :     :  :              :           +- CometNativeColumnarToRow\n                  :     :     :  :              :              +- CometProject\n                  :     :     :  :              :                 +- CometFilter\n                  :     :     :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :  :              +- CometBroadcastExchange\n                  :     :     :  :                 +- CometProject\n                  :     :     :  :                    +- CometFilter\n                  :     :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- CometNativeColumnarToRow\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometBroadcastHashJoin\n                  :     :     :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :     :              :     +- ReusedSubquery\n                  :     :     :              +- CometBroadcastExchange\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometBroadcastHashJoin\n                  :     :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                 :     +- ReusedSubquery\n                  :     :                 +- CometBroadcastExchange\n                  :     :                    +- CometProject\n                  :     :                       +- CometFilter\n                  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- 
BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 53 eligible operators (66%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q7.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Filter\n                  :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :              +- BroadcastExchange\n                  :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :                    +- CometProject\n                  :     :     :     :                       +- CometFilter\n                  :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.item\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 16 out of 35 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q7.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :           +- BroadcastExchange\n                  :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :                 +- CometProject\n                  :     :     :     :                    +- CometFilter\n                  :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 33 out of 35 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q70.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Filter\n                                    :     :  +- ColumnarToRow\n                                    :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :           +- SubqueryBroadcast\n                                    :     :              +- BroadcastExchange\n                                    :     :                 +- CometNativeColumnarToRow\n                                    :     :                    +- CometProject\n                                    :     :                       +- CometFilter\n                                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- CometNativeColumnarToRow\n                                             :  +- CometFilter\n                                             :     +- CometNativeScan parquet spark_catalog.default.store\n                                             +- BroadcastExchange\n                                                +- Project\n                                                   +- Filter\n                                                      +- Window\n                                                         +- Sort\n                                                            +- HashAggregate\n                                                               +- CometNativeColumnarToRow\n                                                                  +- CometColumnarExchange\n                                                                     +- HashAggregate\n                                                                        +- Project\n                                                                           +- BroadcastHashJoin\n                                                                              :- Project\n                                                                              :  +- 
BroadcastHashJoin\n                                                                              :     :- Filter\n                                                                              :     :  +- ColumnarToRow\n                                                                              :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                              :     :           +- ReusedSubquery\n                                                                              :     +- BroadcastExchange\n                                                                              :        +- CometNativeColumnarToRow\n                                                                              :           +- CometProject\n                                                                              :              +- CometFilter\n                                                                              :                 +- CometNativeScan parquet spark_catalog.default.store\n                                                                              +- BroadcastExchange\n                                                                                 +- CometNativeColumnarToRow\n                                                                                    +- CometProject\n                                                                                       +- CometFilter\n                                                                                          +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 18 out of 52 eligible operators (34%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q70.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- CometNativeColumnarToRow\n                                    :  +- CometProject\n                                    :     +- CometBroadcastHashJoin\n                                    :        :- CometFilter\n                                    :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :        :        +- SubqueryBroadcast\n                                    :        :           +- BroadcastExchange\n                                    :        :              +- CometNativeColumnarToRow\n                                    :        :                 +- CometProject\n                                    :        :                    +- CometFilter\n                                    :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :        +- CometBroadcastExchange\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- CometNativeColumnarToRow\n                                             :  +- CometFilter\n                                             :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                             +- BroadcastExchange\n                                                +- Project\n                                                   +- Filter\n                                                      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                         +- CometNativeColumnarToRow\n                                                            +- CometSort\n                                                               +- CometHashAggregate\n                                                                  +- CometExchange\n                                                                     +- CometHashAggregate\n                                                                        +- CometProject\n                                                                           +- CometBroadcastHashJoin\n                                                                              :- CometProject\n                                                                              :  +- CometBroadcastHashJoin\n                                                                              :     :- CometFilter\n                                                                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                              :     :        +- ReusedSubquery\n                                                                              :     +- CometBroadcastExchange\n                                                                              :        +- CometProject\n                                                                              :           +- CometFilter\n                                                                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                                              +- CometBroadcastExchange\n                                                                                 +- CometProject\n                                                                                    +- CometFilter\n                                                                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 34 out of 52 eligible operators (65%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q71.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- CometNativeColumnarToRow\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- Project\n                        :  +- BroadcastHashJoin\n                        :     :- BroadcastExchange\n                        :     :  +- CometNativeColumnarToRow\n                        :     :     +- CometProject\n                        :     :        +- CometFilter\n                        :     :           +- CometNativeScan parquet spark_catalog.default.item\n                        :     +- Union\n                        :        :- Project\n                        :        :  +- BroadcastHashJoin\n                        :        :     :- Filter\n                        :        :     :  +- ColumnarToRow\n                        :        :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :        :     :           +- SubqueryBroadcast\n                        :        :     :              +- BroadcastExchange\n                        :        :     :                 +- CometNativeColumnarToRow\n                        :        :     :                    +- CometProject\n                        :        :     :                       +- CometFilter\n                        :        :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :        :     +- BroadcastExchange\n                        :        :        +- CometNativeColumnarToRow\n                        :        :           +- CometProject\n                        :        :              +- CometFilter\n                        :        :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :        :- Project\n                        :        :  +- BroadcastHashJoin\n                        :        :     :- Filter\n                        :        :     :  +- ColumnarToRow\n                        :        :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :        :     :           +- ReusedSubquery\n                        :        :     +- BroadcastExchange\n                        :        :        +- CometNativeColumnarToRow\n                        :        :           +- CometProject\n                        :        :              +- CometFilter\n                        :        :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :        +- Project\n                        :           +- BroadcastHashJoin\n                        :              :- Filter\n                        :              :  +- ColumnarToRow\n                        :              :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :              :           +- ReusedSubquery\n                        :              +- BroadcastExchange\n                        :                 +- CometNativeColumnarToRow\n                        :                    +- CometProject\n                        :                       +- CometFilter\n            
            :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                        +- BroadcastExchange\n                           +- CometNativeColumnarToRow\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.time_dim\n\nComet accelerated 21 out of 49 eligible operators (42%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q71.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometBroadcastExchange\n                     :     :  +- CometProject\n                     :     :     +- CometFilter\n                     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     :     +- CometUnion\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometFilter\n                     :        :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :        :     :        +- SubqueryBroadcast\n                     :        :     :           +- BroadcastExchange\n                     :        :     :              +- CometNativeColumnarToRow\n                     :        :     :                 +- CometProject\n                     :        :     :                    +- CometFilter\n                     :        :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometFilter\n                     :        :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :        :     :        +- ReusedSubquery\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        +- CometProject\n                     :           +- CometBroadcastHashJoin\n                     :              :- CometFilter\n                     :              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :              :        +- ReusedSubquery\n                     :              +- CometBroadcastExchange\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n\nComet accelerated 45 out of 49 eligible operators (91%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q72.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometColumnarExchange\n                  :     +- Project\n                  :        +- BroadcastHashJoin\n                  :           :- Project\n                  :           :  +- BroadcastHashJoin\n                  :           :     :- Project\n                  :           :     :  +- BroadcastHashJoin\n                  :           :     :     :- Project\n                  :           :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :- Project\n                  :           :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :- Project\n                  :           :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- Filter\n                  :           :     :     :     :     :     :     :     :     :  +- ColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :           :     :     :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :              +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                    +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                       +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n               
   :           :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :           +- CometProject\n                  :           :     :     :     :     :              +- CometFilter\n                  :           :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :           +- CometProject\n                  :           :     :     :     :              +- CometFilter\n                  :           :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- BroadcastExchange\n                  :           :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :           +- CometProject\n                  :           :     :     :              +- CometFilter\n                  :           :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     +- BroadcastExchange\n                  :           :     :        +- CometNativeColumnarToRow\n                  :           :     :           +- CometFilter\n                  :           :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     +- BroadcastExchange\n                  :           :        +- CometNativeColumnarToRow\n                  :           :           +- CometFilter\n                  :           :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           +- BroadcastExchange\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n\nComet accelerated 37 out of 68 eligible operators (54%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q72.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometExchange\n                  :     +- CometProject\n                  :        +- CometBroadcastHashJoin\n                  :           :- CometProject\n                  :           :  +- CometBroadcastHashJoin\n                  :           :     :- CometProject\n                  :           :     :  +- CometBroadcastHashJoin\n                  :           :     :     :- CometProject\n                  :           :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :- CometProject\n                  :           :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :- CometProject\n                  :           :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- CometFilter\n                  :           :     :     :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :           :     :     :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :           +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                 +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                    +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :           :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :        +- CometProject\n                  :           :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :        +- CometProject\n                  :           :     :     :     :           +- CometFilter\n                  :           :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- CometBroadcastExchange\n                  :           :     :     :        +- CometProject\n                  :           :     :     :           +- CometFilter\n                  :           :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     +- CometBroadcastExchange\n                  :           :     :        +- CometFilter\n                  :           :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     +- CometBroadcastExchange\n                  :           :        +- CometFilter\n                  :           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           +- CometBroadcastExchange\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n\nComet accelerated 66 out of 68 eligible operators (97%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q73.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Filter\n            :  +- HashAggregate\n            :     +- CometNativeColumnarToRow\n            :        +- CometColumnarExchange\n            :           +- HashAggregate\n            :              +- Project\n            :                 +- BroadcastHashJoin\n            :                    :- Project\n            :                    :  +- BroadcastHashJoin\n            :                    :     :- Project\n            :                    :     :  +- BroadcastHashJoin\n            :                    :     :     :- Filter\n            :                    :     :     :  +- ColumnarToRow\n            :                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                    :     :     :           +- SubqueryBroadcast\n            :                    :     :     :              +- BroadcastExchange\n            :                    :     :     :                 +- CometNativeColumnarToRow\n            :                    :     :     :                    +- CometProject\n            :                    :     :     :                       +- CometFilter\n            :                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     :     +- BroadcastExchange\n            :                    :     :        +- CometNativeColumnarToRow\n            :                    :     :           +- CometProject\n            :                    :     :              +- CometFilter\n            :                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     +- BroadcastExchange\n            :                    :        +- CometNativeColumnarToRow\n            :                    :           +- CometProject\n            :                    :              +- CometFilter\n            :                    :                 +- CometNativeScan parquet spark_catalog.default.store\n            :                    +- BroadcastExchange\n            :                       +- CometNativeColumnarToRow\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometNativeScan parquet spark_catalog.default.household_demographics\n            +- BroadcastExchange\n               +- CometNativeColumnarToRow\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 37 eligible operators (48%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q73.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometFilter\n            :  +- CometHashAggregate\n            :     +- CometExchange\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometFilter\n            :                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :        +- SubqueryBroadcast\n            :                 :     :     :           +- BroadcastExchange\n            :                 :     :     :              +- CometNativeColumnarToRow\n            :                 :     :     :                 +- CometProject\n            :                 :     :     :                    +- CometFilter\n            :                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometProject\n            :                 :     :           +- CometFilter\n            :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometProject\n            :                 :           +- CometFilter\n            :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 37 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q74.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- BroadcastHashJoin\n      :     :  :- Filter\n      :     :  :  +- HashAggregate\n      :     :  :     +- CometNativeColumnarToRow\n      :     :  :        +- CometColumnarExchange\n      :     :  :           +- HashAggregate\n      :     :  :              +- Project\n      :     :  :                 +- BroadcastHashJoin\n      :     :  :                    :- Project\n      :     :  :                    :  +- BroadcastHashJoin\n      :     :  :                    :     :- CometNativeColumnarToRow\n      :     :  :                    :     :  +- CometProject\n      :     :  :                    :     :     +- CometFilter\n      :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :  :                    :     +- BroadcastExchange\n      :     :  :                    :        +- Filter\n      :     :  :                    :           +- ColumnarToRow\n      :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :  :                    :                    +- SubqueryBroadcast\n      :     :  :                    :                       +- BroadcastExchange\n      :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :  :                    :                             +- CometFilter\n      :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  :                    +- BroadcastExchange\n      :     :  :                       +- CometNativeColumnarToRow\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  +- BroadcastExchange\n      :     :     +- HashAggregate\n      :     :        +- CometNativeColumnarToRow\n      :     :           +- CometColumnarExchange\n      :     :              +- HashAggregate\n      :     :                 +- Project\n      :     :                    +- BroadcastHashJoin\n      :     :                       :- Project\n      :     :                       :  +- BroadcastHashJoin\n      :     :                       :     :- CometNativeColumnarToRow\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                       :     +- BroadcastExchange\n      :     :                       :        +- Filter\n      :     :                       :           +- ColumnarToRow\n      :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                       :                    +- SubqueryBroadcast\n      :     :                       :                       +- BroadcastExchange\n      :     :                       :                          +- CometNativeColumnarToRow\n      :     :                       :                             +- CometFilter\n      :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n 
     :     :                       +- BroadcastExchange\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometFilter\n      :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 85 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q74.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometBroadcastHashJoin\n         :     :  :- CometFilter\n         :     :  :  +- CometHashAggregate\n         :     :  :     +- CometExchange\n         :     :  :        +- CometHashAggregate\n         :     :  :           +- CometProject\n         :     :  :              +- CometBroadcastHashJoin\n         :     :  :                 :- CometProject\n         :     :  :                 :  +- CometBroadcastHashJoin\n         :     :  :                 :     :- CometProject\n         :     :  :                 :     :  +- CometFilter\n         :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :  :                 :     +- CometBroadcastExchange\n         :     :  :                 :        +- CometFilter\n         :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :  :                 :                 +- SubqueryBroadcast\n         :     :  :                 :                    +- BroadcastExchange\n         :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :  :                 :                          +- CometFilter\n         :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  :                 +- CometBroadcastExchange\n         :     :  :                    +- CometFilter\n         :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  +- CometBroadcastExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometExchange\n         :     :           +- CometHashAggregate\n         :     :              +- CometProject\n         :     :                 +- CometBroadcastHashJoin\n         :     :                    :- CometProject\n         :     :                    :  +- CometBroadcastHashJoin\n         :     :                    :     :- CometProject\n         :     :                    :     :  +- CometFilter\n         :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                    :     +- CometBroadcastExchange\n         :     :                    :        +- CometFilter\n         :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                    :                 +- SubqueryBroadcast\n         :     :                    :                    +- BroadcastExchange\n         :     :                    :                       +- CometNativeColumnarToRow\n         :     :                    :                          +- CometFilter\n         :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                    +- CometBroadcastExchange\n         :     :                       +- CometFilter\n         :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- 
CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 79 out of 85 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q75.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n         :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- SubqueryBroadcast\n         :                             :     :           :     :              +- BroadcastExchange\n         :                             :     :           :     :                 +- CometNativeColumnarToRow\n         :                             :     :           :     :                    +- CometFilter\n         :                             :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n    
     :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- ReusedSubquery\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometColumnarExchange\n         :                                   :     +- Project\n         :                                   :        +- BroadcastHashJoin\n         :                                   :           :- Project\n         :                                   :           :  +- BroadcastHashJoin\n         :                                   :           :     :- Filter\n         :                                   :           :     :  +- ColumnarToRow\n         :                                   :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                                   :           :     :           +- ReusedSubquery\n         :                                   :           :     +- BroadcastExchange\n         :                                   :           :        +- CometNativeColumnarToRow\n         :                                   :           :           +- CometProject\n         :                                   :           :              +- CometFilter\n         :                                   :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                                   :           +- BroadcastExchange\n         :                                   :              +- 
CometNativeColumnarToRow\n         :                                   :                 +- CometFilter\n         :                                   :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometNativeScan parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- SubqueryBroadcast\n                                       :     :           :     :              +- BroadcastExchange\n                                       :     :           :     :                 +- CometNativeColumnarToRow\n                                       :     :           :     :                    +- CometFilter\n                                       :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                        
               :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- ReusedSubquery\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometColumnarExchange\n                                             :     +- Project\n                                             :        +- BroadcastHashJoin\n                                             :           :- Project\n                                             :           :  +- BroadcastHashJoin\n                                             :           :     :- Filter\n                                             :           :     :  +- ColumnarToRow\n                                             :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :           :     :           +- ReusedSubquery\n                                             :           :     +- BroadcastExchange\n                                             :           :        +- 
CometNativeColumnarToRow\n                                             :           :           +- CometProject\n                                             :           :              +- CometFilter\n                                             :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                             :           +- BroadcastExchange\n                                             :              +- CometNativeColumnarToRow\n                                             :                 +- CometFilter\n                                             :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.web_returns\n\nComet accelerated 111 out of 167 eligible operators (66%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q75.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :                             :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :                             :     :           :     :        +- SubqueryBroadcast\n         :                             :     :           :     :           +- BroadcastExchange\n         :                             :     :           :     :              +- CometNativeColumnarToRow\n         :                             :     :           :     :                 +- CometFilter\n         :                             :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :     
                        :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :                             :     :           :     :        +- ReusedSubquery\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometExchange\n         :                                   :     +- CometProject\n         :                                   :        +- CometBroadcastHashJoin\n         :                                   :           :- CometProject\n         :                                   :           :  +- CometBroadcastHashJoin\n         :                                   :           :     :- CometFilter\n         :                                   :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                                   :           :     :        +- ReusedSubquery\n         :                                   :           :     +- CometBroadcastExchange\n         :                                   :           :        +- CometProject\n         :                                   :           :           +- CometFilter\n         :                                   :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                                   :           +- CometBroadcastExchange\n         :                                   :              +- CometFilter\n         :                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n            
            +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :           :     :        +- SubqueryBroadcast\n                                       :     :           :     :           +- BroadcastExchange\n                                       :     :           :     :              +- CometNativeColumnarToRow\n                                       :     :           :     :                 +- CometFilter\n                                       :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :           :     :        +- 
ReusedSubquery\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometExchange\n                                             :     +- CometProject\n                                             :        +- CometBroadcastHashJoin\n                                             :           :- CometProject\n                                             :           :  +- CometBroadcastHashJoin\n                                             :           :     :- CometFilter\n                                             :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                             :           :     :        +- ReusedSubquery\n                                             :           :     +- CometBroadcastExchange\n                                             :           :        +- CometProject\n                                             :           :           +- CometFilter\n                                             :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                             :           +- CometBroadcastExchange\n                                             :              +- CometFilter\n                                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n\nComet accelerated 159 out of 167 eligible operators (95%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q76.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometNativeScan parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometNativeScan parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometNativeScan parquet spark_catalog.default.web_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometNativeScan parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometFilter\n                     :     :  +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometNativeScan parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 44 out of 44 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q76.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometFilter\n                     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 44 out of 44 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q77.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- HashAggregate\n                  :     :  +- CometNativeColumnarToRow\n                  :     :     +- CometColumnarExchange\n                  :     :        +- HashAggregate\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- Project\n                  :     :                 :  +- BroadcastHashJoin\n                  :     :                 :     :- Filter\n                  :     :                 :     :  +- ColumnarToRow\n                  :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :     :           +- SubqueryBroadcast\n                  :     :                 :     :              +- BroadcastExchange\n                  :     :                 :     :                 +- CometNativeColumnarToRow\n                  :     :                 :     :                    +- CometProject\n                  :     :                 :     :                       +- CometFilter\n                  :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                 :     +- BroadcastExchange\n                  :     :                 :        +- CometNativeColumnarToRow\n                  :     :                 :           +- CometProject\n                  :     :                 :              +- CometFilter\n                  :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                 +- BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometFilter\n                  :     :                          +- CometNativeScan parquet spark_catalog.default.store\n                  :     +- BroadcastExchange\n                  :        +- HashAggregate\n                  :           +- CometNativeColumnarToRow\n                  :              +- CometColumnarExchange\n                  :                 +- HashAggregate\n                  :                    +- Project\n                  :                       +- BroadcastHashJoin\n                  :                          :- Project\n                  :                          :  +- BroadcastHashJoin\n                  :                          :     :- Filter\n                  :                          :     :  +- ColumnarToRow\n                  :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                          :     :           +- ReusedSubquery\n                  :                          :     +- BroadcastExchange\n                  :                          :        +- CometNativeColumnarToRow\n                  :                          :           +- CometProject\n                  :                          :              +- CometFilter\n                  :                
          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                          +- BroadcastExchange\n                  :                             +- CometNativeColumnarToRow\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.store\n                  :- Project\n                  :  +- BroadcastNestedLoopJoin\n                  :     :- BroadcastExchange\n                  :     :  +- HashAggregate\n                  :     :     +- CometNativeColumnarToRow\n                  :     :        +- CometColumnarExchange\n                  :     :           +- HashAggregate\n                  :     :              +- Project\n                  :     :                 +- BroadcastHashJoin\n                  :     :                    :- ColumnarToRow\n                  :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                    :        +- ReusedSubquery\n                  :     :                    +- BroadcastExchange\n                  :     :                       +- CometNativeColumnarToRow\n                  :     :                          +- CometProject\n                  :     :                             +- CometFilter\n                  :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- HashAggregate\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometColumnarExchange\n                  :              +- HashAggregate\n                  :                 +- Project\n                  :                    +- BroadcastHashJoin\n                  :                       :- ColumnarToRow\n                  :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :        +- ReusedSubquery\n                  :                       +- BroadcastExchange\n                  :                          +- CometNativeColumnarToRow\n                  :                             +- CometProject\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- HashAggregate\n                        :  +- CometNativeColumnarToRow\n                        :     +- CometColumnarExchange\n                        :        +- HashAggregate\n                        :           +- Project\n                        :              +- BroadcastHashJoin\n                        :                 :- Project\n                        :                 :  +- BroadcastHashJoin\n                        :                 :     :- Filter\n                        :                 :     :  +- ColumnarToRow\n                        :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :                 :     :           +- ReusedSubquery\n                        :                 :     +- BroadcastExchange\n         
               :                 :        +- CometNativeColumnarToRow\n                        :                 :           +- CometProject\n                        :                 :              +- CometFilter\n                        :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :                 +- BroadcastExchange\n                        :                    +- CometNativeColumnarToRow\n                        :                       +- CometFilter\n                        :                          +- CometNativeScan parquet spark_catalog.default.web_page\n                        +- BroadcastExchange\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- Project\n                                             :  +- BroadcastHashJoin\n                                             :     :- Filter\n                                             :     :  +- ColumnarToRow\n                                             :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :     :           +- ReusedSubquery\n                                             :     +- BroadcastExchange\n                                             :        +- CometNativeColumnarToRow\n                                             :           +- CometProject\n                                             :              +- CometFilter\n                                             :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- BroadcastExchange\n                                                +- CometNativeColumnarToRow\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.web_page\n\nComet accelerated 36 out of 109 eligible operators (33%). Final plan contains 24 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q77.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- CometNativeColumnarToRow\n                  :  +- CometProject\n                  :     +- CometBroadcastHashJoin\n                  :        :- CometHashAggregate\n                  :        :  +- CometExchange\n                  :        :     +- CometHashAggregate\n                  :        :        +- CometProject\n                  :        :           +- CometBroadcastHashJoin\n                  :        :              :- CometProject\n                  :        :              :  +- CometBroadcastHashJoin\n                  :        :              :     :- CometFilter\n                  :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :        :              :     :        +- SubqueryBroadcast\n                  :        :              :     :           +- BroadcastExchange\n                  :        :              :     :              +- CometNativeColumnarToRow\n                  :        :              :     :                 +- CometProject\n                  :        :              :     :                    +- CometFilter\n                  :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        :              :     +- CometBroadcastExchange\n                  :        :              :        +- CometProject\n                  :        :              :           +- CometFilter\n                  :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        :              +- CometBroadcastExchange\n                  :        :                 +- CometFilter\n                  :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :        +- CometBroadcastExchange\n                  :           +- CometHashAggregate\n                  :              +- CometExchange\n                  :                 +- CometHashAggregate\n                  :                    +- CometProject\n                  :                       +- CometBroadcastHashJoin\n                  :                          :- CometProject\n                  :                          :  +- CometBroadcastHashJoin\n                  :                          :     :- CometFilter\n                  :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :                          :     :        +- ReusedSubquery\n                  :                          :     +- CometBroadcastExchange\n                  :                          :        +- CometProject\n                  :                          :           +- CometFilter\n                  :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                          +- CometBroadcastExchange\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :- Project\n                  :  +-  
BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n                  :     :- BroadcastExchange\n                  :     :  +- CometNativeColumnarToRow\n                  :     :     +- CometHashAggregate\n                  :     :        +- CometExchange\n                  :     :           +- CometHashAggregate\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometNativeColumnarToRow\n                  :        +- CometHashAggregate\n                  :           +- CometExchange\n                  :              +- CometHashAggregate\n                  :                 +- CometProject\n                  :                    +- CometBroadcastHashJoin\n                  :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :                       :     +- ReusedSubquery\n                  :                       +- CometBroadcastExchange\n                  :                          +- CometProject\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometProject\n                           :           +- CometBroadcastHashJoin\n                           :              :- CometProject\n                           :              :  +- CometBroadcastHashJoin\n                           :              :     :- CometFilter\n                           :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :              :     :        +- ReusedSubquery\n                           :              :     +- CometBroadcastExchange\n                           :              :        +- CometProject\n                           :              :           +- CometFilter\n                           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              +- CometBroadcastExchange\n                           :                 +- CometFilter\n                           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n                           +- CometBroadcastExchange\n                              +- CometHashAggregate\n                                 +- CometExchange\n                                    +- CometHashAggregate\n                                       +- CometProject\n                                
          +- CometBroadcastHashJoin\n                                             :- CometProject\n                                             :  +- CometBroadcastHashJoin\n                                             :     :- CometFilter\n                                             :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                             :     :        +- ReusedSubquery\n                                             :     +- CometBroadcastExchange\n                                             :        +- CometProject\n                                             :           +- CometFilter\n                                             :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometBroadcastExchange\n                                                +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n\nComet accelerated 94 out of 109 eligible operators (86%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q78.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometColumnarExchange\n         :     :                 :        :     +- Filter\n         :     :                 :        :        +- ColumnarToRow\n         :     :                 :        :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :     :                 :        :                 +- SubqueryBroadcast\n         :     :                 :        :                    +- BroadcastExchange\n         :     :                 :        :                       +- CometNativeColumnarToRow\n         :     :                 :        :                          +- CometFilter\n         :     :                 :        :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometColumnarExchange\n         :                          :        :     +- Filter\n         :                          :        :        +- ColumnarToRow\n         :                          :        :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                          :        :                 +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometNativeScan parquet spark_catalog.default.web_returns\n         :   
                       +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometColumnarExchange\n                              :        :     +- Filter\n                              :        :        +- ColumnarToRow\n                              :        :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :        :                 +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 64 out of 76 eligible operators (84%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q78.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometExchange\n         :     :                 :        :     +- CometFilter\n         :     :                 :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                 :        :              +- SubqueryBroadcast\n         :     :                 :        :                 +- BroadcastExchange\n         :     :                 :        :                    +- CometNativeColumnarToRow\n         :     :                 :        :                       +- CometFilter\n         :     :                 :        :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometExchange\n         :                          :        :     +- CometFilter\n         :                          :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :        :              +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometExchange\n                              :        :     +- CometFilter\n                              :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :        :              +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 70 out of 76 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q79.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- HashAggregate\n      :  +- CometNativeColumnarToRow\n      :     +- CometColumnarExchange\n      :        +- HashAggregate\n      :           +- Project\n      :              +- BroadcastHashJoin\n      :                 :- Project\n      :                 :  +- BroadcastHashJoin\n      :                 :     :- Project\n      :                 :     :  +- BroadcastHashJoin\n      :                 :     :     :- Filter\n      :                 :     :     :  +- ColumnarToRow\n      :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                 :     :     :           +- SubqueryBroadcast\n      :                 :     :     :              +- BroadcastExchange\n      :                 :     :     :                 +- CometNativeColumnarToRow\n      :                 :     :     :                    +- CometProject\n      :                 :     :     :                       +- CometFilter\n      :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                 :     :     +- BroadcastExchange\n      :                 :     :        +- CometNativeColumnarToRow\n      :                 :     :           +- CometProject\n      :                 :     :              +- CometFilter\n      :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                 :     +- BroadcastExchange\n      :                 :        +- CometNativeColumnarToRow\n      :                 :           +- CometProject\n      :                 :              +- CometFilter\n      :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n      :                 +- BroadcastExchange\n      :                    +- CometNativeColumnarToRow\n      :                       +- CometProject\n      :                          +- CometFilter\n      :                             +- CometNativeScan parquet spark_catalog.default.household_demographics\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 16 out of 35 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q79.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometHashAggregate\n         :  +- CometExchange\n         :     +- CometHashAggregate\n         :        +- CometProject\n         :           +- CometBroadcastHashJoin\n         :              :- CometProject\n         :              :  +- CometBroadcastHashJoin\n         :              :     :- CometProject\n         :              :     :  +- CometBroadcastHashJoin\n         :              :     :     :- CometFilter\n         :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :              :     :     :        +- SubqueryBroadcast\n         :              :     :     :           +- BroadcastExchange\n         :              :     :     :              +- CometNativeColumnarToRow\n         :              :     :     :                 +- CometProject\n         :              :     :     :                    +- CometFilter\n         :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :              :     :     +- CometBroadcastExchange\n         :              :     :        +- CometProject\n         :              :     :           +- CometFilter\n         :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :              :     +- CometBroadcastExchange\n         :              :        +- CometProject\n         :              :           +- CometFilter\n         :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :              +- CometBroadcastExchange\n         :                 +- CometProject\n         :                    +- CometFilter\n         :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 33 out of 35 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q8.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Filter\n                  :     :     :  +- ColumnarToRow\n                  :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           +- SubqueryBroadcast\n                  :     :     :              +- BroadcastExchange\n                  :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :                    +- CometProject\n                  :     :     :                       +- CometFilter\n                  :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometBroadcastHashJoin\n                                    :- CometProject\n                                    :  +- CometFilter\n                                    :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometHashAggregate\n                                                +- CometExchange\n                                                   +- CometHashAggregate\n                                                      +- CometProject\n                                                         +- CometBroadcastHashJoin\n                                                            :- CometProject\n                                                            :  +- CometFilter\n                                                            :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                                                            +- CometBroadcastExchange\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 32 out of 48 eligible operators (66%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q8.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometFilter\n                  :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :        +- SubqueryBroadcast\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometHashAggregate\n                        +- CometExchange\n                           +- CometHashAggregate\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometFilter\n                                 :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometHashAggregate\n                                             +- CometExchange\n                                                +- CometHashAggregate\n                                                   +- CometProject\n                                                      +- CometBroadcastHashJoin\n                                                         :- CometProject\n                                                         :  +- CometFilter\n                                                         :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                                                         +- CometBroadcastExchange\n                                                            +- CometProject\n                                                               +- CometFilter\n                                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 46 out of 48 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q80.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometColumnarExchange\n                  :              :     :     :     :     :     +- Filter\n                  :              :     :     :     :     :        +- ColumnarToRow\n                  :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :              :     :     :     :     :                 +- SubqueryBroadcast\n                  :              :     :     :     :     :                    +- BroadcastExchange\n                  :              :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :              :     :     :     :     :                          +- CometProject\n                  :              :     :     :     :     :                             +- CometFilter\n                  :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometNativeScan parquet spark_catalog.default.item\n                  :              
+- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometColumnarExchange\n                  :              :     :     :     :     :     +- Filter\n                  :              :     :     :     :     :        +- ColumnarToRow\n                  :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :              :     :     :     :     :                 +- ReusedSubquery\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometNativeScan parquet spark_catalog.default.item\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                 
                :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometBroadcastHashJoin\n                                 :     :     :     :- CometProject\n                                 :     :     :     :  +- CometSortMergeJoin\n                                 :     :     :     :     :- CometSort\n                                 :     :     :     :     :  +- CometColumnarExchange\n                                 :     :     :     :     :     +- Filter\n                                 :     :     :     :     :        +- ColumnarToRow\n                                 :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :     :     :                 +- ReusedSubquery\n                                 :     :     :     :     +- CometSort\n                                 :     :     :     :        +- CometExchange\n                                 :     :     :     :           +- CometProject\n                                 :     :     :     :              +- CometFilter\n                                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n                                 :     :     :     +- CometBroadcastExchange\n                                 :     :     :        +- CometProject\n                                 :     :     :           +- CometFilter\n                                 :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometNativeScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 117 out of 127 eligible operators (92%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q80.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometExchange\n                  :              :     :     :     :     :     +- CometFilter\n                  :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :              :     :     :     :     :              +- SubqueryBroadcast\n                  :              :     :     :     :     :                 +- BroadcastExchange\n                  :              :     :     :     :     :                    +- CometNativeColumnarToRow\n                  :              :     :     :     :     :                       +- CometProject\n                  :              :     :     :     :     :                          +- CometFilter\n                  :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :              +- CometBroadcastExchange\n                  :                 +- 
CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometExchange\n                  :              :     :     :     :     :     +- CometFilter\n                  :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :              :     :     :     :     :              +- ReusedSubquery\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- 
CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometBroadcastHashJoin\n                                 :     :     :     :- CometProject\n                                 :     :     :     :  +- CometSortMergeJoin\n                                 :     :     :     :     :- CometSort\n                                 :     :     :     :     :  +- CometExchange\n                                 :     :     :     :     :     +- CometFilter\n                                 :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :     :     :     :              +- ReusedSubquery\n                                 :     :     :     :     +- CometSort\n                                 :     :     :     :        +- CometExchange\n                                 :     :     :     :           +- CometProject\n                                 :     :     :     :              +- CometFilter\n                                 :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                 :     :     :     +- CometBroadcastExchange\n                                 :     :     :        +- CometProject\n                                 :     :     :           +- CometFilter\n                                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 123 out of 127 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q81.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Project\n      :     :     :                    :  +- BroadcastHashJoin\n      :     :     :                    :     :- Filter\n      :     :     :                    :     :  +- ColumnarToRow\n      :     :     :                    :     :     +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :     :           +- SubqueryBroadcast\n      :     :     :                    :     :              +- BroadcastExchange\n      :     :     :                    :     :                 +- CometNativeColumnarToRow\n      :     :     :                    :     :                    +- CometProject\n      :     :     :                    :     :                       +- CometFilter\n      :     :     :                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    :     +- BroadcastExchange\n      :     :     :                    :        +- CometNativeColumnarToRow\n      :     :     :                    :           +- CometProject\n      :     :     :                    :              +- CometFilter\n      :     :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Filter\n      :     :                                         :     :  +- ColumnarToRow\n      :     :                                         :     :     +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :           +- ReusedSubquery\n      :     :                                    
     :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometProject\n      :     :                                         :              +- CometFilter\n      :     :                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometProject\n      :     :                                                  +- CometFilter\n      :     :                                                     +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 24 out of 61 eligible operators (39%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q81.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometProject\n         :     :     :                 :  +- CometBroadcastHashJoin\n         :     :     :                 :     :- CometFilter\n         :     :     :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :     :     :                 :     :        +- SubqueryBroadcast\n         :     :     :                 :     :           +- BroadcastExchange\n         :     :     :                 :     :              +- CometNativeColumnarToRow\n         :     :     :                 :     :                 +- CometProject\n         :     :     :                 :     :                    +- CometFilter\n         :     :     :                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 :     +- CometBroadcastExchange\n         :     :     :                 :        +- CometProject\n         :     :     :                 :           +- CometFilter\n         :     :     :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometFilter\n         :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometHashAggregate\n         :     :                       +- CometExchange\n         :     :                          +- CometHashAggregate\n         :     :                             +- CometProject\n         :     :                                +- CometBroadcastHashJoin\n         :     :                                   :- CometProject\n         :     :                                   :  +- CometBroadcastHashJoin\n         :     :                                   :     :- CometFilter\n         :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :     :                                   :     :        +- ReusedSubquery\n         :     :                                   :     +- CometBroadcastExchange\n         :     :                                   :        +- CometProject\n         :     :                                   :           +- CometFilter\n         :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                   +- 
CometBroadcastExchange\n         :     :                                      +- CometProject\n         :     :                                         +- CometFilter\n         :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 58 out of 61 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q82.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- BroadcastExchange\n                  :  +- Project\n                  :     +- BroadcastHashJoin\n                  :        :- Project\n                  :        :  +- BroadcastHashJoin\n                  :        :     :- CometNativeColumnarToRow\n                  :        :     :  +- CometProject\n                  :        :     :     +- CometFilter\n                  :        :     :        +- CometNativeScan parquet spark_catalog.default.item\n                  :        :     +- BroadcastExchange\n                  :        :        +- Project\n                  :        :           +- Filter\n                  :        :              +- ColumnarToRow\n                  :        :                 +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :        :                       +- SubqueryBroadcast\n                  :        :                          +- BroadcastExchange\n                  :        :                             +- CometNativeColumnarToRow\n                  :        :                                +- CometProject\n                  :        :                                   +- CometFilter\n                  :        :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :        +- BroadcastExchange\n                  :           +- CometNativeColumnarToRow\n                  :              +- CometProject\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.store_sales\n\nComet accelerated 15 out of 30 eligible operators (50%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q82.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometBroadcastExchange\n                  :  +- CometProject\n                  :     +- CometBroadcastHashJoin\n                  :        :- CometProject\n                  :        :  +- CometBroadcastHashJoin\n                  :        :     :- CometProject\n                  :        :     :  +- CometFilter\n                  :        :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :        :     +- CometBroadcastExchange\n                  :        :        +- CometProject\n                  :        :           +- CometFilter\n                  :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :        :                    +- SubqueryBroadcast\n                  :        :                       +- BroadcastExchange\n                  :        :                          +- CometNativeColumnarToRow\n                  :        :                             +- CometProject\n                  :        :                                +- CometFilter\n                  :        :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        +- CometBroadcastExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n\nComet accelerated 28 out of 30 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q83.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- HashAggregate\n      :     :  +- CometNativeColumnarToRow\n      :     :     +- CometColumnarExchange\n      :     :        +- HashAggregate\n      :     :           +- Project\n      :     :              +- BroadcastHashJoin\n      :     :                 :- Project\n      :     :                 :  +- BroadcastHashJoin\n      :     :                 :     :- Filter\n      :     :                 :     :  +- ColumnarToRow\n      :     :                 :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                 :     :           +- SubqueryBroadcast\n      :     :                 :     :              +- BroadcastExchange\n      :     :                 :     :                 +- CometNativeColumnarToRow\n      :     :                 :     :                    +- CometProject\n      :     :                 :     :                       +- CometBroadcastHashJoin\n      :     :                 :     :                          :- CometFilter\n      :     :                 :     :                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :                          +- CometBroadcastExchange\n      :     :                 :     :                             +- CometProject\n      :     :                 :     :                                +- CometBroadcastHashJoin\n      :     :                 :     :                                   :- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :                                   +- CometBroadcastExchange\n      :     :                 :     :                                      +- CometProject\n      :     :                 :     :                                         +- CometFilter\n      :     :                 :     :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     +- BroadcastExchange\n      :     :                 :        +- CometNativeColumnarToRow\n      :     :                 :           +- CometProject\n      :     :                 :              +- CometFilter\n      :     :                 :                 +- CometNativeScan parquet spark_catalog.default.item\n      :     :                 +- BroadcastExchange\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometProject\n      :     :                          +- CometBroadcastHashJoin\n      :     :                             :- CometFilter\n      :     :                             :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                             +- CometBroadcastExchange\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometProject\n      :     :                                            +- CometFilter\n      :     :                                               +- CometNativeScan parquet 
spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- HashAggregate\n      :           +- CometNativeColumnarToRow\n      :              +- CometColumnarExchange\n      :                 +- HashAggregate\n      :                    +- Project\n      :                       +- BroadcastHashJoin\n      :                          :- Project\n      :                          :  +- BroadcastHashJoin\n      :                          :     :- Filter\n      :                          :     :  +- ColumnarToRow\n      :                          :     :     +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                          :     :           +- SubqueryBroadcast\n      :                          :     :              +- BroadcastExchange\n      :                          :     :                 +- CometNativeColumnarToRow\n      :                          :     :                    +- CometProject\n      :                          :     :                       +- CometBroadcastHashJoin\n      :                          :     :                          :- CometFilter\n      :                          :     :                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                          :     :                          +- CometBroadcastExchange\n      :                          :     :                             +- CometProject\n      :                          :     :                                +- CometBroadcastHashJoin\n      :                          :     :                                   :- CometNativeScan parquet spark_catalog.default.date_dim\n      :                          :     :                                   +- CometBroadcastExchange\n      :                          :     :                                      +- CometProject\n      :                          :     :                                         +- CometFilter\n      :                          :     :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                          :     +- BroadcastExchange\n      :                          :        +- CometNativeColumnarToRow\n      :                          :           +- CometProject\n      :                          :              +- CometFilter\n      :                          :                 +- CometNativeScan parquet spark_catalog.default.item\n      :                          +- BroadcastExchange\n      :                             +- CometNativeColumnarToRow\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometFilter\n      :                                      :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometProject\n      :                                            +- CometBroadcastHashJoin\n      :                                               :- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                               +- CometBroadcastExchange\n      :                                                  +- CometProject\n      :                                                     +- CometFilter\n      :                                   
                     +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- Filter\n                           :     :  +- ColumnarToRow\n                           :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :           +- ReusedSubquery\n                           :     +- BroadcastExchange\n                           :        +- CometNativeColumnarToRow\n                           :           +- CometProject\n                           :              +- CometFilter\n                           :                 +- CometNativeScan parquet spark_catalog.default.item\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometBroadcastHashJoin\n                                                :- CometNativeScan parquet spark_catalog.default.date_dim\n                                                +- CometBroadcastExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 72 out of 114 eligible operators (63%). Final plan contains 14 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q83.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometHashAggregate\n         :     :  +- CometExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometProject\n         :     :           +- CometBroadcastHashJoin\n         :     :              :- CometProject\n         :     :              :  +- CometBroadcastHashJoin\n         :     :              :     :- CometFilter\n         :     :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :              :     :        +- SubqueryBroadcast\n         :     :              :     :           +- BroadcastExchange\n         :     :              :     :              +- CometNativeColumnarToRow\n         :     :              :     :                 +- CometProject\n         :     :              :     :                    +- CometBroadcastHashJoin\n         :     :              :     :                       :- CometFilter\n         :     :              :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :                       +- CometBroadcastExchange\n         :     :              :     :                          +- CometProject\n         :     :              :     :                             +- CometBroadcastHashJoin\n         :     :              :     :                                :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :                                +- CometBroadcastExchange\n         :     :              :     :                                   +- CometProject\n         :     :              :     :                                      +- CometFilter\n         :     :              :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     +- CometBroadcastExchange\n         :     :              :        +- CometProject\n         :     :              :           +- CometFilter\n         :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :     :              +- CometBroadcastExchange\n         :     :                 +- CometProject\n         :     :                    +- CometBroadcastHashJoin\n         :     :                       :- CometFilter\n         :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                       +- CometBroadcastExchange\n         :     :                          +- CometProject\n         :     :                             +- CometBroadcastHashJoin\n         :     :                                :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                +- CometBroadcastExchange\n         :     :                                   +- CometProject\n         :     :                                      +- CometFilter\n         :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :      
        +- CometHashAggregate\n         :                 +- CometProject\n         :                    +- CometBroadcastHashJoin\n         :                       :- CometProject\n         :                       :  +- CometBroadcastHashJoin\n         :                       :     :- CometFilter\n         :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :                       :     :        +- SubqueryBroadcast\n         :                       :     :           +- BroadcastExchange\n         :                       :     :              +- CometNativeColumnarToRow\n         :                       :     :                 +- CometProject\n         :                       :     :                    +- CometBroadcastHashJoin\n         :                       :     :                       :- CometFilter\n         :                       :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                       :     :                       +- CometBroadcastExchange\n         :                       :     :                          +- CometProject\n         :                       :     :                             +- CometBroadcastHashJoin\n         :                       :     :                                :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                       :     :                                +- CometBroadcastExchange\n         :                       :     :                                   +- CometProject\n         :                       :     :                                      +- CometFilter\n         :                       :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                       :     +- CometBroadcastExchange\n         :                       :        +- CometProject\n         :                       :           +- CometFilter\n         :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                       +- CometBroadcastExchange\n         :                          +- CometProject\n         :                             +- CometBroadcastHashJoin\n         :                                :- CometFilter\n         :                                :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                +- CometBroadcastExchange\n         :                                   +- CometProject\n         :                                      +- CometBroadcastHashJoin\n         :                                         :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                         +- CometBroadcastExchange\n         :                                            +- CometProject\n         :                                               +- CometFilter\n         :                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                      
     :  +- CometBroadcastHashJoin\n                           :     :- CometFilter\n                           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                           :     :        +- ReusedSubquery\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometFilter\n                                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 109 out of 114 eligible operators (95%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q84.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometBroadcastExchange\n         :  +- CometProject\n         :     +- CometBroadcastHashJoin\n         :        :- CometProject\n         :        :  +- CometBroadcastHashJoin\n         :        :     :- CometProject\n         :        :     :  +- CometBroadcastHashJoin\n         :        :     :     :- CometProject\n         :        :     :     :  +- CometBroadcastHashJoin\n         :        :     :     :     :- CometProject\n         :        :     :     :     :  +- CometFilter\n         :        :     :     :     :     +- CometNativeScan parquet spark_catalog.default.customer\n         :        :     :     :     +- CometBroadcastExchange\n         :        :     :     :        +- CometProject\n         :        :     :     :           +- CometFilter\n         :        :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n         :        :     :     +- CometBroadcastExchange\n         :        :     :        +- CometFilter\n         :        :     :           +- CometNativeScan parquet spark_catalog.default.customer_demographics\n         :        :     +- CometBroadcastExchange\n         :        :        +- CometFilter\n         :        :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n         :        +- CometBroadcastExchange\n         :           +- CometProject\n         :              +- CometFilter\n         :                 +- CometNativeScan parquet spark_catalog.default.income_band\n         +- CometProject\n            +- CometFilter\n               +- CometNativeScan parquet spark_catalog.default.store_returns\n\nComet accelerated 32 out of 32 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q84.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometBroadcastExchange\n         :  +- CometProject\n         :     +- CometBroadcastHashJoin\n         :        :- CometProject\n         :        :  +- CometBroadcastHashJoin\n         :        :     :- CometProject\n         :        :     :  +- CometBroadcastHashJoin\n         :        :     :     :- CometProject\n         :        :     :     :  +- CometBroadcastHashJoin\n         :        :     :     :     :- CometProject\n         :        :     :     :     :  +- CometFilter\n         :        :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :        :     :     :     +- CometBroadcastExchange\n         :        :     :     :        +- CometProject\n         :        :     :     :           +- CometFilter\n         :        :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :        :     :     +- CometBroadcastExchange\n         :        :     :        +- CometFilter\n         :        :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n         :        :     +- CometBroadcastExchange\n         :        :        +- CometFilter\n         :        :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         :        +- CometBroadcastExchange\n         :           +- CometProject\n         :              +- CometFilter\n         :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n         +- CometProject\n            +- CometFilter\n               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n\nComet accelerated 32 out of 32 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q85.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- BroadcastExchange\n                  :     :     :     :     :     :     :  +- Filter\n                  :     :     :     :     :     :     :     +- ColumnarToRow\n                  :     :     :     :     :     :     :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :              +- SubqueryBroadcast\n                  :     :     :     :     :     :     :                 +- BroadcastExchange\n                  :     :     :     :     :     :     :                    +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                       +- CometProject\n                  :     :     :     :     :     :     :                          +- CometFilter\n                  :     :     :     :     :     :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometNativeColumnarToRow\n                  :     :     :     :     :     :        +- CometProject\n                  :     :     :     :     :     :           +- CometFilter\n                  :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.web_returns\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :     :           +- CometFilter\n                  :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.web_page\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n                  :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n               
   :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.reason\n\nComet accelerated 24 out of 52 eligible operators (46%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q85.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometBroadcastExchange\n                  :     :     :     :     :     :     :  +- CometFilter\n                  :     :     :     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometProject\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.reason\n\nComet accelerated 50 out of 52 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q86.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Filter\n                                    :     :  +- ColumnarToRow\n                                    :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :           +- SubqueryBroadcast\n                                    :     :              +- BroadcastExchange\n                                    :     :                 +- CometNativeColumnarToRow\n                                    :     :                    +- CometProject\n                                    :     :                       +- CometFilter\n                                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 12 out of 28 eligible operators (42%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q86.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometExpand\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometFilter\n                                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :        +- SubqueryBroadcast\n                                 :     :           +- BroadcastExchange\n                                 :     :              +- CometNativeColumnarToRow\n                                 :     :                 +- CometProject\n                                 :     :                    +- CometFilter\n                                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 23 out of 28 eligible operators (82%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q87.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  BroadcastHashJoin [COMET: BuildRight with LeftAnti is not supported]\n               :  :- CometNativeColumnarToRow\n               :  :  +- CometHashAggregate\n               :  :     +- CometColumnarExchange\n               :  :        +- HashAggregate\n               :  :           +- Project\n               :  :              +- BroadcastHashJoin\n               :  :                 :- Project\n               :  :                 :  +- BroadcastHashJoin\n               :  :                 :     :- Filter\n               :  :                 :     :  +- ColumnarToRow\n               :  :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :  :                 :     :           +- SubqueryBroadcast\n               :  :                 :     :              +- BroadcastExchange\n               :  :                 :     :                 +- CometNativeColumnarToRow\n               :  :                 :     :                    +- CometProject\n               :  :                 :     :                       +- CometFilter\n               :  :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :                 :     +- BroadcastExchange\n               :  :                 :        +- CometNativeColumnarToRow\n               :  :                 :           +- CometProject\n               :  :                 :              +- CometFilter\n               :  :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :                 +- BroadcastExchange\n               :  :                    +- CometNativeColumnarToRow\n               :  :                       +- CometProject\n               :  :                          +- CometFilter\n               :  :                             +- CometNativeScan parquet spark_catalog.default.customer\n               :  +- BroadcastExchange\n               :     +- CometNativeColumnarToRow\n               :        +- CometHashAggregate\n               :           +- CometColumnarExchange\n               :              +- HashAggregate\n               :                 +- Project\n               :                    +- BroadcastHashJoin\n               :                       :- Project\n               :                       :  +- BroadcastHashJoin\n               :                       :     :- Filter\n               :                       :     :  +- ColumnarToRow\n               :                       :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                       :     :           +- ReusedSubquery\n               :                       :     +- BroadcastExchange\n               :                       :        +- CometNativeColumnarToRow\n               :                       :           +- CometProject\n               :                       :              +- CometFilter\n               :                       :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                       +- BroadcastExchange\n               :                          +- 
CometNativeColumnarToRow\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.customer\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometHashAggregate\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Filter\n                                    :     :  +- ColumnarToRow\n                                    :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :           +- ReusedSubquery\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 28 out of 66 eligible operators (42%). Final plan contains 14 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q87.native_iceberg_compat/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  BroadcastHashJoin [COMET: BuildRight with LeftAnti is not supported]\n               :  :- CometNativeColumnarToRow\n               :  :  +- CometHashAggregate\n               :  :     +- CometExchange\n               :  :        +- CometHashAggregate\n               :  :           +- CometProject\n               :  :              +- CometBroadcastHashJoin\n               :  :                 :- CometProject\n               :  :                 :  +- CometBroadcastHashJoin\n               :  :                 :     :- CometFilter\n               :  :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :  :                 :     :        +- SubqueryBroadcast\n               :  :                 :     :           +- BroadcastExchange\n               :  :                 :     :              +- CometNativeColumnarToRow\n               :  :                 :     :                 +- CometProject\n               :  :                 :     :                    +- CometFilter\n               :  :                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :                 :     +- CometBroadcastExchange\n               :  :                 :        +- CometProject\n               :  :                 :           +- CometFilter\n               :  :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :                 +- CometBroadcastExchange\n               :  :                    +- CometProject\n               :  :                       +- CometFilter\n               :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               :  +- BroadcastExchange\n               :     +- CometNativeColumnarToRow\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometFilter\n               :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                       :     :        +- ReusedSubquery\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- CometFilter\n               :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometHashAggregate\n                        +- 
CometExchange\n                           +- CometHashAggregate\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometProject\n                                    :  +- CometBroadcastHashJoin\n                                    :     :- CometFilter\n                                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :     :        +- ReusedSubquery\n                                    :     +- CometBroadcastExchange\n                                    :        +- CometProject\n                                    :           +- CometFilter\n                                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 55 out of 66 eligible operators (83%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q88.native_datafusion/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :  :  :     +- CometExchange\n:  :  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :  :           +- CometProject\n:  :  :  :  :  :  :              +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :- CometProject\n:  :  :  :  :  :  :                 :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :- CometProject\n:  :  :  :  :  :  :                 :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :     :- CometProject\n:  :  :  :  :  :  :                 :     :     :  +- CometFilter\n:  :  :  :  :  :  :                 :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  :  :  :                 :     :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :     :        +- CometProject\n:  :  :  :  :  :  :                 :     :           +- CometFilter\n:  :  :  :  :  :  :                 :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :  :                 :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :        +- CometProject\n:  :  :  :  :  :  :                 :           +- CometFilter\n:  :  :  :  :  :  :                 :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :  :  :  :  :                 +- CometBroadcastExchange\n:  :  :  :  :  :  :                    +- CometProject\n:  :  :  :  :  :  :                       +- CometFilter\n:  :  :  :  :  :  :                          +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :           +- CometExchange\n:  :  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :  :                 +- CometProject\n:  :  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :- CometProject\n:  :  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :- CometProject\n:  :  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :        +- CometProject\n:  :  :  :  :  :                       :           +- CometFilter\n:  :  :  :  :  :                       :              +- CometNativeScan parquet 
spark_catalog.default.time_dim\n:  :  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :  :                          +- CometProject\n:  :  :  :  :  :                             +- CometFilter\n:  :  :  :  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :           +- CometExchange\n:  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :                 +- CometProject\n:  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :                       :- CometProject\n:  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :- CometProject\n:  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :                       :        +- CometProject\n:  :  :  :  :                       :           +- CometFilter\n:  :  :  :  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :                          +- CometProject\n:  :  :  :  :                             +- CometFilter\n:  :  :  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometExchange\n:  :  :  :              +- CometHashAggregate\n:  :  :  :                 +- CometProject\n:  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :                       :- CometProject\n:  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :- CometProject\n:  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :     :- CometProject\n:  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :                       :     :        +- CometProject\n:  :  :  :                       :     :           +- CometFilter\n:  :  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :                       :        +- CometProject\n:  :  :  :                       :           +- CometFilter\n:  :  :  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :  :                       +- CometBroadcastExchange\n:  :  :  :     
                     +- CometProject\n:  :  :  :                             +- CometFilter\n:  :  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometExchange\n:  :  :              +- CometHashAggregate\n:  :  :                 +- CometProject\n:  :  :                    +- CometBroadcastHashJoin\n:  :  :                       :- CometProject\n:  :  :                       :  +- CometBroadcastHashJoin\n:  :  :                       :     :- CometProject\n:  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :                       :     :     :- CometProject\n:  :  :                       :     :     :  +- CometFilter\n:  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :                       :     :     +- CometBroadcastExchange\n:  :  :                       :     :        +- CometProject\n:  :  :                       :     :           +- CometFilter\n:  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :                       :     +- CometBroadcastExchange\n:  :  :                       :        +- CometProject\n:  :  :                       :           +- CometFilter\n:  :  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :                       +- CometBroadcastExchange\n:  :  :                          +- CometProject\n:  :  :                             +- CometFilter\n:  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometBroadcastHashJoin\n:  :                       :- CometProject\n:  :                       :  +- CometBroadcastHashJoin\n:  :                       :     :- CometProject\n:  :                       :     :  +- CometBroadcastHashJoin\n:  :                       :     :     :- CometProject\n:  :                       :     :     :  +- CometFilter\n:  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :                       :     :     +- CometBroadcastExchange\n:  :                       :     :        +- CometProject\n:  :                       :     :           +- CometFilter\n:  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :                       :     +- CometBroadcastExchange\n:  :                       :        +- CometProject\n:  :                       :           +- CometFilter\n:  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :                       +- CometBroadcastExchange\n:  :                          +- CometProject\n:  :                             +- CometFilter\n:  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometExchange\n:              +- CometHashAggregate\n:                 +- CometProject\n:                    +- CometBroadcastHashJoin\n:        
               :- CometProject\n:                       :  +- CometBroadcastHashJoin\n:                       :     :- CometProject\n:                       :     :  +- CometBroadcastHashJoin\n:                       :     :     :- CometProject\n:                       :     :     :  +- CometFilter\n:                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:                       :     :     +- CometBroadcastExchange\n:                       :     :        +- CometProject\n:                       :     :           +- CometFilter\n:                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:                       :     +- CometBroadcastExchange\n:                       :        +- CometProject\n:                       :           +- CometFilter\n:                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:                       +- CometBroadcastExchange\n:                          +- CometProject\n:                             +- CometFilter\n:                                +- CometNativeScan parquet spark_catalog.default.store\n+- BroadcastExchange\n   +- CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometFilter\n                     :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometNativeScan parquet spark_catalog.default.time_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 192 out of 206 eligible operators (93%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q88.native_iceberg_compat/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :  :  :     +- CometExchange\n:  :  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :  :           +- CometProject\n:  :  :  :  :  :  :              +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :- CometProject\n:  :  :  :  :  :  :                 :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :- CometProject\n:  :  :  :  :  :  :                 :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :     :- CometProject\n:  :  :  :  :  :  :                 :     :     :  +- CometFilter\n:  :  :  :  :  :  :                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  :  :  :                 :     :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :     :        +- CometProject\n:  :  :  :  :  :  :                 :     :           +- CometFilter\n:  :  :  :  :  :  :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :  :                 :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :        +- CometProject\n:  :  :  :  :  :  :                 :           +- CometFilter\n:  :  :  :  :  :  :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :  :  :  :                 +- CometBroadcastExchange\n:  :  :  :  :  :  :                    +- CometProject\n:  :  :  :  :  :  :                       +- CometFilter\n:  :  :  :  :  :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :           +- CometExchange\n:  :  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :  :                 +- CometProject\n:  :  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :- CometProject\n:  :  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :- CometProject\n:  :  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :        +- CometProject\n:  :  :  :  :  :                       :           +- CometFilter\n: 
 :  :  :  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :  :                          +- CometProject\n:  :  :  :  :  :                             +- CometFilter\n:  :  :  :  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :           +- CometExchange\n:  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :                 +- CometProject\n:  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :                       :- CometProject\n:  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :- CometProject\n:  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :                       :        +- CometProject\n:  :  :  :  :                       :           +- CometFilter\n:  :  :  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :                          +- CometProject\n:  :  :  :  :                             +- CometFilter\n:  :  :  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometExchange\n:  :  :  :              +- CometHashAggregate\n:  :  :  :                 +- CometProject\n:  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :                       :- CometProject\n:  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :- CometProject\n:  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :     :- CometProject\n:  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :                       :     :        +- CometProject\n:  :  :  :                       :     :           +- CometFilter\n:  :  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :                       :        +- CometProject\n:  :  :  :          
             :           +- CometFilter\n:  :  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :                       +- CometBroadcastExchange\n:  :  :  :                          +- CometProject\n:  :  :  :                             +- CometFilter\n:  :  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometExchange\n:  :  :              +- CometHashAggregate\n:  :  :                 +- CometProject\n:  :  :                    +- CometBroadcastHashJoin\n:  :  :                       :- CometProject\n:  :  :                       :  +- CometBroadcastHashJoin\n:  :  :                       :     :- CometProject\n:  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :                       :     :     :- CometProject\n:  :  :                       :     :     :  +- CometFilter\n:  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :                       :     :     +- CometBroadcastExchange\n:  :  :                       :     :        +- CometProject\n:  :  :                       :     :           +- CometFilter\n:  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :                       :     +- CometBroadcastExchange\n:  :  :                       :        +- CometProject\n:  :  :                       :           +- CometFilter\n:  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :                       +- CometBroadcastExchange\n:  :  :                          +- CometProject\n:  :  :                             +- CometFilter\n:  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometBroadcastHashJoin\n:  :                       :- CometProject\n:  :                       :  +- CometBroadcastHashJoin\n:  :                       :     :- CometProject\n:  :                       :     :  +- CometBroadcastHashJoin\n:  :                       :     :     :- CometProject\n:  :                       :     :     :  +- CometFilter\n:  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :                       :     :     +- CometBroadcastExchange\n:  :                       :     :        +- CometProject\n:  :                       :     :           +- CometFilter\n:  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :                       :     +- CometBroadcastExchange\n:  :                       :        +- CometProject\n:  :                       :           +- CometFilter\n:  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :                       +- CometBroadcastExchange\n:  :                          +- CometProject\n:  :    
                         +- CometFilter\n:  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometExchange\n:              +- CometHashAggregate\n:                 +- CometProject\n:                    +- CometBroadcastHashJoin\n:                       :- CometProject\n:                       :  +- CometBroadcastHashJoin\n:                       :     :- CometProject\n:                       :     :  +- CometBroadcastHashJoin\n:                       :     :     :- CometProject\n:                       :     :     :  +- CometFilter\n:                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:                       :     :     +- CometBroadcastExchange\n:                       :     :        +- CometProject\n:                       :     :           +- CometFilter\n:                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:                       :     +- CometBroadcastExchange\n:                       :        +- CometProject\n:                       :           +- CometFilter\n:                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:                       +- CometBroadcastExchange\n:                          +- CometProject\n:                             +- CometFilter\n:                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n+- BroadcastExchange\n   +- CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometFilter\n                     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 192 out of 206 eligible operators (93%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q89.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- CometNativeColumnarToRow\n                                    :     :     :  +- CometProject\n                                    :     :     :     +- CometFilter\n                                    :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- Filter\n                                    :     :           +- ColumnarToRow\n                                    :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :                    +- SubqueryBroadcast\n                                    :     :                       +- BroadcastExchange\n                                    :     :                          +- CometNativeColumnarToRow\n                                    :     :                             +- CometProject\n                                    :     :                                +- CometFilter\n                                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q89.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometFilter\n                                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :                 +- SubqueryBroadcast\n                                 :     :                    +- BroadcastExchange\n                                 :     :                       +- CometNativeColumnarToRow\n                                 :     :                          +- CometProject\n                                 :     :                             +- CometFilter\n                                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 27 out of 33 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q9.native_datafusion/extended.txt",
    "content": " Project [COMET: ]\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  +- ReusedSubquery\n+- CometNativeColumnarToRow\n   +- CometFilter\n      +- CometNativeScan parquet spark_catalog.default.reason\n\nComet accelerated 37 out of 53 eligible operators (69%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q9.native_iceberg_compat/extended.txt",
    "content": " Project [COMET: ]\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  +- ReusedSubquery\n+- CometNativeColumnarToRow\n   +- CometFilter\n      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.reason\n\nComet accelerated 37 out of 53 eligible operators (69%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q90.native_datafusion/extended.txt",
    "content": "Project\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n   :- CometNativeColumnarToRow\n   :  +- CometHashAggregate\n   :     +- CometExchange\n   :        +- CometHashAggregate\n   :           +- CometProject\n   :              +- CometBroadcastHashJoin\n   :                 :- CometProject\n   :                 :  +- CometBroadcastHashJoin\n   :                 :     :- CometProject\n   :                 :     :  +- CometBroadcastHashJoin\n   :                 :     :     :- CometProject\n   :                 :     :     :  +- CometFilter\n   :                 :     :     :     +- CometNativeScan parquet spark_catalog.default.web_sales\n   :                 :     :     +- CometBroadcastExchange\n   :                 :     :        +- CometProject\n   :                 :     :           +- CometFilter\n   :                 :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n   :                 :     +- CometBroadcastExchange\n   :                 :        +- CometProject\n   :                 :           +- CometFilter\n   :                 :              +- CometNativeScan parquet spark_catalog.default.time_dim\n   :                 +- CometBroadcastExchange\n   :                    +- CometProject\n   :                       +- CometFilter\n   :                          +- CometNativeScan parquet spark_catalog.default.web_page\n   +- BroadcastExchange\n      +- CometNativeColumnarToRow\n         +- CometHashAggregate\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometFilter\n                        :     :     :     +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.time_dim\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.web_page\n\nComet accelerated 48 out of 51 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q90.native_iceberg_compat/extended.txt",
    "content": "Project\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n   :- CometNativeColumnarToRow\n   :  +- CometHashAggregate\n   :     +- CometExchange\n   :        +- CometHashAggregate\n   :           +- CometProject\n   :              +- CometBroadcastHashJoin\n   :                 :- CometProject\n   :                 :  +- CometBroadcastHashJoin\n   :                 :     :- CometProject\n   :                 :     :  +- CometBroadcastHashJoin\n   :                 :     :     :- CometProject\n   :                 :     :     :  +- CometFilter\n   :                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n   :                 :     :     +- CometBroadcastExchange\n   :                 :     :        +- CometProject\n   :                 :     :           +- CometFilter\n   :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n   :                 :     +- CometBroadcastExchange\n   :                 :        +- CometProject\n   :                 :           +- CometFilter\n   :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n   :                 +- CometBroadcastExchange\n   :                    +- CometProject\n   :                       +- CometFilter\n   :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n   +- BroadcastExchange\n      +- CometNativeColumnarToRow\n         +- CometHashAggregate\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometFilter\n                        :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n\nComet accelerated 48 out of 51 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q91.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- CometNativeColumnarToRow\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- Project\n                        :  +- BroadcastHashJoin\n                        :     :- Project\n                        :     :  +- BroadcastHashJoin\n                        :     :     :- Project\n                        :     :     :  +- BroadcastHashJoin\n                        :     :     :     :- Project\n                        :     :     :     :  +- BroadcastHashJoin\n                        :     :     :     :     :- Project\n                        :     :     :     :     :  +- BroadcastHashJoin\n                        :     :     :     :     :     :- CometNativeColumnarToRow\n                        :     :     :     :     :     :  +- CometProject\n                        :     :     :     :     :     :     +- CometFilter\n                        :     :     :     :     :     :        +- CometNativeScan parquet spark_catalog.default.call_center\n                        :     :     :     :     :     +- BroadcastExchange\n                        :     :     :     :     :        +- Filter\n                        :     :     :     :     :           +- ColumnarToRow\n                        :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :     :     :     :     :                    +- SubqueryBroadcast\n                        :     :     :     :     :                       +- BroadcastExchange\n                        :     :     :     :     :                          +- CometNativeColumnarToRow\n                        :     :     :     :     :                             +- CometProject\n                        :     :     :     :     :                                +- CometFilter\n                        :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     :     :     :     +- BroadcastExchange\n                        :     :     :     :        +- CometNativeColumnarToRow\n                        :     :     :     :           +- CometProject\n                        :     :     :     :              +- CometFilter\n                        :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     :     :     +- BroadcastExchange\n                        :     :     :        +- CometNativeColumnarToRow\n                        :     :     :           +- CometFilter\n                        :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                        :     :     +- BroadcastExchange\n                        :     :        +- CometNativeColumnarToRow\n                        :     :           +- CometProject\n                        :     :              +- CometFilter\n                        :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                        :     +- BroadcastExchange\n                        :        +- CometNativeColumnarToRow\n                        :           +- CometProject\n                        :              +- CometFilter\n            
            :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                        +- BroadcastExchange\n                           +- CometNativeColumnarToRow\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.household_demographics\n\nComet accelerated 23 out of 47 eligible operators (48%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q91.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :- CometProject\n                     :     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :     :- CometProject\n                     :     :     :     :     :     :  +- CometFilter\n                     :     :     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n                     :     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :     :        +- CometFilter\n                     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                     :     :     :     :     :                 +- SubqueryBroadcast\n                     :     :     :     :     :                    +- BroadcastExchange\n                     :     :     :     :     :                       +- CometNativeColumnarToRow\n                     :     :     :     :     :                          +- CometProject\n                     :     :     :     :     :                             +- CometFilter\n                     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :        +- CometProject\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n\nComet accelerated 45 out of 47 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q92.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Filter\n               :     :     :  +- ColumnarToRow\n               :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :           +- SubqueryBroadcast\n               :     :     :              +- BroadcastExchange\n               :     :     :                 +- CometNativeColumnarToRow\n               :     :     :                    +- CometProject\n               :     :     :                       +- CometFilter\n               :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.item\n               :     +- BroadcastExchange\n               :        +- Filter\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Project\n               :                          +- BroadcastHashJoin\n               :                             :- Filter\n               :                             :  +- ColumnarToRow\n               :                             :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                             :           +- ReusedSubquery\n               :                             +- BroadcastExchange\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometProject\n               :                                      +- CometFilter\n               :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 38 eligible operators (36%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q92.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :     :     :        +- SubqueryBroadcast\n               :     :     :           +- BroadcastExchange\n               :     :     :              +- CometNativeColumnarToRow\n               :     :     :                 +- CometProject\n               :     :     :                    +- CometFilter\n               :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :        +- ReusedSubquery\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 35 out of 38 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q93.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometExchange\n                  :     :     +- CometProject\n                  :     :        +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     +- CometSort\n                  :        +- CometExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.reason\n\nComet accelerated 21 out of 21 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q93.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometExchange\n                  :     :     +- CometProject\n                  :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     +- CometSort\n                  :        +- CometExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.reason\n\nComet accelerated 21 out of 21 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q94.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometNativeScan parquet spark_catalog.default.web_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q94.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q95.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometSortMergeJoin\n                        :     :     :  :  :- CometSort\n                        :     :     :  :  :  +- CometExchange\n                        :     :     :  :  :     +- CometProject\n                        :     :     :  :  :        +- CometFilter\n                        :     :     :  :  :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  :  +- CometProject\n                        :     :     :  :     +- CometSortMergeJoin\n                        :     :     :  :        :- CometSort\n                        :     :     :  :        :  +- CometExchange\n                        :     :     :  :        :     +- CometProject\n                        :     :     :  :        :        +- CometFilter\n                        :     :     :  :        :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  :        +- CometSort\n                        :     :     :  :           +- CometExchange\n                        :     :     :  :              +- CometProject\n                        :     :     :  :                 +- CometFilter\n                        :     :     :  :                    +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometProject\n                        :     :     :     +- CometSortMergeJoin\n                        :     :     :        :- CometSort\n                        :     :     :        :  +- CometExchange\n                        :     :     :        :     +- CometProject\n                        :     :     :        :        +- CometFilter\n                        :     :     :        :           +- CometNativeScan parquet spark_catalog.default.web_returns\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometSortMergeJoin\n                        :     :     :              :- CometSort\n                        :     :     :              :  +- CometExchange\n                        :     :     :              :     +- CometProject\n                        :     :     :              :        +- CometFilter\n                        :     :     :              :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :              +- CometSort\n                        :     :     :                 +- CometExchange\n                        :     :     :                    +- CometProject\n                        :     :     :                       +- CometFilter\n                        :     :     :                          +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                
        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 59 out of 61 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q95.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometSortMergeJoin\n                        :     :     :  :  :- CometSort\n                        :     :     :  :  :  +- CometExchange\n                        :     :     :  :  :     +- CometProject\n                        :     :     :  :  :        +- CometFilter\n                        :     :     :  :  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  :  +- CometProject\n                        :     :     :  :     +- CometSortMergeJoin\n                        :     :     :  :        :- CometSort\n                        :     :     :  :        :  +- CometExchange\n                        :     :     :  :        :     +- CometProject\n                        :     :     :  :        :        +- CometFilter\n                        :     :     :  :        :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  :        +- CometSort\n                        :     :     :  :           +- CometExchange\n                        :     :     :  :              +- CometProject\n                        :     :     :  :                 +- CometFilter\n                        :     :     :  :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometProject\n                        :     :     :     +- CometSortMergeJoin\n                        :     :     :        :- CometSort\n                        :     :     :        :  +- CometExchange\n                        :     :     :        :     +- CometProject\n                        :     :     :        :        +- CometFilter\n                        :     :     :        :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometSortMergeJoin\n                        :     :     :              :- CometSort\n                        :     :     :              :  +- CometExchange\n                        :     :     :              :     +- CometProject\n                        :     :     :              :        +- CometFilter\n                        :     :     :              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :              +- CometSort\n                        :     :     :                 +- CometExchange\n                        :     :     :                    +- CometProject\n                        :     :     :                       +- CometFilter\n                        :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :  
   :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 59 out of 61 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q96.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometFilter\n               :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometNativeScan parquet spark_catalog.default.time_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 24 out of 24 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q96.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometFilter\n               :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 24 out of 24 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q97.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometSortMergeJoin\n               :- CometSort\n               :  +- CometHashAggregate\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- ColumnarToRow\n               :                 :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :        +- SubqueryBroadcast\n               :                 :           +- BroadcastExchange\n               :                 :              +- CometNativeColumnarToRow\n               :                 :                 +- CometProject\n               :                 :                    +- CometFilter\n               :                 :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- CometSort\n                  +- CometHashAggregate\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- ColumnarToRow\n                                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :        +- ReusedSubquery\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 20 out of 33 eligible operators (60%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q97.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometSortMergeJoin\n               :- CometSort\n               :  +- CometHashAggregate\n               :     +- CometExchange\n               :        +- CometHashAggregate\n               :           +- CometProject\n               :              +- CometBroadcastHashJoin\n               :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                 :     +- SubqueryBroadcast\n               :                 :        +- BroadcastExchange\n               :                 :           +- CometNativeColumnarToRow\n               :                 :              +- CometProject\n               :                 :                 +- CometFilter\n               :                 :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                 +- CometBroadcastExchange\n               :                    +- CometProject\n               :                       +- CometFilter\n               :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometSort\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                 :     +- ReusedSubquery\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 30 out of 33 eligible operators (90%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q98.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometProject\n   +- CometSort\n      +- CometColumnarExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Filter\n                                          :     :  +- ColumnarToRow\n                                          :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :           +- SubqueryBroadcast\n                                          :     :              +- BroadcastExchange\n                                          :     :                 +- CometNativeColumnarToRow\n                                          :     :                    +- CometProject\n                                          :     :                       +- CometFilter\n                                          :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometProject\n                                          :              +- CometFilter\n                                          :                 +- CometNativeScan parquet spark_catalog.default.item\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 15 out of 29 eligible operators (51%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q98.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometProject\n   +- CometSort\n      +- CometColumnarExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometFilter\n                                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :        +- SubqueryBroadcast\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometProject\n                                       :     :                    +- CometFilter\n                                       :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometProject\n                                       :           +- CometFilter\n                                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 25 out of 29 eligible operators (86%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q99.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometNativeScan parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.call_center\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4/q99.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q1.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Filter\n      :     :     :                    :  +- ColumnarToRow\n      :     :     :                    :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :           +- SubqueryBroadcast\n      :     :     :                    :              +- BroadcastExchange\n      :     :     :                    :                 +- CometNativeColumnarToRow\n      :     :     :                    :                    +- CometProject\n      :     :     :                    :                       +- CometFilter\n      :     :     :                    :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Filter\n      :     :                                         :  +- ColumnarToRow\n      :     :                                         :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :           +- ReusedSubquery\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometProject\n      :     :                                                  +- CometFilter\n      :     :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- 
CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 49 eligible operators (36%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q1.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometFilter\n         :     :     :                 :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :     :                 :        +- SubqueryBroadcast\n         :     :     :                 :           +- BroadcastExchange\n         :     :     :                 :              +- CometNativeColumnarToRow\n         :     :     :                 :                 +- CometProject\n         :     :     :                 :                    +- CometFilter\n         :     :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometFilter\n         :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometHashAggregate\n         :     :                       +- CometExchange\n         :     :                          +- CometHashAggregate\n         :     :                             +- CometProject\n         :     :                                +- CometBroadcastHashJoin\n         :     :                                   :- CometFilter\n         :     :                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :                                   :        +- ReusedSubquery\n         :     :                                   +- CometBroadcastExchange\n         :     :                                      +- CometProject\n         :     :                                         +- CometFilter\n         :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 46 out of 49 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q10.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :- BroadcastHashJoin\n                  :     :        :  :- BroadcastHashJoin\n                  :     :        :  :  :- CometNativeColumnarToRow\n                  :     :        :  :  :  +- CometFilter\n                  :     :        :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :        :  :  +- BroadcastExchange\n                  :     :        :  :     +- Project\n                  :     :        :  :        +- BroadcastHashJoin\n                  :     :        :  :           :- ColumnarToRow\n                  :     :        :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :  :           :        +- SubqueryBroadcast\n                  :     :        :  :           :           +- BroadcastExchange\n                  :     :        :  :           :              +- CometNativeColumnarToRow\n                  :     :        :  :           :                 +- CometProject\n                  :     :        :  :           :                    +- CometFilter\n                  :     :        :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  :           +- BroadcastExchange\n                  :     :        :  :              +- CometNativeColumnarToRow\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- Project\n                  :     :        :        +- BroadcastHashJoin\n                  :     :        :           :- ColumnarToRow\n                  :     :        :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :           :        +- ReusedSubquery\n                  :     :        :           +- BroadcastExchange\n                  :     :        :              +- CometNativeColumnarToRow\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- 
BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 54 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q10.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                  :     :        :  :- CometNativeColumnarToRow\n                  :     :        :  :  +- CometBroadcastHashJoin\n                  :     :        :  :     :- CometFilter\n                  :     :        :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :        :  :     +- CometBroadcastExchange\n                  :     :        :  :        +- CometProject\n                  :     :        :  :           +- CometBroadcastHashJoin\n                  :     :        :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :        :  :              :     +- SubqueryBroadcast\n                  :     :        :  :              :        +- BroadcastExchange\n                  :     :        :  :              :           +- CometNativeColumnarToRow\n                  :     :        :  :              :              +- CometProject\n                  :     :        :  :              :                 +- CometFilter\n                  :     :        :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  :              +- CometBroadcastExchange\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- CometNativeColumnarToRow\n                  :     :        :        +- CometProject\n                  :     :        :           +- CometBroadcastHashJoin\n                  :     :        :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :        :              :     +- ReusedSubquery\n                  :     :        :              +- CometBroadcastExchange\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- CometNativeColumnarToRow\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n          
        :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 54 eligible operators (64%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q11.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Project\n      :     :     :                    :  +- BroadcastHashJoin\n      :     :     :                    :     :- CometNativeColumnarToRow\n      :     :     :                    :     :  +- CometProject\n      :     :     :                    :     :     +- CometFilter\n      :     :     :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :                    :     +- BroadcastExchange\n      :     :     :                    :        +- Filter\n      :     :     :                    :           +- ColumnarToRow\n      :     :     :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :                    +- SubqueryBroadcast\n      :     :     :                    :                       +- BroadcastExchange\n      :     :     :                    :                          +- CometNativeColumnarToRow\n      :     :     :                    :                             +- CometFilter\n      :     :     :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometFilter\n      :     :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     +- BroadcastExchange\n      :     :        +- HashAggregate\n      :     :           +- CometNativeColumnarToRow\n      :     :              +- CometColumnarExchange\n      :     :                 +- HashAggregate\n      :     :                    +- Project\n      :     :                       +- BroadcastHashJoin\n      :     :                          :- Project\n      :     :                          :  +- BroadcastHashJoin\n      :     :                          :     :- CometNativeColumnarToRow\n      :     :                          :     :  +- CometProject\n      :     :                          :     :     +- CometFilter\n      :     :                          :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                          :     +- BroadcastExchange\n      :     :                          :        +- Filter\n      :     :                          :           +- ColumnarToRow\n      :     :                          :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                          :                    +- SubqueryBroadcast\n      :     :                          :                       +- BroadcastExchange\n      :     :                          :                          +- CometNativeColumnarToRow\n      :     :                          :      
                       +- CometFilter\n      :     :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                          +- BroadcastExchange\n      :     :                             +- CometNativeColumnarToRow\n      :     :                                +- CometFilter\n      :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 86 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q11.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometProject\n         :     :     :                 :  +- CometBroadcastHashJoin\n         :     :     :                 :     :- CometProject\n         :     :     :                 :     :  +- CometFilter\n         :     :     :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :                 :     +- CometBroadcastExchange\n         :     :     :                 :        +- CometFilter\n         :     :     :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :                 :                 +- SubqueryBroadcast\n         :     :     :                 :                    +- BroadcastExchange\n         :     :     :                 :                       +- CometNativeColumnarToRow\n         :     :     :                 :                          +- CometFilter\n         :     :     :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometFilter\n         :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometExchange\n         :     :              +- CometHashAggregate\n         :     :                 +- CometProject\n         :     :                    +- CometBroadcastHashJoin\n         :     :                       :- CometProject\n         :     :                       :  +- CometBroadcastHashJoin\n         :     :                       :     :- CometProject\n         :     :                       :     :  +- CometFilter\n         :     :                       :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                       :     +- CometBroadcastExchange\n         :     :                       :        +- CometFilter\n         :     :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                       :                 +- SubqueryBroadcast\n         :     :                       :                    +- BroadcastExchange\n         :     :                       :                       +- CometNativeColumnarToRow\n         :     :                       :                          +- CometFilter\n         :     :                       :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                       +- CometBroadcastExchange\n         :     :                          +- CometFilter\n         :     :                             +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 80 out of 86 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q12.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q12.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q13.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Project\n               :     :     :  +- BroadcastHashJoin\n               :     :     :     :- Project\n               :     :     :     :  +- BroadcastHashJoin\n               :     :     :     :     :- Filter\n               :     :     :     :     :  +- ColumnarToRow\n               :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :     :     :           +- SubqueryBroadcast\n               :     :     :     :     :              +- BroadcastExchange\n               :     :     :     :     :                 +- CometNativeColumnarToRow\n               :     :     :     :     :                    +- CometProject\n               :     :     :     :     :                       +- CometFilter\n               :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     :     :     +- BroadcastExchange\n               :     :     :     :        +- CometNativeColumnarToRow\n               :     :     :     :           +- CometFilter\n               :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :     :     :     +- BroadcastExchange\n               :     :     :        +- CometNativeColumnarToRow\n               :     :     :           +- CometProject\n               :     :     :              +- CometFilter\n               :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometNativeColumnarToRow\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.household_demographics\n\nComet accelerated 17 out of 38 eligible operators (44%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q13.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometProject\n               :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :- CometFilter\n               :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     :     :     :        +- SubqueryBroadcast\n               :     :     :     :     :           +- BroadcastExchange\n               :     :     :     :     :              +- CometNativeColumnarToRow\n               :     :     :     :     :                 +- CometProject\n               :     :     :     :     :                    +- CometFilter\n               :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :        +- CometFilter\n               :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometProject\n               :     :     :           +- CometFilter\n               :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n               +- CometBroadcastExchange\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n\nComet accelerated 36 out of 38 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q14a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- Project\n                  :  +- Filter\n                  :     :  +- Subquery\n                  :     :     +- HashAggregate\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometColumnarExchange\n                  :     :              +- HashAggregate\n                  :     :                 +- Union\n                  :     :                    :- Project\n                  :     :                    :  +- BroadcastHashJoin\n                  :     :                    :     :- ColumnarToRow\n                  :     :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                    :     :        +- ReusedSubquery\n                  :     :                    :     +- BroadcastExchange\n                  :     :                    :        +- CometNativeColumnarToRow\n                  :     :                    :           +- CometProject\n                  :     :                    :              +- CometFilter\n                  :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                    :- Project\n                  :     :                    :  +- BroadcastHashJoin\n                  :     :                    :     :- ColumnarToRow\n                  :     :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                    :     :        +- ReusedSubquery\n                  :     :                    :     +- BroadcastExchange\n                  :     :                    :        +- CometNativeColumnarToRow\n                  :     :                    :           +- CometProject\n                  :     :                    :              +- CometFilter\n                  :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                    +- Project\n                  :     :                       +- BroadcastHashJoin\n                  :     :                          :- ColumnarToRow\n                  :     :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                          :        +- ReusedSubquery\n                  :     :                          +- BroadcastExchange\n                  :     :                             +- CometNativeColumnarToRow\n                  :     :                                +- CometProject\n                  :     :                                   +- CometFilter\n                  :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- HashAggregate\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometColumnarExchange\n                  :              +- HashAggregate\n                  :                 +- Project\n                  :                    +- BroadcastHashJoin\n       
           :                       :- Project\n                  :                       :  +- BroadcastHashJoin\n                  :                       :     :- BroadcastHashJoin\n                  :                       :     :  :- Filter\n                  :                       :     :  :  +- ColumnarToRow\n                  :                       :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :  :           +- SubqueryBroadcast\n                  :                       :     :  :              +- BroadcastExchange\n                  :                       :     :  :                 +- CometNativeColumnarToRow\n                  :                       :     :  :                    +- CometProject\n                  :                       :     :  :                       +- CometFilter\n                  :                       :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :  +- BroadcastExchange\n                  :                       :     :     +- Project\n                  :                       :     :        +- BroadcastHashJoin\n                  :                       :     :           :- CometNativeColumnarToRow\n                  :                       :     :           :  +- CometFilter\n                  :                       :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :           +- BroadcastExchange\n                  :                       :     :              +- BroadcastHashJoin\n                  :                       :     :                 :- CometNativeColumnarToRow\n                  :                       :     :                 :  +- CometHashAggregate\n                  :                       :     :                 :     +- CometColumnarExchange\n                  :                       :     :                 :        +- HashAggregate\n                  :                       :     :                 :           +- Project\n                  :                       :     :                 :              +- BroadcastHashJoin\n                  :                       :     :                 :                 :- Project\n                  :                       :     :                 :                 :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :     :- Filter\n                  :                       :     :                 :                 :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :     :           +- SubqueryBroadcast\n                  :                       :     :                 :                 :     :              +- BroadcastExchange\n                  :                       :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :     :                    +- CometProject\n                  :                       :     :             
    :                 :     :                       +- CometFilter\n                  :                       :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 :     +- BroadcastExchange\n                  :                       :     :                 :                 :        +- BroadcastHashJoin\n                  :                       :     :                 :                 :           :- CometNativeColumnarToRow\n                  :                       :     :                 :                 :           :  +- CometFilter\n                  :                       :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :           +- BroadcastExchange\n                  :                       :     :                 :                 :              +- Project\n                  :                       :     :                 :                 :                 +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :- Project\n                  :                       :     :                 :                 :                    :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :     :- Filter\n                  :                       :     :                 :                 :                    :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :                    :     :           +- ReusedSubquery\n                  :                       :     :                 :                 :                    :     +- BroadcastExchange\n                  :                       :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :                    :           +- CometFilter\n                  :                       :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :                    +- BroadcastExchange\n                  :                       :     :                 :                 :                       +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :                          +- CometProject\n                  :                       :     :                 :                 :                             +- CometFilter\n                  :                       :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 +- BroadcastExchange\n                  :                       :     :       
          :                    +- CometNativeColumnarToRow\n                  :                       :     :                 :                       +- CometProject\n                  :                       :     :                 :                          +- CometFilter\n                  :                       :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 +- BroadcastExchange\n                  :                       :     :                    +- Project\n                  :                       :     :                       +- BroadcastHashJoin\n                  :                       :     :                          :- Project\n                  :                       :     :                          :  +- BroadcastHashJoin\n                  :                       :     :                          :     :- Filter\n                  :                       :     :                          :     :  +- ColumnarToRow\n                  :                       :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                          :     :           +- ReusedSubquery\n                  :                       :     :                          :     +- BroadcastExchange\n                  :                       :     :                          :        +- CometNativeColumnarToRow\n                  :                       :     :                          :           +- CometFilter\n                  :                       :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                          +- BroadcastExchange\n                  :                       :     :                             +- CometNativeColumnarToRow\n                  :                       :     :                                +- CometProject\n                  :                       :     :                                   +- CometFilter\n                  :                       :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     +- BroadcastExchange\n                  :                       :        +- BroadcastHashJoin\n                  :                       :           :- CometNativeColumnarToRow\n                  :                       :           :  +- CometFilter\n                  :                       :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :           +- BroadcastExchange\n                  :                       :              +- Project\n                  :                       :                 +- BroadcastHashJoin\n                  :                       :                    :- CometNativeColumnarToRow\n                  :                       :                    :  +- CometFilter\n                  :                       :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                    +- BroadcastExchange\n                  :                       :                       +- BroadcastHashJoin\n                  :            
           :                          :- CometNativeColumnarToRow\n                  :                       :                          :  +- CometHashAggregate\n                  :                       :                          :     +- CometColumnarExchange\n                  :                       :                          :        +- HashAggregate\n                  :                       :                          :           +- Project\n                  :                       :                          :              +- BroadcastHashJoin\n                  :                       :                          :                 :- Project\n                  :                       :                          :                 :  +- BroadcastHashJoin\n                  :                       :                          :                 :     :- Filter\n                  :                       :                          :                 :     :  +- ColumnarToRow\n                  :                       :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :     :           +- SubqueryBroadcast\n                  :                       :                          :                 :     :              +- BroadcastExchange\n                  :                       :                          :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :                          :                 :     :                    +- CometProject\n                  :                       :                          :                 :     :                       +- CometFilter\n                  :                       :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 :     +- BroadcastExchange\n                  :                       :                          :                 :        +- BroadcastHashJoin\n                  :                       :                          :                 :           :- CometNativeColumnarToRow\n                  :                       :                          :                 :           :  +- CometFilter\n                  :                       :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :           +- BroadcastExchange\n                  :                       :                          :                 :              +- Project\n                  :                       :                          :                 :                 +- BroadcastHashJoin\n                  :                       :                          :                 :                    :- Project\n                  :                       :                          :                 :                    :  +- BroadcastHashJoin\n                  :                       :                          :                 :                    :     :- Filter\n                  :                       :                          :                 :                    :     :  +- 
ColumnarToRow\n                  :                       :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :                    :     :           +- ReusedSubquery\n                  :                       :                          :                 :                    :     +- BroadcastExchange\n                  :                       :                          :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :                          :                 :                    :           +- CometFilter\n                  :                       :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :                    +- BroadcastExchange\n                  :                       :                          :                 :                       +- CometNativeColumnarToRow\n                  :                       :                          :                 :                          +- CometProject\n                  :                       :                          :                 :                             +- CometFilter\n                  :                       :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 +- BroadcastExchange\n                  :                       :                          :                    +- CometNativeColumnarToRow\n                  :                       :                          :                       +- CometProject\n                  :                       :                          :                          +- CometFilter\n                  :                       :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          +- BroadcastExchange\n                  :                       :                             +- Project\n                  :                       :                                +- BroadcastHashJoin\n                  :                       :                                   :- Project\n                  :                       :                                   :  +- BroadcastHashJoin\n                  :                       :                                   :     :- Filter\n                  :                       :                                   :     :  +- ColumnarToRow\n                  :                       :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                                   :     :           +- ReusedSubquery\n                  :                       :                                   :     +- BroadcastExchange\n                  :                       :                                   :        +- CometNativeColumnarToRow\n          
        :                       :                                   :           +- CometFilter\n                  :                       :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                                   +- BroadcastExchange\n                  :                       :                                      +- CometNativeColumnarToRow\n                  :                       :                                         +- CometProject\n                  :                       :                                            +- CometFilter\n                  :                       :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       +- BroadcastExchange\n                  :                          +- CometNativeColumnarToRow\n                  :                             +- CometProject\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :- Project\n                  :  +- Filter\n                  :     :  +- ReusedSubquery\n                  :     +- HashAggregate\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometColumnarExchange\n                  :              +- HashAggregate\n                  :                 +- Project\n                  :                    +- BroadcastHashJoin\n                  :                       :- Project\n                  :                       :  +- BroadcastHashJoin\n                  :                       :     :- BroadcastHashJoin\n                  :                       :     :  :- Filter\n                  :                       :     :  :  +- ColumnarToRow\n                  :                       :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :  :           +- ReusedSubquery\n                  :                       :     :  +- BroadcastExchange\n                  :                       :     :     +- Project\n                  :                       :     :        +- BroadcastHashJoin\n                  :                       :     :           :- CometNativeColumnarToRow\n                  :                       :     :           :  +- CometFilter\n                  :                       :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :           +- BroadcastExchange\n                  :                       :     :              +- BroadcastHashJoin\n                  :                       :     :                 :- CometNativeColumnarToRow\n                  :                       :     :                 :  +- CometHashAggregate\n                  :                       :     :                 :     +- CometColumnarExchange\n                  :                       :     :                 :        +- HashAggregate\n                  :                       :     :                 :           +- Project\n                  :                       :     :                 :              +- BroadcastHashJoin\n                  :                       :     :                 :                 :- Project\n        
          :                       :     :                 :                 :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :     :- Filter\n                  :                       :     :                 :                 :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :     :           +- SubqueryBroadcast\n                  :                       :     :                 :                 :     :              +- BroadcastExchange\n                  :                       :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :     :                    +- CometProject\n                  :                       :     :                 :                 :     :                       +- CometFilter\n                  :                       :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 :     +- BroadcastExchange\n                  :                       :     :                 :                 :        +- BroadcastHashJoin\n                  :                       :     :                 :                 :           :- CometNativeColumnarToRow\n                  :                       :     :                 :                 :           :  +- CometFilter\n                  :                       :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :           +- BroadcastExchange\n                  :                       :     :                 :                 :              +- Project\n                  :                       :     :                 :                 :                 +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :- Project\n                  :                       :     :                 :                 :                    :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :     :- Filter\n                  :                       :     :                 :                 :                    :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :                    :     :           +- ReusedSubquery\n                  :                       :     :                 :                 :                    :     +- BroadcastExchange\n                  :                       :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :     :    
             :                 :                    :           +- CometFilter\n                  :                       :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :                    +- BroadcastExchange\n                  :                       :     :                 :                 :                       +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :                          +- CometProject\n                  :                       :     :                 :                 :                             +- CometFilter\n                  :                       :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 +- BroadcastExchange\n                  :                       :     :                 :                    +- CometNativeColumnarToRow\n                  :                       :     :                 :                       +- CometProject\n                  :                       :     :                 :                          +- CometFilter\n                  :                       :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 +- BroadcastExchange\n                  :                       :     :                    +- Project\n                  :                       :     :                       +- BroadcastHashJoin\n                  :                       :     :                          :- Project\n                  :                       :     :                          :  +- BroadcastHashJoin\n                  :                       :     :                          :     :- Filter\n                  :                       :     :                          :     :  +- ColumnarToRow\n                  :                       :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                          :     :           +- ReusedSubquery\n                  :                       :     :                          :     +- BroadcastExchange\n                  :                       :     :                          :        +- CometNativeColumnarToRow\n                  :                       :     :                          :           +- CometFilter\n                  :                       :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                          +- BroadcastExchange\n                  :                       :     :                             +- CometNativeColumnarToRow\n                  :                       :     :                                +- CometProject\n                  :                       :     :                                   +- CometFilter\n                  :                       :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :         
              :     +- BroadcastExchange\n                  :                       :        +- BroadcastHashJoin\n                  :                       :           :- CometNativeColumnarToRow\n                  :                       :           :  +- CometFilter\n                  :                       :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :           +- BroadcastExchange\n                  :                       :              +- Project\n                  :                       :                 +- BroadcastHashJoin\n                  :                       :                    :- CometNativeColumnarToRow\n                  :                       :                    :  +- CometFilter\n                  :                       :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                    +- BroadcastExchange\n                  :                       :                       +- BroadcastHashJoin\n                  :                       :                          :- CometNativeColumnarToRow\n                  :                       :                          :  +- CometHashAggregate\n                  :                       :                          :     +- CometColumnarExchange\n                  :                       :                          :        +- HashAggregate\n                  :                       :                          :           +- Project\n                  :                       :                          :              +- BroadcastHashJoin\n                  :                       :                          :                 :- Project\n                  :                       :                          :                 :  +- BroadcastHashJoin\n                  :                       :                          :                 :     :- Filter\n                  :                       :                          :                 :     :  +- ColumnarToRow\n                  :                       :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :     :           +- SubqueryBroadcast\n                  :                       :                          :                 :     :              +- BroadcastExchange\n                  :                       :                          :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :                          :                 :     :                    +- CometProject\n                  :                       :                          :                 :     :                       +- CometFilter\n                  :                       :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 :     +- BroadcastExchange\n                  :                       :                          :                 :        +- BroadcastHashJoin\n                  :                       :                          :                 :           :- CometNativeColumnarToRow\n                  :      
                 :                          :                 :           :  +- CometFilter\n                  :                       :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :           +- BroadcastExchange\n                  :                       :                          :                 :              +- Project\n                  :                       :                          :                 :                 +- BroadcastHashJoin\n                  :                       :                          :                 :                    :- Project\n                  :                       :                          :                 :                    :  +- BroadcastHashJoin\n                  :                       :                          :                 :                    :     :- Filter\n                  :                       :                          :                 :                    :     :  +- ColumnarToRow\n                  :                       :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :                    :     :           +- ReusedSubquery\n                  :                       :                          :                 :                    :     +- BroadcastExchange\n                  :                       :                          :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :                          :                 :                    :           +- CometFilter\n                  :                       :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :                    +- BroadcastExchange\n                  :                       :                          :                 :                       +- CometNativeColumnarToRow\n                  :                       :                          :                 :                          +- CometProject\n                  :                       :                          :                 :                             +- CometFilter\n                  :                       :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 +- BroadcastExchange\n                  :                       :                          :                    +- CometNativeColumnarToRow\n                  :                       :                          :                       +- CometProject\n                  :                       :                          :                          +- CometFilter\n                  :                       :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          +- BroadcastExchange\n                  :     
                  :                             +- Project\n                  :                       :                                +- BroadcastHashJoin\n                  :                       :                                   :- Project\n                  :                       :                                   :  +- BroadcastHashJoin\n                  :                       :                                   :     :- Filter\n                  :                       :                                   :     :  +- ColumnarToRow\n                  :                       :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                                   :     :           +- ReusedSubquery\n                  :                       :                                   :     +- BroadcastExchange\n                  :                       :                                   :        +- CometNativeColumnarToRow\n                  :                       :                                   :           +- CometFilter\n                  :                       :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                                   +- BroadcastExchange\n                  :                       :                                      +- CometNativeColumnarToRow\n                  :                       :                                         +- CometProject\n                  :                       :                                            +- CometFilter\n                  :                       :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       +- BroadcastExchange\n                  :                          +- CometNativeColumnarToRow\n                  :                             +- CometProject\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- Project\n                     +- Filter\n                        :  +- ReusedSubquery\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- BroadcastHashJoin\n                                          :     :  :- Filter\n                                          :     :  :  +- ColumnarToRow\n                                          :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :  :           +- ReusedSubquery\n                                          :     :  +- BroadcastExchange\n                                          :     :     +- Project\n                                          :     :        +- BroadcastHashJoin\n                      
                    :     :           :- CometNativeColumnarToRow\n                                          :     :           :  +- CometFilter\n                                          :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :           +- BroadcastExchange\n                                          :     :              +- BroadcastHashJoin\n                                          :     :                 :- CometNativeColumnarToRow\n                                          :     :                 :  +- CometHashAggregate\n                                          :     :                 :     +- CometColumnarExchange\n                                          :     :                 :        +- HashAggregate\n                                          :     :                 :           +- Project\n                                          :     :                 :              +- BroadcastHashJoin\n                                          :     :                 :                 :- Project\n                                          :     :                 :                 :  +- BroadcastHashJoin\n                                          :     :                 :                 :     :- Filter\n                                          :     :                 :                 :     :  +- ColumnarToRow\n                                          :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                 :                 :     :           +- SubqueryBroadcast\n                                          :     :                 :                 :     :              +- BroadcastExchange\n                                          :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                          :     :                 :                 :     :                    +- CometProject\n                                          :     :                 :                 :     :                       +- CometFilter\n                                          :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 :                 :     +- BroadcastExchange\n                                          :     :                 :                 :        +- BroadcastHashJoin\n                                          :     :                 :                 :           :- CometNativeColumnarToRow\n                                          :     :                 :                 :           :  +- CometFilter\n                                          :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :                 :                 :           +- BroadcastExchange\n                                          :     :                 :                 :              +- Project\n                                          :     :                 :                 :                 +- BroadcastHashJoin\n                                          :     :                 :                 :                    :- 
Project\n                                          :     :                 :                 :                    :  +- BroadcastHashJoin\n                                          :     :                 :                 :                    :     :- Filter\n                                          :     :                 :                 :                    :     :  +- ColumnarToRow\n                                          :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                 :                 :                    :     :           +- ReusedSubquery\n                                          :     :                 :                 :                    :     +- BroadcastExchange\n                                          :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                          :     :                 :                 :                    :           +- CometFilter\n                                          :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :                 :                 :                    +- BroadcastExchange\n                                          :     :                 :                 :                       +- CometNativeColumnarToRow\n                                          :     :                 :                 :                          +- CometProject\n                                          :     :                 :                 :                             +- CometFilter\n                                          :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 :                 +- BroadcastExchange\n                                          :     :                 :                    +- CometNativeColumnarToRow\n                                          :     :                 :                       +- CometProject\n                                          :     :                 :                          +- CometFilter\n                                          :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 +- BroadcastExchange\n                                          :     :                    +- Project\n                                          :     :                       +- BroadcastHashJoin\n                                          :     :                          :- Project\n                                          :     :                          :  +- BroadcastHashJoin\n                                          :     :                          :     :- Filter\n                                          :     :                          :     :  +- ColumnarToRow\n                                          :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                         
 :     :                          :     :           +- ReusedSubquery\n                                          :     :                          :     +- BroadcastExchange\n                                          :     :                          :        +- CometNativeColumnarToRow\n                                          :     :                          :           +- CometFilter\n                                          :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :                          +- BroadcastExchange\n                                          :     :                             +- CometNativeColumnarToRow\n                                          :     :                                +- CometProject\n                                          :     :                                   +- CometFilter\n                                          :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- BroadcastHashJoin\n                                          :           :- CometNativeColumnarToRow\n                                          :           :  +- CometFilter\n                                          :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :           +- BroadcastExchange\n                                          :              +- Project\n                                          :                 +- BroadcastHashJoin\n                                          :                    :- CometNativeColumnarToRow\n                                          :                    :  +- CometFilter\n                                          :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    +- BroadcastExchange\n                                          :                       +- BroadcastHashJoin\n                                          :                          :- CometNativeColumnarToRow\n                                          :                          :  +- CometHashAggregate\n                                          :                          :     +- CometColumnarExchange\n                                          :                          :        +- HashAggregate\n                                          :                          :           +- Project\n                                          :                          :              +- BroadcastHashJoin\n                                          :                          :                 :- Project\n                                          :                          :                 :  +- BroadcastHashJoin\n                                          :                          :                 :     :- Filter\n                                          :                          :                 :     :  +- ColumnarToRow\n                                          :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                          :                 :     :           +- 
SubqueryBroadcast\n                                          :                          :                 :     :              +- BroadcastExchange\n                                          :                          :                 :     :                 +- CometNativeColumnarToRow\n                                          :                          :                 :     :                    +- CometProject\n                                          :                          :                 :     :                       +- CometFilter\n                                          :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          :                 :     +- BroadcastExchange\n                                          :                          :                 :        +- BroadcastHashJoin\n                                          :                          :                 :           :- CometNativeColumnarToRow\n                                          :                          :                 :           :  +- CometFilter\n                                          :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                          :                 :           +- BroadcastExchange\n                                          :                          :                 :              +- Project\n                                          :                          :                 :                 +- BroadcastHashJoin\n                                          :                          :                 :                    :- Project\n                                          :                          :                 :                    :  +- BroadcastHashJoin\n                                          :                          :                 :                    :     :- Filter\n                                          :                          :                 :                    :     :  +- ColumnarToRow\n                                          :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                          :                 :                    :     :           +- ReusedSubquery\n                                          :                          :                 :                    :     +- BroadcastExchange\n                                          :                          :                 :                    :        +- CometNativeColumnarToRow\n                                          :                          :                 :                    :           +- CometFilter\n                                          :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                          :                 :                    +- BroadcastExchange\n                                          :                          :                 :                       +- CometNativeColumnarToRow\n                                          :     
                     :                 :                          +- CometProject\n                                          :                          :                 :                             +- CometFilter\n                                          :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          :                 +- BroadcastExchange\n                                          :                          :                    +- CometNativeColumnarToRow\n                                          :                          :                       +- CometProject\n                                          :                          :                          +- CometFilter\n                                          :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          +- BroadcastExchange\n                                          :                             +- Project\n                                          :                                +- BroadcastHashJoin\n                                          :                                   :- Project\n                                          :                                   :  +- BroadcastHashJoin\n                                          :                                   :     :- Filter\n                                          :                                   :     :  +- ColumnarToRow\n                                          :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                                   :     :           +- ReusedSubquery\n                                          :                                   :     +- BroadcastExchange\n                                          :                                   :        +- CometNativeColumnarToRow\n                                          :                                   :           +- CometFilter\n                                          :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                                   +- BroadcastExchange\n                                          :                                      +- CometNativeColumnarToRow\n                                          :                                         +- CometProject\n                                          :                                            +- CometFilter\n                                          :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 164 out of 458 eligible operators (35%). Final plan contains 93 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q14a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometProject\n                  :  +- CometFilter\n                  :     :  +- Subquery\n                  :     :     +- CometNativeColumnarToRow\n                  :     :        +- CometHashAggregate\n                  :     :           +- CometExchange\n                  :     :              +- CometHashAggregate\n                  :     :                 +- CometUnion\n                  :     :                    :- CometProject\n                  :     :                    :  +- CometBroadcastHashJoin\n                  :     :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :                    :     :     +- ReusedSubquery\n                  :     :                    :     +- CometBroadcastExchange\n                  :     :                    :        +- CometProject\n                  :     :                    :           +- CometFilter\n                  :     :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                    :- CometProject\n                  :     :                    :  +- CometBroadcastHashJoin\n                  :     :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     :     +- ReusedSubquery\n                  :     :                    :     +- CometBroadcastExchange\n                  :     :                    :        +- CometProject\n                  :     :                    :           +- CometFilter\n                  :     :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                    +- CometProject\n                  :     :                       +- CometBroadcastHashJoin\n                  :     :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :                          :     +- ReusedSubquery\n                  :     :                          +- CometBroadcastExchange\n                  :     :                             +- CometProject\n                  :     :                                +- CometFilter\n                  :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometHashAggregate\n                  :        +- CometExchange\n                  :           +- CometHashAggregate\n                  :              +- CometProject\n                  :                 +- CometBroadcastHashJoin\n                  :                    :- CometProject\n                  :                    :  +- CometBroadcastHashJoin\n                  :                    :     :- CometBroadcastHashJoin\n                  :                    :     :  :- CometFilter\n                  :                    :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :     :  :        +- SubqueryBroadcast\n                  :                    :     :  :           +- BroadcastExchange\n                  :   
                 :     :  :              +- CometNativeColumnarToRow\n                  :                    :     :  :                 +- CometProject\n                  :                    :     :  :                    +- CometFilter\n                  :                    :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :  +- CometBroadcastExchange\n                  :                    :     :     +- CometProject\n                  :                    :     :        +- CometBroadcastHashJoin\n                  :                    :     :           :- CometFilter\n                  :                    :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :           +- CometBroadcastExchange\n                  :                    :     :              +- CometBroadcastHashJoin\n                  :                    :     :                 :- CometHashAggregate\n                  :                    :     :                 :  +- CometExchange\n                  :                    :     :                 :     +- CometHashAggregate\n                  :                    :     :                 :        +- CometProject\n                  :                    :     :                 :           +- CometBroadcastHashJoin\n                  :                    :     :                 :              :- CometProject\n                  :                    :     :                 :              :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :     :- CometFilter\n                  :                    :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :     :                 :              :     :        +- SubqueryBroadcast\n                  :                    :     :                 :              :     :           +- BroadcastExchange\n                  :                    :     :                 :              :     :              +- CometNativeColumnarToRow\n                  :                    :     :                 :              :     :                 +- CometProject\n                  :                    :     :                 :              :     :                    +- CometFilter\n                  :                    :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              :     +- CometBroadcastExchange\n                  :                    :     :                 :              :        +- CometBroadcastHashJoin\n                  :                    :     :                 :              :           :- CometFilter\n                  :                    :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :           +- CometBroadcastExchange\n                  :                    :     :                 :              :              +- CometProject\n                  :                    :     :                 :              :                 +- CometBroadcastHashJoin\n           
       :                    :     :                 :              :                    :- CometProject\n                  :                    :     :                 :              :                    :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :                    :     :- CometFilter\n                  :                    :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :     :                 :              :                    :     :        +- ReusedSubquery\n                  :                    :     :                 :              :                    :     +- CometBroadcastExchange\n                  :                    :     :                 :              :                    :        +- CometFilter\n                  :                    :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :                    +- CometBroadcastExchange\n                  :                    :     :                 :              :                       +- CometProject\n                  :                    :     :                 :              :                          +- CometFilter\n                  :                    :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              +- CometBroadcastExchange\n                  :                    :     :                 :                 +- CometProject\n                  :                    :     :                 :                    +- CometFilter\n                  :                    :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 +- CometBroadcastExchange\n                  :                    :     :                    +- CometProject\n                  :                    :     :                       +- CometBroadcastHashJoin\n                  :                    :     :                          :- CometProject\n                  :                    :     :                          :  +- CometBroadcastHashJoin\n                  :                    :     :                          :     :- CometFilter\n                  :                    :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :     :                          :     :        +- ReusedSubquery\n                  :                    :     :                          :     +- CometBroadcastExchange\n                  :                    :     :                          :        +- CometFilter\n                  :                    :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                          +- CometBroadcastExchange\n                  :                    :     :                             +- CometProject\n                  :                    :     :    
                            +- CometFilter\n                  :                    :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     +- CometBroadcastExchange\n                  :                    :        +- CometBroadcastHashJoin\n                  :                    :           :- CometFilter\n                  :                    :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :           +- CometBroadcastExchange\n                  :                    :              +- CometProject\n                  :                    :                 +- CometBroadcastHashJoin\n                  :                    :                    :- CometFilter\n                  :                    :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                    +- CometBroadcastExchange\n                  :                    :                       +- CometBroadcastHashJoin\n                  :                    :                          :- CometHashAggregate\n                  :                    :                          :  +- CometExchange\n                  :                    :                          :     +- CometHashAggregate\n                  :                    :                          :        +- CometProject\n                  :                    :                          :           +- CometBroadcastHashJoin\n                  :                    :                          :              :- CometProject\n                  :                    :                          :              :  +- CometBroadcastHashJoin\n                  :                    :                          :              :     :- CometFilter\n                  :                    :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :                          :              :     :        +- SubqueryBroadcast\n                  :                    :                          :              :     :           +- BroadcastExchange\n                  :                    :                          :              :     :              +- CometNativeColumnarToRow\n                  :                    :                          :              :     :                 +- CometProject\n                  :                    :                          :              :     :                    +- CometFilter\n                  :                    :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              :     +- CometBroadcastExchange\n                  :                    :                          :              :        +- CometBroadcastHashJoin\n                  :                    :                          :              :           :- CometFilter\n                  :                    :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :           +- CometBroadcastExchange\n       
           :                    :                          :              :              +- CometProject\n                  :                    :                          :              :                 +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :- CometProject\n                  :                    :                          :              :                    :  +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :     :- CometFilter\n                  :                    :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :                          :              :                    :     :        +- ReusedSubquery\n                  :                    :                          :              :                    :     +- CometBroadcastExchange\n                  :                    :                          :              :                    :        +- CometFilter\n                  :                    :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :                    +- CometBroadcastExchange\n                  :                    :                          :              :                       +- CometProject\n                  :                    :                          :              :                          +- CometFilter\n                  :                    :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              +- CometBroadcastExchange\n                  :                    :                          :                 +- CometProject\n                  :                    :                          :                    +- CometFilter\n                  :                    :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          +- CometBroadcastExchange\n                  :                    :                             +- CometProject\n                  :                    :                                +- CometBroadcastHashJoin\n                  :                    :                                   :- CometProject\n                  :                    :                                   :  +- CometBroadcastHashJoin\n                  :                    :                                   :     :- CometFilter\n                  :                    :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :                                   :     :        +- ReusedSubquery\n                  :                    :                                   :     +- CometBroadcastExchange\n                  :                    :                                   :        +- CometFilter\n                  :                    :                                   : 
          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                                   +- CometBroadcastExchange\n                  :                    :                                      +- CometProject\n                  :                    :                                         +- CometFilter\n                  :                    :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    +- CometBroadcastExchange\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :- CometProject\n                  :  +- CometFilter\n                  :     :  +- ReusedSubquery\n                  :     +- CometHashAggregate\n                  :        +- CometExchange\n                  :           +- CometHashAggregate\n                  :              +- CometProject\n                  :                 +- CometBroadcastHashJoin\n                  :                    :- CometProject\n                  :                    :  +- CometBroadcastHashJoin\n                  :                    :     :- CometBroadcastHashJoin\n                  :                    :     :  :- CometFilter\n                  :                    :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :     :  :        +- ReusedSubquery\n                  :                    :     :  +- CometBroadcastExchange\n                  :                    :     :     +- CometProject\n                  :                    :     :        +- CometBroadcastHashJoin\n                  :                    :     :           :- CometFilter\n                  :                    :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :           +- CometBroadcastExchange\n                  :                    :     :              +- CometBroadcastHashJoin\n                  :                    :     :                 :- CometHashAggregate\n                  :                    :     :                 :  +- CometExchange\n                  :                    :     :                 :     +- CometHashAggregate\n                  :                    :     :                 :        +- CometProject\n                  :                    :     :                 :           +- CometBroadcastHashJoin\n                  :                    :     :                 :              :- CometProject\n                  :                    :     :                 :              :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :     :- CometFilter\n                  :                    :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :     :                 :              :     :        +- SubqueryBroadcast\n                  :                    :     :                 :              :     :           +- BroadcastExchange\n                  :                    :     :                 :              :     :              +- 
CometNativeColumnarToRow\n                  :                    :     :                 :              :     :                 +- CometProject\n                  :                    :     :                 :              :     :                    +- CometFilter\n                  :                    :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              :     +- CometBroadcastExchange\n                  :                    :     :                 :              :        +- CometBroadcastHashJoin\n                  :                    :     :                 :              :           :- CometFilter\n                  :                    :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :           +- CometBroadcastExchange\n                  :                    :     :                 :              :              +- CometProject\n                  :                    :     :                 :              :                 +- CometBroadcastHashJoin\n                  :                    :     :                 :              :                    :- CometProject\n                  :                    :     :                 :              :                    :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :                    :     :- CometFilter\n                  :                    :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :     :                 :              :                    :     :        +- ReusedSubquery\n                  :                    :     :                 :              :                    :     +- CometBroadcastExchange\n                  :                    :     :                 :              :                    :        +- CometFilter\n                  :                    :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :                    +- CometBroadcastExchange\n                  :                    :     :                 :              :                       +- CometProject\n                  :                    :     :                 :              :                          +- CometFilter\n                  :                    :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              +- CometBroadcastExchange\n                  :                    :     :                 :                 +- CometProject\n                  :                    :     :                 :                    +- CometFilter\n                  :                    :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 +- CometBroadcastExchange\n                  : 
                   :     :                    +- CometProject\n                  :                    :     :                       +- CometBroadcastHashJoin\n                  :                    :     :                          :- CometProject\n                  :                    :     :                          :  +- CometBroadcastHashJoin\n                  :                    :     :                          :     :- CometFilter\n                  :                    :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :     :                          :     :        +- ReusedSubquery\n                  :                    :     :                          :     +- CometBroadcastExchange\n                  :                    :     :                          :        +- CometFilter\n                  :                    :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                          +- CometBroadcastExchange\n                  :                    :     :                             +- CometProject\n                  :                    :     :                                +- CometFilter\n                  :                    :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     +- CometBroadcastExchange\n                  :                    :        +- CometBroadcastHashJoin\n                  :                    :           :- CometFilter\n                  :                    :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :           +- CometBroadcastExchange\n                  :                    :              +- CometProject\n                  :                    :                 +- CometBroadcastHashJoin\n                  :                    :                    :- CometFilter\n                  :                    :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                    +- CometBroadcastExchange\n                  :                    :                       +- CometBroadcastHashJoin\n                  :                    :                          :- CometHashAggregate\n                  :                    :                          :  +- CometExchange\n                  :                    :                          :     +- CometHashAggregate\n                  :                    :                          :        +- CometProject\n                  :                    :                          :           +- CometBroadcastHashJoin\n                  :                    :                          :              :- CometProject\n                  :                    :                          :              :  +- CometBroadcastHashJoin\n                  :                    :                          :              :     :- CometFilter\n                  :                    :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :                          :              :     :        +- SubqueryBroadcast\n             
     :                    :                          :              :     :           +- BroadcastExchange\n                  :                    :                          :              :     :              +- CometNativeColumnarToRow\n                  :                    :                          :              :     :                 +- CometProject\n                  :                    :                          :              :     :                    +- CometFilter\n                  :                    :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              :     +- CometBroadcastExchange\n                  :                    :                          :              :        +- CometBroadcastHashJoin\n                  :                    :                          :              :           :- CometFilter\n                  :                    :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :           +- CometBroadcastExchange\n                  :                    :                          :              :              +- CometProject\n                  :                    :                          :              :                 +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :- CometProject\n                  :                    :                          :              :                    :  +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :     :- CometFilter\n                  :                    :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :                          :              :                    :     :        +- ReusedSubquery\n                  :                    :                          :              :                    :     +- CometBroadcastExchange\n                  :                    :                          :              :                    :        +- CometFilter\n                  :                    :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :                    +- CometBroadcastExchange\n                  :                    :                          :              :                       +- CometProject\n                  :                    :                          :              :                          +- CometFilter\n                  :                    :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              +- CometBroadcastExchange\n                  :                    :                          :                 +- CometProject\n                  :                    :                          :                    
+- CometFilter\n                  :                    :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          +- CometBroadcastExchange\n                  :                    :                             +- CometProject\n                  :                    :                                +- CometBroadcastHashJoin\n                  :                    :                                   :- CometProject\n                  :                    :                                   :  +- CometBroadcastHashJoin\n                  :                    :                                   :     :- CometFilter\n                  :                    :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :                                   :     :        +- ReusedSubquery\n                  :                    :                                   :     +- CometBroadcastExchange\n                  :                    :                                   :        +- CometFilter\n                  :                    :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                                   +- CometBroadcastExchange\n                  :                    :                                      +- CometProject\n                  :                    :                                         +- CometFilter\n                  :                    :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    +- CometBroadcastExchange\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometProject\n                     +- CometFilter\n                        :  +- ReusedSubquery\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometBroadcastHashJoin\n                                       :     :  :- CometFilter\n                                       :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                       :     :  :        +- ReusedSubquery\n                                       :     :  +- CometBroadcastExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometFilter\n                                       :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       
:     :              +- CometBroadcastHashJoin\n                                       :     :                 :- CometHashAggregate\n                                       :     :                 :  +- CometExchange\n                                       :     :                 :     +- CometHashAggregate\n                                       :     :                 :        +- CometProject\n                                       :     :                 :           +- CometBroadcastHashJoin\n                                       :     :                 :              :- CometProject\n                                       :     :                 :              :  +- CometBroadcastHashJoin\n                                       :     :                 :              :     :- CometFilter\n                                       :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :                 :              :     :        +- SubqueryBroadcast\n                                       :     :                 :              :     :           +- BroadcastExchange\n                                       :     :                 :              :     :              +- CometNativeColumnarToRow\n                                       :     :                 :              :     :                 +- CometProject\n                                       :     :                 :              :     :                    +- CometFilter\n                                       :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :                 :              :     +- CometBroadcastExchange\n                                       :     :                 :              :        +- CometBroadcastHashJoin\n                                       :     :                 :              :           :- CometFilter\n                                       :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :                 :              :           +- CometBroadcastExchange\n                                       :     :                 :              :              +- CometProject\n                                       :     :                 :              :                 +- CometBroadcastHashJoin\n                                       :     :                 :              :                    :- CometProject\n                                       :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                       :     :                 :              :                    :     :- CometFilter\n                                       :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :                 :              :                    :     :        +- ReusedSubquery\n                                       :     :                 :              :                    :     +- CometBroadcastExchange\n                                       :     :                 :              :                    :        +- CometFilter\n      
                                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :                 :              :                    +- CometBroadcastExchange\n                                       :     :                 :              :                       +- CometProject\n                                       :     :                 :              :                          +- CometFilter\n                                       :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :                 :              +- CometBroadcastExchange\n                                       :     :                 :                 +- CometProject\n                                       :     :                 :                    +- CometFilter\n                                       :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :                 +- CometBroadcastExchange\n                                       :     :                    +- CometProject\n                                       :     :                       +- CometBroadcastHashJoin\n                                       :     :                          :- CometProject\n                                       :     :                          :  +- CometBroadcastHashJoin\n                                       :     :                          :     :- CometFilter\n                                       :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                       :     :                          :     :        +- ReusedSubquery\n                                       :     :                          :     +- CometBroadcastExchange\n                                       :     :                          :        +- CometFilter\n                                       :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :                          +- CometBroadcastExchange\n                                       :     :                             +- CometProject\n                                       :     :                                +- CometFilter\n                                       :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometBroadcastHashJoin\n                                       :           :- CometFilter\n                                       :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :           +- CometBroadcastExchange\n                                       :              +- CometProject\n                                       :                 +- CometBroadcastHashJoin\n                                       :                    :- CometFilter\n                                       :                    :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                    +- CometBroadcastExchange\n                                       :                       +- CometBroadcastHashJoin\n                                       :                          :- CometHashAggregate\n                                       :                          :  +- CometExchange\n                                       :                          :     +- CometHashAggregate\n                                       :                          :        +- CometProject\n                                       :                          :           +- CometBroadcastHashJoin\n                                       :                          :              :- CometProject\n                                       :                          :              :  +- CometBroadcastHashJoin\n                                       :                          :              :     :- CometFilter\n                                       :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :                          :              :     :        +- SubqueryBroadcast\n                                       :                          :              :     :           +- BroadcastExchange\n                                       :                          :              :     :              +- CometNativeColumnarToRow\n                                       :                          :              :     :                 +- CometProject\n                                       :                          :              :     :                    +- CometFilter\n                                       :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :                          :              :     +- CometBroadcastExchange\n                                       :                          :              :        +- CometBroadcastHashJoin\n                                       :                          :              :           :- CometFilter\n                                       :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                          :              :           +- CometBroadcastExchange\n                                       :                          :              :              +- CometProject\n                                       :                          :              :                 +- CometBroadcastHashJoin\n                                       :                          :              :                    :- CometProject\n                                       :                          :              :                    :  +- CometBroadcastHashJoin\n                                       :                          :              :                    :     :- CometFilter\n                                       :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :                          :              :                    :     :        +- 
ReusedSubquery\n                                       :                          :              :                    :     +- CometBroadcastExchange\n                                       :                          :              :                    :        +- CometFilter\n                                       :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                          :              :                    +- CometBroadcastExchange\n                                       :                          :              :                       +- CometProject\n                                       :                          :              :                          +- CometFilter\n                                       :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :                          :              +- CometBroadcastExchange\n                                       :                          :                 +- CometProject\n                                       :                          :                    +- CometFilter\n                                       :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :                          +- CometBroadcastExchange\n                                       :                             +- CometProject\n                                       :                                +- CometBroadcastHashJoin\n                                       :                                   :- CometProject\n                                       :                                   :  +- CometBroadcastHashJoin\n                                       :                                   :     :- CometFilter\n                                       :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                       :                                   :     :        +- ReusedSubquery\n                                       :                                   :     +- CometBroadcastExchange\n                                       :                                   :        +- CometFilter\n                                       :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                                   +- CometBroadcastExchange\n                                       :                                      +- CometProject\n                                       :                                         +- CometFilter\n                                       :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 424 out of 458 eligible operators (92%). 
Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q14b.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- BroadcastHashJoin\n   :- Filter\n   :  :  +- Subquery\n   :  :     +- HashAggregate\n   :  :        +- CometNativeColumnarToRow\n   :  :           +- CometColumnarExchange\n   :  :              +- HashAggregate\n   :  :                 +- Union\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    +- Project\n   :  :                       +- BroadcastHashJoin\n   :  :                          :- ColumnarToRow\n   :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                          :        +- ReusedSubquery\n   :  :                          +- BroadcastExchange\n   :  :                             +- CometNativeColumnarToRow\n   :  :                                +- CometProject\n   :  :                                   +- CometFilter\n   :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  +- HashAggregate\n   :     +- CometNativeColumnarToRow\n   :        +- CometColumnarExchange\n   :           +- HashAggregate\n   :              +- Project\n   :                 +- BroadcastHashJoin\n   :                    :- Project\n   :                    :  +- BroadcastHashJoin\n   :                    :     :- BroadcastHashJoin\n   :                    :     :  :- Filter\n   :                    :     :  :  +- ColumnarToRow\n   :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :  :           +- SubqueryBroadcast\n   :                    :     :  :              +- BroadcastExchange\n   :                    :     :  :                 +- CometNativeColumnarToRow\n   :                    :     :  :                    +- CometProject\n   :                    :     :  :                       +- CometFilter\n   :                    :     :  :                          :  +- Subquery\n   :                    :     :  :                          :     +- CometNativeColumnarToRow\n   :        
            :     :  :                          :        +- CometProject\n   :                    :     :  :                          :           +- CometFilter\n   :                    :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :  +- BroadcastExchange\n   :                    :     :     +- Project\n   :                    :     :        +- BroadcastHashJoin\n   :                    :     :           :- CometNativeColumnarToRow\n   :                    :     :           :  +- CometFilter\n   :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :           +- BroadcastExchange\n   :                    :     :              +- BroadcastHashJoin\n   :                    :     :                 :- CometNativeColumnarToRow\n   :                    :     :                 :  +- CometHashAggregate\n   :                    :     :                 :     +- CometColumnarExchange\n   :                    :     :                 :        +- HashAggregate\n   :                    :     :                 :           +- Project\n   :                    :     :                 :              +- BroadcastHashJoin\n   :                    :     :                 :                 :- Project\n   :                    :     :                 :                 :  +- BroadcastHashJoin\n   :                    :     :                 :                 :     :- Filter\n   :                    :     :                 :                 :     :  +- ColumnarToRow\n   :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :     :           +- SubqueryBroadcast\n   :                    :     :                 :                 :     :              +- BroadcastExchange\n   :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n   :                    :     :                 :                 :     :                    +- CometProject\n   :                    :     :                 :                 :     :                       +- CometFilter\n   :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 :     +- BroadcastExchange\n   :                    :     :                 :                 :        +- BroadcastHashJoin\n   :                    :     :                 :                 :           :- CometNativeColumnarToRow\n   :                    :     :                 :                 :           :  +- CometFilter\n   :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :           +- BroadcastExchange\n   :                    :     :                 :                 :              +- Project\n   :                    :     :                 :                 :                 +- BroadcastHashJoin\n   :                    :     :                 
:                 :                    :- Project\n   :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n   :                    :     :                 :                 :                    :     :- Filter\n   :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n   :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n   :                    :     :                 :                 :                    :     +- BroadcastExchange\n   :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                    :           +- CometFilter\n   :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :                    +- BroadcastExchange\n   :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                          +- CometProject\n   :                    :     :                 :                 :                             +- CometFilter\n   :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 +- BroadcastExchange\n   :                    :     :                 :                    +- CometNativeColumnarToRow\n   :                    :     :                 :                       +- CometProject\n   :                    :     :                 :                          +- CometFilter\n   :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 +- BroadcastExchange\n   :                    :     :                    +- Project\n   :                    :     :                       +- BroadcastHashJoin\n   :                    :     :                          :- Project\n   :                    :     :                          :  +- BroadcastHashJoin\n   :                    :     :                          :     :- Filter\n   :                    :     :                          :     :  +- ColumnarToRow\n   :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                          :     :           +- ReusedSubquery\n   :                    :     :                          :     +- BroadcastExchange\n   :                    :     :                          :        +- CometNativeColumnarToRow\n   :                    :     :                          :           +- CometFilter\n   :                    :     :                          :              +- CometNativeScan parquet 
spark_catalog.default.item\n   :                    :     :                          +- BroadcastExchange\n   :                    :     :                             +- CometNativeColumnarToRow\n   :                    :     :                                +- CometProject\n   :                    :     :                                   +- CometFilter\n   :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     +- BroadcastExchange\n   :                    :        +- BroadcastHashJoin\n   :                    :           :- CometNativeColumnarToRow\n   :                    :           :  +- CometFilter\n   :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :           +- BroadcastExchange\n   :                    :              +- Project\n   :                    :                 +- BroadcastHashJoin\n   :                    :                    :- CometNativeColumnarToRow\n   :                    :                    :  +- CometFilter\n   :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                    +- BroadcastExchange\n   :                    :                       +- BroadcastHashJoin\n   :                    :                          :- CometNativeColumnarToRow\n   :                    :                          :  +- CometHashAggregate\n   :                    :                          :     +- CometColumnarExchange\n   :                    :                          :        +- HashAggregate\n   :                    :                          :           +- Project\n   :                    :                          :              +- BroadcastHashJoin\n   :                    :                          :                 :- Project\n   :                    :                          :                 :  +- BroadcastHashJoin\n   :                    :                          :                 :     :- Filter\n   :                    :                          :                 :     :  +- ColumnarToRow\n   :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :     :           +- SubqueryBroadcast\n   :                    :                          :                 :     :              +- BroadcastExchange\n   :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n   :                    :                          :                 :     :                    +- CometProject\n   :                    :                          :                 :     :                       +- CometFilter\n   :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 :     +- BroadcastExchange\n   :                    :                          :                 :        +- BroadcastHashJoin\n   :                    :                          :                 :           :- CometNativeColumnarToRow\n   :                    :                          :                 :           :  +- CometFilter\n   : 
                   :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :           +- BroadcastExchange\n   :                    :                          :                 :              +- Project\n   :                    :                          :                 :                 +- BroadcastHashJoin\n   :                    :                          :                 :                    :- Project\n   :                    :                          :                 :                    :  +- BroadcastHashJoin\n   :                    :                          :                 :                    :     :- Filter\n   :                    :                          :                 :                    :     :  +- ColumnarToRow\n   :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :                    :     :           +- ReusedSubquery\n   :                    :                          :                 :                    :     +- BroadcastExchange\n   :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n   :                    :                          :                 :                    :           +- CometFilter\n   :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :                    +- BroadcastExchange\n   :                    :                          :                 :                       +- CometNativeColumnarToRow\n   :                    :                          :                 :                          +- CometProject\n   :                    :                          :                 :                             +- CometFilter\n   :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 +- BroadcastExchange\n   :                    :                          :                    +- CometNativeColumnarToRow\n   :                    :                          :                       +- CometProject\n   :                    :                          :                          +- CometFilter\n   :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          +- BroadcastExchange\n   :                    :                             +- Project\n   :                    :                                +- BroadcastHashJoin\n   :                    :                                   :- Project\n   :                    :                                   :  +- BroadcastHashJoin\n   :                    :                                   :     :- Filter\n   :                    :                                   :     :  +- ColumnarToRow\n   :                    :                                   :     :     +-  Scan parquet 
spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                                   :     :           +- ReusedSubquery\n   :                    :                                   :     +- BroadcastExchange\n   :                    :                                   :        +- CometNativeColumnarToRow\n   :                    :                                   :           +- CometFilter\n   :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                                   +- BroadcastExchange\n   :                    :                                      +- CometNativeColumnarToRow\n   :                    :                                         +- CometProject\n   :                    :                                            +- CometFilter\n   :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    +- BroadcastExchange\n   :                       +- CometNativeColumnarToRow\n   :                          +- CometProject\n   :                             +- CometFilter\n   :                                :  +- Subquery\n   :                                :     +- CometNativeColumnarToRow\n   :                                :        +- CometProject\n   :                                :           +- CometFilter\n   :                                :              +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   +- BroadcastExchange\n      +- Filter\n         :  +- ReusedSubquery\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- BroadcastHashJoin\n                           :     :  :- Filter\n                           :     :  :  +- ColumnarToRow\n                           :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :  :           +- SubqueryBroadcast\n                           :     :  :              +- BroadcastExchange\n                           :     :  :                 +- CometNativeColumnarToRow\n                           :     :  :                    +- CometProject\n                           :     :  :                       +- CometFilter\n                           :     :  :                          :  +- Subquery\n                           :     :  :                          :     +- CometNativeColumnarToRow\n                           :     :  :                          :        +- CometProject\n                           :     :  :                          :           +- CometFilter\n                           :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  +- BroadcastExchange\n                           :     :     +- Project\n  
                         :     :        +- BroadcastHashJoin\n                           :     :           :- CometNativeColumnarToRow\n                           :     :           :  +- CometFilter\n                           :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :           +- BroadcastExchange\n                           :     :              +- BroadcastHashJoin\n                           :     :                 :- CometNativeColumnarToRow\n                           :     :                 :  +- CometHashAggregate\n                           :     :                 :     +- CometColumnarExchange\n                           :     :                 :        +- HashAggregate\n                           :     :                 :           +- Project\n                           :     :                 :              +- BroadcastHashJoin\n                           :     :                 :                 :- Project\n                           :     :                 :                 :  +- BroadcastHashJoin\n                           :     :                 :                 :     :- Filter\n                           :     :                 :                 :     :  +- ColumnarToRow\n                           :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :     :           +- SubqueryBroadcast\n                           :     :                 :                 :     :              +- BroadcastExchange\n                           :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                           :     :                 :                 :     :                    +- CometProject\n                           :     :                 :                 :     :                       +- CometFilter\n                           :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 :     +- BroadcastExchange\n                           :     :                 :                 :        +- BroadcastHashJoin\n                           :     :                 :                 :           :- CometNativeColumnarToRow\n                           :     :                 :                 :           :  +- CometFilter\n                           :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :           +- BroadcastExchange\n                           :     :                 :                 :              +- Project\n                           :     :                 :                 :                 +- BroadcastHashJoin\n                           :     :                 :                 :                    :- Project\n                           :     :                 :                 :                    :  +- BroadcastHashJoin\n                           :     :                 :                 :                    :     :- Filter\n                           :     :                 :                 :                    :     :  +- ColumnarToRow\n                           :     : 
                :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :                    :     :           +- ReusedSubquery\n                           :     :                 :                 :                    :     +- BroadcastExchange\n                           :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                           :     :                 :                 :                    :           +- CometFilter\n                           :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :                    +- BroadcastExchange\n                           :     :                 :                 :                       +- CometNativeColumnarToRow\n                           :     :                 :                 :                          +- CometProject\n                           :     :                 :                 :                             +- CometFilter\n                           :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 +- BroadcastExchange\n                           :     :                 :                    +- CometNativeColumnarToRow\n                           :     :                 :                       +- CometProject\n                           :     :                 :                          +- CometFilter\n                           :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 +- BroadcastExchange\n                           :     :                    +- Project\n                           :     :                       +- BroadcastHashJoin\n                           :     :                          :- Project\n                           :     :                          :  +- BroadcastHashJoin\n                           :     :                          :     :- Filter\n                           :     :                          :     :  +- ColumnarToRow\n                           :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                          :     :           +- ReusedSubquery\n                           :     :                          :     +- BroadcastExchange\n                           :     :                          :        +- CometNativeColumnarToRow\n                           :     :                          :           +- CometFilter\n                           :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                          +- BroadcastExchange\n                           :     :                             +- CometNativeColumnarToRow\n                           :     :                                +- CometProject\n                           :     :                                   
+- CometFilter\n                           :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     +- BroadcastExchange\n                           :        +- BroadcastHashJoin\n                           :           :- CometNativeColumnarToRow\n                           :           :  +- CometFilter\n                           :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :           +- BroadcastExchange\n                           :              +- Project\n                           :                 +- BroadcastHashJoin\n                           :                    :- CometNativeColumnarToRow\n                           :                    :  +- CometFilter\n                           :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                    +- BroadcastExchange\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometHashAggregate\n                           :                          :     +- CometColumnarExchange\n                           :                          :        +- HashAggregate\n                           :                          :           +- Project\n                           :                          :              +- BroadcastHashJoin\n                           :                          :                 :- Project\n                           :                          :                 :  +- BroadcastHashJoin\n                           :                          :                 :     :- Filter\n                           :                          :                 :     :  +- ColumnarToRow\n                           :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :     :           +- SubqueryBroadcast\n                           :                          :                 :     :              +- BroadcastExchange\n                           :                          :                 :     :                 +- CometNativeColumnarToRow\n                           :                          :                 :     :                    +- CometProject\n                           :                          :                 :     :                       +- CometFilter\n                           :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 :     +- BroadcastExchange\n                           :                          :                 :        +- BroadcastHashJoin\n                           :                          :                 :           :- CometNativeColumnarToRow\n                           :                          :                 :           :  +- CometFilter\n                           :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :           +- 
BroadcastExchange\n                           :                          :                 :              +- Project\n                           :                          :                 :                 +- BroadcastHashJoin\n                           :                          :                 :                    :- Project\n                           :                          :                 :                    :  +- BroadcastHashJoin\n                           :                          :                 :                    :     :- Filter\n                           :                          :                 :                    :     :  +- ColumnarToRow\n                           :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :                    :     :           +- ReusedSubquery\n                           :                          :                 :                    :     +- BroadcastExchange\n                           :                          :                 :                    :        +- CometNativeColumnarToRow\n                           :                          :                 :                    :           +- CometFilter\n                           :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :                    +- BroadcastExchange\n                           :                          :                 :                       +- CometNativeColumnarToRow\n                           :                          :                 :                          +- CometProject\n                           :                          :                 :                             +- CometFilter\n                           :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 +- BroadcastExchange\n                           :                          :                    +- CometNativeColumnarToRow\n                           :                          :                       +- CometProject\n                           :                          :                          +- CometFilter\n                           :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- BroadcastHashJoin\n                           :                                   :- Project\n                           :                                   :  +- BroadcastHashJoin\n                           :                                   :     :- Filter\n                           :                                   :     :  +- ColumnarToRow\n                           :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :           
                        :     :           +- ReusedSubquery\n                           :                                   :     +- BroadcastExchange\n                           :                                   :        +- CometNativeColumnarToRow\n                           :                                   :           +- CometFilter\n                           :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                                   +- BroadcastExchange\n                           :                                      +- CometNativeColumnarToRow\n                           :                                         +- CometProject\n                           :                                            +- CometFilter\n                           :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometProject\n                                    +- CometFilter\n                                       :  +- Subquery\n                                       :     +- CometNativeColumnarToRow\n                                       :        +- CometProject\n                                       :           +- CometFilter\n                                       :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 128 out of 333 eligible operators (38%). Final plan contains 69 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q14b.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometBroadcastHashJoin\n      :- CometFilter\n      :  :  +- Subquery\n      :  :     +- CometNativeColumnarToRow\n      :  :        +- CometHashAggregate\n      :  :           +- CometExchange\n      :  :              +- CometHashAggregate\n      :  :                 +- CometUnion\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    +- CometProject\n      :  :                       +- CometBroadcastHashJoin\n      :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :  :                          :     +- ReusedSubquery\n      :  :                          +- CometBroadcastExchange\n      :  :                             +- CometProject\n      :  :                                +- CometFilter\n      :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  +- CometHashAggregate\n      :     +- CometExchange\n      :        +- CometHashAggregate\n      :           +- CometProject\n      :              +- CometBroadcastHashJoin\n      :                 :- CometProject\n      :                 :  +- CometBroadcastHashJoin\n      :                 :     :- CometBroadcastHashJoin\n      :                 :     :  :- CometFilter\n      :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :  :        +- SubqueryBroadcast\n      :                 :     :  :           +- BroadcastExchange\n      :                 :     :  :              +- CometNativeColumnarToRow\n      :                 :     :  :                 +- CometProject\n      :                 :     :  :                    +- CometFilter\n      :                 :     :  :                       :  +- Subquery\n      :                 :     :  :                       :     +- CometNativeColumnarToRow\n      :                 :     :  :                       :        +- CometProject\n      :                 :     :  :                       :           +- CometFilter\n      :                 :     :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n      :                 :     :  +- CometBroadcastExchange\n      :                 :     :     +- CometProject\n      :                 :     :        +- CometBroadcastHashJoin\n      :                 :     :           :- CometFilter\n      :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :           +- CometBroadcastExchange\n      :                 :     :              +- CometBroadcastHashJoin\n      :                 :     :                 :- CometHashAggregate\n      :                 :     :                 :  +- CometExchange\n      :                 :     :                 :     +- CometHashAggregate\n      :                 :     :                 :        +- CometProject\n      :                 :     :                 :           +- CometBroadcastHashJoin\n      :                 :     :                 :              :- CometProject\n      :                 :     :                 :              :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :     :- CometFilter\n      :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :                 :              :     :        +- SubqueryBroadcast\n      :                 :     :                 :              :     :           +- BroadcastExchange\n      :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n      :                 :     :                 :              :     :                 +- CometProject\n      :                 :     :                 :              :     :                    +- CometFilter\n      :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              :     +- CometBroadcastExchange\n      :                 :     :                 :              :        +- CometBroadcastHashJoin\n      :                 :     :                 :              :           :- CometFilter\n      :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :           +- CometBroadcastExchange\n      :                 :     :                 :              :              +- CometProject\n      :                 :     :                 :              :                 +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :- CometProject\n      :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :     :- CometFilter\n      :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :     :                 :              :                    :     :        +- ReusedSubquery\n      :                 :     :                 :              :                    :     +- CometBroadcastExchange\n      :                 :     :                 :              :      
              :        +- CometFilter\n      :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :                    +- CometBroadcastExchange\n      :                 :     :                 :              :                       +- CometProject\n      :                 :     :                 :              :                          +- CometFilter\n      :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              +- CometBroadcastExchange\n      :                 :     :                 :                 +- CometProject\n      :                 :     :                 :                    +- CometFilter\n      :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 +- CometBroadcastExchange\n      :                 :     :                    +- CometProject\n      :                 :     :                       +- CometBroadcastHashJoin\n      :                 :     :                          :- CometProject\n      :                 :     :                          :  +- CometBroadcastHashJoin\n      :                 :     :                          :     :- CometFilter\n      :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :     :                          :     :        +- ReusedSubquery\n      :                 :     :                          :     +- CometBroadcastExchange\n      :                 :     :                          :        +- CometFilter\n      :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                          +- CometBroadcastExchange\n      :                 :     :                             +- CometProject\n      :                 :     :                                +- CometFilter\n      :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     +- CometBroadcastExchange\n      :                 :        +- CometBroadcastHashJoin\n      :                 :           :- CometFilter\n      :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :           +- CometBroadcastExchange\n      :                 :              +- CometProject\n      :                 :                 +- CometBroadcastHashJoin\n      :                 :                    :- CometFilter\n      :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                    +- CometBroadcastExchange\n      :                 :                       +- CometBroadcastHashJoin\n      :                 :                          :- CometHashAggregate\n      :                 :                          :  +- CometExchange\n      :                 :                          :     +- CometHashAggregate\n      :                 :    
                      :        +- CometProject\n      :                 :                          :           +- CometBroadcastHashJoin\n      :                 :                          :              :- CometProject\n      :                 :                          :              :  +- CometBroadcastHashJoin\n      :                 :                          :              :     :- CometFilter\n      :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :                          :              :     :        +- SubqueryBroadcast\n      :                 :                          :              :     :           +- BroadcastExchange\n      :                 :                          :              :     :              +- CometNativeColumnarToRow\n      :                 :                          :              :     :                 +- CometProject\n      :                 :                          :              :     :                    +- CometFilter\n      :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          :              :     +- CometBroadcastExchange\n      :                 :                          :              :        +- CometBroadcastHashJoin\n      :                 :                          :              :           :- CometFilter\n      :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :           +- CometBroadcastExchange\n      :                 :                          :              :              +- CometProject\n      :                 :                          :              :                 +- CometBroadcastHashJoin\n      :                 :                          :              :                    :- CometProject\n      :                 :                          :              :                    :  +- CometBroadcastHashJoin\n      :                 :                          :              :                    :     :- CometFilter\n      :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :                          :              :                    :     :        +- ReusedSubquery\n      :                 :                          :              :                    :     +- CometBroadcastExchange\n      :                 :                          :              :                    :        +- CometFilter\n      :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :                    +- CometBroadcastExchange\n      :                 :                          :              :                       +- CometProject\n      :                 :                          :              :                          +- CometFilter\n      :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n      :                 :                          :              +- CometBroadcastExchange\n      :                 :                          :                 +- CometProject\n      :                 :                          :                    +- CometFilter\n      :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          +- CometBroadcastExchange\n      :                 :                             +- CometProject\n      :                 :                                +- CometBroadcastHashJoin\n      :                 :                                   :- CometProject\n      :                 :                                   :  +- CometBroadcastHashJoin\n      :                 :                                   :     :- CometFilter\n      :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :                                   :     :        +- ReusedSubquery\n      :                 :                                   :     +- CometBroadcastExchange\n      :                 :                                   :        +- CometFilter\n      :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                                   +- CometBroadcastExchange\n      :                 :                                      +- CometProject\n      :                 :                                         +- CometFilter\n      :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 +- CometBroadcastExchange\n      :                    +- CometProject\n      :                       +- CometFilter\n      :                          :  +- ReusedSubquery\n      :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      +- CometBroadcastExchange\n         +- CometFilter\n            :  +- ReusedSubquery\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastHashJoin\n                           :     :  :- CometFilter\n                           :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :  :        +- SubqueryBroadcast\n                           :     :  :           +- BroadcastExchange\n                           :     :  :              +- CometNativeColumnarToRow\n                           :     :  :                 +- CometProject\n                           :     :  :                    +- CometFilter\n                           :     :  :                       :  +- Subquery\n                           :     :  :                       :     +- CometNativeColumnarToRow\n                           :     :  :                       :        +- CometProject\n                           :     :  :                       :           +- CometFilter\n                           : 
    :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :  +- CometBroadcastExchange\n                           :     :     +- CometProject\n                           :     :        +- CometBroadcastHashJoin\n                           :     :           :- CometFilter\n                           :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :           +- CometBroadcastExchange\n                           :     :              +- CometBroadcastHashJoin\n                           :     :                 :- CometHashAggregate\n                           :     :                 :  +- CometExchange\n                           :     :                 :     +- CometHashAggregate\n                           :     :                 :        +- CometProject\n                           :     :                 :           +- CometBroadcastHashJoin\n                           :     :                 :              :- CometProject\n                           :     :                 :              :  +- CometBroadcastHashJoin\n                           :     :                 :              :     :- CometFilter\n                           :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :                 :              :     :        +- SubqueryBroadcast\n                           :     :                 :              :     :           +- BroadcastExchange\n                           :     :                 :              :     :              +- CometNativeColumnarToRow\n                           :     :                 :              :     :                 +- CometProject\n                           :     :                 :              :     :                    +- CometFilter\n                           :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              :     +- CometBroadcastExchange\n                           :     :                 :              :        +- CometBroadcastHashJoin\n                           :     :                 :              :           :- CometFilter\n                           :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :           +- CometBroadcastExchange\n                           :     :                 :              :              +- CometProject\n                           :     :                 :              :                 +- CometBroadcastHashJoin\n                           :     :                 :              :                    :- CometProject\n                           :     :                 :              :                    :  +- CometBroadcastHashJoin\n                           :     :                 :              :                    :     :- CometFilter\n                           :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.catalog_sales\n                           :     :                 :              :                    :     :        +- ReusedSubquery\n                           :     :                 :              :                    :     +- CometBroadcastExchange\n                           :     :                 :              :                    :        +- CometFilter\n                           :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :                    +- CometBroadcastExchange\n                           :     :                 :              :                       +- CometProject\n                           :     :                 :              :                          +- CometFilter\n                           :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              +- CometBroadcastExchange\n                           :     :                 :                 +- CometProject\n                           :     :                 :                    +- CometFilter\n                           :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 +- CometBroadcastExchange\n                           :     :                    +- CometProject\n                           :     :                       +- CometBroadcastHashJoin\n                           :     :                          :- CometProject\n                           :     :                          :  +- CometBroadcastHashJoin\n                           :     :                          :     :- CometFilter\n                           :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :     :                          :     :        +- ReusedSubquery\n                           :     :                          :     +- CometBroadcastExchange\n                           :     :                          :        +- CometFilter\n                           :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                          +- CometBroadcastExchange\n                           :     :                             +- CometProject\n                           :     :                                +- CometFilter\n                           :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n                           :        +- CometBroadcastHashJoin\n                           :           :- CometFilter\n                           :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :           +- CometBroadcastExchange\n                           :              +- CometProject\n                           :                 +- CometBroadcastHashJoin\n                           :                    :- CometFilter\n                           :                    :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n                           :                    +- CometBroadcastExchange\n                           :                       +- CometBroadcastHashJoin\n                           :                          :- CometHashAggregate\n                           :                          :  +- CometExchange\n                           :                          :     +- CometHashAggregate\n                           :                          :        +- CometProject\n                           :                          :           +- CometBroadcastHashJoin\n                           :                          :              :- CometProject\n                           :                          :              :  +- CometBroadcastHashJoin\n                           :                          :              :     :- CometFilter\n                           :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                          :              :     :        +- SubqueryBroadcast\n                           :                          :              :     :           +- BroadcastExchange\n                           :                          :              :     :              +- CometNativeColumnarToRow\n                           :                          :              :     :                 +- CometProject\n                           :                          :              :     :                    +- CometFilter\n                           :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              :     +- CometBroadcastExchange\n                           :                          :              :        +- CometBroadcastHashJoin\n                           :                          :              :           :- CometFilter\n                           :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :           +- CometBroadcastExchange\n                           :                          :              :              +- CometProject\n                           :                          :              :                 +- CometBroadcastHashJoin\n                           :                          :              :                    :- CometProject\n                           :                          :              :                    :  +- CometBroadcastHashJoin\n                           :                          :              :                    :     :- CometFilter\n                           :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :                          :              :                    :     :        +- ReusedSubquery\n                           :                          :              :                    :     +- CometBroadcastExchange\n                           :                          :              :                    :        +- CometFilter\n                           :                          :              :                    :           
+- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :                    +- CometBroadcastExchange\n                           :                          :              :                       +- CometProject\n                           :                          :              :                          +- CometFilter\n                           :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              +- CometBroadcastExchange\n                           :                          :                 +- CometProject\n                           :                          :                    +- CometFilter\n                           :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          +- CometBroadcastExchange\n                           :                             +- CometProject\n                           :                                +- CometBroadcastHashJoin\n                           :                                   :- CometProject\n                           :                                   :  +- CometBroadcastHashJoin\n                           :                                   :     :- CometFilter\n                           :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                                   :     :        +- ReusedSubquery\n                           :                                   :     +- CometBroadcastExchange\n                           :                                   :        +- CometFilter\n                           :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                                   +- CometBroadcastExchange\n                           :                                      +- CometProject\n                           :                                         +- CometFilter\n                           :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    :  +- ReusedSubquery\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 298 out of 327 eligible operators (91%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q15.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Filter\n                  :     :     :  +- ColumnarToRow\n                  :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           +- SubqueryBroadcast\n                  :     :     :              +- BroadcastExchange\n                  :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :                    +- CometProject\n                  :     :     :                       +- CometFilter\n                  :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 28 eligible operators (42%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q15.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometFilter\n                  :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :        +- SubqueryBroadcast\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 26 out of 28 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q16.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.call_center\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q16.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q17.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- Filter\n                  :     :     :     :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :     :        +- Filter\n                  :     :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                             +- CometProject\n                  :     :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- Filter\n                  :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :                    +- ReusedSubquery\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n            
      :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 22 out of 57 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q17.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :     :     :     :           +- BroadcastExchange\n                  :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                 +- CometProject\n                  :     :     :     :     :     :     :                    +- CometFilter\n                  :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :                 +- ReusedSubquery\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 52 out of 57 eligible operators (91%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q18.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Project\n                     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :- Project\n                     :     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :     :- Filter\n                     :     :     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :     :     :           +- SubqueryBroadcast\n                     :     :     :     :     :     :              +- BroadcastExchange\n                     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :     :     :     :                    +- CometProject\n                     :     :     :     :     :     :                       +- CometFilter\n                     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :     :           +- CometProject\n                     :     :     :     :     :              +- CometFilter\n                     :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :           +- CometProject\n                     :     :     :     :              +- CometFilter\n                     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- 
CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 21 out of 47 eligible operators (44%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q18.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :- CometProject\n                     :     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :     :- CometFilter\n                     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :     :     :     :     :     :        +- SubqueryBroadcast\n                     :     :     :     :     :     :           +- BroadcastExchange\n                     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :     :     :     :                 +- CometProject\n                     :     :     :     :     :     :                    +- CometFilter\n                     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :     :        +- CometProject\n                     :     :     :     :     :           +- CometFilter\n                     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :        +- CometProject\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 45 out of 47 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q19.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometFilter\n                  :     :     :     :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometFilter\n                  :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometNativeScan parquet spark_catalog.default.item\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometNativeScan parquet spark_catalog.default.customer\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 35 out of 35 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q19.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometFilter\n                  :     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometFilter\n                  :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 35 out of 35 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q2.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometHashAggregate\n            :     :  +- CometExchange\n            :     :     +- CometHashAggregate\n            :     :        +- CometProject\n            :     :           +- CometBroadcastHashJoin\n            :     :              :- CometUnion\n            :     :              :  :- CometProject\n            :     :              :  :  +- CometNativeScan parquet spark_catalog.default.web_sales\n            :     :              :  +- CometProject\n            :     :              :     +- CometNativeScan parquet spark_catalog.default.catalog_sales\n            :     :              +- CometBroadcastExchange\n            :     :                 +- CometProject\n            :     :                    +- CometFilter\n            :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometHashAggregate\n                     :  +- CometExchange\n                     :     +- CometHashAggregate\n                     :        +- CometProject\n                     :           +- CometBroadcastHashJoin\n                     :              :- CometUnion\n                     :              :  :- CometProject\n                     :              :  :  +- CometNativeScan parquet spark_catalog.default.web_sales\n                     :              :  +- CometProject\n                     :              :     +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                     :              +- CometBroadcastExchange\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 45 out of 45 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q2.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometHashAggregate\n            :     :  +- CometExchange\n            :     :     +- CometHashAggregate\n            :     :        +- CometProject\n            :     :           +- CometBroadcastHashJoin\n            :     :              :- CometUnion\n            :     :              :  :- CometProject\n            :     :              :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n            :     :              :  +- CometProject\n            :     :              :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :     :              +- CometBroadcastExchange\n            :     :                 +- CometProject\n            :     :                    +- CometFilter\n            :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometHashAggregate\n                     :  +- CometExchange\n                     :     +- CometHashAggregate\n                     :        +- CometProject\n                     :           +- CometBroadcastHashJoin\n                     :              :- CometUnion\n                     :              :  :- CometProject\n                     :              :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :              :  +- CometProject\n                     :              :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :              +- CometBroadcastExchange\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 45 out of 45 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q20.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q20.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q21.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Filter\n                     :     :     :  +- ColumnarToRow\n                     :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :           +- SubqueryBroadcast\n                     :     :     :              +- BroadcastExchange\n                     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :                    +- CometFilter\n                     :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometFilter\n                     :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 10 out of 27 eligible operators (37%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q21.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometFilter\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometFilter\n                     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                     :     :     :        +- SubqueryBroadcast\n                     :     :     :           +- BroadcastExchange\n                     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :                 +- CometFilter\n                     :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometFilter\n                     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 25 out of 27 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q22.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Filter\n                     :     :     :  +- ColumnarToRow\n                     :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :           +- SubqueryBroadcast\n                     :     :     :              +- BroadcastExchange\n                     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :                    +- CometProject\n                     :     :     :                       +- CometFilter\n                     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.warehouse\n\nComet accelerated 12 out of 29 eligible operators (41%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q22.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometFilter\n                     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                     :     :     :        +- SubqueryBroadcast\n                     :     :     :           +- BroadcastExchange\n                     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :                 +- CometProject\n                     :     :     :                    +- CometFilter\n                     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n\nComet accelerated 27 out of 29 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q23a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometUnion\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometSortMergeJoin\n            :     :     :- CometSort\n            :     :     :  +- CometColumnarExchange\n            :     :     :     +- Project\n            :     :     :        +- BroadcastHashJoin\n            :     :     :           :- ColumnarToRow\n            :     :     :           :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :     :           :        +- SubqueryBroadcast\n            :     :     :           :           +- BroadcastExchange\n            :     :     :           :              +- CometNativeColumnarToRow\n            :     :     :           :                 +- CometProject\n            :     :     :           :                    +- CometFilter\n            :     :     :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :     :           +- BroadcastExchange\n            :     :     :              +- Project\n            :     :     :                 +- Filter\n            :     :     :                    +- HashAggregate\n            :     :     :                       +- CometNativeColumnarToRow\n            :     :     :                          +- CometColumnarExchange\n            :     :     :                             +- HashAggregate\n            :     :     :                                +- Project\n            :     :     :                                   +- BroadcastHashJoin\n            :     :     :                                      :- Project\n            :     :     :                                      :  +- BroadcastHashJoin\n            :     :     :                                      :     :- Filter\n            :     :     :                                      :     :  +- ColumnarToRow\n            :     :     :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :     :                                      :     :           +- SubqueryBroadcast\n            :     :     :                                      :     :              +- BroadcastExchange\n            :     :     :                                      :     :                 +- CometNativeColumnarToRow\n            :     :     :                                      :     :                    +- CometProject\n            :     :     :                                      :     :                       +- CometFilter\n            :     :     :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :     :                                      :     +- BroadcastExchange\n            :     :     :                                      :        +- CometNativeColumnarToRow\n            :     :     :                                      :           +- CometProject\n            :     :     :                                      :              +- CometFilter\n            :     :     :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :    
 :                                      +- BroadcastExchange\n            :     :     :                                         +- CometNativeColumnarToRow\n            :     :     :                                            +- CometFilter\n            :     :     :                                               +- CometNativeScan parquet spark_catalog.default.item\n            :     :     +- CometSort\n            :     :        +- CometProject\n            :     :           +- CometFilter\n            :     :              :  +- Subquery\n            :     :              :     +- HashAggregate\n            :     :              :        +- CometNativeColumnarToRow\n            :     :              :           +- CometColumnarExchange\n            :     :              :              +- HashAggregate\n            :     :              :                 +- HashAggregate\n            :     :              :                    +- CometNativeColumnarToRow\n            :     :              :                       +- CometColumnarExchange\n            :     :              :                          +- HashAggregate\n            :     :              :                             +- Project\n            :     :              :                                +- BroadcastHashJoin\n            :     :              :                                   :- Project\n            :     :              :                                   :  +- BroadcastHashJoin\n            :     :              :                                   :     :- Filter\n            :     :              :                                   :     :  +- ColumnarToRow\n            :     :              :                                   :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :              :                                   :     :           +- SubqueryBroadcast\n            :     :              :                                   :     :              +- BroadcastExchange\n            :     :              :                                   :     :                 +- CometNativeColumnarToRow\n            :     :              :                                   :     :                    +- CometProject\n            :     :              :                                   :     :                       +- CometFilter\n            :     :              :                                   :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :              :                                   :     +- BroadcastExchange\n            :     :              :                                   :        +- CometNativeColumnarToRow\n            :     :              :                                   :           +- CometFilter\n            :     :              :                                   :              +- CometNativeScan parquet spark_catalog.default.customer\n            :     :              :                                   +- BroadcastExchange\n            :     :              :                                      +- CometNativeColumnarToRow\n            :     :              :                                         +- CometProject\n            :     :              :                                            +- CometFilter\n            :     :              :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n        
    :     :              +- CometHashAggregate\n            :     :                 +- CometExchange\n            :     :                    +- CometHashAggregate\n            :     :                       +- CometProject\n            :     :                          +- CometBroadcastHashJoin\n            :     :                             :- CometProject\n            :     :                             :  +- CometFilter\n            :     :                             :     +- CometNativeScan parquet spark_catalog.default.store_sales\n            :     :                             +- CometBroadcastExchange\n            :     :                                +- CometFilter\n            :     :                                   +- CometNativeScan parquet spark_catalog.default.customer\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometColumnarExchange\n                  :     :     +- Project\n                  :     :        +- BroadcastHashJoin\n                  :     :           :- ColumnarToRow\n                  :     :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :           :        +- ReusedSubquery\n                  :     :           +- BroadcastExchange\n                  :     :              +- Project\n                  :     :                 +- Filter\n                  :     :                    +- HashAggregate\n                  :     :                       +- CometNativeColumnarToRow\n                  :     :                          +- CometColumnarExchange\n                  :     :                             +- HashAggregate\n                  :     :                                +- Project\n                  :     :                                   +- BroadcastHashJoin\n                  :     :                                      :- Project\n                  :     :                                      :  +- BroadcastHashJoin\n                  :     :                                      :     :- Filter\n                  :     :                                      :     :  +- ColumnarToRow\n                  :     :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                                      :     :           +- SubqueryBroadcast\n                  :     :                                      :     :              +- BroadcastExchange\n                  :     :                                      :     :                 +- CometNativeColumnarToRow\n                  :     :                                      :     :                    +- CometProject\n                  :     :                                      :     :                       +- CometFilter\n                  :     :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                                      :     +- BroadcastExchange\n  
                :     :                                      :        +- CometNativeColumnarToRow\n                  :     :                                      :           +- CometProject\n                  :     :                                      :              +- CometFilter\n                  :     :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                                      +- BroadcastExchange\n                  :     :                                         +- CometNativeColumnarToRow\n                  :     :                                            +- CometFilter\n                  :     :                                               +- CometNativeScan parquet spark_catalog.default.item\n                  :     +- CometSort\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              :  +- ReusedSubquery\n                  :              +- CometHashAggregate\n                  :                 +- CometExchange\n                  :                    +- CometHashAggregate\n                  :                       +- CometProject\n                  :                          +- CometBroadcastHashJoin\n                  :                             :- CometProject\n                  :                             :  +- CometFilter\n                  :                             :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :                             +- CometBroadcastExchange\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.customer\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 83 out of 138 eligible operators (60%). Final plan contains 20 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q23a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometUnion\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometSortMergeJoin\n            :     :     :- CometSort\n            :     :     :  +- CometExchange\n            :     :     :     +- CometProject\n            :     :     :        +- CometBroadcastHashJoin\n            :     :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :     :     :           :     +- SubqueryBroadcast\n            :     :     :           :        +- BroadcastExchange\n            :     :     :           :           +- CometNativeColumnarToRow\n            :     :     :           :              +- CometProject\n            :     :     :           :                 +- CometFilter\n            :     :     :           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :     :           +- CometBroadcastExchange\n            :     :     :              +- CometProject\n            :     :     :                 +- CometFilter\n            :     :     :                    +- CometHashAggregate\n            :     :     :                       +- CometExchange\n            :     :     :                          +- CometHashAggregate\n            :     :     :                             +- CometProject\n            :     :     :                                +- CometBroadcastHashJoin\n            :     :     :                                   :- CometProject\n            :     :     :                                   :  +- CometBroadcastHashJoin\n            :     :     :                                   :     :- CometFilter\n            :     :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :     :                                   :     :        +- SubqueryBroadcast\n            :     :     :                                   :     :           +- BroadcastExchange\n            :     :     :                                   :     :              +- CometNativeColumnarToRow\n            :     :     :                                   :     :                 +- CometProject\n            :     :     :                                   :     :                    +- CometFilter\n            :     :     :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :     :                                   :     +- CometBroadcastExchange\n            :     :     :                                   :        +- CometProject\n            :     :     :                                   :           +- CometFilter\n            :     :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :     :                                   +- CometBroadcastExchange\n            :     :     :                                      +- CometFilter\n            :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n            :     :     +- CometSort\n            :     :        +- CometProject\n            :     :           +- CometFilter\n          
  :     :              :  +- Subquery\n            :     :              :     +- CometNativeColumnarToRow\n            :     :              :        +- CometHashAggregate\n            :     :              :           +- CometExchange\n            :     :              :              +- CometHashAggregate\n            :     :              :                 +- CometHashAggregate\n            :     :              :                    +- CometExchange\n            :     :              :                       +- CometHashAggregate\n            :     :              :                          +- CometProject\n            :     :              :                             +- CometBroadcastHashJoin\n            :     :              :                                :- CometProject\n            :     :              :                                :  +- CometBroadcastHashJoin\n            :     :              :                                :     :- CometFilter\n            :     :              :                                :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :              :                                :     :        +- SubqueryBroadcast\n            :     :              :                                :     :           +- BroadcastExchange\n            :     :              :                                :     :              +- CometNativeColumnarToRow\n            :     :              :                                :     :                 +- CometProject\n            :     :              :                                :     :                    +- CometFilter\n            :     :              :                                :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :              :                                :     +- CometBroadcastExchange\n            :     :              :                                :        +- CometFilter\n            :     :              :                                :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :     :              :                                +- CometBroadcastExchange\n            :     :              :                                   +- CometProject\n            :     :              :                                      +- CometFilter\n            :     :              :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :              +- CometHashAggregate\n            :     :                 +- CometExchange\n            :     :                    +- CometHashAggregate\n            :     :                       +- CometProject\n            :     :                          +- CometBroadcastHashJoin\n            :     :                             :- CometProject\n            :     :                             :  +- CometFilter\n            :     :                             :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :                             +- CometBroadcastExchange\n            :     :                                +- CometFilter\n            :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- 
CometFilter\n            :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometExchange\n                  :     :     +- CometProject\n                  :     :        +- CometBroadcastHashJoin\n                  :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :           :     +- ReusedSubquery\n                  :     :           +- CometBroadcastExchange\n                  :     :              +- CometProject\n                  :     :                 +- CometFilter\n                  :     :                    +- CometHashAggregate\n                  :     :                       +- CometExchange\n                  :     :                          +- CometHashAggregate\n                  :     :                             +- CometProject\n                  :     :                                +- CometBroadcastHashJoin\n                  :     :                                   :- CometProject\n                  :     :                                   :  +- CometBroadcastHashJoin\n                  :     :                                   :     :- CometFilter\n                  :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :                                   :     :        +- SubqueryBroadcast\n                  :     :                                   :     :           +- BroadcastExchange\n                  :     :                                   :     :              +- CometNativeColumnarToRow\n                  :     :                                   :     :                 +- CometProject\n                  :     :                                   :     :                    +- CometFilter\n                  :     :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                                   :     +- CometBroadcastExchange\n                  :     :                                   :        +- CometProject\n                  :     :                                   :           +- CometFilter\n                  :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                                   +- CometBroadcastExchange\n                  :     :                                      +- CometFilter\n                  :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :     +- CometSort\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              :  +- ReusedSubquery\n                  :              +- CometHashAggregate\n                  :                 +- CometExchange\n                  :                    +- CometHashAggregate\n                  :                       +- CometProject\n                  :                          +- CometBroadcastHashJoin\n                  :                             :- CometProject\n                  :             
                :  +- CometFilter\n                  :                             :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                             +- CometBroadcastExchange\n                  :                                +- CometFilter\n                  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 127 out of 138 eligible operators (92%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q23b.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometSortMergeJoin\n      :              :     :  :- CometSort\n      :              :     :  :  +- CometColumnarExchange\n      :              :     :  :     +- Project\n      :              :     :  :        +- BroadcastHashJoin\n      :              :     :  :           :- Filter\n      :              :     :  :           :  +- ColumnarToRow\n      :              :     :  :           :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :              :     :  :           :           +- SubqueryBroadcast\n      :              :     :  :           :              +- BroadcastExchange\n      :              :     :  :           :                 +- CometNativeColumnarToRow\n      :              :     :  :           :                    +- CometProject\n      :              :     :  :           :                       +- CometFilter\n      :              :     :  :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :  :           +- BroadcastExchange\n      :              :     :  :              +- Project\n      :              :     :  :                 +- Filter\n      :              :     :  :                    +- HashAggregate\n      :              :     :  :                       +- CometNativeColumnarToRow\n      :              :     :  :                          +- CometColumnarExchange\n      :              :     :  :                             +- HashAggregate\n      :              :     :  :                                +- Project\n      :              :     :  :                                   +- BroadcastHashJoin\n      :              :     :  :                                      :- Project\n      :              :     :  :                                      :  +- BroadcastHashJoin\n      :              :     :  :                                      :     :- Filter\n      :              :     :  :                                      :     :  +- ColumnarToRow\n      :              :     :  :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :              :     :  :                                      :     :           +- SubqueryBroadcast\n      :              :     :  :                                      :     :              +- BroadcastExchange\n      :              :     :  :                                      :     :                 +- CometNativeColumnarToRow\n      :              :     :  :                                      :     :                    +- CometProject\n      :              :     :  :                                      :     :                       +- CometFilter\n      :              :     :  :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :  :                                      :     +- BroadcastExchange\n      :              :     :  :                                   
   :        +- CometNativeColumnarToRow\n      :              :     :  :                                      :           +- CometProject\n      :              :     :  :                                      :              +- CometFilter\n      :              :     :  :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :  :                                      +- BroadcastExchange\n      :              :     :  :                                         +- CometNativeColumnarToRow\n      :              :     :  :                                            +- CometFilter\n      :              :     :  :                                               +- CometNativeScan parquet spark_catalog.default.item\n      :              :     :  +- CometSort\n      :              :     :     +- CometProject\n      :              :     :        +- CometFilter\n      :              :     :           :  +- Subquery\n      :              :     :           :     +- HashAggregate\n      :              :     :           :        +- CometNativeColumnarToRow\n      :              :     :           :           +- CometColumnarExchange\n      :              :     :           :              +- HashAggregate\n      :              :     :           :                 +- HashAggregate\n      :              :     :           :                    +- CometNativeColumnarToRow\n      :              :     :           :                       +- CometColumnarExchange\n      :              :     :           :                          +- HashAggregate\n      :              :     :           :                             +- Project\n      :              :     :           :                                +- BroadcastHashJoin\n      :              :     :           :                                   :- Project\n      :              :     :           :                                   :  +- BroadcastHashJoin\n      :              :     :           :                                   :     :- Filter\n      :              :     :           :                                   :     :  +- ColumnarToRow\n      :              :     :           :                                   :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :              :     :           :                                   :     :           +- SubqueryBroadcast\n      :              :     :           :                                   :     :              +- BroadcastExchange\n      :              :     :           :                                   :     :                 +- CometNativeColumnarToRow\n      :              :     :           :                                   :     :                    +- CometProject\n      :              :     :           :                                   :     :                       +- CometFilter\n      :              :     :           :                                   :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :           :                                   :     +- BroadcastExchange\n      :              :     :           :                                   :        +- CometNativeColumnarToRow\n      :              :     :           :                                   :           +- CometFilter\n      :              :     :           :                             
      :              +- CometNativeScan parquet spark_catalog.default.customer\n      :              :     :           :                                   +- BroadcastExchange\n      :              :     :           :                                      +- CometNativeColumnarToRow\n      :              :     :           :                                         +- CometProject\n      :              :     :           :                                            +- CometFilter\n      :              :     :           :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :           +- CometHashAggregate\n      :              :     :              +- CometExchange\n      :              :     :                 +- CometHashAggregate\n      :              :     :                    +- CometProject\n      :              :     :                       +- CometBroadcastHashJoin\n      :              :     :                          :- CometProject\n      :              :     :                          :  +- CometFilter\n      :              :     :                          :     +- CometNativeScan parquet spark_catalog.default.store_sales\n      :              :     :                          +- CometBroadcastExchange\n      :              :     :                             +- CometFilter\n      :              :     :                                +- CometNativeScan parquet spark_catalog.default.customer\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometSortMergeJoin\n      :              :              :- CometSort\n      :              :              :  +- CometExchange\n      :              :              :     +- CometFilter\n      :              :              :        +- CometNativeScan parquet spark_catalog.default.customer\n      :              :              +- CometSort\n      :              :                 +- CometProject\n      :              :                    +- CometFilter\n      :              :                       :  +- ReusedSubquery\n      :              :                       +- CometHashAggregate\n      :              :                          +- CometExchange\n      :              :                             +- CometHashAggregate\n      :              :                                +- CometProject\n      :              :                                   +- CometBroadcastHashJoin\n      :              :                                      :- CometProject\n      :              :                                      :  +- CometFilter\n      :              :                                      :     +- CometNativeScan parquet spark_catalog.default.store_sales\n      :              :                                      +- CometBroadcastExchange\n      :              :                                         +- CometFilter\n      :              :                                            +- CometNativeScan parquet spark_catalog.default.customer\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- 
CometBroadcastHashJoin\n                     :     :- CometSortMergeJoin\n                     :     :  :- CometSort\n                     :     :  :  +- CometColumnarExchange\n                     :     :  :     +- Project\n                     :     :  :        +- BroadcastHashJoin\n                     :     :  :           :- Filter\n                     :     :  :           :  +- ColumnarToRow\n                     :     :  :           :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :  :           :           +- ReusedSubquery\n                     :     :  :           +- BroadcastExchange\n                     :     :  :              +- Project\n                     :     :  :                 +- Filter\n                     :     :  :                    +- HashAggregate\n                     :     :  :                       +- CometNativeColumnarToRow\n                     :     :  :                          +- CometColumnarExchange\n                     :     :  :                             +- HashAggregate\n                     :     :  :                                +- Project\n                     :     :  :                                   +- BroadcastHashJoin\n                     :     :  :                                      :- Project\n                     :     :  :                                      :  +- BroadcastHashJoin\n                     :     :  :                                      :     :- Filter\n                     :     :  :                                      :     :  +- ColumnarToRow\n                     :     :  :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :  :                                      :     :           +- SubqueryBroadcast\n                     :     :  :                                      :     :              +- BroadcastExchange\n                     :     :  :                                      :     :                 +- CometNativeColumnarToRow\n                     :     :  :                                      :     :                    +- CometProject\n                     :     :  :                                      :     :                       +- CometFilter\n                     :     :  :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :  :                                      :     +- BroadcastExchange\n                     :     :  :                                      :        +- CometNativeColumnarToRow\n                     :     :  :                                      :           +- CometProject\n                     :     :  :                                      :              +- CometFilter\n                     :     :  :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :  :                                      +- BroadcastExchange\n                     :     :  :                                         +- CometNativeColumnarToRow\n                     :     :  :                                            +- CometFilter\n                     :     :  :                                               +- CometNativeScan parquet 
spark_catalog.default.item\n                     :     :  +- CometSort\n                     :     :     +- CometProject\n                     :     :        +- CometFilter\n                     :     :           :  +- ReusedSubquery\n                     :     :           +- CometHashAggregate\n                     :     :              +- CometExchange\n                     :     :                 +- CometHashAggregate\n                     :     :                    +- CometProject\n                     :     :                       +- CometBroadcastHashJoin\n                     :     :                          :- CometProject\n                     :     :                          :  +- CometFilter\n                     :     :                          :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                     :     :                          +- CometBroadcastExchange\n                     :     :                             +- CometFilter\n                     :     :                                +- CometNativeScan parquet spark_catalog.default.customer\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometSortMergeJoin\n                     :              :- CometSort\n                     :              :  +- CometExchange\n                     :              :     +- CometFilter\n                     :              :        +- CometNativeScan parquet spark_catalog.default.customer\n                     :              +- CometSort\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       :  +- ReusedSubquery\n                     :                       +- CometHashAggregate\n                     :                          +- CometExchange\n                     :                             +- CometHashAggregate\n                     :                                +- CometProject\n                     :                                   +- CometBroadcastHashJoin\n                     :                                      :- CometProject\n                     :                                      :  +- CometFilter\n                     :                                      :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                     :                                      +- CometBroadcastExchange\n                     :                                         +- CometFilter\n                     :                                            +- CometNativeScan parquet spark_catalog.default.customer\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 131 out of 190 eligible operators (68%). Final plan contains 20 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q23b.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometSortMergeJoin\n      :              :     :  :- CometSort\n      :              :     :  :  +- CometExchange\n      :              :     :  :     +- CometProject\n      :              :     :  :        +- CometBroadcastHashJoin\n      :              :     :  :           :- CometFilter\n      :              :     :  :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :  :           :        +- SubqueryBroadcast\n      :              :     :  :           :           +- BroadcastExchange\n      :              :     :  :           :              +- CometNativeColumnarToRow\n      :              :     :  :           :                 +- CometProject\n      :              :     :  :           :                    +- CometFilter\n      :              :     :  :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :  :           +- CometBroadcastExchange\n      :              :     :  :              +- CometProject\n      :              :     :  :                 +- CometFilter\n      :              :     :  :                    +- CometHashAggregate\n      :              :     :  :                       +- CometExchange\n      :              :     :  :                          +- CometHashAggregate\n      :              :     :  :                             +- CometProject\n      :              :     :  :                                +- CometBroadcastHashJoin\n      :              :     :  :                                   :- CometProject\n      :              :     :  :                                   :  +- CometBroadcastHashJoin\n      :              :     :  :                                   :     :- CometFilter\n      :              :     :  :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :  :                                   :     :        +- SubqueryBroadcast\n      :              :     :  :                                   :     :           +- BroadcastExchange\n      :              :     :  :                                   :     :              +- CometNativeColumnarToRow\n      :              :     :  :                                   :     :                 +- CometProject\n      :              :     :  :                                   :     :                    +- CometFilter\n      :              :     :  :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :  :                                   :     +- CometBroadcastExchange\n      :              :     :  :                                   :        +- CometProject\n      :              :     :  :                                   :           +- CometFilter\n      :              :     :  :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :  :                                   +- CometBroadcastExchange\n      :              :     :  :                                      +- CometFilter\n      :              :     :  :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :              :     :  +- CometSort\n      :              :     :     +- CometProject\n      :              :     :        +- CometFilter\n      :              :     :           :  +- Subquery\n      :              :     :           :     +- CometNativeColumnarToRow\n      :              :     :           :        +- CometHashAggregate\n      :              :     :           :           +- CometExchange\n      :              :     :           :              +- CometHashAggregate\n      :              :     :           :                 +- CometHashAggregate\n      :              :     :           :                    +- CometExchange\n      :              :     :           :                       +- CometHashAggregate\n      :              :     :           :                          +- CometProject\n      :              :     :           :                             +- CometBroadcastHashJoin\n      :              :     :           :                                :- CometProject\n      :              :     :           :                                :  +- CometBroadcastHashJoin\n      :              :     :           :                                :     :- CometFilter\n      :              :     :           :                                :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :           :                                :     :        +- SubqueryBroadcast\n      :              :     :           :                                :     :           +- BroadcastExchange\n      :              :     :           :                                :     :              +- CometNativeColumnarToRow\n      :              :     :           :                                :     :                 +- CometProject\n      :              :     :           :                                :     :                    +- CometFilter\n      :              :     :           :                                :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :           :                                :     +- CometBroadcastExchange\n      :              :     :           :                                :        +- CometFilter\n      :              :     :           :                                :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :           :                                +- CometBroadcastExchange\n      :              :     :           :                                   +- CometProject\n      :              :     :           :                                      +- CometFilter\n      :              :     :           :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :           +- CometHashAggregate\n      :              :     :              +- CometExchange\n      :              :     :                 +- CometHashAggregate\n      :              :     :                    +- CometProject\n      :              :     :                       +- CometBroadcastHashJoin\n      :              :     :                          :- CometProject\n      :              :     :                          :  +- CometFilter\n      :              :     :                          :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :                          +- CometBroadcastExchange\n      :              :     :                             +- CometFilter\n      :              :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometSortMergeJoin\n      :              :              :- CometSort\n      :              :              :  +- CometExchange\n      :              :              :     +- CometFilter\n      :              :              :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :              +- CometSort\n      :              :                 +- CometProject\n      :              :                    +- CometFilter\n      :              :                       :  +- ReusedSubquery\n      :              :                       +- CometHashAggregate\n      :              :                          +- CometExchange\n      :              :                             +- CometHashAggregate\n      :              :                                +- CometProject\n      :              :                                   +- CometBroadcastHashJoin\n      :              :                                      :- CometProject\n      :              :                                      :  +- CometFilter\n      :              :                                      :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :                                      +- CometBroadcastExchange\n      :              :                                         +- CometFilter\n      :              :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometSortMergeJoin\n                     :     :  :- CometSort\n                     :     :  :  +- CometExchange\n                     :     :  :     +- CometProject\n                     :     :  :        +- CometBroadcastHashJoin\n                     :     :  :           :- CometFilter\n                     :     :  :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :     :  :           :        +- ReusedSubquery\n                     :     :  :           +- CometBroadcastExchange\n                     :     :  :              +- CometProject\n                     :     :  :                 +- CometFilter\n                     :     :  :                    +- CometHashAggregate\n                     :     :  :                       +- CometExchange\n                     :     :  :                          +- CometHashAggregate\n                     :     :  :                             +- CometProject\n                     :     :  :                                +- CometBroadcastHashJoin\n                     :     :  :                                   :- CometProject\n                     :     :  :                                   :  +- CometBroadcastHashJoin\n                     :     :  :                                   :     :- CometFilter\n                     :     :  :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :  :                                   :     :        +- SubqueryBroadcast\n                     :     :  :                                   :     :           +- BroadcastExchange\n                     :     :  :                                   :     :              +- CometNativeColumnarToRow\n                     :     :  :                                   :     :                 +- CometProject\n                     :     :  :                                   :     :                    +- CometFilter\n                     :     :  :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :  :                                   :     +- CometBroadcastExchange\n                     :     :  :                                   :        +- CometProject\n                     :     :  :                                   :           +- CometFilter\n                     :     :  :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :  :                                   +- CometBroadcastExchange\n                     :     :  :                                      +- CometFilter\n                     :     :  :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     :     :  +- CometSort\n                     :     :     +- CometProject\n                     :     :        +- CometFilter\n                     :     :           :  +- ReusedSubquery\n                     :     :           +- CometHashAggregate\n                     :     :              +- CometExchange\n                     :     :                 +- CometHashAggregate\n                     :     :                    +- CometProject\n                     :     :                       +- CometBroadcastHashJoin\n                     :     :                          :- CometProject\n                     :     :                          :  +- CometFilter\n                     :     :                          :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :                          +- CometBroadcastExchange\n                     :     :                             +- CometFilter\n                     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometSortMergeJoin\n                     :              :- CometSort\n                     :              :  +- CometExchange\n                     :              :     +- CometFilter\n                     :              :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :              +- CometSort\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       :  +- ReusedSubquery\n                     :                       +- CometHashAggregate\n                     :                          +- CometExchange\n                     :                             +- CometHashAggregate\n                     :                                +- CometProject\n                     :                                   +- CometBroadcastHashJoin\n                     :                                      :- CometProject\n                     :                                      :  +- CometFilter\n                     :                                      :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :                                      +- CometBroadcastExchange\n                     :                                         +- CometFilter\n                     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 177 out of 190 eligible operators (93%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q24a.native_datafusion/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometNativeScan parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometNativeScan parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                                               +- CometNativeScan parquet spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometNativeScan parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q24a.native_iceberg_compat/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q24b.native_datafusion/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometNativeScan parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometNativeScan parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                                               +- CometNativeScan parquet spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometNativeScan parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q24b.native_iceberg_compat/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q25.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- Filter\n                  :     :     :     :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :     :        +- Filter\n                  :     :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                             +- CometProject\n                  :     :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- Filter\n                  :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :                    +- ReusedSubquery\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n                  :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 22 out of 57 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q25.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :     :     :     :           +- BroadcastExchange\n                  :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                 +- CometProject\n                  :     :     :     :     :     :     :                    +- CometFilter\n                  :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :                 +- ReusedSubquery\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 52 out of 57 eligible operators (91%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q26.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Filter\n                  :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :              +- BroadcastExchange\n                  :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :                    +- CometProject\n                  :     :     :     :                       +- CometFilter\n                  :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.item\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 16 out of 35 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q26.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :           +- BroadcastExchange\n                  :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :                 +- CometProject\n                  :     :     :     :                    +- CometFilter\n                  :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 33 out of 35 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q27.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Filter\n                     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :           +- SubqueryBroadcast\n                     :     :     :     :              +- BroadcastExchange\n                     :     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :     :                    +- CometProject\n                     :     :     :     :                       +- CometFilter\n                     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometProject\n                     :     :     :              +- CometFilter\n                     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.store\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 16 out of 36 eligible operators (44%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q27.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometFilter\n                     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :     :     :        +- SubqueryBroadcast\n                     :     :     :     :           +- BroadcastExchange\n                     :     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :     :                 +- CometProject\n                     :     :     :     :                    +- CometFilter\n                     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometProject\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 34 out of 36 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q28.native_datafusion/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :     +- CometColumnarExchange\n:  :  :  :  :        +- HashAggregate\n:  :  :  :  :           +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :  :              +- CometNativeColumnarToRow\n:  :  :  :  :                 +- CometExchange\n:  :  :  :  :                    +- CometHashAggregate\n:  :  :  :  :                       +- CometProject\n:  :  :  :  :                          +- CometFilter\n:  :  :  :  :                             +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometColumnarExchange\n:  :  :  :              +- HashAggregate\n:  :  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :                    +- CometNativeColumnarToRow\n:  :  :  :                       +- CometExchange\n:  :  :  :                          +- CometHashAggregate\n:  :  :  :                             +- CometProject\n:  :  :  :                                +- CometFilter\n:  :  :  :                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometColumnarExchange\n:  :  :              +- HashAggregate\n:  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :                    +- CometNativeColumnarToRow\n:  :  :                       +- CometExchange\n:  :  :                          +- CometHashAggregate\n:  :  :                             +- CometProject\n:  :  :                                +- CometFilter\n:  :  :                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometColumnarExchange\n:  :              +- HashAggregate\n:  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :                    +- CometNativeColumnarToRow\n:  :                       +- CometExchange\n:  :                          +- CometHashAggregate\n:  :                             +- CometProject\n:  :                                +- CometFilter\n:  :                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:                    +- CometNativeColumnarToRow\n:                       +- CometExchange\n:                          +- CometHashAggregate\n:                             +- CometProject\n:                                +- CometFilter\n:                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n+- BroadcastExchange\n   +- CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometColumnarExchange\n            +- 
HashAggregate\n               +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n                  +- CometNativeColumnarToRow\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.store_sales\n\nComet accelerated 42 out of 64 eligible operators (65%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q28.native_iceberg_compat/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :     +- CometColumnarExchange\n:  :  :  :  :        +- HashAggregate\n:  :  :  :  :           +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :  :              +- CometNativeColumnarToRow\n:  :  :  :  :                 +- CometExchange\n:  :  :  :  :                    +- CometHashAggregate\n:  :  :  :  :                       +- CometProject\n:  :  :  :  :                          +- CometFilter\n:  :  :  :  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometColumnarExchange\n:  :  :  :              +- HashAggregate\n:  :  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :                    +- CometNativeColumnarToRow\n:  :  :  :                       +- CometExchange\n:  :  :  :                          +- CometHashAggregate\n:  :  :  :                             +- CometProject\n:  :  :  :                                +- CometFilter\n:  :  :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometColumnarExchange\n:  :  :              +- HashAggregate\n:  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :                    +- CometNativeColumnarToRow\n:  :  :                       +- CometExchange\n:  :  :                          +- CometHashAggregate\n:  :  :                             +- CometProject\n:  :  :                                +- CometFilter\n:  :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometColumnarExchange\n:  :              +- HashAggregate\n:  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :                    +- CometNativeColumnarToRow\n:  :                       +- CometExchange\n:  :                          +- CometHashAggregate\n:  :                             +- CometProject\n:  :                                +- CometFilter\n:  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:                    +- CometNativeColumnarToRow\n:                       +- CometExchange\n:                          +- CometHashAggregate\n:                             +- CometProject\n:                                +- CometFilter\n:                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n+- BroadcastExchange\n   +- 
CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometColumnarExchange\n            +- HashAggregate\n               +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n                  +- CometNativeColumnarToRow\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n\nComet accelerated 42 out of 64 eligible operators (65%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q29.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- Filter\n                  :     :     :     :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :     :        +- Filter\n                  :     :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                             +- CometProject\n                  :     :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- Filter\n                  :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :                             +- CometProject\n    
              :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n                  :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 25 out of 61 eligible operators (40%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q29.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :     :     :     :           +- BroadcastExchange\n                  :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                 +- CometProject\n                  :     :     :     :     :     :     :                    +- CometFilter\n                  :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- 
CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 55 out of 61 eligible operators (90%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q3.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q3.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q30.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Project\n      :     :     :                    :  +- BroadcastHashJoin\n      :     :     :                    :     :- Filter\n      :     :     :                    :     :  +- ColumnarToRow\n      :     :     :                    :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :     :           +- SubqueryBroadcast\n      :     :     :                    :     :              +- BroadcastExchange\n      :     :     :                    :     :                 +- CometNativeColumnarToRow\n      :     :     :                    :     :                    +- CometProject\n      :     :     :                    :     :                       +- CometFilter\n      :     :     :                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    :     +- BroadcastExchange\n      :     :     :                    :        +- CometNativeColumnarToRow\n      :     :     :                    :           +- CometProject\n      :     :     :                    :              +- CometFilter\n      :     :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Filter\n      :     :                                         :     :  +- ColumnarToRow\n      :     :                                         :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :           +- ReusedSubquery\n      :     :                                         :  
   +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometProject\n      :     :                                         :              +- CometFilter\n      :     :                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometProject\n      :     :                                                  +- CometFilter\n      :     :                                                     +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 24 out of 61 eligible operators (39%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q30.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometProject\n         :     :     :                 :  +- CometBroadcastHashJoin\n         :     :     :                 :     :- CometFilter\n         :     :     :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :     :     :                 :     :        +- SubqueryBroadcast\n         :     :     :                 :     :           +- BroadcastExchange\n         :     :     :                 :     :              +- CometNativeColumnarToRow\n         :     :     :                 :     :                 +- CometProject\n         :     :     :                 :     :                    +- CometFilter\n         :     :     :                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 :     +- CometBroadcastExchange\n         :     :     :                 :        +- CometProject\n         :     :     :                 :           +- CometFilter\n         :     :     :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometFilter\n         :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometHashAggregate\n         :     :                       +- CometExchange\n         :     :                          +- CometHashAggregate\n         :     :                             +- CometProject\n         :     :                                +- CometBroadcastHashJoin\n         :     :                                   :- CometProject\n         :     :                                   :  +- CometBroadcastHashJoin\n         :     :                                   :     :- CometFilter\n         :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :     :                                   :     :        +- ReusedSubquery\n         :     :                                   :     +- CometBroadcastExchange\n         :     :                                   :        +- CometProject\n         :     :                                   :           +- CometFilter\n         :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                   +- CometBroadcastExchange\n    
     :     :                                      +- CometProject\n         :     :                                         +- CometFilter\n         :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 58 out of 61 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q31.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Project\n            :  +- BroadcastHashJoin\n            :     :- BroadcastHashJoin\n            :     :  :- Project\n            :     :  :  +- BroadcastHashJoin\n            :     :  :     :- BroadcastHashJoin\n            :     :  :     :  :- HashAggregate\n            :     :  :     :  :  +- CometNativeColumnarToRow\n            :     :  :     :  :     +- CometColumnarExchange\n            :     :  :     :  :        +- HashAggregate\n            :     :  :     :  :           +- Project\n            :     :  :     :  :              +- BroadcastHashJoin\n            :     :  :     :  :                 :- Project\n            :     :  :     :  :                 :  +- BroadcastHashJoin\n            :     :  :     :  :                 :     :- Filter\n            :     :  :     :  :                 :     :  +- ColumnarToRow\n            :     :  :     :  :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :  :     :  :                 :     :           +- SubqueryBroadcast\n            :     :  :     :  :                 :     :              +- BroadcastExchange\n            :     :  :     :  :                 :     :                 +- CometNativeColumnarToRow\n            :     :  :     :  :                 :     :                    +- CometFilter\n            :     :  :     :  :                 :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :  :                 :     +- BroadcastExchange\n            :     :  :     :  :                 :        +- CometNativeColumnarToRow\n            :     :  :     :  :                 :           +- CometFilter\n            :     :  :     :  :                 :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :  :                 +- BroadcastExchange\n            :     :  :     :  :                    +- CometNativeColumnarToRow\n            :     :  :     :  :                       +- CometFilter\n            :     :  :     :  :                          +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     :  :     :  +- BroadcastExchange\n            :     :  :     :     +- HashAggregate\n            :     :  :     :        +- CometNativeColumnarToRow\n            :     :  :     :           +- CometColumnarExchange\n            :     :  :     :              +- HashAggregate\n            :     :  :     :                 +- Project\n            :     :  :     :                    +- BroadcastHashJoin\n            :     :  :     :                       :- Project\n            :     :  :     :                       :  +- BroadcastHashJoin\n            :     :  :     :                       :     :- Filter\n            :     :  :     :                       :     :  +- ColumnarToRow\n            :     :  :     :                       :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :  :     :                       :     :           +- SubqueryBroadcast\n            :     :  :     :                       :     :              +- BroadcastExchange\n            :     :  :     :                       :     :                 +- 
CometNativeColumnarToRow\n            :     :  :     :                       :     :                    +- CometFilter\n            :     :  :     :                       :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :                       :     +- BroadcastExchange\n            :     :  :     :                       :        +- CometNativeColumnarToRow\n            :     :  :     :                       :           +- CometFilter\n            :     :  :     :                       :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :                       +- BroadcastExchange\n            :     :  :     :                          +- CometNativeColumnarToRow\n            :     :  :     :                             +- CometFilter\n            :     :  :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     :  :     +- BroadcastExchange\n            :     :  :        +- HashAggregate\n            :     :  :           +- CometNativeColumnarToRow\n            :     :  :              +- CometColumnarExchange\n            :     :  :                 +- HashAggregate\n            :     :  :                    +- Project\n            :     :  :                       +- BroadcastHashJoin\n            :     :  :                          :- Project\n            :     :  :                          :  +- BroadcastHashJoin\n            :     :  :                          :     :- Filter\n            :     :  :                          :     :  +- ColumnarToRow\n            :     :  :                          :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :  :                          :     :           +- SubqueryBroadcast\n            :     :  :                          :     :              +- BroadcastExchange\n            :     :  :                          :     :                 +- CometNativeColumnarToRow\n            :     :  :                          :     :                    +- CometFilter\n            :     :  :                          :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :                          :     +- BroadcastExchange\n            :     :  :                          :        +- CometNativeColumnarToRow\n            :     :  :                          :           +- CometFilter\n            :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :                          +- BroadcastExchange\n            :     :  :                             +- CometNativeColumnarToRow\n            :     :  :                                +- CometFilter\n            :     :  :                                   +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     :  +- BroadcastExchange\n            :     :     +- HashAggregate\n            :     :        +- CometNativeColumnarToRow\n            :     :           +- CometColumnarExchange\n            :     :              +- HashAggregate\n            :     :                 +- Project\n            :     :                    +- BroadcastHashJoin\n            :     :                       :- Project\n            :     :                       :  +- BroadcastHashJoin\n            :    
 :                       :     :- Filter\n            :     :                       :     :  +- ColumnarToRow\n            :     :                       :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :                       :     :           +- ReusedSubquery\n            :     :                       :     +- BroadcastExchange\n            :     :                       :        +- CometNativeColumnarToRow\n            :     :                       :           +- CometFilter\n            :     :                       :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :                       +- BroadcastExchange\n            :     :                          +- CometNativeColumnarToRow\n            :     :                             +- CometFilter\n            :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     +- BroadcastExchange\n            :        +- HashAggregate\n            :           +- CometNativeColumnarToRow\n            :              +- CometColumnarExchange\n            :                 +- HashAggregate\n            :                    +- Project\n            :                       +- BroadcastHashJoin\n            :                          :- Project\n            :                          :  +- BroadcastHashJoin\n            :                          :     :- Filter\n            :                          :     :  +- ColumnarToRow\n            :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                          :     :           +- ReusedSubquery\n            :                          :     +- BroadcastExchange\n            :                          :        +- CometNativeColumnarToRow\n            :                          :           +- CometFilter\n            :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                          +- BroadcastExchange\n            :                             +- CometNativeColumnarToRow\n            :                                +- CometFilter\n            :                                   +- CometNativeScan parquet spark_catalog.default.customer_address\n            +- BroadcastExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- ReusedSubquery\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometFilter\n                                 :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                       
          +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 38 out of 120 eligible operators (31%). Final plan contains 28 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q31.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometBroadcastHashJoin\n            :     :  :- CometProject\n            :     :  :  +- CometBroadcastHashJoin\n            :     :  :     :- CometBroadcastHashJoin\n            :     :  :     :  :- CometHashAggregate\n            :     :  :     :  :  +- CometExchange\n            :     :  :     :  :     +- CometHashAggregate\n            :     :  :     :  :        +- CometProject\n            :     :  :     :  :           +- CometBroadcastHashJoin\n            :     :  :     :  :              :- CometProject\n            :     :  :     :  :              :  +- CometBroadcastHashJoin\n            :     :  :     :  :              :     :- CometFilter\n            :     :  :     :  :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :  :     :  :              :     :        +- SubqueryBroadcast\n            :     :  :     :  :              :     :           +- BroadcastExchange\n            :     :  :     :  :              :     :              +- CometNativeColumnarToRow\n            :     :  :     :  :              :     :                 +- CometFilter\n            :     :  :     :  :              :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :     :  :              :     +- CometBroadcastExchange\n            :     :  :     :  :              :        +- CometFilter\n            :     :  :     :  :              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :     :  :              +- CometBroadcastExchange\n            :     :  :     :  :                 +- CometFilter\n            :     :  :     :  :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     :  :     :  +- CometBroadcastExchange\n            :     :  :     :     +- CometHashAggregate\n            :     :  :     :        +- CometExchange\n            :     :  :     :           +- CometHashAggregate\n            :     :  :     :              +- CometProject\n            :     :  :     :                 +- CometBroadcastHashJoin\n            :     :  :     :                    :- CometProject\n            :     :  :     :                    :  +- CometBroadcastHashJoin\n            :     :  :     :                    :     :- CometFilter\n            :     :  :     :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :  :     :                    :     :        +- SubqueryBroadcast\n            :     :  :     :                    :     :           +- BroadcastExchange\n            :     :  :     :                    :     :              +- CometNativeColumnarToRow\n            :     :  :     :                    :     :                 +- CometFilter\n            :     :  :     :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :     :                    :     +- CometBroadcastExchange\n            :     :  :     :                    :        +- CometFilter\n            :     :  :     :                    :           +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n            :     :  :     :                    +- CometBroadcastExchange\n            :     :  :     :                       +- CometFilter\n            :     :  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     :  :     +- CometBroadcastExchange\n            :     :  :        +- CometHashAggregate\n            :     :  :           +- CometExchange\n            :     :  :              +- CometHashAggregate\n            :     :  :                 +- CometProject\n            :     :  :                    +- CometBroadcastHashJoin\n            :     :  :                       :- CometProject\n            :     :  :                       :  +- CometBroadcastHashJoin\n            :     :  :                       :     :- CometFilter\n            :     :  :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :  :                       :     :        +- SubqueryBroadcast\n            :     :  :                       :     :           +- BroadcastExchange\n            :     :  :                       :     :              +- CometNativeColumnarToRow\n            :     :  :                       :     :                 +- CometFilter\n            :     :  :                       :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :                       :     +- CometBroadcastExchange\n            :     :  :                       :        +- CometFilter\n            :     :  :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :                       +- CometBroadcastExchange\n            :     :  :                          +- CometFilter\n            :     :  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     :  +- CometBroadcastExchange\n            :     :     +- CometHashAggregate\n            :     :        +- CometExchange\n            :     :           +- CometHashAggregate\n            :     :              +- CometProject\n            :     :                 +- CometBroadcastHashJoin\n            :     :                    :- CometProject\n            :     :                    :  +- CometBroadcastHashJoin\n            :     :                    :     :- CometFilter\n            :     :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n            :     :                    :     :        +- ReusedSubquery\n            :     :                    :     +- CometBroadcastExchange\n            :     :                    :        +- CometFilter\n            :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :                    +- CometBroadcastExchange\n            :     :                       +- CometFilter\n            :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     +- CometBroadcastExchange\n            :        +- CometHashAggregate\n            :           +- CometExchange\n            :              +- CometHashAggregate\n            :                 +- CometProject\n            :                    +- CometBroadcastHashJoin\n            :  
                     :- CometProject\n            :                       :  +- CometBroadcastHashJoin\n            :                       :     :- CometFilter\n            :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n            :                       :     :        +- ReusedSubquery\n            :                       :     +- CometBroadcastExchange\n            :                       :        +- CometFilter\n            :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                       +- CometBroadcastExchange\n            :                          +- CometFilter\n            :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            +- CometBroadcastExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- ReusedSubquery\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 111 out of 120 eligible operators (92%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q32.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Filter\n               :     :     :  +- ColumnarToRow\n               :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :           +- SubqueryBroadcast\n               :     :     :              +- BroadcastExchange\n               :     :     :                 +- CometNativeColumnarToRow\n               :     :     :                    +- CometProject\n               :     :     :                       +- CometFilter\n               :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.item\n               :     +- BroadcastExchange\n               :        +- Filter\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Project\n               :                          +- BroadcastHashJoin\n               :                             :- Filter\n               :                             :  +- ColumnarToRow\n               :                             :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                             :           +- ReusedSubquery\n               :                             +- BroadcastExchange\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometProject\n               :                                      +- CometFilter\n               :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 38 eligible operators (36%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q32.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :     :     :        +- SubqueryBroadcast\n               :     :     :           +- BroadcastExchange\n               :     :     :              +- CometNativeColumnarToRow\n               :     :     :                 +- CometProject\n               :     :     :                    +- CometFilter\n               :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                          :        +- ReusedSubquery\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 35 out of 38 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q33.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- SubqueryBroadcast\n               :                 :     :     :              +- BroadcastExchange\n               :                 :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :                    +- CometProject\n               :                 :     :     :                       +- CometFilter\n               :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometNativeScan parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 : 
    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- ReusedSubquery\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometNativeScan parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.item\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :           +- ReusedSubquery\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometBroadcastHashJoin\n                                          :- CometFilter\n                                          
:  +- CometNativeScan parquet spark_catalog.default.item\n                                          +- CometBroadcastExchange\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 46 out of 93 eligible operators (49%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q33.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :     :     :        +- SubqueryBroadcast\n               :              :     :     :           +- BroadcastExchange\n               :              :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :                 +- CometProject\n               :              :     :     :                    +- CometFilter\n               :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometBroadcastHashJoin\n               :                    :- CometFilter\n               :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    +- CometBroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :     :        +- ReusedSubquery\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometBroadcastHashJoin\n               :                    :- CometFilter\n               :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    +- CometBroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :     :        +- ReusedSubquery\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              +- CometBroadcastExchange\n                                 +- CometBroadcastHashJoin\n                                    :- CometFilter\n                                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 89 out of 93 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q34.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Filter\n            :  +- HashAggregate\n            :     +- CometNativeColumnarToRow\n            :        +- CometColumnarExchange\n            :           +- HashAggregate\n            :              +- Project\n            :                 +- BroadcastHashJoin\n            :                    :- Project\n            :                    :  +- BroadcastHashJoin\n            :                    :     :- Project\n            :                    :     :  +- BroadcastHashJoin\n            :                    :     :     :- Filter\n            :                    :     :     :  +- ColumnarToRow\n            :                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                    :     :     :           +- SubqueryBroadcast\n            :                    :     :     :              +- BroadcastExchange\n            :                    :     :     :                 +- CometNativeColumnarToRow\n            :                    :     :     :                    +- CometProject\n            :                    :     :     :                       +- CometFilter\n            :                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     :     +- BroadcastExchange\n            :                    :     :        +- CometNativeColumnarToRow\n            :                    :     :           +- CometProject\n            :                    :     :              +- CometFilter\n            :                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     +- BroadcastExchange\n            :                    :        +- CometNativeColumnarToRow\n            :                    :           +- CometProject\n            :                    :              +- CometFilter\n            :                    :                 +- CometNativeScan parquet spark_catalog.default.store\n            :                    +- BroadcastExchange\n            :                       +- CometNativeColumnarToRow\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometNativeScan parquet spark_catalog.default.household_demographics\n            +- BroadcastExchange\n               +- CometNativeColumnarToRow\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 37 eligible operators (48%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q34.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometFilter\n            :  +- CometHashAggregate\n            :     +- CometExchange\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometFilter\n            :                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :        +- SubqueryBroadcast\n            :                 :     :     :           +- BroadcastExchange\n            :                 :     :     :              +- CometNativeColumnarToRow\n            :                 :     :     :                 +- CometProject\n            :                 :     :     :                    +- CometFilter\n            :                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometProject\n            :                 :     :           +- CometFilter\n            :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometProject\n            :                 :           +- CometFilter\n            :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 37 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q35.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :- BroadcastHashJoin\n                  :     :        :  :- BroadcastHashJoin\n                  :     :        :  :  :- CometNativeColumnarToRow\n                  :     :        :  :  :  +- CometFilter\n                  :     :        :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :        :  :  +- BroadcastExchange\n                  :     :        :  :     +- Project\n                  :     :        :  :        +- BroadcastHashJoin\n                  :     :        :  :           :- ColumnarToRow\n                  :     :        :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :  :           :        +- SubqueryBroadcast\n                  :     :        :  :           :           +- BroadcastExchange\n                  :     :        :  :           :              +- CometNativeColumnarToRow\n                  :     :        :  :           :                 +- CometProject\n                  :     :        :  :           :                    +- CometFilter\n                  :     :        :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  :           +- BroadcastExchange\n                  :     :        :  :              +- CometNativeColumnarToRow\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- Project\n                  :     :        :        +- BroadcastHashJoin\n                  :     :        :           :- ColumnarToRow\n                  :     :        :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :           :        +- ReusedSubquery\n                  :     :        :           +- BroadcastExchange\n                  :     :        :              +- CometNativeColumnarToRow\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- 
BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 54 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q35.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                  :     :        :  :- CometNativeColumnarToRow\n                  :     :        :  :  +- CometBroadcastHashJoin\n                  :     :        :  :     :- CometFilter\n                  :     :        :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :        :  :     +- CometBroadcastExchange\n                  :     :        :  :        +- CometProject\n                  :     :        :  :           +- CometBroadcastHashJoin\n                  :     :        :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :        :  :              :     +- SubqueryBroadcast\n                  :     :        :  :              :        +- BroadcastExchange\n                  :     :        :  :              :           +- CometNativeColumnarToRow\n                  :     :        :  :              :              +- CometProject\n                  :     :        :  :              :                 +- CometFilter\n                  :     :        :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  :              +- CometBroadcastExchange\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- CometNativeColumnarToRow\n                  :     :        :        +- CometProject\n                  :     :        :           +- CometBroadcastHashJoin\n                  :     :        :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :        :              :     +- ReusedSubquery\n                  :     :        :              +- CometBroadcastExchange\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- CometNativeColumnarToRow\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n          
        :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 54 eligible operators (64%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q36.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- Filter\n                                    :     :     :  +- ColumnarToRow\n                                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :     :           +- SubqueryBroadcast\n                                    :     :     :              +- BroadcastExchange\n                                    :     :     :                 +- CometNativeColumnarToRow\n                                    :     :     :                    +- CometProject\n                                    :     :     :                       +- CometFilter\n                                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometProject\n                                    :     :              +- CometFilter\n                                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.item\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 15 out of 34 eligible operators (44%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q36.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometExpand\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :     :        +- SubqueryBroadcast\n                                 :     :     :           +- BroadcastExchange\n                                 :     :     :              +- CometNativeColumnarToRow\n                                 :     :     :                 +- CometProject\n                                 :     :     :                    +- CometFilter\n                                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 29 out of 34 eligible operators (85%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q37.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- BroadcastExchange\n                  :  +- Project\n                  :     +- BroadcastHashJoin\n                  :        :- Project\n                  :        :  +- BroadcastHashJoin\n                  :        :     :- CometNativeColumnarToRow\n                  :        :     :  +- CometProject\n                  :        :     :     +- CometFilter\n                  :        :     :        +- CometNativeScan parquet spark_catalog.default.item\n                  :        :     +- BroadcastExchange\n                  :        :        +- Project\n                  :        :           +- Filter\n                  :        :              +- ColumnarToRow\n                  :        :                 +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :        :                       +- SubqueryBroadcast\n                  :        :                          +- BroadcastExchange\n                  :        :                             +- CometNativeColumnarToRow\n                  :        :                                +- CometProject\n                  :        :                                   +- CometFilter\n                  :        :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :        +- BroadcastExchange\n                  :           +- CometNativeColumnarToRow\n                  :              +- CometProject\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n\nComet accelerated 15 out of 30 eligible operators (50%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q37.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometBroadcastExchange\n                  :  +- CometProject\n                  :     +- CometBroadcastHashJoin\n                  :        :- CometProject\n                  :        :  +- CometBroadcastHashJoin\n                  :        :     :- CometProject\n                  :        :     :  +- CometFilter\n                  :        :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :        :     +- CometBroadcastExchange\n                  :        :        +- CometProject\n                  :        :           +- CometFilter\n                  :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :        :                    +- SubqueryBroadcast\n                  :        :                       +- BroadcastExchange\n                  :        :                          +- CometNativeColumnarToRow\n                  :        :                             +- CometProject\n                  :        :                                +- CometFilter\n                  :        :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        +- CometBroadcastExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n\nComet accelerated 28 out of 30 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q38.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometBroadcastHashJoin\n               :  :- CometHashAggregate\n               :  :  +- CometColumnarExchange\n               :  :     +- HashAggregate\n               :  :        +- Project\n               :  :           +- BroadcastHashJoin\n               :  :              :- Project\n               :  :              :  +- BroadcastHashJoin\n               :  :              :     :- Filter\n               :  :              :     :  +- ColumnarToRow\n               :  :              :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :  :              :     :           +- SubqueryBroadcast\n               :  :              :     :              +- BroadcastExchange\n               :  :              :     :                 +- CometNativeColumnarToRow\n               :  :              :     :                    +- CometProject\n               :  :              :     :                       +- CometFilter\n               :  :              :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :              :     +- BroadcastExchange\n               :  :              :        +- CometNativeColumnarToRow\n               :  :              :           +- CometProject\n               :  :              :              +- CometFilter\n               :  :              :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :              +- BroadcastExchange\n               :  :                 +- CometNativeColumnarToRow\n               :  :                    +- CometProject\n               :  :                       +- CometFilter\n               :  :                          +- CometNativeScan parquet spark_catalog.default.customer\n               :  +- CometBroadcastExchange\n               :     +- CometHashAggregate\n               :        +- CometColumnarExchange\n               :           +- HashAggregate\n               :              +- Project\n               :                 +- BroadcastHashJoin\n               :                    :- Project\n               :                    :  +- BroadcastHashJoin\n               :                    :     :- Filter\n               :                    :     :  +- ColumnarToRow\n               :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :     :           +- ReusedSubquery\n               :                    :     +- BroadcastExchange\n               :                    :        +- CometNativeColumnarToRow\n               :                    :           +- CometProject\n               :                    :              +- CometFilter\n               :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    +- BroadcastExchange\n               :                       +- CometNativeColumnarToRow\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometNativeScan parquet spark_catalog.default.customer\n           
    +- CometBroadcastExchange\n                  +- CometHashAggregate\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- ReusedSubquery\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 66 eligible operators (53%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q38.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometBroadcastHashJoin\n               :  :- CometHashAggregate\n               :  :  +- CometExchange\n               :  :     +- CometHashAggregate\n               :  :        +- CometProject\n               :  :           +- CometBroadcastHashJoin\n               :  :              :- CometProject\n               :  :              :  +- CometBroadcastHashJoin\n               :  :              :     :- CometFilter\n               :  :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :  :              :     :        +- SubqueryBroadcast\n               :  :              :     :           +- BroadcastExchange\n               :  :              :     :              +- CometNativeColumnarToRow\n               :  :              :     :                 +- CometProject\n               :  :              :     :                    +- CometFilter\n               :  :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :              :     +- CometBroadcastExchange\n               :  :              :        +- CometProject\n               :  :              :           +- CometFilter\n               :  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :              +- CometBroadcastExchange\n               :  :                 +- CometProject\n               :  :                    +- CometFilter\n               :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               :  +- CometBroadcastExchange\n               :     +- CometHashAggregate\n               :        +- CometExchange\n               :           +- CometHashAggregate\n               :              +- CometProject\n               :                 +- CometBroadcastHashJoin\n               :                    :- CometProject\n               :                    :  +- CometBroadcastHashJoin\n               :                    :     :- CometFilter\n               :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :     :        +- ReusedSubquery\n               :                    :     +- CometBroadcastExchange\n               :                    :        +- CometProject\n               :                    :           +- CometFilter\n               :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometBroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               +- CometBroadcastExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 
:     :- CometFilter\n                                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :        +- ReusedSubquery\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 62 out of 66 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q39a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- BroadcastHashJoin\n         :- Project\n         :  +- Filter\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- Project\n         :                    +- BroadcastHashJoin\n         :                       :- Project\n         :                       :  +- BroadcastHashJoin\n         :                       :     :- Project\n         :                       :     :  +- BroadcastHashJoin\n         :                       :     :     :- Filter\n         :                       :     :     :  +- ColumnarToRow\n         :                       :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                       :     :     :           +- SubqueryBroadcast\n         :                       :     :     :              +- BroadcastExchange\n         :                       :     :     :                 +- CometNativeColumnarToRow\n         :                       :     :     :                    +- CometProject\n         :                       :     :     :                       +- CometFilter\n         :                       :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                       :     :     +- BroadcastExchange\n         :                       :     :        +- CometNativeColumnarToRow\n         :                       :     :           +- CometFilter\n         :                       :     :              +- CometNativeScan parquet spark_catalog.default.item\n         :                       :     +- BroadcastExchange\n         :                       :        +- CometNativeColumnarToRow\n         :                       :           +- CometFilter\n         :                       :              +- CometNativeScan parquet spark_catalog.default.warehouse\n         :                       +- BroadcastExchange\n         :                          +- CometNativeColumnarToRow\n         :                             +- CometProject\n         :                                +- CometFilter\n         :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- BroadcastExchange\n            +- Project\n               +- Filter\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- Filter\n                                    :     :     :  +- ColumnarToRow\n                                    :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :     :           +- SubqueryBroadcast\n                                    :     :     :              +- BroadcastExchange\n                                    :     :   
  :                 +- CometNativeColumnarToRow\n                                    :     :     :                    +- CometProject\n                                    :     :     :                       +- CometFilter\n                                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometFilter\n                                    :     :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometFilter\n                                    :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 24 out of 60 eligible operators (40%). Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q39a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometFilter\n         :     +- CometHashAggregate\n         :        +- CometExchange\n         :           +- CometHashAggregate\n         :              +- CometProject\n         :                 +- CometBroadcastHashJoin\n         :                    :- CometProject\n         :                    :  +- CometBroadcastHashJoin\n         :                    :     :- CometProject\n         :                    :     :  +- CometBroadcastHashJoin\n         :                    :     :     :- CometFilter\n         :                    :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n         :                    :     :     :        +- SubqueryBroadcast\n         :                    :     :     :           +- BroadcastExchange\n         :                    :     :     :              +- CometNativeColumnarToRow\n         :                    :     :     :                 +- CometProject\n         :                    :     :     :                    +- CometFilter\n         :                    :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                    :     :     +- CometBroadcastExchange\n         :                    :     :        +- CometFilter\n         :                    :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                    :     +- CometBroadcastExchange\n         :                    :        +- CometFilter\n         :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n         :                    +- CometBroadcastExchange\n         :                       +- CometProject\n         :                          +- CometFilter\n         :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                                 :     :     :        +- SubqueryBroadcast\n                                 :     :     :           +- BroadcastExchange\n                                 :     :     :              +- CometNativeColumnarToRow\n                                 :     :     :                 +- CometProject\n                                 :     :     :                    +- CometFilter\n                                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :  
   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometFilter\n                                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 56 out of 60 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q39b.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- BroadcastHashJoin\n         :- Project\n         :  +- Filter\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- Project\n         :                    +- BroadcastHashJoin\n         :                       :- Project\n         :                       :  +- BroadcastHashJoin\n         :                       :     :- Project\n         :                       :     :  +- BroadcastHashJoin\n         :                       :     :     :- Filter\n         :                       :     :     :  +- ColumnarToRow\n         :                       :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                       :     :     :           +- SubqueryBroadcast\n         :                       :     :     :              +- BroadcastExchange\n         :                       :     :     :                 +- CometNativeColumnarToRow\n         :                       :     :     :                    +- CometProject\n         :                       :     :     :                       +- CometFilter\n         :                       :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                       :     :     +- BroadcastExchange\n         :                       :     :        +- CometNativeColumnarToRow\n         :                       :     :           +- CometFilter\n         :                       :     :              +- CometNativeScan parquet spark_catalog.default.item\n         :                       :     +- BroadcastExchange\n         :                       :        +- CometNativeColumnarToRow\n         :                       :           +- CometFilter\n         :                       :              +- CometNativeScan parquet spark_catalog.default.warehouse\n         :                       +- BroadcastExchange\n         :                          +- CometNativeColumnarToRow\n         :                             +- CometProject\n         :                                +- CometFilter\n         :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- BroadcastExchange\n            +- Project\n               +- Filter\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- Filter\n                                    :     :     :  +- ColumnarToRow\n                                    :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :     :           +- SubqueryBroadcast\n                                    :     :     :              +- BroadcastExchange\n                                    :     :   
  :                 +- CometNativeColumnarToRow\n                                    :     :     :                    +- CometProject\n                                    :     :     :                       +- CometFilter\n                                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometFilter\n                                    :     :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometFilter\n                                    :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 24 out of 60 eligible operators (40%). Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q39b.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometFilter\n         :     +- CometHashAggregate\n         :        +- CometExchange\n         :           +- CometHashAggregate\n         :              +- CometProject\n         :                 +- CometBroadcastHashJoin\n         :                    :- CometProject\n         :                    :  +- CometBroadcastHashJoin\n         :                    :     :- CometProject\n         :                    :     :  +- CometBroadcastHashJoin\n         :                    :     :     :- CometFilter\n         :                    :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n         :                    :     :     :        +- SubqueryBroadcast\n         :                    :     :     :           +- BroadcastExchange\n         :                    :     :     :              +- CometNativeColumnarToRow\n         :                    :     :     :                 +- CometProject\n         :                    :     :     :                    +- CometFilter\n         :                    :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                    :     :     +- CometBroadcastExchange\n         :                    :     :        +- CometFilter\n         :                    :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                    :     +- CometBroadcastExchange\n         :                    :        +- CometFilter\n         :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n         :                    +- CometBroadcastExchange\n         :                       +- CometProject\n         :                          +- CometFilter\n         :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                                 :     :     :        +- SubqueryBroadcast\n                                 :     :     :           +- BroadcastExchange\n                                 :     :     :              +- CometNativeColumnarToRow\n                                 :     :     :                 +- CometProject\n                                 :     :     :                    +- CometFilter\n                                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :  
   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometFilter\n                                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 56 out of 60 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q4.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Project\n      :     :     :  +- BroadcastHashJoin\n      :     :     :     :- BroadcastHashJoin\n      :     :     :     :  :- Filter\n      :     :     :     :  :  +- HashAggregate\n      :     :     :     :  :     +- CometNativeColumnarToRow\n      :     :     :     :  :        +- CometColumnarExchange\n      :     :     :     :  :           +- HashAggregate\n      :     :     :     :  :              +- Project\n      :     :     :     :  :                 +- BroadcastHashJoin\n      :     :     :     :  :                    :- Project\n      :     :     :     :  :                    :  +- BroadcastHashJoin\n      :     :     :     :  :                    :     :- CometNativeColumnarToRow\n      :     :     :     :  :                    :     :  +- CometProject\n      :     :     :     :  :                    :     :     +- CometFilter\n      :     :     :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :     :  :                    :     +- BroadcastExchange\n      :     :     :     :  :                    :        +- Filter\n      :     :     :     :  :                    :           +- ColumnarToRow\n      :     :     :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :     :  :                    :                    +- SubqueryBroadcast\n      :     :     :     :  :                    :                       +- BroadcastExchange\n      :     :     :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :     :     :  :                    :                             +- CometFilter\n      :     :     :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     :  :                    +- BroadcastExchange\n      :     :     :     :  :                       +- CometNativeColumnarToRow\n      :     :     :     :  :                          +- CometFilter\n      :     :     :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     :  +- BroadcastExchange\n      :     :     :     :     +- HashAggregate\n      :     :     :     :        +- CometNativeColumnarToRow\n      :     :     :     :           +- CometColumnarExchange\n      :     :     :     :              +- HashAggregate\n      :     :     :     :                 +- Project\n      :     :     :     :                    +- BroadcastHashJoin\n      :     :     :     :                       :- Project\n      :     :     :     :                       :  +- BroadcastHashJoin\n      :     :     :     :                       :     :- CometNativeColumnarToRow\n      :     :     :     :                       :     :  +- CometProject\n      :     :     :     :                       :     :     +- CometFilter\n      :     :     :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :     :                       :     +- BroadcastExchange\n      :     :     :     :                       :        +- Filter\n      :     :     :     :                       :           +- 
ColumnarToRow\n      :     :     :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :     :                       :                    +- SubqueryBroadcast\n      :     :     :     :                       :                       +- BroadcastExchange\n      :     :     :     :                       :                          +- CometNativeColumnarToRow\n      :     :     :     :                       :                             +- CometFilter\n      :     :     :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     :                       +- BroadcastExchange\n      :     :     :     :                          +- CometNativeColumnarToRow\n      :     :     :     :                             +- CometFilter\n      :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     +- BroadcastExchange\n      :     :     :        +- Filter\n      :     :     :           +- HashAggregate\n      :     :     :              +- CometNativeColumnarToRow\n      :     :     :                 +- CometColumnarExchange\n      :     :     :                    +- HashAggregate\n      :     :     :                       +- Project\n      :     :     :                          +- BroadcastHashJoin\n      :     :     :                             :- Project\n      :     :     :                             :  +- BroadcastHashJoin\n      :     :     :                             :     :- CometNativeColumnarToRow\n      :     :     :                             :     :  +- CometProject\n      :     :     :                             :     :     +- CometFilter\n      :     :     :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :                             :     +- BroadcastExchange\n      :     :     :                             :        +- Filter\n      :     :     :                             :           +- ColumnarToRow\n      :     :     :                             :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                             :                    +- ReusedSubquery\n      :     :     :                             +- BroadcastExchange\n      :     :     :                                +- CometNativeColumnarToRow\n      :     :     :                                   +- CometFilter\n      :     :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     +- BroadcastExchange\n      :     :        +- HashAggregate\n      :     :           +- CometNativeColumnarToRow\n      :     :              +- CometColumnarExchange\n      :     :                 +- HashAggregate\n      :     :                    +- Project\n      :     :                       +- BroadcastHashJoin\n      :     :                          :- Project\n      :     :                          :  +- BroadcastHashJoin\n      :     :                          :     :- CometNativeColumnarToRow\n      :     :                          :     :  +- CometProject\n      :     :                          :     :     +- CometFilter\n      :     :                          :     :        +- 
CometNativeScan parquet spark_catalog.default.customer\n      :     :                          :     +- BroadcastExchange\n      :     :                          :        +- Filter\n      :     :                          :           +- ColumnarToRow\n      :     :                          :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                          :                    +- ReusedSubquery\n      :     :                          +- BroadcastExchange\n      :     :                             +- CometNativeColumnarToRow\n      :     :                                +- CometFilter\n      :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet 
spark_catalog.default.date_dim\n\nComet accelerated 40 out of 126 eligible operators (31%). Final plan contains 26 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q4.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometProject\n         :     :     :  +- CometBroadcastHashJoin\n         :     :     :     :- CometBroadcastHashJoin\n         :     :     :     :  :- CometFilter\n         :     :     :     :  :  +- CometHashAggregate\n         :     :     :     :  :     +- CometExchange\n         :     :     :     :  :        +- CometHashAggregate\n         :     :     :     :  :           +- CometProject\n         :     :     :     :  :              +- CometBroadcastHashJoin\n         :     :     :     :  :                 :- CometProject\n         :     :     :     :  :                 :  +- CometBroadcastHashJoin\n         :     :     :     :  :                 :     :- CometProject\n         :     :     :     :  :                 :     :  +- CometFilter\n         :     :     :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :     :  :                 :     +- CometBroadcastExchange\n         :     :     :     :  :                 :        +- CometFilter\n         :     :     :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :     :  :                 :                 +- SubqueryBroadcast\n         :     :     :     :  :                 :                    +- BroadcastExchange\n         :     :     :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :     :     :  :                 :                          +- CometFilter\n         :     :     :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     :  :                 +- CometBroadcastExchange\n         :     :     :     :  :                    +- CometFilter\n         :     :     :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     :  +- CometBroadcastExchange\n         :     :     :     :     +- CometHashAggregate\n         :     :     :     :        +- CometExchange\n         :     :     :     :           +- CometHashAggregate\n         :     :     :     :              +- CometProject\n         :     :     :     :                 +- CometBroadcastHashJoin\n         :     :     :     :                    :- CometProject\n         :     :     :     :                    :  +- CometBroadcastHashJoin\n         :     :     :     :                    :     :- CometProject\n         :     :     :     :                    :     :  +- CometFilter\n         :     :     :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :     :                    :     +- CometBroadcastExchange\n         :     :     :     :                    :        +- CometFilter\n         :     :     :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :     :                    :                 +- SubqueryBroadcast\n         :     :     :     :                    :                    +- BroadcastExchange\n         :     :     
:     :                    :                       +- CometNativeColumnarToRow\n         :     :     :     :                    :                          +- CometFilter\n         :     :     :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     :                    +- CometBroadcastExchange\n         :     :     :     :                       +- CometFilter\n         :     :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     +- CometBroadcastExchange\n         :     :     :        +- CometFilter\n         :     :     :           +- CometHashAggregate\n         :     :     :              +- CometExchange\n         :     :     :                 +- CometHashAggregate\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometBroadcastHashJoin\n         :     :     :                          :- CometProject\n         :     :     :                          :  +- CometBroadcastHashJoin\n         :     :     :                          :     :- CometProject\n         :     :     :                          :     :  +- CometFilter\n         :     :     :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :                          :     +- CometBroadcastExchange\n         :     :     :                          :        +- CometFilter\n         :     :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :     :     :                          :                 +- ReusedSubquery\n         :     :     :                          +- CometBroadcastExchange\n         :     :     :                             +- CometFilter\n         :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometExchange\n         :     :              +- CometHashAggregate\n         :     :                 +- CometProject\n         :     :                    +- CometBroadcastHashJoin\n         :     :                       :- CometProject\n         :     :                       :  +- CometBroadcastHashJoin\n         :     :                       :     :- CometProject\n         :     :                       :     :  +- CometFilter\n         :     :                       :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                       :     +- CometBroadcastExchange\n         :     :                       :        +- CometFilter\n         :     :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :     :                       :                 +- ReusedSubquery\n         :     :                       +- CometBroadcastExchange\n         :     :                          +- CometFilter\n         :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- 
CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 118 out of 126 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q40.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometSortMergeJoin\n                  :     :     :     :- CometSort\n                  :     :     :     :  +- CometColumnarExchange\n                  :     :     :     :     +- Filter\n                  :     :     :     :        +- ColumnarToRow\n                  :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :                          +- CometFilter\n                  :     :     :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometSort\n                  :     :     :        +- CometExchange\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 32 out of 36 eligible operators (88%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q40.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometSortMergeJoin\n                  :     :     :     :- CometSort\n                  :     :     :     :  +- CometExchange\n                  :     :     :     :     +- CometFilter\n                  :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :              +- SubqueryBroadcast\n                  :     :     :     :                 +- BroadcastExchange\n                  :     :     :     :                    +- CometNativeColumnarToRow\n                  :     :     :     :                       +- CometFilter\n                  :     :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometSort\n                  :     :     :        +- CometExchange\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 34 out of 36 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q41.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometFilter\n                  :     +- CometNativeScan parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q41.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometFilter\n                  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q42.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q42.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q43.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q43.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q44.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- SortMergeJoin\n      :     :     :- Sort\n      :     :     :  +- Project\n      :     :     :     +- Filter\n      :     :     :        +- Window\n      :     :     :           +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n      :     :     :              +- CometNativeColumnarToRow\n      :     :     :                 +- CometSort\n      :     :     :                    +- CometColumnarExchange\n      :     :     :                       +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n      :     :     :                          +- CometNativeColumnarToRow\n      :     :     :                             +- CometSort\n      :     :     :                                +- CometFilter\n      :     :     :                                   :  +- Subquery\n      :     :     :                                   :     +- CometNativeColumnarToRow\n      :     :     :                                   :        +- CometHashAggregate\n      :     :     :                                   :           +- CometExchange\n      :     :     :                                   :              +- CometHashAggregate\n      :     :     :                                   :                 +- CometProject\n      :     :     :                                   :                    +- CometFilter\n      :     :     :                                   :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n      :     :     :                                   +- CometHashAggregate\n      :     :     :                                      +- CometExchange\n      :     :     :                                         +- CometHashAggregate\n      :     :     :                                            +- CometProject\n      :     :     :                                               +- CometFilter\n      :     :     :                                                  +- CometNativeScan parquet spark_catalog.default.store_sales\n      :     :     +- Sort\n      :     :        +- Project\n      :     :           +- Filter\n      :     :              +- Window\n      :     :                 +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometSort\n      :     :                          +- CometColumnarExchange\n      :     :                             +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n      :     :                                +- CometNativeColumnarToRow\n      :     :                                   +- CometSort\n      :     :                                      +- CometFilter\n      :     :                                         :  +- ReusedSubquery\n      :     :                                         +- CometHashAggregate\n      :     :                                            +- CometExchange\n      :     :                                               +- CometHashAggregate\n      :     :                                                  +- CometProject\n      :     :                                                     +- CometFilter\n      :     :                                                        +- CometNativeScan parquet spark_catalog.default.store_sales\n      :     +- BroadcastExchange\n      :        +- 
CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.item\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 32 out of 55 eligible operators (58%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q44.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- SortMergeJoin\n      :     :     :- Sort\n      :     :     :  +- Project\n      :     :     :     +- Filter\n      :     :     :        +- Window\n      :     :     :           +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n      :     :     :              +- CometNativeColumnarToRow\n      :     :     :                 +- CometSort\n      :     :     :                    +- CometColumnarExchange\n      :     :     :                       +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n      :     :     :                          +- CometNativeColumnarToRow\n      :     :     :                             +- CometSort\n      :     :     :                                +- CometFilter\n      :     :     :                                   :  +- Subquery\n      :     :     :                                   :     +- CometNativeColumnarToRow\n      :     :     :                                   :        +- CometHashAggregate\n      :     :     :                                   :           +- CometExchange\n      :     :     :                                   :              +- CometHashAggregate\n      :     :     :                                   :                 +- CometProject\n      :     :     :                                   :                    +- CometFilter\n      :     :     :                                   :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :     :                                   +- CometHashAggregate\n      :     :     :                                      +- CometExchange\n      :     :     :                                         +- CometHashAggregate\n      :     :     :                                            +- CometProject\n      :     :     :                                               +- CometFilter\n      :     :     :                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :     +- Sort\n      :     :        +- Project\n      :     :           +- Filter\n      :     :              +- Window\n      :     :                 +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometSort\n      :     :                          +- CometColumnarExchange\n      :     :                             +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n      :     :                                +- CometNativeColumnarToRow\n      :     :                                   +- CometSort\n      :     :                                      +- CometFilter\n      :     :                                         :  +- ReusedSubquery\n      :     :                                         +- CometHashAggregate\n      :     :                                            +- CometExchange\n      :     :                                               +- CometHashAggregate\n      :     :                                                  +- CometProject\n      :     :                                                     +- CometFilter\n      :     :                                                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     +- 
BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 32 out of 55 eligible operators (58%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q45.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- Filter\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Project\n                     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :- Filter\n                     :     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :     :           +- SubqueryBroadcast\n                     :     :     :     :     :              +- BroadcastExchange\n                     :     :     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :     :     :                    +- CometProject\n                     :     :     :     :     :                       +- CometFilter\n                     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometProject\n                     :     :     :              +- CometFilter\n                     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 41 eligible operators (43%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q45.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- Filter\n                  +-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                     :- CometNativeColumnarToRow\n                     :  +- CometProject\n                     :     +- CometBroadcastHashJoin\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometProject\n                     :        :     :  +- CometBroadcastHashJoin\n                     :        :     :     :- CometProject\n                     :        :     :     :  +- CometBroadcastHashJoin\n                     :        :     :     :     :- CometFilter\n                     :        :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :        :     :     :     :        +- SubqueryBroadcast\n                     :        :     :     :     :           +- BroadcastExchange\n                     :        :     :     :     :              +- CometNativeColumnarToRow\n                     :        :     :     :     :                 +- CometProject\n                     :        :     :     :     :                    +- CometFilter\n                     :        :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :     :     :     +- CometBroadcastExchange\n                     :        :     :     :        +- CometFilter\n                     :        :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :        :     :     +- CometBroadcastExchange\n                     :        :     :        +- CometProject\n                     :        :     :           +- CometFilter\n                     :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        +- CometBroadcastExchange\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 32 out of 41 eligible operators (78%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q46.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- HashAggregate\n      :     :  +- CometNativeColumnarToRow\n      :     :     +- CometColumnarExchange\n      :     :        +- HashAggregate\n      :     :           +- Project\n      :     :              +- BroadcastHashJoin\n      :     :                 :- Project\n      :     :                 :  +- BroadcastHashJoin\n      :     :                 :     :- Project\n      :     :                 :     :  +- BroadcastHashJoin\n      :     :                 :     :     :- Project\n      :     :                 :     :     :  +- BroadcastHashJoin\n      :     :                 :     :     :     :- Filter\n      :     :                 :     :     :     :  +- ColumnarToRow\n      :     :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                 :     :     :     :           +- SubqueryBroadcast\n      :     :                 :     :     :     :              +- BroadcastExchange\n      :     :                 :     :     :     :                 +- CometNativeColumnarToRow\n      :     :                 :     :     :     :                    +- CometProject\n      :     :                 :     :     :     :                       +- CometFilter\n      :     :                 :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     :     +- BroadcastExchange\n      :     :                 :     :     :        +- CometNativeColumnarToRow\n      :     :                 :     :     :           +- CometProject\n      :     :                 :     :     :              +- CometFilter\n      :     :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     +- BroadcastExchange\n      :     :                 :     :        +- CometNativeColumnarToRow\n      :     :                 :     :           +- CometProject\n      :     :                 :     :              +- CometFilter\n      :     :                 :     :                 +- CometNativeScan parquet spark_catalog.default.store\n      :     :                 :     +- BroadcastExchange\n      :     :                 :        +- CometNativeColumnarToRow\n      :     :                 :           +- CometProject\n      :     :                 :              +- CometFilter\n      :     :                 :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n      :     :                 +- BroadcastExchange\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometFilter\n      :     :                          +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometFilter\n               +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 20 out of 45 eligible operators (44%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q46.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometHashAggregate\n         :     :  +- CometExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometProject\n         :     :           +- CometBroadcastHashJoin\n         :     :              :- CometProject\n         :     :              :  +- CometBroadcastHashJoin\n         :     :              :     :- CometProject\n         :     :              :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :- CometProject\n         :     :              :     :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :     :- CometFilter\n         :     :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :              :     :     :     :        +- SubqueryBroadcast\n         :     :              :     :     :     :           +- BroadcastExchange\n         :     :              :     :     :     :              +- CometNativeColumnarToRow\n         :     :              :     :     :     :                 +- CometProject\n         :     :              :     :     :     :                    +- CometFilter\n         :     :              :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     :     +- CometBroadcastExchange\n         :     :              :     :     :        +- CometProject\n         :     :              :     :     :           +- CometFilter\n         :     :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     +- CometBroadcastExchange\n         :     :              :     :        +- CometProject\n         :     :              :     :           +- CometFilter\n         :     :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     :              :     +- CometBroadcastExchange\n         :     :              :        +- CometProject\n         :     :              :           +- CometFilter\n         :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         :     :              +- CometBroadcastExchange\n         :     :                 +- CometFilter\n         :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 43 out of 45 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q47.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                          
        +- CometNativeScan parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q47.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q48.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Project\n               :     :     :  +- BroadcastHashJoin\n               :     :     :     :- Filter\n               :     :     :     :  +- ColumnarToRow\n               :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :     :           +- SubqueryBroadcast\n               :     :     :     :              +- BroadcastExchange\n               :     :     :     :                 +- CometNativeColumnarToRow\n               :     :     :     :                    +- CometProject\n               :     :     :     :                       +- CometFilter\n               :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     :     +- BroadcastExchange\n               :     :     :        +- CometNativeColumnarToRow\n               :     :     :           +- CometFilter\n               :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n               :     +- BroadcastExchange\n               :        +- CometNativeColumnarToRow\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 15 out of 33 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q48.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometFilter\n               :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     :     :        +- SubqueryBroadcast\n               :     :     :     :           +- BroadcastExchange\n               :     :     :     :              +- CometNativeColumnarToRow\n               :     :     :     :                 +- CometProject\n               :     :     :     :                    +- CometFilter\n               :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometFilter\n               :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 31 out of 33 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q49.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- SubqueryBroadcast\n               :                                         :     :                    +- BroadcastExchange\n               :                                         :     :                       +- CometNativeColumnarToRow\n               :                                         :     :                          +- CometProject\n               :                                         :     :                             +- CometFilter\n               :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n      
         :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- ReusedSubquery\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometColumnarExchange\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- BroadcastExchange\n                                                         :     :  +- Project\n                                                         :     :     +- Filter\n                                                         :     :        +- ColumnarToRow\n                                                         :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :                 +- ReusedSubquery\n                                                         :     +- CometNativeColumnarToRow\n                                                         :        +- CometProject\n                                                         :           +- CometFilter\n                                                         :              +- CometNativeScan parquet spark_catalog.default.store_returns\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 33 out of 87 eligible operators (37%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q49.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :              +- SubqueryBroadcast\n               :                                      :     :                 +- BroadcastExchange\n               :                                      :     :                    +- CometNativeColumnarToRow\n               :                                      :     :                       +- CometProject\n               :                                      :     :                          +- CometFilter\n               :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :              +- ReusedSubquery\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometExchange\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometBroadcastExchange\n                                                      :     :  +- CometProject\n                                                      :     :     +- CometFilter\n                                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :              +- ReusedSubquery\n                                                      :     +- CometProject\n                                                      :        +- CometFilter\n                                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 66 out of 87 eligible operators (75%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q5.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- Project\n                  :              +- BroadcastHashJoin\n                  :                 :- Project\n                  :                 :  +- BroadcastHashJoin\n                  :                 :     :- Union\n                  :                 :     :  :- Project\n                  :                 :     :  :  +- Filter\n                  :                 :     :  :     +- ColumnarToRow\n                  :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                 :     :  :              +- SubqueryBroadcast\n                  :                 :     :  :                 +- BroadcastExchange\n                  :                 :     :  :                    +- CometNativeColumnarToRow\n                  :                 :     :  :                       +- CometProject\n                  :                 :     :  :                          +- CometFilter\n                  :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 :     :  +- Project\n                  :                 :     :     +- Filter\n                  :                 :     :        +- ColumnarToRow\n                  :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                 :     :                 +- ReusedSubquery\n                  :                 :     +- BroadcastExchange\n                  :                 :        +- CometNativeColumnarToRow\n                  :                 :           +- CometProject\n                  :                 :              +- CometFilter\n                  :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 +- BroadcastExchange\n                  :                    +- CometNativeColumnarToRow\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometNativeScan parquet spark_catalog.default.store\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- Project\n                  :              +- BroadcastHashJoin\n                  :                 :- Project\n                  :                 :  +- BroadcastHashJoin\n                  :                 :     :- Union\n                  :                 :     :  :- Project\n                  :                 :     :  :  +- Filter\n                  :                 :     :  :     +- ColumnarToRow\n                  :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic 
pruning]\n                  :                 :     :  :              +- ReusedSubquery\n                  :                 :     :  +- Project\n                  :                 :     :     +- Filter\n                  :                 :     :        +- ColumnarToRow\n                  :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                 :     :                 +- ReusedSubquery\n                  :                 :     +- BroadcastExchange\n                  :                 :        +- CometNativeColumnarToRow\n                  :                 :           +- CometProject\n                  :                 :              +- CometFilter\n                  :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 +- BroadcastExchange\n                  :                    +- CometNativeColumnarToRow\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Union\n                                    :     :  :- Project\n                                    :     :  :  +- Filter\n                                    :     :  :     +- ColumnarToRow\n                                    :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :  :              +- ReusedSubquery\n                                    :     :  +- Project\n                                    :     :     +- BroadcastHashJoin\n                                    :     :        :- BroadcastExchange\n                                    :     :        :  +- ColumnarToRow\n                                    :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :        :           +- ReusedSubquery\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometProject\n                                    :     :              +- CometFilter\n                                    :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n 
                                         +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 28 out of 86 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q5.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometUnion\n                  :              :     :  :- CometProject\n                  :              :     :  :  +- CometFilter\n                  :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :              :     :  :           +- SubqueryBroadcast\n                  :              :     :  :              +- BroadcastExchange\n                  :              :     :  :                 +- CometNativeColumnarToRow\n                  :              :     :  :                    +- CometProject\n                  :              :     :  :                       +- CometFilter\n                  :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :  +- CometProject\n                  :              :     :     +- CometFilter\n                  :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :              :     :              +- ReusedSubquery\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometUnion\n                  :              :     :  :- CometProject\n                  :              :     :  :  +- CometFilter\n                  :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :              :     :  :           +- ReusedSubquery\n                  :              :     :  +- CometProject\n                  :              :     :     +- CometFilter\n                  :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :              :     :              +- ReusedSubquery\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- 
CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometUnion\n                                 :     :  :- CometProject\n                                 :     :  :  +- CometFilter\n                                 :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :  :           +- ReusedSubquery\n                                 :     :  +- CometProject\n                                 :     :     +- CometBroadcastHashJoin\n                                 :     :        :- CometBroadcastExchange\n                                 :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                 :     :        :        +- ReusedSubquery\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 79 out of 86 eligible operators (91%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q50.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- CometNativeColumnarToRow\n                  :     :     :     :  +- CometFilter\n                  :     :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- Filter\n                  :     :     :           +- ColumnarToRow\n                  :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :                    +- SubqueryBroadcast\n                  :     :     :                       +- BroadcastExchange\n                  :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :                             +- CometProject\n                  :     :     :                                +- CometFilter\n                  :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.store\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q50.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :                 +- SubqueryBroadcast\n                  :     :     :                    +- BroadcastExchange\n                  :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :                          +- CometProject\n                  :     :     :                             +- CometFilter\n                  :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 31 out of 33 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q51.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometProject\n                  +- CometSortMergeJoin\n                     :- CometSort\n                     :  +- CometColumnarExchange\n                     :     +- Project\n                     :        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                     :           +- CometNativeColumnarToRow\n                     :              +- CometSort\n                     :                 +- CometColumnarExchange\n                     :                    +- HashAggregate\n                     :                       +- CometNativeColumnarToRow\n                     :                          +- CometColumnarExchange\n                     :                             +- HashAggregate\n                     :                                +- Project\n                     :                                   +- BroadcastHashJoin\n                     :                                      :- Filter\n                     :                                      :  +- ColumnarToRow\n                     :                                      :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :                                      :           +- SubqueryBroadcast\n                     :                                      :              +- BroadcastExchange\n                     :                                      :                 +- CometNativeColumnarToRow\n                     :                                      :                    +- CometProject\n                     :                                      :                       +- CometFilter\n                     :                                      :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :                                      +- BroadcastExchange\n                     :                                         +- CometNativeColumnarToRow\n                     :                                            +- CometProject\n                     :                                               +- CometFilter\n                     :                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- CometSort\n                        +- CometColumnarExchange\n                           +- Project\n                              +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                 +- CometNativeColumnarToRow\n                                    +- CometSort\n                                       +- CometColumnarExchange\n                                          +- HashAggregate\n                                             +- CometNativeColumnarToRow\n                                                +- CometColumnarExchange\n                                                   +- HashAggregate\n                                                      +- Project\n                                                         +- BroadcastHashJoin\n                                                            :- Filter\n                                                            :  +- ColumnarToRow\n                                                            :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                            :           +- ReusedSubquery\n                                                            +- BroadcastExchange\n                                                               +- CometNativeColumnarToRow\n                                                                  +- CometProject\n                                                                     +- CometFilter\n                                                                        +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 23 out of 47 eligible operators (48%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q51.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometProject\n                  +- CometSortMergeJoin\n                     :- CometSort\n                     :  +- CometColumnarExchange\n                     :     +- Project\n                     :        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                     :           +- CometNativeColumnarToRow\n                     :              +- CometSort\n                     :                 +- CometExchange\n                     :                    +- CometHashAggregate\n                     :                       +- CometExchange\n                     :                          +- CometHashAggregate\n                     :                             +- CometProject\n                     :                                +- CometBroadcastHashJoin\n                     :                                   :- CometFilter\n                     :                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :                                   :        +- SubqueryBroadcast\n                     :                                   :           +- BroadcastExchange\n                     :                                   :              +- CometNativeColumnarToRow\n                     :                                   :                 +- CometProject\n                     :                                   :                    +- CometFilter\n                     :                                   :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :                                   +- CometBroadcastExchange\n                     :                                      +- CometProject\n                     :                                         +- CometFilter\n                     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometSort\n                        +- CometColumnarExchange\n                           +- Project\n                              +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                 +- CometNativeColumnarToRow\n                                    +- CometSort\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometExchange\n                                                +- CometHashAggregate\n                                                   +- CometProject\n                                                      +- CometBroadcastHashJoin\n                                                         :- CometFilter\n                                                         :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                         :        +- ReusedSubquery\n                                                         +- CometBroadcastExchange\n                                                            +- CometProject\n                                                               +- CometFilter\n                                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 37 out of 47 eligible operators (78%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q52.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q52.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q53.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- CometNativeColumnarToRow\n                                    :     :     :  +- CometProject\n                                    :     :     :     +- CometFilter\n                                    :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- Filter\n                                    :     :           +- ColumnarToRow\n                                    :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :                    +- SubqueryBroadcast\n                                    :     :                       +- BroadcastExchange\n                                    :     :                          +- CometNativeColumnarToRow\n                                    :     :                             +- CometProject\n                                    :     :                                +- CometFilter\n                                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q53.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometFilter\n                                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :                 +- SubqueryBroadcast\n                                 :     :                    +- BroadcastExchange\n                                 :     :                       +- CometNativeColumnarToRow\n                                 :     :                          +- CometProject\n                                 :     :                             +- CometFilter\n                                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 27 out of 33 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q54.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +- BroadcastHashJoin\n                              :- Project\n                              :  +- BroadcastHashJoin\n                              :     :- Project\n                              :     :  +- BroadcastHashJoin\n                              :     :     :- Project\n                              :     :     :  +- BroadcastHashJoin\n                              :     :     :     :- CometNativeColumnarToRow\n                              :     :     :     :  +- CometHashAggregate\n                              :     :     :     :     +- CometColumnarExchange\n                              :     :     :     :        +- HashAggregate\n                              :     :     :     :           +- Project\n                              :     :     :     :              +- BroadcastHashJoin\n                              :     :     :     :                 :- Project\n                              :     :     :     :                 :  +- BroadcastHashJoin\n                              :     :     :     :                 :     :- Project\n                              :     :     :     :                 :     :  +- BroadcastHashJoin\n                              :     :     :     :                 :     :     :- Union\n                              :     :     :     :                 :     :     :  :- Project\n                              :     :     :     :                 :     :     :  :  +- Filter\n                              :     :     :     :                 :     :     :  :     +- ColumnarToRow\n                              :     :     :     :                 :     :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :                 :     :     :  :              +- SubqueryBroadcast\n                              :     :     :     :                 :     :     :  :                 +- BroadcastExchange\n                              :     :     :     :                 :     :     :  :                    +- CometNativeColumnarToRow\n                              :     :     :     :                 :     :     :  :                       +- CometProject\n                              :     :     :     :                 :     :     :  :                          +- CometFilter\n                              :     :     :     :                 :     :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :                 :     :     :  +- Project\n                              :     :     :     :                 :     :     :     +- Filter\n                              :     :     :     :                 :     :     :        +- ColumnarToRow\n                              :     :     :     :                 :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :                 :     :     :                 +- ReusedSubquery\n              
                :     :     :     :                 :     :     +- BroadcastExchange\n                              :     :     :     :                 :     :        +- CometNativeColumnarToRow\n                              :     :     :     :                 :     :           +- CometProject\n                              :     :     :     :                 :     :              +- CometFilter\n                              :     :     :     :                 :     :                 +- CometNativeScan parquet spark_catalog.default.item\n                              :     :     :     :                 :     +- BroadcastExchange\n                              :     :     :     :                 :        +- CometNativeColumnarToRow\n                              :     :     :     :                 :           +- CometProject\n                              :     :     :     :                 :              +- CometFilter\n                              :     :     :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :                 +- BroadcastExchange\n                              :     :     :     :                    +- CometNativeColumnarToRow\n                              :     :     :     :                       +- CometFilter\n                              :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.customer\n                              :     :     :     +- BroadcastExchange\n                              :     :     :        +- Filter\n                              :     :     :           +- ColumnarToRow\n                              :     :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :                    +- SubqueryBroadcast\n                              :     :     :                       +- BroadcastExchange\n                              :     :     :                          +- CometNativeColumnarToRow\n                              :     :     :                             +- CometProject\n                              :     :     :                                +- CometFilter\n                              :     :     :                                   :  :- Subquery\n                              :     :     :                                   :  :  +- CometNativeColumnarToRow\n                              :     :     :                                   :  :     +- CometHashAggregate\n                              :     :     :                                   :  :        +- CometExchange\n                              :     :     :                                   :  :           +- CometHashAggregate\n                              :     :     :                                   :  :              +- CometProject\n                              :     :     :                                   :  :                 +- CometFilter\n                              :     :     :                                   :  :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :                                   :  +- Subquery\n                              :     :     :                                   :     +- CometNativeColumnarToRow\n                              :     :     :                                   :        +- 
CometHashAggregate\n                              :     :     :                                   :           +- CometExchange\n                              :     :     :                                   :              +- CometHashAggregate\n                              :     :     :                                   :                 +- CometProject\n                              :     :     :                                   :                    +- CometFilter\n                              :     :     :                                   :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     +- BroadcastExchange\n                              :     :        +- CometNativeColumnarToRow\n                              :     :           +- CometProject\n                              :     :              +- CometFilter\n                              :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     +- BroadcastExchange\n                              :        +- CometNativeColumnarToRow\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.store\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          :  :- Subquery\n                                          :  :  +- CometNativeColumnarToRow\n                                          :  :     +- CometHashAggregate\n                                          :  :        +- CometExchange\n                                          :  :           +- CometHashAggregate\n                                          :  :              +- CometProject\n                                          :  :                 +- CometFilter\n                                          :  :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  +- Subquery\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometHashAggregate\n                                          :           +- CometExchange\n                                          :              +- CometHashAggregate\n                                          :                 +- CometProject\n                                          :                    +- CometFilter\n                                          :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 51 out of 96 eligible operators (53%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q54.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometBroadcastHashJoin\n                           :     :     :- CometProject\n                           :     :     :  +- CometBroadcastHashJoin\n                           :     :     :     :- CometHashAggregate\n                           :     :     :     :  +- CometExchange\n                           :     :     :     :     +- CometHashAggregate\n                           :     :     :     :        +- CometProject\n                           :     :     :     :           +- CometBroadcastHashJoin\n                           :     :     :     :              :- CometProject\n                           :     :     :     :              :  +- CometBroadcastHashJoin\n                           :     :     :     :              :     :- CometProject\n                           :     :     :     :              :     :  +- CometBroadcastHashJoin\n                           :     :     :     :              :     :     :- CometUnion\n                           :     :     :     :              :     :     :  :- CometProject\n                           :     :     :     :              :     :     :  :  +- CometFilter\n                           :     :     :     :              :     :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :     :     :     :              :     :     :  :           +- SubqueryBroadcast\n                           :     :     :     :              :     :     :  :              +- BroadcastExchange\n                           :     :     :     :              :     :     :  :                 +- CometNativeColumnarToRow\n                           :     :     :     :              :     :     :  :                    +- CometProject\n                           :     :     :     :              :     :     :  :                       +- CometFilter\n                           :     :     :     :              :     :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :     :              :     :     :  +- CometProject\n                           :     :     :     :              :     :     :     +- CometFilter\n                           :     :     :     :              :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :     :     :     :              :     :     :              +- ReusedSubquery\n                           :     :     :     :              :     :     +- CometBroadcastExchange\n                           :     :     :     :              :     :        +- CometProject\n                           :     :     :     :              :     :           +- CometFilter\n                           :     :     :     :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :     :     :              :     +- 
CometBroadcastExchange\n                           :     :     :     :              :        +- CometProject\n                           :     :     :     :              :           +- CometFilter\n                           :     :     :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :     :              +- CometBroadcastExchange\n                           :     :     :     :                 +- CometFilter\n                           :     :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     :     :     +- CometBroadcastExchange\n                           :     :     :        +- CometFilter\n                           :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :     :                 +- SubqueryBroadcast\n                           :     :     :                    +- BroadcastExchange\n                           :     :     :                       +- CometNativeColumnarToRow\n                           :     :     :                          +- CometProject\n                           :     :     :                             +- CometFilter\n                           :     :     :                                :  :- Subquery\n                           :     :     :                                :  :  +- CometNativeColumnarToRow\n                           :     :     :                                :  :     +- CometHashAggregate\n                           :     :     :                                :  :        +- CometExchange\n                           :     :     :                                :  :           +- CometHashAggregate\n                           :     :     :                                :  :              +- CometProject\n                           :     :     :                                :  :                 +- CometFilter\n                           :     :     :                                :  :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :                                :  +- Subquery\n                           :     :     :                                :     +- CometNativeColumnarToRow\n                           :     :     :                                :        +- CometHashAggregate\n                           :     :     :                                :           +- CometExchange\n                           :     :     :                                :              +- CometHashAggregate\n                           :     :     :                                :                 +- CometProject\n                           :     :     :                                :                    +- CometFilter\n                           :     :     :                                :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     +- CometBroadcastExchange\n                           :     :        +- CometProject\n                           :     :           +- CometFilter\n                           :     :              +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.customer_address\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    :  :- ReusedSubquery\n                                    :  +- ReusedSubquery\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 75 out of 84 eligible operators (89%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q55.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q55.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q56.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- SubqueryBroadcast\n               :                 :     :     :              +- BroadcastExchange\n               :                 :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :                    +- CometProject\n               :                 :     :     :                       +- CometFilter\n               :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :        
         :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- ReusedSubquery\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :           +- ReusedSubquery\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n             
                          +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometFilter\n                                             :  +- CometNativeScan parquet spark_catalog.default.item\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 49 out of 96 eligible operators (51%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q56.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :     :     :        +- SubqueryBroadcast\n               :              :     :     :           +- BroadcastExchange\n               :              :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :                 +- CometProject\n               :              :     :     :                    +- CometFilter\n               :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :     :        +- ReusedSubquery\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              
+- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :     :        +- ReusedSubquery\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 92 out of 96 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q57.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                        
          +- CometNativeScan parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). 
To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.call_center\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q57.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q58.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Filter\n      :     :  +- HashAggregate\n      :     :     +- CometNativeColumnarToRow\n      :     :        +- CometColumnarExchange\n      :     :           +- HashAggregate\n      :     :              +- Project\n      :     :                 +- BroadcastHashJoin\n      :     :                    :- Project\n      :     :                    :  +- BroadcastHashJoin\n      :     :                    :     :- Filter\n      :     :                    :     :  +- ColumnarToRow\n      :     :                    :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                    :     :           +- SubqueryBroadcast\n      :     :                    :     :              +- BroadcastExchange\n      :     :                    :     :                 +- CometNativeColumnarToRow\n      :     :                    :     :                    +- CometProject\n      :     :                    :     :                       +- CometBroadcastHashJoin\n      :     :                    :     :                          :- CometFilter\n      :     :                    :     :                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                    :     :                          +- CometBroadcastExchange\n      :     :                    :     :                             +- CometProject\n      :     :                    :     :                                +- CometFilter\n      :     :                    :     :                                   :  +- Subquery\n      :     :                    :     :                                   :     +- CometNativeColumnarToRow\n      :     :                    :     :                                   :        +- CometProject\n      :     :                    :     :                                   :           +- CometFilter\n      :     :                    :     :                                   :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                    :     +- BroadcastExchange\n      :     :                    :        +- CometNativeColumnarToRow\n      :     :                    :           +- CometProject\n      :     :                    :              +- CometFilter\n      :     :                    :                 +- CometNativeScan parquet spark_catalog.default.item\n      :     :                    +- BroadcastExchange\n      :     :                       +- CometNativeColumnarToRow\n      :     :                          +- CometProject\n      :     :                             +- CometBroadcastHashJoin\n      :     :                                :- CometFilter\n      :     :                                :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                +- CometBroadcastExchange\n      :     :                                   +- CometProject\n      :     :                                      +- CometFilter\n      :     :                                         :  +- Subquery\n      :     :                                         :     +- CometNativeColumnarToRow\n      :     :                                         
:        +- CometProject\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- Filter\n      :                             :     :  +- ColumnarToRow\n      :                             :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :     :           +- ReusedSubquery\n      :                             :     +- BroadcastExchange\n      :                             :        +- CometNativeColumnarToRow\n      :                             :           +- CometProject\n      :                             :              +- CometFilter\n      :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometProject\n      :                                      +- CometBroadcastHashJoin\n      :                                         :- CometFilter\n      :                                         :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- CometBroadcastExchange\n      :                                            +- CometProject\n      :                                               +- CometFilter\n      :                                                  :  +- Subquery\n      :                                                  :     +- CometNativeColumnarToRow\n      :                                                  :        +- CometProject\n      :                                                  :           +- CometFilter\n      :                                                  :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- Filter\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +- BroadcastHashJoin\n                              :- Project\n                              :  +- BroadcastHashJoin\n                              :     :- Filter\n                              :     :  +- ColumnarToRow\n                              :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :           +- ReusedSubquery\n                              :     +- BroadcastExchange\n     
                         :        +- CometNativeColumnarToRow\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.item\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometBroadcastHashJoin\n                                          :- CometFilter\n                                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- CometBroadcastExchange\n                                             +- CometProject\n                                                +- CometFilter\n                                                   :  +- Subquery\n                                                   :     +- CometNativeColumnarToRow\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 56 out of 104 eligible operators (53%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q58.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometFilter\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometBroadcastHashJoin\n         :     :                 :     :- CometFilter\n         :     :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                 :     :        +- SubqueryBroadcast\n         :     :                 :     :           +- BroadcastExchange\n         :     :                 :     :              +- CometNativeColumnarToRow\n         :     :                 :     :                 +- CometProject\n         :     :                 :     :                    +- CometBroadcastHashJoin\n         :     :                 :     :                       :- CometFilter\n         :     :                 :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :     :                       +- CometBroadcastExchange\n         :     :                 :     :                          +- CometProject\n         :     :                 :     :                             +- CometFilter\n         :     :                 :     :                                :  +- Subquery\n         :     :                 :     :                                :     +- CometNativeColumnarToRow\n         :     :                 :     :                                :        +- CometProject\n         :     :                 :     :                                :           +- CometFilter\n         :     :                 :     :                                :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :     +- CometBroadcastExchange\n         :     :                 :        +- CometProject\n         :     :                 :           +- CometFilter\n         :     :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometProject\n         :     :                       +- CometBroadcastHashJoin\n         :     :                          :- CometFilter\n         :     :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                          +- CometBroadcastExchange\n         :     :                             +- CometProject\n         :     :                                +- CometFilter\n         :     :                                   :  +- Subquery\n         :     :                                   :     +- CometNativeColumnarToRow\n         :     :                                   :        +- CometProject\n         :     :                                   :           +- CometFilter\n         :     :                                   :              +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometFilter\n         :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :                          :     :        +- ReusedSubquery\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometProject\n         :                          :           +- CometFilter\n         :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                          +- CometBroadcastExchange\n         :                             +- CometProject\n         :                                +- CometBroadcastHashJoin\n         :                                   :- CometFilter\n         :                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                   +- CometBroadcastExchange\n         :                                      +- CometProject\n         :                                         +- CometFilter\n         :                                            :  +- Subquery\n         :                                            :     +- CometNativeColumnarToRow\n         :                                            :        +- CometProject\n         :                                            :           +- CometFilter\n         :                                            :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- ReusedSubquery\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                            
           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                :  +- Subquery\n                                                :     +- CometNativeColumnarToRow\n                                                :        +- CometProject\n                                                :           +- CometFilter\n                                                :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 96 out of 104 eligible operators (92%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q59.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometHashAggregate\n         :     :     :  +- CometExchange\n         :     :     :     +- CometHashAggregate\n         :     :     :        +- CometProject\n         :     :     :           +- CometBroadcastHashJoin\n         :     :     :              :- CometFilter\n         :     :     :              :  +- CometNativeScan parquet spark_catalog.default.store_sales\n         :     :     :              +- CometBroadcastExchange\n         :     :     :                 +- CometProject\n         :     :     :                    +- CometFilter\n         :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometProject\n         :     :           +- CometFilter\n         :     :              +- CometNativeScan parquet spark_catalog.default.store\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometHashAggregate\n                  :     :  +- CometExchange\n                  :     :     +- CometHashAggregate\n                  :     :        +- CometProject\n                  :     :           +- CometBroadcastHashJoin\n                  :     :              :- CometFilter\n                  :     :              :  +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     :              +- CometBroadcastExchange\n                  :     :                 +- CometProject\n                  :     :                    +- CometFilter\n                  :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 50 out of 50 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q59.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometHashAggregate\n         :     :     :  +- CometExchange\n         :     :     :     +- CometHashAggregate\n         :     :     :        +- CometProject\n         :     :     :           +- CometBroadcastHashJoin\n         :     :     :              :- CometFilter\n         :     :     :              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :              +- CometBroadcastExchange\n         :     :     :                 +- CometProject\n         :     :     :                    +- CometFilter\n         :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometProject\n         :     :           +- CometFilter\n         :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometHashAggregate\n                  :     :  +- CometExchange\n                  :     :     +- CometHashAggregate\n                  :     :        +- CometProject\n                  :     :           +- CometBroadcastHashJoin\n                  :     :              :- CometFilter\n                  :     :              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :              +- CometBroadcastExchange\n                  :     :                 +- CometProject\n                  :     :                    +- CometFilter\n                  :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 50 out of 50 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q6.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- CometNativeColumnarToRow\n                     :     :     :  +- CometProject\n                     :     :     :     +- CometBroadcastHashJoin\n                     :     :     :        :- CometProject\n                     :     :     :        :  +- CometFilter\n                     :     :     :        :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     :     :        +- CometBroadcastExchange\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     +- BroadcastExchange\n                     :     :        +- Filter\n                     :     :           +- ColumnarToRow\n                     :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :                    +- SubqueryBroadcast\n                     :     :                       +- BroadcastExchange\n                     :     :                          +- CometNativeColumnarToRow\n                     :     :                             +- CometProject\n                     :     :                                +- CometFilter\n                     :     :                                   :  +- Subquery\n                     :     :                                   :     +- CometNativeColumnarToRow\n                     :     :                                   :        +- CometHashAggregate\n                     :     :                                   :           +- CometExchange\n                     :     :                                   :              +- CometHashAggregate\n                     :     :                                   :                 +- CometProject\n                     :     :                                   :                    +- CometFilter\n                     :     :                                   :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 :  +- Subquery\n                     :                 :     +- CometNativeColumnarToRow\n                     :                 :        +- CometHashAggregate\n                     :                 :           +- CometExchange\n                     :                 :              +- CometHashAggregate\n                     :                 :                 +- CometProject\n                     :                 :                    +- CometFilter\n                     :                 :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :       
          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometFilter\n                                 :  +- CometNativeScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 39 out of 58 eligible operators (67%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q6.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometFilter\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometFilter\n                     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometFilter\n                     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :                 +- SubqueryBroadcast\n                     :     :                    +- BroadcastExchange\n                     :     :                       +- CometNativeColumnarToRow\n                     :     :                          +- CometProject\n                     :     :                             +- CometFilter\n                     :     :                                :  +- Subquery\n                     :     :                                :     +- CometNativeColumnarToRow\n                     :     :                                :        +- CometHashAggregate\n                     :     :                                :           +- CometExchange\n                     :     :                                :              +- CometHashAggregate\n                     :     :                                :                 +- CometProject\n                     :     :                                :                    +- CometFilter\n                     :     :                                :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              :  +- ReusedSubquery\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometFilter\n                              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n    
                                            +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 48 out of 52 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q60.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- SubqueryBroadcast\n               :                 :     :     :              +- BroadcastExchange\n               :                 :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :                    +- CometProject\n               :                 :     :     :                       +- CometFilter\n               :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :        
         :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- ReusedSubquery\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :           +- ReusedSubquery\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n             
                          +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometFilter\n                                             :  +- CometNativeScan parquet spark_catalog.default.item\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 49 out of 96 eligible operators (51%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q60.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :     :     :        +- SubqueryBroadcast\n               :              :     :     :           +- BroadcastExchange\n               :              :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :                 +- CometProject\n               :              :     :     :                    +- CometFilter\n               :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :     :        +- ReusedSubquery\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              
+- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :     :        +- ReusedSubquery\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 92 out of 96 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q61.native_datafusion/extended.txt",
    "content": "Project\n+- BroadcastNestedLoopJoin\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- SubqueryBroadcast\n   :                 :     :     :     :     :     :              +- BroadcastExchange\n   :                 :     :     :     :     :     :                 +- CometNativeColumnarToRow\n   :                 :     :     :     :     :     :                    +- CometProject\n   :                 :     :     :     :     :     :                       +- CometFilter\n   :                 :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.promotion\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometProject\n   :                 :     :     :              +- CometFilter\n   :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometFilter\n   :                 :     :              +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :        
               +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   +- BroadcastExchange\n      +- HashAggregate\n         +- CometNativeColumnarToRow\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- Project\n                        :  +- BroadcastHashJoin\n                        :     :- Project\n                        :     :  +- BroadcastHashJoin\n                        :     :     :- Project\n                        :     :     :  +- BroadcastHashJoin\n                        :     :     :     :- Project\n                        :     :     :     :  +- BroadcastHashJoin\n                        :     :     :     :     :- Filter\n                        :     :     :     :     :  +- ColumnarToRow\n                        :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :     :     :     :     :           +- ReusedSubquery\n                        :     :     :     :     +- BroadcastExchange\n                        :     :     :     :        +- CometNativeColumnarToRow\n                        :     :     :     :           +- CometProject\n                        :     :     :     :              +- CometFilter\n                        :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store\n                        :     :     :     +- BroadcastExchange\n                        :     :     :        +- CometNativeColumnarToRow\n                        :     :     :           +- CometProject\n                        :     :     :              +- CometFilter\n                        :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     :     +- BroadcastExchange\n                        :     :        +- CometNativeColumnarToRow\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                        :     +- BroadcastExchange\n                        :        +- CometNativeColumnarToRow\n                        :           +- CometProject\n                        :              +- CometFilter\n                        :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- BroadcastExchange\n                           +- CometNativeColumnarToRow\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 36 out of 83 eligible operators (43%). Final plan contains 16 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q61.native_iceberg_compat/extended.txt",
    "content": "Project\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n   :- CometNativeColumnarToRow\n   :  +- CometHashAggregate\n   :     +- CometExchange\n   :        +- CometHashAggregate\n   :           +- CometProject\n   :              +- CometBroadcastHashJoin\n   :                 :- CometProject\n   :                 :  +- CometBroadcastHashJoin\n   :                 :     :- CometProject\n   :                 :     :  +- CometBroadcastHashJoin\n   :                 :     :     :- CometProject\n   :                 :     :     :  +- CometBroadcastHashJoin\n   :                 :     :     :     :- CometProject\n   :                 :     :     :     :  +- CometBroadcastHashJoin\n   :                 :     :     :     :     :- CometProject\n   :                 :     :     :     :     :  +- CometBroadcastHashJoin\n   :                 :     :     :     :     :     :- CometFilter\n   :                 :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n   :                 :     :     :     :     :     :        +- SubqueryBroadcast\n   :                 :     :     :     :     :     :           +- BroadcastExchange\n   :                 :     :     :     :     :     :              +- CometNativeColumnarToRow\n   :                 :     :     :     :     :     :                 +- CometProject\n   :                 :     :     :     :     :     :                    +- CometFilter\n   :                 :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n   :                 :     :     :     :     :     +- CometBroadcastExchange\n   :                 :     :     :     :     :        +- CometProject\n   :                 :     :     :     :     :           +- CometFilter\n   :                 :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n   :                 :     :     :     :     +- CometBroadcastExchange\n   :                 :     :     :     :        +- CometProject\n   :                 :     :     :     :           +- CometFilter\n   :                 :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n   :                 :     :     :     +- CometBroadcastExchange\n   :                 :     :     :        +- CometProject\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n   :                 :     :     +- CometBroadcastExchange\n   :                 :     :        +- CometFilter\n   :                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n   :                 :     +- CometBroadcastExchange\n   :                 :        +- CometProject\n   :                 :           +- CometFilter\n   :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n   :                 +- CometBroadcastExchange\n   :                    +- CometProject\n   :                       +- CometFilter\n   :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n   +- BroadcastExchange\n      +- CometNativeColumnarToRow\n         +- CometHashAggregate\n            +- CometExchange\n               +- 
CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometBroadcastHashJoin\n                        :     :     :     :- CometProject\n                        :     :     :     :  +- CometBroadcastHashJoin\n                        :     :     :     :     :- CometFilter\n                        :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                        :     :     :     :     :        +- ReusedSubquery\n                        :     :     :     :     +- CometBroadcastExchange\n                        :     :     :     :        +- CometProject\n                        :     :     :     :           +- CometFilter\n                        :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                        :     :     :     +- CometBroadcastExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometFilter\n                        :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometFilter\n                        :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 77 out of 83 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q62.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometNativeScan parquet spark_catalog.default.web_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometNativeScan parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.web_site\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q62.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q63.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- CometNativeColumnarToRow\n                                    :     :     :  +- CometProject\n                                    :     :     :     +- CometFilter\n                                    :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- Filter\n                                    :     :           +- ColumnarToRow\n                                    :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :                    +- SubqueryBroadcast\n                                    :     :                       +- BroadcastExchange\n                                    :     :                          +- CometNativeColumnarToRow\n                                    :     :                             +- CometProject\n                                    :     :                                +- CometFilter\n                                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q63.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometFilter\n                                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :                 +- SubqueryBroadcast\n                                 :     :                    +- BroadcastExchange\n                                 :     :                       +- CometNativeColumnarToRow\n                                 :     :                          +- CometProject\n                                 :     :                             +- CometFilter\n                                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 27 out of 33 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q64.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n            :                 :     :   
  :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     
                  +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     
:     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                
 :        +- CometFilter\n            :                 :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometNativeScan parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     
:     :     :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                           
   :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :   
  :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n             
                 :     :        +- CometFilter\n                              :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 228 out of 242 eligible operators (94%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q64.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n            :                 :     :     :     
:     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             
:- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometFilter\n            :       
          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :   
  :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     : 
    :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     : 
    :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometFilter\n                              :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 238 out of 242 eligible operators (98%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q65.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- CometNativeColumnarToRow\n      :     :     :  +- CometFilter\n      :     :     :     +- CometNativeScan parquet spark_catalog.default.store\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- Project\n      :     :                          +- BroadcastHashJoin\n      :     :                             :- Filter\n      :     :                             :  +- ColumnarToRow\n      :     :                             :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                             :           +- SubqueryBroadcast\n      :     :                             :              +- BroadcastExchange\n      :     :                             :                 +- CometNativeColumnarToRow\n      :     :                             :                    +- CometProject\n      :     :                             :                       +- CometFilter\n      :     :                             :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                             +- BroadcastExchange\n      :     :                                +- CometNativeColumnarToRow\n      :     :                                   +- CometProject\n      :     :                                      +- CometFilter\n      :     :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.item\n      +- BroadcastExchange\n         +- Filter\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Filter\n                                          :  +- ColumnarToRow\n                                          :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :           +- ReusedSubquery\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 17 out of 48 eligible operators (35%). 
Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q65.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometProject\n         :     :                       +- CometBroadcastHashJoin\n         :     :                          :- CometFilter\n         :     :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                          :        +- SubqueryBroadcast\n         :     :                          :           +- BroadcastExchange\n         :     :                          :              +- CometNativeColumnarToRow\n         :     :                          :                 +- CometProject\n         :     :                          :                    +- CometFilter\n         :     :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                          +- CometBroadcastExchange\n         :     :                             +- CometProject\n         :     :                                +- CometFilter\n         :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :        +- ReusedSubquery\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 45 out of 48 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q66.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Project\n               :                 :     :     :  +- BroadcastHashJoin\n               :                 :     :     :     :- Filter\n               :                 :     :     :     :  +- ColumnarToRow\n               :                 :     :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :     :           +- SubqueryBroadcast\n               :                 :     :     :     :              +- BroadcastExchange\n               :                 :     :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :     :                    +- CometFilter\n               :                 :     :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     :     +- BroadcastExchange\n               :                 :     :     :        +- CometNativeColumnarToRow\n               :                 :     :     :           +- CometProject\n               :                 :     :     :              +- CometFilter\n               :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.warehouse\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometFilter\n               :                 :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.time_dim\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometNativeScan parquet spark_catalog.default.ship_mode\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                
 :     :     :- Project\n                                 :     :     :  +- BroadcastHashJoin\n                                 :     :     :     :- Filter\n                                 :     :     :     :  +- ColumnarToRow\n                                 :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :     :           +- ReusedSubquery\n                                 :     :     :     +- BroadcastExchange\n                                 :     :     :        +- CometNativeColumnarToRow\n                                 :     :     :           +- CometProject\n                                 :     :     :              +- CometFilter\n                                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.warehouse\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometFilter\n                                 :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.time_dim\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.ship_mode\n\nComet accelerated 27 out of 66 eligible operators (40%). Final plan contains 14 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q66.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometProject\n               :              :     :     :  +- CometBroadcastHashJoin\n               :              :     :     :     :- CometFilter\n               :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :              :     :     :     :        +- SubqueryBroadcast\n               :              :     :     :     :           +- BroadcastExchange\n               :              :     :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :     :                 +- CometFilter\n               :              :     :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     :     +- CometBroadcastExchange\n               :              :     :     :        +- CometProject\n               :              :     :     :           +- CometFilter\n               :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometFilter\n               :              :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometFilter\n               :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometFilter\n                              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :     :     :        +- ReusedSubquery\n                              :     :     :     +- CometBroadcastExchange\n                              :  
   :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometFilter\n                              :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n\nComet accelerated 63 out of 66 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q67.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- Window\n      +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- WindowGroupLimit\n                     +- Sort\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Expand\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- Project\n                                             :  +- BroadcastHashJoin\n                                             :     :- Project\n                                             :     :  +- BroadcastHashJoin\n                                             :     :     :- Filter\n                                             :     :     :  +- ColumnarToRow\n                                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :     :     :           +- SubqueryBroadcast\n                                             :     :     :              +- BroadcastExchange\n                                             :     :     :                 +- CometNativeColumnarToRow\n                                             :     :     :                    +- CometProject\n                                             :     :     :                       +- CometFilter\n                                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             :     :     +- BroadcastExchange\n                                             :     :        +- CometNativeColumnarToRow\n                                             :     :           +- CometProject\n                                             :     :              +- CometFilter\n                                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             :     +- BroadcastExchange\n                                             :        +- CometNativeColumnarToRow\n                                             :           +- CometProject\n                                             :              +- CometFilter\n                                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                                             +- BroadcastExchange\n                                                +- CometNativeColumnarToRow\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 15 out of 37 eligible operators (40%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q67.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- Window\n      +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                     +- CometNativeColumnarToRow\n                        +- CometSort\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometExpand\n                                       +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometProject\n                                             :  +- CometBroadcastHashJoin\n                                             :     :- CometProject\n                                             :     :  +- CometBroadcastHashJoin\n                                             :     :     :- CometFilter\n                                             :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                             :     :     :        +- SubqueryBroadcast\n                                             :     :     :           +- BroadcastExchange\n                                             :     :     :              +- CometNativeColumnarToRow\n                                             :     :     :                 +- CometProject\n                                             :     :     :                    +- CometFilter\n                                             :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             :     :     +- CometBroadcastExchange\n                                             :     :        +- CometProject\n                                             :     :           +- CometFilter\n                                             :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             :     +- CometBroadcastExchange\n                                             :        +- CometProject\n                                             :           +- CometFilter\n                                             :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 30 out of 37 eligible operators (81%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q68.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- HashAggregate\n      :     :  +- CometNativeColumnarToRow\n      :     :     +- CometColumnarExchange\n      :     :        +- HashAggregate\n      :     :           +- Project\n      :     :              +- BroadcastHashJoin\n      :     :                 :- Project\n      :     :                 :  +- BroadcastHashJoin\n      :     :                 :     :- Project\n      :     :                 :     :  +- BroadcastHashJoin\n      :     :                 :     :     :- Project\n      :     :                 :     :     :  +- BroadcastHashJoin\n      :     :                 :     :     :     :- Filter\n      :     :                 :     :     :     :  +- ColumnarToRow\n      :     :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                 :     :     :     :           +- SubqueryBroadcast\n      :     :                 :     :     :     :              +- BroadcastExchange\n      :     :                 :     :     :     :                 +- CometNativeColumnarToRow\n      :     :                 :     :     :     :                    +- CometProject\n      :     :                 :     :     :     :                       +- CometFilter\n      :     :                 :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     :     +- BroadcastExchange\n      :     :                 :     :     :        +- CometNativeColumnarToRow\n      :     :                 :     :     :           +- CometProject\n      :     :                 :     :     :              +- CometFilter\n      :     :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     +- BroadcastExchange\n      :     :                 :     :        +- CometNativeColumnarToRow\n      :     :                 :     :           +- CometProject\n      :     :                 :     :              +- CometFilter\n      :     :                 :     :                 +- CometNativeScan parquet spark_catalog.default.store\n      :     :                 :     +- BroadcastExchange\n      :     :                 :        +- CometNativeColumnarToRow\n      :     :                 :           +- CometProject\n      :     :                 :              +- CometFilter\n      :     :                 :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n      :     :                 +- BroadcastExchange\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometFilter\n      :     :                          +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometFilter\n               +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 20 out of 45 eligible operators (44%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q68.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometHashAggregate\n         :     :  +- CometExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometProject\n         :     :           +- CometBroadcastHashJoin\n         :     :              :- CometProject\n         :     :              :  +- CometBroadcastHashJoin\n         :     :              :     :- CometProject\n         :     :              :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :- CometProject\n         :     :              :     :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :     :- CometFilter\n         :     :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :              :     :     :     :        +- SubqueryBroadcast\n         :     :              :     :     :     :           +- BroadcastExchange\n         :     :              :     :     :     :              +- CometNativeColumnarToRow\n         :     :              :     :     :     :                 +- CometProject\n         :     :              :     :     :     :                    +- CometFilter\n         :     :              :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     :     +- CometBroadcastExchange\n         :     :              :     :     :        +- CometProject\n         :     :              :     :     :           +- CometFilter\n         :     :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     +- CometBroadcastExchange\n         :     :              :     :        +- CometProject\n         :     :              :     :           +- CometFilter\n         :     :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     :              :     +- CometBroadcastExchange\n         :     :              :        +- CometProject\n         :     :              :           +- CometFilter\n         :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         :     :              +- CometBroadcastExchange\n         :     :                 +- CometFilter\n         :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 43 out of 45 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q69.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- BroadcastHashJoin\n                  :     :     :  :- BroadcastHashJoin\n                  :     :     :  :  :- CometNativeColumnarToRow\n                  :     :     :  :  :  +- CometFilter\n                  :     :     :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :     :  :  +- BroadcastExchange\n                  :     :     :  :     +- Project\n                  :     :     :  :        +- BroadcastHashJoin\n                  :     :     :  :           :- ColumnarToRow\n                  :     :     :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :  :           :        +- SubqueryBroadcast\n                  :     :     :  :           :           +- BroadcastExchange\n                  :     :     :  :           :              +- CometNativeColumnarToRow\n                  :     :     :  :           :                 +- CometProject\n                  :     :     :  :           :                    +- CometFilter\n                  :     :     :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :  :           +- BroadcastExchange\n                  :     :     :  :              +- CometNativeColumnarToRow\n                  :     :     :  :                 +- CometProject\n                  :     :     :  :                    +- CometFilter\n                  :     :     :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- Project\n                  :     :     :        +- BroadcastHashJoin\n                  :     :     :           :- ColumnarToRow\n                  :     :     :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           :        +- ReusedSubquery\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- Project\n                  :     :           +- BroadcastHashJoin\n                  :     :              :- ColumnarToRow\n                  :     :              :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :              :        +- ReusedSubquery\n                  :     :              +- BroadcastExchange\n                  :     :                 +- CometNativeColumnarToRow\n                  :     :                    +- CometProject\n                  :    
 :                       +- CometFilter\n                  :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 53 eligible operators (39%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q69.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :-  BroadcastHashJoin [COMET: BuildRight with LeftAnti is not supported]\n                  :     :     :  :- CometNativeColumnarToRow\n                  :     :     :  :  +- CometBroadcastHashJoin\n                  :     :     :  :     :- CometFilter\n                  :     :     :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :     :  :     +- CometBroadcastExchange\n                  :     :     :  :        +- CometProject\n                  :     :     :  :           +- CometBroadcastHashJoin\n                  :     :     :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :  :              :     +- SubqueryBroadcast\n                  :     :     :  :              :        +- BroadcastExchange\n                  :     :     :  :              :           +- CometNativeColumnarToRow\n                  :     :     :  :              :              +- CometProject\n                  :     :     :  :              :                 +- CometFilter\n                  :     :     :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :  :              +- CometBroadcastExchange\n                  :     :     :  :                 +- CometProject\n                  :     :     :  :                    +- CometFilter\n                  :     :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- CometNativeColumnarToRow\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometBroadcastHashJoin\n                  :     :     :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :     :              :     +- ReusedSubquery\n                  :     :     :              +- CometBroadcastExchange\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometBroadcastHashJoin\n                  :     :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                 :     +- ReusedSubquery\n                  :     :                 +- CometBroadcastExchange\n                  :     :                    +- CometProject\n                  :     :                       +- CometFilter\n                  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- 
BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 53 eligible operators (66%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q7.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Filter\n                  :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :              +- BroadcastExchange\n                  :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :                    +- CometProject\n                  :     :     :     :                       +- CometFilter\n                  :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.item\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 16 out of 35 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q7.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :           +- BroadcastExchange\n                  :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :                 +- CometProject\n                  :     :     :     :                    +- CometFilter\n                  :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 33 out of 35 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q70.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Filter\n                                    :     :  +- ColumnarToRow\n                                    :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :           +- SubqueryBroadcast\n                                    :     :              +- BroadcastExchange\n                                    :     :                 +- CometNativeColumnarToRow\n                                    :     :                    +- CometProject\n                                    :     :                       +- CometFilter\n                                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- CometNativeColumnarToRow\n                                             :  +- CometFilter\n                                             :     +- CometNativeScan parquet spark_catalog.default.store\n                                             +- BroadcastExchange\n                                                +- Project\n                                                   +- Filter\n                                                      +- Window\n                                                         +- WindowGroupLimit\n                                                            +- Sort\n                                                               +- HashAggregate\n                                                                  +- CometNativeColumnarToRow\n                                                                     +- CometColumnarExchange\n                                                                        +- HashAggregate\n                                                                           +- Project\n                                                                              +- BroadcastHashJoin\n                                                                                 :- 
Project\n                                                                                 :  +- BroadcastHashJoin\n                                                                                 :     :- Filter\n                                                                                 :     :  +- ColumnarToRow\n                                                                                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                                 :     :           +- ReusedSubquery\n                                                                                 :     +- BroadcastExchange\n                                                                                 :        +- CometNativeColumnarToRow\n                                                                                 :           +- CometProject\n                                                                                 :              +- CometFilter\n                                                                                 :                 +- CometNativeScan parquet spark_catalog.default.store\n                                                                                 +- BroadcastExchange\n                                                                                    +- CometNativeColumnarToRow\n                                                                                       +- CometProject\n                                                                                          +- CometFilter\n                                                                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 18 out of 53 eligible operators (33%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q70.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- CometNativeColumnarToRow\n                                    :  +- CometProject\n                                    :     +- CometBroadcastHashJoin\n                                    :        :- CometFilter\n                                    :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :        :        +- SubqueryBroadcast\n                                    :        :           +- BroadcastExchange\n                                    :        :              +- CometNativeColumnarToRow\n                                    :        :                 +- CometProject\n                                    :        :                    +- CometFilter\n                                    :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :        +- CometBroadcastExchange\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- CometNativeColumnarToRow\n                                             :  +- CometFilter\n                                             :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                             +- BroadcastExchange\n                                                +- Project\n                                                   +- Filter\n                                                      +- Window\n                                                         +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometSort\n                                                                  +- CometHashAggregate\n                                                                     +- CometExchange\n                                                                        +- CometHashAggregate\n                                                                           +- CometProject\n                                                                              +- CometBroadcastHashJoin\n                                                             
                    :- CometProject\n                                                                                 :  +- CometBroadcastHashJoin\n                                                                                 :     :- CometFilter\n                                                                                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                                 :     :        +- ReusedSubquery\n                                                                                 :     +- CometBroadcastExchange\n                                                                                 :        +- CometProject\n                                                                                 :           +- CometFilter\n                                                                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                                                 +- CometBroadcastExchange\n                                                                                    +- CometProject\n                                                                                       +- CometFilter\n                                                                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 34 out of 53 eligible operators (64%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q71.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- CometNativeColumnarToRow\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- Project\n                        :  +- BroadcastHashJoin\n                        :     :- BroadcastExchange\n                        :     :  +- CometNativeColumnarToRow\n                        :     :     +- CometProject\n                        :     :        +- CometFilter\n                        :     :           +- CometNativeScan parquet spark_catalog.default.item\n                        :     +- Union\n                        :        :- Project\n                        :        :  +- BroadcastHashJoin\n                        :        :     :- Filter\n                        :        :     :  +- ColumnarToRow\n                        :        :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :        :     :           +- SubqueryBroadcast\n                        :        :     :              +- BroadcastExchange\n                        :        :     :                 +- CometNativeColumnarToRow\n                        :        :     :                    +- CometProject\n                        :        :     :                       +- CometFilter\n                        :        :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :        :     +- BroadcastExchange\n                        :        :        +- CometNativeColumnarToRow\n                        :        :           +- CometProject\n                        :        :              +- CometFilter\n                        :        :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :        :- Project\n                        :        :  +- BroadcastHashJoin\n                        :        :     :- Filter\n                        :        :     :  +- ColumnarToRow\n                        :        :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :        :     :           +- ReusedSubquery\n                        :        :     +- BroadcastExchange\n                        :        :        +- CometNativeColumnarToRow\n                        :        :           +- CometProject\n                        :        :              +- CometFilter\n                        :        :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :        +- Project\n                        :           +- BroadcastHashJoin\n                        :              :- Filter\n                        :              :  +- ColumnarToRow\n                        :              :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :              :           +- ReusedSubquery\n                        :              +- BroadcastExchange\n                        :                 +- CometNativeColumnarToRow\n                        :                    +- CometProject\n                        :                       +- CometFilter\n            
            :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                        +- BroadcastExchange\n                           +- CometNativeColumnarToRow\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.time_dim\n\nComet accelerated 21 out of 49 eligible operators (42%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q71.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometBroadcastExchange\n                     :     :  +- CometProject\n                     :     :     +- CometFilter\n                     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     :     +- CometUnion\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometFilter\n                     :        :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :        :     :        +- SubqueryBroadcast\n                     :        :     :           +- BroadcastExchange\n                     :        :     :              +- CometNativeColumnarToRow\n                     :        :     :                 +- CometProject\n                     :        :     :                    +- CometFilter\n                     :        :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometFilter\n                     :        :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :        :     :        +- ReusedSubquery\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        +- CometProject\n                     :           +- CometBroadcastHashJoin\n                     :              :- CometFilter\n                     :              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :              :        +- ReusedSubquery\n                     :              +- CometBroadcastExchange\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n\nComet accelerated 45 out of 49 eligible operators (91%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q72.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometColumnarExchange\n                  :     +- Project\n                  :        +- BroadcastHashJoin\n                  :           :- Project\n                  :           :  +- BroadcastHashJoin\n                  :           :     :- Project\n                  :           :     :  +- BroadcastHashJoin\n                  :           :     :     :- Project\n                  :           :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :- Project\n                  :           :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :- Project\n                  :           :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- Filter\n                  :           :     :     :     :     :     :     :     :     :  +- ColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :           :     :     :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :              +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                    +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                       +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n               
   :           :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :           +- CometProject\n                  :           :     :     :     :     :              +- CometFilter\n                  :           :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :           +- CometProject\n                  :           :     :     :     :              +- CometFilter\n                  :           :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- BroadcastExchange\n                  :           :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :           +- CometProject\n                  :           :     :     :              +- CometFilter\n                  :           :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     +- BroadcastExchange\n                  :           :     :        +- CometNativeColumnarToRow\n                  :           :     :           +- CometFilter\n                  :           :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     +- BroadcastExchange\n                  :           :        +- CometNativeColumnarToRow\n                  :           :           +- CometFilter\n                  :           :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           +- BroadcastExchange\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n\nComet accelerated 37 out of 68 eligible operators (54%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q72.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometExchange\n                  :     +- CometProject\n                  :        +- CometBroadcastHashJoin\n                  :           :- CometProject\n                  :           :  +- CometBroadcastHashJoin\n                  :           :     :- CometProject\n                  :           :     :  +- CometBroadcastHashJoin\n                  :           :     :     :- CometProject\n                  :           :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :- CometProject\n                  :           :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :- CometProject\n                  :           :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- CometFilter\n                  :           :     :     :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :           :     :     :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :           +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                 +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                    +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :           :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :        +- CometProject\n                  :           :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :        +- CometProject\n                  :           :     :     :     :           +- CometFilter\n                  :           :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- CometBroadcastExchange\n                  :           :     :     :        +- CometProject\n                  :           :     :     :           +- CometFilter\n                  :           :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     +- CometBroadcastExchange\n                  :           :     :        +- CometFilter\n                  :           :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     +- CometBroadcastExchange\n                  :           :        +- CometFilter\n                  :           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           +- CometBroadcastExchange\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n\nComet accelerated 66 out of 68 eligible operators (97%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q73.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Filter\n            :  +- HashAggregate\n            :     +- CometNativeColumnarToRow\n            :        +- CometColumnarExchange\n            :           +- HashAggregate\n            :              +- Project\n            :                 +- BroadcastHashJoin\n            :                    :- Project\n            :                    :  +- BroadcastHashJoin\n            :                    :     :- Project\n            :                    :     :  +- BroadcastHashJoin\n            :                    :     :     :- Filter\n            :                    :     :     :  +- ColumnarToRow\n            :                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                    :     :     :           +- SubqueryBroadcast\n            :                    :     :     :              +- BroadcastExchange\n            :                    :     :     :                 +- CometNativeColumnarToRow\n            :                    :     :     :                    +- CometProject\n            :                    :     :     :                       +- CometFilter\n            :                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     :     +- BroadcastExchange\n            :                    :     :        +- CometNativeColumnarToRow\n            :                    :     :           +- CometProject\n            :                    :     :              +- CometFilter\n            :                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     +- BroadcastExchange\n            :                    :        +- CometNativeColumnarToRow\n            :                    :           +- CometProject\n            :                    :              +- CometFilter\n            :                    :                 +- CometNativeScan parquet spark_catalog.default.store\n            :                    +- BroadcastExchange\n            :                       +- CometNativeColumnarToRow\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometNativeScan parquet spark_catalog.default.household_demographics\n            +- BroadcastExchange\n               +- CometNativeColumnarToRow\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 37 eligible operators (48%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q73.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometFilter\n            :  +- CometHashAggregate\n            :     +- CometExchange\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometFilter\n            :                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :        +- SubqueryBroadcast\n            :                 :     :     :           +- BroadcastExchange\n            :                 :     :     :              +- CometNativeColumnarToRow\n            :                 :     :     :                 +- CometProject\n            :                 :     :     :                    +- CometFilter\n            :                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometProject\n            :                 :     :           +- CometFilter\n            :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometProject\n            :                 :           +- CometFilter\n            :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 37 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q74.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- BroadcastHashJoin\n      :     :  :- Filter\n      :     :  :  +- HashAggregate\n      :     :  :     +- CometNativeColumnarToRow\n      :     :  :        +- CometColumnarExchange\n      :     :  :           +- HashAggregate\n      :     :  :              +- Project\n      :     :  :                 +- BroadcastHashJoin\n      :     :  :                    :- Project\n      :     :  :                    :  +- BroadcastHashJoin\n      :     :  :                    :     :- CometNativeColumnarToRow\n      :     :  :                    :     :  +- CometProject\n      :     :  :                    :     :     +- CometFilter\n      :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :  :                    :     +- BroadcastExchange\n      :     :  :                    :        +- Filter\n      :     :  :                    :           +- ColumnarToRow\n      :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :  :                    :                    +- SubqueryBroadcast\n      :     :  :                    :                       +- BroadcastExchange\n      :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :  :                    :                             +- CometFilter\n      :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  :                    +- BroadcastExchange\n      :     :  :                       +- CometNativeColumnarToRow\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  +- BroadcastExchange\n      :     :     +- HashAggregate\n      :     :        +- CometNativeColumnarToRow\n      :     :           +- CometColumnarExchange\n      :     :              +- HashAggregate\n      :     :                 +- Project\n      :     :                    +- BroadcastHashJoin\n      :     :                       :- Project\n      :     :                       :  +- BroadcastHashJoin\n      :     :                       :     :- CometNativeColumnarToRow\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                       :     +- BroadcastExchange\n      :     :                       :        +- Filter\n      :     :                       :           +- ColumnarToRow\n      :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                       :                    +- SubqueryBroadcast\n      :     :                       :                       +- BroadcastExchange\n      :     :                       :                          +- CometNativeColumnarToRow\n      :     :                       :                             +- CometFilter\n      :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n 
     :     :                       +- BroadcastExchange\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometFilter\n      :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 85 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q74.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometBroadcastHashJoin\n         :     :  :- CometFilter\n         :     :  :  +- CometHashAggregate\n         :     :  :     +- CometExchange\n         :     :  :        +- CometHashAggregate\n         :     :  :           +- CometProject\n         :     :  :              +- CometBroadcastHashJoin\n         :     :  :                 :- CometProject\n         :     :  :                 :  +- CometBroadcastHashJoin\n         :     :  :                 :     :- CometProject\n         :     :  :                 :     :  +- CometFilter\n         :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :  :                 :     +- CometBroadcastExchange\n         :     :  :                 :        +- CometFilter\n         :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :  :                 :                 +- SubqueryBroadcast\n         :     :  :                 :                    +- BroadcastExchange\n         :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :  :                 :                          +- CometFilter\n         :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  :                 +- CometBroadcastExchange\n         :     :  :                    +- CometFilter\n         :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  +- CometBroadcastExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometExchange\n         :     :           +- CometHashAggregate\n         :     :              +- CometProject\n         :     :                 +- CometBroadcastHashJoin\n         :     :                    :- CometProject\n         :     :                    :  +- CometBroadcastHashJoin\n         :     :                    :     :- CometProject\n         :     :                    :     :  +- CometFilter\n         :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                    :     +- CometBroadcastExchange\n         :     :                    :        +- CometFilter\n         :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                    :                 +- SubqueryBroadcast\n         :     :                    :                    +- BroadcastExchange\n         :     :                    :                       +- CometNativeColumnarToRow\n         :     :                    :                          +- CometFilter\n         :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                    +- CometBroadcastExchange\n         :     :                       +- CometFilter\n         :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- 
CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 79 out of 85 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q75.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n         :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- SubqueryBroadcast\n         :                             :     :           :     :              +- BroadcastExchange\n         :                             :     :           :     :                 +- CometNativeColumnarToRow\n         :                             :     :           :     :                    +- CometFilter\n         :                             :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n    
     :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- ReusedSubquery\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometColumnarExchange\n         :                                   :     +- Project\n         :                                   :        +- BroadcastHashJoin\n         :                                   :           :- Project\n         :                                   :           :  +- BroadcastHashJoin\n         :                                   :           :     :- Filter\n         :                                   :           :     :  +- ColumnarToRow\n         :                                   :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                                   :           :     :           +- ReusedSubquery\n         :                                   :           :     +- BroadcastExchange\n         :                                   :           :        +- CometNativeColumnarToRow\n         :                                   :           :           +- CometProject\n         :                                   :           :              +- CometFilter\n         :                                   :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                                   :           +- BroadcastExchange\n         :                                   :              +- 
CometNativeColumnarToRow\n         :                                   :                 +- CometFilter\n         :                                   :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometNativeScan parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- SubqueryBroadcast\n                                       :     :           :     :              +- BroadcastExchange\n                                       :     :           :     :                 +- CometNativeColumnarToRow\n                                       :     :           :     :                    +- CometFilter\n                                       :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                        
               :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- ReusedSubquery\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometColumnarExchange\n                                             :     +- Project\n                                             :        +- BroadcastHashJoin\n                                             :           :- Project\n                                             :           :  +- BroadcastHashJoin\n                                             :           :     :- Filter\n                                             :           :     :  +- ColumnarToRow\n                                             :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :           :     :           +- ReusedSubquery\n                                             :           :     +- BroadcastExchange\n                                             :           :        +- 
CometNativeColumnarToRow\n                                             :           :           +- CometProject\n                                             :           :              +- CometFilter\n                                             :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                             :           +- BroadcastExchange\n                                             :              +- CometNativeColumnarToRow\n                                             :                 +- CometFilter\n                                             :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.web_returns\n\nComet accelerated 111 out of 167 eligible operators (66%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q75.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :                             :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :                             :     :           :     :        +- SubqueryBroadcast\n         :                             :     :           :     :           +- BroadcastExchange\n         :                             :     :           :     :              +- CometNativeColumnarToRow\n         :                             :     :           :     :                 +- CometFilter\n         :                             :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :     
                        :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :                             :     :           :     :        +- ReusedSubquery\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometExchange\n         :                                   :     +- CometProject\n         :                                   :        +- CometBroadcastHashJoin\n         :                                   :           :- CometProject\n         :                                   :           :  +- CometBroadcastHashJoin\n         :                                   :           :     :- CometFilter\n         :                                   :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                                   :           :     :        +- ReusedSubquery\n         :                                   :           :     +- CometBroadcastExchange\n         :                                   :           :        +- CometProject\n         :                                   :           :           +- CometFilter\n         :                                   :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                                   :           +- CometBroadcastExchange\n         :                                   :              +- CometFilter\n         :                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n            
            +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :           :     :        +- SubqueryBroadcast\n                                       :     :           :     :           +- BroadcastExchange\n                                       :     :           :     :              +- CometNativeColumnarToRow\n                                       :     :           :     :                 +- CometFilter\n                                       :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :           :     :        +- 
ReusedSubquery\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometExchange\n                                             :     +- CometProject\n                                             :        +- CometBroadcastHashJoin\n                                             :           :- CometProject\n                                             :           :  +- CometBroadcastHashJoin\n                                             :           :     :- CometFilter\n                                             :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                             :           :     :        +- ReusedSubquery\n                                             :           :     +- CometBroadcastExchange\n                                             :           :        +- CometProject\n                                             :           :           +- CometFilter\n                                             :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                             :           +- CometBroadcastExchange\n                                             :              +- CometFilter\n                                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n\nComet accelerated 159 out of 167 eligible operators (95%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q76.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometNativeScan parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometNativeScan parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometNativeScan parquet spark_catalog.default.web_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometNativeScan parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometFilter\n                     :     :  +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometNativeScan parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 44 out of 44 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q76.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometFilter\n                     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 44 out of 44 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q77.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- HashAggregate\n                  :     :  +- CometNativeColumnarToRow\n                  :     :     +- CometColumnarExchange\n                  :     :        +- HashAggregate\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- Project\n                  :     :                 :  +- BroadcastHashJoin\n                  :     :                 :     :- Filter\n                  :     :                 :     :  +- ColumnarToRow\n                  :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :     :           +- SubqueryBroadcast\n                  :     :                 :     :              +- BroadcastExchange\n                  :     :                 :     :                 +- CometNativeColumnarToRow\n                  :     :                 :     :                    +- CometProject\n                  :     :                 :     :                       +- CometFilter\n                  :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                 :     +- BroadcastExchange\n                  :     :                 :        +- CometNativeColumnarToRow\n                  :     :                 :           +- CometProject\n                  :     :                 :              +- CometFilter\n                  :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                 +- BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometFilter\n                  :     :                          +- CometNativeScan parquet spark_catalog.default.store\n                  :     +- BroadcastExchange\n                  :        +- HashAggregate\n                  :           +- CometNativeColumnarToRow\n                  :              +- CometColumnarExchange\n                  :                 +- HashAggregate\n                  :                    +- Project\n                  :                       +- BroadcastHashJoin\n                  :                          :- Project\n                  :                          :  +- BroadcastHashJoin\n                  :                          :     :- Filter\n                  :                          :     :  +- ColumnarToRow\n                  :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                          :     :           +- ReusedSubquery\n                  :                          :     +- BroadcastExchange\n                  :                          :        +- CometNativeColumnarToRow\n                  :                          :           +- CometProject\n                  :                          :              +- CometFilter\n                  :                
          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                          +- BroadcastExchange\n                  :                             +- CometNativeColumnarToRow\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.store\n                  :- Project\n                  :  +- BroadcastNestedLoopJoin\n                  :     :- BroadcastExchange\n                  :     :  +- HashAggregate\n                  :     :     +- CometNativeColumnarToRow\n                  :     :        +- CometColumnarExchange\n                  :     :           +- HashAggregate\n                  :     :              +- Project\n                  :     :                 +- BroadcastHashJoin\n                  :     :                    :- ColumnarToRow\n                  :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                    :        +- ReusedSubquery\n                  :     :                    +- BroadcastExchange\n                  :     :                       +- CometNativeColumnarToRow\n                  :     :                          +- CometProject\n                  :     :                             +- CometFilter\n                  :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- HashAggregate\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometColumnarExchange\n                  :              +- HashAggregate\n                  :                 +- Project\n                  :                    +- BroadcastHashJoin\n                  :                       :- ColumnarToRow\n                  :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :        +- ReusedSubquery\n                  :                       +- BroadcastExchange\n                  :                          +- CometNativeColumnarToRow\n                  :                             +- CometProject\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- HashAggregate\n                        :  +- CometNativeColumnarToRow\n                        :     +- CometColumnarExchange\n                        :        +- HashAggregate\n                        :           +- Project\n                        :              +- BroadcastHashJoin\n                        :                 :- Project\n                        :                 :  +- BroadcastHashJoin\n                        :                 :     :- Filter\n                        :                 :     :  +- ColumnarToRow\n                        :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :                 :     :           +- ReusedSubquery\n                        :                 :     +- BroadcastExchange\n         
               :                 :        +- CometNativeColumnarToRow\n                        :                 :           +- CometProject\n                        :                 :              +- CometFilter\n                        :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :                 +- BroadcastExchange\n                        :                    +- CometNativeColumnarToRow\n                        :                       +- CometFilter\n                        :                          +- CometNativeScan parquet spark_catalog.default.web_page\n                        +- BroadcastExchange\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- Project\n                                             :  +- BroadcastHashJoin\n                                             :     :- Filter\n                                             :     :  +- ColumnarToRow\n                                             :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :     :           +- ReusedSubquery\n                                             :     +- BroadcastExchange\n                                             :        +- CometNativeColumnarToRow\n                                             :           +- CometProject\n                                             :              +- CometFilter\n                                             :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- BroadcastExchange\n                                                +- CometNativeColumnarToRow\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.web_page\n\nComet accelerated 36 out of 109 eligible operators (33%). Final plan contains 24 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q77.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- CometNativeColumnarToRow\n                  :  +- CometProject\n                  :     +- CometBroadcastHashJoin\n                  :        :- CometHashAggregate\n                  :        :  +- CometExchange\n                  :        :     +- CometHashAggregate\n                  :        :        +- CometProject\n                  :        :           +- CometBroadcastHashJoin\n                  :        :              :- CometProject\n                  :        :              :  +- CometBroadcastHashJoin\n                  :        :              :     :- CometFilter\n                  :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :        :              :     :        +- SubqueryBroadcast\n                  :        :              :     :           +- BroadcastExchange\n                  :        :              :     :              +- CometNativeColumnarToRow\n                  :        :              :     :                 +- CometProject\n                  :        :              :     :                    +- CometFilter\n                  :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        :              :     +- CometBroadcastExchange\n                  :        :              :        +- CometProject\n                  :        :              :           +- CometFilter\n                  :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        :              +- CometBroadcastExchange\n                  :        :                 +- CometFilter\n                  :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :        +- CometBroadcastExchange\n                  :           +- CometHashAggregate\n                  :              +- CometExchange\n                  :                 +- CometHashAggregate\n                  :                    +- CometProject\n                  :                       +- CometBroadcastHashJoin\n                  :                          :- CometProject\n                  :                          :  +- CometBroadcastHashJoin\n                  :                          :     :- CometFilter\n                  :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :                          :     :        +- ReusedSubquery\n                  :                          :     +- CometBroadcastExchange\n                  :                          :        +- CometProject\n                  :                          :           +- CometFilter\n                  :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                          +- CometBroadcastExchange\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :- Project\n                  :  +-  
BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n                  :     :- BroadcastExchange\n                  :     :  +- CometNativeColumnarToRow\n                  :     :     +- CometHashAggregate\n                  :     :        +- CometExchange\n                  :     :           +- CometHashAggregate\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometNativeColumnarToRow\n                  :        +- CometHashAggregate\n                  :           +- CometExchange\n                  :              +- CometHashAggregate\n                  :                 +- CometProject\n                  :                    +- CometBroadcastHashJoin\n                  :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :                       :     +- ReusedSubquery\n                  :                       +- CometBroadcastExchange\n                  :                          +- CometProject\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometProject\n                           :           +- CometBroadcastHashJoin\n                           :              :- CometProject\n                           :              :  +- CometBroadcastHashJoin\n                           :              :     :- CometFilter\n                           :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :              :     :        +- ReusedSubquery\n                           :              :     +- CometBroadcastExchange\n                           :              :        +- CometProject\n                           :              :           +- CometFilter\n                           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              +- CometBroadcastExchange\n                           :                 +- CometFilter\n                           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n                           +- CometBroadcastExchange\n                              +- CometHashAggregate\n                                 +- CometExchange\n                                    +- CometHashAggregate\n                                       +- CometProject\n                                
          +- CometBroadcastHashJoin\n                                             :- CometProject\n                                             :  +- CometBroadcastHashJoin\n                                             :     :- CometFilter\n                                             :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                             :     :        +- ReusedSubquery\n                                             :     +- CometBroadcastExchange\n                                             :        +- CometProject\n                                             :           +- CometFilter\n                                             :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometBroadcastExchange\n                                                +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n\nComet accelerated 94 out of 109 eligible operators (86%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q78.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometColumnarExchange\n         :     :                 :        :     +- Filter\n         :     :                 :        :        +- ColumnarToRow\n         :     :                 :        :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :     :                 :        :                 +- SubqueryBroadcast\n         :     :                 :        :                    +- BroadcastExchange\n         :     :                 :        :                       +- CometNativeColumnarToRow\n         :     :                 :        :                          +- CometFilter\n         :     :                 :        :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometColumnarExchange\n         :                          :        :     +- Filter\n         :                          :        :        +- ColumnarToRow\n         :                          :        :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                          :        :                 +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometNativeScan parquet spark_catalog.default.web_returns\n         :   
                       +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometColumnarExchange\n                              :        :     +- Filter\n                              :        :        +- ColumnarToRow\n                              :        :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :        :                 +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 64 out of 76 eligible operators (84%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q78.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometExchange\n         :     :                 :        :     +- CometFilter\n         :     :                 :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                 :        :              +- SubqueryBroadcast\n         :     :                 :        :                 +- BroadcastExchange\n         :     :                 :        :                    +- CometNativeColumnarToRow\n         :     :                 :        :                       +- CometFilter\n         :     :                 :        :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometExchange\n         :                          :        :     +- CometFilter\n         :                          :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :        :              +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometExchange\n                              :        :     +- CometFilter\n                              :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :        :              +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 70 out of 76 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q79.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- HashAggregate\n      :  +- CometNativeColumnarToRow\n      :     +- CometColumnarExchange\n      :        +- HashAggregate\n      :           +- Project\n      :              +- BroadcastHashJoin\n      :                 :- Project\n      :                 :  +- BroadcastHashJoin\n      :                 :     :- Project\n      :                 :     :  +- BroadcastHashJoin\n      :                 :     :     :- Filter\n      :                 :     :     :  +- ColumnarToRow\n      :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                 :     :     :           +- SubqueryBroadcast\n      :                 :     :     :              +- BroadcastExchange\n      :                 :     :     :                 +- CometNativeColumnarToRow\n      :                 :     :     :                    +- CometProject\n      :                 :     :     :                       +- CometFilter\n      :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                 :     :     +- BroadcastExchange\n      :                 :     :        +- CometNativeColumnarToRow\n      :                 :     :           +- CometProject\n      :                 :     :              +- CometFilter\n      :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                 :     +- BroadcastExchange\n      :                 :        +- CometNativeColumnarToRow\n      :                 :           +- CometProject\n      :                 :              +- CometFilter\n      :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n      :                 +- BroadcastExchange\n      :                    +- CometNativeColumnarToRow\n      :                       +- CometProject\n      :                          +- CometFilter\n      :                             +- CometNativeScan parquet spark_catalog.default.household_demographics\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 16 out of 35 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q79.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometHashAggregate\n         :  +- CometExchange\n         :     +- CometHashAggregate\n         :        +- CometProject\n         :           +- CometBroadcastHashJoin\n         :              :- CometProject\n         :              :  +- CometBroadcastHashJoin\n         :              :     :- CometProject\n         :              :     :  +- CometBroadcastHashJoin\n         :              :     :     :- CometFilter\n         :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :              :     :     :        +- SubqueryBroadcast\n         :              :     :     :           +- BroadcastExchange\n         :              :     :     :              +- CometNativeColumnarToRow\n         :              :     :     :                 +- CometProject\n         :              :     :     :                    +- CometFilter\n         :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :              :     :     +- CometBroadcastExchange\n         :              :     :        +- CometProject\n         :              :     :           +- CometFilter\n         :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :              :     +- CometBroadcastExchange\n         :              :        +- CometProject\n         :              :           +- CometFilter\n         :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :              +- CometBroadcastExchange\n         :                 +- CometProject\n         :                    +- CometFilter\n         :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 33 out of 35 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q8.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Filter\n                  :     :     :  +- ColumnarToRow\n                  :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           +- SubqueryBroadcast\n                  :     :     :              +- BroadcastExchange\n                  :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :                    +- CometProject\n                  :     :     :                       +- CometFilter\n                  :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometBroadcastHashJoin\n                                    :- CometProject\n                                    :  +- CometFilter\n                                    :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometHashAggregate\n                                                +- CometExchange\n                                                   +- CometHashAggregate\n                                                      +- CometProject\n                                                         +- CometBroadcastHashJoin\n                                                            :- CometProject\n                                                            :  +- CometFilter\n                                                            :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                                                            +- CometBroadcastExchange\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 32 out of 48 eligible operators (66%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q8.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometFilter\n                  :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :        +- SubqueryBroadcast\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometHashAggregate\n                        +- CometExchange\n                           +- CometHashAggregate\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometFilter\n                                 :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometHashAggregate\n                                             +- CometExchange\n                                                +- CometHashAggregate\n                                                   +- CometProject\n                                                      +- CometBroadcastHashJoin\n                                                         :- CometProject\n                                                         :  +- CometFilter\n                                                         :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                                                         +- CometBroadcastExchange\n                                                            +- CometProject\n                                                               +- CometFilter\n                                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 46 out of 48 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q80.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometColumnarExchange\n                  :              :     :     :     :     :     +- Filter\n                  :              :     :     :     :     :        +- ColumnarToRow\n                  :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :              :     :     :     :     :                 +- SubqueryBroadcast\n                  :              :     :     :     :     :                    +- BroadcastExchange\n                  :              :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :              :     :     :     :     :                          +- CometProject\n                  :              :     :     :     :     :                             +- CometFilter\n                  :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometNativeScan parquet spark_catalog.default.item\n                  :              
+- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometColumnarExchange\n                  :              :     :     :     :     :     +- Filter\n                  :              :     :     :     :     :        +- ColumnarToRow\n                  :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :              :     :     :     :     :                 +- ReusedSubquery\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometNativeScan parquet spark_catalog.default.item\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                 
                :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometBroadcastHashJoin\n                                 :     :     :     :- CometProject\n                                 :     :     :     :  +- CometSortMergeJoin\n                                 :     :     :     :     :- CometSort\n                                 :     :     :     :     :  +- CometColumnarExchange\n                                 :     :     :     :     :     +- Filter\n                                 :     :     :     :     :        +- ColumnarToRow\n                                 :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :     :     :                 +- ReusedSubquery\n                                 :     :     :     :     +- CometSort\n                                 :     :     :     :        +- CometExchange\n                                 :     :     :     :           +- CometProject\n                                 :     :     :     :              +- CometFilter\n                                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n                                 :     :     :     +- CometBroadcastExchange\n                                 :     :     :        +- CometProject\n                                 :     :     :           +- CometFilter\n                                 :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometNativeScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 117 out of 127 eligible operators (92%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q80.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometExchange\n                  :              :     :     :     :     :     +- CometFilter\n                  :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :              :     :     :     :     :              +- SubqueryBroadcast\n                  :              :     :     :     :     :                 +- BroadcastExchange\n                  :              :     :     :     :     :                    +- CometNativeColumnarToRow\n                  :              :     :     :     :     :                       +- CometProject\n                  :              :     :     :     :     :                          +- CometFilter\n                  :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :              +- CometBroadcastExchange\n                  :                 +- 
CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometExchange\n                  :              :     :     :     :     :     +- CometFilter\n                  :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :              :     :     :     :     :              +- ReusedSubquery\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- 
CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometBroadcastHashJoin\n                                 :     :     :     :- CometProject\n                                 :     :     :     :  +- CometSortMergeJoin\n                                 :     :     :     :     :- CometSort\n                                 :     :     :     :     :  +- CometExchange\n                                 :     :     :     :     :     +- CometFilter\n                                 :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :     :     :     :              +- ReusedSubquery\n                                 :     :     :     :     +- CometSort\n                                 :     :     :     :        +- CometExchange\n                                 :     :     :     :           +- CometProject\n                                 :     :     :     :              +- CometFilter\n                                 :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                 :     :     :     +- CometBroadcastExchange\n                                 :     :     :        +- CometProject\n                                 :     :     :           +- CometFilter\n                                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 123 out of 127 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q81.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Project\n      :     :     :                    :  +- BroadcastHashJoin\n      :     :     :                    :     :- Filter\n      :     :     :                    :     :  +- ColumnarToRow\n      :     :     :                    :     :     +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :     :           +- SubqueryBroadcast\n      :     :     :                    :     :              +- BroadcastExchange\n      :     :     :                    :     :                 +- CometNativeColumnarToRow\n      :     :     :                    :     :                    +- CometProject\n      :     :     :                    :     :                       +- CometFilter\n      :     :     :                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    :     +- BroadcastExchange\n      :     :     :                    :        +- CometNativeColumnarToRow\n      :     :     :                    :           +- CometProject\n      :     :     :                    :              +- CometFilter\n      :     :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Filter\n      :     :                                         :     :  +- ColumnarToRow\n      :     :                                         :     :     +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :           +- ReusedSubquery\n      :     :                                    
     :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometProject\n      :     :                                         :              +- CometFilter\n      :     :                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometProject\n      :     :                                                  +- CometFilter\n      :     :                                                     +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 24 out of 61 eligible operators (39%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q81.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometProject\n         :     :     :                 :  +- CometBroadcastHashJoin\n         :     :     :                 :     :- CometFilter\n         :     :     :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :     :     :                 :     :        +- SubqueryBroadcast\n         :     :     :                 :     :           +- BroadcastExchange\n         :     :     :                 :     :              +- CometNativeColumnarToRow\n         :     :     :                 :     :                 +- CometProject\n         :     :     :                 :     :                    +- CometFilter\n         :     :     :                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 :     +- CometBroadcastExchange\n         :     :     :                 :        +- CometProject\n         :     :     :                 :           +- CometFilter\n         :     :     :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometFilter\n         :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometHashAggregate\n         :     :                       +- CometExchange\n         :     :                          +- CometHashAggregate\n         :     :                             +- CometProject\n         :     :                                +- CometBroadcastHashJoin\n         :     :                                   :- CometProject\n         :     :                                   :  +- CometBroadcastHashJoin\n         :     :                                   :     :- CometFilter\n         :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :     :                                   :     :        +- ReusedSubquery\n         :     :                                   :     +- CometBroadcastExchange\n         :     :                                   :        +- CometProject\n         :     :                                   :           +- CometFilter\n         :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                   +- 
CometBroadcastExchange\n         :     :                                      +- CometProject\n         :     :                                         +- CometFilter\n         :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 58 out of 61 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q82.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- BroadcastExchange\n                  :  +- Project\n                  :     +- BroadcastHashJoin\n                  :        :- Project\n                  :        :  +- BroadcastHashJoin\n                  :        :     :- CometNativeColumnarToRow\n                  :        :     :  +- CometProject\n                  :        :     :     +- CometFilter\n                  :        :     :        +- CometNativeScan parquet spark_catalog.default.item\n                  :        :     +- BroadcastExchange\n                  :        :        +- Project\n                  :        :           +- Filter\n                  :        :              +- ColumnarToRow\n                  :        :                 +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :        :                       +- SubqueryBroadcast\n                  :        :                          +- BroadcastExchange\n                  :        :                             +- CometNativeColumnarToRow\n                  :        :                                +- CometProject\n                  :        :                                   +- CometFilter\n                  :        :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :        +- BroadcastExchange\n                  :           +- CometNativeColumnarToRow\n                  :              +- CometProject\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.store_sales\n\nComet accelerated 15 out of 30 eligible operators (50%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q82.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometBroadcastExchange\n                  :  +- CometProject\n                  :     +- CometBroadcastHashJoin\n                  :        :- CometProject\n                  :        :  +- CometBroadcastHashJoin\n                  :        :     :- CometProject\n                  :        :     :  +- CometFilter\n                  :        :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :        :     +- CometBroadcastExchange\n                  :        :        +- CometProject\n                  :        :           +- CometFilter\n                  :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :        :                    +- SubqueryBroadcast\n                  :        :                       +- BroadcastExchange\n                  :        :                          +- CometNativeColumnarToRow\n                  :        :                             +- CometProject\n                  :        :                                +- CometFilter\n                  :        :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        +- CometBroadcastExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n\nComet accelerated 28 out of 30 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q83.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- HashAggregate\n      :     :  +- CometNativeColumnarToRow\n      :     :     +- CometColumnarExchange\n      :     :        +- HashAggregate\n      :     :           +- Project\n      :     :              +- BroadcastHashJoin\n      :     :                 :- Project\n      :     :                 :  +- BroadcastHashJoin\n      :     :                 :     :- Filter\n      :     :                 :     :  +- ColumnarToRow\n      :     :                 :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                 :     :           +- SubqueryBroadcast\n      :     :                 :     :              +- BroadcastExchange\n      :     :                 :     :                 +- CometNativeColumnarToRow\n      :     :                 :     :                    +- CometProject\n      :     :                 :     :                       +- CometBroadcastHashJoin\n      :     :                 :     :                          :- CometFilter\n      :     :                 :     :                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :                          +- CometBroadcastExchange\n      :     :                 :     :                             +- CometProject\n      :     :                 :     :                                +- CometBroadcastHashJoin\n      :     :                 :     :                                   :- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :                                   +- CometBroadcastExchange\n      :     :                 :     :                                      +- CometProject\n      :     :                 :     :                                         +- CometFilter\n      :     :                 :     :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     +- BroadcastExchange\n      :     :                 :        +- CometNativeColumnarToRow\n      :     :                 :           +- CometProject\n      :     :                 :              +- CometFilter\n      :     :                 :                 +- CometNativeScan parquet spark_catalog.default.item\n      :     :                 +- BroadcastExchange\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometProject\n      :     :                          +- CometBroadcastHashJoin\n      :     :                             :- CometFilter\n      :     :                             :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                             +- CometBroadcastExchange\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometProject\n      :     :                                            +- CometFilter\n      :     :                                               +- CometNativeScan parquet 
spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- HashAggregate\n      :           +- CometNativeColumnarToRow\n      :              +- CometColumnarExchange\n      :                 +- HashAggregate\n      :                    +- Project\n      :                       +- BroadcastHashJoin\n      :                          :- Project\n      :                          :  +- BroadcastHashJoin\n      :                          :     :- Filter\n      :                          :     :  +- ColumnarToRow\n      :                          :     :     +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                          :     :           +- ReusedSubquery\n      :                          :     +- BroadcastExchange\n      :                          :        +- CometNativeColumnarToRow\n      :                          :           +- CometProject\n      :                          :              +- CometFilter\n      :                          :                 +- CometNativeScan parquet spark_catalog.default.item\n      :                          +- BroadcastExchange\n      :                             +- CometNativeColumnarToRow\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometFilter\n      :                                      :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometProject\n      :                                            +- CometBroadcastHashJoin\n      :                                               :- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                               +- CometBroadcastExchange\n      :                                                  +- CometProject\n      :                                                     +- CometFilter\n      :                                                        +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- Filter\n                           :     :  +- ColumnarToRow\n                           :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :           +- ReusedSubquery\n                           :     +- BroadcastExchange\n                           :        +- CometNativeColumnarToRow\n                           :           +- CometProject\n                           :              +- CometFilter\n                           :                 +- CometNativeScan parquet spark_catalog.default.item\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                             
          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometBroadcastHashJoin\n                                                :- CometNativeScan parquet spark_catalog.default.date_dim\n                                                +- CometBroadcastExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 60 out of 101 eligible operators (59%). Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q83.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometHashAggregate\n         :     :  +- CometExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometProject\n         :     :           +- CometBroadcastHashJoin\n         :     :              :- CometProject\n         :     :              :  +- CometBroadcastHashJoin\n         :     :              :     :- CometFilter\n         :     :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :              :     :        +- SubqueryBroadcast\n         :     :              :     :           +- BroadcastExchange\n         :     :              :     :              +- CometNativeColumnarToRow\n         :     :              :     :                 +- CometProject\n         :     :              :     :                    +- CometBroadcastHashJoin\n         :     :              :     :                       :- CometFilter\n         :     :              :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :                       +- CometBroadcastExchange\n         :     :              :     :                          +- CometProject\n         :     :              :     :                             +- CometBroadcastHashJoin\n         :     :              :     :                                :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :                                +- CometBroadcastExchange\n         :     :              :     :                                   +- CometProject\n         :     :              :     :                                      +- CometFilter\n         :     :              :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     +- CometBroadcastExchange\n         :     :              :        +- CometProject\n         :     :              :           +- CometFilter\n         :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :     :              +- CometBroadcastExchange\n         :     :                 +- CometProject\n         :     :                    +- CometBroadcastHashJoin\n         :     :                       :- CometFilter\n         :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                       +- CometBroadcastExchange\n         :     :                          +- CometProject\n         :     :                             +- CometBroadcastHashJoin\n         :     :                                :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                +- CometBroadcastExchange\n         :     :                                   +- CometProject\n         :     :                                      +- CometFilter\n         :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :      
        +- CometHashAggregate\n         :                 +- CometProject\n         :                    +- CometBroadcastHashJoin\n         :                       :- CometProject\n         :                       :  +- CometBroadcastHashJoin\n         :                       :     :- CometFilter\n         :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :                       :     :        +- ReusedSubquery\n         :                       :     +- CometBroadcastExchange\n         :                       :        +- CometProject\n         :                       :           +- CometFilter\n         :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                       +- CometBroadcastExchange\n         :                          +- CometProject\n         :                             +- CometBroadcastHashJoin\n         :                                :- CometFilter\n         :                                :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                +- CometBroadcastExchange\n         :                                   +- CometProject\n         :                                      +- CometBroadcastHashJoin\n         :                                         :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                         +- CometBroadcastExchange\n         :                                            +- CometProject\n         :                                               +- CometFilter\n         :                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometFilter\n                           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                           :     :        +- ReusedSubquery\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometFilter\n                                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                
                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 97 out of 101 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q84.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometBroadcastExchange\n         :  +- CometProject\n         :     +- CometBroadcastHashJoin\n         :        :- CometProject\n         :        :  +- CometBroadcastHashJoin\n         :        :     :- CometProject\n         :        :     :  +- CometBroadcastHashJoin\n         :        :     :     :- CometProject\n         :        :     :     :  +- CometBroadcastHashJoin\n         :        :     :     :     :- CometProject\n         :        :     :     :     :  +- CometFilter\n         :        :     :     :     :     +- CometNativeScan parquet spark_catalog.default.customer\n         :        :     :     :     +- CometBroadcastExchange\n         :        :     :     :        +- CometProject\n         :        :     :     :           +- CometFilter\n         :        :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n         :        :     :     +- CometBroadcastExchange\n         :        :     :        +- CometFilter\n         :        :     :           +- CometNativeScan parquet spark_catalog.default.customer_demographics\n         :        :     +- CometBroadcastExchange\n         :        :        +- CometFilter\n         :        :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n         :        +- CometBroadcastExchange\n         :           +- CometProject\n         :              +- CometFilter\n         :                 +- CometNativeScan parquet spark_catalog.default.income_band\n         +- CometProject\n            +- CometFilter\n               +- CometNativeScan parquet spark_catalog.default.store_returns\n\nComet accelerated 32 out of 32 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q84.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometBroadcastExchange\n         :  +- CometProject\n         :     +- CometBroadcastHashJoin\n         :        :- CometProject\n         :        :  +- CometBroadcastHashJoin\n         :        :     :- CometProject\n         :        :     :  +- CometBroadcastHashJoin\n         :        :     :     :- CometProject\n         :        :     :     :  +- CometBroadcastHashJoin\n         :        :     :     :     :- CometProject\n         :        :     :     :     :  +- CometFilter\n         :        :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :        :     :     :     +- CometBroadcastExchange\n         :        :     :     :        +- CometProject\n         :        :     :     :           +- CometFilter\n         :        :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :        :     :     +- CometBroadcastExchange\n         :        :     :        +- CometFilter\n         :        :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n         :        :     +- CometBroadcastExchange\n         :        :        +- CometFilter\n         :        :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         :        +- CometBroadcastExchange\n         :           +- CometProject\n         :              +- CometFilter\n         :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n         +- CometProject\n            +- CometFilter\n               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n\nComet accelerated 32 out of 32 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q85.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- BroadcastExchange\n                  :     :     :     :     :     :     :  +- Filter\n                  :     :     :     :     :     :     :     +- ColumnarToRow\n                  :     :     :     :     :     :     :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :              +- SubqueryBroadcast\n                  :     :     :     :     :     :     :                 +- BroadcastExchange\n                  :     :     :     :     :     :     :                    +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                       +- CometProject\n                  :     :     :     :     :     :     :                          +- CometFilter\n                  :     :     :     :     :     :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometNativeColumnarToRow\n                  :     :     :     :     :     :        +- CometProject\n                  :     :     :     :     :     :           +- CometFilter\n                  :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.web_returns\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :     :           +- CometFilter\n                  :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.web_page\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n                  :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n               
   :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.reason\n\nComet accelerated 24 out of 52 eligible operators (46%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q85.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometBroadcastExchange\n                  :     :     :     :     :     :     :  +- CometFilter\n                  :     :     :     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometProject\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.reason\n\nComet accelerated 50 out of 52 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q86.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Filter\n                                    :     :  +- ColumnarToRow\n                                    :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :           +- SubqueryBroadcast\n                                    :     :              +- BroadcastExchange\n                                    :     :                 +- CometNativeColumnarToRow\n                                    :     :                    +- CometProject\n                                    :     :                       +- CometFilter\n                                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 12 out of 28 eligible operators (42%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q86.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometExpand\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometFilter\n                                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :        +- SubqueryBroadcast\n                                 :     :           +- BroadcastExchange\n                                 :     :              +- CometNativeColumnarToRow\n                                 :     :                 +- CometProject\n                                 :     :                    +- CometFilter\n                                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 23 out of 28 eligible operators (82%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q87.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  BroadcastHashJoin [COMET: BuildRight with LeftAnti is not supported]\n               :  :- CometNativeColumnarToRow\n               :  :  +- CometHashAggregate\n               :  :     +- CometColumnarExchange\n               :  :        +- HashAggregate\n               :  :           +- Project\n               :  :              +- BroadcastHashJoin\n               :  :                 :- Project\n               :  :                 :  +- BroadcastHashJoin\n               :  :                 :     :- Filter\n               :  :                 :     :  +- ColumnarToRow\n               :  :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :  :                 :     :           +- SubqueryBroadcast\n               :  :                 :     :              +- BroadcastExchange\n               :  :                 :     :                 +- CometNativeColumnarToRow\n               :  :                 :     :                    +- CometProject\n               :  :                 :     :                       +- CometFilter\n               :  :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :                 :     +- BroadcastExchange\n               :  :                 :        +- CometNativeColumnarToRow\n               :  :                 :           +- CometProject\n               :  :                 :              +- CometFilter\n               :  :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :                 +- BroadcastExchange\n               :  :                    +- CometNativeColumnarToRow\n               :  :                       +- CometProject\n               :  :                          +- CometFilter\n               :  :                             +- CometNativeScan parquet spark_catalog.default.customer\n               :  +- BroadcastExchange\n               :     +- CometNativeColumnarToRow\n               :        +- CometHashAggregate\n               :           +- CometColumnarExchange\n               :              +- HashAggregate\n               :                 +- Project\n               :                    +- BroadcastHashJoin\n               :                       :- Project\n               :                       :  +- BroadcastHashJoin\n               :                       :     :- Filter\n               :                       :     :  +- ColumnarToRow\n               :                       :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                       :     :           +- ReusedSubquery\n               :                       :     +- BroadcastExchange\n               :                       :        +- CometNativeColumnarToRow\n               :                       :           +- CometProject\n               :                       :              +- CometFilter\n               :                       :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                       +- BroadcastExchange\n               :                          +- 
CometNativeColumnarToRow\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.customer\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometHashAggregate\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Filter\n                                    :     :  +- ColumnarToRow\n                                    :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :           +- ReusedSubquery\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 28 out of 66 eligible operators (42%). Final plan contains 14 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q87.native_iceberg_compat/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  BroadcastHashJoin [COMET: BuildRight with LeftAnti is not supported]\n               :  :- CometNativeColumnarToRow\n               :  :  +- CometHashAggregate\n               :  :     +- CometExchange\n               :  :        +- CometHashAggregate\n               :  :           +- CometProject\n               :  :              +- CometBroadcastHashJoin\n               :  :                 :- CometProject\n               :  :                 :  +- CometBroadcastHashJoin\n               :  :                 :     :- CometFilter\n               :  :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :  :                 :     :        +- SubqueryBroadcast\n               :  :                 :     :           +- BroadcastExchange\n               :  :                 :     :              +- CometNativeColumnarToRow\n               :  :                 :     :                 +- CometProject\n               :  :                 :     :                    +- CometFilter\n               :  :                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :                 :     +- CometBroadcastExchange\n               :  :                 :        +- CometProject\n               :  :                 :           +- CometFilter\n               :  :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :                 +- CometBroadcastExchange\n               :  :                    +- CometProject\n               :  :                       +- CometFilter\n               :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               :  +- BroadcastExchange\n               :     +- CometNativeColumnarToRow\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometFilter\n               :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                       :     :        +- ReusedSubquery\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- CometFilter\n               :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometHashAggregate\n                        +- 
CometExchange\n                           +- CometHashAggregate\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometProject\n                                    :  +- CometBroadcastHashJoin\n                                    :     :- CometFilter\n                                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :     :        +- ReusedSubquery\n                                    :     +- CometBroadcastExchange\n                                    :        +- CometProject\n                                    :           +- CometFilter\n                                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 55 out of 66 eligible operators (83%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q88.native_datafusion/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :  :  :     +- CometExchange\n:  :  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :  :           +- CometProject\n:  :  :  :  :  :  :              +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :- CometProject\n:  :  :  :  :  :  :                 :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :- CometProject\n:  :  :  :  :  :  :                 :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :     :- CometProject\n:  :  :  :  :  :  :                 :     :     :  +- CometFilter\n:  :  :  :  :  :  :                 :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  :  :  :                 :     :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :     :        +- CometProject\n:  :  :  :  :  :  :                 :     :           +- CometFilter\n:  :  :  :  :  :  :                 :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :  :                 :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :        +- CometProject\n:  :  :  :  :  :  :                 :           +- CometFilter\n:  :  :  :  :  :  :                 :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :  :  :  :  :                 +- CometBroadcastExchange\n:  :  :  :  :  :  :                    +- CometProject\n:  :  :  :  :  :  :                       +- CometFilter\n:  :  :  :  :  :  :                          +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :           +- CometExchange\n:  :  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :  :                 +- CometProject\n:  :  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :- CometProject\n:  :  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :- CometProject\n:  :  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :        +- CometProject\n:  :  :  :  :  :                       :           +- CometFilter\n:  :  :  :  :  :                       :              +- CometNativeScan parquet 
spark_catalog.default.time_dim\n:  :  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :  :                          +- CometProject\n:  :  :  :  :  :                             +- CometFilter\n:  :  :  :  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :           +- CometExchange\n:  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :                 +- CometProject\n:  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :                       :- CometProject\n:  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :- CometProject\n:  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :                       :        +- CometProject\n:  :  :  :  :                       :           +- CometFilter\n:  :  :  :  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :                          +- CometProject\n:  :  :  :  :                             +- CometFilter\n:  :  :  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometExchange\n:  :  :  :              +- CometHashAggregate\n:  :  :  :                 +- CometProject\n:  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :                       :- CometProject\n:  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :- CometProject\n:  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :     :- CometProject\n:  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :                       :     :        +- CometProject\n:  :  :  :                       :     :           +- CometFilter\n:  :  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :                       :        +- CometProject\n:  :  :  :                       :           +- CometFilter\n:  :  :  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :  :                       +- CometBroadcastExchange\n:  :  :  :     
                     +- CometProject\n:  :  :  :                             +- CometFilter\n:  :  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometExchange\n:  :  :              +- CometHashAggregate\n:  :  :                 +- CometProject\n:  :  :                    +- CometBroadcastHashJoin\n:  :  :                       :- CometProject\n:  :  :                       :  +- CometBroadcastHashJoin\n:  :  :                       :     :- CometProject\n:  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :                       :     :     :- CometProject\n:  :  :                       :     :     :  +- CometFilter\n:  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :                       :     :     +- CometBroadcastExchange\n:  :  :                       :     :        +- CometProject\n:  :  :                       :     :           +- CometFilter\n:  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :                       :     +- CometBroadcastExchange\n:  :  :                       :        +- CometProject\n:  :  :                       :           +- CometFilter\n:  :  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :                       +- CometBroadcastExchange\n:  :  :                          +- CometProject\n:  :  :                             +- CometFilter\n:  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometBroadcastHashJoin\n:  :                       :- CometProject\n:  :                       :  +- CometBroadcastHashJoin\n:  :                       :     :- CometProject\n:  :                       :     :  +- CometBroadcastHashJoin\n:  :                       :     :     :- CometProject\n:  :                       :     :     :  +- CometFilter\n:  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :                       :     :     +- CometBroadcastExchange\n:  :                       :     :        +- CometProject\n:  :                       :     :           +- CometFilter\n:  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :                       :     +- CometBroadcastExchange\n:  :                       :        +- CometProject\n:  :                       :           +- CometFilter\n:  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :                       +- CometBroadcastExchange\n:  :                          +- CometProject\n:  :                             +- CometFilter\n:  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometExchange\n:              +- CometHashAggregate\n:                 +- CometProject\n:                    +- CometBroadcastHashJoin\n:        
               :- CometProject\n:                       :  +- CometBroadcastHashJoin\n:                       :     :- CometProject\n:                       :     :  +- CometBroadcastHashJoin\n:                       :     :     :- CometProject\n:                       :     :     :  +- CometFilter\n:                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:                       :     :     +- CometBroadcastExchange\n:                       :     :        +- CometProject\n:                       :     :           +- CometFilter\n:                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:                       :     +- CometBroadcastExchange\n:                       :        +- CometProject\n:                       :           +- CometFilter\n:                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:                       +- CometBroadcastExchange\n:                          +- CometProject\n:                             +- CometFilter\n:                                +- CometNativeScan parquet spark_catalog.default.store\n+- BroadcastExchange\n   +- CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometFilter\n                     :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometNativeScan parquet spark_catalog.default.time_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 192 out of 206 eligible operators (93%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q88.native_iceberg_compat/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :  :  :     +- CometExchange\n:  :  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :  :           +- CometProject\n:  :  :  :  :  :  :              +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :- CometProject\n:  :  :  :  :  :  :                 :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :- CometProject\n:  :  :  :  :  :  :                 :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :     :- CometProject\n:  :  :  :  :  :  :                 :     :     :  +- CometFilter\n:  :  :  :  :  :  :                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  :  :  :                 :     :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :     :        +- CometProject\n:  :  :  :  :  :  :                 :     :           +- CometFilter\n:  :  :  :  :  :  :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :  :                 :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :        +- CometProject\n:  :  :  :  :  :  :                 :           +- CometFilter\n:  :  :  :  :  :  :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :  :  :  :                 +- CometBroadcastExchange\n:  :  :  :  :  :  :                    +- CometProject\n:  :  :  :  :  :  :                       +- CometFilter\n:  :  :  :  :  :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :           +- CometExchange\n:  :  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :  :                 +- CometProject\n:  :  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :- CometProject\n:  :  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :- CometProject\n:  :  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :        +- CometProject\n:  :  :  :  :  :                       :           +- CometFilter\n: 
 :  :  :  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :  :                          +- CometProject\n:  :  :  :  :  :                             +- CometFilter\n:  :  :  :  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :           +- CometExchange\n:  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :                 +- CometProject\n:  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :                       :- CometProject\n:  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :- CometProject\n:  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :                       :        +- CometProject\n:  :  :  :  :                       :           +- CometFilter\n:  :  :  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :                          +- CometProject\n:  :  :  :  :                             +- CometFilter\n:  :  :  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometExchange\n:  :  :  :              +- CometHashAggregate\n:  :  :  :                 +- CometProject\n:  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :                       :- CometProject\n:  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :- CometProject\n:  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :     :- CometProject\n:  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :                       :     :        +- CometProject\n:  :  :  :                       :     :           +- CometFilter\n:  :  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :                       :        +- CometProject\n:  :  :  :          
             :           +- CometFilter\n:  :  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :                       +- CometBroadcastExchange\n:  :  :  :                          +- CometProject\n:  :  :  :                             +- CometFilter\n:  :  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometExchange\n:  :  :              +- CometHashAggregate\n:  :  :                 +- CometProject\n:  :  :                    +- CometBroadcastHashJoin\n:  :  :                       :- CometProject\n:  :  :                       :  +- CometBroadcastHashJoin\n:  :  :                       :     :- CometProject\n:  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :                       :     :     :- CometProject\n:  :  :                       :     :     :  +- CometFilter\n:  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :                       :     :     +- CometBroadcastExchange\n:  :  :                       :     :        +- CometProject\n:  :  :                       :     :           +- CometFilter\n:  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :                       :     +- CometBroadcastExchange\n:  :  :                       :        +- CometProject\n:  :  :                       :           +- CometFilter\n:  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :                       +- CometBroadcastExchange\n:  :  :                          +- CometProject\n:  :  :                             +- CometFilter\n:  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometBroadcastHashJoin\n:  :                       :- CometProject\n:  :                       :  +- CometBroadcastHashJoin\n:  :                       :     :- CometProject\n:  :                       :     :  +- CometBroadcastHashJoin\n:  :                       :     :     :- CometProject\n:  :                       :     :     :  +- CometFilter\n:  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :                       :     :     +- CometBroadcastExchange\n:  :                       :     :        +- CometProject\n:  :                       :     :           +- CometFilter\n:  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :                       :     +- CometBroadcastExchange\n:  :                       :        +- CometProject\n:  :                       :           +- CometFilter\n:  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :                       +- CometBroadcastExchange\n:  :                          +- CometProject\n:  :    
                         +- CometFilter\n:  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometExchange\n:              +- CometHashAggregate\n:                 +- CometProject\n:                    +- CometBroadcastHashJoin\n:                       :- CometProject\n:                       :  +- CometBroadcastHashJoin\n:                       :     :- CometProject\n:                       :     :  +- CometBroadcastHashJoin\n:                       :     :     :- CometProject\n:                       :     :     :  +- CometFilter\n:                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:                       :     :     +- CometBroadcastExchange\n:                       :     :        +- CometProject\n:                       :     :           +- CometFilter\n:                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:                       :     +- CometBroadcastExchange\n:                       :        +- CometProject\n:                       :           +- CometFilter\n:                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:                       +- CometBroadcastExchange\n:                          +- CometProject\n:                             +- CometFilter\n:                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n+- BroadcastExchange\n   +- CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometFilter\n                     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 192 out of 206 eligible operators (93%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q89.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- CometNativeColumnarToRow\n                                    :     :     :  +- CometProject\n                                    :     :     :     +- CometFilter\n                                    :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- Filter\n                                    :     :           +- ColumnarToRow\n                                    :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :                    +- SubqueryBroadcast\n                                    :     :                       +- BroadcastExchange\n                                    :     :                          +- CometNativeColumnarToRow\n                                    :     :                             +- CometProject\n                                    :     :                                +- CometFilter\n                                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q89.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometFilter\n                                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :                 +- SubqueryBroadcast\n                                 :     :                    +- BroadcastExchange\n                                 :     :                       +- CometNativeColumnarToRow\n                                 :     :                          +- CometProject\n                                 :     :                             +- CometFilter\n                                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 27 out of 33 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q9.native_datafusion/extended.txt",
    "content": " Project [COMET: ]\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  +- ReusedSubquery\n+- CometNativeColumnarToRow\n   +- CometFilter\n      +- CometNativeScan parquet spark_catalog.default.reason\n\nComet accelerated 37 out of 53 eligible operators (69%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q9.native_iceberg_compat/extended.txt",
    "content": " Project [COMET: ]\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  +- ReusedSubquery\n+- CometNativeColumnarToRow\n   +- CometFilter\n      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.reason\n\nComet accelerated 37 out of 53 eligible operators (69%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q90.native_datafusion/extended.txt",
    "content": "Project\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n   :- CometNativeColumnarToRow\n   :  +- CometHashAggregate\n   :     +- CometExchange\n   :        +- CometHashAggregate\n   :           +- CometProject\n   :              +- CometBroadcastHashJoin\n   :                 :- CometProject\n   :                 :  +- CometBroadcastHashJoin\n   :                 :     :- CometProject\n   :                 :     :  +- CometBroadcastHashJoin\n   :                 :     :     :- CometProject\n   :                 :     :     :  +- CometFilter\n   :                 :     :     :     +- CometNativeScan parquet spark_catalog.default.web_sales\n   :                 :     :     +- CometBroadcastExchange\n   :                 :     :        +- CometProject\n   :                 :     :           +- CometFilter\n   :                 :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n   :                 :     +- CometBroadcastExchange\n   :                 :        +- CometProject\n   :                 :           +- CometFilter\n   :                 :              +- CometNativeScan parquet spark_catalog.default.time_dim\n   :                 +- CometBroadcastExchange\n   :                    +- CometProject\n   :                       +- CometFilter\n   :                          +- CometNativeScan parquet spark_catalog.default.web_page\n   +- BroadcastExchange\n      +- CometNativeColumnarToRow\n         +- CometHashAggregate\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometFilter\n                        :     :     :     +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.time_dim\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.web_page\n\nComet accelerated 48 out of 51 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q90.native_iceberg_compat/extended.txt",
    "content": "Project\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n   :- CometNativeColumnarToRow\n   :  +- CometHashAggregate\n   :     +- CometExchange\n   :        +- CometHashAggregate\n   :           +- CometProject\n   :              +- CometBroadcastHashJoin\n   :                 :- CometProject\n   :                 :  +- CometBroadcastHashJoin\n   :                 :     :- CometProject\n   :                 :     :  +- CometBroadcastHashJoin\n   :                 :     :     :- CometProject\n   :                 :     :     :  +- CometFilter\n   :                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n   :                 :     :     +- CometBroadcastExchange\n   :                 :     :        +- CometProject\n   :                 :     :           +- CometFilter\n   :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n   :                 :     +- CometBroadcastExchange\n   :                 :        +- CometProject\n   :                 :           +- CometFilter\n   :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n   :                 +- CometBroadcastExchange\n   :                    +- CometProject\n   :                       +- CometFilter\n   :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n   +- BroadcastExchange\n      +- CometNativeColumnarToRow\n         +- CometHashAggregate\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometFilter\n                        :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n\nComet accelerated 48 out of 51 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q91.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- CometNativeColumnarToRow\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- Project\n                        :  +- BroadcastHashJoin\n                        :     :- Project\n                        :     :  +- BroadcastHashJoin\n                        :     :     :- Project\n                        :     :     :  +- BroadcastHashJoin\n                        :     :     :     :- Project\n                        :     :     :     :  +- BroadcastHashJoin\n                        :     :     :     :     :- Project\n                        :     :     :     :     :  +- BroadcastHashJoin\n                        :     :     :     :     :     :- CometNativeColumnarToRow\n                        :     :     :     :     :     :  +- CometProject\n                        :     :     :     :     :     :     +- CometFilter\n                        :     :     :     :     :     :        +- CometNativeScan parquet spark_catalog.default.call_center\n                        :     :     :     :     :     +- BroadcastExchange\n                        :     :     :     :     :        +- Filter\n                        :     :     :     :     :           +- ColumnarToRow\n                        :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :     :     :     :     :                    +- SubqueryBroadcast\n                        :     :     :     :     :                       +- BroadcastExchange\n                        :     :     :     :     :                          +- CometNativeColumnarToRow\n                        :     :     :     :     :                             +- CometProject\n                        :     :     :     :     :                                +- CometFilter\n                        :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     :     :     :     +- BroadcastExchange\n                        :     :     :     :        +- CometNativeColumnarToRow\n                        :     :     :     :           +- CometProject\n                        :     :     :     :              +- CometFilter\n                        :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     :     :     +- BroadcastExchange\n                        :     :     :        +- CometNativeColumnarToRow\n                        :     :     :           +- CometFilter\n                        :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                        :     :     +- BroadcastExchange\n                        :     :        +- CometNativeColumnarToRow\n                        :     :           +- CometProject\n                        :     :              +- CometFilter\n                        :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                        :     +- BroadcastExchange\n                        :        +- CometNativeColumnarToRow\n                        :           +- CometProject\n                        :              +- CometFilter\n            
            :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                        +- BroadcastExchange\n                           +- CometNativeColumnarToRow\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.household_demographics\n\nComet accelerated 23 out of 47 eligible operators (48%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q91.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :- CometProject\n                     :     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :     :- CometProject\n                     :     :     :     :     :     :  +- CometFilter\n                     :     :     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n                     :     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :     :        +- CometFilter\n                     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                     :     :     :     :     :                 +- SubqueryBroadcast\n                     :     :     :     :     :                    +- BroadcastExchange\n                     :     :     :     :     :                       +- CometNativeColumnarToRow\n                     :     :     :     :     :                          +- CometProject\n                     :     :     :     :     :                             +- CometFilter\n                     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :        +- CometProject\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n\nComet accelerated 45 out of 47 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q92.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Filter\n               :     :     :  +- ColumnarToRow\n               :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :           +- SubqueryBroadcast\n               :     :     :              +- BroadcastExchange\n               :     :     :                 +- CometNativeColumnarToRow\n               :     :     :                    +- CometProject\n               :     :     :                       +- CometFilter\n               :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.item\n               :     +- BroadcastExchange\n               :        +- Filter\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Project\n               :                          +- BroadcastHashJoin\n               :                             :- Filter\n               :                             :  +- ColumnarToRow\n               :                             :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                             :           +- ReusedSubquery\n               :                             +- BroadcastExchange\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometProject\n               :                                      +- CometFilter\n               :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 38 eligible operators (36%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q92.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :     :     :        +- SubqueryBroadcast\n               :     :     :           +- BroadcastExchange\n               :     :     :              +- CometNativeColumnarToRow\n               :     :     :                 +- CometProject\n               :     :     :                    +- CometFilter\n               :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :        +- ReusedSubquery\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 35 out of 38 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q93.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometExchange\n                  :     :     +- CometProject\n                  :     :        +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     +- CometSort\n                  :        +- CometExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.reason\n\nComet accelerated 21 out of 21 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q93.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometExchange\n                  :     :     +- CometProject\n                  :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     +- CometSort\n                  :        +- CometExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.reason\n\nComet accelerated 21 out of 21 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q94.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometNativeScan parquet spark_catalog.default.web_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q94.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q95.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometSortMergeJoin\n                        :     :     :  :  :- CometSort\n                        :     :     :  :  :  +- CometExchange\n                        :     :     :  :  :     +- CometProject\n                        :     :     :  :  :        +- CometFilter\n                        :     :     :  :  :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  :  +- CometProject\n                        :     :     :  :     +- CometSortMergeJoin\n                        :     :     :  :        :- CometSort\n                        :     :     :  :        :  +- CometExchange\n                        :     :     :  :        :     +- CometProject\n                        :     :     :  :        :        +- CometFilter\n                        :     :     :  :        :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  :        +- CometSort\n                        :     :     :  :           +- CometExchange\n                        :     :     :  :              +- CometProject\n                        :     :     :  :                 +- CometFilter\n                        :     :     :  :                    +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometProject\n                        :     :     :     +- CometSortMergeJoin\n                        :     :     :        :- CometSort\n                        :     :     :        :  +- CometExchange\n                        :     :     :        :     +- CometProject\n                        :     :     :        :        +- CometFilter\n                        :     :     :        :           +- CometNativeScan parquet spark_catalog.default.web_returns\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometSortMergeJoin\n                        :     :     :              :- CometSort\n                        :     :     :              :  +- CometExchange\n                        :     :     :              :     +- CometProject\n                        :     :     :              :        +- CometFilter\n                        :     :     :              :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :              +- CometSort\n                        :     :     :                 +- CometExchange\n                        :     :     :                    +- CometProject\n                        :     :     :                       +- CometFilter\n                        :     :     :                          +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                
        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 59 out of 61 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q95.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometSortMergeJoin\n                        :     :     :  :  :- CometSort\n                        :     :     :  :  :  +- CometExchange\n                        :     :     :  :  :     +- CometProject\n                        :     :     :  :  :        +- CometFilter\n                        :     :     :  :  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  :  +- CometProject\n                        :     :     :  :     +- CometSortMergeJoin\n                        :     :     :  :        :- CometSort\n                        :     :     :  :        :  +- CometExchange\n                        :     :     :  :        :     +- CometProject\n                        :     :     :  :        :        +- CometFilter\n                        :     :     :  :        :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  :        +- CometSort\n                        :     :     :  :           +- CometExchange\n                        :     :     :  :              +- CometProject\n                        :     :     :  :                 +- CometFilter\n                        :     :     :  :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometProject\n                        :     :     :     +- CometSortMergeJoin\n                        :     :     :        :- CometSort\n                        :     :     :        :  +- CometExchange\n                        :     :     :        :     +- CometProject\n                        :     :     :        :        +- CometFilter\n                        :     :     :        :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometSortMergeJoin\n                        :     :     :              :- CometSort\n                        :     :     :              :  +- CometExchange\n                        :     :     :              :     +- CometProject\n                        :     :     :              :        +- CometFilter\n                        :     :     :              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :              +- CometSort\n                        :     :     :                 +- CometExchange\n                        :     :     :                    +- CometProject\n                        :     :     :                       +- CometFilter\n                        :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :  
   :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 59 out of 61 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q96.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometFilter\n               :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometNativeScan parquet spark_catalog.default.time_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 24 out of 24 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q96.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometFilter\n               :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 24 out of 24 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q97.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometSortMergeJoin\n               :- CometSort\n               :  +- CometHashAggregate\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- ColumnarToRow\n               :                 :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :        +- SubqueryBroadcast\n               :                 :           +- BroadcastExchange\n               :                 :              +- CometNativeColumnarToRow\n               :                 :                 +- CometProject\n               :                 :                    +- CometFilter\n               :                 :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- CometSort\n                  +- CometHashAggregate\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- ColumnarToRow\n                                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :        +- ReusedSubquery\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 20 out of 33 eligible operators (60%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q97.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometSortMergeJoin\n               :- CometSort\n               :  +- CometHashAggregate\n               :     +- CometExchange\n               :        +- CometHashAggregate\n               :           +- CometProject\n               :              +- CometBroadcastHashJoin\n               :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                 :     +- SubqueryBroadcast\n               :                 :        +- BroadcastExchange\n               :                 :           +- CometNativeColumnarToRow\n               :                 :              +- CometProject\n               :                 :                 +- CometFilter\n               :                 :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                 +- CometBroadcastExchange\n               :                    +- CometProject\n               :                       +- CometFilter\n               :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometSort\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                 :     +- ReusedSubquery\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 30 out of 33 eligible operators (90%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q98.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometProject\n   +- CometSort\n      +- CometColumnarExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Filter\n                                          :     :  +- ColumnarToRow\n                                          :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :           +- SubqueryBroadcast\n                                          :     :              +- BroadcastExchange\n                                          :     :                 +- CometNativeColumnarToRow\n                                          :     :                    +- CometProject\n                                          :     :                       +- CometFilter\n                                          :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometProject\n                                          :              +- CometFilter\n                                          :                 +- CometNativeScan parquet spark_catalog.default.item\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 15 out of 29 eligible operators (51%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q98.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometProject\n   +- CometSort\n      +- CometColumnarExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometFilter\n                                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :        +- SubqueryBroadcast\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometProject\n                                       :     :                    +- CometFilter\n                                       :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometProject\n                                       :           +- CometFilter\n                                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 25 out of 29 eligible operators (86%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q99.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometNativeScan parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.call_center\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark3_5/q99.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q1.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Filter\n      :     :     :                    :  +- ColumnarToRow\n      :     :     :                    :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :           +- SubqueryBroadcast\n      :     :     :                    :              +- BroadcastExchange\n      :     :     :                    :                 +- CometNativeColumnarToRow\n      :     :     :                    :                    +- CometProject\n      :     :     :                    :                       +- CometFilter\n      :     :     :                    :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Filter\n      :     :                                         :  +- ColumnarToRow\n      :     :                                         :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :           +- ReusedSubquery\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometProject\n      :     :                                                  +- CometFilter\n      :     :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- 
CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 49 eligible operators (36%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q1.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometFilter\n         :     :     :                 :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :     :                 :        +- SubqueryBroadcast\n         :     :     :                 :           +- BroadcastExchange\n         :     :     :                 :              +- CometNativeColumnarToRow\n         :     :     :                 :                 +- CometProject\n         :     :     :                 :                    +- CometFilter\n         :     :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometFilter\n         :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometHashAggregate\n         :     :                       +- CometExchange\n         :     :                          +- CometHashAggregate\n         :     :                             +- CometProject\n         :     :                                +- CometBroadcastHashJoin\n         :     :                                   :- CometFilter\n         :     :                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :                                   :        +- ReusedSubquery\n         :     :                                   +- CometBroadcastExchange\n         :     :                                      +- CometProject\n         :     :                                         +- CometFilter\n         :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 46 out of 49 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q10.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :- BroadcastHashJoin\n                  :     :        :  :- BroadcastHashJoin\n                  :     :        :  :  :- CometNativeColumnarToRow\n                  :     :        :  :  :  +- CometFilter\n                  :     :        :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :        :  :  +- BroadcastExchange\n                  :     :        :  :     +- Project\n                  :     :        :  :        +- BroadcastHashJoin\n                  :     :        :  :           :- ColumnarToRow\n                  :     :        :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :  :           :        +- SubqueryBroadcast\n                  :     :        :  :           :           +- BroadcastExchange\n                  :     :        :  :           :              +- CometNativeColumnarToRow\n                  :     :        :  :           :                 +- CometProject\n                  :     :        :  :           :                    +- CometFilter\n                  :     :        :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  :           +- BroadcastExchange\n                  :     :        :  :              +- CometNativeColumnarToRow\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- Project\n                  :     :        :        +- BroadcastHashJoin\n                  :     :        :           :- ColumnarToRow\n                  :     :        :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :           :        +- ReusedSubquery\n                  :     :        :           +- BroadcastExchange\n                  :     :        :              +- CometNativeColumnarToRow\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- 
BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 54 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q10.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                  :     :        :  :- CometNativeColumnarToRow\n                  :     :        :  :  +- CometBroadcastHashJoin\n                  :     :        :  :     :- CometFilter\n                  :     :        :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :        :  :     +- CometBroadcastExchange\n                  :     :        :  :        +- CometProject\n                  :     :        :  :           +- CometBroadcastHashJoin\n                  :     :        :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :        :  :              :     +- SubqueryBroadcast\n                  :     :        :  :              :        +- BroadcastExchange\n                  :     :        :  :              :           +- CometNativeColumnarToRow\n                  :     :        :  :              :              +- CometProject\n                  :     :        :  :              :                 +- CometFilter\n                  :     :        :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  :              +- CometBroadcastExchange\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- CometNativeColumnarToRow\n                  :     :        :        +- CometProject\n                  :     :        :           +- CometBroadcastHashJoin\n                  :     :        :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :        :              :     +- ReusedSubquery\n                  :     :        :              +- CometBroadcastExchange\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- CometNativeColumnarToRow\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n          
        :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 54 eligible operators (64%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q11.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Project\n      :     :     :                    :  +- BroadcastHashJoin\n      :     :     :                    :     :- CometNativeColumnarToRow\n      :     :     :                    :     :  +- CometProject\n      :     :     :                    :     :     +- CometFilter\n      :     :     :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :                    :     +- BroadcastExchange\n      :     :     :                    :        +- Filter\n      :     :     :                    :           +- ColumnarToRow\n      :     :     :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :                    +- SubqueryBroadcast\n      :     :     :                    :                       +- BroadcastExchange\n      :     :     :                    :                          +- CometNativeColumnarToRow\n      :     :     :                    :                             +- CometFilter\n      :     :     :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometFilter\n      :     :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     +- BroadcastExchange\n      :     :        +- HashAggregate\n      :     :           +- CometNativeColumnarToRow\n      :     :              +- CometColumnarExchange\n      :     :                 +- HashAggregate\n      :     :                    +- Project\n      :     :                       +- BroadcastHashJoin\n      :     :                          :- Project\n      :     :                          :  +- BroadcastHashJoin\n      :     :                          :     :- CometNativeColumnarToRow\n      :     :                          :     :  +- CometProject\n      :     :                          :     :     +- CometFilter\n      :     :                          :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                          :     +- BroadcastExchange\n      :     :                          :        +- Filter\n      :     :                          :           +- ColumnarToRow\n      :     :                          :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                          :                    +- SubqueryBroadcast\n      :     :                          :                       +- BroadcastExchange\n      :     :                          :                          +- CometNativeColumnarToRow\n      :     :                          :      
                       +- CometFilter\n      :     :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                          +- BroadcastExchange\n      :     :                             +- CometNativeColumnarToRow\n      :     :                                +- CometFilter\n      :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 86 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q11.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometProject\n         :     :     :                 :  +- CometBroadcastHashJoin\n         :     :     :                 :     :- CometProject\n         :     :     :                 :     :  +- CometFilter\n         :     :     :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :                 :     +- CometBroadcastExchange\n         :     :     :                 :        +- CometFilter\n         :     :     :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :                 :                 +- SubqueryBroadcast\n         :     :     :                 :                    +- BroadcastExchange\n         :     :     :                 :                       +- CometNativeColumnarToRow\n         :     :     :                 :                          +- CometFilter\n         :     :     :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometFilter\n         :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometExchange\n         :     :              +- CometHashAggregate\n         :     :                 +- CometProject\n         :     :                    +- CometBroadcastHashJoin\n         :     :                       :- CometProject\n         :     :                       :  +- CometBroadcastHashJoin\n         :     :                       :     :- CometProject\n         :     :                       :     :  +- CometFilter\n         :     :                       :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                       :     +- CometBroadcastExchange\n         :     :                       :        +- CometFilter\n         :     :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                       :                 +- SubqueryBroadcast\n         :     :                       :                    +- BroadcastExchange\n         :     :                       :                       +- CometNativeColumnarToRow\n         :     :                       :                          +- CometFilter\n         :     :                       :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                       +- CometBroadcastExchange\n         :     :                          +- CometFilter\n         :     :                             +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 80 out of 86 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q12.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q12.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q13.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Project\n               :     :     :  +- BroadcastHashJoin\n               :     :     :     :- Project\n               :     :     :     :  +- BroadcastHashJoin\n               :     :     :     :     :- Filter\n               :     :     :     :     :  +- ColumnarToRow\n               :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :     :     :           +- SubqueryBroadcast\n               :     :     :     :     :              +- BroadcastExchange\n               :     :     :     :     :                 +- CometNativeColumnarToRow\n               :     :     :     :     :                    +- CometProject\n               :     :     :     :     :                       +- CometFilter\n               :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     :     :     +- BroadcastExchange\n               :     :     :     :        +- CometNativeColumnarToRow\n               :     :     :     :           +- CometFilter\n               :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :     :     :     +- BroadcastExchange\n               :     :     :        +- CometNativeColumnarToRow\n               :     :     :           +- CometProject\n               :     :     :              +- CometFilter\n               :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     +- BroadcastExchange\n               :        +- CometNativeColumnarToRow\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.household_demographics\n\nComet accelerated 17 out of 38 eligible operators (44%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q13.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometProject\n               :     :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :     :- CometFilter\n               :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     :     :     :        +- SubqueryBroadcast\n               :     :     :     :     :           +- BroadcastExchange\n               :     :     :     :     :              +- CometNativeColumnarToRow\n               :     :     :     :     :                 +- CometProject\n               :     :     :     :     :                    +- CometFilter\n               :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     :     :     +- CometBroadcastExchange\n               :     :     :     :        +- CometFilter\n               :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometProject\n               :     :     :           +- CometFilter\n               :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n               +- CometBroadcastExchange\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n\nComet accelerated 36 out of 38 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q14a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- Project\n                  :  +- Filter\n                  :     :  +- Subquery\n                  :     :     +- HashAggregate\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometColumnarExchange\n                  :     :              +- HashAggregate\n                  :     :                 +- Union\n                  :     :                    :- Project\n                  :     :                    :  +- BroadcastHashJoin\n                  :     :                    :     :- ColumnarToRow\n                  :     :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                    :     :        +- ReusedSubquery\n                  :     :                    :     +- BroadcastExchange\n                  :     :                    :        +- CometNativeColumnarToRow\n                  :     :                    :           +- CometProject\n                  :     :                    :              +- CometFilter\n                  :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                    :- Project\n                  :     :                    :  +- BroadcastHashJoin\n                  :     :                    :     :- ColumnarToRow\n                  :     :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                    :     :        +- ReusedSubquery\n                  :     :                    :     +- BroadcastExchange\n                  :     :                    :        +- CometNativeColumnarToRow\n                  :     :                    :           +- CometProject\n                  :     :                    :              +- CometFilter\n                  :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                    +- Project\n                  :     :                       +- BroadcastHashJoin\n                  :     :                          :- ColumnarToRow\n                  :     :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                          :        +- ReusedSubquery\n                  :     :                          +- BroadcastExchange\n                  :     :                             +- CometNativeColumnarToRow\n                  :     :                                +- CometProject\n                  :     :                                   +- CometFilter\n                  :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- HashAggregate\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometColumnarExchange\n                  :              +- HashAggregate\n                  :                 +- Project\n                  :                    +- BroadcastHashJoin\n       
           :                       :- Project\n                  :                       :  +- BroadcastHashJoin\n                  :                       :     :- BroadcastHashJoin\n                  :                       :     :  :- Filter\n                  :                       :     :  :  +- ColumnarToRow\n                  :                       :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :  :           +- SubqueryBroadcast\n                  :                       :     :  :              +- BroadcastExchange\n                  :                       :     :  :                 +- CometNativeColumnarToRow\n                  :                       :     :  :                    +- CometProject\n                  :                       :     :  :                       +- CometFilter\n                  :                       :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :  +- BroadcastExchange\n                  :                       :     :     +- Project\n                  :                       :     :        +- BroadcastHashJoin\n                  :                       :     :           :- CometNativeColumnarToRow\n                  :                       :     :           :  +- CometFilter\n                  :                       :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :           +- BroadcastExchange\n                  :                       :     :              +- BroadcastHashJoin\n                  :                       :     :                 :- CometNativeColumnarToRow\n                  :                       :     :                 :  +- CometHashAggregate\n                  :                       :     :                 :     +- CometColumnarExchange\n                  :                       :     :                 :        +- HashAggregate\n                  :                       :     :                 :           +- Project\n                  :                       :     :                 :              +- BroadcastHashJoin\n                  :                       :     :                 :                 :- Project\n                  :                       :     :                 :                 :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :     :- Filter\n                  :                       :     :                 :                 :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :     :           +- SubqueryBroadcast\n                  :                       :     :                 :                 :     :              +- BroadcastExchange\n                  :                       :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :     :                    +- CometProject\n                  :                       :     :             
    :                 :     :                       +- CometFilter\n                  :                       :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 :     +- BroadcastExchange\n                  :                       :     :                 :                 :        +- BroadcastHashJoin\n                  :                       :     :                 :                 :           :- CometNativeColumnarToRow\n                  :                       :     :                 :                 :           :  +- CometFilter\n                  :                       :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :           +- BroadcastExchange\n                  :                       :     :                 :                 :              +- Project\n                  :                       :     :                 :                 :                 +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :- Project\n                  :                       :     :                 :                 :                    :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :     :- Filter\n                  :                       :     :                 :                 :                    :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :                    :     :           +- ReusedSubquery\n                  :                       :     :                 :                 :                    :     +- BroadcastExchange\n                  :                       :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :                    :           +- CometFilter\n                  :                       :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :                    +- BroadcastExchange\n                  :                       :     :                 :                 :                       +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :                          +- CometProject\n                  :                       :     :                 :                 :                             +- CometFilter\n                  :                       :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 +- BroadcastExchange\n                  :                       :     :       
          :                    +- CometNativeColumnarToRow\n                  :                       :     :                 :                       +- CometProject\n                  :                       :     :                 :                          +- CometFilter\n                  :                       :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 +- BroadcastExchange\n                  :                       :     :                    +- Project\n                  :                       :     :                       +- BroadcastHashJoin\n                  :                       :     :                          :- Project\n                  :                       :     :                          :  +- BroadcastHashJoin\n                  :                       :     :                          :     :- Filter\n                  :                       :     :                          :     :  +- ColumnarToRow\n                  :                       :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                          :     :           +- ReusedSubquery\n                  :                       :     :                          :     +- BroadcastExchange\n                  :                       :     :                          :        +- CometNativeColumnarToRow\n                  :                       :     :                          :           +- CometFilter\n                  :                       :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                          +- BroadcastExchange\n                  :                       :     :                             +- CometNativeColumnarToRow\n                  :                       :     :                                +- CometProject\n                  :                       :     :                                   +- CometFilter\n                  :                       :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     +- BroadcastExchange\n                  :                       :        +- BroadcastHashJoin\n                  :                       :           :- CometNativeColumnarToRow\n                  :                       :           :  +- CometFilter\n                  :                       :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :           +- BroadcastExchange\n                  :                       :              +- Project\n                  :                       :                 +- BroadcastHashJoin\n                  :                       :                    :- CometNativeColumnarToRow\n                  :                       :                    :  +- CometFilter\n                  :                       :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                    +- BroadcastExchange\n                  :                       :                       +- BroadcastHashJoin\n                  :            
           :                          :- CometNativeColumnarToRow\n                  :                       :                          :  +- CometHashAggregate\n                  :                       :                          :     +- CometColumnarExchange\n                  :                       :                          :        +- HashAggregate\n                  :                       :                          :           +- Project\n                  :                       :                          :              +- BroadcastHashJoin\n                  :                       :                          :                 :- Project\n                  :                       :                          :                 :  +- BroadcastHashJoin\n                  :                       :                          :                 :     :- Filter\n                  :                       :                          :                 :     :  +- ColumnarToRow\n                  :                       :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :     :           +- SubqueryBroadcast\n                  :                       :                          :                 :     :              +- BroadcastExchange\n                  :                       :                          :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :                          :                 :     :                    +- CometProject\n                  :                       :                          :                 :     :                       +- CometFilter\n                  :                       :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 :     +- BroadcastExchange\n                  :                       :                          :                 :        +- BroadcastHashJoin\n                  :                       :                          :                 :           :- CometNativeColumnarToRow\n                  :                       :                          :                 :           :  +- CometFilter\n                  :                       :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :           +- BroadcastExchange\n                  :                       :                          :                 :              +- Project\n                  :                       :                          :                 :                 +- BroadcastHashJoin\n                  :                       :                          :                 :                    :- Project\n                  :                       :                          :                 :                    :  +- BroadcastHashJoin\n                  :                       :                          :                 :                    :     :- Filter\n                  :                       :                          :                 :                    :     :  +- 
ColumnarToRow\n                  :                       :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :                    :     :           +- ReusedSubquery\n                  :                       :                          :                 :                    :     +- BroadcastExchange\n                  :                       :                          :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :                          :                 :                    :           +- CometFilter\n                  :                       :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :                    +- BroadcastExchange\n                  :                       :                          :                 :                       +- CometNativeColumnarToRow\n                  :                       :                          :                 :                          +- CometProject\n                  :                       :                          :                 :                             +- CometFilter\n                  :                       :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 +- BroadcastExchange\n                  :                       :                          :                    +- CometNativeColumnarToRow\n                  :                       :                          :                       +- CometProject\n                  :                       :                          :                          +- CometFilter\n                  :                       :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          +- BroadcastExchange\n                  :                       :                             +- Project\n                  :                       :                                +- BroadcastHashJoin\n                  :                       :                                   :- Project\n                  :                       :                                   :  +- BroadcastHashJoin\n                  :                       :                                   :     :- Filter\n                  :                       :                                   :     :  +- ColumnarToRow\n                  :                       :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                                   :     :           +- ReusedSubquery\n                  :                       :                                   :     +- BroadcastExchange\n                  :                       :                                   :        +- CometNativeColumnarToRow\n          
        :                       :                                   :           +- CometFilter\n                  :                       :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                                   +- BroadcastExchange\n                  :                       :                                      +- CometNativeColumnarToRow\n                  :                       :                                         +- CometProject\n                  :                       :                                            +- CometFilter\n                  :                       :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       +- BroadcastExchange\n                  :                          +- CometNativeColumnarToRow\n                  :                             +- CometProject\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :- Project\n                  :  +- Filter\n                  :     :  +- ReusedSubquery\n                  :     +- HashAggregate\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometColumnarExchange\n                  :              +- HashAggregate\n                  :                 +- Project\n                  :                    +- BroadcastHashJoin\n                  :                       :- Project\n                  :                       :  +- BroadcastHashJoin\n                  :                       :     :- BroadcastHashJoin\n                  :                       :     :  :- Filter\n                  :                       :     :  :  +- ColumnarToRow\n                  :                       :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :  :           +- ReusedSubquery\n                  :                       :     :  +- BroadcastExchange\n                  :                       :     :     +- Project\n                  :                       :     :        +- BroadcastHashJoin\n                  :                       :     :           :- CometNativeColumnarToRow\n                  :                       :     :           :  +- CometFilter\n                  :                       :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :           +- BroadcastExchange\n                  :                       :     :              +- BroadcastHashJoin\n                  :                       :     :                 :- CometNativeColumnarToRow\n                  :                       :     :                 :  +- CometHashAggregate\n                  :                       :     :                 :     +- CometColumnarExchange\n                  :                       :     :                 :        +- HashAggregate\n                  :                       :     :                 :           +- Project\n                  :                       :     :                 :              +- BroadcastHashJoin\n                  :                       :     :                 :                 :- Project\n        
          :                       :     :                 :                 :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :     :- Filter\n                  :                       :     :                 :                 :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :     :           +- SubqueryBroadcast\n                  :                       :     :                 :                 :     :              +- BroadcastExchange\n                  :                       :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :     :                    +- CometProject\n                  :                       :     :                 :                 :     :                       +- CometFilter\n                  :                       :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 :     +- BroadcastExchange\n                  :                       :     :                 :                 :        +- BroadcastHashJoin\n                  :                       :     :                 :                 :           :- CometNativeColumnarToRow\n                  :                       :     :                 :                 :           :  +- CometFilter\n                  :                       :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :           +- BroadcastExchange\n                  :                       :     :                 :                 :              +- Project\n                  :                       :     :                 :                 :                 +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :- Project\n                  :                       :     :                 :                 :                    :  +- BroadcastHashJoin\n                  :                       :     :                 :                 :                    :     :- Filter\n                  :                       :     :                 :                 :                    :     :  +- ColumnarToRow\n                  :                       :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                 :                 :                    :     :           +- ReusedSubquery\n                  :                       :     :                 :                 :                    :     +- BroadcastExchange\n                  :                       :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :     :    
             :                 :                    :           +- CometFilter\n                  :                       :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                 :                 :                    +- BroadcastExchange\n                  :                       :     :                 :                 :                       +- CometNativeColumnarToRow\n                  :                       :     :                 :                 :                          +- CometProject\n                  :                       :     :                 :                 :                             +- CometFilter\n                  :                       :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 :                 +- BroadcastExchange\n                  :                       :     :                 :                    +- CometNativeColumnarToRow\n                  :                       :     :                 :                       +- CometProject\n                  :                       :     :                 :                          +- CometFilter\n                  :                       :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :     :                 +- BroadcastExchange\n                  :                       :     :                    +- Project\n                  :                       :     :                       +- BroadcastHashJoin\n                  :                       :     :                          :- Project\n                  :                       :     :                          :  +- BroadcastHashJoin\n                  :                       :     :                          :     :- Filter\n                  :                       :     :                          :     :  +- ColumnarToRow\n                  :                       :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :     :                          :     :           +- ReusedSubquery\n                  :                       :     :                          :     +- BroadcastExchange\n                  :                       :     :                          :        +- CometNativeColumnarToRow\n                  :                       :     :                          :           +- CometFilter\n                  :                       :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :     :                          +- BroadcastExchange\n                  :                       :     :                             +- CometNativeColumnarToRow\n                  :                       :     :                                +- CometProject\n                  :                       :     :                                   +- CometFilter\n                  :                       :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :         
              :     +- BroadcastExchange\n                  :                       :        +- BroadcastHashJoin\n                  :                       :           :- CometNativeColumnarToRow\n                  :                       :           :  +- CometFilter\n                  :                       :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :           +- BroadcastExchange\n                  :                       :              +- Project\n                  :                       :                 +- BroadcastHashJoin\n                  :                       :                    :- CometNativeColumnarToRow\n                  :                       :                    :  +- CometFilter\n                  :                       :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                    +- BroadcastExchange\n                  :                       :                       +- BroadcastHashJoin\n                  :                       :                          :- CometNativeColumnarToRow\n                  :                       :                          :  +- CometHashAggregate\n                  :                       :                          :     +- CometColumnarExchange\n                  :                       :                          :        +- HashAggregate\n                  :                       :                          :           +- Project\n                  :                       :                          :              +- BroadcastHashJoin\n                  :                       :                          :                 :- Project\n                  :                       :                          :                 :  +- BroadcastHashJoin\n                  :                       :                          :                 :     :- Filter\n                  :                       :                          :                 :     :  +- ColumnarToRow\n                  :                       :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :     :           +- SubqueryBroadcast\n                  :                       :                          :                 :     :              +- BroadcastExchange\n                  :                       :                          :                 :     :                 +- CometNativeColumnarToRow\n                  :                       :                          :                 :     :                    +- CometProject\n                  :                       :                          :                 :     :                       +- CometFilter\n                  :                       :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 :     +- BroadcastExchange\n                  :                       :                          :                 :        +- BroadcastHashJoin\n                  :                       :                          :                 :           :- CometNativeColumnarToRow\n                  :      
                 :                          :                 :           :  +- CometFilter\n                  :                       :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :           +- BroadcastExchange\n                  :                       :                          :                 :              +- Project\n                  :                       :                          :                 :                 +- BroadcastHashJoin\n                  :                       :                          :                 :                    :- Project\n                  :                       :                          :                 :                    :  +- BroadcastHashJoin\n                  :                       :                          :                 :                    :     :- Filter\n                  :                       :                          :                 :                    :     :  +- ColumnarToRow\n                  :                       :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                          :                 :                    :     :           +- ReusedSubquery\n                  :                       :                          :                 :                    :     +- BroadcastExchange\n                  :                       :                          :                 :                    :        +- CometNativeColumnarToRow\n                  :                       :                          :                 :                    :           +- CometFilter\n                  :                       :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                          :                 :                    +- BroadcastExchange\n                  :                       :                          :                 :                       +- CometNativeColumnarToRow\n                  :                       :                          :                 :                          +- CometProject\n                  :                       :                          :                 :                             +- CometFilter\n                  :                       :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          :                 +- BroadcastExchange\n                  :                       :                          :                    +- CometNativeColumnarToRow\n                  :                       :                          :                       +- CometProject\n                  :                       :                          :                          +- CometFilter\n                  :                       :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       :                          +- BroadcastExchange\n                  :     
                  :                             +- Project\n                  :                       :                                +- BroadcastHashJoin\n                  :                       :                                   :- Project\n                  :                       :                                   :  +- BroadcastHashJoin\n                  :                       :                                   :     :- Filter\n                  :                       :                                   :     :  +- ColumnarToRow\n                  :                       :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :                                   :     :           +- ReusedSubquery\n                  :                       :                                   :     +- BroadcastExchange\n                  :                       :                                   :        +- CometNativeColumnarToRow\n                  :                       :                                   :           +- CometFilter\n                  :                       :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                  :                       :                                   +- BroadcastExchange\n                  :                       :                                      +- CometNativeColumnarToRow\n                  :                       :                                         +- CometProject\n                  :                       :                                            +- CometFilter\n                  :                       :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                       +- BroadcastExchange\n                  :                          +- CometNativeColumnarToRow\n                  :                             +- CometProject\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- Project\n                     +- Filter\n                        :  +- ReusedSubquery\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- BroadcastHashJoin\n                                          :     :  :- Filter\n                                          :     :  :  +- ColumnarToRow\n                                          :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :  :           +- ReusedSubquery\n                                          :     :  +- BroadcastExchange\n                                          :     :     +- Project\n                                          :     :        +- BroadcastHashJoin\n                      
                    :     :           :- CometNativeColumnarToRow\n                                          :     :           :  +- CometFilter\n                                          :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :           +- BroadcastExchange\n                                          :     :              +- BroadcastHashJoin\n                                          :     :                 :- CometNativeColumnarToRow\n                                          :     :                 :  +- CometHashAggregate\n                                          :     :                 :     +- CometColumnarExchange\n                                          :     :                 :        +- HashAggregate\n                                          :     :                 :           +- Project\n                                          :     :                 :              +- BroadcastHashJoin\n                                          :     :                 :                 :- Project\n                                          :     :                 :                 :  +- BroadcastHashJoin\n                                          :     :                 :                 :     :- Filter\n                                          :     :                 :                 :     :  +- ColumnarToRow\n                                          :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                 :                 :     :           +- SubqueryBroadcast\n                                          :     :                 :                 :     :              +- BroadcastExchange\n                                          :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                          :     :                 :                 :     :                    +- CometProject\n                                          :     :                 :                 :     :                       +- CometFilter\n                                          :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 :                 :     +- BroadcastExchange\n                                          :     :                 :                 :        +- BroadcastHashJoin\n                                          :     :                 :                 :           :- CometNativeColumnarToRow\n                                          :     :                 :                 :           :  +- CometFilter\n                                          :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :                 :                 :           +- BroadcastExchange\n                                          :     :                 :                 :              +- Project\n                                          :     :                 :                 :                 +- BroadcastHashJoin\n                                          :     :                 :                 :                    :- 
Project\n                                          :     :                 :                 :                    :  +- BroadcastHashJoin\n                                          :     :                 :                 :                    :     :- Filter\n                                          :     :                 :                 :                    :     :  +- ColumnarToRow\n                                          :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                 :                 :                    :     :           +- ReusedSubquery\n                                          :     :                 :                 :                    :     +- BroadcastExchange\n                                          :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                          :     :                 :                 :                    :           +- CometFilter\n                                          :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :                 :                 :                    +- BroadcastExchange\n                                          :     :                 :                 :                       +- CometNativeColumnarToRow\n                                          :     :                 :                 :                          +- CometProject\n                                          :     :                 :                 :                             +- CometFilter\n                                          :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 :                 +- BroadcastExchange\n                                          :     :                 :                    +- CometNativeColumnarToRow\n                                          :     :                 :                       +- CometProject\n                                          :     :                 :                          +- CometFilter\n                                          :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 +- BroadcastExchange\n                                          :     :                    +- Project\n                                          :     :                       +- BroadcastHashJoin\n                                          :     :                          :- Project\n                                          :     :                          :  +- BroadcastHashJoin\n                                          :     :                          :     :- Filter\n                                          :     :                          :     :  +- ColumnarToRow\n                                          :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                         
 :     :                          :     :           +- ReusedSubquery\n                                          :     :                          :     +- BroadcastExchange\n                                          :     :                          :        +- CometNativeColumnarToRow\n                                          :     :                          :           +- CometFilter\n                                          :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :                          +- BroadcastExchange\n                                          :     :                             +- CometNativeColumnarToRow\n                                          :     :                                +- CometProject\n                                          :     :                                   +- CometFilter\n                                          :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- BroadcastHashJoin\n                                          :           :- CometNativeColumnarToRow\n                                          :           :  +- CometFilter\n                                          :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :           +- BroadcastExchange\n                                          :              +- Project\n                                          :                 +- BroadcastHashJoin\n                                          :                    :- CometNativeColumnarToRow\n                                          :                    :  +- CometFilter\n                                          :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    +- BroadcastExchange\n                                          :                       +- BroadcastHashJoin\n                                          :                          :- CometNativeColumnarToRow\n                                          :                          :  +- CometHashAggregate\n                                          :                          :     +- CometColumnarExchange\n                                          :                          :        +- HashAggregate\n                                          :                          :           +- Project\n                                          :                          :              +- BroadcastHashJoin\n                                          :                          :                 :- Project\n                                          :                          :                 :  +- BroadcastHashJoin\n                                          :                          :                 :     :- Filter\n                                          :                          :                 :     :  +- ColumnarToRow\n                                          :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                          :                 :     :           +- 
SubqueryBroadcast\n                                          :                          :                 :     :              +- BroadcastExchange\n                                          :                          :                 :     :                 +- CometNativeColumnarToRow\n                                          :                          :                 :     :                    +- CometProject\n                                          :                          :                 :     :                       +- CometFilter\n                                          :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          :                 :     +- BroadcastExchange\n                                          :                          :                 :        +- BroadcastHashJoin\n                                          :                          :                 :           :- CometNativeColumnarToRow\n                                          :                          :                 :           :  +- CometFilter\n                                          :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                          :                 :           +- BroadcastExchange\n                                          :                          :                 :              +- Project\n                                          :                          :                 :                 +- BroadcastHashJoin\n                                          :                          :                 :                    :- Project\n                                          :                          :                 :                    :  +- BroadcastHashJoin\n                                          :                          :                 :                    :     :- Filter\n                                          :                          :                 :                    :     :  +- ColumnarToRow\n                                          :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                          :                 :                    :     :           +- ReusedSubquery\n                                          :                          :                 :                    :     +- BroadcastExchange\n                                          :                          :                 :                    :        +- CometNativeColumnarToRow\n                                          :                          :                 :                    :           +- CometFilter\n                                          :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                          :                 :                    +- BroadcastExchange\n                                          :                          :                 :                       +- CometNativeColumnarToRow\n                                          :     
                     :                 :                          +- CometProject\n                                          :                          :                 :                             +- CometFilter\n                                          :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          :                 +- BroadcastExchange\n                                          :                          :                    +- CometNativeColumnarToRow\n                                          :                          :                       +- CometProject\n                                          :                          :                          +- CometFilter\n                                          :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          +- BroadcastExchange\n                                          :                             +- Project\n                                          :                                +- BroadcastHashJoin\n                                          :                                   :- Project\n                                          :                                   :  +- BroadcastHashJoin\n                                          :                                   :     :- Filter\n                                          :                                   :     :  +- ColumnarToRow\n                                          :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                                   :     :           +- ReusedSubquery\n                                          :                                   :     +- BroadcastExchange\n                                          :                                   :        +- CometNativeColumnarToRow\n                                          :                                   :           +- CometFilter\n                                          :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                                   +- BroadcastExchange\n                                          :                                      +- CometNativeColumnarToRow\n                                          :                                         +- CometProject\n                                          :                                            +- CometFilter\n                                          :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 164 out of 458 eligible operators (35%). Final plan contains 93 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q14a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometProject\n                  :  +- CometFilter\n                  :     :  +- Subquery\n                  :     :     +- CometNativeColumnarToRow\n                  :     :        +- CometHashAggregate\n                  :     :           +- CometExchange\n                  :     :              +- CometHashAggregate\n                  :     :                 +- CometUnion\n                  :     :                    :- CometProject\n                  :     :                    :  +- CometBroadcastHashJoin\n                  :     :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :                    :     :     +- ReusedSubquery\n                  :     :                    :     +- CometBroadcastExchange\n                  :     :                    :        +- CometProject\n                  :     :                    :           +- CometFilter\n                  :     :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                    :- CometProject\n                  :     :                    :  +- CometBroadcastHashJoin\n                  :     :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     :     +- ReusedSubquery\n                  :     :                    :     +- CometBroadcastExchange\n                  :     :                    :        +- CometProject\n                  :     :                    :           +- CometFilter\n                  :     :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                    +- CometProject\n                  :     :                       +- CometBroadcastHashJoin\n                  :     :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :                          :     +- ReusedSubquery\n                  :     :                          +- CometBroadcastExchange\n                  :     :                             +- CometProject\n                  :     :                                +- CometFilter\n                  :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometHashAggregate\n                  :        +- CometExchange\n                  :           +- CometHashAggregate\n                  :              +- CometProject\n                  :                 +- CometBroadcastHashJoin\n                  :                    :- CometProject\n                  :                    :  +- CometBroadcastHashJoin\n                  :                    :     :- CometBroadcastHashJoin\n                  :                    :     :  :- CometFilter\n                  :                    :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :     :  :        +- SubqueryBroadcast\n                  :                    :     :  :           +- BroadcastExchange\n                  :   
                 :     :  :              +- CometNativeColumnarToRow\n                  :                    :     :  :                 +- CometProject\n                  :                    :     :  :                    +- CometFilter\n                  :                    :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :  +- CometBroadcastExchange\n                  :                    :     :     +- CometProject\n                  :                    :     :        +- CometBroadcastHashJoin\n                  :                    :     :           :- CometFilter\n                  :                    :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :           +- CometBroadcastExchange\n                  :                    :     :              +- CometBroadcastHashJoin\n                  :                    :     :                 :- CometHashAggregate\n                  :                    :     :                 :  +- CometExchange\n                  :                    :     :                 :     +- CometHashAggregate\n                  :                    :     :                 :        +- CometProject\n                  :                    :     :                 :           +- CometBroadcastHashJoin\n                  :                    :     :                 :              :- CometProject\n                  :                    :     :                 :              :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :     :- CometFilter\n                  :                    :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :     :                 :              :     :        +- SubqueryBroadcast\n                  :                    :     :                 :              :     :           +- BroadcastExchange\n                  :                    :     :                 :              :     :              +- CometNativeColumnarToRow\n                  :                    :     :                 :              :     :                 +- CometProject\n                  :                    :     :                 :              :     :                    +- CometFilter\n                  :                    :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              :     +- CometBroadcastExchange\n                  :                    :     :                 :              :        +- CometBroadcastHashJoin\n                  :                    :     :                 :              :           :- CometFilter\n                  :                    :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :           +- CometBroadcastExchange\n                  :                    :     :                 :              :              +- CometProject\n                  :                    :     :                 :              :                 +- CometBroadcastHashJoin\n           
       :                    :     :                 :              :                    :- CometProject\n                  :                    :     :                 :              :                    :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :                    :     :- CometFilter\n                  :                    :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :     :                 :              :                    :     :        +- ReusedSubquery\n                  :                    :     :                 :              :                    :     +- CometBroadcastExchange\n                  :                    :     :                 :              :                    :        +- CometFilter\n                  :                    :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :                    +- CometBroadcastExchange\n                  :                    :     :                 :              :                       +- CometProject\n                  :                    :     :                 :              :                          +- CometFilter\n                  :                    :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              +- CometBroadcastExchange\n                  :                    :     :                 :                 +- CometProject\n                  :                    :     :                 :                    +- CometFilter\n                  :                    :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 +- CometBroadcastExchange\n                  :                    :     :                    +- CometProject\n                  :                    :     :                       +- CometBroadcastHashJoin\n                  :                    :     :                          :- CometProject\n                  :                    :     :                          :  +- CometBroadcastHashJoin\n                  :                    :     :                          :     :- CometFilter\n                  :                    :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :     :                          :     :        +- ReusedSubquery\n                  :                    :     :                          :     +- CometBroadcastExchange\n                  :                    :     :                          :        +- CometFilter\n                  :                    :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                          +- CometBroadcastExchange\n                  :                    :     :                             +- CometProject\n                  :                    :     :    
                            +- CometFilter\n                  :                    :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     +- CometBroadcastExchange\n                  :                    :        +- CometBroadcastHashJoin\n                  :                    :           :- CometFilter\n                  :                    :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :           +- CometBroadcastExchange\n                  :                    :              +- CometProject\n                  :                    :                 +- CometBroadcastHashJoin\n                  :                    :                    :- CometFilter\n                  :                    :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                    +- CometBroadcastExchange\n                  :                    :                       +- CometBroadcastHashJoin\n                  :                    :                          :- CometHashAggregate\n                  :                    :                          :  +- CometExchange\n                  :                    :                          :     +- CometHashAggregate\n                  :                    :                          :        +- CometProject\n                  :                    :                          :           +- CometBroadcastHashJoin\n                  :                    :                          :              :- CometProject\n                  :                    :                          :              :  +- CometBroadcastHashJoin\n                  :                    :                          :              :     :- CometFilter\n                  :                    :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :                          :              :     :        +- SubqueryBroadcast\n                  :                    :                          :              :     :           +- BroadcastExchange\n                  :                    :                          :              :     :              +- CometNativeColumnarToRow\n                  :                    :                          :              :     :                 +- CometProject\n                  :                    :                          :              :     :                    +- CometFilter\n                  :                    :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              :     +- CometBroadcastExchange\n                  :                    :                          :              :        +- CometBroadcastHashJoin\n                  :                    :                          :              :           :- CometFilter\n                  :                    :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :           +- CometBroadcastExchange\n       
           :                    :                          :              :              +- CometProject\n                  :                    :                          :              :                 +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :- CometProject\n                  :                    :                          :              :                    :  +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :     :- CometFilter\n                  :                    :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :                          :              :                    :     :        +- ReusedSubquery\n                  :                    :                          :              :                    :     +- CometBroadcastExchange\n                  :                    :                          :              :                    :        +- CometFilter\n                  :                    :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :                    +- CometBroadcastExchange\n                  :                    :                          :              :                       +- CometProject\n                  :                    :                          :              :                          +- CometFilter\n                  :                    :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              +- CometBroadcastExchange\n                  :                    :                          :                 +- CometProject\n                  :                    :                          :                    +- CometFilter\n                  :                    :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          +- CometBroadcastExchange\n                  :                    :                             +- CometProject\n                  :                    :                                +- CometBroadcastHashJoin\n                  :                    :                                   :- CometProject\n                  :                    :                                   :  +- CometBroadcastHashJoin\n                  :                    :                                   :     :- CometFilter\n                  :                    :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :                                   :     :        +- ReusedSubquery\n                  :                    :                                   :     +- CometBroadcastExchange\n                  :                    :                                   :        +- CometFilter\n                  :                    :                                   : 
          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                                   +- CometBroadcastExchange\n                  :                    :                                      +- CometProject\n                  :                    :                                         +- CometFilter\n                  :                    :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    +- CometBroadcastExchange\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :- CometProject\n                  :  +- CometFilter\n                  :     :  +- ReusedSubquery\n                  :     +- CometHashAggregate\n                  :        +- CometExchange\n                  :           +- CometHashAggregate\n                  :              +- CometProject\n                  :                 +- CometBroadcastHashJoin\n                  :                    :- CometProject\n                  :                    :  +- CometBroadcastHashJoin\n                  :                    :     :- CometBroadcastHashJoin\n                  :                    :     :  :- CometFilter\n                  :                    :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :     :  :        +- ReusedSubquery\n                  :                    :     :  +- CometBroadcastExchange\n                  :                    :     :     +- CometProject\n                  :                    :     :        +- CometBroadcastHashJoin\n                  :                    :     :           :- CometFilter\n                  :                    :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :           +- CometBroadcastExchange\n                  :                    :     :              +- CometBroadcastHashJoin\n                  :                    :     :                 :- CometHashAggregate\n                  :                    :     :                 :  +- CometExchange\n                  :                    :     :                 :     +- CometHashAggregate\n                  :                    :     :                 :        +- CometProject\n                  :                    :     :                 :           +- CometBroadcastHashJoin\n                  :                    :     :                 :              :- CometProject\n                  :                    :     :                 :              :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :     :- CometFilter\n                  :                    :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :     :                 :              :     :        +- SubqueryBroadcast\n                  :                    :     :                 :              :     :           +- BroadcastExchange\n                  :                    :     :                 :              :     :              +- 
CometNativeColumnarToRow\n                  :                    :     :                 :              :     :                 +- CometProject\n                  :                    :     :                 :              :     :                    +- CometFilter\n                  :                    :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              :     +- CometBroadcastExchange\n                  :                    :     :                 :              :        +- CometBroadcastHashJoin\n                  :                    :     :                 :              :           :- CometFilter\n                  :                    :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :           +- CometBroadcastExchange\n                  :                    :     :                 :              :              +- CometProject\n                  :                    :     :                 :              :                 +- CometBroadcastHashJoin\n                  :                    :     :                 :              :                    :- CometProject\n                  :                    :     :                 :              :                    :  +- CometBroadcastHashJoin\n                  :                    :     :                 :              :                    :     :- CometFilter\n                  :                    :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :     :                 :              :                    :     :        +- ReusedSubquery\n                  :                    :     :                 :              :                    :     +- CometBroadcastExchange\n                  :                    :     :                 :              :                    :        +- CometFilter\n                  :                    :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                 :              :                    +- CometBroadcastExchange\n                  :                    :     :                 :              :                       +- CometProject\n                  :                    :     :                 :              :                          +- CometFilter\n                  :                    :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 :              +- CometBroadcastExchange\n                  :                    :     :                 :                 +- CometProject\n                  :                    :     :                 :                    +- CometFilter\n                  :                    :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     :                 +- CometBroadcastExchange\n                  : 
                   :     :                    +- CometProject\n                  :                    :     :                       +- CometBroadcastHashJoin\n                  :                    :     :                          :- CometProject\n                  :                    :     :                          :  +- CometBroadcastHashJoin\n                  :                    :     :                          :     :- CometFilter\n                  :                    :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :     :                          :     :        +- ReusedSubquery\n                  :                    :     :                          :     +- CometBroadcastExchange\n                  :                    :     :                          :        +- CometFilter\n                  :                    :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :     :                          +- CometBroadcastExchange\n                  :                    :     :                             +- CometProject\n                  :                    :     :                                +- CometFilter\n                  :                    :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :     +- CometBroadcastExchange\n                  :                    :        +- CometBroadcastHashJoin\n                  :                    :           :- CometFilter\n                  :                    :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :           +- CometBroadcastExchange\n                  :                    :              +- CometProject\n                  :                    :                 +- CometBroadcastHashJoin\n                  :                    :                    :- CometFilter\n                  :                    :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                    +- CometBroadcastExchange\n                  :                    :                       +- CometBroadcastHashJoin\n                  :                    :                          :- CometHashAggregate\n                  :                    :                          :  +- CometExchange\n                  :                    :                          :     +- CometHashAggregate\n                  :                    :                          :        +- CometProject\n                  :                    :                          :           +- CometBroadcastHashJoin\n                  :                    :                          :              :- CometProject\n                  :                    :                          :              :  +- CometBroadcastHashJoin\n                  :                    :                          :              :     :- CometFilter\n                  :                    :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                    :                          :              :     :        +- SubqueryBroadcast\n             
     :                    :                          :              :     :           +- BroadcastExchange\n                  :                    :                          :              :     :              +- CometNativeColumnarToRow\n                  :                    :                          :              :     :                 +- CometProject\n                  :                    :                          :              :     :                    +- CometFilter\n                  :                    :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              :     +- CometBroadcastExchange\n                  :                    :                          :              :        +- CometBroadcastHashJoin\n                  :                    :                          :              :           :- CometFilter\n                  :                    :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :           +- CometBroadcastExchange\n                  :                    :                          :              :              +- CometProject\n                  :                    :                          :              :                 +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :- CometProject\n                  :                    :                          :              :                    :  +- CometBroadcastHashJoin\n                  :                    :                          :              :                    :     :- CometFilter\n                  :                    :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :                    :                          :              :                    :     :        +- ReusedSubquery\n                  :                    :                          :              :                    :     +- CometBroadcastExchange\n                  :                    :                          :              :                    :        +- CometFilter\n                  :                    :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                          :              :                    +- CometBroadcastExchange\n                  :                    :                          :              :                       +- CometProject\n                  :                    :                          :              :                          +- CometFilter\n                  :                    :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          :              +- CometBroadcastExchange\n                  :                    :                          :                 +- CometProject\n                  :                    :                          :                    
+- CometFilter\n                  :                    :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    :                          +- CometBroadcastExchange\n                  :                    :                             +- CometProject\n                  :                    :                                +- CometBroadcastHashJoin\n                  :                    :                                   :- CometProject\n                  :                    :                                   :  +- CometBroadcastHashJoin\n                  :                    :                                   :     :- CometFilter\n                  :                    :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :                    :                                   :     :        +- ReusedSubquery\n                  :                    :                                   :     +- CometBroadcastExchange\n                  :                    :                                   :        +- CometFilter\n                  :                    :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :                    :                                   +- CometBroadcastExchange\n                  :                    :                                      +- CometProject\n                  :                    :                                         +- CometFilter\n                  :                    :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                    +- CometBroadcastExchange\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometProject\n                     +- CometFilter\n                        :  +- ReusedSubquery\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometBroadcastHashJoin\n                                       :     :  :- CometFilter\n                                       :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                       :     :  :        +- ReusedSubquery\n                                       :     :  +- CometBroadcastExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometFilter\n                                       :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       
:     :              +- CometBroadcastHashJoin\n                                       :     :                 :- CometHashAggregate\n                                       :     :                 :  +- CometExchange\n                                       :     :                 :     +- CometHashAggregate\n                                       :     :                 :        +- CometProject\n                                       :     :                 :           +- CometBroadcastHashJoin\n                                       :     :                 :              :- CometProject\n                                       :     :                 :              :  +- CometBroadcastHashJoin\n                                       :     :                 :              :     :- CometFilter\n                                       :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :                 :              :     :        +- SubqueryBroadcast\n                                       :     :                 :              :     :           +- BroadcastExchange\n                                       :     :                 :              :     :              +- CometNativeColumnarToRow\n                                       :     :                 :              :     :                 +- CometProject\n                                       :     :                 :              :     :                    +- CometFilter\n                                       :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :                 :              :     +- CometBroadcastExchange\n                                       :     :                 :              :        +- CometBroadcastHashJoin\n                                       :     :                 :              :           :- CometFilter\n                                       :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :                 :              :           +- CometBroadcastExchange\n                                       :     :                 :              :              +- CometProject\n                                       :     :                 :              :                 +- CometBroadcastHashJoin\n                                       :     :                 :              :                    :- CometProject\n                                       :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                       :     :                 :              :                    :     :- CometFilter\n                                       :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :                 :              :                    :     :        +- ReusedSubquery\n                                       :     :                 :              :                    :     +- CometBroadcastExchange\n                                       :     :                 :              :                    :        +- CometFilter\n      
                                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :                 :              :                    +- CometBroadcastExchange\n                                       :     :                 :              :                       +- CometProject\n                                       :     :                 :              :                          +- CometFilter\n                                       :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :                 :              +- CometBroadcastExchange\n                                       :     :                 :                 +- CometProject\n                                       :     :                 :                    +- CometFilter\n                                       :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :                 +- CometBroadcastExchange\n                                       :     :                    +- CometProject\n                                       :     :                       +- CometBroadcastHashJoin\n                                       :     :                          :- CometProject\n                                       :     :                          :  +- CometBroadcastHashJoin\n                                       :     :                          :     :- CometFilter\n                                       :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                       :     :                          :     :        +- ReusedSubquery\n                                       :     :                          :     +- CometBroadcastExchange\n                                       :     :                          :        +- CometFilter\n                                       :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :                          +- CometBroadcastExchange\n                                       :     :                             +- CometProject\n                                       :     :                                +- CometFilter\n                                       :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometBroadcastHashJoin\n                                       :           :- CometFilter\n                                       :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :           +- CometBroadcastExchange\n                                       :              +- CometProject\n                                       :                 +- CometBroadcastHashJoin\n                                       :                    :- CometFilter\n                                       :                    :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                    +- CometBroadcastExchange\n                                       :                       +- CometBroadcastHashJoin\n                                       :                          :- CometHashAggregate\n                                       :                          :  +- CometExchange\n                                       :                          :     +- CometHashAggregate\n                                       :                          :        +- CometProject\n                                       :                          :           +- CometBroadcastHashJoin\n                                       :                          :              :- CometProject\n                                       :                          :              :  +- CometBroadcastHashJoin\n                                       :                          :              :     :- CometFilter\n                                       :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :                          :              :     :        +- SubqueryBroadcast\n                                       :                          :              :     :           +- BroadcastExchange\n                                       :                          :              :     :              +- CometNativeColumnarToRow\n                                       :                          :              :     :                 +- CometProject\n                                       :                          :              :     :                    +- CometFilter\n                                       :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :                          :              :     +- CometBroadcastExchange\n                                       :                          :              :        +- CometBroadcastHashJoin\n                                       :                          :              :           :- CometFilter\n                                       :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                          :              :           +- CometBroadcastExchange\n                                       :                          :              :              +- CometProject\n                                       :                          :              :                 +- CometBroadcastHashJoin\n                                       :                          :              :                    :- CometProject\n                                       :                          :              :                    :  +- CometBroadcastHashJoin\n                                       :                          :              :                    :     :- CometFilter\n                                       :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :                          :              :                    :     :        +- 
ReusedSubquery\n                                       :                          :              :                    :     +- CometBroadcastExchange\n                                       :                          :              :                    :        +- CometFilter\n                                       :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                          :              :                    +- CometBroadcastExchange\n                                       :                          :              :                       +- CometProject\n                                       :                          :              :                          +- CometFilter\n                                       :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :                          :              +- CometBroadcastExchange\n                                       :                          :                 +- CometProject\n                                       :                          :                    +- CometFilter\n                                       :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :                          +- CometBroadcastExchange\n                                       :                             +- CometProject\n                                       :                                +- CometBroadcastHashJoin\n                                       :                                   :- CometProject\n                                       :                                   :  +- CometBroadcastHashJoin\n                                       :                                   :     :- CometFilter\n                                       :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                       :                                   :     :        +- ReusedSubquery\n                                       :                                   :     +- CometBroadcastExchange\n                                       :                                   :        +- CometFilter\n                                       :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :                                   +- CometBroadcastExchange\n                                       :                                      +- CometProject\n                                       :                                         +- CometFilter\n                                       :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 424 out of 458 eligible operators (92%). 
Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q14b.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- BroadcastHashJoin\n   :- Filter\n   :  :  +- Subquery\n   :  :     +- HashAggregate\n   :  :        +- CometNativeColumnarToRow\n   :  :           +- CometColumnarExchange\n   :  :              +- HashAggregate\n   :  :                 +- Union\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    +- Project\n   :  :                       +- BroadcastHashJoin\n   :  :                          :- ColumnarToRow\n   :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                          :        +- ReusedSubquery\n   :  :                          +- BroadcastExchange\n   :  :                             +- CometNativeColumnarToRow\n   :  :                                +- CometProject\n   :  :                                   +- CometFilter\n   :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  +- HashAggregate\n   :     +- CometNativeColumnarToRow\n   :        +- CometColumnarExchange\n   :           +- HashAggregate\n   :              +- Project\n   :                 +- BroadcastHashJoin\n   :                    :- Project\n   :                    :  +- BroadcastHashJoin\n   :                    :     :- BroadcastHashJoin\n   :                    :     :  :- Filter\n   :                    :     :  :  +- ColumnarToRow\n   :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :  :           +- SubqueryBroadcast\n   :                    :     :  :              +- BroadcastExchange\n   :                    :     :  :                 +- CometNativeColumnarToRow\n   :                    :     :  :                    +- CometProject\n   :                    :     :  :                       +- CometFilter\n   :                    :     :  :                          :  +- ReusedSubquery\n   :                    :     :  :                          +- CometNativeScan parquet 
spark_catalog.default.date_dim\n   :                    :     :  :                                +- Subquery\n   :                    :     :  :                                   +- CometNativeColumnarToRow\n   :                    :     :  :                                      +- CometProject\n   :                    :     :  :                                         +- CometFilter\n   :                    :     :  :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :  +- BroadcastExchange\n   :                    :     :     +- Project\n   :                    :     :        +- BroadcastHashJoin\n   :                    :     :           :- CometNativeColumnarToRow\n   :                    :     :           :  +- CometFilter\n   :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :           +- BroadcastExchange\n   :                    :     :              +- BroadcastHashJoin\n   :                    :     :                 :- CometNativeColumnarToRow\n   :                    :     :                 :  +- CometHashAggregate\n   :                    :     :                 :     +- CometColumnarExchange\n   :                    :     :                 :        +- HashAggregate\n   :                    :     :                 :           +- Project\n   :                    :     :                 :              +- BroadcastHashJoin\n   :                    :     :                 :                 :- Project\n   :                    :     :                 :                 :  +- BroadcastHashJoin\n   :                    :     :                 :                 :     :- Filter\n   :                    :     :                 :                 :     :  +- ColumnarToRow\n   :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :     :           +- SubqueryBroadcast\n   :                    :     :                 :                 :     :              +- BroadcastExchange\n   :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n   :                    :     :                 :                 :     :                    +- CometProject\n   :                    :     :                 :                 :     :                       +- CometFilter\n   :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 :     +- BroadcastExchange\n   :                    :     :                 :                 :        +- BroadcastHashJoin\n   :                    :     :                 :                 :           :- CometNativeColumnarToRow\n   :                    :     :                 :                 :           :  +- CometFilter\n   :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :           +- BroadcastExchange\n   :                    :     :                 :                 :              +- Project\n   :                    :     :             
    :                 :                 +- BroadcastHashJoin\n   :                    :     :                 :                 :                    :- Project\n   :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n   :                    :     :                 :                 :                    :     :- Filter\n   :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n   :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n   :                    :     :                 :                 :                    :     +- BroadcastExchange\n   :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                    :           +- CometFilter\n   :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :                    +- BroadcastExchange\n   :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                          +- CometProject\n   :                    :     :                 :                 :                             +- CometFilter\n   :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 +- BroadcastExchange\n   :                    :     :                 :                    +- CometNativeColumnarToRow\n   :                    :     :                 :                       +- CometProject\n   :                    :     :                 :                          +- CometFilter\n   :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 +- BroadcastExchange\n   :                    :     :                    +- Project\n   :                    :     :                       +- BroadcastHashJoin\n   :                    :     :                          :- Project\n   :                    :     :                          :  +- BroadcastHashJoin\n   :                    :     :                          :     :- Filter\n   :                    :     :                          :     :  +- ColumnarToRow\n   :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                          :     :           +- ReusedSubquery\n   :                    :     :                          :     +- BroadcastExchange\n   :                    :     :                          :        +- CometNativeColumnarToRow\n   :                    :     :                          :           +- CometFilter\n   :            
        :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                          +- BroadcastExchange\n   :                    :     :                             +- CometNativeColumnarToRow\n   :                    :     :                                +- CometProject\n   :                    :     :                                   +- CometFilter\n   :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     +- BroadcastExchange\n   :                    :        +- BroadcastHashJoin\n   :                    :           :- CometNativeColumnarToRow\n   :                    :           :  +- CometFilter\n   :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :           +- BroadcastExchange\n   :                    :              +- Project\n   :                    :                 +- BroadcastHashJoin\n   :                    :                    :- CometNativeColumnarToRow\n   :                    :                    :  +- CometFilter\n   :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                    +- BroadcastExchange\n   :                    :                       +- BroadcastHashJoin\n   :                    :                          :- CometNativeColumnarToRow\n   :                    :                          :  +- CometHashAggregate\n   :                    :                          :     +- CometColumnarExchange\n   :                    :                          :        +- HashAggregate\n   :                    :                          :           +- Project\n   :                    :                          :              +- BroadcastHashJoin\n   :                    :                          :                 :- Project\n   :                    :                          :                 :  +- BroadcastHashJoin\n   :                    :                          :                 :     :- Filter\n   :                    :                          :                 :     :  +- ColumnarToRow\n   :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :     :           +- SubqueryBroadcast\n   :                    :                          :                 :     :              +- BroadcastExchange\n   :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n   :                    :                          :                 :     :                    +- CometProject\n   :                    :                          :                 :     :                       +- CometFilter\n   :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 :     +- BroadcastExchange\n   :                    :                          :                 :        +- BroadcastHashJoin\n   :                    :                          :                 :           :- CometNativeColumnarToRow\n   :                  
  :                          :                 :           :  +- CometFilter\n   :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :           +- BroadcastExchange\n   :                    :                          :                 :              +- Project\n   :                    :                          :                 :                 +- BroadcastHashJoin\n   :                    :                          :                 :                    :- Project\n   :                    :                          :                 :                    :  +- BroadcastHashJoin\n   :                    :                          :                 :                    :     :- Filter\n   :                    :                          :                 :                    :     :  +- ColumnarToRow\n   :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :                    :     :           +- ReusedSubquery\n   :                    :                          :                 :                    :     +- BroadcastExchange\n   :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n   :                    :                          :                 :                    :           +- CometFilter\n   :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :                    +- BroadcastExchange\n   :                    :                          :                 :                       +- CometNativeColumnarToRow\n   :                    :                          :                 :                          +- CometProject\n   :                    :                          :                 :                             +- CometFilter\n   :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 +- BroadcastExchange\n   :                    :                          :                    +- CometNativeColumnarToRow\n   :                    :                          :                       +- CometProject\n   :                    :                          :                          +- CometFilter\n   :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          +- BroadcastExchange\n   :                    :                             +- Project\n   :                    :                                +- BroadcastHashJoin\n   :                    :                                   :- Project\n   :                    :                                   :  +- BroadcastHashJoin\n   :                    :                                   :     :- Filter\n   :                    :                                   :     :  +- ColumnarToRow\n   :                   
 :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                                   :     :           +- ReusedSubquery\n   :                    :                                   :     +- BroadcastExchange\n   :                    :                                   :        +- CometNativeColumnarToRow\n   :                    :                                   :           +- CometFilter\n   :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                                   +- BroadcastExchange\n   :                    :                                      +- CometNativeColumnarToRow\n   :                    :                                         +- CometProject\n   :                    :                                            +- CometFilter\n   :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    +- BroadcastExchange\n   :                       +- CometNativeColumnarToRow\n   :                          +- CometProject\n   :                             +- CometFilter\n   :                                :  +- ReusedSubquery\n   :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                                      +- Subquery\n   :                                         +- CometNativeColumnarToRow\n   :                                            +- CometProject\n   :                                               +- CometFilter\n   :                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n   +- BroadcastExchange\n      +- Filter\n         :  +- ReusedSubquery\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- BroadcastHashJoin\n                           :     :  :- Filter\n                           :     :  :  +- ColumnarToRow\n                           :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :  :           +- SubqueryBroadcast\n                           :     :  :              +- BroadcastExchange\n                           :     :  :                 +- CometNativeColumnarToRow\n                           :     :  :                    +- CometProject\n                           :     :  :                       +- CometFilter\n                           :     :  :                          :  +- ReusedSubquery\n                           :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  :                                +- Subquery\n                           :     :  :                                   +- CometNativeColumnarToRow\n                           :     :  :                                      +- CometProject\n                           :     :  :                                         +- CometFilter\n            
               :     :  :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  +- BroadcastExchange\n                           :     :     +- Project\n                           :     :        +- BroadcastHashJoin\n                           :     :           :- CometNativeColumnarToRow\n                           :     :           :  +- CometFilter\n                           :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :           +- BroadcastExchange\n                           :     :              +- BroadcastHashJoin\n                           :     :                 :- CometNativeColumnarToRow\n                           :     :                 :  +- CometHashAggregate\n                           :     :                 :     +- CometColumnarExchange\n                           :     :                 :        +- HashAggregate\n                           :     :                 :           +- Project\n                           :     :                 :              +- BroadcastHashJoin\n                           :     :                 :                 :- Project\n                           :     :                 :                 :  +- BroadcastHashJoin\n                           :     :                 :                 :     :- Filter\n                           :     :                 :                 :     :  +- ColumnarToRow\n                           :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :     :           +- SubqueryBroadcast\n                           :     :                 :                 :     :              +- BroadcastExchange\n                           :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                           :     :                 :                 :     :                    +- CometProject\n                           :     :                 :                 :     :                       +- CometFilter\n                           :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 :     +- BroadcastExchange\n                           :     :                 :                 :        +- BroadcastHashJoin\n                           :     :                 :                 :           :- CometNativeColumnarToRow\n                           :     :                 :                 :           :  +- CometFilter\n                           :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :           +- BroadcastExchange\n                           :     :                 :                 :              +- Project\n                           :     :                 :                 :                 +- BroadcastHashJoin\n                           :     :                 :                 :                    :- Project\n                           :     :                 :                 :                    :  +- BroadcastHashJoin\n                    
       :     :                 :                 :                    :     :- Filter\n                           :     :                 :                 :                    :     :  +- ColumnarToRow\n                           :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :                    :     :           +- ReusedSubquery\n                           :     :                 :                 :                    :     +- BroadcastExchange\n                           :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                           :     :                 :                 :                    :           +- CometFilter\n                           :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :                    +- BroadcastExchange\n                           :     :                 :                 :                       +- CometNativeColumnarToRow\n                           :     :                 :                 :                          +- CometProject\n                           :     :                 :                 :                             +- CometFilter\n                           :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 +- BroadcastExchange\n                           :     :                 :                    +- CometNativeColumnarToRow\n                           :     :                 :                       +- CometProject\n                           :     :                 :                          +- CometFilter\n                           :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 +- BroadcastExchange\n                           :     :                    +- Project\n                           :     :                       +- BroadcastHashJoin\n                           :     :                          :- Project\n                           :     :                          :  +- BroadcastHashJoin\n                           :     :                          :     :- Filter\n                           :     :                          :     :  +- ColumnarToRow\n                           :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                          :     :           +- ReusedSubquery\n                           :     :                          :     +- BroadcastExchange\n                           :     :                          :        +- CometNativeColumnarToRow\n                           :     :                          :           +- CometFilter\n                           :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                          +- BroadcastExchange\n       
                    :     :                             +- CometNativeColumnarToRow\n                           :     :                                +- CometProject\n                           :     :                                   +- CometFilter\n                           :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     +- BroadcastExchange\n                           :        +- BroadcastHashJoin\n                           :           :- CometNativeColumnarToRow\n                           :           :  +- CometFilter\n                           :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :           +- BroadcastExchange\n                           :              +- Project\n                           :                 +- BroadcastHashJoin\n                           :                    :- CometNativeColumnarToRow\n                           :                    :  +- CometFilter\n                           :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                    +- BroadcastExchange\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometHashAggregate\n                           :                          :     +- CometColumnarExchange\n                           :                          :        +- HashAggregate\n                           :                          :           +- Project\n                           :                          :              +- BroadcastHashJoin\n                           :                          :                 :- Project\n                           :                          :                 :  +- BroadcastHashJoin\n                           :                          :                 :     :- Filter\n                           :                          :                 :     :  +- ColumnarToRow\n                           :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :     :           +- SubqueryBroadcast\n                           :                          :                 :     :              +- BroadcastExchange\n                           :                          :                 :     :                 +- CometNativeColumnarToRow\n                           :                          :                 :     :                    +- CometProject\n                           :                          :                 :     :                       +- CometFilter\n                           :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 :     +- BroadcastExchange\n                           :                          :                 :        +- BroadcastHashJoin\n                           :                          :                 :           :- CometNativeColumnarToRow\n                           :                          :                 :           :  +- CometFilter\n     
                      :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :           +- BroadcastExchange\n                           :                          :                 :              +- Project\n                           :                          :                 :                 +- BroadcastHashJoin\n                           :                          :                 :                    :- Project\n                           :                          :                 :                    :  +- BroadcastHashJoin\n                           :                          :                 :                    :     :- Filter\n                           :                          :                 :                    :     :  +- ColumnarToRow\n                           :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :                    :     :           +- ReusedSubquery\n                           :                          :                 :                    :     +- BroadcastExchange\n                           :                          :                 :                    :        +- CometNativeColumnarToRow\n                           :                          :                 :                    :           +- CometFilter\n                           :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :                    +- BroadcastExchange\n                           :                          :                 :                       +- CometNativeColumnarToRow\n                           :                          :                 :                          +- CometProject\n                           :                          :                 :                             +- CometFilter\n                           :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 +- BroadcastExchange\n                           :                          :                    +- CometNativeColumnarToRow\n                           :                          :                       +- CometProject\n                           :                          :                          +- CometFilter\n                           :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- BroadcastHashJoin\n                           :                                   :- Project\n                           :                                   :  +- BroadcastHashJoin\n                           :                                   :     :- Filter\n                           :                                   :     :  +- ColumnarToRow\n             
              :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                                   :     :           +- ReusedSubquery\n                           :                                   :     +- BroadcastExchange\n                           :                                   :        +- CometNativeColumnarToRow\n                           :                                   :           +- CometFilter\n                           :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                                   +- BroadcastExchange\n                           :                                      +- CometNativeColumnarToRow\n                           :                                         +- CometProject\n                           :                                            +- CometFilter\n                           :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometProject\n                                    +- CometFilter\n                                       :  +- ReusedSubquery\n                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- Subquery\n                                                +- CometNativeColumnarToRow\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 128 out of 337 eligible operators (37%). Final plan contains 69 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q14b.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometBroadcastHashJoin\n      :- CometFilter\n      :  :  +- Subquery\n      :  :     +- CometNativeColumnarToRow\n      :  :        +- CometHashAggregate\n      :  :           +- CometExchange\n      :  :              +- CometHashAggregate\n      :  :                 +- CometUnion\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    +- CometProject\n      :  :                       +- CometBroadcastHashJoin\n      :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :  :                          :     +- ReusedSubquery\n      :  :                          +- CometBroadcastExchange\n      :  :                             +- CometProject\n      :  :                                +- CometFilter\n      :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  +- CometHashAggregate\n      :     +- CometExchange\n      :        +- CometHashAggregate\n      :           +- CometProject\n      :              +- CometBroadcastHashJoin\n      :                 :- CometProject\n      :                 :  +- CometBroadcastHashJoin\n      :                 :     :- CometBroadcastHashJoin\n      :                 :     :  :- CometFilter\n      :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :  :        +- SubqueryBroadcast\n      :                 :     :  :           +- BroadcastExchange\n      :                 :     :  :              +- CometNativeColumnarToRow\n      :                 :     :  :                 +- CometProject\n      :                 :     :  :                    +- CometFilter\n      :                 :     :  :                       :  +- ReusedSubquery\n      :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :  :                             +- Subquery\n      :                 :     :  :                                +- CometNativeColumnarToRow\n      :                 :     :  :                                   +- CometProject\n      :                 :     :  :                                      +- CometFilter\n      :                 :     :  :     
                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :  +- CometBroadcastExchange\n      :                 :     :     +- CometProject\n      :                 :     :        +- CometBroadcastHashJoin\n      :                 :     :           :- CometFilter\n      :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :           +- CometBroadcastExchange\n      :                 :     :              +- CometBroadcastHashJoin\n      :                 :     :                 :- CometHashAggregate\n      :                 :     :                 :  +- CometExchange\n      :                 :     :                 :     +- CometHashAggregate\n      :                 :     :                 :        +- CometProject\n      :                 :     :                 :           +- CometBroadcastHashJoin\n      :                 :     :                 :              :- CometProject\n      :                 :     :                 :              :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :     :- CometFilter\n      :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :                 :              :     :        +- SubqueryBroadcast\n      :                 :     :                 :              :     :           +- BroadcastExchange\n      :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n      :                 :     :                 :              :     :                 +- CometProject\n      :                 :     :                 :              :     :                    +- CometFilter\n      :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              :     +- CometBroadcastExchange\n      :                 :     :                 :              :        +- CometBroadcastHashJoin\n      :                 :     :                 :              :           :- CometFilter\n      :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :           +- CometBroadcastExchange\n      :                 :     :                 :              :              +- CometProject\n      :                 :     :                 :              :                 +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :- CometProject\n      :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :     :- CometFilter\n      :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :     :                 :              :                    :     :        +- ReusedSubquery\n      :                 :     :                 :              :                    :     +- 
CometBroadcastExchange\n      :                 :     :                 :              :                    :        +- CometFilter\n      :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :                    +- CometBroadcastExchange\n      :                 :     :                 :              :                       +- CometProject\n      :                 :     :                 :              :                          +- CometFilter\n      :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              +- CometBroadcastExchange\n      :                 :     :                 :                 +- CometProject\n      :                 :     :                 :                    +- CometFilter\n      :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 +- CometBroadcastExchange\n      :                 :     :                    +- CometProject\n      :                 :     :                       +- CometBroadcastHashJoin\n      :                 :     :                          :- CometProject\n      :                 :     :                          :  +- CometBroadcastHashJoin\n      :                 :     :                          :     :- CometFilter\n      :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :     :                          :     :        +- ReusedSubquery\n      :                 :     :                          :     +- CometBroadcastExchange\n      :                 :     :                          :        +- CometFilter\n      :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                          +- CometBroadcastExchange\n      :                 :     :                             +- CometProject\n      :                 :     :                                +- CometFilter\n      :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     +- CometBroadcastExchange\n      :                 :        +- CometBroadcastHashJoin\n      :                 :           :- CometFilter\n      :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :           +- CometBroadcastExchange\n      :                 :              +- CometProject\n      :                 :                 +- CometBroadcastHashJoin\n      :                 :                    :- CometFilter\n      :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                    +- CometBroadcastExchange\n      :                 :                       +- CometBroadcastHashJoin\n      :                 :                          :- CometHashAggregate\n      :                 :                          :  +- CometExchange\n      :        
         :                          :     +- CometHashAggregate\n      :                 :                          :        +- CometProject\n      :                 :                          :           +- CometBroadcastHashJoin\n      :                 :                          :              :- CometProject\n      :                 :                          :              :  +- CometBroadcastHashJoin\n      :                 :                          :              :     :- CometFilter\n      :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :                          :              :     :        +- SubqueryBroadcast\n      :                 :                          :              :     :           +- BroadcastExchange\n      :                 :                          :              :     :              +- CometNativeColumnarToRow\n      :                 :                          :              :     :                 +- CometProject\n      :                 :                          :              :     :                    +- CometFilter\n      :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          :              :     +- CometBroadcastExchange\n      :                 :                          :              :        +- CometBroadcastHashJoin\n      :                 :                          :              :           :- CometFilter\n      :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :           +- CometBroadcastExchange\n      :                 :                          :              :              +- CometProject\n      :                 :                          :              :                 +- CometBroadcastHashJoin\n      :                 :                          :              :                    :- CometProject\n      :                 :                          :              :                    :  +- CometBroadcastHashJoin\n      :                 :                          :              :                    :     :- CometFilter\n      :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :                          :              :                    :     :        +- ReusedSubquery\n      :                 :                          :              :                    :     +- CometBroadcastExchange\n      :                 :                          :              :                    :        +- CometFilter\n      :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :                    +- CometBroadcastExchange\n      :                 :                          :              :                       +- CometProject\n      :                 :                          :              :                          +- CometFilter\n      :                 :                          :         
     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          :              +- CometBroadcastExchange\n      :                 :                          :                 +- CometProject\n      :                 :                          :                    +- CometFilter\n      :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          +- CometBroadcastExchange\n      :                 :                             +- CometProject\n      :                 :                                +- CometBroadcastHashJoin\n      :                 :                                   :- CometProject\n      :                 :                                   :  +- CometBroadcastHashJoin\n      :                 :                                   :     :- CometFilter\n      :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :                                   :     :        +- ReusedSubquery\n      :                 :                                   :     +- CometBroadcastExchange\n      :                 :                                   :        +- CometFilter\n      :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                                   +- CometBroadcastExchange\n      :                 :                                      +- CometProject\n      :                 :                                         +- CometFilter\n      :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 +- CometBroadcastExchange\n      :                    +- CometProject\n      :                       +- CometFilter\n      :                          :  +- ReusedSubquery\n      :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                +- ReusedSubquery\n      +- CometBroadcastExchange\n         +- CometFilter\n            :  +- ReusedSubquery\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastHashJoin\n                           :     :  :- CometFilter\n                           :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :  :        +- SubqueryBroadcast\n                           :     :  :           +- BroadcastExchange\n                           :     :  :              +- CometNativeColumnarToRow\n                           :     :  :                 +- CometProject\n                           :     :  :                    +- CometFilter\n                           :     :  :                       :  +- ReusedSubquery\n                           :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                 
          :     :  :                             +- Subquery\n                           :     :  :                                +- CometNativeColumnarToRow\n                           :     :  :                                   +- CometProject\n                           :     :  :                                      +- CometFilter\n                           :     :  :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :  +- CometBroadcastExchange\n                           :     :     +- CometProject\n                           :     :        +- CometBroadcastHashJoin\n                           :     :           :- CometFilter\n                           :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :           +- CometBroadcastExchange\n                           :     :              +- CometBroadcastHashJoin\n                           :     :                 :- CometHashAggregate\n                           :     :                 :  +- CometExchange\n                           :     :                 :     +- CometHashAggregate\n                           :     :                 :        +- CometProject\n                           :     :                 :           +- CometBroadcastHashJoin\n                           :     :                 :              :- CometProject\n                           :     :                 :              :  +- CometBroadcastHashJoin\n                           :     :                 :              :     :- CometFilter\n                           :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :                 :              :     :        +- SubqueryBroadcast\n                           :     :                 :              :     :           +- BroadcastExchange\n                           :     :                 :              :     :              +- CometNativeColumnarToRow\n                           :     :                 :              :     :                 +- CometProject\n                           :     :                 :              :     :                    +- CometFilter\n                           :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              :     +- CometBroadcastExchange\n                           :     :                 :              :        +- CometBroadcastHashJoin\n                           :     :                 :              :           :- CometFilter\n                           :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :           +- CometBroadcastExchange\n                           :     :                 :              :              +- CometProject\n                           :     :                 :              :                 +- CometBroadcastHashJoin\n                           :     :                 :              :                    :- CometProject\n                           :     :                 :              :                    :  +- CometBroadcastHashJoin\n                           :  
   :                 :              :                    :     :- CometFilter\n                           :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :     :                 :              :                    :     :        +- ReusedSubquery\n                           :     :                 :              :                    :     +- CometBroadcastExchange\n                           :     :                 :              :                    :        +- CometFilter\n                           :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :                    +- CometBroadcastExchange\n                           :     :                 :              :                       +- CometProject\n                           :     :                 :              :                          +- CometFilter\n                           :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              +- CometBroadcastExchange\n                           :     :                 :                 +- CometProject\n                           :     :                 :                    +- CometFilter\n                           :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 +- CometBroadcastExchange\n                           :     :                    +- CometProject\n                           :     :                       +- CometBroadcastHashJoin\n                           :     :                          :- CometProject\n                           :     :                          :  +- CometBroadcastHashJoin\n                           :     :                          :     :- CometFilter\n                           :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :     :                          :     :        +- ReusedSubquery\n                           :     :                          :     +- CometBroadcastExchange\n                           :     :                          :        +- CometFilter\n                           :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                          +- CometBroadcastExchange\n                           :     :                             +- CometProject\n                           :     :                                +- CometFilter\n                           :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n                           :        +- CometBroadcastHashJoin\n                           :           :- CometFilter\n                           :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :           +- CometBroadcastExchange\n                           :              +- 
CometProject\n                           :                 +- CometBroadcastHashJoin\n                           :                    :- CometFilter\n                           :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                    +- CometBroadcastExchange\n                           :                       +- CometBroadcastHashJoin\n                           :                          :- CometHashAggregate\n                           :                          :  +- CometExchange\n                           :                          :     +- CometHashAggregate\n                           :                          :        +- CometProject\n                           :                          :           +- CometBroadcastHashJoin\n                           :                          :              :- CometProject\n                           :                          :              :  +- CometBroadcastHashJoin\n                           :                          :              :     :- CometFilter\n                           :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                          :              :     :        +- SubqueryBroadcast\n                           :                          :              :     :           +- BroadcastExchange\n                           :                          :              :     :              +- CometNativeColumnarToRow\n                           :                          :              :     :                 +- CometProject\n                           :                          :              :     :                    +- CometFilter\n                           :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              :     +- CometBroadcastExchange\n                           :                          :              :        +- CometBroadcastHashJoin\n                           :                          :              :           :- CometFilter\n                           :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :           +- CometBroadcastExchange\n                           :                          :              :              +- CometProject\n                           :                          :              :                 +- CometBroadcastHashJoin\n                           :                          :              :                    :- CometProject\n                           :                          :              :                    :  +- CometBroadcastHashJoin\n                           :                          :              :                    :     :- CometFilter\n                           :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :                          :              :                    :     :        +- ReusedSubquery\n                           :                          :              :                    :     +- CometBroadcastExchange\n   
                        :                          :              :                    :        +- CometFilter\n                           :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :                    +- CometBroadcastExchange\n                           :                          :              :                       +- CometProject\n                           :                          :              :                          +- CometFilter\n                           :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              +- CometBroadcastExchange\n                           :                          :                 +- CometProject\n                           :                          :                    +- CometFilter\n                           :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          +- CometBroadcastExchange\n                           :                             +- CometProject\n                           :                                +- CometBroadcastHashJoin\n                           :                                   :- CometProject\n                           :                                   :  +- CometBroadcastHashJoin\n                           :                                   :     :- CometFilter\n                           :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                                   :     :        +- ReusedSubquery\n                           :                                   :     +- CometBroadcastExchange\n                           :                                   :        +- CometFilter\n                           :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                                   +- CometBroadcastExchange\n                           :                                      +- CometProject\n                           :                                         +- CometFilter\n                           :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    :  +- ReusedSubquery\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          +- ReusedSubquery\n\nComet accelerated 298 out of 331 eligible operators (90%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q15.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Filter\n                  :     :     :  +- ColumnarToRow\n                  :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           +- SubqueryBroadcast\n                  :     :     :              +- BroadcastExchange\n                  :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :                    +- CometProject\n                  :     :     :                       +- CometFilter\n                  :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 28 eligible operators (42%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q15.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometFilter\n                  :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :        +- SubqueryBroadcast\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 26 out of 28 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q16.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.call_center\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q16.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q17.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- Filter\n                  :     :     :     :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :     :        +- Filter\n                  :     :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                             +- CometProject\n                  :     :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- Filter\n                  :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :                    +- ReusedSubquery\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n            
      :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 22 out of 57 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q17.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :     :     :     :           +- BroadcastExchange\n                  :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                 +- CometProject\n                  :     :     :     :     :     :     :                    +- CometFilter\n                  :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :                 +- ReusedSubquery\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 52 out of 57 eligible operators (91%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q18.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Project\n                     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :- Project\n                     :     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :     :- Filter\n                     :     :     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :     :     :           +- SubqueryBroadcast\n                     :     :     :     :     :     :              +- BroadcastExchange\n                     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :     :     :     :                    +- CometProject\n                     :     :     :     :     :     :                       +- CometFilter\n                     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :     :           +- CometProject\n                     :     :     :     :     :              +- CometFilter\n                     :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :           +- CometProject\n                     :     :     :     :              +- CometFilter\n                     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- 
CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 21 out of 47 eligible operators (44%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q18.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :- CometProject\n                     :     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :     :- CometFilter\n                     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :     :     :     :     :     :        +- SubqueryBroadcast\n                     :     :     :     :     :     :           +- BroadcastExchange\n                     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :     :     :     :                 +- CometProject\n                     :     :     :     :     :     :                    +- CometFilter\n                     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :     :        +- CometProject\n                     :     :     :     :     :           +- CometFilter\n                     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :        +- CometProject\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 45 out of 47 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q19.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometFilter\n                  :     :     :     :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometFilter\n                  :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometNativeScan parquet spark_catalog.default.item\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometNativeScan parquet spark_catalog.default.customer\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 35 out of 35 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q19.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometFilter\n                  :     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometFilter\n                  :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometFilter\n                  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 35 out of 35 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q2.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometHashAggregate\n            :     :  +- CometExchange\n            :     :     +- CometHashAggregate\n            :     :        +- CometProject\n            :     :           +- CometBroadcastHashJoin\n            :     :              :- CometUnion\n            :     :              :  :- CometProject\n            :     :              :  :  +- CometNativeScan parquet spark_catalog.default.web_sales\n            :     :              :  +- CometProject\n            :     :              :     +- CometNativeScan parquet spark_catalog.default.catalog_sales\n            :     :              +- CometBroadcastExchange\n            :     :                 +- CometProject\n            :     :                    +- CometFilter\n            :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometHashAggregate\n                     :  +- CometExchange\n                     :     +- CometHashAggregate\n                     :        +- CometProject\n                     :           +- CometBroadcastHashJoin\n                     :              :- CometUnion\n                     :              :  :- CometProject\n                     :              :  :  +- CometNativeScan parquet spark_catalog.default.web_sales\n                     :              :  +- CometProject\n                     :              :     +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                     :              +- CometBroadcastExchange\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 45 out of 45 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q2.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometHashAggregate\n            :     :  +- CometExchange\n            :     :     +- CometHashAggregate\n            :     :        +- CometProject\n            :     :           +- CometBroadcastHashJoin\n            :     :              :- CometUnion\n            :     :              :  :- CometProject\n            :     :              :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n            :     :              :  +- CometProject\n            :     :              :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :     :              +- CometBroadcastExchange\n            :     :                 +- CometProject\n            :     :                    +- CometFilter\n            :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometHashAggregate\n                     :  +- CometExchange\n                     :     +- CometHashAggregate\n                     :        +- CometProject\n                     :           +- CometBroadcastHashJoin\n                     :              :- CometUnion\n                     :              :  :- CometProject\n                     :              :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :              :  +- CometProject\n                     :              :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :              +- CometBroadcastExchange\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 45 out of 45 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q20.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q20.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q21.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Filter\n                     :     :     :  +- ColumnarToRow\n                     :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :           +- SubqueryBroadcast\n                     :     :     :              +- BroadcastExchange\n                     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :                    +- CometFilter\n                     :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometFilter\n                     :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 10 out of 27 eligible operators (37%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q21.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometFilter\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometFilter\n                     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                     :     :     :        +- SubqueryBroadcast\n                     :     :     :           +- BroadcastExchange\n                     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :                 +- CometFilter\n                     :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometFilter\n                     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 25 out of 27 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q22.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Filter\n                     :     :     :  +- ColumnarToRow\n                     :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :           +- SubqueryBroadcast\n                     :     :     :              +- BroadcastExchange\n                     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :                    +- CometProject\n                     :     :     :                       +- CometFilter\n                     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.warehouse\n\nComet accelerated 12 out of 29 eligible operators (41%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q22.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometFilter\n                     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                     :     :     :        +- SubqueryBroadcast\n                     :     :     :           +- BroadcastExchange\n                     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :                 +- CometProject\n                     :     :     :                    +- CometFilter\n                     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n\nComet accelerated 27 out of 29 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q23a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometUnion\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometSortMergeJoin\n            :     :     :- CometSort\n            :     :     :  +- CometColumnarExchange\n            :     :     :     +- Project\n            :     :     :        +- BroadcastHashJoin\n            :     :     :           :- ColumnarToRow\n            :     :     :           :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :     :           :        +- SubqueryBroadcast\n            :     :     :           :           +- BroadcastExchange\n            :     :     :           :              +- CometNativeColumnarToRow\n            :     :     :           :                 +- CometProject\n            :     :     :           :                    +- CometFilter\n            :     :     :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :     :           +- BroadcastExchange\n            :     :     :              +- Project\n            :     :     :                 +- Filter\n            :     :     :                    +- HashAggregate\n            :     :     :                       +- CometNativeColumnarToRow\n            :     :     :                          +- CometColumnarExchange\n            :     :     :                             +- HashAggregate\n            :     :     :                                +- Project\n            :     :     :                                   +- BroadcastHashJoin\n            :     :     :                                      :- Project\n            :     :     :                                      :  +- BroadcastHashJoin\n            :     :     :                                      :     :- Filter\n            :     :     :                                      :     :  +- ColumnarToRow\n            :     :     :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :     :                                      :     :           +- SubqueryBroadcast\n            :     :     :                                      :     :              +- BroadcastExchange\n            :     :     :                                      :     :                 +- CometNativeColumnarToRow\n            :     :     :                                      :     :                    +- CometProject\n            :     :     :                                      :     :                       +- CometFilter\n            :     :     :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :     :                                      :     +- BroadcastExchange\n            :     :     :                                      :        +- CometNativeColumnarToRow\n            :     :     :                                      :           +- CometProject\n            :     :     :                                      :              +- CometFilter\n            :     :     :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :    
 :                                      +- BroadcastExchange\n            :     :     :                                         +- CometNativeColumnarToRow\n            :     :     :                                            +- CometFilter\n            :     :     :                                               +- CometNativeScan parquet spark_catalog.default.item\n            :     :     +- CometSort\n            :     :        +- CometProject\n            :     :           +- CometFilter\n            :     :              :  +- Subquery\n            :     :              :     +- HashAggregate\n            :     :              :        +- CometNativeColumnarToRow\n            :     :              :           +- CometColumnarExchange\n            :     :              :              +- HashAggregate\n            :     :              :                 +- HashAggregate\n            :     :              :                    +- CometNativeColumnarToRow\n            :     :              :                       +- CometColumnarExchange\n            :     :              :                          +- HashAggregate\n            :     :              :                             +- Project\n            :     :              :                                +- BroadcastHashJoin\n            :     :              :                                   :- Project\n            :     :              :                                   :  +- BroadcastHashJoin\n            :     :              :                                   :     :- Filter\n            :     :              :                                   :     :  +- ColumnarToRow\n            :     :              :                                   :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :              :                                   :     :           +- SubqueryBroadcast\n            :     :              :                                   :     :              +- BroadcastExchange\n            :     :              :                                   :     :                 +- CometNativeColumnarToRow\n            :     :              :                                   :     :                    +- CometProject\n            :     :              :                                   :     :                       +- CometFilter\n            :     :              :                                   :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :              :                                   :     +- BroadcastExchange\n            :     :              :                                   :        +- CometNativeColumnarToRow\n            :     :              :                                   :           +- CometFilter\n            :     :              :                                   :              +- CometNativeScan parquet spark_catalog.default.customer\n            :     :              :                                   +- BroadcastExchange\n            :     :              :                                      +- CometNativeColumnarToRow\n            :     :              :                                         +- CometProject\n            :     :              :                                            +- CometFilter\n            :     :              :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n        
    :     :              +- CometHashAggregate\n            :     :                 +- CometExchange\n            :     :                    +- CometHashAggregate\n            :     :                       +- CometProject\n            :     :                          +- CometBroadcastHashJoin\n            :     :                             :- CometProject\n            :     :                             :  +- CometFilter\n            :     :                             :     +- CometNativeScan parquet spark_catalog.default.store_sales\n            :     :                             +- CometBroadcastExchange\n            :     :                                +- CometFilter\n            :     :                                   +- CometNativeScan parquet spark_catalog.default.customer\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- CometFilter\n            :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometColumnarExchange\n                  :     :     +- Project\n                  :     :        +- BroadcastHashJoin\n                  :     :           :- ColumnarToRow\n                  :     :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :           :        +- ReusedSubquery\n                  :     :           +- BroadcastExchange\n                  :     :              +- Project\n                  :     :                 +- Filter\n                  :     :                    +- HashAggregate\n                  :     :                       +- CometNativeColumnarToRow\n                  :     :                          +- CometColumnarExchange\n                  :     :                             +- HashAggregate\n                  :     :                                +- Project\n                  :     :                                   +- BroadcastHashJoin\n                  :     :                                      :- Project\n                  :     :                                      :  +- BroadcastHashJoin\n                  :     :                                      :     :- Filter\n                  :     :                                      :     :  +- ColumnarToRow\n                  :     :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                                      :     :           +- SubqueryBroadcast\n                  :     :                                      :     :              +- BroadcastExchange\n                  :     :                                      :     :                 +- CometNativeColumnarToRow\n                  :     :                                      :     :                    +- CometProject\n                  :     :                                      :     :                       +- CometFilter\n                  :     :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                                      :     +- BroadcastExchange\n  
                :     :                                      :        +- CometNativeColumnarToRow\n                  :     :                                      :           +- CometProject\n                  :     :                                      :              +- CometFilter\n                  :     :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                                      +- BroadcastExchange\n                  :     :                                         +- CometNativeColumnarToRow\n                  :     :                                            +- CometFilter\n                  :     :                                               +- CometNativeScan parquet spark_catalog.default.item\n                  :     +- CometSort\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              :  +- ReusedSubquery\n                  :              +- CometHashAggregate\n                  :                 +- CometExchange\n                  :                    +- CometHashAggregate\n                  :                       +- CometProject\n                  :                          +- CometBroadcastHashJoin\n                  :                             :- CometProject\n                  :                             :  +- CometFilter\n                  :                             :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :                             +- CometBroadcastExchange\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.customer\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 83 out of 138 eligible operators (60%). Final plan contains 20 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q23a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometUnion\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometProject\n            :     :  +- CometSortMergeJoin\n            :     :     :- CometSort\n            :     :     :  +- CometExchange\n            :     :     :     +- CometProject\n            :     :     :        +- CometBroadcastHashJoin\n            :     :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :     :     :           :     +- SubqueryBroadcast\n            :     :     :           :        +- BroadcastExchange\n            :     :     :           :           +- CometNativeColumnarToRow\n            :     :     :           :              +- CometProject\n            :     :     :           :                 +- CometFilter\n            :     :     :           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :     :           +- CometBroadcastExchange\n            :     :     :              +- CometProject\n            :     :     :                 +- CometFilter\n            :     :     :                    +- CometHashAggregate\n            :     :     :                       +- CometExchange\n            :     :     :                          +- CometHashAggregate\n            :     :     :                             +- CometProject\n            :     :     :                                +- CometBroadcastHashJoin\n            :     :     :                                   :- CometProject\n            :     :     :                                   :  +- CometBroadcastHashJoin\n            :     :     :                                   :     :- CometFilter\n            :     :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :     :                                   :     :        +- SubqueryBroadcast\n            :     :     :                                   :     :           +- BroadcastExchange\n            :     :     :                                   :     :              +- CometNativeColumnarToRow\n            :     :     :                                   :     :                 +- CometProject\n            :     :     :                                   :     :                    +- CometFilter\n            :     :     :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :     :                                   :     +- CometBroadcastExchange\n            :     :     :                                   :        +- CometProject\n            :     :     :                                   :           +- CometFilter\n            :     :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :     :                                   +- CometBroadcastExchange\n            :     :     :                                      +- CometFilter\n            :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n            :     :     +- CometSort\n            :     :        +- CometProject\n            :     :           +- CometFilter\n          
  :     :              :  +- Subquery\n            :     :              :     +- CometNativeColumnarToRow\n            :     :              :        +- CometHashAggregate\n            :     :              :           +- CometExchange\n            :     :              :              +- CometHashAggregate\n            :     :              :                 +- CometHashAggregate\n            :     :              :                    +- CometExchange\n            :     :              :                       +- CometHashAggregate\n            :     :              :                          +- CometProject\n            :     :              :                             +- CometBroadcastHashJoin\n            :     :              :                                :- CometProject\n            :     :              :                                :  +- CometBroadcastHashJoin\n            :     :              :                                :     :- CometFilter\n            :     :              :                                :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :              :                                :     :        +- SubqueryBroadcast\n            :     :              :                                :     :           +- BroadcastExchange\n            :     :              :                                :     :              +- CometNativeColumnarToRow\n            :     :              :                                :     :                 +- CometProject\n            :     :              :                                :     :                    +- CometFilter\n            :     :              :                                :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :              :                                :     +- CometBroadcastExchange\n            :     :              :                                :        +- CometFilter\n            :     :              :                                :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :     :              :                                +- CometBroadcastExchange\n            :     :              :                                   +- CometProject\n            :     :              :                                      +- CometFilter\n            :     :              :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :              +- CometHashAggregate\n            :     :                 +- CometExchange\n            :     :                    +- CometHashAggregate\n            :     :                       +- CometProject\n            :     :                          +- CometBroadcastHashJoin\n            :     :                             :- CometProject\n            :     :                             :  +- CometFilter\n            :     :                             :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :                             +- CometBroadcastExchange\n            :     :                                +- CometFilter\n            :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :     +- CometBroadcastExchange\n            :        +- CometProject\n            :           +- 
CometFilter\n            :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometExchange\n                  :     :     +- CometProject\n                  :     :        +- CometBroadcastHashJoin\n                  :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :           :     +- ReusedSubquery\n                  :     :           +- CometBroadcastExchange\n                  :     :              +- CometProject\n                  :     :                 +- CometFilter\n                  :     :                    +- CometHashAggregate\n                  :     :                       +- CometExchange\n                  :     :                          +- CometHashAggregate\n                  :     :                             +- CometProject\n                  :     :                                +- CometBroadcastHashJoin\n                  :     :                                   :- CometProject\n                  :     :                                   :  +- CometBroadcastHashJoin\n                  :     :                                   :     :- CometFilter\n                  :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :                                   :     :        +- SubqueryBroadcast\n                  :     :                                   :     :           +- BroadcastExchange\n                  :     :                                   :     :              +- CometNativeColumnarToRow\n                  :     :                                   :     :                 +- CometProject\n                  :     :                                   :     :                    +- CometFilter\n                  :     :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                                   :     +- CometBroadcastExchange\n                  :     :                                   :        +- CometProject\n                  :     :                                   :           +- CometFilter\n                  :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :                                   +- CometBroadcastExchange\n                  :     :                                      +- CometFilter\n                  :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :     +- CometSort\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              :  +- ReusedSubquery\n                  :              +- CometHashAggregate\n                  :                 +- CometExchange\n                  :                    +- CometHashAggregate\n                  :                       +- CometProject\n                  :                          +- CometBroadcastHashJoin\n                  :                             :- CometProject\n                  :             
                :  +- CometFilter\n                  :                             :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                             +- CometBroadcastExchange\n                  :                                +- CometFilter\n                  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 127 out of 138 eligible operators (92%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q23b.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometSortMergeJoin\n      :              :     :  :- CometSort\n      :              :     :  :  +- CometColumnarExchange\n      :              :     :  :     +- Project\n      :              :     :  :        +- BroadcastHashJoin\n      :              :     :  :           :- Filter\n      :              :     :  :           :  +- ColumnarToRow\n      :              :     :  :           :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :              :     :  :           :           +- SubqueryBroadcast\n      :              :     :  :           :              +- BroadcastExchange\n      :              :     :  :           :                 +- CometNativeColumnarToRow\n      :              :     :  :           :                    +- CometProject\n      :              :     :  :           :                       +- CometFilter\n      :              :     :  :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :  :           +- BroadcastExchange\n      :              :     :  :              +- Project\n      :              :     :  :                 +- Filter\n      :              :     :  :                    +- HashAggregate\n      :              :     :  :                       +- CometNativeColumnarToRow\n      :              :     :  :                          +- CometColumnarExchange\n      :              :     :  :                             +- HashAggregate\n      :              :     :  :                                +- Project\n      :              :     :  :                                   +- BroadcastHashJoin\n      :              :     :  :                                      :- Project\n      :              :     :  :                                      :  +- BroadcastHashJoin\n      :              :     :  :                                      :     :- Filter\n      :              :     :  :                                      :     :  +- ColumnarToRow\n      :              :     :  :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :              :     :  :                                      :     :           +- SubqueryBroadcast\n      :              :     :  :                                      :     :              +- BroadcastExchange\n      :              :     :  :                                      :     :                 +- CometNativeColumnarToRow\n      :              :     :  :                                      :     :                    +- CometProject\n      :              :     :  :                                      :     :                       +- CometFilter\n      :              :     :  :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :  :                                      :     +- BroadcastExchange\n      :              :     :  :                                   
   :        +- CometNativeColumnarToRow\n      :              :     :  :                                      :           +- CometProject\n      :              :     :  :                                      :              +- CometFilter\n      :              :     :  :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :  :                                      +- BroadcastExchange\n      :              :     :  :                                         +- CometNativeColumnarToRow\n      :              :     :  :                                            +- CometFilter\n      :              :     :  :                                               +- CometNativeScan parquet spark_catalog.default.item\n      :              :     :  +- CometSort\n      :              :     :     +- CometProject\n      :              :     :        +- CometFilter\n      :              :     :           :  +- Subquery\n      :              :     :           :     +- HashAggregate\n      :              :     :           :        +- CometNativeColumnarToRow\n      :              :     :           :           +- CometColumnarExchange\n      :              :     :           :              +- HashAggregate\n      :              :     :           :                 +- HashAggregate\n      :              :     :           :                    +- CometNativeColumnarToRow\n      :              :     :           :                       +- CometColumnarExchange\n      :              :     :           :                          +- HashAggregate\n      :              :     :           :                             +- Project\n      :              :     :           :                                +- BroadcastHashJoin\n      :              :     :           :                                   :- Project\n      :              :     :           :                                   :  +- BroadcastHashJoin\n      :              :     :           :                                   :     :- Filter\n      :              :     :           :                                   :     :  +- ColumnarToRow\n      :              :     :           :                                   :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :              :     :           :                                   :     :           +- SubqueryBroadcast\n      :              :     :           :                                   :     :              +- BroadcastExchange\n      :              :     :           :                                   :     :                 +- CometNativeColumnarToRow\n      :              :     :           :                                   :     :                    +- CometProject\n      :              :     :           :                                   :     :                       +- CometFilter\n      :              :     :           :                                   :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :           :                                   :     +- BroadcastExchange\n      :              :     :           :                                   :        +- CometNativeColumnarToRow\n      :              :     :           :                                   :           +- CometFilter\n      :              :     :           :                             
      :              +- CometNativeScan parquet spark_catalog.default.customer\n      :              :     :           :                                   +- BroadcastExchange\n      :              :     :           :                                      +- CometNativeColumnarToRow\n      :              :     :           :                                         +- CometProject\n      :              :     :           :                                            +- CometFilter\n      :              :     :           :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n      :              :     :           +- CometHashAggregate\n      :              :     :              +- CometExchange\n      :              :     :                 +- CometHashAggregate\n      :              :     :                    +- CometProject\n      :              :     :                       +- CometBroadcastHashJoin\n      :              :     :                          :- CometProject\n      :              :     :                          :  +- CometFilter\n      :              :     :                          :     +- CometNativeScan parquet spark_catalog.default.store_sales\n      :              :     :                          +- CometBroadcastExchange\n      :              :     :                             +- CometFilter\n      :              :     :                                +- CometNativeScan parquet spark_catalog.default.customer\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometSortMergeJoin\n      :              :              :- CometSort\n      :              :              :  +- CometExchange\n      :              :              :     +- CometFilter\n      :              :              :        +- CometNativeScan parquet spark_catalog.default.customer\n      :              :              +- CometSort\n      :              :                 +- CometProject\n      :              :                    +- CometFilter\n      :              :                       :  +- ReusedSubquery\n      :              :                       +- CometHashAggregate\n      :              :                          +- CometExchange\n      :              :                             +- CometHashAggregate\n      :              :                                +- CometProject\n      :              :                                   +- CometBroadcastHashJoin\n      :              :                                      :- CometProject\n      :              :                                      :  +- CometFilter\n      :              :                                      :     +- CometNativeScan parquet spark_catalog.default.store_sales\n      :              :                                      +- CometBroadcastExchange\n      :              :                                         +- CometFilter\n      :              :                                            +- CometNativeScan parquet spark_catalog.default.customer\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- 
CometBroadcastHashJoin\n                     :     :- CometSortMergeJoin\n                     :     :  :- CometSort\n                     :     :  :  +- CometColumnarExchange\n                     :     :  :     +- Project\n                     :     :  :        +- BroadcastHashJoin\n                     :     :  :           :- Filter\n                     :     :  :           :  +- ColumnarToRow\n                     :     :  :           :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :  :           :           +- ReusedSubquery\n                     :     :  :           +- BroadcastExchange\n                     :     :  :              +- Project\n                     :     :  :                 +- Filter\n                     :     :  :                    +- HashAggregate\n                     :     :  :                       +- CometNativeColumnarToRow\n                     :     :  :                          +- CometColumnarExchange\n                     :     :  :                             +- HashAggregate\n                     :     :  :                                +- Project\n                     :     :  :                                   +- BroadcastHashJoin\n                     :     :  :                                      :- Project\n                     :     :  :                                      :  +- BroadcastHashJoin\n                     :     :  :                                      :     :- Filter\n                     :     :  :                                      :     :  +- ColumnarToRow\n                     :     :  :                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :  :                                      :     :           +- SubqueryBroadcast\n                     :     :  :                                      :     :              +- BroadcastExchange\n                     :     :  :                                      :     :                 +- CometNativeColumnarToRow\n                     :     :  :                                      :     :                    +- CometProject\n                     :     :  :                                      :     :                       +- CometFilter\n                     :     :  :                                      :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :  :                                      :     +- BroadcastExchange\n                     :     :  :                                      :        +- CometNativeColumnarToRow\n                     :     :  :                                      :           +- CometProject\n                     :     :  :                                      :              +- CometFilter\n                     :     :  :                                      :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :  :                                      +- BroadcastExchange\n                     :     :  :                                         +- CometNativeColumnarToRow\n                     :     :  :                                            +- CometFilter\n                     :     :  :                                               +- CometNativeScan parquet 
spark_catalog.default.item\n                     :     :  +- CometSort\n                     :     :     +- CometProject\n                     :     :        +- CometFilter\n                     :     :           :  +- ReusedSubquery\n                     :     :           +- CometHashAggregate\n                     :     :              +- CometExchange\n                     :     :                 +- CometHashAggregate\n                     :     :                    +- CometProject\n                     :     :                       +- CometBroadcastHashJoin\n                     :     :                          :- CometProject\n                     :     :                          :  +- CometFilter\n                     :     :                          :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                     :     :                          +- CometBroadcastExchange\n                     :     :                             +- CometFilter\n                     :     :                                +- CometNativeScan parquet spark_catalog.default.customer\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometSortMergeJoin\n                     :              :- CometSort\n                     :              :  +- CometExchange\n                     :              :     +- CometFilter\n                     :              :        +- CometNativeScan parquet spark_catalog.default.customer\n                     :              +- CometSort\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       :  +- ReusedSubquery\n                     :                       +- CometHashAggregate\n                     :                          +- CometExchange\n                     :                             +- CometHashAggregate\n                     :                                +- CometProject\n                     :                                   +- CometBroadcastHashJoin\n                     :                                      :- CometProject\n                     :                                      :  +- CometFilter\n                     :                                      :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                     :                                      +- CometBroadcastExchange\n                     :                                         +- CometFilter\n                     :                                            +- CometNativeScan parquet spark_catalog.default.customer\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 131 out of 190 eligible operators (68%). Final plan contains 20 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q23b.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometSortMergeJoin\n      :              :     :  :- CometSort\n      :              :     :  :  +- CometExchange\n      :              :     :  :     +- CometProject\n      :              :     :  :        +- CometBroadcastHashJoin\n      :              :     :  :           :- CometFilter\n      :              :     :  :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :  :           :        +- SubqueryBroadcast\n      :              :     :  :           :           +- BroadcastExchange\n      :              :     :  :           :              +- CometNativeColumnarToRow\n      :              :     :  :           :                 +- CometProject\n      :              :     :  :           :                    +- CometFilter\n      :              :     :  :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :  :           +- CometBroadcastExchange\n      :              :     :  :              +- CometProject\n      :              :     :  :                 +- CometFilter\n      :              :     :  :                    +- CometHashAggregate\n      :              :     :  :                       +- CometExchange\n      :              :     :  :                          +- CometHashAggregate\n      :              :     :  :                             +- CometProject\n      :              :     :  :                                +- CometBroadcastHashJoin\n      :              :     :  :                                   :- CometProject\n      :              :     :  :                                   :  +- CometBroadcastHashJoin\n      :              :     :  :                                   :     :- CometFilter\n      :              :     :  :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :  :                                   :     :        +- SubqueryBroadcast\n      :              :     :  :                                   :     :           +- BroadcastExchange\n      :              :     :  :                                   :     :              +- CometNativeColumnarToRow\n      :              :     :  :                                   :     :                 +- CometProject\n      :              :     :  :                                   :     :                    +- CometFilter\n      :              :     :  :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :  :                                   :     +- CometBroadcastExchange\n      :              :     :  :                                   :        +- CometProject\n      :              :     :  :                                   :           +- CometFilter\n      :              :     :  :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :  :                          
         +- CometBroadcastExchange\n      :              :     :  :                                      +- CometFilter\n      :              :     :  :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :              :     :  +- CometSort\n      :              :     :     +- CometProject\n      :              :     :        +- CometFilter\n      :              :     :           :  +- Subquery\n      :              :     :           :     +- CometNativeColumnarToRow\n      :              :     :           :        +- CometHashAggregate\n      :              :     :           :           +- CometExchange\n      :              :     :           :              +- CometHashAggregate\n      :              :     :           :                 +- CometHashAggregate\n      :              :     :           :                    +- CometExchange\n      :              :     :           :                       +- CometHashAggregate\n      :              :     :           :                          +- CometProject\n      :              :     :           :                             +- CometBroadcastHashJoin\n      :              :     :           :                                :- CometProject\n      :              :     :           :                                :  +- CometBroadcastHashJoin\n      :              :     :           :                                :     :- CometFilter\n      :              :     :           :                                :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :           :                                :     :        +- SubqueryBroadcast\n      :              :     :           :                                :     :           +- BroadcastExchange\n      :              :     :           :                                :     :              +- CometNativeColumnarToRow\n      :              :     :           :                                :     :                 +- CometProject\n      :              :     :           :                                :     :                    +- CometFilter\n      :              :     :           :                                :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :           :                                :     +- CometBroadcastExchange\n      :              :     :           :                                :        +- CometFilter\n      :              :     :           :                                :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :           :                                +- CometBroadcastExchange\n      :              :     :           :                                   +- CometProject\n      :              :     :           :                                      +- CometFilter\n      :              :     :           :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :           +- CometHashAggregate\n      :              :     :              +- CometExchange\n      :              :     :                 +- CometHashAggregate\n      :              :     :                    +- CometProject\n      :              :     :                       +- CometBroadcastHashJoin\n      :              :     :           
               :- CometProject\n      :              :     :                          :  +- CometFilter\n      :              :     :                          :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :                          +- CometBroadcastExchange\n      :              :     :                             +- CometFilter\n      :              :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometSortMergeJoin\n      :              :              :- CometSort\n      :              :              :  +- CometExchange\n      :              :              :     +- CometFilter\n      :              :              :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :              +- CometSort\n      :              :                 +- CometProject\n      :              :                    +- CometFilter\n      :              :                       :  +- ReusedSubquery\n      :              :                       +- CometHashAggregate\n      :              :                          +- CometExchange\n      :              :                             +- CometHashAggregate\n      :              :                                +- CometProject\n      :              :                                   +- CometBroadcastHashJoin\n      :              :                                      :- CometProject\n      :              :                                      :  +- CometFilter\n      :              :                                      :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :                                      +- CometBroadcastExchange\n      :              :                                         +- CometFilter\n      :              :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometSortMergeJoin\n                     :     :  :- CometSort\n                     :     :  :  +- CometExchange\n                     :     :  :     +- CometProject\n                     :     :  :        +- CometBroadcastHashJoin\n                     :     :  :           :- CometFilter\n                     :     :  :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :     :  :           :        +- ReusedSubquery\n                     :     :  :           +- CometBroadcastExchange\n                     :     :  :              +- CometProject\n                     :     :  :                 +- CometFilter\n                     :     :  :                    +- CometHashAggregate\n                     :     :  :                       +- CometExchange\n                  
   :     :  :                          +- CometHashAggregate\n                     :     :  :                             +- CometProject\n                     :     :  :                                +- CometBroadcastHashJoin\n                     :     :  :                                   :- CometProject\n                     :     :  :                                   :  +- CometBroadcastHashJoin\n                     :     :  :                                   :     :- CometFilter\n                     :     :  :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :  :                                   :     :        +- SubqueryBroadcast\n                     :     :  :                                   :     :           +- BroadcastExchange\n                     :     :  :                                   :     :              +- CometNativeColumnarToRow\n                     :     :  :                                   :     :                 +- CometProject\n                     :     :  :                                   :     :                    +- CometFilter\n                     :     :  :                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :  :                                   :     +- CometBroadcastExchange\n                     :     :  :                                   :        +- CometProject\n                     :     :  :                                   :           +- CometFilter\n                     :     :  :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :  :                                   +- CometBroadcastExchange\n                     :     :  :                                      +- CometFilter\n                     :     :  :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     :     :  +- CometSort\n                     :     :     +- CometProject\n                     :     :        +- CometFilter\n                     :     :           :  +- ReusedSubquery\n                     :     :           +- CometHashAggregate\n                     :     :              +- CometExchange\n                     :     :                 +- CometHashAggregate\n                     :     :                    +- CometProject\n                     :     :                       +- CometBroadcastHashJoin\n                     :     :                          :- CometProject\n                     :     :                          :  +- CometFilter\n                     :     :                          :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :                          +- CometBroadcastExchange\n                     :     :                             +- CometFilter\n                     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometSortMergeJoin\n                     :              :- CometSort\n                     :              :  +- CometExchange\n                     :              :     +- 
CometFilter\n                     :              :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :              +- CometSort\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       :  +- ReusedSubquery\n                     :                       +- CometHashAggregate\n                     :                          +- CometExchange\n                     :                             +- CometHashAggregate\n                     :                                +- CometProject\n                     :                                   +- CometBroadcastHashJoin\n                     :                                      :- CometProject\n                     :                                      :  +- CometFilter\n                     :                                      :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :                                      +- CometBroadcastExchange\n                     :                                         +- CometFilter\n                     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 177 out of 190 eligible operators (93%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q24a.native_datafusion/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometNativeScan parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometNativeScan parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                                               +- CometNativeScan parquet 
spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometNativeScan parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q24a.native_iceberg_compat/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                
                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). 
Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q24b.native_datafusion/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometNativeScan parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometNativeScan parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                                               +- CometNativeScan parquet 
spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometNativeScan parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q24b.native_iceberg_compat/extended.txt",
    "content": "Filter\n:  +- Subquery\n:     +- HashAggregate\n:        +- CometNativeColumnarToRow\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +- HashAggregate\n:                    +- CometNativeColumnarToRow\n:                       +- CometColumnarExchange\n:                          +- HashAggregate\n:                             +- Project\n:                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n:                                   :- CometNativeColumnarToRow\n:                                   :  +- CometProject\n:                                   :     +- CometBroadcastHashJoin\n:                                   :        :- CometProject\n:                                   :        :  +- CometBroadcastHashJoin\n:                                   :        :     :- CometProject\n:                                   :        :     :  +- CometBroadcastHashJoin\n:                                   :        :     :     :- CometProject\n:                                   :        :     :     :  +- CometSortMergeJoin\n:                                   :        :     :     :     :- CometSort\n:                                   :        :     :     :     :  +- CometExchange\n:                                   :        :     :     :     :     +- CometProject\n:                                   :        :     :     :     :        +- CometFilter\n:                                   :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:                                   :        :     :     :     +- CometSort\n:                                   :        :     :     :        +- CometExchange\n:                                   :        :     :     :           +- CometProject\n:                                   :        :     :     :              +- CometFilter\n:                                   :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n:                                   :        :     :     +- CometBroadcastExchange\n:                                   :        :     :        +- CometProject\n:                                   :        :     :           +- CometFilter\n:                                   :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:                                   :        :     +- CometBroadcastExchange\n:                                   :        :        +- CometProject\n:                                   :        :           +- CometFilter\n:                                   :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n:                                   :        +- CometBroadcastExchange\n:                                   :           +- CometProject\n:                                   :              +- CometFilter\n:                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n:                                   +- BroadcastExchange\n:                                      +- CometNativeColumnarToRow\n:                                         +- CometProject\n:                                            +- CometFilter\n:                
                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                              :- CometNativeColumnarToRow\n                              :  +- CometProject\n                              :     +- CometBroadcastHashJoin\n                              :        :- CometProject\n                              :        :  +- CometBroadcastHashJoin\n                              :        :     :- CometProject\n                              :        :     :  +- CometBroadcastHashJoin\n                              :        :     :     :- CometProject\n                              :        :     :     :  +- CometSortMergeJoin\n                              :        :     :     :     :- CometSort\n                              :        :     :     :     :  +- CometExchange\n                              :        :     :     :     :     +- CometProject\n                              :        :     :     :     :        +- CometFilter\n                              :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :        :     :     :     +- CometSort\n                              :        :     :     :        +- CometExchange\n                              :        :     :     :           +- CometProject\n                              :        :     :     :              +- CometFilter\n                              :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :        :     :     +- CometBroadcastExchange\n                              :        :     :        +- CometProject\n                              :        :     :           +- CometFilter\n                              :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :        :     +- CometBroadcastExchange\n                              :        :        +- CometProject\n                              :        :           +- CometFilter\n                              :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :        +- CometBroadcastExchange\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 70 out of 86 eligible operators (81%). 
Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q25.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- Filter\n                  :     :     :     :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :     :        +- Filter\n                  :     :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                             +- CometProject\n                  :     :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- Filter\n                  :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :                    +- ReusedSubquery\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n            
      :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 22 out of 57 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q25.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :     :     :     :           +- BroadcastExchange\n                  :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                 +- CometProject\n                  :     :     :     :     :     :     :                    +- CometFilter\n                  :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :                 +- ReusedSubquery\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 52 out of 57 eligible operators (91%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q26.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Filter\n                  :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :              +- BroadcastExchange\n                  :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :                    +- CometProject\n                  :     :     :     :                       +- CometFilter\n                  :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.item\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 16 out of 35 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q26.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :           +- BroadcastExchange\n                  :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :                 +- CometProject\n                  :     :     :     :                    +- CometFilter\n                  :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 33 out of 35 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q27.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Filter\n                     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :           +- SubqueryBroadcast\n                     :     :     :     :              +- BroadcastExchange\n                     :     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :     :                    +- CometProject\n                     :     :     :     :                       +- CometFilter\n                     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometProject\n                     :     :     :              +- CometFilter\n                     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.store\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 16 out of 36 eligible operators (44%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q27.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometFilter\n                     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :     :     :        +- SubqueryBroadcast\n                     :     :     :     :           +- BroadcastExchange\n                     :     :     :     :              +- CometNativeColumnarToRow\n                     :     :     :     :                 +- CometProject\n                     :     :     :     :                    +- CometFilter\n                     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometProject\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 34 out of 36 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q28.native_datafusion/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :     +- CometColumnarExchange\n:  :  :  :  :        +- HashAggregate\n:  :  :  :  :           +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :  :              +- CometNativeColumnarToRow\n:  :  :  :  :                 +- CometExchange\n:  :  :  :  :                    +- CometHashAggregate\n:  :  :  :  :                       +- CometProject\n:  :  :  :  :                          +- CometFilter\n:  :  :  :  :                             +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometColumnarExchange\n:  :  :  :              +- HashAggregate\n:  :  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :                    +- CometNativeColumnarToRow\n:  :  :  :                       +- CometExchange\n:  :  :  :                          +- CometHashAggregate\n:  :  :  :                             +- CometProject\n:  :  :  :                                +- CometFilter\n:  :  :  :                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometColumnarExchange\n:  :  :              +- HashAggregate\n:  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :                    +- CometNativeColumnarToRow\n:  :  :                       +- CometExchange\n:  :  :                          +- CometHashAggregate\n:  :  :                             +- CometProject\n:  :  :                                +- CometFilter\n:  :  :                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometColumnarExchange\n:  :              +- HashAggregate\n:  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :                    +- CometNativeColumnarToRow\n:  :                       +- CometExchange\n:  :                          +- CometHashAggregate\n:  :                             +- CometProject\n:  :                                +- CometFilter\n:  :                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:                    +- CometNativeColumnarToRow\n:                       +- CometExchange\n:                          +- CometHashAggregate\n:                             +- CometProject\n:                                +- CometFilter\n:                                   +- CometNativeScan parquet spark_catalog.default.store_sales\n+- BroadcastExchange\n   +- CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometColumnarExchange\n            +- 
HashAggregate\n               +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n                  +- CometNativeColumnarToRow\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.store_sales\n\nComet accelerated 42 out of 64 eligible operators (65%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q28.native_iceberg_compat/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :     +- CometColumnarExchange\n:  :  :  :  :        +- HashAggregate\n:  :  :  :  :           +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :  :              +- CometNativeColumnarToRow\n:  :  :  :  :                 +- CometExchange\n:  :  :  :  :                    +- CometHashAggregate\n:  :  :  :  :                       +- CometProject\n:  :  :  :  :                          +- CometFilter\n:  :  :  :  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometColumnarExchange\n:  :  :  :              +- HashAggregate\n:  :  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :  :                    +- CometNativeColumnarToRow\n:  :  :  :                       +- CometExchange\n:  :  :  :                          +- CometHashAggregate\n:  :  :  :                             +- CometProject\n:  :  :  :                                +- CometFilter\n:  :  :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometColumnarExchange\n:  :  :              +- HashAggregate\n:  :  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :  :                    +- CometNativeColumnarToRow\n:  :  :                       +- CometExchange\n:  :  :                          +- CometHashAggregate\n:  :  :                             +- CometProject\n:  :  :                                +- CometFilter\n:  :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometColumnarExchange\n:  :              +- HashAggregate\n:  :                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:  :                    +- CometNativeColumnarToRow\n:  :                       +- CometExchange\n:  :                          +- CometHashAggregate\n:  :                             +- CometProject\n:  :                                +- CometFilter\n:  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometColumnarExchange\n:              +- HashAggregate\n:                 +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n:                    +- CometNativeColumnarToRow\n:                       +- CometExchange\n:                          +- CometHashAggregate\n:                             +- CometProject\n:                                +- CometFilter\n:                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n+- BroadcastExchange\n   +- 
CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometColumnarExchange\n            +- HashAggregate\n               +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n                  +- CometNativeColumnarToRow\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n\nComet accelerated 42 out of 64 eligible operators (65%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q29.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- Filter\n                  :     :     :     :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :     :        +- Filter\n                  :     :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                             +- CometProject\n                  :     :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- Filter\n                  :     :     :     :     :           +- ColumnarToRow\n                  :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :                    +- SubqueryBroadcast\n                  :     :     :     :     :                       +- BroadcastExchange\n                  :     :     :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :     :     :                             +- CometProject\n    
              :     :     :     :     :                                +- CometFilter\n                  :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n                  :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 25 out of 61 eligible operators (40%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q29.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometFilter\n                  :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :     :     :     :           +- BroadcastExchange\n                  :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                 +- CometProject\n                  :     :     :     :     :     :     :                    +- CometFilter\n                  :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :     :                          +- CometProject\n                  :     :     :     :     :                             +- CometFilter\n                  :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     +- 
CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 55 out of 61 eligible operators (90%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q3.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q3.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q30.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Project\n      :     :     :                    :  +- BroadcastHashJoin\n      :     :     :                    :     :- Filter\n      :     :     :                    :     :  +- ColumnarToRow\n      :     :     :                    :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :     :           +- SubqueryBroadcast\n      :     :     :                    :     :              +- BroadcastExchange\n      :     :     :                    :     :                 +- CometNativeColumnarToRow\n      :     :     :                    :     :                    +- CometProject\n      :     :     :                    :     :                       +- CometFilter\n      :     :     :                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    :     +- BroadcastExchange\n      :     :     :                    :        +- CometNativeColumnarToRow\n      :     :     :                    :           +- CometProject\n      :     :     :                    :              +- CometFilter\n      :     :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Filter\n      :     :                                         :     :  +- ColumnarToRow\n      :     :                                         :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :           +- ReusedSubquery\n      :     :                                         :  
   +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometProject\n      :     :                                         :              +- CometFilter\n      :     :                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometProject\n      :     :                                                  +- CometFilter\n      :     :                                                     +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 24 out of 61 eligible operators (39%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q30.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometProject\n         :     :     :                 :  +- CometBroadcastHashJoin\n         :     :     :                 :     :- CometFilter\n         :     :     :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :     :     :                 :     :        +- SubqueryBroadcast\n         :     :     :                 :     :           +- BroadcastExchange\n         :     :     :                 :     :              +- CometNativeColumnarToRow\n         :     :     :                 :     :                 +- CometProject\n         :     :     :                 :     :                    +- CometFilter\n         :     :     :                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 :     +- CometBroadcastExchange\n         :     :     :                 :        +- CometProject\n         :     :     :                 :           +- CometFilter\n         :     :     :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometFilter\n         :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometHashAggregate\n         :     :                       +- CometExchange\n         :     :                          +- CometHashAggregate\n         :     :                             +- CometProject\n         :     :                                +- CometBroadcastHashJoin\n         :     :                                   :- CometProject\n         :     :                                   :  +- CometBroadcastHashJoin\n         :     :                                   :     :- CometFilter\n         :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :     :                                   :     :        +- ReusedSubquery\n         :     :                                   :     +- CometBroadcastExchange\n         :     :                                   :        +- CometProject\n         :     :                                   :           +- CometFilter\n         :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                   +- CometBroadcastExchange\n    
     :     :                                      +- CometProject\n         :     :                                         +- CometFilter\n         :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 58 out of 61 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q31.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Project\n            :  +- BroadcastHashJoin\n            :     :- BroadcastHashJoin\n            :     :  :- Project\n            :     :  :  +- BroadcastHashJoin\n            :     :  :     :- BroadcastHashJoin\n            :     :  :     :  :- HashAggregate\n            :     :  :     :  :  +- CometNativeColumnarToRow\n            :     :  :     :  :     +- CometColumnarExchange\n            :     :  :     :  :        +- HashAggregate\n            :     :  :     :  :           +- Project\n            :     :  :     :  :              +- BroadcastHashJoin\n            :     :  :     :  :                 :- Project\n            :     :  :     :  :                 :  +- BroadcastHashJoin\n            :     :  :     :  :                 :     :- Filter\n            :     :  :     :  :                 :     :  +- ColumnarToRow\n            :     :  :     :  :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :  :     :  :                 :     :           +- SubqueryBroadcast\n            :     :  :     :  :                 :     :              +- BroadcastExchange\n            :     :  :     :  :                 :     :                 +- CometNativeColumnarToRow\n            :     :  :     :  :                 :     :                    +- CometFilter\n            :     :  :     :  :                 :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :  :                 :     +- BroadcastExchange\n            :     :  :     :  :                 :        +- CometNativeColumnarToRow\n            :     :  :     :  :                 :           +- CometFilter\n            :     :  :     :  :                 :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :  :                 +- BroadcastExchange\n            :     :  :     :  :                    +- CometNativeColumnarToRow\n            :     :  :     :  :                       +- CometFilter\n            :     :  :     :  :                          +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     :  :     :  +- BroadcastExchange\n            :     :  :     :     +- HashAggregate\n            :     :  :     :        +- CometNativeColumnarToRow\n            :     :  :     :           +- CometColumnarExchange\n            :     :  :     :              +- HashAggregate\n            :     :  :     :                 +- Project\n            :     :  :     :                    +- BroadcastHashJoin\n            :     :  :     :                       :- Project\n            :     :  :     :                       :  +- BroadcastHashJoin\n            :     :  :     :                       :     :- Filter\n            :     :  :     :                       :     :  +- ColumnarToRow\n            :     :  :     :                       :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :  :     :                       :     :           +- SubqueryBroadcast\n            :     :  :     :                       :     :              +- BroadcastExchange\n            :     :  :     :                       :     :                 +- 
CometNativeColumnarToRow\n            :     :  :     :                       :     :                    +- CometFilter\n            :     :  :     :                       :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :                       :     +- BroadcastExchange\n            :     :  :     :                       :        +- CometNativeColumnarToRow\n            :     :  :     :                       :           +- CometFilter\n            :     :  :     :                       :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :     :                       +- BroadcastExchange\n            :     :  :     :                          +- CometNativeColumnarToRow\n            :     :  :     :                             +- CometFilter\n            :     :  :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     :  :     +- BroadcastExchange\n            :     :  :        +- HashAggregate\n            :     :  :           +- CometNativeColumnarToRow\n            :     :  :              +- CometColumnarExchange\n            :     :  :                 +- HashAggregate\n            :     :  :                    +- Project\n            :     :  :                       +- BroadcastHashJoin\n            :     :  :                          :- Project\n            :     :  :                          :  +- BroadcastHashJoin\n            :     :  :                          :     :- Filter\n            :     :  :                          :     :  +- ColumnarToRow\n            :     :  :                          :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :  :                          :     :           +- SubqueryBroadcast\n            :     :  :                          :     :              +- BroadcastExchange\n            :     :  :                          :     :                 +- CometNativeColumnarToRow\n            :     :  :                          :     :                    +- CometFilter\n            :     :  :                          :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :                          :     +- BroadcastExchange\n            :     :  :                          :        +- CometNativeColumnarToRow\n            :     :  :                          :           +- CometFilter\n            :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :  :                          +- BroadcastExchange\n            :     :  :                             +- CometNativeColumnarToRow\n            :     :  :                                +- CometFilter\n            :     :  :                                   +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     :  +- BroadcastExchange\n            :     :     +- HashAggregate\n            :     :        +- CometNativeColumnarToRow\n            :     :           +- CometColumnarExchange\n            :     :              +- HashAggregate\n            :     :                 +- Project\n            :     :                    +- BroadcastHashJoin\n            :     :                       :- Project\n            :     :                       :  +- BroadcastHashJoin\n            :    
 :                       :     :- Filter\n            :     :                       :     :  +- ColumnarToRow\n            :     :                       :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :     :                       :     :           +- ReusedSubquery\n            :     :                       :     +- BroadcastExchange\n            :     :                       :        +- CometNativeColumnarToRow\n            :     :                       :           +- CometFilter\n            :     :                       :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :     :                       +- BroadcastExchange\n            :     :                          +- CometNativeColumnarToRow\n            :     :                             +- CometFilter\n            :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n            :     +- BroadcastExchange\n            :        +- HashAggregate\n            :           +- CometNativeColumnarToRow\n            :              +- CometColumnarExchange\n            :                 +- HashAggregate\n            :                    +- Project\n            :                       +- BroadcastHashJoin\n            :                          :- Project\n            :                          :  +- BroadcastHashJoin\n            :                          :     :- Filter\n            :                          :     :  +- ColumnarToRow\n            :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                          :     :           +- ReusedSubquery\n            :                          :     +- BroadcastExchange\n            :                          :        +- CometNativeColumnarToRow\n            :                          :           +- CometFilter\n            :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                          +- BroadcastExchange\n            :                             +- CometNativeColumnarToRow\n            :                                +- CometFilter\n            :                                   +- CometNativeScan parquet spark_catalog.default.customer_address\n            +- BroadcastExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- ReusedSubquery\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometFilter\n                                 :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                       
          +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 38 out of 120 eligible operators (31%). Final plan contains 28 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q31.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometProject\n            :  +- CometBroadcastHashJoin\n            :     :- CometBroadcastHashJoin\n            :     :  :- CometProject\n            :     :  :  +- CometBroadcastHashJoin\n            :     :  :     :- CometBroadcastHashJoin\n            :     :  :     :  :- CometHashAggregate\n            :     :  :     :  :  +- CometExchange\n            :     :  :     :  :     +- CometHashAggregate\n            :     :  :     :  :        +- CometProject\n            :     :  :     :  :           +- CometBroadcastHashJoin\n            :     :  :     :  :              :- CometProject\n            :     :  :     :  :              :  +- CometBroadcastHashJoin\n            :     :  :     :  :              :     :- CometFilter\n            :     :  :     :  :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :  :     :  :              :     :        +- SubqueryBroadcast\n            :     :  :     :  :              :     :           +- BroadcastExchange\n            :     :  :     :  :              :     :              +- CometNativeColumnarToRow\n            :     :  :     :  :              :     :                 +- CometFilter\n            :     :  :     :  :              :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :     :  :              :     +- CometBroadcastExchange\n            :     :  :     :  :              :        +- CometFilter\n            :     :  :     :  :              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :     :  :              +- CometBroadcastExchange\n            :     :  :     :  :                 +- CometFilter\n            :     :  :     :  :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     :  :     :  +- CometBroadcastExchange\n            :     :  :     :     +- CometHashAggregate\n            :     :  :     :        +- CometExchange\n            :     :  :     :           +- CometHashAggregate\n            :     :  :     :              +- CometProject\n            :     :  :     :                 +- CometBroadcastHashJoin\n            :     :  :     :                    :- CometProject\n            :     :  :     :                    :  +- CometBroadcastHashJoin\n            :     :  :     :                    :     :- CometFilter\n            :     :  :     :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :  :     :                    :     :        +- SubqueryBroadcast\n            :     :  :     :                    :     :           +- BroadcastExchange\n            :     :  :     :                    :     :              +- CometNativeColumnarToRow\n            :     :  :     :                    :     :                 +- CometFilter\n            :     :  :     :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :     :                    :     +- CometBroadcastExchange\n            :     :  :     :                    :        +- CometFilter\n            :     :  :     :                    :           +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n            :     :  :     :                    +- CometBroadcastExchange\n            :     :  :     :                       +- CometFilter\n            :     :  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     :  :     +- CometBroadcastExchange\n            :     :  :        +- CometHashAggregate\n            :     :  :           +- CometExchange\n            :     :  :              +- CometHashAggregate\n            :     :  :                 +- CometProject\n            :     :  :                    +- CometBroadcastHashJoin\n            :     :  :                       :- CometProject\n            :     :  :                       :  +- CometBroadcastHashJoin\n            :     :  :                       :     :- CometFilter\n            :     :  :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :     :  :                       :     :        +- SubqueryBroadcast\n            :     :  :                       :     :           +- BroadcastExchange\n            :     :  :                       :     :              +- CometNativeColumnarToRow\n            :     :  :                       :     :                 +- CometFilter\n            :     :  :                       :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :                       :     +- CometBroadcastExchange\n            :     :  :                       :        +- CometFilter\n            :     :  :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :  :                       +- CometBroadcastExchange\n            :     :  :                          +- CometFilter\n            :     :  :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     :  +- CometBroadcastExchange\n            :     :     +- CometHashAggregate\n            :     :        +- CometExchange\n            :     :           +- CometHashAggregate\n            :     :              +- CometProject\n            :     :                 +- CometBroadcastHashJoin\n            :     :                    :- CometProject\n            :     :                    :  +- CometBroadcastHashJoin\n            :     :                    :     :- CometFilter\n            :     :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n            :     :                    :     :        +- ReusedSubquery\n            :     :                    :     +- CometBroadcastExchange\n            :     :                    :        +- CometFilter\n            :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :     :                    +- CometBroadcastExchange\n            :     :                       +- CometFilter\n            :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :     +- CometBroadcastExchange\n            :        +- CometHashAggregate\n            :           +- CometExchange\n            :              +- CometHashAggregate\n            :                 +- CometProject\n            :                    +- CometBroadcastHashJoin\n            :  
                     :- CometProject\n            :                       :  +- CometBroadcastHashJoin\n            :                       :     :- CometFilter\n            :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n            :                       :     :        +- ReusedSubquery\n            :                       :     +- CometBroadcastExchange\n            :                       :        +- CometFilter\n            :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                       +- CometBroadcastExchange\n            :                          +- CometFilter\n            :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            +- CometBroadcastExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- ReusedSubquery\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 111 out of 120 eligible operators (92%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q32.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Filter\n               :     :     :  +- ColumnarToRow\n               :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :           +- SubqueryBroadcast\n               :     :     :              +- BroadcastExchange\n               :     :     :                 +- CometNativeColumnarToRow\n               :     :     :                    +- CometProject\n               :     :     :                       +- CometFilter\n               :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.item\n               :     +- BroadcastExchange\n               :        +- Filter\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Project\n               :                          +- BroadcastHashJoin\n               :                             :- Filter\n               :                             :  +- ColumnarToRow\n               :                             :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                             :           +- ReusedSubquery\n               :                             +- BroadcastExchange\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometProject\n               :                                      +- CometFilter\n               :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 38 eligible operators (36%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q32.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :     :     :        +- SubqueryBroadcast\n               :     :     :           +- BroadcastExchange\n               :     :     :              +- CometNativeColumnarToRow\n               :     :     :                 +- CometProject\n               :     :     :                    +- CometFilter\n               :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                          :        +- ReusedSubquery\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 35 out of 38 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q33.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- SubqueryBroadcast\n               :                 :     :     :              +- BroadcastExchange\n               :                 :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :                    +- CometProject\n               :                 :     :     :                       +- CometFilter\n               :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometNativeScan parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 : 
    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- ReusedSubquery\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometNativeScan parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.item\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :           +- ReusedSubquery\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometBroadcastHashJoin\n                                          :- CometFilter\n                                          
:  +- CometNativeScan parquet spark_catalog.default.item\n                                          +- CometBroadcastExchange\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 46 out of 93 eligible operators (49%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q33.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :     :     :        +- SubqueryBroadcast\n               :              :     :     :           +- BroadcastExchange\n               :              :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :                 +- CometProject\n               :              :     :     :                    +- CometFilter\n               :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometBroadcastHashJoin\n               :                    :- CometFilter\n               :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    +- CometBroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :     :        +- ReusedSubquery\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometBroadcastHashJoin\n               :                    :- CometFilter\n               :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    +- CometBroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :     :        +- ReusedSubquery\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              +- CometBroadcastExchange\n                                 +- CometBroadcastHashJoin\n                                    :- CometFilter\n                                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 89 out of 93 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q34.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Filter\n            :  +- HashAggregate\n            :     +- CometNativeColumnarToRow\n            :        +- CometColumnarExchange\n            :           +- HashAggregate\n            :              +- Project\n            :                 +- BroadcastHashJoin\n            :                    :- Project\n            :                    :  +- BroadcastHashJoin\n            :                    :     :- Project\n            :                    :     :  +- BroadcastHashJoin\n            :                    :     :     :- Filter\n            :                    :     :     :  +- ColumnarToRow\n            :                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                    :     :     :           +- SubqueryBroadcast\n            :                    :     :     :              +- BroadcastExchange\n            :                    :     :     :                 +- CometNativeColumnarToRow\n            :                    :     :     :                    +- CometProject\n            :                    :     :     :                       +- CometFilter\n            :                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     :     +- BroadcastExchange\n            :                    :     :        +- CometNativeColumnarToRow\n            :                    :     :           +- CometProject\n            :                    :     :              +- CometFilter\n            :                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     +- BroadcastExchange\n            :                    :        +- CometNativeColumnarToRow\n            :                    :           +- CometProject\n            :                    :              +- CometFilter\n            :                    :                 +- CometNativeScan parquet spark_catalog.default.store\n            :                    +- BroadcastExchange\n            :                       +- CometNativeColumnarToRow\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometNativeScan parquet spark_catalog.default.household_demographics\n            +- BroadcastExchange\n               +- CometNativeColumnarToRow\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 37 eligible operators (48%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q34.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometFilter\n            :  +- CometHashAggregate\n            :     +- CometExchange\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometFilter\n            :                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :        +- SubqueryBroadcast\n            :                 :     :     :           +- BroadcastExchange\n            :                 :     :     :              +- CometNativeColumnarToRow\n            :                 :     :     :                 +- CometProject\n            :                 :     :     :                    +- CometFilter\n            :                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometProject\n            :                 :     :           +- CometFilter\n            :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometProject\n            :                 :           +- CometFilter\n            :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 37 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q35.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :- BroadcastHashJoin\n                  :     :        :  :- BroadcastHashJoin\n                  :     :        :  :  :- CometNativeColumnarToRow\n                  :     :        :  :  :  +- CometFilter\n                  :     :        :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :        :  :  +- BroadcastExchange\n                  :     :        :  :     +- Project\n                  :     :        :  :        +- BroadcastHashJoin\n                  :     :        :  :           :- ColumnarToRow\n                  :     :        :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :  :           :        +- SubqueryBroadcast\n                  :     :        :  :           :           +- BroadcastExchange\n                  :     :        :  :           :              +- CometNativeColumnarToRow\n                  :     :        :  :           :                 +- CometProject\n                  :     :        :  :           :                    +- CometFilter\n                  :     :        :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  :           +- BroadcastExchange\n                  :     :        :  :              +- CometNativeColumnarToRow\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- Project\n                  :     :        :        +- BroadcastHashJoin\n                  :     :        :           :- ColumnarToRow\n                  :     :        :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :           :        +- ReusedSubquery\n                  :     :        :           +- BroadcastExchange\n                  :     :        :              +- CometNativeColumnarToRow\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- 
BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 54 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q35.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                  :     :        :  :- CometNativeColumnarToRow\n                  :     :        :  :  +- CometBroadcastHashJoin\n                  :     :        :  :     :- CometFilter\n                  :     :        :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :        :  :     +- CometBroadcastExchange\n                  :     :        :  :        +- CometProject\n                  :     :        :  :           +- CometBroadcastHashJoin\n                  :     :        :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :        :  :              :     +- SubqueryBroadcast\n                  :     :        :  :              :        +- BroadcastExchange\n                  :     :        :  :              :           +- CometNativeColumnarToRow\n                  :     :        :  :              :              +- CometProject\n                  :     :        :  :              :                 +- CometFilter\n                  :     :        :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  :              +- CometBroadcastExchange\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- CometNativeColumnarToRow\n                  :     :        :        +- CometProject\n                  :     :        :           +- CometBroadcastHashJoin\n                  :     :        :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :        :              :     +- ReusedSubquery\n                  :     :        :              +- CometBroadcastExchange\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- CometNativeColumnarToRow\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n          
        :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 54 eligible operators (64%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q36.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- Filter\n                                    :     :     :  +- ColumnarToRow\n                                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :     :           +- SubqueryBroadcast\n                                    :     :     :              +- BroadcastExchange\n                                    :     :     :                 +- CometNativeColumnarToRow\n                                    :     :     :                    +- CometProject\n                                    :     :     :                       +- CometFilter\n                                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometProject\n                                    :     :              +- CometFilter\n                                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.item\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 15 out of 34 eligible operators (44%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q36.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometExpand\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :     :        +- SubqueryBroadcast\n                                 :     :     :           +- BroadcastExchange\n                                 :     :     :              +- CometNativeColumnarToRow\n                                 :     :     :                 +- CometProject\n                                 :     :     :                    +- CometFilter\n                                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 29 out of 34 eligible operators (85%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q37.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- BroadcastExchange\n                  :  +- Project\n                  :     +- BroadcastHashJoin\n                  :        :- Project\n                  :        :  +- BroadcastHashJoin\n                  :        :     :- CometNativeColumnarToRow\n                  :        :     :  +- CometProject\n                  :        :     :     +- CometFilter\n                  :        :     :        +- CometNativeScan parquet spark_catalog.default.item\n                  :        :     +- BroadcastExchange\n                  :        :        +- Project\n                  :        :           +- Filter\n                  :        :              +- ColumnarToRow\n                  :        :                 +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :        :                       +- SubqueryBroadcast\n                  :        :                          +- BroadcastExchange\n                  :        :                             +- CometNativeColumnarToRow\n                  :        :                                +- CometProject\n                  :        :                                   +- CometFilter\n                  :        :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :        +- BroadcastExchange\n                  :           +- CometNativeColumnarToRow\n                  :              +- CometProject\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n\nComet accelerated 15 out of 30 eligible operators (50%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q37.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometBroadcastExchange\n                  :  +- CometProject\n                  :     +- CometBroadcastHashJoin\n                  :        :- CometProject\n                  :        :  +- CometBroadcastHashJoin\n                  :        :     :- CometProject\n                  :        :     :  +- CometFilter\n                  :        :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :        :     +- CometBroadcastExchange\n                  :        :        +- CometProject\n                  :        :           +- CometFilter\n                  :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :        :                    +- SubqueryBroadcast\n                  :        :                       +- BroadcastExchange\n                  :        :                          +- CometNativeColumnarToRow\n                  :        :                             +- CometProject\n                  :        :                                +- CometFilter\n                  :        :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        +- CometBroadcastExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n\nComet accelerated 28 out of 30 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q38.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometBroadcastHashJoin\n               :  :- CometHashAggregate\n               :  :  +- CometColumnarExchange\n               :  :     +- HashAggregate\n               :  :        +- Project\n               :  :           +- BroadcastHashJoin\n               :  :              :- Project\n               :  :              :  +- BroadcastHashJoin\n               :  :              :     :- Filter\n               :  :              :     :  +- ColumnarToRow\n               :  :              :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :  :              :     :           +- SubqueryBroadcast\n               :  :              :     :              +- BroadcastExchange\n               :  :              :     :                 +- CometNativeColumnarToRow\n               :  :              :     :                    +- CometProject\n               :  :              :     :                       +- CometFilter\n               :  :              :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :              :     +- BroadcastExchange\n               :  :              :        +- CometNativeColumnarToRow\n               :  :              :           +- CometProject\n               :  :              :              +- CometFilter\n               :  :              :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :              +- BroadcastExchange\n               :  :                 +- CometNativeColumnarToRow\n               :  :                    +- CometProject\n               :  :                       +- CometFilter\n               :  :                          +- CometNativeScan parquet spark_catalog.default.customer\n               :  +- CometBroadcastExchange\n               :     +- CometHashAggregate\n               :        +- CometColumnarExchange\n               :           +- HashAggregate\n               :              +- Project\n               :                 +- BroadcastHashJoin\n               :                    :- Project\n               :                    :  +- BroadcastHashJoin\n               :                    :     :- Filter\n               :                    :     :  +- ColumnarToRow\n               :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :     :           +- ReusedSubquery\n               :                    :     +- BroadcastExchange\n               :                    :        +- CometNativeColumnarToRow\n               :                    :           +- CometProject\n               :                    :              +- CometFilter\n               :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    +- BroadcastExchange\n               :                       +- CometNativeColumnarToRow\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometNativeScan parquet spark_catalog.default.customer\n           
    +- CometBroadcastExchange\n                  +- CometHashAggregate\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- ReusedSubquery\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 66 eligible operators (53%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q38.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometBroadcastHashJoin\n               :  :- CometHashAggregate\n               :  :  +- CometExchange\n               :  :     +- CometHashAggregate\n               :  :        +- CometProject\n               :  :           +- CometBroadcastHashJoin\n               :  :              :- CometProject\n               :  :              :  +- CometBroadcastHashJoin\n               :  :              :     :- CometFilter\n               :  :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :  :              :     :        +- SubqueryBroadcast\n               :  :              :     :           +- BroadcastExchange\n               :  :              :     :              +- CometNativeColumnarToRow\n               :  :              :     :                 +- CometProject\n               :  :              :     :                    +- CometFilter\n               :  :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :              :     +- CometBroadcastExchange\n               :  :              :        +- CometProject\n               :  :              :           +- CometFilter\n               :  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :              +- CometBroadcastExchange\n               :  :                 +- CometProject\n               :  :                    +- CometFilter\n               :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               :  +- CometBroadcastExchange\n               :     +- CometHashAggregate\n               :        +- CometExchange\n               :           +- CometHashAggregate\n               :              +- CometProject\n               :                 +- CometBroadcastHashJoin\n               :                    :- CometProject\n               :                    :  +- CometBroadcastHashJoin\n               :                    :     :- CometFilter\n               :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :     :        +- ReusedSubquery\n               :                    :     +- CometBroadcastExchange\n               :                    :        +- CometProject\n               :                    :           +- CometFilter\n               :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometBroadcastExchange\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               +- CometBroadcastExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 
:     :- CometFilter\n                                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :        +- ReusedSubquery\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 62 out of 66 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q39a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- BroadcastHashJoin\n         :- Project\n         :  +- Filter\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- Project\n         :                    +- BroadcastHashJoin\n         :                       :- Project\n         :                       :  +- BroadcastHashJoin\n         :                       :     :- Project\n         :                       :     :  +- BroadcastHashJoin\n         :                       :     :     :- Filter\n         :                       :     :     :  +- ColumnarToRow\n         :                       :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                       :     :     :           +- SubqueryBroadcast\n         :                       :     :     :              +- BroadcastExchange\n         :                       :     :     :                 +- CometNativeColumnarToRow\n         :                       :     :     :                    +- CometProject\n         :                       :     :     :                       +- CometFilter\n         :                       :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                       :     :     +- BroadcastExchange\n         :                       :     :        +- CometNativeColumnarToRow\n         :                       :     :           +- CometFilter\n         :                       :     :              +- CometNativeScan parquet spark_catalog.default.item\n         :                       :     +- BroadcastExchange\n         :                       :        +- CometNativeColumnarToRow\n         :                       :           +- CometFilter\n         :                       :              +- CometNativeScan parquet spark_catalog.default.warehouse\n         :                       +- BroadcastExchange\n         :                          +- CometNativeColumnarToRow\n         :                             +- CometProject\n         :                                +- CometFilter\n         :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- BroadcastExchange\n            +- Project\n               +- Filter\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- Filter\n                                    :     :     :  +- ColumnarToRow\n                                    :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :     :           +- SubqueryBroadcast\n                                    :     :     :              +- BroadcastExchange\n                                    :     :   
  :                 +- CometNativeColumnarToRow\n                                    :     :     :                    +- CometProject\n                                    :     :     :                       +- CometFilter\n                                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometFilter\n                                    :     :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometFilter\n                                    :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 24 out of 60 eligible operators (40%). Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q39a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometFilter\n         :     +- CometHashAggregate\n         :        +- CometExchange\n         :           +- CometHashAggregate\n         :              +- CometProject\n         :                 +- CometBroadcastHashJoin\n         :                    :- CometProject\n         :                    :  +- CometBroadcastHashJoin\n         :                    :     :- CometProject\n         :                    :     :  +- CometBroadcastHashJoin\n         :                    :     :     :- CometFilter\n         :                    :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n         :                    :     :     :        +- SubqueryBroadcast\n         :                    :     :     :           +- BroadcastExchange\n         :                    :     :     :              +- CometNativeColumnarToRow\n         :                    :     :     :                 +- CometProject\n         :                    :     :     :                    +- CometFilter\n         :                    :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                    :     :     +- CometBroadcastExchange\n         :                    :     :        +- CometFilter\n         :                    :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                    :     +- CometBroadcastExchange\n         :                    :        +- CometFilter\n         :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n         :                    +- CometBroadcastExchange\n         :                       +- CometProject\n         :                          +- CometFilter\n         :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                                 :     :     :        +- SubqueryBroadcast\n                                 :     :     :           +- BroadcastExchange\n                                 :     :     :              +- CometNativeColumnarToRow\n                                 :     :     :                 +- CometProject\n                                 :     :     :                    +- CometFilter\n                                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :  
   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometFilter\n                                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 56 out of 60 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q39b.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- BroadcastHashJoin\n         :- Project\n         :  +- Filter\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- Project\n         :                    +- BroadcastHashJoin\n         :                       :- Project\n         :                       :  +- BroadcastHashJoin\n         :                       :     :- Project\n         :                       :     :  +- BroadcastHashJoin\n         :                       :     :     :- Filter\n         :                       :     :     :  +- ColumnarToRow\n         :                       :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                       :     :     :           +- SubqueryBroadcast\n         :                       :     :     :              +- BroadcastExchange\n         :                       :     :     :                 +- CometNativeColumnarToRow\n         :                       :     :     :                    +- CometProject\n         :                       :     :     :                       +- CometFilter\n         :                       :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                       :     :     +- BroadcastExchange\n         :                       :     :        +- CometNativeColumnarToRow\n         :                       :     :           +- CometFilter\n         :                       :     :              +- CometNativeScan parquet spark_catalog.default.item\n         :                       :     +- BroadcastExchange\n         :                       :        +- CometNativeColumnarToRow\n         :                       :           +- CometFilter\n         :                       :              +- CometNativeScan parquet spark_catalog.default.warehouse\n         :                       +- BroadcastExchange\n         :                          +- CometNativeColumnarToRow\n         :                             +- CometProject\n         :                                +- CometFilter\n         :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- BroadcastExchange\n            +- Project\n               +- Filter\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- Filter\n                                    :     :     :  +- ColumnarToRow\n                                    :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :     :           +- SubqueryBroadcast\n                                    :     :     :              +- BroadcastExchange\n                                    :     :   
  :                 +- CometNativeColumnarToRow\n                                    :     :     :                    +- CometProject\n                                    :     :     :                       +- CometFilter\n                                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometFilter\n                                    :     :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometFilter\n                                    :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 24 out of 60 eligible operators (40%). Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q39b.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometFilter\n         :     +- CometHashAggregate\n         :        +- CometExchange\n         :           +- CometHashAggregate\n         :              +- CometProject\n         :                 +- CometBroadcastHashJoin\n         :                    :- CometProject\n         :                    :  +- CometBroadcastHashJoin\n         :                    :     :- CometProject\n         :                    :     :  +- CometBroadcastHashJoin\n         :                    :     :     :- CometFilter\n         :                    :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n         :                    :     :     :        +- SubqueryBroadcast\n         :                    :     :     :           +- BroadcastExchange\n         :                    :     :     :              +- CometNativeColumnarToRow\n         :                    :     :     :                 +- CometProject\n         :                    :     :     :                    +- CometFilter\n         :                    :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                    :     :     +- CometBroadcastExchange\n         :                    :     :        +- CometFilter\n         :                    :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                    :     +- CometBroadcastExchange\n         :                    :        +- CometFilter\n         :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n         :                    +- CometBroadcastExchange\n         :                       +- CometProject\n         :                          +- CometFilter\n         :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometFilter\n                                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                                 :     :     :        +- SubqueryBroadcast\n                                 :     :     :           +- BroadcastExchange\n                                 :     :     :              +- CometNativeColumnarToRow\n                                 :     :     :                 +- CometProject\n                                 :     :     :                    +- CometFilter\n                                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :  
   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometFilter\n                                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 56 out of 60 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q4.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Project\n      :     :     :  +- BroadcastHashJoin\n      :     :     :     :- BroadcastHashJoin\n      :     :     :     :  :- Filter\n      :     :     :     :  :  +- HashAggregate\n      :     :     :     :  :     +- CometNativeColumnarToRow\n      :     :     :     :  :        +- CometColumnarExchange\n      :     :     :     :  :           +- HashAggregate\n      :     :     :     :  :              +- Project\n      :     :     :     :  :                 +- BroadcastHashJoin\n      :     :     :     :  :                    :- Project\n      :     :     :     :  :                    :  +- BroadcastHashJoin\n      :     :     :     :  :                    :     :- CometNativeColumnarToRow\n      :     :     :     :  :                    :     :  +- CometProject\n      :     :     :     :  :                    :     :     +- CometFilter\n      :     :     :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :     :  :                    :     +- BroadcastExchange\n      :     :     :     :  :                    :        +- Filter\n      :     :     :     :  :                    :           +- ColumnarToRow\n      :     :     :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :     :  :                    :                    +- SubqueryBroadcast\n      :     :     :     :  :                    :                       +- BroadcastExchange\n      :     :     :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :     :     :  :                    :                             +- CometFilter\n      :     :     :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     :  :                    +- BroadcastExchange\n      :     :     :     :  :                       +- CometNativeColumnarToRow\n      :     :     :     :  :                          +- CometFilter\n      :     :     :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     :  +- BroadcastExchange\n      :     :     :     :     +- HashAggregate\n      :     :     :     :        +- CometNativeColumnarToRow\n      :     :     :     :           +- CometColumnarExchange\n      :     :     :     :              +- HashAggregate\n      :     :     :     :                 +- Project\n      :     :     :     :                    +- BroadcastHashJoin\n      :     :     :     :                       :- Project\n      :     :     :     :                       :  +- BroadcastHashJoin\n      :     :     :     :                       :     :- CometNativeColumnarToRow\n      :     :     :     :                       :     :  +- CometProject\n      :     :     :     :                       :     :     +- CometFilter\n      :     :     :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :     :                       :     +- BroadcastExchange\n      :     :     :     :                       :        +- Filter\n      :     :     :     :                       :           +- 
ColumnarToRow\n      :     :     :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :     :                       :                    +- SubqueryBroadcast\n      :     :     :     :                       :                       +- BroadcastExchange\n      :     :     :     :                       :                          +- CometNativeColumnarToRow\n      :     :     :     :                       :                             +- CometFilter\n      :     :     :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     :                       +- BroadcastExchange\n      :     :     :     :                          +- CometNativeColumnarToRow\n      :     :     :     :                             +- CometFilter\n      :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :     +- BroadcastExchange\n      :     :     :        +- Filter\n      :     :     :           +- HashAggregate\n      :     :     :              +- CometNativeColumnarToRow\n      :     :     :                 +- CometColumnarExchange\n      :     :     :                    +- HashAggregate\n      :     :     :                       +- Project\n      :     :     :                          +- BroadcastHashJoin\n      :     :     :                             :- Project\n      :     :     :                             :  +- BroadcastHashJoin\n      :     :     :                             :     :- CometNativeColumnarToRow\n      :     :     :                             :     :  +- CometProject\n      :     :     :                             :     :     +- CometFilter\n      :     :     :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :     :                             :     +- BroadcastExchange\n      :     :     :                             :        +- Filter\n      :     :     :                             :           +- ColumnarToRow\n      :     :     :                             :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                             :                    +- ReusedSubquery\n      :     :     :                             +- BroadcastExchange\n      :     :     :                                +- CometNativeColumnarToRow\n      :     :     :                                   +- CometFilter\n      :     :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     +- BroadcastExchange\n      :     :        +- HashAggregate\n      :     :           +- CometNativeColumnarToRow\n      :     :              +- CometColumnarExchange\n      :     :                 +- HashAggregate\n      :     :                    +- Project\n      :     :                       +- BroadcastHashJoin\n      :     :                          :- Project\n      :     :                          :  +- BroadcastHashJoin\n      :     :                          :     :- CometNativeColumnarToRow\n      :     :                          :     :  +- CometProject\n      :     :                          :     :     +- CometFilter\n      :     :                          :     :        +- 
CometNativeScan parquet spark_catalog.default.customer\n      :     :                          :     +- BroadcastExchange\n      :     :                          :        +- Filter\n      :     :                          :           +- ColumnarToRow\n      :     :                          :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                          :                    +- ReusedSubquery\n      :     :                          +- BroadcastExchange\n      :     :                             +- CometNativeColumnarToRow\n      :     :                                +- CometFilter\n      :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet 
spark_catalog.default.date_dim\n\nComet accelerated 40 out of 126 eligible operators (31%). Final plan contains 26 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q4.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometProject\n         :     :     :  +- CometBroadcastHashJoin\n         :     :     :     :- CometBroadcastHashJoin\n         :     :     :     :  :- CometFilter\n         :     :     :     :  :  +- CometHashAggregate\n         :     :     :     :  :     +- CometExchange\n         :     :     :     :  :        +- CometHashAggregate\n         :     :     :     :  :           +- CometProject\n         :     :     :     :  :              +- CometBroadcastHashJoin\n         :     :     :     :  :                 :- CometProject\n         :     :     :     :  :                 :  +- CometBroadcastHashJoin\n         :     :     :     :  :                 :     :- CometProject\n         :     :     :     :  :                 :     :  +- CometFilter\n         :     :     :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :     :  :                 :     +- CometBroadcastExchange\n         :     :     :     :  :                 :        +- CometFilter\n         :     :     :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :     :  :                 :                 +- SubqueryBroadcast\n         :     :     :     :  :                 :                    +- BroadcastExchange\n         :     :     :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :     :     :  :                 :                          +- CometFilter\n         :     :     :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     :  :                 +- CometBroadcastExchange\n         :     :     :     :  :                    +- CometFilter\n         :     :     :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     :  +- CometBroadcastExchange\n         :     :     :     :     +- CometHashAggregate\n         :     :     :     :        +- CometExchange\n         :     :     :     :           +- CometHashAggregate\n         :     :     :     :              +- CometProject\n         :     :     :     :                 +- CometBroadcastHashJoin\n         :     :     :     :                    :- CometProject\n         :     :     :     :                    :  +- CometBroadcastHashJoin\n         :     :     :     :                    :     :- CometProject\n         :     :     :     :                    :     :  +- CometFilter\n         :     :     :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :     :                    :     +- CometBroadcastExchange\n         :     :     :     :                    :        +- CometFilter\n         :     :     :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :     :                    :                 +- SubqueryBroadcast\n         :     :     :     :                    :                    +- BroadcastExchange\n         :     :     
:     :                    :                       +- CometNativeColumnarToRow\n         :     :     :     :                    :                          +- CometFilter\n         :     :     :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     :                    +- CometBroadcastExchange\n         :     :     :     :                       +- CometFilter\n         :     :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :     +- CometBroadcastExchange\n         :     :     :        +- CometFilter\n         :     :     :           +- CometHashAggregate\n         :     :     :              +- CometExchange\n         :     :     :                 +- CometHashAggregate\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometBroadcastHashJoin\n         :     :     :                          :- CometProject\n         :     :     :                          :  +- CometBroadcastHashJoin\n         :     :     :                          :     :- CometProject\n         :     :     :                          :     :  +- CometFilter\n         :     :     :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :     :                          :     +- CometBroadcastExchange\n         :     :     :                          :        +- CometFilter\n         :     :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :     :     :                          :                 +- ReusedSubquery\n         :     :     :                          +- CometBroadcastExchange\n         :     :     :                             +- CometFilter\n         :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometExchange\n         :     :              +- CometHashAggregate\n         :     :                 +- CometProject\n         :     :                    +- CometBroadcastHashJoin\n         :     :                       :- CometProject\n         :     :                       :  +- CometBroadcastHashJoin\n         :     :                       :     :- CometProject\n         :     :                       :     :  +- CometFilter\n         :     :                       :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                       :     +- CometBroadcastExchange\n         :     :                       :        +- CometFilter\n         :     :                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :     :                       :                 +- ReusedSubquery\n         :     :                       +- CometBroadcastExchange\n         :     :                          +- CometFilter\n         :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- 
CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 118 out of 126 eligible operators (93%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q40.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometSortMergeJoin\n                  :     :     :     :- CometSort\n                  :     :     :     :  +- CometColumnarExchange\n                  :     :     :     :     +- Filter\n                  :     :     :     :        +- ColumnarToRow\n                  :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :                 +- SubqueryBroadcast\n                  :     :     :     :                    +- BroadcastExchange\n                  :     :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :     :                          +- CometFilter\n                  :     :     :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometSort\n                  :     :     :        +- CometExchange\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 32 out of 36 eligible operators (88%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q40.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometSortMergeJoin\n                  :     :     :     :- CometSort\n                  :     :     :     :  +- CometExchange\n                  :     :     :     :     +- CometFilter\n                  :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     :              +- SubqueryBroadcast\n                  :     :     :     :                 +- BroadcastExchange\n                  :     :     :     :                    +- CometNativeColumnarToRow\n                  :     :     :     :                       +- CometFilter\n                  :     :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometSort\n                  :     :     :        +- CometExchange\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 34 out of 36 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q41.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometFilter\n                  :     +- CometNativeScan parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q41.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometFilter\n                  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q42.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q42.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q43.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q43.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q44.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometSortMergeJoin\n         :     :     :- CometSort\n         :     :     :  +- CometColumnarExchange\n         :     :     :     +- Project\n         :     :     :        +- Filter\n         :     :     :           +- Window\n         :     :     :              +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         :     :     :                 +- CometNativeColumnarToRow\n         :     :     :                    +- CometSort\n         :     :     :                       +- CometColumnarExchange\n         :     :     :                          +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         :     :     :                             +- CometNativeColumnarToRow\n         :     :     :                                +- CometSort\n         :     :     :                                   +- CometFilter\n         :     :     :                                      :  +- Subquery\n         :     :     :                                      :     +- CometNativeColumnarToRow\n         :     :     :                                      :        +- CometHashAggregate\n         :     :     :                                      :           +- CometExchange\n         :     :     :                                      :              +- CometHashAggregate\n         :     :     :                                      :                 +- CometProject\n         :     :     :                                      :                    +- CometFilter\n         :     :     :                                      :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n         :     :     :                                      +- CometHashAggregate\n         :     :     :                                         +- CometExchange\n         :     :     :                                            +- CometHashAggregate\n         :     :     :                                               +- CometProject\n         :     :     :                                                  +- CometFilter\n         :     :     :                                                     +- CometNativeScan parquet spark_catalog.default.store_sales\n         :     :     +- CometSort\n         :     :        +- CometColumnarExchange\n         :     :           +- Project\n         :     :              +- Filter\n         :     :                 +- Window\n         :     :                    +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         :     :                       +- CometNativeColumnarToRow\n         :     :                          +- CometSort\n         :     :                             +- CometColumnarExchange\n         :     :                                +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         :     :                                   +- CometNativeColumnarToRow\n         :     :                                      +- CometSort\n         :     :                                         +- CometFilter\n         :     :                                            :  +- ReusedSubquery\n         :     :                                            +- CometHashAggregate\n         :     :                                               +- CometExchange\n         :    
 :                                                  +- CometHashAggregate\n         :     :                                                     +- CometProject\n         :     :                                                        +- CometFilter\n         :     :                                                           +- CometNativeScan parquet spark_catalog.default.store_sales\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometNativeScan parquet spark_catalog.default.item\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 45 out of 57 eligible operators (78%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q44.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometSortMergeJoin\n         :     :     :- CometSort\n         :     :     :  +- CometColumnarExchange\n         :     :     :     +- Project\n         :     :     :        +- Filter\n         :     :     :           +- Window\n         :     :     :              +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         :     :     :                 +- CometNativeColumnarToRow\n         :     :     :                    +- CometSort\n         :     :     :                       +- CometColumnarExchange\n         :     :     :                          +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         :     :     :                             +- CometNativeColumnarToRow\n         :     :     :                                +- CometSort\n         :     :     :                                   +- CometFilter\n         :     :     :                                      :  +- Subquery\n         :     :     :                                      :     +- CometNativeColumnarToRow\n         :     :     :                                      :        +- CometHashAggregate\n         :     :     :                                      :           +- CometExchange\n         :     :     :                                      :              +- CometHashAggregate\n         :     :     :                                      :                 +- CometProject\n         :     :     :                                      :                    +- CometFilter\n         :     :     :                                      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :                                      +- CometHashAggregate\n         :     :     :                                         +- CometExchange\n         :     :     :                                            +- CometHashAggregate\n         :     :     :                                               +- CometProject\n         :     :     :                                                  +- CometFilter\n         :     :     :                                                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     +- CometSort\n         :     :        +- CometColumnarExchange\n         :     :           +- Project\n         :     :              +- Filter\n         :     :                 +- Window\n         :     :                    +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         :     :                       +- CometNativeColumnarToRow\n         :     :                          +- CometSort\n         :     :                             +- CometColumnarExchange\n         :     :                                +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         :     :                                   +- CometNativeColumnarToRow\n         :     :                                      +- CometSort\n         :     :                                         +- CometFilter\n         :     :                                            :  +- ReusedSubquery\n         :     :                                            +- CometHashAggregate\n         :     :                                           
    +- CometExchange\n         :     :                                                  +- CometHashAggregate\n         :     :                                                     +- CometProject\n         :     :                                                        +- CometFilter\n         :     :                                                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 45 out of 57 eligible operators (78%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q45.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- Filter\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Project\n                     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :- Filter\n                     :     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :     :           +- SubqueryBroadcast\n                     :     :     :     :     :              +- BroadcastExchange\n                     :     :     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :     :     :                    +- CometProject\n                     :     :     :     :     :                       +- CometFilter\n                     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometProject\n                     :     :     :              +- CometFilter\n                     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 41 eligible operators (43%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q45.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- Filter\n                  +-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                     :- CometNativeColumnarToRow\n                     :  +- CometProject\n                     :     +- CometBroadcastHashJoin\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometProject\n                     :        :     :  +- CometBroadcastHashJoin\n                     :        :     :     :- CometProject\n                     :        :     :     :  +- CometBroadcastHashJoin\n                     :        :     :     :     :- CometFilter\n                     :        :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :        :     :     :     :        +- SubqueryBroadcast\n                     :        :     :     :     :           +- BroadcastExchange\n                     :        :     :     :     :              +- CometNativeColumnarToRow\n                     :        :     :     :     :                 +- CometProject\n                     :        :     :     :     :                    +- CometFilter\n                     :        :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :     :     :     +- CometBroadcastExchange\n                     :        :     :     :        +- CometFilter\n                     :        :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :        :     :     +- CometBroadcastExchange\n                     :        :     :        +- CometProject\n                     :        :     :           +- CometFilter\n                     :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        +- CometBroadcastExchange\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 32 out of 41 eligible operators (78%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q46.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- HashAggregate\n      :     :  +- CometNativeColumnarToRow\n      :     :     +- CometColumnarExchange\n      :     :        +- HashAggregate\n      :     :           +- Project\n      :     :              +- BroadcastHashJoin\n      :     :                 :- Project\n      :     :                 :  +- BroadcastHashJoin\n      :     :                 :     :- Project\n      :     :                 :     :  +- BroadcastHashJoin\n      :     :                 :     :     :- Project\n      :     :                 :     :     :  +- BroadcastHashJoin\n      :     :                 :     :     :     :- Filter\n      :     :                 :     :     :     :  +- ColumnarToRow\n      :     :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                 :     :     :     :           +- SubqueryBroadcast\n      :     :                 :     :     :     :              +- BroadcastExchange\n      :     :                 :     :     :     :                 +- CometNativeColumnarToRow\n      :     :                 :     :     :     :                    +- CometProject\n      :     :                 :     :     :     :                       +- CometFilter\n      :     :                 :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     :     +- BroadcastExchange\n      :     :                 :     :     :        +- CometNativeColumnarToRow\n      :     :                 :     :     :           +- CometProject\n      :     :                 :     :     :              +- CometFilter\n      :     :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     +- BroadcastExchange\n      :     :                 :     :        +- CometNativeColumnarToRow\n      :     :                 :     :           +- CometProject\n      :     :                 :     :              +- CometFilter\n      :     :                 :     :                 +- CometNativeScan parquet spark_catalog.default.store\n      :     :                 :     +- BroadcastExchange\n      :     :                 :        +- CometNativeColumnarToRow\n      :     :                 :           +- CometProject\n      :     :                 :              +- CometFilter\n      :     :                 :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n      :     :                 +- BroadcastExchange\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometFilter\n      :     :                          +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometFilter\n               +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 20 out of 45 eligible operators (44%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q46.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometHashAggregate\n         :     :  +- CometExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometProject\n         :     :           +- CometBroadcastHashJoin\n         :     :              :- CometProject\n         :     :              :  +- CometBroadcastHashJoin\n         :     :              :     :- CometProject\n         :     :              :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :- CometProject\n         :     :              :     :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :     :- CometFilter\n         :     :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :              :     :     :     :        +- SubqueryBroadcast\n         :     :              :     :     :     :           +- BroadcastExchange\n         :     :              :     :     :     :              +- CometNativeColumnarToRow\n         :     :              :     :     :     :                 +- CometProject\n         :     :              :     :     :     :                    +- CometFilter\n         :     :              :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     :     +- CometBroadcastExchange\n         :     :              :     :     :        +- CometProject\n         :     :              :     :     :           +- CometFilter\n         :     :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     +- CometBroadcastExchange\n         :     :              :     :        +- CometProject\n         :     :              :     :           +- CometFilter\n         :     :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     :              :     +- CometBroadcastExchange\n         :     :              :        +- CometProject\n         :     :              :           +- CometFilter\n         :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         :     :              +- CometBroadcastExchange\n         :     :                 +- CometFilter\n         :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 43 out of 45 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q47.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                          
        +- CometNativeScan parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q47.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q48.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Project\n               :     :     :  +- BroadcastHashJoin\n               :     :     :     :- Filter\n               :     :     :     :  +- ColumnarToRow\n               :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :     :           +- SubqueryBroadcast\n               :     :     :     :              +- BroadcastExchange\n               :     :     :     :                 +- CometNativeColumnarToRow\n               :     :     :     :                    +- CometProject\n               :     :     :     :                       +- CometFilter\n               :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     :     +- BroadcastExchange\n               :     :     :        +- CometNativeColumnarToRow\n               :     :     :           +- CometFilter\n               :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n               :     +- BroadcastExchange\n               :        +- CometNativeColumnarToRow\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 15 out of 33 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q48.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometBroadcastHashJoin\n               :     :     :     :- CometFilter\n               :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     :     :        +- SubqueryBroadcast\n               :     :     :     :           +- BroadcastExchange\n               :     :     :     :              +- CometNativeColumnarToRow\n               :     :     :     :                 +- CometProject\n               :     :     :     :                    +- CometFilter\n               :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     :     +- CometBroadcastExchange\n               :     :     :        +- CometFilter\n               :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 31 out of 33 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q49.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- SubqueryBroadcast\n               :                                         :     :                    +- BroadcastExchange\n               :                                         :     :                       +- CometNativeColumnarToRow\n               :                                         :     :                          +- CometProject\n               :                                         :     :                             +- CometFilter\n               :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n      
         :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- ReusedSubquery\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometColumnarExchange\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- BroadcastExchange\n                                                         :     :  +- Project\n                                                         :     :     +- Filter\n                                                         :     :        +- ColumnarToRow\n                                                         :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :                 +- ReusedSubquery\n                                                         :     +- CometNativeColumnarToRow\n                                                         :        +- CometProject\n                                                         :           +- CometFilter\n                                                         :              +- CometNativeScan parquet spark_catalog.default.store_returns\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 33 out of 87 eligible operators (37%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q49.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :              +- SubqueryBroadcast\n               :                                      :     :                 +- BroadcastExchange\n               :                                      :     :                    +- CometNativeColumnarToRow\n               :                                      :     :                       +- CometProject\n               :                                      :     :                          +- CometFilter\n               :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :              +- ReusedSubquery\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometExchange\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometBroadcastExchange\n                                                      :     :  +- CometProject\n                                                      :     :     +- CometFilter\n                                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :              +- ReusedSubquery\n                                                      :     +- CometProject\n                                                      :        +- CometFilter\n                                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 66 out of 87 eligible operators (75%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q5.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- Project\n                  :              +- BroadcastHashJoin\n                  :                 :- Project\n                  :                 :  +- BroadcastHashJoin\n                  :                 :     :- Union\n                  :                 :     :  :- Project\n                  :                 :     :  :  +- Filter\n                  :                 :     :  :     +- ColumnarToRow\n                  :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                 :     :  :              +- SubqueryBroadcast\n                  :                 :     :  :                 +- BroadcastExchange\n                  :                 :     :  :                    +- CometNativeColumnarToRow\n                  :                 :     :  :                       +- CometProject\n                  :                 :     :  :                          +- CometFilter\n                  :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 :     :  +- Project\n                  :                 :     :     +- Filter\n                  :                 :     :        +- ColumnarToRow\n                  :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                 :     :                 +- ReusedSubquery\n                  :                 :     +- BroadcastExchange\n                  :                 :        +- CometNativeColumnarToRow\n                  :                 :           +- CometProject\n                  :                 :              +- CometFilter\n                  :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 +- BroadcastExchange\n                  :                    +- CometNativeColumnarToRow\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometNativeScan parquet spark_catalog.default.store\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- Project\n                  :              +- BroadcastHashJoin\n                  :                 :- Project\n                  :                 :  +- BroadcastHashJoin\n                  :                 :     :- Union\n                  :                 :     :  :- Project\n                  :                 :     :  :  +- Filter\n                  :                 :     :  :     +- ColumnarToRow\n                  :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic 
pruning]\n                  :                 :     :  :              +- ReusedSubquery\n                  :                 :     :  +- Project\n                  :                 :     :     +- Filter\n                  :                 :     :        +- ColumnarToRow\n                  :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                 :     :                 +- ReusedSubquery\n                  :                 :     +- BroadcastExchange\n                  :                 :        +- CometNativeColumnarToRow\n                  :                 :           +- CometProject\n                  :                 :              +- CometFilter\n                  :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 +- BroadcastExchange\n                  :                    +- CometNativeColumnarToRow\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Union\n                                    :     :  :- Project\n                                    :     :  :  +- Filter\n                                    :     :  :     +- ColumnarToRow\n                                    :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :  :              +- ReusedSubquery\n                                    :     :  +- Project\n                                    :     :     +- BroadcastHashJoin\n                                    :     :        :- BroadcastExchange\n                                    :     :        :  +- ColumnarToRow\n                                    :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :        :           +- ReusedSubquery\n                                    :     :        +- CometNativeColumnarToRow\n                                    :     :           +- CometProject\n                                    :     :              +- CometFilter\n                                    :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n 
                                         +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 28 out of 86 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q5.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometUnion\n                  :              :     :  :- CometProject\n                  :              :     :  :  +- CometFilter\n                  :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :              :     :  :           +- SubqueryBroadcast\n                  :              :     :  :              +- BroadcastExchange\n                  :              :     :  :                 +- CometNativeColumnarToRow\n                  :              :     :  :                    +- CometProject\n                  :              :     :  :                       +- CometFilter\n                  :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :  +- CometProject\n                  :              :     :     +- CometFilter\n                  :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :              :     :              +- ReusedSubquery\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometUnion\n                  :              :     :  :- CometProject\n                  :              :     :  :  +- CometFilter\n                  :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :              :     :  :           +- ReusedSubquery\n                  :              :     :  +- CometProject\n                  :              :     :     +- CometFilter\n                  :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :              :     :              +- ReusedSubquery\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- 
CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometUnion\n                                 :     :  :- CometProject\n                                 :     :  :  +- CometFilter\n                                 :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :  :           +- ReusedSubquery\n                                 :     :  +- CometProject\n                                 :     :     +- CometBroadcastHashJoin\n                                 :     :        :- CometBroadcastExchange\n                                 :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                 :     :        :        +- ReusedSubquery\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 79 out of 86 eligible operators (91%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q50.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- CometNativeColumnarToRow\n                  :     :     :     :  +- CometFilter\n                  :     :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- Filter\n                  :     :     :           +- ColumnarToRow\n                  :     :     :              +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :                    +- SubqueryBroadcast\n                  :     :     :                       +- BroadcastExchange\n                  :     :     :                          +- CometNativeColumnarToRow\n                  :     :     :                             +- CometProject\n                  :     :     :                                +- CometFilter\n                  :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.store\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q50.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :     :     :                 +- SubqueryBroadcast\n                  :     :     :                    +- BroadcastExchange\n                  :     :     :                       +- CometNativeColumnarToRow\n                  :     :     :                          +- CometProject\n                  :     :     :                             +- CometFilter\n                  :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 31 out of 33 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q51.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometProject\n                  +- CometSortMergeJoin\n                     :- CometSort\n                     :  +- CometColumnarExchange\n                     :     +- Project\n                     :        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                     :           +- CometNativeColumnarToRow\n                     :              +- CometSort\n                     :                 +- CometColumnarExchange\n                     :                    +- HashAggregate\n                     :                       +- CometNativeColumnarToRow\n                     :                          +- CometColumnarExchange\n                     :                             +- HashAggregate\n                     :                                +- Project\n                     :                                   +- BroadcastHashJoin\n                     :                                      :- Filter\n                     :                                      :  +- ColumnarToRow\n                     :                                      :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :                                      :           +- SubqueryBroadcast\n                     :                                      :              +- BroadcastExchange\n                     :                                      :                 +- CometNativeColumnarToRow\n                     :                                      :                    +- CometProject\n                     :                                      :                       +- CometFilter\n                     :                                      :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :                                      +- BroadcastExchange\n                     :                                         +- CometNativeColumnarToRow\n                     :                                            +- CometProject\n                     :                                               +- CometFilter\n                     :                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- CometSort\n                        +- CometColumnarExchange\n                           +- Project\n                              +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                 +- CometNativeColumnarToRow\n                                    +- CometSort\n                                       +- CometColumnarExchange\n                                          +- HashAggregate\n                                             +- CometNativeColumnarToRow\n                                                +- CometColumnarExchange\n                                                   +- HashAggregate\n                                                      +- Project\n                                                         +- BroadcastHashJoin\n                                                            :- Filter\n                                                            :  +- ColumnarToRow\n                                                            :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                            :           +- ReusedSubquery\n                                                            +- BroadcastExchange\n                                                               +- CometNativeColumnarToRow\n                                                                  +- CometProject\n                                                                     +- CometFilter\n                                                                        +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 23 out of 47 eligible operators (48%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q51.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometProject\n                  +- CometSortMergeJoin\n                     :- CometSort\n                     :  +- CometColumnarExchange\n                     :     +- Project\n                     :        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                     :           +- CometNativeColumnarToRow\n                     :              +- CometSort\n                     :                 +- CometExchange\n                     :                    +- CometHashAggregate\n                     :                       +- CometExchange\n                     :                          +- CometHashAggregate\n                     :                             +- CometProject\n                     :                                +- CometBroadcastHashJoin\n                     :                                   :- CometFilter\n                     :                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :                                   :        +- SubqueryBroadcast\n                     :                                   :           +- BroadcastExchange\n                     :                                   :              +- CometNativeColumnarToRow\n                     :                                   :                 +- CometProject\n                     :                                   :                    +- CometFilter\n                     :                                   :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :                                   +- CometBroadcastExchange\n                     :                                      +- CometProject\n                     :                                         +- CometFilter\n                     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometSort\n                        +- CometColumnarExchange\n                           +- Project\n                              +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                 +- CometNativeColumnarToRow\n                                    +- CometSort\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometExchange\n                                                +- CometHashAggregate\n                                                   +- CometProject\n                                                      +- CometBroadcastHashJoin\n                                                         :- CometFilter\n                                                         :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                         :        +- ReusedSubquery\n                                                         +- CometBroadcastExchange\n                                                            +- CometProject\n                                                               +- CometFilter\n                                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 37 out of 47 eligible operators (78%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q52.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q52.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q53.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- CometNativeColumnarToRow\n                                    :     :     :  +- CometProject\n                                    :     :     :     +- CometFilter\n                                    :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- Filter\n                                    :     :           +- ColumnarToRow\n                                    :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :                    +- SubqueryBroadcast\n                                    :     :                       +- BroadcastExchange\n                                    :     :                          +- CometNativeColumnarToRow\n                                    :     :                             +- CometProject\n                                    :     :                                +- CometFilter\n                                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q53.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometFilter\n                                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :                 +- SubqueryBroadcast\n                                 :     :                    +- BroadcastExchange\n                                 :     :                       +- CometNativeColumnarToRow\n                                 :     :                          +- CometProject\n                                 :     :                             +- CometFilter\n                                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 27 out of 33 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q54.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +- BroadcastHashJoin\n                              :- Project\n                              :  +- BroadcastHashJoin\n                              :     :- Project\n                              :     :  +- BroadcastHashJoin\n                              :     :     :- Project\n                              :     :     :  +- BroadcastHashJoin\n                              :     :     :     :- CometNativeColumnarToRow\n                              :     :     :     :  +- CometHashAggregate\n                              :     :     :     :     +- CometColumnarExchange\n                              :     :     :     :        +- HashAggregate\n                              :     :     :     :           +- Project\n                              :     :     :     :              +- BroadcastHashJoin\n                              :     :     :     :                 :- Project\n                              :     :     :     :                 :  +- BroadcastHashJoin\n                              :     :     :     :                 :     :- Project\n                              :     :     :     :                 :     :  +- BroadcastHashJoin\n                              :     :     :     :                 :     :     :- Union\n                              :     :     :     :                 :     :     :  :- Project\n                              :     :     :     :                 :     :     :  :  +- Filter\n                              :     :     :     :                 :     :     :  :     +- ColumnarToRow\n                              :     :     :     :                 :     :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :                 :     :     :  :              +- SubqueryBroadcast\n                              :     :     :     :                 :     :     :  :                 +- BroadcastExchange\n                              :     :     :     :                 :     :     :  :                    +- CometNativeColumnarToRow\n                              :     :     :     :                 :     :     :  :                       +- CometProject\n                              :     :     :     :                 :     :     :  :                          +- CometFilter\n                              :     :     :     :                 :     :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :                 :     :     :  +- Project\n                              :     :     :     :                 :     :     :     +- Filter\n                              :     :     :     :                 :     :     :        +- ColumnarToRow\n                              :     :     :     :                 :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :                 :     :     :                 +- ReusedSubquery\n              
                :     :     :     :                 :     :     +- BroadcastExchange\n                              :     :     :     :                 :     :        +- CometNativeColumnarToRow\n                              :     :     :     :                 :     :           +- CometProject\n                              :     :     :     :                 :     :              +- CometFilter\n                              :     :     :     :                 :     :                 +- CometNativeScan parquet spark_catalog.default.item\n                              :     :     :     :                 :     +- BroadcastExchange\n                              :     :     :     :                 :        +- CometNativeColumnarToRow\n                              :     :     :     :                 :           +- CometProject\n                              :     :     :     :                 :              +- CometFilter\n                              :     :     :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :                 +- BroadcastExchange\n                              :     :     :     :                    +- CometNativeColumnarToRow\n                              :     :     :     :                       +- CometFilter\n                              :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.customer\n                              :     :     :     +- BroadcastExchange\n                              :     :     :        +- Filter\n                              :     :     :           +- ColumnarToRow\n                              :     :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :                    +- SubqueryBroadcast\n                              :     :     :                       +- BroadcastExchange\n                              :     :     :                          +- CometNativeColumnarToRow\n                              :     :     :                             +- CometProject\n                              :     :     :                                +- CometFilter\n                              :     :     :                                   :  :- ReusedSubquery\n                              :     :     :                                   :  +- ReusedSubquery\n                              :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :                                         :- Subquery\n                              :     :     :                                         :  +- CometNativeColumnarToRow\n                              :     :     :                                         :     +- CometHashAggregate\n                              :     :     :                                         :        +- CometExchange\n                              :     :     :                                         :           +- CometHashAggregate\n                              :     :     :                                         :              +- CometProject\n                              :     :     :                                         :                 +- CometFilter\n                              :     :     :                                         :                 
   +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :                                         +- Subquery\n                              :     :     :                                            +- CometNativeColumnarToRow\n                              :     :     :                                               +- CometHashAggregate\n                              :     :     :                                                  +- CometExchange\n                              :     :     :                                                     +- CometHashAggregate\n                              :     :     :                                                        +- CometProject\n                              :     :     :                                                           +- CometFilter\n                              :     :     :                                                              +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     +- BroadcastExchange\n                              :     :        +- CometNativeColumnarToRow\n                              :     :           +- CometProject\n                              :     :              +- CometFilter\n                              :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     +- BroadcastExchange\n                              :        +- CometNativeColumnarToRow\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.store\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometFilter\n                                          :  :- ReusedSubquery\n                                          :  +- ReusedSubquery\n                                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                :- Subquery\n                                                :  +- CometNativeColumnarToRow\n                                                :     +- CometHashAggregate\n                                                :        +- CometExchange\n                                                :           +- CometHashAggregate\n                                                :              +- CometProject\n                                                :                 +- CometFilter\n                                                :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                +- Subquery\n                                                   +- CometNativeColumnarToRow\n                                                      +- CometHashAggregate\n                                                         +- CometExchange\n                                                            +- CometHashAggregate\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet 
accelerated 51 out of 100 eligible operators (51%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q54.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometBroadcastHashJoin\n                           :     :     :- CometProject\n                           :     :     :  +- CometBroadcastHashJoin\n                           :     :     :     :- CometHashAggregate\n                           :     :     :     :  +- CometExchange\n                           :     :     :     :     +- CometHashAggregate\n                           :     :     :     :        +- CometProject\n                           :     :     :     :           +- CometBroadcastHashJoin\n                           :     :     :     :              :- CometProject\n                           :     :     :     :              :  +- CometBroadcastHashJoin\n                           :     :     :     :              :     :- CometProject\n                           :     :     :     :              :     :  +- CometBroadcastHashJoin\n                           :     :     :     :              :     :     :- CometUnion\n                           :     :     :     :              :     :     :  :- CometProject\n                           :     :     :     :              :     :     :  :  +- CometFilter\n                           :     :     :     :              :     :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :     :     :     :              :     :     :  :           +- SubqueryBroadcast\n                           :     :     :     :              :     :     :  :              +- BroadcastExchange\n                           :     :     :     :              :     :     :  :                 +- CometNativeColumnarToRow\n                           :     :     :     :              :     :     :  :                    +- CometProject\n                           :     :     :     :              :     :     :  :                       +- CometFilter\n                           :     :     :     :              :     :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :     :              :     :     :  +- CometProject\n                           :     :     :     :              :     :     :     +- CometFilter\n                           :     :     :     :              :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :     :     :     :              :     :     :              +- ReusedSubquery\n                           :     :     :     :              :     :     +- CometBroadcastExchange\n                           :     :     :     :              :     :        +- CometProject\n                           :     :     :     :              :     :           +- CometFilter\n                           :     :     :     :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :     :     :              :     +- 
CometBroadcastExchange\n                           :     :     :     :              :        +- CometProject\n                           :     :     :     :              :           +- CometFilter\n                           :     :     :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :     :              +- CometBroadcastExchange\n                           :     :     :     :                 +- CometFilter\n                           :     :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     :     :     +- CometBroadcastExchange\n                           :     :     :        +- CometFilter\n                           :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :     :                 +- SubqueryBroadcast\n                           :     :     :                    +- BroadcastExchange\n                           :     :     :                       +- CometNativeColumnarToRow\n                           :     :     :                          +- CometProject\n                           :     :     :                             +- CometFilter\n                           :     :     :                                :  :- ReusedSubquery\n                           :     :     :                                :  +- ReusedSubquery\n                           :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :                                      :- Subquery\n                           :     :     :                                      :  +- CometNativeColumnarToRow\n                           :     :     :                                      :     +- CometHashAggregate\n                           :     :     :                                      :        +- CometExchange\n                           :     :     :                                      :           +- CometHashAggregate\n                           :     :     :                                      :              +- CometProject\n                           :     :     :                                      :                 +- CometFilter\n                           :     :     :                                      :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :     :                                      +- Subquery\n                           :     :     :                                         +- CometNativeColumnarToRow\n                           :     :     :                                            +- CometHashAggregate\n                           :     :     :                                               +- CometExchange\n                           :     :     :                                                  +- CometHashAggregate\n                           :     :     :                                                     +- CometProject\n                           :     :     :                                                        +- CometFilter\n                           :     :     :                                                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                       
    :     :     +- CometBroadcastExchange\n                           :     :        +- CometProject\n                           :     :           +- CometFilter\n                           :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    :  :- ReusedSubquery\n                                    :  +- ReusedSubquery\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :- ReusedSubquery\n                                          +- ReusedSubquery\n\nComet accelerated 75 out of 88 eligible operators (85%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q55.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q55.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometFilter\n                  :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 18 out of 18 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q56.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- SubqueryBroadcast\n               :                 :     :     :              +- BroadcastExchange\n               :                 :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :                    +- CometProject\n               :                 :     :     :                       +- CometFilter\n               :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :        
         :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- ReusedSubquery\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :           +- ReusedSubquery\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n             
                          +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometFilter\n                                             :  +- CometNativeScan parquet spark_catalog.default.item\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 49 out of 96 eligible operators (51%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q56.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :     :     :        +- SubqueryBroadcast\n               :              :     :     :           +- BroadcastExchange\n               :              :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :                 +- CometProject\n               :              :     :     :                    +- CometFilter\n               :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :     :        +- ReusedSubquery\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              
+- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :     :        +- ReusedSubquery\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 92 out of 96 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q57.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                        
          +- CometNativeScan parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). 
To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.call_center\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q57.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q58.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Filter\n      :     :  +- HashAggregate\n      :     :     +- CometNativeColumnarToRow\n      :     :        +- CometColumnarExchange\n      :     :           +- HashAggregate\n      :     :              +- Project\n      :     :                 +- BroadcastHashJoin\n      :     :                    :- Project\n      :     :                    :  +- BroadcastHashJoin\n      :     :                    :     :- Filter\n      :     :                    :     :  +- ColumnarToRow\n      :     :                    :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                    :     :           +- SubqueryBroadcast\n      :     :                    :     :              +- BroadcastExchange\n      :     :                    :     :                 +- CometNativeColumnarToRow\n      :     :                    :     :                    +- CometProject\n      :     :                    :     :                       +- CometBroadcastHashJoin\n      :     :                    :     :                          :- CometFilter\n      :     :                    :     :                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                    :     :                          +- CometBroadcastExchange\n      :     :                    :     :                             +- CometProject\n      :     :                    :     :                                +- CometFilter\n      :     :                    :     :                                   :  +- ReusedSubquery\n      :     :                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                    :     :                                         +- Subquery\n      :     :                    :     :                                            +- CometNativeColumnarToRow\n      :     :                    :     :                                               +- CometProject\n      :     :                    :     :                                                  +- CometFilter\n      :     :                    :     :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                    :     +- BroadcastExchange\n      :     :                    :        +- CometNativeColumnarToRow\n      :     :                    :           +- CometProject\n      :     :                    :              +- CometFilter\n      :     :                    :                 +- CometNativeScan parquet spark_catalog.default.item\n      :     :                    +- BroadcastExchange\n      :     :                       +- CometNativeColumnarToRow\n      :     :                          +- CometProject\n      :     :                             +- CometBroadcastHashJoin\n      :     :                                :- CometFilter\n      :     :                                :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                +- CometBroadcastExchange\n      :     :                                   +- CometProject\n      :     :                                      +- CometFilter\n      :     :                                         :  +- ReusedSubquery\n      :     :            
                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                               +- Subquery\n      :     :                                                  +- CometNativeColumnarToRow\n      :     :                                                     +- CometProject\n      :     :                                                        +- CometFilter\n      :     :                                                           +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- Filter\n      :                             :     :  +- ColumnarToRow\n      :                             :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :     :           +- ReusedSubquery\n      :                             :     +- BroadcastExchange\n      :                             :        +- CometNativeColumnarToRow\n      :                             :           +- CometProject\n      :                             :              +- CometFilter\n      :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometProject\n      :                                      +- CometBroadcastHashJoin\n      :                                         :- CometFilter\n      :                                         :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- CometBroadcastExchange\n      :                                            +- CometProject\n      :                                               +- CometFilter\n      :                                                  :  +- ReusedSubquery\n      :                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                                        +- Subquery\n      :                                                           +- CometNativeColumnarToRow\n      :                                                              +- CometProject\n      :                                                                 +- CometFilter\n      :                                                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- Filter\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Project\n                           +- BroadcastHashJoin\n                              :- Project\n                              :  +- BroadcastHashJoin\n                              :     :- Filter\n                              :     :  +- ColumnarToRow\n 
                             :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :           +- ReusedSubquery\n                              :     +- BroadcastExchange\n                              :        +- CometNativeColumnarToRow\n                              :           +- CometProject\n                              :              +- CometFilter\n                              :                 +- CometNativeScan parquet spark_catalog.default.item\n                              +- BroadcastExchange\n                                 +- CometNativeColumnarToRow\n                                    +- CometProject\n                                       +- CometBroadcastHashJoin\n                                          :- CometFilter\n                                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- CometBroadcastExchange\n                                             +- CometProject\n                                                +- CometFilter\n                                                   :  +- ReusedSubquery\n                                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         +- Subquery\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 56 out of 108 eligible operators (51%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q58.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometFilter\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometBroadcastHashJoin\n         :     :                 :     :- CometFilter\n         :     :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                 :     :        +- SubqueryBroadcast\n         :     :                 :     :           +- BroadcastExchange\n         :     :                 :     :              +- CometNativeColumnarToRow\n         :     :                 :     :                 +- CometProject\n         :     :                 :     :                    +- CometBroadcastHashJoin\n         :     :                 :     :                       :- CometFilter\n         :     :                 :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :     :                       +- CometBroadcastExchange\n         :     :                 :     :                          +- CometProject\n         :     :                 :     :                             +- CometFilter\n         :     :                 :     :                                :  +- ReusedSubquery\n         :     :                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :     :                                      +- Subquery\n         :     :                 :     :                                         +- CometNativeColumnarToRow\n         :     :                 :     :                                            +- CometProject\n         :     :                 :     :                                               +- CometFilter\n         :     :                 :     :                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :     +- CometBroadcastExchange\n         :     :                 :        +- CometProject\n         :     :                 :           +- CometFilter\n         :     :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometProject\n         :     :                       +- CometBroadcastHashJoin\n         :     :                          :- CometFilter\n         :     :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                          +- CometBroadcastExchange\n         :     :                             +- CometProject\n         :     :                                +- CometFilter\n         :     :                                   :  +- ReusedSubquery\n         :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                         +- Subquery\n 
        :     :                                            +- CometNativeColumnarToRow\n         :     :                                               +- CometProject\n         :     :                                                  +- CometFilter\n         :     :                                                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometFilter\n         :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :                          :     :        +- ReusedSubquery\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometProject\n         :                          :           +- CometFilter\n         :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                          +- CometBroadcastExchange\n         :                             +- CometProject\n         :                                +- CometBroadcastHashJoin\n         :                                   :- CometFilter\n         :                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                   +- CometBroadcastExchange\n         :                                      +- CometProject\n         :                                         +- CometFilter\n         :                                            :  +- ReusedSubquery\n         :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                                  +- Subquery\n         :                                                     +- CometNativeColumnarToRow\n         :                                                        +- CometProject\n         :                                                           +- CometFilter\n         :                                                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- ReusedSubquery\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] 
parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                :  +- ReusedSubquery\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      +- Subquery\n                                                         +- CometNativeColumnarToRow\n                                                            +- CometProject\n                                                               +- CometFilter\n                                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 96 out of 108 eligible operators (88%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q59.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometHashAggregate\n         :     :     :  +- CometExchange\n         :     :     :     +- CometHashAggregate\n         :     :     :        +- CometProject\n         :     :     :           +- CometBroadcastHashJoin\n         :     :     :              :- CometFilter\n         :     :     :              :  +- CometNativeScan parquet spark_catalog.default.store_sales\n         :     :     :              +- CometBroadcastExchange\n         :     :     :                 +- CometProject\n         :     :     :                    +- CometFilter\n         :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometProject\n         :     :           +- CometFilter\n         :     :              +- CometNativeScan parquet spark_catalog.default.store\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometHashAggregate\n                  :     :  +- CometExchange\n                  :     :     +- CometHashAggregate\n                  :     :        +- CometProject\n                  :     :           +- CometBroadcastHashJoin\n                  :     :              :- CometFilter\n                  :     :              :  +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     :              +- CometBroadcastExchange\n                  :     :                 +- CometProject\n                  :     :                    +- CometFilter\n                  :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometNativeScan parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 50 out of 50 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q59.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometHashAggregate\n         :     :     :  +- CometExchange\n         :     :     :     +- CometHashAggregate\n         :     :     :        +- CometProject\n         :     :     :           +- CometBroadcastHashJoin\n         :     :     :              :- CometFilter\n         :     :     :              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :     :              +- CometBroadcastExchange\n         :     :     :                 +- CometProject\n         :     :     :                    +- CometFilter\n         :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometProject\n         :     :           +- CometFilter\n         :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometHashAggregate\n                  :     :  +- CometExchange\n                  :     :     +- CometHashAggregate\n                  :     :        +- CometProject\n                  :     :           +- CometBroadcastHashJoin\n                  :     :              :- CometFilter\n                  :     :              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :              +- CometBroadcastExchange\n                  :     :                 +- CometProject\n                  :     :                    +- CometFilter\n                  :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 50 out of 50 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q6.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- CometNativeColumnarToRow\n                     :     :     :  +- CometProject\n                     :     :     :     +- CometBroadcastHashJoin\n                     :     :     :        :- CometProject\n                     :     :     :        :  +- CometFilter\n                     :     :     :        :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     :     :        +- CometBroadcastExchange\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     +- BroadcastExchange\n                     :     :        +- Filter\n                     :     :           +- ColumnarToRow\n                     :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :                    +- SubqueryBroadcast\n                     :     :                       +- BroadcastExchange\n                     :     :                          +- CometNativeColumnarToRow\n                     :     :                             +- CometProject\n                     :     :                                +- CometFilter\n                     :     :                                   :  +- ReusedSubquery\n                     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :                                         +- Subquery\n                     :     :                                            +- CometNativeColumnarToRow\n                     :     :                                               +- CometHashAggregate\n                     :     :                                                  +- CometExchange\n                     :     :                                                     +- CometHashAggregate\n                     :     :                                                        +- CometProject\n                     :     :                                                           +- CometFilter\n                     :     :                                                              +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 :  +- ReusedSubquery\n                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :                       +- Subquery\n                     :                          +- CometNativeColumnarToRow\n                     :                             +- CometHashAggregate\n                     :                                +- CometExchange\n                     :                                   +- CometHashAggregate\n                  
   :                                      +- CometProject\n                     :                                         +- CometFilter\n                     :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometFilter\n                                 :  +- CometNativeScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 39 out of 60 eligible operators (65%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q6.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometFilter\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometFilter\n                     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometFilter\n                     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :                 +- SubqueryBroadcast\n                     :     :                    +- BroadcastExchange\n                     :     :                       +- CometNativeColumnarToRow\n                     :     :                          +- CometProject\n                     :     :                             +- CometFilter\n                     :     :                                :  +- ReusedSubquery\n                     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :                                      +- Subquery\n                     :     :                                         +- CometNativeColumnarToRow\n                     :     :                                            +- CometHashAggregate\n                     :     :                                               +- CometExchange\n                     :     :                                                  +- CometHashAggregate\n                     :     :                                                     +- CometProject\n                     :     :                                                        +- CometFilter\n                     :     :                                                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              :  +- ReusedSubquery\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :                    +- ReusedSubquery\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometFilter\n                              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometHashAggregate\n                     
                  +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 48 out of 54 eligible operators (88%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q60.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :                 :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- SubqueryBroadcast\n               :                 :     :     :              +- BroadcastExchange\n               :                 :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :                    +- CometProject\n               :                 :     :     :                       +- CometFilter\n               :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Filter\n               :        
         :     :     :  +- ColumnarToRow\n               :                 :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :           +- ReusedSubquery\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometProject\n               :                 :     :              +- CometFilter\n               :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometFilter\n               :                             :  +- CometNativeScan parquet spark_catalog.default.item\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometNativeScan parquet spark_catalog.default.item\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :           +- ReusedSubquery\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n             
                          +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometFilter\n                                             :  +- CometNativeScan parquet spark_catalog.default.item\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 49 out of 96 eligible operators (51%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q60.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :     :     :        +- SubqueryBroadcast\n               :              :     :     :           +- BroadcastExchange\n               :              :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :                 +- CometProject\n               :              :     :     :                    +- CometFilter\n               :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometFilter\n               :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :     :        +- ReusedSubquery\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometProject\n               :              :     :           +- CometFilter\n               :              :     :              
+- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometFilter\n               :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :     :        +- ReusedSubquery\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 92 out of 96 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q61.native_datafusion/extended.txt",
    "content": "Project\n+- BroadcastNestedLoopJoin\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- SubqueryBroadcast\n   :                 :     :     :     :     :     :              +- BroadcastExchange\n   :                 :     :     :     :     :     :                 +- CometNativeColumnarToRow\n   :                 :     :     :     :     :     :                    +- CometProject\n   :                 :     :     :     :     :     :                       +- CometFilter\n   :                 :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.promotion\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometProject\n   :                 :     :     :              +- CometFilter\n   :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometFilter\n   :                 :     :              +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :        
               +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   +- BroadcastExchange\n      +- HashAggregate\n         +- CometNativeColumnarToRow\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- Project\n                        :  +- BroadcastHashJoin\n                        :     :- Project\n                        :     :  +- BroadcastHashJoin\n                        :     :     :- Project\n                        :     :     :  +- BroadcastHashJoin\n                        :     :     :     :- Project\n                        :     :     :     :  +- BroadcastHashJoin\n                        :     :     :     :     :- Filter\n                        :     :     :     :     :  +- ColumnarToRow\n                        :     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :     :     :     :     :           +- ReusedSubquery\n                        :     :     :     :     +- BroadcastExchange\n                        :     :     :     :        +- CometNativeColumnarToRow\n                        :     :     :     :           +- CometProject\n                        :     :     :     :              +- CometFilter\n                        :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store\n                        :     :     :     +- BroadcastExchange\n                        :     :     :        +- CometNativeColumnarToRow\n                        :     :     :           +- CometProject\n                        :     :     :              +- CometFilter\n                        :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     :     +- BroadcastExchange\n                        :     :        +- CometNativeColumnarToRow\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                        :     +- BroadcastExchange\n                        :        +- CometNativeColumnarToRow\n                        :           +- CometProject\n                        :              +- CometFilter\n                        :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- BroadcastExchange\n                           +- CometNativeColumnarToRow\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 36 out of 83 eligible operators (43%). Final plan contains 16 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q61.native_iceberg_compat/extended.txt",
    "content": "Project\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n   :- CometNativeColumnarToRow\n   :  +- CometHashAggregate\n   :     +- CometExchange\n   :        +- CometHashAggregate\n   :           +- CometProject\n   :              +- CometBroadcastHashJoin\n   :                 :- CometProject\n   :                 :  +- CometBroadcastHashJoin\n   :                 :     :- CometProject\n   :                 :     :  +- CometBroadcastHashJoin\n   :                 :     :     :- CometProject\n   :                 :     :     :  +- CometBroadcastHashJoin\n   :                 :     :     :     :- CometProject\n   :                 :     :     :     :  +- CometBroadcastHashJoin\n   :                 :     :     :     :     :- CometProject\n   :                 :     :     :     :     :  +- CometBroadcastHashJoin\n   :                 :     :     :     :     :     :- CometFilter\n   :                 :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n   :                 :     :     :     :     :     :        +- SubqueryBroadcast\n   :                 :     :     :     :     :     :           +- BroadcastExchange\n   :                 :     :     :     :     :     :              +- CometNativeColumnarToRow\n   :                 :     :     :     :     :     :                 +- CometProject\n   :                 :     :     :     :     :     :                    +- CometFilter\n   :                 :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n   :                 :     :     :     :     :     +- CometBroadcastExchange\n   :                 :     :     :     :     :        +- CometProject\n   :                 :     :     :     :     :           +- CometFilter\n   :                 :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n   :                 :     :     :     :     +- CometBroadcastExchange\n   :                 :     :     :     :        +- CometProject\n   :                 :     :     :     :           +- CometFilter\n   :                 :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n   :                 :     :     :     +- CometBroadcastExchange\n   :                 :     :     :        +- CometProject\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n   :                 :     :     +- CometBroadcastExchange\n   :                 :     :        +- CometFilter\n   :                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n   :                 :     +- CometBroadcastExchange\n   :                 :        +- CometProject\n   :                 :           +- CometFilter\n   :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n   :                 +- CometBroadcastExchange\n   :                    +- CometProject\n   :                       +- CometFilter\n   :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n   +- BroadcastExchange\n      +- CometNativeColumnarToRow\n         +- CometHashAggregate\n            +- CometExchange\n               +- 
CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometBroadcastHashJoin\n                        :     :     :     :- CometProject\n                        :     :     :     :  +- CometBroadcastHashJoin\n                        :     :     :     :     :- CometFilter\n                        :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                        :     :     :     :     :        +- ReusedSubquery\n                        :     :     :     :     +- CometBroadcastExchange\n                        :     :     :     :        +- CometProject\n                        :     :     :     :           +- CometFilter\n                        :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                        :     :     :     +- CometBroadcastExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometFilter\n                        :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometFilter\n                        :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 77 out of 83 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q62.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometNativeScan parquet spark_catalog.default.web_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometNativeScan parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.web_site\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q62.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q63.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- CometNativeColumnarToRow\n                                    :     :     :  +- CometProject\n                                    :     :     :     +- CometFilter\n                                    :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- Filter\n                                    :     :           +- ColumnarToRow\n                                    :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :                    +- SubqueryBroadcast\n                                    :     :                       +- BroadcastExchange\n                                    :     :                          +- CometNativeColumnarToRow\n                                    :     :                             +- CometProject\n                                    :     :                                +- CometFilter\n                                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q63.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometFilter\n                                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :                 +- SubqueryBroadcast\n                                 :     :                    +- BroadcastExchange\n                                 :     :                       +- CometNativeColumnarToRow\n                                 :     :                          +- CometProject\n                                 :     :                             +- CometFilter\n                                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 27 out of 33 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q64.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n            :                 :     :   
  :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     
                  +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     
:     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                
 :        +- CometFilter\n            :                 :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometNativeScan parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     
:     :     :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                           
   :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :   
  :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n             
                 :     :        +- CometFilter\n                              :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 228 out of 242 eligible operators (94%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q64.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n            :                 :     :     :     
:     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             
:- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometFilter\n            :       
          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :   
  :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     : 
    :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     : 
    :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometFilter\n                              :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 238 out of 242 eligible operators (98%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q65.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- CometNativeColumnarToRow\n      :     :     :  +- CometFilter\n      :     :     :     +- CometNativeScan parquet spark_catalog.default.store\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- Project\n      :     :                          +- BroadcastHashJoin\n      :     :                             :- Filter\n      :     :                             :  +- ColumnarToRow\n      :     :                             :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                             :           +- SubqueryBroadcast\n      :     :                             :              +- BroadcastExchange\n      :     :                             :                 +- CometNativeColumnarToRow\n      :     :                             :                    +- CometProject\n      :     :                             :                       +- CometFilter\n      :     :                             :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                             +- BroadcastExchange\n      :     :                                +- CometNativeColumnarToRow\n      :     :                                   +- CometProject\n      :     :                                      +- CometFilter\n      :     :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.item\n      +- BroadcastExchange\n         +- Filter\n            +- HashAggregate\n               +- CometNativeColumnarToRow\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Filter\n                                          :  +- ColumnarToRow\n                                          :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :           +- ReusedSubquery\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 17 out of 48 eligible operators (35%). 
Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q65.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometProject\n         :     :                       +- CometBroadcastHashJoin\n         :     :                          :- CometFilter\n         :     :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                          :        +- SubqueryBroadcast\n         :     :                          :           +- BroadcastExchange\n         :     :                          :              +- CometNativeColumnarToRow\n         :     :                          :                 +- CometProject\n         :     :                          :                    +- CometFilter\n         :     :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                          +- CometBroadcastExchange\n         :     :                             +- CometProject\n         :     :                                +- CometFilter\n         :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :        +- ReusedSubquery\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 45 out of 48 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q66.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- Project\n               :                 :  +- BroadcastHashJoin\n               :                 :     :- Project\n               :                 :     :  +- BroadcastHashJoin\n               :                 :     :     :- Project\n               :                 :     :     :  +- BroadcastHashJoin\n               :                 :     :     :     :- Filter\n               :                 :     :     :     :  +- ColumnarToRow\n               :                 :     :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :     :     :     :           +- SubqueryBroadcast\n               :                 :     :     :     :              +- BroadcastExchange\n               :                 :     :     :     :                 +- CometNativeColumnarToRow\n               :                 :     :     :     :                    +- CometFilter\n               :                 :     :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     :     :     +- BroadcastExchange\n               :                 :     :     :        +- CometNativeColumnarToRow\n               :                 :     :     :           +- CometProject\n               :                 :     :     :              +- CometFilter\n               :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.warehouse\n               :                 :     :     +- BroadcastExchange\n               :                 :     :        +- CometNativeColumnarToRow\n               :                 :     :           +- CometFilter\n               :                 :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 :     +- BroadcastExchange\n               :                 :        +- CometNativeColumnarToRow\n               :                 :           +- CometProject\n               :                 :              +- CometFilter\n               :                 :                 +- CometNativeScan parquet spark_catalog.default.time_dim\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometNativeScan parquet spark_catalog.default.ship_mode\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                
 :     :     :- Project\n                                 :     :     :  +- BroadcastHashJoin\n                                 :     :     :     :- Filter\n                                 :     :     :     :  +- ColumnarToRow\n                                 :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :     :           +- ReusedSubquery\n                                 :     :     :     +- BroadcastExchange\n                                 :     :     :        +- CometNativeColumnarToRow\n                                 :     :     :           +- CometProject\n                                 :     :     :              +- CometFilter\n                                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.warehouse\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometFilter\n                                 :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.time_dim\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.ship_mode\n\nComet accelerated 27 out of 66 eligible operators (40%). Final plan contains 14 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q66.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometProject\n               :           +- CometBroadcastHashJoin\n               :              :- CometProject\n               :              :  +- CometBroadcastHashJoin\n               :              :     :- CometProject\n               :              :     :  +- CometBroadcastHashJoin\n               :              :     :     :- CometProject\n               :              :     :     :  +- CometBroadcastHashJoin\n               :              :     :     :     :- CometFilter\n               :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :              :     :     :     :        +- SubqueryBroadcast\n               :              :     :     :     :           +- BroadcastExchange\n               :              :     :     :     :              +- CometNativeColumnarToRow\n               :              :     :     :     :                 +- CometFilter\n               :              :     :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     :     :     +- CometBroadcastExchange\n               :              :     :     :        +- CometProject\n               :              :     :     :           +- CometFilter\n               :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n               :              :     :     +- CometBroadcastExchange\n               :              :     :        +- CometFilter\n               :              :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometBroadcastExchange\n               :              :        +- CometProject\n               :              :           +- CometFilter\n               :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n               :              +- CometBroadcastExchange\n               :                 +- CometProject\n               :                    +- CometFilter\n               :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometFilter\n                              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :     :     :        +- ReusedSubquery\n                              :     :     :     +- CometBroadcastExchange\n                              :  
   :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometFilter\n                              :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n\nComet accelerated 63 out of 66 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q67.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- Window\n      +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- WindowGroupLimit\n                     +- Sort\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Expand\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- Project\n                                             :  +- BroadcastHashJoin\n                                             :     :- Project\n                                             :     :  +- BroadcastHashJoin\n                                             :     :     :- Filter\n                                             :     :     :  +- ColumnarToRow\n                                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :     :     :           +- SubqueryBroadcast\n                                             :     :     :              +- BroadcastExchange\n                                             :     :     :                 +- CometNativeColumnarToRow\n                                             :     :     :                    +- CometProject\n                                             :     :     :                       +- CometFilter\n                                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             :     :     +- BroadcastExchange\n                                             :     :        +- CometNativeColumnarToRow\n                                             :     :           +- CometProject\n                                             :     :              +- CometFilter\n                                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             :     +- BroadcastExchange\n                                             :        +- CometNativeColumnarToRow\n                                             :           +- CometProject\n                                             :              +- CometFilter\n                                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                                             +- BroadcastExchange\n                                                +- CometNativeColumnarToRow\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 15 out of 37 eligible operators (40%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q67.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- Window\n      +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                     +- CometNativeColumnarToRow\n                        +- CometSort\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometExpand\n                                       +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometProject\n                                             :  +- CometBroadcastHashJoin\n                                             :     :- CometProject\n                                             :     :  +- CometBroadcastHashJoin\n                                             :     :     :- CometFilter\n                                             :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                             :     :     :        +- SubqueryBroadcast\n                                             :     :     :           +- BroadcastExchange\n                                             :     :     :              +- CometNativeColumnarToRow\n                                             :     :     :                 +- CometProject\n                                             :     :     :                    +- CometFilter\n                                             :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             :     :     +- CometBroadcastExchange\n                                             :     :        +- CometProject\n                                             :     :           +- CometFilter\n                                             :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             :     +- CometBroadcastExchange\n                                             :        +- CometProject\n                                             :           +- CometFilter\n                                             :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 30 out of 37 eligible operators (81%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q68.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- HashAggregate\n      :     :  +- CometNativeColumnarToRow\n      :     :     +- CometColumnarExchange\n      :     :        +- HashAggregate\n      :     :           +- Project\n      :     :              +- BroadcastHashJoin\n      :     :                 :- Project\n      :     :                 :  +- BroadcastHashJoin\n      :     :                 :     :- Project\n      :     :                 :     :  +- BroadcastHashJoin\n      :     :                 :     :     :- Project\n      :     :                 :     :     :  +- BroadcastHashJoin\n      :     :                 :     :     :     :- Filter\n      :     :                 :     :     :     :  +- ColumnarToRow\n      :     :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                 :     :     :     :           +- SubqueryBroadcast\n      :     :                 :     :     :     :              +- BroadcastExchange\n      :     :                 :     :     :     :                 +- CometNativeColumnarToRow\n      :     :                 :     :     :     :                    +- CometProject\n      :     :                 :     :     :     :                       +- CometFilter\n      :     :                 :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     :     +- BroadcastExchange\n      :     :                 :     :     :        +- CometNativeColumnarToRow\n      :     :                 :     :     :           +- CometProject\n      :     :                 :     :     :              +- CometFilter\n      :     :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :     +- BroadcastExchange\n      :     :                 :     :        +- CometNativeColumnarToRow\n      :     :                 :     :           +- CometProject\n      :     :                 :     :              +- CometFilter\n      :     :                 :     :                 +- CometNativeScan parquet spark_catalog.default.store\n      :     :                 :     +- BroadcastExchange\n      :     :                 :        +- CometNativeColumnarToRow\n      :     :                 :           +- CometProject\n      :     :                 :              +- CometFilter\n      :     :                 :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n      :     :                 +- BroadcastExchange\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometFilter\n      :     :                          +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometFilter\n               +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 20 out of 45 eligible operators (44%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q68.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometHashAggregate\n         :     :  +- CometExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometProject\n         :     :           +- CometBroadcastHashJoin\n         :     :              :- CometProject\n         :     :              :  +- CometBroadcastHashJoin\n         :     :              :     :- CometProject\n         :     :              :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :- CometProject\n         :     :              :     :     :  +- CometBroadcastHashJoin\n         :     :              :     :     :     :- CometFilter\n         :     :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :              :     :     :     :        +- SubqueryBroadcast\n         :     :              :     :     :     :           +- BroadcastExchange\n         :     :              :     :     :     :              +- CometNativeColumnarToRow\n         :     :              :     :     :     :                 +- CometProject\n         :     :              :     :     :     :                    +- CometFilter\n         :     :              :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     :     +- CometBroadcastExchange\n         :     :              :     :     :        +- CometProject\n         :     :              :     :     :           +- CometFilter\n         :     :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :     +- CometBroadcastExchange\n         :     :              :     :        +- CometProject\n         :     :              :     :           +- CometFilter\n         :     :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :     :              :     +- CometBroadcastExchange\n         :     :              :        +- CometProject\n         :     :              :           +- CometFilter\n         :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         :     :              +- CometBroadcastExchange\n         :     :                 +- CometFilter\n         :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometFilter\n               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 43 out of 45 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q69.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- BroadcastHashJoin\n                  :     :     :  :- BroadcastHashJoin\n                  :     :     :  :  :- CometNativeColumnarToRow\n                  :     :     :  :  :  +- CometFilter\n                  :     :     :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :     :  :  +- BroadcastExchange\n                  :     :     :  :     +- Project\n                  :     :     :  :        +- BroadcastHashJoin\n                  :     :     :  :           :- ColumnarToRow\n                  :     :     :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :  :           :        +- SubqueryBroadcast\n                  :     :     :  :           :           +- BroadcastExchange\n                  :     :     :  :           :              +- CometNativeColumnarToRow\n                  :     :     :  :           :                 +- CometProject\n                  :     :     :  :           :                    +- CometFilter\n                  :     :     :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :  :           +- BroadcastExchange\n                  :     :     :  :              +- CometNativeColumnarToRow\n                  :     :     :  :                 +- CometProject\n                  :     :     :  :                    +- CometFilter\n                  :     :     :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- Project\n                  :     :     :        +- BroadcastHashJoin\n                  :     :     :           :- ColumnarToRow\n                  :     :     :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           :        +- ReusedSubquery\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- Project\n                  :     :           +- BroadcastHashJoin\n                  :     :              :- ColumnarToRow\n                  :     :              :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :              :        +- ReusedSubquery\n                  :     :              +- BroadcastExchange\n                  :     :                 +- CometNativeColumnarToRow\n                  :     :                    +- CometProject\n                  :    
 :                       +- CometFilter\n                  :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 53 eligible operators (39%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q69.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :-  BroadcastHashJoin [COMET: BuildRight with LeftAnti is not supported]\n                  :     :     :  :- CometNativeColumnarToRow\n                  :     :     :  :  +- CometBroadcastHashJoin\n                  :     :     :  :     :- CometFilter\n                  :     :     :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :     :  :     +- CometBroadcastExchange\n                  :     :     :  :        +- CometProject\n                  :     :     :  :           +- CometBroadcastHashJoin\n                  :     :     :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :  :              :     +- SubqueryBroadcast\n                  :     :     :  :              :        +- BroadcastExchange\n                  :     :     :  :              :           +- CometNativeColumnarToRow\n                  :     :     :  :              :              +- CometProject\n                  :     :     :  :              :                 +- CometFilter\n                  :     :     :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :  :              +- CometBroadcastExchange\n                  :     :     :  :                 +- CometProject\n                  :     :     :  :                    +- CometFilter\n                  :     :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- CometNativeColumnarToRow\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometBroadcastHashJoin\n                  :     :     :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :     :              :     +- ReusedSubquery\n                  :     :     :              +- CometBroadcastExchange\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometBroadcastHashJoin\n                  :     :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                 :     +- ReusedSubquery\n                  :     :                 +- CometBroadcastExchange\n                  :     :                    +- CometProject\n                  :     :                       +- CometFilter\n                  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- 
BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 53 eligible operators (66%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q7.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Filter\n                  :     :     :     :  +- ColumnarToRow\n                  :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :              +- BroadcastExchange\n                  :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :                    +- CometProject\n                  :     :     :     :                       +- CometFilter\n                  :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.item\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 16 out of 35 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q7.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :     :        +- SubqueryBroadcast\n                  :     :     :     :           +- BroadcastExchange\n                  :     :     :     :              +- CometNativeColumnarToRow\n                  :     :     :     :                 +- CometProject\n                  :     :     :     :                    +- CometFilter\n                  :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 33 out of 35 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q70.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Filter\n                                    :     :  +- ColumnarToRow\n                                    :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :           +- SubqueryBroadcast\n                                    :     :              +- BroadcastExchange\n                                    :     :                 +- CometNativeColumnarToRow\n                                    :     :                    +- CometProject\n                                    :     :                       +- CometFilter\n                                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- CometNativeColumnarToRow\n                                             :  +- CometFilter\n                                             :     +- CometNativeScan parquet spark_catalog.default.store\n                                             +- BroadcastExchange\n                                                +- Project\n                                                   +- Filter\n                                                      +- Window\n                                                         +- WindowGroupLimit\n                                                            +- Sort\n                                                               +- HashAggregate\n                                                                  +- CometNativeColumnarToRow\n                                                                     +- CometColumnarExchange\n                                                                        +- HashAggregate\n                                                                           +- Project\n                                                                              +- BroadcastHashJoin\n                                                                                 :- 
Project\n                                                                                 :  +- BroadcastHashJoin\n                                                                                 :     :- Filter\n                                                                                 :     :  +- ColumnarToRow\n                                                                                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                                 :     :           +- ReusedSubquery\n                                                                                 :     +- BroadcastExchange\n                                                                                 :        +- CometNativeColumnarToRow\n                                                                                 :           +- CometProject\n                                                                                 :              +- CometFilter\n                                                                                 :                 +- CometNativeScan parquet spark_catalog.default.store\n                                                                                 +- BroadcastExchange\n                                                                                    +- CometNativeColumnarToRow\n                                                                                       +- CometProject\n                                                                                          +- CometFilter\n                                                                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 18 out of 53 eligible operators (33%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q70.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- CometNativeColumnarToRow\n                                    :  +- CometProject\n                                    :     +- CometBroadcastHashJoin\n                                    :        :- CometFilter\n                                    :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :        :        +- SubqueryBroadcast\n                                    :        :           +- BroadcastExchange\n                                    :        :              +- CometNativeColumnarToRow\n                                    :        :                 +- CometProject\n                                    :        :                    +- CometFilter\n                                    :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :        +- CometBroadcastExchange\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- CometNativeColumnarToRow\n                                             :  +- CometFilter\n                                             :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                             +- BroadcastExchange\n                                                +- Project\n                                                   +- Filter\n                                                      +- Window\n                                                         +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometSort\n                                                                  +- CometHashAggregate\n                                                                     +- CometExchange\n                                                                        +- CometHashAggregate\n                                                                           +- CometProject\n                                                                              +- CometBroadcastHashJoin\n                                                             
                    :- CometProject\n                                                                                 :  +- CometBroadcastHashJoin\n                                                                                 :     :- CometFilter\n                                                                                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                                 :     :        +- ReusedSubquery\n                                                                                 :     +- CometBroadcastExchange\n                                                                                 :        +- CometProject\n                                                                                 :           +- CometFilter\n                                                                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                                                 +- CometBroadcastExchange\n                                                                                    +- CometProject\n                                                                                       +- CometFilter\n                                                                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 34 out of 53 eligible operators (64%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q71.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- CometNativeColumnarToRow\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- Project\n                        :  +- BroadcastHashJoin\n                        :     :- BroadcastExchange\n                        :     :  +- CometNativeColumnarToRow\n                        :     :     +- CometProject\n                        :     :        +- CometFilter\n                        :     :           +- CometNativeScan parquet spark_catalog.default.item\n                        :     +- Union\n                        :        :- Project\n                        :        :  +- BroadcastHashJoin\n                        :        :     :- Filter\n                        :        :     :  +- ColumnarToRow\n                        :        :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :        :     :           +- SubqueryBroadcast\n                        :        :     :              +- BroadcastExchange\n                        :        :     :                 +- CometNativeColumnarToRow\n                        :        :     :                    +- CometProject\n                        :        :     :                       +- CometFilter\n                        :        :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :        :     +- BroadcastExchange\n                        :        :        +- CometNativeColumnarToRow\n                        :        :           +- CometProject\n                        :        :              +- CometFilter\n                        :        :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :        :- Project\n                        :        :  +- BroadcastHashJoin\n                        :        :     :- Filter\n                        :        :     :  +- ColumnarToRow\n                        :        :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :        :     :           +- ReusedSubquery\n                        :        :     +- BroadcastExchange\n                        :        :        +- CometNativeColumnarToRow\n                        :        :           +- CometProject\n                        :        :              +- CometFilter\n                        :        :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :        +- Project\n                        :           +- BroadcastHashJoin\n                        :              :- Filter\n                        :              :  +- ColumnarToRow\n                        :              :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :              :           +- ReusedSubquery\n                        :              +- BroadcastExchange\n                        :                 +- CometNativeColumnarToRow\n                        :                    +- CometProject\n                        :                       +- CometFilter\n            
            :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                        +- BroadcastExchange\n                           +- CometNativeColumnarToRow\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.time_dim\n\nComet accelerated 21 out of 49 eligible operators (42%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q71.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometBroadcastExchange\n                     :     :  +- CometProject\n                     :     :     +- CometFilter\n                     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     :     +- CometUnion\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometFilter\n                     :        :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                     :        :     :        +- SubqueryBroadcast\n                     :        :     :           +- BroadcastExchange\n                     :        :     :              +- CometNativeColumnarToRow\n                     :        :     :                 +- CometProject\n                     :        :     :                    +- CometFilter\n                     :        :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometFilter\n                     :        :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :        :     :        +- ReusedSubquery\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        +- CometProject\n                     :           +- CometBroadcastHashJoin\n                     :              :- CometFilter\n                     :              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :              :        +- ReusedSubquery\n                     :              +- CometBroadcastExchange\n                     :                 +- CometProject\n                     :                    +- CometFilter\n                     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n\nComet accelerated 45 out of 49 eligible operators (91%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q72.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometColumnarExchange\n                  :     +- Project\n                  :        +- BroadcastHashJoin\n                  :           :- Project\n                  :           :  +- BroadcastHashJoin\n                  :           :     :- Project\n                  :           :     :  +- BroadcastHashJoin\n                  :           :     :     :- Project\n                  :           :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :- Project\n                  :           :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :- Project\n                  :           :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- Filter\n                  :           :     :     :     :     :     :     :     :     :  +- ColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :           :     :     :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :              +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                    +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                       +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n               
   :           :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :           +- CometProject\n                  :           :     :     :     :     :              +- CometFilter\n                  :           :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :           +- CometProject\n                  :           :     :     :     :              +- CometFilter\n                  :           :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- BroadcastExchange\n                  :           :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :           +- CometProject\n                  :           :     :     :              +- CometFilter\n                  :           :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     +- BroadcastExchange\n                  :           :     :        +- CometNativeColumnarToRow\n                  :           :     :           +- CometFilter\n                  :           :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     +- BroadcastExchange\n                  :           :        +- CometNativeColumnarToRow\n                  :           :           +- CometFilter\n                  :           :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           +- BroadcastExchange\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n\nComet accelerated 37 out of 68 eligible operators (54%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q72.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometExchange\n                  :     +- CometProject\n                  :        +- CometBroadcastHashJoin\n                  :           :- CometProject\n                  :           :  +- CometBroadcastHashJoin\n                  :           :     :- CometProject\n                  :           :     :  +- CometBroadcastHashJoin\n                  :           :     :     :- CometProject\n                  :           :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :- CometProject\n                  :           :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :- CometProject\n                  :           :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- CometFilter\n                  :           :     :     :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :           :     :     :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :           +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                 +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                    +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :           :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :        +- CometProject\n                  :           :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :        +- CometProject\n                  :           :     :     :     :           +- CometFilter\n                  :           :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- CometBroadcastExchange\n                  :           :     :     :        +- CometProject\n                  :           :     :     :           +- CometFilter\n                  :           :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     +- CometBroadcastExchange\n                  :           :     :        +- CometFilter\n                  :           :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     +- CometBroadcastExchange\n                  :           :        +- CometFilter\n                  :           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           +- CometBroadcastExchange\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n\nComet accelerated 66 out of 68 eligible operators (97%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q73.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Filter\n            :  +- HashAggregate\n            :     +- CometNativeColumnarToRow\n            :        +- CometColumnarExchange\n            :           +- HashAggregate\n            :              +- Project\n            :                 +- BroadcastHashJoin\n            :                    :- Project\n            :                    :  +- BroadcastHashJoin\n            :                    :     :- Project\n            :                    :     :  +- BroadcastHashJoin\n            :                    :     :     :- Filter\n            :                    :     :     :  +- ColumnarToRow\n            :                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                    :     :     :           +- SubqueryBroadcast\n            :                    :     :     :              +- BroadcastExchange\n            :                    :     :     :                 +- CometNativeColumnarToRow\n            :                    :     :     :                    +- CometProject\n            :                    :     :     :                       +- CometFilter\n            :                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     :     +- BroadcastExchange\n            :                    :     :        +- CometNativeColumnarToRow\n            :                    :     :           +- CometProject\n            :                    :     :              +- CometFilter\n            :                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     +- BroadcastExchange\n            :                    :        +- CometNativeColumnarToRow\n            :                    :           +- CometProject\n            :                    :              +- CometFilter\n            :                    :                 +- CometNativeScan parquet spark_catalog.default.store\n            :                    +- BroadcastExchange\n            :                       +- CometNativeColumnarToRow\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometNativeScan parquet spark_catalog.default.household_demographics\n            +- BroadcastExchange\n               +- CometNativeColumnarToRow\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 37 eligible operators (48%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q73.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometFilter\n            :  +- CometHashAggregate\n            :     +- CometExchange\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometFilter\n            :                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :        +- SubqueryBroadcast\n            :                 :     :     :           +- BroadcastExchange\n            :                 :     :     :              +- CometNativeColumnarToRow\n            :                 :     :     :                 +- CometProject\n            :                 :     :     :                    +- CometFilter\n            :                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometProject\n            :                 :     :           +- CometFilter\n            :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometProject\n            :                 :           +- CometFilter\n            :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 37 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q74.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- BroadcastHashJoin\n      :     :  :- Filter\n      :     :  :  +- HashAggregate\n      :     :  :     +- CometNativeColumnarToRow\n      :     :  :        +- CometColumnarExchange\n      :     :  :           +- HashAggregate\n      :     :  :              +- Project\n      :     :  :                 +- BroadcastHashJoin\n      :     :  :                    :- Project\n      :     :  :                    :  +- BroadcastHashJoin\n      :     :  :                    :     :- CometNativeColumnarToRow\n      :     :  :                    :     :  +- CometProject\n      :     :  :                    :     :     +- CometFilter\n      :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :  :                    :     +- BroadcastExchange\n      :     :  :                    :        +- Filter\n      :     :  :                    :           +- ColumnarToRow\n      :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :  :                    :                    +- SubqueryBroadcast\n      :     :  :                    :                       +- BroadcastExchange\n      :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :  :                    :                             +- CometFilter\n      :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  :                    +- BroadcastExchange\n      :     :  :                       +- CometNativeColumnarToRow\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  +- BroadcastExchange\n      :     :     +- HashAggregate\n      :     :        +- CometNativeColumnarToRow\n      :     :           +- CometColumnarExchange\n      :     :              +- HashAggregate\n      :     :                 +- Project\n      :     :                    +- BroadcastHashJoin\n      :     :                       :- Project\n      :     :                       :  +- BroadcastHashJoin\n      :     :                       :     :- CometNativeColumnarToRow\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                       :     +- BroadcastExchange\n      :     :                       :        +- Filter\n      :     :                       :           +- ColumnarToRow\n      :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                       :                    +- SubqueryBroadcast\n      :     :                       :                       +- BroadcastExchange\n      :     :                       :                          +- CometNativeColumnarToRow\n      :     :                       :                             +- CometFilter\n      :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n 
     :     :                       +- BroadcastExchange\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometFilter\n      :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 85 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q74.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometBroadcastHashJoin\n         :     :  :- CometFilter\n         :     :  :  +- CometHashAggregate\n         :     :  :     +- CometExchange\n         :     :  :        +- CometHashAggregate\n         :     :  :           +- CometProject\n         :     :  :              +- CometBroadcastHashJoin\n         :     :  :                 :- CometProject\n         :     :  :                 :  +- CometBroadcastHashJoin\n         :     :  :                 :     :- CometProject\n         :     :  :                 :     :  +- CometFilter\n         :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :  :                 :     +- CometBroadcastExchange\n         :     :  :                 :        +- CometFilter\n         :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :  :                 :                 +- SubqueryBroadcast\n         :     :  :                 :                    +- BroadcastExchange\n         :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :  :                 :                          +- CometFilter\n         :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  :                 +- CometBroadcastExchange\n         :     :  :                    +- CometFilter\n         :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  +- CometBroadcastExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometExchange\n         :     :           +- CometHashAggregate\n         :     :              +- CometProject\n         :     :                 +- CometBroadcastHashJoin\n         :     :                    :- CometProject\n         :     :                    :  +- CometBroadcastHashJoin\n         :     :                    :     :- CometProject\n         :     :                    :     :  +- CometFilter\n         :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                    :     +- CometBroadcastExchange\n         :     :                    :        +- CometFilter\n         :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                    :                 +- SubqueryBroadcast\n         :     :                    :                    +- BroadcastExchange\n         :     :                    :                       +- CometNativeColumnarToRow\n         :     :                    :                          +- CometFilter\n         :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                    +- CometBroadcastExchange\n         :     :                       +- CometFilter\n         :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- 
CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 79 out of 85 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q75.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n         :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- SubqueryBroadcast\n         :                             :     :           :     :              +- BroadcastExchange\n         :                             :     :           :     :                 +- CometNativeColumnarToRow\n         :                             :     :           :     :                    +- CometFilter\n         :                             :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n    
     :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- ReusedSubquery\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometColumnarExchange\n         :                                   :     +- Project\n         :                                   :        +- BroadcastHashJoin\n         :                                   :           :- Project\n         :                                   :           :  +- BroadcastHashJoin\n         :                                   :           :     :- Filter\n         :                                   :           :     :  +- ColumnarToRow\n         :                                   :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                                   :           :     :           +- ReusedSubquery\n         :                                   :           :     +- BroadcastExchange\n         :                                   :           :        +- CometNativeColumnarToRow\n         :                                   :           :           +- CometProject\n         :                                   :           :              +- CometFilter\n         :                                   :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                                   :           +- BroadcastExchange\n         :                                   :              +- 
CometNativeColumnarToRow\n         :                                   :                 +- CometFilter\n         :                                   :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometNativeScan parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- SubqueryBroadcast\n                                       :     :           :     :              +- BroadcastExchange\n                                       :     :           :     :                 +- CometNativeColumnarToRow\n                                       :     :           :     :                    +- CometFilter\n                                       :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                        
               :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- ReusedSubquery\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometColumnarExchange\n                                             :     +- Project\n                                             :        +- BroadcastHashJoin\n                                             :           :- Project\n                                             :           :  +- BroadcastHashJoin\n                                             :           :     :- Filter\n                                             :           :     :  +- ColumnarToRow\n                                             :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :           :     :           +- ReusedSubquery\n                                             :           :     +- BroadcastExchange\n                                             :           :        +- 
CometNativeColumnarToRow\n                                             :           :           +- CometProject\n                                             :           :              +- CometFilter\n                                             :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                             :           +- BroadcastExchange\n                                             :              +- CometNativeColumnarToRow\n                                             :                 +- CometFilter\n                                             :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.web_returns\n\nComet accelerated 111 out of 167 eligible operators (66%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q75.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :                             :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :                             :     :           :     :        +- SubqueryBroadcast\n         :                             :     :           :     :           +- BroadcastExchange\n         :                             :     :           :     :              +- CometNativeColumnarToRow\n         :                             :     :           :     :                 +- CometFilter\n         :                             :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :     
                        :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :                             :     :           :     :        +- ReusedSubquery\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometExchange\n         :                                   :     +- CometProject\n         :                                   :        +- CometBroadcastHashJoin\n         :                                   :           :- CometProject\n         :                                   :           :  +- CometBroadcastHashJoin\n         :                                   :           :     :- CometFilter\n         :                                   :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                                   :           :     :        +- ReusedSubquery\n         :                                   :           :     +- CometBroadcastExchange\n         :                                   :           :        +- CometProject\n         :                                   :           :           +- CometFilter\n         :                                   :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                                   :           +- CometBroadcastExchange\n         :                                   :              +- CometFilter\n         :                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n            
            +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :           :     :        +- SubqueryBroadcast\n                                       :     :           :     :           +- BroadcastExchange\n                                       :     :           :     :              +- CometNativeColumnarToRow\n                                       :     :           :     :                 +- CometFilter\n                                       :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :           :     :        +- 
ReusedSubquery\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometExchange\n                                             :     +- CometProject\n                                             :        +- CometBroadcastHashJoin\n                                             :           :- CometProject\n                                             :           :  +- CometBroadcastHashJoin\n                                             :           :     :- CometFilter\n                                             :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                             :           :     :        +- ReusedSubquery\n                                             :           :     +- CometBroadcastExchange\n                                             :           :        +- CometProject\n                                             :           :           +- CometFilter\n                                             :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                             :           +- CometBroadcastExchange\n                                             :              +- CometFilter\n                                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n\nComet accelerated 159 out of 167 eligible operators (95%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q76.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometNativeScan parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometNativeScan parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometNativeScan parquet spark_catalog.default.web_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometNativeScan parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometFilter\n                     :     :  +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometNativeScan parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 44 out of 44 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q76.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometFilter\n                     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 44 out of 44 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q77.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- HashAggregate\n                  :     :  +- CometNativeColumnarToRow\n                  :     :     +- CometColumnarExchange\n                  :     :        +- HashAggregate\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- Project\n                  :     :                 :  +- BroadcastHashJoin\n                  :     :                 :     :- Filter\n                  :     :                 :     :  +- ColumnarToRow\n                  :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :     :           +- SubqueryBroadcast\n                  :     :                 :     :              +- BroadcastExchange\n                  :     :                 :     :                 +- CometNativeColumnarToRow\n                  :     :                 :     :                    +- CometProject\n                  :     :                 :     :                       +- CometFilter\n                  :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                 :     +- BroadcastExchange\n                  :     :                 :        +- CometNativeColumnarToRow\n                  :     :                 :           +- CometProject\n                  :     :                 :              +- CometFilter\n                  :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :                 +- BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometFilter\n                  :     :                          +- CometNativeScan parquet spark_catalog.default.store\n                  :     +- BroadcastExchange\n                  :        +- HashAggregate\n                  :           +- CometNativeColumnarToRow\n                  :              +- CometColumnarExchange\n                  :                 +- HashAggregate\n                  :                    +- Project\n                  :                       +- BroadcastHashJoin\n                  :                          :- Project\n                  :                          :  +- BroadcastHashJoin\n                  :                          :     :- Filter\n                  :                          :     :  +- ColumnarToRow\n                  :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                          :     :           +- ReusedSubquery\n                  :                          :     +- BroadcastExchange\n                  :                          :        +- CometNativeColumnarToRow\n                  :                          :           +- CometProject\n                  :                          :              +- CometFilter\n                  :                
          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                          +- BroadcastExchange\n                  :                             +- CometNativeColumnarToRow\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.store\n                  :- Project\n                  :  +- BroadcastNestedLoopJoin\n                  :     :- BroadcastExchange\n                  :     :  +- HashAggregate\n                  :     :     +- CometNativeColumnarToRow\n                  :     :        +- CometColumnarExchange\n                  :     :           +- HashAggregate\n                  :     :              +- Project\n                  :     :                 +- BroadcastHashJoin\n                  :     :                    :- ColumnarToRow\n                  :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                    :        +- ReusedSubquery\n                  :     :                    +- BroadcastExchange\n                  :     :                       +- CometNativeColumnarToRow\n                  :     :                          +- CometProject\n                  :     :                             +- CometFilter\n                  :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- HashAggregate\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometColumnarExchange\n                  :              +- HashAggregate\n                  :                 +- Project\n                  :                    +- BroadcastHashJoin\n                  :                       :- ColumnarToRow\n                  :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                       :        +- ReusedSubquery\n                  :                       +- BroadcastExchange\n                  :                          +- CometNativeColumnarToRow\n                  :                             +- CometProject\n                  :                                +- CometFilter\n                  :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- HashAggregate\n                        :  +- CometNativeColumnarToRow\n                        :     +- CometColumnarExchange\n                        :        +- HashAggregate\n                        :           +- Project\n                        :              +- BroadcastHashJoin\n                        :                 :- Project\n                        :                 :  +- BroadcastHashJoin\n                        :                 :     :- Filter\n                        :                 :     :  +- ColumnarToRow\n                        :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :                 :     :           +- ReusedSubquery\n                        :                 :     +- BroadcastExchange\n         
               :                 :        +- CometNativeColumnarToRow\n                        :                 :           +- CometProject\n                        :                 :              +- CometFilter\n                        :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :                 +- BroadcastExchange\n                        :                    +- CometNativeColumnarToRow\n                        :                       +- CometFilter\n                        :                          +- CometNativeScan parquet spark_catalog.default.web_page\n                        +- BroadcastExchange\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Project\n                                          +- BroadcastHashJoin\n                                             :- Project\n                                             :  +- BroadcastHashJoin\n                                             :     :- Filter\n                                             :     :  +- ColumnarToRow\n                                             :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :     :           +- ReusedSubquery\n                                             :     +- BroadcastExchange\n                                             :        +- CometNativeColumnarToRow\n                                             :           +- CometProject\n                                             :              +- CometFilter\n                                             :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- BroadcastExchange\n                                                +- CometNativeColumnarToRow\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.web_page\n\nComet accelerated 36 out of 109 eligible operators (33%). Final plan contains 24 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q77.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Union\n                  :- CometNativeColumnarToRow\n                  :  +- CometProject\n                  :     +- CometBroadcastHashJoin\n                  :        :- CometHashAggregate\n                  :        :  +- CometExchange\n                  :        :     +- CometHashAggregate\n                  :        :        +- CometProject\n                  :        :           +- CometBroadcastHashJoin\n                  :        :              :- CometProject\n                  :        :              :  +- CometBroadcastHashJoin\n                  :        :              :     :- CometFilter\n                  :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :        :              :     :        +- SubqueryBroadcast\n                  :        :              :     :           +- BroadcastExchange\n                  :        :              :     :              +- CometNativeColumnarToRow\n                  :        :              :     :                 +- CometProject\n                  :        :              :     :                    +- CometFilter\n                  :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        :              :     +- CometBroadcastExchange\n                  :        :              :        +- CometProject\n                  :        :              :           +- CometFilter\n                  :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        :              +- CometBroadcastExchange\n                  :        :                 +- CometFilter\n                  :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :        +- CometBroadcastExchange\n                  :           +- CometHashAggregate\n                  :              +- CometExchange\n                  :                 +- CometHashAggregate\n                  :                    +- CometProject\n                  :                       +- CometBroadcastHashJoin\n                  :                          :- CometProject\n                  :                          :  +- CometBroadcastHashJoin\n                  :                          :     :- CometFilter\n                  :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :                          :     :        +- ReusedSubquery\n                  :                          :     +- CometBroadcastExchange\n                  :                          :        +- CometProject\n                  :                          :           +- CometFilter\n                  :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                          +- CometBroadcastExchange\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :- Project\n                  :  +-  
BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n                  :     :- BroadcastExchange\n                  :     :  +- CometNativeColumnarToRow\n                  :     :     +- CometHashAggregate\n                  :     :        +- CometExchange\n                  :     :           +- CometHashAggregate\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometNativeColumnarToRow\n                  :        +- CometHashAggregate\n                  :           +- CometExchange\n                  :              +- CometHashAggregate\n                  :                 +- CometProject\n                  :                    +- CometBroadcastHashJoin\n                  :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :                       :     +- ReusedSubquery\n                  :                       +- CometBroadcastExchange\n                  :                          +- CometProject\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometProject\n                           :           +- CometBroadcastHashJoin\n                           :              :- CometProject\n                           :              :  +- CometBroadcastHashJoin\n                           :              :     :- CometFilter\n                           :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :              :     :        +- ReusedSubquery\n                           :              :     +- CometBroadcastExchange\n                           :              :        +- CometProject\n                           :              :           +- CometFilter\n                           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              +- CometBroadcastExchange\n                           :                 +- CometFilter\n                           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n                           +- CometBroadcastExchange\n                              +- CometHashAggregate\n                                 +- CometExchange\n                                    +- CometHashAggregate\n                                       +- CometProject\n                                
          +- CometBroadcastHashJoin\n                                             :- CometProject\n                                             :  +- CometBroadcastHashJoin\n                                             :     :- CometFilter\n                                             :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                             :     :        +- ReusedSubquery\n                                             :     +- CometBroadcastExchange\n                                             :        +- CometProject\n                                             :           +- CometFilter\n                                             :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometBroadcastExchange\n                                                +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n\nComet accelerated 94 out of 109 eligible operators (86%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q78.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometColumnarExchange\n         :     :                 :        :     +- Filter\n         :     :                 :        :        +- ColumnarToRow\n         :     :                 :        :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :     :                 :        :                 +- SubqueryBroadcast\n         :     :                 :        :                    +- BroadcastExchange\n         :     :                 :        :                       +- CometNativeColumnarToRow\n         :     :                 :        :                          +- CometFilter\n         :     :                 :        :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometColumnarExchange\n         :                          :        :     +- Filter\n         :                          :        :        +- ColumnarToRow\n         :                          :        :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                          :        :                 +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometNativeScan parquet spark_catalog.default.web_returns\n         :   
                       +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometColumnarExchange\n                              :        :     +- Filter\n                              :        :        +- ColumnarToRow\n                              :        :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :        :                 +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 64 out of 76 eligible operators (84%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q78.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometExchange\n         :     :                 :        :     +- CometFilter\n         :     :                 :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                 :        :              +- SubqueryBroadcast\n         :     :                 :        :                 +- BroadcastExchange\n         :     :                 :        :                    +- CometNativeColumnarToRow\n         :     :                 :        :                       +- CometFilter\n         :     :                 :        :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometExchange\n         :                          :        :     +- CometFilter\n         :                          :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :        :              +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometExchange\n                              :        :     +- CometFilter\n                              :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :        :              +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 70 out of 76 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q79.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- HashAggregate\n      :  +- CometNativeColumnarToRow\n      :     +- CometColumnarExchange\n      :        +- HashAggregate\n      :           +- Project\n      :              +- BroadcastHashJoin\n      :                 :- Project\n      :                 :  +- BroadcastHashJoin\n      :                 :     :- Project\n      :                 :     :  +- BroadcastHashJoin\n      :                 :     :     :- Filter\n      :                 :     :     :  +- ColumnarToRow\n      :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                 :     :     :           +- SubqueryBroadcast\n      :                 :     :     :              +- BroadcastExchange\n      :                 :     :     :                 +- CometNativeColumnarToRow\n      :                 :     :     :                    +- CometProject\n      :                 :     :     :                       +- CometFilter\n      :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                 :     :     +- BroadcastExchange\n      :                 :     :        +- CometNativeColumnarToRow\n      :                 :     :           +- CometProject\n      :                 :     :              +- CometFilter\n      :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                 :     +- BroadcastExchange\n      :                 :        +- CometNativeColumnarToRow\n      :                 :           +- CometProject\n      :                 :              +- CometFilter\n      :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n      :                 +- BroadcastExchange\n      :                    +- CometNativeColumnarToRow\n      :                       +- CometProject\n      :                          +- CometFilter\n      :                             +- CometNativeScan parquet spark_catalog.default.household_demographics\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 16 out of 35 eligible operators (45%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q79.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometHashAggregate\n         :  +- CometExchange\n         :     +- CometHashAggregate\n         :        +- CometProject\n         :           +- CometBroadcastHashJoin\n         :              :- CometProject\n         :              :  +- CometBroadcastHashJoin\n         :              :     :- CometProject\n         :              :     :  +- CometBroadcastHashJoin\n         :              :     :     :- CometFilter\n         :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :              :     :     :        +- SubqueryBroadcast\n         :              :     :     :           +- BroadcastExchange\n         :              :     :     :              +- CometNativeColumnarToRow\n         :              :     :     :                 +- CometProject\n         :              :     :     :                    +- CometFilter\n         :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :              :     :     +- CometBroadcastExchange\n         :              :     :        +- CometProject\n         :              :     :           +- CometFilter\n         :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :              :     +- CometBroadcastExchange\n         :              :        +- CometProject\n         :              :           +- CometFilter\n         :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :              +- CometBroadcastExchange\n         :                 +- CometProject\n         :                    +- CometFilter\n         :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 33 out of 35 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q8.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Filter\n                  :     :     :  +- ColumnarToRow\n                  :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           +- SubqueryBroadcast\n                  :     :     :              +- BroadcastExchange\n                  :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :                    +- CometProject\n                  :     :     :                       +- CometFilter\n                  :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n                  :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometBroadcastHashJoin\n                                    :- CometProject\n                                    :  +- CometFilter\n                                    :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometHashAggregate\n                                                +- CometExchange\n                                                   +- CometHashAggregate\n                                                      +- CometProject\n                                                         +- CometBroadcastHashJoin\n                                                            :- CometProject\n                                                            :  +- CometFilter\n                                                            :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                                                            +- CometBroadcastExchange\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 32 out of 48 eligible operators (66%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q8.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometFilter\n                  :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :        +- SubqueryBroadcast\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  +- CometBroadcastExchange\n                     +- CometHashAggregate\n                        +- CometExchange\n                           +- CometHashAggregate\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometFilter\n                                 :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometHashAggregate\n                                             +- CometExchange\n                                                +- CometHashAggregate\n                                                   +- CometProject\n                                                      +- CometBroadcastHashJoin\n                                                         :- CometProject\n                                                         :  +- CometFilter\n                                                         :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                                                         +- CometBroadcastExchange\n                                                            +- CometProject\n                                                               +- CometFilter\n                                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 46 out of 48 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q80.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometColumnarExchange\n                  :              :     :     :     :     :     +- Filter\n                  :              :     :     :     :     :        +- ColumnarToRow\n                  :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :              :     :     :     :     :                 +- SubqueryBroadcast\n                  :              :     :     :     :     :                    +- BroadcastExchange\n                  :              :     :     :     :     :                       +- CometNativeColumnarToRow\n                  :              :     :     :     :     :                          +- CometProject\n                  :              :     :     :     :     :                             +- CometFilter\n                  :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometNativeScan parquet spark_catalog.default.item\n                  :              
+- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometColumnarExchange\n                  :              :     :     :     :     :     +- Filter\n                  :              :     :     :     :     :        +- ColumnarToRow\n                  :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :              :     :     :     :     :                 +- ReusedSubquery\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometNativeScan parquet spark_catalog.default.item\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                 
                :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometBroadcastHashJoin\n                                 :     :     :     :- CometProject\n                                 :     :     :     :  +- CometSortMergeJoin\n                                 :     :     :     :     :- CometSort\n                                 :     :     :     :     :  +- CometColumnarExchange\n                                 :     :     :     :     :     +- Filter\n                                 :     :     :     :     :        +- ColumnarToRow\n                                 :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :     :     :     :                 +- ReusedSubquery\n                                 :     :     :     :     +- CometSort\n                                 :     :     :     :        +- CometExchange\n                                 :     :     :     :           +- CometProject\n                                 :     :     :     :              +- CometFilter\n                                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n                                 :     :     :     +- CometBroadcastExchange\n                                 :     :     :        +- CometProject\n                                 :     :     :           +- CometFilter\n                                 :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometNativeScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 117 out of 127 eligible operators (92%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q80.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometExpand\n               +- CometUnion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometExchange\n                  :              :     :     :     :     :     +- CometFilter\n                  :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :              :     :     :     :     :              +- SubqueryBroadcast\n                  :              :     :     :     :     :                 +- BroadcastExchange\n                  :              :     :     :     :     :                    +- CometNativeColumnarToRow\n                  :              :     :     :     :     :                       +- CometProject\n                  :              :     :     :     :     :                          +- CometFilter\n                  :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :              +- CometBroadcastExchange\n                  :                 +- 
CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometProject\n                  :              :     :     :  +- CometBroadcastHashJoin\n                  :              :     :     :     :- CometProject\n                  :              :     :     :     :  +- CometSortMergeJoin\n                  :              :     :     :     :     :- CometSort\n                  :              :     :     :     :     :  +- CometExchange\n                  :              :     :     :     :     :     +- CometFilter\n                  :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :              :     :     :     :     :              +- ReusedSubquery\n                  :              :     :     :     :     +- CometSort\n                  :              :     :     :     :        +- CometExchange\n                  :              :     :     :     :           +- CometProject\n                  :              :     :     :     :              +- CometFilter\n                  :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                  :              :     :     :     +- CometBroadcastExchange\n                  :              :     :     :        +- CometProject\n                  :              :     :     :           +- CometFilter\n                  :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- 
CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometBroadcastHashJoin\n                                 :     :     :     :- CometProject\n                                 :     :     :     :  +- CometSortMergeJoin\n                                 :     :     :     :     :- CometSort\n                                 :     :     :     :     :  +- CometExchange\n                                 :     :     :     :     :     +- CometFilter\n                                 :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :     :     :     :              +- ReusedSubquery\n                                 :     :     :     :     +- CometSort\n                                 :     :     :     :        +- CometExchange\n                                 :     :     :     :           +- CometProject\n                                 :     :     :     :              +- CometFilter\n                                 :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                 :     :     :     +- CometBroadcastExchange\n                                 :     :     :        +- CometProject\n                                 :     :     :           +- CometFilter\n                                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometProject\n                                 :     :           +- CometFilter\n                                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 123 out of 127 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q81.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- BroadcastHashJoin\n      :     :     :- Filter\n      :     :     :  +- HashAggregate\n      :     :     :     +- CometNativeColumnarToRow\n      :     :     :        +- CometColumnarExchange\n      :     :     :           +- HashAggregate\n      :     :     :              +- Project\n      :     :     :                 +- BroadcastHashJoin\n      :     :     :                    :- Project\n      :     :     :                    :  +- BroadcastHashJoin\n      :     :     :                    :     :- Filter\n      :     :     :                    :     :  +- ColumnarToRow\n      :     :     :                    :     :     +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :     :                    :     :           +- SubqueryBroadcast\n      :     :     :                    :     :              +- BroadcastExchange\n      :     :     :                    :     :                 +- CometNativeColumnarToRow\n      :     :     :                    :     :                    +- CometProject\n      :     :     :                    :     :                       +- CometFilter\n      :     :     :                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    :     +- BroadcastExchange\n      :     :     :                    :        +- CometNativeColumnarToRow\n      :     :     :                    :           +- CometProject\n      :     :     :                    :              +- CometFilter\n      :     :     :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :     :                    +- BroadcastExchange\n      :     :     :                       +- CometNativeColumnarToRow\n      :     :     :                          +- CometProject\n      :     :     :                             +- CometFilter\n      :     :     :                                +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     :     +- BroadcastExchange\n      :     :        +- Filter\n      :     :           +- HashAggregate\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometColumnarExchange\n      :     :                    +- HashAggregate\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Filter\n      :     :                                         :     :  +- ColumnarToRow\n      :     :                                         :     :     +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :           +- ReusedSubquery\n      :     :                                    
     :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometProject\n      :     :                                         :              +- CometFilter\n      :     :                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometProject\n      :     :                                                  +- CometFilter\n      :     :                                                     +- CometNativeScan parquet spark_catalog.default.customer_address\n      :     +- BroadcastExchange\n      :        +- CometNativeColumnarToRow\n      :           +- CometProject\n      :              +- CometFilter\n      :                 +- CometNativeScan parquet spark_catalog.default.customer\n      +- BroadcastExchange\n         +- CometNativeColumnarToRow\n            +- CometProject\n               +- CometFilter\n                  +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 24 out of 61 eligible operators (39%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q81.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometProject\n         :     :  +- CometBroadcastHashJoin\n         :     :     :- CometFilter\n         :     :     :  +- CometHashAggregate\n         :     :     :     +- CometExchange\n         :     :     :        +- CometHashAggregate\n         :     :     :           +- CometProject\n         :     :     :              +- CometBroadcastHashJoin\n         :     :     :                 :- CometProject\n         :     :     :                 :  +- CometBroadcastHashJoin\n         :     :     :                 :     :- CometFilter\n         :     :     :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :     :     :                 :     :        +- SubqueryBroadcast\n         :     :     :                 :     :           +- BroadcastExchange\n         :     :     :                 :     :              +- CometNativeColumnarToRow\n         :     :     :                 :     :                 +- CometProject\n         :     :     :                 :     :                    +- CometFilter\n         :     :     :                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 :     +- CometBroadcastExchange\n         :     :     :                 :        +- CometProject\n         :     :     :                 :           +- CometFilter\n         :     :     :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :     :                 +- CometBroadcastExchange\n         :     :     :                    +- CometProject\n         :     :     :                       +- CometFilter\n         :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     :     +- CometBroadcastExchange\n         :     :        +- CometFilter\n         :     :           +- CometHashAggregate\n         :     :              +- CometExchange\n         :     :                 +- CometHashAggregate\n         :     :                    +- CometHashAggregate\n         :     :                       +- CometExchange\n         :     :                          +- CometHashAggregate\n         :     :                             +- CometProject\n         :     :                                +- CometBroadcastHashJoin\n         :     :                                   :- CometProject\n         :     :                                   :  +- CometBroadcastHashJoin\n         :     :                                   :     :- CometFilter\n         :     :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :     :                                   :     :        +- ReusedSubquery\n         :     :                                   :     +- CometBroadcastExchange\n         :     :                                   :        +- CometProject\n         :     :                                   :           +- CometFilter\n         :     :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                   +- 
CometBroadcastExchange\n         :     :                                      +- CometProject\n         :     :                                         +- CometFilter\n         :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :     +- CometBroadcastExchange\n         :        +- CometProject\n         :           +- CometFilter\n         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         +- CometBroadcastExchange\n            +- CometProject\n               +- CometFilter\n                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 58 out of 61 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q82.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- BroadcastExchange\n                  :  +- Project\n                  :     +- BroadcastHashJoin\n                  :        :- Project\n                  :        :  +- BroadcastHashJoin\n                  :        :     :- CometNativeColumnarToRow\n                  :        :     :  +- CometProject\n                  :        :     :     +- CometFilter\n                  :        :     :        +- CometNativeScan parquet spark_catalog.default.item\n                  :        :     +- BroadcastExchange\n                  :        :        +- Project\n                  :        :           +- Filter\n                  :        :              +- ColumnarToRow\n                  :        :                 +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :        :                       +- SubqueryBroadcast\n                  :        :                          +- BroadcastExchange\n                  :        :                             +- CometNativeColumnarToRow\n                  :        :                                +- CometProject\n                  :        :                                   +- CometFilter\n                  :        :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :        +- BroadcastExchange\n                  :           +- CometNativeColumnarToRow\n                  :              +- CometProject\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.store_sales\n\nComet accelerated 15 out of 30 eligible operators (50%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q82.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometBroadcastExchange\n                  :  +- CometProject\n                  :     +- CometBroadcastHashJoin\n                  :        :- CometProject\n                  :        :  +- CometBroadcastHashJoin\n                  :        :     :- CometProject\n                  :        :     :  +- CometFilter\n                  :        :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :        :     +- CometBroadcastExchange\n                  :        :        +- CometProject\n                  :        :           +- CometFilter\n                  :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :        :                    +- SubqueryBroadcast\n                  :        :                       +- BroadcastExchange\n                  :        :                          +- CometNativeColumnarToRow\n                  :        :                             +- CometProject\n                  :        :                                +- CometFilter\n                  :        :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :        +- CometBroadcastExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n\nComet accelerated 28 out of 30 eligible operators (93%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q83.ansi.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- HashAggregate\n      :     :  +- CometNativeColumnarToRow\n      :     :     +- CometColumnarExchange\n      :     :        +- HashAggregate\n      :     :           +- Project\n      :     :              +- BroadcastHashJoin\n      :     :                 :- Project\n      :     :                 :  +- BroadcastHashJoin\n      :     :                 :     :- Filter\n      :     :                 :     :  +- ColumnarToRow\n      :     :                 :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                 :     :           +- SubqueryBroadcast\n      :     :                 :     :              +- BroadcastExchange\n      :     :                 :     :                 +- CometNativeColumnarToRow\n      :     :                 :     :                    +- CometProject\n      :     :                 :     :                       +- CometBroadcastHashJoin\n      :     :                 :     :                          :- CometFilter\n      :     :                 :     :                          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :                          +- CometBroadcastExchange\n      :     :                 :     :                             +- CometProject\n      :     :                 :     :                                +- CometBroadcastHashJoin\n      :     :                 :     :                                   :- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     :                                   +- CometBroadcastExchange\n      :     :                 :     :                                      +- CometProject\n      :     :                 :     :                                         +- CometFilter\n      :     :                 :     :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                 :     +- BroadcastExchange\n      :     :                 :        +- CometNativeColumnarToRow\n      :     :                 :           +- CometProject\n      :     :                 :              +- CometFilter\n      :     :                 :                 +- CometNativeScan parquet spark_catalog.default.item\n      :     :                 +- BroadcastExchange\n      :     :                    +- CometNativeColumnarToRow\n      :     :                       +- CometProject\n      :     :                          +- CometBroadcastHashJoin\n      :     :                             :- CometFilter\n      :     :                             :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                             +- CometBroadcastExchange\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometProject\n      :     :                                            +- CometFilter\n      :     :                                               +- CometNativeScan parquet 
spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- HashAggregate\n      :           +- CometNativeColumnarToRow\n      :              +- CometColumnarExchange\n      :                 +- HashAggregate\n      :                    +- Project\n      :                       +- BroadcastHashJoin\n      :                          :- Project\n      :                          :  +- BroadcastHashJoin\n      :                          :     :- Filter\n      :                          :     :  +- ColumnarToRow\n      :                          :     :     +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                          :     :           +- ReusedSubquery\n      :                          :     +- BroadcastExchange\n      :                          :        +- CometNativeColumnarToRow\n      :                          :           +- CometProject\n      :                          :              +- CometFilter\n      :                          :                 +- CometNativeScan parquet spark_catalog.default.item\n      :                          +- BroadcastExchange\n      :                             +- CometNativeColumnarToRow\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometFilter\n      :                                      :  +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometProject\n      :                                            +- CometBroadcastHashJoin\n      :                                               :- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                               +- CometBroadcastExchange\n      :                                                  +- CometProject\n      :                                                     +- CometFilter\n      :                                                        +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- Filter\n                           :     :  +- ColumnarToRow\n                           :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :           +- ReusedSubquery\n                           :     +- BroadcastExchange\n                           :        +- CometNativeColumnarToRow\n                           :           +- CometProject\n                           :              +- CometFilter\n                           :                 +- CometNativeScan parquet spark_catalog.default.item\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometFilter\n                             
          :  +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometBroadcastHashJoin\n                                                :- CometNativeScan parquet spark_catalog.default.date_dim\n                                                +- CometBroadcastExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 60 out of 101 eligible operators (59%). Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q83.ansi.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometHashAggregate\n         :     :  +- CometExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometProject\n         :     :           +- CometBroadcastHashJoin\n         :     :              :- CometProject\n         :     :              :  +- CometBroadcastHashJoin\n         :     :              :     :- CometFilter\n         :     :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :              :     :        +- SubqueryBroadcast\n         :     :              :     :           +- BroadcastExchange\n         :     :              :     :              +- CometNativeColumnarToRow\n         :     :              :     :                 +- CometProject\n         :     :              :     :                    +- CometBroadcastHashJoin\n         :     :              :     :                       :- CometFilter\n         :     :              :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :                       +- CometBroadcastExchange\n         :     :              :     :                          +- CometProject\n         :     :              :     :                             +- CometBroadcastHashJoin\n         :     :              :     :                                :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     :                                +- CometBroadcastExchange\n         :     :              :     :                                   +- CometProject\n         :     :              :     :                                      +- CometFilter\n         :     :              :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :              :     +- CometBroadcastExchange\n         :     :              :        +- CometProject\n         :     :              :           +- CometFilter\n         :     :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :     :              +- CometBroadcastExchange\n         :     :                 +- CometProject\n         :     :                    +- CometBroadcastHashJoin\n         :     :                       :- CometFilter\n         :     :                       :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                       +- CometBroadcastExchange\n         :     :                          +- CometProject\n         :     :                             +- CometBroadcastHashJoin\n         :     :                                :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                                +- CometBroadcastExchange\n         :     :                                   +- CometProject\n         :     :                                      +- CometFilter\n         :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :      
        +- CometHashAggregate\n         :                 +- CometProject\n         :                    +- CometBroadcastHashJoin\n         :                       :- CometProject\n         :                       :  +- CometBroadcastHashJoin\n         :                       :     :- CometFilter\n         :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :                       :     :        +- ReusedSubquery\n         :                       :     +- CometBroadcastExchange\n         :                       :        +- CometProject\n         :                       :           +- CometFilter\n         :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                       +- CometBroadcastExchange\n         :                          +- CometProject\n         :                             +- CometBroadcastHashJoin\n         :                                :- CometFilter\n         :                                :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                +- CometBroadcastExchange\n         :                                   +- CometProject\n         :                                      +- CometBroadcastHashJoin\n         :                                         :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                         +- CometBroadcastExchange\n         :                                            +- CometProject\n         :                                               +- CometFilter\n         :                                                  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometFilter\n                           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                           :     :        +- ReusedSubquery\n                           :     +- CometBroadcastExchange\n                           :        +- CometProject\n                           :           +- CometFilter\n                           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometFilter\n                                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometBroadcastHashJoin\n                                             :- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometBroadcastExchange\n                                                +- CometProject\n                                                   +- CometFilter\n                
                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 97 out of 101 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q84.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometBroadcastExchange\n         :  +- CometProject\n         :     +- CometBroadcastHashJoin\n         :        :- CometProject\n         :        :  +- CometBroadcastHashJoin\n         :        :     :- CometProject\n         :        :     :  +- CometBroadcastHashJoin\n         :        :     :     :- CometProject\n         :        :     :     :  +- CometBroadcastHashJoin\n         :        :     :     :     :- CometProject\n         :        :     :     :     :  +- CometFilter\n         :        :     :     :     :     +- CometNativeScan parquet spark_catalog.default.customer\n         :        :     :     :     +- CometBroadcastExchange\n         :        :     :     :        +- CometProject\n         :        :     :     :           +- CometFilter\n         :        :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n         :        :     :     +- CometBroadcastExchange\n         :        :     :        +- CometFilter\n         :        :     :           +- CometNativeScan parquet spark_catalog.default.customer_demographics\n         :        :     +- CometBroadcastExchange\n         :        :        +- CometFilter\n         :        :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n         :        +- CometBroadcastExchange\n         :           +- CometProject\n         :              +- CometFilter\n         :                 +- CometNativeScan parquet spark_catalog.default.income_band\n         +- CometProject\n            +- CometFilter\n               +- CometNativeScan parquet spark_catalog.default.store_returns\n\nComet accelerated 32 out of 32 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q84.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometBroadcastExchange\n         :  +- CometProject\n         :     +- CometBroadcastHashJoin\n         :        :- CometProject\n         :        :  +- CometBroadcastHashJoin\n         :        :     :- CometProject\n         :        :     :  +- CometBroadcastHashJoin\n         :        :     :     :- CometProject\n         :        :     :     :  +- CometBroadcastHashJoin\n         :        :     :     :     :- CometProject\n         :        :     :     :     :  +- CometFilter\n         :        :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :        :     :     :     +- CometBroadcastExchange\n         :        :     :     :        +- CometProject\n         :        :     :     :           +- CometFilter\n         :        :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         :        :     :     +- CometBroadcastExchange\n         :        :     :        +- CometFilter\n         :        :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n         :        :     +- CometBroadcastExchange\n         :        :        +- CometFilter\n         :        :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n         :        +- CometBroadcastExchange\n         :           +- CometProject\n         :              +- CometFilter\n         :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n         +- CometProject\n            +- CometFilter\n               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n\nComet accelerated 32 out of 32 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q85.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- Project\n                  :     :     :  +- BroadcastHashJoin\n                  :     :     :     :- Project\n                  :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :- Project\n                  :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :- Project\n                  :     :     :     :     :     :  +- BroadcastHashJoin\n                  :     :     :     :     :     :     :- BroadcastExchange\n                  :     :     :     :     :     :     :  +- Filter\n                  :     :     :     :     :     :     :     +- ColumnarToRow\n                  :     :     :     :     :     :     :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :     :     :     :     :              +- SubqueryBroadcast\n                  :     :     :     :     :     :     :                 +- BroadcastExchange\n                  :     :     :     :     :     :     :                    +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                       +- CometProject\n                  :     :     :     :     :     :     :                          +- CometFilter\n                  :     :     :     :     :     :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometNativeColumnarToRow\n                  :     :     :     :     :     :        +- CometProject\n                  :     :     :     :     :     :           +- CometFilter\n                  :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.web_returns\n                  :     :     :     :     :     +- BroadcastExchange\n                  :     :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :     :           +- CometFilter\n                  :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.web_page\n                  :     :     :     :     +- BroadcastExchange\n                  :     :     :     :        +- CometNativeColumnarToRow\n                  :     :     :     :           +- CometProject\n                  :     :     :     :              +- CometFilter\n                  :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     :     +- BroadcastExchange\n                  :     :     :        +- CometNativeColumnarToRow\n                  :     :     :           +- CometProject\n                  :     :     :              +- CometFilter\n                  :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :     :     +- BroadcastExchange\n                  :     :        +- CometNativeColumnarToRow\n                  :     :           +- CometProject\n                  :     :              +- CometFilter\n               
   :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.reason\n\nComet accelerated 24 out of 52 eligible operators (46%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q85.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometProject\n                  :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :- CometProject\n                  :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :- CometProject\n                  :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :     :     :     :- CometBroadcastExchange\n                  :     :     :     :     :     :     :  +- CometFilter\n                  :     :     :     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :     :     :     :     :     :     :              +- BroadcastExchange\n                  :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :     :     :     :     :     :     :                    +- CometProject\n                  :     :     :     :     :     :     :                       +- CometFilter\n                  :     :     :     :     :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :     :     :     :     +- CometProject\n                  :     :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                  :     :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :     :        +- CometFilter\n                  :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n                  :     :     :     :     +- CometBroadcastExchange\n                  :     :     :     :        +- CometProject\n                  :     :     :     :           +- CometFilter\n                  :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometProject\n                  :     :     :           +- CometFilter\n                  :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.reason\n\nComet accelerated 50 out of 52 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q86.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Expand\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Filter\n                                    :     :  +- ColumnarToRow\n                                    :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :           +- SubqueryBroadcast\n                                    :     :              +- BroadcastExchange\n                                    :     :                 +- CometNativeColumnarToRow\n                                    :     :                    +- CometProject\n                                    :     :                       +- CometFilter\n                                    :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 12 out of 28 eligible operators (42%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q86.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometExpand\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometFilter\n                                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                 :     :        +- SubqueryBroadcast\n                                 :     :           +- BroadcastExchange\n                                 :     :              +- CometNativeColumnarToRow\n                                 :     :                 +- CometProject\n                                 :     :                    +- CometFilter\n                                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 23 out of 28 eligible operators (82%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q87.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  BroadcastHashJoin [COMET: BuildRight with LeftAnti is not supported]\n               :  :- CometNativeColumnarToRow\n               :  :  +- CometHashAggregate\n               :  :     +- CometColumnarExchange\n               :  :        +- HashAggregate\n               :  :           +- Project\n               :  :              +- BroadcastHashJoin\n               :  :                 :- Project\n               :  :                 :  +- BroadcastHashJoin\n               :  :                 :     :- Filter\n               :  :                 :     :  +- ColumnarToRow\n               :  :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :  :                 :     :           +- SubqueryBroadcast\n               :  :                 :     :              +- BroadcastExchange\n               :  :                 :     :                 +- CometNativeColumnarToRow\n               :  :                 :     :                    +- CometProject\n               :  :                 :     :                       +- CometFilter\n               :  :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :                 :     +- BroadcastExchange\n               :  :                 :        +- CometNativeColumnarToRow\n               :  :                 :           +- CometProject\n               :  :                 :              +- CometFilter\n               :  :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :  :                 +- BroadcastExchange\n               :  :                    +- CometNativeColumnarToRow\n               :  :                       +- CometProject\n               :  :                          +- CometFilter\n               :  :                             +- CometNativeScan parquet spark_catalog.default.customer\n               :  +- BroadcastExchange\n               :     +- CometNativeColumnarToRow\n               :        +- CometHashAggregate\n               :           +- CometColumnarExchange\n               :              +- HashAggregate\n               :                 +- Project\n               :                    +- BroadcastHashJoin\n               :                       :- Project\n               :                       :  +- BroadcastHashJoin\n               :                       :     :- Filter\n               :                       :     :  +- ColumnarToRow\n               :                       :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                       :     :           +- ReusedSubquery\n               :                       :     +- BroadcastExchange\n               :                       :        +- CometNativeColumnarToRow\n               :                       :           +- CometProject\n               :                       :              +- CometFilter\n               :                       :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                       +- BroadcastExchange\n               :                          +- 
CometNativeColumnarToRow\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.customer\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometHashAggregate\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Filter\n                                    :     :  +- ColumnarToRow\n                                    :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :           +- ReusedSubquery\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 28 out of 66 eligible operators (42%). Final plan contains 14 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q87.native_iceberg_compat/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  BroadcastHashJoin [COMET: BuildRight with LeftAnti is not supported]\n               :  :- CometNativeColumnarToRow\n               :  :  +- CometHashAggregate\n               :  :     +- CometExchange\n               :  :        +- CometHashAggregate\n               :  :           +- CometProject\n               :  :              +- CometBroadcastHashJoin\n               :  :                 :- CometProject\n               :  :                 :  +- CometBroadcastHashJoin\n               :  :                 :     :- CometFilter\n               :  :                 :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :  :                 :     :        +- SubqueryBroadcast\n               :  :                 :     :           +- BroadcastExchange\n               :  :                 :     :              +- CometNativeColumnarToRow\n               :  :                 :     :                 +- CometProject\n               :  :                 :     :                    +- CometFilter\n               :  :                 :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :                 :     +- CometBroadcastExchange\n               :  :                 :        +- CometProject\n               :  :                 :           +- CometFilter\n               :  :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :  :                 +- CometBroadcastExchange\n               :  :                    +- CometProject\n               :  :                       +- CometFilter\n               :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               :  +- BroadcastExchange\n               :     +- CometNativeColumnarToRow\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometProject\n               :                       :  +- CometBroadcastHashJoin\n               :                       :     :- CometFilter\n               :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                       :     :        +- ReusedSubquery\n               :                       :     +- CometBroadcastExchange\n               :                       :        +- CometProject\n               :                       :           +- CometFilter\n               :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                       +- CometBroadcastExchange\n               :                          +- CometProject\n               :                             +- CometFilter\n               :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometHashAggregate\n                        +- 
CometExchange\n                           +- CometHashAggregate\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometProject\n                                    :  +- CometBroadcastHashJoin\n                                    :     :- CometFilter\n                                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :     :        +- ReusedSubquery\n                                    :     +- CometBroadcastExchange\n                                    :        +- CometProject\n                                    :           +- CometFilter\n                                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 55 out of 66 eligible operators (83%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q88.native_datafusion/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :  :  :     +- CometExchange\n:  :  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :  :           +- CometProject\n:  :  :  :  :  :  :              +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :- CometProject\n:  :  :  :  :  :  :                 :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :- CometProject\n:  :  :  :  :  :  :                 :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :     :- CometProject\n:  :  :  :  :  :  :                 :     :     :  +- CometFilter\n:  :  :  :  :  :  :                 :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  :  :  :                 :     :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :     :        +- CometProject\n:  :  :  :  :  :  :                 :     :           +- CometFilter\n:  :  :  :  :  :  :                 :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :  :                 :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :        +- CometProject\n:  :  :  :  :  :  :                 :           +- CometFilter\n:  :  :  :  :  :  :                 :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :  :  :  :  :                 +- CometBroadcastExchange\n:  :  :  :  :  :  :                    +- CometProject\n:  :  :  :  :  :  :                       +- CometFilter\n:  :  :  :  :  :  :                          +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :           +- CometExchange\n:  :  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :  :                 +- CometProject\n:  :  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :- CometProject\n:  :  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :- CometProject\n:  :  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :        +- CometProject\n:  :  :  :  :  :                       :           +- CometFilter\n:  :  :  :  :  :                       :              +- CometNativeScan parquet 
spark_catalog.default.time_dim\n:  :  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :  :                          +- CometProject\n:  :  :  :  :  :                             +- CometFilter\n:  :  :  :  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :           +- CometExchange\n:  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :                 +- CometProject\n:  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :                       :- CometProject\n:  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :- CometProject\n:  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :                       :        +- CometProject\n:  :  :  :  :                       :           +- CometFilter\n:  :  :  :  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :                          +- CometProject\n:  :  :  :  :                             +- CometFilter\n:  :  :  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometExchange\n:  :  :  :              +- CometHashAggregate\n:  :  :  :                 +- CometProject\n:  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :                       :- CometProject\n:  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :- CometProject\n:  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :     :- CometProject\n:  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :                       :     :        +- CometProject\n:  :  :  :                       :     :           +- CometFilter\n:  :  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :                       :        +- CometProject\n:  :  :  :                       :           +- CometFilter\n:  :  :  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :  :                       +- CometBroadcastExchange\n:  :  :  :     
                     +- CometProject\n:  :  :  :                             +- CometFilter\n:  :  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometExchange\n:  :  :              +- CometHashAggregate\n:  :  :                 +- CometProject\n:  :  :                    +- CometBroadcastHashJoin\n:  :  :                       :- CometProject\n:  :  :                       :  +- CometBroadcastHashJoin\n:  :  :                       :     :- CometProject\n:  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :                       :     :     :- CometProject\n:  :  :                       :     :     :  +- CometFilter\n:  :  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :  :                       :     :     +- CometBroadcastExchange\n:  :  :                       :     :        +- CometProject\n:  :  :                       :     :           +- CometFilter\n:  :  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :  :                       :     +- CometBroadcastExchange\n:  :  :                       :        +- CometProject\n:  :  :                       :           +- CometFilter\n:  :  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :  :                       +- CometBroadcastExchange\n:  :  :                          +- CometProject\n:  :  :                             +- CometFilter\n:  :  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometBroadcastHashJoin\n:  :                       :- CometProject\n:  :                       :  +- CometBroadcastHashJoin\n:  :                       :     :- CometProject\n:  :                       :     :  +- CometBroadcastHashJoin\n:  :                       :     :     :- CometProject\n:  :                       :     :     :  +- CometFilter\n:  :                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :                       :     :     +- CometBroadcastExchange\n:  :                       :     :        +- CometProject\n:  :                       :     :           +- CometFilter\n:  :                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:  :                       :     +- CometBroadcastExchange\n:  :                       :        +- CometProject\n:  :                       :           +- CometFilter\n:  :                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:  :                       +- CometBroadcastExchange\n:  :                          +- CometProject\n:  :                             +- CometFilter\n:  :                                +- CometNativeScan parquet spark_catalog.default.store\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometExchange\n:              +- CometHashAggregate\n:                 +- CometProject\n:                    +- CometBroadcastHashJoin\n:        
               :- CometProject\n:                       :  +- CometBroadcastHashJoin\n:                       :     :- CometProject\n:                       :     :  +- CometBroadcastHashJoin\n:                       :     :     :- CometProject\n:                       :     :     :  +- CometFilter\n:                       :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n:                       :     :     +- CometBroadcastExchange\n:                       :     :        +- CometProject\n:                       :     :           +- CometFilter\n:                       :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n:                       :     +- CometBroadcastExchange\n:                       :        +- CometProject\n:                       :           +- CometFilter\n:                       :              +- CometNativeScan parquet spark_catalog.default.time_dim\n:                       +- CometBroadcastExchange\n:                          +- CometProject\n:                             +- CometFilter\n:                                +- CometNativeScan parquet spark_catalog.default.store\n+- BroadcastExchange\n   +- CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometFilter\n                     :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometNativeScan parquet spark_catalog.default.time_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 192 out of 206 eligible operators (93%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q88.native_iceberg_compat/extended.txt",
    "content": "BroadcastNestedLoopJoin\n:- BroadcastNestedLoopJoin\n:  :- BroadcastNestedLoopJoin\n:  :  :- BroadcastNestedLoopJoin\n:  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :- BroadcastNestedLoopJoin\n:  :  :  :  :  :-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n:  :  :  :  :  :  :- CometNativeColumnarToRow\n:  :  :  :  :  :  :  +- CometHashAggregate\n:  :  :  :  :  :  :     +- CometExchange\n:  :  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :  :           +- CometProject\n:  :  :  :  :  :  :              +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :- CometProject\n:  :  :  :  :  :  :                 :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :- CometProject\n:  :  :  :  :  :  :                 :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :  :                 :     :     :- CometProject\n:  :  :  :  :  :  :                 :     :     :  +- CometFilter\n:  :  :  :  :  :  :                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  :  :  :                 :     :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :     :        +- CometProject\n:  :  :  :  :  :  :                 :     :           +- CometFilter\n:  :  :  :  :  :  :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :  :                 :     +- CometBroadcastExchange\n:  :  :  :  :  :  :                 :        +- CometProject\n:  :  :  :  :  :  :                 :           +- CometFilter\n:  :  :  :  :  :  :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :  :  :  :                 +- CometBroadcastExchange\n:  :  :  :  :  :  :                    +- CometProject\n:  :  :  :  :  :  :                       +- CometFilter\n:  :  :  :  :  :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :  :           +- CometExchange\n:  :  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :  :                 +- CometProject\n:  :  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :- CometProject\n:  :  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :- CometProject\n:  :  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :  :                       :        +- CometProject\n:  :  :  :  :  :                       :           +- CometFilter\n: 
 :  :  :  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :  :                          +- CometProject\n:  :  :  :  :  :                             +- CometFilter\n:  :  :  :  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  :  :  +- BroadcastExchange\n:  :  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :  :        +- CometHashAggregate\n:  :  :  :  :           +- CometExchange\n:  :  :  :  :              +- CometHashAggregate\n:  :  :  :  :                 +- CometProject\n:  :  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :  :                       :- CometProject\n:  :  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :- CometProject\n:  :  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :  :                       :     :     :- CometProject\n:  :  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :  :                       :     :        +- CometProject\n:  :  :  :  :                       :     :           +- CometFilter\n:  :  :  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :  :                       :        +- CometProject\n:  :  :  :  :                       :           +- CometFilter\n:  :  :  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :  :                       +- CometBroadcastExchange\n:  :  :  :  :                          +- CometProject\n:  :  :  :  :                             +- CometFilter\n:  :  :  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  :  +- BroadcastExchange\n:  :  :  :     +- CometNativeColumnarToRow\n:  :  :  :        +- CometHashAggregate\n:  :  :  :           +- CometExchange\n:  :  :  :              +- CometHashAggregate\n:  :  :  :                 +- CometProject\n:  :  :  :                    +- CometBroadcastHashJoin\n:  :  :  :                       :- CometProject\n:  :  :  :                       :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :- CometProject\n:  :  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :  :                       :     :     :- CometProject\n:  :  :  :                       :     :     :  +- CometFilter\n:  :  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :  :                       :     :     +- CometBroadcastExchange\n:  :  :  :                       :     :        +- CometProject\n:  :  :  :                       :     :           +- CometFilter\n:  :  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :  :                       :     +- CometBroadcastExchange\n:  :  :  :                       :        +- CometProject\n:  :  :  :          
             :           +- CometFilter\n:  :  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :  :                       +- CometBroadcastExchange\n:  :  :  :                          +- CometProject\n:  :  :  :                             +- CometFilter\n:  :  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  :  +- BroadcastExchange\n:  :  :     +- CometNativeColumnarToRow\n:  :  :        +- CometHashAggregate\n:  :  :           +- CometExchange\n:  :  :              +- CometHashAggregate\n:  :  :                 +- CometProject\n:  :  :                    +- CometBroadcastHashJoin\n:  :  :                       :- CometProject\n:  :  :                       :  +- CometBroadcastHashJoin\n:  :  :                       :     :- CometProject\n:  :  :                       :     :  +- CometBroadcastHashJoin\n:  :  :                       :     :     :- CometProject\n:  :  :                       :     :     :  +- CometFilter\n:  :  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :  :                       :     :     +- CometBroadcastExchange\n:  :  :                       :     :        +- CometProject\n:  :  :                       :     :           +- CometFilter\n:  :  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :  :                       :     +- CometBroadcastExchange\n:  :  :                       :        +- CometProject\n:  :  :                       :           +- CometFilter\n:  :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :  :                       +- CometBroadcastExchange\n:  :  :                          +- CometProject\n:  :  :                             +- CometFilter\n:  :  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  :  +- BroadcastExchange\n:  :     +- CometNativeColumnarToRow\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometBroadcastHashJoin\n:  :                       :- CometProject\n:  :                       :  +- CometBroadcastHashJoin\n:  :                       :     :- CometProject\n:  :                       :     :  +- CometBroadcastHashJoin\n:  :                       :     :     :- CometProject\n:  :                       :     :     :  +- CometFilter\n:  :                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :                       :     :     +- CometBroadcastExchange\n:  :                       :     :        +- CometProject\n:  :                       :     :           +- CometFilter\n:  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:  :                       :     +- CometBroadcastExchange\n:  :                       :        +- CometProject\n:  :                       :           +- CometFilter\n:  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:  :                       +- CometBroadcastExchange\n:  :                          +- CometProject\n:  :    
                         +- CometFilter\n:  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n:  +- BroadcastExchange\n:     +- CometNativeColumnarToRow\n:        +- CometHashAggregate\n:           +- CometExchange\n:              +- CometHashAggregate\n:                 +- CometProject\n:                    +- CometBroadcastHashJoin\n:                       :- CometProject\n:                       :  +- CometBroadcastHashJoin\n:                       :     :- CometProject\n:                       :     :  +- CometBroadcastHashJoin\n:                       :     :     :- CometProject\n:                       :     :     :  +- CometFilter\n:                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:                       :     :     +- CometBroadcastExchange\n:                       :     :        +- CometProject\n:                       :     :           +- CometFilter\n:                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n:                       :     +- CometBroadcastExchange\n:                       :        +- CometProject\n:                       :           +- CometFilter\n:                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n:                       +- CometBroadcastExchange\n:                          +- CometProject\n:                             +- CometFilter\n:                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n+- BroadcastExchange\n   +- CometNativeColumnarToRow\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometFilter\n                     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 192 out of 206 eligible operators (93%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q89.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- Project\n                                 +- BroadcastHashJoin\n                                    :- Project\n                                    :  +- BroadcastHashJoin\n                                    :     :- Project\n                                    :     :  +- BroadcastHashJoin\n                                    :     :     :- CometNativeColumnarToRow\n                                    :     :     :  +- CometProject\n                                    :     :     :     +- CometFilter\n                                    :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                    :     :     +- BroadcastExchange\n                                    :     :        +- Filter\n                                    :     :           +- ColumnarToRow\n                                    :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :     :                    +- SubqueryBroadcast\n                                    :     :                       +- BroadcastExchange\n                                    :     :                          +- CometNativeColumnarToRow\n                                    :     :                             +- CometProject\n                                    :     :                                +- CometFilter\n                                    :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :     +- BroadcastExchange\n                                    :        +- CometNativeColumnarToRow\n                                    :           +- CometProject\n                                    :              +- CometFilter\n                                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    +- BroadcastExchange\n                                       +- CometNativeColumnarToRow\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 14 out of 33 eligible operators (42%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q89.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- Filter\n      +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometProject\n                                 :  +- CometBroadcastHashJoin\n                                 :     :- CometProject\n                                 :     :  +- CometBroadcastHashJoin\n                                 :     :     :- CometProject\n                                 :     :     :  +- CometFilter\n                                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                 :     :     +- CometBroadcastExchange\n                                 :     :        +- CometFilter\n                                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                 :     :                 +- SubqueryBroadcast\n                                 :     :                    +- BroadcastExchange\n                                 :     :                       +- CometNativeColumnarToRow\n                                 :     :                          +- CometProject\n                                 :     :                             +- CometFilter\n                                 :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 :     +- CometBroadcastExchange\n                                 :        +- CometProject\n                                 :           +- CometFilter\n                                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 27 out of 33 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q9.native_datafusion/extended.txt",
    "content": " Project [COMET: ]\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometNativeScan parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  +- ReusedSubquery\n+- CometNativeColumnarToRow\n   +- CometFilter\n      +- CometNativeScan parquet spark_catalog.default.reason\n\nComet accelerated 37 out of 53 eligible operators (69%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q9.native_iceberg_compat/extended.txt",
    "content": " Project [COMET: ]\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  :- ReusedSubquery\n:  :- Subquery\n:  :  +- CometNativeColumnarToRow\n:  :     +- CometProject\n:  :        +- CometHashAggregate\n:  :           +- CometExchange\n:  :              +- CometHashAggregate\n:  :                 +- CometProject\n:  :                    +- CometFilter\n:  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n:  :- ReusedSubquery\n:  +- ReusedSubquery\n+- CometNativeColumnarToRow\n   +- CometFilter\n      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.reason\n\nComet accelerated 37 out of 53 eligible operators (69%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q90.native_datafusion/extended.txt",
    "content": "Project\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n   :- CometNativeColumnarToRow\n   :  +- CometHashAggregate\n   :     +- CometExchange\n   :        +- CometHashAggregate\n   :           +- CometProject\n   :              +- CometBroadcastHashJoin\n   :                 :- CometProject\n   :                 :  +- CometBroadcastHashJoin\n   :                 :     :- CometProject\n   :                 :     :  +- CometBroadcastHashJoin\n   :                 :     :     :- CometProject\n   :                 :     :     :  +- CometFilter\n   :                 :     :     :     +- CometNativeScan parquet spark_catalog.default.web_sales\n   :                 :     :     +- CometBroadcastExchange\n   :                 :     :        +- CometProject\n   :                 :     :           +- CometFilter\n   :                 :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n   :                 :     +- CometBroadcastExchange\n   :                 :        +- CometProject\n   :                 :           +- CometFilter\n   :                 :              +- CometNativeScan parquet spark_catalog.default.time_dim\n   :                 +- CometBroadcastExchange\n   :                    +- CometProject\n   :                       +- CometFilter\n   :                          +- CometNativeScan parquet spark_catalog.default.web_page\n   +- BroadcastExchange\n      +- CometNativeColumnarToRow\n         +- CometHashAggregate\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometFilter\n                        :     :     :     +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.time_dim\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.web_page\n\nComet accelerated 48 out of 51 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q90.native_iceberg_compat/extended.txt",
    "content": "Project\n+-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n   :- CometNativeColumnarToRow\n   :  +- CometHashAggregate\n   :     +- CometExchange\n   :        +- CometHashAggregate\n   :           +- CometProject\n   :              +- CometBroadcastHashJoin\n   :                 :- CometProject\n   :                 :  +- CometBroadcastHashJoin\n   :                 :     :- CometProject\n   :                 :     :  +- CometBroadcastHashJoin\n   :                 :     :     :- CometProject\n   :                 :     :     :  +- CometFilter\n   :                 :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n   :                 :     :     +- CometBroadcastExchange\n   :                 :     :        +- CometProject\n   :                 :     :           +- CometFilter\n   :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n   :                 :     +- CometBroadcastExchange\n   :                 :        +- CometProject\n   :                 :           +- CometFilter\n   :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n   :                 +- CometBroadcastExchange\n   :                    +- CometProject\n   :                       +- CometFilter\n   :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n   +- BroadcastExchange\n      +- CometNativeColumnarToRow\n         +- CometHashAggregate\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometProject\n                        :     :     :  +- CometFilter\n                        :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n\nComet accelerated 48 out of 51 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q91.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- CometNativeColumnarToRow\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- Project\n                     +- BroadcastHashJoin\n                        :- Project\n                        :  +- BroadcastHashJoin\n                        :     :- Project\n                        :     :  +- BroadcastHashJoin\n                        :     :     :- Project\n                        :     :     :  +- BroadcastHashJoin\n                        :     :     :     :- Project\n                        :     :     :     :  +- BroadcastHashJoin\n                        :     :     :     :     :- Project\n                        :     :     :     :     :  +- BroadcastHashJoin\n                        :     :     :     :     :     :- CometNativeColumnarToRow\n                        :     :     :     :     :     :  +- CometProject\n                        :     :     :     :     :     :     +- CometFilter\n                        :     :     :     :     :     :        +- CometNativeScan parquet spark_catalog.default.call_center\n                        :     :     :     :     :     +- BroadcastExchange\n                        :     :     :     :     :        +- Filter\n                        :     :     :     :     :           +- ColumnarToRow\n                        :     :     :     :     :              +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                        :     :     :     :     :                    +- SubqueryBroadcast\n                        :     :     :     :     :                       +- BroadcastExchange\n                        :     :     :     :     :                          +- CometNativeColumnarToRow\n                        :     :     :     :     :                             +- CometProject\n                        :     :     :     :     :                                +- CometFilter\n                        :     :     :     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     :     :     :     +- BroadcastExchange\n                        :     :     :     :        +- CometNativeColumnarToRow\n                        :     :     :     :           +- CometProject\n                        :     :     :     :              +- CometFilter\n                        :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     :     :     +- BroadcastExchange\n                        :     :     :        +- CometNativeColumnarToRow\n                        :     :     :           +- CometFilter\n                        :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                        :     :     +- BroadcastExchange\n                        :     :        +- CometNativeColumnarToRow\n                        :     :           +- CometProject\n                        :     :              +- CometFilter\n                        :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                        :     +- BroadcastExchange\n                        :        +- CometNativeColumnarToRow\n                        :           +- CometProject\n                        :              +- CometFilter\n            
            :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                        +- BroadcastExchange\n                           +- CometNativeColumnarToRow\n                              +- CometProject\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.household_demographics\n\nComet accelerated 23 out of 47 eligible operators (48%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q91.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :- CometProject\n                     :     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :     :- CometProject\n                     :     :     :     :     :     :  +- CometFilter\n                     :     :     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n                     :     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :     :        +- CometFilter\n                     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                     :     :     :     :     :                 +- SubqueryBroadcast\n                     :     :     :     :     :                    +- BroadcastExchange\n                     :     :     :     :     :                       +- CometNativeColumnarToRow\n                     :     :     :     :     :                          +- CometProject\n                     :     :     :     :     :                             +- CometFilter\n                     :     :     :     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :        +- CometProject\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n\nComet accelerated 45 out of 47 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q92.native_datafusion/extended.txt",
    "content": "HashAggregate\n+- CometNativeColumnarToRow\n   +- CometColumnarExchange\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :- Project\n               :  +- BroadcastHashJoin\n               :     :- Project\n               :     :  +- BroadcastHashJoin\n               :     :     :- Filter\n               :     :     :  +- ColumnarToRow\n               :     :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :     :     :           +- SubqueryBroadcast\n               :     :     :              +- BroadcastExchange\n               :     :     :                 +- CometNativeColumnarToRow\n               :     :     :                    +- CometProject\n               :     :     :                       +- CometFilter\n               :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :     :     +- BroadcastExchange\n               :     :        +- CometNativeColumnarToRow\n               :     :           +- CometProject\n               :     :              +- CometFilter\n               :     :                 +- CometNativeScan parquet spark_catalog.default.item\n               :     +- BroadcastExchange\n               :        +- Filter\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Project\n               :                          +- BroadcastHashJoin\n               :                             :- Filter\n               :                             :  +- ColumnarToRow\n               :                             :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                             :           +- ReusedSubquery\n               :                             +- BroadcastExchange\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometProject\n               :                                      +- CometFilter\n               :                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- CometNativeColumnarToRow\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 38 eligible operators (36%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q92.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometFilter\n               :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :     :     :        +- SubqueryBroadcast\n               :     :     :           +- BroadcastExchange\n               :     :     :              +- CometNativeColumnarToRow\n               :     :     :                 +- CometProject\n               :     :     :                    +- CometFilter\n               :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :     +- CometBroadcastExchange\n               :        +- CometFilter\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometFilter\n               :                          :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :        +- ReusedSubquery\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 35 out of 38 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q93.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometExchange\n                  :     :     +- CometProject\n                  :     :        +- CometNativeScan parquet spark_catalog.default.store_sales\n                  :     +- CometSort\n                  :        +- CometExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.reason\n\nComet accelerated 21 out of 21 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q93.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometSortMergeJoin\n                  :     :- CometSort\n                  :     :  +- CometExchange\n                  :     :     +- CometProject\n                  :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     +- CometSort\n                  :        +- CometExchange\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.reason\n\nComet accelerated 21 out of 21 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q94.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometNativeScan parquet spark_catalog.default.web_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q94.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometProject\n                        :     :     :  :  +- CometSortMergeJoin\n                        :     :     :  :     :- CometSort\n                        :     :     :  :     :  +- CometExchange\n                        :     :     :  :     :     +- CometProject\n                        :     :     :  :     :        +- CometFilter\n                        :     :     :  :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  :     +- CometSort\n                        :     :     :  :        +- CometExchange\n                        :     :     :  :           +- CometProject\n                        :     :     :  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometSort\n                        :     :     :     +- CometExchange\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 37 out of 39 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q95.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometSortMergeJoin\n                        :     :     :  :  :- CometSort\n                        :     :     :  :  :  +- CometExchange\n                        :     :     :  :  :     +- CometProject\n                        :     :     :  :  :        +- CometFilter\n                        :     :     :  :  :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  :  +- CometProject\n                        :     :     :  :     +- CometSortMergeJoin\n                        :     :     :  :        :- CometSort\n                        :     :     :  :        :  +- CometExchange\n                        :     :     :  :        :     +- CometProject\n                        :     :     :  :        :        +- CometFilter\n                        :     :     :  :        :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  :        +- CometSort\n                        :     :     :  :           +- CometExchange\n                        :     :     :  :              +- CometProject\n                        :     :     :  :                 +- CometFilter\n                        :     :     :  :                    +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometProject\n                        :     :     :     +- CometSortMergeJoin\n                        :     :     :        :- CometSort\n                        :     :     :        :  +- CometExchange\n                        :     :     :        :     +- CometProject\n                        :     :     :        :        +- CometFilter\n                        :     :     :        :           +- CometNativeScan parquet spark_catalog.default.web_returns\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometSortMergeJoin\n                        :     :     :              :- CometSort\n                        :     :     :              :  +- CometExchange\n                        :     :     :              :     +- CometProject\n                        :     :     :              :        +- CometFilter\n                        :     :     :              :           +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     :              +- CometSort\n                        :     :     :                 +- CometExchange\n                        :     :     :                    +- CometProject\n                        :     :     :                       +- CometFilter\n                        :     :     :                          +- CometNativeScan parquet spark_catalog.default.web_sales\n                        :     :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                
        :     :           +- CometFilter\n                        :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 59 out of 61 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q95.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometColumnarExchange\n      +- HashAggregate\n         +-  HashAggregate [COMET: Unsupported aggregation mode PartialMerge]\n            +- CometNativeColumnarToRow\n               +- CometHashAggregate\n                  +- CometProject\n                     +- CometBroadcastHashJoin\n                        :- CometProject\n                        :  +- CometBroadcastHashJoin\n                        :     :- CometProject\n                        :     :  +- CometBroadcastHashJoin\n                        :     :     :- CometSortMergeJoin\n                        :     :     :  :- CometSortMergeJoin\n                        :     :     :  :  :- CometSort\n                        :     :     :  :  :  +- CometExchange\n                        :     :     :  :  :     +- CometProject\n                        :     :     :  :  :        +- CometFilter\n                        :     :     :  :  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  :  +- CometProject\n                        :     :     :  :     +- CometSortMergeJoin\n                        :     :     :  :        :- CometSort\n                        :     :     :  :        :  +- CometExchange\n                        :     :     :  :        :     +- CometProject\n                        :     :     :  :        :        +- CometFilter\n                        :     :     :  :        :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  :        +- CometSort\n                        :     :     :  :           +- CometExchange\n                        :     :     :  :              +- CometProject\n                        :     :     :  :                 +- CometFilter\n                        :     :     :  :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :  +- CometProject\n                        :     :     :     +- CometSortMergeJoin\n                        :     :     :        :- CometSort\n                        :     :     :        :  +- CometExchange\n                        :     :     :        :     +- CometProject\n                        :     :     :        :        +- CometFilter\n                        :     :     :        :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                        :     :     :        +- CometProject\n                        :     :     :           +- CometSortMergeJoin\n                        :     :     :              :- CometSort\n                        :     :     :              :  +- CometExchange\n                        :     :     :              :     +- CometProject\n                        :     :     :              :        +- CometFilter\n                        :     :     :              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :     :     :              +- CometSort\n                        :     :     :                 +- CometExchange\n                        :     :     :                    +- CometProject\n                        :     :     :                       +- CometFilter\n                        :     :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                        :  
   :     +- CometBroadcastExchange\n                        :     :        +- CometProject\n                        :     :           +- CometFilter\n                        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                        :     +- CometBroadcastExchange\n                        :        +- CometProject\n                        :           +- CometFilter\n                        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                        +- CometBroadcastExchange\n                           +- CometProject\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 59 out of 61 eligible operators (96%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q96.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometFilter\n               :     :     :     +- CometNativeScan parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometNativeScan parquet spark_catalog.default.household_demographics\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometNativeScan parquet spark_catalog.default.time_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 24 out of 24 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q96.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometBroadcastHashJoin\n               :- CometProject\n               :  +- CometBroadcastHashJoin\n               :     :- CometProject\n               :     :  +- CometBroadcastHashJoin\n               :     :     :- CometProject\n               :     :     :  +- CometFilter\n               :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :     :     +- CometBroadcastExchange\n               :     :        +- CometProject\n               :     :           +- CometFilter\n               :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n               :     +- CometBroadcastExchange\n               :        +- CometProject\n               :           +- CometFilter\n               :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.time_dim\n               +- CometBroadcastExchange\n                  +- CometProject\n                     +- CometFilter\n                        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 24 out of 24 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q97.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometSortMergeJoin\n               :- CometSort\n               :  +- CometHashAggregate\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Project\n               :              +- BroadcastHashJoin\n               :                 :- ColumnarToRow\n               :                 :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                 :        +- SubqueryBroadcast\n               :                 :           +- BroadcastExchange\n               :                 :              +- CometNativeColumnarToRow\n               :                 :                 +- CometProject\n               :                 :                    +- CometFilter\n               :                 :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                 +- BroadcastExchange\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometProject\n               :                          +- CometFilter\n               :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- CometSort\n                  +- CometHashAggregate\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- ColumnarToRow\n                                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :        +- ReusedSubquery\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 20 out of 33 eligible operators (60%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q97.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometHashAggregate\n   +- CometExchange\n      +- CometHashAggregate\n         +- CometProject\n            +- CometSortMergeJoin\n               :- CometSort\n               :  +- CometHashAggregate\n               :     +- CometExchange\n               :        +- CometHashAggregate\n               :           +- CometProject\n               :              +- CometBroadcastHashJoin\n               :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                 :     +- SubqueryBroadcast\n               :                 :        +- BroadcastExchange\n               :                 :           +- CometNativeColumnarToRow\n               :                 :              +- CometProject\n               :                 :                 +- CometFilter\n               :                 :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                 +- CometBroadcastExchange\n               :                    +- CometProject\n               :                       +- CometFilter\n               :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometSort\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                 :     +- ReusedSubquery\n                                 +- CometBroadcastExchange\n                                    +- CometProject\n                                       +- CometFilter\n                                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 30 out of 33 eligible operators (90%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q98.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometProject\n   +- CometSort\n      +- CometColumnarExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Filter\n                                          :     :  +- ColumnarToRow\n                                          :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :           +- SubqueryBroadcast\n                                          :     :              +- BroadcastExchange\n                                          :     :                 +- CometNativeColumnarToRow\n                                          :     :                    +- CometProject\n                                          :     :                       +- CometFilter\n                                          :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometProject\n                                          :              +- CometFilter\n                                          :                 +- CometNativeScan parquet spark_catalog.default.item\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 15 out of 29 eligible operators (51%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q98.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometProject\n   +- CometSort\n      +- CometColumnarExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometFilter\n                                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :        +- SubqueryBroadcast\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometProject\n                                       :     :                    +- CometFilter\n                                       :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometProject\n                                       :           +- CometFilter\n                                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       +- CometBroadcastExchange\n                                          +- CometProject\n                                             +- CometFilter\n                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 25 out of 29 eligible operators (86%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q99.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometNativeScan parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometNativeScan parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometNativeScan parquet spark_catalog.default.call_center\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v1_4-spark4_0/q99.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometProject\n                  :     :     :  +- CometBroadcastHashJoin\n                  :     :     :     :- CometFilter\n                  :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :     :     +- CometBroadcastExchange\n                  :     :     :        +- CometFilter\n                  :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometProject\n                  :     :           +- CometFilter\n                  :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.ship_mode\n                  :     +- CometBroadcastExchange\n                  :        +- CometFilter\n                  :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 28 eligible operators (100%). Final plan contains 1 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q10a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- BroadcastHashJoin\n                  :     :     :  :- CometNativeColumnarToRow\n                  :     :     :  :  +- CometFilter\n                  :     :     :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- Project\n                  :     :     :        +- BroadcastHashJoin\n                  :     :     :           :- ColumnarToRow\n                  :     :     :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           :        +- SubqueryBroadcast\n                  :     :     :           :           +- BroadcastExchange\n                  :     :     :           :              +- CometNativeColumnarToRow\n                  :     :     :           :                 +- CometProject\n                  :     :     :           :                    +- CometFilter\n                  :     :     :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- Union\n                  :     :           :- Project\n                  :     :           :  +- BroadcastHashJoin\n                  :     :           :     :- ColumnarToRow\n                  :     :           :     :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :           :     :        +- ReusedSubquery\n                  :     :           :     +- BroadcastExchange\n                  :     :           :        +- CometNativeColumnarToRow\n                  :     :           :           +- CometProject\n                  :     :           :              +- CometFilter\n                  :     :           :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             
+- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 52 eligible operators (40%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q10a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometBroadcastHashJoin\n                  :     :     :  :- CometFilter\n                  :     :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :     :  +- CometBroadcastExchange\n                  :     :     :     +- CometProject\n                  :     :     :        +- CometBroadcastHashJoin\n                  :     :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :           :     +- SubqueryBroadcast\n                  :     :     :           :        +- BroadcastExchange\n                  :     :     :           :           +- CometNativeColumnarToRow\n                  :     :     :           :              +- CometProject\n                  :     :     :           :                 +- CometFilter\n                  :     :     :           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :           +- CometBroadcastExchange\n                  :     :     :              +- CometProject\n                  :     :     :                 +- CometFilter\n                  :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometUnion\n                  :     :           :- CometProject\n                  :     :           :  +- CometBroadcastHashJoin\n                  :     :           :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :           :     :     +- ReusedSubquery\n                  :     :           :     +- CometBroadcastExchange\n                  :     :           :        +- CometProject\n                  :     :           :           +- CometFilter\n                  :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :           +- CometProject\n                  :     :              +- CometBroadcastHashJoin\n                  :     :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                 :     +- ReusedSubquery\n                  :     :                 +- CometBroadcastExchange\n                  :     :                    +- CometProject\n                  :     :                       +- CometFilter\n                  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 48 out of 52 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q11.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- BroadcastHashJoin\n      :     :  :- Filter\n      :     :  :  +- HashAggregate\n      :     :  :     +- CometNativeColumnarToRow\n      :     :  :        +- CometColumnarExchange\n      :     :  :           +- HashAggregate\n      :     :  :              +- Project\n      :     :  :                 +- BroadcastHashJoin\n      :     :  :                    :- Project\n      :     :  :                    :  +- BroadcastHashJoin\n      :     :  :                    :     :- CometNativeColumnarToRow\n      :     :  :                    :     :  +- CometProject\n      :     :  :                    :     :     +- CometFilter\n      :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :  :                    :     +- BroadcastExchange\n      :     :  :                    :        +- Filter\n      :     :  :                    :           +- ColumnarToRow\n      :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :  :                    :                    +- SubqueryBroadcast\n      :     :  :                    :                       +- BroadcastExchange\n      :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :  :                    :                             +- CometFilter\n      :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  :                    +- BroadcastExchange\n      :     :  :                       +- CometNativeColumnarToRow\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  +- BroadcastExchange\n      :     :     +- HashAggregate\n      :     :        +- CometNativeColumnarToRow\n      :     :           +- CometColumnarExchange\n      :     :              +- HashAggregate\n      :     :                 +- Project\n      :     :                    +- BroadcastHashJoin\n      :     :                       :- Project\n      :     :                       :  +- BroadcastHashJoin\n      :     :                       :     :- CometNativeColumnarToRow\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                       :     +- BroadcastExchange\n      :     :                       :        +- Filter\n      :     :                       :           +- ColumnarToRow\n      :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                       :                    +- SubqueryBroadcast\n      :     :                       :                       +- BroadcastExchange\n      :     :                       :                          +- CometNativeColumnarToRow\n      :     :                       :                             +- CometFilter\n      :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n 
     :     :                       +- BroadcastExchange\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometFilter\n      :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 85 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q11.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometBroadcastHashJoin\n         :     :  :- CometFilter\n         :     :  :  +- CometHashAggregate\n         :     :  :     +- CometExchange\n         :     :  :        +- CometHashAggregate\n         :     :  :           +- CometProject\n         :     :  :              +- CometBroadcastHashJoin\n         :     :  :                 :- CometProject\n         :     :  :                 :  +- CometBroadcastHashJoin\n         :     :  :                 :     :- CometProject\n         :     :  :                 :     :  +- CometFilter\n         :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :  :                 :     +- CometBroadcastExchange\n         :     :  :                 :        +- CometFilter\n         :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :  :                 :                 +- SubqueryBroadcast\n         :     :  :                 :                    +- BroadcastExchange\n         :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :  :                 :                          +- CometFilter\n         :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  :                 +- CometBroadcastExchange\n         :     :  :                    +- CometFilter\n         :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  +- CometBroadcastExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometExchange\n         :     :           +- CometHashAggregate\n         :     :              +- CometProject\n         :     :                 +- CometBroadcastHashJoin\n         :     :                    :- CometProject\n         :     :                    :  +- CometBroadcastHashJoin\n         :     :                    :     :- CometProject\n         :     :                    :     :  +- CometFilter\n         :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                    :     +- CometBroadcastExchange\n         :     :                    :        +- CometFilter\n         :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                    :                 +- SubqueryBroadcast\n         :     :                    :                    +- BroadcastExchange\n         :     :                    :                       +- CometNativeColumnarToRow\n         :     :                    :                          +- CometFilter\n         :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                    +- CometBroadcastExchange\n         :     :                       +- CometFilter\n         :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- 
CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 79 out of 85 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q12.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q12.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q14.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- BroadcastHashJoin\n   :- Filter\n   :  :  +- Subquery\n   :  :     +- HashAggregate\n   :  :        +- CometNativeColumnarToRow\n   :  :           +- CometColumnarExchange\n   :  :              +- HashAggregate\n   :  :                 +- Union\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    +- Project\n   :  :                       +- BroadcastHashJoin\n   :  :                          :- ColumnarToRow\n   :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                          :        +- ReusedSubquery\n   :  :                          +- BroadcastExchange\n   :  :                             +- CometNativeColumnarToRow\n   :  :                                +- CometProject\n   :  :                                   +- CometFilter\n   :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  +- HashAggregate\n   :     +- CometNativeColumnarToRow\n   :        +- CometColumnarExchange\n   :           +- HashAggregate\n   :              +- Project\n   :                 +- BroadcastHashJoin\n   :                    :- Project\n   :                    :  +- BroadcastHashJoin\n   :                    :     :- BroadcastHashJoin\n   :                    :     :  :- Filter\n   :                    :     :  :  +- ColumnarToRow\n   :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :  :           +- SubqueryBroadcast\n   :                    :     :  :              +- BroadcastExchange\n   :                    :     :  :                 +- CometNativeColumnarToRow\n   :                    :     :  :                    +- CometProject\n   :                    :     :  :                       +- CometFilter\n   :                    :     :  :                          :  +- Subquery\n   :                    :     :  :                          :     +- CometNativeColumnarToRow\n   :        
            :     :  :                          :        +- CometProject\n   :                    :     :  :                          :           +- CometFilter\n   :                    :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :  +- BroadcastExchange\n   :                    :     :     +- Project\n   :                    :     :        +- BroadcastHashJoin\n   :                    :     :           :- CometNativeColumnarToRow\n   :                    :     :           :  +- CometFilter\n   :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :           +- BroadcastExchange\n   :                    :     :              +- BroadcastHashJoin\n   :                    :     :                 :- CometNativeColumnarToRow\n   :                    :     :                 :  +- CometHashAggregate\n   :                    :     :                 :     +- CometColumnarExchange\n   :                    :     :                 :        +- HashAggregate\n   :                    :     :                 :           +- Project\n   :                    :     :                 :              +- BroadcastHashJoin\n   :                    :     :                 :                 :- Project\n   :                    :     :                 :                 :  +- BroadcastHashJoin\n   :                    :     :                 :                 :     :- Filter\n   :                    :     :                 :                 :     :  +- ColumnarToRow\n   :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :     :           +- SubqueryBroadcast\n   :                    :     :                 :                 :     :              +- BroadcastExchange\n   :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n   :                    :     :                 :                 :     :                    +- CometProject\n   :                    :     :                 :                 :     :                       +- CometFilter\n   :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 :     +- BroadcastExchange\n   :                    :     :                 :                 :        +- BroadcastHashJoin\n   :                    :     :                 :                 :           :- CometNativeColumnarToRow\n   :                    :     :                 :                 :           :  +- CometFilter\n   :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :           +- BroadcastExchange\n   :                    :     :                 :                 :              +- Project\n   :                    :     :                 :                 :                 +- BroadcastHashJoin\n   :                    :     :                 
:                 :                    :- Project\n   :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n   :                    :     :                 :                 :                    :     :- Filter\n   :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n   :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n   :                    :     :                 :                 :                    :     +- BroadcastExchange\n   :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                    :           +- CometFilter\n   :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :                    +- BroadcastExchange\n   :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                          +- CometProject\n   :                    :     :                 :                 :                             +- CometFilter\n   :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 +- BroadcastExchange\n   :                    :     :                 :                    +- CometNativeColumnarToRow\n   :                    :     :                 :                       +- CometProject\n   :                    :     :                 :                          +- CometFilter\n   :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 +- BroadcastExchange\n   :                    :     :                    +- Project\n   :                    :     :                       +- BroadcastHashJoin\n   :                    :     :                          :- Project\n   :                    :     :                          :  +- BroadcastHashJoin\n   :                    :     :                          :     :- Filter\n   :                    :     :                          :     :  +- ColumnarToRow\n   :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                          :     :           +- ReusedSubquery\n   :                    :     :                          :     +- BroadcastExchange\n   :                    :     :                          :        +- CometNativeColumnarToRow\n   :                    :     :                          :           +- CometFilter\n   :                    :     :                          :              +- CometNativeScan parquet 
spark_catalog.default.item\n   :                    :     :                          +- BroadcastExchange\n   :                    :     :                             +- CometNativeColumnarToRow\n   :                    :     :                                +- CometProject\n   :                    :     :                                   +- CometFilter\n   :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     +- BroadcastExchange\n   :                    :        +- BroadcastHashJoin\n   :                    :           :- CometNativeColumnarToRow\n   :                    :           :  +- CometFilter\n   :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :           +- BroadcastExchange\n   :                    :              +- Project\n   :                    :                 +- BroadcastHashJoin\n   :                    :                    :- CometNativeColumnarToRow\n   :                    :                    :  +- CometFilter\n   :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                    +- BroadcastExchange\n   :                    :                       +- BroadcastHashJoin\n   :                    :                          :- CometNativeColumnarToRow\n   :                    :                          :  +- CometHashAggregate\n   :                    :                          :     +- CometColumnarExchange\n   :                    :                          :        +- HashAggregate\n   :                    :                          :           +- Project\n   :                    :                          :              +- BroadcastHashJoin\n   :                    :                          :                 :- Project\n   :                    :                          :                 :  +- BroadcastHashJoin\n   :                    :                          :                 :     :- Filter\n   :                    :                          :                 :     :  +- ColumnarToRow\n   :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :     :           +- SubqueryBroadcast\n   :                    :                          :                 :     :              +- BroadcastExchange\n   :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n   :                    :                          :                 :     :                    +- CometProject\n   :                    :                          :                 :     :                       +- CometFilter\n   :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 :     +- BroadcastExchange\n   :                    :                          :                 :        +- BroadcastHashJoin\n   :                    :                          :                 :           :- CometNativeColumnarToRow\n   :                    :                          :                 :           :  +- CometFilter\n   : 
                   :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :           +- BroadcastExchange\n   :                    :                          :                 :              +- Project\n   :                    :                          :                 :                 +- BroadcastHashJoin\n   :                    :                          :                 :                    :- Project\n   :                    :                          :                 :                    :  +- BroadcastHashJoin\n   :                    :                          :                 :                    :     :- Filter\n   :                    :                          :                 :                    :     :  +- ColumnarToRow\n   :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :                    :     :           +- ReusedSubquery\n   :                    :                          :                 :                    :     +- BroadcastExchange\n   :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n   :                    :                          :                 :                    :           +- CometFilter\n   :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :                    +- BroadcastExchange\n   :                    :                          :                 :                       +- CometNativeColumnarToRow\n   :                    :                          :                 :                          +- CometProject\n   :                    :                          :                 :                             +- CometFilter\n   :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 +- BroadcastExchange\n   :                    :                          :                    +- CometNativeColumnarToRow\n   :                    :                          :                       +- CometProject\n   :                    :                          :                          +- CometFilter\n   :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          +- BroadcastExchange\n   :                    :                             +- Project\n   :                    :                                +- BroadcastHashJoin\n   :                    :                                   :- Project\n   :                    :                                   :  +- BroadcastHashJoin\n   :                    :                                   :     :- Filter\n   :                    :                                   :     :  +- ColumnarToRow\n   :                    :                                   :     :     +-  Scan parquet 
spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                                   :     :           +- ReusedSubquery\n   :                    :                                   :     +- BroadcastExchange\n   :                    :                                   :        +- CometNativeColumnarToRow\n   :                    :                                   :           +- CometFilter\n   :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                                   +- BroadcastExchange\n   :                    :                                      +- CometNativeColumnarToRow\n   :                    :                                         +- CometProject\n   :                    :                                            +- CometFilter\n   :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    +- BroadcastExchange\n   :                       +- CometNativeColumnarToRow\n   :                          +- CometProject\n   :                             +- CometFilter\n   :                                :  +- Subquery\n   :                                :     +- CometNativeColumnarToRow\n   :                                :        +- CometProject\n   :                                :           +- CometFilter\n   :                                :              +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   +- BroadcastExchange\n      +- Filter\n         :  +- ReusedSubquery\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- BroadcastHashJoin\n                           :     :  :- Filter\n                           :     :  :  +- ColumnarToRow\n                           :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :  :           +- SubqueryBroadcast\n                           :     :  :              +- BroadcastExchange\n                           :     :  :                 +- CometNativeColumnarToRow\n                           :     :  :                    +- CometProject\n                           :     :  :                       +- CometFilter\n                           :     :  :                          :  +- Subquery\n                           :     :  :                          :     +- CometNativeColumnarToRow\n                           :     :  :                          :        +- CometProject\n                           :     :  :                          :           +- CometFilter\n                           :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  +- BroadcastExchange\n                           :     :     +- Project\n  
                         :     :        +- BroadcastHashJoin\n                           :     :           :- CometNativeColumnarToRow\n                           :     :           :  +- CometFilter\n                           :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :           +- BroadcastExchange\n                           :     :              +- BroadcastHashJoin\n                           :     :                 :- CometNativeColumnarToRow\n                           :     :                 :  +- CometHashAggregate\n                           :     :                 :     +- CometColumnarExchange\n                           :     :                 :        +- HashAggregate\n                           :     :                 :           +- Project\n                           :     :                 :              +- BroadcastHashJoin\n                           :     :                 :                 :- Project\n                           :     :                 :                 :  +- BroadcastHashJoin\n                           :     :                 :                 :     :- Filter\n                           :     :                 :                 :     :  +- ColumnarToRow\n                           :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :     :           +- SubqueryBroadcast\n                           :     :                 :                 :     :              +- BroadcastExchange\n                           :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                           :     :                 :                 :     :                    +- CometProject\n                           :     :                 :                 :     :                       +- CometFilter\n                           :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 :     +- BroadcastExchange\n                           :     :                 :                 :        +- BroadcastHashJoin\n                           :     :                 :                 :           :- CometNativeColumnarToRow\n                           :     :                 :                 :           :  +- CometFilter\n                           :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :           +- BroadcastExchange\n                           :     :                 :                 :              +- Project\n                           :     :                 :                 :                 +- BroadcastHashJoin\n                           :     :                 :                 :                    :- Project\n                           :     :                 :                 :                    :  +- BroadcastHashJoin\n                           :     :                 :                 :                    :     :- Filter\n                           :     :                 :                 :                    :     :  +- ColumnarToRow\n                           :     : 
                :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :                    :     :           +- ReusedSubquery\n                           :     :                 :                 :                    :     +- BroadcastExchange\n                           :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                           :     :                 :                 :                    :           +- CometFilter\n                           :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :                    +- BroadcastExchange\n                           :     :                 :                 :                       +- CometNativeColumnarToRow\n                           :     :                 :                 :                          +- CometProject\n                           :     :                 :                 :                             +- CometFilter\n                           :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 +- BroadcastExchange\n                           :     :                 :                    +- CometNativeColumnarToRow\n                           :     :                 :                       +- CometProject\n                           :     :                 :                          +- CometFilter\n                           :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 +- BroadcastExchange\n                           :     :                    +- Project\n                           :     :                       +- BroadcastHashJoin\n                           :     :                          :- Project\n                           :     :                          :  +- BroadcastHashJoin\n                           :     :                          :     :- Filter\n                           :     :                          :     :  +- ColumnarToRow\n                           :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                          :     :           +- ReusedSubquery\n                           :     :                          :     +- BroadcastExchange\n                           :     :                          :        +- CometNativeColumnarToRow\n                           :     :                          :           +- CometFilter\n                           :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                          +- BroadcastExchange\n                           :     :                             +- CometNativeColumnarToRow\n                           :     :                                +- CometProject\n                           :     :                                   
+- CometFilter\n                           :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     +- BroadcastExchange\n                           :        +- BroadcastHashJoin\n                           :           :- CometNativeColumnarToRow\n                           :           :  +- CometFilter\n                           :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :           +- BroadcastExchange\n                           :              +- Project\n                           :                 +- BroadcastHashJoin\n                           :                    :- CometNativeColumnarToRow\n                           :                    :  +- CometFilter\n                           :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                    +- BroadcastExchange\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometHashAggregate\n                           :                          :     +- CometColumnarExchange\n                           :                          :        +- HashAggregate\n                           :                          :           +- Project\n                           :                          :              +- BroadcastHashJoin\n                           :                          :                 :- Project\n                           :                          :                 :  +- BroadcastHashJoin\n                           :                          :                 :     :- Filter\n                           :                          :                 :     :  +- ColumnarToRow\n                           :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :     :           +- SubqueryBroadcast\n                           :                          :                 :     :              +- BroadcastExchange\n                           :                          :                 :     :                 +- CometNativeColumnarToRow\n                           :                          :                 :     :                    +- CometProject\n                           :                          :                 :     :                       +- CometFilter\n                           :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 :     +- BroadcastExchange\n                           :                          :                 :        +- BroadcastHashJoin\n                           :                          :                 :           :- CometNativeColumnarToRow\n                           :                          :                 :           :  +- CometFilter\n                           :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :           +- 
BroadcastExchange\n                           :                          :                 :              +- Project\n                           :                          :                 :                 +- BroadcastHashJoin\n                           :                          :                 :                    :- Project\n                           :                          :                 :                    :  +- BroadcastHashJoin\n                           :                          :                 :                    :     :- Filter\n                           :                          :                 :                    :     :  +- ColumnarToRow\n                           :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :                    :     :           +- ReusedSubquery\n                           :                          :                 :                    :     +- BroadcastExchange\n                           :                          :                 :                    :        +- CometNativeColumnarToRow\n                           :                          :                 :                    :           +- CometFilter\n                           :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :                    +- BroadcastExchange\n                           :                          :                 :                       +- CometNativeColumnarToRow\n                           :                          :                 :                          +- CometProject\n                           :                          :                 :                             +- CometFilter\n                           :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 +- BroadcastExchange\n                           :                          :                    +- CometNativeColumnarToRow\n                           :                          :                       +- CometProject\n                           :                          :                          +- CometFilter\n                           :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- BroadcastHashJoin\n                           :                                   :- Project\n                           :                                   :  +- BroadcastHashJoin\n                           :                                   :     :- Filter\n                           :                                   :     :  +- ColumnarToRow\n                           :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :           
                        :     :           +- ReusedSubquery\n                           :                                   :     +- BroadcastExchange\n                           :                                   :        +- CometNativeColumnarToRow\n                           :                                   :           +- CometFilter\n                           :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                                   +- BroadcastExchange\n                           :                                      +- CometNativeColumnarToRow\n                           :                                         +- CometProject\n                           :                                            +- CometFilter\n                           :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometProject\n                                    +- CometFilter\n                                       :  +- Subquery\n                                       :     +- CometNativeColumnarToRow\n                                       :        +- CometProject\n                                       :           +- CometFilter\n                                       :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 128 out of 333 eligible operators (38%). Final plan contains 69 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q14.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometBroadcastHashJoin\n      :- CometFilter\n      :  :  +- Subquery\n      :  :     +- CometNativeColumnarToRow\n      :  :        +- CometHashAggregate\n      :  :           +- CometExchange\n      :  :              +- CometHashAggregate\n      :  :                 +- CometUnion\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    +- CometProject\n      :  :                       +- CometBroadcastHashJoin\n      :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :  :                          :     +- ReusedSubquery\n      :  :                          +- CometBroadcastExchange\n      :  :                             +- CometProject\n      :  :                                +- CometFilter\n      :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  +- CometHashAggregate\n      :     +- CometExchange\n      :        +- CometHashAggregate\n      :           +- CometProject\n      :              +- CometBroadcastHashJoin\n      :                 :- CometProject\n      :                 :  +- CometBroadcastHashJoin\n      :                 :     :- CometBroadcastHashJoin\n      :                 :     :  :- CometFilter\n      :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :  :        +- SubqueryBroadcast\n      :                 :     :  :           +- BroadcastExchange\n      :                 :     :  :              +- CometNativeColumnarToRow\n      :                 :     :  :                 +- CometProject\n      :                 :     :  :                    +- CometFilter\n      :                 :     :  :                       :  +- Subquery\n      :                 :     :  :                       :     +- CometNativeColumnarToRow\n      :                 :     :  :                       :        +- CometProject\n      :                 :     :  :                       :           +- CometFilter\n      :                 :     :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n      :                 :     :  +- CometBroadcastExchange\n      :                 :     :     +- CometProject\n      :                 :     :        +- CometBroadcastHashJoin\n      :                 :     :           :- CometFilter\n      :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :           +- CometBroadcastExchange\n      :                 :     :              +- CometBroadcastHashJoin\n      :                 :     :                 :- CometHashAggregate\n      :                 :     :                 :  +- CometExchange\n      :                 :     :                 :     +- CometHashAggregate\n      :                 :     :                 :        +- CometProject\n      :                 :     :                 :           +- CometBroadcastHashJoin\n      :                 :     :                 :              :- CometProject\n      :                 :     :                 :              :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :     :- CometFilter\n      :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :                 :              :     :        +- SubqueryBroadcast\n      :                 :     :                 :              :     :           +- BroadcastExchange\n      :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n      :                 :     :                 :              :     :                 +- CometProject\n      :                 :     :                 :              :     :                    +- CometFilter\n      :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              :     +- CometBroadcastExchange\n      :                 :     :                 :              :        +- CometBroadcastHashJoin\n      :                 :     :                 :              :           :- CometFilter\n      :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :           +- CometBroadcastExchange\n      :                 :     :                 :              :              +- CometProject\n      :                 :     :                 :              :                 +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :- CometProject\n      :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :     :- CometFilter\n      :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :     :                 :              :                    :     :        +- ReusedSubquery\n      :                 :     :                 :              :                    :     +- CometBroadcastExchange\n      :                 :     :                 :              :      
              :        +- CometFilter\n      :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :                    +- CometBroadcastExchange\n      :                 :     :                 :              :                       +- CometProject\n      :                 :     :                 :              :                          +- CometFilter\n      :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              +- CometBroadcastExchange\n      :                 :     :                 :                 +- CometProject\n      :                 :     :                 :                    +- CometFilter\n      :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 +- CometBroadcastExchange\n      :                 :     :                    +- CometProject\n      :                 :     :                       +- CometBroadcastHashJoin\n      :                 :     :                          :- CometProject\n      :                 :     :                          :  +- CometBroadcastHashJoin\n      :                 :     :                          :     :- CometFilter\n      :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :     :                          :     :        +- ReusedSubquery\n      :                 :     :                          :     +- CometBroadcastExchange\n      :                 :     :                          :        +- CometFilter\n      :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                          +- CometBroadcastExchange\n      :                 :     :                             +- CometProject\n      :                 :     :                                +- CometFilter\n      :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     +- CometBroadcastExchange\n      :                 :        +- CometBroadcastHashJoin\n      :                 :           :- CometFilter\n      :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :           +- CometBroadcastExchange\n      :                 :              +- CometProject\n      :                 :                 +- CometBroadcastHashJoin\n      :                 :                    :- CometFilter\n      :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                    +- CometBroadcastExchange\n      :                 :                       +- CometBroadcastHashJoin\n      :                 :                          :- CometHashAggregate\n      :                 :                          :  +- CometExchange\n      :                 :                          :     +- CometHashAggregate\n      :                 :    
                      :        +- CometProject\n      :                 :                          :           +- CometBroadcastHashJoin\n      :                 :                          :              :- CometProject\n      :                 :                          :              :  +- CometBroadcastHashJoin\n      :                 :                          :              :     :- CometFilter\n      :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :                          :              :     :        +- SubqueryBroadcast\n      :                 :                          :              :     :           +- BroadcastExchange\n      :                 :                          :              :     :              +- CometNativeColumnarToRow\n      :                 :                          :              :     :                 +- CometProject\n      :                 :                          :              :     :                    +- CometFilter\n      :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          :              :     +- CometBroadcastExchange\n      :                 :                          :              :        +- CometBroadcastHashJoin\n      :                 :                          :              :           :- CometFilter\n      :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :           +- CometBroadcastExchange\n      :                 :                          :              :              +- CometProject\n      :                 :                          :              :                 +- CometBroadcastHashJoin\n      :                 :                          :              :                    :- CometProject\n      :                 :                          :              :                    :  +- CometBroadcastHashJoin\n      :                 :                          :              :                    :     :- CometFilter\n      :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :                          :              :                    :     :        +- ReusedSubquery\n      :                 :                          :              :                    :     +- CometBroadcastExchange\n      :                 :                          :              :                    :        +- CometFilter\n      :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :                    +- CometBroadcastExchange\n      :                 :                          :              :                       +- CometProject\n      :                 :                          :              :                          +- CometFilter\n      :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n      :                 :                          :              +- CometBroadcastExchange\n      :                 :                          :                 +- CometProject\n      :                 :                          :                    +- CometFilter\n      :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          +- CometBroadcastExchange\n      :                 :                             +- CometProject\n      :                 :                                +- CometBroadcastHashJoin\n      :                 :                                   :- CometProject\n      :                 :                                   :  +- CometBroadcastHashJoin\n      :                 :                                   :     :- CometFilter\n      :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :                                   :     :        +- ReusedSubquery\n      :                 :                                   :     +- CometBroadcastExchange\n      :                 :                                   :        +- CometFilter\n      :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                                   +- CometBroadcastExchange\n      :                 :                                      +- CometProject\n      :                 :                                         +- CometFilter\n      :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 +- CometBroadcastExchange\n      :                    +- CometProject\n      :                       +- CometFilter\n      :                          :  +- ReusedSubquery\n      :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      +- CometBroadcastExchange\n         +- CometFilter\n            :  +- ReusedSubquery\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastHashJoin\n                           :     :  :- CometFilter\n                           :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :  :        +- SubqueryBroadcast\n                           :     :  :           +- BroadcastExchange\n                           :     :  :              +- CometNativeColumnarToRow\n                           :     :  :                 +- CometProject\n                           :     :  :                    +- CometFilter\n                           :     :  :                       :  +- Subquery\n                           :     :  :                       :     +- CometNativeColumnarToRow\n                           :     :  :                       :        +- CometProject\n                           :     :  :                       :           +- CometFilter\n                           : 
    :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :  +- CometBroadcastExchange\n                           :     :     +- CometProject\n                           :     :        +- CometBroadcastHashJoin\n                           :     :           :- CometFilter\n                           :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :           +- CometBroadcastExchange\n                           :     :              +- CometBroadcastHashJoin\n                           :     :                 :- CometHashAggregate\n                           :     :                 :  +- CometExchange\n                           :     :                 :     +- CometHashAggregate\n                           :     :                 :        +- CometProject\n                           :     :                 :           +- CometBroadcastHashJoin\n                           :     :                 :              :- CometProject\n                           :     :                 :              :  +- CometBroadcastHashJoin\n                           :     :                 :              :     :- CometFilter\n                           :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :                 :              :     :        +- SubqueryBroadcast\n                           :     :                 :              :     :           +- BroadcastExchange\n                           :     :                 :              :     :              +- CometNativeColumnarToRow\n                           :     :                 :              :     :                 +- CometProject\n                           :     :                 :              :     :                    +- CometFilter\n                           :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              :     +- CometBroadcastExchange\n                           :     :                 :              :        +- CometBroadcastHashJoin\n                           :     :                 :              :           :- CometFilter\n                           :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :           +- CometBroadcastExchange\n                           :     :                 :              :              +- CometProject\n                           :     :                 :              :                 +- CometBroadcastHashJoin\n                           :     :                 :              :                    :- CometProject\n                           :     :                 :              :                    :  +- CometBroadcastHashJoin\n                           :     :                 :              :                    :     :- CometFilter\n                           :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.catalog_sales\n                           :     :                 :              :                    :     :        +- ReusedSubquery\n                           :     :                 :              :                    :     +- CometBroadcastExchange\n                           :     :                 :              :                    :        +- CometFilter\n                           :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :                    +- CometBroadcastExchange\n                           :     :                 :              :                       +- CometProject\n                           :     :                 :              :                          +- CometFilter\n                           :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              +- CometBroadcastExchange\n                           :     :                 :                 +- CometProject\n                           :     :                 :                    +- CometFilter\n                           :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 +- CometBroadcastExchange\n                           :     :                    +- CometProject\n                           :     :                       +- CometBroadcastHashJoin\n                           :     :                          :- CometProject\n                           :     :                          :  +- CometBroadcastHashJoin\n                           :     :                          :     :- CometFilter\n                           :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :     :                          :     :        +- ReusedSubquery\n                           :     :                          :     +- CometBroadcastExchange\n                           :     :                          :        +- CometFilter\n                           :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                          +- CometBroadcastExchange\n                           :     :                             +- CometProject\n                           :     :                                +- CometFilter\n                           :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n                           :        +- CometBroadcastHashJoin\n                           :           :- CometFilter\n                           :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :           +- CometBroadcastExchange\n                           :              +- CometProject\n                           :                 +- CometBroadcastHashJoin\n                           :                    :- CometFilter\n                           :                    :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n                           :                    +- CometBroadcastExchange\n                           :                       +- CometBroadcastHashJoin\n                           :                          :- CometHashAggregate\n                           :                          :  +- CometExchange\n                           :                          :     +- CometHashAggregate\n                           :                          :        +- CometProject\n                           :                          :           +- CometBroadcastHashJoin\n                           :                          :              :- CometProject\n                           :                          :              :  +- CometBroadcastHashJoin\n                           :                          :              :     :- CometFilter\n                           :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                          :              :     :        +- SubqueryBroadcast\n                           :                          :              :     :           +- BroadcastExchange\n                           :                          :              :     :              +- CometNativeColumnarToRow\n                           :                          :              :     :                 +- CometProject\n                           :                          :              :     :                    +- CometFilter\n                           :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              :     +- CometBroadcastExchange\n                           :                          :              :        +- CometBroadcastHashJoin\n                           :                          :              :           :- CometFilter\n                           :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :           +- CometBroadcastExchange\n                           :                          :              :              +- CometProject\n                           :                          :              :                 +- CometBroadcastHashJoin\n                           :                          :              :                    :- CometProject\n                           :                          :              :                    :  +- CometBroadcastHashJoin\n                           :                          :              :                    :     :- CometFilter\n                           :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :                          :              :                    :     :        +- ReusedSubquery\n                           :                          :              :                    :     +- CometBroadcastExchange\n                           :                          :              :                    :        +- CometFilter\n                           :                          :              :                    :           
+- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :                    +- CometBroadcastExchange\n                           :                          :              :                       +- CometProject\n                           :                          :              :                          +- CometFilter\n                           :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              +- CometBroadcastExchange\n                           :                          :                 +- CometProject\n                           :                          :                    +- CometFilter\n                           :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          +- CometBroadcastExchange\n                           :                             +- CometProject\n                           :                                +- CometBroadcastHashJoin\n                           :                                   :- CometProject\n                           :                                   :  +- CometBroadcastHashJoin\n                           :                                   :     :- CometFilter\n                           :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                                   :     :        +- ReusedSubquery\n                           :                                   :     +- CometBroadcastExchange\n                           :                                   :        +- CometFilter\n                           :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                                   +- CometBroadcastExchange\n                           :                                      +- CometProject\n                           :                                         +- CometFilter\n                           :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    :  +- ReusedSubquery\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 298 out of 327 eligible operators (91%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q14a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- Filter\n               :              :  :  +- Subquery\n               :              :  :     +- HashAggregate\n               :              :  :        +- CometNativeColumnarToRow\n               :              :  :           +- CometColumnarExchange\n               :              :  :              +- HashAggregate\n               :              :  :                 +- Union\n               :              :  :                    :- Project\n               :              :  :                    :  +- BroadcastHashJoin\n               :              :  :                    :     :- ColumnarToRow\n               :              :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :  :                    :     :        +- ReusedSubquery\n               :              :  :                    :     +- BroadcastExchange\n               :              :  :                    :        +- CometNativeColumnarToRow\n               :              :  :                    :           +- CometProject\n               :              :  :                    :              +- CometFilter\n               :              :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  :                    :- Project\n               :              :  :                    :  +- BroadcastHashJoin\n               :              :  :                    :     :- ColumnarToRow\n               :              :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :  :                    :     :        +- SubqueryBroadcast\n               :              :  :                    :     :           +- BroadcastExchange\n               :              :  :                    :     :              +- CometNativeColumnarToRow\n               :              :  :                    :     :                 +- CometProject\n               :              :  :                    :     :                    +- CometFilter\n               :              :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  :                    :     +- BroadcastExchange\n               :              :  :                    :        +- CometNativeColumnarToRow\n               :              :  :                    :           +- CometProject\n               :              :  :                    :              +- CometFilter\n               :              :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  :                    +- Project\n               :              :  :                       +- BroadcastHashJoin\n               :              :  :                          :- ColumnarToRow\n               :              :  :        
                  :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :  :                          :        +- ReusedSubquery\n               :              :  :                          +- BroadcastExchange\n               :              :  :                             +- CometNativeColumnarToRow\n               :              :  :                                +- CometProject\n               :              :  :                                   +- CometFilter\n               :              :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  +- HashAggregate\n               :              :     +- CometNativeColumnarToRow\n               :              :        +- CometColumnarExchange\n               :              :           +- HashAggregate\n               :              :              +- Project\n               :              :                 +- BroadcastHashJoin\n               :              :                    :- Project\n               :              :                    :  +- BroadcastHashJoin\n               :              :                    :     :- BroadcastHashJoin\n               :              :                    :     :  :- Filter\n               :              :                    :     :  :  +- ColumnarToRow\n               :              :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :  :           +- SubqueryBroadcast\n               :              :                    :     :  :              +- BroadcastExchange\n               :              :                    :     :  :                 +- CometNativeColumnarToRow\n               :              :                    :     :  :                    +- CometProject\n               :              :                    :     :  :                       +- CometFilter\n               :              :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :  +- BroadcastExchange\n               :              :                    :     :     +- Project\n               :              :                    :     :        +- BroadcastHashJoin\n               :              :                    :     :           :- CometNativeColumnarToRow\n               :              :                    :     :           :  +- CometFilter\n               :              :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :           +- BroadcastExchange\n               :              :                    :     :              +- BroadcastHashJoin\n               :              :                    :     :                 :- CometNativeColumnarToRow\n               :              :                    :     :                 :  +- CometHashAggregate\n               :              :                    :     :                 :     +- CometColumnarExchange\n               :              :                    :     :                 :        +- HashAggregate\n               :              :                    :     :                 :           +- Project\n              
 :              :                    :     :                 :              +- BroadcastHashJoin\n               :              :                    :     :                 :                 :- Project\n               :              :                    :     :                 :                 :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :     :- Filter\n               :              :                    :     :                 :                 :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :              :                    :     :                 :                 :     :              +- BroadcastExchange\n               :              :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :     :                    +- CometProject\n               :              :                    :     :                 :                 :     :                       +- CometFilter\n               :              :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 :     +- BroadcastExchange\n               :              :                    :     :                 :                 :        +- BroadcastHashJoin\n               :              :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :           :  +- CometFilter\n               :              :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :           +- BroadcastExchange\n               :              :                    :     :                 :                 :              +- Project\n               :              :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :- Project\n               :              :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :     :- Filter\n               :              :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    : 
    :                 :                 :                    :     :           +- ReusedSubquery\n               :              :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :              :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                    :           +- CometFilter\n               :              :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :                    +- BroadcastExchange\n               :              :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                          +- CometProject\n               :              :                    :     :                 :                 :                             +- CometFilter\n               :              :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 +- BroadcastExchange\n               :              :                    :     :                 :                    +- CometNativeColumnarToRow\n               :              :                    :     :                 :                       +- CometProject\n               :              :                    :     :                 :                          +- CometFilter\n               :              :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 +- BroadcastExchange\n               :              :                    :     :                    +- Project\n               :              :                    :     :                       +- BroadcastHashJoin\n               :              :                    :     :                          :- Project\n               :              :                    :     :                          :  +- BroadcastHashJoin\n               :              :                    :     :                          :     :- Filter\n               :              :                    :     :                          :     :  +- ColumnarToRow\n               :              :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                          :     :           +- ReusedSubquery\n               :              :                    :     :                          :     +- BroadcastExchange\n               :              :                    :     :                          :        +- CometNativeColumnarToRow\n               :              :                    :     :                          :           +- CometFilter\n               :              :                    :     :  
                        :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                          +- BroadcastExchange\n               :              :                    :     :                             +- CometNativeColumnarToRow\n               :              :                    :     :                                +- CometProject\n               :              :                    :     :                                   +- CometFilter\n               :              :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     +- BroadcastExchange\n               :              :                    :        +- BroadcastHashJoin\n               :              :                    :           :- CometNativeColumnarToRow\n               :              :                    :           :  +- CometFilter\n               :              :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :           +- BroadcastExchange\n               :              :                    :              +- Project\n               :              :                    :                 +- BroadcastHashJoin\n               :              :                    :                    :- CometNativeColumnarToRow\n               :              :                    :                    :  +- CometFilter\n               :              :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                    +- BroadcastExchange\n               :              :                    :                       +- BroadcastHashJoin\n               :              :                    :                          :- CometNativeColumnarToRow\n               :              :                    :                          :  +- CometHashAggregate\n               :              :                    :                          :     +- CometColumnarExchange\n               :              :                    :                          :        +- HashAggregate\n               :              :                    :                          :           +- Project\n               :              :                    :                          :              +- BroadcastHashJoin\n               :              :                    :                          :                 :- Project\n               :              :                    :                          :                 :  +- BroadcastHashJoin\n               :              :                    :                          :                 :     :- Filter\n               :              :                    :                          :                 :     :  +- ColumnarToRow\n               :              :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :     :           +- SubqueryBroadcast\n               :              :                    :                          :                 :     :              +- BroadcastExchange\n               :    
          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :                          :                 :     :                    +- CometProject\n               :              :                    :                          :                 :     :                       +- CometFilter\n               :              :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 :     +- BroadcastExchange\n               :              :                    :                          :                 :        +- BroadcastHashJoin\n               :              :                    :                          :                 :           :- CometNativeColumnarToRow\n               :              :                    :                          :                 :           :  +- CometFilter\n               :              :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :           +- BroadcastExchange\n               :              :                    :                          :                 :              +- Project\n               :              :                    :                          :                 :                 +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :- Project\n               :              :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :     :- Filter\n               :              :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :              :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :              :                    :                          :                 :                    :     +- BroadcastExchange\n               :              :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :                          :                 :                    :           +- CometFilter\n               :              :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :                    +- BroadcastExchange\n               :              :                    :                          :                 :                       +- CometNativeColumnarToRow\n 
              :              :                    :                          :                 :                          +- CometProject\n               :              :                    :                          :                 :                             +- CometFilter\n               :              :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 +- BroadcastExchange\n               :              :                    :                          :                    +- CometNativeColumnarToRow\n               :              :                    :                          :                       +- CometProject\n               :              :                    :                          :                          +- CometFilter\n               :              :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          +- BroadcastExchange\n               :              :                    :                             +- Project\n               :              :                    :                                +- BroadcastHashJoin\n               :              :                    :                                   :- Project\n               :              :                    :                                   :  +- BroadcastHashJoin\n               :              :                    :                                   :     :- Filter\n               :              :                    :                                   :     :  +- ColumnarToRow\n               :              :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                                   :     :           +- ReusedSubquery\n               :              :                    :                                   :     +- BroadcastExchange\n               :              :                    :                                   :        +- CometNativeColumnarToRow\n               :              :                    :                                   :           +- CometFilter\n               :              :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                                   +- BroadcastExchange\n               :              :                    :                                      +- CometNativeColumnarToRow\n               :              :                    :                                         +- CometProject\n               :              :                    :                                            +- CometFilter\n               :              :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    +- BroadcastExchange\n               :              :                       +- CometNativeColumnarToRow\n               :              :                          +- CometProject\n      
         :              :                             +- CometFilter\n               :              :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :- Filter\n               :              :  :  +- ReusedSubquery\n               :              :  +- HashAggregate\n               :              :     +- CometNativeColumnarToRow\n               :              :        +- CometColumnarExchange\n               :              :           +- HashAggregate\n               :              :              +- Project\n               :              :                 +- BroadcastHashJoin\n               :              :                    :- Project\n               :              :                    :  +- BroadcastHashJoin\n               :              :                    :     :- BroadcastHashJoin\n               :              :                    :     :  :- Filter\n               :              :                    :     :  :  +- ColumnarToRow\n               :              :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :  :           +- ReusedSubquery\n               :              :                    :     :  +- BroadcastExchange\n               :              :                    :     :     +- Project\n               :              :                    :     :        +- BroadcastHashJoin\n               :              :                    :     :           :- CometNativeColumnarToRow\n               :              :                    :     :           :  +- CometFilter\n               :              :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :           +- BroadcastExchange\n               :              :                    :     :              +- BroadcastHashJoin\n               :              :                    :     :                 :- CometNativeColumnarToRow\n               :              :                    :     :                 :  +- CometHashAggregate\n               :              :                    :     :                 :     +- CometColumnarExchange\n               :              :                    :     :                 :        +- HashAggregate\n               :              :                    :     :                 :           +- Project\n               :              :                    :     :                 :              +- BroadcastHashJoin\n               :              :                    :     :                 :                 :- Project\n               :              :                    :     :                 :                 :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :     :- Filter\n               :              :                    :     :                 :                 :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :              :                    
:     :                 :                 :     :              +- BroadcastExchange\n               :              :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :     :                    +- CometProject\n               :              :                    :     :                 :                 :     :                       +- CometFilter\n               :              :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 :     +- BroadcastExchange\n               :              :                    :     :                 :                 :        +- BroadcastHashJoin\n               :              :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :           :  +- CometFilter\n               :              :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :           +- BroadcastExchange\n               :              :                    :     :                 :                 :              +- Project\n               :              :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :- Project\n               :              :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :     :- Filter\n               :              :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :              :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :              :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                    :           +- CometFilter\n               :              :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :                    +- BroadcastExchange\n               :              :                    :     :                 :                 :                    
   +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                          +- CometProject\n               :              :                    :     :                 :                 :                             +- CometFilter\n               :              :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 +- BroadcastExchange\n               :              :                    :     :                 :                    +- CometNativeColumnarToRow\n               :              :                    :     :                 :                       +- CometProject\n               :              :                    :     :                 :                          +- CometFilter\n               :              :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 +- BroadcastExchange\n               :              :                    :     :                    +- Project\n               :              :                    :     :                       +- BroadcastHashJoin\n               :              :                    :     :                          :- Project\n               :              :                    :     :                          :  +- BroadcastHashJoin\n               :              :                    :     :                          :     :- Filter\n               :              :                    :     :                          :     :  +- ColumnarToRow\n               :              :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                          :     :           +- ReusedSubquery\n               :              :                    :     :                          :     +- BroadcastExchange\n               :              :                    :     :                          :        +- CometNativeColumnarToRow\n               :              :                    :     :                          :           +- CometFilter\n               :              :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                          +- BroadcastExchange\n               :              :                    :     :                             +- CometNativeColumnarToRow\n               :              :                    :     :                                +- CometProject\n               :              :                    :     :                                   +- CometFilter\n               :              :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     +- BroadcastExchange\n               :              :                    :        +- BroadcastHashJoin\n               :              :                    :           :- CometNativeColumnarToRow\n               :            
  :                    :           :  +- CometFilter\n               :              :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :           +- BroadcastExchange\n               :              :                    :              +- Project\n               :              :                    :                 +- BroadcastHashJoin\n               :              :                    :                    :- CometNativeColumnarToRow\n               :              :                    :                    :  +- CometFilter\n               :              :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                    +- BroadcastExchange\n               :              :                    :                       +- BroadcastHashJoin\n               :              :                    :                          :- CometNativeColumnarToRow\n               :              :                    :                          :  +- CometHashAggregate\n               :              :                    :                          :     +- CometColumnarExchange\n               :              :                    :                          :        +- HashAggregate\n               :              :                    :                          :           +- Project\n               :              :                    :                          :              +- BroadcastHashJoin\n               :              :                    :                          :                 :- Project\n               :              :                    :                          :                 :  +- BroadcastHashJoin\n               :              :                    :                          :                 :     :- Filter\n               :              :                    :                          :                 :     :  +- ColumnarToRow\n               :              :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :     :           +- SubqueryBroadcast\n               :              :                    :                          :                 :     :              +- BroadcastExchange\n               :              :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :                          :                 :     :                    +- CometProject\n               :              :                    :                          :                 :     :                       +- CometFilter\n               :              :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 :     +- BroadcastExchange\n               :              :                    :                          :                 :        +- BroadcastHashJoin\n               :              :                    :                          :                 :           :- 
CometNativeColumnarToRow\n               :              :                    :                          :                 :           :  +- CometFilter\n               :              :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :           +- BroadcastExchange\n               :              :                    :                          :                 :              +- Project\n               :              :                    :                          :                 :                 +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :- Project\n               :              :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :     :- Filter\n               :              :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :              :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :              :                    :                          :                 :                    :     +- BroadcastExchange\n               :              :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :                          :                 :                    :           +- CometFilter\n               :              :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :                    +- BroadcastExchange\n               :              :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :              :                    :                          :                 :                          +- CometProject\n               :              :                    :                          :                 :                             +- CometFilter\n               :              :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 +- BroadcastExchange\n               :              :                    :                          :                    +- CometNativeColumnarToRow\n               :              :                    :                          :                       +- CometProject\n               :              :                    :                          :                          +- CometFilter\n      
         :              :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          +- BroadcastExchange\n               :              :                    :                             +- Project\n               :              :                    :                                +- BroadcastHashJoin\n               :              :                    :                                   :- Project\n               :              :                    :                                   :  +- BroadcastHashJoin\n               :              :                    :                                   :     :- Filter\n               :              :                    :                                   :     :  +- ColumnarToRow\n               :              :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                                   :     :           +- ReusedSubquery\n               :              :                    :                                   :     +- BroadcastExchange\n               :              :                    :                                   :        +- CometNativeColumnarToRow\n               :              :                    :                                   :           +- CometFilter\n               :              :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                                   +- BroadcastExchange\n               :              :                    :                                      +- CometNativeColumnarToRow\n               :              :                    :                                         +- CometProject\n               :              :                    :                                            +- CometFilter\n               :              :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    +- BroadcastExchange\n               :              :                       +- CometNativeColumnarToRow\n               :              :                          +- CometProject\n               :              :                             +- CometFilter\n               :              :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              +- Filter\n               :                 :  +- ReusedSubquery\n               :                 +- HashAggregate\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometColumnarExchange\n               :                          +- HashAggregate\n               :                             +- Project\n               :                                +- BroadcastHashJoin\n               :                                   :- Project\n               :                                   :  +- BroadcastHashJoin\n               :                                   :     :- BroadcastHashJoin\n               :                                   :     :  :- Filter\n               
:                                   :     :  :  +- ColumnarToRow\n               :                                   :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :  :           +- ReusedSubquery\n               :                                   :     :  +- BroadcastExchange\n               :                                   :     :     +- Project\n               :                                   :     :        +- BroadcastHashJoin\n               :                                   :     :           :- CometNativeColumnarToRow\n               :                                   :     :           :  +- CometFilter\n               :                                   :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :           +- BroadcastExchange\n               :                                   :     :              +- BroadcastHashJoin\n               :                                   :     :                 :- CometNativeColumnarToRow\n               :                                   :     :                 :  +- CometHashAggregate\n               :                                   :     :                 :     +- CometColumnarExchange\n               :                                   :     :                 :        +- HashAggregate\n               :                                   :     :                 :           +- Project\n               :                                   :     :                 :              +- BroadcastHashJoin\n               :                                   :     :                 :                 :- Project\n               :                                   :     :                 :                 :  +- BroadcastHashJoin\n               :                                   :     :                 :                 :     :- Filter\n               :                                   :     :                 :                 :     :  +- ColumnarToRow\n               :                                   :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :                 :                 :     :           +- SubqueryBroadcast\n               :                                   :     :                 :                 :     :              +- BroadcastExchange\n               :                                   :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                   :     :                 :                 :     :                    +- CometProject\n               :                                   :     :                 :                 :     :                       +- CometFilter\n               :                                   :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :                 :                 :     +- BroadcastExchange\n               :                                   :     :                 :                 :        +- BroadcastHashJoin\n               :            
                       :     :                 :                 :           :- CometNativeColumnarToRow\n               :                                   :     :                 :                 :           :  +- CometFilter\n               :                                   :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :                 :                 :           +- BroadcastExchange\n               :                                   :     :                 :                 :              +- Project\n               :                                   :     :                 :                 :                 +- BroadcastHashJoin\n               :                                   :     :                 :                 :                    :- Project\n               :                                   :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                                   :     :                 :                 :                    :     :- Filter\n               :                                   :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                   :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                   :     :                 :                 :                    :     +- BroadcastExchange\n               :                                   :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                   :     :                 :                 :                    :           +- CometFilter\n               :                                   :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :                 :                 :                    +- BroadcastExchange\n               :                                   :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                   :     :                 :                 :                          +- CometProject\n               :                                   :     :                 :                 :                             +- CometFilter\n               :                                   :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :                 :                 +- BroadcastExchange\n               :                                   :     :                 :                    +- CometNativeColumnarToRow\n               :                                   :     :                 :                       +- CometProject\n               :                                   :     :                 :                          +- 
CometFilter\n               :                                   :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :                 +- BroadcastExchange\n               :                                   :     :                    +- Project\n               :                                   :     :                       +- BroadcastHashJoin\n               :                                   :     :                          :- Project\n               :                                   :     :                          :  +- BroadcastHashJoin\n               :                                   :     :                          :     :- Filter\n               :                                   :     :                          :     :  +- ColumnarToRow\n               :                                   :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :                          :     :           +- ReusedSubquery\n               :                                   :     :                          :     +- BroadcastExchange\n               :                                   :     :                          :        +- CometNativeColumnarToRow\n               :                                   :     :                          :           +- CometFilter\n               :                                   :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :                          +- BroadcastExchange\n               :                                   :     :                             +- CometNativeColumnarToRow\n               :                                   :     :                                +- CometProject\n               :                                   :     :                                   +- CometFilter\n               :                                   :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     +- BroadcastExchange\n               :                                   :        +- BroadcastHashJoin\n               :                                   :           :- CometNativeColumnarToRow\n               :                                   :           :  +- CometFilter\n               :                                   :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :           +- BroadcastExchange\n               :                                   :              +- Project\n               :                                   :                 +- BroadcastHashJoin\n               :                                   :                    :- CometNativeColumnarToRow\n               :                                   :                    :  +- CometFilter\n               :                                   :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                    +- BroadcastExchange\n               :                                   :                       +- BroadcastHashJoin\n      
         :                                   :                          :- CometNativeColumnarToRow\n               :                                   :                          :  +- CometHashAggregate\n               :                                   :                          :     +- CometColumnarExchange\n               :                                   :                          :        +- HashAggregate\n               :                                   :                          :           +- Project\n               :                                   :                          :              +- BroadcastHashJoin\n               :                                   :                          :                 :- Project\n               :                                   :                          :                 :  +- BroadcastHashJoin\n               :                                   :                          :                 :     :- Filter\n               :                                   :                          :                 :     :  +- ColumnarToRow\n               :                                   :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :                          :                 :     :           +- SubqueryBroadcast\n               :                                   :                          :                 :     :              +- BroadcastExchange\n               :                                   :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                   :                          :                 :     :                    +- CometProject\n               :                                   :                          :                 :     :                       +- CometFilter\n               :                                   :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :                          :                 :     +- BroadcastExchange\n               :                                   :                          :                 :        +- BroadcastHashJoin\n               :                                   :                          :                 :           :- CometNativeColumnarToRow\n               :                                   :                          :                 :           :  +- CometFilter\n               :                                   :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                          :                 :           +- BroadcastExchange\n               :                                   :                          :                 :              +- Project\n               :                                   :                          :                 :                 +- BroadcastHashJoin\n               :                                   :                          :                 :                    :- Project\n               :                                   :                          :                 :                    :  
+- BroadcastHashJoin\n               :                                   :                          :                 :                    :     :- Filter\n               :                                   :                          :                 :                    :     :  +- ColumnarToRow\n               :                                   :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :                          :                 :                    :     :           +- ReusedSubquery\n               :                                   :                          :                 :                    :     +- BroadcastExchange\n               :                                   :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                   :                          :                 :                    :           +- CometFilter\n               :                                   :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                          :                 :                    +- BroadcastExchange\n               :                                   :                          :                 :                       +- CometNativeColumnarToRow\n               :                                   :                          :                 :                          +- CometProject\n               :                                   :                          :                 :                             +- CometFilter\n               :                                   :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :                          :                 +- BroadcastExchange\n               :                                   :                          :                    +- CometNativeColumnarToRow\n               :                                   :                          :                       +- CometProject\n               :                                   :                          :                          +- CometFilter\n               :                                   :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :                          +- BroadcastExchange\n               :                                   :                             +- Project\n               :                                   :                                +- BroadcastHashJoin\n               :                                   :                                   :- Project\n               :                                   :                                   :  +- BroadcastHashJoin\n               :                                   :                                   :     :- Filter\n               :                                   :                                   :     :  +- ColumnarToRow\n               :                                   :                   
                :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :                                   :     :           +- ReusedSubquery\n               :                                   :                                   :     +- BroadcastExchange\n               :                                   :                                   :        +- CometNativeColumnarToRow\n               :                                   :                                   :           +- CometFilter\n               :                                   :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                                   +- BroadcastExchange\n               :                                   :                                      +- CometNativeColumnarToRow\n               :                                   :                                         +- CometProject\n               :                                   :                                            +- CometFilter\n               :                                   :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   +- BroadcastExchange\n               :                                      +- CometNativeColumnarToRow\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- Filter\n               :                          :  :  +- Subquery\n               :                          :  :     +- HashAggregate\n               :                          :  :        +- CometNativeColumnarToRow\n               :                          :  :           +- CometColumnarExchange\n               :                          :  :              +- HashAggregate\n               :                          :  :                 +- Union\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- ReusedSubquery\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                  
        :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- SubqueryBroadcast\n               :                          :  :                    :     :           +- BroadcastExchange\n               :                          :  :                    :     :              +- CometNativeColumnarToRow\n               :                          :  :                    :     :                 +- CometProject\n               :                          :  :                    :     :                    +- CometFilter\n               :                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    +- Project\n               :                          :  :                       +- BroadcastHashJoin\n               :                          :  :                          :- ColumnarToRow\n               :                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                          :        +- ReusedSubquery\n               :                          :  :                          +- BroadcastExchange\n               :                          :  :                             +- CometNativeColumnarToRow\n               :                          :  :                                +- CometProject\n               :                          :  :                                   +- CometFilter\n               :                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :            
              :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- SubqueryBroadcast\n               :                          :                    :     :  :              +- BroadcastExchange\n               :                          :                    :     :  :                 +- CometNativeColumnarToRow\n               :                          :                    :     :  :                    +- CometProject\n               :                          :                    :     :  :                       +- CometFilter\n               :                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native 
DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :  
   :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    
:     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :             
    :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n         
      :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                         
          +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :- Filter\n               :                          :  :  +- ReusedSubquery\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- ReusedSubquery\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- 
CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :            
        :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :   
                 :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :        
            :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :            
              :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                             
      :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Filter\n               :                             :  +- ReusedSubquery\n               :                             +- HashAggregate\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometColumnarExchange\n               :                                      +- HashAggregate\n               :                                         +- Project\n               :                                            +- BroadcastHashJoin\n               :                                               :- Project\n               :                                               :  +- BroadcastHashJoin\n               :                                               :     :- BroadcastHashJoin\n               :                                               :     :  :- Filter\n               :                                               :     :  :  +- ColumnarToRow\n               :                                               :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :  :           +- ReusedSubquery\n               :                                               :     :  +- BroadcastExchange\n               :                          
                     :     :     +- Project\n               :                                               :     :        +- BroadcastHashJoin\n               :                                               :     :           :- CometNativeColumnarToRow\n               :                                               :     :           :  +- CometFilter\n               :                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :           +- BroadcastExchange\n               :                                               :     :              +- BroadcastHashJoin\n               :                                               :     :                 :- CometNativeColumnarToRow\n               :                                               :     :                 :  +- CometHashAggregate\n               :                                               :     :                 :     +- CometColumnarExchange\n               :                                               :     :                 :        +- HashAggregate\n               :                                               :     :                 :           +- Project\n               :                                               :     :                 :              +- BroadcastHashJoin\n               :                                               :     :                 :                 :- Project\n               :                                               :     :                 :                 :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :     :- Filter\n               :                                               :     :                 :                 :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :     :           +- SubqueryBroadcast\n               :                                               :     :                 :                 :     :              +- BroadcastExchange\n               :                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :     :                    +- CometProject\n               :                                               :     :                 :                 :     :                       +- CometFilter\n               :                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 :     +- BroadcastExchange\n               :                                               :     :                 :                 :        +- BroadcastHashJoin\n               :                                               :     :                 :                 :           :- CometNativeColumnarToRow\n               :                           
                    :     :                 :                 :           :  +- CometFilter\n               :                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :           +- BroadcastExchange\n               :                                               :     :                 :                 :              +- Project\n               :                                               :     :                 :                 :                 +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :- Project\n               :                                               :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :     :- Filter\n               :                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                               :     :                 :                 :                    :     +- BroadcastExchange\n               :                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                    :           +- CometFilter\n               :                                               :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :                    +- BroadcastExchange\n               :                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                          +- CometProject\n               :                                               :     :                 :                 :                             +- CometFilter\n               :                                               :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 +- BroadcastExchange\n               :                                               :     :                 :                    +- CometNativeColumnarToRow\n               :                                               :     :                 :                       +- 
CometProject\n               :                                               :     :                 :                          +- CometFilter\n               :                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 +- BroadcastExchange\n               :                                               :     :                    +- Project\n               :                                               :     :                       +- BroadcastHashJoin\n               :                                               :     :                          :- Project\n               :                                               :     :                          :  +- BroadcastHashJoin\n               :                                               :     :                          :     :- Filter\n               :                                               :     :                          :     :  +- ColumnarToRow\n               :                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                          :     :           +- ReusedSubquery\n               :                                               :     :                          :     +- BroadcastExchange\n               :                                               :     :                          :        +- CometNativeColumnarToRow\n               :                                               :     :                          :           +- CometFilter\n               :                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                          +- BroadcastExchange\n               :                                               :     :                             +- CometNativeColumnarToRow\n               :                                               :     :                                +- CometProject\n               :                                               :     :                                   +- CometFilter\n               :                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     +- BroadcastExchange\n               :                                               :        +- BroadcastHashJoin\n               :                                               :           :- CometNativeColumnarToRow\n               :                                               :           :  +- CometFilter\n               :                                               :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :           +- BroadcastExchange\n               :                                               :              +- Project\n               :                                               :                 +- BroadcastHashJoin\n               :                                               :   
                 :- CometNativeColumnarToRow\n               :                                               :                    :  +- CometFilter\n               :                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                    +- BroadcastExchange\n               :                                               :                       +- BroadcastHashJoin\n               :                                               :                          :- CometNativeColumnarToRow\n               :                                               :                          :  +- CometHashAggregate\n               :                                               :                          :     +- CometColumnarExchange\n               :                                               :                          :        +- HashAggregate\n               :                                               :                          :           +- Project\n               :                                               :                          :              +- BroadcastHashJoin\n               :                                               :                          :                 :- Project\n               :                                               :                          :                 :  +- BroadcastHashJoin\n               :                                               :                          :                 :     :- Filter\n               :                                               :                          :                 :     :  +- ColumnarToRow\n               :                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :     :           +- SubqueryBroadcast\n               :                                               :                          :                 :     :              +- BroadcastExchange\n               :                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :                          :                 :     :                    +- CometProject\n               :                                               :                          :                 :     :                       +- CometFilter\n               :                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 :     +- BroadcastExchange\n               :                                               :                          :                 :        +- BroadcastHashJoin\n               :                                               :                          :                 :           :- CometNativeColumnarToRow\n               :                                               :                          :                 :           :  +- CometFilter\n               :                         
                      :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :           +- BroadcastExchange\n               :                                               :                          :                 :              +- Project\n               :                                               :                          :                 :                 +- BroadcastHashJoin\n               :                                               :                          :                 :                    :- Project\n               :                                               :                          :                 :                    :  +- BroadcastHashJoin\n               :                                               :                          :                 :                    :     :- Filter\n               :                                               :                          :                 :                    :     :  +- ColumnarToRow\n               :                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :                    :     :           +- ReusedSubquery\n               :                                               :                          :                 :                    :     +- BroadcastExchange\n               :                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :                          :                 :                    :           +- CometFilter\n               :                                               :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :                    +- BroadcastExchange\n               :                                               :                          :                 :                       +- CometNativeColumnarToRow\n               :                                               :                          :                 :                          +- CometProject\n               :                                               :                          :                 :                             +- CometFilter\n               :                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 +- BroadcastExchange\n               :                                               :                          :                    +- CometNativeColumnarToRow\n               :                                               :                          :                       +- CometProject\n               :                                         
      :                          :                          +- CometFilter\n               :                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          +- BroadcastExchange\n               :                                               :                             +- Project\n               :                                               :                                +- BroadcastHashJoin\n               :                                               :                                   :- Project\n               :                                               :                                   :  +- BroadcastHashJoin\n               :                                               :                                   :     :- Filter\n               :                                               :                                   :     :  +- ColumnarToRow\n               :                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                                   :     :           +- ReusedSubquery\n               :                                               :                                   :     +- BroadcastExchange\n               :                                               :                                   :        +- CometNativeColumnarToRow\n               :                                               :                                   :           +- CometFilter\n               :                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                                   +- BroadcastExchange\n               :                                               :                                      +- CometNativeColumnarToRow\n               :                                               :                                         +- CometProject\n               :                                               :                                            +- CometFilter\n               :                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               +- BroadcastExchange\n               :                                                  +- CometNativeColumnarToRow\n               :                                                     +- CometProject\n               :                                                        +- CometFilter\n               :                                                           +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n             
  :                       +- Union\n               :                          :- Filter\n               :                          :  :  +- Subquery\n               :                          :  :     +- HashAggregate\n               :                          :  :        +- CometNativeColumnarToRow\n               :                          :  :           +- CometColumnarExchange\n               :                          :  :              +- HashAggregate\n               :                          :  :                 +- Union\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- ReusedSubquery\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- SubqueryBroadcast\n               :                          :  :                    :     :           +- BroadcastExchange\n               :                          :  :                    :     :              +- CometNativeColumnarToRow\n               :                          :  :                    :     :                 +- CometProject\n               :                          :  :                    :     :                    +- CometFilter\n               :                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    +- Project\n               :                          :  :                       +- BroadcastHashJoin\n               :                          :  :                
          :- ColumnarToRow\n               :                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                          :        +- ReusedSubquery\n               :                          :  :                          +- BroadcastExchange\n               :                          :  :                             +- CometNativeColumnarToRow\n               :                          :  :                                +- CometProject\n               :                          :  :                                   +- CometFilter\n               :                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- SubqueryBroadcast\n               :                          :                    :     :  :              +- BroadcastExchange\n               :                          :                    :     :  :                 +- CometNativeColumnarToRow\n               :                          :                    :     :  :                    +- CometProject\n               :                          :                    :     :  :                       +- CometFilter\n               :                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                  
  :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :          
       :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               
:                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :         
                 :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    
:                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :    
                               :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :- Filter\n               :                          :  :  +- ReusedSubquery\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not 
support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- ReusedSubquery\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :      
              :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 
+- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- 
BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :       
             :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :      
              :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Filter\n               :                             :  +- 
ReusedSubquery\n               :                             +- HashAggregate\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometColumnarExchange\n               :                                      +- HashAggregate\n               :                                         +- Project\n               :                                            +- BroadcastHashJoin\n               :                                               :- Project\n               :                                               :  +- BroadcastHashJoin\n               :                                               :     :- BroadcastHashJoin\n               :                                               :     :  :- Filter\n               :                                               :     :  :  +- ColumnarToRow\n               :                                               :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :  :           +- ReusedSubquery\n               :                                               :     :  +- BroadcastExchange\n               :                                               :     :     +- Project\n               :                                               :     :        +- BroadcastHashJoin\n               :                                               :     :           :- CometNativeColumnarToRow\n               :                                               :     :           :  +- CometFilter\n               :                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :           +- BroadcastExchange\n               :                                               :     :              +- BroadcastHashJoin\n               :                                               :     :                 :- CometNativeColumnarToRow\n               :                                               :     :                 :  +- CometHashAggregate\n               :                                               :     :                 :     +- CometColumnarExchange\n               :                                               :     :                 :        +- HashAggregate\n               :                                               :     :                 :           +- Project\n               :                                               :     :                 :              +- BroadcastHashJoin\n               :                                               :     :                 :                 :- Project\n               :                                               :     :                 :                 :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :     :- Filter\n               :                                               :     :                 :                 :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :       
          :                 :     :           +- SubqueryBroadcast\n               :                                               :     :                 :                 :     :              +- BroadcastExchange\n               :                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :     :                    +- CometProject\n               :                                               :     :                 :                 :     :                       +- CometFilter\n               :                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 :     +- BroadcastExchange\n               :                                               :     :                 :                 :        +- BroadcastHashJoin\n               :                                               :     :                 :                 :           :- CometNativeColumnarToRow\n               :                                               :     :                 :                 :           :  +- CometFilter\n               :                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :           +- BroadcastExchange\n               :                                               :     :                 :                 :              +- Project\n               :                                               :     :                 :                 :                 +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :- Project\n               :                                               :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :     :- Filter\n               :                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                               :     :                 :                 :                    :     +- BroadcastExchange\n               :                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                    :           +- CometFilter\n               :                                   
            :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :                    +- BroadcastExchange\n               :                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                          +- CometProject\n               :                                               :     :                 :                 :                             +- CometFilter\n               :                                               :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 +- BroadcastExchange\n               :                                               :     :                 :                    +- CometNativeColumnarToRow\n               :                                               :     :                 :                       +- CometProject\n               :                                               :     :                 :                          +- CometFilter\n               :                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 +- BroadcastExchange\n               :                                               :     :                    +- Project\n               :                                               :     :                       +- BroadcastHashJoin\n               :                                               :     :                          :- Project\n               :                                               :     :                          :  +- BroadcastHashJoin\n               :                                               :     :                          :     :- Filter\n               :                                               :     :                          :     :  +- ColumnarToRow\n               :                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                          :     :           +- ReusedSubquery\n               :                                               :     :                          :     +- BroadcastExchange\n               :                                               :     :                          :        +- CometNativeColumnarToRow\n               :                                               :     :                          :           +- CometFilter\n               :                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                          +- BroadcastExchange\n               :                                               :     :               
              +- CometNativeColumnarToRow\n               :                                               :     :                                +- CometProject\n               :                                               :     :                                   +- CometFilter\n               :                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     +- BroadcastExchange\n               :                                               :        +- BroadcastHashJoin\n               :                                               :           :- CometNativeColumnarToRow\n               :                                               :           :  +- CometFilter\n               :                                               :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :           +- BroadcastExchange\n               :                                               :              +- Project\n               :                                               :                 +- BroadcastHashJoin\n               :                                               :                    :- CometNativeColumnarToRow\n               :                                               :                    :  +- CometFilter\n               :                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                    +- BroadcastExchange\n               :                                               :                       +- BroadcastHashJoin\n               :                                               :                          :- CometNativeColumnarToRow\n               :                                               :                          :  +- CometHashAggregate\n               :                                               :                          :     +- CometColumnarExchange\n               :                                               :                          :        +- HashAggregate\n               :                                               :                          :           +- Project\n               :                                               :                          :              +- BroadcastHashJoin\n               :                                               :                          :                 :- Project\n               :                                               :                          :                 :  +- BroadcastHashJoin\n               :                                               :                          :                 :     :- Filter\n               :                                               :                          :                 :     :  +- ColumnarToRow\n               :                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :     :           +- SubqueryBroadcast\n               :                                               :                          :     
            :     :              +- BroadcastExchange\n               :                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :                          :                 :     :                    +- CometProject\n               :                                               :                          :                 :     :                       +- CometFilter\n               :                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 :     +- BroadcastExchange\n               :                                               :                          :                 :        +- BroadcastHashJoin\n               :                                               :                          :                 :           :- CometNativeColumnarToRow\n               :                                               :                          :                 :           :  +- CometFilter\n               :                                               :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :           +- BroadcastExchange\n               :                                               :                          :                 :              +- Project\n               :                                               :                          :                 :                 +- BroadcastHashJoin\n               :                                               :                          :                 :                    :- Project\n               :                                               :                          :                 :                    :  +- BroadcastHashJoin\n               :                                               :                          :                 :                    :     :- Filter\n               :                                               :                          :                 :                    :     :  +- ColumnarToRow\n               :                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :                    :     :           +- ReusedSubquery\n               :                                               :                          :                 :                    :     +- BroadcastExchange\n               :                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :                          :                 :                    :           +- CometFilter\n               :                                               :                          :                 :                    :              +- 
CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :                    +- BroadcastExchange\n               :                                               :                          :                 :                       +- CometNativeColumnarToRow\n               :                                               :                          :                 :                          +- CometProject\n               :                                               :                          :                 :                             +- CometFilter\n               :                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 +- BroadcastExchange\n               :                                               :                          :                    +- CometNativeColumnarToRow\n               :                                               :                          :                       +- CometProject\n               :                                               :                          :                          +- CometFilter\n               :                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          +- BroadcastExchange\n               :                                               :                             +- Project\n               :                                               :                                +- BroadcastHashJoin\n               :                                               :                                   :- Project\n               :                                               :                                   :  +- BroadcastHashJoin\n               :                                               :                                   :     :- Filter\n               :                                               :                                   :     :  +- ColumnarToRow\n               :                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                                   :     :           +- ReusedSubquery\n               :                                               :                                   :     +- BroadcastExchange\n               :                                               :                                   :        +- CometNativeColumnarToRow\n               :                                               :                                   :           +- CometFilter\n               :                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                                   +- BroadcastExchange\n               :                                               :                                      +- 
CometNativeColumnarToRow\n               :                                               :                                         +- CometProject\n               :                                               :                                            +- CometFilter\n               :                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               +- BroadcastExchange\n               :                                                  +- CometNativeColumnarToRow\n               :                                                     +- CometProject\n               :                                                        +- CometFilter\n               :                                                           +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- Filter\n               :                          :  :  +- Subquery\n               :                          :  :     +- HashAggregate\n               :                          :  :        +- CometNativeColumnarToRow\n               :                          :  :           +- CometColumnarExchange\n               :                          :  :              +- HashAggregate\n               :                          :  :                 +- Union\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- ReusedSubquery\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- SubqueryBroadcast\n               :            
              :  :                    :     :           +- BroadcastExchange\n               :                          :  :                    :     :              +- CometNativeColumnarToRow\n               :                          :  :                    :     :                 +- CometProject\n               :                          :  :                    :     :                    +- CometFilter\n               :                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    +- Project\n               :                          :  :                       +- BroadcastHashJoin\n               :                          :  :                          :- ColumnarToRow\n               :                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                          :        +- ReusedSubquery\n               :                          :  :                          +- BroadcastExchange\n               :                          :  :                             +- CometNativeColumnarToRow\n               :                          :  :                                +- CometProject\n               :                          :  :                                   +- CometFilter\n               :                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- SubqueryBroadcast\n               :                          :                    :     :  :              +- BroadcastExchange\n               :                          :                    :     :  
:                 +- CometNativeColumnarToRow\n               :                          :                    :     :  :                    +- CometProject\n               :                          :                    :     :  :                       +- CometFilter\n               :                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :    
             :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               
:                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :           
               :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          : 
                   :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :       
          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                  
           +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :- Filter\n               :                          :  :  +- ReusedSubquery\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- ReusedSubquery\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :          
          :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :            
     :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :   
           +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support 
subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                 
         :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- 
CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Filter\n               :                             :  +- ReusedSubquery\n               :                             +- HashAggregate\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometColumnarExchange\n               :                                      +- HashAggregate\n               :                                         +- Project\n               :                                            +- BroadcastHashJoin\n               :                                               :- Project\n               :                                               :  +- BroadcastHashJoin\n               :                                               :     :- BroadcastHashJoin\n               :                                               :     :  :- Filter\n               :                                               :     :  :  +- ColumnarToRow\n               :                                               :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :  :           +- ReusedSubquery\n               :                                               :     :  +- BroadcastExchange\n               :                                               :     :     +- Project\n               :                                               :     :        +- BroadcastHashJoin\n               :                                               :     :           :- CometNativeColumnarToRow\n               :                                               :     :           :  +- CometFilter\n               :                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :           +- BroadcastExchange\n               :                                               :     :              +- BroadcastHashJoin\n               :                                               :     :                 :- CometNativeColumnarToRow\n               :                                               :     :                 :  
+- CometHashAggregate\n               :                                               :     :                 :     +- CometColumnarExchange\n               :                                               :     :                 :        +- HashAggregate\n               :                                               :     :                 :           +- Project\n               :                                               :     :                 :              +- BroadcastHashJoin\n               :                                               :     :                 :                 :- Project\n               :                                               :     :                 :                 :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :     :- Filter\n               :                                               :     :                 :                 :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :     :           +- SubqueryBroadcast\n               :                                               :     :                 :                 :     :              +- BroadcastExchange\n               :                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :     :                    +- CometProject\n               :                                               :     :                 :                 :     :                       +- CometFilter\n               :                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 :     +- BroadcastExchange\n               :                                               :     :                 :                 :        +- BroadcastHashJoin\n               :                                               :     :                 :                 :           :- CometNativeColumnarToRow\n               :                                               :     :                 :                 :           :  +- CometFilter\n               :                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :           +- BroadcastExchange\n               :                                               :     :                 :                 :              +- Project\n               :                                               :     :                 :                 :                 +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :- Project\n               :                                               :     :                 
:                 :                    :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :     :- Filter\n               :                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                               :     :                 :                 :                    :     +- BroadcastExchange\n               :                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                    :           +- CometFilter\n               :                                               :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :                    +- BroadcastExchange\n               :                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                          +- CometProject\n               :                                               :     :                 :                 :                             +- CometFilter\n               :                                               :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 +- BroadcastExchange\n               :                                               :     :                 :                    +- CometNativeColumnarToRow\n               :                                               :     :                 :                       +- CometProject\n               :                                               :     :                 :                          +- CometFilter\n               :                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 +- BroadcastExchange\n               :                                               :     :                    +- Project\n               :                                               :     :                       +- BroadcastHashJoin\n               :                                               :     :                          :- Project\n               :                                               :     :                          :  +- BroadcastHashJoin\n               :                        
                       :     :                          :     :- Filter\n               :                                               :     :                          :     :  +- ColumnarToRow\n               :                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                          :     :           +- ReusedSubquery\n               :                                               :     :                          :     +- BroadcastExchange\n               :                                               :     :                          :        +- CometNativeColumnarToRow\n               :                                               :     :                          :           +- CometFilter\n               :                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                          +- BroadcastExchange\n               :                                               :     :                             +- CometNativeColumnarToRow\n               :                                               :     :                                +- CometProject\n               :                                               :     :                                   +- CometFilter\n               :                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     +- BroadcastExchange\n               :                                               :        +- BroadcastHashJoin\n               :                                               :           :- CometNativeColumnarToRow\n               :                                               :           :  +- CometFilter\n               :                                               :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :           +- BroadcastExchange\n               :                                               :              +- Project\n               :                                               :                 +- BroadcastHashJoin\n               :                                               :                    :- CometNativeColumnarToRow\n               :                                               :                    :  +- CometFilter\n               :                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                    +- BroadcastExchange\n               :                                               :                       +- BroadcastHashJoin\n               :                                               :                          :- CometNativeColumnarToRow\n               :                                               :                          :  +- CometHashAggregate\n               :                                               :                          :     +- CometColumnarExchange\n               :                                 
              :                          :        +- HashAggregate\n               :                                               :                          :           +- Project\n               :                                               :                          :              +- BroadcastHashJoin\n               :                                               :                          :                 :- Project\n               :                                               :                          :                 :  +- BroadcastHashJoin\n               :                                               :                          :                 :     :- Filter\n               :                                               :                          :                 :     :  +- ColumnarToRow\n               :                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :     :           +- SubqueryBroadcast\n               :                                               :                          :                 :     :              +- BroadcastExchange\n               :                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :                          :                 :     :                    +- CometProject\n               :                                               :                          :                 :     :                       +- CometFilter\n               :                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 :     +- BroadcastExchange\n               :                                               :                          :                 :        +- BroadcastHashJoin\n               :                                               :                          :                 :           :- CometNativeColumnarToRow\n               :                                               :                          :                 :           :  +- CometFilter\n               :                                               :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :           +- BroadcastExchange\n               :                                               :                          :                 :              +- Project\n               :                                               :                          :                 :                 +- BroadcastHashJoin\n               :                                               :                          :                 :                    :- Project\n               :                                               :                          :                 :                    :  +- BroadcastHashJoin\n               :                                       
        :                          :                 :                    :     :- Filter\n               :                                               :                          :                 :                    :     :  +- ColumnarToRow\n               :                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :                    :     :           +- ReusedSubquery\n               :                                               :                          :                 :                    :     +- BroadcastExchange\n               :                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :                          :                 :                    :           +- CometFilter\n               :                                               :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :                    +- BroadcastExchange\n               :                                               :                          :                 :                       +- CometNativeColumnarToRow\n               :                                               :                          :                 :                          +- CometProject\n               :                                               :                          :                 :                             +- CometFilter\n               :                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 +- BroadcastExchange\n               :                                               :                          :                    +- CometNativeColumnarToRow\n               :                                               :                          :                       +- CometProject\n               :                                               :                          :                          +- CometFilter\n               :                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          +- BroadcastExchange\n               :                                               :                             +- Project\n               :                                               :                                +- BroadcastHashJoin\n               :                                               :                                   :- Project\n               :                                               :                                   :  +- BroadcastHashJoin\n               :                                               :                          
         :     :- Filter\n               :                                               :                                   :     :  +- ColumnarToRow\n               :                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                                   :     :           +- ReusedSubquery\n               :                                               :                                   :     +- BroadcastExchange\n               :                                               :                                   :        +- CometNativeColumnarToRow\n               :                                               :                                   :           +- CometFilter\n               :                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                                   +- BroadcastExchange\n               :                                               :                                      +- CometNativeColumnarToRow\n               :                                               :                                         +- CometProject\n               :                                               :                                            +- CometFilter\n               :                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               +- BroadcastExchange\n               :                                                  +- CometNativeColumnarToRow\n               :                                                     +- CometProject\n               :                                                        +- CometFilter\n               :                                                           +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- Filter\n                                          :  :  +- Subquery\n                                          :  :     +- HashAggregate\n                                          :  :        +- CometNativeColumnarToRow\n                                          :  :           +- CometColumnarExchange\n                                          :  :              +- HashAggregate\n                                          :  :                 +- Union\n                                          :  :                    :- Project\n                                          :  :                    :  +- BroadcastHashJoin\n                                          :  :                    :     :- ColumnarToRow\n                                          :  :                    :     :  +-  Scan parquet 
spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :  :                    :     :        +- ReusedSubquery\n                                          :  :                    :     +- BroadcastExchange\n                                          :  :                    :        +- CometNativeColumnarToRow\n                                          :  :                    :           +- CometProject\n                                          :  :                    :              +- CometFilter\n                                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  :                    :- Project\n                                          :  :                    :  +- BroadcastHashJoin\n                                          :  :                    :     :- ColumnarToRow\n                                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :  :                    :     :        +- SubqueryBroadcast\n                                          :  :                    :     :           +- BroadcastExchange\n                                          :  :                    :     :              +- CometNativeColumnarToRow\n                                          :  :                    :     :                 +- CometProject\n                                          :  :                    :     :                    +- CometFilter\n                                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  :                    :     +- BroadcastExchange\n                                          :  :                    :        +- CometNativeColumnarToRow\n                                          :  :                    :           +- CometProject\n                                          :  :                    :              +- CometFilter\n                                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  :                    +- Project\n                                          :  :                       +- BroadcastHashJoin\n                                          :  :                          :- ColumnarToRow\n                                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :  :                          :        +- ReusedSubquery\n                                          :  :                          +- BroadcastExchange\n                                          :  :                             +- CometNativeColumnarToRow\n                                          :  :                                +- CometProject\n                                          :  :                                   +- CometFilter\n                                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                         
                 :  +- HashAggregate\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometColumnarExchange\n                                          :           +- HashAggregate\n                                          :              +- Project\n                                          :                 +- BroadcastHashJoin\n                                          :                    :- Project\n                                          :                    :  +- BroadcastHashJoin\n                                          :                    :     :- BroadcastHashJoin\n                                          :                    :     :  :- Filter\n                                          :                    :     :  :  +- ColumnarToRow\n                                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :  :           +- SubqueryBroadcast\n                                          :                    :     :  :              +- BroadcastExchange\n                                          :                    :     :  :                 +- CometNativeColumnarToRow\n                                          :                    :     :  :                    +- CometProject\n                                          :                    :     :  :                       +- CometFilter\n                                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :  +- BroadcastExchange\n                                          :                    :     :     +- Project\n                                          :                    :     :        +- BroadcastHashJoin\n                                          :                    :     :           :- CometNativeColumnarToRow\n                                          :                    :     :           :  +- CometFilter\n                                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :           +- BroadcastExchange\n                                          :                    :     :              +- BroadcastHashJoin\n                                          :                    :     :                 :- CometNativeColumnarToRow\n                                          :                    :     :                 :  +- CometHashAggregate\n                                          :                    :     :                 :     +- CometColumnarExchange\n                                          :                    :     :                 :        +- HashAggregate\n                                          :                    :     :                 :           +- Project\n                                          :                    :     :                 :              +- BroadcastHashJoin\n                                          :                    :     :                 :                 :- Project\n                                          :                    :     :                 :                 :  +- BroadcastHashJoin\n    
                                      :                    :     :                 :                 :     :- Filter\n                                          :                    :     :                 :                 :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n                                          :                    :     :                 :                 :     :              +- BroadcastExchange\n                                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :     :                    +- CometProject\n                                          :                    :     :                 :                 :     :                       +- CometFilter\n                                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 :     +- BroadcastExchange\n                                          :                    :     :                 :                 :        +- BroadcastHashJoin\n                                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :           :  +- CometFilter\n                                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :           +- BroadcastExchange\n                                          :                    :     :                 :                 :              +- Project\n                                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :- Project\n                                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :     :- Filter\n                                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :        
            :     :           +- ReusedSubquery\n                                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n                                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                    :           +- CometFilter\n                                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :                    +- BroadcastExchange\n                                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                          +- CometProject\n                                          :                    :     :                 :                 :                             +- CometFilter\n                                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 +- BroadcastExchange\n                                          :                    :     :                 :                    +- CometNativeColumnarToRow\n                                          :                    :     :                 :                       +- CometProject\n                                          :                    :     :                 :                          +- CometFilter\n                                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 +- BroadcastExchange\n                                          :                    :     :                    +- Project\n                                          :                    :     :                       +- BroadcastHashJoin\n                                          :                    :     :                          :- Project\n                                          :                    :     :                          :  +- BroadcastHashJoin\n                                          :                    :     :                          :     :- Filter\n                                          :                    :     :                          :     :  +- ColumnarToRow\n                                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                          :     :           +- ReusedSubquery\n                                          :                    :     :                          :     +- BroadcastExchange\n                                          :            
        :     :                          :        +- CometNativeColumnarToRow\n                                          :                    :     :                          :           +- CometFilter\n                                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                          +- BroadcastExchange\n                                          :                    :     :                             +- CometNativeColumnarToRow\n                                          :                    :     :                                +- CometProject\n                                          :                    :     :                                   +- CometFilter\n                                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     +- BroadcastExchange\n                                          :                    :        +- BroadcastHashJoin\n                                          :                    :           :- CometNativeColumnarToRow\n                                          :                    :           :  +- CometFilter\n                                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :           +- BroadcastExchange\n                                          :                    :              +- Project\n                                          :                    :                 +- BroadcastHashJoin\n                                          :                    :                    :- CometNativeColumnarToRow\n                                          :                    :                    :  +- CometFilter\n                                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                    +- BroadcastExchange\n                                          :                    :                       +- BroadcastHashJoin\n                                          :                    :                          :- CometNativeColumnarToRow\n                                          :                    :                          :  +- CometHashAggregate\n                                          :                    :                          :     +- CometColumnarExchange\n                                          :                    :                          :        +- HashAggregate\n                                          :                    :                          :           +- Project\n                                          :                    :                          :              +- BroadcastHashJoin\n                                          :                    :                          :                 :- Project\n                                          :                    :                          :                 :  +- BroadcastHashJoin\n                                          :                    :                          :                 :     :- Filter\n                                          :            
        :                          :                 :     :  +- ColumnarToRow\n                                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :     :           +- SubqueryBroadcast\n                                          :                    :                          :                 :     :              +- BroadcastExchange\n                                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :                          :                 :     :                    +- CometProject\n                                          :                    :                          :                 :     :                       +- CometFilter\n                                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 :     +- BroadcastExchange\n                                          :                    :                          :                 :        +- BroadcastHashJoin\n                                          :                    :                          :                 :           :- CometNativeColumnarToRow\n                                          :                    :                          :                 :           :  +- CometFilter\n                                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :           +- BroadcastExchange\n                                          :                    :                          :                 :              +- Project\n                                          :                    :                          :                 :                 +- BroadcastHashJoin\n                                          :                    :                          :                 :                    :- Project\n                                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n                                          :                    :                          :                 :                    :     :- Filter\n                                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n                                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n                                          :               
     :                          :                 :                    :     +- BroadcastExchange\n                                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                    :           +- CometFilter\n                                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :                    +- BroadcastExchange\n                                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                          +- CometProject\n                                          :                    :                          :                 :                             +- CometFilter\n                                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 +- BroadcastExchange\n                                          :                    :                          :                    +- CometNativeColumnarToRow\n                                          :                    :                          :                       +- CometProject\n                                          :                    :                          :                          +- CometFilter\n                                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          +- BroadcastExchange\n                                          :                    :                             +- Project\n                                          :                    :                                +- BroadcastHashJoin\n                                          :                    :                                   :- Project\n                                          :                    :                                   :  +- BroadcastHashJoin\n                                          :                    :                                   :     :- Filter\n                                          :                    :                                   :     :  +- ColumnarToRow\n                                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                                   :     :           +- ReusedSubquery\n                                          :                    :                                   :     +- BroadcastExchange\n                                          :                    :                          
         :        +- CometNativeColumnarToRow\n                                          :                    :                                   :           +- CometFilter\n                                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                                   +- BroadcastExchange\n                                          :                    :                                      +- CometNativeColumnarToRow\n                                          :                    :                                         +- CometProject\n                                          :                    :                                            +- CometFilter\n                                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    +- BroadcastExchange\n                                          :                       +- CometNativeColumnarToRow\n                                          :                          +- CometProject\n                                          :                             +- CometFilter\n                                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :- Filter\n                                          :  :  +- ReusedSubquery\n                                          :  +- HashAggregate\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometColumnarExchange\n                                          :           +- HashAggregate\n                                          :              +- Project\n                                          :                 +- BroadcastHashJoin\n                                          :                    :- Project\n                                          :                    :  +- BroadcastHashJoin\n                                          :                    :     :- BroadcastHashJoin\n                                          :                    :     :  :- Filter\n                                          :                    :     :  :  +- ColumnarToRow\n                                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :  :           +- ReusedSubquery\n                                          :                    :     :  +- BroadcastExchange\n                                          :                    :     :     +- Project\n                                          :                    :     :        +- BroadcastHashJoin\n                                          :                    :     :           :- CometNativeColumnarToRow\n                                          :                    :     :           :  +- CometFilter\n                                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :           +- BroadcastExchange\n                   
                       :                    :     :              +- BroadcastHashJoin\n                                          :                    :     :                 :- CometNativeColumnarToRow\n                                          :                    :     :                 :  +- CometHashAggregate\n                                          :                    :     :                 :     +- CometColumnarExchange\n                                          :                    :     :                 :        +- HashAggregate\n                                          :                    :     :                 :           +- Project\n                                          :                    :     :                 :              +- BroadcastHashJoin\n                                          :                    :     :                 :                 :- Project\n                                          :                    :     :                 :                 :  +- BroadcastHashJoin\n                                          :                    :     :                 :                 :     :- Filter\n                                          :                    :     :                 :                 :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n                                          :                    :     :                 :                 :     :              +- BroadcastExchange\n                                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :     :                    +- CometProject\n                                          :                    :     :                 :                 :     :                       +- CometFilter\n                                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 :     +- BroadcastExchange\n                                          :                    :     :                 :                 :        +- BroadcastHashJoin\n                                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :           :  +- CometFilter\n                                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :           +- BroadcastExchange\n                                          :                    :     :                 :                 :              +- Project\n                                          :                    :     :       
          :                 :                 +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :- Project\n                                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :     :- Filter\n                                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n                                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n                                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                    :           +- CometFilter\n                                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :                    +- BroadcastExchange\n                                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                          +- CometProject\n                                          :                    :     :                 :                 :                             +- CometFilter\n                                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 +- BroadcastExchange\n                                          :                    :     :                 :                    +- CometNativeColumnarToRow\n                                          :                    :     :                 :                       +- CometProject\n                                          :                    :     :                 :                          +- CometFilter\n                                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 +- BroadcastExchange\n                                          :                    :     :                    +- Project\n                                          :                    :     :                     
  +- BroadcastHashJoin\n                                          :                    :     :                          :- Project\n                                          :                    :     :                          :  +- BroadcastHashJoin\n                                          :                    :     :                          :     :- Filter\n                                          :                    :     :                          :     :  +- ColumnarToRow\n                                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                          :     :           +- ReusedSubquery\n                                          :                    :     :                          :     +- BroadcastExchange\n                                          :                    :     :                          :        +- CometNativeColumnarToRow\n                                          :                    :     :                          :           +- CometFilter\n                                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                          +- BroadcastExchange\n                                          :                    :     :                             +- CometNativeColumnarToRow\n                                          :                    :     :                                +- CometProject\n                                          :                    :     :                                   +- CometFilter\n                                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     +- BroadcastExchange\n                                          :                    :        +- BroadcastHashJoin\n                                          :                    :           :- CometNativeColumnarToRow\n                                          :                    :           :  +- CometFilter\n                                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :           +- BroadcastExchange\n                                          :                    :              +- Project\n                                          :                    :                 +- BroadcastHashJoin\n                                          :                    :                    :- CometNativeColumnarToRow\n                                          :                    :                    :  +- CometFilter\n                                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                    +- BroadcastExchange\n                                          :                    :                       +- BroadcastHashJoin\n                                          :                    :                          :- 
CometNativeColumnarToRow\n                                          :                    :                          :  +- CometHashAggregate\n                                          :                    :                          :     +- CometColumnarExchange\n                                          :                    :                          :        +- HashAggregate\n                                          :                    :                          :           +- Project\n                                          :                    :                          :              +- BroadcastHashJoin\n                                          :                    :                          :                 :- Project\n                                          :                    :                          :                 :  +- BroadcastHashJoin\n                                          :                    :                          :                 :     :- Filter\n                                          :                    :                          :                 :     :  +- ColumnarToRow\n                                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :     :           +- SubqueryBroadcast\n                                          :                    :                          :                 :     :              +- BroadcastExchange\n                                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :                          :                 :     :                    +- CometProject\n                                          :                    :                          :                 :     :                       +- CometFilter\n                                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 :     +- BroadcastExchange\n                                          :                    :                          :                 :        +- BroadcastHashJoin\n                                          :                    :                          :                 :           :- CometNativeColumnarToRow\n                                          :                    :                          :                 :           :  +- CometFilter\n                                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :           +- BroadcastExchange\n                                          :                    :                          :                 :              +- Project\n                                          :                    :                          :                 :                 +- BroadcastHashJoin\n                                     
     :                    :                          :                 :                    :- Project\n                                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n                                          :                    :                          :                 :                    :     :- Filter\n                                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n                                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n                                          :                    :                          :                 :                    :     +- BroadcastExchange\n                                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                    :           +- CometFilter\n                                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :                    +- BroadcastExchange\n                                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                          +- CometProject\n                                          :                    :                          :                 :                             +- CometFilter\n                                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 +- BroadcastExchange\n                                          :                    :                          :                    +- CometNativeColumnarToRow\n                                          :                    :                          :                       +- CometProject\n                                          :                    :                          :                          +- CometFilter\n                                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          +- BroadcastExchange\n                                          :                    :                             +- Project\n                                          :                    :                                +- BroadcastHashJoin\n            
                              :                    :                                   :- Project\n                                          :                    :                                   :  +- BroadcastHashJoin\n                                          :                    :                                   :     :- Filter\n                                          :                    :                                   :     :  +- ColumnarToRow\n                                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                                   :     :           +- ReusedSubquery\n                                          :                    :                                   :     +- BroadcastExchange\n                                          :                    :                                   :        +- CometNativeColumnarToRow\n                                          :                    :                                   :           +- CometFilter\n                                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                                   +- BroadcastExchange\n                                          :                    :                                      +- CometNativeColumnarToRow\n                                          :                    :                                         +- CometProject\n                                          :                    :                                            +- CometFilter\n                                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    +- BroadcastExchange\n                                          :                       +- CometNativeColumnarToRow\n                                          :                          +- CometProject\n                                          :                             +- CometFilter\n                                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- Filter\n                                             :  +- ReusedSubquery\n                                             +- HashAggregate\n                                                +- CometNativeColumnarToRow\n                                                   +- CometColumnarExchange\n                                                      +- HashAggregate\n                                                         +- Project\n                                                            +- BroadcastHashJoin\n                                                               :- Project\n                                                               :  +- BroadcastHashJoin\n                                                               :     :- BroadcastHashJoin\n                                                               :     :  :- Filter\n                                                               :     :  :  +- ColumnarToRow\n           
                                                    :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :  :           +- ReusedSubquery\n                                                               :     :  +- BroadcastExchange\n                                                               :     :     +- Project\n                                                               :     :        +- BroadcastHashJoin\n                                                               :     :           :- CometNativeColumnarToRow\n                                                               :     :           :  +- CometFilter\n                                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :           +- BroadcastExchange\n                                                               :     :              +- BroadcastHashJoin\n                                                               :     :                 :- CometNativeColumnarToRow\n                                                               :     :                 :  +- CometHashAggregate\n                                                               :     :                 :     +- CometColumnarExchange\n                                                               :     :                 :        +- HashAggregate\n                                                               :     :                 :           +- Project\n                                                               :     :                 :              +- BroadcastHashJoin\n                                                               :     :                 :                 :- Project\n                                                               :     :                 :                 :  +- BroadcastHashJoin\n                                                               :     :                 :                 :     :- Filter\n                                                               :     :                 :                 :     :  +- ColumnarToRow\n                                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :                 :                 :     :           +- SubqueryBroadcast\n                                                               :     :                 :                 :     :              +- BroadcastExchange\n                                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                                               :     :                 :                 :     :                    +- CometProject\n                                                               :     :                 :                 :     :                       +- CometFilter\n                                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                          
                                     :     :                 :                 :     +- BroadcastExchange\n                                                               :     :                 :                 :        +- BroadcastHashJoin\n                                                               :     :                 :                 :           :- CometNativeColumnarToRow\n                                                               :     :                 :                 :           :  +- CometFilter\n                                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :                 :                 :           +- BroadcastExchange\n                                                               :     :                 :                 :              +- Project\n                                                               :     :                 :                 :                 +- BroadcastHashJoin\n                                                               :     :                 :                 :                    :- Project\n                                                               :     :                 :                 :                    :  +- BroadcastHashJoin\n                                                               :     :                 :                 :                    :     :- Filter\n                                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n                                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n                                                               :     :                 :                 :                    :     +- BroadcastExchange\n                                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                                               :     :                 :                 :                    :           +- CometFilter\n                                                               :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :                 :                 :                    +- BroadcastExchange\n                                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n                                                               :     :                 :                 :                          +- CometProject\n                                                               :     :                 :                 :                             +- CometFilter\n                                                               :     :                 :                 :                                +- 
CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :     :                 :                 +- BroadcastExchange\n                                                               :     :                 :                    +- CometNativeColumnarToRow\n                                                               :     :                 :                       +- CometProject\n                                                               :     :                 :                          +- CometFilter\n                                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :     :                 +- BroadcastExchange\n                                                               :     :                    +- Project\n                                                               :     :                       +- BroadcastHashJoin\n                                                               :     :                          :- Project\n                                                               :     :                          :  +- BroadcastHashJoin\n                                                               :     :                          :     :- Filter\n                                                               :     :                          :     :  +- ColumnarToRow\n                                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :                          :     :           +- ReusedSubquery\n                                                               :     :                          :     +- BroadcastExchange\n                                                               :     :                          :        +- CometNativeColumnarToRow\n                                                               :     :                          :           +- CometFilter\n                                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :                          +- BroadcastExchange\n                                                               :     :                             +- CometNativeColumnarToRow\n                                                               :     :                                +- CometProject\n                                                               :     :                                   +- CometFilter\n                                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :     +- BroadcastExchange\n                                                               :        +- BroadcastHashJoin\n                                                               :           :- CometNativeColumnarToRow\n                                                               :           :  +- CometFilter\n                                                           
    :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :           +- BroadcastExchange\n                                                               :              +- Project\n                                                               :                 +- BroadcastHashJoin\n                                                               :                    :- CometNativeColumnarToRow\n                                                               :                    :  +- CometFilter\n                                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                    +- BroadcastExchange\n                                                               :                       +- BroadcastHashJoin\n                                                               :                          :- CometNativeColumnarToRow\n                                                               :                          :  +- CometHashAggregate\n                                                               :                          :     +- CometColumnarExchange\n                                                               :                          :        +- HashAggregate\n                                                               :                          :           +- Project\n                                                               :                          :              +- BroadcastHashJoin\n                                                               :                          :                 :- Project\n                                                               :                          :                 :  +- BroadcastHashJoin\n                                                               :                          :                 :     :- Filter\n                                                               :                          :                 :     :  +- ColumnarToRow\n                                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :                          :                 :     :           +- SubqueryBroadcast\n                                                               :                          :                 :     :              +- BroadcastExchange\n                                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n                                                               :                          :                 :     :                    +- CometProject\n                                                               :                          :                 :     :                       +- CometFilter\n                                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :                          :                 :     +- BroadcastExchange\n                                  
                             :                          :                 :        +- BroadcastHashJoin\n                                                               :                          :                 :           :- CometNativeColumnarToRow\n                                                               :                          :                 :           :  +- CometFilter\n                                                               :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                          :                 :           +- BroadcastExchange\n                                                               :                          :                 :              +- Project\n                                                               :                          :                 :                 +- BroadcastHashJoin\n                                                               :                          :                 :                    :- Project\n                                                               :                          :                 :                    :  +- BroadcastHashJoin\n                                                               :                          :                 :                    :     :- Filter\n                                                               :                          :                 :                    :     :  +- ColumnarToRow\n                                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :                          :                 :                    :     :           +- ReusedSubquery\n                                                               :                          :                 :                    :     +- BroadcastExchange\n                                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n                                                               :                          :                 :                    :           +- CometFilter\n                                                               :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                          :                 :                    +- BroadcastExchange\n                                                               :                          :                 :                       +- CometNativeColumnarToRow\n                                                               :                          :                 :                          +- CometProject\n                                                               :                          :                 :                             +- CometFilter\n                                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                        
                                       :                          :                 +- BroadcastExchange\n                                                               :                          :                    +- CometNativeColumnarToRow\n                                                               :                          :                       +- CometProject\n                                                               :                          :                          +- CometFilter\n                                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :                          +- BroadcastExchange\n                                                               :                             +- Project\n                                                               :                                +- BroadcastHashJoin\n                                                               :                                   :- Project\n                                                               :                                   :  +- BroadcastHashJoin\n                                                               :                                   :     :- Filter\n                                                               :                                   :     :  +- ColumnarToRow\n                                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :                                   :     :           +- ReusedSubquery\n                                                               :                                   :     +- BroadcastExchange\n                                                               :                                   :        +- CometNativeColumnarToRow\n                                                               :                                   :           +- CometFilter\n                                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                                   +- BroadcastExchange\n                                                               :                                      +- CometNativeColumnarToRow\n                                                               :                                         +- CometProject\n                                                               :                                            +- CometFilter\n                                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               +- BroadcastExchange\n                                                                  +- CometNativeColumnarToRow\n                                                                     +- CometProject\n                                                                        +- CometFilter\n                                                                           +- CometNativeScan 
parquet spark_catalog.default.date_dim\n\nComet accelerated 842 out of 2302 eligible operators (36%). Final plan contains 475 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q14a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometFilter\n               :           :  :  +- Subquery\n               :           :  :     +- CometNativeColumnarToRow\n               :           :  :        +- CometHashAggregate\n               :           :  :           +- CometExchange\n               :           :  :              +- CometHashAggregate\n               :           :  :                 +- CometUnion\n               :           :  :                    :- CometProject\n               :           :  :                    :  +- CometBroadcastHashJoin\n               :           :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :  :                    :     :     +- ReusedSubquery\n               :           :  :                    :     +- CometBroadcastExchange\n               :           :  :                    :        +- CometProject\n               :           :  :                    :           +- CometFilter\n               :           :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  :                    :- CometProject\n               :           :  :                    :  +- CometBroadcastHashJoin\n               :           :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :  :                    :     :     +- SubqueryBroadcast\n               :           :  :                    :     :        +- BroadcastExchange\n               :           :  :                    :     :           +- CometNativeColumnarToRow\n               :           :  :                    :     :              +- CometProject\n               :           :  :                    :     :                 +- CometFilter\n               :           :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  :                    :     +- CometBroadcastExchange\n               :           :  :                    :        +- CometProject\n               :           :  :                    :           +- CometFilter\n               :           :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  :                    +- CometProject\n               :           :  :                       +- CometBroadcastHashJoin\n               :           :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :  :                          :     +- ReusedSubquery\n               :           :  :                          +- CometBroadcastExchange\n               :           :  :                             +- CometProject\n               :           :  :                                +- CometFilter\n               :           :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  
+- CometHashAggregate\n               :           :     +- CometExchange\n               :           :        +- CometHashAggregate\n               :           :           +- CometProject\n               :           :              +- CometBroadcastHashJoin\n               :           :                 :- CometProject\n               :           :                 :  +- CometBroadcastHashJoin\n               :           :                 :     :- CometBroadcastHashJoin\n               :           :                 :     :  :- CometFilter\n               :           :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :     :  :        +- SubqueryBroadcast\n               :           :                 :     :  :           +- BroadcastExchange\n               :           :                 :     :  :              +- CometNativeColumnarToRow\n               :           :                 :     :  :                 +- CometProject\n               :           :                 :     :  :                    +- CometFilter\n               :           :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :  +- CometBroadcastExchange\n               :           :                 :     :     +- CometProject\n               :           :                 :     :        +- CometBroadcastHashJoin\n               :           :                 :     :           :- CometFilter\n               :           :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :           +- CometBroadcastExchange\n               :           :                 :     :              +- CometBroadcastHashJoin\n               :           :                 :     :                 :- CometHashAggregate\n               :           :                 :     :                 :  +- CometExchange\n               :           :                 :     :                 :     +- CometHashAggregate\n               :           :                 :     :                 :        +- CometProject\n               :           :                 :     :                 :           +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :- CometProject\n               :           :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :     :- CometFilter\n               :           :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :           :                 :     :                 :              :     :           +- BroadcastExchange\n               :           :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :           :                 :     :                 :              :     :                 +- CometProject\n               :           :                 :     :                 :              :     :                    +- CometFilter\n               :  
         :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :           :- CometFilter\n               :           :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :           +- CometBroadcastExchange\n               :           :                 :     :                 :              :              +- CometProject\n               :           :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :- CometProject\n               :           :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :     :- CometFilter\n               :           :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :           :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :                    :        +- CometFilter\n               :           :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :                    +- CometBroadcastExchange\n               :           :                 :     :                 :              :                       +- CometProject\n               :           :                 :     :                 :              :                          +- CometFilter\n               :           :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              +- CometBroadcastExchange\n               :           :                 :     :                 :                 +- CometProject\n               :           :                 :     :                 :                    +- CometFilter\n               :           :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 +- CometBroadcastExchange\n               :           :                 :     :                    +- CometProject\n               :           :                 :     :             
          +- CometBroadcastHashJoin\n               :           :                 :     :                          :- CometProject\n               :           :                 :     :                          :  +- CometBroadcastHashJoin\n               :           :                 :     :                          :     :- CometFilter\n               :           :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :     :                          :     :        +- ReusedSubquery\n               :           :                 :     :                          :     +- CometBroadcastExchange\n               :           :                 :     :                          :        +- CometFilter\n               :           :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                          +- CometBroadcastExchange\n               :           :                 :     :                             +- CometProject\n               :           :                 :     :                                +- CometFilter\n               :           :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     +- CometBroadcastExchange\n               :           :                 :        +- CometBroadcastHashJoin\n               :           :                 :           :- CometFilter\n               :           :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :           +- CometBroadcastExchange\n               :           :                 :              +- CometProject\n               :           :                 :                 +- CometBroadcastHashJoin\n               :           :                 :                    :- CometFilter\n               :           :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                    +- CometBroadcastExchange\n               :           :                 :                       +- CometBroadcastHashJoin\n               :           :                 :                          :- CometHashAggregate\n               :           :                 :                          :  +- CometExchange\n               :           :                 :                          :     +- CometHashAggregate\n               :           :                 :                          :        +- CometProject\n               :           :                 :                          :           +- CometBroadcastHashJoin\n               :           :                 :                          :              :- CometProject\n               :           :                 :                          :              :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :     :- CometFilter\n               :           :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :               
           :              :     :        +- SubqueryBroadcast\n               :           :                 :                          :              :     :           +- BroadcastExchange\n               :           :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :           :                 :                          :              :     :                 +- CometProject\n               :           :                 :                          :              :     :                    +- CometFilter\n               :           :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :              :     +- CometBroadcastExchange\n               :           :                 :                          :              :        +- CometBroadcastHashJoin\n               :           :                 :                          :              :           :- CometFilter\n               :           :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :           +- CometBroadcastExchange\n               :           :                 :                          :              :              +- CometProject\n               :           :                 :                          :              :                 +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :- CometProject\n               :           :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :     :- CometFilter\n               :           :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :                          :              :                    :     :        +- ReusedSubquery\n               :           :                 :                          :              :                    :     +- CometBroadcastExchange\n               :           :                 :                          :              :                    :        +- CometFilter\n               :           :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :                    +- CometBroadcastExchange\n               :           :                 :                          :              :                       +- CometProject\n               :           :                 :                          :              :                          +- CometFilter\n               :           :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :     
         +- CometBroadcastExchange\n               :           :                 :                          :                 +- CometProject\n               :           :                 :                          :                    +- CometFilter\n               :           :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          +- CometBroadcastExchange\n               :           :                 :                             +- CometProject\n               :           :                 :                                +- CometBroadcastHashJoin\n               :           :                 :                                   :- CometProject\n               :           :                 :                                   :  +- CometBroadcastHashJoin\n               :           :                 :                                   :     :- CometFilter\n               :           :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :                                   :     :        +- ReusedSubquery\n               :           :                 :                                   :     +- CometBroadcastExchange\n               :           :                 :                                   :        +- CometFilter\n               :           :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                                   +- CometBroadcastExchange\n               :           :                 :                                      +- CometProject\n               :           :                 :                                         +- CometFilter\n               :           :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 +- CometBroadcastExchange\n               :           :                    +- CometProject\n               :           :                       +- CometFilter\n               :           :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :- CometFilter\n               :           :  :  +- ReusedSubquery\n               :           :  +- CometHashAggregate\n               :           :     +- CometExchange\n               :           :        +- CometHashAggregate\n               :           :           +- CometProject\n               :           :              +- CometBroadcastHashJoin\n               :           :                 :- CometProject\n               :           :                 :  +- CometBroadcastHashJoin\n               :           :                 :     :- CometBroadcastHashJoin\n               :           :                 :     :  :- CometFilter\n               :           :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :     :  :        +- ReusedSubquery\n               :           :                 :     :  +- CometBroadcastExchange\n               :           :                 :     :     +- 
CometProject\n               :           :                 :     :        +- CometBroadcastHashJoin\n               :           :                 :     :           :- CometFilter\n               :           :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :           +- CometBroadcastExchange\n               :           :                 :     :              +- CometBroadcastHashJoin\n               :           :                 :     :                 :- CometHashAggregate\n               :           :                 :     :                 :  +- CometExchange\n               :           :                 :     :                 :     +- CometHashAggregate\n               :           :                 :     :                 :        +- CometProject\n               :           :                 :     :                 :           +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :- CometProject\n               :           :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :     :- CometFilter\n               :           :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :           :                 :     :                 :              :     :           +- BroadcastExchange\n               :           :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :           :                 :     :                 :              :     :                 +- CometProject\n               :           :                 :     :                 :              :     :                    +- CometFilter\n               :           :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :           :- CometFilter\n               :           :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :           +- CometBroadcastExchange\n               :           :                 :     :                 :              :              +- CometProject\n               :           :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :- CometProject\n               :           :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :  
   :- CometFilter\n               :           :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :           :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :                    :        +- CometFilter\n               :           :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :                    +- CometBroadcastExchange\n               :           :                 :     :                 :              :                       +- CometProject\n               :           :                 :     :                 :              :                          +- CometFilter\n               :           :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              +- CometBroadcastExchange\n               :           :                 :     :                 :                 +- CometProject\n               :           :                 :     :                 :                    +- CometFilter\n               :           :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 +- CometBroadcastExchange\n               :           :                 :     :                    +- CometProject\n               :           :                 :     :                       +- CometBroadcastHashJoin\n               :           :                 :     :                          :- CometProject\n               :           :                 :     :                          :  +- CometBroadcastHashJoin\n               :           :                 :     :                          :     :- CometFilter\n               :           :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :     :                          :     :        +- ReusedSubquery\n               :           :                 :     :                          :     +- CometBroadcastExchange\n               :           :                 :     :                          :        +- CometFilter\n               :           :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                          +- CometBroadcastExchange\n               :           :                 :     :                             +- CometProject\n               :           :                 :     :                                +- CometFilter\n               :           :                 :     :                                   +- CometScan [native_iceberg_compat] 
parquet spark_catalog.default.date_dim\n               :           :                 :     +- CometBroadcastExchange\n               :           :                 :        +- CometBroadcastHashJoin\n               :           :                 :           :- CometFilter\n               :           :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :           +- CometBroadcastExchange\n               :           :                 :              +- CometProject\n               :           :                 :                 +- CometBroadcastHashJoin\n               :           :                 :                    :- CometFilter\n               :           :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                    +- CometBroadcastExchange\n               :           :                 :                       +- CometBroadcastHashJoin\n               :           :                 :                          :- CometHashAggregate\n               :           :                 :                          :  +- CometExchange\n               :           :                 :                          :     +- CometHashAggregate\n               :           :                 :                          :        +- CometProject\n               :           :                 :                          :           +- CometBroadcastHashJoin\n               :           :                 :                          :              :- CometProject\n               :           :                 :                          :              :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :     :- CometFilter\n               :           :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :                          :              :     :        +- SubqueryBroadcast\n               :           :                 :                          :              :     :           +- BroadcastExchange\n               :           :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :           :                 :                          :              :     :                 +- CometProject\n               :           :                 :                          :              :     :                    +- CometFilter\n               :           :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :              :     +- CometBroadcastExchange\n               :           :                 :                          :              :        +- CometBroadcastHashJoin\n               :           :                 :                          :              :           :- CometFilter\n               :           :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :           +- 
CometBroadcastExchange\n               :           :                 :                          :              :              +- CometProject\n               :           :                 :                          :              :                 +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :- CometProject\n               :           :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :     :- CometFilter\n               :           :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :                          :              :                    :     :        +- ReusedSubquery\n               :           :                 :                          :              :                    :     +- CometBroadcastExchange\n               :           :                 :                          :              :                    :        +- CometFilter\n               :           :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :                    +- CometBroadcastExchange\n               :           :                 :                          :              :                       +- CometProject\n               :           :                 :                          :              :                          +- CometFilter\n               :           :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :              +- CometBroadcastExchange\n               :           :                 :                          :                 +- CometProject\n               :           :                 :                          :                    +- CometFilter\n               :           :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          +- CometBroadcastExchange\n               :           :                 :                             +- CometProject\n               :           :                 :                                +- CometBroadcastHashJoin\n               :           :                 :                                   :- CometProject\n               :           :                 :                                   :  +- CometBroadcastHashJoin\n               :           :                 :                                   :     :- CometFilter\n               :           :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :                                   :     :        +- ReusedSubquery\n               :           :                 :                                   :     +- 
CometBroadcastExchange\n               :           :                 :                                   :        +- CometFilter\n               :           :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                                   +- CometBroadcastExchange\n               :           :                 :                                      +- CometProject\n               :           :                 :                                         +- CometFilter\n               :           :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 +- CometBroadcastExchange\n               :           :                    +- CometProject\n               :           :                       +- CometFilter\n               :           :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           +- CometFilter\n               :              :  +- ReusedSubquery\n               :              +- CometHashAggregate\n               :                 +- CometExchange\n               :                    +- CometHashAggregate\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometProject\n               :                             :  +- CometBroadcastHashJoin\n               :                             :     :- CometBroadcastHashJoin\n               :                             :     :  :- CometFilter\n               :                             :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                             :     :  :        +- ReusedSubquery\n               :                             :     :  +- CometBroadcastExchange\n               :                             :     :     +- CometProject\n               :                             :     :        +- CometBroadcastHashJoin\n               :                             :     :           :- CometFilter\n               :                             :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :           +- CometBroadcastExchange\n               :                             :     :              +- CometBroadcastHashJoin\n               :                             :     :                 :- CometHashAggregate\n               :                             :     :                 :  +- CometExchange\n               :                             :     :                 :     +- CometHashAggregate\n               :                             :     :                 :        +- CometProject\n               :                             :     :                 :           +- CometBroadcastHashJoin\n               :                             :     :                 :              :- CometProject\n               :                             :     :                 :              :  +- CometBroadcastHashJoin\n               :                             :     :                 :              :     :- CometFilter\n               :                             :     :                 :              :     :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                             :     :                 :              :     :        +- SubqueryBroadcast\n               :                             :     :                 :              :     :           +- BroadcastExchange\n               :                             :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                             :     :                 :              :     :                 +- CometProject\n               :                             :     :                 :              :     :                    +- CometFilter\n               :                             :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :     :                 :              :     +- CometBroadcastExchange\n               :                             :     :                 :              :        +- CometBroadcastHashJoin\n               :                             :     :                 :              :           :- CometFilter\n               :                             :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :                 :              :           +- CometBroadcastExchange\n               :                             :     :                 :              :              +- CometProject\n               :                             :     :                 :              :                 +- CometBroadcastHashJoin\n               :                             :     :                 :              :                    :- CometProject\n               :                             :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                             :     :                 :              :                    :     :- CometFilter\n               :                             :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                             :     :                 :              :                    :     :        +- ReusedSubquery\n               :                             :     :                 :              :                    :     +- CometBroadcastExchange\n               :                             :     :                 :              :                    :        +- CometFilter\n               :                             :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :                 :              :                    +- CometBroadcastExchange\n               :                             :     :                 :              :                       +- CometProject\n               :                             :     :                 :              :                          +- CometFilter\n               :                             :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :         
                    :     :                 :              +- CometBroadcastExchange\n               :                             :     :                 :                 +- CometProject\n               :                             :     :                 :                    +- CometFilter\n               :                             :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :     :                 +- CometBroadcastExchange\n               :                             :     :                    +- CometProject\n               :                             :     :                       +- CometBroadcastHashJoin\n               :                             :     :                          :- CometProject\n               :                             :     :                          :  +- CometBroadcastHashJoin\n               :                             :     :                          :     :- CometFilter\n               :                             :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                             :     :                          :     :        +- ReusedSubquery\n               :                             :     :                          :     +- CometBroadcastExchange\n               :                             :     :                          :        +- CometFilter\n               :                             :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :                          +- CometBroadcastExchange\n               :                             :     :                             +- CometProject\n               :                             :     :                                +- CometFilter\n               :                             :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :     +- CometBroadcastExchange\n               :                             :        +- CometBroadcastHashJoin\n               :                             :           :- CometFilter\n               :                             :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :           +- CometBroadcastExchange\n               :                             :              +- CometProject\n               :                             :                 +- CometBroadcastHashJoin\n               :                             :                    :- CometFilter\n               :                             :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                    +- CometBroadcastExchange\n               :                             :                       +- CometBroadcastHashJoin\n               :                             :                          :- CometHashAggregate\n               :                             :                          :  +- CometExchange\n               :                             :                          :     +- CometHashAggregate\n               :                             :              
            :        +- CometProject\n               :                             :                          :           +- CometBroadcastHashJoin\n               :                             :                          :              :- CometProject\n               :                             :                          :              :  +- CometBroadcastHashJoin\n               :                             :                          :              :     :- CometFilter\n               :                             :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                             :                          :              :     :        +- SubqueryBroadcast\n               :                             :                          :              :     :           +- BroadcastExchange\n               :                             :                          :              :     :              +- CometNativeColumnarToRow\n               :                             :                          :              :     :                 +- CometProject\n               :                             :                          :              :     :                    +- CometFilter\n               :                             :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :                          :              :     +- CometBroadcastExchange\n               :                             :                          :              :        +- CometBroadcastHashJoin\n               :                             :                          :              :           :- CometFilter\n               :                             :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                          :              :           +- CometBroadcastExchange\n               :                             :                          :              :              +- CometProject\n               :                             :                          :              :                 +- CometBroadcastHashJoin\n               :                             :                          :              :                    :- CometProject\n               :                             :                          :              :                    :  +- CometBroadcastHashJoin\n               :                             :                          :              :                    :     :- CometFilter\n               :                             :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                             :                          :              :                    :     :        +- ReusedSubquery\n               :                             :                          :              :                    :     +- CometBroadcastExchange\n               :                             :                          :              :                    :        +- CometFilter\n               :                             :                          :              :                    :           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                          :              :                    +- CometBroadcastExchange\n               :                             :                          :              :                       +- CometProject\n               :                             :                          :              :                          +- CometFilter\n               :                             :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :                          :              +- CometBroadcastExchange\n               :                             :                          :                 +- CometProject\n               :                             :                          :                    +- CometFilter\n               :                             :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :                          +- CometBroadcastExchange\n               :                             :                             +- CometProject\n               :                             :                                +- CometBroadcastHashJoin\n               :                             :                                   :- CometProject\n               :                             :                                   :  +- CometBroadcastHashJoin\n               :                             :                                   :     :- CometFilter\n               :                             :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                             :                                   :     :        +- ReusedSubquery\n               :                             :                                   :     +- CometBroadcastExchange\n               :                             :                                   :        +- CometFilter\n               :                             :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                                   +- CometBroadcastExchange\n               :                             :                                      +- CometProject\n               :                             :                                         +- CometFilter\n               :                             :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n   
            :                    :- CometFilter\n               :                    :  :  +- Subquery\n               :                    :  :     +- CometNativeColumnarToRow\n               :                    :  :        +- CometHashAggregate\n               :                    :  :           +- CometExchange\n               :                    :  :              +- CometHashAggregate\n               :                    :  :                 +- CometUnion\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :  :                    :     :     +- ReusedSubquery\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :  :                    :     :     +- SubqueryBroadcast\n               :                    :  :                    :     :        +- BroadcastExchange\n               :                    :  :                    :     :           +- CometNativeColumnarToRow\n               :                    :  :                    :     :              +- CometProject\n               :                    :  :                    :     :                 +- CometFilter\n               :                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    +- CometProject\n               :                    :  :                       +- CometBroadcastHashJoin\n               :                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :  :                          :     +- ReusedSubquery\n               :                    :  :                          +- CometBroadcastExchange\n               :                    :  :                             +- CometProject\n               :                    :  :                                +- CometFilter\n               :                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  +- 
CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :  :        +- SubqueryBroadcast\n               :                    :                 :     :  :           +- BroadcastExchange\n               :                    :                 :     :  :              +- CometNativeColumnarToRow\n               :                    :                 :     :  :                 +- CometProject\n               :                    :                 :     :  :                    +- CometFilter\n               :                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :          
    +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n       
        :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     
+- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    
:     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :       
             +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :- CometFilter\n               :                    :  :  +- ReusedSubquery\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :  :        +- ReusedSubquery\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- 
CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :        
         :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :        
            :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :   
     +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :           
               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometFilter\n               :                       :  +- ReusedSubquery\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastHashJoin\n               :                                      :     :  :- CometFilter\n               :                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :  :        +- ReusedSubquery\n               :                                      :     :  +- CometBroadcastExchange\n               :                                      :     :     +- CometProject\n               :                                      :     :        +- CometBroadcastHashJoin\n               :                                      :     :           :- CometFilter\n               :                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :           +- CometBroadcastExchange\n               :                                      :     :              +- CometBroadcastHashJoin\n               :                                      :     :                 :- CometHashAggregate\n               :                                      :     :                 :  +- CometExchange\n               :                                      :     :                 :     +- CometHashAggregate\n               :                                      :     :                 :        +- CometProject\n               :                                      :     :                 :           +- CometBroadcastHashJoin\n               :                                      :     :                 :              :- CometProject\n               :                                      :     :                 :              :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :     :- CometFilter\n               :                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :     :                 :              :     :        +- SubqueryBroadcast\n               :                                      :     :                 :              :     :           +- BroadcastExchange\n               :                                      :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                                      :     :                 :              :     :                 +- CometProject\n               :                                      :     :                 :              :     :                    +- CometFilter\n               :     
                                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              :     +- CometBroadcastExchange\n               :                                      :     :                 :              :        +- CometBroadcastHashJoin\n               :                                      :     :                 :              :           :- CometFilter\n               :                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :           +- CometBroadcastExchange\n               :                                      :     :                 :              :              +- CometProject\n               :                                      :     :                 :              :                 +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :- CometProject\n               :                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :     :- CometFilter\n               :                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :                 :              :                    :     :        +- ReusedSubquery\n               :                                      :     :                 :              :                    :     +- CometBroadcastExchange\n               :                                      :     :                 :              :                    :        +- CometFilter\n               :                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :                    +- CometBroadcastExchange\n               :                                      :     :                 :              :                       +- CometProject\n               :                                      :     :                 :              :                          +- CometFilter\n               :                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              +- CometBroadcastExchange\n               :                                      :     :                 :                 +- CometProject\n               :                                      :     :                 :                    +- CometFilter\n               :                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                     
                 :     :                 +- CometBroadcastExchange\n               :                                      :     :                    +- CometProject\n               :                                      :     :                       +- CometBroadcastHashJoin\n               :                                      :     :                          :- CometProject\n               :                                      :     :                          :  +- CometBroadcastHashJoin\n               :                                      :     :                          :     :- CometFilter\n               :                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :                          :     :        +- ReusedSubquery\n               :                                      :     :                          :     +- CometBroadcastExchange\n               :                                      :     :                          :        +- CometFilter\n               :                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                          +- CometBroadcastExchange\n               :                                      :     :                             +- CometProject\n               :                                      :     :                                +- CometFilter\n               :                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometBroadcastExchange\n               :                                      :        +- CometBroadcastHashJoin\n               :                                      :           :- CometFilter\n               :                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :           +- CometBroadcastExchange\n               :                                      :              +- CometProject\n               :                                      :                 +- CometBroadcastHashJoin\n               :                                      :                    :- CometFilter\n               :                                      :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                    +- CometBroadcastExchange\n               :                                      :                       +- CometBroadcastHashJoin\n               :                                      :                          :- CometHashAggregate\n               :                                      :                          :  +- CometExchange\n               :                                      :                          :     +- CometHashAggregate\n               :                                      :                          :        +- CometProject\n               :                                      :                          :           +- CometBroadcastHashJoin\n               :                                      :                    
      :              :- CometProject\n               :                                      :                          :              :  +- CometBroadcastHashJoin\n               :                                      :                          :              :     :- CometFilter\n               :                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :                          :              :     :        +- SubqueryBroadcast\n               :                                      :                          :              :     :           +- BroadcastExchange\n               :                                      :                          :              :     :              +- CometNativeColumnarToRow\n               :                                      :                          :              :     :                 +- CometProject\n               :                                      :                          :              :     :                    +- CometFilter\n               :                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              :     +- CometBroadcastExchange\n               :                                      :                          :              :        +- CometBroadcastHashJoin\n               :                                      :                          :              :           :- CometFilter\n               :                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :           +- CometBroadcastExchange\n               :                                      :                          :              :              +- CometProject\n               :                                      :                          :              :                 +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :- CometProject\n               :                                      :                          :              :                    :  +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :     :- CometFilter\n               :                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :                          :              :                    :     :        +- ReusedSubquery\n               :                                      :                          :              :                    :     +- CometBroadcastExchange\n               :                                      :                          :              :                    :        +- CometFilter\n               :                                      :                          :              :                    :           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :                    +- CometBroadcastExchange\n               :                                      :                          :              :                       +- CometProject\n               :                                      :                          :              :                          +- CometFilter\n               :                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              +- CometBroadcastExchange\n               :                                      :                          :                 +- CometProject\n               :                                      :                          :                    +- CometFilter\n               :                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          +- CometBroadcastExchange\n               :                                      :                             +- CometProject\n               :                                      :                                +- CometBroadcastHashJoin\n               :                                      :                                   :- CometProject\n               :                                      :                                   :  +- CometBroadcastHashJoin\n               :                                      :                                   :     :- CometFilter\n               :                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :                                   :     :        +- ReusedSubquery\n               :                                      :                                   :     +- CometBroadcastExchange\n               :                                      :                                   :        +- CometFilter\n               :                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                                   +- CometBroadcastExchange\n               :                                      :                                      +- CometProject\n               :                                      :                                         +- CometFilter\n               :                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometHashAggregate\n               :  +- 
CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometFilter\n               :                    :  :  +- Subquery\n               :                    :  :     +- CometNativeColumnarToRow\n               :                    :  :        +- CometHashAggregate\n               :                    :  :           +- CometExchange\n               :                    :  :              +- CometHashAggregate\n               :                    :  :                 +- CometUnion\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :  :                    :     :     +- ReusedSubquery\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :  :                    :     :     +- SubqueryBroadcast\n               :                    :  :                    :     :        +- BroadcastExchange\n               :                    :  :                    :     :           +- CometNativeColumnarToRow\n               :                    :  :                    :     :              +- CometProject\n               :                    :  :                    :     :                 +- CometFilter\n               :                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    +- CometProject\n               :                    :  :                       +- CometBroadcastHashJoin\n               :                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :  :                          :     +- ReusedSubquery\n               :                    :  :                          +- CometBroadcastExchange\n               :                    :  :                             +- CometProject\n               :                   
 :  :                                +- CometFilter\n               :                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :  :        +- SubqueryBroadcast\n               :                    :                 :     :  :           +- BroadcastExchange\n               :                    :                 :     :  :              +- CometNativeColumnarToRow\n               :                    :                 :     :  :                 +- CometProject\n               :                    :                 :     :  :                    +- CometFilter\n               :                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- 
SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :      
              :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- 
CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :- CometFilter\n               :                    :  :  +- ReusedSubquery\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :  :        +- ReusedSubquery\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              : 
    :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :             
 +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :     
            :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :   
  :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometFilter\n               :                       :  +- ReusedSubquery\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastHashJoin\n               :                                      :     :  :- CometFilter\n               :                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :  :        +- ReusedSubquery\n               :                                      :     :  +- CometBroadcastExchange\n               :                                      :     :     +- CometProject\n               :                                      :     :        +- CometBroadcastHashJoin\n               :                                      :     :           :- CometFilter\n               :                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :           +- CometBroadcastExchange\n               :                                      :     :              +- CometBroadcastHashJoin\n               :                                      :     :                 :- CometHashAggregate\n               :                                      :     :                 :  +- CometExchange\n               :                                      :     :                 :     +- CometHashAggregate\n               :                                      :     :                 :        +- CometProject\n               :                                      :     :                 :           +- CometBroadcastHashJoin\n               :                                      :     :                 :              :- CometProject\n               :                                      :     :                 :              :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :     :- CometFilter\n               :                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :     :                 :              :     :        +- SubqueryBroadcast\n               :                                      :     :                 :              :     :           +- BroadcastExchange\n               :                                      :     :                 :              :     :              +- 
CometNativeColumnarToRow\n               :                                      :     :                 :              :     :                 +- CometProject\n               :                                      :     :                 :              :     :                    +- CometFilter\n               :                                      :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              :     +- CometBroadcastExchange\n               :                                      :     :                 :              :        +- CometBroadcastHashJoin\n               :                                      :     :                 :              :           :- CometFilter\n               :                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :           +- CometBroadcastExchange\n               :                                      :     :                 :              :              +- CometProject\n               :                                      :     :                 :              :                 +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :- CometProject\n               :                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :     :- CometFilter\n               :                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :                 :              :                    :     :        +- ReusedSubquery\n               :                                      :     :                 :              :                    :     +- CometBroadcastExchange\n               :                                      :     :                 :              :                    :        +- CometFilter\n               :                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :                    +- CometBroadcastExchange\n               :                                      :     :                 :              :                       +- CometProject\n               :                                      :     :                 :              :                          +- CometFilter\n               :                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              +- CometBroadcastExchange\n               :                                      :     :                 :                 +- CometProject\n              
 :                                      :     :                 :                    +- CometFilter\n               :                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 +- CometBroadcastExchange\n               :                                      :     :                    +- CometProject\n               :                                      :     :                       +- CometBroadcastHashJoin\n               :                                      :     :                          :- CometProject\n               :                                      :     :                          :  +- CometBroadcastHashJoin\n               :                                      :     :                          :     :- CometFilter\n               :                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :                          :     :        +- ReusedSubquery\n               :                                      :     :                          :     +- CometBroadcastExchange\n               :                                      :     :                          :        +- CometFilter\n               :                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                          +- CometBroadcastExchange\n               :                                      :     :                             +- CometProject\n               :                                      :     :                                +- CometFilter\n               :                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometBroadcastExchange\n               :                                      :        +- CometBroadcastHashJoin\n               :                                      :           :- CometFilter\n               :                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :           +- CometBroadcastExchange\n               :                                      :              +- CometProject\n               :                                      :                 +- CometBroadcastHashJoin\n               :                                      :                    :- CometFilter\n               :                                      :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                    +- CometBroadcastExchange\n               :                                      :                       +- CometBroadcastHashJoin\n               :                                      :                          :- CometHashAggregate\n               :                                      :                          :  +- CometExchange\n               :                                      :                          :     +- 
CometHashAggregate\n               :                                      :                          :        +- CometProject\n               :                                      :                          :           +- CometBroadcastHashJoin\n               :                                      :                          :              :- CometProject\n               :                                      :                          :              :  +- CometBroadcastHashJoin\n               :                                      :                          :              :     :- CometFilter\n               :                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :                          :              :     :        +- SubqueryBroadcast\n               :                                      :                          :              :     :           +- BroadcastExchange\n               :                                      :                          :              :     :              +- CometNativeColumnarToRow\n               :                                      :                          :              :     :                 +- CometProject\n               :                                      :                          :              :     :                    +- CometFilter\n               :                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              :     +- CometBroadcastExchange\n               :                                      :                          :              :        +- CometBroadcastHashJoin\n               :                                      :                          :              :           :- CometFilter\n               :                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :           +- CometBroadcastExchange\n               :                                      :                          :              :              +- CometProject\n               :                                      :                          :              :                 +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :- CometProject\n               :                                      :                          :              :                    :  +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :     :- CometFilter\n               :                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :                          :              :                    :     :        +- ReusedSubquery\n               :                                      :                          :              :                    :  
   +- CometBroadcastExchange\n               :                                      :                          :              :                    :        +- CometFilter\n               :                                      :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :                    +- CometBroadcastExchange\n               :                                      :                          :              :                       +- CometProject\n               :                                      :                          :              :                          +- CometFilter\n               :                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              +- CometBroadcastExchange\n               :                                      :                          :                 +- CometProject\n               :                                      :                          :                    +- CometFilter\n               :                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          +- CometBroadcastExchange\n               :                                      :                             +- CometProject\n               :                                      :                                +- CometBroadcastHashJoin\n               :                                      :                                   :- CometProject\n               :                                      :                                   :  +- CometBroadcastHashJoin\n               :                                      :                                   :     :- CometFilter\n               :                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :                                   :     :        +- ReusedSubquery\n               :                                      :                                   :     +- CometBroadcastExchange\n               :                                      :                                   :        +- CometFilter\n               :                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                                   +- CometBroadcastExchange\n               :                                      :                                      +- CometProject\n               :                                      :                                         +- CometFilter\n               :                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      +- CometBroadcastExchange\n               :                               
          +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometFilter\n               :                    :  :  +- Subquery\n               :                    :  :     +- CometNativeColumnarToRow\n               :                    :  :        +- CometHashAggregate\n               :                    :  :           +- CometExchange\n               :                    :  :              +- CometHashAggregate\n               :                    :  :                 +- CometUnion\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :  :                    :     :     +- ReusedSubquery\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :  :                    :     :     +- SubqueryBroadcast\n               :                    :  :                    :     :        +- BroadcastExchange\n               :                    :  :                    :     :           +- CometNativeColumnarToRow\n               :                    :  :                    :     :              +- CometProject\n               :                    :  :                    :     :                 +- CometFilter\n               :                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    +- CometProject\n               :                    :  :                       +- CometBroadcastHashJoin\n               :                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n   
            :                    :  :                          :     +- ReusedSubquery\n               :                    :  :                          +- CometBroadcastExchange\n               :                    :  :                             +- CometProject\n               :                    :  :                                +- CometFilter\n               :                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :  :        +- SubqueryBroadcast\n               :                    :                 :     :  :           +- BroadcastExchange\n               :                    :                 :     :  :              +- CometNativeColumnarToRow\n               :                    :                 :     :  :                 +- CometProject\n               :                    :                 :     :  :                    +- CometFilter\n               :                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n      
         :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :           
         :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 
:                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                        
           +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :- CometFilter\n               :                    :  :  +- ReusedSubquery\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :  :        +- ReusedSubquery\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                
    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- 
CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 
:                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- 
CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometFilter\n               :                       :  +- ReusedSubquery\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastHashJoin\n               :                                      :     :  :- CometFilter\n               :                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :  :        +- ReusedSubquery\n               :                                      :     :  +- CometBroadcastExchange\n               :                                      :     :     +- CometProject\n               :                                      :     :        +- CometBroadcastHashJoin\n               :                                      :     :           :- CometFilter\n               :                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :           +- CometBroadcastExchange\n               :                                      :     :              +- CometBroadcastHashJoin\n               :                                      :     :                 :- CometHashAggregate\n               :                                      :     :                 :  +- CometExchange\n               :                                      :     :                 :     +- CometHashAggregate\n               :                                      :     :                 :        +- CometProject\n               :                                      :     :                 :           +- CometBroadcastHashJoin\n               :                                      :     :                 :              :- CometProject\n               :                                      :     :                 :              :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :     :- CometFilter\n               :                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :     :                 :              :     :        +- 
SubqueryBroadcast\n               :                                      :     :                 :              :     :           +- BroadcastExchange\n               :                                      :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                                      :     :                 :              :     :                 +- CometProject\n               :                                      :     :                 :              :     :                    +- CometFilter\n               :                                      :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              :     +- CometBroadcastExchange\n               :                                      :     :                 :              :        +- CometBroadcastHashJoin\n               :                                      :     :                 :              :           :- CometFilter\n               :                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :           +- CometBroadcastExchange\n               :                                      :     :                 :              :              +- CometProject\n               :                                      :     :                 :              :                 +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :- CometProject\n               :                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :     :- CometFilter\n               :                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :                 :              :                    :     :        +- ReusedSubquery\n               :                                      :     :                 :              :                    :     +- CometBroadcastExchange\n               :                                      :     :                 :              :                    :        +- CometFilter\n               :                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :                    +- CometBroadcastExchange\n               :                                      :     :                 :              :                       +- CometProject\n               :                                      :     :                 :              :                          +- CometFilter\n               :                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :                                      :     :                 :              +- CometBroadcastExchange\n               :                                      :     :                 :                 +- CometProject\n               :                                      :     :                 :                    +- CometFilter\n               :                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 +- CometBroadcastExchange\n               :                                      :     :                    +- CometProject\n               :                                      :     :                       +- CometBroadcastHashJoin\n               :                                      :     :                          :- CometProject\n               :                                      :     :                          :  +- CometBroadcastHashJoin\n               :                                      :     :                          :     :- CometFilter\n               :                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :                          :     :        +- ReusedSubquery\n               :                                      :     :                          :     +- CometBroadcastExchange\n               :                                      :     :                          :        +- CometFilter\n               :                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                          +- CometBroadcastExchange\n               :                                      :     :                             +- CometProject\n               :                                      :     :                                +- CometFilter\n               :                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometBroadcastExchange\n               :                                      :        +- CometBroadcastHashJoin\n               :                                      :           :- CometFilter\n               :                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :           +- CometBroadcastExchange\n               :                                      :              +- CometProject\n               :                                      :                 +- CometBroadcastHashJoin\n               :                                      :                    :- CometFilter\n               :                                      :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                    +- CometBroadcastExchange\n               :                                      :                       +- CometBroadcastHashJoin\n               :      
                                :                          :- CometHashAggregate\n               :                                      :                          :  +- CometExchange\n               :                                      :                          :     +- CometHashAggregate\n               :                                      :                          :        +- CometProject\n               :                                      :                          :           +- CometBroadcastHashJoin\n               :                                      :                          :              :- CometProject\n               :                                      :                          :              :  +- CometBroadcastHashJoin\n               :                                      :                          :              :     :- CometFilter\n               :                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :                          :              :     :        +- SubqueryBroadcast\n               :                                      :                          :              :     :           +- BroadcastExchange\n               :                                      :                          :              :     :              +- CometNativeColumnarToRow\n               :                                      :                          :              :     :                 +- CometProject\n               :                                      :                          :              :     :                    +- CometFilter\n               :                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              :     +- CometBroadcastExchange\n               :                                      :                          :              :        +- CometBroadcastHashJoin\n               :                                      :                          :              :           :- CometFilter\n               :                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :           +- CometBroadcastExchange\n               :                                      :                          :              :              +- CometProject\n               :                                      :                          :              :                 +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :- CometProject\n               :                                      :                          :              :                    :  +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :     :- CometFilter\n               :                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.catalog_sales\n               :                                      :                          :              :                    :     :        +- ReusedSubquery\n               :                                      :                          :              :                    :     +- CometBroadcastExchange\n               :                                      :                          :              :                    :        +- CometFilter\n               :                                      :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :                    +- CometBroadcastExchange\n               :                                      :                          :              :                       +- CometProject\n               :                                      :                          :              :                          +- CometFilter\n               :                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              +- CometBroadcastExchange\n               :                                      :                          :                 +- CometProject\n               :                                      :                          :                    +- CometFilter\n               :                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          +- CometBroadcastExchange\n               :                                      :                             +- CometProject\n               :                                      :                                +- CometBroadcastHashJoin\n               :                                      :                                   :- CometProject\n               :                                      :                                   :  +- CometBroadcastHashJoin\n               :                                      :                                   :     :- CometFilter\n               :                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :                                   :     :        +- ReusedSubquery\n               :                                      :                                   :     +- CometBroadcastExchange\n               :                                      :                                   :        +- CometFilter\n               :                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                                   +- CometBroadcastExchange\n               :                                      :                                      +- CometProject\n               :                                      :                                         +- 
CometFilter\n               :                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometFilter\n                                    :  :  +- Subquery\n                                    :  :     +- CometNativeColumnarToRow\n                                    :  :        +- CometHashAggregate\n                                    :  :           +- CometExchange\n                                    :  :              +- CometHashAggregate\n                                    :  :                 +- CometUnion\n                                    :  :                    :- CometProject\n                                    :  :                    :  +- CometBroadcastHashJoin\n                                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :  :                    :     :     +- ReusedSubquery\n                                    :  :                    :     +- CometBroadcastExchange\n                                    :  :                    :        +- CometProject\n                                    :  :                    :           +- CometFilter\n                                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :  :                    :- CometProject\n                                    :  :                    :  +- CometBroadcastHashJoin\n                                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :  :                    :     :     +- SubqueryBroadcast\n                                    :  :                    :     :        +- BroadcastExchange\n                                    :  :                    :     :           +- CometNativeColumnarToRow\n                                    :  :                    :     :              +- CometProject\n                                    :  :                    :     :                 +- CometFilter\n                                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :  :                    :     +- CometBroadcastExchange\n                                    :  :                    :        +- CometProject\n                                    :  :                    :           +- CometFilter\n                                    :  :                    :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n                                    :  :                    +- CometProject\n                                    :  :                       +- CometBroadcastHashJoin\n                                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :  :                          :     +- ReusedSubquery\n                                    :  :                          +- CometBroadcastExchange\n                                    :  :                             +- CometProject\n                                    :  :                                +- CometFilter\n                                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :  +- CometHashAggregate\n                                    :     +- CometExchange\n                                    :        +- CometHashAggregate\n                                    :           +- CometProject\n                                    :              +- CometBroadcastHashJoin\n                                    :                 :- CometProject\n                                    :                 :  +- CometBroadcastHashJoin\n                                    :                 :     :- CometBroadcastHashJoin\n                                    :                 :     :  :- CometFilter\n                                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :     :  :        +- SubqueryBroadcast\n                                    :                 :     :  :           +- BroadcastExchange\n                                    :                 :     :  :              +- CometNativeColumnarToRow\n                                    :                 :     :  :                 +- CometProject\n                                    :                 :     :  :                    +- CometFilter\n                                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :  +- CometBroadcastExchange\n                                    :                 :     :     +- CometProject\n                                    :                 :     :        +- CometBroadcastHashJoin\n                                    :                 :     :           :- CometFilter\n                                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :           +- CometBroadcastExchange\n                                    :                 :     :              +- CometBroadcastHashJoin\n                                    :                 :     :                 :- CometHashAggregate\n                                    :                 :     :                 :  +- CometExchange\n                                    :                 :     :                 :     +- CometHashAggregate\n                                    :                 :     :                 :        +- CometProject\n                                    :                 :     :                 :           +- CometBroadcastHashJoin\n        
                            :                 :     :                 :              :- CometProject\n                                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :     :- CometFilter\n                                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n                                    :                 :     :                 :              :     :           +- BroadcastExchange\n                                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n                                    :                 :     :                 :              :     :                 +- CometProject\n                                    :                 :     :                 :              :     :                    +- CometFilter\n                                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :           :- CometFilter\n                                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :           +- CometBroadcastExchange\n                                    :                 :     :                 :              :              +- CometProject\n                                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :- CometProject\n                                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :     :- CometFilter\n                                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n                                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :                    :        +- CometFilter\n                                    :                 :     :                 :              :                    :           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :                    +- CometBroadcastExchange\n                                    :                 :     :                 :              :                       +- CometProject\n                                    :                 :     :                 :              :                          +- CometFilter\n                                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              +- CometBroadcastExchange\n                                    :                 :     :                 :                 +- CometProject\n                                    :                 :     :                 :                    +- CometFilter\n                                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 +- CometBroadcastExchange\n                                    :                 :     :                    +- CometProject\n                                    :                 :     :                       +- CometBroadcastHashJoin\n                                    :                 :     :                          :- CometProject\n                                    :                 :     :                          :  +- CometBroadcastHashJoin\n                                    :                 :     :                          :     :- CometFilter\n                                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :     :                          :     :        +- ReusedSubquery\n                                    :                 :     :                          :     +- CometBroadcastExchange\n                                    :                 :     :                          :        +- CometFilter\n                                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                          +- CometBroadcastExchange\n                                    :                 :     :                             +- CometProject\n                                    :                 :     :                                +- CometFilter\n                                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     +- CometBroadcastExchange\n                                    :                 :        +- CometBroadcastHashJoin\n                                    :                 :           :- CometFilter\n                                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :           +- CometBroadcastExchange\n                 
                   :                 :              +- CometProject\n                                    :                 :                 +- CometBroadcastHashJoin\n                                    :                 :                    :- CometFilter\n                                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                    +- CometBroadcastExchange\n                                    :                 :                       +- CometBroadcastHashJoin\n                                    :                 :                          :- CometHashAggregate\n                                    :                 :                          :  +- CometExchange\n                                    :                 :                          :     +- CometHashAggregate\n                                    :                 :                          :        +- CometProject\n                                    :                 :                          :           +- CometBroadcastHashJoin\n                                    :                 :                          :              :- CometProject\n                                    :                 :                          :              :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :     :- CometFilter\n                                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :                          :              :     :        +- SubqueryBroadcast\n                                    :                 :                          :              :     :           +- BroadcastExchange\n                                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n                                    :                 :                          :              :     :                 +- CometProject\n                                    :                 :                          :              :     :                    +- CometFilter\n                                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              :     +- CometBroadcastExchange\n                                    :                 :                          :              :        +- CometBroadcastHashJoin\n                                    :                 :                          :              :           :- CometFilter\n                                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :           +- CometBroadcastExchange\n                                    :                 :                          :              :              +- CometProject\n                                    :                 :                          :              :                 +- 
CometBroadcastHashJoin\n                                    :                 :                          :              :                    :- CometProject\n                                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :                    :     :- CometFilter\n                                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :                          :              :                    :     :        +- ReusedSubquery\n                                    :                 :                          :              :                    :     +- CometBroadcastExchange\n                                    :                 :                          :              :                    :        +- CometFilter\n                                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :                    +- CometBroadcastExchange\n                                    :                 :                          :              :                       +- CometProject\n                                    :                 :                          :              :                          +- CometFilter\n                                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              +- CometBroadcastExchange\n                                    :                 :                          :                 +- CometProject\n                                    :                 :                          :                    +- CometFilter\n                                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          +- CometBroadcastExchange\n                                    :                 :                             +- CometProject\n                                    :                 :                                +- CometBroadcastHashJoin\n                                    :                 :                                   :- CometProject\n                                    :                 :                                   :  +- CometBroadcastHashJoin\n                                    :                 :                                   :     :- CometFilter\n                                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :                                   :     :        +- ReusedSubquery\n                                    :                 :                                   :     +- CometBroadcastExchange\n          
                          :                 :                                   :        +- CometFilter\n                                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                                   +- CometBroadcastExchange\n                                    :                 :                                      +- CometProject\n                                    :                 :                                         +- CometFilter\n                                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 +- CometBroadcastExchange\n                                    :                    +- CometProject\n                                    :                       +- CometFilter\n                                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :- CometFilter\n                                    :  :  +- ReusedSubquery\n                                    :  +- CometHashAggregate\n                                    :     +- CometExchange\n                                    :        +- CometHashAggregate\n                                    :           +- CometProject\n                                    :              +- CometBroadcastHashJoin\n                                    :                 :- CometProject\n                                    :                 :  +- CometBroadcastHashJoin\n                                    :                 :     :- CometBroadcastHashJoin\n                                    :                 :     :  :- CometFilter\n                                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :     :  :        +- ReusedSubquery\n                                    :                 :     :  +- CometBroadcastExchange\n                                    :                 :     :     +- CometProject\n                                    :                 :     :        +- CometBroadcastHashJoin\n                                    :                 :     :           :- CometFilter\n                                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :           +- CometBroadcastExchange\n                                    :                 :     :              +- CometBroadcastHashJoin\n                                    :                 :     :                 :- CometHashAggregate\n                                    :                 :     :                 :  +- CometExchange\n                                    :                 :     :                 :     +- CometHashAggregate\n                                    :                 :     :                 :        +- CometProject\n                                    :                 :     :                 :           +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :- CometProject\n                                    :  
               :     :                 :              :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :     :- CometFilter\n                                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n                                    :                 :     :                 :              :     :           +- BroadcastExchange\n                                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n                                    :                 :     :                 :              :     :                 +- CometProject\n                                    :                 :     :                 :              :     :                    +- CometFilter\n                                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :           :- CometFilter\n                                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :           +- CometBroadcastExchange\n                                    :                 :     :                 :              :              +- CometProject\n                                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :- CometProject\n                                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :     :- CometFilter\n                                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n                                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :                    :        +- CometFilter\n                                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :   
                 +- CometBroadcastExchange\n                                    :                 :     :                 :              :                       +- CometProject\n                                    :                 :     :                 :              :                          +- CometFilter\n                                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              +- CometBroadcastExchange\n                                    :                 :     :                 :                 +- CometProject\n                                    :                 :     :                 :                    +- CometFilter\n                                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 +- CometBroadcastExchange\n                                    :                 :     :                    +- CometProject\n                                    :                 :     :                       +- CometBroadcastHashJoin\n                                    :                 :     :                          :- CometProject\n                                    :                 :     :                          :  +- CometBroadcastHashJoin\n                                    :                 :     :                          :     :- CometFilter\n                                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :     :                          :     :        +- ReusedSubquery\n                                    :                 :     :                          :     +- CometBroadcastExchange\n                                    :                 :     :                          :        +- CometFilter\n                                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                          +- CometBroadcastExchange\n                                    :                 :     :                             +- CometProject\n                                    :                 :     :                                +- CometFilter\n                                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     +- CometBroadcastExchange\n                                    :                 :        +- CometBroadcastHashJoin\n                                    :                 :           :- CometFilter\n                                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :           +- CometBroadcastExchange\n                                    :                 :              +- CometProject\n                                    :                 :                 +- 
CometBroadcastHashJoin\n                                    :                 :                    :- CometFilter\n                                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                    +- CometBroadcastExchange\n                                    :                 :                       +- CometBroadcastHashJoin\n                                    :                 :                          :- CometHashAggregate\n                                    :                 :                          :  +- CometExchange\n                                    :                 :                          :     +- CometHashAggregate\n                                    :                 :                          :        +- CometProject\n                                    :                 :                          :           +- CometBroadcastHashJoin\n                                    :                 :                          :              :- CometProject\n                                    :                 :                          :              :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :     :- CometFilter\n                                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :                          :              :     :        +- SubqueryBroadcast\n                                    :                 :                          :              :     :           +- BroadcastExchange\n                                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n                                    :                 :                          :              :     :                 +- CometProject\n                                    :                 :                          :              :     :                    +- CometFilter\n                                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              :     +- CometBroadcastExchange\n                                    :                 :                          :              :        +- CometBroadcastHashJoin\n                                    :                 :                          :              :           :- CometFilter\n                                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :           +- CometBroadcastExchange\n                                    :                 :                          :              :              +- CometProject\n                                    :                 :                          :              :                 +- CometBroadcastHashJoin\n                                    :                 :                          :              :                    :- 
CometProject\n                                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :                    :     :- CometFilter\n                                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :                          :              :                    :     :        +- ReusedSubquery\n                                    :                 :                          :              :                    :     +- CometBroadcastExchange\n                                    :                 :                          :              :                    :        +- CometFilter\n                                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :                    +- CometBroadcastExchange\n                                    :                 :                          :              :                       +- CometProject\n                                    :                 :                          :              :                          +- CometFilter\n                                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              +- CometBroadcastExchange\n                                    :                 :                          :                 +- CometProject\n                                    :                 :                          :                    +- CometFilter\n                                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          +- CometBroadcastExchange\n                                    :                 :                             +- CometProject\n                                    :                 :                                +- CometBroadcastHashJoin\n                                    :                 :                                   :- CometProject\n                                    :                 :                                   :  +- CometBroadcastHashJoin\n                                    :                 :                                   :     :- CometFilter\n                                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :                                   :     :        +- ReusedSubquery\n                                    :                 :                                   :     +- CometBroadcastExchange\n                                    :                 :                                   :        +- CometFilter\n                                    :  
               :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                                   +- CometBroadcastExchange\n                                    :                 :                                      +- CometProject\n                                    :                 :                                         +- CometFilter\n                                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 +- CometBroadcastExchange\n                                    :                    +- CometProject\n                                    :                       +- CometFilter\n                                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- CometFilter\n                                       :  +- ReusedSubquery\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometBroadcastHashJoin\n                                                      :     :  :- CometFilter\n                                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                      :     :  :        +- ReusedSubquery\n                                                      :     :  +- CometBroadcastExchange\n                                                      :     :     +- CometProject\n                                                      :     :        +- CometBroadcastHashJoin\n                                                      :     :           :- CometFilter\n                                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :           +- CometBroadcastExchange\n                                                      :     :              +- CometBroadcastHashJoin\n                                                      :     :                 :- CometHashAggregate\n                                                      :     :                 :  +- CometExchange\n                                                      :     :                 :     +- CometHashAggregate\n                                                      :     :                 :        +- CometProject\n                                                      :     :                 :           +- CometBroadcastHashJoin\n                                                      :     :                 :              :- CometProject\n                                                      :     :                 :              :  +- CometBroadcastHashJoin\n                                                      :     :   
              :              :     :- CometFilter\n                                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :                 :              :     :        +- SubqueryBroadcast\n                                                      :     :                 :              :     :           +- BroadcastExchange\n                                                      :     :                 :              :     :              +- CometNativeColumnarToRow\n                                                      :     :                 :              :     :                 +- CometProject\n                                                      :     :                 :              :     :                    +- CometFilter\n                                                      :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :                 :              :     +- CometBroadcastExchange\n                                                      :     :                 :              :        +- CometBroadcastHashJoin\n                                                      :     :                 :              :           :- CometFilter\n                                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :                 :              :           +- CometBroadcastExchange\n                                                      :     :                 :              :              +- CometProject\n                                                      :     :                 :              :                 +- CometBroadcastHashJoin\n                                                      :     :                 :              :                    :- CometProject\n                                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                                      :     :                 :              :                    :     :- CometFilter\n                                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                                      :     :                 :              :                    :     :        +- ReusedSubquery\n                                                      :     :                 :              :                    :     +- CometBroadcastExchange\n                                                      :     :                 :              :                    :        +- CometFilter\n                                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :                 :              :                    +- CometBroadcastExchange\n                                                      :     :                 :              :          
             +- CometProject\n                                                      :     :                 :              :                          +- CometFilter\n                                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :                 :              +- CometBroadcastExchange\n                                                      :     :                 :                 +- CometProject\n                                                      :     :                 :                    +- CometFilter\n                                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :                 +- CometBroadcastExchange\n                                                      :     :                    +- CometProject\n                                                      :     :                       +- CometBroadcastHashJoin\n                                                      :     :                          :- CometProject\n                                                      :     :                          :  +- CometBroadcastHashJoin\n                                                      :     :                          :     :- CometFilter\n                                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                      :     :                          :     :        +- ReusedSubquery\n                                                      :     :                          :     +- CometBroadcastExchange\n                                                      :     :                          :        +- CometFilter\n                                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :                          +- CometBroadcastExchange\n                                                      :     :                             +- CometProject\n                                                      :     :                                +- CometFilter\n                                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     +- CometBroadcastExchange\n                                                      :        +- CometBroadcastHashJoin\n                                                      :           :- CometFilter\n                                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :           +- CometBroadcastExchange\n                                                      :              +- CometProject\n                                                      :                 +- CometBroadcastHashJoin\n                                                      :                    :- CometFilter\n                                              
        :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :                    +- CometBroadcastExchange\n                                                      :                       +- CometBroadcastHashJoin\n                                                      :                          :- CometHashAggregate\n                                                      :                          :  +- CometExchange\n                                                      :                          :     +- CometHashAggregate\n                                                      :                          :        +- CometProject\n                                                      :                          :           +- CometBroadcastHashJoin\n                                                      :                          :              :- CometProject\n                                                      :                          :              :  +- CometBroadcastHashJoin\n                                                      :                          :              :     :- CometFilter\n                                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :                          :              :     :        +- SubqueryBroadcast\n                                                      :                          :              :     :           +- BroadcastExchange\n                                                      :                          :              :     :              +- CometNativeColumnarToRow\n                                                      :                          :              :     :                 +- CometProject\n                                                      :                          :              :     :                    +- CometFilter\n                                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :                          :              :     +- CometBroadcastExchange\n                                                      :                          :              :        +- CometBroadcastHashJoin\n                                                      :                          :              :           :- CometFilter\n                                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :                          :              :           +- CometBroadcastExchange\n                                                      :                          :              :              +- CometProject\n                                                      :                          :              :                 +- CometBroadcastHashJoin\n                                                      :                          :              :                    :- CometProject\n                                                      :                          :              :                    :  +- CometBroadcastHashJoin\n          
                                            :                          :              :                    :     :- CometFilter\n                                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                                      :                          :              :                    :     :        +- ReusedSubquery\n                                                      :                          :              :                    :     +- CometBroadcastExchange\n                                                      :                          :              :                    :        +- CometFilter\n                                                      :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :                          :              :                    +- CometBroadcastExchange\n                                                      :                          :              :                       +- CometProject\n                                                      :                          :              :                          +- CometFilter\n                                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :                          :              +- CometBroadcastExchange\n                                                      :                          :                 +- CometProject\n                                                      :                          :                    +- CometFilter\n                                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :                          +- CometBroadcastExchange\n                                                      :                             +- CometProject\n                                                      :                                +- CometBroadcastHashJoin\n                                                      :                                   :- CometProject\n                                                      :                                   :  +- CometBroadcastHashJoin\n                                                      :                                   :     :- CometFilter\n                                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                      :                                   :     :        +- ReusedSubquery\n                                                      :                                   :     +- CometBroadcastExchange\n                                                      :                                   :        +- CometFilter\n                                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                   
                   :                                   +- CometBroadcastExchange\n                                                      :                                      +- CometProject\n                                                      :                                         +- CometFilter\n                                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 2127 out of 2302 eligible operators (92%). Final plan contains 46 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q18a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Union\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- SubqueryBroadcast\n   :                 :     :     :     :     :     :              +- BroadcastExchange\n   :                 :     :     :     :     :     :                 +- CometNativeColumnarToRow\n   :                 :     :     :     :     :     :                    +- CometProject\n   :                 :     :     :     :     :     :                       +- CometFilter\n   :                 :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- 
CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :             
 +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 
:     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Project\n                     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :- Project\n                     :     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :     :- Filter\n                     :     :     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     :     :     +-  
Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :     :     :           +- ReusedSubquery\n                     :     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :     :           +- CometProject\n                     :     :     :     :     :              +- CometFilter\n                     :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :           +- CometProject\n                     :     :     :     :              +- CometFilter\n                     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 92 out of 210 eligible operators (43%). Final plan contains 41 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q18a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :     :     :     :     :        +- SubqueryBroadcast\n      :              :     :     :     :     :     :           +- BroadcastExchange\n      :              :     :     :     :     :     :              +- CometNativeColumnarToRow\n      :              :     :     :     :     :     :                 +- CometProject\n      :              :     :     :     :     :     :                    +- CometFilter\n      :              :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :      
        :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :     :     :     :     :        +- ReusedSubquery\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :    
 :     :     :     :     :        +- ReusedSubquery\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :     :     :     :     :        +- ReusedSubquery\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              
:     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :- CometProject\n                     :     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :     :- CometFilter\n                     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :     :     :     :     :     :        +- ReusedSubquery\n                     :     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :     :        +- CometProject\n                     :     :     :     :     :           +- CometFilter\n                     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :        +- CometProject\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 204 out of 210 eligible operators (97%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q20.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q20.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q22.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastNestedLoopJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Filter\n                     :     :     :  +- ColumnarToRow\n                     :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :           +- SubqueryBroadcast\n                     :     :     :              +- BroadcastExchange\n                     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :                    +- CometProject\n                     :     :     :                       +- CometFilter\n                     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometNativeScan parquet spark_catalog.default.warehouse\n\nComet accelerated 11 out of 28 eligible operators (39%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q22.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n                     :- CometNativeColumnarToRow\n                     :  +- CometProject\n                     :     +- CometBroadcastHashJoin\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometFilter\n                     :        :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                     :        :     :        +- SubqueryBroadcast\n                     :        :     :           +- BroadcastExchange\n                     :        :     :              +- CometNativeColumnarToRow\n                     :        :     :                 +- CometProject\n                     :        :     :                    +- CometFilter\n                     :        :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        +- CometBroadcastExchange\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n\nComet accelerated 19 out of 28 eligible operators (67%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q22a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Union\n   :- HashAggregate\n   :  +- HashAggregate\n   :     +- HashAggregate\n   :        +- CometNativeColumnarToRow\n   :           +- CometColumnarExchange\n   :              +- HashAggregate\n   :                 +- Project\n   :                    +- BroadcastHashJoin\n   :                       :- Project\n   :                       :  +- BroadcastHashJoin\n   :                       :     :- Project\n   :                       :     :  +- BroadcastHashJoin\n   :                       :     :     :- Filter\n   :                       :     :     :  +- ColumnarToRow\n   :                       :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                       :     :     :           +- SubqueryBroadcast\n   :                       :     :     :              +- BroadcastExchange\n   :                       :     :     :                 +- CometNativeColumnarToRow\n   :                       :     :     :                    +- CometProject\n   :                       :     :     :                       +- CometFilter\n   :                       :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                       :     :     +- BroadcastExchange\n   :                       :     :        +- CometNativeColumnarToRow\n   :                       :     :           +- CometProject\n   :                       :     :              +- CometFilter\n   :                       :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                       :     +- BroadcastExchange\n   :                       :        +- CometNativeColumnarToRow\n   :                       :           +- CometProject\n   :                       :              +- CometFilter\n   :                       :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                       +- BroadcastExchange\n   :                          +- CometNativeColumnarToRow\n   :                             +- CometFilter\n   :                                +- CometNativeScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- HashAggregate\n   :              +- CometNativeColumnarToRow\n   :                 +- CometColumnarExchange\n   :                    +- HashAggregate\n   :                       +- Project\n   :                          +- BroadcastHashJoin\n   :                             :- Project\n   :                             :  +- BroadcastHashJoin\n   :                             :     :- Project\n   :                             :     :  +- BroadcastHashJoin\n   :                             :     :     :- Filter\n   :                             :     :     :  +- ColumnarToRow\n   :                             :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                             :     :     :           +- SubqueryBroadcast\n   :                             :     :     :              +- BroadcastExchange\n   :                             :     :     :                 +- CometNativeColumnarToRow\n   :                             :     :     :                    +- CometProject\n   :                             :     :     :           
            +- CometFilter\n   :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     :     +- BroadcastExchange\n   :                             :     :        +- CometNativeColumnarToRow\n   :                             :     :           +- CometProject\n   :                             :     :              +- CometFilter\n   :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     +- BroadcastExchange\n   :                             :        +- CometNativeColumnarToRow\n   :                             :           +- CometProject\n   :                             :              +- CometFilter\n   :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                             +- BroadcastExchange\n   :                                +- CometNativeColumnarToRow\n   :                                   +- CometFilter\n   :                                      +- CometNativeScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- HashAggregate\n   :              +- CometNativeColumnarToRow\n   :                 +- CometColumnarExchange\n   :                    +- HashAggregate\n   :                       +- Project\n   :                          +- BroadcastHashJoin\n   :                             :- Project\n   :                             :  +- BroadcastHashJoin\n   :                             :     :- Project\n   :                             :     :  +- BroadcastHashJoin\n   :                             :     :     :- Filter\n   :                             :     :     :  +- ColumnarToRow\n   :                             :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                             :     :     :           +- SubqueryBroadcast\n   :                             :     :     :              +- BroadcastExchange\n   :                             :     :     :                 +- CometNativeColumnarToRow\n   :                             :     :     :                    +- CometProject\n   :                             :     :     :                       +- CometFilter\n   :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     :     +- BroadcastExchange\n   :                             :     :        +- CometNativeColumnarToRow\n   :                             :     :           +- CometProject\n   :                             :     :              +- CometFilter\n   :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     +- BroadcastExchange\n   :                             :        +- CometNativeColumnarToRow\n   :                             :           +- CometProject\n   :                             :              +- CometFilter\n   :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                             +- BroadcastExchange\n   :                                +- CometNativeColumnarToRow\n   :                     
              +- CometFilter\n   :                                      +- CometNativeScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- HashAggregate\n   :              +- CometNativeColumnarToRow\n   :                 +- CometColumnarExchange\n   :                    +- HashAggregate\n   :                       +- Project\n   :                          +- BroadcastHashJoin\n   :                             :- Project\n   :                             :  +- BroadcastHashJoin\n   :                             :     :- Project\n   :                             :     :  +- BroadcastHashJoin\n   :                             :     :     :- Filter\n   :                             :     :     :  +- ColumnarToRow\n   :                             :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                             :     :     :           +- SubqueryBroadcast\n   :                             :     :     :              +- BroadcastExchange\n   :                             :     :     :                 +- CometNativeColumnarToRow\n   :                             :     :     :                    +- CometProject\n   :                             :     :     :                       +- CometFilter\n   :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     :     +- BroadcastExchange\n   :                             :     :        +- CometNativeColumnarToRow\n   :                             :     :           +- CometProject\n   :                             :     :              +- CometFilter\n   :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     +- BroadcastExchange\n   :                             :        +- CometNativeColumnarToRow\n   :                             :           +- CometProject\n   :                             :              +- CometFilter\n   :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                             +- BroadcastExchange\n   :                                +- CometNativeColumnarToRow\n   :                                   +- CometFilter\n   :                                      +- CometNativeScan parquet spark_catalog.default.warehouse\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :  
   :     :           +- SubqueryBroadcast\n                                 :     :     :              +- BroadcastExchange\n                                 :     :     :                 +- CometNativeColumnarToRow\n                                 :     :     :                    +- CometProject\n                                 :     :     :                       +- CometFilter\n                                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.warehouse\n\nComet accelerated 64 out of 151 eligible operators (42%). Final plan contains 34 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q22a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometHashAggregate\n      :     +- CometHashAggregate\n      :        +- CometExchange\n      :           +- CometHashAggregate\n      :              +- CometProject\n      :                 +- CometBroadcastHashJoin\n      :                    :- CometProject\n      :                    :  +- CometBroadcastHashJoin\n      :                    :     :- CometProject\n      :                    :     :  +- CometBroadcastHashJoin\n      :                    :     :     :- CometFilter\n      :                    :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                    :     :     :        +- SubqueryBroadcast\n      :                    :     :     :           +- BroadcastExchange\n      :                    :     :     :              +- CometNativeColumnarToRow\n      :                    :     :     :                 +- CometProject\n      :                    :     :     :                    +- CometFilter\n      :                    :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                    :     :     +- CometBroadcastExchange\n      :                    :     :        +- CometProject\n      :                    :     :           +- CometFilter\n      :                    :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                    :     +- CometBroadcastExchange\n      :                    :        +- CometProject\n      :                    :           +- CometFilter\n      :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                    +- CometBroadcastExchange\n      :                       +- CometFilter\n      :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometHashAggregate\n      :           +- CometExchange\n      :              +- CometHashAggregate\n      :                 +- CometProject\n      :                    +- CometBroadcastHashJoin\n      :                       :- CometProject\n      :                       :  +- CometBroadcastHashJoin\n      :                       :     :- CometProject\n      :                       :     :  +- CometBroadcastHashJoin\n      :                       :     :     :- CometFilter\n      :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                       :     :     :        +- SubqueryBroadcast\n      :                       :     :     :           +- BroadcastExchange\n      :                       :     :     :              +- CometNativeColumnarToRow\n      :                       :     :     :                 +- CometProject\n      :                       :     :     :                    +- CometFilter\n      :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     :     +- CometBroadcastExchange\n      :                       :     :        +- CometProject\n      :                       :     :           +- CometFilter\n      :                       :     :              +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     +- CometBroadcastExchange\n      :                       :        +- CometProject\n      :                       :           +- CometFilter\n      :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                       +- CometBroadcastExchange\n      :                          +- CometFilter\n      :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometHashAggregate\n      :           +- CometExchange\n      :              +- CometHashAggregate\n      :                 +- CometProject\n      :                    +- CometBroadcastHashJoin\n      :                       :- CometProject\n      :                       :  +- CometBroadcastHashJoin\n      :                       :     :- CometProject\n      :                       :     :  +- CometBroadcastHashJoin\n      :                       :     :     :- CometFilter\n      :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                       :     :     :        +- SubqueryBroadcast\n      :                       :     :     :           +- BroadcastExchange\n      :                       :     :     :              +- CometNativeColumnarToRow\n      :                       :     :     :                 +- CometProject\n      :                       :     :     :                    +- CometFilter\n      :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     :     +- CometBroadcastExchange\n      :                       :     :        +- CometProject\n      :                       :     :           +- CometFilter\n      :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     +- CometBroadcastExchange\n      :                       :        +- CometProject\n      :                       :           +- CometFilter\n      :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                       +- CometBroadcastExchange\n      :                          +- CometFilter\n      :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometHashAggregate\n      :           +- CometExchange\n      :              +- CometHashAggregate\n      :                 +- CometProject\n      :                    +- CometBroadcastHashJoin\n      :                       :- CometProject\n      :                       :  +- CometBroadcastHashJoin\n      :                       :     :- CometProject\n      :                       :     :  +- CometBroadcastHashJoin\n      :                       :     :     :- CometFilter\n      :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                       :     :     :        +- SubqueryBroadcast\n      :                       :     :     :           +- BroadcastExchange\n      :                
       :     :     :              +- CometNativeColumnarToRow\n      :                       :     :     :                 +- CometProject\n      :                       :     :     :                    +- CometFilter\n      :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     :     +- CometBroadcastExchange\n      :                       :     :        +- CometProject\n      :                       :     :           +- CometFilter\n      :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     +- CometBroadcastExchange\n      :                       :        +- CometProject\n      :                       :           +- CometFilter\n      :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                       +- CometBroadcastExchange\n      :                          +- CometFilter\n      :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                              :     :     :        +- SubqueryBroadcast\n                              :     :     :           +- BroadcastExchange\n                              :     :     :              +- CometNativeColumnarToRow\n                              :     :     :                 +- CometProject\n                              :     :     :                    +- CometFilter\n                              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n\nComet accelerated 141 out of 151 eligible operators (93%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q24.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Filter\n         :  +- Subquery\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- HashAggregate\n         :                    +- CometNativeColumnarToRow\n         :                       +- CometColumnarExchange\n         :                          +- HashAggregate\n         :                             +- Project\n         :                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n         :                                   :- CometNativeColumnarToRow\n         :                                   :  +- CometProject\n         :                                   :     +- CometBroadcastHashJoin\n         :                                   :        :- CometProject\n         :                                   :        :  +- CometBroadcastHashJoin\n         :                                   :        :     :- CometProject\n         :                                   :        :     :  +- CometBroadcastHashJoin\n         :                                   :        :     :     :- CometProject\n         :                                   :        :     :     :  +- CometSortMergeJoin\n         :                                   :        :     :     :     :- CometSort\n         :                                   :        :     :     :     :  +- CometExchange\n         :                                   :        :     :     :     :     +- CometProject\n         :                                   :        :     :     :     :        +- CometFilter\n         :                                   :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n         :                                   :        :     :     :     +- CometSort\n         :                                   :        :     :     :        +- CometExchange\n         :                                   :        :     :     :           +- CometProject\n         :                                   :        :     :     :              +- CometFilter\n         :                                   :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n         :                                   :        :     :     +- CometBroadcastExchange\n         :                                   :        :     :        +- CometProject\n         :                                   :        :     :           +- CometFilter\n         :                                   :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n         :                                   :        :     +- CometBroadcastExchange\n         :                                   :        :        +- CometProject\n         :                                   :        :           +- CometFilter\n         :                                   :        :              +- CometNativeScan parquet spark_catalog.default.item\n         :                                   :        +- CometBroadcastExchange\n         :                                   :           +- CometProject\n         :                                   :              +- CometFilter\n         :         
                          :                 +- CometNativeScan parquet spark_catalog.default.customer\n         :                                   +- BroadcastExchange\n         :                                      +- CometNativeColumnarToRow\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometNativeScan parquet spark_catalog.default.customer_address\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- HashAggregate\n                        +- CometNativeColumnarToRow\n                           +- CometColumnarExchange\n                              +- HashAggregate\n                                 +- Project\n                                    +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                                       :- CometNativeColumnarToRow\n                                       :  +- CometProject\n                                       :     +- CometBroadcastHashJoin\n                                       :        :- CometProject\n                                       :        :  +- CometBroadcastHashJoin\n                                       :        :     :- CometProject\n                                       :        :     :  +- CometBroadcastHashJoin\n                                       :        :     :     :- CometProject\n                                       :        :     :     :  +- CometSortMergeJoin\n                                       :        :     :     :     :- CometSort\n                                       :        :     :     :     :  +- CometExchange\n                                       :        :     :     :     :     +- CometProject\n                                       :        :     :     :     :        +- CometFilter\n                                       :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                                       :        :     :     :     +- CometSort\n                                       :        :     :     :        +- CometExchange\n                                       :        :     :     :           +- CometProject\n                                       :        :     :     :              +- CometFilter\n                                       :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                       :        :     :     +- CometBroadcastExchange\n                                       :        :     :        +- CometProject\n                                       :        :     :           +- CometFilter\n                                       :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n                                       :        :     +- CometBroadcastExchange\n                                       :        :        +- CometProject\n                                       :        :           +- CometFilter\n                                       :        :              +- CometNativeScan parquet spark_catalog.default.item\n                                       :        +- CometBroadcastExchange\n               
                        :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.customer\n                                       +- BroadcastExchange\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 72 out of 88 eligible operators (81%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q24.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Filter\n         :  +- Subquery\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- HashAggregate\n         :                    +- CometNativeColumnarToRow\n         :                       +- CometColumnarExchange\n         :                          +- HashAggregate\n         :                             +- Project\n         :                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n         :                                   :- CometNativeColumnarToRow\n         :                                   :  +- CometProject\n         :                                   :     +- CometBroadcastHashJoin\n         :                                   :        :- CometProject\n         :                                   :        :  +- CometBroadcastHashJoin\n         :                                   :        :     :- CometProject\n         :                                   :        :     :  +- CometBroadcastHashJoin\n         :                                   :        :     :     :- CometProject\n         :                                   :        :     :     :  +- CometSortMergeJoin\n         :                                   :        :     :     :     :- CometSort\n         :                                   :        :     :     :     :  +- CometExchange\n         :                                   :        :     :     :     :     +- CometProject\n         :                                   :        :     :     :     :        +- CometFilter\n         :                                   :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :                                   :        :     :     :     +- CometSort\n         :                                   :        :     :     :        +- CometExchange\n         :                                   :        :     :     :           +- CometProject\n         :                                   :        :     :     :              +- CometFilter\n         :                                   :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :                                   :        :     :     +- CometBroadcastExchange\n         :                                   :        :     :        +- CometProject\n         :                                   :        :     :           +- CometFilter\n         :                                   :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :                                   :        :     +- CometBroadcastExchange\n         :                                   :        :        +- CometProject\n         :                                   :        :           +- CometFilter\n         :                                   :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                                   :        +- CometBroadcastExchange\n         :                                   :           +- CometProject\n         :                                   :              +- CometFilter\n         :                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                                   +- BroadcastExchange\n         :                                      +- CometNativeColumnarToRow\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- HashAggregate\n                        +- CometNativeColumnarToRow\n                           +- CometColumnarExchange\n                              +- HashAggregate\n                                 +- Project\n                                    +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                                       :- CometNativeColumnarToRow\n                                       :  +- CometProject\n                                       :     +- CometBroadcastHashJoin\n                                       :        :- CometProject\n                                       :        :  +- CometBroadcastHashJoin\n                                       :        :     :- CometProject\n                                       :        :     :  +- CometBroadcastHashJoin\n                                       :        :     :     :- CometProject\n                                       :        :     :     :  +- CometSortMergeJoin\n                                       :        :     :     :     :- CometSort\n                                       :        :     :     :     :  +- CometExchange\n                                       :        :     :     :     :     +- CometProject\n                                       :        :     :     :     :        +- CometFilter\n                                       :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :        :     :     :     +- CometSort\n                                       :        :     :     :        +- CometExchange\n                                       :        :     :     :           +- CometProject\n                                       :        :     :     :              +- CometFilter\n                                       :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                       :        :     :     +- CometBroadcastExchange\n                                       :        :     :        +- CometProject\n                                       :        :     :           +- CometFilter\n                                       :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                       :        :     +- CometBroadcastExchange\n                                       :        :        +- CometProject\n                                       :        :           +- CometFilter\n                                       :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :        +- CometBroadcastExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                                       +- BroadcastExchange\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 72 out of 88 eligible operators (81%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q27a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Union\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Filter\n   :                 :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :           +- SubqueryBroadcast\n   :                 :     :     :     :              +- BroadcastExchange\n   :                 :     :     :     :                 +- CometNativeColumnarToRow\n   :                 :     :     :     :                    +- CometProject\n   :                 :     :     :     :                       +- CometFilter\n   :                 :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometProject\n   :                 :     :     :              +- CometFilter\n   :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Filter\n   :                 :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometProject\n   :                 :     :     :              +- CometFilter\n   :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Filter\n                     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :           +- ReusedSubquery\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometProject\n                     :     :     :              +- CometFilter\n                     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.store\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 41 out of 95 eligible operators (43%). Final plan contains 19 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q27a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometFilter\n      :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :     :     :        +- SubqueryBroadcast\n      :              :     :     :     :           +- BroadcastExchange\n      :              :     :     :     :              +- CometNativeColumnarToRow\n      :              :     :     :     :                 +- CometProject\n      :              :     :     :     :                    +- CometFilter\n      :              :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometProject\n      :              :     :     :           +- CometFilter\n      :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometFilter\n      :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :     :     :        +- ReusedSubquery\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometProject\n      :              :     :     :           +- CometFilter\n      :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometFilter\n                     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :     :     :        +- ReusedSubquery\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometProject\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 91 out of 95 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q34.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Filter\n            :  +- HashAggregate\n            :     +- CometNativeColumnarToRow\n            :        +- CometColumnarExchange\n            :           +- HashAggregate\n            :              +- Project\n            :                 +- BroadcastHashJoin\n            :                    :- Project\n            :                    :  +- BroadcastHashJoin\n            :                    :     :- Project\n            :                    :     :  +- BroadcastHashJoin\n            :                    :     :     :- Filter\n            :                    :     :     :  +- ColumnarToRow\n            :                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                    :     :     :           +- SubqueryBroadcast\n            :                    :     :     :              +- BroadcastExchange\n            :                    :     :     :                 +- CometNativeColumnarToRow\n            :                    :     :     :                    +- CometProject\n            :                    :     :     :                       +- CometFilter\n            :                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     :     +- BroadcastExchange\n            :                    :     :        +- CometNativeColumnarToRow\n            :                    :     :           +- CometProject\n            :                    :     :              +- CometFilter\n            :                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     +- BroadcastExchange\n            :                    :        +- CometNativeColumnarToRow\n            :                    :           +- CometProject\n            :                    :              +- CometFilter\n            :                    :                 +- CometNativeScan parquet spark_catalog.default.store\n            :                    +- BroadcastExchange\n            :                       +- CometNativeColumnarToRow\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometNativeScan parquet spark_catalog.default.household_demographics\n            +- BroadcastExchange\n               +- CometNativeColumnarToRow\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 37 eligible operators (48%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q34.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometFilter\n            :  +- CometHashAggregate\n            :     +- CometExchange\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometFilter\n            :                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :        +- SubqueryBroadcast\n            :                 :     :     :           +- BroadcastExchange\n            :                 :     :     :              +- CometNativeColumnarToRow\n            :                 :     :     :                 +- CometProject\n            :                 :     :     :                    +- CometFilter\n            :                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometProject\n            :                 :     :           +- CometFilter\n            :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometProject\n            :                 :           +- CometFilter\n            :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 37 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q35.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :- BroadcastHashJoin\n                  :     :        :  :- BroadcastHashJoin\n                  :     :        :  :  :- CometNativeColumnarToRow\n                  :     :        :  :  :  +- CometFilter\n                  :     :        :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :        :  :  +- BroadcastExchange\n                  :     :        :  :     +- Project\n                  :     :        :  :        +- BroadcastHashJoin\n                  :     :        :  :           :- ColumnarToRow\n                  :     :        :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :  :           :        +- SubqueryBroadcast\n                  :     :        :  :           :           +- BroadcastExchange\n                  :     :        :  :           :              +- CometNativeColumnarToRow\n                  :     :        :  :           :                 +- CometProject\n                  :     :        :  :           :                    +- CometFilter\n                  :     :        :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  :           +- BroadcastExchange\n                  :     :        :  :              +- CometNativeColumnarToRow\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- Project\n                  :     :        :        +- BroadcastHashJoin\n                  :     :        :           :- ColumnarToRow\n                  :     :        :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :           :        +- ReusedSubquery\n                  :     :        :           +- BroadcastExchange\n                  :     :        :              +- CometNativeColumnarToRow\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 54 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q35.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                  :     :        :  :- CometNativeColumnarToRow\n                  :     :        :  :  +- CometBroadcastHashJoin\n                  :     :        :  :     :- CometFilter\n                  :     :        :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :        :  :     +- CometBroadcastExchange\n                  :     :        :  :        +- CometProject\n                  :     :        :  :           +- CometBroadcastHashJoin\n                  :     :        :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :        :  :              :     +- SubqueryBroadcast\n                  :     :        :  :              :        +- BroadcastExchange\n                  :     :        :  :              :           +- CometNativeColumnarToRow\n                  :     :        :  :              :              +- CometProject\n                  :     :        :  :              :                 +- CometFilter\n                  :     :        :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  :              +- CometBroadcastExchange\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- CometNativeColumnarToRow\n                  :     :        :        +- CometProject\n                  :     :        :           +- CometBroadcastHashJoin\n                  :     :        :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :        :              :     +- ReusedSubquery\n                  :     :        :              +- CometBroadcastExchange\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- CometNativeColumnarToRow\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 54 eligible operators (64%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q35a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- BroadcastHashJoin\n                  :     :     :  :- CometNativeColumnarToRow\n                  :     :     :  :  +- CometFilter\n                  :     :     :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- Project\n                  :     :     :        +- BroadcastHashJoin\n                  :     :     :           :- ColumnarToRow\n                  :     :     :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           :        +- SubqueryBroadcast\n                  :     :     :           :           +- BroadcastExchange\n                  :     :     :           :              +- CometNativeColumnarToRow\n                  :     :     :           :                 +- CometProject\n                  :     :     :           :                    +- CometFilter\n                  :     :     :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- Union\n                  :     :           :- Project\n                  :     :           :  +- BroadcastHashJoin\n                  :     :           :     :- ColumnarToRow\n                  :     :           :     :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :           :     :        +- ReusedSubquery\n                  :     :           :     +- BroadcastExchange\n                  :     :           :        +- CometNativeColumnarToRow\n                  :     :           :           +- CometProject\n                  :     :           :              +- CometFilter\n                  :     :           :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 52 eligible operators (40%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q35a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometBroadcastHashJoin\n                  :     :     :  :- CometFilter\n                  :     :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :     :  +- CometBroadcastExchange\n                  :     :     :     +- CometProject\n                  :     :     :        +- CometBroadcastHashJoin\n                  :     :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :           :     +- SubqueryBroadcast\n                  :     :     :           :        +- BroadcastExchange\n                  :     :     :           :           +- CometNativeColumnarToRow\n                  :     :     :           :              +- CometProject\n                  :     :     :           :                 +- CometFilter\n                  :     :     :           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :           +- CometBroadcastExchange\n                  :     :     :              +- CometProject\n                  :     :     :                 +- CometFilter\n                  :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometUnion\n                  :     :           :- CometProject\n                  :     :           :  +- CometBroadcastHashJoin\n                  :     :           :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :           :     :     +- ReusedSubquery\n                  :     :           :     +- CometBroadcastExchange\n                  :     :           :        +- CometProject\n                  :     :           :           +- CometFilter\n                  :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :           +- CometProject\n                  :     :              +- CometBroadcastHashJoin\n                  :     :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                 :     +- ReusedSubquery\n                  :     :                 +- CometBroadcastExchange\n                  :     :                    +- CometProject\n                  :     :                       +- CometFilter\n                  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 48 out of 52 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q36a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Project\n                           :                 :     :  +- BroadcastHashJoin\n                           :                 :     :     :- Filter\n                           :                 :     :     :  +- ColumnarToRow\n                           :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :     :           +- SubqueryBroadcast\n                           :                 :     :     :              +- BroadcastExchange\n                           :                 :     :     :                 +- CometNativeColumnarToRow\n                           :                 :     :     :                    +- CometProject\n                           :                 :     :     :                       +- CometFilter\n                           :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     :     +- BroadcastExchange\n                           :                 :     :        +- CometNativeColumnarToRow\n                           :                 :     :           +- CometProject\n                           :                 :     :              +- CometFilter\n                           :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                           :                 +- BroadcastExchange\n                           :                    +- CometNativeColumnarToRow\n                           :                       +- CometProject\n                           :                          +- CometFilter\n                           :                             +- CometNativeScan parquet spark_catalog.default.store\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.store\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Project\n                                                         :     :  +- BroadcastHashJoin\n                                                         :     :     :- Filter\n                                                         :     :     :  +- ColumnarToRow\n                                                         :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :     :           +- SubqueryBroadcast\n                                                         :     :     :              +- BroadcastExchange\n                                                         :     :     :                 +- CometNativeColumnarToRow\n                                                         :     :     :                    +- CometProject\n                                                         :     :     :                       +- CometFilter\n                                                         :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     :     +- BroadcastExchange\n                                                         :     :        +- CometNativeColumnarToRow\n                                                         :     :           +- CometProject\n                                                         :     :              +- CometFilter\n                                                         :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                                         :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan parquet spark_catalog.default.item\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 45 out of 99 eligible operators (45%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q36a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometUnion\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometProject\n                           :           +- CometBroadcastHashJoin\n                           :              :- CometProject\n                           :              :  +- CometBroadcastHashJoin\n                           :              :     :- CometProject\n                           :              :     :  +- CometBroadcastHashJoin\n                           :              :     :     :- CometFilter\n                           :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :              :     :     :        +- SubqueryBroadcast\n                           :              :     :     :           +- BroadcastExchange\n                           :              :     :     :              +- CometNativeColumnarToRow\n                           :              :     :     :                 +- CometProject\n                           :              :     :     :                    +- CometFilter\n                           :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              :     :     +- CometBroadcastExchange\n                           :              :     :        +- CometProject\n                           :              :     :           +- CometFilter\n                           :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              :     +- CometBroadcastExchange\n                           :              :        +- CometProject\n                           :              :           +- CometFilter\n                           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :              +- CometBroadcastExchange\n                           :                 +- CometProject\n                           :                    +- CometFilter\n                           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometHashAggregate\n                           :           +- CometExchange\n                           :              +- CometHashAggregate\n                           :                 +- CometProject\n                           :                    +- CometBroadcastHashJoin\n                           :                       :- 
CometProject\n                           :                       :  +- CometBroadcastHashJoin\n                           :                       :     :- CometProject\n                           :                       :     :  +- CometBroadcastHashJoin\n                           :                       :     :     :- CometFilter\n                           :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                       :     :     :        +- SubqueryBroadcast\n                           :                       :     :     :           +- BroadcastExchange\n                           :                       :     :     :              +- CometNativeColumnarToRow\n                           :                       :     :     :                 +- CometProject\n                           :                       :     :     :                    +- CometFilter\n                           :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       :     :     +- CometBroadcastExchange\n                           :                       :     :        +- CometProject\n                           :                       :     :           +- CometFilter\n                           :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       :     +- CometBroadcastExchange\n                           :                       :        +- CometProject\n                           :                       :           +- CometFilter\n                           :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                       +- CometBroadcastExchange\n                           :                          +- CometProject\n                           :                             +- CometFilter\n                           :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometProject\n                                                   :     :  +- CometBroadcastHashJoin\n                                                   :     :     :- CometFilter\n                                                   :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                   :     :     :        +- SubqueryBroadcast\n                                                   :     :     :           +- BroadcastExchange\n                                                   :     :     :          
    +- CometNativeColumnarToRow\n                                                   :     :     :                 +- CometProject\n                                                   :     :     :                    +- CometFilter\n                                                   :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     :     +- CometBroadcastExchange\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 90 out of 99 eligible operators (90%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q47.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                          
        +- CometNativeScan parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q47.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q49.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- SubqueryBroadcast\n               :                                         :     :                    +- BroadcastExchange\n               :                                         :     :                       +- CometNativeColumnarToRow\n               :                                         :     :                          +- CometProject\n               :                                         :     :                             +- CometFilter\n               :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n      
         :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- ReusedSubquery\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometColumnarExchange\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- BroadcastExchange\n                                                         :     :  +- Project\n                                                         :     :     +- Filter\n                                                         :     :        +- ColumnarToRow\n                                                         :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :                 +- ReusedSubquery\n                                                         :     +- CometNativeColumnarToRow\n                                                         :        +- CometProject\n                                                         :           +- CometFilter\n                                                         :              +- CometNativeScan parquet spark_catalog.default.store_returns\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 33 out of 87 eligible operators (37%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q49.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :              +- SubqueryBroadcast\n               :                                      :     :                 +- BroadcastExchange\n               :                                      :     :                    +- CometNativeColumnarToRow\n               :                                      :     :                       +- CometProject\n               :                                      :     :                          +- CometFilter\n               :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :              +- ReusedSubquery\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometExchange\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometBroadcastExchange\n                                                      :     :  +- CometProject\n                                                      :     :     +- CometFilter\n                                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :              +- ReusedSubquery\n                                                      :     +- CometProject\n                                                      :        +- CometFilter\n                                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 66 out of 87 eligible operators (75%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q51a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :  +- CometNativeColumnarToRow\n               :     +- CometSort\n               :        +- CometExchange\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometSortMergeJoin\n               :                    :- CometSort\n               :                    :  +- CometColumnarExchange\n               :                    :     +- HashAggregate\n               :                    :        +- CometNativeColumnarToRow\n               :                    :           +- CometColumnarExchange\n               :                    :              +- HashAggregate\n               :                    :                 +- Project\n               :                    :                    +- BroadcastHashJoin\n               :                    :                       :- Project\n               :                    :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                       :     +- CometNativeColumnarToRow\n               :                    :                       :        +- CometSort\n               :                    :                       :           +- CometColumnarExchange\n               :                    :                       :              +- HashAggregate\n               :                    :                       :                 +- CometNativeColumnarToRow\n               :                    :                       :                    +- CometColumnarExchange\n               :                    :                       :                       +- HashAggregate\n               :                    :                       :                          +- Project\n               :                    :                       :                             +- BroadcastHashJoin\n               :                    :                       :                                :- Filter\n               :                    :                       :                                :  +- ColumnarToRow\n               :                    :                       :                                :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :                       :                                :           +- SubqueryBroadcast\n               :                    :                       :                                :              +- BroadcastExchange\n               :                    :                       :                                :                 +- CometNativeColumnarToRow\n               :                    :                   
    :                                :                    +- CometProject\n               :                    :                       :                                :                       +- CometFilter\n               :                    :                       :                                :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                       :                                +- BroadcastExchange\n               :                    :                       :                                   +- CometNativeColumnarToRow\n               :                    :                       :                                      +- CometProject\n               :                    :                       :                                         +- CometFilter\n               :                    :                       :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                       +- BroadcastExchange\n               :                    :                          +- Project\n               :                    :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                                +- CometNativeColumnarToRow\n               :                    :                                   +- CometSort\n               :                    :                                      +- CometColumnarExchange\n               :                    :                                         +- HashAggregate\n               :                    :                                            +- CometNativeColumnarToRow\n               :                    :                                               +- CometColumnarExchange\n               :                    :                                                  +- HashAggregate\n               :                    :                                                     +- Project\n               :                    :                                                        +- BroadcastHashJoin\n               :                    :                                                           :- Filter\n               :                    :                                                           :  +- ColumnarToRow\n               :                    :                                                           :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :                                                           :           +- SubqueryBroadcast\n               :                    :                                                           :              +- BroadcastExchange\n               :                    :                                                           :                 +- CometNativeColumnarToRow\n               :                    :                                                           :                    +- CometProject\n               :                    :                           
                                :                       +- CometFilter\n               :                    :                                                           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                                                           +- BroadcastExchange\n               :                    :                                                              +- CometNativeColumnarToRow\n               :                    :                                                                 +- CometProject\n               :                    :                                                                    +- CometFilter\n               :                    :                                                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    +- CometSort\n               :                       +- CometColumnarExchange\n               :                          +- HashAggregate\n               :                             +- CometNativeColumnarToRow\n               :                                +- CometColumnarExchange\n               :                                   +- HashAggregate\n               :                                      +- Project\n               :                                         +- BroadcastHashJoin\n               :                                            :- Project\n               :                                            :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                            :     +- CometNativeColumnarToRow\n               :                                            :        +- CometSort\n               :                                            :           +- CometColumnarExchange\n               :                                            :              +- HashAggregate\n               :                                            :                 +- CometNativeColumnarToRow\n               :                                            :                    +- CometColumnarExchange\n               :                                            :                       +- HashAggregate\n               :                                            :                          +- Project\n               :                                            :                             +- BroadcastHashJoin\n               :                                            :                                :- Filter\n               :                                            :                                :  +- ColumnarToRow\n               :                                            :                                :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                            :                                :           +- ReusedSubquery\n               :                                            :                                +- BroadcastExchange\n               :                                            :       
                            +- CometNativeColumnarToRow\n               :                                            :                                      +- CometProject\n               :                                            :                                         +- CometFilter\n               :                                            :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                            +- BroadcastExchange\n               :                                               +- Project\n               :                                                  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                                     +- CometNativeColumnarToRow\n               :                                                        +- CometSort\n               :                                                           +- CometColumnarExchange\n               :                                                              +- HashAggregate\n               :                                                                 +- CometNativeColumnarToRow\n               :                                                                    +- CometColumnarExchange\n               :                                                                       +- HashAggregate\n               :                                                                          +- Project\n               :                                                                             +- BroadcastHashJoin\n               :                                                                                :- Filter\n               :                                                                                :  +- ColumnarToRow\n               :                                                                                :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                                                                :           +- ReusedSubquery\n               :                                                                                +- BroadcastExchange\n               :                                                                                   +- CometNativeColumnarToRow\n               :                                                                                      +- CometProject\n               :                                                                                         +- CometFilter\n               :                                                                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- Project\n                     +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                        +- CometNativeColumnarToRow\n                           +- CometSort\n                              +- CometExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometSortMergeJoin\n                                          :- CometSort\n                                          :  +- CometColumnarExchange\n                                          :     +- HashAggregate\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometColumnarExchange\n                                          :              +- HashAggregate\n                                          :                 +- Project\n                                          :                    +- BroadcastHashJoin\n                                          :                       :- Project\n                                          :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                       :     +- CometNativeColumnarToRow\n                                          :                       :        +- CometSort\n                                          :                       :           +- CometColumnarExchange\n                                          :                       :              +- HashAggregate\n                                          :                       :                 +- CometNativeColumnarToRow\n                                          :                       :                    +- CometColumnarExchange\n                                          :                       :                       +- HashAggregate\n                                          :                       :                          +- Project\n                                          :                       :                             +- BroadcastHashJoin\n                                          :                       :                                :- Filter\n                                          :                       :                                :  +- ColumnarToRow\n                                          :                       :                                :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                       :                                :           +- SubqueryBroadcast\n                                          :                       :                                :              +- BroadcastExchange\n                                          :                       :                                :                 +- CometNativeColumnarToRow\n                                          :                       :                                :                    +- CometProject\n                                          :                       :                           
     :                       +- CometFilter\n                                          :                       :                                :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                       :                                +- BroadcastExchange\n                                          :                       :                                   +- CometNativeColumnarToRow\n                                          :                       :                                      +- CometProject\n                                          :                       :                                         +- CometFilter\n                                          :                       :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                       +- BroadcastExchange\n                                          :                          +- Project\n                                          :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                                +- CometNativeColumnarToRow\n                                          :                                   +- CometSort\n                                          :                                      +- CometColumnarExchange\n                                          :                                         +- HashAggregate\n                                          :                                            +- CometNativeColumnarToRow\n                                          :                                               +- CometColumnarExchange\n                                          :                                                  +- HashAggregate\n                                          :                                                     +- Project\n                                          :                                                        +- BroadcastHashJoin\n                                          :                                                           :- Filter\n                                          :                                                           :  +- ColumnarToRow\n                                          :                                                           :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                                                           :           +- SubqueryBroadcast\n                                          :                                                           :              +- BroadcastExchange\n                                          :                                                           :                 +- CometNativeColumnarToRow\n                                          :                                                           :                    +- CometProject\n                                          :                                  
                         :                       +- CometFilter\n                                          :                                                           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                                                           +- BroadcastExchange\n                                          :                                                              +- CometNativeColumnarToRow\n                                          :                                                                 +- CometProject\n                                          :                                                                    +- CometFilter\n                                          :                                                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- CometSort\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- CometNativeColumnarToRow\n                                                      +- CometColumnarExchange\n                                                         +- HashAggregate\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- Project\n                                                                  :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                  :     +- CometNativeColumnarToRow\n                                                                  :        +- CometSort\n                                                                  :           +- CometColumnarExchange\n                                                                  :              +- HashAggregate\n                                                                  :                 +- CometNativeColumnarToRow\n                                                                  :                    +- CometColumnarExchange\n                                                                  :                       +- HashAggregate\n                                                                  :                          +- Project\n                                                                  :                             +- BroadcastHashJoin\n                                                                  :                                :- Filter\n                                                                  :                                :  +- ColumnarToRow\n                                                                  :                                :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                  :                                :           +- ReusedSubquery\n                                                                  :                                +- BroadcastExchange\n                                                                  :                                   +- CometNativeColumnarToRow\n                                                                  :                                      +- CometProject\n                                                                  :                                         +- CometFilter\n                                                                  :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                           +- CometNativeColumnarToRow\n                                                                              +- CometSort\n                                                                                 +- CometColumnarExchange\n                                                                                    +- HashAggregate\n                                                                                       +- CometNativeColumnarToRow\n                                                                                          +- CometColumnarExchange\n                                                                                             +- HashAggregate\n                                                                                                +- Project\n                                                                                                   +- BroadcastHashJoin\n                                                                                                      :- Filter\n                                                                                                      :  +- ColumnarToRow\n                                                                                                      :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                                                      :           +- ReusedSubquery\n                                                                                                      +- BroadcastExchange\n                                                                                                         +- CometNativeColumnarToRow\n                                                                                                            +- CometProject\n                                                                                                               +- CometFilter\n                                                                                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 82 out of 196 eligible operators (41%). Final plan contains 42 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q51a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :  +- CometNativeColumnarToRow\n               :     +- CometSort\n               :        +- CometExchange\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometSortMergeJoin\n               :                    :- CometSort\n               :                    :  +- CometColumnarExchange\n               :                    :     +- HashAggregate\n               :                    :        +- CometNativeColumnarToRow\n               :                    :           +- CometColumnarExchange\n               :                    :              +- HashAggregate\n               :                    :                 +- Project\n               :                    :                    +- BroadcastHashJoin\n               :                    :                       :- Project\n               :                    :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                       :     +- CometNativeColumnarToRow\n               :                    :                       :        +- CometSort\n               :                    :                       :           +- CometExchange\n               :                    :                       :              +- CometHashAggregate\n               :                    :                       :                 +- CometExchange\n               :                    :                       :                    +- CometHashAggregate\n               :                    :                       :                       +- CometProject\n               :                    :                       :                          +- CometBroadcastHashJoin\n               :                    :                       :                             :- CometFilter\n               :                    :                       :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                       :                             :        +- SubqueryBroadcast\n               :                    :                       :                             :           +- BroadcastExchange\n               :                    :                       :                             :              +- CometNativeColumnarToRow\n               :                    :                       :                             :                 +- CometProject\n               :                    :                       :                             :                    +- CometFilter\n               :                    :                       :                             :            
           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                       :                             +- CometBroadcastExchange\n               :                    :                       :                                +- CometProject\n               :                    :                       :                                   +- CometFilter\n               :                    :                       :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                       +- BroadcastExchange\n               :                    :                          +- Project\n               :                    :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                                +- CometNativeColumnarToRow\n               :                    :                                   +- CometSort\n               :                    :                                      +- CometExchange\n               :                    :                                         +- CometHashAggregate\n               :                    :                                            +- CometExchange\n               :                    :                                               +- CometHashAggregate\n               :                    :                                                  +- CometProject\n               :                    :                                                     +- CometBroadcastHashJoin\n               :                    :                                                        :- CometFilter\n               :                    :                                                        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                                                        :        +- SubqueryBroadcast\n               :                    :                                                        :           +- BroadcastExchange\n               :                    :                                                        :              +- CometNativeColumnarToRow\n               :                    :                                                        :                 +- CometProject\n               :                    :                                                        :                    +- CometFilter\n               :                    :                                                        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                                                        +- CometBroadcastExchange\n               :                    :                                                           +- CometProject\n               :                    :                                                              +- CometFilter\n               :                    :                                                                 +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometSort\n               :                       +- CometColumnarExchange\n               :                          +- HashAggregate\n               :                             +- CometNativeColumnarToRow\n               :                                +- CometColumnarExchange\n               :                                   +- HashAggregate\n               :                                      +- Project\n               :                                         +- BroadcastHashJoin\n               :                                            :- Project\n               :                                            :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                            :     +- CometNativeColumnarToRow\n               :                                            :        +- CometSort\n               :                                            :           +- CometExchange\n               :                                            :              +- CometHashAggregate\n               :                                            :                 +- CometExchange\n               :                                            :                    +- CometHashAggregate\n               :                                            :                       +- CometProject\n               :                                            :                          +- CometBroadcastHashJoin\n               :                                            :                             :- CometFilter\n               :                                            :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                            :                             :        +- ReusedSubquery\n               :                                            :                             +- CometBroadcastExchange\n               :                                            :                                +- CometProject\n               :                                            :                                   +- CometFilter\n               :                                            :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                            +- BroadcastExchange\n               :                                               +- Project\n               :                                                  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                                     +- CometNativeColumnarToRow\n               :                                                        +- CometSort\n               :                                                           +- CometExchange\n               :                                                              +- CometHashAggregate\n               :                                                                 +- CometExchange\n               :                                                                    +- CometHashAggregate\n               :                                                                       +- CometProject\n               :                                                                          +- CometBroadcastHashJoin\n               :                                                                             :- CometFilter\n               :                                                                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                                                             :        +- ReusedSubquery\n               :                                                                             +- CometBroadcastExchange\n               :                                                                                +- CometProject\n               :                                                                                   +- CometFilter\n               :                                                                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- Project\n                     +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                        +- CometNativeColumnarToRow\n                           +- CometSort\n                              +- CometExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometSortMergeJoin\n                                          :- CometSort\n                                          :  +- CometColumnarExchange\n                                          :     +- HashAggregate\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometColumnarExchange\n                                          :              +- HashAggregate\n                                          :                 +- Project\n                                          :                    +- BroadcastHashJoin\n                                          :                       :- Project\n                                          :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                       :     +- CometNativeColumnarToRow\n                                          :                       :        +- CometSort\n                                          :                       :           +- CometExchange\n                                          :                       :              +- CometHashAggregate\n                                          :                       :                 +- CometExchange\n                                          :                       :                    +- CometHashAggregate\n                                          :                       :                       +- CometProject\n                                          :                       :                          +- CometBroadcastHashJoin\n                                          :                       :                             :- CometFilter\n                                          :                       :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                          :                       :                             :        +- SubqueryBroadcast\n                                          :                       :                             :           +- BroadcastExchange\n                                          :                       :                             :              +- CometNativeColumnarToRow\n                                          :                       :                             :                 +- CometProject\n                                          :                       :                             :                    +- CometFilter\n                                          :                       :                             :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                       :                             +- CometBroadcastExchange\n                                          :                       :                                +- CometProject\n                                          :                       :                                   +- CometFilter\n                                          :                       :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                       +- BroadcastExchange\n                                          :                          +- Project\n                                          :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                                +- CometNativeColumnarToRow\n                                          :                                   +- CometSort\n                                          :                                      +- CometExchange\n                                          :                                         +- CometHashAggregate\n                                          :                                            +- CometExchange\n                                          :                                               +- CometHashAggregate\n                                          :                                                  +- CometProject\n                                          :                                                     +- CometBroadcastHashJoin\n                                          :                                                        :- CometFilter\n                                          :                                                        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                          :                                                        :        +- SubqueryBroadcast\n                                          :                                                        :           +- BroadcastExchange\n                                          :                                                        :              +- CometNativeColumnarToRow\n                                          :                                                        :                 +- CometProject\n                                          :                                                        :                    +- CometFilter\n                                          :                                                        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                                                        +- CometBroadcastExchange\n                                          :                                                           +- CometProject\n                                          :                                                              +- CometFilter\n                                          :                                                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          +- CometSort\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- CometNativeColumnarToRow\n                                                      +- CometColumnarExchange\n                                                         +- HashAggregate\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- Project\n                                                                  :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). 
To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                  :     +- CometNativeColumnarToRow\n                                                                  :        +- CometSort\n                                                                  :           +- CometExchange\n                                                                  :              +- CometHashAggregate\n                                                                  :                 +- CometExchange\n                                                                  :                    +- CometHashAggregate\n                                                                  :                       +- CometProject\n                                                                  :                          +- CometBroadcastHashJoin\n                                                                  :                             :- CometFilter\n                                                                  :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                  :                             :        +- ReusedSubquery\n                                                                  :                             +- CometBroadcastExchange\n                                                                  :                                +- CometProject\n                                                                  :                                   +- CometFilter\n                                                                  :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                           +- CometNativeColumnarToRow\n                                                                              +- CometSort\n                                                                                 +- CometExchange\n                                                                                    +- CometHashAggregate\n                                                                                       +- CometExchange\n                                                                                          +- CometHashAggregate\n                                                                                             +- CometProject\n                                                                                                +- CometBroadcastHashJoin\n                                                                                                   :- CometFilter\n                                                                                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                                                   :        +- ReusedSubquery\n                                                                                                   +- CometBroadcastExchange\n                                                                                                      +- CometProject\n                                                                                                         +- CometFilter\n                                                                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 138 out of 196 eligible operators (70%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q57.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                        
          +- CometNativeScan parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). 
To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.call_center\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q57.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q5a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- HashAggregate\n               :              :  +- CometNativeColumnarToRow\n               :              :     +- CometColumnarExchange\n               :              :        +- HashAggregate\n               :              :           +- Project\n               :              :              +- BroadcastHashJoin\n               :              :                 :- Project\n               :              :                 :  +- BroadcastHashJoin\n               :              :                 :     :- Union\n               :              :                 :     :  :- Project\n               :              :                 :     :  :  +- Filter\n               :              :                 :     :  :     +- ColumnarToRow\n               :              :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :  :              +- SubqueryBroadcast\n               :              :                 :     :  :                 +- BroadcastExchange\n               :              :                 :     :  :                    +- CometNativeColumnarToRow\n               :              :                 :     :  :                       +- CometProject\n               :              :                 :     :  :                          +- CometFilter\n               :              :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                 :     :  +- Project\n               :              :                 :     :     +- Filter\n               :              :                 :     :        +- ColumnarToRow\n               :              :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :                 +- ReusedSubquery\n               :              :                 :     +- BroadcastExchange\n               :              :                 :        +- CometNativeColumnarToRow\n               :              :                 :           +- CometProject\n               :              :                 :              +- CometFilter\n               :              :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                 +- BroadcastExchange\n               :              :                    +- CometNativeColumnarToRow\n               :              :                       +- CometProject\n               :              :                          +- CometFilter\n               :              :                             +- CometNativeScan parquet spark_catalog.default.store\n               :              :- HashAggregate\n               :              :  +- CometNativeColumnarToRow\n               :              :     +- CometColumnarExchange\n               :              : 
       +- HashAggregate\n               :              :           +- Project\n               :              :              +- BroadcastHashJoin\n               :              :                 :- Project\n               :              :                 :  +- BroadcastHashJoin\n               :              :                 :     :- Union\n               :              :                 :     :  :- Project\n               :              :                 :     :  :  +- Filter\n               :              :                 :     :  :     +- ColumnarToRow\n               :              :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :  :              +- ReusedSubquery\n               :              :                 :     :  +- Project\n               :              :                 :     :     +- Filter\n               :              :                 :     :        +- ColumnarToRow\n               :              :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :                 +- ReusedSubquery\n               :              :                 :     +- BroadcastExchange\n               :              :                 :        +- CometNativeColumnarToRow\n               :              :                 :           +- CometProject\n               :              :                 :              +- CometFilter\n               :              :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                 +- BroadcastExchange\n               :              :                    +- CometNativeColumnarToRow\n               :              :                       +- CometProject\n               :              :                          +- CometFilter\n               :              :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :              +- HashAggregate\n               :                 +- CometNativeColumnarToRow\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- Project\n               :                             +- BroadcastHashJoin\n               :                                :- Project\n               :                                :  +- BroadcastHashJoin\n               :                                :     :- Union\n               :                                :     :  :- Project\n               :                                :     :  :  +- Filter\n               :                                :     :  :     +- ColumnarToRow\n               :                                :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                :     :  :              +- ReusedSubquery\n               :                                :     :  +- Project\n               :                                :     :     +- BroadcastHashJoin\n               :                                :     :        :- BroadcastExchange\n               :                               
 :     :        :  +- ColumnarToRow\n               :                                :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                :     :        :           +- ReusedSubquery\n               :                                :     :        +- CometNativeColumnarToRow\n               :                                :     :           +- CometProject\n               :                                :     :              +- CometFilter\n               :                                :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n               :                                :     +- BroadcastExchange\n               :                                :        +- CometNativeColumnarToRow\n               :                                :           +- CometProject\n               :                                :              +- CometFilter\n               :                                :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                +- BroadcastExchange\n               :                                   +- CometNativeColumnarToRow\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :                                            +- CometNativeScan parquet spark_catalog.default.web_site\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- HashAggregate\n               :                          :  +- CometNativeColumnarToRow\n               :                          :     +- CometColumnarExchange\n               :                          :        +- HashAggregate\n               :                          :           +- Project\n               :                          :              +- BroadcastHashJoin\n               :                          :                 :- Project\n               :                          :                 :  +- BroadcastHashJoin\n               :                          :                 :     :- Union\n               :                          :                 :     :  :- Project\n               :                          :                 :     :  :  +- Filter\n               :                          :                 :     :  :     +- ColumnarToRow\n               :                          :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                 :     :  :              +- SubqueryBroadcast\n               :                          :                 :     :  :                 +- BroadcastExchange\n               :                          :                 :     :  :                    +- CometNativeColumnarToRow\n               :                          :                 :     :  :                      
 +- CometProject\n               :                          :                 :     :  :                          +- CometFilter\n               :                          :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                 :     :  +- Project\n               :                          :                 :     :     +- Filter\n               :                          :                 :     :        +- ColumnarToRow\n               :                          :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                 :     :                 +- ReusedSubquery\n               :                          :                 :     +- BroadcastExchange\n               :                          :                 :        +- CometNativeColumnarToRow\n               :                          :                 :           +- CometProject\n               :                          :                 :              +- CometFilter\n               :                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                 +- BroadcastExchange\n               :                          :                    +- CometNativeColumnarToRow\n               :                          :                       +- CometProject\n               :                          :                          +- CometFilter\n               :                          :                             +- CometNativeScan parquet spark_catalog.default.store\n               :                          :- HashAggregate\n               :                          :  +- CometNativeColumnarToRow\n               :                          :     +- CometColumnarExchange\n               :                          :        +- HashAggregate\n               :                          :           +- Project\n               :                          :              +- BroadcastHashJoin\n               :                          :                 :- Project\n               :                          :                 :  +- BroadcastHashJoin\n               :                          :                 :     :- Union\n               :                          :                 :     :  :- Project\n               :                          :                 :     :  :  +- Filter\n               :                          :                 :     :  :     +- ColumnarToRow\n               :                          :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                 :     :  :              +- ReusedSubquery\n               :                          :                 :     :  +- Project\n               :                          :                 :     :     +- Filter\n               :                          :                 :     :        +- ColumnarToRow\n               :                          :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    
      :                 :     :                 +- ReusedSubquery\n               :                          :                 :     +- BroadcastExchange\n               :                          :                 :        +- CometNativeColumnarToRow\n               :                          :                 :           +- CometProject\n               :                          :                 :              +- CometFilter\n               :                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                 +- BroadcastExchange\n               :                          :                    +- CometNativeColumnarToRow\n               :                          :                       +- CometProject\n               :                          :                          +- CometFilter\n               :                          :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :                          +- HashAggregate\n               :                             +- CometNativeColumnarToRow\n               :                                +- CometColumnarExchange\n               :                                   +- HashAggregate\n               :                                      +- Project\n               :                                         +- BroadcastHashJoin\n               :                                            :- Project\n               :                                            :  +- BroadcastHashJoin\n               :                                            :     :- Union\n               :                                            :     :  :- Project\n               :                                            :     :  :  +- Filter\n               :                                            :     :  :     +- ColumnarToRow\n               :                                            :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                            :     :  :              +- ReusedSubquery\n               :                                            :     :  +- Project\n               :                                            :     :     +- BroadcastHashJoin\n               :                                            :     :        :- BroadcastExchange\n               :                                            :     :        :  +- ColumnarToRow\n               :                                            :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                            :     :        :           +- ReusedSubquery\n               :                                            :     :        +- CometNativeColumnarToRow\n               :                                            :     :           +- CometProject\n               :                                            :     :              +- CometFilter\n               :                                            :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n               :                                            :     +- BroadcastExchange\n               :                                            :        +- 
CometNativeColumnarToRow\n               :                                            :           +- CometProject\n               :                                            :              +- CometFilter\n               :                                            :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                            +- BroadcastExchange\n               :                                               +- CometNativeColumnarToRow\n               :                                                  +- CometProject\n               :                                                     +- CometFilter\n               :                                                        +- CometNativeScan parquet spark_catalog.default.web_site\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- HashAggregate\n                                          :  +- CometNativeColumnarToRow\n                                          :     +- CometColumnarExchange\n                                          :        +- HashAggregate\n                                          :           +- Project\n                                          :              +- BroadcastHashJoin\n                                          :                 :- Project\n                                          :                 :  +- BroadcastHashJoin\n                                          :                 :     :- Union\n                                          :                 :     :  :- Project\n                                          :                 :     :  :  +- Filter\n                                          :                 :     :  :     +- ColumnarToRow\n                                          :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                 :     :  :              +- SubqueryBroadcast\n                                          :                 :     :  :                 +- BroadcastExchange\n                                          :                 :     :  :                    +- CometNativeColumnarToRow\n                                          :                 :     :  :                       +- CometProject\n                                          :                 :     :  :                          +- CometFilter\n                                          :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                 :     :  +- Project\n                                          :                 :     :     +- Filter\n                                          :                 :     :        +- ColumnarToRow\n                                          :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n       
                                   :                 :     :                 +- ReusedSubquery\n                                          :                 :     +- BroadcastExchange\n                                          :                 :        +- CometNativeColumnarToRow\n                                          :                 :           +- CometProject\n                                          :                 :              +- CometFilter\n                                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                 +- BroadcastExchange\n                                          :                    +- CometNativeColumnarToRow\n                                          :                       +- CometProject\n                                          :                          +- CometFilter\n                                          :                             +- CometNativeScan parquet spark_catalog.default.store\n                                          :- HashAggregate\n                                          :  +- CometNativeColumnarToRow\n                                          :     +- CometColumnarExchange\n                                          :        +- HashAggregate\n                                          :           +- Project\n                                          :              +- BroadcastHashJoin\n                                          :                 :- Project\n                                          :                 :  +- BroadcastHashJoin\n                                          :                 :     :- Union\n                                          :                 :     :  :- Project\n                                          :                 :     :  :  +- Filter\n                                          :                 :     :  :     +- ColumnarToRow\n                                          :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                 :     :  :              +- ReusedSubquery\n                                          :                 :     :  +- Project\n                                          :                 :     :     +- Filter\n                                          :                 :     :        +- ColumnarToRow\n                                          :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                 :     :                 +- ReusedSubquery\n                                          :                 :     +- BroadcastExchange\n                                          :                 :        +- CometNativeColumnarToRow\n                                          :                 :           +- CometProject\n                                          :                 :              +- CometFilter\n                                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                 +- BroadcastExchange\n                                          :                    +- CometNativeColumnarToRow\n                
                          :                       +- CometProject\n                                          :                          +- CometFilter\n                                          :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n                                          +- HashAggregate\n                                             +- CometNativeColumnarToRow\n                                                +- CometColumnarExchange\n                                                   +- HashAggregate\n                                                      +- Project\n                                                         +- BroadcastHashJoin\n                                                            :- Project\n                                                            :  +- BroadcastHashJoin\n                                                            :     :- Union\n                                                            :     :  :- Project\n                                                            :     :  :  +- Filter\n                                                            :     :  :     +- ColumnarToRow\n                                                            :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                            :     :  :              +- ReusedSubquery\n                                                            :     :  +- Project\n                                                            :     :     +- BroadcastHashJoin\n                                                            :     :        :- BroadcastExchange\n                                                            :     :        :  +- ColumnarToRow\n                                                            :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                            :     :        :           +- ReusedSubquery\n                                                            :     :        +- CometNativeColumnarToRow\n                                                            :     :           +- CometProject\n                                                            :     :              +- CometFilter\n                                                            :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n                                                            :     +- BroadcastExchange\n                                                            :        +- CometNativeColumnarToRow\n                                                            :           +- CometProject\n                                                            :              +- CometFilter\n                                                            :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                            +- BroadcastExchange\n                                                               +- CometNativeColumnarToRow\n                                                                  +- CometProject\n                                                                     +- CometFilter\n                                                                        +- 
CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 89 out of 263 eligible operators (33%). Final plan contains 57 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q5a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometUnion\n               :           :              :     :  :- CometProject\n               :           :              :     :  :  +- CometFilter\n               :           :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :              :     :  :           +- SubqueryBroadcast\n               :           :              :     :  :              +- BroadcastExchange\n               :           :              :     :  :                 +- CometNativeColumnarToRow\n               :           :              :     :  :                    +- CometProject\n               :           :              :     :  :                       +- CometFilter\n               :           :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :  +- CometProject\n               :           :              :     :     +- CometFilter\n               :           :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :           :              :     :              +- ReusedSubquery\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometUnion\n               :           :              :     :  :- CometProject\n               :           :              :     :  :  +- CometFilter\n               :           :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :              :     :  :           +- ReusedSubquery\n               
:           :              :     :  +- CometProject\n               :           :              :     :     +- CometFilter\n               :           :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :           :              :     :              +- ReusedSubquery\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometProject\n               :                          :  +- CometBroadcastHashJoin\n               :                          :     :- CometUnion\n               :                          :     :  :- CometProject\n               :                          :     :  :  +- CometFilter\n               :                          :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :     :  :           +- ReusedSubquery\n               :                          :     :  +- CometProject\n               :                          :     :     +- CometBroadcastHashJoin\n               :                          :     :        :- CometBroadcastExchange\n               :                          :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                          :     :        :        +- ReusedSubquery\n               :                          :     :        +- CometProject\n               :                          :     :           +- CometFilter\n               :                          :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :     +- CometBroadcastExchange\n               :                          :        +- CometProject\n               :                          :           +- CometFilter\n               :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :               
  +- CometUnion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometUnion\n               :                    :              :     :  :- CometProject\n               :                    :              :     :  :  +- CometFilter\n               :                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :              :     :  :           +- SubqueryBroadcast\n               :                    :              :     :  :              +- BroadcastExchange\n               :                    :              :     :  :                 +- CometNativeColumnarToRow\n               :                    :              :     :  :                    +- CometProject\n               :                    :              :     :  :                       +- CometFilter\n               :                    :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :  +- CometProject\n               :                    :              :     :     +- CometFilter\n               :                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :                    :              :     :              +- ReusedSubquery\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometUnion\n               :                    :              :     :  :- CometProject\n               :                    :              :     :  :  +- CometFilter\n               :                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :              :     :  
:           +- ReusedSubquery\n               :                    :              :     :  +- CometProject\n               :                    :              :     :     +- CometFilter\n               :                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                    :              :     :              +- ReusedSubquery\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :                    +- CometHashAggregate\n               :                       +- CometExchange\n               :                          +- CometHashAggregate\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometProject\n               :                                   :  +- CometBroadcastHashJoin\n               :                                   :     :- CometUnion\n               :                                   :     :  :- CometProject\n               :                                   :     :  :  +- CometFilter\n               :                                   :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :     :  :           +- ReusedSubquery\n               :                                   :     :  +- CometProject\n               :                                   :     :     +- CometBroadcastHashJoin\n               :                                   :     :        :- CometBroadcastExchange\n               :                                   :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                   :     :        :        +- ReusedSubquery\n               :                                   :     :        +- CometProject\n               :                                   :     :           +- CometFilter\n               :                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :     +- CometBroadcastExchange\n               :                                   :        +- CometProject\n               :                                   :           +- CometFilter\n               :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :         
                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometUnion\n                                    :              :     :  :- CometProject\n                                    :              :     :  :  +- CometFilter\n                                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :              :     :  :           +- SubqueryBroadcast\n                                    :              :     :  :              +- BroadcastExchange\n                                    :              :     :  :                 +- CometNativeColumnarToRow\n                                    :              :     :  :                    +- CometProject\n                                    :              :     :  :                       +- CometFilter\n                                    :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :  +- CometProject\n                                    :              :     :     +- CometFilter\n                                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                    :              :     :              +- ReusedSubquery\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                              
      :              :     :- CometUnion\n                                    :              :     :  :- CometProject\n                                    :              :     :  :  +- CometFilter\n                                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :              :     :  :           +- ReusedSubquery\n                                    :              :     :  +- CometProject\n                                    :              :     :     +- CometFilter\n                                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                    :              :     :              +- ReusedSubquery\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometUnion\n                                                   :     :  :- CometProject\n                                                   :     :  :  +- CometFilter\n                                                   :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     :  :           +- ReusedSubquery\n                                                   :     :  +- CometProject\n                                                   :     :     +- CometBroadcastHashJoin\n                                                   :     :        :- CometBroadcastExchange\n                                                   :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                                   :     :        :        +- ReusedSubquery\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           
+- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 242 out of 263 eligible operators (92%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q6.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- CometNativeColumnarToRow\n                     :     :     :  +- CometProject\n                     :     :     :     +- CometBroadcastHashJoin\n                     :     :     :        :- CometProject\n                     :     :     :        :  +- CometFilter\n                     :     :     :        :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     :     :        +- CometBroadcastExchange\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     +- BroadcastExchange\n                     :     :        +- Filter\n                     :     :           +- ColumnarToRow\n                     :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :                    +- SubqueryBroadcast\n                     :     :                       +- BroadcastExchange\n                     :     :                          +- CometNativeColumnarToRow\n                     :     :                             +- CometProject\n                     :     :                                +- CometFilter\n                     :     :                                   :  +- Subquery\n                     :     :                                   :     +- CometNativeColumnarToRow\n                     :     :                                   :        +- CometHashAggregate\n                     :     :                                   :           +- CometExchange\n                     :     :                                   :              +- CometHashAggregate\n                     :     :                                   :                 +- CometProject\n                     :     :                                   :                    +- CometFilter\n                     :     :                                   :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 :  +- Subquery\n                     :                 :     +- CometNativeColumnarToRow\n                     :                 :        +- CometHashAggregate\n                     :                 :           +- CometExchange\n                     :                 :              +- CometHashAggregate\n                     :                 :                 +- CometProject\n                     :                 :                    +- CometFilter\n                     :                 :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :       
          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometFilter\n                                 :  +- CometNativeScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 39 out of 58 eligible operators (67%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q6.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometFilter\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometFilter\n                     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometFilter\n                     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :                 +- SubqueryBroadcast\n                     :     :                    +- BroadcastExchange\n                     :     :                       +- CometNativeColumnarToRow\n                     :     :                          +- CometProject\n                     :     :                             +- CometFilter\n                     :     :                                :  +- Subquery\n                     :     :                                :     +- CometNativeColumnarToRow\n                     :     :                                :        +- CometHashAggregate\n                     :     :                                :           +- CometExchange\n                     :     :                                :              +- CometHashAggregate\n                     :     :                                :                 +- CometProject\n                     :     :                                :                    +- CometFilter\n                     :     :                                :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              :  +- ReusedSubquery\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometFilter\n                              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n    
                                            +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 48 out of 52 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q64.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n            :                 :     :   
  :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     
                  +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     
:     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                
 :        +- CometFilter\n            :                 :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometNativeScan parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     
:     :     :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                           
   :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :   
  :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n             
                 :     :        +- CometFilter\n                              :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 228 out of 242 eligible operators (94%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q64.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n            :                 :     :     :     
:     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             
:- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometFilter\n            :       
          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :   
  :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     : 
    :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     : 
    :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometFilter\n                              :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 238 out of 242 eligible operators (98%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q67a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- Union\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- Project\n                  :              +- BroadcastHashJoin\n                  :                 :- Project\n                  :                 :  +- BroadcastHashJoin\n                  :                 :     :- Project\n                  :                 :     :  +- BroadcastHashJoin\n                  :                 :     :     :- Filter\n                  :                 :     :     :  +- ColumnarToRow\n                  :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                 :     :     :           +- SubqueryBroadcast\n                  :                 :     :     :              +- BroadcastExchange\n                  :                 :     :     :                 +- CometNativeColumnarToRow\n                  :                 :     :     :                    +- CometProject\n                  :                 :     :     :                       +- CometFilter\n                  :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 :     :     +- BroadcastExchange\n                  :                 :     :        +- CometNativeColumnarToRow\n                  :                 :     :           +- CometProject\n                  :                 :     :              +- CometFilter\n                  :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                 :     +- BroadcastExchange\n                  :                 :        +- CometNativeColumnarToRow\n                  :                 :           +- CometProject\n                  :                 :              +- CometFilter\n                  :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n                  :                 +- BroadcastExchange\n                  :                    +- CometNativeColumnarToRow\n                  :                       +- CometProject\n                  :                          +- CometFilter\n                  :                             +- CometNativeScan parquet spark_catalog.default.item\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- HashAggregate\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometColumnarExchange\n                  :                    +- HashAggregate\n                  :                       +- Project\n                  :                          +- 
BroadcastHashJoin\n                  :                             :- Project\n                  :                             :  +- BroadcastHashJoin\n                  :                             :     :- Project\n                  :                             :     :  +- BroadcastHashJoin\n                  :                             :     :     :- Filter\n                  :                             :     :     :  +- ColumnarToRow\n                  :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                             :     :     :           +- SubqueryBroadcast\n                  :                             :     :     :              +- BroadcastExchange\n                  :                             :     :     :                 +- CometNativeColumnarToRow\n                  :                             :     :     :                    +- CometProject\n                  :                             :     :     :                       +- CometFilter\n                  :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     :     +- BroadcastExchange\n                  :                             :     :        +- CometNativeColumnarToRow\n                  :                             :     :           +- CometProject\n                  :                             :     :              +- CometFilter\n                  :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     +- BroadcastExchange\n                  :                             :        +- CometNativeColumnarToRow\n                  :                             :           +- CometProject\n                  :                             :              +- CometFilter\n                  :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                  :                             +- BroadcastExchange\n                  :                                +- CometNativeColumnarToRow\n                  :                                   +- CometProject\n                  :                                      +- CometFilter\n                  :                                         +- CometNativeScan parquet spark_catalog.default.item\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- HashAggregate\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometColumnarExchange\n                  :                    +- HashAggregate\n                  :                       +- Project\n                  :                          +- BroadcastHashJoin\n                  :                             :- Project\n                  :                             :  +- BroadcastHashJoin\n                  :                             :     :- Project\n                  :                             :     :  +- BroadcastHashJoin\n                  :                             :     :     :- Filter\n                  :                             :     :     :  +- 
ColumnarToRow\n                  :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                             :     :     :           +- SubqueryBroadcast\n                  :                             :     :     :              +- BroadcastExchange\n                  :                             :     :     :                 +- CometNativeColumnarToRow\n                  :                             :     :     :                    +- CometProject\n                  :                             :     :     :                       +- CometFilter\n                  :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     :     +- BroadcastExchange\n                  :                             :     :        +- CometNativeColumnarToRow\n                  :                             :     :           +- CometProject\n                  :                             :     :              +- CometFilter\n                  :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     +- BroadcastExchange\n                  :                             :        +- CometNativeColumnarToRow\n                  :                             :           +- CometProject\n                  :                             :              +- CometFilter\n                  :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                  :                             +- BroadcastExchange\n                  :                                +- CometNativeColumnarToRow\n                  :                                   +- CometProject\n                  :                                      +- CometFilter\n                  :                                         +- CometNativeScan parquet spark_catalog.default.item\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- HashAggregate\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometColumnarExchange\n                  :                    +- HashAggregate\n                  :                       +- Project\n                  :                          +- BroadcastHashJoin\n                  :                             :- Project\n                  :                             :  +- BroadcastHashJoin\n                  :                             :     :- Project\n                  :                             :     :  +- BroadcastHashJoin\n                  :                             :     :     :- Filter\n                  :                             :     :     :  +- ColumnarToRow\n                  :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                             :     :     :           +- SubqueryBroadcast\n                  :                             :     :     :              +- BroadcastExchange\n                  :                 
            :     :     :                 +- CometNativeColumnarToRow\n                  :                             :     :     :                    +- CometProject\n                  :                             :     :     :                       +- CometFilter\n                  :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     :     +- BroadcastExchange\n                  :                             :     :        +- CometNativeColumnarToRow\n                  :                             :     :           +- CometProject\n                  :                             :     :              +- CometFilter\n                  :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     +- BroadcastExchange\n                  :                             :        +- CometNativeColumnarToRow\n                  :                             :           +- CometProject\n                  :                             :              +- CometFilter\n                  :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                  :                             +- BroadcastExchange\n                  :                                +- CometNativeColumnarToRow\n                  :                                   +- CometProject\n                  :                                      +- CometFilter\n                  :                                         +- CometNativeScan parquet spark_catalog.default.item\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- HashAggregate\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometColumnarExchange\n                  :                    +- HashAggregate\n                  :                       +- Project\n                  :                          +- BroadcastHashJoin\n                  :                             :- Project\n                  :                             :  +- BroadcastHashJoin\n                  :                             :     :- Project\n                  :                             :     :  +- BroadcastHashJoin\n                  :                             :     :     :- Filter\n                  :                             :     :     :  +- ColumnarToRow\n                  :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                             :     :     :           +- SubqueryBroadcast\n                  :                             :     :     :              +- BroadcastExchange\n                  :                             :     :     :                 +- CometNativeColumnarToRow\n                  :                             :     :     :                    +- CometProject\n                  :                             :     :     :                       +- CometFilter\n                  :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :  
                           :     :     +- BroadcastExchange\n                  :                             :     :        +- CometNativeColumnarToRow\n                  :                             :     :           +- CometProject\n                  :                             :     :              +- CometFilter\n                  :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     +- BroadcastExchange\n                  :                             :        +- CometNativeColumnarToRow\n                  :                             :           +- CometProject\n                  :                             :              +- CometFilter\n                  :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                  :                             +- BroadcastExchange\n                  :                                +- CometNativeColumnarToRow\n                  :                                   +- CometProject\n                  :                                      +- CometFilter\n                  :                                         +- CometNativeScan parquet spark_catalog.default.item\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- HashAggregate\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometColumnarExchange\n                  :                    +- HashAggregate\n                  :                       +- Project\n                  :                          +- BroadcastHashJoin\n                  :                             :- Project\n                  :                             :  +- BroadcastHashJoin\n                  :                             :     :- Project\n                  :                             :     :  +- BroadcastHashJoin\n                  :                             :     :     :- Filter\n                  :                             :     :     :  +- ColumnarToRow\n                  :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                             :     :     :           +- SubqueryBroadcast\n                  :                             :     :     :              +- BroadcastExchange\n                  :                             :     :     :                 +- CometNativeColumnarToRow\n                  :                             :     :     :                    +- CometProject\n                  :                             :     :     :                       +- CometFilter\n                  :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     :     +- BroadcastExchange\n                  :                             :     :        +- CometNativeColumnarToRow\n                  :                             :     :           +- CometProject\n                  :                             :     :              +- CometFilter\n                  :                             :     :                 +- CometNativeScan parquet 
spark_catalog.default.date_dim\n                  :                             :     +- BroadcastExchange\n                  :                             :        +- CometNativeColumnarToRow\n                  :                             :           +- CometProject\n                  :                             :              +- CometFilter\n                  :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                  :                             +- BroadcastExchange\n                  :                                +- CometNativeColumnarToRow\n                  :                                   +- CometProject\n                  :                                      +- CometFilter\n                  :                                         +- CometNativeScan parquet spark_catalog.default.item\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- HashAggregate\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometColumnarExchange\n                  :                    +- HashAggregate\n                  :                       +- Project\n                  :                          +- BroadcastHashJoin\n                  :                             :- Project\n                  :                             :  +- BroadcastHashJoin\n                  :                             :     :- Project\n                  :                             :     :  +- BroadcastHashJoin\n                  :                             :     :     :- Filter\n                  :                             :     :     :  +- ColumnarToRow\n                  :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                             :     :     :           +- SubqueryBroadcast\n                  :                             :     :     :              +- BroadcastExchange\n                  :                             :     :     :                 +- CometNativeColumnarToRow\n                  :                             :     :     :                    +- CometProject\n                  :                             :     :     :                       +- CometFilter\n                  :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     :     +- BroadcastExchange\n                  :                             :     :        +- CometNativeColumnarToRow\n                  :                             :     :           +- CometProject\n                  :                             :     :              +- CometFilter\n                  :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     +- BroadcastExchange\n                  :                             :        +- CometNativeColumnarToRow\n                  :                             :           +- CometProject\n                  :                             :              +- CometFilter\n                  :                             :                 +- CometNativeScan 
parquet spark_catalog.default.store\n                  :                             +- BroadcastExchange\n                  :                                +- CometNativeColumnarToRow\n                  :                                   +- CometProject\n                  :                                      +- CometFilter\n                  :                                         +- CometNativeScan parquet spark_catalog.default.item\n                  :- HashAggregate\n                  :  +- CometNativeColumnarToRow\n                  :     +- CometColumnarExchange\n                  :        +- HashAggregate\n                  :           +- HashAggregate\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometColumnarExchange\n                  :                    +- HashAggregate\n                  :                       +- Project\n                  :                          +- BroadcastHashJoin\n                  :                             :- Project\n                  :                             :  +- BroadcastHashJoin\n                  :                             :     :- Project\n                  :                             :     :  +- BroadcastHashJoin\n                  :                             :     :     :- Filter\n                  :                             :     :     :  +- ColumnarToRow\n                  :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :                             :     :     :           +- SubqueryBroadcast\n                  :                             :     :     :              +- BroadcastExchange\n                  :                             :     :     :                 +- CometNativeColumnarToRow\n                  :                             :     :     :                    +- CometProject\n                  :                             :     :     :                       +- CometFilter\n                  :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     :     +- BroadcastExchange\n                  :                             :     :        +- CometNativeColumnarToRow\n                  :                             :     :           +- CometProject\n                  :                             :     :              +- CometFilter\n                  :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :                             :     +- BroadcastExchange\n                  :                             :        +- CometNativeColumnarToRow\n                  :                             :           +- CometProject\n                  :                             :              +- CometFilter\n                  :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                  :                             +- BroadcastExchange\n                  :                                +- CometNativeColumnarToRow\n                  :                                   +- CometProject\n                  :                                      +- CometFilter\n                  :                                         +- CometNativeScan parquet 
spark_catalog.default.item\n                  +- HashAggregate\n                     +- CometNativeColumnarToRow\n                        +- CometColumnarExchange\n                           +- HashAggregate\n                              +- HashAggregate\n                                 +- CometNativeColumnarToRow\n                                    +- CometColumnarExchange\n                                       +- HashAggregate\n                                          +- Project\n                                             +- BroadcastHashJoin\n                                                :- Project\n                                                :  +- BroadcastHashJoin\n                                                :     :- Project\n                                                :     :  +- BroadcastHashJoin\n                                                :     :     :- Filter\n                                                :     :     :  +- ColumnarToRow\n                                                :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                :     :     :           +- SubqueryBroadcast\n                                                :     :     :              +- BroadcastExchange\n                                                :     :     :                 +- CometNativeColumnarToRow\n                                                :     :     :                    +- CometProject\n                                                :     :     :                       +- CometFilter\n                                                :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                :     :     +- BroadcastExchange\n                                                :     :        +- CometNativeColumnarToRow\n                                                :     :           +- CometProject\n                                                :     :              +- CometFilter\n                                                :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                :     +- BroadcastExchange\n                                                :        +- CometNativeColumnarToRow\n                                                :           +- CometProject\n                                                :              +- CometFilter\n                                                :                 +- CometNativeScan parquet spark_catalog.default.store\n                                                +- BroadcastExchange\n                                                   +- CometNativeColumnarToRow\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 127 out of 282 eligible operators (45%). Final plan contains 63 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q67a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometUnion\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometProject\n                  :           +- CometBroadcastHashJoin\n                  :              :- CometProject\n                  :              :  +- CometBroadcastHashJoin\n                  :              :     :- CometProject\n                  :              :     :  +- CometBroadcastHashJoin\n                  :              :     :     :- CometFilter\n                  :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :              :     :     :        +- SubqueryBroadcast\n                  :              :     :     :           +- BroadcastExchange\n                  :              :     :     :              +- CometNativeColumnarToRow\n                  :              :     :     :                 +- CometProject\n                  :              :     :     :                    +- CometFilter\n                  :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     :     +- CometBroadcastExchange\n                  :              :     :        +- CometProject\n                  :              :     :           +- CometFilter\n                  :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :              :     +- CometBroadcastExchange\n                  :              :        +- CometProject\n                  :              :           +- CometFilter\n                  :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :              +- CometBroadcastExchange\n                  :                 +- CometProject\n                  :                    +- CometFilter\n                  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometHashAggregate\n                  :           +- CometExchange\n                  :              +- CometHashAggregate\n                  :                 +- CometProject\n                  :                    +- CometBroadcastHashJoin\n                  :                       :- CometProject\n                  :                       :  +- CometBroadcastHashJoin\n                  :                       :     :- CometProject\n                  :                       :     :  +- CometBroadcastHashJoin\n                  :                       :     :     :- CometFilter\n                  :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                       
:     :     :        +- SubqueryBroadcast\n                  :                       :     :     :           +- BroadcastExchange\n                  :                       :     :     :              +- CometNativeColumnarToRow\n                  :                       :     :     :                 +- CometProject\n                  :                       :     :     :                    +- CometFilter\n                  :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     :     +- CometBroadcastExchange\n                  :                       :     :        +- CometProject\n                  :                       :     :           +- CometFilter\n                  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     +- CometBroadcastExchange\n                  :                       :        +- CometProject\n                  :                       :           +- CometFilter\n                  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :                       +- CometBroadcastExchange\n                  :                          +- CometProject\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometHashAggregate\n                  :           +- CometExchange\n                  :              +- CometHashAggregate\n                  :                 +- CometProject\n                  :                    +- CometBroadcastHashJoin\n                  :                       :- CometProject\n                  :                       :  +- CometBroadcastHashJoin\n                  :                       :     :- CometProject\n                  :                       :     :  +- CometBroadcastHashJoin\n                  :                       :     :     :- CometFilter\n                  :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                       :     :     :        +- SubqueryBroadcast\n                  :                       :     :     :           +- BroadcastExchange\n                  :                       :     :     :              +- CometNativeColumnarToRow\n                  :                       :     :     :                 +- CometProject\n                  :                       :     :     :                    +- CometFilter\n                  :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     :     +- CometBroadcastExchange\n                  :                       :     :        +- CometProject\n                  :                       :     :           +- CometFilter\n                  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     +- CometBroadcastExchange\n                  :     
                  :        +- CometProject\n                  :                       :           +- CometFilter\n                  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :                       +- CometBroadcastExchange\n                  :                          +- CometProject\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometHashAggregate\n                  :           +- CometExchange\n                  :              +- CometHashAggregate\n                  :                 +- CometProject\n                  :                    +- CometBroadcastHashJoin\n                  :                       :- CometProject\n                  :                       :  +- CometBroadcastHashJoin\n                  :                       :     :- CometProject\n                  :                       :     :  +- CometBroadcastHashJoin\n                  :                       :     :     :- CometFilter\n                  :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                       :     :     :        +- SubqueryBroadcast\n                  :                       :     :     :           +- BroadcastExchange\n                  :                       :     :     :              +- CometNativeColumnarToRow\n                  :                       :     :     :                 +- CometProject\n                  :                       :     :     :                    +- CometFilter\n                  :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     :     +- CometBroadcastExchange\n                  :                       :     :        +- CometProject\n                  :                       :     :           +- CometFilter\n                  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     +- CometBroadcastExchange\n                  :                       :        +- CometProject\n                  :                       :           +- CometFilter\n                  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :                       +- CometBroadcastExchange\n                  :                          +- CometProject\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometHashAggregate\n                  :           +- CometExchange\n                  :              +- CometHashAggregate\n                  :                 +- CometProject\n                  :                    +- CometBroadcastHashJoin\n                  :                       :- CometProject\n           
       :                       :  +- CometBroadcastHashJoin\n                  :                       :     :- CometProject\n                  :                       :     :  +- CometBroadcastHashJoin\n                  :                       :     :     :- CometFilter\n                  :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                       :     :     :        +- SubqueryBroadcast\n                  :                       :     :     :           +- BroadcastExchange\n                  :                       :     :     :              +- CometNativeColumnarToRow\n                  :                       :     :     :                 +- CometProject\n                  :                       :     :     :                    +- CometFilter\n                  :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     :     +- CometBroadcastExchange\n                  :                       :     :        +- CometProject\n                  :                       :     :           +- CometFilter\n                  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     +- CometBroadcastExchange\n                  :                       :        +- CometProject\n                  :                       :           +- CometFilter\n                  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :                       +- CometBroadcastExchange\n                  :                          +- CometProject\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometHashAggregate\n                  :           +- CometExchange\n                  :              +- CometHashAggregate\n                  :                 +- CometProject\n                  :                    +- CometBroadcastHashJoin\n                  :                       :- CometProject\n                  :                       :  +- CometBroadcastHashJoin\n                  :                       :     :- CometProject\n                  :                       :     :  +- CometBroadcastHashJoin\n                  :                       :     :     :- CometFilter\n                  :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                       :     :     :        +- SubqueryBroadcast\n                  :                       :     :     :           +- BroadcastExchange\n                  :                       :     :     :              +- CometNativeColumnarToRow\n                  :                       :     :     :                 +- CometProject\n                  :                       :     :     :                    +- CometFilter\n                  :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                 
 :                       :     :     +- CometBroadcastExchange\n                  :                       :     :        +- CometProject\n                  :                       :     :           +- CometFilter\n                  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     +- CometBroadcastExchange\n                  :                       :        +- CometProject\n                  :                       :           +- CometFilter\n                  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :                       +- CometBroadcastExchange\n                  :                          +- CometProject\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometHashAggregate\n                  :           +- CometExchange\n                  :              +- CometHashAggregate\n                  :                 +- CometProject\n                  :                    +- CometBroadcastHashJoin\n                  :                       :- CometProject\n                  :                       :  +- CometBroadcastHashJoin\n                  :                       :     :- CometProject\n                  :                       :     :  +- CometBroadcastHashJoin\n                  :                       :     :     :- CometFilter\n                  :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                       :     :     :        +- SubqueryBroadcast\n                  :                       :     :     :           +- BroadcastExchange\n                  :                       :     :     :              +- CometNativeColumnarToRow\n                  :                       :     :     :                 +- CometProject\n                  :                       :     :     :                    +- CometFilter\n                  :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     :     +- CometBroadcastExchange\n                  :                       :     :        +- CometProject\n                  :                       :     :           +- CometFilter\n                  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     +- CometBroadcastExchange\n                  :                       :        +- CometProject\n                  :                       :           +- CometFilter\n                  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :                       +- CometBroadcastExchange\n                  :                          +- CometProject\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                 
 :- CometHashAggregate\n                  :  +- CometExchange\n                  :     +- CometHashAggregate\n                  :        +- CometHashAggregate\n                  :           +- CometExchange\n                  :              +- CometHashAggregate\n                  :                 +- CometProject\n                  :                    +- CometBroadcastHashJoin\n                  :                       :- CometProject\n                  :                       :  +- CometBroadcastHashJoin\n                  :                       :     :- CometProject\n                  :                       :     :  +- CometBroadcastHashJoin\n                  :                       :     :     :- CometFilter\n                  :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :                       :     :     :        +- SubqueryBroadcast\n                  :                       :     :     :           +- BroadcastExchange\n                  :                       :     :     :              +- CometNativeColumnarToRow\n                  :                       :     :     :                 +- CometProject\n                  :                       :     :     :                    +- CometFilter\n                  :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     :     +- CometBroadcastExchange\n                  :                       :     :        +- CometProject\n                  :                       :     :           +- CometFilter\n                  :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :                       :     +- CometBroadcastExchange\n                  :                       :        +- CometProject\n                  :                       :           +- CometFilter\n                  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                  :                       +- CometBroadcastExchange\n                  :                          +- CometProject\n                  :                             +- CometFilter\n                  :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometProject\n                                       +- CometBroadcastHashJoin\n                                          :- CometProject\n                                          :  +- CometBroadcastHashJoin\n                                          :     :- CometProject\n                                          :     :  +- CometBroadcastHashJoin\n                                          :     :     :- CometFilter\n                                          :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                          :     :     :        +- SubqueryBroadcast\n                                          :     :     :           +- 
BroadcastExchange\n                                          :     :     :              +- CometNativeColumnarToRow\n                                          :     :     :                 +- CometProject\n                                          :     :     :                    +- CometFilter\n                                          :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :     :     +- CometBroadcastExchange\n                                          :     :        +- CometProject\n                                          :     :           +- CometFilter\n                                          :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :     +- CometBroadcastExchange\n                                          :        +- CometProject\n                                          :           +- CometFilter\n                                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                          +- CometBroadcastExchange\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 261 out of 282 eligible operators (92%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q70a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Filter\n                           :                 :     :  +- ColumnarToRow\n                           :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :           +- SubqueryBroadcast\n                           :                 :     :              +- BroadcastExchange\n                           :                 :     :                 +- CometNativeColumnarToRow\n                           :                 :     :                    +- CometProject\n                           :                 :     :                       +- CometFilter\n                           :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 +- BroadcastExchange\n                           :                    +- Project\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometFilter\n                           :                          :     +- CometNativeScan parquet spark_catalog.default.store\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- Filter\n                           :                                   +- Window\n                           :                                      +- Sort\n                           :                                         +- HashAggregate\n                           :                                            +- CometNativeColumnarToRow\n                           :                                               +- CometColumnarExchange\n          
                 :                                                  +- HashAggregate\n                           :                                                     +- Project\n                           :                                                        +- BroadcastHashJoin\n                           :                                                           :- Project\n                           :                                                           :  +- BroadcastHashJoin\n                           :                                                           :     :- Filter\n                           :                                                           :     :  +- ColumnarToRow\n                           :                                                           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                                                           :     :           +- ReusedSubquery\n                           :                                                           :     +- BroadcastExchange\n                           :                                                           :        +- CometNativeColumnarToRow\n                           :                                                           :           +- CometProject\n                           :                                                           :              +- CometFilter\n                           :                                                           :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                                                           +- BroadcastExchange\n                           :                                                              +- CometNativeColumnarToRow\n                           :                                                                 +- CometProject\n                           :                                                                    +- CometFilter\n                           :                                                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Filter\n                           :                             :     :  +- ColumnarToRow\n                           :                             :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :           +- SubqueryBroadcast\n                       
    :                             :     :              +- BroadcastExchange\n                           :                             :     :                 +- CometNativeColumnarToRow\n                           :                             :     :                    +- CometProject\n                           :                             :     :                       +- CometFilter\n                           :                             :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             +- BroadcastExchange\n                           :                                +- Project\n                           :                                   +- BroadcastHashJoin\n                           :                                      :- CometNativeColumnarToRow\n                           :                                      :  +- CometFilter\n                           :                                      :     +- CometNativeScan parquet spark_catalog.default.store\n                           :                                      +- BroadcastExchange\n                           :                                         +- Project\n                           :                                            +- Filter\n                           :                                               +- Window\n                           :                                                  +- Sort\n                           :                                                     +- HashAggregate\n                           :                                                        +- CometNativeColumnarToRow\n                           :                                                           +- CometColumnarExchange\n                           :                                                              +- HashAggregate\n                           :                                                                 +- Project\n                           :                                                                    +- BroadcastHashJoin\n                           :                                                                       :- Project\n                           :                                                                       :  +- BroadcastHashJoin\n                           :                                                                       :     :- Filter\n                           :                                                                       :     :  +- ColumnarToRow\n                           :                                                                       :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                                                                       :     :           +- ReusedSubquery\n                           :                      
                                                 :     +- BroadcastExchange\n                           :                                                                       :        +- CometNativeColumnarToRow\n                           :                                                                       :           +- CometProject\n                           :                                                                       :              +- CometFilter\n                           :                                                                       :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                                                                       +- BroadcastExchange\n                           :                                                                          +- CometNativeColumnarToRow\n                           :                                                                             +- CometProject\n                           :                                                                                +- CometFilter\n                           :                                                                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Filter\n                                                         :     :  +- ColumnarToRow\n                                                         :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :           +- SubqueryBroadcast\n                                                         :     :              +- BroadcastExchange\n                                                         :     :                 +- CometNativeColumnarToRow\n                                                         :     :                    +- CometProject\n                                                         :     :                       +- CometFilter\n                                                         :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                                         :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan 
parquet spark_catalog.default.date_dim\n                                                         +- BroadcastExchange\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- CometNativeColumnarToRow\n                                                                  :  +- CometFilter\n                                                                  :     +- CometNativeScan parquet spark_catalog.default.store\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +- Filter\n                                                                           +- Window\n                                                                              +- Sort\n                                                                                 +- HashAggregate\n                                                                                    +- CometNativeColumnarToRow\n                                                                                       +- CometColumnarExchange\n                                                                                          +- HashAggregate\n                                                                                             +- Project\n                                                                                                +- BroadcastHashJoin\n                                                                                                   :- Project\n                                                                                                   :  +- BroadcastHashJoin\n                                                                                                   :     :- Filter\n                                                                                                   :     :  +- ColumnarToRow\n                                                                                                   :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                                                   :     :           +- ReusedSubquery\n                                                                                                   :     +- BroadcastExchange\n                                                                                                   :        +- CometNativeColumnarToRow\n                                                                                                   :           +- CometProject\n                                                                                                   :              +- CometFilter\n                                                                                                   :                 +- CometNativeScan parquet spark_catalog.default.store\n                                                                                                   +- BroadcastExchange\n                                                                                                      +- CometNativeColumnarToRow\n                                                                                           
              +- CometProject\n                                                                                                            +- CometFilter\n                                                                                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 54 out of 153 eligible operators (35%). Final plan contains 30 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q70a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- CometNativeColumnarToRow\n                           :                 :  +- CometProject\n                           :                 :     +- CometBroadcastHashJoin\n                           :                 :        :- CometFilter\n                           :                 :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                 :        :        +- SubqueryBroadcast\n                           :                 :        :           +- BroadcastExchange\n                           :                 :        :              +- CometNativeColumnarToRow\n                           :                 :        :                 +- CometProject\n                           :                 :        :                    +- CometFilter\n                           :                 :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                 :        +- CometBroadcastExchange\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                 +- BroadcastExchange\n                           :                    +- Project\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometFilter\n                           :                          :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- Filter\n                           :                                   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                           :                                      +- CometNativeColumnarToRow\n                           :                                         +- CometSort\n                           :                                            +- CometHashAggregate\n                           :                                               +- CometExchange\n                           :                                                  +- CometHashAggregate\n                           :                                                     +- CometProject\n                           :                                                        +- CometBroadcastHashJoin\n                           :                                                           :- CometProject\n                           :                                                           :  +- CometBroadcastHashJoin\n                           :                                                           :     :- CometFilter\n                           :                                                           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                                                           :     :        +- ReusedSubquery\n                           :                                                           :     +- CometBroadcastExchange\n                           :                                                           :        +- CometProject\n                           :                                                           :           +- CometFilter\n                           :                                                           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                                                           +- CometBroadcastExchange\n                           :                                                              +- CometProject\n                           :                                                                 +- CometFilter\n                           :                                                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- CometNativeColumnarToRow\n                           :                             :  +- CometProject\n                           :                             :     +- CometBroadcastHashJoin\n                           :                             :        :- CometFilter\n                           :                             :        :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.store_sales\n                           :                             :        :        +- SubqueryBroadcast\n                           :                             :        :           +- BroadcastExchange\n                           :                             :        :              +- CometNativeColumnarToRow\n                           :                             :        :                 +- CometProject\n                           :                             :        :                    +- CometFilter\n                           :                             :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                             :        +- CometBroadcastExchange\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                             +- BroadcastExchange\n                           :                                +- Project\n                           :                                   +- BroadcastHashJoin\n                           :                                      :- CometNativeColumnarToRow\n                           :                                      :  +- CometFilter\n                           :                                      :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                                      +- BroadcastExchange\n                           :                                         +- Project\n                           :                                            +- Filter\n                           :                                               +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                           :                                                  +- CometNativeColumnarToRow\n                           :                                                     +- CometSort\n                           :                                                        +- CometHashAggregate\n                           :                                                           +- CometExchange\n                           :                                                              +- CometHashAggregate\n                           :                                                                 +- CometProject\n                           :                                                                    +- CometBroadcastHashJoin\n                           :                                                                       :- CometProject\n                           :                                                                       :  +- CometBroadcastHashJoin\n                           :                                                                       :     :- CometFilter\n                           :                                                                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                                                                       :     :        +- ReusedSubquery\n                           :                                                                       :     +- CometBroadcastExchange\n                           :                                                                       :        +- CometProject\n                           :                                                                       :           +- CometFilter\n                           :                                                                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                                                                       +- CometBroadcastExchange\n                           :                                                                          +- CometProject\n                           :                                                                             +- CometFilter\n                           :                                                                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- CometNativeColumnarToRow\n                                                         :  +- CometProject\n                                                         :     +- 
CometBroadcastHashJoin\n                                                         :        :- CometFilter\n                                                         :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                         :        :        +- SubqueryBroadcast\n                                                         :        :           +- BroadcastExchange\n                                                         :        :              +- CometNativeColumnarToRow\n                                                         :        :                 +- CometProject\n                                                         :        :                    +- CometFilter\n                                                         :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                         :        +- CometBroadcastExchange\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                         +- BroadcastExchange\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- CometNativeColumnarToRow\n                                                                  :  +- CometFilter\n                                                                  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +- Filter\n                                                                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                              +- CometNativeColumnarToRow\n                                                                                 +- CometSort\n                                                                                    +- CometHashAggregate\n                                                                                       +- CometExchange\n                                                                                          +- CometHashAggregate\n                                                                                             +- CometProject\n                                                                                                +- CometBroadcastHashJoin\n                                                                                                   :- CometProject\n                                                                                                   :  +- CometBroadcastHashJoin\n                                                                                                   :     :- CometFilter\n                                                                                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                                                   :     :        +- ReusedSubquery\n                                                                                                   :     +- CometBroadcastExchange\n                                                                                                   :        +- CometProject\n                                                                                                   :           +- CometFilter\n                                                                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                                                                   +- CometBroadcastExchange\n                                                                                                      +- CometProject\n                                                                                                         +- CometFilter\n                                                                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 102 out of 153 eligible operators (66%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q72.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometColumnarExchange\n                  :     +- Project\n                  :        +- BroadcastHashJoin\n                  :           :- Project\n                  :           :  +- BroadcastHashJoin\n                  :           :     :- Project\n                  :           :     :  +- BroadcastHashJoin\n                  :           :     :     :- Project\n                  :           :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :- Project\n                  :           :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :- Project\n                  :           :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- Filter\n                  :           :     :     :     :     :     :     :     :     :  +- ColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :           :     :     :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :              +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                    +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                       +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n               
   :           :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :           +- CometProject\n                  :           :     :     :     :     :              +- CometFilter\n                  :           :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :           +- CometProject\n                  :           :     :     :     :              +- CometFilter\n                  :           :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- BroadcastExchange\n                  :           :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :           +- CometProject\n                  :           :     :     :              +- CometFilter\n                  :           :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     +- BroadcastExchange\n                  :           :     :        +- CometNativeColumnarToRow\n                  :           :     :           +- CometFilter\n                  :           :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     +- BroadcastExchange\n                  :           :        +- CometNativeColumnarToRow\n                  :           :           +- CometFilter\n                  :           :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           +- BroadcastExchange\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n\nComet accelerated 37 out of 68 eligible operators (54%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q72.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometExchange\n                  :     +- CometProject\n                  :        +- CometBroadcastHashJoin\n                  :           :- CometProject\n                  :           :  +- CometBroadcastHashJoin\n                  :           :     :- CometProject\n                  :           :     :  +- CometBroadcastHashJoin\n                  :           :     :     :- CometProject\n                  :           :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :- CometProject\n                  :           :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :- CometProject\n                  :           :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- CometFilter\n                  :           :     :     :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :           :     :     :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :           +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                 +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                    +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :           :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :        +- CometProject\n                  :           :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :        +- CometProject\n                  :           :     :     :     :           +- CometFilter\n                  :           :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- CometBroadcastExchange\n                  :           :     :     :        +- CometProject\n                  :           :     :     :           +- CometFilter\n                  :           :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     +- CometBroadcastExchange\n                  :           :     :        +- CometFilter\n                  :           :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     +- CometBroadcastExchange\n                  :           :        +- CometFilter\n                  :           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           +- CometBroadcastExchange\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n\nComet accelerated 66 out of 68 eligible operators (97%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q74.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- BroadcastHashJoin\n      :     :  :- Filter\n      :     :  :  +- HashAggregate\n      :     :  :     +- CometNativeColumnarToRow\n      :     :  :        +- CometColumnarExchange\n      :     :  :           +- HashAggregate\n      :     :  :              +- Project\n      :     :  :                 +- BroadcastHashJoin\n      :     :  :                    :- Project\n      :     :  :                    :  +- BroadcastHashJoin\n      :     :  :                    :     :- CometNativeColumnarToRow\n      :     :  :                    :     :  +- CometProject\n      :     :  :                    :     :     +- CometFilter\n      :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :  :                    :     +- BroadcastExchange\n      :     :  :                    :        +- Filter\n      :     :  :                    :           +- ColumnarToRow\n      :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :  :                    :                    +- SubqueryBroadcast\n      :     :  :                    :                       +- BroadcastExchange\n      :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :  :                    :                             +- CometFilter\n      :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  :                    +- BroadcastExchange\n      :     :  :                       +- CometNativeColumnarToRow\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  +- BroadcastExchange\n      :     :     +- HashAggregate\n      :     :        +- CometNativeColumnarToRow\n      :     :           +- CometColumnarExchange\n      :     :              +- HashAggregate\n      :     :                 +- Project\n      :     :                    +- BroadcastHashJoin\n      :     :                       :- Project\n      :     :                       :  +- BroadcastHashJoin\n      :     :                       :     :- CometNativeColumnarToRow\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                       :     +- BroadcastExchange\n      :     :                       :        +- Filter\n      :     :                       :           +- ColumnarToRow\n      :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                       :                    +- SubqueryBroadcast\n      :     :                       :                       +- BroadcastExchange\n      :     :                       :                          +- CometNativeColumnarToRow\n      :     :                       :                             +- CometFilter\n      :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n 
     :     :                       +- BroadcastExchange\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometFilter\n      :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 85 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q74.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometBroadcastHashJoin\n         :     :  :- CometFilter\n         :     :  :  +- CometHashAggregate\n         :     :  :     +- CometExchange\n         :     :  :        +- CometHashAggregate\n         :     :  :           +- CometProject\n         :     :  :              +- CometBroadcastHashJoin\n         :     :  :                 :- CometProject\n         :     :  :                 :  +- CometBroadcastHashJoin\n         :     :  :                 :     :- CometProject\n         :     :  :                 :     :  +- CometFilter\n         :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :  :                 :     +- CometBroadcastExchange\n         :     :  :                 :        +- CometFilter\n         :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :  :                 :                 +- SubqueryBroadcast\n         :     :  :                 :                    +- BroadcastExchange\n         :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :  :                 :                          +- CometFilter\n         :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  :                 +- CometBroadcastExchange\n         :     :  :                    +- CometFilter\n         :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  +- CometBroadcastExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometExchange\n         :     :           +- CometHashAggregate\n         :     :              +- CometProject\n         :     :                 +- CometBroadcastHashJoin\n         :     :                    :- CometProject\n         :     :                    :  +- CometBroadcastHashJoin\n         :     :                    :     :- CometProject\n         :     :                    :     :  +- CometFilter\n         :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                    :     +- CometBroadcastExchange\n         :     :                    :        +- CometFilter\n         :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                    :                 +- SubqueryBroadcast\n         :     :                    :                    +- BroadcastExchange\n         :     :                    :                       +- CometNativeColumnarToRow\n         :     :                    :                          +- CometFilter\n         :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                    +- CometBroadcastExchange\n         :     :                       +- CometFilter\n         :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- 
CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 79 out of 85 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q75.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n         :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- SubqueryBroadcast\n         :                             :     :           :     :              +- BroadcastExchange\n         :                             :     :           :     :                 +- CometNativeColumnarToRow\n         :                             :     :           :     :                    +- CometFilter\n         :                             :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n    
     :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- ReusedSubquery\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometColumnarExchange\n         :                                   :     +- Project\n         :                                   :        +- BroadcastHashJoin\n         :                                   :           :- Project\n         :                                   :           :  +- BroadcastHashJoin\n         :                                   :           :     :- Filter\n         :                                   :           :     :  +- ColumnarToRow\n         :                                   :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                                   :           :     :           +- ReusedSubquery\n         :                                   :           :     +- BroadcastExchange\n         :                                   :           :        +- CometNativeColumnarToRow\n         :                                   :           :           +- CometProject\n         :                                   :           :              +- CometFilter\n         :                                   :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                                   :           +- BroadcastExchange\n         :                                   :              +- 
CometNativeColumnarToRow\n         :                                   :                 +- CometFilter\n         :                                   :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometNativeScan parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- SubqueryBroadcast\n                                       :     :           :     :              +- BroadcastExchange\n                                       :     :           :     :                 +- CometNativeColumnarToRow\n                                       :     :           :     :                    +- CometFilter\n                                       :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                        
               :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- ReusedSubquery\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometColumnarExchange\n                                             :     +- Project\n                                             :        +- BroadcastHashJoin\n                                             :           :- Project\n                                             :           :  +- BroadcastHashJoin\n                                             :           :     :- Filter\n                                             :           :     :  +- ColumnarToRow\n                                             :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :           :     :           +- ReusedSubquery\n                                             :           :     +- BroadcastExchange\n                                             :           :        +- 
CometNativeColumnarToRow\n                                             :           :           +- CometProject\n                                             :           :              +- CometFilter\n                                             :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                             :           +- BroadcastExchange\n                                             :              +- CometNativeColumnarToRow\n                                             :                 +- CometFilter\n                                             :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.web_returns\n\nComet accelerated 111 out of 167 eligible operators (66%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q75.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :                             :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :                             :     :           :     :        +- SubqueryBroadcast\n         :                             :     :           :     :           +- BroadcastExchange\n         :                             :     :           :     :              +- CometNativeColumnarToRow\n         :                             :     :           :     :                 +- CometFilter\n         :                             :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :     
                        :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :                             :     :           :     :        +- ReusedSubquery\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometExchange\n         :                                   :     +- CometProject\n         :                                   :        +- CometBroadcastHashJoin\n         :                                   :           :- CometProject\n         :                                   :           :  +- CometBroadcastHashJoin\n         :                                   :           :     :- CometFilter\n         :                                   :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                                   :           :     :        +- ReusedSubquery\n         :                                   :           :     +- CometBroadcastExchange\n         :                                   :           :        +- CometProject\n         :                                   :           :           +- CometFilter\n         :                                   :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                                   :           +- CometBroadcastExchange\n         :                                   :              +- CometFilter\n         :                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n            
            +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :           :     :        +- SubqueryBroadcast\n                                       :     :           :     :           +- BroadcastExchange\n                                       :     :           :     :              +- CometNativeColumnarToRow\n                                       :     :           :     :                 +- CometFilter\n                                       :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :           :     :        +- 
ReusedSubquery\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometExchange\n                                             :     +- CometProject\n                                             :        +- CometBroadcastHashJoin\n                                             :           :- CometProject\n                                             :           :  +- CometBroadcastHashJoin\n                                             :           :     :- CometFilter\n                                             :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                             :           :     :        +- ReusedSubquery\n                                             :           :     +- CometBroadcastExchange\n                                             :           :        +- CometProject\n                                             :           :           +- CometFilter\n                                             :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                             :           +- CometBroadcastExchange\n                                             :              +- CometFilter\n                                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n\nComet accelerated 159 out of 167 eligible operators (95%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q77a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- Project\n               :              :  +- BroadcastHashJoin\n               :              :     :- HashAggregate\n               :              :     :  +- CometNativeColumnarToRow\n               :              :     :     +- CometColumnarExchange\n               :              :     :        +- HashAggregate\n               :              :     :           +- Project\n               :              :     :              +- BroadcastHashJoin\n               :              :     :                 :- Project\n               :              :     :                 :  +- BroadcastHashJoin\n               :              :     :                 :     :- Filter\n               :              :     :                 :     :  +- ColumnarToRow\n               :              :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :     :                 :     :           +- SubqueryBroadcast\n               :              :     :                 :     :              +- BroadcastExchange\n               :              :     :                 :     :                 +- CometNativeColumnarToRow\n               :              :     :                 :     :                    +- CometProject\n               :              :     :                 :     :                       +- CometFilter\n               :              :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :     :                 :     +- BroadcastExchange\n               :              :     :                 :        +- CometNativeColumnarToRow\n               :              :     :                 :           +- CometProject\n               :              :     :                 :              +- CometFilter\n               :              :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :     :                 +- BroadcastExchange\n               :              :     :                    +- CometNativeColumnarToRow\n               :              :     :                       +- CometFilter\n               :              :     :                          +- CometNativeScan parquet spark_catalog.default.store\n               :              :     +- BroadcastExchange\n               :              :        +- HashAggregate\n               :              :           +- CometNativeColumnarToRow\n               :              :              +- CometColumnarExchange\n               :              :                 +- HashAggregate\n               :              :                    +- Project\n               :              :                       +- BroadcastHashJoin\n               :              :                          :- Project\n               :              :                          :  +- BroadcastHashJoin\n               :              :                          :     :- Filter\n               :              :     
                     :     :  +- ColumnarToRow\n               :              :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                          :     :           +- ReusedSubquery\n               :              :                          :     +- BroadcastExchange\n               :              :                          :        +- CometNativeColumnarToRow\n               :              :                          :           +- CometProject\n               :              :                          :              +- CometFilter\n               :              :                          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                          +- BroadcastExchange\n               :              :                             +- CometNativeColumnarToRow\n               :              :                                +- CometFilter\n               :              :                                   +- CometNativeScan parquet spark_catalog.default.store\n               :              :- Project\n               :              :  +- BroadcastNestedLoopJoin\n               :              :     :- BroadcastExchange\n               :              :     :  +- HashAggregate\n               :              :     :     +- CometNativeColumnarToRow\n               :              :     :        +- CometColumnarExchange\n               :              :     :           +- HashAggregate\n               :              :     :              +- Project\n               :              :     :                 +- BroadcastHashJoin\n               :              :     :                    :- ColumnarToRow\n               :              :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :     :                    :        +- ReusedSubquery\n               :              :     :                    +- BroadcastExchange\n               :              :     :                       +- CometNativeColumnarToRow\n               :              :     :                          +- CometProject\n               :              :     :                             +- CometFilter\n               :              :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :     +- HashAggregate\n               :              :        +- CometNativeColumnarToRow\n               :              :           +- CometColumnarExchange\n               :              :              +- HashAggregate\n               :              :                 +- Project\n               :              :                    +- BroadcastHashJoin\n               :              :                       :- ColumnarToRow\n               :              :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                       :        +- ReusedSubquery\n               :              :                       +- BroadcastExchange\n               :              :                          +- CometNativeColumnarToRow\n               :              :                             +- CometProject\n               
:              :                                +- CometFilter\n               :              :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              +- Project\n               :                 +- BroadcastHashJoin\n               :                    :- HashAggregate\n               :                    :  +- CometNativeColumnarToRow\n               :                    :     +- CometColumnarExchange\n               :                    :        +- HashAggregate\n               :                    :           +- Project\n               :                    :              +- BroadcastHashJoin\n               :                    :                 :- Project\n               :                    :                 :  +- BroadcastHashJoin\n               :                    :                 :     :- Filter\n               :                    :                 :     :  +- ColumnarToRow\n               :                    :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :                 :     :           +- ReusedSubquery\n               :                    :                 :     +- BroadcastExchange\n               :                    :                 :        +- CometNativeColumnarToRow\n               :                    :                 :           +- CometProject\n               :                    :                 :              +- CometFilter\n               :                    :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                 +- BroadcastExchange\n               :                    :                    +- CometNativeColumnarToRow\n               :                    :                       +- CometFilter\n               :                    :                          +- CometNativeScan parquet spark_catalog.default.web_page\n               :                    +- BroadcastExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- Filter\n               :                                         :     :  +- ColumnarToRow\n               :                                         :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :           +- ReusedSubquery\n               :                                         :     +- BroadcastExchange\n               :                                         :        +- CometNativeColumnarToRow\n               :                                         :           +- CometProject\n               :                                         :              +- CometFilter\n               :                                      
   :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometFilter\n               :                                                  +- CometNativeScan parquet spark_catalog.default.web_page\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- Project\n               :                          :  +- BroadcastHashJoin\n               :                          :     :- HashAggregate\n               :                          :     :  +- CometNativeColumnarToRow\n               :                          :     :     +- CometColumnarExchange\n               :                          :     :        +- HashAggregate\n               :                          :     :           +- Project\n               :                          :     :              +- BroadcastHashJoin\n               :                          :     :                 :- Project\n               :                          :     :                 :  +- BroadcastHashJoin\n               :                          :     :                 :     :- Filter\n               :                          :     :                 :     :  +- ColumnarToRow\n               :                          :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :     :                 :     :           +- SubqueryBroadcast\n               :                          :     :                 :     :              +- BroadcastExchange\n               :                          :     :                 :     :                 +- CometNativeColumnarToRow\n               :                          :     :                 :     :                    +- CometProject\n               :                          :     :                 :     :                       +- CometFilter\n               :                          :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     :                 :     +- BroadcastExchange\n               :                          :     :                 :        +- CometNativeColumnarToRow\n               :                          :     :                 :           +- CometProject\n               :                          :     :                 :              +- CometFilter\n               :                          :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     :                 +- BroadcastExchange\n               :                          :     :                    +- CometNativeColumnarToRow\n               :                          :     :                       +- CometFilter\n               : 
                         :     :                          +- CometNativeScan parquet spark_catalog.default.store\n               :                          :     +- BroadcastExchange\n               :                          :        +- HashAggregate\n               :                          :           +- CometNativeColumnarToRow\n               :                          :              +- CometColumnarExchange\n               :                          :                 +- HashAggregate\n               :                          :                    +- Project\n               :                          :                       +- BroadcastHashJoin\n               :                          :                          :- Project\n               :                          :                          :  +- BroadcastHashJoin\n               :                          :                          :     :- Filter\n               :                          :                          :     :  +- ColumnarToRow\n               :                          :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                          :     :           +- ReusedSubquery\n               :                          :                          :     +- BroadcastExchange\n               :                          :                          :        +- CometNativeColumnarToRow\n               :                          :                          :           +- CometProject\n               :                          :                          :              +- CometFilter\n               :                          :                          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                          +- BroadcastExchange\n               :                          :                             +- CometNativeColumnarToRow\n               :                          :                                +- CometFilter\n               :                          :                                   +- CometNativeScan parquet spark_catalog.default.store\n               :                          :- Project\n               :                          :  +- BroadcastNestedLoopJoin\n               :                          :     :- BroadcastExchange\n               :                          :     :  +- HashAggregate\n               :                          :     :     +- CometNativeColumnarToRow\n               :                          :     :        +- CometColumnarExchange\n               :                          :     :           +- HashAggregate\n               :                          :     :              +- Project\n               :                          :     :                 +- BroadcastHashJoin\n               :                          :     :                    :- ColumnarToRow\n               :                          :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :     :                    :        +- ReusedSubquery\n               :                          :     :                    +- BroadcastExchange\n               :                          :     :                       +- CometNativeColumnarToRow\n     
          :                          :     :                          +- CometProject\n               :                          :     :                             +- CometFilter\n               :                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     +- HashAggregate\n               :                          :        +- CometNativeColumnarToRow\n               :                          :           +- CometColumnarExchange\n               :                          :              +- HashAggregate\n               :                          :                 +- Project\n               :                          :                    +- BroadcastHashJoin\n               :                          :                       :- ColumnarToRow\n               :                          :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                       :        +- ReusedSubquery\n               :                          :                       +- BroadcastExchange\n               :                          :                          +- CometNativeColumnarToRow\n               :                          :                             +- CometProject\n               :                          :                                +- CometFilter\n               :                          :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Project\n               :                             +- BroadcastHashJoin\n               :                                :- HashAggregate\n               :                                :  +- CometNativeColumnarToRow\n               :                                :     +- CometColumnarExchange\n               :                                :        +- HashAggregate\n               :                                :           +- Project\n               :                                :              +- BroadcastHashJoin\n               :                                :                 :- Project\n               :                                :                 :  +- BroadcastHashJoin\n               :                                :                 :     :- Filter\n               :                                :                 :     :  +- ColumnarToRow\n               :                                :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                :                 :     :           +- ReusedSubquery\n               :                                :                 :     +- BroadcastExchange\n               :                                :                 :        +- CometNativeColumnarToRow\n               :                                :                 :           +- CometProject\n               :                                :                 :              +- CometFilter\n               :                                :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                :                 +- BroadcastExchange\n               :                          
      :                    +- CometNativeColumnarToRow\n               :                                :                       +- CometFilter\n               :                                :                          +- CometNativeScan parquet spark_catalog.default.web_page\n               :                                +- BroadcastExchange\n               :                                   +- HashAggregate\n               :                                      +- CometNativeColumnarToRow\n               :                                         +- CometColumnarExchange\n               :                                            +- HashAggregate\n               :                                               +- Project\n               :                                                  +- BroadcastHashJoin\n               :                                                     :- Project\n               :                                                     :  +- BroadcastHashJoin\n               :                                                     :     :- Filter\n               :                                                     :     :  +- ColumnarToRow\n               :                                                     :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                                     :     :           +- ReusedSubquery\n               :                                                     :     +- BroadcastExchange\n               :                                                     :        +- CometNativeColumnarToRow\n               :                                                     :           +- CometProject\n               :                                                     :              +- CometFilter\n               :                                                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                                     +- BroadcastExchange\n               :                                                        +- CometNativeColumnarToRow\n               :                                                           +- CometFilter\n               :                                                              +- CometNativeScan parquet spark_catalog.default.web_page\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- HashAggregate\n                                          :     :  +- CometNativeColumnarToRow\n                                          :     :     +- CometColumnarExchange\n                                          :     :        +- HashAggregate\n                                          :     :           +- Project\n                                          :     :              +- BroadcastHashJoin\n                                          :     :                 :- 
Project\n                                          :     :                 :  +- BroadcastHashJoin\n                                          :     :                 :     :- Filter\n                                          :     :                 :     :  +- ColumnarToRow\n                                          :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                 :     :           +- SubqueryBroadcast\n                                          :     :                 :     :              +- BroadcastExchange\n                                          :     :                 :     :                 +- CometNativeColumnarToRow\n                                          :     :                 :     :                    +- CometProject\n                                          :     :                 :     :                       +- CometFilter\n                                          :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 :     +- BroadcastExchange\n                                          :     :                 :        +- CometNativeColumnarToRow\n                                          :     :                 :           +- CometProject\n                                          :     :                 :              +- CometFilter\n                                          :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 +- BroadcastExchange\n                                          :     :                    +- CometNativeColumnarToRow\n                                          :     :                       +- CometFilter\n                                          :     :                          +- CometNativeScan parquet spark_catalog.default.store\n                                          :     +- BroadcastExchange\n                                          :        +- HashAggregate\n                                          :           +- CometNativeColumnarToRow\n                                          :              +- CometColumnarExchange\n                                          :                 +- HashAggregate\n                                          :                    +- Project\n                                          :                       +- BroadcastHashJoin\n                                          :                          :- Project\n                                          :                          :  +- BroadcastHashJoin\n                                          :                          :     :- Filter\n                                          :                          :     :  +- ColumnarToRow\n                                          :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                          :     :           +- ReusedSubquery\n                                          :                          :     +- BroadcastExchange\n                                          :                          :        +- 
CometNativeColumnarToRow\n                                          :                          :           +- CometProject\n                                          :                          :              +- CometFilter\n                                          :                          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          +- BroadcastExchange\n                                          :                             +- CometNativeColumnarToRow\n                                          :                                +- CometFilter\n                                          :                                   +- CometNativeScan parquet spark_catalog.default.store\n                                          :- Project\n                                          :  +- BroadcastNestedLoopJoin\n                                          :     :- BroadcastExchange\n                                          :     :  +- HashAggregate\n                                          :     :     +- CometNativeColumnarToRow\n                                          :     :        +- CometColumnarExchange\n                                          :     :           +- HashAggregate\n                                          :     :              +- Project\n                                          :     :                 +- BroadcastHashJoin\n                                          :     :                    :- ColumnarToRow\n                                          :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    :        +- ReusedSubquery\n                                          :     :                    +- BroadcastExchange\n                                          :     :                       +- CometNativeColumnarToRow\n                                          :     :                          +- CometProject\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- HashAggregate\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometColumnarExchange\n                                          :              +- HashAggregate\n                                          :                 +- Project\n                                          :                    +- BroadcastHashJoin\n                                          :                       :- ColumnarToRow\n                                          :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                       :        +- ReusedSubquery\n                                          :                       +- BroadcastExchange\n                                          :                          +- CometNativeColumnarToRow\n                                          :                             +- CometProject\n                                          :                                +- 
CometFilter\n                                          :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- Project\n                                             +- BroadcastHashJoin\n                                                :- HashAggregate\n                                                :  +- CometNativeColumnarToRow\n                                                :     +- CometColumnarExchange\n                                                :        +- HashAggregate\n                                                :           +- Project\n                                                :              +- BroadcastHashJoin\n                                                :                 :- Project\n                                                :                 :  +- BroadcastHashJoin\n                                                :                 :     :- Filter\n                                                :                 :     :  +- ColumnarToRow\n                                                :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                :                 :     :           +- ReusedSubquery\n                                                :                 :     +- BroadcastExchange\n                                                :                 :        +- CometNativeColumnarToRow\n                                                :                 :           +- CometProject\n                                                :                 :              +- CometFilter\n                                                :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                :                 +- BroadcastExchange\n                                                :                    +- CometNativeColumnarToRow\n                                                :                       +- CometFilter\n                                                :                          +- CometNativeScan parquet spark_catalog.default.web_page\n                                                +- BroadcastExchange\n                                                   +- HashAggregate\n                                                      +- CometNativeColumnarToRow\n                                                         +- CometColumnarExchange\n                                                            +- HashAggregate\n                                                               +- Project\n                                                                  +- BroadcastHashJoin\n                                                                     :- Project\n                                                                     :  +- BroadcastHashJoin\n                                                                     :     :- Filter\n                                                                     :     :  +- ColumnarToRow\n                                                                     :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                     :     :           +- ReusedSubquery\n               
                                                      :     +- BroadcastExchange\n                                                                     :        +- CometNativeColumnarToRow\n                                                                     :           +- CometProject\n                                                                     :              +- CometFilter\n                                                                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                                     +- BroadcastExchange\n                                                                        +- CometNativeColumnarToRow\n                                                                           +- CometFilter\n                                                                              +- CometNativeScan parquet spark_catalog.default.web_page\n\nComet accelerated 113 out of 332 eligible operators (34%). Final plan contains 75 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q77a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- CometNativeColumnarToRow\n               :              :  +- CometProject\n               :              :     +- CometBroadcastHashJoin\n               :              :        :- CometHashAggregate\n               :              :        :  +- CometExchange\n               :              :        :     +- CometHashAggregate\n               :              :        :        +- CometProject\n               :              :        :           +- CometBroadcastHashJoin\n               :              :        :              :- CometProject\n               :              :        :              :  +- CometBroadcastHashJoin\n               :              :        :              :     :- CometFilter\n               :              :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :        :              :     :        +- SubqueryBroadcast\n               :              :        :              :     :           +- BroadcastExchange\n               :              :        :              :     :              +- CometNativeColumnarToRow\n               :              :        :              :     :                 +- CometProject\n               :              :        :              :     :                    +- CometFilter\n               :              :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :        :              :     +- CometBroadcastExchange\n               :              :        :              :        +- CometProject\n               :              :        :              :           +- CometFilter\n               :              :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :        :              +- CometBroadcastExchange\n               :              :        :                 +- CometFilter\n               :              :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :              :        +- CometBroadcastExchange\n               :              :           +- CometHashAggregate\n               :              :              +- CometExchange\n               :              :                 +- CometHashAggregate\n               :              :                    +- CometProject\n               :              :                       +- CometBroadcastHashJoin\n               :              :                          :- CometProject\n               :              :                          :  +- CometBroadcastHashJoin\n               :              :                          :     :- CometFilter\n               :              :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :              :                          :     :        +- ReusedSubquery\n               :              :                          :     +- 
CometBroadcastExchange\n               :              :                          :        +- CometProject\n               :              :                          :           +- CometFilter\n               :              :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :                          +- CometBroadcastExchange\n               :              :                             +- CometFilter\n               :              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :              :- Project\n               :              :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n               :              :     :- BroadcastExchange\n               :              :     :  +- CometNativeColumnarToRow\n               :              :     :     +- CometHashAggregate\n               :              :     :        +- CometExchange\n               :              :     :           +- CometHashAggregate\n               :              :     :              +- CometProject\n               :              :     :                 +- CometBroadcastHashJoin\n               :              :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :                    :     +- ReusedSubquery\n               :              :     :                    +- CometBroadcastExchange\n               :              :     :                       +- CometProject\n               :              :     :                          +- CometFilter\n               :              :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometNativeColumnarToRow\n               :              :        +- CometHashAggregate\n               :              :           +- CometExchange\n               :              :              +- CometHashAggregate\n               :              :                 +- CometProject\n               :              :                    +- CometBroadcastHashJoin\n               :              :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :              :                       :     +- ReusedSubquery\n               :              :                       +- CometBroadcastExchange\n               :              :                          +- CometProject\n               :              :                             +- CometFilter\n               :              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              +- CometNativeColumnarToRow\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometHashAggregate\n               :                       :  +- CometExchange\n               :                       :     +- CometHashAggregate\n               :                       :        +- CometProject\n               :                       :           +- CometBroadcastHashJoin\n               :                       :              :- CometProject\n               :                       :              :  +- CometBroadcastHashJoin\n               :                       
:              :     :- CometFilter\n               :                       :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                       :              :     :        +- ReusedSubquery\n               :                       :              :     +- CometBroadcastExchange\n               :                       :              :        +- CometProject\n               :                       :              :           +- CometFilter\n               :                       :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                       :              +- CometBroadcastExchange\n               :                       :                 +- CometFilter\n               :                       :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               :                       +- CometBroadcastExchange\n               :                          +- CometHashAggregate\n               :                             +- CometExchange\n               :                                +- CometHashAggregate\n               :                                   +- CometProject\n               :                                      +- CometBroadcastHashJoin\n               :                                         :- CometProject\n               :                                         :  +- CometBroadcastHashJoin\n               :                                         :     :- CometFilter\n               :                                         :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                         :     :        +- ReusedSubquery\n               :                                         :     +- CometBroadcastExchange\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                         +- CometBroadcastExchange\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- CometNativeColumnarToRow\n               :                          :  +- CometProject\n               :                          :     +- CometBroadcastHashJoin\n               :                          :        :- CometHashAggregate\n               :                          :        :  +- CometExchange\n               :                          :        :     +- CometHashAggregate\n               :                          :        :        +- CometProject\n               :                          :        :       
    +- CometBroadcastHashJoin\n               :                          :        :              :- CometProject\n               :                          :        :              :  +- CometBroadcastHashJoin\n               :                          :        :              :     :- CometFilter\n               :                          :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                          :        :              :     :        +- SubqueryBroadcast\n               :                          :        :              :     :           +- BroadcastExchange\n               :                          :        :              :     :              +- CometNativeColumnarToRow\n               :                          :        :              :     :                 +- CometProject\n               :                          :        :              :     :                    +- CometFilter\n               :                          :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :        :              :     +- CometBroadcastExchange\n               :                          :        :              :        +- CometProject\n               :                          :        :              :           +- CometFilter\n               :                          :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :        :              +- CometBroadcastExchange\n               :                          :        :                 +- CometFilter\n               :                          :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                          :        +- CometBroadcastExchange\n               :                          :           +- CometHashAggregate\n               :                          :              +- CometExchange\n               :                          :                 +- CometHashAggregate\n               :                          :                    +- CometProject\n               :                          :                       +- CometBroadcastHashJoin\n               :                          :                          :- CometProject\n               :                          :                          :  +- CometBroadcastHashJoin\n               :                          :                          :     :- CometFilter\n               :                          :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :                          :                          :     :        +- ReusedSubquery\n               :                          :                          :     +- CometBroadcastExchange\n               :                          :                          :        +- CometProject\n               :                          :                          :           +- CometFilter\n               :                          :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :                          +- CometBroadcastExchange\n               :                          :  
                           +- CometFilter\n               :                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                          :- Project\n               :                          :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n               :                          :     :- BroadcastExchange\n               :                          :     :  +- CometNativeColumnarToRow\n               :                          :     :     +- CometHashAggregate\n               :                          :     :        +- CometExchange\n               :                          :     :           +- CometHashAggregate\n               :                          :     :              +- CometProject\n               :                          :     :                 +- CometBroadcastHashJoin\n               :                          :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                          :     :                    :     +- ReusedSubquery\n               :                          :     :                    +- CometBroadcastExchange\n               :                          :     :                       +- CometProject\n               :                          :     :                          +- CometFilter\n               :                          :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometHashAggregate\n               :                          :           +- CometExchange\n               :                          :              +- CometHashAggregate\n               :                          :                 +- CometProject\n               :                          :                    +- CometBroadcastHashJoin\n               :                          :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                          :                       :     +- ReusedSubquery\n               :                          :                       +- CometBroadcastExchange\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometHashAggregate\n               :                                   :  +- CometExchange\n               :                                   :     +- CometHashAggregate\n               :                                   :        +- CometProject\n               :                                   :           +- CometBroadcastHashJoin\n               :                                   :              :- CometProject\n               :                                   :              :  +- CometBroadcastHashJoin\n               :                         
          :              :     :- CometFilter\n               :                                   :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :              :     :        +- ReusedSubquery\n               :                                   :              :     +- CometBroadcastExchange\n               :                                   :              :        +- CometProject\n               :                                   :              :           +- CometFilter\n               :                                   :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                   :              +- CometBroadcastExchange\n               :                                   :                 +- CometFilter\n               :                                   :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometHashAggregate\n               :                                         +- CometExchange\n               :                                            +- CometHashAggregate\n               :                                               +- CometProject\n               :                                                  +- CometBroadcastHashJoin\n               :                                                     :- CometProject\n               :                                                     :  +- CometBroadcastHashJoin\n               :                                                     :     :- CometFilter\n               :                                                     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                                     :     :        +- ReusedSubquery\n               :                                                     :     +- CometBroadcastExchange\n               :                                                     :        +- CometProject\n               :                                                     :           +- CometFilter\n               :                                                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                                     +- CometBroadcastExchange\n               :                                                        +- CometFilter\n               :                                                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- CometNativeColumnarToRow\n                                          :  +- CometProject\n                                          :     +- CometBroadcastHashJoin\n                            
              :        :- CometHashAggregate\n                                          :        :  +- CometExchange\n                                          :        :     +- CometHashAggregate\n                                          :        :        +- CometProject\n                                          :        :           +- CometBroadcastHashJoin\n                                          :        :              :- CometProject\n                                          :        :              :  +- CometBroadcastHashJoin\n                                          :        :              :     :- CometFilter\n                                          :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                          :        :              :     :        +- SubqueryBroadcast\n                                          :        :              :     :           +- BroadcastExchange\n                                          :        :              :     :              +- CometNativeColumnarToRow\n                                          :        :              :     :                 +- CometProject\n                                          :        :              :     :                    +- CometFilter\n                                          :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :        :              :     +- CometBroadcastExchange\n                                          :        :              :        +- CometProject\n                                          :        :              :           +- CometFilter\n                                          :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :        :              +- CometBroadcastExchange\n                                          :        :                 +- CometFilter\n                                          :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                          :        +- CometBroadcastExchange\n                                          :           +- CometHashAggregate\n                                          :              +- CometExchange\n                                          :                 +- CometHashAggregate\n                                          :                    +- CometProject\n                                          :                       +- CometBroadcastHashJoin\n                                          :                          :- CometProject\n                                          :                          :  +- CometBroadcastHashJoin\n                                          :                          :     :- CometFilter\n                                          :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                          :                          :     :        +- ReusedSubquery\n                                          :                          :     +- CometBroadcastExchange\n                                          :                          :        +- CometProject\n                                          :                      
    :           +- CometFilter\n                                          :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                          +- CometBroadcastExchange\n                                          :                             +- CometFilter\n                                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                          :- Project\n                                          :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n                                          :     :- BroadcastExchange\n                                          :     :  +- CometNativeColumnarToRow\n                                          :     :     +- CometHashAggregate\n                                          :     :        +- CometExchange\n                                          :     :           +- CometHashAggregate\n                                          :     :              +- CometProject\n                                          :     :                 +- CometBroadcastHashJoin\n                                          :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                          :     :                    :     +- ReusedSubquery\n                                          :     :                    +- CometBroadcastExchange\n                                          :     :                       +- CometProject\n                                          :     :                          +- CometFilter\n                                          :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometHashAggregate\n                                          :           +- CometExchange\n                                          :              +- CometHashAggregate\n                                          :                 +- CometProject\n                                          :                    +- CometBroadcastHashJoin\n                                          :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                          :                       :     +- ReusedSubquery\n                                          :                       +- CometBroadcastExchange\n                                          :                          +- CometProject\n                                          :                             +- CometFilter\n                                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometHashAggregate\n                                                   :  +- CometExchange\n                                                   :     +- CometHashAggregate\n                                                   : 
       +- CometProject\n                                                   :           +- CometBroadcastHashJoin\n                                                   :              :- CometProject\n                                                   :              :  +- CometBroadcastHashJoin\n                                                   :              :     :- CometFilter\n                                                   :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :              :     :        +- ReusedSubquery\n                                                   :              :     +- CometBroadcastExchange\n                                                   :              :        +- CometProject\n                                                   :              :           +- CometFilter\n                                                   :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :              +- CometBroadcastExchange\n                                                   :                 +- CometFilter\n                                                   :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n                                                   +- CometBroadcastExchange\n                                                      +- CometHashAggregate\n                                                         +- CometExchange\n                                                            +- CometHashAggregate\n                                                               +- CometProject\n                                                                  +- CometBroadcastHashJoin\n                                                                     :- CometProject\n                                                                     :  +- CometBroadcastHashJoin\n                                                                     :     :- CometFilter\n                                                                     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                                                     :     :        +- ReusedSubquery\n                                                                     :     +- CometBroadcastExchange\n                                                                     :        +- CometProject\n                                                                     :           +- CometFilter\n                                                                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                                     +- CometBroadcastExchange\n                                                                        +- CometFilter\n                                                                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n\nComet accelerated 287 out of 332 eligible operators (86%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q78.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometColumnarExchange\n         :     :                 :        :     +- Filter\n         :     :                 :        :        +- ColumnarToRow\n         :     :                 :        :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :     :                 :        :                 +- SubqueryBroadcast\n         :     :                 :        :                    +- BroadcastExchange\n         :     :                 :        :                       +- CometNativeColumnarToRow\n         :     :                 :        :                          +- CometFilter\n         :     :                 :        :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometColumnarExchange\n         :                          :        :     +- Filter\n         :                          :        :        +- ColumnarToRow\n         :                          :        :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                          :        :                 +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometNativeScan parquet spark_catalog.default.web_returns\n         :   
                       +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometColumnarExchange\n                              :        :     +- Filter\n                              :        :        +- ColumnarToRow\n                              :        :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :        :                 +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 64 out of 76 eligible operators (84%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q78.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometExchange\n         :     :                 :        :     +- CometFilter\n         :     :                 :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                 :        :              +- SubqueryBroadcast\n         :     :                 :        :                 +- BroadcastExchange\n         :     :                 :        :                    +- CometNativeColumnarToRow\n         :     :                 :        :                       +- CometFilter\n         :     :                 :        :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometExchange\n         :                          :        :     +- CometFilter\n         :                          :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :        :              +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometExchange\n                              :        :     +- CometFilter\n                              :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :        :              +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 70 out of 76 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q80a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometColumnarExchange\n               :           :              :     :     :     :     :     +- Filter\n               :           :              :     :     :     :     :        +- ColumnarToRow\n               :           :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :           :              :     :     :     :     :                 +- SubqueryBroadcast\n               :           :              :     :     :     :     :                    +- BroadcastExchange\n               :           :              :     :     :     :     :                       +- CometNativeColumnarToRow\n               :           :              :     :     :     :     :                          +- CometProject\n               :           :              :     :     :     :     :                             +- CometFilter\n               :           :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :          
 +- CometFilter\n               :           :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometColumnarExchange\n               :           :              :     :     :     :     :     +- Filter\n               :           :              :     :     :     :     :        +- ColumnarToRow\n               :           :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :           :              :     :     :     :     :                 +- ReusedSubquery\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :           +- CometFilter\n               :           :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :           :        
      :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometProject\n               :                          :  +- CometBroadcastHashJoin\n               :                          :     :- CometProject\n               :                          :     :  +- CometBroadcastHashJoin\n               :                          :     :     :- CometProject\n               :                          :     :     :  +- CometBroadcastHashJoin\n               :                          :     :     :     :- CometProject\n               :                          :     :     :     :  +- CometSortMergeJoin\n               :                          :     :     :     :     :- CometSort\n               :                          :     :     :     :     :  +- CometColumnarExchange\n               :                          :     :     :     :     :     +- Filter\n               :                          :     :     :     :     :        +- ColumnarToRow\n               :                          :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :     :     :     :     :                 +- ReusedSubquery\n               :                          :     :     :     :     +- CometSort\n               :                          :     :     :     :        +- CometExchange\n               :                          :     :     :     :           +- CometProject\n               :                          :     :     :     :              +- CometFilter\n               :                          :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                          :     :     :     +- CometBroadcastExchange\n               :                          :     :     :        +- CometProject\n               :                          :     :     :           +- CometFilter\n               :                          :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     :     +- CometBroadcastExchange\n               :                          :     :        +- CometProject\n               :                          :     :           +- CometFilter\n               :                          :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n               :                          :     +- CometBroadcastExchange\n               :                          :        +- CometProject\n               :                          :           +- CometFilter\n       
        :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.promotion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometColumnarExchange\n               :                    :              :     :     :     :     :     +- Filter\n               :                    :              :     :     :     :     :        +- ColumnarToRow\n               :                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :              :     :     :     :     :                 +- SubqueryBroadcast\n               :                    :              :     :     :     :     :                    +- BroadcastExchange\n               :                    :              :     :     :     :     :                       +- CometNativeColumnarToRow\n               :                    :              :     :     :     :     :                          +- CometProject\n               :                    :              :     :     :     :     :                             +- CometFilter\n               :                    :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n               :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- 
CometNativeScan parquet spark_catalog.default.store_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                    :              :     :     :        +- CometProject\n               :                    :              :     :     :           +- CometFilter\n               :                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometColumnarExchange\n               :                    :              :     :     :     :     :     +- Filter\n               :                    :              :     :     :     :     :        +- ColumnarToRow\n               :                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :              :     :     :     :     :                 +- ReusedSubquery\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n           
    :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                    :              :     :     :        +- CometProject\n               :                    :              :     :     :           +- CometFilter\n               :                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :                    +- CometHashAggregate\n               :                       +- CometExchange\n               :                          +- CometHashAggregate\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometProject\n               :                                   :  +- CometBroadcastHashJoin\n               :                                   :     :- CometProject\n               :                                   :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :- CometProject\n               :                                   :     :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :     :- CometProject\n               :                                   :     :     :     :  +- CometSortMergeJoin\n               :                                   :     :     :     :     :- CometSort\n               :                                   :     :     :     :     :  +- CometColumnarExchange\n               :                                   :     :     :     :     :     +- Filter\n               :                                   :     :     :     :     :        +- ColumnarToRow\n               :                                   :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :     :     :     :                 +- ReusedSubquery\n               :                                   :     :     :     :     +- CometSort\n               :     
                              :     :     :     :        +- CometExchange\n               :                                   :     :     :     :           +- CometProject\n               :                                   :     :     :     :              +- CometFilter\n               :                                   :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                                   :     :     :     +- CometBroadcastExchange\n               :                                   :     :     :        +- CometProject\n               :                                   :     :     :           +- CometFilter\n               :                                   :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :     +- CometBroadcastExchange\n               :                                   :     :        +- CometProject\n               :                                   :     :           +- CometFilter\n               :                                   :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n               :                                   :     +- CometBroadcastExchange\n               :                                   :        +- CometProject\n               :                                   :           +- CometFilter\n               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :                                            +- CometNativeScan parquet spark_catalog.default.promotion\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometColumnarExchange\n                                    :              :     :     :     :     :     +- Filter\n                                    :             
 :     :     :     :     :        +- ColumnarToRow\n                                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :              :     :     :     :     :                 +- SubqueryBroadcast\n                                    :              :     :     :     :     :                    +- BroadcastExchange\n                                    :              :     :     :     :     :                       +- CometNativeColumnarToRow\n                                    :              :     :     :     :     :                          +- CometProject\n                                    :              :     :     :     :     :                             +- CometFilter\n                                    :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                 
                   :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometColumnarExchange\n                                    :              :     :     :     :     :     +- Filter\n                                    :              :     :     :     :     :        +- ColumnarToRow\n                                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :              :     :     :     :     :                 +- ReusedSubquery\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- 
CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometProject\n                                                   :     :  +- CometBroadcastHashJoin\n                                                   :     :     :- CometProject\n                                                   :     :     :  +- CometBroadcastHashJoin\n                                                   :     :     :     :- CometProject\n                                                   :     :     :     :  +- CometSortMergeJoin\n                                                   :     :     :     :     :- CometSort\n                                                   :     :     :     :     :  +- CometColumnarExchange\n                                                   :     :     :     :     :     +- Filter\n                                                   :     :     :     :     :        +- ColumnarToRow\n                                                   :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                   :     :     :     :     :                 +- ReusedSubquery\n                                                   :     :     :     :     +- CometSort\n                                                   :     :     :     :        +- CometExchange\n                                                   :     :     :     :           +- CometProject\n                                                   :     :     :     :              +- CometFilter\n                                                   :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n                                                   :     :     :     +- CometBroadcastExchange\n                                                   :     :     :        +- CometProject\n                                                   :     :     :           +- CometFilter\n                                                   :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                   :     :     +- CometBroadcastExchange\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 356 out of 386 eligible operators (92%). 
Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q80a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometExchange\n               :           :              :     :     :     :     :     +- CometFilter\n               :           :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :              :     :     :     :     :              +- SubqueryBroadcast\n               :           :              :     :     :     :     :                 +- BroadcastExchange\n               :           :              :     :     :     :     :                    +- CometNativeColumnarToRow\n               :           :              :     :     :     :     :                       +- CometProject\n               :           :              :     :     :     :     :                          +- CometFilter\n               :           :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :           +- CometFilter\n               :           :              :     :              +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.store\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometExchange\n               :           :              :     :     :     :     :     +- CometFilter\n               :           :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :              :     :     :     :     :              +- ReusedSubquery\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :           +- CometFilter\n               :           :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           
:              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometProject\n               :                          :  +- CometBroadcastHashJoin\n               :                          :     :- CometProject\n               :                          :     :  +- CometBroadcastHashJoin\n               :                          :     :     :- CometProject\n               :                          :     :     :  +- CometBroadcastHashJoin\n               :                          :     :     :     :- CometProject\n               :                          :     :     :     :  +- CometSortMergeJoin\n               :                          :     :     :     :     :- CometSort\n               :                          :     :     :     :     :  +- CometExchange\n               :                          :     :     :     :     :     +- CometFilter\n               :                          :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :     :     :     :     :              +- ReusedSubquery\n               :                          :     :     :     :     +- CometSort\n               :                          :     :     :     :        +- CometExchange\n               :                          :     :     :     :           +- CometProject\n               :                          :     :     :     :              +- CometFilter\n               :                          :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                          :     :     :     +- CometBroadcastExchange\n               :                          :     :     :        +- CometProject\n               :                          :     :     :           +- CometFilter\n               :                          :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :     :     +- CometBroadcastExchange\n               :                          :     :        +- CometProject\n               :                          :     :           +- CometFilter\n               :                          :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               :                          :     +- CometBroadcastExchange\n               :                          :        +- CometProject\n               :                          :           +- CometFilter\n               :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n  
             :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometExchange\n               :                    :              :     :     :     :     :     +- CometFilter\n               :                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :              :     :     :     :     :              +- SubqueryBroadcast\n               :                    :              :     :     :     :     :                 +- BroadcastExchange\n               :                    :              :     :     :     :     :                    +- CometNativeColumnarToRow\n               :                    :              :     :     :     :     :                       +- CometProject\n               :                    :              :     :     :     :     :                          +- CometFilter\n               :                    :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n               :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                    :              :     :     :        +- CometProject\n               :                    :              :     
:     :           +- CometFilter\n               :                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometExchange\n               :                    :              :     :     :     :     :     +- CometFilter\n               :                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :              :     :     :     :     :              +- ReusedSubquery\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n               :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                  
  :              :     :     :        +- CometProject\n               :                    :              :     :     :           +- CometFilter\n               :                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :                    +- CometHashAggregate\n               :                       +- CometExchange\n               :                          +- CometHashAggregate\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometProject\n               :                                   :  +- CometBroadcastHashJoin\n               :                                   :     :- CometProject\n               :                                   :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :- CometProject\n               :                                   :     :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :     :- CometProject\n               :                                   :     :     :     :  +- CometSortMergeJoin\n               :                                   :     :     :     :     :- CometSort\n               :                                   :     :     :     :     :  +- CometExchange\n               :                                   :     :     :     :     :     +- CometFilter\n               :                                   :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :     :     :     :     :              +- ReusedSubquery\n               :                                   :     :     :     :     +- CometSort\n               :                                   :     :     :     :        +- CometExchange\n               :                                   :     :     :     :           +- CometProject\n               :                                   :     :     :     :              +- CometFilter\n               :                                   :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :   
                                :     :     :     +- CometBroadcastExchange\n               :                                   :     :     :        +- CometProject\n               :                                   :     :     :           +- CometFilter\n               :                                   :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                   :     :     +- CometBroadcastExchange\n               :                                   :     :        +- CometProject\n               :                                   :     :           +- CometFilter\n               :                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               :                                   :     +- CometBroadcastExchange\n               :                                   :        +- CometProject\n               :                                   :           +- CometFilter\n               :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometExchange\n                                    :              :     :     :     :     :     +- CometFilter\n                                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :              :     :     :     :     :              +- SubqueryBroadcast\n                                    :              :     :     :     :     :                 +- BroadcastExchange\n                               
     :              :     :     :     :     :                    +- CometNativeColumnarToRow\n                                    :              :     :     :     :     :                       +- CometProject\n                                    :              :     :     :     :     :                          +- CometFilter\n                                    :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                
    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometExchange\n                                    :              :     :     :     :     :     +- CometFilter\n                                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :              :     :     :     :     :              +- ReusedSubquery\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometProject\n                                                   :     :  +- CometBroadcastHashJoin\n                                                   :     :     :- CometProject\n                                                   :     :     :  +- 
CometBroadcastHashJoin\n                                                   :     :     :     :- CometProject\n                                                   :     :     :     :  +- CometSortMergeJoin\n                                                   :     :     :     :     :- CometSort\n                                                   :     :     :     :     :  +- CometExchange\n                                                   :     :     :     :     :     +- CometFilter\n                                                   :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     :     :     :     :              +- ReusedSubquery\n                                                   :     :     :     :     +- CometSort\n                                                   :     :     :     :        +- CometExchange\n                                                   :     :     :     :           +- CometProject\n                                                   :     :     :     :              +- CometFilter\n                                                   :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                                   :     :     :     +- CometBroadcastExchange\n                                                   :     :     :        +- CometProject\n                                                   :     :     :           +- CometFilter\n                                                   :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     :     +- CometBroadcastExchange\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 374 out of 386 eligible operators (96%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q86a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Filter\n                           :                 :     :  +- ColumnarToRow\n                           :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :           +- SubqueryBroadcast\n                           :                 :     :              +- BroadcastExchange\n                           :                 :     :                 +- CometNativeColumnarToRow\n                           :                 :     :                    +- CometProject\n                           :                 :     :                       +- CometFilter\n                           :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 +- BroadcastExchange\n                           :                    +- CometNativeColumnarToRow\n                           :                       +- CometProject\n                           :                          +- CometFilter\n                           :                             +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                          
   :  +- BroadcastHashJoin\n                           :                             :     :- Filter\n                           :                             :     :  +- ColumnarToRow\n                           :                             :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :           +- SubqueryBroadcast\n                           :                             :     :              +- BroadcastExchange\n                           :                             :     :                 +- CometNativeColumnarToRow\n                           :                             :     :                    +- CometProject\n                           :                             :     :                       +- CometFilter\n                           :                             :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Filter\n                                                         :     :  +- ColumnarToRow\n                                                         :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :           +- SubqueryBroadcast\n                                                         :     :              +- BroadcastExchange\n                                                         :     :                 +- CometNativeColumnarToRow\n                                                         :     :                    +- CometProject\n                                       
                  :     :                       +- CometFilter\n                                                         :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                                         :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 36 out of 81 eligible operators (44%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q86a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometUnion\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometProject\n                           :           +- CometBroadcastHashJoin\n                           :              :- CometProject\n                           :              :  +- CometBroadcastHashJoin\n                           :              :     :- CometFilter\n                           :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :              :     :        +- SubqueryBroadcast\n                           :              :     :           +- BroadcastExchange\n                           :              :     :              +- CometNativeColumnarToRow\n                           :              :     :                 +- CometProject\n                           :              :     :                    +- CometFilter\n                           :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              :     +- CometBroadcastExchange\n                           :              :        +- CometProject\n                           :              :           +- CometFilter\n                           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              +- CometBroadcastExchange\n                           :                 +- CometProject\n                           :                    +- CometFilter\n                           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometHashAggregate\n                           :           +- CometExchange\n                           :              +- CometHashAggregate\n                           :                 +- CometProject\n                           :                    +- CometBroadcastHashJoin\n                           :                       :- CometProject\n                           :                       :  +- CometBroadcastHashJoin\n                           :                       :     :- CometFilter\n                           :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                       :     :        +- SubqueryBroadcast\n                           :                       :     :           +- BroadcastExchange\n                           :                       :     :              +- 
CometNativeColumnarToRow\n                           :                       :     :                 +- CometProject\n                           :                       :     :                    +- CometFilter\n                           :                       :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       :     +- CometBroadcastExchange\n                           :                       :        +- CometProject\n                           :                       :           +- CometFilter\n                           :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       +- CometBroadcastExchange\n                           :                          +- CometProject\n                           :                             +- CometFilter\n                           :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometFilter\n                                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     :        +- SubqueryBroadcast\n                                                   :     :           +- BroadcastExchange\n                                                   :     :              +- CometNativeColumnarToRow\n                                                   :     :                 +- CometProject\n                                                   :     :                    +- CometFilter\n                                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 72 out of 81 eligible operators (88%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q98.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n            +- CometNativeColumnarToRow\n               +- CometSort\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- CometNativeColumnarToRow\n                           +- CometColumnarExchange\n                              +- HashAggregate\n                                 +- Project\n                                    +- BroadcastHashJoin\n                                       :- Project\n                                       :  +- BroadcastHashJoin\n                                       :     :- Filter\n                                       :     :  +- ColumnarToRow\n                                       :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           +- SubqueryBroadcast\n                                       :     :              +- BroadcastExchange\n                                       :     :                 +- CometNativeColumnarToRow\n                                       :     :                    +- CometProject\n                                       :     :                       +- CometFilter\n                                       :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- BroadcastExchange\n                                       :        +- CometNativeColumnarToRow\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       +- BroadcastExchange\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 28 eligible operators (50%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7/q98.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n            +- CometNativeColumnarToRow\n               +- CometSort\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometExchange\n                           +- CometHashAggregate\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometProject\n                                    :  +- CometBroadcastHashJoin\n                                    :     :- CometFilter\n                                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :     :        +- SubqueryBroadcast\n                                    :     :           +- BroadcastExchange\n                                    :     :              +- CometNativeColumnarToRow\n                                    :     :                 +- CometProject\n                                    :     :                    +- CometFilter\n                                    :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :     +- CometBroadcastExchange\n                                    :        +- CometProject\n                                    :           +- CometFilter\n                                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 24 out of 28 eligible operators (85%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q10a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- BroadcastHashJoin\n                  :     :     :  :- CometNativeColumnarToRow\n                  :     :     :  :  +- CometFilter\n                  :     :     :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- Project\n                  :     :     :        +- BroadcastHashJoin\n                  :     :     :           :- ColumnarToRow\n                  :     :     :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           :        +- SubqueryBroadcast\n                  :     :     :           :           +- BroadcastExchange\n                  :     :     :           :              +- CometNativeColumnarToRow\n                  :     :     :           :                 +- CometProject\n                  :     :     :           :                    +- CometFilter\n                  :     :     :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- Union\n                  :     :           :- Project\n                  :     :           :  +- BroadcastHashJoin\n                  :     :           :     :- ColumnarToRow\n                  :     :           :     :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :           :     :        +- ReusedSubquery\n                  :     :           :     +- BroadcastExchange\n                  :     :           :        +- CometNativeColumnarToRow\n                  :     :           :           +- CometProject\n                  :     :           :              +- CometFilter\n                  :     :           :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             
+- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 52 eligible operators (40%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q10a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometBroadcastHashJoin\n                  :     :     :  :- CometFilter\n                  :     :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :     :  +- CometBroadcastExchange\n                  :     :     :     +- CometProject\n                  :     :     :        +- CometBroadcastHashJoin\n                  :     :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :           :     +- SubqueryBroadcast\n                  :     :     :           :        +- BroadcastExchange\n                  :     :     :           :           +- CometNativeColumnarToRow\n                  :     :     :           :              +- CometProject\n                  :     :     :           :                 +- CometFilter\n                  :     :     :           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :           +- CometBroadcastExchange\n                  :     :     :              +- CometProject\n                  :     :     :                 +- CometFilter\n                  :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometUnion\n                  :     :           :- CometProject\n                  :     :           :  +- CometBroadcastHashJoin\n                  :     :           :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :           :     :     +- ReusedSubquery\n                  :     :           :     +- CometBroadcastExchange\n                  :     :           :        +- CometProject\n                  :     :           :           +- CometFilter\n                  :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :           +- CometProject\n                  :     :              +- CometBroadcastHashJoin\n                  :     :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                 :     +- ReusedSubquery\n                  :     :                 +- CometBroadcastExchange\n                  :     :                    +- CometProject\n                  :     :                       +- CometFilter\n                  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 48 out of 52 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q11.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- BroadcastHashJoin\n      :     :  :- Filter\n      :     :  :  +- HashAggregate\n      :     :  :     +- CometNativeColumnarToRow\n      :     :  :        +- CometColumnarExchange\n      :     :  :           +- HashAggregate\n      :     :  :              +- Project\n      :     :  :                 +- BroadcastHashJoin\n      :     :  :                    :- Project\n      :     :  :                    :  +- BroadcastHashJoin\n      :     :  :                    :     :- CometNativeColumnarToRow\n      :     :  :                    :     :  +- CometProject\n      :     :  :                    :     :     +- CometFilter\n      :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :  :                    :     +- BroadcastExchange\n      :     :  :                    :        +- Filter\n      :     :  :                    :           +- ColumnarToRow\n      :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :  :                    :                    +- SubqueryBroadcast\n      :     :  :                    :                       +- BroadcastExchange\n      :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :  :                    :                             +- CometFilter\n      :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  :                    +- BroadcastExchange\n      :     :  :                       +- CometNativeColumnarToRow\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  +- BroadcastExchange\n      :     :     +- HashAggregate\n      :     :        +- CometNativeColumnarToRow\n      :     :           +- CometColumnarExchange\n      :     :              +- HashAggregate\n      :     :                 +- Project\n      :     :                    +- BroadcastHashJoin\n      :     :                       :- Project\n      :     :                       :  +- BroadcastHashJoin\n      :     :                       :     :- CometNativeColumnarToRow\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                       :     +- BroadcastExchange\n      :     :                       :        +- Filter\n      :     :                       :           +- ColumnarToRow\n      :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                       :                    +- SubqueryBroadcast\n      :     :                       :                       +- BroadcastExchange\n      :     :                       :                          +- CometNativeColumnarToRow\n      :     :                       :                             +- CometFilter\n      :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n 
     :     :                       +- BroadcastExchange\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometFilter\n      :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 85 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q11.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometBroadcastHashJoin\n         :     :  :- CometFilter\n         :     :  :  +- CometHashAggregate\n         :     :  :     +- CometExchange\n         :     :  :        +- CometHashAggregate\n         :     :  :           +- CometProject\n         :     :  :              +- CometBroadcastHashJoin\n         :     :  :                 :- CometProject\n         :     :  :                 :  +- CometBroadcastHashJoin\n         :     :  :                 :     :- CometProject\n         :     :  :                 :     :  +- CometFilter\n         :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :  :                 :     +- CometBroadcastExchange\n         :     :  :                 :        +- CometFilter\n         :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :  :                 :                 +- SubqueryBroadcast\n         :     :  :                 :                    +- BroadcastExchange\n         :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :  :                 :                          +- CometFilter\n         :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  :                 +- CometBroadcastExchange\n         :     :  :                    +- CometFilter\n         :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  +- CometBroadcastExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometExchange\n         :     :           +- CometHashAggregate\n         :     :              +- CometProject\n         :     :                 +- CometBroadcastHashJoin\n         :     :                    :- CometProject\n         :     :                    :  +- CometBroadcastHashJoin\n         :     :                    :     :- CometProject\n         :     :                    :     :  +- CometFilter\n         :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                    :     +- CometBroadcastExchange\n         :     :                    :        +- CometFilter\n         :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                    :                 +- SubqueryBroadcast\n         :     :                    :                    +- BroadcastExchange\n         :     :                    :                       +- CometNativeColumnarToRow\n         :     :                    :                          +- CometFilter\n         :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                    +- CometBroadcastExchange\n         :     :                       +- CometFilter\n         :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- 
CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 79 out of 85 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q12.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q12.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q14.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- BroadcastHashJoin\n   :- Filter\n   :  :  +- Subquery\n   :  :     +- HashAggregate\n   :  :        +- CometNativeColumnarToRow\n   :  :           +- CometColumnarExchange\n   :  :              +- HashAggregate\n   :  :                 +- Union\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    +- Project\n   :  :                       +- BroadcastHashJoin\n   :  :                          :- ColumnarToRow\n   :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                          :        +- ReusedSubquery\n   :  :                          +- BroadcastExchange\n   :  :                             +- CometNativeColumnarToRow\n   :  :                                +- CometProject\n   :  :                                   +- CometFilter\n   :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  +- HashAggregate\n   :     +- CometNativeColumnarToRow\n   :        +- CometColumnarExchange\n   :           +- HashAggregate\n   :              +- Project\n   :                 +- BroadcastHashJoin\n   :                    :- Project\n   :                    :  +- BroadcastHashJoin\n   :                    :     :- BroadcastHashJoin\n   :                    :     :  :- Filter\n   :                    :     :  :  +- ColumnarToRow\n   :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :  :           +- SubqueryBroadcast\n   :                    :     :  :              +- BroadcastExchange\n   :                    :     :  :                 +- CometNativeColumnarToRow\n   :                    :     :  :                    +- CometProject\n   :                    :     :  :                       +- CometFilter\n   :                    :     :  :                          :  +- Subquery\n   :                    :     :  :                          :     +- CometNativeColumnarToRow\n   :        
            :     :  :                          :        +- CometProject\n   :                    :     :  :                          :           +- CometFilter\n   :                    :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :  +- BroadcastExchange\n   :                    :     :     +- Project\n   :                    :     :        +- BroadcastHashJoin\n   :                    :     :           :- CometNativeColumnarToRow\n   :                    :     :           :  +- CometFilter\n   :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :           +- BroadcastExchange\n   :                    :     :              +- BroadcastHashJoin\n   :                    :     :                 :- CometNativeColumnarToRow\n   :                    :     :                 :  +- CometHashAggregate\n   :                    :     :                 :     +- CometColumnarExchange\n   :                    :     :                 :        +- HashAggregate\n   :                    :     :                 :           +- Project\n   :                    :     :                 :              +- BroadcastHashJoin\n   :                    :     :                 :                 :- Project\n   :                    :     :                 :                 :  +- BroadcastHashJoin\n   :                    :     :                 :                 :     :- Filter\n   :                    :     :                 :                 :     :  +- ColumnarToRow\n   :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :     :           +- SubqueryBroadcast\n   :                    :     :                 :                 :     :              +- BroadcastExchange\n   :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n   :                    :     :                 :                 :     :                    +- CometProject\n   :                    :     :                 :                 :     :                       +- CometFilter\n   :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 :     +- BroadcastExchange\n   :                    :     :                 :                 :        +- BroadcastHashJoin\n   :                    :     :                 :                 :           :- CometNativeColumnarToRow\n   :                    :     :                 :                 :           :  +- CometFilter\n   :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :           +- BroadcastExchange\n   :                    :     :                 :                 :              +- Project\n   :                    :     :                 :                 :                 +- BroadcastHashJoin\n   :                    :     :                 
:                 :                    :- Project\n   :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n   :                    :     :                 :                 :                    :     :- Filter\n   :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n   :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n   :                    :     :                 :                 :                    :     +- BroadcastExchange\n   :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                    :           +- CometFilter\n   :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :                    +- BroadcastExchange\n   :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                          +- CometProject\n   :                    :     :                 :                 :                             +- CometFilter\n   :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 +- BroadcastExchange\n   :                    :     :                 :                    +- CometNativeColumnarToRow\n   :                    :     :                 :                       +- CometProject\n   :                    :     :                 :                          +- CometFilter\n   :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 +- BroadcastExchange\n   :                    :     :                    +- Project\n   :                    :     :                       +- BroadcastHashJoin\n   :                    :     :                          :- Project\n   :                    :     :                          :  +- BroadcastHashJoin\n   :                    :     :                          :     :- Filter\n   :                    :     :                          :     :  +- ColumnarToRow\n   :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                          :     :           +- ReusedSubquery\n   :                    :     :                          :     +- BroadcastExchange\n   :                    :     :                          :        +- CometNativeColumnarToRow\n   :                    :     :                          :           +- CometFilter\n   :                    :     :                          :              +- CometNativeScan parquet 
spark_catalog.default.item\n   :                    :     :                          +- BroadcastExchange\n   :                    :     :                             +- CometNativeColumnarToRow\n   :                    :     :                                +- CometProject\n   :                    :     :                                   +- CometFilter\n   :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     +- BroadcastExchange\n   :                    :        +- BroadcastHashJoin\n   :                    :           :- CometNativeColumnarToRow\n   :                    :           :  +- CometFilter\n   :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :           +- BroadcastExchange\n   :                    :              +- Project\n   :                    :                 +- BroadcastHashJoin\n   :                    :                    :- CometNativeColumnarToRow\n   :                    :                    :  +- CometFilter\n   :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                    +- BroadcastExchange\n   :                    :                       +- BroadcastHashJoin\n   :                    :                          :- CometNativeColumnarToRow\n   :                    :                          :  +- CometHashAggregate\n   :                    :                          :     +- CometColumnarExchange\n   :                    :                          :        +- HashAggregate\n   :                    :                          :           +- Project\n   :                    :                          :              +- BroadcastHashJoin\n   :                    :                          :                 :- Project\n   :                    :                          :                 :  +- BroadcastHashJoin\n   :                    :                          :                 :     :- Filter\n   :                    :                          :                 :     :  +- ColumnarToRow\n   :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :     :           +- SubqueryBroadcast\n   :                    :                          :                 :     :              +- BroadcastExchange\n   :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n   :                    :                          :                 :     :                    +- CometProject\n   :                    :                          :                 :     :                       +- CometFilter\n   :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 :     +- BroadcastExchange\n   :                    :                          :                 :        +- BroadcastHashJoin\n   :                    :                          :                 :           :- CometNativeColumnarToRow\n   :                    :                          :                 :           :  +- CometFilter\n   : 
                   :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :           +- BroadcastExchange\n   :                    :                          :                 :              +- Project\n   :                    :                          :                 :                 +- BroadcastHashJoin\n   :                    :                          :                 :                    :- Project\n   :                    :                          :                 :                    :  +- BroadcastHashJoin\n   :                    :                          :                 :                    :     :- Filter\n   :                    :                          :                 :                    :     :  +- ColumnarToRow\n   :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :                    :     :           +- ReusedSubquery\n   :                    :                          :                 :                    :     +- BroadcastExchange\n   :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n   :                    :                          :                 :                    :           +- CometFilter\n   :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :                    +- BroadcastExchange\n   :                    :                          :                 :                       +- CometNativeColumnarToRow\n   :                    :                          :                 :                          +- CometProject\n   :                    :                          :                 :                             +- CometFilter\n   :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 +- BroadcastExchange\n   :                    :                          :                    +- CometNativeColumnarToRow\n   :                    :                          :                       +- CometProject\n   :                    :                          :                          +- CometFilter\n   :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          +- BroadcastExchange\n   :                    :                             +- Project\n   :                    :                                +- BroadcastHashJoin\n   :                    :                                   :- Project\n   :                    :                                   :  +- BroadcastHashJoin\n   :                    :                                   :     :- Filter\n   :                    :                                   :     :  +- ColumnarToRow\n   :                    :                                   :     :     +-  Scan parquet 
spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                                   :     :           +- ReusedSubquery\n   :                    :                                   :     +- BroadcastExchange\n   :                    :                                   :        +- CometNativeColumnarToRow\n   :                    :                                   :           +- CometFilter\n   :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                                   +- BroadcastExchange\n   :                    :                                      +- CometNativeColumnarToRow\n   :                    :                                         +- CometProject\n   :                    :                                            +- CometFilter\n   :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    +- BroadcastExchange\n   :                       +- CometNativeColumnarToRow\n   :                          +- CometProject\n   :                             +- CometFilter\n   :                                :  +- Subquery\n   :                                :     +- CometNativeColumnarToRow\n   :                                :        +- CometProject\n   :                                :           +- CometFilter\n   :                                :              +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   +- BroadcastExchange\n      +- Filter\n         :  +- ReusedSubquery\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- BroadcastHashJoin\n                           :     :  :- Filter\n                           :     :  :  +- ColumnarToRow\n                           :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :  :           +- SubqueryBroadcast\n                           :     :  :              +- BroadcastExchange\n                           :     :  :                 +- CometNativeColumnarToRow\n                           :     :  :                    +- CometProject\n                           :     :  :                       +- CometFilter\n                           :     :  :                          :  +- Subquery\n                           :     :  :                          :     +- CometNativeColumnarToRow\n                           :     :  :                          :        +- CometProject\n                           :     :  :                          :           +- CometFilter\n                           :     :  :                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  +- BroadcastExchange\n                           :     :     +- Project\n  
                         :     :        +- BroadcastHashJoin\n                           :     :           :- CometNativeColumnarToRow\n                           :     :           :  +- CometFilter\n                           :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :           +- BroadcastExchange\n                           :     :              +- BroadcastHashJoin\n                           :     :                 :- CometNativeColumnarToRow\n                           :     :                 :  +- CometHashAggregate\n                           :     :                 :     +- CometColumnarExchange\n                           :     :                 :        +- HashAggregate\n                           :     :                 :           +- Project\n                           :     :                 :              +- BroadcastHashJoin\n                           :     :                 :                 :- Project\n                           :     :                 :                 :  +- BroadcastHashJoin\n                           :     :                 :                 :     :- Filter\n                           :     :                 :                 :     :  +- ColumnarToRow\n                           :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :     :           +- SubqueryBroadcast\n                           :     :                 :                 :     :              +- BroadcastExchange\n                           :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                           :     :                 :                 :     :                    +- CometProject\n                           :     :                 :                 :     :                       +- CometFilter\n                           :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 :     +- BroadcastExchange\n                           :     :                 :                 :        +- BroadcastHashJoin\n                           :     :                 :                 :           :- CometNativeColumnarToRow\n                           :     :                 :                 :           :  +- CometFilter\n                           :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :           +- BroadcastExchange\n                           :     :                 :                 :              +- Project\n                           :     :                 :                 :                 +- BroadcastHashJoin\n                           :     :                 :                 :                    :- Project\n                           :     :                 :                 :                    :  +- BroadcastHashJoin\n                           :     :                 :                 :                    :     :- Filter\n                           :     :                 :                 :                    :     :  +- ColumnarToRow\n                           :     : 
                :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :                    :     :           +- ReusedSubquery\n                           :     :                 :                 :                    :     +- BroadcastExchange\n                           :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                           :     :                 :                 :                    :           +- CometFilter\n                           :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :                    +- BroadcastExchange\n                           :     :                 :                 :                       +- CometNativeColumnarToRow\n                           :     :                 :                 :                          +- CometProject\n                           :     :                 :                 :                             +- CometFilter\n                           :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 +- BroadcastExchange\n                           :     :                 :                    +- CometNativeColumnarToRow\n                           :     :                 :                       +- CometProject\n                           :     :                 :                          +- CometFilter\n                           :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 +- BroadcastExchange\n                           :     :                    +- Project\n                           :     :                       +- BroadcastHashJoin\n                           :     :                          :- Project\n                           :     :                          :  +- BroadcastHashJoin\n                           :     :                          :     :- Filter\n                           :     :                          :     :  +- ColumnarToRow\n                           :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                          :     :           +- ReusedSubquery\n                           :     :                          :     +- BroadcastExchange\n                           :     :                          :        +- CometNativeColumnarToRow\n                           :     :                          :           +- CometFilter\n                           :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                          +- BroadcastExchange\n                           :     :                             +- CometNativeColumnarToRow\n                           :     :                                +- CometProject\n                           :     :                                   
+- CometFilter\n                           :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     +- BroadcastExchange\n                           :        +- BroadcastHashJoin\n                           :           :- CometNativeColumnarToRow\n                           :           :  +- CometFilter\n                           :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :           +- BroadcastExchange\n                           :              +- Project\n                           :                 +- BroadcastHashJoin\n                           :                    :- CometNativeColumnarToRow\n                           :                    :  +- CometFilter\n                           :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                    +- BroadcastExchange\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometHashAggregate\n                           :                          :     +- CometColumnarExchange\n                           :                          :        +- HashAggregate\n                           :                          :           +- Project\n                           :                          :              +- BroadcastHashJoin\n                           :                          :                 :- Project\n                           :                          :                 :  +- BroadcastHashJoin\n                           :                          :                 :     :- Filter\n                           :                          :                 :     :  +- ColumnarToRow\n                           :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :     :           +- SubqueryBroadcast\n                           :                          :                 :     :              +- BroadcastExchange\n                           :                          :                 :     :                 +- CometNativeColumnarToRow\n                           :                          :                 :     :                    +- CometProject\n                           :                          :                 :     :                       +- CometFilter\n                           :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 :     +- BroadcastExchange\n                           :                          :                 :        +- BroadcastHashJoin\n                           :                          :                 :           :- CometNativeColumnarToRow\n                           :                          :                 :           :  +- CometFilter\n                           :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :           +- 
BroadcastExchange\n                           :                          :                 :              +- Project\n                           :                          :                 :                 +- BroadcastHashJoin\n                           :                          :                 :                    :- Project\n                           :                          :                 :                    :  +- BroadcastHashJoin\n                           :                          :                 :                    :     :- Filter\n                           :                          :                 :                    :     :  +- ColumnarToRow\n                           :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :                    :     :           +- ReusedSubquery\n                           :                          :                 :                    :     +- BroadcastExchange\n                           :                          :                 :                    :        +- CometNativeColumnarToRow\n                           :                          :                 :                    :           +- CometFilter\n                           :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :                    +- BroadcastExchange\n                           :                          :                 :                       +- CometNativeColumnarToRow\n                           :                          :                 :                          +- CometProject\n                           :                          :                 :                             +- CometFilter\n                           :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 +- BroadcastExchange\n                           :                          :                    +- CometNativeColumnarToRow\n                           :                          :                       +- CometProject\n                           :                          :                          +- CometFilter\n                           :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- BroadcastHashJoin\n                           :                                   :- Project\n                           :                                   :  +- BroadcastHashJoin\n                           :                                   :     :- Filter\n                           :                                   :     :  +- ColumnarToRow\n                           :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :           
                        :     :           +- ReusedSubquery\n                           :                                   :     +- BroadcastExchange\n                           :                                   :        +- CometNativeColumnarToRow\n                           :                                   :           +- CometFilter\n                           :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                                   +- BroadcastExchange\n                           :                                      +- CometNativeColumnarToRow\n                           :                                         +- CometProject\n                           :                                            +- CometFilter\n                           :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometProject\n                                    +- CometFilter\n                                       :  +- Subquery\n                                       :     +- CometNativeColumnarToRow\n                                       :        +- CometProject\n                                       :           +- CometFilter\n                                       :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 128 out of 333 eligible operators (38%). Final plan contains 69 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q14.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometBroadcastHashJoin\n      :- CometFilter\n      :  :  +- Subquery\n      :  :     +- CometNativeColumnarToRow\n      :  :        +- CometHashAggregate\n      :  :           +- CometExchange\n      :  :              +- CometHashAggregate\n      :  :                 +- CometUnion\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    +- CometProject\n      :  :                       +- CometBroadcastHashJoin\n      :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :  :                          :     +- ReusedSubquery\n      :  :                          +- CometBroadcastExchange\n      :  :                             +- CometProject\n      :  :                                +- CometFilter\n      :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  +- CometHashAggregate\n      :     +- CometExchange\n      :        +- CometHashAggregate\n      :           +- CometProject\n      :              +- CometBroadcastHashJoin\n      :                 :- CometProject\n      :                 :  +- CometBroadcastHashJoin\n      :                 :     :- CometBroadcastHashJoin\n      :                 :     :  :- CometFilter\n      :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :  :        +- SubqueryBroadcast\n      :                 :     :  :           +- BroadcastExchange\n      :                 :     :  :              +- CometNativeColumnarToRow\n      :                 :     :  :                 +- CometProject\n      :                 :     :  :                    +- CometFilter\n      :                 :     :  :                       :  +- Subquery\n      :                 :     :  :                       :     +- CometNativeColumnarToRow\n      :                 :     :  :                       :        +- CometProject\n      :                 :     :  :                       :           +- CometFilter\n      :                 :     :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n      :                 :     :  +- CometBroadcastExchange\n      :                 :     :     +- CometProject\n      :                 :     :        +- CometBroadcastHashJoin\n      :                 :     :           :- CometFilter\n      :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :           +- CometBroadcastExchange\n      :                 :     :              +- CometBroadcastHashJoin\n      :                 :     :                 :- CometHashAggregate\n      :                 :     :                 :  +- CometExchange\n      :                 :     :                 :     +- CometHashAggregate\n      :                 :     :                 :        +- CometProject\n      :                 :     :                 :           +- CometBroadcastHashJoin\n      :                 :     :                 :              :- CometProject\n      :                 :     :                 :              :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :     :- CometFilter\n      :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :                 :              :     :        +- SubqueryBroadcast\n      :                 :     :                 :              :     :           +- BroadcastExchange\n      :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n      :                 :     :                 :              :     :                 +- CometProject\n      :                 :     :                 :              :     :                    +- CometFilter\n      :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              :     +- CometBroadcastExchange\n      :                 :     :                 :              :        +- CometBroadcastHashJoin\n      :                 :     :                 :              :           :- CometFilter\n      :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :           +- CometBroadcastExchange\n      :                 :     :                 :              :              +- CometProject\n      :                 :     :                 :              :                 +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :- CometProject\n      :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :     :- CometFilter\n      :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :     :                 :              :                    :     :        +- ReusedSubquery\n      :                 :     :                 :              :                    :     +- CometBroadcastExchange\n      :                 :     :                 :              :      
              :        +- CometFilter\n      :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :                    +- CometBroadcastExchange\n      :                 :     :                 :              :                       +- CometProject\n      :                 :     :                 :              :                          +- CometFilter\n      :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              +- CometBroadcastExchange\n      :                 :     :                 :                 +- CometProject\n      :                 :     :                 :                    +- CometFilter\n      :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 +- CometBroadcastExchange\n      :                 :     :                    +- CometProject\n      :                 :     :                       +- CometBroadcastHashJoin\n      :                 :     :                          :- CometProject\n      :                 :     :                          :  +- CometBroadcastHashJoin\n      :                 :     :                          :     :- CometFilter\n      :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :     :                          :     :        +- ReusedSubquery\n      :                 :     :                          :     +- CometBroadcastExchange\n      :                 :     :                          :        +- CometFilter\n      :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                          +- CometBroadcastExchange\n      :                 :     :                             +- CometProject\n      :                 :     :                                +- CometFilter\n      :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     +- CometBroadcastExchange\n      :                 :        +- CometBroadcastHashJoin\n      :                 :           :- CometFilter\n      :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :           +- CometBroadcastExchange\n      :                 :              +- CometProject\n      :                 :                 +- CometBroadcastHashJoin\n      :                 :                    :- CometFilter\n      :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                    +- CometBroadcastExchange\n      :                 :                       +- CometBroadcastHashJoin\n      :                 :                          :- CometHashAggregate\n      :                 :                          :  +- CometExchange\n      :                 :                          :     +- CometHashAggregate\n      :                 :    
                      :        +- CometProject\n      :                 :                          :           +- CometBroadcastHashJoin\n      :                 :                          :              :- CometProject\n      :                 :                          :              :  +- CometBroadcastHashJoin\n      :                 :                          :              :     :- CometFilter\n      :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :                          :              :     :        +- SubqueryBroadcast\n      :                 :                          :              :     :           +- BroadcastExchange\n      :                 :                          :              :     :              +- CometNativeColumnarToRow\n      :                 :                          :              :     :                 +- CometProject\n      :                 :                          :              :     :                    +- CometFilter\n      :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          :              :     +- CometBroadcastExchange\n      :                 :                          :              :        +- CometBroadcastHashJoin\n      :                 :                          :              :           :- CometFilter\n      :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :           +- CometBroadcastExchange\n      :                 :                          :              :              +- CometProject\n      :                 :                          :              :                 +- CometBroadcastHashJoin\n      :                 :                          :              :                    :- CometProject\n      :                 :                          :              :                    :  +- CometBroadcastHashJoin\n      :                 :                          :              :                    :     :- CometFilter\n      :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :                          :              :                    :     :        +- ReusedSubquery\n      :                 :                          :              :                    :     +- CometBroadcastExchange\n      :                 :                          :              :                    :        +- CometFilter\n      :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :                    +- CometBroadcastExchange\n      :                 :                          :              :                       +- CometProject\n      :                 :                          :              :                          +- CometFilter\n      :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n      :                 :                          :              +- CometBroadcastExchange\n      :                 :                          :                 +- CometProject\n      :                 :                          :                    +- CometFilter\n      :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          +- CometBroadcastExchange\n      :                 :                             +- CometProject\n      :                 :                                +- CometBroadcastHashJoin\n      :                 :                                   :- CometProject\n      :                 :                                   :  +- CometBroadcastHashJoin\n      :                 :                                   :     :- CometFilter\n      :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :                                   :     :        +- ReusedSubquery\n      :                 :                                   :     +- CometBroadcastExchange\n      :                 :                                   :        +- CometFilter\n      :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                                   +- CometBroadcastExchange\n      :                 :                                      +- CometProject\n      :                 :                                         +- CometFilter\n      :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 +- CometBroadcastExchange\n      :                    +- CometProject\n      :                       +- CometFilter\n      :                          :  +- ReusedSubquery\n      :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      +- CometBroadcastExchange\n         +- CometFilter\n            :  +- ReusedSubquery\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastHashJoin\n                           :     :  :- CometFilter\n                           :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :  :        +- SubqueryBroadcast\n                           :     :  :           +- BroadcastExchange\n                           :     :  :              +- CometNativeColumnarToRow\n                           :     :  :                 +- CometProject\n                           :     :  :                    +- CometFilter\n                           :     :  :                       :  +- Subquery\n                           :     :  :                       :     +- CometNativeColumnarToRow\n                           :     :  :                       :        +- CometProject\n                           :     :  :                       :           +- CometFilter\n                           : 
    :  :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :  +- CometBroadcastExchange\n                           :     :     +- CometProject\n                           :     :        +- CometBroadcastHashJoin\n                           :     :           :- CometFilter\n                           :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :           +- CometBroadcastExchange\n                           :     :              +- CometBroadcastHashJoin\n                           :     :                 :- CometHashAggregate\n                           :     :                 :  +- CometExchange\n                           :     :                 :     +- CometHashAggregate\n                           :     :                 :        +- CometProject\n                           :     :                 :           +- CometBroadcastHashJoin\n                           :     :                 :              :- CometProject\n                           :     :                 :              :  +- CometBroadcastHashJoin\n                           :     :                 :              :     :- CometFilter\n                           :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :                 :              :     :        +- SubqueryBroadcast\n                           :     :                 :              :     :           +- BroadcastExchange\n                           :     :                 :              :     :              +- CometNativeColumnarToRow\n                           :     :                 :              :     :                 +- CometProject\n                           :     :                 :              :     :                    +- CometFilter\n                           :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              :     +- CometBroadcastExchange\n                           :     :                 :              :        +- CometBroadcastHashJoin\n                           :     :                 :              :           :- CometFilter\n                           :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :           +- CometBroadcastExchange\n                           :     :                 :              :              +- CometProject\n                           :     :                 :              :                 +- CometBroadcastHashJoin\n                           :     :                 :              :                    :- CometProject\n                           :     :                 :              :                    :  +- CometBroadcastHashJoin\n                           :     :                 :              :                    :     :- CometFilter\n                           :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.catalog_sales\n                           :     :                 :              :                    :     :        +- ReusedSubquery\n                           :     :                 :              :                    :     +- CometBroadcastExchange\n                           :     :                 :              :                    :        +- CometFilter\n                           :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :                    +- CometBroadcastExchange\n                           :     :                 :              :                       +- CometProject\n                           :     :                 :              :                          +- CometFilter\n                           :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              +- CometBroadcastExchange\n                           :     :                 :                 +- CometProject\n                           :     :                 :                    +- CometFilter\n                           :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 +- CometBroadcastExchange\n                           :     :                    +- CometProject\n                           :     :                       +- CometBroadcastHashJoin\n                           :     :                          :- CometProject\n                           :     :                          :  +- CometBroadcastHashJoin\n                           :     :                          :     :- CometFilter\n                           :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :     :                          :     :        +- ReusedSubquery\n                           :     :                          :     +- CometBroadcastExchange\n                           :     :                          :        +- CometFilter\n                           :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                          +- CometBroadcastExchange\n                           :     :                             +- CometProject\n                           :     :                                +- CometFilter\n                           :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n                           :        +- CometBroadcastHashJoin\n                           :           :- CometFilter\n                           :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :           +- CometBroadcastExchange\n                           :              +- CometProject\n                           :                 +- CometBroadcastHashJoin\n                           :                    :- CometFilter\n                           :                    :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n                           :                    +- CometBroadcastExchange\n                           :                       +- CometBroadcastHashJoin\n                           :                          :- CometHashAggregate\n                           :                          :  +- CometExchange\n                           :                          :     +- CometHashAggregate\n                           :                          :        +- CometProject\n                           :                          :           +- CometBroadcastHashJoin\n                           :                          :              :- CometProject\n                           :                          :              :  +- CometBroadcastHashJoin\n                           :                          :              :     :- CometFilter\n                           :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                          :              :     :        +- SubqueryBroadcast\n                           :                          :              :     :           +- BroadcastExchange\n                           :                          :              :     :              +- CometNativeColumnarToRow\n                           :                          :              :     :                 +- CometProject\n                           :                          :              :     :                    +- CometFilter\n                           :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              :     +- CometBroadcastExchange\n                           :                          :              :        +- CometBroadcastHashJoin\n                           :                          :              :           :- CometFilter\n                           :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :           +- CometBroadcastExchange\n                           :                          :              :              +- CometProject\n                           :                          :              :                 +- CometBroadcastHashJoin\n                           :                          :              :                    :- CometProject\n                           :                          :              :                    :  +- CometBroadcastHashJoin\n                           :                          :              :                    :     :- CometFilter\n                           :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :                          :              :                    :     :        +- ReusedSubquery\n                           :                          :              :                    :     +- CometBroadcastExchange\n                           :                          :              :                    :        +- CometFilter\n                           :                          :              :                    :           
+- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :                    +- CometBroadcastExchange\n                           :                          :              :                       +- CometProject\n                           :                          :              :                          +- CometFilter\n                           :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              +- CometBroadcastExchange\n                           :                          :                 +- CometProject\n                           :                          :                    +- CometFilter\n                           :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          +- CometBroadcastExchange\n                           :                             +- CometProject\n                           :                                +- CometBroadcastHashJoin\n                           :                                   :- CometProject\n                           :                                   :  +- CometBroadcastHashJoin\n                           :                                   :     :- CometFilter\n                           :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                                   :     :        +- ReusedSubquery\n                           :                                   :     +- CometBroadcastExchange\n                           :                                   :        +- CometFilter\n                           :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                                   +- CometBroadcastExchange\n                           :                                      +- CometProject\n                           :                                         +- CometFilter\n                           :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    :  +- ReusedSubquery\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 298 out of 327 eligible operators (91%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q14a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- Filter\n               :              :  :  +- Subquery\n               :              :  :     +- HashAggregate\n               :              :  :        +- CometNativeColumnarToRow\n               :              :  :           +- CometColumnarExchange\n               :              :  :              +- HashAggregate\n               :              :  :                 +- Union\n               :              :  :                    :- Project\n               :              :  :                    :  +- BroadcastHashJoin\n               :              :  :                    :     :- ColumnarToRow\n               :              :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :  :                    :     :        +- ReusedSubquery\n               :              :  :                    :     +- BroadcastExchange\n               :              :  :                    :        +- CometNativeColumnarToRow\n               :              :  :                    :           +- CometProject\n               :              :  :                    :              +- CometFilter\n               :              :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  :                    :- Project\n               :              :  :                    :  +- BroadcastHashJoin\n               :              :  :                    :     :- ColumnarToRow\n               :              :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :  :                    :     :        +- SubqueryBroadcast\n               :              :  :                    :     :           +- BroadcastExchange\n               :              :  :                    :     :              +- CometNativeColumnarToRow\n               :              :  :                    :     :                 +- CometProject\n               :              :  :                    :     :                    +- CometFilter\n               :              :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  :                    :     +- BroadcastExchange\n               :              :  :                    :        +- CometNativeColumnarToRow\n               :              :  :                    :           +- CometProject\n               :              :  :                    :              +- CometFilter\n               :              :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  :                    +- Project\n               :              :  :                       +- BroadcastHashJoin\n               :              :  :                          :- ColumnarToRow\n               :              :  :        
                  :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :  :                          :        +- ReusedSubquery\n               :              :  :                          +- BroadcastExchange\n               :              :  :                             +- CometNativeColumnarToRow\n               :              :  :                                +- CometProject\n               :              :  :                                   +- CometFilter\n               :              :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  +- HashAggregate\n               :              :     +- CometNativeColumnarToRow\n               :              :        +- CometColumnarExchange\n               :              :           +- HashAggregate\n               :              :              +- Project\n               :              :                 +- BroadcastHashJoin\n               :              :                    :- Project\n               :              :                    :  +- BroadcastHashJoin\n               :              :                    :     :- BroadcastHashJoin\n               :              :                    :     :  :- Filter\n               :              :                    :     :  :  +- ColumnarToRow\n               :              :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :  :           +- SubqueryBroadcast\n               :              :                    :     :  :              +- BroadcastExchange\n               :              :                    :     :  :                 +- CometNativeColumnarToRow\n               :              :                    :     :  :                    +- CometProject\n               :              :                    :     :  :                       +- CometFilter\n               :              :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :  +- BroadcastExchange\n               :              :                    :     :     +- Project\n               :              :                    :     :        +- BroadcastHashJoin\n               :              :                    :     :           :- CometNativeColumnarToRow\n               :              :                    :     :           :  +- CometFilter\n               :              :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :           +- BroadcastExchange\n               :              :                    :     :              +- BroadcastHashJoin\n               :              :                    :     :                 :- CometNativeColumnarToRow\n               :              :                    :     :                 :  +- CometHashAggregate\n               :              :                    :     :                 :     +- CometColumnarExchange\n               :              :                    :     :                 :        +- HashAggregate\n               :              :                    :     :                 :           +- Project\n              
 :              :                    :     :                 :              +- BroadcastHashJoin\n               :              :                    :     :                 :                 :- Project\n               :              :                    :     :                 :                 :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :     :- Filter\n               :              :                    :     :                 :                 :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :              :                    :     :                 :                 :     :              +- BroadcastExchange\n               :              :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :     :                    +- CometProject\n               :              :                    :     :                 :                 :     :                       +- CometFilter\n               :              :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 :     +- BroadcastExchange\n               :              :                    :     :                 :                 :        +- BroadcastHashJoin\n               :              :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :           :  +- CometFilter\n               :              :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :           +- BroadcastExchange\n               :              :                    :     :                 :                 :              +- Project\n               :              :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :- Project\n               :              :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :     :- Filter\n               :              :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    : 
    :                 :                 :                    :     :           +- ReusedSubquery\n               :              :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :              :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                    :           +- CometFilter\n               :              :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :                    +- BroadcastExchange\n               :              :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                          +- CometProject\n               :              :                    :     :                 :                 :                             +- CometFilter\n               :              :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 +- BroadcastExchange\n               :              :                    :     :                 :                    +- CometNativeColumnarToRow\n               :              :                    :     :                 :                       +- CometProject\n               :              :                    :     :                 :                          +- CometFilter\n               :              :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 +- BroadcastExchange\n               :              :                    :     :                    +- Project\n               :              :                    :     :                       +- BroadcastHashJoin\n               :              :                    :     :                          :- Project\n               :              :                    :     :                          :  +- BroadcastHashJoin\n               :              :                    :     :                          :     :- Filter\n               :              :                    :     :                          :     :  +- ColumnarToRow\n               :              :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                          :     :           +- ReusedSubquery\n               :              :                    :     :                          :     +- BroadcastExchange\n               :              :                    :     :                          :        +- CometNativeColumnarToRow\n               :              :                    :     :                          :           +- CometFilter\n               :              :                    :     :  
                        :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                          +- BroadcastExchange\n               :              :                    :     :                             +- CometNativeColumnarToRow\n               :              :                    :     :                                +- CometProject\n               :              :                    :     :                                   +- CometFilter\n               :              :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     +- BroadcastExchange\n               :              :                    :        +- BroadcastHashJoin\n               :              :                    :           :- CometNativeColumnarToRow\n               :              :                    :           :  +- CometFilter\n               :              :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :           +- BroadcastExchange\n               :              :                    :              +- Project\n               :              :                    :                 +- BroadcastHashJoin\n               :              :                    :                    :- CometNativeColumnarToRow\n               :              :                    :                    :  +- CometFilter\n               :              :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                    +- BroadcastExchange\n               :              :                    :                       +- BroadcastHashJoin\n               :              :                    :                          :- CometNativeColumnarToRow\n               :              :                    :                          :  +- CometHashAggregate\n               :              :                    :                          :     +- CometColumnarExchange\n               :              :                    :                          :        +- HashAggregate\n               :              :                    :                          :           +- Project\n               :              :                    :                          :              +- BroadcastHashJoin\n               :              :                    :                          :                 :- Project\n               :              :                    :                          :                 :  +- BroadcastHashJoin\n               :              :                    :                          :                 :     :- Filter\n               :              :                    :                          :                 :     :  +- ColumnarToRow\n               :              :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :     :           +- SubqueryBroadcast\n               :              :                    :                          :                 :     :              +- BroadcastExchange\n               :    
          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :                          :                 :     :                    +- CometProject\n               :              :                    :                          :                 :     :                       +- CometFilter\n               :              :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 :     +- BroadcastExchange\n               :              :                    :                          :                 :        +- BroadcastHashJoin\n               :              :                    :                          :                 :           :- CometNativeColumnarToRow\n               :              :                    :                          :                 :           :  +- CometFilter\n               :              :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :           +- BroadcastExchange\n               :              :                    :                          :                 :              +- Project\n               :              :                    :                          :                 :                 +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :- Project\n               :              :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :     :- Filter\n               :              :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :              :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :              :                    :                          :                 :                    :     +- BroadcastExchange\n               :              :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :                          :                 :                    :           +- CometFilter\n               :              :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :                    +- BroadcastExchange\n               :              :                    :                          :                 :                       +- CometNativeColumnarToRow\n 
              :              :                    :                          :                 :                          +- CometProject\n               :              :                    :                          :                 :                             +- CometFilter\n               :              :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 +- BroadcastExchange\n               :              :                    :                          :                    +- CometNativeColumnarToRow\n               :              :                    :                          :                       +- CometProject\n               :              :                    :                          :                          +- CometFilter\n               :              :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          +- BroadcastExchange\n               :              :                    :                             +- Project\n               :              :                    :                                +- BroadcastHashJoin\n               :              :                    :                                   :- Project\n               :              :                    :                                   :  +- BroadcastHashJoin\n               :              :                    :                                   :     :- Filter\n               :              :                    :                                   :     :  +- ColumnarToRow\n               :              :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                                   :     :           +- ReusedSubquery\n               :              :                    :                                   :     +- BroadcastExchange\n               :              :                    :                                   :        +- CometNativeColumnarToRow\n               :              :                    :                                   :           +- CometFilter\n               :              :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                                   +- BroadcastExchange\n               :              :                    :                                      +- CometNativeColumnarToRow\n               :              :                    :                                         +- CometProject\n               :              :                    :                                            +- CometFilter\n               :              :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    +- BroadcastExchange\n               :              :                       +- CometNativeColumnarToRow\n               :              :                          +- CometProject\n      
         :              :                             +- CometFilter\n               :              :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :- Filter\n               :              :  :  +- ReusedSubquery\n               :              :  +- HashAggregate\n               :              :     +- CometNativeColumnarToRow\n               :              :        +- CometColumnarExchange\n               :              :           +- HashAggregate\n               :              :              +- Project\n               :              :                 +- BroadcastHashJoin\n               :              :                    :- Project\n               :              :                    :  +- BroadcastHashJoin\n               :              :                    :     :- BroadcastHashJoin\n               :              :                    :     :  :- Filter\n               :              :                    :     :  :  +- ColumnarToRow\n               :              :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :  :           +- ReusedSubquery\n               :              :                    :     :  +- BroadcastExchange\n               :              :                    :     :     +- Project\n               :              :                    :     :        +- BroadcastHashJoin\n               :              :                    :     :           :- CometNativeColumnarToRow\n               :              :                    :     :           :  +- CometFilter\n               :              :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :           +- BroadcastExchange\n               :              :                    :     :              +- BroadcastHashJoin\n               :              :                    :     :                 :- CometNativeColumnarToRow\n               :              :                    :     :                 :  +- CometHashAggregate\n               :              :                    :     :                 :     +- CometColumnarExchange\n               :              :                    :     :                 :        +- HashAggregate\n               :              :                    :     :                 :           +- Project\n               :              :                    :     :                 :              +- BroadcastHashJoin\n               :              :                    :     :                 :                 :- Project\n               :              :                    :     :                 :                 :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :     :- Filter\n               :              :                    :     :                 :                 :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :              :                    
:     :                 :                 :     :              +- BroadcastExchange\n               :              :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :     :                    +- CometProject\n               :              :                    :     :                 :                 :     :                       +- CometFilter\n               :              :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 :     +- BroadcastExchange\n               :              :                    :     :                 :                 :        +- BroadcastHashJoin\n               :              :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :           :  +- CometFilter\n               :              :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :           +- BroadcastExchange\n               :              :                    :     :                 :                 :              +- Project\n               :              :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :- Project\n               :              :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :     :- Filter\n               :              :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :              :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :              :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                    :           +- CometFilter\n               :              :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :                    +- BroadcastExchange\n               :              :                    :     :                 :                 :                    
   +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                          +- CometProject\n               :              :                    :     :                 :                 :                             +- CometFilter\n               :              :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 +- BroadcastExchange\n               :              :                    :     :                 :                    +- CometNativeColumnarToRow\n               :              :                    :     :                 :                       +- CometProject\n               :              :                    :     :                 :                          +- CometFilter\n               :              :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 +- BroadcastExchange\n               :              :                    :     :                    +- Project\n               :              :                    :     :                       +- BroadcastHashJoin\n               :              :                    :     :                          :- Project\n               :              :                    :     :                          :  +- BroadcastHashJoin\n               :              :                    :     :                          :     :- Filter\n               :              :                    :     :                          :     :  +- ColumnarToRow\n               :              :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                          :     :           +- ReusedSubquery\n               :              :                    :     :                          :     +- BroadcastExchange\n               :              :                    :     :                          :        +- CometNativeColumnarToRow\n               :              :                    :     :                          :           +- CometFilter\n               :              :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                          +- BroadcastExchange\n               :              :                    :     :                             +- CometNativeColumnarToRow\n               :              :                    :     :                                +- CometProject\n               :              :                    :     :                                   +- CometFilter\n               :              :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     +- BroadcastExchange\n               :              :                    :        +- BroadcastHashJoin\n               :              :                    :           :- CometNativeColumnarToRow\n               :            
  :                    :           :  +- CometFilter\n               :              :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :           +- BroadcastExchange\n               :              :                    :              +- Project\n               :              :                    :                 +- BroadcastHashJoin\n               :              :                    :                    :- CometNativeColumnarToRow\n               :              :                    :                    :  +- CometFilter\n               :              :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                    +- BroadcastExchange\n               :              :                    :                       +- BroadcastHashJoin\n               :              :                    :                          :- CometNativeColumnarToRow\n               :              :                    :                          :  +- CometHashAggregate\n               :              :                    :                          :     +- CometColumnarExchange\n               :              :                    :                          :        +- HashAggregate\n               :              :                    :                          :           +- Project\n               :              :                    :                          :              +- BroadcastHashJoin\n               :              :                    :                          :                 :- Project\n               :              :                    :                          :                 :  +- BroadcastHashJoin\n               :              :                    :                          :                 :     :- Filter\n               :              :                    :                          :                 :     :  +- ColumnarToRow\n               :              :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :     :           +- SubqueryBroadcast\n               :              :                    :                          :                 :     :              +- BroadcastExchange\n               :              :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :                          :                 :     :                    +- CometProject\n               :              :                    :                          :                 :     :                       +- CometFilter\n               :              :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 :     +- BroadcastExchange\n               :              :                    :                          :                 :        +- BroadcastHashJoin\n               :              :                    :                          :                 :           :- 
CometNativeColumnarToRow\n               :              :                    :                          :                 :           :  +- CometFilter\n               :              :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :           +- BroadcastExchange\n               :              :                    :                          :                 :              +- Project\n               :              :                    :                          :                 :                 +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :- Project\n               :              :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :     :- Filter\n               :              :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :              :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :              :                    :                          :                 :                    :     +- BroadcastExchange\n               :              :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :                          :                 :                    :           +- CometFilter\n               :              :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :                    +- BroadcastExchange\n               :              :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :              :                    :                          :                 :                          +- CometProject\n               :              :                    :                          :                 :                             +- CometFilter\n               :              :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 +- BroadcastExchange\n               :              :                    :                          :                    +- CometNativeColumnarToRow\n               :              :                    :                          :                       +- CometProject\n               :              :                    :                          :                          +- CometFilter\n      
         :              :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          +- BroadcastExchange\n               :              :                    :                             +- Project\n               :              :                    :                                +- BroadcastHashJoin\n               :              :                    :                                   :- Project\n               :              :                    :                                   :  +- BroadcastHashJoin\n               :              :                    :                                   :     :- Filter\n               :              :                    :                                   :     :  +- ColumnarToRow\n               :              :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                                   :     :           +- ReusedSubquery\n               :              :                    :                                   :     +- BroadcastExchange\n               :              :                    :                                   :        +- CometNativeColumnarToRow\n               :              :                    :                                   :           +- CometFilter\n               :              :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                                   +- BroadcastExchange\n               :              :                    :                                      +- CometNativeColumnarToRow\n               :              :                    :                                         +- CometProject\n               :              :                    :                                            +- CometFilter\n               :              :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    +- BroadcastExchange\n               :              :                       +- CometNativeColumnarToRow\n               :              :                          +- CometProject\n               :              :                             +- CometFilter\n               :              :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              +- Filter\n               :                 :  +- ReusedSubquery\n               :                 +- HashAggregate\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometColumnarExchange\n               :                          +- HashAggregate\n               :                             +- Project\n               :                                +- BroadcastHashJoin\n               :                                   :- Project\n               :                                   :  +- BroadcastHashJoin\n               :                                   :     :- BroadcastHashJoin\n               :                                   :     :  :- Filter\n               
:                                   :     :  :  +- ColumnarToRow\n               :                                   :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :  :           +- ReusedSubquery\n               :                                   :     :  +- BroadcastExchange\n               :                                   :     :     +- Project\n               :                                   :     :        +- BroadcastHashJoin\n               :                                   :     :           :- CometNativeColumnarToRow\n               :                                   :     :           :  +- CometFilter\n               :                                   :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :           +- BroadcastExchange\n               :                                   :     :              +- BroadcastHashJoin\n               :                                   :     :                 :- CometNativeColumnarToRow\n               :                                   :     :                 :  +- CometHashAggregate\n               :                                   :     :                 :     +- CometColumnarExchange\n               :                                   :     :                 :        +- HashAggregate\n               :                                   :     :                 :           +- Project\n               :                                   :     :                 :              +- BroadcastHashJoin\n               :                                   :     :                 :                 :- Project\n               :                                   :     :                 :                 :  +- BroadcastHashJoin\n               :                                   :     :                 :                 :     :- Filter\n               :                                   :     :                 :                 :     :  +- ColumnarToRow\n               :                                   :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :                 :                 :     :           +- SubqueryBroadcast\n               :                                   :     :                 :                 :     :              +- BroadcastExchange\n               :                                   :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                   :     :                 :                 :     :                    +- CometProject\n               :                                   :     :                 :                 :     :                       +- CometFilter\n               :                                   :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :                 :                 :     +- BroadcastExchange\n               :                                   :     :                 :                 :        +- BroadcastHashJoin\n               :            
                       :     :                 :                 :           :- CometNativeColumnarToRow\n               :                                   :     :                 :                 :           :  +- CometFilter\n               :                                   :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :                 :                 :           +- BroadcastExchange\n               :                                   :     :                 :                 :              +- Project\n               :                                   :     :                 :                 :                 +- BroadcastHashJoin\n               :                                   :     :                 :                 :                    :- Project\n               :                                   :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                                   :     :                 :                 :                    :     :- Filter\n               :                                   :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                   :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                   :     :                 :                 :                    :     +- BroadcastExchange\n               :                                   :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                   :     :                 :                 :                    :           +- CometFilter\n               :                                   :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :                 :                 :                    +- BroadcastExchange\n               :                                   :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                   :     :                 :                 :                          +- CometProject\n               :                                   :     :                 :                 :                             +- CometFilter\n               :                                   :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :                 :                 +- BroadcastExchange\n               :                                   :     :                 :                    +- CometNativeColumnarToRow\n               :                                   :     :                 :                       +- CometProject\n               :                                   :     :                 :                          +- 
CometFilter\n               :                                   :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :                 +- BroadcastExchange\n               :                                   :     :                    +- Project\n               :                                   :     :                       +- BroadcastHashJoin\n               :                                   :     :                          :- Project\n               :                                   :     :                          :  +- BroadcastHashJoin\n               :                                   :     :                          :     :- Filter\n               :                                   :     :                          :     :  +- ColumnarToRow\n               :                                   :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :                          :     :           +- ReusedSubquery\n               :                                   :     :                          :     +- BroadcastExchange\n               :                                   :     :                          :        +- CometNativeColumnarToRow\n               :                                   :     :                          :           +- CometFilter\n               :                                   :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :                          +- BroadcastExchange\n               :                                   :     :                             +- CometNativeColumnarToRow\n               :                                   :     :                                +- CometProject\n               :                                   :     :                                   +- CometFilter\n               :                                   :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     +- BroadcastExchange\n               :                                   :        +- BroadcastHashJoin\n               :                                   :           :- CometNativeColumnarToRow\n               :                                   :           :  +- CometFilter\n               :                                   :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :           +- BroadcastExchange\n               :                                   :              +- Project\n               :                                   :                 +- BroadcastHashJoin\n               :                                   :                    :- CometNativeColumnarToRow\n               :                                   :                    :  +- CometFilter\n               :                                   :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                    +- BroadcastExchange\n               :                                   :                       +- BroadcastHashJoin\n      
         :                                   :                          :- CometNativeColumnarToRow\n               :                                   :                          :  +- CometHashAggregate\n               :                                   :                          :     +- CometColumnarExchange\n               :                                   :                          :        +- HashAggregate\n               :                                   :                          :           +- Project\n               :                                   :                          :              +- BroadcastHashJoin\n               :                                   :                          :                 :- Project\n               :                                   :                          :                 :  +- BroadcastHashJoin\n               :                                   :                          :                 :     :- Filter\n               :                                   :                          :                 :     :  +- ColumnarToRow\n               :                                   :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :                          :                 :     :           +- SubqueryBroadcast\n               :                                   :                          :                 :     :              +- BroadcastExchange\n               :                                   :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                   :                          :                 :     :                    +- CometProject\n               :                                   :                          :                 :     :                       +- CometFilter\n               :                                   :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :                          :                 :     +- BroadcastExchange\n               :                                   :                          :                 :        +- BroadcastHashJoin\n               :                                   :                          :                 :           :- CometNativeColumnarToRow\n               :                                   :                          :                 :           :  +- CometFilter\n               :                                   :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                          :                 :           +- BroadcastExchange\n               :                                   :                          :                 :              +- Project\n               :                                   :                          :                 :                 +- BroadcastHashJoin\n               :                                   :                          :                 :                    :- Project\n               :                                   :                          :                 :                    :  
+- BroadcastHashJoin\n               :                                   :                          :                 :                    :     :- Filter\n               :                                   :                          :                 :                    :     :  +- ColumnarToRow\n               :                                   :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :                          :                 :                    :     :           +- ReusedSubquery\n               :                                   :                          :                 :                    :     +- BroadcastExchange\n               :                                   :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                   :                          :                 :                    :           +- CometFilter\n               :                                   :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                          :                 :                    +- BroadcastExchange\n               :                                   :                          :                 :                       +- CometNativeColumnarToRow\n               :                                   :                          :                 :                          +- CometProject\n               :                                   :                          :                 :                             +- CometFilter\n               :                                   :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :                          :                 +- BroadcastExchange\n               :                                   :                          :                    +- CometNativeColumnarToRow\n               :                                   :                          :                       +- CometProject\n               :                                   :                          :                          +- CometFilter\n               :                                   :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :                          +- BroadcastExchange\n               :                                   :                             +- Project\n               :                                   :                                +- BroadcastHashJoin\n               :                                   :                                   :- Project\n               :                                   :                                   :  +- BroadcastHashJoin\n               :                                   :                                   :     :- Filter\n               :                                   :                                   :     :  +- ColumnarToRow\n               :                                   :                   
                :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :                                   :     :           +- ReusedSubquery\n               :                                   :                                   :     +- BroadcastExchange\n               :                                   :                                   :        +- CometNativeColumnarToRow\n               :                                   :                                   :           +- CometFilter\n               :                                   :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                                   +- BroadcastExchange\n               :                                   :                                      +- CometNativeColumnarToRow\n               :                                   :                                         +- CometProject\n               :                                   :                                            +- CometFilter\n               :                                   :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   +- BroadcastExchange\n               :                                      +- CometNativeColumnarToRow\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- Filter\n               :                          :  :  +- Subquery\n               :                          :  :     +- HashAggregate\n               :                          :  :        +- CometNativeColumnarToRow\n               :                          :  :           +- CometColumnarExchange\n               :                          :  :              +- HashAggregate\n               :                          :  :                 +- Union\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- ReusedSubquery\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                  
        :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- SubqueryBroadcast\n               :                          :  :                    :     :           +- BroadcastExchange\n               :                          :  :                    :     :              +- CometNativeColumnarToRow\n               :                          :  :                    :     :                 +- CometProject\n               :                          :  :                    :     :                    +- CometFilter\n               :                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    +- Project\n               :                          :  :                       +- BroadcastHashJoin\n               :                          :  :                          :- ColumnarToRow\n               :                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                          :        +- ReusedSubquery\n               :                          :  :                          +- BroadcastExchange\n               :                          :  :                             +- CometNativeColumnarToRow\n               :                          :  :                                +- CometProject\n               :                          :  :                                   +- CometFilter\n               :                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :            
              :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- SubqueryBroadcast\n               :                          :                    :     :  :              +- BroadcastExchange\n               :                          :                    :     :  :                 +- CometNativeColumnarToRow\n               :                          :                    :     :  :                    +- CometProject\n               :                          :                    :     :  :                       +- CometFilter\n               :                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native 
DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :  
   :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    
:     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :             
    :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n         
      :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                         
          +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :- Filter\n               :                          :  :  +- ReusedSubquery\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- ReusedSubquery\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- 
CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :            
        :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :   
                 :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :        
            :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :            
              :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                             
      :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Filter\n               :                             :  +- ReusedSubquery\n               :                             +- HashAggregate\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometColumnarExchange\n               :                                      +- HashAggregate\n               :                                         +- Project\n               :                                            +- BroadcastHashJoin\n               :                                               :- Project\n               :                                               :  +- BroadcastHashJoin\n               :                                               :     :- BroadcastHashJoin\n               :                                               :     :  :- Filter\n               :                                               :     :  :  +- ColumnarToRow\n               :                                               :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :  :           +- ReusedSubquery\n               :                                               :     :  +- BroadcastExchange\n               :                          
                     :     :     +- Project\n               :                                               :     :        +- BroadcastHashJoin\n               :                                               :     :           :- CometNativeColumnarToRow\n               :                                               :     :           :  +- CometFilter\n               :                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :           +- BroadcastExchange\n               :                                               :     :              +- BroadcastHashJoin\n               :                                               :     :                 :- CometNativeColumnarToRow\n               :                                               :     :                 :  +- CometHashAggregate\n               :                                               :     :                 :     +- CometColumnarExchange\n               :                                               :     :                 :        +- HashAggregate\n               :                                               :     :                 :           +- Project\n               :                                               :     :                 :              +- BroadcastHashJoin\n               :                                               :     :                 :                 :- Project\n               :                                               :     :                 :                 :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :     :- Filter\n               :                                               :     :                 :                 :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :     :           +- SubqueryBroadcast\n               :                                               :     :                 :                 :     :              +- BroadcastExchange\n               :                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :     :                    +- CometProject\n               :                                               :     :                 :                 :     :                       +- CometFilter\n               :                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 :     +- BroadcastExchange\n               :                                               :     :                 :                 :        +- BroadcastHashJoin\n               :                                               :     :                 :                 :           :- CometNativeColumnarToRow\n               :                           
                    :     :                 :                 :           :  +- CometFilter\n               :                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :           +- BroadcastExchange\n               :                                               :     :                 :                 :              +- Project\n               :                                               :     :                 :                 :                 +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :- Project\n               :                                               :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :     :- Filter\n               :                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                               :     :                 :                 :                    :     +- BroadcastExchange\n               :                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                    :           +- CometFilter\n               :                                               :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :                    +- BroadcastExchange\n               :                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                          +- CometProject\n               :                                               :     :                 :                 :                             +- CometFilter\n               :                                               :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 +- BroadcastExchange\n               :                                               :     :                 :                    +- CometNativeColumnarToRow\n               :                                               :     :                 :                       +- 
CometProject\n               :                                               :     :                 :                          +- CometFilter\n               :                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 +- BroadcastExchange\n               :                                               :     :                    +- Project\n               :                                               :     :                       +- BroadcastHashJoin\n               :                                               :     :                          :- Project\n               :                                               :     :                          :  +- BroadcastHashJoin\n               :                                               :     :                          :     :- Filter\n               :                                               :     :                          :     :  +- ColumnarToRow\n               :                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                          :     :           +- ReusedSubquery\n               :                                               :     :                          :     +- BroadcastExchange\n               :                                               :     :                          :        +- CometNativeColumnarToRow\n               :                                               :     :                          :           +- CometFilter\n               :                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                          +- BroadcastExchange\n               :                                               :     :                             +- CometNativeColumnarToRow\n               :                                               :     :                                +- CometProject\n               :                                               :     :                                   +- CometFilter\n               :                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     +- BroadcastExchange\n               :                                               :        +- BroadcastHashJoin\n               :                                               :           :- CometNativeColumnarToRow\n               :                                               :           :  +- CometFilter\n               :                                               :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :           +- BroadcastExchange\n               :                                               :              +- Project\n               :                                               :                 +- BroadcastHashJoin\n               :                                               :   
                 :- CometNativeColumnarToRow\n               :                                               :                    :  +- CometFilter\n               :                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                    +- BroadcastExchange\n               :                                               :                       +- BroadcastHashJoin\n               :                                               :                          :- CometNativeColumnarToRow\n               :                                               :                          :  +- CometHashAggregate\n               :                                               :                          :     +- CometColumnarExchange\n               :                                               :                          :        +- HashAggregate\n               :                                               :                          :           +- Project\n               :                                               :                          :              +- BroadcastHashJoin\n               :                                               :                          :                 :- Project\n               :                                               :                          :                 :  +- BroadcastHashJoin\n               :                                               :                          :                 :     :- Filter\n               :                                               :                          :                 :     :  +- ColumnarToRow\n               :                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :     :           +- SubqueryBroadcast\n               :                                               :                          :                 :     :              +- BroadcastExchange\n               :                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :                          :                 :     :                    +- CometProject\n               :                                               :                          :                 :     :                       +- CometFilter\n               :                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 :     +- BroadcastExchange\n               :                                               :                          :                 :        +- BroadcastHashJoin\n               :                                               :                          :                 :           :- CometNativeColumnarToRow\n               :                                               :                          :                 :           :  +- CometFilter\n               :                         
                      :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :           +- BroadcastExchange\n               :                                               :                          :                 :              +- Project\n               :                                               :                          :                 :                 +- BroadcastHashJoin\n               :                                               :                          :                 :                    :- Project\n               :                                               :                          :                 :                    :  +- BroadcastHashJoin\n               :                                               :                          :                 :                    :     :- Filter\n               :                                               :                          :                 :                    :     :  +- ColumnarToRow\n               :                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :                    :     :           +- ReusedSubquery\n               :                                               :                          :                 :                    :     +- BroadcastExchange\n               :                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :                          :                 :                    :           +- CometFilter\n               :                                               :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :                    +- BroadcastExchange\n               :                                               :                          :                 :                       +- CometNativeColumnarToRow\n               :                                               :                          :                 :                          +- CometProject\n               :                                               :                          :                 :                             +- CometFilter\n               :                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 +- BroadcastExchange\n               :                                               :                          :                    +- CometNativeColumnarToRow\n               :                                               :                          :                       +- CometProject\n               :                                         
      :                          :                          +- CometFilter\n               :                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          +- BroadcastExchange\n               :                                               :                             +- Project\n               :                                               :                                +- BroadcastHashJoin\n               :                                               :                                   :- Project\n               :                                               :                                   :  +- BroadcastHashJoin\n               :                                               :                                   :     :- Filter\n               :                                               :                                   :     :  +- ColumnarToRow\n               :                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                                   :     :           +- ReusedSubquery\n               :                                               :                                   :     +- BroadcastExchange\n               :                                               :                                   :        +- CometNativeColumnarToRow\n               :                                               :                                   :           +- CometFilter\n               :                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                                   +- BroadcastExchange\n               :                                               :                                      +- CometNativeColumnarToRow\n               :                                               :                                         +- CometProject\n               :                                               :                                            +- CometFilter\n               :                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               +- BroadcastExchange\n               :                                                  +- CometNativeColumnarToRow\n               :                                                     +- CometProject\n               :                                                        +- CometFilter\n               :                                                           +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n             
  :                       +- Union\n               :                          :- Filter\n               :                          :  :  +- Subquery\n               :                          :  :     +- HashAggregate\n               :                          :  :        +- CometNativeColumnarToRow\n               :                          :  :           +- CometColumnarExchange\n               :                          :  :              +- HashAggregate\n               :                          :  :                 +- Union\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- ReusedSubquery\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- SubqueryBroadcast\n               :                          :  :                    :     :           +- BroadcastExchange\n               :                          :  :                    :     :              +- CometNativeColumnarToRow\n               :                          :  :                    :     :                 +- CometProject\n               :                          :  :                    :     :                    +- CometFilter\n               :                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    +- Project\n               :                          :  :                       +- BroadcastHashJoin\n               :                          :  :                
          :- ColumnarToRow\n               :                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                          :        +- ReusedSubquery\n               :                          :  :                          +- BroadcastExchange\n               :                          :  :                             +- CometNativeColumnarToRow\n               :                          :  :                                +- CometProject\n               :                          :  :                                   +- CometFilter\n               :                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- SubqueryBroadcast\n               :                          :                    :     :  :              +- BroadcastExchange\n               :                          :                    :     :  :                 +- CometNativeColumnarToRow\n               :                          :                    :     :  :                    +- CometProject\n               :                          :                    :     :  :                       +- CometFilter\n               :                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                  
  :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :          
       :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               
:                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :         
                 :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    
:                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :    
                               :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :- Filter\n               :                          :  :  +- ReusedSubquery\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not 
support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- ReusedSubquery\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :      
              :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 
+- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- 
BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :       
             :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :      
              :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Filter\n               :                             :  +- 
ReusedSubquery\n               :                             +- HashAggregate\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometColumnarExchange\n               :                                      +- HashAggregate\n               :                                         +- Project\n               :                                            +- BroadcastHashJoin\n               :                                               :- Project\n               :                                               :  +- BroadcastHashJoin\n               :                                               :     :- BroadcastHashJoin\n               :                                               :     :  :- Filter\n               :                                               :     :  :  +- ColumnarToRow\n               :                                               :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :  :           +- ReusedSubquery\n               :                                               :     :  +- BroadcastExchange\n               :                                               :     :     +- Project\n               :                                               :     :        +- BroadcastHashJoin\n               :                                               :     :           :- CometNativeColumnarToRow\n               :                                               :     :           :  +- CometFilter\n               :                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :           +- BroadcastExchange\n               :                                               :     :              +- BroadcastHashJoin\n               :                                               :     :                 :- CometNativeColumnarToRow\n               :                                               :     :                 :  +- CometHashAggregate\n               :                                               :     :                 :     +- CometColumnarExchange\n               :                                               :     :                 :        +- HashAggregate\n               :                                               :     :                 :           +- Project\n               :                                               :     :                 :              +- BroadcastHashJoin\n               :                                               :     :                 :                 :- Project\n               :                                               :     :                 :                 :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :     :- Filter\n               :                                               :     :                 :                 :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :       
          :                 :     :           +- SubqueryBroadcast\n               :                                               :     :                 :                 :     :              +- BroadcastExchange\n               :                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :     :                    +- CometProject\n               :                                               :     :                 :                 :     :                       +- CometFilter\n               :                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 :     +- BroadcastExchange\n               :                                               :     :                 :                 :        +- BroadcastHashJoin\n               :                                               :     :                 :                 :           :- CometNativeColumnarToRow\n               :                                               :     :                 :                 :           :  +- CometFilter\n               :                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :           +- BroadcastExchange\n               :                                               :     :                 :                 :              +- Project\n               :                                               :     :                 :                 :                 +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :- Project\n               :                                               :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :     :- Filter\n               :                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                               :     :                 :                 :                    :     +- BroadcastExchange\n               :                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                    :           +- CometFilter\n               :                                   
            :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :                    +- BroadcastExchange\n               :                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                          +- CometProject\n               :                                               :     :                 :                 :                             +- CometFilter\n               :                                               :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 +- BroadcastExchange\n               :                                               :     :                 :                    +- CometNativeColumnarToRow\n               :                                               :     :                 :                       +- CometProject\n               :                                               :     :                 :                          +- CometFilter\n               :                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 +- BroadcastExchange\n               :                                               :     :                    +- Project\n               :                                               :     :                       +- BroadcastHashJoin\n               :                                               :     :                          :- Project\n               :                                               :     :                          :  +- BroadcastHashJoin\n               :                                               :     :                          :     :- Filter\n               :                                               :     :                          :     :  +- ColumnarToRow\n               :                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                          :     :           +- ReusedSubquery\n               :                                               :     :                          :     +- BroadcastExchange\n               :                                               :     :                          :        +- CometNativeColumnarToRow\n               :                                               :     :                          :           +- CometFilter\n               :                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                          +- BroadcastExchange\n               :                                               :     :               
              +- CometNativeColumnarToRow\n               :                                               :     :                                +- CometProject\n               :                                               :     :                                   +- CometFilter\n               :                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     +- BroadcastExchange\n               :                                               :        +- BroadcastHashJoin\n               :                                               :           :- CometNativeColumnarToRow\n               :                                               :           :  +- CometFilter\n               :                                               :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :           +- BroadcastExchange\n               :                                               :              +- Project\n               :                                               :                 +- BroadcastHashJoin\n               :                                               :                    :- CometNativeColumnarToRow\n               :                                               :                    :  +- CometFilter\n               :                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                    +- BroadcastExchange\n               :                                               :                       +- BroadcastHashJoin\n               :                                               :                          :- CometNativeColumnarToRow\n               :                                               :                          :  +- CometHashAggregate\n               :                                               :                          :     +- CometColumnarExchange\n               :                                               :                          :        +- HashAggregate\n               :                                               :                          :           +- Project\n               :                                               :                          :              +- BroadcastHashJoin\n               :                                               :                          :                 :- Project\n               :                                               :                          :                 :  +- BroadcastHashJoin\n               :                                               :                          :                 :     :- Filter\n               :                                               :                          :                 :     :  +- ColumnarToRow\n               :                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :     :           +- SubqueryBroadcast\n               :                                               :                          :     
            :     :              +- BroadcastExchange\n               :                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :                          :                 :     :                    +- CometProject\n               :                                               :                          :                 :     :                       +- CometFilter\n               :                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 :     +- BroadcastExchange\n               :                                               :                          :                 :        +- BroadcastHashJoin\n               :                                               :                          :                 :           :- CometNativeColumnarToRow\n               :                                               :                          :                 :           :  +- CometFilter\n               :                                               :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :           +- BroadcastExchange\n               :                                               :                          :                 :              +- Project\n               :                                               :                          :                 :                 +- BroadcastHashJoin\n               :                                               :                          :                 :                    :- Project\n               :                                               :                          :                 :                    :  +- BroadcastHashJoin\n               :                                               :                          :                 :                    :     :- Filter\n               :                                               :                          :                 :                    :     :  +- ColumnarToRow\n               :                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :                    :     :           +- ReusedSubquery\n               :                                               :                          :                 :                    :     +- BroadcastExchange\n               :                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :                          :                 :                    :           +- CometFilter\n               :                                               :                          :                 :                    :              +- 
CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :                    +- BroadcastExchange\n               :                                               :                          :                 :                       +- CometNativeColumnarToRow\n               :                                               :                          :                 :                          +- CometProject\n               :                                               :                          :                 :                             +- CometFilter\n               :                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 +- BroadcastExchange\n               :                                               :                          :                    +- CometNativeColumnarToRow\n               :                                               :                          :                       +- CometProject\n               :                                               :                          :                          +- CometFilter\n               :                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          +- BroadcastExchange\n               :                                               :                             +- Project\n               :                                               :                                +- BroadcastHashJoin\n               :                                               :                                   :- Project\n               :                                               :                                   :  +- BroadcastHashJoin\n               :                                               :                                   :     :- Filter\n               :                                               :                                   :     :  +- ColumnarToRow\n               :                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                                   :     :           +- ReusedSubquery\n               :                                               :                                   :     +- BroadcastExchange\n               :                                               :                                   :        +- CometNativeColumnarToRow\n               :                                               :                                   :           +- CometFilter\n               :                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                                   +- BroadcastExchange\n               :                                               :                                      +- 
CometNativeColumnarToRow\n               :                                               :                                         +- CometProject\n               :                                               :                                            +- CometFilter\n               :                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               +- BroadcastExchange\n               :                                                  +- CometNativeColumnarToRow\n               :                                                     +- CometProject\n               :                                                        +- CometFilter\n               :                                                           +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- Filter\n               :                          :  :  +- Subquery\n               :                          :  :     +- HashAggregate\n               :                          :  :        +- CometNativeColumnarToRow\n               :                          :  :           +- CometColumnarExchange\n               :                          :  :              +- HashAggregate\n               :                          :  :                 +- Union\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- ReusedSubquery\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- SubqueryBroadcast\n               :            
              :  :                    :     :           +- BroadcastExchange\n               :                          :  :                    :     :              +- CometNativeColumnarToRow\n               :                          :  :                    :     :                 +- CometProject\n               :                          :  :                    :     :                    +- CometFilter\n               :                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    +- Project\n               :                          :  :                       +- BroadcastHashJoin\n               :                          :  :                          :- ColumnarToRow\n               :                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                          :        +- ReusedSubquery\n               :                          :  :                          +- BroadcastExchange\n               :                          :  :                             +- CometNativeColumnarToRow\n               :                          :  :                                +- CometProject\n               :                          :  :                                   +- CometFilter\n               :                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- SubqueryBroadcast\n               :                          :                    :     :  :              +- BroadcastExchange\n               :                          :                    :     :  
:                 +- CometNativeColumnarToRow\n               :                          :                    :     :  :                    +- CometProject\n               :                          :                    :     :  :                       +- CometFilter\n               :                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :    
             :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               
:                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :           
               :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          : 
                   :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :       
          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                  
           +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :- Filter\n               :                          :  :  +- ReusedSubquery\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- ReusedSubquery\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :          
          :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :            
     :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :   
           +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support 
subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                 
         :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- 
CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Filter\n               :                             :  +- ReusedSubquery\n               :                             +- HashAggregate\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometColumnarExchange\n               :                                      +- HashAggregate\n               :                                         +- Project\n               :                                            +- BroadcastHashJoin\n               :                                               :- Project\n               :                                               :  +- BroadcastHashJoin\n               :                                               :     :- BroadcastHashJoin\n               :                                               :     :  :- Filter\n               :                                               :     :  :  +- ColumnarToRow\n               :                                               :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :  :           +- ReusedSubquery\n               :                                               :     :  +- BroadcastExchange\n               :                                               :     :     +- Project\n               :                                               :     :        +- BroadcastHashJoin\n               :                                               :     :           :- CometNativeColumnarToRow\n               :                                               :     :           :  +- CometFilter\n               :                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :           +- BroadcastExchange\n               :                                               :     :              +- BroadcastHashJoin\n               :                                               :     :                 :- CometNativeColumnarToRow\n               :                                               :     :                 :  
+- CometHashAggregate\n               :                                               :     :                 :     +- CometColumnarExchange\n               :                                               :     :                 :        +- HashAggregate\n               :                                               :     :                 :           +- Project\n               :                                               :     :                 :              +- BroadcastHashJoin\n               :                                               :     :                 :                 :- Project\n               :                                               :     :                 :                 :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :     :- Filter\n               :                                               :     :                 :                 :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :     :           +- SubqueryBroadcast\n               :                                               :     :                 :                 :     :              +- BroadcastExchange\n               :                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :     :                    +- CometProject\n               :                                               :     :                 :                 :     :                       +- CometFilter\n               :                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 :     +- BroadcastExchange\n               :                                               :     :                 :                 :        +- BroadcastHashJoin\n               :                                               :     :                 :                 :           :- CometNativeColumnarToRow\n               :                                               :     :                 :                 :           :  +- CometFilter\n               :                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :           +- BroadcastExchange\n               :                                               :     :                 :                 :              +- Project\n               :                                               :     :                 :                 :                 +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :- Project\n               :                                               :     :                 
:                 :                    :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :     :- Filter\n               :                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                               :     :                 :                 :                    :     +- BroadcastExchange\n               :                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                    :           +- CometFilter\n               :                                               :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :                    +- BroadcastExchange\n               :                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                          +- CometProject\n               :                                               :     :                 :                 :                             +- CometFilter\n               :                                               :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 +- BroadcastExchange\n               :                                               :     :                 :                    +- CometNativeColumnarToRow\n               :                                               :     :                 :                       +- CometProject\n               :                                               :     :                 :                          +- CometFilter\n               :                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 +- BroadcastExchange\n               :                                               :     :                    +- Project\n               :                                               :     :                       +- BroadcastHashJoin\n               :                                               :     :                          :- Project\n               :                                               :     :                          :  +- BroadcastHashJoin\n               :                        
                       :     :                          :     :- Filter\n               :                                               :     :                          :     :  +- ColumnarToRow\n               :                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                          :     :           +- ReusedSubquery\n               :                                               :     :                          :     +- BroadcastExchange\n               :                                               :     :                          :        +- CometNativeColumnarToRow\n               :                                               :     :                          :           +- CometFilter\n               :                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                          +- BroadcastExchange\n               :                                               :     :                             +- CometNativeColumnarToRow\n               :                                               :     :                                +- CometProject\n               :                                               :     :                                   +- CometFilter\n               :                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     +- BroadcastExchange\n               :                                               :        +- BroadcastHashJoin\n               :                                               :           :- CometNativeColumnarToRow\n               :                                               :           :  +- CometFilter\n               :                                               :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :           +- BroadcastExchange\n               :                                               :              +- Project\n               :                                               :                 +- BroadcastHashJoin\n               :                                               :                    :- CometNativeColumnarToRow\n               :                                               :                    :  +- CometFilter\n               :                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                    +- BroadcastExchange\n               :                                               :                       +- BroadcastHashJoin\n               :                                               :                          :- CometNativeColumnarToRow\n               :                                               :                          :  +- CometHashAggregate\n               :                                               :                          :     +- CometColumnarExchange\n               :                                 
              :                          :        +- HashAggregate\n               :                                               :                          :           +- Project\n               :                                               :                          :              +- BroadcastHashJoin\n               :                                               :                          :                 :- Project\n               :                                               :                          :                 :  +- BroadcastHashJoin\n               :                                               :                          :                 :     :- Filter\n               :                                               :                          :                 :     :  +- ColumnarToRow\n               :                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :     :           +- SubqueryBroadcast\n               :                                               :                          :                 :     :              +- BroadcastExchange\n               :                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :                          :                 :     :                    +- CometProject\n               :                                               :                          :                 :     :                       +- CometFilter\n               :                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 :     +- BroadcastExchange\n               :                                               :                          :                 :        +- BroadcastHashJoin\n               :                                               :                          :                 :           :- CometNativeColumnarToRow\n               :                                               :                          :                 :           :  +- CometFilter\n               :                                               :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :           +- BroadcastExchange\n               :                                               :                          :                 :              +- Project\n               :                                               :                          :                 :                 +- BroadcastHashJoin\n               :                                               :                          :                 :                    :- Project\n               :                                               :                          :                 :                    :  +- BroadcastHashJoin\n               :                                       
        :                          :                 :                    :     :- Filter\n               :                                               :                          :                 :                    :     :  +- ColumnarToRow\n               :                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :                    :     :           +- ReusedSubquery\n               :                                               :                          :                 :                    :     +- BroadcastExchange\n               :                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :                          :                 :                    :           +- CometFilter\n               :                                               :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :                    +- BroadcastExchange\n               :                                               :                          :                 :                       +- CometNativeColumnarToRow\n               :                                               :                          :                 :                          +- CometProject\n               :                                               :                          :                 :                             +- CometFilter\n               :                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 +- BroadcastExchange\n               :                                               :                          :                    +- CometNativeColumnarToRow\n               :                                               :                          :                       +- CometProject\n               :                                               :                          :                          +- CometFilter\n               :                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          +- BroadcastExchange\n               :                                               :                             +- Project\n               :                                               :                                +- BroadcastHashJoin\n               :                                               :                                   :- Project\n               :                                               :                                   :  +- BroadcastHashJoin\n               :                                               :                          
         :     :- Filter\n               :                                               :                                   :     :  +- ColumnarToRow\n               :                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                                   :     :           +- ReusedSubquery\n               :                                               :                                   :     +- BroadcastExchange\n               :                                               :                                   :        +- CometNativeColumnarToRow\n               :                                               :                                   :           +- CometFilter\n               :                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                                   +- BroadcastExchange\n               :                                               :                                      +- CometNativeColumnarToRow\n               :                                               :                                         +- CometProject\n               :                                               :                                            +- CometFilter\n               :                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               +- BroadcastExchange\n               :                                                  +- CometNativeColumnarToRow\n               :                                                     +- CometProject\n               :                                                        +- CometFilter\n               :                                                           +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- Filter\n                                          :  :  +- Subquery\n                                          :  :     +- HashAggregate\n                                          :  :        +- CometNativeColumnarToRow\n                                          :  :           +- CometColumnarExchange\n                                          :  :              +- HashAggregate\n                                          :  :                 +- Union\n                                          :  :                    :- Project\n                                          :  :                    :  +- BroadcastHashJoin\n                                          :  :                    :     :- ColumnarToRow\n                                          :  :                    :     :  +-  Scan parquet 
spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :  :                    :     :        +- ReusedSubquery\n                                          :  :                    :     +- BroadcastExchange\n                                          :  :                    :        +- CometNativeColumnarToRow\n                                          :  :                    :           +- CometProject\n                                          :  :                    :              +- CometFilter\n                                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  :                    :- Project\n                                          :  :                    :  +- BroadcastHashJoin\n                                          :  :                    :     :- ColumnarToRow\n                                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :  :                    :     :        +- SubqueryBroadcast\n                                          :  :                    :     :           +- BroadcastExchange\n                                          :  :                    :     :              +- CometNativeColumnarToRow\n                                          :  :                    :     :                 +- CometProject\n                                          :  :                    :     :                    +- CometFilter\n                                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  :                    :     +- BroadcastExchange\n                                          :  :                    :        +- CometNativeColumnarToRow\n                                          :  :                    :           +- CometProject\n                                          :  :                    :              +- CometFilter\n                                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  :                    +- Project\n                                          :  :                       +- BroadcastHashJoin\n                                          :  :                          :- ColumnarToRow\n                                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :  :                          :        +- ReusedSubquery\n                                          :  :                          +- BroadcastExchange\n                                          :  :                             +- CometNativeColumnarToRow\n                                          :  :                                +- CometProject\n                                          :  :                                   +- CometFilter\n                                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                         
                 :  +- HashAggregate\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometColumnarExchange\n                                          :           +- HashAggregate\n                                          :              +- Project\n                                          :                 +- BroadcastHashJoin\n                                          :                    :- Project\n                                          :                    :  +- BroadcastHashJoin\n                                          :                    :     :- BroadcastHashJoin\n                                          :                    :     :  :- Filter\n                                          :                    :     :  :  +- ColumnarToRow\n                                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :  :           +- SubqueryBroadcast\n                                          :                    :     :  :              +- BroadcastExchange\n                                          :                    :     :  :                 +- CometNativeColumnarToRow\n                                          :                    :     :  :                    +- CometProject\n                                          :                    :     :  :                       +- CometFilter\n                                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :  +- BroadcastExchange\n                                          :                    :     :     +- Project\n                                          :                    :     :        +- BroadcastHashJoin\n                                          :                    :     :           :- CometNativeColumnarToRow\n                                          :                    :     :           :  +- CometFilter\n                                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :           +- BroadcastExchange\n                                          :                    :     :              +- BroadcastHashJoin\n                                          :                    :     :                 :- CometNativeColumnarToRow\n                                          :                    :     :                 :  +- CometHashAggregate\n                                          :                    :     :                 :     +- CometColumnarExchange\n                                          :                    :     :                 :        +- HashAggregate\n                                          :                    :     :                 :           +- Project\n                                          :                    :     :                 :              +- BroadcastHashJoin\n                                          :                    :     :                 :                 :- Project\n                                          :                    :     :                 :                 :  +- BroadcastHashJoin\n    
                                      :                    :     :                 :                 :     :- Filter\n                                          :                    :     :                 :                 :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n                                          :                    :     :                 :                 :     :              +- BroadcastExchange\n                                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :     :                    +- CometProject\n                                          :                    :     :                 :                 :     :                       +- CometFilter\n                                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 :     +- BroadcastExchange\n                                          :                    :     :                 :                 :        +- BroadcastHashJoin\n                                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :           :  +- CometFilter\n                                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :           +- BroadcastExchange\n                                          :                    :     :                 :                 :              +- Project\n                                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :- Project\n                                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :     :- Filter\n                                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :        
            :     :           +- ReusedSubquery\n                                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n                                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                    :           +- CometFilter\n                                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :                    +- BroadcastExchange\n                                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                          +- CometProject\n                                          :                    :     :                 :                 :                             +- CometFilter\n                                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 +- BroadcastExchange\n                                          :                    :     :                 :                    +- CometNativeColumnarToRow\n                                          :                    :     :                 :                       +- CometProject\n                                          :                    :     :                 :                          +- CometFilter\n                                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 +- BroadcastExchange\n                                          :                    :     :                    +- Project\n                                          :                    :     :                       +- BroadcastHashJoin\n                                          :                    :     :                          :- Project\n                                          :                    :     :                          :  +- BroadcastHashJoin\n                                          :                    :     :                          :     :- Filter\n                                          :                    :     :                          :     :  +- ColumnarToRow\n                                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                          :     :           +- ReusedSubquery\n                                          :                    :     :                          :     +- BroadcastExchange\n                                          :            
        :     :                          :        +- CometNativeColumnarToRow\n                                          :                    :     :                          :           +- CometFilter\n                                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                          +- BroadcastExchange\n                                          :                    :     :                             +- CometNativeColumnarToRow\n                                          :                    :     :                                +- CometProject\n                                          :                    :     :                                   +- CometFilter\n                                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     +- BroadcastExchange\n                                          :                    :        +- BroadcastHashJoin\n                                          :                    :           :- CometNativeColumnarToRow\n                                          :                    :           :  +- CometFilter\n                                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :           +- BroadcastExchange\n                                          :                    :              +- Project\n                                          :                    :                 +- BroadcastHashJoin\n                                          :                    :                    :- CometNativeColumnarToRow\n                                          :                    :                    :  +- CometFilter\n                                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                    +- BroadcastExchange\n                                          :                    :                       +- BroadcastHashJoin\n                                          :                    :                          :- CometNativeColumnarToRow\n                                          :                    :                          :  +- CometHashAggregate\n                                          :                    :                          :     +- CometColumnarExchange\n                                          :                    :                          :        +- HashAggregate\n                                          :                    :                          :           +- Project\n                                          :                    :                          :              +- BroadcastHashJoin\n                                          :                    :                          :                 :- Project\n                                          :                    :                          :                 :  +- BroadcastHashJoin\n                                          :                    :                          :                 :     :- Filter\n                                          :            
        :                          :                 :     :  +- ColumnarToRow\n                                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :     :           +- SubqueryBroadcast\n                                          :                    :                          :                 :     :              +- BroadcastExchange\n                                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :                          :                 :     :                    +- CometProject\n                                          :                    :                          :                 :     :                       +- CometFilter\n                                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 :     +- BroadcastExchange\n                                          :                    :                          :                 :        +- BroadcastHashJoin\n                                          :                    :                          :                 :           :- CometNativeColumnarToRow\n                                          :                    :                          :                 :           :  +- CometFilter\n                                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :           +- BroadcastExchange\n                                          :                    :                          :                 :              +- Project\n                                          :                    :                          :                 :                 +- BroadcastHashJoin\n                                          :                    :                          :                 :                    :- Project\n                                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n                                          :                    :                          :                 :                    :     :- Filter\n                                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n                                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n                                          :               
     :                          :                 :                    :     +- BroadcastExchange\n                                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                    :           +- CometFilter\n                                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :                    +- BroadcastExchange\n                                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                          +- CometProject\n                                          :                    :                          :                 :                             +- CometFilter\n                                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 +- BroadcastExchange\n                                          :                    :                          :                    +- CometNativeColumnarToRow\n                                          :                    :                          :                       +- CometProject\n                                          :                    :                          :                          +- CometFilter\n                                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          +- BroadcastExchange\n                                          :                    :                             +- Project\n                                          :                    :                                +- BroadcastHashJoin\n                                          :                    :                                   :- Project\n                                          :                    :                                   :  +- BroadcastHashJoin\n                                          :                    :                                   :     :- Filter\n                                          :                    :                                   :     :  +- ColumnarToRow\n                                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                                   :     :           +- ReusedSubquery\n                                          :                    :                                   :     +- BroadcastExchange\n                                          :                    :                          
         :        +- CometNativeColumnarToRow\n                                          :                    :                                   :           +- CometFilter\n                                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                                   +- BroadcastExchange\n                                          :                    :                                      +- CometNativeColumnarToRow\n                                          :                    :                                         +- CometProject\n                                          :                    :                                            +- CometFilter\n                                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    +- BroadcastExchange\n                                          :                       +- CometNativeColumnarToRow\n                                          :                          +- CometProject\n                                          :                             +- CometFilter\n                                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :- Filter\n                                          :  :  +- ReusedSubquery\n                                          :  +- HashAggregate\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometColumnarExchange\n                                          :           +- HashAggregate\n                                          :              +- Project\n                                          :                 +- BroadcastHashJoin\n                                          :                    :- Project\n                                          :                    :  +- BroadcastHashJoin\n                                          :                    :     :- BroadcastHashJoin\n                                          :                    :     :  :- Filter\n                                          :                    :     :  :  +- ColumnarToRow\n                                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :  :           +- ReusedSubquery\n                                          :                    :     :  +- BroadcastExchange\n                                          :                    :     :     +- Project\n                                          :                    :     :        +- BroadcastHashJoin\n                                          :                    :     :           :- CometNativeColumnarToRow\n                                          :                    :     :           :  +- CometFilter\n                                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :           +- BroadcastExchange\n                   
                       :                    :     :              +- BroadcastHashJoin\n                                          :                    :     :                 :- CometNativeColumnarToRow\n                                          :                    :     :                 :  +- CometHashAggregate\n                                          :                    :     :                 :     +- CometColumnarExchange\n                                          :                    :     :                 :        +- HashAggregate\n                                          :                    :     :                 :           +- Project\n                                          :                    :     :                 :              +- BroadcastHashJoin\n                                          :                    :     :                 :                 :- Project\n                                          :                    :     :                 :                 :  +- BroadcastHashJoin\n                                          :                    :     :                 :                 :     :- Filter\n                                          :                    :     :                 :                 :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n                                          :                    :     :                 :                 :     :              +- BroadcastExchange\n                                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :     :                    +- CometProject\n                                          :                    :     :                 :                 :     :                       +- CometFilter\n                                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 :     +- BroadcastExchange\n                                          :                    :     :                 :                 :        +- BroadcastHashJoin\n                                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :           :  +- CometFilter\n                                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :           +- BroadcastExchange\n                                          :                    :     :                 :                 :              +- Project\n                                          :                    :     :       
          :                 :                 +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :- Project\n                                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :     :- Filter\n                                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n                                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n                                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                    :           +- CometFilter\n                                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :                    +- BroadcastExchange\n                                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                          +- CometProject\n                                          :                    :     :                 :                 :                             +- CometFilter\n                                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 +- BroadcastExchange\n                                          :                    :     :                 :                    +- CometNativeColumnarToRow\n                                          :                    :     :                 :                       +- CometProject\n                                          :                    :     :                 :                          +- CometFilter\n                                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 +- BroadcastExchange\n                                          :                    :     :                    +- Project\n                                          :                    :     :                     
  +- BroadcastHashJoin\n                                          :                    :     :                          :- Project\n                                          :                    :     :                          :  +- BroadcastHashJoin\n                                          :                    :     :                          :     :- Filter\n                                          :                    :     :                          :     :  +- ColumnarToRow\n                                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                          :     :           +- ReusedSubquery\n                                          :                    :     :                          :     +- BroadcastExchange\n                                          :                    :     :                          :        +- CometNativeColumnarToRow\n                                          :                    :     :                          :           +- CometFilter\n                                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                          +- BroadcastExchange\n                                          :                    :     :                             +- CometNativeColumnarToRow\n                                          :                    :     :                                +- CometProject\n                                          :                    :     :                                   +- CometFilter\n                                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     +- BroadcastExchange\n                                          :                    :        +- BroadcastHashJoin\n                                          :                    :           :- CometNativeColumnarToRow\n                                          :                    :           :  +- CometFilter\n                                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :           +- BroadcastExchange\n                                          :                    :              +- Project\n                                          :                    :                 +- BroadcastHashJoin\n                                          :                    :                    :- CometNativeColumnarToRow\n                                          :                    :                    :  +- CometFilter\n                                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                    +- BroadcastExchange\n                                          :                    :                       +- BroadcastHashJoin\n                                          :                    :                          :- 
CometNativeColumnarToRow\n                                          :                    :                          :  +- CometHashAggregate\n                                          :                    :                          :     +- CometColumnarExchange\n                                          :                    :                          :        +- HashAggregate\n                                          :                    :                          :           +- Project\n                                          :                    :                          :              +- BroadcastHashJoin\n                                          :                    :                          :                 :- Project\n                                          :                    :                          :                 :  +- BroadcastHashJoin\n                                          :                    :                          :                 :     :- Filter\n                                          :                    :                          :                 :     :  +- ColumnarToRow\n                                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :     :           +- SubqueryBroadcast\n                                          :                    :                          :                 :     :              +- BroadcastExchange\n                                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :                          :                 :     :                    +- CometProject\n                                          :                    :                          :                 :     :                       +- CometFilter\n                                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 :     +- BroadcastExchange\n                                          :                    :                          :                 :        +- BroadcastHashJoin\n                                          :                    :                          :                 :           :- CometNativeColumnarToRow\n                                          :                    :                          :                 :           :  +- CometFilter\n                                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :           +- BroadcastExchange\n                                          :                    :                          :                 :              +- Project\n                                          :                    :                          :                 :                 +- BroadcastHashJoin\n                                     
     :                    :                          :                 :                    :- Project\n                                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n                                          :                    :                          :                 :                    :     :- Filter\n                                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n                                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n                                          :                    :                          :                 :                    :     +- BroadcastExchange\n                                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                    :           +- CometFilter\n                                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :                    +- BroadcastExchange\n                                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                          +- CometProject\n                                          :                    :                          :                 :                             +- CometFilter\n                                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 +- BroadcastExchange\n                                          :                    :                          :                    +- CometNativeColumnarToRow\n                                          :                    :                          :                       +- CometProject\n                                          :                    :                          :                          +- CometFilter\n                                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          +- BroadcastExchange\n                                          :                    :                             +- Project\n                                          :                    :                                +- BroadcastHashJoin\n            
                              :                    :                                   :- Project\n                                          :                    :                                   :  +- BroadcastHashJoin\n                                          :                    :                                   :     :- Filter\n                                          :                    :                                   :     :  +- ColumnarToRow\n                                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                                   :     :           +- ReusedSubquery\n                                          :                    :                                   :     +- BroadcastExchange\n                                          :                    :                                   :        +- CometNativeColumnarToRow\n                                          :                    :                                   :           +- CometFilter\n                                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                                   +- BroadcastExchange\n                                          :                    :                                      +- CometNativeColumnarToRow\n                                          :                    :                                         +- CometProject\n                                          :                    :                                            +- CometFilter\n                                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    +- BroadcastExchange\n                                          :                       +- CometNativeColumnarToRow\n                                          :                          +- CometProject\n                                          :                             +- CometFilter\n                                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- Filter\n                                             :  +- ReusedSubquery\n                                             +- HashAggregate\n                                                +- CometNativeColumnarToRow\n                                                   +- CometColumnarExchange\n                                                      +- HashAggregate\n                                                         +- Project\n                                                            +- BroadcastHashJoin\n                                                               :- Project\n                                                               :  +- BroadcastHashJoin\n                                                               :     :- BroadcastHashJoin\n                                                               :     :  :- Filter\n                                                               :     :  :  +- ColumnarToRow\n           
                                                    :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :  :           +- ReusedSubquery\n                                                               :     :  +- BroadcastExchange\n                                                               :     :     +- Project\n                                                               :     :        +- BroadcastHashJoin\n                                                               :     :           :- CometNativeColumnarToRow\n                                                               :     :           :  +- CometFilter\n                                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :           +- BroadcastExchange\n                                                               :     :              +- BroadcastHashJoin\n                                                               :     :                 :- CometNativeColumnarToRow\n                                                               :     :                 :  +- CometHashAggregate\n                                                               :     :                 :     +- CometColumnarExchange\n                                                               :     :                 :        +- HashAggregate\n                                                               :     :                 :           +- Project\n                                                               :     :                 :              +- BroadcastHashJoin\n                                                               :     :                 :                 :- Project\n                                                               :     :                 :                 :  +- BroadcastHashJoin\n                                                               :     :                 :                 :     :- Filter\n                                                               :     :                 :                 :     :  +- ColumnarToRow\n                                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :                 :                 :     :           +- SubqueryBroadcast\n                                                               :     :                 :                 :     :              +- BroadcastExchange\n                                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                                               :     :                 :                 :     :                    +- CometProject\n                                                               :     :                 :                 :     :                       +- CometFilter\n                                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                          
                                     :     :                 :                 :     +- BroadcastExchange\n                                                               :     :                 :                 :        +- BroadcastHashJoin\n                                                               :     :                 :                 :           :- CometNativeColumnarToRow\n                                                               :     :                 :                 :           :  +- CometFilter\n                                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :                 :                 :           +- BroadcastExchange\n                                                               :     :                 :                 :              +- Project\n                                                               :     :                 :                 :                 +- BroadcastHashJoin\n                                                               :     :                 :                 :                    :- Project\n                                                               :     :                 :                 :                    :  +- BroadcastHashJoin\n                                                               :     :                 :                 :                    :     :- Filter\n                                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n                                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n                                                               :     :                 :                 :                    :     +- BroadcastExchange\n                                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                                               :     :                 :                 :                    :           +- CometFilter\n                                                               :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :                 :                 :                    +- BroadcastExchange\n                                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n                                                               :     :                 :                 :                          +- CometProject\n                                                               :     :                 :                 :                             +- CometFilter\n                                                               :     :                 :                 :                                +- 
CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :     :                 :                 +- BroadcastExchange\n                                                               :     :                 :                    +- CometNativeColumnarToRow\n                                                               :     :                 :                       +- CometProject\n                                                               :     :                 :                          +- CometFilter\n                                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :     :                 +- BroadcastExchange\n                                                               :     :                    +- Project\n                                                               :     :                       +- BroadcastHashJoin\n                                                               :     :                          :- Project\n                                                               :     :                          :  +- BroadcastHashJoin\n                                                               :     :                          :     :- Filter\n                                                               :     :                          :     :  +- ColumnarToRow\n                                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :                          :     :           +- ReusedSubquery\n                                                               :     :                          :     +- BroadcastExchange\n                                                               :     :                          :        +- CometNativeColumnarToRow\n                                                               :     :                          :           +- CometFilter\n                                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :                          +- BroadcastExchange\n                                                               :     :                             +- CometNativeColumnarToRow\n                                                               :     :                                +- CometProject\n                                                               :     :                                   +- CometFilter\n                                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :     +- BroadcastExchange\n                                                               :        +- BroadcastHashJoin\n                                                               :           :- CometNativeColumnarToRow\n                                                               :           :  +- CometFilter\n                                                           
    :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :           +- BroadcastExchange\n                                                               :              +- Project\n                                                               :                 +- BroadcastHashJoin\n                                                               :                    :- CometNativeColumnarToRow\n                                                               :                    :  +- CometFilter\n                                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                    +- BroadcastExchange\n                                                               :                       +- BroadcastHashJoin\n                                                               :                          :- CometNativeColumnarToRow\n                                                               :                          :  +- CometHashAggregate\n                                                               :                          :     +- CometColumnarExchange\n                                                               :                          :        +- HashAggregate\n                                                               :                          :           +- Project\n                                                               :                          :              +- BroadcastHashJoin\n                                                               :                          :                 :- Project\n                                                               :                          :                 :  +- BroadcastHashJoin\n                                                               :                          :                 :     :- Filter\n                                                               :                          :                 :     :  +- ColumnarToRow\n                                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :                          :                 :     :           +- SubqueryBroadcast\n                                                               :                          :                 :     :              +- BroadcastExchange\n                                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n                                                               :                          :                 :     :                    +- CometProject\n                                                               :                          :                 :     :                       +- CometFilter\n                                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :                          :                 :     +- BroadcastExchange\n                                  
                             :                          :                 :        +- BroadcastHashJoin\n                                                               :                          :                 :           :- CometNativeColumnarToRow\n                                                               :                          :                 :           :  +- CometFilter\n                                                               :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                          :                 :           +- BroadcastExchange\n                                                               :                          :                 :              +- Project\n                                                               :                          :                 :                 +- BroadcastHashJoin\n                                                               :                          :                 :                    :- Project\n                                                               :                          :                 :                    :  +- BroadcastHashJoin\n                                                               :                          :                 :                    :     :- Filter\n                                                               :                          :                 :                    :     :  +- ColumnarToRow\n                                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :                          :                 :                    :     :           +- ReusedSubquery\n                                                               :                          :                 :                    :     +- BroadcastExchange\n                                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n                                                               :                          :                 :                    :           +- CometFilter\n                                                               :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                          :                 :                    +- BroadcastExchange\n                                                               :                          :                 :                       +- CometNativeColumnarToRow\n                                                               :                          :                 :                          +- CometProject\n                                                               :                          :                 :                             +- CometFilter\n                                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                        
                                       :                          :                 +- BroadcastExchange\n                                                               :                          :                    +- CometNativeColumnarToRow\n                                                               :                          :                       +- CometProject\n                                                               :                          :                          +- CometFilter\n                                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :                          +- BroadcastExchange\n                                                               :                             +- Project\n                                                               :                                +- BroadcastHashJoin\n                                                               :                                   :- Project\n                                                               :                                   :  +- BroadcastHashJoin\n                                                               :                                   :     :- Filter\n                                                               :                                   :     :  +- ColumnarToRow\n                                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :                                   :     :           +- ReusedSubquery\n                                                               :                                   :     +- BroadcastExchange\n                                                               :                                   :        +- CometNativeColumnarToRow\n                                                               :                                   :           +- CometFilter\n                                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                                   +- BroadcastExchange\n                                                               :                                      +- CometNativeColumnarToRow\n                                                               :                                         +- CometProject\n                                                               :                                            +- CometFilter\n                                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               +- BroadcastExchange\n                                                                  +- CometNativeColumnarToRow\n                                                                     +- CometProject\n                                                                        +- CometFilter\n                                                                           +- CometNativeScan 
parquet spark_catalog.default.date_dim\n\nComet accelerated 842 out of 2302 eligible operators (36%). Final plan contains 475 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q14a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometFilter\n               :           :  :  +- Subquery\n               :           :  :     +- CometNativeColumnarToRow\n               :           :  :        +- CometHashAggregate\n               :           :  :           +- CometExchange\n               :           :  :              +- CometHashAggregate\n               :           :  :                 +- CometUnion\n               :           :  :                    :- CometProject\n               :           :  :                    :  +- CometBroadcastHashJoin\n               :           :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :  :                    :     :     +- ReusedSubquery\n               :           :  :                    :     +- CometBroadcastExchange\n               :           :  :                    :        +- CometProject\n               :           :  :                    :           +- CometFilter\n               :           :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  :                    :- CometProject\n               :           :  :                    :  +- CometBroadcastHashJoin\n               :           :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :  :                    :     :     +- SubqueryBroadcast\n               :           :  :                    :     :        +- BroadcastExchange\n               :           :  :                    :     :           +- CometNativeColumnarToRow\n               :           :  :                    :     :              +- CometProject\n               :           :  :                    :     :                 +- CometFilter\n               :           :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  :                    :     +- CometBroadcastExchange\n               :           :  :                    :        +- CometProject\n               :           :  :                    :           +- CometFilter\n               :           :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  :                    +- CometProject\n               :           :  :                       +- CometBroadcastHashJoin\n               :           :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :  :                          :     +- ReusedSubquery\n               :           :  :                          +- CometBroadcastExchange\n               :           :  :                             +- CometProject\n               :           :  :                                +- CometFilter\n               :           :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  
+- CometHashAggregate\n               :           :     +- CometExchange\n               :           :        +- CometHashAggregate\n               :           :           +- CometProject\n               :           :              +- CometBroadcastHashJoin\n               :           :                 :- CometProject\n               :           :                 :  +- CometBroadcastHashJoin\n               :           :                 :     :- CometBroadcastHashJoin\n               :           :                 :     :  :- CometFilter\n               :           :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :     :  :        +- SubqueryBroadcast\n               :           :                 :     :  :           +- BroadcastExchange\n               :           :                 :     :  :              +- CometNativeColumnarToRow\n               :           :                 :     :  :                 +- CometProject\n               :           :                 :     :  :                    +- CometFilter\n               :           :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :  +- CometBroadcastExchange\n               :           :                 :     :     +- CometProject\n               :           :                 :     :        +- CometBroadcastHashJoin\n               :           :                 :     :           :- CometFilter\n               :           :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :           +- CometBroadcastExchange\n               :           :                 :     :              +- CometBroadcastHashJoin\n               :           :                 :     :                 :- CometHashAggregate\n               :           :                 :     :                 :  +- CometExchange\n               :           :                 :     :                 :     +- CometHashAggregate\n               :           :                 :     :                 :        +- CometProject\n               :           :                 :     :                 :           +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :- CometProject\n               :           :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :     :- CometFilter\n               :           :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :           :                 :     :                 :              :     :           +- BroadcastExchange\n               :           :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :           :                 :     :                 :              :     :                 +- CometProject\n               :           :                 :     :                 :              :     :                    +- CometFilter\n               :  
         :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :           :- CometFilter\n               :           :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :           +- CometBroadcastExchange\n               :           :                 :     :                 :              :              +- CometProject\n               :           :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :- CometProject\n               :           :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :     :- CometFilter\n               :           :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :           :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :                    :        +- CometFilter\n               :           :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :                    +- CometBroadcastExchange\n               :           :                 :     :                 :              :                       +- CometProject\n               :           :                 :     :                 :              :                          +- CometFilter\n               :           :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              +- CometBroadcastExchange\n               :           :                 :     :                 :                 +- CometProject\n               :           :                 :     :                 :                    +- CometFilter\n               :           :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 +- CometBroadcastExchange\n               :           :                 :     :                    +- CometProject\n               :           :                 :     :             
          +- CometBroadcastHashJoin\n               :           :                 :     :                          :- CometProject\n               :           :                 :     :                          :  +- CometBroadcastHashJoin\n               :           :                 :     :                          :     :- CometFilter\n               :           :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :     :                          :     :        +- ReusedSubquery\n               :           :                 :     :                          :     +- CometBroadcastExchange\n               :           :                 :     :                          :        +- CometFilter\n               :           :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                          +- CometBroadcastExchange\n               :           :                 :     :                             +- CometProject\n               :           :                 :     :                                +- CometFilter\n               :           :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     +- CometBroadcastExchange\n               :           :                 :        +- CometBroadcastHashJoin\n               :           :                 :           :- CometFilter\n               :           :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :           +- CometBroadcastExchange\n               :           :                 :              +- CometProject\n               :           :                 :                 +- CometBroadcastHashJoin\n               :           :                 :                    :- CometFilter\n               :           :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                    +- CometBroadcastExchange\n               :           :                 :                       +- CometBroadcastHashJoin\n               :           :                 :                          :- CometHashAggregate\n               :           :                 :                          :  +- CometExchange\n               :           :                 :                          :     +- CometHashAggregate\n               :           :                 :                          :        +- CometProject\n               :           :                 :                          :           +- CometBroadcastHashJoin\n               :           :                 :                          :              :- CometProject\n               :           :                 :                          :              :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :     :- CometFilter\n               :           :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :               
           :              :     :        +- SubqueryBroadcast\n               :           :                 :                          :              :     :           +- BroadcastExchange\n               :           :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :           :                 :                          :              :     :                 +- CometProject\n               :           :                 :                          :              :     :                    +- CometFilter\n               :           :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :              :     +- CometBroadcastExchange\n               :           :                 :                          :              :        +- CometBroadcastHashJoin\n               :           :                 :                          :              :           :- CometFilter\n               :           :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :           +- CometBroadcastExchange\n               :           :                 :                          :              :              +- CometProject\n               :           :                 :                          :              :                 +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :- CometProject\n               :           :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :     :- CometFilter\n               :           :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :                          :              :                    :     :        +- ReusedSubquery\n               :           :                 :                          :              :                    :     +- CometBroadcastExchange\n               :           :                 :                          :              :                    :        +- CometFilter\n               :           :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :                    +- CometBroadcastExchange\n               :           :                 :                          :              :                       +- CometProject\n               :           :                 :                          :              :                          +- CometFilter\n               :           :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :     
         +- CometBroadcastExchange\n               :           :                 :                          :                 +- CometProject\n               :           :                 :                          :                    +- CometFilter\n               :           :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          +- CometBroadcastExchange\n               :           :                 :                             +- CometProject\n               :           :                 :                                +- CometBroadcastHashJoin\n               :           :                 :                                   :- CometProject\n               :           :                 :                                   :  +- CometBroadcastHashJoin\n               :           :                 :                                   :     :- CometFilter\n               :           :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :                                   :     :        +- ReusedSubquery\n               :           :                 :                                   :     +- CometBroadcastExchange\n               :           :                 :                                   :        +- CometFilter\n               :           :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                                   +- CometBroadcastExchange\n               :           :                 :                                      +- CometProject\n               :           :                 :                                         +- CometFilter\n               :           :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 +- CometBroadcastExchange\n               :           :                    +- CometProject\n               :           :                       +- CometFilter\n               :           :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :- CometFilter\n               :           :  :  +- ReusedSubquery\n               :           :  +- CometHashAggregate\n               :           :     +- CometExchange\n               :           :        +- CometHashAggregate\n               :           :           +- CometProject\n               :           :              +- CometBroadcastHashJoin\n               :           :                 :- CometProject\n               :           :                 :  +- CometBroadcastHashJoin\n               :           :                 :     :- CometBroadcastHashJoin\n               :           :                 :     :  :- CometFilter\n               :           :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :     :  :        +- ReusedSubquery\n               :           :                 :     :  +- CometBroadcastExchange\n               :           :                 :     :     +- 
CometProject\n               :           :                 :     :        +- CometBroadcastHashJoin\n               :           :                 :     :           :- CometFilter\n               :           :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :           +- CometBroadcastExchange\n               :           :                 :     :              +- CometBroadcastHashJoin\n               :           :                 :     :                 :- CometHashAggregate\n               :           :                 :     :                 :  +- CometExchange\n               :           :                 :     :                 :     +- CometHashAggregate\n               :           :                 :     :                 :        +- CometProject\n               :           :                 :     :                 :           +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :- CometProject\n               :           :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :     :- CometFilter\n               :           :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :           :                 :     :                 :              :     :           +- BroadcastExchange\n               :           :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :           :                 :     :                 :              :     :                 +- CometProject\n               :           :                 :     :                 :              :     :                    +- CometFilter\n               :           :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :           :- CometFilter\n               :           :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :           +- CometBroadcastExchange\n               :           :                 :     :                 :              :              +- CometProject\n               :           :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :- CometProject\n               :           :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :  
   :- CometFilter\n               :           :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :           :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :                    :        +- CometFilter\n               :           :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :                    +- CometBroadcastExchange\n               :           :                 :     :                 :              :                       +- CometProject\n               :           :                 :     :                 :              :                          +- CometFilter\n               :           :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              +- CometBroadcastExchange\n               :           :                 :     :                 :                 +- CometProject\n               :           :                 :     :                 :                    +- CometFilter\n               :           :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 +- CometBroadcastExchange\n               :           :                 :     :                    +- CometProject\n               :           :                 :     :                       +- CometBroadcastHashJoin\n               :           :                 :     :                          :- CometProject\n               :           :                 :     :                          :  +- CometBroadcastHashJoin\n               :           :                 :     :                          :     :- CometFilter\n               :           :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :     :                          :     :        +- ReusedSubquery\n               :           :                 :     :                          :     +- CometBroadcastExchange\n               :           :                 :     :                          :        +- CometFilter\n               :           :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                          +- CometBroadcastExchange\n               :           :                 :     :                             +- CometProject\n               :           :                 :     :                                +- CometFilter\n               :           :                 :     :                                   +- CometScan [native_iceberg_compat] 
parquet spark_catalog.default.date_dim\n               :           :                 :     +- CometBroadcastExchange\n               :           :                 :        +- CometBroadcastHashJoin\n               :           :                 :           :- CometFilter\n               :           :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :           +- CometBroadcastExchange\n               :           :                 :              +- CometProject\n               :           :                 :                 +- CometBroadcastHashJoin\n               :           :                 :                    :- CometFilter\n               :           :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                    +- CometBroadcastExchange\n               :           :                 :                       +- CometBroadcastHashJoin\n               :           :                 :                          :- CometHashAggregate\n               :           :                 :                          :  +- CometExchange\n               :           :                 :                          :     +- CometHashAggregate\n               :           :                 :                          :        +- CometProject\n               :           :                 :                          :           +- CometBroadcastHashJoin\n               :           :                 :                          :              :- CometProject\n               :           :                 :                          :              :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :     :- CometFilter\n               :           :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :                          :              :     :        +- SubqueryBroadcast\n               :           :                 :                          :              :     :           +- BroadcastExchange\n               :           :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :           :                 :                          :              :     :                 +- CometProject\n               :           :                 :                          :              :     :                    +- CometFilter\n               :           :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :              :     +- CometBroadcastExchange\n               :           :                 :                          :              :        +- CometBroadcastHashJoin\n               :           :                 :                          :              :           :- CometFilter\n               :           :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :           +- 
CometBroadcastExchange\n               :           :                 :                          :              :              +- CometProject\n               :           :                 :                          :              :                 +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :- CometProject\n               :           :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :     :- CometFilter\n               :           :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :                          :              :                    :     :        +- ReusedSubquery\n               :           :                 :                          :              :                    :     +- CometBroadcastExchange\n               :           :                 :                          :              :                    :        +- CometFilter\n               :           :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :                    +- CometBroadcastExchange\n               :           :                 :                          :              :                       +- CometProject\n               :           :                 :                          :              :                          +- CometFilter\n               :           :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :              +- CometBroadcastExchange\n               :           :                 :                          :                 +- CometProject\n               :           :                 :                          :                    +- CometFilter\n               :           :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          +- CometBroadcastExchange\n               :           :                 :                             +- CometProject\n               :           :                 :                                +- CometBroadcastHashJoin\n               :           :                 :                                   :- CometProject\n               :           :                 :                                   :  +- CometBroadcastHashJoin\n               :           :                 :                                   :     :- CometFilter\n               :           :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :                                   :     :        +- ReusedSubquery\n               :           :                 :                                   :     +- 
CometBroadcastExchange\n               :           :                 :                                   :        +- CometFilter\n               :           :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                                   +- CometBroadcastExchange\n               :           :                 :                                      +- CometProject\n               :           :                 :                                         +- CometFilter\n               :           :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 +- CometBroadcastExchange\n               :           :                    +- CometProject\n               :           :                       +- CometFilter\n               :           :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           +- CometFilter\n               :              :  +- ReusedSubquery\n               :              +- CometHashAggregate\n               :                 +- CometExchange\n               :                    +- CometHashAggregate\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometProject\n               :                             :  +- CometBroadcastHashJoin\n               :                             :     :- CometBroadcastHashJoin\n               :                             :     :  :- CometFilter\n               :                             :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                             :     :  :        +- ReusedSubquery\n               :                             :     :  +- CometBroadcastExchange\n               :                             :     :     +- CometProject\n               :                             :     :        +- CometBroadcastHashJoin\n               :                             :     :           :- CometFilter\n               :                             :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :           +- CometBroadcastExchange\n               :                             :     :              +- CometBroadcastHashJoin\n               :                             :     :                 :- CometHashAggregate\n               :                             :     :                 :  +- CometExchange\n               :                             :     :                 :     +- CometHashAggregate\n               :                             :     :                 :        +- CometProject\n               :                             :     :                 :           +- CometBroadcastHashJoin\n               :                             :     :                 :              :- CometProject\n               :                             :     :                 :              :  +- CometBroadcastHashJoin\n               :                             :     :                 :              :     :- CometFilter\n               :                             :     :                 :              :     :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                             :     :                 :              :     :        +- SubqueryBroadcast\n               :                             :     :                 :              :     :           +- BroadcastExchange\n               :                             :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                             :     :                 :              :     :                 +- CometProject\n               :                             :     :                 :              :     :                    +- CometFilter\n               :                             :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :     :                 :              :     +- CometBroadcastExchange\n               :                             :     :                 :              :        +- CometBroadcastHashJoin\n               :                             :     :                 :              :           :- CometFilter\n               :                             :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :                 :              :           +- CometBroadcastExchange\n               :                             :     :                 :              :              +- CometProject\n               :                             :     :                 :              :                 +- CometBroadcastHashJoin\n               :                             :     :                 :              :                    :- CometProject\n               :                             :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                             :     :                 :              :                    :     :- CometFilter\n               :                             :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                             :     :                 :              :                    :     :        +- ReusedSubquery\n               :                             :     :                 :              :                    :     +- CometBroadcastExchange\n               :                             :     :                 :              :                    :        +- CometFilter\n               :                             :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :                 :              :                    +- CometBroadcastExchange\n               :                             :     :                 :              :                       +- CometProject\n               :                             :     :                 :              :                          +- CometFilter\n               :                             :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :         
                    :     :                 :              +- CometBroadcastExchange\n               :                             :     :                 :                 +- CometProject\n               :                             :     :                 :                    +- CometFilter\n               :                             :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :     :                 +- CometBroadcastExchange\n               :                             :     :                    +- CometProject\n               :                             :     :                       +- CometBroadcastHashJoin\n               :                             :     :                          :- CometProject\n               :                             :     :                          :  +- CometBroadcastHashJoin\n               :                             :     :                          :     :- CometFilter\n               :                             :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                             :     :                          :     :        +- ReusedSubquery\n               :                             :     :                          :     +- CometBroadcastExchange\n               :                             :     :                          :        +- CometFilter\n               :                             :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :                          +- CometBroadcastExchange\n               :                             :     :                             +- CometProject\n               :                             :     :                                +- CometFilter\n               :                             :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :     +- CometBroadcastExchange\n               :                             :        +- CometBroadcastHashJoin\n               :                             :           :- CometFilter\n               :                             :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :           +- CometBroadcastExchange\n               :                             :              +- CometProject\n               :                             :                 +- CometBroadcastHashJoin\n               :                             :                    :- CometFilter\n               :                             :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                    +- CometBroadcastExchange\n               :                             :                       +- CometBroadcastHashJoin\n               :                             :                          :- CometHashAggregate\n               :                             :                          :  +- CometExchange\n               :                             :                          :     +- CometHashAggregate\n               :                             :              
            :        +- CometProject\n               :                             :                          :           +- CometBroadcastHashJoin\n               :                             :                          :              :- CometProject\n               :                             :                          :              :  +- CometBroadcastHashJoin\n               :                             :                          :              :     :- CometFilter\n               :                             :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                             :                          :              :     :        +- SubqueryBroadcast\n               :                             :                          :              :     :           +- BroadcastExchange\n               :                             :                          :              :     :              +- CometNativeColumnarToRow\n               :                             :                          :              :     :                 +- CometProject\n               :                             :                          :              :     :                    +- CometFilter\n               :                             :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :                          :              :     +- CometBroadcastExchange\n               :                             :                          :              :        +- CometBroadcastHashJoin\n               :                             :                          :              :           :- CometFilter\n               :                             :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                          :              :           +- CometBroadcastExchange\n               :                             :                          :              :              +- CometProject\n               :                             :                          :              :                 +- CometBroadcastHashJoin\n               :                             :                          :              :                    :- CometProject\n               :                             :                          :              :                    :  +- CometBroadcastHashJoin\n               :                             :                          :              :                    :     :- CometFilter\n               :                             :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                             :                          :              :                    :     :        +- ReusedSubquery\n               :                             :                          :              :                    :     +- CometBroadcastExchange\n               :                             :                          :              :                    :        +- CometFilter\n               :                             :                          :              :                    :           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                          :              :                    +- CometBroadcastExchange\n               :                             :                          :              :                       +- CometProject\n               :                             :                          :              :                          +- CometFilter\n               :                             :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :                          :              +- CometBroadcastExchange\n               :                             :                          :                 +- CometProject\n               :                             :                          :                    +- CometFilter\n               :                             :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :                          +- CometBroadcastExchange\n               :                             :                             +- CometProject\n               :                             :                                +- CometBroadcastHashJoin\n               :                             :                                   :- CometProject\n               :                             :                                   :  +- CometBroadcastHashJoin\n               :                             :                                   :     :- CometFilter\n               :                             :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                             :                                   :     :        +- ReusedSubquery\n               :                             :                                   :     +- CometBroadcastExchange\n               :                             :                                   :        +- CometFilter\n               :                             :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                                   +- CometBroadcastExchange\n               :                             :                                      +- CometProject\n               :                             :                                         +- CometFilter\n               :                             :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n   
            :                    :- CometFilter\n               :                    :  :  +- Subquery\n               :                    :  :     +- CometNativeColumnarToRow\n               :                    :  :        +- CometHashAggregate\n               :                    :  :           +- CometExchange\n               :                    :  :              +- CometHashAggregate\n               :                    :  :                 +- CometUnion\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :  :                    :     :     +- ReusedSubquery\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :  :                    :     :     +- SubqueryBroadcast\n               :                    :  :                    :     :        +- BroadcastExchange\n               :                    :  :                    :     :           +- CometNativeColumnarToRow\n               :                    :  :                    :     :              +- CometProject\n               :                    :  :                    :     :                 +- CometFilter\n               :                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    +- CometProject\n               :                    :  :                       +- CometBroadcastHashJoin\n               :                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :  :                          :     +- ReusedSubquery\n               :                    :  :                          +- CometBroadcastExchange\n               :                    :  :                             +- CometProject\n               :                    :  :                                +- CometFilter\n               :                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  +- 
CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :  :        +- SubqueryBroadcast\n               :                    :                 :     :  :           +- BroadcastExchange\n               :                    :                 :     :  :              +- CometNativeColumnarToRow\n               :                    :                 :     :  :                 +- CometProject\n               :                    :                 :     :  :                    +- CometFilter\n               :                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :          
    +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n       
        :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     
+- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    
:     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :       
             +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :- CometFilter\n               :                    :  :  +- ReusedSubquery\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :  :        +- ReusedSubquery\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- 
CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :        
         :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :        
            :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :   
     +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :           
               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometFilter\n               :                       :  +- ReusedSubquery\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastHashJoin\n               :                                      :     :  :- CometFilter\n               :                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :  :        +- ReusedSubquery\n               :                                      :     :  +- CometBroadcastExchange\n               :                                      :     :     +- CometProject\n               :                                      :     :        +- CometBroadcastHashJoin\n               :                                      :     :           :- CometFilter\n               :                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :           +- CometBroadcastExchange\n               :                                      :     :              +- CometBroadcastHashJoin\n               :                                      :     :                 :- CometHashAggregate\n               :                                      :     :                 :  +- CometExchange\n               :                                      :     :                 :     +- CometHashAggregate\n               :                                      :     :                 :        +- CometProject\n               :                                      :     :                 :           +- CometBroadcastHashJoin\n               :                                      :     :                 :              :- CometProject\n               :                                      :     :                 :              :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :     :- CometFilter\n               :                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :     :                 :              :     :        +- SubqueryBroadcast\n               :                                      :     :                 :              :     :           +- BroadcastExchange\n               :                                      :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                                      :     :                 :              :     :                 +- CometProject\n               :                                      :     :                 :              :     :                    +- CometFilter\n               :     
                                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              :     +- CometBroadcastExchange\n               :                                      :     :                 :              :        +- CometBroadcastHashJoin\n               :                                      :     :                 :              :           :- CometFilter\n               :                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :           +- CometBroadcastExchange\n               :                                      :     :                 :              :              +- CometProject\n               :                                      :     :                 :              :                 +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :- CometProject\n               :                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :     :- CometFilter\n               :                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :                 :              :                    :     :        +- ReusedSubquery\n               :                                      :     :                 :              :                    :     +- CometBroadcastExchange\n               :                                      :     :                 :              :                    :        +- CometFilter\n               :                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :                    +- CometBroadcastExchange\n               :                                      :     :                 :              :                       +- CometProject\n               :                                      :     :                 :              :                          +- CometFilter\n               :                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              +- CometBroadcastExchange\n               :                                      :     :                 :                 +- CometProject\n               :                                      :     :                 :                    +- CometFilter\n               :                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                     
                 :     :                 +- CometBroadcastExchange\n               :                                      :     :                    +- CometProject\n               :                                      :     :                       +- CometBroadcastHashJoin\n               :                                      :     :                          :- CometProject\n               :                                      :     :                          :  +- CometBroadcastHashJoin\n               :                                      :     :                          :     :- CometFilter\n               :                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :                          :     :        +- ReusedSubquery\n               :                                      :     :                          :     +- CometBroadcastExchange\n               :                                      :     :                          :        +- CometFilter\n               :                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                          +- CometBroadcastExchange\n               :                                      :     :                             +- CometProject\n               :                                      :     :                                +- CometFilter\n               :                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometBroadcastExchange\n               :                                      :        +- CometBroadcastHashJoin\n               :                                      :           :- CometFilter\n               :                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :           +- CometBroadcastExchange\n               :                                      :              +- CometProject\n               :                                      :                 +- CometBroadcastHashJoin\n               :                                      :                    :- CometFilter\n               :                                      :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                    +- CometBroadcastExchange\n               :                                      :                       +- CometBroadcastHashJoin\n               :                                      :                          :- CometHashAggregate\n               :                                      :                          :  +- CometExchange\n               :                                      :                          :     +- CometHashAggregate\n               :                                      :                          :        +- CometProject\n               :                                      :                          :           +- CometBroadcastHashJoin\n               :                                      :                    
      :              :- CometProject\n               :                                      :                          :              :  +- CometBroadcastHashJoin\n               :                                      :                          :              :     :- CometFilter\n               :                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :                          :              :     :        +- SubqueryBroadcast\n               :                                      :                          :              :     :           +- BroadcastExchange\n               :                                      :                          :              :     :              +- CometNativeColumnarToRow\n               :                                      :                          :              :     :                 +- CometProject\n               :                                      :                          :              :     :                    +- CometFilter\n               :                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              :     +- CometBroadcastExchange\n               :                                      :                          :              :        +- CometBroadcastHashJoin\n               :                                      :                          :              :           :- CometFilter\n               :                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :           +- CometBroadcastExchange\n               :                                      :                          :              :              +- CometProject\n               :                                      :                          :              :                 +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :- CometProject\n               :                                      :                          :              :                    :  +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :     :- CometFilter\n               :                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :                          :              :                    :     :        +- ReusedSubquery\n               :                                      :                          :              :                    :     +- CometBroadcastExchange\n               :                                      :                          :              :                    :        +- CometFilter\n               :                                      :                          :              :                    :           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :                    +- CometBroadcastExchange\n               :                                      :                          :              :                       +- CometProject\n               :                                      :                          :              :                          +- CometFilter\n               :                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              +- CometBroadcastExchange\n               :                                      :                          :                 +- CometProject\n               :                                      :                          :                    +- CometFilter\n               :                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          +- CometBroadcastExchange\n               :                                      :                             +- CometProject\n               :                                      :                                +- CometBroadcastHashJoin\n               :                                      :                                   :- CometProject\n               :                                      :                                   :  +- CometBroadcastHashJoin\n               :                                      :                                   :     :- CometFilter\n               :                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :                                   :     :        +- ReusedSubquery\n               :                                      :                                   :     +- CometBroadcastExchange\n               :                                      :                                   :        +- CometFilter\n               :                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                                   +- CometBroadcastExchange\n               :                                      :                                      +- CometProject\n               :                                      :                                         +- CometFilter\n               :                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometHashAggregate\n               :  +- 
CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometFilter\n               :                    :  :  +- Subquery\n               :                    :  :     +- CometNativeColumnarToRow\n               :                    :  :        +- CometHashAggregate\n               :                    :  :           +- CometExchange\n               :                    :  :              +- CometHashAggregate\n               :                    :  :                 +- CometUnion\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :  :                    :     :     +- ReusedSubquery\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :  :                    :     :     +- SubqueryBroadcast\n               :                    :  :                    :     :        +- BroadcastExchange\n               :                    :  :                    :     :           +- CometNativeColumnarToRow\n               :                    :  :                    :     :              +- CometProject\n               :                    :  :                    :     :                 +- CometFilter\n               :                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    +- CometProject\n               :                    :  :                       +- CometBroadcastHashJoin\n               :                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :  :                          :     +- ReusedSubquery\n               :                    :  :                          +- CometBroadcastExchange\n               :                    :  :                             +- CometProject\n               :                   
 :  :                                +- CometFilter\n               :                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :  :        +- SubqueryBroadcast\n               :                    :                 :     :  :           +- BroadcastExchange\n               :                    :                 :     :  :              +- CometNativeColumnarToRow\n               :                    :                 :     :  :                 +- CometProject\n               :                    :                 :     :  :                    +- CometFilter\n               :                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- 
SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :      
              :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- 
CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :- CometFilter\n               :                    :  :  +- ReusedSubquery\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :  :        +- ReusedSubquery\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              : 
    :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :             
 +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :     
            :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :   
  :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometFilter\n               :                       :  +- ReusedSubquery\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastHashJoin\n               :                                      :     :  :- CometFilter\n               :                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :  :        +- ReusedSubquery\n               :                                      :     :  +- CometBroadcastExchange\n               :                                      :     :     +- CometProject\n               :                                      :     :        +- CometBroadcastHashJoin\n               :                                      :     :           :- CometFilter\n               :                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :           +- CometBroadcastExchange\n               :                                      :     :              +- CometBroadcastHashJoin\n               :                                      :     :                 :- CometHashAggregate\n               :                                      :     :                 :  +- CometExchange\n               :                                      :     :                 :     +- CometHashAggregate\n               :                                      :     :                 :        +- CometProject\n               :                                      :     :                 :           +- CometBroadcastHashJoin\n               :                                      :     :                 :              :- CometProject\n               :                                      :     :                 :              :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :     :- CometFilter\n               :                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :     :                 :              :     :        +- SubqueryBroadcast\n               :                                      :     :                 :              :     :           +- BroadcastExchange\n               :                                      :     :                 :              :     :              +- 
CometNativeColumnarToRow\n               :                                      :     :                 :              :     :                 +- CometProject\n               :                                      :     :                 :              :     :                    +- CometFilter\n               :                                      :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              :     +- CometBroadcastExchange\n               :                                      :     :                 :              :        +- CometBroadcastHashJoin\n               :                                      :     :                 :              :           :- CometFilter\n               :                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :           +- CometBroadcastExchange\n               :                                      :     :                 :              :              +- CometProject\n               :                                      :     :                 :              :                 +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :- CometProject\n               :                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :     :- CometFilter\n               :                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :                 :              :                    :     :        +- ReusedSubquery\n               :                                      :     :                 :              :                    :     +- CometBroadcastExchange\n               :                                      :     :                 :              :                    :        +- CometFilter\n               :                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :                    +- CometBroadcastExchange\n               :                                      :     :                 :              :                       +- CometProject\n               :                                      :     :                 :              :                          +- CometFilter\n               :                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              +- CometBroadcastExchange\n               :                                      :     :                 :                 +- CometProject\n              
 :                                      :     :                 :                    +- CometFilter\n               :                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 +- CometBroadcastExchange\n               :                                      :     :                    +- CometProject\n               :                                      :     :                       +- CometBroadcastHashJoin\n               :                                      :     :                          :- CometProject\n               :                                      :     :                          :  +- CometBroadcastHashJoin\n               :                                      :     :                          :     :- CometFilter\n               :                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :                          :     :        +- ReusedSubquery\n               :                                      :     :                          :     +- CometBroadcastExchange\n               :                                      :     :                          :        +- CometFilter\n               :                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                          +- CometBroadcastExchange\n               :                                      :     :                             +- CometProject\n               :                                      :     :                                +- CometFilter\n               :                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometBroadcastExchange\n               :                                      :        +- CometBroadcastHashJoin\n               :                                      :           :- CometFilter\n               :                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :           +- CometBroadcastExchange\n               :                                      :              +- CometProject\n               :                                      :                 +- CometBroadcastHashJoin\n               :                                      :                    :- CometFilter\n               :                                      :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                    +- CometBroadcastExchange\n               :                                      :                       +- CometBroadcastHashJoin\n               :                                      :                          :- CometHashAggregate\n               :                                      :                          :  +- CometExchange\n               :                                      :                          :     +- 
CometHashAggregate\n               :                                      :                          :        +- CometProject\n               :                                      :                          :           +- CometBroadcastHashJoin\n               :                                      :                          :              :- CometProject\n               :                                      :                          :              :  +- CometBroadcastHashJoin\n               :                                      :                          :              :     :- CometFilter\n               :                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :                          :              :     :        +- SubqueryBroadcast\n               :                                      :                          :              :     :           +- BroadcastExchange\n               :                                      :                          :              :     :              +- CometNativeColumnarToRow\n               :                                      :                          :              :     :                 +- CometProject\n               :                                      :                          :              :     :                    +- CometFilter\n               :                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              :     +- CometBroadcastExchange\n               :                                      :                          :              :        +- CometBroadcastHashJoin\n               :                                      :                          :              :           :- CometFilter\n               :                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :           +- CometBroadcastExchange\n               :                                      :                          :              :              +- CometProject\n               :                                      :                          :              :                 +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :- CometProject\n               :                                      :                          :              :                    :  +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :     :- CometFilter\n               :                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :                          :              :                    :     :        +- ReusedSubquery\n               :                                      :                          :              :                    :  
   +- CometBroadcastExchange\n               :                                      :                          :              :                    :        +- CometFilter\n               :                                      :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :                    +- CometBroadcastExchange\n               :                                      :                          :              :                       +- CometProject\n               :                                      :                          :              :                          +- CometFilter\n               :                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              +- CometBroadcastExchange\n               :                                      :                          :                 +- CometProject\n               :                                      :                          :                    +- CometFilter\n               :                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          +- CometBroadcastExchange\n               :                                      :                             +- CometProject\n               :                                      :                                +- CometBroadcastHashJoin\n               :                                      :                                   :- CometProject\n               :                                      :                                   :  +- CometBroadcastHashJoin\n               :                                      :                                   :     :- CometFilter\n               :                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :                                   :     :        +- ReusedSubquery\n               :                                      :                                   :     +- CometBroadcastExchange\n               :                                      :                                   :        +- CometFilter\n               :                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                                   +- CometBroadcastExchange\n               :                                      :                                      +- CometProject\n               :                                      :                                         +- CometFilter\n               :                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      +- CometBroadcastExchange\n               :                               
          +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometFilter\n               :                    :  :  +- Subquery\n               :                    :  :     +- CometNativeColumnarToRow\n               :                    :  :        +- CometHashAggregate\n               :                    :  :           +- CometExchange\n               :                    :  :              +- CometHashAggregate\n               :                    :  :                 +- CometUnion\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :  :                    :     :     +- ReusedSubquery\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :  :                    :     :     +- SubqueryBroadcast\n               :                    :  :                    :     :        +- BroadcastExchange\n               :                    :  :                    :     :           +- CometNativeColumnarToRow\n               :                    :  :                    :     :              +- CometProject\n               :                    :  :                    :     :                 +- CometFilter\n               :                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    +- CometProject\n               :                    :  :                       +- CometBroadcastHashJoin\n               :                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n   
            :                    :  :                          :     +- ReusedSubquery\n               :                    :  :                          +- CometBroadcastExchange\n               :                    :  :                             +- CometProject\n               :                    :  :                                +- CometFilter\n               :                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :  :        +- SubqueryBroadcast\n               :                    :                 :     :  :           +- BroadcastExchange\n               :                    :                 :     :  :              +- CometNativeColumnarToRow\n               :                    :                 :     :  :                 +- CometProject\n               :                    :                 :     :  :                    +- CometFilter\n               :                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n      
         :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :           
         :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 
:                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                        
           +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :- CometFilter\n               :                    :  :  +- ReusedSubquery\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :  :        +- ReusedSubquery\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                
    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- 
CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 
:                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- 
CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometFilter\n               :                       :  +- ReusedSubquery\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastHashJoin\n               :                                      :     :  :- CometFilter\n               :                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :  :        +- ReusedSubquery\n               :                                      :     :  +- CometBroadcastExchange\n               :                                      :     :     +- CometProject\n               :                                      :     :        +- CometBroadcastHashJoin\n               :                                      :     :           :- CometFilter\n               :                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :           +- CometBroadcastExchange\n               :                                      :     :              +- CometBroadcastHashJoin\n               :                                      :     :                 :- CometHashAggregate\n               :                                      :     :                 :  +- CometExchange\n               :                                      :     :                 :     +- CometHashAggregate\n               :                                      :     :                 :        +- CometProject\n               :                                      :     :                 :           +- CometBroadcastHashJoin\n               :                                      :     :                 :              :- CometProject\n               :                                      :     :                 :              :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :     :- CometFilter\n               :                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :     :                 :              :     :        +- 
SubqueryBroadcast\n               :                                      :     :                 :              :     :           +- BroadcastExchange\n               :                                      :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                                      :     :                 :              :     :                 +- CometProject\n               :                                      :     :                 :              :     :                    +- CometFilter\n               :                                      :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              :     +- CometBroadcastExchange\n               :                                      :     :                 :              :        +- CometBroadcastHashJoin\n               :                                      :     :                 :              :           :- CometFilter\n               :                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :           +- CometBroadcastExchange\n               :                                      :     :                 :              :              +- CometProject\n               :                                      :     :                 :              :                 +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :- CometProject\n               :                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :     :- CometFilter\n               :                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :                 :              :                    :     :        +- ReusedSubquery\n               :                                      :     :                 :              :                    :     +- CometBroadcastExchange\n               :                                      :     :                 :              :                    :        +- CometFilter\n               :                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :                    +- CometBroadcastExchange\n               :                                      :     :                 :              :                       +- CometProject\n               :                                      :     :                 :              :                          +- CometFilter\n               :                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :                                      :     :                 :              +- CometBroadcastExchange\n               :                                      :     :                 :                 +- CometProject\n               :                                      :     :                 :                    +- CometFilter\n               :                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 +- CometBroadcastExchange\n               :                                      :     :                    +- CometProject\n               :                                      :     :                       +- CometBroadcastHashJoin\n               :                                      :     :                          :- CometProject\n               :                                      :     :                          :  +- CometBroadcastHashJoin\n               :                                      :     :                          :     :- CometFilter\n               :                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :                          :     :        +- ReusedSubquery\n               :                                      :     :                          :     +- CometBroadcastExchange\n               :                                      :     :                          :        +- CometFilter\n               :                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                          +- CometBroadcastExchange\n               :                                      :     :                             +- CometProject\n               :                                      :     :                                +- CometFilter\n               :                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometBroadcastExchange\n               :                                      :        +- CometBroadcastHashJoin\n               :                                      :           :- CometFilter\n               :                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :           +- CometBroadcastExchange\n               :                                      :              +- CometProject\n               :                                      :                 +- CometBroadcastHashJoin\n               :                                      :                    :- CometFilter\n               :                                      :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                    +- CometBroadcastExchange\n               :                                      :                       +- CometBroadcastHashJoin\n               :      
                                :                          :- CometHashAggregate\n               :                                      :                          :  +- CometExchange\n               :                                      :                          :     +- CometHashAggregate\n               :                                      :                          :        +- CometProject\n               :                                      :                          :           +- CometBroadcastHashJoin\n               :                                      :                          :              :- CometProject\n               :                                      :                          :              :  +- CometBroadcastHashJoin\n               :                                      :                          :              :     :- CometFilter\n               :                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :                          :              :     :        +- SubqueryBroadcast\n               :                                      :                          :              :     :           +- BroadcastExchange\n               :                                      :                          :              :     :              +- CometNativeColumnarToRow\n               :                                      :                          :              :     :                 +- CometProject\n               :                                      :                          :              :     :                    +- CometFilter\n               :                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              :     +- CometBroadcastExchange\n               :                                      :                          :              :        +- CometBroadcastHashJoin\n               :                                      :                          :              :           :- CometFilter\n               :                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :           +- CometBroadcastExchange\n               :                                      :                          :              :              +- CometProject\n               :                                      :                          :              :                 +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :- CometProject\n               :                                      :                          :              :                    :  +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :     :- CometFilter\n               :                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.catalog_sales\n               :                                      :                          :              :                    :     :        +- ReusedSubquery\n               :                                      :                          :              :                    :     +- CometBroadcastExchange\n               :                                      :                          :              :                    :        +- CometFilter\n               :                                      :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :                    +- CometBroadcastExchange\n               :                                      :                          :              :                       +- CometProject\n               :                                      :                          :              :                          +- CometFilter\n               :                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              +- CometBroadcastExchange\n               :                                      :                          :                 +- CometProject\n               :                                      :                          :                    +- CometFilter\n               :                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          +- CometBroadcastExchange\n               :                                      :                             +- CometProject\n               :                                      :                                +- CometBroadcastHashJoin\n               :                                      :                                   :- CometProject\n               :                                      :                                   :  +- CometBroadcastHashJoin\n               :                                      :                                   :     :- CometFilter\n               :                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :                                   :     :        +- ReusedSubquery\n               :                                      :                                   :     +- CometBroadcastExchange\n               :                                      :                                   :        +- CometFilter\n               :                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                                   +- CometBroadcastExchange\n               :                                      :                                      +- CometProject\n               :                                      :                                         +- 
CometFilter\n               :                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometFilter\n                                    :  :  +- Subquery\n                                    :  :     +- CometNativeColumnarToRow\n                                    :  :        +- CometHashAggregate\n                                    :  :           +- CometExchange\n                                    :  :              +- CometHashAggregate\n                                    :  :                 +- CometUnion\n                                    :  :                    :- CometProject\n                                    :  :                    :  +- CometBroadcastHashJoin\n                                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :  :                    :     :     +- ReusedSubquery\n                                    :  :                    :     +- CometBroadcastExchange\n                                    :  :                    :        +- CometProject\n                                    :  :                    :           +- CometFilter\n                                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :  :                    :- CometProject\n                                    :  :                    :  +- CometBroadcastHashJoin\n                                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :  :                    :     :     +- SubqueryBroadcast\n                                    :  :                    :     :        +- BroadcastExchange\n                                    :  :                    :     :           +- CometNativeColumnarToRow\n                                    :  :                    :     :              +- CometProject\n                                    :  :                    :     :                 +- CometFilter\n                                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :  :                    :     +- CometBroadcastExchange\n                                    :  :                    :        +- CometProject\n                                    :  :                    :           +- CometFilter\n                                    :  :                    :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n                                    :  :                    +- CometProject\n                                    :  :                       +- CometBroadcastHashJoin\n                                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :  :                          :     +- ReusedSubquery\n                                    :  :                          +- CometBroadcastExchange\n                                    :  :                             +- CometProject\n                                    :  :                                +- CometFilter\n                                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :  +- CometHashAggregate\n                                    :     +- CometExchange\n                                    :        +- CometHashAggregate\n                                    :           +- CometProject\n                                    :              +- CometBroadcastHashJoin\n                                    :                 :- CometProject\n                                    :                 :  +- CometBroadcastHashJoin\n                                    :                 :     :- CometBroadcastHashJoin\n                                    :                 :     :  :- CometFilter\n                                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :     :  :        +- SubqueryBroadcast\n                                    :                 :     :  :           +- BroadcastExchange\n                                    :                 :     :  :              +- CometNativeColumnarToRow\n                                    :                 :     :  :                 +- CometProject\n                                    :                 :     :  :                    +- CometFilter\n                                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :  +- CometBroadcastExchange\n                                    :                 :     :     +- CometProject\n                                    :                 :     :        +- CometBroadcastHashJoin\n                                    :                 :     :           :- CometFilter\n                                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :           +- CometBroadcastExchange\n                                    :                 :     :              +- CometBroadcastHashJoin\n                                    :                 :     :                 :- CometHashAggregate\n                                    :                 :     :                 :  +- CometExchange\n                                    :                 :     :                 :     +- CometHashAggregate\n                                    :                 :     :                 :        +- CometProject\n                                    :                 :     :                 :           +- CometBroadcastHashJoin\n        
                            :                 :     :                 :              :- CometProject\n                                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :     :- CometFilter\n                                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n                                    :                 :     :                 :              :     :           +- BroadcastExchange\n                                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n                                    :                 :     :                 :              :     :                 +- CometProject\n                                    :                 :     :                 :              :     :                    +- CometFilter\n                                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :           :- CometFilter\n                                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :           +- CometBroadcastExchange\n                                    :                 :     :                 :              :              +- CometProject\n                                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :- CometProject\n                                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :     :- CometFilter\n                                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n                                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :                    :        +- CometFilter\n                                    :                 :     :                 :              :                    :           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :                    +- CometBroadcastExchange\n                                    :                 :     :                 :              :                       +- CometProject\n                                    :                 :     :                 :              :                          +- CometFilter\n                                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              +- CometBroadcastExchange\n                                    :                 :     :                 :                 +- CometProject\n                                    :                 :     :                 :                    +- CometFilter\n                                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 +- CometBroadcastExchange\n                                    :                 :     :                    +- CometProject\n                                    :                 :     :                       +- CometBroadcastHashJoin\n                                    :                 :     :                          :- CometProject\n                                    :                 :     :                          :  +- CometBroadcastHashJoin\n                                    :                 :     :                          :     :- CometFilter\n                                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :     :                          :     :        +- ReusedSubquery\n                                    :                 :     :                          :     +- CometBroadcastExchange\n                                    :                 :     :                          :        +- CometFilter\n                                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                          +- CometBroadcastExchange\n                                    :                 :     :                             +- CometProject\n                                    :                 :     :                                +- CometFilter\n                                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     +- CometBroadcastExchange\n                                    :                 :        +- CometBroadcastHashJoin\n                                    :                 :           :- CometFilter\n                                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :           +- CometBroadcastExchange\n                 
                   :                 :              +- CometProject\n                                    :                 :                 +- CometBroadcastHashJoin\n                                    :                 :                    :- CometFilter\n                                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                    +- CometBroadcastExchange\n                                    :                 :                       +- CometBroadcastHashJoin\n                                    :                 :                          :- CometHashAggregate\n                                    :                 :                          :  +- CometExchange\n                                    :                 :                          :     +- CometHashAggregate\n                                    :                 :                          :        +- CometProject\n                                    :                 :                          :           +- CometBroadcastHashJoin\n                                    :                 :                          :              :- CometProject\n                                    :                 :                          :              :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :     :- CometFilter\n                                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :                          :              :     :        +- SubqueryBroadcast\n                                    :                 :                          :              :     :           +- BroadcastExchange\n                                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n                                    :                 :                          :              :     :                 +- CometProject\n                                    :                 :                          :              :     :                    +- CometFilter\n                                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              :     +- CometBroadcastExchange\n                                    :                 :                          :              :        +- CometBroadcastHashJoin\n                                    :                 :                          :              :           :- CometFilter\n                                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :           +- CometBroadcastExchange\n                                    :                 :                          :              :              +- CometProject\n                                    :                 :                          :              :                 +- 
CometBroadcastHashJoin\n                                    :                 :                          :              :                    :- CometProject\n                                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :                    :     :- CometFilter\n                                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :                          :              :                    :     :        +- ReusedSubquery\n                                    :                 :                          :              :                    :     +- CometBroadcastExchange\n                                    :                 :                          :              :                    :        +- CometFilter\n                                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :                    +- CometBroadcastExchange\n                                    :                 :                          :              :                       +- CometProject\n                                    :                 :                          :              :                          +- CometFilter\n                                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              +- CometBroadcastExchange\n                                    :                 :                          :                 +- CometProject\n                                    :                 :                          :                    +- CometFilter\n                                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          +- CometBroadcastExchange\n                                    :                 :                             +- CometProject\n                                    :                 :                                +- CometBroadcastHashJoin\n                                    :                 :                                   :- CometProject\n                                    :                 :                                   :  +- CometBroadcastHashJoin\n                                    :                 :                                   :     :- CometFilter\n                                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :                                   :     :        +- ReusedSubquery\n                                    :                 :                                   :     +- CometBroadcastExchange\n          
                          :                 :                                   :        +- CometFilter\n                                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                                   +- CometBroadcastExchange\n                                    :                 :                                      +- CometProject\n                                    :                 :                                         +- CometFilter\n                                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 +- CometBroadcastExchange\n                                    :                    +- CometProject\n                                    :                       +- CometFilter\n                                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :- CometFilter\n                                    :  :  +- ReusedSubquery\n                                    :  +- CometHashAggregate\n                                    :     +- CometExchange\n                                    :        +- CometHashAggregate\n                                    :           +- CometProject\n                                    :              +- CometBroadcastHashJoin\n                                    :                 :- CometProject\n                                    :                 :  +- CometBroadcastHashJoin\n                                    :                 :     :- CometBroadcastHashJoin\n                                    :                 :     :  :- CometFilter\n                                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :     :  :        +- ReusedSubquery\n                                    :                 :     :  +- CometBroadcastExchange\n                                    :                 :     :     +- CometProject\n                                    :                 :     :        +- CometBroadcastHashJoin\n                                    :                 :     :           :- CometFilter\n                                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :           +- CometBroadcastExchange\n                                    :                 :     :              +- CometBroadcastHashJoin\n                                    :                 :     :                 :- CometHashAggregate\n                                    :                 :     :                 :  +- CometExchange\n                                    :                 :     :                 :     +- CometHashAggregate\n                                    :                 :     :                 :        +- CometProject\n                                    :                 :     :                 :           +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :- CometProject\n                                    :  
               :     :                 :              :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :     :- CometFilter\n                                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n                                    :                 :     :                 :              :     :           +- BroadcastExchange\n                                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n                                    :                 :     :                 :              :     :                 +- CometProject\n                                    :                 :     :                 :              :     :                    +- CometFilter\n                                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :           :- CometFilter\n                                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :           +- CometBroadcastExchange\n                                    :                 :     :                 :              :              +- CometProject\n                                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :- CometProject\n                                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :     :- CometFilter\n                                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n                                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :                    :        +- CometFilter\n                                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :   
                 +- CometBroadcastExchange\n                                    :                 :     :                 :              :                       +- CometProject\n                                    :                 :     :                 :              :                          +- CometFilter\n                                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              +- CometBroadcastExchange\n                                    :                 :     :                 :                 +- CometProject\n                                    :                 :     :                 :                    +- CometFilter\n                                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 +- CometBroadcastExchange\n                                    :                 :     :                    +- CometProject\n                                    :                 :     :                       +- CometBroadcastHashJoin\n                                    :                 :     :                          :- CometProject\n                                    :                 :     :                          :  +- CometBroadcastHashJoin\n                                    :                 :     :                          :     :- CometFilter\n                                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :     :                          :     :        +- ReusedSubquery\n                                    :                 :     :                          :     +- CometBroadcastExchange\n                                    :                 :     :                          :        +- CometFilter\n                                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                          +- CometBroadcastExchange\n                                    :                 :     :                             +- CometProject\n                                    :                 :     :                                +- CometFilter\n                                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     +- CometBroadcastExchange\n                                    :                 :        +- CometBroadcastHashJoin\n                                    :                 :           :- CometFilter\n                                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :           +- CometBroadcastExchange\n                                    :                 :              +- CometProject\n                                    :                 :                 +- 
CometBroadcastHashJoin\n                                    :                 :                    :- CometFilter\n                                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                    +- CometBroadcastExchange\n                                    :                 :                       +- CometBroadcastHashJoin\n                                    :                 :                          :- CometHashAggregate\n                                    :                 :                          :  +- CometExchange\n                                    :                 :                          :     +- CometHashAggregate\n                                    :                 :                          :        +- CometProject\n                                    :                 :                          :           +- CometBroadcastHashJoin\n                                    :                 :                          :              :- CometProject\n                                    :                 :                          :              :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :     :- CometFilter\n                                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :                          :              :     :        +- SubqueryBroadcast\n                                    :                 :                          :              :     :           +- BroadcastExchange\n                                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n                                    :                 :                          :              :     :                 +- CometProject\n                                    :                 :                          :              :     :                    +- CometFilter\n                                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              :     +- CometBroadcastExchange\n                                    :                 :                          :              :        +- CometBroadcastHashJoin\n                                    :                 :                          :              :           :- CometFilter\n                                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :           +- CometBroadcastExchange\n                                    :                 :                          :              :              +- CometProject\n                                    :                 :                          :              :                 +- CometBroadcastHashJoin\n                                    :                 :                          :              :                    :- 
CometProject\n                                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :                    :     :- CometFilter\n                                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :                          :              :                    :     :        +- ReusedSubquery\n                                    :                 :                          :              :                    :     +- CometBroadcastExchange\n                                    :                 :                          :              :                    :        +- CometFilter\n                                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :                    +- CometBroadcastExchange\n                                    :                 :                          :              :                       +- CometProject\n                                    :                 :                          :              :                          +- CometFilter\n                                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              +- CometBroadcastExchange\n                                    :                 :                          :                 +- CometProject\n                                    :                 :                          :                    +- CometFilter\n                                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          +- CometBroadcastExchange\n                                    :                 :                             +- CometProject\n                                    :                 :                                +- CometBroadcastHashJoin\n                                    :                 :                                   :- CometProject\n                                    :                 :                                   :  +- CometBroadcastHashJoin\n                                    :                 :                                   :     :- CometFilter\n                                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :                                   :     :        +- ReusedSubquery\n                                    :                 :                                   :     +- CometBroadcastExchange\n                                    :                 :                                   :        +- CometFilter\n                                    :  
               :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                                   +- CometBroadcastExchange\n                                    :                 :                                      +- CometProject\n                                    :                 :                                         +- CometFilter\n                                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 +- CometBroadcastExchange\n                                    :                    +- CometProject\n                                    :                       +- CometFilter\n                                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- CometFilter\n                                       :  +- ReusedSubquery\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometBroadcastHashJoin\n                                                      :     :  :- CometFilter\n                                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                      :     :  :        +- ReusedSubquery\n                                                      :     :  +- CometBroadcastExchange\n                                                      :     :     +- CometProject\n                                                      :     :        +- CometBroadcastHashJoin\n                                                      :     :           :- CometFilter\n                                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :           +- CometBroadcastExchange\n                                                      :     :              +- CometBroadcastHashJoin\n                                                      :     :                 :- CometHashAggregate\n                                                      :     :                 :  +- CometExchange\n                                                      :     :                 :     +- CometHashAggregate\n                                                      :     :                 :        +- CometProject\n                                                      :     :                 :           +- CometBroadcastHashJoin\n                                                      :     :                 :              :- CometProject\n                                                      :     :                 :              :  +- CometBroadcastHashJoin\n                                                      :     :   
              :              :     :- CometFilter\n                                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :                 :              :     :        +- SubqueryBroadcast\n                                                      :     :                 :              :     :           +- BroadcastExchange\n                                                      :     :                 :              :     :              +- CometNativeColumnarToRow\n                                                      :     :                 :              :     :                 +- CometProject\n                                                      :     :                 :              :     :                    +- CometFilter\n                                                      :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :                 :              :     +- CometBroadcastExchange\n                                                      :     :                 :              :        +- CometBroadcastHashJoin\n                                                      :     :                 :              :           :- CometFilter\n                                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :                 :              :           +- CometBroadcastExchange\n                                                      :     :                 :              :              +- CometProject\n                                                      :     :                 :              :                 +- CometBroadcastHashJoin\n                                                      :     :                 :              :                    :- CometProject\n                                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                                      :     :                 :              :                    :     :- CometFilter\n                                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                                      :     :                 :              :                    :     :        +- ReusedSubquery\n                                                      :     :                 :              :                    :     +- CometBroadcastExchange\n                                                      :     :                 :              :                    :        +- CometFilter\n                                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :                 :              :                    +- CometBroadcastExchange\n                                                      :     :                 :              :          
             +- CometProject\n                                                      :     :                 :              :                          +- CometFilter\n                                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :                 :              +- CometBroadcastExchange\n                                                      :     :                 :                 +- CometProject\n                                                      :     :                 :                    +- CometFilter\n                                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :                 +- CometBroadcastExchange\n                                                      :     :                    +- CometProject\n                                                      :     :                       +- CometBroadcastHashJoin\n                                                      :     :                          :- CometProject\n                                                      :     :                          :  +- CometBroadcastHashJoin\n                                                      :     :                          :     :- CometFilter\n                                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                      :     :                          :     :        +- ReusedSubquery\n                                                      :     :                          :     +- CometBroadcastExchange\n                                                      :     :                          :        +- CometFilter\n                                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :                          +- CometBroadcastExchange\n                                                      :     :                             +- CometProject\n                                                      :     :                                +- CometFilter\n                                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     +- CometBroadcastExchange\n                                                      :        +- CometBroadcastHashJoin\n                                                      :           :- CometFilter\n                                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :           +- CometBroadcastExchange\n                                                      :              +- CometProject\n                                                      :                 +- CometBroadcastHashJoin\n                                                      :                    :- CometFilter\n                                              
        :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :                    +- CometBroadcastExchange\n                                                      :                       +- CometBroadcastHashJoin\n                                                      :                          :- CometHashAggregate\n                                                      :                          :  +- CometExchange\n                                                      :                          :     +- CometHashAggregate\n                                                      :                          :        +- CometProject\n                                                      :                          :           +- CometBroadcastHashJoin\n                                                      :                          :              :- CometProject\n                                                      :                          :              :  +- CometBroadcastHashJoin\n                                                      :                          :              :     :- CometFilter\n                                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :                          :              :     :        +- SubqueryBroadcast\n                                                      :                          :              :     :           +- BroadcastExchange\n                                                      :                          :              :     :              +- CometNativeColumnarToRow\n                                                      :                          :              :     :                 +- CometProject\n                                                      :                          :              :     :                    +- CometFilter\n                                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :                          :              :     +- CometBroadcastExchange\n                                                      :                          :              :        +- CometBroadcastHashJoin\n                                                      :                          :              :           :- CometFilter\n                                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :                          :              :           +- CometBroadcastExchange\n                                                      :                          :              :              +- CometProject\n                                                      :                          :              :                 +- CometBroadcastHashJoin\n                                                      :                          :              :                    :- CometProject\n                                                      :                          :              :                    :  +- CometBroadcastHashJoin\n          
                                            :                          :              :                    :     :- CometFilter\n                                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                                      :                          :              :                    :     :        +- ReusedSubquery\n                                                      :                          :              :                    :     +- CometBroadcastExchange\n                                                      :                          :              :                    :        +- CometFilter\n                                                      :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :                          :              :                    +- CometBroadcastExchange\n                                                      :                          :              :                       +- CometProject\n                                                      :                          :              :                          +- CometFilter\n                                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :                          :              +- CometBroadcastExchange\n                                                      :                          :                 +- CometProject\n                                                      :                          :                    +- CometFilter\n                                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :                          +- CometBroadcastExchange\n                                                      :                             +- CometProject\n                                                      :                                +- CometBroadcastHashJoin\n                                                      :                                   :- CometProject\n                                                      :                                   :  +- CometBroadcastHashJoin\n                                                      :                                   :     :- CometFilter\n                                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                      :                                   :     :        +- ReusedSubquery\n                                                      :                                   :     +- CometBroadcastExchange\n                                                      :                                   :        +- CometFilter\n                                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                   
                   :                                   +- CometBroadcastExchange\n                                                      :                                      +- CometProject\n                                                      :                                         +- CometFilter\n                                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 2127 out of 2302 eligible operators (92%). Final plan contains 46 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q18a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Union\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- SubqueryBroadcast\n   :                 :     :     :     :     :     :              +- BroadcastExchange\n   :                 :     :     :     :     :     :                 +- CometNativeColumnarToRow\n   :                 :     :     :     :     :     :                    +- CometProject\n   :                 :     :     :     :     :     :                       +- CometFilter\n   :                 :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- 
CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :             
 +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 
:     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Project\n                     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :- Project\n                     :     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :     :- Filter\n                     :     :     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     :     :     +-  
Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :     :     :           +- ReusedSubquery\n                     :     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :     :           +- CometProject\n                     :     :     :     :     :              +- CometFilter\n                     :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :           +- CometProject\n                     :     :     :     :              +- CometFilter\n                     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 92 out of 210 eligible operators (43%). Final plan contains 41 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q18a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :     :     :     :     :        +- SubqueryBroadcast\n      :              :     :     :     :     :     :           +- BroadcastExchange\n      :              :     :     :     :     :     :              +- CometNativeColumnarToRow\n      :              :     :     :     :     :     :                 +- CometProject\n      :              :     :     :     :     :     :                    +- CometFilter\n      :              :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :      
        :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :     :     :     :     :        +- ReusedSubquery\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :    
 :     :     :     :     :        +- ReusedSubquery\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :     :     :     :     :        +- ReusedSubquery\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              
:     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :- CometProject\n                     :     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :     :- CometFilter\n                     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :     :     :     :     :     :        +- ReusedSubquery\n                     :     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :     :        +- CometProject\n                     :     :     :     :     :           +- CometFilter\n                     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :        +- CometProject\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 204 out of 210 eligible operators (97%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q20.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q20.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q22.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastNestedLoopJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Filter\n                     :     :     :  +- ColumnarToRow\n                     :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :           +- SubqueryBroadcast\n                     :     :     :              +- BroadcastExchange\n                     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :                    +- CometProject\n                     :     :     :                       +- CometFilter\n                     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometNativeScan parquet spark_catalog.default.warehouse\n\nComet accelerated 11 out of 28 eligible operators (39%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q22.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n                     :- CometNativeColumnarToRow\n                     :  +- CometProject\n                     :     +- CometBroadcastHashJoin\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometFilter\n                     :        :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                     :        :     :        +- SubqueryBroadcast\n                     :        :     :           +- BroadcastExchange\n                     :        :     :              +- CometNativeColumnarToRow\n                     :        :     :                 +- CometProject\n                     :        :     :                    +- CometFilter\n                     :        :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        +- CometBroadcastExchange\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n\nComet accelerated 19 out of 28 eligible operators (67%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q22a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Union\n   :- HashAggregate\n   :  +- HashAggregate\n   :     +- HashAggregate\n   :        +- CometNativeColumnarToRow\n   :           +- CometColumnarExchange\n   :              +- HashAggregate\n   :                 +- Project\n   :                    +- BroadcastHashJoin\n   :                       :- Project\n   :                       :  +- BroadcastHashJoin\n   :                       :     :- Project\n   :                       :     :  +- BroadcastHashJoin\n   :                       :     :     :- Filter\n   :                       :     :     :  +- ColumnarToRow\n   :                       :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                       :     :     :           +- SubqueryBroadcast\n   :                       :     :     :              +- BroadcastExchange\n   :                       :     :     :                 +- CometNativeColumnarToRow\n   :                       :     :     :                    +- CometProject\n   :                       :     :     :                       +- CometFilter\n   :                       :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                       :     :     +- BroadcastExchange\n   :                       :     :        +- CometNativeColumnarToRow\n   :                       :     :           +- CometProject\n   :                       :     :              +- CometFilter\n   :                       :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                       :     +- BroadcastExchange\n   :                       :        +- CometNativeColumnarToRow\n   :                       :           +- CometProject\n   :                       :              +- CometFilter\n   :                       :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                       +- BroadcastExchange\n   :                          +- CometNativeColumnarToRow\n   :                             +- CometFilter\n   :                                +- CometNativeScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- HashAggregate\n   :              +- CometNativeColumnarToRow\n   :                 +- CometColumnarExchange\n   :                    +- HashAggregate\n   :                       +- Project\n   :                          +- BroadcastHashJoin\n   :                             :- Project\n   :                             :  +- BroadcastHashJoin\n   :                             :     :- Project\n   :                             :     :  +- BroadcastHashJoin\n   :                             :     :     :- Filter\n   :                             :     :     :  +- ColumnarToRow\n   :                             :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                             :     :     :           +- SubqueryBroadcast\n   :                             :     :     :              +- BroadcastExchange\n   :                             :     :     :                 +- CometNativeColumnarToRow\n   :                             :     :     :                    +- CometProject\n   :                             :     :     :           
            +- CometFilter\n   :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     :     +- BroadcastExchange\n   :                             :     :        +- CometNativeColumnarToRow\n   :                             :     :           +- CometProject\n   :                             :     :              +- CometFilter\n   :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     +- BroadcastExchange\n   :                             :        +- CometNativeColumnarToRow\n   :                             :           +- CometProject\n   :                             :              +- CometFilter\n   :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                             +- BroadcastExchange\n   :                                +- CometNativeColumnarToRow\n   :                                   +- CometFilter\n   :                                      +- CometNativeScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- HashAggregate\n   :              +- CometNativeColumnarToRow\n   :                 +- CometColumnarExchange\n   :                    +- HashAggregate\n   :                       +- Project\n   :                          +- BroadcastHashJoin\n   :                             :- Project\n   :                             :  +- BroadcastHashJoin\n   :                             :     :- Project\n   :                             :     :  +- BroadcastHashJoin\n   :                             :     :     :- Filter\n   :                             :     :     :  +- ColumnarToRow\n   :                             :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                             :     :     :           +- SubqueryBroadcast\n   :                             :     :     :              +- BroadcastExchange\n   :                             :     :     :                 +- CometNativeColumnarToRow\n   :                             :     :     :                    +- CometProject\n   :                             :     :     :                       +- CometFilter\n   :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     :     +- BroadcastExchange\n   :                             :     :        +- CometNativeColumnarToRow\n   :                             :     :           +- CometProject\n   :                             :     :              +- CometFilter\n   :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     +- BroadcastExchange\n   :                             :        +- CometNativeColumnarToRow\n   :                             :           +- CometProject\n   :                             :              +- CometFilter\n   :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                             +- BroadcastExchange\n   :                                +- CometNativeColumnarToRow\n   :                     
              +- CometFilter\n   :                                      +- CometNativeScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- HashAggregate\n   :              +- CometNativeColumnarToRow\n   :                 +- CometColumnarExchange\n   :                    +- HashAggregate\n   :                       +- Project\n   :                          +- BroadcastHashJoin\n   :                             :- Project\n   :                             :  +- BroadcastHashJoin\n   :                             :     :- Project\n   :                             :     :  +- BroadcastHashJoin\n   :                             :     :     :- Filter\n   :                             :     :     :  +- ColumnarToRow\n   :                             :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                             :     :     :           +- SubqueryBroadcast\n   :                             :     :     :              +- BroadcastExchange\n   :                             :     :     :                 +- CometNativeColumnarToRow\n   :                             :     :     :                    +- CometProject\n   :                             :     :     :                       +- CometFilter\n   :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     :     +- BroadcastExchange\n   :                             :     :        +- CometNativeColumnarToRow\n   :                             :     :           +- CometProject\n   :                             :     :              +- CometFilter\n   :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     +- BroadcastExchange\n   :                             :        +- CometNativeColumnarToRow\n   :                             :           +- CometProject\n   :                             :              +- CometFilter\n   :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                             +- BroadcastExchange\n   :                                +- CometNativeColumnarToRow\n   :                                   +- CometFilter\n   :                                      +- CometNativeScan parquet spark_catalog.default.warehouse\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :  
   :     :           +- SubqueryBroadcast\n                                 :     :     :              +- BroadcastExchange\n                                 :     :     :                 +- CometNativeColumnarToRow\n                                 :     :     :                    +- CometProject\n                                 :     :     :                       +- CometFilter\n                                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.warehouse\n\nComet accelerated 64 out of 151 eligible operators (42%). Final plan contains 34 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q22a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometHashAggregate\n      :     +- CometHashAggregate\n      :        +- CometExchange\n      :           +- CometHashAggregate\n      :              +- CometProject\n      :                 +- CometBroadcastHashJoin\n      :                    :- CometProject\n      :                    :  +- CometBroadcastHashJoin\n      :                    :     :- CometProject\n      :                    :     :  +- CometBroadcastHashJoin\n      :                    :     :     :- CometFilter\n      :                    :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                    :     :     :        +- SubqueryBroadcast\n      :                    :     :     :           +- BroadcastExchange\n      :                    :     :     :              +- CometNativeColumnarToRow\n      :                    :     :     :                 +- CometProject\n      :                    :     :     :                    +- CometFilter\n      :                    :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                    :     :     +- CometBroadcastExchange\n      :                    :     :        +- CometProject\n      :                    :     :           +- CometFilter\n      :                    :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                    :     +- CometBroadcastExchange\n      :                    :        +- CometProject\n      :                    :           +- CometFilter\n      :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                    +- CometBroadcastExchange\n      :                       +- CometFilter\n      :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometHashAggregate\n      :           +- CometExchange\n      :              +- CometHashAggregate\n      :                 +- CometProject\n      :                    +- CometBroadcastHashJoin\n      :                       :- CometProject\n      :                       :  +- CometBroadcastHashJoin\n      :                       :     :- CometProject\n      :                       :     :  +- CometBroadcastHashJoin\n      :                       :     :     :- CometFilter\n      :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                       :     :     :        +- SubqueryBroadcast\n      :                       :     :     :           +- BroadcastExchange\n      :                       :     :     :              +- CometNativeColumnarToRow\n      :                       :     :     :                 +- CometProject\n      :                       :     :     :                    +- CometFilter\n      :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     :     +- CometBroadcastExchange\n      :                       :     :        +- CometProject\n      :                       :     :           +- CometFilter\n      :                       :     :              +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     +- CometBroadcastExchange\n      :                       :        +- CometProject\n      :                       :           +- CometFilter\n      :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                       +- CometBroadcastExchange\n      :                          +- CometFilter\n      :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometHashAggregate\n      :           +- CometExchange\n      :              +- CometHashAggregate\n      :                 +- CometProject\n      :                    +- CometBroadcastHashJoin\n      :                       :- CometProject\n      :                       :  +- CometBroadcastHashJoin\n      :                       :     :- CometProject\n      :                       :     :  +- CometBroadcastHashJoin\n      :                       :     :     :- CometFilter\n      :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                       :     :     :        +- SubqueryBroadcast\n      :                       :     :     :           +- BroadcastExchange\n      :                       :     :     :              +- CometNativeColumnarToRow\n      :                       :     :     :                 +- CometProject\n      :                       :     :     :                    +- CometFilter\n      :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     :     +- CometBroadcastExchange\n      :                       :     :        +- CometProject\n      :                       :     :           +- CometFilter\n      :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     +- CometBroadcastExchange\n      :                       :        +- CometProject\n      :                       :           +- CometFilter\n      :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                       +- CometBroadcastExchange\n      :                          +- CometFilter\n      :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometHashAggregate\n      :           +- CometExchange\n      :              +- CometHashAggregate\n      :                 +- CometProject\n      :                    +- CometBroadcastHashJoin\n      :                       :- CometProject\n      :                       :  +- CometBroadcastHashJoin\n      :                       :     :- CometProject\n      :                       :     :  +- CometBroadcastHashJoin\n      :                       :     :     :- CometFilter\n      :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                       :     :     :        +- SubqueryBroadcast\n      :                       :     :     :           +- BroadcastExchange\n      :                
       :     :     :              +- CometNativeColumnarToRow\n      :                       :     :     :                 +- CometProject\n      :                       :     :     :                    +- CometFilter\n      :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     :     +- CometBroadcastExchange\n      :                       :     :        +- CometProject\n      :                       :     :           +- CometFilter\n      :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     +- CometBroadcastExchange\n      :                       :        +- CometProject\n      :                       :           +- CometFilter\n      :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                       +- CometBroadcastExchange\n      :                          +- CometFilter\n      :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                              :     :     :        +- SubqueryBroadcast\n                              :     :     :           +- BroadcastExchange\n                              :     :     :              +- CometNativeColumnarToRow\n                              :     :     :                 +- CometProject\n                              :     :     :                    +- CometFilter\n                              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n\nComet accelerated 141 out of 151 eligible operators (93%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q24.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Filter\n         :  +- Subquery\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- HashAggregate\n         :                    +- CometNativeColumnarToRow\n         :                       +- CometColumnarExchange\n         :                          +- HashAggregate\n         :                             +- Project\n         :                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n         :                                   :- CometNativeColumnarToRow\n         :                                   :  +- CometProject\n         :                                   :     +- CometBroadcastHashJoin\n         :                                   :        :- CometProject\n         :                                   :        :  +- CometBroadcastHashJoin\n         :                                   :        :     :- CometProject\n         :                                   :        :     :  +- CometBroadcastHashJoin\n         :                                   :        :     :     :- CometProject\n         :                                   :        :     :     :  +- CometSortMergeJoin\n         :                                   :        :     :     :     :- CometSort\n         :                                   :        :     :     :     :  +- CometExchange\n         :                                   :        :     :     :     :     +- CometProject\n         :                                   :        :     :     :     :        +- CometFilter\n         :                                   :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n         :                                   :        :     :     :     +- CometSort\n         :                                   :        :     :     :        +- CometExchange\n         :                                   :        :     :     :           +- CometProject\n         :                                   :        :     :     :              +- CometFilter\n         :                                   :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n         :                                   :        :     :     +- CometBroadcastExchange\n         :                                   :        :     :        +- CometProject\n         :                                   :        :     :           +- CometFilter\n         :                                   :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n         :                                   :        :     +- CometBroadcastExchange\n         :                                   :        :        +- CometProject\n         :                                   :        :           +- CometFilter\n         :                                   :        :              +- CometNativeScan parquet spark_catalog.default.item\n         :                                   :        +- CometBroadcastExchange\n         :                                   :           +- CometProject\n         :                                   :              +- CometFilter\n         :         
                          :                 +- CometNativeScan parquet spark_catalog.default.customer\n         :                                   +- BroadcastExchange\n         :                                      +- CometNativeColumnarToRow\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometNativeScan parquet spark_catalog.default.customer_address\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- HashAggregate\n                        +- CometNativeColumnarToRow\n                           +- CometColumnarExchange\n                              +- HashAggregate\n                                 +- Project\n                                    +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                                       :- CometNativeColumnarToRow\n                                       :  +- CometProject\n                                       :     +- CometBroadcastHashJoin\n                                       :        :- CometProject\n                                       :        :  +- CometBroadcastHashJoin\n                                       :        :     :- CometProject\n                                       :        :     :  +- CometBroadcastHashJoin\n                                       :        :     :     :- CometProject\n                                       :        :     :     :  +- CometSortMergeJoin\n                                       :        :     :     :     :- CometSort\n                                       :        :     :     :     :  +- CometExchange\n                                       :        :     :     :     :     +- CometProject\n                                       :        :     :     :     :        +- CometFilter\n                                       :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                                       :        :     :     :     +- CometSort\n                                       :        :     :     :        +- CometExchange\n                                       :        :     :     :           +- CometProject\n                                       :        :     :     :              +- CometFilter\n                                       :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                       :        :     :     +- CometBroadcastExchange\n                                       :        :     :        +- CometProject\n                                       :        :     :           +- CometFilter\n                                       :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n                                       :        :     +- CometBroadcastExchange\n                                       :        :        +- CometProject\n                                       :        :           +- CometFilter\n                                       :        :              +- CometNativeScan parquet spark_catalog.default.item\n                                       :        +- CometBroadcastExchange\n               
                        :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.customer\n                                       +- BroadcastExchange\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 72 out of 88 eligible operators (81%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q24.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Filter\n         :  +- Subquery\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- HashAggregate\n         :                    +- CometNativeColumnarToRow\n         :                       +- CometColumnarExchange\n         :                          +- HashAggregate\n         :                             +- Project\n         :                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n         :                                   :- CometNativeColumnarToRow\n         :                                   :  +- CometProject\n         :                                   :     +- CometBroadcastHashJoin\n         :                                   :        :- CometProject\n         :                                   :        :  +- CometBroadcastHashJoin\n         :                                   :        :     :- CometProject\n         :                                   :        :     :  +- CometBroadcastHashJoin\n         :                                   :        :     :     :- CometProject\n         :                                   :        :     :     :  +- CometSortMergeJoin\n         :                                   :        :     :     :     :- CometSort\n         :                                   :        :     :     :     :  +- CometExchange\n         :                                   :        :     :     :     :     +- CometProject\n         :                                   :        :     :     :     :        +- CometFilter\n         :                                   :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :                                   :        :     :     :     +- CometSort\n         :                                   :        :     :     :        +- CometExchange\n         :                                   :        :     :     :           +- CometProject\n         :                                   :        :     :     :              +- CometFilter\n         :                                   :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :                                   :        :     :     +- CometBroadcastExchange\n         :                                   :        :     :        +- CometProject\n         :                                   :        :     :           +- CometFilter\n         :                                   :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :                                   :        :     +- CometBroadcastExchange\n         :                                   :        :        +- CometProject\n         :                                   :        :           +- CometFilter\n         :                                   :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                                   :        +- CometBroadcastExchange\n         :                                   :           +- CometProject\n         :             
                      :              +- CometFilter\n         :                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                                   +- BroadcastExchange\n         :                                      +- CometNativeColumnarToRow\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- HashAggregate\n                        +- CometNativeColumnarToRow\n                           +- CometColumnarExchange\n                              +- HashAggregate\n                                 +- Project\n                                    +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                                       :- CometNativeColumnarToRow\n                                       :  +- CometProject\n                                       :     +- CometBroadcastHashJoin\n                                       :        :- CometProject\n                                       :        :  +- CometBroadcastHashJoin\n                                       :        :     :- CometProject\n                                       :        :     :  +- CometBroadcastHashJoin\n                                       :        :     :     :- CometProject\n                                       :        :     :     :  +- CometSortMergeJoin\n                                       :        :     :     :     :- CometSort\n                                       :        :     :     :     :  +- CometExchange\n                                       :        :     :     :     :     +- CometProject\n                                       :        :     :     :     :        +- CometFilter\n                                       :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :        :     :     :     +- CometSort\n                                       :        :     :     :        +- CometExchange\n                                       :        :     :     :           +- CometProject\n                                       :        :     :     :              +- CometFilter\n                                       :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                       :        :     :     +- CometBroadcastExchange\n                                       :        :     :        +- CometProject\n                                       :        :     :           +- CometFilter\n                                       :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                       :        :     +- CometBroadcastExchange\n                                       :        :        +- CometProject\n                                       :        :           +- CometFilter\n                                       :      
  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :        +- CometBroadcastExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                                       +- BroadcastExchange\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 72 out of 88 eligible operators (81%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q27a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Union\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Filter\n   :                 :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :           +- SubqueryBroadcast\n   :                 :     :     :     :              +- BroadcastExchange\n   :                 :     :     :     :                 +- CometNativeColumnarToRow\n   :                 :     :     :     :                    +- CometProject\n   :                 :     :     :     :                       +- CometFilter\n   :                 :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometProject\n   :                 :     :     :              +- CometFilter\n   :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Filter\n   :                 :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n  
 :                 :     :     :           +- CometProject\n   :                 :     :     :              +- CometFilter\n   :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Filter\n                     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :           +- ReusedSubquery\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometProject\n                     :     :     :              +- CometFilter\n                     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.store\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 41 out of 95 eligible operators (43%). Final plan contains 19 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q27a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometFilter\n      :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :     :     :        +- SubqueryBroadcast\n      :              :     :     :     :           +- BroadcastExchange\n      :              :     :     :     :              +- CometNativeColumnarToRow\n      :              :     :     :     :                 +- CometProject\n      :              :     :     :     :                    +- CometFilter\n      :              :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometProject\n      :              :     :     :           +- CometFilter\n      :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometFilter\n      :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :     :     :        +- ReusedSubquery\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometProject\n      :              :     :     :           +- CometFilter\n      :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- 
CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometFilter\n                     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :     :     :        +- ReusedSubquery\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometProject\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 91 out of 95 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q34.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Filter\n            :  +- HashAggregate\n            :     +- CometNativeColumnarToRow\n            :        +- CometColumnarExchange\n            :           +- HashAggregate\n            :              +- Project\n            :                 +- BroadcastHashJoin\n            :                    :- Project\n            :                    :  +- BroadcastHashJoin\n            :                    :     :- Project\n            :                    :     :  +- BroadcastHashJoin\n            :                    :     :     :- Filter\n            :                    :     :     :  +- ColumnarToRow\n            :                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                    :     :     :           +- SubqueryBroadcast\n            :                    :     :     :              +- BroadcastExchange\n            :                    :     :     :                 +- CometNativeColumnarToRow\n            :                    :     :     :                    +- CometProject\n            :                    :     :     :                       +- CometFilter\n            :                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     :     +- BroadcastExchange\n            :                    :     :        +- CometNativeColumnarToRow\n            :                    :     :           +- CometProject\n            :                    :     :              +- CometFilter\n            :                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     +- BroadcastExchange\n            :                    :        +- CometNativeColumnarToRow\n            :                    :           +- CometProject\n            :                    :              +- CometFilter\n            :                    :                 +- CometNativeScan parquet spark_catalog.default.store\n            :                    +- BroadcastExchange\n            :                       +- CometNativeColumnarToRow\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometNativeScan parquet spark_catalog.default.household_demographics\n            +- BroadcastExchange\n               +- CometNativeColumnarToRow\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 37 eligible operators (48%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q34.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometFilter\n            :  +- CometHashAggregate\n            :     +- CometExchange\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometFilter\n            :                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :        +- SubqueryBroadcast\n            :                 :     :     :           +- BroadcastExchange\n            :                 :     :     :              +- CometNativeColumnarToRow\n            :                 :     :     :                 +- CometProject\n            :                 :     :     :                    +- CometFilter\n            :                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometProject\n            :                 :     :           +- CometFilter\n            :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometProject\n            :                 :           +- CometFilter\n            :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 37 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q35.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :- BroadcastHashJoin\n                  :     :        :  :- BroadcastHashJoin\n                  :     :        :  :  :- CometNativeColumnarToRow\n                  :     :        :  :  :  +- CometFilter\n                  :     :        :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :        :  :  +- BroadcastExchange\n                  :     :        :  :     +- Project\n                  :     :        :  :        +- BroadcastHashJoin\n                  :     :        :  :           :- ColumnarToRow\n                  :     :        :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :  :           :        +- SubqueryBroadcast\n                  :     :        :  :           :           +- BroadcastExchange\n                  :     :        :  :           :              +- CometNativeColumnarToRow\n                  :     :        :  :           :                 +- CometProject\n                  :     :        :  :           :                    +- CometFilter\n                  :     :        :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  :           +- BroadcastExchange\n                  :     :        :  :              +- CometNativeColumnarToRow\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- Project\n                  :     :        :        +- BroadcastHashJoin\n                  :     :        :           :- ColumnarToRow\n                  :     :        :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :           :        +- ReusedSubquery\n                  :     :        :           +- BroadcastExchange\n                  :     :        :              +- CometNativeColumnarToRow\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- 
BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 54 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q35.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                  :     :        :  :- CometNativeColumnarToRow\n                  :     :        :  :  +- CometBroadcastHashJoin\n                  :     :        :  :     :- CometFilter\n                  :     :        :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :        :  :     +- CometBroadcastExchange\n                  :     :        :  :        +- CometProject\n                  :     :        :  :           +- CometBroadcastHashJoin\n                  :     :        :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :        :  :              :     +- SubqueryBroadcast\n                  :     :        :  :              :        +- BroadcastExchange\n                  :     :        :  :              :           +- CometNativeColumnarToRow\n                  :     :        :  :              :              +- CometProject\n                  :     :        :  :              :                 +- CometFilter\n                  :     :        :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  :              +- CometBroadcastExchange\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- CometNativeColumnarToRow\n                  :     :        :        +- CometProject\n                  :     :        :           +- CometBroadcastHashJoin\n                  :     :        :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :        :              :     +- ReusedSubquery\n                  :     :        :              +- CometBroadcastExchange\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- CometNativeColumnarToRow\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n          
        :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 54 eligible operators (64%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q35a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- BroadcastHashJoin\n                  :     :     :  :- CometNativeColumnarToRow\n                  :     :     :  :  +- CometFilter\n                  :     :     :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- Project\n                  :     :     :        +- BroadcastHashJoin\n                  :     :     :           :- ColumnarToRow\n                  :     :     :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           :        +- SubqueryBroadcast\n                  :     :     :           :           +- BroadcastExchange\n                  :     :     :           :              +- CometNativeColumnarToRow\n                  :     :     :           :                 +- CometProject\n                  :     :     :           :                    +- CometFilter\n                  :     :     :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- Union\n                  :     :           :- Project\n                  :     :           :  +- BroadcastHashJoin\n                  :     :           :     :- ColumnarToRow\n                  :     :           :     :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :           :     :        +- ReusedSubquery\n                  :     :           :     +- BroadcastExchange\n                  :     :           :        +- CometNativeColumnarToRow\n                  :     :           :           +- CometProject\n                  :     :           :              +- CometFilter\n                  :     :           :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             
+- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 52 eligible operators (40%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q35a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometBroadcastHashJoin\n                  :     :     :  :- CometFilter\n                  :     :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :     :  +- CometBroadcastExchange\n                  :     :     :     +- CometProject\n                  :     :     :        +- CometBroadcastHashJoin\n                  :     :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :           :     +- SubqueryBroadcast\n                  :     :     :           :        +- BroadcastExchange\n                  :     :     :           :           +- CometNativeColumnarToRow\n                  :     :     :           :              +- CometProject\n                  :     :     :           :                 +- CometFilter\n                  :     :     :           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :           +- CometBroadcastExchange\n                  :     :     :              +- CometProject\n                  :     :     :                 +- CometFilter\n                  :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometUnion\n                  :     :           :- CometProject\n                  :     :           :  +- CometBroadcastHashJoin\n                  :     :           :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :           :     :     +- ReusedSubquery\n                  :     :           :     +- CometBroadcastExchange\n                  :     :           :        +- CometProject\n                  :     :           :           +- CometFilter\n                  :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :           +- CometProject\n                  :     :              +- CometBroadcastHashJoin\n                  :     :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                 :     +- ReusedSubquery\n                  :     :                 +- CometBroadcastExchange\n                  :     :                    +- CometProject\n                  :     :                       +- CometFilter\n                  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 48 out of 52 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q36a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Project\n                           :                 :     :  +- BroadcastHashJoin\n                           :                 :     :     :- Filter\n                           :                 :     :     :  +- ColumnarToRow\n                           :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :     :           +- SubqueryBroadcast\n                           :                 :     :     :              +- BroadcastExchange\n                           :                 :     :     :                 +- CometNativeColumnarToRow\n                           :                 :     :     :                    +- CometProject\n                           :                 :     :     :                       +- CometFilter\n                           :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     :     +- BroadcastExchange\n                           :                 :     :        +- CometNativeColumnarToRow\n                           :                 :     :           +- CometProject\n                           :                 :     :              +- CometFilter\n                           :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                           :                 +- BroadcastExchange\n                           :                    +- CometNativeColumnarToRow\n                           :                       +- CometProject\n                           :                          +- CometFilter\n                           :                             +- CometNativeScan parquet spark_catalog.default.store\n                           :- HashAggregate\n                           :  +- 
CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.store\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n         
                           +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Project\n                                                         :     :  +- BroadcastHashJoin\n                                                         :     :     :- Filter\n                                                         :     :     :  +- ColumnarToRow\n                                                         :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :     :           +- SubqueryBroadcast\n                                                         :     :     :              +- BroadcastExchange\n                                                         :     :     :                 +- CometNativeColumnarToRow\n                                                         :     :     :                    +- CometProject\n                                                         :     :     :                       +- CometFilter\n                                                         :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     :     +- BroadcastExchange\n                                                         :     :        +- CometNativeColumnarToRow\n                                                         :     :           +- CometProject\n                                                         :     :              +- CometFilter\n                                                         :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                                         :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan parquet spark_catalog.default.item\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 45 out of 99 eligible operators (45%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q36a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometUnion\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometProject\n                           :           +- CometBroadcastHashJoin\n                           :              :- CometProject\n                           :              :  +- CometBroadcastHashJoin\n                           :              :     :- CometProject\n                           :              :     :  +- CometBroadcastHashJoin\n                           :              :     :     :- CometFilter\n                           :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :              :     :     :        +- SubqueryBroadcast\n                           :              :     :     :           +- BroadcastExchange\n                           :              :     :     :              +- CometNativeColumnarToRow\n                           :              :     :     :                 +- CometProject\n                           :              :     :     :                    +- CometFilter\n                           :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              :     :     +- CometBroadcastExchange\n                           :              :     :        +- CometProject\n                           :              :     :           +- CometFilter\n                           :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              :     +- CometBroadcastExchange\n                           :              :        +- CometProject\n                           :              :           +- CometFilter\n                           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :              +- CometBroadcastExchange\n                           :                 +- CometProject\n                           :                    +- CometFilter\n                           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometHashAggregate\n                           :           +- CometExchange\n                           :              +- CometHashAggregate\n                           :                 +- CometProject\n                           :                    +- CometBroadcastHashJoin\n                           :                       :- 
CometProject\n                           :                       :  +- CometBroadcastHashJoin\n                           :                       :     :- CometProject\n                           :                       :     :  +- CometBroadcastHashJoin\n                           :                       :     :     :- CometFilter\n                           :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                       :     :     :        +- SubqueryBroadcast\n                           :                       :     :     :           +- BroadcastExchange\n                           :                       :     :     :              +- CometNativeColumnarToRow\n                           :                       :     :     :                 +- CometProject\n                           :                       :     :     :                    +- CometFilter\n                           :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       :     :     +- CometBroadcastExchange\n                           :                       :     :        +- CometProject\n                           :                       :     :           +- CometFilter\n                           :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       :     +- CometBroadcastExchange\n                           :                       :        +- CometProject\n                           :                       :           +- CometFilter\n                           :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                       +- CometBroadcastExchange\n                           :                          +- CometProject\n                           :                             +- CometFilter\n                           :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometProject\n                                                   :     :  +- CometBroadcastHashJoin\n                                                   :     :     :- CometFilter\n                                                   :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                   :     :     :        +- SubqueryBroadcast\n                                                   :     :     :           +- BroadcastExchange\n                                                   :     :     :          
    +- CometNativeColumnarToRow\n                                                   :     :     :                 +- CometProject\n                                                   :     :     :                    +- CometFilter\n                                                   :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     :     +- CometBroadcastExchange\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 90 out of 99 eligible operators (90%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q47.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                          
        +- CometNativeScan parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q47.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q49.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- SubqueryBroadcast\n               :                                         :     :                    +- BroadcastExchange\n               :                                         :     :                       +- CometNativeColumnarToRow\n               :                                         :     :                          +- CometProject\n               :                                         :     :                             +- CometFilter\n               :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n      
         :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- ReusedSubquery\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometColumnarExchange\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- BroadcastExchange\n                                                         :     :  +- Project\n                                                         :     :     +- Filter\n                                                         :     :        +- ColumnarToRow\n                                                         :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :                 +- ReusedSubquery\n                                                         :     +- CometNativeColumnarToRow\n                                                         :        +- CometProject\n                                                         :           +- CometFilter\n                                                         :              +- CometNativeScan parquet spark_catalog.default.store_returns\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 33 out of 87 eligible operators (37%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q49.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :              +- SubqueryBroadcast\n               :                                      :     :                 +- BroadcastExchange\n               :                                      :     :                    +- CometNativeColumnarToRow\n               :                                      :     :                       +- CometProject\n               :                                      :     :                          +- CometFilter\n               :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :              +- ReusedSubquery\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometExchange\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometBroadcastExchange\n                                                      :     :  +- CometProject\n                                                      :     :     +- CometFilter\n                                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :              +- ReusedSubquery\n                                                      :     +- CometProject\n                                                      :        +- CometFilter\n                                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 66 out of 87 eligible operators (75%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q51a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :  +- CometNativeColumnarToRow\n               :     +- CometSort\n               :        +- CometExchange\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometSortMergeJoin\n               :                    :- CometSort\n               :                    :  +- CometColumnarExchange\n               :                    :     +- HashAggregate\n               :                    :        +- CometNativeColumnarToRow\n               :                    :           +- CometColumnarExchange\n               :                    :              +- HashAggregate\n               :                    :                 +- Project\n               :                    :                    +- BroadcastHashJoin\n               :                    :                       :- Project\n               :                    :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                       :     +- CometNativeColumnarToRow\n               :                    :                       :        +- CometSort\n               :                    :                       :           +- CometColumnarExchange\n               :                    :                       :              +- HashAggregate\n               :                    :                       :                 +- CometNativeColumnarToRow\n               :                    :                       :                    +- CometColumnarExchange\n               :                    :                       :                       +- HashAggregate\n               :                    :                       :                          +- Project\n               :                    :                       :                             +- BroadcastHashJoin\n               :                    :                       :                                :- Filter\n               :                    :                       :                                :  +- ColumnarToRow\n               :                    :                       :                                :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :                       :                                :           +- SubqueryBroadcast\n               :                    :                       :                                :              +- BroadcastExchange\n               :                    :                       :                                :                 +- CometNativeColumnarToRow\n               :                    :                   
    :                                :                    +- CometProject\n               :                    :                       :                                :                       +- CometFilter\n               :                    :                       :                                :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                       :                                +- BroadcastExchange\n               :                    :                       :                                   +- CometNativeColumnarToRow\n               :                    :                       :                                      +- CometProject\n               :                    :                       :                                         +- CometFilter\n               :                    :                       :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                       +- BroadcastExchange\n               :                    :                          +- Project\n               :                    :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                                +- CometNativeColumnarToRow\n               :                    :                                   +- CometSort\n               :                    :                                      +- CometColumnarExchange\n               :                    :                                         +- HashAggregate\n               :                    :                                            +- CometNativeColumnarToRow\n               :                    :                                               +- CometColumnarExchange\n               :                    :                                                  +- HashAggregate\n               :                    :                                                     +- Project\n               :                    :                                                        +- BroadcastHashJoin\n               :                    :                                                           :- Filter\n               :                    :                                                           :  +- ColumnarToRow\n               :                    :                                                           :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :                                                           :           +- SubqueryBroadcast\n               :                    :                                                           :              +- BroadcastExchange\n               :                    :                                                           :                 +- CometNativeColumnarToRow\n               :                    :                                                           :                    +- CometProject\n               :                    :                           
                                :                       +- CometFilter\n               :                    :                                                           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                                                           +- BroadcastExchange\n               :                    :                                                              +- CometNativeColumnarToRow\n               :                    :                                                                 +- CometProject\n               :                    :                                                                    +- CometFilter\n               :                    :                                                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    +- CometSort\n               :                       +- CometColumnarExchange\n               :                          +- HashAggregate\n               :                             +- CometNativeColumnarToRow\n               :                                +- CometColumnarExchange\n               :                                   +- HashAggregate\n               :                                      +- Project\n               :                                         +- BroadcastHashJoin\n               :                                            :- Project\n               :                                            :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                            :     +- CometNativeColumnarToRow\n               :                                            :        +- CometSort\n               :                                            :           +- CometColumnarExchange\n               :                                            :              +- HashAggregate\n               :                                            :                 +- CometNativeColumnarToRow\n               :                                            :                    +- CometColumnarExchange\n               :                                            :                       +- HashAggregate\n               :                                            :                          +- Project\n               :                                            :                             +- BroadcastHashJoin\n               :                                            :                                :- Filter\n               :                                            :                                :  +- ColumnarToRow\n               :                                            :                                :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                            :                                :           +- ReusedSubquery\n               :                                            :                                +- BroadcastExchange\n               :                                            :       
                            +- CometNativeColumnarToRow\n               :                                            :                                      +- CometProject\n               :                                            :                                         +- CometFilter\n               :                                            :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                            +- BroadcastExchange\n               :                                               +- Project\n               :                                                  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                                     +- CometNativeColumnarToRow\n               :                                                        +- CometSort\n               :                                                           +- CometColumnarExchange\n               :                                                              +- HashAggregate\n               :                                                                 +- CometNativeColumnarToRow\n               :                                                                    +- CometColumnarExchange\n               :                                                                       +- HashAggregate\n               :                                                                          +- Project\n               :                                                                             +- BroadcastHashJoin\n               :                                                                                :- Filter\n               :                                                                                :  +- ColumnarToRow\n               :                                                                                :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                                                                :           +- ReusedSubquery\n               :                                                                                +- BroadcastExchange\n               :                                                                                   +- CometNativeColumnarToRow\n               :                                                                                      +- CometProject\n               :                                                                                         +- CometFilter\n               :                                                                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- Project\n                     +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                        +- CometNativeColumnarToRow\n                           +- CometSort\n                              +- CometExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometSortMergeJoin\n                                          :- CometSort\n                                          :  +- CometColumnarExchange\n                                          :     +- HashAggregate\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometColumnarExchange\n                                          :              +- HashAggregate\n                                          :                 +- Project\n                                          :                    +- BroadcastHashJoin\n                                          :                       :- Project\n                                          :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                       :     +- CometNativeColumnarToRow\n                                          :                       :        +- CometSort\n                                          :                       :           +- CometColumnarExchange\n                                          :                       :              +- HashAggregate\n                                          :                       :                 +- CometNativeColumnarToRow\n                                          :                       :                    +- CometColumnarExchange\n                                          :                       :                       +- HashAggregate\n                                          :                       :                          +- Project\n                                          :                       :                             +- BroadcastHashJoin\n                                          :                       :                                :- Filter\n                                          :                       :                                :  +- ColumnarToRow\n                                          :                       :                                :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                       :                                :           +- SubqueryBroadcast\n                                          :                       :                                :              +- BroadcastExchange\n                                          :                       :                                :                 +- CometNativeColumnarToRow\n                                          :                       :                                :                    +- CometProject\n                                          :                       :                           
     :                       +- CometFilter\n                                          :                       :                                :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                       :                                +- BroadcastExchange\n                                          :                       :                                   +- CometNativeColumnarToRow\n                                          :                       :                                      +- CometProject\n                                          :                       :                                         +- CometFilter\n                                          :                       :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                       +- BroadcastExchange\n                                          :                          +- Project\n                                          :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                                +- CometNativeColumnarToRow\n                                          :                                   +- CometSort\n                                          :                                      +- CometColumnarExchange\n                                          :                                         +- HashAggregate\n                                          :                                            +- CometNativeColumnarToRow\n                                          :                                               +- CometColumnarExchange\n                                          :                                                  +- HashAggregate\n                                          :                                                     +- Project\n                                          :                                                        +- BroadcastHashJoin\n                                          :                                                           :- Filter\n                                          :                                                           :  +- ColumnarToRow\n                                          :                                                           :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                                                           :           +- SubqueryBroadcast\n                                          :                                                           :              +- BroadcastExchange\n                                          :                                                           :                 +- CometNativeColumnarToRow\n                                          :                                                           :                    +- CometProject\n                                          :                                  
                         :                       +- CometFilter\n                                          :                                                           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                                                           +- BroadcastExchange\n                                          :                                                              +- CometNativeColumnarToRow\n                                          :                                                                 +- CometProject\n                                          :                                                                    +- CometFilter\n                                          :                                                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- CometSort\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- CometNativeColumnarToRow\n                                                      +- CometColumnarExchange\n                                                         +- HashAggregate\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- Project\n                                                                  :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                  :     +- CometNativeColumnarToRow\n                                                                  :        +- CometSort\n                                                                  :           +- CometColumnarExchange\n                                                                  :              +- HashAggregate\n                                                                  :                 +- CometNativeColumnarToRow\n                                                                  :                    +- CometColumnarExchange\n                                                                  :                       +- HashAggregate\n                                                                  :                          +- Project\n                                                                  :                             +- BroadcastHashJoin\n                                                                  :                                :- Filter\n                                                                  :                                :  +- ColumnarToRow\n                                                                  :                                :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                  :                                :           +- ReusedSubquery\n                                                                  :                                +- BroadcastExchange\n                                                                  :                                   +- CometNativeColumnarToRow\n                                                                  :                                      +- CometProject\n                                                                  :                                         +- CometFilter\n                                                                  :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                           +- CometNativeColumnarToRow\n                                                                              +- CometSort\n                                                                                 +- CometColumnarExchange\n                                                                                    +- HashAggregate\n                                                                                       +- CometNativeColumnarToRow\n                                                                                          +- CometColumnarExchange\n                                                                                             +- HashAggregate\n                                                                                                +- Project\n                                                                                                   +- BroadcastHashJoin\n                                                                                                      :- Filter\n                                                                                                      :  +- ColumnarToRow\n                                                                                                      :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                                                      :           +- ReusedSubquery\n                                                                                                      +- BroadcastExchange\n                                                                                                         +- CometNativeColumnarToRow\n                                                                                                            +- CometProject\n                                                                                                               +- CometFilter\n                                                                                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 82 out of 196 eligible operators (41%). Final plan contains 42 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q51a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :  +- CometNativeColumnarToRow\n               :     +- CometSort\n               :        +- CometExchange\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometSortMergeJoin\n               :                    :- CometSort\n               :                    :  +- CometColumnarExchange\n               :                    :     +- HashAggregate\n               :                    :        +- CometNativeColumnarToRow\n               :                    :           +- CometColumnarExchange\n               :                    :              +- HashAggregate\n               :                    :                 +- Project\n               :                    :                    +- BroadcastHashJoin\n               :                    :                       :- Project\n               :                    :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                       :     +- CometNativeColumnarToRow\n               :                    :                       :        +- CometSort\n               :                    :                       :           +- CometExchange\n               :                    :                       :              +- CometHashAggregate\n               :                    :                       :                 +- CometExchange\n               :                    :                       :                    +- CometHashAggregate\n               :                    :                       :                       +- CometProject\n               :                    :                       :                          +- CometBroadcastHashJoin\n               :                    :                       :                             :- CometFilter\n               :                    :                       :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                       :                             :        +- SubqueryBroadcast\n               :                    :                       :                             :           +- BroadcastExchange\n               :                    :                       :                             :              +- CometNativeColumnarToRow\n               :                    :                       :                             :                 +- CometProject\n               :                    :                       :                             :                    +- CometFilter\n               :                    :                       :                             :            
           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                       :                             +- CometBroadcastExchange\n               :                    :                       :                                +- CometProject\n               :                    :                       :                                   +- CometFilter\n               :                    :                       :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                       +- BroadcastExchange\n               :                    :                          +- Project\n               :                    :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                                +- CometNativeColumnarToRow\n               :                    :                                   +- CometSort\n               :                    :                                      +- CometExchange\n               :                    :                                         +- CometHashAggregate\n               :                    :                                            +- CometExchange\n               :                    :                                               +- CometHashAggregate\n               :                    :                                                  +- CometProject\n               :                    :                                                     +- CometBroadcastHashJoin\n               :                    :                                                        :- CometFilter\n               :                    :                                                        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                                                        :        +- SubqueryBroadcast\n               :                    :                                                        :           +- BroadcastExchange\n               :                    :                                                        :              +- CometNativeColumnarToRow\n               :                    :                                                        :                 +- CometProject\n               :                    :                                                        :                    +- CometFilter\n               :                    :                                                        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                                                        +- CometBroadcastExchange\n               :                    :                                                           +- CometProject\n               :                    :                                                              +- CometFilter\n               :                    :                                                                 +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometSort\n               :                       +- CometColumnarExchange\n               :                          +- HashAggregate\n               :                             +- CometNativeColumnarToRow\n               :                                +- CometColumnarExchange\n               :                                   +- HashAggregate\n               :                                      +- Project\n               :                                         +- BroadcastHashJoin\n               :                                            :- Project\n               :                                            :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                            :     +- CometNativeColumnarToRow\n               :                                            :        +- CometSort\n               :                                            :           +- CometExchange\n               :                                            :              +- CometHashAggregate\n               :                                            :                 +- CometExchange\n               :                                            :                    +- CometHashAggregate\n               :                                            :                       +- CometProject\n               :                                            :                          +- CometBroadcastHashJoin\n               :                                            :                             :- CometFilter\n               :                                            :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                            :                             :        +- ReusedSubquery\n               :                                            :                             +- CometBroadcastExchange\n               :                                            :                                +- CometProject\n               :                                            :                                   +- CometFilter\n               :                                            :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                            +- BroadcastExchange\n               :                                               +- Project\n               :                                                  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                                     +- CometNativeColumnarToRow\n               :                                                        +- CometSort\n               :                                                           +- CometExchange\n               :                                                              +- CometHashAggregate\n               :                                                                 +- CometExchange\n               :                                                                    +- CometHashAggregate\n               :                                                                       +- CometProject\n               :                                                                          +- CometBroadcastHashJoin\n               :                                                                             :- CometFilter\n               :                                                                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                                                             :        +- ReusedSubquery\n               :                                                                             +- CometBroadcastExchange\n               :                                                                                +- CometProject\n               :                                                                                   +- CometFilter\n               :                                                                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- Project\n                     +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                        +- CometNativeColumnarToRow\n                           +- CometSort\n                              +- CometExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometSortMergeJoin\n                                          :- CometSort\n                                          :  +- CometColumnarExchange\n                                          :     +- HashAggregate\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometColumnarExchange\n                                          :              +- HashAggregate\n                                          :                 +- Project\n                                          :                    +- BroadcastHashJoin\n                                          :                       :- Project\n                                          :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                       :     +- CometNativeColumnarToRow\n                                          :                       :        +- CometSort\n                                          :                       :           +- CometExchange\n                                          :                       :              +- CometHashAggregate\n                                          :                       :                 +- CometExchange\n                                          :                       :                    +- CometHashAggregate\n                                          :                       :                       +- CometProject\n                                          :                       :                          +- CometBroadcastHashJoin\n                                          :                       :                             :- CometFilter\n                                          :                       :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                          :                       :                             :        +- SubqueryBroadcast\n                                          :                       :                             :           +- BroadcastExchange\n                                          :                       :                             :              +- CometNativeColumnarToRow\n                                          :                       :                             :                 +- CometProject\n                                          :                       :                             :                    +- CometFilter\n                                          :                       :                             :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                       :                             +- CometBroadcastExchange\n                                          :                       :                                +- CometProject\n                                          :                       :                                   +- CometFilter\n                                          :                       :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                       +- BroadcastExchange\n                                          :                          +- Project\n                                          :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                                +- CometNativeColumnarToRow\n                                          :                                   +- CometSort\n                                          :                                      +- CometExchange\n                                          :                                         +- CometHashAggregate\n                                          :                                            +- CometExchange\n                                          :                                               +- CometHashAggregate\n                                          :                                                  +- CometProject\n                                          :                                                     +- CometBroadcastHashJoin\n                                          :                                                        :- CometFilter\n                                          :                                                        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                          :                                                        :        +- SubqueryBroadcast\n                                          :                                                        :           +- BroadcastExchange\n                                          :                                                        :              +- CometNativeColumnarToRow\n                                          :                                                        :                 +- CometProject\n                                          :                                                        :                    +- CometFilter\n                                          :                                                        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                                                        +- CometBroadcastExchange\n                                          :                                                           +- CometProject\n                                          :                                                              +- CometFilter\n                                          :                                                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          +- CometSort\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- CometNativeColumnarToRow\n                                                      +- CometColumnarExchange\n                                                         +- HashAggregate\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- Project\n                                                                  :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). 
To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                  :     +- CometNativeColumnarToRow\n                                                                  :        +- CometSort\n                                                                  :           +- CometExchange\n                                                                  :              +- CometHashAggregate\n                                                                  :                 +- CometExchange\n                                                                  :                    +- CometHashAggregate\n                                                                  :                       +- CometProject\n                                                                  :                          +- CometBroadcastHashJoin\n                                                                  :                             :- CometFilter\n                                                                  :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                  :                             :        +- ReusedSubquery\n                                                                  :                             +- CometBroadcastExchange\n                                                                  :                                +- CometProject\n                                                                  :                                   +- CometFilter\n                                                                  :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                           +- CometNativeColumnarToRow\n                                                                              +- CometSort\n                                                                                 +- CometExchange\n                                                                                    +- CometHashAggregate\n                                                                                       +- CometExchange\n                                                                                          +- CometHashAggregate\n                                                                                             +- CometProject\n                                                                                                +- CometBroadcastHashJoin\n                                                                                                   :- CometFilter\n                                                                                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                                                   :        +- ReusedSubquery\n                                                                                                   +- CometBroadcastExchange\n                                                                                                      +- CometProject\n                                                                                                         +- CometFilter\n                                                                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 138 out of 196 eligible operators (70%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q57.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                        
          +- CometNativeScan parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). 
To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.call_center\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q57.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q5a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- HashAggregate\n               :              :  +- CometNativeColumnarToRow\n               :              :     +- CometColumnarExchange\n               :              :        +- HashAggregate\n               :              :           +- Project\n               :              :              +- BroadcastHashJoin\n               :              :                 :- Project\n               :              :                 :  +- BroadcastHashJoin\n               :              :                 :     :- Union\n               :              :                 :     :  :- Project\n               :              :                 :     :  :  +- Filter\n               :              :                 :     :  :     +- ColumnarToRow\n               :              :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :  :              +- SubqueryBroadcast\n               :              :                 :     :  :                 +- BroadcastExchange\n               :              :                 :     :  :                    +- CometNativeColumnarToRow\n               :              :                 :     :  :                       +- CometProject\n               :              :                 :     :  :                          +- CometFilter\n               :              :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                 :     :  +- Project\n               :              :                 :     :     +- Filter\n               :              :                 :     :        +- ColumnarToRow\n               :              :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :                 +- ReusedSubquery\n               :              :                 :     +- BroadcastExchange\n               :              :                 :        +- CometNativeColumnarToRow\n               :              :                 :           +- CometProject\n               :              :                 :              +- CometFilter\n               :              :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                 +- BroadcastExchange\n               :              :                    +- CometNativeColumnarToRow\n               :              :                       +- CometProject\n               :              :                          +- CometFilter\n               :              :                             +- CometNativeScan parquet spark_catalog.default.store\n               :              :- HashAggregate\n               :              :  +- CometNativeColumnarToRow\n               :              :     +- CometColumnarExchange\n               :              : 
       +- HashAggregate\n               :              :           +- Project\n               :              :              +- BroadcastHashJoin\n               :              :                 :- Project\n               :              :                 :  +- BroadcastHashJoin\n               :              :                 :     :- Union\n               :              :                 :     :  :- Project\n               :              :                 :     :  :  +- Filter\n               :              :                 :     :  :     +- ColumnarToRow\n               :              :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :  :              +- ReusedSubquery\n               :              :                 :     :  +- Project\n               :              :                 :     :     +- Filter\n               :              :                 :     :        +- ColumnarToRow\n               :              :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :                 +- ReusedSubquery\n               :              :                 :     +- BroadcastExchange\n               :              :                 :        +- CometNativeColumnarToRow\n               :              :                 :           +- CometProject\n               :              :                 :              +- CometFilter\n               :              :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                 +- BroadcastExchange\n               :              :                    +- CometNativeColumnarToRow\n               :              :                       +- CometProject\n               :              :                          +- CometFilter\n               :              :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :              +- HashAggregate\n               :                 +- CometNativeColumnarToRow\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- Project\n               :                             +- BroadcastHashJoin\n               :                                :- Project\n               :                                :  +- BroadcastHashJoin\n               :                                :     :- Union\n               :                                :     :  :- Project\n               :                                :     :  :  +- Filter\n               :                                :     :  :     +- ColumnarToRow\n               :                                :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                :     :  :              +- ReusedSubquery\n               :                                :     :  +- Project\n               :                                :     :     +- BroadcastHashJoin\n               :                                :     :        :- BroadcastExchange\n               :                               
 :     :        :  +- ColumnarToRow\n               :                                :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                :     :        :           +- ReusedSubquery\n               :                                :     :        +- CometNativeColumnarToRow\n               :                                :     :           +- CometProject\n               :                                :     :              +- CometFilter\n               :                                :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n               :                                :     +- BroadcastExchange\n               :                                :        +- CometNativeColumnarToRow\n               :                                :           +- CometProject\n               :                                :              +- CometFilter\n               :                                :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                +- BroadcastExchange\n               :                                   +- CometNativeColumnarToRow\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :                                            +- CometNativeScan parquet spark_catalog.default.web_site\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- HashAggregate\n               :                          :  +- CometNativeColumnarToRow\n               :                          :     +- CometColumnarExchange\n               :                          :        +- HashAggregate\n               :                          :           +- Project\n               :                          :              +- BroadcastHashJoin\n               :                          :                 :- Project\n               :                          :                 :  +- BroadcastHashJoin\n               :                          :                 :     :- Union\n               :                          :                 :     :  :- Project\n               :                          :                 :     :  :  +- Filter\n               :                          :                 :     :  :     +- ColumnarToRow\n               :                          :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                 :     :  :              +- SubqueryBroadcast\n               :                          :                 :     :  :                 +- BroadcastExchange\n               :                          :                 :     :  :                    +- CometNativeColumnarToRow\n               :                          :                 :     :  :                      
 +- CometProject\n               :                          :                 :     :  :                          +- CometFilter\n               :                          :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                 :     :  +- Project\n               :                          :                 :     :     +- Filter\n               :                          :                 :     :        +- ColumnarToRow\n               :                          :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                 :     :                 +- ReusedSubquery\n               :                          :                 :     +- BroadcastExchange\n               :                          :                 :        +- CometNativeColumnarToRow\n               :                          :                 :           +- CometProject\n               :                          :                 :              +- CometFilter\n               :                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                 +- BroadcastExchange\n               :                          :                    +- CometNativeColumnarToRow\n               :                          :                       +- CometProject\n               :                          :                          +- CometFilter\n               :                          :                             +- CometNativeScan parquet spark_catalog.default.store\n               :                          :- HashAggregate\n               :                          :  +- CometNativeColumnarToRow\n               :                          :     +- CometColumnarExchange\n               :                          :        +- HashAggregate\n               :                          :           +- Project\n               :                          :              +- BroadcastHashJoin\n               :                          :                 :- Project\n               :                          :                 :  +- BroadcastHashJoin\n               :                          :                 :     :- Union\n               :                          :                 :     :  :- Project\n               :                          :                 :     :  :  +- Filter\n               :                          :                 :     :  :     +- ColumnarToRow\n               :                          :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                 :     :  :              +- ReusedSubquery\n               :                          :                 :     :  +- Project\n               :                          :                 :     :     +- Filter\n               :                          :                 :     :        +- ColumnarToRow\n               :                          :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    
      :                 :     :                 +- ReusedSubquery\n               :                          :                 :     +- BroadcastExchange\n               :                          :                 :        +- CometNativeColumnarToRow\n               :                          :                 :           +- CometProject\n               :                          :                 :              +- CometFilter\n               :                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                 +- BroadcastExchange\n               :                          :                    +- CometNativeColumnarToRow\n               :                          :                       +- CometProject\n               :                          :                          +- CometFilter\n               :                          :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :                          +- HashAggregate\n               :                             +- CometNativeColumnarToRow\n               :                                +- CometColumnarExchange\n               :                                   +- HashAggregate\n               :                                      +- Project\n               :                                         +- BroadcastHashJoin\n               :                                            :- Project\n               :                                            :  +- BroadcastHashJoin\n               :                                            :     :- Union\n               :                                            :     :  :- Project\n               :                                            :     :  :  +- Filter\n               :                                            :     :  :     +- ColumnarToRow\n               :                                            :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                            :     :  :              +- ReusedSubquery\n               :                                            :     :  +- Project\n               :                                            :     :     +- BroadcastHashJoin\n               :                                            :     :        :- BroadcastExchange\n               :                                            :     :        :  +- ColumnarToRow\n               :                                            :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                            :     :        :           +- ReusedSubquery\n               :                                            :     :        +- CometNativeColumnarToRow\n               :                                            :     :           +- CometProject\n               :                                            :     :              +- CometFilter\n               :                                            :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n               :                                            :     +- BroadcastExchange\n               :                                            :        +- 
CometNativeColumnarToRow\n               :                                            :           +- CometProject\n               :                                            :              +- CometFilter\n               :                                            :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                            +- BroadcastExchange\n               :                                               +- CometNativeColumnarToRow\n               :                                                  +- CometProject\n               :                                                     +- CometFilter\n               :                                                        +- CometNativeScan parquet spark_catalog.default.web_site\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- HashAggregate\n                                          :  +- CometNativeColumnarToRow\n                                          :     +- CometColumnarExchange\n                                          :        +- HashAggregate\n                                          :           +- Project\n                                          :              +- BroadcastHashJoin\n                                          :                 :- Project\n                                          :                 :  +- BroadcastHashJoin\n                                          :                 :     :- Union\n                                          :                 :     :  :- Project\n                                          :                 :     :  :  +- Filter\n                                          :                 :     :  :     +- ColumnarToRow\n                                          :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                 :     :  :              +- SubqueryBroadcast\n                                          :                 :     :  :                 +- BroadcastExchange\n                                          :                 :     :  :                    +- CometNativeColumnarToRow\n                                          :                 :     :  :                       +- CometProject\n                                          :                 :     :  :                          +- CometFilter\n                                          :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                 :     :  +- Project\n                                          :                 :     :     +- Filter\n                                          :                 :     :        +- ColumnarToRow\n                                          :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n       
                                   :                 :     :                 +- ReusedSubquery\n                                          :                 :     +- BroadcastExchange\n                                          :                 :        +- CometNativeColumnarToRow\n                                          :                 :           +- CometProject\n                                          :                 :              +- CometFilter\n                                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                 +- BroadcastExchange\n                                          :                    +- CometNativeColumnarToRow\n                                          :                       +- CometProject\n                                          :                          +- CometFilter\n                                          :                             +- CometNativeScan parquet spark_catalog.default.store\n                                          :- HashAggregate\n                                          :  +- CometNativeColumnarToRow\n                                          :     +- CometColumnarExchange\n                                          :        +- HashAggregate\n                                          :           +- Project\n                                          :              +- BroadcastHashJoin\n                                          :                 :- Project\n                                          :                 :  +- BroadcastHashJoin\n                                          :                 :     :- Union\n                                          :                 :     :  :- Project\n                                          :                 :     :  :  +- Filter\n                                          :                 :     :  :     +- ColumnarToRow\n                                          :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                 :     :  :              +- ReusedSubquery\n                                          :                 :     :  +- Project\n                                          :                 :     :     +- Filter\n                                          :                 :     :        +- ColumnarToRow\n                                          :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                 :     :                 +- ReusedSubquery\n                                          :                 :     +- BroadcastExchange\n                                          :                 :        +- CometNativeColumnarToRow\n                                          :                 :           +- CometProject\n                                          :                 :              +- CometFilter\n                                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                 +- BroadcastExchange\n                                          :                    +- CometNativeColumnarToRow\n                
                          :                       +- CometProject\n                                          :                          +- CometFilter\n                                          :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n                                          +- HashAggregate\n                                             +- CometNativeColumnarToRow\n                                                +- CometColumnarExchange\n                                                   +- HashAggregate\n                                                      +- Project\n                                                         +- BroadcastHashJoin\n                                                            :- Project\n                                                            :  +- BroadcastHashJoin\n                                                            :     :- Union\n                                                            :     :  :- Project\n                                                            :     :  :  +- Filter\n                                                            :     :  :     +- ColumnarToRow\n                                                            :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                            :     :  :              +- ReusedSubquery\n                                                            :     :  +- Project\n                                                            :     :     +- BroadcastHashJoin\n                                                            :     :        :- BroadcastExchange\n                                                            :     :        :  +- ColumnarToRow\n                                                            :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                            :     :        :           +- ReusedSubquery\n                                                            :     :        +- CometNativeColumnarToRow\n                                                            :     :           +- CometProject\n                                                            :     :              +- CometFilter\n                                                            :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n                                                            :     +- BroadcastExchange\n                                                            :        +- CometNativeColumnarToRow\n                                                            :           +- CometProject\n                                                            :              +- CometFilter\n                                                            :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                            +- BroadcastExchange\n                                                               +- CometNativeColumnarToRow\n                                                                  +- CometProject\n                                                                     +- CometFilter\n                                                                        +- 
CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 89 out of 263 eligible operators (33%). Final plan contains 57 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q5a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometUnion\n               :           :              :     :  :- CometProject\n               :           :              :     :  :  +- CometFilter\n               :           :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :              :     :  :           +- SubqueryBroadcast\n               :           :              :     :  :              +- BroadcastExchange\n               :           :              :     :  :                 +- CometNativeColumnarToRow\n               :           :              :     :  :                    +- CometProject\n               :           :              :     :  :                       +- CometFilter\n               :           :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :  +- CometProject\n               :           :              :     :     +- CometFilter\n               :           :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :           :              :     :              +- ReusedSubquery\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometUnion\n               :           :              :     :  :- CometProject\n               :           :              :     :  :  +- CometFilter\n               :           :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :              :     :  :           +- ReusedSubquery\n               
:           :              :     :  +- CometProject\n               :           :              :     :     +- CometFilter\n               :           :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :           :              :     :              +- ReusedSubquery\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometProject\n               :                          :  +- CometBroadcastHashJoin\n               :                          :     :- CometUnion\n               :                          :     :  :- CometProject\n               :                          :     :  :  +- CometFilter\n               :                          :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :     :  :           +- ReusedSubquery\n               :                          :     :  +- CometProject\n               :                          :     :     +- CometBroadcastHashJoin\n               :                          :     :        :- CometBroadcastExchange\n               :                          :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                          :     :        :        +- ReusedSubquery\n               :                          :     :        +- CometProject\n               :                          :     :           +- CometFilter\n               :                          :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :     +- CometBroadcastExchange\n               :                          :        +- CometProject\n               :                          :           +- CometFilter\n               :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :               
  +- CometUnion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometUnion\n               :                    :              :     :  :- CometProject\n               :                    :              :     :  :  +- CometFilter\n               :                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :              :     :  :           +- SubqueryBroadcast\n               :                    :              :     :  :              +- BroadcastExchange\n               :                    :              :     :  :                 +- CometNativeColumnarToRow\n               :                    :              :     :  :                    +- CometProject\n               :                    :              :     :  :                       +- CometFilter\n               :                    :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :  +- CometProject\n               :                    :              :     :     +- CometFilter\n               :                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :                    :              :     :              +- ReusedSubquery\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometUnion\n               :                    :              :     :  :- CometProject\n               :                    :              :     :  :  +- CometFilter\n               :                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :              :     :  
:           +- ReusedSubquery\n               :                    :              :     :  +- CometProject\n               :                    :              :     :     +- CometFilter\n               :                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                    :              :     :              +- ReusedSubquery\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :                    +- CometHashAggregate\n               :                       +- CometExchange\n               :                          +- CometHashAggregate\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometProject\n               :                                   :  +- CometBroadcastHashJoin\n               :                                   :     :- CometUnion\n               :                                   :     :  :- CometProject\n               :                                   :     :  :  +- CometFilter\n               :                                   :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :     :  :           +- ReusedSubquery\n               :                                   :     :  +- CometProject\n               :                                   :     :     +- CometBroadcastHashJoin\n               :                                   :     :        :- CometBroadcastExchange\n               :                                   :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                   :     :        :        +- ReusedSubquery\n               :                                   :     :        +- CometProject\n               :                                   :     :           +- CometFilter\n               :                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :     +- CometBroadcastExchange\n               :                                   :        +- CometProject\n               :                                   :           +- CometFilter\n               :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :         
                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometUnion\n                                    :              :     :  :- CometProject\n                                    :              :     :  :  +- CometFilter\n                                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :              :     :  :           +- SubqueryBroadcast\n                                    :              :     :  :              +- BroadcastExchange\n                                    :              :     :  :                 +- CometNativeColumnarToRow\n                                    :              :     :  :                    +- CometProject\n                                    :              :     :  :                       +- CometFilter\n                                    :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :  +- CometProject\n                                    :              :     :     +- CometFilter\n                                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                    :              :     :              +- ReusedSubquery\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                              
      :              :     :- CometUnion\n                                    :              :     :  :- CometProject\n                                    :              :     :  :  +- CometFilter\n                                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :              :     :  :           +- ReusedSubquery\n                                    :              :     :  +- CometProject\n                                    :              :     :     +- CometFilter\n                                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                    :              :     :              +- ReusedSubquery\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometUnion\n                                                   :     :  :- CometProject\n                                                   :     :  :  +- CometFilter\n                                                   :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     :  :           +- ReusedSubquery\n                                                   :     :  +- CometProject\n                                                   :     :     +- CometBroadcastHashJoin\n                                                   :     :        :- CometBroadcastExchange\n                                                   :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                                   :     :        :        +- ReusedSubquery\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           
+- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 242 out of 263 eligible operators (92%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q6.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- CometNativeColumnarToRow\n                     :     :     :  +- CometProject\n                     :     :     :     +- CometBroadcastHashJoin\n                     :     :     :        :- CometProject\n                     :     :     :        :  +- CometFilter\n                     :     :     :        :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     :     :        +- CometBroadcastExchange\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     +- BroadcastExchange\n                     :     :        +- Filter\n                     :     :           +- ColumnarToRow\n                     :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :                    +- SubqueryBroadcast\n                     :     :                       +- BroadcastExchange\n                     :     :                          +- CometNativeColumnarToRow\n                     :     :                             +- CometProject\n                     :     :                                +- CometFilter\n                     :     :                                   :  +- Subquery\n                     :     :                                   :     +- CometNativeColumnarToRow\n                     :     :                                   :        +- CometHashAggregate\n                     :     :                                   :           +- CometExchange\n                     :     :                                   :              +- CometHashAggregate\n                     :     :                                   :                 +- CometProject\n                     :     :                                   :                    +- CometFilter\n                     :     :                                   :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 :  +- Subquery\n                     :                 :     +- CometNativeColumnarToRow\n                     :                 :        +- CometHashAggregate\n                     :                 :           +- CometExchange\n                     :                 :              +- CometHashAggregate\n                     :                 :                 +- CometProject\n                     :                 :                    +- CometFilter\n                     :                 :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :       
          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometFilter\n                                 :  +- CometNativeScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 39 out of 58 eligible operators (67%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q6.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometFilter\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometFilter\n                     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometFilter\n                     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :                 +- SubqueryBroadcast\n                     :     :                    +- BroadcastExchange\n                     :     :                       +- CometNativeColumnarToRow\n                     :     :                          +- CometProject\n                     :     :                             +- CometFilter\n                     :     :                                :  +- Subquery\n                     :     :                                :     +- CometNativeColumnarToRow\n                     :     :                                :        +- CometHashAggregate\n                     :     :                                :           +- CometExchange\n                     :     :                                :              +- CometHashAggregate\n                     :     :                                :                 +- CometProject\n                     :     :                                :                    +- CometFilter\n                     :     :                                :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              :  +- ReusedSubquery\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometFilter\n                              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n    
                                            +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 48 out of 52 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q64.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n            :                 :     :   
  :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     
                  +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     
:     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                
 :        +- CometFilter\n            :                 :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometNativeScan parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     
:     :     :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                           
   :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :   
  :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n             
                 :     :        +- CometFilter\n                              :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 228 out of 242 eligible operators (94%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q64.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n            :                 :     :     :     
:     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             
:- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometFilter\n            :       
          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :   
  :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     : 
    :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     : 
    :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometFilter\n                              :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 238 out of 242 eligible operators (98%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q67a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- Window\n      +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- WindowGroupLimit\n                     +- Sort\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Project\n                           :                 :     :  +- BroadcastHashJoin\n                           :                 :     :     :- Filter\n                           :                 :     :     :  +- ColumnarToRow\n                           :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :     :           +- SubqueryBroadcast\n                           :                 :     :     :              +- BroadcastExchange\n                           :                 :     :     :                 +- CometNativeColumnarToRow\n                           :                 :     :     :                    +- CometProject\n                           :                 :     :     :                       +- CometFilter\n                           :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     :     +- BroadcastExchange\n                           :                 :     :        +- CometNativeColumnarToRow\n                           :                 :     :           +- CometProject\n                           :                 :     :              +- CometFilter\n                           :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                 +- BroadcastExchange\n                           :                    +- CometNativeColumnarToRow\n                           :                       +- CometProject\n                           :                          +- CometFilter\n                           :                             +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :   
              +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- 
HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                
           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :      
                       :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             : 
 +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- 
Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- 
BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Project\n                                                         :     :  +- BroadcastHashJoin\n                                                         :     :     :- 
Filter\n                                                         :     :     :  +- ColumnarToRow\n                                                         :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :     :           +- SubqueryBroadcast\n                                                         :     :     :              +- BroadcastExchange\n                                                         :     :     :                 +- CometNativeColumnarToRow\n                                                         :     :     :                    +- CometProject\n                                                         :     :     :                       +- CometFilter\n                                                         :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     :     +- BroadcastExchange\n                                                         :     :        +- CometNativeColumnarToRow\n                                                         :     :           +- CometProject\n                                                         :     :              +- CometFilter\n                                                         :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                                         :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan parquet spark_catalog.default.store\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 127 out of 285 eligible operators (44%). Final plan contains 63 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q67a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- Window\n      +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                     +- CometNativeColumnarToRow\n                        +- CometSort\n                           +- CometUnion\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometProject\n                              :           +- CometBroadcastHashJoin\n                              :              :- CometProject\n                              :              :  +- CometBroadcastHashJoin\n                              :              :     :- CometProject\n                              :              :     :  +- CometBroadcastHashJoin\n                              :              :     :     :- CometFilter\n                              :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :              :     :     :        +- SubqueryBroadcast\n                              :              :     :     :           +- BroadcastExchange\n                              :              :     :     :              +- CometNativeColumnarToRow\n                              :              :     :     :                 +- CometProject\n                              :              :     :     :                    +- CometFilter\n                              :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :              :     :     +- CometBroadcastExchange\n                              :              :     :        +- CometProject\n                              :              :     :           +- CometFilter\n                              :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :              :     +- CometBroadcastExchange\n                              :              :        +- CometProject\n                              :              :           +- CometFilter\n                              :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :              +- CometBroadcastExchange\n                              :                 +- CometProject\n                              :                    +- CometFilter\n                              :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       
:  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                         
     :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n               
               :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :  
                     +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              
:              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :    
 :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometHashAggregate\n                                 +- CometExchange\n                                    +- CometHashAggregate\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometProject\n                                                      :     :  +- CometBroadcastHashJoin\n                                                      :     :     :- CometFilter\n                                                      :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :     :        +- SubqueryBroadcast\n                                                      :     :     :           +- BroadcastExchange\n                                                      :     :     :              +- CometNativeColumnarToRow\n                                                      :     :     :                 +- CometProject\n                                                      :     :     :                    +- CometFilter\n                                                      
:     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :     +- CometBroadcastExchange\n                                                      :     :        +- CometProject\n                                                      :     :           +- CometFilter\n                                                      :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     +- CometBroadcastExchange\n                                                      :        +- CometProject\n                                                      :           +- CometFilter\n                                                      :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 262 out of 285 eligible operators (91%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q70a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Filter\n                           :                 :     :  +- ColumnarToRow\n                           :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :           +- SubqueryBroadcast\n                           :                 :     :              +- BroadcastExchange\n                           :                 :     :                 +- CometNativeColumnarToRow\n                           :                 :     :                    +- CometProject\n                           :                 :     :                       +- CometFilter\n                           :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 +- BroadcastExchange\n                           :                    +- Project\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometFilter\n                           :                          :     +- CometNativeScan parquet spark_catalog.default.store\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- Filter\n                           :                                   +- Window\n                           :                                      +- WindowGroupLimit\n                           :                                         +- Sort\n                           :                                            +- HashAggregate\n                           :                                               +- CometNativeColumnarToRow\n               
            :                                                  +- CometColumnarExchange\n                           :                                                     +- HashAggregate\n                           :                                                        +- Project\n                           :                                                           +- BroadcastHashJoin\n                           :                                                              :- Project\n                           :                                                              :  +- BroadcastHashJoin\n                           :                                                              :     :- Filter\n                           :                                                              :     :  +- ColumnarToRow\n                           :                                                              :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                                                              :     :           +- ReusedSubquery\n                           :                                                              :     +- BroadcastExchange\n                           :                                                              :        +- CometNativeColumnarToRow\n                           :                                                              :           +- CometProject\n                           :                                                              :              +- CometFilter\n                           :                                                              :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                                                              +- BroadcastExchange\n                           :                                                                 +- CometNativeColumnarToRow\n                           :                                                                    +- CometProject\n                           :                                                                       +- CometFilter\n                           :                                                                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Filter\n                           :                             :     :  +- ColumnarToRow\n                           :                             :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not 
support subqueries/dynamic pruning]\n                           :                             :     :           +- SubqueryBroadcast\n                           :                             :     :              +- BroadcastExchange\n                           :                             :     :                 +- CometNativeColumnarToRow\n                           :                             :     :                    +- CometProject\n                           :                             :     :                       +- CometFilter\n                           :                             :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             +- BroadcastExchange\n                           :                                +- Project\n                           :                                   +- BroadcastHashJoin\n                           :                                      :- CometNativeColumnarToRow\n                           :                                      :  +- CometFilter\n                           :                                      :     +- CometNativeScan parquet spark_catalog.default.store\n                           :                                      +- BroadcastExchange\n                           :                                         +- Project\n                           :                                            +- Filter\n                           :                                               +- Window\n                           :                                                  +- WindowGroupLimit\n                           :                                                     +- Sort\n                           :                                                        +- HashAggregate\n                           :                                                           +- CometNativeColumnarToRow\n                           :                                                              +- CometColumnarExchange\n                           :                                                                 +- HashAggregate\n                           :                                                                    +- Project\n                           :                                                                       +- BroadcastHashJoin\n                           :                                                                          :- Project\n                           :                                                                          :  +- BroadcastHashJoin\n                           :                                                                          :     :- Filter\n                           :                                                                          :     :  +- ColumnarToRow\n                           :                                                                          :     :     +-  Scan parquet 
spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                                                                          :     :           +- ReusedSubquery\n                           :                                                                          :     +- BroadcastExchange\n                           :                                                                          :        +- CometNativeColumnarToRow\n                           :                                                                          :           +- CometProject\n                           :                                                                          :              +- CometFilter\n                           :                                                                          :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                                                                          +- BroadcastExchange\n                           :                                                                             +- CometNativeColumnarToRow\n                           :                                                                                +- CometProject\n                           :                                                                                   +- CometFilter\n                           :                                                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Filter\n                                                         :     :  +- ColumnarToRow\n                                                         :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :           +- SubqueryBroadcast\n                                                         :     :              +- BroadcastExchange\n                                                         :     :                 +- CometNativeColumnarToRow\n                                                         :     :                    +- CometProject\n                                                         :     :                       +- CometFilter\n                                                         :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                        
                 :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         +- BroadcastExchange\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- CometNativeColumnarToRow\n                                                                  :  +- CometFilter\n                                                                  :     +- CometNativeScan parquet spark_catalog.default.store\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +- Filter\n                                                                           +- Window\n                                                                              +- WindowGroupLimit\n                                                                                 +- Sort\n                                                                                    +- HashAggregate\n                                                                                       +- CometNativeColumnarToRow\n                                                                                          +- CometColumnarExchange\n                                                                                             +- HashAggregate\n                                                                                                +- Project\n                                                                                                   +- BroadcastHashJoin\n                                                                                                      :- Project\n                                                                                                      :  +- BroadcastHashJoin\n                                                                                                      :     :- Filter\n                                                                                                      :     :  +- ColumnarToRow\n                                                                                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                                                      :     :           +- ReusedSubquery\n                                                                                                      :     +- BroadcastExchange\n                                                                                                      :        +- CometNativeColumnarToRow\n                                                                                                      :           +- CometProject\n                                                                                                      :              +- CometFilter\n                                           
                                                           :                 +- CometNativeScan parquet spark_catalog.default.store\n                                                                                                      +- BroadcastExchange\n                                                                                                         +- CometNativeColumnarToRow\n                                                                                                            +- CometProject\n                                                                                                               +- CometFilter\n                                                                                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 54 out of 156 eligible operators (34%). Final plan contains 30 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q70a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- CometNativeColumnarToRow\n                           :                 :  +- CometProject\n                           :                 :     +- CometBroadcastHashJoin\n                           :                 :        :- CometFilter\n                           :                 :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                 :        :        +- SubqueryBroadcast\n                           :                 :        :           +- BroadcastExchange\n                           :                 :        :              +- CometNativeColumnarToRow\n                           :                 :        :                 +- CometProject\n                           :                 :        :                    +- CometFilter\n                           :                 :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                 :        +- CometBroadcastExchange\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                 +- BroadcastExchange\n                           :                    +- Project\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometFilter\n                           :                          :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- Filter\n                           :                                   +- Window\n                           :                                      +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                           :                                         +- CometNativeColumnarToRow\n                           :                                            +- CometSort\n                           :                                               +- CometHashAggregate\n    
                       :                                                  +- CometExchange\n                           :                                                     +- CometHashAggregate\n                           :                                                        +- CometProject\n                           :                                                           +- CometBroadcastHashJoin\n                           :                                                              :- CometProject\n                           :                                                              :  +- CometBroadcastHashJoin\n                           :                                                              :     :- CometFilter\n                           :                                                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                                                              :     :        +- ReusedSubquery\n                           :                                                              :     +- CometBroadcastExchange\n                           :                                                              :        +- CometProject\n                           :                                                              :           +- CometFilter\n                           :                                                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                                                              +- CometBroadcastExchange\n                           :                                                                 +- CometProject\n                           :                                                                    +- CometFilter\n                           :                                                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- CometNativeColumnarToRow\n                           :                             :  +- CometProject\n                           :                             :     +- CometBroadcastHashJoin\n                           :                             :        :- CometFilter\n                           :                             :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                             :        :        +- SubqueryBroadcast\n                           :                             :        :           +- BroadcastExchange\n                           :                             :        :              +- CometNativeColumnarToRow\n                           :           
                  :        :                 +- CometProject\n                           :                             :        :                    +- CometFilter\n                           :                             :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                             :        +- CometBroadcastExchange\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                             +- BroadcastExchange\n                           :                                +- Project\n                           :                                   +- BroadcastHashJoin\n                           :                                      :- CometNativeColumnarToRow\n                           :                                      :  +- CometFilter\n                           :                                      :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                                      +- BroadcastExchange\n                           :                                         +- Project\n                           :                                            +- Filter\n                           :                                               +- Window\n                           :                                                  +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                           :                                                     +- CometNativeColumnarToRow\n                           :                                                        +- CometSort\n                           :                                                           +- CometHashAggregate\n                           :                                                              +- CometExchange\n                           :                                                                 +- CometHashAggregate\n                           :                                                                    +- CometProject\n                           :                                                                       +- CometBroadcastHashJoin\n                           :                                                                          :- CometProject\n                           :                                                                          :  +- CometBroadcastHashJoin\n                           :                                                                          :     :- CometFilter\n                           :                                                                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                                                                          :     :        +- ReusedSubquery\n                           :                                                                          :     +- CometBroadcastExchange\n                           :                                                                          :        +- CometProject\n               
            :                                                                          :           +- CometFilter\n                           :                                                                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                                                                          +- CometBroadcastExchange\n                           :                                                                             +- CometProject\n                           :                                                                                +- CometFilter\n                           :                                                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- CometNativeColumnarToRow\n                                                         :  +- CometProject\n                                                         :     +- CometBroadcastHashJoin\n                                                         :        :- CometFilter\n                                                         :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                         :        :        +- SubqueryBroadcast\n                                                         :        :           +- BroadcastExchange\n                                                         :        :              +- CometNativeColumnarToRow\n                                                         :        :                 +- CometProject\n                                                         :        :                    +- CometFilter\n                                                         :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                         :        +- CometBroadcastExchange\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                         +- BroadcastExchange\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- CometNativeColumnarToRow\n                                                                  :  +- CometFilter\n                                                                  : 
    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +- Filter\n                                                                           +- Window\n                                                                              +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                                                                                 +- CometNativeColumnarToRow\n                                                                                    +- CometSort\n                                                                                       +- CometHashAggregate\n                                                                                          +- CometExchange\n                                                                                             +- CometHashAggregate\n                                                                                                +- CometProject\n                                                                                                   +- CometBroadcastHashJoin\n                                                                                                      :- CometProject\n                                                                                                      :  +- CometBroadcastHashJoin\n                                                                                                      :     :- CometFilter\n                                                                                                      :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                                                      :     :        +- ReusedSubquery\n                                                                                                      :     +- CometBroadcastExchange\n                                                                                                      :        +- CometProject\n                                                                                                      :           +- CometFilter\n                                                                                                      :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                                                                      +- CometBroadcastExchange\n                                                                                                         +- CometProject\n                                                                                                            +- CometFilter\n                                                                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 102 out of 156 eligible operators (65%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q72.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometColumnarExchange\n                  :     +- Project\n                  :        +- BroadcastHashJoin\n                  :           :- Project\n                  :           :  +- BroadcastHashJoin\n                  :           :     :- Project\n                  :           :     :  +- BroadcastHashJoin\n                  :           :     :     :- Project\n                  :           :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :- Project\n                  :           :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :- Project\n                  :           :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- Filter\n                  :           :     :     :     :     :     :     :     :     :  +- ColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :           :     :     :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :              +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                    +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                       +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n               
   :           :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :           +- CometProject\n                  :           :     :     :     :     :              +- CometFilter\n                  :           :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :           +- CometProject\n                  :           :     :     :     :              +- CometFilter\n                  :           :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- BroadcastExchange\n                  :           :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :           +- CometProject\n                  :           :     :     :              +- CometFilter\n                  :           :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     +- BroadcastExchange\n                  :           :     :        +- CometNativeColumnarToRow\n                  :           :     :           +- CometFilter\n                  :           :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     +- BroadcastExchange\n                  :           :        +- CometNativeColumnarToRow\n                  :           :           +- CometFilter\n                  :           :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           +- BroadcastExchange\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n\nComet accelerated 37 out of 68 eligible operators (54%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q72.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometExchange\n                  :     +- CometProject\n                  :        +- CometBroadcastHashJoin\n                  :           :- CometProject\n                  :           :  +- CometBroadcastHashJoin\n                  :           :     :- CometProject\n                  :           :     :  +- CometBroadcastHashJoin\n                  :           :     :     :- CometProject\n                  :           :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :- CometProject\n                  :           :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :- CometProject\n                  :           :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- CometFilter\n                  :           :     :     :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :           :     :     :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :           +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                 +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                    +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :           :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :        +- CometProject\n                  :           :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :        +- CometProject\n                  :           :     :     :     :           +- CometFilter\n                  :           :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- CometBroadcastExchange\n                  :           :     :     :        +- CometProject\n                  :           :     :     :           +- CometFilter\n                  :           :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     +- CometBroadcastExchange\n                  :           :     :        +- CometFilter\n                  :           :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     +- CometBroadcastExchange\n                  :           :        +- CometFilter\n                  :           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           +- CometBroadcastExchange\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n\nComet accelerated 66 out of 68 eligible operators (97%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q74.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- BroadcastHashJoin\n      :     :  :- Filter\n      :     :  :  +- HashAggregate\n      :     :  :     +- CometNativeColumnarToRow\n      :     :  :        +- CometColumnarExchange\n      :     :  :           +- HashAggregate\n      :     :  :              +- Project\n      :     :  :                 +- BroadcastHashJoin\n      :     :  :                    :- Project\n      :     :  :                    :  +- BroadcastHashJoin\n      :     :  :                    :     :- CometNativeColumnarToRow\n      :     :  :                    :     :  +- CometProject\n      :     :  :                    :     :     +- CometFilter\n      :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :  :                    :     +- BroadcastExchange\n      :     :  :                    :        +- Filter\n      :     :  :                    :           +- ColumnarToRow\n      :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :  :                    :                    +- SubqueryBroadcast\n      :     :  :                    :                       +- BroadcastExchange\n      :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :  :                    :                             +- CometFilter\n      :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  :                    +- BroadcastExchange\n      :     :  :                       +- CometNativeColumnarToRow\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  +- BroadcastExchange\n      :     :     +- HashAggregate\n      :     :        +- CometNativeColumnarToRow\n      :     :           +- CometColumnarExchange\n      :     :              +- HashAggregate\n      :     :                 +- Project\n      :     :                    +- BroadcastHashJoin\n      :     :                       :- Project\n      :     :                       :  +- BroadcastHashJoin\n      :     :                       :     :- CometNativeColumnarToRow\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                       :     +- BroadcastExchange\n      :     :                       :        +- Filter\n      :     :                       :           +- ColumnarToRow\n      :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                       :                    +- SubqueryBroadcast\n      :     :                       :                       +- BroadcastExchange\n      :     :                       :                          +- CometNativeColumnarToRow\n      :     :                       :                             +- CometFilter\n      :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n 
     :     :                       +- BroadcastExchange\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometFilter\n      :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 85 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q74.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometBroadcastHashJoin\n         :     :  :- CometFilter\n         :     :  :  +- CometHashAggregate\n         :     :  :     +- CometExchange\n         :     :  :        +- CometHashAggregate\n         :     :  :           +- CometProject\n         :     :  :              +- CometBroadcastHashJoin\n         :     :  :                 :- CometProject\n         :     :  :                 :  +- CometBroadcastHashJoin\n         :     :  :                 :     :- CometProject\n         :     :  :                 :     :  +- CometFilter\n         :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :  :                 :     +- CometBroadcastExchange\n         :     :  :                 :        +- CometFilter\n         :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :  :                 :                 +- SubqueryBroadcast\n         :     :  :                 :                    +- BroadcastExchange\n         :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :  :                 :                          +- CometFilter\n         :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  :                 +- CometBroadcastExchange\n         :     :  :                    +- CometFilter\n         :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  +- CometBroadcastExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometExchange\n         :     :           +- CometHashAggregate\n         :     :              +- CometProject\n         :     :                 +- CometBroadcastHashJoin\n         :     :                    :- CometProject\n         :     :                    :  +- CometBroadcastHashJoin\n         :     :                    :     :- CometProject\n         :     :                    :     :  +- CometFilter\n         :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                    :     +- CometBroadcastExchange\n         :     :                    :        +- CometFilter\n         :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                    :                 +- SubqueryBroadcast\n         :     :                    :                    +- BroadcastExchange\n         :     :                    :                       +- CometNativeColumnarToRow\n         :     :                    :                          +- CometFilter\n         :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                    +- CometBroadcastExchange\n         :     :                       +- CometFilter\n         :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- 
CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 79 out of 85 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q75.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n         :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- SubqueryBroadcast\n         :                             :     :           :     :              +- BroadcastExchange\n         :                             :     :           :     :                 +- CometNativeColumnarToRow\n         :                             :     :           :     :                    +- CometFilter\n         :                             :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n    
     :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- ReusedSubquery\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometColumnarExchange\n         :                                   :     +- Project\n         :                                   :        +- BroadcastHashJoin\n         :                                   :           :- Project\n         :                                   :           :  +- BroadcastHashJoin\n         :                                   :           :     :- Filter\n         :                                   :           :     :  +- ColumnarToRow\n         :                                   :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                                   :           :     :           +- ReusedSubquery\n         :                                   :           :     +- BroadcastExchange\n         :                                   :           :        +- CometNativeColumnarToRow\n         :                                   :           :           +- CometProject\n         :                                   :           :              +- CometFilter\n         :                                   :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                                   :           +- BroadcastExchange\n         :                                   :              +- 
CometNativeColumnarToRow\n         :                                   :                 +- CometFilter\n         :                                   :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometNativeScan parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- SubqueryBroadcast\n                                       :     :           :     :              +- BroadcastExchange\n                                       :     :           :     :                 +- CometNativeColumnarToRow\n                                       :     :           :     :                    +- CometFilter\n                                       :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                        
               :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- ReusedSubquery\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometColumnarExchange\n                                             :     +- Project\n                                             :        +- BroadcastHashJoin\n                                             :           :- Project\n                                             :           :  +- BroadcastHashJoin\n                                             :           :     :- Filter\n                                             :           :     :  +- ColumnarToRow\n                                             :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :           :     :           +- ReusedSubquery\n                                             :           :     +- BroadcastExchange\n                                             :           :        +- 
CometNativeColumnarToRow\n                                             :           :           +- CometProject\n                                             :           :              +- CometFilter\n                                             :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                             :           +- BroadcastExchange\n                                             :              +- CometNativeColumnarToRow\n                                             :                 +- CometFilter\n                                             :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.web_returns\n\nComet accelerated 111 out of 167 eligible operators (66%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q75.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :                             :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :                             :     :           :     :        +- SubqueryBroadcast\n         :                             :     :           :     :           +- BroadcastExchange\n         :                             :     :           :     :              +- CometNativeColumnarToRow\n         :                             :     :           :     :                 +- CometFilter\n         :                             :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :     
                        :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :                             :     :           :     :        +- ReusedSubquery\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometExchange\n         :                                   :     +- CometProject\n         :                                   :        +- CometBroadcastHashJoin\n         :                                   :           :- CometProject\n         :                                   :           :  +- CometBroadcastHashJoin\n         :                                   :           :     :- CometFilter\n         :                                   :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                                   :           :     :        +- ReusedSubquery\n         :                                   :           :     +- CometBroadcastExchange\n         :                                   :           :        +- CometProject\n         :                                   :           :           +- CometFilter\n         :                                   :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                                   :           +- CometBroadcastExchange\n         :                                   :              +- CometFilter\n         :                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n            
            +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :           :     :        +- SubqueryBroadcast\n                                       :     :           :     :           +- BroadcastExchange\n                                       :     :           :     :              +- CometNativeColumnarToRow\n                                       :     :           :     :                 +- CometFilter\n                                       :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :           :     :        +- 
ReusedSubquery\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometExchange\n                                             :     +- CometProject\n                                             :        +- CometBroadcastHashJoin\n                                             :           :- CometProject\n                                             :           :  +- CometBroadcastHashJoin\n                                             :           :     :- CometFilter\n                                             :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                             :           :     :        +- ReusedSubquery\n                                             :           :     +- CometBroadcastExchange\n                                             :           :        +- CometProject\n                                             :           :           +- CometFilter\n                                             :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                             :           +- CometBroadcastExchange\n                                             :              +- CometFilter\n                                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n\nComet accelerated 159 out of 167 eligible operators (95%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q77a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- Project\n               :              :  +- BroadcastHashJoin\n               :              :     :- HashAggregate\n               :              :     :  +- CometNativeColumnarToRow\n               :              :     :     +- CometColumnarExchange\n               :              :     :        +- HashAggregate\n               :              :     :           +- Project\n               :              :     :              +- BroadcastHashJoin\n               :              :     :                 :- Project\n               :              :     :                 :  +- BroadcastHashJoin\n               :              :     :                 :     :- Filter\n               :              :     :                 :     :  +- ColumnarToRow\n               :              :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :     :                 :     :           +- SubqueryBroadcast\n               :              :     :                 :     :              +- BroadcastExchange\n               :              :     :                 :     :                 +- CometNativeColumnarToRow\n               :              :     :                 :     :                    +- CometProject\n               :              :     :                 :     :                       +- CometFilter\n               :              :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :     :                 :     +- BroadcastExchange\n               :              :     :                 :        +- CometNativeColumnarToRow\n               :              :     :                 :           +- CometProject\n               :              :     :                 :              +- CometFilter\n               :              :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :     :                 +- BroadcastExchange\n               :              :     :                    +- CometNativeColumnarToRow\n               :              :     :                       +- CometFilter\n               :              :     :                          +- CometNativeScan parquet spark_catalog.default.store\n               :              :     +- BroadcastExchange\n               :              :        +- HashAggregate\n               :              :           +- CometNativeColumnarToRow\n               :              :              +- CometColumnarExchange\n               :              :                 +- HashAggregate\n               :              :                    +- Project\n               :              :                       +- BroadcastHashJoin\n               :              :                          :- Project\n               :              :                          :  +- BroadcastHashJoin\n               :              :                          :     :- Filter\n               :              :     
                     :     :  +- ColumnarToRow\n               :              :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                          :     :           +- ReusedSubquery\n               :              :                          :     +- BroadcastExchange\n               :              :                          :        +- CometNativeColumnarToRow\n               :              :                          :           +- CometProject\n               :              :                          :              +- CometFilter\n               :              :                          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                          +- BroadcastExchange\n               :              :                             +- CometNativeColumnarToRow\n               :              :                                +- CometFilter\n               :              :                                   +- CometNativeScan parquet spark_catalog.default.store\n               :              :- Project\n               :              :  +- BroadcastNestedLoopJoin\n               :              :     :- BroadcastExchange\n               :              :     :  +- HashAggregate\n               :              :     :     +- CometNativeColumnarToRow\n               :              :     :        +- CometColumnarExchange\n               :              :     :           +- HashAggregate\n               :              :     :              +- Project\n               :              :     :                 +- BroadcastHashJoin\n               :              :     :                    :- ColumnarToRow\n               :              :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :     :                    :        +- ReusedSubquery\n               :              :     :                    +- BroadcastExchange\n               :              :     :                       +- CometNativeColumnarToRow\n               :              :     :                          +- CometProject\n               :              :     :                             +- CometFilter\n               :              :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :     +- HashAggregate\n               :              :        +- CometNativeColumnarToRow\n               :              :           +- CometColumnarExchange\n               :              :              +- HashAggregate\n               :              :                 +- Project\n               :              :                    +- BroadcastHashJoin\n               :              :                       :- ColumnarToRow\n               :              :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                       :        +- ReusedSubquery\n               :              :                       +- BroadcastExchange\n               :              :                          +- CometNativeColumnarToRow\n               :              :                             +- CometProject\n               
:              :                                +- CometFilter\n               :              :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              +- Project\n               :                 +- BroadcastHashJoin\n               :                    :- HashAggregate\n               :                    :  +- CometNativeColumnarToRow\n               :                    :     +- CometColumnarExchange\n               :                    :        +- HashAggregate\n               :                    :           +- Project\n               :                    :              +- BroadcastHashJoin\n               :                    :                 :- Project\n               :                    :                 :  +- BroadcastHashJoin\n               :                    :                 :     :- Filter\n               :                    :                 :     :  +- ColumnarToRow\n               :                    :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :                 :     :           +- ReusedSubquery\n               :                    :                 :     +- BroadcastExchange\n               :                    :                 :        +- CometNativeColumnarToRow\n               :                    :                 :           +- CometProject\n               :                    :                 :              +- CometFilter\n               :                    :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                 +- BroadcastExchange\n               :                    :                    +- CometNativeColumnarToRow\n               :                    :                       +- CometFilter\n               :                    :                          +- CometNativeScan parquet spark_catalog.default.web_page\n               :                    +- BroadcastExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- Filter\n               :                                         :     :  +- ColumnarToRow\n               :                                         :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :           +- ReusedSubquery\n               :                                         :     +- BroadcastExchange\n               :                                         :        +- CometNativeColumnarToRow\n               :                                         :           +- CometProject\n               :                                         :              +- CometFilter\n               :                                      
   :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometFilter\n               :                                                  +- CometNativeScan parquet spark_catalog.default.web_page\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- Project\n               :                          :  +- BroadcastHashJoin\n               :                          :     :- HashAggregate\n               :                          :     :  +- CometNativeColumnarToRow\n               :                          :     :     +- CometColumnarExchange\n               :                          :     :        +- HashAggregate\n               :                          :     :           +- Project\n               :                          :     :              +- BroadcastHashJoin\n               :                          :     :                 :- Project\n               :                          :     :                 :  +- BroadcastHashJoin\n               :                          :     :                 :     :- Filter\n               :                          :     :                 :     :  +- ColumnarToRow\n               :                          :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :     :                 :     :           +- SubqueryBroadcast\n               :                          :     :                 :     :              +- BroadcastExchange\n               :                          :     :                 :     :                 +- CometNativeColumnarToRow\n               :                          :     :                 :     :                    +- CometProject\n               :                          :     :                 :     :                       +- CometFilter\n               :                          :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     :                 :     +- BroadcastExchange\n               :                          :     :                 :        +- CometNativeColumnarToRow\n               :                          :     :                 :           +- CometProject\n               :                          :     :                 :              +- CometFilter\n               :                          :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     :                 +- BroadcastExchange\n               :                          :     :                    +- CometNativeColumnarToRow\n               :                          :     :                       +- CometFilter\n               : 
                         :     :                          +- CometNativeScan parquet spark_catalog.default.store\n               :                          :     +- BroadcastExchange\n               :                          :        +- HashAggregate\n               :                          :           +- CometNativeColumnarToRow\n               :                          :              +- CometColumnarExchange\n               :                          :                 +- HashAggregate\n               :                          :                    +- Project\n               :                          :                       +- BroadcastHashJoin\n               :                          :                          :- Project\n               :                          :                          :  +- BroadcastHashJoin\n               :                          :                          :     :- Filter\n               :                          :                          :     :  +- ColumnarToRow\n               :                          :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                          :     :           +- ReusedSubquery\n               :                          :                          :     +- BroadcastExchange\n               :                          :                          :        +- CometNativeColumnarToRow\n               :                          :                          :           +- CometProject\n               :                          :                          :              +- CometFilter\n               :                          :                          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                          +- BroadcastExchange\n               :                          :                             +- CometNativeColumnarToRow\n               :                          :                                +- CometFilter\n               :                          :                                   +- CometNativeScan parquet spark_catalog.default.store\n               :                          :- Project\n               :                          :  +- BroadcastNestedLoopJoin\n               :                          :     :- BroadcastExchange\n               :                          :     :  +- HashAggregate\n               :                          :     :     +- CometNativeColumnarToRow\n               :                          :     :        +- CometColumnarExchange\n               :                          :     :           +- HashAggregate\n               :                          :     :              +- Project\n               :                          :     :                 +- BroadcastHashJoin\n               :                          :     :                    :- ColumnarToRow\n               :                          :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :     :                    :        +- ReusedSubquery\n               :                          :     :                    +- BroadcastExchange\n               :                          :     :                       +- CometNativeColumnarToRow\n     
          :                          :     :                          +- CometProject\n               :                          :     :                             +- CometFilter\n               :                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     +- HashAggregate\n               :                          :        +- CometNativeColumnarToRow\n               :                          :           +- CometColumnarExchange\n               :                          :              +- HashAggregate\n               :                          :                 +- Project\n               :                          :                    +- BroadcastHashJoin\n               :                          :                       :- ColumnarToRow\n               :                          :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                       :        +- ReusedSubquery\n               :                          :                       +- BroadcastExchange\n               :                          :                          +- CometNativeColumnarToRow\n               :                          :                             +- CometProject\n               :                          :                                +- CometFilter\n               :                          :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Project\n               :                             +- BroadcastHashJoin\n               :                                :- HashAggregate\n               :                                :  +- CometNativeColumnarToRow\n               :                                :     +- CometColumnarExchange\n               :                                :        +- HashAggregate\n               :                                :           +- Project\n               :                                :              +- BroadcastHashJoin\n               :                                :                 :- Project\n               :                                :                 :  +- BroadcastHashJoin\n               :                                :                 :     :- Filter\n               :                                :                 :     :  +- ColumnarToRow\n               :                                :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                :                 :     :           +- ReusedSubquery\n               :                                :                 :     +- BroadcastExchange\n               :                                :                 :        +- CometNativeColumnarToRow\n               :                                :                 :           +- CometProject\n               :                                :                 :              +- CometFilter\n               :                                :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                :                 +- BroadcastExchange\n               :                          
      :                    +- CometNativeColumnarToRow\n               :                                :                       +- CometFilter\n               :                                :                          +- CometNativeScan parquet spark_catalog.default.web_page\n               :                                +- BroadcastExchange\n               :                                   +- HashAggregate\n               :                                      +- CometNativeColumnarToRow\n               :                                         +- CometColumnarExchange\n               :                                            +- HashAggregate\n               :                                               +- Project\n               :                                                  +- BroadcastHashJoin\n               :                                                     :- Project\n               :                                                     :  +- BroadcastHashJoin\n               :                                                     :     :- Filter\n               :                                                     :     :  +- ColumnarToRow\n               :                                                     :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                                     :     :           +- ReusedSubquery\n               :                                                     :     +- BroadcastExchange\n               :                                                     :        +- CometNativeColumnarToRow\n               :                                                     :           +- CometProject\n               :                                                     :              +- CometFilter\n               :                                                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                                     +- BroadcastExchange\n               :                                                        +- CometNativeColumnarToRow\n               :                                                           +- CometFilter\n               :                                                              +- CometNativeScan parquet spark_catalog.default.web_page\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- HashAggregate\n                                          :     :  +- CometNativeColumnarToRow\n                                          :     :     +- CometColumnarExchange\n                                          :     :        +- HashAggregate\n                                          :     :           +- Project\n                                          :     :              +- BroadcastHashJoin\n                                          :     :                 :- 
Project\n                                          :     :                 :  +- BroadcastHashJoin\n                                          :     :                 :     :- Filter\n                                          :     :                 :     :  +- ColumnarToRow\n                                          :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                 :     :           +- SubqueryBroadcast\n                                          :     :                 :     :              +- BroadcastExchange\n                                          :     :                 :     :                 +- CometNativeColumnarToRow\n                                          :     :                 :     :                    +- CometProject\n                                          :     :                 :     :                       +- CometFilter\n                                          :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 :     +- BroadcastExchange\n                                          :     :                 :        +- CometNativeColumnarToRow\n                                          :     :                 :           +- CometProject\n                                          :     :                 :              +- CometFilter\n                                          :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 +- BroadcastExchange\n                                          :     :                    +- CometNativeColumnarToRow\n                                          :     :                       +- CometFilter\n                                          :     :                          +- CometNativeScan parquet spark_catalog.default.store\n                                          :     +- BroadcastExchange\n                                          :        +- HashAggregate\n                                          :           +- CometNativeColumnarToRow\n                                          :              +- CometColumnarExchange\n                                          :                 +- HashAggregate\n                                          :                    +- Project\n                                          :                       +- BroadcastHashJoin\n                                          :                          :- Project\n                                          :                          :  +- BroadcastHashJoin\n                                          :                          :     :- Filter\n                                          :                          :     :  +- ColumnarToRow\n                                          :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                          :     :           +- ReusedSubquery\n                                          :                          :     +- BroadcastExchange\n                                          :                          :        +- 
CometNativeColumnarToRow\n                                          :                          :           +- CometProject\n                                          :                          :              +- CometFilter\n                                          :                          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          +- BroadcastExchange\n                                          :                             +- CometNativeColumnarToRow\n                                          :                                +- CometFilter\n                                          :                                   +- CometNativeScan parquet spark_catalog.default.store\n                                          :- Project\n                                          :  +- BroadcastNestedLoopJoin\n                                          :     :- BroadcastExchange\n                                          :     :  +- HashAggregate\n                                          :     :     +- CometNativeColumnarToRow\n                                          :     :        +- CometColumnarExchange\n                                          :     :           +- HashAggregate\n                                          :     :              +- Project\n                                          :     :                 +- BroadcastHashJoin\n                                          :     :                    :- ColumnarToRow\n                                          :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    :        +- ReusedSubquery\n                                          :     :                    +- BroadcastExchange\n                                          :     :                       +- CometNativeColumnarToRow\n                                          :     :                          +- CometProject\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- HashAggregate\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometColumnarExchange\n                                          :              +- HashAggregate\n                                          :                 +- Project\n                                          :                    +- BroadcastHashJoin\n                                          :                       :- ColumnarToRow\n                                          :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                       :        +- ReusedSubquery\n                                          :                       +- BroadcastExchange\n                                          :                          +- CometNativeColumnarToRow\n                                          :                             +- CometProject\n                                          :                                +- 
CometFilter\n                                          :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- Project\n                                             +- BroadcastHashJoin\n                                                :- HashAggregate\n                                                :  +- CometNativeColumnarToRow\n                                                :     +- CometColumnarExchange\n                                                :        +- HashAggregate\n                                                :           +- Project\n                                                :              +- BroadcastHashJoin\n                                                :                 :- Project\n                                                :                 :  +- BroadcastHashJoin\n                                                :                 :     :- Filter\n                                                :                 :     :  +- ColumnarToRow\n                                                :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                :                 :     :           +- ReusedSubquery\n                                                :                 :     +- BroadcastExchange\n                                                :                 :        +- CometNativeColumnarToRow\n                                                :                 :           +- CometProject\n                                                :                 :              +- CometFilter\n                                                :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                :                 +- BroadcastExchange\n                                                :                    +- CometNativeColumnarToRow\n                                                :                       +- CometFilter\n                                                :                          +- CometNativeScan parquet spark_catalog.default.web_page\n                                                +- BroadcastExchange\n                                                   +- HashAggregate\n                                                      +- CometNativeColumnarToRow\n                                                         +- CometColumnarExchange\n                                                            +- HashAggregate\n                                                               +- Project\n                                                                  +- BroadcastHashJoin\n                                                                     :- Project\n                                                                     :  +- BroadcastHashJoin\n                                                                     :     :- Filter\n                                                                     :     :  +- ColumnarToRow\n                                                                     :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                     :     :           +- ReusedSubquery\n               
                                                      :     +- BroadcastExchange\n                                                                     :        +- CometNativeColumnarToRow\n                                                                     :           +- CometProject\n                                                                     :              +- CometFilter\n                                                                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                                     +- BroadcastExchange\n                                                                        +- CometNativeColumnarToRow\n                                                                           +- CometFilter\n                                                                              +- CometNativeScan parquet spark_catalog.default.web_page\n\nComet accelerated 113 out of 332 eligible operators (34%). Final plan contains 75 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q77a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- CometNativeColumnarToRow\n               :              :  +- CometProject\n               :              :     +- CometBroadcastHashJoin\n               :              :        :- CometHashAggregate\n               :              :        :  +- CometExchange\n               :              :        :     +- CometHashAggregate\n               :              :        :        +- CometProject\n               :              :        :           +- CometBroadcastHashJoin\n               :              :        :              :- CometProject\n               :              :        :              :  +- CometBroadcastHashJoin\n               :              :        :              :     :- CometFilter\n               :              :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :        :              :     :        +- SubqueryBroadcast\n               :              :        :              :     :           +- BroadcastExchange\n               :              :        :              :     :              +- CometNativeColumnarToRow\n               :              :        :              :     :                 +- CometProject\n               :              :        :              :     :                    +- CometFilter\n               :              :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :        :              :     +- CometBroadcastExchange\n               :              :        :              :        +- CometProject\n               :              :        :              :           +- CometFilter\n               :              :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :        :              +- CometBroadcastExchange\n               :              :        :                 +- CometFilter\n               :              :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :              :        +- CometBroadcastExchange\n               :              :           +- CometHashAggregate\n               :              :              +- CometExchange\n               :              :                 +- CometHashAggregate\n               :              :                    +- CometProject\n               :              :                       +- CometBroadcastHashJoin\n               :              :                          :- CometProject\n               :              :                          :  +- CometBroadcastHashJoin\n               :              :                          :     :- CometFilter\n               :              :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :              :                          :     :        +- ReusedSubquery\n               :              :                          :     +- 
CometBroadcastExchange\n               :              :                          :        +- CometProject\n               :              :                          :           +- CometFilter\n               :              :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :                          +- CometBroadcastExchange\n               :              :                             +- CometFilter\n               :              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :              :- Project\n               :              :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n               :              :     :- BroadcastExchange\n               :              :     :  +- CometNativeColumnarToRow\n               :              :     :     +- CometHashAggregate\n               :              :     :        +- CometExchange\n               :              :     :           +- CometHashAggregate\n               :              :     :              +- CometProject\n               :              :     :                 +- CometBroadcastHashJoin\n               :              :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :                    :     +- ReusedSubquery\n               :              :     :                    +- CometBroadcastExchange\n               :              :     :                       +- CometProject\n               :              :     :                          +- CometFilter\n               :              :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometNativeColumnarToRow\n               :              :        +- CometHashAggregate\n               :              :           +- CometExchange\n               :              :              +- CometHashAggregate\n               :              :                 +- CometProject\n               :              :                    +- CometBroadcastHashJoin\n               :              :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :              :                       :     +- ReusedSubquery\n               :              :                       +- CometBroadcastExchange\n               :              :                          +- CometProject\n               :              :                             +- CometFilter\n               :              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              +- CometNativeColumnarToRow\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometHashAggregate\n               :                       :  +- CometExchange\n               :                       :     +- CometHashAggregate\n               :                       :        +- CometProject\n               :                       :           +- CometBroadcastHashJoin\n               :                       :              :- CometProject\n               :                       :              :  +- CometBroadcastHashJoin\n               :                       
:              :     :- CometFilter\n               :                       :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                       :              :     :        +- ReusedSubquery\n               :                       :              :     +- CometBroadcastExchange\n               :                       :              :        +- CometProject\n               :                       :              :           +- CometFilter\n               :                       :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                       :              +- CometBroadcastExchange\n               :                       :                 +- CometFilter\n               :                       :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               :                       +- CometBroadcastExchange\n               :                          +- CometHashAggregate\n               :                             +- CometExchange\n               :                                +- CometHashAggregate\n               :                                   +- CometProject\n               :                                      +- CometBroadcastHashJoin\n               :                                         :- CometProject\n               :                                         :  +- CometBroadcastHashJoin\n               :                                         :     :- CometFilter\n               :                                         :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                         :     :        +- ReusedSubquery\n               :                                         :     +- CometBroadcastExchange\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                         +- CometBroadcastExchange\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- CometNativeColumnarToRow\n               :                          :  +- CometProject\n               :                          :     +- CometBroadcastHashJoin\n               :                          :        :- CometHashAggregate\n               :                          :        :  +- CometExchange\n               :                          :        :     +- CometHashAggregate\n               :                          :        :        +- CometProject\n               :                          :        :       
    +- CometBroadcastHashJoin\n               :                          :        :              :- CometProject\n               :                          :        :              :  +- CometBroadcastHashJoin\n               :                          :        :              :     :- CometFilter\n               :                          :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                          :        :              :     :        +- SubqueryBroadcast\n               :                          :        :              :     :           +- BroadcastExchange\n               :                          :        :              :     :              +- CometNativeColumnarToRow\n               :                          :        :              :     :                 +- CometProject\n               :                          :        :              :     :                    +- CometFilter\n               :                          :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :        :              :     +- CometBroadcastExchange\n               :                          :        :              :        +- CometProject\n               :                          :        :              :           +- CometFilter\n               :                          :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :        :              +- CometBroadcastExchange\n               :                          :        :                 +- CometFilter\n               :                          :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                          :        +- CometBroadcastExchange\n               :                          :           +- CometHashAggregate\n               :                          :              +- CometExchange\n               :                          :                 +- CometHashAggregate\n               :                          :                    +- CometProject\n               :                          :                       +- CometBroadcastHashJoin\n               :                          :                          :- CometProject\n               :                          :                          :  +- CometBroadcastHashJoin\n               :                          :                          :     :- CometFilter\n               :                          :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :                          :                          :     :        +- ReusedSubquery\n               :                          :                          :     +- CometBroadcastExchange\n               :                          :                          :        +- CometProject\n               :                          :                          :           +- CometFilter\n               :                          :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :                          +- CometBroadcastExchange\n               :                          :  
                           +- CometFilter\n               :                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                          :- Project\n               :                          :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n               :                          :     :- BroadcastExchange\n               :                          :     :  +- CometNativeColumnarToRow\n               :                          :     :     +- CometHashAggregate\n               :                          :     :        +- CometExchange\n               :                          :     :           +- CometHashAggregate\n               :                          :     :              +- CometProject\n               :                          :     :                 +- CometBroadcastHashJoin\n               :                          :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                          :     :                    :     +- ReusedSubquery\n               :                          :     :                    +- CometBroadcastExchange\n               :                          :     :                       +- CometProject\n               :                          :     :                          +- CometFilter\n               :                          :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometHashAggregate\n               :                          :           +- CometExchange\n               :                          :              +- CometHashAggregate\n               :                          :                 +- CometProject\n               :                          :                    +- CometBroadcastHashJoin\n               :                          :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                          :                       :     +- ReusedSubquery\n               :                          :                       +- CometBroadcastExchange\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometHashAggregate\n               :                                   :  +- CometExchange\n               :                                   :     +- CometHashAggregate\n               :                                   :        +- CometProject\n               :                                   :           +- CometBroadcastHashJoin\n               :                                   :              :- CometProject\n               :                                   :              :  +- CometBroadcastHashJoin\n               :                         
          :              :     :- CometFilter\n               :                                   :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :              :     :        +- ReusedSubquery\n               :                                   :              :     +- CometBroadcastExchange\n               :                                   :              :        +- CometProject\n               :                                   :              :           +- CometFilter\n               :                                   :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                   :              +- CometBroadcastExchange\n               :                                   :                 +- CometFilter\n               :                                   :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometHashAggregate\n               :                                         +- CometExchange\n               :                                            +- CometHashAggregate\n               :                                               +- CometProject\n               :                                                  +- CometBroadcastHashJoin\n               :                                                     :- CometProject\n               :                                                     :  +- CometBroadcastHashJoin\n               :                                                     :     :- CometFilter\n               :                                                     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                                     :     :        +- ReusedSubquery\n               :                                                     :     +- CometBroadcastExchange\n               :                                                     :        +- CometProject\n               :                                                     :           +- CometFilter\n               :                                                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                                     +- CometBroadcastExchange\n               :                                                        +- CometFilter\n               :                                                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- CometNativeColumnarToRow\n                                          :  +- CometProject\n                                          :     +- CometBroadcastHashJoin\n                            
              :        :- CometHashAggregate\n                                          :        :  +- CometExchange\n                                          :        :     +- CometHashAggregate\n                                          :        :        +- CometProject\n                                          :        :           +- CometBroadcastHashJoin\n                                          :        :              :- CometProject\n                                          :        :              :  +- CometBroadcastHashJoin\n                                          :        :              :     :- CometFilter\n                                          :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                          :        :              :     :        +- SubqueryBroadcast\n                                          :        :              :     :           +- BroadcastExchange\n                                          :        :              :     :              +- CometNativeColumnarToRow\n                                          :        :              :     :                 +- CometProject\n                                          :        :              :     :                    +- CometFilter\n                                          :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :        :              :     +- CometBroadcastExchange\n                                          :        :              :        +- CometProject\n                                          :        :              :           +- CometFilter\n                                          :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :        :              +- CometBroadcastExchange\n                                          :        :                 +- CometFilter\n                                          :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                          :        +- CometBroadcastExchange\n                                          :           +- CometHashAggregate\n                                          :              +- CometExchange\n                                          :                 +- CometHashAggregate\n                                          :                    +- CometProject\n                                          :                       +- CometBroadcastHashJoin\n                                          :                          :- CometProject\n                                          :                          :  +- CometBroadcastHashJoin\n                                          :                          :     :- CometFilter\n                                          :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                          :                          :     :        +- ReusedSubquery\n                                          :                          :     +- CometBroadcastExchange\n                                          :                          :        +- CometProject\n                                          :                      
    :           +- CometFilter\n                                          :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                          +- CometBroadcastExchange\n                                          :                             +- CometFilter\n                                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                          :- Project\n                                          :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n                                          :     :- BroadcastExchange\n                                          :     :  +- CometNativeColumnarToRow\n                                          :     :     +- CometHashAggregate\n                                          :     :        +- CometExchange\n                                          :     :           +- CometHashAggregate\n                                          :     :              +- CometProject\n                                          :     :                 +- CometBroadcastHashJoin\n                                          :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                          :     :                    :     +- ReusedSubquery\n                                          :     :                    +- CometBroadcastExchange\n                                          :     :                       +- CometProject\n                                          :     :                          +- CometFilter\n                                          :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometHashAggregate\n                                          :           +- CometExchange\n                                          :              +- CometHashAggregate\n                                          :                 +- CometProject\n                                          :                    +- CometBroadcastHashJoin\n                                          :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                          :                       :     +- ReusedSubquery\n                                          :                       +- CometBroadcastExchange\n                                          :                          +- CometProject\n                                          :                             +- CometFilter\n                                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometHashAggregate\n                                                   :  +- CometExchange\n                                                   :     +- CometHashAggregate\n                                                   : 
       +- CometProject\n                                                   :           +- CometBroadcastHashJoin\n                                                   :              :- CometProject\n                                                   :              :  +- CometBroadcastHashJoin\n                                                   :              :     :- CometFilter\n                                                   :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :              :     :        +- ReusedSubquery\n                                                   :              :     +- CometBroadcastExchange\n                                                   :              :        +- CometProject\n                                                   :              :           +- CometFilter\n                                                   :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :              +- CometBroadcastExchange\n                                                   :                 +- CometFilter\n                                                   :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n                                                   +- CometBroadcastExchange\n                                                      +- CometHashAggregate\n                                                         +- CometExchange\n                                                            +- CometHashAggregate\n                                                               +- CometProject\n                                                                  +- CometBroadcastHashJoin\n                                                                     :- CometProject\n                                                                     :  +- CometBroadcastHashJoin\n                                                                     :     :- CometFilter\n                                                                     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                                                     :     :        +- ReusedSubquery\n                                                                     :     +- CometBroadcastExchange\n                                                                     :        +- CometProject\n                                                                     :           +- CometFilter\n                                                                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                                     +- CometBroadcastExchange\n                                                                        +- CometFilter\n                                                                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n\nComet accelerated 287 out of 332 eligible operators (86%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q78.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometColumnarExchange\n         :     :                 :        :     +- Filter\n         :     :                 :        :        +- ColumnarToRow\n         :     :                 :        :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :     :                 :        :                 +- SubqueryBroadcast\n         :     :                 :        :                    +- BroadcastExchange\n         :     :                 :        :                       +- CometNativeColumnarToRow\n         :     :                 :        :                          +- CometFilter\n         :     :                 :        :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometColumnarExchange\n         :                          :        :     +- Filter\n         :                          :        :        +- ColumnarToRow\n         :                          :        :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                          :        :                 +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometNativeScan parquet spark_catalog.default.web_returns\n         :   
                       +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometColumnarExchange\n                              :        :     +- Filter\n                              :        :        +- ColumnarToRow\n                              :        :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :        :                 +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 64 out of 76 eligible operators (84%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q78.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometExchange\n         :     :                 :        :     +- CometFilter\n         :     :                 :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                 :        :              +- SubqueryBroadcast\n         :     :                 :        :                 +- BroadcastExchange\n         :     :                 :        :                    +- CometNativeColumnarToRow\n         :     :                 :        :                       +- CometFilter\n         :     :                 :        :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometExchange\n         :                          :        :     +- CometFilter\n         :                          :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :        :              +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometExchange\n                              :        :     +- CometFilter\n                              :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :        :              +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 70 out of 76 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q80a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometColumnarExchange\n               :           :              :     :     :     :     :     +- Filter\n               :           :              :     :     :     :     :        +- ColumnarToRow\n               :           :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :           :              :     :     :     :     :                 +- SubqueryBroadcast\n               :           :              :     :     :     :     :                    +- BroadcastExchange\n               :           :              :     :     :     :     :                       +- CometNativeColumnarToRow\n               :           :              :     :     :     :     :                          +- CometProject\n               :           :              :     :     :     :     :                             +- CometFilter\n               :           :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :          
 +- CometFilter\n               :           :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometColumnarExchange\n               :           :              :     :     :     :     :     +- Filter\n               :           :              :     :     :     :     :        +- ColumnarToRow\n               :           :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :           :              :     :     :     :     :                 +- ReusedSubquery\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :           +- CometFilter\n               :           :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :           :        
      :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometProject\n               :                          :  +- CometBroadcastHashJoin\n               :                          :     :- CometProject\n               :                          :     :  +- CometBroadcastHashJoin\n               :                          :     :     :- CometProject\n               :                          :     :     :  +- CometBroadcastHashJoin\n               :                          :     :     :     :- CometProject\n               :                          :     :     :     :  +- CometSortMergeJoin\n               :                          :     :     :     :     :- CometSort\n               :                          :     :     :     :     :  +- CometColumnarExchange\n               :                          :     :     :     :     :     +- Filter\n               :                          :     :     :     :     :        +- ColumnarToRow\n               :                          :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :     :     :     :     :                 +- ReusedSubquery\n               :                          :     :     :     :     +- CometSort\n               :                          :     :     :     :        +- CometExchange\n               :                          :     :     :     :           +- CometProject\n               :                          :     :     :     :              +- CometFilter\n               :                          :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                          :     :     :     +- CometBroadcastExchange\n               :                          :     :     :        +- CometProject\n               :                          :     :     :           +- CometFilter\n               :                          :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     :     +- CometBroadcastExchange\n               :                          :     :        +- CometProject\n               :                          :     :           +- CometFilter\n               :                          :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n               :                          :     +- CometBroadcastExchange\n               :                          :        +- CometProject\n               :                          :           +- CometFilter\n       
        :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.promotion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometColumnarExchange\n               :                    :              :     :     :     :     :     +- Filter\n               :                    :              :     :     :     :     :        +- ColumnarToRow\n               :                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :              :     :     :     :     :                 +- SubqueryBroadcast\n               :                    :              :     :     :     :     :                    +- BroadcastExchange\n               :                    :              :     :     :     :     :                       +- CometNativeColumnarToRow\n               :                    :              :     :     :     :     :                          +- CometProject\n               :                    :              :     :     :     :     :                             +- CometFilter\n               :                    :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n               :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- 
CometNativeScan parquet spark_catalog.default.store_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                    :              :     :     :        +- CometProject\n               :                    :              :     :     :           +- CometFilter\n               :                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometColumnarExchange\n               :                    :              :     :     :     :     :     +- Filter\n               :                    :              :     :     :     :     :        +- ColumnarToRow\n               :                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :              :     :     :     :     :                 +- ReusedSubquery\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n           
    :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                    :              :     :     :        +- CometProject\n               :                    :              :     :     :           +- CometFilter\n               :                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :                    +- CometHashAggregate\n               :                       +- CometExchange\n               :                          +- CometHashAggregate\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometProject\n               :                                   :  +- CometBroadcastHashJoin\n               :                                   :     :- CometProject\n               :                                   :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :- CometProject\n               :                                   :     :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :     :- CometProject\n               :                                   :     :     :     :  +- CometSortMergeJoin\n               :                                   :     :     :     :     :- CometSort\n               :                                   :     :     :     :     :  +- CometColumnarExchange\n               :                                   :     :     :     :     :     +- Filter\n               :                                   :     :     :     :     :        +- ColumnarToRow\n               :                                   :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :     :     :     :                 +- ReusedSubquery\n               :                                   :     :     :     :     +- CometSort\n               :     
                              :     :     :     :        +- CometExchange\n               :                                   :     :     :     :           +- CometProject\n               :                                   :     :     :     :              +- CometFilter\n               :                                   :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                                   :     :     :     +- CometBroadcastExchange\n               :                                   :     :     :        +- CometProject\n               :                                   :     :     :           +- CometFilter\n               :                                   :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :     +- CometBroadcastExchange\n               :                                   :     :        +- CometProject\n               :                                   :     :           +- CometFilter\n               :                                   :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n               :                                   :     +- CometBroadcastExchange\n               :                                   :        +- CometProject\n               :                                   :           +- CometFilter\n               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :                                            +- CometNativeScan parquet spark_catalog.default.promotion\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometColumnarExchange\n                                    :              :     :     :     :     :     +- Filter\n                                    :             
 :     :     :     :     :        +- ColumnarToRow\n                                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :              :     :     :     :     :                 +- SubqueryBroadcast\n                                    :              :     :     :     :     :                    +- BroadcastExchange\n                                    :              :     :     :     :     :                       +- CometNativeColumnarToRow\n                                    :              :     :     :     :     :                          +- CometProject\n                                    :              :     :     :     :     :                             +- CometFilter\n                                    :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                 
                   :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometColumnarExchange\n                                    :              :     :     :     :     :     +- Filter\n                                    :              :     :     :     :     :        +- ColumnarToRow\n                                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :              :     :     :     :     :                 +- ReusedSubquery\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- 
CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometProject\n                                                   :     :  +- CometBroadcastHashJoin\n                                                   :     :     :- CometProject\n                                                   :     :     :  +- CometBroadcastHashJoin\n                                                   :     :     :     :- CometProject\n                                                   :     :     :     :  +- CometSortMergeJoin\n                                                   :     :     :     :     :- CometSort\n                                                   :     :     :     :     :  +- CometColumnarExchange\n                                                   :     :     :     :     :     +- Filter\n                                                   :     :     :     :     :        +- ColumnarToRow\n                                                   :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                   :     :     :     :     :                 +- ReusedSubquery\n                                                   :     :     :     :     +- CometSort\n                                                   :     :     :     :        +- CometExchange\n                                                   :     :     :     :           +- CometProject\n                                                   :     :     :     :              +- CometFilter\n                                                   :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n                                                   :     :     :     +- CometBroadcastExchange\n                                                   :     :     :        +- CometProject\n                                                   :     :     :           +- CometFilter\n                                                   :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                   :     :     +- CometBroadcastExchange\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 356 out of 386 eligible operators (92%). 
Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q80a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometExchange\n               :           :              :     :     :     :     :     +- CometFilter\n               :           :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :              :     :     :     :     :              +- SubqueryBroadcast\n               :           :              :     :     :     :     :                 +- BroadcastExchange\n               :           :              :     :     :     :     :                    +- CometNativeColumnarToRow\n               :           :              :     :     :     :     :                       +- CometProject\n               :           :              :     :     :     :     :                          +- CometFilter\n               :           :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :           +- CometFilter\n               :           :              :     :              +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.store\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometExchange\n               :           :              :     :     :     :     :     +- CometFilter\n               :           :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :              :     :     :     :     :              +- ReusedSubquery\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :           +- CometFilter\n               :           :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           
:              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometProject\n               :                          :  +- CometBroadcastHashJoin\n               :                          :     :- CometProject\n               :                          :     :  +- CometBroadcastHashJoin\n               :                          :     :     :- CometProject\n               :                          :     :     :  +- CometBroadcastHashJoin\n               :                          :     :     :     :- CometProject\n               :                          :     :     :     :  +- CometSortMergeJoin\n               :                          :     :     :     :     :- CometSort\n               :                          :     :     :     :     :  +- CometExchange\n               :                          :     :     :     :     :     +- CometFilter\n               :                          :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :     :     :     :     :              +- ReusedSubquery\n               :                          :     :     :     :     +- CometSort\n               :                          :     :     :     :        +- CometExchange\n               :                          :     :     :     :           +- CometProject\n               :                          :     :     :     :              +- CometFilter\n               :                          :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                          :     :     :     +- CometBroadcastExchange\n               :                          :     :     :        +- CometProject\n               :                          :     :     :           +- CometFilter\n               :                          :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :     :     +- CometBroadcastExchange\n               :                          :     :        +- CometProject\n               :                          :     :           +- CometFilter\n               :                          :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               :                          :     +- CometBroadcastExchange\n               :                          :        +- CometProject\n               :                          :           +- CometFilter\n               :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n  
             :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometExchange\n               :                    :              :     :     :     :     :     +- CometFilter\n               :                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :              :     :     :     :     :              +- SubqueryBroadcast\n               :                    :              :     :     :     :     :                 +- BroadcastExchange\n               :                    :              :     :     :     :     :                    +- CometNativeColumnarToRow\n               :                    :              :     :     :     :     :                       +- CometProject\n               :                    :              :     :     :     :     :                          +- CometFilter\n               :                    :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n               :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                    :              :     :     :        +- CometProject\n               :                    :              :     
:     :           +- CometFilter\n               :                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometExchange\n               :                    :              :     :     :     :     :     +- CometFilter\n               :                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :              :     :     :     :     :              +- ReusedSubquery\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n               :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                  
  :              :     :     :        +- CometProject\n               :                    :              :     :     :           +- CometFilter\n               :                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :                    +- CometHashAggregate\n               :                       +- CometExchange\n               :                          +- CometHashAggregate\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometProject\n               :                                   :  +- CometBroadcastHashJoin\n               :                                   :     :- CometProject\n               :                                   :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :- CometProject\n               :                                   :     :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :     :- CometProject\n               :                                   :     :     :     :  +- CometSortMergeJoin\n               :                                   :     :     :     :     :- CometSort\n               :                                   :     :     :     :     :  +- CometExchange\n               :                                   :     :     :     :     :     +- CometFilter\n               :                                   :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :     :     :     :     :              +- ReusedSubquery\n               :                                   :     :     :     :     +- CometSort\n               :                                   :     :     :     :        +- CometExchange\n               :                                   :     :     :     :           +- CometProject\n               :                                   :     :     :     :              +- CometFilter\n               :                                   :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :   
                                :     :     :     +- CometBroadcastExchange\n               :                                   :     :     :        +- CometProject\n               :                                   :     :     :           +- CometFilter\n               :                                   :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                   :     :     +- CometBroadcastExchange\n               :                                   :     :        +- CometProject\n               :                                   :     :           +- CometFilter\n               :                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               :                                   :     +- CometBroadcastExchange\n               :                                   :        +- CometProject\n               :                                   :           +- CometFilter\n               :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometExchange\n                                    :              :     :     :     :     :     +- CometFilter\n                                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :              :     :     :     :     :              +- SubqueryBroadcast\n                                    :              :     :     :     :     :                 +- BroadcastExchange\n                               
     :              :     :     :     :     :                    +- CometNativeColumnarToRow\n                                    :              :     :     :     :     :                       +- CometProject\n                                    :              :     :     :     :     :                          +- CometFilter\n                                    :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                
    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometExchange\n                                    :              :     :     :     :     :     +- CometFilter\n                                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :              :     :     :     :     :              +- ReusedSubquery\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometProject\n                                                   :     :  +- CometBroadcastHashJoin\n                                                   :     :     :- CometProject\n                                                   :     :     :  +- 
CometBroadcastHashJoin\n                                                   :     :     :     :- CometProject\n                                                   :     :     :     :  +- CometSortMergeJoin\n                                                   :     :     :     :     :- CometSort\n                                                   :     :     :     :     :  +- CometExchange\n                                                   :     :     :     :     :     +- CometFilter\n                                                   :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     :     :     :     :              +- ReusedSubquery\n                                                   :     :     :     :     +- CometSort\n                                                   :     :     :     :        +- CometExchange\n                                                   :     :     :     :           +- CometProject\n                                                   :     :     :     :              +- CometFilter\n                                                   :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                                   :     :     :     +- CometBroadcastExchange\n                                                   :     :     :        +- CometProject\n                                                   :     :     :           +- CometFilter\n                                                   :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     :     +- CometBroadcastExchange\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 374 out of 386 eligible operators (96%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q86a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Filter\n                           :                 :     :  +- ColumnarToRow\n                           :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :           +- SubqueryBroadcast\n                           :                 :     :              +- BroadcastExchange\n                           :                 :     :                 +- CometNativeColumnarToRow\n                           :                 :     :                    +- CometProject\n                           :                 :     :                       +- CometFilter\n                           :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 +- BroadcastExchange\n                           :                    +- CometNativeColumnarToRow\n                           :                       +- CometProject\n                           :                          +- CometFilter\n                           :                             +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                          
   :  +- BroadcastHashJoin\n                           :                             :     :- Filter\n                           :                             :     :  +- ColumnarToRow\n                           :                             :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :           +- SubqueryBroadcast\n                           :                             :     :              +- BroadcastExchange\n                           :                             :     :                 +- CometNativeColumnarToRow\n                           :                             :     :                    +- CometProject\n                           :                             :     :                       +- CometFilter\n                           :                             :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Filter\n                                                         :     :  +- ColumnarToRow\n                                                         :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :           +- SubqueryBroadcast\n                                                         :     :              +- BroadcastExchange\n                                                         :     :                 +- CometNativeColumnarToRow\n                                                         :     :                    +- CometProject\n                                       
                  :     :                       +- CometFilter\n                                                         :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                                         :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 36 out of 81 eligible operators (44%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q86a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometUnion\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometProject\n                           :           +- CometBroadcastHashJoin\n                           :              :- CometProject\n                           :              :  +- CometBroadcastHashJoin\n                           :              :     :- CometFilter\n                           :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :              :     :        +- SubqueryBroadcast\n                           :              :     :           +- BroadcastExchange\n                           :              :     :              +- CometNativeColumnarToRow\n                           :              :     :                 +- CometProject\n                           :              :     :                    +- CometFilter\n                           :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              :     +- CometBroadcastExchange\n                           :              :        +- CometProject\n                           :              :           +- CometFilter\n                           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              +- CometBroadcastExchange\n                           :                 +- CometProject\n                           :                    +- CometFilter\n                           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometHashAggregate\n                           :           +- CometExchange\n                           :              +- CometHashAggregate\n                           :                 +- CometProject\n                           :                    +- CometBroadcastHashJoin\n                           :                       :- CometProject\n                           :                       :  +- CometBroadcastHashJoin\n                           :                       :     :- CometFilter\n                           :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                       :     :        +- SubqueryBroadcast\n                           :                       :     :           +- BroadcastExchange\n                           :                       :     :              +- 
CometNativeColumnarToRow\n                           :                       :     :                 +- CometProject\n                           :                       :     :                    +- CometFilter\n                           :                       :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       :     +- CometBroadcastExchange\n                           :                       :        +- CometProject\n                           :                       :           +- CometFilter\n                           :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       +- CometBroadcastExchange\n                           :                          +- CometProject\n                           :                             +- CometFilter\n                           :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometFilter\n                                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     :        +- SubqueryBroadcast\n                                                   :     :           +- BroadcastExchange\n                                                   :     :              +- CometNativeColumnarToRow\n                                                   :     :                 +- CometProject\n                                                   :     :                    +- CometFilter\n                                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 72 out of 81 eligible operators (88%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q98.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n            +- CometNativeColumnarToRow\n               +- CometSort\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- CometNativeColumnarToRow\n                           +- CometColumnarExchange\n                              +- HashAggregate\n                                 +- Project\n                                    +- BroadcastHashJoin\n                                       :- Project\n                                       :  +- BroadcastHashJoin\n                                       :     :- Filter\n                                       :     :  +- ColumnarToRow\n                                       :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           +- SubqueryBroadcast\n                                       :     :              +- BroadcastExchange\n                                       :     :                 +- CometNativeColumnarToRow\n                                       :     :                    +- CometProject\n                                       :     :                       +- CometFilter\n                                       :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- BroadcastExchange\n                                       :        +- CometNativeColumnarToRow\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       +- BroadcastExchange\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 28 eligible operators (50%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark3_5/q98.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n            +- CometNativeColumnarToRow\n               +- CometSort\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometExchange\n                           +- CometHashAggregate\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometProject\n                                    :  +- CometBroadcastHashJoin\n                                    :     :- CometFilter\n                                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :     :        +- SubqueryBroadcast\n                                    :     :           +- BroadcastExchange\n                                    :     :              +- CometNativeColumnarToRow\n                                    :     :                 +- CometProject\n                                    :     :                    +- CometFilter\n                                    :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :     +- CometBroadcastExchange\n                                    :        +- CometProject\n                                    :           +- CometFilter\n                                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 24 out of 28 eligible operators (85%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q10a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- BroadcastHashJoin\n                  :     :     :  :- CometNativeColumnarToRow\n                  :     :     :  :  +- CometFilter\n                  :     :     :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- Project\n                  :     :     :        +- BroadcastHashJoin\n                  :     :     :           :- ColumnarToRow\n                  :     :     :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           :        +- SubqueryBroadcast\n                  :     :     :           :           +- BroadcastExchange\n                  :     :     :           :              +- CometNativeColumnarToRow\n                  :     :     :           :                 +- CometProject\n                  :     :     :           :                    +- CometFilter\n                  :     :     :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- Union\n                  :     :           :- Project\n                  :     :           :  +- BroadcastHashJoin\n                  :     :           :     :- ColumnarToRow\n                  :     :           :     :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :           :     :        +- ReusedSubquery\n                  :     :           :     +- BroadcastExchange\n                  :     :           :        +- CometNativeColumnarToRow\n                  :     :           :           +- CometProject\n                  :     :           :              +- CometFilter\n                  :     :           :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             
+- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 52 eligible operators (40%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q10a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometBroadcastHashJoin\n                  :     :     :  :- CometFilter\n                  :     :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :     :  +- CometBroadcastExchange\n                  :     :     :     +- CometProject\n                  :     :     :        +- CometBroadcastHashJoin\n                  :     :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :           :     +- SubqueryBroadcast\n                  :     :     :           :        +- BroadcastExchange\n                  :     :     :           :           +- CometNativeColumnarToRow\n                  :     :     :           :              +- CometProject\n                  :     :     :           :                 +- CometFilter\n                  :     :     :           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :           +- CometBroadcastExchange\n                  :     :     :              +- CometProject\n                  :     :     :                 +- CometFilter\n                  :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometUnion\n                  :     :           :- CometProject\n                  :     :           :  +- CometBroadcastHashJoin\n                  :     :           :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :           :     :     +- ReusedSubquery\n                  :     :           :     +- CometBroadcastExchange\n                  :     :           :        +- CometProject\n                  :     :           :           +- CometFilter\n                  :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :           +- CometProject\n                  :     :              +- CometBroadcastHashJoin\n                  :     :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                 :     +- ReusedSubquery\n                  :     :                 +- CometBroadcastExchange\n                  :     :                    +- CometProject\n                  :     :                       +- CometFilter\n                  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 48 out of 52 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q11.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- BroadcastHashJoin\n      :     :  :- Filter\n      :     :  :  +- HashAggregate\n      :     :  :     +- CometNativeColumnarToRow\n      :     :  :        +- CometColumnarExchange\n      :     :  :           +- HashAggregate\n      :     :  :              +- Project\n      :     :  :                 +- BroadcastHashJoin\n      :     :  :                    :- Project\n      :     :  :                    :  +- BroadcastHashJoin\n      :     :  :                    :     :- CometNativeColumnarToRow\n      :     :  :                    :     :  +- CometProject\n      :     :  :                    :     :     +- CometFilter\n      :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :  :                    :     +- BroadcastExchange\n      :     :  :                    :        +- Filter\n      :     :  :                    :           +- ColumnarToRow\n      :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :  :                    :                    +- SubqueryBroadcast\n      :     :  :                    :                       +- BroadcastExchange\n      :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :  :                    :                             +- CometFilter\n      :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  :                    +- BroadcastExchange\n      :     :  :                       +- CometNativeColumnarToRow\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  +- BroadcastExchange\n      :     :     +- HashAggregate\n      :     :        +- CometNativeColumnarToRow\n      :     :           +- CometColumnarExchange\n      :     :              +- HashAggregate\n      :     :                 +- Project\n      :     :                    +- BroadcastHashJoin\n      :     :                       :- Project\n      :     :                       :  +- BroadcastHashJoin\n      :     :                       :     :- CometNativeColumnarToRow\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                       :     +- BroadcastExchange\n      :     :                       :        +- Filter\n      :     :                       :           +- ColumnarToRow\n      :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                       :                    +- SubqueryBroadcast\n      :     :                       :                       +- BroadcastExchange\n      :     :                       :                          +- CometNativeColumnarToRow\n      :     :                       :                             +- CometFilter\n      :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n 
     :     :                       +- BroadcastExchange\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometFilter\n      :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 85 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q11.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometBroadcastHashJoin\n         :     :  :- CometFilter\n         :     :  :  +- CometHashAggregate\n         :     :  :     +- CometExchange\n         :     :  :        +- CometHashAggregate\n         :     :  :           +- CometProject\n         :     :  :              +- CometBroadcastHashJoin\n         :     :  :                 :- CometProject\n         :     :  :                 :  +- CometBroadcastHashJoin\n         :     :  :                 :     :- CometProject\n         :     :  :                 :     :  +- CometFilter\n         :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :  :                 :     +- CometBroadcastExchange\n         :     :  :                 :        +- CometFilter\n         :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :  :                 :                 +- SubqueryBroadcast\n         :     :  :                 :                    +- BroadcastExchange\n         :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :  :                 :                          +- CometFilter\n         :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  :                 +- CometBroadcastExchange\n         :     :  :                    +- CometFilter\n         :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  +- CometBroadcastExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometExchange\n         :     :           +- CometHashAggregate\n         :     :              +- CometProject\n         :     :                 +- CometBroadcastHashJoin\n         :     :                    :- CometProject\n         :     :                    :  +- CometBroadcastHashJoin\n         :     :                    :     :- CometProject\n         :     :                    :     :  +- CometFilter\n         :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                    :     +- CometBroadcastExchange\n         :     :                    :        +- CometFilter\n         :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                    :                 +- SubqueryBroadcast\n         :     :                    :                    +- BroadcastExchange\n         :     :                    :                       +- CometNativeColumnarToRow\n         :     :                    :                          +- CometFilter\n         :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                    +- CometBroadcastExchange\n         :     :                       +- CometFilter\n         :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- 
CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 79 out of 85 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q12.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q12.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q14.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- BroadcastHashJoin\n   :- Filter\n   :  :  +- Subquery\n   :  :     +- HashAggregate\n   :  :        +- CometNativeColumnarToRow\n   :  :           +- CometColumnarExchange\n   :  :              +- HashAggregate\n   :  :                 +- Union\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    :- Project\n   :  :                    :  +- BroadcastHashJoin\n   :  :                    :     :- ColumnarToRow\n   :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                    :     :        +- ReusedSubquery\n   :  :                    :     +- BroadcastExchange\n   :  :                    :        +- CometNativeColumnarToRow\n   :  :                    :           +- CometProject\n   :  :                    :              +- CometFilter\n   :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  :                    +- Project\n   :  :                       +- BroadcastHashJoin\n   :  :                          :- ColumnarToRow\n   :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :  :                          :        +- ReusedSubquery\n   :  :                          +- BroadcastExchange\n   :  :                             +- CometNativeColumnarToRow\n   :  :                                +- CometProject\n   :  :                                   +- CometFilter\n   :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :  +- HashAggregate\n   :     +- CometNativeColumnarToRow\n   :        +- CometColumnarExchange\n   :           +- HashAggregate\n   :              +- Project\n   :                 +- BroadcastHashJoin\n   :                    :- Project\n   :                    :  +- BroadcastHashJoin\n   :                    :     :- BroadcastHashJoin\n   :                    :     :  :- Filter\n   :                    :     :  :  +- ColumnarToRow\n   :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :  :           +- SubqueryBroadcast\n   :                    :     :  :              +- BroadcastExchange\n   :                    :     :  :                 +- CometNativeColumnarToRow\n   :                    :     :  :                    +- CometProject\n   :                    :     :  :                       +- CometFilter\n   :                    :     :  :                          :  +- ReusedSubquery\n   :                    :     :  :                          +- CometNativeScan parquet 
spark_catalog.default.date_dim\n   :                    :     :  :                                +- Subquery\n   :                    :     :  :                                   +- CometNativeColumnarToRow\n   :                    :     :  :                                      +- CometProject\n   :                    :     :  :                                         +- CometFilter\n   :                    :     :  :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :  +- BroadcastExchange\n   :                    :     :     +- Project\n   :                    :     :        +- BroadcastHashJoin\n   :                    :     :           :- CometNativeColumnarToRow\n   :                    :     :           :  +- CometFilter\n   :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :           +- BroadcastExchange\n   :                    :     :              +- BroadcastHashJoin\n   :                    :     :                 :- CometNativeColumnarToRow\n   :                    :     :                 :  +- CometHashAggregate\n   :                    :     :                 :     +- CometColumnarExchange\n   :                    :     :                 :        +- HashAggregate\n   :                    :     :                 :           +- Project\n   :                    :     :                 :              +- BroadcastHashJoin\n   :                    :     :                 :                 :- Project\n   :                    :     :                 :                 :  +- BroadcastHashJoin\n   :                    :     :                 :                 :     :- Filter\n   :                    :     :                 :                 :     :  +- ColumnarToRow\n   :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :     :           +- SubqueryBroadcast\n   :                    :     :                 :                 :     :              +- BroadcastExchange\n   :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n   :                    :     :                 :                 :     :                    +- CometProject\n   :                    :     :                 :                 :     :                       +- CometFilter\n   :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 :     +- BroadcastExchange\n   :                    :     :                 :                 :        +- BroadcastHashJoin\n   :                    :     :                 :                 :           :- CometNativeColumnarToRow\n   :                    :     :                 :                 :           :  +- CometFilter\n   :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :           +- BroadcastExchange\n   :                    :     :                 :                 :              +- Project\n   :                    :     :             
    :                 :                 +- BroadcastHashJoin\n   :                    :     :                 :                 :                    :- Project\n   :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n   :                    :     :                 :                 :                    :     :- Filter\n   :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n   :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n   :                    :     :                 :                 :                    :     +- BroadcastExchange\n   :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                    :           +- CometFilter\n   :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                 :                 :                    +- BroadcastExchange\n   :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n   :                    :     :                 :                 :                          +- CometProject\n   :                    :     :                 :                 :                             +- CometFilter\n   :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 :                 +- BroadcastExchange\n   :                    :     :                 :                    +- CometNativeColumnarToRow\n   :                    :     :                 :                       +- CometProject\n   :                    :     :                 :                          +- CometFilter\n   :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     :                 +- BroadcastExchange\n   :                    :     :                    +- Project\n   :                    :     :                       +- BroadcastHashJoin\n   :                    :     :                          :- Project\n   :                    :     :                          :  +- BroadcastHashJoin\n   :                    :     :                          :     :- Filter\n   :                    :     :                          :     :  +- ColumnarToRow\n   :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :     :                          :     :           +- ReusedSubquery\n   :                    :     :                          :     +- BroadcastExchange\n   :                    :     :                          :        +- CometNativeColumnarToRow\n   :                    :     :                          :           +- CometFilter\n   :            
        :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :     :                          +- BroadcastExchange\n   :                    :     :                             +- CometNativeColumnarToRow\n   :                    :     :                                +- CometProject\n   :                    :     :                                   +- CometFilter\n   :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :     +- BroadcastExchange\n   :                    :        +- BroadcastHashJoin\n   :                    :           :- CometNativeColumnarToRow\n   :                    :           :  +- CometFilter\n   :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :           +- BroadcastExchange\n   :                    :              +- Project\n   :                    :                 +- BroadcastHashJoin\n   :                    :                    :- CometNativeColumnarToRow\n   :                    :                    :  +- CometFilter\n   :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                    +- BroadcastExchange\n   :                    :                       +- BroadcastHashJoin\n   :                    :                          :- CometNativeColumnarToRow\n   :                    :                          :  +- CometHashAggregate\n   :                    :                          :     +- CometColumnarExchange\n   :                    :                          :        +- HashAggregate\n   :                    :                          :           +- Project\n   :                    :                          :              +- BroadcastHashJoin\n   :                    :                          :                 :- Project\n   :                    :                          :                 :  +- BroadcastHashJoin\n   :                    :                          :                 :     :- Filter\n   :                    :                          :                 :     :  +- ColumnarToRow\n   :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :     :           +- SubqueryBroadcast\n   :                    :                          :                 :     :              +- BroadcastExchange\n   :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n   :                    :                          :                 :     :                    +- CometProject\n   :                    :                          :                 :     :                       +- CometFilter\n   :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 :     +- BroadcastExchange\n   :                    :                          :                 :        +- BroadcastHashJoin\n   :                    :                          :                 :           :- CometNativeColumnarToRow\n   :                  
  :                          :                 :           :  +- CometFilter\n   :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :           +- BroadcastExchange\n   :                    :                          :                 :              +- Project\n   :                    :                          :                 :                 +- BroadcastHashJoin\n   :                    :                          :                 :                    :- Project\n   :                    :                          :                 :                    :  +- BroadcastHashJoin\n   :                    :                          :                 :                    :     :- Filter\n   :                    :                          :                 :                    :     :  +- ColumnarToRow\n   :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                          :                 :                    :     :           +- ReusedSubquery\n   :                    :                          :                 :                    :     +- BroadcastExchange\n   :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n   :                    :                          :                 :                    :           +- CometFilter\n   :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                          :                 :                    +- BroadcastExchange\n   :                    :                          :                 :                       +- CometNativeColumnarToRow\n   :                    :                          :                 :                          +- CometProject\n   :                    :                          :                 :                             +- CometFilter\n   :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          :                 +- BroadcastExchange\n   :                    :                          :                    +- CometNativeColumnarToRow\n   :                    :                          :                       +- CometProject\n   :                    :                          :                          +- CometFilter\n   :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    :                          +- BroadcastExchange\n   :                    :                             +- Project\n   :                    :                                +- BroadcastHashJoin\n   :                    :                                   :- Project\n   :                    :                                   :  +- BroadcastHashJoin\n   :                    :                                   :     :- Filter\n   :                    :                                   :     :  +- ColumnarToRow\n   :                   
 :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                    :                                   :     :           +- ReusedSubquery\n   :                    :                                   :     +- BroadcastExchange\n   :                    :                                   :        +- CometNativeColumnarToRow\n   :                    :                                   :           +- CometFilter\n   :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n   :                    :                                   +- BroadcastExchange\n   :                    :                                      +- CometNativeColumnarToRow\n   :                    :                                         +- CometProject\n   :                    :                                            +- CometFilter\n   :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                    +- BroadcastExchange\n   :                       +- CometNativeColumnarToRow\n   :                          +- CometProject\n   :                             +- CometFilter\n   :                                :  +- ReusedSubquery\n   :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                                      +- Subquery\n   :                                         +- CometNativeColumnarToRow\n   :                                            +- CometProject\n   :                                               +- CometFilter\n   :                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n   +- BroadcastExchange\n      +- Filter\n         :  +- ReusedSubquery\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- BroadcastHashJoin\n                           :     :  :- Filter\n                           :     :  :  +- ColumnarToRow\n                           :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :  :           +- SubqueryBroadcast\n                           :     :  :              +- BroadcastExchange\n                           :     :  :                 +- CometNativeColumnarToRow\n                           :     :  :                    +- CometProject\n                           :     :  :                       +- CometFilter\n                           :     :  :                          :  +- ReusedSubquery\n                           :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  :                                +- Subquery\n                           :     :  :                                   +- CometNativeColumnarToRow\n                           :     :  :                                      +- CometProject\n                           :     :  :                                         +- CometFilter\n            
               :     :  :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :  +- BroadcastExchange\n                           :     :     +- Project\n                           :     :        +- BroadcastHashJoin\n                           :     :           :- CometNativeColumnarToRow\n                           :     :           :  +- CometFilter\n                           :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :           +- BroadcastExchange\n                           :     :              +- BroadcastHashJoin\n                           :     :                 :- CometNativeColumnarToRow\n                           :     :                 :  +- CometHashAggregate\n                           :     :                 :     +- CometColumnarExchange\n                           :     :                 :        +- HashAggregate\n                           :     :                 :           +- Project\n                           :     :                 :              +- BroadcastHashJoin\n                           :     :                 :                 :- Project\n                           :     :                 :                 :  +- BroadcastHashJoin\n                           :     :                 :                 :     :- Filter\n                           :     :                 :                 :     :  +- ColumnarToRow\n                           :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :     :           +- SubqueryBroadcast\n                           :     :                 :                 :     :              +- BroadcastExchange\n                           :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                           :     :                 :                 :     :                    +- CometProject\n                           :     :                 :                 :     :                       +- CometFilter\n                           :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 :     +- BroadcastExchange\n                           :     :                 :                 :        +- BroadcastHashJoin\n                           :     :                 :                 :           :- CometNativeColumnarToRow\n                           :     :                 :                 :           :  +- CometFilter\n                           :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :           +- BroadcastExchange\n                           :     :                 :                 :              +- Project\n                           :     :                 :                 :                 +- BroadcastHashJoin\n                           :     :                 :                 :                    :- Project\n                           :     :                 :                 :                    :  +- BroadcastHashJoin\n                    
       :     :                 :                 :                    :     :- Filter\n                           :     :                 :                 :                    :     :  +- ColumnarToRow\n                           :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                 :                 :                    :     :           +- ReusedSubquery\n                           :     :                 :                 :                    :     +- BroadcastExchange\n                           :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                           :     :                 :                 :                    :           +- CometFilter\n                           :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                 :                 :                    +- BroadcastExchange\n                           :     :                 :                 :                       +- CometNativeColumnarToRow\n                           :     :                 :                 :                          +- CometProject\n                           :     :                 :                 :                             +- CometFilter\n                           :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 :                 +- BroadcastExchange\n                           :     :                 :                    +- CometNativeColumnarToRow\n                           :     :                 :                       +- CometProject\n                           :     :                 :                          +- CometFilter\n                           :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     :                 +- BroadcastExchange\n                           :     :                    +- Project\n                           :     :                       +- BroadcastHashJoin\n                           :     :                          :- Project\n                           :     :                          :  +- BroadcastHashJoin\n                           :     :                          :     :- Filter\n                           :     :                          :     :  +- ColumnarToRow\n                           :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :     :                          :     :           +- ReusedSubquery\n                           :     :                          :     +- BroadcastExchange\n                           :     :                          :        +- CometNativeColumnarToRow\n                           :     :                          :           +- CometFilter\n                           :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                           :     :                          +- BroadcastExchange\n       
                    :     :                             +- CometNativeColumnarToRow\n                           :     :                                +- CometProject\n                           :     :                                   +- CometFilter\n                           :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :     +- BroadcastExchange\n                           :        +- BroadcastHashJoin\n                           :           :- CometNativeColumnarToRow\n                           :           :  +- CometFilter\n                           :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :           +- BroadcastExchange\n                           :              +- Project\n                           :                 +- BroadcastHashJoin\n                           :                    :- CometNativeColumnarToRow\n                           :                    :  +- CometFilter\n                           :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                    +- BroadcastExchange\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometHashAggregate\n                           :                          :     +- CometColumnarExchange\n                           :                          :        +- HashAggregate\n                           :                          :           +- Project\n                           :                          :              +- BroadcastHashJoin\n                           :                          :                 :- Project\n                           :                          :                 :  +- BroadcastHashJoin\n                           :                          :                 :     :- Filter\n                           :                          :                 :     :  +- ColumnarToRow\n                           :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :     :           +- SubqueryBroadcast\n                           :                          :                 :     :              +- BroadcastExchange\n                           :                          :                 :     :                 +- CometNativeColumnarToRow\n                           :                          :                 :     :                    +- CometProject\n                           :                          :                 :     :                       +- CometFilter\n                           :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 :     +- BroadcastExchange\n                           :                          :                 :        +- BroadcastHashJoin\n                           :                          :                 :           :- CometNativeColumnarToRow\n                           :                          :                 :           :  +- CometFilter\n     
                      :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :           +- BroadcastExchange\n                           :                          :                 :              +- Project\n                           :                          :                 :                 +- BroadcastHashJoin\n                           :                          :                 :                    :- Project\n                           :                          :                 :                    :  +- BroadcastHashJoin\n                           :                          :                 :                    :     :- Filter\n                           :                          :                 :                    :     :  +- ColumnarToRow\n                           :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                          :                 :                    :     :           +- ReusedSubquery\n                           :                          :                 :                    :     +- BroadcastExchange\n                           :                          :                 :                    :        +- CometNativeColumnarToRow\n                           :                          :                 :                    :           +- CometFilter\n                           :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                          :                 :                    +- BroadcastExchange\n                           :                          :                 :                       +- CometNativeColumnarToRow\n                           :                          :                 :                          +- CometProject\n                           :                          :                 :                             +- CometFilter\n                           :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          :                 +- BroadcastExchange\n                           :                          :                    +- CometNativeColumnarToRow\n                           :                          :                       +- CometProject\n                           :                          :                          +- CometFilter\n                           :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- BroadcastHashJoin\n                           :                                   :- Project\n                           :                                   :  +- BroadcastHashJoin\n                           :                                   :     :- Filter\n                           :                                   :     :  +- ColumnarToRow\n             
              :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                                   :     :           +- ReusedSubquery\n                           :                                   :     +- BroadcastExchange\n                           :                                   :        +- CometNativeColumnarToRow\n                           :                                   :           +- CometFilter\n                           :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                           :                                   +- BroadcastExchange\n                           :                                      +- CometNativeColumnarToRow\n                           :                                         +- CometProject\n                           :                                            +- CometFilter\n                           :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometProject\n                                    +- CometFilter\n                                       :  +- ReusedSubquery\n                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- Subquery\n                                                +- CometNativeColumnarToRow\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 128 out of 337 eligible operators (37%). Final plan contains 69 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q14.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometBroadcastHashJoin\n      :- CometFilter\n      :  :  +- Subquery\n      :  :     +- CometNativeColumnarToRow\n      :  :        +- CometHashAggregate\n      :  :           +- CometExchange\n      :  :              +- CometHashAggregate\n      :  :                 +- CometUnion\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    :- CometProject\n      :  :                    :  +- CometBroadcastHashJoin\n      :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :  :                    :     :     +- ReusedSubquery\n      :  :                    :     +- CometBroadcastExchange\n      :  :                    :        +- CometProject\n      :  :                    :           +- CometFilter\n      :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  :                    +- CometProject\n      :  :                       +- CometBroadcastHashJoin\n      :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :  :                          :     +- ReusedSubquery\n      :  :                          +- CometBroadcastExchange\n      :  :                             +- CometProject\n      :  :                                +- CometFilter\n      :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :  +- CometHashAggregate\n      :     +- CometExchange\n      :        +- CometHashAggregate\n      :           +- CometProject\n      :              +- CometBroadcastHashJoin\n      :                 :- CometProject\n      :                 :  +- CometBroadcastHashJoin\n      :                 :     :- CometBroadcastHashJoin\n      :                 :     :  :- CometFilter\n      :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :  :        +- SubqueryBroadcast\n      :                 :     :  :           +- BroadcastExchange\n      :                 :     :  :              +- CometNativeColumnarToRow\n      :                 :     :  :                 +- CometProject\n      :                 :     :  :                    +- CometFilter\n      :                 :     :  :                       :  +- ReusedSubquery\n      :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :  :                             +- Subquery\n      :                 :     :  :                                +- CometNativeColumnarToRow\n      :                 :     :  :                                   +- CometProject\n      :                 :     :  :                                      +- CometFilter\n      :                 :     :  :     
                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :  +- CometBroadcastExchange\n      :                 :     :     +- CometProject\n      :                 :     :        +- CometBroadcastHashJoin\n      :                 :     :           :- CometFilter\n      :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :           +- CometBroadcastExchange\n      :                 :     :              +- CometBroadcastHashJoin\n      :                 :     :                 :- CometHashAggregate\n      :                 :     :                 :  +- CometExchange\n      :                 :     :                 :     +- CometHashAggregate\n      :                 :     :                 :        +- CometProject\n      :                 :     :                 :           +- CometBroadcastHashJoin\n      :                 :     :                 :              :- CometProject\n      :                 :     :                 :              :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :     :- CometFilter\n      :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :     :                 :              :     :        +- SubqueryBroadcast\n      :                 :     :                 :              :     :           +- BroadcastExchange\n      :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n      :                 :     :                 :              :     :                 +- CometProject\n      :                 :     :                 :              :     :                    +- CometFilter\n      :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              :     +- CometBroadcastExchange\n      :                 :     :                 :              :        +- CometBroadcastHashJoin\n      :                 :     :                 :              :           :- CometFilter\n      :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :           +- CometBroadcastExchange\n      :                 :     :                 :              :              +- CometProject\n      :                 :     :                 :              :                 +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :- CometProject\n      :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n      :                 :     :                 :              :                    :     :- CometFilter\n      :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :     :                 :              :                    :     :        +- ReusedSubquery\n      :                 :     :                 :              :                    :     +- 
CometBroadcastExchange\n      :                 :     :                 :              :                    :        +- CometFilter\n      :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                 :              :                    +- CometBroadcastExchange\n      :                 :     :                 :              :                       +- CometProject\n      :                 :     :                 :              :                          +- CometFilter\n      :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 :              +- CometBroadcastExchange\n      :                 :     :                 :                 +- CometProject\n      :                 :     :                 :                    +- CometFilter\n      :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     :                 +- CometBroadcastExchange\n      :                 :     :                    +- CometProject\n      :                 :     :                       +- CometBroadcastHashJoin\n      :                 :     :                          :- CometProject\n      :                 :     :                          :  +- CometBroadcastHashJoin\n      :                 :     :                          :     :- CometFilter\n      :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :     :                          :     :        +- ReusedSubquery\n      :                 :     :                          :     +- CometBroadcastExchange\n      :                 :     :                          :        +- CometFilter\n      :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :     :                          +- CometBroadcastExchange\n      :                 :     :                             +- CometProject\n      :                 :     :                                +- CometFilter\n      :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :     +- CometBroadcastExchange\n      :                 :        +- CometBroadcastHashJoin\n      :                 :           :- CometFilter\n      :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :           +- CometBroadcastExchange\n      :                 :              +- CometProject\n      :                 :                 +- CometBroadcastHashJoin\n      :                 :                    :- CometFilter\n      :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                    +- CometBroadcastExchange\n      :                 :                       +- CometBroadcastHashJoin\n      :                 :                          :- CometHashAggregate\n      :                 :                          :  +- CometExchange\n      :        
         :                          :     +- CometHashAggregate\n      :                 :                          :        +- CometProject\n      :                 :                          :           +- CometBroadcastHashJoin\n      :                 :                          :              :- CometProject\n      :                 :                          :              :  +- CometBroadcastHashJoin\n      :                 :                          :              :     :- CometFilter\n      :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                 :                          :              :     :        +- SubqueryBroadcast\n      :                 :                          :              :     :           +- BroadcastExchange\n      :                 :                          :              :     :              +- CometNativeColumnarToRow\n      :                 :                          :              :     :                 +- CometProject\n      :                 :                          :              :     :                    +- CometFilter\n      :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          :              :     +- CometBroadcastExchange\n      :                 :                          :              :        +- CometBroadcastHashJoin\n      :                 :                          :              :           :- CometFilter\n      :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :           +- CometBroadcastExchange\n      :                 :                          :              :              +- CometProject\n      :                 :                          :              :                 +- CometBroadcastHashJoin\n      :                 :                          :              :                    :- CometProject\n      :                 :                          :              :                    :  +- CometBroadcastHashJoin\n      :                 :                          :              :                    :     :- CometFilter\n      :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                 :                          :              :                    :     :        +- ReusedSubquery\n      :                 :                          :              :                    :     +- CometBroadcastExchange\n      :                 :                          :              :                    :        +- CometFilter\n      :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                          :              :                    +- CometBroadcastExchange\n      :                 :                          :              :                       +- CometProject\n      :                 :                          :              :                          +- CometFilter\n      :                 :                          :         
     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          :              +- CometBroadcastExchange\n      :                 :                          :                 +- CometProject\n      :                 :                          :                    +- CometFilter\n      :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 :                          +- CometBroadcastExchange\n      :                 :                             +- CometProject\n      :                 :                                +- CometBroadcastHashJoin\n      :                 :                                   :- CometProject\n      :                 :                                   :  +- CometBroadcastHashJoin\n      :                 :                                   :     :- CometFilter\n      :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n      :                 :                                   :     :        +- ReusedSubquery\n      :                 :                                   :     +- CometBroadcastExchange\n      :                 :                                   :        +- CometFilter\n      :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                 :                                   +- CometBroadcastExchange\n      :                 :                                      +- CometProject\n      :                 :                                         +- CometFilter\n      :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                 +- CometBroadcastExchange\n      :                    +- CometProject\n      :                       +- CometFilter\n      :                          :  +- ReusedSubquery\n      :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                +- ReusedSubquery\n      +- CometBroadcastExchange\n         +- CometFilter\n            :  +- ReusedSubquery\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometBroadcastHashJoin\n                           :     :  :- CometFilter\n                           :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :  :        +- SubqueryBroadcast\n                           :     :  :           +- BroadcastExchange\n                           :     :  :              +- CometNativeColumnarToRow\n                           :     :  :                 +- CometProject\n                           :     :  :                    +- CometFilter\n                           :     :  :                       :  +- ReusedSubquery\n                           :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                 
          :     :  :                             +- Subquery\n                           :     :  :                                +- CometNativeColumnarToRow\n                           :     :  :                                   +- CometProject\n                           :     :  :                                      +- CometFilter\n                           :     :  :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :  +- CometBroadcastExchange\n                           :     :     +- CometProject\n                           :     :        +- CometBroadcastHashJoin\n                           :     :           :- CometFilter\n                           :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :           +- CometBroadcastExchange\n                           :     :              +- CometBroadcastHashJoin\n                           :     :                 :- CometHashAggregate\n                           :     :                 :  +- CometExchange\n                           :     :                 :     +- CometHashAggregate\n                           :     :                 :        +- CometProject\n                           :     :                 :           +- CometBroadcastHashJoin\n                           :     :                 :              :- CometProject\n                           :     :                 :              :  +- CometBroadcastHashJoin\n                           :     :                 :              :     :- CometFilter\n                           :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :     :                 :              :     :        +- SubqueryBroadcast\n                           :     :                 :              :     :           +- BroadcastExchange\n                           :     :                 :              :     :              +- CometNativeColumnarToRow\n                           :     :                 :              :     :                 +- CometProject\n                           :     :                 :              :     :                    +- CometFilter\n                           :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              :     +- CometBroadcastExchange\n                           :     :                 :              :        +- CometBroadcastHashJoin\n                           :     :                 :              :           :- CometFilter\n                           :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :           +- CometBroadcastExchange\n                           :     :                 :              :              +- CometProject\n                           :     :                 :              :                 +- CometBroadcastHashJoin\n                           :     :                 :              :                    :- CometProject\n                           :     :                 :              :                    :  +- CometBroadcastHashJoin\n                           :  
   :                 :              :                    :     :- CometFilter\n                           :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :     :                 :              :                    :     :        +- ReusedSubquery\n                           :     :                 :              :                    :     +- CometBroadcastExchange\n                           :     :                 :              :                    :        +- CometFilter\n                           :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                 :              :                    +- CometBroadcastExchange\n                           :     :                 :              :                       +- CometProject\n                           :     :                 :              :                          +- CometFilter\n                           :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 :              +- CometBroadcastExchange\n                           :     :                 :                 +- CometProject\n                           :     :                 :                    +- CometFilter\n                           :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     :                 +- CometBroadcastExchange\n                           :     :                    +- CometProject\n                           :     :                       +- CometBroadcastHashJoin\n                           :     :                          :- CometProject\n                           :     :                          :  +- CometBroadcastHashJoin\n                           :     :                          :     :- CometFilter\n                           :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :     :                          :     :        +- ReusedSubquery\n                           :     :                          :     +- CometBroadcastExchange\n                           :     :                          :        +- CometFilter\n                           :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :     :                          +- CometBroadcastExchange\n                           :     :                             +- CometProject\n                           :     :                                +- CometFilter\n                           :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :     +- CometBroadcastExchange\n                           :        +- CometBroadcastHashJoin\n                           :           :- CometFilter\n                           :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :           +- CometBroadcastExchange\n                           :              +- 
CometProject\n                           :                 +- CometBroadcastHashJoin\n                           :                    :- CometFilter\n                           :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                    +- CometBroadcastExchange\n                           :                       +- CometBroadcastHashJoin\n                           :                          :- CometHashAggregate\n                           :                          :  +- CometExchange\n                           :                          :     +- CometHashAggregate\n                           :                          :        +- CometProject\n                           :                          :           +- CometBroadcastHashJoin\n                           :                          :              :- CometProject\n                           :                          :              :  +- CometBroadcastHashJoin\n                           :                          :              :     :- CometFilter\n                           :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                          :              :     :        +- SubqueryBroadcast\n                           :                          :              :     :           +- BroadcastExchange\n                           :                          :              :     :              +- CometNativeColumnarToRow\n                           :                          :              :     :                 +- CometProject\n                           :                          :              :     :                    +- CometFilter\n                           :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              :     +- CometBroadcastExchange\n                           :                          :              :        +- CometBroadcastHashJoin\n                           :                          :              :           :- CometFilter\n                           :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :           +- CometBroadcastExchange\n                           :                          :              :              +- CometProject\n                           :                          :              :                 +- CometBroadcastHashJoin\n                           :                          :              :                    :- CometProject\n                           :                          :              :                    :  +- CometBroadcastHashJoin\n                           :                          :              :                    :     :- CometFilter\n                           :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                           :                          :              :                    :     :        +- ReusedSubquery\n                           :                          :              :                    :     +- CometBroadcastExchange\n   
                        :                          :              :                    :        +- CometFilter\n                           :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                          :              :                    +- CometBroadcastExchange\n                           :                          :              :                       +- CometProject\n                           :                          :              :                          +- CometFilter\n                           :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          :              +- CometBroadcastExchange\n                           :                          :                 +- CometProject\n                           :                          :                    +- CometFilter\n                           :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                          +- CometBroadcastExchange\n                           :                             +- CometProject\n                           :                                +- CometBroadcastHashJoin\n                           :                                   :- CometProject\n                           :                                   :  +- CometBroadcastHashJoin\n                           :                                   :     :- CometFilter\n                           :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                                   :     :        +- ReusedSubquery\n                           :                                   :     +- CometBroadcastExchange\n                           :                                   :        +- CometFilter\n                           :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                                   +- CometBroadcastExchange\n                           :                                      +- CometProject\n                           :                                         +- CometFilter\n                           :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           +- CometBroadcastExchange\n                              +- CometProject\n                                 +- CometFilter\n                                    :  +- ReusedSubquery\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          +- ReusedSubquery\n\nComet accelerated 298 out of 331 eligible operators (90%). Final plan contains 10 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q14a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- Filter\n               :              :  :  +- Subquery\n               :              :  :     +- HashAggregate\n               :              :  :        +- CometNativeColumnarToRow\n               :              :  :           +- CometColumnarExchange\n               :              :  :              +- HashAggregate\n               :              :  :                 +- Union\n               :              :  :                    :- Project\n               :              :  :                    :  +- BroadcastHashJoin\n               :              :  :                    :     :- ColumnarToRow\n               :              :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :  :                    :     :        +- ReusedSubquery\n               :              :  :                    :     +- BroadcastExchange\n               :              :  :                    :        +- CometNativeColumnarToRow\n               :              :  :                    :           +- CometProject\n               :              :  :                    :              +- CometFilter\n               :              :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  :                    :- Project\n               :              :  :                    :  +- BroadcastHashJoin\n               :              :  :                    :     :- ColumnarToRow\n               :              :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :  :                    :     :        +- SubqueryBroadcast\n               :              :  :                    :     :           +- BroadcastExchange\n               :              :  :                    :     :              +- CometNativeColumnarToRow\n               :              :  :                    :     :                 +- CometProject\n               :              :  :                    :     :                    +- CometFilter\n               :              :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  :                    :     +- BroadcastExchange\n               :              :  :                    :        +- CometNativeColumnarToRow\n               :              :  :                    :           +- CometProject\n               :              :  :                    :              +- CometFilter\n               :              :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  :                    +- Project\n               :              :  :                       +- BroadcastHashJoin\n               :              :  :                          :- ColumnarToRow\n               :              :  :        
                  :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :  :                          :        +- ReusedSubquery\n               :              :  :                          +- BroadcastExchange\n               :              :  :                             +- CometNativeColumnarToRow\n               :              :  :                                +- CometProject\n               :              :  :                                   +- CometFilter\n               :              :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :  +- HashAggregate\n               :              :     +- CometNativeColumnarToRow\n               :              :        +- CometColumnarExchange\n               :              :           +- HashAggregate\n               :              :              +- Project\n               :              :                 +- BroadcastHashJoin\n               :              :                    :- Project\n               :              :                    :  +- BroadcastHashJoin\n               :              :                    :     :- BroadcastHashJoin\n               :              :                    :     :  :- Filter\n               :              :                    :     :  :  +- ColumnarToRow\n               :              :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :  :           +- SubqueryBroadcast\n               :              :                    :     :  :              +- BroadcastExchange\n               :              :                    :     :  :                 +- CometNativeColumnarToRow\n               :              :                    :     :  :                    +- CometProject\n               :              :                    :     :  :                       +- CometFilter\n               :              :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :  +- BroadcastExchange\n               :              :                    :     :     +- Project\n               :              :                    :     :        +- BroadcastHashJoin\n               :              :                    :     :           :- CometNativeColumnarToRow\n               :              :                    :     :           :  +- CometFilter\n               :              :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :           +- BroadcastExchange\n               :              :                    :     :              +- BroadcastHashJoin\n               :              :                    :     :                 :- CometNativeColumnarToRow\n               :              :                    :     :                 :  +- CometHashAggregate\n               :              :                    :     :                 :     +- CometColumnarExchange\n               :              :                    :     :                 :        +- HashAggregate\n               :              :                    :     :                 :           +- Project\n              
 :              :                    :     :                 :              +- BroadcastHashJoin\n               :              :                    :     :                 :                 :- Project\n               :              :                    :     :                 :                 :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :     :- Filter\n               :              :                    :     :                 :                 :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :              :                    :     :                 :                 :     :              +- BroadcastExchange\n               :              :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :     :                    +- CometProject\n               :              :                    :     :                 :                 :     :                       +- CometFilter\n               :              :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 :     +- BroadcastExchange\n               :              :                    :     :                 :                 :        +- BroadcastHashJoin\n               :              :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :           :  +- CometFilter\n               :              :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :           +- BroadcastExchange\n               :              :                    :     :                 :                 :              +- Project\n               :              :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :- Project\n               :              :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :     :- Filter\n               :              :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    : 
    :                 :                 :                    :     :           +- ReusedSubquery\n               :              :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :              :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                    :           +- CometFilter\n               :              :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :                    +- BroadcastExchange\n               :              :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                          +- CometProject\n               :              :                    :     :                 :                 :                             +- CometFilter\n               :              :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 +- BroadcastExchange\n               :              :                    :     :                 :                    +- CometNativeColumnarToRow\n               :              :                    :     :                 :                       +- CometProject\n               :              :                    :     :                 :                          +- CometFilter\n               :              :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 +- BroadcastExchange\n               :              :                    :     :                    +- Project\n               :              :                    :     :                       +- BroadcastHashJoin\n               :              :                    :     :                          :- Project\n               :              :                    :     :                          :  +- BroadcastHashJoin\n               :              :                    :     :                          :     :- Filter\n               :              :                    :     :                          :     :  +- ColumnarToRow\n               :              :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                          :     :           +- ReusedSubquery\n               :              :                    :     :                          :     +- BroadcastExchange\n               :              :                    :     :                          :        +- CometNativeColumnarToRow\n               :              :                    :     :                          :           +- CometFilter\n               :              :                    :     :  
                        :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                          +- BroadcastExchange\n               :              :                    :     :                             +- CometNativeColumnarToRow\n               :              :                    :     :                                +- CometProject\n               :              :                    :     :                                   +- CometFilter\n               :              :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     +- BroadcastExchange\n               :              :                    :        +- BroadcastHashJoin\n               :              :                    :           :- CometNativeColumnarToRow\n               :              :                    :           :  +- CometFilter\n               :              :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :           +- BroadcastExchange\n               :              :                    :              +- Project\n               :              :                    :                 +- BroadcastHashJoin\n               :              :                    :                    :- CometNativeColumnarToRow\n               :              :                    :                    :  +- CometFilter\n               :              :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                    +- BroadcastExchange\n               :              :                    :                       +- BroadcastHashJoin\n               :              :                    :                          :- CometNativeColumnarToRow\n               :              :                    :                          :  +- CometHashAggregate\n               :              :                    :                          :     +- CometColumnarExchange\n               :              :                    :                          :        +- HashAggregate\n               :              :                    :                          :           +- Project\n               :              :                    :                          :              +- BroadcastHashJoin\n               :              :                    :                          :                 :- Project\n               :              :                    :                          :                 :  +- BroadcastHashJoin\n               :              :                    :                          :                 :     :- Filter\n               :              :                    :                          :                 :     :  +- ColumnarToRow\n               :              :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :     :           +- SubqueryBroadcast\n               :              :                    :                          :                 :     :              +- BroadcastExchange\n               :    
          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :                          :                 :     :                    +- CometProject\n               :              :                    :                          :                 :     :                       +- CometFilter\n               :              :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 :     +- BroadcastExchange\n               :              :                    :                          :                 :        +- BroadcastHashJoin\n               :              :                    :                          :                 :           :- CometNativeColumnarToRow\n               :              :                    :                          :                 :           :  +- CometFilter\n               :              :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :           +- BroadcastExchange\n               :              :                    :                          :                 :              +- Project\n               :              :                    :                          :                 :                 +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :- Project\n               :              :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :     :- Filter\n               :              :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :              :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :              :                    :                          :                 :                    :     +- BroadcastExchange\n               :              :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :                          :                 :                    :           +- CometFilter\n               :              :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :                    +- BroadcastExchange\n               :              :                    :                          :                 :                       +- CometNativeColumnarToRow\n 
              :              :                    :                          :                 :                          +- CometProject\n               :              :                    :                          :                 :                             +- CometFilter\n               :              :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 +- BroadcastExchange\n               :              :                    :                          :                    +- CometNativeColumnarToRow\n               :              :                    :                          :                       +- CometProject\n               :              :                    :                          :                          +- CometFilter\n               :              :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          +- BroadcastExchange\n               :              :                    :                             +- Project\n               :              :                    :                                +- BroadcastHashJoin\n               :              :                    :                                   :- Project\n               :              :                    :                                   :  +- BroadcastHashJoin\n               :              :                    :                                   :     :- Filter\n               :              :                    :                                   :     :  +- ColumnarToRow\n               :              :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                                   :     :           +- ReusedSubquery\n               :              :                    :                                   :     +- BroadcastExchange\n               :              :                    :                                   :        +- CometNativeColumnarToRow\n               :              :                    :                                   :           +- CometFilter\n               :              :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                                   +- BroadcastExchange\n               :              :                    :                                      +- CometNativeColumnarToRow\n               :              :                    :                                         +- CometProject\n               :              :                    :                                            +- CometFilter\n               :              :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    +- BroadcastExchange\n               :              :                       +- CometNativeColumnarToRow\n               :              :                          +- CometProject\n      
         :              :                             +- CometFilter\n               :              :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :- Filter\n               :              :  :  +- ReusedSubquery\n               :              :  +- HashAggregate\n               :              :     +- CometNativeColumnarToRow\n               :              :        +- CometColumnarExchange\n               :              :           +- HashAggregate\n               :              :              +- Project\n               :              :                 +- BroadcastHashJoin\n               :              :                    :- Project\n               :              :                    :  +- BroadcastHashJoin\n               :              :                    :     :- BroadcastHashJoin\n               :              :                    :     :  :- Filter\n               :              :                    :     :  :  +- ColumnarToRow\n               :              :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :  :           +- ReusedSubquery\n               :              :                    :     :  +- BroadcastExchange\n               :              :                    :     :     +- Project\n               :              :                    :     :        +- BroadcastHashJoin\n               :              :                    :     :           :- CometNativeColumnarToRow\n               :              :                    :     :           :  +- CometFilter\n               :              :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :           +- BroadcastExchange\n               :              :                    :     :              +- BroadcastHashJoin\n               :              :                    :     :                 :- CometNativeColumnarToRow\n               :              :                    :     :                 :  +- CometHashAggregate\n               :              :                    :     :                 :     +- CometColumnarExchange\n               :              :                    :     :                 :        +- HashAggregate\n               :              :                    :     :                 :           +- Project\n               :              :                    :     :                 :              +- BroadcastHashJoin\n               :              :                    :     :                 :                 :- Project\n               :              :                    :     :                 :                 :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :     :- Filter\n               :              :                    :     :                 :                 :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :              :                    
:     :                 :                 :     :              +- BroadcastExchange\n               :              :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :     :                    +- CometProject\n               :              :                    :     :                 :                 :     :                       +- CometFilter\n               :              :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 :     +- BroadcastExchange\n               :              :                    :     :                 :                 :        +- BroadcastHashJoin\n               :              :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :           :  +- CometFilter\n               :              :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :           +- BroadcastExchange\n               :              :                    :     :                 :                 :              +- Project\n               :              :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :- Project\n               :              :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :              :                    :     :                 :                 :                    :     :- Filter\n               :              :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :              :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :              :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :              :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                    :           +- CometFilter\n               :              :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                 :                 :                    +- BroadcastExchange\n               :              :                    :     :                 :                 :                    
   +- CometNativeColumnarToRow\n               :              :                    :     :                 :                 :                          +- CometProject\n               :              :                    :     :                 :                 :                             +- CometFilter\n               :              :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 :                 +- BroadcastExchange\n               :              :                    :     :                 :                    +- CometNativeColumnarToRow\n               :              :                    :     :                 :                       +- CometProject\n               :              :                    :     :                 :                          +- CometFilter\n               :              :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     :                 +- BroadcastExchange\n               :              :                    :     :                    +- Project\n               :              :                    :     :                       +- BroadcastHashJoin\n               :              :                    :     :                          :- Project\n               :              :                    :     :                          :  +- BroadcastHashJoin\n               :              :                    :     :                          :     :- Filter\n               :              :                    :     :                          :     :  +- ColumnarToRow\n               :              :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :     :                          :     :           +- ReusedSubquery\n               :              :                    :     :                          :     +- BroadcastExchange\n               :              :                    :     :                          :        +- CometNativeColumnarToRow\n               :              :                    :     :                          :           +- CometFilter\n               :              :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :     :                          +- BroadcastExchange\n               :              :                    :     :                             +- CometNativeColumnarToRow\n               :              :                    :     :                                +- CometProject\n               :              :                    :     :                                   +- CometFilter\n               :              :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :     +- BroadcastExchange\n               :              :                    :        +- BroadcastHashJoin\n               :              :                    :           :- CometNativeColumnarToRow\n               :            
  :                    :           :  +- CometFilter\n               :              :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :           +- BroadcastExchange\n               :              :                    :              +- Project\n               :              :                    :                 +- BroadcastHashJoin\n               :              :                    :                    :- CometNativeColumnarToRow\n               :              :                    :                    :  +- CometFilter\n               :              :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                    +- BroadcastExchange\n               :              :                    :                       +- BroadcastHashJoin\n               :              :                    :                          :- CometNativeColumnarToRow\n               :              :                    :                          :  +- CometHashAggregate\n               :              :                    :                          :     +- CometColumnarExchange\n               :              :                    :                          :        +- HashAggregate\n               :              :                    :                          :           +- Project\n               :              :                    :                          :              +- BroadcastHashJoin\n               :              :                    :                          :                 :- Project\n               :              :                    :                          :                 :  +- BroadcastHashJoin\n               :              :                    :                          :                 :     :- Filter\n               :              :                    :                          :                 :     :  +- ColumnarToRow\n               :              :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :     :           +- SubqueryBroadcast\n               :              :                    :                          :                 :     :              +- BroadcastExchange\n               :              :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :              :                    :                          :                 :     :                    +- CometProject\n               :              :                    :                          :                 :     :                       +- CometFilter\n               :              :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 :     +- BroadcastExchange\n               :              :                    :                          :                 :        +- BroadcastHashJoin\n               :              :                    :                          :                 :           :- 
CometNativeColumnarToRow\n               :              :                    :                          :                 :           :  +- CometFilter\n               :              :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :           +- BroadcastExchange\n               :              :                    :                          :                 :              +- Project\n               :              :                    :                          :                 :                 +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :- Project\n               :              :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :              :                    :                          :                 :                    :     :- Filter\n               :              :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :              :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :              :                    :                          :                 :                    :     +- BroadcastExchange\n               :              :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :              :                    :                          :                 :                    :           +- CometFilter\n               :              :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                          :                 :                    +- BroadcastExchange\n               :              :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :              :                    :                          :                 :                          +- CometProject\n               :              :                    :                          :                 :                             +- CometFilter\n               :              :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          :                 +- BroadcastExchange\n               :              :                    :                          :                    +- CometNativeColumnarToRow\n               :              :                    :                          :                       +- CometProject\n               :              :                    :                          :                          +- CometFilter\n      
         :              :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    :                          +- BroadcastExchange\n               :              :                    :                             +- Project\n               :              :                    :                                +- BroadcastHashJoin\n               :              :                    :                                   :- Project\n               :              :                    :                                   :  +- BroadcastHashJoin\n               :              :                    :                                   :     :- Filter\n               :              :                    :                                   :     :  +- ColumnarToRow\n               :              :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                    :                                   :     :           +- ReusedSubquery\n               :              :                    :                                   :     +- BroadcastExchange\n               :              :                    :                                   :        +- CometNativeColumnarToRow\n               :              :                    :                                   :           +- CometFilter\n               :              :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :              :                    :                                   +- BroadcastExchange\n               :              :                    :                                      +- CometNativeColumnarToRow\n               :              :                    :                                         +- CometProject\n               :              :                    :                                            +- CometFilter\n               :              :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                    +- BroadcastExchange\n               :              :                       +- CometNativeColumnarToRow\n               :              :                          +- CometProject\n               :              :                             +- CometFilter\n               :              :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              +- Filter\n               :                 :  +- ReusedSubquery\n               :                 +- HashAggregate\n               :                    +- CometNativeColumnarToRow\n               :                       +- CometColumnarExchange\n               :                          +- HashAggregate\n               :                             +- Project\n               :                                +- BroadcastHashJoin\n               :                                   :- Project\n               :                                   :  +- BroadcastHashJoin\n               :                                   :     :- BroadcastHashJoin\n               :                                   :     :  :- Filter\n               
:                                   :     :  :  +- ColumnarToRow\n               :                                   :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :  :           +- ReusedSubquery\n               :                                   :     :  +- BroadcastExchange\n               :                                   :     :     +- Project\n               :                                   :     :        +- BroadcastHashJoin\n               :                                   :     :           :- CometNativeColumnarToRow\n               :                                   :     :           :  +- CometFilter\n               :                                   :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :           +- BroadcastExchange\n               :                                   :     :              +- BroadcastHashJoin\n               :                                   :     :                 :- CometNativeColumnarToRow\n               :                                   :     :                 :  +- CometHashAggregate\n               :                                   :     :                 :     +- CometColumnarExchange\n               :                                   :     :                 :        +- HashAggregate\n               :                                   :     :                 :           +- Project\n               :                                   :     :                 :              +- BroadcastHashJoin\n               :                                   :     :                 :                 :- Project\n               :                                   :     :                 :                 :  +- BroadcastHashJoin\n               :                                   :     :                 :                 :     :- Filter\n               :                                   :     :                 :                 :     :  +- ColumnarToRow\n               :                                   :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :                 :                 :     :           +- SubqueryBroadcast\n               :                                   :     :                 :                 :     :              +- BroadcastExchange\n               :                                   :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                   :     :                 :                 :     :                    +- CometProject\n               :                                   :     :                 :                 :     :                       +- CometFilter\n               :                                   :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :                 :                 :     +- BroadcastExchange\n               :                                   :     :                 :                 :        +- BroadcastHashJoin\n               :            
                       :     :                 :                 :           :- CometNativeColumnarToRow\n               :                                   :     :                 :                 :           :  +- CometFilter\n               :                                   :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :                 :                 :           +- BroadcastExchange\n               :                                   :     :                 :                 :              +- Project\n               :                                   :     :                 :                 :                 +- BroadcastHashJoin\n               :                                   :     :                 :                 :                    :- Project\n               :                                   :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                                   :     :                 :                 :                    :     :- Filter\n               :                                   :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                   :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                   :     :                 :                 :                    :     +- BroadcastExchange\n               :                                   :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                   :     :                 :                 :                    :           +- CometFilter\n               :                                   :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :                 :                 :                    +- BroadcastExchange\n               :                                   :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                   :     :                 :                 :                          +- CometProject\n               :                                   :     :                 :                 :                             +- CometFilter\n               :                                   :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :                 :                 +- BroadcastExchange\n               :                                   :     :                 :                    +- CometNativeColumnarToRow\n               :                                   :     :                 :                       +- CometProject\n               :                                   :     :                 :                          +- 
CometFilter\n               :                                   :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :                 +- BroadcastExchange\n               :                                   :     :                    +- Project\n               :                                   :     :                       +- BroadcastHashJoin\n               :                                   :     :                          :- Project\n               :                                   :     :                          :  +- BroadcastHashJoin\n               :                                   :     :                          :     :- Filter\n               :                                   :     :                          :     :  +- ColumnarToRow\n               :                                   :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :                          :     :           +- ReusedSubquery\n               :                                   :     :                          :     +- BroadcastExchange\n               :                                   :     :                          :        +- CometNativeColumnarToRow\n               :                                   :     :                          :           +- CometFilter\n               :                                   :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :     :                          +- BroadcastExchange\n               :                                   :     :                             +- CometNativeColumnarToRow\n               :                                   :     :                                +- CometProject\n               :                                   :     :                                   +- CometFilter\n               :                                   :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     +- BroadcastExchange\n               :                                   :        +- BroadcastHashJoin\n               :                                   :           :- CometNativeColumnarToRow\n               :                                   :           :  +- CometFilter\n               :                                   :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :           +- BroadcastExchange\n               :                                   :              +- Project\n               :                                   :                 +- BroadcastHashJoin\n               :                                   :                    :- CometNativeColumnarToRow\n               :                                   :                    :  +- CometFilter\n               :                                   :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                    +- BroadcastExchange\n               :                                   :                       +- BroadcastHashJoin\n      
         :                                   :                          :- CometNativeColumnarToRow\n               :                                   :                          :  +- CometHashAggregate\n               :                                   :                          :     +- CometColumnarExchange\n               :                                   :                          :        +- HashAggregate\n               :                                   :                          :           +- Project\n               :                                   :                          :              +- BroadcastHashJoin\n               :                                   :                          :                 :- Project\n               :                                   :                          :                 :  +- BroadcastHashJoin\n               :                                   :                          :                 :     :- Filter\n               :                                   :                          :                 :     :  +- ColumnarToRow\n               :                                   :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :                          :                 :     :           +- SubqueryBroadcast\n               :                                   :                          :                 :     :              +- BroadcastExchange\n               :                                   :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                   :                          :                 :     :                    +- CometProject\n               :                                   :                          :                 :     :                       +- CometFilter\n               :                                   :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :                          :                 :     +- BroadcastExchange\n               :                                   :                          :                 :        +- BroadcastHashJoin\n               :                                   :                          :                 :           :- CometNativeColumnarToRow\n               :                                   :                          :                 :           :  +- CometFilter\n               :                                   :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                          :                 :           +- BroadcastExchange\n               :                                   :                          :                 :              +- Project\n               :                                   :                          :                 :                 +- BroadcastHashJoin\n               :                                   :                          :                 :                    :- Project\n               :                                   :                          :                 :                    :  
+- BroadcastHashJoin\n               :                                   :                          :                 :                    :     :- Filter\n               :                                   :                          :                 :                    :     :  +- ColumnarToRow\n               :                                   :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :                          :                 :                    :     :           +- ReusedSubquery\n               :                                   :                          :                 :                    :     +- BroadcastExchange\n               :                                   :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                   :                          :                 :                    :           +- CometFilter\n               :                                   :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                          :                 :                    +- BroadcastExchange\n               :                                   :                          :                 :                       +- CometNativeColumnarToRow\n               :                                   :                          :                 :                          +- CometProject\n               :                                   :                          :                 :                             +- CometFilter\n               :                                   :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :                          :                 +- BroadcastExchange\n               :                                   :                          :                    +- CometNativeColumnarToRow\n               :                                   :                          :                       +- CometProject\n               :                                   :                          :                          +- CometFilter\n               :                                   :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :                          +- BroadcastExchange\n               :                                   :                             +- Project\n               :                                   :                                +- BroadcastHashJoin\n               :                                   :                                   :- Project\n               :                                   :                                   :  +- BroadcastHashJoin\n               :                                   :                                   :     :- Filter\n               :                                   :                                   :     :  +- ColumnarToRow\n               :                                   :                   
                :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :                                   :     :           +- ReusedSubquery\n               :                                   :                                   :     +- BroadcastExchange\n               :                                   :                                   :        +- CometNativeColumnarToRow\n               :                                   :                                   :           +- CometFilter\n               :                                   :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   :                                   +- BroadcastExchange\n               :                                   :                                      +- CometNativeColumnarToRow\n               :                                   :                                         +- CometProject\n               :                                   :                                            +- CometFilter\n               :                                   :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   +- BroadcastExchange\n               :                                      +- CometNativeColumnarToRow\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- Filter\n               :                          :  :  +- Subquery\n               :                          :  :     +- HashAggregate\n               :                          :  :        +- CometNativeColumnarToRow\n               :                          :  :           +- CometColumnarExchange\n               :                          :  :              +- HashAggregate\n               :                          :  :                 +- Union\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- ReusedSubquery\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                  
        :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- SubqueryBroadcast\n               :                          :  :                    :     :           +- BroadcastExchange\n               :                          :  :                    :     :              +- CometNativeColumnarToRow\n               :                          :  :                    :     :                 +- CometProject\n               :                          :  :                    :     :                    +- CometFilter\n               :                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    +- Project\n               :                          :  :                       +- BroadcastHashJoin\n               :                          :  :                          :- ColumnarToRow\n               :                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                          :        +- ReusedSubquery\n               :                          :  :                          +- BroadcastExchange\n               :                          :  :                             +- CometNativeColumnarToRow\n               :                          :  :                                +- CometProject\n               :                          :  :                                   +- CometFilter\n               :                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :            
              :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- SubqueryBroadcast\n               :                          :                    :     :  :              +- BroadcastExchange\n               :                          :                    :     :  :                 +- CometNativeColumnarToRow\n               :                          :                    :     :  :                    +- CometProject\n               :                          :                    :     :  :                       +- CometFilter\n               :                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native 
DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :  
   :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    
:     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :             
    :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n         
      :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                         
          +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :- Filter\n               :                          :  :  +- ReusedSubquery\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- ReusedSubquery\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- 
CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :            
        :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :   
                 :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :        
            :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :            
              :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                             
      :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Filter\n               :                             :  +- ReusedSubquery\n               :                             +- HashAggregate\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometColumnarExchange\n               :                                      +- HashAggregate\n               :                                         +- Project\n               :                                            +- BroadcastHashJoin\n               :                                               :- Project\n               :                                               :  +- BroadcastHashJoin\n               :                                               :     :- BroadcastHashJoin\n               :                                               :     :  :- Filter\n               :                                               :     :  :  +- ColumnarToRow\n               :                                               :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :  :           +- ReusedSubquery\n               :                                               :     :  +- BroadcastExchange\n               :                          
                     :     :     +- Project\n               :                                               :     :        +- BroadcastHashJoin\n               :                                               :     :           :- CometNativeColumnarToRow\n               :                                               :     :           :  +- CometFilter\n               :                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :           +- BroadcastExchange\n               :                                               :     :              +- BroadcastHashJoin\n               :                                               :     :                 :- CometNativeColumnarToRow\n               :                                               :     :                 :  +- CometHashAggregate\n               :                                               :     :                 :     +- CometColumnarExchange\n               :                                               :     :                 :        +- HashAggregate\n               :                                               :     :                 :           +- Project\n               :                                               :     :                 :              +- BroadcastHashJoin\n               :                                               :     :                 :                 :- Project\n               :                                               :     :                 :                 :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :     :- Filter\n               :                                               :     :                 :                 :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :     :           +- SubqueryBroadcast\n               :                                               :     :                 :                 :     :              +- BroadcastExchange\n               :                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :     :                    +- CometProject\n               :                                               :     :                 :                 :     :                       +- CometFilter\n               :                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 :     +- BroadcastExchange\n               :                                               :     :                 :                 :        +- BroadcastHashJoin\n               :                                               :     :                 :                 :           :- CometNativeColumnarToRow\n               :                           
                    :     :                 :                 :           :  +- CometFilter\n               :                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :           +- BroadcastExchange\n               :                                               :     :                 :                 :              +- Project\n               :                                               :     :                 :                 :                 +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :- Project\n               :                                               :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :     :- Filter\n               :                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                               :     :                 :                 :                    :     +- BroadcastExchange\n               :                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                    :           +- CometFilter\n               :                                               :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :                    +- BroadcastExchange\n               :                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                          +- CometProject\n               :                                               :     :                 :                 :                             +- CometFilter\n               :                                               :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 +- BroadcastExchange\n               :                                               :     :                 :                    +- CometNativeColumnarToRow\n               :                                               :     :                 :                       +- 
CometProject\n               :                                               :     :                 :                          +- CometFilter\n               :                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 +- BroadcastExchange\n               :                                               :     :                    +- Project\n               :                                               :     :                       +- BroadcastHashJoin\n               :                                               :     :                          :- Project\n               :                                               :     :                          :  +- BroadcastHashJoin\n               :                                               :     :                          :     :- Filter\n               :                                               :     :                          :     :  +- ColumnarToRow\n               :                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                          :     :           +- ReusedSubquery\n               :                                               :     :                          :     +- BroadcastExchange\n               :                                               :     :                          :        +- CometNativeColumnarToRow\n               :                                               :     :                          :           +- CometFilter\n               :                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                          +- BroadcastExchange\n               :                                               :     :                             +- CometNativeColumnarToRow\n               :                                               :     :                                +- CometProject\n               :                                               :     :                                   +- CometFilter\n               :                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     +- BroadcastExchange\n               :                                               :        +- BroadcastHashJoin\n               :                                               :           :- CometNativeColumnarToRow\n               :                                               :           :  +- CometFilter\n               :                                               :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :           +- BroadcastExchange\n               :                                               :              +- Project\n               :                                               :                 +- BroadcastHashJoin\n               :                                               :   
                 :- CometNativeColumnarToRow\n               :                                               :                    :  +- CometFilter\n               :                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                    +- BroadcastExchange\n               :                                               :                       +- BroadcastHashJoin\n               :                                               :                          :- CometNativeColumnarToRow\n               :                                               :                          :  +- CometHashAggregate\n               :                                               :                          :     +- CometColumnarExchange\n               :                                               :                          :        +- HashAggregate\n               :                                               :                          :           +- Project\n               :                                               :                          :              +- BroadcastHashJoin\n               :                                               :                          :                 :- Project\n               :                                               :                          :                 :  +- BroadcastHashJoin\n               :                                               :                          :                 :     :- Filter\n               :                                               :                          :                 :     :  +- ColumnarToRow\n               :                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :     :           +- SubqueryBroadcast\n               :                                               :                          :                 :     :              +- BroadcastExchange\n               :                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :                          :                 :     :                    +- CometProject\n               :                                               :                          :                 :     :                       +- CometFilter\n               :                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 :     +- BroadcastExchange\n               :                                               :                          :                 :        +- BroadcastHashJoin\n               :                                               :                          :                 :           :- CometNativeColumnarToRow\n               :                                               :                          :                 :           :  +- CometFilter\n               :                         
                      :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :           +- BroadcastExchange\n               :                                               :                          :                 :              +- Project\n               :                                               :                          :                 :                 +- BroadcastHashJoin\n               :                                               :                          :                 :                    :- Project\n               :                                               :                          :                 :                    :  +- BroadcastHashJoin\n               :                                               :                          :                 :                    :     :- Filter\n               :                                               :                          :                 :                    :     :  +- ColumnarToRow\n               :                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :                    :     :           +- ReusedSubquery\n               :                                               :                          :                 :                    :     +- BroadcastExchange\n               :                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :                          :                 :                    :           +- CometFilter\n               :                                               :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :                    +- BroadcastExchange\n               :                                               :                          :                 :                       +- CometNativeColumnarToRow\n               :                                               :                          :                 :                          +- CometProject\n               :                                               :                          :                 :                             +- CometFilter\n               :                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 +- BroadcastExchange\n               :                                               :                          :                    +- CometNativeColumnarToRow\n               :                                               :                          :                       +- CometProject\n               :                                         
      :                          :                          +- CometFilter\n               :                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          +- BroadcastExchange\n               :                                               :                             +- Project\n               :                                               :                                +- BroadcastHashJoin\n               :                                               :                                   :- Project\n               :                                               :                                   :  +- BroadcastHashJoin\n               :                                               :                                   :     :- Filter\n               :                                               :                                   :     :  +- ColumnarToRow\n               :                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                                   :     :           +- ReusedSubquery\n               :                                               :                                   :     +- BroadcastExchange\n               :                                               :                                   :        +- CometNativeColumnarToRow\n               :                                               :                                   :           +- CometFilter\n               :                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                                   +- BroadcastExchange\n               :                                               :                                      +- CometNativeColumnarToRow\n               :                                               :                                         +- CometProject\n               :                                               :                                            +- CometFilter\n               :                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               +- BroadcastExchange\n               :                                                  +- CometNativeColumnarToRow\n               :                                                     +- CometProject\n               :                                                        +- CometFilter\n               :                                                           +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n             
  :                       +- Union\n               :                          :- Filter\n               :                          :  :  +- Subquery\n               :                          :  :     +- HashAggregate\n               :                          :  :        +- CometNativeColumnarToRow\n               :                          :  :           +- CometColumnarExchange\n               :                          :  :              +- HashAggregate\n               :                          :  :                 +- Union\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- ReusedSubquery\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- SubqueryBroadcast\n               :                          :  :                    :     :           +- BroadcastExchange\n               :                          :  :                    :     :              +- CometNativeColumnarToRow\n               :                          :  :                    :     :                 +- CometProject\n               :                          :  :                    :     :                    +- CometFilter\n               :                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    +- Project\n               :                          :  :                       +- BroadcastHashJoin\n               :                          :  :                
          :- ColumnarToRow\n               :                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                          :        +- ReusedSubquery\n               :                          :  :                          +- BroadcastExchange\n               :                          :  :                             +- CometNativeColumnarToRow\n               :                          :  :                                +- CometProject\n               :                          :  :                                   +- CometFilter\n               :                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- SubqueryBroadcast\n               :                          :                    :     :  :              +- BroadcastExchange\n               :                          :                    :     :  :                 +- CometNativeColumnarToRow\n               :                          :                    :     :  :                    +- CometProject\n               :                          :                    :     :  :                       +- CometFilter\n               :                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                  
  :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :          
       :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               
:                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :         
                 :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    
:                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :    
                               :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :- Filter\n               :                          :  :  +- ReusedSubquery\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not 
support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- ReusedSubquery\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :      
              :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 
+- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- 
BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :       
             :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :      
              :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Filter\n               :                             :  +- 
ReusedSubquery\n               :                             +- HashAggregate\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometColumnarExchange\n               :                                      +- HashAggregate\n               :                                         +- Project\n               :                                            +- BroadcastHashJoin\n               :                                               :- Project\n               :                                               :  +- BroadcastHashJoin\n               :                                               :     :- BroadcastHashJoin\n               :                                               :     :  :- Filter\n               :                                               :     :  :  +- ColumnarToRow\n               :                                               :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :  :           +- ReusedSubquery\n               :                                               :     :  +- BroadcastExchange\n               :                                               :     :     +- Project\n               :                                               :     :        +- BroadcastHashJoin\n               :                                               :     :           :- CometNativeColumnarToRow\n               :                                               :     :           :  +- CometFilter\n               :                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :           +- BroadcastExchange\n               :                                               :     :              +- BroadcastHashJoin\n               :                                               :     :                 :- CometNativeColumnarToRow\n               :                                               :     :                 :  +- CometHashAggregate\n               :                                               :     :                 :     +- CometColumnarExchange\n               :                                               :     :                 :        +- HashAggregate\n               :                                               :     :                 :           +- Project\n               :                                               :     :                 :              +- BroadcastHashJoin\n               :                                               :     :                 :                 :- Project\n               :                                               :     :                 :                 :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :     :- Filter\n               :                                               :     :                 :                 :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :       
          :                 :     :           +- SubqueryBroadcast\n               :                                               :     :                 :                 :     :              +- BroadcastExchange\n               :                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :     :                    +- CometProject\n               :                                               :     :                 :                 :     :                       +- CometFilter\n               :                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 :     +- BroadcastExchange\n               :                                               :     :                 :                 :        +- BroadcastHashJoin\n               :                                               :     :                 :                 :           :- CometNativeColumnarToRow\n               :                                               :     :                 :                 :           :  +- CometFilter\n               :                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :           +- BroadcastExchange\n               :                                               :     :                 :                 :              +- Project\n               :                                               :     :                 :                 :                 +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :- Project\n               :                                               :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :     :- Filter\n               :                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                               :     :                 :                 :                    :     +- BroadcastExchange\n               :                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                    :           +- CometFilter\n               :                                   
            :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :                    +- BroadcastExchange\n               :                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                          +- CometProject\n               :                                               :     :                 :                 :                             +- CometFilter\n               :                                               :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 +- BroadcastExchange\n               :                                               :     :                 :                    +- CometNativeColumnarToRow\n               :                                               :     :                 :                       +- CometProject\n               :                                               :     :                 :                          +- CometFilter\n               :                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 +- BroadcastExchange\n               :                                               :     :                    +- Project\n               :                                               :     :                       +- BroadcastHashJoin\n               :                                               :     :                          :- Project\n               :                                               :     :                          :  +- BroadcastHashJoin\n               :                                               :     :                          :     :- Filter\n               :                                               :     :                          :     :  +- ColumnarToRow\n               :                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                          :     :           +- ReusedSubquery\n               :                                               :     :                          :     +- BroadcastExchange\n               :                                               :     :                          :        +- CometNativeColumnarToRow\n               :                                               :     :                          :           +- CometFilter\n               :                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                          +- BroadcastExchange\n               :                                               :     :               
              +- CometNativeColumnarToRow\n               :                                               :     :                                +- CometProject\n               :                                               :     :                                   +- CometFilter\n               :                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     +- BroadcastExchange\n               :                                               :        +- BroadcastHashJoin\n               :                                               :           :- CometNativeColumnarToRow\n               :                                               :           :  +- CometFilter\n               :                                               :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :           +- BroadcastExchange\n               :                                               :              +- Project\n               :                                               :                 +- BroadcastHashJoin\n               :                                               :                    :- CometNativeColumnarToRow\n               :                                               :                    :  +- CometFilter\n               :                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                    +- BroadcastExchange\n               :                                               :                       +- BroadcastHashJoin\n               :                                               :                          :- CometNativeColumnarToRow\n               :                                               :                          :  +- CometHashAggregate\n               :                                               :                          :     +- CometColumnarExchange\n               :                                               :                          :        +- HashAggregate\n               :                                               :                          :           +- Project\n               :                                               :                          :              +- BroadcastHashJoin\n               :                                               :                          :                 :- Project\n               :                                               :                          :                 :  +- BroadcastHashJoin\n               :                                               :                          :                 :     :- Filter\n               :                                               :                          :                 :     :  +- ColumnarToRow\n               :                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :     :           +- SubqueryBroadcast\n               :                                               :                          :     
            :     :              +- BroadcastExchange\n               :                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :                          :                 :     :                    +- CometProject\n               :                                               :                          :                 :     :                       +- CometFilter\n               :                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 :     +- BroadcastExchange\n               :                                               :                          :                 :        +- BroadcastHashJoin\n               :                                               :                          :                 :           :- CometNativeColumnarToRow\n               :                                               :                          :                 :           :  +- CometFilter\n               :                                               :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :           +- BroadcastExchange\n               :                                               :                          :                 :              +- Project\n               :                                               :                          :                 :                 +- BroadcastHashJoin\n               :                                               :                          :                 :                    :- Project\n               :                                               :                          :                 :                    :  +- BroadcastHashJoin\n               :                                               :                          :                 :                    :     :- Filter\n               :                                               :                          :                 :                    :     :  +- ColumnarToRow\n               :                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :                    :     :           +- ReusedSubquery\n               :                                               :                          :                 :                    :     +- BroadcastExchange\n               :                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :                          :                 :                    :           +- CometFilter\n               :                                               :                          :                 :                    :              +- 
CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :                    +- BroadcastExchange\n               :                                               :                          :                 :                       +- CometNativeColumnarToRow\n               :                                               :                          :                 :                          +- CometProject\n               :                                               :                          :                 :                             +- CometFilter\n               :                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 +- BroadcastExchange\n               :                                               :                          :                    +- CometNativeColumnarToRow\n               :                                               :                          :                       +- CometProject\n               :                                               :                          :                          +- CometFilter\n               :                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          +- BroadcastExchange\n               :                                               :                             +- Project\n               :                                               :                                +- BroadcastHashJoin\n               :                                               :                                   :- Project\n               :                                               :                                   :  +- BroadcastHashJoin\n               :                                               :                                   :     :- Filter\n               :                                               :                                   :     :  +- ColumnarToRow\n               :                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                                   :     :           +- ReusedSubquery\n               :                                               :                                   :     +- BroadcastExchange\n               :                                               :                                   :        +- CometNativeColumnarToRow\n               :                                               :                                   :           +- CometFilter\n               :                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                                   +- BroadcastExchange\n               :                                               :                                      +- 
CometNativeColumnarToRow\n               :                                               :                                         +- CometProject\n               :                                               :                                            +- CometFilter\n               :                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               +- BroadcastExchange\n               :                                                  +- CometNativeColumnarToRow\n               :                                                     +- CometProject\n               :                                                        +- CometFilter\n               :                                                           +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- Filter\n               :                          :  :  +- Subquery\n               :                          :  :     +- HashAggregate\n               :                          :  :        +- CometNativeColumnarToRow\n               :                          :  :           +- CometColumnarExchange\n               :                          :  :              +- HashAggregate\n               :                          :  :                 +- Union\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- ReusedSubquery\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :- Project\n               :                          :  :                    :  +- BroadcastHashJoin\n               :                          :  :                    :     :- ColumnarToRow\n               :                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                    :     :        +- SubqueryBroadcast\n               :            
              :  :                    :     :           +- BroadcastExchange\n               :                          :  :                    :     :              +- CometNativeColumnarToRow\n               :                          :  :                    :     :                 +- CometProject\n               :                          :  :                    :     :                    +- CometFilter\n               :                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    :     +- BroadcastExchange\n               :                          :  :                    :        +- CometNativeColumnarToRow\n               :                          :  :                    :           +- CometProject\n               :                          :  :                    :              +- CometFilter\n               :                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  :                    +- Project\n               :                          :  :                       +- BroadcastHashJoin\n               :                          :  :                          :- ColumnarToRow\n               :                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :  :                          :        +- ReusedSubquery\n               :                          :  :                          +- BroadcastExchange\n               :                          :  :                             +- CometNativeColumnarToRow\n               :                          :  :                                +- CometProject\n               :                          :  :                                   +- CometFilter\n               :                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- SubqueryBroadcast\n               :                          :                    :     :  :              +- BroadcastExchange\n               :                          :                    :     :  
:                 +- CometNativeColumnarToRow\n               :                          :                    :     :  :                    +- CometProject\n               :                          :                    :     :  :                       +- CometFilter\n               :                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :    
             :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               
:                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :           
               :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          : 
                   :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :       
          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                  
           +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :- Filter\n               :                          :  :  +- ReusedSubquery\n               :                          :  +- HashAggregate\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometColumnarExchange\n               :                          :           +- HashAggregate\n               :                          :              +- Project\n               :                          :                 +- BroadcastHashJoin\n               :                          :                    :- Project\n               :                          :                    :  +- BroadcastHashJoin\n               :                          :                    :     :- BroadcastHashJoin\n               :                          :                    :     :  :- Filter\n               :                          :                    :     :  :  +- ColumnarToRow\n               :                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :  :           +- ReusedSubquery\n               :                          :                    :     :  +- BroadcastExchange\n               :                          :                    :     :     +- Project\n               :                          :                    :     :        +- BroadcastHashJoin\n               :                          :                    :     :           :- CometNativeColumnarToRow\n               :                          :                    :     :           :  +- CometFilter\n               :                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :           +- BroadcastExchange\n               :                          :                    :     :              +- BroadcastHashJoin\n               :                          :                    :     :                 :- CometNativeColumnarToRow\n               :                          :                    :     :                 :  +- CometHashAggregate\n               :                          :                    :     :                 :     +- CometColumnarExchange\n               :                          :                    :     :                 :        +- HashAggregate\n               :                          :                    :     :                 :           +- Project\n               :                          :                    :     :                 :              +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :- Project\n               :                          :                    :     :                 :                 :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :     :- Filter\n               :                          :                    :     :                 :                 :     :  +- ColumnarToRow\n               :                          :          
          :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n               :                          :                    :     :                 :                 :     :              +- BroadcastExchange\n               :                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :     :                    +- CometProject\n               :                          :                    :     :                 :                 :     :                       +- CometFilter\n               :                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 :     +- BroadcastExchange\n               :                          :                    :     :                 :                 :        +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :           :  +- CometFilter\n               :                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :           +- BroadcastExchange\n               :                          :                    :     :                 :                 :              +- Project\n               :                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :- Project\n               :                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :     :                 :                 :                    :     :- Filter\n               :                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n               :                          :                    :     :                 :            
     :                    :        +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                    :           +- CometFilter\n               :                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                 :                 :                    +- BroadcastExchange\n               :                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                 :                          +- CometProject\n               :                          :                    :     :                 :                 :                             +- CometFilter\n               :                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 :                 +- BroadcastExchange\n               :                          :                    :     :                 :                    +- CometNativeColumnarToRow\n               :                          :                    :     :                 :                       +- CometProject\n               :                          :                    :     :                 :                          +- CometFilter\n               :                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     :                 +- BroadcastExchange\n               :                          :                    :     :                    +- Project\n               :                          :                    :     :                       +- BroadcastHashJoin\n               :                          :                    :     :                          :- Project\n               :                          :                    :     :                          :  +- BroadcastHashJoin\n               :                          :                    :     :                          :     :- Filter\n               :                          :                    :     :                          :     :  +- ColumnarToRow\n               :                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :     :                          :     :           +- ReusedSubquery\n               :                          :                    :     :                          :     +- BroadcastExchange\n               :                          :                    :     :                          :        +- CometNativeColumnarToRow\n               :                          :                    :     :                          :           +- CometFilter\n               :                          :                    :     :                          :   
           +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :     :                          +- BroadcastExchange\n               :                          :                    :     :                             +- CometNativeColumnarToRow\n               :                          :                    :     :                                +- CometProject\n               :                          :                    :     :                                   +- CometFilter\n               :                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :     +- BroadcastExchange\n               :                          :                    :        +- BroadcastHashJoin\n               :                          :                    :           :- CometNativeColumnarToRow\n               :                          :                    :           :  +- CometFilter\n               :                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :           +- BroadcastExchange\n               :                          :                    :              +- Project\n               :                          :                    :                 +- BroadcastHashJoin\n               :                          :                    :                    :- CometNativeColumnarToRow\n               :                          :                    :                    :  +- CometFilter\n               :                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                    +- BroadcastExchange\n               :                          :                    :                       +- BroadcastHashJoin\n               :                          :                    :                          :- CometNativeColumnarToRow\n               :                          :                    :                          :  +- CometHashAggregate\n               :                          :                    :                          :     +- CometColumnarExchange\n               :                          :                    :                          :        +- HashAggregate\n               :                          :                    :                          :           +- Project\n               :                          :                    :                          :              +- BroadcastHashJoin\n               :                          :                    :                          :                 :- Project\n               :                          :                    :                          :                 :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :     :- Filter\n               :                          :                    :                          :                 :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support 
subqueries/dynamic pruning]\n               :                          :                    :                          :                 :     :           +- SubqueryBroadcast\n               :                          :                    :                          :                 :     :              +- BroadcastExchange\n               :                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :     :                    +- CometProject\n               :                          :                    :                          :                 :     :                       +- CometFilter\n               :                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 :     +- BroadcastExchange\n               :                          :                    :                          :                 :        +- BroadcastHashJoin\n               :                          :                    :                          :                 :           :- CometNativeColumnarToRow\n               :                          :                    :                          :                 :           :  +- CometFilter\n               :                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :           +- BroadcastExchange\n               :                          :                    :                          :                 :              +- Project\n               :                          :                    :                          :                 :                 +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :- Project\n               :                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n               :                          :                    :                          :                 :                    :     :- Filter\n               :                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n               :                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n               :                          :                    :                          :                 :                    :     +- BroadcastExchange\n               :                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                 
         :                    :                          :                 :                    :           +- CometFilter\n               :                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                          :                 :                    +- BroadcastExchange\n               :                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n               :                          :                    :                          :                 :                          +- CometProject\n               :                          :                    :                          :                 :                             +- CometFilter\n               :                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          :                 +- BroadcastExchange\n               :                          :                    :                          :                    +- CometNativeColumnarToRow\n               :                          :                    :                          :                       +- CometProject\n               :                          :                    :                          :                          +- CometFilter\n               :                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    :                          +- BroadcastExchange\n               :                          :                    :                             +- Project\n               :                          :                    :                                +- BroadcastHashJoin\n               :                          :                    :                                   :- Project\n               :                          :                    :                                   :  +- BroadcastHashJoin\n               :                          :                    :                                   :     :- Filter\n               :                          :                    :                                   :     :  +- ColumnarToRow\n               :                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                    :                                   :     :           +- ReusedSubquery\n               :                          :                    :                                   :     +- BroadcastExchange\n               :                          :                    :                                   :        +- CometNativeColumnarToRow\n               :                          :                    :                                   :           +- CometFilter\n               :                          :                    :                                   :              +- 
CometNativeScan parquet spark_catalog.default.item\n               :                          :                    :                                   +- BroadcastExchange\n               :                          :                    :                                      +- CometNativeColumnarToRow\n               :                          :                    :                                         +- CometProject\n               :                          :                    :                                            +- CometFilter\n               :                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                    +- BroadcastExchange\n               :                          :                       +- CometNativeColumnarToRow\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Filter\n               :                             :  +- ReusedSubquery\n               :                             +- HashAggregate\n               :                                +- CometNativeColumnarToRow\n               :                                   +- CometColumnarExchange\n               :                                      +- HashAggregate\n               :                                         +- Project\n               :                                            +- BroadcastHashJoin\n               :                                               :- Project\n               :                                               :  +- BroadcastHashJoin\n               :                                               :     :- BroadcastHashJoin\n               :                                               :     :  :- Filter\n               :                                               :     :  :  +- ColumnarToRow\n               :                                               :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :  :           +- ReusedSubquery\n               :                                               :     :  +- BroadcastExchange\n               :                                               :     :     +- Project\n               :                                               :     :        +- BroadcastHashJoin\n               :                                               :     :           :- CometNativeColumnarToRow\n               :                                               :     :           :  +- CometFilter\n               :                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :           +- BroadcastExchange\n               :                                               :     :              +- BroadcastHashJoin\n               :                                               :     :                 :- CometNativeColumnarToRow\n               :                                               :     :                 :  
+- CometHashAggregate\n               :                                               :     :                 :     +- CometColumnarExchange\n               :                                               :     :                 :        +- HashAggregate\n               :                                               :     :                 :           +- Project\n               :                                               :     :                 :              +- BroadcastHashJoin\n               :                                               :     :                 :                 :- Project\n               :                                               :     :                 :                 :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :     :- Filter\n               :                                               :     :                 :                 :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :     :           +- SubqueryBroadcast\n               :                                               :     :                 :                 :     :              +- BroadcastExchange\n               :                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :     :                    +- CometProject\n               :                                               :     :                 :                 :     :                       +- CometFilter\n               :                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 :     +- BroadcastExchange\n               :                                               :     :                 :                 :        +- BroadcastHashJoin\n               :                                               :     :                 :                 :           :- CometNativeColumnarToRow\n               :                                               :     :                 :                 :           :  +- CometFilter\n               :                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :           +- BroadcastExchange\n               :                                               :     :                 :                 :              +- Project\n               :                                               :     :                 :                 :                 +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :- Project\n               :                                               :     :                 
:                 :                    :  +- BroadcastHashJoin\n               :                                               :     :                 :                 :                    :     :- Filter\n               :                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n               :                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n               :                                               :     :                 :                 :                    :     +- BroadcastExchange\n               :                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                    :           +- CometFilter\n               :                                               :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                 :                 :                    +- BroadcastExchange\n               :                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n               :                                               :     :                 :                 :                          +- CometProject\n               :                                               :     :                 :                 :                             +- CometFilter\n               :                                               :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 :                 +- BroadcastExchange\n               :                                               :     :                 :                    +- CometNativeColumnarToRow\n               :                                               :     :                 :                       +- CometProject\n               :                                               :     :                 :                          +- CometFilter\n               :                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     :                 +- BroadcastExchange\n               :                                               :     :                    +- Project\n               :                                               :     :                       +- BroadcastHashJoin\n               :                                               :     :                          :- Project\n               :                                               :     :                          :  +- BroadcastHashJoin\n               :                        
                       :     :                          :     :- Filter\n               :                                               :     :                          :     :  +- ColumnarToRow\n               :                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :     :                          :     :           +- ReusedSubquery\n               :                                               :     :                          :     +- BroadcastExchange\n               :                                               :     :                          :        +- CometNativeColumnarToRow\n               :                                               :     :                          :           +- CometFilter\n               :                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :     :                          +- BroadcastExchange\n               :                                               :     :                             +- CometNativeColumnarToRow\n               :                                               :     :                                +- CometProject\n               :                                               :     :                                   +- CometFilter\n               :                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :     +- BroadcastExchange\n               :                                               :        +- BroadcastHashJoin\n               :                                               :           :- CometNativeColumnarToRow\n               :                                               :           :  +- CometFilter\n               :                                               :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :           +- BroadcastExchange\n               :                                               :              +- Project\n               :                                               :                 +- BroadcastHashJoin\n               :                                               :                    :- CometNativeColumnarToRow\n               :                                               :                    :  +- CometFilter\n               :                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                    +- BroadcastExchange\n               :                                               :                       +- BroadcastHashJoin\n               :                                               :                          :- CometNativeColumnarToRow\n               :                                               :                          :  +- CometHashAggregate\n               :                                               :                          :     +- CometColumnarExchange\n               :                                 
              :                          :        +- HashAggregate\n               :                                               :                          :           +- Project\n               :                                               :                          :              +- BroadcastHashJoin\n               :                                               :                          :                 :- Project\n               :                                               :                          :                 :  +- BroadcastHashJoin\n               :                                               :                          :                 :     :- Filter\n               :                                               :                          :                 :     :  +- ColumnarToRow\n               :                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :     :           +- SubqueryBroadcast\n               :                                               :                          :                 :     :              +- BroadcastExchange\n               :                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n               :                                               :                          :                 :     :                    +- CometProject\n               :                                               :                          :                 :     :                       +- CometFilter\n               :                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 :     +- BroadcastExchange\n               :                                               :                          :                 :        +- BroadcastHashJoin\n               :                                               :                          :                 :           :- CometNativeColumnarToRow\n               :                                               :                          :                 :           :  +- CometFilter\n               :                                               :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :           +- BroadcastExchange\n               :                                               :                          :                 :              +- Project\n               :                                               :                          :                 :                 +- BroadcastHashJoin\n               :                                               :                          :                 :                    :- Project\n               :                                               :                          :                 :                    :  +- BroadcastHashJoin\n               :                                       
        :                          :                 :                    :     :- Filter\n               :                                               :                          :                 :                    :     :  +- ColumnarToRow\n               :                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                          :                 :                    :     :           +- ReusedSubquery\n               :                                               :                          :                 :                    :     +- BroadcastExchange\n               :                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n               :                                               :                          :                 :                    :           +- CometFilter\n               :                                               :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                          :                 :                    +- BroadcastExchange\n               :                                               :                          :                 :                       +- CometNativeColumnarToRow\n               :                                               :                          :                 :                          +- CometProject\n               :                                               :                          :                 :                             +- CometFilter\n               :                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          :                 +- BroadcastExchange\n               :                                               :                          :                    +- CometNativeColumnarToRow\n               :                                               :                          :                       +- CometProject\n               :                                               :                          :                          +- CometFilter\n               :                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               :                          +- BroadcastExchange\n               :                                               :                             +- Project\n               :                                               :                                +- BroadcastHashJoin\n               :                                               :                                   :- Project\n               :                                               :                                   :  +- BroadcastHashJoin\n               :                                               :                          
         :     :- Filter\n               :                                               :                                   :     :  +- ColumnarToRow\n               :                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                               :                                   :     :           +- ReusedSubquery\n               :                                               :                                   :     +- BroadcastExchange\n               :                                               :                                   :        +- CometNativeColumnarToRow\n               :                                               :                                   :           +- CometFilter\n               :                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                               :                                   +- BroadcastExchange\n               :                                               :                                      +- CometNativeColumnarToRow\n               :                                               :                                         +- CometProject\n               :                                               :                                            +- CometFilter\n               :                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                               +- BroadcastExchange\n               :                                                  +- CometNativeColumnarToRow\n               :                                                     +- CometProject\n               :                                                        +- CometFilter\n               :                                                           +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- Filter\n                                          :  :  +- Subquery\n                                          :  :     +- HashAggregate\n                                          :  :        +- CometNativeColumnarToRow\n                                          :  :           +- CometColumnarExchange\n                                          :  :              +- HashAggregate\n                                          :  :                 +- Union\n                                          :  :                    :- Project\n                                          :  :                    :  +- BroadcastHashJoin\n                                          :  :                    :     :- ColumnarToRow\n                                          :  :                    :     :  +-  Scan parquet 
spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :  :                    :     :        +- ReusedSubquery\n                                          :  :                    :     +- BroadcastExchange\n                                          :  :                    :        +- CometNativeColumnarToRow\n                                          :  :                    :           +- CometProject\n                                          :  :                    :              +- CometFilter\n                                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  :                    :- Project\n                                          :  :                    :  +- BroadcastHashJoin\n                                          :  :                    :     :- ColumnarToRow\n                                          :  :                    :     :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :  :                    :     :        +- SubqueryBroadcast\n                                          :  :                    :     :           +- BroadcastExchange\n                                          :  :                    :     :              +- CometNativeColumnarToRow\n                                          :  :                    :     :                 +- CometProject\n                                          :  :                    :     :                    +- CometFilter\n                                          :  :                    :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  :                    :     +- BroadcastExchange\n                                          :  :                    :        +- CometNativeColumnarToRow\n                                          :  :                    :           +- CometProject\n                                          :  :                    :              +- CometFilter\n                                          :  :                    :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :  :                    +- Project\n                                          :  :                       +- BroadcastHashJoin\n                                          :  :                          :- ColumnarToRow\n                                          :  :                          :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :  :                          :        +- ReusedSubquery\n                                          :  :                          +- BroadcastExchange\n                                          :  :                             +- CometNativeColumnarToRow\n                                          :  :                                +- CometProject\n                                          :  :                                   +- CometFilter\n                                          :  :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                         
                 :  +- HashAggregate\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometColumnarExchange\n                                          :           +- HashAggregate\n                                          :              +- Project\n                                          :                 +- BroadcastHashJoin\n                                          :                    :- Project\n                                          :                    :  +- BroadcastHashJoin\n                                          :                    :     :- BroadcastHashJoin\n                                          :                    :     :  :- Filter\n                                          :                    :     :  :  +- ColumnarToRow\n                                          :                    :     :  :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :  :           +- SubqueryBroadcast\n                                          :                    :     :  :              +- BroadcastExchange\n                                          :                    :     :  :                 +- CometNativeColumnarToRow\n                                          :                    :     :  :                    +- CometProject\n                                          :                    :     :  :                       +- CometFilter\n                                          :                    :     :  :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :  +- BroadcastExchange\n                                          :                    :     :     +- Project\n                                          :                    :     :        +- BroadcastHashJoin\n                                          :                    :     :           :- CometNativeColumnarToRow\n                                          :                    :     :           :  +- CometFilter\n                                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :           +- BroadcastExchange\n                                          :                    :     :              +- BroadcastHashJoin\n                                          :                    :     :                 :- CometNativeColumnarToRow\n                                          :                    :     :                 :  +- CometHashAggregate\n                                          :                    :     :                 :     +- CometColumnarExchange\n                                          :                    :     :                 :        +- HashAggregate\n                                          :                    :     :                 :           +- Project\n                                          :                    :     :                 :              +- BroadcastHashJoin\n                                          :                    :     :                 :                 :- Project\n                                          :                    :     :                 :                 :  +- BroadcastHashJoin\n    
                                      :                    :     :                 :                 :     :- Filter\n                                          :                    :     :                 :                 :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n                                          :                    :     :                 :                 :     :              +- BroadcastExchange\n                                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :     :                    +- CometProject\n                                          :                    :     :                 :                 :     :                       +- CometFilter\n                                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 :     +- BroadcastExchange\n                                          :                    :     :                 :                 :        +- BroadcastHashJoin\n                                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :           :  +- CometFilter\n                                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :           +- BroadcastExchange\n                                          :                    :     :                 :                 :              +- Project\n                                          :                    :     :                 :                 :                 +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :- Project\n                                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :     :- Filter\n                                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :        
            :     :           +- ReusedSubquery\n                                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n                                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                    :           +- CometFilter\n                                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :                    +- BroadcastExchange\n                                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                          +- CometProject\n                                          :                    :     :                 :                 :                             +- CometFilter\n                                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 +- BroadcastExchange\n                                          :                    :     :                 :                    +- CometNativeColumnarToRow\n                                          :                    :     :                 :                       +- CometProject\n                                          :                    :     :                 :                          +- CometFilter\n                                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 +- BroadcastExchange\n                                          :                    :     :                    +- Project\n                                          :                    :     :                       +- BroadcastHashJoin\n                                          :                    :     :                          :- Project\n                                          :                    :     :                          :  +- BroadcastHashJoin\n                                          :                    :     :                          :     :- Filter\n                                          :                    :     :                          :     :  +- ColumnarToRow\n                                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                          :     :           +- ReusedSubquery\n                                          :                    :     :                          :     +- BroadcastExchange\n                                          :            
        :     :                          :        +- CometNativeColumnarToRow\n                                          :                    :     :                          :           +- CometFilter\n                                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                          +- BroadcastExchange\n                                          :                    :     :                             +- CometNativeColumnarToRow\n                                          :                    :     :                                +- CometProject\n                                          :                    :     :                                   +- CometFilter\n                                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     +- BroadcastExchange\n                                          :                    :        +- BroadcastHashJoin\n                                          :                    :           :- CometNativeColumnarToRow\n                                          :                    :           :  +- CometFilter\n                                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :           +- BroadcastExchange\n                                          :                    :              +- Project\n                                          :                    :                 +- BroadcastHashJoin\n                                          :                    :                    :- CometNativeColumnarToRow\n                                          :                    :                    :  +- CometFilter\n                                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                    +- BroadcastExchange\n                                          :                    :                       +- BroadcastHashJoin\n                                          :                    :                          :- CometNativeColumnarToRow\n                                          :                    :                          :  +- CometHashAggregate\n                                          :                    :                          :     +- CometColumnarExchange\n                                          :                    :                          :        +- HashAggregate\n                                          :                    :                          :           +- Project\n                                          :                    :                          :              +- BroadcastHashJoin\n                                          :                    :                          :                 :- Project\n                                          :                    :                          :                 :  +- BroadcastHashJoin\n                                          :                    :                          :                 :     :- Filter\n                                          :            
        :                          :                 :     :  +- ColumnarToRow\n                                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :     :           +- SubqueryBroadcast\n                                          :                    :                          :                 :     :              +- BroadcastExchange\n                                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :                          :                 :     :                    +- CometProject\n                                          :                    :                          :                 :     :                       +- CometFilter\n                                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 :     +- BroadcastExchange\n                                          :                    :                          :                 :        +- BroadcastHashJoin\n                                          :                    :                          :                 :           :- CometNativeColumnarToRow\n                                          :                    :                          :                 :           :  +- CometFilter\n                                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :           +- BroadcastExchange\n                                          :                    :                          :                 :              +- Project\n                                          :                    :                          :                 :                 +- BroadcastHashJoin\n                                          :                    :                          :                 :                    :- Project\n                                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n                                          :                    :                          :                 :                    :     :- Filter\n                                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n                                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n                                          :               
     :                          :                 :                    :     +- BroadcastExchange\n                                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                    :           +- CometFilter\n                                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :                    +- BroadcastExchange\n                                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                          +- CometProject\n                                          :                    :                          :                 :                             +- CometFilter\n                                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 +- BroadcastExchange\n                                          :                    :                          :                    +- CometNativeColumnarToRow\n                                          :                    :                          :                       +- CometProject\n                                          :                    :                          :                          +- CometFilter\n                                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          +- BroadcastExchange\n                                          :                    :                             +- Project\n                                          :                    :                                +- BroadcastHashJoin\n                                          :                    :                                   :- Project\n                                          :                    :                                   :  +- BroadcastHashJoin\n                                          :                    :                                   :     :- Filter\n                                          :                    :                                   :     :  +- ColumnarToRow\n                                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                                   :     :           +- ReusedSubquery\n                                          :                    :                                   :     +- BroadcastExchange\n                                          :                    :                          
         :        +- CometNativeColumnarToRow\n                                          :                    :                                   :           +- CometFilter\n                                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                                   +- BroadcastExchange\n                                          :                    :                                      +- CometNativeColumnarToRow\n                                          :                    :                                         +- CometProject\n                                          :                    :                                            +- CometFilter\n                                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    +- BroadcastExchange\n                                          :                       +- CometNativeColumnarToRow\n                                          :                          +- CometProject\n                                          :                             +- CometFilter\n                                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :- Filter\n                                          :  :  +- ReusedSubquery\n                                          :  +- HashAggregate\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometColumnarExchange\n                                          :           +- HashAggregate\n                                          :              +- Project\n                                          :                 +- BroadcastHashJoin\n                                          :                    :- Project\n                                          :                    :  +- BroadcastHashJoin\n                                          :                    :     :- BroadcastHashJoin\n                                          :                    :     :  :- Filter\n                                          :                    :     :  :  +- ColumnarToRow\n                                          :                    :     :  :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :  :           +- ReusedSubquery\n                                          :                    :     :  +- BroadcastExchange\n                                          :                    :     :     +- Project\n                                          :                    :     :        +- BroadcastHashJoin\n                                          :                    :     :           :- CometNativeColumnarToRow\n                                          :                    :     :           :  +- CometFilter\n                                          :                    :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :           +- BroadcastExchange\n                   
                       :                    :     :              +- BroadcastHashJoin\n                                          :                    :     :                 :- CometNativeColumnarToRow\n                                          :                    :     :                 :  +- CometHashAggregate\n                                          :                    :     :                 :     +- CometColumnarExchange\n                                          :                    :     :                 :        +- HashAggregate\n                                          :                    :     :                 :           +- Project\n                                          :                    :     :                 :              +- BroadcastHashJoin\n                                          :                    :     :                 :                 :- Project\n                                          :                    :     :                 :                 :  +- BroadcastHashJoin\n                                          :                    :     :                 :                 :     :- Filter\n                                          :                    :     :                 :                 :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :     :           +- SubqueryBroadcast\n                                          :                    :     :                 :                 :     :              +- BroadcastExchange\n                                          :                    :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :     :                    +- CometProject\n                                          :                    :     :                 :                 :     :                       +- CometFilter\n                                          :                    :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 :     +- BroadcastExchange\n                                          :                    :     :                 :                 :        +- BroadcastHashJoin\n                                          :                    :     :                 :                 :           :- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :           :  +- CometFilter\n                                          :                    :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :           +- BroadcastExchange\n                                          :                    :     :                 :                 :              +- Project\n                                          :                    :     :       
          :                 :                 +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :- Project\n                                          :                    :     :                 :                 :                    :  +- BroadcastHashJoin\n                                          :                    :     :                 :                 :                    :     :- Filter\n                                          :                    :     :                 :                 :                    :     :  +- ColumnarToRow\n                                          :                    :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                 :                 :                    :     :           +- ReusedSubquery\n                                          :                    :     :                 :                 :                    :     +- BroadcastExchange\n                                          :                    :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                    :           +- CometFilter\n                                          :                    :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                 :                 :                    +- BroadcastExchange\n                                          :                    :     :                 :                 :                       +- CometNativeColumnarToRow\n                                          :                    :     :                 :                 :                          +- CometProject\n                                          :                    :     :                 :                 :                             +- CometFilter\n                                          :                    :     :                 :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 :                 +- BroadcastExchange\n                                          :                    :     :                 :                    +- CometNativeColumnarToRow\n                                          :                    :     :                 :                       +- CometProject\n                                          :                    :     :                 :                          +- CometFilter\n                                          :                    :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     :                 +- BroadcastExchange\n                                          :                    :     :                    +- Project\n                                          :                    :     :                     
  +- BroadcastHashJoin\n                                          :                    :     :                          :- Project\n                                          :                    :     :                          :  +- BroadcastHashJoin\n                                          :                    :     :                          :     :- Filter\n                                          :                    :     :                          :     :  +- ColumnarToRow\n                                          :                    :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :     :                          :     :           +- ReusedSubquery\n                                          :                    :     :                          :     +- BroadcastExchange\n                                          :                    :     :                          :        +- CometNativeColumnarToRow\n                                          :                    :     :                          :           +- CometFilter\n                                          :                    :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :     :                          +- BroadcastExchange\n                                          :                    :     :                             +- CometNativeColumnarToRow\n                                          :                    :     :                                +- CometProject\n                                          :                    :     :                                   +- CometFilter\n                                          :                    :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :     +- BroadcastExchange\n                                          :                    :        +- BroadcastHashJoin\n                                          :                    :           :- CometNativeColumnarToRow\n                                          :                    :           :  +- CometFilter\n                                          :                    :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :           +- BroadcastExchange\n                                          :                    :              +- Project\n                                          :                    :                 +- BroadcastHashJoin\n                                          :                    :                    :- CometNativeColumnarToRow\n                                          :                    :                    :  +- CometFilter\n                                          :                    :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                    +- BroadcastExchange\n                                          :                    :                       +- BroadcastHashJoin\n                                          :                    :                          :- 
CometNativeColumnarToRow\n                                          :                    :                          :  +- CometHashAggregate\n                                          :                    :                          :     +- CometColumnarExchange\n                                          :                    :                          :        +- HashAggregate\n                                          :                    :                          :           +- Project\n                                          :                    :                          :              +- BroadcastHashJoin\n                                          :                    :                          :                 :- Project\n                                          :                    :                          :                 :  +- BroadcastHashJoin\n                                          :                    :                          :                 :     :- Filter\n                                          :                    :                          :                 :     :  +- ColumnarToRow\n                                          :                    :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :     :           +- SubqueryBroadcast\n                                          :                    :                          :                 :     :              +- BroadcastExchange\n                                          :                    :                          :                 :     :                 +- CometNativeColumnarToRow\n                                          :                    :                          :                 :     :                    +- CometProject\n                                          :                    :                          :                 :     :                       +- CometFilter\n                                          :                    :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 :     +- BroadcastExchange\n                                          :                    :                          :                 :        +- BroadcastHashJoin\n                                          :                    :                          :                 :           :- CometNativeColumnarToRow\n                                          :                    :                          :                 :           :  +- CometFilter\n                                          :                    :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :           +- BroadcastExchange\n                                          :                    :                          :                 :              +- Project\n                                          :                    :                          :                 :                 +- BroadcastHashJoin\n                                     
     :                    :                          :                 :                    :- Project\n                                          :                    :                          :                 :                    :  +- BroadcastHashJoin\n                                          :                    :                          :                 :                    :     :- Filter\n                                          :                    :                          :                 :                    :     :  +- ColumnarToRow\n                                          :                    :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                          :                 :                    :     :           +- ReusedSubquery\n                                          :                    :                          :                 :                    :     +- BroadcastExchange\n                                          :                    :                          :                 :                    :        +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                    :           +- CometFilter\n                                          :                    :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                          :                 :                    +- BroadcastExchange\n                                          :                    :                          :                 :                       +- CometNativeColumnarToRow\n                                          :                    :                          :                 :                          +- CometProject\n                                          :                    :                          :                 :                             +- CometFilter\n                                          :                    :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          :                 +- BroadcastExchange\n                                          :                    :                          :                    +- CometNativeColumnarToRow\n                                          :                    :                          :                       +- CometProject\n                                          :                    :                          :                          +- CometFilter\n                                          :                    :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    :                          +- BroadcastExchange\n                                          :                    :                             +- Project\n                                          :                    :                                +- BroadcastHashJoin\n            
                              :                    :                                   :- Project\n                                          :                    :                                   :  +- BroadcastHashJoin\n                                          :                    :                                   :     :- Filter\n                                          :                    :                                   :     :  +- ColumnarToRow\n                                          :                    :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                    :                                   :     :           +- ReusedSubquery\n                                          :                    :                                   :     +- BroadcastExchange\n                                          :                    :                                   :        +- CometNativeColumnarToRow\n                                          :                    :                                   :           +- CometFilter\n                                          :                    :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                          :                    :                                   +- BroadcastExchange\n                                          :                    :                                      +- CometNativeColumnarToRow\n                                          :                    :                                         +- CometProject\n                                          :                    :                                            +- CometFilter\n                                          :                    :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                    +- BroadcastExchange\n                                          :                       +- CometNativeColumnarToRow\n                                          :                          +- CometProject\n                                          :                             +- CometFilter\n                                          :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- Filter\n                                             :  +- ReusedSubquery\n                                             +- HashAggregate\n                                                +- CometNativeColumnarToRow\n                                                   +- CometColumnarExchange\n                                                      +- HashAggregate\n                                                         +- Project\n                                                            +- BroadcastHashJoin\n                                                               :- Project\n                                                               :  +- BroadcastHashJoin\n                                                               :     :- BroadcastHashJoin\n                                                               :     :  :- Filter\n                                                               :     :  :  +- ColumnarToRow\n           
                                                    :     :  :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :  :           +- ReusedSubquery\n                                                               :     :  +- BroadcastExchange\n                                                               :     :     +- Project\n                                                               :     :        +- BroadcastHashJoin\n                                                               :     :           :- CometNativeColumnarToRow\n                                                               :     :           :  +- CometFilter\n                                                               :     :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :           +- BroadcastExchange\n                                                               :     :              +- BroadcastHashJoin\n                                                               :     :                 :- CometNativeColumnarToRow\n                                                               :     :                 :  +- CometHashAggregate\n                                                               :     :                 :     +- CometColumnarExchange\n                                                               :     :                 :        +- HashAggregate\n                                                               :     :                 :           +- Project\n                                                               :     :                 :              +- BroadcastHashJoin\n                                                               :     :                 :                 :- Project\n                                                               :     :                 :                 :  +- BroadcastHashJoin\n                                                               :     :                 :                 :     :- Filter\n                                                               :     :                 :                 :     :  +- ColumnarToRow\n                                                               :     :                 :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :                 :                 :     :           +- SubqueryBroadcast\n                                                               :     :                 :                 :     :              +- BroadcastExchange\n                                                               :     :                 :                 :     :                 +- CometNativeColumnarToRow\n                                                               :     :                 :                 :     :                    +- CometProject\n                                                               :     :                 :                 :     :                       +- CometFilter\n                                                               :     :                 :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                          
                                     :     :                 :                 :     +- BroadcastExchange\n                                                               :     :                 :                 :        +- BroadcastHashJoin\n                                                               :     :                 :                 :           :- CometNativeColumnarToRow\n                                                               :     :                 :                 :           :  +- CometFilter\n                                                               :     :                 :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :                 :                 :           +- BroadcastExchange\n                                                               :     :                 :                 :              +- Project\n                                                               :     :                 :                 :                 +- BroadcastHashJoin\n                                                               :     :                 :                 :                    :- Project\n                                                               :     :                 :                 :                    :  +- BroadcastHashJoin\n                                                               :     :                 :                 :                    :     :- Filter\n                                                               :     :                 :                 :                    :     :  +- ColumnarToRow\n                                                               :     :                 :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :                 :                 :                    :     :           +- ReusedSubquery\n                                                               :     :                 :                 :                    :     +- BroadcastExchange\n                                                               :     :                 :                 :                    :        +- CometNativeColumnarToRow\n                                                               :     :                 :                 :                    :           +- CometFilter\n                                                               :     :                 :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :                 :                 :                    +- BroadcastExchange\n                                                               :     :                 :                 :                       +- CometNativeColumnarToRow\n                                                               :     :                 :                 :                          +- CometProject\n                                                               :     :                 :                 :                             +- CometFilter\n                                                               :     :                 :                 :                                +- 
CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :     :                 :                 +- BroadcastExchange\n                                                               :     :                 :                    +- CometNativeColumnarToRow\n                                                               :     :                 :                       +- CometProject\n                                                               :     :                 :                          +- CometFilter\n                                                               :     :                 :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :     :                 +- BroadcastExchange\n                                                               :     :                    +- Project\n                                                               :     :                       +- BroadcastHashJoin\n                                                               :     :                          :- Project\n                                                               :     :                          :  +- BroadcastHashJoin\n                                                               :     :                          :     :- Filter\n                                                               :     :                          :     :  +- ColumnarToRow\n                                                               :     :                          :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :     :                          :     :           +- ReusedSubquery\n                                                               :     :                          :     +- BroadcastExchange\n                                                               :     :                          :        +- CometNativeColumnarToRow\n                                                               :     :                          :           +- CometFilter\n                                                               :     :                          :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :     :                          +- BroadcastExchange\n                                                               :     :                             +- CometNativeColumnarToRow\n                                                               :     :                                +- CometProject\n                                                               :     :                                   +- CometFilter\n                                                               :     :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :     +- BroadcastExchange\n                                                               :        +- BroadcastHashJoin\n                                                               :           :- CometNativeColumnarToRow\n                                                               :           :  +- CometFilter\n                                                           
    :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :           +- BroadcastExchange\n                                                               :              +- Project\n                                                               :                 +- BroadcastHashJoin\n                                                               :                    :- CometNativeColumnarToRow\n                                                               :                    :  +- CometFilter\n                                                               :                    :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                    +- BroadcastExchange\n                                                               :                       +- BroadcastHashJoin\n                                                               :                          :- CometNativeColumnarToRow\n                                                               :                          :  +- CometHashAggregate\n                                                               :                          :     +- CometColumnarExchange\n                                                               :                          :        +- HashAggregate\n                                                               :                          :           +- Project\n                                                               :                          :              +- BroadcastHashJoin\n                                                               :                          :                 :- Project\n                                                               :                          :                 :  +- BroadcastHashJoin\n                                                               :                          :                 :     :- Filter\n                                                               :                          :                 :     :  +- ColumnarToRow\n                                                               :                          :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :                          :                 :     :           +- SubqueryBroadcast\n                                                               :                          :                 :     :              +- BroadcastExchange\n                                                               :                          :                 :     :                 +- CometNativeColumnarToRow\n                                                               :                          :                 :     :                    +- CometProject\n                                                               :                          :                 :     :                       +- CometFilter\n                                                               :                          :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :                          :                 :     +- BroadcastExchange\n                                  
                             :                          :                 :        +- BroadcastHashJoin\n                                                               :                          :                 :           :- CometNativeColumnarToRow\n                                                               :                          :                 :           :  +- CometFilter\n                                                               :                          :                 :           :     +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                          :                 :           +- BroadcastExchange\n                                                               :                          :                 :              +- Project\n                                                               :                          :                 :                 +- BroadcastHashJoin\n                                                               :                          :                 :                    :- Project\n                                                               :                          :                 :                    :  +- BroadcastHashJoin\n                                                               :                          :                 :                    :     :- Filter\n                                                               :                          :                 :                    :     :  +- ColumnarToRow\n                                                               :                          :                 :                    :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :                          :                 :                    :     :           +- ReusedSubquery\n                                                               :                          :                 :                    :     +- BroadcastExchange\n                                                               :                          :                 :                    :        +- CometNativeColumnarToRow\n                                                               :                          :                 :                    :           +- CometFilter\n                                                               :                          :                 :                    :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                          :                 :                    +- BroadcastExchange\n                                                               :                          :                 :                       +- CometNativeColumnarToRow\n                                                               :                          :                 :                          +- CometProject\n                                                               :                          :                 :                             +- CometFilter\n                                                               :                          :                 :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                        
                                       :                          :                 +- BroadcastExchange\n                                                               :                          :                    +- CometNativeColumnarToRow\n                                                               :                          :                       +- CometProject\n                                                               :                          :                          +- CometFilter\n                                                               :                          :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               :                          +- BroadcastExchange\n                                                               :                             +- Project\n                                                               :                                +- BroadcastHashJoin\n                                                               :                                   :- Project\n                                                               :                                   :  +- BroadcastHashJoin\n                                                               :                                   :     :- Filter\n                                                               :                                   :     :  +- ColumnarToRow\n                                                               :                                   :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                               :                                   :     :           +- ReusedSubquery\n                                                               :                                   :     +- BroadcastExchange\n                                                               :                                   :        +- CometNativeColumnarToRow\n                                                               :                                   :           +- CometFilter\n                                                               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                                               :                                   +- BroadcastExchange\n                                                               :                                      +- CometNativeColumnarToRow\n                                                               :                                         +- CometProject\n                                                               :                                            +- CometFilter\n                                                               :                                               +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                               +- BroadcastExchange\n                                                                  +- CometNativeColumnarToRow\n                                                                     +- CometProject\n                                                                        +- CometFilter\n                                                                           +- CometNativeScan 
parquet spark_catalog.default.date_dim\n\nComet accelerated 842 out of 2302 eligible operators (36%). Final plan contains 475 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q14a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometFilter\n               :           :  :  +- Subquery\n               :           :  :     +- CometNativeColumnarToRow\n               :           :  :        +- CometHashAggregate\n               :           :  :           +- CometExchange\n               :           :  :              +- CometHashAggregate\n               :           :  :                 +- CometUnion\n               :           :  :                    :- CometProject\n               :           :  :                    :  +- CometBroadcastHashJoin\n               :           :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :  :                    :     :     +- ReusedSubquery\n               :           :  :                    :     +- CometBroadcastExchange\n               :           :  :                    :        +- CometProject\n               :           :  :                    :           +- CometFilter\n               :           :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  :                    :- CometProject\n               :           :  :                    :  +- CometBroadcastHashJoin\n               :           :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :  :                    :     :     +- SubqueryBroadcast\n               :           :  :                    :     :        +- BroadcastExchange\n               :           :  :                    :     :           +- CometNativeColumnarToRow\n               :           :  :                    :     :              +- CometProject\n               :           :  :                    :     :                 +- CometFilter\n               :           :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  :                    :     +- CometBroadcastExchange\n               :           :  :                    :        +- CometProject\n               :           :  :                    :           +- CometFilter\n               :           :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  :                    +- CometProject\n               :           :  :                       +- CometBroadcastHashJoin\n               :           :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :  :                          :     +- ReusedSubquery\n               :           :  :                          +- CometBroadcastExchange\n               :           :  :                             +- CometProject\n               :           :  :                                +- CometFilter\n               :           :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :  
+- CometHashAggregate\n               :           :     +- CometExchange\n               :           :        +- CometHashAggregate\n               :           :           +- CometProject\n               :           :              +- CometBroadcastHashJoin\n               :           :                 :- CometProject\n               :           :                 :  +- CometBroadcastHashJoin\n               :           :                 :     :- CometBroadcastHashJoin\n               :           :                 :     :  :- CometFilter\n               :           :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :     :  :        +- SubqueryBroadcast\n               :           :                 :     :  :           +- BroadcastExchange\n               :           :                 :     :  :              +- CometNativeColumnarToRow\n               :           :                 :     :  :                 +- CometProject\n               :           :                 :     :  :                    +- CometFilter\n               :           :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :  +- CometBroadcastExchange\n               :           :                 :     :     +- CometProject\n               :           :                 :     :        +- CometBroadcastHashJoin\n               :           :                 :     :           :- CometFilter\n               :           :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :           +- CometBroadcastExchange\n               :           :                 :     :              +- CometBroadcastHashJoin\n               :           :                 :     :                 :- CometHashAggregate\n               :           :                 :     :                 :  +- CometExchange\n               :           :                 :     :                 :     +- CometHashAggregate\n               :           :                 :     :                 :        +- CometProject\n               :           :                 :     :                 :           +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :- CometProject\n               :           :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :     :- CometFilter\n               :           :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :           :                 :     :                 :              :     :           +- BroadcastExchange\n               :           :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :           :                 :     :                 :              :     :                 +- CometProject\n               :           :                 :     :                 :              :     :                    +- CometFilter\n               :  
         :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :           :- CometFilter\n               :           :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :           +- CometBroadcastExchange\n               :           :                 :     :                 :              :              +- CometProject\n               :           :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :- CometProject\n               :           :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :     :- CometFilter\n               :           :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :           :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :                    :        +- CometFilter\n               :           :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :                    +- CometBroadcastExchange\n               :           :                 :     :                 :              :                       +- CometProject\n               :           :                 :     :                 :              :                          +- CometFilter\n               :           :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              +- CometBroadcastExchange\n               :           :                 :     :                 :                 +- CometProject\n               :           :                 :     :                 :                    +- CometFilter\n               :           :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 +- CometBroadcastExchange\n               :           :                 :     :                    +- CometProject\n               :           :                 :     :             
          +- CometBroadcastHashJoin\n               :           :                 :     :                          :- CometProject\n               :           :                 :     :                          :  +- CometBroadcastHashJoin\n               :           :                 :     :                          :     :- CometFilter\n               :           :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :     :                          :     :        +- ReusedSubquery\n               :           :                 :     :                          :     +- CometBroadcastExchange\n               :           :                 :     :                          :        +- CometFilter\n               :           :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                          +- CometBroadcastExchange\n               :           :                 :     :                             +- CometProject\n               :           :                 :     :                                +- CometFilter\n               :           :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     +- CometBroadcastExchange\n               :           :                 :        +- CometBroadcastHashJoin\n               :           :                 :           :- CometFilter\n               :           :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :           +- CometBroadcastExchange\n               :           :                 :              +- CometProject\n               :           :                 :                 +- CometBroadcastHashJoin\n               :           :                 :                    :- CometFilter\n               :           :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                    +- CometBroadcastExchange\n               :           :                 :                       +- CometBroadcastHashJoin\n               :           :                 :                          :- CometHashAggregate\n               :           :                 :                          :  +- CometExchange\n               :           :                 :                          :     +- CometHashAggregate\n               :           :                 :                          :        +- CometProject\n               :           :                 :                          :           +- CometBroadcastHashJoin\n               :           :                 :                          :              :- CometProject\n               :           :                 :                          :              :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :     :- CometFilter\n               :           :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :               
           :              :     :        +- SubqueryBroadcast\n               :           :                 :                          :              :     :           +- BroadcastExchange\n               :           :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :           :                 :                          :              :     :                 +- CometProject\n               :           :                 :                          :              :     :                    +- CometFilter\n               :           :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :              :     +- CometBroadcastExchange\n               :           :                 :                          :              :        +- CometBroadcastHashJoin\n               :           :                 :                          :              :           :- CometFilter\n               :           :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :           +- CometBroadcastExchange\n               :           :                 :                          :              :              +- CometProject\n               :           :                 :                          :              :                 +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :- CometProject\n               :           :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :     :- CometFilter\n               :           :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :                          :              :                    :     :        +- ReusedSubquery\n               :           :                 :                          :              :                    :     +- CometBroadcastExchange\n               :           :                 :                          :              :                    :        +- CometFilter\n               :           :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :                    +- CometBroadcastExchange\n               :           :                 :                          :              :                       +- CometProject\n               :           :                 :                          :              :                          +- CometFilter\n               :           :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :     
         +- CometBroadcastExchange\n               :           :                 :                          :                 +- CometProject\n               :           :                 :                          :                    +- CometFilter\n               :           :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          +- CometBroadcastExchange\n               :           :                 :                             +- CometProject\n               :           :                 :                                +- CometBroadcastHashJoin\n               :           :                 :                                   :- CometProject\n               :           :                 :                                   :  +- CometBroadcastHashJoin\n               :           :                 :                                   :     :- CometFilter\n               :           :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :                                   :     :        +- ReusedSubquery\n               :           :                 :                                   :     +- CometBroadcastExchange\n               :           :                 :                                   :        +- CometFilter\n               :           :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                                   +- CometBroadcastExchange\n               :           :                 :                                      +- CometProject\n               :           :                 :                                         +- CometFilter\n               :           :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 +- CometBroadcastExchange\n               :           :                    +- CometProject\n               :           :                       +- CometFilter\n               :           :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :- CometFilter\n               :           :  :  +- ReusedSubquery\n               :           :  +- CometHashAggregate\n               :           :     +- CometExchange\n               :           :        +- CometHashAggregate\n               :           :           +- CometProject\n               :           :              +- CometBroadcastHashJoin\n               :           :                 :- CometProject\n               :           :                 :  +- CometBroadcastHashJoin\n               :           :                 :     :- CometBroadcastHashJoin\n               :           :                 :     :  :- CometFilter\n               :           :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :     :  :        +- ReusedSubquery\n               :           :                 :     :  +- CometBroadcastExchange\n               :           :                 :     :     +- 
CometProject\n               :           :                 :     :        +- CometBroadcastHashJoin\n               :           :                 :     :           :- CometFilter\n               :           :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :           +- CometBroadcastExchange\n               :           :                 :     :              +- CometBroadcastHashJoin\n               :           :                 :     :                 :- CometHashAggregate\n               :           :                 :     :                 :  +- CometExchange\n               :           :                 :     :                 :     +- CometHashAggregate\n               :           :                 :     :                 :        +- CometProject\n               :           :                 :     :                 :           +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :- CometProject\n               :           :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :     :- CometFilter\n               :           :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :           :                 :     :                 :              :     :           +- BroadcastExchange\n               :           :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :           :                 :     :                 :              :     :                 +- CometProject\n               :           :                 :     :                 :              :     :                    +- CometFilter\n               :           :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :           :- CometFilter\n               :           :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :           +- CometBroadcastExchange\n               :           :                 :     :                 :              :              +- CometProject\n               :           :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :- CometProject\n               :           :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :     :                 :              :                    :  
   :- CometFilter\n               :           :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :           :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :           :                 :     :                 :              :                    :        +- CometFilter\n               :           :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                 :              :                    +- CometBroadcastExchange\n               :           :                 :     :                 :              :                       +- CometProject\n               :           :                 :     :                 :              :                          +- CometFilter\n               :           :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 :              +- CometBroadcastExchange\n               :           :                 :     :                 :                 +- CometProject\n               :           :                 :     :                 :                    +- CometFilter\n               :           :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :     :                 +- CometBroadcastExchange\n               :           :                 :     :                    +- CometProject\n               :           :                 :     :                       +- CometBroadcastHashJoin\n               :           :                 :     :                          :- CometProject\n               :           :                 :     :                          :  +- CometBroadcastHashJoin\n               :           :                 :     :                          :     :- CometFilter\n               :           :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :     :                          :     :        +- ReusedSubquery\n               :           :                 :     :                          :     +- CometBroadcastExchange\n               :           :                 :     :                          :        +- CometFilter\n               :           :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :     :                          +- CometBroadcastExchange\n               :           :                 :     :                             +- CometProject\n               :           :                 :     :                                +- CometFilter\n               :           :                 :     :                                   +- CometScan [native_iceberg_compat] 
parquet spark_catalog.default.date_dim\n               :           :                 :     +- CometBroadcastExchange\n               :           :                 :        +- CometBroadcastHashJoin\n               :           :                 :           :- CometFilter\n               :           :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :           +- CometBroadcastExchange\n               :           :                 :              +- CometProject\n               :           :                 :                 +- CometBroadcastHashJoin\n               :           :                 :                    :- CometFilter\n               :           :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                    +- CometBroadcastExchange\n               :           :                 :                       +- CometBroadcastHashJoin\n               :           :                 :                          :- CometHashAggregate\n               :           :                 :                          :  +- CometExchange\n               :           :                 :                          :     +- CometHashAggregate\n               :           :                 :                          :        +- CometProject\n               :           :                 :                          :           +- CometBroadcastHashJoin\n               :           :                 :                          :              :- CometProject\n               :           :                 :                          :              :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :     :- CometFilter\n               :           :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :                 :                          :              :     :        +- SubqueryBroadcast\n               :           :                 :                          :              :     :           +- BroadcastExchange\n               :           :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :           :                 :                          :              :     :                 +- CometProject\n               :           :                 :                          :              :     :                    +- CometFilter\n               :           :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :              :     +- CometBroadcastExchange\n               :           :                 :                          :              :        +- CometBroadcastHashJoin\n               :           :                 :                          :              :           :- CometFilter\n               :           :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :           +- 
CometBroadcastExchange\n               :           :                 :                          :              :              +- CometProject\n               :           :                 :                          :              :                 +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :- CometProject\n               :           :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :           :                 :                          :              :                    :     :- CometFilter\n               :           :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :                 :                          :              :                    :     :        +- ReusedSubquery\n               :           :                 :                          :              :                    :     +- CometBroadcastExchange\n               :           :                 :                          :              :                    :        +- CometFilter\n               :           :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                          :              :                    +- CometBroadcastExchange\n               :           :                 :                          :              :                       +- CometProject\n               :           :                 :                          :              :                          +- CometFilter\n               :           :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          :              +- CometBroadcastExchange\n               :           :                 :                          :                 +- CometProject\n               :           :                 :                          :                    +- CometFilter\n               :           :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 :                          +- CometBroadcastExchange\n               :           :                 :                             +- CometProject\n               :           :                 :                                +- CometBroadcastHashJoin\n               :           :                 :                                   :- CometProject\n               :           :                 :                                   :  +- CometBroadcastHashJoin\n               :           :                 :                                   :     :- CometFilter\n               :           :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :           :                 :                                   :     :        +- ReusedSubquery\n               :           :                 :                                   :     +- 
CometBroadcastExchange\n               :           :                 :                                   :        +- CometFilter\n               :           :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :                 :                                   +- CometBroadcastExchange\n               :           :                 :                                      +- CometProject\n               :           :                 :                                         +- CometFilter\n               :           :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :                 +- CometBroadcastExchange\n               :           :                    +- CometProject\n               :           :                       +- CometFilter\n               :           :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           +- CometFilter\n               :              :  +- ReusedSubquery\n               :              +- CometHashAggregate\n               :                 +- CometExchange\n               :                    +- CometHashAggregate\n               :                       +- CometProject\n               :                          +- CometBroadcastHashJoin\n               :                             :- CometProject\n               :                             :  +- CometBroadcastHashJoin\n               :                             :     :- CometBroadcastHashJoin\n               :                             :     :  :- CometFilter\n               :                             :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                             :     :  :        +- ReusedSubquery\n               :                             :     :  +- CometBroadcastExchange\n               :                             :     :     +- CometProject\n               :                             :     :        +- CometBroadcastHashJoin\n               :                             :     :           :- CometFilter\n               :                             :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :           +- CometBroadcastExchange\n               :                             :     :              +- CometBroadcastHashJoin\n               :                             :     :                 :- CometHashAggregate\n               :                             :     :                 :  +- CometExchange\n               :                             :     :                 :     +- CometHashAggregate\n               :                             :     :                 :        +- CometProject\n               :                             :     :                 :           +- CometBroadcastHashJoin\n               :                             :     :                 :              :- CometProject\n               :                             :     :                 :              :  +- CometBroadcastHashJoin\n               :                             :     :                 :              :     :- CometFilter\n               :                             :     :                 :              :     :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                             :     :                 :              :     :        +- SubqueryBroadcast\n               :                             :     :                 :              :     :           +- BroadcastExchange\n               :                             :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                             :     :                 :              :     :                 +- CometProject\n               :                             :     :                 :              :     :                    +- CometFilter\n               :                             :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :     :                 :              :     +- CometBroadcastExchange\n               :                             :     :                 :              :        +- CometBroadcastHashJoin\n               :                             :     :                 :              :           :- CometFilter\n               :                             :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :                 :              :           +- CometBroadcastExchange\n               :                             :     :                 :              :              +- CometProject\n               :                             :     :                 :              :                 +- CometBroadcastHashJoin\n               :                             :     :                 :              :                    :- CometProject\n               :                             :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                             :     :                 :              :                    :     :- CometFilter\n               :                             :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                             :     :                 :              :                    :     :        +- ReusedSubquery\n               :                             :     :                 :              :                    :     +- CometBroadcastExchange\n               :                             :     :                 :              :                    :        +- CometFilter\n               :                             :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :                 :              :                    +- CometBroadcastExchange\n               :                             :     :                 :              :                       +- CometProject\n               :                             :     :                 :              :                          +- CometFilter\n               :                             :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :         
                    :     :                 :              +- CometBroadcastExchange\n               :                             :     :                 :                 +- CometProject\n               :                             :     :                 :                    +- CometFilter\n               :                             :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :     :                 +- CometBroadcastExchange\n               :                             :     :                    +- CometProject\n               :                             :     :                       +- CometBroadcastHashJoin\n               :                             :     :                          :- CometProject\n               :                             :     :                          :  +- CometBroadcastHashJoin\n               :                             :     :                          :     :- CometFilter\n               :                             :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                             :     :                          :     :        +- ReusedSubquery\n               :                             :     :                          :     +- CometBroadcastExchange\n               :                             :     :                          :        +- CometFilter\n               :                             :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :     :                          +- CometBroadcastExchange\n               :                             :     :                             +- CometProject\n               :                             :     :                                +- CometFilter\n               :                             :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :     +- CometBroadcastExchange\n               :                             :        +- CometBroadcastHashJoin\n               :                             :           :- CometFilter\n               :                             :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :           +- CometBroadcastExchange\n               :                             :              +- CometProject\n               :                             :                 +- CometBroadcastHashJoin\n               :                             :                    :- CometFilter\n               :                             :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                    +- CometBroadcastExchange\n               :                             :                       +- CometBroadcastHashJoin\n               :                             :                          :- CometHashAggregate\n               :                             :                          :  +- CometExchange\n               :                             :                          :     +- CometHashAggregate\n               :                             :              
            :        +- CometProject\n               :                             :                          :           +- CometBroadcastHashJoin\n               :                             :                          :              :- CometProject\n               :                             :                          :              :  +- CometBroadcastHashJoin\n               :                             :                          :              :     :- CometFilter\n               :                             :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                             :                          :              :     :        +- SubqueryBroadcast\n               :                             :                          :              :     :           +- BroadcastExchange\n               :                             :                          :              :     :              +- CometNativeColumnarToRow\n               :                             :                          :              :     :                 +- CometProject\n               :                             :                          :              :     :                    +- CometFilter\n               :                             :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :                          :              :     +- CometBroadcastExchange\n               :                             :                          :              :        +- CometBroadcastHashJoin\n               :                             :                          :              :           :- CometFilter\n               :                             :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                          :              :           +- CometBroadcastExchange\n               :                             :                          :              :              +- CometProject\n               :                             :                          :              :                 +- CometBroadcastHashJoin\n               :                             :                          :              :                    :- CometProject\n               :                             :                          :              :                    :  +- CometBroadcastHashJoin\n               :                             :                          :              :                    :     :- CometFilter\n               :                             :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                             :                          :              :                    :     :        +- ReusedSubquery\n               :                             :                          :              :                    :     +- CometBroadcastExchange\n               :                             :                          :              :                    :        +- CometFilter\n               :                             :                          :              :                    :           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                          :              :                    +- CometBroadcastExchange\n               :                             :                          :              :                       +- CometProject\n               :                             :                          :              :                          +- CometFilter\n               :                             :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :                          :              +- CometBroadcastExchange\n               :                             :                          :                 +- CometProject\n               :                             :                          :                    +- CometFilter\n               :                             :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             :                          +- CometBroadcastExchange\n               :                             :                             +- CometProject\n               :                             :                                +- CometBroadcastHashJoin\n               :                             :                                   :- CometProject\n               :                             :                                   :  +- CometBroadcastHashJoin\n               :                             :                                   :     :- CometFilter\n               :                             :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                             :                                   :     :        +- ReusedSubquery\n               :                             :                                   :     +- CometBroadcastExchange\n               :                             :                                   :        +- CometFilter\n               :                             :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                             :                                   +- CometBroadcastExchange\n               :                             :                                      +- CometProject\n               :                             :                                         +- CometFilter\n               :                             :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                             +- CometBroadcastExchange\n               :                                +- CometProject\n               :                                   +- CometFilter\n               :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n   
            :                    :- CometFilter\n               :                    :  :  +- Subquery\n               :                    :  :     +- CometNativeColumnarToRow\n               :                    :  :        +- CometHashAggregate\n               :                    :  :           +- CometExchange\n               :                    :  :              +- CometHashAggregate\n               :                    :  :                 +- CometUnion\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :  :                    :     :     +- ReusedSubquery\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :  :                    :     :     +- SubqueryBroadcast\n               :                    :  :                    :     :        +- BroadcastExchange\n               :                    :  :                    :     :           +- CometNativeColumnarToRow\n               :                    :  :                    :     :              +- CometProject\n               :                    :  :                    :     :                 +- CometFilter\n               :                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    +- CometProject\n               :                    :  :                       +- CometBroadcastHashJoin\n               :                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :  :                          :     +- ReusedSubquery\n               :                    :  :                          +- CometBroadcastExchange\n               :                    :  :                             +- CometProject\n               :                    :  :                                +- CometFilter\n               :                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  +- 
CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :  :        +- SubqueryBroadcast\n               :                    :                 :     :  :           +- BroadcastExchange\n               :                    :                 :     :  :              +- CometNativeColumnarToRow\n               :                    :                 :     :  :                 +- CometProject\n               :                    :                 :     :  :                    +- CometFilter\n               :                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :          
    +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n       
        :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     
+- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    
:     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :       
             +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :- CometFilter\n               :                    :  :  +- ReusedSubquery\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :  :        +- ReusedSubquery\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- 
CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :        
         :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :        
            :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :   
     +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :           
               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometFilter\n               :                       :  +- ReusedSubquery\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastHashJoin\n               :                                      :     :  :- CometFilter\n               :                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :  :        +- ReusedSubquery\n               :                                      :     :  +- CometBroadcastExchange\n               :                                      :     :     +- CometProject\n               :                                      :     :        +- CometBroadcastHashJoin\n               :                                      :     :           :- CometFilter\n               :                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :           +- CometBroadcastExchange\n               :                                      :     :              +- CometBroadcastHashJoin\n               :                                      :     :                 :- CometHashAggregate\n               :                                      :     :                 :  +- CometExchange\n               :                                      :     :                 :     +- CometHashAggregate\n               :                                      :     :                 :        +- CometProject\n               :                                      :     :                 :           +- CometBroadcastHashJoin\n               :                                      :     :                 :              :- CometProject\n               :                                      :     :                 :              :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :     :- CometFilter\n               :                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :     :                 :              :     :        +- SubqueryBroadcast\n               :                                      :     :                 :              :     :           +- BroadcastExchange\n               :                                      :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                                      :     :                 :              :     :                 +- CometProject\n               :                                      :     :                 :              :     :                    +- CometFilter\n               :     
                                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              :     +- CometBroadcastExchange\n               :                                      :     :                 :              :        +- CometBroadcastHashJoin\n               :                                      :     :                 :              :           :- CometFilter\n               :                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :           +- CometBroadcastExchange\n               :                                      :     :                 :              :              +- CometProject\n               :                                      :     :                 :              :                 +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :- CometProject\n               :                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :     :- CometFilter\n               :                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :                 :              :                    :     :        +- ReusedSubquery\n               :                                      :     :                 :              :                    :     +- CometBroadcastExchange\n               :                                      :     :                 :              :                    :        +- CometFilter\n               :                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :                    +- CometBroadcastExchange\n               :                                      :     :                 :              :                       +- CometProject\n               :                                      :     :                 :              :                          +- CometFilter\n               :                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              +- CometBroadcastExchange\n               :                                      :     :                 :                 +- CometProject\n               :                                      :     :                 :                    +- CometFilter\n               :                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                     
                 :     :                 +- CometBroadcastExchange\n               :                                      :     :                    +- CometProject\n               :                                      :     :                       +- CometBroadcastHashJoin\n               :                                      :     :                          :- CometProject\n               :                                      :     :                          :  +- CometBroadcastHashJoin\n               :                                      :     :                          :     :- CometFilter\n               :                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :                          :     :        +- ReusedSubquery\n               :                                      :     :                          :     +- CometBroadcastExchange\n               :                                      :     :                          :        +- CometFilter\n               :                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                          +- CometBroadcastExchange\n               :                                      :     :                             +- CometProject\n               :                                      :     :                                +- CometFilter\n               :                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometBroadcastExchange\n               :                                      :        +- CometBroadcastHashJoin\n               :                                      :           :- CometFilter\n               :                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :           +- CometBroadcastExchange\n               :                                      :              +- CometProject\n               :                                      :                 +- CometBroadcastHashJoin\n               :                                      :                    :- CometFilter\n               :                                      :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                    +- CometBroadcastExchange\n               :                                      :                       +- CometBroadcastHashJoin\n               :                                      :                          :- CometHashAggregate\n               :                                      :                          :  +- CometExchange\n               :                                      :                          :     +- CometHashAggregate\n               :                                      :                          :        +- CometProject\n               :                                      :                          :           +- CometBroadcastHashJoin\n               :                                      :                    
      :              :- CometProject\n               :                                      :                          :              :  +- CometBroadcastHashJoin\n               :                                      :                          :              :     :- CometFilter\n               :                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :                          :              :     :        +- SubqueryBroadcast\n               :                                      :                          :              :     :           +- BroadcastExchange\n               :                                      :                          :              :     :              +- CometNativeColumnarToRow\n               :                                      :                          :              :     :                 +- CometProject\n               :                                      :                          :              :     :                    +- CometFilter\n               :                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              :     +- CometBroadcastExchange\n               :                                      :                          :              :        +- CometBroadcastHashJoin\n               :                                      :                          :              :           :- CometFilter\n               :                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :           +- CometBroadcastExchange\n               :                                      :                          :              :              +- CometProject\n               :                                      :                          :              :                 +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :- CometProject\n               :                                      :                          :              :                    :  +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :     :- CometFilter\n               :                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :                          :              :                    :     :        +- ReusedSubquery\n               :                                      :                          :              :                    :     +- CometBroadcastExchange\n               :                                      :                          :              :                    :        +- CometFilter\n               :                                      :                          :              :                    :           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :                    +- CometBroadcastExchange\n               :                                      :                          :              :                       +- CometProject\n               :                                      :                          :              :                          +- CometFilter\n               :                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              +- CometBroadcastExchange\n               :                                      :                          :                 +- CometProject\n               :                                      :                          :                    +- CometFilter\n               :                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          +- CometBroadcastExchange\n               :                                      :                             +- CometProject\n               :                                      :                                +- CometBroadcastHashJoin\n               :                                      :                                   :- CometProject\n               :                                      :                                   :  +- CometBroadcastHashJoin\n               :                                      :                                   :     :- CometFilter\n               :                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :                                   :     :        +- ReusedSubquery\n               :                                      :                                   :     +- CometBroadcastExchange\n               :                                      :                                   :        +- CometFilter\n               :                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                                   +- CometBroadcastExchange\n               :                                      :                                      +- CometProject\n               :                                      :                                         +- CometFilter\n               :                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometHashAggregate\n               :  +- 
CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometFilter\n               :                    :  :  +- Subquery\n               :                    :  :     +- CometNativeColumnarToRow\n               :                    :  :        +- CometHashAggregate\n               :                    :  :           +- CometExchange\n               :                    :  :              +- CometHashAggregate\n               :                    :  :                 +- CometUnion\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :  :                    :     :     +- ReusedSubquery\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :  :                    :     :     +- SubqueryBroadcast\n               :                    :  :                    :     :        +- BroadcastExchange\n               :                    :  :                    :     :           +- CometNativeColumnarToRow\n               :                    :  :                    :     :              +- CometProject\n               :                    :  :                    :     :                 +- CometFilter\n               :                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    +- CometProject\n               :                    :  :                       +- CometBroadcastHashJoin\n               :                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :  :                          :     +- ReusedSubquery\n               :                    :  :                          +- CometBroadcastExchange\n               :                    :  :                             +- CometProject\n               :                   
 :  :                                +- CometFilter\n               :                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :  :        +- SubqueryBroadcast\n               :                    :                 :     :  :           +- BroadcastExchange\n               :                    :                 :     :  :              +- CometNativeColumnarToRow\n               :                    :                 :     :  :                 +- CometProject\n               :                    :                 :     :  :                    +- CometFilter\n               :                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- 
SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :      
              :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- 
CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :- CometFilter\n               :                    :  :  +- ReusedSubquery\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :  :        +- ReusedSubquery\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              : 
    :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :             
 +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :     
            :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :   
  :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometFilter\n               :                       :  +- ReusedSubquery\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastHashJoin\n               :                                      :     :  :- CometFilter\n               :                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :  :        +- ReusedSubquery\n               :                                      :     :  +- CometBroadcastExchange\n               :                                      :     :     +- CometProject\n               :                                      :     :        +- CometBroadcastHashJoin\n               :                                      :     :           :- CometFilter\n               :                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :           +- CometBroadcastExchange\n               :                                      :     :              +- CometBroadcastHashJoin\n               :                                      :     :                 :- CometHashAggregate\n               :                                      :     :                 :  +- CometExchange\n               :                                      :     :                 :     +- CometHashAggregate\n               :                                      :     :                 :        +- CometProject\n               :                                      :     :                 :           +- CometBroadcastHashJoin\n               :                                      :     :                 :              :- CometProject\n               :                                      :     :                 :              :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :     :- CometFilter\n               :                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :     :                 :              :     :        +- SubqueryBroadcast\n               :                                      :     :                 :              :     :           +- BroadcastExchange\n               :                                      :     :                 :              :     :              +- 
CometNativeColumnarToRow\n               :                                      :     :                 :              :     :                 +- CometProject\n               :                                      :     :                 :              :     :                    +- CometFilter\n               :                                      :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              :     +- CometBroadcastExchange\n               :                                      :     :                 :              :        +- CometBroadcastHashJoin\n               :                                      :     :                 :              :           :- CometFilter\n               :                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :           +- CometBroadcastExchange\n               :                                      :     :                 :              :              +- CometProject\n               :                                      :     :                 :              :                 +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :- CometProject\n               :                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :     :- CometFilter\n               :                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :                 :              :                    :     :        +- ReusedSubquery\n               :                                      :     :                 :              :                    :     +- CometBroadcastExchange\n               :                                      :     :                 :              :                    :        +- CometFilter\n               :                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :                    +- CometBroadcastExchange\n               :                                      :     :                 :              :                       +- CometProject\n               :                                      :     :                 :              :                          +- CometFilter\n               :                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              +- CometBroadcastExchange\n               :                                      :     :                 :                 +- CometProject\n              
 :                                      :     :                 :                    +- CometFilter\n               :                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 +- CometBroadcastExchange\n               :                                      :     :                    +- CometProject\n               :                                      :     :                       +- CometBroadcastHashJoin\n               :                                      :     :                          :- CometProject\n               :                                      :     :                          :  +- CometBroadcastHashJoin\n               :                                      :     :                          :     :- CometFilter\n               :                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :                          :     :        +- ReusedSubquery\n               :                                      :     :                          :     +- CometBroadcastExchange\n               :                                      :     :                          :        +- CometFilter\n               :                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                          +- CometBroadcastExchange\n               :                                      :     :                             +- CometProject\n               :                                      :     :                                +- CometFilter\n               :                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometBroadcastExchange\n               :                                      :        +- CometBroadcastHashJoin\n               :                                      :           :- CometFilter\n               :                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :           +- CometBroadcastExchange\n               :                                      :              +- CometProject\n               :                                      :                 +- CometBroadcastHashJoin\n               :                                      :                    :- CometFilter\n               :                                      :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                    +- CometBroadcastExchange\n               :                                      :                       +- CometBroadcastHashJoin\n               :                                      :                          :- CometHashAggregate\n               :                                      :                          :  +- CometExchange\n               :                                      :                          :     +- 
CometHashAggregate\n               :                                      :                          :        +- CometProject\n               :                                      :                          :           +- CometBroadcastHashJoin\n               :                                      :                          :              :- CometProject\n               :                                      :                          :              :  +- CometBroadcastHashJoin\n               :                                      :                          :              :     :- CometFilter\n               :                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :                          :              :     :        +- SubqueryBroadcast\n               :                                      :                          :              :     :           +- BroadcastExchange\n               :                                      :                          :              :     :              +- CometNativeColumnarToRow\n               :                                      :                          :              :     :                 +- CometProject\n               :                                      :                          :              :     :                    +- CometFilter\n               :                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              :     +- CometBroadcastExchange\n               :                                      :                          :              :        +- CometBroadcastHashJoin\n               :                                      :                          :              :           :- CometFilter\n               :                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :           +- CometBroadcastExchange\n               :                                      :                          :              :              +- CometProject\n               :                                      :                          :              :                 +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :- CometProject\n               :                                      :                          :              :                    :  +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :     :- CometFilter\n               :                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :                          :              :                    :     :        +- ReusedSubquery\n               :                                      :                          :              :                    :  
   +- CometBroadcastExchange\n               :                                      :                          :              :                    :        +- CometFilter\n               :                                      :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :                    +- CometBroadcastExchange\n               :                                      :                          :              :                       +- CometProject\n               :                                      :                          :              :                          +- CometFilter\n               :                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              +- CometBroadcastExchange\n               :                                      :                          :                 +- CometProject\n               :                                      :                          :                    +- CometFilter\n               :                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          +- CometBroadcastExchange\n               :                                      :                             +- CometProject\n               :                                      :                                +- CometBroadcastHashJoin\n               :                                      :                                   :- CometProject\n               :                                      :                                   :  +- CometBroadcastHashJoin\n               :                                      :                                   :     :- CometFilter\n               :                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :                                   :     :        +- ReusedSubquery\n               :                                      :                                   :     +- CometBroadcastExchange\n               :                                      :                                   :        +- CometFilter\n               :                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                                   +- CometBroadcastExchange\n               :                                      :                                      +- CometProject\n               :                                      :                                         +- CometFilter\n               :                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      +- CometBroadcastExchange\n               :                               
          +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometFilter\n               :                    :  :  +- Subquery\n               :                    :  :     +- CometNativeColumnarToRow\n               :                    :  :        +- CometHashAggregate\n               :                    :  :           +- CometExchange\n               :                    :  :              +- CometHashAggregate\n               :                    :  :                 +- CometUnion\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :  :                    :     :     +- ReusedSubquery\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :- CometProject\n               :                    :  :                    :  +- CometBroadcastHashJoin\n               :                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :  :                    :     :     +- SubqueryBroadcast\n               :                    :  :                    :     :        +- BroadcastExchange\n               :                    :  :                    :     :           +- CometNativeColumnarToRow\n               :                    :  :                    :     :              +- CometProject\n               :                    :  :                    :     :                 +- CometFilter\n               :                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    :     +- CometBroadcastExchange\n               :                    :  :                    :        +- CometProject\n               :                    :  :                    :           +- CometFilter\n               :                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  :                    +- CometProject\n               :                    :  :                       +- CometBroadcastHashJoin\n               :                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n   
            :                    :  :                          :     +- ReusedSubquery\n               :                    :  :                          +- CometBroadcastExchange\n               :                    :  :                             +- CometProject\n               :                    :  :                                +- CometFilter\n               :                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :  :        +- SubqueryBroadcast\n               :                    :                 :     :  :           +- BroadcastExchange\n               :                    :                 :     :  :              +- CometNativeColumnarToRow\n               :                    :                 :     :  :                 +- CometProject\n               :                    :                 :     :  :                    +- CometFilter\n               :                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n      
         :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :           
         :                 :     :                 :              :                          +- CometFilter\n               :                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 
:                          :              :                    :     :- CometFilter\n               :                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                        
           +- CometBroadcastExchange\n               :                    :                 :                                      +- CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :- CometFilter\n               :                    :  :  +- ReusedSubquery\n               :                    :  +- CometHashAggregate\n               :                    :     +- CometExchange\n               :                    :        +- CometHashAggregate\n               :                    :           +- CometProject\n               :                    :              +- CometBroadcastHashJoin\n               :                    :                 :- CometProject\n               :                    :                 :  +- CometBroadcastHashJoin\n               :                    :                 :     :- CometBroadcastHashJoin\n               :                    :                 :     :  :- CometFilter\n               :                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :  :        +- ReusedSubquery\n               :                    :                 :     :  +- CometBroadcastExchange\n               :                    :                 :     :     +- CometProject\n               :                    :                 :     :        +- CometBroadcastHashJoin\n               :                    :                 :     :           :- CometFilter\n               :                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :           +- CometBroadcastExchange\n               :                    :                 :     :              +- CometBroadcastHashJoin\n               :                    :                 :     :                 :- CometHashAggregate\n               :                    :                 :     :                 :  +- CometExchange\n               :                    :                 :     :                 :     +- CometHashAggregate\n               :                    :                 :     :                 :        +- CometProject\n               :                    :                 :     :                 :           +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :- CometProject\n               :                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :     :- CometFilter\n               :                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.store_sales\n               :                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n               :                    :                 :     :                 :              :     :           +- BroadcastExchange\n               :                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :     :                 :              :     :                 +- CometProject\n               :                    :                 :     :                 :              :     :                    +- CometFilter\n               :                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :           :- CometFilter\n               :                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :           +- CometBroadcastExchange\n               :                    :                 :     :                 :              :              +- CometProject\n               :                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :- CometProject\n               :                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :     :                 :              :                    :     :- CometFilter\n               :                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n               :                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                    :        +- CometFilter\n               :                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                 :              :                    +- CometBroadcastExchange\n               :                    :                 :     :                 :              :                       +- CometProject\n               :                    :                 :     :                 :              :                          +- CometFilter\n               :                
    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 :              +- CometBroadcastExchange\n               :                    :                 :     :                 :                 +- CometProject\n               :                    :                 :     :                 :                    +- CometFilter\n               :                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     :                 +- CometBroadcastExchange\n               :                    :                 :     :                    +- CometProject\n               :                    :                 :     :                       +- CometBroadcastHashJoin\n               :                    :                 :     :                          :- CometProject\n               :                    :                 :     :                          :  +- CometBroadcastHashJoin\n               :                    :                 :     :                          :     :- CometFilter\n               :                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :     :                          :     :        +- ReusedSubquery\n               :                    :                 :     :                          :     +- CometBroadcastExchange\n               :                    :                 :     :                          :        +- CometFilter\n               :                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :     :                          +- CometBroadcastExchange\n               :                    :                 :     :                             +- CometProject\n               :                    :                 :     :                                +- CometFilter\n               :                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :     +- CometBroadcastExchange\n               :                    :                 :        +- CometBroadcastHashJoin\n               :                    :                 :           :- CometFilter\n               :                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :           +- CometBroadcastExchange\n               :                    :                 :              +- CometProject\n               :                    :                 :                 +- CometBroadcastHashJoin\n               :                    :                 :                    :- CometFilter\n               :                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                    +- 
CometBroadcastExchange\n               :                    :                 :                       +- CometBroadcastHashJoin\n               :                    :                 :                          :- CometHashAggregate\n               :                    :                 :                          :  +- CometExchange\n               :                    :                 :                          :     +- CometHashAggregate\n               :                    :                 :                          :        +- CometProject\n               :                    :                 :                          :           +- CometBroadcastHashJoin\n               :                    :                 :                          :              :- CometProject\n               :                    :                 :                          :              :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :     :- CometFilter\n               :                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :                 :                          :              :     :        +- SubqueryBroadcast\n               :                    :                 :                          :              :     :           +- BroadcastExchange\n               :                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n               :                    :                 :                          :              :     :                 +- CometProject\n               :                    :                 :                          :              :     :                    +- CometFilter\n               :                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              :     +- CometBroadcastExchange\n               :                    :                 :                          :              :        +- CometBroadcastHashJoin\n               :                    :                 :                          :              :           :- CometFilter\n               :                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :           +- CometBroadcastExchange\n               :                    :                 :                          :              :              +- CometProject\n               :                    :                 :                          :              :                 +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :- CometProject\n               :                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n               :                    :                 :                          :              :                    :     :- CometFilter\n               :                    :                 
:                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :                 :                          :              :                    :     :        +- ReusedSubquery\n               :                    :                 :                          :              :                    :     +- CometBroadcastExchange\n               :                    :                 :                          :              :                    :        +- CometFilter\n               :                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                          :              :                    +- CometBroadcastExchange\n               :                    :                 :                          :              :                       +- CometProject\n               :                    :                 :                          :              :                          +- CometFilter\n               :                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          :              +- CometBroadcastExchange\n               :                    :                 :                          :                 +- CometProject\n               :                    :                 :                          :                    +- CometFilter\n               :                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 :                          +- CometBroadcastExchange\n               :                    :                 :                             +- CometProject\n               :                    :                 :                                +- CometBroadcastHashJoin\n               :                    :                 :                                   :- CometProject\n               :                    :                 :                                   :  +- CometBroadcastHashJoin\n               :                    :                 :                                   :     :- CometFilter\n               :                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                 :                                   :     :        +- ReusedSubquery\n               :                    :                 :                                   :     +- CometBroadcastExchange\n               :                    :                 :                                   :        +- CometFilter\n               :                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :                 :                                   +- CometBroadcastExchange\n               :                    :                 :                                      +- 
CometProject\n               :                    :                 :                                         +- CometFilter\n               :                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                 +- CometBroadcastExchange\n               :                    :                    +- CometProject\n               :                    :                       +- CometFilter\n               :                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometFilter\n               :                       :  +- ReusedSubquery\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastHashJoin\n               :                                      :     :  :- CometFilter\n               :                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :  :        +- ReusedSubquery\n               :                                      :     :  +- CometBroadcastExchange\n               :                                      :     :     +- CometProject\n               :                                      :     :        +- CometBroadcastHashJoin\n               :                                      :     :           :- CometFilter\n               :                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :           +- CometBroadcastExchange\n               :                                      :     :              +- CometBroadcastHashJoin\n               :                                      :     :                 :- CometHashAggregate\n               :                                      :     :                 :  +- CometExchange\n               :                                      :     :                 :     +- CometHashAggregate\n               :                                      :     :                 :        +- CometProject\n               :                                      :     :                 :           +- CometBroadcastHashJoin\n               :                                      :     :                 :              :- CometProject\n               :                                      :     :                 :              :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :     :- CometFilter\n               :                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :     :                 :              :     :        +- 
SubqueryBroadcast\n               :                                      :     :                 :              :     :           +- BroadcastExchange\n               :                                      :     :                 :              :     :              +- CometNativeColumnarToRow\n               :                                      :     :                 :              :     :                 +- CometProject\n               :                                      :     :                 :              :     :                    +- CometFilter\n               :                                      :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 :              :     +- CometBroadcastExchange\n               :                                      :     :                 :              :        +- CometBroadcastHashJoin\n               :                                      :     :                 :              :           :- CometFilter\n               :                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :           +- CometBroadcastExchange\n               :                                      :     :                 :              :              +- CometProject\n               :                                      :     :                 :              :                 +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :- CometProject\n               :                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n               :                                      :     :                 :              :                    :     :- CometFilter\n               :                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :                 :              :                    :     :        +- ReusedSubquery\n               :                                      :     :                 :              :                    :     +- CometBroadcastExchange\n               :                                      :     :                 :              :                    :        +- CometFilter\n               :                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                 :              :                    +- CometBroadcastExchange\n               :                                      :     :                 :              :                       +- CometProject\n               :                                      :     :                 :              :                          +- CometFilter\n               :                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n               :                                      :     :                 :              +- CometBroadcastExchange\n               :                                      :     :                 :                 +- CometProject\n               :                                      :     :                 :                    +- CometFilter\n               :                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     :                 +- CometBroadcastExchange\n               :                                      :     :                    +- CometProject\n               :                                      :     :                       +- CometBroadcastHashJoin\n               :                                      :     :                          :- CometProject\n               :                                      :     :                          :  +- CometBroadcastHashJoin\n               :                                      :     :                          :     :- CometFilter\n               :                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :                          :     :        +- ReusedSubquery\n               :                                      :     :                          :     +- CometBroadcastExchange\n               :                                      :     :                          :        +- CometFilter\n               :                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :     :                          +- CometBroadcastExchange\n               :                                      :     :                             +- CometProject\n               :                                      :     :                                +- CometFilter\n               :                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometBroadcastExchange\n               :                                      :        +- CometBroadcastHashJoin\n               :                                      :           :- CometFilter\n               :                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :           +- CometBroadcastExchange\n               :                                      :              +- CometProject\n               :                                      :                 +- CometBroadcastHashJoin\n               :                                      :                    :- CometFilter\n               :                                      :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                    +- CometBroadcastExchange\n               :                                      :                       +- CometBroadcastHashJoin\n               :      
                                :                          :- CometHashAggregate\n               :                                      :                          :  +- CometExchange\n               :                                      :                          :     +- CometHashAggregate\n               :                                      :                          :        +- CometProject\n               :                                      :                          :           +- CometBroadcastHashJoin\n               :                                      :                          :              :- CometProject\n               :                                      :                          :              :  +- CometBroadcastHashJoin\n               :                                      :                          :              :     :- CometFilter\n               :                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                      :                          :              :     :        +- SubqueryBroadcast\n               :                                      :                          :              :     :           +- BroadcastExchange\n               :                                      :                          :              :     :              +- CometNativeColumnarToRow\n               :                                      :                          :              :     :                 +- CometProject\n               :                                      :                          :              :     :                    +- CometFilter\n               :                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              :     +- CometBroadcastExchange\n               :                                      :                          :              :        +- CometBroadcastHashJoin\n               :                                      :                          :              :           :- CometFilter\n               :                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :           +- CometBroadcastExchange\n               :                                      :                          :              :              +- CometProject\n               :                                      :                          :              :                 +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :- CometProject\n               :                                      :                          :              :                    :  +- CometBroadcastHashJoin\n               :                                      :                          :              :                    :     :- CometFilter\n               :                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.catalog_sales\n               :                                      :                          :              :                    :     :        +- ReusedSubquery\n               :                                      :                          :              :                    :     +- CometBroadcastExchange\n               :                                      :                          :              :                    :        +- CometFilter\n               :                                      :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                          :              :                    +- CometBroadcastExchange\n               :                                      :                          :              :                       +- CometProject\n               :                                      :                          :              :                          +- CometFilter\n               :                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          :              +- CometBroadcastExchange\n               :                                      :                          :                 +- CometProject\n               :                                      :                          :                    +- CometFilter\n               :                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :                          +- CometBroadcastExchange\n               :                                      :                             +- CometProject\n               :                                      :                                +- CometBroadcastHashJoin\n               :                                      :                                   :- CometProject\n               :                                      :                                   :  +- CometBroadcastHashJoin\n               :                                      :                                   :     :- CometFilter\n               :                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :                                   :     :        +- ReusedSubquery\n               :                                      :                                   :     +- CometBroadcastExchange\n               :                                      :                                   :        +- CometFilter\n               :                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                      :                                   +- CometBroadcastExchange\n               :                                      :                                      +- CometProject\n               :                                      :                                         +- 
CometFilter\n               :                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometFilter\n                                    :  :  +- Subquery\n                                    :  :     +- CometNativeColumnarToRow\n                                    :  :        +- CometHashAggregate\n                                    :  :           +- CometExchange\n                                    :  :              +- CometHashAggregate\n                                    :  :                 +- CometUnion\n                                    :  :                    :- CometProject\n                                    :  :                    :  +- CometBroadcastHashJoin\n                                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :  :                    :     :     +- ReusedSubquery\n                                    :  :                    :     +- CometBroadcastExchange\n                                    :  :                    :        +- CometProject\n                                    :  :                    :           +- CometFilter\n                                    :  :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :  :                    :- CometProject\n                                    :  :                    :  +- CometBroadcastHashJoin\n                                    :  :                    :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :  :                    :     :     +- SubqueryBroadcast\n                                    :  :                    :     :        +- BroadcastExchange\n                                    :  :                    :     :           +- CometNativeColumnarToRow\n                                    :  :                    :     :              +- CometProject\n                                    :  :                    :     :                 +- CometFilter\n                                    :  :                    :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :  :                    :     +- CometBroadcastExchange\n                                    :  :                    :        +- CometProject\n                                    :  :                    :           +- CometFilter\n                                    :  :                    :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n                                    :  :                    +- CometProject\n                                    :  :                       +- CometBroadcastHashJoin\n                                    :  :                          :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :  :                          :     +- ReusedSubquery\n                                    :  :                          +- CometBroadcastExchange\n                                    :  :                             +- CometProject\n                                    :  :                                +- CometFilter\n                                    :  :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :  +- CometHashAggregate\n                                    :     +- CometExchange\n                                    :        +- CometHashAggregate\n                                    :           +- CometProject\n                                    :              +- CometBroadcastHashJoin\n                                    :                 :- CometProject\n                                    :                 :  +- CometBroadcastHashJoin\n                                    :                 :     :- CometBroadcastHashJoin\n                                    :                 :     :  :- CometFilter\n                                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :     :  :        +- SubqueryBroadcast\n                                    :                 :     :  :           +- BroadcastExchange\n                                    :                 :     :  :              +- CometNativeColumnarToRow\n                                    :                 :     :  :                 +- CometProject\n                                    :                 :     :  :                    +- CometFilter\n                                    :                 :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :  +- CometBroadcastExchange\n                                    :                 :     :     +- CometProject\n                                    :                 :     :        +- CometBroadcastHashJoin\n                                    :                 :     :           :- CometFilter\n                                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :           +- CometBroadcastExchange\n                                    :                 :     :              +- CometBroadcastHashJoin\n                                    :                 :     :                 :- CometHashAggregate\n                                    :                 :     :                 :  +- CometExchange\n                                    :                 :     :                 :     +- CometHashAggregate\n                                    :                 :     :                 :        +- CometProject\n                                    :                 :     :                 :           +- CometBroadcastHashJoin\n        
                            :                 :     :                 :              :- CometProject\n                                    :                 :     :                 :              :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :     :- CometFilter\n                                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n                                    :                 :     :                 :              :     :           +- BroadcastExchange\n                                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n                                    :                 :     :                 :              :     :                 +- CometProject\n                                    :                 :     :                 :              :     :                    +- CometFilter\n                                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :           :- CometFilter\n                                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :           +- CometBroadcastExchange\n                                    :                 :     :                 :              :              +- CometProject\n                                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :- CometProject\n                                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :     :- CometFilter\n                                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n                                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :                    :        +- CometFilter\n                                    :                 :     :                 :              :                    :           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :                    +- CometBroadcastExchange\n                                    :                 :     :                 :              :                       +- CometProject\n                                    :                 :     :                 :              :                          +- CometFilter\n                                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              +- CometBroadcastExchange\n                                    :                 :     :                 :                 +- CometProject\n                                    :                 :     :                 :                    +- CometFilter\n                                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 +- CometBroadcastExchange\n                                    :                 :     :                    +- CometProject\n                                    :                 :     :                       +- CometBroadcastHashJoin\n                                    :                 :     :                          :- CometProject\n                                    :                 :     :                          :  +- CometBroadcastHashJoin\n                                    :                 :     :                          :     :- CometFilter\n                                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :     :                          :     :        +- ReusedSubquery\n                                    :                 :     :                          :     +- CometBroadcastExchange\n                                    :                 :     :                          :        +- CometFilter\n                                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                          +- CometBroadcastExchange\n                                    :                 :     :                             +- CometProject\n                                    :                 :     :                                +- CometFilter\n                                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     +- CometBroadcastExchange\n                                    :                 :        +- CometBroadcastHashJoin\n                                    :                 :           :- CometFilter\n                                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :           +- CometBroadcastExchange\n                 
                   :                 :              +- CometProject\n                                    :                 :                 +- CometBroadcastHashJoin\n                                    :                 :                    :- CometFilter\n                                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                    +- CometBroadcastExchange\n                                    :                 :                       +- CometBroadcastHashJoin\n                                    :                 :                          :- CometHashAggregate\n                                    :                 :                          :  +- CometExchange\n                                    :                 :                          :     +- CometHashAggregate\n                                    :                 :                          :        +- CometProject\n                                    :                 :                          :           +- CometBroadcastHashJoin\n                                    :                 :                          :              :- CometProject\n                                    :                 :                          :              :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :     :- CometFilter\n                                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :                          :              :     :        +- SubqueryBroadcast\n                                    :                 :                          :              :     :           +- BroadcastExchange\n                                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n                                    :                 :                          :              :     :                 +- CometProject\n                                    :                 :                          :              :     :                    +- CometFilter\n                                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              :     +- CometBroadcastExchange\n                                    :                 :                          :              :        +- CometBroadcastHashJoin\n                                    :                 :                          :              :           :- CometFilter\n                                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :           +- CometBroadcastExchange\n                                    :                 :                          :              :              +- CometProject\n                                    :                 :                          :              :                 +- 
CometBroadcastHashJoin\n                                    :                 :                          :              :                    :- CometProject\n                                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :                    :     :- CometFilter\n                                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :                          :              :                    :     :        +- ReusedSubquery\n                                    :                 :                          :              :                    :     +- CometBroadcastExchange\n                                    :                 :                          :              :                    :        +- CometFilter\n                                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :                    +- CometBroadcastExchange\n                                    :                 :                          :              :                       +- CometProject\n                                    :                 :                          :              :                          +- CometFilter\n                                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              +- CometBroadcastExchange\n                                    :                 :                          :                 +- CometProject\n                                    :                 :                          :                    +- CometFilter\n                                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          +- CometBroadcastExchange\n                                    :                 :                             +- CometProject\n                                    :                 :                                +- CometBroadcastHashJoin\n                                    :                 :                                   :- CometProject\n                                    :                 :                                   :  +- CometBroadcastHashJoin\n                                    :                 :                                   :     :- CometFilter\n                                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :                                   :     :        +- ReusedSubquery\n                                    :                 :                                   :     +- CometBroadcastExchange\n          
                          :                 :                                   :        +- CometFilter\n                                    :                 :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                                   +- CometBroadcastExchange\n                                    :                 :                                      +- CometProject\n                                    :                 :                                         +- CometFilter\n                                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 +- CometBroadcastExchange\n                                    :                    +- CometProject\n                                    :                       +- CometFilter\n                                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :- CometFilter\n                                    :  :  +- ReusedSubquery\n                                    :  +- CometHashAggregate\n                                    :     +- CometExchange\n                                    :        +- CometHashAggregate\n                                    :           +- CometProject\n                                    :              +- CometBroadcastHashJoin\n                                    :                 :- CometProject\n                                    :                 :  +- CometBroadcastHashJoin\n                                    :                 :     :- CometBroadcastHashJoin\n                                    :                 :     :  :- CometFilter\n                                    :                 :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :     :  :        +- ReusedSubquery\n                                    :                 :     :  +- CometBroadcastExchange\n                                    :                 :     :     +- CometProject\n                                    :                 :     :        +- CometBroadcastHashJoin\n                                    :                 :     :           :- CometFilter\n                                    :                 :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :           +- CometBroadcastExchange\n                                    :                 :     :              +- CometBroadcastHashJoin\n                                    :                 :     :                 :- CometHashAggregate\n                                    :                 :     :                 :  +- CometExchange\n                                    :                 :     :                 :     +- CometHashAggregate\n                                    :                 :     :                 :        +- CometProject\n                                    :                 :     :                 :           +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :- CometProject\n                                    :  
               :     :                 :              :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :     :- CometFilter\n                                    :                 :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :     :                 :              :     :        +- SubqueryBroadcast\n                                    :                 :     :                 :              :     :           +- BroadcastExchange\n                                    :                 :     :                 :              :     :              +- CometNativeColumnarToRow\n                                    :                 :     :                 :              :     :                 +- CometProject\n                                    :                 :     :                 :              :     :                    +- CometFilter\n                                    :                 :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :        +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :           :- CometFilter\n                                    :                 :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :           +- CometBroadcastExchange\n                                    :                 :     :                 :              :              +- CometProject\n                                    :                 :     :                 :              :                 +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :- CometProject\n                                    :                 :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :     :                 :              :                    :     :- CometFilter\n                                    :                 :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :     :                 :              :                    :     :        +- ReusedSubquery\n                                    :                 :     :                 :              :                    :     +- CometBroadcastExchange\n                                    :                 :     :                 :              :                    :        +- CometFilter\n                                    :                 :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                 :              :   
                 +- CometBroadcastExchange\n                                    :                 :     :                 :              :                       +- CometProject\n                                    :                 :     :                 :              :                          +- CometFilter\n                                    :                 :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 :              +- CometBroadcastExchange\n                                    :                 :     :                 :                 +- CometProject\n                                    :                 :     :                 :                    +- CometFilter\n                                    :                 :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     :                 +- CometBroadcastExchange\n                                    :                 :     :                    +- CometProject\n                                    :                 :     :                       +- CometBroadcastHashJoin\n                                    :                 :     :                          :- CometProject\n                                    :                 :     :                          :  +- CometBroadcastHashJoin\n                                    :                 :     :                          :     :- CometFilter\n                                    :                 :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :     :                          :     :        +- ReusedSubquery\n                                    :                 :     :                          :     +- CometBroadcastExchange\n                                    :                 :     :                          :        +- CometFilter\n                                    :                 :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :     :                          +- CometBroadcastExchange\n                                    :                 :     :                             +- CometProject\n                                    :                 :     :                                +- CometFilter\n                                    :                 :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :     +- CometBroadcastExchange\n                                    :                 :        +- CometBroadcastHashJoin\n                                    :                 :           :- CometFilter\n                                    :                 :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :           +- CometBroadcastExchange\n                                    :                 :              +- CometProject\n                                    :                 :                 +- 
CometBroadcastHashJoin\n                                    :                 :                    :- CometFilter\n                                    :                 :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                    +- CometBroadcastExchange\n                                    :                 :                       +- CometBroadcastHashJoin\n                                    :                 :                          :- CometHashAggregate\n                                    :                 :                          :  +- CometExchange\n                                    :                 :                          :     +- CometHashAggregate\n                                    :                 :                          :        +- CometProject\n                                    :                 :                          :           +- CometBroadcastHashJoin\n                                    :                 :                          :              :- CometProject\n                                    :                 :                          :              :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :     :- CometFilter\n                                    :                 :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :                 :                          :              :     :        +- SubqueryBroadcast\n                                    :                 :                          :              :     :           +- BroadcastExchange\n                                    :                 :                          :              :     :              +- CometNativeColumnarToRow\n                                    :                 :                          :              :     :                 +- CometProject\n                                    :                 :                          :              :     :                    +- CometFilter\n                                    :                 :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              :     +- CometBroadcastExchange\n                                    :                 :                          :              :        +- CometBroadcastHashJoin\n                                    :                 :                          :              :           :- CometFilter\n                                    :                 :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :           +- CometBroadcastExchange\n                                    :                 :                          :              :              +- CometProject\n                                    :                 :                          :              :                 +- CometBroadcastHashJoin\n                                    :                 :                          :              :                    :- 
CometProject\n                                    :                 :                          :              :                    :  +- CometBroadcastHashJoin\n                                    :                 :                          :              :                    :     :- CometFilter\n                                    :                 :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :                 :                          :              :                    :     :        +- ReusedSubquery\n                                    :                 :                          :              :                    :     +- CometBroadcastExchange\n                                    :                 :                          :              :                    :        +- CometFilter\n                                    :                 :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                          :              :                    +- CometBroadcastExchange\n                                    :                 :                          :              :                       +- CometProject\n                                    :                 :                          :              :                          +- CometFilter\n                                    :                 :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          :              +- CometBroadcastExchange\n                                    :                 :                          :                 +- CometProject\n                                    :                 :                          :                    +- CometFilter\n                                    :                 :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 :                          +- CometBroadcastExchange\n                                    :                 :                             +- CometProject\n                                    :                 :                                +- CometBroadcastHashJoin\n                                    :                 :                                   :- CometProject\n                                    :                 :                                   :  +- CometBroadcastHashJoin\n                                    :                 :                                   :     :- CometFilter\n                                    :                 :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                    :                 :                                   :     :        +- ReusedSubquery\n                                    :                 :                                   :     +- CometBroadcastExchange\n                                    :                 :                                   :        +- CometFilter\n                                    :  
               :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :                 :                                   +- CometBroadcastExchange\n                                    :                 :                                      +- CometProject\n                                    :                 :                                         +- CometFilter\n                                    :                 :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :                 +- CometBroadcastExchange\n                                    :                    +- CometProject\n                                    :                       +- CometFilter\n                                    :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    +- CometFilter\n                                       :  +- ReusedSubquery\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometBroadcastHashJoin\n                                                      :     :  :- CometFilter\n                                                      :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                      :     :  :        +- ReusedSubquery\n                                                      :     :  +- CometBroadcastExchange\n                                                      :     :     +- CometProject\n                                                      :     :        +- CometBroadcastHashJoin\n                                                      :     :           :- CometFilter\n                                                      :     :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :           +- CometBroadcastExchange\n                                                      :     :              +- CometBroadcastHashJoin\n                                                      :     :                 :- CometHashAggregate\n                                                      :     :                 :  +- CometExchange\n                                                      :     :                 :     +- CometHashAggregate\n                                                      :     :                 :        +- CometProject\n                                                      :     :                 :           +- CometBroadcastHashJoin\n                                                      :     :                 :              :- CometProject\n                                                      :     :                 :              :  +- CometBroadcastHashJoin\n                                                      :     :   
              :              :     :- CometFilter\n                                                      :     :                 :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :                 :              :     :        +- SubqueryBroadcast\n                                                      :     :                 :              :     :           +- BroadcastExchange\n                                                      :     :                 :              :     :              +- CometNativeColumnarToRow\n                                                      :     :                 :              :     :                 +- CometProject\n                                                      :     :                 :              :     :                    +- CometFilter\n                                                      :     :                 :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :                 :              :     +- CometBroadcastExchange\n                                                      :     :                 :              :        +- CometBroadcastHashJoin\n                                                      :     :                 :              :           :- CometFilter\n                                                      :     :                 :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :                 :              :           +- CometBroadcastExchange\n                                                      :     :                 :              :              +- CometProject\n                                                      :     :                 :              :                 +- CometBroadcastHashJoin\n                                                      :     :                 :              :                    :- CometProject\n                                                      :     :                 :              :                    :  +- CometBroadcastHashJoin\n                                                      :     :                 :              :                    :     :- CometFilter\n                                                      :     :                 :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                                      :     :                 :              :                    :     :        +- ReusedSubquery\n                                                      :     :                 :              :                    :     +- CometBroadcastExchange\n                                                      :     :                 :              :                    :        +- CometFilter\n                                                      :     :                 :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :                 :              :                    +- CometBroadcastExchange\n                                                      :     :                 :              :          
             +- CometProject\n                                                      :     :                 :              :                          +- CometFilter\n                                                      :     :                 :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :                 :              +- CometBroadcastExchange\n                                                      :     :                 :                 +- CometProject\n                                                      :     :                 :                    +- CometFilter\n                                                      :     :                 :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :                 +- CometBroadcastExchange\n                                                      :     :                    +- CometProject\n                                                      :     :                       +- CometBroadcastHashJoin\n                                                      :     :                          :- CometProject\n                                                      :     :                          :  +- CometBroadcastHashJoin\n                                                      :     :                          :     :- CometFilter\n                                                      :     :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                      :     :                          :     :        +- ReusedSubquery\n                                                      :     :                          :     +- CometBroadcastExchange\n                                                      :     :                          :        +- CometFilter\n                                                      :     :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :     :                          +- CometBroadcastExchange\n                                                      :     :                             +- CometProject\n                                                      :     :                                +- CometFilter\n                                                      :     :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     +- CometBroadcastExchange\n                                                      :        +- CometBroadcastHashJoin\n                                                      :           :- CometFilter\n                                                      :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :           +- CometBroadcastExchange\n                                                      :              +- CometProject\n                                                      :                 +- CometBroadcastHashJoin\n                                                      :                    :- CometFilter\n                                              
        :                    :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :                    +- CometBroadcastExchange\n                                                      :                       +- CometBroadcastHashJoin\n                                                      :                          :- CometHashAggregate\n                                                      :                          :  +- CometExchange\n                                                      :                          :     +- CometHashAggregate\n                                                      :                          :        +- CometProject\n                                                      :                          :           +- CometBroadcastHashJoin\n                                                      :                          :              :- CometProject\n                                                      :                          :              :  +- CometBroadcastHashJoin\n                                                      :                          :              :     :- CometFilter\n                                                      :                          :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :                          :              :     :        +- SubqueryBroadcast\n                                                      :                          :              :     :           +- BroadcastExchange\n                                                      :                          :              :     :              +- CometNativeColumnarToRow\n                                                      :                          :              :     :                 +- CometProject\n                                                      :                          :              :     :                    +- CometFilter\n                                                      :                          :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :                          :              :     +- CometBroadcastExchange\n                                                      :                          :              :        +- CometBroadcastHashJoin\n                                                      :                          :              :           :- CometFilter\n                                                      :                          :              :           :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :                          :              :           +- CometBroadcastExchange\n                                                      :                          :              :              +- CometProject\n                                                      :                          :              :                 +- CometBroadcastHashJoin\n                                                      :                          :              :                    :- CometProject\n                                                      :                          :              :                    :  +- CometBroadcastHashJoin\n          
                                            :                          :              :                    :     :- CometFilter\n                                                      :                          :              :                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                                      :                          :              :                    :     :        +- ReusedSubquery\n                                                      :                          :              :                    :     +- CometBroadcastExchange\n                                                      :                          :              :                    :        +- CometFilter\n                                                      :                          :              :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                      :                          :              :                    +- CometBroadcastExchange\n                                                      :                          :              :                       +- CometProject\n                                                      :                          :              :                          +- CometFilter\n                                                      :                          :              :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :                          :              +- CometBroadcastExchange\n                                                      :                          :                 +- CometProject\n                                                      :                          :                    +- CometFilter\n                                                      :                          :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :                          +- CometBroadcastExchange\n                                                      :                             +- CometProject\n                                                      :                                +- CometBroadcastHashJoin\n                                                      :                                   :- CometProject\n                                                      :                                   :  +- CometBroadcastHashJoin\n                                                      :                                   :     :- CometFilter\n                                                      :                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                      :                                   :     :        +- ReusedSubquery\n                                                      :                                   :     +- CometBroadcastExchange\n                                                      :                                   :        +- CometFilter\n                                                      :                                   :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                   
                   :                                   +- CometBroadcastExchange\n                                                      :                                      +- CometProject\n                                                      :                                         +- CometFilter\n                                                      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 2127 out of 2302 eligible operators (92%). Final plan contains 46 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q18a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Union\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- SubqueryBroadcast\n   :                 :     :     :     :     :     :              +- BroadcastExchange\n   :                 :     :     :     :     :     :                 +- CometNativeColumnarToRow\n   :                 :     :     :     :     :     :                    +- CometProject\n   :                 :     :     :     :     :     :                       +- CometFilter\n   :                 :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- 
CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :             
 +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Project\n   :                 
:     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :- Project\n   :                 :     :     :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :     :     :- Filter\n   :                 :     :     :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :     :           +- CometProject\n   :                 :     :     :     :     :              +- CometFilter\n   :                 :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     :     :     +- BroadcastExchange\n   :                 :     :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :     :           +- CometProject\n   :                 :     :     :     :              +- CometFilter\n   :                 :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometFilter\n   :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Project\n                     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :- Project\n                     :     :     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :     :     :- Filter\n                     :     :     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     :     :     +-  
Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :     :     :           +- ReusedSubquery\n                     :     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :     :           +- CometProject\n                     :     :     :     :     :              +- CometFilter\n                     :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- BroadcastExchange\n                     :     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :     :           +- CometProject\n                     :     :     :     :              +- CometFilter\n                     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 92 out of 210 eligible operators (43%). Final plan contains 41 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q18a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :     :     :     :     :        +- SubqueryBroadcast\n      :              :     :     :     :     :     :           +- BroadcastExchange\n      :              :     :     :     :     :     :              +- CometNativeColumnarToRow\n      :              :     :     :     :     :     :                 +- CometProject\n      :              :     :     :     :     :     :                    +- CometFilter\n      :              :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :      
        :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :     :     :     :     :        +- ReusedSubquery\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :    
 :     :     :     :     :        +- ReusedSubquery\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometProject\n      :              :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :- CometProject\n      :              :     :     :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :     :     :- CometFilter\n      :              :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :              :     :     :     :     :     :        +- ReusedSubquery\n      :              :     :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :     :        +- CometProject\n      :              :     :     :     :     :           +- CometFilter\n      :              :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     :     :     +- CometBroadcastExchange\n      :              :     :     :     :        +- CometProject\n      :              :     :     :     :           +- CometFilter\n      :              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n      :              
:     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometFilter\n      :              :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :- CometProject\n                     :     :     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :     :     :- CometFilter\n                     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                     :     :     :     :     :     :        +- ReusedSubquery\n                     :     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :     :        +- CometProject\n                     :     :     :     :     :           +- CometFilter\n                     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     :     :     +- CometBroadcastExchange\n                     :     :     :     :        +- CometProject\n                     :     :     :     :           +- CometFilter\n                     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 204 out of 210 eligible operators (97%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q20.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometColumnarExchange\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Filter\n                                 :     :  +- ColumnarToRow\n                                 :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :     :           +- SubqueryBroadcast\n                                 :     :              +- BroadcastExchange\n                                 :     :                 +- CometNativeColumnarToRow\n                                 :     :                    +- CometProject\n                                 :     :                       +- CometFilter\n                                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 12 out of 27 eligible operators (44%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q20.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometFilter\n                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :        +- SubqueryBroadcast\n                              :     :           +- BroadcastExchange\n                              :     :              +- CometNativeColumnarToRow\n                              :     :                 +- CometProject\n                              :     :                    +- CometFilter\n                              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 22 out of 27 eligible operators (81%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q22.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +- BroadcastNestedLoopJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Filter\n                     :     :     :  +- ColumnarToRow\n                     :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :           +- SubqueryBroadcast\n                     :     :     :              +- BroadcastExchange\n                     :     :     :                 +- CometNativeColumnarToRow\n                     :     :     :                    +- CometProject\n                     :     :     :                       +- CometFilter\n                     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometNativeScan parquet spark_catalog.default.warehouse\n\nComet accelerated 11 out of 28 eligible operators (39%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q22.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Expand\n               +- Project\n                  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n                     :- CometNativeColumnarToRow\n                     :  +- CometProject\n                     :     +- CometBroadcastHashJoin\n                     :        :- CometProject\n                     :        :  +- CometBroadcastHashJoin\n                     :        :     :- CometFilter\n                     :        :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                     :        :     :        +- SubqueryBroadcast\n                     :        :     :           +- BroadcastExchange\n                     :        :     :              +- CometNativeColumnarToRow\n                     :        :     :                 +- CometProject\n                     :        :     :                    +- CometFilter\n                     :        :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        :     +- CometBroadcastExchange\n                     :        :        +- CometProject\n                     :        :           +- CometFilter\n                     :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :        +- CometBroadcastExchange\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n\nComet accelerated 19 out of 28 eligible operators (67%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q22a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Union\n   :- HashAggregate\n   :  +- HashAggregate\n   :     +- HashAggregate\n   :        +- CometNativeColumnarToRow\n   :           +- CometColumnarExchange\n   :              +- HashAggregate\n   :                 +- Project\n   :                    +- BroadcastHashJoin\n   :                       :- Project\n   :                       :  +- BroadcastHashJoin\n   :                       :     :- Project\n   :                       :     :  +- BroadcastHashJoin\n   :                       :     :     :- Filter\n   :                       :     :     :  +- ColumnarToRow\n   :                       :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                       :     :     :           +- SubqueryBroadcast\n   :                       :     :     :              +- BroadcastExchange\n   :                       :     :     :                 +- CometNativeColumnarToRow\n   :                       :     :     :                    +- CometProject\n   :                       :     :     :                       +- CometFilter\n   :                       :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                       :     :     +- BroadcastExchange\n   :                       :     :        +- CometNativeColumnarToRow\n   :                       :     :           +- CometProject\n   :                       :     :              +- CometFilter\n   :                       :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                       :     +- BroadcastExchange\n   :                       :        +- CometNativeColumnarToRow\n   :                       :           +- CometProject\n   :                       :              +- CometFilter\n   :                       :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                       +- BroadcastExchange\n   :                          +- CometNativeColumnarToRow\n   :                             +- CometFilter\n   :                                +- CometNativeScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- HashAggregate\n   :              +- CometNativeColumnarToRow\n   :                 +- CometColumnarExchange\n   :                    +- HashAggregate\n   :                       +- Project\n   :                          +- BroadcastHashJoin\n   :                             :- Project\n   :                             :  +- BroadcastHashJoin\n   :                             :     :- Project\n   :                             :     :  +- BroadcastHashJoin\n   :                             :     :     :- Filter\n   :                             :     :     :  +- ColumnarToRow\n   :                             :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                             :     :     :           +- SubqueryBroadcast\n   :                             :     :     :              +- BroadcastExchange\n   :                             :     :     :                 +- CometNativeColumnarToRow\n   :                             :     :     :                    +- CometProject\n   :                             :     :     :           
            +- CometFilter\n   :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     :     +- BroadcastExchange\n   :                             :     :        +- CometNativeColumnarToRow\n   :                             :     :           +- CometProject\n   :                             :     :              +- CometFilter\n   :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     +- BroadcastExchange\n   :                             :        +- CometNativeColumnarToRow\n   :                             :           +- CometProject\n   :                             :              +- CometFilter\n   :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                             +- BroadcastExchange\n   :                                +- CometNativeColumnarToRow\n   :                                   +- CometFilter\n   :                                      +- CometNativeScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- HashAggregate\n   :              +- CometNativeColumnarToRow\n   :                 +- CometColumnarExchange\n   :                    +- HashAggregate\n   :                       +- Project\n   :                          +- BroadcastHashJoin\n   :                             :- Project\n   :                             :  +- BroadcastHashJoin\n   :                             :     :- Project\n   :                             :     :  +- BroadcastHashJoin\n   :                             :     :     :- Filter\n   :                             :     :     :  +- ColumnarToRow\n   :                             :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                             :     :     :           +- SubqueryBroadcast\n   :                             :     :     :              +- BroadcastExchange\n   :                             :     :     :                 +- CometNativeColumnarToRow\n   :                             :     :     :                    +- CometProject\n   :                             :     :     :                       +- CometFilter\n   :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     :     +- BroadcastExchange\n   :                             :     :        +- CometNativeColumnarToRow\n   :                             :     :           +- CometProject\n   :                             :     :              +- CometFilter\n   :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     +- BroadcastExchange\n   :                             :        +- CometNativeColumnarToRow\n   :                             :           +- CometProject\n   :                             :              +- CometFilter\n   :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                             +- BroadcastExchange\n   :                                +- CometNativeColumnarToRow\n   :                     
              +- CometFilter\n   :                                      +- CometNativeScan parquet spark_catalog.default.warehouse\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- HashAggregate\n   :              +- CometNativeColumnarToRow\n   :                 +- CometColumnarExchange\n   :                    +- HashAggregate\n   :                       +- Project\n   :                          +- BroadcastHashJoin\n   :                             :- Project\n   :                             :  +- BroadcastHashJoin\n   :                             :     :- Project\n   :                             :     :  +- BroadcastHashJoin\n   :                             :     :     :- Filter\n   :                             :     :     :  +- ColumnarToRow\n   :                             :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                             :     :     :           +- SubqueryBroadcast\n   :                             :     :     :              +- BroadcastExchange\n   :                             :     :     :                 +- CometNativeColumnarToRow\n   :                             :     :     :                    +- CometProject\n   :                             :     :     :                       +- CometFilter\n   :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     :     +- BroadcastExchange\n   :                             :     :        +- CometNativeColumnarToRow\n   :                             :     :           +- CometProject\n   :                             :     :              +- CometFilter\n   :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                             :     +- BroadcastExchange\n   :                             :        +- CometNativeColumnarToRow\n   :                             :           +- CometProject\n   :                             :              +- CometFilter\n   :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n   :                             +- BroadcastExchange\n   :                                +- CometNativeColumnarToRow\n   :                                   +- CometFilter\n   :                                      +- CometNativeScan parquet spark_catalog.default.warehouse\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- Project\n                              +- BroadcastHashJoin\n                                 :- Project\n                                 :  +- BroadcastHashJoin\n                                 :     :- Project\n                                 :     :  +- BroadcastHashJoin\n                                 :     :     :- Filter\n                                 :     :     :  +- ColumnarToRow\n                                 :     :     :     +-  Scan parquet spark_catalog.default.inventory [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                 :  
   :     :           +- SubqueryBroadcast\n                                 :     :     :              +- BroadcastExchange\n                                 :     :     :                 +- CometNativeColumnarToRow\n                                 :     :     :                    +- CometProject\n                                 :     :     :                       +- CometFilter\n                                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     :     +- BroadcastExchange\n                                 :     :        +- CometNativeColumnarToRow\n                                 :     :           +- CometProject\n                                 :     :              +- CometFilter\n                                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                 :     +- BroadcastExchange\n                                 :        +- CometNativeColumnarToRow\n                                 :           +- CometProject\n                                 :              +- CometFilter\n                                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                                 +- BroadcastExchange\n                                    +- CometNativeColumnarToRow\n                                       +- CometFilter\n                                          +- CometNativeScan parquet spark_catalog.default.warehouse\n\nComet accelerated 64 out of 151 eligible operators (42%). Final plan contains 34 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q22a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometHashAggregate\n      :     +- CometHashAggregate\n      :        +- CometExchange\n      :           +- CometHashAggregate\n      :              +- CometProject\n      :                 +- CometBroadcastHashJoin\n      :                    :- CometProject\n      :                    :  +- CometBroadcastHashJoin\n      :                    :     :- CometProject\n      :                    :     :  +- CometBroadcastHashJoin\n      :                    :     :     :- CometFilter\n      :                    :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                    :     :     :        +- SubqueryBroadcast\n      :                    :     :     :           +- BroadcastExchange\n      :                    :     :     :              +- CometNativeColumnarToRow\n      :                    :     :     :                 +- CometProject\n      :                    :     :     :                    +- CometFilter\n      :                    :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                    :     :     +- CometBroadcastExchange\n      :                    :     :        +- CometProject\n      :                    :     :           +- CometFilter\n      :                    :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                    :     +- CometBroadcastExchange\n      :                    :        +- CometProject\n      :                    :           +- CometFilter\n      :                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                    +- CometBroadcastExchange\n      :                       +- CometFilter\n      :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometHashAggregate\n      :           +- CometExchange\n      :              +- CometHashAggregate\n      :                 +- CometProject\n      :                    +- CometBroadcastHashJoin\n      :                       :- CometProject\n      :                       :  +- CometBroadcastHashJoin\n      :                       :     :- CometProject\n      :                       :     :  +- CometBroadcastHashJoin\n      :                       :     :     :- CometFilter\n      :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                       :     :     :        +- SubqueryBroadcast\n      :                       :     :     :           +- BroadcastExchange\n      :                       :     :     :              +- CometNativeColumnarToRow\n      :                       :     :     :                 +- CometProject\n      :                       :     :     :                    +- CometFilter\n      :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     :     +- CometBroadcastExchange\n      :                       :     :        +- CometProject\n      :                       :     :           +- CometFilter\n      :                       :     :              +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     +- CometBroadcastExchange\n      :                       :        +- CometProject\n      :                       :           +- CometFilter\n      :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                       +- CometBroadcastExchange\n      :                          +- CometFilter\n      :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometHashAggregate\n      :           +- CometExchange\n      :              +- CometHashAggregate\n      :                 +- CometProject\n      :                    +- CometBroadcastHashJoin\n      :                       :- CometProject\n      :                       :  +- CometBroadcastHashJoin\n      :                       :     :- CometProject\n      :                       :     :  +- CometBroadcastHashJoin\n      :                       :     :     :- CometFilter\n      :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                       :     :     :        +- SubqueryBroadcast\n      :                       :     :     :           +- BroadcastExchange\n      :                       :     :     :              +- CometNativeColumnarToRow\n      :                       :     :     :                 +- CometProject\n      :                       :     :     :                    +- CometFilter\n      :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     :     +- CometBroadcastExchange\n      :                       :     :        +- CometProject\n      :                       :     :           +- CometFilter\n      :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     +- CometBroadcastExchange\n      :                       :        +- CometProject\n      :                       :           +- CometFilter\n      :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                       +- CometBroadcastExchange\n      :                          +- CometFilter\n      :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometHashAggregate\n      :           +- CometExchange\n      :              +- CometHashAggregate\n      :                 +- CometProject\n      :                    +- CometBroadcastHashJoin\n      :                       :- CometProject\n      :                       :  +- CometBroadcastHashJoin\n      :                       :     :- CometProject\n      :                       :     :  +- CometBroadcastHashJoin\n      :                       :     :     :- CometFilter\n      :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n      :                       :     :     :        +- SubqueryBroadcast\n      :                       :     :     :           +- BroadcastExchange\n      :                
       :     :     :              +- CometNativeColumnarToRow\n      :                       :     :     :                 +- CometProject\n      :                       :     :     :                    +- CometFilter\n      :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     :     +- CometBroadcastExchange\n      :                       :     :        +- CometProject\n      :                       :     :           +- CometFilter\n      :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                       :     +- CometBroadcastExchange\n      :                       :        +- CometProject\n      :                       :           +- CometFilter\n      :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                       +- CometBroadcastExchange\n      :                          +- CometFilter\n      :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometFilter\n                              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                              :     :     :        +- SubqueryBroadcast\n                              :     :     :           +- BroadcastExchange\n                              :     :     :              +- CometNativeColumnarToRow\n                              :     :     :                 +- CometProject\n                              :     :     :                    +- CometFilter\n                              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometProject\n                              :     :           +- CometFilter\n                              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     +- CometBroadcastExchange\n                              :        +- CometProject\n                              :           +- CometFilter\n                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n\nComet accelerated 141 out of 151 eligible operators (93%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q24.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Filter\n         :  +- Subquery\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- HashAggregate\n         :                    +- CometNativeColumnarToRow\n         :                       +- CometColumnarExchange\n         :                          +- HashAggregate\n         :                             +- Project\n         :                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n         :                                   :- CometNativeColumnarToRow\n         :                                   :  +- CometProject\n         :                                   :     +- CometBroadcastHashJoin\n         :                                   :        :- CometProject\n         :                                   :        :  +- CometBroadcastHashJoin\n         :                                   :        :     :- CometProject\n         :                                   :        :     :  +- CometBroadcastHashJoin\n         :                                   :        :     :     :- CometProject\n         :                                   :        :     :     :  +- CometSortMergeJoin\n         :                                   :        :     :     :     :- CometSort\n         :                                   :        :     :     :     :  +- CometExchange\n         :                                   :        :     :     :     :     +- CometProject\n         :                                   :        :     :     :     :        +- CometFilter\n         :                                   :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n         :                                   :        :     :     :     +- CometSort\n         :                                   :        :     :     :        +- CometExchange\n         :                                   :        :     :     :           +- CometProject\n         :                                   :        :     :     :              +- CometFilter\n         :                                   :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n         :                                   :        :     :     +- CometBroadcastExchange\n         :                                   :        :     :        +- CometProject\n         :                                   :        :     :           +- CometFilter\n         :                                   :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n         :                                   :        :     +- CometBroadcastExchange\n         :                                   :        :        +- CometProject\n         :                                   :        :           +- CometFilter\n         :                                   :        :              +- CometNativeScan parquet spark_catalog.default.item\n         :                                   :        +- CometBroadcastExchange\n         :                                   :           +- CometProject\n         :                                   :              +- CometFilter\n         :         
                          :                 +- CometNativeScan parquet spark_catalog.default.customer\n         :                                   +- BroadcastExchange\n         :                                      +- CometNativeColumnarToRow\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometNativeScan parquet spark_catalog.default.customer_address\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- HashAggregate\n                        +- CometNativeColumnarToRow\n                           +- CometColumnarExchange\n                              +- HashAggregate\n                                 +- Project\n                                    +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                                       :- CometNativeColumnarToRow\n                                       :  +- CometProject\n                                       :     +- CometBroadcastHashJoin\n                                       :        :- CometProject\n                                       :        :  +- CometBroadcastHashJoin\n                                       :        :     :- CometProject\n                                       :        :     :  +- CometBroadcastHashJoin\n                                       :        :     :     :- CometProject\n                                       :        :     :     :  +- CometSortMergeJoin\n                                       :        :     :     :     :- CometSort\n                                       :        :     :     :     :  +- CometExchange\n                                       :        :     :     :     :     +- CometProject\n                                       :        :     :     :     :        +- CometFilter\n                                       :        :     :     :     :           +- CometNativeScan parquet spark_catalog.default.store_sales\n                                       :        :     :     :     +- CometSort\n                                       :        :     :     :        +- CometExchange\n                                       :        :     :     :           +- CometProject\n                                       :        :     :     :              +- CometFilter\n                                       :        :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                       :        :     :     +- CometBroadcastExchange\n                                       :        :     :        +- CometProject\n                                       :        :     :           +- CometFilter\n                                       :        :     :              +- CometNativeScan parquet spark_catalog.default.store\n                                       :        :     +- CometBroadcastExchange\n                                       :        :        +- CometProject\n                                       :        :           +- CometFilter\n                                       :        :              +- CometNativeScan parquet spark_catalog.default.item\n                                       :        +- CometBroadcastExchange\n               
                        :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.customer\n                                       +- BroadcastExchange\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.customer_address\n\nComet accelerated 72 out of 88 eligible operators (81%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q24.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Filter\n         :  +- Subquery\n         :     +- HashAggregate\n         :        +- CometNativeColumnarToRow\n         :           +- CometColumnarExchange\n         :              +- HashAggregate\n         :                 +- HashAggregate\n         :                    +- CometNativeColumnarToRow\n         :                       +- CometColumnarExchange\n         :                          +- HashAggregate\n         :                             +- Project\n         :                                +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n         :                                   :- CometNativeColumnarToRow\n         :                                   :  +- CometProject\n         :                                   :     +- CometBroadcastHashJoin\n         :                                   :        :- CometProject\n         :                                   :        :  +- CometBroadcastHashJoin\n         :                                   :        :     :- CometProject\n         :                                   :        :     :  +- CometBroadcastHashJoin\n         :                                   :        :     :     :- CometProject\n         :                                   :        :     :     :  +- CometSortMergeJoin\n         :                                   :        :     :     :     :- CometSort\n         :                                   :        :     :     :     :  +- CometExchange\n         :                                   :        :     :     :     :     +- CometProject\n         :                                   :        :     :     :     :        +- CometFilter\n         :                                   :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :                                   :        :     :     :     +- CometSort\n         :                                   :        :     :     :        +- CometExchange\n         :                                   :        :     :     :           +- CometProject\n         :                                   :        :     :     :              +- CometFilter\n         :                                   :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :                                   :        :     :     +- CometBroadcastExchange\n         :                                   :        :     :        +- CometProject\n         :                                   :        :     :           +- CometFilter\n         :                                   :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n         :                                   :        :     +- CometBroadcastExchange\n         :                                   :        :        +- CometProject\n         :                                   :        :           +- CometFilter\n         :                                   :        :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                                   :        +- CometBroadcastExchange\n         :                                   :           +- CometProject\n         :             
                      :              +- CometFilter\n         :                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                                   +- BroadcastExchange\n         :                                      +- CometNativeColumnarToRow\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- HashAggregate\n                        +- CometNativeColumnarToRow\n                           +- CometColumnarExchange\n                              +- HashAggregate\n                                 +- Project\n                                    +-  BroadcastHashJoin [COMET: Comet is not compatible with Spark for case conversion in locale-specific cases. Set spark.comet.caseConversion.enabled=true to enable it anyway.]\n                                       :- CometNativeColumnarToRow\n                                       :  +- CometProject\n                                       :     +- CometBroadcastHashJoin\n                                       :        :- CometProject\n                                       :        :  +- CometBroadcastHashJoin\n                                       :        :     :- CometProject\n                                       :        :     :  +- CometBroadcastHashJoin\n                                       :        :     :     :- CometProject\n                                       :        :     :     :  +- CometSortMergeJoin\n                                       :        :     :     :     :- CometSort\n                                       :        :     :     :     :  +- CometExchange\n                                       :        :     :     :     :     +- CometProject\n                                       :        :     :     :     :        +- CometFilter\n                                       :        :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :        :     :     :     +- CometSort\n                                       :        :     :     :        +- CometExchange\n                                       :        :     :     :           +- CometProject\n                                       :        :     :     :              +- CometFilter\n                                       :        :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                       :        :     :     +- CometBroadcastExchange\n                                       :        :     :        +- CometProject\n                                       :        :     :           +- CometFilter\n                                       :        :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                       :        :     +- CometBroadcastExchange\n                                       :        :        +- CometProject\n                                       :        :           +- CometFilter\n                                       :      
  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :        +- CometBroadcastExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                                       +- BroadcastExchange\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n\nComet accelerated 72 out of 88 eligible operators (81%). Final plan contains 9 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q27a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Union\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Filter\n   :                 :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :           +- SubqueryBroadcast\n   :                 :     :     :     :              +- BroadcastExchange\n   :                 :     :     :     :                 +- CometNativeColumnarToRow\n   :                 :     :     :     :                    +- CometProject\n   :                 :     :     :     :                       +- CometFilter\n   :                 :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n   :                 :     :     :           +- CometProject\n   :                 :     :     :              +- CometFilter\n   :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   :- HashAggregate\n   :  +- CometNativeColumnarToRow\n   :     +- CometColumnarExchange\n   :        +- HashAggregate\n   :           +- Project\n   :              +- BroadcastHashJoin\n   :                 :- Project\n   :                 :  +- BroadcastHashJoin\n   :                 :     :- Project\n   :                 :     :  +- BroadcastHashJoin\n   :                 :     :     :- Project\n   :                 :     :     :  +- BroadcastHashJoin\n   :                 :     :     :     :- Filter\n   :                 :     :     :     :  +- ColumnarToRow\n   :                 :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n   :                 :     :     :     :           +- ReusedSubquery\n   :                 :     :     :     +- BroadcastExchange\n   :                 :     :     :        +- CometNativeColumnarToRow\n  
 :                 :     :     :           +- CometProject\n   :                 :     :     :              +- CometFilter\n   :                 :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n   :                 :     :     +- BroadcastExchange\n   :                 :     :        +- CometNativeColumnarToRow\n   :                 :     :           +- CometProject\n   :                 :     :              +- CometFilter\n   :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n   :                 :     +- BroadcastExchange\n   :                 :        +- CometNativeColumnarToRow\n   :                 :           +- CometProject\n   :                 :              +- CometFilter\n   :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n   :                 +- BroadcastExchange\n   :                    +- CometNativeColumnarToRow\n   :                       +- CometProject\n   :                          +- CometFilter\n   :                             +- CometNativeScan parquet spark_catalog.default.item\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- Project\n                     :     :     :  +- BroadcastHashJoin\n                     :     :     :     :- Filter\n                     :     :     :     :  +- ColumnarToRow\n                     :     :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :     :     :           +- ReusedSubquery\n                     :     :     :     +- BroadcastExchange\n                     :     :     :        +- CometNativeColumnarToRow\n                     :     :     :           +- CometProject\n                     :     :     :              +- CometFilter\n                     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                     :     :     +- BroadcastExchange\n                     :     :        +- CometNativeColumnarToRow\n                     :     :           +- CometProject\n                     :     :              +- CometFilter\n                     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 +- CometNativeScan parquet spark_catalog.default.store\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 41 out of 95 eligible operators (43%). Final plan contains 19 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q27a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometUnion\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometFilter\n      :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :     :     :        +- SubqueryBroadcast\n      :              :     :     :     :           +- BroadcastExchange\n      :              :     :     :     :              +- CometNativeColumnarToRow\n      :              :     :     :     :                 +- CometProject\n      :              :     :     :     :                    +- CometFilter\n      :              :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometProject\n      :              :     :     :           +- CometFilter\n      :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :- CometHashAggregate\n      :  +- CometExchange\n      :     +- CometHashAggregate\n      :        +- CometProject\n      :           +- CometBroadcastHashJoin\n      :              :- CometProject\n      :              :  +- CometBroadcastHashJoin\n      :              :     :- CometProject\n      :              :     :  +- CometBroadcastHashJoin\n      :              :     :     :- CometProject\n      :              :     :     :  +- CometBroadcastHashJoin\n      :              :     :     :     :- CometFilter\n      :              :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :              :     :     :     :        +- ReusedSubquery\n      :              :     :     :     +- CometBroadcastExchange\n      :              :     :     :        +- CometProject\n      :              :     :     :           +- CometFilter\n      :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n      :              :     :     +- CometBroadcastExchange\n      :              :     :        +- CometProject\n      :              :     :           +- 
CometFilter\n      :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :              :     +- CometBroadcastExchange\n      :              :        +- CometProject\n      :              :           +- CometFilter\n      :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :              +- CometBroadcastExchange\n      :                 +- CometProject\n      :                    +- CometFilter\n      :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometFilter\n                     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :     :     :        +- ReusedSubquery\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometProject\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometProject\n                     :     :           +- CometFilter\n                     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                     +- CometBroadcastExchange\n                        +- CometFilter\n                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 91 out of 95 eligible operators (95%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q34.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +- BroadcastHashJoin\n            :- Filter\n            :  +- HashAggregate\n            :     +- CometNativeColumnarToRow\n            :        +- CometColumnarExchange\n            :           +- HashAggregate\n            :              +- Project\n            :                 +- BroadcastHashJoin\n            :                    :- Project\n            :                    :  +- BroadcastHashJoin\n            :                    :     :- Project\n            :                    :     :  +- BroadcastHashJoin\n            :                    :     :     :- Filter\n            :                    :     :     :  +- ColumnarToRow\n            :                    :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                    :     :     :           +- SubqueryBroadcast\n            :                    :     :     :              +- BroadcastExchange\n            :                    :     :     :                 +- CometNativeColumnarToRow\n            :                    :     :     :                    +- CometProject\n            :                    :     :     :                       +- CometFilter\n            :                    :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     :     +- BroadcastExchange\n            :                    :     :        +- CometNativeColumnarToRow\n            :                    :     :           +- CometProject\n            :                    :     :              +- CometFilter\n            :                    :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                    :     +- BroadcastExchange\n            :                    :        +- CometNativeColumnarToRow\n            :                    :           +- CometProject\n            :                    :              +- CometFilter\n            :                    :                 +- CometNativeScan parquet spark_catalog.default.store\n            :                    +- BroadcastExchange\n            :                       +- CometNativeColumnarToRow\n            :                          +- CometProject\n            :                             +- CometFilter\n            :                                +- CometNativeScan parquet spark_catalog.default.household_demographics\n            +- BroadcastExchange\n               +- CometNativeColumnarToRow\n                  +- CometProject\n                     +- CometFilter\n                        +- CometNativeScan parquet spark_catalog.default.customer\n\nComet accelerated 18 out of 37 eligible operators (48%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q34.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometBroadcastHashJoin\n            :- CometFilter\n            :  +- CometHashAggregate\n            :     +- CometExchange\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometFilter\n            :                 :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :        +- SubqueryBroadcast\n            :                 :     :     :           +- BroadcastExchange\n            :                 :     :     :              +- CometNativeColumnarToRow\n            :                 :     :     :                 +- CometProject\n            :                 :     :     :                    +- CometFilter\n            :                 :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometProject\n            :                 :     :           +- CometFilter\n            :                 :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometProject\n            :                 :           +- CometFilter\n            :                 :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            +- CometBroadcastExchange\n               +- CometProject\n                  +- CometFilter\n                     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n\nComet accelerated 35 out of 37 eligible operators (94%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q35.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :- BroadcastHashJoin\n                  :     :        :  :- BroadcastHashJoin\n                  :     :        :  :  :- CometNativeColumnarToRow\n                  :     :        :  :  :  +- CometFilter\n                  :     :        :  :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :        :  :  +- BroadcastExchange\n                  :     :        :  :     +- Project\n                  :     :        :  :        +- BroadcastHashJoin\n                  :     :        :  :           :- ColumnarToRow\n                  :     :        :  :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :  :           :        +- SubqueryBroadcast\n                  :     :        :  :           :           +- BroadcastExchange\n                  :     :        :  :           :              +- CometNativeColumnarToRow\n                  :     :        :  :           :                 +- CometProject\n                  :     :        :  :           :                    +- CometFilter\n                  :     :        :  :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  :           +- BroadcastExchange\n                  :     :        :  :              +- CometNativeColumnarToRow\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- Project\n                  :     :        :        +- BroadcastHashJoin\n                  :     :        :           :- ColumnarToRow\n                  :     :        :           :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :        :           :        +- ReusedSubquery\n                  :     :        :           +- BroadcastExchange\n                  :     :        :              +- CometNativeColumnarToRow\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- 
BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 54 eligible operators (38%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q35.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- Filter\n                  :     :     +- BroadcastHashJoin\n                  :     :        :-  BroadcastHashJoin [COMET: Unsupported join type ExistenceJoin(exists#1)]\n                  :     :        :  :- CometNativeColumnarToRow\n                  :     :        :  :  +- CometBroadcastHashJoin\n                  :     :        :  :     :- CometFilter\n                  :     :        :  :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :        :  :     +- CometBroadcastExchange\n                  :     :        :  :        +- CometProject\n                  :     :        :  :           +- CometBroadcastHashJoin\n                  :     :        :  :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :        :  :              :     +- SubqueryBroadcast\n                  :     :        :  :              :        +- BroadcastExchange\n                  :     :        :  :              :           +- CometNativeColumnarToRow\n                  :     :        :  :              :              +- CometProject\n                  :     :        :  :              :                 +- CometFilter\n                  :     :        :  :              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  :              +- CometBroadcastExchange\n                  :     :        :  :                 +- CometProject\n                  :     :        :  :                    +- CometFilter\n                  :     :        :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        :  +- BroadcastExchange\n                  :     :        :     +- CometNativeColumnarToRow\n                  :     :        :        +- CometProject\n                  :     :        :           +- CometBroadcastHashJoin\n                  :     :        :              :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :        :              :     +- ReusedSubquery\n                  :     :        :              +- CometBroadcastExchange\n                  :     :        :                 +- CometProject\n                  :     :        :                    +- CometFilter\n                  :     :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :        +- BroadcastExchange\n                  :     :           +- CometNativeColumnarToRow\n                  :     :              +- CometProject\n                  :     :                 +- CometBroadcastHashJoin\n                  :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                    :     +- ReusedSubquery\n                  :     :                    +- CometBroadcastExchange\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n          
        :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 35 out of 54 eligible operators (64%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q35a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- HashAggregate\n   +- CometNativeColumnarToRow\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Project\n               +- BroadcastHashJoin\n                  :- Project\n                  :  +- BroadcastHashJoin\n                  :     :- Project\n                  :     :  +- BroadcastHashJoin\n                  :     :     :- BroadcastHashJoin\n                  :     :     :  :- CometNativeColumnarToRow\n                  :     :     :  :  +- CometFilter\n                  :     :     :  :     +- CometNativeScan parquet spark_catalog.default.customer\n                  :     :     :  +- BroadcastExchange\n                  :     :     :     +- Project\n                  :     :     :        +- BroadcastHashJoin\n                  :     :     :           :- ColumnarToRow\n                  :     :     :           :  +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :     :           :        +- SubqueryBroadcast\n                  :     :     :           :           +- BroadcastExchange\n                  :     :     :           :              +- CometNativeColumnarToRow\n                  :     :     :           :                 +- CometProject\n                  :     :     :           :                    +- CometFilter\n                  :     :     :           :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     :           +- BroadcastExchange\n                  :     :     :              +- CometNativeColumnarToRow\n                  :     :     :                 +- CometProject\n                  :     :     :                    +- CometFilter\n                  :     :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :     +- BroadcastExchange\n                  :     :        +- Union\n                  :     :           :- Project\n                  :     :           :  +- BroadcastHashJoin\n                  :     :           :     :- ColumnarToRow\n                  :     :           :     :  +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :           :     :        +- ReusedSubquery\n                  :     :           :     +- BroadcastExchange\n                  :     :           :        +- CometNativeColumnarToRow\n                  :     :           :           +- CometProject\n                  :     :           :              +- CometFilter\n                  :     :           :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     :           +- Project\n                  :     :              +- BroadcastHashJoin\n                  :     :                 :- ColumnarToRow\n                  :     :                 :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :     :                 :        +- ReusedSubquery\n                  :     :                 +- BroadcastExchange\n                  :     :                    +- CometNativeColumnarToRow\n                  :     :                       +- CometProject\n                  :     :                          +- CometFilter\n                  :     :                             
+- CometNativeScan parquet spark_catalog.default.date_dim\n                  :     +- BroadcastExchange\n                  :        +- CometNativeColumnarToRow\n                  :           +- CometProject\n                  :              +- CometFilter\n                  :                 +- CometNativeScan parquet spark_catalog.default.customer_address\n                  +- BroadcastExchange\n                     +- CometNativeColumnarToRow\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n\nComet accelerated 21 out of 52 eligible operators (40%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q35a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometBroadcastHashJoin\n                  :- CometProject\n                  :  +- CometBroadcastHashJoin\n                  :     :- CometProject\n                  :     :  +- CometBroadcastHashJoin\n                  :     :     :- CometBroadcastHashJoin\n                  :     :     :  :- CometFilter\n                  :     :     :  :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                  :     :     :  +- CometBroadcastExchange\n                  :     :     :     +- CometProject\n                  :     :     :        +- CometBroadcastHashJoin\n                  :     :     :           :- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                  :     :     :           :     +- SubqueryBroadcast\n                  :     :     :           :        +- BroadcastExchange\n                  :     :     :           :           +- CometNativeColumnarToRow\n                  :     :     :           :              +- CometProject\n                  :     :     :           :                 +- CometFilter\n                  :     :     :           :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     :           +- CometBroadcastExchange\n                  :     :     :              +- CometProject\n                  :     :     :                 +- CometFilter\n                  :     :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :     +- CometBroadcastExchange\n                  :     :        +- CometUnion\n                  :     :           :- CometProject\n                  :     :           :  +- CometBroadcastHashJoin\n                  :     :           :     :- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                  :     :           :     :     +- ReusedSubquery\n                  :     :           :     +- CometBroadcastExchange\n                  :     :           :        +- CometProject\n                  :     :           :           +- CometFilter\n                  :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     :           +- CometProject\n                  :     :              +- CometBroadcastHashJoin\n                  :     :                 :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :     :                 :     +- ReusedSubquery\n                  :     :                 +- CometBroadcastExchange\n                  :     :                    +- CometProject\n                  :     :                       +- CometFilter\n                  :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :     +- CometBroadcastExchange\n                  :        +- CometProject\n                  :           +- CometFilter\n                  :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                  +- CometBroadcastExchange\n                     +- CometProject\n                        +- CometFilter\n                           +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n\nComet accelerated 48 out of 52 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q36a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Project\n                           :                 :     :  +- BroadcastHashJoin\n                           :                 :     :     :- Filter\n                           :                 :     :     :  +- ColumnarToRow\n                           :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :     :           +- SubqueryBroadcast\n                           :                 :     :     :              +- BroadcastExchange\n                           :                 :     :     :                 +- CometNativeColumnarToRow\n                           :                 :     :     :                    +- CometProject\n                           :                 :     :     :                       +- CometFilter\n                           :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     :     +- BroadcastExchange\n                           :                 :     :        +- CometNativeColumnarToRow\n                           :                 :     :           +- CometProject\n                           :                 :     :              +- CometFilter\n                           :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.item\n                           :                 +- BroadcastExchange\n                           :                    +- CometNativeColumnarToRow\n                           :                       +- CometProject\n                           :                          +- CometFilter\n                           :                             +- CometNativeScan parquet spark_catalog.default.store\n                           :- HashAggregate\n                           :  +- 
CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.item\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.store\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n         
                           +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Project\n                                                         :     :  +- BroadcastHashJoin\n                                                         :     :     :- Filter\n                                                         :     :     :  +- ColumnarToRow\n                                                         :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :     :           +- SubqueryBroadcast\n                                                         :     :     :              +- BroadcastExchange\n                                                         :     :     :                 +- CometNativeColumnarToRow\n                                                         :     :     :                    +- CometProject\n                                                         :     :     :                       +- CometFilter\n                                                         :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     :     +- BroadcastExchange\n                                                         :     :        +- CometNativeColumnarToRow\n                                                         :     :           +- CometProject\n                                                         :     :              +- CometFilter\n                                                         :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                                         :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan parquet spark_catalog.default.item\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 45 out of 99 eligible operators (45%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q36a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometUnion\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometProject\n                           :           +- CometBroadcastHashJoin\n                           :              :- CometProject\n                           :              :  +- CometBroadcastHashJoin\n                           :              :     :- CometProject\n                           :              :     :  +- CometBroadcastHashJoin\n                           :              :     :     :- CometFilter\n                           :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :              :     :     :        +- SubqueryBroadcast\n                           :              :     :     :           +- BroadcastExchange\n                           :              :     :     :              +- CometNativeColumnarToRow\n                           :              :     :     :                 +- CometProject\n                           :              :     :     :                    +- CometFilter\n                           :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              :     :     +- CometBroadcastExchange\n                           :              :     :        +- CometProject\n                           :              :     :           +- CometFilter\n                           :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              :     +- CometBroadcastExchange\n                           :              :        +- CometProject\n                           :              :           +- CometFilter\n                           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :              +- CometBroadcastExchange\n                           :                 +- CometProject\n                           :                    +- CometFilter\n                           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometHashAggregate\n                           :           +- CometExchange\n                           :              +- CometHashAggregate\n                           :                 +- CometProject\n                           :                    +- CometBroadcastHashJoin\n                           :                       :- 
CometProject\n                           :                       :  +- CometBroadcastHashJoin\n                           :                       :     :- CometProject\n                           :                       :     :  +- CometBroadcastHashJoin\n                           :                       :     :     :- CometFilter\n                           :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                       :     :     :        +- SubqueryBroadcast\n                           :                       :     :     :           +- BroadcastExchange\n                           :                       :     :     :              +- CometNativeColumnarToRow\n                           :                       :     :     :                 +- CometProject\n                           :                       :     :     :                    +- CometFilter\n                           :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       :     :     +- CometBroadcastExchange\n                           :                       :     :        +- CometProject\n                           :                       :     :           +- CometFilter\n                           :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       :     +- CometBroadcastExchange\n                           :                       :        +- CometProject\n                           :                       :           +- CometFilter\n                           :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :                       +- CometBroadcastExchange\n                           :                          +- CometProject\n                           :                             +- CometFilter\n                           :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometProject\n                                                   :     :  +- CometBroadcastHashJoin\n                                                   :     :     :- CometFilter\n                                                   :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                   :     :     :        +- SubqueryBroadcast\n                                                   :     :     :           +- BroadcastExchange\n                                                   :     :     :          
    +- CometNativeColumnarToRow\n                                                   :     :     :                 +- CometProject\n                                                   :     :     :                    +- CometFilter\n                                                   :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     :     +- CometBroadcastExchange\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 90 out of 99 eligible operators (90%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q47.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                          
        +- CometNativeScan parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.store\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q47.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q49.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- SubqueryBroadcast\n               :                                         :     :                    +- BroadcastExchange\n               :                                         :     :                       +- CometNativeColumnarToRow\n               :                                         :     :                          +- CometProject\n               :                                         :     :                             +- CometFilter\n               :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n      
         :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- BroadcastExchange\n               :                                         :     :  +- Project\n               :                                         :     :     +- Filter\n               :                                         :     :        +- ColumnarToRow\n               :                                         :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :                 +- ReusedSubquery\n               :                                         :     +- CometNativeColumnarToRow\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometProject\n               :                                                  +- CometFilter\n               :                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometColumnarExchange\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- BroadcastExchange\n                                                         :     :  +- Project\n                                                         :     :     +- Filter\n                                                         :     :        +- ColumnarToRow\n                                                         :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :                 +- ReusedSubquery\n                                                         :     +- CometNativeColumnarToRow\n                                                         :        +- CometProject\n                                                         :           +- CometFilter\n                                                         :              +- CometNativeScan parquet spark_catalog.default.store_returns\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 33 out of 87 eligible operators (37%). Final plan contains 17 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q49.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                      :     :              +- SubqueryBroadcast\n               :                                      :     :                 +- BroadcastExchange\n               :                                      :     :                    +- CometNativeColumnarToRow\n               :                                      :     :                       +- CometProject\n               :                                      :     :                          +- CometFilter\n               :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :- Project\n               :  +- Filter\n               :     +- Window\n               :        +- Sort\n               :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :              +- CometNativeColumnarToRow\n               :                 +- CometSort\n               :                    +- CometExchange\n               :                       +- CometHashAggregate\n               :                          +- CometExchange\n               :                             +- CometHashAggregate\n               :                                +- CometProject\n               :                                   +- CometBroadcastHashJoin\n               :                                      :- CometProject\n               :                                      :  +- CometBroadcastHashJoin\n               :                                      :     :- CometBroadcastExchange\n               :                                      :     :  +- CometProject\n               :                                      :     :     +- CometFilter\n               :                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                                      :     :              +- ReusedSubquery\n               :                                      :     +- CometProject\n               :                                      :        +- CometFilter\n               :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                                      +- CometBroadcastExchange\n               :                                         +- CometProject\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- Project\n                  +- Filter\n                     +- Window\n                        +- Sort\n                           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                              +- CometNativeColumnarToRow\n                                 +- CometSort\n                                    +- CometExchange\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometBroadcastExchange\n                                                      :     :  +- CometProject\n                                                      :     :     +- CometFilter\n                                                      :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :              +- ReusedSubquery\n                                                      :     +- CometProject\n                                                      :        +- CometFilter\n                                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 66 out of 87 eligible operators (75%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q51a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :  +- CometNativeColumnarToRow\n               :     +- CometSort\n               :        +- CometExchange\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometSortMergeJoin\n               :                    :- CometSort\n               :                    :  +- CometColumnarExchange\n               :                    :     +- HashAggregate\n               :                    :        +- CometNativeColumnarToRow\n               :                    :           +- CometColumnarExchange\n               :                    :              +- HashAggregate\n               :                    :                 +- Project\n               :                    :                    +- BroadcastHashJoin\n               :                    :                       :- Project\n               :                    :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                       :     +- CometNativeColumnarToRow\n               :                    :                       :        +- CometSort\n               :                    :                       :           +- CometColumnarExchange\n               :                    :                       :              +- HashAggregate\n               :                    :                       :                 +- CometNativeColumnarToRow\n               :                    :                       :                    +- CometColumnarExchange\n               :                    :                       :                       +- HashAggregate\n               :                    :                       :                          +- Project\n               :                    :                       :                             +- BroadcastHashJoin\n               :                    :                       :                                :- Filter\n               :                    :                       :                                :  +- ColumnarToRow\n               :                    :                       :                                :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :                       :                                :           +- SubqueryBroadcast\n               :                    :                       :                                :              +- BroadcastExchange\n               :                    :                       :                                :                 +- CometNativeColumnarToRow\n               :                    :                   
    :                                :                    +- CometProject\n               :                    :                       :                                :                       +- CometFilter\n               :                    :                       :                                :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                       :                                +- BroadcastExchange\n               :                    :                       :                                   +- CometNativeColumnarToRow\n               :                    :                       :                                      +- CometProject\n               :                    :                       :                                         +- CometFilter\n               :                    :                       :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                       +- BroadcastExchange\n               :                    :                          +- Project\n               :                    :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                                +- CometNativeColumnarToRow\n               :                    :                                   +- CometSort\n               :                    :                                      +- CometColumnarExchange\n               :                    :                                         +- HashAggregate\n               :                    :                                            +- CometNativeColumnarToRow\n               :                    :                                               +- CometColumnarExchange\n               :                    :                                                  +- HashAggregate\n               :                    :                                                     +- Project\n               :                    :                                                        +- BroadcastHashJoin\n               :                    :                                                           :- Filter\n               :                    :                                                           :  +- ColumnarToRow\n               :                    :                                                           :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :                                                           :           +- SubqueryBroadcast\n               :                    :                                                           :              +- BroadcastExchange\n               :                    :                                                           :                 +- CometNativeColumnarToRow\n               :                    :                                                           :                    +- CometProject\n               :                    :                           
                                :                       +- CometFilter\n               :                    :                                                           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                                                           +- BroadcastExchange\n               :                    :                                                              +- CometNativeColumnarToRow\n               :                    :                                                                 +- CometProject\n               :                    :                                                                    +- CometFilter\n               :                    :                                                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    +- CometSort\n               :                       +- CometColumnarExchange\n               :                          +- HashAggregate\n               :                             +- CometNativeColumnarToRow\n               :                                +- CometColumnarExchange\n               :                                   +- HashAggregate\n               :                                      +- Project\n               :                                         +- BroadcastHashJoin\n               :                                            :- Project\n               :                                            :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                            :     +- CometNativeColumnarToRow\n               :                                            :        +- CometSort\n               :                                            :           +- CometColumnarExchange\n               :                                            :              +- HashAggregate\n               :                                            :                 +- CometNativeColumnarToRow\n               :                                            :                    +- CometColumnarExchange\n               :                                            :                       +- HashAggregate\n               :                                            :                          +- Project\n               :                                            :                             +- BroadcastHashJoin\n               :                                            :                                :- Filter\n               :                                            :                                :  +- ColumnarToRow\n               :                                            :                                :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                            :                                :           +- ReusedSubquery\n               :                                            :                                +- BroadcastExchange\n               :                                            :       
                            +- CometNativeColumnarToRow\n               :                                            :                                      +- CometProject\n               :                                            :                                         +- CometFilter\n               :                                            :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                            +- BroadcastExchange\n               :                                               +- Project\n               :                                                  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                                     +- CometNativeColumnarToRow\n               :                                                        +- CometSort\n               :                                                           +- CometColumnarExchange\n               :                                                              +- HashAggregate\n               :                                                                 +- CometNativeColumnarToRow\n               :                                                                    +- CometColumnarExchange\n               :                                                                       +- HashAggregate\n               :                                                                          +- Project\n               :                                                                             +- BroadcastHashJoin\n               :                                                                                :- Filter\n               :                                                                                :  +- ColumnarToRow\n               :                                                                                :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                                                                :           +- ReusedSubquery\n               :                                                                                +- BroadcastExchange\n               :                                                                                   +- CometNativeColumnarToRow\n               :                                                                                      +- CometProject\n               :                                                                                         +- CometFilter\n               :                                                                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- Project\n                     +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                        +- CometNativeColumnarToRow\n                           +- CometSort\n                              +- CometExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometSortMergeJoin\n                                          :- CometSort\n                                          :  +- CometColumnarExchange\n                                          :     +- HashAggregate\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometColumnarExchange\n                                          :              +- HashAggregate\n                                          :                 +- Project\n                                          :                    +- BroadcastHashJoin\n                                          :                       :- Project\n                                          :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                       :     +- CometNativeColumnarToRow\n                                          :                       :        +- CometSort\n                                          :                       :           +- CometColumnarExchange\n                                          :                       :              +- HashAggregate\n                                          :                       :                 +- CometNativeColumnarToRow\n                                          :                       :                    +- CometColumnarExchange\n                                          :                       :                       +- HashAggregate\n                                          :                       :                          +- Project\n                                          :                       :                             +- BroadcastHashJoin\n                                          :                       :                                :- Filter\n                                          :                       :                                :  +- ColumnarToRow\n                                          :                       :                                :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                       :                                :           +- SubqueryBroadcast\n                                          :                       :                                :              +- BroadcastExchange\n                                          :                       :                                :                 +- CometNativeColumnarToRow\n                                          :                       :                                :                    +- CometProject\n                                          :                       :                           
     :                       +- CometFilter\n                                          :                       :                                :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                       :                                +- BroadcastExchange\n                                          :                       :                                   +- CometNativeColumnarToRow\n                                          :                       :                                      +- CometProject\n                                          :                       :                                         +- CometFilter\n                                          :                       :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                       +- BroadcastExchange\n                                          :                          +- Project\n                                          :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                                +- CometNativeColumnarToRow\n                                          :                                   +- CometSort\n                                          :                                      +- CometColumnarExchange\n                                          :                                         +- HashAggregate\n                                          :                                            +- CometNativeColumnarToRow\n                                          :                                               +- CometColumnarExchange\n                                          :                                                  +- HashAggregate\n                                          :                                                     +- Project\n                                          :                                                        +- BroadcastHashJoin\n                                          :                                                           :- Filter\n                                          :                                                           :  +- ColumnarToRow\n                                          :                                                           :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                                                           :           +- SubqueryBroadcast\n                                          :                                                           :              +- BroadcastExchange\n                                          :                                                           :                 +- CometNativeColumnarToRow\n                                          :                                                           :                    +- CometProject\n                                          :                                  
                         :                       +- CometFilter\n                                          :                                                           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                                                           +- BroadcastExchange\n                                          :                                                              +- CometNativeColumnarToRow\n                                          :                                                                 +- CometProject\n                                          :                                                                    +- CometFilter\n                                          :                                                                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- CometSort\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- CometNativeColumnarToRow\n                                                      +- CometColumnarExchange\n                                                         +- HashAggregate\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- Project\n                                                                  :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                  :     +- CometNativeColumnarToRow\n                                                                  :        +- CometSort\n                                                                  :           +- CometColumnarExchange\n                                                                  :              +- HashAggregate\n                                                                  :                 +- CometNativeColumnarToRow\n                                                                  :                    +- CometColumnarExchange\n                                                                  :                       +- HashAggregate\n                                                                  :                          +- Project\n                                                                  :                             +- BroadcastHashJoin\n                                                                  :                                :- Filter\n                                                                  :                                :  +- ColumnarToRow\n                                                                  :                                :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                  :                                :           +- ReusedSubquery\n                                                                  :                                +- BroadcastExchange\n                                                                  :                                   +- CometNativeColumnarToRow\n                                                                  :                                      +- CometProject\n                                                                  :                                         +- CometFilter\n                                                                  :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                           +- CometNativeColumnarToRow\n                                                                              +- CometSort\n                                                                                 +- CometColumnarExchange\n                                                                                    +- HashAggregate\n                                                                                       +- CometNativeColumnarToRow\n                                                                                          +- CometColumnarExchange\n                                                                                             +- HashAggregate\n                                                                                                +- Project\n                                                                                                   +- BroadcastHashJoin\n                                                                                                      :- Filter\n                                                                                                      :  +- ColumnarToRow\n                                                                                                      :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                                                      :           +- ReusedSubquery\n                                                                                                      +- BroadcastExchange\n                                                                                                         +- CometNativeColumnarToRow\n                                                                                                            +- CometProject\n                                                                                                               +- CometFilter\n                                                                                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 82 out of 196 eligible operators (41%). Final plan contains 42 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q51a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- HashAggregate\n         +- Project\n            +- BroadcastHashJoin\n               :-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :  +- CometNativeColumnarToRow\n               :     +- CometSort\n               :        +- CometExchange\n               :           +- CometProject\n               :              +- CometFilter\n               :                 +- CometSortMergeJoin\n               :                    :- CometSort\n               :                    :  +- CometColumnarExchange\n               :                    :     +- HashAggregate\n               :                    :        +- CometNativeColumnarToRow\n               :                    :           +- CometColumnarExchange\n               :                    :              +- HashAggregate\n               :                    :                 +- Project\n               :                    :                    +- BroadcastHashJoin\n               :                    :                       :- Project\n               :                    :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                       :     +- CometNativeColumnarToRow\n               :                    :                       :        +- CometSort\n               :                    :                       :           +- CometExchange\n               :                    :                       :              +- CometHashAggregate\n               :                    :                       :                 +- CometExchange\n               :                    :                       :                    +- CometHashAggregate\n               :                    :                       :                       +- CometProject\n               :                    :                       :                          +- CometBroadcastHashJoin\n               :                    :                       :                             :- CometFilter\n               :                    :                       :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                       :                             :        +- SubqueryBroadcast\n               :                    :                       :                             :           +- BroadcastExchange\n               :                    :                       :                             :              +- CometNativeColumnarToRow\n               :                    :                       :                             :                 +- CometProject\n               :                    :                       :                             :                    +- CometFilter\n               :                    :                       :                             :            
           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                       :                             +- CometBroadcastExchange\n               :                    :                       :                                +- CometProject\n               :                    :                       :                                   +- CometFilter\n               :                    :                       :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                       +- BroadcastExchange\n               :                    :                          +- Project\n               :                    :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                    :                                +- CometNativeColumnarToRow\n               :                    :                                   +- CometSort\n               :                    :                                      +- CometExchange\n               :                    :                                         +- CometHashAggregate\n               :                    :                                            +- CometExchange\n               :                    :                                               +- CometHashAggregate\n               :                    :                                                  +- CometProject\n               :                    :                                                     +- CometBroadcastHashJoin\n               :                    :                                                        :- CometFilter\n               :                    :                                                        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                    :                                                        :        +- SubqueryBroadcast\n               :                    :                                                        :           +- BroadcastExchange\n               :                    :                                                        :              +- CometNativeColumnarToRow\n               :                    :                                                        :                 +- CometProject\n               :                    :                                                        :                    +- CometFilter\n               :                    :                                                        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :                                                        +- CometBroadcastExchange\n               :                    :                                                           +- CometProject\n               :                    :                                                              +- CometFilter\n               :                    :                                                                 +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    +- CometSort\n               :                       +- CometColumnarExchange\n               :                          +- HashAggregate\n               :                             +- CometNativeColumnarToRow\n               :                                +- CometColumnarExchange\n               :                                   +- HashAggregate\n               :                                      +- Project\n               :                                         +- BroadcastHashJoin\n               :                                            :- Project\n               :                                            :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                            :     +- CometNativeColumnarToRow\n               :                                            :        +- CometSort\n               :                                            :           +- CometExchange\n               :                                            :              +- CometHashAggregate\n               :                                            :                 +- CometExchange\n               :                                            :                    +- CometHashAggregate\n               :                                            :                       +- CometProject\n               :                                            :                          +- CometBroadcastHashJoin\n               :                                            :                             :- CometFilter\n               :                                            :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                            :                             :        +- ReusedSubquery\n               :                                            :                             +- CometBroadcastExchange\n               :                                            :                                +- CometProject\n               :                                            :                                   +- CometFilter\n               :                                            :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                            +- BroadcastExchange\n               :                                               +- Project\n               :                                                  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               :                                                     +- CometNativeColumnarToRow\n               :                                                        +- CometSort\n               :                                                           +- CometExchange\n               :                                                              +- CometHashAggregate\n               :                                                                 +- CometExchange\n               :                                                                    +- CometHashAggregate\n               :                                                                       +- CometProject\n               :                                                                          +- CometBroadcastHashJoin\n               :                                                                             :- CometFilter\n               :                                                                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                                                                             :        +- ReusedSubquery\n               :                                                                             +- CometBroadcastExchange\n               :                                                                                +- CometProject\n               :                                                                                   +- CometFilter\n               :                                                                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               +- BroadcastExchange\n                  +- Project\n                     +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                        +- CometNativeColumnarToRow\n                           +- CometSort\n                              +- CometExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometSortMergeJoin\n                                          :- CometSort\n                                          :  +- CometColumnarExchange\n                                          :     +- HashAggregate\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometColumnarExchange\n                                          :              +- HashAggregate\n                                          :                 +- Project\n                                          :                    +- BroadcastHashJoin\n                                          :                       :- Project\n                                          :                       :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                       :     +- CometNativeColumnarToRow\n                                          :                       :        +- CometSort\n                                          :                       :           +- CometExchange\n                                          :                       :              +- CometHashAggregate\n                                          :                       :                 +- CometExchange\n                                          :                       :                    +- CometHashAggregate\n                                          :                       :                       +- CometProject\n                                          :                       :                          +- CometBroadcastHashJoin\n                                          :                       :                             :- CometFilter\n                                          :                       :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                          :                       :                             :        +- SubqueryBroadcast\n                                          :                       :                             :           +- BroadcastExchange\n                                          :                       :                             :              +- CometNativeColumnarToRow\n                                          :                       :                             :                 +- CometProject\n                                          :                       :                             :                    +- CometFilter\n                                          :                       :                             :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                       :                             +- CometBroadcastExchange\n                                          :                       :                                +- CometProject\n                                          :                       :                                   +- CometFilter\n                                          :                       :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                       +- BroadcastExchange\n                                          :                          +- Project\n                                          :                             +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                          :                                +- CometNativeColumnarToRow\n                                          :                                   +- CometSort\n                                          :                                      +- CometExchange\n                                          :                                         +- CometHashAggregate\n                                          :                                            +- CometExchange\n                                          :                                               +- CometHashAggregate\n                                          :                                                  +- CometProject\n                                          :                                                     +- CometBroadcastHashJoin\n                                          :                                                        :- CometFilter\n                                          :                                                        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                          :                                                        :        +- SubqueryBroadcast\n                                          :                                                        :           +- BroadcastExchange\n                                          :                                                        :              +- CometNativeColumnarToRow\n                                          :                                                        :                 +- CometProject\n                                          :                                                        :                    +- CometFilter\n                                          :                                                        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                                                        +- CometBroadcastExchange\n                                          :                                                           +- CometProject\n                                          :                                                              +- CometFilter\n                                          :                                                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          +- CometSort\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- CometNativeColumnarToRow\n                                                      +- CometColumnarExchange\n                                                         +- HashAggregate\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- Project\n                                                                  :  +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). 
To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                  :     +- CometNativeColumnarToRow\n                                                                  :        +- CometSort\n                                                                  :           +- CometExchange\n                                                                  :              +- CometHashAggregate\n                                                                  :                 +- CometExchange\n                                                                  :                    +- CometHashAggregate\n                                                                  :                       +- CometProject\n                                                                  :                          +- CometBroadcastHashJoin\n                                                                  :                             :- CometFilter\n                                                                  :                             :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                  :                             :        +- ReusedSubquery\n                                                                  :                             +- CometBroadcastExchange\n                                                                  :                                +- CometProject\n                                                                  :                                   +- CometFilter\n                                                                  :                                      +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n                                                                           +- CometNativeColumnarToRow\n                                                                              +- CometSort\n                                                                                 +- CometExchange\n                                                                                    +- CometHashAggregate\n                                                                                       +- CometExchange\n                                                                                          +- CometHashAggregate\n                                                                                             +- CometProject\n                                                                                                +- CometBroadcastHashJoin\n                                                                                                   :- CometFilter\n                                                                                                   :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                                                   :        +- ReusedSubquery\n                                                                                                   +- CometBroadcastExchange\n                                                                                                      +- CometProject\n                                                                                                         +- CometFilter\n                                                                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 138 out of 196 eligible operators (70%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q57.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometColumnarExchange\n      :     :                       +- HashAggregate\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometColumnarExchange\n      :     :                                +- HashAggregate\n      :     :                                   +- Project\n      :     :                                      +- BroadcastHashJoin\n      :     :                                         :- Project\n      :     :                                         :  +- BroadcastHashJoin\n      :     :                                         :     :- Project\n      :     :                                         :     :  +- BroadcastHashJoin\n      :     :                                         :     :     :- CometNativeColumnarToRow\n      :     :                                         :     :     :  +- CometProject\n      :     :                                         :     :     :     +- CometFilter\n      :     :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :     :                                         :     :     +- BroadcastExchange\n      :     :                                         :     :        +- Filter\n      :     :                                         :     :           +- ColumnarToRow\n      :     :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                                         :     :                    +- SubqueryBroadcast\n      :     :                                         :     :                       +- BroadcastExchange\n      :     :                                         :     :                          +- CometNativeColumnarToRow\n      :     :                                         :     :                             +- CometFilter\n      :     :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         :     +- BroadcastExchange\n      :     :                                         :        +- CometNativeColumnarToRow\n      :     :                                         :           +- CometFilter\n      :     :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :                                         +- BroadcastExchange\n      :     :                                            +- CometNativeColumnarToRow\n      :     :                                               +- CometFilter\n      :     :                                        
          +- CometNativeScan parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometColumnarExchange\n      :                       +- HashAggregate\n      :                          +- CometNativeColumnarToRow\n      :                             +- CometColumnarExchange\n      :                                +- HashAggregate\n      :                                   +- Project\n      :                                      +- BroadcastHashJoin\n      :                                         :- Project\n      :                                         :  +- BroadcastHashJoin\n      :                                         :     :- Project\n      :                                         :     :  +- BroadcastHashJoin\n      :                                         :     :     :- CometNativeColumnarToRow\n      :                                         :     :     :  +- CometProject\n      :                                         :     :     :     +- CometFilter\n      :                                         :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n      :                                         :     :     +- BroadcastExchange\n      :                                         :     :        +- Filter\n      :                                         :     :           +- ColumnarToRow\n      :                                         :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                                         :     :                    +- SubqueryBroadcast\n      :                                         :     :                       +- BroadcastExchange\n      :                                         :     :                          +- CometNativeColumnarToRow\n      :                                         :     :                             +- CometFilter\n      :                                         :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         :     +- BroadcastExchange\n      :                                         :        +- CometNativeColumnarToRow\n      :                                         :           +- CometFilter\n      :                                         :              +- CometNativeScan parquet spark_catalog.default.date_dim\n      :                                         +- BroadcastExchange\n      :                                            +- CometNativeColumnarToRow\n      :                                               +- CometFilter\n      :                                                  +- CometNativeScan parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). 
To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- CometNativeColumnarToRow\n                              +- CometColumnarExchange\n                                 +- HashAggregate\n                                    +- Project\n                                       +- BroadcastHashJoin\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- Project\n                                          :     :  +- BroadcastHashJoin\n                                          :     :     :- CometNativeColumnarToRow\n                                          :     :     :  +- CometProject\n                                          :     :     :     +- CometFilter\n                                          :     :     :        +- CometNativeScan parquet spark_catalog.default.item\n                                          :     :     +- BroadcastExchange\n                                          :     :        +- Filter\n                                          :     :           +- ColumnarToRow\n                                          :     :              +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    +- SubqueryBroadcast\n                                          :     :                       +- BroadcastExchange\n                                          :     :                          +- CometNativeColumnarToRow\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- BroadcastExchange\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometFilter\n                                          :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- BroadcastExchange\n                                             +- CometNativeColumnarToRow\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.call_center\n\nComet accelerated 36 out of 97 eligible operators (37%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q57.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- Project\n      :     :  +- Filter\n      :     :     +- Window\n      :     :        +- Filter\n      :     :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :     :              +- CometNativeColumnarToRow\n      :     :                 +- CometSort\n      :     :                    +- CometExchange\n      :     :                       +- CometHashAggregate\n      :     :                          +- CometExchange\n      :     :                             +- CometHashAggregate\n      :     :                                +- CometProject\n      :     :                                   +- CometBroadcastHashJoin\n      :     :                                      :- CometProject\n      :     :                                      :  +- CometBroadcastHashJoin\n      :     :                                      :     :- CometProject\n      :     :                                      :     :  +- CometBroadcastHashJoin\n      :     :                                      :     :     :- CometProject\n      :     :                                      :     :     :  +- CometFilter\n      :     :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :     :                                      :     :     +- CometBroadcastExchange\n      :     :                                      :     :        +- CometFilter\n      :     :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :     :                                      :     :                 +- SubqueryBroadcast\n      :     :                                      :     :                    +- BroadcastExchange\n      :     :                                      :     :                       +- CometNativeColumnarToRow\n      :     :                                      :     :                          +- CometFilter\n      :     :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      :     +- CometBroadcastExchange\n      :     :                                      :        +- CometFilter\n      :     :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :     :                                      +- CometBroadcastExchange\n      :     :                                         +- CometFilter\n      :     :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      :     +- BroadcastExchange\n      :        +- Project\n      :           +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      :              +- CometNativeColumnarToRow\n      :                 +- CometSort\n      :                    +- CometExchange\n      :                       +- CometHashAggregate\n      :                          +- CometExchange\n      :                             +- CometHashAggregate\n      :                                +- CometProject\n      :                                   +- CometBroadcastHashJoin\n      :                                      :- CometProject\n      :                                      :  +- CometBroadcastHashJoin\n      :                                      :     :- CometProject\n      :                                      :     :  +- CometBroadcastHashJoin\n      :                                      :     :     :- CometProject\n      :                                      :     :     :  +- CometFilter\n      :                                      :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n      :                                      :     :     +- CometBroadcastExchange\n      :                                      :     :        +- CometFilter\n      :                                      :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n      :                                      :     :                 +- SubqueryBroadcast\n      :                                      :     :                    +- BroadcastExchange\n      :                                      :     :                       +- CometNativeColumnarToRow\n      :                                      :     :                          +- CometFilter\n      :                                      :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      :     +- CometBroadcastExchange\n      :                                      :        +- CometFilter\n      :                                      :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n      :                                      +- CometBroadcastExchange\n      :                                         +- CometFilter\n      :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n      +- BroadcastExchange\n         +- Project\n            +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. 
For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n               +- CometNativeColumnarToRow\n                  +- CometSort\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometProject\n                                    +- CometBroadcastHashJoin\n                                       :- CometProject\n                                       :  +- CometBroadcastHashJoin\n                                       :     :- CometProject\n                                       :     :  +- CometBroadcastHashJoin\n                                       :     :     :- CometProject\n                                       :     :     :  +- CometFilter\n                                       :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :     +- CometBroadcastExchange\n                                       :     :        +- CometFilter\n                                       :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :                 +- SubqueryBroadcast\n                                       :     :                    +- BroadcastExchange\n                                       :     :                       +- CometNativeColumnarToRow\n                                       :     :                          +- CometFilter\n                                       :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometBroadcastExchange\n                                       :        +- CometFilter\n                                       :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       +- CometBroadcastExchange\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.call_center\n\nComet accelerated 75 out of 97 eligible operators (77%). Final plan contains 6 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q5a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- HashAggregate\n               :              :  +- CometNativeColumnarToRow\n               :              :     +- CometColumnarExchange\n               :              :        +- HashAggregate\n               :              :           +- Project\n               :              :              +- BroadcastHashJoin\n               :              :                 :- Project\n               :              :                 :  +- BroadcastHashJoin\n               :              :                 :     :- Union\n               :              :                 :     :  :- Project\n               :              :                 :     :  :  +- Filter\n               :              :                 :     :  :     +- ColumnarToRow\n               :              :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :  :              +- SubqueryBroadcast\n               :              :                 :     :  :                 +- BroadcastExchange\n               :              :                 :     :  :                    +- CometNativeColumnarToRow\n               :              :                 :     :  :                       +- CometProject\n               :              :                 :     :  :                          +- CometFilter\n               :              :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                 :     :  +- Project\n               :              :                 :     :     +- Filter\n               :              :                 :     :        +- ColumnarToRow\n               :              :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :                 +- ReusedSubquery\n               :              :                 :     +- BroadcastExchange\n               :              :                 :        +- CometNativeColumnarToRow\n               :              :                 :           +- CometProject\n               :              :                 :              +- CometFilter\n               :              :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                 +- BroadcastExchange\n               :              :                    +- CometNativeColumnarToRow\n               :              :                       +- CometProject\n               :              :                          +- CometFilter\n               :              :                             +- CometNativeScan parquet spark_catalog.default.store\n               :              :- HashAggregate\n               :              :  +- CometNativeColumnarToRow\n               :              :     +- CometColumnarExchange\n               :              : 
       +- HashAggregate\n               :              :           +- Project\n               :              :              +- BroadcastHashJoin\n               :              :                 :- Project\n               :              :                 :  +- BroadcastHashJoin\n               :              :                 :     :- Union\n               :              :                 :     :  :- Project\n               :              :                 :     :  :  +- Filter\n               :              :                 :     :  :     +- ColumnarToRow\n               :              :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :  :              +- ReusedSubquery\n               :              :                 :     :  +- Project\n               :              :                 :     :     +- Filter\n               :              :                 :     :        +- ColumnarToRow\n               :              :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                 :     :                 +- ReusedSubquery\n               :              :                 :     +- BroadcastExchange\n               :              :                 :        +- CometNativeColumnarToRow\n               :              :                 :           +- CometProject\n               :              :                 :              +- CometFilter\n               :              :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                 +- BroadcastExchange\n               :              :                    +- CometNativeColumnarToRow\n               :              :                       +- CometProject\n               :              :                          +- CometFilter\n               :              :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :              +- HashAggregate\n               :                 +- CometNativeColumnarToRow\n               :                    +- CometColumnarExchange\n               :                       +- HashAggregate\n               :                          +- Project\n               :                             +- BroadcastHashJoin\n               :                                :- Project\n               :                                :  +- BroadcastHashJoin\n               :                                :     :- Union\n               :                                :     :  :- Project\n               :                                :     :  :  +- Filter\n               :                                :     :  :     +- ColumnarToRow\n               :                                :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                :     :  :              +- ReusedSubquery\n               :                                :     :  +- Project\n               :                                :     :     +- BroadcastHashJoin\n               :                                :     :        :- BroadcastExchange\n               :                               
 :     :        :  +- ColumnarToRow\n               :                                :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                :     :        :           +- ReusedSubquery\n               :                                :     :        +- CometNativeColumnarToRow\n               :                                :     :           +- CometProject\n               :                                :     :              +- CometFilter\n               :                                :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n               :                                :     +- BroadcastExchange\n               :                                :        +- CometNativeColumnarToRow\n               :                                :           +- CometProject\n               :                                :              +- CometFilter\n               :                                :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                +- BroadcastExchange\n               :                                   +- CometNativeColumnarToRow\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :                                            +- CometNativeScan parquet spark_catalog.default.web_site\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- HashAggregate\n               :                          :  +- CometNativeColumnarToRow\n               :                          :     +- CometColumnarExchange\n               :                          :        +- HashAggregate\n               :                          :           +- Project\n               :                          :              +- BroadcastHashJoin\n               :                          :                 :- Project\n               :                          :                 :  +- BroadcastHashJoin\n               :                          :                 :     :- Union\n               :                          :                 :     :  :- Project\n               :                          :                 :     :  :  +- Filter\n               :                          :                 :     :  :     +- ColumnarToRow\n               :                          :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                 :     :  :              +- SubqueryBroadcast\n               :                          :                 :     :  :                 +- BroadcastExchange\n               :                          :                 :     :  :                    +- CometNativeColumnarToRow\n               :                          :                 :     :  :                      
 +- CometProject\n               :                          :                 :     :  :                          +- CometFilter\n               :                          :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                 :     :  +- Project\n               :                          :                 :     :     +- Filter\n               :                          :                 :     :        +- ColumnarToRow\n               :                          :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                 :     :                 +- ReusedSubquery\n               :                          :                 :     +- BroadcastExchange\n               :                          :                 :        +- CometNativeColumnarToRow\n               :                          :                 :           +- CometProject\n               :                          :                 :              +- CometFilter\n               :                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                 +- BroadcastExchange\n               :                          :                    +- CometNativeColumnarToRow\n               :                          :                       +- CometProject\n               :                          :                          +- CometFilter\n               :                          :                             +- CometNativeScan parquet spark_catalog.default.store\n               :                          :- HashAggregate\n               :                          :  +- CometNativeColumnarToRow\n               :                          :     +- CometColumnarExchange\n               :                          :        +- HashAggregate\n               :                          :           +- Project\n               :                          :              +- BroadcastHashJoin\n               :                          :                 :- Project\n               :                          :                 :  +- BroadcastHashJoin\n               :                          :                 :     :- Union\n               :                          :                 :     :  :- Project\n               :                          :                 :     :  :  +- Filter\n               :                          :                 :     :  :     +- ColumnarToRow\n               :                          :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                 :     :  :              +- ReusedSubquery\n               :                          :                 :     :  +- Project\n               :                          :                 :     :     +- Filter\n               :                          :                 :     :        +- ColumnarToRow\n               :                          :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    
      :                 :     :                 +- ReusedSubquery\n               :                          :                 :     +- BroadcastExchange\n               :                          :                 :        +- CometNativeColumnarToRow\n               :                          :                 :           +- CometProject\n               :                          :                 :              +- CometFilter\n               :                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                 +- BroadcastExchange\n               :                          :                    +- CometNativeColumnarToRow\n               :                          :                       +- CometProject\n               :                          :                          +- CometFilter\n               :                          :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :                          +- HashAggregate\n               :                             +- CometNativeColumnarToRow\n               :                                +- CometColumnarExchange\n               :                                   +- HashAggregate\n               :                                      +- Project\n               :                                         +- BroadcastHashJoin\n               :                                            :- Project\n               :                                            :  +- BroadcastHashJoin\n               :                                            :     :- Union\n               :                                            :     :  :- Project\n               :                                            :     :  :  +- Filter\n               :                                            :     :  :     +- ColumnarToRow\n               :                                            :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                            :     :  :              +- ReusedSubquery\n               :                                            :     :  +- Project\n               :                                            :     :     +- BroadcastHashJoin\n               :                                            :     :        :- BroadcastExchange\n               :                                            :     :        :  +- ColumnarToRow\n               :                                            :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                            :     :        :           +- ReusedSubquery\n               :                                            :     :        +- CometNativeColumnarToRow\n               :                                            :     :           +- CometProject\n               :                                            :     :              +- CometFilter\n               :                                            :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n               :                                            :     +- BroadcastExchange\n               :                                            :        +- 
CometNativeColumnarToRow\n               :                                            :           +- CometProject\n               :                                            :              +- CometFilter\n               :                                            :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                            +- BroadcastExchange\n               :                                               +- CometNativeColumnarToRow\n               :                                                  +- CometProject\n               :                                                     +- CometFilter\n               :                                                        +- CometNativeScan parquet spark_catalog.default.web_site\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- HashAggregate\n                                          :  +- CometNativeColumnarToRow\n                                          :     +- CometColumnarExchange\n                                          :        +- HashAggregate\n                                          :           +- Project\n                                          :              +- BroadcastHashJoin\n                                          :                 :- Project\n                                          :                 :  +- BroadcastHashJoin\n                                          :                 :     :- Union\n                                          :                 :     :  :- Project\n                                          :                 :     :  :  +- Filter\n                                          :                 :     :  :     +- ColumnarToRow\n                                          :                 :     :  :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                 :     :  :              +- SubqueryBroadcast\n                                          :                 :     :  :                 +- BroadcastExchange\n                                          :                 :     :  :                    +- CometNativeColumnarToRow\n                                          :                 :     :  :                       +- CometProject\n                                          :                 :     :  :                          +- CometFilter\n                                          :                 :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                 :     :  +- Project\n                                          :                 :     :     +- Filter\n                                          :                 :     :        +- ColumnarToRow\n                                          :                 :     :           +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n       
                                   :                 :     :                 +- ReusedSubquery\n                                          :                 :     +- BroadcastExchange\n                                          :                 :        +- CometNativeColumnarToRow\n                                          :                 :           +- CometProject\n                                          :                 :              +- CometFilter\n                                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                 +- BroadcastExchange\n                                          :                    +- CometNativeColumnarToRow\n                                          :                       +- CometProject\n                                          :                          +- CometFilter\n                                          :                             +- CometNativeScan parquet spark_catalog.default.store\n                                          :- HashAggregate\n                                          :  +- CometNativeColumnarToRow\n                                          :     +- CometColumnarExchange\n                                          :        +- HashAggregate\n                                          :           +- Project\n                                          :              +- BroadcastHashJoin\n                                          :                 :- Project\n                                          :                 :  +- BroadcastHashJoin\n                                          :                 :     :- Union\n                                          :                 :     :  :- Project\n                                          :                 :     :  :  +- Filter\n                                          :                 :     :  :     +- ColumnarToRow\n                                          :                 :     :  :        +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                 :     :  :              +- ReusedSubquery\n                                          :                 :     :  +- Project\n                                          :                 :     :     +- Filter\n                                          :                 :     :        +- ColumnarToRow\n                                          :                 :     :           +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                 :     :                 +- ReusedSubquery\n                                          :                 :     +- BroadcastExchange\n                                          :                 :        +- CometNativeColumnarToRow\n                                          :                 :           +- CometProject\n                                          :                 :              +- CometFilter\n                                          :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                 +- BroadcastExchange\n                                          :                    +- CometNativeColumnarToRow\n                
                          :                       +- CometProject\n                                          :                          +- CometFilter\n                                          :                             +- CometNativeScan parquet spark_catalog.default.catalog_page\n                                          +- HashAggregate\n                                             +- CometNativeColumnarToRow\n                                                +- CometColumnarExchange\n                                                   +- HashAggregate\n                                                      +- Project\n                                                         +- BroadcastHashJoin\n                                                            :- Project\n                                                            :  +- BroadcastHashJoin\n                                                            :     :- Union\n                                                            :     :  :- Project\n                                                            :     :  :  +- Filter\n                                                            :     :  :     +- ColumnarToRow\n                                                            :     :  :        +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                            :     :  :              +- ReusedSubquery\n                                                            :     :  +- Project\n                                                            :     :     +- BroadcastHashJoin\n                                                            :     :        :- BroadcastExchange\n                                                            :     :        :  +- ColumnarToRow\n                                                            :     :        :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                            :     :        :           +- ReusedSubquery\n                                                            :     :        +- CometNativeColumnarToRow\n                                                            :     :           +- CometProject\n                                                            :     :              +- CometFilter\n                                                            :     :                 +- CometNativeScan parquet spark_catalog.default.web_sales\n                                                            :     +- BroadcastExchange\n                                                            :        +- CometNativeColumnarToRow\n                                                            :           +- CometProject\n                                                            :              +- CometFilter\n                                                            :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                            +- BroadcastExchange\n                                                               +- CometNativeColumnarToRow\n                                                                  +- CometProject\n                                                                     +- CometFilter\n                                                                        +- 
CometNativeScan parquet spark_catalog.default.web_site\n\nComet accelerated 89 out of 263 eligible operators (33%). Final plan contains 57 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q5a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometUnion\n               :           :              :     :  :- CometProject\n               :           :              :     :  :  +- CometFilter\n               :           :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :              :     :  :           +- SubqueryBroadcast\n               :           :              :     :  :              +- BroadcastExchange\n               :           :              :     :  :                 +- CometNativeColumnarToRow\n               :           :              :     :  :                    +- CometProject\n               :           :              :     :  :                       +- CometFilter\n               :           :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :  +- CometProject\n               :           :              :     :     +- CometFilter\n               :           :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :           :              :     :              +- ReusedSubquery\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometUnion\n               :           :              :     :  :- CometProject\n               :           :              :     :  :  +- CometFilter\n               :           :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :              :     :  :           +- ReusedSubquery\n               
:           :              :     :  +- CometProject\n               :           :              :     :     +- CometFilter\n               :           :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :           :              :     :              +- ReusedSubquery\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometProject\n               :                          :  +- CometBroadcastHashJoin\n               :                          :     :- CometUnion\n               :                          :     :  :- CometProject\n               :                          :     :  :  +- CometFilter\n               :                          :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :     :  :           +- ReusedSubquery\n               :                          :     :  +- CometProject\n               :                          :     :     +- CometBroadcastHashJoin\n               :                          :     :        :- CometBroadcastExchange\n               :                          :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                          :     :        :        +- ReusedSubquery\n               :                          :     :        +- CometProject\n               :                          :     :           +- CometFilter\n               :                          :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :     +- CometBroadcastExchange\n               :                          :        +- CometProject\n               :                          :           +- CometFilter\n               :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :               
  +- CometUnion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometUnion\n               :                    :              :     :  :- CometProject\n               :                    :              :     :  :  +- CometFilter\n               :                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :              :     :  :           +- SubqueryBroadcast\n               :                    :              :     :  :              +- BroadcastExchange\n               :                    :              :     :  :                 +- CometNativeColumnarToRow\n               :                    :              :     :  :                    +- CometProject\n               :                    :              :     :  :                       +- CometFilter\n               :                    :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :  +- CometProject\n               :                    :              :     :     +- CometFilter\n               :                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :                    :              :     :              +- ReusedSubquery\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometUnion\n               :                    :              :     :  :- CometProject\n               :                    :              :     :  :  +- CometFilter\n               :                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :              :     :  
:           +- ReusedSubquery\n               :                    :              :     :  +- CometProject\n               :                    :              :     :     +- CometFilter\n               :                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                    :              :     :              +- ReusedSubquery\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :                    +- CometHashAggregate\n               :                       +- CometExchange\n               :                          +- CometHashAggregate\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometProject\n               :                                   :  +- CometBroadcastHashJoin\n               :                                   :     :- CometUnion\n               :                                   :     :  :- CometProject\n               :                                   :     :  :  +- CometFilter\n               :                                   :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :     :  :           +- ReusedSubquery\n               :                                   :     :  +- CometProject\n               :                                   :     :     +- CometBroadcastHashJoin\n               :                                   :     :        :- CometBroadcastExchange\n               :                                   :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                   :     :        :        +- ReusedSubquery\n               :                                   :     :        +- CometProject\n               :                                   :     :           +- CometFilter\n               :                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :     +- CometBroadcastExchange\n               :                                   :        +- CometProject\n               :                                   :           +- CometFilter\n               :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :         
                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometUnion\n                                    :              :     :  :- CometProject\n                                    :              :     :  :  +- CometFilter\n                                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :              :     :  :           +- SubqueryBroadcast\n                                    :              :     :  :              +- BroadcastExchange\n                                    :              :     :  :                 +- CometNativeColumnarToRow\n                                    :              :     :  :                    +- CometProject\n                                    :              :     :  :                       +- CometFilter\n                                    :              :     :  :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :  +- CometProject\n                                    :              :     :     +- CometFilter\n                                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                    :              :     :              +- ReusedSubquery\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                              
      :              :     :- CometUnion\n                                    :              :     :  :- CometProject\n                                    :              :     :  :  +- CometFilter\n                                    :              :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :              :     :  :           +- ReusedSubquery\n                                    :              :     :  +- CometProject\n                                    :              :     :     +- CometFilter\n                                    :              :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                    :              :     :              +- ReusedSubquery\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometUnion\n                                                   :     :  :- CometProject\n                                                   :     :  :  +- CometFilter\n                                                   :     :  :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     :  :           +- ReusedSubquery\n                                                   :     :  +- CometProject\n                                                   :     :     +- CometBroadcastHashJoin\n                                                   :     :        :- CometBroadcastExchange\n                                                   :     :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                                   :     :        :        +- ReusedSubquery\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           
+- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n\nComet accelerated 242 out of 263 eligible operators (92%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q6.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- HashAggregate\n      +- CometNativeColumnarToRow\n         +- CometColumnarExchange\n            +- HashAggregate\n               +- Project\n                  +- BroadcastHashJoin\n                     :- Project\n                     :  +- BroadcastHashJoin\n                     :     :- Project\n                     :     :  +- BroadcastHashJoin\n                     :     :     :- CometNativeColumnarToRow\n                     :     :     :  +- CometProject\n                     :     :     :     +- CometBroadcastHashJoin\n                     :     :     :        :- CometProject\n                     :     :     :        :  +- CometFilter\n                     :     :     :        :     +- CometNativeScan parquet spark_catalog.default.customer_address\n                     :     :     :        +- CometBroadcastExchange\n                     :     :     :           +- CometFilter\n                     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer\n                     :     :     +- BroadcastExchange\n                     :     :        +- Filter\n                     :     :           +- ColumnarToRow\n                     :     :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                     :     :                    +- SubqueryBroadcast\n                     :     :                       +- BroadcastExchange\n                     :     :                          +- CometNativeColumnarToRow\n                     :     :                             +- CometProject\n                     :     :                                +- CometFilter\n                     :     :                                   :  +- ReusedSubquery\n                     :     :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     :                                         +- Subquery\n                     :     :                                            +- CometNativeColumnarToRow\n                     :     :                                               +- CometHashAggregate\n                     :     :                                                  +- CometExchange\n                     :     :                                                     +- CometHashAggregate\n                     :     :                                                        +- CometProject\n                     :     :                                                           +- CometFilter\n                     :     :                                                              +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :     +- BroadcastExchange\n                     :        +- CometNativeColumnarToRow\n                     :           +- CometProject\n                     :              +- CometFilter\n                     :                 :  +- ReusedSubquery\n                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                     :                       +- Subquery\n                     :                          +- CometNativeColumnarToRow\n                     :                             +- CometHashAggregate\n                     :                                +- CometExchange\n                     :                                   +- CometHashAggregate\n                  
   :                                      +- CometProject\n                     :                                         +- CometFilter\n                     :                                            +- CometNativeScan parquet spark_catalog.default.date_dim\n                     +- BroadcastExchange\n                        +- CometNativeColumnarToRow\n                           +- CometProject\n                              +- CometBroadcastHashJoin\n                                 :- CometFilter\n                                 :  +- CometNativeScan parquet spark_catalog.default.item\n                                 +- CometBroadcastExchange\n                                    +- CometFilter\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometFilter\n                                                      +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 39 out of 60 eligible operators (65%). Final plan contains 8 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q6.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometFilter\n      +- CometHashAggregate\n         +- CometExchange\n            +- CometHashAggregate\n               +- CometProject\n                  +- CometBroadcastHashJoin\n                     :- CometProject\n                     :  +- CometBroadcastHashJoin\n                     :     :- CometProject\n                     :     :  +- CometBroadcastHashJoin\n                     :     :     :- CometProject\n                     :     :     :  +- CometBroadcastHashJoin\n                     :     :     :     :- CometProject\n                     :     :     :     :  +- CometFilter\n                     :     :     :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                     :     :     :     +- CometBroadcastExchange\n                     :     :     :        +- CometFilter\n                     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                     :     :     +- CometBroadcastExchange\n                     :     :        +- CometFilter\n                     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                     :     :                 +- SubqueryBroadcast\n                     :     :                    +- BroadcastExchange\n                     :     :                       +- CometNativeColumnarToRow\n                     :     :                          +- CometProject\n                     :     :                             +- CometFilter\n                     :     :                                :  +- ReusedSubquery\n                     :     :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     :                                      +- Subquery\n                     :     :                                         +- CometNativeColumnarToRow\n                     :     :                                            +- CometHashAggregate\n                     :     :                                               +- CometExchange\n                     :     :                                                  +- CometHashAggregate\n                     :     :                                                     +- CometProject\n                     :     :                                                        +- CometFilter\n                     :     :                                                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :     +- CometBroadcastExchange\n                     :        +- CometProject\n                     :           +- CometFilter\n                     :              :  +- ReusedSubquery\n                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                     :                    +- ReusedSubquery\n                     +- CometBroadcastExchange\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometFilter\n                              :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometHashAggregate\n                     
                  +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 48 out of 54 eligible operators (88%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q64.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n            :                 :     :   
  :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     
                  +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     
:     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                
 :        +- CometFilter\n            :                 :           +- CometNativeScan parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometNativeScan parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     
:     :     :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometColumnarExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- Project\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- BroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- Filter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- ColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :        +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                           
   :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometNativeScan parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :   
  :     :     :     :           +- CometNativeScan parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometNativeScan parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometNativeScan parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n             
                 :     :        +- CometFilter\n                              :     :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometNativeScan parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 228 out of 242 eligible operators (94%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q64.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometExchange\n      +- CometProject\n         +- CometSortMergeJoin\n            :- CometSort\n            :  +- CometExchange\n            :     +- CometHashAggregate\n            :        +- CometHashAggregate\n            :           +- CometProject\n            :              +- CometBroadcastHashJoin\n            :                 :- CometProject\n            :                 :  +- CometBroadcastHashJoin\n            :                 :     :- CometProject\n            :                 :     :  +- CometBroadcastHashJoin\n            :                 :     :     :- CometProject\n            :                 :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :- CometProject\n            :                 :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :- CometProject\n            :                 :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n            :                 :     :     :     
:     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             
:- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n            :                 :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n            :                 :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n            :                 :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :     :        +- CometProject\n            :                 :     :     :     :     :     :     :     :           +- CometFilter\n            :                 :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n            :                 :     :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n            :                 :     :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :     :        +- CometFilter\n            :                 :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n            :                 :     :     :     :     +- CometBroadcastExchange\n            :                 :     :     :     :        +- CometProject\n            :                 :     :     :     :           +- CometFilter\n            :                 :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     :     +- CometBroadcastExchange\n            :                 :     :     :        +- CometProject\n            :                 :     :     :           +- CometFilter\n            :                 :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n            :                 :     :     +- CometBroadcastExchange\n            :                 :     :        +- CometFilter\n            :                 :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 :     +- CometBroadcastExchange\n            :                 :        +- CometFilter\n            :       
          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n            :                 +- CometBroadcastExchange\n            :                    +- CometProject\n            :                       +- CometFilter\n            :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n            +- CometSort\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometBroadcastHashJoin\n                              :     :- CometProject\n                              :     :  +- CometBroadcastHashJoin\n                              :     :     :- CometProject\n                              :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :- CometProject\n                              :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :- CometProject\n                              :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :   
  :     :     :     :     :     :     :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometBroadcastHashJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :  +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :           +- SubqueryBroadcast\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :              +- BroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                 +- CometNativeColumnarToRow\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                    +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                 +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                    +- CometHashAggregate\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                       +- CometProject\n                              :     :     :     :     :     :     : 
    :     :     :     :     :     :     :     :                          +- CometSortMergeJoin\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :  +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :     +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                             +- CometSort\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                +- CometExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                   +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                      +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     :                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :     :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                              :     :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     : 
    :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :     :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :     :        +- CometProject\n                              :     :     :     :     :     :     :     :           +- CometFilter\n                              :     :     :     :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                              :     :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                              :     :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :     :        +- CometFilter\n                              :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                              :     :     :     :     +- CometBroadcastExchange\n                              :     :     :     :        +- CometProject\n                              :     :     :     :           +- CometFilter\n                              :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     :     +- CometBroadcastExchange\n                              :     :     :        +- CometProject\n                              :     :     :           +- CometFilter\n                              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_address\n                              :     :     +- CometBroadcastExchange\n                              :     :        +- CometFilter\n                              :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              :     +- CometBroadcastExchange\n                              :        +- CometFilter\n                              :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.income_band\n                              +- CometBroadcastExchange\n                                 +- CometProject\n                                    +- CometFilter\n                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 238 out of 242 eligible operators (98%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q67a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- Window\n      +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +- WindowGroupLimit\n                     +- Sort\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Project\n                           :                 :     :  +- BroadcastHashJoin\n                           :                 :     :     :- Filter\n                           :                 :     :     :  +- ColumnarToRow\n                           :                 :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :     :           +- SubqueryBroadcast\n                           :                 :     :     :              +- BroadcastExchange\n                           :                 :     :     :                 +- CometNativeColumnarToRow\n                           :                 :     :     :                    +- CometProject\n                           :                 :     :     :                       +- CometFilter\n                           :                 :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     :     +- BroadcastExchange\n                           :                 :     :        +- CometNativeColumnarToRow\n                           :                 :     :           +- CometProject\n                           :                 :     :              +- CometFilter\n                           :                 :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                 +- BroadcastExchange\n                           :                    +- CometNativeColumnarToRow\n                           :                       +- CometProject\n                           :                          +- CometFilter\n                           :                             +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :   
              +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- 
HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                
           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :      
                       :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             : 
 +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- 
Project\n                           :                             :     :  +- BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Project\n                           :                             :     :  +- 
BroadcastHashJoin\n                           :                             :     :     :- Filter\n                           :                             :     :     :  +- ColumnarToRow\n                           :                             :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :     :           +- SubqueryBroadcast\n                           :                             :     :     :              +- BroadcastExchange\n                           :                             :     :     :                 +- CometNativeColumnarToRow\n                           :                             :     :     :                    +- CometProject\n                           :                             :     :     :                       +- CometFilter\n                           :                             :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     :     +- BroadcastExchange\n                           :                             :     :        +- CometNativeColumnarToRow\n                           :                             :     :           +- CometProject\n                           :                             :     :              +- CometFilter\n                           :                             :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Project\n                                                         :     :  +- BroadcastHashJoin\n                                                         :     :     :- 
Filter\n                                                         :     :     :  +- ColumnarToRow\n                                                         :     :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :     :           +- SubqueryBroadcast\n                                                         :     :     :              +- BroadcastExchange\n                                                         :     :     :                 +- CometNativeColumnarToRow\n                                                         :     :     :                    +- CometProject\n                                                         :     :     :                       +- CometFilter\n                                                         :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     :     +- BroadcastExchange\n                                                         :     :        +- CometNativeColumnarToRow\n                                                         :     :           +- CometProject\n                                                         :     :              +- CometFilter\n                                                         :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                                         :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan parquet spark_catalog.default.store\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 127 out of 285 eligible operators (44%). Final plan contains 63 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q67a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Filter\n   +- Window\n      +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n         +- CometNativeColumnarToRow\n            +- CometSort\n               +- CometColumnarExchange\n                  +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                     +- CometNativeColumnarToRow\n                        +- CometSort\n                           +- CometUnion\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometProject\n                              :           +- CometBroadcastHashJoin\n                              :              :- CometProject\n                              :              :  +- CometBroadcastHashJoin\n                              :              :     :- CometProject\n                              :              :     :  +- CometBroadcastHashJoin\n                              :              :     :     :- CometFilter\n                              :              :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :              :     :     :        +- SubqueryBroadcast\n                              :              :     :     :           +- BroadcastExchange\n                              :              :     :     :              +- CometNativeColumnarToRow\n                              :              :     :     :                 +- CometProject\n                              :              :     :     :                    +- CometFilter\n                              :              :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :              :     :     +- CometBroadcastExchange\n                              :              :     :        +- CometProject\n                              :              :     :           +- CometFilter\n                              :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :              :     +- CometBroadcastExchange\n                              :              :        +- CometProject\n                              :              :           +- CometFilter\n                              :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :              +- CometBroadcastExchange\n                              :                 +- CometProject\n                              :                    +- CometFilter\n                              :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       
:  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                         
     :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n               
               :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :  
                     +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              
:              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              :- CometHashAggregate\n                              :  +- CometExchange\n                              :     +- CometHashAggregate\n                              :        +- CometHashAggregate\n                              :           +- CometExchange\n                              :              +- CometHashAggregate\n                              :                 +- CometProject\n                              :                    +- CometBroadcastHashJoin\n                              :                       :- CometProject\n                              :                       :  +- CometBroadcastHashJoin\n                              :                       :     :- CometProject\n                              :                       :     :  +- CometBroadcastHashJoin\n                              :                       :     :     :- CometFilter\n                              :                       :     :    
 :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                              :                       :     :     :        +- SubqueryBroadcast\n                              :                       :     :     :           +- BroadcastExchange\n                              :                       :     :     :              +- CometNativeColumnarToRow\n                              :                       :     :     :                 +- CometProject\n                              :                       :     :     :                    +- CometFilter\n                              :                       :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     :     +- CometBroadcastExchange\n                              :                       :     :        +- CometProject\n                              :                       :     :           +- CometFilter\n                              :                       :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                              :                       :     +- CometBroadcastExchange\n                              :                       :        +- CometProject\n                              :                       :           +- CometFilter\n                              :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                              :                       +- CometBroadcastExchange\n                              :                          +- CometProject\n                              :                             +- CometFilter\n                              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                              +- CometHashAggregate\n                                 +- CometExchange\n                                    +- CometHashAggregate\n                                       +- CometHashAggregate\n                                          +- CometExchange\n                                             +- CometHashAggregate\n                                                +- CometProject\n                                                   +- CometBroadcastHashJoin\n                                                      :- CometProject\n                                                      :  +- CometBroadcastHashJoin\n                                                      :     :- CometProject\n                                                      :     :  +- CometBroadcastHashJoin\n                                                      :     :     :- CometFilter\n                                                      :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                      :     :     :        +- SubqueryBroadcast\n                                                      :     :     :           +- BroadcastExchange\n                                                      :     :     :              +- CometNativeColumnarToRow\n                                                      :     :     :                 +- CometProject\n                                                      :     :     :                    +- CometFilter\n                                                      
:     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     :     +- CometBroadcastExchange\n                                                      :     :        +- CometProject\n                                                      :     :           +- CometFilter\n                                                      :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                      :     +- CometBroadcastExchange\n                                                      :        +- CometProject\n                                                      :           +- CometFilter\n                                                      :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                      +- CometBroadcastExchange\n                                                         +- CometProject\n                                                            +- CometFilter\n                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 262 out of 285 eligible operators (91%). Final plan contains 11 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q70a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Filter\n                           :                 :     :  +- ColumnarToRow\n                           :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :           +- SubqueryBroadcast\n                           :                 :     :              +- BroadcastExchange\n                           :                 :     :                 +- CometNativeColumnarToRow\n                           :                 :     :                    +- CometProject\n                           :                 :     :                       +- CometFilter\n                           :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 +- BroadcastExchange\n                           :                    +- Project\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometFilter\n                           :                          :     +- CometNativeScan parquet spark_catalog.default.store\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- Filter\n                           :                                   +- Window\n                           :                                      +- WindowGroupLimit\n                           :                                         +- Sort\n                           :                                            +- HashAggregate\n                           :                                               +- CometNativeColumnarToRow\n               
            :                                                  +- CometColumnarExchange\n                           :                                                     +- HashAggregate\n                           :                                                        +- Project\n                           :                                                           +- BroadcastHashJoin\n                           :                                                              :- Project\n                           :                                                              :  +- BroadcastHashJoin\n                           :                                                              :     :- Filter\n                           :                                                              :     :  +- ColumnarToRow\n                           :                                                              :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                                                              :     :           +- ReusedSubquery\n                           :                                                              :     +- BroadcastExchange\n                           :                                                              :        +- CometNativeColumnarToRow\n                           :                                                              :           +- CometProject\n                           :                                                              :              +- CometFilter\n                           :                                                              :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                                                              +- BroadcastExchange\n                           :                                                                 +- CometNativeColumnarToRow\n                           :                                                                    +- CometProject\n                           :                                                                       +- CometFilter\n                           :                                                                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                             :  +- BroadcastHashJoin\n                           :                             :     :- Filter\n                           :                             :     :  +- ColumnarToRow\n                           :                             :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not 
support subqueries/dynamic pruning]\n                           :                             :     :           +- SubqueryBroadcast\n                           :                             :     :              +- BroadcastExchange\n                           :                             :     :                 +- CometNativeColumnarToRow\n                           :                             :     :                    +- CometProject\n                           :                             :     :                       +- CometFilter\n                           :                             :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             +- BroadcastExchange\n                           :                                +- Project\n                           :                                   +- BroadcastHashJoin\n                           :                                      :- CometNativeColumnarToRow\n                           :                                      :  +- CometFilter\n                           :                                      :     +- CometNativeScan parquet spark_catalog.default.store\n                           :                                      +- BroadcastExchange\n                           :                                         +- Project\n                           :                                            +- Filter\n                           :                                               +- Window\n                           :                                                  +- WindowGroupLimit\n                           :                                                     +- Sort\n                           :                                                        +- HashAggregate\n                           :                                                           +- CometNativeColumnarToRow\n                           :                                                              +- CometColumnarExchange\n                           :                                                                 +- HashAggregate\n                           :                                                                    +- Project\n                           :                                                                       +- BroadcastHashJoin\n                           :                                                                          :- Project\n                           :                                                                          :  +- BroadcastHashJoin\n                           :                                                                          :     :- Filter\n                           :                                                                          :     :  +- ColumnarToRow\n                           :                                                                          :     :     +-  Scan parquet 
spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                                                                          :     :           +- ReusedSubquery\n                           :                                                                          :     +- BroadcastExchange\n                           :                                                                          :        +- CometNativeColumnarToRow\n                           :                                                                          :           +- CometProject\n                           :                                                                          :              +- CometFilter\n                           :                                                                          :                 +- CometNativeScan parquet spark_catalog.default.store\n                           :                                                                          +- BroadcastExchange\n                           :                                                                             +- CometNativeColumnarToRow\n                           :                                                                                +- CometProject\n                           :                                                                                   +- CometFilter\n                           :                                                                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Filter\n                                                         :     :  +- ColumnarToRow\n                                                         :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :           +- SubqueryBroadcast\n                                                         :     :              +- BroadcastExchange\n                                                         :     :                 +- CometNativeColumnarToRow\n                                                         :     :                    +- CometProject\n                                                         :     :                       +- CometFilter\n                                                         :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                        
                 :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         +- BroadcastExchange\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- CometNativeColumnarToRow\n                                                                  :  +- CometFilter\n                                                                  :     +- CometNativeScan parquet spark_catalog.default.store\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +- Filter\n                                                                           +- Window\n                                                                              +- WindowGroupLimit\n                                                                                 +- Sort\n                                                                                    +- HashAggregate\n                                                                                       +- CometNativeColumnarToRow\n                                                                                          +- CometColumnarExchange\n                                                                                             +- HashAggregate\n                                                                                                +- Project\n                                                                                                   +- BroadcastHashJoin\n                                                                                                      :- Project\n                                                                                                      :  +- BroadcastHashJoin\n                                                                                                      :     :- Filter\n                                                                                                      :     :  +- ColumnarToRow\n                                                                                                      :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                                                      :     :           +- ReusedSubquery\n                                                                                                      :     +- BroadcastExchange\n                                                                                                      :        +- CometNativeColumnarToRow\n                                                                                                      :           +- CometProject\n                                                                                                      :              +- CometFilter\n                                           
                                                           :                 +- CometNativeScan parquet spark_catalog.default.store\n                                                                                                      +- BroadcastExchange\n                                                                                                         +- CometNativeColumnarToRow\n                                                                                                            +- CometProject\n                                                                                                               +- CometFilter\n                                                                                                                  +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 54 out of 156 eligible operators (34%). Final plan contains 30 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q70a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- CometNativeColumnarToRow\n                           :                 :  +- CometProject\n                           :                 :     +- CometBroadcastHashJoin\n                           :                 :        :- CometFilter\n                           :                 :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                 :        :        +- SubqueryBroadcast\n                           :                 :        :           +- BroadcastExchange\n                           :                 :        :              +- CometNativeColumnarToRow\n                           :                 :        :                 +- CometProject\n                           :                 :        :                    +- CometFilter\n                           :                 :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                 :        +- CometBroadcastExchange\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                 +- BroadcastExchange\n                           :                    +- Project\n                           :                       +- BroadcastHashJoin\n                           :                          :- CometNativeColumnarToRow\n                           :                          :  +- CometFilter\n                           :                          :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                          +- BroadcastExchange\n                           :                             +- Project\n                           :                                +- Filter\n                           :                                   +- Window\n                           :                                      +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                           :                                         +- CometNativeColumnarToRow\n                           :                                            +- CometSort\n                           :                                               +- CometHashAggregate\n    
                       :                                                  +- CometExchange\n                           :                                                     +- CometHashAggregate\n                           :                                                        +- CometProject\n                           :                                                           +- CometBroadcastHashJoin\n                           :                                                              :- CometProject\n                           :                                                              :  +- CometBroadcastHashJoin\n                           :                                                              :     :- CometFilter\n                           :                                                              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                                                              :     :        +- ReusedSubquery\n                           :                                                              :     +- CometBroadcastExchange\n                           :                                                              :        +- CometProject\n                           :                                                              :           +- CometFilter\n                           :                                                              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                                                              +- CometBroadcastExchange\n                           :                                                                 +- CometProject\n                           :                                                                    +- CometFilter\n                           :                                                                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- CometNativeColumnarToRow\n                           :                             :  +- CometProject\n                           :                             :     +- CometBroadcastHashJoin\n                           :                             :        :- CometFilter\n                           :                             :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                             :        :        +- SubqueryBroadcast\n                           :                             :        :           +- BroadcastExchange\n                           :                             :        :              +- CometNativeColumnarToRow\n                           :           
                  :        :                 +- CometProject\n                           :                             :        :                    +- CometFilter\n                           :                             :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                             :        +- CometBroadcastExchange\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                             +- BroadcastExchange\n                           :                                +- Project\n                           :                                   +- BroadcastHashJoin\n                           :                                      :- CometNativeColumnarToRow\n                           :                                      :  +- CometFilter\n                           :                                      :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                                      +- BroadcastExchange\n                           :                                         +- Project\n                           :                                            +- Filter\n                           :                                               +- Window\n                           :                                                  +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                           :                                                     +- CometNativeColumnarToRow\n                           :                                                        +- CometSort\n                           :                                                           +- CometHashAggregate\n                           :                                                              +- CometExchange\n                           :                                                                 +- CometHashAggregate\n                           :                                                                    +- CometProject\n                           :                                                                       +- CometBroadcastHashJoin\n                           :                                                                          :- CometProject\n                           :                                                                          :  +- CometBroadcastHashJoin\n                           :                                                                          :     :- CometFilter\n                           :                                                                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                           :                                                                          :     :        +- ReusedSubquery\n                           :                                                                          :     +- CometBroadcastExchange\n                           :                                                                          :        +- CometProject\n               
            :                                                                          :           +- CometFilter\n                           :                                                                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                           :                                                                          +- CometBroadcastExchange\n                           :                                                                             +- CometProject\n                           :                                                                                +- CometFilter\n                           :                                                                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- CometNativeColumnarToRow\n                                                         :  +- CometProject\n                                                         :     +- CometBroadcastHashJoin\n                                                         :        :- CometFilter\n                                                         :        :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                         :        :        +- SubqueryBroadcast\n                                                         :        :           +- BroadcastExchange\n                                                         :        :              +- CometNativeColumnarToRow\n                                                         :        :                 +- CometProject\n                                                         :        :                    +- CometFilter\n                                                         :        :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                         :        +- CometBroadcastExchange\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                         +- BroadcastExchange\n                                                            +- Project\n                                                               +- BroadcastHashJoin\n                                                                  :- CometNativeColumnarToRow\n                                                                  :  +- CometFilter\n                                                                  : 
    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                                  +- BroadcastExchange\n                                                                     +- Project\n                                                                        +- Filter\n                                                                           +- Window\n                                                                              +-  WindowGroupLimit [COMET: WindowGroupLimit is not supported]\n                                                                                 +- CometNativeColumnarToRow\n                                                                                    +- CometSort\n                                                                                       +- CometHashAggregate\n                                                                                          +- CometExchange\n                                                                                             +- CometHashAggregate\n                                                                                                +- CometProject\n                                                                                                   +- CometBroadcastHashJoin\n                                                                                                      :- CometProject\n                                                                                                      :  +- CometBroadcastHashJoin\n                                                                                                      :     :- CometFilter\n                                                                                                      :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                                                                                      :     :        +- ReusedSubquery\n                                                                                                      :     +- CometBroadcastExchange\n                                                                                                      :        +- CometProject\n                                                                                                      :           +- CometFilter\n                                                                                                      :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                                                                                      +- CometBroadcastExchange\n                                                                                                         +- CometProject\n                                                                                                            +- CometFilter\n                                                                                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 102 out of 156 eligible operators (65%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q72.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometColumnarExchange\n                  :     +- Project\n                  :        +- BroadcastHashJoin\n                  :           :- Project\n                  :           :  +- BroadcastHashJoin\n                  :           :     :- Project\n                  :           :     :  +- BroadcastHashJoin\n                  :           :     :     :- Project\n                  :           :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :- Project\n                  :           :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :- Project\n                  :           :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- Project\n                  :           :     :     :     :     :     :     :     :  +- BroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- Filter\n                  :           :     :     :     :     :     :     :     :     :  +- ColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                  :           :     :     :     :     :     :     :     :     :           +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :              +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :                 +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                    +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                       +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.warehouse\n               
   :           :     :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :     :              +- CometNativeScan parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :     :           +- CometProject\n                  :           :     :     :     :     :              +- CometFilter\n                  :           :     :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- BroadcastExchange\n                  :           :     :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :     :           +- CometProject\n                  :           :     :     :     :              +- CometFilter\n                  :           :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- BroadcastExchange\n                  :           :     :     :        +- CometNativeColumnarToRow\n                  :           :     :     :           +- CometProject\n                  :           :     :     :              +- CometFilter\n                  :           :     :     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     :     +- BroadcastExchange\n                  :           :     :        +- CometNativeColumnarToRow\n                  :           :     :           +- CometFilter\n                  :           :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           :     +- BroadcastExchange\n                  :           :        +- CometNativeColumnarToRow\n                  :           :           +- CometFilter\n                  :           :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                  :           +- BroadcastExchange\n                  :              +- CometNativeColumnarToRow\n                  :                 +- CometFilter\n                  :                    +- CometNativeScan parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometNativeScan parquet spark_catalog.default.catalog_returns\n\nComet accelerated 37 out of 68 eligible operators (54%). Final plan contains 12 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q72.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometProject\n               +- CometSortMergeJoin\n                  :- CometSort\n                  :  +- CometExchange\n                  :     +- CometProject\n                  :        +- CometBroadcastHashJoin\n                  :           :- CometProject\n                  :           :  +- CometBroadcastHashJoin\n                  :           :     :- CometProject\n                  :           :     :  +- CometBroadcastHashJoin\n                  :           :     :     :- CometProject\n                  :           :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :- CometProject\n                  :           :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :- CometProject\n                  :           :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :- CometProject\n                  :           :     :     :     :     :     :     :     :  +- CometBroadcastHashJoin\n                  :           :     :     :     :     :     :     :     :     :- CometFilter\n                  :           :     :     :     :     :     :     :     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                  :           :     :     :     :     :     :     :     :     :        +- SubqueryBroadcast\n                  :           :     :     :     :     :     :     :     :     :           +- BroadcastExchange\n                  :           :     :     :     :     :     :     :     :     :              +- CometNativeColumnarToRow\n                  :           :     :     :     :     :     :     :     :     :                 +- CometProject\n                  :           :     :     :     :     :     :     :     :     :                    +- CometFilter\n                  :           :     :     :     :     :     :     :     :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.inventory\n                  :           :     :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.warehouse\n                  :           :     :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :     :        +- CometFilter\n                  :           :     :     :     :     :     :           +- 
CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                  :           :     :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :     :        +- CometProject\n                  :           :     :     :     :     :           +- CometFilter\n                  :           :     :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer_demographics\n                  :           :     :     :     :     +- CometBroadcastExchange\n                  :           :     :     :     :        +- CometProject\n                  :           :     :     :     :           +- CometFilter\n                  :           :     :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.household_demographics\n                  :           :     :     :     +- CometBroadcastExchange\n                  :           :     :     :        +- CometProject\n                  :           :     :     :           +- CometFilter\n                  :           :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     :     +- CometBroadcastExchange\n                  :           :     :        +- CometFilter\n                  :           :     :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           :     +- CometBroadcastExchange\n                  :           :        +- CometFilter\n                  :           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                  :           +- CometBroadcastExchange\n                  :              +- CometFilter\n                  :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                  +- CometSort\n                     +- CometExchange\n                        +- CometProject\n                           +- CometFilter\n                              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n\nComet accelerated 66 out of 68 eligible operators (97%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q74.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +- BroadcastHashJoin\n      :- Project\n      :  +- BroadcastHashJoin\n      :     :- BroadcastHashJoin\n      :     :  :- Filter\n      :     :  :  +- HashAggregate\n      :     :  :     +- CometNativeColumnarToRow\n      :     :  :        +- CometColumnarExchange\n      :     :  :           +- HashAggregate\n      :     :  :              +- Project\n      :     :  :                 +- BroadcastHashJoin\n      :     :  :                    :- Project\n      :     :  :                    :  +- BroadcastHashJoin\n      :     :  :                    :     :- CometNativeColumnarToRow\n      :     :  :                    :     :  +- CometProject\n      :     :  :                    :     :     +- CometFilter\n      :     :  :                    :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :  :                    :     +- BroadcastExchange\n      :     :  :                    :        +- Filter\n      :     :  :                    :           +- ColumnarToRow\n      :     :  :                    :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :  :                    :                    +- SubqueryBroadcast\n      :     :  :                    :                       +- BroadcastExchange\n      :     :  :                    :                          +- CometNativeColumnarToRow\n      :     :  :                    :                             +- CometFilter\n      :     :  :                    :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  :                    +- BroadcastExchange\n      :     :  :                       +- CometNativeColumnarToRow\n      :     :  :                          +- CometFilter\n      :     :  :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     :  +- BroadcastExchange\n      :     :     +- HashAggregate\n      :     :        +- CometNativeColumnarToRow\n      :     :           +- CometColumnarExchange\n      :     :              +- HashAggregate\n      :     :                 +- Project\n      :     :                    +- BroadcastHashJoin\n      :     :                       :- Project\n      :     :                       :  +- BroadcastHashJoin\n      :     :                       :     :- CometNativeColumnarToRow\n      :     :                       :     :  +- CometProject\n      :     :                       :     :     +- CometFilter\n      :     :                       :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :     :                       :     +- BroadcastExchange\n      :     :                       :        +- Filter\n      :     :                       :           +- ColumnarToRow\n      :     :                       :              +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :     :                       :                    +- SubqueryBroadcast\n      :     :                       :                       +- BroadcastExchange\n      :     :                       :                          +- CometNativeColumnarToRow\n      :     :                       :                             +- CometFilter\n      :     :                       :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n 
     :     :                       +- BroadcastExchange\n      :     :                          +- CometNativeColumnarToRow\n      :     :                             +- CometFilter\n      :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n      :     +- BroadcastExchange\n      :        +- Filter\n      :           +- HashAggregate\n      :              +- CometNativeColumnarToRow\n      :                 +- CometColumnarExchange\n      :                    +- HashAggregate\n      :                       +- Project\n      :                          +- BroadcastHashJoin\n      :                             :- Project\n      :                             :  +- BroadcastHashJoin\n      :                             :     :- CometNativeColumnarToRow\n      :                             :     :  +- CometProject\n      :                             :     :     +- CometFilter\n      :                             :     :        +- CometNativeScan parquet spark_catalog.default.customer\n      :                             :     +- BroadcastExchange\n      :                             :        +- Filter\n      :                             :           +- ColumnarToRow\n      :                             :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n      :                             :                    +- ReusedSubquery\n      :                             +- BroadcastExchange\n      :                                +- CometNativeColumnarToRow\n      :                                   +- CometFilter\n      :                                      +- CometNativeScan parquet spark_catalog.default.date_dim\n      +- BroadcastExchange\n         +- HashAggregate\n            +- CometNativeColumnarToRow\n               +- CometColumnarExchange\n                  +- HashAggregate\n                     +- Project\n                        +- BroadcastHashJoin\n                           :- Project\n                           :  +- BroadcastHashJoin\n                           :     :- CometNativeColumnarToRow\n                           :     :  +- CometProject\n                           :     :     +- CometFilter\n                           :     :        +- CometNativeScan parquet spark_catalog.default.customer\n                           :     +- BroadcastExchange\n                           :        +- Filter\n                           :           +- ColumnarToRow\n                           :              +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                    +- ReusedSubquery\n                           +- BroadcastExchange\n                              +- CometNativeColumnarToRow\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 28 out of 85 eligible operators (32%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q74.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometBroadcastHashJoin\n         :- CometProject\n         :  +- CometBroadcastHashJoin\n         :     :- CometBroadcastHashJoin\n         :     :  :- CometFilter\n         :     :  :  +- CometHashAggregate\n         :     :  :     +- CometExchange\n         :     :  :        +- CometHashAggregate\n         :     :  :           +- CometProject\n         :     :  :              +- CometBroadcastHashJoin\n         :     :  :                 :- CometProject\n         :     :  :                 :  +- CometBroadcastHashJoin\n         :     :  :                 :     :- CometProject\n         :     :  :                 :     :  +- CometFilter\n         :     :  :                 :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :  :                 :     +- CometBroadcastExchange\n         :     :  :                 :        +- CometFilter\n         :     :  :                 :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :  :                 :                 +- SubqueryBroadcast\n         :     :  :                 :                    +- BroadcastExchange\n         :     :  :                 :                       +- CometNativeColumnarToRow\n         :     :  :                 :                          +- CometFilter\n         :     :  :                 :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  :                 +- CometBroadcastExchange\n         :     :  :                    +- CometFilter\n         :     :  :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :  +- CometBroadcastExchange\n         :     :     +- CometHashAggregate\n         :     :        +- CometExchange\n         :     :           +- CometHashAggregate\n         :     :              +- CometProject\n         :     :                 +- CometBroadcastHashJoin\n         :     :                    :- CometProject\n         :     :                    :  +- CometBroadcastHashJoin\n         :     :                    :     :- CometProject\n         :     :                    :     :  +- CometFilter\n         :     :                    :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :     :                    :     +- CometBroadcastExchange\n         :     :                    :        +- CometFilter\n         :     :                    :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                    :                 +- SubqueryBroadcast\n         :     :                    :                    +- BroadcastExchange\n         :     :                    :                       +- CometNativeColumnarToRow\n         :     :                    :                          +- CometFilter\n         :     :                    :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                    +- CometBroadcastExchange\n         :     :                       +- CometFilter\n         :     :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometBroadcastExchange\n         :        +- CometFilter\n         :           +- 
CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometBroadcastHashJoin\n         :                          :     :- CometProject\n         :                          :     :  +- CometFilter\n         :                          :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n         :                          :     +- CometBroadcastExchange\n         :                          :        +- CometFilter\n         :                          :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :                 +- ReusedSubquery\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         +- CometBroadcastExchange\n            +- CometHashAggregate\n               +- CometExchange\n                  +- CometHashAggregate\n                     +- CometProject\n                        +- CometBroadcastHashJoin\n                           :- CometProject\n                           :  +- CometBroadcastHashJoin\n                           :     :- CometProject\n                           :     :  +- CometFilter\n                           :     :     +- CometScan [native_iceberg_compat] parquet spark_catalog.default.customer\n                           :     +- CometBroadcastExchange\n                           :        +- CometFilter\n                           :           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                 +- ReusedSubquery\n                           +- CometBroadcastExchange\n                              +- CometFilter\n                                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 79 out of 85 eligible operators (92%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q75.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n         :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- SubqueryBroadcast\n         :                             :     :           :     :              +- BroadcastExchange\n         :                             :     :           :     :                 +- CometNativeColumnarToRow\n         :                             :     :           :     :                    +- CometFilter\n         :                             :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometColumnarExchange\n         :                             :     :     +- Project\n    
     :                             :     :        +- BroadcastHashJoin\n         :                             :     :           :- Project\n         :                             :     :           :  +- BroadcastHashJoin\n         :                             :     :           :     :- Filter\n         :                             :     :           :     :  +- ColumnarToRow\n         :                             :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                             :     :           :     :           +- ReusedSubquery\n         :                             :     :           :     +- BroadcastExchange\n         :                             :     :           :        +- CometNativeColumnarToRow\n         :                             :     :           :           +- CometProject\n         :                             :     :           :              +- CometFilter\n         :                             :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                             :     :           +- BroadcastExchange\n         :                             :     :              +- CometNativeColumnarToRow\n         :                             :     :                 +- CometFilter\n         :                             :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometColumnarExchange\n         :                                   :     +- Project\n         :                                   :        +- BroadcastHashJoin\n         :                                   :           :- Project\n         :                                   :           :  +- BroadcastHashJoin\n         :                                   :           :     :- Filter\n         :                                   :           :     :  +- ColumnarToRow\n         :                                   :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                                   :           :     :           +- ReusedSubquery\n         :                                   :           :     +- BroadcastExchange\n         :                                   :           :        +- CometNativeColumnarToRow\n         :                                   :           :           +- CometProject\n         :                                   :           :              +- CometFilter\n         :                                   :           :                 +- CometNativeScan parquet spark_catalog.default.item\n         :                                   :           +- BroadcastExchange\n         :                                   :              +- 
CometNativeColumnarToRow\n         :                                   :                 +- CometFilter\n         :                                   :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometNativeScan parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n                        +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- SubqueryBroadcast\n                                       :     :           :     :              +- BroadcastExchange\n                                       :     :           :     :                 +- CometNativeColumnarToRow\n                                       :     :           :     :                    +- CometFilter\n                                       :     :           :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                        
               :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometColumnarExchange\n                                       :     :     +- Project\n                                       :     :        +- BroadcastHashJoin\n                                       :     :           :- Project\n                                       :     :           :  +- BroadcastHashJoin\n                                       :     :           :     :- Filter\n                                       :     :           :     :  +- ColumnarToRow\n                                       :     :           :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           :     :           +- ReusedSubquery\n                                       :     :           :     +- BroadcastExchange\n                                       :     :           :        +- CometNativeColumnarToRow\n                                       :     :           :           +- CometProject\n                                       :     :           :              +- CometFilter\n                                       :     :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       :     :           +- BroadcastExchange\n                                       :     :              +- CometNativeColumnarToRow\n                                       :     :                 +- CometFilter\n                                       :     :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometColumnarExchange\n                                             :     +- Project\n                                             :        +- BroadcastHashJoin\n                                             :           :- Project\n                                             :           :  +- BroadcastHashJoin\n                                             :           :     :- Filter\n                                             :           :     :  +- ColumnarToRow\n                                             :           :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                             :           :     :           +- ReusedSubquery\n                                             :           :     +- BroadcastExchange\n                                             :           :        +- 
CometNativeColumnarToRow\n                                             :           :           +- CometProject\n                                             :           :              +- CometFilter\n                                             :           :                 +- CometNativeScan parquet spark_catalog.default.item\n                                             :           +- BroadcastExchange\n                                             :              +- CometNativeColumnarToRow\n                                             :                 +- CometFilter\n                                             :                    +- CometNativeScan parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometNativeScan parquet spark_catalog.default.web_returns\n\nComet accelerated 111 out of 167 eligible operators (66%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q75.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometProject\n      +- CometSortMergeJoin\n         :- CometSort\n         :  +- CometExchange\n         :     +- CometFilter\n         :        +- CometHashAggregate\n         :           +- CometExchange\n         :              +- CometHashAggregate\n         :                 +- CometHashAggregate\n         :                    +- CometExchange\n         :                       +- CometHashAggregate\n         :                          +- CometUnion\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :                             :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n         :                             :     :           :     :        +- SubqueryBroadcast\n         :                             :     :           :     :           +- BroadcastExchange\n         :                             :     :           :     :              +- CometNativeColumnarToRow\n         :                             :     :           :     :                 +- CometFilter\n         :                             :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n         :                             :- CometProject\n         :                             :  +- CometSortMergeJoin\n         :                             :     :- CometSort\n         :                             :     :  +- CometExchange\n         :                             :     :     +- CometProject\n         :                             :     :        +- CometBroadcastHashJoin\n         :                             :     :           :- CometProject\n         :                             :     :           :  +- CometBroadcastHashJoin\n         :     
                        :     :           :     :- CometFilter\n         :                             :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :                             :     :           :     :        +- ReusedSubquery\n         :                             :     :           :     +- CometBroadcastExchange\n         :                             :     :           :        +- CometProject\n         :                             :     :           :           +- CometFilter\n         :                             :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                             :     :           +- CometBroadcastExchange\n         :                             :     :              +- CometFilter\n         :                             :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                             :     +- CometSort\n         :                             :        +- CometExchange\n         :                             :           +- CometProject\n         :                             :              +- CometFilter\n         :                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :                             +- CometProject\n         :                                +- CometSortMergeJoin\n         :                                   :- CometSort\n         :                                   :  +- CometExchange\n         :                                   :     +- CometProject\n         :                                   :        +- CometBroadcastHashJoin\n         :                                   :           :- CometProject\n         :                                   :           :  +- CometBroadcastHashJoin\n         :                                   :           :     :- CometFilter\n         :                                   :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                                   :           :     :        +- ReusedSubquery\n         :                                   :           :     +- CometBroadcastExchange\n         :                                   :           :        +- CometProject\n         :                                   :           :           +- CometFilter\n         :                                   :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n         :                                   :           +- CometBroadcastExchange\n         :                                   :              +- CometFilter\n         :                                   :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :                                   +- CometSort\n         :                                      +- CometExchange\n         :                                         +- CometProject\n         :                                            +- CometFilter\n         :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         +- CometSort\n            +- CometExchange\n               +- CometFilter\n                  +- CometHashAggregate\n                     +- CometExchange\n            
            +- CometHashAggregate\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometUnion\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                       :     :           :     :        +- SubqueryBroadcast\n                                       :     :           :     :           +- BroadcastExchange\n                                       :     :           :     :              +- CometNativeColumnarToRow\n                                       :     :           :     :                 +- CometFilter\n                                       :     :           :     :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                       :- CometProject\n                                       :  +- CometSortMergeJoin\n                                       :     :- CometSort\n                                       :     :  +- CometExchange\n                                       :     :     +- CometProject\n                                       :     :        +- CometBroadcastHashJoin\n                                       :     :           :- CometProject\n                                       :     :           :  +- CometBroadcastHashJoin\n                                       :     :           :     :- CometFilter\n                                       :     :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                       :     :           :     :        +- 
ReusedSubquery\n                                       :     :           :     +- CometBroadcastExchange\n                                       :     :           :        +- CometProject\n                                       :     :           :           +- CometFilter\n                                       :     :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                       :     :           +- CometBroadcastExchange\n                                       :     :              +- CometFilter\n                                       :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                       :     +- CometSort\n                                       :        +- CometExchange\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                       +- CometProject\n                                          +- CometSortMergeJoin\n                                             :- CometSort\n                                             :  +- CometExchange\n                                             :     +- CometProject\n                                             :        +- CometBroadcastHashJoin\n                                             :           :- CometProject\n                                             :           :  +- CometBroadcastHashJoin\n                                             :           :     :- CometFilter\n                                             :           :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                             :           :     :        +- ReusedSubquery\n                                             :           :     +- CometBroadcastExchange\n                                             :           :        +- CometProject\n                                             :           :           +- CometFilter\n                                             :           :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                             :           +- CometBroadcastExchange\n                                             :              +- CometFilter\n                                             :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                             +- CometSort\n                                                +- CometExchange\n                                                   +- CometProject\n                                                      +- CometFilter\n                                                         +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n\nComet accelerated 159 out of 167 eligible operators (95%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q77a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- Project\n               :              :  +- BroadcastHashJoin\n               :              :     :- HashAggregate\n               :              :     :  +- CometNativeColumnarToRow\n               :              :     :     +- CometColumnarExchange\n               :              :     :        +- HashAggregate\n               :              :     :           +- Project\n               :              :     :              +- BroadcastHashJoin\n               :              :     :                 :- Project\n               :              :     :                 :  +- BroadcastHashJoin\n               :              :     :                 :     :- Filter\n               :              :     :                 :     :  +- ColumnarToRow\n               :              :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :     :                 :     :           +- SubqueryBroadcast\n               :              :     :                 :     :              +- BroadcastExchange\n               :              :     :                 :     :                 +- CometNativeColumnarToRow\n               :              :     :                 :     :                    +- CometProject\n               :              :     :                 :     :                       +- CometFilter\n               :              :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :     :                 :     +- BroadcastExchange\n               :              :     :                 :        +- CometNativeColumnarToRow\n               :              :     :                 :           +- CometProject\n               :              :     :                 :              +- CometFilter\n               :              :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :     :                 +- BroadcastExchange\n               :              :     :                    +- CometNativeColumnarToRow\n               :              :     :                       +- CometFilter\n               :              :     :                          +- CometNativeScan parquet spark_catalog.default.store\n               :              :     +- BroadcastExchange\n               :              :        +- HashAggregate\n               :              :           +- CometNativeColumnarToRow\n               :              :              +- CometColumnarExchange\n               :              :                 +- HashAggregate\n               :              :                    +- Project\n               :              :                       +- BroadcastHashJoin\n               :              :                          :- Project\n               :              :                          :  +- BroadcastHashJoin\n               :              :                          :     :- Filter\n               :              :     
                     :     :  +- ColumnarToRow\n               :              :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                          :     :           +- ReusedSubquery\n               :              :                          :     +- BroadcastExchange\n               :              :                          :        +- CometNativeColumnarToRow\n               :              :                          :           +- CometProject\n               :              :                          :              +- CometFilter\n               :              :                          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :                          +- BroadcastExchange\n               :              :                             +- CometNativeColumnarToRow\n               :              :                                +- CometFilter\n               :              :                                   +- CometNativeScan parquet spark_catalog.default.store\n               :              :- Project\n               :              :  +- BroadcastNestedLoopJoin\n               :              :     :- BroadcastExchange\n               :              :     :  +- HashAggregate\n               :              :     :     +- CometNativeColumnarToRow\n               :              :     :        +- CometColumnarExchange\n               :              :     :           +- HashAggregate\n               :              :     :              +- Project\n               :              :     :                 +- BroadcastHashJoin\n               :              :     :                    :- ColumnarToRow\n               :              :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :     :                    :        +- ReusedSubquery\n               :              :     :                    +- BroadcastExchange\n               :              :     :                       +- CometNativeColumnarToRow\n               :              :     :                          +- CometProject\n               :              :     :                             +- CometFilter\n               :              :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              :     +- HashAggregate\n               :              :        +- CometNativeColumnarToRow\n               :              :           +- CometColumnarExchange\n               :              :              +- HashAggregate\n               :              :                 +- Project\n               :              :                    +- BroadcastHashJoin\n               :              :                       :- ColumnarToRow\n               :              :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :              :                       :        +- ReusedSubquery\n               :              :                       +- BroadcastExchange\n               :              :                          +- CometNativeColumnarToRow\n               :              :                             +- CometProject\n               
:              :                                +- CometFilter\n               :              :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n               :              +- Project\n               :                 +- BroadcastHashJoin\n               :                    :- HashAggregate\n               :                    :  +- CometNativeColumnarToRow\n               :                    :     +- CometColumnarExchange\n               :                    :        +- HashAggregate\n               :                    :           +- Project\n               :                    :              +- BroadcastHashJoin\n               :                    :                 :- Project\n               :                    :                 :  +- BroadcastHashJoin\n               :                    :                 :     :- Filter\n               :                    :                 :     :  +- ColumnarToRow\n               :                    :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :                 :     :           +- ReusedSubquery\n               :                    :                 :     +- BroadcastExchange\n               :                    :                 :        +- CometNativeColumnarToRow\n               :                    :                 :           +- CometProject\n               :                    :                 :              +- CometFilter\n               :                    :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :                 +- BroadcastExchange\n               :                    :                    +- CometNativeColumnarToRow\n               :                    :                       +- CometFilter\n               :                    :                          +- CometNativeScan parquet spark_catalog.default.web_page\n               :                    +- BroadcastExchange\n               :                       +- HashAggregate\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometColumnarExchange\n               :                                +- HashAggregate\n               :                                   +- Project\n               :                                      +- BroadcastHashJoin\n               :                                         :- Project\n               :                                         :  +- BroadcastHashJoin\n               :                                         :     :- Filter\n               :                                         :     :  +- ColumnarToRow\n               :                                         :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                         :     :           +- ReusedSubquery\n               :                                         :     +- BroadcastExchange\n               :                                         :        +- CometNativeColumnarToRow\n               :                                         :           +- CometProject\n               :                                         :              +- CometFilter\n               :                                      
   :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                         +- BroadcastExchange\n               :                                            +- CometNativeColumnarToRow\n               :                                               +- CometFilter\n               :                                                  +- CometNativeScan parquet spark_catalog.default.web_page\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- Project\n               :                          :  +- BroadcastHashJoin\n               :                          :     :- HashAggregate\n               :                          :     :  +- CometNativeColumnarToRow\n               :                          :     :     +- CometColumnarExchange\n               :                          :     :        +- HashAggregate\n               :                          :     :           +- Project\n               :                          :     :              +- BroadcastHashJoin\n               :                          :     :                 :- Project\n               :                          :     :                 :  +- BroadcastHashJoin\n               :                          :     :                 :     :- Filter\n               :                          :     :                 :     :  +- ColumnarToRow\n               :                          :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :     :                 :     :           +- SubqueryBroadcast\n               :                          :     :                 :     :              +- BroadcastExchange\n               :                          :     :                 :     :                 +- CometNativeColumnarToRow\n               :                          :     :                 :     :                    +- CometProject\n               :                          :     :                 :     :                       +- CometFilter\n               :                          :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     :                 :     +- BroadcastExchange\n               :                          :     :                 :        +- CometNativeColumnarToRow\n               :                          :     :                 :           +- CometProject\n               :                          :     :                 :              +- CometFilter\n               :                          :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     :                 +- BroadcastExchange\n               :                          :     :                    +- CometNativeColumnarToRow\n               :                          :     :                       +- CometFilter\n               : 
                         :     :                          +- CometNativeScan parquet spark_catalog.default.store\n               :                          :     +- BroadcastExchange\n               :                          :        +- HashAggregate\n               :                          :           +- CometNativeColumnarToRow\n               :                          :              +- CometColumnarExchange\n               :                          :                 +- HashAggregate\n               :                          :                    +- Project\n               :                          :                       +- BroadcastHashJoin\n               :                          :                          :- Project\n               :                          :                          :  +- BroadcastHashJoin\n               :                          :                          :     :- Filter\n               :                          :                          :     :  +- ColumnarToRow\n               :                          :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                          :     :           +- ReusedSubquery\n               :                          :                          :     +- BroadcastExchange\n               :                          :                          :        +- CometNativeColumnarToRow\n               :                          :                          :           +- CometProject\n               :                          :                          :              +- CometFilter\n               :                          :                          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :                          +- BroadcastExchange\n               :                          :                             +- CometNativeColumnarToRow\n               :                          :                                +- CometFilter\n               :                          :                                   +- CometNativeScan parquet spark_catalog.default.store\n               :                          :- Project\n               :                          :  +- BroadcastNestedLoopJoin\n               :                          :     :- BroadcastExchange\n               :                          :     :  +- HashAggregate\n               :                          :     :     +- CometNativeColumnarToRow\n               :                          :     :        +- CometColumnarExchange\n               :                          :     :           +- HashAggregate\n               :                          :     :              +- Project\n               :                          :     :                 +- BroadcastHashJoin\n               :                          :     :                    :- ColumnarToRow\n               :                          :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :     :                    :        +- ReusedSubquery\n               :                          :     :                    +- BroadcastExchange\n               :                          :     :                       +- CometNativeColumnarToRow\n     
          :                          :     :                          +- CometProject\n               :                          :     :                             +- CometFilter\n               :                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     +- HashAggregate\n               :                          :        +- CometNativeColumnarToRow\n               :                          :           +- CometColumnarExchange\n               :                          :              +- HashAggregate\n               :                          :                 +- Project\n               :                          :                    +- BroadcastHashJoin\n               :                          :                       :- ColumnarToRow\n               :                          :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :                       :        +- ReusedSubquery\n               :                          :                       +- BroadcastExchange\n               :                          :                          +- CometNativeColumnarToRow\n               :                          :                             +- CometProject\n               :                          :                                +- CometFilter\n               :                          :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          +- Project\n               :                             +- BroadcastHashJoin\n               :                                :- HashAggregate\n               :                                :  +- CometNativeColumnarToRow\n               :                                :     +- CometColumnarExchange\n               :                                :        +- HashAggregate\n               :                                :           +- Project\n               :                                :              +- BroadcastHashJoin\n               :                                :                 :- Project\n               :                                :                 :  +- BroadcastHashJoin\n               :                                :                 :     :- Filter\n               :                                :                 :     :  +- ColumnarToRow\n               :                                :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                :                 :     :           +- ReusedSubquery\n               :                                :                 :     +- BroadcastExchange\n               :                                :                 :        +- CometNativeColumnarToRow\n               :                                :                 :           +- CometProject\n               :                                :                 :              +- CometFilter\n               :                                :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                :                 +- BroadcastExchange\n               :                          
      :                    +- CometNativeColumnarToRow\n               :                                :                       +- CometFilter\n               :                                :                          +- CometNativeScan parquet spark_catalog.default.web_page\n               :                                +- BroadcastExchange\n               :                                   +- HashAggregate\n               :                                      +- CometNativeColumnarToRow\n               :                                         +- CometColumnarExchange\n               :                                            +- HashAggregate\n               :                                               +- Project\n               :                                                  +- BroadcastHashJoin\n               :                                                     :- Project\n               :                                                     :  +- BroadcastHashJoin\n               :                                                     :     :- Filter\n               :                                                     :     :  +- ColumnarToRow\n               :                                                     :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                                     :     :           +- ReusedSubquery\n               :                                                     :     +- BroadcastExchange\n               :                                                     :        +- CometNativeColumnarToRow\n               :                                                     :           +- CometProject\n               :                                                     :              +- CometFilter\n               :                                                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                                     +- BroadcastExchange\n               :                                                        +- CometNativeColumnarToRow\n               :                                                           +- CometFilter\n               :                                                              +- CometNativeScan parquet spark_catalog.default.web_page\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- Project\n                                          :  +- BroadcastHashJoin\n                                          :     :- HashAggregate\n                                          :     :  +- CometNativeColumnarToRow\n                                          :     :     +- CometColumnarExchange\n                                          :     :        +- HashAggregate\n                                          :     :           +- Project\n                                          :     :              +- BroadcastHashJoin\n                                          :     :                 :- 
Project\n                                          :     :                 :  +- BroadcastHashJoin\n                                          :     :                 :     :- Filter\n                                          :     :                 :     :  +- ColumnarToRow\n                                          :     :                 :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                 :     :           +- SubqueryBroadcast\n                                          :     :                 :     :              +- BroadcastExchange\n                                          :     :                 :     :                 +- CometNativeColumnarToRow\n                                          :     :                 :     :                    +- CometProject\n                                          :     :                 :     :                       +- CometFilter\n                                          :     :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 :     +- BroadcastExchange\n                                          :     :                 :        +- CometNativeColumnarToRow\n                                          :     :                 :           +- CometProject\n                                          :     :                 :              +- CometFilter\n                                          :     :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     :                 +- BroadcastExchange\n                                          :     :                    +- CometNativeColumnarToRow\n                                          :     :                       +- CometFilter\n                                          :     :                          +- CometNativeScan parquet spark_catalog.default.store\n                                          :     +- BroadcastExchange\n                                          :        +- HashAggregate\n                                          :           +- CometNativeColumnarToRow\n                                          :              +- CometColumnarExchange\n                                          :                 +- HashAggregate\n                                          :                    +- Project\n                                          :                       +- BroadcastHashJoin\n                                          :                          :- Project\n                                          :                          :  +- BroadcastHashJoin\n                                          :                          :     :- Filter\n                                          :                          :     :  +- ColumnarToRow\n                                          :                          :     :     +-  Scan parquet spark_catalog.default.store_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                          :     :           +- ReusedSubquery\n                                          :                          :     +- BroadcastExchange\n                                          :                          :        +- 
CometNativeColumnarToRow\n                                          :                          :           +- CometProject\n                                          :                          :              +- CometFilter\n                                          :                          :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :                          +- BroadcastExchange\n                                          :                             +- CometNativeColumnarToRow\n                                          :                                +- CometFilter\n                                          :                                   +- CometNativeScan parquet spark_catalog.default.store\n                                          :- Project\n                                          :  +- BroadcastNestedLoopJoin\n                                          :     :- BroadcastExchange\n                                          :     :  +- HashAggregate\n                                          :     :     +- CometNativeColumnarToRow\n                                          :     :        +- CometColumnarExchange\n                                          :     :           +- HashAggregate\n                                          :     :              +- Project\n                                          :     :                 +- BroadcastHashJoin\n                                          :     :                    :- ColumnarToRow\n                                          :     :                    :  +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :     :                    :        +- ReusedSubquery\n                                          :     :                    +- BroadcastExchange\n                                          :     :                       +- CometNativeColumnarToRow\n                                          :     :                          +- CometProject\n                                          :     :                             +- CometFilter\n                                          :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          :     +- HashAggregate\n                                          :        +- CometNativeColumnarToRow\n                                          :           +- CometColumnarExchange\n                                          :              +- HashAggregate\n                                          :                 +- Project\n                                          :                    +- BroadcastHashJoin\n                                          :                       :- ColumnarToRow\n                                          :                       :  +-  Scan parquet spark_catalog.default.catalog_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                          :                       :        +- ReusedSubquery\n                                          :                       +- BroadcastExchange\n                                          :                          +- CometNativeColumnarToRow\n                                          :                             +- CometProject\n                                          :                                +- 
CometFilter\n                                          :                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n                                          +- Project\n                                             +- BroadcastHashJoin\n                                                :- HashAggregate\n                                                :  +- CometNativeColumnarToRow\n                                                :     +- CometColumnarExchange\n                                                :        +- HashAggregate\n                                                :           +- Project\n                                                :              +- BroadcastHashJoin\n                                                :                 :- Project\n                                                :                 :  +- BroadcastHashJoin\n                                                :                 :     :- Filter\n                                                :                 :     :  +- ColumnarToRow\n                                                :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                :                 :     :           +- ReusedSubquery\n                                                :                 :     +- BroadcastExchange\n                                                :                 :        +- CometNativeColumnarToRow\n                                                :                 :           +- CometProject\n                                                :                 :              +- CometFilter\n                                                :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                :                 +- BroadcastExchange\n                                                :                    +- CometNativeColumnarToRow\n                                                :                       +- CometFilter\n                                                :                          +- CometNativeScan parquet spark_catalog.default.web_page\n                                                +- BroadcastExchange\n                                                   +- HashAggregate\n                                                      +- CometNativeColumnarToRow\n                                                         +- CometColumnarExchange\n                                                            +- HashAggregate\n                                                               +- Project\n                                                                  +- BroadcastHashJoin\n                                                                     :- Project\n                                                                     :  +- BroadcastHashJoin\n                                                                     :     :- Filter\n                                                                     :     :  +- ColumnarToRow\n                                                                     :     :     +-  Scan parquet spark_catalog.default.web_returns [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                                     :     :           +- ReusedSubquery\n               
                                                      :     +- BroadcastExchange\n                                                                     :        +- CometNativeColumnarToRow\n                                                                     :           +- CometProject\n                                                                     :              +- CometFilter\n                                                                     :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                                     +- BroadcastExchange\n                                                                        +- CometNativeColumnarToRow\n                                                                           +- CometFilter\n                                                                              +- CometNativeScan parquet spark_catalog.default.web_page\n\nComet accelerated 113 out of 332 eligible operators (34%). Final plan contains 75 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q77a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometColumnarExchange\n         +- HashAggregate\n            +- Union\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- Union\n               :              :- CometNativeColumnarToRow\n               :              :  +- CometProject\n               :              :     +- CometBroadcastHashJoin\n               :              :        :- CometHashAggregate\n               :              :        :  +- CometExchange\n               :              :        :     +- CometHashAggregate\n               :              :        :        +- CometProject\n               :              :        :           +- CometBroadcastHashJoin\n               :              :        :              :- CometProject\n               :              :        :              :  +- CometBroadcastHashJoin\n               :              :        :              :     :- CometFilter\n               :              :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :              :        :              :     :        +- SubqueryBroadcast\n               :              :        :              :     :           +- BroadcastExchange\n               :              :        :              :     :              +- CometNativeColumnarToRow\n               :              :        :              :     :                 +- CometProject\n               :              :        :              :     :                    +- CometFilter\n               :              :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :        :              :     +- CometBroadcastExchange\n               :              :        :              :        +- CometProject\n               :              :        :              :           +- CometFilter\n               :              :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :        :              +- CometBroadcastExchange\n               :              :        :                 +- CometFilter\n               :              :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :              :        +- CometBroadcastExchange\n               :              :           +- CometHashAggregate\n               :              :              +- CometExchange\n               :              :                 +- CometHashAggregate\n               :              :                    +- CometProject\n               :              :                       +- CometBroadcastHashJoin\n               :              :                          :- CometProject\n               :              :                          :  +- CometBroadcastHashJoin\n               :              :                          :     :- CometFilter\n               :              :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :              :                          :     :        +- ReusedSubquery\n               :              :                          :     +- 
CometBroadcastExchange\n               :              :                          :        +- CometProject\n               :              :                          :           +- CometFilter\n               :              :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :                          +- CometBroadcastExchange\n               :              :                             +- CometFilter\n               :              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :              :- Project\n               :              :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n               :              :     :- BroadcastExchange\n               :              :     :  +- CometNativeColumnarToRow\n               :              :     :     +- CometHashAggregate\n               :              :     :        +- CometExchange\n               :              :     :           +- CometHashAggregate\n               :              :     :              +- CometProject\n               :              :     :                 +- CometBroadcastHashJoin\n               :              :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :              :     :                    :     +- ReusedSubquery\n               :              :     :                    +- CometBroadcastExchange\n               :              :     :                       +- CometProject\n               :              :     :                          +- CometFilter\n               :              :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              :     +- CometNativeColumnarToRow\n               :              :        +- CometHashAggregate\n               :              :           +- CometExchange\n               :              :              +- CometHashAggregate\n               :              :                 +- CometProject\n               :              :                    +- CometBroadcastHashJoin\n               :              :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :              :                       :     +- ReusedSubquery\n               :              :                       +- CometBroadcastExchange\n               :              :                          +- CometProject\n               :              :                             +- CometFilter\n               :              :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :              +- CometNativeColumnarToRow\n               :                 +- CometProject\n               :                    +- CometBroadcastHashJoin\n               :                       :- CometHashAggregate\n               :                       :  +- CometExchange\n               :                       :     +- CometHashAggregate\n               :                       :        +- CometProject\n               :                       :           +- CometBroadcastHashJoin\n               :                       :              :- CometProject\n               :                       :              :  +- CometBroadcastHashJoin\n               :                       
:              :     :- CometFilter\n               :                       :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                       :              :     :        +- ReusedSubquery\n               :                       :              :     +- CometBroadcastExchange\n               :                       :              :        +- CometProject\n               :                       :              :           +- CometFilter\n               :                       :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                       :              +- CometBroadcastExchange\n               :                       :                 +- CometFilter\n               :                       :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               :                       +- CometBroadcastExchange\n               :                          +- CometHashAggregate\n               :                             +- CometExchange\n               :                                +- CometHashAggregate\n               :                                   +- CometProject\n               :                                      +- CometBroadcastHashJoin\n               :                                         :- CometProject\n               :                                         :  +- CometBroadcastHashJoin\n               :                                         :     :- CometFilter\n               :                                         :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                         :     :        +- ReusedSubquery\n               :                                         :     +- CometBroadcastExchange\n               :                                         :        +- CometProject\n               :                                         :           +- CometFilter\n               :                                         :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                         +- CometBroadcastExchange\n               :                                            +- CometFilter\n               :                                               +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               :- HashAggregate\n               :  +- CometNativeColumnarToRow\n               :     +- CometColumnarExchange\n               :        +- HashAggregate\n               :           +- HashAggregate\n               :              +- CometNativeColumnarToRow\n               :                 +- CometColumnarExchange\n               :                    +- HashAggregate\n               :                       +- Union\n               :                          :- CometNativeColumnarToRow\n               :                          :  +- CometProject\n               :                          :     +- CometBroadcastHashJoin\n               :                          :        :- CometHashAggregate\n               :                          :        :  +- CometExchange\n               :                          :        :     +- CometHashAggregate\n               :                          :        :        +- CometProject\n               :                          :        :       
    +- CometBroadcastHashJoin\n               :                          :        :              :- CometProject\n               :                          :        :              :  +- CometBroadcastHashJoin\n               :                          :        :              :     :- CometFilter\n               :                          :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                          :        :              :     :        +- SubqueryBroadcast\n               :                          :        :              :     :           +- BroadcastExchange\n               :                          :        :              :     :              +- CometNativeColumnarToRow\n               :                          :        :              :     :                 +- CometProject\n               :                          :        :              :     :                    +- CometFilter\n               :                          :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :        :              :     +- CometBroadcastExchange\n               :                          :        :              :        +- CometProject\n               :                          :        :              :           +- CometFilter\n               :                          :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :        :              +- CometBroadcastExchange\n               :                          :        :                 +- CometFilter\n               :                          :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                          :        +- CometBroadcastExchange\n               :                          :           +- CometHashAggregate\n               :                          :              +- CometExchange\n               :                          :                 +- CometHashAggregate\n               :                          :                    +- CometProject\n               :                          :                       +- CometBroadcastHashJoin\n               :                          :                          :- CometProject\n               :                          :                          :  +- CometBroadcastHashJoin\n               :                          :                          :     :- CometFilter\n               :                          :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :                          :                          :     :        +- ReusedSubquery\n               :                          :                          :     +- CometBroadcastExchange\n               :                          :                          :        +- CometProject\n               :                          :                          :           +- CometFilter\n               :                          :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :                          +- CometBroadcastExchange\n               :                          :  
                           +- CometFilter\n               :                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                          :- Project\n               :                          :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n               :                          :     :- BroadcastExchange\n               :                          :     :  +- CometNativeColumnarToRow\n               :                          :     :     +- CometHashAggregate\n               :                          :     :        +- CometExchange\n               :                          :     :           +- CometHashAggregate\n               :                          :     :              +- CometProject\n               :                          :     :                 +- CometBroadcastHashJoin\n               :                          :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                          :     :                    :     +- ReusedSubquery\n               :                          :     :                    +- CometBroadcastExchange\n               :                          :     :                       +- CometProject\n               :                          :     :                          +- CometFilter\n               :                          :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :     +- CometNativeColumnarToRow\n               :                          :        +- CometHashAggregate\n               :                          :           +- CometExchange\n               :                          :              +- CometHashAggregate\n               :                          :                 +- CometProject\n               :                          :                    +- CometBroadcastHashJoin\n               :                          :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                          :                       :     +- ReusedSubquery\n               :                          :                       +- CometBroadcastExchange\n               :                          :                          +- CometProject\n               :                          :                             +- CometFilter\n               :                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          +- CometNativeColumnarToRow\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometHashAggregate\n               :                                   :  +- CometExchange\n               :                                   :     +- CometHashAggregate\n               :                                   :        +- CometProject\n               :                                   :           +- CometBroadcastHashJoin\n               :                                   :              :- CometProject\n               :                                   :              :  +- CometBroadcastHashJoin\n               :                         
          :              :     :- CometFilter\n               :                                   :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :              :     :        +- ReusedSubquery\n               :                                   :              :     +- CometBroadcastExchange\n               :                                   :              :        +- CometProject\n               :                                   :              :           +- CometFilter\n               :                                   :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                   :              +- CometBroadcastExchange\n               :                                   :                 +- CometFilter\n               :                                   :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometHashAggregate\n               :                                         +- CometExchange\n               :                                            +- CometHashAggregate\n               :                                               +- CometProject\n               :                                                  +- CometBroadcastHashJoin\n               :                                                     :- CometProject\n               :                                                     :  +- CometBroadcastHashJoin\n               :                                                     :     :- CometFilter\n               :                                                     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                                                     :     :        +- ReusedSubquery\n               :                                                     :     +- CometBroadcastExchange\n               :                                                     :        +- CometProject\n               :                                                     :           +- CometFilter\n               :                                                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                                     +- CometBroadcastExchange\n               :                                                        +- CometFilter\n               :                                                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n               +- HashAggregate\n                  +- CometNativeColumnarToRow\n                     +- CometColumnarExchange\n                        +- HashAggregate\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- Union\n                                          :- CometNativeColumnarToRow\n                                          :  +- CometProject\n                                          :     +- CometBroadcastHashJoin\n                            
              :        :- CometHashAggregate\n                                          :        :  +- CometExchange\n                                          :        :     +- CometHashAggregate\n                                          :        :        +- CometProject\n                                          :        :           +- CometBroadcastHashJoin\n                                          :        :              :- CometProject\n                                          :        :              :  +- CometBroadcastHashJoin\n                                          :        :              :     :- CometFilter\n                                          :        :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                          :        :              :     :        +- SubqueryBroadcast\n                                          :        :              :     :           +- BroadcastExchange\n                                          :        :              :     :              +- CometNativeColumnarToRow\n                                          :        :              :     :                 +- CometProject\n                                          :        :              :     :                    +- CometFilter\n                                          :        :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :        :              :     +- CometBroadcastExchange\n                                          :        :              :        +- CometProject\n                                          :        :              :           +- CometFilter\n                                          :        :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :        :              +- CometBroadcastExchange\n                                          :        :                 +- CometFilter\n                                          :        :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                          :        +- CometBroadcastExchange\n                                          :           +- CometHashAggregate\n                                          :              +- CometExchange\n                                          :                 +- CometHashAggregate\n                                          :                    +- CometProject\n                                          :                       +- CometBroadcastHashJoin\n                                          :                          :- CometProject\n                                          :                          :  +- CometBroadcastHashJoin\n                                          :                          :     :- CometFilter\n                                          :                          :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                          :                          :     :        +- ReusedSubquery\n                                          :                          :     +- CometBroadcastExchange\n                                          :                          :        +- CometProject\n                                          :                      
    :           +- CometFilter\n                                          :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :                          +- CometBroadcastExchange\n                                          :                             +- CometFilter\n                                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                          :- Project\n                                          :  +-  BroadcastNestedLoopJoin [COMET: BroadcastNestedLoopJoin is not supported]\n                                          :     :- BroadcastExchange\n                                          :     :  +- CometNativeColumnarToRow\n                                          :     :     +- CometHashAggregate\n                                          :     :        +- CometExchange\n                                          :     :           +- CometHashAggregate\n                                          :     :              +- CometProject\n                                          :     :                 +- CometBroadcastHashJoin\n                                          :     :                    :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                          :     :                    :     +- ReusedSubquery\n                                          :     :                    +- CometBroadcastExchange\n                                          :     :                       +- CometProject\n                                          :     :                          +- CometFilter\n                                          :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          :     +- CometNativeColumnarToRow\n                                          :        +- CometHashAggregate\n                                          :           +- CometExchange\n                                          :              +- CometHashAggregate\n                                          :                 +- CometProject\n                                          :                    +- CometBroadcastHashJoin\n                                          :                       :- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                          :                       :     +- ReusedSubquery\n                                          :                       +- CometBroadcastExchange\n                                          :                          +- CometProject\n                                          :                             +- CometFilter\n                                          :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometHashAggregate\n                                                   :  +- CometExchange\n                                                   :     +- CometHashAggregate\n                                                   : 
       +- CometProject\n                                                   :           +- CometBroadcastHashJoin\n                                                   :              :- CometProject\n                                                   :              :  +- CometBroadcastHashJoin\n                                                   :              :     :- CometFilter\n                                                   :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :              :     :        +- ReusedSubquery\n                                                   :              :     +- CometBroadcastExchange\n                                                   :              :        +- CometProject\n                                                   :              :           +- CometFilter\n                                                   :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :              +- CometBroadcastExchange\n                                                   :                 +- CometFilter\n                                                   :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n                                                   +- CometBroadcastExchange\n                                                      +- CometHashAggregate\n                                                         +- CometExchange\n                                                            +- CometHashAggregate\n                                                               +- CometProject\n                                                                  +- CometBroadcastHashJoin\n                                                                     :- CometProject\n                                                                     :  +- CometBroadcastHashJoin\n                                                                     :     :- CometFilter\n                                                                     :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                                                     :     :        +- ReusedSubquery\n                                                                     :     +- CometBroadcastExchange\n                                                                     :        +- CometProject\n                                                                     :           +- CometFilter\n                                                                     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                                     +- CometBroadcastExchange\n                                                                        +- CometFilter\n                                                                           +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_page\n\nComet accelerated 287 out of 332 eligible operators (86%). Final plan contains 21 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q78.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometColumnarExchange\n         :     :                 :        :     +- Filter\n         :     :                 :        :        +- ColumnarToRow\n         :     :                 :        :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :     :                 :        :                 +- SubqueryBroadcast\n         :     :                 :        :                    +- BroadcastExchange\n         :     :                 :        :                       +- CometNativeColumnarToRow\n         :     :                 :        :                          +- CometFilter\n         :     :                 :        :                             +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometNativeScan parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometNativeScan parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometColumnarExchange\n         :                          :        :     +- Filter\n         :                          :        :        +- ColumnarToRow\n         :                          :        :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n         :                          :        :                 +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometNativeScan parquet spark_catalog.default.web_returns\n         :   
                       +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometColumnarExchange\n                              :        :     +- Filter\n                              :        :        +- ColumnarToRow\n                              :        :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                              :        :                 +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 64 out of 76 eligible operators (84%). Final plan contains 5 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q78.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+-  Project [COMET: Comet does not support Spark's BigDecimal rounding]\n   +- CometNativeColumnarToRow\n      +- CometSortMergeJoin\n         :- CometProject\n         :  +- CometSortMergeJoin\n         :     :- CometSort\n         :     :  +- CometHashAggregate\n         :     :     +- CometExchange\n         :     :        +- CometHashAggregate\n         :     :           +- CometProject\n         :     :              +- CometBroadcastHashJoin\n         :     :                 :- CometProject\n         :     :                 :  +- CometFilter\n         :     :                 :     +- CometSortMergeJoin\n         :     :                 :        :- CometSort\n         :     :                 :        :  +- CometExchange\n         :     :                 :        :     +- CometFilter\n         :     :                 :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n         :     :                 :        :              +- SubqueryBroadcast\n         :     :                 :        :                 +- BroadcastExchange\n         :     :                 :        :                    +- CometNativeColumnarToRow\n         :     :                 :        :                       +- CometFilter\n         :     :                 :        :                          +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     :                 :        +- CometSort\n         :     :                 :           +- CometExchange\n         :     :                 :              +- CometProject\n         :     :                 :                 +- CometFilter\n         :     :                 :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n         :     :                 +- CometBroadcastExchange\n         :     :                    +- CometFilter\n         :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n         :     +- CometSort\n         :        +- CometFilter\n         :           +- CometHashAggregate\n         :              +- CometExchange\n         :                 +- CometHashAggregate\n         :                    +- CometProject\n         :                       +- CometBroadcastHashJoin\n         :                          :- CometProject\n         :                          :  +- CometFilter\n         :                          :     +- CometSortMergeJoin\n         :                          :        :- CometSort\n         :                          :        :  +- CometExchange\n         :                          :        :     +- CometFilter\n         :                          :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n         :                          :        :              +- ReusedSubquery\n         :                          :        +- CometSort\n         :                          :           +- CometExchange\n         :                          :              +- CometProject\n         :                          :                 +- CometFilter\n         :                          :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n         :                          +- CometBroadcastExchange\n         :                             +- CometFilter\n         :                                +- CometScan [native_iceberg_compat] parquet 
spark_catalog.default.date_dim\n         +- CometSort\n            +- CometFilter\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometProject\n                           +- CometBroadcastHashJoin\n                              :- CometProject\n                              :  +- CometFilter\n                              :     +- CometSortMergeJoin\n                              :        :- CometSort\n                              :        :  +- CometExchange\n                              :        :     +- CometFilter\n                              :        :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                              :        :              +- ReusedSubquery\n                              :        +- CometSort\n                              :           +- CometExchange\n                              :              +- CometProject\n                              :                 +- CometFilter\n                              :                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                              +- CometBroadcastExchange\n                                 +- CometFilter\n                                    +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 70 out of 76 eligible operators (92%). Final plan contains 2 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q80a.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometColumnarExchange\n               :           :              :     :     :     :     :     +- Filter\n               :           :              :     :     :     :     :        +- ColumnarToRow\n               :           :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :           :              :     :     :     :     :                 +- SubqueryBroadcast\n               :           :              :     :     :     :     :                    +- BroadcastExchange\n               :           :              :     :     :     :     :                       +- CometNativeColumnarToRow\n               :           :              :     :     :     :     :                          +- CometProject\n               :           :              :     :     :     :     :                             +- CometFilter\n               :           :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :          
 +- CometFilter\n               :           :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometColumnarExchange\n               :           :              :     :     :     :     :     +- Filter\n               :           :              :     :     :     :     :        +- ColumnarToRow\n               :           :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :           :              :     :     :     :     :                 +- ReusedSubquery\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :           +- CometFilter\n               :           :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :           :        
      :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometProject\n               :                          :  +- CometBroadcastHashJoin\n               :                          :     :- CometProject\n               :                          :     :  +- CometBroadcastHashJoin\n               :                          :     :     :- CometProject\n               :                          :     :     :  +- CometBroadcastHashJoin\n               :                          :     :     :     :- CometProject\n               :                          :     :     :     :  +- CometSortMergeJoin\n               :                          :     :     :     :     :- CometSort\n               :                          :     :     :     :     :  +- CometColumnarExchange\n               :                          :     :     :     :     :     +- Filter\n               :                          :     :     :     :     :        +- ColumnarToRow\n               :                          :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                          :     :     :     :     :                 +- ReusedSubquery\n               :                          :     :     :     :     +- CometSort\n               :                          :     :     :     :        +- CometExchange\n               :                          :     :     :     :           +- CometProject\n               :                          :     :     :     :              +- CometFilter\n               :                          :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                          :     :     :     +- CometBroadcastExchange\n               :                          :     :     :        +- CometProject\n               :                          :     :     :           +- CometFilter\n               :                          :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                          :     :     +- CometBroadcastExchange\n               :                          :     :        +- CometProject\n               :                          :     :           +- CometFilter\n               :                          :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n               :                          :     +- CometBroadcastExchange\n               :                          :        +- CometProject\n               :                          :           +- CometFilter\n       
        :                          :              +- CometNativeScan parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n               :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometNativeScan parquet spark_catalog.default.promotion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometColumnarExchange\n               :                    :              :     :     :     :     :     +- Filter\n               :                    :              :     :     :     :     :        +- ColumnarToRow\n               :                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :              :     :     :     :     :                 +- SubqueryBroadcast\n               :                    :              :     :     :     :     :                    +- BroadcastExchange\n               :                    :              :     :     :     :     :                       +- CometNativeColumnarToRow\n               :                    :              :     :     :     :     :                          +- CometProject\n               :                    :              :     :     :     :     :                             +- CometFilter\n               :                    :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n               :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- 
CometNativeScan parquet spark_catalog.default.store_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                    :              :     :     :        +- CometProject\n               :                    :              :     :     :           +- CometFilter\n               :                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometColumnarExchange\n               :                    :              :     :     :     :     :     +- Filter\n               :                    :              :     :     :     :     :        +- ColumnarToRow\n               :                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                    :              :     :     :     :     :                 +- ReusedSubquery\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n           
    :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                    :              :     :     :        +- CometProject\n               :                    :              :     :     :           +- CometFilter\n               :                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n               :                    +- CometHashAggregate\n               :                       +- CometExchange\n               :                          +- CometHashAggregate\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometProject\n               :                                   :  +- CometBroadcastHashJoin\n               :                                   :     :- CometProject\n               :                                   :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :- CometProject\n               :                                   :     :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :     :- CometProject\n               :                                   :     :     :     :  +- CometSortMergeJoin\n               :                                   :     :     :     :     :- CometSort\n               :                                   :     :     :     :     :  +- CometColumnarExchange\n               :                                   :     :     :     :     :     +- Filter\n               :                                   :     :     :     :     :        +- ColumnarToRow\n               :                                   :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n               :                                   :     :     :     :     :                 +- ReusedSubquery\n               :                                   :     :     :     :     +- CometSort\n               :     
                              :     :     :     :        +- CometExchange\n               :                                   :     :     :     :           +- CometProject\n               :                                   :     :     :     :              +- CometFilter\n               :                                   :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n               :                                   :     :     :     +- CometBroadcastExchange\n               :                                   :     :     :        +- CometProject\n               :                                   :     :     :           +- CometFilter\n               :                                   :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n               :                                   :     :     +- CometBroadcastExchange\n               :                                   :     :        +- CometProject\n               :                                   :     :           +- CometFilter\n               :                                   :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n               :                                   :     +- CometBroadcastExchange\n               :                                   :        +- CometProject\n               :                                   :           +- CometFilter\n               :                                   :              +- CometNativeScan parquet spark_catalog.default.item\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :                                            +- CometNativeScan parquet spark_catalog.default.promotion\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometColumnarExchange\n                                    :              :     :     :     :     :     +- Filter\n                                    :             
 :     :     :     :     :        +- ColumnarToRow\n                                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :              :     :     :     :     :                 +- SubqueryBroadcast\n                                    :              :     :     :     :     :                    +- BroadcastExchange\n                                    :              :     :     :     :     :                       +- CometNativeColumnarToRow\n                                    :              :     :     :     :     :                          +- CometProject\n                                    :              :     :     :     :     :                             +- CometFilter\n                                    :              :     :     :     :     :                                +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.store_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometNativeScan parquet spark_catalog.default.store\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                 
                   :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometColumnarExchange\n                                    :              :     :     :     :     :     +- Filter\n                                    :              :     :     :     :     :        +- ColumnarToRow\n                                    :              :     :     :     :     :           +-  Scan parquet spark_catalog.default.catalog_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                    :              :     :     :     :     :                 +- ReusedSubquery\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.catalog_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometNativeScan parquet spark_catalog.default.catalog_page\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometNativeScan parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometNativeScan parquet spark_catalog.default.promotion\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- 
CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometProject\n                                                   :     :  +- CometBroadcastHashJoin\n                                                   :     :     :- CometProject\n                                                   :     :     :  +- CometBroadcastHashJoin\n                                                   :     :     :     :- CometProject\n                                                   :     :     :     :  +- CometSortMergeJoin\n                                                   :     :     :     :     :- CometSort\n                                                   :     :     :     :     :  +- CometColumnarExchange\n                                                   :     :     :     :     :     +- Filter\n                                                   :     :     :     :     :        +- ColumnarToRow\n                                                   :     :     :     :     :           +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                   :     :     :     :     :                 +- ReusedSubquery\n                                                   :     :     :     :     +- CometSort\n                                                   :     :     :     :        +- CometExchange\n                                                   :     :     :     :           +- CometProject\n                                                   :     :     :     :              +- CometFilter\n                                                   :     :     :     :                 +- CometNativeScan parquet spark_catalog.default.web_returns\n                                                   :     :     :     +- CometBroadcastExchange\n                                                   :     :     :        +- CometProject\n                                                   :     :     :           +- CometFilter\n                                                   :     :     :              +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                   :     :     +- CometBroadcastExchange\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometNativeScan parquet spark_catalog.default.web_site\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometNativeScan parquet spark_catalog.default.item\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometNativeScan parquet spark_catalog.default.promotion\n\nComet accelerated 356 out of 386 eligible operators (92%). 
Final plan contains 13 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q80a.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometTakeOrderedAndProject\n   +- CometHashAggregate\n      +- CometExchange\n         +- CometHashAggregate\n            +- CometUnion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometUnion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometExchange\n               :           :              :     :     :     :     :     +- CometFilter\n               :           :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :           :              :     :     :     :     :              +- SubqueryBroadcast\n               :           :              :     :     :     :     :                 +- BroadcastExchange\n               :           :              :     :     :     :     :                    +- CometNativeColumnarToRow\n               :           :              :     :     :     :     :                       +- CometProject\n               :           :              :     :     :     :     :                          +- CometFilter\n               :           :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :           +- CometFilter\n               :           :              :     :              +- CometScan 
[native_iceberg_compat] parquet spark_catalog.default.store\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           :              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :           :- CometHashAggregate\n               :           :  +- CometExchange\n               :           :     +- CometHashAggregate\n               :           :        +- CometProject\n               :           :           +- CometBroadcastHashJoin\n               :           :              :- CometProject\n               :           :              :  +- CometBroadcastHashJoin\n               :           :              :     :- CometProject\n               :           :              :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :- CometProject\n               :           :              :     :     :  +- CometBroadcastHashJoin\n               :           :              :     :     :     :- CometProject\n               :           :              :     :     :     :  +- CometSortMergeJoin\n               :           :              :     :     :     :     :- CometSort\n               :           :              :     :     :     :     :  +- CometExchange\n               :           :              :     :     :     :     :     +- CometFilter\n               :           :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :           :              :     :     :     :     :              +- ReusedSubquery\n               :           :              :     :     :     :     +- CometSort\n               :           :              :     :     :     :        +- CometExchange\n               :           :              :     :     :     :           +- CometProject\n               :           :              :     :     :     :              +- CometFilter\n               :           :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :           :              :     :     :     +- CometBroadcastExchange\n               :           :              :     :     :        +- CometProject\n               :           :              :     :     :           +- CometFilter\n               :           :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :           :              :     :     +- CometBroadcastExchange\n               :           :              :     :        +- CometProject\n               :           :              :     :           +- CometFilter\n               :           :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :           :              :     +- CometBroadcastExchange\n               :           :              :        +- CometProject\n               :           
:              :           +- CometFilter\n               :           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :           :              +- CometBroadcastExchange\n               :           :                 +- CometProject\n               :           :                    +- CometFilter\n               :           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :           +- CometHashAggregate\n               :              +- CometExchange\n               :                 +- CometHashAggregate\n               :                    +- CometProject\n               :                       +- CometBroadcastHashJoin\n               :                          :- CometProject\n               :                          :  +- CometBroadcastHashJoin\n               :                          :     :- CometProject\n               :                          :     :  +- CometBroadcastHashJoin\n               :                          :     :     :- CometProject\n               :                          :     :     :  +- CometBroadcastHashJoin\n               :                          :     :     :     :- CometProject\n               :                          :     :     :     :  +- CometSortMergeJoin\n               :                          :     :     :     :     :- CometSort\n               :                          :     :     :     :     :  +- CometExchange\n               :                          :     :     :     :     :     +- CometFilter\n               :                          :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                          :     :     :     :     :              +- ReusedSubquery\n               :                          :     :     :     :     +- CometSort\n               :                          :     :     :     :        +- CometExchange\n               :                          :     :     :     :           +- CometProject\n               :                          :     :     :     :              +- CometFilter\n               :                          :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :                          :     :     :     +- CometBroadcastExchange\n               :                          :     :     :        +- CometProject\n               :                          :     :     :           +- CometFilter\n               :                          :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                          :     :     +- CometBroadcastExchange\n               :                          :     :        +- CometProject\n               :                          :     :           +- CometFilter\n               :                          :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               :                          :     +- CometBroadcastExchange\n               :                          :        +- CometProject\n               :                          :           +- CometFilter\n               :                          :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                          +- CometBroadcastExchange\n  
             :                             +- CometProject\n               :                                +- CometFilter\n               :                                   +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :- CometHashAggregate\n               :  +- CometExchange\n               :     +- CometHashAggregate\n               :        +- CometHashAggregate\n               :           +- CometExchange\n               :              +- CometHashAggregate\n               :                 +- CometUnion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometExchange\n               :                    :              :     :     :     :     :     +- CometFilter\n               :                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n               :                    :              :     :     :     :     :              +- SubqueryBroadcast\n               :                    :              :     :     :     :     :                 +- BroadcastExchange\n               :                    :              :     :     :     :     :                    +- CometNativeColumnarToRow\n               :                    :              :     :     :     :     :                       +- CometProject\n               :                    :              :     :     :     :     :                          +- CometFilter\n               :                    :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n               :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                    :              :     :     :        +- CometProject\n               :                    :              :     
:     :           +- CometFilter\n               :                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :                    :- CometHashAggregate\n               :                    :  +- CometExchange\n               :                    :     +- CometHashAggregate\n               :                    :        +- CometProject\n               :                    :           +- CometBroadcastHashJoin\n               :                    :              :- CometProject\n               :                    :              :  +- CometBroadcastHashJoin\n               :                    :              :     :- CometProject\n               :                    :              :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :- CometProject\n               :                    :              :     :     :  +- CometBroadcastHashJoin\n               :                    :              :     :     :     :- CometProject\n               :                    :              :     :     :     :  +- CometSortMergeJoin\n               :                    :              :     :     :     :     :- CometSort\n               :                    :              :     :     :     :     :  +- CometExchange\n               :                    :              :     :     :     :     :     +- CometFilter\n               :                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n               :                    :              :     :     :     :     :              +- ReusedSubquery\n               :                    :              :     :     :     :     +- CometSort\n               :                    :              :     :     :     :        +- CometExchange\n               :                    :              :     :     :     :           +- CometProject\n               :                    :              :     :     :     :              +- CometFilter\n               :                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n               :                    :              :     :     :     +- CometBroadcastExchange\n               :                  
  :              :     :     :        +- CometProject\n               :                    :              :     :     :           +- CometFilter\n               :                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                    :              :     :     +- CometBroadcastExchange\n               :                    :              :     :        +- CometProject\n               :                    :              :     :           +- CometFilter\n               :                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n               :                    :              :     +- CometBroadcastExchange\n               :                    :              :        +- CometProject\n               :                    :              :           +- CometFilter\n               :                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                    :              +- CometBroadcastExchange\n               :                    :                 +- CometProject\n               :                    :                    +- CometFilter\n               :                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               :                    +- CometHashAggregate\n               :                       +- CometExchange\n               :                          +- CometHashAggregate\n               :                             +- CometProject\n               :                                +- CometBroadcastHashJoin\n               :                                   :- CometProject\n               :                                   :  +- CometBroadcastHashJoin\n               :                                   :     :- CometProject\n               :                                   :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :- CometProject\n               :                                   :     :     :  +- CometBroadcastHashJoin\n               :                                   :     :     :     :- CometProject\n               :                                   :     :     :     :  +- CometSortMergeJoin\n               :                                   :     :     :     :     :- CometSort\n               :                                   :     :     :     :     :  +- CometExchange\n               :                                   :     :     :     :     :     +- CometFilter\n               :                                   :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n               :                                   :     :     :     :     :              +- ReusedSubquery\n               :                                   :     :     :     :     +- CometSort\n               :                                   :     :     :     :        +- CometExchange\n               :                                   :     :     :     :           +- CometProject\n               :                                   :     :     :     :              +- CometFilter\n               :                                   :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n               :   
                                :     :     :     +- CometBroadcastExchange\n               :                                   :     :     :        +- CometProject\n               :                                   :     :     :           +- CometFilter\n               :                                   :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n               :                                   :     :     +- CometBroadcastExchange\n               :                                   :     :        +- CometProject\n               :                                   :     :           +- CometFilter\n               :                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n               :                                   :     +- CometBroadcastExchange\n               :                                   :        +- CometProject\n               :                                   :           +- CometFilter\n               :                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n               :                                   +- CometBroadcastExchange\n               :                                      +- CometProject\n               :                                         +- CometFilter\n               :                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometHashAggregate\n                           +- CometExchange\n                              +- CometHashAggregate\n                                 +- CometUnion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometExchange\n                                    :              :     :     :     :     :     +- CometFilter\n                                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :              :     :     :     :     :              +- SubqueryBroadcast\n                                    :              :     :     :     :     :                 +- BroadcastExchange\n                               
     :              :     :     :     :     :                    +- CometNativeColumnarToRow\n                                    :              :     :     :     :     :                       +- CometProject\n                                    :              :     :     :     :     :                          +- CometFilter\n                                    :              :     :     :     :     :                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                                    :- CometHashAggregate\n                                    :  +- CometExchange\n                                    :     +- CometHashAggregate\n                                    :        +- CometProject\n                                    :           +- CometBroadcastHashJoin\n                                    :              :- CometProject\n                                    :              :  +- CometBroadcastHashJoin\n                                    :              :     :- CometProject\n                                    :              :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :- CometProject\n                                    :              :     :     :  +- CometBroadcastHashJoin\n                                    :              :     :     :     :- CometProject\n                                
    :              :     :     :     :  +- CometSortMergeJoin\n                                    :              :     :     :     :     :- CometSort\n                                    :              :     :     :     :     :  +- CometExchange\n                                    :              :     :     :     :     :     +- CometFilter\n                                    :              :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_sales\n                                    :              :     :     :     :     :              +- ReusedSubquery\n                                    :              :     :     :     :     +- CometSort\n                                    :              :     :     :     :        +- CometExchange\n                                    :              :     :     :     :           +- CometProject\n                                    :              :     :     :     :              +- CometFilter\n                                    :              :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_returns\n                                    :              :     :     :     +- CometBroadcastExchange\n                                    :              :     :     :        +- CometProject\n                                    :              :     :     :           +- CometFilter\n                                    :              :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :              :     :     +- CometBroadcastExchange\n                                    :              :     :        +- CometProject\n                                    :              :     :           +- CometFilter\n                                    :              :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.catalog_page\n                                    :              :     +- CometBroadcastExchange\n                                    :              :        +- CometProject\n                                    :              :           +- CometFilter\n                                    :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    :              +- CometBroadcastExchange\n                                    :                 +- CometProject\n                                    :                    +- CometFilter\n                                    :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometProject\n                                                   :     :  +- CometBroadcastHashJoin\n                                                   :     :     :- CometProject\n                                                   :     :     :  +- 
CometBroadcastHashJoin\n                                                   :     :     :     :- CometProject\n                                                   :     :     :     :  +- CometSortMergeJoin\n                                                   :     :     :     :     :- CometSort\n                                                   :     :     :     :     :  +- CometExchange\n                                                   :     :     :     :     :     +- CometFilter\n                                                   :     :     :     :     :        +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     :     :     :     :              +- ReusedSubquery\n                                                   :     :     :     :     +- CometSort\n                                                   :     :     :     :        +- CometExchange\n                                                   :     :     :     :           +- CometProject\n                                                   :     :     :     :              +- CometFilter\n                                                   :     :     :     :                 +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_returns\n                                                   :     :     :     +- CometBroadcastExchange\n                                                   :     :     :        +- CometProject\n                                                   :     :     :           +- CometFilter\n                                                   :     :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     :     +- CometBroadcastExchange\n                                                   :     :        +- CometProject\n                                                   :     :           +- CometFilter\n                                                   :     :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_site\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.promotion\n\nComet accelerated 374 out of 386 eligible operators (96%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q86a.native_datafusion/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- Union\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- Project\n                           :              +- BroadcastHashJoin\n                           :                 :- Project\n                           :                 :  +- BroadcastHashJoin\n                           :                 :     :- Filter\n                           :                 :     :  +- ColumnarToRow\n                           :                 :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                 :     :           +- SubqueryBroadcast\n                           :                 :     :              +- BroadcastExchange\n                           :                 :     :                 +- CometNativeColumnarToRow\n                           :                 :     :                    +- CometProject\n                           :                 :     :                       +- CometFilter\n                           :                 :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 :     +- BroadcastExchange\n                           :                 :        +- CometNativeColumnarToRow\n                           :                 :           +- CometProject\n                           :                 :              +- CometFilter\n                           :                 :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                 +- BroadcastExchange\n                           :                    +- CometNativeColumnarToRow\n                           :                       +- CometProject\n                           :                          +- CometFilter\n                           :                             +- CometNativeScan parquet spark_catalog.default.item\n                           :- HashAggregate\n                           :  +- CometNativeColumnarToRow\n                           :     +- CometColumnarExchange\n                           :        +- HashAggregate\n                           :           +- HashAggregate\n                           :              +- CometNativeColumnarToRow\n                           :                 +- CometColumnarExchange\n                           :                    +- HashAggregate\n                           :                       +- Project\n                           :                          +- BroadcastHashJoin\n                           :                             :- Project\n                           :                          
   :  +- BroadcastHashJoin\n                           :                             :     :- Filter\n                           :                             :     :  +- ColumnarToRow\n                           :                             :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                           :                             :     :           +- SubqueryBroadcast\n                           :                             :     :              +- BroadcastExchange\n                           :                             :     :                 +- CometNativeColumnarToRow\n                           :                             :     :                    +- CometProject\n                           :                             :     :                       +- CometFilter\n                           :                             :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             :     +- BroadcastExchange\n                           :                             :        +- CometNativeColumnarToRow\n                           :                             :           +- CometProject\n                           :                             :              +- CometFilter\n                           :                             :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                           :                             +- BroadcastExchange\n                           :                                +- CometNativeColumnarToRow\n                           :                                   +- CometProject\n                           :                                      +- CometFilter\n                           :                                         +- CometNativeScan parquet spark_catalog.default.item\n                           +- HashAggregate\n                              +- CometNativeColumnarToRow\n                                 +- CometColumnarExchange\n                                    +- HashAggregate\n                                       +- HashAggregate\n                                          +- CometNativeColumnarToRow\n                                             +- CometColumnarExchange\n                                                +- HashAggregate\n                                                   +- Project\n                                                      +- BroadcastHashJoin\n                                                         :- Project\n                                                         :  +- BroadcastHashJoin\n                                                         :     :- Filter\n                                                         :     :  +- ColumnarToRow\n                                                         :     :     +-  Scan parquet spark_catalog.default.web_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                                         :     :           +- SubqueryBroadcast\n                                                         :     :              +- BroadcastExchange\n                                                         :     :                 +- CometNativeColumnarToRow\n                                                         :     :                    +- CometProject\n                                       
                  :     :                       +- CometFilter\n                                                         :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         :     +- BroadcastExchange\n                                                         :        +- CometNativeColumnarToRow\n                                                         :           +- CometProject\n                                                         :              +- CometFilter\n                                                         :                 +- CometNativeScan parquet spark_catalog.default.date_dim\n                                                         +- BroadcastExchange\n                                                            +- CometNativeColumnarToRow\n                                                               +- CometProject\n                                                                  +- CometFilter\n                                                                     +- CometNativeScan parquet spark_catalog.default.item\n\nComet accelerated 36 out of 81 eligible operators (44%). Final plan contains 18 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q86a.native_iceberg_compat/extended.txt",
    "content": "TakeOrderedAndProject\n+- Project\n   +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n      +- CometNativeColumnarToRow\n         +- CometSort\n            +- CometExchange\n               +- CometHashAggregate\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometUnion\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometProject\n                           :           +- CometBroadcastHashJoin\n                           :              :- CometProject\n                           :              :  +- CometBroadcastHashJoin\n                           :              :     :- CometFilter\n                           :              :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :              :     :        +- SubqueryBroadcast\n                           :              :     :           +- BroadcastExchange\n                           :              :     :              +- CometNativeColumnarToRow\n                           :              :     :                 +- CometProject\n                           :              :     :                    +- CometFilter\n                           :              :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              :     +- CometBroadcastExchange\n                           :              :        +- CometProject\n                           :              :           +- CometFilter\n                           :              :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :              +- CometBroadcastExchange\n                           :                 +- CometProject\n                           :                    +- CometFilter\n                           :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           :- CometHashAggregate\n                           :  +- CometExchange\n                           :     +- CometHashAggregate\n                           :        +- CometHashAggregate\n                           :           +- CometExchange\n                           :              +- CometHashAggregate\n                           :                 +- CometProject\n                           :                    +- CometBroadcastHashJoin\n                           :                       :- CometProject\n                           :                       :  +- CometBroadcastHashJoin\n                           :                       :     :- CometFilter\n                           :                       :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                           :                       :     :        +- SubqueryBroadcast\n                           :                       :     :           +- BroadcastExchange\n                           :                       :     :              +- 
CometNativeColumnarToRow\n                           :                       :     :                 +- CometProject\n                           :                       :     :                    +- CometFilter\n                           :                       :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       :     +- CometBroadcastExchange\n                           :                       :        +- CometProject\n                           :                       :           +- CometFilter\n                           :                       :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                           :                       +- CometBroadcastExchange\n                           :                          +- CometProject\n                           :                             +- CometFilter\n                           :                                +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                           +- CometHashAggregate\n                              +- CometExchange\n                                 +- CometHashAggregate\n                                    +- CometHashAggregate\n                                       +- CometExchange\n                                          +- CometHashAggregate\n                                             +- CometProject\n                                                +- CometBroadcastHashJoin\n                                                   :- CometProject\n                                                   :  +- CometBroadcastHashJoin\n                                                   :     :- CometFilter\n                                                   :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.web_sales\n                                                   :     :        +- SubqueryBroadcast\n                                                   :     :           +- BroadcastExchange\n                                                   :     :              +- CometNativeColumnarToRow\n                                                   :     :                 +- CometProject\n                                                   :     :                    +- CometFilter\n                                                   :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   :     +- CometBroadcastExchange\n                                                   :        +- CometProject\n                                                   :           +- CometFilter\n                                                   :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                                   +- CometBroadcastExchange\n                                                      +- CometProject\n                                                         +- CometFilter\n                                                            +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n\nComet accelerated 72 out of 81 eligible operators (88%). Final plan contains 4 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q98.native_datafusion/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n            +- CometNativeColumnarToRow\n               +- CometSort\n                  +- CometColumnarExchange\n                     +- HashAggregate\n                        +- CometNativeColumnarToRow\n                           +- CometColumnarExchange\n                              +- HashAggregate\n                                 +- Project\n                                    +- BroadcastHashJoin\n                                       :- Project\n                                       :  +- BroadcastHashJoin\n                                       :     :- Filter\n                                       :     :  +- ColumnarToRow\n                                       :     :     +-  Scan parquet spark_catalog.default.store_sales [COMET: Native DataFusion scan does not support subqueries/dynamic pruning]\n                                       :     :           +- SubqueryBroadcast\n                                       :     :              +- BroadcastExchange\n                                       :     :                 +- CometNativeColumnarToRow\n                                       :     :                    +- CometProject\n                                       :     :                       +- CometFilter\n                                       :     :                          +- CometNativeScan parquet spark_catalog.default.date_dim\n                                       :     +- BroadcastExchange\n                                       :        +- CometNativeColumnarToRow\n                                       :           +- CometProject\n                                       :              +- CometFilter\n                                       :                 +- CometNativeScan parquet spark_catalog.default.item\n                                       +- BroadcastExchange\n                                          +- CometNativeColumnarToRow\n                                             +- CometProject\n                                                +- CometFilter\n                                                   +- CometNativeScan parquet spark_catalog.default.date_dim\n\nComet accelerated 14 out of 28 eligible operators (50%). Final plan contains 7 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-plan-stability/approved-plans-v2_7-spark4_0/q98.native_iceberg_compat/extended.txt",
    "content": "CometNativeColumnarToRow\n+- CometSort\n   +- CometColumnarExchange\n      +- Project\n         +-  Window [COMET: WindowExec is not fully compatible with Spark (Native WindowExec has known correctness issues). To enable it anyway, set spark.comet.operator.WindowExec.allowIncompatible=true. For more information, refer to the Comet Compatibility Guide (https://datafusion.apache.org/comet/user-guide/compatibility.html).]\n            +- CometNativeColumnarToRow\n               +- CometSort\n                  +- CometExchange\n                     +- CometHashAggregate\n                        +- CometExchange\n                           +- CometHashAggregate\n                              +- CometProject\n                                 +- CometBroadcastHashJoin\n                                    :- CometProject\n                                    :  +- CometBroadcastHashJoin\n                                    :     :- CometFilter\n                                    :     :  +- CometScan [native_iceberg_compat] parquet spark_catalog.default.store_sales\n                                    :     :        +- SubqueryBroadcast\n                                    :     :           +- BroadcastExchange\n                                    :     :              +- CometNativeColumnarToRow\n                                    :     :                 +- CometProject\n                                    :     :                    +- CometFilter\n                                    :     :                       +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n                                    :     +- CometBroadcastExchange\n                                    :        +- CometProject\n                                    :           +- CometFilter\n                                    :              +- CometScan [native_iceberg_compat] parquet spark_catalog.default.item\n                                    +- CometBroadcastExchange\n                                       +- CometProject\n                                          +- CometFilter\n                                             +- CometScan [native_iceberg_compat] parquet spark_catalog.default.date_dim\n\nComet accelerated 24 out of 28 eligible operators (85%). Final plan contains 3 transitions between Spark and Comet."
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/extended/q72.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_desc:string,w_warehouse_name:string,d_week_seq:int,no_promo:bigint,promo:bigint,total_cnt:bigint>\n-- !query output\nAcceptable statements kill only important grounds. Various, high rights can lie also maj\tOperations\t5197\t0\t2\t2\nAcceptable statements kill only important grounds. Various, high rights can lie also maj\tSelective,\t5197\t0\t2\t2\nBloody conditions cannot become. Libraries operate enough further good police.\tMatches produce\t5212\t0\t2\t2\nDemocratic friends would represent. British, welcome problems could show never better special classes. Gradually daily sites practice at a men. Genetic, british numbers use\tMatches produce\t5215\t0\t2\t2\nDirect, private interpretations look perhaps under political policies. Casualties can feel only results. Architects\tMatches produce\t5197\t0\t2\t2\nDue conclusions will not attra\tJust good amou\t5213\t0\t2\t2\nFree activities pay particularly from a questions. British engineers complete commonly political doors. Bright, w\tOperations\t5179\t0\t2\t2\nGreat doors could recall too studies. Then dual things kill. Meanwhile interesting appointments should name direct months. Potential theories change constant feet; hot resourc\tSelective,\t5179\t0\t2\t2\nGrey artists think there. Points meet with a p\tJust good amou\t5197\t0\t2\t2\nHere economic words appear widely between a feet. Financial yards used to wo\tSelective,\t5215\t0\t2\t2\nIdeas would allow full-time good m\tJust good amou\t5197\t0\t2\t2\nImages will relax in particular new skills. Urban obligations must not mark completely. Officials must not cope both emotionally dif\tJust good amou\t5195\t0\t2\t2\nImages will relax in particular new skills. Urban obligations must not mark completely. Officials must not cope both emotionally dif\tOperations\t5195\t0\t2\t2\nOther parties will add regional, special owners. Little administrative horses may indicate; \tMatches produce\t5206\t0\t2\t2\nParliamentary, irish achievements come specifically probably normal efforts. Boxes prove activities. Particular spirits display above, hot legs. Willing, warm police w\tMatches produce\t5212\t0\t2\t2\nPatterns hear available, bitter women. Certain, standard proposals m\tOperations\t5205\t0\t2\t2\nYears w\tMatches produce\t5211\t0\t2\t2\nNULL\tJust good amou\t5211\t0\t1\t1\nNULL\tMatches produce\t5202\t0\t1\t1\nNULL\tMatches produce\t5209\t0\t1\t1\nNULL\tOperations\t5198\t0\t1\t1\nNULL\tOperations\t5209\t0\t1\t1\nNULL\tOperations\t5217\t0\t1\t1\nNULL\tSelective,\t5181\t0\t1\t1\nNULL\tSignificantly\t5209\t0\t1\t1\nA\tSelective,\t5194\t0\t1\t1\nA\tSelective,\t5208\t0\t1\t1\nA little local letters think over like a children; nevertheless particular powers damage now suddenly absent prote\tMatches produce\t5215\t0\t1\t1\nA little local letters think over like a children; nevertheless particular powers damage now suddenly absent prote\tSignificantly\t5205\t0\t1\t1\nA lot young materials remain below from a rises. \tOperations\t5186\t0\t1\t1\nAble men become in order today other methods. Slightly royal shoulders would not operate centres. Statutory, social bills may not take other, main stages. Prayers cannot meet. Words m\tSelective,\t5177\t0\t1\t1\nAble men become in order today other methods. Slightly royal shoulders would not operate centres. Statutory, social bills may not take other, main stages. Prayers cannot meet. Words m\tSignificantly\t5177\t0\t1\t1\nAble troubles dust into the styles. 
Independent feet kill wounds. Fundamental months should exploit arms. Massive years read only modern courses; twin forms shall become products. Even h\tSignificantly\t5215\t0\t1\t1\nAble, available problems apply even to a bodies. Patients so\tMatches produce\t5211\t0\t1\t1\nAble, continuous figures see with a patients. Men go more open notes. Different engineers can display. Even strong fortunes cannot put at least general fans; reliable talk\tMatches produce\t5211\t0\t1\t1\nAble, continuous figures see with a patients. Men go more open notes. Different engineers can display. Even strong fortunes cannot put at least general fans; reliable talk\tOperations\t5211\t0\t1\t1\nAbout right clothes must get thoughtfully to a cases. Eastern improvements \tMatches produce\t5202\t0\t1\t1\nAc\tJust good amou\t5205\t0\t1\t1\nAc\tMatches produce\t5183\t0\t1\t1\nAccessible kids insist ministers. Causal, political goods find then other products. Following, average years may want considerably now general causes. N\tSelective,\t5209\t0\t1\t1\nAccidentally wrong communities look more goods. Rural matters recognize. Large, new days go hap\tSelective,\t5178\t0\t1\t1\nAccidents fly bet\tSignificantly\t5167\t0\t1\t1\nAccounts accept\tOperations\t5213\t0\t1\t1\nAccounts could think aspects. Industrial, large\tSelective,\t5211\t0\t1\t1\nAccounts return into a colleagues\tOperations\t5176\t0\t1\t1\nAccused times should stop then notes. Wor\tJust good amou\t5190\t0\t1\t1\nAccused times should stop then notes. Wor\tSelective,\t5218\t0\t1\t1\nAcres could provide. There chronic principles care highest at a details. Little operators look elsewhere long recommendations. Daily poets dig bravely remaining aims. Now willing services appear \tSelective,\t5205\t0\t1\t1\nActions see of course informal phrases. Markedly right men buy honest, additional stations. In order imaginative factors used to move human thanks. Centres shall catch altogether succe\tJust good amou\t5192\t0\t1\t1\nActive studies state away already large shelves. Extremely international appli\tSignificantly\t5208\t0\t1\t1\nActive years refuse all poor others. Old, reluctant friends live new, liberal varieties. English, full hours belong as eve\tMatches produce\t5202\t0\t1\t1\nActs protect forces. Still only options use controls. Less manufacturing cups let well sure, good fires. Sorry companies should die. All rare nurses maintain\tJust good amou\t5212\t0\t1\t1\nActs protect forces. Still only options use controls. Less manufacturing cups let well sure, good fires. Sorry companies should die. All rare nurses maintain\tSelective,\t5200\t0\t1\t1\nActs will not reflect as with the problems. General governments distract new, soft fires. Useful proposals restrict hard trees. Large, black customs go official\tJust good amou\t5201\t0\t1\t1\nActual things should prevent there responsible schemes. Others go all undoubtedly increasing things. Small, full samples analys\tSignificantly\t5167\t0\t1\t1\nActually labour schools think about a holes. So disciplinary prov\tSignificantly\t5179\t0\t1\t1\nAcute, old funds must sleep running, different orders. Black, international changes should cope enough multiple brothers. Ago available police attract simply internationa\tMatches produce\t5207\t0\t1\t1\nAdditional, human standards should not dream also silly forms. More independent friends may believ\tSignificantly\t5203\t0\t1\t1\nAdditionally effective sea\tOperations\t5182\t0\t1\t1\nAddresses retain once more applicable events. 
Following blocks follow for a develo\tSelective,\t5169\t0\t1\t1\nAdequate days take more sometimes high laws. Famous, exceptional gates must pay loans. Other, slow methods see superb aspirations. Possible services apply holes. Computers can free sole times. Costly\tMatches produce\t5201\t0\t1\t1\nAdults might not surrender doubtful, upper industries; earnings insist m\tOperations\t5181\t0\t1\t1\nAdults will foresee most left, social children. Different eyes make personal counties. Readers would not admit more musical proceedings; titles take here away fast institutions; bird\tSignificantly\t5215\t0\t1\t1\nAdvanced, foreign stories would greet always corporate games. Recent dev\tSignificantly\t5182\t0\t1\t1\nAgain brief things should remember only in a patients. Deals reply soon other points. Increasingly religious times necessitate farther troops. Both added programmes must come wonderfully solid pupi\tMatches produce\t5212\t0\t1\t1\nAgain brief things should remember only in a patients. Deals reply soon other points. Increasingly religious times necessitate farther troops. Both added programmes must come wonderfully solid pupi\tOperations\t5212\t0\t1\t1\nAgain economic objections spend suddenly urgently worried boats. Pupils talk yet nonethele\tSignificantly\t5205\t0\t1\t1\nAgain major troubles create new, other children. Fair interactions may telephone\tMatches produce\t5188\t0\t1\t1\nAgain scottish women shall ag\tJust good amou\t5204\t0\t1\t1\nAgain small aspects think apparently only sole problems; thick private years may not matt\tOperations\t5197\t0\t1\t1\nAgencies join enough keen examples. Accu\tJust good amou\t5214\t0\t1\t1\nAgo light fingers blame enough green, british years. Children go general stands. Economic, great numbers affect deputies. Purposes urge annually. Always electrical ways vote judicial, regular ac\tJust good amou\t5196\t0\t1\t1\nAgo light fingers blame enough green, british years. Children go general stands. Economic, great numbers affect deputies. Purposes urge annually. Always electrical ways vote judicial, regular ac\tJust good amou\t5208\t0\t1\t1\nAgo new arguments accept previously european parents; fo\tMatches produce\t5177\t0\t1\t1\nAgo new arguments accept previously european parents; fo\tSignificantly\t5177\t0\t1\t1\nAgricultural players shall smoke. So full reasons undertake \tJust good amou\t5218\t0\t1\t1\nAgricultural sites will not provide skills. Again\tOperations\t5217\t0\t1\t1\nAll possible injuries maintain strictly mu\tSignificantly\t5211\t0\t1\t1\nAll right other details might distrib\tOperations\t5186\t0\t1\t1\nAlmost little words used to explain annual men. Opportunities borrow. Quickly general alternatives see prices. Again green firms must not help most so able authorities; thankfully married hours mig\tJust good amou\t5204\t0\t1\t1\nAlmost positive couples draw into a orders. Glasses adopt shortly mathematical years. Ever big women set details. Successful, joint factors shall allow never at all overseas deals. Topics would use n\tSignificantly\t5210\t0\t1\t1\nAlmost scientific flats divide easy, able hours. Broad years sit also through a circumstances; more \tJust good amou\t5181\t0\t1\t1\nAlone healthy sales might meet far other roots. French groups look up to a workers. Fully average miners would walk inadequate considerations. Small, sure goods may admire more app\tMatches produce\t5207\t0\t1\t1\nAlone new copies discuss to a dates; all black machines get just just royal years. 
For example free weeks underestimate accurately individual mountains. National, delicious\tJust good amou\t5211\t0\t1\t1\nAlone sole services keep only; stairs shall eliminate for the woods. Methods must need yet. Other students can\tMatches produce\t5199\t0\t1\t1\nAlready early meetings cannot go animals. As comprehensive evenings w\tSelective,\t5201\t0\t1\t1\nAlready empty cups would prove already other white times. Various, able hands live here different countries. Dealers may not cut effectively important institutions. Rich, dramatic t\tSelective,\t5214\t0\t1\t1\nAlready little levels will assume specialist, main costs. By now new miles play meanwhile men; common, economic results achieve furious, sure holes; persistent men die\tOperations\t5218\t0\t1\t1\nAlso aware programmes should send for certain white letters. Eyes would not treat visible centuries; wide, full languages see on behalf of the ages. Guards illuminate feet. Nationally criminal cou\tSignificantly\t5191\t0\t1\t1\nAlso fun tories test of course. Rigid walls indicate then doors. Sometimes large children want just foreign, americ\tMatches produce\t5169\t0\t1\t1\nAlso major pieces resign never. Substan\tMatches produce\t5178\t0\t1\t1\nAlso middle managers can appear originally negotiations. Things must stand distinguished funds; only customers find here also foreign seeds. Green, political\tOperations\t5202\t0\t1\t1\nAlso possible systems could go forward. Local, british babies d\tMatches produce\t5215\t0\t1\t1\nAlso possible systems could go forward. Local, british babies d\tSelective,\t5215\t0\t1\t1\nAlso public dreams see. Then semantic services can allow in a powers. Things must make sections. Spaces would not protect officials. Inappropriate charges would act r\tMatches produce\t5177\t0\t1\t1\nAlso quiet users fall. Other, current sources would c\tOperations\t5206\t0\t1\t1\nAlso rational children achieve other trends. Rather conventional commitments ought to refuse then current olympic members; suitably special te\tJust good amou\t5210\t0\t1\t1\nAlso rational children achieve other trends. Rather conventional commitments ought to refuse then current olympic members; suitably special te\tSelective,\t5210\t0\t1\t1\nAlternative sessions might not know here all literary organisers. Ho\tSignificantly\t5213\t0\t1\t1\nAlways cool temperatures meet there social grounds. Threats h\tMatches produce\t5175\t0\t1\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q1.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_customer_id:string>\n-- !query output\nAAAAAAAAAAABAAAA\nAAAAAAAAAAAHBAAA\nAAAAAAAAAAAMAAAA\nAAAAAAAAAAAOAAAA\nAAAAAAAAAABCBAAA\nAAAAAAAAAABEAAAA\nAAAAAAAAAABFAAAA\nAAAAAAAAAACFBAAA\nAAAAAAAAAACFBAAA\nAAAAAAAAAADBBAAA\nAAAAAAAAAADOAAAA\nAAAAAAAAAADPAAAA\nAAAAAAAAAAEABAAA\nAAAAAAAAAAEEAAAA\nAAAAAAAAAAEGBAAA\nAAAAAAAAAAENAAAA\nAAAAAAAAAAFCBAAA\nAAAAAAAAAAFEBAAA\nAAAAAAAAAAFGAAAA\nAAAAAAAAAAFLAAAA\nAAAAAAAAAAFPAAAA\nAAAAAAAAAAGCAAAA\nAAAAAAAAAAGEAAAA\nAAAAAAAAAAGIBAAA\nAAAAAAAAAAGOAAAA\nAAAAAAAAAAHABAAA\nAAAAAAAAAAHGBAAA\nAAAAAAAAAAHHAAAA\nAAAAAAAAAAHMAAAA\nAAAAAAAAAAHPAAAA\nAAAAAAAAAAHPAAAA\nAAAAAAAAAAHPAAAA\nAAAAAAAAAAJAAAAA\nAAAAAAAAAAJEBAAA\nAAAAAAAAAAJMAAAA\nAAAAAAAAAAJPAAAA\nAAAAAAAAAAKBBAAA\nAAAAAAAAAAKGBAAA\nAAAAAAAAAAKHBAAA\nAAAAAAAAAAKLAAAA\nAAAAAAAAAALCAAAA\nAAAAAAAAAALJAAAA\nAAAAAAAAAALJAAAA\nAAAAAAAAAAMABAAA\nAAAAAAAAAAMGAAAA\nAAAAAAAAAAMLAAAA\nAAAAAAAAAAMMAAAA\nAAAAAAAAAANHBAAA\nAAAAAAAAAANMAAAA\nAAAAAAAAAAOBBAAA\nAAAAAAAAAAPDAAAA\nAAAAAAAAAAPKAAAA\nAAAAAAAAAAPLAAAA\nAAAAAAAAABANAAAA\nAAAAAAAAABCCBAAA\nAAAAAAAAABCGAAAA\nAAAAAAAAABDABAAA\nAAAAAAAAABDBAAAA\nAAAAAAAAABDEAAAA\nAAAAAAAAABDEBAAA\nAAAAAAAAABDEBAAA\nAAAAAAAAABDFBAAA\nAAAAAAAAABDOAAAA\nAAAAAAAAABDOAAAA\nAAAAAAAAABEBBAAA\nAAAAAAAAABEDAAAA\nAAAAAAAAABEEAAAA\nAAAAAAAAABEEBAAA\nAAAAAAAAABEIBAAA\nAAAAAAAAABEOAAAA\nAAAAAAAAABFFBAAA\nAAAAAAAAABFHAAAA\nAAAAAAAAABFNAAAA\nAAAAAAAAABFOAAAA\nAAAAAAAAABGAAAAA\nAAAAAAAAABHDBAAA\nAAAAAAAAABHGAAAA\nAAAAAAAAABHGBAAA\nAAAAAAAAABHLAAAA\nAAAAAAAAABIAAAAA\nAAAAAAAAABIBAAAA\nAAAAAAAAABIDBAAA\nAAAAAAAAABIEBAAA\nAAAAAAAAABKLAAAA\nAAAAAAAAABKNAAAA\nAAAAAAAAABKNAAAA\nAAAAAAAAABLJAAAA\nAAAAAAAAABLNAAAA\nAAAAAAAAABMAAAAA\nAAAAAAAAABMEBAAA\nAAAAAAAAABMPAAAA\nAAAAAAAAABNABAAA\nAAAAAAAAABNBAAAA\nAAAAAAAAABNEAAAA\nAAAAAAAAABNEAAAA\nAAAAAAAAABNGAAAA\nAAAAAAAAABNNAAAA\nAAAAAAAAABOEAAAA\nAAAAAAAAABOGBAAA\nAAAAAAAAABPABAAA\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q10.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<cd_gender:string,cd_marital_status:string,cd_education_status:string,cnt1:bigint,cd_purchase_estimate:int,cnt2:bigint,cd_credit_rating:string,cnt3:bigint,cd_dep_count:int,cnt4:bigint,cd_dep_employed_count:int,cnt5:bigint,cd_dep_college_count:int,cnt6:bigint>\n-- !query output\nF\tW\t4 yr Degree         \t1\t4000\t1\tLow Risk  \t1\t4\t1\t6\t1\t4\t1\nM\tD\t4 yr Degree         \t1\t1500\t1\tLow Risk  \t1\t3\t1\t4\t1\t2\t1\nM\tS\tCollege             \t1\t4500\t1\tHigh Risk \t1\t3\t1\t4\t1\t3\t1\nM\tS\tPrimary             \t1\t9500\t1\tLow Risk  \t1\t3\t1\t0\t1\t6\t1\nM\tS\tSecondary           \t1\t3000\t1\tHigh Risk \t1\t1\t1\t1\t1\t4\t1\nM\tU\t4 yr Degree         \t1\t2000\t1\tLow Risk  \t1\t3\t1\t1\t1\t3\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q11.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<customer_preferred_cust_flag:string>\n-- !query output\nNULL\nNULL\nNULL\nNULL\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nN\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\nY\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q12.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_desc:string,i_category:string,i_class:string,i_current_price:decimal(7,2),itemrevenue:decimal(17,2),revenueratio:decimal(38,17)>\n-- !query output\nPrecisely elderly bodies\tBooks                                             \tarts                                              \t1.40\t11.21\t0.01417562243122168\nGreat, contemporary workers would not remove of course cultural values. Then due children might see positive seconds. Significant problems w\tBooks                                             \tarts                                              \t0.55\t515.52\t0.65190159462474560\nForward psychological plants establish closely yet eastern changes. Likewise necessary techniques might drop. Pleasant operations like lonely things; dogs let regions. Forces might not result clearl\tBooks                                             \tarts                                              \t2.43\t11462.46\t14.49487110552909973\nBlack, following services justify by a investors; dirty, different charts will fly however prizes. Temporary, l\tBooks                                             \tarts                                              \t5.56\t3400.60\t4.30023386615632740\nUnited, important objectives put similarly large, previous phenomena; old, present days receive. Happy detectives assi\tBooks                                             \tarts                                              \t1.26\t784.30\t0.99178774958137022\nNaturally new years put serious, negative vehicles. Fin\tBooks                                             \tarts                                              \t3.34\t3319.96\t4.19826043236027781\nHard different differences would not paint even. Together suitable schemes marry directly only open women. Social ca\tBooks                                             \tarts                                              \t2.65\t229.68\t0.29044219090124839\nAnonymous, useful women provoke slightly present persons. Ideas ought to cost almost competent, working parties; aspects provide thr\tBooks                                             \tarts                                              \t6.73\t5752.44\t7.27425669029944833\nPowerful walls will find; there scottish decades must not\tBooks                                             \tarts                                              \t4.16\t434.76\t0.54977641464745189\nCareful privileges ought to live rather to a boards. Possible, broad p\tBooks                                             \tarts                                              \t3.93\t969.48\t1.22595739827125692\nAside legitimate decisions may not stand probably sexual g\tBooks                                             \tarts                                              \t3.88\t349.20\t0.44158138742039332\nSpecially interesting crews continue current, foreign directions; only social men would not call at least political children; circumstances could not understand now in a assessme\tBooks                                             \tarts                                              \t2.13\t3343.99\t4.22864760515441312\nUnlikely states take later in general extra inf\tBooks                                             \tarts                                              \t0.32\t20046.98\t25.35043883731064290\nInches may lose from a problems. Firm, other corporations shall protect ashamed, important practices. Materials shall not make then by a police. 
Weeks used\tBooks                                             \tarts                                              \t0.84\t11869.78\t15.00994822673206253\nRelevant lips take so sure, manufacturing \tBooks                                             \tarts                                              \t8.80\t5995.28\t7.58134037907713537\nExtra, primitive weeks look obviou\tBooks                                             \tarts                                              \t1.18\t425.89\t0.53855984275049058\nMore than key reasons should remain. Words used to offer slowly british\tBooks                                             \tarts                                              \t0.28\t7814.52\t9.88186306879843074\nChildren may turn also above, historical aspects. Surveys migh\tBooks                                             \tarts                                              \t7.22\t544.72\t0.68882649872748182\nTrustees know operations. Now past issues cut today german governments. British lines go critical, individual structures. Tonight adequate problems should no\tBooks                                             \tarts                                              \t4.05\t152.67\t0.19305907908783347\nFloors could not go only for a years. Special reasons shape consequently black, concerned instances. Mutual depths encourage both simple teachers. Cards favour massive \tBooks                                             \tarts                                              \t1.83\t503.10\t0.63619586486597904\nCertain customers think exactly already necessary factories. Awkward doubts shall not forget fine\tBooks                                             \tarts                                              \t0.30\t922.40\t1.16642231316314662\nDeep, big areas take for a facilities. Words could replace certainly cases; lights test. Nevertheless practical arts cross. Fa\tBooks                                             \tarts                                              \t7.37\t230.48\t0.29145383210954253\nNew, reluctant associations see more different, physical symptoms; useful pounds ought to give. Subjects \tBooks                                             \tbusiness                                          \t9.02\t306.85\t0.37352072221391094\nNatural plans might not like n\tBooks                                             \tbusiness                                          \t4.29\t2484.54\t3.02436752540117416\nYears shall want free objects. Old residents use absolutely so residential steps. Letters will share variables. Sure fres\tBooks                                             \tbusiness                                          \t40.76\t90.28\t0.10989555418436330\nSimple, great shops glance from a years. Lessons deepen here previous clients. Increased, silent flights open more great rocks. Brill\tBooks                                             \tbusiness                                          \t8.92\t393.75\t0.47930188812686144\nGroups must not put new, civil moves. Correct men laugh slightly total novels. Relatively public girls set even scott\tBooks                                             \tbusiness                                          \t3.36\t344.10\t0.41886420242400767\nJust young degrees stop posts. More than tight artists buy to a arts. European, essential techniques ought to sell to a offences. 
Sentences be\tBooks                                             \tbusiness                                          \t2.58\t184.08\t0.22407591508925118\nJunior, severe restrictions ought to want principles. Sure,\tBooks                                             \tbusiness                                          \t9.77\t1549.80\t1.88653223166732663\nRemaining subjects handle even only certain ladies; eagerly literary days could not provide. Very different articles cut then. Boys see out of a houses. Governme\tBooks                                             \tbusiness                                          \t9.03\t6463.45\t7.86779374936777799\nRussian windows should see in a weapons. New, considerable branches walk. English regions apply neither alone police; very new years w\tBooks                                             \tbusiness                                          \t2.79\t1635.60\t1.99097439548011320\nLong groups used to create more tiny feet; tools used to dare still\tBooks                                             \tbusiness                                          \t57.04\t10558.62\t12.85274032257534413\nDrugs must compensate dark, modest houses. Small pubs claim naturally accessible relationships. Distinguished\tBooks                                             \tbusiness                                          \t1.66\t31.78\t0.03868498794837246\nSmall, capable centres\tBooks                                             \tbusiness                                          \t2.98\t3219.72\t3.91928349267255446\nPopular, different parameters might take open, used modules. Prisoners use pretty alternative lovers. Annual, professional others spend once true men. Other, small subsidies seem politically\tBooks                                             \tbusiness                                          \t7.25\t3862.88\t4.70218584789203943\nSupreme, free uses handle even in the customers. Other minutes might not make of course social neighbours. So environmental rights come other, able sales\tBooks                                             \tbusiness                                          \t8.08\t10904.74\t13.27406341976510738\nSound, original activities consider quite to a attitudes. In order weak improvements marry available, hard studie\tBooks                                             \tbusiness                                          \t71.27\t385.84\t0.46967324575204627\nClassic issues will draw as european, engl\tBooks                                             \tbusiness                                          \t75.64\t92.64\t0.11276832232653319\nAgain british shareholders see shares. American lives ought to establish horses. Then ideal conservatives might charge even nec\tBooks                                             \tbusiness                                          \t2.44\t5353.50\t6.51667976657054660\nDepartments could seek now for a commu\tBooks                                             \tbusiness                                          \t5.93\t6535.44\t7.95542535045032467\nPaintings must not know primary, royal stands; similar, available others ough\tBooks                                             \tbusiness                                          \t0.39\t303.68\t0.36966196161616580\nMost present eyes restore fat, central relationships; again considerable habits must face in a discussions. Engineers help at all direct occasions. 
Curiously del\tBooks                                             \tbusiness                                          \t80.10\t2096.55\t2.55207713918062566\nChildren would not mean in favour of a parts. Heavy, whole others shall mean on\tBooks                                             \tbusiness                                          \t3.13\t6646.96\t8.09117581791421695\nWhite fees might combine reports. Tr\tBooks                                             \tbusiness                                          \t2.09\t500.56\t0.60931899205277908\nMost new weeks go yet members. Also encouraging delegates make publications. Different competitors run resources; somehow common views m\tBooks                                             \tbusiness                                          \t1.07\t974.26\t1.18594198736882801\nOnly new systems might join late speeches. Materials could stay on a benefits. Corporate regulations must crawl definitely practical deaths. Windows might soothe despite a organisations. Old\tBooks                                             \tbusiness                                          \t0.67\t9075.35\t11.04719337247520503\nProfessional managers form later initial grounds. Conscious, big risks restore. American, full rises say for a systems. Already\tBooks                                             \tbusiness                                          \t5.27\t890.13\t1.08353267219901759\nMemories can earn particularly over quick contexts; alone differences make separate years; irish men mea\tBooks                                             \tbusiness                                          \t4.23\t2059.92\t2.50748836924516678\nOnly, gothic\tBooks                                             \tbusiness                                          \t1.68\t4777.17\t5.81512787530920297\nSilver, critical operations could help howev\tBooks                                             \tbusiness                                          \t5.56\t428.54\t0.52165087273113702\nElse substantial problems slip months. Just unique corporations put vast areas. Supporters like far perfect chapters. Now young reports become wrong trials. Available ears shall\tBooks                                             \tcomputers                                         \t51.46\t2456.28\t1.26602601850774711\nAt least remaining results shall keep cuts. Clients should meet policies. Glorious, local times could use enough; clever styles will live political parents. Single, gradual contracts will describe ho\tBooks                                             \tcomputers                                         \t9.51\t10252.90\t5.28459221471415324\nYears learn here. Days make too. Only moving systems avoid old groups; short movements cannot see respectiv\tBooks                                             \tcomputers                                         \t0.60\t1761.68\t0.90801240749618444\nGa\tBooks                                             \tcomputers                                         \t5.53\t7541.48\t3.88706087988983530\nS\tBooks                                             \tcomputers                                         \t65.78\t4566.02\t2.35343695385979752\nBoxes batt\tBooks                                             \tcomputers                                         \t0.83\t7844.04\t4.04300760915510798\nArtists make times. 
Rather ready functions must pre\tBooks                                             \tcomputers                                         \t5.71\t3694.01\t1.90398194531071494\nLimited, capable cities shall try during the bodies. Specially economic services ought to prevent old area\tBooks                                             \tcomputers                                         \t2.93\t96.18\t0.04957349425150028\nLegs throw then. Old-fashioned develo\tBooks                                             \tcomputers                                         \t2.66\t718.55\t0.37035801928067716\nImportant, educational variables used to appear months. A\tBooks                                             \tcomputers                                         \t2.47\t9922.02\t5.11404867366677942\nMen should not turn shadows. Different, single concessions guarantee only therefore alone products.\tBooks                                             \tcomputers                                         \t8.38\t4194.24\t2.16181256528813215\nEducational, white teachers should not fix. Considerable, other services might not cover today on a forms. Successful genes fall otherwise so\tBooks                                             \tcomputers                                         \t1.65\t14569.68\t7.50956485471198434\nPresent \tBooks                                             \tcomputers                                         \t2.84\t12393.53\t6.38792460190056468\nMultiple, dark feet mean more complex girls; schools may not answer frequently blue assets. Spiritual, dry patients may reply personnel\tBooks                                             \tcomputers                                         \t2.04\t371.40\t0.19142852739662305\nPrivate teachers ap\tBooks                                             \tcomputers                                         \t5.27\t4911.39\t2.53144899076602182\nDaily numbers sense interesting players. General advantages would speak here. Shelves shall know with the reductions. Again wrong mothers provide ways; as hot pr\tBooks                                             \tcomputers                                         \t7.56\t689.26\t0.35526124607807325\nInc, corporate ships slow evident degrees. Chosen, acute prices throw always. Budgets spend points. Commonly large events may mean. Bottles c\tBooks                                             \tcomputers                                         \t68.38\t4.17\t0.00214931868401701\nHowever old hours ma\tBooks                                             \tcomputers                                         \t8.84\t451.53\t0.23272946412330966\nIndeed other actions should provide after a ideas; exhibitio\tBooks                                             \tcomputers                                         \t6.95\t8062.32\t4.15551439149257400\nPerfect days find at all. Crimes might develop hopes. Much socialist grants drive current, useful walls. Emissions open naturally. Combinations shall not know. Tragic things shall not receive just\tBooks                                             \tcomputers                                         \t6.71\t526.49\t0.27136565802113105\nHuman windows take right, variable steps. Years should buy often. Indeed thin figures may beat even up to a cars. Details may tell enough. 
Impossible, sufficient differences ought to return \tBooks                                             \tcomputers                                         \t4.47\t1542.60\t0.79509328584283986\nLeft diff\tBooks                                             \tcomputers                                         \t0.74\t5248.81\t2.70536340572070289\nFriendly, hot computers tax elsewhere units. New, real officials should l\tBooks                                             \tcomputers                                         \t3.19\t12378.72\t6.38029117031536278\nKinds relieve really major practices. Then capable reserves could not approve foundations. Pos\tBooks                                             \tcomputers                                         \t7.23\t1786.41\t0.92075884659828053\nVisible, average words shall not want quite only public boundaries. Cars must telephone proposals. German things ask abilities. Windows cut again favorite offi\tBooks                                             \tcomputers                                         \t6.75\t25255.89\t13.01749550563031296\nOnly increased errors must submit as rich, main \tBooks                                             \tcomputers                                         \t6.94\t2429.79\t1.25237243291071818\nMeals ought to test. Round days might need most urban years. Political, english pages must see on a eyes. Only subsequent women may come better methods; difficult, social childr\tBooks                                             \tcomputers                                         \t7.23\t6457.72\t3.32846480866914548\nSystems cannot see fairly practitioners. Little ca\tBooks                                             \tcomputers                                         \t1.73\t6197.59\t3.19438752586978211\nPast beautiful others might not like more than legislative, small products. Close, wh\tBooks                                             \tcomputers                                         \t3.02\t10232.30\t5.27397447733028024\nMain problems wait properly. Everyday, foreign offenders can worry activities. Social, important shoes will afford okay physical parts. Very\tBooks                                             \tcomputers                                         \t1.40\t2034.30\t1.04852733786470188\nSchools offer quickly others. Further main buildings satisfy sadly great, productive figures. Years contribute acti\tBooks                                             \tcomputers                                         \t4.11\t3975.90\t2.04927485750197523\nTiny, rare leaders mention old, precious areas; students will improve much multiple stars. Even confident solutions will include clearly single women. Please little rights will not mention harder com\tBooks                                             \tcomputers                                         \t1.45\t11902.91\t6.13504720795513872\nGuidelines should investigate so. Usual personnel look now old, modern aspects. Discussions could appear once br\tBooks                                             \tcomputers                                         \t2.44\t640.06\t0.32990237815154161\nFlat pleasant groups would go private, redundant eyes. Main devic\tBooks                                             \tcomputers                                         \t2.83\t8359.41\t4.30864175068552700\nFine users go for a networks. 
Sympathetic, effective industries could not alter particularly other concepts; wooden women used to feel however \tBooks                                             \tcomputers                                         \t5.30\t247.79\t0.12771694885193653\nReal, domestic facilities turn often guilty symptoms. Winds get naturally intense islands. Products shall not travel a little clear shares; improved children may not apply wrong c\tBooks                                             \tcomputers                                         \t5.28\t297.60\t0.15339022550682558\nThen irish champions must look now forward good women. Future, big models sign. Then different o\tBooks                                             \tcooking                                           \t85.81\t6496.48\t10.66582432143788856\nValuable studies should persist so concerned parties. Always polite songs include then from the holes. There conventional areas might not explain theore\tBooks                                             \tcooking                                           \t1.58\t2088.03\t3.42809662430915734\nMeanings occur in a things. Also essential features may not satisfy by the potatoes; happy words change childre\tBooks                                             \tcooking                                           \t3.46\t1496.40\t2.45676728237440221\nThen dominant goods should combine probably american items. Important artists guess only sill\tBooks                                             \tcooking                                           \t6.67\t5638.06\t9.25648312220250073\nIndividual, remarkable services take by the interest\tBooks                                             \tcooking                                           \t6.05\t0.00\t0.00000000000000000\nUltimately senior elections marry at l\tBooks                                             \tcooking                                           \t5.06\t2060.48\t3.38286544372280691\nHence young effects shall not solve however months. In order small activities must not return almost national foods. International decades take contributions. Sessions must see \tBooks                                             \tcooking                                           \t1.43\t242.28\t0.39777170353760369\nPoints trace so simple eyes. Short advisers shall not say limitations. Keys stretch in full now blue wings. Immediately strategic students would not make strangely for the players.\tBooks                                             \tcooking                                           \t1.69\t12518.00\t20.55186637313737424\nGreat pp. will not r\tBooks                                             \tcooking                                           \t1.91\t7268.22\t11.93285558480304571\nProducts may not resist further specif\tBooks                                             \tcooking                                           \t5.37\t2.72\t0.00446565557876128\nSomet\tBooks                                             \tcooking                                           \t7.34\t580.58\t0.95318761614603744\nGenetic properties might describe therefore leaves; right other organisers must not talk even lives; methods carry thus wrong minutes. 
Proud worke\tBooks                                             \tcooking                                           \t1.08\t1445.15\t2.37262579398781566\nUrgent agencies mean over as a plants; then\tBooks                                             \tcooking                                           \t6.47\t1312.79\t2.15531911295662354\nAs light hundreds must establish on a services. Sometimes special results \tBooks                                             \tcooking                                           \t44.82\t3513.30\t5.76808372972867366\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q13.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<avg(ss_quantity):double,avg(ss_ext_sales_price):decimal(11,6),avg(ss_ext_wholesale_cost):decimal(11,6),sum(ss_ext_wholesale_cost):decimal(17,2)>\n-- !query output\n31.875\t2306.480000\t2168.643750\t17349.15\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q14a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,i_brand_id:int,i_class_id:int,i_category_id:int,sum(sales):decimal(38,2),sum(number_sales):bigint>\n-- !query output\nNULL\tNULL\tNULL\tNULL\t674681657.38\t155667\ncatalog\tNULL\tNULL\tNULL\t235666845.52\t46085\ncatalog\t1001001\tNULL\tNULL\t1698135.15\t341\ncatalog\t1001001\t1\tNULL\t1050210.86\t214\ncatalog\t1001001\t1\t1\t42507.62\t8\ncatalog\t1001001\t1\t2\t106083.96\t19\ncatalog\t1001001\t1\t3\t101297.49\t22\ncatalog\t1001001\t1\t4\t68779.51\t25\ncatalog\t1001001\t1\t5\t201671.76\t31\ncatalog\t1001001\t1\t6\t165473.64\t25\ncatalog\t1001001\t1\t7\t57452.17\t9\ncatalog\t1001001\t1\t8\t107124.01\t25\ncatalog\t1001001\t1\t9\t61121.32\t18\ncatalog\t1001001\t1\t10\t138699.38\t32\ncatalog\t1001001\t2\tNULL\t34937.77\t4\ncatalog\t1001001\t2\t3\t34937.77\t4\ncatalog\t1001001\t3\tNULL\t208126.74\t43\ncatalog\t1001001\t3\t1\t65982.89\t13\ncatalog\t1001001\t3\t4\t108943.96\t20\ncatalog\t1001001\t3\t6\t23652.37\t7\ncatalog\t1001001\t3\t7\t9547.52\t3\ncatalog\t1001001\t4\tNULL\t20486.85\t5\ncatalog\t1001001\t4\t7\t20486.85\t5\ncatalog\t1001001\t5\tNULL\t41532.27\t8\ncatalog\t1001001\t5\t7\t11217.24\t4\ncatalog\t1001001\t5\t9\t30315.03\t4\ncatalog\t1001001\t8\tNULL\t139099.45\t25\ncatalog\t1001001\t8\t9\t77814.57\t15\ncatalog\t1001001\t8\t10\t61284.88\t10\ncatalog\t1001001\t10\tNULL\t24174.96\t5\ncatalog\t1001001\t10\t7\t24174.96\t5\ncatalog\t1001001\t13\tNULL\t20891.29\t3\ncatalog\t1001001\t13\t9\t20891.29\t3\ncatalog\t1001001\t14\tNULL\t66038.13\t14\ncatalog\t1001001\t14\t9\t26363.83\t7\ncatalog\t1001001\t14\t10\t39674.30\t7\ncatalog\t1001001\t15\tNULL\t63699.54\t12\ncatalog\t1001001\t15\t10\t63699.54\t12\ncatalog\t1001001\t16\tNULL\t28937.29\t8\ncatalog\t1001001\t16\t9\t28937.29\t8\ncatalog\t1001002\tNULL\tNULL\t3137545.73\t657\ncatalog\t1001002\t1\tNULL\t2597194.10\t539\ncatalog\t1001002\t1\t1\t2597194.10\t539\ncatalog\t1001002\t2\tNULL\t120152.76\t28\ncatalog\t1001002\t2\t1\t120152.76\t28\ncatalog\t1001002\t3\tNULL\t43520.49\t11\ncatalog\t1001002\t3\t1\t43520.49\t11\ncatalog\t1001002\t4\tNULL\t138778.24\t30\ncatalog\t1001002\t4\t1\t138778.24\t30\ncatalog\t1001002\t7\tNULL\t26096.94\t4\ncatalog\t1001002\t7\t1\t26096.94\t4\ncatalog\t1001002\t10\tNULL\t15192.85\t6\ncatalog\t1001002\t10\t1\t15192.85\t6\ncatalog\t1001002\t12\tNULL\t79002.90\t17\ncatalog\t1001002\t12\t1\t79002.90\t17\ncatalog\t1001002\t13\tNULL\t28858.71\t5\ncatalog\t1001002\t13\t1\t28858.71\t5\ncatalog\t1001002\t14\tNULL\t88748.74\t17\ncatalog\t1001002\t14\t1\t88748.74\t17\ncatalog\t1002001\tNULL\tNULL\t1812816.55\t357\ncatalog\t1002001\t1\tNULL\t158520.06\t30\ncatalog\t1002001\t1\t1\t25188.03\t6\ncatalog\t1002001\t1\t2\t77805.61\t11\ncatalog\t1002001\t1\t4\t15320.55\t6\ncatalog\t1002001\t1\t5\t40205.87\t7\ncatalog\t1002001\t2\tNULL\t944972.06\t193\ncatalog\t1002001\t2\t1\t69392.20\t14\ncatalog\t1002001\t2\t2\t143259.76\t24\ncatalog\t1002001\t2\t3\t190639.48\t38\ncatalog\t1002001\t2\t4\t60154.92\t11\ncatalog\t1002001\t2\t6\t72905.53\t16\ncatalog\t1002001\t2\t7\t17237.62\t3\ncatalog\t1002001\t2\t8\t141091.32\t36\ncatalog\t1002001\t2\t9\t141603.60\t25\ncatalog\t1002001\t2\t10\t108687.63\t26\ncatalog\t1002001\t3\tNULL\t61348.83\t9\ncatalog\t1002001\t3\t1\t61348.83\t9\ncatalog\t1002001\t4\tNULL\t126215.12\t28\ncatalog\t1002001\t4\t1\t59437.19\t12\ncatalog\t1002001\t4\t3\t24792.24\t9\ncatalog\t1002001\t4\t5\t41985.69\t7\ncatalog\t1002001\t5\tNULL\t60039.55\t7\ncatalog\t1002001\t5\t9\t60039.55\t7\ncatalog\t1002001\t6\tNULL\t160520.
59\t27\ncatalog\t1002001\t6\t7\t10403.70\t5\ncatalog\t1002001\t6\t9\t100458.19\t15\ncatalog\t1002001\t6\t10\t49658.70\t7\ncatalog\t1002001\t8\tNULL\t18830.90\t5\ncatalog\t1002001\t8\t8\t18830.90\t5\ncatalog\t1002001\t9\tNULL\t31370.49\t5\ncatalog\t1002001\t9\t9\t31370.49\t5\ncatalog\t1002001\t10\tNULL\t40164.96\t9\ncatalog\t1002001\t10\t6\t7759.96\t5\ncatalog\t1002001\t10\t8\t32405.00\t4\ncatalog\t1002001\t11\tNULL\t42852.27\t11\ncatalog\t1002001\t11\t8\t25836.23\t7\ncatalog\t1002001\t11\t9\t17016.04\t4\ncatalog\t1002001\t12\tNULL\t12275.20\t4\ncatalog\t1002001\t12\t7\t12275.20\t4\ncatalog\t1002001\t13\tNULL\t27702.27\t5\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q14b.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,i_brand_id:int,i_class_id:int,i_category_id:int,sales:decimal(28,2),number_sales:bigint,channel:string,i_brand_id:int,i_class_id:int,i_category_id:int,sales:decimal(28,2),number_sales:bigint>\n-- !query output\nstore\t1001001\t1\t1\t618811.76\t171\tstore\t1001001\t1\t1\t1120886.77\t328\nstore\t1001002\t1\t1\t679257.16\t207\tstore\t1001002\t1\t1\t597317.93\t174\nstore\t1002001\t2\t1\t590902.05\t166\tstore\t1002001\t2\t1\t1382457.19\t367\nstore\t1002002\t2\t1\t712644.78\t166\tstore\t1002002\t2\t1\t686649.72\t155\nstore\t1003001\t3\t1\t567349.84\t161\tstore\t1003001\t3\t1\t1120432.56\t309\nstore\t1003002\t3\t1\t775782.67\t214\tstore\t1003002\t3\t1\t561917.26\t173\nstore\t1004001\t4\t1\t602670.66\t187\tstore\t1004001\t4\t1\t1284658.68\t355\nstore\t1004002\t4\t1\t680693.19\t185\tstore\t1004002\t4\t1\t572656.37\t181\nstore\t2001001\t1\t2\t721151.24\t193\tstore\t2001001\t1\t2\t1206198.64\t359\nstore\t2001002\t1\t2\t815659.11\t223\tstore\t2001002\t1\t2\t621816.55\t166\nstore\t2002001\t2\t2\t657826.11\t158\tstore\t2002001\t2\t2\t1229089.24\t366\nstore\t2002002\t2\t2\t738888.97\t213\tstore\t2002002\t2\t2\t701020.25\t187\nstore\t2003001\t3\t2\t856103.36\t229\tstore\t2003001\t3\t2\t1517919.93\t408\nstore\t2003002\t3\t2\t1035024.58\t255\tstore\t2003002\t3\t2\t693873.11\t187\nstore\t2004001\t4\t2\t818535.48\t210\tstore\t2004001\t4\t2\t1584158.70\t423\nstore\t2004002\t4\t2\t902066.26\t231\tstore\t2004002\t4\t2\t652868.55\t184\nstore\t3001001\t1\t3\t690099.22\t187\tstore\t3001001\t1\t3\t1540182.88\t363\nstore\t3001002\t1\t3\t796955.01\t238\tstore\t3001002\t1\t3\t690610.81\t178\nstore\t3002001\t2\t3\t601571.31\t163\tstore\t3002001\t2\t3\t1332847.53\t353\nstore\t3002002\t2\t3\t889691.45\t239\tstore\t3002002\t2\t3\t711790.52\t187\nstore\t3003001\t3\t3\t599641.77\t168\tstore\t3003001\t3\t3\t1218300.45\t329\nstore\t3003002\t3\t3\t785846.50\t222\tstore\t3003002\t3\t3\t727813.84\t205\nstore\t3004001\t4\t3\t636017.18\t169\tstore\t3004001\t4\t3\t1300058.21\t347\nstore\t3004002\t4\t3\t850298.30\t240\tstore\t3004002\t4\t3\t605722.91\t192\nstore\t4001001\t1\t4\t709294.08\t189\tstore\t4001001\t1\t4\t1317772.16\t332\nstore\t4001002\t1\t4\t833503.02\t203\tstore\t4001002\t1\t4\t651619.28\t161\nstore\t4002001\t2\t4\t633353.59\t188\tstore\t4002001\t2\t4\t1235025.47\t341\nstore\t4002002\t2\t4\t928549.29\t239\tstore\t4002002\t2\t4\t724613.68\t193\nstore\t4003001\t3\t4\t645858.10\t173\tstore\t4003001\t3\t4\t1417068.83\t390\nstore\t4003002\t3\t4\t872089.82\t236\tstore\t4003002\t3\t4\t707566.93\t201\nstore\t4004001\t4\t4\t657080.07\t179\tstore\t4004001\t4\t4\t1279798.81\t334\nstore\t4004002\t4\t4\t836177.44\t216\tstore\t4004002\t4\t4\t821961.41\t198\nstore\t5001001\t1\t5\t719353.50\t186\tstore\t5001001\t1\t5\t1189108.84\t310\nstore\t5001002\t1\t5\t842595.01\t234\tstore\t5001002\t1\t5\t788955.51\t198\nstore\t5002001\t2\t5\t773460.67\t218\tstore\t5002001\t2\t5\t1274443.41\t357\nstore\t5002002\t2\t5\t910604.00\t253\tstore\t5002002\t2\t5\t649585.02\t178\nstore\t5003001\t3\t5\t699927.22\t177\tstore\t5003001\t3\t5\t1388232.27\t389\nstore\t5003002\t3\t5\t795797.08\t221\tstore\t5003002\t3\t5\t640802.27\t168\nstore\t5004001\t4\t5\t746611.79\t186\tstore\t5004001\t4\t5\t1149863.23\t334\nstore\t5004002\t4\t5\t661403.11\t175\tstore\t5004002\t4\t5\t522457.04\t139\nstore\t6001001\t1\t6\t32952.14\t10\tstore\t6001001\t1\t6\t100674.40\t25\nstore\t6001002\t1\t6\t32363.63\t11\tstore\t6001002\t1\t6\t21192.44\t6\nstore\t6001003\t1\t6\t32105.7
2\t12\tstore\t6001003\t1\t6\t49799.29\t16\nstore\t6001004\t1\t6\t69366.44\t18\tstore\t6001004\t1\t6\t33159.67\t12\nstore\t6001005\t1\t6\t39316.28\t13\tstore\t6001005\t1\t6\t176957.80\t40\nstore\t6001006\t1\t6\t33851.16\t8\tstore\t6001006\t1\t6\t59928.72\t18\nstore\t6001007\t1\t6\t52934.26\t10\tstore\t6001007\t1\t6\t68730.18\t15\nstore\t6001008\t1\t6\t61470.12\t17\tstore\t6001008\t1\t6\t21110.81\t7\nstore\t6002001\t2\t6\t58059.46\t12\tstore\t6002001\t2\t6\t118560.37\t25\nstore\t6002002\t2\t6\t45696.16\t15\tstore\t6002002\t2\t6\t98188.35\t25\nstore\t6002003\t2\t6\t27662.18\t10\tstore\t6002003\t2\t6\t52033.85\t19\nstore\t6002004\t2\t6\t49212.09\t11\tstore\t6002004\t2\t6\t23087.83\t7\nstore\t6002005\t2\t6\t54618.54\t15\tstore\t6002005\t2\t6\t94438.29\t22\nstore\t6002006\t2\t6\t4768.74\t6\tstore\t6002006\t2\t6\t41696.39\t7\nstore\t6002007\t2\t6\t23765.55\t6\tstore\t6002007\t2\t6\t100119.20\t29\nstore\t6002008\t2\t6\t26747.76\t5\tstore\t6002008\t2\t6\t19965.51\t3\nstore\t6003001\t3\t6\t17874.97\t4\tstore\t6003001\t3\t6\t39734.27\t10\nstore\t6003004\t3\t6\t36845.45\t13\tstore\t6003004\t3\t6\t28332.73\t5\nstore\t6003005\t3\t6\t33876.00\t12\tstore\t6003005\t3\t6\t149594.43\t34\nstore\t6003006\t3\t6\t78594.03\t17\tstore\t6003006\t3\t6\t80057.92\t16\nstore\t6003007\t3\t6\t90969.04\t23\tstore\t6003007\t3\t6\t130880.43\t32\nstore\t6003008\t3\t6\t51335.82\t14\tstore\t6003008\t3\t6\t45068.89\t16\nstore\t6004001\t4\t6\t14673.04\t4\tstore\t6004001\t4\t6\t84488.05\t22\nstore\t6004002\t4\t6\t41801.05\t15\tstore\t6004002\t4\t6\t31527.39\t12\nstore\t6004003\t4\t6\t15735.99\t8\tstore\t6004003\t4\t6\t102005.55\t23\nstore\t6004004\t4\t6\t70293.74\t15\tstore\t6004004\t4\t6\t16694.72\t7\nstore\t6004006\t4\t6\t98506.28\t26\tstore\t6004006\t4\t6\t40380.88\t13\nstore\t6004007\t4\t6\t7974.20\t5\tstore\t6004007\t4\t6\t56390.64\t19\nstore\t6004008\t4\t6\t93726.06\t22\tstore\t6004008\t4\t6\t49454.40\t15\nstore\t6005001\t5\t6\t25935.02\t8\tstore\t6005001\t5\t6\t76459.38\t18\nstore\t6005002\t5\t6\t27560.91\t8\tstore\t6005002\t5\t6\t29874.66\t8\nstore\t6005003\t5\t6\t76424.40\t23\tstore\t6005003\t5\t6\t120380.82\t37\nstore\t6005004\t5\t6\t61026.67\t15\tstore\t6005004\t5\t6\t32867.59\t7\nstore\t6005005\t5\t6\t49398.50\t13\tstore\t6005005\t5\t6\t125117.31\t33\nstore\t6005006\t5\t6\t40539.82\t9\tstore\t6005006\t5\t6\t24310.11\t8\nstore\t6005007\t5\t6\t18454.21\t5\tstore\t6005007\t5\t6\t52454.65\t14\nstore\t6005008\t5\t6\t71116.83\t19\tstore\t6005008\t5\t6\t40388.90\t6\nstore\t6006001\t6\t6\t26990.02\t8\tstore\t6006001\t6\t6\t19931.96\t5\nstore\t6006003\t6\t6\t39807.40\t6\tstore\t6006003\t6\t6\t100194.33\t22\nstore\t6006004\t6\t6\t126180.26\t29\tstore\t6006004\t6\t6\t44060.41\t16\nstore\t6006005\t6\t6\t24214.85\t8\tstore\t6006005\t6\t6\t79947.56\t24\nstore\t6006006\t6\t6\t59581.58\t21\tstore\t6006006\t6\t6\t45979.99\t11\nstore\t6006008\t6\t6\t81635.42\t14\tstore\t6006008\t6\t6\t26221.85\t8\nstore\t6007001\t7\t6\t15407.87\t3\tstore\t6007001\t7\t6\t14185.50\t7\nstore\t6007002\t7\t6\t77223.28\t17\tstore\t6007002\t7\t6\t21012.93\t7\nstore\t6007003\t7\t6\t30617.75\t9\tstore\t6007003\t7\t6\t21503.75\t7\nstore\t6007004\t7\t6\t35752.37\t11\tstore\t6007004\t7\t6\t39378.54\t9\nstore\t6007005\t7\t6\t49598.38\t16\tstore\t6007005\t7\t6\t102247.83\t29\nstore\t6007006\t7\t6\t43099.21\t12\tstore\t6007006\t7\t6\t17599.96\t11\nstore\t6007007\t7\t6\t36750.53\t10\tstore\t6007007\t7\t6\t47280.71\t19\nstore\t6007008\t7\t6\t63116.82\t21\tstore\t6007008\t7\t6\t16366.95\t10\nstore\t6008001\t8\t6\t78930.46\t18\tstore\t6008001\t8\t6\t123392.71\t36\nstore\t
6008002\t8\t6\t20148.09\t5\tstore\t6008002\t8\t6\t98780.67\t17\nstore\t6008004\t8\t6\t41158.44\t15\tstore\t6008004\t8\t6\t7371.54\t4\nstore\t6008005\t8\t6\t52094.82\t11\tstore\t6008005\t8\t6\t153697.84\t42\nstore\t6008006\t8\t6\t40340.12\t8\tstore\t6008006\t8\t6\t32564.87\t11\nstore\t6008007\t8\t6\t8554.42\t2\tstore\t6008007\t8\t6\t40138.79\t9\nstore\t6008008\t8\t6\t26928.47\t7\tstore\t6008008\t8\t6\t18485.16\t4\nstore\t6009001\t9\t6\t69811.04\t16\tstore\t6009001\t9\t6\t136769.36\t32\nstore\t6009002\t9\t6\t31689.61\t14\tstore\t6009002\t9\t6\t63253.41\t19\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q15.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<ca_zip:string,sum(cs_sales_price):decimal(17,2)>\n-- !query output\nNULL\t2519.19\n30059     \t812.16\n30069     \t2248.42\n30150     \t157.03\n30162     \t2039.97\n30169     \t1670.46\n30191     \t544.76\n30399     \t638.05\n30411     \t771.09\n30499     \t1826.69\n30587     \t3142.51\n30945     \t159.37\n30965     \t331.74\n31087     \t1264.39\n31218     \t1221.13\n31289     \t366.49\n31387     \t1650.86\n31521     \t490.31\n31675     \t1909.58\n31711     \t458.28\n31749     \t1229.27\n31757     \t1207.80\n31904     \t1584.89\n31933     \t829.70\n31952     \t794.71\n32293     \t1025.43\n32477     \t95.37\n32808     \t1306.54\n32812     \t209.90\n32898     \t448.94\n33003     \t236.71\n33199     \t385.03\n33372     \t170.95\n33394     \t842.14\n33445     \t1084.78\n33511     \t447.67\n33604     \t616.64\n33951     \t2103.68\n33957     \t319.38\n34098     \t288.86\n34107     \t2903.74\n34167     \t717.59\n34244     \t694.54\n34289     \t185.70\n34338     \t411.46\n34466     \t1736.55\n34536     \t2259.62\n34593     \t346.48\n34694     \t592.38\n34843     \t461.27\n34854     \t448.63\n35038     \t1972.67\n35124     \t415.20\n35281     \t538.84\n35709     \t1008.27\n35752     \t1067.71\n35804     \t1004.19\n35817     \t418.39\n36060     \t659.72\n36074     \t509.22\n36098     \t2175.73\n36192     \t1679.14\n36277     \t554.20\n36534     \t982.64\n36557     \t437.46\n36787     \t2030.37\n36788     \t357.97\n36867     \t649.86\n36871     \t551.07\n36971     \t473.15\n36997     \t953.02\n37057     \t832.78\n37411     \t447.31\n37683     \t1675.77\n37745     \t689.08\n37752     \t871.12\n37838     \t1238.05\n37940     \t257.25\n38014     \t1047.08\n38059     \t374.46\n38075     \t1052.41\n38095     \t935.02\n38354     \t310.51\n38370     \t2677.66\n38371     \t1890.06\n38482     \t1035.92\n38579     \t1957.82\n38605     \t652.30\n38721     \t1593.98\n38784     \t739.06\n38828     \t832.91\n38877     \t669.68\n38883     \t1743.88\n39003     \t400.20\n39101     \t481.37\n39145     \t633.33\n39231     \t576.23\n39237     \t895.30\n39275     \t622.01\n39303     \t825.38\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q16.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<order count :bigint,total shipping cost :decimal(17,2),total net profit :decimal(17,2)>\n-- !query output\n280\t1300478.25\t-285761.26\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q17.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,i_item_desc:string,s_state:string,store_sales_quantitycount:bigint,store_sales_quantityave:double,store_sales_quantitystdev:double,store_sales_quantitycov:double,as_store_returns_quantitycount:bigint,as_store_returns_quantityave:double,as_store_returns_quantitystdev:double,store_returns_quantitycov:double,catalog_sales_quantitycount:bigint,catalog_sales_quantityave:double,catalog_sales_quantitystdev:double,catalog_sales_quantitycov:double>\n-- !query output\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q18.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,ca_country:string,ca_state:string,ca_county:string,agg1:decimal(16,6),agg2:decimal(16,6),agg3:decimal(16,6),agg4:decimal(16,6),agg5:decimal(16,6),agg6:decimal(16,6),agg7:decimal(16,6)>\n-- !query output\nNULL\tNULL\tNULL\tNULL\t50.804533\t99.251888\t256.393484\t49.082603\t-338.277102\t1957.826868\t3.274912\nAAAAAAAAAACDAAAA\tNULL\tNULL\tNULL\t8.000000\t204.490000\t0.000000\t47.030000\t-299.760000\t1948.000000\t2.000000\nAAAAAAAAAADCAAAA\tNULL\tNULL\tNULL\t46.000000\t178.690000\t0.000000\t146.520000\t2502.860000\t1962.000000\t0.000000\nAAAAAAAAAAGBAAAA\tNULL\tNULL\tNULL\t86.000000\t1.520000\t40.290000\t0.660000\t-97.910000\t1945.000000\t2.000000\nAAAAAAAAAAGCAAAA\tNULL\tNULL\tNULL\t84.000000\t41.920000\t0.000000\t8.380000\t-640.080000\t1945.000000\t2.000000\nAAAAAAAAAAGEAAAA\tNULL\tNULL\tNULL\t99.000000\t107.550000\t0.000000\t65.600000\t738.540000\t1973.000000\t3.000000\nAAAAAAAAAAJBAAAA\tNULL\tNULL\tNULL\t97.000000\t48.580000\t0.000000\t44.690000\t534.470000\t1984.000000\t1.000000\nAAAAAAAAAAKAAAAA\tNULL\tNULL\tNULL\t36.000000\t164.023333\t0.000000\t57.763333\t-1073.803333\t1962.666667\t2.333333\nAAAAAAAAAAOCAAAA\tNULL\tNULL\tNULL\t82.000000\t148.640000\t0.000000\t107.020000\t4483.760000\t1948.000000\t6.000000\nAAAAAAAAAAODAAAA\tNULL\tNULL\tNULL\t100.000000\t80.700000\t0.000000\t64.560000\t1505.000000\t1981.000000\t4.000000\nAAAAAAAAABADAAAA\tNULL\tNULL\tNULL\t55.000000\t134.120000\t3189.310000\t63.030000\t-2198.210000\t1945.000000\t2.000000\nAAAAAAAAABCEAAAA\tNULL\tNULL\tNULL\t25.000000\t150.360000\t85.690000\t9.020000\t-1702.940000\t1948.000000\t6.000000\nAAAAAAAAABDEAAAA\tNULL\tNULL\tNULL\t24.000000\t99.490000\t0.000000\t94.510000\t562.560000\t1958.000000\t0.000000\nAAAAAAAAABFBAAAA\tNULL\tNULL\tNULL\t2.000000\t60.810000\t0.000000\t45.600000\t-26.880000\t1931.000000\t6.000000\nAAAAAAAAABFCAAAA\tNULL\tNULL\tNULL\t65.000000\t45.000000\t0.000000\t23.310000\t-1071.285000\t1937.500000\t3.000000\nAAAAAAAAABGBAAAA\tNULL\tNULL\tNULL\t65.000000\t20.450000\t0.000000\t0.810000\t-401.050000\t1983.000000\t2.000000\nAAAAAAAAABJBAAAA\tNULL\tNULL\tNULL\t42.000000\t69.930000\t0.000000\t49.650000\t260.820000\t1943.000000\t2.000000\nAAAAAAAAABNCAAAA\tNULL\tNULL\tNULL\t81.000000\t140.170000\t0.000000\t86.900000\t1733.400000\t1953.000000\t4.000000\nAAAAAAAAACCBAAAA\tNULL\tNULL\tNULL\t50.000000\t99.750000\t0.000000\t75.810000\t1109.000000\t1945.000000\t1.000000\nAAAAAAAAACCDAAAA\tNULL\tNULL\tNULL\t71.000000\t92.520000\t0.000000\t25.900000\t-4026.410000\t1928.000000\t0.000000\nAAAAAAAAACDDAAAA\tNULL\tNULL\tNULL\t29.000000\t147.490000\t0.000000\t35.390000\t-741.240000\t1933.000000\t5.000000\nAAAAAAAAACIBAAAA\tNULL\tNULL\tNULL\t4.000000\t38.470000\t0.000000\t35.770000\t41.160000\t1952.000000\t2.000000\nAAAAAAAAACPDAAAA\tNULL\tNULL\tNULL\t91.000000\t86.510000\t0.000000\t20.760000\t-1989.260000\t1981.000000\t3.000000\nAAAAAAAAADACAAAA\tNULL\tNULL\tNULL\t53.000000\t227.940000\t0.000000\t113.970000\t1089.150000\t1954.000000\t6.000000\nAAAAAAAAADBDAAAA\tNULL\tNULL\tNULL\t70.000000\t35.050000\t0.000000\t29.440000\t970.200000\t1990.000000\t2.000000\nAAAAAAAAADEEAAAA\tNULL\tNULL\tNULL\t66.000000\t85.760000\t2913.510000\t44.590000\t-3553.050000\t1926.000000\t3.000000\nAAAAAAAAADJBAAAA\tNULL\tNULL\tNULL\t28.000000\t90.830000\t0.000000\t15.155000\t-466.175000\t1986.000000\t1.000000\nAAAAAAAAADPBAAAA\tNULL\tNULL\tNULL\t55.000000\t47.460000\t0.000000\t18.500000\t43.450000\t1989.000000\t2.000000\nAAAAAAAAAEABAAAA\tN
ULL\tNULL\tNULL\t62.000000\t89.450000\t0.000000\t61.720000\t225.060000\t1958.000000\t0.000000\nAAAAAAAAAEBAAAAA\tNULL\tNULL\tNULL\t93.000000\t175.980000\t0.000000\t156.620000\t6503.490000\t1952.000000\t6.000000\nAAAAAAAAAECCAAAA\tNULL\tNULL\tNULL\t97.000000\t179.380000\t0.000000\t127.350000\t4183.610000\t1952.000000\t1.000000\nAAAAAAAAAEDDAAAA\tNULL\tNULL\tNULL\t81.000000\t104.250000\t0.000000\t53.160000\t-92.340000\t1982.000000\t0.000000\nAAAAAAAAAEGDAAAA\tNULL\tNULL\tNULL\t6.000000\t170.770000\t61.470000\t102.460000\t103.890000\t1985.000000\t0.000000\nAAAAAAAAAEGEAAAA\tNULL\tNULL\tNULL\t97.000000\t34.300000\t0.000000\t24.350000\t392.850000\t1976.000000\t6.000000\nAAAAAAAAAEJDAAAA\tNULL\tNULL\tNULL\t74.000000\t168.330000\t0.000000\t143.080000\t3926.440000\t1930.000000\t1.000000\nAAAAAAAAAENAAAAA\tNULL\tNULL\tNULL\t15.000000\t73.420000\t0.000000\t23.490000\t-108.450000\t1963.000000\t2.000000\nAAAAAAAAAEPAAAAA\tNULL\tNULL\tNULL\t34.000000\t4.380000\t0.000000\t3.460000\t67.320000\t1952.000000\t1.000000\nAAAAAAAAAEPDAAAA\tNULL\tNULL\tNULL\t54.000000\t99.220000\t1013.550000\t43.650000\t-3909.570000\t1958.000000\t0.000000\nAAAAAAAAAFGAAAAA\tNULL\tNULL\tNULL\t74.000000\t90.960000\t0.000000\t0.900000\t-4919.520000\t1964.000000\t6.000000\nAAAAAAAAAFGCAAAA\tNULL\tNULL\tNULL\t79.000000\t51.830000\t0.000000\t36.280000\t787.630000\t1975.000000\t5.000000\nAAAAAAAAAFGDAAAA\tNULL\tNULL\tNULL\t80.000000\t38.080000\t0.000000\t0.000000\t-1116.000000\t1926.000000\t0.000000\nAAAAAAAAAFHBAAAA\tNULL\tNULL\tNULL\t57.000000\t55.990000\t0.000000\t31.910000\t320.340000\t1970.000000\t6.000000\nAAAAAAAAAFLBAAAA\tNULL\tNULL\tNULL\t92.000000\t44.330000\t0.000000\t41.220000\t836.280000\t1943.000000\t3.000000\nAAAAAAAAAGBEAAAA\tNULL\tNULL\tNULL\t6.000000\t260.570000\t0.000000\t93.800000\t14.220000\t1970.000000\t2.000000\nAAAAAAAAAGCAAAAA\tNULL\tNULL\tNULL\t54.000000\t231.920000\t0.000000\t48.700000\t-2187.000000\t1969.000000\t5.000000\nAAAAAAAAAGCCAAAA\tNULL\tNULL\tNULL\t80.000000\t183.680000\t0.000000\t99.180000\t475.200000\t1946.000000\t0.000000\nAAAAAAAAAGCDAAAA\tNULL\tNULL\tNULL\t69.000000\t143.220000\t2478.280000\t54.420000\t-4676.620000\t1970.000000\t6.000000\nAAAAAAAAAGEDAAAA\tNULL\tNULL\tNULL\t93.000000\t60.830000\t0.000000\t52.310000\t2769.540000\t1942.000000\t6.000000\nAAAAAAAAAGIDAAAA\tNULL\tNULL\tNULL\t28.000000\t75.550000\t0.000000\t9.060000\t-1013.040000\t1942.000000\t6.000000\nAAAAAAAAAGJBAAAA\tNULL\tNULL\tNULL\t36.000000\t7.170000\t0.000000\t5.230000\t65.880000\t1984.000000\t6.000000\nAAAAAAAAAGKAAAAA\tNULL\tNULL\tNULL\t33.000000\t95.840000\t0.000000\t4.790000\t-1480.710000\t1948.000000\t6.000000\nAAAAAAAAAGLAAAAA\tNULL\tNULL\tNULL\t85.000000\t143.410000\t570.460000\t74.570000\t276.990000\t1948.000000\t6.000000\nAAAAAAAAAGLCAAAA\tNULL\tNULL\tNULL\t65.000000\t13.730000\t0.000000\t3.980000\t-176.800000\t1976.000000\t6.000000\nAAAAAAAAAHADAAAA\tNULL\tNULL\tNULL\t52.000000\t33.050000\t0.000000\t10.900000\t-1054.560000\t1936.000000\t1.000000\nAAAAAAAAAHBCAAAA\tNULL\tNULL\tNULL\t20.000000\t187.740000\t0.000000\t31.910000\t-994.400000\t1947.000000\t5.000000\nAAAAAAAAAHBDAAAA\tNULL\tNULL\tNULL\t61.000000\t83.490000\t0.000000\t59.270000\t1201.700000\t1945.000000\t2.000000\nAAAAAAAAAHCBAAAA\tNULL\tNULL\tNULL\t2.000000\t136.960000\t0.000000\t0.000000\t-95.780000\t1970.000000\t5.000000\nAAAAAAAAAHDEAAAA\tNULL\tNULL\tNULL\t15.000000\t31.950000\t232.130000\t16.290000\t-164.630000\t1934.000000\t2.000000\nAAAAAAAAAHGAAAAA\tNULL\tNULL\tNULL\t99.000000\t100.770000\t12.870000\t1.000000\t-9600.030000\t1932.000000\t4.000000\nA
AAAAAAAAHHAAAAA\tNULL\tNULL\tNULL\t18.000000\t109.980000\t0.000000\t24.190000\t-634.680000\t1933.000000\t2.000000\nAAAAAAAAAHIBAAAA\tNULL\tNULL\tNULL\t86.000000\t20.920000\t0.000000\t5.230000\t-1332.140000\t1990.000000\t1.000000\nAAAAAAAAAHJDAAAA\tNULL\tNULL\tNULL\t53.000000\t88.370000\t0.000000\t37.990000\t-1308.570000\t1948.000000\t2.000000\nAAAAAAAAAHMDAAAA\tNULL\tNULL\tNULL\t68.000000\t200.900000\t546.440000\t20.090000\t-4993.640000\tNULL\t6.000000\nAAAAAAAAAHPDAAAA\tNULL\tNULL\tNULL\t6.000000\t73.210000\t60.840000\t13.170000\t-276.660000\t1936.000000\t6.000000\nAAAAAAAAAIACAAAA\tNULL\tNULL\tNULL\t1.000000\t17.830000\t0.000000\t10.160000\t1.940000\t1935.000000\t1.000000\nAAAAAAAAAIADAAAA\tNULL\tNULL\tNULL\t100.000000\t22.020000\t0.000000\t15.630000\t-131.000000\t1982.000000\t0.000000\nAAAAAAAAAIEBAAAA\tNULL\tNULL\tNULL\t4.000000\t51.860000\t0.000000\t42.520000\t-8.760000\t1967.000000\t4.000000\nAAAAAAAAAIEEAAAA\tNULL\tNULL\tNULL\t64.000000\t18.810000\t0.000000\t15.610000\t468.480000\t1930.000000\t1.000000\nAAAAAAAAAINBAAAA\tNULL\tNULL\tNULL\t6.000000\t102.610000\t2.950000\t24.620000\t-110.710000\t1930.000000\t2.000000\nAAAAAAAAAIODAAAA\tNULL\tNULL\tNULL\t81.000000\t42.490000\t0.000000\t12.740000\t-339.390000\t1990.000000\t1.000000\nAAAAAAAAAJBEAAAA\tNULL\tNULL\tNULL\t95.000000\t118.220000\t0.000000\t15.360000\t-3863.650000\t1962.000000\t0.000000\nAAAAAAAAAJDCAAAA\tNULL\tNULL\tNULL\t85.000000\t190.480000\t0.000000\t125.710000\t4732.800000\t1989.000000\t2.000000\nAAAAAAAAAJEAAAAA\tNULL\tNULL\tNULL\t36.000000\t183.430000\t0.000000\t0.000000\t-3318.480000\t1942.000000\t6.000000\nAAAAAAAAAJGCAAAA\tNULL\tNULL\tNULL\t43.500000\t39.415000\t619.885000\t28.940000\t-340.325000\t1960.000000\t5.000000\nAAAAAAAAAJIAAAAA\tNULL\tNULL\tNULL\t69.000000\t118.330000\t0.000000\t61.530000\t1117.110000\t1970.000000\t2.000000\nAAAAAAAAAJICAAAA\tNULL\tNULL\tNULL\t39.333333\t102.063333\t0.000000\t24.780000\t-848.446667\t1962.666667\t2.333333\nAAAAAAAAAJIDAAAA\tNULL\tNULL\tNULL\t75.000000\t222.870000\t6498.720000\t180.520000\t-42.720000\t1935.000000\t5.000000\nAAAAAAAAAJJBAAAA\tNULL\tNULL\tNULL\t42.000000\t103.240000\t0.000000\t43.360000\t-149.940000\t1932.000000\t4.000000\nAAAAAAAAAJNAAAAA\tNULL\tNULL\tNULL\t93.000000\t14.470000\t0.000000\t13.740000\t136.710000\t1954.000000\t0.000000\nAAAAAAAAAJOCAAAA\tNULL\tNULL\tNULL\t24.000000\t88.660000\t0.000000\t2.650000\t-1000.320000\t1927.000000\t2.000000\nAAAAAAAAAJPBAAAA\tNULL\tNULL\tNULL\t16.000000\t159.510000\t0.000000\t121.220000\t1085.920000\t1983.000000\t4.000000\nAAAAAAAAAKAEAAAA\tNULL\tNULL\tNULL\t5.000000\t178.480000\t0.000000\t57.110000\t-160.650000\t1990.000000\t2.000000\nAAAAAAAAAKFCAAAA\tNULL\tNULL\tNULL\t84.000000\t284.260000\t0.000000\t247.300000\t12760.440000\t1984.000000\t1.000000\nAAAAAAAAAKGDAAAA\tNULL\tNULL\tNULL\t26.000000\t132.060000\t0.000000\t64.700000\t-291.200000\t1930.000000\t2.000000\nAAAAAAAAAKHCAAAA\tNULL\tNULL\tNULL\t99.000000\t97.000000\t0.000000\t19.400000\t-3327.390000\t1935.000000\t1.000000\nAAAAAAAAALBEAAAA\tNULL\tNULL\tNULL\t61.000000\t7.760000\t0.000000\t2.630000\t-125.050000\t1981.000000\t0.000000\nAAAAAAAAALFBAAAA\tNULL\tNULL\tNULL\t58.000000\t92.090000\t0.000000\t35.910000\t-97.440000\t1970.000000\t3.000000\nAAAAAAAAALGCAAAA\tNULL\tNULL\tNULL\t77.000000\t36.240000\t0.000000\t18.840000\t-66.220000\t1983.000000\t4.000000\nAAAAAAAAALLBAAAA\tNULL\tNULL\tNULL\t82.000000\t53.000000\t0.000000\t8.480000\t-840.500000\t1981.000000\t4.000000\nAAAAAAAAALMDAAAA\tNULL\tNULL\tNULL\t86.000000\t32.020000\t802.660000\t17.610000\t-1020.240000\t1963
.000000\t6.000000\nAAAAAAAAALOAAAAA\tNULL\tNULL\tNULL\t78.000000\t91.520000\t0.000000\t73.210000\t44.460000\t1964.000000\t6.000000\nAAAAAAAAAMBAAAAA\tNULL\tNULL\tNULL\t79.000000\t93.970000\t0.000000\t93.970000\t4924.070000\t1948.000000\t2.000000\nAAAAAAAAAMDBAAAA\tNULL\tNULL\tNULL\t15.000000\t79.140000\t0.000000\t18.990000\t-421.800000\t1969.000000\t2.000000\nAAAAAAAAAMEAAAAA\tNULL\tNULL\tNULL\t10.000000\t164.570000\t0.000000\t80.630000\t-123.500000\t1934.000000\t5.000000\nAAAAAAAAAMFDAAAA\tNULL\tNULL\tNULL\t2.000000\t41.000000\t0.000000\t17.220000\t-4.800000\t1948.000000\t6.000000\nAAAAAAAAAMGBAAAA\tNULL\tNULL\tNULL\t66.000000\t12.410000\t0.000000\t3.590000\t-140.580000\t1990.000000\t0.000000\nAAAAAAAAAMLDAAAA\tNULL\tNULL\tNULL\t70.000000\t147.430000\t297.020000\t8.840000\t-3444.920000\t1943.000000\t4.000000\nAAAAAAAAAMNAAAAA\tNULL\tNULL\tNULL\t41.000000\t53.225000\t0.000000\t37.950000\t121.915000\t1960.000000\t5.000000\nAAAAAAAAANABAAAA\tNULL\tNULL\tNULL\t94.000000\t8.250000\t30.830000\t0.410000\t-293.090000\t1986.000000\t6.000000\nAAAAAAAAANAEAAAA\tNULL\tNULL\tNULL\t5.000000\t47.390000\t0.000000\t9.470000\t-151.800000\t1931.000000\t2.000000\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q19.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<brand_id:int,brand:string,i_manufact_id:int,i_manufact:string,ext_price:decimal(17,2)>\n-- !query output\n10012006\timportoamalgamalg #x                              \t221\toughtableable                                     \t56586.14\n6015008\tscholarbrand #x                                   \t66\tcallycally                                        \t46201.70\n4001001\tamalgedu pack #x                                  \t950\tbarantin st                                       \t43315.50\n3002002\timportoexporti #x                                 \t752\tableantiation                                     \t42749.12\n2003001\texportiimporto #x                                 \t586\tcallyeinganti                                     \t41753.03\n6009007\tmaxicorp #x                                       \t120\tbarableought                                      \t41183.43\n8010005\tunivmaxi #x                                       \t520\tbarableanti                                       \t40133.14\n7005005\tscholarbrand #x                                   \t187\tationeingought                                    \t37940.95\n10004015\tedu packunivamalg #x                             \t439\tn stpriese                                        \t36037.04\n9015005\tscholarunivamalg #x                               \t297\tationn stable                                     \t34881.18\n5003001\texportischolar #x                                 \t227\tationableable                                     \t34528.86\n2002001\timportoimporto #x                                 \t536\tcallyprianti                                      \t34412.64\n7007003\tbrandbrand #x                                     \t759\tn stantiation                                     \t33533.84\n7016003\tcorpnameless #x                                   \t271\toughtationable                                    \t32722.68\n10003006\texportiunivamalg #x                               \t105\tantibarought                                      \t31509.32\n3003001\texportiexporti #x                                 \t178\teingationought                                    \t30944.05\n2003001\texportiimporto #x                                 \t269\tn stcallyable                                     \t30743.96\n7012007\timportonameless #x                                \t578\teingationanti                                     \t30629.71\n9010011\tunivunivamalg #x                                 \t27\tationable                                         \t30165.17\n1001001\tamalgamalg #x                                     \t390\tbarn stpri                                        \t29510.42\n1004002\tedu packamalg #x                                  \t86\tcallyeing                                         \t28798.78\n8016010\tcorpmaxi #x                                      \t745\tantieseation                                      \t28399.44\n3003002\texportiexporti #x                                 \t52\tableanti                                          \t28330.70\n7002007\timportobrand #x                                   \t261\toughtcallyable                                    \t28076.07\n2002001\timportoimporto #x                                 \t364\tesecallypri                                       \t27831.19\n5001002\tamalgscholar #x                                   \t743\tprieseation                                       \t27622.14\n8006009\tcorpnameless #x      
                             \t148\teingeseought                                      \t26685.21\n2002001\timportoimporto #x                                 \t68\teingcally                                         \t26391.94\n8004005\tedu packnameless #x                               \t192\tablen stought                                     \t26231.73\n1004001\tedu packamalg #x                                  \t583\tprieinganti                                       \t26107.88\n1001001\tamalgamalg #x                                     \t282\tableeingable                                      \t26016.70\n8006009\tcorpnameless #x                                   \t319\tn stoughtpri                                      \t25529.26\n6016003\tcorpbrand #x                                      \t110\tbaroughtought                                     \t25233.47\n9007003\tbrandmaxi #x                                      \t34\tesepri                                            \t25164.92\n2004001\tedu packimporto #x                                \t5\tanti                                              \t25083.59\n5002001\timportoscholar #x                                 \t582\tableeinganti                                      \t24752.75\n3001001\tamalgexporti #x                                   \t296\tcallyn stable                                     \t24732.94\n7007007\tbrandbrand #x                                     \t529\tn stableanti                                      \t24268.11\n8010009\tunivmaxi #x                                       \t777\tationationation                                   \t24160.84\n8007002\tbrandnameless #x                                  \t192\tablen stought                                     \t23590.40\n8014006\tedu packmaxi #x                                   \t4\tese                                               \t23430.31\n6005005\tscholarcorp #x                                    \t129\tn stableought                                     \t23382.47\n8015001\tscholarmaxi #x                                    \t78\teingation                                         \t23235.50\n6004007\tedu packcorp #x                                   \t158\teingantiought                                     \t23188.37\n9016003\tcorpunivamalg #x                                  \t304\tesebarpri                                         \t23156.77\n10010013\tunivamalgamalg #x                                \t591\toughtn stanti                                     \t23127.55\n2004001\tedu packimporto #x                                \t563\tpricallyanti                                      \t22985.88\n2003002\texportiimporto #x                                 \t490\tbarn stese                                        \t22045.36\n1003002\texportiamalg #x                                   \t376\tcallyationpri                                     \t21890.84\n1004001\tedu packamalg #x                                  \t39\tn stpri                                           \t21878.31\n1001001\tamalgamalg #x                                     \t760\tbarcallyation                                     \t21856.16\n9014002\tedu packunivamalg #x                              \t78\teingation                                         \t21725.66\n1004001\tedu packamalg #x                                  \t366\tcallycallypri                                     \t21166.78\n1004001\tedu packamalg #x                                  \t513\tprioughtanti                                      
\t21125.73\n1002001\timportoamalg #x                                   \t942\tableesen st                                       \t21119.54\n5002002\timportoscholar #x                                 \t102\tablebarought                                      \t21049.49\n5002001\timportoscholar #x                                 \t75\tantiation                                         \t20510.26\n8002004\timportonameless #x                                \t2\table                                              \t20401.46\n8011002\tamalgmaxi #x                                      \t29\tn stable                                          \t20270.33\n5001001\tamalgscholar #x                                   \t127\tationableought                                    \t19976.77\n1001001\tamalgamalg #x                                     \t522\tableableanti                                      \t19773.14\n7003005\texportibrand #x                                   \t45\tantiese                                           \t19662.73\n1002001\timportoamalg #x                                   \t361\toughtcallypri                                     \t19619.15\n7007002\tbrandbrand #x                                     \t410\tbaroughtese                                       \t18996.04\nNULL\tbrandcorp #x                                      \tNULL\tNULL\t18842.73\n9008008\tnamelessmaxi #x                                   \t607\tationbarcally                                     \t18650.79\n8015010\tscholarmaxi #x                                   \t410\tbaroughtese                                       \t18465.35\n10008005\tnamelessunivamalg #x                              \t349\tn stesepri                                        \t18229.28\n9001008\tamalgmaxi #x                                      \t840\tbareseeing                                        \t17824.97\n2004002\tedu packimporto #x                                \t314\teseoughtpri                                       \t17523.17\n4003001\texportiedu pack #x                                \t873\tpriationeing                                      \t17462.90\n9007005\tbrandmaxi #x                                      \t546\tcallyeseanti                                      \t17405.98\n10013015\texportiamalgamalg #x                             \t427\tationableese                                      \t17058.19\n3001001\tamalgexporti #x                                   \t139\tn stpriought                                      \t16994.07\n8015002\tscholarmaxi #x                                    \t15\tantiought                                         \t16963.84\n4003002\texportiedu pack #x                                \t664\tesecallycally                                     \t16936.98\n6005001\tscholarcorp #x                                    \t404\tesebarese                                         \t16500.58\n2003002\texportiimporto #x                                 \t333\tpripripri                                         \t16133.68\n9006003\tcorpmaxi #x                                       \t494\tesen stese                                        \t16104.95\n6009007\tmaxicorp #x                                       \t468\teingcallyese                                      \t16083.46\n4002002\timportoedu pack #x                                \t408\teingbarese                                        \t15980.04\n8012009\timportomaxi #x                                    \t351\toughtantipri                                      
\t15946.21\n4001002\tamalgedu pack #x                                  \t102\tablebarought                                      \t15911.13\n6011007\tamalgbrand #x                                     \t822\tableableeing                                      \t15556.11\n4004002\tedu packedu pack #x                               \t51\toughtanti                                         \t15506.23\n4003002\texportiedu pack #x                                \t12\tableought                                         \t15452.21\n10010010\tunivamalgamalg #x                                \t721\toughtableation                                    \t15210.55\n3001001\tamalgexporti #x                                   \t574\teseationanti                                      \t14927.08\n3002002\timportoexporti #x                                 \t382\tableeingpri                                       \t14658.43\n3001001\tamalgexporti #x                                   \t142\tableeseought                                      \t14633.69\n10002014\timportounivamalg #x                              \t153\tpriantiought                                      \t14599.02\n1004001\tedu packamalg #x                                  \t148\teingeseought                                      \t14327.50\n5003002\texportischolar #x                                 \t164\tesecallyought                                     \t14289.22\n7008009\tnamelessbrand #x                                  \t532\tableprianti                                       \t14197.70\n4003001\texportiedu pack #x                                \t258\teingantiable                                      \t14148.81\n7007001\tbrandbrand #x                                     \t281\toughteingable                                     \t13920.21\n2002001\timportoimporto #x                                 \t258\teingantiable                                      \t13819.06\n8007007\tbrandnameless #x                                  \t423\tpriableese                                        \t13589.86\n4004001\tedu packedu pack #x                               \t456\tcallyantiese                                      \t13461.44\n8013009\texportimaxi #x                                    \t599\tn stn stanti                                      \t13459.83\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q2.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<d_week_seq1:int,round((sun_sales1 / sun_sales2), 2):decimal(20,2),round((mon_sales1 / mon_sales2), 2):decimal(20,2),round((tue_sales1 / tue_sales2), 2):decimal(20,2),round((wed_sales1 / wed_sales2), 2):decimal(20,2),round((thu_sales1 / thu_sales2), 2):decimal(20,2),round((fri_sales1 / fri_sales2), 2):decimal(20,2),round((sat_sales1 / sat_sales2), 2):decimal(20,2)>\n-- !query output\n5270\t3.18\t1.63\t2.25\t1.64\t3.41\t3.62\t3.72\n5270\t3.18\t1.63\t2.25\t1.64\t3.41\t3.62\t3.72\n5270\t3.18\t1.63\t2.25\t1.64\t3.41\t3.62\t3.72\n5270\t3.18\t1.63\t2.25\t1.64\t3.41\t3.62\t3.72\n5270\t3.18\t1.63\t2.25\t1.64\t3.41\t3.62\t3.72\n5270\t3.18\t1.63\t2.25\t1.64\t3.41\t3.62\t3.72\n5270\t3.18\t1.63\t2.25\t1.64\t3.41\t3.62\t3.72\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5271\t1.00\t1.15\t1.23\t0.82\t1.06\t0.85\t0.95\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n52
72\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5272\t1.22\t0.86\t1.29\t0.95\t0.98\t1.00\t0.84\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\
t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5273\t1.20\t0.99\t1.15\t0.86\t0.91\t1.19\t0.90\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5274\t0.97\t0.95\t1.08\t1.19\t0.97\t0.89\t0.96\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.
99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5275\t0.99\t0.89\t1.42\t1.00\t0.84\t1.11\t0.98\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\
t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5276\t0.97\t1.06\t1.16\t0.86\t0.98\t1.25\t1.13\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5277\t0.91\t0.91\t1.02\t0.95\t1.00\t1.05\t0.90\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.
94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5278\t0.96\t0.94\t1.29\t1.04\t0.95\t2.06\t1.46\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\t0.88\t0.99\t0.38\t0.83\t0.77\n5279\t0.87\t0.86\
t0.88\t0.99\t0.38\t0.83\t0.77\n5280\t0.96\t0.89\t0.73\t1.09\t1.09\t0.91\t1.21\n5281\t0.99\t1.03\t1.09\t0.86\t0.92\t1.13\t1.01\n5282\t0.97\t0.89\t1.08\t1.10\t1.15\t0.86\t1.33\n5283\t1.27\t0.85\t1.07\t0.88\t1.23\t0.93\t1.27\n5284\t1.06\t0.92\t0.95\t1.04\t0.95\t1.29\t0.89\n5285\t1.10\t1.20\t0.97\t1.14\t1.11\t1.12\t0.95\n5286\t1.09\t1.17\t1.17\t0.86\t1.12\t1.39\t1.00\n5287\t0.94\t1.00\t0.99\t0.76\t1.10\t0.90\t0.96\n5288\t0.95\t1.07\t1.40\t1.05\t0.94\t0.94\t1.01\n5289\t0.99\t1.20\t0.90\t0.95\t1.01\t0.82\t0.76\n5290\t1.05\t1.10\t1.04\t1.22\t0.97\t0.98\t0.98\n5291\t0.78\t1.28\t0.90\t0.99\t0.97\t1.09\t0.91\n5292\t1.13\t0.79\t1.17\t1.07\t0.88\t0.85\t0.91\n5293\t1.29\t0.96\t0.89\t0.87\t0.97\t0.89\t1.06\n5294\t1.18\t1.05\t1.25\t0.98\t0.88\t1.02\t0.69\n5295\t0.87\t1.08\t1.03\t1.01\t0.88\t1.27\t1.12\n5296\t0.74\t1.11\t0.96\t1.12\t1.00\t1.25\t1.02\n5297\t1.04\t0.97\t0.98\t0.79\t0.90\t0.90\t1.03\n5298\t0.80\t0.78\t0.94\t1.04\t1.26\t0.91\t0.90\n5299\t0.87\t1.00\t1.14\t1.13\t1.07\t1.20\t1.01\n5300\t0.53\t0.41\t0.98\t0.67\t0.80\t0.56\t0.41\n5301\t0.95\t1.05\t0.38\t0.50\t0.76\t1.10\t1.14\n5302\t1.03\t0.80\t0.85\t1.12\t1.00\t0.96\t1.01\n5303\t0.77\t1.23\t0.94\t1.09\t0.98\t0.96\t0.98\n5304\t0.96\t0.82\t1.13\t1.04\t1.07\t1.24\t1.00\n5305\t1.01\t0.96\t0.91\t0.91\t0.90\t1.15\t1.09\n5306\t0.83\t0.99\t0.97\
t1.04\t0.90\t1.00\t1.03\n5306\t0.83\t0.99\t0.97\t1.04\t0.90\t1.00\t1.03\n5306\t0.83\t0.99\t0.97\t1.04\t0.90\t1.00\t1.03\n5306\t0.83\t0.99\t0.97\t1.04\t0.90\t1.00\t1.03\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5307\t0.97\t0.99\t0.89\t0.88\t1.07\t1.02\t0.94\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.
97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5308\t0.93\t0.95\t1.10\t0.97\t1.10\t0.89\t1.04\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\
t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5309\t1.20\t0.90\t1.05\t1.11\t0.96\t1.05\t1.06\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5310\t0.97\t1.10\t1.12\t1.06\t0.96\t1.03\t0.87\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.
91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5311\t1.09\t0.88\t1.06\t1.01\t0.91\t1.10\t1.03\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\
t0.87\t1.07\n5312\t1.06\t0.93\t0.78\t0.99\t1.03\t0.87\t1.07\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5313\t0.62\t0.78\t1.13\t1.03\t1.13\t1.04\t0.85\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.
07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5314\t0.98\t1.16\t0.71\t0.57\t0.70\t1.07\t1.06\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\t1.02\n5315\t1.10\t1.09\t0.99\t1.02\t1.11\t1.00\
t1.02\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5316\t1.01\t1.07\t1.09\t1.01\t1.08\t1.03\t0.87\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.
88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5317\t0.96\t1.04\t0.94\t0.99\t1.09\t0.95\t0.88\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5318\t0.94\t1.00\t1.18\t0.94\t0.87\t1.02\t1.11\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\
n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5319\t0.97\t0.96\t0.96\t1.12\t1.00\t0.93\t0.88\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n53
20\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5320\t1.06\t0.94\t0.97\t1.14\t0.96\t0.90\t1.04\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5321\t0.96\t1.01\t0.96\t1.03\t0.97\t0.95\t0.89\n5322\t4.54\t4.72\t0.99\t1.74\t1.43\t4.36\t5.11\n5322\t4.54\t4.72\t0.99\t1.74\t1.43\t4.36\t5.11\n5322\
t4.54\t4.72\t0.99\t1.74\t1.43\t4.36\t5.11\n5322\t4.54\t4.72\t0.99\t1.74\t1.43\t4.36\t5.11\n5322\t4.54\t4.72\t0.99\t1.74\t1.43\t4.36\t5.11\n5322\t4.54\t4.72\t0.99\t1.74\t1.43\t4.36\t5.11\n5322\t4.54\t4.72\t0.99\t1.74\t1.43\t4.36\t5.11\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q20.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_desc:string,i_category:string,i_class:string,i_current_price:decimal(7,2),itemrevenue:decimal(17,2),revenueratio:decimal(38,17)>\n-- !query output\nNULL\tBooks                                             \tNULL\tNULL\t9010.49\t72.07596281370536693\nNULL\tBooks                                             \tNULL\t6.35\t1491.96\t11.93436244638591899\nPrecisely elderly bodies\tBooks                                             \tarts                                              \t1.40\t4094.31\t1.52075020571659240\nClose, precise teeth should go for a qualities. Political groups shall not become just important occasions. Trials mean ne\tBooks                                             \tarts                                              \t2.53\t332.38\t0.12345595555199313\nAbilities could affect cruel parts. Predominantly other events telephone strong signs. Accurate mate\tBooks                                             \tarts                                              \t25.69\t2626.56\t0.97558359291967949\nAverage parents require also times. Children would not describe lightly purposes; large miles love now correct relations. Usual, german goals proceed literary, wooden visitors. Initial councils wil\tBooks                                             \tarts                                              \t1.24\t12327.20\t4.57869383019594946\nGreat, contemporary workers would not remove of course cultural values. Then due children might see positive seconds. Significant problems w\tBooks                                             \tarts                                              \t0.55\t343.80\t0.12769768794384511\nSmall objects stop etc mediterranean patterns; liberal, free initiatives would not leave less clear british attitudes; good, blue relationships find softly very\tBooks                                             \tarts                                              \t58.41\t886.92\t0.32942883476194038\nNewly national rights head curiously all electrical cells. Chinese, long values might not pull bad lines. High fun clothes ough\tBooks                                             \tarts                                              \t3.28\t2219.85\t0.82451923380495801\nQuick, easy studies must make always necessary systems. Upper, new persons should buy much physical technologies. English sciences hear solicitors. S\tBooks                                             \tarts                                              \t0.99\t2050.16\t0.76149125047979491\nEarly, short v\tBooks                                             \tarts                                              \t75.57\t5429.86\t2.01681375177070042\nBlack, following services justify by a investors; dirty, different charts will fly however prizes. Temporary, l\tBooks                                             \tarts                                              \t5.56\t13539.35\t5.02892289488801418\nScientific, difficult polls would not achieve. Countries reach of course. Bad, new churches realize most english\tBooks                                             \tarts                                              \t3.98\t143.88\t0.05344137097545211\nUnited, important objectives put similarly large, previous phenomena; old, present days receive. 
Happy detectives assi\tBooks                                             \tarts                                              \t1.26\t12297.15\t4.56753235398096242\nNaturally new years put serious, negative vehicles. Fin\tBooks                                             \tarts                                              \t3.34\t4587.47\t1.70392470189572752\nAgo correct profits must not handle else. Healthy children may not go only ancient words. Later just characters ought to drink about. British parts must watch soon ago other clients. So vital d\tBooks                                             \tarts                                              \t4.03\t5359.20\t1.99056849688381241\nMuch new waters \tBooks                                             \tarts                                              \t1.85\t6718.63\t2.49550179508480530\nHard different differences would not paint even. Together suitable schemes marry directly only open women. Social ca\tBooks                                             \tarts                                              \t2.65\t3208.60\t1.19177080144450674\nTall, following actions keep widely willing, secondary groups. Heads could afford however; agricultural, square pri\tBooks                                             \tarts                                              \t9.99\t4780.52\t1.77562929368618505\nAnonymous, useful women provoke slightly present persons. Ideas ought to cost almost competent, working parties; aspects provide thr\tBooks                                             \tarts                                              \t6.73\t5622.46\t2.08835119999055082\nPowerful walls will find; there scottish decades must not\tBooks                                             \tarts                                              \t4.16\t7914.41\t2.93965054810833964\nToo executive doors progress mainly seemingly possible parts; hundreds stay virtually simple workers. Sola\tBooks                                             \tarts                                              \t34.32\t3029.48\t1.12524023173973205\nCareful privileges ought to live rather to a boards. Possible, broad p\tBooks                                             \tarts                                              \t3.93\t1450.99\t0.53894144336718969\nAside legitimate decisions may not stand probably sexual g\tBooks                                             \tarts                                              \t3.88\t9619.83\t3.57309496629679899\nSpecially interesting crews continue current, foreign directions; only social men would not call at least political children; circumstances could not understand now in a assessme\tBooks                                             \tarts                                              \t2.13\t13616.57\t5.05760473160419719\nUnlikely states take later in general extra inf\tBooks                                             \tarts                                              \t0.32\t11879.56\t4.41242683475911751\nSometimes careful things state probably so\tBooks                                             \tarts                                              \t5.08\t25457.85\t9.45581321995700176\nCircumstances would not use. Principles seem writers. Times go from a hands. Members find grounds. Central, only teachers pursue properly into a p\tBooks                                             \tarts                                              \t5.95\t2567.54\t0.95366178505916251\nInches may lose from a problems. 
Firm, other corporations shall protect ashamed, important practices. Materials shall not make then by a police. Weeks used\tBooks                                             \tarts                                              \t0.84\t1811.85\t0.67297572978782944\nSystems cannot await regions. Home appropr\tBooks                                             \tarts                                              \t7.30\t1730.16\t0.64263360027028230\nExtra, primitive weeks look obviou\tBooks                                             \tarts                                              \t1.18\t22.77\t0.00845746467272063\nMore than key reasons should remain. Words used to offer slowly british\tBooks                                             \tarts                                              \t0.28\t10311.18\t3.82988320527288194\nChildren may turn also above, historical aspects. Surveys migh\tBooks                                             \tarts                                              \t7.22\t11872.32\t4.40973768042312729\nTrustees know operations. Now past issues cut today german governments. British lines go critical, individual structures. Tonight adequate problems should no\tBooks                                             \tarts                                              \t4.05\t8348.99\t3.10106666569599586\nUseful observers start often white colleagues; simple pro\tBooks                                             \tarts                                              \t3.47\t7565.51\t2.81005856636428042\nMembers should say earnings. Detailed departments would not move just at the hopes. Figures can take. Actually open houses want. Good teachers combine the\tBooks                                             \tarts                                              \t3.09\t4363.97\t1.62091006182752106\nMajor, senior words afford economic libraries; successful seconds need outside. Clinical, new ideas put now red c\tBooks                                             \tarts                                              \t5.87\t9661.08\t3.58841646026911898\nLikely states feel astonishingly working roads. Parents put so somewhere able policies. Others may rely shortly instead interesting bodies; bri\tBooks                                             \tarts                                              \t7.50\t132.66\t0.04927392461498107\nFloors could not go only for a years. Special reasons shape consequently black, concerned instances. Mutual depths encourage both simple teachers. Cards favour massive \tBooks                                             \tarts                                              \t1.83\t20114.53\t7.47114303396483641\nAccurate years want then other organisations. Simple lines mean as well so red results. Orthodox, central scales will not in\tBooks                                             \tarts                                              \t7.69\t2153.04\t0.79970398502215321\nCertain customers think exactly already necessary factories. Awkward doubts shall not forget fine\tBooks                                             \tarts                                              \t0.30\t231.71\t0.08606408165639427\nVisitors could not allow glad wages. Communist, real figures used to apply factors. Aggressive, optimistic days must mean about trees. Detailed courts consider really large pro\tBooks                                             \tarts                                              \t9.08\t24425.09\t9.07221501111207600\nDeep, big areas take for a facilities. 
Words could replace certainly cases; lights test. Nevertheless practical arts cross. Fa\tBooks                                             \tarts                                              \t7.37\t4380.23\t1.62694951617879192\nNew, reluctant associations see more different, physical symptoms; useful pounds ought to give. Subjects \tBooks                                             \tbusiness                                          \t9.02\t3044.02\t1.58609001939612781\nImports involve most now indian women. Developments announce intimately in a copies. Projects \tBooks                                             \tbusiness                                          \t3.26\t472.29\t0.24608723177265498\nYears shall want free objects. Old residents use absolutely so residential steps. Letters will share variables. Sure fres\tBooks                                             \tbusiness                                          \t40.76\t30227.05\t15.74983814849696292\nWhole, important problems make. Indeed industrial members go skills. Soft\tBooks                                             \tbusiness                                          \t3.22\t137.92\t0.07186336997625310\nOther, black houses flow. New soldiers put only eastern hours. Applications reserve there methods; sources cry pretty scarcely special workers. Never british opportunities \tBooks                                             \tbusiness                                          \t8.20\t736.96\t0.38399383075478162\nRows could not\tBooks                                             \tbusiness                                          \t1.65\t1290.88\t0.67261446516056841\nRemaining subjects handle even only certain ladies; eagerly literary days could not provide. Very different articles cut then. Boys see out of a houses. Governme\tBooks                                             \tbusiness                                          \t9.03\t1065.30\t0.55507575431918810\nWhite members see highly on a negotiations. Evident, passive colours can refer familiar, ugly factors; away small examinations shall prove \tBooks                                             \tbusiness                                          \t17.97\t1446.00\t0.75343991433919646\nManufacturing, ready concerns see already then new pupils. Both stable types used to manage otherw\tBooks                                             \tbusiness                                          \t1.18\t2635.71\t1.37333963805184198\nSmall, capable centres\tBooks                                             \tbusiness                                          \t2.98\t5029.45\t2.62060053746422658\nPopular, different parameters might take open, used modules. Prisoners use pretty alternative lovers. Annual, professional others spend once true men. Other, small subsidies seem politically\tBooks                                             \tbusiness                                          \t7.25\t621.26\t0.32370821658531756\nSupreme, free uses handle even in the customers. Other minutes might not make of course social neighbours. So environmental rights come other, able sales\tBooks                                             \tbusiness                                          \t8.08\t3950.22\t2.05826654109334761\nAlways other hours used to use. Women should jump then. Civil samples take therefore other offices. 
Concrete, major demands\tBooks                                             \tbusiness                                          \t1.42\t2013.79\t1.04928752772968910\nVisual fragments \tBooks                                             \tbusiness                                          \t6.77\t930.13\t0.48464527491308216\nClassic issues will draw as european, engl\tBooks                                             \tbusiness                                          \t75.64\t556.83\t0.29013689315456070\nAgain british shareholders see shares. American lives ought to establish horses. Then ideal conservatives might charge even nec\tBooks                                             \tbusiness                                          \t2.44\t1898.13\t0.98902275560488173\nConfident, video-tape\tBooks                                             \tbusiness                                          \t3.17\t1131.00\t0.58930881266779474\nOf course fundamental children will not deal still including a suppliers. More crucial powers will not keep enough. As good comments used to devote even convenient electric problems. Publi\tBooks                                             \tbusiness                                          \t8.85\t414.75\t0.21610595053401226\nDepartments could seek now for a commu\tBooks                                             \tbusiness                                          \t5.93\t9895.85\t5.15624369039663714\nPaintings must not know primary, royal stands; similar, available others ough\tBooks                                             \tbusiness                                          \t0.39\t13809.44\t7.19542412909562460\nMost present eyes restore fat, central relationships; again considerable habits must face in a discussions. Engineers help at all direct occasions. Curiously del\tBooks                                             \tbusiness                                          \t80.10\t9267.25\t4.82871095861681771\nSo white countries could secure more angry items. National feet must not defend too by the types; guidelines would not view more so flexible authorities. Critics will handle closely lig\tBooks                                             \tbusiness                                          \t2.50\t2542.50\t1.32477246349059959\nSimple changes ought to vote almost sudden techniques. Partial, golden faces mean in a officials; vertically minor \tBooks                                             \tbusiness                                          \t8.73\t22710.22\t11.83318548507904997\nChristian lines stand once deep formal aspirations. National, fine islands play together with a patterns. New journals lose etc positive armie\tBooks                                             \tbusiness                                          \t4.89\t11560.78\t6.02375732565303988\nChildren would not mean in favour of a parts. Heavy, whole others shall mean on\tBooks                                             \tbusiness                                          \t3.13\t9065.09\t4.72337526492192700\nLips will n\tBooks                                             \tbusiness                                          \t8.48\t541.62\t0.28221170567385587\nWhite fees might combine reports. Tr\tBooks                                             \tbusiness                                          \t2.09\t37.60\t0.01959152197728478\nAsleep children invite more. Wealthy forms could expect as. 
Indeed statistical examinations could la\tBooks                                             \tbusiness                                          \t3.71\t2082.24\t1.08495347664844290\nMost new weeks go yet members. Also encouraging delegates make publications. Different competitors run resources; somehow common views m\tBooks                                             \tbusiness                                          \t1.07\t13412.42\t6.98855641485568838\nLocal, bloody names \tBooks                                             \tbusiness                                          \t4.40\t1997.44\t1.04076834197626873\nLarge, larg\tBooks                                             \tbusiness                                          \t3.50\t12097.82\t6.30358261721370521\nOnly, gothic\tBooks                                             \tbusiness                                          \t1.68\t5708.95\t2.97465477106967886\nLow, large clouds will not visit for example as the notions. Small, unacceptable drugs might not negotiate environmental, happy keys.\tBooks                                             \tbusiness                                          \t3.11\t3020.85\t1.57401726502874248\nSilver, critical operations could help howev\tBooks                                             \tbusiness                                          \t5.56\t2286.24\t1.19124790439754116\nTerrible, psychiatric bones will destroy also used studies; solely usual windows should not make shares. Advances continue sufficiently. As key days might not use far artists. Offici\tBooks                                             \tbusiness                                          \t5.83\t6672.40\t3.47666146918178041\nToo white addresses end by the talks. Hands get only companies. Statements know. Sentences would pay around for a payments; papers wait actually drinks; men would \tBooks                                             \tbusiness                                          \t6.06\t7609.35\t3.96486031270882752\nNew, big arguments may not win since by a tenant\tBooks                                             \tcomputers                                         \t1.00\t904.16\t0.32327741862037314\nElse substantial problems slip months. Just unique corporations put vast areas. Supporters like far perfect chapters. Now young reports become wrong trials. Available ears shall\tBooks                                             \tcomputers                                         \t51.46\t18752.88\t6.70498876094676063\nCheap, desirable members take immediate, estimated debts; months must track typica\tBooks                                             \tcomputers                                         \t3.26\t10027.86\t3.58540600677589698\nExpert, scottish terms will ask quiet demands; poor bits attempt northern, dangerous si\tBooks                                             \tcomputers                                         \t2.66\t7330.68\t2.62104418148557444\nGradually serious visitors bear no doubt technical hearts. Critics continue earlier soviet, standard minute\tBooks                                             \tcomputers                                         \t6.40\t1711.84\t0.61205894564136830\nClear, general goods must know never women. Communications meet about. Other rewards spot wide in a skills. Relative, empty drawings facilitate too rooms. 
Still asian police end speedily comp\tBooks                                             \tcomputers                                         \t7.64\t1292.04\t0.46196177220211789\nAt least remaining results shall keep cuts. Clients should meet policies. Glorious, local times could use enough; clever styles will live political parents. Single, gradual contracts will describe ho\tBooks                                             \tcomputers                                         \t9.51\t3033.10\t1.08446816760026298\nEnvironmental, new women pay again fingers. Different, uncomfortable records miss far russian, dependent members. Enough double men will go here immediatel\tBooks                                             \tcomputers                                         \t89.89\t8553.39\t3.05821739476786568\nYears learn here. Days make too. Only moving systems avoid old groups; short movements cannot see respectiv\tBooks                                             \tcomputers                                         \t0.60\t3411.40\t1.21972724504682903\nMagnetic\tBooks                                             \tcomputers                                         \t57.19\t3569.09\t1.27610843437421206\nGa\tBooks                                             \tcomputers                                         \t5.53\t2687.70\t0.96097230360331899\nS\tBooks                                             \tcomputers                                         \t65.78\t1613.04\t0.57673355084432699\nSimple year\tBooks                                             \tcomputers                                         \t3.01\t1262.79\t0.45150359611088856\nAgricultural players shall smoke. So full reasons undertake \tBooks                                             \tcomputers                                         \t0.70\t4408.27\t1.57615261257037727\nThen basic years can encourage later traditions. For example christian parts subscribe informal, valuable gr\tBooks                                             \tcomputers                                         \t2.75\t844.19\t0.30183547604973987\nBoxes batt\tBooks                                             \tcomputers                                         \t0.83\t15300.82\t5.47072375727191844\nSeparate, dead buildings think possibly english, net policies. Big divisions can use almost\tBooks                                             \tcomputers                                         \t9.46\t12403.71\t4.43487806374503246\nArtists make times. Rather ready functions must pre\tBooks                                             \tcomputers                                         \t5.71\t1533.00\t0.54811569052494252\nAdvantages emerge moves; special, expected operations pass etc natural preferences; very posit\tBooks                                             \tcomputers                                         \t0.15\t5241.45\t1.87405152387603389\nSince other birds shall blame sudden\tBooks                                             \tcomputers                                         \t6.74\t2098.16\t0.75018552983158082\nLegs throw then. Old-fashioned develo\tBooks                                             \tcomputers                                         \t2.66\t163.26\t0.05837271209073850\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q21.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<w_warehouse_name:string,i_item_id:string,inv_before:bigint,inv_after:bigint>\n-- !query output\nJust good amou\tAAAAAAAAAAEAAAAA\t2473\t2677\nJust good amou\tAAAAAAAAAAPBAAAA\t2415\t2071\nJust good amou\tAAAAAAAAACACAAAA\t2792\t2465\nJust good amou\tAAAAAAAAACGCAAAA\t1396\t2084\nJust good amou\tAAAAAAAAACKCAAAA\t1974\t1585\nJust good amou\tAAAAAAAAACPCAAAA\t2131\t1690\nJust good amou\tAAAAAAAAADACAAAA\t2432\t2093\nJust good amou\tAAAAAAAAADBBAAAA\t1747\t2529\nJust good amou\tAAAAAAAAADNAAAAA\t2763\t1862\nJust good amou\tAAAAAAAAAELBAAAA\t2984\t2371\nJust good amou\tAAAAAAAAAFFDAAAA\t2858\t2415\nJust good amou\tAAAAAAAAAFJDAAAA\t2479\t2176\nJust good amou\tAAAAAAAAAFLDAAAA\t2892\t2303\nJust good amou\tAAAAAAAAAGLDAAAA\t2083\t1892\nJust good amou\tAAAAAAAAAGPCAAAA\t1596\t1191\nJust good amou\tAAAAAAAAAHBAAAAA\t2398\t2956\nJust good amou\tAAAAAAAAAHPBAAAA\t1775\t2655\nJust good amou\tAAAAAAAAAINCAAAA\t2296\t2458\nJust good amou\tAAAAAAAAAJIDAAAA\t2528\t2552\nJust good amou\tAAAAAAAAAKABAAAA\t1881\t1465\nJust good amou\tAAAAAAAAALFEAAAA\t2952\t2933\nJust good amou\tAAAAAAAAALIDAAAA\t1918\t2438\nJust good amou\tAAAAAAAAALOAAAAA\t1904\t1396\nJust good amou\tAAAAAAAAALOBAAAA\t1340\t1570\nJust good amou\tAAAAAAAAALPAAAAA\t2299\t1692\nJust good amou\tAAAAAAAAANLCAAAA\t3000\t3113\nJust good amou\tAAAAAAAAAOMDAAAA\t2294\t1928\nJust good amou\tAAAAAAAABABCAAAA\t2308\t1807\nJust good amou\tAAAAAAAABAGAAAAA\t2358\t2308\nJust good amou\tAAAAAAAABCCCAAAA\t1694\t1863\nJust good amou\tAAAAAAAABECDAAAA\t2361\t1636\nJust good amou\tAAAAAAAABEMCAAAA\t2644\t2018\nJust good amou\tAAAAAAAABHACAAAA\t1471\t1982\nJust good amou\tAAAAAAAABHPCAAAA\t1729\t1481\nJust good amou\tAAAAAAAABKEBAAAA\t2284\t1940\nJust good amou\tAAAAAAAABLBDAAAA\t1910\t2399\nJust good amou\tAAAAAAAABMCBAAAA\t2460\t2407\nJust good amou\tAAAAAAAABMOBAAAA\t2405\t1669\nJust good amou\tAAAAAAAABNEBAAAA\t2733\t2113\nJust good amou\tAAAAAAAABOKDAAAA\t3609\t2690\nJust good amou\tAAAAAAAABONAAAAA\t2096\t2325\nJust good amou\tAAAAAAAABPPAAAAA\t1788\t1540\nJust good amou\tAAAAAAAACACEAAAA\t1763\t1240\nJust good amou\tAAAAAAAACAHCAAAA\t1877\t2693\nJust good amou\tAAAAAAAACALAAAAA\t3249\t2868\nJust good amou\tAAAAAAAACALBAAAA\t1663\t1528\nJust good amou\tAAAAAAAACBDBAAAA\t2334\t2813\nJust good amou\tAAAAAAAACBHBAAAA\t1791\t2015\nJust good amou\tAAAAAAAACCABAAAA\t2715\t2959\nJust good amou\tAAAAAAAACCJBAAAA\t2461\t1696\nJust good amou\tAAAAAAAACCPDAAAA\t3129\t2313\nJust good amou\tAAAAAAAACDGCAAAA\t2114\t1790\nJust good amou\tAAAAAAAACDIBAAAA\t2874\t3081\nJust good amou\tAAAAAAAACEHDAAAA\t1616\t1765\nJust good amou\tAAAAAAAACEICAAAA\t2037\t2461\nJust good amou\tAAAAAAAACFKDAAAA\t2373\t1698\nJust good amou\tAAAAAAAACGJDAAAA\t2578\t1814\nJust good amou\tAAAAAAAACGMAAAAA\t2285\t1553\nJust good amou\tAAAAAAAACHCDAAAA\t2620\t2504\nJust good amou\tAAAAAAAACIICAAAA\t2800\t2293\nJust good amou\tAAAAAAAACJBEAAAA\t2718\t2070\nJust good amou\tAAAAAAAACJGDAAAA\t2153\t1778\nJust good amou\tAAAAAAAACJNCAAAA\t1482\t1582\nJust good amou\tAAAAAAAACKBAAAAA\t3122\t2281\nJust good amou\tAAAAAAAACKCAAAAA\t1664\t1982\nJust good amou\tAAAAAAAACKHBAAAA\t2222\t1546\nJust good amou\tAAAAAAAACKHDAAAA\t2312\t1798\nJust good amou\tAAAAAAAACKKBAAAA\t2463\t1829\nJust good amou\tAAAAAAAACLDAAAAA\t2523\t2348\nJust good amou\tAAAAAAAACOLBAAAA\t1630\t2360\nJust good amou\tAAAAAAAACPFDAAAA\t1863\t1405\nJust good amou\tAAAAAAAACPKBAAAA\t2088\t2537\nJust good amou\tAAAAAAAACPLDAAAA\t1982\t1599\nJust 
good amou\tAAAAAAAADBECAAAA\t3507\t2356\nJust good amou\tAAAAAAAADBGAAAAA\t1819\t1772\nJust good amou\tAAAAAAAADCEEAAAA\t2655\t1843\nJust good amou\tAAAAAAAADDEAAAAA\t2210\t2733\nJust good amou\tAAAAAAAADDOCAAAA\t2198\t2067\nJust good amou\tAAAAAAAADEHCAAAA\t3190\t2928\nJust good amou\tAAAAAAAADFLDAAAA\t2603\t1991\nJust good amou\tAAAAAAAADHDAAAAA\t1649\t1777\nJust good amou\tAAAAAAAADIIAAAAA\t1914\t1818\nJust good amou\tAAAAAAAADIOAAAAA\t2058\t2133\nJust good amou\tAAAAAAAADJPBAAAA\t2840\t1910\nJust good amou\tAAAAAAAADLBEAAAA\t2293\t1578\nJust good amou\tAAAAAAAADLHBAAAA\t1817\t1316\nJust good amou\tAAAAAAAADMCCAAAA\t1944\t2028\nJust good amou\tAAAAAAAADPDBAAAA\t1993\t1430\nJust good amou\tAAAAAAAAECEDAAAA\t1968\t2076\nJust good amou\tAAAAAAAAEEOBAAAA\t1992\t1737\nJust good amou\tAAAAAAAAEEODAAAA\t2938\t2820\nJust good amou\tAAAAAAAAEFACAAAA\t2213\t2877\nJust good amou\tAAAAAAAAEGCCAAAA\t2262\t3212\nJust good amou\tAAAAAAAAEGCEAAAA\t3052\t2175\nJust good amou\tAAAAAAAAEIEDAAAA\t1786\t2175\nJust good amou\tAAAAAAAAEIHDAAAA\t1938\t1944\nJust good amou\tAAAAAAAAEINDAAAA\t2402\t2113\nJust good amou\tAAAAAAAAEKADAAAA\t1327\t1683\nJust good amou\tAAAAAAAAELKAAAAA\t1817\t2630\nJust good amou\tAAAAAAAAEMHCAAAA\t2260\t2878\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q22.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_product_name:string,i_brand:string,i_class:string,i_category:string,qoh:double>\n-- !query output\nesepriableanti                                    \tNULL\tNULL\tNULL\t429.7808764940239\nesepriableanti                                    \timportoamalg #x                                   \tNULL\tNULL\t429.7808764940239\nesepriableanti                                    \timportoamalg #x                                   \tfragrances                                        \tNULL\t429.7808764940239\nesepriableanti                                    \timportoamalg #x                                   \tfragrances                                        \tWomen                                             \t429.7808764940239\nn stbarn stbarought                               \tNULL\tNULL\tNULL\t430.0122448979592\nn stbarn stbarought                               \tamalgimporto #x                                   \tNULL\tNULL\t430.0122448979592\nn stbarn stbarought                               \tamalgimporto #x                                   \taccessories                                       \tNULL\t430.0122448979592\nn stbarn stbarought                               \tamalgimporto #x                                   \taccessories                                       \tMen                                               \t430.0122448979592\nantiationeing                                     \tNULL\tNULL\tNULL\t437.03614457831327\nantiationeing                                     \tamalgexporti #x                                   \tNULL\tNULL\t437.03614457831327\nantiationeing                                     \tamalgexporti #x                                   \tnewborn                                           \tNULL\t437.03614457831327\nantiationeing                                     \tamalgexporti #x                                   \tnewborn                                           \tChildren                                          \t437.03614457831327\nn stpriantin st                                   \tNULL\tNULL\tNULL\t438.77868852459017\nn stpriantin st                                   \texportiexporti #x                                 \tNULL\tNULL\t438.77868852459017\nn stpriantin st                                   \texportiexporti #x                                 \ttoddlers                                          \tNULL\t438.77868852459017\nn stpriantin st                                   \texportiexporti #x                                 \ttoddlers                                          \tChildren                                          \t438.77868852459017\neingprically                                      \tNULL\tNULL\tNULL\t439.97975708502025\neingprically                                      \tamalgbrand #x                                     \tNULL\tNULL\t439.97975708502025\neingprically                                      \tamalgbrand #x                                     \tsemi-precious                                     \tNULL\t439.97975708502025\neingprically                                      \tamalgbrand #x                                     \tsemi-precious                                     \tJewelry                                           \t439.97975708502025\nprieingable                                       \tNULL\tNULL\tNULL\t440.096\nprieingable                                       \texportiunivamalg #x                               
\tNULL\tNULL\t440.096\nprieingable                                       \texportiunivamalg #x                               \tself-help                                         \tNULL\t440.096\nprieingable                                       \texportiunivamalg #x                               \tself-help                                         \tBooks                                             \t440.096\noughteingn stationought                           \tNULL\tNULL\tNULL\t440.1497975708502\noughteingn stationought                           \tamalgscholar #x                                   \tNULL\tNULL\t440.1497975708502\noughteingn stationought                           \tamalgscholar #x                                   \trock                                              \tNULL\t440.1497975708502\noughteingn stationought                           \tamalgscholar #x                                   \trock                                              \tMusic                                             \t440.1497975708502\neingationbaroughtought                            \tNULL\tNULL\tNULL\t440.9721115537849\neingationbaroughtought                            \tmaxicorp #x                                       \tNULL\tNULL\t440.9721115537849\neingationbaroughtought                            \tmaxicorp #x                                       \twomens watch                                      \tNULL\t440.9721115537849\neingationbaroughtought                            \tmaxicorp #x                                       \twomens watch                                      \tJewelry                                           \t440.9721115537849\npriantibarpri                                     \tNULL\tNULL\tNULL\t443.45849802371544\npriantibarpri                                     \texportiimporto #x                                 \tNULL\tNULL\t443.45849802371544\npriantibarpri                                     \texportiimporto #x                                 \tpants                                             \tNULL\t443.45849802371544\npriantibarpri                                     \texportiimporto #x                                 \tpants                                             \tMen                                               \t443.45849802371544\nprioughtantiation                                 \tNULL\tNULL\tNULL\t443.8825910931174\nprioughtantiation                                 \tcorpmaxi #x                                       \tNULL\tNULL\t443.8825910931174\nprioughtantiation                                 \tcorpmaxi #x                                       \tparenting                                         \tNULL\t443.8825910931174\nprioughtantiation                                 \tcorpmaxi #x                                       \tparenting                                         \tBooks                                             \t443.8825910931174\neseprieingoughtought                              \tNULL\tNULL\tNULL\t445.2016129032258\neseprieingoughtought                              \timportonameless #x                                \tNULL\tNULL\t445.2016129032258\neseprieingoughtought                              \timportonameless #x                                \tbaseball                                          \tNULL\t445.2016129032258\neseprieingoughtought                              \timportonameless #x                                \tbaseball                                          \tSports                                            
\t445.2016129032258\neingpriationanti                                  \tNULL\tNULL\tNULL\t445.4920634920635\neingpriationanti                                  \tscholarunivamalg #x                               \tNULL\tNULL\t445.4920634920635\neingpriationanti                                  \tscholarunivamalg #x                               \tfiction                                           \tNULL\t445.4920634920635\neingpriationanti                                  \tscholarunivamalg #x                               \tfiction                                           \tBooks                                             \t445.4920634920635\nantin stablecallyought                            \tNULL\tNULL\tNULL\t445.54918032786884\nantin stablecallyought                            \timportoedu pack #x                                \tNULL\tNULL\t445.54918032786884\nantin stablecallyought                            \timportoedu pack #x                                \tmens                                              \tNULL\t445.54918032786884\nantin stablecallyought                            \timportoedu pack #x                                \tmens                                              \tShoes                                             \t445.54918032786884\ncallycallyn steing                                \tNULL\tNULL\tNULL\t445.9012345679012\ncallycallyn steing                                \tcorpunivamalg #x                                  \tNULL\tNULL\t445.9012345679012\ncallycallyn steing                                \tcorpunivamalg #x                                  \tmystery                                           \tNULL\t445.9012345679012\ncallycallyn steing                                \tcorpunivamalg #x                                  \tmystery                                           \tBooks                                             \t445.9012345679012\noughtpribarought                                  \tNULL\tNULL\tNULL\t446.125\noughtpribarought                                  \texportinameless #x                                \tNULL\tNULL\t446.125\noughtpribarought                                  \texportinameless #x                                \twallpaper                                         \tNULL\t446.125\noughtpribarought                                  \texportinameless #x                                \twallpaper                                         \tHome                                              \t446.125\noughtantioughtbarought                            \tNULL\tNULL\tNULL\t446.1847389558233\noughtantioughtbarought                            \tedu packmaxi #x                                  \tNULL\tNULL\t446.1847389558233\noughtantioughtbarought                            \tedu packmaxi #x                                  \tentertainments                                    \tNULL\t446.1847389558233\noughtantioughtbarought                            \tedu packmaxi #x                                  \tentertainments                                    \tBooks                                             \t446.1847389558233\ncallyoughtn stcallyought                          \tNULL\tNULL\tNULL\t446.43650793650795\ncallyoughtn stcallyought                          \texportischolar #x                                 \tNULL\tNULL\t446.43650793650795\ncallyoughtn stcallyought                          \texportischolar #x                                 \tpop                                               
\tNULL\t446.43650793650795\ncallyoughtn stcallyought                          \texportischolar #x                                 \tpop                                               \tMusic                                             \t446.43650793650795\nationeingationableought                           \tNULL\tNULL\tNULL\t446.48192771084337\nationeingationableought                           \tnamelessnameless #x                               \tNULL\tNULL\t446.48192771084337\nationeingationableought                           \tnamelessnameless #x                               \toutdoor                                           \tNULL\t446.48192771084337\nationeingationableought                           \tnamelessnameless #x                               \toutdoor                                           \tSports                                            \t446.48192771084337\npriantiableese                                    \tNULL\tNULL\tNULL\t446.85483870967744\npriantiableese                                    \texportiedu pack #x                                \tNULL\tNULL\t446.85483870967744\npriantiableese                                    \texportiedu pack #x                                \tkids                                              \tNULL\t446.85483870967744\npriantiableese                                    \texportiedu pack #x                                \tkids                                              \tShoes                                             \t446.85483870967744\nprieseeseableought                                \tNULL\tNULL\tNULL\t446.9186991869919\nprieseeseableought                                \tamalgscholar #x                                   \tNULL\tNULL\t446.9186991869919\nprieseeseableought                                \tamalgscholar #x                                   \trock                                              \tNULL\t446.9186991869919\nprieseeseableought                                \tamalgscholar #x                                   \trock                                              \tMusic                                             \t446.9186991869919\nationableoughtcallyought                          \tNULL\tNULL\tNULL\t447.165991902834\nationableoughtcallyought                          \texportischolar #x                                 \tNULL\tNULL\t447.165991902834\nationableoughtcallyought                          \texportischolar #x                                 \tpop                                               \tNULL\t447.165991902834\nationableoughtcallyought                          \texportischolar #x                                 \tpop                                               \tMusic                                             \t447.165991902834\npripricallyese                                    \tNULL\tNULL\tNULL\t447.2550607287449\npripricallyese                                    \tedu packimporto #x                                \tNULL\tNULL\t447.2550607287449\npripricallyese                                    \tedu packimporto #x                                \tsports-apparel                                    \tNULL\t447.2550607287449\npripricallyese                                    \tedu packimporto #x                                \tsports-apparel                                    \tMen                                               \t447.2550607287449\neingableationn st                                 \tNULL\tNULL\tNULL\t447.3541666666667\neingableationn st                             
    \tnamelessmaxi #x                                   \tNULL\tNULL\t447.3541666666667\neingableationn st                                 \tnamelessmaxi #x                                   \tromance                                           \tNULL\t447.3541666666667\neingableationn st                                 \tnamelessmaxi #x                                   \tromance                                           \tBooks                                             \t447.3541666666667\nn stantin stoughtought                            \tNULL\tNULL\tNULL\t448.2396694214876\nn stantin stoughtought                            \timportoscholar #x                                 \tNULL\tNULL\t448.2396694214876\nn stantin stoughtought                            \timportoscholar #x                                 \tcountry                                           \tNULL\t448.2396694214876\nn stantin stoughtought                            \timportoscholar #x                                 \tcountry                                           \tMusic                                             \t448.2396694214876\nn steingbaranti                                   \tNULL\tNULL\tNULL\t448.702479338843\nn steingbaranti                                   \tamalgamalg #x                                     \tNULL\tNULL\t448.702479338843\nn steingbaranti                                   \tamalgamalg #x                                     \tdresses                                           \tNULL\t448.702479338843\nn steingbaranti                                   \tamalgamalg #x                                     \tdresses                                           \tWomen                                             \t448.702479338843\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q23a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<sum(sales):decimal(28,2)>\n-- !query output\nNULL\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q23b.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_last_name:string,c_first_name:string,sales:decimal(28,2)>\n-- !query output\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q24a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_last_name:string,c_first_name:string,s_store_name:string,paid:decimal(27,2)>\n-- !query output\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q24b.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_last_name:string,c_first_name:string,s_store_name:string,paid:decimal(27,2)>\n-- !query output\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q25.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,i_item_desc:string,s_store_id:string,s_store_name:string,store_sales_profit:decimal(17,2),store_returns_loss:decimal(17,2),catalog_sales_profit:decimal(17,2)>\n-- !query output\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q26.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,agg1:double,agg2:decimal(11,6),agg3:decimal(11,6),agg4:decimal(11,6)>\n-- !query output\nAAAAAAAAAABDAAAA\t84.0\t18.790000\t0.000000\t17.280000\nAAAAAAAAAABEAAAA\t76.0\t145.970000\t4732.320000\t115.310000\nAAAAAAAAAACAAAAA\t67.0\t55.950000\t1532.610000\t48.670000\nAAAAAAAAAADBAAAA\t34.333333333333336\t70.963333\t0.000000\t20.753333\nAAAAAAAAAAEDAAAA\t45.0\t93.180000\t0.000000\t16.770000\nAAAAAAAAAAEEAAAA\t46.666666666666664\t73.350000\t10.526667\t15.433333\nAAAAAAAAAAFAAAAA\t31.0\t42.450000\t0.000000\t22.920000\nAAAAAAAAAAHAAAAA\t42.0\t9.700000\t0.000000\t0.480000\nAAAAAAAAAAHDAAAA\t44.5\t89.835000\t0.000000\t75.130000\nAAAAAAAAAAICAAAA\t2.0\t117.590000\t68.420000\t35.270000\nAAAAAAAAAAIDAAAA\t11.0\t6.310000\t13.950000\t1.510000\nAAAAAAAAAAKAAAAA\t87.0\t142.525000\t0.000000\t32.775000\nAAAAAAAAAALCAAAA\t65.5\t114.655000\t0.000000\t12.395000\nAAAAAAAAAAOAAAAA\t91.0\t42.190000\t0.000000\t30.790000\nAAAAAAAAAAPBAAAA\t56.0\t37.590000\t0.000000\t22.335000\nAAAAAAAAAAPCAAAA\t35.0\t49.890000\t0.000000\t30.430000\nAAAAAAAAABAAAAAA\t87.0\t77.640000\t0.000000\t29.500000\nAAAAAAAAABAEAAAA\t79.0\t34.720000\t0.000000\t30.900000\nAAAAAAAAABBAAAAA\t16.0\t53.500000\t0.000000\t26.515000\nAAAAAAAAABBDAAAA\t79.0\t91.570000\t0.000000\t0.910000\nAAAAAAAAABCEAAAA\t100.0\t106.260000\t0.000000\t26.560000\nAAAAAAAAABDAAAAA\t58.0\t3.020000\t0.000000\t2.080000\nAAAAAAAAABDBAAAA\t21.0\t72.850000\t0.000000\t57.550000\nAAAAAAAAABEAAAAA\t71.0\t70.970000\t0.000000\t48.250000\nAAAAAAAAABFCAAAA\t33.0\t284.590000\t0.000000\t36.990000\nAAAAAAAAABGAAAAA\t54.5\t77.980000\t0.000000\t51.590000\nAAAAAAAAABGBAAAA\t84.0\t18.370000\t300.835000\t14.035000\nAAAAAAAAABJDAAAA\t42.0\t41.960000\t0.000000\t8.810000\nAAAAAAAAABKCAAAA\t27.0\t62.695000\t0.000000\t24.770000\nAAAAAAAAABLBAAAA\t62.0\t112.810000\t0.000000\t54.795000\nAAAAAAAAABLCAAAA\t42.0\t206.490000\t0.000000\t61.940000\nAAAAAAAAABMAAAAA\t45.5\t158.505000\t1467.465000\t50.865000\nAAAAAAAAABMBAAAA\t54.0\t38.760000\t0.000000\t20.930000\nAAAAAAAAABNDAAAA\t2.0\t227.200000\t0.000000\t34.080000\nAAAAAAAAACBCAAAA\t82.0\t200.340000\t0.000000\t154.260000\nAAAAAAAAACCDAAAA\t8.0\t96.950000\t0.000000\t89.190000\nAAAAAAAAACDDAAAA\t99.0\t215.550000\t5622.900000\t66.820000\nAAAAAAAAACECAAAA\t35.0\t117.300000\t0.000000\t32.840000\nAAAAAAAAACEEAAAA\t67.0\t221.660000\t2747.330000\t164.020000\nAAAAAAAAACFAAAAA\t81.0\t16.420000\t148.390000\t2.290000\nAAAAAAAAACFBAAAA\t23.0\t148.690000\t0.000000\t96.640000\nAAAAAAAAACGAAAAA\t71.5\t182.975000\t0.000000\t89.260000\nAAAAAAAAACGDAAAA\t38.0\t111.060000\t0.000000\t43.310000\nAAAAAAAAACHBAAAA\t62.0\t199.820000\t0.000000\t67.930000\nAAAAAAAAACIDAAAA\t73.0\t184.700000\t1925.030000\t38.780000\nAAAAAAAAACJDAAAA\t12.0\t58.740000\t319.710000\t42.290000\nAAAAAAAAACLBAAAA\t57.5\t78.230000\t391.295000\t11.920000\nAAAAAAAAACLDAAAA\t6.0\t127.370000\t0.000000\t118.450000\nAAAAAAAAACMAAAAA\t84.0\t118.000000\t0.000000\t20.060000\nAAAAAAAAACNCAAAA\t8.0\t25.910000\t99.840000\t18.910000\nAAAAAAAAACODAAAA\t49.0\t106.120000\t0.000000\t6.360000\nAAAAAAAAADBBAAAA\t77.0\t166.050000\t3605.600000\t99.630000\nAAAAAAAAADCDAAAA\t3.0\t191.710000\t0.000000\t70.930000\nAAAAAAAAADDBAAAA\t46.0\t68.830000\t0.000000\t28.900000\nAAAAAAAAADEAAAAA\t78.0\t20.720000\t0.000000\t5.800000\nAAAAAAAAADEBAAAA\t57.0\t54.900000\t0.000000\t8.230000\nAAAAAAAAADFAAAAA\t70.0\t75.673333\t777.880000\t21.056667\nAAAAAAAAADFCAAAA\t44.0\t98.340000\t182.190000\t82.775000\nAAAAAAAAADFDAAAA\t5.0\t40.0300
00\t0.000000\t28.020000\nAAAAAAAAADGCAAAA\t22.0\t42.730000\t0.000000\t8.540000\nAAAAAAAAADKAAAAA\t35.0\t62.020000\t0.000000\t36.590000\nAAAAAAAAADMBAAAA\t10.0\t46.770000\t0.000000\t37.410000\nAAAAAAAAADNAAAAA\t14.0\t258.660000\t0.000000\t178.470000\nAAAAAAAAADNBAAAA\t53.0\t94.195000\t0.000000\t27.755000\nAAAAAAAAADNDAAAA\t9.0\t150.480000\t0.000000\t75.240000\nAAAAAAAAADOAAAAA\t76.5\t187.970000\t310.730000\t36.320000\nAAAAAAAAAEBDAAAA\t7.0\t68.300000\t0.000000\t15.700000\nAAAAAAAAAECEAAAA\t81.0\t241.650000\t0.000000\t9.660000\nAAAAAAAAAEDEAAAA\t18.0\t184.510000\t979.740000\t108.860000\nAAAAAAAAAEEDAAAA\t81.0\t72.050000\t0.000000\t56.910000\nAAAAAAAAAEGAAAAA\t44.0\t192.830000\t0.000000\t30.850000\nAAAAAAAAAEGBAAAA\t39.0\t6.230000\t91.590000\t3.050000\nAAAAAAAAAEGDAAAA\t52.0\t74.130000\t0.000000\t18.530000\nAAAAAAAAAEHAAAAA\t68.0\t72.870000\t602.000000\t59.020000\nAAAAAAAAAEHCAAAA\t50.0\t52.560000\t0.000000\t29.430000\nAAAAAAAAAEJBAAAA\t66.0\t66.110000\t0.000000\t5.940000\nAAAAAAAAAEKAAAAA\t17.0\t186.350000\t339.590000\t124.850000\nAAAAAAAAAEKCAAAA\t93.0\t57.210000\t549.870000\t6.290000\nAAAAAAAAAEKDAAAA\t55.0\t143.730000\t0.000000\t10.060000\nAAAAAAAAAELBAAAA\t12.0\t137.550000\t0.000000\t126.540000\nAAAAAAAAAEMBAAAA\t100.0\t52.750000\t1092.980000\t14.770000\nAAAAAAAAAENAAAAA\t81.0\t43.060000\t2510.870000\t32.290000\nAAAAAAAAAEPBAAAA\t13.5\t80.590000\t0.000000\t16.975000\nAAAAAAAAAFBEAAAA\t93.0\t115.340000\t0.000000\t42.670000\nAAAAAAAAAFCDAAAA\t47.0\t170.770000\t0.000000\t163.930000\nAAAAAAAAAFCEAAAA\t73.0\t91.970000\t0.000000\t59.780000\nAAAAAAAAAFDAAAAA\t100.0\t229.510000\t2616.300000\t68.850000\nAAAAAAAAAFDCAAAA\t82.0\t93.600000\t0.000000\t24.330000\nAAAAAAAAAFEEAAAA\t61.0\t245.950000\t0.000000\t199.210000\nAAAAAAAAAFFAAAAA\t59.0\t54.550000\t555.140000\t40.910000\nAAAAAAAAAFFBAAAA\t65.0\t142.570000\t0.000000\t69.850000\nAAAAAAAAAFFEAAAA\t39.0\t237.790000\t681.580000\t116.510000\nAAAAAAAAAFGCAAAA\t45.0\t205.590000\t0.000000\t47.280000\nAAAAAAAAAFHBAAAA\t48.5\t68.835000\t0.000000\t23.875000\nAAAAAAAAAFIAAAAA\t72.0\t84.430000\t0.000000\t0.000000\nAAAAAAAAAFIDAAAA\t40.5\t45.650000\t1212.090000\t17.425000\nAAAAAAAAAFJCAAAA\t13.0\t133.270000\t0.000000\t6.660000\nAAAAAAAAAFKCAAAA\t5.0\t178.640000\t0.000000\t105.390000\nAAAAAAAAAFNBAAAA\t16.0\t32.220000\t0.000000\t31.570000\nAAAAAAAAAFODAAAA\t59.0\t88.455000\t138.365000\t14.180000\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q27.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,s_state:string,g_state:tinyint,agg1:double,agg2:decimal(11,6),agg3:decimal(11,6),agg4:decimal(11,6)>\n-- !query output\nNULL\tNULL\t1\t50.20319480167863\t76.358588\t197.423228\t38.217862\nAAAAAAAAAAABAAAA\tNULL\t1\t45.0\t20.970000\t0.000000\t10.900000\nAAAAAAAAAAABAAAA\tTN\t0\t45.0\t20.970000\t0.000000\t10.900000\nAAAAAAAAAAACAAAA\tNULL\t1\t4.0\t60.970000\t0.000000\t35.970000\nAAAAAAAAAAACAAAA\tTN\t0\t4.0\t60.970000\t0.000000\t35.970000\nAAAAAAAAAACDAAAA\tNULL\t1\t86.0\t56.830000\t0.000000\t38.070000\nAAAAAAAAAACDAAAA\tTN\t0\t86.0\t56.830000\t0.000000\t38.070000\nAAAAAAAAAADBAAAA\tNULL\t1\t61.0\t40.060000\t0.000000\t7.210000\nAAAAAAAAAADBAAAA\tTN\t0\t61.0\t40.060000\t0.000000\t7.210000\nAAAAAAAAAADCAAAA\tNULL\t1\t37.0\t96.405000\t0.000000\t82.010000\nAAAAAAAAAADCAAAA\tTN\t0\t37.0\t96.405000\t0.000000\t82.010000\nAAAAAAAAAADEAAAA\tNULL\t1\t23.0\t96.010000\t0.000000\t57.600000\nAAAAAAAAAADEAAAA\tTN\t0\t23.0\t96.010000\t0.000000\t57.600000\nAAAAAAAAAAEDAAAA\tNULL\t1\t64.0\t10.940000\t0.000000\t10.390000\nAAAAAAAAAAEDAAAA\tTN\t0\t64.0\t10.940000\t0.000000\t10.390000\nAAAAAAAAAAEEAAAA\tNULL\t1\t65.0\t97.825000\t0.000000\t24.320000\nAAAAAAAAAAEEAAAA\tTN\t0\t65.0\t97.825000\t0.000000\t24.320000\nAAAAAAAAAAFAAAAA\tNULL\t1\t71.0\t88.080000\t0.000000\t10.560000\nAAAAAAAAAAFAAAAA\tTN\t0\t71.0\t88.080000\t0.000000\t10.560000\nAAAAAAAAAAFCAAAA\tNULL\t1\t21.0\t72.140000\t0.000000\t54.725000\nAAAAAAAAAAFCAAAA\tTN\t0\t21.0\t72.140000\t0.000000\t54.725000\nAAAAAAAAAAGBAAAA\tNULL\t1\t23.0\t97.980000\t0.000000\t45.070000\nAAAAAAAAAAGBAAAA\tTN\t0\t23.0\t97.980000\t0.000000\t45.070000\nAAAAAAAAAAGCAAAA\tNULL\t1\t30.0\t62.280000\t0.000000\t9.340000\nAAAAAAAAAAGCAAAA\tTN\t0\t30.0\t62.280000\t0.000000\t9.340000\nAAAAAAAAAAHAAAAA\tNULL\t1\t30.0\t91.910000\t0.000000\t52.380000\nAAAAAAAAAAHAAAAA\tTN\t0\t30.0\t91.910000\t0.000000\t52.380000\nAAAAAAAAAAHBAAAA\tNULL\t1\t76.0\t30.060000\t345.610000\t26.750000\nAAAAAAAAAAHBAAAA\tTN\t0\t76.0\t30.060000\t345.610000\t26.750000\nAAAAAAAAAAHDAAAA\tNULL\t1\t60.0\t49.390000\t0.000000\t19.155000\nAAAAAAAAAAHDAAAA\tTN\t0\t60.0\t49.390000\t0.000000\t19.155000\nAAAAAAAAAAIAAAAA\tNULL\t1\t65.0\t115.230000\t0.000000\t79.555000\nAAAAAAAAAAIAAAAA\tTN\t0\t65.0\t115.230000\t0.000000\t79.555000\nAAAAAAAAAAICAAAA\tNULL\t1\tNULL\tNULL\t262.480000\tNULL\nAAAAAAAAAAICAAAA\tTN\t0\tNULL\tNULL\t262.480000\tNULL\nAAAAAAAAAAJCAAAA\tNULL\t1\t7.0\t111.745000\t0.000000\t82.515000\nAAAAAAAAAAJCAAAA\tTN\t0\t7.0\t111.745000\t0.000000\t82.515000\nAAAAAAAAAAKAAAAA\tNULL\t1\t31.5\t51.350000\t0.000000\t36.555000\nAAAAAAAAAAKAAAAA\tTN\t0\t31.5\t51.350000\t0.000000\t36.555000\nAAAAAAAAAAKBAAAA\tNULL\t1\t3.0\t146.600000\t0.000000\t105.550000\nAAAAAAAAAAKBAAAA\tTN\t0\t3.0\t146.600000\t0.000000\t105.550000\nAAAAAAAAAAKDAAAA\tNULL\t1\t69.0\t34.660000\t0.000000\t11.090000\nAAAAAAAAAAKDAAAA\tTN\t0\t69.0\t34.660000\t0.000000\t11.090000\nAAAAAAAAAALAAAAA\tNULL\t1\t97.0\t14.270000\t0.000000\t12.410000\nAAAAAAAAAALAAAAA\tTN\t0\t97.0\t14.270000\t0.000000\t12.410000\nAAAAAAAAAAMBAAAA\tNULL\t1\t68.5\t70.250000\t0.000000\t34.085000\nAAAAAAAAAAMBAAAA\tTN\t0\t68.5\t70.250000\t0.000000\t34.085000\nAAAAAAAAAAMCAAAA\tNULL\t1\t51.5\t73.135000\t0.000000\t25.570000\nAAAAAAAAAAMCAAAA\tTN\t0\t51.5\t73.135000\t0.000000\t25.570000\nAAAAAAAAAANAAAAA\tNULL\t1\t50.5\t29.315000\t9.580000\t15.805000\nAAAAAAAAAANAAAAA\tTN\t0\t50.5\t29.315000\t9.580000\t15.805000\nAAAAAAAAAAOCAAAA\tNULL\t1\t1.0\t74.630000\t0.000000\t19.400000\nAAAAA
AAAAAOCAAAA\tTN\t0\t1.0\t74.630000\t0.000000\t19.400000\nAAAAAAAAAAODAAAA\tNULL\t1\t66.33333333333333\t52.823333\t1793.560000\t28.406667\nAAAAAAAAAAODAAAA\tTN\t0\t66.33333333333333\t52.823333\t1793.560000\t28.406667\nAAAAAAAAAAPBAAAA\tNULL\t1\t17.0\t167.070000\t0.000000\t40.090000\nAAAAAAAAAAPBAAAA\tTN\t0\t17.0\t167.070000\t0.000000\t40.090000\nAAAAAAAAABAAAAAA\tNULL\t1\t79.0\t48.110000\t0.000000\t14.430000\nAAAAAAAAABAAAAAA\tTN\t0\t79.0\t48.110000\t0.000000\t14.430000\nAAAAAAAAABAEAAAA\tNULL\t1\t16.5\t26.370000\t0.000000\t9.325000\nAAAAAAAAABAEAAAA\tTN\t0\t16.5\t26.370000\t0.000000\t9.325000\nAAAAAAAAABCBAAAA\tNULL\t1\t32.0\t98.600000\t0.000000\t46.340000\nAAAAAAAAABCBAAAA\tTN\t0\t32.0\t98.600000\t0.000000\t46.340000\nAAAAAAAAABFCAAAA\tNULL\t1\t24.0\t101.420000\t0.000000\t3.040000\nAAAAAAAAABFCAAAA\tTN\t0\t24.0\t101.420000\t0.000000\t3.040000\nAAAAAAAAABFEAAAA\tNULL\t1\t34.5\t42.490000\t131.055000\t15.505000\nAAAAAAAAABFEAAAA\tTN\t0\t34.5\t42.490000\t131.055000\t15.505000\nAAAAAAAAABGAAAAA\tNULL\t1\t57.666666666666664\t94.343333\t569.626667\t56.296667\nAAAAAAAAABGAAAAA\tTN\t0\t57.666666666666664\t94.343333\t569.626667\t56.296667\nAAAAAAAAABHCAAAA\tNULL\t1\t58.0\t38.370000\t0.000000\t16.880000\nAAAAAAAAABHCAAAA\tTN\t0\t58.0\t38.370000\t0.000000\t16.880000\nAAAAAAAAABHDAAAA\tNULL\t1\t43.0\t10.290000\t0.000000\t3.800000\nAAAAAAAAABHDAAAA\tTN\t0\t43.0\t10.290000\t0.000000\t3.800000\nAAAAAAAAABIBAAAA\tNULL\t1\t60.25\t74.462500\t0.000000\t58.247500\nAAAAAAAAABIBAAAA\tTN\t0\t60.25\t74.462500\t0.000000\t58.247500\nAAAAAAAAABJDAAAA\tNULL\t1\t64.5\t28.405000\t0.000000\t22.080000\nAAAAAAAAABJDAAAA\tTN\t0\t64.5\t28.405000\t0.000000\t22.080000\nAAAAAAAAABKCAAAA\tNULL\t1\t69.0\t44.880000\t518.585000\t18.855000\nAAAAAAAAABKCAAAA\tTN\t0\t69.0\t44.880000\t518.585000\t18.855000\nAAAAAAAAABLBAAAA\tNULL\t1\t18.0\t43.495000\t0.000000\t24.660000\nAAAAAAAAABLBAAAA\tTN\t0\t18.0\t43.495000\t0.000000\t24.660000\nAAAAAAAAABLCAAAA\tNULL\t1\t6.0\t102.740000\t0.000000\t90.410000\nAAAAAAAAABLCAAAA\tTN\t0\t6.0\t102.740000\t0.000000\t90.410000\nAAAAAAAAABMDAAAA\tNULL\t1\t50.5\t32.745000\t0.000000\t12.005000\nAAAAAAAAABMDAAAA\tTN\t0\t50.5\t32.745000\t0.000000\t12.005000\nAAAAAAAAABNAAAAA\tNULL\t1\t85.0\t31.880000\t0.000000\t5.100000\nAAAAAAAAABNAAAAA\tTN\t0\t85.0\t31.880000\t0.000000\t5.100000\nAAAAAAAAABOCAAAA\tNULL\t1\t60.0\t113.590000\t51.520000\t1.130000\nAAAAAAAAABOCAAAA\tTN\t0\t60.0\t113.590000\t51.520000\t1.130000\nAAAAAAAAABPAAAAA\tNULL\t1\t55.0\t89.150000\t3442.050000\t76.440000\nAAAAAAAAABPAAAAA\tTN\t0\t55.0\t89.150000\t3442.050000\t76.440000\nAAAAAAAAABPBAAAA\tNULL\t1\t80.0\t16.010000\t0.000000\t9.440000\nAAAAAAAAABPBAAAA\tTN\t0\t80.0\t16.010000\t0.000000\t9.440000\nAAAAAAAAABPDAAAA\tNULL\t1\t73.0\t112.940000\t2248.960000\t99.380000\nAAAAAAAAABPDAAAA\tTN\t0\t73.0\t112.940000\t2248.960000\t99.380000\nAAAAAAAAACAAAAAA\tNULL\t1\t61.0\t101.820000\t0.000000\t90.610000\nAAAAAAAAACAAAAAA\tTN\t0\t61.0\t101.820000\t0.000000\t90.610000\nAAAAAAAAACACAAAA\tNULL\t1\t86.0\t101.500000\t0.000000\t57.850000\nAAAAAAAAACACAAAA\tTN\t0\t86.0\t101.500000\t0.000000\t57.850000\nAAAAAAAAACADAAAA\tNULL\t1\t65.0\t97.210000\t0.000000\t83.595000\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q28.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<B1_LP:decimal(11,6),B1_CNT:bigint,B1_CNTD:bigint,B2_LP:decimal(11,6),B2_CNT:bigint,B2_CNTD:bigint,B3_LP:decimal(11,6),B3_CNT:bigint,B3_CNTD:bigint,B4_LP:decimal(11,6),B4_CNT:bigint,B4_CNTD:bigint,B5_LP:decimal(11,6),B5_CNT:bigint,B5_CNTD:bigint,B6_LP:decimal(11,6),B6_CNT:bigint,B6_CNTD:bigint>\n-- !query output\n78.045281\t36383\t9236\t69.528580\t35193\t6542\t133.847037\t28274\t9714\t81.911887\t31756\t7687\t61.160300\t36338\t8603\t39.282627\t29915\t5210\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q29.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,i_item_desc:string,s_store_id:string,s_store_name:string,store_sales_quantity:bigint,store_returns_quantity:bigint,catalog_sales_quantity:bigint>\n-- !query output\nAAAAAAAADIFDAAAA\tNow christian papers believe very major, new branches. Annual wars include harshly so-called sites. \tAAAAAAAAHAAAAAAA\tation\t11\t10\t13\nAAAAAAAANNBEAAAA\tOld forces shall not think more than foreign earnings. Controls could carry almos\tAAAAAAAACAAAAAAA\table\t56\t25\t10\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q3.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<d_year:int,brand_id:int,brand:string,sum_agg:decimal(17,2)>\n-- !query output\n1998\t1004001\tedu packamalg #x                                  \t65716.37\n1998\t10001008\tamalgunivamalg #x                                 \t34140.78\n1998\t8012001\timportomaxi #x                                    \t32669.34\n1998\t2004001\tedu packimporto #x                                \t25130.97\n1998\t10003014\texportiunivamalg #x                              \t23720.25\n1998\t5004001\tedu packscholar #x                                \t23713.55\n1998\t9002008\timportomaxi #x                                    \t22002.12\n1998\t3003001\texportiexporti #x                                 \t21596.96\n1998\t8014004\tedu packmaxi #x                                   \t20442.12\n1998\t9009005\tmaximaxi #x                                       \t19866.63\n1998\t3002001\timportoexporti #x                                 \t17347.94\n1998\t3001001\tamalgexporti #x                                   \t16882.10\n1998\t2003001\texportiimporto #x                                 \t13095.00\n1998\t8010001\tunivmaxi #x                                       \t12408.22\n1998\t8007001\tbrandnameless #x                                  \t12021.05\n1998\t6005005\tscholarcorp #x                                    \t10366.33\n1998\t8013008\texportimaxi #x                                    \t10008.30\n1998\t10008008\tnamelessunivamalg #x                              \t7909.24\n1998\t1003002\texportiamalg #x                                   \t5046.65\n1999\t1004001\tedu packamalg #x                                  \t72111.20\n1999\t8012001\timportomaxi #x                                    \t45932.95\n1999\t9009005\tmaximaxi #x                                       \t32382.90\n1999\t1003002\texportiamalg #x                                   \t28586.32\n1999\t8007001\tbrandnameless #x                                  \t27105.57\n1999\t9002008\timportomaxi #x                                    \t26746.44\n1999\t5004001\tedu packscholar #x                                \t25906.67\n1999\t8010001\tunivmaxi #x                                       \t23297.75\n1999\t8013008\texportimaxi #x                                    \t20896.22\n1999\t2004001\tedu packimporto #x                                \t18025.93\n1999\t10003014\texportiunivamalg #x                              \t16211.31\n1999\t8014004\tedu packmaxi #x                                   \t15207.49\n1999\t3001001\tamalgexporti #x                                   \t13536.73\n1999\t10001008\tamalgunivamalg #x                                 \t12980.62\n1999\t3003001\texportiexporti #x                                 \t12753.61\n1999\t10008008\tnamelessunivamalg #x                              \t12446.57\n1999\t2003001\texportiimporto #x                                 \t11284.94\n1999\t3002001\timportoexporti #x                                 \t9820.44\n1999\t6005005\tscholarcorp #x                                    \t5487.63\n2000\t3002001\timportoexporti #x                                 \t49567.04\n2000\t1004001\tedu packamalg #x                                  \t35173.68\n2000\t6006004\tcorpcorp #x                                       \t31206.02\n2000\t1003002\texportiamalg #x                                   \t30500.97\n2000\t6005005\tscholarcorp #x                                    \t30270.09\n2000\t9001012\tamalgmaxi #x                                     
\t29107.66\n2000\t8012001\timportomaxi #x                                    \t26610.58\n2000\t5004001\tedu packscholar #x                                \t26200.25\n2000\t10003014\texportiunivamalg #x                              \t23018.57\n2000\t3001001\tamalgexporti #x                                   \t20499.18\n2000\t2004001\tedu packimporto #x                                \t19982.58\n2000\t10015010\timportoedu pack #x                                \t18697.08\n2000\t8007001\tbrandnameless #x                                  \t17315.93\n2000\t8014004\tedu packmaxi #x                                   \t16235.88\n2000\t3003001\texportiexporti #x                                 \t14272.86\n2000\t8013008\texportimaxi #x                                    \t10570.32\n2000\t9002008\timportomaxi #x                                    \t10262.91\n2001\t1003002\texportiamalg #x                                   \t36559.22\n2001\t9002008\timportomaxi #x                                    \t25301.41\n2001\t8013008\texportimaxi #x                                    \t21069.43\n2001\t6005003\timportoamalg #x                                   \t20705.88\n2001\t9001012\tamalgmaxi #x                                     \t18795.96\n2001\t8012001\texportiedu pack #x                                \t18477.27\n2001\t6006004\tcorpcorp #x                                       \t18283.57\n2001\t9003010\texportimaxi #x                                   \t17793.12\n2001\t3002001\tscholarmaxi #x                                    \t17206.42\n2001\t10003014\texportiunivamalg #x                              \t15781.84\n2001\t10001003\tedu packamalg #x                                  \t15683.42\n2001\t8014004\tedu packmaxi #x                                   \t13108.20\n2001\t10015010\timportoedu pack #x                                \t11683.34\n2001\t2002002\timportoimporto #x                                 \t8532.67\n2002\t1003002\texportiamalg #x                                   \t53079.32\n2002\t9002008\timportomaxi #x                                    \t39552.73\n2002\t3002001\tscholarmaxi #x                                    \t38802.50\n2002\t9003010\texportimaxi #x                                   \t36257.87\n2002\t5002001\timportoscholar #x                                 \t36116.22\n2002\t10015010\tamalgunivamalg #x                                \t28522.27\n2002\t8012001\texportiedu pack #x                                \t27777.47\n2002\t8013008\texportimaxi #x                                    \t23919.27\n2002\t2002002\timportoimporto #x                                 \t21849.78\n2002\t8014004\tedu packmaxi #x                                   \t19276.30\n2002\t10001003\tedu packamalg #x                                  \t13837.38\n2002\t10003014\texportiunivamalg #x                              \t13106.68\n2002\t9015001\tscholarunivamalg #x                               \t11700.36\n2002\t6005003\timportoamalg #x                                   \t6367.68\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q30.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_customer_id:string,c_salutation:string,c_first_name:string,c_last_name:string,c_preferred_cust_flag:string,c_birth_day:int,c_birth_month:int,c_birth_year:int,c_birth_country:string,c_login:string,c_email_address:string,c_last_review_date:int,ctr_total_return:decimal(17,2)>\n-- !query output\nAAAAAAAAABEBBAAA\tSir       \tCarlton             \tMiles                         \tN\t23\t9\t1957\tMALI\tNULL\tCarlton.Miles@4DbIUoznbr.org                      \t2452588\t12916.86\nAAAAAAAAABKGBAAA\tMiss      \tSharee              \tStevens                       \tY\t15\t8\t1992\tSLOVAKIA\tNULL\tSharee.Stevens@Z.edu                              \t2452492\t2480.80\nAAAAAAAAAEIPAAAA\tMr.       \tChris               \tRyan                          \tY\t23\t4\t1931\tTURKMENISTAN\tNULL\tChris.Ryan@QlH9G0fkAR5.org                        \t2452443\t3469.95\nAAAAAAAAAHNKAAAA\tSir       \tScott               \tGarcia                        \tN\t19\t10\t1952\tWALLIS AND FUTUNA\tNULL\tScott.Garcia@3pd6mnJYbKxb.org                     \t2452611\t2944.24\nAAAAAAAAAJDABAAA\tSir       \tGerald              \tMonroe                        \tY\t19\t5\t1985\tLIECHTENSTEIN\tNULL\tGerald.Monroe@opYT.org                            \t2452632\t4030.41\nAAAAAAAAAJJBAAAA\tSir       \tJose                \tGarcia                        \tY\t11\t6\t1977\tAMERICAN SAMOA\tNULL\tJose.Garcia@n1VrEIOg4f.com                        \t2452406\t4131.05\nAAAAAAAAALNKAAAA\tDr.       \tWanda               \tDevries                       \tN\t6\t2\t1953\tYEMEN\tNULL\tWanda.Devries@lzbOovP.com                         \t2452411\t3197.25\nAAAAAAAAANPAAAAA\tNULL\tNULL\tSanders                       \tNULL\t12\t10\tNULL\tNULL\tNULL\tNULL\t2452646\t2091.90\nAAAAAAAAAOKPAAAA\tMr.       \tBrandon             \tFoster                        \tN\t28\t4\t1992\tFRENCH POLYNESIA\tNULL\tBrandon.Foster@vKi8eFToOZHK.org                   \t2452527\t3706.08\nAAAAAAAABBEDAAAA\tMr.       \tJames               \tPantoja                       \tY\t5\t12\t1982\tNEPAL\tNULL\tJames.Pantoja@LTZLj3ddIvIG0.edu                   \t2452503\t5668.25\nAAAAAAAABBIHAAAA\tMs.       \tSherry              \tKennedy                       \tY\t3\t8\t1984\tMOZAMBIQUE\tNULL\tSherry.Kennedy@9hhBNI6.edu                        \t2452481\t1561.14\nAAAAAAAABBJMAAAA\tNULL\tPaul                \tLindsey                       \tN\tNULL\t10\tNULL\tNULL\tNULL\tPaul.Lindsey@bQo8Zt9XStF.edu                      \tNULL\t3139.50\nAAAAAAAABBNFBAAA\tMrs.      \tNellie              \tJohnson                       \tY\t14\t6\t1962\tBAHRAIN\tNULL\tNellie.Johnson@U.com                              \t2452411\t1609.90\nAAAAAAAABDKOAAAA\tDr.       \tVernita             \tBennett                       \tY\t12\t3\t1955\tNICARAGUA\tNULL\tVernita.Bennett@u2QKJHt.com                       \t2452352\t11315.85\nAAAAAAAABEJLAAAA\tMr.       \tAbel                \tLucero                        \tN\t15\t9\t1930\tCAPE VERDE\tNULL\tAbel.Lucero@txdlc3sekSyj7mLokv.org                \t2452461\t4524.56\nAAAAAAAABFFHBAAA\tMrs.      \tArlene              \tParrish                       \tN\t18\t9\t1957\tMALAWI\tNULL\tArlene.Parrish@m5lX3mTVhbxAEz.org                 \t2452433\t3184.72\nAAAAAAAABGCFBAAA\tSir       \tAlberto             \tWhitney                       \tY\t20\t5\t1946\tMALAYSIA\tNULL\tAlberto.Whitney@En6lzc8TlV7vF.org                 \t2452496\t3806.56\nAAAAAAAABHAGBAAA\tMrs.      
\tMinnie              \tThompson                      \tN\t1\t8\t1976\tNEW ZEALAND\tNULL\tMinnie.Thompson@7u.org                            \t2452558\t1871.31\nAAAAAAAABLMCAAAA\tDr.       \tFrank               \tPrice                         \tN\t13\t2\t1962\tSYRIAN ARAB REPUBLIC\tNULL\tFrank.Price@KO.org                                \t2452372\t1779.84\nAAAAAAAABNEGAAAA\tMs.       \tLaurel              \tOlson                         \tY\t24\t6\t1960\tBANGLADESH\tNULL\tLaurel.Olson@92YpV3hofMbNz.edu                    \t2452508\t4912.49\nAAAAAAAABNMIAAAA\tMrs.      \tJudi                \tLopes                         \tY\t21\t2\t1943\tNICARAGUA\tNULL\tJudi.Lopes@DYAyBUb1E.org                          \t2452297\t8335.86\nAAAAAAAABOGDAAAA\tMs.       \tLaura               \tMcfadden                      \tN\t29\t3\t1964\tGREECE\tNULL\tLaura.Mcfadden@Ks.com                             \t2452364\t3942.35\nAAAAAAAABPECAAAA\tMr.       \tWilliam             \tHoffman                       \tN\t21\t4\t1991\tPERU\tNULL\tWilliam.Hoffman@m8E1N4.com                        \t2452321\t4550.73\nAAAAAAAACGHDAAAA\tMr.       \tRicky               \tMarshall                      \tN\t11\t7\t1979\tNAMIBIA\tNULL\tRicky.Marshall@tJKkX.edu                          \t2452610\t10400.46\nAAAAAAAACIKBAAAA\tMs.       \tVirginia            \tLawson                        \tN\t26\t11\t1929\tPHILIPPINES\tNULL\tVirginia.Lawson@bdtmjhsnqQZx.edu                  \t2452490\t2164.20\nAAAAAAAACILFBAAA\tNULL\tFrederick           \tNULL\tN\t23\t11\t1954\tNULL\tNULL\tFrederick.Lambert@BNHNh23xF7qIUJ30.edu            \tNULL\t3849.92\nAAAAAAAACLNHBAAA\tDr.       \tGary                \tPerez                         \tN\t21\t4\t1992\tMACAO\tNULL\tGary.Perez@5dGey4.com                             \t2452300\t9647.56\nAAAAAAAACOEAAAAA\tDr.       \tDavid               \tSaunders                      \tY\t6\t12\t1991\tHAITI\tNULL\tDavid.Saunders@APTXJyYNgVa.edu                    \t2452482\t8357.76\nAAAAAAAADBIHAAAA\tMr.       \tJohn                \tSturm                         \tY\t5\t8\t1973\tGUAM\tNULL\tJohn.Sturm@PnUSI7.com                             \t2452457\t7161.74\nAAAAAAAADCCCBAAA\tMs.       \tAntonia             \tMiller                        \tN\t1\t5\t1992\tPALAU\tNULL\tAntonia.Miller@CsPe3fE38F.com                     \t2452635\t2688.66\nAAAAAAAADGIGBAAA\tMiss      \tMargarita           \tPark                          \tY\t20\t1\t1947\tLITHUANIA\tNULL\tMargarita.Park@qb0FrD2Pk.org                      \t2452462\t1557.37\nAAAAAAAADGMJAAAA\tMrs.      \tKaren               \tOlson                         \tY\t18\t11\t1974\tARGENTINA\tNULL\tKaren.Olson@Qe3bBGr.com                           \t2452596\t11849.40\nAAAAAAAADGOIAAAA\tNULL\tNULL\tNULL\tNULL\t2\tNULL\t1977\tNULL\tNULL\tKathleen.Bell@B0Ze.com                            \t2452322\t3979.64\nAAAAAAAADHFBAAAA\tMr.       \tWilliam             \tLopez                         \tN\t12\t1\t1967\tMALAYSIA\tNULL\tWilliam.Lopez@9JMTYo5O22ZFskMMybp.org             \t2452435\t2302.41\nAAAAAAAADJJMAAAA\tMr.       \tJames               \tCooper                        \tY\t26\t4\t1982\tTUNISIA\tNULL\tJames.Cooper@KH.com                               \t2452365\t3388.66\nAAAAAAAADJPEBAAA\tMs.       \tLizzie              \tNeal                          \tY\t16\t2\t1939\tMAURITIUS\tNULL\tLizzie.Neal@ru9qksDH5rgLsRn.edu                   \t2452433\t1915.72\nAAAAAAAADKJCAAAA\tDr.       
\tDaniel              \tGreen                         \tN\t29\t8\t1970\tAMERICAN SAMOA\tNULL\tDaniel.Green@GPktIG5Y.org                         \t2452421\t5675.02\nAAAAAAAADMFKAAAA\tMr.       \tRay                 \tGonzales                      \tN\t22\t2\t1930\tTURKMENISTAN\tNULL\tRay.Gonzales@kH8kp.com                            \t2452378\t2294.25\nAAAAAAAADMHDAAAA\tMr.       \tLee                 \tStanley                       \tY\t9\t3\t1959\tCZECH REPUBLIC\tNULL\tLee.Stanley@rONmrEDug8q.com                       \t2452451\t1912.92\nAAAAAAAADMONAAAA\tSir       \tJoshua              \tWest                          \tY\t15\t3\t1987\tALAND ISLANDS\tNULL\tJoshua.West@5q4JZHUuo9h0e1ol.org                  \t2452545\t2870.00\nAAAAAAAADOBAAAAA\tDr.       \tPaul                \tYeager                        \tN\t17\t11\t1946\tEGYPT\tNULL\tPaul.Yeager@PKzHg.edu                             \t2452571\t3050.46\nAAAAAAAAEADIAAAA\tDr.       \tGregory             \tCarey                         \tY\t6\t1\t1937\tSUDAN\tNULL\tGregory.Carey@495UmqFpU0My0GB8.edu                \t2452330\t5133.18\nAAAAAAAAECBEAAAA\tMs.       \tMegan               \tWilson                        \tN\t1\t12\t1935\tCOMOROS\tNULL\tMegan.Wilson@mi.com                               \t2452501\t1622.24\nAAAAAAAAECIEAAAA\tMiss      \tDanille             \tSanders                       \tY\t15\t3\t1942\tOMAN\tNULL\tDanille.Sanders@dnjPPYBZQ.com                     \t2452566\t5474.66\nAAAAAAAAEEBEBAAA\tDr.       \tJamie               \tJackson                       \tY\t21\t11\t1933\tPOLAND\tNULL\tJamie.Jackson@dLUBOuU.com                         \t2452365\t1761.68\nAAAAAAAAEEGMAAAA\tDr.       \tNULL\tNULL\tN\t5\tNULL\tNULL\tNULL\tNULL\tNULL\t2452438\t2046.20\nAAAAAAAAEEJAAAAA\tDr.       \tRhonda              \tAnderson                      \tN\t27\t12\t1942\tISLE OF MAN\tNULL\tRhonda.Anderson@P2VN7VQxGjj.com                   \t2452394\t2888.50\nAAAAAAAAEENFAAAA\tDr.       \tJames               \tBrown                         \tN\t27\t9\t1971\tNEW CALEDONIA\tNULL\tJames.Brown@dzkgBG43cs.org                        \t2452466\t2474.24\nAAAAAAAAEGDDBAAA\tMr.       \tTimothy             \tBarajas                       \tN\t8\t10\t1976\tMAURITANIA\tNULL\tTimothy.Barajas@FFhEdu3MO4n.com                   \t2452594\t2209.06\nAAAAAAAAEGEGAAAA\tMrs.      \tArthur              \tKirk                          \tY\t24\t8\t1972\tSLOVAKIA\tNULL\tArthur.Kirk@3eFiyMXiuRCqt1ofD.com                 \t2452286\t3189.74\nAAAAAAAAEGGKAAAA\tMiss      \tJohanna             \tMoses                         \tY\t3\t5\t1971\tR�UNION\tNULL\tJohanna.Moses@AhPc5dGr7FubqU1Lyj.com              \t2452443\t2842.06\nAAAAAAAAEGPPAAAA\tMr.       \tMichael             \tPringle                       \tY\t1\t10\t1990\tJERSEY\tNULL\tMichael.Pringle@noEgytx7nOED.edu                  \t2452398\t1870.56\nAAAAAAAAEHFCBAAA\tMrs.      \tLillian             \tYazzie                        \tN\t14\t1\t1936\tGEORGIA\tNULL\tLillian.Yazzie@Hzg3QQh.com                        \t2452555\t1758.12\nAAAAAAAAEHLDAAAA\tMs.       \tNULL\tBrown                         \tNULL\tNULL\t3\tNULL\tNULL\tNULL\tNULL\tNULL\t4483.20\nAAAAAAAAEHLNAAAA\tMr.       \tGregory             \tWester                        \tY\t9\t3\t1946\tSPAIN\tNULL\tGregory.Wester@ko.com                             \t2452535\t2303.28\nAAAAAAAAEKCJAAAA\tMrs.      
\tKeri                \tLawrence                      \tY\t31\t10\t1955\tGUERNSEY\tNULL\tKeri.Lawrence@yIBxLUPgozYINi.com                  \t2452523\t1523.98\nAAAAAAAAEMELAAAA\tDr.       \tSteven              \tParker                        \tN\t11\t9\t1936\tMADAGASCAR\tNULL\tSteven.Parker@E6kIVz3.org                         \t2452372\t2106.30\nAAAAAAAAEMHBAAAA\tMr.       \tBrandon             \tRay                           \tY\t13\t8\t1976\tANTARCTICA\tNULL\tBrandon.Ray@OP6YgS6SQnuykF.org                    \t2452290\t2258.53\nAAAAAAAAFCPEBAAA\tDr.       \tKristi              \tBrennan                       \tY\tNULL\t11\tNULL\tNULL\tNULL\tNULL\t2452349\t6815.00\nAAAAAAAAFDBAAAAA\tMrs.      \tVirginia            \tSims                          \tY\t16\t4\t1969\tBULGARIA\tNULL\tVirginia.Sims@3qndx2y.edu                         \t2452302\t6723.52\nAAAAAAAAFGICAAAA\tDr.       \tWill                \tIsbell                        \tN\t1\t10\t1975\tERITREA\tNULL\tWill.Isbell@100I71HVxMaaTVZ5MH2m.org              \t2452482\t1765.89\nAAAAAAAAFKGEBAAA\tDr.       \tDorothy             \tHendricks                     \tN\t14\t9\t1947\tNAMIBIA\tNULL\tDorothy.Hendricks@7X7OXy1xAs7hVN.org              \t2452562\t2333.76\nAAAAAAAAFKNEAAAA\tMs.       \tRuth                \tCatron                        \tY\t2\t3\t1983\tSAINT LUCIA\tNULL\tRuth.Catron@g4krLkSRUFX60t4P.edu                  \t2452401\t1873.40\nAAAAAAAAFKOHBAAA\tMs.       \tAnnie               \tStevens                       \tN\t12\t2\t1933\tKENYA\tNULL\tAnnie.Stevens@e3vFdRXUEQ.org                      \t2452504\t4229.82\nAAAAAAAAFMDEAAAA\tSir       \tDennis              \tMayfield                      \tN\t5\t1\t1933\tSOMALIA\tNULL\tDennis.Mayfield@9DPHTIUxRlxd.edu                  \t2452535\t2849.00\nAAAAAAAAFMIPAAAA\tDr.       \tMichael             \tMoran                         \tN\t1\t2\t1940\tSAINT HELENA\tNULL\tMichael.Moran@3.edu                               \t2452411\t3869.60\nAAAAAAAAFPKCBAAA\tDr.       \tWilliam             \tTerry                         \tY\t10\t3\t1991\tGUATEMALA\tNULL\tWilliam.Terry@jKyKRKfrxH.org                      \t2452325\t3550.05\nAAAAAAAAFPPGAAAA\tMs.       \tMargie              \tLee                           \tY\t22\t10\t1933\tGUINEA\tNULL\tMargie.Lee@Tg2pE7.com                             \t2452419\t3288.60\nAAAAAAAAGBOAAAAA\tMr.       \tMicheal             \tNULL\tNULL\tNULL\tNULL\t1991\tNULL\tNULL\tMicheal.Holland@4rhkIkNJy6fSU.edu                 \t2452530\t2676.00\nAAAAAAAAGIEDAAAA\tMiss      \tAdriana             \tMaxfield                      \tY\t6\t1\t1948\tMONGOLIA\tNULL\tAdriana.Maxfield@Na4ize7RHB.com                   \t2452415\t4872.96\nAAAAAAAAGJJEAAAA\tMiss      \tNULL\tBarnard                       \tN\tNULL\tNULL\t1987\tNULL\tNULL\tNULL\tNULL\t1628.77\nAAAAAAAAGMJPAAAA\tMiss      \tDeena               \tFerguson                      \tY\t8\t9\t1986\tPHILIPPINES\tNULL\tDeena.Ferguson@19lazfjoSTXBorQ.edu                \t2452301\t5713.74\nAAAAAAAAHANIAAAA\tMr.       \tRobert              \tRogers                        \tY\t5\t7\t1990\tSOUTH AFRICA\tNULL\tRobert.Rogers@m77qeFKJfIO3rugyC.org               \t2452456\t1638.00\nAAAAAAAAHCHKAAAA\tDr.       \tHarry               \tBearden                       \tY\t10\t2\t1957\tCHINA\tNULL\tHarry.Bearden@kNxzQ3SYEmp.com                     \t2452533\t6455.11\nAAAAAAAAHHAKAAAA\tDr.       
\tJeremy              \tRichter                       \tY\t8\t8\t1951\tSOUTH AFRICA\tNULL\tJeremy.Richter@hCAQGRgTCrU.edu                    \t2452496\t3764.21\nAAAAAAAAHIMLAAAA\tDr.       \tKathleen            \tFloyd                         \tN\t16\t12\t1983\tTURKMENISTAN\tNULL\tKathleen.Floyd@ACpkOV2nlHDL.edu                   \t2452644\t5796.90\nAAAAAAAAHLMCAAAA\tMiss      \tAdriana             \tCase                          \tN\t16\t3\t1959\tGREECE\tNULL\tAdriana.Case@aRv.com                              \t2452408\t1740.52\nAAAAAAAAHMFBBAAA\tDr.       \tKatheryn            \tWhite                         \tY\t21\t7\t1983\tCAPE VERDE\tNULL\tKatheryn.White@dqdzeup0a7TYhnOB.edu               \t2452417\t20518.95\nAAAAAAAAHMNFAAAA\tDr.       \tStephen             \tBelanger                      \tY\t10\t11\t1951\tGUADELOUPE\tNULL\tStephen.Belanger@emUp.edu                         \t2452340\t5734.80\nAAAAAAAAHNAOAAAA\tMrs.      \tAnita               \tGardner                       \tY\t27\t6\t1979\tGEORGIA\tNULL\tAnita.Gardner@g.org                               \t2452557\t2921.42\nAAAAAAAAHOIBAAAA\tMiss      \tVera                \tForte                         \tN\t20\t9\t1941\tZAMBIA\tNULL\tVera.Forte@XxMnMUGRS.edu                          \t2452549\t2533.00\nAAAAAAAAHOOGAAAA\tSir       \tJerry               \tRader                         \tY\t31\t1\t1948\tPAPUA NEW GUINEA\tNULL\tJerry.Rader@FAhBv4pGRIGB.com                      \t2452327\t3590.16\nAAAAAAAAHPGNAAAA\tDr.       \tRenae               \tRichardson                    \tY\t4\t8\t1927\tSYRIAN ARAB REPUBLIC\tNULL\tRenae.Richardson@eGnpquq6hFfm20r.com              \t2452426\t6468.92\nAAAAAAAAIAGBBAAA\tMs.       \tAlexis              \tEvans                         \tN\t3\t12\t1966\tLITHUANIA\tNULL\tAlexis.Evans@2cqJZnt7z8.edu                       \t2452596\t2043.99\nAAAAAAAAICGBAAAA\tMiss      \tGertrude            \tRodriguez                     \tY\t10\t4\t1932\tSAN MARINO\tNULL\tGertrude.Rodriguez@lj4qAeL9afqB9jS.com            \t2452447\t2865.39\nAAAAAAAAIDFJAAAA\tDr.       \tDixie               \tBrown                         \tN\t1\t5\t1983\tFRENCH POLYNESIA\tNULL\tDixie.Brown@PrD49KMoXd8SEMuCS.com                 \t2452595\t2135.70\nAAAAAAAAIJCHBAAA\tDr.       \tKaren               \tCosby                         \tY\t17\t9\t1963\tIRAQ\tNULL\tKaren.Cosby@H17xf1rPlPtkdKTr.com                  \t2452368\t3731.50\nAAAAAAAAIJDEBAAA\tMrs.      \tElisa               \tBaldwin                       \tY\t14\t5\t1983\tSWEDEN\tNULL\tElisa.Baldwin@S.edu                               \t2452579\t3168.75\nAAAAAAAAILKLAAAA\tMs.       \tJanice              \tCannon                        \tN\t11\t12\t1983\tARGENTINA\tNULL\tJanice.Cannon@lmR.org                             \t2452353\t2810.73\nAAAAAAAAINLKAAAA\tMs.       \tMandy               \tAnderson                      \tN\t30\t6\t1958\tMOZAMBIQUE\tNULL\tMandy.Anderson@ONxvienlYHJpCe.org                 \t2452539\t9893.12\nAAAAAAAAIOGPAAAA\tMs.       \tKathryn             \tCooper                        \tN\t27\t1\t1982\tARGENTINA\tNULL\tKathryn.Cooper@VBO8.com                           \t2452457\t1870.56\nAAAAAAAAJDKCAAAA\tDr.       \tCurtis              \tMcguire                       \tY\t10\t5\t1974\tETHIOPIA\tNULL\tCurtis.Mcguire@Vex25USDKvv.org                    \t2452411\t4900.15\nAAAAAAAAJFEFBAAA\tMs.       
\tNancy               \tHampton                       \tN\t24\t3\t1946\tWESTERN SAHARA\tNULL\tNancy.Hampton@K8CudFMgtgyGyzS.com                 \t2452309\t6277.08\nAAAAAAAAJGNBAAAA\tSir       \tRobert              \tJohnston                      \tN\t14\t5\t1956\tFAROE ISLANDS\tNULL\tRobert.Johnston@ccQCj7j.edu                       \t2452631\t2383.74\nAAAAAAAAJHHEBAAA\tMiss      \tElse                \tCarter                        \tY\t19\t8\t1963\tNORFOLK ISLAND\tNULL\tElse.Carter@e.com                                 \t2452580\t4210.60\nAAAAAAAAJHIBAAAA\tMr.       \tEdwin               \tChristensen                   \tN\t23\t6\t1947\tURUGUAY\tNULL\tEdwin.Christensen@3fBqkiUX.com                    \t2452485\t5984.00\nAAAAAAAAJJFBBAAA\tSir       \tMichael             \tToney                         \tY\t2\t6\t1981\tCOSTA RICA\tNULL\tMichael.Toney@Oe1SH.edu                           \t2452306\t3453.63\nAAAAAAAAJKDCBAAA\tDr.       \tRobert              \tJames                         \tY\t8\t11\t1929\tLEBANON\tNULL\tRobert.James@DSVGYuMQLZNo9oga1.edu                \t2452306\t1922.62\nAAAAAAAAJLAGBAAA\tMrs.      \tIsabel              \tBarber                        \tN\t24\t4\t1964\tROMANIA\tNULL\tIsabel.Barber@0RSzJgpalSYmAoYJgnL.org             \t2452435\t2562.91\nAAAAAAAAJLICBAAA\tMs.       \tFumiko              \tEbert                         \tN\t5\t12\t1956\tNAURU\tNULL\tFumiko.Ebert@8kz.com                              \t2452309\t2408.38\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q31.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<ca_county:string,d_year:int,web_q1_q2_increase:decimal(37,20),store_q1_q2_increase:decimal(37,20),web_q2_q3_increase:decimal(37,20),store_q2_q3_increase:decimal(37,20)>\n-- !query output\nBedford city\t2000\t1.41895245428186244343\t1.28647520475423955859\t1.98397166014536480387\t1.22052485500235683695\nBlaine County\t2000\t0.90103169326915114550\t0.83066560812940517400\t5.91832927336153981754\t2.20534708400177561487\nBoone County\t2000\t1.10318140625440703802\t1.01388129938545215917\t2.93583104434541244580\t0.81595222675278264090\nBristol County\t2000\t2.03252306215265583168\t0.84389707668347883239\t1.88436946252627433938\t0.50855932416819944462\nBuffalo County\t2000\t0.79851677786134640592\t0.66608776395688494889\t4.12963020014993273566\t1.41417145222467738551\nCarlisle County\t2000\t1.69492598618078795489\t1.25168931735787673049\t1.50718182376907682203\t0.88181897085363449075\nCarroll County\t2000\t1.35087769364534219248\t1.29257051856006991407\t2.20994529474585418644\t1.48849785082428358420\nCedar County\t2000\t1.73903789422605330870\t0.46305877715234029287\t8.77007422523370529780\t3.61252572439703538764\nClinton County\t2000\t1.74067151347304310575\t1.41218893174550937759\t1.44667681199587337982\t1.25445563958487852628\nColbert County\t2000\t1.18024333271178132777\t0.83788572603247302022\t0.54236640028023008306\t0.42663333520116081271\nCrawford County\t2000\t1.31661755312891088929\t0.94956543151027392277\t2.05054841119621802578\t1.32014748612874091613\nCumberland County\t2000\t1.08524453978148728453\t0.80144708440874338843\t1.60419019384387788968\t1.54733087343509046916\nDillon County\t2000\t0.32644424794620832608\t0.12281769967769250465\t2.54843886762550432202\t2.36794563665865531250\nDrew County\t2000\t1.33752592432768148542\t0.42051684931974493150\t4.42925583278116865936\t3.59005466753091100322\nEl Paso County\t2000\t1.33106683366793877022\t0.52026046839940548659\t6.85897871779069724650\t3.52274370756107989319\nEscambia County\t2000\t1.29140013472373800933\t1.22312591570259586531\t4.94949857002487942979\t1.90881493105514558681\nEssex County\t2000\t0.91140422255526176760\t0.80397981840092167933\t2.40581838121906819055\t1.51850239999231343611\nForsyth County\t2000\t0.68611318852553022140\t0.65587408738915750878\t3.37787340847752340465\t1.32466135376522054649\nFranklin County\t2000\t1.23346105169430839663\t1.16004116355148303419\t3.11011280618208475383\t1.72852883588875015022\nFremont County\t2000\t2.08342397478237427534\t0.99231216361555377660\t4.64450632140050612837\t1.66543397250235949142\nGaines County\t2000\t0.18699408635518783605\t0.15864149496085374898\t4.38012199409197995027\t3.26392730974685542866\nGallatin County\t2000\t0.57267047178378396314\t0.42960913482855774228\t5.40426025792003986274\t2.60129747557845045914\nHale County\t2000\t1.13181285759861396698\t0.37084472158598998345\t2.76039662764995099764\t2.04880905907673376536\nHoward County\t2000\t0.75689303418215066506\t0.40416603686332285826\t5.20017335230756289550\t3.93113017208635232323\nJennings County\t2000\t2.78984076175313855515\t2.07599697431575122264\t3.44972481162691656332\t2.44378922187060355318\nJohnson County\t2000\t2.57795641894299816468\t0.91616273582603658596\t1.74313979276860348710\t1.18351065602569093409\nLanglade County\t2000\t1.68676813359812642201\t0.96428449439984805852\t2.54522036089486624332\t1.23454212891942526190\nLee 
County\t2000\t1.30443690529253126871\t0.72035286226320500421\t2.23150088973480218462\t1.84186089836441132231\nLimestone County\t2000\t3.32976919500999168869\t0.75905050855357115484\t1.82617549463146988019\t1.71235192999491983271\nLonoke County\t2000\t2.08982990182342723842\t1.10340085014357551300\t1.15539868818698913544\t0.99289022395794532472\nMilam County\t2000\t2.28508195011379140646\t0.46652927795640169049\t0.78408081688146536737\t0.47493776386092038861\nMineral County\t2000\t3.34300539922925594005\t2.45993211849872338822\t2.20409211845792015572\t1.08747970913885456325\nMitchell County\t2000\t1.67453892386598503677\t1.34365298077586851819\t1.23346415770637208954\t0.95073923197301166934\nMontgomery County\t2000\t1.56847414391333207258\t0.83177598227578637405\t2.13678203835469561935\t2.10257648921068871721\nMoore County\t2000\t2.15694071746428807771\t0.40796143296369785394\t1.74966386964488096764\t1.16471835770020727271\nOttawa County\t2000\t1.75323872261604668329\t0.83468698542567460304\t1.61973935146544593444\t1.42351390679742385404\nPamlico County\t2000\t1.94782724762452970276\t0.35933779785623271693\t1.56217364489823888166\t1.36537077933199287260\nPawnee County\t2000\t1.78950195911489308028\t1.16132303881758630677\t3.42292675284956044317\t0.55048211219399261479\nPierce County\t2000\t2.54716933553443590360\t1.41791053358023116142\t1.22686753138250342480\t1.16331445295301712751\nPike County\t2000\t0.87756608639678817034\t0.86487524757082952225\t1.66450846493571420317\t1.60752153195890666018\nPlymouth County\t2000\t0.42751024431730323179\t0.24869467440858413355\t1.77347026156010104138\t0.95743334493327210261\nPocahontas County\t2000\t1.81898673318791661987\t1.68668175433656035469\t2.78840493521175989923\t2.31448886854618583141\nPope County\t2000\t1.42325802767704214927\t0.71289882499386886358\t5.32400461162588232396\t4.36612471915760924134\nQuitman County\t2000\t0.36883113368425186793\t0.36314047030366149345\t5.16370853507835506706\t3.21226900707602706150\nRichland County\t2000\t3.25294641523656020681\t1.07950401261765093476\t2.63822689761623055166\t1.41044850738739382893\nRockland County\t2000\t0.42416114343500113082\t0.20590865976918589352\t3.77761891177163143690\t3.03503154543555890597\nSalem city\t2000\t0.70139973333391227808\t0.35839062570476603239\t2.05906098316466157534\t1.08354507133942051245\nSeminole County\t2000\t1.44053424523671633073\t0.43123781775978624959\t5.98509759915132505814\t3.90425489309553190978\nSherman County\t2000\t3.80549228460960365296\t0.85018052505371507262\t3.28429802752271011291\t2.95060760395590827092\nSioux County\t2000\t4.05794434246552571650\t1.42665905155276116003\t1.56803884104955881507\t1.13419970938634791615\nStafford County\t2000\t2.47320669600627229039\t0.89946337873032839807\t1.93845899918871104768\t0.90136132522419968048\nSuffolk County\t2000\t2.42331649558212058212\t1.82204858730891230166\t6.53566355839665770211\t1.95538998836281647012\nSully County\t2000\t0.93205458749496189963\t0.25090187329447210579\t2.29778378271979174484\t1.08802927271862880423\nThomas County\t2000\t2.83931404059757943504\t1.88036403728663893895\t1.98634825955250608974\t0.52141190626809911920\nTodd County\t2000\t2.82280525212475972969\t0.89465756573513943151\t3.16893938472753630323\t1.86121311814899148488\nValdez-Cordova Census Area\t2000\t1.25070886925917779038\t0.59811263426251699657\t1.22504083956972939990\t0.84771422379395948262\nWaushara County\t2000\t7.21438509071107960599\t3.30949368270859706271\t0.60525565982701327843\t0.60022615175317538544\nWayne 
County\t2000\t0.94936213717458544629\t0.78911846142834762231\t2.95594340748058137863\t1.96105182147450904414\nWilliamson County\t2000\t6.40051753949925199369\t0.73796090630650528678\t2.74186195207121934591\t2.27659084754941781749\nWoodford County\t2000\t1.66114001659859092820\t0.55664623460469568767\t1.82209706086687377022\t1.17870749798601085737\nWyoming County\t2000\t0.71452417457513699221\t0.46814038549181972360\t9.85342118017399816692\t3.19644465036087064672\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q32.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<excess discount amount:decimal(17,2)>\n-- !query output\n9089.28\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q33.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_manufact_id:int,total_sales:decimal(27,2)>\n-- !query output\n796\t319.61\n820\t445.38\n945\t550.19\n753\t655.92\n455\t711.84\n564\t739.49\n951\t850.52\n776\t1039.28\n676\t1386.92\n911\t1536.42\n832\t1643.04\n925\t1744.74\n701\t1859.69\n707\t1912.37\n780\t2339.10\n891\t2412.45\n890\t3020.90\n870\t3142.82\n656\t3227.47\n681\t3427.68\n985\t3441.42\n881\t3677.27\n717\t3733.60\n854\t3734.59\n449\t4015.00\n965\t4331.20\n960\t4344.29\n621\t4411.86\n756\t4863.42\n918\t5140.55\n795\t5145.78\n833\t5186.15\n976\t5317.66\n696\t5726.05\n853\t5799.18\n770\t5958.32\n722\t6049.15\n362\t6348.46\n399\t6401.19\n973\t6606.87\n715\t6673.32\n708\t6760.71\n537\t7094.17\n724\t7333.62\n375\t7385.50\n444\t7791.92\n716\t7793.63\n887\t7844.78\n464\t7874.66\n709\t7898.12\n978\t7952.34\n893\t8530.99\n677\t8770.76\n595\t8787.96\n869\t9002.04\n338\t9032.32\n738\t9136.32\n429\t9177.98\n650\t9182.94\n901\t9212.11\n921\t9223.79\n946\t9353.49\n858\t9443.98\n905\t9494.47\n568\t9614.09\n389\t9939.44\n836\t9951.19\n821\t9963.08\n728\t10017.99\n679\t10066.80\n721\t10348.99\n910\t10453.73\n698\t10543.49\n777\t10545.94\n675\t10556.87\n783\t10559.82\n624\t10836.32\n763\t10885.70\n730\t10897.97\n463\t10912.55\n534\t10925.55\n913\t10942.78\n970\t11004.86\n846\t11218.39\n742\t11301.99\n794\t11444.72\n442\t11482.50\n393\t11644.19\n706\t11654.35\n577\t11722.62\n929\t11744.05\n733\t11781.60\n896\t11875.86\n807\t11893.86\n386\t12070.02\n865\t12380.13\n561\t12458.46\n552\t12588.43\n683\t12592.26\n997\t12921.36\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q34.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_last_name:string,c_first_name:string,c_salutation:string,c_preferred_cust_flag:string,ss_ticket_number:int,cnt:bigint>\n-- !query output\nNULL\tNULL\tNULL\tY\t47915\t15\nNULL\tNULL\tNULL\tNULL\t215293\t15\nNULL\tNULL\tNULL\tNULL\t126143\t15\nNULL\tNULL\tMrs.      \tN\t120593\t15\nNULL\tRubin               \tSir       \tNULL\t30056\t15\nAdler                         \tJustin              \tSir       \tY\t226187\t15\nAllen                         \tRose                \tMrs.      \tN\t179476\t16\nAnderson                      \tMarvin              \tDr.       \tN\t211012\t16\nAndrews                       \tJacob               \tMr.       \tN\t67111\t16\nAndrews                       \tSamuel              \tDr.       \tY\t139993\t16\nAngel                         \tKevin               \tMr.       \tY\t106628\t15\nAshley                        \tLinda               \tMrs.      \tY\t82173\t15\nBaca                          \tDorothy             \tMrs.      \tN\t64890\t15\nBaker                         \tJamie               \tDr.       \tY\t9916\t15\nBanks                         \tLeroy               \tSir       \tN\t206730\t15\nBarber                        \tDianna              \tMrs.      \tY\t119959\t16\nBarksdale                     \tJoann               \tMiss      \tY\t138994\t15\nBarnes                        \tRuth                \tDr.       \tN\t84038\t15\nBarney                        \tSamuel              \tSir       \tN\t15288\t15\nBarnhart                      \tCharley             \tMr.       \tY\t166576\t15\nBarone                        \tSeth                \tMr.       \tY\t162374\t15\nBarrett                       \tDavid               \tSir       \tN\t189879\t15\nBartels                       \tElmer               \tSir       \tY\t114760\t16\nBear                          \tScott               \tSir       \tY\t82291\t15\nBeers                         \tKendra              \tDr.       \tNULL\t137960\t15\nBelcher                       \tJames               \tSir       \tY\t239470\t16\nBell                          \tCarrie              \tMiss      \tN\t5527\t15\nBell                          \tMatthew             \tDr.       \tN\t20400\t15\nBenjamin                      \tConsuelo            \tMs.       \tY\t201086\t15\nBergman                       \tJoann               \tMiss      \tN\t177052\t15\nBrooks                        \tRobert              \tSir       \tN\t155576\t16\nByrd                          \tKelly               \tSir       \tN\t165115\t16\nCagle                         \tJennifer            \tMiss      \tN\t163129\t15\nCampbell                      \tRobert              \tMr.       \tN\t8964\t15\nCardona                       \tRobert              \tMr.       \tN\t200501\t15\nCarter                        \tWendy               \tMs.       \tN\t96439\t15\nCarver                        \tBernard             \tMr.       \tY\t194943\t16\nChen                          \tWanita              \tMiss      \tN\t137713\t16\nChristensen                   \tLarry               \tDr.       \tY\t58094\t15\nCochrane                      \tAnne                \tMrs.      \tN\t208347\t16\nColeman                       \tInez                \tDr.       \tY\t88249\t16\nColeman                       \tJohn                \tMr.       \tN\t49444\t15\nColon                         \tAnna                \tDr.       
\tY\t143694\t15\nConley                        \tRoxie               \tDr.       \tN\t196663\t15\nCook                          \tAdam                \tMs.       \tY\t167339\t15\nCote                          \tJustin              \tDr.       \tN\t93466\t15\nCouncil                       \tDonald              \tSir       \tY\t102958\t15\nCramer                        \tLinda               \tMs.       \tN\t126628\t15\nCrittenden                    \tAmie                \tMs.       \tN\t138787\t15\nCruz                          \tJames               \tMr.       \tY\t201430\t15\nCuellar                       \tOscar               \tMr.       \tY\t86781\t16\nCullen                        \tLarry               \tMr.       \tY\t221242\t16\nCushing                       \tAntonia             \tMrs.      \tY\t118927\t15\nDavis                         \tGordon              \tDr.       \tN\t227822\t15\nDavis                         \tMyrtle              \tDr.       \tY\t37430\t15\nDecker                        \tVera                \tMiss      \tY\t75737\t16\nDiamond                       \tFernando            \tDr.       \tN\t216391\t15\nDiaz                          \tWalton              \tMr.       \tN\t131135\t16\nDickinson                     \tSteven              \tMr.       \tN\t8057\t16\nDouglas                       \tLester              \tSir       \tN\t26043\t15\nDove                          \tGarry               \tDr.       \tN\t152171\t16\nDrake                         \tRosetta             \tDr.       \tY\t238040\t15\nDumas                         \tTravis              \tMr.       \tY\t94154\t15\nDuncan                        \tOlivia              \tDr.       \tY\t102032\t15\nDurham                        \tAndrea              \tDr.       \tY\t144734\t15\nDutton                        \tGay                 \tMiss      \tY\t110886\t15\nEllis                         \tKaren               \tMiss      \tN\t229706\t16\nEly                           \tCesar               \tDr.       \tY\t36054\t16\nEtheridge                     \tMike                \tDr.       \tN\t19648\t15\nFarmer                        \tEugenia             \tMiss      \tY\t98187\t16\nFarrow                        \tKathy               \tMiss      \tY\t200078\t15\nFaulkner                      \tLakeisha            \tDr.       \tY\t178393\t16\nFaulkner                      \tRobert              \tDr.       \tN\t109423\t15\nFelton                        \tDavid               \tMr.       \tN\t97807\t16\nFerreira                      \tChristine           \tMrs.      \tY\t155822\t15\nFinn                          \tRobert              \tMr.       \tN\t38057\t15\nFinney                        \tCrystal             \tMiss      \tY\t158304\t15\nFischer                       \tTamara              \tMrs.      \tN\t66790\t15\nFoote                         \tRoy                 \tSir       \tN\t68086\t15\nForeman                       \tAutumn              \tMrs.      \tY\t164060\t15\nFunk                          \tMarvin              \tSir       \tY\t61516\t15\nGarcia                        \tChristopher         \tSir       \tY\t181616\t16\nGarcia                        \tKaren               \tMiss      \tN\t236987\t15\nGarcia                        \tRobert              \tDr.       \tN\t172185\t16\nGarland                       \tMichael             \tMr.       \tN\t234421\t15\nGaylord                       \tKeith               \tMr.       \tY\t123333\t16\nGifford                       \tMark                \tMr.       
\tN\t225973\t16\nGilbert                       \tNULL\tSir       \tN\t16844\t15\nGilmore                       \tAustin              \tDr.       \tY\t239871\t15\nGoldsmith                     \tBernice             \tMs.       \tY\t2347\t15\nGood                          \tNancy               \tDr.       \tN\t132655\t15\nGoodman                       \tNULL\tNULL\tN\t71903\t15\nGower                         \tNettie              \tMiss      \tN\t10576\t15\nGray                          \tEvelyn              \tMiss      \tN\t157486\t15\nHammond                       \tRoger               \tSir       \tY\t54884\t16\nHardin                        \tKimberly            \tDr.       \tN\t192424\t16\nHarp                          \tVance               \tMr.       \tN\t199017\t15\nHarper                        \tMadeline            \tDr.       \tN\t173835\t16\nHarris                        \tTammy               \tDr.       \tN\t217761\t16\nHartmann                      \tJoey                \tDr.       \tN\t230915\t15\nHayes                         \tDavid               \tSir       \tN\t82274\t15\nHaynes                        \tSara                \tMiss      \tY\t139168\t16\nHeath                         \tMatthew             \tDr.       \tN\t30710\t15\nHennessey                     \tDebbie              \tDr.       \tY\t79256\t15\nHerman                        \tStella              \tMs.       \tY\t33801\t16\nHernandez                     \tMax                 \tMr.       \tN\t16015\t15\nHernandez                     \tRuth                \tMs.       \tY\t97000\t15\nHess                          \tJoseph              \tSir       \tN\t151336\t15\nHodges                        \tLucas               \tDr.       \tY\t163325\t15\nHolland                       \tJeremiah            \tDr.       \tN\t95938\t16\nJackson                       \tWilliam             \tMr.       \tY\t16425\t16\nJameson                       \tMiguel              \tDr.       \tN\t9181\t16\nJarrell                       \tThomas              \tMr.       \tY\t85787\t16\nJohnson                       \tJulia               \tDr.       \tN\t27560\t15\nJones                         \tTheresa             \tMs.       \tN\t219765\t16\nKelly                         \tMark                \tMr.       \tY\t17039\t16\nKhan                          \tHank                \tMr.       \tN\t177803\t15\nKim                           \tCharlotte           \tDr.       \tY\t7208\t16\nKunz                          \tSarah               \tDr.       \tN\t74568\t15\nLake                          \tRobert              \tSir       \tN\t13264\t15\nLandry                        \tRudolph             \tSir       \tN\t117581\t15\nLane                          \tLuis                \tSir       \tN\t232302\t16\nLangford                      \tDarlene             \tMrs.      \tN\t214891\t15\nLarson                        \tKevin               \tMr.       \tY\t35053\t15\nLarson                        \tThomas              \tMr.       \tN\t114265\t15\nLee                           \tMalik               \tDr.       \tN\t20122\t16\nLeonard                       \tOrlando             \tDr.       \tY\t133168\t15\nLincoln                       \tAnthony             \tMiss      \tY\t1407\t16\nLindsey                       \tLinda               \tDr.       \tN\t62687\t16\nLopez                         \tKaren               \tDr.       \tY\t136008\t15\nLunsford                      \tKevin               \tDr.       
\tN\t159120\t16\nLynch                         \tSylvia              \tMs.       \tY\t115438\t15\nLyon                          \tMichael             \tMr.       \tN\t140323\t15\nMaestas                       \tMabel               \tMrs.      \tN\t184265\t15\nMagana                        \tDiann               \tMrs.      \tY\t19139\t15\nManning                       \tAnnamarie           \tMs.       \tN\t4984\t16\nMarshall                      \tFelipe              \tSir       \tN\t138890\t15\nMartin                        \tPaul                \tDr.       \tN\t26115\t16\nMartinez                      \tEarl                \tSir       \tN\t108982\t15\nMartinez                      \tRobert              \tSir       \tY\t157672\t16\nMasterson                     \tBarbara             \tMrs.      \tN\t231070\t15\nMata                          \tDeborah             \tMiss      \tY\t4323\t15\nMccoy                         \tDebbie              \tDr.       \tN\t91552\t15\nMcgill                        \tTony                \tSir       \tN\t110030\t15\nMckeon                        \tChristina           \tDr.       \tN\t26190\t15\nMcnamara                      \tLinda               \tDr.       \tY\t7957\t15\nMeans                         \tMichael             \tMr.       \tY\t226164\t16\nMedina                        \tJoseph              \tSir       \tY\t110246\t15\nMeyers                        \tZachary             \tMr.       \tY\t59549\t15\nMontgomery                    \tJohn                \tMr.       \tY\t103718\t15\nMoody                         \tMiranda             \tMs.       \tY\t171671\t15\nMoore                         \tMark                \tDr.       \tN\t191471\t15\nMoran                         \tCelia               \tMs.       \tY\t200691\t15\nMorgan                        \tCecelia             \tMrs.      \tN\t200742\t15\nMorrell                       \tChad                \tMr.       \tN\t93790\t15\nMorse                         \tRobert              \tMr.       \tN\t68627\t16\nNeel                          \tAudrey              \tMs.       \tY\t193308\t15\nNeff                          \tSheri               \tMrs.      \tY\t52556\t15\nNelson                        \tKatherine           \tMrs.      \tN\t110232\t15\nNew                           \tSuzanne             \tMiss      \tN\t5120\t16\nNielsen                       \tVeronica            \tMrs.      \tN\t23905\t15\nOakley                        \tGeorge              \tMr.       \tY\t177890\t15\nParker                        \tBarbar              \tDr.       \tN\t57241\t15\nParker                        \tJeff                \tSir       \tN\t213566\t16\nPemberton                     \tJennifer            \tMrs.      \tY\t49875\t16\nPerry                         \tRobert              \tMr.       \tY\t153147\t16\nPhillips                      \tDavid               \tDr.       \tN\t148883\t15\nPhillips                      \tGeorgia             \tNULL\tY\t26878\t15\nPhillips                      \tStanley             \tSir       \tN\t31989\t15\nPinkston                      \tBrenda              \tDr.       \tN\t126440\t15\nPryor                         \tDorothy             \tMrs.      \tN\t213779\t16\nReed                          \tWilliam             \tDr.       \tN\t145002\t15\nReynolds                      \tAmelia              \tMs.       \tY\t68440\t15\nRice                          \tDavid               \tDr.       \tY\t70484\t16\nRobertson                     \tDaniel              \tMr.       
\tN\t40407\t16\nRosales                       \tNULL\tNULL\tY\t156406\t16\nRusso                         \tCheryl              \tMiss      \tN\t81123\t15\nSanchez                       \tBruce               \tSir       \tY\t124479\t15\nSchmitz                       \tKaitlyn             \tMiss      \tN\t105162\t15\nSebastian                     \tHomer               \tDr.       \tY\t64994\t15\nSexton                        \tJerry               \tSir       \tN\t91446\t15\nSierra                        \tDavid               \tSir       \tY\t61810\t15\nSimmons                       \tJoseph              \tDr.       \tN\t54185\t15\nSimpson                       \tMichael             \tSir       \tY\t186613\t16\nSimpson                       \tShalanda            \tDr.       \tY\t181123\t15\nSingleton                     \tAndrew              \tMs.       \tN\t45464\t15\nSmith                         \tDanny               \tDr.       \tY\t143297\t15\nSmith                         \tEdward              \tSir       \tY\t81178\t16\nSmith                         \tHung                \tSir       \tN\t44710\t15\nSmith                         \tKimberly            \tMrs.      \tY\t174638\t15\nSmith                         \tVern                \tSir       \tN\t50960\t15\nSosa                          \tLeah                \tMs.       \tY\t77106\t16\nSparks                        \tErick               \tDr.       \tN\t220337\t15\nTaylor                        \tKenneth             \tDr.       \tY\t194337\t15\nTodd                          \tLinda               \tMs.       \tY\t235816\t15\nTrout                         \tHarley              \tMr.       \tY\t214547\t15\nUrban                         \tNULL\tNULL\tNULL\t214898\t15\nVarner                        \tElsie               \tMs.       \tN\t199602\t16\nVazquez                       \tBill                \tDr.       \tY\t62049\t15\nVelazquez                     \tWilliam             \tDr.       \tN\t46239\t15\nWagner                        \tBarbara             \tMs.       \tY\t233595\t15\nWard                          \tAnna                \tMiss      \tN\t52941\t16\nWatkins                       \tRosa                \tMiss      \tY\t152190\t16\nWelch                         \tJames               \tDr.       \tY\t51441\t16\nWest                          \tTeresa              \tMs.       \tN\t233179\t16\nWhite                         \tMaurice             \tMr.       \tN\t10107\t15\nWilliams                      \tJohn                \tMr.       \tY\t84783\t15\nWilliams                      \tRobert              \tMr.       \tY\t41233\t15\nWilliamson                    \tRuth                \tMrs.      \tY\t86369\t15\nWilson                        \tJoel                \tSir       \tY\t91826\t16\nWilson                        \tJohn                \tSir       \tY\t26543\t15\nWilson                        \tMariano             \tMr.       \tY\t67472\t16\nWinkler                       \tJose                \tDr.       \tY\t78400\t15\nWinter                        \tCora                \tMrs.      \tN\t8978\t16\nWood                          \tMarcia              \tMs.       \tY\t219276\t16\nWood                          \tMichelle            \tMrs.      \tN\t39560\t15\nWright                        \tRichie              \tSir       \tY\t106818\t15\nYoung                         \tWilliam             \tMr.       \tY\t51127\t15\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q35.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<ca_state:string,cd_gender:string,cd_marital_status:string,cnt1:bigint,min(cd_dep_count):int,max(cd_dep_count):int,avg(cd_dep_count):double,cd_dep_employed_count:int,cnt2:bigint,min(cd_dep_employed_count):int,max(cd_dep_employed_count):int,avg(cd_dep_employed_count):double,cd_dep_college_count:int,cnt3:bigint,min(cd_dep_college_count):int,max(cd_dep_college_count):int,avg(cd_dep_college_count):double>\n-- !query output\nNULL\tF\tD\t1\t0\t0\t0.0\t2\t1\t2\t2\t2.0\t2\t1\t2\t2\t2.0\nNULL\tF\tD\t1\t0\t0\t0.0\t3\t1\t3\t3\t3.0\t4\t1\t4\t4\t4.0\nNULL\tF\tD\t1\t0\t0\t0.0\t5\t1\t5\t5\t5.0\t2\t1\t2\t2\t2.0\nNULL\tF\tD\t1\t0\t0\t0.0\t6\t1\t6\t6\t6.0\t4\t1\t4\t4\t4.0\nNULL\tF\tD\t1\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\t1\t1\t1.0\nNULL\tF\tD\t1\t1\t1\t1.0\t4\t1\t4\t4\t4.0\t4\t1\t4\t4\t4.0\nNULL\tF\tD\t1\t1\t1\t1.0\t4\t1\t4\t4\t4.0\t5\t1\t5\t5\t5.0\nNULL\tF\tD\t1\t2\t2\t2.0\t0\t1\t0\t0\t0.0\t4\t1\t4\t4\t4.0\nNULL\tF\tD\t1\t2\t2\t2.0\t1\t1\t1\t1\t1.0\t3\t1\t3\t3\t3.0\nNULL\tF\tD\t1\t2\t2\t2.0\t6\t1\t6\t6\t6.0\t1\t1\t1\t1\t1.0\nNULL\tF\tD\t1\t3\t3\t3.0\t3\t1\t3\t3\t3.0\t2\t1\t2\t2\t2.0\nNULL\tF\tD\t1\t3\t3\t3.0\t3\t1\t3\t3\t3.0\t6\t1\t6\t6\t6.0\nNULL\tF\tD\t1\t3\t3\t3.0\t4\t1\t4\t4\t4.0\t1\t1\t1\t1\t1.0\nNULL\tF\tD\t1\t4\t4\t4.0\t0\t1\t0\t0\t0.0\t3\t1\t3\t3\t3.0\nNULL\tF\tD\t1\t4\t4\t4.0\t1\t1\t1\t1\t1.0\t1\t1\t1\t1\t1.0\nNULL\tF\tD\t1\t4\t4\t4.0\t1\t1\t1\t1\t1.0\t4\t1\t4\t4\t4.0\nNULL\tF\tD\t1\t4\t4\t4.0\t5\t1\t5\t5\t5.0\t6\t1\t6\t6\t6.0\nNULL\tF\tD\t1\t5\t5\t5.0\t4\t1\t4\t4\t4.0\t3\t1\t3\t3\t3.0\nNULL\tF\tD\t1\t5\t5\t5.0\t5\t1\t5\t5\t5.0\t2\t1\t2\t2\t2.0\nNULL\tF\tD\t1\t6\t6\t6.0\t1\t1\t1\t1\t1.0\t3\t1\t3\t3\t3.0\nNULL\tF\tD\t1\t6\t6\t6.0\t2\t1\t2\t2\t2.0\t2\t1\t2\t2\t2.0\nNULL\tF\tD\t1\t6\t6\t6.0\t4\t1\t4\t4\t4.0\t1\t1\t1\t1\t1.0\nNULL\tF\tM\t1\t0\t0\t0.0\t5\t1\t5\t5\t5.0\t5\t1\t5\t5\t5.0\nNULL\tF\tM\t1\t1\t1\t1.0\t3\t1\t3\t3\t3.0\t0\t1\t0\t0\t0.0\nNULL\tF\tM\t1\t1\t1\t1.0\t6\t1\t6\t6\t6.0\t0\t1\t0\t0\t0.0\nNULL\tF\tM\t1\t1\t1\t1.0\t6\t1\t6\t6\t6.0\t1\t1\t1\t1\t1.0\nNULL\tF\tM\t1\t2\t2\t2.0\t2\t1\t2\t2\t2.0\t6\t1\t6\t6\t6.0\nNULL\tF\tM\t1\t2\t2\t2.0\t4\t1\t4\t4\t4.0\t4\t1\t4\t4\t4.0\nNULL\tF\tM\t1\t3\t3\t3.0\t2\t1\t2\t2\t2.0\t1\t1\t1\t1\t1.0\nNULL\tF\tM\t1\t3\t3\t3.0\t5\t1\t5\t5\t5.0\t0\t1\t0\t0\t0.0\nNULL\tF\tM\t1\t3\t3\t3.0\t5\t1\t5\t5\t5.0\t1\t1\t1\t1\t1.0\nNULL\tF\tM\t1\t4\t4\t4.0\t1\t1\t1\t1\t1.0\t4\t1\t4\t4\t4.0\nNULL\tF\tM\t1\t4\t4\t4.0\t2\t1\t2\t2\t2.0\t1\t1\t1\t1\t1.0\nNULL\tF\tM\t1\t4\t4\t4.0\t3\t1\t3\t3\t3.0\t3\t1\t3\t3\t3.0\nNULL\tF\tM\t1\t5\t5\t5.0\t2\t1\t2\t2\t2.0\t2\t1\t2\t2\t2.0\nNULL\tF\tM\t1\t6\t6\t6.0\t1\t1\t1\t1\t1.0\t1\t1\t1\t1\t1.0\nNULL\tF\tM\t1\t6\t6\t6.0\t5\t1\t5\t5\t5.0\t6\t1\t6\t6\t6.0\nNULL\tF\tS\t1\t0\t0\t0.0\t3\t1\t3\t3\t3.0\t6\t1\t6\t6\t6.0\nNULL\tF\tS\t1\t1\t1\t1.0\t0\t1\t0\t0\t0.0\t4\t1\t4\t4\t4.0\nNULL\tF\tS\t1\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t2\t1\t2\t2\t2.0\nNULL\tF\tS\t1\t1\t1\t1.0\t2\t1\t2\t2\t2.0\t6\t1\t6\t6\t6.0\nNULL\tF\tS\t1\t1\t1\t1.0\t5\t1\t5\t5\t5.0\t5\t1\t5\t5\t5.0\nNULL\tF\tS\t1\t2\t2\t2.0\t0\t1\t0\t0\t0.0\t3\t1\t3\t3\t3.0\nNULL\tF\tS\t2\t2\t2\t2.0\t5\t2\t5\t5\t5.0\t6\t2\t6\t6\t6.0\nNULL\tF\tS\t1\t3\t3\t3.0\t0\t1\t0\t0\t0.0\t4\t1\t4\t4\t4.0\nNULL\tF\tS\t1\t3\t3\t3.0\t2\t1\t2\t2\t2.0\t1\t1\t1\t1\t1.0\nNULL\tF\tS\t1\t3\t3\t3.0\t2\t1\t2\t2\t2.0\t5\t1\t5\t5\t5.0\nNULL\tF\tS\t1\t3\t3\t3.0\t3\t1\t3\t3\t3.0\t3\t1\t3\t3\t3.0\nNULL\tF\tS\t1\t4\t4\t4.0\t1\t1\t1\t1\t1.0\t4\t1\t4\t4\t4.0\nNULL\tF\tS\t1\t4\t4\t4.0\t2\t1\t2\t2\t2.0\t4\t1\t4\t4\t4.0\nNULL\tF\tS\t1\t5\t5\t5.0\t6\t1\t6\t6\
t6.0\t0\t1\t0\t0\t0.0\nNULL\tF\tU\t1\t0\t0\t0.0\t1\t1\t1\t1\t1.0\t3\t1\t3\t3\t3.0\nNULL\tF\tU\t1\t0\t0\t0.0\t3\t1\t3\t3\t3.0\t0\t1\t0\t0\t0.0\nNULL\tF\tU\t1\t1\t1\t1.0\t3\t1\t3\t3\t3.0\t2\t1\t2\t2\t2.0\nNULL\tF\tU\t1\t1\t1\t1.0\t5\t1\t5\t5\t5.0\t6\t1\t6\t6\t6.0\nNULL\tF\tU\t1\t2\t2\t2.0\t0\t1\t0\t0\t0.0\t1\t1\t1\t1\t1.0\nNULL\tF\tU\t1\t2\t2\t2.0\t4\t1\t4\t4\t4.0\t4\t1\t4\t4\t4.0\nNULL\tF\tU\t2\t3\t3\t3.0\t1\t2\t1\t1\t1.0\t6\t2\t6\t6\t6.0\nNULL\tF\tU\t1\t4\t4\t4.0\t0\t1\t0\t0\t0.0\t4\t1\t4\t4\t4.0\nNULL\tF\tU\t1\t5\t5\t5.0\t3\t1\t3\t3\t3.0\t6\t1\t6\t6\t6.0\nNULL\tF\tU\t1\t6\t6\t6.0\t2\t1\t2\t2\t2.0\t2\t1\t2\t2\t2.0\nNULL\tF\tU\t1\t6\t6\t6.0\t4\t1\t4\t4\t4.0\t4\t1\t4\t4\t4.0\nNULL\tF\tU\t1\t6\t6\t6.0\t5\t1\t5\t5\t5.0\t0\t1\t0\t0\t0.0\nNULL\tF\tU\t1\t6\t6\t6.0\t5\t1\t5\t5\t5.0\t6\t1\t6\t6\t6.0\nNULL\tF\tW\t1\t0\t0\t0.0\t0\t1\t0\t0\t0.0\t4\t1\t4\t4\t4.0\nNULL\tF\tW\t1\t0\t0\t0.0\t5\t1\t5\t5\t5.0\t5\t1\t5\t5\t5.0\nNULL\tF\tW\t1\t1\t1\t1.0\t3\t1\t3\t3\t3.0\t4\t1\t4\t4\t4.0\nNULL\tF\tW\t1\t2\t2\t2.0\t0\t1\t0\t0\t0.0\t5\t1\t5\t5\t5.0\nNULL\tF\tW\t1\t3\t3\t3.0\t3\t1\t3\t3\t3.0\t6\t1\t6\t6\t6.0\nNULL\tF\tW\t1\t3\t3\t3.0\t6\t1\t6\t6\t6.0\t6\t1\t6\t6\t6.0\nNULL\tF\tW\t1\t4\t4\t4.0\t3\t1\t3\t3\t3.0\t1\t1\t1\t1\t1.0\nNULL\tF\tW\t1\t5\t5\t5.0\t1\t1\t1\t1\t1.0\t1\t1\t1\t1\t1.0\nNULL\tF\tW\t1\t5\t5\t5.0\t1\t1\t1\t1\t1.0\t4\t1\t4\t4\t4.0\nNULL\tF\tW\t1\t5\t5\t5.0\t3\t1\t3\t3\t3.0\t6\t1\t6\t6\t6.0\nNULL\tF\tW\t1\t5\t5\t5.0\t4\t1\t4\t4\t4.0\t6\t1\t6\t6\t6.0\nNULL\tF\tW\t1\t6\t6\t6.0\t0\t1\t0\t0\t0.0\t5\t1\t5\t5\t5.0\nNULL\tF\tW\t1\t6\t6\t6.0\t2\t1\t2\t2\t2.0\t3\t1\t3\t3\t3.0\nNULL\tF\tW\t1\t6\t6\t6.0\t5\t1\t5\t5\t5.0\t5\t1\t5\t5\t5.0\nNULL\tM\tD\t1\t0\t0\t0.0\t3\t1\t3\t3\t3.0\t0\t1\t0\t0\t0.0\nNULL\tM\tD\t1\t1\t1\t1.0\t3\t1\t3\t3\t3.0\t0\t1\t0\t0\t0.0\nNULL\tM\tD\t1\t1\t1\t1.0\t3\t1\t3\t3\t3.0\t2\t1\t2\t2\t2.0\nNULL\tM\tD\t1\t2\t2\t2.0\t0\t1\t0\t0\t0.0\t6\t1\t6\t6\t6.0\nNULL\tM\tD\t1\t2\t2\t2.0\t4\t1\t4\t4\t4.0\t4\t1\t4\t4\t4.0\nNULL\tM\tD\t1\t2\t2\t2.0\t5\t1\t5\t5\t5.0\t3\t1\t3\t3\t3.0\nNULL\tM\tD\t1\t3\t3\t3.0\t1\t1\t1\t1\t1.0\t5\t1\t5\t5\t5.0\nNULL\tM\tD\t1\t3\t3\t3.0\t2\t1\t2\t2\t2.0\t3\t1\t3\t3\t3.0\nNULL\tM\tD\t1\t4\t4\t4.0\t5\t1\t5\t5\t5.0\t2\t1\t2\t2\t2.0\nNULL\tM\tD\t1\t6\t6\t6.0\t1\t1\t1\t1\t1.0\t6\t1\t6\t6\t6.0\nNULL\tM\tD\t1\t6\t6\t6.0\t3\t1\t3\t3\t3.0\t1\t1\t1\t1\t1.0\nNULL\tM\tM\t1\t0\t0\t0.0\t0\t1\t0\t0\t0.0\t1\t1\t1\t1\t1.0\nNULL\tM\tM\t2\t0\t0\t0.0\t1\t2\t1\t1\t1.0\t2\t2\t2\t2\t2.0\nNULL\tM\tM\t1\t0\t0\t0.0\t2\t1\t2\t2\t2.0\t1\t1\t1\t1\t1.0\nNULL\tM\tM\t1\t0\t0\t0.0\t3\t1\t3\t3\t3.0\t5\t1\t5\t5\t5.0\nNULL\tM\tM\t1\t0\t0\t0.0\t5\t1\t5\t5\t5.0\t0\t1\t0\t0\t0.0\nNULL\tM\tM\t1\t1\t1\t1.0\t0\t1\t0\t0\t0.0\t1\t1\t1\t1\t1.0\nNULL\tM\tM\t1\t1\t1\t1.0\t0\t1\t0\t0\t0.0\t2\t1\t2\t2\t2.0\nNULL\tM\tM\t1\t2\t2\t2.0\t6\t1\t6\t6\t6.0\t5\t1\t5\t5\t5.0\nNULL\tM\tM\t1\t3\t3\t3.0\t5\t1\t5\t5\t5.0\t1\t1\t1\t1\t1.0\nNULL\tM\tM\t1\t3\t3\t3.0\t6\t1\t6\t6\t6.0\t4\t1\t4\t4\t4.0\nNULL\tM\tM\t1\t4\t4\t4.0\t1\t1\t1\t1\t1.0\t3\t1\t3\t3\t3.0\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q36.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<gross_margin:decimal(37,20),i_category:string,i_class:string,lochierarchy:tinyint,rank_within_parent:int>\n-- !query output\n-0.43310777864678831165\tNULL\tNULL\t2\t1\n-0.44057752675240550259\tHome                                              \tNULL\t1\t1\n-0.43759152110176221048\tMusic                                             \tNULL\t1\t2\n-0.43708103961494058652\tNULL\tNULL\t1\t3\n-0.43616253138693450880\tShoes                                             \tNULL\t1\t4\n-0.43567118609322457134\tChildren                                          \tNULL\t1\t5\n-0.43423932351647837678\tSports                                            \tNULL\t1\t6\n-0.43342977299642408093\tElectronics                                       \tNULL\t1\t7\n-0.43243283120699560700\tWomen                                             \tNULL\t1\t8\n-0.43164166899823508408\tMen                                               \tNULL\t1\t9\n-0.42516187689954540402\tBooks                                             \tNULL\t1\t10\n-0.42448713380832790884\tJewelry                                           \tNULL\t1\t11\n-0.73902664238792748962\tNULL\tshirts                                            \t0\t1\n-0.61125804873635587486\tNULL\tcountry                                           \t0\t2\n-0.53129803597069255822\tNULL\tdresses                                           \t0\t3\n-0.51266635289382758517\tNULL\tathletic                                          \t0\t4\n-0.45290387783638603924\tNULL\tmens                                              \t0\t5\n-0.41288056661656330013\tNULL\taccessories                                       \t0\t6\n-0.40784754677005682440\tNULL\tNULL\t0\t7\n-0.34254844860867375832\tNULL\tbaseball                                          \t0\t8\n-0.32511461675631534897\tNULL\tinfants                                           \t0\t9\n-0.44733955704648003493\tBooks                                             \tcomputers                                         \t0\t1\n-0.44221358112622373783\tBooks                                             \thome repair                                       \t0\t2\n-0.44131129175272951442\tBooks                                             \tromance                                           \t0\t3\n-0.43954111564375046074\tBooks                                             \thistory                                           \t0\t4\n-0.43921337505389731821\tBooks                                             \tmystery                                           \t0\t5\n-0.43904020269360481109\tBooks                                             \tsports                                            \t0\t6\n-0.42821476999837619396\tBooks                                             \ttravel                                            \t0\t7\n-0.42609067296303848297\tBooks                                             \tcooking                                           \t0\t8\n-0.42538995145338568328\tBooks                                             \tfiction                                           \t0\t9\n-0.42446563616188232944\tBooks                                             \tarts                                              \t0\t10\n-0.42424821311884350413\tBooks                                             \tparenting                                         \t0\t11\n-0.41822014479424203008\tBooks                                             \treference                             
            \t0\t12\n-0.41350839325516811781\tBooks                                             \tbusiness                                          \t0\t13\n-0.40935208137315013129\tBooks                                             \tscience                                           \t0\t14\n-0.40159380735731858928\tBooks                                             \tself-help                                         \t0\t15\n-0.36957884843305744526\tBooks                                             \tentertainments                                    \t0\t16\n-0.44602461556731552282\tChildren                                          \tschool-uniforms                                   \t0\t1\n-0.44141106040000560852\tChildren                                          \ttoddlers                                          \t0\t2\n-0.43479886701046623711\tChildren                                          \tinfants                                           \t0\t3\n-0.41900662971936329442\tChildren                                          \tnewborn                                           \t0\t4\n-0.41526603781609697786\tChildren                                          \tNULL\t0\t5\n-0.45347482218635333366\tElectronics                                       \tpersonal                                          \t0\t1\n-0.44349670349829474271\tElectronics                                       \tstereo                                            \t0\t2\n-0.44262427232850112058\tElectronics                                       \tautomotive                                        \t0\t3\n-0.44115886172705231970\tElectronics                                       \tportable                                          \t0\t4\n-0.43972786651639318010\tElectronics                                       \tmemory                                            \t0\t5\n-0.43889275271590953040\tElectronics                                       \tscanners                                          \t0\t6\n-0.43879181695132886061\tElectronics                                       \tkaroke                                            \t0\t7\n-0.43743655149948399284\tElectronics                                       \tdvd/vcr players                                   \t0\t8\n-0.43737666390514154910\tElectronics                                       \tcameras                                           \t0\t9\n-0.43390499017233926812\tElectronics                                       \twireless                                          \t0\t10\n-0.43163869754114299547\tElectronics                                       \taudio                                             \t0\t11\n-0.42958938669780912634\tElectronics                                       \tcamcorders                                        \t0\t12\n-0.42872845803629855724\tElectronics                                       \tmusical                                           \t0\t13\n-0.42228240153396399656\tElectronics                                       \ttelevisions                                       \t0\t14\n-0.41893847772039275795\tElectronics                                       \tmonitors                                          \t0\t15\n-0.39793878022746331540\tElectronics                                       \tdisk drives                                       \t0\t16\n-0.49051156860507320113\tHome                                              \tNULL\t0\t1\n-0.48431476750686752965\tHome                                              \tblinds/shades               
                      \t0\t2\n-0.47545837941951440918\tHome                                              \tbathroom                                          \t0\t3\n-0.45726228921216284093\tHome                                              \trugs                                              \t0\t4\n-0.45540507568891021759\tHome                                              \tfurniture                                         \t0\t5\n-0.45303572267019508501\tHome                                              \tflatware                                          \t0\t6\n-0.44755542058111800358\tHome                                              \ttables                                            \t0\t7\n-0.44419847780930149402\tHome                                              \twallpaper                                         \t0\t8\n-0.44092345226680695671\tHome                                              \tglassware                                         \t0\t9\n-0.43877591834074789745\tHome                                              \tdecor                                             \t0\t10\n-0.43765482553654514822\tHome                                              \taccent                                            \t0\t11\n-0.43188199218974854630\tHome                                              \tbedding                                           \t0\t12\n-0.43107417904272222899\tHome                                              \tkids                                              \t0\t13\n-0.42474436355625900935\tHome                                              \tlighting                                          \t0\t14\n-0.41783311109052416746\tHome                                              \tcurtains/drapes                                   \t0\t15\n-0.41767111806961188479\tHome                                              \tmattresses                                        \t0\t16\n-0.40562188698541221499\tHome                                              \tpaint                                             \t0\t17\n-0.45165056505480816921\tJewelry                                           \tjewelry boxes                                     \t0\t1\n-0.44372227804836590137\tJewelry                                           \testate                                            \t0\t2\n-0.44251815032563188894\tJewelry                                           \tgold                                              \t0\t3\n-0.43978127753996883542\tJewelry                                           \tconsignment                                       \t0\t4\n-0.43821750044359339153\tJewelry                                           \tcustom                                            \t0\t5\n-0.43439645036479672989\tJewelry                                           \tbracelets                                         \t0\t6\n-0.43208398325687772942\tJewelry                                           \tloose stones                                      \t0\t7\n-0.43060897375114375156\tJewelry                                           \tdiamonds                                          \t0\t8\n-0.42847505748860847066\tJewelry                                           \tcostume                                           \t0\t9\n-0.42667449062277843561\tJewelry                                           \trings                                             \t0\t10\n-0.41987969011585456826\tJewelry                                           \tmens watch                                        
\t0\t11\n-0.41624621972944533035\tJewelry                                           \tsemi-precious                                     \t0\t12\n-0.41148949162100715771\tJewelry                                           \twomens watch                                      \t0\t13\n-0.39725668174847694299\tJewelry                                           \tbirdal                                            \t0\t14\n-0.39665274051903254057\tJewelry                                           \tpendants                                          \t0\t15\n-0.38423525233438861010\tJewelry                                           \tearings                                           \t0\t16\n-0.44464388887858793403\tMen                                               \tshirts                                            \t0\t1\n-0.43719860800637369827\tMen                                               \taccessories                                       \t0\t2\n-0.43164606665359630905\tMen                                               \tsports-apparel                                    \t0\t3\n-0.41530906677293519754\tMen                                               \tpants                                             \t0\t4\n-0.38332708894803499123\tMen                                               \tNULL\t0\t5\n-0.47339698705534020269\tMusic                                             \tNULL\t0\t1\n-0.44193214675249008923\tMusic                                             \trock                                              \t0\t2\n-0.44008174913565459246\tMusic                                             \tcountry                                           \t0\t3\n-0.43863444992223641373\tMusic                                             \tpop                                               \t0\t4\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q37.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,i_item_desc:string,i_current_price:decimal(7,2)>\n-- !query output\nAAAAAAAADHGDAAAA\tNecessary times believe probably. Cruel traders know ho\t92.95\nAAAAAAAAFMLDAAAA\tGiven groups please unfortu\t84.79\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q38.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<count(1):bigint>\n-- !query output\n104\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q39a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<w_warehouse_sk:int,i_item_sk:int,d_moy:int,mean:double,cov:double,w_warehouse_sk:int,i_item_sk:int,d_moy:int,mean:double,cov:double>\n-- !query output\n1\t823\t1\t301.5\t1.1271370519097714\t1\t823\t2\t399.25\t1.0208768007427147\n1\t1015\t1\t344.25\t1.2511428113709673\t1\t1015\t2\t385.5\t1.0470314051933909\n1\t1555\t1\t264.25\t1.2283186159550554\t1\t1555\t2\t308.25\t1.058767914170086\n1\t1691\t1\t116.33333333333333\t1.0248470977646387\t1\t1691\t2\t304.75\t1.1191291694661885\n1\t1859\t1\t434.0\t1.0498116619056204\t1\t1859\t2\t314.0\t1.1900690532464868\n1\t2239\t1\t322.5\t1.0978977568088866\t1\t2239\t2\t382.75\t1.0474268515036576\n1\t3859\t1\t352.25\t1.1782246058681007\t1\t3859\t2\t437.5\t1.145917046540028\n1\t4975\t1\t388.5\t1.0001018617093298\t1\t4975\t2\t363.3333333333333\t1.0038038922115327\n1\t5109\t1\t370.75\t1.1395542642353356\t1\t5109\t2\t322.25\t1.1495609463470746\n1\t5177\t1\t292.0\t1.3461430973363442\t1\t5177\t2\t216.25\t1.05761161489175\n1\t6619\t1\t409.5\t1.0040570532236324\t1\t6619\t2\t322.3333333333333\t1.0619035677903699\n1\t7535\t1\t428.25\t1.0544682695099616\t1\t7535\t2\t391.5\t1.222946431626478\n1\t8283\t1\t194.25\t1.2003971691624762\t1\t8283\t2\t400.6666666666667\t1.1209666759436547\n1\t8401\t1\t328.25\t1.0063843177755347\t1\t8401\t2\t297.75\t1.4183634329309731\n1\t8547\t1\t328.25\t1.1425055398027557\t1\t8547\t2\t264.5\t1.0416250678095451\n1\t8717\t1\t228.5\t1.2085987127693683\t1\t8717\t2\t288.25\t1.2149351569032296\n1\t8933\t1\t361.0\t1.1736874031675129\t1\t8933\t2\t433.0\t1.0089048763341881\n1\t8997\t1\t250.0\t1.035266149354841\t1\t8997\t2\t430.6666666666667\t1.1313205669952624\n1\t9245\t1\t392.3333333333333\t1.1119320768290193\t1\t9245\t2\t343.0\t1.1904615139612167\n1\t9621\t1\t277.6666666666667\t1.2243644452469618\t1\t9621\t2\t337.25\t1.2482331965937552\n1\t10299\t1\t380.5\t1.1949268339105126\t1\t10299\t2\t144.5\t1.184591012875645\n1\t10745\t1\t287.3333333333333\t1.084129539111935\t1\t10745\t2\t335.6666666666667\t1.1651939735017574\n1\t11125\t1\t472.75\t1.0530701492045889\t1\t11125\t2\t264.5\t1.0730737223380644\n1\t11859\t1\t256.5\t1.4406552725835113\t1\t11859\t2\t370.75\t1.1082243282754298\n1\t12101\t1\t334.75\t1.1423000747353222\t1\t12101\t2\t396.5\t1.0099457859523537\n1\t12259\t1\t326.5\t1.219693210219279\t1\t12259\t2\t292.6666666666667\t1.2808898286830026\n1\t12641\t1\t321.25\t1.1286221893301993\t1\t12641\t2\t279.25\t1.129134558577743\n1\t13043\t1\t260.5\t1.355894484625015\t1\t13043\t2\t295.0\t1.056210118409035\n1\t13157\t1\t260.5\t1.5242630430075292\t1\t13157\t2\t413.5\t1.0422561797285326\n1\t13293\t1\t325.25\t1.1599721810918615\t1\t13293\t2\t345.75\t1.0626233629994524\n1\t13729\t1\t486.0\t1.0680776434770018\t1\t13729\t2\t389.6666666666667\t1.3522269473359647\n1\t14137\t1\t427.0\t1.0418229612154228\t1\t14137\t2\t387.5\t1.0294855239302605\n1\t14159\t1\t398.6666666666667\t1.0001328196713188\t1\t14159\t2\t186.5\t1.01269532733355\n1\t14161\t1\t84.66666666666667\t1.4291501268026987\t1\t14161\t2\t450.5\t1.0059037693687187\n1\t14911\t1\t397.6666666666667\t1.1801953152598252\t1\t14911\t2\t195.0\t1.0294118715204632\n1\t14983\t1\t286.6666666666667\t1.335077674103522\t1\t14983\t2\t138.25\t1.0223829110804588\n1\t15181\t1\t270.0\t1.0247553155705627\t1\t15181\t2\t431.25\t1.0014337967301454\n1\t15425\t1\t311.5\t1.0406385606636968\t1\t15425\t2\t348.25\t1.1725274574998379\n1\t15441\t1\t295.0\t1.597573089984185\t1\t15441\t2\t296.0\t1.534664092337063\n1\t16601\t1\t428.5\t1.0395250642903893\
t1\t16601\t2\t397.75\t1.108690797111687\n1\t16895\t1\t197.25\t1.6216484458744376\t1\t16895\t2\t201.33333333333334\t1.001838963026182\n1\t17285\t1\t291.5\t1.221338355118313\t1\t17285\t2\t384.3333333333333\t1.3166511804104957\n1\t17399\t1\t335.25\t1.18017157377048\t1\t17399\t2\t208.0\t1.2199892235676928\n1\t17521\t1\t248.0\t1.1446287380504054\t1\t17521\t2\t321.25\t1.1065301391790767\n2\t39\t1\t306.75\t1.3140265341053214\t2\t39\t2\t207.75\t1.0612481315699458\n2\t575\t1\t139.75\t1.0651946548671536\t2\t575\t2\t390.0\t1.2985907498394962\n2\t577\t1\t398.3333333333333\t1.0782049277126313\t2\t577\t2\t125.25\t1.015309703853557\n2\t795\t1\t445.0\t1.008215213597613\t2\t795\t2\t226.0\t1.0374761691860421\n2\t977\t1\t383.0\t1.0133755425339792\t2\t977\t2\t365.25\t1.013000783116057\n2\t993\t1\t361.0\t1.0341824955539196\t2\t993\t2\t228.0\t1.2304019216861168\n2\t1367\t1\t402.0\t1.143084675277277\t2\t1367\t2\t204.25\t1.0427544150441603\n2\t2863\t1\t366.5\t1.0309651709854288\t2\t2863\t2\t216.75\t1.213127273469588\n2\t3041\t1\t330.75\t1.0432879984065397\t2\t3041\t2\t397.25\t1.0315273164376213\n2\t3323\t1\t427.25\t1.0090121310722835\t2\t3323\t2\t338.5\t1.0459307196329675\n2\t3999\t1\t422.0\t1.0082188847601554\t2\t3999\t2\t403.25\t1.0051474075339424\n2\t4023\t1\t410.3333333333333\t1.0275376199691826\t2\t4023\t2\t247.75\t1.1609399487145424\n2\t4331\t1\t384.75\t1.064007972514113\t2\t4331\t2\t385.75\t1.011321832432027\n2\t5915\t1\t231.25\t1.8705235468767831\t2\t5915\t2\t407.0\t1.0885509926353207\n2\t6275\t1\t420.6666666666667\t1.008294829045718\t2\t6275\t2\t297.6666666666667\t1.4615952762209015\n2\t7087\t1\t494.5\t1.0102112511298167\t2\t7087\t2\t148.5\t1.7921780145107034\n2\t7333\t1\t150.33333333333334\t1.433050233356339\t2\t7333\t2\t214.75\t1.0557216871980746\n2\t7497\t1\t360.25\t1.0793771564436658\t2\t7497\t2\t200.0\t1.2799283834131763\n2\t8825\t1\t381.5\t1.097428368281076\t2\t8825\t2\t414.25\t1.0153496363346763\n2\t10795\t1\t393.3333333333333\t1.0666650582315749\t2\t10795\t2\t322.75\t1.0370108734399346\n2\t10863\t1\t330.75\t1.0627419399194518\t2\t10863\t2\t276.0\t1.6689178243621974\n2\t11073\t1\t274.0\t1.6770433716259476\t2\t11073\t2\t359.0\t1.0133649606830526\n2\t11465\t1\t312.25\t1.1867998857912492\t2\t11465\t2\t154.0\t1.1105857906239676\n2\t11853\t1\t292.25\t1.0772136418065505\t2\t11853\t2\t401.5\t1.0787107704829733\n2\t12389\t1\t366.25\t1.067334394007718\t2\t12389\t2\t278.5\t1.1044576295014297\n2\t13247\t1\t356.0\t1.0660645893691494\t2\t13247\t2\t342.5\t1.0634325916808434\n2\t13661\t1\t293.0\t1.2174529148923212\t2\t13661\t2\t412.25\t1.048653193049242\n2\t13923\t1\t335.75\t1.2358541998052608\t2\t13923\t2\t100.75\t1.0814570294681372\n2\t14671\t1\t262.75\t1.0250871002782607\t2\t14671\t2\t338.75\t1.0628054504205149\n2\t15397\t1\t297.6666666666667\t1.2809438534554334\t2\t15397\t2\t492.75\t1.0943752797356943\n2\t15477\t1\t251.0\t1.124353329693781\t2\t15477\t2\t307.6666666666667\t1.3795297219800364\n2\t15795\t1\t304.5\t1.0607926164522463\t2\t15795\t2\t397.25\t1.1176167493994051\n2\t16603\t1\t293.3333333333333\t1.387199822342635\t2\t16603\t2\t433.3333333333333\t1.1660592106922516\n2\t16969\t1\t364.6666666666667\t1.1611696936141274\t2\t16969\t2\t375.3333333333333\t1.0294034440006494\n2\t17393\t1\t179.25\t1.1070920857377156\t2\t17393\t2\t294.25\t1.1481110921426008\n3\t29\t1\t438.25\t1.0131261466664097\t3\t29\t2\t344.0\t1.1151530577310618\n3\t247\t1\t321.0\t1.0042014719826915\t3\t247\t2\t423.0\t1.180182949214427\n3\t953\t1\t338.75\t1.1838346915880587\t3\t953\t2\t321.5\t1.2363739805879619\n3\t1541\t1\t110.0\t1.17915784487
93427\t3\t1541\t2\t273.75\t1.197487924282276\n3\t1649\t1\t360.6666666666667\t1.1853733590339803\t3\t1649\t2\t334.25\t1.1482623798952447\n3\t2459\t1\t313.75\t1.0048197369511642\t3\t2459\t2\t352.75\t1.21947122536524\n3\t2619\t1\t241.0\t1.1159485992928209\t3\t2619\t2\t261.0\t1.1099544779211474\n3\t2707\t1\t375.25\t1.1207806068743988\t3\t2707\t2\t290.75\t1.0006820492941273\n3\t2975\t1\t304.0\t1.0591594463002163\t3\t2975\t2\t190.0\t1.2046769431661426\n3\t3315\t1\t271.75\t1.555976998814345\t3\t3315\t2\t393.75\t1.0196319345405949\n3\t3393\t1\t260.0\t1.5009563026568116\t3\t3393\t2\t470.25\t1.129275872154205\n3\t3597\t1\t304.0\t1.2471400801439207\t3\t3597\t2\t364.0\t1.057917059038131\n3\t3661\t1\t331.25\t1.2138186201312904\t3\t3661\t2\t398.25\t1.0134502284121254\n3\t3951\t1\t328.3333333333333\t1.3920958631929026\t3\t3951\t2\t378.0\t1.057830622993178\n3\t4793\t1\t439.5\t1.3208979917045633\t3\t4793\t2\t298.6666666666667\t1.2536383791454593\n3\t5221\t1\t395.25\t1.012020609314844\t3\t5221\t2\t423.6666666666667\t1.0742618083358388\n3\t5857\t1\t331.5\t1.1548423818657882\t3\t5857\t2\t394.3333333333333\t1.101836576034495\n3\t6045\t1\t313.5\t1.1971443861134845\t3\t6045\t2\t67.25\t1.2083633449201445\n3\t6615\t1\t366.0\t1.4103495908912012\t3\t6615\t2\t228.0\t1.0322683130436006\n3\t7071\t1\t182.75\t1.402155194063468\t3\t7071\t2\t438.25\t1.0176436798626307\n3\t7211\t1\t355.25\t1.2455338321801286\t3\t7211\t2\t462.0\t1.0449517641148738\n3\t8761\t1\t253.75\t1.1207897246865177\t3\t8761\t2\t212.5\t1.1557740307473354\n3\t9305\t1\t350.6666666666667\t1.3141448475357504\t3\t9305\t2\t387.3333333333333\t1.043391324490137\n3\t9373\t1\t179.75\t1.3318949893741667\t3\t9373\t2\t321.25\t1.1314604181366261\n3\t9669\t1\t315.75\t1.093783081996044\t3\t9669\t2\t321.0\t1.1239703852823903\n3\t9699\t1\t362.25\t1.0269679854596525\t3\t9699\t2\t358.0\t1.5025258842887776\n3\t10301\t1\t348.5\t1.2820855632941448\t3\t10301\t2\t318.5\t1.289483896046129\n3\t10427\t1\t241.33333333333334\t1.5634035191786233\t3\t10427\t2\t381.25\t1.0623056061004696\n3\t11103\t1\t260.25\t1.0537747255764836\t3\t11103\t2\t334.0\t1.2702517027303248\n3\t11141\t1\t251.0\t1.0896833134701018\t3\t11141\t2\t272.0\t1.1910327315841194\n3\t12019\t1\t362.25\t1.0966647561341047\t3\t12019\t2\t282.25\t1.0983756663144604\n3\t12743\t1\t276.5\t1.005648259467935\t3\t12743\t2\t352.5\t1.08876682930328\n3\t12753\t1\t250.25\t1.3386846981823803\t3\t12753\t2\t468.0\t1.0383135087299893\n3\t12931\t1\t322.75\t1.1146291380437745\t3\t12931\t2\t320.0\t1.17009448069376\n3\t13487\t1\t308.25\t1.2961991776642086\t3\t13487\t2\t293.25\t1.0585525936033\n3\t13555\t1\t373.25\t1.0745070114317623\t3\t13555\t2\t152.75\t1.412684197862033\n3\t13581\t1\t292.75\t1.1902035296028353\t3\t13581\t2\t253.0\t1.155002663143799\n3\t13829\t1\t233.5\t1.1312399620732085\t3\t13829\t2\t444.25\t1.0391188483200453\n3\t13847\t1\t417.5\t1.039525557170396\t3\t13847\t2\t263.5\t1.4436108729741235\n3\t14073\t1\t355.5\t1.0476440697241391\t3\t14073\t2\t437.25\t1.0172135605078851\n3\t14767\t1\t311.5\t1.0034195608338836\t3\t14767\t2\t339.0\t1.2032144276415566\n3\t14981\t1\t193.25\t1.0060336654947306\t3\t14981\t2\t441.5\t1.3661655364714043\n3\t16331\t1\t272.0\t1.1467170493846688\t3\t16331\t2\t339.25\t1.2786638701439956\n3\t16847\t1\t273.6666666666667\t1.3346016934186173\t3\t16847\t2\t398.0\t1.2041547394959626\n3\t16987\t1\t358.0\t1.101510614957325\t3\t16987\t2\t420.0\t1.0848663494738469\n3\t17613\t1\t424.3333333333333\t1.0320925947787334\t3\t17613\t2\t390.75\t1.0761214357356987\n3\t17987\t1\t387.5\t1.1128327233395303\t3\t17987\t2\t131.6666
6666666666\t1.1227241574530091\n4\t225\t1\t180.0\t1.2847074573726138\t4\t225\t2\t366.5\t1.112494070167504\n4\t299\t1\t293.5\t1.135267940218844\t4\t299\t2\t380.0\t1.0485028679413595\n4\t825\t1\t223.25\t1.2488574716961685\t4\t825\t2\t254.0\t1.5182802586094637\n4\t1393\t1\t418.75\t1.0408989038120988\t4\t1393\t2\t413.3333333333333\t1.1020163503416796\n4\t1523\t1\t363.25\t1.0130673543588669\t4\t1523\t2\t253.5\t1.2817761298828965\n4\t1729\t1\t313.25\t1.3148930771572687\t4\t1729\t2\t296.5\t1.210664179669432\n4\t2989\t1\t424.6666666666667\t1.03767453099966\t4\t2989\t2\t123.75\t1.4454541925191389\n4\t3183\t1\t190.0\t1.2520196057807818\t4\t3183\t2\t245.0\t1.0300119488354766\n4\t4175\t1\t395.0\t1.042998032908585\t4\t4175\t2\t485.0\t1.0145126110736231\n4\t4293\t1\t285.0\t1.042264740588342\t4\t4293\t2\t331.25\t1.0702681575369872\n4\t4573\t1\t243.75\t1.4457774863358526\t4\t4573\t2\t431.25\t1.0010829394909448\n4\t4875\t1\t401.0\t1.0066599946104444\t4\t4875\t2\t410.5\t1.051550593497737\n4\t5009\t1\t386.5\t1.0301582587751055\t4\t5009\t2\t473.75\t1.055073121585445\n4\t5947\t1\t291.5\t1.046282184237671\t4\t5947\t2\t320.5\t1.1280002765664996\n4\t6359\t1\t193.33333333333334\t1.2483139639831744\t4\t6359\t2\t371.75\t1.0993680760045068\n4\t6517\t1\t289.0\t1.0911931716633327\t4\t6517\t2\t148.0\t1.0471156482980475\n4\t8309\t1\t371.0\t1.2845214196617782\t4\t8309\t2\t371.5\t1.0748626938819539\n4\t8339\t1\t392.75\t1.0058445869354098\t4\t8339\t2\t345.75\t1.2872431560206488\n4\t9685\t1\t288.75\t1.0017436994234579\t4\t9685\t2\t440.75\t1.0083448738924952\n4\t10255\t1\t373.5\t1.1222827247788254\t4\t10255\t2\t352.0\t1.1003307048901103\n4\t10925\t1\t199.5\t1.3875238422301213\t4\t10925\t2\t261.75\t1.283642511996497\n4\t11213\t1\t226.66666666666666\t1.09984270658979\t4\t11213\t2\t413.5\t1.0174813417315496\n4\t11305\t1\t351.75\t1.1922401157939606\t4\t11305\t2\t365.25\t1.1258535465411879\n4\t11473\t1\t394.6666666666667\t1.0178948794541924\t4\t11473\t2\t212.66666666666666\t1.195359710715888\n4\t12353\t1\t340.25\t1.164721531085477\t4\t12353\t2\t432.0\t1.0523203480868901\n4\t12783\t1\t329.5\t1.0329266474827115\t4\t12783\t2\t187.0\t1.2621302720196819\n4\t12971\t1\t370.3333333333333\t1.097620185659271\t4\t12971\t2\t278.0\t1.4524982093215804\n4\t13665\t1\t363.0\t1.04089223995917\t4\t13665\t2\t332.6666666666667\t1.1900176061910035\n4\t13913\t1\t297.3333333333333\t1.071040936419414\t4\t13913\t2\t316.25\t1.3567449933143143\n4\t15161\t1\t305.75\t1.3571548565863678\t4\t15161\t2\t262.3333333333333\t1.2292106140967536\n4\t15401\t1\t341.0\t1.0164918336889106\t4\t15401\t2\t337.25\t1.0178529534898602\n4\t15467\t1\t355.3333333333333\t1.27670099607062\t4\t15467\t2\t416.6666666666667\t1.1678517714162187\n4\t16211\t1\t257.6666666666667\t1.6381074811154002\t4\t16211\t2\t352.25\t1.055236934125639\n4\t16367\t1\t344.25\t1.1865617643407205\t4\t16367\t2\t330.5\t1.001436680208246\n4\t16623\t1\t174.75\t1.1547312605990323\t4\t16623\t2\t261.75\t1.4692073123565808\n4\t16753\t1\t283.6666666666667\t1.4179905875607177\t4\t16753\t2\t331.0\t1.0757450815976775\n4\t16791\t1\t229.75\t1.0415889892942085\t4\t16791\t2\t348.75\t1.2365182061688882\n5\t507\t1\t360.5\t1.0016609878348282\t5\t507\t2\t397.25\t1.1165805580468837\n5\t1379\t1\t418.3333333333333\t1.1593756930293735\t5\t1379\t2\t362.25\t1.1381161323894302\n5\t1451\t1\t259.75\t1.0166115859746467\t5\t1451\t2\t186.5\t1.3009837449067687\n5\t1761\t1\t245.25\t1.0674277258886877\t5\t1761\t2\t356.3333333333333\t1.0319105846046546\n5\t1919\t1\t558.0\t1.051789656603646\t5\t1919\t2\t280.75\t1.435982403616447\n5\t2153\t1\t398.2
5\t1.038369445511033\t5\t2153\t2\t322.0\t1.2495167076207327\n5\t2583\t1\t357.25\t1.0689747703230787\t5\t2583\t2\t321.5\t1.174109700061395\n5\t2725\t1\t306.25\t1.0685532393228003\t5\t2725\t2\t193.5\t1.4095901314659105\n5\t3547\t1\t357.0\t1.1544864737016736\t5\t3547\t2\t343.75\t1.2077817108886129\n5\t3785\t1\t215.5\t1.231057632809026\t5\t3785\t2\t460.0\t1.048938011267006\n5\t4445\t1\t327.3333333333333\t1.0177488158574015\t5\t4445\t2\t414.75\t1.046288264177383\n5\t4601\t1\t327.25\t1.15815714609041\t5\t4601\t2\t142.66666666666666\t1.2197537262761011\n5\t5019\t1\t341.6666666666667\t1.2014886661438384\t5\t5019\t2\t363.5\t1.1056740335885162\n5\t5635\t1\t275.75\t1.003161317494043\t5\t5635\t2\t195.33333333333334\t1.49437494371756\n5\t5725\t1\t308.6666666666667\t1.2494665767967896\t5\t5725\t2\t315.5\t1.4329959977644893\n5\t5787\t1\t335.75\t1.3581868619406905\t5\t5787\t2\t453.5\t1.031825110180606\n5\t8665\t1\t257.75\t1.8119629759287612\t5\t8665\t2\t368.0\t1.0243808356311228\n5\t9037\t1\t189.75\t1.0334701022027994\t5\t9037\t2\t326.0\t1.1188906978754734\n5\t9241\t1\t342.75\t1.037524616861255\t5\t9241\t2\t174.5\t1.1953290244295067\n5\t9245\t1\t339.5\t1.0092696575112496\t5\t9245\t2\t303.0\t1.2214283206597227\n5\t9789\t1\t391.25\t1.0458503093728178\t5\t9789\t2\t343.75\t1.070040394695916\n5\t10775\t1\t439.5\t1.2565424257262654\t5\t10775\t2\t330.75\t1.3194508007529422\n5\t10851\t1\t296.25\t1.4450973535233087\t5\t10851\t2\t185.75\t1.0920078306591938\n5\t11409\t1\t337.5\t1.2675445661798022\t5\t11409\t2\t267.5\t1.2735562175240271\n5\t11543\t1\t373.75\t1.1069130009236565\t5\t11543\t2\t347.0\t1.0384881272212296\n5\t11907\t1\t312.0\t1.1200627653353177\t5\t11907\t2\t130.0\t1.0970913669588425\n5\t12315\t1\t255.75\t1.118436212304132\t5\t12315\t2\t329.25\t1.1943065884369533\n5\t12589\t1\t372.75\t1.1109412437666168\t5\t12589\t2\t355.75\t1.0853990131935114\n5\t12853\t1\t306.75\t1.4054585232279222\t5\t12853\t2\t174.75\t1.0143495332647992\n5\t13001\t1\t420.3333333333333\t1.0350551797504086\t5\t13001\t2\t214.0\t1.115488847819715\n5\t13399\t1\t304.75\t1.1656588906680398\t5\t13399\t2\t359.5\t1.0278593838157082\n5\t13809\t1\t338.0\t1.3356560129512516\t5\t13809\t2\t254.25\t1.3229081483155771\n5\t13997\t1\t372.75\t1.0067987273536196\t5\t13997\t2\t488.5\t1.0136437168469898\n5\t14683\t1\t313.75\t1.0038487229336792\t5\t14683\t2\t344.5\t1.1175817890990485\n5\t14721\t1\t420.0\t1.0064164891633403\t5\t14721\t2\t280.5\t1.0511776533295065\n5\t14891\t1\t445.25\t1.0918502542585706\t5\t14891\t2\t420.3333333333333\t1.1234936159879627\n5\t15261\t1\t514.0\t1.0718025790789742\t5\t15261\t2\t367.25\t1.015824493979522\n5\t15477\t1\t225.0\t1.1250854562879289\t5\t15477\t2\t235.75\t1.0790388329037188\n5\t15655\t1\t367.5\t1.100326204582237\t5\t15655\t2\t306.3333333333333\t1.0511442359018315\n5\t15673\t1\t351.25\t1.0192896453224356\t5\t15673\t2\t389.6666666666667\t1.0824061270803669\n5\t15959\t1\t414.0\t1.0773325961138016\t5\t15959\t2\t237.25\t1.4024256583845858\n5\t17339\t1\t399.25\t1.0240754930004161\t5\t17339\t2\t265.5\t1.1851526004436805\n5\t17581\t1\t426.0\t1.2083890532953205\t5\t17581\t2\t233.25\t1.1436871765942431\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q39b.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<w_warehouse_sk:int,i_item_sk:int,d_moy:int,mean:double,cov:double,w_warehouse_sk:int,i_item_sk:int,d_moy:int,mean:double,cov:double>\n-- !query output\n1\t13157\t1\t260.5\t1.5242630430075292\t1\t13157\t2\t413.5\t1.0422561797285326\n1\t15441\t1\t295.0\t1.597573089984185\t1\t15441\t2\t296.0\t1.534664092337063\n1\t16895\t1\t197.25\t1.6216484458744376\t1\t16895\t2\t201.33333333333334\t1.001838963026182\n2\t5915\t1\t231.25\t1.8705235468767831\t2\t5915\t2\t407.0\t1.0885509926353207\n2\t11073\t1\t274.0\t1.6770433716259476\t2\t11073\t2\t359.0\t1.0133649606830526\n3\t3315\t1\t271.75\t1.555976998814345\t3\t3315\t2\t393.75\t1.0196319345405949\n3\t3393\t1\t260.0\t1.5009563026568116\t3\t3393\t2\t470.25\t1.129275872154205\n3\t10427\t1\t241.33333333333334\t1.5634035191786233\t3\t10427\t2\t381.25\t1.0623056061004696\n4\t16211\t1\t257.6666666666667\t1.6381074811154002\t4\t16211\t2\t352.25\t1.055236934125639\n5\t8665\t1\t257.75\t1.8119629759287612\t5\t8665\t2\t368.0\t1.0243808356311228\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q4.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<customer_id:string,customer_first_name:string,customer_last_name:string,customer_preferred_cust_flag:string,customer_birth_country:string,customer_login:string,customer_email_address:string>\n-- !query output\nAAAAAAAAMHOLAAAA\tTerri               \tCook                          \tN\tCOSTA RICA\tNULL\tTerri.Cook@Vz02fJPUlPO.edu                        \nAAAAAAAANBECBAAA\tMichael             \tLombardi                      \tY\tZIMBABWE\tNULL\tMichael.Lombardi@J.com\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q40.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<w_state:string,i_item_id:string,sales_before:decimal(23,2),sales_after:decimal(23,2)>\n-- !query output\nTN\tAAAAAAAAAAEAAAAA\t51.75\t-61.82\nTN\tAAAAAAAAAANDAAAA\t131.09\t324.87\nTN\tAAAAAAAAAAPBAAAA\t50.90\t146.62\nTN\tAAAAAAAAACACAAAA\t113.63\t89.13\nTN\tAAAAAAAAACGCAAAA\t53.39\t37.02\nTN\tAAAAAAAAACKCAAAA\t65.49\t99.60\nTN\tAAAAAAAAACPCAAAA\t0.00\t73.18\nTN\tAAAAAAAAADACAAAA\t132.24\t0.00\nTN\tAAAAAAAAADBBAAAA\t211.20\t0.00\nTN\tAAAAAAAAADNAAAAA\t202.46\t0.00\nTN\tAAAAAAAAAEFCAAAA\t45.57\t38.18\nTN\tAAAAAAAAAEGEAAAA\t-19.57\t109.62\nTN\tAAAAAAAAAELBAAAA\t34.98\t144.32\nTN\tAAAAAAAAAFBEAAAA\t130.26\t3.53\nTN\tAAAAAAAAAFFDAAAA\t27.12\t104.61\nTN\tAAAAAAAAAFJDAAAA\t-41.08\t0.00\nTN\tAAAAAAAAAFLDAAAA\t89.24\t0.00\nTN\tAAAAAAAAAFODAAAA\t16.39\t0.00\nTN\tAAAAAAAAAGEBAAAA\t146.51\t20.38\nTN\tAAAAAAAAAGIDAAAA\t2.84\t0.00\nTN\tAAAAAAAAAGLDAAAA\t-40.82\t75.88\nTN\tAAAAAAAAAGPCAAAA\t136.76\t190.61\nTN\tAAAAAAAAAHBAAAAA\t66.92\t219.76\nTN\tAAAAAAAAAHPBAAAA\t148.73\t-316.99\nTN\tAAAAAAAAAIADAAAA\t230.28\t132.32\nTN\tAAAAAAAAAIGDAAAA\t190.52\t0.00\nTN\tAAAAAAAAAINCAAAA\t207.61\t208.83\nTN\tAAAAAAAAAJHDAAAA\t-39.87\t215.04\nTN\tAAAAAAAAAJIDAAAA\t-351.64\t142.38\nTN\tAAAAAAAAAJOAAAAA\t58.56\t0.00\nTN\tAAAAAAAAAKABAAAA\t110.04\t0.00\nTN\tAAAAAAAAAKFEAAAA\t-284.41\t91.42\nTN\tAAAAAAAAALAAAAAA\t-4.78\t133.34\nTN\tAAAAAAAAALDCAAAA\t39.16\t0.00\nTN\tAAAAAAAAALFEAAAA\t27.83\t212.44\nTN\tAAAAAAAAALIDAAAA\t0.00\t118.88\nTN\tAAAAAAAAALOAAAAA\t4.18\t0.00\nTN\tAAAAAAAAALOBAAAA\t0.00\t0.00\nTN\tAAAAAAAAALPAAAAA\t-30.21\t10.95\nTN\tAAAAAAAAAMJBAAAA\t-24.54\t85.92\nTN\tAAAAAAAAANLCAAAA\t84.38\t67.38\nTN\tAAAAAAAAANPAAAAA\t74.98\t3.48\nTN\tAAAAAAAAANPDAAAA\t484.43\t90.58\nTN\tAAAAAAAAAOFAAAAA\t0.00\t91.90\nTN\tAAAAAAAAAOMDAAAA\t188.82\t29.34\nTN\tAAAAAAAAAPPBAAAA\t0.00\t146.30\nTN\tAAAAAAAABABCAAAA\t252.04\t27.49\nTN\tAAAAAAAABADAAAAA\t27.80\t35.00\nTN\tAAAAAAAABAGAAAAA\t17.30\t2.53\nTN\tAAAAAAAABCCCAAAA\t-254.31\t0.00\nTN\tAAAAAAAABDCBAAAA\t38.34\t101.58\nTN\tAAAAAAAABECDAAAA\t223.68\t112.52\nTN\tAAAAAAAABEMCAAAA\t0.00\t160.01\nTN\tAAAAAAAABGECAAAA\t0.00\t224.53\nTN\tAAAAAAAABGIBAAAA\t105.83\t0.00\nTN\tAAAAAAAABGNCAAAA\t0.00\t72.03\nTN\tAAAAAAAABHACAAAA\t0.00\t0.00\nTN\tAAAAAAAABHPCAAAA\t97.28\t190.95\nTN\tAAAAAAAABJFEAAAA\t70.68\t38.07\nTN\tAAAAAAAABJMAAAAA\t31.23\t140.56\nTN\tAAAAAAAABKEBAAAA\t133.61\t92.51\nTN\tAAAAAAAABLBDAAAA\t50.89\t37.68\nTN\tAAAAAAAABLEDAAAA\t-119.59\t0.00\nTN\tAAAAAAAABLOCAAAA\t112.88\t62.95\nTN\tAAAAAAAABMCBAAAA\t16.30\t70.83\nTN\tAAAAAAAABMOBAAAA\t54.02\t130.38\nTN\tAAAAAAAABNEBAAAA\t48.98\t-803.52\nTN\tAAAAAAAABOKDAAAA\t0.00\t135.88\nTN\tAAAAAAAABONAAAAA\t30.07\t213.69\nTN\tAAAAAAAABPPAAAAA\t4.77\t91.54\nTN\tAAAAAAAACACEAAAA\t244.10\t0.00\nTN\tAAAAAAAACALAAAAA\t-832.53\t0.00\nTN\tAAAAAAAACALBAAAA\t0.00\t-641.98\nTN\tAAAAAAAACBDBAAAA\t45.72\t145.13\nTN\tAAAAAAAACBHBAAAA\t102.42\t0.00\nTN\tAAAAAAAACBICAAAA\t0.95\t85.93\nTN\tAAAAAAAACBJCAAAA\t89.33\t143.00\nTN\tAAAAAAAACBNAAAAA\t198.92\t133.15\nTN\tAAAAAAAACCABAAAA\t61.26\t65.34\nTN\tAAAAAAAACCJBAAAA\t4.84\t0.00\nTN\tAAAAAAAACCPDAAAA\t280.66\t283.09\nTN\tAAAAAAAACDECAAAA\t-667.40\t-693.66\nTN\tAAAAAAAACDIBAAAA\t3.73\t148.26\nTN\tAAAAAAAACEACAAAA\t0.00\t-978.31\nTN\tAAAAAAAACEHDAAAA\t66.23\t123.74\nTN\tAAAAAAAACEICAAAA\t30.14\t117.63\nTN\tAAAAAAAACEODAAAA\t0.00\t1.28\nTN\tAAAAAAAACFKDAAAA\t0.00\t325.59\nTN\tAAAAAAAACFLCAAAA\t13.34\t76.65\nTN\tAAAAAAAACGCBAAAA\t-1636.68\t167.66\nTN\tAAAAAAAACGCDAAAA\t0.00\t8.58\nTN\tAAAAAAAACGJDAAAA\t-422.98\t490.30
\nTN\tAAAAAAAACGMAAAAA\t142.04\t196.75\nTN\tAAAAAAAACHCDAAAA\t-936.02\t96.99\nTN\tAAAAAAAACIICAAAA\t-175.56\t40.39\nTN\tAAAAAAAACIJBAAAA\t0.00\t16.71\nTN\tAAAAAAAACIMDAAAA\t266.31\t0.00\nTN\tAAAAAAAACJBEAAAA\t-5.33\t22.57\nTN\tAAAAAAAACJDDAAAA\t72.37\t0.00\nTN\tAAAAAAAACJGCAAAA\t4.59\t237.09\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q41.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_product_name:string>\n-- !query output\nantieingeseableought                              \neingeseeseese                                     \neseeingeseableought                               \noughtoughtablecally                               \npriantin stoughtought\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q42.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<d_year:int,i_category_id:int,i_category:string,sum(ss_ext_sales_price):decimal(17,2)>\n-- !query output\n2000\t7\tHome                                              \t494603.64\n2000\t2\tMen                                               \t390852.57\n2000\t4\tShoes                                             \t378230.23\n2000\t3\tChildren                                          \t359411.01\n2000\t9\tBooks                                             \t319480.51\n2000\t10\tElectronics                                       \t317086.16\n2000\t8\tSports                                            \t287853.86\n2000\t6\tJewelry                                           \t278786.18\n2000\t1\tWomen                                             \t245897.86\n2000\t5\tMusic                                             \t189405.76\n2000\tNULL\tNULL\t39507.19\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q43.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<s_store_name:string,s_store_id:string,sun_sales:decimal(17,2),mon_sales:decimal(17,2),tue_sales:decimal(17,2),wed_sales:decimal(17,2),thu_sales:decimal(17,2),fri_sales:decimal(17,2),sat_sales:decimal(17,2)>\n-- !query output\nable\tAAAAAAAACAAAAAAA\t495537.85\t454457.37\t480383.13\t468495.25\t496050.12\t471996.47\t510311.54\nation\tAAAAAAAAHAAAAAAA\t485290.49\t465106.17\t462449.17\t478799.15\t521647.09\t474980.87\t484757.85\nbar\tAAAAAAAAKAAAAAAA\t510374.60\t458247.16\t464054.12\t473015.97\t487045.12\t495531.52\t502011.00\neing\tAAAAAAAAIAAAAAAA\t513799.80\t464451.67\t440681.32\t501857.36\t476201.82\t452754.56\t481703.98\nese\tAAAAAAAAEAAAAAAA\t529524.71\t460191.13\t492178.33\t458067.77\t488508.48\t477658.66\t490178.23\nought\tAAAAAAAABAAAAAAA\t464514.26\t460448.28\t465249.85\t482655.67\t474754.90\t479860.76\t474064.44\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q44.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<rnk:int,best_performing:string,worst_performing:string>\n-- !query output\n1\toughtantiought                                    \teingbarantiationought                             \n2\tbarprin steing                                    \tcallyableationpri                                 \n3\tableoughtantipriought                             \tablebarcallypriought                              \n4\toughtcallyoughtation                              \tcallyn steseable                                  \n5\toughteseesecally                                  \tableoughtbarn st                                  \n6\teingationationanti                                \tcallyableoughtn st                                \n7\tablecallyought                                    \tbarbarcallyableought                              \n8\tn steingprieseought                               \teseeingationeseought                              \n9\tpriprieinganti                                    \tbareseprieseought                                 \n10\toughtcallybaroughtought                           \teingcallyantipri\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q45.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<ca_zip:string,ca_city:string,sum(ws_sales_price):decimal(17,2)>\n-- !query output\n12150     \tMontezuma\t11.23\n20525     \tRyan\t57.27\n21087     \tMacedonia\t23.20\n21933     \tMount Pleasant\t51.15\n22924     \tBelleville\t25.42\n24289     \tNULL\t17.89\n26098     \tFive Points\t27.24\n45281     \tBethel\t113.11\n48014     \tClifton\t20.69\n48828     \tGreenwood\t58.48\n50411     \tCedar Grove\t70.45\n51933     \tMount Pleasant\t23.43\n54536     \tFriendship\t123.37\n58605     \tAntioch\t95.49\n65817     \tBridgeport\t47.98\n65867     \tRiceville\t11.06\n69843     \tOakland\t11.82\n71944     \tGravel Hill\t23.40\n76971     \tWilson\t111.77\n78048     \tSalem\t188.36\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q46.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_last_name:string,c_first_name:string,ca_city:string,bought_city:string,ss_ticket_number:int,amt:decimal(17,2),profit:decimal(17,2)>\n-- !query output\nNULL\tNULL\tAntioch\tFlorence\t19942\t6675.56\t-20224.05\nNULL\tNULL\tAntioch\tJamestown\t189909\t188.04\t4440.84\nNULL\tNULL\tAntioch\tSulphur Springs\t211628\t0.00\t1690.47\nNULL\tNULL\tArlington\tLeesburg\t112878\t1749.37\t-4137.65\nNULL\tNULL\tArlington\tMount Olive\t186130\t3625.05\t-9174.03\nNULL\tNULL\tArthur\tBridgeport\t89194\t1207.33\t-3747.04\nNULL\tNULL\tAshland\tAntioch\t25749\t1184.60\t-22285.21\nNULL\tNULL\tAshland\tCherry Valley\t92172\t2534.66\t-5553.97\nNULL\tNULL\tAshland\tSuperior\t91601\t7305.88\t-10487.66\nNULL\tNULL\tBelmont\tHamilton\t1029\t935.93\t-1026.72\nNULL\tNULL\tBethel\tOakwood\t179326\t2593.05\t-13662.26\nNULL\tNULL\tBethel\tWoodland\t95034\t407.35\t-15767.89\nNULL\tNULL\tBridgeport\tRiverside\t23264\t89.15\t-6827.82\nNULL\tNULL\tBridgeport\tWildwood\t90648\t4796.33\t-6445.14\nNULL\tNULL\tBrownsville\tOak Grove\t114024\t2720.05\t-18657.62\nNULL\tNULL\tBrunswick\tValley View\t153419\t0.00\t-13840.52\nNULL\tNULL\tBuena Vista\tClearview\t109316\t257.04\t-11539.44\nNULL\tNULL\tBuena Vista\tDeerfield\t17940\t0.00\t-1431.61\nNULL\tNULL\tBunker Hill\tColfax\t45044\t2803.00\t2310.74\nNULL\tNULL\tCedar Grove\tBrownsville\t17159\t865.78\t-9843.40\nNULL\tNULL\tCenterville\tFairfield\t11688\t604.66\t-7431.13\nNULL\tNULL\tCenterville\tFisher\t24700\t330.27\t-8639.65\nNULL\tNULL\tCenterville\tGlenwood\t27267\t0.00\t-7024.69\nNULL\tNULL\tClifton\tSpringhill\t190087\t2987.19\t-13925.26\nNULL\tNULL\tClinton\tEdgewood\t84531\t8561.97\t-28472.50\nNULL\tNULL\tClinton\tOakland\t179459\t879.23\t-875.68\nNULL\tNULL\tConcord\tLakewood\t81197\t1244.92\t866.30\nNULL\tNULL\tCrossroads\tMount Olive\t229868\t223.72\t139.81\nNULL\tNULL\tEllsworth\tShiloh\t103474\t1412.84\t-8861.07\nNULL\tNULL\tEnterprise\tFive Forks\t234468\t2248.50\t-7218.60\nNULL\tNULL\tEnterprise\tHillcrest\t216432\t529.37\t-2245.35\nNULL\tNULL\tFairfield\tArlington\t75875\t565.09\t-14322.52\nNULL\tNULL\tFairview\tFriendship\t172926\t1897.88\t-11040.49\nNULL\tNULL\tFairview\tMount Olive\t127246\t7793.58\t-12492.10\nNULL\tNULL\tFarmington\tSpringfield\t222902\t1213.28\t-1240.73\nNULL\tNULL\tFayette\tKingston\t32825\t0.00\t-6315.77\nNULL\tNULL\tFerguson\tBelmont\t238393\t11864.37\t-26448.01\nNULL\tNULL\tFive Forks\tFlorence\t224673\t1507.52\t-1549.30\nNULL\tNULL\tFive Forks\tNewport\t131275\t3957.94\t-8960.42\nNULL\tNULL\tFive Points\tBrownsville\t122401\t166.11\t-12890.14\nNULL\tNULL\tFive Points\tShady Grove\t127068\t41.49\t-3127.13\nNULL\tNULL\tFive Points\tUnion\t233626\t243.82\t-13369.20\nNULL\tNULL\tFlorence\tDeerfield\t181713\t2673.95\t-11875.68\nNULL\tNULL\tForest Hills\tShiloh\t76517\t2206.26\t-20795.18\nNULL\tNULL\tFriendship\tHamilton\t49257\t2542.47\t14.28\nNULL\tNULL\tFriendship\tLebanon\t230509\t7322.26\t-3897.95\nNULL\tNULL\tFriendship\tSpringdale\t55155\t187.12\t-8555.50\nNULL\tNULL\tGlendale\tLiberty\t127860\t179.92\t-9645.97\nNULL\tNULL\tGlendale\tSpringfield\t107410\t236.04\t-11521.38\nNULL\tNULL\tGlendale\tStringtown\t31383\t0.00\t-5195.83\nNULL\tNULL\tGlendale\tUnion\t3484\t793.55\t5358.57\nNULL\tNULL\tGlenwood\tArlington\t227159\t236.19\t-10529.52\nNULL\tNULL\tGlenwood\tBelmont\t72581\t1808.15\t-7585.34\nNULL\tNULL\tGravel 
Hill\tFarmington\t62500\t0.00\t-1479.92\nNULL\tNULL\tGreenville\tBethel\t84058\t225.62\t-1544.42\nNULL\tNULL\tGreenville\tMarion\t221320\t669.39\t-7120.33\nNULL\tNULL\tGreenville\tSpring Hill\t47915\t11384.61\t-24018.75\nNULL\tNULL\tGreenwood\tPleasant Hill\t4709\t822.69\t-17295.72\nNULL\tNULL\tGreenwood\tSpringdale\t204972\t263.33\t3302.02\nNULL\tNULL\tHamilton\tCedar Grove\t130494\t0.00\t-10426.22\nNULL\tNULL\tHamilton\tJackson\t170132\t5140.01\t-20602.97\nNULL\tNULL\tHamilton\tJamestown\t153509\t2156.76\t-9529.97\nNULL\tNULL\tHardy\tHarmony\t94856\t6219.17\t-11811.92\nNULL\tNULL\tHarmony\tPlainview\t226229\t9483.18\t-21260.39\nNULL\tNULL\tHighland Park\tClinton\t75671\t818.52\t-17086.12\nNULL\tNULL\tHillcrest\tArlington\t131958\t608.34\t-10931.94\nNULL\tNULL\tHillcrest\tBerea\t60399\t764.60\t-10177.35\nNULL\tNULL\tHillcrest\tHarmony\t196512\t1.70\t-9338.87\nNULL\tNULL\tHillcrest\tMount Olive\t129867\t7792.52\t-10929.36\nNULL\tNULL\tHillcrest\tWilson\t33115\t4120.87\t-17407.74\nNULL\tNULL\tIndian Village\tGolden\t98530\t284.90\t-11785.95\nNULL\tNULL\tJackson\tFlorence\t167217\t254.54\t-18883.41\nNULL\tNULL\tJackson\tPleasant Grove\t172765\t5569.64\t-7776.84\nNULL\tNULL\tJamestown\tSpring Valley\t125125\t4067.22\t-8344.61\nNULL\tNULL\tKingston\tFarmington\t196244\t0.00\t-4449.03\nNULL\tNULL\tKingston\tHighland\t45785\t5257.12\t-21282.22\nNULL\tNULL\tLakeside\tAshley\t233189\t1043.83\t-3335.36\nNULL\tNULL\tLakeside\tHighland Park\t194552\t677.94\t-11488.01\nNULL\tNULL\tLakeside\tJenkins\t71469\t434.92\t-9825.80\nNULL\tNULL\tLakeside\tLakewood\t181131\t42.20\t3148.24\nNULL\tNULL\tLakeside\tRiverview\t183085\t116.06\t-1892.62\nNULL\tNULL\tLakeside\tSummit\t200887\t554.47\t-8140.32\nNULL\tNULL\tLakeview\tGreenfield\t230719\t0.00\t-3529.16\nNULL\tNULL\tLancaster\tWaterloo\t197637\t976.52\t-11255.13\nNULL\tNULL\tLangdon\tBuena Vista\t12247\t1626.56\t-12689.74\nNULL\tNULL\tLebanon\tMount Zion\t2701\t11616.00\t-11748.63\nNULL\tNULL\tLincoln\tGreenville\t126303\t0.00\t-18688.05\nNULL\tNULL\tLouisville\tKingston\t215207\t986.85\t-9348.55\nNULL\tNULL\tMacedonia\tAshland\t81673\t142.17\t-56.02\nNULL\tNULL\tMacedonia\tGuthrie\t132070\t1459.41\t-9746.92\nNULL\tNULL\tMaple Grove\tFive Points\t40712\t4051.76\t-4016.52\nNULL\tNULL\tMarion\tLakewood\t203438\t8203.21\t-18846.98\nNULL\tNULL\tMidway\tSummerfield\t117504\t1027.47\t-21021.42\nNULL\tNULL\tMount Olive\tSalem\t34593\t910.10\t-6410.67\nNULL\tNULL\tMount Olive\tSpringdale\t73957\t2276.08\t-18160.48\nNULL\tNULL\tMount Vernon\tCrossroads\t216545\t0.00\t-10080.14\nNULL\tNULL\tMount Vernon\tWhite Oak\t25937\t2221.06\t-12575.90\nNULL\tNULL\tMurphy\tBuena Vista\t17149\t0.00\t-39.90\nNULL\tNULL\tNew Hope\tFarmington\t38760\t6931.68\t-10392.42\nNULL\tNULL\tNew Hope\tHillcrest\t119378\t625.62\t-15891.66\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q47.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_category:string,i_brand:string,s_store_name:string,s_company_name:string,d_year:int,d_moy:int,avg_monthly_sales:decimal(21,6),sum_sales:decimal(17,2),psum:decimal(17,2),nsum:decimal(17,2)>\n-- !query output\nShoes                                             \texportiedu pack #x                                \tation\tUnknown\t1999\t3\t5607.487500\t2197.48\t3271.66\t2831.67\nShoes                                             \texportiedu pack #x                                \tought\tUnknown\t1999\t2\t5643.938333\t2393.31\t4463.11\t2652.44\nShoes                                             \texportiedu pack #x                                \table\tUnknown\t1999\t4\t5640.362500\t2416.57\t3348.90\t2987.78\nMen                                               \tamalgimporto #x                                   \tought\tUnknown\t1999\t6\t4702.116667\t1534.95\t2666.37\t2514.13\nMen                                               \tedu packimporto #x                                \teing\tUnknown\t1999\t7\t5330.618333\t2218.63\t3182.74\t7436.83\nShoes                                             \texportiedu pack #x                                \tese\tUnknown\t1999\t4\t5338.852500\t2233.60\t3470.43\t2832.41\nMusic                                             \texportischolar #x                                 \tese\tUnknown\t1999\t5\t5139.465000\t2034.96\t3149.72\t3648.17\nMen                                               \texportiimporto #x                                 \teing\tUnknown\t1999\t7\t5748.707500\t2645.88\t3432.67\t7646.91\nMen                                               \timportoimporto #x                                 \tbar\tUnknown\t1999\t3\t4915.190000\t1815.92\t2884.81\t2956.00\nWomen                                             \tamalgamalg #x                                     \tation\tUnknown\t1999\t6\t4586.300000\t1508.07\t2992.12\t3059.37\nShoes                                             \texportiedu pack #x                                \teing\tUnknown\t1999\t4\t5374.032500\t2322.80\t2484.06\t3313.69\nMen                                               \timportoimporto #x                                 \tese\tUnknown\t1999\t6\t4596.057500\t1577.69\t2457.43\t2439.68\nMen                                               \tedu packimporto #x                                \tation\tUnknown\t1999\t3\t5839.670833\t2825.07\t3157.46\t3531.36\nMen                                               \tedu packimporto #x                                \table\tUnknown\t1999\t4\t5342.149167\t2347.82\t2787.31\t3588.16\nShoes                                             \texportiedu pack #x                                \tought\tUnknown\t1999\t3\t5643.938333\t2652.44\t2393.31\t3008.88\nMen                                               \texportiimporto #x                                 \tought\tUnknown\t1999\t3\t5475.719167\t2515.22\t2709.11\t2702.85\nShoes                                             \texportiedu pack #x                                \tese\tUnknown\t1999\t6\t5338.852500\t2388.17\t2832.41\t4216.10\nShoes                                             \texportiedu pack #x                                \tbar\tUnknown\t1999\t2\t5065.706667\t2119.85\t3439.05\t2640.59\nShoes                                             \tamalgedu pack #x                                  \tation\tUnknown\t1999\t2\t4713.009167\t1774.19\t3366.21\t1940.05\nShoes                                             
\texportiedu pack #x                                \teing\tUnknown\t1999\t3\t5374.032500\t2484.06\t2994.92\t2322.80\nMen                                               \tedu packimporto #x                                \table\tUnknown\t1999\t7\t5342.149167\t2454.72\t2766.42\t7665.06\nMen                                               \tamalgimporto #x                                   \tese\tUnknown\t1999\t4\t4741.993333\t1879.47\t3419.59\t2634.29\nMusic                                             \texportischolar #x                                 \table\tUnknown\t1999\t4\t4723.742500\t1866.30\t2384.82\t2931.55\nShoes                                             \timportoedu pack #x                                \tation\tUnknown\t1999\t4\t4732.205000\t1875.08\t2686.46\t3422.03\nChildren                                          \timportoexporti #x                                 \tese\tUnknown\t1999\t6\t4849.967500\t2002.59\t2590.23\t3380.67\nMen                                               \timportoimporto #x                                 \tation\tUnknown\t1999\t6\t4920.419167\t2077.36\t3402.55\t3347.44\nMen                                               \timportoimporto #x                                 \tese\tUnknown\t1999\t4\t4596.057500\t1762.17\t2728.14\t2457.43\nChildren                                          \tedu packexporti #x                                \teing\tUnknown\t1999\t4\t4739.001667\t1923.04\t2309.91\t2849.64\nMusic                                             \texportischolar #x                                 \tought\tUnknown\t1999\t3\t4816.848333\t2010.47\t2539.57\t2940.38\nShoes                                             \tedu packedu pack #x                               \tese\tUnknown\t1999\t2\t4707.697500\t1903.70\t2693.49\t3474.34\nShoes                                             \timportoedu pack #x                                \tought\tUnknown\t1999\t2\t4443.995833\t1642.32\t3972.50\t2319.04\nMen                                               \tedu packimporto #x                                \tation\tUnknown\t1999\t6\t5839.670833\t3053.78\t3151.27\t3622.65\nShoes                                             \texportiedu pack #x                                \tation\tUnknown\t1999\t4\t5607.487500\t2831.67\t2197.48\t4187.54\nMen                                               \tedu packimporto #x                                \tought\tUnknown\t1999\t3\t5598.894167\t2824.13\t3154.80\t3135.60\nShoes                                             \tamalgedu pack #x                                  \tation\tUnknown\t1999\t3\t4713.009167\t1940.05\t1774.19\t2496.18\nMen                                               \texportiimporto #x                                 \tought\tUnknown\t1999\t4\t5475.719167\t2702.85\t2515.22\t4364.56\nShoes                                             \tamalgedu pack #x                                  \tese\tUnknown\t1999\t4\t4596.537500\t1825.00\t2777.56\t3234.34\nMusic                                             \timportoscholar #x                                 \table\tUnknown\t1999\t6\t4332.550833\t1563.26\t2484.37\t2460.11\nMen                                               \tedu packimporto #x                                \tese\tUnknown\t1999\t2\t5102.436667\t2333.32\t3417.85\t2536.68\nMen                                               \texportiimporto #x                                 \tought\tUnknown\t1999\t2\t5475.719167\t2709.11\t4740.72\t2515.22\nMen                                               \texportiimporto #x    
                             \tought\tUnknown\t1999\t6\t5475.719167\t2723.08\t4364.56\t4000.42\nShoes                                             \texportiedu pack #x                                \tese\tUnknown\t1999\t2\t5338.852500\t2587.72\t3837.18\t3470.43\nChildren                                          \texportiexporti #x                                 \teing\tUnknown\t1999\t6\t4426.704167\t1683.86\t2186.50\t1970.76\nShoes                                             \texportiedu pack #x                                \tought\tUnknown\t1999\t6\t5643.938333\t2902.72\t3046.01\t3994.18\nShoes                                             \texportiedu pack #x                                \teing\tUnknown\t1999\t6\t5374.032500\t2637.31\t3313.69\t3158.65\nMen                                               \timportoimporto #x                                 \tation\tUnknown\t1999\t3\t4920.419167\t2183.89\t3598.63\t2841.78\nMen                                               \tedu packimporto #x                                \teing\tUnknown\t1999\t3\t5330.618333\t2602.65\t2613.45\t2624.00\nWomen                                             \tedu packamalg #x                                  \teing\tUnknown\t1999\t6\t4366.627500\t1640.10\t3159.43\t2882.91\nMusic                                             \texportischolar #x                                 \tese\tUnknown\t1999\t2\t5139.465000\t2413.63\t4627.89\t2773.28\nMen                                               \texportiimporto #x                                 \tese\tUnknown\t1999\t2\t4929.611667\t2207.89\t3940.06\t2246.76\nWomen                                             \tedu packamalg #x                                  \tese\tUnknown\t1999\t2\t4551.444167\t1833.08\t3729.30\t2940.96\nMen                                               \tedu packimporto #x                                \teing\tUnknown\t1999\t2\t5330.618333\t2613.45\t3859.71\t2602.65\nChildren                                          \tedu packexporti #x                                \tese\tUnknown\t1999\t2\t4640.935833\t1927.95\t3524.16\t2584.03\nChildren                                          \texportiexporti #x                                 \tese\tUnknown\t1999\t7\t4565.828333\t1853.57\t2846.40\t6447.36\nWomen                                             \tedu packamalg #x                                  \tese\tUnknown\t1999\t6\t4551.444167\t1840.98\t2478.75\t3174.74\nMen                                               \tedu packimporto #x                                \teing\tUnknown\t1999\t4\t5330.618333\t2624.00\t2602.65\t3722.18\nChildren                                          \timportoexporti #x                                 \table\tUnknown\t1999\t3\t5024.325000\t2319.29\t2691.31\t2350.53\nChildren                                          \tedu packexporti #x                                \tought\tUnknown\t1999\t2\t4836.831667\t2132.80\t3984.13\t2182.12\nChildren                                          \tedu packexporti #x                                \tese\tUnknown\t1999\t4\t4640.935833\t1950.03\t2584.03\t2919.51\nShoes                                             \timportoedu pack #x                                \tbar\tUnknown\t1999\t3\t4791.740833\t2100.87\t2211.40\t3278.75\nMen                                               \tedu packimporto #x                                \tation\tUnknown\t1999\t5\t5839.670833\t3151.27\t3531.36\t3053.78\nMen                                               \texportiimporto #x                                 
\tese\tUnknown\t1999\t3\t4929.611667\t2246.76\t2207.89\t3208.14\nMen                                               \tedu packimporto #x                                \tation\tUnknown\t1999\t2\t5839.670833\t3157.46\t3425.68\t2825.07\nMen                                               \texportiimporto #x                                 \tbar\tUnknown\t1999\t5\t5311.965833\t2631.11\t2819.12\t3022.54\nMusic                                             \timportoscholar #x                                 \tation\tUnknown\t1999\t2\t4388.770833\t1714.25\t2228.69\t2407.93\nShoes                                             \timportoedu pack #x                                \tese\tUnknown\t1999\t2\t4989.784167\t2315.46\t4430.70\t2976.26\nChildren                                          \timportoexporti #x                                 \table\tUnknown\t1999\t4\t5024.325000\t2350.53\t2319.29\t2984.92\nWomen                                             \tamalgamalg #x                                     \tation\tUnknown\t1999\t3\t4586.300000\t1917.37\t3282.12\t2557.46\nChildren                                          \tedu packexporti #x                                \tought\tUnknown\t1999\t3\t4836.831667\t2182.12\t2132.80\t3466.25\nShoes                                             \texportiedu pack #x                                \table\tUnknown\t1999\t5\t5640.362500\t2987.78\t2416.57\t3437.87\nMen                                               \timportoimporto #x                                 \tought\tUnknown\t1999\t4\t5071.261667\t2425.64\t2696.60\t2641.17\nShoes                                             \timportoedu pack #x                                \tought\tUnknown\t1999\t6\t4443.995833\t1801.87\t2297.76\t3078.40\nChildren                                          \timportoexporti #x                                 \tought\tUnknown\t1999\t2\t4480.353333\t1838.88\t3250.53\t2375.04\nMen                                               \tedu packimporto #x                                \tese\tUnknown\t1999\t7\t5102.436667\t2464.06\t2798.80\t6978.09\nShoes                                             \texportiedu pack #x                                \tought\tUnknown\t1999\t4\t5643.938333\t3008.88\t2652.44\t3046.01\nMen                                               \timportoimporto #x                                 \tought\tUnknown\t1999\t2\t5071.261667\t2438.50\t4282.24\t2696.60\nChildren                                          \tedu packexporti #x                                \tation\tUnknown\t1999\t6\t4525.318333\t1896.34\t2883.95\t2727.36\nChildren                                          \tamalgexporti #x                                   \tese\tUnknown\t1999\t4\t4212.889167\t1588.72\t2296.49\t3077.12\nShoes                                             \timportoedu pack #x                                \teing\tUnknown\t1999\t4\t4884.260000\t2261.01\t2502.36\t3210.47\nMen                                               \texportiimporto #x                                 \teing\tUnknown\t1999\t3\t5748.707500\t3132.90\t3747.34\t3768.23\nShoes                                             \texportiedu pack #x                                \table\tUnknown\t1999\t2\t5640.362500\t3027.85\t4285.15\t3348.90\nMusic                                             \tedu packscholar #x                                \tought\tUnknown\t1999\t3\t4490.221667\t1883.06\t2321.31\t2211.82\nChildren                                          \texportiexporti #x                                 
\table\tUnknown\t1999\t2\t4423.506667\t1816.43\t3192.19\t3334.11\nMusic                                             \tamalgscholar #x                                   \teing\tUnknown\t1999\t6\t4894.035000\t2288.21\t3156.77\t2845.45\nMusic                                             \tamalgscholar #x                                   \table\tUnknown\t1999\t5\t4829.768333\t2225.46\t3380.94\t2782.42\nWomen                                             \tedu packamalg #x                                  \tation\tUnknown\t1999\t3\t4268.562500\t1665.68\t1758.65\t2082.99\nChildren                                          \texportiexporti #x                                 \tbar\tUnknown\t1999\t3\t4267.374167\t1667.03\t2579.77\t2802.05\nMen                                               \texportiimporto #x                                 \table\tUnknown\t1999\t3\t4987.821667\t2387.78\t2962.40\t2928.83\nWomen                                             \timportoamalg #x                                   \tbar\tUnknown\t1999\t3\t4309.575833\t1710.42\t2004.22\t2742.41\nShoes                                             \texportiedu pack #x                                \tought\tUnknown\t1999\t5\t5643.938333\t3046.01\t3008.88\t2902.72\nMusic                                             \texportischolar #x                                 \tbar\tUnknown\t1999\t4\t4518.543333\t1922.23\t2683.26\t2227.02\nChildren                                          \tamalgexporti #x                                   \tbar\tUnknown\t1999\t3\t4441.555000\t1846.53\t3827.47\t3623.30\nMen                                               \texportiimporto #x                                 \teing\tUnknown\t1999\t5\t5748.707500\t3156.57\t3768.23\t3432.67\nMusic                                             \timportoscholar #x                                 \tese\tUnknown\t1999\t2\t4302.831667\t1712.70\t3561.46\t2414.41\nWomen                                             \tamalgamalg #x                                     \teing\tUnknown\t1999\t7\t4494.382500\t1904.98\t2308.58\t5603.14\nWomen                                             \tamalgamalg #x                                     \table\tUnknown\t1999\t4\t4582.713333\t1994.71\t2408.40\t2321.48\nMusic                                             \tedu packscholar #x                                \table\tUnknown\t1999\t3\t4426.623333\t1840.51\t2707.80\t3147.10\nShoes                                             \timportoedu pack #x                                \tbar\tUnknown\t1999\t2\t4791.740833\t2211.40\t3912.90\t2100.87\nShoes                                             \texportiedu pack #x                                \table\tUnknown\t1999\t7\t5640.362500\t3062.31\t3437.87\t6376.30\nChildren                                          \tamalgexporti #x                                   \teing\tUnknown\t1999\t3\t4733.152500\t2155.79\t2710.50\t2685.74\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q48.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<sum(ss_quantity):bigint>\n-- !query output\n28312\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q49.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,item:int,return_ratio:decimal(35,20),return_rank:int,currency_rank:int>\n-- !query output\ncatalog\t16735\t0.50505050505050505051\t1\t1\ncatalog\t12633\t0.69662921348314606742\t2\t2\ncatalog\t13967\t0.70000000000000000000\t3\t3\ncatalog\t12819\t0.70129870129870129870\t4\t8\ncatalog\t16155\t0.72043010752688172043\t5\t4\ncatalog\t17681\t0.75268817204301075269\t6\t5\ncatalog\t5975\t0.76404494382022471910\t7\t6\ncatalog\t11451\t0.76744186046511627907\t8\t7\ncatalog\t1689\t0.80219780219780219780\t9\t9\ncatalog\t10311\t0.81818181818181818182\t10\t10\nstore\t5111\t0.78947368421052631579\t1\t1\nstore\t11073\t0.83505154639175257732\t2\t3\nstore\t14429\t0.84782608695652173913\t3\t2\nstore\t15927\t0.86419753086419753086\t4\t4\nstore\t10171\t0.86868686868686868687\t5\t5\nstore\t12783\t0.88775510204081632653\t6\t6\nstore\t11075\t0.89743589743589743590\t7\t7\nstore\t12889\t0.95652173913043478261\t8\t8\nstore\t1939\t0.99000000000000000000\t9\t9\nstore\t10455\t1.00000000000000000000\t10\t10\nstore\t4333\t1.00000000000000000000\t10\t10\nstore\t12975\t1.00000000000000000000\t10\t10\nweb\t10485\t0.48863636363636363636\t1\t1\nweb\t4483\t0.52688172043010752688\t2\t2\nweb\t8833\t0.58241758241758241758\t3\t3\nweb\t1165\t0.61458333333333333333\t4\t4\nweb\t17197\t0.73076923076923076923\t5\t5\nweb\t10319\t0.73469387755102040816\t6\t6\nweb\t13159\t0.75257731958762886598\t7\t7\nweb\t9629\t0.77894736842105263158\t8\t8\nweb\t5909\t0.78378378378378378378\t9\t9\nweb\t7057\t0.86746987951807228916\t10\t10\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q5.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,id:string,sales:decimal(27,2),returns:decimal(27,2),profit:decimal(28,2)>\n-- !query output\nNULL\tNULL\t113832751.85\t3036754.81\t-31846972.27\ncatalog channel\tNULL\t39549896.80\t890805.91\t-4700833.96\ncatalog channel\tcatalog_pageAAAAAAAAAAABAAAA\t217002.51\t0.00\t13655.90\ncatalog channel\tcatalog_pageAAAAAAAAABABAAAA\t79307.07\t0.00\t-3323.53\ncatalog channel\tcatalog_pageAAAAAAAAACABAAAA\t109295.76\t3505.20\t8275.32\ncatalog channel\tcatalog_pageAAAAAAAAACJAAAAA\t0.00\t14.82\t-113.03\ncatalog channel\tcatalog_pageAAAAAAAAADABAAAA\t112264.38\t0.00\t-11845.75\ncatalog channel\tcatalog_pageAAAAAAAAADCBAAAA\t111462.44\t0.00\t2684.17\ncatalog channel\tcatalog_pageAAAAAAAAAEABAAAA\t60875.92\t753.55\t991.75\ncatalog channel\tcatalog_pageAAAAAAAAAECBAAAA\t89060.07\t0.00\t7186.17\ncatalog channel\tcatalog_pageAAAAAAAAAEPAAAAA\t0.00\t2466.17\t-1023.59\ncatalog channel\tcatalog_pageAAAAAAAAAFABAAAA\t42313.54\t0.00\t-25310.58\ncatalog channel\tcatalog_pageAAAAAAAAAFCBAAAA\t74908.61\t0.00\t-10384.86\ncatalog channel\tcatalog_pageAAAAAAAAAFPAAAAA\t0.00\t523.37\t-422.16\ncatalog channel\tcatalog_pageAAAAAAAAAGABAAAA\t120028.56\t0.00\t4694.80\ncatalog channel\tcatalog_pageAAAAAAAAAGBBAAAA\t0.00\t541.44\t-170.28\ncatalog channel\tcatalog_pageAAAAAAAAAGCBAAAA\t97228.89\t0.00\t-7579.73\ncatalog channel\tcatalog_pageAAAAAAAAAGPAAAAA\t0.00\t1814.55\t-667.34\ncatalog channel\tcatalog_pageAAAAAAAAAHABAAAA\t69941.58\t0.00\t-4444.41\ncatalog channel\tcatalog_pageAAAAAAAAAHBBAAAA\t0.00\t138.66\t-128.30\ncatalog channel\tcatalog_pageAAAAAAAAAHCBAAAA\t73967.17\t0.00\t-5904.39\ncatalog channel\tcatalog_pageAAAAAAAAAHPAAAAA\t0.00\t10125.01\t-3291.39\ncatalog channel\tcatalog_pageAAAAAAAAAICBAAAA\t95341.98\t0.00\t-10935.38\ncatalog channel\tcatalog_pageAAAAAAAAAIMAAAAA\t0.00\t247.30\t-261.67\ncatalog channel\tcatalog_pageAAAAAAAAAJCBAAAA\t91289.86\t0.00\t-38131.57\ncatalog channel\tcatalog_pageAAAAAAAAAJPAAAAA\t0.00\t19.98\t-102.55\ncatalog channel\tcatalog_pageAAAAAAAAAKABAAAA\t0.00\t3460.18\t-1790.30\ncatalog channel\tcatalog_pageAAAAAAAAAKCBAAAA\t1409.22\t0.00\t-2367.32\ncatalog channel\tcatalog_pageAAAAAAAAAKPAAAAA\t176800.33\t2203.74\t17358.83\ncatalog channel\tcatalog_pageAAAAAAAAALBBAAAA\t0.00\t1719.45\t-223.86\ncatalog channel\tcatalog_pageAAAAAAAAALCBAAAA\t9622.11\t0.00\t-995.37\ncatalog channel\tcatalog_pageAAAAAAAAALPAAAAA\t226923.74\t0.00\t-17678.34\ncatalog channel\tcatalog_pageAAAAAAAAAMCBAAAA\t1044.59\t0.00\t-371.42\ncatalog channel\tcatalog_pageAAAAAAAAAMPAAAAA\t210026.33\t0.00\t-4598.43\ncatalog channel\tcatalog_pageAAAAAAAAANABAAAA\t0.00\t4859.01\t-620.39\ncatalog channel\tcatalog_pageAAAAAAAAANPAAAAA\t204079.30\t6617.69\t-43707.08\ncatalog channel\tcatalog_pageAAAAAAAAAOCBAAAA\t3093.75\t0.00\t-3029.40\ncatalog channel\tcatalog_pageAAAAAAAAAOPAAAAA\t205664.00\t5.35\t-7249.23\ncatalog channel\tcatalog_pageAAAAAAAAAPCBAAAA\t11986.70\t0.00\t5352.16\ncatalog channel\tcatalog_pageAAAAAAAAAPPAAAAA\t259979.15\t0.00\t15500.93\ncatalog channel\tcatalog_pageAAAAAAAABAABAAAA\t205054.92\t0.00\t-13900.72\ncatalog channel\tcatalog_pageAAAAAAAABBABAAAA\t123870.49\t817.57\t-35169.40\ncatalog channel\tcatalog_pageAAAAAAAABCABAAAA\t110311.76\t6.60\t-2807.92\ncatalog channel\tcatalog_pageAAAAAAAABDABAAAA\t54375.62\t0.00\t-16298.08\ncatalog channel\tcatalog_pageAAAAAAAABDCBAAAA\t77225.26\t0.00\t-9404.24\ncatalog channel\tcatalog_pageAAAAAAAABDIAAAAA\t0.00\t2247.56\t-536.43\ncatalog 
channel\tcatalog_pageAAAAAAAABDPAAAAA\t0.00\t13366.54\t-4827.89\ncatalog channel\tcatalog_pageAAAAAAAABEABAAAA\t83653.23\t301.44\t-14544.63\ncatalog channel\tcatalog_pageAAAAAAAABECBAAAA\t102256.79\t0.00\t2795.53\ncatalog channel\tcatalog_pageAAAAAAAABEPAAAAA\t0.00\t4374.58\t-3395.35\ncatalog channel\tcatalog_pageAAAAAAAABFABAAAA\t77015.36\t0.00\t-4370.15\ncatalog channel\tcatalog_pageAAAAAAAABFBBAAAA\t0.00\t2435.93\t-3450.68\ncatalog channel\tcatalog_pageAAAAAAAABFCBAAAA\t105831.18\t0.00\t12908.55\ncatalog channel\tcatalog_pageAAAAAAAABFPAAAAA\t0.00\t203.00\t-162.37\ncatalog channel\tcatalog_pageAAAAAAAABGABAAAA\t72193.84\t0.00\t-10412.85\ncatalog channel\tcatalog_pageAAAAAAAABGCBAAAA\t98074.44\t0.00\t-1396.21\ncatalog channel\tcatalog_pageAAAAAAAABGIAAAAA\t0.00\t500.56\t-315.89\ncatalog channel\tcatalog_pageAAAAAAAABGPAAAAA\t0.00\t257.80\t-125.00\ncatalog channel\tcatalog_pageAAAAAAAABHABAAAA\t109670.37\t0.00\t17680.12\ncatalog channel\tcatalog_pageAAAAAAAABHBBAAAA\t0.00\t332.44\t-218.41\ncatalog channel\tcatalog_pageAAAAAAAABHCBAAAA\t73731.28\t0.00\t-14352.31\ncatalog channel\tcatalog_pageAAAAAAAABHPAAAAA\t0.00\t1366.24\t-3867.70\ncatalog channel\tcatalog_pageAAAAAAAABICBAAAA\t71039.47\t0.00\t2742.64\ncatalog channel\tcatalog_pageAAAAAAAABIPAAAAA\t0.00\t14223.07\t-7591.98\ncatalog channel\tcatalog_pageAAAAAAAABJBBAAAA\t0.00\t96.66\t-140.68\ncatalog channel\tcatalog_pageAAAAAAAABJCBAAAA\t5619.24\t0.00\t-2531.23\ncatalog channel\tcatalog_pageAAAAAAAABJPAAAAA\t0.00\t3881.66\t-2212.48\ncatalog channel\tcatalog_pageAAAAAAAABKCBAAAA\t5420.99\t0.00\t-5691.26\ncatalog channel\tcatalog_pageAAAAAAAABKPAAAAA\t161336.55\t0.00\t-513.02\ncatalog channel\tcatalog_pageAAAAAAAABLBBAAAA\t0.00\t303.27\t-173.59\ncatalog channel\tcatalog_pageAAAAAAAABLCBAAAA\t201.70\t0.00\t-393.37\ncatalog channel\tcatalog_pageAAAAAAAABLPAAAAA\t162028.76\t54.34\t-54551.60\ncatalog channel\tcatalog_pageAAAAAAAABMABAAAA\t0.00\t163.32\t-383.25\ncatalog channel\tcatalog_pageAAAAAAAABMBBAAAA\t0.00\t483.00\t-4825.77\ncatalog channel\tcatalog_pageAAAAAAAABMCBAAAA\t2213.59\t0.00\t-212.78\ncatalog channel\tcatalog_pageAAAAAAAABMPAAAAA\t140634.77\t138.84\t-39261.21\ncatalog channel\tcatalog_pageAAAAAAAABNABAAAA\t0.00\tNULL\tNULL\ncatalog channel\tcatalog_pageAAAAAAAABNBBAAAA\t0.00\t2152.88\t-910.95\ncatalog channel\tcatalog_pageAAAAAAAABNCBAAAA\t21066.23\t0.00\t5619.74\ncatalog channel\tcatalog_pageAAAAAAAABNPAAAAA\t168937.50\t0.00\t-7428.35\ncatalog channel\tcatalog_pageAAAAAAAABOABAAAA\t0.00\t756.28\t-521.22\ncatalog channel\tcatalog_pageAAAAAAAABOBBAAAA\t0.00\t289.38\t-262.27\ncatalog channel\tcatalog_pageAAAAAAAABOCBAAAA\t3380.37\t0.00\t337.45\ncatalog channel\tcatalog_pageAAAAAAAABOIAAAAA\t0.00\t553.08\t-124.33\ncatalog channel\tcatalog_pageAAAAAAAABOPAAAAA\t178060.89\t453.33\t-39647.59\ncatalog channel\tcatalog_pageAAAAAAAABPCBAAAA\t308.43\t0.00\t-6294.87\ncatalog channel\tcatalog_pageAAAAAAAABPPAAAAA\t230886.45\t423.60\t-7979.91\ncatalog channel\tcatalog_pageAAAAAAAACAABAAAA\t143217.90\t4739.60\t-14933.90\ncatalog channel\tcatalog_pageAAAAAAAACABBAAAA\t0.00\t2902.35\t-421.56\ncatalog channel\tcatalog_pageAAAAAAAACACBAAAA\t0.00\t275.93\t-770.61\ncatalog channel\tcatalog_pageAAAAAAAACBABAAAA\t82856.21\t0.00\t-33285.95\ncatalog channel\tcatalog_pageAAAAAAAACCABAAAA\t87477.08\t0.00\t818.09\ncatalog channel\tcatalog_pageAAAAAAAACDABAAAA\t83602.86\t0.00\t-33313.03\ncatalog channel\tcatalog_pageAAAAAAAACDCBAAAA\t58195.74\t0.00\t-16518.32\ncatalog channel\tcatalog_pageAAAAAAAACDPAAAAA\t0.00\t317.65\t-658.75\ncatalog 
channel\tcatalog_pageAAAAAAAACEABAAAA\t51882.61\t0.00\t-18201.87\ncatalog channel\tcatalog_pageAAAAAAAACECBAAAA\t87686.10\t0.00\t4983.58\ncatalog channel\tcatalog_pageAAAAAAAACEPAAAAA\t0.00\t9419.81\t-3490.18\ncatalog channel\tcatalog_pageAAAAAAAACFABAAAA\t100092.31\t0.00\t2640.22\ncatalog channel\tcatalog_pageAAAAAAAACFCBAAAA\t82085.45\t0.00\t-6014.66\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q50.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<s_store_name:string,s_company_id:int,s_street_number:string,s_street_name:string,s_street_type:string,s_suite_number:string,s_city:string,s_county:string,s_state:string,s_zip:string,30 days :bigint,31 - 60 days :bigint,61 - 90 days :bigint,91 - 120 days :bigint,>120 days :bigint>\n-- !query output\nable\t1\t666\tCedar Spruce\tAvenue         \tSuite 10  \tMidway\tWilliamson County\tTN\t31904     \t78\t53\t68\t63\t103\nation\t1\t405\t3rd \tWy             \tSuite 220 \tFairview\tWilliamson County\tTN\t35709     \t71\t65\t63\t43\t97\nbar\t1\t71\tCedar \tBlvd           \tSuite B   \tMidway\tWilliamson County\tTN\t31904     \t79\t54\t43\t67\t115\neing\t1\t914\tLake 11th\tRoad           \tSuite T   \tMidway\tWilliamson County\tTN\t31904     \t82\t45\t56\t49\t105\nese\t1\t120\t6th \tLane           \tSuite B   \tMidway\tWilliamson County\tTN\t31904     \t63\t52\t68\t43\t108\nought\t1\t32\t3rd \tStreet         \tSuite 220 \tMidway\tWilliamson County\tTN\t31904     \t75\t63\t51\t51\t99\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q51.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<item_sk:int,d_date:date,web_sales:decimal(27,2),store_sales:decimal(27,2),web_cumulative:decimal(27,2),store_cumulative:decimal(27,2)>\n-- !query output\n8\t2000-01-16\tNULL\t20.83\t35.36\t20.83\n8\t2000-01-18\t83.21\tNULL\t83.21\t20.83\n8\t2000-02-11\tNULL\t69.88\t83.21\t69.88\n14\t2000-01-14\tNULL\t31.65\t183.58\t31.65\n14\t2000-01-29\tNULL\t63.92\t183.58\t63.92\n14\t2000-02-10\tNULL\t103.12\t183.58\t103.12\n14\t2000-02-15\tNULL\t157.03\t183.58\t157.03\n14\t2000-02-16\tNULL\t183.40\t183.58\t183.40\n17\t2000-05-24\t439.75\tNULL\t439.75\t398.92\n17\t2000-06-09\tNULL\t407.77\t439.75\t407.77\n17\t2000-06-13\tNULL\t408.53\t439.75\t408.53\n26\t2000-01-14\t124.68\tNULL\t124.68\t61.34\n26\t2000-01-29\tNULL\t72.77\t124.68\t72.77\n26\t2000-02-06\t126.41\tNULL\t126.41\t72.77\n29\t2000-02-05\t160.79\tNULL\t160.79\t141.46\n29\t2000-02-06\t167.35\tNULL\t167.35\t141.46\n29\t2000-02-07\t214.40\tNULL\t214.40\t141.46\n29\t2000-02-15\tNULL\t164.35\t214.40\t164.35\n29\t2000-02-18\tNULL\t172.82\t214.40\t172.82\n32\t2000-01-21\t59.48\tNULL\t59.48\t6.20\n32\t2000-01-25\tNULL\t10.50\t59.48\t10.50\n32\t2000-02-03\tNULL\t38.01\t59.48\t38.01\n32\t2000-02-09\t109.67\tNULL\t109.67\t38.01\n32\t2000-02-23\tNULL\t43.04\t109.67\t43.04\n32\t2000-02-26\tNULL\t81.08\t109.67\t81.08\n32\t2000-03-05\tNULL\t99.43\t109.67\t99.43\n32\t2000-03-06\tNULL\t99.43\t109.67\t99.43\n43\t2000-04-09\t172.09\tNULL\t172.09\t169.55\n43\t2000-04-23\t257.19\tNULL\t257.19\t207.76\n43\t2000-04-24\t272.83\tNULL\t272.83\t207.76\n43\t2000-05-15\tNULL\t221.70\t272.83\t221.70\n44\t2000-01-02\t75.87\t26.58\t75.87\t26.58\n44\t2000-01-05\t125.14\tNULL\t125.14\t26.58\n47\t2000-02-14\t148.00\tNULL\t148.00\t74.22\n61\t2000-01-19\t140.35\t75.64\t140.35\t75.64\n61\t2000-01-28\t231.75\tNULL\t231.75\t75.64\n61\t2000-02-04\tNULL\t156.79\t231.75\t156.79\n61\t2000-02-27\tNULL\t224.70\t231.75\t224.70\n67\t2000-01-01\t88.63\t46.38\t88.63\t46.38\n67\t2000-01-02\tNULL\t53.21\t88.63\t53.21\n67\t2000-01-04\t185.14\t100.85\t185.14\t100.85\n67\t2000-01-29\t309.98\tNULL\t309.98\t100.85\n67\t2000-02-14\tNULL\t148.29\t309.98\t148.29\n67\t2000-02-20\tNULL\t174.01\t309.98\t174.01\n67\t2000-03-05\tNULL\t199.85\t309.98\t199.85\n67\t2000-03-07\tNULL\t284.98\t309.98\t284.98\n74\t2000-01-06\t71.82\tNULL\t71.82\t9.01\n80\t2000-03-16\tNULL\t67.38\t169.88\t67.38\n80\t2000-04-10\tNULL\t137.63\t169.88\t137.63\n80\t2000-07-01\t309.57\tNULL\t309.57\t273.52\n80\t2000-07-02\t455.94\tNULL\t455.94\t273.52\n80\t2000-07-25\tNULL\t443.48\t455.94\t443.48\n80\t2000-08-04\tNULL\t443.48\t455.94\t443.48\n82\t2000-01-01\t85.75\t64.27\t85.75\t64.27\n83\t2000-01-08\t134.96\tNULL\t134.96\t8.10\n83\t2000-02-01\tNULL\t52.54\t134.96\t52.54\n83\t2000-02-27\tNULL\t76.22\t134.96\t76.22\n83\t2000-03-01\tNULL\t112.53\t134.96\t112.53\n83\t2000-03-02\t248.36\tNULL\t248.36\t112.53\n83\t2000-03-05\tNULL\t118.07\t248.36\t118.07\n83\t2000-03-16\t250.29\tNULL\t250.29\t118.07\n83\t2000-03-19\tNULL\t163.65\t250.29\t163.65\n83\t2000-04-07\t289.65\tNULL\t289.65\t163.65\n83\t2000-04-08\tNULL\t201.68\t289.65\t201.68\n83\t2000-04-10\t357.89\tNULL\t357.89\t201.68\n83\t2000-04-20\t364.69\tNULL\t364.69\t201.68\n83\t2000-04-26\t371.15\tNULL\t371.15\t201.68\n83\t2000-05-06\tNULL\t206.29\t371.15\t206.29\n83\t2000-05-27\tNULL\t228.07\t371.15\t228.07\n83\t2000-06-06\t393.25\tNULL\t393.25\t228.07\n83\t2000-06-09\tNULL\t235.88\t393.25\t235.88\n83\t2000-06-10\tNULL\t299.68\t393.25\t299.68\n85\t2000-01-16\t48.36\tNULL\t48.36\t42.45\n85\t2000-01-
18\tNULL\t47.47\t48.36\t47.47\n85\t2000-02-02\t126.73\tNULL\t126.73\t47.47\n85\t2000-04-01\tNULL\t47.47\t126.73\t47.47\n85\t2000-04-08\tNULL\t58.91\t126.73\t58.91\n85\t2000-04-15\tNULL\t69.74\t126.73\t69.74\n85\t2000-05-01\tNULL\t93.80\t126.73\t93.80\n86\t2000-01-16\tNULL\t19.92\t38.25\t19.92\n86\t2000-01-17\tNULL\t34.32\t38.25\t34.32\n86\t2000-05-06\t257.78\tNULL\t257.78\t204.60\n86\t2000-05-08\tNULL\t222.06\t257.78\t222.06\n86\t2000-05-12\tNULL\t235.00\t257.78\t235.00\n89\t2000-02-01\t143.30\tNULL\t143.30\t125.19\n89\t2000-02-11\t273.46\tNULL\t273.46\t125.19\n89\t2000-03-01\tNULL\t131.53\t273.46\t131.53\n89\t2000-03-24\tNULL\t200.30\t273.46\t200.30\n89\t2000-04-04\tNULL\t212.25\t273.46\t212.25\n89\t2000-04-05\tNULL\t265.57\t273.46\t265.57\n97\t2000-01-22\t114.93\tNULL\t114.93\t55.81\n97\t2000-01-31\tNULL\t60.35\t114.93\t60.35\n97\t2000-03-07\t263.86\tNULL\t263.86\t220.68\n98\t2000-02-10\tNULL\t1.04\t42.10\t1.04\n98\t2000-02-23\t44.27\tNULL\t44.27\t1.04\n98\t2000-02-24\t274.58\tNULL\t274.58\t1.04\n98\t2000-02-29\t301.60\tNULL\t301.60\t1.04\n98\t2000-03-13\t309.78\tNULL\t309.78\t1.04\n98\t2000-03-17\t335.79\tNULL\t335.79\t1.04\n98\t2000-03-27\tNULL\t37.67\t335.79\t37.67\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q52.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<d_year:int,brand_id:int,brand:string,ext_price:decimal(17,2)>\n-- !query output\n2000\t1002002\timportoamalg #x                                   \t98247.74\n2000\t4001001\tamalgedu pack #x                                  \t97944.74\n2000\t2004002\tedu packimporto #x                                \t90901.39\n2000\t2004001\tedu packimporto #x                                \t89064.36\n2000\t9003003\texportimaxi #x                                    \t67483.21\n2000\t4004002\tedu packedu pack #x                               \t64713.03\n2000\t2002002\timportoimporto #x                                 \t61594.80\n2000\t2001001\tamalgimporto #x                                   \t57219.92\n2000\t10016008\tcorpamalgamalg #x                                 \t56274.63\n2000\t3002001\timportoexporti #x                                 \t52756.72\n2000\t3004001\tedu packexporti #x                                \t49336.05\n2000\t4003001\texportiedu pack #x                                \t47889.60\n2000\t1004001\tedu packbrand #x                                  \t46738.09\n2000\t3003002\texportiexporti #x                                 \t46024.13\n2000\t3004002\tedu packexporti #x                                \t42712.09\n2000\t6003005\tedu packmaxi #x                                   \t42381.36\n2000\t9015011\texportibrand #x                                   \t41249.43\n2000\t5001002\tamalgscholar #x                                   \t40525.36\n2000\t9003006\texportimaxi #x                                    \t40252.87\n2000\tNULL\tNULL\t39507.19\n2000\t5002001\timportoscholar #x                                 \t38829.75\n2000\t3003001\texportiexporti #x                                 \t36506.92\n2000\t4002001\timportoedu pack #x                                \t35994.46\n2000\t9013003\texportiunivamalg #x                               \t35549.32\n2000\t6003004\texporticorp #x                                    \t34314.00\n2000\t8010002\tunivmaxi #x                                       \t32664.31\n2000\t6005004\tscholarcorp #x                                    \t32175.90\n2000\t7011006\tamalgnameless #x                                  \t30979.02\n2000\t10008016\tnamelessunivamalg #x                             \t30424.25\n2000\t10014001\tedu packamalgamalg #x                             \t30406.55\n2000\t5003001\texportischolar #x                                 \t30278.44\n2000\t10003008\texportiunivamalg #x                               \t30127.51\n2000\t7002002\timportobrand #x                                   \t29602.59\n2000\t7005006\tscholarbrand #x                                   \t29319.83\n2000\t9012009\timportounivamalg #x                               \t29204.86\n2000\t1001001\timportoexporti #x                                 \t28800.60\n2000\t7011010\tamalgnameless #x                                 \t28614.32\n2000\t10007004\texportiamalg #x                                   \t28600.67\n2000\t4001002\tamalgedu pack #x                                  \t27963.59\n2000\t4002001\tedu packscholar #x                                \t27934.37\n2000\t10016002\tcorpamalgamalg #x                                 \t27695.20\n2000\t7015008\tscholarnameless #x                                \t27490.44\n2000\t8006010\tcorpnameless #x                                  \t27390.35\n2000\t6008004\tnamelesscorp #x                                   \t27296.01\n2000\t5002002\timportoscholar #x     
                            \t27037.76\n2000\t8003006\texportinameless #x                                \t26842.25\n2000\t1003001\texportiamalg #x                                   \t26808.25\n2000\t10001007\tamalgunivamalg #x                                 \t26031.20\n2000\t6011001\tamalgbrand #x                                     \t25567.26\n2000\t1001001\tamalgamalg #x                                     \t25210.15\n2000\t8002008\timportonameless #x                                \t24937.40\n2000\t5004002\tedu packscholar #x                                \t24800.08\n2000\t9007003\tbrandmaxi #x                                      \t24694.06\n2000\t4002002\timportoedu pack #x                                \t23924.37\n2000\t9008003\tnamelessmaxi #x                                   \t23871.65\n2000\t3001001\tamalgexporti #x                                   \t23023.19\n2000\t10009002\tmaxiunivamalg #x                                  \t22932.13\n2000\t8014006\tedu packmaxi #x                                   \t22164.19\n2000\t8008006\tnamelessnameless #x                               \t22159.19\n2000\t1002001\timportoamalg #x                                   \t21952.87\n2000\t10005002\tedu packedu pack #x                               \t21623.64\n2000\t9016008\tcorpunivamalg #x                                  \t21503.39\n2000\t8002007\timportonameless #x                                \t21370.99\n2000\t7013002\texportinameless #x                                \t21315.92\n2000\t5003001\timportonameless #x                                \t21140.49\n2000\t1003002\texportiamalg #x                                   \t20558.37\n2000\t7014009\tedu packedu pack #x                               \t20518.93\n2000\t4003001\tcorpnameless #x                                   \t20345.42\n2000\t3002001\tedu packedu pack #x                               \t19948.39\n2000\t6015004\tscholarbrand #x                                   \t19806.27\n2000\t7015004\tscholarnameless #x                                \t19805.86\n2000\t10005002\texportiimporto #x                                 \t19553.70\n2000\t6005003\tamalgbrand #x                                     \t18383.14\n2000\t10013010\texportiamalgamalg #x                             \t18349.28\n2000\t10010002\texportiexporti #x                                 \t18338.53\n2000\t2002001\tamalgimporto #x                                   \t18298.41\n2000\t10009016\tmaxiunivamalg #x                                 \t17942.21\n2000\t2002001\texportiexporti #x                                 \t17817.83\n2000\t4003002\texportiedu pack #x                                \t17709.48\n2000\t9004008\tedu packmaxi #x                                   \t17660.51\n2000\t7006005\tcorpbrand #x                                      \t17628.79\n2000\t10013006\texportiamalgamalg #x                              \t17579.59\n2000\t9002011\tbrandbrand #x                                     \t17177.90\n2000\t2001001\texportiexporti #x                                 \t16270.97\n2000\t6011003\tamalgbrand #x                                     \t16222.01\n2000\t8004002\tedu packnameless #x                               \t16054.52\n2000\t10008008\timportoexporti #x                                 \t15563.77\n2000\t6002008\timportocorp #x                                    \t15447.93\n2000\t4003001\tedu packimporto #x                                \t15313.51\n2000\t8015004\tscholarmaxi #x                                    \t15299.50\n2000\t8010001\tunivmaxi #x                 
                      \t15188.28\n2000\t7016008\tcorpnameless #x                                   \t15123.33\n2000\t6008001\tnamelesscorp #x                                   \t15096.05\n2000\t7009003\tmaxibrand #x                                      \t15073.42\n2000\t7015007\tscholarnameless #x                                \t14934.32\n2000\t1001002\tamalgamalg #x                                     \t14264.80\n2000\t7010005\tunivnameless #x                                   \t14247.64\n2000\t9005002\tscholarmaxi #x                                    \t14062.39\n2000\t7012008\timportonameless #x                                \t14043.20\n2000\t9010002\tunivunivamalg #x                                  \t13922.13\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q53.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_manufact_id:int,sum_sales:decimal(17,2),avg_quarterly_sales:decimal(21,6)>\n-- !query output\n132\t119.11\t258.580000\n132\t222.31\t258.580000\n132\t342.09\t258.580000\n132\t350.81\t258.580000\n612\t77.21\t301.970000\n612\t174.17\t301.970000\n612\t248.75\t301.970000\n612\t707.75\t301.970000\n650\t240.94\t329.890000\n650\t451.30\t329.890000\n483\t175.17\t341.280000\n483\t284.63\t341.280000\n483\t379.24\t341.280000\n483\t526.08\t341.280000\n39\t83.05\t347.250000\n39\t201.75\t347.250000\n39\t249.20\t347.250000\n39\t855.00\t347.250000\n659\t270.05\t357.907500\n659\t301.12\t357.907500\n659\t521.64\t357.907500\n487\t194.22\t363.632500\n487\t213.65\t363.632500\n487\t489.98\t363.632500\n487\t556.68\t363.632500\n423\t156.29\t372.700000\n423\t323.98\t372.700000\n423\t607.23\t372.700000\n872\t117.67\t375.220000\n872\t118.10\t375.220000\n872\t880.58\t375.220000\n556\t245.39\t386.777500\n556\t311.12\t386.777500\n556\t446.09\t386.777500\n556\t544.51\t386.777500\n179\t308.69\t388.540000\n179\t438.91\t388.540000\n274\t40.68\t390.370000\n274\t140.57\t390.370000\n274\t283.26\t390.370000\n274\t1096.97\t390.370000\n551\t154.35\t397.355000\n551\t327.31\t397.355000\n551\t696.52\t397.355000\n67\t271.97\t397.382500\n67\t305.28\t397.382500\n67\t621.24\t397.382500\n53\t110.01\t397.517500\n53\t250.03\t397.517500\n53\t468.76\t397.517500\n53\t761.27\t397.517500\n172\t154.34\t398.030000\n172\t347.87\t398.030000\n172\t439.68\t398.030000\n172\t650.23\t398.030000\n915\t210.94\t403.120000\n915\t359.81\t403.120000\n915\t678.57\t403.120000\n350\t30.44\t407.737500\n350\t328.11\t407.737500\n350\t339.56\t407.737500\n350\t932.84\t407.737500\n203\t164.68\t408.595000\n203\t305.41\t408.595000\n203\t735.85\t408.595000\n2\t239.50\t408.602500\n2\t289.78\t408.602500\n2\t696.25\t408.602500\n687\t70.08\t410.057500\n687\t80.72\t410.057500\n687\t368.29\t410.057500\n687\t1121.14\t410.057500\n113\t76.68\t414.170000\n113\t268.47\t414.170000\n113\t531.42\t414.170000\n113\t780.11\t414.170000\n161\t224.11\t416.682500\n161\t298.25\t416.682500\n161\t480.46\t416.682500\n161\t663.91\t416.682500\n233\t284.33\t421.205000\n233\t320.60\t421.205000\n233\t322.01\t421.205000\n233\t757.88\t421.205000\n743\t173.26\t430.360000\n743\t198.59\t430.360000\n743\t565.04\t430.360000\n743\t784.55\t430.360000\n581\t51.95\t432.642500\n581\t247.90\t432.642500\n581\t509.07\t432.642500\n581\t921.65\t432.642500\n107\t254.35\t434.100000\n107\t348.41\t434.100000\n107\t388.70\t434.100000\n107\t744.94\t434.100000\n359\t163.98\t436.225000\n359\t215.17\t436.225000\n359\t379.98\t436.225000\n359\t985.77\t436.225000\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q54.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<segment:int,num_customers:bigint,segment_base:int>\n-- !query output\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q55.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<brand_id:int,brand:string,ext_price:decimal(17,2)>\n-- !query output\n4002001\timportoedu pack #x                                \t141955.03\n2001001\tamalgimporto #x                                   \t115066.00\n2003001\texportiimporto #x                                 \t96597.01\n5003002\texportischolar #x                                 \t93316.15\n1002001\timportoamalg #x                                   \t85318.86\n2002001\timportoimporto #x                                 \t85035.18\n3003001\texportiexporti #x                                 \t80017.81\n2004001\tedu packimporto #x                                \t68604.08\n9001008\tamalgmaxi #x                                      \t60900.34\n4004001\tedu packedu pack #x                               \t58815.34\n4003002\texportiedu pack #x                                \t56587.37\n6012003\timportobrand #x                                   \t55151.38\n1001002\tamalgamalg #x                                     \t54471.07\n3004001\tedu packexporti #x                                \t54057.61\n3001001\tamalgexporti #x                                   \t52690.20\n4003001\texportiedu pack #x                                \t51021.71\n5002002\timportoscholar #x                                 \t46601.52\n8002007\timportonameless #x                                \t45983.92\n6016003\tcorpbrand #x                                      \t44911.30\n1003001\texportiamalg #x                                   \t44127.51\n6014003\tedu packbrand #x                                  \t41310.31\n7014009\tedu packnameless #x                               \t40891.96\n7008004\tnamelessbrand #x                                  \t36589.35\n10001016\tamalgunivamalg #x                                \t36270.01\n6016004\tcorpbrand #x                                      \t36208.64\n9013011\texportiunivamalg #x                              \t35794.13\n1003002\texportiamalg #x                                   \t35113.55\n1004002\tedu packamalg #x                                  \t34667.61\n7016005\tcorpnameless #x                                   \t33397.41\n7014005\tedu packnameless #x                               \t33313.92\n7012010\timportonameless #x                               \t32862.88\n7012001\timportonameless #x                                \t32815.12\n6001007\tamalgcorp #x                                      \t31948.28\n3002001\timportoexporti #x                                 \t31518.29\n8015001\tscholarmaxi #x                                    \t31235.19\n10012006\timportoamalgamalg #x                              \t31123.11\n6004004\tedu packcorp #x                                   \t29839.27\n10009008\tmaxiunivamalg #x                                  \t29180.04\n5004001\tedu packscholar #x                                \t28912.61\n9006011\tcorpmaxi #x                                      \t26493.45\n8014004\tedu packmaxi #x                                   \t26228.69\n9012008\timportounivamalg #x                               \t25354.49\n8015003\tscholarmaxi #x                                    \t23994.34\n7013009\texportinameless #x                                \t23887.30\n2004002\tedu packimporto #x                                \t23778.49\n6005005\tscholarcorp #x                                    \t23243.88\n5001001\tamalgscholar #x                                   \t23043.20\n8009007\tmaxinameless #x                            
       \t22910.03\n8004009\tedu packnameless #x                               \t22679.38\n9016002\tcorpunivamalg #x                                  \t21881.94\n8003001\texportinameless #x                                \t21655.96\n8004004\tedu packnameless #x                               \t20920.43\n10010007\tunivamalgamalg #x                                 \t20530.11\n8007004\tbrandnameless #x                                  \t19973.79\n9015005\tscholarunivamalg #x                               \t19929.31\n6004002\tedu packcorp #x                                   \t19890.20\n10004005\tedu packunivamalg #x                              \t19819.04\n9014008\tedu packunivamalg #x                              \t19700.99\n5003001\texportischolar #x                                 \t19595.76\n7007006\tbrandbrand #x                                     \t19541.72\n8009001\tmaxinameless #x                                   \t19474.02\n2002002\timportoimporto #x                                 \t19249.25\n8003002\texportinameless #x                                \t18931.96\n9016011\tcorpunivamalg #x                                 \t18840.83\n10012011\timportoamalgamalg #x                             \t18632.17\n9004005\tedu packmaxi #x                                   \t17997.32\n7003003\texportibrand #x                                   \t16699.26\n9011011\tamalgunivamalg #x                                \t16656.62\n10008002\tnamelessunivamalg #x                              \t16609.49\n10016015\tcorpamalgamalg #x                                \t16397.21\n8008001\tnamelessnameless #x                               \t15538.37\n6004007\tedu packcorp #x                                   \t15474.56\n7006001\tcorpbrand #x                                      \t15365.31\n6010003\tunivbrand #x                                      \t15337.21\n1001001\tamalgamalg #x                                     \t14813.15\n9012003\timportounivamalg #x                               \t14033.76\n8015008\tscholarmaxi #x                                    \t13408.30\n6002005\timportocorp #x                                    \t13393.21\n7010009\tunivnameless #x                                   \t13359.47\n10010002\tunivamalgamalg #x                                 \t13288.24\n8014003\tedu packmaxi #x                                   \t13137.78\n10001009\tamalgunivamalg #x                                 \t12326.55\n5001002\tamalgscholar #x                                   \t12319.21\n5002001\timportoscholar #x                                 \t12023.80\n8005010\tscholarnameless #x                               \t12017.53\n6008005\tnamelesscorp #x                                   \t11908.45\n3001002\tamalgexporti #x                                   \t11679.83\n10008006\tnamelessunivamalg #x                              \t11503.15\n10014009\tedu packamalgamalg #x                             \t11462.45\n10007017\tbrandunivamalg #x                                \t11188.38\n7002004\timportobrand #x                                   \t10699.51\n7001006\tamalgbrand #x                                     \t10166.47\n10004009\tedu packunivamalg #x                              \t9913.46\n10002011\timportounivamalg #x                              \t9828.64\n3003002\texportiexporti #x                                 \t9521.40\n10014017\tedu packamalgamalg #x                            \t9150.09\n6002003\timportocorp #x                                    \t8810.78\n6001005\tamalgcorp #x                                    
  \t8582.71\n4001001\tamalgedu pack #x                                  \t8362.72\n8004002\tedu packnameless #x                               \t8010.27\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q56.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,total_sales:decimal(27,2)>\n-- !query output\nAAAAAAAANMMDAAAA\tNULL\nAAAAAAAAIMPDAAAA\t0.00\nAAAAAAAAEGJBAAAA\t0.59\nAAAAAAAAOGIDAAAA\t3.92\nAAAAAAAAFLGAAAAA\t17.01\nAAAAAAAAGPEEAAAA\t21.60\nAAAAAAAACJCDAAAA\t27.30\nAAAAAAAAAAEDAAAA\t38.88\nAAAAAAAACPBEAAAA\t46.20\nAAAAAAAAMGKBAAAA\t50.79\nAAAAAAAACLHAAAAA\t58.52\nAAAAAAAAKJOBAAAA\t74.48\nAAAAAAAAGMEDAAAA\t78.90\nAAAAAAAADGCCAAAA\t84.81\nAAAAAAAALLDAAAAA\t95.60\nAAAAAAAAONEDAAAA\t100.91\nAAAAAAAAAKMAAAAA\t102.75\nAAAAAAAAIJIAAAAA\t103.53\nAAAAAAAAMLFDAAAA\t112.10\nAAAAAAAAHDDAAAAA\t113.62\nAAAAAAAAPBMDAAAA\t117.80\nAAAAAAAAPFCDAAAA\t119.56\nAAAAAAAALICBAAAA\t119.68\nAAAAAAAAKAJAAAAA\t126.21\nAAAAAAAAOIAAAAAA\t127.82\nAAAAAAAAABDEAAAA\t145.52\nAAAAAAAAOCKDAAAA\t149.00\nAAAAAAAAEMECAAAA\t154.20\nAAAAAAAAGOLDAAAA\t172.12\nAAAAAAAAAGLDAAAA\t173.46\nAAAAAAAAGBGEAAAA\t178.20\nAAAAAAAAKOBDAAAA\t183.60\nAAAAAAAAEGGDAAAA\t187.32\nAAAAAAAAHDKCAAAA\t227.76\nAAAAAAAAFDBEAAAA\t236.91\nAAAAAAAAEICCAAAA\t241.28\nAAAAAAAAAKAAAAAA\t250.16\nAAAAAAAAOMBEAAAA\t310.90\nAAAAAAAAANHCAAAA\t320.32\nAAAAAAAAIJGAAAAA\t336.00\nAAAAAAAAAFAAAAAA\t336.28\nAAAAAAAAKFGBAAAA\t380.37\nAAAAAAAAEACCAAAA\t389.75\nAAAAAAAAIOMAAAAA\t418.92\nAAAAAAAAEKODAAAA\t448.26\nAAAAAAAAENDAAAAA\t456.30\nAAAAAAAAAJGBAAAA\t457.68\nAAAAAAAAIBEAAAAA\t466.74\nAAAAAAAAOMEEAAAA\t469.80\nAAAAAAAAMDIDAAAA\t501.28\nAAAAAAAAPANDAAAA\t519.27\nAAAAAAAAKKECAAAA\t523.27\nAAAAAAAAKPABAAAA\t564.00\nAAAAAAAABHMCAAAA\t566.58\nAAAAAAAAGMGBAAAA\t584.56\nAAAAAAAAGKNCAAAA\t614.00\nAAAAAAAAGELBAAAA\t625.04\nAAAAAAAAKGGDAAAA\t642.63\nAAAAAAAAFLPAAAAA\t680.48\nAAAAAAAAGJODAAAA\t698.32\nAAAAAAAAIBDBAAAA\t702.93\nAAAAAAAAGAFAAAAA\t733.20\nAAAAAAAAMPEEAAAA\t760.16\nAAAAAAAAMDFCAAAA\t776.72\nAAAAAAAANPFEAAAA\t778.00\nAAAAAAAAMHEAAAAA\t778.27\nAAAAAAAAOHLDAAAA\t801.88\nAAAAAAAAGIJDAAAA\t815.74\nAAAAAAAAICOCAAAA\t817.74\nAAAAAAAAAICAAAAA\t823.35\nAAAAAAAAKCNAAAAA\t833.31\nAAAAAAAACDGAAAAA\t836.90\nAAAAAAAAAAKDAAAA\t850.07\nAAAAAAAAFOJAAAAA\t859.28\nAAAAAAAAMNLCAAAA\t865.29\nAAAAAAAAOINCAAAA\t865.60\nAAAAAAAAKGOBAAAA\t875.20\nAAAAAAAAOFKAAAAA\t882.60\nAAAAAAAAIGMCAAAA\t886.89\nAAAAAAAAIIOBAAAA\t892.05\nAAAAAAAAGOBBAAAA\t909.74\nAAAAAAAAPNNCAAAA\t981.72\nAAAAAAAAKPAAAAAA\t1006.11\nAAAAAAAANDBCAAAA\t1028.34\nAAAAAAAAABCCAAAA\t1051.08\nAAAAAAAACJCEAAAA\t1055.78\nAAAAAAAACEPCAAAA\t1064.00\nAAAAAAAACGFEAAAA\t1067.78\nAAAAAAAAMOIAAAAA\t1073.80\nAAAAAAAAMIFDAAAA\t1104.17\nAAAAAAAAAOHBAAAA\t1104.40\nAAAAAAAAMGJBAAAA\t1124.12\nAAAAAAAAAEFCAAAA\t1134.68\nAAAAAAAAAJIDAAAA\t1153.92\nAAAAAAAAFEKDAAAA\t1156.54\nAAAAAAAAGIBEAAAA\t1165.02\nAAAAAAAAKLBBAAAA\t1181.93\nAAAAAAAAOBBAAAAA\t1183.08\nAAAAAAAABEEEAAAA\t1190.91\nAAAAAAAADHJAAAAA\t1202.11\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q57.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_category:string,i_brand:string,cc_name:string,d_year:int,d_moy:int,avg_monthly_sales:decimal(21,6),sum_sales:decimal(17,2),psum:decimal(17,2),nsum:decimal(17,2)>\n-- !query output\nMusic                                             \tamalgscholar #x                                   \tMid Atlantic\t1999\t2\t6662.669167\t1961.57\t4348.07\t3386.25\nShoes                                             \tamalgedu pack #x                                  \tNorth Midwest\t1999\t2\t6493.071667\t2044.05\t4348.88\t3443.20\nShoes                                             \texportiedu pack #x                                \tMid Atlantic\t1999\t3\t7416.141667\t2980.15\t4654.22\t5157.83\nChildren                                          \timportoexporti #x                                 \tNorth Midwest\t1999\t4\t6577.143333\t2152.15\t3291.07\t3659.32\nShoes                                             \timportoedu pack #x                                \tNY Metro\t1999\t6\t6926.960833\t2523.33\t4014.93\t4254.99\nMen                                               \timportoimporto #x                                 \tNorth Midwest\t1999\t2\t6707.315833\t2449.22\t4311.11\t3583.31\nMen                                               \tamalgimporto #x                                   \tNorth Midwest\t1999\t4\t7098.680833\t2965.42\t3526.45\t4923.53\nMen                                               \texportiimporto #x                                 \tNY Metro\t1999\t2\t7146.240000\t3013.99\t6183.83\t5091.17\nChildren                                          \tamalgexporti #x                                   \tNorth Midwest\t1999\t4\t6364.467500\t2270.79\t3330.83\t3817.50\nMen                                               \tedu packimporto #x                                \tNorth Midwest\t1999\t4\t7386.333333\t3329.74\t3488.01\t4860.20\nMen                                               \tedu packimporto #x                                \tNorth Midwest\t1999\t2\t7386.333333\t3347.65\t4007.40\t3488.01\nMusic                                             \tedu packscholar #x                                \tNY Metro\t1999\t7\t6639.040000\t2653.55\t4219.52\t10071.22\nMusic                                             \tamalgscholar #x                                   \tNorth Midwest\t1999\t4\t6719.304167\t2739.33\t3690.54\t3872.98\nMen                                               \timportoimporto #x                                 \tNY Metro\t1999\t3\t6610.034167\t2645.24\t3661.14\t4282.01\nMusic                                             \texportischolar #x                                 \tNY Metro\t1999\t2\t7043.051667\t3115.83\t4457.95\t5258.95\nMen                                               \tedu packimporto #x                                \tNorth Midwest\t1999\t3\t7386.333333\t3488.01\t3347.65\t3329.74\nShoes                                             \texportiedu pack #x                                \tNorth Midwest\t1999\t3\t7255.790000\t3411.07\t4194.64\t3624.85\nWomen                                             \texportiamalg #x                                   \tMid Atlantic\t1999\t2\t5646.671667\t1809.52\t4198.70\t2172.85\nMusic                                             \timportoscholar #x                                 \tMid Atlantic\t1999\t6\t6279.081667\t2456.98\t4361.44\t4256.24\nChildren                                          \timportoexporti #x                                 \tMid 
Atlantic\t1999\t7\t6786.750000\t2978.82\t3942.59\t7809.22\nMusic                                             \texportischolar #x                                 \tNorth Midwest\t1999\t2\t7041.705833\t3245.77\t3608.31\t4127.40\nShoes                                             \timportoedu pack #x                                \tMid Atlantic\t1999\t2\t6864.320833\t3104.48\t3135.52\t3606.28\nShoes                                             \timportoedu pack #x                                \tMid Atlantic\t1999\t1\t6864.320833\t3135.52\t14580.30\t3104.48\nChildren                                          \tedu packexporti #x                                \tMid Atlantic\t1999\t5\t6511.800833\t2785.92\t3956.90\t3906.63\nWomen                                             \tamalgamalg #x                                     \tMid Atlantic\t1999\t2\t6480.683333\t2769.69\t3353.67\t3876.51\nMen                                               \texportiimporto #x                                 \tNY Metro\t1999\t5\t7146.240000\t3440.57\t3561.11\t3971.13\nWomen                                             \timportoamalg #x                                   \tNY Metro\t1999\t3\t6512.794167\t2808.28\t3789.51\t4335.27\nMen                                               \tamalgimporto #x                                   \tNY Metro\t1999\t2\t6720.550000\t3018.90\t4328.03\t3810.74\nShoes                                             \texportiedu pack #x                                \tNorth Midwest\t1999\t7\t7255.790000\t3557.87\t4937.98\t9496.49\nMusic                                             \texportischolar #x                                 \tMid Atlantic\t1999\t5\t6791.260833\t3096.02\t3918.04\t3801.90\nMen                                               \texportiimporto #x                                 \tNorth Midwest\t1999\t1\t7343.719167\t3652.72\t13689.13\t3984.13\nMen                                               \timportoimporto #x                                 \tNorth Midwest\t1999\t5\t6707.315833\t3030.18\t4977.33\t4620.75\nChildren                                          \texportiexporti #x                                 \tNY Metro\t1999\t2\t6386.880833\t2717.07\t4809.11\t3355.48\nMen                                               \tamalgimporto #x                                   \tNorth Midwest\t1999\t2\t7098.680833\t3440.27\t5293.69\t3526.45\nMen                                               \timportoimporto #x                                 \tNY Metro\t1999\t5\t6610.034167\t2954.71\t4282.01\t3166.43\nShoes                                             \texportiedu pack #x                                \tNorth Midwest\t1999\t4\t7255.790000\t3624.85\t3411.07\t5169.09\nMen                                               \texportiimporto #x                                 \tNorth Midwest\t1999\t5\t7343.719167\t3727.75\t3729.62\t4580.93\nMen                                               \texportiimporto #x                                 \tNorth Midwest\t1999\t4\t7343.719167\t3729.62\t4033.37\t3727.75\nMusic                                             \tedu packscholar #x                                \tNorth Midwest\t1999\t2\t6489.175000\t2875.98\t4299.82\t4028.97\nMen                                               \tedu packimporto #x                                \tNY Metro\t1999\t1\t7202.242500\t3614.07\t15582.63\t4234.79\nMusic                                             \timportoscholar #x                                 \tNorth 
Midwest\t1999\t2\t5816.271667\t2229.79\t2919.29\t4298.41\nMen                                               \texportiimporto #x                                 \tNY Metro\t1999\t4\t7146.240000\t3561.11\t5091.17\t3440.57\nShoes                                             \texportiedu pack #x                                \tNY Metro\t1999\t7\t7073.462500\t3493.41\t4534.31\t8701.59\nMusic                                             \texportischolar #x                                 \tMid Atlantic\t1999\t3\t6791.260833\t3218.13\t3847.87\t3918.04\nShoes                                             \tedu packedu pack #x                               \tNY Metro\t1999\t4\t6203.331667\t2631.02\t4424.18\t4186.85\nMen                                               \tamalgimporto #x                                   \tNorth Midwest\t1999\t3\t7098.680833\t3526.45\t3440.27\t2965.42\nMen                                               \tedu packimporto #x                                \tNY Metro\t1999\t3\t7202.242500\t3639.93\t4234.79\t4016.25\nChildren                                          \tamalgexporti #x                                   \tNorth Midwest\t1999\t2\t6364.467500\t2825.09\t4111.08\t3330.83\nShoes                                             \tedu packedu pack #x                               \tNorth Midwest\t1999\t2\t6464.239167\t2928.99\t4233.86\t3840.97\nShoes                                             \tamalgedu pack #x                                  \tNorth Midwest\t1999\t4\t6493.071667\t2962.20\t3443.20\t4212.60\nMusic                                             \timportoscholar #x                                 \tNY Metro\t1999\t4\t5707.844167\t2179.41\t3789.16\t4317.53\nShoes                                             \texportiedu pack #x                                \tMid Atlantic\t1999\t1\t7416.141667\t3892.54\t14170.68\t4654.22\nWomen                                             \timportoamalg #x                                   \tNY Metro\t1999\t5\t6512.794167\t2991.07\t4335.27\t4624.86\nMusic                                             \texportischolar #x                                 \tNY Metro\t1999\t4\t7043.051667\t3521.99\t5258.95\t4135.21\nWomen                                             \tedu packamalg #x                                  \tMid Atlantic\t1999\t2\t6354.045833\t2836.23\t3719.67\t3527.07\nMusic                                             \tamalgscholar #x                                   \tNY Metro\t1999\t3\t6123.475000\t2617.39\t3080.43\t4919.93\nShoes                                             \tamalgedu pack #x                                  \tMid Atlantic\t1999\t7\t6674.896667\t3178.12\t3342.98\t8050.81\nMen                                               \tamalgimporto #x                                   \tMid Atlantic\t1999\t5\t6618.534167\t3127.28\t4291.66\t4669.62\nWomen                                             \tamalgamalg #x                                     \tNorth Midwest\t1999\t6\t6874.250000\t3387.53\t4798.69\t4329.48\nWomen                                             \texportiamalg #x                                   \tMid Atlantic\t1999\t3\t5646.671667\t2172.85\t1809.52\t4461.31\nChildren                                          \tedu packexporti #x                                \tNY Metro\t1999\t2\t6112.954167\t2641.54\t3567.58\t3196.45\nChildren                                          \tamalgexporti #x                                   \tNY Metro\t1999\t5\t6294.100833\t2834.48\t3317.60\t3803.79\nWomen                   
                          \tedu packamalg #x                                  \tNY Metro\t1999\t5\t6027.880000\t2575.81\t2750.96\t4459.01\nMusic                                             \texportischolar #x                                 \tNorth Midwest\t1999\t6\t7041.705833\t3589.97\t4134.03\t4892.26\nMusic                                             \texportischolar #x                                 \tNorth Midwest\t1999\t4\t7041.705833\t3593.86\t4127.40\t4134.03\nMen                                               \timportoimporto #x                                 \tNY Metro\t1999\t6\t6610.034167\t3166.43\t2954.71\t3673.39\nMusic                                             \texportischolar #x                                 \tNorth Midwest\t1999\t1\t7041.705833\t3608.31\t15046.54\t3245.77\nMusic                                             \tedu packscholar #x                                \tMid Atlantic\t1999\t2\t6602.385000\t3173.44\t3434.91\t3929.40\nMusic                                             \tamalgscholar #x                                   \tNY Metro\t1999\t6\t6123.475000\t2699.75\t4038.47\t3330.87\nChildren                                          \timportoexporti #x                                 \tMid Atlantic\t1999\t4\t6786.750000\t3366.25\t3847.60\t4259.57\nMen                                               \tedu packimporto #x                                \tMid Atlantic\t1999\t1\t7230.493333\t3811.64\t14668.93\t4497.31\nShoes                                             \timportoedu pack #x                                \tMid Atlantic\t1999\t5\t6864.320833\t3449.62\t3869.15\t3531.93\nChildren                                          \tedu packexporti #x                                \tNorth Midwest\t1999\t2\t6739.498333\t3328.01\t4986.50\t3623.32\nChildren                                          \timportoexporti #x                                 \tMid Atlantic\t1999\t1\t6786.750000\t3376.55\t12504.93\t5018.22\nChildren                                          \tedu packexporti #x                                \tNY Metro\t1999\t7\t6112.954167\t2711.04\t3254.04\t9465.10\nShoes                                             \timportoedu pack #x                                \tNorth Midwest\t1999\t3\t6588.741667\t3187.25\t4283.21\t3212.76\nMen                                               \timportoimporto #x                                 \tMid Atlantic\t1999\t3\t6702.415000\t3310.55\t3981.06\t4901.56\nMen                                               \tedu packimporto #x                                \tNorth Midwest\t1999\t1\t7386.333333\t4007.40\t14005.45\t3347.65\nShoes                                             \timportoedu pack #x                                \tNorth Midwest\t1999\t4\t6588.741667\t3212.76\t3187.25\t3974.78\nShoes                                             \tedu packedu pack #x                               \tNY Metro\t1999\t6\t6203.331667\t2835.78\t4186.85\t3192.53\nMen                                               \texportiimporto #x                                 \tNorth Midwest\t1999\t2\t7343.719167\t3984.13\t3652.72\t4033.37\nMen                                               \tamalgimporto #x                                   \tNY Metro\t1999\t4\t6720.550000\t3364.32\t3810.74\t4333.58\nChildren                                          \tedu packexporti #x                                \tNorth Midwest\t1999\t4\t6739.498333\t3389.03\t3623.32\t3605.25\nShoes                                             \timportoedu pack #x             
                   \tMid Atlantic\t1999\t6\t6864.320833\t3531.93\t3449.62\t4414.17\nShoes                                             \tamalgedu pack #x                                  \tMid Atlantic\t1999\t6\t6674.896667\t3342.98\t4458.26\t3178.12\nChildren                                          \tedu packexporti #x                                \tMid Atlantic\t1999\t2\t6511.800833\t3185.28\t3581.75\t3410.75\nChildren                                          \tamalgexporti #x                                   \tMid Atlantic\t1999\t4\t6854.405833\t3541.62\t3854.33\t3938.42\nMen                                               \texportiimporto #x                                 \tNorth Midwest\t1999\t3\t7343.719167\t4033.37\t3984.13\t3729.62\nMen                                               \tamalgimporto #x                                   \tMid Atlantic\t1999\t3\t6618.534167\t3313.62\t4044.36\t4291.66\nShoes                                             \texportiedu pack #x                                \tMid Atlantic\t1999\t7\t7416.141667\t4121.24\t4239.08\t8658.42\nWomen                                             \timportoamalg #x                                   \tMid Atlantic\t1999\t6\t6395.326667\t3102.55\t4234.22\t3650.03\nChildren                                          \timportoexporti #x                                 \tNorth Midwest\t1999\t3\t6577.143333\t3291.07\t3773.61\t2152.15\nWomen                                             \tedu packamalg #x                                  \tNY Metro\t1999\t4\t6027.880000\t2750.96\t3199.35\t2575.81\nMusic                                             \tamalgscholar #x                                   \tMid Atlantic\t1999\t3\t6662.669167\t3386.25\t1961.57\t4799.18\nMen                                               \tamalgimporto #x                                   \tNorth Midwest\t1999\t6\t7098.680833\t3834.90\t4923.53\t4115.57\nShoes                                             \timportoedu pack #x                                \tMid Atlantic\t1999\t3\t6864.320833\t3606.28\t3104.48\t3869.15\nMusic                                             \texportischolar #x                                 \tNY Metro\t1999\t6\t7043.051667\t3793.48\t4135.21\t5006.69\nShoes                                             \tedu packedu pack #x                               \tMid Atlantic\t1999\t1\t6711.753333\t3473.10\t15060.83\t4085.86\nMen                                               \texportiimporto #x                                 \tMid Atlantic\t1999\t1\t7419.459167\t4188.88\t16358.86\t4366.77\nWomen                                             \tamalgamalg #x                                     \tNY Metro\t1999\t2\t6362.709167\t3137.03\t4180.91\t3181.26\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q58.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<item_id:string,ss_item_rev:decimal(17,2),ss_dev:decimal(38,17),cs_item_rev:decimal(17,2),cs_dev:decimal(38,17),ws_item_rev:decimal(17,2),ws_dev:decimal(38,17),average:decimal(23,6)>\n-- !query output\nAAAAAAAAOEHBAAAA\t4202.05\t11.64402731132023188\t3952.65\t10.95293120074485419\t3874.50\t10.73637482126824727\t4009.733333\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q59.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<s_store_name1:string,s_store_id1:string,d_week_seq1:int,(sun_sales1 / sun_sales2):decimal(37,20),(mon_sales1 / mon_sales2):decimal(37,20),(tue_sales1 / tue_sales2):decimal(37,20),(wed_sales1 / wed_sales2):decimal(37,20),(thu_sales1 / thu_sales2):decimal(37,20),(fri_sales1 / fri_sales2):decimal(37,20),(sat_sales1 / sat_sales2):decimal(37,20)>\n-- !query output\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2
.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133
143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5271\t0.74419775623719778195\t0.74258389025810779226\t0.18455097389041153754\t0.11049123944133143765\t2.01369701972914228390\t0.74322255494522317957\t0.71433787179231334416\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701
158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802
\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991
736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5272\t0.84505990801516636507\t1.64762915953700921988\t0.38486496991736952802\t0.68701158883293751293\t1.56192545028377515216\t1.16497445041784212487\t0.46921185549145987093\nable\tAAAAAAAACAAAAAAA\t5273\t0.35130200745800939483\t0.85485871077793239664\t0.66980552712384851586\t0.66252919917140464542\t0.75558600483388711996\t0.52111936828952912290\t0.63215140490620731712\nable\tAAAAAAAACAAAAAAA\t5273\t0.35130200745800939483\t0.85485871077793239664\t0.66980552712384851586\t0.66252919917140464542\t0.75558600483388711996\t0.52111936828952912290\t0.63215140490620731712\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q6.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<state:string,cnt:bigint>\n-- !query output\nVT\t10\nMA\t11\nNH\t11\nNJ\t14\nWY\t16\nOR\t16\nNV\t16\nAK\t20\nME\t22\nMD\t26\nWA\t29\nNM\t31\nID\t31\nUT\t36\nND\t38\nSC\t41\nSD\t43\nWV\t44\nCA\t52\nFL\t53\nLA\t56\nPA\t57\nNY\t59\nWI\t61\nAR\t61\nCO\t61\nMT\t66\nMS\t68\nOK\t68\nMN\t71\nOH\t75\nMO\t79\nIL\t80\nNC\t81\nAL\t81\nIA\t82\nMI\t84\nNE\t86\nKS\t86\nNULL\t88\nIN\t94\nTN\t108\nKY\t110\nVA\t128\nGA\t138\nTX\t250\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q60.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,total_sales:decimal(27,2)>\n-- !query output\nAAAAAAAAAABAAAAA\t6334.66\nAAAAAAAAAACCAAAA\t5389.16\nAAAAAAAAAADBAAAA\t11655.98\nAAAAAAAAAAHAAAAA\t5388.00\nAAAAAAAAAAHDAAAA\t12685.87\nAAAAAAAAAAIAAAAA\t8916.30\nAAAAAAAAAAKBAAAA\t2140.04\nAAAAAAAAAAMBAAAA\t22048.17\nAAAAAAAAAAPBAAAA\t13195.87\nAAAAAAAAABAEAAAA\t1071.67\nAAAAAAAAABBCAAAA\t7064.02\nAAAAAAAAABEAAAAA\t13239.06\nAAAAAAAAABFBAAAA\t1877.72\nAAAAAAAAABFEAAAA\t699.95\nAAAAAAAAABGAAAAA\t8169.00\nAAAAAAAAABJAAAAA\t10752.87\nAAAAAAAAABLBAAAA\t5962.43\nAAAAAAAAABLCAAAA\t18131.46\nAAAAAAAAABMBAAAA\t12049.00\nAAAAAAAAABMDAAAA\t18947.93\nAAAAAAAAABNDAAAA\t8479.29\nAAAAAAAAACDCAAAA\t13466.87\nAAAAAAAAACEEAAAA\t7634.65\nAAAAAAAAACJCAAAA\t51774.63\nAAAAAAAAACKBAAAA\t18766.24\nAAAAAAAAACLBAAAA\t16002.89\nAAAAAAAAACLDAAAA\t12302.68\nAAAAAAAAACOBAAAA\t2707.13\nAAAAAAAAADABAAAA\t6234.38\nAAAAAAAAADBDAAAA\t27724.30\nAAAAAAAAADEAAAAA\t2867.87\nAAAAAAAAADFAAAAA\t10667.94\nAAAAAAAAADFDAAAA\t9051.70\nAAAAAAAAADGBAAAA\t6502.68\nAAAAAAAAADGCAAAA\t1174.33\nAAAAAAAAADHDAAAA\t18882.96\nAAAAAAAAADNBAAAA\t11526.41\nAAAAAAAAADPBAAAA\t1026.20\nAAAAAAAAAEADAAAA\t3582.21\nAAAAAAAAAECBAAAA\t12360.25\nAAAAAAAAAECCAAAA\t20954.85\nAAAAAAAAAEDAAAAA\t8537.71\nAAAAAAAAAEECAAAA\t10513.50\nAAAAAAAAAEEDAAAA\t2561.17\nAAAAAAAAAEGAAAAA\t7836.96\nAAAAAAAAAEGDAAAA\t638.36\nAAAAAAAAAEJBAAAA\t11429.14\nAAAAAAAAAELBAAAA\t6543.86\nAAAAAAAAAEMDAAAA\t13797.32\nAAAAAAAAAEOBAAAA\t6699.66\nAAAAAAAAAFAAAAAA\t1979.04\nAAAAAAAAAFCBAAAA\t6157.90\nAAAAAAAAAFDAAAAA\t14494.27\nAAAAAAAAAFEBAAAA\t11552.54\nAAAAAAAAAFGAAAAA\t13218.28\nAAAAAAAAAFGCAAAA\t10293.39\nAAAAAAAAAFGDAAAA\t6078.36\nAAAAAAAAAFJDAAAA\t21440.62\nAAAAAAAAAFLBAAAA\t11714.63\nAAAAAAAAAFNCAAAA\t17910.15\nAAAAAAAAAGACAAAA\t12212.53\nAAAAAAAAAGAEAAAA\t17187.86\nAAAAAAAAAGBAAAAA\t2753.50\nAAAAAAAAAGDBAAAA\t5244.60\nAAAAAAAAAGDEAAAA\t466.32\nAAAAAAAAAGEAAAAA\t2030.78\nAAAAAAAAAGEBAAAA\t673.70\nAAAAAAAAAGEDAAAA\t4861.80\nAAAAAAAAAGIAAAAA\t20717.88\nAAAAAAAAAGKAAAAA\t4481.54\nAAAAAAAAAGLAAAAA\t259.22\nAAAAAAAAAGNDAAAA\t9371.85\nAAAAAAAAAGOAAAAA\t12624.61\nAAAAAAAAAGODAAAA\t15071.99\nAAAAAAAAAHBDAAAA\t6674.73\nAAAAAAAAAHCCAAAA\t4327.67\nAAAAAAAAAHECAAAA\t13309.93\nAAAAAAAAAHGAAAAA\t8277.89\nAAAAAAAAAHIBAAAA\t6574.61\nAAAAAAAAAHICAAAA\t4810.18\nAAAAAAAAAHJDAAAA\t9151.90\nAAAAAAAAAHKAAAAA\t424.72\nAAAAAAAAAHKCAAAA\t23702.30\nAAAAAAAAAHKDAAAA\t8747.48\nAAAAAAAAAHMBAAAA\t17707.03\nAAAAAAAAAHMDAAAA\t5608.89\nAAAAAAAAAHNAAAAA\t9929.30\nAAAAAAAAAHOCAAAA\t21257.55\nAAAAAAAAAIAAAAAA\t36659.16\nAAAAAAAAAICEAAAA\t23564.63\nAAAAAAAAAIDCAAAA\t7648.02\nAAAAAAAAAIEBAAAA\t6151.13\nAAAAAAAAAIEEAAAA\t1406.98\nAAAAAAAAAIFBAAAA\t2088.56\nAAAAAAAAAIGAAAAA\t2468.08\nAAAAAAAAAIGCAAAA\t1834.76\nAAAAAAAAAIGDAAAA\t558.00\nAAAAAAAAAIIBAAAA\t5422.03\nAAAAAAAAAILDAAAA\t4833.43\nAAAAAAAAAIOAAAAA\t8317.81\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q61.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<promotions:decimal(17,2),total:decimal(17,2),((CAST(promotions AS DECIMAL(15,4)) / CAST(total AS DECIMAL(15,4))) * 100):decimal(38,19)>\n-- !query output\n2574995.94\t5792384.50\t44.4548517108972997220\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q62.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<substr(w_warehouse_name, 1, 20):string,sm_type:string,web_name:string,30 days :bigint,31 - 60 days :bigint,61 - 90 days :bigint,91 - 120 days :bigint,>120 days :bigint>\n-- !query output\nJust good amou\tEXPRESS                       \tsite_0\t270\t248\t280\t287\t0\nJust good amou\tEXPRESS                       \tsite_1\t290\t327\t288\t262\t0\nJust good amou\tEXPRESS                       \tsite_2\t314\t295\t311\t299\t0\nJust good amou\tEXPRESS                       \tsite_3\t309\t300\t299\t267\t0\nJust good amou\tEXPRESS                       \tsite_4\t224\t250\t297\t273\t0\nJust good amou\tLIBRARY                       \tsite_0\t217\t192\t210\t225\t0\nJust good amou\tLIBRARY                       \tsite_1\t163\t207\t172\t205\t0\nJust good amou\tLIBRARY                       \tsite_2\t176\t204\t208\t190\t0\nJust good amou\tLIBRARY                       \tsite_3\t222\t220\t240\t240\t0\nJust good amou\tLIBRARY                       \tsite_4\t254\t230\t241\t263\t0\nJust good amou\tNEXT DAY                      \tsite_0\t311\t275\t303\t291\t0\nJust good amou\tNEXT DAY                      \tsite_1\t291\t307\t284\t289\t0\nJust good amou\tNEXT DAY                      \tsite_2\t275\t299\t323\t279\t0\nJust good amou\tNEXT DAY                      \tsite_3\t260\t290\t242\t248\t0\nJust good amou\tNEXT DAY                      \tsite_4\t233\t285\t271\t281\t0\nJust good amou\tOVERNIGHT                     \tsite_0\t242\t262\t221\t248\t0\nJust good amou\tOVERNIGHT                     \tsite_1\t251\t245\t249\t233\t0\nJust good amou\tOVERNIGHT                     \tsite_2\t198\t226\t209\t204\t0\nJust good amou\tOVERNIGHT                     \tsite_3\t168\t199\t184\t175\t0\nJust good amou\tOVERNIGHT                     \tsite_4\t237\t231\t193\t207\t0\nJust good amou\tREGULAR                       \tsite_0\t140\t236\t154\t190\t0\nJust good amou\tREGULAR                       \tsite_1\t189\t190\t205\t194\t0\nJust good amou\tREGULAR                       \tsite_2\t234\t251\t264\t235\t0\nJust good amou\tREGULAR                       \tsite_3\t229\t251\t244\t243\t0\nJust good amou\tREGULAR                       \tsite_4\t198\t209\t188\t216\t0\nJust good amou\tTWO DAY                       \tsite_0\t256\t258\t218\t235\t0\nJust good amou\tTWO DAY                       \tsite_1\t217\t226\t195\t198\t0\nJust good amou\tTWO DAY                       \tsite_2\t176\t151\t186\t202\t0\nJust good amou\tTWO DAY                       \tsite_3\t221\t225\t236\t190\t0\nJust good amou\tTWO DAY                       \tsite_4\t239\t240\t219\t252\t0\nMatches produce\tEXPRESS                       \tsite_0\t330\t277\t239\t314\t0\nMatches produce\tEXPRESS                       \tsite_1\t296\t336\t288\t307\t0\nMatches produce\tEXPRESS                       \tsite_2\t262\t307\t273\t305\t0\nMatches produce\tEXPRESS                       \tsite_3\t284\t253\t253\t256\t0\nMatches produce\tEXPRESS                       \tsite_4\t260\t309\t252\t296\t0\nMatches produce\tLIBRARY                       \tsite_0\t188\t181\t179\t170\t0\nMatches produce\tLIBRARY                       \tsite_1\t189\t211\t193\t181\t0\nMatches produce\tLIBRARY                       \tsite_2\t253\t230\t207\t248\t0\nMatches produce\tLIBRARY                       \tsite_3\t225\t230\t217\t214\t0\nMatches produce\tLIBRARY                       \tsite_4\t220\t218\t208\t213\t0\nMatches produce\tNEXT DAY                      \tsite_0\t296\t286\t328\t316\t0\nMatches produce\tNEXT 
DAY                      \tsite_1\t303\t303\t307\t281\t0\nMatches produce\tNEXT DAY                      \tsite_2\t271\t258\t253\t256\t0\nMatches produce\tNEXT DAY                      \tsite_3\t265\t263\t290\t296\t0\nMatches produce\tNEXT DAY                      \tsite_4\t312\t312\t291\t311\t0\nMatches produce\tOVERNIGHT                     \tsite_0\t217\t254\t216\t258\t0\nMatches produce\tOVERNIGHT                     \tsite_1\t214\t187\t229\t230\t0\nMatches produce\tOVERNIGHT                     \tsite_2\t171\t176\t188\t174\t0\nMatches produce\tOVERNIGHT                     \tsite_3\t181\t218\t193\t219\t0\nMatches produce\tOVERNIGHT                     \tsite_4\t244\t216\t237\t234\t0\nMatches produce\tREGULAR                       \tsite_0\t205\t216\t204\t229\t0\nMatches produce\tREGULAR                       \tsite_1\t230\t232\t229\t244\t0\nMatches produce\tREGULAR                       \tsite_2\t225\t250\t209\t254\t0\nMatches produce\tREGULAR                       \tsite_3\t230\t212\t212\t195\t0\nMatches produce\tREGULAR                       \tsite_4\t207\t167\t180\t172\t0\nMatches produce\tTWO DAY                       \tsite_0\t228\t195\t222\t198\t0\nMatches produce\tTWO DAY                       \tsite_1\t169\t154\t170\t146\t0\nMatches produce\tTWO DAY                       \tsite_2\t197\t215\t201\t210\t0\nMatches produce\tTWO DAY                       \tsite_3\t247\t236\t234\t237\t0\nMatches produce\tTWO DAY                       \tsite_4\t242\t259\t245\t247\t0\nOperations\tEXPRESS                       \tsite_0\t270\t284\t268\t286\t0\nOperations\tEXPRESS                       \tsite_1\t274\t251\t245\t275\t0\nOperations\tEXPRESS                       \tsite_2\t287\t337\t296\t310\t0\nOperations\tEXPRESS                       \tsite_3\t349\t307\t322\t286\t0\nOperations\tEXPRESS                       \tsite_4\t284\t287\t285\t271\t0\nOperations\tLIBRARY                       \tsite_0\t266\t227\t229\t237\t0\nOperations\tLIBRARY                       \tsite_1\t201\t228\t190\t200\t0\nOperations\tLIBRARY                       \tsite_2\t177\t170\t182\t171\t0\nOperations\tLIBRARY                       \tsite_3\t227\t217\t214\t226\t0\nOperations\tLIBRARY                       \tsite_4\t268\t258\t242\t242\t0\nOperations\tNEXT DAY                      \tsite_0\t269\t255\t273\t257\t0\nOperations\tNEXT DAY                      \tsite_1\t292\t295\t258\t291\t0\nOperations\tNEXT DAY                      \tsite_2\t296\t327\t305\t294\t0\nOperations\tNEXT DAY                      \tsite_3\t274\t286\t322\t289\t0\nOperations\tNEXT DAY                      \tsite_4\t311\t270\t243\t255\t0\nOperations\tOVERNIGHT                     \tsite_0\t217\t205\t193\t227\t0\nOperations\tOVERNIGHT                     \tsite_1\t221\t243\t232\t237\t0\nOperations\tOVERNIGHT                     \tsite_2\t247\t243\t235\t238\t0\nOperations\tOVERNIGHT                     \tsite_3\t242\t218\t205\t223\t0\nOperations\tOVERNIGHT                     \tsite_4\t181\t172\t192\t186\t0\nOperations\tREGULAR                       \tsite_0\t190\t211\t206\t195\t0\nOperations\tREGULAR                       \tsite_1\t166\t173\t173\t174\t0\nOperations\tREGULAR                       \tsite_2\t198\t219\t215\t226\t0\nOperations\tREGULAR                       \tsite_3\t255\t234\t239\t215\t0\nOperations\tREGULAR                       \tsite_4\t246\t231\t266\t208\t0\nOperations\tTWO DAY                       \tsite_0\t247\t232\t246\t240\t0\nOperations\tTWO DAY                       \tsite_1\t244\t240\t259\t234\t0\nOperations\tTWO DAY             
          \tsite_2\t210\t214\t200\t190\t0\nOperations\tTWO DAY                       \tsite_3\t166\t181\t174\t184\t0\nOperations\tTWO DAY                       \tsite_4\t194\t197\t224\t211\t0\nSelective,\tEXPRESS                       \tsite_0\t290\t275\t273\t332\t0\nSelective,\tEXPRESS                       \tsite_1\t303\t319\t311\t299\t0\nSelective,\tEXPRESS                       \tsite_2\t299\t238\t268\t261\t0\nSelective,\tEXPRESS                       \tsite_3\t262\t273\t281\t300\t0\nSelective,\tEXPRESS                       \tsite_4\t285\t298\t291\t300\t0\nSelective,\tLIBRARY                       \tsite_0\t215\t204\t202\t211\t0\nSelective,\tLIBRARY                       \tsite_1\t221\t253\t243\t240\t0\nSelective,\tLIBRARY                       \tsite_2\t218\t261\t226\t234\t0\nSelective,\tLIBRARY                       \tsite_3\t178\t189\t233\t217\t0\nSelective,\tLIBRARY                       \tsite_4\t159\t186\t194\t199\t0\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q63.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_manager_id:int,sum_sales:decimal(17,2),avg_monthly_sales:decimal(21,6)>\n-- !query output\n1\t175.21\t726.555833\n1\t231.64\t726.555833\n1\t400.27\t726.555833\n1\t409.69\t726.555833\n1\t468.02\t726.555833\n1\t530.63\t726.555833\n1\t880.08\t726.555833\n1\t972.96\t726.555833\n1\t1028.83\t726.555833\n1\t1038.59\t726.555833\n1\t1820.16\t726.555833\n2\t392.91\t1124.957500\n2\t469.26\t1124.957500\n2\t654.97\t1124.957500\n2\t703.18\t1124.957500\n2\t726.27\t1124.957500\n2\t813.75\t1124.957500\n2\t819.59\t1124.957500\n2\t1248.04\t1124.957500\n2\t1253.65\t1124.957500\n2\t1690.71\t1124.957500\n2\t1961.13\t1124.957500\n2\t2766.03\t1124.957500\n3\t259.59\t1110.663333\n3\t530.95\t1110.663333\n3\t639.03\t1110.663333\n3\t727.70\t1110.663333\n3\t822.22\t1110.663333\n3\t853.58\t1110.663333\n3\t910.78\t1110.663333\n3\t1455.11\t1110.663333\n3\t1615.68\t1110.663333\n3\t1741.42\t1110.663333\n3\t2618.11\t1110.663333\n4\t465.88\t1333.585833\n4\t472.55\t1333.585833\n4\t685.93\t1333.585833\n4\t703.90\t1333.585833\n4\t773.55\t1333.585833\n4\t1535.08\t1333.585833\n4\t1900.09\t1333.585833\n4\t2557.73\t1333.585833\n4\t2842.07\t1333.585833\n5\t507.40\t1378.418333\n5\t604.37\t1378.418333\n5\t633.93\t1378.418333\n5\t830.17\t1378.418333\n5\t1013.51\t1378.418333\n5\t1093.85\t1378.418333\n5\t1223.10\t1378.418333\n5\t1735.77\t1378.418333\n5\t2024.74\t1378.418333\n5\t2065.17\t1378.418333\n5\t2319.65\t1378.418333\n5\t2489.36\t1378.418333\n6\t852.04\t1879.384167\n6\t975.86\t1879.384167\n6\t1082.92\t1879.384167\n6\t1121.29\t1879.384167\n6\t1133.12\t1879.384167\n6\t1558.44\t1879.384167\n6\t1581.26\t1879.384167\n6\t1654.62\t1879.384167\n6\t2408.41\t1879.384167\n6\t3334.37\t1879.384167\n6\t4855.76\t1879.384167\n7\t655.33\t1433.785833\n7\t691.77\t1433.785833\n7\t750.55\t1433.785833\n7\t862.94\t1433.785833\n7\t868.32\t1433.785833\n7\t992.39\t1433.785833\n7\t1209.78\t1433.785833\n7\t1694.72\t1433.785833\n7\t2081.47\t1433.785833\n7\t2257.98\t1433.785833\n7\t2383.35\t1433.785833\n7\t2756.83\t1433.785833\n8\t54.86\t410.067500\n8\t183.63\t410.067500\n8\t192.44\t410.067500\n8\t271.04\t410.067500\n8\t472.18\t410.067500\n8\t492.97\t410.067500\n8\t644.56\t410.067500\n8\t669.81\t410.067500\n8\t676.94\t410.067500\n9\t479.27\t1306.207500\n9\t572.00\t1306.207500\n9\t742.40\t1306.207500\n9\t759.92\t1306.207500\n9\t805.90\t1306.207500\n9\t993.12\t1306.207500\n9\t1030.85\t1306.207500\n9\t1764.83\t1306.207500\n9\t1826.50\t1306.207500\n9\t2065.07\t1306.207500\n9\t2158.76\t1306.207500\n9\t2475.87\t1306.207500\n10\t475.18\t1442.284167\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q64.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<product_name:string,store_name:string,store_zip:string,b_street_number:string,b_streen_name:string,b_city:string,b_zip:string,c_street_number:string,c_street_name:string,c_city:string,c_zip:string,syear:int,cnt:bigint,s1:decimal(17,2),s2:decimal(17,2),s3:decimal(17,2),s1:decimal(17,2),s2:decimal(17,2),s3:decimal(17,2),syear:int,cnt:bigint>\n-- !query output\nablepricallyantiought                             \tation\t35709     \t71        \tRiver River\tFriendship\t34536     \tNULL\tNULL\tNewport\tNULL\t1999\t1\t22.60\t38.87\t0.00\t17.09\t26.31\t0.00\t2000\t1\nablepricallyantiought                             \tation\t35709     \t996       \tNULL\tBridgeport\t65817     \t752       \tLakeview Lincoln\tFriendship\t74536     \t1999\t1\t15.78\t24.93\t0.00\t17.09\t26.31\t0.00\t2000\t1\nablepricallyantiought                             \tbar\t31904     \t128       \tEast \tFranklin\t19101     \t990       \t2nd \tFriendship\t94536     \t1999\t1\t54.76\t78.30\t0.00\t15.80\t23.54\t0.00\t2000\t1\nationbarpri                                       \tation\t35709     \t362       \tCentral Ridge\tOakland\t69843     \t666       \t13th Ridge\tShiloh\t29275     \t1999\t1\t74.00\t95.46\t0.00\t11.32\t20.94\t0.00\t2000\t1\nationbarpri                                       \tese\t31904     \t759       \tElm Pine\tBelmont\t20191     \t35        \tMadison \tWaterloo\t31675     \t1999\t1\t12.92\t22.22\t0.00\t83.87\t147.61\t0.00\t2000\t1\nationbarpri                                       \tese\t31904     \t759       \tElm Pine\tBelmont\t20191     \t35        \tMadison \tWaterloo\t31675     \t1999\t1\t12.92\t22.22\t0.00\t24.15\t36.70\t0.00\t2000\t1\nationbarpri                                       \tought\t31904     \t754       \tNULL\tNULL\t65804     \t897       \t8th \tAshland\t54244     \t1999\t1\t74.70\t90.38\t0.00\t28.08\t38.18\t0.00\t2000\t1\nationbarpri                                       \tought\t31904     \t754       \tNULL\tNULL\t65804     \t897       \t8th \tAshland\t54244     \t1999\t1\t74.70\t90.38\t0.00\t12.02\t12.74\t0.00\t2000\t1\nationbarpri                                       \tought\t31904     \t754       \tNULL\tNULL\t65804     \t897       \t8th \tAshland\t54244     \t1999\t1\t74.70\t90.38\t0.00\t56.60\t63.39\t0.00\t2000\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q65.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<s_store_name:string,i_item_desc:string,revenue:decimal(17,2),i_current_price:decimal(7,2),i_wholesale_cost:decimal(7,2),i_brand:string>\n-- !query output\nable\tAcademic concerns help during a courses. Hard, working-class types concentrate in a costs. Commercial shares work directly additional, ordinary orders. Profi\t29.50\t3.57\t2.92\tamalgamalg #x                                     \nable\tAvailable ministers s\t20.59\t7.69\t3.99\texportiimporto #x                                 \nable\tBefore main women cover so rive\t35.39\t4.63\t2.36\timportoamalg #x                                   \nable\tDays will lift for a months. Public differences could like\t30.44\t1.78\t0.99\timportoexporti #x                                 \nable\tFields expire perhaps available schools. Common machines allow later employees; lawyers will pull worldwide objectives. Hidden orders may turn well fo\t19.20\t88.16\t54.65\timportoimporto #x                                 \nable\tGeneral, sure features drop for the time being integrated proposals. Occasionally eastern generations might not arrive today ever vast opportunities. Soft attacks used to sell \t31.34\t1.16\t0.90\texportiexporti #x                                 \nable\tLost, individual parents rally. Then various differences shall fill never private strong courts. Successfully global directors clean medical\t2.94\t8.89\t5.95\tedu packimporto #x                                \nable\tMembers say then authorities. Various wome\t32.82\t1.34\t1.17\tamalgamalgamalg #x                                \nable\tNational, round fields would not accomp\t32.77\t0.17\t0.05\tamalgnameless #x                                 \nable\tNew measures shall pay under a agencies; comparatively heavy police shall beat similarly concepts. However japanese times cannot check like a police. Long, long-term auth\t21.49\t1.87\t1.19\tamalgnameless #x                                  \nable\tOnly concerned times used to know however in the trees. Developers might not wear in the times. Studies see far variations. Calculations must not transport hardl\t32.50\t0.15\t0.12\tcorpmaxi #x                                       \nable\tOnly, foreign problems make in a women. Naturally very countries will beli\t13.72\t7.58\t5.15\tscholarunivamalg #x                              \nable\tPerfect grants fight highly as great minutes. Severe, available millions like counties. Young legs cook however from a years. Early armed services reject yet with \t23.87\t4.31\t2.84\tscholarnameless #x                                \nable\tPopular, ethical types must see human, central steps. Other options spare products; large, odd machines must fill clear, public users. Away local teachers make l\t25.26\t8.08\t5.73\tamalgexporti #x                                   \nable\tPrivileges cut perhaps reasons. Ideas finish times. Women envy general programmes. Hands shall unveil never to a facilities; official proposals conform. Scot\t18.45\t7.52\t4.73\tunivmaxi #x                                       \nable\tProbably suitable compan\t21.47\t2.39\t1.29\timportobrand #x                                   \nable\tRoyal speeches take evil, front margins. For example hard events ought to go angles. Possible, foreign lakes shall not reconsider. Other honours hear momen\t7.23\t8.13\t2.84\tscholarnameless #x                                \nable\tRules offer. 
Important, italian goo\t9.19\t4.06\t2.51\tscholarmaxi #x                                   \nable\tShips could not introduce as. Complete equations take different, european names. Respondents would help fine styles; really formal workers \t32.96\t2.55\t2.04\tedu packunivamalg #x                             \nable\tStores visit values. Others cannot hang around rather civil brothers. Direct systems go then free, other instructions. Difficult, top feet will al\t33.93\t13.91\t6.39\texportinameless #x                                \nable\tSystematic \t12.54\t2.98\t2.17\tamalgbrand #x                                     \nable\tTimes concentrate religious forms. Soon social agents understand again on a officials. Miles w\t25.36\t5.16\t2.58\timportoexporti #x                                 \nation\tAreas establish in \t28.85\t93.54\t69.21\tamalgunivamalg #x                                \nation\tArrangements\t26.15\t0.27\t0.20\tedu packunivamalg #x                             \nation\tBoring areas used to print; companies delegate lines. Clients shall amount also then\t18.54\t3.84\t1.84\timportoexporti #x                                 \nation\tDeliberately ordinary procedures will not pay by a months. Feet reach very s\t34.44\t9.43\t5.46\tunivunivamalg #x                                  \nation\tEars might remind far charges. Pleased years discharge oth\t24.58\t2.73\t1.22\tedu packbrand #x                                  \nation\tHowever old figures ask only good, large sources. Yet naked researchers shall deal to a women. Right, common miles describe there also prime bags. Readily significant shares\t21.02\t7.78\t4.97\tbrandbrand #x                                     \nation\tJust possible women say. Reasonably strong employers light almost degrees. Palestinian, smart rules help really visual\t33.67\t3.71\t2.26\tscholarbrand #x                                  \nation\tLocal years may not seek formally hard, interesting properties. Local values serve. Nevertheless private tales c\t28.58\t8.36\t5.68\tcorpunivamalg #x                                  \nation\tMuch present elements regard previously glorious homes. Important, royal judges can ad\t12.98\t8.12\t6.41\tedu packscholar #x                                \nation\tPolice succeed schools; supplies calculate far countries; new words move shares; officers must complete years. Asian things may bear warm things. Aw\t29.31\t6.66\t2.26\tunivnameless #x                                   \nation\tRespondents see also. Sure, american horses must go actually election\t34.94\t0.85\t0.50\texportiamalg #x                                   \nation\tSmall, japanese rights will not think enough on the cars. Very fond hospitals may choose originally. Right, other businesses relish. As large decades include federal tears. Usual, important quar\t21.23\t3.63\t1.81\tamalgscholar #x                                   \nation\tTechnical, open seats used to become accordingly. Real, actual qualifications may not carry highly interesting others. Wide, sexual knees may stay expenses. Labour, american\t30.78\t0.61\t0.51\texportiedu pack #x                                \nation\tUltimately good sets could go short, early examinations. Things ought to know relatively. Linguistic, applicable children establish curiou\t35.23\t6.22\t3.54\timportoimporto #x                                 \nation\tUnacceptable flowers should not give reasonable, ethnic governments. 
Employees shall complain \t35.82\t8.39\t5.87\tcorpmaxi #x                                       \nbar\tAlso coming years need with a faces; fresh poems judge for example already thick choices. Hands help individual, relevant samples. Together british fingers would not spe\t23.68\t2.15\t0.98\tedu packexporti #x                                \nbar\tBad, recent right\t31.34\t0.96\t0.57\timportoimporto #x                                 \nbar\tCards should strike largely in a concessions. Still true signs might talk; essentially ro\t29.21\t55.23\t48.05\timportoamalg #x                                   \nbar\tCases must not spend on\t14.60\t1.30\t1.17\timportoimporto #x                                 \nbar\tCold, old days stem thereby difficult, nuclear men; likely contents shall threaten often outer years. All real or\t3.53\t9.08\t3.35\tedu packnameless #x                               \nbar\tConscious, solar ambitions support outside countries; warm facilities rise occupations. Appropriate columns grow. Availabl\t26.55\t3.35\t2.84\timportonameless #x                                \nbar\tGeneral, legal businesses should use completely expensive teachers. Linguistic friends vote problem\t0.21\t0.53\t0.32\tscholarbrand #x                                   \nbar\tHigh, official employees shall not start too left circumstances. Patients used to touch obviously popular, senior members. British, impossible theories make only. Young, international wo\t28.68\t4.85\t3.63\tedu packmaxi #x                                   \nbar\tHowever old hours ma\t22.18\t8.84\t5.65\texportimaxi #x                                    \nbar\tIllegal technologies might distinguish that on a change\t12.14\t2.73\t1.28\tnamelessmaxi #x                                  \nbar\tNever various elements play other rights\t30.37\t0.37\t0.28\tedu packscholar #x                                \nbar\tNewspapers would ensure certainly short inadequate problems. Bedrooms would argue however halfway bad coun\t29.16\t2.78\t1.69\tscholarcorp #x                                    \nbar\tOnly groups will not prove united, furious results. Papers shall think in a opportunities; techniques decide that. American, initial forces might mean previous books. That delighted arts give so dimen\t28.98\t2.78\t1.64\texportiimporto #x                                 \nbar\tOpen prod\t11.05\t2.74\t1.53\tedu packunivamalg #x                              \nbar\tPresidential, mild tests justify yesterday unusual points. Notable individuals can set only external trousers. Here french letters may photogra\t27.05\t2.44\t2.17\timportoimporto #x                                 \nbar\tPrices acquire more out of a christians. Efficiently local prices \t20.48\t2.11\t0.78\texportimaxi #x                                    \nbar\tRare, full workers stay plant\t8.61\t0.55\t0.34\texportiimporto #x                                 \nbar\tSocial pieces become; reservations rescue probably old hopes. Different, high records buy just general centuries. Recently industrial relationships cannot\t16.63\t9.15\t5.03\timportoedu pack #x                                \nbar\tThough private depths accomplis\t12.68\t0.58\t0.23\tamalgamalgamalg #x                               \nbar\tThus certain stars appear totally even local guests. 
Urban friends might not take properly various vehicles\t35.61\t4.55\t2.09\tamalgunivamalg #x                                 \nbar\tTraditional legs pull at least better difficult circumstances; other, inner clients step burning arms; able, numerous weapons keep li\t3.50\t45.72\t20.11\tamalgscholar #x                                   \nbar\tUnknown topics ought to answer far present pictures. Estimated considerations might meet \t32.53\t2.55\t1.32\texportiamalg #x                                   \nbar\tWidely splendid others deprive only. Different, main soldiers discover then other periods. Too main birds must change public, terrible houses. Different, armed females may foster; science\t22.86\t4.26\t3.36\tamalgunivamalg #x                                 \neing\tFormerly central designers must not save. Scottish, small horses elicit men. British, fine companies pay little taxes; here pure lakes should benefit however small top countr\t33.81\t0.51\t0.31\texportiamalg #x                                   \neing\tForms shall involve then normal bodies. Left words may find\t11.97\t2.02\t0.64\tedu packexporti #x                                \neing\tFront elections ensure to a adults; valuable moments decide in a aspects. Marked books stand rooms. Expectations ought\t37.36\t59.51\t43.44\tamalgimporto #x                                   \neing\tGradual volunteers keep bc months. Calls get pleasantly questions. Repre\t12.82\t1.61\t1.07\tedu packexporti #x                                \neing\tGradually new flowers support suddenly. Left, light errors ought to steal other memories. Periods should not say never to a nurses;\t31.57\t2.05\t1.12\tedu packamalg #x                                  \neing\tGreat, possible children used to\t11.87\t4.00\t2.60\tnamelessnameless #x                               \neing\tNeat, desirable words make especially gradu\t25.33\t7.11\t3.76\tamalgnameless #x                                  \neing\tNever top observations spend appropriate, common states. Homes make. There available hospitals will appreciate away upon a years. Roots hang \t31.15\t2.07\t0.84\tedu packnameless #x                               \neing\tOften\t21.89\t7.85\t4.16\tamalgscholar #x                                   \neing\tPoor years produce questions. Marine events ensure inner systems. Individuals could kill to a managers. Drugs should not authorise thankfully traditional, strong holders. Just amazing injuri\t19.66\t4.11\t3.16\timportoimporto #x                                 \neing\tSensibly foreign parties must not suffer well indian personal students. About private cattle handle th\t4.79\t2.42\t1.01\tamalgscholar #x                                   \neing\tSilver, political interviews might know in common families. Far possible houses shall insist in a places. Whole, political gardens would adopt eggs. Others might live even offi\t33.77\t6.13\t2.88\tnamelessmaxi #x                                   \neing\tSocial areas undergo actually within a plants. More communist days would play on a books. Later educational policies shoul\t30.97\t5.52\t2.92\texportiunivamalg #x                               \neing\tSoon artificial notions think no longer lights; clearly late members could not trace good countries. Cultures can proceed away wealthy \t0.98\t2.38\t1.68\texportibrand #x                                   \nese\tAdditional, interior police provide words. Different, long qualities answer really concerns; then other words state dry, political services. 
Awfully di\t15.35\t9.78\t7.53\tcorpbrand #x                                      \nese\tConcer\t32.51\t9.66\t8.11\tamalgamalg #x                                     \nese\tConstitutional, good pupils might not begin below level devices. External savings fit hardly. Parents shall dry. Actually literary companies improve a\t17.22\t4.22\t2.40\tscholarunivamalg #x                               \nese\tEarnings feel possibilities. Single, poor problems make full, sho\t29.18\t2.75\t1.62\tmaximaxi #x                                       \nese\tFinancial, likely artists assume now c\t5.10\t5.63\t2.42\timportoamalgamalg #x                             \nese\tForces enter efficient things. S\t23.42\t2.64\t0.81\tamalgamalgamalg #x                               \nese\tForward new lights should not meet once yesterday national buildings. Natural, australian eyes may not fetch progressively unfair\t29.40\t7.87\t5.66\tamalgimporto #x                                   \nese\tFresh respondents would not encourage in a years.\t26.81\t3.90\t1.67\timportoimporto #x                                 \nese\tGoals ought to strengthen. Early industries would take. Early men could hear then. Difficult, new machines endorse \t9.97\t1.24\t0.57\timportoamalg #x                                   \nese\tIdentical solicitors must maintain sources. Factors take already unusual minutes. Just various sales sell agricultural, long states. \t27.99\t3.77\t1.16\tmaxinameless #x                                   \nese\tIn order turkish meanings should involve nevertheless to a inches. Common, free colleagues may know other, safe services. So sure plates might go hidden, formidable powers. Domes\t33.33\t1.12\t0.35\tnamelesscorp #x                                   \nese\tJust associated missiles could\t20.43\t1.23\t0.46\tedu packedu pack #x                               \nese\tMeanwhile single companies shall go either in a statements. Vo\t9.56\t7.96\t3.42\tcorpbrand #x                                      \nese\tModels feel figures. Homes get still at the positions. Political, other makers will make there in the servants; necessary, technical markets should not cope also; warm, \t32.66\t6.16\t4.49\tunivbrand #x                                      \nese\tNew interests must turn largely. High, essential females mark. Gradual police would not exercise old, national\t23.00\t6.87\t2.33\tedu packcorp #x                                   \nese\tParliamentary pieces shine never tragic patterns. Great human eyes would not get groups. Plant\t9.58\t6.03\t4.82\tscholarmaxi #x                                    \nese\tPhotographs get everywhere merely \t13.94\t3.75\t2.96\texportiamalg #x                                   \nese\tPlates shall think; new, economic pupils collect entirely. Really powerful books develop yet girls. Best unlik\t33.09\t3.44\t2.99\tnamelessbrand #x                                  \nese\tProposals should involve more soviet, young islands. Little resources try even books. Fundamental systems end recent, total provisions. Working-class matte\t30.46\t5.15\t3.60\texportischolar #x                                 \nese\tPublic operations need wonderfully improved routes; days may not admit for a circles. Able, wise girls lay later. Authorities know really reasons. Scottish accountants take customs. \t2.20\t1.40\t0.99\tamalgimporto #x                                   \nese\tSharply bright systems used to want. Other projects should benefit. 
Common parts use\t35.01\t1.16\t0.39\timportoexporti #x                                 \nese\tShort, known programmes reject quite documents. Really interna\t7.78\t36.94\t22.90\tunivamalgamalg #x                                 \nese\tSmall years could spend soon\t31.85\t0.55\t0.39\timportoscholar #x                                 \nese\tSpecial gates decide mutually. Current, appropriate terms feel thus better royal arms. Children starve; likely girls make yesterday local workers. \t26.14\t2.93\t1.11\texportischolar #x                                 \nese\tThere round authorities will show in a seasons. Other, far firms miss very mad, private lips. Powerful, m\t16.77\t5.93\t4.44\timportoamalg #x                                   \nese\tWomen get also chairs. Full, integrated paintings sit \t1.47\t6.34\t2.15\tscholarnameless #x\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q66.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<w_warehouse_name:string,w_warehouse_sq_ft:int,w_city:string,w_county:string,w_state:string,w_country:string,ship_carriers:string,year:int,jan_sales:decimal(38,2),feb_sales:decimal(38,2),mar_sales:decimal(38,2),apr_sales:decimal(38,2),may_sales:decimal(38,2),jun_sales:decimal(38,2),jul_sales:decimal(38,2),aug_sales:decimal(38,2),sep_sales:decimal(38,2),oct_sales:decimal(38,2),nov_sales:decimal(38,2),dec_sales:decimal(38,2),jan_sales_per_sq_foot:decimal(38,12),feb_sales_per_sq_foot:decimal(38,12),mar_sales_per_sq_foot:decimal(38,12),apr_sales_per_sq_foot:decimal(38,12),may_sales_per_sq_foot:decimal(38,12),jun_sales_per_sq_foot:decimal(38,12),jul_sales_per_sq_foot:decimal(38,12),aug_sales_per_sq_foot:decimal(38,12),sep_sales_per_sq_foot:decimal(38,12),oct_sales_per_sq_foot:decimal(38,12),nov_sales_per_sq_foot:decimal(38,12),dec_sales_per_sq_foot:decimal(38,12),jan_net:decimal(38,2),feb_net:decimal(38,2),mar_net:decimal(38,2),apr_net:decimal(38,2),may_net:decimal(38,2),jun_net:decimal(38,2),jul_net:decimal(38,2),aug_net:decimal(38,2),sep_net:decimal(38,2),oct_net:decimal(38,2),nov_net:decimal(38,2),dec_net:decimal(38,2)>\n-- !query output\nJust good amou\t933435\tMidway\tWilliamson County\tTN\tUnited States\tDHL,BARIAN\t2001\t9246702.65\t4973596.11\t8243830.24\t11536095.04\t8848441.04\t6589907.46\t14946163.85\t24260830.90\t23156311.42\t21897174.69\t33046376.63\t24591398.12\t9.906102353136\t5.328272573880\t8.831713231238\t12.358755606978\t9.479439961004\t7.059846116762\t16.012002817550\t25.990916239481\t24.807631404437\t23.458703273394\t35.402975708003\t26.345056827738\t29452942.94\t19052763.82\t30303577.65\t28563648.00\t19179558.75\t25111623.08\t26955891.11\t62345171.47\t55967232.63\t60120132.61\t94483100.18\t85104727.47\nMatches produce\t198821\tMidway\tWilliamson County\tTN\tUnited States\tDHL,BARIAN\t2001\t8171037.69\t7444707.30\t8904945.91\t8019953.83\t13081755.25\t8229030.48\t11805829.83\t25145588.96\t15707862.93\t18731722.90\t27837144.77\t33176280.56\t41.097457964702\t37.444270474447\t44.788759285990\t40.337559060662\t65.796647486935\t41.389141388486\t59.379189471937\t126.473506118569\t79.005049416309\t94.214006065758\t140.011089220958\t166.865072401809\t25824877.19\t17086383.78\t24620145.44\t28759520.46\t24988081.00\t21709799.17\t25386565.52\t54025360.94\t54325725.26\t59965431.57\t75829271.22\t85511883.03\nOperations\t500020\tFairview\tWilliamson County\tTN\tUnited States\tDHL,BARIAN\t2001\t10409399.91\t16141530.81\t8148742.71\t6680962.19\t11728095.96\t9767093.38\t11458408.56\t20106135.67\t25314120.67\t22893962.36\t35876332.21\t28738513.96\t20.817967101315\t32.281770349186\t16.296833546658\t13.361389924403\t23.455253709851\t19.533405423783\t22.915900483980\t40.210662913483\t50.626216291349\t45.786093276269\t71.749794428223\t57.474728930843\t18813981.10\t29928354.06\t20237634.01\t22392433.57\t25938775.04\t28927292.16\t25641922.14\t60075246.63\t49668486.97\t68646750.13\t88585464.12\t75478344.38\nSelective,\t720621\tFairview\tWilliamson County\tTN\tUnited 
States\tDHL,BARIAN\t2001\t10540885.44\t8098669.29\t9370217.14\t9753408.96\t9006503.21\t7725824.55\t12918857.72\t27286339.36\t17315063.55\t22573845.28\t39002921.92\t33502638.23\t14.627502445808\t11.238458621106\t13.002975405935\t13.534727630752\t12.498252493335\t10.721064956475\t17.927395565769\t37.865034962900\t24.027975246350\t31.325544606665\t54.124042901886\t46.491343202599\t27419902.20\t22882381.71\t25735954.25\t18682264.05\t24373316.64\t27192350.35\t33019111.44\t54462656.98\t50994470.37\t58509681.21\t88329849.81\t100833156.77\nSignificantly\t200313\tFairview\tWilliamson County\tTN\tUnited States\tDHL,BARIAN\t2001\t9802846.67\t12625248.69\t6602477.32\t11033269.92\t6082390.20\t15618748.55\t5368995.93\t19287503.44\t21876336.16\t23132474.86\t35201297.49\t21425392.36\t48.937645934113\t63.027605247787\t32.960802943393\t55.080149166554\t30.364430666008\t77.971717012875\t26.803032903506\t96.286828313689\t109.210765951287\t115.481645524754\t175.731467703044\t106.959570072836\t23574229.26\t27163084.88\t24006453.37\t23356164.02\t19405795.41\t35795815.91\t25006450.52\t58626312.30\t57258316.63\t60471338.55\t83474267.08\t73080652.01\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q67.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_category:string,i_class:string,i_brand:string,i_product_name:string,d_year:int,d_qoy:int,d_moy:int,s_store_id:string,sumsales:decimal(28,2),rk:int>\n-- !query output\nNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t672624.93\t5\nNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t1328761.89\t4\nNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t1867833.60\t3\nNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t2951322.99\t2\nNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t1012102066.67\t1\nNULL\tNULL\tNULL\tNULL\t2000\tNULL\tNULL\tNULL\t672624.93\t5\nNULL\tNULL\tNULL\tNULL\t2000\t1\tNULL\tNULL\t109354.01\t39\nNULL\tNULL\tNULL\tNULL\t2000\t1\t1\tNULL\t56696.62\t88\nNULL\tNULL\tNULL\tNULL\t2000\t2\tNULL\tNULL\t117554.43\t24\nNULL\tNULL\tNULL\tNULL\t2000\t2\t6\tNULL\t50119.54\t93\nNULL\tNULL\tNULL\tNULL\t2000\t3\tNULL\tNULL\t147622.32\t12\nNULL\tNULL\tNULL\tNULL\t2000\t3\t8\tNULL\t64629.56\t85\nNULL\tNULL\tNULL\tNULL\t2000\t3\t9\tNULL\t59755.91\t86\nNULL\tNULL\tNULL\tNULL\t2000\t4\tNULL\tNULL\t298094.17\t7\nNULL\tNULL\tNULL\tNULL\t2000\t4\t10\tNULL\t96651.33\t58\nNULL\tNULL\tNULL\tNULL\t2000\t4\t10\tAAAAAAAAKAAAAAAA\t41992.37\t100\nNULL\tNULL\tNULL\tNULL\t2000\t4\t11\tNULL\t110100.01\t35\nNULL\tNULL\tNULL\tNULL\t2000\t4\t12\tNULL\t91342.83\t68\nNULL\tNULL\tNULL\tationesebareing                                   \tNULL\tNULL\tNULL\tNULL\t136570.22\t16\nNULL\tNULL\tNULL\tationesebareing                                   \t2000\tNULL\tNULL\tNULL\t136570.22\t16\nNULL\tNULL\tNULL\tationesebareing                                   \t2000\t4\tNULL\tNULL\t54262.81\t90\nNULL\tNULL\tNULL\teseationablebarought                              \tNULL\tNULL\tNULL\tNULL\t106226.70\t44\nNULL\tNULL\tNULL\teseationablebarought                              \t2000\tNULL\tNULL\tNULL\t106226.70\t44\nNULL\tNULL\tNULL\teseationablebarought                              \t2000\t4\tNULL\tNULL\t46143.77\t96\nNULL\tNULL\tNULL\tn stought                                         \tNULL\tNULL\tNULL\tNULL\t90518.72\t69\nNULL\tNULL\tNULL\tn stought                                         \t2000\tNULL\tNULL\tNULL\t90518.72\t69\nNULL\tNULL\tNULL\toughtableantiable                                 \tNULL\tNULL\tNULL\tNULL\t96782.30\t52\nNULL\tNULL\tNULL\toughtableantiable                                 \t2000\tNULL\tNULL\tNULL\t96782.30\t52\nNULL\tNULL\tNULL\toughtableantiable                                 \t2000\t4\tNULL\tNULL\t49543.67\t94\nNULL\tNULL\tNULL\toughtablen stationought                           \tNULL\tNULL\tNULL\tNULL\t105616.28\t46\nNULL\tNULL\tNULL\toughtablen stationought                           \t2000\tNULL\tNULL\tNULL\t105616.28\t46\nNULL\tNULL\tNULL\toughtablen stationought                           \t2000\t4\tNULL\tNULL\t66351.81\t84\nNULL\tNULL\tNULL\toughteseoughtation                                \tNULL\tNULL\tNULL\tNULL\t120422.74\t22\nNULL\tNULL\tNULL\toughteseoughtation                                \t2000\tNULL\tNULL\tNULL\t120422.74\t22\nNULL\tNULL\tNULL\toughteseoughtation                                \t2000\t3\tNULL\tNULL\t51575.65\t92\nNULL\tNULL\tNULL\toughteseoughtation                                \t2000\t4\tNULL\tNULL\t53356.26\t91\nNULL\tNULL\tbrandcorp #x                                      \tNULL\tNULL\tNULL\tNULL\tNULL\t110558.31\t32\nNULL\tNULL\tbrandcorp #x                                      \tesepriantieing                                    
\tNULL\tNULL\tNULL\tNULL\t110558.31\t32\nNULL\tNULL\tbrandcorp #x                                      \tesepriantieing                                    \t2000\tNULL\tNULL\tNULL\t110558.31\t32\nNULL\tNULL\tbrandcorp #x                                      \tesepriantieing                                    \t2000\t4\tNULL\tNULL\t57539.32\t87\nNULL\tNULL\tcorpamalgamalg #x                                 \tNULL\tNULL\tNULL\tNULL\tNULL\t178041.47\t9\nNULL\tNULL\tcorpamalgamalg #x                                 \tcallyationationantiought                          \tNULL\tNULL\tNULL\tNULL\t178041.47\t9\nNULL\tNULL\tcorpamalgamalg #x                                 \tcallyationationantiought                          \t2000\tNULL\tNULL\tNULL\t178041.47\t9\nNULL\tNULL\tcorpamalgamalg #x                                 \tcallyationationantiought                          \t2000\t4\tNULL\tNULL\t92734.16\t59\nNULL\tNULL\tcorpmaxi #x                                       \tNULL\tNULL\tNULL\tNULL\tNULL\t140599.56\t13\nNULL\tNULL\tcorpmaxi #x                                       \tcallyeingpri                                      \tNULL\tNULL\tNULL\tNULL\t140599.56\t13\nNULL\tNULL\tcorpmaxi #x                                       \tcallyeingpri                                      \t2000\tNULL\tNULL\tNULL\t140599.56\t13\nNULL\tNULL\tcorpmaxi #x                                       \tcallyeingpri                                      \t2000\t4\tNULL\tNULL\t68417.63\t83\nNULL\tNULL\tcorpmaxi #x                                       \tcallyeingpri                                      \t2000\t4\t11\tNULL\t44322.49\t99\nNULL\tNULL\timportoamalg #x                                   \tNULL\tNULL\tNULL\tNULL\tNULL\t109872.37\t36\nNULL\tNULL\timportoamalg #x                                   \tNULL\tNULL\tNULL\tNULL\tNULL\t109872.37\t36\nNULL\tNULL\timportoamalg #x                                   \tNULL\t2000\tNULL\tNULL\tNULL\t109872.37\t36\nNULL\taccessories                                       \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t102003.04\t48\nNULL\taccessories                                       \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t102003.04\t48\nNULL\taccessories                                       \tNULL\tprin stn stoughtought                             \tNULL\tNULL\tNULL\tNULL\t102003.04\t48\nNULL\taccessories                                       \tNULL\tprin stn stoughtought                             \t2000\tNULL\tNULL\tNULL\t102003.04\t48\nNULL\taccessories                                       \tNULL\tprin stn stoughtought                             \t2000\t4\tNULL\tNULL\t56122.71\t89\nNULL\tathletic                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t72722.46\t79\nNULL\tathletic                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t72722.46\t79\nNULL\tathletic                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t72722.46\t79\nNULL\tathletic                                          \tNULL\tNULL\t2000\tNULL\tNULL\tNULL\t72722.46\t79\nNULL\tbaseball                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t96654.08\t54\nNULL\tbaseball                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t96654.08\t54\nNULL\tbaseball                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t96654.08\t54\nNULL\tbaseball                                          \tNULL\tNULL\t2000\tNULL\tNULL\tNULL\t96654.08\t54\nNULL\tbaseball                                      
    \tNULL\tNULL\t2000\t3\tNULL\tNULL\t44473.92\t98\nNULL\tcountry                                           \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t114628.36\t25\nNULL\tcountry                                           \timportoscholar #x                                 \tNULL\tNULL\tNULL\tNULL\tNULL\t114628.36\t25\nNULL\tcountry                                           \timportoscholar #x                                 \tNULL\tNULL\tNULL\tNULL\tNULL\t114628.36\t25\nNULL\tcountry                                           \timportoscholar #x                                 \tNULL\t2000\tNULL\tNULL\tNULL\t114628.36\t25\nNULL\tcountry                                           \timportoscholar #x                                 \tNULL\t2000\t4\tNULL\tNULL\t44753.51\t97\nNULL\tdresses                                           \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t77687.53\t72\nNULL\tdresses                                           \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t192205.73\t8\nNULL\tdresses                                           \tNULL\toughteingantieing                                 \tNULL\tNULL\tNULL\tNULL\t77687.53\t72\nNULL\tdresses                                           \tNULL\toughteingantieing                                 \t2000\tNULL\tNULL\tNULL\t77687.53\t72\nNULL\tdresses                                           \tamalgamalg #x                                     \tNULL\tNULL\tNULL\tNULL\tNULL\t114518.20\t29\nNULL\tdresses                                           \tamalgamalg #x                                     \tNULL\tNULL\tNULL\tNULL\tNULL\t114518.20\t29\nNULL\tdresses                                           \tamalgamalg #x                                     \tNULL\t2000\tNULL\tNULL\tNULL\t114518.20\t29\nNULL\tdresses                                           \tamalgamalg #x                                     \tNULL\t2000\t4\tNULL\tNULL\t49313.59\t95\nNULL\tguns                                              \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t92681.88\t60\nNULL\tguns                                              \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t92681.88\t60\nNULL\tguns                                              \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t92681.88\t60\nNULL\tguns                                              \tNULL\tNULL\t2000\tNULL\tNULL\tNULL\t92681.88\t60\nNULL\tinfants                                           \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t129135.94\t18\nNULL\tinfants                                           \timportoexporti #x                                 \tNULL\tNULL\tNULL\tNULL\tNULL\t129135.94\t18\nNULL\tinfants                                           \timportoexporti #x                                 \tantiationeseese                                   \tNULL\tNULL\tNULL\tNULL\t129135.94\t18\nNULL\tinfants                                           \timportoexporti #x                                 \tantiationeseese                                   \t2000\tNULL\tNULL\tNULL\t129135.94\t18\nNULL\tinfants                                           \timportoexporti #x                                 \tantiationeseese                                   \t2000\t4\tNULL\tNULL\t80824.80\t71\nNULL\tlighting                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t107477.64\t40\nNULL\tlighting                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t107477.64\t40\nNULL\tlighting                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t107477.64\t40\nNULL\tlighting 
                                         \tNULL\tNULL\t2000\tNULL\tNULL\tNULL\t107477.64\t40\nNULL\tmens                                              \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t91415.81\t64\nNULL\tmens                                              \timportoedu pack #x                                \tNULL\tNULL\tNULL\tNULL\tNULL\t91415.81\t64\nNULL\tmens                                              \timportoedu pack #x                                \toughtablen steseought                             \tNULL\tNULL\tNULL\tNULL\t91415.81\t64\nNULL\tmens                                              \timportoedu pack #x                                \toughtablen steseought                             \t2000\tNULL\tNULL\tNULL\t91415.81\t64\nNULL\tshirts                                            \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t72876.08\t75\nNULL\tshirts                                            \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t72876.08\t75\nNULL\tshirts                                            \tNULL\toughtcallyeseantiought                            \tNULL\tNULL\tNULL\tNULL\t72876.08\t75\nNULL\tshirts                                            \tNULL\toughtcallyeseantiought                            \t2000\tNULL\tNULL\tNULL\t72876.08\t75\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q68.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_last_name:string,c_first_name:string,ca_city:string,bought_city:string,ss_ticket_number:int,extended_price:decimal(17,2),extended_tax:decimal(17,2),list_price:decimal(17,2)>\n-- !query output\nNULL\tNULL\tSalem\tLakeside\t2203\t23464.72\t675.04\t46100.94\nNULL\tIra                 \tRiverdale\tOak Ridge\t10508\t32911.54\t938.79\t59812.32\nNULL\tRebecca             \tBelmont\tMarion\t13206\t14180.52\t1072.25\t23887.85\nNULL\tNULL\tPleasant Valley\tLincoln\t20829\t16340.86\t289.50\t34809.84\nNULL\tJoel                \tOak Hill\tClifton\t21390\t19561.57\t939.54\t46319.37\nNULL\tMax                 \tStringtown\tPine Grove\t22049\t20059.77\t974.94\t38268.07\nNULL\tNULL\tBridgeport\tRiverside\t23264\t10699.95\t496.36\t29285.70\nNULL\tNULL\tWhite Oak\tPleasant Hill\t24860\t16258.45\t733.62\t52186.60\nNULL\tRebecca             \tUnion Hill\tFarmington\t26031\t7636.54\t515.78\t13443.39\nNULL\tAlecia              \tJamestown\tOakdale\t30174\t29125.41\t1118.58\t69857.70\nNULL\tRaymond             \tBunker Hill\tClifton\t30655\t27237.08\t1237.59\t51322.22\nNULL\tNULL\tPleasant Valley\tRiley\t30915\t25079.14\t998.46\t35200.27\nNULL\tDavid               \tGlendale\tGreenville\t33716\t15434.87\t1012.60\t30661.71\nNULL\tNULL\tNew Hope\tFarmington\t38760\t29150.85\t647.52\t50433.60\nNULL\tNorma               \tHamilton\tMacedonia\t41929\t35397.41\t1268.87\t55360.80\nNULL\tNULL\tWildwood\tAllison\t45786\t10448.67\t492.27\t32424.46\nNULL\tNULL\tGreenville\tSpring Hill\t47915\t26747.77\t512.05\t57910.19\nNULL\tJerome              \tGalena\tGeorgetown\t50559\t33831.95\t890.50\t57970.13\nNULL\tNULL\tWoodlawn\tSpringfield\t59625\t22470.48\t828.96\t54458.90\nNULL\tNULL\tHillcrest\tCordova\t64306\t10534.53\t295.14\t15380.96\nNULL\tJohn                \tStringtown\tArlington\t64522\t21495.01\t783.81\t45067.59\nNULL\tNULL\tGladstone\tSummit\t66488\t20881.36\t605.92\t32362.22\nNULL\tNULL\tHopewell\tRiverdale\t67278\t24471.59\t1549.17\t48406.81\nNULL\tRussel              \tRed Hill\tWoodland\t82061\t17750.10\t1025.09\t40565.68\nNULL\tNULL\tMount Olive\tMaple Grove\t82183\t16184.99\t573.72\t44448.96\nNULL\tNULL\tShady Grove\tSpringdale\t91775\t17103.31\t911.07\t35127.63\nNULL\tNULL\tIndian Village\tGolden\t98530\t18842.65\t1005.52\t40742.92\nNULL\tNULL\tGreenfield\tFarmington\t98985\t13264.63\t589.99\t20576.04\nNULL\tVirginia            \tFive Forks\tOakland\t106543\t24859.75\t1006.88\t39693.34\nNULL\tNULL\tScottsville\tGreenfield\t107236\t14638.50\t802.00\t35356.18\nNULL\tVirginia            \tOak Hill\tAmherst\t109641\t7667.92\t271.98\t30755.88\nNULL\tNULL\tMidway\tNew Hope\t113272\t50295.57\t2963.67\t78595.37\nNULL\tWilliam             \tValley View\tProvidence\t115559\t29947.61\t673.81\t50883.54\nNULL\tNULL\tNewport\tIndian Village\t120966\t15268.90\t721.76\t27022.81\nNULL\tRobert              \tCedar Grove\tRiverside\t122552\t20398.86\t1049.35\t44761.40\nNULL\tNULL\tJamestown\tSpring Valley\t125125\t31795.01\t1364.75\t52740.75\nNULL\tNULL\tLiberty\tOakwood\t128427\t41495.05\t2399.87\t77105.28\nNULL\tBrandon             \tArlington\tWoodland\t135601\t43789.02\t1493.45\t66479.43\nNULL\tJuan                \tMount Pleasant\tGreenwood\t138846\t12326.46\t472.96\t25735.00\nNULL\tNULL\tShirley\tFairview\t143290\t19808.13\t596.98\t34256.82\nNULL\tNULL\tWhite Oak\tFerguson\t146537\t5235.24\t246.72\t11343.46\nNULL\tLuther              \tGreenville\tHillcrest\t149654\t27079.37\t939.22\t41067.50\nNULL\tMichelle            
\tFairview\tHopewell\t149783\t29018.58\t1426.60\t48040.63\nNULL\tAngelica            \tBarnes\tPlainview\t151659\t21082.45\t306.14\t40857.65\nNULL\tDonna               \tCenterville\tFriendship\t152588\t20862.64\t969.20\t46304.16\nNULL\tBrian               \tOak Ridge\tWoodville\t152862\t22493.16\t1611.93\t43153.08\nNULL\tNULL\tStringtown\tLiberty\t158006\t16267.97\t450.16\t25629.75\nNULL\tDaniel              \tGlobe\tSpring Valley\t161922\t15092.01\t847.21\t42715.60\nNULL\tMarlene             \tGeorgetown\tOakland\t163799\t42790.16\t1130.51\t77449.21\nNULL\tDean                \tLakeview\tNew Hope\t167847\t24431.20\t1333.52\t38658.46\nNULL\tNULL\tGreenfield\tHopewell\t168370\t13878.80\t617.56\t31230.84\nNULL\tRobert              \tFour Points\tFarmington\t176911\t14828.21\t674.85\t32517.24\nNULL\tBetty               \tLakewood\tSulphur Springs\t178057\t15008.49\t693.61\t25864.62\nNULL\tNULL\tHarmony\tSpring Valley\t178332\t10288.94\t631.05\t35516.38\nNULL\tNULL\tEnterprise\tBunker Hill\t182211\t14072.57\t405.42\t33771.00\nNULL\tNULL\tSalem\tValley View\t187904\t15702.34\t439.59\t41237.94\nNULL\tFaye                \tWalnut Grove\tOakland\t189770\t18527.90\t1113.75\t39987.40\nNULL\tDebra               \tGreen Acres\tWoodlawn\t192040\t9053.50\t169.31\t13181.96\nNULL\tNULL\tSalem\tAntioch\t192673\t5412.31\t119.61\t12746.52\nNULL\tNULL\tMount Olive\tMount Pleasant\t197746\t32038.93\t1017.39\t87608.72\nNULL\tNancy               \tShady Grove\tHillcrest\t201415\t14985.28\t328.13\t21957.26\nNULL\tScott               \tHillcrest\tLouisville\t203119\t8629.88\t329.70\t14712.89\nNULL\tNULL\tPlainview\tOak Grove\t203425\t33448.32\t1632.67\t62674.71\nNULL\tNULL\tRocky Hill\tProvidence\t211665\t15060.41\t598.75\t29385.31\nNULL\tTonya               \tBuena Vista\tBunker Hill\t213228\t46912.33\t2039.49\t81223.03\nNULL\tTroy                \tPlainview\tAshland\t216821\t45620.34\t2295.90\t74013.12\nNULL\tSandra              \tUnionville\tHopewell\t221077\t8341.60\t527.66\t17817.11\nNULL\tRobert              \tSummit\tFive Points\t224508\t15607.69\t500.96\t37984.93\nNULL\tRebecca             \tUnion Hill\tMountain View\t225489\t35322.48\t1859.03\t58568.59\nNULL\tJack                \tSunnyside\tLakeview\t231229\t16186.51\t577.86\t30596.37\nNULL\tNULL\tBethel\tSunnyside\t233081\t13271.69\t468.80\t32129.60\nNULL\tAlice               \tOak Grove\tMarion\t233656\t30278.85\t1490.14\t65326.68\nNULL\tShayne              \tRiverview\tCrossroads\t234750\t20554.66\t737.10\t52975.61\nNULL\tKevin               \tBuena Vista\tClifton\t239837\t28589.10\t999.94\t48962.74\nAaron                         \tNick                \tHarmony\tEdgewood\t73734\t29649.48\t959.10\t56974.10\nAbney                         \tJanice              \tMount Zion\tEnterprise\t27585\t16154.82\t636.94\t37751.16\nAbraham                       \tGerald              \tPleasant Grove\tMidway\t1779\t8661.58\t330.60\t14811.87\nAbrams                        \tAlma                \tFairfield\tMacedonia\t33078\t28262.54\t808.09\t48263.03\nAdame                         \tBrian               \tMarion\tForest Hills\t142280\t15398.91\t717.67\t52277.17\nAdams                         \tAdam                \tThe Meadows\tGlenwood\t35054\t33420.92\t1477.44\t37326.99\nAdams                         \tPaulette            \tTremont\tLakeside\t82644\t25822.05\t365.13\t46273.62\nAdams                         \tEdwin               \tPlainview\tGreenwood\t108138\t23483.91\t982.40\t35181.82\nAdams                         \tNichole             
\tEdgewood\tFriendship\t171894\t20821.96\t1333.70\t48375.06\nAdams                         \tNULL\tOakland\tOak Ridge\t173206\t18700.18\t1085.29\t29361.60\nAdams                         \tNichole             \tEdgewood\tGreen Acres\t180862\t17519.04\t657.52\t33555.87\nAdams                         \tDonna               \tPlainview\tLiberty\t219849\t18150.78\t448.49\t22444.41\nAguilar                       \tJeannine            \tUnionville\tJackson\t152737\t34732.21\t1299.63\t56844.65\nAhmed                         \tJeffrey             \tStewart\tEdgewood\t202936\t27333.88\t948.83\t39990.68\nAlbert                        \tSally               \tNottingham\tMount Zion\t7000\t11496.97\t305.77\t28524.27\nAlbrecht                      \tBob                 \tMarion\tPleasant Hill\t60194\t23528.46\t1101.20\t43551.97\nAlcorn                        \tJeffery             \tCedar Grove\tSalem\t46467\t17634.79\t785.53\t57718.43\nAlderman                      \tMelanie             \tEdgewood\tMount Zion\t14230\t28157.47\t270.85\t48818.21\nAldridge                      \tDaniel              \tBerlin\tWilson\t114445\t15440.33\t552.85\t22637.40\nAlger                         \tBeverly             \tAllentown\tRiverview\t173449\t24541.75\t797.49\t48511.28\nAllen                         \tBrittany            \tLincoln\tHillcrest\t10696\t17579.89\t595.90\t46637.81\nAllen                         \tYvette              \tProvidence\tFlatwoods\t73587\t8938.13\t561.96\t27604.45\nAllen                         \tLori                \tHamilton\tGreenwood\t138690\t16085.88\t579.59\t35840.88\nAllison                       \tAnya                \tUnion\tAshland\t79895\t9453.18\t499.78\t22112.21\nAlvarez                       \tTerrence            \tConcord\tSpring Valley\t32211\t13206.58\t470.82\t33690.82\nAlvarez                       \tMarie               \tWalnut Grove\tRed Hill\t206011\t25535.02\t1344.58\t63224.42\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q69.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<cd_gender:string,cd_marital_status:string,cd_education_status:string,cnt1:bigint,cd_purchase_estimate:int,cnt2:bigint,cd_credit_rating:string,cnt3:bigint>\n-- !query output\nF\tD\t2 yr Degree         \t1\t500\t1\tLow Risk  \t1\nF\tD\t2 yr Degree         \t1\t3000\t1\tHigh Risk \t1\nF\tD\t2 yr Degree         \t1\t4500\t1\tGood      \t1\nF\tD\t2 yr Degree         \t1\t5000\t1\tLow Risk  \t1\nF\tD\t2 yr Degree         \t2\t6000\t2\tHigh Risk \t2\nF\tD\t2 yr Degree         \t1\t6500\t1\tGood      \t1\nF\tD\t2 yr Degree         \t1\t7000\t1\tHigh Risk \t1\nF\tD\t2 yr Degree         \t1\t8500\t1\tHigh Risk \t1\nF\tD\t2 yr Degree         \t1\t8500\t1\tLow Risk  \t1\nF\tD\t4 yr Degree         \t1\t1000\t1\tHigh Risk \t1\nF\tD\t4 yr Degree         \t1\t2500\t1\tGood      \t1\nF\tD\t4 yr Degree         \t1\t3000\t1\tLow Risk  \t1\nF\tD\t4 yr Degree         \t1\t3500\t1\tHigh Risk \t1\nF\tD\t4 yr Degree         \t1\t3500\t1\tUnknown   \t1\nF\tD\t4 yr Degree         \t1\t4000\t1\tUnknown   \t1\nF\tD\t4 yr Degree         \t1\t4500\t1\tUnknown   \t1\nF\tD\t4 yr Degree         \t1\t6500\t1\tHigh Risk \t1\nF\tD\t4 yr Degree         \t1\t7500\t1\tHigh Risk \t1\nF\tD\t4 yr Degree         \t1\t8500\t1\tLow Risk  \t1\nF\tD\t4 yr Degree         \t1\t9000\t1\tLow Risk  \t1\nF\tD\t4 yr Degree         \t1\t9000\t1\tUnknown   \t1\nF\tD\t4 yr Degree         \t1\t10000\t1\tHigh Risk \t1\nF\tD\tAdvanced Degree     \t2\t1000\t2\tUnknown   \t2\nF\tD\tAdvanced Degree     \t1\t1500\t1\tHigh Risk \t1\nF\tD\tAdvanced Degree     \t1\t1500\t1\tUnknown   \t1\nF\tD\tAdvanced Degree     \t1\t3000\t1\tGood      \t1\nF\tD\tAdvanced Degree     \t1\t3000\t1\tUnknown   \t1\nF\tD\tAdvanced Degree     \t1\t6000\t1\tLow Risk  \t1\nF\tD\tAdvanced Degree     \t1\t8000\t1\tUnknown   \t1\nF\tD\tCollege             \t1\t1000\t1\tLow Risk  \t1\nF\tD\tCollege             \t1\t2000\t1\tHigh Risk \t1\nF\tD\tCollege             \t3\t2500\t3\tHigh Risk \t3\nF\tD\tCollege             \t1\t4000\t1\tHigh Risk \t1\nF\tD\tCollege             \t1\t5500\t1\tHigh Risk \t1\nF\tD\tCollege             \t2\t7500\t2\tUnknown   \t2\nF\tD\tCollege             \t1\t8000\t1\tGood      \t1\nF\tD\tCollege             \t1\t9000\t1\tUnknown   \t1\nF\tD\tPrimary             \t1\t500\t1\tGood      \t1\nF\tD\tPrimary             \t1\t1000\t1\tUnknown   \t1\nF\tD\tPrimary             \t1\t1500\t1\tGood      \t1\nF\tD\tPrimary             \t1\t2000\t1\tHigh Risk \t1\nF\tD\tPrimary             \t2\t2000\t2\tUnknown   \t2\nF\tD\tPrimary             \t1\t2500\t1\tUnknown   \t1\nF\tD\tPrimary             \t1\t4000\t1\tLow Risk  \t1\nF\tD\tPrimary             \t1\t5000\t1\tHigh Risk \t1\nF\tD\tPrimary             \t1\t6000\t1\tUnknown   \t1\nF\tD\tPrimary             \t1\t6500\t1\tHigh Risk \t1\nF\tD\tPrimary             \t1\t7500\t1\tHigh Risk \t1\nF\tD\tPrimary             \t1\t7500\t1\tUnknown   \t1\nF\tD\tPrimary             \t1\t8000\t1\tLow Risk  \t1\nF\tD\tPrimary             \t2\t9500\t2\tLow Risk  \t2\nF\tD\tSecondary           \t1\t1500\t1\tHigh Risk \t1\nF\tD\tSecondary           \t1\t2000\t1\tUnknown   \t1\nF\tD\tSecondary           \t1\t2500\t1\tHigh Risk \t1\nF\tD\tSecondary           \t2\t4000\t2\tHigh Risk \t2\nF\tD\tSecondary           \t1\t5000\t1\tLow Risk  \t1\nF\tD\tSecondary           \t1\t6000\t1\tHigh Risk \t1\nF\tD\tSecondary           \t1\t10000\t1\tLow Risk  \t1\nF\tD\tUnknown             \t1\t2000\t1\tHigh Risk \t1\nF\tD\tUnknown             
\t1\t5000\t1\tLow Risk  \t1\nF\tD\tUnknown             \t1\t6500\t1\tGood      \t1\nF\tD\tUnknown             \t1\t8000\t1\tUnknown   \t1\nF\tD\tUnknown             \t1\t9000\t1\tHigh Risk \t1\nF\tM\t2 yr Degree         \t1\t500\t1\tHigh Risk \t1\nF\tM\t2 yr Degree         \t1\t4000\t1\tUnknown   \t1\nF\tM\t2 yr Degree         \t1\t4500\t1\tLow Risk  \t1\nF\tM\t2 yr Degree         \t1\t5000\t1\tUnknown   \t1\nF\tM\t2 yr Degree         \t1\t5500\t1\tLow Risk  \t1\nF\tM\t2 yr Degree         \t1\t9000\t1\tLow Risk  \t1\nF\tM\t2 yr Degree         \t1\t10000\t1\tLow Risk  \t1\nF\tM\t4 yr Degree         \t1\t500\t1\tHigh Risk \t1\nF\tM\t4 yr Degree         \t1\t1000\t1\tGood      \t1\nF\tM\t4 yr Degree         \t1\t2500\t1\tGood      \t1\nF\tM\t4 yr Degree         \t1\t3500\t1\tLow Risk  \t1\nF\tM\t4 yr Degree         \t1\t4500\t1\tGood      \t1\nF\tM\t4 yr Degree         \t1\t7000\t1\tGood      \t1\nF\tM\t4 yr Degree         \t1\t7500\t1\tGood      \t1\nF\tM\t4 yr Degree         \t1\t7500\t1\tHigh Risk \t1\nF\tM\t4 yr Degree         \t1\t8000\t1\tUnknown   \t1\nF\tM\t4 yr Degree         \t1\t8500\t1\tHigh Risk \t1\nF\tM\t4 yr Degree         \t1\t9000\t1\tUnknown   \t1\nF\tM\t4 yr Degree         \t1\t10000\t1\tUnknown   \t1\nF\tM\tAdvanced Degree     \t1\t3000\t1\tUnknown   \t1\nF\tM\tAdvanced Degree     \t1\t4000\t1\tLow Risk  \t1\nF\tM\tAdvanced Degree     \t1\t4500\t1\tUnknown   \t1\nF\tM\tAdvanced Degree     \t1\t6000\t1\tUnknown   \t1\nF\tM\tAdvanced Degree     \t1\t6500\t1\tHigh Risk \t1\nF\tM\tAdvanced Degree     \t1\t8500\t1\tGood      \t1\nF\tM\tAdvanced Degree     \t1\t8500\t1\tHigh Risk \t1\nF\tM\tCollege             \t1\t1500\t1\tUnknown   \t1\nF\tM\tCollege             \t1\t4500\t1\tLow Risk  \t1\nF\tM\tCollege             \t1\t7000\t1\tUnknown   \t1\nF\tM\tCollege             \t1\t8500\t1\tHigh Risk \t1\nF\tM\tCollege             \t1\t8500\t1\tLow Risk  \t1\nF\tM\tCollege             \t1\t8500\t1\tUnknown   \t1\nF\tM\tCollege             \t1\t9500\t1\tGood      \t1\nF\tM\tPrimary             \t1\t500\t1\tUnknown   \t1\nF\tM\tPrimary             \t1\t1000\t1\tGood      \t1\nF\tM\tPrimary             \t1\t4500\t1\tLow Risk  \t1\nF\tM\tPrimary             \t1\t5000\t1\tHigh Risk \t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q7.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,agg1:double,agg2:decimal(11,6),agg3:decimal(11,6),agg4:decimal(11,6)>\n-- !query output\nAAAAAAAAAAACAAAA\t60.0\t52.450000\t0.000000\t13.630000\nAAAAAAAAAAAEAAAA\t96.0\t51.560000\t0.000000\t46.400000\nAAAAAAAAAABEAAAA\t8.0\t135.430000\t0.000000\t105.630000\nAAAAAAAAAACAAAAA\t43.0\t90.780000\t0.000000\t17.240000\nAAAAAAAAAACCAAAA\t31.0\t80.140000\t0.000000\t74.530000\nAAAAAAAAAADBAAAA\t79.0\t99.440000\t2573.530000\t90.490000\nAAAAAAAAAADCAAAA\t68.0\t115.045000\t0.000000\t72.910000\nAAAAAAAAAAEAAAAA\t42.0\t16.800000\t0.000000\t13.770000\nAAAAAAAAAAEDAAAA\t42.5\t52.175000\t0.000000\t21.300000\nAAAAAAAAAAFAAAAA\t32.0\t24.750000\t0.000000\t23.510000\nAAAAAAAAAAGBAAAA\t48.0\t119.350000\t0.000000\t59.670000\nAAAAAAAAAAGCAAAA\t54.0\t39.420000\t964.260000\t30.305000\nAAAAAAAAAAHBAAAA\t59.333333333333336\t86.790000\t198.126667\t47.576667\nAAAAAAAAAAHDAAAA\t82.0\t26.030000\t981.850000\t26.030000\nAAAAAAAAAAKAAAAA\t51.666666666666664\t47.246667\t0.000000\t5.333333\nAAAAAAAAAAKBAAAA\t11.0\t114.560000\t0.000000\t90.500000\nAAAAAAAAAAKDAAAA\t27.0\t23.140000\t0.000000\t20.820000\nAAAAAAAAAALAAAAA\t38.0\t83.580000\t0.000000\t51.810000\nAAAAAAAAAALCAAAA\t51.0\t26.610000\t0.000000\t19.950000\nAAAAAAAAAAMBAAAA\t79.0\t41.660000\t0.000000\t38.740000\nAAAAAAAAAAMCAAAA\t70.0\t83.210000\t0.000000\t49.090000\nAAAAAAAAAANAAAAA\t2.5\t97.780000\t0.000000\t77.765000\nAAAAAAAAAAOAAAAA\t60.0\t101.410000\t1674.240000\t43.600000\nAAAAAAAAAAOCAAAA\t7.0\t185.510000\t0.000000\t109.450000\nAAAAAAAAAAPBAAAA\t51.5\t57.890000\t839.850000\t52.465000\nAAAAAAAAABABAAAA\t20.0\t19.300000\t0.000000\t0.770000\nAAAAAAAAABADAAAA\t95.0\t100.800000\t0.000000\t35.280000\nAAAAAAAAABAEAAAA\t55.0\t2.220000\t0.000000\t2.080000\nAAAAAAAAABBAAAAA\t46.0\t61.386667\t0.000000\t27.306667\nAAAAAAAAABBDAAAA\t35.0\t166.510000\t166.080000\t158.180000\nAAAAAAAAABCBAAAA\t32.0\t33.820000\t175.495000\t13.025000\nAAAAAAAAABDDAAAA\t38.666666666666664\t48.240000\t1.143333\t27.163333\nAAAAAAAAABDEAAAA\t8.0\t135.240000\t38.095000\t18.845000\nAAAAAAAAABECAAAA\t13.0\t73.870000\t0.000000\t64.260000\nAAAAAAAAABEDAAAA\t98.0\t76.895000\t0.000000\t25.375000\nAAAAAAAAABGAAAAA\t81.0\t46.320000\t0.000000\t26.580000\nAAAAAAAAABGDAAAA\t88.0\t81.150000\t0.000000\t43.820000\nAAAAAAAAABGEAAAA\t81.0\t18.760000\t0.000000\t7.310000\nAAAAAAAAABHAAAAA\t61.5\t101.695000\t224.313333\t33.520000\nAAAAAAAAABHDAAAA\t51.0\t128.445000\t0.000000\t83.725000\nAAAAAAAAABIBAAAA\t51.0\t20.940000\t0.000000\t6.490000\nAAAAAAAAABICAAAA\t82.5\t62.635000\t0.860000\t27.015000\nAAAAAAAAABJAAAAA\t18.333333333333332\t76.346667\t167.953333\t44.930000\nAAAAAAAAABKAAAAA\t77.0\t122.740000\t0.000000\t98.190000\nAAAAAAAAABKDAAAA\t42.0\t125.690000\t1504.230000\t37.700000\nAAAAAAAAABLBAAAA\t59.0\t118.240000\t72.480000\t9.450000\nAAAAAAAAABMBAAAA\t97.5\t91.465000\t0.000000\t60.660000\nAAAAAAAAABMDAAAA\t46.0\t171.290000\t0.000000\t92.490000\nAAAAAAAAABNCAAAA\t20.0\t109.530000\t0.000000\t97.480000\nAAAAAAAAABPAAAAA\t59.25\t47.130000\t62.827500\t29.995000\nAAAAAAAAABPBAAAA\t66.0\t53.950000\t30.970000\t46.930000\nAAAAAAAAABPDAAAA\t66.5\t52.420000\t0.000000\t30.925000\nAAAAAAAAACAAAAAA\t45.0\t75.860000\t0.000000\t22.750000\nAAAAAAAAACBBAAAA\t64.5\t27.585000\t0.000000\t16.820000\nAAAAAAAAACBCAAAA\t71.0\t47.330000\t0.000000\t18.450000\nAAAAAAAAACCDAAAA\t68.0\t92.350000\t0.000000\t41.550000\nAAAAAAAAACDAAAAA\t21.0\t64.390000\t59.620000\t13.520000\nAAAAAAAAACDDAAAA\t65.66666666666667\t34.473333\t23.316667\t14.91666
7\nAAAAAAAAACEBAAAA\t48.0\t111.330000\t0.000000\t12.240000\nAAAAAAAAACECAAAA\t67.5\t36.365000\t373.635000\t26.500000\nAAAAAAAAACFBAAAA\t9.0\t19.600000\t0.000000\t1.370000\nAAAAAAAAACFDAAAA\t26.0\t18.350000\t0.000000\t13.390000\nAAAAAAAAACGCAAAA\t60.5\t100.270000\t0.000000\t45.862500\nAAAAAAAAACGDAAAA\t58.5\t32.805000\t396.170000\t12.550000\nAAAAAAAAACHBAAAA\t97.0\t5.380000\t27.470000\t0.480000\nAAAAAAAAACIAAAAA\t57.0\t56.880000\t0.000000\t52.750000\nAAAAAAAAACIBAAAA\t58.0\t26.060000\t152.250000\t12.500000\nAAAAAAAAACIDAAAA\t71.0\t4.020000\t0.000000\t2.210000\nAAAAAAAAACJAAAAA\t30.0\t46.910000\t0.000000\t8.910000\nAAAAAAAAACJDAAAA\t63.5\t112.305000\t0.000000\t30.610000\nAAAAAAAAACKCAAAA\t50.666666666666664\t63.280000\t0.000000\t26.126667\nAAAAAAAAACLAAAAA\t95.0\t22.310000\t0.000000\t0.000000\nAAAAAAAAACLBAAAA\t88.0\t27.450000\t0.000000\t23.880000\nAAAAAAAAACMAAAAA\t86.0\t90.600000\t1511.530000\t87.880000\nAAAAAAAAACNBAAAA\t44.666666666666664\t73.076667\t52.400000\t54.546667\nAAAAAAAAACNCAAAA\t62.0\t105.380000\t0.000000\t70.600000\nAAAAAAAAACOAAAAA\t41.0\t42.810000\t412.370000\t10.700000\nAAAAAAAAACOBAAAA\t36.5\t66.735000\t14.770000\t33.725000\nAAAAAAAAACODAAAA\t25.0\t65.280000\t238.920000\t39.820000\nAAAAAAAAACPAAAAA\t88.0\t164.900000\t104.470000\t14.840000\nAAAAAAAAACPDAAAA\t10.0\t78.550000\t0.000000\t10.990000\nAAAAAAAAADABAAAA\t81.0\t60.225000\t0.000000\t43.775000\nAAAAAAAAADBDAAAA\t2.0\t94.600000\t0.000000\t59.590000\nAAAAAAAAADCAAAAA\t52.0\t72.480000\t98.720000\t15.350000\nAAAAAAAAADCCAAAA\t89.0\t61.690000\t0.000000\t59.220000\nAAAAAAAAADCDAAAA\t27.5\t126.585000\t0.000000\t61.340000\nAAAAAAAAADDBAAAA\t64.0\t19.570000\t934.160000\t16.040000\nAAAAAAAAADDEAAAA\t65.0\t78.255000\t348.165000\t49.595000\nAAAAAAAAADEAAAAA\t65.0\t108.640000\t0.000000\t91.250000\nAAAAAAAAADEBAAAA\t70.0\t104.400000\t1189.580000\t45.930000\nAAAAAAAAADEDAAAA\t31.0\t130.200000\t0.000000\t76.810000\nAAAAAAAAADFAAAAA\t80.0\t5.240000\t0.000000\t1.310000\nAAAAAAAAADFCAAAA\t14.0\t9.110000\t0.000000\t6.650000\nAAAAAAAAADGCAAAA\t10.0\t84.260000\t0.000000\t9.260000\nAAAAAAAAADGEAAAA\t37.333333333333336\t34.403333\t101.936667\t17.150000\nAAAAAAAAADHBAAAA\t32.333333333333336\t65.606667\t751.803333\t54.530000\nAAAAAAAAADJBAAAA\t57.0\t63.110000\t0.000000\t47.960000\nAAAAAAAAADKBAAAA\t44.0\t44.980000\t0.000000\t3.590000\nAAAAAAAAADLAAAAA\t21.0\t17.740000\t0.000000\t9.930000\nAAAAAAAAADMBAAAA\t29.0\t102.770000\t2052.360000\t72.960000\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q70.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<total_sum:decimal(17,2),s_state:string,s_county:string,lochierarchy:tinyint,rank_within_parent:int>\n-- !query output\n-439591881.24\tNULL\tNULL\t2\t1\n-439591881.24\tTN\tNULL\t1\t1\n-439591881.24\tTN\tWilliamson County\t0\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q71.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<brand_id:int,brand:string,t_hour:int,t_minute:int,ext_price:decimal(17,2)>\n-- !query output\n10016002\tcorpamalgamalg #x                                 \t19\t7\t23183.08\n10005002\tscholarunivamalg #x                               \t8\t8\t17109.63\n8014006\tedu packmaxi #x                                   \t9\t53\t16420.75\n3003002\texportiexporti #x                                 \t17\t44\t16337.23\n3001001\tamalgexporti #x                                   \t17\t46\t16088.53\n4001001\tamalgedu pack #x                                  \t9\t28\t15557.83\n6015001\tscholarbrand #x                                   \t17\t50\t15411.90\n6007007\tbrandcorp #x                                      \t19\t25\t15067.50\n6007007\tbrandcorp #x                                      \t18\t59\t14974.32\n8014006\tedu packmaxi #x                                   \t19\t40\t14889.35\n4004002\tedu packedu pack #x                               \t18\t4\t14418.00\n3002002\timportoexporti #x                                 \t19\t40\t14063.50\n3003001\texportiexporti #x                                 \t19\t58\t13900.80\n7014010\tedu packnameless #x                              \t9\t52\t13795.60\n2004002\tedu packimporto #x                                \t19\t17\t13663.76\n9013003\texportiunivamalg #x                               \t19\t24\t13511.26\n5001002\tamalgscholar #x                                   \t9\t41\t13318.05\n7014003\tedu packnameless #x                               \t19\t52\t12754.50\n5002002\timportoscholar #x                                 \t7\t16\t12584.65\n10005002\tscholarunivamalg #x                               \t17\t19\t12325.56\n4003001\texportiedu pack #x                                \t17\t0\t12313.71\n1004001\tedu packamalg #x                                  \t9\t6\t12311.28\n4001001\tamalgedu pack #x                                  \t18\t21\t12299.94\n8004002\tedu packnameless #x                               \t9\t3\t12191.36\n10008006\tnamelessunivamalg #x                              \t19\t47\t11971.67\n8001007\tamalgnameless #x                                  \t19\t36\t11926.98\n9003003\texportimaxi #x                                    \t17\t10\t11905.95\n1003001\texportiamalg #x                                   \t17\t48\t11845.76\n6009008\tmaxicorp #x                                       \t17\t1\t11772.54\n6015003\tscholarbrand #x                                   \t9\t40\t11755.64\n9003003\texportimaxi #x                                    \t19\t3\t11623.24\n9014011\tedu packunivamalg #x                             \t19\t12\t11407.38\n1001001\tamalgamalg #x                                     \t17\t38\t11250.06\n7013007\texportinameless #x                                \t7\t27\t11169.57\n5001002\tamalgscholar #x                                   \t18\t35\t11152.35\n9016003\tcorpunivamalg #x                                  \t17\t11\t10903.08\n2002002\timportoimporto #x                                 \t8\t38\t10861.98\n7009003\tmaxibrand #x                                      \t9\t44\t10687.08\n3002001\timportoexporti #x                                 \t17\t37\t10580.80\n9008003\tnamelessmaxi #x                                   \t9\t49\t10517.90\n8001007\tamalgnameless #x                                  \t9\t22\t10416.24\n7008005\tnamelessbrand #x                                  \t18\t31\t10196.64\n4004002\tedu packedu pack #x                               
\t18\t29\t10166.76\n4003001\texportiedu pack #x                                \t8\t41\t10118.45\n8012007\timportomaxi #x                                    \t17\t47\t10079.52\n5001002\tamalgscholar #x                                   \t9\t11\t9922.06\n4003002\texportiedu pack #x                                \t9\t8\t9888.56\n7004005\tedu packbrand #x                                  \t17\t8\t9600.00\n9008003\tnamelessmaxi #x                                   \t18\t46\t9599.75\n9007003\tbrandmaxi #x                                      \t18\t23\t9492.33\n7014003\tedu packnameless #x                               \t19\t39\t9431.61\n3004001\tedu packexporti #x                                \t18\t53\t9398.02\n3002001\timportoexporti #x                                 \t17\t51\t9370.58\n3002001\timportoexporti #x                                 \t9\t58\t9365.40\n1002001\timportoamalg #x                                   \t8\t59\t9292.90\n7013002\texportinameless #x                                \t6\t23\t9269.32\n7002009\timportobrand #x                                   \t17\t31\t9114.30\n5003001\texportischolar #x                                 \t19\t22\t9074.52\n10007016\tbrandunivamalg #x                                \t18\t38\t9003.20\n10005012\tscholarunivamalg #x                              \t18\t6\t8947.68\n9003003\texportimaxi #x                                    \t9\t0\t8849.28\n8002008\timportonameless #x                                \t18\t59\t8721.28\n8002007\timportonameless #x                                \t8\t51\t8694.00\n3004002\tedu packexporti #x                                \t8\t18\t8480.98\n5002002\timportoscholar #x                                 \t9\t33\t8431.24\n3004001\tedu packexporti #x                                \t19\t5\t8427.36\n9003003\texportimaxi #x                                    \t8\t35\t8400.48\n4001001\tamalgedu pack #x                                  \t17\t37\t8391.68\n9014011\tedu packunivamalg #x                             \t7\t33\t8367.30\n8002007\timportonameless #x                                \t19\t52\t8361.36\n10014005\tedu packamalgamalg #x                             \t9\t4\t8332.00\n10001003\tamalgunivamalg #x                                 \t17\t56\t8299.50\n10011008\tamalgamalgamalg #x                                \t18\t25\t8223.96\n9016005\tcorpunivamalg #x                                  \t17\t2\t8104.92\n8001007\tamalgnameless #x                                  \t19\t42\t8077.98\n10014001\tedu packamalgamalg #x                             \t19\t52\t8077.98\n2002002\timportoimporto #x                                 \t17\t52\t8045.25\n8005009\tscholarnameless #x                                \t8\t17\t8033.28\n9013003\texportiunivamalg #x                               \t19\t38\t7994.24\n5001002\tamalgscholar #x                                   \t19\t18\t7980.48\n4003001\texportiedu pack #x                                \t8\t57\t7973.82\n8006010\tcorpnameless #x                                  \t19\t41\t7864.08\n10005002\tscholarunivamalg #x                               \t6\t44\t7846.24\n4003001\texportiedu pack #x                                \t19\t19\t7784.72\n9014011\tedu packunivamalg #x                             \t17\t48\t7677.20\n7014010\tedu packnameless #x                              \t9\t29\t7671.96\n4003001\texportiedu pack #x                                \t19\t29\t7630.71\n4001001\tamalgedu pack #x                                  \t9\t33\t7626.44\n2001001\tamalgimporto #x                       
            \t17\t46\t7444.64\n9004008\tedu packmaxi #x                                   \t19\t53\t7429.73\n9010002\tunivunivamalg #x                                  \t17\t25\t7428.08\n2001001\tamalgimporto #x                                   \t19\t10\t7387.20\n8010001\tunivmaxi #x                                       \t17\t38\t7327.00\n9003003\texportimaxi #x                                    \t17\t49\t7325.40\n4002001\timportoedu pack #x                                \t19\t29\t7273.80\n5001002\tamalgscholar #x                                   \t19\t31\t7253.56\n10001007\tamalgunivamalg #x                                 \t17\t35\t7188.16\n3002001\timportoexporti #x                                 \t19\t36\t7184.41\n1004001\tedu packamalg #x                                  \t9\t39\t7168.20\n8006010\tcorpnameless #x                                  \t6\t45\t7122.15\n9014008\tedu packunivamalg #x                              \t19\t35\t7048.58\n6015001\tscholarbrand #x                                   \t19\t45\t7043.40\n7014001\tedu packnameless #x                               \t9\t9\t6921.20\n3001001\tamalgexporti #x                                   \t17\t58\t6919.84\n7016008\tcorpnameless #x                                   \t18\t20\t6866.37\n4004002\tedu packedu pack #x                               \t9\t23\t6819.60\n7013002\texportinameless #x                                \t8\t4\t6757.74\n4001001\tamalgedu pack #x                                  \t18\t20\t6744.87\n7014010\tedu packnameless #x                              \t17\t23\t6714.90\n2002002\timportoimporto #x                                 \t9\t49\t6696.46\n3003002\texportiexporti #x                                 \t17\t19\t6690.42\n4004002\tedu packedu pack #x                               \t18\t45\t6669.03\n8003006\texportinameless #x                                \t9\t46\t6630.78\n9003003\texportimaxi #x                                    \t9\t48\t6603.20\n9003003\texportimaxi #x                                    \t8\t28\t6580.50\n7012007\timportonameless #x                                \t18\t6\t6548.40\n7015007\tscholarnameless #x                                \t17\t32\t6544.80\n3004001\tedu packexporti #x                                \t19\t27\t6510.42\n9001003\tamalgmaxi #x                                      \t6\t57\t6506.50\n2001001\tamalgimporto #x                                   \t19\t8\t6500.76\n9003003\texportimaxi #x                                    \t18\t46\t6443.99\n9016003\tcorpunivamalg #x                                  \t8\t47\t6398.37\n2004002\tedu packimporto #x                                \t9\t25\t6339.69\n9010002\tunivunivamalg #x                                  \t8\t21\t6302.19\n2004002\tedu packimporto #x                                \t19\t48\t6257.70\n5003001\texportischolar #x                                 \t17\t44\t6225.62\n10016002\tcorpamalgamalg #x                                 \t17\t45\t6222.80\n9004008\tedu packmaxi #x                                   \t8\t46\t6201.78\n7015007\tscholarnameless #x                                \t8\t18\t6162.84\n4002001\timportoedu pack #x                                \t18\t3\t6101.83\n1002001\timportoamalg #x                                   \t9\t1\t6058.89\n6008001\tnamelesscorp #x                                   \t17\t19\t6056.16\n3003002\texportiexporti #x                                 \t18\t48\t6054.30\n7010009\tunivnameless #x                                   \t9\t8\t6013.24\n9016005\tcorpunivamalg #x             
                     \t19\t14\t5985.00\n7015004\tscholarnameless #x                                \t19\t49\t5981.96\n1004001\tedu packamalg #x                                  \t19\t55\t5921.48\n7013007\texportinameless #x                                \t7\t0\t5905.55\n1003001\texportiamalg #x                                   \t8\t27\t5891.34\n5004001\tedu packscholar #x                                \t18\t15\t5875.10\n8010002\tunivmaxi #x                                       \t17\t24\t5865.60\n8001007\tamalgnameless #x                                  \t9\t35\t5839.47\n3004001\tedu packexporti #x                                \t8\t45\t5819.52\n2001001\tamalgimporto #x                                   \t19\t27\t5797.79\n8002007\timportonameless #x                                \t19\t30\t5762.88\n9014008\tedu packunivamalg #x                              \t17\t53\t5754.24\n2003001\texportiimporto #x                                 \t8\t49\t5747.50\n7012007\timportonameless #x                                \t17\t5\t5701.60\n3002001\timportoexporti #x                                 \t19\t17\t5699.90\n3003001\texportiexporti #x                                 \t8\t39\t5681.76\n7011010\tamalgnameless #x                                 \t17\t56\t5639.04\n6008004\tnamelesscorp #x                                   \t8\t42\t5623.09\n1003001\texportiamalg #x                                   \t18\t12\t5598.45\n8001009\tamalgnameless #x                                  \t18\t33\t5576.00\n7009003\tmaxibrand #x                                      \t17\t30\t5541.85\n10013006\texportiamalgamalg #x                              \t9\t48\t5522.14\n10014005\tedu packamalgamalg #x                             \t7\t45\t5490.66\n7013002\texportinameless #x                                \t19\t13\t5482.24\n9003003\texportimaxi #x                                    \t19\t15\t5480.13\n1004001\tedu packamalg #x                                  \t18\t0\t5476.76\n3004001\tedu packexporti #x                                \t8\t31\t5466.72\n4002001\timportoedu pack #x                                \t18\t42\t5460.21\n7010005\tunivnameless #x                                   \t18\t48\t5435.52\n8008006\tnamelessnameless #x                               \t8\t40\t5425.28\n1004001\tedu packamalg #x                                  \t19\t56\t5418.16\n1003002\texportiamalg #x                                   \t18\t52\t5397.60\n4004002\tedu packedu pack #x                               \t17\t21\t5377.26\n2002002\timportoimporto #x                                 \t8\t18\t5372.40\n1004001\tedu packamalg #x                                  \t17\t16\t5341.47\n10015013\tscholaramalgamalg #x                             \t6\t21\t5340.06\n2001001\tamalgimporto #x                                   \t18\t33\t5337.60\n6002008\timportocorp #x                                    \t19\t36\t5328.99\n3004002\tedu packexporti #x                                \t18\t33\t5301.04\n9003003\texportimaxi #x                                    \t9\t43\t5298.43\n7009003\tmaxibrand #x                                      \t9\t26\t5193.32\n8010001\tunivmaxi #x                                       \t17\t48\t5172.16\n8001009\tamalgnameless #x                                  \t17\t1\t5166.56\n7015007\tscholarnameless #x                                \t17\t22\t5156.10\n10005012\tscholarunivamalg #x                              \t9\t16\t5114.80\n3003001\texportiexporti #x                                 \t18\t5\t5092.80\n3003001\texportiexporti 
#x                                 \t17\t41\t5084.64\n9007003\tbrandmaxi #x                                      \t17\t23\t5045.62\n3002001\timportoexporti #x                                 \t7\t13\t5010.32\n3002001\timportoexporti #x                                 \t17\t1\t4992.44\n3004001\tedu packexporti #x                                \t17\t11\t4980.35\n4004002\tedu packedu pack #x                               \t8\t43\t4979.04\n10001007\tamalgunivamalg #x                                 \t19\t43\t4955.52\n6007007\tbrandcorp #x                                      \t19\t20\t4942.00\n9014008\tedu packunivamalg #x                              \t8\t24\t4912.02\n5001001\tamalgscholar #x                                   \t18\t54\t4879.18\n2001001\tamalgimporto #x                                   \t18\t49\t4878.45\n7014010\tedu packnameless #x                              \t18\t34\t4873.60\n8001007\tamalgnameless #x                                  \t19\t41\t4830.00\n3002001\timportoexporti #x                                 \t9\t44\t4827.15\n4004001\tedu packedu pack #x                               \t7\t3\t4804.34\n4001001\tamalgedu pack #x                                  \t19\t28\t4773.53\n4004002\tedu packedu pack #x                               \t17\t26\t4734.60\n6015003\tscholarbrand #x                                   \t18\t34\t4665.65\n2004001\tedu packimporto #x                                \t17\t12\t4651.86\n5003001\texportischolar #x                                 \t18\t4\t4631.00\nNULL\tbrandmaxi #x                                     \t17\t37\t4616.78\n4001001\tamalgedu pack #x                                  \t7\t32\t4597.00\n4001001\tamalgedu pack #x                                  \t17\t13\t4588.56\n6012005\timportobrand #x                                   \t18\t50\t4575.60\n3002002\timportoexporti #x                                 \t19\t57\t4570.02\n3002001\timportoexporti #x                                 \t19\t39\t4528.04\n9016005\tcorpunivamalg #x                                  \t17\t16\t4526.48\n10008016\tnamelessunivamalg #x                             \t8\t28\t4506.81\n2003002\texportiimporto #x                                 \t17\t42\t4484.11\n7015007\tscholarnameless #x                                \t18\t19\t4469.40\n3003002\texportiexporti #x                                 \t6\t39\t4465.80\n10005012\tscholarunivamalg #x                              \t18\t59\t4437.00\n1004001\tedu packamalg #x                                  \t19\t23\t4432.40\n2003001\texportiimporto #x                                 \t17\t19\t4421.20\n3004002\tedu packexporti #x                                \t18\t46\t4403.07\n10011008\tamalgamalgamalg #x                                \t7\t42\t4402.30\n4003001\texportiedu pack #x                                \t8\t13\t4392.08\n8005009\tscholarnameless #x                                \t18\t0\t4379.70\n4003001\texportiedu pack #x                                \t19\t50\t4349.40\n7002009\timportobrand #x                                   \t17\t33\t4341.12\n7006007\tcorpbrand #x                                      \t19\t42\t4310.33\n1003001\texportiamalg #x                                   \t17\t21\t4301.91\n10008007\tnamelessunivamalg #x                              \t17\t34\t4292.47\n8015004\tscholarmaxi #x                                    \t18\t58\t4180.80\n2002001\timportoimporto #x                                 \t18\t40\t4176.56\n2002002\timportoimporto #x                                 
\t9\t54\t4173.54\n3001001\tamalgexporti #x                                   \t9\t4\t4165.80\n2004001\tedu packimporto #x                                \t17\t43\t4164.04\n3002001\timportoexporti #x                                 \t17\t30\t4151.64\n10011008\tamalgamalgamalg #x                                \t18\t39\t4149.63\n4004002\tedu packedu pack #x                               \t17\t38\t4133.66\n8001009\tamalgnameless #x                                  \t6\t43\t4122.69\n3002001\timportoexporti #x                                 \t19\t22\t4115.04\n3002001\timportoexporti #x                                 \t19\t18\t4097.52\n4004002\tedu packedu pack #x                               \t19\t45\t4076.16\n1003001\texportiamalg #x                                   \t8\t31\t4075.59\n7015004\tscholarnameless #x                                \t17\t5\t4068.78\n6009008\tmaxicorp #x                                       \t17\t18\t4024.77\n7012007\timportonameless #x                                \t17\t51\t4005.99\n7009003\tmaxibrand #x                                      \t18\t39\t4005.48\n9013005\texportiunivamalg #x                               \t8\t42\t3946.02\n1004001\tedu packamalg #x                                  \t9\t54\t3944.02\n1002002\timportoamalg #x                                   \t19\t45\t3925.44\n6005004\tscholarcorp #x                                    \t9\t37\t3921.38\n8002007\timportonameless #x                                \t18\t45\t3842.84\n1002001\timportoamalg #x                                   \t9\t30\t3842.56\n2002002\timportoimporto #x                                 \t8\t41\t3837.18\n8013007\texportimaxi #x                                    \t17\t41\t3815.46\n10005012\tscholarunivamalg #x                              \t6\t34\t3806.53\n6005003\tscholarcorp #x                                    \t19\t28\t3796.65\n4004002\tedu packedu pack #x                               \t18\t28\t3780.70\n4004001\tedu packedu pack #x                               \t19\t7\t3777.74\n3003001\texportiexporti #x                                 \t9\t41\t3754.92\n5001002\tamalgscholar #x                                   \t8\t15\t3754.40\n1003001\texportiamalg #x                                   \t17\t53\t3748.70\n6007007\tbrandcorp #x                                      \t18\t15\t3697.20\n7015004\tscholarnameless #x                                \t7\t52\t3666.06\n9014008\tedu packunivamalg #x                              \t18\t52\t3645.18\n4003001\texportiedu pack #x                                \t18\t8\t3619.35\n3001001\tamalgexporti #x                                   \t19\t54\t3606.90\n6009008\tmaxicorp #x                                       \t18\t19\t3556.14\n5001002\tamalgscholar #x                                   \t9\t4\t3522.24\n2004001\tedu packimporto #x                                \t17\t55\t3502.84\n6011001\tamalgbrand #x                                     \t8\t10\t3499.45\n3003001\texportiexporti #x                                 \t19\t55\t3482.00\n2003001\texportiimporto #x                                 \t17\t53\t3472.00\n2001001\tamalgimporto #x                                   \t19\t53\t3459.52\n2001001\tamalgimporto #x                                   \t9\t49\t3457.48\n4001001\tamalgedu pack #x                                  \t17\t39\t3433.56\n1004001\tedu packamalg #x                                  \t19\t50\t3426.37\n6002008\timportocorp #x                                    \t18\t6\t3412.64\n7014010\tedu packnameless #x                   
           \t9\t18\t3412.08\n7012008\timportonameless #x                                \t18\t4\t3410.74\n3002001\timportoexporti #x                                 \t17\t55\t3405.60\n5002001\timportoscholar #x                                 \t19\t17\t3400.38\n2001001\tamalgimporto #x                                   \t17\t57\t3397.19\n10014001\tedu packamalgamalg #x                             \t19\t31\t3392.50\n7006007\tcorpbrand #x                                      \t17\t10\t3376.96\n10001003\tamalgunivamalg #x                                 \t19\t41\t3352.25\n5001002\tamalgscholar #x                                   \t19\t44\t3340.22\n8009005\tmaxinameless #x                                   \t9\t24\t3338.52\n4002002\timportoedu pack #x                                \t19\t17\t3331.64\n6011003\tamalgbrand #x                                     \t9\t50\t3319.68\n4003002\texportiedu pack #x                                \t19\t38\t3314.22\n1002001\timportoamalg #x                                   \t19\t46\t3308.76\nNULL\tbrandmaxi #x                                     \t17\t53\t3303.10\n3004001\tedu packexporti #x                                \t19\t53\t3300.96\n4001001\tamalgedu pack #x                                  \t17\t45\t3295.24\n5002001\timportoscholar #x                                 \t8\t11\t3271.52\n9012009\timportounivamalg #x                               \t17\t7\t3268.00\n8010001\tunivmaxi #x                                       \t17\t31\t3267.20\n8010001\tunivmaxi #x                                       \t17\t39\t3263.40\n9013003\texportiunivamalg #x                               \t17\t52\t3260.32\n10011008\tamalgamalgamalg #x                                \t17\t41\t3255.54\n6003004\texporticorp #x                                    \t19\t25\t3249.90\n7013002\texportinameless #x                                \t19\t39\t3219.00\n4003001\texportiedu pack #x                                \t7\t6\t3211.20\n4001001\tamalgedu pack #x                                  \t17\t49\t3193.61\n9009009\tmaximaxi #x                                       \t18\t41\t3176.58\n4002001\timportoedu pack #x                                \t19\t11\t3175.90\n2001001\tamalgimporto #x                                   \t17\t47\t3157.44\n9014011\tedu packunivamalg #x                             \t8\t58\t3137.94\n3002001\timportoexporti #x                                 \t19\t8\t3127.98\n7002009\timportobrand #x                                   \t17\t51\t3104.96\n10001003\tamalgunivamalg #x                                 \t17\t0\t3099.87\n6007007\tbrandcorp #x                                      \t17\t2\t3096.72\n7014010\tedu packnameless #x                              \t19\t17\t3081.42\n10005012\tscholarunivamalg #x                              \t8\t39\t3081.11\n5001002\tamalgscholar #x                                   \t9\t43\t3079.14\n4002001\timportoedu pack #x                                \t6\t35\t3071.00\n4001001\tamalgedu pack #x                                  \t18\t56\t3070.71\n7008005\tnamelessbrand #x                                  \t8\t5\t3070.54\n4003001\texportiedu pack #x                                \t17\t14\t3060.92\n4004002\tedu packedu pack #x                               \t18\t30\t3026.87\n6015003\tscholarbrand #x                                   \t19\t16\t3021.75\n1002001\timportoamalg #x                                   \t18\t23\t3018.06\n1003001\texportiamalg #x                                   \t17\t12\t3016.20\n8008006\tnamelessnameless #x      
                         \t9\t19\t3006.85\n8005004\tscholarnameless #x                                \t8\t7\t3004.80\n9008003\tnamelessmaxi #x                                   \t8\t13\t3004.80\n2004002\tedu packimporto #x                                \t17\t25\t2991.85\n8005009\tscholarnameless #x                                \t6\t44\t2991.08\n7016008\tcorpnameless #x                                   \t8\t20\t2987.85\n1002001\timportoamalg #x                                   \t18\t38\t2983.83\n9016011\tcorpunivamalg #x                                 \t7\t43\t2923.80\n9016011\tcorpunivamalg #x                                 \t17\t56\t2910.60\n3003002\texportiexporti #x                                 \t17\t37\t2908.44\n1004001\tedu packamalg #x                                  \t9\t40\t2907.30\n7011010\tamalgnameless #x                                 \t9\t39\t2899.84\n8012007\timportomaxi #x                                    \t8\t37\t2858.82\n4002001\timportoedu pack #x                                \t18\t7\t2846.40\n7010005\tunivnameless #x                                   \t19\t20\t2838.58\n7002002\timportobrand #x                                   \t9\t28\t2829.06\n6011001\tamalgbrand #x                                     \t19\t23\t2824.58\n10005012\tscholarunivamalg #x                              \t18\t51\t2800.24\n3001001\tamalgexporti #x                                   \t18\t58\t2767.91\n9003003\texportimaxi #x                                    \t8\t46\t2762.53\n4003001\texportiedu pack #x                                \t19\t55\t2748.48\n7014001\tedu packnameless #x                               \t17\t15\t2742.96\n5002001\timportoscholar #x                                 \t19\t14\t2740.79\n7006005\tcorpbrand #x                                      \t17\t5\t2734.45\n5001002\tamalgscholar #x                                   \t19\t24\t2708.16\n7015007\tscholarnameless #x                                \t18\t6\t2707.96\n3004001\tedu packexporti #x                                \t19\t0\t2705.12\n4001001\tamalgedu pack #x                                  \t18\t45\t2693.82\n3003001\texportiexporti #x                                 \t19\t15\t2676.38\n7015007\tscholarnameless #x                                \t17\t51\t2675.55\n8010001\tunivmaxi #x                                       \t18\t15\t2671.00\n3004001\tedu packexporti #x                                \t18\t37\t2668.55\n9015011\tscholarunivamalg #x                              \t9\t27\t2666.22\n8002008\timportonameless #x                                \t18\t53\t2666.00\n6009008\tmaxicorp #x                                       \t19\t14\t2663.54\n3002001\timportoexporti #x                                 \t19\t47\t2647.33\n5002001\timportoscholar #x                                 \t19\t37\t2639.14\n3003001\texportiexporti #x                                 \t9\t1\t2634.17\n3004001\tedu packexporti #x                                \t18\t51\t2628.00\n3002001\timportoexporti #x                                 \t19\t30\t2612.00\n2001001\tamalgimporto #x                                   \t18\t53\t2610.58\n1003001\texportiamalg #x                                   \t17\t44\t2592.59\n3002001\timportoexporti #x                                 \t9\t47\t2580.48\n10009010\tmaxiunivamalg #x                                 \t18\t53\t2579.82\n6008001\tnamelesscorp #x                                   \t17\t29\t2579.57\n8012007\timportomaxi #x                                    \t17\t52\t2569.00\n2002002\timportoimporto 
#x                                 \t18\t55\t2555.57\n8013007\texportimaxi #x                                    \t17\t23\t2545.20\n7006001\tcorpbrand #x                                      \t17\t36\t2531.43\n10014001\tedu packamalgamalg #x                             \t9\t26\t2527.24\n1003001\texportiamalg #x                                   \t17\t2\t2517.48\n9008003\tnamelessmaxi #x                                   \t8\t35\t2485.56\n8008006\tnamelessnameless #x                               \t19\t13\t2475.20\n10003008\texportiunivamalg #x                               \t8\t6\t2453.95\n1001001\tamalgamalg #x                                     \t17\t26\t2448.16\n7010009\tunivnameless #x                                   \t18\t53\t2437.00\n9010002\tunivunivamalg #x                                  \t19\t27\t2435.80\n4001001\tamalgedu pack #x                                  \t18\t38\t2431.24\n4004001\tedu packedu pack #x                               \t17\t33\t2426.48\n6012005\timportobrand #x                                   \t17\t41\t2420.71\n8013007\texportimaxi #x                                    \t19\t6\t2420.00\n9016005\tcorpunivamalg #x                                  \t18\t54\t2415.70\n2003001\texportiimporto #x                                 \t6\t25\t2413.20\n2002002\timportoimporto #x                                 \t18\t6\t2390.73\n9016008\tcorpunivamalg #x                                  \t17\t9\t2390.70\n10015013\tscholaramalgamalg #x                             \t8\t28\t2368.85\n4001001\tamalgedu pack #x                                  \t17\t20\t2367.30\n8008006\tnamelessnameless #x                               \t8\t20\t2363.24\n9012009\timportounivamalg #x                               \t8\t3\t2348.10\n1002002\timportoamalg #x                                   \t9\t44\t2336.62\n6007007\tbrandcorp #x                                      \t9\t23\t2334.24\n1003001\texportiamalg #x                                   \t9\t54\t2325.44\n3004002\tedu packexporti #x                                \t9\t16\t2313.30\n4004001\tedu packedu pack #x                               \t19\t17\t2307.75\n3004001\tedu packexporti #x                                \t8\t15\t2299.77\n7004005\tedu packbrand #x                                  \t9\t34\t2294.12\n4003002\texportiedu pack #x                                \t18\t55\t2280.73\n6005004\tscholarcorp #x                                    \t9\t40\t2279.25\n8015004\tscholarmaxi #x                                    \t9\t47\t2276.40\n4004002\tedu packedu pack #x                               \t19\t3\t2271.28\n9015011\tscholarunivamalg #x                              \t17\t54\t2269.76\n1002001\timportoamalg #x                                   \t19\t39\t2258.08\n6002008\timportocorp #x                                    \t17\t53\t2247.20\n9013005\texportiunivamalg #x                               \t7\t55\t2234.00\n1003002\texportiamalg #x                                   \t6\t59\t2231.46\n10016008\tcorpamalgamalg #x                                 \t17\t17\t2230.20\n4002001\timportoedu pack #x                                \t19\t24\t2220.00\n4004002\tedu packedu pack #x                               \t19\t5\t2215.50\n9004008\tedu packmaxi #x                                   \t17\t15\t2206.40\n4004002\tedu packedu pack #x                               \t17\t52\t2204.16\n2002002\timportoimporto #x                                 \t17\t22\t2199.69\n2003002\texportiimporto #x                                 \t17\t14\t2197.84\n8004002\tedu 
packnameless #x                               \t18\t33\t2188.04\n3003001\texportiexporti #x                                 \t17\t58\t2187.64\n6005003\tscholarcorp #x                                    \t17\t15\t2180.85\n8001009\tamalgnameless #x                                  \t18\t41\t2178.21\n2002001\timportoimporto #x                                 \t17\t34\t2177.28\n6011003\tamalgbrand #x                                     \t18\t48\t2176.72\n6008004\tnamelesscorp #x                                   \t17\t17\t2162.08\n5004002\tedu packscholar #x                                \t9\t20\t2158.40\n6005003\tscholarcorp #x                                    \t18\t7\t2152.57\n8010001\tunivmaxi #x                                       \t18\t21\t2147.42\n9016003\tcorpunivamalg #x                                  \t9\t4\t2146.05\n9012009\timportounivamalg #x                               \t9\t6\t2138.08\n1003002\texportiamalg #x                                   \t9\t22\t2133.04\n4004002\tedu packedu pack #x                               \t18\t15\t2126.52\n8006010\tcorpnameless #x                                  \t18\t41\t2126.29\n7013007\texportinameless #x                                \t9\t51\t2114.10\n10009016\tmaxiunivamalg #x                                 \t9\t14\t2113.20\n8004002\tedu packnameless #x                               \t8\t32\t2111.54\n3003002\texportiexporti #x                                 \t17\t45\t2106.00\n9008003\tnamelessmaxi #x                                   \t17\t55\t2100.24\n3004001\tedu packexporti #x                                \t8\t7\t2099.20\n9016011\tcorpunivamalg #x                                 \t19\t3\t2094.48\n9007003\tbrandmaxi #x                                      \t17\t38\t2081.01\n7013007\texportinameless #x                                \t8\t47\t2079.36\n2004001\tedu packimporto #x                                \t8\t45\t2069.88\n2002002\timportoimporto #x                                 \t8\t39\t2067.52\n7012007\timportonameless #x                                \t7\t30\t2056.54\n9001003\tamalgmaxi #x                                      \t19\t35\t2046.96\n4002001\timportoedu pack #x                                \t7\t52\t2027.30\n9013003\texportiunivamalg #x                               \t17\t7\t2015.02\n4003002\texportiedu pack #x                                \t9\t26\t2007.44\n10009016\tmaxiunivamalg #x                                 \t9\t57\t1997.28\n10008006\tnamelessunivamalg #x                              \t18\t6\t1989.49\n5002001\timportoscholar #x                                 \t19\t39\t1969.54\n2004001\tedu packimporto #x                                \t17\t47\t1968.80\n9003003\texportimaxi #x                                    \t9\t22\t1964.48\n10009016\tmaxiunivamalg #x                                 \t9\t47\t1942.08\n2003001\texportiimporto #x                                 \t9\t48\t1919.01\n3003001\texportiexporti #x                                 \t18\t44\t1914.96\n10008016\tnamelessunivamalg #x                             \t18\t1\t1902.04\n4003001\texportiedu pack #x                                \t8\t27\t1899.78\n1003001\texportiamalg #x                                   \t7\t16\t1889.55\n1002001\timportoamalg #x                                   \t17\t40\t1889.16\n3001001\tamalgexporti #x                                   \t8\t45\t1888.80\n10003008\texportiunivamalg #x                               \t17\t42\t1858.50\n4003002\texportiedu pack #x                                
\t17\t55\t1851.30\n3002001\timportoexporti #x                                 \t18\t39\t1848.96\n8003006\texportinameless #x                                \t8\t57\t1847.78\n4003001\texportiedu pack #x                                \t18\t10\t1835.50\n9014011\tedu packunivamalg #x                             \t19\t9\t1816.08\n6012005\timportobrand #x                                   \t19\t32\t1815.98\n7010005\tunivnameless #x                                   \t19\t23\t1815.60\n6011003\tamalgbrand #x                                     \t8\t50\t1808.80\n4001002\tamalgedu pack #x                                  \t17\t10\t1800.90\n2002002\timportoimporto #x                                 \t19\t0\t1798.16\n7010005\tunivnameless #x                                   \t18\t31\t1789.30\n7002002\timportobrand #x                                   \t19\t33\t1786.47\n4004002\tedu packedu pack #x                               \t18\t14\t1786.36\n1002001\timportoamalg #x                                   \t17\t53\t1781.00\n8002008\timportonameless #x                                \t18\t21\t1770.21\n10014001\tedu packamalgamalg #x                             \t19\t7\t1760.54\n8003006\texportinameless #x                                \t19\t11\t1749.55\n9004008\tedu packmaxi #x                                   \t19\t37\t1748.25\n4001001\tamalgedu pack #x                                  \t9\t2\t1743.00\n3004001\tedu packexporti #x                                \t18\t21\t1736.00\n7006001\tcorpbrand #x                                      \t9\t59\t1722.93\n7014001\tedu packnameless #x                               \t8\t14\t1720.33\n1002001\timportoamalg #x                                   \t19\t19\t1708.92\n1003001\texportiamalg #x                                   \t17\t55\t1703.00\n1003001\texportiamalg #x                                   \t8\t17\t1697.28\n6005003\tscholarcorp #x                                    \t19\t13\t1695.72\n2004002\tedu packimporto #x                                \t19\t26\t1693.89\n5003001\texportischolar #x                                 \t9\t49\t1690.08\n7011010\tamalgnameless #x                                 \t19\t13\t1688.23\n10014005\tedu packamalgamalg #x                             \t17\t18\t1679.70\n1001001\tamalgamalg #x                                     \t19\t26\t1678.27\n2003002\texportiimporto #x                                 \t18\t10\t1677.90\n9007003\tbrandmaxi #x                                      \t18\t29\t1676.16\n7013002\texportinameless #x                                \t9\t21\t1674.33\n9004008\tedu packmaxi #x                                   \t17\t45\t1667.47\n7012007\timportonameless #x                                \t9\t9\t1662.44\n9010002\tunivunivamalg #x                                  \t8\t32\t1636.20\n1004001\tedu packamalg #x                                  \t19\t48\t1629.36\n6005004\tscholarcorp #x                                    \t19\t28\t1623.60\n10014001\tedu packamalgamalg #x                             \t19\t42\t1622.88\n7002002\timportobrand #x                                   \t19\t43\t1611.50\n6009008\tmaxicorp #x                                       \t8\t40\t1607.76\n10008016\tnamelessunivamalg #x                             \t8\t35\t1606.32\n2004002\tedu packimporto #x                                \t19\t41\t1600.06\n8013007\texportimaxi #x                                    \t19\t12\t1597.52\n3002001\timportoexporti #x                                 \t19\t5\t1597.32\n8013009\texportimaxi #x                    
                \t8\t38\t1588.47\n6012005\timportobrand #x                                   \t18\t13\t1569.12\n10013006\texportiamalgamalg #x                              \t17\t4\t1565.76\n2001001\tamalgimporto #x                                   \t18\t22\t1560.28\n5002001\timportoscholar #x                                 \t18\t23\t1558.80\n1001001\tamalgamalg #x                                     \t19\t13\t1542.42\nNULL\tbrandmaxi #x                                     \t18\t10\t1538.24\n10011008\tamalgamalgamalg #x                                \t9\t49\t1536.50\n8014006\tedu packmaxi #x                                   \t9\t49\t1534.26\n9013003\texportiunivamalg #x                               \t7\t55\t1520.45\n7006007\tcorpbrand #x                                      \t8\t43\t1517.67\n2003001\texportiimporto #x                                 \t9\t16\t1512.00\n10011008\tamalgamalgamalg #x                                \t17\t46\t1506.54\n9016005\tcorpunivamalg #x                                  \t17\t54\t1505.28\n6012005\timportobrand #x                                   \t9\t47\t1503.81\n1002002\timportoamalg #x                                   \t17\t3\t1499.84\n9013005\texportiunivamalg #x                               \t18\t8\t1499.40\n2002002\timportoimporto #x                                 \t17\t14\t1483.50\n3001001\tamalgexporti #x                                   \t17\t17\t1459.44\n9001003\tamalgmaxi #x                                      \t19\t55\t1455.30\n9016011\tcorpunivamalg #x                                 \t18\t49\t1455.08\n9013003\texportiunivamalg #x                               \t17\t2\t1443.52\n9003003\texportimaxi #x                                    \t9\t20\t1442.40\n7004005\tedu packbrand #x                                  \t17\t20\t1432.67\n8002008\timportonameless #x                                \t17\t42\t1428.68\n9005002\tscholarmaxi #x                                    \t7\t32\t1420.70\n10008007\tnamelessunivamalg #x                              \t9\t59\t1406.40\n7011010\tamalgnameless #x                                 \t17\t19\t1404.86\n7015007\tscholarnameless #x                                \t18\t46\t1402.40\n2001001\tamalgimporto #x                                   \t17\t23\t1397.76\n10001007\tamalgunivamalg #x                                 \t17\t42\t1388.49\n1003001\texportiamalg #x                                   \t17\t57\t1388.03\n7006001\tcorpbrand #x                                      \t19\t58\t1387.75\n10005002\tscholarunivamalg #x                               \t19\t11\t1382.68\n9001003\tamalgmaxi #x                                      \t17\t17\t1382.16\n3002001\timportoexporti #x                                 \t6\t8\t1379.81\n7004005\tedu packbrand #x                                  \t17\t33\t1365.52\n3002001\timportoexporti #x                                 \t18\t0\t1350.00\n7014003\tedu packnameless #x                               \t8\t21\t1349.76\n9016003\tcorpunivamalg #x                                  \t18\t38\t1340.64\n2004002\tedu packimporto #x                                \t9\t4\t1334.41\n4003001\texportiedu pack #x                                \t17\t55\t1326.42\n1002001\timportoamalg #x                                   \t18\t51\t1308.82\n6003004\texporticorp #x                                    \t17\t39\t1305.00\n1004001\tedu packamalg #x                                  \t9\t44\t1304.91\n10005012\tscholarunivamalg #x                              \t6\t30\t1298.70\n10016008\tcorpamalgamalg #x     
                            \t19\t32\t1293.44\n10008006\tnamelessunivamalg #x                              \t8\t20\t1288.16\n3002001\timportoexporti #x                                 \t19\t25\t1284.20\n9013005\texportiunivamalg #x                               \t17\t50\t1269.05\n4003001\texportiedu pack #x                                \t9\t12\t1266.16\n5001002\tamalgscholar #x                                   \t17\t40\t1262.40\n1004001\tedu packamalg #x                                  \t17\t34\t1257.90\n10008007\tnamelessunivamalg #x                              \t8\t13\t1246.77\n4001001\tamalgedu pack #x                                  \t9\t56\t1233.28\n7015007\tscholarnameless #x                                \t19\t52\t1218.28\n9003003\texportimaxi #x                                    \t8\t39\t1211.34\n4003001\texportiedu pack #x                                \t18\t3\t1208.70\n5004001\tedu packscholar #x                                \t18\t16\t1197.76\n6003004\texporticorp #x                                    \t6\t50\t1195.92\n6002008\timportocorp #x                                    \t9\t14\t1194.08\n3003001\texportiexporti #x                                 \t17\t21\t1192.50\n8001009\tamalgnameless #x                                  \t19\t49\t1191.50\n6011003\tamalgbrand #x                                     \t7\t32\t1189.59\n6009008\tmaxicorp #x                                       \t18\t34\t1179.80\n2004002\tedu packimporto #x                                \t7\t19\t1178.10\n9013005\texportiunivamalg #x                               \t17\t40\t1173.15\n7006007\tcorpbrand #x                                      \t8\t2\t1166.66\n2002002\timportoimporto #x                                 \t19\t17\t1162.28\n3001001\tamalgexporti #x                                   \t18\t25\t1161.54\n2002001\timportoimporto #x                                 \t9\t59\t1161.00\n8010001\tunivmaxi #x                                       \t19\t55\t1160.64\n4001001\tamalgedu pack #x                                  \t17\t51\t1153.88\n9014008\tedu packunivamalg #x                              \t18\t4\t1153.00\n4001001\tamalgedu pack #x                                  \t19\t48\t1151.94\n7011010\tamalgnameless #x                                 \t8\t40\t1151.64\n5004001\tedu packscholar #x                                \t6\t57\t1148.68\n9004008\tedu packmaxi #x                                   \t19\t3\t1140.21\n8001009\tamalgnameless #x                                  \t17\t27\t1139.00\n8001007\tamalgnameless #x                                  \t19\t3\t1135.20\n9015011\tscholarunivamalg #x                              \t9\t36\t1134.00\n9015011\tscholarunivamalg #x                              \t9\t41\t1133.76\n7010005\tunivnameless #x                                   \t19\t8\t1130.29\n1001001\tamalgamalg #x                                     \t18\t54\t1129.80\n4001001\tamalgedu pack #x                                  \t8\t44\t1129.80\n7016008\tcorpnameless #x                                   \t18\t27\t1128.50\n4002002\timportoedu pack #x                                \t17\t24\t1124.37\n5002002\timportoscholar #x                                 \t19\t20\t1124.23\n4002001\timportoedu pack #x                                \t9\t42\t1120.98\n9013005\texportiunivamalg #x                               \t17\t16\t1113.20\n7015007\tscholarnameless #x                                \t17\t7\t1111.10\n8001009\tamalgnameless #x                                  \t19\t40\t1110.42\n2001001\tamalgimporto 
#x                                   \t17\t19\t1108.80\n3004001\tedu packexporti #x                                \t18\t56\t1100.47\n4003001\texportiedu pack #x                                \t9\t44\t1100.32\n8003006\texportinameless #x                                \t17\t54\t1098.20\n8001007\tamalgnameless #x                                  \t19\t51\t1097.28\n7002002\timportobrand #x                                   \t17\t27\t1092.25\n2001001\tamalgimporto #x                                   \t17\t41\t1090.98\n2004001\tedu packimporto #x                                \t17\t14\t1088.64\n3003001\texportiexporti #x                                 \t17\t54\t1080.64\n9007003\tbrandmaxi #x                                      \t17\t2\t1079.52\n1003001\texportiamalg #x                                   \t9\t36\t1079.10\n10008007\tnamelessunivamalg #x                              \t17\t12\t1073.16\n6011001\tamalgbrand #x                                     \t8\t53\t1071.84\n7008005\tnamelessbrand #x                                  \t19\t55\t1071.84\n3003001\texportiexporti #x                                 \t8\t20\t1064.85\n7015004\tscholarnameless #x                                \t19\t59\t1059.50\n4004002\tedu packedu pack #x                               \t18\t19\t1052.10\n2001001\tamalgimporto #x                                   \t9\t24\t1050.07\n9013005\texportiunivamalg #x                               \t17\t19\t1049.20\n4001001\tamalgedu pack #x                                  \t9\t3\t1047.60\n1004001\tedu packamalg #x                                  \t17\t29\t1041.70\n7009003\tmaxibrand #x                                      \t18\t17\t1040.98\n10008006\tnamelessunivamalg #x                              \t8\t21\t1040.04\n9013005\texportiunivamalg #x                               \t9\t58\t1032.46\n5001002\tamalgscholar #x                                   \t19\t7\t1029.76\n9016011\tcorpunivamalg #x                                 \t17\t51\t1029.34\n1002001\timportoamalg #x                                   \t18\t53\t1024.10\n5001001\tamalgscholar #x                                   \t6\t19\t1012.69\n2002002\timportoimporto #x                                 \t17\t49\t1001.90\n6012005\timportobrand #x                                   \t17\t38\t1001.60\n2001001\tamalgimporto #x                                   \t17\t36\t992.25\n7008005\tnamelessbrand #x                                  \t19\t19\t991.36\n5004001\tedu packscholar #x                                \t9\t9\t989.86\n2004001\tedu packimporto #x                                \t17\t28\t984.62\n2003002\texportiimporto #x                                 \t19\t14\t981.54\n3001001\tamalgexporti #x                                   \t19\t24\t979.20\n8015004\tscholarmaxi #x                                    \t9\t32\t973.50\n9010002\tunivunivamalg #x                                  \t17\t40\t969.15\n4001001\tamalgedu pack #x                                  \t9\t38\t966.92\n3004002\tedu packexporti #x                                \t17\t39\t965.25\n2003001\texportiimporto #x                                 \t8\t10\t960.33\n2002002\timportoimporto #x                                 \t18\t51\t958.86\n9016008\tcorpunivamalg #x                                  \t17\t52\t957.18\n7010005\tunivnameless #x                                   \t17\t54\t948.42\n9009009\tmaximaxi #x                                       \t8\t10\t939.60\n4003001\texportiedu pack #x                                \t17\t45\t937.30\n2004001\tedu 
packimporto #x                                \t17\t54\t932.94\n9003003\texportimaxi #x                                    \t19\t36\t928.32\n4003002\texportiedu pack #x                                \t9\t48\t924.80\n8002008\timportonameless #x                                \t19\t54\t915.38\n6002008\timportocorp #x                                    \t9\t0\t914.64\n7006007\tcorpbrand #x                                      \t18\t2\t914.39\n7014010\tedu packnameless #x                              \t18\t8\t910.30\n3004002\tedu packexporti #x                                \t18\t8\t907.12\n5001002\tamalgscholar #x                                   \t6\t8\t906.84\n2004001\tedu packimporto #x                                \t9\t57\t906.78\n8013007\texportimaxi #x                                    \t17\t54\t904.68\n7008005\tnamelessbrand #x                                  \t19\t29\t901.32\n1002001\timportoamalg #x                                   \t17\t45\t898.82\n10013006\texportiamalgamalg #x                              \t9\t14\t896.26\n1003001\texportiamalg #x                                   \t18\t51\t891.87\n9008003\tnamelessmaxi #x                                   \t17\t24\t890.40\n10008006\tnamelessunivamalg #x                              \t18\t37\t889.76\n7014010\tedu packnameless #x                              \t8\t15\t888.12\n8008006\tnamelessnameless #x                               \t19\t3\t884.00\n7011010\tamalgnameless #x                                 \t8\t20\t883.96\n9013005\texportiunivamalg #x                               \t18\t52\t883.35\n6015001\tscholarbrand #x                                   \t9\t55\t881.45\n3004001\tedu packexporti #x                                \t18\t0\t876.68\n1003001\texportiamalg #x                                   \t19\t56\t874.50\n9014008\tedu packunivamalg #x                              \t8\t10\t873.30\n8003006\texportinameless #x                                \t19\t44\t863.04\n2001001\tamalgimporto #x                                   \t8\t47\t862.92\n3003001\texportiexporti #x                                 \t17\t48\t860.52\n7004005\tedu packbrand #x                                  \t17\t0\t857.55\n8004002\tedu packnameless #x                               \t6\t59\t855.84\n8002008\timportonameless #x                                \t8\t23\t855.21\n5001002\tamalgscholar #x                                   \t7\t57\t854.70\n6011001\tamalgbrand #x                                     \t8\t7\t850.86\n8002007\timportonameless #x                                \t17\t0\t843.50\n2001001\tamalgimporto #x                                   \t19\t21\t840.63\n3004001\tedu packexporti #x                                \t17\t42\t836.43\n2004001\tedu packimporto #x                                \t17\t27\t833.91\n4003001\texportiedu pack #x                                \t6\t35\t825.24\n2001001\tamalgimporto #x                                   \t9\t45\t822.00\n2004001\tedu packimporto #x                                \t9\t15\t817.02\n7012007\timportonameless #x                                \t19\t21\t814.45\n6012005\timportobrand #x                                   \t17\t53\t805.65\n7011010\tamalgnameless #x                                 \t18\t34\t802.80\n4001001\tamalgedu pack #x                                  \t9\t19\t799.77\n7014010\tedu packnameless #x                              \t9\t5\t796.86\n4003002\texportiedu pack #x                                \t18\t59\t794.58\n1001001\tamalgamalg #x                                     
\t17\t49\t793.50\n2001002\tamalgimporto #x                                   \t7\t46\t789.96\n6005003\tscholarcorp #x                                    \t17\t22\t788.72\n9013005\texportiunivamalg #x                               \t19\t43\t788.67\n3004002\tedu packexporti #x                                \t19\t6\t788.12\n3002001\timportoexporti #x                                 \t18\t46\t787.76\n9016011\tcorpunivamalg #x                                 \t8\t49\t787.71\n10014001\tedu packamalgamalg #x                             \t19\t37\t783.36\n1002002\timportoamalg #x                                   \t18\t18\t781.60\n5002001\timportoscholar #x                                 \t17\t40\t781.55\n5002002\timportoscholar #x                                 \t18\t34\t779.22\n7002002\timportobrand #x                                   \t18\t17\t773.22\n8013007\texportimaxi #x                                    \t7\t56\t772.80\n8014006\tedu packmaxi #x                                   \t9\t32\t771.12\n8008006\tnamelessnameless #x                               \t9\t2\t767.36\n5001001\tamalgscholar #x                                   \t8\t51\t767.23\n2001001\tamalgimporto #x                                   \t17\t20\t765.15\n2001002\tamalgimporto #x                                   \t18\t17\t764.40\n9010002\tunivunivamalg #x                                  \t19\t2\t764.16\n2002002\timportoimporto #x                                 \t18\t46\t757.80\n7012008\timportonameless #x                                \t17\t57\t751.74\n9003003\texportimaxi #x                                    \t19\t24\t749.84\n1002002\timportoamalg #x                                   \t9\t12\t748.60\n1004001\tedu packamalg #x                                  \t18\t24\t746.88\n4001001\tamalgedu pack #x                                  \t19\t46\t743.38\n2004001\tedu packimporto #x                                \t17\t18\t742.14\n9013005\texportiunivamalg #x                               \t18\t49\t739.25\n5004001\tedu packscholar #x                                \t17\t50\t736.65\n7008005\tnamelessbrand #x                                  \t18\t7\t734.16\n9013005\texportiunivamalg #x                               \t6\t58\t733.04\n7004005\tedu packbrand #x                                  \t19\t38\t728.64\n2004001\tedu packimporto #x                                \t19\t35\t723.36\n5004002\tedu packscholar #x                                \t19\t57\t723.18\n4003001\texportiedu pack #x                                \t8\t5\t720.16\n8005004\tscholarnameless #x                                \t19\t1\t707.37\n5004002\tedu packscholar #x                                \t8\t20\t704.84\n4001002\tamalgedu pack #x                                  \t9\t46\t701.68\n5002001\timportoscholar #x                                 \t9\t42\t700.00\n2001001\tamalgimporto #x                                   \t18\t45\t698.10\n7002009\timportobrand #x                                   \t17\t55\t697.62\n1004001\tedu packamalg #x                                  \t18\t8\t694.96\n4003001\texportiedu pack #x                                \t9\t11\t693.24\n5004001\tedu packscholar #x                                \t18\t18\t693.00\n2004001\tedu packimporto #x                                \t9\t38\t689.92\n6008001\tnamelesscorp #x                                   \t18\t21\t689.22\n8001007\tamalgnameless #x                                  \t17\t23\t685.44\n3002001\timportoexporti #x                                 
\t18\t53\t677.28\n5003001\texportischolar #x                                 \t8\t32\t675.90\n7009003\tmaxibrand #x                                      \t19\t5\t672.80\n9013003\texportiunivamalg #x                               \t18\t10\t672.29\n9014011\tedu packunivamalg #x                             \t19\t27\t671.46\n4004002\tedu packedu pack #x                               \t7\t46\t670.32\n10005002\tscholarunivamalg #x                               \t17\t42\t669.28\n9003003\texportimaxi #x                                    \t9\t44\t666.39\n6005003\tscholarcorp #x                                    \t9\t26\t665.28\n8005004\tscholarnameless #x                                \t9\t14\t664.74\n5001002\tamalgscholar #x                                   \t18\t32\t657.69\n6011003\tamalgbrand #x                                     \t9\t53\t657.57\n7006001\tcorpbrand #x                                      \t8\t34\t655.68\n6007007\tbrandcorp #x                                      \t17\t39\t653.90\n8008006\tnamelessnameless #x                               \t6\t9\t648.45\n2003001\texportiimporto #x                                 \t17\t54\t645.00\n10008016\tnamelessunivamalg #x                             \t9\t53\t638.52\n4003001\texportiedu pack #x                                \t19\t4\t638.02\n8005004\tscholarnameless #x                                \t19\t2\t633.36\n4001002\tamalgedu pack #x                                  \t19\t11\t630.36\n1003001\texportiamalg #x                                   \t17\t4\t629.68\n4004002\tedu packedu pack #x                               \t17\t15\t617.28\n4004002\tedu packedu pack #x                               \t18\t8\t615.36\n9016003\tcorpunivamalg #x                                  \t18\t12\t609.96\n6007007\tbrandcorp #x                                      \t18\t14\t607.62\n9016005\tcorpunivamalg #x                                  \t19\t54\t603.95\n8005009\tscholarnameless #x                                \t19\t1\t601.68\n9009009\tmaximaxi #x                                       \t18\t47\t598.50\n1002001\timportoamalg #x                                   \t8\t7\t593.56\n2003002\texportiimporto #x                                 \t17\t0\t591.43\n3002001\timportoexporti #x                                 \t17\t11\t587.94\n4003001\texportiedu pack #x                                \t8\t26\t587.36\n3002001\timportoexporti #x                                 \t19\t13\t586.08\n10013006\texportiamalgamalg #x                              \t18\t6\t585.12\n9003003\texportimaxi #x                                    \t8\t4\t581.28\n3003001\texportiexporti #x                                 \t19\t0\t577.68\n9007003\tbrandmaxi #x                                      \t17\t0\t574.52\n3002002\timportoexporti #x                                 \t9\t53\t570.78\n10009016\tmaxiunivamalg #x                                 \t19\t4\t570.57\n1003001\texportiamalg #x                                   \t17\t59\t560.34\n7002002\timportobrand #x                                   \t8\t58\t560.28\n7006001\tcorpbrand #x                                      \t7\t48\t558.90\nNULL\tbrandmaxi #x                                     \t18\t7\t555.45\n6015003\tscholarbrand #x                                   \t18\t2\t555.30\n9003003\texportimaxi #x                                    \t19\t54\t546.96\n9003003\texportimaxi #x                                    \t9\t51\t541.80\n1002001\timportoamalg #x                                   \t19\t29\t540.00\n6009008\tmaxicorp #x          
                             \t17\t8\t537.37\n1002001\timportoamalg #x                                   \t17\t50\t536.58\n7011010\tamalgnameless #x                                 \t17\t34\t536.50\n8009005\tmaxinameless #x                                   \t18\t44\t535.78\n8009005\tmaxinameless #x                                   \t17\t9\t535.16\n3002001\timportoexporti #x                                 \t17\t44\t530.42\n9007003\tbrandmaxi #x                                      \t8\t40\t529.72\n2001001\tamalgimporto #x                                   \t9\t54\t529.55\n2003002\texportiimporto #x                                 \t8\t41\t527.80\n2002002\timportoimporto #x                                 \t6\t9\t527.40\n9001003\tamalgmaxi #x                                      \t17\t22\t517.16\n2002002\timportoimporto #x                                 \t19\t32\t516.40\n3002002\timportoexporti #x                                 \t9\t58\t515.78\n7006001\tcorpbrand #x                                      \t8\t37\t515.08\n4002001\timportoedu pack #x                                \t9\t0\t512.46\n4001001\tamalgedu pack #x                                  \t18\t10\t511.88\n9003003\texportimaxi #x                                    \t18\t59\t510.95\n1002002\timportoamalg #x                                   \t8\t10\t504.75\n4001001\tamalgedu pack #x                                  \t9\t57\t500.32\n2003001\texportiimporto #x                                 \t19\t55\t499.92\n6011003\tamalgbrand #x                                     \t9\t24\t498.96\n2001001\tamalgimporto #x                                   \t19\t0\t498.18\n2004001\tedu packimporto #x                                \t9\t35\t497.09\n3004002\tedu packexporti #x                                \t18\t40\t495.95\n4001001\tamalgedu pack #x                                  \t9\t34\t494.52\n5004002\tedu packscholar #x                                \t9\t15\t491.25\n4002001\timportoedu pack #x                                \t8\t5\t490.56\n10009016\tmaxiunivamalg #x                                 \t17\t20\t486.96\n7014003\tedu packnameless #x                               \t19\t23\t486.08\n4001001\tamalgedu pack #x                                  \t9\t32\t484.72\n5004001\tedu packscholar #x                                \t17\t24\t484.47\n4002001\timportoedu pack #x                                \t9\t9\t477.28\n1004001\tedu packamalg #x                                  \t8\t5\t476.13\n6011003\tamalgbrand #x                                     \t19\t9\t474.96\n8001009\tamalgnameless #x                                  \t7\t52\t473.55\n5001002\tamalgscholar #x                                   \t17\t3\t470.64\n3003001\texportiexporti #x                                 \t18\t10\t470.25\n6011001\tamalgbrand #x                                     \t19\t2\t467.52\n8009005\tmaxinameless #x                                   \t18\t37\t467.00\n7015004\tscholarnameless #x                                \t17\t24\t466.86\n1003001\texportiamalg #x                                   \t18\t7\t462.00\n5001002\tamalgscholar #x                                   \t18\t26\t461.89\n2001001\tamalgimporto #x                                   \t18\t39\t461.70\n9016005\tcorpunivamalg #x                                  \t19\t30\t458.64\n3002002\timportoexporti #x                                 \t17\t53\t454.08\n7014003\tedu packnameless #x                               \t8\t47\t453.84\n2004001\tedu packimporto #x                                
\t8\t27\t450.78\n8013007\texportimaxi #x                                    \t17\t50\t449.35\n9013005\texportiunivamalg #x                               \t9\t3\t448.74\n3004001\tedu packexporti #x                                \t8\t20\t447.76\n3004001\tedu packexporti #x                                \t9\t57\t445.60\n2001002\tamalgimporto #x                                   \t8\t16\t444.62\n5003001\texportischolar #x                                 \t18\t23\t443.56\n7010009\tunivnameless #x                                   \t17\t24\t440.88\n7013007\texportinameless #x                                \t9\t47\t438.55\n5002001\timportoscholar #x                                 \t9\t45\t432.48\n10016008\tcorpamalgamalg #x                                 \t18\t41\t430.55\n2004002\tedu packimporto #x                                \t18\t10\t430.08\n10009010\tmaxiunivamalg #x                                 \t8\t26\t429.00\n4003001\texportiedu pack #x                                \t18\t42\t424.02\nNULL\tbrandmaxi #x                                     \t17\t16\t422.31\n10001007\tamalgunivamalg #x                                 \t9\t36\t419.76\n7008005\tnamelessbrand #x                                  \t17\t44\t419.12\n2004001\tedu packimporto #x                                \t8\t54\t417.34\n4004002\tedu packedu pack #x                               \t17\t0\t415.48\n4003001\texportiedu pack #x                                \t17\t9\t415.32\n4003002\texportiedu pack #x                                \t18\t24\t415.16\n4001001\tamalgedu pack #x                                  \t19\t54\t414.72\n9014011\tedu packunivamalg #x                             \t8\t32\t413.76\n8001007\tamalgnameless #x                                  \t19\t43\t409.08\n3002001\timportoexporti #x                                 \t17\t28\t408.90\n4003001\texportiedu pack #x                                \t19\t48\t406.90\n9013005\texportiunivamalg #x                               \t17\t9\t400.20\n6003004\texporticorp #x                                    \t9\t11\t399.04\n3003002\texportiexporti #x                                 \t17\t7\t396.00\n10007016\tbrandunivamalg #x                                \t9\t16\t394.80\n3002001\timportoexporti #x                                 \t18\t38\t393.38\n4001002\tamalgedu pack #x                                  \t8\t57\t392.60\n10015013\tscholaramalgamalg #x                             \t17\t22\t390.39\n3002001\timportoexporti #x                                 \t17\t5\t390.18\n5004001\tedu packscholar #x                                \t9\t47\t384.09\n3001001\tamalgexporti #x                                   \t19\t19\t380.60\n4004002\tedu packedu pack #x                               \t18\t48\t377.60\n7005006\tscholarbrand #x                                   \t18\t52\t377.28\n10003008\texportiunivamalg #x                               \t17\t8\t372.52\n8004002\tedu packnameless #x                               \t8\t44\t372.33\n2003001\texportiimporto #x                                 \t18\t12\t371.25\n3002002\timportoexporti #x                                 \t19\t35\t370.59\n1002002\timportoamalg #x                                   \t18\t2\t370.08\n6002008\timportocorp #x                                    \t17\t4\t369.55\n4003001\texportiedu pack #x                                \t18\t51\t362.70\n10009010\tmaxiunivamalg #x                                 \t17\t40\t360.36\n1002002\timportoamalg #x                                   \t17\t35\t354.96\n3004001\tedu 
packexporti #x                                \t9\t58\t354.48\n10003008\texportiunivamalg #x                               \t17\t59\t352.32\n4001001\tamalgedu pack #x                                  \t18\t48\t350.46\n4001001\tamalgedu pack #x                                  \t17\t54\t349.44\n4001001\tamalgedu pack #x                                  \t17\t14\t347.51\n5001001\tamalgscholar #x                                   \t19\t20\t346.40\n4003001\texportiedu pack #x                                \t9\t9\t345.80\n4001001\tamalgedu pack #x                                  \t17\t21\t344.75\n10014005\tedu packamalgamalg #x                             \t18\t25\t344.00\n8008006\tnamelessnameless #x                               \t17\t34\t342.72\n9010002\tunivunivamalg #x                                  \t18\t53\t342.35\n8002007\timportonameless #x                                \t8\t17\t340.80\n1002001\timportoamalg #x                                   \t7\t36\t333.32\n4001001\tamalgedu pack #x                                  \t19\t31\t330.45\n4004002\tedu packedu pack #x                               \t18\t44\t329.04\n1003001\texportiamalg #x                                   \t18\t41\t326.83\n6008001\tnamelesscorp #x                                   \t19\t54\t325.65\n7013002\texportinameless #x                                \t17\t24\t324.24\n4004002\tedu packedu pack #x                               \t8\t0\t323.50\n3002001\timportoexporti #x                                 \t18\t51\t323.46\n10016008\tcorpamalgamalg #x                                 \t18\t35\t322.00\n10014005\tedu packamalgamalg #x                             \t17\t51\t320.76\n1003001\texportiamalg #x                                   \t9\t56\t320.40\n5002002\timportoscholar #x                                 \t17\t54\t318.20\n3004001\tedu packexporti #x                                \t17\t37\t317.52\n10007016\tbrandunivamalg #x                                \t9\t6\t315.75\n3003001\texportiexporti #x                                 \t17\t32\t314.72\n8010001\tunivmaxi #x                                       \t19\t13\t314.72\n4001001\tamalgedu pack #x                                  \t8\t0\t312.13\n5004002\tedu packscholar #x                                \t18\t11\t310.80\n1002001\timportoamalg #x                                   \t19\t37\t309.33\n1002001\timportoamalg #x                                   \t6\t21\t308.34\n7006005\tcorpbrand #x                                      \t17\t40\t307.97\n4001001\tamalgedu pack #x                                  \t17\t18\t307.05\n1002001\timportoamalg #x                                   \t17\t24\t306.93\n2004001\tedu packimporto #x                                \t7\t51\t305.40\n2004002\tedu packimporto #x                                \t17\t39\t302.90\n9003003\texportimaxi #x                                    \t8\t20\t298.80\n4001001\tamalgedu pack #x                                  \t19\t32\t296.38\n5003001\texportischolar #x                                 \t19\t27\t295.95\n2001001\tamalgimporto #x                                   \t18\t4\t294.80\n10013006\texportiamalgamalg #x                              \t9\t41\t290.03\n1002001\timportoamalg #x                                   \t19\t12\t281.76\n7013002\texportinameless #x                                \t17\t0\t279.00\n10005012\tscholarunivamalg #x                              \t9\t50\t275.52\n8010002\tunivmaxi #x                                       \t7\t31\t272.25\n10015013\tscholaramalgamalg #x           
                  \t7\t13\t270.28\n1003001\texportiamalg #x                                   \t8\t47\t269.50\n10014005\tedu packamalgamalg #x                             \t9\t30\t267.54\n7011010\tamalgnameless #x                                 \t19\t3\t265.47\n9003003\texportimaxi #x                                    \t18\t55\t264.77\n9016003\tcorpunivamalg #x                                  \t9\t6\t264.64\n2002002\timportoimporto #x                                 \t18\t1\t264.30\n1004001\tedu packamalg #x                                  \t8\t13\t263.55\n4004001\tedu packedu pack #x                               \t6\t8\t261.08\n8004002\tedu packnameless #x                               \t9\t56\t258.30\n2002002\timportoimporto #x                                 \t18\t36\t257.56\n3002001\timportoexporti #x                                 \t19\t20\t256.14\n2003002\texportiimporto #x                                 \t17\t57\t255.50\n9010002\tunivunivamalg #x                                  \t19\t30\t254.52\n6007007\tbrandcorp #x                                      \t9\t39\t253.23\n9013005\texportiunivamalg #x                               \t7\t35\t251.93\n3003001\texportiexporti #x                                 \t9\t42\t250.80\n10016002\tcorpamalgamalg #x                                 \t17\t47\t250.80\n7015007\tscholarnameless #x                                \t18\t44\t249.20\n3004002\tedu packexporti #x                                \t18\t35\t247.28\n8001007\tamalgnameless #x                                  \t19\t32\t246.54\n9010002\tunivunivamalg #x                                  \t19\t47\t246.43\n3002001\timportoexporti #x                                 \t9\t5\t244.62\n10016008\tcorpamalgamalg #x                                 \t19\t0\t243.09\n9009009\tmaximaxi #x                                       \t18\t26\t242.13\n9008003\tnamelessmaxi #x                                   \t18\t9\t241.86\n8001009\tamalgnameless #x                                  \t17\t46\t241.83\n7013007\texportinameless #x                                \t19\t19\t241.20\n1002002\timportoamalg #x                                   \t18\t49\t241.08\n8005004\tscholarnameless #x                                \t8\t16\t239.98\n4002001\timportoedu pack #x                                \t8\t45\t239.44\n3004001\tedu packexporti #x                                \t9\t26\t237.60\n6012005\timportobrand #x                                   \t18\t8\t235.32\n10016008\tcorpamalgamalg #x                                 \t17\t27\t232.70\n4003002\texportiedu pack #x                                \t8\t8\t220.08\n1002002\timportoamalg #x                                   \t18\t29\t219.36\n2003001\texportiimporto #x                                 \t18\t31\t218.00\n7014010\tedu packnameless #x                              \t17\t7\t217.80\n9005002\tscholarmaxi #x                                    \t18\t41\t217.40\n9016003\tcorpunivamalg #x                                  \t17\t3\t217.00\nNULL\tbrandmaxi #x                                     \t18\t27\t211.40\n2002002\timportoimporto #x                                 \t17\t42\t208.98\n4001002\tamalgedu pack #x                                  \t18\t29\t206.64\n3002001\timportoexporti #x                                 \t18\t1\t205.55\n6009008\tmaxicorp #x                                       \t18\t13\t204.48\n4001001\tamalgedu pack #x                                  \t19\t35\t203.76\n4003002\texportiedu pack #x                                
\t9\t9\t201.41\n1002002\timportoamalg #x                                   \t6\t7\t200.80\n9014011\tedu packunivamalg #x                             \t17\t27\t200.72\n8006010\tcorpnameless #x                                  \t9\t27\t199.75\n3004001\tedu packexporti #x                                \t17\t18\t199.64\n8005009\tscholarnameless #x                                \t19\t46\t198.38\n3004001\tedu packexporti #x                                \t9\t12\t197.62\n6002008\timportocorp #x                                    \t18\t24\t196.94\n8013009\texportimaxi #x                                    \t17\t17\t196.50\n10007016\tbrandunivamalg #x                                \t9\t7\t188.63\n4002002\timportoedu pack #x                                \t17\t54\t188.25\n4002001\timportoedu pack #x                                \t19\t55\t187.29\n8009005\tmaxinameless #x                                   \t17\t0\t187.09\n4001001\tamalgedu pack #x                                  \t17\t33\t186.10\n1003002\texportiamalg #x                                   \t7\t56\t179.82\n4001001\tamalgedu pack #x                                  \t9\t8\t178.40\n3002002\timportoexporti #x                                 \t17\t16\t177.30\n8003006\texportinameless #x                                \t9\t15\t174.05\n9003003\texportimaxi #x                                    \t17\t52\t172.62\n9014011\tedu packunivamalg #x                             \t6\t12\t172.05\n2004001\tedu packimporto #x                                \t17\t37\t170.66\n10001003\tamalgunivamalg #x                                 \t6\t57\t169.92\n7005006\tscholarbrand #x                                   \t7\t32\t168.08\n6005003\tscholarcorp #x                                    \t18\t59\t168.00\n6007007\tbrandcorp #x                                      \t17\t22\t166.74\n2004001\tedu packimporto #x                                \t17\t58\t165.69\n9003003\texportimaxi #x                                    \t8\t13\t164.78\n6011003\tamalgbrand #x                                     \t9\t43\t163.40\n4004002\tedu packedu pack #x                               \t7\t14\t162.92\n2001001\tamalgimporto #x                                   \t17\t0\t162.47\n2001002\tamalgimporto #x                                   \t19\t30\t161.44\n7013002\texportinameless #x                                \t19\t29\t160.95\n10011008\tamalgamalgamalg #x                                \t17\t13\t160.56\n9009009\tmaximaxi #x                                       \t17\t35\t160.40\n3002001\timportoexporti #x                                 \t17\t57\t159.60\n2003001\texportiimporto #x                                 \t19\t37\t159.08\n7002009\timportobrand #x                                   \t19\t5\t158.40\n7006007\tcorpbrand #x                                      \t8\t38\t158.16\n4003001\texportiedu pack #x                                \t19\t52\t157.78\n10009016\tmaxiunivamalg #x                                 \t9\t41\t157.68\n10015013\tscholaramalgamalg #x                             \t17\t32\t157.62\n6015001\tscholarbrand #x                                   \t17\t0\t157.28\n6008004\tnamelesscorp #x                                   \t17\t42\t156.80\n7013002\texportinameless #x                                \t9\t42\t156.75\n10014001\tedu packamalgamalg #x                             \t18\t27\t156.18\n2001001\tamalgimporto #x                                   \t8\t5\t155.75\n7010005\tunivnameless #x                                   
\t9\t33\t154.64\n7013002\texportinameless #x                                \t8\t59\t154.38\n5001002\tamalgscholar #x                                   \t18\t31\t154.35\n10003008\texportiunivamalg #x                               \t17\t27\t154.08\n6012005\timportobrand #x                                   \t9\t26\t153.51\n10016002\tcorpamalgamalg #x                                 \t17\t46\t152.10\n5001002\tamalgscholar #x                                   \t18\t41\t151.14\n8001009\tamalgnameless #x                                  \t17\t8\t150.66\n6008004\tnamelesscorp #x                                   \t19\t44\t150.40\n6002008\timportocorp #x                                    \t17\t37\t150.22\n8004002\tedu packnameless #x                               \t19\t36\t148.23\n8013007\texportimaxi #x                                    \t18\t33\t147.84\n6015003\tscholarbrand #x                                   \t18\t10\t147.66\n4002001\timportoedu pack #x                                \t19\t41\t146.60\n4003001\texportiedu pack #x                                \t17\t42\t146.32\n5003001\texportischolar #x                                 \t17\t57\t142.56\n4004002\tedu packedu pack #x                               \t17\t4\t141.24\n9015011\tscholarunivamalg #x                              \t7\t56\t139.16\n3003001\texportiexporti #x                                 \t17\t25\t135.24\n3003001\texportiexporti #x                                 \t9\t36\t133.92\n8004002\tedu packnameless #x                               \t19\t42\t133.70\n6008004\tnamelesscorp #x                                   \t19\t16\t132.81\n7014010\tedu packnameless #x                              \t17\t30\t132.46\n2001001\tamalgimporto #x                                   \t18\t6\t132.25\n7004005\tedu packbrand #x                                  \t17\t37\t132.22\n1003001\texportiamalg #x                                   \t19\t29\t131.58\n8002008\timportonameless #x                                \t17\t2\t131.22\n7012008\timportonameless #x                                \t17\t53\t130.18\n4001001\tamalgedu pack #x                                  \t7\t51\t128.50\n8010001\tunivmaxi #x                                       \t17\t17\t127.28\n9014008\tedu packunivamalg #x                              \t9\t53\t126.70\n8002007\timportonameless #x                                \t17\t21\t126.00\n9004008\tedu packmaxi #x                                   \t9\t19\t126.00\n7008005\tnamelessbrand #x                                  \t17\t52\t125.23\n9003003\texportimaxi #x                                    \t17\t23\t122.74\n6015001\tscholarbrand #x                                   \t17\t36\t121.92\n3002002\timportoexporti #x                                 \t9\t40\t119.60\nNULL\tbrandmaxi #x                                     \t18\t20\t119.52\n4001001\tamalgedu pack #x                                  \t19\t29\t119.38\n4003001\texportiedu pack #x                                \t19\t14\t119.36\n7015004\tscholarnameless #x                                \t17\t15\t118.36\n2001001\tamalgimporto #x                                   \t6\t35\t116.13\n9016005\tcorpunivamalg #x                                  \t9\t58\t116.09\n4002001\timportoedu pack #x                                \t19\t19\t114.52\n3001001\tamalgexporti #x                                   \t19\t20\t114.00\n5001002\tamalgscholar #x                                   \t19\t42\t113.04\n3002001\timportoexporti #x                                 \t6\t19\t111.52\n3004001\tedu 
packexporti #x                                \t17\t45\t110.88\n1004001\tedu packamalg #x                                  \t18\t22\t110.64\n1002001\timportoamalg #x                                   \t7\t52\t110.25\n9003003\texportimaxi #x                                    \t18\t58\t109.34\n6005003\tscholarcorp #x                                    \t9\t27\t106.75\n4003002\texportiedu pack #x                                \t9\t40\t106.44\n8008006\tnamelessnameless #x                               \t9\t39\t105.60\n6015003\tscholarbrand #x                                   \t18\t5\t104.70\n4002001\timportoedu pack #x                                \t18\t18\t104.00\n7009003\tmaxibrand #x                                      \t18\t33\t102.00\n10014005\tedu packamalgamalg #x                             \t18\t19\t99.36\n8005004\tscholarnameless #x                                \t18\t47\t98.98\n6008004\tnamelesscorp #x                                   \t17\t54\t97.40\n2003001\texportiimporto #x                                 \t19\t0\t92.79\n10001007\tamalgunivamalg #x                                 \t17\t1\t90.90\n1003001\texportiamalg #x                                   \t18\t16\t90.45\n4003001\texportiedu pack #x                                \t17\t4\t88.83\n10016002\tcorpamalgamalg #x                                 \t9\t14\t87.36\n10008016\tnamelessunivamalg #x                             \t18\t7\t87.06\n3002002\timportoexporti #x                                 \t8\t24\t86.73\n8006010\tcorpnameless #x                                  \t19\t44\t85.60\n1003001\texportiamalg #x                                   \t8\t29\t84.15\n2001001\tamalgimporto #x                                   \t18\t25\t84.04\n9008003\tnamelessmaxi #x                                   \t9\t43\t83.94\n4003002\texportiedu pack #x                                \t18\t2\t82.94\n9003003\texportimaxi #x                                    \t18\t29\t81.88\n10003008\texportiunivamalg #x                               \t18\t51\t81.80\n8006010\tcorpnameless #x                                  \t8\t15\t80.00\n5004001\tedu packscholar #x                                \t8\t28\t78.75\n10016002\tcorpamalgamalg #x                                 \t19\t8\t78.75\n8002007\timportonameless #x                                \t9\t7\t78.00\n8001007\tamalgnameless #x                                  \t9\t23\t77.04\n3002001\timportoexporti #x                                 \t6\t57\t76.38\n10001007\tamalgunivamalg #x                                 \t8\t36\t76.20\n8015004\tscholarmaxi #x                                    \t9\t53\t74.88\n3003001\texportiexporti #x                                 \t9\t27\t74.52\n9013005\texportiunivamalg #x                               \t8\t30\t74.20\n9016011\tcorpunivamalg #x                                 \t9\t14\t72.80\n2002002\timportoimporto #x                                 \t19\t35\t70.85\n8005004\tscholarnameless #x                                \t9\t18\t69.60\n9007003\tbrandmaxi #x                                      \t9\t18\t68.76\n9016008\tcorpunivamalg #x                                  \t9\t0\t68.64\n9005002\tscholarmaxi #x                                    \t9\t24\t67.89\n2001001\tamalgimporto #x                                   \t18\t23\t66.65\n5003001\texportischolar #x                                 \t17\t1\t66.36\n7008005\tnamelessbrand #x                                  \t18\t3\t66.30\n8003006\texportinameless #x                                
\t9\t8\t63.84\n7010005\tunivnameless #x                                   \t19\t10\t61.88\n4001001\tamalgedu pack #x                                  \t19\t20\t60.90\n5002002\timportoscholar #x                                 \t9\t8\t58.83\n4001001\tamalgedu pack #x                                  \t18\t27\t58.66\n4003002\texportiedu pack #x                                \t9\t24\t58.56\n3002001\timportoexporti #x                                 \t9\t50\t58.32\n8005009\tscholarnameless #x                                \t19\t10\t58.29\n3003001\texportiexporti #x                                 \t9\t4\t57.89\n7014010\tedu packnameless #x                              \t9\t59\t56.76\n7012007\timportonameless #x                                \t8\t53\t55.98\nNULL\tbrandmaxi #x                                     \t17\t54\t55.45\n10007016\tbrandunivamalg #x                                \t17\t11\t53.72\n7002002\timportobrand #x                                   \t7\t35\t53.52\n7010009\tunivnameless #x                                   \t17\t40\t53.16\n4002001\timportoedu pack #x                                \t8\t49\t52.87\n4004002\tedu packedu pack #x                               \t19\t55\t50.31\n4001001\tamalgedu pack #x                                  \t17\t46\t50.12\n5002001\timportoscholar #x                                 \t8\t10\t49.80\n9016005\tcorpunivamalg #x                                  \t19\t28\t49.50\n4002001\timportoedu pack #x                                \t9\t22\t49.44\n5002002\timportoscholar #x                                 \t9\t38\t47.88\n2003001\texportiimporto #x                                 \t9\t52\t47.46\n7016008\tcorpnameless #x                                   \t19\t40\t47.32\n4004002\tedu packedu pack #x                               \t18\t34\t47.11\n10008006\tnamelessunivamalg #x                              \t9\t37\t45.05\n10001003\tamalgunivamalg #x                                 \t9\t14\t44.55\n2002002\timportoimporto #x                                 \t8\t9\t44.32\n2002002\timportoimporto #x                                 \t18\t10\t41.16\n7006007\tcorpbrand #x                                      \t18\t6\t40.32\n1003001\texportiamalg #x                                   \t8\t39\t38.22\n10001003\tamalgunivamalg #x                                 \t17\t11\t37.84\n7012008\timportonameless #x                                \t17\t1\t37.44\n1002001\timportoamalg #x                                   \t19\t15\t36.74\n3003001\texportiexporti #x                                 \t18\t22\t34.76\n4001001\tamalgedu pack #x                                  \t8\t32\t32.23\n9013005\texportiunivamalg #x                               \t19\t33\t29.85\n1002001\timportoamalg #x                                   \t19\t57\t29.52\n10016002\tcorpamalgamalg #x                                 \t9\t18\t25.47\n6007007\tbrandcorp #x                                      \t9\t11\t24.77\n1003002\texportiamalg #x                                   \t19\t43\t24.43\n9004008\tedu packmaxi #x                                   \t18\t57\t24.32\n10009016\tmaxiunivamalg #x                                 \t19\t19\t22.66\n10008006\tnamelessunivamalg #x                              \t9\t3\t22.14\n3003001\texportiexporti #x                                 \t19\t25\t21.40\n2001001\tamalgimporto #x                                   \t8\t58\t19.88\n3004001\tedu packexporti #x                                \t17\t29\t17.76\n6008001\tnamelesscorp #x                                   
\t17\t58\t17.00\n9009009\tmaximaxi #x                                       \t9\t38\t16.80\n3003001\texportiexporti #x                                 \t17\t52\t15.96\n6002008\timportocorp #x                                    \t9\t41\t15.78\n2004002\tedu packimporto #x                                \t8\t35\t15.21\n6011001\tamalgbrand #x                                     \t19\t40\t14.55\n10008016\tnamelessunivamalg #x                             \t8\t54\t14.49\n5004001\tedu packscholar #x                                \t6\t4\t13.64\n2003002\texportiimporto #x                                 \t8\t47\t13.04\n2001001\tamalgimporto #x                                   \t19\t59\t12.84\n7013002\texportinameless #x                                \t17\t1\t12.54\n10014001\tedu packamalgamalg #x                             \t18\t49\t12.50\n8015004\tscholarmaxi #x                                    \t9\t0\t10.06\n9004008\tedu packmaxi #x                                   \t9\t31\t9.80\n5004002\tedu packscholar #x                                \t9\t46\t8.19\n4002002\timportoedu pack #x                                \t8\t58\t7.79\n4001001\tamalgedu pack #x                                  \t19\t59\t7.38\n4004002\tedu packedu pack #x                               \t9\t18\t7.33\n2002002\timportoimporto #x                                 \t19\t14\t6.54\n5004002\tedu packscholar #x                                \t19\t19\t6.50\n6008001\tnamelesscorp #x                                   \t19\t32\t5.75\n7010009\tunivnameless #x                                   \t17\t29\t4.25\n6015003\tscholarbrand #x                                   \t18\t44\t3.94\n7014001\tedu packnameless #x                               \t18\t31\t3.08\n6015003\tscholarbrand #x                                   \t17\t18\t3.04\n8004002\tedu packnameless #x                               \t9\t57\t0.24\n2001001\tamalgimporto #x                                   \t17\t34\t0.00\n2004002\tedu packimporto #x                                \t19\t40\t0.00\n4004002\tedu packedu pack #x                               \t17\t50\t0.00\n6009008\tmaxicorp #x                                       \t17\t19\t0.00\n6015003\tscholarbrand #x                                   \t9\t51\t0.00\n8002007\timportonameless #x                                \t19\t40\t0.00\n9003003\texportimaxi #x                                    \t19\t33\t0.00\n9015011\tscholarunivamalg #x                              \t18\t10\t0.00\n1001001\tamalgamalg #x                                     \t19\t20\tNULL\n4003001\texportiedu pack #x                                \t9\t42\tNULL\n5002001\timportoscholar #x                                 \t9\t8\tNULL\n5003001\texportischolar #x                                 \t17\t43\tNULL\n8005009\tscholarnameless #x                                \t9\t58\tNULL\n8010002\tunivmaxi #x                                       \t9\t53\tNULL\n9013003\texportiunivamalg #x                               \t9\t35\tNULL\n10003008\texportiunivamalg #x                               \t19\t30\tNULL\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q72.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_desc:string,w_warehouse_name:string,d_week_seq:int,no_promo:bigint,promo:bigint,total_cnt:bigint>\n-- !query output\nNULL\tJust good amou\t5166\t2\t2\t2\nActions see of course informal phrases. Markedly right men buy honest, additional stations. In order imaginative factors used to move human thanks. Centres shall catch altogether succe\tSignificantly\t5189\t2\t2\t2\nChildren shall write only systems. Again outdoor c\tOperations\t5215\t2\t2\t2\nClouds may compensate about religious man\tMatches produce\t5216\t2\t2\t2\nCritical hours might stress above married, sufficient thousands. Poets shall die medical, ameri\tOperations\t5217\t2\t2\t2\nCustomers find for a dogs. Main, following members must live closely because of the interests. Children could imagine more. Innocent, social forces will welcome. I\tSignificantly\t5210\t2\t2\t2\nDelicate readers gain too able officers. Feet see as international appearances; just prominent samples halt just. Substantia\tJust good amou\t5197\t2\t2\t2\nFields might need only however new lengths; explicit, impossible parents cut early able items. Details specify particularly \tSignificantly\t5208\t2\t2\t2\nGuests agree around trying, young costs. Here annual banks appeas\tJust good amou\t5204\t2\t2\t2\nGuests agree around trying, young costs. Here annual banks appeas\tSignificantly\t5204\t2\t2\t2\nHere extra efforts ensure eyes; merely little periods will not loosen home past a boys. Just local aspects must reclaim. Standard qualities might not roll today. Military, national clothes must go wid\tMatches produce\t5215\t2\t2\t2\nMore than familiar lives survive independent reports. Sites must find clearly regulations. Together honest savings refuse so other fingers; british tables\tSelective,\t5215\t2\t2\t2\nMysterious p\tMatches produce\t5218\t2\t2\t2\nPlease separate charges point spiritual, new areas. Angry, able units should try certainly in a accounts. Years retain alternatively. Certain, constant women spend really vital rights. Medical, round \tMatches produce\t5210\t2\t2\t2\nScottish, forward years could interrupt yesterday pure scienc\tJust good amou\t5214\t2\t2\t2\nSectors might not know properly. Large, electric workers used to drop even as ca\tMatches produce\t5171\t2\t2\t2\nSma\tMatches produce\t5204\t2\t2\t2\nSocial universities get. Easier yellow results question above basic, direct roots; individual, respective \tJust good amou\t5200\t2\t2\t2\nUpper, usual agencies would not evaluate able, simple faces. Poor lights g\tSelective,\t5197\t2\t2\t2\nNULL\tMatches produce\t5177\t1\t1\t1\nNULL\tOperations\t5203\t1\t1\t1\nNULL\tSignificantly\t5171\t1\t1\t1\nNULL\tSignificantly\t5203\t1\t1\t1\nNULL\tSignificantly\t5213\t1\t1\t1\nA bit liable flowers change also writings. Currently soviet ministers come. Hotels telephone before aggressive, economic eyes. Blue changes improve. Overal\tJust good amou\t5201\t1\t1\t1\nA bit liable flowers change also writings. Currently soviet ministers come. Hotels telephone before aggressive, economic eyes. Blue changes improve. Overal\tMatches produce\t5201\t1\t1\t1\nA lot young materials remain below from a rises. \tMatches produce\t5190\t1\t1\t1\nA lot young materials remain below from a rises. \tOperations\t5190\t1\t1\t1\nA lot young materials remain below from a rises. \tSelective,\t5190\t1\t1\t1\nAback british songs meet. 
Chief jobs k\tSelective,\t5178\t1\t1\t1\nAble differences may not spoil particularly then civil ways. Less unusual plants swallow for example in a pp.. Eastern, typical children start to a councils. Exciting cells must meet new, huge me\tMatches produce\t5178\t1\t1\t1\nAble differences may not spoil particularly then civil ways. Less unusual plants swallow for example in a pp.. Eastern, typical children start to a councils. Exciting cells must meet new, huge me\tSignificantly\t5167\t1\t1\t1\nAble issues bother however political services. French teachers will act voices. Pregnant, existing cases make by th\tJust good amou\t5208\t1\t1\t1\nAble, available problems apply even to a bodies. Patients so\tSelective,\t5216\t1\t1\t1\nAble, continuous figures see with a patients. Men go more open notes. Different engineers can display. Even strong fortunes cannot put at least general fans; reliable talk\tOperations\t5216\t1\t1\t1\nAble, potential products should\tJust good amou\t5208\t1\t1\t1\nAble, potential products should\tOperations\t5208\t1\t1\t1\nAbout careful activities hear level cases. However satisfactory reports feel as words. More bad things preserve now poor tories; only strong tools intervene canadian waters. Blin\tJust good amou\t5193\t1\t1\t1\nAbout right clothes must get thoughtfully to a cases. Eastern improvements \tJust good amou\t5197\t1\t1\t1\nAbout statistical blocks shall point so brothers. Even new affairs spend hopefully even old contexts. Possible officers wait absolutely with\tOperations\t5168\t1\t1\t1\nAbove, new groups will not like much local bodies. However traditional sessions can walk slowly big, young aspects. Quite close companies ought to take in a rules. Leaders must not like of cou\tMatches produce\t5187\t1\t1\t1\nAbove, new groups will not like much local bodies. However traditional sessions can walk slowly big, young aspects. Quite close companies ought to take in a rules. Leaders must not like of cou\tOperations\t5187\t1\t1\t1\nAbsolutely b\tMatches produce\t5211\t1\t1\t1\nAccidentally wrong communities look more goods. Rural matters recognize. Large, new days go hap\tMatches produce\t5190\t1\t1\t1\nAccidents can include below other, marginal plans. Comparable, welsh exceptions argue most as usual physical claims. Certain months may smell far from in a cases. Active seconds used to restore t\tSignificantly\t5213\t1\t1\t1\nAccounts return into a colleagues\tOperations\t5218\t1\t1\t1\nAccurate others could not enable raw goods. Usually early drawings choose originally into a boys. So capable students place \tSelective,\t5188\t1\t1\t1\nActions see of course informal phrases. Markedly right men buy honest, additional stations. In order imaginative factors used to move human thanks. Centres shall catch altogether succe\tJust good amou\t5209\t1\t1\t1\nActive studies state away already large shelves. Extremely international appli\tSignificantly\t5208\t1\t1\t1\nActive, mi\tSelective,\t5192\t1\t1\t1\nActual things should prevent there responsible schemes. Others go all undoubtedly increasing things. Small, full samples analys\tOperations\t5190\t1\t1\t1\nAdded activities leave hands. Nevertheless individual moments could repre\tOperations\t5198\t1\t1\t1\nAdded activities leave hands. Nevertheless individual moments could repre\tSelective,\t5174\t1\t1\t1\nAdded activities leave hands. 
Nevertheless individual moments could repre\tSelective,\t5198\t1\t1\t1\nAdditional officers shall not apply so poin\tJust good amou\t5217\t1\t1\t1\nAdditional, secondary movements will hurt theoretical, major seconds. Families hear well possible, magnetic minutes. Earlier financial women would s\tOperations\t5194\t1\t1\t1\nAddresses retain once more applicable events. Following blocks follow for a develo\tOperations\t5197\t1\t1\t1\nAdequate, true insects clear similar payments. Front rela\tJust good amou\t5177\t1\t1\t1\nAdults might not surrender doubtful, upper industries; earnings insist m\tOperations\t5188\t1\t1\t1\nAfraid questions ta\tSelective,\t5187\t1\t1\t1\nAfraid women shall correct so only women. Red, severe friends repay suddenly out of a elements. Very rigid complaints want however great, little years. Black, itali\tSelective,\t5218\t1\t1\t1\nAfrican, good purposes would determine quite visitors. Sources can make then royal jobs; still full sciences should concentrate no longer elections. Fair applicants talk there under a c\tJust good amou\t5214\t1\t1\t1\nAfrican, good purposes would determine quite visitors. Sources can make then royal jobs; still full sciences should concentrate no longer elections. Fair applicants talk there under a c\tSelective,\t5168\t1\t1\t1\nAfrican, good purposes would determine quite visitors. Sources can make then royal jobs; still full sciences should concentrate no longer elections. Fair applicants talk there under a c\tSelective,\t5214\t1\t1\t1\nAgain british shareholders see shares. American lives ought to establish horses. Then ideal conservatives might charge even nec\tSignificantly\t5186\t1\t1\t1\nAgain economic objections spend suddenly urgently worried boats. Pupils talk yet nonethele\tJust good amou\t5193\t1\t1\t1\nAgain major troubles create new, other children. Fair interactions may telephone\tSelective,\t5199\t1\t1\t1\nAgain remote branches should help; processes may s\tJust good amou\t5191\t1\t1\t1\nAgain scottish accidents would destroy italian places; please careful services pick sometimes overall men; immensely extra sets move optimistic, substantial actors. Human, likely reports\tJust good amou\t5193\t1\t1\t1\nAgain specialist words transform still as perfect forces; expensive, like businesses might want u\tJust good amou\t5191\t1\t1\t1\nAgencies should need likely recommendations. Active, fresh stars shall get just young fragments. Personal \tSignificantly\t5216\t1\t1\t1\nAgo natural taxes could protect rats. More local days shall tend closely. Proteins may intervene very perfect men. Procedures make expens\tJust good amou\t5214\t1\t1\t1\nAgo sexual courts may attract. Important, alone observations expect. New, available ways represent years. Excell\tOperations\t5203\t1\t1\t1\nAgricultural, important boys know willingly after the interests. S\tSignificantly\t5213\t1\t1\t1\nAgricultural, social tiles would come tragic, various buildings. Good employees shall result high wet plants. Only single contacts support already. Priests would not say unreasonably. Upstairs good \tMatches produce\t5191\t1\t1\t1\nAhead new columns s\tOperations\t5187\t1\t1\t1\nAhead young classes should take more central late conservatives. Formal, common details used to think\tSignificantly\t5180\t1\t1\t1\nAll capital bacteria make jobs. Again appropriate eyes may not leave others. There fixed ways\tMatches produce\t5202\t1\t1\t1\nAll direct guns would look cool sure sophisticated bonds; irish sequences look just also local years. 
Almost close things can look. Build\tMatches produce\t5178\t1\t1\t1\nAllowances might lay at best children. Academic sections burst hot times. Short-term, warm goods\tSelective,\t5177\t1\t1\t1\nAlmost busy threats go together recent sides; still tired wines shall not admit on a\tMatches produce\t5209\t1\t1\t1\nAlmost good hours should not make. Fully appropriate cases may stop for a terms. Legal compl\tMatches produce\t5188\t1\t1\t1\nAlone bills protect adults. Demands make in a gains. Students train harshly. Ashamed periods choose just general, free places. Senses would finish quite slow, gla\tJust good amou\t5213\t1\t1\t1\nAlone friends would not know else armies. Services recover too extreme, fiscal machines\tOperations\t5205\t1\t1\t1\nAlone new copies discuss to a dates; all black machines get just just royal years. For example free weeks underestimate accurately individual mountains. National, delicious\tSignificantly\t5185\t1\t1\t1\nAlone responsibilities used to argue all. Eventual, past reasons lead electrical, absent years. Again big sessions embrace about later familiar hundreds. Certain parts cannot assist desperately good m\tJust good amou\t5208\t1\t1\t1\nAlone responsibilities used to argue all. Eventual, past reasons lead electrical, absent years. Again big sessions embrace about later familiar hundreds. Certain parts cannot assist desperately good m\tMatches produce\t5208\t1\t1\t1\nAlone sole services keep only; stairs shall eliminate for the woods. Methods must need yet. Other students can\tJust good amou\t5203\t1\t1\t1\nAlone, fiscal attitudes will see subsequently. Arrangements used to prefe\tSelective,\t5207\t1\t1\t1\nAlone, fortunate minutes can put particularly out of a consequences. Darling costs run already in a laws. Molecules discover. Temporary, political ty\tMatches produce\t5200\t1\t1\t1\nAlready bright poems evaluate somewhere problems; regulations will not conceal now delighted objects; false thoughts follow then. Months should not work only outside times. Fingers prove worker\tOperations\t5170\t1\t1\t1\nAlready bright poems evaluate somewhere problems; regulations will not conceal now delighted objects; false thoughts follow then. Months should not work only outside times. Fingers prove worker\tSignificantly\t5215\t1\t1\t1\nAlso alone patients seem also for the connections. Significant flowers prove finally in a opportunities. Closely international women might avoid tomorrow hidden, following \tOperations\t5209\t1\t1\t1\nAlso difficult acres researc\tMatches produce\t5214\t1\t1\t1\nAlso dominant elections call only more conventional films. Magazines shall not hand really soon opening hundreds. Equally particula\tSignificantly\t5216\t1\t1\t1\nAlso eastern matters should not enable now irish, \tMatches produce\t5215\t1\t1\t1\nAlso good forms\tJust good amou\t5215\t1\t1\t1\nAlso interesting sides acknowledge basically. Tonight low employees run thus. More sympathetic results watch rarely. Even severe arrangements study very\tMatches produce\t5212\t1\t1\t1\nAlso invisible shoes sell whit\tJust good amou\t5201\t1\t1\t1\nAlso invisible shoes sell whit\tSelective,\t5193\t1\t1\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q73.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_last_name:string,c_first_name:string,c_salutation:string,c_preferred_cust_flag:string,ss_ticket_number:int,cnt:bigint>\n-- !query output\nSauer                         \tLarry               \tMr.       \tN\t215795\t5\nValle                         \tChandra             \tDr.       \tN\t45338\t5\nRichardson                    \tHarry               \tMr.       \tY\t85055\t5\nRansom                        \tThomas              \tSir       \tN\t872\t5\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q74.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<customer_id:string,customer_first_name:string,customer_last_name:string>\n-- !query output\nAAAAAAAAAAECBAAA\tFrank               \tWenzel                        \nAAAAAAAAABGKAAAA\tJonna               \tKing                          \nAAAAAAAAAFAGAAAA\tRobert              \tChang                         \nAAAAAAAAAFBNAAAA\tRobert              \tBaines                        \nAAAAAAAAAGLPAAAA\tCharlene            \tMarcus                        \nAAAAAAAAAHKEAAAA\tWilliam             \tStafford                      \nAAAAAAAABAAGAAAA\tLuis                \tJames                         \nAAAAAAAABBEAAAAA\tJason               \tGallegos                      \nAAAAAAAABGMHBAAA\tMichael             \tGillespie                     \nAAAAAAAABIABAAAA\tLetha               \tStone                         \nAAAAAAAABIIHAAAA\tCharles             \tQuarles                       \nAAAAAAAABILCAAAA\tTheresa             \tMullins                       \nAAAAAAAABJEDBAAA\tArthur              \tBryan                         \nAAAAAAAACEMIAAAA\tJames               \tHernandez                     \nAAAAAAAACGLDAAAA\tAngelo              \tSloan                         \nAAAAAAAACOEHBAAA\tChristine           \tGonzalez                      \nAAAAAAAACPDFBAAA\tCheryl              \tBarry                         \nAAAAAAAADFJBBAAA\tPatrick             \tJones                         \nAAAAAAAADHCBAAAA\tTherese             \tPerez                         \nAAAAAAAADHNHBAAA\tPatrick             \tCooper                        \nAAAAAAAADKMBAAAA\tDonald              \tNelson                        \nAAAAAAAAEBFHAAAA\tEsther              \tMerrill                       \nAAAAAAAAEFCEBAAA\tCornelius           \tMartino                       \nAAAAAAAAEIAHAAAA\tHenry               \tDesantis                      \nAAAAAAAAEIPIAAAA\tLuke                \tRios                          \nAAAAAAAAFAIEAAAA\tBetty               \tGipson                        \nAAAAAAAAFDIMAAAA\tStephanie           \tCowan                         \nAAAAAAAAFGMHBAAA\tDonald              \tColeman                       \nAAAAAAAAFGNEAAAA\tAndrew              \tSilva                         \nAAAAAAAAFHNDAAAA\tVirgil              \tMcdonald                      \nAAAAAAAAFMOKAAAA\tHarry               \tBrown                         \nAAAAAAAAFMPPAAAA\tManuel              \tBryant                        \nAAAAAAAAFOEDAAAA\tLori                \tErwin                         \nAAAAAAAAGCGIAAAA\tMae                 \tWilliams                      \nAAAAAAAAGEKLAAAA\tJerilyn             \tWalker                        \nAAAAAAAAGGMHAAAA\tJulia               \tFisher                        \nAAAAAAAAGLDMAAAA\tAlex                \tNorris                        \nAAAAAAAAGMFHAAAA\tBruce               \tHowe                          \nAAAAAAAAGMGEBAAA\tTamika              \tPotts                         \nAAAAAAAAHEIFBAAA\tNULL\tJones                         \nAAAAAAAAHEPFBAAA\tKathryn             \tKinney                        \nAAAAAAAAHGKLAAAA\tArthur              \tChristensen                   \nAAAAAAAAHIEIAAAA\tWilliam             \tRoberts                       \nAAAAAAAAHLEAAAAA\tGeneva              \tSims                          \nAAAAAAAAHLJCAAAA\tMarlene             \tGrover                        \nAAAAAAAAIANDAAAA\tElva                \tWade                          \nAAAAAAAAIBBFBAAA\tJames               \tCompton                       
\nAAAAAAAAIBHHBAAA\tVennie              \tLoya                          \nAAAAAAAAIBJDBAAA\tDean                \tVelez                         \nAAAAAAAAILLJAAAA\tBilly               \tOrtiz                         \nAAAAAAAAIODCBAAA\tJennifer            \tCrane                         \nAAAAAAAAIPGJAAAA\tMichael             \tNULL\nAAAAAAAAIPKJAAAA\tCharles             \tJones                         \nAAAAAAAAJADIAAAA\tMargaret            \tRoberts                       \nAAAAAAAAJBELAAAA\tSean                \tBusby                         \nAAAAAAAAJCNBBAAA\tJohnnie             \tCox                           \nAAAAAAAAJDEFAAAA\tLoretta             \tSerrano                       \nAAAAAAAAJGDLAAAA\tFredrick            \tDavis                         \nAAAAAAAAJHGFAAAA\tPamela              \tGannon                        \nAAAAAAAAJIAHAAAA\tShawna              \tDelgado                       \nAAAAAAAAJINGAAAA\tElla                \tMoore                         \nAAAAAAAAJMIDAAAA\tSally               \tThurman                       \nAAAAAAAAKAKPAAAA\tCarolann            \tRoyer                         \nAAAAAAAAKMHPAAAA\tRobert              \tJones                         \nAAAAAAAAKNMEBAAA\tAmber               \tGonzalez                      \nAAAAAAAALEAHBAAA\tEddie               \tPena                          \nAAAAAAAALMAJAAAA\tIleen               \tLinn                          \nAAAAAAAALMGGBAAA\tDedra               \tRainey                        \nAAAAAAAALNLABAAA\tJanie               \tGarcia                        \nAAAAAAAALPHGBAAA\tDorothy             \tHeller                        \nAAAAAAAAMFMKAAAA\tJohn                \tSanders                       \nAAAAAAAAMHOLAAAA\tTerri               \tCook                          \nAAAAAAAAMJFAAAAA\tMarcus              \tEspinal                       \nAAAAAAAAMLOEAAAA\tMiguel              \tJackson                       \nAAAAAAAANBECBAAA\tMichael             \tLombardi                      \nAAAAAAAANKBBAAAA\tDiann               \tSaunders                      \nAAAAAAAAOCDCAAAA\tArmando             \tJackson                       \nAAAAAAAAOCFLAAAA\tBill                \tFreeman                       \nAAAAAAAAOEDIAAAA\tAlexander           \tRich                          \nAAAAAAAAOJBPAAAA\tJonathan            \tMcbride                       \nAAAAAAAAOMOKAAAA\tLaurette            \tGary                          \nAAAAAAAAOOKKAAAA\tDeborah             \tEarly                         \nAAAAAAAAOPMDAAAA\tPeggy               \tSmith                         \nAAAAAAAAPAEEBAAA\tAudria              \tMattson                       \nAAAAAAAAPBIGBAAA\tSusie               \tZavala                        \nAAAAAAAAPEFLAAAA\tDavid               \tMartinez                      \nAAAAAAAAPFKDAAAA\tLinda               \tSimmons                       \nAAAAAAAAPIGBBAAA\tCharles             \tWelch                         \nAAAAAAAAPNMGAAAA\tChristine           \tOlds\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q75.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<prev_year:int,year:int,i_brand_id:int,i_class_id:int,i_category_id:int,i_manufact_id:int,prev_yr_cnt:bigint,curr_yr_cnt:bigint,sales_cnt_diff:bigint,sales_amt_diff:decimal(19,2)>\n-- !query output\n2001\t2002\t7009003\t16\t9\t545\t6720\t3891\t-2829\t-117008.58\n2001\t2002\t8005007\t5\t9\t512\t6454\t4227\t-2227\t-135735.94\n2001\t2002\t9001010\t1\t9\t456\t5643\t3436\t-2207\t-93617.05\n2001\t2002\t10013015\t3\t9\t127\t6225\t4143\t-2082\t-14567.99\n2001\t2002\t9015004\t10\t9\t118\t7332\t5259\t-2073\t-77291.51\n2001\t2002\t9004002\t4\t9\t435\t6625\t4581\t-2044\t-122670.86\n2001\t2002\t9014004\t3\t9\t306\t5855\t3858\t-1997\t-73620.65\n2001\t2002\t5002001\t5\t9\t170\t5700\t3736\t-1964\t-109018.53\n2001\t2002\t9014010\t1\t9\t987\t6970\t5148\t-1822\t-75086.23\n2001\t2002\t9015008\t15\t9\t327\t6526\t4739\t-1787\t-93540.56\n2001\t2002\t9016002\t16\t9\t197\t6053\t4343\t-1710\t-97667.31\n2001\t2002\t9015002\t15\t9\t856\t10449\t8783\t-1666\t-20233.37\n2001\t2002\t3001001\t1\t9\t380\t6554\t4895\t-1659\t-71747.73\n2001\t2002\t5002001\t14\t9\t75\t6649\t5024\t-1625\t-114258.52\n2001\t2002\t9004008\t4\t9\t787\t6493\t4911\t-1582\t-133842.63\n2001\t2002\t3001001\t4\t9\t652\t6331\t4766\t-1565\t-74417.45\n2001\t2002\t5004001\t11\t9\t963\t6179\t4628\t-1551\t-85648.92\n2001\t2002\t3004001\t7\t9\t237\t6097\t4560\t-1537\t-118933.63\n2001\t2002\t1001001\t8\t9\t271\t5623\t4128\t-1495\t-50180.45\n2001\t2002\t10004009\t4\t9\t513\t6500\t5024\t-1476\t-67288.17\n2001\t2002\t4004001\t10\t9\t126\t5667\t4194\t-1473\t-88247.10\n2001\t2002\t9007002\t7\t9\t144\t5474\t4020\t-1454\t-42908.70\n2001\t2002\t9016008\t16\t9\t53\t6099\t4671\t-1428\t-67851.70\n2001\t2002\t9007008\t7\t9\t110\t5655\t4230\t-1425\t-28768.95\n2001\t2002\t9002008\t2\t9\t97\t5979\t4581\t-1398\t-101148.97\n2001\t2002\t9008004\t8\t9\t171\t5952\t4559\t-1393\t-41622.99\n2001\t2002\t10010001\t10\t9\t368\t5714\t4353\t-1361\t-78871.35\n2001\t2002\t3003001\t8\t9\t266\t5785\t4431\t-1354\t-99666.06\n2001\t2002\t9011002\t11\t9\t581\t6045\t4721\t-1324\t-61169.99\n2001\t2002\t9012004\t12\t9\t233\t5370\t4050\t-1320\t-135207.43\n2001\t2002\t9008008\t8\t9\t84\t5431\t4114\t-1317\t-48625.72\n2001\t2002\t9010010\t7\t9\t962\t6123\t4819\t-1304\t-34943.08\n2001\t2002\t9009008\t9\t9\t135\t5984\t4698\t-1286\t-53184.41\n2001\t2002\t9014008\t14\t9\t52\t5835\t4555\t-1280\t-47886.49\n2001\t2002\t9005008\t5\t9\t326\t5773\t4494\t-1279\t-47219.15\n2001\t2002\t9011002\t11\t9\t806\t5487\t4208\t-1279\t-34285.63\n2001\t2002\t1003001\t3\t9\t57\t5893\t4631\t-1262\t-24156.11\n2001\t2002\t5003001\t3\t9\t456\t5617\t4359\t-1258\t-46563.26\n2001\t2002\t9015008\t15\t9\t829\t4998\t3781\t-1217\t15209.89\n2001\t2002\t7007001\t7\t9\t211\t6277\t5071\t-1206\t-74262.01\n2001\t2002\t9011008\t11\t9\t541\t5654\t4449\t-1205\t-50137.27\n2001\t2002\t6010001\t4\t9\t91\t6373\t5169\t-1204\t-40582.11\n2001\t2002\t4003001\t3\t9\t175\t6106\t4907\t-1199\t-84819.82\n2001\t2002\t9009008\t9\t9\t83\t5880\t4689\t-1191\t-54463.08\n2001\t2002\t9014010\t14\t9\t201\t6106\t4919\t-1187\t-5741.72\n2001\t2002\t9012010\t16\t9\t173\t6576\t5390\t-1186\t-57504.67\n2001\t2002\t9014002\t14\t9\t966\t6066\t4915\t-1151\t-31274.42\n2001\t2002\t9003004\t2\t9\t29\t5232\t4084\t-1148\t-85408.38\n2001\t2002\t9008002\t8\t9\t586\t6212\t5079\t-1133\t-45166.64\n2001\t2002\t9004010\t4\t9\t87\t5551\t4428\t-1123\t-104064.28\n2001\t2002\t9013002\t13\t9\t18\t5087\t3977\t-1110\t34375.04\n2001\t2002\t9001008\t1\t9\t65\t5429\t4333\t-1096\t-112773.38\n2001\t2002\t800700
3\t4\t9\t290\t6148\t5075\t-1073\t-48541.94\n2001\t2002\t9004002\t4\t9\t506\t5503\t4432\t-1071\t-47389.52\n2001\t2002\t9001009\t1\t9\t73\t5116\t4053\t-1063\t-12805.29\n2001\t2002\t9003008\t3\t9\t156\t5719\t4665\t-1054\t-27241.44\n2001\t2002\t9011008\t11\t9\t324\t5293\t4253\t-1040\t-78455.59\n2001\t2002\t9011008\t11\t9\t525\t5269\t4243\t-1026\t-8416.01\n2001\t2002\t9009004\t9\t9\t599\t5642\t4622\t-1020\t-7244.59\n2001\t2002\t9004010\t4\t9\t146\t5387\t4373\t-1014\t-96437.31\n2001\t2002\t9002010\t2\t9\t64\t4911\t3898\t-1013\t-70357.40\n2001\t2002\t2003001\t3\t9\t4\t6089\t5078\t-1011\t-14326.99\n2001\t2002\t9008002\t8\t9\t433\t5707\t4697\t-1010\t-59856.93\n2001\t2002\t9015002\t15\t9\t120\t5419\t4409\t-1010\t-25557.35\n2001\t2002\t9008010\t3\t9\t64\t6024\t5020\t-1004\t15466.29\n2001\t2002\t1003001\t5\t9\t139\t6059\t5060\t-999\t-23720.17\n2001\t2002\t9003002\t3\t9\t281\t5640\t4651\t-989\t-62224.16\n2001\t2002\t9013008\t13\t9\t660\t5400\t4411\t-989\t-6728.84\n2001\t2002\t6014001\t14\t9\t977\t5339\t4369\t-970\t-34999.44\n2001\t2002\t9013002\t13\t9\t30\t4967\t3997\t-970\t-74733.18\n2001\t2002\t8012009\t12\t9\t149\t5872\t4905\t-967\t-25425.45\n2001\t2002\t9004009\t4\t9\t618\t5522\t4576\t-946\t-59306.75\n2001\t2002\t7006009\t8\t9\t116\t5471\t4528\t-943\t-65664.87\n2001\t2002\t9014010\t15\t9\t151\t5633\t4701\t-932\t7429.42\n2001\t2002\t9003002\t3\t9\t877\t6303\t5379\t-924\t-45324.79\n2001\t2002\t9006008\t6\t9\t162\t5169\t4251\t-918\t-69642.81\n2001\t2002\t9008010\t8\t9\t503\t5526\t4608\t-918\t-49021.02\n2001\t2002\t9005002\t5\t9\t530\t4962\t4049\t-913\t-51406.91\n2001\t2002\t9010002\t10\t9\t236\t5107\t4195\t-912\t-68196.83\n2001\t2002\t9015004\t15\t9\t134\t4996\t4092\t-904\t-20839.46\n2001\t2002\t9009009\t6\t9\t181\t5283\t4380\t-903\t-50577.89\n2001\t2002\t9009010\t16\t9\t590\t4684\t3787\t-897\t-42032.78\n2001\t2002\t9006008\t6\t9\t221\t5238\t4347\t-891\t-110703.28\n2001\t2002\t10009017\t9\t9\t640\t5562\t4682\t-880\t-91021.88\n2001\t2002\t7013003\t13\t9\t313\t5161\t4284\t-877\t-55538.42\n2001\t2002\t3004001\t12\t9\t570\t5053\t4181\t-872\t-81469.82\n2001\t2002\t9009004\t9\t9\t587\t5131\t4262\t-869\t-31202.41\n2001\t2002\t9010008\t10\t9\t86\t5074\t4211\t-863\t-31897.54\n2001\t2002\t9006002\t6\t9\t230\t5616\t4756\t-860\t-39142.42\n2001\t2002\t9001008\t1\t9\t533\t6196\t5337\t-859\t-70174.46\n2001\t2002\t9004010\t4\t9\t102\t6146\t5291\t-855\t-66567.87\n2001\t2002\t9002004\t2\t9\t179\t5203\t4351\t-852\t-26624.64\n2001\t2002\t9016008\t16\t9\t69\t5922\t5072\t-850\t-31016.10\n2001\t2002\t9008010\t8\t9\t81\t5520\t4672\t-848\t-44217.60\n2001\t2002\t9001002\t1\t9\t927\t5858\t5013\t-845\t8179.45\n2001\t2002\t1001001\t1\t9\t116\t5773\t4930\t-843\t700.02\n2001\t2002\t9012008\t12\t9\t274\t6154\t5316\t-838\t-85490.33\n2001\t2002\t6008007\t8\t9\t22\t5439\t4605\t-834\t-25617.83\n2001\t2002\t9016008\t16\t9\t601\t6463\t5633\t-830\t22715.94\n2001\t2002\t9001008\t1\t9\t170\t5970\t5142\t-828\t-2265.18\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q76.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,col_name:int,d_year:int,d_qoy:int,i_category:string,sales_cnt:bigint,sales_amt:decimal(17,2)>\n-- !query output\ncatalog\tNULL\t1998\t1\tBooks                                             \t8\t9808.66\ncatalog\tNULL\t1998\t1\tChildren                                          \t11\t14843.15\ncatalog\tNULL\t1998\t1\tElectronics                                       \t12\t34659.85\ncatalog\tNULL\t1998\t1\tHome                                              \t4\t1149.06\ncatalog\tNULL\t1998\t1\tJewelry                                           \t7\t16037.86\ncatalog\tNULL\t1998\t1\tMen                                               \t17\t30340.73\ncatalog\tNULL\t1998\t1\tMusic                                             \t14\t20075.63\ncatalog\tNULL\t1998\t1\tShoes                                             \t10\t15432.10\ncatalog\tNULL\t1998\t1\tSports                                            \t4\t2660.73\ncatalog\tNULL\t1998\t1\tWomen                                             \t14\t32287.07\ncatalog\tNULL\t1998\t2\tBooks                                             \t8\t27523.09\ncatalog\tNULL\t1998\t2\tChildren                                          \t11\t33106.16\ncatalog\tNULL\t1998\t2\tElectronics                                       \t16\t24105.87\ncatalog\tNULL\t1998\t2\tHome                                              \t14\t14282.44\ncatalog\tNULL\t1998\t2\tJewelry                                           \t10\t6120.92\ncatalog\tNULL\t1998\t2\tMen                                               \t13\t25776.26\ncatalog\tNULL\t1998\t2\tMusic                                             \t11\t11113.26\ncatalog\tNULL\t1998\t2\tShoes                                             \t10\t5887.74\ncatalog\tNULL\t1998\t2\tSports                                            \t9\t15135.88\ncatalog\tNULL\t1998\t2\tWomen                                             \t7\t2373.08\ncatalog\tNULL\t1998\t3\tBooks                                             \t13\t5452.76\ncatalog\tNULL\t1998\t3\tChildren                                          \t18\t32298.89\ncatalog\tNULL\t1998\t3\tElectronics                                       \t21\t49769.09\ncatalog\tNULL\t1998\t3\tHome                                              \t17\t14007.10\ncatalog\tNULL\t1998\t3\tJewelry                                           \t23\t25680.05\ncatalog\tNULL\t1998\t3\tMen                                               \t20\t45022.03\ncatalog\tNULL\t1998\t3\tMusic                                             \t14\t20882.65\ncatalog\tNULL\t1998\t3\tShoes                                             \t19\t11602.79\ncatalog\tNULL\t1998\t3\tSports                                            \t18\t17526.27\ncatalog\tNULL\t1998\t3\tWomen                                             \t17\t20129.34\ncatalog\tNULL\t1998\t4\tNULL\t2\t607.00\ncatalog\tNULL\t1998\t4\tBooks                                             \t33\t57441.77\ncatalog\tNULL\t1998\t4\tChildren                                          \t35\t53915.78\ncatalog\tNULL\t1998\t4\tElectronics                                       \t39\t40769.87\ncatalog\tNULL\t1998\t4\tHome                                              \t28\t40240.88\ncatalog\tNULL\t1998\t4\tJewelry                                           \t38\t58972.54\ncatalog\tNULL\t1998\t4\tMen                                               \t30\t39765.00\ncatalog\tNULL\t1998\t4\tMusic                                    
         \t32\t70931.52\ncatalog\tNULL\t1998\t4\tShoes                                             \t33\t65217.73\ncatalog\tNULL\t1998\t4\tSports                                            \t31\t45867.98\ncatalog\tNULL\t1998\t4\tWomen                                             \t32\t33059.29\ncatalog\tNULL\t1999\t1\tBooks                                             \t11\t13083.10\ncatalog\tNULL\t1999\t1\tChildren                                          \t10\t6952.44\ncatalog\tNULL\t1999\t1\tElectronics                                       \t10\t3112.96\ncatalog\tNULL\t1999\t1\tHome                                              \t9\t4768.55\ncatalog\tNULL\t1999\t1\tJewelry                                           \t6\t7143.30\ncatalog\tNULL\t1999\t1\tMen                                               \t10\t21187.82\ncatalog\tNULL\t1999\t1\tMusic                                             \t6\t265.20\ncatalog\tNULL\t1999\t1\tShoes                                             \t19\t24625.11\ncatalog\tNULL\t1999\t1\tSports                                            \t12\t15481.90\ncatalog\tNULL\t1999\t1\tWomen                                             \t15\t25291.30\ncatalog\tNULL\t1999\t2\tBooks                                             \t17\t14284.73\ncatalog\tNULL\t1999\t2\tChildren                                          \t8\t4453.70\ncatalog\tNULL\t1999\t2\tElectronics                                       \t14\t7808.53\ncatalog\tNULL\t1999\t2\tHome                                              \t7\t7418.42\ncatalog\tNULL\t1999\t2\tJewelry                                           \t13\t11931.85\ncatalog\tNULL\t1999\t2\tMen                                               \t8\t1730.92\ncatalog\tNULL\t1999\t2\tMusic                                             \t11\t14585.34\ncatalog\tNULL\t1999\t2\tShoes                                             \t10\t16267.47\ncatalog\tNULL\t1999\t2\tSports                                            \t7\t1058.62\ncatalog\tNULL\t1999\t2\tWomen                                             \t17\t17451.00\ncatalog\tNULL\t1999\t3\tBooks                                             \t21\t34033.74\ncatalog\tNULL\t1999\t3\tChildren                                          \t15\t28755.03\ncatalog\tNULL\t1999\t3\tElectronics                                       \t24\t41518.93\ncatalog\tNULL\t1999\t3\tHome                                              \t20\t39919.72\ncatalog\tNULL\t1999\t3\tJewelry                                           \t22\t15372.42\ncatalog\tNULL\t1999\t3\tMen                                               \t26\t49692.31\ncatalog\tNULL\t1999\t3\tMusic                                             \t23\t6840.77\ncatalog\tNULL\t1999\t3\tShoes                                             \t19\t21542.78\ncatalog\tNULL\t1999\t3\tSports                                            \t17\t15957.19\ncatalog\tNULL\t1999\t3\tWomen                                             \t27\t30416.44\ncatalog\tNULL\t1999\t4\tNULL\t1\t9077.75\ncatalog\tNULL\t1999\t4\tBooks                                             \t36\t60721.76\ncatalog\tNULL\t1999\t4\tChildren                                          \t22\t21641.02\ncatalog\tNULL\t1999\t4\tElectronics                                       \t37\t30157.36\ncatalog\tNULL\t1999\t4\tHome                                              \t34\t42467.56\ncatalog\tNULL\t1999\t4\tJewelry                                           \t35\t38566.86\ncatalog\tNULL\t1999\t4\tMen                                               
\t26\t28008.47\ncatalog\tNULL\t1999\t4\tMusic                                             \t26\t31237.65\ncatalog\tNULL\t1999\t4\tShoes                                             \t27\t45175.99\ncatalog\tNULL\t1999\t4\tSports                                            \t39\t68801.24\ncatalog\tNULL\t1999\t4\tWomen                                             \t38\t55600.11\ncatalog\tNULL\t2000\t1\tBooks                                             \t12\t14558.48\ncatalog\tNULL\t2000\t1\tChildren                                          \t11\t10218.22\ncatalog\tNULL\t2000\t1\tElectronics                                       \t6\t13621.14\ncatalog\tNULL\t2000\t1\tHome                                              \t11\t13460.77\ncatalog\tNULL\t2000\t1\tJewelry                                           \t10\t8071.78\ncatalog\tNULL\t2000\t1\tMen                                               \t10\t14579.76\ncatalog\tNULL\t2000\t1\tMusic                                             \t9\t23840.61\ncatalog\tNULL\t2000\t1\tShoes                                             \t15\t38230.31\ncatalog\tNULL\t2000\t1\tSports                                            \t13\t5259.09\ncatalog\tNULL\t2000\t1\tWomen                                             \t10\t6568.10\ncatalog\tNULL\t2000\t2\tNULL\t1\tNULL\ncatalog\tNULL\t2000\t2\tBooks                                             \t15\t27517.21\ncatalog\tNULL\t2000\t2\tChildren                                          \t9\t4430.92\ncatalog\tNULL\t2000\t2\tElectronics                                       \t9\t7435.41\ncatalog\tNULL\t2000\t2\tHome                                              \t10\t12542.28\ncatalog\tNULL\t2000\t2\tJewelry                                           \t8\t4325.38\ncatalog\tNULL\t2000\t2\tMen                                               \t12\t5896.08\ncatalog\tNULL\t2000\t2\tMusic                                             \t11\t2962.75\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q77.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,id:int,sales:decimal(27,2),returns:decimal(27,2),profit:decimal(28,2)>\n-- !query output\nNULL\tNULL\t238379361.39\t11949589.80\t-69066318.65\ncatalog channel\tNULL\t81893158.01\t7956829.96\t-13266843.17\ncatalog channel\tNULL\t116209.49\t1989207.49\t-1103184.43\ncatalog channel\t1\t26819348.55\t1989207.49\t-4169636.96\ncatalog channel\t2\t27454600.50\t1989207.49\t-3825432.73\ncatalog channel\t5\t27502999.47\t1989207.49\t-4168589.05\nstore channel\tNULL\t114945147.06\t2805428.32\t-51038302.63\nstore channel\t1\t19743223.74\t437906.57\t-8831106.92\nstore channel\t2\t18272722.14\t522196.16\t-8183951.59\nstore channel\t4\t19720603.73\t449683.19\t-8686183.94\nstore channel\t7\t19275817.79\t456008.74\t-8633897.32\nstore channel\t8\t19342554.44\t467014.66\t-8767463.34\nstore channel\t10\t18590225.22\t472619.00\t-7935699.52\nweb channel\tNULL\t41541056.32\t1187331.52\t-4761172.85\nweb channel\t1\t1228578.03\t47675.10\t-188274.21\nweb channel\t2\t1343477.51\t44041.96\t-110450.27\nweb channel\t5\t1355045.09\t11417.70\t-99307.08\nweb channel\t7\t1439105.65\t44505.99\t-131740.09\nweb channel\t8\t1222672.59\t64741.73\t-137947.41\nweb channel\t11\t1380233.43\t12631.13\t-89907.09\nweb channel\t13\t1380503.10\t66341.09\t-190038.15\nweb channel\t14\t1322143.80\t66841.97\t-172945.60\nweb channel\t17\t1640923.76\t6485.63\t-15639.73\nweb channel\t19\t1258408.09\t43554.94\t-95271.34\nweb channel\t20\t1413076.47\t25716.55\t-176644.94\nweb channel\t23\t1522205.18\t26405.14\t-102673.55\nweb channel\t25\t1435872.54\t58108.85\t-141661.85\nweb channel\t26\t1364954.66\t41289.03\t-226994.92\nweb channel\t29\t1456398.83\t23148.07\t-127320.23\nweb channel\t31\t1331158.19\t58914.20\t-259972.34\nweb channel\t32\t1430016.29\t68634.13\t-216355.06\nweb channel\t35\t1275017.69\t9253.48\t-86568.79\nweb channel\t37\t1527487.12\t34512.52\t-145699.04\nweb channel\t38\t1459100.39\t77580.83\t-145709.91\nweb channel\t41\t1542741.42\t22802.83\t-126554.81\nweb channel\t43\t1556206.93\t43143.17\t-170251.07\nweb channel\t44\t1197577.68\t47675.73\t-232746.02\nweb channel\t47\t1311689.48\t14426.63\t-151021.48\nweb channel\t49\t1487129.22\t38844.09\t-129263.73\nweb channel\t50\t1404935.78\t52795.27\t-203592.12\nweb channel\t53\t1308281.20\t25132.82\t-234014.24\nweb channel\t55\t1348261.69\t34116.67\t-208865.51\nweb channel\t56\t1223987.10\t55188.28\t-252201.97\nweb channel\t59\t1373867.41\t21405.99\t-191540.30\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q78.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<ratio:double,store_qty:bigint,store_wholesale_cost:decimal(17,2),store_sales_price:decimal(17,2),other_chan_qty:bigint,other_chan_wholesale_cost:decimal(18,2),other_chan_sales_price:decimal(18,2)>\n-- !query output\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q79.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_last_name:string,c_first_name:string,substr(s_city, 1, 30):string,ss_ticket_number:int,amt:decimal(17,2),profit:decimal(17,2)>\n-- !query output\nNULL\tNULL\tFairview\t198070\t3414.76\t-28275.11\nNULL\tNULL\tFairview\t178356\t4636.16\t-24754.68\nNULL\tNULL\tFairview\t4561\t38.97\t-17858.20\nNULL\tNULL\tFairview\t227170\t4972.68\t-17709.46\nNULL\tNULL\tFairview\t78433\t689.97\t-16192.40\nNULL\tNULL\tFairview\t150646\t3348.61\t-15819.67\nNULL\tNULL\tFairview\t15261\t2836.87\t-15688.87\nNULL\tNULL\tFairview\t226126\t651.62\t-14052.62\nNULL\tNULL\tFairview\t36397\t2615.22\t-13738.93\nNULL\tNULL\tFairview\t40\t532.59\t-12157.64\nNULL\tNULL\tFairview\t237656\t9409.20\t-12036.60\nNULL\tNULL\tFairview\t178664\t0.00\t-11859.96\nNULL\tNULL\tFairview\t60434\t7214.21\t-11468.85\nNULL\tNULL\tFairview\t23163\t1270.90\t-11127.55\nNULL\tNULL\tFairview\t36041\t7796.51\t-11067.35\nNULL\tNULL\tFairview\t35679\t5305.03\t-10515.13\nNULL\tNULL\tFairview\t233081\t361.69\t-9831.11\nNULL\tNULL\tFairview\t65032\t3442.33\t-9620.20\nNULL\tNULL\tFairview\t162505\t7179.34\t-8572.34\nNULL\tNULL\tFairview\t142857\t8377.25\t-8392.77\nNULL\tNULL\tFairview\t193575\t1828.27\t-6805.77\nNULL\tNULL\tFairview\t49934\t228.47\t-5430.28\nNULL\tNULL\tFairview\t67999\t621.15\t-5031.30\nNULL\tNULL\tFairview\t223653\t3570.27\t-4327.27\nNULL\tNULL\tFairview\t6384\t0.00\t-3987.67\nNULL\tNULL\tFairview\t168833\t1894.07\t-3677.23\nNULL\tNULL\tFairview\t93344\t0.00\t-1599.51\nNULL\tNULL\tFairview\t53781\t16.91\t764.24\nNULL\tNULL\tFairview\t175529\t1252.60\t1017.06\nNULL\tNULL\tFairview\t197495\t373.09\t2365.53\nNULL\tNULL\tFairview\t4047\t3121.19\t2715.67\nNULL\tNULL\tMidway\t48072\t11113.87\t-29510.66\nNULL\tNULL\tMidway\t11394\t5172.08\t-25597.50\nNULL\tNULL\tMidway\t207040\t2008.73\t-25394.40\nNULL\tNULL\tMidway\t222765\t7629.39\t-25247.34\nNULL\tNULL\tMidway\t217712\t4349.03\t-24697.82\nNULL\tNULL\tMidway\t45131\t5788.09\t-24158.43\nNULL\tNULL\tMidway\t164632\t1394.75\t-23255.21\nNULL\tNULL\tMidway\t127656\t5021.65\t-22989.97\nNULL\tNULL\tMidway\t83311\t7875.67\t-22914.32\nNULL\tNULL\tMidway\t56977\t3159.77\t-20447.95\nNULL\tNULL\tMidway\t215293\t0.00\t-19925.34\nNULL\tNULL\tMidway\t140002\t905.08\t-18610.09\nNULL\tNULL\tMidway\t173634\t9179.91\t-17739.01\nNULL\tNULL\tMidway\t23863\t152.72\t-17463.88\nNULL\tNULL\tMidway\t160686\t5769.52\t-17219.86\nNULL\tNULL\tMidway\t82279\t1835.82\t-17174.21\nNULL\tNULL\tMidway\t112924\t10990.60\t-16901.50\nNULL\tNULL\tMidway\t166267\t653.18\t-16156.26\nNULL\tNULL\tMidway\t213647\t1924.30\t-16123.38\nNULL\tNULL\tMidway\t204741\t2259.86\t-15336.56\nNULL\tNULL\tMidway\t122738\t12543.65\t-15281.12\nNULL\tNULL\tMidway\t173203\t4314.45\t-14126.67\nNULL\tNULL\tMidway\t147660\t3460.04\t-14088.14\nNULL\tNULL\tMidway\t204175\t1442.30\t-13954.82\nNULL\tNULL\tMidway\t124093\t3122.91\t-13873.66\nNULL\tNULL\tMidway\t230300\t1956.43\t-13769.94\nNULL\tNULL\tMidway\t217344\t0.00\t-13534.82\nNULL\tNULL\tMidway\t91656\t2427.25\t-13507.46\nNULL\tNULL\tMidway\t148611\t5844.43\t-13212.15\nNULL\tNULL\tMidway\t234650\t2836.85\t-12426.80\nNULL\tNULL\tMidway\t170455\t3175.00\t-11962.07\nNULL\tNULL\tMidway\t26663\t264.59\t-11479.46\nNULL\tNULL\tMidway\t75387\t5918.43\t-11406.83\nNULL\tNULL\tMidway\t77944\t788.27\t-11329.12\nNULL\tNULL\tMidway\t135318\t805.73\t-11284.26\nNULL\tNULL\tMidway\t2896\t12623.23\t-11061.86\nNULL\tNULL\tMidway\t158461\t29.32\t-10931.38\nNULL\tNULL\tMidway\t1333\t1390.53\t-10865.55\nNULL\tNULL\tMidway\t11686
6\t3621.69\t-10822.24\nNULL\tNULL\tMidway\t194993\t3401.82\t-10654.47\nNULL\tNULL\tMidway\t187290\t6137.11\t-10592.41\nNULL\tNULL\tMidway\t151959\t8548.17\t-10204.65\nNULL\tNULL\tMidway\t149775\t15.18\t-10113.87\nNULL\tNULL\tMidway\t157401\t1094.24\t-10070.65\nNULL\tNULL\tMidway\t210606\t2098.02\t-9961.51\nNULL\tNULL\tMidway\t118130\t182.41\t-9901.81\nNULL\tNULL\tMidway\t7439\t2125.08\t-9595.87\nNULL\tNULL\tMidway\t19796\t1532.38\t-9553.82\nNULL\tNULL\tMidway\t222335\t0.00\t-9523.98\nNULL\tNULL\tMidway\t208688\t1211.17\t-9220.71\nNULL\tNULL\tMidway\t32396\t2191.49\t-9201.97\nNULL\tNULL\tMidway\t111809\t4648.48\t-9170.68\nNULL\tNULL\tMidway\t12549\t1306.17\t-8842.14\nNULL\tNULL\tMidway\t6762\t2278.78\t-8696.20\nNULL\tNULL\tMidway\t12892\t2172.51\t-8615.80\nNULL\tNULL\tMidway\t133874\t0.00\t-8601.78\nNULL\tNULL\tMidway\t86109\t161.95\t-8348.77\nNULL\tNULL\tMidway\t58015\t5671.25\t-8231.58\nNULL\tNULL\tMidway\t173045\t0.00\t-8004.30\nNULL\tNULL\tMidway\t221481\t315.96\t-7893.04\nNULL\tNULL\tMidway\t4682\t0.00\t-7769.63\nNULL\tNULL\tMidway\t155285\t0.00\t-7757.04\nNULL\tNULL\tMidway\t194931\t1315.23\t-7705.13\nNULL\tNULL\tMidway\t208424\t878.43\t-7702.00\nNULL\tNULL\tMidway\t199504\t2052.21\t-7662.62\nNULL\tNULL\tMidway\t73227\t2869.94\t-7484.05\nNULL\tNULL\tMidway\t12578\t3486.23\t-7447.58\nNULL\tNULL\tMidway\t160835\t3133.37\t-7258.48\nNULL\tNULL\tMidway\t164697\t6050.09\t-7208.84\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q8.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<s_store_name:string,sum(ss_net_profit):decimal(17,2)>\n-- !query output\nable\t-9960913.46\nbar\t-10200043.73\neing\t-11104757.88\nese\t-11009853.93\nought\t-10574796.20\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q80.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,id:string,sales:decimal(27,2),returns:decimal(32,2),profit:decimal(33,2)>\n-- !query output\nNULL\tNULL\t13553636.71\t630819.07\t-3876883.14\ncatalog channel\tNULL\t4818809.23\t250952.51\t-519182.04\ncatalog channel\tcatalog_pageAAAAAAAAAAABAAAA\t18407.14\t67.62\t4548.90\ncatalog channel\tcatalog_pageAAAAAAAAABABAAAA\t1837.52\t0.00\t-3388.65\ncatalog channel\tcatalog_pageAAAAAAAAACABAAAA\t18390.60\t0.00\t9578.02\ncatalog channel\tcatalog_pageAAAAAAAAADABAAAA\t1404.55\t27.58\t-3703.26\ncatalog channel\tcatalog_pageAAAAAAAAADCBAAAA\t2868.85\t0.00\t826.15\ncatalog channel\tcatalog_pageAAAAAAAAAEABAAAA\t5039.78\t0.00\t-3701.62\ncatalog channel\tcatalog_pageAAAAAAAAAECBAAAA\t3971.05\t0.00\t1757.41\ncatalog channel\tcatalog_pageAAAAAAAAAFABAAAA\t2316.00\t0.00\t-1002.65\ncatalog channel\tcatalog_pageAAAAAAAAAFCBAAAA\t7015.40\t0.00\t915.49\ncatalog channel\tcatalog_pageAAAAAAAAAGABAAAA\t23067.38\t0.00\t10478.19\ncatalog channel\tcatalog_pageAAAAAAAAAGCBAAAA\t27364.22\t0.00\t8607.28\ncatalog channel\tcatalog_pageAAAAAAAAAHABAAAA\t12794.77\t0.00\t-7812.73\ncatalog channel\tcatalog_pageAAAAAAAAAHCBAAAA\t2640.87\t63.18\t-2334.42\ncatalog channel\tcatalog_pageAAAAAAAAAICBAAAA\t1845.15\t0.00\t-4629.81\ncatalog channel\tcatalog_pageAAAAAAAAAJCBAAAA\t1555.72\t0.00\t-5039.18\ncatalog channel\tcatalog_pageAAAAAAAAAKPAAAAA\t42070.02\t0.00\t6567.19\ncatalog channel\tcatalog_pageAAAAAAAAALCBAAAA\t8772.47\t0.00\t479.78\ncatalog channel\tcatalog_pageAAAAAAAAALPAAAAA\t20380.13\t0.00\t2437.83\ncatalog channel\tcatalog_pageAAAAAAAAAMCBAAAA\t15549.59\t5438.72\t-7150.88\ncatalog channel\tcatalog_pageAAAAAAAAAMPAAAAA\t11607.07\t0.00\t-7275.33\ncatalog channel\tcatalog_pageAAAAAAAAANCBAAAA\t21588.37\t0.00\t4221.36\ncatalog channel\tcatalog_pageAAAAAAAAANPAAAAA\t21638.94\t0.00\t4905.57\ncatalog channel\tcatalog_pageAAAAAAAAAOCBAAAA\t521.86\t0.00\t160.52\ncatalog channel\tcatalog_pageAAAAAAAAAOPAAAAA\t38265.71\t0.00\t-15751.13\ncatalog channel\tcatalog_pageAAAAAAAAAPCBAAAA\t4872.72\t0.00\t2322.60\ncatalog channel\tcatalog_pageAAAAAAAAAPPAAAAA\t16581.43\t5140.00\t-3778.70\ncatalog channel\tcatalog_pageAAAAAAAABAABAAAA\t6642.83\t0.00\t768.98\ncatalog channel\tcatalog_pageAAAAAAAABBABAAAA\t280.96\t0.00\t-50.56\ncatalog channel\tcatalog_pageAAAAAAAABCABAAAA\t1048.56\t0.00\t-600.26\ncatalog channel\tcatalog_pageAAAAAAAABDABAAAA\t17545.67\t0.00\t-9838.60\ncatalog channel\tcatalog_pageAAAAAAAABDCBAAAA\t9090.78\t0.00\t-572.50\ncatalog channel\tcatalog_pageAAAAAAAABEABAAAA\t17525.88\t232.32\t-2410.45\ncatalog channel\tcatalog_pageAAAAAAAABECBAAAA\t18693.14\t0.00\t-3538.78\ncatalog channel\tcatalog_pageAAAAAAAABFABAAAA\t17175.44\t61.36\t2821.53\ncatalog channel\tcatalog_pageAAAAAAAABFCBAAAA\t140.82\t0.00\t-569.12\ncatalog channel\tcatalog_pageAAAAAAAABGABAAAA\t384.55\t0.00\t-619.85\ncatalog channel\tcatalog_pageAAAAAAAABGCBAAAA\t3166.40\t1590.00\t-9892.69\ncatalog channel\tcatalog_pageAAAAAAAABHABAAAA\t14266.65\t0.00\t-4959.21\ncatalog channel\tcatalog_pageAAAAAAAABHCBAAAA\t1741.48\t0.00\t403.78\ncatalog channel\tcatalog_pageAAAAAAAABICBAAAA\t1413.03\t0.00\t-789.20\ncatalog channel\tcatalog_pageAAAAAAAABJCBAAAA\t17473.29\t18.29\t2514.85\ncatalog channel\tcatalog_pageAAAAAAAABKCBAAAA\t415.15\t0.00\t-1.90\ncatalog channel\tcatalog_pageAAAAAAAABKPAAAAA\t2724.27\t0.00\t-8394.13\ncatalog channel\tcatalog_pageAAAAAAAABLCBAAAA\t15422.53\t0.00\t7523.39\ncatalog 
channel\tcatalog_pageAAAAAAAABLPAAAAA\t20440.65\t0.00\t-6095.43\ncatalog channel\tcatalog_pageAAAAAAAABMCBAAAA\t2324.92\t0.00\t1256.32\ncatalog channel\tcatalog_pageAAAAAAAABMPAAAAA\t2322.14\t0.00\t-4465.41\ncatalog channel\tcatalog_pageAAAAAAAABNCBAAAA\t10606.89\t0.00\t-4039.08\ncatalog channel\tcatalog_pageAAAAAAAABNPAAAAA\t5671.18\t0.00\t-6038.20\ncatalog channel\tcatalog_pageAAAAAAAABOCBAAAA\t7494.52\t0.00\t2578.36\ncatalog channel\tcatalog_pageAAAAAAAABOPAAAAA\t19163.74\t0.00\t2677.78\ncatalog channel\tcatalog_pageAAAAAAAABPCBAAAA\t48.42\t0.00\t-92.80\ncatalog channel\tcatalog_pageAAAAAAAABPPAAAAA\t65484.56\t8188.50\t16714.73\ncatalog channel\tcatalog_pageAAAAAAAACAABAAAA\t34936.59\t0.00\t6574.55\ncatalog channel\tcatalog_pageAAAAAAAACBABAAAA\t8178.17\t0.00\t1370.92\ncatalog channel\tcatalog_pageAAAAAAAACCABAAAA\t24428.99\t0.00\t7616.93\ncatalog channel\tcatalog_pageAAAAAAAACDABAAAA\t17628.32\t1503.67\t-2086.58\ncatalog channel\tcatalog_pageAAAAAAAACDCBAAAA\t222.09\t0.00\t108.57\ncatalog channel\tcatalog_pageAAAAAAAACEABAAAA\t9898.50\t491.66\t-4581.91\ncatalog channel\tcatalog_pageAAAAAAAACECBAAAA\t11493.96\t0.00\t-7027.70\ncatalog channel\tcatalog_pageAAAAAAAACFABAAAA\t25027.83\t11283.47\t4064.93\ncatalog channel\tcatalog_pageAAAAAAAACFCBAAAA\t618.24\t0.00\t-1316.28\ncatalog channel\tcatalog_pageAAAAAAAACGABAAAA\t1830.05\t0.00\t-3073.99\ncatalog channel\tcatalog_pageAAAAAAAACHABAAAA\t1121.15\t275.52\t-5850.52\ncatalog channel\tcatalog_pageAAAAAAAACHCBAAAA\t13108.45\t0.00\t-786.42\ncatalog channel\tcatalog_pageAAAAAAAACICBAAAA\t1755.49\t0.00\t146.98\ncatalog channel\tcatalog_pageAAAAAAAACJCBAAAA\t2982.05\t0.00\t-196.21\ncatalog channel\tcatalog_pageAAAAAAAACKCBAAAA\t13658.88\t0.00\t5924.52\ncatalog channel\tcatalog_pageAAAAAAAACKPAAAAA\t26319.77\t162.81\t-8423.49\ncatalog channel\tcatalog_pageAAAAAAAACLCBAAAA\t6469.97\t0.00\t-5067.48\ncatalog channel\tcatalog_pageAAAAAAAACLPAAAAA\t43938.15\t7021.98\t-4638.29\ncatalog channel\tcatalog_pageAAAAAAAACMCBAAAA\t9332.13\t0.00\t-5421.54\ncatalog channel\tcatalog_pageAAAAAAAACMPAAAAA\t13306.48\t0.00\t-6449.30\ncatalog channel\tcatalog_pageAAAAAAAACNPAAAAA\t18488.66\t0.00\t186.37\ncatalog channel\tcatalog_pageAAAAAAAACOCBAAAA\t1220.46\t451.26\t-6770.17\ncatalog channel\tcatalog_pageAAAAAAAACOPAAAAA\t27469.76\t1091.48\t7443.78\ncatalog channel\tcatalog_pageAAAAAAAACPCBAAAA\t117.48\t0.00\t30.19\ncatalog channel\tcatalog_pageAAAAAAAACPPAAAAA\t19041.61\t172.22\t380.68\ncatalog channel\tcatalog_pageAAAAAAAADAABAAAA\t12391.42\t953.40\t-7032.35\ncatalog channel\tcatalog_pageAAAAAAAADBABAAAA\t1495.00\t0.00\t-9772.65\ncatalog channel\tcatalog_pageAAAAAAAADCABAAAA\t0.00\t0.00\t-3388.10\ncatalog channel\tcatalog_pageAAAAAAAADDABAAAA\t15917.59\t0.00\t4459.43\ncatalog channel\tcatalog_pageAAAAAAAADDCBAAAA\t4409.76\t733.12\t912.46\ncatalog channel\tcatalog_pageAAAAAAAADEABAAAA\t14545.59\t10419.31\t-2030.01\ncatalog channel\tcatalog_pageAAAAAAAADFABAAAA\t5222.68\t0.00\t-3306.65\ncatalog channel\tcatalog_pageAAAAAAAADFCBAAAA\t2442.96\t0.00\t-758.68\ncatalog channel\tcatalog_pageAAAAAAAADGABAAAA\t14604.59\t0.00\t-3980.39\ncatalog channel\tcatalog_pageAAAAAAAADHABAAAA\t19078.94\t12277.75\t6448.20\ncatalog channel\tcatalog_pageAAAAAAAADHCBAAAA\t12207.03\t0.00\t1895.71\ncatalog channel\tcatalog_pageAAAAAAAADICBAAAA\t12548.28\t0.00\t407.59\ncatalog channel\tcatalog_pageAAAAAAAADJCBAAAA\t3526.74\t0.00\t2014.47\ncatalog channel\tcatalog_pageAAAAAAAADKCBAAAA\t3071.53\t0.00\t-5979.53\ncatalog channel\tcatalog_pageAAAAAAAADKPAAAAA\t52608.41\t1959.75\t20557.42\ncatalog 
channel\tcatalog_pageAAAAAAAADLCBAAAA\t386.12\t0.00\t-1094.08\ncatalog channel\tcatalog_pageAAAAAAAADLPAAAAA\t22380.47\t230.88\t611.66\ncatalog channel\tcatalog_pageAAAAAAAADMCBAAAA\t4943.75\t0.00\t2158.10\ncatalog channel\tcatalog_pageAAAAAAAADMPAAAAA\t17113.94\t0.00\t4231.57\ncatalog channel\tcatalog_pageAAAAAAAADNCBAAAA\t982.53\t0.00\t-4098.99\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q81.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_customer_id:string,c_salutation:string,c_first_name:string,c_last_name:string,ca_street_number:string,ca_street_name:string,ca_street_type:string,ca_suite_number:string,ca_city:string,ca_county:string,ca_state:string,ca_zip:string,ca_country:string,ca_gmt_offset:decimal(5,2),ca_location_type:string,ctr_total_return:decimal(17,2)>\n-- !query output\nAAAAAAAAAAGOAAAA\tMrs.      \tMelissa             \tBarton                        \t761       \tLincoln \tDrive          \tSuite 440 \tMarion\tCobb County\tGA\t30399     \tUnited States\t-5.00\tcondo               \t2053.35\nAAAAAAAAACCMAAAA\tMr.       \tCraig               \tThompson                      \t607       \tLakeview Laurel\tLane           \tSuite 140 \tHillcrest\tFloyd County\tGA\t33003     \tUnited States\t-5.00\tapartment           \t2314.62\nAAAAAAAAADBBAAAA\tDr.       \tRobert              \tWillis                        \t48        \tNinth West\tLane           \tSuite 370 \tOak Grove\tCook County\tGA\t38370     \tUnited States\t-5.00\tsingle family       \t2232.39\nAAAAAAAAAELEAAAA\tMiss      \tNULL\tWerner                        \t800       \t11th \tAvenue         \tSuite M   \tHarmony\tEchols County\tGA\t35804     \tUnited States\t-5.00\tcondo               \t2312.04\nAAAAAAAAAFIHBAAA\tSir       \tChadwick            \tStevens                       \t679       \tForest Central\tAvenue         \tSuite 60  \tSulphur Springs\tGordon County\tGA\t38354     \tUnited States\t-5.00\tcondo               \t2282.39\nAAAAAAAAAHDGAAAA\tMr.       \tJustin              \tGarcia                        \t281       \tSpring \tWay            \tSuite 320 \tShiloh\tFannin County\tGA\t39275     \tUnited States\t-5.00\tsingle family       \t4185.32\nAAAAAAAAAICMAAAA\tSir       \tWillie              \tMaldonado                     \t492       \tPark \tStreet         \tSuite C   \tShady Grove\tChatham County\tGA\t32812     \tUnited States\t-5.00\tapartment           \t2428.92\nAAAAAAAAALPABAAA\tDr.       \tJohn                \tWang                          \t724       \tOak \tAve            \tSuite 70  \tMaple Grove\tNewton County\tGA\t38252     \tUnited States\t-5.00\tsingle family       \t1727.64\nAAAAAAAAAMJAAAAA\tSir       \tRoy                 \tMark                          \t92        \tCedar Sycamore\tWay            \tSuite 30  \tUnion Hill\tJenkins County\tGA\t37746     \tUnited States\t-5.00\tsingle family       \t1825.38\nAAAAAAAAAMKFBAAA\tDr.       \tLamar               \tJones                         \t194       \tMaple \tCir.           \tSuite 80  \tOakwood\tJefferson County\tGA\t30169     \tUnited States\t-5.00\tsingle family       \t3977.04\nAAAAAAAAAMLOAAAA\tMs.       \tBrooke              \tGarcia                        \t914       \tHillcrest Cedar\tDrive          \tSuite 210 \tStringtown\tJasper County\tGA\t30162     \tUnited States\t-5.00\tcondo               \t4133.35\nAAAAAAAAAMODAAAA\tMrs.      \tLori                \tCampos                        \t728       \tPine Walnut\tCir.           \tSuite M   \tWoodlawn\tWorth County\tGA\t34098     \tUnited States\t-5.00\tcondo               \t4182.98\nAAAAAAAAANJOAAAA\tMs.       \tAllen               \tRodriguez                     \t156       \tAsh Seventh\tWay            \tSuite 100 \tPleasant Valley\tSumter County\tGA\t32477     \tUnited States\t-5.00\tapartment           \t2710.46\nAAAAAAAAAODABAAA\tDr.       
\tOscar               \tMiller                        \t651       \tSmith West\tDr.            \tSuite E   \tAntioch\tEvans County\tGA\t38605     \tUnited States\t-5.00\tcondo               \t7234.29\nAAAAAAAAAODIAAAA\tMr.       \tJulio               \tGamble                        \t211       \tView \tCir.           \tSuite Y   \tSummit\tBulloch County\tGA\t30499     \tUnited States\t-5.00\tsingle family       \t3291.69\nAAAAAAAABBAGAAAA\tDr.       \tCharles             \tButts                         \tNULL\tPark Hickory\tRD             \tNULL\tNULL\tNULL\tGA\t38605     \tNULL\t-5.00\tNULL\t3291.71\nAAAAAAAABBCJAAAA\tSir       \tDoyle               \tShaffer                       \t401       \tJohnson Main\tBoulevard      \tSuite 150 \tShiloh\tWorth County\tGA\t39275     \tUnited States\t-5.00\tsingle family       \t4027.79\nAAAAAAAABBJKAAAA\tDr.       \tMildred             \tHogan                         \t174       \tBirch \tPkwy           \tSuite Q   \tEnterprise\tWebster County\tGA\t31757     \tUnited States\t-5.00\tcondo               \t4006.97\nAAAAAAAABCCEAAAA\tMrs.      \tLinda               \tWorth                         \t283       \t2nd \tPkwy           \tSuite 230 \tFairfield\tChattooga County\tGA\t36192     \tUnited States\t-5.00\tcondo               \t5047.24\nAAAAAAAABCEFBAAA\tDr.       \tWilliam             \tParks                         \t19        \tMeadow \tCircle         \tSuite P   \tProvidence\tGwinnett County\tGA\t36614     \tUnited States\t-5.00\tapartment           \t4262.30\nAAAAAAAABCFKAAAA\tNULL\tShawn               \tPeterson                      \t220       \tJohnson First\tCt.            \tSuite F   \tKingston\tRandolph County\tGA\t34975     \tUnited States\t-5.00\tcondo               \t2786.80\nAAAAAAAABCGEAAAA\tMiss      \tMargaret            \tWright                        \t314       \t5th Washington\tStreet         \tSuite S   \tForest Hills\tDougherty County\tGA\t39237     \tUnited States\t-5.00\tsingle family       \t2049.52\nAAAAAAAABDKOAAAA\tDr.       \tVernita             \tBennett                       \t763       \t13th \tCourt          \tSuite 60  \tMaple Hill\tRabun County\tGA\t38095     \tUnited States\t-5.00\tsingle family       \t18570.61\nAAAAAAAABEIKAAAA\tMiss      \tNena                \tAugust                        \t297       \tLee \tLane           \tSuite 240 \tCedar\tHenry County\tGA\t31229     \tUnited States\t-5.00\tcondo               \t2094.88\nAAAAAAAABEJLAAAA\tMr.       \tAbel                \tLucero                        \t382       \tAsh North\tParkway        \tSuite J   \tPlainview\tCatoosa County\tGA\t33683     \tUnited States\t-5.00\tsingle family       \t3530.52\nAAAAAAAABIDEAAAA\tDr.       \tMarcus              \tHolder                        \t724       \tOak \tAve            \tSuite 70  \tMaple Grove\tNewton County\tGA\t38252     \tUnited States\t-5.00\tsingle family       \t1927.32\nAAAAAAAABIGIAAAA\tMrs.      \tCaprice             \tEspinoza                      \t527       \tChurch Second\tCourt          \tSuite M   \tDerby\tTaylor County\tGA\t37702     \tUnited States\t-5.00\tcondo               \t6808.21\nAAAAAAAABJEEAAAA\tDr.       \tClarence            \tSwanson                       \tNULL\tNULL\tCt.            
\tSuite 110 \tNULL\tFannin County\tGA\tNULL\tUnited States\t-5.00\tcondo               \t1676.20\nAAAAAAAABJHJAAAA\tSir       \tJames               \tCulpepper                     \t16        \tPark \tWay            \tSuite 460 \tFreeman\tCharlton County\tGA\t32297     \tUnited States\t-5.00\tsingle family       \t2096.68\nAAAAAAAABKKAAAAA\tSir       \tJeffrey             \tYoung                         \t379       \t7th \tLane           \tSuite 90  \tPlainville\tBrantley County\tGA\t36115     \tUnited States\t-5.00\tapartment           \t2161.20\nAAAAAAAABLIBAAAA\tDr.       \tCharles             \tHawkins                       \t939       \tJackson Forest\tBoulevard      \tSuite Y   \tBunker Hill\tCandler County\tGA\t30150     \tUnited States\t-5.00\tcondo               \t4396.68\nAAAAAAAABLPJAAAA\tNULL\tNULL\tGibson                        \t458       \tSecond \tBoulevard      \tSuite 480 \tPlainview\tLiberty County\tGA\t33683     \tUnited States\t-5.00\tsingle family       \t1513.94\nAAAAAAAABLPNAAAA\tMr.       \tEric                \tDurham                        \t917       \tSecond Park\tWy             \tSuite 180 \tFriendship\tMarion County\tGA\t34536     \tUnited States\t-5.00\tcondo               \t2166.25\nAAAAAAAABMCOAAAA\tSir       \tWarren              \tSkinner                       \t934       \tWalnut Hickory\tCourt          \tSuite 310 \tRiverview\tLong County\tGA\t39003     \tUnited States\t-5.00\tcondo               \t5699.34\nAAAAAAAABNJBAAAA\tMiss      \tSanta               \tRichmond                      \t859       \t4th Maple\tST             \tSuite 60  \tAntioch\tToombs County\tGA\t38605     \tUnited States\t-5.00\tsingle family       \t2136.92\nAAAAAAAABPGJAAAA\tDr.       \tJune                \tHill                          \t480       \tSecond 15th\tST             \tSuite T   \tWoodlawn\tAtkinson County\tGA\t34098     \tUnited States\t-5.00\tcondo               \t3081.42\nAAAAAAAABPKOAAAA\tMr.       \tRobert              \tTanner                        \t676       \tSixth Walnut\tRoad           \tSuite L   \tCumberland\tOglethorpe County\tGA\t38971     \tUnited States\t-5.00\tcondo               \t3966.42\nAAAAAAAACAINAAAA\tDr.       \tDonald              \tBeebe                         \t960       \tRailroad Davis\tLn             \tSuite Y   \tSilver Springs\tClay County\tGA\t34843     \tUnited States\t-5.00\tapartment           \t4925.75\nAAAAAAAACBEPAAAA\tSir       \tOtis                \tHartman                       \t939       \tThird \tLane           \tSuite R   \tForest Hills\tPolk County\tGA\t39237     \tUnited States\t-5.00\tsingle family       \t4904.39\nAAAAAAAACCBGBAAA\tMs.       \tJudy                \tWilliams                      \t283       \tCenter \tCir.           \tSuite Q   \tFarmington\tDade County\tGA\t39145     \tUnited States\t-5.00\tsingle family       \t3655.41\nAAAAAAAACCOFAAAA\tSir       \tJohn                \tBower                         \t970       \tJohnson Franklin\tST             \tSuite 20  \tWalnut Grove\tUnion County\tGA\t37752     \tUnited States\t-5.00\tsingle family       \t5456.48\nAAAAAAAACDNAAAAA\tMiss      \tBeatriz             \tRome                          \t947       \tElm \tLn             \tSuite 210 \tWilson\tLanier County\tGA\t36971     \tUnited States\t-5.00\tapartment           \t3994.62\nAAAAAAAACEKEBAAA\tMr.       
\tGregory             \tMaynard                       \t214       \tPark Forest\tWy             \tSuite 90  \tLakewood\tJones County\tGA\t38877     \tUnited States\t-5.00\tapartment           \t3840.02\nAAAAAAAACGIBBAAA\tDr.       \tPamela              \tTaylor                        \t364       \tCherry \tPkwy           \tSuite 430 \tRolling Hills\tGlascock County\tGA\t37272     \tUnited States\t-5.00\tapartment           \t2025.02\nAAAAAAAACIDCBAAA\tDr.       \tAndre               \tWorth                         \t307       \t10th 6th\tParkway        \tSuite I   \tFive Forks\tHaralson County\tGA\t32293     \tUnited States\t-5.00\tsingle family       \t1947.99\nAAAAAAAACJLAAAAA\tSir       \tKevin               \tEllis                         \t291       \t7th Valley\tCircle         \tSuite H   \tHartland\tMarion County\tGA\t36594     \tUnited States\t-5.00\tapartment           \t5655.03\nAAAAAAAACKDGAAAA\tMs.       \tCatherine           \tAnderson                      \t875       \tWalnut Lake\tCt.            \tSuite 410 \tTaft\tTelfair County\tGA\t30589     \tUnited States\t-5.00\tcondo               \t1701.58\nAAAAAAAACKIOAAAA\tMr.       \tFrancisco           \tDavis                         \t394       \tSpruce \tDr.            \tSuite 250 \tSmith\tWarren County\tGA\t37317     \tUnited States\t-5.00\tcondo               \t4471.40\nAAAAAAAACNIEBAAA\tDr.       \tAndrea              \tLyon                          \t700       \tFifth Center\tCir.           \tSuite P   \tFranklin\tPutnam County\tGA\t39101     \tUnited States\t-5.00\tcondo               \t1725.43\nAAAAAAAACNOHAAAA\tMs.       \tJessica             \tPitman                        \t11        \tSpring East\tStreet         \tSuite 470 \tGlenwood\tColumbia County\tGA\t33511     \tUnited States\t-5.00\tcondo               \t3469.20\nAAAAAAAACOOKAAAA\tMiss      \tRebecca             \tMorris                        \tNULL\tPark Pine\tNULL\tNULL\tNULL\tNULL\tGA\tNULL\tUnited States\t-5.00\tNULL\t1834.52\nAAAAAAAADAFIBAAA\tDr.       \tTodd                \tAlexander                     \t371       \tEast \tPkwy           \tSuite Y   \tJackson\tLong County\tGA\t39583     \tUnited States\t-5.00\tsingle family       \t3081.99\nAAAAAAAADBAABAAA\tDr.       \tMartha              \tValenti                       \t605       \t13th Lake\tAve            \tSuite 100 \tMount Pleasant\tPickens County\tGA\t31933     \tUnited States\t-5.00\tsingle family       \t2721.38\nAAAAAAAADBNCAAAA\tMrs.      \tKatie               \tNovak                         \t352       \tJackson Hill\tStreet         \tSuite 360 \tLiberty\tJeff Davis County\tGA\t33451     \tUnited States\t-5.00\tapartment           \t2970.22\nAAAAAAAADCCHBAAA\tDr.       \tHarley              \tKohl                          \t116       \tCenter Park\tCir.           \tSuite 300 \tWhite Oak\tCook County\tGA\t36668     \tUnited States\t-5.00\tsingle family       \t5818.25\nAAAAAAAADCICAAAA\tMrs.      
\tBonnie              \tHarden                        \t375       \tRailroad 3rd\tRoad           \tSuite A   \tGreenwood\tGlynn County\tGA\t38828     \tUnited States\t-5.00\tsingle family       \t2770.41\nAAAAAAAADDJPAAAA\tSir       \tRick                \tLindstrom                     \t994       \tJefferson View\tRoad           \tSuite R   \tSpring Valley\tBarrow County\tGA\t36060     \tUnited States\t-5.00\tapartment           \t8135.54\nAAAAAAAADDKOAAAA\tSir       \tDerek               \tWatkins                       \t344       \tView Cedar\tBoulevard      \tSuite 280 \tFairfield\tGordon County\tGA\t36192     \tUnited States\t-5.00\tsingle family       \t1699.26\nAAAAAAAADDLMAAAA\tMr.       \tAlan                \tMeredith                      \t994       \tJefferson View\tRoad           \tSuite R   \tSpring Valley\tBarrow County\tGA\t36060     \tUnited States\t-5.00\tapartment           \t2784.48\nAAAAAAAADGDBBAAA\tSir       \tWalter              \tBarron                        \t728       \tRiver \tCourt          \tSuite 110 \tWalnut Grove\tPaulding County\tGA\t37752     \tUnited States\t-5.00\tcondo               \t4128.19\nAAAAAAAADGGCBAAA\tMs.       \tAnna                \tMaynard                       \t956       \tHill \tBlvd           \tSuite N   \tMacedonia\tEmanuel County\tGA\t31087     \tUnited States\t-5.00\tcondo               \t6425.51\nAAAAAAAADGPLAAAA\tMrs.      \tColleen             \tRogers                        \t633       \t6th \tDr.            \tSuite I   \tBridgeport\tHarris County\tGA\t35817     \tUnited States\t-5.00\tapartment           \t3184.68\nAAAAAAAADHBGAAAA\tMiss      \tNULL\tNULL\t862       \tLocust \tCircle         \tSuite 430 \tGreenfield\tLiberty County\tGA\t35038     \tUnited States\t-5.00\tsingle family       \t14758.14\nAAAAAAAADJICBAAA\tDr.       \tJulian              \tOlivares                      \t56        \t1st Washington\tParkway        \tSuite 190 \tFriendship\tJeff Davis County\tGA\t34536     \tUnited States\t-5.00\tsingle family       \t3078.43\nAAAAAAAADKOLAAAA\tDr.       \tMatthew             \tGoodman                       \t385       \tOak \tRD             \tSuite D   \tNew Hope\tJefferson County\tGA\t39431     \tUnited States\t-5.00\tapartment           \t4047.82\nAAAAAAAADLEHAAAA\tDr.       \tRobert              \tLilley                        \t926       \tDavis First\tRoad           \tSuite 360 \tSalem\tChattooga County\tGA\t38048     \tUnited States\t-5.00\tapartment           \t2256.31\nAAAAAAAADLEHAAAA\tDr.       \tRobert              \tLilley                        \t926       \tDavis First\tRoad           \tSuite 360 \tSalem\tChattooga County\tGA\t38048     \tUnited States\t-5.00\tapartment           \t2298.55\nAAAAAAAADMOOAAAA\tSir       \tEddie               \tCraig                         \t788       \tLake \tDr.            \tSuite F   \tBuena Vista\tMarion County\tGA\t35752     \tUnited States\t-5.00\tsingle family       \t5585.06\nAAAAAAAADNCKAAAA\tDr.       \tScott               \tWhitehurst                    \t481       \t4th \tWy             \tSuite U   \tSugar Hill\tEffingham County\tGA\t35114     \tUnited States\t-5.00\tcondo               \t3355.64\nAAAAAAAADNHFBAAA\tDr.       \tGerald              \tHayden                        \t884       \t1st Smith\tCourt          \tSuite 390 \tHighland\tTowns County\tGA\t39454     \tUnited States\t-5.00\tsingle family       \t3481.60\nAAAAAAAADONFBAAA\tDr.       
\tJane                \tWilliams                      \t527       \tCedar South\tStreet         \tSuite 310 \tFairview\tTwiggs County\tGA\t35709     \tUnited States\t-5.00\tcondo               \t2304.16\nAAAAAAAADPLEAAAA\tMr.       \tMichael             \tPowell                        \t616       \t6th \tPkwy           \tSuite G   \tGlendale\tDougherty County\tGA\t33951     \tUnited States\t-5.00\tsingle family       \t2061.25\nAAAAAAAAECBEAAAA\tMs.       \tMegan               \tWilson                        \t93        \t13th \tDrive          \tSuite X   \tRiverdale\tClarke County\tGA\t39391     \tUnited States\t-5.00\tsingle family       \t5256.96\nAAAAAAAAECBMAAAA\tMrs.      \tPaula               \tClark                         \t145       \tLocust \tDrive          \tSuite M   \tMount Vernon\tWilcox County\tGA\t38482     \tUnited States\t-5.00\tcondo               \t3445.05\nAAAAAAAAECIEAAAA\tMiss      \tDanille             \tSanders                       \t825       \tJackson South\tBoulevard      \tSuite 480 \tRiverdale\tClarke County\tGA\t39391     \tUnited States\t-5.00\tsingle family       \t2897.72\nAAAAAAAAEDDFAAAA\tDr.       \tMichelle            \tLouis                         \t375       \tHighland East\tAvenue         \tSuite L   \tOakdale\tHaralson County\tGA\t39584     \tUnited States\t-5.00\tapartment           \t2301.53\nAAAAAAAAEGJBAAAA\tMrs.      \tManuela             \tRoche                         \t935       \t10th Davis\tWay            \tSuite O   \tOakwood\tWorth County\tGA\t30169     \tUnited States\t-5.00\tapartment           \t6416.92\nAAAAAAAAEGMCBAAA\tMr.       \tChristopher         \tSaunders                      \t476       \tWest \tLane           \tSuite J   \tPlainview\tGilmer County\tGA\t33683     \tUnited States\t-5.00\tsingle family       \t3623.76\nAAAAAAAAEGOCBAAA\tMrs.      \tElise               \tHuff                          \t40        \tCentral \tCourt          \tSuite P   \tFarmington\tCoffee County\tGA\t39145     \tUnited States\t-5.00\tcondo               \t2518.63\nAAAAAAAAEGOMAAAA\tSir       \tJames               \tHandy                         \t316       \tWashington \tLane           \tSuite 480 \tOakland\tRichmond County\tGA\t39843     \tUnited States\t-5.00\tsingle family       \t3035.76\nAAAAAAAAEHIJAAAA\tNULL\tNULL\tBoggs                         \t721       \t2nd 5th\tAve            \tSuite C   \tGreenwood\tWilkes County\tGA\t38828     \tUnited States\t-5.00\tapartment           \t2623.44\nAAAAAAAAEICDBAAA\tMs.       \tAlice               \tGarrett                       \t157       \tWest Smith\tDrive          \tSuite 20  \tRiver Oaks\tHancock County\tGA\t38075     \tUnited States\t-5.00\tsingle family       \t16853.44\nAAAAAAAAEIINAAAA\tMrs.      \tMarie               \tCoffey                        \t539       \tHill Walnut\tRoad           \tSuite F   \tMarion\tDade County\tGA\t30399     \tUnited States\t-5.00\tapartment           \t3608.02\nAAAAAAAAEIINAAAA\tMrs.      \tMarie               \tCoffey                        \t539       \tHill Walnut\tRoad           \tSuite F   \tMarion\tDade County\tGA\t30399     \tUnited States\t-5.00\tapartment           \t7928.23\nAAAAAAAAEJCKAAAA\tMr.       \tBryan               \tSimpson                       \t313       \tSeventh 7th\tLn             \tSuite B   \tBelmont\tWashington County\tGA\t30191     \tUnited States\t-5.00\tcondo               \t2094.14\nAAAAAAAAELCCBAAA\tMs.       
\tRobyn               \tMartinez                      \t274       \tWillow Birch\tParkway        \tSuite O   \tPleasant Valley\tBurke County\tGA\t32477     \tUnited States\t-5.00\tcondo               \t3081.88\nAAAAAAAAEMFEAAAA\tSir       \tDan                 \tPorter                        \t110       \tNorth West\tAve            \tSuite 270 \tFarmington\tCalhoun County\tGA\t39145     \tUnited States\t-5.00\tsingle family       \t5717.10\nAAAAAAAAENCEBAAA\tMrs.      \tLatonya             \tDoyle                         \t634       \tRailroad Park\tLn             \tSuite S   \tOak Hill\tDade County\tGA\t37838     \tUnited States\t-5.00\tsingle family       \t4450.56\nAAAAAAAAEPHFBAAA\tMrs.      \tCharlotte           \tMatthews                      \t368       \tCenter 4th\tST             \tSuite L   \tMartinsville\tBaldwin County\tGA\t30419     \tUnited States\t-5.00\tapartment           \t2301.95\nAAAAAAAAFACDAAAA\tMr.       \tWilliam             \tMarquez                       \t724       \t1st Walnut\tDr.            \tSuite X   \tOak Hill\tAppling County\tGA\t37838     \tUnited States\t-5.00\tapartment           \t5728.95\nAAAAAAAAFADGAAAA\tMs.       \tEdna                \tBrown                         \t871       \t11th Third\tCir.           \tSuite 380 \tWaterloo\tQuitman County\tGA\t31675     \tUnited States\t-5.00\tcondo               \t3552.69\nAAAAAAAAFANIAAAA\tDr.       \tJeanette            \tMccormick                     \t89        \tJackson Birch\tRD             \tSuite 440 \tLakeview\tCoweta County\tGA\t38579     \tUnited States\t-5.00\tcondo               \t9022.24\nAAAAAAAAFCPCAAAA\tDr.       \tChristine           \tSmith                         \t820       \tPark Lincoln\tCir.           \tSuite 60  \tSpringfield\tIrwin County\tGA\t39303     \tUnited States\t-5.00\tapartment           \t1916.99\nAAAAAAAAFFAPAAAA\tSir       \tDonald              \tWright                        \t35        \tJefferson \tParkway        \tSuite A   \tBuena Vista\tMarion County\tGA\t35752     \tUnited States\t-5.00\tsingle family       \t3434.76\nAAAAAAAAFFFDAAAA\tMr.       \tNathan              \tPratt                         \t317       \tMain Second\tDr.            \tSuite J   \tShaw\tMadison County\tGA\t30618     \tUnited States\t-5.00\tcondo               \t3953.10\nAAAAAAAAFGJHAAAA\tNULL\tNULL\tCarey                         \t605       \t13th Lake\tAve            \tSuite 100 \tMount Pleasant\tPickens County\tGA\t31933     \tUnited States\t-5.00\tsingle family       \t3349.63\nAAAAAAAAFGMKAAAA\tMrs.      \tAlbert              \tBradshaw                      \t774       \t15th 9th\tBlvd           \tSuite I   \tWhite Oak\tPeach County\tGA\t36668     \tUnited States\t-5.00\tsingle family       \t1695.86\nAAAAAAAAFKDJAAAA\tNULL\tNULL\tNULL\t186       \tSpring \tRoad           \tSuite 290 \tBuena Vista\tJohnson County\tGA\t35752     \tUnited States\t-5.00\tcondo               \t1919.82\nAAAAAAAAFLFNAAAA\tMrs.      \tScarlet             \tCook                          \t127       \t2nd \tBlvd           \tSuite U   \tLebanon\tColumbia County\tGA\t32898     \tUnited States\t-5.00\tcondo               \t2722.78\nAAAAAAAAFLHCAAAA\tSir       \tPatrick             \tTaylor                        \t635       \tThird \tAve            \tSuite H   \tKingston\tWayne County\tGA\t34975     \tUnited States\t-5.00\tapartment           \t3413.14\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q82.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,i_item_desc:string,i_current_price:decimal(7,2)>\n-- !query output\nAAAAAAAAEIJBAAAA\tNew games used to suggest. Annual, legal expenses see male pubs; almost early offences must come from the heads. En route small conditions underestimate e\t64.55\nAAAAAAAAELDEAAAA\tMethods get hours. American, great schools l\t89.24\nAAAAAAAAMOCDAAAA\tColours think. Partial, rich things would not appeal extremely open students. New, working magis\t66.03\nAAAAAAAAOIOBAAAA\tJust open variables used to mind well also new \t65.54\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q83.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<item_id:string,sr_item_qty:bigint,sr_dev:double,cr_item_qty:bigint,cr_dev:double,wr_item_qty:bigint,wr_dev:double,average:decimal(27,6)>\n-- !query output\nAAAAAAAAAEFEAAAA\t35\t9.04392764857881\t4\t1.03359173126615\t90\t23.25581395348837\t43.000000\nAAAAAAAAAFDDAAAA\t11\t8.94308943089431\t21\t17.073170731707318\t9\t7.317073170731707\t13.666667\nAAAAAAAAAFKBAAAA\t2\t1.5151515151515151\t3\t2.2727272727272725\t39\t29.545454545454547\t14.666667\nAAAAAAAABGOBAAAA\t37\t17.129629629629626\t31\t14.351851851851851\t4\t1.8518518518518516\t24.000000\nAAAAAAAACIEAAAAA\t34\t26.35658914728682\t6\t4.651162790697675\t3\t2.3255813953488373\t14.333333\nAAAAAAAACLLBAAAA\t63\t23.863636363636363\t16\t6.0606060606060606\t9\t3.4090909090909096\t29.333333\nAAAAAAAADHDAAAAA\t4\t8.88888888888889\t2\t4.444444444444445\t9\t20.0\t5.000000\nAAAAAAAADKNCAAAA\t6\tNULL\t41\tNULL\tNULL\tNULL\tNULL\nAAAAAAAAEIABAAAA\t56\t13.82716049382716\t27\t6.666666666666667\t52\t12.839506172839506\t45.000000\nAAAAAAAAFKICAAAA\t73\t15.904139433551197\t29\t6.318082788671024\t51\t11.11111111111111\t51.000000\nAAAAAAAAGDHAAAAA\t79\t21.236559139784948\t3\t0.8064516129032258\t42\t11.29032258064516\t41.333333\nAAAAAAAAGJEAAAAA\t94\t23.55889724310777\t4\t1.0025062656641603\t35\t8.771929824561402\t44.333333\nAAAAAAAAHKEBAAAA\t12\t7.4074074074074066\t10\t6.172839506172839\t32\t19.753086419753085\t18.000000\nAAAAAAAAICPBAAAA\t19\t13.768115942028986\t20\t14.492753623188406\t7\t5.072463768115942\t15.333333\nAAAAAAAAJICAAAAA\t15\t5.319148936170213\t49\t17.375886524822697\t30\t10.638297872340425\t31.333333\nAAAAAAAAKDMBAAAA\t7\t11.666666666666666\t9\t15.0\t4\t6.666666666666667\t6.666667\nAAAAAAAAKEEBAAAA\t18\t6.382978723404255\t29\t10.28368794326241\t47\t16.666666666666664\t31.333333\nAAAAAAAALJIAAAAA\t1\t0.32362459546925565\t9\t2.912621359223301\t93\t30.09708737864078\t34.333333\nAAAAAAAALLJAAAAA\t53\t16.358024691358025\t25\t7.716049382716049\t30\t9.25925925925926\t36.000000\nAAAAAAAALMDCAAAA\t103\t21.458333333333336\t46\t9.583333333333332\t11\t2.291666666666667\t53.333333\nAAAAAAAALNDBAAAA\t31\t10.99290780141844\t29\t10.28368794326241\t34\t12.056737588652483\t31.333333\nAAAAAAAAMLDDAAAA\t1\t0.32051282051282054\t22\t7.051282051282051\t81\t25.961538461538463\t34.666667\nAAAAAAAANAFEAAAA\t12\t10.0\t21\t17.5\t7\t5.833333333333333\t13.333333\nAAAAAAAANFNAAAAA\t10\t2.849002849002849\t82\t23.36182336182336\t25\t7.122507122507122\t39.000000\nAAAAAAAANIDEAAAA\t18\t5.172413793103448\t14\t4.022988505747127\t84\t24.137931034482758\t38.666667\nAAAAAAAAPDAEAAAA\t61\t29.04761904761905\t7\t3.3333333333333335\t2\t0.9523809523809523\t23.333333\nAAAAAAAAPDFCAAAA\t7\t4.861111111111112\t20\t13.88888888888889\t21\t14.583333333333334\t16.000000\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q84.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<customer_id:string,customername:string>\n-- !query output\nAAAAAAAAAAAKAAAA\tPerry                         , Lori                \nAAAAAAAAANFABAAA\tHarris                        , Alexander           \nAAAAAAAACHLDAAAA\tPhipps                        , Charles             \nAAAAAAAACIHPAAAA\tNelson                        , Timothy             \nAAAAAAAACJOJAAAA\tMata                          , Eva                 \nAAAAAAAADEANAAAA\tGaines                        , David               \nAAAAAAAAEJPOAAAA\tWesley                        , Paul                \nAAAAAAAAFFKBAAAA\tDuncan                        , Diana               \nAAAAAAAAFOBCAAAA\tNull                          , Thomas              \nAAAAAAAAJFBBAAAA\tJacob                         , Ross                \nAAAAAAAAJGFBBAAA\tJohnson                       , Windy               \nAAAAAAAALBGPAAAA\tPorter                        , Veronica            \nAAAAAAAANJBHAAAA\tThompson                      , Kenneth             \nAAAAAAAANOLKAAAA\tThomas                        , Billy               \nAAAAAAAAONJOAAAA\tFloyd                         , Janette             \nAAAAAAAAONLJAAAA\tHoward                        , Tommy\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q85.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<substr(r_reason_desc, 1, 20):string,avg(ws_quantity):double,avg(wr_refunded_cash):decimal(11,6),avg(wr_fee):decimal(11,6)>\n-- !query output\nNot the product that\t10.0\t177.330000\t86.330000\nNot working any more\t38.0\t892.470000\t14.940000\nreason 25           \t79.0\t4640.200000\t47.240000\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q86.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<total_sum:decimal(17,2),i_category:string,i_class:string,lochierarchy:tinyint,rank_within_parent:int>\n-- !query output\n329088997.80\tNULL\tNULL\t2\t1\n35430664.91\tBooks                                             \tNULL\t1\t1\n34918777.34\tMen                                               \tNULL\t1\t2\n33760378.41\tChildren                                          \tNULL\t1\t3\n33243530.17\tElectronics                                       \tNULL\t1\t4\n33145164.93\tHome                                              \tNULL\t1\t5\n32756827.42\tShoes                                             \tNULL\t1\t6\n32285995.12\tMusic                                             \tNULL\t1\t7\n31712254.78\tSports                                            \tNULL\t1\t8\n30795989.18\tWomen                                             \tNULL\t1\t9\n30076030.73\tJewelry                                           \tNULL\t1\t10\n963384.81\tNULL\tNULL\t1\t11\n615989.10\tNULL\tNULL\t0\t1\n65752.25\tNULL\tdresses                                           \t0\t2\n54445.48\tNULL\tlighting                                          \t0\t3\n40391.44\tNULL\tinfants                                           \t0\t4\n35471.06\tNULL\tmens                                              \t0\t5\n34067.03\tNULL\tcountry                                           \t0\t6\n34046.70\tNULL\taccessories                                       \t0\t7\n27537.73\tNULL\tbaseball                                          \t0\t8\n26174.29\tNULL\tguns                                              \t0\t9\n17743.12\tNULL\tathletic                                          \t0\t10\n11766.61\tNULL\tshirts                                            \t0\t11\n2662546.58\tBooks                                             \tcomputers                                         \t0\t1\n2495626.03\tBooks                                             \tparenting                                         \t0\t2\n2425647.20\tBooks                                             \tfiction                                           \t0\t3\n2401196.81\tBooks                                             \tsports                                            \t0\t4\n2346704.09\tBooks                                             \thistory                                           \t0\t5\n2302719.16\tBooks                                             \tscience                                           \t0\t6\n2276270.78\tBooks                                             \tself-help                                         \t0\t7\n2257444.51\tBooks                                             \tbusiness                                          \t0\t8\n2216374.08\tBooks                                             \thome repair                                       \t0\t9\n2206720.90\tBooks                                             \treference                                         \t0\t10\n2149203.74\tBooks                                             \tmystery                                           \t0\t11\n2140640.32\tBooks                                             \ttravel                                            \t0\t12\n2048647.89\tBooks                                             \tromance                                           \t0\t13\n1956865.76\tBooks                                             \tcooking                                           \t0\t14\n1859356.14\tBooks     
                                        \tarts                                              \t0\t15\n1617536.06\tBooks                                             \tentertainments                                    \t0\t16\n67164.86\tBooks                                             \tNULL\t0\t17\n8836884.43\tChildren                                          \tschool-uniforms                                   \t0\t1\n8760984.73\tChildren                                          \tinfants                                           \t0\t2\n8142524.86\tChildren                                          \tnewborn                                           \t0\t3\n7925776.25\tChildren                                          \ttoddlers                                          \t0\t4\n94208.14\tChildren                                          \tNULL\t0\t5\n2489469.99\tElectronics                                       \tdisk drives                                       \t0\t1\n2323648.66\tElectronics                                       \tpersonal                                          \t0\t2\n2305415.91\tElectronics                                       \tcamcorders                                        \t0\t3\n2240199.44\tElectronics                                       \tkaroke                                            \t0\t4\n2235691.33\tElectronics                                       \tstereo                                            \t0\t5\n2233058.51\tElectronics                                       \ttelevisions                                       \t0\t6\n2113792.92\tElectronics                                       \tportable                                          \t0\t7\n2092500.41\tElectronics                                       \tmusical                                           \t0\t8\n2088701.69\tElectronics                                       \tdvd/vcr players                                   \t0\t9\n2081097.01\tElectronics                                       \tcameras                                           \t0\t10\n2055973.50\tElectronics                                       \tautomotive                                        \t0\t11\n2021900.56\tElectronics                                       \taudio                                             \t0\t12\n2006817.42\tElectronics                                       \tmemory                                            \t0\t13\n1729410.62\tElectronics                                       \tmonitors                                          \t0\t14\n1621759.97\tElectronics                                       \twireless                                          \t0\t15\n1604092.23\tElectronics                                       \tscanners                                          \t0\t16\n2580794.04\tHome                                              \tmattresses                                        \t0\t1\n2552960.47\tHome                                              \tglassware                                         \t0\t2\n2479089.66\tHome                                              \tlighting                                          \t0\t3\n2241963.26\tHome                                              \tbathroom                                          \t0\t4\n2227778.30\tHome                                              \tbedding                                           \t0\t5\n2201961.72\tHome                                              \twallpaper                                         
\t0\t6\n2125093.38\tHome                                              \trugs                                              \t0\t7\n2029622.97\tHome                                              \taccent                                            \t0\t8\n2005244.71\tHome                                              \tpaint                                             \t0\t9\n2004693.90\tHome                                              \ttables                                            \t0\t10\n1882095.92\tHome                                              \tblinds/shades                                     \t0\t11\n1833537.38\tHome                                              \tdecor                                             \t0\t12\n1805505.20\tHome                                              \tkids                                              \t0\t13\n1761357.98\tHome                                              \tcurtains/drapes                                   \t0\t14\n1684930.79\tHome                                              \tflatware                                          \t0\t15\n1596468.10\tHome                                              \tfurniture                                         \t0\t16\n132067.15\tHome                                              \tNULL\t0\t17\n2311781.76\tJewelry                                           \tcustom                                            \t0\t1\n2143571.63\tJewelry                                           \tbirdal                                            \t0\t2\n2116270.85\tJewelry                                           \tloose stones                                      \t0\t3\n2091585.73\tJewelry                                           \tdiamonds                                          \t0\t4\n1929363.68\tJewelry                                           \tbracelets                                         \t0\t5\n1896051.77\tJewelry                                           \tgold                                              \t0\t6\n1858969.13\tJewelry                                           \tcostume                                           \t0\t7\n1849246.62\tJewelry                                           \tmens watch                                        \t0\t8\n1845580.36\tJewelry                                           \testate                                            \t0\t9\n1834863.97\tJewelry                                           \tearings                                           \t0\t10\n1809049.82\tJewelry                                           \twomens watch                                      \t0\t11\n1738077.18\tJewelry                                           \tsemi-precious                                     \t0\t12\n1736010.32\tJewelry                                           \trings                                             \t0\t13\n1703934.55\tJewelry                                           \tconsignment                                       \t0\t14\n1608464.69\tJewelry                                           \tpendants                                          \t0\t15\n1603208.67\tJewelry                                           \tjewelry boxes                                     \t0\t16\n9509484.62\tMen                                               \tsports-apparel                                    \t0\t1\n8897383.77\tMen                                               \tpants                                             \t0\t2\n8508441.20\tMen                          
                     \taccessories                                       \t0\t3\n7952307.23\tMen                                               \tshirts                                            \t0\t4\n51160.52\tMen                                               \tNULL\t0\t5\n8668502.54\tMusic                                             \trock                                              \t0\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q87.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<count(1):bigint>\n-- !query output\n47170\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q88.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<h8_30_to_9:bigint,h9_to_9_30:bigint,h9_30_to_10:bigint,h10_to_10_30:bigint,h10_30_to_11:bigint,h11_to_11_30:bigint,h11_30_to_12:bigint,h12_to_12_30:bigint>\n-- !query output\n2358\t4664\t4828\t7447\t6997\t3886\t4073\t4923\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q89.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_category:string,i_class:string,i_brand:string,s_store_name:string,s_company_name:string,d_moy:int,sum_sales:decimal(17,2),avg_monthly_sales:decimal(21,6)>\n-- !query output\nMen                                               \tshirts                                            \timportoimporto #x                                 \tbar\tUnknown\t3\t1815.92\t4915.190000\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tation\tUnknown\t6\t1508.07\t4586.300000\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t6\t1577.69\t4596.057500\nMen                                               \tshirts                                            \timportoimporto #x                                 \tation\tUnknown\t6\t2077.36\t4920.419167\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t4\t1762.17\t4596.057500\nMen                                               \tshirts                                            \timportoimporto #x                                 \tation\tUnknown\t3\t2183.89\t4920.419167\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tation\tUnknown\t3\t1917.37\t4586.300000\nMen                                               \tshirts                                            \timportoimporto #x                                 \tought\tUnknown\t4\t2425.64\t5071.261667\nMen                                               \tshirts                                            \timportoimporto #x                                 \tought\tUnknown\t2\t2438.50\t5071.261667\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \teing\tUnknown\t7\t1904.98\t4494.382500\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \table\tUnknown\t4\t1994.71\t4582.713333\nMen                                               \tshirts                                            \timportoimporto #x                                 \teing\tUnknown\t3\t2458.87\t5035.925833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tought\tUnknown\t3\t2138.31\t4707.378333\nMen                                               \tshirts                                            \timportoimporto #x                                 \tought\tUnknown\t7\t2511.79\t5071.261667\nMen                                               \tshirts                                            \timportoimporto #x                                 \teing\tUnknown\t4\t2528.19\t5035.925833\nMen                                               \tshirts                                            \timportoimporto #x                                 \tought\tUnknown\t5\t2641.17\t5071.261667\nMen                                               \tshirts                                            \timportoimporto #x                     
            \table\tUnknown\t7\t2360.38\t4779.735000\nMen                                               \tshirts                                            \timportoimporto #x                                 \tbar\tUnknown\t6\t2521.43\t4915.190000\nMen                                               \tshirts                                            \timportoimporto #x                                 \tought\tUnknown\t3\t2696.60\t5071.261667\nMen                                               \tshirts                                            \timportoimporto #x                                 \teing\tUnknown\t7\t2689.88\t5035.925833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tought\tUnknown\t4\t2396.08\t4707.378333\nMen                                               \tshirts                                            \timportoimporto #x                                 \teing\tUnknown\t6\t2725.29\t5035.925833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \table\tUnknown\t7\t2299.82\t4582.713333\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \table\tUnknown\t5\t2321.48\t4582.713333\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tought\tUnknown\t7\t2449.19\t4707.378333\nMen                                               \tshirts                                            \timportoimporto #x                                 \teing\tUnknown\t2\t2784.04\t5035.925833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tese\tUnknown\t4\t2254.90\t4490.582500\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tese\tUnknown\t3\t2257.13\t4490.582500\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tese\tUnknown\t6\t2257.18\t4490.582500\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tbar\tUnknown\t4\t1998.31\t4205.080833\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t2\t2393.90\t4596.057500\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \teing\tUnknown\t6\t2308.58\t4494.382500\nMen                                               \tshirts                                            \timportoimporto #x                                 \teing\tUnknown\t1\t2856.92\t5035.925833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \table\tUnknown\t3\t2408.40\t4582.713333\nWomen                                             \tdresses                                           \tamalgamalg #x                                     
\tese\tUnknown\t5\t2317.37\t4490.582500\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t7\t2439.68\t4596.057500\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tese\tUnknown\t7\t2341.06\t4490.582500\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t5\t2457.43\t4596.057500\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tought\tUnknown\t6\t2577.66\t4707.378333\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tbar\tUnknown\t3\t2097.37\t4205.080833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \teing\tUnknown\t5\t2398.63\t4494.382500\nMen                                               \tshirts                                            \timportoimporto #x                                 \tation\tUnknown\t4\t2841.78\t4920.419167\nMen                                               \tshirts                                            \timportoimporto #x                                 \tbar\tUnknown\t2\t2884.81\t4915.190000\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tation\tUnknown\t4\t2557.46\t4586.300000\nMen                                               \tshirts                                            \timportoimporto #x                                 \tbar\tUnknown\t1\t2926.69\t4915.190000\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \teing\tUnknown\t3\t2528.38\t4494.382500\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \table\tUnknown\t2\t2616.82\t4582.713333\nMen                                               \tshirts                                            \timportoimporto #x                                 \tbar\tUnknown\t4\t2956.00\t4915.190000\nMen                                               \tshirts                                            \timportoimporto #x                                 \table\tUnknown\t5\t2829.72\t4779.735000\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tese\tUnknown\t2\t2586.26\t4490.582500\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t3\t2728.14\t4596.057500\nMen                                               \tshirts                                            \timportoimporto #x                                 \table\tUnknown\t2\t2913.44\t4779.735000\nMen                                               \tshirts                                            \timportoimporto #x                                 
\tought\tUnknown\t6\t3210.96\t5071.261667\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \teing\tUnknown\t4\t2647.58\t4494.382500\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tbar\tUnknown\t7\t2407.85\t4205.080833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tbar\tUnknown\t6\t2427.96\t4205.080833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tought\tUnknown\t2\t2933.33\t4707.378333\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tought\tUnknown\t5\t2946.43\t4707.378333\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tbar\tUnknown\t2\t2465.92\t4205.080833\nMen                                               \tshirts                                            \timportoimporto #x                                 \table\tUnknown\t4\t3120.21\t4779.735000\nMen                                               \tshirts                                            \timportoimporto #x                                 \table\tUnknown\t7\t868.21\t2511.135833\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t6\t985.62\t2603.736667\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tbar\tUnknown\t5\t2594.86\t4205.080833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tation\tUnknown\t5\t2992.12\t4586.300000\nMen                                               \tshirts                                            \timportoimporto #x                                 \table\tUnknown\t3\t3197.55\t4779.735000\nMen                                               \tshirts                                            \timportoimporto #x                                 \tation\tUnknown\t7\t3347.44\t4920.419167\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tation\tUnknown\t7\t3059.37\t4586.300000\nMen                                               \tshirts                                            \timportoimporto #x                                 \tation\tUnknown\t5\t3402.55\t4920.419167\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \table\tUnknown\t6\t3085.16\t4582.713333\nMen                                               \tshirts                                            \timportoimporto #x                                 \tbar\tUnknown\t7\t3426.04\t4915.190000\nWomen                                             \tdresses                                           \tamalgamalg #x                                     
\teing\tUnknown\t2\t3013.27\t4494.382500\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tbar\tUnknown\t2\t512.15\t1992.243333\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t2\t1143.39\t2603.736667\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \table\tUnknown\t2\t652.58\t2089.189167\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \table\tUnknown\t3\t656.84\t2089.189167\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tbar\tUnknown\t3\t564.90\t1992.243333\nMen                                               \tshirts                                            \timportoimporto #x                                 \tbar\tUnknown\t5\t3491.77\t4915.190000\nMen                                               \tshirts                                            \timportoimporto #x                                 \tation\tUnknown\t3\t1075.39\t2478.654167\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tese\tUnknown\t1\t3107.05\t4490.582500\nMen                                               \tshirts                                            \timportoimporto #x                                 \tought\tUnknown\t2\t1061.74\t2425.509167\nMen                                               \tshirts                                            \timportoimporto #x                                 \teing\tUnknown\t7\t1040.11\t2396.908333\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t1\t3253.77\t4596.057500\nMen                                               \tshirts                                            \timportoimporto #x                                 \table\tUnknown\t2\t1181.12\t2511.135833\nMen                                               \tshirts                                            \timportoimporto #x                                 \tation\tUnknown\t2\t3598.63\t4920.419167\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \teing\tUnknown\t3\t758.25\t2074.660833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tese\tUnknown\t5\t782.25\t2094.761667\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t4\t1293.95\t2603.736667\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tation\tUnknown\t2\t3282.12\t4586.300000\nMen                                               \tshirts                                            \timportoimporto #x                                 
\table\tUnknown\t3\t1216.42\t2511.135833\nMen                                               \tshirts                                            \timportoimporto #x                                 \table\tUnknown\t6\t3503.61\t4779.735000\nMen                                               \tshirts                                            \timportoimporto #x                                 \tese\tUnknown\t1\t1333.77\t2603.736667\nMen                                               \tshirts                                            \timportoimporto #x                                 \table\tUnknown\t6\t1243.07\t2511.135833\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tought\tUnknown\t2\t760.60\t2026.577500\nMen                                               \tshirts                                            \timportoimporto #x                                 \tbar\tUnknown\t4\t970.79\t2234.079167\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tese\tUnknown\t4\t832.51\t2094.761667\nMen                                               \tshirts                                            \timportoimporto #x                                 \tation\tUnknown\t6\t1226.01\t2478.654167\nMen                                               \tshirts                                            \timportoimporto #x                                 \tought\tUnknown\t6\t1205.46\t2425.509167\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \tation\tUnknown\t4\t595.85\t1797.562500\nMen                                               \tshirts                                            \timportoimporto #x                                 \tbar\tUnknown\t2\t1036.00\t2234.079167\nWomen                                             \tdresses                                           \tamalgamalg #x                                     \teing\tUnknown\t4\t878.88\t2074.660833\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q9.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<bucket1:decimal(11,6),bucket2:decimal(11,6),bucket3:decimal(11,6),bucket4:decimal(11,6),bucket5:decimal(11,6)>\n-- !query output\n358.920754\t1040.839473\t1718.531826\t2401.883865\t3085.520204\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q90.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<am_pm_ratio:decimal(35,20)>\n-- !query output\n0.58933333333333333333\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q91.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<Call_Center:string,Call_Center_Name:string,Manager:string,Returns_Loss:decimal(17,2)>\n-- !query output\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q92.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<Excess Discount Amount :decimal(17,2)>\n-- !query output\n39037.04\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q93.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<ss_customer_sk:int,sumsales:decimal(28,2)>\n-- !query output\n108\tNULL\n331\tNULL\n533\tNULL\n564\tNULL\n1391\tNULL\n1680\tNULL\n2096\tNULL\n2534\tNULL\n2766\tNULL\n2857\tNULL\n3400\tNULL\n3565\tNULL\n4075\tNULL\n4516\tNULL\n4888\tNULL\n4980\tNULL\n5111\tNULL\n5626\tNULL\n5743\tNULL\n6078\tNULL\n6760\tNULL\n6910\tNULL\n7034\tNULL\n7770\tNULL\n7791\tNULL\n8140\tNULL\n8364\tNULL\n8725\tNULL\n8913\tNULL\n8938\tNULL\n9203\tNULL\n9220\tNULL\n9860\tNULL\n10269\tNULL\n11218\tNULL\n11625\tNULL\n12405\tNULL\n12935\tNULL\n12954\tNULL\n13050\tNULL\n13298\tNULL\n13626\tNULL\n13730\tNULL\n14105\tNULL\n14422\tNULL\n14587\tNULL\n14627\tNULL\n14734\tNULL\n14958\tNULL\n15599\tNULL\n16142\tNULL\n16759\tNULL\n16918\tNULL\n17479\tNULL\n18540\tNULL\n18710\tNULL\n19327\tNULL\n19514\tNULL\n19917\tNULL\n20571\tNULL\n21532\tNULL\n22571\tNULL\n23925\tNULL\n25168\tNULL\n25250\tNULL\n25414\tNULL\n26175\tNULL\n26407\tNULL\n27166\tNULL\n27240\tNULL\n28852\tNULL\n29635\tNULL\n30207\tNULL\n30278\tNULL\n30438\tNULL\n30445\tNULL\n31204\tNULL\n31911\tNULL\n31987\tNULL\n32147\tNULL\n32154\tNULL\n32169\tNULL\n32291\tNULL\n32639\tNULL\n32646\tNULL\n33232\tNULL\n33313\tNULL\n33872\tNULL\n33951\tNULL\n34860\tNULL\n35502\tNULL\n35684\tNULL\n35774\tNULL\n35779\tNULL\n35957\tNULL\n36221\tNULL\n36333\tNULL\n36556\tNULL\n37254\tNULL\n37411\tNULL\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q94.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<order count :bigint,total shipping cost :decimal(17,2),total net profit :decimal(17,2)>\n-- !query output\n38\t84308.56\t21690.18\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q95.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<order count :bigint,total shipping cost :decimal(17,2),total net profit :decimal(17,2)>\n-- !query output\n101\t237942.25\t-27171.32\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q96.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<count(1):bigint>\n-- !query output\n888\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q97.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<store_only:bigint,catalog_only:bigint,store_and_catalog:bigint>\n-- !query output\n537833\t285408\t200\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q98.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_desc:string,i_category:string,i_class:string,i_current_price:decimal(7,2),itemrevenue:decimal(17,2),revenueratio:decimal(38,17)>\n-- !query output\nNULL\tBooks                                             \tNULL\tNULL\t557.55\t2.41577919115267179\nNULL\tBooks                                             \tNULL\t6.35\t361.02\t1.56424464817493959\nPrecisely elderly bodies\tBooks                                             \tarts                                              \t1.40\t4303.08\t1.75015577438987686\nAbilities could affect cruel parts. Predominantly other events telephone strong signs. Accurate mate\tBooks                                             \tarts                                              \t25.69\t9236.96\t3.75687156218529913\nGerma\tBooks                                             \tarts                                              \t5.82\t3191.92\t1.29822295179047002\nGreat, contemporary workers would not remove of course cultural values. Then due children might see positive seconds. Significant problems w\tBooks                                             \tarts                                              \t0.55\t2096.07\t0.85251703756969175\nSmall objects stop etc mediterranean patterns; liberal, free initiatives would not leave less clear british attitudes; good, blue relationships find softly very\tBooks                                             \tarts                                              \t58.41\t5760.88\t2.34307458786895754\nNewly national rights head curiously all electrical cells. Chinese, long values might not pull bad lines. High fun clothes ough\tBooks                                             \tarts                                              \t3.28\t571.10\t0.23227873122369528\nForward psychological plants establish closely yet eastern changes. Likewise necessary techniques might drop. Pleasant operations like lonely things; dogs let regions. Forces might not result clearl\tBooks                                             \tarts                                              \t2.43\t3457.82\t1.40637023708618106\nEarly, short v\tBooks                                             \tarts                                              \t75.57\t850.08\t0.34574593563060564\nBlack, following services justify by a investors; dirty, different charts will fly however prizes. Temporary, l\tBooks                                             \tarts                                              \t5.56\t4798.20\t1.95153179505784395\nScientific, difficult polls would not achieve. Countries reach of course. Bad, new churches realize most english\tBooks                                             \tarts                                              \t3.98\t17.50\t0.00711762878027433\nUnited, important objectives put similarly large, previous phenomena; old, present days receive. Happy detectives assi\tBooks                                             \tarts                                              \t1.26\t11646.43\t4.73685516316858938\nNaturally new years put serious, negative vehicles. Fin\tBooks                                             \tarts                                              \t3.34\t20302.86\t8.25761260902173683\nAgo correct profits must not handle else. Healthy children may not go only ancient words. Later just characters ought to drink about. British parts must watch soon ago other clients. 
So vital d\tBooks                                             \tarts                                              \t4.03\t9410.73\t3.82754758236520025\nMuch new waters \tBooks                                             \tarts                                              \t1.85\t2864.61\t1.16509889030066491\nTall, following actions keep widely willing, secondary groups. Heads could afford however; agricultural, square pri\tBooks                                             \tarts                                              \t9.99\t1903.68\t0.77426786036757875\nAnonymous, useful women provoke slightly present persons. Ideas ought to cost almost competent, working parties; aspects provide thr\tBooks                                             \tarts                                              \t6.73\t7841.73\t3.18940132200803357\nPowerful walls will find; there scottish decades must not\tBooks                                             \tarts                                              \t4.16\t5934.47\t2.41367739815283298\nToo executive doors progress mainly seemingly possible parts; hundreds stay virtually simple workers. Sola\tBooks                                             \tarts                                              \t34.32\t10139.18\t4.12382396436467639\nCareful privileges ought to live rather to a boards. Possible, broad p\tBooks                                             \tarts                                              \t3.93\t10008.95\t4.07085660459009779\nAside legitimate decisions may not stand probably sexual g\tBooks                                             \tarts                                              \t3.88\t874.84\t0.35581636355058234\nSpecially interesting crews continue current, foreign directions; only social men would not call at least political children; circumstances could not understand now in a assessme\tBooks                                             \tarts                                              \t2.13\t15380.51\t6.25558632178840388\nUnlikely states take later in general extra inf\tBooks                                             \tarts                                              \t0.32\t6478.12\t2.63479162023261224\nSometimes careful things state probably so\tBooks                                             \tarts                                              \t5.08\t17118.92\t6.96263529595507190\nCircumstances would not use. Principles seem writers. Times go from a hands. Members find grounds. Central, only teachers pursue properly into a p\tBooks                                             \tarts                                              \t5.95\t7863.28\t3.19816617344888566\nInches may lose from a problems. Firm, other corporations shall protect ashamed, important practices. Materials shall not make then by a police. Weeks used\tBooks                                             \tarts                                              \t0.84\t6519.57\t2.65165023240074772\nSystems cannot await regions. 
Home appropr\tBooks                                             \tarts                                              \t7.30\t889.75\t0.36188058327137607\nRelevant lips take so sure, manufacturing \tBooks                                             \tarts                                              \t8.80\t932.33\t0.37919879089789497\nExtra, primitive weeks look obviou\tBooks                                             \tarts                                              \t1.18\t10006.38\t4.06981132996350893\nMore than key reasons should remain. Words used to offer slowly british\tBooks                                             \tarts                                              \t0.28\t1075.86\t0.43757554854548205\nChildren may turn also above, historical aspects. Surveys migh\tBooks                                             \tarts                                              \t7.22\t3864.60\t1.57181646767132336\nTrustees know operations. Now past issues cut today german governments. British lines go critical, individual structures. Tonight adequate problems should no\tBooks                                             \tarts                                              \t4.05\t3061.36\t1.24512137387317768\nUseful observers start often white colleagues; simple pro\tBooks                                             \tarts                                              \t3.47\t724.40\t0.29462915933889837\nMembers should say earnings. Detailed departments would not move just at the hopes. Figures can take. Actually open houses want. Good teachers combine the\tBooks                                             \tarts                                              \t3.09\t7805.88\t3.17482035104958588\nMajor, senior words afford economic libraries; successful seconds need outside. Clinical, new ideas put now red c\tBooks                                             \tarts                                              \t5.87\t616.62\t0.25079270048530027\nLikely states feel astonishingly working roads. Parents put so somewhere able policies. Others may rely shortly instead interesting bodies; bri\tBooks                                             \tarts                                              \t7.50\t8829.29\t3.59106334933647431\nFloors could not go only for a years. Special reasons shape consequently black, concerned instances. Mutual depths encourage both simple teachers. Cards favour massive \tBooks                                             \tarts                                              \t1.83\t6344.62\t2.58049428068023382\nAccurate years want then other organisations. Simple lines mean as well so red results. Orthodox, central scales will not in\tBooks                                             \tarts                                              \t7.69\t3640.38\t1.48062134052200283\nCertain customers think exactly already necessary factories. Awkward doubts shall not forget fine\tBooks                                             \tarts                                              \t0.30\t12867.38\t5.23344195512721440\nWeak effects set far in the effects. Positive, true classes sell frankly ever open studies. Unique problems must mean as yet new genes. Essential businesses agree deep current stages. Not\tBooks                                             \tarts                                              \t4.40\t4471.87\t1.81880632077973420\nVisitors could not allow glad wages. Communist, real figures used to apply factors. Aggressive, optimistic days must mean about trees. 
Detailed courts consider really large pro\tBooks                                             \tarts                                              \t9.08\t1737.00\t0.70647549664780021\nDeep, big areas take for a facilities. Words could replace certainly cases; lights test. Nevertheless practical arts cross. Fa\tBooks                                             \tarts                                              \t7.37\t8016.08\t3.26031324074179520\nSimilar situations come separate programmes. National, large others could not ask opportunities. Severe, large findings accept; twins go more. Tiny rights may see specifi\tBooks                                             \tarts                                              \t1.27\t2413.11\t0.98146406776958731\nNatural plans might not like n\tBooks                                             \tbusiness                                          \t4.29\t8813.76\t2.98246237973343018\nYears shall want free objects. Old residents use absolutely so residential steps. Letters will share variables. Sure fres\tBooks                                             \tbusiness                                          \t40.76\t1861.77\t0.62999888636816844\nWhole, important problems make. Indeed industrial members go skills. Soft\tBooks                                             \tbusiness                                          \t3.22\t6877.01\t2.32709123121693768\nCheap depths calm as in a traditi\tBooks                                             \tbusiness                                          \t7.92\t2554.82\t0.86451804190159048\nSimple, great shops glance from a years. Lessons deepen here previous clients. Increased, silent flights open more great rocks. Brill\tBooks                                             \tbusiness                                          \t8.92\t8014.35\t2.71195237594586375\nJust sudden ideas ought to serve full sources; uncertain, open qualifications shout questions; chronic, informal\tBooks                                             \tbusiness                                          \t4.62\t4172.49\t1.41191664565564981\nGroups must not put new, civil moves. Correct men laugh slightly total novels. Relatively public girls set even scott\tBooks                                             \tbusiness                                          \t3.36\t414.96\t0.14041709657333354\nJust young degrees stop posts. More than tight artists buy to a arts. European, essential techniques ought to sell to a offences. Sentences be\tBooks                                             \tbusiness                                          \t2.58\t6286.70\t2.12733796276165399\nOther, black houses flow. New soldiers put only eastern hours. Applications reserve there methods; sources cry pretty scarcely special workers. Never british opportunities \tBooks                                             \tbusiness                                          \t8.20\t3426.05\t1.15933100471146462\nJunior, severe restrictions ought to want principles. Sure,\tBooks                                             \tbusiness                                          \t9.77\t3899.69\t1.31960465427044307\nRows could not\tBooks                                             \tbusiness                                          \t1.65\t15875.48\t5.37205708576254358\nRemaining subjects handle even only certain ladies; eagerly literary days could not provide. Very different articles cut then. Boys see out of a houses. 
Governme\tBooks                                             \tbusiness                                          \t9.03\t916.17\t0.31002007751973922\nManufacturing, ready concerns see already then new pupils. Both stable types used to manage otherw\tBooks                                             \tbusiness                                          \t1.18\t8723.00\t2.95175036969632840\nRussian windows should see in a weapons. New, considerable branches walk. English regions apply neither alone police; very new years w\tBooks                                             \tbusiness                                          \t2.79\t8434.64\t2.85417307557668685\nLong groups used to create more tiny feet; tools used to dare still\tBooks                                             \tbusiness                                          \t57.04\t795.66\t0.26924105229308502\nDrugs must compensate dark, modest houses. Small pubs claim naturally accessible relationships. Distinguished\tBooks                                             \tbusiness                                          \t1.66\t11559.25\t3.91150068335575881\nSmall, capable centres\tBooks                                             \tbusiness                                          \t2.98\t19190.81\t6.49392187461561344\nPopular, different parameters might take open, used modules. Prisoners use pretty alternative lovers. Annual, professional others spend once true men. Other, small subsidies seem politically\tBooks                                             \tbusiness                                          \t7.25\t1761.61\t0.59610603791823330\nSupreme, free uses handle even in the customers. Other minutes might not make of course social neighbours. So environmental rights come other, able sales\tBooks                                             \tbusiness                                          \t8.08\t3500.18\t1.18441563785437289\nSound, original activities consider quite to a attitudes. In order weak improvements marry available, hard studie\tBooks                                             \tbusiness                                          \t71.27\t11431.86\t3.86839355512056274\nAlways other hours used to use. Women should jump then. Civil samples take therefore other offices. Concrete, major demands\tBooks                                             \tbusiness                                          \t1.42\t2038.40\t0.68976819369356825\nChanges ensure different clients. Distinct, alone attacks think directly previous feelings. White children tell so medieval, responsible yea\tBooks                                             \tbusiness                                          \t5.89\t5116.38\t1.73131681262259552\nVisual fragments \tBooks                                             \tbusiness                                          \t6.77\t2739.02\t0.92684893931051673\nClassic issues will draw as european, engl\tBooks                                             \tbusiness                                          \t75.64\t14735.99\t4.98646840884344817\nAgain british shareholders see shares. American lives ought to establish horses. Then ideal conservatives might charge even nec\tBooks                                             \tbusiness                                          \t2.44\t9396.38\t3.17961345165736401\nCritical cases tell anywhere to the circumstances. 
Dependent, new numbers must not\tBooks                                             \tbusiness                                          \t3.72\t726.75\t0.24592279963049486\nConfident, video-tape\tBooks                                             \tbusiness                                          \t3.17\t6173.95\t2.08918482116091330\nOf course fundamental children will not deal still including a suppliers. More crucial powers will not keep enough. As good comments used to devote even convenient electric problems. Publi\tBooks                                             \tbusiness                                          \t8.85\t2672.80\t0.90444094785330122\nDepartments could seek now for a commu\tBooks                                             \tbusiness                                          \t5.93\t3205.71\t1.08477079876638965\nPaintings must not know primary, royal stands; similar, available others ough\tBooks                                             \tbusiness                                          \t0.39\t12939.97\t4.37871847201185356\nMost present eyes restore fat, central relationships; again considerable habits must face in a discussions. Engineers help at all direct occasions. Curiously del\tBooks                                             \tbusiness                                          \t80.10\t6877.89\t2.32738901183430931\nSo white countries could secure more angry items. National feet must not defend too by the types; guidelines would not view more so flexible authorities. Critics will handle closely lig\tBooks                                             \tbusiness                                          \t2.50\t2135.27\t0.72254774869901171\nSimple changes ought to vote almost sudden techniques. Partial, golden faces mean in a officials; vertically minor \tBooks                                             \tbusiness                                          \t8.73\t5996.87\t2.02926323965617573\nChristian lines stand once deep formal aspirations. National, fine islands play together with a patterns. New journals lose etc positive armie\tBooks                                             \tbusiness                                          \t4.89\t6106.50\t2.06636061361350790\nChildren would not mean in favour of a parts. Heavy, whole others shall mean on\tBooks                                             \tbusiness                                          \t3.13\t5521.91\t1.86854291917113983\nLips will n\tBooks                                             \tbusiness                                          \t8.48\t7806.43\t2.64159493735051117\nWhite fees might combine reports. Tr\tBooks                                             \tbusiness                                          \t2.09\t6566.98\t2.22218108939451963\nAsleep children invite more. Wealthy forms could expect as. Indeed statistical examinations could la\tBooks                                             \tbusiness                                          \t3.71\t11535.83\t3.90357565828889099\nMost new weeks go yet members. Also encouraging delegates make publications. 
Different competitors run resources; somehow common views m\tBooks                                             \tbusiness                                          \t1.07\t9334.32\t3.15861315039135987\nLocal, bloody names \tBooks                                             \tbusiness                                          \t4.40\t2927.75\t0.99071273012477651\nLarge, larg\tBooks                                             \tbusiness                                          \t3.50\t5826.76\t1.97170021599584758\nOnly new systems might join late speeches. Materials could stay on a benefits. Corporate regulations must crawl definitely practical deaths. Windows might soothe despite a organisations. Old\tBooks                                             \tbusiness                                          \t0.67\t123.41\t0.04176034771571981\nProfessional managers form later initial grounds. Conscious, big risks restore. American, full rises say for a systems. Already\tBooks                                             \tbusiness                                          \t5.27\t1616.40\t0.54696885218126163\nMemories can earn particularly over quick contexts; alone differences make separate years; irish men mea\tBooks                                             \tbusiness                                          \t4.23\t6855.84\t2.31992757704675870\nOnly, gothic\tBooks                                             \tbusiness                                          \t1.68\t7807.37\t2.64191302119179451\nLow, large clouds will not visit for example as the notions. Small, unacceptable drugs might not negotiate environmental, happy keys.\tBooks                                             \tbusiness                                          \t3.11\t3933.56\t1.33106582416859905\nSilver, critical operations could help howev\tBooks                                             \tbusiness                                          \t5.56\t9087.69\t3.07515674850230731\nTerrible, psychiatric bones will destroy also used studies; solely usual windows should not make shares. Advances continue sufficiently. As key days might not use far artists. Offici\tBooks                                             \tbusiness                                          \t5.83\t2024.32\t0.68500370381562209\nToo white addresses end by the talks. Hands get only companies. Statements know. Sentences would pay around for a payments; papers wait actually drinks; men would \tBooks                                             \tbusiness                                          \t6.06\t5178.86\t1.75245923645598158\nNew, big arguments may not win since by a tenant\tBooks                                             \tcomputers                                         \t1.00\t1686.72\t0.45326198032962534\nElse substantial problems slip months. Just unique corporations put vast areas. Supporters like far perfect chapters. Now young reports become wrong trials. 
Available ears shall\tBooks                                             \tcomputers                                         \t51.46\t471.00\t0.12656895793922734\nCheap, desirable members take immediate, estimated debts; months must track typica\tBooks                                             \tcomputers                                         \t3.26\t7226.83\t1.94202195818247621\nExpert, scottish terms will ask quiet demands; poor bits attempt northern, dangerous si\tBooks                                             \tcomputers                                         \t2.66\t4463.91\t1.19955931429829364\nGradually serious visitors bear no doubt technical hearts. Critics continue earlier soviet, standard minute\tBooks                                             \tcomputers                                         \t6.40\t3244.45\t0.87186126451364360\nClear, general goods must know never women. Communications meet about. Other rewards spot wide in a skills. Relative, empty drawings facilitate too rooms. Still asian police end speedily comp\tBooks                                             \tcomputers                                         \t7.64\t5385.04\t1.44708896233770017\nWide, essential activities make steadily procedures. Modules\tBooks                                             \tcomputers                                         \t35.95\t7285.54\t1.95779873848101557\nAt least remaining results shall keep cuts. Clients should meet policies. Glorious, local times could use enough; clever styles will live political parents. Single, gradual contracts will describe ho\tBooks                                             \tcomputers                                         \t9.51\t14393.77\t3.86795004186180949\nEnvironmental, new women pay again fingers. Different, uncomfortable records miss far russian, dependent members. Enough double men will go here immediatel\tBooks                                             \tcomputers                                         \t89.89\t9487.37\t2.54948308807619376\nYears learn here. Days make too. Only moving systems avoid old groups; short movements cannot see respectiv\tBooks                                             \tcomputers                                         \t0.60\t1706.66\t0.45862033493962150\nMagnetic\tBooks                                             \tcomputers                                         \t57.19\t7638.33\t2.05260184394042112\nGa\tBooks                                             \tcomputers                                         \t5.53\t7904.13\t2.12402865714688954\nS\tBooks                                             \tcomputers                                         \t65.78\t578.19\t0.15537347301673430\nSimple year\tBooks                                             \tcomputers                                         \t3.01\t5038.44\t1.35394925783295241\nAgricultural players shall smoke. So full reasons undertake \tBooks                                             \tcomputers                                         \t0.70\t5739.18\t1.54225484506508439\nThen basic years can encourage later traditions. 
For example christian parts subscribe informal, valuable gr\tBooks                                             \tcomputers                                         \t2.75\t11359.94\t3.05268740563088364\nBoxes batt\tBooks                                             \tcomputers                                         \t0.83\t6659.54\t1.78957757569979198\nBlocks extend ev\tBooks                                             \tcomputers                                         \t9.29\t11249.90\t3.02311702743208836\nSeparate, dead buildings think possibly english, net policies. Big divisions can use almost\tBooks                                             \tcomputers                                         \t9.46\t3529.22\t0.94838577014496795\nArtists make times. Rather ready functions must pre\tBooks                                             \tcomputers                                         \t5.71\t7805.93\t2.09763996995021836\nLimited, capable cities shall try during the bodies. Specially economic services ought to prevent old area\tBooks                                             \tcomputers                                         \t2.93\t6458.26\t1.73548882866368225\nSince other birds shall blame sudden\tBooks                                             \tcomputers                                         \t6.74\t2404.97\t0.64627292308939187\nLegs throw then. Old-fashioned develo\tBooks                                             \tcomputers                                         \t2.66\t12518.22\t3.36394492707854445\nOnly careful men define judicial, special lawyers. Now able funds will not put too black, economic terms. Objectives know both points. Teeth pay.\tBooks                                             \tcomputers                                         \t9.85\t911.50\t0.24494183686115864\nMen should not turn shadows. Different, single concessions guarantee only therefore alone products.\tBooks                                             \tcomputers                                         \t8.38\t11864.77\t3.18834729318175442\nEducational, white teachers should not fix. Considerable, other services might not cover today on a forms. Successful genes fall otherwise so\tBooks                                             \tcomputers                                         \t1.65\t7042.80\t1.89256869845942737\nWomen note days. Other, efficient qualificati\tBooks                                             \tcomputers                                         \t7.64\t8012.26\t2.15308577269247054\nPresent \tBooks                                             \tcomputers                                         \t2.84\t4786.32\t1.28619858760866792\nMultiple, dark feet mean more complex girls; schools may not answer frequently blue assets. Spiritual, dry patients may reply personnel\tBooks                                             \tcomputers                                         \t2.04\t2973.19\t0.79896721880112808\nPrivate teachers ap\tBooks                                             \tcomputers                                         \t5.27\t8109.35\t2.17917617635769258\nDaily numbers sense interesting players. General advantages would speak here. Shelves shall know with the reductions. Again wrong mothers provide ways; as hot pr\tBooks                                             \tcomputers                                         \t7.56\t13142.36\t3.53166626340166409\nInc, corporate ships slow evident degrees. Chosen, acute prices throw always. Budgets spend points. 
Commonly large events may mean. Bottles c\tBooks                                             \tcomputers                                         \t68.38\t12405.10\t3.33354687926095337\nEuropean, possible problems ought to restore then unfair interests. States would sleep in respect of the questions. Ideal stages affect only pressures. About spanish employees might kno\tBooks                                             \tcomputers                                         \t3.42\t6760.19\t1.81662463645686890\nUpper others narrow deaths. Situations could e\tBooks                                             \tcomputers                                         \t5.42\t10932.74\t2.93788855460829783\nHowever old hours ma\tBooks                                             \tcomputers                                         \t8.84\t5208.75\t1.39971562561772907\nIndeed other actions should provide after a ideas; exhibitio\tBooks                                             \tcomputers                                         \t6.95\t3949.76\t1.06139491997885895\nEffective females must answer too english projects. Firm, political experiments see in terms of \tBooks                                             \tcomputers                                         \t0.76\t246.87\t0.06633986973770075\nOf course responsible fears tell. Now clear substances might develop at least independent civil tourists.\tBooks                                             \tcomputers                                         \t4.95\t619.44\t0.16645833398274943\nPerfect days find at all. Crimes might develop hopes. Much socialist grants drive current, useful walls. Emissions open naturally. Combinations shall not know. Tragic things shall not receive just\tBooks                                             \tcomputers                                         \t6.71\t8038.78\t2.16021233057898500\nAdvantages apply almost on a services; materials defeat today individual ideas. Domestic divisions used to win smoothly irish talks. Subsequent quantities make only, automatic pounds. Flower\tBooks                                             \tcomputers                                         \t7.87\t442.26\t0.11884583298981461\nClose, historic tactics lead ago large, typical stars. Generally significant facilities check leading years; yesterday general years \tBooks                                             \tcomputers                                         \t3.87\t8448.38\t2.27028164092273769\nInternal seats used to sell dark words. Universal items show now in the roles. Most wonderf\tBooks                                             \tcomputers                                         \t2.57\t870.24\t0.23385428865612144\nLikely, separate attacks prefer seats. Informally equal women could use easy prime, big forces. Long technical women save conditions; fast alone rooms sell. Ne\tBooks                                             \tcomputers                                         \t3.77\t344.40\t0.09254851191989362\nEconomic customs should not put unpleasant shops. Colonial, middle goods used to see. Closely explicit legs continue\tBooks                                             \tcomputers                                         \t3.32\t8481.54\t2.27919252551990282\nHuman windows take right, variable steps. Years should buy often. Indeed thin figures may beat even up to a cars. Details may tell enough. 
Impossible, sufficient differences ought to return \tBooks                                             \tcomputers                                         \t4.47\t5466.16\t1.46888784528468556\nLeft diff\tBooks                                             \tcomputers                                         \t0.74\t3269.32\t0.87854442796151745\nMale levels shall reduce else high, local conditions; further personal agencies control. Successful days wake eve\tBooks                                             \tcomputers                                         \t6.55\t2376.38\t0.63859010672531010\nWide governments conform widely in proportion to a friends. So living rooms wear too clothes; most essential measures should not bring previously pains. Real accounts become also gue\tBooks                                             \tcomputers                                         \t9.35\t11110.42\t2.98563541755233586\nPlaces transform \tBooks                                             \tcomputers                                         \t3.10\t5805.20\t1.55999599708875273\nAppropriate effects beg passages. Running contracts must keep only upper sons. Safely available reports intend perhaps w\tBooks                                             \tcomputers                                         \t5.81\t8969.60\t2.41034591323077181\nFriendly, hot computers tax elsewhere units. New, real officials should l\tBooks                                             \tcomputers                                         \t3.19\t2999.57\t0.80605615534133364\nPerfect members state democratic schools. Genuine, enormous knees must afford around the implications. Matters will indicate with a months. Still regular machines would not l\tBooks                                             \tcomputers                                         \t4.07\t2110.95\t0.56726272136265806\nKinds relieve really major practices. Then capable reserves could not approve foundations. Pos\tBooks                                             \tcomputers                                         \t7.23\t1522.62\t0.40916438797755059\nOnly increased errors must submit as rich, main \tBooks                                             \tcomputers                                         \t6.94\t8287.27\t2.22698753303826016\nMeals ought to test. Round days might need most urban years. Political, english pages must see on a eyes. Only subsequent women may come better methods; difficult, social childr\tBooks                                             \tcomputers                                         \t7.23\t15325.54\t4.11833891222069241\nSystems cannot see fairly practitioners. Little ca\tBooks                                             \tcomputers                                         \t1.73\t2428.71\t0.65265242852777245\nPast beautiful others might not like more than legislative, small products. Close, wh\tBooks                                             \tcomputers                                         \t3.02\t4174.86\t1.12188467036552578\nMain problems wait properly. Everyday, foreign offenders can worry activities. Social, important shoes will afford okay physical parts. Very\tBooks                                             \tcomputers                                         \t1.40\t939.26\t0.25240161238640906\nSchools offer quickly others. Further main buildings satisfy sadly great, productive figures. 
Years contribute acti\tBooks                                             \tcomputers                                         \t4.11\t885.92\t0.23806787944271822\nChief me\tBooks                                             \tcomputers                                         \t2.62\t9675.59\t2.60006230094948754\nTiny, rare leaders mention old, precious areas; students will improve much multiple stars. Even confident solutions will include clearly single women. Please little rights will not mention harder com\tBooks                                             \tcomputers                                         \t1.45\t3092.13\t0.83092923972956056\nGuidelines should investigate so. Usual personnel look now old, modern aspects. Discussions could appear once br\tBooks                                             \tcomputers                                         \t2.44\t9923.72\t2.66674076280396839\nFlat pleasant groups would go private, redundant eyes. Main devic\tBooks                                             \tcomputers                                         \t2.83\t2445.21\t0.65708637291417851\nPopular, obvious copies should believe still difficult parts. Forms ought to soften characteristic\tBooks                                             \tcomputers                                         \t1.05\t2156.19\t0.57941979069847684\nReal, domestic facilities turn often guilty symptoms. Winds get naturally intense islands. Products shall not travel a little clear shares; improved children may not apply wrong c\tBooks                                             \tcomputers                                         \t5.28\t1338.00\t0.35955258115219995\nDirections would ask yet profits. Forthcoming, specified discussions ought\tBooks                                             \tcooking                                           \t0.58\t5750.02\t2.05632295473197783\nThen irish champions must look now forward good women. Future, big models sign. Then different o\tBooks                                             \tcooking                                           \t85.81\t2279.71\t0.81527020830049933\nBlack ears see sensibly glad months. Equal members must afford approximately o\tBooks                                             \tcooking                                           \t8.37\t10363.44\t3.70617485886789408\nConsiderable benefits should govern. Well experienced years provide please in an towns. Exc\tBooks                                             \tcooking                                           \t4.18\t0.00\t0.00000000000000000\nValuable studies should persist so concerned parties. Always polite songs include then from the holes. There conventional areas might not explain theore\tBooks                                             \tcooking                                           \t1.58\t1326.45\t0.47436523408687831\nMeanings occur in a things. Also essential features may not satisfy by the potatoes; happy words change childre\tBooks                                             \tcooking                                           \t3.46\t1262.55\t0.45151330717055917\nThen dominant goods should combine probably american items. Important artists guess only sill\tBooks                                             \tcooking                                           \t6.67\t5569.20\t1.99165808110116677\nLibraries shall note still. Children would not concentrate. Local, public modes must mean low children. 
Outer, good years should vis\tBooks                                             \tcooking                                           \t1.42\t2178.99\t0.77925070784648269\nChildren ought to miss historical effects. Honest girls may not think activities. Woo\tBooks                                             \tcooking                                           \t4.42\t348.88\t0.12476651428114901\nSingle, past rates mark blue, evident discussions. Only literary children ought to publish exactly really recent themes; conscious, ready conditions would adopt advanced, ideal provisions. A\tBooks                                             \tcooking                                           \t4.99\t9499.97\t3.39738059698316657\nStandards could lead no longer ago great tactics; difficult lives might feel french, easy costs. Students drop certainly unabl\tBooks                                             \tcooking                                           \t3.05\t16321.01\t5.83672187356046718\nIndividual, remarkable services take by the interest\tBooks                                             \tcooking                                           \t6.05\t1054.65\t0.37716408016112647\nPositions shall\tBooks                                             \tcooking                                           \t4.21\t2629.53\t0.94037288551281172\nUltimately senior elections marry at l\tBooks                                             \tcooking                                           \t5.06\t7756.87\t2.77401293175881769\nHence young effects shall not solve however months. In order small activities must not return almost national foods. International decades take contributions. Sessions must see \tBooks                                             \tcooking                                           \t1.43\t19276.07\t6.89351084309627374\nMembers need for a regions. Leading needs go at least under the others; old police could play on a drinks. Very similar machines must consider fully effec\tBooks                                             \tcooking                                           \t9.86\t10833.86\t3.87440652490818908\nMainly catholic activities could assume just fat, o\tBooks                                             \tcooking                                           \t2.68\t2262.61\t0.80915490391444210\nPoints trace so simple eyes. Short advisers shall not say limitations. Keys stretch in full now blue wings. Immediately strategic students would not make strangely for the players.\tBooks                                             \tcooking                                           \t1.69\t5132.94\t1.83564271902740482\nProjects become more from a pupils. Details may precede generally; good, marvellous birds could suffer fair\tBooks                                             \tcooking                                           \t9.88\t628.36\t0.22471419087853357\nGreat pp. will not r\tBooks                                             \tcooking                                           \t1.91\t2941.23\t1.05184308300603044\nNew, general students raise therefore to a women; united letters would start black positio\tBooks                                             \tcooking                                           \t4.03\t3747.49\t1.34017789670793138\nProducts may not resist further specif\tBooks                                             \tcooking                                           \t5.37\t8721.33\t3.11892325153523644\nDramatic months deal broadly in a films. 
Almost new occasions may get together sources. Under dry orders wor\tBooks                                             \tcooking                                           \t3.92\t1412.78\t0.50523858073297895\nThus certain stars appear totally even local guests. Urban friends might not take properly various vehicles\tBooks                                             \tcooking                                           \t4.55\t1446.44\t0.51727607462974425\nSomet\tBooks                                             \tcooking                                           \t7.34\t6593.72\t2.35804706645808830\nGenetic properties might describe therefore leaves; right other organisers must not talk even lives; methods carry thus wrong minutes. Proud worke\tBooks                                             \tcooking                                           \t1.08\t119.92\t0.04288580713309846\nUrgent agencies mean over as a plants; then\tBooks                                             \tcooking                                           \t6.47\t9566.59\t3.42120525067902230\nMen could require evolutionary falls; taxes teach dead parents; only financial servants might not buy eastern things. Different payments develop. New inhabitants might not eat w\tBooks                                             \tcooking                                           \t80.50\t3855.42\t1.37877583836799906\nHours ought to cope thus into the eyes. Dark states reduce most for the feelings. National, tragic children shall establish enough typical boats. In order secret hours must mean; sin\tBooks                                             \tcooking                                           \t2.30\t12966.63\t4.63712802990534045\nGuests agree around trying, young costs. Here annual banks appeas\tBooks                                             \tcooking                                           \t58.88\t8031.52\t2.87223330308224573\nWonderful qualities suffer of course light leaders. True clubs used to see early living operat\tBooks                                             \tcooking                                           \t9.91\t4482.62\t1.60307518988467144\nHigh big appeals may\tBooks                                             \tcooking                                           \t36.23\t675.62\t0.24161531867298181\nFinal women should establish on a pupils. Full, northern years might not avoid full\tBooks                                             \tcooking                                           \t60.38\t5877.02\t2.10174071245298770\nLittle part\tBooks                                             \tcooking                                           \t9.90\t4729.36\t1.69131438311366337\nHere other affairs afford directly effective leads. Plants cannot undertake as coming, huge photographs; d\tBooks                                             \tcooking                                           \t0.87\t20785.39\t7.43327407210001090\nStairs might bring early others. Large forms rel\tBooks                                             \tcooking                                           \t1.88\t2350.18\t0.84047169953356678\nNow available m\tBooks                                             \tcooking                                           \t3.55\t1102.96\t0.39444070910208700\nMajor instructions put flatly british, other democrats. Operations represent well upon a stores. 
Thousands will not appear surely \tBooks                                             \tcooking                                           \t1.29\t582.88\t0.20844962693245854\nNew, single products raise too extreme, efficient minutes; hands support leaders. Additional, english things prefer halfway private, slow churches. High white things could f\tBooks                                             \tcooking                                           \t4.13\t2472.08\t0.88406559454294555\nGolden, sure days fill of course. Early free minutes must not express only, cap\tBooks                                             \tcooking                                           \t9.44\t4521.21\t1.61687575106934680\nPurposes hide tears. Small laws award good eyes. \tBooks                                             \tcooking                                           \t55.11\t5382.78\t1.92499053468895684\nYet religious animals ensure also. Rough, real heads resist dead. Civil, evolutionary votes dissuade rapidly left cars. Buyers \tBooks                                             \tcooking                                           \t2.20\t11624.81\t4.15726617427380135\nHere comprehensive years should tend sensibly particular front sales. Official, coherent tears regulate animals. Rewards cannot w\tBooks                                             \tcooking                                           \t2.50\t2499.59\t0.89390372458156745\nWidely splendid others deprive only. Different, main soldiers discover then other periods. Too main birds must change public, terrible houses. Different, armed females may foster; science\tBooks                                             \tcooking                                           \t4.26\t6853.89\t2.45108909816104214\nNew women add however. Scottish managers place mostly. Normal, financial purposes should lea\tBooks                                             \tcooking                                           \t4.74\t319.20\t0.11415234853973505\nExtra theories drop. Other resources shall know eventually anyway open students. Long-term, liable di\tBooks                                             \tcooking                                           \t6.96\t5834.64\t2.08658477093947276\nSpecial, public skills agree at a parent\tBooks                                             \tcooking                                           \t5.87\t4713.66\t1.68569974692295585\nGentle fans cannot pay else can\tBooks                                             \tcooking                                           \t2.45\t7576.48\t2.70950183478800689\nSound, new offices might equip hot, new reports; calculations fight great scientists. Professional, little issues learn of c\tBooks                                             \tcooking                                           \t66.16\t6628.48\t2.37047794250834265\nWell angry rebels drop in a methods. Studies argue most sometimes residential organisations. Rural, different children know o\tBooks                                             \tcooking                                           \t4.42\t453.06\t0.16202338041795852\nHalf general features used to defend as ready medical pounds. 
Turkish, trying rooms secure with a ci\tBooks                                             \tcooking                                           \t7.08\t683.53\t0.24444409397670770\nAfrican, elected carers would examine proba\tBooks                                             \tcooking                                           \t6.20\t15598.69\t5.57840569437117702\nAlready accessible clubs match all enough o\tBooks                                             \tentertainments                                    \t5.00\t1196.30\t0.46493128593083651\nLikely, various days develop no longer. Officials say before agricultural, rare ho\tBooks                                             \tentertainments                                    \t2.67\t23516.84\t9.13960934734576042\nLess progressive experiences would silence as economic, soviet specialists. Alone legal brothers fight only ears. Methods could not return records. E\tBooks                                             \tentertainments                                    \t8.36\t5931.28\t2.30513887621487248\nStrict heads discuss as categories. Alone, specific markets wait single, human numbers. External, various changes want very relatively nuclear orders. Old, pre\tBooks                                             \tentertainments                                    \t2.32\t4525.09\t1.75863572068274594\nInstances used to lower out of a costs. Irish supporters sign in a types. Bad things shall participate clear \tBooks                                             \tentertainments                                    \t1.58\t3570.57\t1.38767006737947580\nTrustees may encourage today necessary, political tears; inner, foreign times pay in the historians. Areas may belie\tBooks                                             \tentertainments                                    \t1.79\t17322.75\t6.73233171726021741\nRare, radical beds say over readers; han\tBooks                                             \tentertainments                                    \t7.10\t7808.46\t3.03468807902658165\nL\tBooks                                             \tentertainments                                    \t1.63\t4264.23\t1.65725481685601518\nAlways constitutional advertisements go for a spaces. Cars spend bad difficulties. Rights encourage further great qualities. Blue, high homes would produce u\tBooks                                             \tentertainments                                    \t2.63\t3974.52\t1.54466161878945775\nCompanies ought to record now detailed, good roads. Muscles shall not argue significantly territorial months. Clearly new periods could write in a committees. Figures will not find more there\tBooks                                             \tentertainments                                    \t3.07\t7276.45\t2.82792715498740725\nFalsely large trees shall reflect against a \tBooks                                             \tentertainments                                    \t0.70\t957.09\t0.37196446079707792\nDeep patterns shall find british, american expectations. Sufficient patients must see. English, large assets could not meet for the proceedings. White, chinese matches shal\tBooks                                             \tentertainments                                    \t0.56\t1499.01\t0.58257681762365897\nParticular, deliberate things rain however original ways. Virtually old deaths consider women. 
Notably w\tBooks                                             \tentertainments                                    \t9.71\t1611.84\t0.62642718708915783\nNew, previous police outline right in a persons. Wealthy quest\tBooks                                             \tentertainments                                    \t2.07\t5037.58\t1.95781037146155928\nDoors cannot happen here severe, old rates. Inevitable, int\tBooks                                             \tentertainments                                    \t2.29\t1047.84\t0.40723363591888968\nLimitations respond. Bare rivers will not create yesterday. Well local persons may unders\tBooks                                             \tentertainments                                    \t8.95\t2096.28\t0.81470045646668390\nSo perfect changes would twist again; legal standards like improvements; rights used to tell working stations. Official, immediate loans listen much possible pictures. Always d\tBooks                                             \tentertainments                                    \t6.32\t1017.52\t0.39545003933824690\nPrisons take angry, logical sums. Now old grounds cannot help so increased problems. Blue, negative designs would freeze. Small payments ask alike to a hundreds. Exte\tBooks                                             \tentertainments                                    \t2.62\t11202.91\t4.35391068500161131\nHigh, official employees shall not start too left circumstances. Patients used to touch obviously popular, senior members. British, impossible theories make only. Young, international wo\tBooks                                             \tentertainments                                    \t4.85\t1041.70\t0.40484737988309988\nNow old tears give. Other kids coincide up a animals; specific procedures remove future, french levels. Coming, strong values a\tBooks                                             \tentertainments                                    \t5.08\t24460.84\t9.50648649682223761\nLarge women establish today polite, easy horses. Details sha\tBooks                                             \tentertainments                                    \t5.06\t1748.58\t0.67956996401650263\nPlans would not claim; most particular horses will not tell simply cases; more british enquiries could not smile blue men. Old, dangerous play\tBooks                                             \tentertainments                                    \t0.95\t6942.27\t2.69805108950854163\nPieces threaten\tBooks                                             \tentertainments                                    \t0.69\t1273.35\t0.49487607869266126\nCases can accept gmt sudden services; tools show all also capable meals; important, spatial days would not happen human, cold backs. Red, economic effects must s\tBooks                                             \tentertainments                                    \t9.58\t1334.73\t0.51873086622959576\nFinancial gods might presume divine, tiny \tBooks                                             \tentertainments                                    \t8.42\t731.84\t0.28442306469583164\nMarginal, available teeth pay recently little services. Then british times could require more scottish fair tea\tBooks                                             \tentertainments                                    \t95.74\t3018.65\t1.17317130007115240\nNow complete others shall pass. 
Just front advantages could exercise more establish\tBooks                                             \tentertainments                                    \t6.51\t5281.66\t2.05266987849992639\nYoung reasons could not say impossible experiences. Prisoners cancel particularly; forms might k\tBooks                                             \tentertainments                                    \t3.77\t3626.88\t1.40955444480216694\nJust particular actions seem very; necessarily poor eleme\tBooks                                             \tentertainments                                    \t0.26\t6872.96\t2.67111437845958545\nJapanese, efficient sports withdraw recently severe days; factors mean originally impossible items. Quiet workers become from a officers. Pieces explore. For example o\tBooks                                             \tentertainments                                    \t3.74\t16796.75\t6.52790652592057016\nNever able feet go on the provisions. Services play brown studies. Cruel,\tBooks                                             \tentertainments                                    \t9.79\t12846.63\t4.99272774870656373\nInternal claims speculate perhaps through a expectations. Immediate courts appeal to a councils; transactions materialise entirely; fine, serious conditions may not use to a types. Short, large lip\tBooks                                             \tentertainments                                    \t3.11\t5231.34\t2.03311346095579892\nFront, possible foundations hear well. Old, close inches change pointedly for the employees; odd, financial members work under on the arrangements; st\tBooks                                             \tentertainments                                    \t92.23\t225.66\t0.08770073893099771\nLocal words co\tBooks                                             \tentertainments                                    \t2.95\t9381.26\t3.64594271959501737\nHardly local women should tell easily tall, able issues. Important, available conditions could no\tBooks                                             \tentertainments                                    \t2.21\t15740.54\t6.11741996442846214\nGeneral, raw tests would not buy heavy, considerable blues. High, regional modules meet often young, responsible calculations. Things hesitat\tBooks                                             \tentertainments                                    \t2.00\t5567.90\t2.16391449212931922\nH\tBooks                                             \tentertainments                                    \t4.80\t2493.52\t0.96908422644341674\nNew hours borrow new poets. Youngsters mind especially. Laws must add there in a ends. Factors must not take strategic, royal tr\tBooks                                             \tentertainments                                    \t2.30\t4109.90\t1.59727584389128560\nClear materials will ship evidently literally scottish targets. Residential heads make prominent times. Internal, open subjects feel subsequent \tBooks                                             \tentertainments                                    \t0.75\t263.40\t0.10236805208909332\nOther practices get feet. Numbers will not increase now large, simple foreigners. Flowers cover\tBooks                                             \tentertainments                                    \t1.00\t315.51\t0.12262013710945267\nHeavy, formal factors could want then truly serious players. 
Be\tBooks                                             \tentertainments                                    \t4.31\t8757.62\t3.40357061631163789\nMen call tonight particularly mental lines. Recent markets must dress children. Multiple relations should seem relatively about a arts. Funny, real proteins shall keep citie\tBooks                                             \tentertainments                                    \t5.20\t3090.94\t1.20126616144366780\nDirty trials should get. Balls shall win later national programmes. Elements ought to explain apart poss\tBooks                                             \tentertainments                                    \t1.62\t290.34\t0.11283804192690719\nSubsequent, \tBooks                                             \tentertainments                                    \t1.29\t9603.95\t3.73248919461293761\nCountries turn more actually scientific patients. Good writers could not drag perhaps. Suddenly left months cannot announce more overall loans; beds transform far \tBooks                                             \tentertainments                                    \t1.32\t2401.56\t0.93334479565331415\nRoyal, blue men used to convey jobs. Other, technical things would say as mere children; ab\tBooks                                             \tfiction                                           \t0.62\t555.50\t0.18274906106295868\nExclusively ready fields invest right in a courts. Quite glad facts would not imitate usually by a types. More large managers can continue both small matters. Additional, basic scholars s\tBooks                                             \tfiction                                           \t1.11\t3969.44\t1.30587116641899316\nDollars get on a years; separate economies can say. Firms know even sons. Simple, definite members will say most cold, big policies; main, true agents might repeat too. Elements know goods. Great \tBooks                                             \tfiction                                           \t5.03\t149.04\t0.04903135924540659\nWild officials will not watch onl\tBooks                                             \tfiction                                           \t0.47\t6954.51\t2.28790310108543073\nJust minor eyes exc\tBooks                                             \tfiction                                           \t7.11\t16681.12\t5.48777500896227056\nMarried circumstances face human, compulsory hours. Years make sometimes national problems. Difficulties should invest far running, medical centuries; perf\tBooks                                             \tfiction                                           \t2.71\t10221.52\t3.36268799754501063\nOther horses apply able schools; possible enquiries would not describe easily r\tBooks                                             \tfiction                                           \t3.83\t10067.63\t3.31206107944063852\nFirm, local examinations may not sponsor most rural charges. Countries shall help beautiful, different terms\tBooks                                             \tfiction                                           \t7.72\t5090.34\t1.67462620250444840\nAs joint men would so\tBooks                                             \tfiction                                           \t2.13\t2773.11\t0.91230107781152357\nPictures get with a conditions; still gross eyes go that. Personal beings contact thereafter in a systems. 
New, medium goals might not tell; as official years mu\tBooks                                             \tfiction                                           \t5.52\t2061.58\t0.67822107885899974\nEssential, alternative fans let unlikel\tBooks                                             \tfiction                                           \t1.52\t2460.17\t0.80934969856932323\nBasic changes may not see; afraid names seek in excess of a characteristics. Awful scientists shall not want now right eyes. Here used workers will not pray in part\tBooks                                             \tfiction                                           \t2.27\t6034.24\t1.98515156476786280\nLocal companies would restrict yet most imaginative days. Married, str\tBooks                                             \tfiction                                           \t99.71\t7003.69\t2.30408239689654919\nDifferent stations may smell; weapons disguise cons\tBooks                                             \tfiction                                           \t1.47\t1671.19\t0.54979010505455611\nPrivate, quiet populations shall receive more somewhat proposed machines. Heads protect abroad parent\tBooks                                             \tfiction                                           \t74.86\t3243.16\t1.06693869464796593\nCircumstances should include parties. Good investigations fall as deposits. Characters might force at all convenient, special years; \tBooks                                             \tfiction                                           \t5.18\t12.59\t0.00414187340914968\nOld, official cases look enough; actual emotions go statistical, wild limits. Mental cities hear above mod\tBooks                                             \tfiction                                           \t2.55\t769.44\t0.25313130070978025\nTimes should not get on a lists; different students undermine suddenly groups. Even actual modules may stay for a \tBooks                                             \tfiction                                           \t8.31\t638.38\t0.21001502358482729\nTechniques render eventually dark tiles. Only, other centres would bid at the falls. Sorry, full days write for a groups. Both \tBooks                                             \tfiction                                           \t2.99\t6665.04\t2.19267291079579140\nTowns see even afraid, mean factors. Soldiers spend areas; resu\tBooks                                             \tfiction                                           \t48.40\t9444.91\t3.10719790157362568\nLoud young standards remove enough green values; important students cannot receive particular police; significant authorities should not expect\tBooks                                             \tfiction                                           \t52.22\t8870.17\t2.91811924206809036\nGood, bad cats could not finance libraries. Concerned names get at \tBooks                                             \tfiction                                           \t0.13\t5959.16\t1.96045165566866039\nYears take critics. 
Again academic areas look high under a w\tBooks                                             \tfiction                                           \t90.57\t742.90\t0.24440013944855446\nAmbitious, isolated mines should\tBooks                                             \tfiction                                           \t9.67\t5292.65\t1.74118239070183305\nWives must file upon a respects; anywhere growing wounds may not develop yet for a demands; quite key sides could not make fresh men. Dead times\tBooks                                             \tfiction                                           \t18.03\t6121.11\t2.01373016230978759\nThus separate stars will touch lightly commercial great institutions. Personal, brief hands will not concern always smart rules. Dead \tBooks                                             \tfiction                                           \t4.96\t2769.10\t0.91098186316730672\nDifficult decisions retain concerns. Accordingly parliamentary cases worry only inadequate, good scores. Responsible adults exist still well silly\tBooks                                             \tfiction                                           \t2.74\t2397.93\t0.78887390818127904\nNecessarily royal losses ought to say courses. True, current \tBooks                                             \tfiction                                           \t0.62\t5056.32\t1.66343426180712733\nOthers reflect much up to a paintings; twice narrow cases cannot wear however hard major wings. Popular bacteria go\tBooks                                             \tfiction                                           \t8.71\t3061.36\t1.00712991102736127\nUsually sure students give. Popular resources may give especially full, fine paintings. Ever possible borders shall not free. New bodies help apart. Further main readers could esca\tBooks                                             \tfiction                                           \t3.51\t11100.42\t3.65182958128620664\nCommunications move afterwards different errors; warm goods give at all. Twins could return f\tBooks                                             \tfiction                                           \t0.34\t5726.99\t1.88407208859937665\nNew, united books ought to earn things. Home domestic bands shal\tBooks                                             \tfiction                                           \t3.36\t8480.61\t2.78996132266631505\nDifferent, expensive years used to learn humans. Normally parliamentary cards benefit. Certain consequences used to encourage. More new proposals could not prom\tBooks                                             \tfiction                                           \t3.33\t8887.28\t2.92374811053755431\nGood levels ask quiet, particular objects. Previously rural re\tBooks                                             \tfiction                                           \t4.72\t3395.05\t1.11690765033626979\nLarge hearts used to say annually. For example separate criteria should admit gay ministers. 
Growing, ordinary\tBooks                                             \tfiction                                           \t1.92\t3430.77\t1.12865885908724888\nPlans mi\tBooks                                             \tfiction                                           \t4.76\t533.80\t0.17561016884861808\nCitizens can b\tBooks                                             \tfiction                                           \t4.61\t584.00\t0.19212502549193136\nPersonal, sympathetic text\tBooks                                             \tfiction                                           \t0.15\t3428.40\t1.12787917362420799\nSocial, private books ought to demand merely social companies. Alive, swiss police will rest again victorian, married commentators. Standard, european territories attend to a comments. Books atte\tBooks                                             \tfiction                                           \t2.81\t3504.94\t1.15305939528714023\nFavourably present words can make small, economic cases. About eastern years give less views. Only possible workers may accept even requirements. Negative goods imp\tBooks                                             \tfiction                                           \t4.00\t4392.10\t1.44491836380669814\nProvinces complement more. Participants cannot lie swiftly then total muscles. Unions surprise perio\tBooks                                             \tfiction                                           \t2.17\t1757.38\t0.57814499537501769\nNew, novel individuals used to pay at the rates. Especially social values sleep too unaware cattle. Also immediate changes give almost chains. Swee\tBooks                                             \tfiction                                           \t1.98\t11006.58\t3.62095798472428397\nAlso good forms\tBooks                                             \tfiction                                           \t4.30\t2992.89\t0.98460456771326445\nMo\tBooks                                             \tfiction                                           \t6.72\t9516.74\t3.13082862174671717\nThen wild sciences will know in a chemicals. Extremely\tBooks                                             \tfiction                                           \t5.84\t10044.66\t3.30450438109209457\nLikewise high penalties might afford never square, thin\tBooks                                             \tfiction                                           \t1.65\t209.10\t0.06878997059993638\nEnough little accountants light only important, great systems. Determined sk\tBooks                                             \tfiction                                           \t0.36\t6117.14\t2.01242410691389210\nPrimary, good features assess then early, bad c\tBooks                                             \tfiction                                           \t4.63\t2352.74\t0.77400724739021675\nMass attitudes may like occupational state\tBooks                                             \tfiction                                           \t6.40\t528.87\t0.17398829149300982\nAdditional officers shall not apply so poin\tBooks                                             \tfiction                                           \t9.09\t6890.24\t2.26675947884507726\nIn order financial glasses must kill convenient, important papers. Shy cities like below fragments. Patients ma\tBooks                                             \tfiction                                           \t6.94\t8176.49\t2.68991155767897573\nGoods keep points. 
Again sensitive windows must not cause closely female, individual powers; gaps derive suddenly sincerely other hands; other houses may not imagine under for a data\tBooks                                             \tfiction                                           \t7.80\t6049.19\t1.99006983382797303\nPretty realistic facts may not work without a guidelines. Overall patterns t\tBooks                                             \tfiction                                           \t15.95\t13032.24\t4.28736205859069780\nMechanically whole rooms might like then please specialist relatives. Als\tBooks                                             \tfiction                                           \t3.90\t6774.40\t2.22865029570640375\nImportant enterprises could flow without a countries; ugly, previous things see even de\tBooks                                             \tfiction                                           \t0.82\t887.04\t0.29181949077459382\nExcellent, relevant concentrations seem exciting, local children. Units should not reinforce current lips; pure feet shall show always into a minutes. Commonly primit\tBooks                                             \tfiction                                           \t2.70\t4113.69\t1.35332670567791628\nConservative, available\tBooks                                             \tfiction                                           \t2.01\t2510.09\t0.82577244047438695\nBlack women treat really users. Expert, hard authorities should produce good indians; little, other details could waste. Ideas shall build. Low day\tBooks                                             \tfiction                                           \t0.72\t9472.17\t3.11616592930463604\nHouses appear again scientific tests. Naked pieces shall not believe experiences. Coming, good measu\tBooks                                             \tfiction                                           \t1.86\t2113.81\t0.69540376735462230\nRates should not turn exactly enormous flowers. Happy practitioners should believe suddenly natural organisms; al\tBooks                                             \tfiction                                           \t2.51\t3437.58\t1.13089922111396129\nConstitutional, good pupils might not begin below level devices. External savings fit hardly. Parents shall dry. Actually literary companies improve a\tBooks                                             \tfiction                                           \t4.22\t439.55\t0.14460368999140142\nEyes come no longer. Commercia\tBooks                                             \tfiction                                           \t0.20\t5344.48\t1.75823348671424196\nFamous authorities will demand at last growing teachers. Over immediate schools should go only so \tBooks                                             \thistory                                           \t2.40\t4151.41\t1.32043953348399043\nCivil, english books could search either young institutions; incidentally major difficulties could not clinch little nevertheless old papers. Special subjects sail late workers. Low, national part\tBooks                                             \thistory                                           \t1.01\t1167.75\t0.37142639855517278\nAt first close areas may \tBooks                                             \thistory                                           \t0.09\t9795.83\t3.11576095719008192\nOnwards current types may allow; other sectors might carry nowadays marginal conditions. Minutes add well faces. 
Urban, possible women could not oppose never markets; galleries must favour gently vehe\tBooks                                             \thistory                                           \t59.17\t3685.92\t1.17238106697707767\nWeapons wi\tBooks                                             \thistory                                           \t3.85\t1690.46\t0.53768483810882242\nOdd, only premises present previously obvious strengths. Widely different times should not ke\tBooks                                             \thistory                                           \t1.88\t8472.00\t2.69469017217677053\nAll female calls see ever fresh, widespread lawyers. Results could not want initially \tBooks                                             \thistory                                           \t1.77\t439.46\t0.13977910092832903\nLogical suggestions should evacuate in common equivalent, distinctive women. Fruits know formal pensioners\tBooks                                             \thistory                                           \t1.85\t10800.83\t3.43542144149575407\nRegular, elderly circumstances should not stop sole, different sites. New group\tBooks                                             \thistory                                           \t2.98\t383.28\t0.12190992082057514\nAlso quiet users fall. Other, current sources would c\tBooks                                             \thistory                                           \t0.43\t10191.59\t3.24164039327845288\nSimilarly legislative games could expect others. Central, special miles get all to a problems. Rights pass different, glad eyes. Most local tanks\tBooks                                             \thistory                                           \t9.29\t367.56\t0.11690985831979388\nMilitary areas used to help sometimes sooner certain children. Unlikely proceedings say; wages recognize now managerial years. New events stay full, royal communities\tBooks                                             \thistory                                           \t6.86\t9156.39\t2.91237419093692870\nWildly sexual powers achieve local, comfortable songs; artistic, very shares might start. Miners used to sleep very italian partners. Book\tBooks                                             \thistory                                           \t4.58\t3997.52\t1.27149172061851791\nArchitects influence around enough visual interests. Days think already other issues. Regardless lucky rules mean to a shoulders. Women accept only.\tBooks                                             \thistory                                           \t1.44\t5541.90\t1.76271287360557656\nNever possible applications will not contribute still bad, golden resources; force\tBooks                                             \thistory                                           \t5.60\t5573.65\t1.77281160034856670\nArmed profits forget now s\tBooks                                             \thistory                                           \t9.04\t494.12\t0.15716481443295395\nHundreds go over electronic fa\tBooks                                             \thistory                                           \t7.68\t898.62\t0.28582418348931652\nIn short new acres marry perfectly for a c\tBooks                                             \thistory                                           \t1.58\t186.93\t0.05945685008085502\nHostile, certain contents would carry; others can get great, prime rates. 
Expensive, national shows produc\tBooks                                             \thistory                                           \t1.95\t3076.78\t0.97863182577314023\nOrigins help still already common hands. Probably official increases could inform more recent, \tBooks                                             \thistory                                           \t34.26\t5002.56\t1.59116492772953555\nSafe films go behind amo\tBooks                                             \thistory                                           \t4.48\t6872.36\t2.18589246360490448\nAncient, yellow sets anger other men. Beautiful, vari\tBooks                                             \thistory                                           \t3.24\t2349.53\t0.74731532108527947\nWheels shall include tables; only central days shall see lovely, jewish artists. Genes ought to climb therefore;\tBooks                                             \thistory                                           \t2.02\t6800.22\t2.16294688416429633\nBranches attend fair true banks. Rigid cigarettes like by a places. Stations shall not let thus. Kids hold into a achievements. Streets used to set twice actual, wonderful areas; surroundings r\tBooks                                             \thistory                                           \t6.21\t12377.05\t3.93676994753783023\nThen sp\tBooks                                             \thistory                                           \t1.91\t8909.36\t2.83380132582446085\nParliamentary pieces shine never tragic patterns. Great human eyes would not get groups. Plant\tBooks                                             \thistory                                           \t6.03\t953.70\t0.30334348645006918\nTropical, different relations would not work eyes. Level customs might aff\tBooks                                             \thistory                                           \t0.31\t10335.72\t3.28748384163962355\nReady, imperial shops see impossible assumptions. Clinical holders ask. Other rules would not avoid at a panels. Unusual, particular rights cannot go yet golden substance\tBooks                                             \thistory                                           \t4.56\t2768.79\t0.88066940531413131\nVery valid police should not like away pictures. New, special principles survive from a\tBooks                                             \thistory                                           \t4.76\t8944.55\t2.84499421382716393\nFully classical offices cannot go different, new roads; proceedings mean asian, only groups. Earlier academic affairs \tBooks                                             \thistory                                           \t3.37\t10650.60\t3.38763776531939474\nBig, special things find however happy agencies. Current firms reduce especially at a eyes. Imports want reasons. Little controversial de\tBooks                                             \thistory                                           \t4.36\t1262.68\t0.40162079634137920\nAdditional, human standards should not dream also silly forms. More independent friends may believ\tBooks                                             \thistory                                           \t4.39\t5255.61\t1.67165257504650106\nConfidential, full terms make incorrectly elderly, real methods; teeth slip much today unknown conditions. 
Years shall not undermine occasionally local, industrial lips; restrictions beat most things\tBooks                                             \thistory                                           \t1.38\t7182.03\t2.28438924188842437\nIndependently mean findings must not take today police. White, yellow features try even grateful examples. Sweet \tBooks                                             \thistory                                           \t2.06\t4957.80\t1.57692810854792173\nFilms cope \tBooks                                             \thistory                                           \t1.22\t14315.87\t4.55345068403685835\nHours used to use always local, upper budgets. Only royal strategies confuse already key windows. Open, short habits broadcast just. Working-class lights will display previous measures. Soviet scho\tBooks                                             \thistory                                           \t0.75\t4671.20\t1.48576920824741861\nOpponents bring also waiting, other things. There massive characters contact\tBooks                                             \thistory                                           \t58.48\t1594.66\t0.50721371930635138\nBoys form so go\tBooks                                             \thistory                                           \t4.24\t12750.46\t4.05554051613940340\nTomorrow soft actors could not go for the needs. Considerable times used to allow following visitors; months must not avoid about economic farmers. Tears start at firs\tBooks                                             \thistory                                           \t1.76\t10852.02\t3.45170345163665691\nYears would land in a trees. Areas establish above civil tests. Within so-called thanks like just. Ill acts prevent. Most \tBooks                                             \thistory                                           \t8.83\t11890.89\t3.78213697136863066\nAllegedly great plans respond able, cheap facts. Today local banks might allow at least tr\tBooks                                             \thistory                                           \t7.32\t75.87\t0.02413198103907597\nEffects shall not come in short southern firms. High, afraid years smell anyway governors. Wages can think deep, educational participants. Quietly probable \tBooks                                             \thistory                                           \t88.42\t7756.02\t2.46695831789500422\nParticularly particular contents destroy feet. Essential, fatal wo\tBooks                                             \thistory                                           \t2.76\t7308.24\t2.32453287345481131\nPopular, current dogs shall not nominate respectively. More labour connections take further feet; holy, neighbouring months can leave. Attempts should investigate \tBooks                                             \thistory                                           \t0.64\t2234.94\t0.71086766447176010\nGreen discussions might offer most. 
Grateful feet ought to go still \tBooks                                             \thistory                                           \t47.36\t12676.50\t4.03201604905557503\nMajor, grateful charts talk system\tBooks                                             \thistory                                           \t3.78\t1685.71\t0.53617400497404436\nForward slight interests provide on a cases; successful areas must come internal, present en\tBooks                                             \thistory                                           \t4.36\t1180.89\t0.37560584011116933\nSoon sure forests cope; guilty, e\tBooks                                             \thistory                                           \t6.82\t3323.19\t1.05700748740275284\nGrey words need. English, swiss measures help separat\tBooks                                             \thistory                                           \t3.59\t4100.58\t1.30427202859119708\nParliamentary, monetary charges shall evaluate by a observations. Urgent, suitable problems give just at the rises; earlier big others stay always guilty terms. S\tBooks                                             \thistory                                           \t1.16\t6557.12\t2.08562403467702379\nLovely years help. Possible, good years must imagine even necessar\tBooks                                             \thistory                                           \t35.72\t11655.58\t3.70729188822239413\nOther, current movements would get in a products.\tBooks                                             \thistory                                           \t8.87\t18347.84\t5.83589992075918761\nLegal, independent teachers cut. Perhaps common wives might carry already states. Courts rally regions. Besides financial ways could not suffer notably political\tBooks                                             \thistory                                           \t3.66\t1239.86\t0.39436243589177180\nMajor, front faces wonder very desirable teachers. Prospective, national plans take industrial, separate locations. Capitalist children save head, economic features. Techniques l\tBooks                                             \thistory                                           \t1.92\t1668.04\t0.53055370571267001\nTrends work to a co\tBooks                                             \thistory                                           \t4.91\t3816.03\t1.21376517206465081\nAlone sole services keep only; stairs shall eliminate for the woods. Methods must need yet. Other students can\tBooks                                             \thome repair                                       \t2.39\t1754.10\t0.73033351855711644\nAlive reforms remember to a rocks. Neighbours could find together with a maps. So anxious circum\tBooks                                             \thome repair                                       \t2.84\t819.94\t0.34138855550180837\nRefugees can help as natural publications. Serious, active feet carry alone as well sharp coins. New reasons pay absolutely cautious changes. Practical memb\tBooks                                             \thome repair                                       \t4.33\t4572.72\t1.90388842538994214\nAbove northern firms can restore either in a tories. 
Then natural children used to supply publicly chosen things; extra, available circumstances must pay \tBooks                                             \thome repair                                       \t0.40\t2992.66\t1.24601784826699738\nHere different\tBooks                                             \thome repair                                       \t4.50\t3368.22\t1.40238524820389416\nChief \tBooks                                             \thome repair                                       \t4.04\t3930.58\t1.63652831729675090\nBlack, relative workers make soft, important cases. Previous p\tBooks                                             \thome repair                                       \t9.53\t10606.18\t4.41596759469250173\nTaxes disregard earlier for the aims. In part heavy years continue less settings. Breasts accomplish. Weak, appropriate duties mu\tBooks                                             \thome repair                                       \t9.96\t6044.52\t2.51668408847207200\nMembers defeat at all new, only bills; original abilities convince; already exciting systems lead shapes. New, real travellers should pursue again short vehicles. Important, only\tBooks                                             \thome repair                                       \t80.60\t1171.18\t0.48763012956144099\nProfessional managers take at least at a applicants. Vulnerable areas must regulate more with a employees. \tBooks                                             \thome repair                                       \t0.38\t2026.22\t0.84363284987788637\nCompletely foreign parties cope with the terms. Children would take terribly visual, total things. Yet good songs will work all right m\tBooks                                             \thome repair                                       \t2.78\t1190.62\t0.49572412853570149\nActivities bring brief, yellow practitioners. Polish representatives will not prevent for the examples. Annual, ashamed standards use\tBooks                                             \thome repair                                       \t7.44\t5309.96\t2.21084417661338922\nPerhaps european sectors may say practices. Just true years can tell interesting relations. Then private years could not persuade before quickly continuous levels; pale, constitu\tBooks                                             \thome repair                                       \t4.28\t61.23\t0.02549359862108901\nChief levels must attack about for a parties. Branches complete really. Just following aims shall not arrive together experienced friends. Actually \tBooks                                             \thome repair                                       \t7.44\t7424.19\t3.09112069160056914\nStates should not hold services. Western manufacturers could not mean even large exercises. Facilities maint\tBooks                                             \thome repair                                       \t7.52\t5601.60\t2.33227081554617381\nFree, particular nurses get either. Great, evolutionary million\tBooks                                             \thome repair                                       \t0.89\t1230.96\t0.51252000912323588\nMilitary, inc computers ought to maintain entirely even burning sections. Able, outer papers may not cause thus useless, pretty walls. 
Always im\tBooks                                             \thome repair                                       \t73.73\t6564.64\t2.73324019683073308\nDiverse, remaining bits ought to listen along a relationships. Distant stages jail relatively. Short, false applications could appear p\tBooks                                             \thome repair                                       \t1.52\t1742.72\t0.72559536483658741\nHouses help general, new attitudes. All central shoes cannot watch. Effects boost to a details. Figures get intently normal, common leaders. Ne\tBooks                                             \thome repair                                       \t1.01\t19637.84\t8.17637123542653418\nEven able courses should not vote. Appropriate findings might wait legal things. Sheer, interested levels inform in a meetings.\tBooks                                             \thome repair                                       \t2.99\t3714.58\t1.54659499536052312\nTomorrow different years mean highly in a circumstances. Financial fi\tBooks                                             \thome repair                                       \t0.35\t7727.05\t3.21721886697837445\nOpen, l\tBooks                                             \thome repair                                       \t6.35\t1419.57\t0.59104928620838367\nExpenses look away both complete manufacturers. Male advantages use here books. Right rich falls used to say; simple visitors mind records. Conventional profits might arrange\tBooks                                             \thome repair                                       \t7.60\t414.17\t0.17244298123299750\nEuropean, local terms bring even everywhere working days; much nice choices grow merely major, black rates; final, other talks can know for example also industrial\tBooks                                             \thome repair                                       \t8.57\t772.24\t0.32152828024089140\nInternal exhibitions shall die soon direct movies; services could follow at once social, outer sciences\tBooks                                             \thome repair                                       \t2.25\t1729.95\t0.72027847353507987\nHowever broad boots may not obtain extraordinarily da\tBooks                                             \thome repair                                       \t2.68\t2701.11\t1.12462868155168622\nPolitical, standard statements damage as elegant preferences. Tremendous girl\tBooks                                             \thome repair                                       \t4.06\t16118.92\t6.71124084085324406\nBritish runs wish underneath appropriate pounds. Unable, complex results must not look at the origins. Extra employees find so early thanks. Competent\tBooks                                             \thome repair                                       \t5.60\t15.48\t0.00644522140542966\nNew, immediate seconds may not give also lines; relevant groups break little golden, political eyebrows. 
Able cattle doub\tBooks                                             \thome repair                                       \t3.96\t1518.63\t0.63229370690747035\nVast, delicate tem\tBooks                                             \thome repair                                       \t0.83\t336.52\t0.14011278471286747\nCorporate stones relieve together early things; forward line\tBooks                                             \thome repair                                       \t8.20\t7293.74\t3.03680679416269454\nWords should agree completely level times. Very gentle hours would not interpret. Gr\tBooks                                             \thome repair                                       \t8.23\t3906.80\t1.62662732472432730\nHowever great occupations find very academic homes. Surprised writings suit as free, short shows. Originally possible preparations should accept as yet similar children. Hours re\tBooks                                             \thome repair                                       \t1.86\t2705.71\t1.12654392822255033\nMembers may not cut probably area\tBooks                                             \thome repair                                       \t0.87\t8868.24\t3.69236242096172529\nSimilar seats would not see now light soldiers. Rather possible countries take white, proposed boys. Guilty, famous models would not invest often like a fears. Plainly new classes prevent little\tBooks                                             \thome repair                                       \t3.02\t3962.44\t1.64979348228234450\nExternal hours will not begin never old, empty word\tBooks                                             \thome repair                                       \t1.92\t275.50\t0.11470662126588312\nSections will not kick for a systems. Political, lacking arms used to say other authorities. Savi\tBooks                                             \thome repair                                       \t53.64\t8876.73\t3.69589730014338536\nPlanes play sometimes economic, wonderful comments. Responsible, primary costs can bring stra\tBooks                                             \thome repair                                       \t8.00\t3496.76\t1.45590390191538823\nOf course british lawyers shall describe at least extremely active men. Proposals may gain. Also lexical differences attend bad teams; academic, major contexts could not hold less stead\tBooks                                             \thome repair                                       \t4.97\t855.34\t0.35612762770802348\nPolitical, local children will distinguish as necessarily new managers. Directly resulting questions \tBooks                                             \thome repair                                       \t6.97\t13643.34\t5.68051337271024974\nIssues become at a materials; more complete others should apply seco\tBooks                                             \thome repair                                       \t3.96\t2603.64\t1.08404627002796343\nReal earnings exceed there from a shoulders. Practical days shall not spend now systems. Ages might not sit much. Probably \tBooks                                             \thome repair                                       \t0.86\t1450.51\t0.60393140185980444\nScientific contracts transform only variable contacts; just important relations could tell generally on a values. 
Possible\tBooks                                             \thome repair                                       \t1.94\t8305.21\t3.45794039202767748\nExtraordinary, economic obligations intend multiple, public patients; again enthusiastic supporters should stop greatly labour, mad trus\tBooks                                             \thome repair                                       \t2.73\t1640.87\t0.68318930539582445\nRemarkably political plans would locate separate problems. Sensible areas will not join home social \tBooks                                             \thome repair                                       \t6.39\t3591.09\t1.49517894940726030\nHours might need etc with the holders. Early demands drive usually; at all other responsibilities see so equally italian issues. Simple, senior operations must t\tBooks                                             \thome repair                                       \t6.30\t4254.02\t1.77119513973681346\nSpanish, unique colleagues put through a applications. Years will confront normally by no appearances; colleagues will not own still. Sympa\tBooks                                             \thome repair                                       \t2.68\t5243.74\t2.18327295171238458\nBritish demands can travel easy conditions. Inevitably small pat\tBooks                                             \thome repair                                       \t0.78\t3069.27\t1.27791503249632335\nAble prices would leave mainly in a matters. Ostensibly necessary schools get far private sales. Laboratories question possibly rare sectors. Likely hands could respond up to good\tBooks                                             \thome repair                                       \t2.22\t5893.46\t2.45378905323278233\nSystems cannot show. Global pains sha\tBooks                                             \thome repair                                       \t6.41\t748.19\t0.31151487101604752\nDark, fun calculations must not take away interested feet. High, local films could show normal, visual glasses. Concerned, indian chiefs stick at least. Cultural condition\tBooks                                             \thome repair                                       \t1.87\t2172.50\t0.90453769401136507\nSentences might treat in a persons. Prisoners look best heavy investigations. Western, emotio\tBooks                                             \thome repair                                       \t2.92\t1731.95\t0.72111118947893383\nJapane\tBooks                                             \thome repair                                       \t8.75\t326.81\t0.13606994880545649\nDemocratic, sure places lose in a friends. Other, essential volunteers borrow other, other nurses; foreign hours get indeed enormous designers. Helpful, professional powers lower far from. C\tBooks                                             \thome repair                                       \t4.46\t7443.09\t3.09898985726998908\nDutch, quick households ring fortunately small, automatic pubs; objectives st\tBooks                                             \thome repair                                       \t93.40\t4131.30\t1.72009968942193442\nIndustrial, difficult children shall use crops; errors can reach frankly boards. 
Apparent, special arms may not see always other inter\tBooks                                             \thome repair                                       \t3.19\t722.52\t0.30082696187668193\nSuddenly various forms must not involve then local, other economies; continuing, still others cannot know directly only comprehensive products. Odd books go enough southern cases\tBooks                                             \thome repair                                       \t7.64\t10446.87\t4.34963760618481448\nRather little years should not reach more new magistrates. Political speakers may lower considerably gates. Kinds would not depend well. Provisions raise. Almost difficult pensions pick yet organi\tBooks                                             \tmystery                                           \t4.25\t327.20\t0.10733361870342104\nRoyal plants find however workers. About genetic peasants come welsh, marine men. So federal eyes develop. Home old services \tBooks                                             \tmystery                                           \t4.32\t7859.96\t2.57835559188307223\nWhite changes come much matters.\tBooks                                             \tmystery                                           \t3.16\t3490.58\t1.14503845591010823\nLater other operations see; expected, honest animals show respons\tBooks                                             \tmystery                                           \t2.82\t18416.84\t6.04140000697406092\nRoyal advantages succumb again english, new regulat\tBooks                                             \tmystery                                           \t0.58\t3081.67\t1.01090095583671001\nCentra\tBooks                                             \tmystery                                           \t1.36\t6619.98\t2.17159660496416018\nCountries keep much french, addit\tBooks                                             \tmystery                                           \t4.87\t25157.14\t8.25246599152989476\nAlways silver months must capture only left mass miles. Characteristics should fall total ways. Courses might work in a spirits; key sources would live again up the records; thoughts can inspect ofte\tBooks                                             \tmystery                                           \t9.69\t3901.52\t1.27984187054942315\nPrimary, single engineers seem new centuries. Close ladies date. Necessary, likely hands cannot retain generally inc prices. Opini\tBooks                                             \tmystery                                           \t1.81\t10328.03\t3.38797320897766992\nA\tBooks                                             \tmystery                                           \t0.11\t6325.20\t2.07489793711148765\nHills may not die reforms. Better \tBooks                                             \tmystery                                           \t5.64\t2254.23\t0.73947024232827876\nOnly present circumstances cannot fall from a players. Sharp relations will blame late eyes. Closest different problems should not write i\tBooks                                             \tmystery                                           \t4.33\t9175.56\t3.00992071647421134\nAlso strategic consultants proceed slightly eyes. Almost stran\tBooks                                             \tmystery                                           \t2.26\t23865.71\t7.82882951475068011\nNow top documents might mitigate usually ethnic sheets. Big times come partly high records. 
Social years can seek social, major r\tBooks                                             \tmystery                                           \t2.68\t5730.79\t1.87990962325604602\nDouble workers ought to face with the objects. Satisfactory, other participants help politically urgent, \tBooks                                             \tmystery                                           \t3.56\t2094.56\t0.68709261733324441\nNational specialists go practical chapters. Enough right women stare again for example literary cameras. Most industrial cells shall improve possible, availab\tBooks                                             \tmystery                                           \t3.03\t4124.34\t1.35293501516891054\nFortunes could meet emotional meetings. Beautiful women replace beautifully in the things; less previous year\tBooks                                             \tmystery                                           \t5.11\t102.48\t0.03361720429317417\nAvailable solicitors emerge. Further true weeks manufacture changes; families save up to right things. Gre\tBooks                                             \tmystery                                           \t3.50\t2151.90\t0.70590224354490139\nPresent, regular adults cannot l\tBooks                                             \tmystery                                           \t7.59\t522.99\t0.17155993045752497\nEspecially simple sources absorb shortly accessible, new years; glad chapters restrict so southern districts. Modest, particular years could not discard only free men. Now black things could ge\tBooks                                             \tmystery                                           \t3.35\t3104.40\t1.01835723075458519\nDays must appear on the police. Direct, late developments should serve always for the papers. Meetings take yesterday women. Medium periods \tBooks                                             \tmystery                                           \t7.03\t1997.98\t0.65541082975874440\nSufficient, whole judges may not show even almost vo\tBooks                                             \tmystery                                           \t75.13\t1924.56\t0.63132637289687040\nWords take here free goods. Efficient sales could not ask only. Please local women can talk less than useful permanent colleges. Always free members mus\tBooks                                             \tmystery                                           \t5.23\t4082.90\t1.33934117299571443\nRegional, able services should transfer old, social preferences. Other courts might talk a li\tBooks                                             \tmystery                                           \t1.16\t954.39\t0.31307497663312349\nHuge, difficult init\tBooks                                             \tmystery                                           \t34.65\t621.18\t0.20376985717051064\nDifficulties would offer changes. Payable pounds give electric, sure weeks. Tired houses shall not get together most important pools. Bones shall not give foreign, new troops. \tBooks                                             \tmystery                                           \t4.33\t12111.11\t3.97288894503419799\nVery dead processes may enable drugs. Early units work long police. Easily difficult opportunities ought to seem extra, common eyes. 
Just quiet subjects must finance ch\tBooks                                             \tmystery                                           \t4.70\t475.66\t0.15603395193297449\nAlso rich lines want noticeably often social difficulties. Animals go; sexual, central cats ought to tolerate. Groups sha\tBooks                                             \tmystery                                           \t3.23\t150.35\t0.04932032265299313\nAlso significant \tBooks                                             \tmystery                                           \t4.93\t1060.69\t0.34794528124245618\nFine, sure centuries would not form now angry, dead insects; customers cannot pray totally as male laws. Unique procedures reinforce rarely also\tBooks                                             \tmystery                                           \t2.81\t5986.79\t1.96388702664258571\nIntermediate, subj\tBooks                                             \tmystery                                           \t9.70\t10978.67\t3.60140702827227219\nHot eyes must invest patently common laws. Whole arts discourage small studies. Policies could need. Reasons hope really independent, international departments. Effective, afraid attitudes\tBooks                                             \tmystery                                           \t0.97\t251.85\t0.08261605094882821\nPrices find under way around the languages. Civil, effective products should last really at a hundreds. Main, capable groups will contribute; only indian regulations take now in a feet; total\tBooks                                             \tmystery                                           \t2.73\t625.40\t0.20515417217946063\nAdvances accept. Lists must not act also old comments. Objectives shall know as to the months; live years can pay possible, inc attempts. Russian years see further pro\tBooks                                             \tmystery                                           \t1.42\t15186.66\t4.98178231607119854\nClean systems can realise often s\tBooks                                             \tmystery                                           \t2.73\t3145.42\t1.03181329750035026\nDistinguished, huge levels return pretty characters. Months cannot ask right. Overseas studies c\tBooks                                             \tmystery                                           \t6.54\t1642.06\t0.53865599611289594\nVoluntary, clear techniques go. Before domestic students ought to live supreme, easy journalists; hands will run overseas such as the skills. Technical, official doctors would\tBooks                                             \tmystery                                           \t5.72\t1966.05\t0.64493661690666545\nGood, local rules follow normally high lines. Whole, male activities know again. \tBooks                                             \tmystery                                           \t4.01\t5929.90\t1.94522501696031914\nYears will appear original\tBooks                                             \tmystery                                           \t4.79\t1653.40\t0.54237593265353407\nProblems eat very in a persons; dead ideas happen british things. Short bags should test usually to a others. Also inner visitors expose nevertheless coming, peaceful me\tBooks                                             \tmystery                                           \t4.72\t5511.42\t1.80794820536188504\nExpensive rates take as at once white careers. Parts drive all weeks. 
Therefore other years s\tBooks                                             \tmystery                                           \t0.55\t181.72\t0.05961083493516403\nFurthermore little classes say spots. Like days used to provide costs. Friends\tBooks                                             \tmystery                                           \t4.03\t13223.74\t4.33787245413562633\nYears might give also. Ultimately private stars should make \tBooks                                             \tmystery                                           \t2.78\t1284.36\t0.42131725708412545\nGood, low facilities suggest too thereafter asian senses. Far holidays defend delicate members. Cautious reports treat on a talks\tBooks                                             \tmystery                                           \t0.25\t5386.71\t1.76703874451682502\nStrange, necessary weeks hope all. Dead sons know too. Heavy, social waters used to move pupils. Heels provide. Eastern trees used to allow currently bad children. Articles would not clear\tBooks                                             \tmystery                                           \t4.09\t5477.40\t1.79678839573997066\nBitter, nice students like general books; maximum, holy members draw indeed sure, strong lines; forests must not adapt opposite, r\tBooks                                             \tmystery                                           \t6.38\t2322.45\t0.76184890818386367\nEveryday, low cases could contribute again through a developments. Useful, unable answers might not assign local da\tBooks                                             \tmystery                                           \t1.87\t8562.04\t2.80866362067065732\nFree, peaceful years should not help ahead animals. Then helpful others \tBooks                                             \tmystery                                           \t27.03\t92.46\t0.03033027623874789\nHowev\tBooks                                             \tmystery                                           \t3.41\t6376.36\t2.09168029631951644\nSorry theories decide there wages.\tBooks                                             \tmystery                                           \t2.59\t4969.90\t1.63030975426079530\nOther courses discuss full leaves. Connections excuse; objective, international sessions go. All expensive surve\tBooks                                             \tmystery                                           \t3.01\t1617.54\t0.53061253544477894\nBanks will employ of course real, dead resources. Sisters shall not go short effects. Hopes run c\tBooks                                             \tmystery                                           \t3.63\t4915.26\t1.61238582722548074\nSeconds preve\tBooks                                             \tmystery                                           \t4.51\t2037.80\t0.66847325242613507\nRight developments would not seek variables; numbers like impatiently \tBooks                                             \tmystery                                           \t3.84\t11928.22\t3.91289430712261892\nLimits ought to eat less; actual costs would smash more main rules; magnetic, constitutional expressions can head years. Quickly western children may not wonder also useless, other millions; comm\tBooks                                             \tmystery                                           \t10.39\t6043.00\t1.98232597134710679\nBritish, quiet residents trace particularly. Years should reduce now libraries. 
Special, general figures gain\tBooks                                             \tmystery                                           \t2.22\t6385.64\t2.09472447719227850\nMost small ministers appear agencies. Industries review so much as solicitors. Far from distant children hear still terms. Particular, available days learn already long-t\tBooks                                             \tmystery                                           \t3.79\t3704.73\t1.21528752206334055\nSizes could not continue home; obligations will not lack notably current buildings. Measures burn there then useful thousands. Historic,\tBooks                                             \tmystery                                           \t7.35\t5443.06\t1.78552361436382311\nInches c\tBooks                                             \tparenting                                         \t0.16\t4582.16\t1.47127016656624148\nCertain signs prepare societies. Economic reasons can i\tBooks                                             \tparenting                                         \t0.98\t1989.28\t0.63873114796229133\nGolden dogs could hear only available feet. Big, serious patterns used to use here with a days; otherwise long reasons should not trave\tBooks                                             \tparenting                                         \t1.58\t566.43\t0.18187308178852684\nLuckily economic c\tBooks                                             \tparenting                                         \t9.18\t122.92\t0.03946796464425564\nMen become most so living studies; private nurses come frequently in a feet. Points will \tBooks                                             \tparenting                                         \t1.38\t4878.48\t1.56641454732922415\nOther changes mean. Miles form. Local, illegal authorities take again inside the figures. Players would love properly\tBooks                                             \tparenting                                         \t14.38\t2483.90\t0.79754700113786669\nPopular circumstances should not take relations. Secret questions should get after the players. Automatic methods cope please in a effects; unli\tBooks                                             \tparenting                                         \t5.60\t9646.64\t3.09740682115084758\nOriginal, able troops reduce jointly. Crowds move american feet. Cities move. Legs transfer loudly so central germans. Households could c\tBooks                                             \tparenting                                         \t4.02\t877.39\t0.28171817034838474\nTypical, right programmes tell against a reforms. Outside friends can inhibit again either military stairs. International men must launch legall\tBooks                                             \tparenting                                         \t65.75\t4078.44\t1.30953242534752647\nFavorite, small son\tBooks                                             \tparenting                                         \t1.77\t4476.61\t1.43737947613180297\nImproved loans read years. Now constant tears perform now local negotiations. Specifically concerned problems ought to know more than previous steep plants. Cont\tBooks                                             \tparenting                                         \t0.48\t5231.60\t1.67979664686696862\nSo plain prisoners make improvements. Contemporary roots will resume in the computers. Firms accept modern, present names. 
Essential, collective sons cannot examine in the d\tBooks                                             \tparenting                                         \t5.38\t18382.40\t5.90234228178136019\nSoft friends could make clean, brave feet. Rapid standards should not spread problems. Careers use quantities; british, other visitors should pursue wide, sudden sh\tBooks                                             \tparenting                                         \t4.17\t7509.00\t2.41103926548743546\nCrazy years could cry even clergy. Other, philosophical sides might take years. Already senior hours cannot believe early strengths. Fields will not find little jewish councils. Events might not o\tBooks                                             \tparenting                                         \t1.37\t8851.94\t2.84223930160325602\nPrime, flexible records say upwards at least easy schools. Here good investors can spend more at a cus\tBooks                                             \tparenting                                         \t7.33\t6260.65\t2.01021081069035995\nArms shall get thus famous, clear conditions. Royal languages might not understand in a films. Scientific, notable views would achieve like a years. Large, nervous students \tBooks                                             \tparenting                                         \t2.05\t2365.43\t0.75950787185536616\nMain contents set within a communities; rules date at\tBooks                                             \tparenting                                         \t1.39\t1973.40\t0.63363229278371356\nLeaders restructure so. Years used to go from a years. Shoulders supply thus original tracks. Securely necessary\tBooks                                             \tparenting                                         \t2.01\t2314.86\t0.74327052258706151\nFaces may occur existing houses. Ruling, annual arguments allow all but for a elections. Future, spanish subjects take. Then prim\tBooks                                             \tparenting                                         \t8.01\t13033.96\t4.18502987678687100\nHigh fields shall join then. Diffi\tBooks                                             \tparenting                                         \t1.11\t3833.50\t1.23088547399734770\nNarrow, \tBooks                                             \tparenting                                         \t7.17\t950.12\t0.30507079863163167\nVery strong arrangements should not cover parliamentary, fundamental implications. Parents renew then; major, basic structures settle; only long-te\tBooks                                             \tparenting                                         \t7.59\t3460.43\t1.11109769682656629\nPretty eastern facts should not join. Too labour things mean in particular. Closer intensive problems\tBooks                                             \tparenting                                         \t1.18\t11548.91\t3.70820022420834975\nNew friends must not gather by a blocks. Empty opportunities ought to remind else single families. Early years should not use suddenly abou\tBooks                                             \tparenting                                         \t4.28\t11681.79\t3.75086621137015165\nSource\tBooks                                             \tparenting                                         \t8.78\t5480.98\t1.75986922271292103\nGood countries need once again. Most economic patients appear there only real trees. 
Apparently jewish policies\tBooks                                             \tparenting                                         \t9.76\t3680.94\t1.18190050258400862\nSmall, true kids can go genuine objectives. Scottish games give ever. Scientific, similar trees remark. Boot\tBooks                                             \tparenting                                         \t8.58\t10853.90\t3.48504182763005404\nWidespread lands get curious, certain reasons; issues ought to accept sales. Easy, other others might bomb large payments. Econo\tBooks                                             \tparenting                                         \t4.78\t8024.99\t2.57671673926541680\nForces can measure now groups. Resources form rat\tBooks                                             \tparenting                                         \t4.43\t6742.48\t2.16491996627563242\nEqual voices build. High streets would harm simply individual, black methods. Substantial rooms land as current savings. Again very opportunit\tBooks                                             \tparenting                                         \t7.81\t26.70\t0.00857301217053063\nOverall, high heads cannot see explicit, bad bodies; opportunities can accommodate little leaders. Light times u\tBooks                                             \tparenting                                         \t6.61\t13341.53\t4.28378648177900984\nMeanwhile thorough roads put also more other trees. Never religious costs want just especially direct nights. Young, excellent aud\tBooks                                             \tparenting                                         \t2.67\t3546.05\t1.13858913135993082\nCommon circles may win children. Tiny things must need as beside a words. Permanent yards remain fully. Slight, general ways avoid new, possible arts; therefore educational conditions ou\tBooks                                             \tparenting                                         \t4.26\t9853.55\t3.16384284917348778\nSites will not manage most generally immense woods. Fine employers avoid in a men; reasons ought to think here; only corresponding areas\tBooks                                             \tparenting                                         \t58.45\t12923.27\t4.14948880123795580\nRecords face long lips. Main researchers will know unequivocally ameri\tBooks                                             \tparenting                                         \t1.24\t16478.74\t5.29110256835243338\nCorners would not descend often plain new activities. Just local trusts think \tBooks                                             \tparenting                                         \t8.15\t9940.76\t3.19184481139790637\nOpen, large roads might tell friends. Used, old arms will drop as good as natural others. Sad programmes participate\tBooks                                             \tparenting                                         \t4.27\t2597.90\t0.83415087332664917\nDays could meet just. Folk might alter possibly tories; serious, basic things wait suffici\tBooks                                             \tparenting                                         \t5.54\t8776.83\t2.81812248721641872\nStations may not reme\tBooks                                             \tparenting                                         \t0.88\t3316.92\t1.06501855912645951\nEconomic, free bits post quite issues. 
Perhaps back sales used to affect d\tBooks                                             \tparenting                                         \t0.09\t19263.00\t6.18509114010979749\nGenuine cities say. Practices prove together elsewhere simple\tBooks                                             \tparenting                                         \t1.52\t1712.57\t0.54988327538897554\nSe\tBooks                                             \tparenting                                         \t3.22\t2194.90\t0.70475297427332163\nPartners will not locate. General, other losses cannot restrict else protective kilometres; children carry unusual, long groups. Yet true reservations differ never long-term\tBooks                                             \tparenting                                         \t1.02\t6482.66\t2.08149524634502309\nProfits could not cling through a terms. Later democratic arms might not work all. Sometimes apparent arti\tBooks                                             \tparenting                                         \t6.57\t0.00\t0.00000000000000000\nElse emotional lives declare also c\tBooks                                             \tparenting                                         \t7.67\t4780.68\t1.53501227803042655\nPrevious floors keep complex computers.\tBooks                                             \tparenting                                         \t9.60\t5787.26\t1.85821162599344996\nLists used to miss little names. Prime roads should not help from the minutes; in order various exceptions help \tBooks                                             \tparenting                                         \t1.19\t4186.16\t1.34411987369994445\nTheories look. Just young regions \tBooks                                             \tparenting                                         \t45.83\t1849.39\t0.59381434374747746\nForeign, simple stocks may draw still; \tBooks                                             \tparenting                                         \t2.55\t18500.06\t5.94012133091936148\nCareful, long customers may think about just professional meetings. Students could not drink. British, basic commentators remember espec\tBooks                                             \treference                                         \t1.77\t6207.69\t2.15509748883540916\nBills emerge later in a yards. Ev\tBooks                                             \treference                                         \t2.72\t1496.80\t0.51963772696266090\nExamples will talk there estimated, short initiatives. Benefits ought to prove too negative \tBooks                                             \treference                                         \t0.17\t6141.90\t2.13225745272044827\nSorry services must not recall much main details. Sexual, major secrets will not go results. P\tBooks                                             \treference                                         \t7.54\t1423.78\t0.49428768231887850\nFlexible, previous patterns must not manipulate essential, dull criteria. Much possible players will include firmly working, important duties. Far english busi\tBooks                                             \treference                                         \t6.38\t13587.29\t4.71704201709145697\nFunds shall call more able countries. 
\tBooks                                             \treference                                         \t0.39\t913.90\t0.31727479868464444\nIndivi\tBooks                                             \treference                                         \t3.76\t2162.13\t0.75061752979541556\nHitherto certain kinds evade also by a months. Poor points might make even just selective passengers. Old, general qualities could overcome over; recent variables might s\tBooks                                             \treference                                         \t56.16\t1298.61\t0.45083294268504882\nDifficult, rapid sizes say so; initial banks stress high single sports; prisoners used to think likely firms. Good, current services must take human, precise persons. Signals m\tBooks                                             \treference                                         \t7.77\t9585.22\t3.32766029745927077\nRoyal, educational days can add black, long-term matters. Different executives should not remai\tBooks                                             \treference                                         \t4.86\t9194.30\t3.19194625401709854\nClassical, labour books make in addition finally significant suggestions. Ethical figures could sell as the levels. Regardless plain scholars set in a companie\tBooks                                             \treference                                         \t80.47\t2466.20\t0.85618022597228374\nCruelly shared examples shall not investigate then in vit\tBooks                                             \treference                                         \t0.28\t610.19\t0.21183708218555990\nMale, small legs allocate today to a programs. Video-taped circumstances afford short, royal changes. Planned, appropriate names can enter usual periods. Very consta\tBooks                                             \treference                                         \t4.40\t9663.14\t3.35471145438399721\nOften other ideas must not understand possible, static groups. Late\tBooks                                             \treference                                         \t8.13\t705.22\t0.24482824546272563\nPossible solutio\tBooks                                             \treference                                         \t2.63\t10773.86\t3.74031542023913264\nStill short documents ought to give more longer individual parties. Brief, expensive reforms should give now. As perfect sect\tBooks                                             \treference                                         \t1.16\t4401.20\t1.52794599405936875\nGreat speeches would draw too particular, full things. Available, real lives shall like long, supreme skills. Grim men would n\tBooks                                             \treference                                         \t4.95\t7141.72\t2.47936073450278901\nEver only sides should not ensure clearly familiar, running points. Persons bear free, huge products. Organizations blame. Recent, parliamentary communities complain both perfect, l\tBooks                                             \treference                                         \t5.85\t4618.08\t1.60323930660858167\nDead, blue homes should write more small objectives. Systems could underpin all so blue exchanges. Better adult arts make very governments. 
Quick managers talk and \tBooks                                             \treference                                         \t2.83\t3913.25\t1.35854645579678832\nDamp, happy roads \tBooks                                             \treference                                         \t4.29\t12407.36\t4.30741070818241603\nItalian pati\tBooks                                             \treference                                         \t4.42\t7902.99\t2.74364762146488472\nClasses used t\tBooks                                             \treference                                         \t1.61\t7530.59\t2.61436308811313771\nDangerous parents would not advise almost previous, important matters.\tBooks                                             \treference                                         \t7.62\t1064.34\t0.36950241736734266\nUtterly free reasons control powers. Resources think too systematic sy\tBooks                                             \treference                                         \t5.69\t6131.92\t2.12879273831966837\nTherefore secondary countries get eventually prospective lives. Directly complete wings see as g\tBooks                                             \treference                                         \t6.19\t4028.40\t1.39852259439897325\nAt present pink police would not endorse yet bright rules. Photographs shall te\tBooks                                             \treference                                         \t5.24\t7033.41\t2.44175920977849331\nEqual, strong requirements use broadly remote pictures.\tBooks                                             \treference                                         \t96.89\t15194.39\t5.27497212866393982\nRelative, possible papers may change only current, tropical services; following procedures bring ever delicious questions; never convenient women may want secondary ch\tBooks                                             \treference                                         \t3.67\t2.16\t0.00074987806670186\nEyes alleviate yet; major women get that blue scientists. Wild interests suffer forthwith years. Women might complete in a commitments. Japanese, victorian\tBooks                                             \treference                                         \t8.24\t12242.59\t4.25020820399238554\nClear points create however from a bases. Social, wrong rates contribute. More whole legs find now now unha\tBooks                                             \treference                                         \t0.65\t9377.23\t3.25545328861977061\nGlad, certain others ought to protect narrow, american friends; thi\tBooks                                             \treference                                         \t9.25\t2557.68\t0.88793895076019410\nLong son\tBooks                                             \treference                                         \t6.53\t13751.99\t4.77422021967747397\nHistorical arguments can point much big times. Lines bri\tBooks                                             \treference                                         \t7.40\t4482.72\t1.55624694776193163\nTypes shall serve quite possible emotions; hard weekends appear months. There difficult colours form probably. Rules know however green manufac\tBooks                                             \treference                                         \t4.01\t2684.41\t0.93193526899775290\nAlso real addresses give in a advantages. Perfect, interested humans could fall never at a years. 
Sophisticated interp\tBooks                                             \treference                                         \t8.60\t936.71\t0.32519364993532475\nMuch political attitudes must not understand more. Holy years shall not link large friends. Now occasional supporters may write also. Southern difficulties used \tBooks                                             \treference                                         \t3.32\t7569.18\t2.62776021524000108\nActions cannot go perhaps publications; huge, willing girls wo\tBooks                                             \treference                                         \t9.60\t2251.62\t0.78168539469779966\nSuccessful solutions find clearly as socialist problems; individual systems\tBooks                                             \treference                                         \t9.20\t2974.66\t1.03270013421081565\nToo nuclear windows ought to contemplate for example active, constitutional appeals. Again short partners clear to the issues. There political sheets end s\tBooks                                             \treference                                         \t3.51\t295.80\t0.10269163524556059\nCities regard only. Operations used to make later; personal, written years used to interfere for a agreements. Obvious, sufficient protests tell. Issues pay effective own\tBooks                                             \treference                                         \t2.70\t445.16\t0.15454431489490789\nHere special fruits sti\tBooks                                             \treference                                         \t2.31\t6938.36\t2.40876110318589515\nYears decide pot\tBooks                                             \treference                                         \t4.03\t15341.75\t5.32613047677004465\nStructures drop home revolutionary, formal hands. Ears\tBooks                                             \treference                                         \t3.42\t1450.10\t0.50342508542794934\nPredominantly on\tBooks                                             \treference                                         \t8.46\t11177.59\t3.88047665721577287\nReally different purposes answ\tBooks                                             \treference                                         \t81.85\t4832.22\t1.67758138494355241\nKinds play sooner; old causes would publish. Great,\tBooks                                             \treference                                         \t2.90\t463.44\t0.16089050520014402\nRelations preclude most primary records. Hardly common f\tBooks                                             \treference                                         \t3.01\t45.64\t0.01584464581679305\nParticularly natural children put hardly. Parties weep into a days. Heavy hands will not take mad, lonely children. Ye\tBooks                                             \treference                                         \t4.55\t1000.50\t0.34733935450704318\nLittle, num\tBooks                                             \treference                                         \t4.79\t11088.98\t3.84971429819241545\nDemocratic, fresh operations shall not explain fully decisions; contra\tBooks                                             \treference                                         \t1.68\t140.25\t0.04868999946987787\nAs progressive minutes apply as firms. 
Involved,\tBooks                                             \treference                                         \t4.35\t18398.21\t6.38722877109947712\nBoth gross guns ought t\tBooks                                             \tromance                                           \t22.07\t2932.20\t1.53691964340235494\nMatters care too expressions; economic\tBooks                                             \tromance                                           \t5.87\t4968.70\t2.60435598941862117\nInternal, additional structures pretend trains. Useful payments should make fingers. \tBooks                                             \tromance                                           \t0.64\t4689.33\t2.45792353570560163\nFollowing, very poli\tBooks                                             \tromance                                           \t1.59\t7979.33\t4.18238490491430082\nLikely weapons see. Items improve half. Short, human resources depend white, local texts; fully permanent way\tBooks                                             \tromance                                           \t6.42\t22088.52\t11.57775059057560371\nFull days keep full, visible bottles. Big, domestic countr\tBooks                                             \tromance                                           \t4.62\t11680.82\t6.12252974184813303\nTeachers arise clear often old services. Other minutes could cost by a attempts; open conscious goods detect yet disastrous stones; thus slight men tell for a countries. Capitalist bodies wou\tBooks                                             \tromance                                           \t0.25\t4832.22\t2.53281967097801228\nNew, small beds will come instead in a stories. Female, other systems could not\tBooks                                             \tromance                                           \t4.36\t9867.04\t5.17183261654620160\nPart-time architects buy. Silently national skills understand free parts. Only european millions shall not attend at all other informal words. Empty, redundant holes contain again acceptable relatio\tBooks                                             \tromance                                           \t1.12\t1104.46\t0.57890535071010332\nSimilar consumers will live once on a eyes. More likely teams pass particularly. Just other workshops \tBooks                                             \tromance                                           \t3.59\t1239.88\t0.64988606761534406\nFuture years can reform as before social suppliers; particular, judicial individuals resume vaguely remaining aff\tBooks                                             \tromance                                           \t0.52\t6031.54\t3.16144611757964666\nCrucial, different affairs could not forgo; public p\tBooks                                             \tromance                                           \t5.62\t4775.42\t2.50304781512054902\nFor example new resources find perhaps necessary opportunities. Main systems move spontaneously necessary m\tBooks                                             \tromance                                           \t6.68\t3560.08\t1.86602444720136955\nRather aware thanks may not work with a chi\tBooks                                             \tromance                                           \t2.35\t2220.62\t1.16394328440493058\nIslands meet only for\tBooks                                             \tromance                                           \t6.79\t2450.58\t1.28447736843630822\nMinutes will defend. 
Now new courses could know definitely international forces. There capital accounts should not lift more pro\tBooks                                             \tromance                                           \t72.49\t1876.47\t0.98355623874743093\nMore simple principl\tBooks                                             \tromance                                           \t6.44\t6567.15\t3.44218738018203917\nLate, dark looks would not make citizens. Safe, great curtains use as by the children. Signs would prove neither romantic moveme\tBooks                                             \tromance                                           \t4.68\t2862.64\t1.50045960302479959\nProblems inherit. Sure edges must become enough revolutionary years. Systems burst however slowly strong issues; cultural site\tBooks                                             \tromance                                           \t1.60\t775.70\t0.40658501036327902\nPossible, common bars cannot rid mainly ultimate years. Drugs could bring of course large, good rules. S\tBooks                                             \tromance                                           \t3.33\t273.51\t0.14336092069673900\nStandard, geographical scales may hope equal, sure problems. Strong associati\tBooks                                             \tromance                                           \t7.58\t4049.00\t2.12229303462797052\nProbably just results receive perfectly on the countries. Bold girls will pass religious years. Here public conditions ought to consider most sources. Different, able years go rarely ita\tBooks                                             \tromance                                           \t5.44\t1710.73\t0.89668322132109361\nEven sure children build there imaginative novels. Real, quick members shall not exercise unlikely, vast times. Open regulations buy all catholic days. Domestic, palest\tBooks                                             \tromance                                           \t6.42\t49.14\t0.02575684853584057\nSilver, political interviews might know in common families. Far possible houses shall insist in a places. Whole, political gardens would adopt eggs. Others might live even offi\tBooks                                             \tromance                                           \t6.13\t5432.94\t2.84768849581419762\nCultural, harsh conditions describe\tBooks                                             \tromance                                           \t4.72\t1495.08\t0.78364975801718601\nDistinctive hours work more federal, proper plants; crimes may ensure therefore; players work increasingly previous, genuine needs. Hostile, young schools will offer very new, implicit changes;\tBooks                                             \tromance                                           \t47.76\t1911.06\t1.00168666998175583\nParticular bombs could illustrate suddenly planes. Western months expect just special, relevant readers. Able demands ought to achieve for a cars. Suitable counties must stud\tBooks                                             \tromance                                           \t0.88\t1663.75\t0.87205854195166361\nLevels tear only. Colleagues may not see hot forests. So effective residents must help completely in a hands. 
However professional classes ought to seem very; political\tBooks                                             \tromance                                           \t4.81\t1069.40\t0.56052856785160575\nSo only things know prac\tBooks                                             \tromance                                           \t2.71\t3443.44\t1.80488731221519852\nWays used to contain only double cigarettes. Intensely increased feelings \tBooks                                             \tromance                                           \t76.83\t18974.38\t9.94546666099883214\nViews balance quite other degrees. Slow passages promote due major animals. Sons would say. Possible, other schemes cannot restart either important, new \tBooks                                             \tromance                                           \t3.75\t745.80\t0.39091285384676227\nPremier, good budgets could put high, slow members; traditions could not join however. Students laugh for a effects. Carefu\tBooks                                             \tromance                                           \t9.00\t1184.75\t0.62098954625228157\nContacts remove basically blue, labour details. Full measures hold then families. G\tBooks                                             \tromance                                           \t66.85\t845.81\t0.44333333455635558\nSubject children would not like sufficiently great levels. Yet busy hotels must not help behind\tBooks                                             \tromance                                           \t9.33\t1361.15\t0.71345002817581182\nLarge thoughts make\tBooks                                             \tromance                                           \t0.85\t2228.59\t1.16812077896802885\nSpecially clinical muscles can pass causal, following changes. Dishes could use at present areas; even c\tBooks                                             \tromance                                           \t5.00\t276.00\t0.14466606015246230\nTeachers play apparent indians. Professional corners accept consequences; extensively necessary men will not know only economic clean stairs. Divisions could \tBooks                                             \tromance                                           \t0.78\t379.40\t0.19886341747044999\nStages choose physically to a families\tBooks                                             \tromance                                           \t6.13\t1969.70\t1.03242296624023550\nIllegal technologies might distinguish that on a change\tBooks                                             \tromance                                           \t2.73\t1019.24\t0.53423708387607130\nAs single women would get ideas. Rural classes may hear quite available, high sequen\tBooks                                             \tromance                                           \t1.38\t894.27\t0.46873375946573356\nSenior fans cook frequently. Fin\tBooks                                             \tromance                                           \t4.36\t5607.44\t2.93915308819320006\nMammals take at all. Profound weeks must know parts. Too low earnings can share directly new gaps. Equal block\tBooks                                             \tromance                                           \t4.99\t179.00\t0.09382327814235780\nFine, real rows could think short, united others. Twice moving molecules list enough really vague assessments. Days put with a lines. 
Importa\tBooks                                             \tromance                                           \t4.85\t950.33\t0.49811774255322283\nAssociated words produce simply. Frantically tough forms take there across right years. Recent fears appear also fierce examples. Incredibly coastal te\tBooks                                             \tromance                                           \t2.28\t99.82\t0.05232089175514053\nHistorical, new notes should say levels; largely low prisons present at once enough useful winners. Yet worthwhile sons give different, social beaches. Minutes want guns. Industrial\tBooks                                             \tromance                                           \t65.28\t3120.61\t1.63567519555208473\nComplete, foreign makers prevent conservative gardens; full prisoners would look so good goods. Then only cir\tBooks                                             \tromance                                           \t3.56\t510.48\t0.26756931299503245\nLocal, strong letters should not make also ba\tBooks                                             \tromance                                           \t6.39\t3270.83\t1.71441336785680534\nAt all chemical branches make as existing things. Directly civil students must not afford much beautiful companies. Past police offer well perhaps chan\tBooks                                             \tromance                                           \t36.28\t3753.37\t1.96733786302336027\nMinor democrats can wonder impatiently real backs. Early,\tBooks                                             \tromance                                           \t2.77\t1091.04\t0.57187122561138576\nSurely local universities may know perhaps primitive computers. About bad sides will provide carefully about a workshops. National, sheer references ought to develop already also long-t\tBooks                                             \tromance                                           \t5.58\t112.88\t0.05916632199278965\nFinancial things will die only pai\tBooks                                             \tromance                                           \t1.33\t1782.43\t0.93426494781722240\nDebts should not go into a eyes. Legal troops pursue wholly friends. Inc families will meet never; potatoes should give all various users. New women st\tBooks                                             \tromance                                           \t4.80\t6935.94\t3.63548954077488907\nAlso genuine men identify. Gradual, useful things used to see below patterns; superb, hidden ways would fail even huge yea\tBooks                                             \tromance                                           \t2.08\t1555.12\t0.81511986762426513\nGains keep still. Possible, final clothes kill perhaps in the conclusions. Methods would proceed for a hopes. Other, particular ways find perhaps in a demands. Adverse, other men admit eviden\tBooks                                             \tromance                                           \t1.93\t3352.42\t1.75717896150839737\nRacial minutes used to come enough teenag\tBooks                                             \tromance                                           \t3.47\t4982.66\t2.61167315680894137\nThen modern features should improve otherwise available qualifications. Personal purposes go with a years. Ministers remove big arts. 
Linear, poli\tBooks                                             \tscience                                           \t4.66\t527.85\t0.17402980157734269\nOrganizations make enough horrible requirements. Grateful, only funds reassure anxiously yesterday great years. Extra\tBooks                                             \tscience                                           \t5.13\t36276.15\t11.96008560479287668\nAc\tBooks                                             \tscience                                           \t1.13\t11382.07\t3.75261794759766011\nP\tBooks                                             \tscience                                           \t7.15\t115.77\t0.03816885503193893\nConfident views gain to the resources. Jobs could direct kings. Attitudes might not support as aware jobs. Happy accounts cannot test. Professional, joint interests will support in\tBooks                                             \tscience                                           \t78.67\t7479.68\t2.46601728949894583\nContinuous members shall look usually about careful supplies. More than negative sports become probably other leaves. L\tBooks                                             \tscience                                           \t47.51\t97.92\t0.03228378927811575\nObvious relationships put originally. Pounds give well central, british leaves. Differences ought to ask also central states. Tests grant for a chapters. Soon active heads should want \tBooks                                             \tscience                                           \t4.26\t2414.14\t0.79593124027645368\nGently independent fears call now statutory sciences. Friendly, quiet needs stumble too. So famous cattle teach too only services; public forces collect pure friends. Arms might make im\tBooks                                             \tscience                                           \t4.68\t5668.22\t1.86878696958743084\nLater other words comfort historic, social birds. Large, english interests muster there ag\tBooks                                             \tscience                                           \t1.74\t2463.16\t0.81209291664913785\nWays create things. Popular opportunities regard eyes. Intact conditions show years. Variable banks could run legally. Sexual, mechanical dates shall not carry however fingers. Forms\tBooks                                             \tscience                                           \t2.88\t10151.52\t3.34691107570034261\nNow educational levels lift perhaps men. Types use not. Very environments might go for sure at once common p\tBooks                                             \tscience                                           \t71.85\t6430.06\t2.11996223535172516\nLittle, able companies could not combine particles. Private kids participate in common; unable, only detectives introduce; very good skills go. Copies miss. Strategic m\tBooks                                             \tscience                                           \t1.07\t7269.76\t2.39680759745174345\nRegular teachers serve together events. Other arms would not use. Dou\tBooks                                             \tscience                                           \t3.59\t8847.06\t2.91683640493103230\nAware parts hang experienced, new groups. Handsome, perfect forms will grasp tonight in terms of the tears. 
Effective, economic subjects deny in the o\tBooks                                             \tscience                                           \t3.18\t38.60\t0.01272624863291736\nJust essential errors permit never too bad applications. Ideas could buy men. Anxious wives would not pull royal, common towns. Adults\tBooks                                             \tscience                                           \t3.22\t10051.00\t3.31377007796508735\nDomestic copies cannot get additional victims. Pieces should not determine now british, gold depths. Local, available stocks punc\tBooks                                             \tscience                                           \t3.99\t3769.53\t1.24279730593888526\nComplaints can involve very vital adults. A little practical initiatives remain traditionally important months. Clear new transactions create perhaps new, personal princip\tBooks                                             \tscience                                           \t1.15\t3928.72\t1.29528154220505402\nDistinguished, assis\tBooks                                             \tscience                                           \t6.29\t16.68\t0.00549932194811040\nOld prices help general trials. National, prime men ought to compete about a posts. Suspicious, extreme mistakes might not make gently other characters. Acc\tBooks                                             \tscience                                           \t1.53\t3227.96\t1.06424408127232946\nSpanish ranks can deal all but conservatives. Local metres shall not go no longer with a processes\tBooks                                             \tscience                                           \t2.91\t4385.32\t1.44582053510116972\nParticular ears ought to know streets; tears could pr\tBooks                                             \tscience                                           \t1.38\t4417.02\t1.45627188436706299\nUseful examples might understand evidently. Royal shops ought to leave in order. Also huge experts stay continuous, long organisers. Often burning services flee global circumstances. Fine, ex\tBooks                                             \tscience                                           \t2.78\t7923.96\t2.61249443309046200\nAccounts accept\tBooks                                             \tscience                                           \t1.24\t4454.22\t1.46853655921536677\nSmall years turn as beside a problems. Famous, significant attitudes defend again subtle machines. Pp. double less. Human men appear in a regions. Exclusively warm \tBooks                                             \tscience                                           \t1.75\t3606.79\t1.18914265043316062\nCertain, long councillors smile then fresh eyes. Lights attend initially after a preferences; national genes admit. Wide single plans improve never\tBooks                                             \tscience                                           \t2.09\t2209.49\t0.72845904383276100\nProblems could not find small, late years. Demands might get only normal, available communications. Quiet mothers leave women. Fair interes\tBooks                                             \tscience                                           \t0.21\t8916.11\t2.93960188337929509\nMarks remember\tBooks                                             \tscience                                           \t1.41\t1407.04\t0.46389484135906840\nThings prejudice unfortunately. Available lives used to get for an readers. 
Roughly good articles might express open years. Black m\tBooks                                             \tscience                                           \t9.38\t11566.26\t3.81334457287478571\nSmall, stupid members lack hands. Literary terms would understand sure ordinary acids. Lovely,\tBooks                                             \tscience                                           \t0.22\t2581.68\t0.85116843447228203\nConditions must like most still desperate concessions. Parts shall not raise sometimes places. Local, prof\tBooks                                             \tscience                                           \t4.37\t214.32\t0.07066035251313079\nMale, major regulations could get. Books may not bring. Upper, musical girls take well special, curious parents. Criminal, equal knees stop just a\tBooks                                             \tscience                                           \t3.41\t7411.80\t2.44363755485639582\nCourts receive high male limitations. Political, little parents may establish tomorrow unique minu\tBooks                                             \tscience                                           \t9.26\t10412.18\t3.43284952048418299\nLocal, contemporary tanks provoke yet. Well red quantities should spend only deaf new firms. \tBooks                                             \tscience                                           \t2.13\t6975.01\t2.29962983101256232\nYoung officers depend very well unnecessary players. Personnel will consider apart types. Most universal courses could enable arrangements. Magic, equal responsibilities detect; value\tBooks                                             \tscience                                           \t5.89\t6948.34\t2.29083685041567357\nPounds realise fairly formal, casual residents. Good areas shall stick etc disputes. So small police find variable, certain programs. Results think children; dogs will take prices. Old, traditi\tBooks                                             \tscience                                           \t44.25\t3791.67\t1.25009676564698863\nLeft times used to tell trees. Right t\tBooks                                             \tscience                                           \t7.96\t2094.92\t0.69068582347334800\nSo clear employees could tell experiments. Hands would control demands; well ethnic sites afford then bottom programmes; times flow easily premises. Alter\tBooks                                             \tscience                                           \t1.28\t10461.12\t3.44898482121203209\nHowever major deb\tBooks                                             \tscience                                           \t0.66\t2219.28\t0.73168676336945170\nThereafter strange rates shall not inhibit now on a heroes; eyes may not provide.\tBooks                                             \tscience                                           \t8.37\t11495.90\t3.79014719324234879\nDue proposed concepts afford indeed yesterda\tBooks                                             \tscience                                           \t1.34\t10405.19\t3.43054494851671946\nEarnings feel possibilities. 
Single, poor problems make full, sho\tBooks                                             \tscience                                           \t2.75\t17541.34\t5.78330192213830518\nDirect schemes rival pa\tBooks                                             \tscience                                           \t78.33\t9776.79\t3.22336425833730836\nM\tBooks                                             \tscience                                           \t42.63\t5228.32\t1.72375389255063431\nClear spirits shall not co\tBooks                                             \tscience                                           \t2.11\t1098.47\t0.36216068227463034\nNew, political bish\tBooks                                             \tscience                                           \t1.33\t1836.00\t0.60532104896467022\nProfessionally uncomfortable groups would not protect again there dependent users. Standard fields avoid likely families. Independent, intact fortunes work in the\tBooks                                             \tscience                                           \t8.28\t64.98\t0.02142361751727901\nFuture, solar deaths stand much confident, prime horses. New, other hundr\tBooks                                             \tscience                                           \t0.22\t7461.07\t2.45988165511918956\nActs will not reflect as with the problems. General governments distract new, soft fires. Useful proposals restrict hard trees. Large, black customs go official\tBooks                                             \tscience                                           \t3.05\t12762.28\t4.20766705707016963\nRoyal, considerable rooms reply then often c\tBooks                                             \tscience                                           \t0.79\t3487.40\t1.14978029747243514\nSymptoms could not take else. Now rich\tBooks                                             \tself-help                                         \t8.22\t4725.36\t1.53069603755177003\nNormal sports will not afford from a women. Nearly past families would permit \tBooks                                             \tself-help                                         \t4.46\t6912.33\t2.23912593775928744\nThere main prices could bowl acres. Radical, domestic plants take long. Fresh developments wave sanctions. British, able men cover goals. There other men\tBooks                                             \tself-help                                         \t7.22\t5298.60\t1.71638690482244922\nResults\tBooks                                             \tself-help                                         \t0.29\t6602.84\t2.13887217578942752\nAbout statistical blocks shall point so brothers. Even new affairs spend hopefully even old contexts. Possible officers wait absolutely with\tBooks                                             \tself-help                                         \t3.51\t7809.11\t2.52962181374665694\nFacts shall provide al\tBooks                                             \tself-help                                         \t5.02\t1138.39\t0.36876112342521194\nMen shall accept yet. Indians can continue obviously global, efficient times. Profit\tBooks                                             \tself-help                                         \t5.85\t4729.95\t1.53218288613311888\nProper, mutual feelings would bring right over the days. Prices ought to see thus electronic owners; most surprising definitions might not see in part big lads. 
Responsible, tory doors read good, a\tBooks                                             \tself-help                                         \t6.84\t4062.63\t1.31601648192708015\nEarly alternatives lie meanwhile european, new makers. Suspicious purposes speak new, overseas critics. Generally important police must refer approximately virtually other firms. British, appointed c\tBooks                                             \tself-help                                         \t2.07\t157.85\t0.05113269031937184\nSettlements can see so scientific sales; jeans ought to disco\tBooks                                             \tself-help                                         \t0.78\t10137.10\t3.28373262614193372\nNow christian papers believe very major, new branches. Annual wars include harshly so-called sites. \tBooks                                             \tself-help                                         \t5.23\t8239.18\t2.66893531470105824\nMuch g\tBooks                                             \tself-help                                         \t4.52\t725.34\t0.23496094771145497\nParticular prisoners wait at a wag\tBooks                                             \tself-help                                         \t1.99\t210.35\t0.06813912834133586\nGood others run considerably excelle\tBooks                                             \tself-help                                         \t2.72\t567.97\t0.18398374482542681\nVery concerned shares must form also rather nice gardens. Quietly available games may see quite. Short eyes repay. As useful variables should not produce there. Managers use so also total versions\tBooks                                             \tself-help                                         \t26.11\t239.20\t0.07748457094959609\nCreative churches like. Walls objec\tBooks                                             \tself-help                                         \t6.05\t3579.99\t1.15967386770001887\nNow environmental examples enter banks. Royal, new attitudes go prices; almost living tre\tBooks                                             \tself-help                                         \t7.75\t779.81\t0.25260553207443365\nHot steps help right able councils. Provincial mammals ought to establish from a others; forests used to offer true, open practitioners. Key theories could not imagine exact, other races.\tBooks                                             \tself-help                                         \t4.63\t8643.42\t2.79988164814865324\nAware, a\tBooks                                             \tself-help                                         \t2.74\t1189.77\t0.38540475743604073\nCultural notes ignore usuall\tBooks                                             \tself-help                                         \t9.32\t5567.49\t1.80348902138865697\nPositive, recent adults cannot tell fortunately laboratories. Frequent performances may get labour buildings; vocational windows will talk; similar seeds must replace better. Other merch\tBooks                                             \tself-help                                         \t9.69\t10154.63\t3.28941115678050571\nTonight single claims used to compete seriously. Frequently magic advances concentrate very political men. Again damp types will apply also pol\tBooks                                             \tself-help                                         \t0.56\t8790.86\t2.84764220475738421\nAreas increase well final, peculiar findings. 
Fat possibilities will say now at all sure dogs\tBooks                                             \tself-help                                         \t5.11\t3770.90\t1.22151575499093605\nClearly legal servants should not investigate however early difficult women. Increased laboratories tell home samples. Still wooden institutions avoid undoubtedly. Policies will \tBooks                                             \tself-help                                         \t9.11\t9124.75\t2.95579991125554742\nPhysical, political issues must not increase. Teeth go there particular prices. Words mi\tBooks                                             \tself-help                                         \t4.82\t1881.44\t0.60945890956274278\nOld, acceptable respects imply around banks. Rights will not spare so existing reasons. Old eggs must claim. Patients might not stop there military,\tBooks                                             \tself-help                                         \t7.89\t15529.28\t5.03043310182334282\nNational, dreadful opportunities give. Lucky, low rules should start away from the girls. Available words will not leave now. Stor\tBooks                                             \tself-help                                         \t5.53\t6895.58\t2.23370007419989892\nDominant, useful restaurants might not say contrary eyes. Modest years may not confirm again just other stage\tBooks                                             \tself-help                                         \t3.87\t12631.86\t4.09186560365955223\nVarious\tBooks                                             \tself-help                                         \t6.24\t3437.60\t1.11354916846292444\nThere political deta\tBooks                                             \tself-help                                         \t8.83\t4867.67\t1.57679482221664051\nOther, established programmes used to avoid good organisations. Forward, simple changes might enter straight. Papers cal\tBooks                                             \tself-help                                         \t1.63\t3028.98\t0.98118401218606844\nCards insist sad males. Instruments turn later instructions. Economic, white \tBooks                                             \tself-help                                         \t2.64\t3883.30\t1.25792572896557903\nOther, precious services can stem; grounds will set in particular friendly factors. Ports will provide. So complete moments diversify morally different, open pupi\tBooks                                             \tself-help                                         \t6.72\tNULL\tNULL\nMetres must not go more soft attacks. Northern, central changes see all right inherent metres; women shall reduce together always private efforts. Extra, secret dates ought to sa\tBooks                                             \tself-help                                         \t36.51\t215.49\t0.06980413960672434\nOutside, remaining problems must come only new politicians. Readers would not tell right, modern products. Particular threats become legally among a beaches\tBooks                                             \tself-help                                         \t1.38\t24121.05\t7.81358365427991146\nIn order excellent words could go old costs. 
Surp\tBooks                                             \tself-help                                         \t1.45\t3398.74\t1.10096116500514307\nLogic\tBooks                                             \tself-help                                         \t1.29\t3676.91\t1.19106937194932846\nSufficiently great tears may see. Much short standards duck over a pap\tBooks                                             \tself-help                                         \t8.57\t1508.73\t0.48872615689291017\nAgain right years welcome to the months. Once competitive years could benefit great, social projects. Actually old expectations must not spin \tBooks                                             \tself-help                                         \t2.42\t1824.90\t0.59114378564346952\nActions need qualifications. Expert sales see. Guests look evidently dead roots. Activities \tBooks                                             \tself-help                                         \t2.20\t1248.95\t0.40457506223870418\nStill social transactions provide both most existing vi\tBooks                                             \tself-help                                         \t6.50\t2330.32\t0.75486557431129919\nPrime even\tBooks                                             \tself-help                                         \t4.28\t3438.17\t1.11373380979002005\nConfidential, japanese reports discuss ever forms. Initiatives say now pregnant, sad sites. Neither round eyes may ask more w\tBooks                                             \tself-help                                         \t1.72\t3385.13\t1.09655244840554440\nClever, informal negotiations study sharply with a leaders. Professionals come noble officials. Plans continue pa\tBooks                                             \tself-help                                         \t4.69\t2768.44\t0.89678672909573497\nBritish, \tBooks                                             \tself-help                                         \t1.52\t4014.40\t1.30039323419756920\nHighly other times could stay no longer huge symbolic results. Most narrow police chan\tBooks                                             \tself-help                                         \t7.99\t660.44\t0.21393775099477944\nHands can ensure. Dead schools concentrate by a years. Increased authorities should not stop natural, following guards. Principal years might secure. Long, criti\tBooks                                             \tself-help                                         \t4.23\t4140.99\t1.34139980542043446\nRights could not talk. Miners shall clear various outcomes. Relative, western forms locate communist, local prices. Items would not disappear probably likely women. Bare conclusions mark in gener\tBooks                                             \tself-help                                         \t8.57\t3116.42\t1.00950863962684053\nOther changes shall seek \tBooks                                             \tself-help                                         \t2.51\t2862.54\t0.92726874467415049\nSo ashamed periods could give there on the operations. Potatoes must order very noble systems; labour years should not escape so formal, ready \tBooks                                             \tself-help                                         \t1.94\t11014.72\t3.56802196208166835\nAlso crucial weeks will consider just then close parts. 
Long values prove then reco\tBooks                                             \tself-help                                         \t3.91\t65.52\t0.02122403465141110\nSincerely important experiments should hear surprised, unchanged sorts. Else financial democrats will not start so major bodies. E\tBooks                                             \tself-help                                         \t1.90\t5855.42\t1.89675880614416367\nCities practise a\tBooks                                             \tself-help                                         \t2.94\t9089.11\t2.94425496932977984\nNearly northern eyes would not use further buyers. Ever independent advertisements comment also nice, old schemes. Firm members would restore as a doors. Problems \tBooks                                             \tself-help                                         \t8.02\t14009.14\t4.53801087906699247\nEssential, modern goods help friendly roads. Cultures\tBooks                                             \tself-help                                         \t1.13\t8764.28\t2.83903208813597843\nGentlemen construct. Inevitable proposals tell more subject troops. Feelings used to come thus a\tBooks                                             \tself-help                                         \t1.73\t8962.10\t2.90311234660273887\nMiles kiss silently difficult streets. Less social rules see never \tBooks                                             \tself-help                                         \t7.03\t283.44\t0.09181532938943778\nYards shall build gradually steep, possible players. Foreign, wild lines used to understand vital layers. Problems shall go likely, parliamentary rats. Suspicious, wrong thousands \tBooks                                             \tself-help                                         \t7.63\t7823.86\t2.53439981300044683\nResults\tBooks                                             \tself-help                                         \t9.21\t3280.19\t1.06255900829078431\nSmooth, othe\tBooks                                             \tself-help                                         \t8.62\t11533.69\t3.73613303141992873\nAvailable, other responsibilities ban common, english authorities. Participants save little for a years. Well local plans look. As entir\tBooks                                             \tsports                                            \t2.98\t624.68\t0.24146901355107034\nNow beautiful results scream just official payments. Carefully \tBooks                                             \tsports                                            \t4.89\t12518.36\t4.83895120778186737\nAgricultural elections go users. Popular customers could threaten upside down hard, able pages. European, interesting bases spend at a fingers. \tBooks                                             \tsports                                            \t2.47\t7461.50\t2.88423039734153702\nLevels should rethink really typically other women. Elections respond long numbers. Firms might sum nearly present, personal homes. Again clear\tBooks                                             \tsports                                            \t3.91\t6886.83\t2.66209266599525798\nVery social engineers ask facilities. Numerous, stupid \tBooks                                             \tsports                                            \t7.36\t4152.23\t1.60503759066587821\nGreen levels provide. Students would agree. 
Very upper states get finally for a\tBooks                                             \tsports                                            \t1.29\t4251.46\t1.64339478189126194\nIn order\tBooks                                             \tsports                                            \t9.54\t5723.96\t2.21258720433787633\nAs specific characteristics contain for the hours. Free, double men avoid in the meals. Trying, potential institutions share above from the months. Contemporary problems could cheer only heav\tBooks                                             \tsports                                            \t1.58\t1246.89\t0.48198325271610120\nGrounds ought \tBooks                                             \tsports                                            \t1.69\t6467.35\t2.49994337066900616\nCompletely particular voices shall not say straight. Used ideas must recall current colonies. New techniques could not make naturally old, great versions; great adults test\tBooks                                             \tsports                                            \t2.88\t6653.24\t2.57179884055600185\nProcedures order here shops. Late static sciences shall not see cultures. Polite implications cover underway. That is right communications might not say cool principles. Strange keys\tBooks                                             \tsports                                            \t1.34\t2498.12\t0.96564412520362400\nMore big results develop again on a politicians. Characteristics live flowers. Children wipe perhaps appropriate roles. Wrong, external shows want somewhat little ways. Then difficult\tBooks                                             \tsports                                            \t3.64\t4362.77\t1.68642147699654727\nBasic, functional circumstances must \tBooks                                             \tsports                                            \t7.87\t2947.46\t1.13933575379592397\nNeighbours shall not represent overall dramatic trees. Random chiefs could not interfere basic, special fruits. A little poli\tBooks                                             \tsports                                            \t5.46\t3974.85\t1.53647164710487281\nImmediately impossible teachers cut kindly busy, national products. Important, principal communities could die all very video-taped words. Short children doubt windows. Sometimes russian developm\tBooks                                             \tsports                                            \t96.08\t4160.79\t1.60834644440858994\nTwice detailed customers know women; economic, intense values listen wide industr\tBooks                                             \tsports                                            \t0.74\t6802.45\t2.62947571753614401\nSad, very sales could gather hence on a pounds. Issues see just within a things. Eastern directors put very in a initiatives. \tBooks                                             \tsports                                            \t3.99\t5533.59\t2.13899999791263899\nSick organizations cannot cause to the situations. Direct nations seek to a genes. Able, invisible polls c\tBooks                                             \tsports                                            \t52.92\t10879.04\t4.20527479218581719\nLetters help; international directions should hu\tBooks                                             \tsports                                            \t37.74\t460.35\t0.17794752575436260\nAppointments might not hold to a tickets. 
Proper, private areas describe and so on prime, natural calls. Miners shall receive typically safe units. Little times will develop pointedly very mus\tBooks                                             \tsports                                            \t6.13\t3351.79\t1.29562884185557735\nMinisters prove perhaps social processes. Aggressive characters could get open signals. Products try at all public, loyal councils; wholly historical respondents see there from a statements. C\tBooks                                             \tsports                                            \t7.24\t13142.40\t5.08017283039890319\nLikely days shall get. Great users would love even. However acceptable walls\tBooks                                             \tsports                                            \t8.23\t2406.70\t0.93030587647013029\nJust average men might make so faintly free parents. J\tBooks                                             \tsports                                            \t1.41\t9937.58\t3.84135499725434718\nPapers conceive over immediate victims. Local, expert members add ill over immediate tiles. Profits pay even. Tall classes begin for instance grand fields; ru\tBooks                                             \tsports                                            \t0.25\t3880.85\t1.50013610366855243\nGreat, reliable children see french, proper dates. Public passages like closely traditionally academic books. Values used to distinguish leaders. Much key oper\tBooks                                             \tsports                                            \t31.97\t1293.62\t0.50004665638396557\nDual months should survive only large, political eyes; new, new merchants pass fairly conseque\tBooks                                             \tsports                                            \t6.26\t4192.74\t1.62069666369359458\nConversely good eggs would not call too. Police happen present courses. Fine procedures finish well forward private\tBooks                                             \tsports                                            \t6.31\t6912.27\t2.67192645562313022\nReal, japanese systems would need downstairs for the phrases; level waters might not go about existing, little friends. Nation\tBooks                                             \tsports                                            \t5.90\t2794.92\t1.08037167086213344\nDevices take truly afraid, great men. Both true parties hurt even with a proposals. All internal candidates prevent more. Distinctive, prime women would say. Little, english departme\tBooks                                             \tsports                                            \t0.63\t1050.56\t0.40609221821766738\nParents prevent alone little children. Cases might dispose again lives; very strange windows violate officially. Improved, cheap critics should alert plates. Expressions build c\tBooks                                             \tsports                                            \t5.56\t4342.45\t1.67856681484095121\nWrong others miss less to the respects. Especially other standards start in order regula\tBooks                                             \tsports                                            \t7.53\t11059.22\t4.27492307108322362\nAdults will foresee most left, social children. Different eyes make personal counties. 
Readers would not admit more musical proceedings; titles take here away fast institutions; bird\tBooks                                             \tsports                                            \t3.83\t10985.10\t4.24627210853535058\nInternational, coloured contexts think. Relevant, british conservatives ought to happen ago. Perhaps human shops must see animals; rights must h\tBooks                                             \tsports                                            \t44.83\t10933.78\t4.22643444801245737\nYears should comment then central, internal implications; directly collective feet may find around extra, victorian crimes. Short\tBooks                                             \tsports                                            \t2.75\t1868.42\t0.72223463901372038\nSo single phrases could not sum; desirable friends see times. French efforts think\tBooks                                             \tsports                                            \t4.59\t4611.30\t1.78249033455217177\nCentral, visible moments \tBooks                                             \tsports                                            \t57.64\t1362.54\t0.52668756759280813\nOld, straight enemies obtain however years. Largely social questions disrupt never. Measures rule fine, extensive trees. Already economic friends would not show more beautiful engines. Systems ret\tBooks                                             \tsports                                            \t9.99\t4644.12\t1.79517685088812959\nFreely proud children cannot continue countries. Rates shall not look applications. Compl\tBooks                                             \tsports                                            \t4.13\t886.97\t0.34285677618843706\nAlready secondary year\tBooks                                             \tsports                                            \t72.51\t8152.72\t3.15142033706550904\nDevelopers ought to recognize again. British, fast artists shall experi\tBooks                                             \tsports                                            \t79.00\t2317.17\t0.89569820408870728\nPaths check still international attitudes. Immediate\tBooks                                             \tsports                                            \t0.37\t2211.39\t0.85480912127281399\nAll capital bacteria make jobs. Again appropriate eyes may not leave others. There fixed ways\tBooks                                             \tsports                                            \t0.32\t7910.07\t3.05762438371632671\nPapers occur critically relatively happy numbers; related, soviet genes experiment governments; voluntary devices\tBooks                                             \tsports                                            \t2.52\t3864.91\t1.49397452321775512\nIndeed similar changes might drink too national careful areas. Wise, good rooms give large opportunities. Various patients shall research directly plants. International hands can get pieces\tBooks                                             \tsports                                            \t9.31\t3710.53\t1.43429919134861534\nHere familiar rooms would not believe particularly new, fresh rights. Levels allow then wives; temporary, big ears may sound always others. Lovely, essentia\tBooks                                             \tsports                                            \t9.23\t1808.93\t0.69923887859854273\nLines might clear too high eyes. Great women balance as the things. 
Natural requirements\tBooks                                             \tsports                                            \t8.76\t5395.16\t2.08549011197764081\nGeneral, local thanks must soar actually about p\tBooks                                             \tsports                                            \t22.08\t7752.94\t2.99688604392750734\nInc others look in the varieties. Cold methods write values. Partners will make often times. Democratic, dramatic personnel shall not see\tBooks                                             \tsports                                            \t3.64\t473.00\t0.18283736218488870\nOthers used to coincide there so as historical sites; syste\tBooks                                             \tsports                                            \t4.08\t4391.31\t1.69745356646114923\nPoor, major pairs affect complex, redundant results. Different animals could find so great, honest designs. Dull, linguistic studies might not get more cons\tBooks                                             \tsports                                            \t33.21\t1010.07\t0.39044087615663959\nOpen prod\tBooks                                             \tsports                                            \t2.74\t12438.41\t4.80804666844427361\nBloody masters pull only women; shops take aggressively also legal cells. Continually underlying grounds would interfere. Entries shall not separate. Senior techniques see in \tBooks                                             \ttravel                                            \t2.25\t4171.41\t1.64665291182793628\nActive, mi\tBooks                                             \ttravel                                            \t1.40\t12936.19\t5.10652631399441219\nVoluntary others will imply again international, important birds; ill old publishers can get dark powers. Features stretch now progressive procedures. Tough n\tBooks                                             \ttravel                                            \t1.83\t3612.43\t1.42599705573765030\nCold terms shall comply only early claims; head, different politicians shall not commend good, foreign organizations; criminal, po\tBooks                                             \ttravel                                            \t1.03\t5504.86\t2.17302872367020583\nOperations s\tBooks                                             \ttravel                                            \t1.00\t193.62\t0.07643097580629212\nApplications might gather rather out of a problems. Scales could observe presumably for a directors; totally empty questions will forget. Just, symbolic question\tBooks                                             \ttravel                                            \t21.48\t5351.75\t2.11258896173599765\nFor example influential subjects shall work for example. Modules should persuade aside overall preliminary relatives. American, available reasons may use to the weekends; streets used t\tBooks                                             \ttravel                                            \t2.18\t6997.28\t2.76215751673304277\nSimilar sides assess more yet complete improvements. Bacteria would stay; general, curious trends used to reac\tBooks                                             \ttravel                                            \t1.61\t221.43\t0.08740889873353613\nCommunist, small cases may not turn other rules. 
Little, forward men should assist quite available technique\tBooks                                             \ttravel                                            \t2.29\t16204.92\t6.39684871636659094\nConflicts could give really sole institutions. Then advanced proceedings could not receive. Black experiences shall \tBooks                                             \ttravel                                            \t1.91\t5880.48\t2.32130371144192077\nLeading players will sa\tBooks                                             \ttravel                                            \t4.51\t262.65\t0.10368038320174892\nThere european members turn; industrial, honest leaders cut exactly happy, consistent reasons. Incidentally european millions worry at first aware \tBooks                                             \ttravel                                            \t3.81\t2395.24\t0.94551456714318326\nDeliberately ordinary procedures will not pay by a months. Feet reach very s\tBooks                                             \ttravel                                            \t9.43\t1776.74\t0.70136335065629308\nGood, national parts remove animals; \tBooks                                             \ttravel                                            \t2.57\t3370.45\t1.33047609960911726\nOdd, artistic databases open now; female, left days use all obligations. Simple, early sites may not hesitate statements. Left, free s\tBooks                                             \ttravel                                            \t2.31\t9717.76\t3.83605970174234756\nHowever solid hours visit painfully things. Clubs must take most other words; officials will follow necessary developers. Alternative, great decisio\tBooks                                             \ttravel                                            \t2.68\t1892.11\t0.74690534879063830\nFinally surprising cells cannot look better points. Elections might choo\tBooks                                             \ttravel                                            \t1.98\t3145.02\t1.24148821160161580\nRight schools go now; average, invisible hands should get also good persons. Usually good ministers will make. Notes ought to stem average words. Heavy, certain suggestions summaris\tBooks                                             \ttravel                                            \t4.55\t337.50\t0.13322721999082528\nThanks could talk well individually national records; just simple officials go then encouraging, remarkable needs. Signals assess now. Upper, cheap pp. would not see. Hard trousers shall send whol\tBooks                                             \ttravel                                            \t4.23\t6920.66\t2.73191197719023675\nReports used to think characteristics. True types break extremely deliberately white tasks. Courses must cost. Economic, nervous resou\tBooks                                             \ttravel                                            \t0.74\t1273.19\t0.50258833842998175\nDear signals know finally. Positions answer payable payments. 
Superior babies can exis\tBooks                                             \ttravel                                            \t1.78\t16390.16\t6.46997170964392568\nHorizontal problems continue members; modern, other interactio\tBooks                                             \ttravel                                            \t8.51\t2371.88\t0.93629326978322569\nOpen conditio\tBooks                                             \ttravel                                            \t8.17\t9456.37\t3.73287670016189772\nPractical writers used to succeed recent arms. \tBooks                                             \ttravel                                            \t9.48\t10115.82\t3.99319281934100804\nMembers show yards. Economic stones get newspapers. Only magic views lea\tBooks                                             \ttravel                                            \t9.23\t1653.26\t0.65261995176898313\nInvestments ought to use still also professional developments. Only fresh visitors know steadily never main occ\tBooks                                             \ttravel                                            \t1.37\t4036.41\t1.59336202383160616\nConclusions might take on a ch\tBooks                                             \ttravel                                            \t4.48\t4341.46\t1.71377969333738765\nSmall, original things announce in addition at last other functions. Best political women make even old materials. Downstairs wet arr\tBooks                                             \ttravel                                            \t0.34\t8289.45\t3.27223815926799005\nAgain english deals cut for the cases. Yet normal systems reach biological, original reasons. So other remains spread steadily. Much inadequate members consider\tBooks                                             \ttravel                                            \t1.92\t7192.94\t2.83939377706905721\nLater severe rules would\tBooks                                             \ttravel                                            \t1.57\t3713.31\t1.46581916522705609\nMovements may describe quite southern, nervous views. Young notes imagine. Sensitive women might excuse then sales. Proportions may not exist only from a controls. Are\tBooks                                             \ttravel                                            \t2.49\t6651.86\t2.62580389797976612\nThat is fine terms know to the goods; useful colleagues us\tBooks                                             \ttravel                                            \t6.31\t6202.60\t2.44845971767434933\nYesterday long babies may not include as else able companies. Large, true d\tBooks                                             \ttravel                                            \t4.19\t1813.84\t0.71600847617232157\nWords see low courts. National, democratic plants avoid. Days should go stupid, apparent days. Dependent hours should not want police. Also urban wages shall not define so great, typic\tBooks                                             \ttravel                                            \t8.88\t8312.77\t3.28144366673520796\nMasses can contain as. Military men retain in a earnings; british, related instructions shall know different, precise needs; favorite\tBooks                                             \ttravel                                            \t5.09\t959.36\t0.37870478746784635\nBehind relevant areas find then necessary papers. 
Copies might come envi\tBooks                                             \ttravel                                            \t7.07\t7437.38\t2.93588581160107894\nRemarkably good bishops would deprive transactions. I\tBooks                                             \ttravel                                            \t0.59\t7014.30\t2.76887611609376528\nRunning businesses find emotions; \tBooks                                             \ttravel                                            \t4.40\t2300.61\t0.90815962839434831\nPink, central countries shall defend rapidly \tBooks                                             \ttravel                                            \t6.87\t6536.14\t2.58012373828394893\nLocal, conservati\tBooks                                             \ttravel                                            \t1.68\t8121.86\t3.20608245616202735\nStrong women know also also obvious votes. Private, natural areas should play strongly for \tBooks                                             \ttravel                                            \t2.11\t184.12\t0.07268087628062445\nColours meet certainly hours; aw\tBooks                                             \ttravel                                            \t1.63\t5441.98\t2.14820701228347073\nToo full weeks might obtain most today vital cities. Police shall take for example full sto\tBooks                                             \ttravel                                            \t3.82\t5904.69\t2.33086054402259597\nExceptional hundreds compare else then previous scientists. Rapid, popular differences get exactly now social persons. Naturally fundamental dreams hold on a changes. Brilliant birds pursue te\tBooks                                             \ttravel                                            \t5.39\t3124.51\t1.23339194409935853\nBritish leaders can focus. Different workers cannot breathe only in an objectives; arrangements might enter predictably hours; reduced, effective phases operate ready men. Others say o\tBooks                                             \ttravel                                            \t4.95\t1624.50\t0.64126701888917236\nYesterday public notes work at least students; accidents might not apply today rural, subject premises. National, particular organisations could not endorse simply under a respondents. Sti\tBooks                                             \ttravel                                            \t9.83\t531.86\t0.20995030881280099\nMaybe gastric variations will see as. However physical plants would not choose for example wi\tBooks                                             \ttravel                                            \t6.36\t1691.34\t0.66765192965713314\nLittle arts can grow directly rights. Full, slim argum\tBooks                                             \ttravel                                            \t4.77\t16542.31\t6.53003251415238218\nAbout right clothes must get thoughtfully to a cases. Eastern improvements \tBooks                                             \ttravel                                            \t98.75\t2730.37\t1.07780623598918408\nCountries want incorr\tBooks                                             \ttravel                                            \t63.33\t473.46\t0.18689706541290708\nFields would die clear horses. However new problems go nasty, smooth ways. Interested others go great societies. Familiar patients shall seem trends. 
Yellow, r\tHome                                              \tNULL\tNULL\t7995.48\t34.64319649767261090\nNULL\tHome                                              \tNULL\t0.87\t14048.70\t60.87087637475838958\nNULL\tHome                                              \tNULL\tNULL\t116.76\t0.50590328824138814\nNeat, desirable words make especially gradu\tHome                                              \taccent                                            \t7.11\t1583.88\t0.73384072874422647\nCommon males protest probably statements. Subsequent, main ways begin then titles. Rights come therefore interesting, ordinary thin\tHome                                              \taccent                                            \t8.82\t1429.40\t0.66226730413099308\nOffers go odds. Black, certain readers prove again in a cases. Public, black things watch as else modern forces. Difficult, new crops comp\tHome                                              \taccent                                            \t3.59\t4707.69\t2.18115934307012370\nNational, round fields would not accomp\tHome                                              \taccent                                            \t0.17\t1970.93\t0.91316811090730250\nMore general applications work also moves. Final, equal instruction\tHome                                              \taccent                                            \t33.79\t1466.94\t0.67966027642501678\nSevere plants filter fair with the days. Both great hills bring still. Military standards ask now for a conditions. Ago new proposals may like particularly men. Then alone a\tHome                                              \taccent                                            \t5.54\t6369.32\t2.95102307649896240\nPresent, good grounds fall students. Big, long nerves remain events. Important, black years must not use principles. Fatal mines cannot order hospitals. Forces apply elsewhere; now final members\tHome                                              \taccent                                            \t5.37\t187.59\t0.08691389644741359\nTerms must work slow signs. Just american movements make surprisingly\tHome                                              \taccent                                            \t0.26\t481.20\t0.22294880841460324\nDiscussions could inform; legitimately potential miles remember again from the factors. Then administrative changes may\tHome                                              \taccent                                            \t2.20\t1475.60\t0.68367261366705848\nAgo light fingers blame enough green, british years. Children go general stands. Economic, great numbers affect deputies. Purposes urge annually. Always electrical ways vote judicial, regular ac\tHome                                              \taccent                                            \t6.86\t11873.28\t5.50110895256222018\nDays shall want later romantic, american changes. Reasons read; great reasons may occupy economically. Strong, new films go then objects. English relations would resolve over. New, crazy feelin\tHome                                              \taccent                                            \t1.78\t715.86\t0.33167110139583931\nNew, large words stop more strong cars. Back views leave other, young shoes. White conte\tHome                                              \taccent                                            \t2.81\t9585.07\t4.44093918343840622\nDecades try then. Different leaders stray examples. 
Things would not participate too good, good messages. Exactly new thanks can forget; companies u\tHome                                              \taccent                                            \t3.51\t4955.85\t2.29613643429241784\nVery afraid concepts will not disentangle with a days. Long-term, civil points c\tHome                                              \taccent                                            \t8.15\t3501.80\t1.62244833189164095\nNew measures shall pay under a agencies; comparatively heavy police shall beat similarly concepts. However japanese times cannot check like a police. Long, long-term auth\tHome                                              \taccent                                            \t1.87\t5547.93\t2.57045798559357804\nUseful, n\tHome                                              \taccent                                            \t9.44\t3014.70\t1.39676594498650122\nDays give briefly vulnerable months. Sexual feelings create just animals. Charts study; changes knock rapidly aware sites. Schemes include sufficiently. For example speci\tHome                                              \taccent                                            \t7.15\t303.87\t0.14078855863039378\nConnections must not come right finally certain parties. Wild parties fi\tHome                                              \taccent                                            \t2.55\t1293.30\t0.59920967149336320\nLittle powers reach by a subjects; traditional insects make also others. Numbers shall make. Products take serious, military rules. Curiously economic methods approac\tHome                                              \taccent                                            \t3.52\t99.03\t0.04588241998607265\nOld buildings must proceed;\tHome                                              \taccent                                            \t9.33\t595.01\t0.27567907417866391\nAdditional eyes give nationally. Territorial groups should talk previously strange differences. Small discus\tHome                                              \taccent                                            \t6.07\t18159.55\t8.41365343691896978\nAlmost busy pounds lose at last for an factors. Good mothers would\tHome                                              \taccent                                            \t1.45\t2292.51\t1.06216203819318802\nBenefits might choose only by a directors. Continued eggs must not make much black, back arrangements. Living,\tHome                                              \taccent                                            \t1.62\t9494.68\t4.39905983432661074\nHoles may avoid of course genuine\tHome                                              \taccent                                            \t3.27\t409.64\t0.18979374455311320\nSupporters will laugh well indirect, old reductions. Men can increase critical words. Eyes ought to drift better parties. Other, social goods avoid costs; similar, substantial days learn;\tHome                                              \taccent                                            \t63.79\t5475.88\t2.53707589572185700\nMain, powerful kilometres should like certainly political directors. 
Left families go tall, clear organizatio\tHome                                              \taccent                                            \t0.18\t11613.93\t5.38094732857567124\nPromptly soviet faces could confirm now consistent new procedure\tHome                                              \taccent                                            \t1.85\t5675.68\t2.62964690968951645\nOld events can try far natural genes. Primary months explain at all par\tHome                                              \taccent                                            \t0.15\t20335.22\t9.42168135463177076\nWomen should hear among a pages. Everywhere main techniques go just unlikely principles. Broad, willing differences can make also short, modern roots. Together sorry thoug\tHome                                              \taccent                                            \t8.25\t1632.64\t0.75643213335415177\nAttractive, pale rights stop in a delegates. Answers go as; variable, alone roles ought to relax quickly concerned, detailed parents. Poor, physical matches would send as for a details; cent\tHome                                              \taccent                                            \t1.45\t989.82\t0.45860180703437776\nAncient periods will not see in a affairs. Fun\tHome                                              \taccent                                            \t4.09\t8014.62\t3.71332082064806196\nPerhaps material e\tHome                                              \taccent                                            \t6.64\t2552.44\t1.18259238684490834\nHere german thanks trust further remarkable towns. Other years\tHome                                              \taccent                                            \t2.04\t7200.88\t3.33630011541261051\nSupreme others can decide. Unfair, short presents give. Activities give simply police. Dark, impossible \tHome                                              \taccent                                            \t0.13\t2033.98\t0.94238033528498482\nStill different holes ought to enjoy early problems. Mammals see usually. Powerful, public \tHome                                              \taccent                                            \t6.84\t1085.87\t0.50310353822353537\nAlways potential wages shall not restart sometimes at the efforts. Mere, high weapons would not go there physical pr\tHome                                              \taccent                                            \t66.58\t7246.44\t3.35740890118021093\nBoys ought to answer. International citizens call areas. All quick cuts might back most white, central amounts. Strong mice make on a lines. Cultures would dismiss changes. Left chil\tHome                                              \taccent                                            \t5.45\t18131.76\t8.40077781891015469\nMost main firms would know highly for an companies. D\tHome                                              \taccent                                            \t1.31\t5733.85\t2.65659814033265334\nNew investors think especially secondary parties. Farmers detect adequately. Hum\tHome                                              \taccent                                            \t38.04\t1460.72\t0.67677843605024781\nInternational, nice forces will turn modest ways. Trees might not deal eastern others. 
Responsibilities ought t\tHome                                              \taccent                                            \t2.75\t6806.25\t3.15346077986677743\nQuite political women like home seriously formal chains. Certainly male lips \tHome                                              \taccent                                            \t4.86\t1551.13\t0.71866705152980782\nRules meet as; authorities shall not kill moreover near a \tHome                                              \taccent                                            \t3.55\t651.58\t0.30188899540063836\nAlso possible systems could go forward. Local, british babies d\tHome                                              \taccent                                            \t2.53\t2797.54\t1.29615172379922932\nBritish results cou\tHome                                              \taccent                                            \t4.30\t118.60\t0.05494956084366572\nSimply perfect shareholders come others. Other, tired eyes contact therefore educational jobs. Over cathol\tHome                                              \taccent                                            \t7.12\t11929.65\t5.52722621010654933\nEnough labour losses demonstrate also quickly happy women; near available things might surrender also ge\tHome                                              \taccent                                            \t1.26\t1093.19\t0.50649502882535352\nRoyal children \tHome                                              \taccent                                            \t3.70\t188.00\t0.08710385698658647\nFuture, real fears mean far interests; ill, mean payments speak far so labour lights. Already other applicants might not go so powerful lengths; japanese, central modes boil. Old homes ough\tHome                                              \tbathroom                                          \t1.70\t19546.11\t7.34362930968507144\nAlso eastern matters should not enable now irish, \tHome                                              \tbathroom                                          \t3.46\t2574.19\t0.96714369931910820\nQuite public shoulders help even ministers. Short, tall groups cannot overcome too other notes. Thus surprising reasons find\tHome                                              \tbathroom                                          \t1.77\t11046.40\t4.15022051991445731\nIn\tHome                                              \tbathroom                                          \t0.42\t1225.60\t0.46046768804381146\nNecessary, p\tHome                                              \tbathroom                                          \t8.13\t5680.58\t2.13423918027734537\nLetter\tHome                                              \tbathroom                                          \t9.54\t6366.89\t2.39209131717465953\nModern companies shall not become also old, grateful agents. Enough joint programs approve titles. Jeans will not fall already wrong teachers. High, silver children manage a\tHome                                              \tbathroom                                          \t2.28\t16790.19\t6.30820820097611185\nDetailed, unhappy groups play old, human others. Well anxious councils will study whole, democratic employees. Educational, english customers get more. Explicitly cold deci\tHome                                              \tbathroom                                          \t79.37\t2249.42\t0.84512502189907830\nPp. may not record also human rocks. 
Extraordinary, industrial measures may not operate only out of a officials. Ready subjects show clearly new things. Projects should enable\tHome                                              \tbathroom                                          \t3.56\t11356.89\t4.26687408752274959\nHere economic areas develop too sole processes; grateful, new children pass shares; fat, proposed aspects affect gmt on the terms. Years remind e\tHome                                              \tbathroom                                          \t6.16\t5399.13\t2.02849617211813296\nAppropriate, active areas change alternative books. Clients will not look now only, other rates. Usually effecti\tHome                                              \tbathroom                                          \t2.89\t2344.36\t0.88079473657179327\nEmployees watch never at the imports. Cases resist actually reliable prices. Alive, var\tHome                                              \tbathroom                                          \t7.17\t2759.95\t1.03693521182809843\nVery oral hands ought to smoke military, independent issues. Moving sons play. Patients contradict to a measures. Other cattle enable significant goods. Initial, possible groups let soci\tHome                                              \tbathroom                                          \t7.17\t3821.04\t1.43559518172562445\nNew sports will give now students. Scarcely free countries damage there prime, necessary members. Big units should not fill probably mental child\tHome                                              \tbathroom                                          \t4.29\t1777.37\t0.66777207465602902\nUnions last moving pur\tHome                                              \tbathroom                                          \t2.72\t3881.21\t1.45820153028110433\nIndeed political miles imagine. Urgent, able males can explain companies. Accor\tHome                                              \tbathroom                                          \t5.47\t2914.22\t1.09489568036148517\nAlmost other bodies call cars. So international benefits ought to suppose in a points. Officers can ensure also for a books. Carefully different police sleep. Irish, u\tHome                                              \tbathroom                                          \t9.17\t4471.44\t1.67995564541989254\nLabour, japanese economies care more minor, great gardens; events may m\tHome                                              \tbathroom                                          \t5.15\t5956.38\t2.23785943840600333\nSmal\tHome                                              \tbathroom                                          \t3.40\t1261.44\t0.47393306168895686\nFree, sad bits might not speed then. Troubles\tHome                                              \tbathroom                                          \t5.76\t175.15\t0.06580525094718797\nHard players show empty troops. Expectations used to know even; alternative organs could not consume historical, direct practices. Material restrictions could count deep. 
Gifts could s\tHome                                              \tbathroom                                          \t4.64\t8640.19\t3.24618824539756797\nMere, alternativ\tHome                                              \tbathroom                                          \t6.84\t4069.67\t1.52900745430912057\nStrong taxes represent nece\tHome                                              \tbathroom                                          \t3.36\t2436.99\t0.91559656583378597\nSimply costly processes should not believe therefore by the weeks. Instead earl\tHome                                              \tbathroom                                          \t7.28\t419.52\t0.15761700757844303\nJoint lovers can mention tomorrow minor techniques. Major markets may no\tHome                                              \tbathroom                                          \t17.20\t2682.86\t1.00797188442005549\nPretty figures ought to join that things. Extra authorities find dramatic items. Over mutual cases give for the time being as successful lines; permanent arms return publi\tHome                                              \tbathroom                                          \t0.31\t15228.27\t5.72138240845865918\nBoth long tories will not get together; problems seem by now special,\tHome                                              \tbathroom                                          \t5.62\t8655.20\t3.25182762202741263\nSanctions will know black quarters. Cent\tHome                                              \tbathroom                                          \t4.35\t2089.84\t0.78516954404494038\nComfortable clothes ought to carry violently. New, united services must look always. Common, recent workers could prevent. New, local languages need very often young kinds. Structures might\tHome                                              \tbathroom                                          \t1.84\t4089.18\t1.53633751680400859\nDrivers might put \tHome                                              \tbathroom                                          \t7.91\t1583.75\t0.59502749750276305\nFinancial forces may bring yet. Unknown, expensive assets offer enough securities; female movements ought to grow great, aware modules. Normal contacts mus\tHome                                              \tbathroom                                          \t2.10\t4156.11\t1.56148365123675362\nBy now developing masses used to flourish subtle methods. Much \tHome                                              \tbathroom                                          \t9.84\t4755.08\t1.78652145403342606\nThereby social children should report to a days. Times meet anyway as a whole liable reasons. Physical, region\tHome                                              \tbathroom                                          \t5.82\t12047.28\t4.52625911293770307\nSo present rises l\tHome                                              \tbathroom                                          \t5.86\t3137.27\t1.17869734307213477\nPhilosophical,\tHome                                              \tbathroom                                          \t6.72\t3878.46\t1.45716833336357782\nSingle p\tHome                                              \tbathroom                                          \t3.92\t6593.22\t2.47712530202694074\nAreas ride perhaps even leading women. High sides cannot get then throughout the officers. Long signs may not embrace to the friends. 
Very, tory\tHome                                              \tbathroom                                          \t9.18\t6130.98\t2.30345804996968600\nHi\tHome                                              \tbathroom                                          \t2.13\t440.85\t0.16563085857874860\nForce\tHome                                              \tbathroom                                          \t0.20\t6396.38\t2.40317094521024374\nHard programmes make as other goods. Rational, similar computers could go to the streets. Options mi\tHome                                              \tbathroom                                          \t7.10\t4799.14\t1.80307514719205068\nSo straightforwar\tHome                                              \tbathroom                                          \t1.16\t1899.26\t0.71356711912050371\nProperties go industrial troops; sweet companies would start more constant negotiations. Groups will protect. Public so\tHome                                              \tbathroom                                          \t5.64\t10621.64\t3.99063480257316377\nEspecially linguistic games cover to a officials. Minor, main days know completely variations\tHome                                              \tbathroom                                          \t1.60\t3572.22\t1.34211152462782650\nFrom time to time successful books decide important, active elements. Parts will hear on a clubs. Firstly following supplies take barely upon a years. Other cases may find\tHome                                              \tbathroom                                          \t3.90\t218.22\t0.08198699321550305\nImportant kinds can catch again slim areas. Good, past men must \tHome                                              \tbathroom                                          \t5.17\t6013.16\t2.25919213694315054\nFormal, positive soldiers co-operate long along a offices. Great, able details must overtake responsible, remaining papers. Lives would think acute, labour shapes. Representative\tHome                                              \tbathroom                                          \t10.92\t3002.22\t1.12795798172233325\nSocial\tHome                                              \tbathroom                                          \t5.38\t4680.62\t1.75854623858650847\nMain forms matter constitutional, popular animals; ministers might not allow hardly. Officials will think so. Soon brief relations interfere for example old terms. Co\tHome                                              \tbathroom                                          \t8.37\t867.00\t0.32573880999835553\nProbably awful sales require massively as annual notes. A little national devices arrest sharply short, grateful legs. Trees may protect immediately in a courses. Indians will not get i\tHome                                              \tbathroom                                          \t4.33\t1138.62\t0.42778860881237321\nMilitary characters would\tHome                                              \tbathroom                                          \t2.10\t8317.61\t3.12499236843185918\nIn particular acute origins could like thousands; impatiently small stones might give away female, crucial models. 
Colleagues might accompany bes\tHome                                              \tbathroom                                          \t3.25\t4807.80\t1.80632877821233414\nAfterwards oth\tHome                                              \tbathroom                                          \t0.24\t7197.60\t2.70419568494136532\nMaterial officials tackle employers. Clear shareholders go very products. Areas imagine systems; superior, precise tonnes will make much minutes. Milita\tHome                                              \tbedding                                           \t18.44\t3038.10\t1.25620354127751860\nLarge tests complain dark, pales\tHome                                              \tbedding                                           \t37.80\t10472.58\t4.33023668816435133\nGreat servants deal primarily certainly possible gates. Problems ca\tHome                                              \tbedding                                           \t4.62\t4172.20\t1.72513492476154936\nUsually large paintings might not go beautifully local appeals. Clothes bring partially different, very orders. Fruits provide except a schools. R\tHome                                              \tbedding                                           \t33.55\t1050.47\t0.43435177709943549\nWell healthy\tHome                                              \tbedding                                           \t7.46\t10368.46\t4.28718480945140073\nConditions know both popular\tHome                                              \tbedding                                           \t2.48\t18121.95\t7.49312325626349635\nPayable, mutual pictures will not help new women; mole\tHome                                              \tbedding                                           \t49.59\t591.36\t0.24451747018527152\nIncreasingly sexual\tHome                                              \tbedding                                           \t0.50\t233.74\t0.09664758096777828\nThus angry stations would not demonstrate forward; single, political winds must not accept then dark profits. Patterns used to know obviously. Wars use particular met\tHome                                              \tbedding                                           \t64.50\t744.66\t0.30790445641937955\nNotes shall say slightly to a files. Important suggestions stay today acts. New, true powers make in particular; awkwardly left prices g\tHome                                              \tbedding                                           \t0.79\t546.70\t0.22605130707232133\nAbout political men\tHome                                              \tbedding                                           \t3.09\t589.74\t0.24384762727790521\nYet personal children answer; sp\tHome                                              \tbedding                                           \t4.17\t1458.28\t0.60297439194699971\nSacred, other police run competent, poor solutions. Just subsequent lips allow far all small sentences; programmes used to develop with a conditions. Properties m\tHome                                              \tbedding                                           \t1.39\t2951.80\t1.22051993454559739\nAttractive, dead situations shall enter also great, forward groups; thus compatible sections give still troubles. Cold, known waters can ho\tHome                                              \tbedding                                           \t5.95\t634.78\t0.26247091403579318\nNew, hard children say needs. Particular, horrible sports can clean. 
Corporate, adminis\tHome                                              \tbedding                                           \t8.14\t2691.36\t1.11283235010455958\nFemale abilities remove hard, happy customs. Really current shoulders lead to a heads. Vast advantages ought to explai\tHome                                              \tbedding                                           \t2.45\t2906.03\t1.20159480499611843\nClearly profitable ages cancel above evolutionary lessons. Steps would live better; labour women can bounce inst\tHome                                              \tbedding                                           \t3.09\t4184.78\t1.73033654437554205\nUsefully clinical hours laugh almost attractive instruments. Responsible, obvious results follow even powers. Away big cups should d\tHome                                              \tbedding                                           \t9.21\t12113.91\t5.00889919381098232\nOf course political others should turn social, low charges. Thoughts must not expand. Prime letters will not correspond alone \tHome                                              \tbedding                                           \t3.60\t3509.07\t1.45094175984684579\nImmediately legitimate details may not laugh over bad, great publications. Pale conditions cost high, commercial arms; new problems should gai\tHome                                              \tbedding                                           \t1.16\t272.24\t0.11256668709963190\nCriminal faces can exercise always to a members. And so on likely lines can know patients. New premises used to top also yesterday physical relatives. Organisational, alone operations \tHome                                              \tbedding                                           \t93.25\t255.70\t0.10572767371207712\nExpensive parents could become very over the implications; prominent reasons bring \tHome                                              \tbedding                                           \t92.94\t4461.34\t1.84468947922815077\nJust joint transactions might take now still national tests. Cells vary less so orange texts\tHome                                              \tbedding                                           \t6.63\t7559.57\t3.12575576990069165\nImportant, local transactions set overhead single prices. Available, white particles shall develop concerned, remote comments. Whole efforts m\tHome                                              \tbedding                                           \t1.47\t361.08\t0.14930054135297930\nEager, low years shall report clearly. Others should operate since a meanings. Directors would know holes. Poor boundaries hear early hours. Important countries make of course small, rec\tHome                                              \tbedding                                           \t2.90\t15764.84\t6.51849769121275679\nGoods want special children. Personal plans remain. Payable, royal things go always concessions. Free, academic dogs raise still ra\tHome                                              \tbedding                                           \t2.19\t10328.90\t4.27082741104682595\nPublic applications will include less elderly, double businessmen. Federal cards impose partners. Places pay completely. 
Quite old ways deny ac\tHome                                              \tbedding                                           \t6.98\t7984.50\t3.30145721843597883\nGood benefits pretend completely; s\tHome                                              \tbedding                                           \t1.31\t2239.67\t0.92606608909944376\nWays become just together close centuries; shots account also perhaps lengthy profits. Both eastern efforts might grab together tight countries. Police will express today for\tHome                                              \tbedding                                           \t1.95\t405.51\t0.16767160331241453\nElectronic, long-term theories would give especially; elderly forms know yet later old risks. Different m\tHome                                              \tbedding                                           \t82.96\t15743.55\t6.50969463226347981\nDouble services precipitate finally demands. Authorities write early with a things. Full changes may not see in the doll\tHome                                              \tbedding                                           \t4.48\t1865.76\t0.77146055731343376\nCritical, whole men forget in a industries. Alone lips look soon for a natio\tHome                                              \tbedding                                           \t5.35\t3628.30\t1.50024137086245375\nTotal, unlikely images get either open measures. Politicians visualise economically children. Able, ready states could not go in addition small \tHome                                              \tbedding                                           \t1.42\t334.80\t0.13843420085570364\nFirm managers will not walk at a g\tHome                                              \tbedding                                           \t3.23\t1994.75\t0.82479576510428565\nThere controversial beings upset sure at a arms. Broad circumstances see pale memb\tHome                                              \tbedding                                           \t0.56\t8534.56\t3.52889782931617102\nDifficulties will not feel most. Like things used to avoid both favor\tHome                                              \tbedding                                           \t0.82\t2845.65\t1.17662868478205813\nSpecial, true decades cannot convert cool normal, old-fashioned books. Old ministers become. Substantial, economic recordings see particularly patients. Mass, absolute thanks could not su\tHome                                              \tbedding                                           \t3.58\t8483.58\t3.50781845189793992\nAreas cannot get just. Horses achieve finally sad fans; tough examinations will not love also concrete mines. Experts shall l\tHome                                              \tbedding                                           \t6.67\t1746.36\t0.72209065414087995\nQuestions will encourage finally final, small institutions. Additional holes enjoy alread\tHome                                              \tbedding                                           \t4.45\t7157.46\t2.95949000972719407\nAble, small executives would complete ne\tHome                                              \tbedding                                           \t5.70\t11277.99\t4.66326025360996743\nShortly official associations find however weeks. Empty subjects draw much linguistic, whole powers. Typical, payable feet shall sink also narrow boys. 
Permanent, i\tHome                                              \tbedding                                           \t4.13\t10215.08\t4.22376474455520053\nNevertheless left things must appear for instance again h\tHome                                              \tbedding                                           \t6.76\t6935.76\t2.86782076740428637\nEnough lost problems manage before excellent champions\tHome                                              \tbedding                                           \t0.97\t425.46\t0.17592059467164776\nCrude dates should convin\tHome                                              \tbedding                                           \t9.48\t2442.81\t1.01006108181696956\nPersonal, major issues say palestinian, german gods; angry styles keep surprising, pleased years. Authori\tHome                                              \tbedding                                           \t8.78\t375.34\t0.15519681287090742\nFinal off\tHome                                              \tbedding                                           \t4.48\t10411.01\t4.30477852285167011\nChildren used to solve all right required, military a\tHome                                              \tbedding                                           \t4.08\t5342.86\t2.20918325682169878\nAble, red references might hire so direct children. Experiments ban too different, labour met\tHome                                              \tbedding                                           \t4.41\t1941.93\t0.80295557845793480\nThen distant children plot. Previous roads w\tHome                                              \tbedding                                           \t8.48\t514.40\t0.21269579725261037\nPowerful, happy companies seem also very national implications; children scan natural charts; really single subjects used to preserve. New re\tHome                                              \tbedding                                           \t1.99\t9617.02\t3.97647693641971033\nSlight, royal projects will ask audiences. Elabora\tHome                                              \tblinds/shades                                     \t5.27\t7981.68\t2.95699007390289399\nYears say much at a eyes; surely different theories may hear well powerful, free wars. Well little conservatives weave physical, fundamental servants; c\tHome                                              \tblinds/shades                                     \t4.42\t1284.84\t0.47599742492224623\nStates must not harm maybe late changes. Good, original steps must abandon incredible, useful neighbours. Sure annual shareholders could analyse absolutely patently dark \tHome                                              \tblinds/shades                                     \t7.32\t10474.36\t3.88045856893354741\nVery able governments m\tHome                                              \tblinds/shades                                     \t2.20\t7440.10\t2.75634977208368684\nCompanies want as from the reports. Often different purposes will not work cases; principal towns guess \tHome                                              \tblinds/shades                                     \t9.34\t5385.32\t1.99511102735147651\nCells cannot give. Indeed english trees shall talk ever. In particular foreign things may catch too soviet, rich situations. 
N\tHome                                              \tblinds/shades                                     \t0.28\t8695.50\t3.22144049719139513\nTiny, exi\tHome                                              \tblinds/shades                                     \t7.04\t7025.12\t2.60261124324411636\nWomen must sleep in a scales. Agents can get generally extraordinary, general studies. Central systems profit; either comprehensive rivers use in the cars; cases shall ke\tHome                                              \tblinds/shades                                     \t0.63\t5940.92\t2.20094534857964501\nTheories employ more specific offenders. Modes must see preferences. Certainly main studies see from the varieties. Pleasant elements \tHome                                              \tblinds/shades                                     \t97.19\t4156.26\t1.53977853842294381\nYoung opinions make fine weeks; copies would reply soon from the accountants. Interesting losses would go only slightly old families. Most famous patterns ex\tHome                                              \tblinds/shades                                     \t2.76\t8530.68\t3.16037927900416200\nIndustrial losses take letters; organic, likely yards could know all possible questions. Old studies retrie\tHome                                              \tblinds/shades                                     \t9.59\t8586.88\t3.18119981329686010\nNew, light associations feel on a officials. Potential, enormous customers find in the me\tHome                                              \tblinds/shades                                     \t4.62\t4568.78\t1.69260570579703321\nCertainly tory systems read now in a prisons; evenings may seduce anywhere other months; new customers talk with the cells. Police lead more other exports. Young provisions \tHome                                              \tblinds/shades                                     \t7.50\t11150.34\t4.13089032642781908\nCommon, interesting figures would not see high; naked studies would get both. Changes might face over prayers. Tremendous, intact considerations shall choose just.\tHome                                              \tblinds/shades                                     \t1.19\t3490.71\t1.29321080535345580\nTrue, impossible trees could get no longer exclusive cel\tHome                                              \tblinds/shades                                     \t7.65\t13982.16\t5.18000074316711372\nLess whole balls should purchase often difficult specialists. Impossible, international intentions will not counter completely during a trees. Important sciences look initia\tHome                                              \tblinds/shades                                     \t0.25\t4673.99\t1.73158307969266965\nNational, electric sections must market in the decisions; b\tHome                                              \tblinds/shades                                     \t3.94\t13578.70\t5.03053005338540591\nThin, financial others can mobilize from a stories. Anywhere related others should remain following patients. Equations sh\tHome                                              \tblinds/shades                                     \t5.47\t1070.00\t0.39640519027023090\nSteep, slow terms get. Affairs will decide upwards dominant courts. 
Familiar, serious years add\tHome                                              \tblinds/shades                                     \t2.80\t2331.69\t0.86382618514130345\nAvailable laws get worldwide waste, new policies; then other societies understand by a interests; often local problems can help whole hours. Certain, m\tHome                                              \tblinds/shades                                     \t8.96\t9879.49\t3.66007580675032100\nClear accounts will not play even spectacular offices. Christian, impossible changes say for ins\tHome                                              \tblinds/shades                                     \t0.25\t7864.42\t2.91354851071496196\nRural, top years must accept again unusual shelves. Directors used to move later known, form\tHome                                              \tblinds/shades                                     \t4.05\t3163.86\t1.17212198625081564\nHealthy directors understand at least young conditions. Excellent members prevent well meetings. Obvious\tHome                                              \tblinds/shades                                     \t4.77\t821.24\t0.30424654061450881\nThoughts must not achieve forward from the eyes. Powers seem recent\tHome                                              \tblinds/shades                                     \t1.53\t8071.29\t2.99018808240767473\nServices must move amongst a bedrooms. Small markets used to advance in a courses. New levels could say from a centres. In particular present buyers must not transfer again. Indian, net courses s\tHome                                              \tblinds/shades                                     \t0.19\t3825.58\t1.41727081102242049\nDifferent, upper days receive thorough, personal couples. Social, new girls must not prove strangely in a words; feet shall help however now full th\tHome                                              \tblinds/shades                                     \t4.79\t7716.79\t2.85885570862188328\nScarcely crucial groups may bring especially effective, important losses. Now new drugs wan\tHome                                              \tblinds/shades                                     \t3.48\t2706.56\t1.00270507642784686\nShort candidates shed women. Involved, wooden needs might violate then long-term times. Students must not\tHome                                              \tblinds/shades                                     \t5.18\tNULL\tNULL\nOnly normal subjects might create over in the teachers. Main hours used t\tHome                                              \tblinds/shades                                     \t4.63\t2891.18\t1.07110164299578147\nBars like full, traditional politicians. Things used to show properly at the holidays; less specific relations may say possibly. Forces could \tHome                                              \tblinds/shades                                     \t6.30\t144.44\t0.05351099596507678\nPrime, international results test ever conditions. Territorial users should love never barely emotional detectives. Firms resi\tHome                                              \tblinds/shades                                     \t3.79\t5465.05\t2.02464877110871531\nConditions make patients. New, various eggs will not watch appropri\tHome                                              \tblinds/shades                                     \t2.22\t360.68\t0.13362189161370737\nAlready early meetings cannot go animals. 
As comprehensive evenings w\tHome                                              \tblinds/shades                                     \t4.11\t511.70\t0.18957059426287584\nSerious, free symptoms used to remember certainly able runs. Feelings shall pro\tHome                                              \tblinds/shades                                     \t5.48\t2291.60\t0.84897395703108517\nAlso long lines make further near a dogs. Rather foreign jobs can sit in the trends. Chronic police shall experience apparently diverse, proper years. Only notable companies migrate also years. Free,\tHome                                              \tblinds/shades                                     \t73.55\t6931.61\t2.56796839339162169\nComplete costs become again industrial theories. Populations vary trustees. Countr\tHome                                              \tblinds/shades                                     \t3.42\t4143.26\t1.53496240059723073\nP\tHome                                              \tblinds/shades                                     \t2.11\t8507.90\t3.15193992364495091\nMinutes might not reply polish, main days. Main beans make properly agencies. As new years de\tHome                                              \tblinds/shades                                     \t9.78\t8403.34\t3.11320335664060012\nLives would look. Things exist for a patterns. Local, palestinian members should get problems; statements may not make yet nasty, specific men; numbers find clear women. Groups shall seem m\tHome                                              \tblinds/shades                                     \t3.38\t2112.47\t0.78261128251416324\nAppropriate, extensive scenes stem openly now financial personnel. More concerned signs stay now members; also full days could prepare subtle horses. Ancient achievements say america\tHome                                              \tblinds/shades                                     \t2.98\t14371.92\t5.32439596462480082\nPrimary, occupational regions set particularly here prime ideas. Clinical, sophisticated minutes allocate just. Needs want interested let\tHome                                              \tblinds/shades                                     \t4.77\t5863.19\t2.17214854910328515\nLarge colours must win over; months assess extreme days. Blacks might signify then fully concerned points; here political potatoes might not die \tHome                                              \tblinds/shades                                     \t0.55\t3969.07\t1.47042985845407977\nSad increases ought to mean too levels. Organs used to present; other, sympathetic controls like always new interests. Other, small women deal in a edges. Outcomes run \tHome                                              \tblinds/shades                                     \t8.43\t7535.76\t2.79178913703812636\nNew parts come also old, tiny chains; responsible seats involve now properly small secrets; eligible chains get complete communications. Talks beat about married, liable books. Big,\tHome                                              \tblinds/shades                                     \t7.11\t1861.92\t0.68978948772705450\nSocial, central lights warn police. 
\tHome                                              \tblinds/shades                                     \t7.78\t6660.62\t2.46757414805393022\nSubjects sha\tHome                                              \tblinds/shades                                     \t0.26\t360.45\t0.13353668302140629\nFree, educational times ensure practically. So linguistic officers need. N\tHome                                              \tblinds/shades                                     \t9.32\t4744.02\t1.75752724368764560\nJust possible women say. Reasonably strong employers light almost degrees. Palestinian, smart rules help really visual\tHome                                              \tblinds/shades                                     \t3.71\t8398.39\t3.11136951954542476\nLabour taxes could get even lab\tHome                                              \tcurtains/drapes                                   \t4.54\t24984.53\t7.47827965622433549\nAll real copies loosen more definite doors.\tHome                                              \tcurtains/drapes                                   \t9.49\t736.67\t0.22049741477429358\nVery, various goods should turn local arran\tHome                                              \tcurtains/drapes                                   \t3.04\t3989.59\t1.19414972919947050\nUnlikely sides sell sometimes friends; mutual floors used to say i\tHome                                              \tcurtains/drapes                                   \t3.70\t11830.01\t3.54091604348492652\nRoads help less functions. Relevant, increased procedures may not respond. All labour children ought to say workers. Given findings could decide thus royal shareholders\tHome                                              \tcurtains/drapes                                   \t4.28\t5979.42\t1.78973848785712263\nWeak girls swim; provinces may introduce. Nervous, green tracks say better british, public rebels. Houses must not s\tHome                                              \tcurtains/drapes                                   \t8.21\t9746.45\t2.91727235835165499\nMainly alternative politicians will not maintain from a matters. Principles should not tell always details; suddenly democratic years formulate far. Western, wise years ge\tHome                                              \tcurtains/drapes                                   \t2.73\t3116.99\t0.93296623573285915\nPublic metres want; characteristics shoul\tHome                                              \tcurtains/drapes                                   \t0.82\t6428.18\t1.92405971697478996\nServices decide only easy, single bases. Now british solicitors ought to transfer now over a drawings. Thorough elections run still religious, tough parameters. Complete, sole consequences ac\tHome                                              \tcurtains/drapes                                   \t4.49\t6448.14\t1.93003407238344634\nNew, intimate hours go unfortunately forms. Subsequently experienced advisers must feed n\tHome                                              \tcurtains/drapes                                   \t0.70\t188.16\t0.05631937443350629\nWords might correct long old, major relations. Visible, desperate policemen may become extra agreements. General, other students include so\tHome                                              \tcurtains/drapes                                   \t3.90\t10122.80\t3.02992008671076475\nCentres look nevertheless with a advertisements. Naked users address to a reports. 
Im\tHome                                              \tcurtains/drapes                                   \t3.82\t6381.83\t1.91018640168464850\nClear partners ought to take effective, black books. Circumstances become hospitals. Forces answer gradua\tHome                                              \tcurtains/drapes                                   \t1.32\t1013.02\t0.30321350280947356\nCertain, conservativ\tHome                                              \tcurtains/drapes                                   \t0.28\t11983.75\t3.58693294731893617\nPrivate years forgive then in the computers; more exclusive differences get sources. Minutes meet insects. Small circumstances will contact sudd\tHome                                              \tcurtains/drapes                                   \t1.69\t2179.00\t0.65221044265843012\nKnown, possible years may approve. Forth wrong aspects see again local girls. Excellent peasants can run usually with a exchanges;\tHome                                              \tcurtains/drapes                                   \t3.79\t4760.53\t1.42490471711277482\nPrime, national features claim different, great views. Versions might not sign european \tHome                                              \tcurtains/drapes                                   \t0.67\t9131.87\t2.73331848324884729\nFree funds cause still new,\tHome                                              \tcurtains/drapes                                   \t4.69\t8170.69\t2.44562154278329893\nYears must not enable existing others; other, political ties like then short products. Quite\tHome                                              \tcurtains/drapes                                   \t4.35\t696.96\t0.20861156040166106\nPrivate parents carry really british dreams; writings look probab\tHome                                              \tcurtains/drapes                                   \t9.60\t2216.28\t0.66336895817119114\nResponses used to bring of course video-taped loans. Hot, positive systems would remember. New, personal words may not answer on\tHome                                              \tcurtains/drapes                                   \t6.31\t2854.74\t0.85447050898335328\nGermans will throw perhaps with a\tHome                                              \tcurtains/drapes                                   \t6.68\t11036.19\t3.30331269626550706\nGenerally left questions bri\tHome                                              \tcurtains/drapes                                   \t93.18\t2295.48\t0.68707481730774354\nParticular, british wa\tHome                                              \tcurtains/drapes                                   \t3.20\t6421.72\t1.92212613300986409\nDemocratic, likely appearances might expand both good, certain pounds; american values can pick. Only previous figures will not repa\tHome                                              \tcurtains/drapes                                   \t6.11\t15070.04\t4.51071016947234888\nDifferent, local measures say there political doors. Open assets progress minus th\tHome                                              \tcurtains/drapes                                   \t9.40\t2024.63\t0.60600496949037970\nStatements might not test financial, concerned authorities. United scenes back just bare publishers. 
More simple things could cope \tHome                                              \tcurtains/drapes                                   \t0.37\t4710.47\t1.40992093796661557\nAccountants look equally marvellous, british schemes. Things shall study tiny events. Both normal courses could appeal faintly. Then black practices used to die hardly. Advisor\tHome                                              \tcurtains/drapes                                   \t2.23\t9441.66\t2.82604371180834938\nValid resources ought to say still tears. M\tHome                                              \tcurtains/drapes                                   \t1.25\t8697.98\t2.60344808904734832\nElectronic reports try in comparison with the problems. Germans might not go as. Common, social cups come sure about intact\tHome                                              \tcurtains/drapes                                   \t3.25\t817.84\t0.24479292722522739\nOutside mammals can ignore eyes. Amounts stand always that is ready notes. Structures remember most attractive issues. Subjective difficulties cause very. Adequate, di\tHome                                              \tcurtains/drapes                                   \t1.51\t3062.90\t0.91677621148164553\nSmall females would allow topics; local, local tears find\tHome                                              \tcurtains/drapes                                   \t0.60\t123.41\t0.03693863732376175\nProblems must not hate there in a stars. Fully forward teams may work yet white, concerned personnel. Merely common years stem methods; measures could introduce more on a areas. L\tHome                                              \tcurtains/drapes                                   \t3.73\t15982.27\t4.78375557199933360\nHere other years may like later. Terms call yesterday also\tHome                                              \tcurtains/drapes                                   \t1.50\t1201.77\t0.35970947392089103\nFree, competitive aspects get even specific, medical times. Other, free days \tHome                                              \tcurtains/drapes                                   \t4.40\t3406.63\t1.01966023876708940\nNational features sing then really magnificent values. Light, shallow advertisements should acknowledge. Possible, good forms should move anyway political, irish estates. Simply\tHome                                              \tcurtains/drapes                                   \t2.02\t2017.71\t0.60393369997996376\nLinguistic, appropriate degrees shout. Educational poles will study now in a names. Full arms look in a ways. Minute, modest systems deal unique experiments; automatically regular \tHome                                              \tcurtains/drapes                                   \t2.54\t6407.34\t1.91782196313128299\nActive books find; important, remarkable personnel may turn alone prices; public eyes administer different, financial waters. Obvious, weekly managers cannot make so. Proble\tHome                                              \tcurtains/drapes                                   \t8.93\t25.68\t0.00768644523518517\nSocially extra interpretations continue other men. Also odd initiatives must need now by a hills. So gross rules can divide. Significant, impossible parent\tHome                                              \tcurtains/drapes                                   \t4.37\t100.62\t0.03011721649393815\nEffects might tolerate reasonably. Comparisons take other, clear others. 
French, christian \tHome                                              \tcurtains/drapes                                   \t1.91\t6527.01\t1.95364115710692977\nNew, different elections kill arms. As good as new yards would calcula\tHome                                              \tcurtains/drapes                                   \t0.59\t4150.32\t1.24225885469212285\nEvents explore away. Unusual rights should affect so in a posts. New journalists might not find wrong scientists. For example tall authorities shall not con\tHome                                              \tcurtains/drapes                                   \t6.84\t1245.00\t0.37264892203292588\nTall, whole women would not create. Still national hands bear around flat, poor attacks. Fiel\tHome                                              \tcurtains/drapes                                   \t6.19\t2226.86\t0.66653572571746292\nMonths shall not find also intact forces; super ju\tHome                                              \tcurtains/drapes                                   \t0.99\t6731.10\t2.01472864184403808\nSuperbly loyal police would contemplate twice sure nights. Even succ\tHome                                              \tcurtains/drapes                                   \t0.44\t49.08\t0.01469044907098474\nLegs solve by a women. Early, early weekends neglect again loans; proposals\tHome                                              \tcurtains/drapes                                   \t57.92\t10980.48\t3.28663777944104577\nLikely, normal policies believe very children. Twice old knees should suggest with a details. Lives take students; questions will not look as deeply ready areas; valuable members wor\tHome                                              \tcurtains/drapes                                   \t5.17\t249.22\t0.07459563401529782\nBudgets keep so lesser women. Stairs determine \tHome                                              \tcurtains/drapes                                   \t1.55\t4402.52\t1.31774645158907378\nDi\tHome                                              \tcurtains/drapes                                   \t6.03\t5657.98\t1.69352622319988272\nParticularly old assumptions might learn repeatedly fine sessions; payments compete more bad times. Days will plan formerly; all right simple jeans reject weeks. Today national representati\tHome                                              \tcurtains/drapes                                   \t24.89\t14029.64\t4.19930138354218335\nGoals commit then obvious tracks. Excellent days k\tHome                                              \tcurtains/drapes                                   \t6.14\t1920.32\t0.57478327546848854\nHuman drinks\tHome                                              \tcurtains/drapes                                   \t0.71\t1522.69\t0.45576609404844651\nDead, obvious terms would serve more through a forces; worthy, possible arms decide for the falls. Rules \tHome                                              \tcurtains/drapes                                   \t2.34\t14312.02\t4.28382234948889629\nSmall branches cause smoothly right duties. Outstanding ministers give real policies. Increased, japanese settlements used to protect electoral, large offices; clouds \tHome                                              \tcurtains/drapes                                   \t3.90\t15202.77\t4.55043843567430089\nSpecific, small functions make about a children. Other, hot notes request however new things. 
Very slight eyes should want always serious, normal\tHome                                              \tcurtains/drapes                                   \t6.32\t1409.34\t0.42183857974127210\nSomehow surprising officials eat important cells. Mature police operate close, permanent flights. Old, fine engineers will pay away fingers. Hardly cultural activities watch gay, new \tHome                                              \tcurtains/drapes                                   \t0.25\t6118.86\t1.83147516712481033\nNew, perfect clothes let. High centuries could go months. Part-time, legal things think even there new systems. Aware losses come yet that wide functions. Big, british ears send please economic hee\tHome                                              \tcurtains/drapes                                   \t7.09\t4208.63\t1.25971199416500631\nLess than dark patients teach however national senses; as positive problems can take instead liberal collective sectors; urgent resources raise so southern motives. Private p\tHome                                              \tcurtains/drapes                                   \t0.67\t7346.83\t2.19902673081057097\nStill available arguments\tHome                                              \tdecor                                             \t6.57\t7479.82\t2.46562464048976131\nThen adequate experiments ought to need pp.. Able unions could need please on a countries. Women continue previously british ways.\tHome                                              \tdecor                                             \t0.96\t3319.93\t1.09437141705297364\nNow imaginative boys shall look. Experiments tell main confl\tHome                                              \tdecor                                             \t3.59\t1502.18\t0.49517395103771343\nIndependent, limited numbers claim nonetheless to a firms; never managerial sources would want only special terms. Changing, present homes shall suffer \tHome                                              \tdecor                                             \t6.24\t1843.18\t0.60758013225691504\nFre\tHome                                              \tdecor                                             \t2.65\t4396.90\t1.44938046393755886\nWonderful, brief ships continue; less vital o\tHome                                              \tdecor                                             \t9.80\t3685.64\t1.21492292594937898\nPerhaps spanish\tHome                                              \tdecor                                             \t7.44\t2152.90\t0.70967527139829663\nRegional circumstances see really matters. Again sexual years secure adjacent trials. Old animals will solve new, necessary eyes. Level views migh\tHome                                              \tdecor                                             \t7.80\t157.04\t0.05176617800194552\nOld fruits tak\tHome                                              \tdecor                                             \t2.26\t7882.54\t2.59837601087274335\nParliamentary, favorite months escape almost necessary, environmental beliefs; closely high doctors used to run far exact contributions. Kinds accept never european trades. Sorry, great tho\tHome                                              \tdecor                                             \t2.64\t8778.45\t2.89370100153577829\nMuch red years would not repeat by the others. 
Particularly environ\tHome                                              \tdecor                                             \t1.45\t2736.60\t0.90208432705122327\nSol\tHome                                              \tdecor                                             \t1.01\t9042.00\t2.98057680523173309\nSchemes wield usually other \tHome                                              \tdecor                                             \t1.43\t5016.00\t1.65345866567599792\nHelpful, very colleagues shall provide members. Concessions go other, tired eyes. Accurate windows ride slowly different hours. Speciali\tHome                                              \tdecor                                             \t1.48\t2381.42\t0.78500389465991526\nFrequently small crimes spend as primary regions; exactly small students simplify very. Early workers make interpretations. Late direct pensioners ca\tHome                                              \tdecor                                             \t2.82\t6192.37\t2.04123361993063780\nMaps form houses. Whole assumptions used to know for a premises; black titles \tHome                                              \tdecor                                             \t5.19\t6005.87\t1.97975633899990144\nContacts choose to the governments. Over angry contracts could sell as yet national technical tables; violent, toxic patterns cannot express solid crops. Feet shall use\tHome                                              \tdecor                                             \t9.88\t1269.31\t0.41841140728253607\nFormerly prime men influence incentives; new bars support children. Machines end certainly so economic drawings; other, christian eff\tHome                                              \tdecor                                             \t2.26\t5503.23\t1.81406765006142784\nAs \tHome                                              \tdecor                                             \t2.03\t7855.62\t2.58950218565743277\nLong-term st\tHome                                              \tdecor                                             \t8.22\t2874.12\t0.94741599286138340\nContemporary feet used to go still political, late lives. Statutory, scottish genes must smell. Good lips establish quite. Old women must avoid with the places. Too wet l\tHome                                              \tdecor                                             \t4.58\t710.24\t0.23412130835520749\nCitizens can keep for the most part at the things. Branches visit terms. Available, slight problems may avoid. Problems help more. Social years feel inherent acres. Individuals use\tHome                                              \tdecor                                             \t49.10\t5668.87\t1.86866870536098372\nWorkers shall not control never on a studies. Sophisticated activities go separately according to a bodies; co\tHome                                              \tdecor                                             \t40.34\t2145.78\t0.70732825670539131\nPrematurely other systems assume nearly under w\tHome                                              \tdecor                                             \t0.88\t9056.13\t2.98523457455908593\nAlways cool temperatures meet there social grounds. 
Threats h\tHome                                              \tdecor                                             \t5.44\t3350.86\t1.10456708621751882\nToo complete events try environmental, national topi\tHome                                              \tdecor                                             \t3.31\t7994.82\t2.63538764145131214\nFresh, beautiful functions give empty, fast origins. Sons get other companies. Lights say delightful, native services. Small, soviet things could go already also dead systems. Medical, comm\tHome                                              \tdecor                                             \t34.78\t11689.03\t3.85313555559144935\nResulting, distinct clients shall tell intellectually difficult gardens. Villages turn then by a things; fresh, supreme powers succeed here. Historical hands st\tHome                                              \tdecor                                             \t4.30\t269.93\t0.08897888708650760\nPossible shoes render undoubt\tHome                                              \tdecor                                             \t8.28\t13638.47\t4.49574290431860593\nHowever old figures ask only good, large sources. Yet naked researchers shall deal to a women. Right, common miles describe there also prime bags. Readily significant shares\tHome                                              \tdecor                                             \t7.78\tNULL\tNULL\nRelatively regional months wish then needs. Eyes follo\tHome                                              \tdecor                                             \t66.29\t7883.31\t2.59862983128194800\nDeposits shall leave more skills. Close ce\tHome                                              \tdecor                                             \t5.30\t5555.19\t1.83119558312931557\nRegular findings put. Little, national cattle should say most mothers. Asleep eyes stay over thoughts. Western, golden walls might not move distinct, small boxes. Swiss, go\tHome                                              \tdecor                                             \t3.83\t3030.40\t0.99893164682307498\nGentlemen work always. Religious, spiritual variations think fairly so electronic resources. Diplomatic, civil others split both mathematical, new contacts. Ultimate\tHome                                              \tdecor                                             \t9.53\t6205.11\t2.04543319397384199\nThere final techniques wear so old winners. Old, particular prices will return especially motives. Around early members shall pay systems. Unions call rather. Else old ter\tHome                                              \tdecor                                             \t2.10\t13195.83\t4.34983242908439067\nSimilar, ready forces play often arms. Marrie\tHome                                              \tdecor                                             \t7.68\t7302.41\t2.40714375893522009\nNearly delighted services know then eventually political p\tHome                                              \tdecor                                             \t0.48\t4915.69\t1.62039278873142867\nTop modules ought to go. 
Funds shall offer in\tHome                                              \tdecor                                             \t4.71\t13454.30\t4.43503367735338493\nImportant rights justify now still e\tHome                                              \tdecor                                             \t53.89\t3370.57\t1.11106422941936768\nFields divorce hardl\tHome                                              \tdecor                                             \t1.25\t14250.34\t4.69743783130568185\nAble, assistant positions should die\tHome                                              \tdecor                                             \t4.24\t3308.46\t1.09059048186650958\nBritish, electric ye\tHome                                              \tdecor                                             \t4.13\t6855.95\t2.25997407076183372\nImmediate designs reward more speedily expected things. Good, happy feet create interesting, political signals. Still general stations help. Remote, flat ideas ma\tHome                                              \tdecor                                             \t0.10\t6799.02\t2.24120784232544325\nSa\tHome                                              \tdecor                                             \t2.03\t474.81\t0.15651489414864844\nMinutes must not reduce in addition conditions. Australian, likely methods miss on a grou\tHome                                              \tdecor                                             \t25.40\t111.84\t0.03686659034473756\nQuickl\tHome                                              \tdecor                                             \t9.23\t2919.06\t0.96222987492587290\nAbroad great methods can call all labour main clubs. Minerals may make often countries. Apparently good pairs used to write terrible accounts; able funds close again with the times; earlier average \tHome                                              \tdecor                                             \t4.93\t5327.91\t1.75627570961758494\nMinor, usual members come more before good waters. Circumstances cannot take interests\tHome                                              \tdecor                                             \t0.15\t15519.10\t5.11566793829592889\nPresent, responsible rates contribute at all records. Eyes ought to wait political, national awards. Politically int\tHome                                              \tdecor                                             \t0.18\t20899.05\t6.88909795193300723\nNations realize on a shadows. Managerial, disabled systems stay between the councils. Capitalist girls might live \tHome                                              \tdecor                                             \t4.02\t1089.18\t0.35903391337340180\nMilitary issues face rather once previous thanks. Then famous sources ought to transport boats; readily impossible requirements trust again with \tHome                                              \tdecor                                             \t5.27\t7325.56\t2.41477485305611310\nPrivate, direct rates increase furious meals. Italian values buy for instance random members. Available reforms work financial, impossible adults. 
Immediate, good experimen\tHome                                              \tdecor                                             \t6.40\t7796.60\t2.57004701611034397\nSo far conditions may r\tHome                                              \tdecor                                             \t8.95\t1175.16\t0.38737609361160401\nSuspiciou\tHome                                              \tflatware                                          \t8.91\t11913.78\t5.12961692885790892\nMaterial, rough relations think cities. As popular studies should not ask at a boo\tHome                                              \tflatware                                          \t0.28\t1925.64\t0.82910676064909237\nReal times could cultivate honours. Great carers enter like a drugs. Sufficient years read o\tHome                                              \tflatware                                          \t3.21\t32.10\t0.01382102938079593\nLong, other grounds give now clinical, essential areas. Possible languages make. So similar costs would say. More similar propos\tHome                                              \tflatware                                          \t3.20\t180.81\t0.07784985427855798\nPresent variables shall raise royal, american structures. \tHome                                              \tflatware                                          \t1.03\t26390.07\t11.36255242464987910\nRemarkable m\tHome                                              \tflatware                                          \t20.08\t15671.25\t6.74743946055445923\nChanges like old, perfect streets. Thousands say. Whole factors work particular\tHome                                              \tflatware                                          \t1.83\t3396.31\t1.46232088150439278\nPolice succeed schools; supplies calculate far countries; new words move shares; officers must complete years. Asian things may bear warm things. Aw\tHome                                              \tflatware                                          \t6.66\t2788.28\t1.20052647357899259\nSuppo\tHome                                              \tflatware                                          \t2.16\t18092.16\t7.78979049601435527\nStreets will marry. Agencies tell regularly students. Years study here colonial, economic transactions. Cards shall not hide of course inside technical sons; else environmental \tHome                                              \tflatware                                          \t58.71\t3036.50\t1.30740048955722201\nEarly, particular conditions fulfil just women. All new sales might not feel large, active books; current children should take. Generally di\tHome                                              \tflatware                                          \t14.12\t22.62\t0.00973930481600012\nForeign parties could not keep ston\tHome                                              \tflatware                                          \t1.70\t4789.08\t2.06199424881564327\nPatient \tHome                                              \tflatware                                          \t1.87\t9772.43\t4.20763371189319384\nYears know more medical citizens. Then comprehensive observers come finally by a processes. Small voters must waste others. Statistical levels study. Ex\tHome                                              \tflatware                                          \t0.33\t741.75\t0.31936911349549462\nArrangements keep simply close large terms. Projects might not live true, easy others. 
So new years take labour members. Original towns travel away away americ\tHome                                              \tflatware                                          \t9.19\t2252.25\t0.96973250538621876\nPossible, thick acids shall not go in a c\tHome                                              \tflatware                                          \t3.98\t5764.14\t2.48181770389473594\nRandom influences can force low for a subjects; young days will not travel historic hills. Unlikely, huge guards arrest now by th\tHome                                              \tflatware                                          \t3.46\t5434.00\t2.33967207648738495\nDomestic, new tasks show here very various farms. Internal, old homes used to impose long traditional, high \tHome                                              \tflatware                                          \t1.93\t627.94\t0.27036689063479730\nMore special scots ought to see just on a pupils. Grounds might shut complex writers. Empty, actual eyes may get little wrong, odd words; social, full tact\tHome                                              \tflatware                                          \t3.31\t2123.58\t0.91433213621403771\nLegal ci\tHome                                              \tflatware                                          \t4.71\t5052.16\t2.17526641110535642\nHom\tHome                                              \tflatware                                          \t8.19\t3362.38\t1.44771192428039261\nLeaves cannot lose colours; european, dynamic sentences will \tHome                                              \tflatware                                          \t96.77\t1428.58\t0.61509178046160258\nFurther o\tHome                                              \tflatware                                          \t5.51\t11480.35\t4.94299858728412768\nThus internal planes would not apply never rather than a\tHome                                              \tflatware                                          \t2.06\t4826.77\t2.07822211789234727\nEuropean seconds wou\tHome                                              \tflatware                                          \t5.97\t12128.66\t5.22213601899328053\nLabour, likely area\tHome                                              \tflatware                                          \t84.74\t7106.28\t3.05969173421066874\nParticular, healthy talks get written, apparent months; then great attacks used to secure characteristically to a agencies. Accounts answer prod\tHome                                              \tflatware                                          \t3.87\t179.28\t0.07719109493423967\nYesterday angry obligations defi\tHome                                              \tflatware                                          \t3.77\t1418.04\t0.61055366053407644\nEuropean, rigid voters believe in common including a meetings. Complete trends broadcast directly;\tHome                                              \tflatware                                          \t2.19\t10595.74\t4.56211943461914690\nLikely, odd offences shall ease enough true, chinese eyes. Other indi\tHome                                              \tflatware                                          \t4.09\t3818.90\t1.64427193465176194\nLeft, white ways might intervene es\tHome                                              \tflatware                                          \t9.19\t416.05\t0.17913517987165560\nLater substantial changes give wisely. 
Minor taxes would shed forward reasons; yet long shareholders will live close small constitutional bags; supplies rea\tHome                                              \tflatware                                          \t3.08\t1033.24\t0.44487353262970659\nRather inc researchers might not answer sure. Most actual lives \tHome                                              \tflatware                                          \t4.89\t317.32\t0.13662582688829168\nForces used to adapt in a musicians. Rather political t\tHome                                              \tflatware                                          \t89.07\t4073.22\t1.75377237677400555\nOther, white years get meanwhile plans; more royal sciences would not contain triumphantly splendid specific concepts; free months \tHome                                              \tflatware                                          \t1.62\t21553.63\t9.28016677547677492\nFinancial, black securities may support vague, late offices. So marginal incomes make on the men. Hotly close occupation\tHome                                              \tflatware                                          \t6.87\t280.44\t0.12074671275857973\nActively fierce lines should not feel quite confident new rules. Arms pay long settings. Wide, black women should pick real talks. Important friends make today between the revenues. Noisily expe\tHome                                              \tflatware                                          \t4.53\t8713.76\t3.75181099617458879\nBrief regions ought to inclu\tHome                                              \tflatware                                          \t4.98\t5812.86\t2.50279466811381312\nForward general regulations can begin forward women; galleries consist typic\tHome                                              \tflatware                                          \t8.74\t2672.21\t1.15055118136002115\nUncertain, statistical jobs walk there; agreements show to a rights. Useless years may not resist locally only marginal experts. Concerned,\tHome                                              \tflatware                                          \t0.14\t7564.70\t3.25706981174164905\nBeneficial, moving years ought to see difficult, political stocks; attitudes can say british questions. Upper, educational chapters should end then back lives. Workers talk there in a boundaries; pro\tHome                                              \tflatware                                          \t2.02\t609.71\t0.26251775151916148\nBusy, new things go satisfactory services. Now old years must take. Scottish procedure\tHome                                              \tflatware                                          \t0.85\t2855.80\t1.22959799706158888\nMislea\tHome                                              \tfurniture                                         \t1.06\t2910.97\t1.06321050660037366\nPapers check other, industrial boards. Violent, social things give cars. Local councillors give ther\tHome                                              \tfurniture                                         \t3.38\t3631.97\t1.32655048442868154\nDutch, busy firms must not return thereof full, naval plants. Parts shall get ashore early politicians. Good organisms try rather also close boys. 
Positive, big ingredients foster greatly local grou\tHome                                              \tfurniture                                         \t1.71\t1113.86\t0.40682922011628158\nArrangements will trade however in every negotia\tHome                                              \tfurniture                                         \t3.24\t15049.37\t5.49667234692094570\nBlack, perfect visitors should test more low english interests; about major wives believe examples. Other, available gro\tHome                                              \tfurniture                                         \t0.66\t10969.33\t4.00646757141663321\nMarine, new services shall reach more more significant elements. Late, solid rights would like also. Notes complete elements; continually personal armies will compare clearly curre\tHome                                              \tfurniture                                         \t3.59\t965.34\t0.35258337613977633\nWays become worldwide specially common gene\tHome                                              \tfurniture                                         \t8.57\t791.04\t0.28892157567448637\nVery likely areas should k\tHome                                              \tfurniture                                         \t2.37\t3579.84\t1.30751038311912580\nArms fail other faces; leaders could arise good characteristics; gol\tHome                                              \tfurniture                                         \t8.75\t2288.09\t0.83570814128872814\nStones tell. Still brown relationships put initially long r\tHome                                              \tfurniture                                         \t9.54\t5599.90\t2.04532252682488396\nPrivate, young standards find even so in the women. Sheer, expert classes cannot present men. Small, sure enquiries must support mildly p\tHome                                              \tfurniture                                         \t4.99\t2942.39\t1.07468643184775984\nAuthorities used to consider; general weapons seek particularly economic papers; much american walls\tHome                                              \tfurniture                                         \t1.27\t2216.17\t0.80943988718968251\nSevere, likely areas make on board formal, new conditions. Democratic, individual numbers should not fight workers. Poor options think. Independent feelings squeeze only ideas. Thin prob\tHome                                              \tfurniture                                         \t8.47\t3094.07\t1.13008644271738222\nAdults might not surrender doubtful, upper industries; earnings insist m\tHome                                              \tfurniture                                         \t1.61\t6969.96\t2.54572692352870019\nShareholders mean; more very teams believe necessary, charming words. Courses would not suggest as popular, similar assets. Subjects must make on the things. Liabilities used to get very to a lines; \tHome                                              \tfurniture                                         \t8.45\t3751.07\t1.37005088853319121\nDirectly high lines move calmly also international files. Pounds cannot ensure creditors. Similar, favorable colleagues could gather written police. Free days might provide so. Probably other rock\tHome                                              \tfurniture                                         \t6.83\t5386.33\t1.96731764601379975\nStreets know half. 
National, \tHome                                              \tfurniture                                         \t0.39\t9772.83\t3.56945469558921243\nSoviet, evident ways change able, huge woods. Smart sales ask sales. Thus possible transactions can want below effective, available families. Also external \tHome                                              \tfurniture                                         \t4.84\t145.90\t0.05328890813474358\nUsual tools happen little young children. Dramatic,\tHome                                              \tfurniture                                         \t1.68\t11143.74\t4.07016954857756966\nJudicial operations cannot kick currently h\tHome                                              \tfurniture                                         \t6.22\t9022.42\t3.29537293031578591\nToo young things leave individually skills. Contexts suffer enormously so romantic\tHome                                              \tfurniture                                         \t29.66\t20545.03\t7.50392197598047208\nSuperb lights occur with a standards. Bright services specify at the sides. Then urgent versions get earlier prisoners. Available heroes would not believe civil sides. Banks could t\tHome                                              \tfurniture                                         \t0.12\t16046.32\t5.86080104441877032\nRoyal, military notions will not find very very wet acids. Funny actions take western, remaining homes. Great patients will replace simply. Signs can think equivalent reasons. Campaigns \tHome                                              \tfurniture                                         \t7.54\t1334.66\t0.48747480555940278\nYet huge priests think today unlikely, absolute things. Whole, modern changes might not manipulate most only, desirable companies; accused, particular girls may take serious, central hours\tHome                                              \tfurniture                                         \t0.52\t10920.86\t3.98876425834404225\nLocal blocks shall not get natural things; already post-war patients may exploit british, sexual grounds. Easy centuries would not\tHome                                              \tfurniture                                         \t3.75\t2996.52\t1.09445701853270617\nAgo new arguments accept previously european parents; fo\tHome                                              \tfurniture                                         \t3.03\t6882.58\t2.51381201747788529\nWalls s\tHome                                              \tfurniture                                         \t4.80\t1253.04\t0.45766369738971278\nLate general supporters see more informal, blank employees; very similar methods shall help complex, likely schemes. More than new groups reconsider unanimously. Physical incenti\tHome                                              \tfurniture                                         \t37.53\t2259.23\t0.82516723732184192\nMountains ought to join pressures. Bright countries used to pay there owners. Imperial issues might establish thus calmly senior members. Just regular \tHome                                              \tfurniture                                         \t7.01\t10713.70\t3.91310058316108488\nContacts open considerable, suprem\tHome                                              \tfurniture                                         \t7.01\t1997.51\t0.72957592109822925\nEffects must quit about small values; full paths must get. 
Problem\tHome                                              \tfurniture                                         \t1.87\t4806.19\t1.75542575317425115\nPolitical girls used to ask hands. Large-scale, chief areas can produce including the children. Sufficiently new areas will\tHome                                              \tfurniture                                         \t2.26\t3164.50\t1.15581048521176187\nNow late makers used to\tHome                                              \tfurniture                                         \t0.85\t7607.78\t2.77868601459451341\nGreatly commercial animals used to live now as wide personnel. Enough hot wars keep. Min\tHome                                              \tfurniture                                         \t4.37\t894.54\t0.32672419385094943\nBetter high children\tHome                                              \tfurniture                                         \t4.48\t4768.72\t1.74174010966630844\nThus light firms expect anyway in a toys. Laws used to ab\tHome                                              \tfurniture                                         \t2.06\t12227.85\t4.46613279873491621\nWidespread others hold quickly new teachers. Societies wou\tHome                                              \tfurniture                                         \t3.01\t1696.19\t0.61952099444188288\nHot, small levels ought to arrive only there other features. Often irish columns used to spend now new natural months. Once british flowers shall penetrate funds. \tHome                                              \tfurniture                                         \t5.70\t20519.61\t7.49463750685925767\nElectronic organizations transfer still natural, whole posts. Plants ought to curl just animals; already huge women can dream eventua\tHome                                              \tfurniture                                         \t3.59\t6214.52\t2.26980798753616633\nIncreasingly other policies happen previously under a targets. Efficient, experienced points will see mostly even english machines. Fine states must remedy also good thoughts; normally clear years i\tHome                                              \tfurniture                                         \t5.85\t9156.23\t3.34424605435629337\nNatural costs assist during the types. Sometimes possible concerns make as real, right forms. \tHome                                              \tfurniture                                         \t6.28\t1707.15\t0.62352405429902331\nTherefore early eyes stay recent, expert studies; varieties halt in a parts. Unable i\tHome                                              \tfurniture                                         \t7.52\t742.08\t0.27103929368492471\nFunds drink much common months. Royal, long trees will expect sometimes front coins. Old ears can allow very similar, short members. Even public rules act common, open \tHome                                              \tfurniture                                         \t17.29\t6237.51\t2.27820491692628117\nIntensive minutes might see like a boys. Questions might know more young communications. Ready, southern others may result. Lonely, trying seeds love probably good farms. \tHome                                              \tfurniture                                         \t9.12\t11445.81\t4.18049840724968750\nAt least competitive notions may not convince white, familiar principles. Valuable, fat books convince further cases. Yet ordinary cities cannot need so as. 
Ri\tHome                                              \tfurniture                                         \t8.51\t1332.65\t0.48674066775713524\nWomen should not knock doubtless details. Sure northern products must go very cruel, other tickets. Poor, physical objectives highlight only by the discussions; now slow crowds must \tHome                                              \tfurniture                                         \t0.77\t87.87\t0.03209387496778559\nLittle, evil parties would not act subject\tHome                                              \tfurniture                                         \t7.63\t1108.98\t0.40504683580032854\nEasy, philosophical levels must \tHome                                              \tfurniture                                         \t2.32\t3778.34\t1.38001105662664191\nNow additional reasons hate. Original, use\tHome                                              \tglassware                                         \t4.41\t6349.14\t1.56441659290736902\nJobs notify about future boxes. Now main policies will think above offers. Criminal men used to think soon national women. Sure talks ought to appreciate there companies. So appropri\tHome                                              \tglassware                                         \t1.19\t7756.30\t1.91113826747676477\nSeats will cope similarly new shares; massive deals explore semantic, important thi\tHome                                              \tglassware                                         \t1.53\t4412.81\t1.08730838906490754\nPowerful hours take worth a authorities. Respondents must generate aside c\tHome                                              \tglassware                                         \t31.97\t10526.17\t2.59362921714811148\nUnfair, possible hands will not arrive surely tight russian employers. Really necessary walls should decide varieties. Talks would raise probably moral meetings. Bright, necessary \tHome                                              \tglassware                                         \t1.54\t3919.44\t0.96574291493097623\nOld\tHome                                              \tglassware                                         \t1.47\t1351.66\t0.33304657512185499\nConditions criticise enough more particular shops. Be\tHome                                              \tglassware                                         \t6.38\t1038.40\t0.25585987867254652\nCountries ensure in a christians. Expected ends used to run high holes. Broad, unlike women specify therefore. Lit\tHome                                              \tglassware                                         \t2.94\t153.37\t0.03779009013097887\nOnc\tHome                                              \tglassware                                         \t4.53\t1345.23\t0.33146223477144621\nWestern, complete meetings follow also educational shareho\tHome                                              \tglassware                                         \t7.67\t2508.40\t0.61806521539119384\nSimilar, low sites remember peaceful days. Faster permanent views give then churches. Others make well public processes. Eventually other schemes can trus\tHome                                              \tglassware                                         \t0.29\t105.75\t0.02605660840680065\nStatistical bedrooms analyse there good, little residents. 
\tHome                                              \tglassware                                         \t8.08\t5239.63\t1.29103533906879324\nLess than outside students go more. Military views should not let more different, big steps. Average, black animals ought to begin automatically with a notes. Needs\tHome                                              \tglassware                                         \t3.76\t13328.83\t3.28419956341197821\nWide, great premises mean ever severe courses. Used ministers face there about a things.\tHome                                              \tglassware                                         \t0.83\t1275.20\t0.31420696964872045\nFaintly actual prices may not wait dramatic terms. Others shall see shortly priests. Very na\tHome                                              \tglassware                                         \t27.85\t6812.75\t1.67864925695915955\nAgents invest often things. French cars ought to get locally distinctive, local powers. More american entries compensate only\tHome                                              \tglassware                                         \t6.43\t10473.16\t2.58056764918929822\nAgain other wheels ought to find on a employees. Developments make really together new groups. Drinks would not assess bright women; special, australian t\tHome                                              \tglassware                                         \t3.25\t516.63\t0.12729669599248624\nWords visit authorities. American occasions must need available, pure c\tHome                                              \tglassware                                         \t5.43\t5888.06\t1.45080731627183575\nPurposes look events. Words convert over complete sites. New notes tell up a \tHome                                              \tglassware                                         \t9.93\t9702.28\t2.39062421383578063\nFree kids would become only \tHome                                              \tglassware                                         \t1.05\t8484.78\t2.09063441964873770\nInterested, square savings change off\tHome                                              \tglassware                                         \t2.10\t8572.37\t2.11221643695702771\nExactly single cities used to deserve ago false services. Suddenly psychological managers could sustain far together big changes. Parents should r\tHome                                              \tglassware                                         \t0.64\t2997.09\t0.73847754600414333\nHeavy, desperate standards could produce still fine, important weeks. Accordingly \tHome                                              \tglassware                                         \t9.90\t11317.37\t2.78857946368674669\nLong, surprised sections keep positive sports. Strategies go northern, precious forms; readers emerge about reports. Large, unusual legs might show affairs; as usual ac\tHome                                              \tglassware                                         \t4.43\t12838.25\t3.16332154022324760\nRed rooms could not apply\tHome                                              \tglassware                                         \t4.96\t1551.75\t0.38234838860759250\nPresent materials would say real, rare relationships. Particular conclusions contribute well to a hand\tHome                                              \tglassware                                         \t4.07\t8454.05\t2.08306260332400026\nSeparate moments come months. 
Avail\tHome                                              \tglassware                                         \t0.58\t5564.41\t1.37106054264667234\nProfessional, local chemicals can feel eyes. Familiar shops bear early in a accounts. Western arrangements ride reserves. Sorry, scottish ministers might not keep constantly w\tHome                                              \tglassware                                         \t6.13\t5921.40\t1.45902223186788996\nRows come \tHome                                              \tglassware                                         \t0.29\t840.56\t0.20711246111035795\nWhite, local attitudes ca\tHome                                              \tglassware                                         \t1.74\t1012.36\t0.24944366985067333\nSeconds may make ahead quite little lips. Young, criminal consumers s\tHome                                              \tglassware                                         \t7.17\t1471.96\t0.36268827716760552\nRecently nice particles hear above in a candidates. Human errors register. American, old days live. \tHome                                              \tglassware                                         \t8.16\t528.66\t0.13026086619706129\nTraditional, old-fashioned men show too final,\tHome                                              \tglassware                                         \t4.84\t6698.16\t1.65041448856828214\nYears must share new, white loans. Able \tHome                                              \tglassware                                         \t1.64\t1410.40\t0.34752000469930625\nSingle, roman facts may hear by a rights; different, able preferences must produce as internal surveys. Similar heads might stabilize direct na\tHome                                              \tglassware                                         \t6.70\t8825.39\t2.17456010654651897\nStones should send perhaps at the groups. Perhaps individual facts \tHome                                              \tglassware                                         \t4.18\t26041.20\t6.41650449969907389\nMore black members would run more central poor phases. Personal responsibiliti\tHome                                              \tglassware                                         \t8.30\t423.06\t0.10424121751849724\nSafe, distinct millions must not deliver at the men. Indeed old claims might put exercises; particular, wooden households should learn clear, lucky votes. Mean, level terms might write bot\tHome                                              \tglassware                                         \t9.86\t7952.69\t1.95952840766599957\nSignificant difficulties could observe numbers. Very alone centuries affect forwards by a matters. Glad fields ought to spread hardly british str\tHome                                              \tglassware                                         \t3.06\t501.96\t0.12368203457094708\nNovel, small attitudes may warn now however good terms. Aware earnings must eat much; lat\tHome                                              \tglassware                                         \t2.84\t5534.76\t1.36375483636523840\nCold, old days stem thereby difficult, nuclear men; likely contents shall threaten often outer years. All real or\tHome                                              \tglassware                                         \t9.08\t11902.21\t2.93268298009935465\nNow strong fields may not feel. 
Again\tHome                                              \tglassware                                         \t3.96\t9805.52\t2.41606236279008890\nEven sexual men can clear thereby always male members. Shoulders extract. Negotiations used to alter else \tHome                                              \tglassware                                         \t3.47\t1371.15\t0.33784887581073012\nConditions could not estimate following problems. Theories get sure; extremely complete scholars ought to thrive only strong, european businesses. Important, social p\tHome                                              \tglassware                                         \t1.56\t6751.07\t1.66345141670827100\nHoles buy then markets. Practical themes ought to escape above australian children.\tHome                                              \tglassware                                         \t1.43\t3401.20\t0.83804951785541719\nWilling, due values will chat hardly gmt central records. Necessary, adult stairs make fast in terms of a years. Views would not dig\tHome                                              \tglassware                                         \t0.24\t2373.76\t0.58489016332602467\nMoments used to contract really boats. A\tHome                                              \tglassware                                         \t68.94\t1997.56\t0.49219516490864023\nInsects indicate germans. Other, particular properties might \tHome                                              \tglassware                                         \t4.52\t2374.24\t0.58500843445638178\nPersons might live here doctors. Chil\tHome                                              \tglassware                                         \t2.86\t15578.10\t3.83841561628351009\nMaterials make apart colonies. Rates make naturally poor, appropriate companies; c\tHome                                              \tglassware                                         \t0.80\t1956.16\t0.48199427991533955\nUsed groups ought to fail high from the districts. Immediate, main walls could exploit rights. Therefore late friends ought to try away. In short widespread lakes sh\tHome                                              \tglassware                                         \t80.17\t9287.91\t2.28852419657312357\nToo only affairs put nonetheless big numbers. Rapid students appeal for the \tHome                                              \tglassware                                         \t9.29\t13621.22\t3.35624392967263487\nGood windows say widely actions. Simple, imaginative findings see to a recommendations. Environmental, l\tHome                                              \tglassware                                         \t4.66\t12892.65\t3.17672560166371999\nJapanese emotions speak disabled, new techniques. Experts should not tell only refugees. Years cannot afford well head quarters. Offices make conscious, primary stories\tHome                                              \tglassware                                         \t7.31\t4129.01\t1.01738058324126665\nFull goods should find then. Only able years exploit completely mode\tHome                                              \tglassware                                         \t2.13\t3040.36\t0.74913919560946025\nSexual, due tensions take quite lucky circumstances. For ever formal districts respond ways. 
Poor relations should not come correctly in an facilities; important times could look away common \tHome                                              \tglassware                                         \t42.90\t1247.40\t0.30735710001553787\nBad boys might claim shortly italian, good lines. Times learn additional, sick cards; measures work sometimes pleasant male\tHome                                              \tglassware                                         \t2.10\t3225.77\t0.79482388369177617\nChildren want on a paintings. Over nice teachers must not sell. Richly accurate pp. hate as african, fiscal days. Claims eat part\tHome                                              \tglassware                                         \t7.95\t6793.78\t1.67397508332817129\nAlways sad weeks would not put close to a masses. Fresh, atomic sides will not help together previous \tHome                                              \tglassware                                         \t0.83\t6893.14\t1.69845720731209292\nAs other \tHome                                              \tglassware                                         \t4.88\t2352.12\t0.57955810653242499\nSerious branches use. Rich, english bombs keep much vulnerable consequences. Little, furious sales can keep to a gentlemen. As gold customers overlap betwee\tHome                                              \tglassware                                         \t2.54\t3062.18\t0.75451560407694385\nReally different shares ought to help clearly p\tHome                                              \tglassware                                         \t2.82\t6640.72\t1.63626137663554805\nThere possible newspapers experiment. Annual accounts might visit possible, prime groups; competitive, universal pr\tHome                                              \tglassware                                         \t1.12\t63.36\t0.01561178920713843\nRecent, labour complaints must read in a units. Softly old courts rely even. Actual\tHome                                              \tglassware                                         \t8.70\t2861.55\t0.70508073556955459\nWell new carers shall give together with a samples. Individual, central birds find there weapons. Kind details proceed ultimate miles. Unlike, independent months mus\tHome                                              \tglassware                                         \t0.46\t6486.44\t1.59824706415326716\nOverseas businesses conceal gmt in a farmers. Level functions could support all right dreadful processes. Walls buy furth\tHome                                              \tglassware                                         \t3.81\t10274.91\t2.53171920836992962\nMental techniques might prohibit by a chiefs; other, waiting defendants vary else. Now old skills would see. Common jobs will no\tHome                                              \tglassware                                         \t89.76\t2200.15\t0.54211297386498769\nDogs will cover never. Bitter children restore cheaply upper, short views; other teams shall exist too high customs. Yards must not help now present, coming mines. 
However federal method\tHome                                              \tglassware                                         \t3.22\t2352.77\t0.57971826535478358\nMore than divine areas will control together from \tHome                                              \tglassware                                         \t4.90\t563.56\t0.13886016296677611\nSurely national arguments address working, soviet effects. Again central parents say english rules; carefully military chang\tHome                                              \tglassware                                         \t8.61\t13637.98\t3.36037356330760394\nClassical, attractive employers want only prices. Financial approaches used to hear considerable votes. Bo\tHome                                              \tglassware                                         \t2.50\t13555.23\t3.33998411323041478\nOther patients see normal colleagues\tHome                                              \tglassware                                         \t4.62\t1970.54\t0.48553748586228795\nNewspapers ought to pursue. Well rare criticisms used to tell so. Powerful, new matters touch. Home magic brothers can read now rather supreme rats. As evolu\tHome                                              \tglassware                                         \t4.99\t1537.58\t0.37885692628017534\nSurely additional years work never remote, great bits; women deal in a judges. Far ethnic hands might help afterwards already dead awards. Rich, social experts target social children. National\tHome                                              \tkids                                              \t0.50\t361.08\t0.11815869948988022\nYet black costs must not judge here lively variables. Full, po\tHome                                              \tkids                                              \t1.68\t3938.44\t1.28880289248621866\nProud investors may not visit extremely. Alone, everyday houses move widely global countries. Only single gardens come further shadows. Scottish, wo\tHome                                              \tkids                                              \t2.68\t31.68\t0.01036686496022877\nTotal, new savings would make short, popular consultants. Short, other contracts might discuss for a\tHome                                              \tkids                                              \t9.91\t1600.56\t0.52376229105883094\nEffective, free arrangements will build social, possible agreemen\tHome                                              \tkids                                              \t4.30\t2319.90\t0.75915688198341950\nEnterprises shall not influence perhaps delighted, big police. Novels keep early temporary bacteria; rates will not cope men\tHome                                              \tkids                                              \t3.57\t6583.08\t2.15422668504996302\nAgricultural sites will not provide skills. Again\tHome                                              \tkids                                              \t0.55\t5015.40\t1.64122394323015739\nConservatives tell effectively in a parties. 
Dir\tHome                                              \tkids                                              \t6.35\t8063.47\t2.63866491795631001\nToo old\tHome                                              \tkids                                              \t0.95\t114.66\t0.03752098283900982\nFollowing occasions see then only real lovers\tHome                                              \tkids                                              \t5.63\t6310.36\t2.06498263795546836\nPermanent details would help also off a owners. External children used to listen like a years\tHome                                              \tkids                                              \t30.73\t6001.32\t1.96385334668939829\nFarmers might not assume now to the tanks. For\tHome                                              \tkids                                              \t3.80\t11826.88\t3.87019153601106270\nLocal farmers skip also shoulders; things ought to seem so only applications. Foreign, voluntary voices may not find new\tHome                                              \tkids                                              \t3.96\t2251.62\t0.73681314651989612\nNow close items become already against a groups. Authorities would work as well natural, dependent parties. Operators should not fall l\tHome                                              \tkids                                              \t5.59\t7257.25\t2.37483998524685165\nAppropriate items take mediterranean centuries. High, very days see ways. Careful, technical minds remai\tHome                                              \tkids                                              \t4.98\t10259.21\t3.35719206656024705\nDire\tHome                                              \tkids                                              \t4.41\t1733.90\t0.56739605917110697\nShort areas would not improve below to the measurements. Vo\tHome                                              \tkids                                              \t0.36\t18342.34\t6.00229046195084044\nAs beautiful children strike really natural services. Too assistant pow\tHome                                              \tkids                                              \t3.30\t2799.11\t0.91597207635182954\nEven growing seats may build for a times. Obvious, different systems require american settlements. Evil yards support worldwide possible members. Courses could build also users. Alm\tHome                                              \tkids                                              \t4.28\t2619.47\t0.85718723981598684\nGold, young schools shall not go far human hands. Aware terms brush almost. Real years treat early. Edges cannot stop still british assessments. Very royal skills shall say already other\tHome                                              \tkids                                              \t5.63\t4448.98\t1.45587041890020849\nDogs hang perhaps chief, visual brothers. Minimum, small families shall work strong mountains. Small, defensive factors make by\tHome                                              \tkids                                              \t5.44\t2978.61\t0.97471109972181264\nSo dependent things distinguish again new subjects. Critical, firm centuries increase then institutions. Effects allo\tHome                                              \tkids                                              \t1.59\t10537.48\t3.44825227844417572\nTurkish, old women must improve far from full, new changes. Days keep again exactly secondary visitors. 
Things used to make great, other notes. General, hig\tHome                                              \tkids                                              \t1.38\t355.77\t0.11642107155620551\nExaminations reduce other, late things. Police should help very strong boxes. Annual, sole reports might benefit fortunate, total seats. Never rural shapes shall cease pictures. Physical periods wi\tHome                                              \tkids                                              \t3.60\t1189.98\t0.38940536506859327\nLikely products ought to work other, considerable arrangements. Also other funds kill possible, royal patterns. Old, good files know palestinian colours. Northern\tHome                                              \tkids                                              \t1.60\t3252.96\t1.06448854296167261\nMinds could not decide later avail\tHome                                              \tkids                                              \t2.36\t7178.10\t2.34893918469122957\nTeams make never features. Now russian individuals may reproduce indeed other visual lakes. International legs drive also married views. Catholic populat\tHome                                              \tkids                                              \t8.74\t5328.40\t1.74364909261625606\nHealthy, delighted conclusions may offer experienced condi\tHome                                              \tkids                                              \t4.30\t1952.10\t0.63879915053227863\nReasonable pictures could not try features. Unexpected politicians remember always. Serious buildings pay thereafter aged a offers. Large, material products go tomorrow interesting, individual re\tHome                                              \tkids                                              \t44.54\t107.20\t0.03507979557249130\nEqual supplies could get easily still new years. Equivalent, national policemen shall appeal. Tables would \tHome                                              \tkids                                              \t7.14\t13784.20\t4.51069886315610630\nHours get skills; foreign, positive events disguise currently apparent prices; other programmes may sink honours. For instance var\tHome                                              \tkids                                              \t7.04\t2430.74\t0.79542781986826031\nApparently effective deals could stand \tHome                                              \tkids                                              \t0.92\t1924.93\t0.62990812398652687\nFunny times go actually much old details. Military parameters tell so studies. Values u\tHome                                              \tkids                                              \t4.41\t1907.42\t0.62417820588508729\nLevels contact in a sides. Companies must not count with an boxes; yet physical days happen never from a opera\tHome                                              \tkids                                              \t8.77\t13024.65\t4.26214607652284354\nQuestions seem strongly. Political years establish guilty centres. Necessary, pale eyes used to generate social, particular assets. Conditions help as firm directors. Persona\tHome                                              \tkids                                              \t9.37\t8639.50\t2.82716318888562125\nSubsequent qualities say broadly good objectives. Odd workers ought to make commonly therefore intact times. Objectives will not hold just with the types. 
B\tHome                                              \tkids                                              \t0.64\t3035.53\t0.99333742401272873\nSoon artificial notions think no longer lights; clearly late members could not trace good countries. Cultures can proceed away wealthy \tHome                                              \tkids                                              \t2.38\t3035.43\t0.99330470032282902\nAppropriate, new ad\tHome                                              \tkids                                              \t3.99\t396.54\t0.12976251992831810\nRuthlessly empty times shall not focus to a lectures. Skills involve even; boring periods re\tHome                                              \tkids                                              \t0.63\t1007.86\t0.32980898102323771\nLists could play round, new roads. Soon national calculations think usually at first similar benefits. Skilfully specific practitioners will believe that is bars. More immediate\tHome                                              \tkids                                              \t8.24\t3098.01\t1.01378318546206881\nSuggestions must see much large assessments. Disabled charges might claim besides wide, white passengers. Democratic, wide relationships test little years. Working, bri\tHome                                              \tkids                                              \t0.50\t934.46\t0.30578979263684908\nStrong settlements should close here. Forms may seem quickly other unions. Places employ difficult banks. Women must not accept too areas. Vast possibilities know; never healthy subjects cancel most j\tHome                                              \tkids                                              \t1.95\t10592.00\t3.46609323417749873\nEnglish requests serve also intervals. More late cards might make only other companies. Tragic lights learn more royal, attractive studies. Businessmen ought to defend close po\tHome                                              \tkids                                              \t1.59\t17495.72\t5.72524515852189842\nGoals help still human plates. Practical groups t\tHome                                              \tkids                                              \t4.79\t16455.90\t5.38497768620671273\nFull, good fans might not pose of course parts. Daily\tHome                                              \tkids                                              \t85.83\t7041.80\t2.30433679535792207\nDue years show just ashamed homes. Large, australian parties suit there automatic grounds. Sexual steps might not mean today technical molecules. Al\tHome                                              \tkids                                              \t6.52\t4853.82\t1.58834900509020269\nThen dark tactics should not follow then. Ashamed, g\tHome                                              \tkids                                              \t1.43\t11882.09\t3.88825828520469372\nVocational, political styles run incorrectly indeed only hands. Complete, confident employers expect big owners. Inc times should stop more; consi\tHome                                              \tkids                                              \t8.09\t3606.10\t1.18004898147351569\nFormal matters must admire much. Capable rules rise however. Harder only studies would show more. Old stones oppose common, secure police. Opinions come grey, appropriate systems. 
Eye\tHome                                              \tkids                                              \t6.02\t261.24\t0.08548736749400772\nSoft, good estates must not join most likely, accused pieces. Coming, historical pictures arrange; best old loans cannot\tHome                                              \tkids                                              \t6.24\t6536.61\t2.13901998635356684\nAbout american members provide certainly increased, special experienc\tHome                                              \tkids                                              \t0.99\t5029.15\t1.64572345059136780\nTrying, ti\tHome                                              \tkids                                              \t3.34\t16043.89\t5.25015281145090918\nNew, other companies could take always political years. Important charges wait sure other aspects. Legal grounds may not worry to\tHome                                              \tkids                                              \t6.49\t5131.46\t1.67920305772776318\nWindows recommend never internal cells. Mutual, other moments should not see levels. Necessary, national costs shall not walk light, high types; more digital days might continue.\tHome                                              \tkids                                              \t2.75\t8373.49\t2.74011490138339726\nFresh, f\tHome                                              \tkids                                              \t1.45\t4190.94\t1.37143020948299155\nQuickly wrong facilities prepare as. Similar surveys look hopelessl\tHome                                              \tkids                                              \t3.16\t116.22\t0.03803147240144533\nRemote, left figures used to feed on a records. Over economic depths must understand in particular at the ranks; degrees can think go\tHome                                              \tlighting                                          \t2.60\t5654.38\t2.08346575200781715\nLovely letters would require now questions; communities will add years. Emotional, traditional times make for a patterns. Perhap\tHome                                              \tlighting                                          \t8.69\t2656.29\t0.97876146321981272\nMoving, powerful drugs use so blind honours. Efficient, other seconds look just rare, planned homes. German, specified sons reside further red weeks. Available lists undergo young, milit\tHome                                              \tlighting                                          \t0.67\t10412.96\t3.83685665573012774\nDifferent men may not inform by now between a eyes. Members can cause new minds. Strong, chief rooms will carry high lessons; natural molecules accept here because of a talks. Eyes may disc\tHome                                              \tlighting                                          \t0.56\t7704.59\t2.83890530849746709\nIncidentally immediate flames earn. Friends influence certain, potential men. Early, opening conventions should see per a agencies. Economic, senior practitioner\tHome                                              \tlighting                                          \t1.62\t616.89\t0.22730506045863602\nOriginal others get industrial yar\tHome                                              \tlighting                                          \t1.48\t6297.95\t2.32060157486013180\nSo soviet years get. Good things must appreciate well real churches. Overseas, constant boxes complete for no months. 
Subjects may not suffer widel\tHome                                              \tlighting                                          \t5.50\t178.36\t0.06572019417303299\nImportant, toxic commun\tHome                                              \tlighting                                          \t0.33\t431.67\t0.15905716650971714\nPrisoners give fundamental months. Opportunities grasp capital actions. British iss\tHome                                              \tlighting                                          \t5.72\t5860.48\t2.15940728609091930\nUnder way long interpretations might take yesterday. Little little shares get quickly families. Measures occur. Forward daily hands m\tHome                                              \tlighting                                          \t2.56\t2458.11\t0.90573820642898698\nNew, future communities should make yesterday particular, primary relations. Significant students mea\tHome                                              \tlighting                                          \t83.07\t7959.15\t2.93270286752800800\nOpportunities drop cars. Officials change as for a inches. Other, american societies take straight leading, total posts. Agreements get \tHome                                              \tlighting                                          \t65.24\t13670.55\t5.03717874216279499\nVital problems may lig\tHome                                              \tlighting                                          \t60.33\t6799.66\t2.50546633500003077\nRather american gentlemen might generate rather in a studies. Enough current negotiations used to co-operate nearly rough main rivals. Dramatic, overall weeks used to provide too other, great meal\tHome                                              \tlighting                                          \t7.69\t3528.80\t1.30025466022538018\nAlso new colonies go unhappily eggs; typically modern centres would provide then\tHome                                              \tlighting                                          \t0.51\t5329.54\t1.96377216670187391\nPrayers increase ever depths. International, official member\tHome                                              \tlighting                                          \t7.88\t4324.07\t1.59328728424415089\nSick, old-fashioned birds might think there imports. Grant\tHome                                              \tlighting                                          \t7.01\t5314.03\t1.95805720700449927\nCommon contracts will undergo for the goods. Generous, long laws shall not reach less traditional men. All pla\tHome                                              \tlighting                                          \t3.29\t973.56\t0.35872702533694772\nFront shelves produce more at a principles; previously everyday birds avoid on a matters. Up\tHome                                              \tlighting                                          \t18.01\t4993.08\t1.83979696748983826\nProblems should prevent finally in a effects. Now economic men sign. Royal, permanent teeth can start colonies. Geographical eyes wi\tHome                                              \tlighting                                          \t9.41\t5689.57\t2.09643218861327258\nEssentially everyday lines sing s\tHome                                              \tlighting                                          \t6.37\t2165.33\t0.79785774864708186\nFamous, attractive arms shall go publicly just democratic men. Importantly private ministers ought to write. 
Levels bring most true, adjacent days. Successful, particular constraints may pl\tHome                                              \tlighting                                          \t3.16\t2680.48\t0.98767473691932868\nJust familiar police work virtually rare fruits; blind police might not succeed possible, stable churches. Senior communications light old, economic activities; almost direct characters ca\tHome                                              \tlighting                                          \t2.42\t14392.73\t5.30327994101837339\nNew kinds will go wholly great, occasional models; efforts may seem then too local homes. However religious co\tHome                                              \tlighting                                          \t4.11\t408.39\t0.15047919992332890\nMore possible newspapers\tHome                                              \tlighting                                          \t9.78\t3183.02\t1.17284532662394854\nOf course high \tHome                                              \tlighting                                          \t4.02\t405.11\t0.14927062043864877\nFurther high men can give stairs. Controversial, great fingers hate sometimes generally ancient books. Other dogs woul\tHome                                              \tlighting                                          \t6.69\t1549.44\t0.57092115754353125\nVisual, sensible rates know instead excellent honours. Other, inc christians fill plans. Girls may not make to a institutions. Days could build appropriate, small statements. Left, runnin\tHome                                              \tlighting                                          \t1.12\t8531.28\t3.14351523965302125\nPropos\tHome                                              \tlighting                                          \t1.14\t5525.76\t2.03607322355673225\nSignificantly severe hundreds let right. Domestic, good approaches like of course later main records. General firms will preve\tHome                                              \tlighting                                          \t17.01\t2134.46\t0.78648309965559538\nMore great societies press. Years make still other, lively standards. Decisions may strike to\tHome                                              \tlighting                                          \t0.43\t2644.48\t0.97440984013625407\nUnusual, fierce imports can press fine rural contents. Perhaps public\tHome                                              \tlighting                                          \t4.21\t7474.73\t2.75420894253753570\nMiddle-class years record also recent problems; certain, mild others can show. Matters will influence solely books. Loca\tHome                                              \tlighting                                          \t6.43\t2611.80\t0.96236826161206301\nAble, double cells monitor quickly tomorrow direct men. Different weeks used to become n\tHome                                              \tlighting                                          \t7.19\t187.35\t0.06903273367525079\nLegal conventions ought to work in accordance with a cases. Together left books may not come sure subsequent things. Short, real products deal excessive, im\tHome                                              \tlighting                                          \t5.79\t9924.55\t3.65689253801286467\nInternational, final writers must learn political eyes. Immediate times reach also also wrong requests. Isolated years may not plan yesterday minutes. 
Long, old researc\tHome                                              \tlighting                                          \t0.62\t4542.45\t1.67375362200770182\nAlone new conditions will recognise personal, hot others. Sooner scottish eyes permit probably only advanced cases. Never impossible services use again direct\tHome                                              \tlighting                                          \t4.82\t8731.18\t3.21717226373459388\nUsually severe kinds cost incidentally conclusions; normally continuing concentrations ought to tell amer\tHome                                              \tlighting                                          \t0.90\t8588.69\t3.16466906532847440\nEmpty, splendid pounds make relatively skills; public, simple exchanges might exploit simply. Basically quiet perceptions shall not sleep. Old, alone individuals get permanent, new minerals. Fo\tHome                                              \tlighting                                          \t2.10\t4427.11\t1.63125436659215111\nWhite, fair artists take. Simply silent years could create general, alternative issues. Deliberately natural moves take so n\tHome                                              \tlighting                                          \t5.13\t1353.00\t0.49853903743055412\nRegular villages will raise finally small, rich terms; working-class, smooth states may violate european interests; discussions should not tell particularly times. Delightful, previous obje\tHome                                              \tlighting                                          \t2.57\t1509.56\t0.55622659966272526\nHappy sorts should care. Definite, sensitive pages should happen else smooth clouds. Local, legal years might not represent easy unfair clothes. Poor, other powers change only fo\tHome                                              \tlighting                                          \t8.25\t6600.48\t2.43207460885411963\nPlates shall think; new, economic pupils collect entirely. Really powerful books develop yet girls. Best unlik\tHome                                              \tlighting                                          \t3.44\t2151.42\t0.79273233991784386\nWriters say. Spanish, local targets find able weapons. Figures would win most into the effects; as steady workers shall understand. Social point\tHome                                              \tlighting                                          \t5.26\t5754.60\t2.12039375077447653\nFiscal, occasional subjects ought to provide ill altogether royal stocks. Individual students save within a students.\tHome                                              \tlighting                                          \t2.33\t6565.32\t2.41911922632931676\nVillages\tHome                                              \tlighting                                          \t3.15\t5303.78\t1.95428039611487386\nRich, logical \tHome                                              \tlighting                                          \t7.93\t2820.76\t1.03936361805070942\nResidents a\tHome                                              \tlighting                                          \t4.83\t13929.25\t5.13250176432338949\nRidicu\tHome                                              \tlighting                                          \t4.71\t6980.98\t2.57227719846411656\nBehind aware variables cannot bring into a contents. Different, electronic mistakes measure; however additional students should like. 
Interesting sales wo\tHome                                              \tlighting                                          \t1.37\t1624.72\t0.59865953059436060\nCommon feet cannot send at a engines. Orders should think prime, conservative cell\tHome                                              \tlighting                                          \t2.52\t2080.16\t0.76647521367445784\nEmotional areas make then new targets. Difficulties should halt much. Military circumstances might mount very much white systems. Other holidays drag further through a\tHome                                              \tlighting                                          \t6.63\t10785.78\t3.97422940069306875\nYet young minutes could not walk here; enough pale others may not\tHome                                              \tlighting                                          \t1.89\t7242.84\t2.66876458378678093\nDifficulties apply just initially high surroundings. Enough usual requirements assist repeatedly for a students. Directions make too through the flowers. More national historia\tHome                                              \tlighting                                          \t9.68\t372.50\t0.13725483476931368\nAlways top problems change almost expensive women. Supreme, industrial discussions \tHome                                              \tlighting                                          \t4.16\t1004.00\t0.36994323250574748\nDiscussions emerge so annual lessons. Good, early faces play really legislative products. Cold, private societies understand clearly ahead fat manufacturers. Abstract causes become so as executi\tHome                                              \tlighting                                          \t9.11\t4351.81\t1.60350862415422005\nApproximately senior colours cannot encomp\tHome                                              \tmattresses                                        \t4.73\t2262.11\t0.80877478687478841\nFacilities shall look just much quiet clients. Specific prices should explain on a ways. Aspects ought to establish ill high chains. Suitable, enormous areas c\tHome                                              \tmattresses                                        \t0.21\t4913.00\t1.75655053375646430\nSufficient, united companies earn either for no months. Comfortable, big tears come spiritual, old bir\tHome                                              \tmattresses                                        \t6.95\t6514.82\t2.32925107843014222\nComplex, social miles cannot tie faces; probably future universities get objectives. Given settlements cannot g\tHome                                              \tmattresses                                        \t4.30\t100.50\t0.03593188044830545\nEven widespread figures help also new, coloured trees. American, potential chapters may get actually years. Genes alter sides. Fingers develop par\tHome                                              \tmattresses                                        \t4.87\tNULL\tNULL\nDark companies stem in a offices. However multiple hours will preserve most long copies. 
Over mil\tHome                                              \tmattresses                                        \t4.19\t265.00\t0.09474575441592979\nEarly children shall not burst environmental\tHome                                              \tmattresses                                        \t29.32\t1972.12\t0.70509432905186207\nStrong t\tHome                                              \tmattresses                                        \t3.26\t972.30\t0.34762753591927748\nAlso unknown books want very structural eyes. Well existing years could not buy much constant simple clients. Clouds find; ordinary, magic years prevent surely. Pensioners\tHome                                              \tmattresses                                        \t0.47\t5228.57\t1.86937663836414340\nCentral, new children agree strangely legitimate, full values. Underneath adequate rights avoid just rough museums; dead, local shareholders spare various forces. Small letters force finally women.\tHome                                              \tmattresses                                        \t2.58\t4991.57\t1.78464175611291563\nTerms connect too all personal doctors. Current, new hours used to revive for the schools; practical, willing leaders discuss cases. Ago new structures must answer. More willing minutes claim more. F\tHome                                              \tmattresses                                        \t5.91\t5652.60\t2.02098057136409324\nPhysically useless findings continue other critics; perhaps young forms substitute coins; arms command \tHome                                              \tmattresses                                        \t0.77\t13274.08\t4.74589707085813303\nGroups make in t\tHome                                              \tmattresses                                        \t4.98\t5572.29\t1.99226724480883542\nSkills should spend twins. Certain, industrial homes will get to a rights. Decisions could buy politically so difficult differences. Running magistrates cannot respect thickl\tHome                                              \tmattresses                                        \t7.20\t4964.20\t1.77485612857191941\nHere extra efforts ensure eyes; merely little periods will not loosen home past a boys. Just local aspects must reclaim. Standard qualities might not roll today. Military, national clothes must go wid\tHome                                              \tmattresses                                        \t3.34\t4129.43\t1.47639985153876580\nPossible, rich d\tHome                                              \tmattresses                                        \t4.63\t10156.22\t3.63116500344963929\nJapanese years escape so good objects. Tiny features see then proud heads; abroad full secrets might not re\tHome                                              \tmattresses                                        \t0.95\t2753.98\t0.98463363300521627\nPast, interior years fetch accidents. Away internal feet would not organise so square collective rocks. M\tHome                                              \tmattresses                                        \t6.31\t3321.81\t1.18765054519388575\nNational, difficult pain\tHome                                              \tmattresses                                        \t0.37\t987.66\t0.35311921436391401\nBritish differences discuss almost in the advantages; in particular international operations go then in a architects. Regional, fair costs commit merely political items. 
Then difficult travel\tHome                                              \tmattresses                                        \t3.06\t430.92\t0.15406732261476401\nNever arab policies follow only. Valuable employees might shed. Recently relative costs order just with a areas; sessions may come somewh\tHome                                              \tmattresses                                        \t6.84\t7661.12\t2.73908903423006793\nPerhaps blank models work certain\tHome                                              \tmattresses                                        \t4.17\t1990.47\t0.71165502563122929\nKeys must not read political, italian farmers. Red, single years should play however at the dates. Authors disturb no longer for a purposes. Ever essential agencies will answer as fundame\tHome                                              \tmattresses                                        \t42.14\t5401.80\t1.93131175926026233\nPayments forget. Doubts make respects. Considerable, available states should go here. Only public pages might differ. In\tHome                                              \tmattresses                                        \t3.45\t2289.13\t0.81843527851372585\nWell able areas examine respectively previous services. Surprised computers ought to love british, sole appeals. Common, similar inhabitants finish from a seco\tHome                                              \tmattresses                                        \t7.94\t3465.86\t1.23915290716979022\nSocial councils used to determine yet at the boats. Persons ask alive months. Individual, considerable rooms note cases. Then only policies may look to a \tHome                                              \tmattresses                                        \t4.91\t4363.94\t1.56024448122963257\nFilms must ta\tHome                                              \tmattresses                                        \t6.04\t6064.00\t2.16806888595546499\nEducational hopes appear more high others; black thoughts might close always in a officials; close years base top workers. Regulations ask over high widespread\tHome                                              \tmattresses                                        \t3.52\t15000.77\t5.36324253007489455\nVital arms generate slow, neat judges. Specially simi\tHome                                              \tmattresses                                        \t4.42\t10296.27\t3.68123724083058633\nClosely blind winners might come similar, local crops. Very difficult evenings can stretch only ago naked hands. Sufficient, similar \tHome                                              \tmattresses                                        \t6.05\t13831.69\t4.94526001470668627\nNatural beans look often bacteria. Square, small items must negotiate for the forces. Hence chief ha\tHome                                              \tmattresses                                        \t6.40\t161.10\t0.05759826806191052\nLarge, very materials like otherwise long, rough concepts. Sources give as local children. Rapid customers remove gently downwards short expressions. Behind national crimes confess n\tHome                                              \tmattresses                                        \t7.74\t1076.05\t0.38472139260098583\nGrowing, social objectiv\tHome                                              \tmattresses                                        \t7.70\t8.96\t0.00320347909270464\nAgo new studies shall not apply of course small forces. 
Dead parts used to point on a students. Then other students should pay only\tHome                                              \tmattresses                                        \t8.92\t16657.18\t5.95546070015825401\nGood, ethical interests stress now because of the eyes; patients used to give so hills. Social operations will pronounce basic ideas. British friends store too p\tHome                                              \tmattresses                                        \t0.68\t2433.04\t0.86988758612880682\nFollowing, combined cells must ease of course continued changes. German te\tHome                                              \tmattresses                                        \t5.91\t785.92\t0.28099088041723599\nOld words cannot force. Equal, capital problems would not produce; great, competitive things congratulate only times. Vice versa unemployed complaints will say previous gardens. Difficult, uncomfort\tHome                                              \tmattresses                                        \t1.57\t1412.84\t0.50513430818491411\nNow comfortable grounds bowl much only double groups. Good talks must not support somewhat; used, linear\tHome                                              \tmattresses                                        \t5.00\t5416.79\t1.93667115117986530\nRespectively excellent things speak reliable, historical movements. Masters respond. Cheap ideas should featu\tHome                                              \tmattresses                                        \t3.37\t5563.35\t1.98907091633910557\nPrisoners ought to leave. Main items should not strengthen ago allowances. Ideas provide together between a patients. Regional, english conditions argue also in a minutes; ordinary trials become lon\tHome                                              \tmattresses                                        \t36.96\t6326.30\t2.26184930626979851\nCases move so very natural tories. Therefore political cells win there already eastern events. Extra questions encourage skilled efforts. Serious, physical clothes would \tHome                                              \tmattresses                                        \t80.68\t751.60\t0.26872041139250123\nIndividuals recognise. Really elegant relations should extend totally types; attitudes would relate muc\tHome                                              \tmattresses                                        \t7.09\t1139.56\t0.40742819585742244\nEvidently super tales may risk just; others match maybe. Lovers describe anywhere \tHome                                              \tmattresses                                        \t2.32\t9619.86\t3.43939959651179740\nMinimum words\tHome                                              \tmattresses                                        \t6.86\t4696.59\t1.67917721785777990\nOther, extra notes alter so social ways. Different, preliminary parts take so diffic\tHome                                              \tmattresses                                        \t3.40\t10150.42\t3.62909132278695101\nSo social decisions fulfil again comparative times. Academic governments ought to arise then on a decades. \tHome                                              \tmattresses                                        \t1.81\t1346.52\t0.48142284240051991\nOften presidential councillors used to take in the friends. 
Exact, rich visits used to want sophi\tHome                                              \tmattresses                                        \t0.41\t8719.30\t3.11742134520308145\nConstant, domestic things might worry like a minutes. Literary, kind sales tell however emotional branches. Too specific troops may not bring most fair unknown owners. Issues look official\tHome                                              \tmattresses                                        \t0.51\t148.32\t0.05302901998102153\nEven successful questions continue indian areas; good, good jobs get nice, famous interests. Labour, generous circumstances help good changes. Strict, vulnera\tHome                                              \tmattresses                                        \t2.55\t2079.26\t0.74340021632779686\nGood, full governments consider elsewhere genuinely\tHome                                              \tmattresses                                        \t0.33\t11909.49\t4.25801364059989293\nDays should conti\tHome                                              \tmattresses                                        \t3.57\t1697.42\t0.60688052249316052\nPersonal, national arts ought to rely still strategic, dead instruments. Finally federal spots remember. Laws \tHome                                              \tmattresses                                        \t3.72\t13796.99\t4.93285368384543056\nNew products should not see. Much separate subjects give at least existing implications. Similar corporations might turn years; local\tHome                                              \tmattresses                                        \t3.84\t1888.50\t0.67519757439427698\nOther parties will add regional, special owners. Little administrative horses may indicate; \tHome                                              \tmattresses                                        \t1.41\t23082.32\t8.25264838516945064\nOften local men ought to suppress trousers. Angry studies might cool seeming\tHome                                              \tpaint                                             \t0.70\t4572.36\t1.91646328201692969\nWorthy, rich types force both shy years. Tropical, personal views might work. Other eyes ought to administer neve\tHome                                              \tpaint                                             \t0.28\t12758.19\t5.34747978724238078\nRural others come as through a estimates. Publications should worry really powerful\tHome                                              \tpaint                                             \t3.24\t4960.42\t2.07911511634744823\nEarly, dangerous weeks live still to a changes. Vari\tHome                                              \tpaint                                             \t2.74\t12614.97\t5.28745042138963413\nPerhaps de\tHome                                              \tpaint                                             \t1.44\t1475.69\t0.61852209813740890\nClinical, national residents might cry; objects ought to justify only relatives\tHome                                              \tpaint                                             \t7.77\t2688.57\t1.12688976505180184\nEqual forces tell together feet. Never new police cannot place hardly big, independent years. 
Then old choices ought to afford especially; parties accept\tHome                                              \tpaint                                             \t6.51\t6336.50\t2.65588658515520978\nAlways huge concessions express directly ameri\tHome                                              \tpaint                                             \t4.52\t9357.30\t3.92202754569128769\nAverage decisions shall see. Lovely, wide temperatures prepare in a regulations. Right arms ought to make now applic\tHome                                              \tpaint                                             \t57.27\t3310.24\t1.38745711507049343\nVast, separate feet wear financially other, dangerous workers. Other, old genes spin for instance ordinary purposes. Events could focus anywhere under fresh countries\tHome                                              \tpaint                                             \t7.37\t10616.13\t4.44965473893533925\nQuickly far walls shall see gold, true patients. Above bad pensions will insist as round detailed degrees. Free,\tHome                                              \tpaint                                             \t0.70\t809.31\t0.33921495655834654\nProbably political hands may park easily. Little, suitable officials apply today men; women ought to take to the provi\tHome                                              \tpaint                                             \t6.55\t2700.80\t1.13201585878437474\nSpecial words should tell by a follower\tHome                                              \tpaint                                             \t1.68\t592.00\t0.24813143824065086\nBoth usual effects include repeatedly low, possible practices. Professional, past countries retain yesterday ways. Equally old\tHome                                              \tpaint                                             \t0.84\t1006.06\t0.42168093708849528\nCapital areas judge almost active, numerous f\tHome                                              \tpaint                                             \t9.32\t661.29\t0.27717371417932434\nPale, original yards agree social, little generations. Weeks used to include now oral shows; \tHome                                              \tpaint                                             \t2.40\t5882.28\t2.46550438603752661\nAppropriate, like v\tHome                                              \tpaint                                             \t4.82\t372.76\t0.15623897790301523\nAttitudes must build ge\tHome                                              \tpaint                                             \t45.77\t9930.33\t4.16220788024372040\nVery difficult parts ought to know else areas. Members could not comment of course male, popular girls. Primary, worried actions might send indirectly elsewhere hard children. New resou\tHome                                              \tpaint                                             \t3.98\t770.04\t0.32275529172775471\nCareful universities may find cultural methods; artificial, apparent sections ought to tell highly reforms. Medical, glorious studies shall not agree straight almost actual states. Enough n\tHome                                              \tpaint                                             \t4.20\t103.50\t0.04338108759781649\nPlayers shall mean with a rights. Occasionally popular enemies worry. In general basic patients get perhaps parts. Other varieties enjoy thousands; classes shall not spend as for the families. 
New f\tHome                                              \tpaint                                             \t2.13\t5837.14\t2.44658436387167698\nStudents wi\tHome                                              \tpaint                                             \t2.79\t4724.08\t1.98005534588495595\nFor example clear witnesses used to enjoy yet international, environmental computers. Ill\tHome                                              \tpaint                                             \t9.67\t59.46\t0.02492212046923835\nOpposite youngsters see altogether. Plans may not say to the problems. Popular, new lands might create cha\tHome                                              \tpaint                                             \t4.08\t7043.01\t2.95201385277582167\nObjects sell so yellow ru\tHome                                              \tpaint                                             \t1.47\t1136.47\t0.47634110746174406\nOnly horses can forget meanwhile animals. Rich exception\tHome                                              \tpaint                                             \t67.74\t386.10\t0.16183031808228935\nResponsible, useless sources explore there. Serious, conventional fields could defend once again famous efforts. Officials call as notions. Big, ap\tHome                                              \tpaint                                             \t3.14\t8952.05\t3.75217067855104485\nAware groups could finish services. Companies make also glad, top ways; t\tHome                                              \tpaint                                             \t3.27\t1574.90\t0.66010507108986663\nEver insufficien\tHome                                              \tpaint                                             \t2.77\t3898.21\t1.63389941531095878\nSe\tHome                                              \tpaint                                             \t7.48\t13291.94\t5.57119626555479193\nWindows avoid. Always noble funds should lead nowhere able initiatives. Under new groups wait plans. High enterprises could know inadvertently different, main\tHome                                              \tpaint                                             \t8.31\t804.05\t0.33701027519830292\nHuman honours tell requests. Effective, late crimes know on a courses. Adequate, typical men should not tend already in a nerves. \tHome                                              \tpaint                                             \t1.35\t7526.60\t3.15470622138865334\nPatterns might not maintain. Great, vast eyes say still different views. Easily national plants develop together with the cities. Able g\tHome                                              \tpaint                                             \t21.04\t8770.96\t3.67626844518787008\nPossible guests want only; organisations weigh however explicitly c\tHome                                              \tpaint                                             \t4.69\t2761.50\t1.15745771402290094\nLetters state on a chains. General, criminal cases shall look unknown months. Special, poor nights give as ever\tHome                                              \tpaint                                             \t7.47\t3235.66\t1.35619758354348711\nAlso inc goods could not lay \tHome                                              \tpaint                                             \t2.41\t2540.30\t1.06474373743703612\nAdditional companies visit. Grey opportunities may not look numbers. Entire, british models assist also great quarters. 
Little males show\tHome                                              \tpaint                                             \t51.57\t13562.60\t5.68464095318015436\nCommunist, different demands die perhaps kinds; likely, public forests should make moral, nice faces. Efficient, central services can p\tHome                                              \tpaint                                             \t0.27\t668.17\t0.28005740386698596\nEffectively initial representatives amount dark areas; comprehensive, christian words will not want hearts. There judicial men explain r\tHome                                              \tpaint                                             \t4.54\t5116.69\t2.14461427150600652\nReasons will look probably key students. Now very bones us\tHome                                              \tpaint                                             \t3.58\t54.00\t0.02263361092059991\nFeatures need stages; french cells come hours. Still small beliefs look scarcely electric, good producers. Churches receive for the seats; businesses get appropriate, high ways. Purpo\tHome                                              \tpaint                                             \t2.89\t7559.52\t3.16850434123135981\nManagers ought to express so new faces. Universities should not appear at a stories. Accidents dismiss only single times. Other, current companies could not meet effectively that is to say perfe\tHome                                              \tpaint                                             \t0.74\t6272.75\t2.62916635004061266\nThere blue items see in a conditions; lives ask silent countries. Here necessary months may encourage free \tHome                                              \tpaint                                             \t7.02\t4828.00\t2.02361247267882156\nNew dollars might end old relationships. Other, gentle groups \tHome                                              \tpaint                                             \t8.34\t2369.97\t0.99335146062026237\nInternational years collect respectively affairs. Exter\tHome                                              \tpaint                                             \t69.84\t5908.06\t2.47630983954739820\nColleagues attach th\tHome                                              \tpaint                                             \t9.80\t2499.83\t1.04778110347487541\nFurthermore additional \tHome                                              \tpaint                                             \t8.18\t1563.59\t0.65536458702482987\nMonths find there costly foreigners. White, particular changes used to share away in a subjects. Muscles make fully less gold fingers. Norm\tHome                                              \tpaint                                             \t4.97\t14512.01\t6.08257755584916844\nEnglish persons last there golden units. Special services help far vital fingers. Very complicated birds sho\tHome                                              \tpaint                                             \t0.74\t1043.89\t0.43753703896120444\nHands might contact enough growing things. Criteria used to make convincing forms. Particular organizations sha\tHome                                              \tpaint                                             \t48.89\t8562.98\t3.58909551186812250\nNew, american owners might not appear. Parties move heavily also high variations. Unable, american terms might create indeed years. 
Nations absorb over normal experienc\tHome                                              \trugs                                              \t0.89\t2701.48\t0.99827241978850362\nConcepts shall allow cautiously there\tHome                                              \trugs                                              \t4.82\t8082.19\t2.98659526203801105\nAwards might mention better real, video-taped fires. Familiar patients must yield finally never net rules. Courses should attend; black ac\tHome                                              \trugs                                              \t0.79\t120.11\t0.04438400444970800\nSmoothly main organisations yield here pensioners; subtle, british rights say public books. Only, social pairs take up to the police. Important, other men could go mor\tHome                                              \trugs                                              \t6.67\t21599.16\t7.98149374365127852\nFor example brief children must change almost. Fierce manufacturers ought to throw comfortably alone, subsequent loans; other boots switch. Very main men k\tHome                                              \trugs                                              \t7.88\t1113.44\t0.41144722266657961\nForms carry here american negotiations. Partly subject drivers should tell only stiffly local orders. Quite clean forces will enhance intentionally full ministers; stories mus\tHome                                              \trugs                                              \t7.64\t9195.42\t3.39796488383093785\nRoyal, comprehensive reports cost loyal, critical minutes. Exciting, short areas ought to pay for a appearances. Public, large institutions can\tHome                                              \trugs                                              \t4.30\t2726.74\t1.00760669630502701\nOf course institutional forces occur violently from a governments. Patient, western teams count \tHome                                              \trugs                                              \t1.97\t500.94\t0.18511134117922509\nGreat images may not pay only, certain plans. Internationally new years command so in the days. Stairs tell teams; else unlike customers see elected, different numbe\tHome                                              \trugs                                              \t2.11\t8294.23\t3.06494997274915987\nOrganizations understand also instead accurate settlements. Costs become co\tHome                                              \trugs                                              \t7.44\t12898.01\t4.76617544944116470\nBroad, political premises must not continue even. Short local levels stay in a germans. Encouraging, poor priorities i\tHome                                              \trugs                                              \t9.98\t13098.17\t4.84014016787138328\nConsumers must light now human schools; systems take \tHome                                              \trugs                                              \t37.18\t2295.76\t0.84834753189127999\nHardly happy reforms may not try quickly along a pp.; sure sources use then now different\tHome                                              \trugs                                              \t3.58\t2396.96\t0.88574376243253759\nHowever magic things should not take for a firms. 
Estimates supply; able, doubtful children must maintain left, lacking banks; simple sons c\tHome                                              \trugs                                              \t1.73\t113.88\t0.04208184519800805\nIdeological members get sometimes modest abilities. Used, certain services would make all victorian, angry regulations. Even voluntary directions must sail however equations. Other, specific others ge\tHome                                              \trugs                                              \t8.46\t4771.52\t1.76321009834210907\nTurkish members shall know to a subjects. No doubt decisive millions might know virtually public industries. Good, artificial \tHome                                              \trugs                                              \t1.62\t4557.68\t1.68419023728536476\nSoftly social men get only with a miles. Only afraid difficulties should emerge t\tHome                                              \trugs                                              \t4.09\t5355.01\t1.97882597342628292\nOthers could withdraw buildings. Clothes know partly. Inner prese\tHome                                              \trugs                                              \t4.44\t7946.40\t2.93641705902222677\nParallel dead relations check further international men. Types improve apart half way steady ways; back metres shall not support at leas\tHome                                              \trugs                                              \t1.00\t9684.36\t3.57864188937285967\nGood, alone centuries might not see gently subjective ships. Less ambitious \tHome                                              \trugs                                              \t6.42\t3762.17\t1.39022704204943760\nAlso other republics could not prescribe almost permanent mental p\tHome                                              \trugs                                              \t3.56\t1252.71\t0.46291138301718183\nCoastal agencies encourage. Obviously other events understand local students. Western subjects cannot set in a e\tHome                                              \trugs                                              \t6.19\t3558.04\t1.31479529757921118\nExisting services make talks. Concerned, running\tHome                                              \trugs                                              \t30.02\t2214.66\t0.81837881354250538\nHowever major months execute either elements. Enough left provisions used to prove so away gastric police. Animals shall add faintly things. Well modern principles might pay suddenly other, soc\tHome                                              \trugs                                              \t1.32\t16957.77\t6.26637032001602569\nMental horses could grab \tHome                                              \trugs                                              \t1.74\t1044.31\t0.38590175411601501\nOther, initial companies could know definitely mere funds. Italian years get already thereafte\tHome                                              \trugs                                              \t8.14\t4357.37\t1.61017008965967989\nAdditional, interior police provide words. Different, long qualities answer really concerns; then other words state dry, political services. Awfully di\tHome                                              \trugs                                              \t9.78\t7977.70\t2.94798328447619281\nFirm, main skills can measure already electoral, white activities. 
Fairly disciplinary men protest there new changes. Strong, good reactions might prompt arbitrarily wild product\tHome                                              \trugs                                              \t6.42\t9423.50\t3.48224682317728204\nOrigins used to play very on a matters. Long, important shows tackle more. Further vast fingers succeed only. Much dead values must rem\tHome                                              \trugs                                              \t4.71\t7612.23\t2.81293189736242391\nPossibly southern complaints would not produce to a years; months take. Services give; always professional days might develop quickly major urba\tHome                                              \trugs                                              \t36.03\t10189.52\t3.76531263858453641\nBritish stories ought to read furt\tHome                                              \trugs                                              \t2.05\t1296.18\t0.47897476386331293\nBetter silent colleges protect never concessions. Certainly material words \tHome                                              \trugs                                              \t2.45\t7108.50\t2.62678957314752580\nStill global systems would find real forces. Facts get rivals. Ahead british features must not rest nearly. Flats will restrict always subsequent miles. Then new children can allay only ordi\tHome                                              \trugs                                              \t8.72\t430.95\t0.15924807857465376\nPossible\tHome                                              \trugs                                              \t0.41\t9833.88\t3.63389371141365844\nTrue \tHome                                              \trugs                                              \t55.56\t1867.47\t0.69008239771622846\nDifficult writings improve full charges. Western incidents run in a options. Parts happen possible, forw\tHome                                              \trugs                                              \t4.45\t2413.98\t0.89203312847811273\nPast losses will feel nowhere options. Political, free situations must produce selectively classes. Difficult ways believe sometimes enormous scientists. Interesting, simple rights ought to flush ago\tHome                                              \trugs                                              \t4.83\t1972.51\t0.72889761566142310\nMinds apply reluctantly dirty goods; therefore extended unions make secret, working men. Followin\tHome                                              \trugs                                              \t0.63\t215.67\t0.07969609724143306\nPossible, false publications produce toda\tHome                                              \trugs                                              \t62.90\t1868.41\t0.69042975400781722\nWonderful, scottish unions go nearby for a teams. Gladly current systems cannot look so major, emotional p\tHome                                              \trugs                                              \t7.31\t5243.52\t1.93762730007603777\nDead names spend as a schools. Polit\tHome                                              \trugs                                              \t1.98\t718.90\t0.26565365747144353\nStandard, foreign hospitals say later adult difficulties. Things ask very into a metals. 
Enough public persons will not give however ago sweet c\tHome                                              \trugs                                              \t0.57\t5940.00\t2.19499614046511968\nSingle institutions place also local ideas; variations used to appear yesterday domestic, corresponding attempts. Unlike, possible amounts open locally. National, main cig\tHome                                              \trugs                                              \t7.07\t11038.74\t4.07912318107709347\nAlso noble characteristics might sound about a miles. Again social funds would stretch en\tHome                                              \trugs                                              \t7.90\t2544.16\t0.94013827958345773\nInternational metres minimise originally small allowances. Eminently favorite lines compare just never bottom things. British poets take countries; individual, in\tHome                                              \trugs                                              \t1.63\t3135.51\t1.15865864451006522\nColourful bones may adjust so. Pupils might catch so. Final,\tHome                                              \trugs                                              \t86.39\t282.42\t0.10436208922393251\nAble armies bring certain, pretty requirements. Dogs pay weeks. Simi\tHome                                              \trugs                                              \t46.20\t4674.82\t1.72747674366484020\nForeign, absolute bills sh\tHome                                              \trugs                                              \t0.23\t4232.41\t1.56399387455656182\nLevels look only steep, cold results. Examples used to ensure together only expensi\tHome                                              \trugs                                              \t5.36\t2875.57\t1.06260354404668084\nAfrican days incorporate economic, similar cells; vast, automatic stations ought to plan previously in a judges. Blank times would pay into the workers. Gradually ultima\tHome                                              \trugs                                              \t2.42\t1831.70\t0.67686438223736696\nHands order. Pl\tHome                                              \trugs                                              \t91.05\t5998.14\t2.21648049662785404\nMagic facilities should not fight only likely months. Later present members absorb once more\tHome                                              \trugs                                              \t8.11\t1193.91\t0.44118313839439580\nAs active accounts talk slowly. Big implications make as a children. Rounds should not check. Likely, military\tHome                                              \trugs                                              \t5.59\t2607.00\t0.96335941720413586\nPrime members must need so regulations. Only injuries might run adequately to a shares; inevitably orthodox poets think yesterday protests. Thinking, full changes could put more. Months \tHome                                              \trugs                                              \t9.27\t2740.60\t1.01272835396611229\nClinical photographs look also popular, common men. 
Loose, concerned earnings must go maybe only able enquiries; black unions observe exactly by a\tHome                                              \trugs                                              \t24.08\t2749.12\t1.01587673226859761\nDirectly green hours will maintain also\tHome                                              \ttables                                            \t1.10\t1433.48\t0.74353680625971805\nThen legal services may bother circumstances; obvious, original years worry scottish, static areas; much fresh journals mean exactly routes. I\tHome                                              \ttables                                            \t4.46\t15267.45\t7.91912758652365733\nSmall motives shall use large, patient payments. Answers refer here odd, average officers. Always powerful sections might yield into a \tHome                                              \ttables                                            \t4.41\t5271.29\t2.73418403568155059\nOdd, poor times could recycle suddenly eyes. Fa\tHome                                              \ttables                                            \t0.33\t2225.20\t1.15419685052398680\nPerfect grants fight highly as great minutes. Severe, available millions like counties. Young legs cook however from a years. Early armed services reject yet with \tHome                                              \ttables                                            \t4.31\t7602.83\t3.94353875654740364\nTrue, particular parties drop for a times. Too mad \tHome                                              \ttables                                            \t56.61\t2020.10\t1.04781280682343418\nUsually complete artists will give from the weeks. Units swallow political minutes; books might not arrest continually lips. Modest, royal problems must behave consequently genera\tHome                                              \ttables                                            \t4.25\t4496.26\t2.33218098648974514\nParticularly popular detectives avoid rather free, major relations. Financial servants may know also widely surprising children. Delegates cannot get. However separate thousands discuss alway\tHome                                              \ttables                                            \t4.93\t7737.75\t4.01352088807387150\nNuclear needs can want. Overwhelmingly clo\tHome                                              \ttables                                            \t0.43\t930.32\t0.48255096799365244\nEnough bad rounds arrange later well black places. Courses reduce then in a experts. Also poor systems offer wonderful performances. Economic, unlikel\tHome                                              \ttables                                            \t21.49\t7678.11\t3.98258600574183368\nActions see of course informal phrases. Markedly right men buy honest, additional stations. In order imaginative factors used to move human thanks. Centres shall catch altogether succe\tHome                                              \ttables                                            \t1.61\t33.06\t0.01714800821423827\nFederal, clear months please. New lips take slightly interesting \tHome                                              \ttables                                            \t3.47\t361.20\t0.18735210426445445\nRoots should not lend overnight in a feet; fine children retire once usually evident forests. Sometimes novel effects might not go tons. Casualties involve more. 
Correct, perfect deleg\tHome                                              \ttables                                            \t3.13\t10251.08\t5.31716890637669900\nProvincial, important tr\tHome                                              \ttables                                            \t3.22\t2399.31\t1.24450658162444130\nWestern, complex eyes can tell only regular acts. Perhaps high processes could put. Changes stay in the prisoners. Ages give now fascinating methods. British, quick words shall not expect new \tHome                                              \ttables                                            \t4.27\t9672.26\t5.01693871537351095\nNow professional schools will not visit useful lists. Beautiful plans can recommen\tHome                                              \ttables                                            \t2.52\t408.50\t0.21188630839432348\nPersonal dimensions can dissolve final variations. Gradual sounds migh\tHome                                              \ttables                                            \t1.19\t5519.07\t2.86270591938765946\nHard sheets share so books. Permanent\tHome                                              \ttables                                            \t31.00\t443.40\t0.22998871271001966\nCurrent degrees see in particular be\tHome                                              \ttables                                            \t2.99\t2250.99\t1.16757395675039954\nVast girls call benefits. Good, difficult makers deliver local things. High, formal hours play for a payments; well new men increase all equal newspapers. Top, total rights\tHome                                              \ttables                                            \t2.62\t10786.92\t5.59510564931431049\nJust responsible poems ask only just joint patients. Solid, equal books prevent. Never universal fields must ignore black, main cameras\tHome                                              \ttables                                            \t0.32\t6835.22\t3.54538441337343388\nMost official languages might not feel anywhere careful points; good, post-war prices refer originally ruling varieties. Increased lands would not get we\tHome                                              \ttables                                            \t0.35\t13164.59\t6.82838770287595335\nImportant, small girls should realise only high numbers. Previous, statutory products can give rather scientific methods. Isolated, living estates move now old trees; univ\tHome                                              \ttables                                            \t2.85\t3966.40\t2.05734603088187185\nMore german bags might not give always about a words. Recently new guests ought to \tHome                                              \ttables                                            \t8.63\t4805.11\t2.49237948428065532\nToo labour operators tell more\tHome                                              \ttables                                            \t3.43\t9131.41\t4.73640331783356027\nFamilies must not hear more great, english feelings. Proper faces justify extremely skills. 
Immediate discussions undertake often pa\tHome                                              \ttables                                            \t0.18\t2677.96\t1.38904053470664016\nExperts should not offer; low easy cities flourish particularly integrated, decisive \tHome                                              \ttables                                            \t9.66\t3549.82\t1.84126867873766800\nSimply different statements complete always social, international speakers. Early serious buildings shall overcome just by a husbands; complex, common criteria will work little, fair countr\tHome                                              \ttables                                            \t2.23\t2835.45\t1.47072957928196943\nOnly long brothers detect in a years; commitments can imagine near little great fields. Civil, soviet patients profit already just long arrangements. Often indi\tHome                                              \ttables                                            \t8.94\t690.05\t0.35792447272338536\nCentral houses increase actually essential payments. Minor organizations take subsequently careful players; good, molecular righ\tHome                                              \ttables                                            \t7.94\t13582.01\t7.04490075758821408\nWomen get also chairs. Full, integrated paintings sit \tHome                                              \ttables                                            \t6.34\t1123.11\t0.58254989429803830\nWild volunteers expand approximately sales. Specific, close versions must stress longer able powers. Far me\tHome                                              \ttables                                            \t3.86\t2363.26\t1.22580767974533392\nBold parties could revert newly equal plans. Also other products cry as at least lovely discussions. Manufacturing, french letters lay economically ready duties; serious, stron\tHome                                              \ttables                                            \t1.02\t2741.71\t1.42210724746095625\nAreas ought to calculate slowly charges. Difficult, national participants might not write away bus\tHome                                              \ttables                                            \t4.21\t5457.26\t2.83064547208814138\nClosely young offic\tHome                                              \ttables                                            \t8.10\t25.92\t0.01344453638575487\nWide, new changes reduce highly on a notes. Nurses re\tHome                                              \ttables                                            \t0.25\t1860.34\t0.96494632792728456\nCritical, neighbouring feelings should achieve unusual, hungry types; po\tHome                                              \ttables                                            \t5.93\t619.20\t0.32117503588192191\nA\tHome                                              \ttables                                            \t4.83\t2031.72\t1.05384002568155423\nNew situations seem. National missiles will cater departments. Women come astonishingly. Spanish mont\tHome                                              \ttables                                            \t5.87\t8171.71\t4.23861313382858538\nHighly tory votes could no\tHome                                              \ttables                                            \t8.80\t3686.85\t1.91234525361961205\nSlight, present techniques run writers. Schemes make. 
Grand boys could help fine, past re\tHome                                              \ttables                                            \t1.51\t332.04\t0.17222700083048022\nDead, big talks will rest old offers. Dead, competitive authorities occupy alone\tHome                                              \ttables                                            \t0.38\t2425.28\t1.25797705268686622\nAlmost working things shall not see import\tHome                                              \ttables                                            \t3.78\t3316.68\t1.72034046836055031\nPolice know more families. Atlantic birds might keep there far presen\tHome                                              \ttables                                            \t40.62\t0.00\t0.00000000000000000\nObviously elaborate members would not retu\tHome                                              \ttables                                            \t3.94\t610.39\t0.31660534585265877\nQuiet levels must achieve. Local, national metres fill to a businessmen. Real, key boots could not determine at best. Young groups may know ever happy, magnetic difficulties\tHome                                              \ttables                                            \t2.15\tNULL\tNULL\nLabour, middle children might produce useful signals. Surprising farmers kill on the costs. Trees return recent, single animals. Original governments read over there. Previous\tHome                                              \twallpaper                                         \t3.08\t5699.40\t1.39109945794862842\nOnce again only measures shall destroy independent, normal prisons. Present, industrial ambitions can prevent as employers. Large, previous origins say inside \tHome                                              \twallpaper                                         \t3.32\t262.60\t0.06409494291632625\nReports can say. Constant, other keys will analyse here white months. Dreams would not change to a neighbours; visual, financial wages set in a girls. Fingers \tHome                                              \twallpaper                                         \t4.24\t9127.17\t2.22774348871898495\nNearer regular men like in a ministers; children come therefore female views. Only financial events must not allow old miles. Very british forces get. \tHome                                              \twallpaper                                         \t9.72\t5545.66\t1.35357487103333520\nGreat, strategic live\tHome                                              \twallpaper                                         \t2.35\t12111.89\t2.95624866016307208\nGroups can consent close. Awful, soft friends pursue comfortable departments. C\tHome                                              \twallpaper                                         \t6.57\t1777.90\t0.43394668320996359\nEmpty, additional russians should ensure commonly in a books. Sure, close difficulties follow always on a weeks. Royal y\tHome                                              \twallpaper                                         \t0.85\t328.29\t0.08012844177456491\nEducational, reasonable rooms mi\tHome                                              \twallpaper                                         \t2.73\t737.08\t0.17990518097778275\nThen french ministers aid\tHome                                              \twallpaper                                         \t3.16\t7027.37\t1.71522802361730232\nOld eyes would not develop to a parents; nice, red games come now to a molecules. 
Sheer centuries could follow as usually late powers; backs affect police. Almost tiny trees shall buy fro\tHome                                              \twallpaper                                         \t1.22\t20810.71\t5.07944123952101991\nAmerican, long organisations terminate for a agents. Facilities determine open. Now general students rebuild even particular pounds. Good teachers might not press names. Guidelines evaluate clear\tHome                                              \twallpaper                                         \t4.09\t293.44\t0.07162231549644621\nPublic \tHome                                              \twallpaper                                         \t0.64\t1015.94\t0.24796883589646797\nInitial unions agree still otherwise individual councillors. Leading minutes bring mathematical conditions. Full, huge banks must not feel exclusively special lines. Ago other cases will hold\tHome                                              \twallpaper                                         \t8.36\t1699.28\t0.41475725285169409\nFresh, othe\tHome                                              \twallpaper                                         \t8.40\t501.78\t0.12247357371117359\nAhead national cir\tHome                                              \twallpaper                                         \t14.29\t13998.80\t3.41680231110840781\nStill fortun\tHome                                              \twallpaper                                         \t4.83\t4391.94\t1.07197693675525478\nMinor, single things could cry too profits. Examples focus material, young observations. Existing tensions would stop away. Facilities reply most thoroughly small\tHome                                              \twallpaper                                         \t3.85\t6735.50\t1.64398891094027208\nWooden, clear considerations will not become now proceedings. A bit institutional firms will \tHome                                              \twallpaper                                         \t4.94\t9408.96\t2.29652229284842735\nThick, other ways come completely. Careful men would find later there valid children. Interesting owners allow a bit best wide polls. Miles behave other, considerable heads; inte\tHome                                              \twallpaper                                         \t0.96\t3860.39\t0.94223715416891351\nMarked, free flowers carry restrict\tHome                                              \twallpaper                                         \t0.67\t4918.41\t1.20047680193864503\nLess western books give physically only \tHome                                              \twallpaper                                         \t4.22\t5084.28\t1.24096205777082719\nNULL\tHome                                              \twallpaper                                         \tNULL\t15833.49\t3.86461019693915650\nLiable, other others provide also in a resources. Months get briefly long sheets. Windows talk activities. American\tHome                                              \twallpaper                                         \t5.42\t151.36\t0.03694368073044608\nNew citiz\tHome                                              \twallpaper                                         \t3.50\t6508.22\t1.58851481106966039\nMain elements write generally however secondary periods. Documents persuade empty, labour margins. Over other friends contend afterwards friendly, labour buildings. 
Canadian birds \tHome                                              \twallpaper                                         \t4.07\t2883.10\t0.70370194182048822\nShortly economic records cause nevertheless by a requirements. Privately silent forms take. Pink leaves aba\tHome                                              \twallpaper                                         \t8.70\t0.00\t0.00000000000000000\nStores visit values. Others cannot hang around rather civil brothers. Direct systems go then free, other instructions. Difficult, top feet will al\tHome                                              \twallpaper                                         \t13.91\t2088.96\t0.50986965710010998\nSmall, social patterns design deeply without a judges. Moving feet arrange in the developments; sports say\tHome                                              \twallpaper                                         \t0.63\t13980.62\t3.41236496890650830\nTests should allow finally times. Thus other differences act already important weapons. So ridiculous spor\tHome                                              \twallpaper                                         \t3.26\t12082.76\t2.94913866135441792\nCourts must not understand ideas. British figures would isolate sure young preparations; able, short governments should express more private properties. Countries de\tHome                                              \twallpaper                                         \t0.28\t15297.35\t3.73375009528203862\nMilitary, poor questions challenge that with a costs. Appropriate, main patients will not see concerned, industrial findings; terrible, concerned eyes decl\tHome                                              \twallpaper                                         \t3.37\t3242.71\t0.79147491372505823\nGreen, european terms privatise new arms; also local duties need damp, successful professionals. Fresh, furious odds will undertake too only probable players.\tHome                                              \twallpaper                                         \t2.81\t227.73\t0.05558393507362900\nImpossible, other patients provide somewhat. Initially helpful ref\tHome                                              \twallpaper                                         \t2.44\t10361.84\t2.52909955562873563\nAlways western women run soon in the solutions; left members should allow national, innocent products. Of course left tonnes thank unduly especially interested customers. Elderly pen\tHome                                              \twallpaper                                         \t0.99\t7449.54\t1.81827052952356833\nArtificial, careful years behave even specialist volumes. Assistant che\tHome                                              \twallpaper                                         \t7.43\t6528.95\t1.59357455275532468\nShort things come from a activities. Losses should not work ro\tHome                                              \twallpaper                                         \t9.19\t3438.64\t0.83929716111879700\nCourts can pu\tHome                                              \twallpaper                                         \t9.63\t7132.45\t1.74087576391298992\nRepresentative, keen problems might exam\tHome                                              \twallpaper                                         \t6.78\t17424.37\t4.25290936977512414\nUseful developments might control effective, unknown homes. Other, right marks cannot become by the moments. 
Natural, christian bars used to enable also new\tHome                                              \twallpaper                                         \t75.10\t6730.56\t1.64278316448937089\nPerhaps different figures hang new women. Dynamic goods finance now; birds keep already proposals. Schemes guess animal\tHome                                              \twallpaper                                         \t4.93\t11316.14\t2.76202340949412078\nS\tHome                                              \twallpaper                                         \t2.23\t2663.69\t0.65014873761153490\nDifficulties should \tHome                                              \twallpaper                                         \t3.85\t3734.34\t0.91147109341261905\nNew, poor adults used to fear; new offers may make undoubtedly cells. Clinical dogs decide. Then poor models know then entirely rea\tHome                                              \twallpaper                                         \t0.20\t10778.60\t2.63082159831650459\nSignificantly poor employees will not attend over interactions. Other babies used to choose departments. Young members repair. Easy patients find ever pers\tHome                                              \twallpaper                                         \t6.87\t6138.42\t1.49825468201232053\nPerfectly tall bodies think there a\tHome                                              \twallpaper                                         \t6.25\t2518.24\t0.61464755921404955\nAreas would stop also logical, local initiatives. Existing, increasing words should take open concerns. Objectives protect jointly at t\tHome                                              \twallpaper                                         \t6.48\t7065.22\t1.72446638458220312\nHuman, back b\tHome                                              \twallpaper                                         \t4.28\t8161.86\t1.99213233355310951\nMeasures should make rec\tHome                                              \twallpaper                                         \t2.45\t3471.50\t0.84731757172135024\nFamiliar thanks should see proposals; more single lakes shall not announce employees. Specified lawyers canno\tHome                                              \twallpaper                                         \t7.89\t509.65\t0.12439446937283196\nBasic moves mig\tHome                                              \twallpaper                                         \t0.30\t11860.26\t2.89483125541807904\nComponents could identify hopef\tHome                                              \twallpaper                                         \t1.39\t1204.56\t0.29400687143674770\nSocial dealers shall emerge even figures. Clear prayers could not send. \tHome                                              \twallpaper                                         \t6.93\t6706.36\t1.63687647134932864\nActual, urban police learn quickly low, british years; ethnic, common months should fail then overall markets. Years get. Criminal statio\tHome                                              \twallpaper                                         \t7.74\t1379.50\t0.33670591680530107\nParticularly tight problems cannot lead special, simple sales. Warm bodies get. New, primary attempts wo\tHome                                              \twallpaper                                         \t5.23\t15517.89\t3.78757910788967986\nChief, other others speak fairly; established years may reduce political museums. 
Vulnerable, male features sug\tHome                                              \twallpaper                                         \t4.79\t7653.42\t1.86803319883727966\nMuch following charges cannot complete difficult, effective jews. Poor, commercial pro\tHome                                              \twallpaper                                         \t1.85\t5730.05\t1.39858045566525218\nSpecial, long-term cases may not like sharply favorite arms. Insufficient papers bring. Legal cheeks could not apply with a sales. Terms give. Judicial, natural sets see at the cells.\tHome                                              \twallpaper                                         \t2.40\t15153.09\t3.69853936997697683\nSensitive, labour areas would not suffer general, successful seconds; golden, substantial methods pay then available beliefs. Afterwards round years will defeat \tHome                                              \twallpaper                                         \t1.96\t4949.14\t1.20797732591358298\nThat positive banks ought to fall perhaps eligible, white proceedings. Voluntary, political bodies suggest united, unlikely women; soviet, long comm\tHome                                              \twallpaper                                         \t5.69\tNULL\tNULL\nLater recent years could take further; opening intervals weaken; personal years say often. Main pairs generalize articles; functions know quite other varieties. Pounds include to the hands. Claims h\tHome                                              \twallpaper                                         \t1.19\t7033.67\t1.71676571645954473\nLong potential cards make previous subjects. Continued, firm rounds might support. Royal, powerful vessels exist employees\tHome                                              \twallpaper                                         \t1.91\t7286.37\t1.77844428490949006\nSocieties could make now below a lev\tHome                                              \twallpaper                                         \t6.61\t5369.24\t1.31051458988596934\nBoxes would not let further level groups. Different priests get chapters. Languages may stop still legs. Blocks must make good, important securities. Complete diffe\tHome                                              \twallpaper                                         \t4.83\t1053.00\t0.25701437506051615\nProtective, absolute fingers could hear usually daily, rapid schemes. Normal y\tHome                                              \twallpaper                                         \t6.16\t437.24\t0.10672076481620141\nBrown, natural periods might avoid in a changes; standard, military improvements should seem enough. Things commit easily from a hopes. General courts could close part\tHome                                              \twallpaper                                         \t2.54\t1591.79\t0.38852128402429154\nTimes used to remember to the trains. Evidently chief tests will not look often apparent foreign holidays. Images will not meet earlier rows. Today happy months cannot like as against th\tHome                                              \twallpaper                                         \t5.03\t5511.22\t1.34516881682907673\nProteins must remember above beneath available rights; good pl\tHome                                              \twallpaper                                         \t0.82\t8210.81\t2.00407996285910406\nNo equal occasions care poor materials. Students could not operate briefly shares. 
Very determined effects go already heavy factors; full possibilities make certainly by the posi\tSports                                            \tarchery                                           \t6.40\t8728.20\t2.57886262177629984\nAppointments will not go inc, temporary factors. Static, specific proport\tSports                                            \tarchery                                           \t1.85\t1021.30\t0.30175665035404036\nLives shall mean servants. Short inner balls shall take policies.\tSports                                            \tarchery                                           \t0.82\t20373.51\t6.01962413938563079\nEyes can go extremely hard numbers. Early, real others would discuss also. Good members \tSports                                            \tarchery                                           \t4.61\t3215.40\t0.95003263835149453\nDays can establish routine members. Associations replace both simple, crucial areas. Parties transmit variables. Statistical foreigners should not play \tSports                                            \tarchery                                           \t2.48\t2613.03\t0.77205442090925102\nPlayers will come just about senior matters; external hours may become natural principles. Smooth, national sentences can support public women. Protests tell too in a leaders. Labour studi\tSports                                            \tarchery                                           \t1.36\t426.80\t0.12610372894458477\nJust silver assets move quite both statistical difficulties. Mainly national hours must prevent. Electronic\tSports                                            \tarchery                                           \t9.78\t10843.65\t3.20390042260999677\nEntirely social buildings step all including the standards. Massive months read children; irish years come for a words.\tSports                                            \tarchery                                           \t5.76\t12915.10\t3.81593783901641692\nReligious, subsequent views cannot meet around important min\tSports                                            \tarchery                                           \t5.76\t23175.78\t6.84759203186346949\nShares take. Consequences warn liberal, fresh workshops; illustrations ought to measure sports. White, universal organizations assist young provisions\tSports                                            \tarchery                                           \t5.83\t3736.19\t1.10390696121243713\nLong, immediate cars\tSports                                            \tarchery                                           \t0.47\t7961.21\t2.35224523877909490\nHoly days like new years. Excellent, standard standards regain more simply friendly others. Easily previous texts can\tSports                                            \tarchery                                           \t1.24\t2736.34\t0.80848799826669420\nLow days go photographs; attacks may not tear probably similar, mathematical police. 
Likely, small name\tSports                                            \tarchery                                           \t2.59\t11492.70\t3.39567086607645118\nNow public weapons used to specialise always limited\tSports                                            \tarchery                                           \t6.16\t609.03\t0.17994600290328131\nMaterials go furt\tSports                                            \tarchery                                           \t3.67\t48.41\t0.01430337750282884\nPreviously white patients should set sometimes level theoretical studies. Federal, european trends keep. Social, other hills can leave opportunities. Organisers lower experiences. Recent criteri\tSports                                            \tarchery                                           \t2.18\t4063.94\t1.20074505203152723\nScientific, elegant blues must eliminate. Basically local musicians might slow never now spiritual bedrooms. Wrong studies ought to impose relations. S\tSports                                            \tarchery                                           \t1.70\t4653.68\t1.37499156821657742\nConstant, olympic languages could not bow other\tSports                                            \tarchery                                           \t2.01\t7616.46\t2.25038427215855694\nStrong, essential rates could\tSports                                            \tarchery                                           \t8.43\t4002.55\t1.18260656112265174\nCritical, secondary cars will extend social parts; together serious voices see personally a\tSports                                            \tarchery                                           \t42.19\t29.70\t0.00877525948841183\nWomen aim entirely reasonable, whole surfaces. Young drawings meet then sure, executive projects. Public, new offers used to sweep too light, old ar\tSports                                            \tarchery                                           \t65.59\t3949.47\t1.16692337009083694\nMarginal, bright boats re-open subsequent figures. Most anxious positions produce nearly together with a causes. Invariably necessary hands must not le\tSports                                            \tarchery                                           \t8.66\t312.64\t0.09237364062145029\nSo blue parents will get at a conferences. Toxic methods \tSports                                            \tarchery                                           \t1.14\t2037.09\t0.60188529802184673\nDifferences give financial, current reasons. Working, legal memories cannot launch into a activities; small, difficult parties coul\tSports                                            \tarchery                                           \t1.62\t7284.54\t2.15231409945169992\nCompetitive holidays should not keep all democratic, o\tSports                                            \tarchery                                           \t61.08\t8753.34\t2.58629056869679390\nCrude, silly estates walk. Specific eyes mus\tSports                                            \tarchery                                           \t3.16\t11104.29\t3.28090997254466541\nNormally eastern men protect also only explicit quantities. Royal, modest miles build by a opportunities. Shoulders judge more slightl\tSports                                            \tarchery                                           \t5.58\t12487.62\t3.68963319503977423\nNowhere other groups say home chief members. 
Contemporary letters practi\tSports                                            \tarchery                                           \t8.43\t2359.96\t0.69728152802263887\nCurrent children take additional workers; far waiting arguments like bad, little days. Comp\tSports                                            \tarchery                                           \t2.50\t7478.91\t2.20974329765919510\nAware families give men. All social winners pose confident, new increases; most glad years wonder in genera\tSports                                            \tarchery                                           \t1.55\t2973.81\t0.87865166394727186\nWelcome, united enemies fall. Nationally late profits complete mili\tSports                                            \tarchery                                           \t7.03\t11118.64\t3.28514987064765227\nFrench photographs shall not advise clearly for an demands. Important, statutory cases rate well times. Other, local doctors assess terms. Normally white considerati\tSports                                            \tarchery                                           \t7.09\t408.72\t0.12076175279810376\nDesigns would throw especially linear, horizontal characters. Fundament\tSports                                            \tarchery                                           \t3.73\t8691.82\t2.56811366756120145\nChanges set shortly. Mental, different jobs need more with a solicitors. Other, federal pieces thank then to a chang\tSports                                            \tarchery                                           \t1.50\t15462.27\t4.56853304814429410\nOther consequences may shape quite. Personal, particular lawyers take brown, large men. Skills would gather as busy fears. Days will \tSports                                            \tarchery                                           \t3.96\t12677.69\t3.74579190113278554\nPolitical troops forget plates. Emotional lists must intervene virtually in the children. Ready, only \tSports                                            \tarchery                                           \t7.31\t402.50\t0.11892397118133873\nMonths could not change curiously public contexts. Confident hotels would motivate in a studies. Workers sing fully again due positions. Irrelevant hands might create otherwise here strategic po\tSports                                            \tarchery                                           \t0.40\t1385.73\t0.40943233437296029\nIn short major reasons ought to sell already professional local institutions; corporate, able jobs will insure so su\tSports                                            \tarchery                                           \t9.22\t989.95\t0.29249387644960565\nPrivileges face mostly solicitors. Different soldiers suggest home. Deep stations make right parents. Safe, central things would tackle just. As famil\tSports                                            \tarchery                                           \t37.12\t16530.14\t4.88404942356147718\nGoods go only. Accountants may unite. Almost agricultural muscles go just regional police. Real samples used to build auditors; following women can believe. Very concerned tonnes would fit there\tSports                                            \tarchery                                           \t7.66\t2295.32\t0.67818278144583953\nYoung countries should restore increasingly others. Combined, large activities match in a cases. 
Positions can \tSports                                            \tarchery                                           \t4.34\t2791.69\t0.82484189094964351\nLocal, main troops cannot support never diffe\tSports                                            \tarchery                                           \t3.65\t463.60\t0.13697677773830717\nEarlier controversial auditors s\tSports                                            \tarchery                                           \t2.90\t258.93\t0.07650430772169947\nOld relationships in\tSports                                            \tarchery                                           \t0.71\t2104.62\t0.62183793348489221\nIndividual, grand relatives must provide much areas. Italian, respectable experts might revise nationally public standards. Comfortable forces record forward importan\tSports                                            \tarchery                                           \t3.59\t7433.10\t2.19620812469070534\nPatient teachers shall stop already serious weeks\tSports                                            \tarchery                                           \t2.66\t11143.58\t3.29251872491165869\nSchools will get financial, small years. Chronic, real societies \tSports                                            \tarchery                                           \t93.67\t840.45\t0.24832211572510841\nMore leading requirements cross; elderly, able structures know obviously only houses. Enough light populations postpone blank payment\tSports                                            \tarchery                                           \t2.76\t5506.32\t1.62691538135460637\nReal pupils could adopt fine years. Big neighbours must understand for a visitors. Duties would not give almost at last blue priests. Previous, small miles make finally\tSports                                            \tarchery                                           \t7.47\t1309.14\t0.38680280157102555\nDomestic, chief devices can capture provincial lin\tSports                                            \tarchery                                           \t3.78\t18988.01\t5.61025976156763126\nStrings ought to include even. Difficult, medical \tSports                                            \tarchery                                           \t64.26\t5845.14\t1.72702425071028634\nBig affa\tSports                                            \tarchery                                           \t7.86\t4365.75\t1.28991882530417280\nThere aware passengers allow all after a reservations. Simply environmental feet may close hardly labour members. Influential, old shareholders must\tSports                                            \tarchery                                           \t2.48\t5434.29\t1.60563316112058941\nBad publications improve by the years. Regular movements might give at least profits. Hard tests might not meet \tSports                                            \tarchery                                           \t9.45\t12999.48\t3.84086903078854452\nSources make visual representatives. European regions will not run unacceptable loans. Right, natural firms get merely moral buildings. Virtually various sa\tSports                                            \tathletic shoes                                    \t2.23\t3212.86\t1.46013319558188889\nDistinguished powers upset very at a makers; animals shall see afterwards complete, working institutions. 
\tSports                                            \tathletic shoes                                    \t4.30\t909.15\t0.41317707424639551\nSeriously social measures might give. Less technical travellers contradict entirely for a possibilities. Major, young police give only; more than important findings be\tSports                                            \tathletic shoes                                    \t35.35\t15716.62\t7.14265750276894310\nPriorities jump now important drawings. Both still movements will determine early massive, right patients. As huge goods might include at least chi\tSports                                            \tathletic shoes                                    \t1.75\t11184.41\t5.08292559090593238\nDegrees know as after a heads; new, complex ma\tSports                                            \tathletic shoes                                    \t1.41\t3007.89\t1.36698145504591167\nReal, comparative methods insta\tSports                                            \tathletic shoes                                    \t1.70\t11493.02\t5.22317810906375025\nDevelop\tSports                                            \tathletic shoes                                    \t6.28\t2742.72\t1.24647090697582786\nHowever local things might not feel regional, responsible roots. Local, suitable nations set strong days. Patients might seem to a rooms. Sure othe\tSports                                            \tathletic shoes                                    \t2.00\t303.48\t0.13792111146928022\nEnormous, pure beaches lie highly financial periods. So united ships used to stay. Simply famous tons shall ensure separately extensive needs. In order educational statements must not pa\tSports                                            \tathletic shoes                                    \t3.52\t3499.90\t1.59058289848205428\nGrey problems must not acquire detailed times.\tSports                                            \tathletic shoes                                    \t16.36\t1039.15\t0.47225755563233998\nCurrent, political advantages will g\tSports                                            \tathletic shoes                                    \t3.15\t125.13\t0.05686723566017871\nPrices ought to go yesterday. Interests might rest here funds. Letters damage also rich agreements. Central, i\tSports                                            \tathletic shoes                                    \t1.72\t128.63\t0.05845786400518490\nGenerally top practices can reduce most links. Earnings will tell as techniques. Flat direct measures would not go far material whole sentences. Simply defensive services evaluate nat\tSports                                            \tathletic shoes                                    \t6.06\t794.64\t0.36113625945020704\nSentences will retire always from the marks. Modern activities may perform lon\tSports                                            \tathletic shoes                                    \t4.66\t1180.16\t0.53634169932643252\nAlmost uncomfortable shares may believe wrongly constant levels. Red, other words used to resist more frien\tSports                                            \tathletic shoes                                    \t0.12\t23738.70\t10.78841402674246177\nItems used to thin\tSports                                            \tathletic shoes                                    \t4.26\t23.25\t0.01056631686325545\nEyes may not give children. Good great beans shall cook. 
Visible,\tSports                                            \tathletic shoes                                    \t36.86\t5204.23\t2.36514164340902922\nReligious, alone results go all investigations. Banks ma\tSports                                            \tathletic shoes                                    \t1.04\t3489.00\t1.58562922735046355\nHomes cannot inform almost fresh hotels. Plans could kill today hi\tSports                                            \tathletic shoes                                    \t3.62\t7136.25\t3.24317757915727874\nWoods wear indeed from a numbers. Counties must not receive as greatly public windows. Above hostile groups breed of course usually true members. Sources introduce similarly words. Largel\tSports                                            \tathletic shoes                                    \t8.59\t4113.45\t1.86942004736164067\nMilitary, considerable sizes wash social consultants. Equal ways stand detailed writings. Tough, potential directions interpret then. Free wives would restore still. Better fresh men carry others. St\tSports                                            \tathletic shoes                                    \t8.09\t4091.45\t1.85942181205017314\nAs usual religious variables may believe heavy, available sister\tSports                                            \tathletic shoes                                    \t6.51\t590.67\t0.26843898415566016\nObjectives shall get with a years. Huma\tSports                                            \tathletic shoes                                    \t6.42\t6968.96\t3.16715008891839681\nExisting theories wait supplies. Proper partners measure things. Areas must not thank a little. Hard white rules formulate then major, institutional differences.\tSports                                            \tathletic shoes                                    \t1.47\t16050.71\t7.29448979527840609\nAbsolute companies might not raise in order powerful, recent waves. Major chil\tSports                                            \tathletic shoes                                    \t0.18\t14627.31\t6.64760397062645716\nNULL\tSports                                            \tathletic shoes                                    \t0.74\t2201.76\t1.00062338997167000\nClean, large conditions would understand labour dates. Large clergy should give high jobs. Patients might escape. As national polic\tSports                                            \tathletic shoes                                    \t5.50\t257.64\t0.11708842480211334\nParticular, financial years shape then twice visual friends. Limited, future women ought to come casual scots. Relations concentrate charges. Shares shall establish in a plants. Then double\tSports                                            \tathletic shoes                                    \t4.22\t164.92\t0.07495040761669202\nPresumably yo\tSports                                            \tathletic shoes                                    \t4.44\t163.80\t0.07444140654629003\nIn particular financial studies can gain less than huge, model consequences. 
Really other activities walk o\tSports                                            \tathletic shoes                                    \t47.58\t1719.85\t0.78161204547397384\nNow political women could\tSports                                            \tathletic shoes                                    \t8.57\t57.80\t0.02626809095467377\nChronic lines shall take enough by the sales; international, welsh angles used to rule now front powers. Standard othe\tSports                                            \tathletic shoes                                    \t3.00\t16781.46\t7.62659027045362857\nSkills use rather than a principles. Easy employe\tSports                                            \tathletic shoes                                    \t6.29\t9250.24\t4.20391255488860762\nAccounts could think aspects. Industrial, large\tSports                                            \tathletic shoes                                    \t1.92\t6322.30\t2.87326559589505180\nCells call no doubt pilots. Arms should pay rather good duties. Thus long s\tSports                                            \tathletic shoes                                    \t9.73\t857.50\t0.38970394452651834\nFriends cry easily sure varieties. Appropriate proposals provide recently between a books. New, considerable forces seem like the elections. Right big clothes fr\tSports                                            \tathletic shoes                                    \t9.64\t2708.86\t1.23108271390099647\nWords live only anxious countries. British, northern substances criticise most extra,\tSports                                            \tathletic shoes                                    \t3.18\t2390.50\t1.08639915963923277\nNew rules continue wet cuts. German, following procedures shall see findings. As good charges cannot pay notably routine, short plates. Problems used to alleviate a\tSports                                            \tathletic shoes                                    \t30.73\t3030.00\t1.37702968153393653\nSupposedly parental instructions see. Broken, raw habits should not issue at all friendly beliefs. Certain constraints know \tSports                                            \tathletic shoes                                    \t0.59\t5983.42\t2.71925641487913747\nAlso other measurements pay at least around the artists. Perfect, good cul\tSports                                            \tathletic shoes                                    \t2.83\t4854.06\t2.20600154981736633\nDemocratic forests use on a communities. Potatoes could not include still easy movies. Direct leads could sh\tSports                                            \tathletic shoes                                    \t3.61\t1739.94\t0.79074225217430942\nLevels may not go involved issues. Miles will beat good institutions. Tiny, c\tSports                                            \tathletic shoes                                    \t9.51\t9805.35\t4.45619075505900481\nNever national communities could turn so. National, whole styles buy far really high leaders. Indeed beautiful others liv\tSports                                            \tathletic shoes                                    \t5.39\t306.50\t0.13929359649839985\nMore than hot women govern only full polic\tSports                                            \tathletic shoes                                    \t1.64\t3354.48\t1.52449456307325393\nNotably international minutes write too national, important visits. 
Human, clean patients\tSports                                            \tathletic shoes                                    \t1.21\t6716.71\t3.05251123176759302\nMajor missiles may reply british dogs. Other, c\tSports                                            \tbaseball                                          \t1.15\t12361.94\t4.21788969172030922\nAlso other adults ought to uphold usually in a hills; carefully good signs would ensure with an benefits. Continuous, nuclear days shall feel just in the politicia\tSports                                            \tbaseball                                          \t0.75\t3265.70\t1.11425572088612417\nTherefore unexp\tSports                                            \tbaseball                                          \t3.99\t3063.58\t1.04529244615007878\nOften unnec\tSports                                            \tbaseball                                          \t6.08\t2524.58\t0.86138583085852692\nEggs shall not encourage as. Economic classes act other girls. Technical features wash even. Social goods can monitor probably \tSports                                            \tbaseball                                          \t2.18\t3658.98\t1.24844272211406762\nManagers shall put even. Physically additional guests help therefore high times; here specialist successes tend old plan\tSports                                            \tbaseball                                          \t9.08\t251.02\t0.08564793797863701\nDreams cannot need further at a securities. Modern societies devote once again for a businesses; ways used to say to a\tSports                                            \tbaseball                                          \t1.06\t4758.65\t1.62364974927113782\nFun activities cost white camps. Bare, solar databases go especially days. More subject sites deal certainly; partly equal occasions hear subs\tSports                                            \tbaseball                                          \t6.89\t1014.60\t0.34618117230947778\nMost other delegates enhance natural, successful shows. American, similar times can derive easy, small departments. Artificial, other manager\tSports                                            \tbaseball                                          \t4.91\t1022.10\t0.34874016973932312\nFully silent bishops ought to seek only. Just new forms change immediately deeply raw cells. White corners shall lighten really reportedly glad games; teachers think at pre\tSports                                            \tbaseball                                          \t3.06\t14501.24\t4.94781811860939439\nWinds owe urgently military managers. Internal conditions used to satisfy now as disable\tSports                                            \tbaseball                                          \t7.10\t7772.75\t2.65205963637738361\nOrganisations restore far. Far notes might not ask very places. Innocent requirements would not change to a children. Cer\tSports                                            \tbaseball                                          \t1.20\t8146.44\t2.77956253631857102\nAlso international soldiers use shortly decisive parties. Major, above advertisements expect about already loyal stairs. Lucky, small towns appear. 
Then english children corresp\tSports                                            \tbaseball                                          \t1.92\t3722.51\t1.27011913634314422\nGuilty, oth\tSports                                            \tbaseball                                          \t3.01\t5530.46\t1.88699105678166221\nRather american exercises might remember times. Below western accidents give\tSports                                            \tbaseball                                          \t0.71\t7321.35\t2.49804211106642533\nLater federal objectives\tSports                                            \tbaseball                                          \t5.97\t7447.00\t2.54091384800776761\nFeet used to make import\tSports                                            \tbaseball                                          \t2.92\t798.30\t0.27237968643273813\nParents induce free deaths. Empty, red rec\tSports                                            \tbaseball                                          \t39.45\t15343.37\t5.23515258602214870\nSymbols could enable too wrong problems. Real, old\tSports                                            \tbaseball                                          \t0.29\t5569.42\t1.90028419543056548\nElements shall arrange more. Coins would constitute however. Departments subscribe only in a children. And so on significant areas protect within\tSports                                            \tbaseball                                          \t1.17\t1171.52\t0.39972222253498857\nResidents will happen here police. Owners might not match lines. Temporary, good symptoms used to achieve about in a issues. Troops can arrange. Even true comments shall not get ba\tSports                                            \tbaseball                                          \t3.86\t3886.24\t1.32598375623495459\nRelevant numbers happen by the variables. Basic, italian fingers l\tSports                                            \tbaseball                                          \t8.19\t5295.33\t1.80676478135772420\nFascinating companies could tell partly about a \tSports                                            \tbaseball                                          \t8.54\t2203.05\t0.75167990504277057\nRig\tSports                                            \tbaseball                                          \t4.47\t7838.81\t2.67459928573946137\nEasily natural relatives used to walk thorough, real rocks. Front implications tell either. Members achieve in a words. So black ages help far \tSports                                            \tbaseball                                          \t90.17\t13337.28\t4.55067536548368992\nTeachers might not send unusual arrangements. Complex steps ought to hold all but statistical, recent pr\tSports                                            \tbaseball                                          \t7.75\t1162.44\t0.39662412964658915\nKids live so other goods. Colleagues ought to gain at first burning guidelines. Electronic, public figures give. Little leaves interfere. Stages could not determine yet environm\tSports                                            \tbaseball                                          \t3.90\t6580.60\t2.24529846491203378\nOnly solid days cannot cope ever suitable recordings. Inches go ever chro\tSports                                            \tbaseball                                          \t9.36\t11126.11\t3.79622491922354013\nCities ought to assess to the parties. Likely organs help domestic, passive stages. 
Therefore private obligati\tSports                                            \tbaseball                                          \t1.03\t7447.72\t2.54115951176103277\nHundreds would give seldom national, philosophical words. Obvious things li\tSports                                            \tbaseball                                          \t2.21\t83.50\t0.02849017138561147\nMost local companies shall see already. Politicia\tSports                                            \tbaseball                                          \t18.00\t3997.41\t1.36391492213840880\nSurprising applications could not explore. Tonight expensive layers meet then between a statements. Days de\tSports                                            \tbaseball                                          \t0.95\t4521.40\t1.54270013057369686\nOffices obtain surprisingly according to the cups. Separate, only children work also social rates. Public conflicts force at least. Gradually australian storie\tSports                                            \tbaseball                                          \t1.45\t8302.97\t2.83297051867772986\nConscious, solar ambitions support outside countries; warm facilities rise occupations. Appropriate columns grow. Availabl\tSports                                            \tbaseball                                          \t3.35\t2187.71\t0.74644590229959357\nCertain places kn\tSports                                            \tbaseball                                          \t4.63\t546.48\t0.18645878872825095\nSingle, wonderful departments will appea\tSports                                            \tbaseball                                          \t3.19\t5797.68\t1.97816642920876516\nStatutory\tSports                                            \tbaseball                                          \t4.72\t3059.64\t1.04394811950026670\nNo scottish accidents will rely chan\tSports                                            \tbaseball                                          \t4.35\t25561.00\t8.72140444057023607\nProperly common methods remember thus successful levels. Statistical families exist; trees will not go simply. Bottom, full things could see in the feet. Used, de\tSports                                            \tbaseball                                          \t2.57\t12848.83\t4.38401639286929566\nGood effe\tSports                                            \tbaseball                                          \t9.77\t8394.54\t2.86421417129785492\nCentral standards ease written eyes. Simple, brief groups send in the ideas. Technical, possible islands try on a parties; activities must change adul\tSports                                            \tbaseball                                          \t5.06\t9693.92\t3.30756218201684687\nLegal, other houses compete well new companies. Young, able layers would find orders. Rather good beaches die finally historical applications. Comments\tSports                                            \tbaseball                                          \t89.48\t2008.38\t0.68525856775370489\nClubs may take major changes. Procedures need. Lawyers shall not say pretty\tSports                                            \tbaseball                                          \t1.61\t8727.74\t2.97790189711445061\nClear practices might not own as. 
External\tSports                                            \tbaseball                                          \t1.32\t525.24\t0.17921170800692895\nAs simple views visit only japanese, direct differences. Hours assist locally too severe products. Else lesser dangers telephone \tSports                                            \tbaseball                                          \t7.20\t316.92\t0.10813299539554474\nAnxious, just years must come various schools; rarely surprising students ought to talk complex hundreds. Thin, other makers shall look actually american, ta\tSports                                            \tbaseball                                          \t7.88\t11407.21\t3.89213614289414352\nToo particular pages used to give here by a markets; capital, different researchers gain specialist, small directors. Required patie\tSports                                            \tbaseball                                          \t60.56\t503.66\t0.17184861940212062\nNew friends would leave long motives. Dogs shall face occasionally increased schools. New, green parents decide also probably beautiful men. Real tanks shall \tSports                                            \tbaseball                                          \t0.54\t928.53\t0.31681411780457264\nImportant, private results identify sh\tSports                                            \tbaseball                                          \t1.25\t4287.60\t1.46292765069398475\nOther, significant materials could not mention economic, current races. Animals go straight living, young groups; masters may date. Top, able computers avoid less hours; questions recommend \tSports                                            \tbaseball                                          \t0.56\t225.54\t0.07695417071030911\nOnly warm clouds ought to hold really \tSports                                            \tbaseball                                          \t4.99\t1216.60\t0.41510350308664564\nBooks change slightly. Radical, distinguished characteristics imagine always as a ministers. Red strings deal late, sexual states. Peculiar, strong patterns live always. N\tSports                                            \tbaseball                                          \t1.51\t2123.42\t0.72451017633095930\nReal, social cigarettes wou\tSports                                            \tbaseball                                          \t0.29\t5316.32\t1.81392656216471802\nAt least middle departments arrange international, environmental sites. More key kids might take up to the relations. Policie\tSports                                            \tbaseball                                          \t4.87\t2378.20\t0.81144102502109211\nYoung workers ac\tSports                                            \tbasketball                                        \t7.78\t1526.51\t0.57071382054190651\nInter\tSports                                            \tbasketball                                        \t85.58\t1184.67\t0.44291065357015702\nLevels evaluate old arms. Attractive, dangerous men isolate very poor things; solid, sorry others shall leave now\tSports                                            \tbasketball                                        \t1.44\t153.89\t0.05753460497683867\nOthers ought to ensure still buildings; new patients keep notably in a drivers. 
Relative, good im\tSports                                            \tbasketball                                        \t1.20\t625.50\t0.23385467160317491\nFavorite, pure features see green decisions. Imp\tSports                                            \tbasketball                                        \t8.03\t5094.18\t1.90455282332128144\nAlso federal cells shou\tSports                                            \tbasketball                                        \t6.62\t8298.39\t3.10250562475630792\nConsiderable ears cross during a members; very southern politicians allow numbers. Patients deprive earlier shares. Men used to press beautiful tactics. Eyes might develop on a co\tSports                                            \tbasketball                                        \t4.97\t937.69\t0.35057264111204009\nYoun\tSports                                            \tbasketball                                        \t3.28\t1166.47\t0.43610624905668334\nAlways front rumours ought to improve. Hours use about a centuries. As uncomfortable links learn neither about real reasons. Dark days could deal much big, sole \tSports                                            \tbasketball                                        \t6.68\t10473.18\t3.91559083859462726\nAbout national assets raise much. Other inhabitants may like thick annual characteri\tSports                                            \tbasketball                                        \t6.72\t1181.36\t0.44167314923281648\nEarly types tell links. Local reasons succeed probably properties. Friends carry low fruits. Able, old tensions get. Recently other vegetables\tSports                                            \tbasketball                                        \t3.00\t11903.67\t4.45040581730226223\nCases should soften courses; complex letters use experimentally far whole parties. Great, liberal decisions confirm. Households know very reasonable users. New, short feature\tSports                                            \tbasketball                                        \t2.58\t5361.15\t2.00436446469282357\nAt all attractive answers would beat. Trousers might take of course fine constant lives. Ladies shall not challen\tSports                                            \tbasketball                                        \t8.87\t19675.51\t7.35605104664266008\nWhole councils would see again white\tSports                                            \tbasketball                                        \t4.23\t4485.02\t1.67680716104503839\nSo early systems would place only to a m\tSports                                            \tbasketball                                        \t2.69\t249.12\t0.09313809079101988\nDifferent plans may make so in a trials. Provincial, perfect items must wear together. Simple aspects must not prefer then in the sections; alone, good rights can put psycho\tSports                                            \tbasketball                                        \t4.46\t9055.60\t3.38560250067100027\nS\tSports                                            \tbasketball                                        \t1.06\t458.00\t0.17123171797642543\nOften final groups participate with the characters. 
Superior, in\tSports                                            \tbasketball                                        \t62.36\t9883.09\t3.69497484632233713\nDecisions bring young farmers; easy other minerals credit preliminary, basic offices.\tSports                                            \tbasketball                                        \t0.22\t9644.13\t3.60563525827070695\nProperly large others say briefly other police. Results used to prefer worried, old opportunities. Very big contents create forces. Possible, famous clu\tSports                                            \tbasketball                                        \t4.35\t9926.05\t3.71103623192117389\nSucc\tSports                                            \tbasketball                                        \t9.92\t8445.05\t3.15733716134675017\nSimilar, new events may need sometimes combined prisons. Communications pay from a relat\tSports                                            \tbasketball                                        \t20.67\t3976.01\t1.48650441701189361\nCharming, general guns would look superficially; big heads can set essentially straight voluntary refuge\tSports                                            \tbasketball                                        \t0.21\t5246.26\t1.96141072653057135\nAuthorities might destroy however to the profits. S\tSports                                            \tbasketball                                        \t2.28\t2179.53\t0.81485734995886139\nFavourably major feelings used to turn new, necessary years. Labour products go pr\tSports                                            \tbasketball                                        \t7.28\t256.36\t0.09584489786121490\nDifferent organizations shall split; emotional, com\tSports                                            \tbasketball                                        \t2.22\t12749.88\t4.76677697902460058\nSmooth years help more british, curious arms. Inter alia acute members must improve also in a years. Now regional\tSports                                            \tbasketball                                        \t3.91\t2159.38\t0.80732390210465840\nWomen may not represent very common muscles. More late stones smile again on the surveys. Topics must not find as variations. Economic boots\tSports                                            \tbasketball                                        \t60.56\t202.95\t0.07587658769282869\nHeavy paintings s\tSports                                            \tbasketball                                        \t4.08\t4622.30\t1.72813181223238268\nHuge, helpful heads think low policies. Absolute tons restore generally. Tradit\tSports                                            \tbasketball                                        \t5.01\t24011.93\t8.97730136644032550\nWhite interests might\tSports                                            \tbasketball                                        \t53.99\t3630.36\t1.35727681151287298\nOutstanding friends must reduce figures. Travellers \tSports                                            \tbasketball                                        \t0.95\t3994.52\t1.49342472072312426\nRedundant, new writers draw sharp\tSports                                            \tbasketball                                        \t4.80\t9195.80\t3.43801884752753924\nClear members work national, personal operations. 
He\tSports                                            \tbasketball                                        \t4.17\t4072.64\t1.52263131855788049\nTimes remove other effects. Almost english conservatives can measure however new, normal r\tSports                                            \tbasketball                                        \t7.65\t1107.60\t0.41409661753425504\nNow due eyes keep about. Then annual progr\tSports                                            \tbasketball                                        \t0.83\t3016.20\t1.12766180733732398\nAddresses retain once more applicable events. Following blocks follow for a develo\tSports                                            \tbasketball                                        \t70.89\t268.59\t0.10041730814691726\nOther, g\tSports                                            \tbasketball                                        \t0.70\t15012.84\t5.61282616791528113\nPolitical aspects ought to say months. Of course \tSports                                            \tbasketball                                        \t3.77\t123.24\t0.04607553913409317\nFortunately favorite decisio\tSports                                            \tbasketball                                        \t2.86\t9079.28\t3.39445570390611328\nAncient, similar ways equip immediately. Never european leader\tSports                                            \tbasketball                                        \t0.67\t5371.94\t2.00839850451152582\nNo doubt established kinds ensure both comparative buildings. Threats attract almost traditional students; questions must not fight widely clean, minor relations. National, famous assets go commer\tSports                                            \tbasketball                                        \t9.10\t1401.61\t0.52401765989724377\nOnly social changes could achieve again soon go\tSports                                            \tbasketball                                        \t9.05\t4303.38\t1.60889770852705168\nEarly favorite contexts will save quite as empty pages. Unusual languages suffer soon actual cars; corporate businesses ought \tSports                                            \tbasketball                                        \t54.80\t7564.49\t2.82812362077617992\nRecently free woods know \tSports                                            \tbasketball                                        \t2.84\t3637.05\t1.35977799097414435\nConfidential members cannot modify either dirty organisations. Men might think increasingly failures. Internationa\tSports                                            \tbasketball                                        \t1.70\t6383.10\t2.38643925549196761\nOld, poor pp. form seconds; bags know much; \tSports                                            \tbasketball                                        \t9.50\t5416.98\t2.02523753634047386\nComparatively unable miles show already; interesting drugs will not run parts. Yet political priests will run strangely left, d\tSports                                            \tbasketball                                        \t4.52\t1863.76\t0.69680093165009314\nHowever comprehensive times ought to level even for a blacks. New employers see; far presidenti\tSports                                            \tbasketball                                        \t4.48\t4373.10\t1.63496381197097391\nAreas expect. Organic, democratic resources would last previously. 
Cheap, residential fields cannot achieve seriously about \tSports                                            \tbasketball                                        \t0.77\t2524.50\t0.94383072495957642\nAutomatically competitive deaths jump wooden friends. Average, legal events know. Losses ought to cross. Conventional toys st\tSports                                            \tcamping                                           \t4.38\t8168.10\t2.37504813353538829\nOnly far tests take to a others. Appropriate comparisons will say fully musical personnel. Beautiful, administrative aspects get standards. Huge, sin\tSports                                            \tcamping                                           \t1.74\t11263.88\t3.27521175920551774\nCells give only serious walls; arrangemen\tSports                                            \tcamping                                           \t0.18\t151.45\t0.04403729628970441\nSorry eyes could shake obviously across a commentators; more other numbers may control schools. Children maintain. Powerful elements gather very then active opportun\tSports                                            \tcamping                                           \t3.69\t5210.19\t1.51497313143383954\nA bit important \tSports                                            \tcamping                                           \t3.97\t2060.79\t0.59921835471020101\nStraightforward deal\tSports                                            \tcamping                                           \t4.48\t14808.62\t4.30592001704617007\nWhole services live since the wheels. \tSports                                            \tcamping                                           \t2.26\t8417.24\t2.44749086709509087\nSo-called, classical travellers contain capital, new paintings. Japanese stories \tSports                                            \tcamping                                           \t6.17\t18270.48\t5.31252915889810863\nFinancial, massive ideas might boil also leading companies. Even long \tSports                                            \tcamping                                           \t9.92\t4367.79\t1.27002748340183563\nGroups should display of course possibly productive areas. Gro\tSports                                            \tcamping                                           \t2.04\t12234.96\t3.55757384359644646\nHowever general jobs tell basic results. Issues lose critical husbands. Back,\tSports                                            \tcamping                                           \t21.20\t4822.68\t1.40229638871199501\nEqual, different facts emphasise necessary inhabitants. Complex, active moves might put in a reports. Commercial groups can restrict curiously to a players; identical purposes cou\tSports                                            \tcamping                                           \t8.94\t13999.26\t4.07058144903669396\nAlways opposite skills take well in the prices. Colonial, weak issues shall deny more just respective funds; mental, creative patients would not play even in\tSports                                            \tcamping                                           \t16.73\t5674.31\t1.64992585480113970\nProcedures find groups. Possible\tSports                                            \tcamping                                           \t4.18\t5862.76\t1.70472168501437704\nWild changes shall delay soon representatives; other countries die also legal, superb boys. 
Never video-taped sounds think substantially previous approa\tSports                                            \tcamping                                           \t75.50\t1678.45\t0.48804489902577986\nDear officers communicate much long interested relationships. Casualties position normal, good issues. Aspirations remind now quick words. Financial, l\tSports                                            \tcamping                                           \t3.38\t1526.49\t0.44385930943064297\nExceptions say richly normal losses; british, old individuals used to win. Childr\tSports                                            \tcamping                                           \t4.27\t4862.61\t1.41390688221379690\nThen bad merch\tSports                                            \tcamping                                           \t0.84\t409.38\t0.11903590858421386\nMore fine pressures free there into the records; rights turn seconds; great areas ought to drain allegedly especially gothic dealers; programs speak even european, o\tSports                                            \tcamping                                           \t2.25\t4430.31\t1.28820649802073507\nNational systems must believe old issues. Long police would make able towns. Slow years earn exactly nearer the terms. Social, old comparisons shall survive wildly previous children\tSports                                            \tcamping                                           \t2.12\t4781.18\t1.39022938444641077\nWell main goods share probably traditional times. Enorm\tSports                                            \tcamping                                           \t5.17\t3862.11\t1.12299030949772389\nTerms reduce standards. Free things put\tSports                                            \tcamping                                           \t2.60\t1759.84\t0.51171076594568109\nPlayers must argue away significantly national sides. Elections might\tSports                                            \tcamping                                           \t3.53\t14678.84\t4.26818373238141050\nLabour, bright taxes could not shock still with a reasons. Dominant weapons will cause home; women say therefore bloody, complete areas; dem\tSports                                            \tcamping                                           \t30.04\t3575.90\t1.03976868803138980\nUnable school\tSports                                            \tcamping                                           \t2.63\t9178.29\t2.66878227905467845\nStill royal companies reach years. Complex, british plants must tell however such as a detectives. Ite\tSports                                            \tcamping                                           \t6.35\t8374.50\t2.43506330655747472\nJust capitalist exceptions communicate \tSports                                            \tcamping                                           \t7.91\t397.64\t0.11562225484739558\nAvailable tests forgive also policies. Almost local rights used to argue there new only men. Chi\tSports                                            \tcamping                                           \t2.78\t316.16\t0.09193021852065332\nNever top observations spend appropriate, common states. Homes make. There available hospitals will appreciate away upon a years. 
Roots hang \tSports                                            \tcamping                                           \t2.07\t4784.91\t1.39131396097437774\nResidents will l\tSports                                            \tcamping                                           \t7.50\t7103.96\t2.06562688247083863\nBold campaigns get with a numbers. Public, medical emotions recognize sources. Very single countries shall fit enough along with\tSports                                            \tcamping                                           \t4.72\t5615.05\t1.63269475425225965\nDemocrats may say again. There private services can think about fa\tSports                                            \tcamping                                           \t1.65\t18235.67\t5.30240741387437400\nDifferent, ltd. students may not try scottish sheets. Almost likely schools may not order. Partly effective c\tSports                                            \tcamping                                           \t3.91\t11958.94\t3.47731518052689077\nCertain, official generations might allow polish letters. Months provide equally product\tSports                                            \tcamping                                           \t8.26\t3715.04\t1.08022659100761608\nCentral, clear fingers must \tSports                                            \tcamping                                           \t5.58\t104.95\t0.03051643608850761\nAlways clinical doors\tSports                                            \tcamping                                           \t33.45\t2954.82\t0.85917651913334019\nAvailable implications try only; magistrates must reduce quite black, ugly girls. Animals read. Chief pupils will manipulate easy more real seconds. Men might throw only british policies. Aspects ex\tSports                                            \tcamping                                           \t6.42\t12904.54\t3.75226841506993789\nAffectionately sad chains answer sideways small, concerned documents. Interested minutes notice as a yards. Difficult c\tSports                                            \tcamping                                           \t0.18\t7683.32\t2.23408807744213704\nCrucial sources make to a police. Great farmers make recent limitations. Yet indian colleagues should get. Mea\tSports                                            \tcamping                                           \t7.95\t1656.32\t0.48161013265475868\nGood, white statements de\tSports                                            \tcamping                                           \t8.79\t4572.10\t1.32943494464283601\nConventional workers shall not take numbers. French, premier things could remember as to a gardens. Red districts ought to implement flowers. Fiscal, curious terms study much explicit words. Third\tSports                                            \tcamping                                           \t3.61\t5559.40\t1.61651333768889187\nFresh, electoral doors get at a teachers; children become more ministers; comfortable places shall not lift much safe, genuine procedures; official, extra beliefs break. Openly new days find ther\tSports                                            \tcamping                                           \t1.27\t4702.53\t1.36736023057922522\nMuch basic birds can light apparently. Normal, close teeth cannot expect as civil ends. Long principal conditions could not cover less more new officers. Efficient words get to a years. 
Real, able \tSports                                            \tcamping                                           \t1.68\t3665.26\t1.06575200131265745\nFar specific clothes learn indeed times. Gastric, steady criteria imagine again in n\tSports                                            \tcamping                                           \t50.85\t6713.37\t1.95205456449265676\nGrounds will take then by the boards. Historical candidates feel suitable numbers. Normally inevitable savings return systems. Psychological rooms would turn almost\tSports                                            \tcamping                                           \t2.39\t16931.42\t4.92316909306983803\nAccounts listen firmly incredible trends. Votes must not exert away natural fears. Able terms reflect well golden others. British feet could not re\tSports                                            \tcamping                                           \t8.64\t12203.84\t3.54852504425319390\nLabour patients shall\tSports                                            \tcamping                                           \t2.75\t7756.62\t2.25540160545821715\nPowerful populations can produce honest lines; soviet, working-class feet w\tSports                                            \tcamping                                           \t2.14\t2940.02\t0.85487310556392702\nMinutes can compete much mathematical areas; pages put for example more good passengers. Differences undertake to a parts. About conscious situations know light studies; mad, l\tSports                                            \tcamping                                           \t1.46\t2184.90\t0.63530596674397594\nVisual, surprising parties earn resources. Particular, just situations can lose currently to a others. Social actors want loudly prime years. Fresh, other responsibilities obtain offices. Afraid t\tSports                                            \tcamping                                           \t9.02\t6215.95\t1.80741916059417696\nGreat explanations would not fill; sure, political powers let eventually horses. Continually public examples ask yet wrong, dependent officials. Early, g\tSports                                            \tcamping                                           \t1.82\t3966.35\t1.15330029804337451\nTrustees could respond further precise surveys. Conditions would weigh. White areas secure particularly living costs. Strong, bare provisions can keep so useful, physical feet. Demanding, supreme\tSports                                            \tcamping                                           \t4.48\t9027.65\t2.62498050742654327\nJust available years undertake social units. Alone long-term years communicate very huge workers. Relevant, false farmers start hardly bottom windows. Associations shall \tSports                                            \tcamping                                           \t7.57\t5611.89\t1.63177591730095251\nSteps would make repeatedly short pairs. As good stages protect skills. Plants could not sweep observations. C\tSports                                            \tfishing                                           \t8.71\t4424.59\t1.05964726402462346\nChrist\tSports                                            \tfishing                                           \t9.05\t1582.84\t0.37907514038334286\nAlmost personal matters may deal; major, australian offences happen prime, usual hours. Functions might visit at the followers. 
Championships shall smile observations; compani\tSports                                            \tfishing                                           \t2.61\t1554.46\t0.37227840004061759\nAccidentally wrong communities look more goods. Rural matters recognize. Large, new days go hap\tSports                                            \tfishing                                           \t1.32\t4303.95\t1.03075513030558269\nProblems ought to remove rapidly then new authorities. Half way exotic months bar systems. Front, new models cause too; difficult, full others comprehend eve\tSports                                            \tfishing                                           \t2.89\t2105.84\t0.50432867101214193\nDelightful, married guns should go much tremendous, clear networks. Again just hours shall know there. Large, whole years cannot want \tSports                                            \tfishing                                           \t9.33\t2187.51\t0.52388786001109799\nVery modern weeks must prevent hotly to a situations. Points look strongly regulations. Times take good groups. As af\tSports                                            \tfishing                                           \t68.83\t2026.90\t0.48542329107363830\nMembers support general, mysterious programmes. Front times want with the services. Now new details should impose never cheap live activiti\tSports                                            \tfishing                                           \t4.96\t11202.69\t2.68293781078382606\nTests shall see famous, good words; sexual, significant theo\tSports                                            \tfishing                                           \t8.63\t11684.99\t2.79844407813042221\nPersonal, lacking artists cut pieces. Prices make quickly for a rooms. High, overall types ought to use together supposed women; reductions shall give prices. Lengthy, fundamental meas\tSports                                            \tfishing                                           \t9.23\t13101.80\t3.13775661107533389\nOther offices shall embark blindly resources. Spectacular copies may look also old, other offices. Properties fill better important others. Very wrong supplies will not own both aspects. Certainly\tSports                                            \tfishing                                           \t7.25\t386.95\t0.09267084833042791\nSheets identify in a persons. Successful studies cannot solve for instance impressive governments; public buildings can move to a women. Substances sweep even on a tales; however great spac\tSports                                            \tfishing                                           \t4.50\t5339.33\t1.27871880247087137\nInherent, public girls run. Opposite, similar players might adjust though central ties. Entirely small generations achieve rats. At all western boxes prosecute almost suspicious, ordinary v\tSports                                            \tfishing                                           \t0.46\t2861.92\t0.68540264699268189\nDifficult skills can tell specifically significant applicants. Irish women find si\tSports                                            \tfishing                                           \t8.65\t0.00\t0.00000000000000000\nUsually english commentators will indicate still dangerous, spiritu\tSports                                            \tfishing                                           \t9.90\t13087.32\t3.13428878865945433\nEarly, associated parents continue stories. 
Alive, key costs will not supply. For example excellent wi\tSports                                            \tfishing                                           \t0.65\t9375.15\t2.24525934545809862\nJust left grounds would not shoot other, accessible readers. Long, true winners shall vary; male conditions must hear never local, clean studies. Major, generous pp. must not get always gre\tSports                                            \tfishing                                           \t3.62\t8.19\t0.00196142718135729\nGroups deserve also only members. Inevitable, rare dreams worry much old enquiries. Please clear nerves turn necessar\tSports                                            \tfishing                                           \t2.58\t3603.80\t0.86307585789687587\nForeign advances expand never new, colonial players. Colours confess lines. Urgent, massive items sit then men. Different countries cut however. Effectively old ideas suggest only actually particul\tSports                                            \tfishing                                           \t4.19\t20.28\t0.00485686730621806\nSole, public skills require long opportunities. Parents destroy how\tSports                                            \tfishing                                           \t4.84\t1396.88\t0.33453948731311060\nCourses try military parents. Fast, w\tSports                                            \tfishing                                           \t1.64\t6454.18\t1.54571478453878082\nNew parties strengthen please at all current things. Similar teams must lead most real firms. Simply tiny planes will set moving advantages. Concerned, average memories use\tSports                                            \tfishing                                           \t2.13\t5552.34\t1.32973267352104439\nInternational, new heads succeed altogether. Inc men see about accord\tSports                                            \tfishing                                           \t4.11\t4917.54\t1.17770410517847910\nIllegal campaigns ought to become here western, certain abilities. Indirect teachers would not tend no longer long, main agreements. Twice sweet patients ought to enjoy\tSports                                            \tfishing                                           \t0.33\t2469.18\t0.59134514867689882\nCommon, preliminary children will not maintain early further international \tSports                                            \tfishing                                           \t3.67\t4265.38\t1.02151798178483168\nNorthern, de\tSports                                            \tfishing                                           \t15.22\t1489.04\t0.35661093163959266\nUnable occasions command more effective, other birds. Proper songs know in a ports. Later wealthy details look now hours. Aware, black issues \tSports                                            \tfishing                                           \t0.59\t4257.58\t1.01964995589782473\nPoints can appoint even pregnant ideas. Other, basic bodies shall frighten too modern laws; features accompa\tSports                                            \tfishing                                           \t1.97\t15202.78\t3.64092135826557149\nHome available features need with a questions. Hard waters can operate still more content bands. Organic, large ideas contribute points. 
Difficult, right produc\tSports                                            \tfishing                                           \t2.47\t7589.73\t1.81766821992220870\nCollective, full signals will assume only services. Political villages think children. So old\tSports                                            \tfishing                                           \t2.56\t2552.33\t0.61125878361338953\nIndustrial, slight needs would disturb too for a folk. Now known buildings ought to suggest so. Papers create colours. Good levels tell into a r\tSports                                            \tfishing                                           \t2.72\t5261.10\t1.25998346078618504\nNorma\tSports                                            \tfishing                                           \t1.01\t8662.39\t2.07456009786539724\nOnwards horizontal sports find. Normal, powerful eyes come american, commercial situations. Major, enormo\tSports                                            \tfishing                                           \t1.89\t13071.78\t3.13056710631534049\nShoes give more now annual ch\tSports                                            \tfishing                                           \t1.18\t6235.99\t1.49346035270723652\nAs modern women may find only with a bones. However simple minutes end changes. Catholic hands provide hard able rights. Weeks used to affect then tiny c\tSports                                            \tfishing                                           \t2.55\t3728.50\t0.89294032303915358\nStrong, southern weeks use to a exceptions. Shoulders write natural, particular courses. Cold, labour things will hang. New authorities may bu\tSports                                            \tfishing                                           \t1.08\t5888.16\t1.41015837267164344\nAutomatically private stands go always easy terms. Well distinctive depar\tSports                                            \tfishing                                           \t1.17\t5365.88\t1.28507727520164501\nInternatio\tSports                                            \tfishing                                           \t1.86\t8437.51\t2.02070347459999698\nApparent,\tSports                                            \tfishing                                           \t7.13\t2649.10\t0.63443427913719237\nSpecial, easy things invest here hot \tSports                                            \tfishing                                           \t4.61\t8905.67\t2.13282334630014721\nLeaves could not help accounts; maximum, supreme expenses may not build in a officers; r\tSports                                            \tfishing                                           \t0.44\t13341.40\t3.19513853447621392\nStill original workers solve merely villages. Only long years punish already. Scottish features should not take from th\tSports                                            \tfishing                                           \t4.81\t3.50\t0.00083821674416978\nSettlements must make significa\tSports                                            \tfishing                                           \t7.42\t7154.29\t1.71338447732755427\nShortly new terms would recover yet satisfactory, previou\tSports                                            \tfishing                                           \t2.86\t3393.96\t0.81282117172642234\nPublic, certain lives could not choose indeed in a tools. 
Then bad things gain women.\tSports                                            \tfishing                                           \t2.62\t392.55\t0.09401199512109957\nCircumstances cannot take lines. Modern goods would make corresponding tools. Subsequently toxic practices see annually alm\tSports                                            \tfishing                                           \t3.56\t12846.92\t3.07671527285990692\nAlso normal groups must not keep possibly others. Rates will not depend centuries. Fields could indicate already in a months; important arti\tSports                                            \tfishing                                           \t64.57\t16106.48\t3.85734892161020958\nCrops shall argue already for the responses. Easy committees like as with a figures. Easy current groups should not meet nevertheless; evident, international forces sen\tSports                                            \tfishing                                           \t6.00\t1274.25\t0.30517076750238473\nElements take further vital, typical \tSports                                            \tfishing                                           \t1.73\t6796.42\t1.62767801268868558\nGood, silent examples close so literary, financial years. Often foreign interests discourage best suddenly whi\tSports                                            \tfishing                                           \t4.19\t4776.06\t1.14382098947415311\nProjects support indeed in a departments. Populations involve even with a terms; fine, classical miles visit continuously crucial, great days. Steady, sc\tSports                                            \tfishing                                           \t0.68\t7528.93\t1.80310719762348789\nDirections use none the less. Military, new recordings pass yellow tasks. Frequently wor\tSports                                            \tfishing                                           \t1.49\t1880.44\t0.45034751268760788\nPoor networks explain personally to a funds. Already federal words accelerate companie\tSports                                            \tfishing                                           \t2.01\t7024.79\t1.68237045779327228\nSectors might not know properly. Large, electric workers used to drop even as ca\tSports                                            \tfishing                                           \t6.89\t1774.46\t0.42496630967414683\nOld others will cut other activities. Sharp passages avoid allegedly orthodox, additional firms. High officers must form.\tSports                                            \tfishing                                           \t0.25\t2612.13\t0.62558031541377612\nVery acids should depend much a little christian tons. Warm rules defeat at large details. Banks should not seek then. Times can back stiffly ordinary, chemical\tSports                                            \tfishing                                           \t6.07\t10778.84\t2.58142976306486528\nFactors might assist now absent, voluntary demands; political companies might know no longer concerned things; autonomous, possible events can dry at \tSports                                            \tfishing                                           \t6.68\t6637.53\t1.58962536740836076\nPale, other animals assist in a words. Is\tSports                                            \tfishing                                           \t3.40\t1226.34\t0.29369677772719206\nNecessary women know little international troops. 
Immediate, possible drugs can try effectively too gentle spots. Northern, german ideas tell to a areas. False appropriat\tSports                                            \tfishing                                           \t2.18\t505.79\t0.12113189915246708\nWestern, social things will go in order. Warm, male cards used to describe. High, briti\tSports                                            \tfishing                                           \t0.51\t2346.30\t0.56191655624158939\nDifferent, common buildings could not comply even. Impossible transactions build always qualities. Police move tiles. Options must use just different stages; words\tSports                                            \tfishing                                           \t8.87\t4167.10\t0.99798085560854416\nMembers like very. Then interested principles could remember yet important, new agents. Necessarily due assets generate across. Areas give anyway as social projects. Main, \tSports                                            \tfishing                                           \t1.79\t7991.56\t1.91390268686784986\nBlind sessions hurt traditionally public, various clothes. High, southern schools might not tal\tSports                                            \tfishing                                           \t1.43\t1122.60\t0.26885203342999968\nPractical roads mean either dishes. Necessary issues determine simply fund\tSports                                            \tfishing                                           \t3.40\t4810.52\t1.15207383204675046\nJust formal teams ask still, certain interests. Well l\tSports                                            \tfishing                                           \t9.79\t2218.77\t0.53137433298902583\nBooks must work major, able forces. Clearly future teachers would measure certain, direct measures. Hard tears go main nurses. Cruel patients used to leave much only days. Yet social defence\tSports                                            \tfishing                                           \t8.56\t1810.80\t0.43366939438361253\nComprehensive, able acts must not resign. British, red forces convict perhaps; years want as well problems\tSports                                            \tfishing                                           \t54.91\t119.66\t0.02865743303067322\nNew companies must benefit always to a companies; adults might get yet international, comfortable indicators. Dual bones shall find ever like parents. Wars need new, heavy purposes\tSports                                            \tfishing                                           \t3.43\t7734.29\t1.85228896636140409\nBacks think now back, british wines. Very fine shows get often serious, fatal prisoners. Good terms ought to come far easy, obvious shoulders. Machines play more ac\tSports                                            \tfishing                                           \t2.94\t7583.99\t1.81629354446177025\nTiny values allow equations. \tSports                                            \tfishing                                           \t4.39\t7729.84\t1.85122323364381680\nIll, simple objects shall bear solid trees. Ears should use there minimum, inappropriate personnel. Available practices should not apply increasingly pr\tSports                                            \tfishing                                           \t7.87\t15557.69\t3.72591893102937088\nSure reliable suppliers show upright other women. 
Maybe\tSports                                            \tfishing                                           \t1.11\t12392.70\t2.96793389870653577\nMuch common times achieve existing, continuing positions. Los\tSports                                            \tfishing                                           \t8.20\t9965.46\t2.38663298152977430\nGood, whole facilities maintain for a points. More worthwhile directors battle annual hours. Yes\tSports                                            \tfishing                                           \t8.90\t603.00\t0.14441277049553697\nRules offer. Important, italian goo\tSports                                            \tfishing                                           \t4.06\t3150.39\t0.75448847104715544\nVital, similar activities change thickly. Seats would sit essentially brilliant words. Hig\tSports                                            \tfishing                                           \t68.38\t6302.32\t1.50934575746174558\nEven useless times make old, old studies. Early public employees must open together warm consequences. Sufficient, evident men would operate stars. Various, other sections control l\tSports                                            \tfishing                                           \t89.62\t2679.48\t0.64171000047658609\nA\tSports                                            \tfitness                                           \t7.12\t10468.61\t4.22441966945963305\nFast bizarre situations fulfil all as political plans. Thus labour conventions run more part-time experiments. Early considerable own\tSports                                            \tfitness                                           \t0.81\t5713.17\t2.30544721056249987\nOther, cultural differences might take. Musical branches take only new defences. \tSports                                            \tfitness                                           \t3.76\t18567.33\t7.49251276543379958\nIncreased machines claim \tSports                                            \tfitness                                           \t1.76\t2327.22\t0.93910786084875139\nNew parties survive\tSports                                            \tfitness                                           \t1.06\t5055.94\t2.04023384036732070\nAbruptly real years cope together; significant accounts provide at a others. Twice competent languages cannot impose most protests. Identical leaders \tSports                                            \tfitness                                           \t3.76\t11311.78\t4.56466578930728034\nClinical, real figures figure effects. Full, pleased bacteria used to fit immediately more main things. Windows will not present perhaps\tSports                                            \tfitness                                           \t4.25\t1715.39\t0.69221484579083182\nConcerned clothes comment less small probl\tSports                                            \tfitness                                           \t0.73\t1855.00\t0.74855195549816254\nLarge, working matters oppose etc far remote aspects; today amer\tSports                                            \tfitness                                           \t3.52\t11563.15\t4.66610164108818229\nPhysical questions confirm much to the marks. Irish, pleased eyes would know in an subsi\tSports                                            \tfitness                                           \t2.86\t8639.15\t3.48617392255630775\nLittle, national services will buy young molecules. 
In part video-taped activities join now\tSports                                            \tfitness                                           \t5.91\t408.38\t0.16479441918401058\nIntelligent trends used to bother open. Bedrooms will not hit all senior, economic boys; objects would sum. Often blue times should deal in a\tSports                                            \tfitness                                           \t3.84\t1925.10\t0.77683955230701493\nAbsolutely wild standards impose only so scottish schools. New, complex incomes can establish children. Certainly free groups will rest. Impressive teeth must go front s\tSports                                            \tfitness                                           \t4.00\t2927.91\t1.18150552885316716\nPolicies think races. Loc\tSports                                            \tfitness                                           \t40.32\t1793.89\t0.72389211183212873\nShares could release barely months. Aware writings used to use so very impossible authorities. \tSports                                            \tfitness                                           \t6.66\t3449.47\t1.39197170562385268\nBoys might not fill in a problems. Military, young ways will encourage somehow inner, large matters. Ways will begin today whole firm\tSports                                            \tfitness                                           \t3.62\t2731.00\t1.10204603259594711\nCorporate heroes examine forth technical, formal shares; buildings may not emphasize abo\tSports                                            \tfitness                                           \t68.11\t4428.60\t1.78708204319092324\nBelow old resources could cover lo\tSports                                            \tfitness                                           \t2.86\t2908.84\t1.17381017263141516\nRunning children may continue common, small wives; great, subtle teams shall change bad, good lines; others may want; parties used to like near a sty\tSports                                            \tfitness                                           \t2.32\t2591.76\t1.04585822974766455\nLabour, dominant dreams come. Please various symptoms cannot persuade so owners. Primary colours would argue once small posts. Live, asia\tSports                                            \tfitness                                           \t48.03\t4332.46\t1.74828647176149287\nDeep, light measures could ask around experimental sections. Days attend social, wise cases. Children should find; as\tSports                                            \tfitness                                           \t3.91\t12590.50\t5.08067029417769025\nTimes force also years. Emotional solutions ought to allow elderly differences. Too urban parents shall accommodate so. Traditional, effective im\tSports                                            \tfitness                                           \t3.60\t8417.45\t3.39671086674286159\nPrincipal eyes should pay frequently relevant areas. Light police m\tSports                                            \tfitness                                           \t3.17\t451.78\t0.18230771021830721\nOriginal hands know as. So prime things might not identify. Less than little journals let very hard things; nurses see; large bodies name once political, national c\tSports                                            \tfitness                                           \t6.83\t1540.63\t0.62169358447392677\nMethods develop warm males. 
Governments depend please with the hospitals. At random tory weaknesses enter approximately simply young me\tSports                                            \tfitness                                           \t6.01\t24.98\t0.01008023064600760\nAlso new activities would not drop immediately fina\tSports                                            \tfitness                                           \t6.42\t9171.55\t3.70101438676505262\nBeings should affect close projects. In common labour metres might call directly \tSports                                            \tfitness                                           \t2.85\t837.90\t0.33811950593633983\nMen could not escape so old victims. Tiny horses give together effective teeth; little, beneficial bones used to forget again days. Of course\tSports                                            \tfitness                                           \t71.90\t2421.19\t0.97702776772646693\nRegions see in the cop\tSports                                            \tfitness                                           \t1.90\t8595.06\t3.46838219440648889\nAsleep, fat topics pick into a rul\tSports                                            \tfitness                                           \t2.70\t3452.62\t1.39324283158601937\nConscious, central results play above about the hands. Stages stay so available universities. Tomorrow professional birds decide; enthusiastically big views appear new window\tSports                                            \tfitness                                           \t9.62\t412.47\t0.16644486527456987\nPlease positive sys\tSports                                            \tfitness                                           \t0.31\t4494.44\t1.81365059346046449\nSimply necessary girls could not take supreme hospitals. Issues ought to \tSports                                            \tfitness                                           \t93.50\t342.93\t0.13838324641454710\nOverseas campaigns must finance just. Researchers believe sure, positive days. Workers appear from a values. Periods can lift ago related, extens\tSports                                            \tfitness                                           \t8.92\t691.02\t0.27884871821473869\nRegular, gold effects take gently for a terms. Good, strong difficulties attract articles. Ultimate farmers develop \tSports                                            \tfitness                                           \t1.12\t3313.24\t1.33699853425052940\nRound prisoners go at all into a lives. Streets find again places. Kindly liable men offer plainly on a contents. Early accurate regions should no\tSports                                            \tfitness                                           \t4.49\t3281.89\t1.32434780443658472\nMore \tSports                                            \tfitness                                           \t0.82\t1089.45\t0.43962799348650845\nSolid, romantic feet would come so equations. Only economic feet will n\tSports                                            \tfitness                                           \t0.36\t6592.06\t2.66010749528906595\nOnly subjects think for a goods. Windows wo\tSports                                            \tfitness                                           \t3.66\t9334.78\t3.76688292352837611\nSpecial miles must ease under across a conditions. Points might continue australian, australian places. 
Entirely \tSports                                            \tfitness                                           \t3.17\t0.00\t0.00000000000000000\nMen mean also weapons. Individual proposals ought to mean farmers. Sometimes valuable eyes might take rights. Rough, different rewards cost real, alone ministers. Requirements may no\tSports                                            \tfitness                                           \t64.89\t3913.00\t1.57902091744706739\nTogether working cases used to buy in a structures. Millions must \tSports                                            \tfitness                                           \t1.88\t3472.20\t1.40114398915402693\nSure, coming sessions could not pass very. Concerned children pick on a individuals. Easy pairs shall return. Reports consider subsequently rough sites. Vital, normal w\tSports                                            \tfitness                                           \t2.27\t5967.84\t2.40821471811329074\nGirls move ways. Other, human actors should participate serious families. New di\tSports                                            \tfitness                                           \t4.79\t10717.00\t4.32465299572712017\nQuick reasons could set only distant a\tSports                                            \tfitness                                           \t1.29\t968.12\t0.39066744968025936\nSo close miles would seem american, emotional horses. Other, alive operations ought to want further red layers. Parameters might faint bad, significant stations. So prime newspapers wou\tSports                                            \tfitness                                           \t2.97\t9281.14\t3.74523746428690903\nRoyal speeches take evil, front margins. For example hard events ought to go angles. Possible, foreign lakes shall not reconsider. Other honours hear momen\tSports                                            \tfitness                                           \t8.13\t0.00\t0.00000000000000000\nPoints force into the symptoms. Local, strong negotiations get examples. For the time being fat result\tSports                                            \tfitness                                           \t5.61\t19543.75\t7.88652953114135530\nSubject, dead qualifications benefit more real nurses. Up to special writers give most responses; social circumstances de\tSports                                            \tfitness                                           \t2.69\t12178.65\t4.91447561877503891\nJust ready clothes try live skills. Girls investigate up \tSports                                            \tfootball                                          \t1.80\t3028.92\t1.26615656780156976\nMostly furious applications cut in a workers; successful, substantia\tSports                                            \tfootball                                          \t3.20\t4690.04\t1.96054202463322710\nDynamic, technical problems cannot go important, general sources. Overall inevitable subjects may take. Recent ends would n\tSports                                            \tfootball                                          \t2.51\t10300.92\t4.30601584472305176\nAllowances might lay at best children. Academic sections burst hot times. Short-term, warm goods\tSports                                            \tfootball                                          \t4.96\t652.80\t0.27288505720219244\nSophisticated, unfair questions may remove separate premises. Typical patterns intervene typically walls. 
Naked areas ought to return now military, necessary children; young met\tSports                                            \tfootball                                          \t33.19\t7921.58\t3.31139830182558766\nOnly available cars could not allow during a films. Cuts might not grow also unfortunately poor names. Windows go at first so key effects. Leading, possible relationships used to rec\tSports                                            \tfootball                                          \t1.80\t5455.78\t2.28063853765713464\nPupils talk tonight even expected rights. However federal costs may not borrow large decisions. Social, american soldiers repair legal, economi\tSports                                            \tfootball                                          \t11.06\t1681.47\t0.70289221374658476\nBritish components must go. Wrong, overseas jobs explain with a towns. Quite ideological habits may\tSports                                            \tfootball                                          \t0.63\t8173.32\t3.41663127409899441\nGirls would face criminal, special offenders. Healthy principles get very greek, ade\tSports                                            \tfootball                                          \t1.47\t435.76\t0.18215746404170861\nDelicate readers gain too able officers. Feet see as international appearances; just prominent samples halt just. Substantia\tSports                                            \tfootball                                          \t94.83\t12471.06\t5.21318309049015641\nDaily, level areas fetch known, other \tSports                                            \tfootball                                          \t69.68\t818.79\t0.34227260414611390\nMore reasonable opp\tSports                                            \tfootball                                          \t3.70\t3418.34\t1.42894287137950754\nAwful eyes get now like a gentlemen. Final countries may become french, turkish sciences. French lives repeat great, big standards. Large, able roads cl\tSports                                            \tfootball                                          \t6.18\t5009.22\t2.09396643112494858\nThanks may add suddenly strong weeks. Times abandon as files. Systems feel cheap targets. Green, formal events understand french, rea\tSports                                            \tfootball                                          \t0.97\t2280.64\t0.95335872680393409\nMiserable officers introduce clearly. Much mathematical eyes could change so before prominent plans. Prices i\tSports                                            \tfootball                                          \t4.67\t20055.07\t8.38346955291662626\nElse social offenders will not support mines. Gently intelligent expressions speed days. Sometimes old houses offer really important, local month\tSports                                            \tfootball                                          \t2.19\t15388.53\t6.43275105592471583\nCritics can cover only str\tSports                                            \tfootball                                          \t1.79\t10295.54\t4.30376688392686948\nSources negotiate never books.\tSports                                            \tfootball                                          \t12.71\t1473.07\t0.61577633457848288\nYoung, previous metals keep here due, equal churches. Strong temperatures avoid. Established, average children could help also technical aspects. 
Feelings navigate now weekl\tSports                                            \tfootball                                          \t1.45\t8988.48\t3.75738645674136449\nWhite, vital departments should become aga\tSports                                            \tfootball                                          \t2.88\t4166.35\t1.74162784631488126\nDaily, marked years may not save players. Then hot families please universally always parental opportunities. Closely medic\tSports                                            \tfootball                                          \t3.21\t1605.80\t0.67126045474154508\nPopular, strong farms worry certainly followers. New documents will argue considerably under a men. Catholic, exist\tSports                                            \tfootball                                          \t0.10\t1110.81\t0.46434352081919024\nClearly great options cannot believe. Responsible products ought to condemn at a systems. Dull types assure; real ser\tSports                                            \tfootball                                          \t3.03\t8226.16\t3.43871958050610814\nSucc\tSports                                            \tfootball                                          \t4.47\t9246.93\t3.86542435967320677\nAlmost busy threats go together recent sides; still tired wines shall not admit on a\tSports                                            \tfootball                                          \t3.88\t7510.88\t3.13971648045159802\nEconomic, crude hands put available payments; irish months pay main, tropical members. Neither soft syste\tSports                                            \tfootball                                          \t4.23\t2877.00\t1.20265059676885365\nInternational, profitable schools sit rather di\tSports                                            \tfootball                                          \t81.85\t205.56\t0.08592869540208744\nYoung features may seem actually for the plans. Unduly\tSports                                            \tfootball                                          \t9.86\t3012.65\t1.25935534249415605\nStandards must pa\tSports                                            \tfootball                                          \t3.63\t836.01\t0.34947095078370849\nVery aspects use then. Popular, weste\tSports                                            \tfootball                                          \t6.30\t1501.17\t0.62752276550278069\nModels may register still digital, professional birds. There necessary things can fail never irish forces. All corporate readers identify more\tSports                                            \tfootball                                          \t68.59\t9190.37\t3.84178100974159524\nAgain sexual officials shall not\tSports                                            \tfootball                                          \t7.81\t11678.56\t4.88190029662873252\nAges must answer even such as a citizens. Fatal candidates say also. Thus great friends create normally \tSports                                            \tfootball                                          \t19.60\t1325.80\t0.55421416795138901\nSuccessive, joint\tSports                                            \tfootball                                          \t4.67\t4363.92\t1.82421654231892103\nDemocrats take before. Joint years woul\tSports                                            \tfootball                                          \t65.80\t7674.39\t3.20806733171252094\nHours take so. 
Now new things want common, recent drugs. Ships will st\tSports                                            \tfootball                                          \t3.32\t1013.26\t0.42356543054640551\nQuiet, small objectives should stay as matches. In particular formal students allow then. Professional, other demands drop\tSports                                            \tfootball                                          \t1.58\t2487.00\t1.03962184016827912\nSuper stars might like approximately stories. Major practices might allow more fresh decisions. Advanced organisations wield. Towns must not protect quickly. Active, righ\tSports                                            \tfootball                                          \t4.05\t6655.69\t2.78222785902276383\nCheaply financial tales allow unfortunately safe, red meals. Who\tSports                                            \tfootball                                          \t2.91\t5952.36\t2.48822012727947644\nHard figures will not help twice central principles. Collective, impor\tSports                                            \tfootball                                          \t2.33\t468.64\t0.19590204229049551\nAdvanced, foreign stories would greet always corporate games. Recent dev\tSports                                            \tfootball                                          \t3.00\t634.63\t0.26528958923441696\nVery questions make secret stocks. Aggressive, major years qualify for example senio\tSports                                            \tfootball                                          \t4.39\t292.60\t0.12231336969571310\nMatters reserve more proper, concerned birds. True months result together more chemical columns. Social views reduce in a affairs. Medieval, serious sports may n\tSports                                            \tfootball                                          \t0.16\t7175.77\t2.99963297628642230\nProud things mus\tSports                                            \tfootball                                          \t28.70\t17469.96\t7.30283552990198210\nUnacceptable flowers should not give reasonable, ethnic governments. Employees shall complain \tSports                                            \tgolf                                              \t8.39\t4100.46\t1.45417454300510042\nCrucial products would carry silently double groups. Really full systems run usual structures. Financial departments must meet well c\tSports                                            \tgolf                                              \t1.50\t12212.90\t4.33114535351326216\nDifferent hours must not know towards a weapons. Facilities shall not know items. Today established fl\tSports                                            \tgolf                                              \t5.73\t437.77\t0.15524940852766344\nEducational terms must apply automatic, other objectives. Indeed financial sources pass very unacceptabl\tSports                                            \tgolf                                              \t6.99\t16143.50\t5.72508126771211978\nMore black mothers shall repea\tSports                                            \tgolf                                              \t14.90\t7660.56\t2.71671747490846200\nAdmini\tSports                                            \tgolf                                              \t9.35\t2840.01\t1.00717242550345943\nSeparate, rapid bodies will start too religious surveys. Geographical, loyal things involve in order. 
Notes need dead for a members; at last economic managers look once more nervous skills; joint\tSports                                            \tgolf                                              \t6.57\t2341.31\t0.83031498887521685\nEuropean quantities would wait\tSports                                            \tgolf                                              \t0.73\t9236.58\t3.27563236818065546\nWet, suitable projects shall follow voluntarily all of a sudden resulting negotiations. High, video-taped services should not take all full eyes; wrong representatives follow royal, full figures. Fre\tSports                                            \tgolf                                              \t3.35\t7298.73\t2.58839919478975935\nGood, interior faces contribute with a rights. Social, certain versions pick furiously between a troops. Forward political countries bec\tSports                                            \tgolf                                              \t7.89\t4757.12\t1.68705042898124194\nGreat, new errors w\tSports                                            \tgolf                                              \t3.21\t791.01\t0.28052135742391451\nStairs say long words. Newspapers will go exceedingly. Other, empty numbers must not provide therefore environmental months. Entirely bare groups buy. New days\tSports                                            \tgolf                                              \t20.77\t1505.63\t0.53395199982069557\nLabour parties worry far well clear files. Finally domestic generations would not announce too; continuous, possible patterns might conceal\tSports                                            \tgolf                                              \t4.32\t2152.66\t0.76341273216794201\nLive processes review home at pres\tSports                                            \tgolf                                              \t2.74\t4204.30\t1.49100004174076658\nJudicial models should not pick. Close dogs can refuse exactly. European, r\tSports                                            \tgolf                                              \t5.70\t6536.36\t2.31803463902021193\nPages could watch fundamental, literary components. Financial, royal elements should overcome environmental trustees. Shared areas \tSports                                            \tgolf                                              \t3.07\t4544.08\t1.61149857756900853\nDemands could treat lines. Conditions suck studies. Documents could not hide local things; gold calls see together. Preferences may refuse indeed in a pieces. Old, unknown boys emerge more opposite, \tSports                                            \tgolf                                              \t2.87\t625.67\t0.22188568753798383\nNew sources play just. English groups evaluate here indian changes. Familiar, able authorities get direct important, emotional orde\tSports                                            \tgolf                                              \t6.52\t7170.18\t2.54281061753176740\nMost angry years help intimate conditions. By far urgent police would agree \tSports                                            \tgolf                                              \t1.81\t13747.41\t4.87533926785135024\nThen growing levels light sometimes human, fellow cities. Users may derive odd championships. 
Stages support right \tSports                                            \tgolf                                              \t8.86\t5586.76\t1.98127141098295675\nBrown customers can detect too. Then human numbers cannot prepare never victorian long accountants; interests share open in the years. Full-time, underlying \tSports                                            \tgolf                                              \t92.44\t6716.33\t2.38185864718140065\nSecondary, normal \tSports                                            \tgolf                                              \t6.04\t7486.01\t2.65481559890393074\nWishes might behave environmental regions. Statements conflict now nuclear\tSports                                            \tgolf                                              \t7.46\t16077.73\t5.70175679687386128\nHorses say. Other peasants can keep at first large kilometres. Necessarily new miles separate for an poems; interestingly indian teeth used to make further.\tSports                                            \tgolf                                              \t3.40\t752.00\t0.26668697081299062\nRussians receive then definit\tSports                                            \tgolf                                              \t8.76\t20347.14\t7.21584724907956645\nIndependent, scientific subsidies might contain. Here certain instructions shall not imagine exhibitions. Either other attitudes buy finally. Public, right p\tSports                                            \tgolf                                              \t4.05\t198.74\t0.07048054332363531\nMarried professionals clarify plans. All basic children could prove more religious big trees.\tSports                                            \tgolf                                              \t4.01\t7501.44\t2.66028764672260686\nRoles shall not remember primary, inc years. Young feelings can s\tSports                                            \tgolf                                              \t5.74\t3892.36\t1.38037459802347363\nParticular, complete artists belong much enough active cheeks; profits may see able, complete processes. Here available officials take aside at a eyebrows. \tSports                                            \tgolf                                              \t4.07\t10080.46\t3.57490338005521200\nPoles decide over for a managers. Properly other views include slim functions. Bright, other minutes should talk exactly certain weeks.\tSports                                            \tgolf                                              \t6.56\t1356.03\t0.48089831520151552\nInevitably dead trees establish original, primary events. Other women ought to issue almost long medical achievements. Catholic, hard cars need here difficult humans. Great,\tSports                                            \tgolf                                              \t0.80\t5928.82\t2.10257851900994022\nStrong changes stay. Future claims will not recoup fo\tSports                                            \tgolf                                              \t2.23\t9989.59\t3.54267752229221140\nImpressive records lie easy origins. Social schools shall bend else different details. Novel chemicals present primarily by a bags. Molecules shall see repeated\tSports                                            \tgolf                                              \t3.63\t4279.32\t1.51760490417479657\nAlso major pieces resign never. 
Substan\tSports                                            \tgolf                                              \t4.63\t55.04\t0.01951921658716357\nAssets may not engage heavily always formal groups. Local, genetic offices cannot keep still sad, annual troops; supreme, natural gaps can see. Nearl\tSports                                            \tgolf                                              \t7.20\t4005.33\t1.42043793192339857\nSo overall investor\tSports                                            \tgolf                                              \t2.54\t15395.25\t5.45972418538390139\nBrothers appoint even.\tSports                                            \tgolf                                              \t3.65\t3103.75\t1.10070436922981335\nClosely substantial instructions wait for a companies; members may bring then characters; recent views should indicate near early days; objectives could not arrive categories. High gains speak\tSports                                            \tgolf                                              \t7.73\t77.67\t0.02754465029660237\nNeighbours shall send important, excellent games. Plain important ways note monthly, japanese figures; routinely \tSports                                            \tgolf                                              \t4.81\t616.44\t0.21861238868079779\nCertainly persistent players move often respective minutes; amer\tSports                                            \tgolf                                              \t7.78\t74.48\t0.02641335849222279\nImpossible, natural cases may wait then products. Political sectors go here sure consultants. Me\tSports                                            \tgolf                                              \t2.14\t2979.66\t1.05669747267637717\nClassical, small perceptions finance again ideas. Obligations determine. Clear, useful crowds could take thus formal, genetic individuals. Int\tSports                                            \tgolf                                              \t0.68\t14169.23\t5.02493221735711581\nForward working birds ought to try already then public pounds. Black, similar hands cover still at a rights. Right contracts save for example general, able feet. Systems could not t\tSports                                            \tgolf                                              \t8.61\t291.36\t0.10332701571286296\nYoung, severe parts must not act therefore rath\tSports                                            \tgolf                                              \t2.17\t1012.25\t0.35898123165618319\nOnly concerned times used to know however in the trees. Developers might not wear in the times. Studies see far variations. Calculations must not transport hardl\tSports                                            \tgolf                                              \t0.15\t8494.93\t3.01261588958563618\nSales include easier from the times. Significant, young features should not keep hardly social\tSports                                            \tgolf                                              \t4.30\t403.10\t0.14295414618978261\nLikely, exciting negotiations disrupt even communications; all normal girls may think about years; allegedly old hands end darkly musical years. 
Individual, similar \tSports                                            \tgolf                                              \t4.26\t9885.12\t3.50562860229110351\nBasic differences stem \tSports                                            \tgolf                                              \t0.88\t12915.95\t4.58047284663835931\nContinental issues need famous areas. Thus christian years shall agree just foreign negotiations. Sensitive centres may not assess large remains. Men eat from the ideas. Other, specific plants say \tSports                                            \tguns                                              \t0.19\t6159.12\t2.82446517920513238\nRevolutionary son\tSports                                            \tguns                                              \t4.83\t7287.25\t3.34180595233776919\nBusinesses may keep also behind a workers. Early, previous objectives hit wet, bottom requests. Under true hours touch similar, long sources. Widely able attitudes must appear now politica\tSports                                            \tguns                                              \t2.73\t6762.87\t3.10133441571052580\nOccasional, biological questions make usually for a tools; parts will use between a machines. Languages swim alive commitments. Other russians shall finish b\tSports                                            \tguns                                              \t4.12\t2865.32\t1.31398585630415545\nAgain dull trials ensure suddenly; communities should produce terms. Too extra notes might choose properly social, absolute talks\tSports                                            \tguns                                              \t6.99\t8342.32\t3.82564268171208874\nOnly other packages shall not lift procedures. Available, only types result obviously rough parts. Deep, back boundaries assert english, blue police; findings will declare restaurants. Little, daily s\tSports                                            \tguns                                              \t2.81\t10686.60\t4.90068866722739088\nComplicated, right projects forget naturally british, true weapons. Employers step also as continuous tickets. Ev\tSports                                            \tguns                                              \t5.02\t8567.83\t3.92905764075860015\nThen vague profits used to buy tonnes. I\tSports                                            \tguns                                              \t0.44\t2445.30\t1.12137199838780706\nNULL\tSports                                            \tguns                                              \t8.03\t272.49\t0.12495916895296837\nVital, possible communications go yet operational effects; \tSports                                            \tguns                                              \t1.48\t11987.62\t5.49731378371310009\nNow good properties see quite mere exceptions; long publications ought to make alone facilities. Certa\tSports                                            \tguns                                              \t4.20\t3874.40\t1.77673237253249895\nNegative patients may not get for a eyes. Past little questions perform highly only, afraid acts. Again co\tSports                                            \tguns                                              \t1.13\t5931.38\t2.72002758099107309\nDifferences imagine up a feet. 
Tender methods shall complet\tSports                                            \tguns                                              \t93.05\t1128.12\t0.51733618730677336\nAnnual communications use enough in a standards; only famous conservatives used to kill new, public children. Men dance so examples. Christian patients shall cause as busy te\tSports                                            \tguns                                              \t2.43\t22127.23\t10.14716236203600213\nCourts define so. Appropriate tables surprise well to a agreemen\tSports                                            \tguns                                              \t7.17\t131.70\t0.06039532662154917\nExamples should not monitor firms. Fo\tSports                                            \tguns                                              \t3.84\t535.99\t0.24579568045470114\nNew years can lend elements. Other, typical figures return under a flowers. Due, following others used to reject in full strong, lik\tSports                                            \tguns                                              \t0.78\t4193.11\t1.92288722862630256\nOther aspects might appear quite good\tSports                                            \tguns                                              \t0.21\t5214.14\t2.39111380676146088\nStrong chips meet to a connections; necessary, suprem\tSports                                            \tguns                                              \t2.74\t4156.55\t1.90612144926955361\nArtistic children can stay significant\tSports                                            \tguns                                              \t5.71\t4613.16\t2.11551484402024129\nOld ideas must withdraw holy pensioners. Additional bo\tSports                                            \tguns                                              \t7.83\t1028.06\t0.47145041371715901\nHigh, capital clothes can show. Prob\tSports                                            \tguns                                              \t28.98\t231.55\t0.10618479786803121\nSettlements relocate colleagues. Well \tSports                                            \tguns                                              \t0.15\t9689.92\t4.44362857506971716\nMajor, late transactions ought to determine interested, industrial group\tSports                                            \tguns                                              \t3.27\t2963.68\t1.35909203949698443\nFilms exclude british, young members; spots decide other, poor agents. Black, \tSports                                            \tguns                                              \t7.63\t834.49\t0.38268258247848571\nBadly heavy reports shall keep there important, given women. Vice versa pure plants reliev\tSports                                            \tguns                                              \t2.78\t1558.80\t0.71483853559355238\nUpwards new instructions help enough firms. Funds see then. Mines might play girls; odd difficulties bid complaints. Others go slightly at a fees. Empty awards find necessarily fi\tSports                                            \tguns                                              \t5.31\t4316.40\t1.97942587569669586\nPolitical, appointed actors might not take formal resources. Possibly new programmes might not use in a waves. 
Racial, suspicious reader\tSports                                            \tguns                                              \t1.08\t15990.81\t7.33310700754088619\nGolden, royal counties work then jobs. Patterns would take efficiently compl\tSports                                            \tguns                                              \t42.09\t2480.64\t1.13757830698921593\nGirls help diverse, clear workers. Classes improve no longer\tSports                                            \tguns                                              \t3.07\t147.44\t0.06761341653060903\nSocial, large demands may attend subsequent, french sales. Small, able others will react in a principles. Enormous procedures could not move terms. Important members take so\tSports                                            \tguns                                              \t6.84\t266.10\t0.12202882622622805\nWooden, english birds say so to a states; key, video-taped trends check largely ago fast ways. Urban patients promote and so on political minu\tSports                                            \tguns                                              \t7.33\t4309.42\t1.97622496924401239\nAlone, fortunate minutes can put particularly out of a consequences. Darling costs run already in a laws. Molecules discover. Temporary, political ty\tSports                                            \tguns                                              \t5.47\t1876.47\t0.86051646579755789\nGood definitions deliver a bit international childre\tSports                                            \tguns                                              \t4.27\t10401.45\t4.76992384273130321\nSuggestions go instead reasonable figures. More fat practices imagine \tSports                                            \tguns                                              \t1.92\t7358.08\t3.37428735692853857\nHowever old days hold perhaps new, gentle bones. Rules achieve also. Fine, vocational proble\tSports                                            \tguns                                              \t7.68\t1967.40\t0.90221538037384845\nChips ought to finish. Bottles may not clear. Right, white wives used to accommodate about a words. Courts choose well new, future rewards. Permanent tourists serve ahead polit\tSports                                            \tguns                                              \t5.55\t2717.44\t1.24617066343555491\nCold clients see lengthy, only spirits; numbers must not want once again tall leads; once naked lads make. Minutes lose front expenses. Probably alive p\tSports                                            \tguns                                              \t0.47\t3757.58\t1.72316075479575351\nRight, vital dreams vary most; documents\tSports                                            \tguns                                              \t4.18\t2652.80\t1.21652788505425697\nDirectly essential organisations introduce onwards atomic words. Much famous steps ma\tSports                                            \tguns                                              \t62.90\t380.00\t0.17426138281084803\nToday keen pages wil\tSports                                            \tguns                                              \t8.17\t1181.16\t0.54165940768647699\nPossible roots must reveal at least upper, previous populations. 
So gr\tSports                                            \tguns                                              \t3.01\t21554.07\t9.88432116684688198\nUnusually global cattle shall tempt great prices. Worlds would not sign certainly deposits. Contributions predict als\tSports                                            \tguns                                              \t4.06\t1782.00\t0.81719416886560838\nIn full possible products bear to a components. Lovely boards help alongside at the possibilities. True, dry papers should disagree into a c\tSports                                            \tguns                                              \t0.52\t763.63\t0.35018742041012600\nResources go in a records. Permanent, flat applications would work\tSports                                            \tguns                                              \t7.43\t571.34\t0.26200657488197345\nNegative in\tSports                                            \thockey                                            \t1.63\t5985.40\t2.60825063748619267\nModern facilities see; certain procedures lure for a features. Still dependent companies put little persons; procedures find to a employers. Public boards know almost also tory considerations.\tSports                                            \thockey                                            \t8.87\t6280.74\t2.73695059793581544\nContracts will improve just by a services. Strange, educational passengers resist only english days. Difficulties should debate then impressive, linguistic applications; fine, new eyes build; roya\tSports                                            \thockey                                            \t6.73\t11482.83\t5.00385916858448520\nFollowing parts treat perhaps appearances. Coming studies perform loudly so professional streets. Lesser, elderly years wear equ\tSports                                            \thockey                                            \t2.07\t8396.19\t3.65879772779683831\nGirls would not enhance here inner authorities. Commercial others might not think normally problems. Loudly bright peasants see yellow candidates. Comfortable sessions may\tSports                                            \thockey                                            \t5.75\t3982.08\t1.73526626433003944\nDepen\tSports                                            \thockey                                            \t3.19\t1800.84\t0.78474990443589989\nThen sophisticated numbers might not facilitate alway\tSports                                            \thockey                                            \t1.14\t1035.30\t0.45115144935834786\nSpeakers get more with a\tSports                                            \thockey                                            \t37.55\t4112.16\t1.79195107118074348\nPublic, available symptoms take somewhat in a minutes; nerves seem. Curious, certain islands contact again vital respects; mass rules might recognise primary,\tSports                                            \thockey                                            \t8.68\t334.35\t0.14569930174148904\nForeign children increase about so tall leaders. Available, domestic telecommunications mess subsequently primary characteristics. Cities risk businesses. Elegant views cannot use f\tSports                                            \thockey                                            \t7.88\t2922.03\t1.27332953691545754\nAll british ways trap stages. 
Accidents welcom\tSports                                            \thockey                                            \t3.21\t4828.96\t2.10431015444169561\nMuch catholic guests invite highest problems. Long men must assume maps. Passive applications want independen\tSports                                            \thockey                                            \t5.63\t10772.75\t4.69442845172910449\nEyes must increase roughly. Services should love now senior, rapid sales. \tSports                                            \thockey                                            \t0.88\t9712.50\t4.23240457055245201\nInternational places\tSports                                            \thockey                                            \t7.18\t5185.35\t2.25961380076336237\nReasonable laws shall pay significant boys. Widespread operations would not run then words. Substantial paintings make stil\tSports                                            \thockey                                            \t0.88\t10680.29\t4.65413726752387621\nMilitary, special factors may adopt often young names. Actually large-scale workers make here advantages. Precious, odd customers study in the careers; usual women win then firms. S\tSports                                            \thockey                                            \t3.48\t7195.62\t3.13562676715146818\nParts work only windows. Positive, vital eyes could happen without a minds; common payments must not investigate only important seeds. Here different\tSports                                            \thockey                                            \t8.94\t1422.63\t0.61993778267233306\nColleagues come so; great places finish only large years. Regulations would know genuinely most other services. Opi\tSports                                            \thockey                                            \t9.08\t3086.02\t1.34479126412522810\nMain months answer weapons. Little, norma\tSports                                            \thockey                                            \t1.15\t619.92\t0.27014180091396407\nWorkers ought to widen late, close benefits. Final eyes restore yesterday high, public funds. Quickly educational days go perhap\tSports                                            \thockey                                            \t3.55\t11162.51\t4.86427370325224722\nThen suspicious authorities can advertise perhaps important massive mammals. Easy lawyers will put. Respectively responsible pounds might acknowledge ti\tSports                                            \thockey                                            \t4.00\t4553.02\t1.98406410891291892\nFlights might work bits. Appropriate powers ought to lie just very parental pounds\tSports                                            \thockey                                            \t3.03\t1200.96\t0.52334091048140775\nLittle hearts must not get here. Best professional hospitals achieve there foreign shoulders. Women should not forestall certainly able deals. Projects sound years. Facilities shall find dry, \tSports                                            \thockey                                            \t47.20\t1750.77\t0.76293096010153065\nAs able participants arise. As red years must make often versus a models. 
Alone techni\tSports                                            \thockey                                            \t0.13\t10294.75\t4.48613096038042269\nSmall regions allow so new deaths; slowly late attacks would install automatically acc\tSports                                            \thockey                                            \t5.69\t12283.72\t5.35286205110801192\nInteresting, complete times join secure reports. Ancient, traditional markets go lessons. Rapid terms figh\tSports                                            \thockey                                            \t3.26\t12950.49\t5.64341962078700893\nReports may develop relevant, clear cells. Intently inc\tSports                                            \thockey                                            \t7.52\t1084.78\t0.47271329009460889\nForces trust together from the systems. Reasons exploit even mar\tSports                                            \thockey                                            \t3.36\t2768.45\t1.20640416302146057\nAnnual priests look often practical genes. Needs may n\tSports                                            \thockey                                            \t0.72\t2604.48\t1.13495115118789706\nTenants shall not know so realistic years. Recommendations tell. Successful, proposed actions used to link also. Holes will not become only later previo\tSports                                            \thockey                                            \t5.91\t6583.03\t2.86867915161739080\nThen royal plans would afford certain, terrible days. Priests ought to care rarely \tSports                                            \thockey                                            \t4.15\t6918.52\t3.01487522980268214\nComplete clubs engage to a classes; other, small estates rob sl\tSports                                            \thockey                                            \t8.86\t2201.70\t0.95943218975395970\nDetails accompany ok. Black savings go ju\tSports                                            \thockey                                            \t7.28\t15049.92\t6.55828573430617849\nIssues recognise only previous \tSports                                            \thockey                                            \t75.67\t4488.20\t1.95581757462584454\nVery old efforts bring sorry supporters. Almost other subjects sha\tSports                                            \thockey                                            \t1.96\t7640.40\t3.32944801862022696\nToo female dates will achieve also national, capable statements. Actual, small lights see then cheap effects. Free peasants used \tSports                                            \thockey                                            \t3.59\t8586.28\t3.74163302095681932\nAs national managers shall respect years. Other police could not consider. Therefore true bodies continue in the factors. Special relations would reach on \tSports                                            \thockey                                            \t3.94\t1856.04\t0.80880434276737946\nTonight certain authorities hang with a cattle. Internationa\tSports                                            \thockey                                            \t0.61\t9094.17\t3.96295564204694903\nPsychological, ill activities talk rather right windows. 
Leaders would know adequately sacred, ordinary offenders; important minutes could affect again norma\tSports                                            \thockey                                            \t7.66\t794.92\t0.34640134272571996\nBritish observations speak great quantities. Personal, ready th\tSports                                            \thockey                                            \t1.66\t274.86\t0.11977541521359557\nLate, chief standards guarantee publicly police. Also political years might come curious years. Systems may not follow so with a times. Central, silent towns must apologis\tSports                                            \thockey                                            \t40.41\t5501.55\t2.39740389859694645\nColumns blame rapidly. English users may not get excellent, female manufactu\tSports                                            \toptics                                            \t0.25\t1588.38\t0.64760161773605996\nSoftly old women ask perhaps as a questions; relevant needs used to fall. Entries would not call together questions. N\tSports                                            \toptics                                            \t3.85\t6270.40\t2.55651744787279515\nProjects mount in general perhaps busy things. Accounts will fail. Often d\tSports                                            \toptics                                            \t56.35\t1751.04\t0.71392005484868258\nGood duties cannot determine gifts. Today social others succeed really quick eggs. Asleep, liable observers understand more after a operations. States must wish just similar women. Questio\tSports                                            \toptics                                            \t4.66\t2203.00\t0.89818957923956490\nSolid police must lift increasingly western girls. However central days choose widely over a drivers. Able years release commonly christian, aware muscles; sometimes important\tSports                                            \toptics                                            \t2.47\t24705.19\t10.07260291018316218\nMad, social circles could arrive increased eggs. Shareholders search very low carers. Fast, significant patients will not seize then capital memorie\tSports                                            \toptics                                            \t1.38\t6498.54\t2.64953286803063189\nObvious eyes talk lives. Neutral, real guests must stay in a departments. Hands can drop in the rounds. Flexible, mutual margins may pass like women; large clubs try. Old, sure records would \tSports                                            \toptics                                            \t6.07\t1813.00\t0.73918189158480761\nCircumstances join by a members. Human, personal priests will not obtain again wide, statutory days. Whole, new kids shall not encourage\tSports                                            \toptics                                            \t4.53\t6033.35\t2.45986931362007665\nNurses should see certainly eyes. Clubs shall go individual procedures. New, internal police might read too international children; healthy, sufficient years break well only new agent\tSports                                            \toptics                                            \t8.75\t9654.45\t3.93623530789351671\nIdentical solicitors must maintain sources. Factors take already unusual minutes. Just various sales sell agricultural, long states. 
\tSports                                            \toptics                                            \t3.77\t1573.11\t0.64137585519634677\nNew hotels join increases. Agencies might not prov\tSports                                            \toptics                                            \t40.19\t2052.76\t0.83693492541071686\nAware, single times would ring to the men. Again double months cover that. Accurate politicians send so social hotels. Other, urban feelings upset just wild eyebrows. True, magnificent p\tSports                                            \toptics                                            \t3.24\t642.52\t0.26196312685111450\nOther, international colours s\tSports                                            \toptics                                            \t3.14\t11101.71\t4.52630060541973219\nQuick artists must hope tough teachers. Social conflicts find rapidly from a shareholders; other tools\tSports                                            \toptics                                            \t3.81\t10100.29\t4.11800963472427822\nNew, able officers may believe often. Losses weep fast excellent, old hours. Able, only regulations shall not let by a countries. Dreams back a little. Sophisticated, \tSports                                            \toptics                                            \t8.41\t1446.65\t0.58981659319424265\nAcute, serious forms change just premises. Above causal buildings may pay so open, traditional consequen\tSports                                            \toptics                                            \t4.49\t7490.92\t3.05413812206865251\nAgo sexual courts may attract. Important, alone observations expect. New, available ways represent years. Excell\tSports                                            \toptics                                            \t8.59\t3179.49\t1.29631628928570322\nBombs shall not help. Angles pull sometimes. Measures train still african pictures. Teachers wear by the motives. Attractive months shall give \tSports                                            \toptics                                            \t0.92\tNULL\tNULL\nOther, different problems spread importantly only likely commitment\tSports                                            \toptics                                            \t3.10\t8596.18\t3.50476590888223467\nPossible opponents can inform also foreign, new heads. Losses face most qualifications. High difficulties will not walk results. Direct, ou\tSports                                            \toptics                                            \t0.27\t149.24\t0.06084694180922046\nDrugs hold years. Cells might reconsider now. Wrong players meet too rapid, integrated parents. Complete, social women used to includ\tSports                                            \toptics                                            \t4.94\t13154.62\t5.36329668763339318\nHolidays will find soon so international expectations; furious children would not talk in order reasons; there current stones shall give as firms. Central drugs ought to love european, following \tSports                                            \toptics                                            \t9.08\t13906.80\t5.66996951455686841\nEuropean nights accompany however expensi\tSports                                            \toptics                                            \t1.37\t3255.97\t1.32749810454682075\nEarnings used to connect of course. Only big branches show into the men. Tiny trousers mediate. 
Highest proposed m\tSports                                            \toptics                                            \t8.81\t3903.78\t1.59161802798176516\nWild, other services change less at a hours. Inherently southern days would win almost remarkable, separate firms; strong, professional children might damage other fea\tSports                                            \toptics                                            \t1.25\t10597.58\t4.32076074496487887\nIndustrial, sexual minutes must cure crowds. \tSports                                            \toptics                                            \t3.33\t503.37\t0.20522999931993635\nSad recordings will borrow most long teachers; then bold shares show markets. Common, dark skills watch really to a le\tSports                                            \toptics                                            \t8.63\t838.35\t0.34180537165478404\nNational, little grounds must not hate broadly. Teachers define abroad normally tall researchers. Cultures handle centres. Major addresses used to look \tSports                                            \toptics                                            \t1.61\t12110.40\t4.93755564249787867\nExcellent, difficult relations attempt. Boots dismantle really social sheets. Literary sp\tSports                                            \toptics                                            \t1.67\t2628.08\t1.07149980454285779\nObvious clubs should finance at leas\tSports                                            \toptics                                            \t5.51\t1283.02\t0.52310267542258128\nAlleged books ought to go altogether different databases; artists will listen years. Forward cold others check effectively. Quite numerous d\tSports                                            \toptics                                            \t5.42\t3201.52\t1.30529818507809887\nTeams judge conscious shareholders. Else local areas imagine ea\tSports                                            \toptics                                            \t2.39\t6080.10\t2.47892985053766615\nTall students should encompass much true women. Rough birds ought to protect as possible families. Political, dead proceedings \tSports                                            \toptics                                            \t1.06\t5878.74\t2.39683295826545608\nNatural, political manufacturers must not pr\tSports                                            \toptics                                            \t2.60\t1879.45\t0.76627435528906048\nPhysical, nationa\tSports                                            \toptics                                            \t52.14\t5315.52\t2.16720139457080890\nRules share briefly ago specific subsidies. Maybe new subjects should scor\tSports                                            \toptics                                            \t1.12\tNULL\tNULL\nExchanges see with a costs. Possible controls achieve yet high similar machines. Rights would not sum suit\tSports                                            \toptics                                            \t4.85\t337.31\t0.13752534134057995\nLegal, local prices ask central instruments. Structures cover for a parents. 
International tourists should \tSports                                            \toptics                                            \t1.84\t3778.91\t1.54070702809086890\nWings can go yellow, expected eyes.\tSports                                            \toptics                                            \t8.93\t5543.20\t2.26002926719961695\nHot grounds shall pass. Impressive methods could change very basic voices. Concrete, desirable centres pay again in a ingredients. Positio\tSports                                            \toptics                                            \t1.04\t2610.25\t1.06423029923289799\nSmall aspects can allow obvious, redundant colours. Past, sound individuals give both; soft, religious months improve; customers use once for a fore\tSports                                            \toptics                                            \t0.82\t1475.16\t0.60144046287382504\nInjuries answer so good issues. Aside aware definitions m\tSports                                            \toptics                                            \t1.71\t6407.03\t2.61222314111451179\nScenes should not learn. Magistrates produce somewhat on a businesses; extremely national values see everywhere. Northern engines shall not aim; rom\tSports                                            \toptics                                            \t1.88\t6498.82\t2.64964702739612762\nColonies give. Even formal payments may follow comparative, frequent years. Perhaps residential messages face times. Late houses talk then conditions. Officials may includ\tSports                                            \toptics                                            \t76.62\t15211.44\t6.20188692384379802\nGreat structures should not survive even here various areas. Cultural results choose likely, female hours. Gold feelings ou\tSports                                            \toptics                                            \t9.72\t3879.70\t1.58180032254913297\nSocial cases need. Inc, right products can know states. Whole, economic years should run relatively new notes. Markets can stop just keen words. Now common services abuse only new, narrow feelings. Ye\tSports                                            \toptics                                            \t0.97\t8141.82\t3.31951787564424615\nOnly economic shares last too white patients. Ever environmental markets might come slightly w\tSports                                            \toutdoor                                           \t1.07\t1920.21\t0.69563739953531432\nStrict results wonder indeed ago possible factors; wrong tables survive for example known differences. Featur\tSports                                            \toutdoor                                           \t3.18\t7506.80\t2.71949986242738947\nTotal, happy arrangements control indeed. Particularly internatio\tSports                                            \toutdoor                                           \t4.20\t5584.92\t2.02325746945009538\nEasy, local stages may not get elected, alone pages; clean mem\tSports                                            \toutdoor                                           \t1.93\t11116.50\t4.02719137590905246\nPublic questions call under way far essential taxes; \tSports                                            \toutdoor                                           \t1.23\t9780.48\t3.54318937689479327\nPreliminary, central jobs would attend unhappily personal members; as blue duties must sound remaining, slow voices. 
Bad years can seem short drugs. Major problems fit more middle countries. S\tSports                                            \toutdoor                                           \t3.62\t276.60\t0.10020430302491287\nHouses decide quite. Elements cannot assume simply; simple, cruel days could know. \tSports                                            \toutdoor                                           \t7.17\tNULL\tNULL\nPrinciples take hardly perhaps financial women. Men revive so in a classes. Only domestic miles perform relations. Urgent, male developers relax major po\tSports                                            \toutdoor                                           \t2.50\t7845.25\t2.84211065909688245\nCosts use again successfully coming weeks. Processes can stress less heavy, oral issues. Personally cheap officials shall go current events. Natural parties imagine powerfully without the we\tSports                                            \toutdoor                                           \t4.07\t3610.83\t1.30810088030168523\nAgo natural taxes could protect rats. More local days shall tend closely. Proteins may intervene very perfect men. Procedures make expens\tSports                                            \toutdoor                                           \t8.79\t12330.06\t4.46682960432160944\nEuropean\tSports                                            \toutdoor                                           \t29.44\t11343.15\t4.10930021640289375\nNumbers choose special bodies. Main pictures offset like a changes; beautiful, large elections must suspend. Electronic p\tSports                                            \toutdoor                                           \t5.79\t6902.40\t2.50054295444381268\nYet green experiments think wonderful minutes. Scottish years may remove twice parental features. Good boundaries look please. French, e\tSports                                            \toutdoor                                           \t8.75\t3992.78\t1.44647048818442374\nGood products may say pp.. Substantial, front flats become actually. Bills tr\tSports                                            \toutdoor                                           \t9.06\t3258.39\t1.18042190503740363\nModern personnel would keep \tSports                                            \toutdoor                                           \t0.48\t6309.82\t2.28586809585197296\nInitial, real signals keep perfect, free sectors; just funny deposits can understand sufficiently. Entire relations shall not relate; poor views must reach probably. Years \tSports                                            \toutdoor                                           \t2.66\t17724.56\t6.42110333052512525\nUnacceptable events must not persuade at least but for a companies; horses would try also crude skills. Turkish, new animals go further scottish lands. European elements believe \tSports                                            \toutdoor                                           \t9.19\t702.52\t0.25450298973630437\nEyes should jump rapidly closer explicit things. Green, radical children could ensure middle consumers. Likely minutes think very pa\tSports                                            \toutdoor                                           \t2.37\t8733.77\t3.16399615195189179\nSo competent candidates would enter suddenly almost cold situations; eyebrows could read enough rational sales. 
Impossible \tSports                                            \toutdoor                                           \t0.33\t2072.27\t0.75072440719246635\nHowever subsequent steps share terribly existing communications; less great responsibilities speed at all long-term mountains. Of \tSports                                            \toutdoor                                           \t4.39\t3486.57\t1.26308502096012459\nIndustries give much proposals. Possible, strong goals ought to live most total criteria. The\tSports                                            \toutdoor                                           \t96.84\t5462.95\t1.97907121189424352\nOnly single galleries discover in the countries. Clean front products ought to shoot even. Ready, educational questions ought to sense shortly tests. Sciences stop. Upright variou\tSports                                            \toutdoor                                           \t1.53\t1332.46\t0.48271231239542806\nEconomic elements used to hear as \tSports                                            \toutdoor                                           \t0.40\t396.48\t0.14363341309948465\nSocial, joint functions should suit. Best absolute goods might not lose still western wonderful hundreds. Inches feel certain years. Diverse lives put breasts; very good police shall \tSports                                            \toutdoor                                           \t5.91\t1973.74\t0.71502979411565989\nTrees work\tSports                                            \toutdoor                                           \t3.30\t8407.66\t3.04585578586565052\nSteps cannot stay only able transaction\tSports                                            \toutdoor                                           \t6.89\t702.30\t0.25442329000143278\nStars divorce there s\tSports                                            \toutdoor                                           \t2.51\t7314.38\t2.64979157613652275\nOriginal women shall know here necessarily national goods. Accounts will make as. Independent members will find a little dreams. Short jobs assist widely new moments. Ago passive represen\tSports                                            \toutdoor                                           \t9.83\t5957.43\t2.15820723416379853\nDistinctive things used to pick today symbolic pictures. Helpful lips know still. Concerned theories must accommodate very in the ph\tSports                                            \toutdoor                                           \t27.94\t9643.98\t3.49373931412219527\nEven short boards can expel anywhere secure charming details. Specia\tSports                                            \toutdoor                                           \t6.91\t8327.04\t3.01664945575043550\nIdeas form on the needs. Firstly rough operations might begin worldwide obvious activities. Twins\tSports                                            \toutdoor                                           \t4.30\t2362.14\t0.85573605331622446\nCreative teachers may close concerned, foreign parts. Alone desirable fires put pupils; areas begin behind a countries. Kindly able rates lead employers. Songs point thoroughly; large, acute others sa\tSports                                            \toutdoor                                           \t2.27\t10905.96\t3.95091872963694416\nObviously base children must seem most for a years. 
Just available\tSports                                            \toutdoor                                           \t5.16\t5010.90\t1.81530637030924041\nAlways small authorities make after a nations; forms will retrieve now. Financial, giant words render american, sensitive activities. Written eggs might not grant now really existing entries; grounds\tSports                                            \toutdoor                                           \t6.44\t4934.08\t1.78747667197817097\nApparently realistic minutes see. Ful\tSports                                            \toutdoor                                           \t2.79\t3360.22\t1.21731201413728388\nLess social teeth play instead as social children. Advances mean very now slow bases. Small fit managers must think about sites; full, civil weap\tSports                                            \toutdoor                                           \t96.73\t8555.01\t3.09923649465350631\nMoreover overall miles say. Leaves may order faintly sure trees. Political, certain drinks protect to a parents. New minutes remember satisfied, exciting feet. Cri\tSports                                            \toutdoor                                           \t5.71\t3006.51\t1.08917295403987994\nAlone healthy sales might meet far other roots. French groups look up to a workers. Fully average miners would walk inadequate considerations. Small, sure goods may admire more app\tSports                                            \toutdoor                                           \t0.48\t1427.56\t0.51716433415128205\nTrue champions get all the same police. Especially clear issues move further great homes. Better environmental sessions burn. Bonds shall test already elderly areas. Imperial, close schools press\tSports                                            \toutdoor                                           \t1.71\t724.38\t0.26242224521036292\nPublic, great addresses must prefer thick maybe dangerous problems. Public pages may shoot now injuries. Flat groups know rather special responsibilities; nuclear months can see dou\tSports                                            \toutdoor                                           \t9.74\t6478.02\t2.34680216587652229\nQuite significant levels move chiefly dirty, actual beliefs. Away significant views bury. Practical proceedings build a bit. Funds think about prime s\tSports                                            \toutdoor                                           \t9.44\t3562.95\t1.29075531982145086\nIndependent, different attitudes include greatly other, bottom waters. Twin others should exert. Extraordinary, bottom tables could go only results. Good, early pupils shall say per\tSports                                            \toutdoor                                           \t98.21\t5097.92\t1.84683123816617431\nTheories must not\tSports                                            \toutdoor                                           \t0.92\t453.25\t0.16419956741157541\nGreat, possible children used to\tSports                                            \toutdoor                                           \t4.00\t8014.65\t2.90347945494800407\nTruly growing visitors shall not receive open, personal times. 
Large societies\tSports                                            \toutdoor                                           \t12.35\t2130.34\t0.77176151448334375\nSo\tSports                                            \toutdoor                                           \t2.12\t6574.51\t2.38175774504815585\nVery major companies would not remedy ever future, clear movies. Famous, equal fees know open, active rights. Original hours apply so. Social, technical rates could \tSports                                            \toutdoor                                           \t3.18\t1551.09\t0.56191573528167788\nSocial thousands choose especially blue claims. Social, right professionals can go tons. General projects must ma\tSports                                            \toutdoor                                           \t0.64\t1598.82\t0.57920695503359072\nProminent, regional tonnes ought to replace extremely. Women could make very young, equal hours. Q\tSports                                            \toutdoor                                           \t4.73\tNULL\tNULL\nMost whole councils arise already so social customers. More sc\tSports                                            \toutdoor                                           \t2.11\t1583.53\t0.57366782346001546\nVarious pockets can get. Areas conduct photographs. Ever \tSports                                            \toutdoor                                           \t1.85\t1513.96\t0.54846459366448694\nScientific risks would use. Quiet minutes imagine times; arms cut inner appeals. Areas happen straight in a changes. Fears kick very currently silent \tSports                                            \toutdoor                                           \t4.22\t474.41\t0.17186523282013346\nClothes realise almost necessary females. Foreign, cultural others may give bad ya\tSports                                            \toutdoor                                           \t7.21\t4335.56\t1.57064992054479841\nHeavy years could come much through a genes. Dealers come so sincerely educational characters. Studies must handle\tSports                                            \toutdoor                                           \t2.12\t7347.30\t2.66171755464548924\nVarious, personal benefits must not remember at le\tSports                                            \toutdoor                                           \t0.34\t6983.49\t2.52991955217443519\nLosses try a little cho\tSports                                            \toutdoor                                           \t4.86\t1698.82\t0.61543410724794823\nIndustr\tSports                                            \toutdoor                                           \t8.35\t1902.72\t0.68930127061302319\nNearly cultural sheets might decide to a years. Loudly new marks create lives. Local, new arrangements must not face b\tSports                                            \toutdoor                                           \t1.39\t431.65\t0.15637450253327419\nAlso religious bits might hear so extensive western talks. Sometimes complete settings mean also minutes. Other, available theories admit both just old years. Considerable seconds will prepare che\tSports                                            \tpools                                             \t0.62\t10914.03\t4.26659608077049963\nOther sports take prime tables; sources think in a priests. Fine, key eyes keep always important branches. 
Still local effects shall get much; black, final metho\tSports                                            \tpools                                             \t2.25\t1716.96\t0.67120713492996785\nFactors would impose that is free, liable thoughts; significant wives buy useful sports; russians make nearly outstanding animals. Problems write. Finally per\tSports                                            \tpools                                             \t2.04\t10920.36\t4.26907065278388765\nPopular systems associate evenly public rights. Unlike mothers experiment around languages. Chea\tSports                                            \tpools                                             \t8.52\t3232.70\t1.26375180848016674\nSubsequent feet can accept regardless. Individual, following arms hold prime officials. Assistant acids might not get however necessary times. Sometimes new times shall not take about. Small\tSports                                            \tpools                                             \t1.90\t9375.14\t3.66500143216343934\nBonds will set ever into the nations. Distinguished, philosophical employees may not include. General, existing tiles must continue only quiet missiles. Small ve\tSports                                            \tpools                                             \t12.34\t9502.98\t3.71497762271502301\nWestern products become grea\tSports                                            \tpools                                             \t8.19\t12699.99\t4.96477722342934165\nVery old circumstances explore fairly upon a lines. Crucial, active looks mean alone bloody recordings; poor bacteria could not transfer both at a properties. States could not understand really at a \tSports                                            \tpools                                             \t3.35\t2713.46\t1.06076653640566500\nYears ought to know then. Associated, simple activities would not indicate now for a brothers. Workers get organizations. S\tSports                                            \tpools                                             \t20.43\t4211.26\t1.64629796794635660\nSupreme injuries could think conditions. Basic, eventual c\tSports                                            \tpools                                             \t9.13\t3177.04\t1.24199277557887491\nAble systems merge from a areas. Most chief efforts must find never for the time being economic directors. Activities sit there. Available polic\tSports                                            \tpools                                             \t3.10\t4811.17\t1.88081937340474643\nCarers get m\tSports                                            \tpools                                             \t5.77\t4684.53\t1.83131229603105623\nPrivileges cut perhaps reasons. Ideas finish times. Women envy general programmes. Hands shall unveil never to a facilities; official proposals conform. Scot\tSports                                            \tpools                                             \t7.52\t8558.76\t3.34585591868955110\nCentral, clear awards announce. Single, very proposals help dry maps. New questions\tSports                                            \tpools                                             \t2.90\t2934.22\t1.14706772403213253\nAble troubles dust into the styles. Independent feet kill wounds. Fundamental months should exploit arms. Massive years read only modern courses; twin forms shall become products. 
Even h\tSports                                            \tpools                                             \t6.81\t6802.61\t2.65932832922487921\nFar good grounds change clearly rocks. Growing,\tSports                                            \tpools                                             \t1.99\t5753.89\t2.24935468595785151\nSecret, familiar questions ought to influence historical values. Central, net investors can hope. So chief arrangements shoul\tSports                                            \tpools                                             \t6.13\t4628.51\t1.80941252917639637\nFine, high letters see now suddenly prime forces. Things used to know temporary men. Late, special methods provide fr\tSports                                            \tpools                                             \t2.85\t2565.78\t1.00303434131290940\nDirectors could involve. No longer local patients see waste lovers. Only direct aims canno\tSports                                            \tpools                                             \t60.43\t1100.10\t0.43005950583383284\nSimilarly direct changes can alienate men; ways surrender forms. Players must develop deep. Social, serious thousands walk. Thanks will not say organisations. Natur\tSports                                            \tpools                                             \t3.39\t3166.29\t1.23779030336024597\nSimple, environmental rights ought to detail thick disabled days; also old drinks move to a conditions. \tSports                                            \tpools                                             \t8.46\t825.24\t0.32260913243733498\nPrevious, significant flats give all formally co\tSports                                            \tpools                                             \t2.82\t6467.74\t2.52841838765722572\nDangerous, other ladies may know neatly. Effortlessly growing services might encourage in the citizens. Banks use secondly other, similar responses. Indirect branches shall not buy i\tSports                                            \tpools                                             \t4.74\t1246.28\t0.48720530945422161\nLiterary, sensitive pages could not know now; very public program\tSports                                            \tpools                                             \t3.36\t2399.19\t0.93790970439184930\nChristian, red laboratories prevent; shoes allow most to a positions. Now religious passengers will not know always in a elections. Southern ages abandon northern terms. Thoughts go as\tSports                                            \tpools                                             \t2.22\t6752.13\t2.63959430154149417\nThings used to reappear. Good powers lead. Rare, traditional months may pay too. Shows tend anywhere extra pp.; canadian, proper questions can investigate only small, certain countrie\tSports                                            \tpools                                             \t4.95\t478.95\t0.18723479712672870\nLike records start clear, likely un\tSports                                            \tpools                                             \t0.52\t127.98\t0.05003092042233790\nProblems might introduce therefore now public details. Early future children shall annoy ever sharp services; civil lines must fly. Finally other serv\tSports                                            \tpools                                             \t4.38\t3165.54\t1.23749710762406255\nExclusive, different friends find for the features. 
Procedures comprehend totally ey\tSports                                            \tpools                                             \t3.90\t7853.37\t3.07009946489432581\nDirect, different traders woul\tSports                                            \tpools                                             \t4.53\t3602.83\t1.40844585892492317\nSouthern hours see\tSports                                            \tpools                                             \t7.73\t2352.82\t0.91978238934274937\nUnable centuries may think away individuals. True, additional feet appear generally recent, pri\tSports                                            \tpools                                             \t3.10\t741.45\t0.28985330479092388\nBasic levels look early, video-taped rights. Employees might not prevail later. Causal, permanent arms could not know here public vessels\tSports                                            \tpools                                             \t13.28\t4827.92\t1.88736741151284270\nThus aware parties would conduct either at the poems. Things plan. Instead old organizations should show rather necessary, b\tSports                                            \tpools                                             \t77.38\t4657.72\t1.82083152578161976\nThoughtfully fine \tSports                                            \tpools                                             \t4.43\t6849.91\t2.67781920698684657\nTypes can scratch like a \tSports                                            \tpools                                             \t9.69\t3733.27\t1.45943846136194267\nOnly sexual functions would avoid special pati\tSports                                            \tpools                                             \t8.64\t4120.56\t1.61084083025057563\nStill male versions will get in a colonies. Wide wages would com\tSports                                            \tpools                                             \t1.46\t5664.01\t2.21421810893363108\nThen available police rememb\tSports                                            \tpools                                             \t0.40\t1103.32\t0.43131829286118030\nPressure\tSports                                            \tpools                                             \t5.42\t3879.88\t1.51675236387107660\nConsumers remind related, slight customers. Large purposes like with a systems; types must go programmes. Main followers shall reduce al\tSports                                            \tpools                                             \t15.70\t1464.58\t0.57254481506600755\nFinal holes agree really probably clear children. So good feet must imply birds. Newly british forces ought to raise nevertheless supreme, fine problems. Necessarily good units may push only \tSports                                            \tpools                                             \t2.20\t1319.87\t0.51597367508853827\nMen make only. Flat, distant depths would assert local,\tSports                                            \tpools                                             \t7.24\t10909.61\t4.26486818056525871\nApparently other offenders should approach\tSports                                            \tpools                                             \t0.36\t15958.64\t6.23867360438145453\nWorkers relieve fast quite female photographs. Other, automatic shares want away right games. 
\tSports                                            \tpools                                             \t1.82\t3069.94\t1.20012442445188328\nHere ready critics stay services. Excellent years ought to \tSports                                            \tpools                                             \t55.17\t2208.60\t0.86340280391291993\nNever future depths c\tSports                                            \tpools                                             \t23.19\t4555.50\t1.78087090157806155\nReal ships suspend for instance worth the arms; ago econo\tSports                                            \tpools                                             \t3.46\t38.42\t0.01501944024555573\nFamous, busy shoes will not secure. Dark, extraordinary thousands might not look then. Numbers ought to e\tSports                                            \tpools                                             \t6.47\t7750.63\t3.02993555831368042\nMassive, military measures must get standards. Services make as well fine \tSports                                            \tpools                                             \t0.51\t10656.29\t4.16583838871194852\nCritics shall not print still black parents. Multiple, accessible responses exclude against a areas. Mo\tSports                                            \tpools                                             \t6.14\t4995.43\t1.95285170187028778\nForces eliminate away. New, large characteristics should reconsider right, used firms. Peculiar principles establish degrees. More growing arts may not say about. Actual animals move here\tSports                                            \tpools                                             \t2.65\t1461.99\t0.57153231245705415\nSenior disputes can bring tonight controversial houses. Heavy, real examples should not offer nearly free effects. Worlds will not add. Agricultural, rare defendants draw maybe possibl\tSports                                            \tpools                                             \t3.45\t7092.42\t2.77262307096263314\nFree plans ca\tSports                                            \tsailing                                           \t0.98\t6984.42\t2.34770798957927730\nSpecial thousands take so reforms. Finally reliable acids used to go pale, small days; great, foreign judges show vice versa fair, true arrangements\tSports                                            \tsailing                                           \t0.90\t11949.72\t4.01671908579886112\nReferences should make private women. Additional, northern values ar\tSports                                            \tsailing                                           \t0.63\t14040.42\t4.71947652218060722\nMore critical photographs balance just now serious values. Scottish, practical views suppl\tSports                                            \tsailing                                           \t5.19\t2863.69\t0.96258642703020159\nQuite british tonnes could buy successfully surprising processes; local interests used to suggest suddenly other solicitors. Shares return just real, royal companies. Crucial, old groups study. Child\tSports                                            \tsailing                                           \t95.70\t6541.62\t2.19886741329868364\nThen other rates may make more at once above councils. 
Camps could give \tSports                                            \tsailing                                           \t0.61\t8648.26\t2.90698284151853421\nScottish, british colleagues enable about a workers. Most good persons could read with a years. Indeed specific damages believe organisations. Immediate facilitie\tSports                                            \tsailing                                           \t1.74\t7276.84\t2.44600058514380124\nEasy, natural leaves contin\tSports                                            \tsailing                                           \t1.73\t12739.66\t4.28224556463149924\nNew routes cannot test over a others. Armed, brown fans make so in a techniques. Electronic, subsequent professionals used to follow in a matters. Enough substantial standards\tSports                                            \tsailing                                           \t3.07\t5349.42\t1.79812727092803377\nOpen times ought to add actually soviet attitudes. Women must imagine of course inner streets. Rightly big records enable yesterday st\tSports                                            \tsailing                                           \t6.43\t2470.80\t0.83052234840580583\nExternal, definite securities might know then particular others; always local years must buy right children. British effects used to enable powerful, \tSports                                            \tsailing                                           \t5.35\tNULL\tNULL\nImportant, broad investors can see dearly vulnerable troops. Eastern, poor lists need genuine facilities. Figures meet equally children. Other, defensive changes go old, new companies; \tSports                                            \tsailing                                           \t71.43\t17348.99\t5.83160268628332577\nYoung, black boys spread too wealthy, major numbers. Profitable drawings might think better purposes. Industr\tSports                                            \tsailing                                           \t3.24\t12918.54\t4.34237339273690257\nJoint texts take only local, asleep shareholders. Detailed courses fast programmes. Soft students know settlements; just b\tSports                                            \tsailing                                           \t4.70\t1007.64\t0.33870306748730216\nOnly american aspirations will not provide then on a subjec\tSports                                            \tsailing                                           \t9.32\t2524.02\t0.84841145289915090\nEqual songs will overcome slight contracts. Large, inner odds go even good women. Feet could not find hard strong models. Bloody machines see dark heads. Huge, only men make at the advis\tSports                                            \tsailing                                           \t2.07\t2504.57\t0.84187362722467586\nPrisoners raise both. Medical children sell; activities \tSports                                            \tsailing                                           \t1.25\t8453.80\t2.84161803017362852\nBenefits may hold\tSports                                            \tsailing                                           \t8.02\t5687.08\t1.91162661371688936\nEthnic positions must buy years. 
Other efforts should get; common goods show exactly aware eyes; foreign, unfair fans may carry thus daily, national actions.\tSports                                            \tsailing                                           \t4.63\t4728.78\t1.58950844693799842\nCriteria shall announce far about other waves. Farmers see possibly; just english managers clean. Head files see both. Comparisons may n\tSports                                            \tsailing                                           \t4.18\t1308.47\t0.43982255836916981\nConnections present high secondary benefits. Levels could compete. Psychological students ought to wonder advanced seats. Of course rich functions would see items; unlikely id\tSports                                            \tsailing                                           \t9.39\t2534.25\t0.85185011390942748\nWell bad areas seem\tSports                                            \tsailing                                           \t0.39\t2413.53\t0.81127189717818704\nBlue, united ministers know childr\tSports                                            \tsailing                                           \t4.68\t530.93\t0.17846415348838210\nDear, continuous problems\tSports                                            \tsailing                                           \t5.90\t8982.06\t3.01918470322237831\nPrices acquire more out of a christians. Efficiently local prices \tSports                                            \tsailing                                           \t2.11\t8027.95\t2.69847494207721747\nGood, capable studies might like bad apparently new years. Modest, payable plants could feed there english women. New, local recommendations last public novels. Candidates must save as orange pla\tSports                                            \tsailing                                           \t4.28\t1617.69\t0.54376222186845881\nMothers may not obtain p\tSports                                            \tsailing                                           \t9.99\t205.80\t0.06917658220087212\nBritish figures can tell much white methods. New, french men could think marginally nuclear relatives. Electronic, differ\tSports                                            \tsailing                                           \t7.39\t13316.13\t4.47601730584304808\nReal appearances could join miles. A\tSports                                            \tsailing                                           \t2.44\t1182.16\t0.39736534700963551\nAt present financial areas used to link very purposes. Likely members can retaliate true, blac\tSports                                            \tsailing                                           \t1.69\t7800.18\t2.62191347401165555\nSpecial birds will not answer especially then public walls. Most human areas could require major groups. Particularly diverse children could continue to the readers\tSports                                            \tsailing                                           \t4.71\t7976.59\t2.68121104867664997\nStudents would rise broad obligations. Good, statistical children would not see. Gradually elegant cases can look w\tSports                                            \tsailing                                           \t4.63\t391.82\t0.13170441417855061\nReliable stages cannot see similarly. Feelings repeat together significant, available notes. Rich, basic roots provide instinctively before the talks. Parties arrest there other investigations. 
Bom\tSports                                            \tsailing                                           \t7.89\t7983.29\t2.68346315315063365\nDemands can imagine also purely fresh eyebrows. Busy skills become almost; complete pa\tSports                                            \tsailing                                           \t4.98\t12443.47\t4.18268574013161433\nProper applications stand now very limited arms. Angrily slow boys shall aid too previous old masses. Mechanical contents think through the times. Sequences may not agree. Old, working stren\tSports                                            \tsailing                                           \t0.63\t679.89\t0.22853482250996573\nSuccessful, able hearts cite then contents. Urban rights will use long important, suspicious ideas; police speak for a methods. Plans seek no longer good gardens\tSports                                            \tsailing                                           \t4.39\t8675.35\t2.91608873856334289\nScientific packages make banks. Then important parents must get front, little bact\tSports                                            \tsailing                                           \t4.23\t6135.42\t2.06232937787597103\nAlso long ways should not give only now good resources. Previous, economic units s\tSports                                            \tsailing                                           \t4.65\t389.74\t0.13100525338662731\nSocial years attend. Bloody wee\tSports                                            \tsailing                                           \t1.94\t3178.08\t1.06826390845941533\nCapital, foreign problems \tSports                                            \tsailing                                           \t3.60\t1277.78\t0.42950657533834004\nOriginal, major nations should come once more now permanent feet. Prizes revise thus with the spots. Aside ordinary studies can learn l\tSports                                            \tsailing                                           \t1.46\t7468.82\t2.51053178169833686\nIndustrial, open sites would throw before a men. Also p\tSports                                            \tsailing                                           \t7.20\t1089.19\t0.36611487642064095\nLoose patients used to look at all companies. Old, low centres may illustr\tSports                                            \tsailing                                           \t6.35\t7701.71\t2.58881426094401766\nEspecially moral students used to keep guilty, bizarre things. Unknown trends reduce later terms; general mothers can find as right n\tSports                                            \tsailing                                           \t3.35\t12086.74\t4.06277630296680815\nOrigins would come sales. Educational eyes could invite actually stupid, forei\tSports                                            \tsailing                                           \t3.77\t9292.44\t3.12351428331716300\nLegal, secondary sales elect. Big years appeal low with a doubts. Military videos might describe; comparable, long companies would not extend now industrial tools. Even ne\tSports                                            \tsailing                                           \t5.45\t1828.50\t0.61462284039987695\nAdditional organisations will adopt usually schemes. Conventional problems should not create attacks. 
Generally european powers win very human, busy months; fai\tSports                                            \tsailing                                           \t0.87\t6498.29\t2.18430268391693540\nWrong, local indians train excellent, comprehensive holidays. Meals c\tSports                                            \tsailing                                           \t60.65\t1510.40\t0.50769829813506926\nNational shareholders learn. Effective proceedings will develop now other, informal days; new, big waves last americans. Solicitors ought to sue flames; interested conservatives might understand just\tSports                                            \tsailing                                           \t0.24\t5784.43\t1.94434935558887624\nAmbitious exceptions appoint. V\tSports                                            \tsailing                                           \t7.35\t9044.55\t3.04018977912972767\nProceedings mi\tSports                                            \tsailing                                           \t7.11\t4105.60\t1.38003584005782598\nAgain standard families change literally. Narrow lips work certainly carefully vast stages. Drugs see also right factors. Financial, western examples ought to let desperately ago sudden\tSports                                            \ttennis                                            \t9.39\t6556.29\t1.81601129267527792\nLate global concepts shall understand very quiet, upper heads. Already english buildings make women. Others try. Please minimal agreements conflict largely forthcoming police. \tSports                                            \ttennis                                            \t4.33\t7426.08\t2.05693237186122454\nSeriously complete characteristics make forward in a projects. Industries should rise then also new departments. Physical babies encourage under to a workers. Personal, beautiful ministers cont\tSports                                            \ttennis                                            \t0.82\t14172.38\t3.92557408596710262\nWhole, new meetings may last; free plans broaden there mammals. Public, honest areas may risk on a profits. Good, normal generations ought to walk almost over a reductions. Otherwise basic s\tSports                                            \ttennis                                            \t4.88\t8723.48\t2.41629613568450044\nEconomic, content activit\tSports                                            \ttennis                                            \t5.07\t16087.57\t4.45605804375706699\nWomen would come fair unaware, current bars. Villages may go then on a neighbours. Early numbers should not change however cr\tSports                                            \ttennis                                            \t2.92\t13912.86\t3.85369025369685708\nWomen should leave also annual, marginal techniques; intellectual, appropriate factors could think profits. Neverthe\tSports                                            \ttennis                                            \t8.24\t23633.13\t6.54608489881669218\nOf course equal nee\tSports                                            \ttennis                                            \t3.49\t11949.65\t3.30990534944566741\nFree representatives can fall much prime, useful banks. Recent, secondary practitioners can talk times; libraries take from now on young prices. Bodies appear only yellow rates. 
Second\tSports                                            \ttennis                                            \t6.85\t7304.83\t2.02334762054045053\nCostly offices collect officially for a debts; readers greet. Women get by a write\tSports                                            \ttennis                                            \t3.22\t2864.47\t0.79342278446035080\nRapidly main banks shall not bring extremely decades. For example main clothes might not see less. Certainly co\tSports                                            \ttennis                                            \t3.15\t5004.38\t1.38615140465694887\nJust able pounds should join then successful modern pieces. Associated, sorry clubs pay close issues. Resources will e\tSports                                            \ttennis                                            \t7.67\t7567.71\t2.09616213128028617\nNecessary times believe probably. Cruel traders know ho\tSports                                            \ttennis                                            \t92.95\t7731.85\t2.14162688247032202\nFunny, armed savings go yet thin\tSports                                            \ttennis                                            \t3.97\t3362.82\t0.93145957473422897\nElected, marvellous advisers may not pass all in a programmes. Directly soviet studies could not stress more than; convenient, public\tSports                                            \ttennis                                            \t4.67\t18.70\t0.00517966886349257\nMen could remove only; economic, clear children raise public, extensive poli\tSports                                            \ttennis                                            \t5.04\t2721.49\t0.75381909172761457\nAble, common villages read. Only social grounds remember e\tSports                                            \ttennis                                            \t2.08\t2677.23\t0.74155961879188295\nSuccessful parties see once on a ideas. Scottish, natural men would not examine regulatory, multiple payments. Steadily loc\tSports                                            \ttennis                                            \t2.55\t8031.03\t2.22449604453340795\nCurrent, \tSports                                            \ttennis                                            \t0.47\t18310.05\t5.07165753336856247\nYears may speak to a\tSports                                            \ttennis                                            \t2.02\t3056.11\t0.84650469574375807\nSeparate, comfortable consumers get. Tests work even high, different faces. Hars\tSports                                            \ttennis                                            \t8.09\t11878.41\t3.29017274998923903\nMuch critical possibilities might ensure; hence northern ways may persuade much japanese, running notes. Small, ed\tSports                                            \ttennis                                            \t8.53\t8171.42\t2.26338233927916847\nAs specific ears worry also labour components. Duly proper articles would attend more easy shapes; years wait head convention\tSports                                            \ttennis                                            \t0.85\t11273.32\t3.12257029904748936\nEarly, experimental factors mean usually suitable detectives; just black assets must not store only. So british employers must see elaborate, complete pages. 
Mental years should t\tSports                                            \ttennis                                            \t88.56\t15092.59\t4.18046088194969605\nSocial, substantial orders would not offset however to a colleagues. Small students give for sure husbands. Subjects shall not make generations; acceptable lights g\tSports                                            \ttennis                                            \t56.30\t5682.58\t1.57400442194147617\nI\tSports                                            \ttennis                                            \t1.04\t4973.48\t1.37759248658839698\nAutomatic amounts may find more in a regulations. Boys can give available, current seasons; here complicated camps may spot even generous open individuals. Channels remain currently protest\tSports                                            \ttennis                                            \t8.43\t3330.22\t0.92242977767808685\nPoints used to find cool typical managers. However military horses understand indeed inc periods. Coloured developments could make very roots. \tSports                                            \ttennis                                            \t8.52\t11481.61\t3.18026405453288334\nSides express even new women. Also joint markets should switch authorities. Trees would play always about a skills. Teams deprive future pubs; ways recover national, old days. Rea\tSports                                            \ttennis                                            \t90.25\t3634.02\t1.00657862263685918\nSecret children will start in short familie\tSports                                            \ttennis                                            \t38.21\t13612.04\t3.77036683190456646\nOther, general countries keep successfully teachers. Major, traditional relationships could not become in a subjects. Constant observers wil\tSports                                            \ttennis                                            \t99.16\t7979.51\t2.21022564133302628\nUpper, industrial years shall opera\tSports                                            \ttennis                                            \t1.58\t369.36\t0.10230815462136981\nAfraid, spanish matt\tSports                                            \ttennis                                            \t3.06\t141.37\t0.03915774263272431\nLight, social animals resist instead then female societies. Also informal minutes shall not implement. Servants win. Hands will a\tSports                                            \ttennis                                            \t8.30\t3341.21\t0.92547387183903783\nModest, educational principles would \tSports                                            \ttennis                                            \t6.42\t18707.39\t5.18171580215038800\nFar little eyes can happen pp.. Related margins will suffer low below active children; times feel just similar, nervous birds. Vegetabl\tSports                                            \ttennis                                            \t9.01\t813.78\t0.22540700148304722\nThen various shoes date good, bad shops. Here open rats match badly well dual games. No doubt small kids answer much points. Completely free services shall understand. Following patients\tSports                                            \ttennis                                            \t5.46\t1154.69\t0.31983485775327459\nWidely free parties would find in a problems. 
Men like parties; straight a\tSports                                            \ttennis                                            \t8.95\t10297.10\t2.85216942536199653\nTired rights free. Paintings sell\tSports                                            \ttennis                                            \t8.06\t5429.22\t1.50382683353214583\nMeetings improve early women. Even likely variables might want approxi\tSports                                            \ttennis                                            \t2.56\t7342.79\t2.03386207134570068\nGrowing jews see only grey tactics. Also indian parts ought to provide pretty other, canadian ways. Countries shall correspond really to a doubts. Star sounds ought to mean further at a steps. \tSports                                            \ttennis                                            \t8.04\t4423.03\t1.22512464028307694\nElse single arrangements will not keep approximately from a teachers. Large levels tolerate daily financial, critical others. Properties make a\tSports                                            \ttennis                                            \t0.30\t5475.18\t1.51655718545546767\nEquivalent, important points would not reject foreign, high mountains. Always alive cups mark near the games. Sons will not stay extremely. Unfortunatel\tSports                                            \ttennis                                            \t0.19\t5314.97\t1.47218099568968454\nConfidential companies can write highly; potentially new children mix sympathetically military, economic gains. Various, traditional designers make in a measurements. Individuals tell only se\tSports                                            \ttennis                                            \t7.12\t1837.86\t0.50906450360740392\nExamples show waves. Currently representative farmers should put like a customers. Both full rights practise with a police. Legal re\tSports                                            \ttennis                                            \t4.24\t735.27\t0.20366070188557120\nPart\tSports                                            \ttennis                                            \t6.53\t4928.46\t1.36512250304644856\nGreat, big arts will not let brilliant pp.. Real, new or\tSports                                            \ttennis                                            \t0.88\t13772.83\t3.81490367450140978\nInc presents cannot break often subjects. Of course capital services would pay. Systems cannot\tSports                                            \ttennis                                            \t9.67\t3395.45\t0.94049768141956387\nParts may refuse primarily old holidays. Scottish, good tests handle however for the households; low measurements will remember into a calls; inc, genuine events used to think again r\tSports                                            \ttennis                                            \t6.88\t733.87\t0.20327291918990865\nLiterary pai\tSports                                            \ttennis                                            \t2.68\t3317.04\t0.91877908058606374\nThemes would not reflect on the jeans. Traditional relations would not force mildly smal\tSports                                            \ttennis                                            \t9.89\t1274.76\t0.35309276365913303\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v1_4/q99.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<substr(w_warehouse_name, 1, 20):string,sm_type:string,cc_name:string,30 days :bigint,31 - 60 days :bigint,61 - 90 days :bigint,91 - 120 days :bigint,>120 days :bigint>\n-- !query output\nJust good amou\tEXPRESS                       \tMid Atlantic\t1260\t1337\t1352\t0\t0\nJust good amou\tEXPRESS                       \tNY Metro\t1297\t1268\t1203\t0\t0\nJust good amou\tEXPRESS                       \tNorth Midwest\t1291\t1266\t1327\t0\t0\nJust good amou\tLIBRARY                       \tMid Atlantic\t932\t1025\t970\t0\t0\nJust good amou\tLIBRARY                       \tNY Metro\t918\t973\t922\t0\t0\nJust good amou\tLIBRARY                       \tNorth Midwest\t899\t945\t970\t0\t0\nJust good amou\tNEXT DAY                      \tMid Atlantic\t1273\t1356\t1306\t0\t0\nJust good amou\tNEXT DAY                      \tNY Metro\t1301\t1304\t1226\t0\t0\nJust good amou\tNEXT DAY                      \tNorth Midwest\t1158\t1250\t1247\t0\t0\nJust good amou\tOVERNIGHT                     \tMid Atlantic\t1021\t961\t1055\t0\t0\nJust good amou\tOVERNIGHT                     \tNY Metro\t852\t992\t977\t0\t0\nJust good amou\tOVERNIGHT                     \tNorth Midwest\t858\t993\t982\t0\t0\nJust good amou\tREGULAR                       \tMid Atlantic\t948\t982\t993\t0\t0\nJust good amou\tREGULAR                       \tNY Metro\t910\t941\t951\t0\t0\nJust good amou\tREGULAR                       \tNorth Midwest\t880\t948\t993\t0\t0\nJust good amou\tTWO DAY                       \tMid Atlantic\t958\t1030\t981\t0\t0\nJust good amou\tTWO DAY                       \tNY Metro\t891\t907\t907\t0\t0\nJust good amou\tTWO DAY                       \tNorth Midwest\t924\t886\t955\t0\t0\nMatches produce\tEXPRESS                       \tMid Atlantic\t1216\t1329\t1314\t0\t0\nMatches produce\tEXPRESS                       \tNY Metro\t1164\t1301\t1225\t0\t0\nMatches produce\tEXPRESS                       \tNorth Midwest\t1246\t1265\t1264\t0\t0\nMatches produce\tLIBRARY                       \tMid Atlantic\t890\t962\t963\t0\t0\nMatches produce\tLIBRARY                       \tNY Metro\t891\t970\t963\t0\t0\nMatches produce\tLIBRARY                       \tNorth Midwest\t956\t902\t964\t0\t0\nMatches produce\tNEXT DAY                      \tMid Atlantic\t1339\t1308\t1245\t0\t0\nMatches produce\tNEXT DAY                      \tNY Metro\t1218\t1280\t1177\t0\t0\nMatches produce\tNEXT DAY                      \tNorth Midwest\t1265\t1249\t1309\t0\t0\nMatches produce\tOVERNIGHT                     \tMid Atlantic\t928\t916\t959\t0\t0\nMatches produce\tOVERNIGHT                     \tNY Metro\t928\t937\t959\t0\t0\nMatches produce\tOVERNIGHT                     \tNorth Midwest\t924\t985\t923\t0\t0\nMatches produce\tREGULAR                       \tMid Atlantic\t937\t919\t956\t0\t0\nMatches produce\tREGULAR                       \tNY Metro\t920\t970\t942\t0\t0\nMatches produce\tREGULAR                       \tNorth Midwest\t920\t978\t1033\t0\t0\nMatches produce\tTWO DAY                       \tMid Atlantic\t947\t961\t1010\t0\t0\nMatches produce\tTWO DAY                       \tNY Metro\t870\t950\t1004\t0\t0\nMatches produce\tTWO DAY                       \tNorth Midwest\t896\t989\t883\t0\t0\nOperations\tEXPRESS                       \tMid Atlantic\t1282\t1274\t1361\t0\t0\nOperations\tEXPRESS                       \tNY Metro\t1183\t1267\t1206\t0\t0\nOperations\tEXPRESS                       \tNorth 
Midwest\t1182\t1297\t1234\t0\t0\nOperations\tLIBRARY                       \tMid Atlantic\t955\t1001\t1015\t0\t0\nOperations\tLIBRARY                       \tNY Metro\t917\t948\t930\t0\t0\nOperations\tLIBRARY                       \tNorth Midwest\t890\t926\t977\t0\t0\nOperations\tNEXT DAY                      \tMid Atlantic\t1197\t1322\t1291\t0\t0\nOperations\tNEXT DAY                      \tNY Metro\t1221\t1238\t1294\t0\t0\nOperations\tNEXT DAY                      \tNorth Midwest\t1277\t1295\t1273\t0\t0\nOperations\tOVERNIGHT                     \tMid Atlantic\t904\t1021\t953\t0\t0\nOperations\tOVERNIGHT                     \tNY Metro\t923\t915\t975\t0\t0\nOperations\tOVERNIGHT                     \tNorth Midwest\t932\t1010\t987\t0\t0\nOperations\tREGULAR                       \tMid Atlantic\t953\t1024\t974\t0\t0\nOperations\tREGULAR                       \tNY Metro\t902\t892\t901\t0\t0\nOperations\tREGULAR                       \tNorth Midwest\t938\t942\t990\t0\t0\nOperations\tTWO DAY                       \tMid Atlantic\t964\t990\t1011\t0\t0\nOperations\tTWO DAY                       \tNY Metro\t917\t946\t886\t0\t0\nOperations\tTWO DAY                       \tNorth Midwest\t926\t973\t980\t0\t0\nSelective,\tEXPRESS                       \tMid Atlantic\t1307\t1294\t1263\t0\t0\nSelective,\tEXPRESS                       \tNY Metro\t1186\t1296\t1230\t0\t0\nSelective,\tEXPRESS                       \tNorth Midwest\t1203\t1289\t1271\t0\t0\nSelective,\tLIBRARY                       \tMid Atlantic\t1004\t959\t972\t0\t0\nSelective,\tLIBRARY                       \tNY Metro\t932\t940\t931\t0\t0\nSelective,\tLIBRARY                       \tNorth Midwest\t925\t912\t940\t0\t0\nSelective,\tNEXT DAY                      \tMid Atlantic\t1267\t1309\t1327\t0\t0\nSelective,\tNEXT DAY                      \tNY Metro\t1244\t1244\t1261\t0\t0\nSelective,\tNEXT DAY                      \tNorth Midwest\t1234\t1268\t1270\t0\t0\nSelective,\tOVERNIGHT                     \tMid Atlantic\t978\t945\t1068\t0\t0\nSelective,\tOVERNIGHT                     \tNY Metro\t938\t963\t947\t0\t0\nSelective,\tOVERNIGHT                     \tNorth Midwest\t882\t936\t932\t0\t0\nSelective,\tREGULAR                       \tMid Atlantic\t989\t948\t970\t0\t0\nSelective,\tREGULAR                       \tNY Metro\t917\t972\t980\t0\t0\nSelective,\tREGULAR                       \tNorth Midwest\t876\t937\t1001\t0\t0\nSelective,\tTWO DAY                       \tMid Atlantic\t951\t974\t972\t0\t0\nSelective,\tTWO DAY                       \tNY Metro\t928\t1007\t934\t0\t0\nSelective,\tTWO DAY                       \tNorth Midwest\t968\t942\t996\t0\t0\nSignificantly\tEXPRESS                       \tMid Atlantic\t1260\t1340\t1298\t0\t0\nSignificantly\tEXPRESS                       \tNY Metro\t1231\t1326\t1236\t0\t0\nSignificantly\tEXPRESS                       \tNorth Midwest\t1200\t1222\t1233\t0\t0\nSignificantly\tLIBRARY                       \tMid Atlantic\t949\t1048\t965\t0\t0\nSignificantly\tLIBRARY                       \tNY Metro\t908\t963\t915\t0\t0\nSignificantly\tLIBRARY                       \tNorth Midwest\t970\t984\t920\t0\t0\nSignificantly\tNEXT DAY                      \tMid Atlantic\t1312\t1347\t1268\t0\t0\nSignificantly\tNEXT DAY                      \tNY Metro\t1198\t1251\t1190\t0\t0\nSignificantly\tNEXT DAY                      \tNorth Midwest\t1231\t1232\t1307\t0\t0\nSignificantly\tOVERNIGHT                     \tMid Atlantic\t990\t973\t990\t0\t0\nSignificantly\tOVERNIGHT                     \tNY 
Metro\t891\t925\t954\t0\t0\nSignificantly\tOVERNIGHT                     \tNorth Midwest\t876\t971\t958\t0\t0\nSignificantly\tREGULAR                       \tMid Atlantic\t942\t1006\t913\t0\t0\nSignificantly\tREGULAR                       \tNY Metro\t955\t956\t957\t0\t0\nSignificantly\tREGULAR                       \tNorth Midwest\t910\t937\t1001\t0\t0\nSignificantly\tTWO DAY                       \tMid Atlantic\t957\t1027\t1018\t0\t0\nSignificantly\tTWO DAY                       \tNY Metro\t971\t972\t958\t0\t0\nSignificantly\tTWO DAY                       \tNorth Midwest\t885\t977\t919\t0\t0\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q10a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<cd_gender:string,cd_marital_status:string,cd_education_status:string,cnt1:bigint,cd_purchase_estimate:int,cnt2:bigint,cd_credit_rating:string,cnt3:bigint,cd_dep_count:int,cnt4:bigint,cd_dep_employed_count:int,cnt5:bigint,cd_dep_college_count:int,cnt6:bigint>\n-- !query output\nF\tS\tAdvanced Degree     \t1\t4500\t1\tHigh Risk \t1\t2\t1\t0\t1\t0\t1\nM\tD\tUnknown             \t1\t5000\t1\tUnknown   \t1\t5\t1\t4\t1\t6\t1\nM\tM\t2 yr Degree         \t1\t2500\t1\tUnknown   \t1\t2\t1\t0\t1\t4\t1\nM\tW\tPrimary             \t1\t2000\t1\tLow Risk  \t1\t2\t1\t3\t1\t4\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q11.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<customer_id:string,customer_first_name:string,customer_last_name:string,customer_email_address:string>\n-- !query output\nAAAAAAAAAAECBAAA\tFrank               \tWenzel                        \tFrank.Wenzel@zhXN6.com                            \nAAAAAAAAABGKAAAA\tJonna               \tKing                          \tJonna.King@aFu3Mu88XNgLh7lzUbBd.com               \nAAAAAAAAAFAGAAAA\tRobert              \tChang                         \tRobert.Chang@q5SPPnTKPgA2siE.org                  \nAAAAAAAAAFBNAAAA\tRobert              \tBaines                        \tRobert.Baines@FI8euotCCfA0dfsoy.com               \nAAAAAAAAAGLPAAAA\tCharlene            \tMarcus                        \tCharlene.Marcus@XYRXjq9m6.com                     \nAAAAAAAABAAGAAAA\tLuis                \tJames                         \tLuis.James@oLxkv69Mc9.edu                         \nAAAAAAAABBEAAAAA\tJason               \tGallegos                      \tJason.Gallegos@sg0JhLIArOU5lOS.org                \nAAAAAAAABGMHBAAA\tMichael             \tGillespie                     \tMichael.Gillespie@J63SDK8lTkTx.edu                \nAAAAAAAABIABAAAA\tLetha               \tStone                         \tLetha.Stone@BkqMc.com                             \nAAAAAAAABILCAAAA\tTheresa             \tMullins                       \tTheresa.Mullins@96UTbTai7sO.org                   \nAAAAAAAABJEDBAAA\tArthur              \tBryan                         \tArthur.Bryan@ZvCpRQMEbZYcg.org                    \nAAAAAAAABKDKAAAA\tGerald              \tRuiz                          \tGerald.Ruiz@kHcL2q.com                            \nAAAAAAAACEMIAAAA\tJames               \tHernandez                     \tJames.Hernandez@gj0dkjapodlS.com                  \nAAAAAAAACGLDAAAA\tAngelo              \tSloan                         \tAngelo.Sloan@dabad6klflJ.edu                      \nAAAAAAAACKKIAAAA\tNULL\tNULL\tLorraine.Miller@31f.edu                           \nAAAAAAAACOEHBAAA\tChristine           \tGonzalez                      \tChristine.Gonzalez@oHMdrqfEDX.org                 \nAAAAAAAACPDFBAAA\tCheryl              \tBarry                         \tCheryl.Barry@b4id7Q6XJNsB.edu                     \nAAAAAAAADFJBBAAA\tPatrick             \tJones                         \tPatrick.Jones@L0aUVXsdRxldn.com                   \nAAAAAAAADHNHBAAA\tPatrick             \tCooper                        \tPatrick.Cooper@2kXFgCYx14V.edu                    \nAAAAAAAADKMBAAAA\tDonald              \tNelson                        \tDonald.Nelson@b6TdhXbAelMn8oF.edu                 \nAAAAAAAAEBFHAAAA\tEsther              \tMerrill                       \tEsther.Merrill@hGu.edu                            \nAAAAAAAAEBJNAAAA\tAlfred              \tGlenn                         \tAlfred.Glenn@xCi0.edu                             \nAAAAAAAAEFCEBAAA\tCornelius           \tMartino                       \tCornelius.Martino@In2KFInUjUY.com                 \nAAAAAAAAEIAHAAAA\tHenry               \tDesantis                      \tHenry.Desantis@znTqdvjJGag4.edu                   \nAAAAAAAAEIPIAAAA\tLuke                \tRios                          \tLuke.Rios@NgqF4xn2Qxgm00FR0.com                   \nAAAAAAAAFAIEAAAA\tBetty               \tGipson                        \tBetty.Gipson@13Lp7iesLn.com                       \nAAAAAAAAFDIMAAAA\tStephanie           \tCowan                         \tStephanie.Cowan@R80Njmu1D1n0d.com                 \nAAAAAAAAFGMHBAAA\tDonald        
      \tColeman                       \tDonald.Coleman@S4KL45.org                         \nAAAAAAAAFGNEAAAA\tAndrew              \tSilva                         \tAndrew.Silva@hx4.edu                              \nAAAAAAAAFHNDAAAA\tVirgil              \tMcdonald                      \tVirgil.Mcdonald@dUD.org                           \nAAAAAAAAFMOKAAAA\tHarry               \tBrown                         \tHarry.Brown@Clj2rtJAo.com                         \nAAAAAAAAFMPPAAAA\tManuel              \tBryant                        \tManuel.Bryant@1LtMa1H0t8B5.edu                    \nAAAAAAAAFOEDAAAA\tLori                \tErwin                         \tLori.Erwin@SkmpHUaEnhHBkQ.com                     \nAAAAAAAAGCGIAAAA\tMae                 \tWilliams                      \tMae.Williams@mfBvsN8VAQOX21Yh.org                 \nAAAAAAAAGEKLAAAA\tJerilyn             \tWalker                        \tJerilyn.Walker@hOIXjGj8unTzQ5J3Um.edu             \nAAAAAAAAGGMHAAAA\tJulia               \tFisher                        \tJulia.Fisher@eyrOB7M7abp.org                      \nAAAAAAAAGHFDAAAA\tLaura               \tRoy                           \tLaura.Roy@xb1d3mQ2.org                            \nAAAAAAAAGLDMAAAA\tAlex                \tNorris                        \tAlex.Norris@GABnCVFfjXxUV2Q.edu                   \nAAAAAAAAGMFHAAAA\tBruce               \tHowe                          \tBruce.Howe@yNj94o0DBJ.com                         \nAAAAAAAAGMGEBAAA\tTamika              \tPotts                         \tTamika.Potts@yzUu.edu                             \nAAAAAAAAHBEABAAA\tBonnie              \tCochran                       \tBonnie.Cochran@D3oggm81Joopv.com                  \nAAAAAAAAHEIFBAAA\tNULL\tJones                         \tAnn.Jones@E1eNB.edu                               \nAAAAAAAAHEPFBAAA\tKathryn             \tKinney                        \tKathryn.Kinney@Stq02g.com                         \nAAAAAAAAHGKLAAAA\tArthur              \tChristensen                   \tArthur.Christensen@VFNBhqKt1TAdrr.edu             \nAAAAAAAAHIEIAAAA\tWilliam             \tRoberts                       \tWilliam.Roberts@ObeXEfeXMMgm.org                  \nAAAAAAAAHLEAAAAA\tGeneva              \tSims                          \tGeneva.Sims@1E0ayoK5qFo.edu                       \nAAAAAAAAHLJCAAAA\tMarlene             \tGrover                        \tMarlene.Grover@F9DZzXQsJNYJ.org                   \nAAAAAAAAHPMLAAAA\tElizabeth           \tKennedy                       \tElizabeth.Kennedy@A0YBfGbsbmc3om.edu              \nAAAAAAAAIANDAAAA\tElva                \tWade                          \tElva.Wade@4xZ1agk4PU9.edu                         \nAAAAAAAAIBBFBAAA\tJames               \tCompton                       \tJames.Compton@KzmxhGNiTqrp.com                    \nAAAAAAAAIBJDBAAA\tDean                \tVelez                         \tDean.Velez@ycRqZT5hVfX8ZZk.org                    \nAAAAAAAAILLJAAAA\tBilly               \tOrtiz                         \tBilly.Ortiz@AEcBAd1rTF.org                        \nAAAAAAAAIODCBAAA\tJennifer            \tCrane                         \tJennifer.Crane@Sbzbbg2f7tIl32aDBj.org             \nAAAAAAAAIPGJAAAA\tMichael             \tNULL\tMichael.Connelly@K.edu                            \nAAAAAAAAIPKJAAAA\tCharles             \tJones                         \tCharles.Jones@aZTRs91tA.org                       \nAAAAAAAAJADIAAAA\tMargaret            \tRoberts                       \tMargaret.Roberts@jp.edu                           \nAAAAAAAAJBELAAAA\tSean           
     \tBusby                         \tSean.Busby@HlbL26U77.edu                          \nAAAAAAAAJCNBBAAA\tJohnnie             \tCox                           \tJohnnie.Cox@nNTlnRXjr5.edu                        \nAAAAAAAAJDEFAAAA\tLoretta             \tSerrano                       \tLoretta.Serrano@GYZpg38p40VgqS7L9.edu             \nAAAAAAAAJDKKAAAA\tSharon              \tReynolds                      \tSharon.Reynolds@tk5.org                           \nAAAAAAAAJEDJAAAA\tDavid               \tTaylor                        \tDavid.Taylor@kxn8ngym6u9XoC.org                   \nAAAAAAAAJGDLAAAA\tFredrick            \tDavis                         \tFredrick.Davis@fBIx4ZgRJ2.org                     \nAAAAAAAAJHGFAAAA\tPamela              \tGannon                        \tPamela.Gannon@dx1Vy6KLG.org                       \nAAAAAAAAJIAHAAAA\tShawna              \tDelgado                       \tShawna.Delgado@Mu5QaTkI2N4tdINV.org               \nAAAAAAAAJILDBAAA\tErica               \tReynolds                      \tErica.Reynolds@NjAGMPr5SynCgvs.org                \nAAAAAAAAJMIDAAAA\tSally               \tThurman                       \tSally.Thurman@xTyyZ3qRIlqa8oBLYTNm.org            \nAAAAAAAAKAKPAAAA\tCarolann            \tRoyer                         \tCarolann.Royer@yVh8tzAJmV.com                     \nAAAAAAAAKLHDAAAA\tBrittany            \tKnox                          \tBrittany.Knox@ldm93PY1oSCUpHZQfy.org              \nAAAAAAAAKMHPAAAA\tRobert              \tJones                         \tRobert.Jones@rYXHiKMN4A.org                       \nAAAAAAAAKNMEBAAA\tAmber               \tGonzalez                      \tAmber.Gonzalez@Va5m6mBBm.edu                      \nAAAAAAAALEAHBAAA\tEddie               \tPena                          \tEddie.Pena@2L00HyEmYqdOy.org                      \nAAAAAAAALMAJAAAA\tIleen               \tLinn                          \tIleen.Linn@5FUTo7S.org                            \nAAAAAAAALMGGBAAA\tDedra               \tRainey                        \tDedra.Rainey@ik6CKcRSdO6GBi.edu                   \nAAAAAAAALNLABAAA\tJanie               \tGarcia                        \tJanie.Garcia@IpXCI4cANG0F1M.org                   \nAAAAAAAALPHGBAAA\tDorothy             \tHeller                        \tDorothy.Heller@dXV5.edu                           \nAAAAAAAAMFMKAAAA\tJohn                \tSanders                       \tJohn.Sanders@g.com                                \nAAAAAAAAMHOLAAAA\tTerri               \tCook                          \tTerri.Cook@Vz02fJPUlPO.edu                        \nAAAAAAAAMJFAAAAA\tMarcus              \tEspinal                       \tMarcus.Espinal@zoIoG4RpC.com                      \nAAAAAAAAMLOEAAAA\tMiguel              \tJackson                       \tMiguel.Jackson@i8Me4xM79.org                      \nAAAAAAAANBECBAAA\tMichael             \tLombardi                      \tMichael.Lombardi@J.com                            \nAAAAAAAANKBBAAAA\tDiann               \tSaunders                      \tDiann.Saunders@g6OYMRl4DEBFz.org                  \nAAAAAAAAOCDCAAAA\tArmando             \tJackson                       \tArmando.Jackson@IoY0Kf.edu                        \nAAAAAAAAOEDIAAAA\tAlexander           \tRich                          \tAlexander.Rich@YT4HorlXiEXVj.org                  \nAAAAAAAAOFFIAAAA\tFrank               \tMilton                        \tFrank.Milton@satakl9QHE.edu                       \nAAAAAAAAOJBPAAAA\tJonathan            \tMcbride                       \tJonathan.Mcbride@SjQzPb47cUO.com            
      \nAAAAAAAAOMOKAAAA\tLaurette            \tGary                          \tLaurette.Gary@bSgl2F0kEo2tf.org                   \nAAAAAAAAOOKKAAAA\tDeborah             \tEarly                         \tDeborah.Early@Zpi3TmsGBi.edu                      \nAAAAAAAAOPMDAAAA\tPeggy               \tSmith                         \tPeggy.Smith@CfqGXuI6hH.org                        \nAAAAAAAAOPPKAAAA\tTina                \tJohnson                       \tTina.Johnson@LlITreSC2jD7.com                     \nAAAAAAAAPAEEBAAA\tAudria              \tMattson                       \tAudria.Mattson@V6zU0l0A.com                       \nAAAAAAAAPBIGBAAA\tSusie               \tZavala                        \tSusie.Zavala@C0UOUuL65F7kV.com                    \nAAAAAAAAPEFLAAAA\tDavid               \tMartinez                      \tDavid.Martinez@ghefIHRjR1N.com                    \nAAAAAAAAPFKDAAAA\tLinda               \tSimmons                       \tLinda.Simmons@P3L.com                             \nAAAAAAAAPNMGAAAA\tChristine           \tOlds                          \tChristine.Olds@acdIL3Bsp4QnMIc.org\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q12.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,i_item_desc:string,i_category:string,i_class:string,i_current_price:decimal(7,2),itemrevenue:decimal(17,2),revenueratio:decimal(38,17)>\n-- !query output\nAAAAAAAAAELBAAAA\tPrecisely elderly bodies\tBooks                                             \tarts                                              \t1.40\t11.21\t0.01417562243122168\nAAAAAAAAALNCAAAA\tGreat, contemporary workers would not remove of course cultural values. Then due children might see positive seconds. Significant problems w\tBooks                                             \tarts                                              \t0.55\t515.52\t0.65190159462474560\nAAAAAAAABKLDAAAA\tForward psychological plants establish closely yet eastern changes. Likewise necessary techniques might drop. Pleasant operations like lonely things; dogs let regions. Forces might not result clearl\tBooks                                             \tarts                                              \t2.43\t11462.46\t14.49487110552909973\nAAAAAAAADLLDAAAA\tBlack, following services justify by a investors; dirty, different charts will fly however prizes. Temporary, l\tBooks                                             \tarts                                              \t5.56\t3400.60\t4.30023386615632740\nAAAAAAAAFCIBAAAA\tUnited, important objectives put similarly large, previous phenomena; old, present days receive. Happy detectives assi\tBooks                                             \tarts                                              \t1.26\t784.30\t0.99178774958137022\nAAAAAAAAFFIBAAAA\tNaturally new years put serious, negative vehicles. Fin\tBooks                                             \tarts                                              \t3.34\t3319.96\t4.19826043236027781\nAAAAAAAAGHBAAAAA\tHard different differences would not paint even. Together suitable schemes marry directly only open women. Social ca\tBooks                                             \tarts                                              \t2.65\t229.68\t0.29044219090124839\nAAAAAAAAGMFAAAAA\tAnonymous, useful women provoke slightly present persons. Ideas ought to cost almost competent, working parties; aspects provide thr\tBooks                                             \tarts                                              \t6.73\t5752.44\t7.27425669029944833\nAAAAAAAAHHEBAAAA\tPowerful walls will find; there scottish decades must not\tBooks                                             \tarts                                              \t4.16\t434.76\t0.54977641464745189\nAAAAAAAAIBOCAAAA\tCareful privileges ought to live rather to a boards. 
Possible, broad p\tBooks                                             \tarts                                              \t3.93\t969.48\t1.22595739827125692\nAAAAAAAAICMBAAAA\tAside legitimate decisions may not stand probably sexual g\tBooks                                             \tarts                                              \t3.88\t349.20\t0.44158138742039332\nAAAAAAAAIFPBAAAA\tSpecially interesting crews continue current, foreign directions; only social men would not call at least political children; circumstances could not understand now in a assessme\tBooks                                             \tarts                                              \t2.13\t3343.99\t4.22864760515441312\nAAAAAAAAIHNAAAAA\tUnlikely states take later in general extra inf\tBooks                                             \tarts                                              \t0.32\t20046.98\t25.35043883731064290\nAAAAAAAAJLHBAAAA\tInches may lose from a problems. Firm, other corporations shall protect ashamed, important practices. Materials shall not make then by a police. Weeks used\tBooks                                             \tarts                                              \t0.84\t11869.78\t15.00994822673206253\nAAAAAAAAKHJDAAAA\tRelevant lips take so sure, manufacturing \tBooks                                             \tarts                                              \t8.80\t5995.28\t7.58134037907713537\nAAAAAAAAKHLBAAAA\tExtra, primitive weeks look obviou\tBooks                                             \tarts                                              \t1.18\t425.89\t0.53855984275049058\nAAAAAAAALCFBAAAA\tMore than key reasons should remain. Words used to offer slowly british\tBooks                                             \tarts                                              \t0.28\t7814.52\t9.88186306879843074\nAAAAAAAALGEEAAAA\tChildren may turn also above, historical aspects. Surveys migh\tBooks                                             \tarts                                              \t7.22\t544.72\t0.68882649872748182\nAAAAAAAALOKCAAAA\tTrustees know operations. Now past issues cut today german governments. British lines go critical, individual structures. Tonight adequate problems should no\tBooks                                             \tarts                                              \t4.05\t152.67\t0.19305907908783347\nAAAAAAAANMECAAAA\tFloors could not go only for a years. Special reasons shape consequently black, concerned instances. Mutual depths encourage both simple teachers. Cards favour massive \tBooks                                             \tarts                                              \t1.83\t503.10\t0.63619586486597904\nAAAAAAAAODBEAAAA\tCertain customers think exactly already necessary factories. Awkward doubts shall not forget fine\tBooks                                             \tarts                                              \t0.30\t922.40\t1.16642231316314662\nAAAAAAAAOODBAAAA\tDeep, big areas take for a facilities. Words could replace certainly cases; lights test. Nevertheless practical arts cross. Fa\tBooks                                             \tarts                                              \t7.37\t230.48\t0.29145383210954253\nAAAAAAAAAJJBAAAA\tNew, reluctant associations see more different, physical symptoms; useful pounds ought to give. 
Subjects \tBooks                                             \tbusiness                                          \t9.02\t306.85\t0.37352072221391094\nAAAAAAAABBLDAAAA\tNatural plans might not like n\tBooks                                             \tbusiness                                          \t4.29\t2484.54\t3.02436752540117416\nAAAAAAAABINDAAAA\tYears shall want free objects. Old residents use absolutely so residential steps. Letters will share variables. Sure fres\tBooks                                             \tbusiness                                          \t40.76\t90.28\t0.10989555418436330\nAAAAAAAACDDCAAAA\tSimple, great shops glance from a years. Lessons deepen here previous clients. Increased, silent flights open more great rocks. Brill\tBooks                                             \tbusiness                                          \t8.92\t393.75\t0.47930188812686144\nAAAAAAAACGIDAAAA\tGroups must not put new, civil moves. Correct men laugh slightly total novels. Relatively public girls set even scott\tBooks                                             \tbusiness                                          \t3.36\t344.10\t0.41886420242400767\nAAAAAAAACNEDAAAA\tJust young degrees stop posts. More than tight artists buy to a arts. European, essential techniques ought to sell to a offences. Sentences be\tBooks                                             \tbusiness                                          \t2.58\t184.08\t0.22407591508925118\nAAAAAAAADEDAAAAA\tJunior, severe restrictions ought to want principles. Sure,\tBooks                                             \tbusiness                                          \t9.77\t1549.80\t1.88653223166732663\nAAAAAAAAEEFDAAAA\tRemaining subjects handle even only certain ladies; eagerly literary days could not provide. Very different articles cut then. Boys see out of a houses. Governme\tBooks                                             \tbusiness                                          \t9.03\t6463.45\t7.86779374936777799\nAAAAAAAAEGKCAAAA\tRussian windows should see in a weapons. New, considerable branches walk. English regions apply neither alone police; very new years w\tBooks                                             \tbusiness                                          \t2.79\t1635.60\t1.99097439548011320\nAAAAAAAAEKDAAAAA\tLong groups used to create more tiny feet; tools used to dare still\tBooks                                             \tbusiness                                          \t57.04\t10558.62\t12.85274032257534413\nAAAAAAAAEPLBAAAA\tDrugs must compensate dark, modest houses. Small pubs claim naturally accessible relationships. Distinguished\tBooks                                             \tbusiness                                          \t1.66\t31.78\t0.03868498794837246\nAAAAAAAAFCGDAAAA\tSmall, capable centres\tBooks                                             \tbusiness                                          \t2.98\t3219.72\t3.91928349267255446\nAAAAAAAAFDLAAAAA\tPopular, different parameters might take open, used modules. Prisoners use pretty alternative lovers. Annual, professional others spend once true men. Other, small subsidies seem politically\tBooks                                             \tbusiness                                          \t7.25\t3862.88\t4.70218584789203943\nAAAAAAAAFEGEAAAA\tSupreme, free uses handle even in the customers. Other minutes might not make of course social neighbours. 
So environmental rights come other, able sales\tBooks                                             \tbusiness                                          \t8.08\t10904.74\t13.27406341976510738\nAAAAAAAAFHFCAAAA\tSound, original activities consider quite to a attitudes. In order weak improvements marry available, hard studie\tBooks                                             \tbusiness                                          \t71.27\t385.84\t0.46967324575204627\nAAAAAAAAHDLBAAAA\tClassic issues will draw as european, engl\tBooks                                             \tbusiness                                          \t75.64\t92.64\t0.11276832232653319\nAAAAAAAAHJAAAAAA\tAgain british shareholders see shares. American lives ought to establish horses. Then ideal conservatives might charge even nec\tBooks                                             \tbusiness                                          \t2.44\t5353.50\t6.51667976657054660\nAAAAAAAAIMJAAAAA\tDepartments could seek now for a commu\tBooks                                             \tbusiness                                          \t5.93\t6535.44\t7.95542535045032467\nAAAAAAAAJFBEAAAA\tPaintings must not know primary, royal stands; similar, available others ough\tBooks                                             \tbusiness                                          \t0.39\t303.68\t0.36966196161616580\nAAAAAAAAJJGBAAAA\tMost present eyes restore fat, central relationships; again considerable habits must face in a discussions. Engineers help at all direct occasions. Curiously del\tBooks                                             \tbusiness                                          \t80.10\t2096.55\t2.55207713918062566\nAAAAAAAAKKDAAAAA\tChildren would not mean in favour of a parts. Heavy, whole others shall mean on\tBooks                                             \tbusiness                                          \t3.13\t6646.96\t8.09117581791421695\nAAAAAAAAKNJCAAAA\tWhite fees might combine reports. Tr\tBooks                                             \tbusiness                                          \t2.09\t500.56\t0.60931899205277908\nAAAAAAAALDHBAAAA\tMost new weeks go yet members. Also encouraging delegates make publications. Different competitors run resources; somehow common views m\tBooks                                             \tbusiness                                          \t1.07\t974.26\t1.18594198736882801\nAAAAAAAALOMDAAAA\tOnly new systems might join late speeches. Materials could stay on a benefits. Corporate regulations must crawl definitely practical deaths. Windows might soothe despite a organisations. Old\tBooks                                             \tbusiness                                          \t0.67\t9075.35\t11.04719337247520503\nAAAAAAAAMBECAAAA\tProfessional managers form later initial grounds. Conscious, big risks restore. American, full rises say for a systems. 
Already\tBooks                                             \tbusiness                                          \t5.27\t890.13\t1.08353267219901759\nAAAAAAAAMKGDAAAA\tMemories can earn particularly over quick contexts; alone differences make separate years; irish men mea\tBooks                                             \tbusiness                                          \t4.23\t2059.92\t2.50748836924516678\nAAAAAAAANJLBAAAA\tOnly, gothic\tBooks                                             \tbusiness                                          \t1.68\t4777.17\t5.81512787530920297\nAAAAAAAAOAPAAAAA\tSilver, critical operations could help howev\tBooks                                             \tbusiness                                          \t5.56\t428.54\t0.52165087273113702\nAAAAAAAAALNBAAAA\tElse substantial problems slip months. Just unique corporations put vast areas. Supporters like far perfect chapters. Now young reports become wrong trials. Available ears shall\tBooks                                             \tcomputers                                         \t51.46\t2456.28\t1.26602601850774711\nAAAAAAAACFMBAAAA\tAt least remaining results shall keep cuts. Clients should meet policies. Glorious, local times could use enough; clever styles will live political parents. Single, gradual contracts will describe ho\tBooks                                             \tcomputers                                         \t9.51\t10252.90\t5.28459221471415324\nAAAAAAAACOFCAAAA\tYears learn here. Days make too. Only moving systems avoid old groups; short movements cannot see respectiv\tBooks                                             \tcomputers                                         \t0.60\t1761.68\t0.90801240749618444\nAAAAAAAADAHAAAAA\tGa\tBooks                                             \tcomputers                                         \t5.53\t7541.48\t3.88706087988983530\nAAAAAAAADDBAAAAA\tS\tBooks                                             \tcomputers                                         \t65.78\t4566.02\t2.35343695385979752\nAAAAAAAAECHAAAAA\tBoxes batt\tBooks                                             \tcomputers                                         \t0.83\t7844.04\t4.04300760915510798\nAAAAAAAAEJECAAAA\tArtists make times. Rather ready functions must pre\tBooks                                             \tcomputers                                         \t5.71\t3694.01\t1.90398194531071494\nAAAAAAAAFDPCAAAA\tLimited, capable cities shall try during the bodies. Specially economic services ought to prevent old area\tBooks                                             \tcomputers                                         \t2.93\t96.18\t0.04957349425150028\nAAAAAAAAFHNAAAAA\tLegs throw then. Old-fashioned develo\tBooks                                             \tcomputers                                         \t2.66\t718.55\t0.37035801928067716\nAAAAAAAAFOCEAAAA\tImportant, educational variables used to appear months. A\tBooks                                             \tcomputers                                         \t2.47\t9922.02\t5.11404867366677942\nAAAAAAAAGHEAAAAA\tMen should not turn shadows. Different, single concessions guarantee only therefore alone products.\tBooks                                             \tcomputers                                         \t8.38\t4194.24\t2.16181256528813215\nAAAAAAAAGIFEAAAA\tEducational, white teachers should not fix. Considerable, other services might not cover today on a forms. 
Successful genes fall otherwise so\tBooks                                             \tcomputers                                         \t1.65\t14569.68\t7.50956485471198434\nAAAAAAAAHGCEAAAA\tPresent \tBooks                                             \tcomputers                                         \t2.84\t12393.53\t6.38792460190056468\nAAAAAAAAHHFDAAAA\tMultiple, dark feet mean more complex girls; schools may not answer frequently blue assets. Spiritual, dry patients may reply personnel\tBooks                                             \tcomputers                                         \t2.04\t371.40\t0.19142852739662305\nAAAAAAAAIBDEAAAA\tPrivate teachers ap\tBooks                                             \tcomputers                                         \t5.27\t4911.39\t2.53144899076602182\nAAAAAAAAIDCDAAAA\tDaily numbers sense interesting players. General advantages would speak here. Shelves shall know with the reductions. Again wrong mothers provide ways; as hot pr\tBooks                                             \tcomputers                                         \t7.56\t689.26\t0.35526124607807325\nAAAAAAAAIECAAAAA\tInc, corporate ships slow evident degrees. Chosen, acute prices throw always. Budgets spend points. Commonly large events may mean. Bottles c\tBooks                                             \tcomputers                                         \t68.38\t4.17\t0.00214931868401701\nAAAAAAAAIOKCAAAA\tHowever old hours ma\tBooks                                             \tcomputers                                         \t8.84\t451.53\t0.23272946412330966\nAAAAAAAAJDOCAAAA\tIndeed other actions should provide after a ideas; exhibitio\tBooks                                             \tcomputers                                         \t6.95\t8062.32\t4.15551439149257400\nAAAAAAAAKDGEAAAA\tPerfect days find at all. Crimes might develop hopes. Much socialist grants drive current, useful walls. Emissions open naturally. Combinations shall not know. Tragic things shall not receive just\tBooks                                             \tcomputers                                         \t6.71\t526.49\t0.27136565802113105\nAAAAAAAAKOKBAAAA\tHuman windows take right, variable steps. Years should buy often. Indeed thin figures may beat even up to a cars. Details may tell enough. Impossible, sufficient differences ought to return \tBooks                                             \tcomputers                                         \t4.47\t1542.60\t0.79509328584283986\nAAAAAAAAKPNDAAAA\tLeft diff\tBooks                                             \tcomputers                                         \t0.74\t5248.81\t2.70536340572070289\nAAAAAAAAMDKBAAAA\tFriendly, hot computers tax elsewhere units. New, real officials should l\tBooks                                             \tcomputers                                         \t3.19\t12378.72\t6.38029117031536278\nAAAAAAAAMENDAAAA\tKinds relieve really major practices. Then capable reserves could not approve foundations. Pos\tBooks                                             \tcomputers                                         \t7.23\t1786.41\t0.92075884659828053\nAAAAAAAAMJJCAAAA\tVisible, average words shall not want quite only public boundaries. Cars must telephone proposals. German things ask abilities. 
Windows cut again favorite offi\tBooks                                             \tcomputers                                         \t6.75\t25255.89\t13.01749550563031296\nAAAAAAAANANCAAAA\tOnly increased errors must submit as rich, main \tBooks                                             \tcomputers                                         \t6.94\t2429.79\t1.25237243291071818\nAAAAAAAANFHDAAAA\tMeals ought to test. Round days might need most urban years. Political, english pages must see on a eyes. Only subsequent women may come better methods; difficult, social childr\tBooks                                             \tcomputers                                         \t7.23\t6457.72\t3.32846480866914548\nAAAAAAAANHFDAAAA\tSystems cannot see fairly practitioners. Little ca\tBooks                                             \tcomputers                                         \t1.73\t6197.59\t3.19438752586978211\nAAAAAAAANKLDAAAA\tPast beautiful others might not like more than legislative, small products. Close, wh\tBooks                                             \tcomputers                                         \t3.02\t10232.30\t5.27397447733028024\nAAAAAAAAOGDDAAAA\tMain problems wait properly. Everyday, foreign offenders can worry activities. Social, important shoes will afford okay physical parts. Very\tBooks                                             \tcomputers                                         \t1.40\t2034.30\t1.04852733786470188\nAAAAAAAAOGMDAAAA\tSchools offer quickly others. Further main buildings satisfy sadly great, productive figures. Years contribute acti\tBooks                                             \tcomputers                                         \t4.11\t3975.90\t2.04927485750197523\nAAAAAAAAOMDAAAAA\tTiny, rare leaders mention old, precious areas; students will improve much multiple stars. Even confident solutions will include clearly single women. Please little rights will not mention harder com\tBooks                                             \tcomputers                                         \t1.45\t11902.91\t6.13504720795513872\nAAAAAAAAONDCAAAA\tGuidelines should investigate so. Usual personnel look now old, modern aspects. Discussions could appear once br\tBooks                                             \tcomputers                                         \t2.44\t640.06\t0.32990237815154161\nAAAAAAAAONHDAAAA\tFlat pleasant groups would go private, redundant eyes. Main devic\tBooks                                             \tcomputers                                         \t2.83\t8359.41\t4.30864175068552700\nAAAAAAAAOPNBAAAA\tFine users go for a networks. Sympathetic, effective industries could not alter particularly other concepts; wooden women used to feel however \tBooks                                             \tcomputers                                         \t5.30\t247.79\t0.12771694885193653\nAAAAAAAAPAKAAAAA\tReal, domestic facilities turn often guilty symptoms. Winds get naturally intense islands. Products shall not travel a little clear shares; improved children may not apply wrong c\tBooks                                             \tcomputers                                         \t5.28\t297.60\t0.15339022550682558\nAAAAAAAAABIBAAAA\tThen irish champions must look now forward good women. Future, big models sign. 
Then different o\tBooks                                             \tcooking                                           \t85.81\t6496.48\t10.66582432143788856\nAAAAAAAAAGHBAAAA\tValuable studies should persist so concerned parties. Always polite songs include then from the holes. There conventional areas might not explain theore\tBooks                                             \tcooking                                           \t1.58\t2088.03\t3.42809662430915734\nAAAAAAAAAIJCAAAA\tMeanings occur in a things. Also essential features may not satisfy by the potatoes; happy words change childre\tBooks                                             \tcooking                                           \t3.46\t1496.40\t2.45676728237440221\nAAAAAAAAAJDBAAAA\tThen dominant goods should combine probably american items. Important artists guess only sill\tBooks                                             \tcooking                                           \t6.67\t5638.06\t9.25648312220250073\nAAAAAAAADDNAAAAA\tIndividual, remarkable services take by the interest\tBooks                                             \tcooking                                           \t6.05\t0.00\t0.00000000000000000\nAAAAAAAAEGFEAAAA\tUltimately senior elections marry at l\tBooks                                             \tcooking                                           \t5.06\t2060.48\t3.38286544372280691\nAAAAAAAAEINDAAAA\tHence young effects shall not solve however months. In order small activities must not return almost national foods. International decades take contributions. Sessions must see \tBooks                                             \tcooking                                           \t1.43\t242.28\t0.39777170353760369\nAAAAAAAAENDCAAAA\tPoints trace so simple eyes. Short advisers shall not say limitations. Keys stretch in full now blue wings. Immediately strategic students would not make strangely for the players.\tBooks                                             \tcooking                                           \t1.69\t12518.00\t20.55186637313737424\nAAAAAAAAFDIAAAAA\tGreat pp. will not r\tBooks                                             \tcooking                                           \t1.91\t7268.22\t11.93285558480304571\nAAAAAAAAGFGCAAAA\tProducts may not resist further specif\tBooks                                             \tcooking                                           \t5.37\t2.72\t0.00446565557876128\nAAAAAAAAICAAAAAA\tSomet\tBooks                                             \tcooking                                           \t7.34\t580.58\t0.95318761614603744\nAAAAAAAAIFHDAAAA\tGenetic properties might describe therefore leaves; right other organisers must not talk even lives; methods carry thus wrong minutes. Proud worke\tBooks                                             \tcooking                                           \t1.08\t1445.15\t2.37262579398781566\nAAAAAAAAIHHDAAAA\tUrgent agencies mean over as a plants; then\tBooks                                             \tcooking                                           \t6.47\t1312.79\t2.15531911295662354\nAAAAAAAAILAEAAAA\tAs light hundreds must establish on a services. Sometimes special results \tBooks                                             \tcooking                                           \t44.82\t3513.30\t5.76808372972867366\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q14.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,i_brand_id:int,i_class_id:int,i_category_id:int,sales:decimal(28,2),number_sales:bigint,channel:string,i_brand_id:int,i_class_id:int,i_category_id:int,sales:decimal(28,2),number_sales:bigint>\n-- !query output\nstore\t1001001\t1\t1\t1217789.46\t331\tstore\t1001001\t1\t1\t1234065.50\t342\nstore\t1001002\t1\t1\t553751.85\t151\tstore\t1001002\t1\t1\t559183.12\t158\nstore\t1002001\t2\t1\t1249466.39\t353\tstore\t1002001\t2\t1\t1576146.08\t368\nstore\t1002002\t2\t1\t547670.08\t152\tstore\t1002002\t2\t1\t491380.52\t130\nstore\t1003001\t3\t1\t1167282.59\t309\tstore\t1003001\t3\t1\t1045547.02\t287\nstore\t1003002\t3\t1\t789483.23\t193\tstore\t1003002\t3\t1\t601149.89\t165\nstore\t1004001\t4\t1\t1522903.35\t398\tstore\t1004001\t4\t1\t1062756.03\t278\nstore\t1004002\t4\t1\t541183.58\t151\tstore\t1004002\t4\t1\t607217.67\t151\nstore\t2001001\t1\t2\t1145363.89\t364\tstore\t2001001\t1\t2\t1303027.45\t345\nstore\t2001002\t1\t2\t627833.76\t169\tstore\t2001002\t1\t2\t536535.24\t161\nstore\t2002001\t2\t2\t1440545.64\t383\tstore\t2002001\t2\t2\t1329963.82\t368\nstore\t2002002\t2\t2\t747435.75\t207\tstore\t2002002\t2\t2\t816248.81\t208\nstore\t2003001\t3\t2\t1388229.40\t395\tstore\t2003001\t3\t2\t1469176.64\t414\nstore\t2003002\t3\t2\t716344.82\t190\tstore\t2003002\t3\t2\t729626.91\t181\nstore\t2004001\t4\t2\t1613653.10\t440\tstore\t2004001\t4\t2\t1488785.35\t406\nstore\t2004002\t4\t2\t657357.03\t182\tstore\t2004002\t4\t2\t593298.47\t162\nstore\t3001001\t1\t3\t1282986.35\t374\tstore\t3001001\t1\t3\t1519875.76\t390\nstore\t3001002\t1\t3\t673633.15\t169\tstore\t3001002\t1\t3\t656845.65\t177\nstore\t3002001\t2\t3\t1333021.08\t362\tstore\t3002001\t2\t3\t1241035.46\t351\nstore\t3002002\t2\t3\t748022.41\t210\tstore\t3002002\t2\t3\t699267.97\t190\nstore\t3003001\t3\t3\t1134792.91\t326\tstore\t3003001\t3\t3\t1020180.77\t305\nstore\t3003002\t3\t3\t802127.48\t197\tstore\t3003002\t3\t3\t703596.57\t195\nstore\t3004001\t4\t3\t1385106.21\t399\tstore\t3004001\t4\t3\t1338923.03\t363\nstore\t3004002\t4\t3\t732944.48\t203\tstore\t3004002\t4\t3\t722668.95\t194\nstore\t4001001\t1\t4\t1157990.42\t340\tstore\t4001001\t1\t4\t1349185.21\t367\nstore\t4001002\t1\t4\t621674.47\t152\tstore\t4001002\t1\t4\t549243.01\t154\nstore\t4002001\t2\t4\t1568059.38\t402\tstore\t4002001\t2\t4\t1452439.59\t374\nstore\t4002002\t2\t4\t694485.13\t177\tstore\t4002002\t2\t4\t802206.88\t214\nstore\t4003001\t3\t4\t1593249.87\t451\tstore\t4003001\t3\t4\t1456053.57\t403\nstore\t4003002\t3\t4\t662121.36\t163\tstore\t4003002\t3\t4\t717341.48\t189\nstore\t4004001\t4\t4\t1248825.41\t348\tstore\t4004001\t4\t4\t1389020.06\t347\nstore\t4004002\t4\t4\t718297.35\t201\tstore\t4004002\t4\t4\t708937.79\t199\nstore\t5001001\t1\t5\t1273579.39\t349\tstore\t5001001\t1\t5\t1253638.00\t334\nstore\t5001002\t1\t5\t825892.06\t217\tstore\t5001002\t1\t5\t702168.77\t206\nstore\t5002001\t2\t5\t1486176.43\t365\tstore\t5002001\t2\t5\t1243594.81\t313\nstore\t5002002\t2\t5\t633123.05\t172\tstore\t5002002\t2\t5\t765400.35\t202\nstore\t5003001\t3\t5\t1477157.87\t390\tstore\t5003001\t3\t5\t1314942.13\t338\nstore\t5003002\t3\t5\t697211.48\t193\tstore\t5003002\t3\t5\t611056.63\t180\nstore\t5004001\t4\t5\t1287387.74\t353\tstore\t5004001\t4\t5\t1459943.10\t376\nstore\t5004002\t4\t5\t587935.16\t160\tstore\t5004002\t4\t5\t554584.12\t154\nstore\t6001001\t1\t6\t83017.27\t25\tstore\t6001001\t1\t6\t53839.44\t19\nstore\t6001002\t1\t6\t38797.04\t9\tstore\t6001002\t1\t6\t40326.43\t9\nstore\t60010
03\t1\t6\t50457.02\t12\tstore\t6001003\t1\t6\t53585.59\t10\nstore\t6001004\t1\t6\t85147.15\t18\tstore\t6001004\t1\t6\t88513.17\t27\nstore\t6001005\t1\t6\t174523.58\t45\tstore\t6001005\t1\t6\t96635.15\t31\nstore\t6001006\t1\t6\t61231.20\t15\tstore\t6001006\t1\t6\t52077.07\t12\nstore\t6001007\t1\t6\t69263.80\t19\tstore\t6001007\t1\t6\t51940.50\t18\nstore\t6001008\t1\t6\t7774.49\t5\tstore\t6001008\t1\t6\t34176.00\t11\nstore\t6002001\t2\t6\t137288.11\t36\tstore\t6002001\t2\t6\t158530.44\t40\nstore\t6002002\t2\t6\t32548.73\t11\tstore\t6002002\t2\t6\t87976.59\t21\nstore\t6002003\t2\t6\t73606.51\t23\tstore\t6002003\t2\t6\t75815.97\t30\nstore\t6002004\t2\t6\t53750.06\t10\tstore\t6002004\t2\t6\t47235.26\t13\nstore\t6002005\t2\t6\t102178.51\t28\tstore\t6002005\t2\t6\t65676.11\t26\nstore\t6002006\t2\t6\t54942.99\t11\tstore\t6002006\t2\t6\t19627.43\t5\nstore\t6002007\t2\t6\t90084.17\t30\tstore\t6002007\t2\t6\t92767.58\t27\nstore\t6002008\t2\t6\t39639.50\t9\tstore\t6002008\t2\t6\t22406.00\t6\nstore\t6003001\t3\t6\t51483.85\t13\tstore\t6003001\t3\t6\t54481.52\t8\nstore\t6003003\t3\t6\t47337.51\t12\tstore\t6003003\t3\t6\t45051.71\t13\nstore\t6003004\t3\t6\t10107.64\t5\tstore\t6003004\t3\t6\t32499.05\t7\nstore\t6003005\t3\t6\t66634.50\t24\tstore\t6003005\t3\t6\t108128.10\t28\nstore\t6003006\t3\t6\t48367.44\t18\tstore\t6003006\t3\t6\t40436.80\t10\nstore\t6003007\t3\t6\t81724.63\t24\tstore\t6003007\t3\t6\t53676.25\t15\nstore\t6003008\t3\t6\t38023.42\t10\tstore\t6003008\t3\t6\t86371.33\t18\nstore\t6004001\t4\t6\t46759.68\t16\tstore\t6004001\t4\t6\t98037.79\t25\nstore\t6004002\t4\t6\t32304.25\t8\tstore\t6004002\t4\t6\t31274.82\t9\nstore\t6004003\t4\t6\t69089.26\t18\tstore\t6004003\t4\t6\t70840.13\t21\nstore\t6004004\t4\t6\t40891.73\t9\tstore\t6004004\t4\t6\t37496.52\t13\nstore\t6004005\t4\t6\t47341.21\t14\tstore\t6004005\t4\t6\t11576.16\t4\nstore\t6004006\t4\t6\t37823.00\t15\tstore\t6004006\t4\t6\t95562.26\t20\nstore\t6004007\t4\t6\t107423.33\t25\tstore\t6004007\t4\t6\t49625.35\t14\nstore\t6004008\t4\t6\t29725.16\t11\tstore\t6004008\t4\t6\t80015.81\t19\nstore\t6005001\t5\t6\t57198.07\t20\tstore\t6005001\t5\t6\t61313.19\t22\nstore\t6005002\t5\t6\t26742.25\t7\tstore\t6005002\t5\t6\t8143.14\t3\nstore\t6005003\t5\t6\t220162.11\t49\tstore\t6005003\t5\t6\t156718.19\t37\nstore\t6005004\t5\t6\t11571.97\t5\tstore\t6005004\t5\t6\t47370.39\t12\nstore\t6005005\t5\t6\t98809.34\t31\tstore\t6005005\t5\t6\t139898.64\t34\nstore\t6005006\t5\t6\t18108.22\t5\tstore\t6005006\t5\t6\t23139.57\t5\nstore\t6005007\t5\t6\t74967.30\t18\tstore\t6005007\t5\t6\t49412.83\t17\nstore\t6005008\t5\t6\t71048.66\t14\tstore\t6005008\t5\t6\t35086.91\t13\nstore\t6006001\t6\t6\t33781.32\t9\tstore\t6006001\t6\t6\t33793.98\t13\nstore\t6006003\t6\t6\t97264.81\t19\tstore\t6006003\t6\t6\t68109.24\t28\nstore\t6006004\t6\t6\t68406.20\t16\tstore\t6006004\t6\t6\t45670.46\t17\nstore\t6006005\t6\t6\t56934.37\t15\tstore\t6006005\t6\t6\t80912.54\t19\nstore\t6006006\t6\t6\t9360.35\t4\tstore\t6006006\t6\t6\t63735.59\t18\nstore\t6006007\t6\t6\t15367.23\t7\tstore\t6006007\t6\t6\t35007.03\t7\nstore\t6006008\t6\t6\t32684.13\t6\tstore\t6006008\t6\t6\t17541.68\t5\nstore\t6007001\t7\t6\t20878.30\t5\tstore\t6007001\t7\t6\t31745.91\t6\nstore\t6007002\t7\t6\t22897.56\t7\tstore\t6007002\t7\t6\t49259.06\t10\nstore\t6007003\t7\t6\t23239.07\t10\tstore\t6007003\t7\t6\t58808.25\t19\nstore\t6007004\t7\t6\t47157.82\t10\tstore\t6007004\t7\t6\t35948.03\t11\nstore\t6007005\t7\t6\t108100.32\t27\tstore\t6007005\t7\t6\t61513.57\t21\nstore\t6007006\t7\t6\t49256.53\t10\tstore\t6007006\t7\t6\
t16532.02\t5\nstore\t6007007\t7\t6\t65032.83\t21\tstore\t6007007\t7\t6\t110874.54\t26\nstore\t6007008\t7\t6\t56626.18\t12\tstore\t6007008\t7\t6\t88630.08\t17\nstore\t6008001\t8\t6\t138363.66\t40\tstore\t6008001\t8\t6\t118280.46\t34\nstore\t6008002\t8\t6\t59363.03\t20\tstore\t6008002\t8\t6\t35498.23\t9\nstore\t6008004\t8\t6\t48538.79\t12\tstore\t6008004\t8\t6\t15279.50\t6\nstore\t6008005\t8\t6\t107128.07\t41\tstore\t6008005\t8\t6\t174087.69\t43\nstore\t6008006\t8\t6\t18420.16\t7\tstore\t6008006\t8\t6\t19669.85\t8\nstore\t6008007\t8\t6\t33281.27\t10\tstore\t6008007\t8\t6\t50246.87\t12\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q14a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,i_brand_id:int,i_class_id:int,i_category_id:int,sum_sales:decimal(38,2),number_sales:bigint>\n-- !query output\nNULL\tNULL\tNULL\tNULL\t677178449.86\t157050\ncatalog\tNULL\tNULL\tNULL\t235662013.22\t46193\ncatalog\t1001001\tNULL\tNULL\t2553656.44\t520\ncatalog\t1001001\t1\tNULL\t2222983.82\t453\ncatalog\t1001001\t1\t1\t1860468.47\t369\ncatalog\t1001001\t1\t2\t33739.75\t3\ncatalog\t1001001\t1\t3\t49979.66\t13\ncatalog\t1001001\t1\t4\t49610.76\t10\ncatalog\t1001001\t1\t5\t61333.54\t13\ncatalog\t1001001\t1\t6\t31767.56\t6\ncatalog\t1001001\t1\t7\t19752.34\t6\ncatalog\t1001001\t1\t8\t43708.18\t12\ncatalog\t1001001\t1\t9\t36388.85\t11\ncatalog\t1001001\t1\t10\t36234.71\t10\ncatalog\t1001001\t2\tNULL\t41208.06\t6\ncatalog\t1001001\t2\t3\t41208.06\t6\ncatalog\t1001001\t3\tNULL\t85518.68\t19\ncatalog\t1001001\t3\t1\t11946.51\t5\ncatalog\t1001001\t3\t4\t32093.25\t7\ncatalog\t1001001\t3\t6\t6905.04\t2\ncatalog\t1001001\t3\t7\t34573.88\t5\ncatalog\t1001001\t4\tNULL\t22702.37\t6\ncatalog\t1001001\t4\t7\t22702.37\t6\ncatalog\t1001001\t5\tNULL\t15433.39\t3\ncatalog\t1001001\t5\t9\t15433.39\t3\ncatalog\t1001001\t8\tNULL\t22550.38\t4\ncatalog\t1001001\t8\t9\t22550.38\t4\ncatalog\t1001001\t10\tNULL\t29206.10\t5\ncatalog\t1001001\t10\t7\t29206.10\t5\ncatalog\t1001001\t13\tNULL\t9727.21\t3\ncatalog\t1001001\t13\t9\t9727.21\t3\ncatalog\t1001001\t14\tNULL\t80584.37\t15\ncatalog\t1001001\t14\t9\t17298.44\t3\ncatalog\t1001001\t14\t10\t63285.93\t12\ncatalog\t1001001\t16\tNULL\t23742.06\t6\ncatalog\t1001001\t16\t9\t23742.06\t6\ncatalog\t1001002\tNULL\tNULL\t2674718.24\t515\ncatalog\t1001002\t1\tNULL\t2413496.48\t456\ncatalog\t1001002\t1\t1\t2413496.48\t456\ncatalog\t1001002\t2\tNULL\t40653.02\t10\ncatalog\t1001002\t2\t1\t40653.02\t10\ncatalog\t1001002\t3\tNULL\t55083.13\t11\ncatalog\t1001002\t3\t1\t55083.13\t11\ncatalog\t1001002\t4\tNULL\t59222.69\t15\ncatalog\t1001002\t4\t1\t59222.69\t15\ncatalog\t1001002\t7\tNULL\t23174.76\t3\ncatalog\t1001002\t7\t1\t23174.76\t3\ncatalog\t1001002\t10\tNULL\t40160.58\t11\ncatalog\t1001002\t10\t1\t40160.58\t11\ncatalog\t1001002\t12\tNULL\t25963.72\t6\ncatalog\t1001002\t12\t1\t25963.72\t6\ncatalog\t1001002\t13\tNULL\t11354.62\t2\ncatalog\t1001002\t13\t1\t11354.62\t2\ncatalog\t1001002\t14\tNULL\t5609.24\t1\ncatalog\t1001002\t14\t1\t5609.24\t1\ncatalog\t1002001\tNULL\tNULL\t2907063.96\t511\ncatalog\t1002001\t1\tNULL\t88041.35\t10\ncatalog\t1002001\t1\t1\t38209.40\t6\ncatalog\t1002001\t1\t5\t49831.95\t4\ncatalog\t1002001\t2\tNULL\t2574507.86\t463\ncatalog\t1002001\t2\t1\t2132551.78\t377\ncatalog\t1002001\t2\t3\t34961.36\t11\ncatalog\t1002001\t2\t4\t50761.02\t9\ncatalog\t1002001\t2\t6\t99382.39\t16\ncatalog\t1002001\t2\t7\t43547.86\t4\ncatalog\t1002001\t2\t8\t104272.59\t23\ncatalog\t1002001\t2\t9\t62741.20\t11\ncatalog\t1002001\t2\t10\t46289.66\t12\ncatalog\t1002001\t3\tNULL\t44220.16\t5\ncatalog\t1002001\t3\t1\t44220.16\t5\ncatalog\t1002001\t4\tNULL\t86121.32\t15\ncatalog\t1002001\t4\t1\t28141.23\t5\ncatalog\t1002001\t4\t3\t29119.15\t5\ncatalog\t1002001\t4\t5\t28860.94\t5\ncatalog\t1002001\t5\tNULL\t26371.70\t2\ncatalog\t1002001\t5\t9\t26371.70\t2\ncatalog\t1002001\t6\tNULL\t32428.43\t4\ncatalog\t1002001\t6\t10\t32428.43\t4\ncatalog\t1002001\t8\tNULL\t19630.88\t3\ncatalog\t1002001\t8\t8\t19630.88\t3\ncatalog\t1002001\t10\tNULL\t20942.49\t5\ncatalog\t1002001\t10\t8\t20942.49\t5\ncatalog\t1002001\t11\tNULL\t6069.54\t2\ncatalog\t1002001\t11\t8\t6069.54\t2\ncatalog\t1002001\t12\tNU
LL\t8730.23\t2\ncatalog\t1002001\t12\t7\t8730.23\t2\ncatalog\t1002002\tNULL\tNULL\t2695862.91\t567\ncatalog\t1002002\t1\tNULL\t79482.49\t21\ncatalog\t1002002\t1\t1\t79482.49\t21\ncatalog\t1002002\t2\tNULL\t2153480.10\t444\ncatalog\t1002002\t2\t1\t2153480.10\t444\ncatalog\t1002002\t3\tNULL\t64329.01\t20\ncatalog\t1002002\t3\t1\t64329.01\t20\ncatalog\t1002002\t4\tNULL\t98180.19\t26\ncatalog\t1002002\t4\t1\t98180.19\t26\ncatalog\t1002002\t7\tNULL\t45311.61\t6\ncatalog\t1002002\t7\t1\t45311.61\t6\ncatalog\t1002002\t8\tNULL\t16754.12\t4\ncatalog\t1002002\t8\t1\t16754.12\t4\ncatalog\t1002002\t9\tNULL\t30603.32\t7\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q18a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,ca_country:string,ca_state:string,ca_county:string,agg1:decimal(16,6),agg2:decimal(16,6),agg3:decimal(16,6),agg4:decimal(16,6),agg5:decimal(16,6),agg6:decimal(16,6),agg7:decimal(16,6)>\n-- !query output\nNULL\tNULL\tNULL\tNULL\t49.988822\t101.021588\t257.990037\t51.117224\t-216.279125\t1957.948730\t3.062849\nAAAAAAAAAAAEAAAA\tNULL\tNULL\tNULL\t83.000000\t14.310000\t0.000000\t7.580000\t-377.650000\t1980.000000\t5.000000\nAAAAAAAAAABEAAAA\tNULL\tNULL\tNULL\t45.000000\t57.980000\t0.000000\t44.640000\t-95.400000\t1951.000000\t1.000000\nAAAAAAAAAADBAAAA\tNULL\tNULL\tNULL\t19.000000\t232.670000\t0.000000\t62.820000\t-679.630000\t1933.000000\t6.000000\nAAAAAAAAAAEDAAAA\tNULL\tNULL\tNULL\t57.000000\t119.920000\t0.000000\t119.920000\t3329.940000\t1946.000000\t6.000000\nAAAAAAAAAAFAAAAA\tNULL\tNULL\tNULL\t83.000000\t141.350000\t0.000000\t12.720000\t-6142.000000\t1991.000000\t4.000000\nAAAAAAAAAAHBAAAA\tNULL\tNULL\tNULL\t97.000000\t83.210000\t0.000000\t71.560000\t431.650000\t1924.000000\t3.000000\nAAAAAAAAAAHDAAAA\tNULL\tNULL\tNULL\t53.500000\t97.390000\t0.000000\t52.640000\t-6.425000\t1974.000000\t2.500000\nAAAAAAAAAAKAAAAA\tNULL\tNULL\tNULL\t69.000000\t127.490000\t0.000000\t101.990000\t2660.640000\t1948.000000\t2.000000\nAAAAAAAAAALAAAAA\tNULL\tNULL\tNULL\t88.000000\t143.850000\t0.000000\t28.770000\t-2337.280000\t1944.000000\t5.000000\nAAAAAAAAAALDAAAA\tNULL\tNULL\tNULL\t19.000000\t116.590000\t0.000000\t5.820000\t-1366.290000\t1949.000000\t2.000000\nAAAAAAAAAAMBAAAA\tNULL\tNULL\tNULL\t1.000000\t37.170000\t0.000000\t30.100000\t13.430000\t1932.000000\t2.000000\nAAAAAAAAAAOAAAAA\tNULL\tNULL\tNULL\t87.000000\t26.050000\t0.000000\t1.820000\t-963.960000\t1985.000000\t4.000000\nAAAAAAAAABAEAAAA\tNULL\tNULL\tNULL\t91.000000\t67.860000\t56.180000\t8.820000\t-4041.070000\t1970.000000\t0.000000\nAAAAAAAAABBDAAAA\tNULL\tNULL\tNULL\t71.000000\t31.880000\t0.000000\t15.300000\t-23.430000\t1979.000000\t6.000000\nAAAAAAAAABCEAAAA\tNULL\tNULL\tNULL\t65.000000\t115.550000\t0.000000\t25.420000\t-1641.900000\t1973.000000\t6.000000\nAAAAAAAAABDAAAAA\tNULL\tNULL\tNULL\t71.000000\t40.670000\t0.000000\t13.820000\t-269.090000\t1968.000000\t0.000000\nAAAAAAAAABFCAAAA\tNULL\tNULL\tNULL\t2.000000\t237.540000\t0.000000\t35.630000\t-101.500000\t1932.000000\t1.000000\nAAAAAAAAABGBAAAA\tNULL\tNULL\tNULL\t25.500000\t149.375000\t0.000000\t87.680000\t-85.130000\t1955.000000\t4.000000\nAAAAAAAAABGEAAAA\tNULL\tNULL\tNULL\t32.500000\t237.875000\t0.000000\t107.305000\t615.725000\t1969.000000\t3.000000\nAAAAAAAAABJDAAAA\tNULL\tNULL\tNULL\t12.000000\t118.140000\t0.000000\t54.340000\t-405.960000\t1971.000000\t4.000000\nAAAAAAAAACAAAAAA\tNULL\tNULL\tNULL\t40.000000\t218.900000\t0.000000\t168.550000\t3400.000000\t1984.000000\t2.000000\nAAAAAAAAACDCAAAA\tNULL\tNULL\tNULL\t9.000000\t27.400000\t0.000000\t18.630000\t-11.070000\t1982.000000\t0.000000\nAAAAAAAAACDDAAAA\tNULL\tNULL\tNULL\t9.000000\t72.980000\t0.000000\t36.490000\t85.140000\t1925.000000\t4.000000\nAAAAAAAAACGDAAAA\tNULL\tNULL\tNULL\t100.000000\t240.000000\t0.000000\t52.800000\t-3171.000000\t1984.000000\t2.000000\nAAAAAAAAACJAAAAA\tNULL\tNULL\tNULL\t89.000000\t141.410000\t0.000000\t91.910000\t3394.460000\t1939.000000\t5.000000\nAAAAAAAAACJCAAAA\tNULL\tNULL\tNULL\t97.000000\t68.650000\t0.000000\t60.410000\t2160.190000\t1979.000000\t5.000000\nAAAAAAAAACMAAAAA\tNULL\tNULL\tNULL\t36.000000\t5.030000\t0.000000\t1.200000\t-23.400000\t1973.000000\t6.000000\nAAAAAAAAACPDAAAA\tNULL\tN
ULL\tNULL\t89.000000\t165.880000\t0.000000\t48.100000\t-1248.670000\t1988.000000\t5.000000\nAAAAAAAAADDEAAAA\tNULL\tNULL\tNULL\t44.000000\t152.720000\t0.000000\t138.970000\t3326.400000\t1930.000000\t6.000000\nAAAAAAAAADFAAAAA\tNULL\tNULL\tNULL\t52.000000\t99.460000\t465.420000\t59.670000\t242.820000\t1945.000000\t2.000000\nAAAAAAAAADGBAAAA\tNULL\tNULL\tNULL\t51.000000\t48.560000\t0.000000\t12.620000\t-1473.390000\t1935.000000\t3.000000\nAAAAAAAAADIAAAAA\tNULL\tNULL\tNULL\t27.000000\t175.020000\t0.000000\t47.250000\t-388.260000\t1924.000000\t3.000000\nAAAAAAAAADMCAAAA\tNULL\tNULL\tNULL\t89.000000\t161.440000\t0.000000\t35.510000\t-4648.470000\t1924.000000\t6.000000\nAAAAAAAAADNAAAAA\tNULL\tNULL\tNULL\t57.000000\t107.880000\t975.770000\t24.810000\t-2503.940000\t1975.000000\t2.000000\nAAAAAAAAADNBAAAA\tNULL\tNULL\tNULL\t32.000000\t52.410000\t0.000000\t33.540000\t274.560000\t1975.000000\t5.000000\nAAAAAAAAADPBAAAA\tNULL\tNULL\tNULL\t72.000000\t65.940000\t3.770000\t1.310000\t-3562.010000\t1932.000000\t5.000000\nAAAAAAAAAEABAAAA\tNULL\tNULL\tNULL\t57.000000\t200.610000\t0.000000\t24.070000\t-4047.570000\t1987.000000\t5.000000\nAAAAAAAAAEADAAAA\tNULL\tNULL\tNULL\t61.000000\t28.010000\t0.000000\t18.480000\t85.400000\t1970.000000\t2.000000\nAAAAAAAAAEAEAAAA\tNULL\tNULL\tNULL\t95.000000\t155.800000\t0.000000\t4.670000\t-5184.150000\t1966.000000\t4.000000\nAAAAAAAAAEBDAAAA\tNULL\tNULL\tNULL\t57.000000\t257.970000\t0.000000\t36.110000\t-2994.780000\t1990.000000\t4.000000\nAAAAAAAAAECCAAAA\tNULL\tNULL\tNULL\t40.000000\t81.220000\t0.000000\t51.160000\t746.800000\t1975.000000\t1.000000\nAAAAAAAAAEDAAAAA\tNULL\tNULL\tNULL\t63.000000\t31.950000\t564.530000\t14.690000\t-809.600000\t1966.000000\t6.000000\nAAAAAAAAAEJAAAAA\tNULL\tNULL\tNULL\t83.000000\t48.220000\t202.630000\t6.260000\t-2670.220000\t1944.000000\t3.000000\nAAAAAAAAAEJDAAAA\tNULL\tNULL\tNULL\t4.000000\t39.860000\t0.000000\t0.790000\t-66.480000\t1958.000000\t3.000000\nAAAAAAAAAELCAAAA\tNULL\tNULL\tNULL\t34.000000\t106.700000\t0.000000\t17.070000\t-1481.040000\t1925.000000\t1.000000\nAAAAAAAAAENAAAAA\tNULL\tNULL\tNULL\t49.000000\t205.770000\t0.000000\t141.980000\t3018.400000\t1926.000000\t2.000000\nAAAAAAAAAENCAAAA\tNULL\tNULL\tNULL\t67.000000\t121.280000\t0.000000\t115.210000\t4025.360000\t1947.000000\t3.000000\nAAAAAAAAAEPAAAAA\tNULL\tNULL\tNULL\t73.000000\t85.250000\t0.000000\t3.410000\t-4651.560000\t1975.000000\t1.000000\nAAAAAAAAAEPDAAAA\tNULL\tNULL\tNULL\t28.000000\t91.090000\t0.000000\t43.720000\t-177.240000\t1987.000000\t5.000000\nAAAAAAAAAFADAAAA\tNULL\tNULL\tNULL\t15.000000\t109.790000\t0.000000\t62.580000\t139.200000\t1963.000000\t4.000000\nAAAAAAAAAFCAAAAA\tNULL\tNULL\tNULL\t23.000000\t8.880000\t0.000000\t1.950000\t-67.390000\t1988.000000\t0.000000\nAAAAAAAAAFCBAAAA\tNULL\tNULL\tNULL\t74.000000\t14.690000\t0.000000\t10.720000\t-69.560000\t1954.000000\t3.000000\nAAAAAAAAAFHCAAAA\tNULL\tNULL\tNULL\t53.000000\t48.210000\t350.520000\t47.240000\t5.640000\t1960.000000\t1.000000\nAAAAAAAAAFMCAAAA\tNULL\tNULL\tNULL\t42.000000\t56.730000\t828.590000\t53.320000\t-380.870000\t1944.000000\t5.000000\nAAAAAAAAAFODAAAA\tNULL\tNULL\tNULL\t15.000000\t23.690000\t0.000000\t17.530000\t139.950000\t1973.000000\t5.000000\nAAAAAAAAAFPAAAAA\tNULL\tNULL\tNULL\t89.000000\t126.010000\t0.000000\t75.600000\t-507.300000\t1938.000000\t6.000000\nAAAAAAAAAGEBAAAA\tNULL\tNULL\tNULL\t55.000000\t78.640000\t0.000000\t7.070000\t-1731.400000\t1957.000000\t0.000000\nAAAAAAAAAGEDAAAA\tNULL\tNULL\tNULL\t19.000000\t205.240000\t0.000000\t22.570000\t-1223.600000\t1935.000000\t0.000
000\nAAAAAAAAAGEEAAAA\tNULL\tNULL\tNULL\t40.000000\t128.550000\t488.400000\t48.840000\t-882.800000\t1934.000000\t6.000000\nAAAAAAAAAGIDAAAA\tNULL\tNULL\tNULL\t11.000000\t125.030000\t0.000000\t13.750000\t-753.610000\t1935.000000\t0.000000\nAAAAAAAAAGJCAAAA\tNULL\tNULL\tNULL\t8.000000\t190.590000\t0.000000\t142.940000\t369.520000\t1941.000000\t4.000000\nAAAAAAAAAGKDAAAA\tNULL\tNULL\tNULL\t80.000000\t257.180000\t0.000000\t100.300000\t953.600000\t1960.000000\t1.000000\nAAAAAAAAAGLAAAAA\tNULL\tNULL\tNULL\t44.000000\t58.460000\t411.400000\t9.350000\t-1811.480000\t1973.000000\t6.000000\nAAAAAAAAAGMBAAAA\tNULL\tNULL\tNULL\t86.000000\t110.960000\t0.000000\t4.430000\t-3732.400000\t1946.000000\t0.000000\nAAAAAAAAAGNBAAAA\tNULL\tNULL\tNULL\t41.000000\t133.900000\t0.000000\t1.330000\t-3118.870000\t1948.000000\t5.000000\nAAAAAAAAAGPBAAAA\tNULL\tNULL\tNULL\t45.000000\t57.940000\t876.960000\t33.600000\t-529.110000\t1965.000000\t6.000000\nAAAAAAAAAHAAAAAA\tNULL\tNULL\tNULL\t20.000000\t51.470000\t0.000000\t26.240000\t-205.400000\t1942.000000\t5.000000\nAAAAAAAAAHBAAAAA\tNULL\tNULL\tNULL\t50.500000\t70.085000\t0.000000\t55.760000\t724.565000\t1957.000000\t2.000000\nAAAAAAAAAHDEAAAA\tNULL\tNULL\tNULL\t79.000000\t21.510000\t0.000000\t7.950000\t-631.210000\t1943.000000\t6.000000\nAAAAAAAAAHEDAAAA\tNULL\tNULL\tNULL\t93.000000\t23.200000\t0.000000\t2.320000\t-1272.240000\t1948.000000\t5.000000\nAAAAAAAAAHFEAAAA\tNULL\tNULL\tNULL\t75.000000\t57.750000\t508.650000\t14.430000\t-870.150000\t1965.000000\t6.000000\nAAAAAAAAAHGDAAAA\tNULL\tNULL\tNULL\t71.000000\t3.020000\t0.000000\t1.540000\t-80.940000\t1984.000000\t2.000000\nAAAAAAAAAHIBAAAA\tNULL\tNULL\tNULL\t30.500000\t59.105000\t0.000000\t41.310000\t452.755000\t1957.500000\t3.000000\nAAAAAAAAAHKAAAAA\tNULL\tNULL\tNULL\t38.000000\t89.720000\t0.000000\t20.630000\t-1257.800000\t1939.000000\t5.000000\nAAAAAAAAAHKCAAAA\tNULL\tNULL\tNULL\t71.000000\t141.010000\t0.000000\t90.240000\t2269.870000\t1924.000000\t0.000000\nAAAAAAAAAHNDAAAA\tNULL\tNULL\tNULL\t51.000000\t63.060000\t0.000000\t5.040000\t-2107.830000\t1975.000000\t1.000000\nAAAAAAAAAIDAAAAA\tNULL\tNULL\tNULL\t77.000000\t36.770000\t0.000000\t26.100000\t-72.380000\t1986.000000\t2.000000\nAAAAAAAAAIEEAAAA\tNULL\tNULL\tNULL\t1.000000\t145.160000\t0.000000\t133.540000\t45.560000\t1958.000000\t3.000000\nAAAAAAAAAIIAAAAA\tNULL\tNULL\tNULL\t53.000000\t232.610000\t0.000000\t0.000000\t-4466.840000\t1971.000000\t4.000000\nAAAAAAAAAIJAAAAA\tNULL\tNULL\tNULL\t91.000000\t62.950000\t0.000000\t26.430000\t-107.380000\t1935.000000\t6.000000\nAAAAAAAAAIKBAAAA\tNULL\tNULL\tNULL\t73.000000\t6.890000\t124.470000\t2.890000\t-280.690000\t1935.000000\t3.000000\nAAAAAAAAAILBAAAA\tNULL\tNULL\tNULL\t7.000000\t15.340000\t0.000000\t1.070000\t-73.290000\t1966.000000\t4.000000\nAAAAAAAAAILDAAAA\tNULL\tNULL\tNULL\t27.500000\t69.010000\t200.210000\t38.180000\t-143.600000\t1974.000000\t2.500000\nAAAAAAAAAIODAAAA\tNULL\tNULL\tNULL\t99.000000\t84.610000\t663.300000\t60.910000\t-931.590000\t1986.000000\t1.000000\nAAAAAAAAAIPCAAAA\tNULL\tNULL\tNULL\t96.000000\t183.920000\t0.000000\t22.070000\t-3887.040000\t1924.000000\t3.000000\nAAAAAAAAAJACAAAA\tNULL\tNULL\tNULL\t62.000000\t128.680000\t0.000000\t104.230000\t2698.860000\t1975.000000\t4.000000\nAAAAAAAAAJAEAAAA\tNULL\tNULL\tNULL\t81.000000\t131.730000\t0.000000\t23.710000\t-4963.680000\t1968.000000\t5.000000\nAAAAAAAAAJBAAAAA\tNULL\tNULL\tNULL\t19.000000\t113.950000\t0.000000\t79.760000\t613.320000\t1930.000000\t2.000000\nAAAAAAAAAJBBAAAA\tNULL\tNULL\tNULL\t60.500000\t140.315000\t1093.800000\t51.610000\t-1
455.575000\t1976.000000\t2.500000\nAAAAAAAAAJBDAAAA\tNULL\tNULL\tNULL\t4.000000\t85.990000\t82.520000\t20.630000\t-252.920000\t1988.000000\t5.000000\nAAAAAAAAAJDEAAAA\tNULL\tNULL\tNULL\t21.000000\t7.750000\t38.220000\t7.280000\t42.000000\t1948.000000\t0.000000\nAAAAAAAAAJEAAAAA\tNULL\tNULL\tNULL\t47.000000\t95.750000\t0.000000\t44.040000\t-1744.170000\t1935.000000\t0.000000\nAAAAAAAAAJEDAAAA\tNULL\tNULL\tNULL\t27.000000\t38.270000\t0.000000\t21.040000\t-300.240000\t1976.000000\t6.000000\nAAAAAAAAAJFDAAAA\tNULL\tNULL\tNULL\t82.000000\t22.310000\t0.000000\t6.690000\t-80.360000\t1973.000000\t5.000000\nAAAAAAAAAJGBAAAA\tNULL\tNULL\tNULL\t64.000000\t155.160000\t0.000000\t34.130000\t-3623.040000\t1975.000000\t5.000000\nAAAAAAAAAJGCAAAA\tNULL\tNULL\tNULL\t8.000000\t2.080000\t0.000000\t1.200000\t0.720000\t1932.000000\t0.000000\nAAAAAAAAAJJCAAAA\tNULL\tNULL\tNULL\t61.000000\t30.960000\t0.000000\t0.610000\t-875.350000\t1932.000000\t2.000000\nAAAAAAAAAJKBAAAA\tNULL\tNULL\tNULL\t13.000000\t30.910000\t0.000000\t14.520000\t43.680000\t1928.000000\t6.000000\nAAAAAAAAAJLCAAAA\tNULL\tNULL\tNULL\t25.000000\t7.330000\t0.000000\t0.580000\t-116.500000\t1970.000000\t0.000000\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q20.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,i_item_desc:string,i_category:string,i_class:string,i_current_price:decimal(7,2),itemrevenue:decimal(17,2),revenueratio:decimal(38,17)>\n-- !query output\nAAAAAAAAMJJBAAAA\tNULL\tBooks                                             \tNULL\tNULL\t9010.49\t72.07596281370536693\nAAAAAAAAMLGDAAAA\tNULL\tBooks                                             \tNULL\t6.35\t1491.96\t11.93436244638591899\nAAAAAAAAAELBAAAA\tPrecisely elderly bodies\tBooks                                             \tarts                                              \t1.40\t4094.31\t1.52075020571659240\nAAAAAAAAAFODAAAA\tClose, precise teeth should go for a qualities. Political groups shall not become just important occasions. Trials mean ne\tBooks                                             \tarts                                              \t2.53\t332.38\t0.12345595555199313\nAAAAAAAAAHMAAAAA\tAbilities could affect cruel parts. Predominantly other events telephone strong signs. Accurate mate\tBooks                                             \tarts                                              \t25.69\t2626.56\t0.97558359291967949\nAAAAAAAAAJOAAAAA\tAverage parents require also times. Children would not describe lightly purposes; large miles love now correct relations. Usual, german goals proceed literary, wooden visitors. Initial councils wil\tBooks                                             \tarts                                              \t1.24\t12327.20\t4.57869383019594946\nAAAAAAAAALNCAAAA\tGreat, contemporary workers would not remove of course cultural values. Then due children might see positive seconds. Significant problems w\tBooks                                             \tarts                                              \t0.55\t343.80\t0.12769768794384511\nAAAAAAAAANKAAAAA\tSmall objects stop etc mediterranean patterns; liberal, free initiatives would not leave less clear british attitudes; good, blue relationships find softly very\tBooks                                             \tarts                                              \t58.41\t886.92\t0.32942883476194038\nAAAAAAAABGDAAAAA\tNewly national rights head curiously all electrical cells. Chinese, long values might not pull bad lines. High fun clothes ough\tBooks                                             \tarts                                              \t3.28\t2219.85\t0.82451923380495801\nAAAAAAAACBBAAAAA\tQuick, easy studies must make always necessary systems. Upper, new persons should buy much physical technologies. English sciences hear solicitors. S\tBooks                                             \tarts                                              \t0.99\t2050.16\t0.76149125047979491\nAAAAAAAACMFDAAAA\tEarly, short v\tBooks                                             \tarts                                              \t75.57\t5429.86\t2.01681375177070042\nAAAAAAAADLLDAAAA\tBlack, following services justify by a investors; dirty, different charts will fly however prizes. Temporary, l\tBooks                                             \tarts                                              \t5.56\t13539.35\t5.02892289488801418\nAAAAAAAAEIPCAAAA\tScientific, difficult polls would not achieve. Countries reach of course. 
Bad, new churches realize most english\tBooks                                             \tarts                                              \t3.98\t143.88\t0.05344137097545211\nAAAAAAAAFCIBAAAA\tUnited, important objectives put similarly large, previous phenomena; old, present days receive. Happy detectives assi\tBooks                                             \tarts                                              \t1.26\t12297.15\t4.56753235398096242\nAAAAAAAAFFIBAAAA\tNaturally new years put serious, negative vehicles. Fin\tBooks                                             \tarts                                              \t3.34\t4587.47\t1.70392470189572752\nAAAAAAAAFJGCAAAA\tAgo correct profits must not handle else. Healthy children may not go only ancient words. Later just characters ought to drink about. British parts must watch soon ago other clients. So vital d\tBooks                                             \tarts                                              \t4.03\t5359.20\t1.99056849688381241\nAAAAAAAAFLNCAAAA\tMuch new waters \tBooks                                             \tarts                                              \t1.85\t6718.63\t2.49550179508480530\nAAAAAAAAGHBAAAAA\tHard different differences would not paint even. Together suitable schemes marry directly only open women. Social ca\tBooks                                             \tarts                                              \t2.65\t3208.60\t1.19177080144450674\nAAAAAAAAGLEBAAAA\tTall, following actions keep widely willing, secondary groups. Heads could afford however; agricultural, square pri\tBooks                                             \tarts                                              \t9.99\t4780.52\t1.77562929368618505\nAAAAAAAAGMFAAAAA\tAnonymous, useful women provoke slightly present persons. Ideas ought to cost almost competent, working parties; aspects provide thr\tBooks                                             \tarts                                              \t6.73\t5622.46\t2.08835119999055082\nAAAAAAAAHHEBAAAA\tPowerful walls will find; there scottish decades must not\tBooks                                             \tarts                                              \t4.16\t7914.41\t2.93965054810833964\nAAAAAAAAHMCEAAAA\tToo executive doors progress mainly seemingly possible parts; hundreds stay virtually simple workers. Sola\tBooks                                             \tarts                                              \t34.32\t3029.48\t1.12524023173973205\nAAAAAAAAIBOCAAAA\tCareful privileges ought to live rather to a boards. 
Possible, broad p\tBooks                                             \tarts                                              \t3.93\t1450.99\t0.53894144336718969\nAAAAAAAAICMBAAAA\tAside legitimate decisions may not stand probably sexual g\tBooks                                             \tarts                                              \t3.88\t9619.83\t3.57309496629679899\nAAAAAAAAIFPBAAAA\tSpecially interesting crews continue current, foreign directions; only social men would not call at least political children; circumstances could not understand now in a assessme\tBooks                                             \tarts                                              \t2.13\t13616.57\t5.05760473160419719\nAAAAAAAAIHNAAAAA\tUnlikely states take later in general extra inf\tBooks                                             \tarts                                              \t0.32\t11879.56\t4.41242683475911751\nAAAAAAAAINFDAAAA\tSometimes careful things state probably so\tBooks                                             \tarts                                              \t5.08\t25457.85\t9.45581321995700176\nAAAAAAAAJGHDAAAA\tCircumstances would not use. Principles seem writers. Times go from a hands. Members find grounds. Central, only teachers pursue properly into a p\tBooks                                             \tarts                                              \t5.95\t2567.54\t0.95366178505916251\nAAAAAAAAJLHBAAAA\tInches may lose from a problems. Firm, other corporations shall protect ashamed, important practices. Materials shall not make then by a police. Weeks used\tBooks                                             \tarts                                              \t0.84\t1811.85\t0.67297572978782944\nAAAAAAAAKFGBAAAA\tSystems cannot await regions. Home appropr\tBooks                                             \tarts                                              \t7.30\t1730.16\t0.64263360027028230\nAAAAAAAAKHLBAAAA\tExtra, primitive weeks look obviou\tBooks                                             \tarts                                              \t1.18\t22.77\t0.00845746467272063\nAAAAAAAALCFBAAAA\tMore than key reasons should remain. Words used to offer slowly british\tBooks                                             \tarts                                              \t0.28\t10311.18\t3.82988320527288194\nAAAAAAAALGEEAAAA\tChildren may turn also above, historical aspects. Surveys migh\tBooks                                             \tarts                                              \t7.22\t11872.32\t4.40973768042312729\nAAAAAAAALOKCAAAA\tTrustees know operations. Now past issues cut today german governments. British lines go critical, individual structures. Tonight adequate problems should no\tBooks                                             \tarts                                              \t4.05\t8348.99\t3.10106666569599586\nAAAAAAAAMACDAAAA\tUseful observers start often white colleagues; simple pro\tBooks                                             \tarts                                              \t3.47\t7565.51\t2.81005856636428042\nAAAAAAAAMNPAAAAA\tMembers should say earnings. Detailed departments would not move just at the hopes. Figures can take. Actually open houses want. Good teachers combine the\tBooks                                             \tarts                                              \t3.09\t4363.97\t1.62091006182752106\nAAAAAAAAMPFCAAAA\tMajor, senior words afford economic libraries; successful seconds need outside. 
Clinical, new ideas put now red c\tBooks                                             \tarts                                              \t5.87\t9661.08\t3.58841646026911898\nAAAAAAAANABCAAAA\tLikely states feel astonishingly working roads. Parents put so somewhere able policies. Others may rely shortly instead interesting bodies; bri\tBooks                                             \tarts                                              \t7.50\t132.66\t0.04927392461498107\nAAAAAAAANMECAAAA\tFloors could not go only for a years. Special reasons shape consequently black, concerned instances. Mutual depths encourage both simple teachers. Cards favour massive \tBooks                                             \tarts                                              \t1.83\t20114.53\t7.47114303396483641\nAAAAAAAAOAHCAAAA\tAccurate years want then other organisations. Simple lines mean as well so red results. Orthodox, central scales will not in\tBooks                                             \tarts                                              \t7.69\t2153.04\t0.79970398502215321\nAAAAAAAAODBEAAAA\tCertain customers think exactly already necessary factories. Awkward doubts shall not forget fine\tBooks                                             \tarts                                              \t0.30\t231.71\t0.08606408165639427\nAAAAAAAAOKEDAAAA\tVisitors could not allow glad wages. Communist, real figures used to apply factors. Aggressive, optimistic days must mean about trees. Detailed courts consider really large pro\tBooks                                             \tarts                                              \t9.08\t24425.09\t9.07221501111207600\nAAAAAAAAOODBAAAA\tDeep, big areas take for a facilities. Words could replace certainly cases; lights test. Nevertheless practical arts cross. Fa\tBooks                                             \tarts                                              \t7.37\t4380.23\t1.62694951617879192\nAAAAAAAAAJJBAAAA\tNew, reluctant associations see more different, physical symptoms; useful pounds ought to give. Subjects \tBooks                                             \tbusiness                                          \t9.02\t3044.02\t1.58609001939612781\nAAAAAAAABDMAAAAA\tImports involve most now indian women. Developments announce intimately in a copies. Projects \tBooks                                             \tbusiness                                          \t3.26\t472.29\t0.24608723177265498\nAAAAAAAABINDAAAA\tYears shall want free objects. Old residents use absolutely so residential steps. Letters will share variables. Sure fres\tBooks                                             \tbusiness                                          \t40.76\t30227.05\t15.74983814849696292\nAAAAAAAACAADAAAA\tWhole, important problems make. Indeed industrial members go skills. Soft\tBooks                                             \tbusiness                                          \t3.22\t137.92\t0.07186336997625310\nAAAAAAAACPBBAAAA\tOther, black houses flow. New soldiers put only eastern hours. Applications reserve there methods; sources cry pretty scarcely special workers. 
Never british opportunities \tBooks                                             \tbusiness                                          \t8.20\t736.96\t0.38399383075478162\nAAAAAAAAEBPAAAAA\tRows could not\tBooks                                             \tbusiness                                          \t1.65\t1290.88\t0.67261446516056841\nAAAAAAAAEEFDAAAA\tRemaining subjects handle even only certain ladies; eagerly literary days could not provide. Very different articles cut then. Boys see out of a houses. Governme\tBooks                                             \tbusiness                                          \t9.03\t1065.30\t0.55507575431918810\nAAAAAAAAEFEEAAAA\tWhite members see highly on a negotiations. Evident, passive colours can refer familiar, ugly factors; away small examinations shall prove \tBooks                                             \tbusiness                                          \t17.97\t1446.00\t0.75343991433919646\nAAAAAAAAEGCCAAAA\tManufacturing, ready concerns see already then new pupils. Both stable types used to manage otherw\tBooks                                             \tbusiness                                          \t1.18\t2635.71\t1.37333963805184198\nAAAAAAAAFCGDAAAA\tSmall, capable centres\tBooks                                             \tbusiness                                          \t2.98\t5029.45\t2.62060053746422658\nAAAAAAAAFDLAAAAA\tPopular, different parameters might take open, used modules. Prisoners use pretty alternative lovers. Annual, professional others spend once true men. Other, small subsidies seem politically\tBooks                                             \tbusiness                                          \t7.25\t621.26\t0.32370821658531756\nAAAAAAAAFEGEAAAA\tSupreme, free uses handle even in the customers. Other minutes might not make of course social neighbours. So environmental rights come other, able sales\tBooks                                             \tbusiness                                          \t8.08\t3950.22\t2.05826654109334761\nAAAAAAAAGIJAAAAA\tAlways other hours used to use. Women should jump then. Civil samples take therefore other offices. Concrete, major demands\tBooks                                             \tbusiness                                          \t1.42\t2013.79\t1.04928752772968910\nAAAAAAAAHDKCAAAA\tVisual fragments \tBooks                                             \tbusiness                                          \t6.77\t930.13\t0.48464527491308216\nAAAAAAAAHDLBAAAA\tClassic issues will draw as european, engl\tBooks                                             \tbusiness                                          \t75.64\t556.83\t0.29013689315456070\nAAAAAAAAHJAAAAAA\tAgain british shareholders see shares. American lives ought to establish horses. Then ideal conservatives might charge even nec\tBooks                                             \tbusiness                                          \t2.44\t1898.13\t0.98902275560488173\nAAAAAAAAHLKAAAAA\tConfident, video-tape\tBooks                                             \tbusiness                                          \t3.17\t1131.00\t0.58930881266779474\nAAAAAAAAIHNDAAAA\tOf course fundamental children will not deal still including a suppliers. More crucial powers will not keep enough. As good comments used to devote even convenient electric problems. 
Publi\tBooks                                             \tbusiness                                          \t8.85\t414.75\t0.21610595053401226\nAAAAAAAAIMJAAAAA\tDepartments could seek now for a commu\tBooks                                             \tbusiness                                          \t5.93\t9895.85\t5.15624369039663714\nAAAAAAAAJFBEAAAA\tPaintings must not know primary, royal stands; similar, available others ough\tBooks                                             \tbusiness                                          \t0.39\t13809.44\t7.19542412909562460\nAAAAAAAAJJGBAAAA\tMost present eyes restore fat, central relationships; again considerable habits must face in a discussions. Engineers help at all direct occasions. Curiously del\tBooks                                             \tbusiness                                          \t80.10\t9267.25\t4.82871095861681771\nAAAAAAAAKBMDAAAA\tSo white countries could secure more angry items. National feet must not defend too by the types; guidelines would not view more so flexible authorities. Critics will handle closely lig\tBooks                                             \tbusiness                                          \t2.50\t2542.50\t1.32477246349059959\nAAAAAAAAKJHDAAAA\tSimple changes ought to vote almost sudden techniques. Partial, golden faces mean in a officials; vertically minor \tBooks                                             \tbusiness                                          \t8.73\t22710.22\t11.83318548507904997\nAAAAAAAAKJOBAAAA\tChristian lines stand once deep formal aspirations. National, fine islands play together with a patterns. New journals lose etc positive armie\tBooks                                             \tbusiness                                          \t4.89\t11560.78\t6.02375732565303988\nAAAAAAAAKKDAAAAA\tChildren would not mean in favour of a parts. Heavy, whole others shall mean on\tBooks                                             \tbusiness                                          \t3.13\t9065.09\t4.72337526492192700\nAAAAAAAAKLCCAAAA\tLips will n\tBooks                                             \tbusiness                                          \t8.48\t541.62\t0.28221170567385587\nAAAAAAAAKNJCAAAA\tWhite fees might combine reports. Tr\tBooks                                             \tbusiness                                          \t2.09\t37.60\t0.01959152197728478\nAAAAAAAALAJCAAAA\tAsleep children invite more. Wealthy forms could expect as. Indeed statistical examinations could la\tBooks                                             \tbusiness                                          \t3.71\t2082.24\t1.08495347664844290\nAAAAAAAALDHBAAAA\tMost new weeks go yet members. Also encouraging delegates make publications. 
Different competitors run resources; somehow common views m\tBooks                                             \tbusiness                                          \t1.07\t13412.42\t6.98855641485568838\nAAAAAAAALHMBAAAA\tLocal, bloody names \tBooks                                             \tbusiness                                          \t4.40\t1997.44\t1.04076834197626873\nAAAAAAAALJJCAAAA\tLarge, larg\tBooks                                             \tbusiness                                          \t3.50\t12097.82\t6.30358261721370521\nAAAAAAAANJLBAAAA\tOnly, gothic\tBooks                                             \tbusiness                                          \t1.68\t5708.95\t2.97465477106967886\nAAAAAAAANKCAAAAA\tLow, large clouds will not visit for example as the notions. Small, unacceptable drugs might not negotiate environmental, happy keys.\tBooks                                             \tbusiness                                          \t3.11\t3020.85\t1.57401726502874248\nAAAAAAAAOAPAAAAA\tSilver, critical operations could help howev\tBooks                                             \tbusiness                                          \t5.56\t2286.24\t1.19124790439754116\nAAAAAAAAOBAEAAAA\tTerrible, psychiatric bones will destroy also used studies; solely usual windows should not make shares. Advances continue sufficiently. As key days might not use far artists. Offici\tBooks                                             \tbusiness                                          \t5.83\t6672.40\t3.47666146918178041\nAAAAAAAAOCHCAAAA\tToo white addresses end by the talks. Hands get only companies. Statements know. Sentences would pay around for a payments; papers wait actually drinks; men would \tBooks                                             \tbusiness                                          \t6.06\t7609.35\t3.96486031270882752\nAAAAAAAAAGLDAAAA\tNew, big arguments may not win since by a tenant\tBooks                                             \tcomputers                                         \t1.00\t904.16\t0.32327741862037314\nAAAAAAAAALNBAAAA\tElse substantial problems slip months. Just unique corporations put vast areas. Supporters like far perfect chapters. Now young reports become wrong trials. Available ears shall\tBooks                                             \tcomputers                                         \t51.46\t18752.88\t6.70498876094676063\nAAAAAAAABEBEAAAA\tCheap, desirable members take immediate, estimated debts; months must track typica\tBooks                                             \tcomputers                                         \t3.26\t10027.86\t3.58540600677589698\nAAAAAAAABHOAAAAA\tExpert, scottish terms will ask quiet demands; poor bits attempt northern, dangerous si\tBooks                                             \tcomputers                                         \t2.66\t7330.68\t2.62104418148557444\nAAAAAAAACCDBAAAA\tGradually serious visitors bear no doubt technical hearts. Critics continue earlier soviet, standard minute\tBooks                                             \tcomputers                                         \t6.40\t1711.84\t0.61205894564136830\nAAAAAAAACCPBAAAA\tClear, general goods must know never women. Communications meet about. Other rewards spot wide in a skills. Relative, empty drawings facilitate too rooms. 
Still asian police end speedily comp\tBooks                                             \tcomputers                                         \t7.64\t1292.04\t0.46196177220211789\nAAAAAAAACFMBAAAA\tAt least remaining results shall keep cuts. Clients should meet policies. Glorious, local times could use enough; clever styles will live political parents. Single, gradual contracts will describe ho\tBooks                                             \tcomputers                                         \t9.51\t3033.10\t1.08446816760026298\nAAAAAAAACLPDAAAA\tEnvironmental, new women pay again fingers. Different, uncomfortable records miss far russian, dependent members. Enough double men will go here immediatel\tBooks                                             \tcomputers                                         \t89.89\t8553.39\t3.05821739476786568\nAAAAAAAACOFCAAAA\tYears learn here. Days make too. Only moving systems avoid old groups; short movements cannot see respectiv\tBooks                                             \tcomputers                                         \t0.60\t3411.40\t1.21972724504682903\nAAAAAAAACONDAAAA\tMagnetic\tBooks                                             \tcomputers                                         \t57.19\t3569.09\t1.27610843437421206\nAAAAAAAADAHAAAAA\tGa\tBooks                                             \tcomputers                                         \t5.53\t2687.70\t0.96097230360331899\nAAAAAAAADDBAAAAA\tS\tBooks                                             \tcomputers                                         \t65.78\t1613.04\t0.57673355084432699\nAAAAAAAAEAHCAAAA\tSimple year\tBooks                                             \tcomputers                                         \t3.01\t1262.79\t0.45150359611088856\nAAAAAAAAECEEAAAA\tAgricultural players shall smoke. So full reasons undertake \tBooks                                             \tcomputers                                         \t0.70\t4408.27\t1.57615261257037727\nAAAAAAAAECGEAAAA\tThen basic years can encourage later traditions. For example christian parts subscribe informal, valuable gr\tBooks                                             \tcomputers                                         \t2.75\t844.19\t0.30183547604973987\nAAAAAAAAECHAAAAA\tBoxes batt\tBooks                                             \tcomputers                                         \t0.83\t15300.82\t5.47072375727191844\nAAAAAAAAEIGCAAAA\tSeparate, dead buildings think possibly english, net policies. Big divisions can use almost\tBooks                                             \tcomputers                                         \t9.46\t12403.71\t4.43487806374503246\nAAAAAAAAEJECAAAA\tArtists make times. Rather ready functions must pre\tBooks                                             \tcomputers                                         \t5.71\t1533.00\t0.54811569052494252\nAAAAAAAAEMKDAAAA\tAdvantages emerge moves; special, expected operations pass etc natural preferences; very posit\tBooks                                             \tcomputers                                         \t0.15\t5241.45\t1.87405152387603389\nAAAAAAAAFGLAAAAA\tSince other birds shall blame sudden\tBooks                                             \tcomputers                                         \t6.74\t2098.16\t0.75018552983158082\nAAAAAAAAFHNAAAAA\tLegs throw then. Old-fashioned develo\tBooks                                             \tcomputers                                         \t2.66\t163.26\t0.05837271209073850\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q22.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_product_name:string,i_brand:string,i_class:string,i_category:string,qoh:double>\n-- !query output\nesepriableanti                                    \tNULL\tNULL\tNULL\t429.7808764940239\nesepriableanti                                    \timportoamalg #x                                   \tNULL\tNULL\t429.7808764940239\nesepriableanti                                    \timportoamalg #x                                   \tfragrances                                        \tNULL\t429.7808764940239\nesepriableanti                                    \timportoamalg #x                                   \tfragrances                                        \tWomen                                             \t429.7808764940239\nn stbarn stbarought                               \tNULL\tNULL\tNULL\t430.0122448979592\nn stbarn stbarought                               \tamalgimporto #x                                   \tNULL\tNULL\t430.0122448979592\nn stbarn stbarought                               \tamalgimporto #x                                   \taccessories                                       \tNULL\t430.0122448979592\nn stbarn stbarought                               \tamalgimporto #x                                   \taccessories                                       \tMen                                               \t430.0122448979592\nantiationeing                                     \tNULL\tNULL\tNULL\t437.03614457831327\nantiationeing                                     \tamalgexporti #x                                   \tNULL\tNULL\t437.03614457831327\nantiationeing                                     \tamalgexporti #x                                   \tnewborn                                           \tNULL\t437.03614457831327\nantiationeing                                     \tamalgexporti #x                                   \tnewborn                                           \tChildren                                          \t437.03614457831327\nn stpriantin st                                   \tNULL\tNULL\tNULL\t438.77868852459017\nn stpriantin st                                   \texportiexporti #x                                 \tNULL\tNULL\t438.77868852459017\nn stpriantin st                                   \texportiexporti #x                                 \ttoddlers                                          \tNULL\t438.77868852459017\nn stpriantin st                                   \texportiexporti #x                                 \ttoddlers                                          \tChildren                                          \t438.77868852459017\neingprically                                      \tNULL\tNULL\tNULL\t439.97975708502025\neingprically                                      \tamalgbrand #x                                     \tNULL\tNULL\t439.97975708502025\neingprically                                      \tamalgbrand #x                                     \tsemi-precious                                     \tNULL\t439.97975708502025\neingprically                                      \tamalgbrand #x                                     \tsemi-precious                                     \tJewelry                                           \t439.97975708502025\nprieingable                                       \tNULL\tNULL\tNULL\t440.096\nprieingable                                       \texportiunivamalg #x                               
\tNULL\tNULL\t440.096\nprieingable                                       \texportiunivamalg #x                               \tself-help                                         \tNULL\t440.096\nprieingable                                       \texportiunivamalg #x                               \tself-help                                         \tBooks                                             \t440.096\noughteingn stationought                           \tNULL\tNULL\tNULL\t440.1497975708502\noughteingn stationought                           \tamalgscholar #x                                   \tNULL\tNULL\t440.1497975708502\noughteingn stationought                           \tamalgscholar #x                                   \trock                                              \tNULL\t440.1497975708502\noughteingn stationought                           \tamalgscholar #x                                   \trock                                              \tMusic                                             \t440.1497975708502\neingationbaroughtought                            \tNULL\tNULL\tNULL\t440.9721115537849\neingationbaroughtought                            \tmaxicorp #x                                       \tNULL\tNULL\t440.9721115537849\neingationbaroughtought                            \tmaxicorp #x                                       \twomens watch                                      \tNULL\t440.9721115537849\neingationbaroughtought                            \tmaxicorp #x                                       \twomens watch                                      \tJewelry                                           \t440.9721115537849\npriantibarpri                                     \tNULL\tNULL\tNULL\t443.45849802371544\npriantibarpri                                     \texportiimporto #x                                 \tNULL\tNULL\t443.45849802371544\npriantibarpri                                     \texportiimporto #x                                 \tpants                                             \tNULL\t443.45849802371544\npriantibarpri                                     \texportiimporto #x                                 \tpants                                             \tMen                                               \t443.45849802371544\nprioughtantiation                                 \tNULL\tNULL\tNULL\t443.8825910931174\nprioughtantiation                                 \tcorpmaxi #x                                       \tNULL\tNULL\t443.8825910931174\nprioughtantiation                                 \tcorpmaxi #x                                       \tparenting                                         \tNULL\t443.8825910931174\nprioughtantiation                                 \tcorpmaxi #x                                       \tparenting                                         \tBooks                                             \t443.8825910931174\neseprieingoughtought                              \tNULL\tNULL\tNULL\t445.2016129032258\neseprieingoughtought                              \timportonameless #x                                \tNULL\tNULL\t445.2016129032258\neseprieingoughtought                              \timportonameless #x                                \tbaseball                                          \tNULL\t445.2016129032258\neseprieingoughtought                              \timportonameless #x                                \tbaseball                                          \tSports                                            
\t445.2016129032258\neingpriationanti                                  \tNULL\tNULL\tNULL\t445.4920634920635\neingpriationanti                                  \tscholarunivamalg #x                               \tNULL\tNULL\t445.4920634920635\neingpriationanti                                  \tscholarunivamalg #x                               \tfiction                                           \tNULL\t445.4920634920635\neingpriationanti                                  \tscholarunivamalg #x                               \tfiction                                           \tBooks                                             \t445.4920634920635\nantin stablecallyought                            \tNULL\tNULL\tNULL\t445.54918032786884\nantin stablecallyought                            \timportoedu pack #x                                \tNULL\tNULL\t445.54918032786884\nantin stablecallyought                            \timportoedu pack #x                                \tmens                                              \tNULL\t445.54918032786884\nantin stablecallyought                            \timportoedu pack #x                                \tmens                                              \tShoes                                             \t445.54918032786884\ncallycallyn steing                                \tNULL\tNULL\tNULL\t445.9012345679012\ncallycallyn steing                                \tcorpunivamalg #x                                  \tNULL\tNULL\t445.9012345679012\ncallycallyn steing                                \tcorpunivamalg #x                                  \tmystery                                           \tNULL\t445.9012345679012\ncallycallyn steing                                \tcorpunivamalg #x                                  \tmystery                                           \tBooks                                             \t445.9012345679012\noughtpribarought                                  \tNULL\tNULL\tNULL\t446.125\noughtpribarought                                  \texportinameless #x                                \tNULL\tNULL\t446.125\noughtpribarought                                  \texportinameless #x                                \twallpaper                                         \tNULL\t446.125\noughtpribarought                                  \texportinameless #x                                \twallpaper                                         \tHome                                              \t446.125\noughtantioughtbarought                            \tNULL\tNULL\tNULL\t446.1847389558233\noughtantioughtbarought                            \tedu packmaxi #x                                  \tNULL\tNULL\t446.1847389558233\noughtantioughtbarought                            \tedu packmaxi #x                                  \tentertainments                                    \tNULL\t446.1847389558233\noughtantioughtbarought                            \tedu packmaxi #x                                  \tentertainments                                    \tBooks                                             \t446.1847389558233\ncallyoughtn stcallyought                          \tNULL\tNULL\tNULL\t446.43650793650795\ncallyoughtn stcallyought                          \texportischolar #x                                 \tNULL\tNULL\t446.43650793650795\ncallyoughtn stcallyought                          \texportischolar #x                                 \tpop                                               
\tNULL\t446.43650793650795\ncallyoughtn stcallyought                          \texportischolar #x                                 \tpop                                               \tMusic                                             \t446.43650793650795\nationeingationableought                           \tNULL\tNULL\tNULL\t446.48192771084337\nationeingationableought                           \tnamelessnameless #x                               \tNULL\tNULL\t446.48192771084337\nationeingationableought                           \tnamelessnameless #x                               \toutdoor                                           \tNULL\t446.48192771084337\nationeingationableought                           \tnamelessnameless #x                               \toutdoor                                           \tSports                                            \t446.48192771084337\npriantiableese                                    \tNULL\tNULL\tNULL\t446.85483870967744\npriantiableese                                    \texportiedu pack #x                                \tNULL\tNULL\t446.85483870967744\npriantiableese                                    \texportiedu pack #x                                \tkids                                              \tNULL\t446.85483870967744\npriantiableese                                    \texportiedu pack #x                                \tkids                                              \tShoes                                             \t446.85483870967744\nprieseeseableought                                \tNULL\tNULL\tNULL\t446.9186991869919\nprieseeseableought                                \tamalgscholar #x                                   \tNULL\tNULL\t446.9186991869919\nprieseeseableought                                \tamalgscholar #x                                   \trock                                              \tNULL\t446.9186991869919\nprieseeseableought                                \tamalgscholar #x                                   \trock                                              \tMusic                                             \t446.9186991869919\nationableoughtcallyought                          \tNULL\tNULL\tNULL\t447.165991902834\nationableoughtcallyought                          \texportischolar #x                                 \tNULL\tNULL\t447.165991902834\nationableoughtcallyought                          \texportischolar #x                                 \tpop                                               \tNULL\t447.165991902834\nationableoughtcallyought                          \texportischolar #x                                 \tpop                                               \tMusic                                             \t447.165991902834\npripricallyese                                    \tNULL\tNULL\tNULL\t447.2550607287449\npripricallyese                                    \tedu packimporto #x                                \tNULL\tNULL\t447.2550607287449\npripricallyese                                    \tedu packimporto #x                                \tsports-apparel                                    \tNULL\t447.2550607287449\npripricallyese                                    \tedu packimporto #x                                \tsports-apparel                                    \tMen                                               \t447.2550607287449\neingableationn st                                 \tNULL\tNULL\tNULL\t447.3541666666667\neingableationn st                             
    \tnamelessmaxi #x                                   \tNULL\tNULL\t447.3541666666667\neingableationn st                                 \tnamelessmaxi #x                                   \tromance                                           \tNULL\t447.3541666666667\neingableationn st                                 \tnamelessmaxi #x                                   \tromance                                           \tBooks                                             \t447.3541666666667\nn stantin stoughtought                            \tNULL\tNULL\tNULL\t448.2396694214876\nn stantin stoughtought                            \timportoscholar #x                                 \tNULL\tNULL\t448.2396694214876\nn stantin stoughtought                            \timportoscholar #x                                 \tcountry                                           \tNULL\t448.2396694214876\nn stantin stoughtought                            \timportoscholar #x                                 \tcountry                                           \tMusic                                             \t448.2396694214876\nn steingbaranti                                   \tNULL\tNULL\tNULL\t448.702479338843\nn steingbaranti                                   \tamalgamalg #x                                     \tNULL\tNULL\t448.702479338843\nn steingbaranti                                   \tamalgamalg #x                                     \tdresses                                           \tNULL\t448.702479338843\nn steingbaranti                                   \tamalgamalg #x                                     \tdresses                                           \tWomen                                             \t448.702479338843\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q22a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_product_name:string,i_brand:string,i_class:string,i_category:string,qoh:double>\n-- !query output\noughtn steingcally                                \tNULL\tNULL\tNULL\t429.75206611570246\noughtn steingcally                                \texportiexporti #x                                 \tNULL\tNULL\t429.75206611570246\noughtn steingcally                                \texportiexporti #x                                 \ttoddlers                                          \tNULL\t429.75206611570246\noughtn steingcally                                \texportiexporti #x                                 \ttoddlers                                          \tChildren                                          \t429.75206611570246\noughtcallypripriought                             \tNULL\tNULL\tNULL\t433.04918032786884\noughtcallypripriought                             \tcorpunivamalg #x                                 \tNULL\tNULL\t433.04918032786884\noughtcallypripriought                             \tcorpunivamalg #x                                 \tmusical                                           \tNULL\t433.04918032786884\noughtcallypripriought                             \tcorpunivamalg #x                                 \tmusical                                           \tElectronics                                       \t433.04918032786884\noughtesecallyeseought                             \tNULL\tNULL\tNULL\t434.8091286307054\noughtesecallyeseought                             \tscholarbrand #x                                   \tNULL\tNULL\t434.8091286307054\noughtesecallyeseought                             \tscholarbrand #x                                   \tblinds/shades                                     \tNULL\t434.8091286307054\noughtesecallyeseought                             \tscholarbrand #x                                   \tblinds/shades                                     \tHome                                              \t434.8091286307054\nantiesen stese                                    \tNULL\tNULL\tNULL\t436.39676113360326\nantiesen stese                                    \tedu packexporti #x                                \tNULL\tNULL\t436.39676113360326\nantiesen stese                                    \tedu packexporti #x                                \tschool-uniforms                                   \tNULL\t436.39676113360326\nantiesen stese                                    \tedu packexporti #x                                \tschool-uniforms                                   \tChildren                                          \t436.39676113360326\npriesecallyantiought                              \tNULL\tNULL\tNULL\t440.51612903225805\npriesecallyantiought                              \timportounivamalg #x                               \tNULL\tNULL\t440.51612903225805\npriesecallyantiought                              \timportounivamalg #x                               \thome repair                                       \tNULL\t440.51612903225805\npriesecallyantiought                              \timportounivamalg #x                               \thome repair                                       \tBooks                                             \t440.51612903225805\nprioughtableoughtought                            \tNULL\tNULL\tNULL\t440.97478991596637\nprioughtableoughtought                            \texportiunivamalg #x                         
     \tNULL\tNULL\t440.97478991596637\nprioughtableoughtought                            \texportiunivamalg #x                              \tdvd/vcr players                                   \tNULL\t440.97478991596637\nprioughtableoughtought                            \texportiunivamalg #x                              \tdvd/vcr players                                   \tElectronics                                       \t440.97478991596637\nantiablen stought                                 \tNULL\tNULL\tNULL\t441.520325203252\nantiablen stought                                 \tamalgamalg #x                                     \tNULL\tNULL\t441.520325203252\nantiablen stought                                 \tamalgamalg #x                                     \tdresses                                           \tNULL\t441.520325203252\nantiablen stought                                 \tamalgamalg #x                                     \tdresses                                           \tWomen                                             \t441.520325203252\nn steingantianti                                  \tNULL\tNULL\tNULL\t442.4404761904762\nn steingantianti                                  \tcorpbrand #x                                     \tNULL\tNULL\t442.4404761904762\nn steingantianti                                  \tcorpbrand #x                                     \trugs                                              \tNULL\t442.4404761904762\nn steingantianti                                  \tcorpbrand #x                                     \trugs                                              \tHome                                              \t442.4404761904762\nprieingation                                      \tNULL\tNULL\tNULL\t442.68595041322317\nprieingation                                      \timportoamalg #x                                   \tNULL\tNULL\t442.68595041322317\nprieingation                                      \timportoamalg #x                                   \tfragrances                                        \tNULL\t442.68595041322317\nprieingation                                      \timportoamalg #x                                   \tfragrances                                        \tWomen                                             \t442.68595041322317\nn stesecallypri                                   \tNULL\tNULL\tNULL\t442.84\nn stesecallypri                                   \tamalgimporto #x                                   \tNULL\tNULL\t442.84\nn stesecallypri                                   \tamalgimporto #x                                   \taccessories                                       \tNULL\t442.84\nn stesecallypri                                   \tamalgimporto #x                                   \taccessories                                       \tMen                                               \t442.84\nn stn stcallyoughtought                           \tNULL\tNULL\tNULL\t443.20883534136544\nn stn stcallyoughtought                           \tcorpmaxi #x                                      \tNULL\tNULL\t443.20883534136544\nn stn stcallyoughtought                           \tcorpmaxi #x                                      \tgolf                                              \tNULL\t443.20883534136544\nn stn stcallyoughtought                           \tcorpmaxi #x                                      \tgolf                                              \tSports                                            
\t443.20883534136544\nationableprieing                                  \tNULL\tNULL\tNULL\t443.349593495935\nationableprieing                                  \texportiamalg #x                                   \tNULL\tNULL\t443.349593495935\nationableprieing                                  \texportiamalg #x                                   \tmaternity                                         \tNULL\t443.349593495935\nationableprieing                                  \texportiamalg #x                                   \tmaternity                                         \tWomen                                             \t443.349593495935\nationoughtesepri                                  \tNULL\tNULL\tNULL\t443.8292682926829\nationoughtesepri                                  \tedu packunivamalg #x                             \tNULL\tNULL\t443.8292682926829\nationoughtesepri                                  \tedu packunivamalg #x                             \tsports                                            \tNULL\t443.8292682926829\nationoughtesepri                                  \tedu packunivamalg #x                             \tsports                                            \tBooks                                             \t443.8292682926829\noughtbarcallycallyought                           \tNULL\tNULL\tNULL\t444.5889328063241\noughtbarcallycallyought                           \tcorpmaxi #x                                       \tNULL\tNULL\t444.5889328063241\noughtbarcallycallyought                           \tcorpmaxi #x                                       \tgolf                                              \tNULL\t444.5889328063241\noughtbarcallycallyought                           \tcorpmaxi #x                                       \tgolf                                              \tSports                                            \t444.5889328063241\nn steingcallycally                                \tNULL\tNULL\tNULL\t445.0833333333333\nn steingcallycally                                \timportoscholar #x                                 \tNULL\tNULL\t445.0833333333333\nn steingcallycally                                \timportoscholar #x                                 \tcountry                                           \tNULL\t445.0833333333333\nn steingcallycally                                \timportoscholar #x                                 \tcountry                                           \tMusic                                             \t445.0833333333333\nationcallyoughteing                               \tNULL\tNULL\tNULL\t445.83534136546183\nationcallyoughteing                               \tamalgedu pack #x                                  \tNULL\tNULL\t445.83534136546183\nationcallyoughteing                               \tamalgedu pack #x                                  \twomens                                            \tNULL\t445.83534136546183\nationcallyoughteing                               \tamalgedu pack #x                                  \twomens                                            \tShoes                                             \t445.83534136546183\nantiablebarantiought                              \tNULL\tNULL\tNULL\t446.05555555555554\nantiablebarantiought                              \tscholarbrand #x                                   \tNULL\tNULL\t446.05555555555554\nantiablebarantiought                              \tscholarbrand #x                                   \tcustom                                        
    \tNULL\t446.05555555555554\nantiablebarantiought                              \tscholarbrand #x                                   \tcustom                                            \tJewelry                                           \t446.05555555555554\nationantiationeseought                            \tNULL\tNULL\tNULL\t446.33870967741933\nationantiationeseought                            \tamalgimporto #x                                   \tNULL\tNULL\t446.33870967741933\nationantiationeseought                            \tamalgimporto #x                                   \taccessories                                       \tNULL\t446.33870967741933\nationantiationeseought                            \tamalgimporto #x                                   \taccessories                                       \tMen                                               \t446.33870967741933\npricallypriationought                             \tNULL\tNULL\tNULL\t446.96\npricallypriationought                             \tmaxibrand #x                                      \tNULL\tNULL\t446.96\npricallypriationought                             \tmaxibrand #x                                      \tmattresses                                        \tNULL\t446.96\npricallypriationought                             \tmaxibrand #x                                      \tmattresses                                        \tHome                                              \t446.96\nationcallyn stbarought                            \tNULL\tNULL\tNULL\t446.9918032786885\nationcallyn stbarought                            \tcorpbrand #x                                      \tNULL\tNULL\t446.9918032786885\nationcallyn stbarought                            \tcorpbrand #x                                      \trugs                                              \tNULL\t446.9918032786885\nationcallyn stbarought                            \tcorpbrand #x                                      \trugs                                              \tHome                                              \t446.9918032786885\nationeseoughtpri                                  \tNULL\tNULL\tNULL\t447.0769230769231\nationeseoughtpri                                  \tedu packamalg #x                                  \tNULL\tNULL\t447.0769230769231\nationeseoughtpri                                  \tedu packamalg #x                                  \tswimwear                                          \tNULL\t447.0769230769231\nationeseoughtpri                                  \tedu packamalg #x                                  \tswimwear                                          \tWomen                                             \t447.0769230769231\nanticallyese                                      \tNULL\tNULL\tNULL\t447.4313725490196\nanticallyese                                      \tamalgcorp #x                                      \tNULL\tNULL\t447.4313725490196\nanticallyese                                      \tamalgcorp #x                                      \tbirdal                                            \tNULL\t447.4313725490196\nanticallyese                                      \tamalgcorp #x                                      \tbirdal                                            \tJewelry                                           \t447.4313725490196\nn stpriprioughtought                              \tNULL\tNULL\tNULL\t447.6375\nn stpriprioughtought                              \tscholarbrand #x                            
      \tNULL\tNULL\t447.6375\nn stpriprioughtought                              \tscholarbrand #x                                  \tblinds/shades                                     \tNULL\t447.6375\nn stpriprioughtought                              \tscholarbrand #x                                  \tblinds/shades                                     \tHome                                              \t447.6375\noughtantioughteing                                \tNULL\tNULL\tNULL\t448.08097165991904\noughtantioughteing                                \tunivmaxi #x                                       \tNULL\tNULL\t448.08097165991904\noughtantioughteing                                \tunivmaxi #x                                       \tpools                                             \tNULL\t448.08097165991904\noughtantioughteing                                \tunivmaxi #x                                       \tpools                                             \tSports                                            \t448.08097165991904\nn steseeseationought                              \tNULL\tNULL\tNULL\t448.1769547325103\nn steseeseationought                              \texportiunivamalg #x                               \tNULL\tNULL\t448.1769547325103\nn steseeseationought                              \texportiunivamalg #x                               \tself-help                                         \tNULL\t448.1769547325103\nn steseeseationought                              \texportiunivamalg #x                               \tself-help                                         \tBooks                                             \t448.1769547325103\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q24.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_last_name:string,c_first_name:string,s_store_name:string,paid:decimal(27,2)>\n-- !query output\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q27a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,s_state:string,g_state:int,agg1:double,agg2:decimal(11,6),agg3:decimal(11,6),agg4:decimal(11,6)>\n-- !query output\nNULL\tNULL\t1\t49.916138404648706\t75.425222\t191.109448\t37.896850\nAAAAAAAAAAABAAAA\tNULL\t1\t67.0\t37.520000\t0.000000\t24.010000\nAAAAAAAAAAABAAAA\tTN\t0\t67.0\t37.520000\t0.000000\t24.010000\nAAAAAAAAAAACAAAA\tNULL\t1\t34.0\t89.610000\t186.400000\t32.250000\nAAAAAAAAAAACAAAA\tTN\t0\t34.0\t89.610000\t186.400000\t32.250000\nAAAAAAAAAACCAAAA\tNULL\t1\t69.0\t31.085000\t0.000000\t7.825000\nAAAAAAAAAACCAAAA\tTN\t0\t69.0\t31.085000\t0.000000\t7.825000\nAAAAAAAAAACDAAAA\tNULL\t1\t36.0\t107.280000\t0.000000\t54.063333\nAAAAAAAAAACDAAAA\tTN\t0\t36.0\t107.280000\t0.000000\t54.063333\nAAAAAAAAAADBAAAA\tNULL\t1\t21.0\t74.470000\t0.000000\t44.680000\nAAAAAAAAAADBAAAA\tTN\t0\t21.0\t74.470000\t0.000000\t44.680000\nAAAAAAAAAAEBAAAA\tNULL\t1\t47.0\t30.540000\t125.010000\t20.460000\nAAAAAAAAAAEBAAAA\tTN\t0\t47.0\t30.540000\t125.010000\t20.460000\nAAAAAAAAAAEEAAAA\tNULL\t1\t26.5\t100.840000\t362.865000\t44.030000\nAAAAAAAAAAEEAAAA\tTN\t0\t26.5\t100.840000\t362.865000\t44.030000\nAAAAAAAAAAFCAAAA\tNULL\t1\t50.0\t37.055000\t0.000000\t7.555000\nAAAAAAAAAAFCAAAA\tTN\t0\t50.0\t37.055000\t0.000000\t7.555000\nAAAAAAAAAAGBAAAA\tNULL\t1\t14.0\t55.130000\t0.000000\t50.160000\nAAAAAAAAAAGBAAAA\tTN\t0\t14.0\t55.130000\t0.000000\t50.160000\nAAAAAAAAAAHBAAAA\tNULL\t1\t50.0\t106.065000\t0.000000\t29.695000\nAAAAAAAAAAHBAAAA\tTN\t0\t50.0\t106.065000\t0.000000\t29.695000\nAAAAAAAAAAIAAAAA\tNULL\t1\t67.0\t68.610000\t0.000000\t26.405000\nAAAAAAAAAAIAAAAA\tTN\t0\t67.0\t68.610000\t0.000000\t26.405000\nAAAAAAAAAAIDAAAA\tNULL\t1\t37.333333333333336\t78.440000\t0.000000\t51.863333\nAAAAAAAAAAIDAAAA\tTN\t0\t37.333333333333336\t78.440000\t0.000000\t51.863333\nAAAAAAAAAAJBAAAA\tNULL\t1\t55.0\t131.290000\t0.000000\t36.760000\nAAAAAAAAAAJBAAAA\tTN\t0\t55.0\t131.290000\t0.000000\t36.760000\nAAAAAAAAAAKBAAAA\tNULL\t1\t70.0\t66.120000\t0.000000\t21.150000\nAAAAAAAAAAKBAAAA\tTN\t0\t70.0\t66.120000\t0.000000\t21.150000\nAAAAAAAAAALCAAAA\tNULL\t1\t93.0\t104.200000\t0.000000\t6.250000\nAAAAAAAAAALCAAAA\tTN\t0\t93.0\t104.200000\t0.000000\t6.250000\nAAAAAAAAAALDAAAA\tNULL\t1\t61.5\t150.645000\t0.000000\t57.335000\nAAAAAAAAAALDAAAA\tTN\t0\t61.5\t150.645000\t0.000000\t57.335000\nAAAAAAAAAANAAAAA\tNULL\t1\t52.0\t2.510000\t0.000000\t2.280000\nAAAAAAAAAANAAAAA\tTN\t0\t52.0\t2.510000\t0.000000\t2.280000\nAAAAAAAAAAOAAAAA\tNULL\t1\t23.5\t17.590000\t0.000000\t12.130000\nAAAAAAAAAAOAAAAA\tTN\t0\t23.5\t17.590000\t0.000000\t12.130000\nAAAAAAAAAAOCAAAA\tNULL\t1\t30.0\t22.380000\t0.000000\t4.690000\nAAAAAAAAAAOCAAAA\tTN\t0\t30.0\t22.380000\t0.000000\t4.690000\nAAAAAAAAAAPCAAAA\tNULL\t1\t92.0\t75.630000\t0.000000\t64.280000\nAAAAAAAAAAPCAAAA\tTN\t0\t92.0\t75.630000\t0.000000\t64.280000\nAAAAAAAAABAAAAAA\tNULL\t1\t10.0\t45.790000\t0.000000\t43.950000\nAAAAAAAAABAAAAAA\tTN\t0\t10.0\t45.790000\t0.000000\t43.950000\nAAAAAAAAABABAAAA\tNULL\t1\t63.0\t21.890000\t0.000000\t19.700000\nAAAAAAAAABABAAAA\tTN\t0\t63.0\t21.890000\t0.000000\t19.700000\nAAAAAAAAABBAAAAA\tNULL\t1\t26.5\t78.785000\t0.000000\t35.615000\nAAAAAAAAABBAAAAA\tTN\t0\t26.5\t78.785000\t0.000000\t35.615000\nAAAAAAAAABBDAAAA\tNULL\t1\t14.0\t67.910000\t0.000000\t36.670000\nAAAAAAAAABBDAAAA\tTN\t0\t14.0\t67.910000\t0.000000\t36.670000\nAAAAAAAAABCBAAAA\tNULL\t1\t42.5\t72.900000\t0.000000\t44.685000\nAAAAAAAAABCBAAAA\tTN\t0\t42.5\t72.900000\t0.000000\t44.685000\nAAAAAAAAABCCAAAA\t
NULL\t1\t50.0\t90.835000\t0.000000\t42.605000\nAAAAAAAAABCCAAAA\tTN\t0\t50.0\t90.835000\t0.000000\t42.605000\nAAAAAAAAABDBAAAA\tNULL\t1\t40.0\t10.100000\t0.000000\t4.240000\nAAAAAAAAABDBAAAA\tTN\t0\t40.0\t10.100000\t0.000000\t4.240000\nAAAAAAAAABDEAAAA\tNULL\t1\t67.0\t85.420000\t0.000000\t31.600000\nAAAAAAAAABDEAAAA\tTN\t0\t67.0\t85.420000\t0.000000\t31.600000\nAAAAAAAAABECAAAA\tNULL\t1\t78.5\t36.280000\t1673.320000\t25.170000\nAAAAAAAAABECAAAA\tTN\t0\t78.5\t36.280000\t1673.320000\t25.170000\nAAAAAAAAABEDAAAA\tNULL\t1\t60.0\t92.410000\t2270.190000\t58.210000\nAAAAAAAAABEDAAAA\tTN\t0\t60.0\t92.410000\t2270.190000\t58.210000\nAAAAAAAAABGAAAAA\tNULL\t1\t57.0\t11.390000\t201.900000\t3.980000\nAAAAAAAAABGAAAAA\tTN\t0\t57.0\t11.390000\t201.900000\t3.980000\nAAAAAAAAABGDAAAA\tNULL\t1\t74.0\t174.750000\t232.650000\t5.240000\nAAAAAAAAABGDAAAA\tTN\t0\t74.0\t174.750000\t232.650000\t5.240000\nAAAAAAAAABHDAAAA\tNULL\t1\t34.5\t120.730000\t0.000000\t61.460000\nAAAAAAAAABHDAAAA\tTN\t0\t34.5\t120.730000\t0.000000\t61.460000\nAAAAAAAAABIBAAAA\tNULL\t1\t4.0\t12.680000\t5.040000\t10.520000\nAAAAAAAAABIBAAAA\tTN\t0\t4.0\t12.680000\t5.040000\t10.520000\nAAAAAAAAABICAAAA\tNULL\t1\t20.5\t22.420000\t0.000000\t20.300000\nAAAAAAAAABICAAAA\tTN\t0\t20.5\t22.420000\t0.000000\t20.300000\nAAAAAAAAABJBAAAA\tNULL\t1\t38.0\t2.850000\t0.000000\t0.880000\nAAAAAAAAABJBAAAA\tTN\t0\t38.0\t2.850000\t0.000000\t0.880000\nAAAAAAAAABKCAAAA\tNULL\t1\t8.0\t73.980000\t0.000000\t51.780000\nAAAAAAAAABKCAAAA\tTN\t0\t8.0\t73.980000\t0.000000\t51.780000\nAAAAAAAAABLCAAAA\tNULL\t1\t49.0\t96.580000\t0.000000\t28.970000\nAAAAAAAAABLCAAAA\tTN\t0\t49.0\t96.580000\t0.000000\t28.970000\nAAAAAAAAABMAAAAA\tNULL\t1\t50.0\t136.990000\t838.320000\t69.860000\nAAAAAAAAABMAAAAA\tTN\t0\t50.0\t136.990000\t838.320000\t69.860000\nAAAAAAAAABMBAAAA\tNULL\t1\t76.0\t120.520000\t0.000000\t83.150000\nAAAAAAAAABMBAAAA\tTN\t0\t76.0\t120.520000\t0.000000\t83.150000\nAAAAAAAAABNAAAAA\tNULL\t1\t17.0\t91.190000\t0.000000\t16.410000\nAAAAAAAAABNAAAAA\tTN\t0\t17.0\t91.190000\t0.000000\t16.410000\nAAAAAAAAABNCAAAA\tNULL\t1\t61.0\t13.080000\t16.650000\t0.650000\nAAAAAAAAABNCAAAA\tTN\t0\t61.0\t13.080000\t16.650000\t0.650000\nAAAAAAAAABPAAAAA\tNULL\t1\t30.0\t118.380000\t623.160000\t46.160000\nAAAAAAAAABPAAAAA\tTN\t0\t30.0\t118.380000\t623.160000\t46.160000\nAAAAAAAAABPBAAAA\tNULL\t1\t30.0\tNULL\tNULL\t17.350000\nAAAAAAAAABPBAAAA\tTN\t0\t30.0\tNULL\tNULL\t17.350000\nAAAAAAAAACAAAAAA\tNULL\t1\t20.0\t3.710000\t0.000000\t0.250000\nAAAAAAAAACAAAAAA\tTN\t0\t20.0\t3.710000\t0.000000\t0.250000\nAAAAAAAAACACAAAA\tNULL\t1\t66.0\t117.590000\t0.000000\t34.100000\nAAAAAAAAACACAAAA\tTN\t0\t66.0\t117.590000\t0.000000\t34.100000\nAAAAAAAAACADAAAA\tNULL\t1\t79.0\t171.850000\t0.000000\t127.160000\nAAAAAAAAACADAAAA\tTN\t0\t79.0\t171.850000\t0.000000\t127.160000\nAAAAAAAAACBBAAAA\tNULL\t1\t81.0\t88.700000\t0.000000\t13.300000\nAAAAAAAAACBBAAAA\tTN\t0\t81.0\t88.700000\t0.000000\t13.300000\nAAAAAAAAACBCAAAA\tNULL\t1\t26.0\t62.780000\t1311.600000\t61.520000\nAAAAAAAAACBCAAAA\tTN\t0\t26.0\t62.780000\t1311.600000\t61.520000\nAAAAAAAAACBEAAAA\tNULL\t1\t60.0\t84.015000\t0.000000\t33.495000\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q34.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<c_last_name:string,c_first_name:string,c_salutation:string,c_preferred_cust_flag:string,ss_ticket_number:int,cnt:bigint>\n-- !query output\nNULL\tNULL\tNULL\tY\t47915\t15\nNULL\tNULL\tNULL\tNULL\t126143\t15\nNULL\tNULL\tNULL\tNULL\t215293\t15\nNULL\tNULL\tMrs.      \tN\t120593\t15\nNULL\tRubin               \tSir       \tNULL\t30056\t15\nAdler                         \tJustin              \tSir       \tY\t226187\t15\nAllen                         \tRose                \tMrs.      \tN\t179476\t16\nAnderson                      \tMarvin              \tDr.       \tN\t211012\t16\nAndrews                       \tJacob               \tMr.       \tN\t67111\t16\nAndrews                       \tSamuel              \tDr.       \tY\t139993\t16\nAngel                         \tKevin               \tMr.       \tY\t106628\t15\nAshley                        \tLinda               \tMrs.      \tY\t82173\t15\nBaca                          \tDorothy             \tMrs.      \tN\t64890\t15\nBaker                         \tJamie               \tDr.       \tY\t9916\t15\nBanks                         \tLeroy               \tSir       \tN\t206730\t15\nBarber                        \tDianna              \tMrs.      \tY\t119959\t16\nBarksdale                     \tJoann               \tMiss      \tY\t138994\t15\nBarnes                        \tRuth                \tDr.       \tN\t84038\t15\nBarney                        \tSamuel              \tSir       \tN\t15288\t15\nBarnhart                      \tCharley             \tMr.       \tY\t166576\t15\nBarone                        \tSeth                \tMr.       \tY\t162374\t15\nBarrett                       \tDavid               \tSir       \tN\t189879\t15\nBartels                       \tElmer               \tSir       \tY\t114760\t16\nBear                          \tScott               \tSir       \tY\t82291\t15\nBeers                         \tKendra              \tDr.       \tNULL\t137960\t15\nBelcher                       \tJames               \tSir       \tY\t239470\t16\nBell                          \tCarrie              \tMiss      \tN\t5527\t15\nBell                          \tMatthew             \tDr.       \tN\t20400\t15\nBenjamin                      \tConsuelo            \tMs.       \tY\t201086\t15\nBergman                       \tJoann               \tMiss      \tN\t177052\t15\nBrooks                        \tRobert              \tSir       \tN\t155576\t16\nByrd                          \tKelly               \tSir       \tN\t165115\t16\nCagle                         \tJennifer            \tMiss      \tN\t163129\t15\nCampbell                      \tRobert              \tMr.       \tN\t8964\t15\nCardona                       \tRobert              \tMr.       \tN\t200501\t15\nCarter                        \tWendy               \tMs.       \tN\t96439\t15\nCarver                        \tBernard             \tMr.       \tY\t194943\t16\nChen                          \tWanita              \tMiss      \tN\t137713\t16\nChristensen                   \tLarry               \tDr.       \tY\t58094\t15\nCochrane                      \tAnne                \tMrs.      \tN\t208347\t16\nColeman                       \tInez                \tDr.       \tY\t88249\t16\nColeman                       \tJohn                \tMr.       \tN\t49444\t15\nColon                         \tAnna                \tDr.       
\tY\t143694\t15\nConley                        \tRoxie               \tDr.       \tN\t196663\t15\nCook                          \tAdam                \tMs.       \tY\t167339\t15\nCote                          \tJustin              \tDr.       \tN\t93466\t15\nCouncil                       \tDonald              \tSir       \tY\t102958\t15\nCramer                        \tLinda               \tMs.       \tN\t126628\t15\nCrittenden                    \tAmie                \tMs.       \tN\t138787\t15\nCruz                          \tJames               \tMr.       \tY\t201430\t15\nCuellar                       \tOscar               \tMr.       \tY\t86781\t16\nCullen                        \tLarry               \tMr.       \tY\t221242\t16\nCushing                       \tAntonia             \tMrs.      \tY\t118927\t15\nDavis                         \tGordon              \tDr.       \tN\t227822\t15\nDavis                         \tMyrtle              \tDr.       \tY\t37430\t15\nDecker                        \tVera                \tMiss      \tY\t75737\t16\nDiamond                       \tFernando            \tDr.       \tN\t216391\t15\nDiaz                          \tWalton              \tMr.       \tN\t131135\t16\nDickinson                     \tSteven              \tMr.       \tN\t8057\t16\nDouglas                       \tLester              \tSir       \tN\t26043\t15\nDove                          \tGarry               \tDr.       \tN\t152171\t16\nDrake                         \tRosetta             \tDr.       \tY\t238040\t15\nDumas                         \tTravis              \tMr.       \tY\t94154\t15\nDuncan                        \tOlivia              \tDr.       \tY\t102032\t15\nDurham                        \tAndrea              \tDr.       \tY\t144734\t15\nDutton                        \tGay                 \tMiss      \tY\t110886\t15\nEllis                         \tKaren               \tMiss      \tN\t229706\t16\nEly                           \tCesar               \tDr.       \tY\t36054\t16\nEtheridge                     \tMike                \tDr.       \tN\t19648\t15\nFarmer                        \tEugenia             \tMiss      \tY\t98187\t16\nFarrow                        \tKathy               \tMiss      \tY\t200078\t15\nFaulkner                      \tLakeisha            \tDr.       \tY\t178393\t16\nFaulkner                      \tRobert              \tDr.       \tN\t109423\t15\nFelton                        \tDavid               \tMr.       \tN\t97807\t16\nFerreira                      \tChristine           \tMrs.      \tY\t155822\t15\nFinn                          \tRobert              \tMr.       \tN\t38057\t15\nFinney                        \tCrystal             \tMiss      \tY\t158304\t15\nFischer                       \tTamara              \tMrs.      \tN\t66790\t15\nFoote                         \tRoy                 \tSir       \tN\t68086\t15\nForeman                       \tAutumn              \tMrs.      \tY\t164060\t15\nFunk                          \tMarvin              \tSir       \tY\t61516\t15\nGarcia                        \tChristopher         \tSir       \tY\t181616\t16\nGarcia                        \tKaren               \tMiss      \tN\t236987\t15\nGarcia                        \tRobert              \tDr.       \tN\t172185\t16\nGarland                       \tMichael             \tMr.       \tN\t234421\t15\nGaylord                       \tKeith               \tMr.       \tY\t123333\t16\nGifford                       \tMark                \tMr.       
\tN\t225973\t16\nGilbert                       \tNULL\tSir       \tN\t16844\t15\nGilmore                       \tAustin              \tDr.       \tY\t239871\t15\nGoldsmith                     \tBernice             \tMs.       \tY\t2347\t15\nGood                          \tNancy               \tDr.       \tN\t132655\t15\nGoodman                       \tNULL\tNULL\tN\t71903\t15\nGower                         \tNettie              \tMiss      \tN\t10576\t15\nGray                          \tEvelyn              \tMiss      \tN\t157486\t15\nHammond                       \tRoger               \tSir       \tY\t54884\t16\nHardin                        \tKimberly            \tDr.       \tN\t192424\t16\nHarp                          \tVance               \tMr.       \tN\t199017\t15\nHarper                        \tMadeline            \tDr.       \tN\t173835\t16\nHarris                        \tTammy               \tDr.       \tN\t217761\t16\nHartmann                      \tJoey                \tDr.       \tN\t230915\t15\nHayes                         \tDavid               \tSir       \tN\t82274\t15\nHaynes                        \tSara                \tMiss      \tY\t139168\t16\nHeath                         \tMatthew             \tDr.       \tN\t30710\t15\nHennessey                     \tDebbie              \tDr.       \tY\t79256\t15\nHerman                        \tStella              \tMs.       \tY\t33801\t16\nHernandez                     \tMax                 \tMr.       \tN\t16015\t15\nHernandez                     \tRuth                \tMs.       \tY\t97000\t15\nHess                          \tJoseph              \tSir       \tN\t151336\t15\nHodges                        \tLucas               \tDr.       \tY\t163325\t15\nHolland                       \tJeremiah            \tDr.       \tN\t95938\t16\nJackson                       \tWilliam             \tMr.       \tY\t16425\t16\nJameson                       \tMiguel              \tDr.       \tN\t9181\t16\nJarrell                       \tThomas              \tMr.       \tY\t85787\t16\nJohnson                       \tJulia               \tDr.       \tN\t27560\t15\nJones                         \tTheresa             \tMs.       \tN\t219765\t16\nKelly                         \tMark                \tMr.       \tY\t17039\t16\nKhan                          \tHank                \tMr.       \tN\t177803\t15\nKim                           \tCharlotte           \tDr.       \tY\t7208\t16\nKunz                          \tSarah               \tDr.       \tN\t74568\t15\nLake                          \tRobert              \tSir       \tN\t13264\t15\nLandry                        \tRudolph             \tSir       \tN\t117581\t15\nLane                          \tLuis                \tSir       \tN\t232302\t16\nLangford                      \tDarlene             \tMrs.      \tN\t214891\t15\nLarson                        \tKevin               \tMr.       \tY\t35053\t15\nLarson                        \tThomas              \tMr.       \tN\t114265\t15\nLee                           \tMalik               \tDr.       \tN\t20122\t16\nLeonard                       \tOrlando             \tDr.       \tY\t133168\t15\nLincoln                       \tAnthony             \tMiss      \tY\t1407\t16\nLindsey                       \tLinda               \tDr.       \tN\t62687\t16\nLopez                         \tKaren               \tDr.       \tY\t136008\t15\nLunsford                      \tKevin               \tDr.       
\tN\t159120\t16\nLynch                         \tSylvia              \tMs.       \tY\t115438\t15\nLyon                          \tMichael             \tMr.       \tN\t140323\t15\nMaestas                       \tMabel               \tMrs.      \tN\t184265\t15\nMagana                        \tDiann               \tMrs.      \tY\t19139\t15\nManning                       \tAnnamarie           \tMs.       \tN\t4984\t16\nMarshall                      \tFelipe              \tSir       \tN\t138890\t15\nMartin                        \tPaul                \tDr.       \tN\t26115\t16\nMartinez                      \tEarl                \tSir       \tN\t108982\t15\nMartinez                      \tRobert              \tSir       \tY\t157672\t16\nMasterson                     \tBarbara             \tMrs.      \tN\t231070\t15\nMata                          \tDeborah             \tMiss      \tY\t4323\t15\nMccoy                         \tDebbie              \tDr.       \tN\t91552\t15\nMcgill                        \tTony                \tSir       \tN\t110030\t15\nMckeon                        \tChristina           \tDr.       \tN\t26190\t15\nMcnamara                      \tLinda               \tDr.       \tY\t7957\t15\nMeans                         \tMichael             \tMr.       \tY\t226164\t16\nMedina                        \tJoseph              \tSir       \tY\t110246\t15\nMeyers                        \tZachary             \tMr.       \tY\t59549\t15\nMontgomery                    \tJohn                \tMr.       \tY\t103718\t15\nMoody                         \tMiranda             \tMs.       \tY\t171671\t15\nMoore                         \tMark                \tDr.       \tN\t191471\t15\nMoran                         \tCelia               \tMs.       \tY\t200691\t15\nMorgan                        \tCecelia             \tMrs.      \tN\t200742\t15\nMorrell                       \tChad                \tMr.       \tN\t93790\t15\nMorse                         \tRobert              \tMr.       \tN\t68627\t16\nNeel                          \tAudrey              \tMs.       \tY\t193308\t15\nNeff                          \tSheri               \tMrs.      \tY\t52556\t15\nNelson                        \tKatherine           \tMrs.      \tN\t110232\t15\nNew                           \tSuzanne             \tMiss      \tN\t5120\t16\nNielsen                       \tVeronica            \tMrs.      \tN\t23905\t15\nOakley                        \tGeorge              \tMr.       \tY\t177890\t15\nParker                        \tBarbar              \tDr.       \tN\t57241\t15\nParker                        \tJeff                \tSir       \tN\t213566\t16\nPemberton                     \tJennifer            \tMrs.      \tY\t49875\t16\nPerry                         \tRobert              \tMr.       \tY\t153147\t16\nPhillips                      \tDavid               \tDr.       \tN\t148883\t15\nPhillips                      \tGeorgia             \tNULL\tY\t26878\t15\nPhillips                      \tStanley             \tSir       \tN\t31989\t15\nPinkston                      \tBrenda              \tDr.       \tN\t126440\t15\nPryor                         \tDorothy             \tMrs.      \tN\t213779\t16\nReed                          \tWilliam             \tDr.       \tN\t145002\t15\nReynolds                      \tAmelia              \tMs.       \tY\t68440\t15\nRice                          \tDavid               \tDr.       \tY\t70484\t16\nRobertson                     \tDaniel              \tMr.       
\tN\t40407\t16\nRosales                       \tNULL\tNULL\tY\t156406\t16\nRusso                         \tCheryl              \tMiss      \tN\t81123\t15\nSanchez                       \tBruce               \tSir       \tY\t124479\t15\nSchmitz                       \tKaitlyn             \tMiss      \tN\t105162\t15\nSebastian                     \tHomer               \tDr.       \tY\t64994\t15\nSexton                        \tJerry               \tSir       \tN\t91446\t15\nSierra                        \tDavid               \tSir       \tY\t61810\t15\nSimmons                       \tJoseph              \tDr.       \tN\t54185\t15\nSimpson                       \tMichael             \tSir       \tY\t186613\t16\nSimpson                       \tShalanda            \tDr.       \tY\t181123\t15\nSingleton                     \tAndrew              \tMs.       \tN\t45464\t15\nSmith                         \tDanny               \tDr.       \tY\t143297\t15\nSmith                         \tEdward              \tSir       \tY\t81178\t16\nSmith                         \tHung                \tSir       \tN\t44710\t15\nSmith                         \tKimberly            \tMrs.      \tY\t174638\t15\nSmith                         \tVern                \tSir       \tN\t50960\t15\nSosa                          \tLeah                \tMs.       \tY\t77106\t16\nSparks                        \tErick               \tDr.       \tN\t220337\t15\nTaylor                        \tKenneth             \tDr.       \tY\t194337\t15\nTodd                          \tLinda               \tMs.       \tY\t235816\t15\nTrout                         \tHarley              \tMr.       \tY\t214547\t15\nUrban                         \tNULL\tNULL\tNULL\t214898\t15\nVarner                        \tElsie               \tMs.       \tN\t199602\t16\nVazquez                       \tBill                \tDr.       \tY\t62049\t15\nVelazquez                     \tWilliam             \tDr.       \tN\t46239\t15\nWagner                        \tBarbara             \tMs.       \tY\t233595\t15\nWard                          \tAnna                \tMiss      \tN\t52941\t16\nWatkins                       \tRosa                \tMiss      \tY\t152190\t16\nWelch                         \tJames               \tDr.       \tY\t51441\t16\nWest                          \tTeresa              \tMs.       \tN\t233179\t16\nWhite                         \tMaurice             \tMr.       \tN\t10107\t15\nWilliams                      \tJohn                \tMr.       \tY\t84783\t15\nWilliams                      \tRobert              \tMr.       \tY\t41233\t15\nWilliamson                    \tRuth                \tMrs.      \tY\t86369\t15\nWilson                        \tJoel                \tSir       \tY\t91826\t16\nWilson                        \tJohn                \tSir       \tY\t26543\t15\nWilson                        \tMariano             \tMr.       \tY\t67472\t16\nWinkler                       \tJose                \tDr.       \tY\t78400\t15\nWinter                        \tCora                \tMrs.      \tN\t8978\t16\nWood                          \tMarcia              \tMs.       \tY\t219276\t16\nWood                          \tMichelle            \tMrs.      \tN\t39560\t15\nWright                        \tRichie              \tSir       \tY\t106818\t15\nYoung                         \tWilliam             \tMr.       \tY\t51127\t15\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q35.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<ca_state:string,cd_gender:string,cd_marital_status:string,cd_dep_count:int,cnt1:bigint,avg(cd_dep_count):double,max(cd_dep_count):int,sum(cd_dep_count):bigint,cd_dep_employed_count:int,cnt2:bigint,avg(cd_dep_employed_count):double,max(cd_dep_employed_count):int,sum(cd_dep_employed_count):bigint,cd_dep_college_count:int,cnt3:bigint,avg(cd_dep_college_count):double,max(cd_dep_college_count):int,sum(cd_dep_college_count):bigint>\n-- !query output\nNULL\tF\tD\t0\t1\t0.0\t0\t0\t2\t1\t2.0\t2\t2\t2\t1\t2.0\t2\t2\nNULL\tF\tD\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\t4\t1\t4.0\t4\t4\nNULL\tF\tD\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\t2\t1\t2.0\t2\t2\nNULL\tF\tD\t0\t1\t0.0\t0\t0\t6\t1\t6.0\t6\t6\t4\t1\t4.0\t4\t4\nNULL\tF\tD\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\nNULL\tF\tD\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\t4\t1\t4.0\t4\t4\nNULL\tF\tD\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\t5\t1\t5.0\t5\t5\nNULL\tF\tD\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\t4\t1\t4.0\t4\t4\nNULL\tF\tD\t2\t1\t2.0\t2\t2\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\nNULL\tF\tD\t2\t1\t2.0\t2\t2\t6\t1\t6.0\t6\t6\t1\t1\t1.0\t1\t1\nNULL\tF\tD\t3\t1\t3.0\t3\t3\t3\t1\t3.0\t3\t3\t2\t1\t2.0\t2\t2\nNULL\tF\tD\t3\t1\t3.0\t3\t3\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\nNULL\tF\tD\t3\t1\t3.0\t3\t3\t4\t1\t4.0\t4\t4\t1\t1\t1.0\t1\t1\nNULL\tF\tD\t4\t1\t4.0\t4\t4\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\nNULL\tF\tD\t4\t1\t4.0\t4\t4\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\nNULL\tF\tD\t4\t1\t4.0\t4\t4\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\nNULL\tF\tD\t4\t1\t4.0\t4\t4\t5\t1\t5.0\t5\t5\t6\t1\t6.0\t6\t6\nNULL\tF\tD\t5\t1\t5.0\t5\t5\t4\t1\t4.0\t4\t4\t3\t1\t3.0\t3\t3\nNULL\tF\tD\t5\t1\t5.0\t5\t5\t5\t1\t5.0\t5\t5\t2\t1\t2.0\t2\t2\nNULL\tF\tD\t6\t1\t6.0\t6\t6\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\nNULL\tF\tD\t6\t1\t6.0\t6\t6\t2\t1\t2.0\t2\t2\t2\t1\t2.0\t2\t2\nNULL\tF\tD\t6\t1\t6.0\t6\t6\t4\t1\t4.0\t4\t4\t1\t1\t1.0\t1\t1\nNULL\tF\tM\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\t5\t1\t5.0\t5\t5\nNULL\tF\tM\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\nNULL\tF\tM\t1\t1\t1.0\t1\t1\t6\t1\t6.0\t6\t6\t0\t1\t0.0\t0\t0\nNULL\tF\tM\t1\t1\t1.0\t1\t1\t6\t1\t6.0\t6\t6\t1\t1\t1.0\t1\t1\nNULL\tF\tM\t2\t1\t2.0\t2\t2\t2\t1\t2.0\t2\t2\t6\t1\t6.0\t6\t6\nNULL\tF\tM\t2\t1\t2.0\t2\t2\t4\t1\t4.0\t4\t4\t4\t1\t4.0\t4\t4\nNULL\tF\tM\t3\t1\t3.0\t3\t3\t2\t1\t2.0\t2\t2\t1\t1\t1.0\t1\t1\nNULL\tF\tM\t3\t1\t3.0\t3\t3\t5\t1\t5.0\t5\t5\t0\t1\t0.0\t0\t0\nNULL\tF\tM\t3\t1\t3.0\t3\t3\t5\t1\t5.0\t5\t5\t1\t1\t1.0\t1\t1\nNULL\tF\tM\t4\t1\t4.0\t4\t4\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\nNULL\tF\tM\t4\t1\t4.0\t4\t4\t2\t1\t2.0\t2\t2\t1\t1\t1.0\t1\t1\nNULL\tF\tM\t4\t1\t4.0\t4\t4\t3\t1\t3.0\t3\t3\t3\t1\t3.0\t3\t3\nNULL\tF\tM\t5\t1\t5.0\t5\t5\t2\t1\t2.0\t2\t2\t2\t1\t2.0\t2\t2\nNULL\tF\tM\t6\t1\t6.0\t6\t6\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\nNULL\tF\tM\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\t6\t1\t6.0\t6\t6\nNULL\tF\tS\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\nNULL\tF\tS\t1\t1\t1.0\t1\t1\t0\t1\t0.0\t0\t0\t4\t1\t4.0\t4\t4\nNULL\tF\tS\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\t2\t1\t2.0\t2\t2\nNULL\tF\tS\t1\t1\t1.0\t1\t1\t2\t1\t2.0\t2\t2\t6\t1\t6.0\t6\t6\nNULL\tF\tS\t1\t1\t1.0\t1\t1\t5\t1\t5.0\t5\t5\t5\t1\t5.0\t5\t5\nNULL\tF\tS\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\nNULL\tF\tS\t2\t2\t2.0\t2\t4\t5\t2\t5.0\t5\t10\t6\t2\t6.0\t6\t12\nNULL\tF\tS\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\t4\t1\t4.0\t4\t4\nNULL\tF\tS\t3\t1\t3.0\t3\t3\t2\t1\t2.0\t2\t2\t1\t1\t1.0\t1\t1\nNULL\tF\tS\t3\t1\t3.0\t3\t3\t2\t1\t2.0\t2\t2\t5\t1\t5.0\t5\t5\nNULL\tF\tS\t3\t1\t3.0\t3\t3\t3\t1\t3.0\t3\t3\t3\
t1\t3.0\t3\t3\nNULL\tF\tS\t4\t1\t4.0\t4\t4\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\nNULL\tF\tS\t4\t1\t4.0\t4\t4\t2\t1\t2.0\t2\t2\t4\t1\t4.0\t4\t4\nNULL\tF\tS\t5\t1\t5.0\t5\t5\t6\t1\t6.0\t6\t6\t0\t1\t0.0\t0\t0\nNULL\tF\tU\t0\t1\t0.0\t0\t0\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\nNULL\tF\tU\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\nNULL\tF\tU\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\t2\t1\t2.0\t2\t2\nNULL\tF\tU\t1\t1\t1.0\t1\t1\t5\t1\t5.0\t5\t5\t6\t1\t6.0\t6\t6\nNULL\tF\tU\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\t1\t1\t1.0\t1\t1\nNULL\tF\tU\t2\t1\t2.0\t2\t2\t4\t1\t4.0\t4\t4\t4\t1\t4.0\t4\t4\nNULL\tF\tU\t3\t2\t3.0\t3\t6\t1\t2\t1.0\t1\t2\t6\t2\t6.0\t6\t12\nNULL\tF\tU\t4\t1\t4.0\t4\t4\t0\t1\t0.0\t0\t0\t4\t1\t4.0\t4\t4\nNULL\tF\tU\t5\t1\t5.0\t5\t5\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\nNULL\tF\tU\t6\t1\t6.0\t6\t6\t2\t1\t2.0\t2\t2\t2\t1\t2.0\t2\t2\nNULL\tF\tU\t6\t1\t6.0\t6\t6\t4\t1\t4.0\t4\t4\t4\t1\t4.0\t4\t4\nNULL\tF\tU\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\t0\t1\t0.0\t0\t0\nNULL\tF\tU\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\t6\t1\t6.0\t6\t6\nNULL\tF\tW\t0\t1\t0.0\t0\t0\t0\t1\t0.0\t0\t0\t4\t1\t4.0\t4\t4\nNULL\tF\tW\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\t5\t1\t5.0\t5\t5\nNULL\tF\tW\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\t4\t1\t4.0\t4\t4\nNULL\tF\tW\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\nNULL\tF\tW\t3\t1\t3.0\t3\t3\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\nNULL\tF\tW\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\t6\t1\t6.0\t6\t6\nNULL\tF\tW\t4\t1\t4.0\t4\t4\t3\t1\t3.0\t3\t3\t1\t1\t1.0\t1\t1\nNULL\tF\tW\t5\t1\t5.0\t5\t5\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\nNULL\tF\tW\t5\t1\t5.0\t5\t5\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\nNULL\tF\tW\t5\t1\t5.0\t5\t5\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\nNULL\tF\tW\t5\t1\t5.0\t5\t5\t4\t1\t4.0\t4\t4\t6\t1\t6.0\t6\t6\nNULL\tF\tW\t6\t1\t6.0\t6\t6\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\nNULL\tF\tW\t6\t1\t6.0\t6\t6\t2\t1\t2.0\t2\t2\t3\t1\t3.0\t3\t3\nNULL\tF\tW\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\t5\t1\t5.0\t5\t5\nNULL\tM\tD\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\nNULL\tM\tD\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\nNULL\tM\tD\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\t2\t1\t2.0\t2\t2\nNULL\tM\tD\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\t6\t1\t6.0\t6\t6\nNULL\tM\tD\t2\t1\t2.0\t2\t2\t4\t1\t4.0\t4\t4\t4\t1\t4.0\t4\t4\nNULL\tM\tD\t2\t1\t2.0\t2\t2\t5\t1\t5.0\t5\t5\t3\t1\t3.0\t3\t3\nNULL\tM\tD\t3\t1\t3.0\t3\t3\t1\t1\t1.0\t1\t1\t5\t1\t5.0\t5\t5\nNULL\tM\tD\t3\t1\t3.0\t3\t3\t2\t1\t2.0\t2\t2\t3\t1\t3.0\t3\t3\nNULL\tM\tD\t4\t1\t4.0\t4\t4\t5\t1\t5.0\t5\t5\t2\t1\t2.0\t2\t2\nNULL\tM\tD\t6\t1\t6.0\t6\t6\t1\t1\t1.0\t1\t1\t6\t1\t6.0\t6\t6\nNULL\tM\tD\t6\t1\t6.0\t6\t6\t3\t1\t3.0\t3\t3\t1\t1\t1.0\t1\t1\nNULL\tM\tM\t0\t1\t0.0\t0\t0\t0\t1\t0.0\t0\t0\t1\t1\t1.0\t1\t1\nNULL\tM\tM\t0\t2\t0.0\t0\t0\t1\t2\t1.0\t1\t2\t2\t2\t2.0\t2\t4\nNULL\tM\tM\t0\t1\t0.0\t0\t0\t2\t1\t2.0\t2\t2\t1\t1\t1.0\t1\t1\nNULL\tM\tM\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\t5\t1\t5.0\t5\t5\nNULL\tM\tM\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\t0\t1\t0.0\t0\t0\nNULL\tM\tM\t1\t1\t1.0\t1\t1\t0\t1\t0.0\t0\t0\t1\t1\t1.0\t1\t1\nNULL\tM\tM\t1\t1\t1.0\t1\t1\t0\t1\t0.0\t0\t0\t2\t1\t2.0\t2\t2\nNULL\tM\tM\t2\t1\t2.0\t2\t2\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\nNULL\tM\tM\t3\t1\t3.0\t3\t3\t5\t1\t5.0\t5\t5\t1\t1\t1.0\t1\t1\nNULL\tM\tM\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\t4\t1\t4.0\t4\t4\nNULL\tM\tM\t4\t1\t4.0\t4\t4\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q35a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<ca_state:string,cd_gender:string,cd_marital_status:string,cd_dep_count:int,cnt1:bigint,avg(cd_dep_count):double,max(cd_dep_count):int,sum(cd_dep_count):bigint,cd_dep_employed_count:int,cnt2:bigint,avg(cd_dep_employed_count):double,max(cd_dep_employed_count):int,sum(cd_dep_employed_count):bigint,cd_dep_college_count:int,cnt3:bigint,avg(cd_dep_college_count):double,max(cd_dep_college_count):int,sum(cd_dep_college_count):bigint>\n-- !query output\nNULL\tF\tD\t0\t1\t0.0\t0\t0\t2\t1\t2.0\t2\t2\t2\t1\t2.0\t2\t2\nNULL\tF\tD\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\nNULL\tF\tD\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\t3\t1\t3.0\t3\t3\nNULL\tF\tD\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\t6\t1\t6.0\t6\t6\nNULL\tF\tD\t2\t1\t2.0\t2\t2\t2\t1\t2.0\t2\t2\t5\t1\t5.0\t5\t5\nNULL\tF\tD\t2\t1\t2.0\t2\t2\t6\t1\t6.0\t6\t6\t3\t1\t3.0\t3\t3\nNULL\tF\tD\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\t2\t1\t2.0\t2\t2\nNULL\tF\tD\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\t3\t1\t3.0\t3\t3\nNULL\tF\tD\t4\t1\t4.0\t4\t4\t1\t1\t1.0\t1\t1\t6\t1\t6.0\t6\t6\nNULL\tF\tD\t4\t1\t4.0\t4\t4\t3\t1\t3.0\t3\t3\t1\t1\t1.0\t1\t1\nNULL\tF\tD\t4\t1\t4.0\t4\t4\t4\t1\t4.0\t4\t4\t4\t1\t4.0\t4\t4\nNULL\tF\tD\t4\t2\t4.0\t4\t8\t5\t2\t5.0\t5\t10\t4\t2\t4.0\t4\t8\nNULL\tF\tD\t5\t1\t5.0\t5\t5\t5\t1\t5.0\t5\t5\t2\t1\t2.0\t2\t2\nNULL\tF\tD\t5\t1\t5.0\t5\t5\t6\t1\t6.0\t6\t6\t4\t1\t4.0\t4\t4\nNULL\tF\tD\t6\t1\t6.0\t6\t6\t2\t1\t2.0\t2\t2\t1\t1\t1.0\t1\t1\nNULL\tF\tD\t6\t1\t6.0\t6\t6\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\nNULL\tF\tD\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\t1\t1\t1.0\t1\t1\nNULL\tF\tD\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\t4\t1\t4.0\t4\t4\nNULL\tF\tM\t0\t1\t0.0\t0\t0\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\nNULL\tF\tM\t0\t1\t0.0\t0\t0\t2\t1\t2.0\t2\t2\t6\t1\t6.0\t6\t6\nNULL\tF\tM\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\t4\t1\t4.0\t4\t4\nNULL\tF\tM\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\t3\t1\t3.0\t3\t3\nNULL\tF\tM\t1\t1\t1.0\t1\t1\t6\t1\t6.0\t6\t6\t0\t1\t0.0\t0\t0\nNULL\tF\tM\t1\t1\t1.0\t1\t1\t6\t1\t6.0\t6\t6\t3\t1\t3.0\t3\t3\nNULL\tF\tM\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\t1\t1\t1.0\t1\t1\nNULL\tF\tM\t3\t1\t3.0\t3\t3\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\nNULL\tF\tM\t3\t2\t3.0\t3\t6\t4\t2\t4.0\t4\t8\t5\t2\t5.0\t5\t10\nNULL\tF\tM\t3\t1\t3.0\t3\t3\t5\t1\t5.0\t5\t5\t6\t1\t6.0\t6\t6\nNULL\tF\tM\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\t1\t1\t1.0\t1\t1\nNULL\tF\tM\t4\t1\t4.0\t4\t4\t3\t1\t3.0\t3\t3\t2\t1\t2.0\t2\t2\nNULL\tF\tM\t4\t1\t4.0\t4\t4\t6\t1\t6.0\t6\t6\t1\t1\t1.0\t1\t1\nNULL\tF\tM\t4\t1\t4.0\t4\t4\t6\t1\t6.0\t6\t6\t4\t1\t4.0\t4\t4\nNULL\tF\tM\t5\t1\t5.0\t5\t5\t1\t1\t1.0\t1\t1\t5\t1\t5.0\t5\t5\nNULL\tF\tM\t5\t1\t5.0\t5\t5\t2\t1\t2.0\t2\t2\t2\t1\t2.0\t2\t2\nNULL\tF\tM\t6\t1\t6.0\t6\t6\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\nNULL\tF\tM\t6\t1\t6.0\t6\t6\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\nNULL\tF\tM\t6\t1\t6.0\t6\t6\t6\t1\t6.0\t6\t6\t3\t1\t3.0\t3\t3\nNULL\tF\tS\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\t3\t1\t3.0\t3\t3\nNULL\tF\tS\t0\t1\t0.0\t0\t0\t4\t1\t4.0\t4\t4\t2\t1\t2.0\t2\t2\nNULL\tF\tS\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\t1\t1\t1.0\t1\t1\nNULL\tF\tS\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\t2\t1\t2.0\t2\t2\nNULL\tF\tS\t1\t1\t1.0\t1\t1\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\nNULL\tF\tS\t1\t1\t1.0\t1\t1\t2\t1\t2.0\t2\t2\t3\t1\t3.0\t3\t3\nNULL\tF\tS\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\nNULL\tF\tS\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\t6\t1\t6.0\t6\t6\nNULL\tF\tS\t2\t1\t2.0\t2\t2\t5\t1\t5.0\t5\t5\t4\t1\t4.0\t4\t4\nNULL\tF\tS\t2\t1\t2.0\t2\t2\t6\t1\t6.0\t6\t6\t6\t1\t6.0\t6\t6\nNULL\tF\tS\t4\t1\t4.0\t4\t4\t3\t1\t3.0\t3\t3\t5\
t1\t5.0\t5\t5\nNULL\tF\tU\t0\t1\t0.0\t0\t0\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\nNULL\tF\tU\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\nNULL\tF\tU\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\t6\t1\t6.0\t6\t6\nNULL\tF\tU\t0\t1\t0.0\t0\t0\t6\t1\t6.0\t6\t6\t6\t1\t6.0\t6\t6\nNULL\tF\tU\t1\t1\t1.0\t1\t1\t0\t1\t0.0\t0\t0\t6\t1\t6.0\t6\t6\nNULL\tF\tU\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\t5\t1\t5.0\t5\t5\nNULL\tF\tU\t1\t1\t1.0\t1\t1\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\nNULL\tF\tU\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\nNULL\tF\tU\t2\t2\t2.0\t2\t4\t1\t2\t1.0\t1\t2\t0\t2\t0.0\t0\t0\nNULL\tF\tU\t2\t1\t2.0\t2\t2\t3\t1\t3.0\t3\t3\t4\t1\t4.0\t4\t4\nNULL\tF\tU\t2\t1\t2.0\t2\t2\t4\t1\t4.0\t4\t4\t6\t1\t6.0\t6\t6\nNULL\tF\tU\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\t6\t1\t6.0\t6\t6\nNULL\tF\tU\t3\t1\t3.0\t3\t3\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\nNULL\tF\tU\t4\t1\t4.0\t4\t4\t0\t1\t0.0\t0\t0\t6\t1\t6.0\t6\t6\nNULL\tF\tU\t4\t1\t4.0\t4\t4\t1\t1\t1.0\t1\t1\t0\t1\t0.0\t0\t0\nNULL\tF\tU\t5\t1\t5.0\t5\t5\t0\t1\t0.0\t0\t0\t2\t1\t2.0\t2\t2\nNULL\tF\tU\t5\t1\t5.0\t5\t5\t1\t1\t1.0\t1\t1\t2\t1\t2.0\t2\t2\nNULL\tF\tU\t5\t1\t5.0\t5\t5\t4\t1\t4.0\t4\t4\t4\t1\t4.0\t4\t4\nNULL\tF\tU\t6\t1\t6.0\t6\t6\t0\t1\t0.0\t0\t0\t0\t1\t0.0\t0\t0\nNULL\tF\tU\t6\t1\t6.0\t6\t6\t6\t1\t6.0\t6\t6\t4\t1\t4.0\t4\t4\nNULL\tF\tW\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\t4\t1\t4.0\t4\t4\nNULL\tF\tW\t3\t1\t3.0\t3\t3\t4\t1\t4.0\t4\t4\t1\t1\t1.0\t1\t1\nNULL\tF\tW\t4\t1\t4.0\t4\t4\t3\t1\t3.0\t3\t3\t4\t1\t4.0\t4\t4\nNULL\tF\tW\t4\t1\t4.0\t4\t4\t5\t1\t5.0\t5\t5\t3\t1\t3.0\t3\t3\nNULL\tF\tW\t5\t1\t5.0\t5\t5\t2\t1\t2.0\t2\t2\t3\t1\t3.0\t3\t3\nNULL\tF\tW\t6\t1\t6.0\t6\t6\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\nNULL\tF\tW\t6\t1\t6.0\t6\t6\t2\t1\t2.0\t2\t2\t2\t1\t2.0\t2\t2\nNULL\tF\tW\t6\t1\t6.0\t6\t6\t2\t1\t2.0\t2\t2\t3\t1\t3.0\t3\t3\nNULL\tF\tW\t6\t1\t6.0\t6\t6\t2\t1\t2.0\t2\t2\t6\t1\t6.0\t6\t6\nNULL\tF\tW\t6\t1\t6.0\t6\t6\t4\t1\t4.0\t4\t4\t6\t1\t6.0\t6\t6\nNULL\tF\tW\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\t0\t1\t0.0\t0\t0\nNULL\tF\tW\t6\t1\t6.0\t6\t6\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\nNULL\tM\tD\t0\t1\t0.0\t0\t0\t0\t1\t0.0\t0\t0\t6\t1\t6.0\t6\t6\nNULL\tM\tD\t0\t1\t0.0\t0\t0\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\nNULL\tM\tD\t0\t1\t0.0\t0\t0\t5\t1\t5.0\t5\t5\t5\t1\t5.0\t5\t5\nNULL\tM\tD\t1\t1\t1.0\t1\t1\t2\t1\t2.0\t2\t2\t0\t1\t0.0\t0\t0\nNULL\tM\tD\t1\t1\t1.0\t1\t1\t2\t1\t2.0\t2\t2\t1\t1\t1.0\t1\t1\nNULL\tM\tD\t1\t1\t1.0\t1\t1\t2\t1\t2.0\t2\t2\t6\t1\t6.0\t6\t6\nNULL\tM\tD\t1\t1\t1.0\t1\t1\t3\t1\t3.0\t3\t3\t4\t1\t4.0\t4\t4\nNULL\tM\tD\t2\t1\t2.0\t2\t2\t1\t1\t1.0\t1\t1\t0\t1\t0.0\t0\t0\nNULL\tM\tD\t2\t1\t2.0\t2\t2\t1\t1\t1.0\t1\t1\t4\t1\t4.0\t4\t4\nNULL\tM\tD\t2\t1\t2.0\t2\t2\t4\t1\t4.0\t4\t4\t6\t1\t6.0\t6\t6\nNULL\tM\tD\t2\t1\t2.0\t2\t2\t6\t1\t6.0\t6\t6\t1\t1\t1.0\t1\t1\nNULL\tM\tD\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\t6\t1\t6.0\t6\t6\nNULL\tM\tD\t3\t1\t3.0\t3\t3\t3\t1\t3.0\t3\t3\t0\t1\t0.0\t0\t0\nNULL\tM\tD\t3\t1\t3.0\t3\t3\t3\t1\t3.0\t3\t3\t3\t1\t3.0\t3\t3\nNULL\tM\tD\t3\t1\t3.0\t3\t3\t5\t1\t5.0\t5\t5\t2\t1\t2.0\t2\t2\nNULL\tM\tD\t3\t1\t3.0\t3\t3\t5\t1\t5.0\t5\t5\t5\t1\t5.0\t5\t5\nNULL\tM\tD\t4\t1\t4.0\t4\t4\t4\t1\t4.0\t4\t4\t5\t1\t5.0\t5\t5\nNULL\tM\tD\t4\t1\t4.0\t4\t4\t6\t1\t6.0\t6\t6\t5\t1\t5.0\t5\t5\nNULL\tM\tD\t5\t1\t5.0\t5\t5\t2\t1\t2.0\t2\t2\t3\t1\t3.0\t3\t3\nNULL\tM\tD\t6\t1\t6.0\t6\t6\t3\t1\t3.0\t3\t3\t1\t1\t1.0\t1\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q36a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<gross_margin:decimal(38,20),i_category:string,i_class:string,lochierarchy:int,rank_within_parent:int>\n-- !query output\n-0.43310777865000000000\tNULL\tNULL\t2\t1\n-0.44057752675000000000\tHome                                              \tNULL\t1\t1\n-0.43759152110000000000\tMusic                                             \tNULL\t1\t2\n-0.43708103961000000000\tNULL\tNULL\t1\t3\n-0.43616253139000000000\tShoes                                             \tNULL\t1\t4\n-0.43567118609000000000\tChildren                                          \tNULL\t1\t5\n-0.43423932352000000000\tSports                                            \tNULL\t1\t6\n-0.43342977300000000000\tElectronics                                       \tNULL\t1\t7\n-0.43243283121000000000\tWomen                                             \tNULL\t1\t8\n-0.43164166900000000000\tMen                                               \tNULL\t1\t9\n-0.42516187690000000000\tBooks                                             \tNULL\t1\t10\n-0.42448713381000000000\tJewelry                                           \tNULL\t1\t11\n-0.73902664238792748962\tNULL\tshirts                                            \t0\t1\n-0.61125804873635587486\tNULL\tcountry                                           \t0\t2\n-0.53129803597069255822\tNULL\tdresses                                           \t0\t3\n-0.51266635289382758517\tNULL\tathletic                                          \t0\t4\n-0.45290387783638603924\tNULL\tmens                                              \t0\t5\n-0.41288056661656330013\tNULL\taccessories                                       \t0\t6\n-0.40784754677005682440\tNULL\tNULL\t0\t7\n-0.34254844860867375832\tNULL\tbaseball                                          \t0\t8\n-0.32511461675631534897\tNULL\tinfants                                           \t0\t9\n-0.44733955704648003493\tBooks                                             \tcomputers                                         \t0\t1\n-0.44221358112622373783\tBooks                                             \thome repair                                       \t0\t2\n-0.44131129175272951442\tBooks                                             \tromance                                           \t0\t3\n-0.43954111564375046074\tBooks                                             \thistory                                           \t0\t4\n-0.43921337505389731821\tBooks                                             \tmystery                                           \t0\t5\n-0.43904020269360481109\tBooks                                             \tsports                                            \t0\t6\n-0.42821476999837619396\tBooks                                             \ttravel                                            \t0\t7\n-0.42609067296303848297\tBooks                                             \tcooking                                           \t0\t8\n-0.42538995145338568328\tBooks                                             \tfiction                                           \t0\t9\n-0.42446563616188232944\tBooks                                             \tarts                                              \t0\t10\n-0.42424821311884350413\tBooks                                             \tparenting                                         \t0\t11\n-0.41822014479424203008\tBooks                                             \treference                                 
        \t0\t12\n-0.41350839325516811781\tBooks                                             \tbusiness                                          \t0\t13\n-0.40935208137315013129\tBooks                                             \tscience                                           \t0\t14\n-0.40159380735731858928\tBooks                                             \tself-help                                         \t0\t15\n-0.36957884843305744526\tBooks                                             \tentertainments                                    \t0\t16\n-0.44602461556731552282\tChildren                                          \tschool-uniforms                                   \t0\t1\n-0.44141106040000560852\tChildren                                          \ttoddlers                                          \t0\t2\n-0.43479886701046623711\tChildren                                          \tinfants                                           \t0\t3\n-0.41900662971936329442\tChildren                                          \tnewborn                                           \t0\t4\n-0.41526603781609697786\tChildren                                          \tNULL\t0\t5\n-0.45347482218635333366\tElectronics                                       \tpersonal                                          \t0\t1\n-0.44349670349829474271\tElectronics                                       \tstereo                                            \t0\t2\n-0.44262427232850112058\tElectronics                                       \tautomotive                                        \t0\t3\n-0.44115886172705231970\tElectronics                                       \tportable                                          \t0\t4\n-0.43972786651639318010\tElectronics                                       \tmemory                                            \t0\t5\n-0.43889275271590953040\tElectronics                                       \tscanners                                          \t0\t6\n-0.43879181695132886061\tElectronics                                       \tkaroke                                            \t0\t7\n-0.43743655149948399284\tElectronics                                       \tdvd/vcr players                                   \t0\t8\n-0.43737666390514154910\tElectronics                                       \tcameras                                           \t0\t9\n-0.43390499017233926812\tElectronics                                       \twireless                                          \t0\t10\n-0.43163869754114299547\tElectronics                                       \taudio                                             \t0\t11\n-0.42958938669780912634\tElectronics                                       \tcamcorders                                        \t0\t12\n-0.42872845803629855724\tElectronics                                       \tmusical                                           \t0\t13\n-0.42228240153396399656\tElectronics                                       \ttelevisions                                       \t0\t14\n-0.41893847772039275795\tElectronics                                       \tmonitors                                          \t0\t15\n-0.39793878022746331540\tElectronics                                       \tdisk drives                                       \t0\t16\n-0.49051156860507320113\tHome                                              \tNULL\t0\t1\n-0.48431476750686752965\tHome                                              \tblinds/shades                   
                  \t0\t2\n-0.47545837941951440918\tHome                                              \tbathroom                                          \t0\t3\n-0.45726228921216284093\tHome                                              \trugs                                              \t0\t4\n-0.45540507568891021759\tHome                                              \tfurniture                                         \t0\t5\n-0.45303572267019508501\tHome                                              \tflatware                                          \t0\t6\n-0.44755542058111800358\tHome                                              \ttables                                            \t0\t7\n-0.44419847780930149402\tHome                                              \twallpaper                                         \t0\t8\n-0.44092345226680695671\tHome                                              \tglassware                                         \t0\t9\n-0.43877591834074789745\tHome                                              \tdecor                                             \t0\t10\n-0.43765482553654514822\tHome                                              \taccent                                            \t0\t11\n-0.43188199218974854630\tHome                                              \tbedding                                           \t0\t12\n-0.43107417904272222899\tHome                                              \tkids                                              \t0\t13\n-0.42474436355625900935\tHome                                              \tlighting                                          \t0\t14\n-0.41783311109052416746\tHome                                              \tcurtains/drapes                                   \t0\t15\n-0.41767111806961188479\tHome                                              \tmattresses                                        \t0\t16\n-0.40562188698541221499\tHome                                              \tpaint                                             \t0\t17\n-0.45165056505480816921\tJewelry                                           \tjewelry boxes                                     \t0\t1\n-0.44372227804836590137\tJewelry                                           \testate                                            \t0\t2\n-0.44251815032563188894\tJewelry                                           \tgold                                              \t0\t3\n-0.43978127753996883542\tJewelry                                           \tconsignment                                       \t0\t4\n-0.43821750044359339153\tJewelry                                           \tcustom                                            \t0\t5\n-0.43439645036479672989\tJewelry                                           \tbracelets                                         \t0\t6\n-0.43208398325687772942\tJewelry                                           \tloose stones                                      \t0\t7\n-0.43060897375114375156\tJewelry                                           \tdiamonds                                          \t0\t8\n-0.42847505748860847066\tJewelry                                           \tcostume                                           \t0\t9\n-0.42667449062277843561\tJewelry                                           \trings                                             \t0\t10\n-0.41987969011585456826\tJewelry                                           \tmens watch                                        
\t0\t11\n-0.41624621972944533035\tJewelry                                           \tsemi-precious                                     \t0\t12\n-0.41148949162100715771\tJewelry                                           \twomens watch                                      \t0\t13\n-0.39725668174847694299\tJewelry                                           \tbirdal                                            \t0\t14\n-0.39665274051903254057\tJewelry                                           \tpendants                                          \t0\t15\n-0.38423525233438861010\tJewelry                                           \tearings                                           \t0\t16\n-0.44464388887858793403\tMen                                               \tshirts                                            \t0\t1\n-0.43719860800637369827\tMen                                               \taccessories                                       \t0\t2\n-0.43164606665359630905\tMen                                               \tsports-apparel                                    \t0\t3\n-0.41530906677293519754\tMen                                               \tpants                                             \t0\t4\n-0.38332708894803499123\tMen                                               \tNULL\t0\t5\n-0.47339698705534020269\tMusic                                             \tNULL\t0\t1\n-0.44193214675249008923\tMusic                                             \trock                                              \t0\t2\n-0.44008174913565459246\tMusic                                             \tcountry                                           \t0\t3\n-0.43863444992223641373\tMusic                                             \tpop                                               \t0\t4\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q47.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_category:string,d_year:int,d_moy:int,avg_monthly_sales:decimal(21,6),sum_sales:decimal(17,2),psum:decimal(17,2),nsum:decimal(17,2)>\n-- !query output\nShoes                                             \t1999\t3\t5607.487500\t2197.48\t3271.66\t2831.67\nShoes                                             \t1999\t2\t5643.938333\t2393.31\t4463.11\t2652.44\nShoes                                             \t1999\t4\t5640.362500\t2416.57\t3348.90\t2987.78\nMen                                               \t1999\t6\t4702.116667\t1534.95\t2666.37\t2514.13\nMen                                               \t1999\t7\t5330.618333\t2218.63\t3182.74\t7436.83\nShoes                                             \t1999\t4\t5338.852500\t2233.60\t3470.43\t2832.41\nMusic                                             \t1999\t5\t5139.465000\t2034.96\t3149.72\t3648.17\nMen                                               \t1999\t7\t5748.707500\t2645.88\t3432.67\t7646.91\nMen                                               \t1999\t3\t4915.190000\t1815.92\t2884.81\t2956.00\nWomen                                             \t1999\t6\t4586.300000\t1508.07\t2992.12\t3059.37\nShoes                                             \t1999\t4\t5374.032500\t2322.80\t2484.06\t3313.69\nMen                                               \t1999\t6\t4596.057500\t1577.69\t2457.43\t2439.68\nMen                                               \t1999\t3\t5839.670833\t2825.07\t3157.46\t3531.36\nMen                                               \t1999\t4\t5342.149167\t2347.82\t2787.31\t3588.16\nShoes                                             \t1999\t3\t5643.938333\t2652.44\t2393.31\t3008.88\nMen                                               \t1999\t3\t5475.719167\t2515.22\t2709.11\t2702.85\nShoes                                             \t1999\t6\t5338.852500\t2388.17\t2832.41\t4216.10\nShoes                                             \t1999\t2\t5065.706667\t2119.85\t3439.05\t2640.59\nShoes                                             \t1999\t2\t4713.009167\t1774.19\t3366.21\t1940.05\nShoes                                             \t1999\t3\t5374.032500\t2484.06\t2994.92\t2322.80\nMen                                               \t1999\t7\t5342.149167\t2454.72\t2766.42\t7665.06\nMen                                               \t1999\t4\t4741.993333\t1879.47\t3419.59\t2634.29\nMusic                                             \t1999\t4\t4723.742500\t1866.30\t2384.82\t2931.55\nShoes                                             \t1999\t4\t4732.205000\t1875.08\t2686.46\t3422.03\nChildren                                          \t1999\t6\t4849.967500\t2002.59\t2590.23\t3380.67\nMen                                               \t1999\t6\t4920.419167\t2077.36\t3402.55\t3347.44\nMen                                               \t1999\t4\t4596.057500\t1762.17\t2728.14\t2457.43\nChildren                                          \t1999\t4\t4739.001667\t1923.04\t2309.91\t2849.64\nMusic                                             \t1999\t3\t4816.848333\t2010.47\t2539.57\t2940.38\nShoes                                             \t1999\t2\t4707.697500\t1903.70\t2693.49\t3474.34\nShoes                                             \t1999\t2\t4443.995833\t1642.32\t3972.50\t2319.04\nMen                                               \t1999\t6\t5839.670833\t3053.78\t3151.27\t3622.65\nShoes                                             
\t1999\t4\t5607.487500\t2831.67\t2197.48\t4187.54\nMen                                               \t1999\t3\t5598.894167\t2824.13\t3154.80\t3135.60\nShoes                                             \t1999\t3\t4713.009167\t1940.05\t1774.19\t2496.18\nMen                                               \t1999\t4\t5475.719167\t2702.85\t2515.22\t4364.56\nShoes                                             \t1999\t4\t4596.537500\t1825.00\t2777.56\t3234.34\nMusic                                             \t1999\t6\t4332.550833\t1563.26\t2484.37\t2460.11\nMen                                               \t1999\t2\t5102.436667\t2333.32\t3417.85\t2536.68\nMen                                               \t1999\t2\t5475.719167\t2709.11\t4740.72\t2515.22\nMen                                               \t1999\t6\t5475.719167\t2723.08\t4364.56\t4000.42\nShoes                                             \t1999\t2\t5338.852500\t2587.72\t3837.18\t3470.43\nChildren                                          \t1999\t6\t4426.704167\t1683.86\t2186.50\t1970.76\nShoes                                             \t1999\t6\t5643.938333\t2902.72\t3046.01\t3994.18\nShoes                                             \t1999\t6\t5374.032500\t2637.31\t3313.69\t3158.65\nMen                                               \t1999\t3\t4920.419167\t2183.89\t3598.63\t2841.78\nMen                                               \t1999\t3\t5330.618333\t2602.65\t2613.45\t2624.00\nWomen                                             \t1999\t6\t4366.627500\t1640.10\t3159.43\t2882.91\nMusic                                             \t1999\t2\t5139.465000\t2413.63\t4627.89\t2773.28\nMen                                               \t1999\t2\t4929.611667\t2207.89\t3940.06\t2246.76\nWomen                                             \t1999\t2\t4551.444167\t1833.08\t3729.30\t2940.96\nMen                                               \t1999\t2\t5330.618333\t2613.45\t3859.71\t2602.65\nChildren                                          \t1999\t2\t4640.935833\t1927.95\t3524.16\t2584.03\nChildren                                          \t1999\t7\t4565.828333\t1853.57\t2846.40\t6447.36\nWomen                                             \t1999\t6\t4551.444167\t1840.98\t2478.75\t3174.74\nMen                                               \t1999\t4\t5330.618333\t2624.00\t2602.65\t3722.18\nChildren                                          \t1999\t3\t5024.325000\t2319.29\t2691.31\t2350.53\nChildren                                          \t1999\t2\t4836.831667\t2132.80\t3984.13\t2182.12\nChildren                                          \t1999\t4\t4640.935833\t1950.03\t2584.03\t2919.51\nShoes                                             \t1999\t3\t4791.740833\t2100.87\t2211.40\t3278.75\nMen                                               \t1999\t5\t5839.670833\t3151.27\t3531.36\t3053.78\nMen                                               \t1999\t3\t4929.611667\t2246.76\t2207.89\t3208.14\nMen                                               \t1999\t2\t5839.670833\t3157.46\t3425.68\t2825.07\nMen                                               \t1999\t5\t5311.965833\t2631.11\t2819.12\t3022.54\nMusic                                             \t1999\t2\t4388.770833\t1714.25\t2228.69\t2407.93\nShoes                                             \t1999\t2\t4989.784167\t2315.46\t4430.70\t2976.26\nChildren                                          \t1999\t4\t5024.325000\t2350.53\t2319.29\t2984.92\nWomen                                             
\t1999\t3\t4586.300000\t1917.37\t3282.12\t2557.46\nChildren                                          \t1999\t3\t4836.831667\t2182.12\t2132.80\t3466.25\nShoes                                             \t1999\t5\t5640.362500\t2987.78\t2416.57\t3437.87\nMen                                               \t1999\t4\t5071.261667\t2425.64\t2696.60\t2641.17\nShoes                                             \t1999\t6\t4443.995833\t1801.87\t2297.76\t3078.40\nChildren                                          \t1999\t2\t4480.353333\t1838.88\t3250.53\t2375.04\nMen                                               \t1999\t7\t5102.436667\t2464.06\t2798.80\t6978.09\nShoes                                             \t1999\t4\t5643.938333\t3008.88\t2652.44\t3046.01\nMen                                               \t1999\t2\t5071.261667\t2438.50\t4282.24\t2696.60\nChildren                                          \t1999\t6\t4525.318333\t1896.34\t2883.95\t2727.36\nChildren                                          \t1999\t4\t4212.889167\t1588.72\t2296.49\t3077.12\nShoes                                             \t1999\t4\t4884.260000\t2261.01\t2502.36\t3210.47\nMen                                               \t1999\t3\t5748.707500\t3132.90\t3747.34\t3768.23\nShoes                                             \t1999\t2\t5640.362500\t3027.85\t4285.15\t3348.90\nMusic                                             \t1999\t3\t4490.221667\t1883.06\t2321.31\t2211.82\nChildren                                          \t1999\t2\t4423.506667\t1816.43\t3192.19\t3334.11\nMusic                                             \t1999\t6\t4894.035000\t2288.21\t3156.77\t2845.45\nMusic                                             \t1999\t5\t4829.768333\t2225.46\t3380.94\t2782.42\nWomen                                             \t1999\t3\t4268.562500\t1665.68\t1758.65\t2082.99\nChildren                                          \t1999\t3\t4267.374167\t1667.03\t2579.77\t2802.05\nMen                                               \t1999\t3\t4987.821667\t2387.78\t2962.40\t2928.83\nWomen                                             \t1999\t3\t4309.575833\t1710.42\t2004.22\t2742.41\nShoes                                             \t1999\t5\t5643.938333\t3046.01\t3008.88\t2902.72\nMusic                                             \t1999\t4\t4518.543333\t1922.23\t2683.26\t2227.02\nChildren                                          \t1999\t3\t4441.555000\t1846.53\t3827.47\t3623.30\nMen                                               \t1999\t5\t5748.707500\t3156.57\t3768.23\t3432.67\nMusic                                             \t1999\t2\t4302.831667\t1712.70\t3561.46\t2414.41\nWomen                                             \t1999\t7\t4494.382500\t1904.98\t2308.58\t5603.14\nWomen                                             \t1999\t4\t4582.713333\t1994.71\t2408.40\t2321.48\nMusic                                             \t1999\t3\t4426.623333\t1840.51\t2707.80\t3147.10\nShoes                                             \t1999\t2\t4791.740833\t2211.40\t3912.90\t2100.87\nShoes                                             \t1999\t7\t5640.362500\t3062.31\t3437.87\t6376.30\nChildren                                          \t1999\t3\t4733.152500\t2155.79\t2710.50\t2685.74\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q49.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,item:int,return_ratio:decimal(35,20),return_rank:int,currency_rank:int>\n-- !query output\ncatalog\t16735\t0.50505050505050505051\t1\t1\ncatalog\t12633\t0.69662921348314606742\t2\t2\ncatalog\t13967\t0.70000000000000000000\t3\t3\ncatalog\t12819\t0.70129870129870129870\t4\t8\ncatalog\t16155\t0.72043010752688172043\t5\t4\ncatalog\t17681\t0.75268817204301075269\t6\t5\ncatalog\t5975\t0.76404494382022471910\t7\t6\ncatalog\t11451\t0.76744186046511627907\t8\t7\ncatalog\t1689\t0.80219780219780219780\t9\t9\ncatalog\t10311\t0.81818181818181818182\t10\t10\nstore\t5111\t0.78947368421052631579\t1\t1\nstore\t11073\t0.83505154639175257732\t2\t3\nstore\t14429\t0.84782608695652173913\t3\t2\nstore\t15927\t0.86419753086419753086\t4\t4\nstore\t10171\t0.86868686868686868687\t5\t5\nstore\t12783\t0.88775510204081632653\t6\t6\nstore\t11075\t0.89743589743589743590\t7\t7\nstore\t12889\t0.95652173913043478261\t8\t8\nstore\t1939\t0.99000000000000000000\t9\t9\nstore\t4333\t1.00000000000000000000\t10\t10\nstore\t10455\t1.00000000000000000000\t10\t10\nstore\t12975\t1.00000000000000000000\t10\t10\nweb\t10485\t0.48863636363636363636\t1\t1\nweb\t4483\t0.52688172043010752688\t2\t2\nweb\t8833\t0.58241758241758241758\t3\t3\nweb\t1165\t0.61458333333333333333\t4\t4\nweb\t17197\t0.73076923076923076923\t5\t5\nweb\t10319\t0.73469387755102040816\t6\t6\nweb\t13159\t0.75257731958762886598\t7\t7\nweb\t9629\t0.77894736842105263158\t8\t8\nweb\t5909\t0.78378378378378378378\t9\t9\nweb\t7057\t0.86746987951807228916\t10\t10\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q51a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<item_sk:int,d_date:date,web_sales:decimal(27,2),store_sales:decimal(27,2),web_cumulative:decimal(27,2),store_cumulative:decimal(27,2)>\n-- !query output\n9\t2001-01-20\t155.31\tNULL\t155.31\t4.26\n9\t2001-02-01\t168.10\t28.81\t168.10\t28.81\n9\t2001-02-04\tNULL\t34.51\t168.10\t34.51\n9\t2001-02-15\tNULL\t144.79\t168.10\t144.79\n9\t2001-04-02\t249.70\tNULL\t249.70\t181.21\n9\t2001-04-09\tNULL\t183.64\t249.70\t183.64\n9\t2001-04-17\t260.21\t230.47\t260.21\t230.47\n29\t2001-01-07\t145.14\tNULL\t145.14\t12.18\n29\t2001-01-10\tNULL\t53.52\t145.14\t53.52\n29\t2001-01-11\tNULL\t55.81\t145.14\t55.81\n29\t2001-02-05\tNULL\t88.85\t145.14\t88.85\n29\t2001-02-08\tNULL\t140.80\t145.14\t140.80\n29\t2001-02-28\t215.28\tNULL\t215.28\t212.84\n29\t2001-03-30\t340.91\tNULL\t340.91\t279.82\n29\t2001-04-07\tNULL\t306.91\t340.91\t306.91\n29\t2001-05-26\t392.06\t389.83\t392.06\t389.83\n29\t2001-06-12\tNULL\t391.96\t392.06\t391.96\n29\t2001-06-29\t580.42\tNULL\t580.42\t407.31\n29\t2001-07-08\tNULL\t429.12\t580.42\t429.12\n29\t2001-07-12\tNULL\t444.37\t580.42\t444.37\n29\t2001-07-26\tNULL\t501.13\t580.42\t501.13\n29\t2001-07-28\tNULL\t523.55\t580.42\t523.55\n29\t2001-07-31\tNULL\t547.65\t580.42\t547.65\n29\t2001-09-15\t755.81\tNULL\t755.81\t710.20\n29\t2001-09-16\tNULL\t742.83\t755.81\t742.83\n31\t2001-01-23\t160.83\t143.12\t160.83\t143.12\n31\t2001-01-24\t247.53\tNULL\t247.53\t143.12\n31\t2001-02-04\tNULL\t143.12\t247.53\t143.12\n31\t2001-02-07\tNULL\t166.46\t247.53\t166.46\n31\t2001-02-15\tNULL\t224.58\t247.53\t224.58\n31\t2001-02-22\tNULL\t243.99\t247.53\t243.99\n31\t2001-02-26\tNULL\t245.24\t247.53\t245.24\n33\t2001-02-06\t143.86\tNULL\t143.86\t100.10\n33\t2001-03-06\t260.39\tNULL\t260.39\t100.10\n33\t2001-03-08\t311.65\tNULL\t311.65\t100.10\n33\t2001-03-17\tNULL\t166.59\t311.65\t166.59\n33\t2001-04-04\tNULL\t195.94\t311.65\t195.94\n33\t2001-04-11\tNULL\t218.41\t311.65\t218.41\n33\t2001-04-15\tNULL\t258.16\t311.65\t258.16\n33\t2001-04-26\tNULL\t260.73\t311.65\t260.73\n35\t2001-04-11\t218.63\tNULL\t218.63\t175.11\n35\t2001-04-13\tNULL\t200.14\t218.63\t200.14\n35\t2001-04-15\tNULL\t213.01\t218.63\t213.01\n35\t2001-04-21\t236.74\tNULL\t236.74\t213.01\n35\t2001-05-12\t250.14\tNULL\t250.14\t213.01\n35\t2001-06-03\t315.73\tNULL\t315.73\t213.01\n35\t2001-06-09\tNULL\t213.01\t315.73\t213.01\n35\t2001-06-12\t350.47\tNULL\t350.47\t213.01\n35\t2001-06-16\tNULL\t240.39\t350.47\t240.39\n35\t2001-06-29\tNULL\t251.30\t350.47\t251.30\n35\t2001-07-03\tNULL\t251.62\t350.47\t251.62\n35\t2001-07-06\tNULL\t279.62\t350.47\t279.62\n35\t2001-07-10\tNULL\t281.30\t350.47\t281.30\n35\t2001-07-19\tNULL\t285.09\t350.47\t285.09\n35\t2001-07-22\tNULL\t306.31\t350.47\t306.31\n35\t2001-07-28\t421.29\tNULL\t421.29\t306.31\n35\t2001-07-29\t434.60\tNULL\t434.60\t306.31\n37\t2001-03-07\t104.29\tNULL\t104.29\t100.87\n47\t2001-01-01\t7.00\t5.55\t7.00\t5.55\n47\t2001-01-15\t72.85\tNULL\t72.85\t48.96\n47\t2001-01-22\tNULL\t58.39\t72.85\t58.39\n47\t2001-02-17\t139.16\tNULL\t139.16\t119.31\n49\t2001-05-05\t361.90\tNULL\t361.90\t340.45\n49\t2001-05-21\t397.02\tNULL\t397.02\t340.45\n49\t2001-05-25\t479.86\tNULL\t479.86\t340.45\n49\t2001-06-03\tNULL\t350.29\t479.86\t350.29\n49\t2001-06-10\t488.44\tNULL\t488.44\t350.29\n49\t2001-06-20\tNULL\t362.21\t488.44\t362.21\n49\t2001-06-28\t527.37\tNULL\t527.37\t362.21\n49\t2001-07-19\tNULL\t490.09\t527.37\t490.09\n49\t2001-07-23\t532.83\tNULL\t532.83\t490.09\n49\t2001-07-26\tNULL\t511.76\t532.83\t511.76\n49\t2001-07-31\t55
6.70\tNULL\t556.70\t511.76\n51\t2001-03-24\t142.03\tNULL\t142.03\t141.46\n51\t2001-04-11\t221.62\tNULL\t221.62\t208.13\n53\t2001-02-01\t150.99\tNULL\t150.99\t15.99\n53\t2001-02-03\tNULL\t92.68\t150.99\t92.68\n53\t2001-02-09\tNULL\t96.53\t150.99\t96.53\n53\t2001-02-18\tNULL\t98.68\t150.99\t98.68\n53\t2001-02-20\tNULL\t129.22\t150.99\t129.22\n57\t2001-03-15\t147.84\tNULL\t147.84\t108.81\n65\t2001-01-07\tNULL\t7.39\t143.43\t7.39\n65\t2001-01-08\tNULL\t21.35\t143.43\t21.35\n65\t2001-01-16\tNULL\t102.46\t143.43\t102.46\n67\t2001-02-02\t120.54\t13.19\t120.54\t13.19\n67\t2001-02-19\t219.36\tNULL\t219.36\t120.94\n67\t2001-03-12\tNULL\t203.25\t219.36\t203.25\n67\t2001-04-13\t330.07\tNULL\t330.07\t277.79\n67\t2001-04-23\tNULL\t284.23\t330.07\t284.23\n67\t2001-04-27\tNULL\t290.21\t330.07\t290.21\n67\t2001-04-28\tNULL\t320.26\t330.07\t320.26\n69\t2001-02-22\t54.17\tNULL\t54.17\t36.00\n69\t2001-02-27\tNULL\t45.08\t54.17\t45.08\n73\t2001-01-18\t184.35\tNULL\t184.35\t178.75\n73\t2001-01-19\t185.11\tNULL\t185.11\t178.75\n73\t2001-02-09\tNULL\t180.42\t185.11\t180.42\n75\t2001-01-15\tNULL\t9.11\t19.68\t9.11\n75\t2001-01-31\t36.37\tNULL\t36.37\t9.11\n75\t2001-02-03\tNULL\t14.06\t36.37\t14.06\n83\t2001-02-03\t72.95\tNULL\t72.95\t55.10\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q57.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_category:string,i_brand:string,d_year:int,d_moy:int,avg_monthly_sales:decimal(21,6),sum_sales:decimal(17,2),psum:decimal(17,2),nsum:decimal(17,2)>\n-- !query output\nMusic                                             \tamalgscholar #x                                   \t1999\t2\t6662.669167\t1961.57\t4348.07\t3386.25\nShoes                                             \tamalgedu pack #x                                  \t1999\t2\t6493.071667\t2044.05\t4348.88\t3443.20\nShoes                                             \texportiedu pack #x                                \t1999\t3\t7416.141667\t2980.15\t4654.22\t5157.83\nChildren                                          \timportoexporti #x                                 \t1999\t4\t6577.143333\t2152.15\t3291.07\t3659.32\nShoes                                             \timportoedu pack #x                                \t1999\t6\t6926.960833\t2523.33\t4014.93\t4254.99\nMen                                               \timportoimporto #x                                 \t1999\t2\t6707.315833\t2449.22\t4311.11\t3583.31\nMen                                               \tamalgimporto #x                                   \t1999\t4\t7098.680833\t2965.42\t3526.45\t4923.53\nMen                                               \texportiimporto #x                                 \t1999\t2\t7146.240000\t3013.99\t6183.83\t5091.17\nChildren                                          \tamalgexporti #x                                   \t1999\t4\t6364.467500\t2270.79\t3330.83\t3817.50\nMen                                               \tedu packimporto #x                                \t1999\t4\t7386.333333\t3329.74\t3488.01\t4860.20\nMen                                               \tedu packimporto #x                                \t1999\t2\t7386.333333\t3347.65\t4007.40\t3488.01\nMusic                                             \tedu packscholar #x                                \t1999\t7\t6639.040000\t2653.55\t4219.52\t10071.22\nMusic                                             \tamalgscholar #x                                   \t1999\t4\t6719.304167\t2739.33\t3690.54\t3872.98\nMen                                               \timportoimporto #x                                 \t1999\t3\t6610.034167\t2645.24\t3661.14\t4282.01\nMusic                                             \texportischolar #x                                 \t1999\t2\t7043.051667\t3115.83\t4457.95\t5258.95\nMen                                               \tedu packimporto #x                                \t1999\t3\t7386.333333\t3488.01\t3347.65\t3329.74\nShoes                                             \texportiedu pack #x                                \t1999\t3\t7255.790000\t3411.07\t4194.64\t3624.85\nWomen                                             \texportiamalg #x                                   \t1999\t2\t5646.671667\t1809.52\t4198.70\t2172.85\nMusic                                             \timportoscholar #x                                 \t1999\t6\t6279.081667\t2456.98\t4361.44\t4256.24\nChildren                                          \timportoexporti #x                                 \t1999\t7\t6786.750000\t2978.82\t3942.59\t7809.22\nMusic                                             \texportischolar #x                                 \t1999\t2\t7041.705833\t3245.77\t3608.31\t4127.40\nShoes                                             \timportoedu pack #x        
                        \t1999\t2\t6864.320833\t3104.48\t3135.52\t3606.28\nShoes                                             \timportoedu pack #x                                \t1999\t1\t6864.320833\t3135.52\t14580.30\t3104.48\nChildren                                          \tedu packexporti #x                                \t1999\t5\t6511.800833\t2785.92\t3956.90\t3906.63\nWomen                                             \tamalgamalg #x                                     \t1999\t2\t6480.683333\t2769.69\t3353.67\t3876.51\nMen                                               \texportiimporto #x                                 \t1999\t5\t7146.240000\t3440.57\t3561.11\t3971.13\nWomen                                             \timportoamalg #x                                   \t1999\t3\t6512.794167\t2808.28\t3789.51\t4335.27\nMen                                               \tamalgimporto #x                                   \t1999\t2\t6720.550000\t3018.90\t4328.03\t3810.74\nShoes                                             \texportiedu pack #x                                \t1999\t7\t7255.790000\t3557.87\t4937.98\t9496.49\nMusic                                             \texportischolar #x                                 \t1999\t5\t6791.260833\t3096.02\t3918.04\t3801.90\nMen                                               \texportiimporto #x                                 \t1999\t1\t7343.719167\t3652.72\t13689.13\t3984.13\nMen                                               \timportoimporto #x                                 \t1999\t5\t6707.315833\t3030.18\t4977.33\t4620.75\nChildren                                          \texportiexporti #x                                 \t1999\t2\t6386.880833\t2717.07\t4809.11\t3355.48\nMen                                               \tamalgimporto #x                                   \t1999\t2\t7098.680833\t3440.27\t5293.69\t3526.45\nMen                                               \timportoimporto #x                                 \t1999\t5\t6610.034167\t2954.71\t4282.01\t3166.43\nShoes                                             \texportiedu pack #x                                \t1999\t4\t7255.790000\t3624.85\t3411.07\t5169.09\nMen                                               \texportiimporto #x                                 \t1999\t5\t7343.719167\t3727.75\t3729.62\t4580.93\nMen                                               \texportiimporto #x                                 \t1999\t4\t7343.719167\t3729.62\t4033.37\t3727.75\nMusic                                             \tedu packscholar #x                                \t1999\t2\t6489.175000\t2875.98\t4299.82\t4028.97\nMen                                               \tedu packimporto #x                                \t1999\t1\t7202.242500\t3614.07\t15582.63\t4234.79\nMusic                                             \timportoscholar #x                                 \t1999\t2\t5816.271667\t2229.79\t2919.29\t4298.41\nMen                                               \texportiimporto #x                                 \t1999\t4\t7146.240000\t3561.11\t5091.17\t3440.57\nShoes                                             \texportiedu pack #x                                \t1999\t7\t7073.462500\t3493.41\t4534.31\t8701.59\nMusic                                             \texportischolar #x                                 \t1999\t3\t6791.260833\t3218.13\t3847.87\t3918.04\nShoes                                             \tedu packedu pack #x                               
\t1999\t4\t6203.331667\t2631.02\t4424.18\t4186.85\nMen                                               \tamalgimporto #x                                   \t1999\t3\t7098.680833\t3526.45\t3440.27\t2965.42\nMen                                               \tedu packimporto #x                                \t1999\t3\t7202.242500\t3639.93\t4234.79\t4016.25\nChildren                                          \tamalgexporti #x                                   \t1999\t2\t6364.467500\t2825.09\t4111.08\t3330.83\nShoes                                             \tedu packedu pack #x                               \t1999\t2\t6464.239167\t2928.99\t4233.86\t3840.97\nShoes                                             \tamalgedu pack #x                                  \t1999\t4\t6493.071667\t2962.20\t3443.20\t4212.60\nMusic                                             \timportoscholar #x                                 \t1999\t4\t5707.844167\t2179.41\t3789.16\t4317.53\nShoes                                             \texportiedu pack #x                                \t1999\t1\t7416.141667\t3892.54\t14170.68\t4654.22\nWomen                                             \timportoamalg #x                                   \t1999\t5\t6512.794167\t2991.07\t4335.27\t4624.86\nMusic                                             \texportischolar #x                                 \t1999\t4\t7043.051667\t3521.99\t5258.95\t4135.21\nWomen                                             \tedu packamalg #x                                  \t1999\t2\t6354.045833\t2836.23\t3719.67\t3527.07\nMusic                                             \tamalgscholar #x                                   \t1999\t3\t6123.475000\t2617.39\t3080.43\t4919.93\nShoes                                             \tamalgedu pack #x                                  \t1999\t7\t6674.896667\t3178.12\t3342.98\t8050.81\nMen                                               \tamalgimporto #x                                   \t1999\t5\t6618.534167\t3127.28\t4291.66\t4669.62\nWomen                                             \tamalgamalg #x                                     \t1999\t6\t6874.250000\t3387.53\t4798.69\t4329.48\nWomen                                             \texportiamalg #x                                   \t1999\t3\t5646.671667\t2172.85\t1809.52\t4461.31\nChildren                                          \tedu packexporti #x                                \t1999\t2\t6112.954167\t2641.54\t3567.58\t3196.45\nChildren                                          \tamalgexporti #x                                   \t1999\t5\t6294.100833\t2834.48\t3317.60\t3803.79\nWomen                                             \tedu packamalg #x                                  \t1999\t5\t6027.880000\t2575.81\t2750.96\t4459.01\nMusic                                             \texportischolar #x                                 \t1999\t6\t7041.705833\t3589.97\t4134.03\t4892.26\nMusic                                             \texportischolar #x                                 \t1999\t4\t7041.705833\t3593.86\t4127.40\t4134.03\nMen                                               \timportoimporto #x                                 \t1999\t6\t6610.034167\t3166.43\t2954.71\t3673.39\nMusic                                             \texportischolar #x                                 \t1999\t1\t7041.705833\t3608.31\t15046.54\t3245.77\nMusic                                             \tedu packscholar #x                                
\t1999\t2\t6602.385000\t3173.44\t3434.91\t3929.40\nMusic                                             \tamalgscholar #x                                   \t1999\t6\t6123.475000\t2699.75\t4038.47\t3330.87\nChildren                                          \timportoexporti #x                                 \t1999\t4\t6786.750000\t3366.25\t3847.60\t4259.57\nMen                                               \tedu packimporto #x                                \t1999\t1\t7230.493333\t3811.64\t14668.93\t4497.31\nShoes                                             \timportoedu pack #x                                \t1999\t5\t6864.320833\t3449.62\t3869.15\t3531.93\nChildren                                          \tedu packexporti #x                                \t1999\t2\t6739.498333\t3328.01\t4986.50\t3623.32\nChildren                                          \timportoexporti #x                                 \t1999\t1\t6786.750000\t3376.55\t12504.93\t5018.22\nChildren                                          \tedu packexporti #x                                \t1999\t7\t6112.954167\t2711.04\t3254.04\t9465.10\nShoes                                             \timportoedu pack #x                                \t1999\t3\t6588.741667\t3187.25\t4283.21\t3212.76\nMen                                               \timportoimporto #x                                 \t1999\t3\t6702.415000\t3310.55\t3981.06\t4901.56\nMen                                               \tedu packimporto #x                                \t1999\t1\t7386.333333\t4007.40\t14005.45\t3347.65\nShoes                                             \timportoedu pack #x                                \t1999\t4\t6588.741667\t3212.76\t3187.25\t3974.78\nShoes                                             \tedu packedu pack #x                               \t1999\t6\t6203.331667\t2835.78\t4186.85\t3192.53\nMen                                               \texportiimporto #x                                 \t1999\t2\t7343.719167\t3984.13\t3652.72\t4033.37\nMen                                               \tamalgimporto #x                                   \t1999\t4\t6720.550000\t3364.32\t3810.74\t4333.58\nChildren                                          \tedu packexporti #x                                \t1999\t4\t6739.498333\t3389.03\t3623.32\t3605.25\nShoes                                             \timportoedu pack #x                                \t1999\t6\t6864.320833\t3531.93\t3449.62\t4414.17\nShoes                                             \tamalgedu pack #x                                  \t1999\t6\t6674.896667\t3342.98\t4458.26\t3178.12\nChildren                                          \tedu packexporti #x                                \t1999\t2\t6511.800833\t3185.28\t3581.75\t3410.75\nChildren                                          \tamalgexporti #x                                   \t1999\t4\t6854.405833\t3541.62\t3854.33\t3938.42\nMen                                               \texportiimporto #x                                 \t1999\t3\t7343.719167\t4033.37\t3984.13\t3729.62\nMen                                               \tamalgimporto #x                                   \t1999\t3\t6618.534167\t3313.62\t4044.36\t4291.66\nShoes                                             \texportiedu pack #x                                \t1999\t7\t7416.141667\t4121.24\t4239.08\t8658.42\nWomen                                             \timportoamalg #x                                   
\t1999\t6\t6395.326667\t3102.55\t4234.22\t3650.03\nChildren                                          \timportoexporti #x                                 \t1999\t3\t6577.143333\t3291.07\t3773.61\t2152.15\nWomen                                             \tedu packamalg #x                                  \t1999\t4\t6027.880000\t2750.96\t3199.35\t2575.81\nMusic                                             \tamalgscholar #x                                   \t1999\t3\t6662.669167\t3386.25\t1961.57\t4799.18\nMen                                               \tamalgimporto #x                                   \t1999\t6\t7098.680833\t3834.90\t4923.53\t4115.57\nShoes                                             \timportoedu pack #x                                \t1999\t3\t6864.320833\t3606.28\t3104.48\t3869.15\nMusic                                             \texportischolar #x                                 \t1999\t6\t7043.051667\t3793.48\t4135.21\t5006.69\nShoes                                             \tedu packedu pack #x                               \t1999\t1\t6711.753333\t3473.10\t15060.83\t4085.86\nMen                                               \texportiimporto #x                                 \t1999\t1\t7419.459167\t4188.88\t16358.86\t4366.77\nWomen                                             \tamalgamalg #x                                     \t1999\t2\t6362.709167\t3137.03\t4180.91\t3181.26\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q5a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,id:string,sales:decimal(37,2),returns:decimal(37,2),profit:decimal(38,2)>\n-- !query output\nNULL\tNULL\t115594110.84\t2284876.41\t-30784735.42\ncatalog channel\tNULL\t39386111.98\t835889.43\t-4413707.25\ncatalog channel\tcatalog_pageAAAAAAAAAABAAAAA\t142709.95\t0.00\t-33829.11\ncatalog channel\tcatalog_pageAAAAAAAAAADAAAAA\t78358.50\t0.00\t-9578.35\ncatalog channel\tcatalog_pageAAAAAAAAABAAAAAA\t0.00\t4176.90\t-492.29\ncatalog channel\tcatalog_pageAAAAAAAAABBAAAAA\t71027.83\t0.00\t-27264.22\ncatalog channel\tcatalog_pageAAAAAAAAABDAAAAA\t93670.35\t0.00\t-3768.98\ncatalog channel\tcatalog_pageAAAAAAAAACAAAAAA\t0.00\t1776.17\t-1252.65\ncatalog channel\tcatalog_pageAAAAAAAAACBAAAAA\t112153.16\t0.00\t-19004.12\ncatalog channel\tcatalog_pageAAAAAAAAACDAAAAA\t102298.11\t0.00\t-14768.21\ncatalog channel\tcatalog_pageAAAAAAAAADAAAAAA\t0.00\t8773.82\t-4372.46\ncatalog channel\tcatalog_pageAAAAAAAAADBAAAAA\t23722.97\t0.00\t-22193.13\ncatalog channel\tcatalog_pageAAAAAAAAADDAAAAA\t47126.10\t0.00\t-6735.40\ncatalog channel\tcatalog_pageAAAAAAAAAEAAAAAA\t0.00\t6234.69\t-2690.18\ncatalog channel\tcatalog_pageAAAAAAAAAEBAAAAA\t45880.56\t679.36\t-16578.99\ncatalog channel\tcatalog_pageAAAAAAAAAEDAAAAA\t66004.24\t0.00\t6074.29\ncatalog channel\tcatalog_pageAAAAAAAAAFAAAAAA\t0.00\t463.68\t-1521.36\ncatalog channel\tcatalog_pageAAAAAAAAAFDAAAAA\t88022.65\t0.00\t-769.81\ncatalog channel\tcatalog_pageAAAAAAAAAGAAAAAA\t0.00\t8947.80\t-2922.65\ncatalog channel\tcatalog_pageAAAAAAAAAGDAAAAA\t124552.05\t0.00\t24880.33\ncatalog channel\tcatalog_pageAAAAAAAAAHAAAAAA\t188643.97\t0.00\t-33045.93\ncatalog channel\tcatalog_pageAAAAAAAAAIAAAAAA\t219950.84\t518.83\t-1089.03\ncatalog channel\tcatalog_pageAAAAAAAAAJAAAAAA\t196607.25\t7494.81\t-41407.88\ncatalog channel\tcatalog_pageAAAAAAAAAJCAAAAA\t23556.72\t0.00\t-5599.24\ncatalog channel\tcatalog_pageAAAAAAAAAKAAAAAA\t155095.31\t0.00\t-24248.39\ncatalog channel\tcatalog_pageAAAAAAAAAKCAAAAA\t13054.83\t0.00\t-528.84\ncatalog channel\tcatalog_pageAAAAAAAAALAAAAAA\t114177.20\t1866.02\t-48127.76\ncatalog channel\tcatalog_pageAAAAAAAAALCAAAAA\t10602.13\t0.00\t-3956.02\ncatalog channel\tcatalog_pageAAAAAAAAAMAAAAAA\t180993.77\t144.00\t-26262.58\ncatalog channel\tcatalog_pageAAAAAAAAAMCAAAAA\t29191.46\t0.00\t-8019.34\ncatalog channel\tcatalog_pageAAAAAAAAANAAAAAA\t202640.83\t0.00\t14603.23\ncatalog channel\tcatalog_pageAAAAAAAAANBAAAAA\t0.00\t3103.80\t-1162.99\ncatalog channel\tcatalog_pageAAAAAAAAANCAAAAA\t14627.25\t61.80\t1219.16\ncatalog channel\tcatalog_pageAAAAAAAAAOAAAAAA\t49178.54\t0.00\t-27661.86\ncatalog channel\tcatalog_pageAAAAAAAAAOCAAAAA\t9621.16\t0.00\t-4213.22\ncatalog channel\tcatalog_pageAAAAAAAAAPAAAAAA\t142216.57\t0.00\t21113.94\ncatalog channel\tcatalog_pageAAAAAAAAAPBAAAAA\t0.00\t4876.83\t-2903.30\ncatalog channel\tcatalog_pageAAAAAAAAAPCAAAAA\t17230.02\t689.60\t-1816.99\ncatalog channel\tcatalog_pageAAAAAAAABAAAAAAA\t0.00\t1902.91\t-1516.52\ncatalog channel\tcatalog_pageAAAAAAAABABAAAAA\t100018.02\t0.00\t-9370.45\ncatalog channel\tcatalog_pageAAAAAAAABADAAAAA\t115622.82\t0.00\t12126.64\ncatalog channel\tcatalog_pageAAAAAAAABBAAAAAA\t0.00\t1786.21\t-891.28\ncatalog channel\tcatalog_pageAAAAAAAABBBAAAAA\t76388.45\t1153.20\t2393.80\ncatalog channel\tcatalog_pageAAAAAAAABBCAAAAA\t0.00\t831.90\t-1077.03\ncatalog channel\tcatalog_pageAAAAAAAABBDAAAAA\t47684.96\t0.00\t-8371.50\ncatalog 
channel\tcatalog_pageAAAAAAAABCAAAAAA\t0.00\t4371.58\t-4222.90\ncatalog channel\tcatalog_pageAAAAAAAABCBAAAAA\t93859.97\t1753.50\t-1196.59\ncatalog channel\tcatalog_pageAAAAAAAABCDAAAAA\t73347.51\t0.00\t-11342.92\ncatalog channel\tcatalog_pageAAAAAAAABDAAAAAA\t0.00\t3474.90\t-1008.22\ncatalog channel\tcatalog_pageAAAAAAAABDBAAAAA\t42173.47\t0.00\t-26341.79\ncatalog channel\tcatalog_pageAAAAAAAABDCAAAAA\t0.00\t305.46\t-585.35\ncatalog channel\tcatalog_pageAAAAAAAABDDAAAAA\t83810.14\t0.00\t-1482.89\ncatalog channel\tcatalog_pageAAAAAAAABEAAAAAA\t0.00\t347.39\t-197.16\ncatalog channel\tcatalog_pageAAAAAAAABEBAAAAA\t97527.42\t0.00\t2333.17\ncatalog channel\tcatalog_pageAAAAAAAABEDAAAAA\t60506.22\t0.00\t-1212.87\ncatalog channel\tcatalog_pageAAAAAAAABFAAAAAA\t0.00\t6528.33\t-2187.98\ncatalog channel\tcatalog_pageAAAAAAAABFDAAAAA\t68737.67\t0.00\t-11721.44\ncatalog channel\tcatalog_pageAAAAAAAABHAAAAAA\t199767.35\t1044.68\t-6811.92\ncatalog channel\tcatalog_pageAAAAAAAABIAAAAAA\t205306.84\t411.12\t2555.23\ncatalog channel\tcatalog_pageAAAAAAAABICAAAAA\t0.00\t68.02\t-99.54\ncatalog channel\tcatalog_pageAAAAAAAABJAAAAAA\t187825.12\t230.88\t-21543.64\ncatalog channel\tcatalog_pageAAAAAAAABJBAAAAA\t0.00\t727.89\t-209.17\ncatalog channel\tcatalog_pageAAAAAAAABJCAAAAA\t14241.32\t0.00\t4283.83\ncatalog channel\tcatalog_pageAAAAAAAABKAAAAAA\t215820.15\t1121.48\t-7030.88\ncatalog channel\tcatalog_pageAAAAAAAABKBAAAAA\t0.00\t3063.15\t-3543.44\ncatalog channel\tcatalog_pageAAAAAAAABKCAAAAA\t1871.32\t0.00\t-2063.64\ncatalog channel\tcatalog_pageAAAAAAAABLAAAAAA\t171643.10\t779.60\t-15082.45\ncatalog channel\tcatalog_pageAAAAAAAABLCAAAAA\t8855.03\t0.00\t3013.67\ncatalog channel\tcatalog_pageAAAAAAAABMAAAAAA\t202476.57\t1954.44\t10034.15\ncatalog channel\tcatalog_pageAAAAAAAABMCAAAAA\t13837.48\t0.00\t6008.90\ncatalog channel\tcatalog_pageAAAAAAAABNAAAAAA\t197464.73\t1397.02\t-34977.14\ncatalog channel\tcatalog_pageAAAAAAAABNCAAAAA\t14801.05\t0.00\t-6910.60\ncatalog channel\tcatalog_pageAAAAAAAABOAAAAAA\t98871.66\t0.00\t-6493.20\ncatalog channel\tcatalog_pageAAAAAAAABOCAAAAA\t15216.80\t0.00\t-5722.57\ncatalog channel\tcatalog_pageAAAAAAAABPAAAAAA\t99238.38\t100.20\t-6353.28\ncatalog channel\tcatalog_pageAAAAAAAABPCAAAAA\t26783.54\t0.00\t4129.36\ncatalog channel\tcatalog_pageAAAAAAAACAAAAAAA\t0.00\t8072.86\t-3459.85\ncatalog channel\tcatalog_pageAAAAAAAACABAAAAA\t139938.46\t0.00\t8612.80\ncatalog channel\tcatalog_pageAAAAAAAACADAAAAA\t36613.51\t0.00\t-9705.44\ncatalog channel\tcatalog_pageAAAAAAAACBAAAAAA\t0.00\t9373.27\t-7982.93\ncatalog channel\tcatalog_pageAAAAAAAACBBAAAAA\t111531.45\t55.38\t-15921.86\ncatalog channel\tcatalog_pageAAAAAAAACBDAAAAA\t103545.90\t0.00\t-168.10\ncatalog channel\tcatalog_pageAAAAAAAACCAAAAAA\t0.00\t3879.26\t-1783.40\ncatalog channel\tcatalog_pageAAAAAAAACCBAAAAA\t78290.87\t0.00\t-12841.84\ncatalog channel\tcatalog_pageAAAAAAAACCDAAAAA\t37841.79\t0.00\t-26568.14\ncatalog channel\tcatalog_pageAAAAAAAACDAAAAAA\t0.00\t923.32\t-1166.75\ncatalog channel\tcatalog_pageAAAAAAAACDBAAAAA\t113988.86\t212.31\t-9856.53\ncatalog channel\tcatalog_pageAAAAAAAACDDAAAAA\t64516.71\t0.00\t-27577.04\ncatalog channel\tcatalog_pageAAAAAAAACEAAAAAA\t0.00\t2955.12\t-1700.94\ncatalog channel\tcatalog_pageAAAAAAAACEBAAAAA\t91555.04\t0.00\t-23392.02\ncatalog channel\tcatalog_pageAAAAAAAACEDAAAAA\t84993.13\t0.00\t-24259.30\ncatalog channel\tcatalog_pageAAAAAAAACFAAAAAA\t0.00\t6507.25\t-3250.26\ncatalog channel\tcatalog_pageAAAAAAAACFDAAAAA\t73749.97\t0.00\t1468.23\ncatalog 
channel\tcatalog_pageAAAAAAAACGAAAAAA\t0.00\t8814.60\t-3790.45\ncatalog channel\tcatalog_pageAAAAAAAACHAAAAAA\t157602.51\t907.12\t-50822.23\ncatalog channel\tcatalog_pageAAAAAAAACHCAAAAA\t0.00\t59.58\t-63.15\ncatalog channel\tcatalog_pageAAAAAAAACIAAAAAA\t132342.50\t4686.24\t-45195.01\ncatalog channel\tcatalog_pageAAAAAAAACIBAAAAA\t0.00\t6029.56\t-3462.80\ncatalog channel\tcatalog_pageAAAAAAAACICAAAAA\t0.00\t530.41\t-535.89\ncatalog channel\tcatalog_pageAAAAAAAACJAAAAAA\t193536.42\t2487.88\t-29573.52\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q6.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<state:string,cnt:bigint>\n-- !query output\nVT\t10\nMA\t11\nNH\t11\nNJ\t14\nNV\t16\nOR\t16\nWY\t16\nAK\t20\nME\t22\nMD\t26\nWA\t29\nID\t31\nNM\t31\nUT\t36\nND\t38\nSC\t41\nSD\t43\nWV\t44\nCA\t52\nFL\t53\nLA\t56\nPA\t57\nNY\t59\nAR\t61\nCO\t61\nWI\t61\nMT\t66\nMS\t68\nOK\t68\nMN\t71\nOH\t75\nMO\t79\nIL\t80\nAL\t81\nNC\t81\nIA\t82\nMI\t84\nKS\t86\nNE\t86\nNULL\t88\nIN\t94\nTN\t108\nKY\t110\nVA\t128\nGA\t138\nTX\t250\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q64.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<product_name:string,store_name:string,store_zip:string,b_street_number:string,b_streen_name:string,b_city:string,b_zip:string,c_street_number:string,c_street_name:string,c_city:string,c_zip:string,syear:int,cnt:bigint,s1:decimal(17,2),s2:decimal(17,2),s3:decimal(17,2),s1:decimal(17,2),s2:decimal(17,2),s3:decimal(17,2),syear:int,cnt:bigint>\n-- !query output\nablepricallyantiought                             \tation\t35709     \t996       \tNULL\tBridgeport\t65817     \t752       \tLakeview Lincoln\tFriendship\t74536     \t1999\t1\t15.78\t24.93\t0.00\t17.09\t26.31\t0.00\t2000\t1\nablepricallyantiought                             \tation\t35709     \t71        \tRiver River\tFriendship\t34536     \tNULL\tNULL\tNewport\tNULL\t1999\t1\t22.60\t38.87\t0.00\t17.09\t26.31\t0.00\t2000\t1\nablepricallyantiought                             \tbar\t31904     \t128       \tEast \tFranklin\t19101     \t990       \t2nd \tFriendship\t94536     \t1999\t1\t54.76\t78.30\t0.00\t15.80\t23.54\t0.00\t2000\t1\nationbarpri                                       \tation\t35709     \t362       \tCentral Ridge\tOakland\t69843     \t666       \t13th Ridge\tShiloh\t29275     \t1999\t1\t74.00\t95.46\t0.00\t11.32\t20.94\t0.00\t2000\t1\nationbarpri                                       \tese\t31904     \t759       \tElm Pine\tBelmont\t20191     \t35        \tMadison \tWaterloo\t31675     \t1999\t1\t12.92\t22.22\t0.00\t24.15\t36.70\t0.00\t2000\t1\nationbarpri                                       \tese\t31904     \t759       \tElm Pine\tBelmont\t20191     \t35        \tMadison \tWaterloo\t31675     \t1999\t1\t12.92\t22.22\t0.00\t83.87\t147.61\t0.00\t2000\t1\nationbarpri                                       \tought\t31904     \t754       \tNULL\tNULL\t65804     \t897       \t8th \tAshland\t54244     \t1999\t1\t74.70\t90.38\t0.00\t12.02\t12.74\t0.00\t2000\t1\nationbarpri                                       \tought\t31904     \t754       \tNULL\tNULL\t65804     \t897       \t8th \tAshland\t54244     \t1999\t1\t74.70\t90.38\t0.00\t28.08\t38.18\t0.00\t2000\t1\nationbarpri                                       \tought\t31904     \t754       \tNULL\tNULL\t65804     \t897       \t8th \tAshland\t54244     \t1999\t1\t74.70\t90.38\t0.00\t56.60\t63.39\t0.00\t2000\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q67a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_category:string,i_class:string,i_brand:string,i_product_name:string,d_year:int,d_qoy:int,d_moy:int,s_store_id:string,sumsales:decimal(38,2),rk:int>\n-- !query output\nNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t681149.47\t5\nNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t1474997.55\t4\nNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t1577864.42\t3\nNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t2536204.81\t2\nNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t999668082.59\t1\nNULL\tNULL\tNULL\tNULL\t2001\tNULL\tNULL\tNULL\t681149.47\t5\nNULL\tNULL\tNULL\tNULL\t2001\t1\tNULL\tNULL\t111437.42\t32\nNULL\tNULL\tNULL\tNULL\t2001\t1\t1\tNULL\t36235.73\t90\nNULL\tNULL\tNULL\tNULL\t2001\t1\t3\tNULL\t49252.88\t77\nNULL\tNULL\tNULL\tNULL\t2001\t2\tNULL\tNULL\t78346.18\t63\nNULL\tNULL\tNULL\tNULL\t2001\t2\t6\tNULL\t34840.48\t96\nNULL\tNULL\tNULL\tNULL\t2001\t3\tNULL\tNULL\t141691.87\t12\nNULL\tNULL\tNULL\tNULL\t2001\t3\t7\tNULL\t31988.52\t100\nNULL\tNULL\tNULL\tNULL\t2001\t3\t8\tNULL\t57778.28\t69\nNULL\tNULL\tNULL\tNULL\t2001\t3\t9\tNULL\t51925.07\t73\nNULL\tNULL\tNULL\tNULL\t2001\t4\tNULL\tNULL\t349674.00\t7\nNULL\tNULL\tNULL\tNULL\t2001\t4\t10\tNULL\t89521.66\t52\nNULL\tNULL\tNULL\tNULL\t2001\t4\t11\tNULL\t107989.95\t35\nNULL\tNULL\tNULL\tNULL\t2001\t4\t11\tAAAAAAAAHAAAAAAA\t38673.65\t85\nNULL\tNULL\tNULL\tNULL\t2001\t4\t12\tNULL\t152162.39\t11\nNULL\tNULL\tNULL\tNULL\t2001\t4\t12\tAAAAAAAACAAAAAAA\t36936.71\t88\nNULL\tNULL\tNULL\tNULL\t2001\t4\t12\tAAAAAAAAKAAAAAAA\t35885.68\t92\nNULL\tNULL\tNULL\tationesebareing                                   \tNULL\tNULL\tNULL\tNULL\t130971.51\t13\nNULL\tNULL\tNULL\tationesebareing                                   \t2001\tNULL\tNULL\tNULL\t130971.51\t13\nNULL\tNULL\tNULL\tationesebareing                                   \t2001\t3\tNULL\tNULL\t49334.88\t76\nNULL\tNULL\tNULL\tationesebareing                                   \t2001\t4\tNULL\tNULL\t36033.41\t91\nNULL\tNULL\tNULL\tn stcallyn stn st                                 \tNULL\tNULL\tNULL\tNULL\t110926.60\t33\nNULL\tNULL\tNULL\tn stcallyn stn st                                 \t2001\tNULL\tNULL\tNULL\t110926.60\t33\nNULL\tNULL\tNULL\tn stcallyn stn st                                 \t2001\t3\tNULL\tNULL\t33118.05\t98\nNULL\tNULL\tNULL\tn stcallyn stn st                                 \t2001\t4\tNULL\tNULL\t41603.21\t83\nNULL\tNULL\tNULL\tn stought                                         \tNULL\tNULL\tNULL\tNULL\t157465.43\t9\nNULL\tNULL\tNULL\tn stought                                         \t2001\tNULL\tNULL\tNULL\t157465.43\t9\nNULL\tNULL\tNULL\tn stought                                         \t2001\t3\tNULL\tNULL\t46371.14\t78\nNULL\tNULL\tNULL\tn stought                                         \t2001\t4\tNULL\tNULL\t65619.67\t66\nNULL\tNULL\tNULL\tn stought                                         \t2001\t4\t11\tNULL\t45528.69\t79\nNULL\tNULL\tNULL\toughtableantiable                                 \tNULL\tNULL\tNULL\tNULL\t103187.19\t40\nNULL\tNULL\tNULL\toughtableantiable                                 \t2001\tNULL\tNULL\tNULL\t103187.19\t40\nNULL\tNULL\tNULL\toughtableantiable                                 \t2001\t4\tNULL\tNULL\t42408.80\t82\nNULL\tNULL\tNULL\toughtablen stationought                           \tNULL\tNULL\tNULL\tNULL\t86348.22\t57\nNULL\tNULL\tNULL\toughtablen stationought                           \t2001\tNULL\tNULL\tNULL\t86348.22\t57\nNULL\tNULL\tNULL\toughtablen 
stationought                           \t2001\t4\tNULL\tNULL\t35714.86\t93\nNULL\tNULL\tNULL\toughtesebaration                                  \tNULL\tNULL\tNULL\tNULL\t76325.38\t64\nNULL\tNULL\tNULL\toughtesebaration                                  \t2001\tNULL\tNULL\tNULL\t76325.38\t64\nNULL\tNULL\tNULL\toughtesebaration                                  \t2001\t4\tNULL\tNULL\t36610.75\t89\nNULL\tNULL\tNULL\toughteseoughtation                                \tNULL\tNULL\tNULL\tNULL\t128623.75\t19\nNULL\tNULL\tNULL\toughteseoughtation                                \t2001\tNULL\tNULL\tNULL\t128623.75\t19\nNULL\tNULL\tNULL\toughteseoughtation                                \t2001\t3\tNULL\tNULL\t35495.82\t94\nNULL\tNULL\tNULL\toughteseoughtation                                \t2001\t4\tNULL\tNULL\t57203.17\t70\nNULL\tNULL\timportoamalg #x                                   \tNULL\tNULL\tNULL\tNULL\tNULL\t102866.87\t42\nNULL\tNULL\timportoamalg #x                                   \tNULL\tNULL\tNULL\tNULL\tNULL\t102866.87\t42\nNULL\tNULL\timportoamalg #x                                   \tNULL\t2001\tNULL\tNULL\tNULL\t102866.87\t42\nNULL\tNULL\timportoamalg #x                                   \tNULL\t2001\t4\tNULL\tNULL\t51717.30\t74\nNULL\taccessories                                       \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t98994.39\t48\nNULL\taccessories                                       \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t98994.39\t48\nNULL\taccessories                                       \tNULL\tprin stn stoughtought                             \tNULL\tNULL\tNULL\tNULL\t98994.39\t48\nNULL\taccessories                                       \tNULL\tprin stn stoughtought                             \t2001\tNULL\tNULL\tNULL\t98994.39\t48\nNULL\taccessories                                       \tNULL\tprin stn stoughtought                             \t2001\t3\tNULL\tNULL\t37246.83\t87\nNULL\tathletic                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t122120.16\t21\nNULL\tathletic                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t122120.16\t21\nNULL\tathletic                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t122120.16\t21\nNULL\tathletic                                          \tNULL\tNULL\t2001\tNULL\tNULL\tNULL\t122120.16\t21\nNULL\tathletic                                          \tNULL\tNULL\t2001\t4\tNULL\tNULL\t63278.49\t67\nNULL\tathletic                                          \tNULL\tNULL\t2001\t4\t11\tNULL\t38216.51\t86\nNULL\tbaseball                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t105806.96\t36\nNULL\tbaseball                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t105806.96\t36\nNULL\tbaseball                                          \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t105806.96\t36\nNULL\tbaseball                                          \tNULL\tNULL\t2001\tNULL\tNULL\tNULL\t105806.96\t36\nNULL\tbaseball                                          \tNULL\tNULL\t2001\t4\tNULL\tNULL\t58206.33\t68\nNULL\tcountry                                           \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t89164.69\t53\nNULL\tcountry                                           \timportoscholar #x                                 \tNULL\tNULL\tNULL\tNULL\tNULL\t89164.69\t53\nNULL\tcountry                                           \timportoscholar #x                                 \tNULL\tNULL\tNULL\tNULL\tNULL\t89164.69\t53\nNULL\tcountry         
                                  \timportoscholar #x                                 \tNULL\t2001\tNULL\tNULL\tNULL\t89164.69\t53\nNULL\tcountry                                           \timportoscholar #x                                 \tNULL\t2001\t4\tNULL\tNULL\t55674.96\t71\nNULL\tdresses                                           \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t116126.30\t25\nNULL\tdresses                                           \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t217647.20\t8\nNULL\tdresses                                           \tNULL\toughteingantieing                                 \tNULL\tNULL\tNULL\tNULL\t116126.30\t25\nNULL\tdresses                                           \tNULL\toughteingantieing                                 \t2001\tNULL\tNULL\tNULL\t116126.30\t25\nNULL\tdresses                                           \tNULL\toughteingantieing                                 \t2001\t3\tNULL\tNULL\t35421.52\t95\nNULL\tdresses                                           \tNULL\toughteingantieing                                 \t2001\t4\tNULL\tNULL\t53283.33\t72\nNULL\tdresses                                           \tamalgamalg #x                                     \tNULL\tNULL\tNULL\tNULL\tNULL\t101520.90\t45\nNULL\tdresses                                           \tamalgamalg #x                                     \tNULL\tNULL\tNULL\tNULL\tNULL\t101520.90\t45\nNULL\tdresses                                           \tamalgamalg #x                                     \tNULL\t2001\tNULL\tNULL\tNULL\t101520.90\t45\nNULL\tdresses                                           \tamalgamalg #x                                     \tNULL\t2001\t3\tNULL\tNULL\t32017.28\t99\nNULL\tdresses                                           \tamalgamalg #x                                     \tNULL\t2001\t4\tNULL\tNULL\t34451.25\t97\nNULL\tinfants                                           \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t128750.23\t15\nNULL\tinfants                                           \timportoexporti #x                                 \tNULL\tNULL\tNULL\tNULL\tNULL\t128750.23\t15\nNULL\tinfants                                           \timportoexporti #x                                 \tantiationeseese                                   \tNULL\tNULL\tNULL\tNULL\t128750.23\t15\nNULL\tinfants                                           \timportoexporti #x                                 \tantiationeseese                                   \t2001\tNULL\tNULL\tNULL\t128750.23\t15\nNULL\tinfants                                           \timportoexporti #x                                 \tantiationeseese                                   \t2001\t3\tNULL\tNULL\t40113.56\t84\nNULL\tinfants                                           \timportoexporti #x                                 \tantiationeseese                                   \t2001\t4\tNULL\tNULL\t50724.87\t75\nNULL\tmens                                              \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t113853.60\t28\nNULL\tmens                                              \timportoedu pack #x                                \tNULL\tNULL\tNULL\tNULL\tNULL\t113853.60\t28\nNULL\tmens                                              \timportoedu pack #x                                \toughtablen steseought                             \tNULL\tNULL\tNULL\tNULL\t113853.60\t28\nNULL\tmens                                              \timportoedu pack #x                                \toughtablen steseought                    
         \t2001\tNULL\tNULL\tNULL\t113853.60\t28\nNULL\tmens                                              \timportoedu pack #x                                \toughtablen steseought                             \t2001\t3\tNULL\tNULL\t42447.63\t81\nNULL\tmens                                              \timportoedu pack #x                                \toughtablen steseought                             \t2001\t4\tNULL\tNULL\t43089.74\t80\nNULL\tshirts                                            \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t82003.16\t59\nNULL\tshirts                                            \tNULL\tNULL\tNULL\tNULL\tNULL\tNULL\t82003.16\t59\nNULL\tshirts                                            \tNULL\toughtcallyeseantiought                            \tNULL\tNULL\tNULL\tNULL\t82003.16\t59\nNULL\tshirts                                            \tNULL\toughtcallyeseantiought                            \t2001\tNULL\tNULL\tNULL\t82003.16\t59\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q70a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<total_sum:decimal(27,2),s_state:string,s_county:string,lochierarchy:int,rank_within_parent:int>\n-- !query output\n-435656177.80\tNULL\tNULL\t2\t1\n-435656177.80\tTN\tNULL\t1\t1\n-435656177.80\tTN\tWilliamson County\t0\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q72.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_desc:string,w_warehouse_name:string,d_week_seq:int,no_promo:bigint,promo:bigint,total_cnt:bigint>\n-- !query output\nAgain perfect sons used to put always. Europea\tSignificantly\t5305\t2\t2\t2\nAgo new arguments accept previously european parents; fo\tSelective,\t5313\t2\t2\t2\nDifficult, rapid sizes say so; initial banks stress high single sports; prisoners used to think likely firms. Good, current services must take human, precise persons. Signals m\tSelective,\t5322\t2\t2\t2\nEventually soft issues will see pp.; possible children pay completely future tasks. Only women will not rehearse more old parts; different movements sponsor most. Please political allowance\tSignificantly\t5273\t2\t2\t2\nHappy, nuclear obligations should not leave little payments. About able relationships impress thus only original a\tSignificantly\t5313\t2\t2\t2\nHealthy, delighted conclusions may offer experienced condi\tSignificantly\t5305\t2\t2\t2\nHitherto certain kinds evade also by a months. Poor points might make even just selective passengers. Old, general qualities could overcome over; recent variables might s\tJust good amou\t5280\t2\t2\t2\nIdeas total sadly. International members contribute perhaps lucky cells. Texts would acknowled\tSelective,\t5280\t2\t2\t2\nImportant audiences shall murder.\tSelective,\t5322\t2\t2\t2\nIn order suitable conditions used to eat more on a americans. As open rates shall go sometimes big authorities. Tickets respond for example pregnant, good women. Banks could exploit also full, old cr\tMatches produce\t5319\t2\t2\t2\nLittle, num\tMatches produce\t5322\t2\t2\t2\nNational, l\tOperations\t5318\t2\t2\t2\nObvious, present cells may not define appointed, military boys. Answers can get little. Over there comparative days become in a police. Particularly divine prop\tJust good amou\t5322\t2\t2\t2\nOutside, useful animals find again also british decisions; now small attitudes shall n\tSignificantly\t5322\t2\t2\t2\nParticular, british wa\tOperations\t5313\t2\t2\t2\nParts will recruit si\tSelective,\t5322\t2\t2\t2\nRational, given goods would stay just equal materials. Very alternative months might not buy voc\tJust good amou\t5319\t2\t2\t2\nRoyal, military notions will not find very very wet acids. Funny actions take western, remaining homes. Great patients will replace simply. Signs can think equivalent reasons. Campaigns \tMatches produce\t5313\t2\t2\t2\nShort companies get that is for a days. Early, special hands might keep to the women. Present blocks co-ordinate so sure results. Seasons lend still recent friends. Dead \tJust good amou\t5273\t2\t2\t2\nShort companies get that is for a days. Early, special hands might keep to the women. Present blocks co-ordinate so sure results. Seasons lend still recent friends. Dead \tSelective,\t5273\t2\t2\t2\nSpecialist, identical factors should not answer so main shares. Sales might reduce then free hours. Ethic\tJust good amou\t5306\t2\t2\t2\nThen sp\tSelective,\t5308\t2\t2\t2\nNULL\tJust good amou\t5297\t1\t1\t1\nNULL\tMatches produce\t5282\t1\t1\t1\nNULL\tOperations\t5308\t1\t1\t1\nNULL\tSelective,\t5318\t1\t1\t1\nA bit important \tSignificantly\t5284\t1\t1\t1\nA bit liable flowers change also writings. Currently soviet ministers come. Hotels telephone before aggressive, economic eyes. Blue changes improve. Overal\tOperations\t5317\t1\t1\t1\nA bit liable flowers change also writings. Currently soviet ministers come. 
Hotels telephone before aggressive, economic eyes. Blue changes improve. Overal\tSelective,\t5317\t1\t1\t1\nA little local letters think over like a children; nevertheless particular powers damage now suddenly absent prote\tSignificantly\t5317\t1\t1\t1\nAble systems merge from a areas. Most chief efforts must find never for the time being economic directors. Activities sit there. Available polic\tSelective,\t5305\t1\t1\t1\nAble troubles dust into the styles. Independent feet kill wounds. Fundamental months should exploit arms. Massive years read only modern courses; twin forms shall become products. Even h\tMatches produce\t5309\t1\t1\t1\nAble troubles dust into the styles. Independent feet kill wounds. Fundamental months should exploit arms. Massive years read only modern courses; twin forms shall become products. Even h\tOperations\t5309\t1\t1\t1\nAble, active jobs might not play upstairs. Electoral crimes could not worry for the solutions. Wholly capitalist effects would not get \tMatches produce\t5291\t1\t1\t1\nAble, initial men cannot assume then rational, new references; shares could support physical, internati\tOperations\t5304\t1\t1\t1\nAble, reasonable standards make forward. Strategic, \tJust good amou\t5319\t1\t1\t1\nAbout military programmes identify all in a thousands; able sentences mean also flats. Branches know secrets; right, increased interactions tour a little. High, lov\tSelective,\t5299\t1\t1\t1\nAbout supreme days tell then for a consequences. Ill items force meals; years may not mean quite social structures. Goals\tSignificantly\t5311\t1\t1\t1\nAbove, new groups will not like much local bodies. However traditional sessions can walk slowly big, young aspects. Quite close companies ought to take in a rules. Leaders must not like of cou\tJust good amou\t5314\t1\t1\t1\nAbove, new groups will not like much local bodies. However traditional sessions can walk slowly big, young aspects. Quite close companies ought to take in a rules. Leaders must not like of cou\tMatches produce\t5314\t1\t1\t1\nAbove, new groups will not like much local bodies. However traditional sessions can walk slowly big, young aspects. Quite close companies ought to take in a rules. Leaders must not like of cou\tMatches produce\t5315\t1\t1\t1\nAbsolute\tMatches produce\t5322\t1\t1\t1\nAbsolutely angry odds put strongly. Telecommunications help only recent, \tSelective,\t5316\t1\t1\t1\nAc\tSignificantly\t5306\t1\t1\t1\nAccessible, likel\tMatches produce\t5301\t1\t1\t1\nAccessible, likel\tSelective,\t5301\t1\t1\t1\nAccessible, old walls profit here. Wars form therefore as effective servants. Secrets could not feel meanwhile regional theories. Perfect, new service\tOperations\t5273\t1\t1\t1\nAccidents can include below other, marginal plans. Comparable, welsh exceptions argue most as usual physical claims. Certain months may smell far from in a cases. Active seconds used to restore t\tMatches produce\t5304\t1\t1\t1\nAccidents fly bet\tMatches produce\t5321\t1\t1\t1\nAccounts rank only high plans. Days sho\tJust good amou\t5302\t1\t1\t1\nAccurately economic workers play clearly. Deliberately other stands recapture social, cultural prices. Full paths used to make twice alw\tMatches produce\t5301\t1\t1\t1\nActions must not compare so economi\tMatches produce\t5320\t1\t1\t1\nActions see of course informal phrases. Markedly right men buy honest, additional stations. In order imaginative factors used to move human thanks. 
Centres shall catch altogether succe\tSignificantly\t5289\t1\t1\t1\nActual arrangements should introduce never in a unions. Ultimately d\tJust good amou\t5273\t1\t1\t1\nActual, possible sides employ here future hands. Powerful intervals ought to respond new, particular marks. Appointed, spiritual accidents sustain but modern, coming findings. Male, national year\tSelective,\t5304\t1\t1\t1\nActually subtle subjects mark as tories. Yet possible areas \tJust good amou\t5321\t1\t1\t1\nActually subtle subjects mark as tories. Yet possible areas \tMatches produce\t5320\t1\t1\t1\nAcutely possible kilometres cannot trim fully justly original visitors. Owners can transport from the connections. Then controversial girls might tell yet more big kinds. More typical houses g\tJust good amou\t5272\t1\t1\t1\nAcutely possible kilometres cannot trim fully justly original visitors. Owners can transport from the connections. Then controversial girls might tell yet more big kinds. More typical houses g\tSelective,\t5318\t1\t1\t1\nAdditional companies visit. Grey opportunities may not look numbers. Entire, british models assist also great quarters. Little males show\tJust good amou\t5284\t1\t1\t1\nAdditional companies visit. Grey opportunities may not look numbers. Entire, british models assist also great quarters. Little males show\tSelective,\t5284\t1\t1\t1\nAdditional figures consult relationships. Sole addresses convict right, \tOperations\t5322\t1\t1\t1\nAdvantages emerge moves; special, expected operations pass etc natural preferences; very posit\tSelective,\t5313\t1\t1\t1\nAfterwards defensive standards answer just almost informal officers. Now constant rights shall hear courses. Signs go on a budgets\tJust good amou\t5280\t1\t1\t1\nAfterwards oth\tJust good amou\t5277\t1\t1\t1\nAfterwards oth\tSelective,\t5277\t1\t1\t1\nAfterwards rich options go unlikely, welsh elections. Just gentle authors must not provi\tOperations\t5285\t1\t1\t1\nAfterwards written skills influence; english, level departments like just. Really legal rocks would \tJust good amou\t5300\t1\t1\t1\nAgain\tJust good amou\t5319\t1\t1\t1\nAgain brief things should remember only in a patients. Deals reply soon other points. Increasingly religious times necessitate farther troops. Both added programmes must come wonderfully solid pupi\tMatches produce\t5308\t1\t1\t1\nAgain brief things should remember only in a patients. Deals reply soon other points. Increasingly religious times necessitate farther troops. Both added programmes must come wonderfully solid pupi\tOperations\t5308\t1\t1\t1\nAgain brief things should remember only in a patients. Deals reply soon other points. Increasingly religious times necessitate farther troops. Both added programmes must come wonderfully solid pupi\tSelective,\t5308\t1\t1\t1\nAgain new teeth heat delicately. Just future officers\tJust good amou\t5319\t1\t1\t1\nAgain new teeth heat delicately. Just future officers\tOperations\t5294\t1\t1\t1\nAgain new teeth heat delicately. Just future officers\tSignificantly\t5294\t1\t1\t1\nAgain old police could work in the skills. Points announce agents. Pieces conform slowly to a hea\tSignificantly\t5307\t1\t1\t1\nAgain scottish women shall ag\tSignificantly\t5308\t1\t1\t1\nAgencies will not move criminal issues. Years mean very largel\tSelective,\t5305\t1\t1\t1\nAgents invest often things. French cars ought to get locally distinctive, local powers. More american entries compensate only\tOperations\t5317\t1\t1\t1\nAgo correct profits must not handle else. 
Healthy children may not go only ancient words. Later just characters ought to drink about. British parts must watch soon ago other clients. So vital d\tJust good amou\t5317\t1\t1\t1\nAgo correct profits must not handle else. Healthy children may not go only ancient words. Later just characters ought to drink about. British parts must watch soon ago other clients. So vital d\tOperations\t5304\t1\t1\t1\nAgo interested doctors meet really fair, cold minds. Fine children understand original procedures. So other services ought to\tOperations\t5303\t1\t1\t1\nAgo interested doctors meet really fair, cold minds. Fine children understand original procedures. So other services ought to\tSignificantly\t5303\t1\t1\t1\nAgo new studies shall not apply of course small forces. Dead parts used to point on a students. Then other students should pay only\tMatches produce\t5307\t1\t1\t1\nAgo sexual courts may attract. Important, alone observations expect. New, available ways represent years. Excell\tMatches produce\t5301\t1\t1\t1\nAhead national cir\tMatches produce\t5320\t1\t1\t1\nAll attractive ways develop originally lucky sites. New, single sounds might excuse enough senior savings. Other bacteria live across a concerns. Dark minutes s\tSignificantly\t5316\t1\t1\t1\nAll capital bacteria make jobs. Again appropriate eyes may not leave others. There fixed ways\tJust good amou\t5285\t1\t1\t1\nAll difficult emotions supervise. Mere \tJust good amou\t5322\t1\t1\t1\nAll environmental lips cannot catch; social, broad authorities add for no customers. Interes\tJust good amou\t5318\t1\t1\t1\nAll following systems develop home different words. Old minutes will come never independent, real duties. Policies used to distinguish all rats. E\tMatches produce\t5311\t1\t1\t1\nAll full things will not administer quickly difficult women. Ready, new arrangements ma\tJust good amou\t5284\t1\t1\t1\nAll full things will not administer quickly difficult women. Ready, new arrangements ma\tSignificantly\t5284\t1\t1\t1\nAll real fam\tOperations\t5318\t1\t1\t1\nAll real fam\tSelective,\t5318\t1\t1\t1\nAll right other details might distrib\tSelective,\t5277\t1\t1\t1\nAlmost busy threats go together recent sides; still tired wines shall not admit on a\tSignificantly\t5300\t1\t1\t1\nAlone arms happen again real documents. Paintings might not invite steps. Internal pairs may increase only rural rooms. Men must not deal here long, heavy patients; merely e\tMatches produce\t5310\t1\t1\t1\nAlready early meetings cannot go animals. As comprehensive evenings w\tJust good amou\t5320\t1\t1\t1\nAlready professional senses encourage broad theories. Nearly eastern eyes would describe correct, complex proposals. Friends change crimin\tSelective,\t5291\t1\t1\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q74.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<customer_id:string,customer_first_name:string,customer_last_name:string>\n-- !query output\nAAAAAAAAHEIFBAAA\tNULL\tJones                         \nAAAAAAAAGLDMAAAA\tAlex                \tNorris                        \nAAAAAAAAOEDIAAAA\tAlexander           \tRich                          \nAAAAAAAAKNMEBAAA\tAmber               \tGonzalez                      \nAAAAAAAAFGNEAAAA\tAndrew              \tSilva                         \nAAAAAAAACGLDAAAA\tAngelo              \tSloan                         \nAAAAAAAAOCDCAAAA\tArmando             \tJackson                       \nAAAAAAAABJEDBAAA\tArthur              \tBryan                         \nAAAAAAAAHGKLAAAA\tArthur              \tChristensen                   \nAAAAAAAAPAEEBAAA\tAudria              \tMattson                       \nAAAAAAAAFAIEAAAA\tBetty               \tGipson                        \nAAAAAAAAOCFLAAAA\tBill                \tFreeman                       \nAAAAAAAAILLJAAAA\tBilly               \tOrtiz                         \nAAAAAAAAGMFHAAAA\tBruce               \tHowe                          \nAAAAAAAAKAKPAAAA\tCarolann            \tRoyer                         \nAAAAAAAAAGLPAAAA\tCharlene            \tMarcus                        \nAAAAAAAABIIHAAAA\tCharles             \tQuarles                       \nAAAAAAAAIPKJAAAA\tCharles             \tJones                         \nAAAAAAAAPIGBBAAA\tCharles             \tWelch                         \nAAAAAAAACPDFBAAA\tCheryl              \tBarry                         \nAAAAAAAACOEHBAAA\tChristine           \tGonzalez                      \nAAAAAAAAPNMGAAAA\tChristine           \tOlds                          \nAAAAAAAAEFCEBAAA\tCornelius           \tMartino                       \nAAAAAAAAPEFLAAAA\tDavid               \tMartinez                      \nAAAAAAAAIBJDBAAA\tDean                \tVelez                         \nAAAAAAAAOOKKAAAA\tDeborah             \tEarly                         \nAAAAAAAALMGGBAAA\tDedra               \tRainey                        \nAAAAAAAANKBBAAAA\tDiann               \tSaunders                      \nAAAAAAAADKMBAAAA\tDonald              \tNelson                        \nAAAAAAAAFGMHBAAA\tDonald              \tColeman                       \nAAAAAAAALPHGBAAA\tDorothy             \tHeller                        \nAAAAAAAALEAHBAAA\tEddie               \tPena                          \nAAAAAAAAJINGAAAA\tElla                \tMoore                         \nAAAAAAAAIANDAAAA\tElva                \tWade                          \nAAAAAAAAEBFHAAAA\tEsther              \tMerrill                       \nAAAAAAAAAAECBAAA\tFrank               \tWenzel                        \nAAAAAAAAJGDLAAAA\tFredrick            \tDavis                         \nAAAAAAAAHLEAAAAA\tGeneva              \tSims                          \nAAAAAAAAFMOKAAAA\tHarry               \tBrown                         \nAAAAAAAAEIAHAAAA\tHenry               \tDesantis                      \nAAAAAAAALMAJAAAA\tIleen               \tLinn                          \nAAAAAAAACEMIAAAA\tJames               \tHernandez                     \nAAAAAAAAIBBFBAAA\tJames               \tCompton                       \nAAAAAAAALNLABAAA\tJanie               \tGarcia                        \nAAAAAAAABBEAAAAA\tJason               \tGallegos                      \nAAAAAAAAIODCBAAA\tJennifer            \tCrane                         \nAAAAAAAAGEKLAAAA\tJerilyn             \tWalker                        
\nAAAAAAAAMFMKAAAA\tJohn                \tSanders                       \nAAAAAAAAJCNBBAAA\tJohnnie             \tCox                           \nAAAAAAAAOJBPAAAA\tJonathan            \tMcbride                       \nAAAAAAAAABGKAAAA\tJonna               \tKing                          \nAAAAAAAAGGMHAAAA\tJulia               \tFisher                        \nAAAAAAAAHEPFBAAA\tKathryn             \tKinney                        \nAAAAAAAAOMOKAAAA\tLaurette            \tGary                          \nAAAAAAAABIABAAAA\tLetha               \tStone                         \nAAAAAAAAPFKDAAAA\tLinda               \tSimmons                       \nAAAAAAAAJDEFAAAA\tLoretta             \tSerrano                       \nAAAAAAAAFOEDAAAA\tLori                \tErwin                         \nAAAAAAAABAAGAAAA\tLuis                \tJames                         \nAAAAAAAAEIPIAAAA\tLuke                \tRios                          \nAAAAAAAAGCGIAAAA\tMae                 \tWilliams                      \nAAAAAAAAFMPPAAAA\tManuel              \tBryant                        \nAAAAAAAAMJFAAAAA\tMarcus              \tEspinal                       \nAAAAAAAAJADIAAAA\tMargaret            \tRoberts                       \nAAAAAAAAHLJCAAAA\tMarlene             \tGrover                        \nAAAAAAAABGMHBAAA\tMichael             \tGillespie                     \nAAAAAAAAIPGJAAAA\tMichael             \tNULL\nAAAAAAAANBECBAAA\tMichael             \tLombardi                      \nAAAAAAAAMLOEAAAA\tMiguel              \tJackson                       \nAAAAAAAAJHGFAAAA\tPamela              \tGannon                        \nAAAAAAAADFJBBAAA\tPatrick             \tJones                         \nAAAAAAAADHNHBAAA\tPatrick             \tCooper                        \nAAAAAAAAOPMDAAAA\tPeggy               \tSmith                         \nAAAAAAAAAFAGAAAA\tRobert              \tChang                         \nAAAAAAAAAFBNAAAA\tRobert              \tBaines                        \nAAAAAAAAKMHPAAAA\tRobert              \tJones                         \nAAAAAAAAJMIDAAAA\tSally               \tThurman                       \nAAAAAAAAJBELAAAA\tSean                \tBusby                         \nAAAAAAAAJIAHAAAA\tShawna              \tDelgado                       \nAAAAAAAAFDIMAAAA\tStephanie           \tCowan                         \nAAAAAAAAPBIGBAAA\tSusie               \tZavala                        \nAAAAAAAAGMGEBAAA\tTamika              \tPotts                         \nAAAAAAAAMHOLAAAA\tTerri               \tCook                          \nAAAAAAAABILCAAAA\tTheresa             \tMullins                       \nAAAAAAAADHCBAAAA\tTherese             \tPerez                         \nAAAAAAAAIBHHBAAA\tVennie              \tLoya                          \nAAAAAAAAFHNDAAAA\tVirgil              \tMcdonald                      \nAAAAAAAAAHKEAAAA\tWilliam             \tStafford                      \nAAAAAAAAHIEIAAAA\tWilliam             \tRoberts\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q75.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<prev_year:int,year:int,i_brand_id:int,i_class_id:int,i_category_id:int,i_manufact_id:int,prev_yr_cnt:bigint,curr_yr_cnt:bigint,sales_cnt_diff:bigint,sales_amt_diff:decimal(19,2)>\n-- !query output\n2001\t2002\t7009003\t16\t9\t545\t6720\t3891\t-2829\t-117008.58\n2001\t2002\t8005007\t5\t9\t512\t6454\t4227\t-2227\t-135735.94\n2001\t2002\t9001010\t1\t9\t456\t5643\t3436\t-2207\t-93617.05\n2001\t2002\t10013015\t3\t9\t127\t6225\t4143\t-2082\t-14567.99\n2001\t2002\t9015004\t10\t9\t118\t7332\t5259\t-2073\t-77291.51\n2001\t2002\t9004002\t4\t9\t435\t6625\t4581\t-2044\t-122670.86\n2001\t2002\t9014004\t3\t9\t306\t5855\t3858\t-1997\t-73620.65\n2001\t2002\t5002001\t5\t9\t170\t5700\t3736\t-1964\t-109018.53\n2001\t2002\t9014010\t1\t9\t987\t6970\t5148\t-1822\t-75086.23\n2001\t2002\t9015008\t15\t9\t327\t6526\t4739\t-1787\t-93540.56\n2001\t2002\t9016002\t16\t9\t197\t6053\t4343\t-1710\t-97667.31\n2001\t2002\t9015002\t15\t9\t856\t10449\t8783\t-1666\t-20233.37\n2001\t2002\t3001001\t1\t9\t380\t6554\t4895\t-1659\t-71747.73\n2001\t2002\t5002001\t14\t9\t75\t6649\t5024\t-1625\t-114258.52\n2001\t2002\t9004008\t4\t9\t787\t6493\t4911\t-1582\t-133842.63\n2001\t2002\t3001001\t4\t9\t652\t6331\t4766\t-1565\t-74417.45\n2001\t2002\t5004001\t11\t9\t963\t6179\t4628\t-1551\t-85648.92\n2001\t2002\t3004001\t7\t9\t237\t6097\t4560\t-1537\t-118933.63\n2001\t2002\t1001001\t8\t9\t271\t5623\t4128\t-1495\t-50180.45\n2001\t2002\t10004009\t4\t9\t513\t6500\t5024\t-1476\t-67288.17\n2001\t2002\t4004001\t10\t9\t126\t5667\t4194\t-1473\t-88247.10\n2001\t2002\t9007002\t7\t9\t144\t5474\t4020\t-1454\t-42908.70\n2001\t2002\t9016008\t16\t9\t53\t6099\t4671\t-1428\t-67851.70\n2001\t2002\t9007008\t7\t9\t110\t5655\t4230\t-1425\t-28768.95\n2001\t2002\t9002008\t2\t9\t97\t5979\t4581\t-1398\t-101148.97\n2001\t2002\t9008004\t8\t9\t171\t5952\t4559\t-1393\t-41622.99\n2001\t2002\t10010001\t10\t9\t368\t5714\t4353\t-1361\t-78871.35\n2001\t2002\t3003001\t8\t9\t266\t5785\t4431\t-1354\t-99666.06\n2001\t2002\t9011002\t11\t9\t581\t6045\t4721\t-1324\t-61169.99\n2001\t2002\t9012004\t12\t9\t233\t5370\t4050\t-1320\t-135207.43\n2001\t2002\t9008008\t8\t9\t84\t5431\t4114\t-1317\t-48625.72\n2001\t2002\t9010010\t7\t9\t962\t6123\t4819\t-1304\t-34943.08\n2001\t2002\t9009008\t9\t9\t135\t5984\t4698\t-1286\t-53184.41\n2001\t2002\t9014008\t14\t9\t52\t5835\t4555\t-1280\t-47886.49\n2001\t2002\t9005008\t5\t9\t326\t5773\t4494\t-1279\t-47219.15\n2001\t2002\t9011002\t11\t9\t806\t5487\t4208\t-1279\t-34285.63\n2001\t2002\t1003001\t3\t9\t57\t5893\t4631\t-1262\t-24156.11\n2001\t2002\t5003001\t3\t9\t456\t5617\t4359\t-1258\t-46563.26\n2001\t2002\t9015008\t15\t9\t829\t4998\t3781\t-1217\t15209.89\n2001\t2002\t7007001\t7\t9\t211\t6277\t5071\t-1206\t-74262.01\n2001\t2002\t9011008\t11\t9\t541\t5654\t4449\t-1205\t-50137.27\n2001\t2002\t6010001\t4\t9\t91\t6373\t5169\t-1204\t-40582.11\n2001\t2002\t4003001\t3\t9\t175\t6106\t4907\t-1199\t-84819.82\n2001\t2002\t9009008\t9\t9\t83\t5880\t4689\t-1191\t-54463.08\n2001\t2002\t9014010\t14\t9\t201\t6106\t4919\t-1187\t-5741.72\n2001\t2002\t9012010\t16\t9\t173\t6576\t5390\t-1186\t-57504.67\n2001\t2002\t9014002\t14\t9\t966\t6066\t4915\t-1151\t-31274.42\n2001\t2002\t9003004\t2\t9\t29\t5232\t4084\t-1148\t-85408.38\n2001\t2002\t9008002\t8\t9\t586\t6212\t5079\t-1133\t-45166.64\n2001\t2002\t9004010\t4\t9\t87\t5551\t4428\t-1123\t-104064.28\n2001\t2002\t9013002\t13\t9\t18\t5087\t3977\t-1110\t34375.04\n2001\t2002\t9001008\t1\t9\t65\t5429\t4333\t-1096\t-112773.38\n2001\t2002\t800700
3\t4\t9\t290\t6148\t5075\t-1073\t-48541.94\n2001\t2002\t9004002\t4\t9\t506\t5503\t4432\t-1071\t-47389.52\n2001\t2002\t9001009\t1\t9\t73\t5116\t4053\t-1063\t-12805.29\n2001\t2002\t9003008\t3\t9\t156\t5719\t4665\t-1054\t-27241.44\n2001\t2002\t9011008\t11\t9\t324\t5293\t4253\t-1040\t-78455.59\n2001\t2002\t9011008\t11\t9\t525\t5269\t4243\t-1026\t-8416.01\n2001\t2002\t9009004\t9\t9\t599\t5642\t4622\t-1020\t-7244.59\n2001\t2002\t9004010\t4\t9\t146\t5387\t4373\t-1014\t-96437.31\n2001\t2002\t9002010\t2\t9\t64\t4911\t3898\t-1013\t-70357.40\n2001\t2002\t2003001\t3\t9\t4\t6089\t5078\t-1011\t-14326.99\n2001\t2002\t9008002\t8\t9\t433\t5707\t4697\t-1010\t-59856.93\n2001\t2002\t9015002\t15\t9\t120\t5419\t4409\t-1010\t-25557.35\n2001\t2002\t9008010\t3\t9\t64\t6024\t5020\t-1004\t15466.29\n2001\t2002\t1003001\t5\t9\t139\t6059\t5060\t-999\t-23720.17\n2001\t2002\t9003002\t3\t9\t281\t5640\t4651\t-989\t-62224.16\n2001\t2002\t9013008\t13\t9\t660\t5400\t4411\t-989\t-6728.84\n2001\t2002\t9013002\t13\t9\t30\t4967\t3997\t-970\t-74733.18\n2001\t2002\t6014001\t14\t9\t977\t5339\t4369\t-970\t-34999.44\n2001\t2002\t8012009\t12\t9\t149\t5872\t4905\t-967\t-25425.45\n2001\t2002\t9004009\t4\t9\t618\t5522\t4576\t-946\t-59306.75\n2001\t2002\t7006009\t8\t9\t116\t5471\t4528\t-943\t-65664.87\n2001\t2002\t9014010\t15\t9\t151\t5633\t4701\t-932\t7429.42\n2001\t2002\t9003002\t3\t9\t877\t6303\t5379\t-924\t-45324.79\n2001\t2002\t9006008\t6\t9\t162\t5169\t4251\t-918\t-69642.81\n2001\t2002\t9008010\t8\t9\t503\t5526\t4608\t-918\t-49021.02\n2001\t2002\t9005002\t5\t9\t530\t4962\t4049\t-913\t-51406.91\n2001\t2002\t9010002\t10\t9\t236\t5107\t4195\t-912\t-68196.83\n2001\t2002\t9015004\t15\t9\t134\t4996\t4092\t-904\t-20839.46\n2001\t2002\t9009009\t6\t9\t181\t5283\t4380\t-903\t-50577.89\n2001\t2002\t9009010\t16\t9\t590\t4684\t3787\t-897\t-42032.78\n2001\t2002\t9006008\t6\t9\t221\t5238\t4347\t-891\t-110703.28\n2001\t2002\t10009017\t9\t9\t640\t5562\t4682\t-880\t-91021.88\n2001\t2002\t7013003\t13\t9\t313\t5161\t4284\t-877\t-55538.42\n2001\t2002\t3004001\t12\t9\t570\t5053\t4181\t-872\t-81469.82\n2001\t2002\t9009004\t9\t9\t587\t5131\t4262\t-869\t-31202.41\n2001\t2002\t9010008\t10\t9\t86\t5074\t4211\t-863\t-31897.54\n2001\t2002\t9006002\t6\t9\t230\t5616\t4756\t-860\t-39142.42\n2001\t2002\t9001008\t1\t9\t533\t6196\t5337\t-859\t-70174.46\n2001\t2002\t9004010\t4\t9\t102\t6146\t5291\t-855\t-66567.87\n2001\t2002\t9002004\t2\t9\t179\t5203\t4351\t-852\t-26624.64\n2001\t2002\t9016008\t16\t9\t69\t5922\t5072\t-850\t-31016.10\n2001\t2002\t9008010\t8\t9\t81\t5520\t4672\t-848\t-44217.60\n2001\t2002\t9001002\t1\t9\t927\t5858\t5013\t-845\t8179.45\n2001\t2002\t1001001\t1\t9\t116\t5773\t4930\t-843\t700.02\n2001\t2002\t9012008\t12\t9\t274\t6154\t5316\t-838\t-85490.33\n2001\t2002\t6008007\t8\t9\t22\t5439\t4605\t-834\t-25617.83\n2001\t2002\t9016008\t16\t9\t601\t6463\t5633\t-830\t22715.94\n2001\t2002\t9001008\t1\t9\t170\t5970\t5142\t-828\t-2265.18\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q77a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,id:int,sales:decimal(37,2),returns:decimal(37,2),profit:decimal(38,2)>\n-- !query output\nNULL\tNULL\t239062306.14\t9940693.53\t-67351905.74\ncatalog channel\tNULL\t120443.39\t1680292.45\t-994006.90\ncatalog channel\tNULL\t81456313.49\t6721169.80\t-11963308.94\ncatalog channel\t1\t25511213.21\t1680292.45\t-4013845.35\ncatalog channel\t2\t28320909.41\t1680292.45\t-3815679.20\ncatalog channel\t4\t27503747.48\t1680292.45\t-3139777.49\nstore channel\tNULL\t114950675.73\t2650053.56\t-51250807.34\nstore channel\t1\t19900057.08\t430318.96\t-9021290.46\nstore channel\t2\t19357997.15\t420826.73\t-8260880.77\nstore channel\t4\t18625533.75\t413145.75\t-8693775.33\nstore channel\t7\t19392254.22\t523129.28\t-8781756.70\nstore channel\t8\t18443448.06\t453191.53\t-8177970.58\nstore channel\t10\t19231385.47\t409441.31\t-8315133.50\nweb channel\tNULL\t42655316.92\t569470.17\t-4137789.46\nweb channel\t1\t1484216.46\t22578.30\t-145384.36\nweb channel\t2\t1434288.77\t8368.29\t-74179.21\nweb channel\t4\t1366508.91\t23624.40\t-200293.36\nweb channel\t7\t1357762.92\t40137.83\t-123205.40\nweb channel\t8\t1488902.03\t21963.58\t-135533.22\nweb channel\t10\t1568120.92\t11055.99\t-148192.16\nweb channel\t13\t1322895.39\t16655.94\t-110269.99\nweb channel\t14\t1367483.46\t8838.41\t-184084.28\nweb channel\t16\t1393205.82\t26800.78\t-87656.13\nweb channel\t19\t1364048.12\t13570.75\t-121815.95\nweb channel\t20\t1313170.80\t14044.48\t-194480.66\nweb channel\t22\t1372505.20\t19743.54\t-82372.11\nweb channel\t25\t1483839.68\t17464.70\t12637.58\nweb channel\t26\t1351984.72\t16290.97\t-210017.25\nweb channel\t28\t1515110.49\t34693.25\t-87241.61\nweb channel\t31\t1472020.01\t6383.43\t-163741.34\nweb channel\t32\t1371605.44\t23901.58\t-201414.74\nweb channel\t34\t1550210.89\t15290.60\t-189498.78\nweb channel\t37\t1292707.71\t17894.26\t-179114.08\nweb channel\t38\t1271096.72\t16767.70\t-211804.95\nweb channel\t40\t1529058.38\t21329.09\t-126745.86\nweb channel\t43\t1489558.57\t24760.75\t-100645.79\nweb channel\t44\t1506795.25\t9375.29\t-161279.12\nweb channel\t46\t1311086.33\t13347.60\t-144246.40\nweb channel\t49\t1492170.04\t21268.41\t-97796.30\nweb channel\t50\t1486817.09\t14070.53\t-109583.01\nweb channel\t52\t1564493.01\t26771.44\t-74766.72\nweb channel\t55\t1330931.73\t18908.13\t-235881.43\nweb channel\t56\t1448998.86\t26396.32\t-131860.48\nweb channel\t58\t1353723.20\t17173.83\t-117322.35\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q78.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<ratio:double,store_qty:bigint,store_wholesale_cost:decimal(17,2),store_sales_price:decimal(17,2),other_chan_qty:bigint,other_chan_wholesale_cost:decimal(18,2),other_chan_sales_price:decimal(18,2)>\n-- !query output\n\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q80a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<channel:string,id:string,sales:decimal(37,2),returns:decimal(38,2),profit:decimal(38,2)>\n-- !query output\nNULL\tNULL\t12394002.28\t579360.90\t-3565544.29\ncatalog channel\tNULL\t4336403.99\t190045.19\t-496576.74\ncatalog channel\tcatalog_pageAAAAAAAAAABAAAAA\t4438.43\t383.01\t-4797.32\ncatalog channel\tcatalog_pageAAAAAAAAAADAAAAA\t1459.86\t0.00\t-538.24\ncatalog channel\tcatalog_pageAAAAAAAAABBAAAAA\t6092.40\t0.00\t-37.09\ncatalog channel\tcatalog_pageAAAAAAAAABDAAAAA\t878.66\t0.00\t401.04\ncatalog channel\tcatalog_pageAAAAAAAAACBAAAAA\t15564.33\t0.00\t-743.39\ncatalog channel\tcatalog_pageAAAAAAAAACDAAAAA\t19434.58\t0.00\t-1241.86\ncatalog channel\tcatalog_pageAAAAAAAAADBAAAAA\t3357.08\t0.00\t-223.80\ncatalog channel\tcatalog_pageAAAAAAAAADDAAAAA\t17773.31\t0.00\t10583.67\ncatalog channel\tcatalog_pageAAAAAAAAAEBAAAAA\t11505.75\t0.00\t1975.14\ncatalog channel\tcatalog_pageAAAAAAAAAEDAAAAA\t14132.65\t0.00\t-3283.42\ncatalog channel\tcatalog_pageAAAAAAAAAFDAAAAA\t8636.84\t216.08\t-1736.94\ncatalog channel\tcatalog_pageAAAAAAAAAGDAAAAA\t17045.61\t145.32\t1162.68\ncatalog channel\tcatalog_pageAAAAAAAAAHAAAAAA\t19082.44\t2556.07\t-16790.11\ncatalog channel\tcatalog_pageAAAAAAAAAIAAAAAA\t7785.09\t33.84\t-3860.11\ncatalog channel\tcatalog_pageAAAAAAAAAJAAAAAA\t33875.64\t0.00\t6085.44\ncatalog channel\tcatalog_pageAAAAAAAAAKAAAAAA\t4069.90\t3395.00\t-57.06\ncatalog channel\tcatalog_pageAAAAAAAAALAAAAAA\t10778.98\t0.00\t-6320.25\ncatalog channel\tcatalog_pageAAAAAAAAAMAAAAAA\t23566.51\t0.00\t1270.50\ncatalog channel\tcatalog_pageAAAAAAAAANAAAAAA\t11997.42\t0.00\t-7811.93\ncatalog channel\tcatalog_pageAAAAAAAAANCAAAAA\t1574.03\t0.00\t170.61\ncatalog channel\tcatalog_pageAAAAAAAAAOAAAAAA\t12768.66\t0.00\t-1180.46\ncatalog channel\tcatalog_pageAAAAAAAAAPAAAAAA\t15591.17\t0.00\t617.97\ncatalog channel\tcatalog_pageAAAAAAAAAPCAAAAA\t0.00\t0.00\t-2448.27\ncatalog channel\tcatalog_pageAAAAAAAABABAAAAA\t22630.47\t0.00\t4803.56\ncatalog channel\tcatalog_pageAAAAAAAABADAAAAA\t4689.30\t0.00\t1920.53\ncatalog channel\tcatalog_pageAAAAAAAABBBAAAAA\t8799.41\t1594.88\t-1551.23\ncatalog channel\tcatalog_pageAAAAAAAABBDAAAAA\t2900.27\t0.00\t-1813.33\ncatalog channel\tcatalog_pageAAAAAAAABCBAAAAA\t19972.81\t189.75\t5730.83\ncatalog channel\tcatalog_pageAAAAAAAABCDAAAAA\t24932.09\t2122.08\t3205.82\ncatalog channel\tcatalog_pageAAAAAAAABDBAAAAA\t5321.51\t0.00\t143.34\ncatalog channel\tcatalog_pageAAAAAAAABDDAAAAA\t2502.32\t0.00\t-1101.24\ncatalog channel\tcatalog_pageAAAAAAAABHAAAAAA\t3734.33\t0.00\t-3819.22\ncatalog channel\tcatalog_pageAAAAAAAABIAAAAAA\t37573.12\t586.70\t5857.09\ncatalog channel\tcatalog_pageAAAAAAAABJAAAAAA\t12330.42\t0.00\t-1486.17\ncatalog channel\tcatalog_pageAAAAAAAABJCAAAAA\t6097.67\t0.00\t3736.59\ncatalog channel\tcatalog_pageAAAAAAAABKAAAAAA\t18001.80\t41.67\t-7542.13\ncatalog channel\tcatalog_pageAAAAAAAABLAAAAAA\t21210.22\t0.00\t2386.64\ncatalog channel\tcatalog_pageAAAAAAAABMAAAAAA\t27450.22\t35.95\t-6130.18\ncatalog channel\tcatalog_pageAAAAAAAABNAAAAAA\t24544.71\t0.00\t-19634.16\ncatalog channel\tcatalog_pageAAAAAAAABNCAAAAA\t1254.78\t23.70\t-2121.59\ncatalog channel\tcatalog_pageAAAAAAAABOAAAAAA\t7397.94\t0.00\t-2259.68\ncatalog channel\tcatalog_pageAAAAAAAABPAAAAAA\t23154.42\t0.00\t744.22\ncatalog channel\tcatalog_pageAAAAAAAACABAAAAA\t4702.71\t0.00\t-13312.05\ncatalog channel\tcatalog_pageAAAAAAAACADAAAAA\t1010.39\t0.00\t-2765.23\ncatalog 
channel\tcatalog_pageAAAAAAAACBBAAAAA\t74.70\t0.00\t-1114.49\ncatalog channel\tcatalog_pageAAAAAAAACBDAAAAA\t9217.35\t0.00\t3327.65\ncatalog channel\tcatalog_pageAAAAAAAACCBAAAAA\t11097.18\t0.00\t3019.32\ncatalog channel\tcatalog_pageAAAAAAAACCDAAAAA\t771.30\t0.00\t-365.85\ncatalog channel\tcatalog_pageAAAAAAAACDBAAAAA\t3370.78\t61.32\t-1489.09\ncatalog channel\tcatalog_pageAAAAAAAACEBAAAAA\t3646.15\t0.00\t1089.86\ncatalog channel\tcatalog_pageAAAAAAAACEDAAAAA\t2402.24\t0.00\t-1262.38\ncatalog channel\tcatalog_pageAAAAAAAACFDAAAAA\t2351.04\t581.76\t-3631.49\ncatalog channel\tcatalog_pageAAAAAAAACHAAAAAA\t14363.16\t72.40\t-6022.88\ncatalog channel\tcatalog_pageAAAAAAAACIAAAAAA\t17233.80\t290.61\t-1149.33\ncatalog channel\tcatalog_pageAAAAAAAACJAAAAAA\t17334.55\t0.00\t-5325.06\ncatalog channel\tcatalog_pageAAAAAAAACKAAAAAA\t33391.87\t0.00\t5427.63\ncatalog channel\tcatalog_pageAAAAAAAACLAAAAAA\t29852.16\t55.65\t-4206.17\ncatalog channel\tcatalog_pageAAAAAAAACMAAAAAA\t13596.19\t0.00\t1289.63\ncatalog channel\tcatalog_pageAAAAAAAACNAAAAAA\t12652.74\t2776.86\t-10512.27\ncatalog channel\tcatalog_pageAAAAAAAACOAAAAAA\t15417.46\t0.00\t8.02\ncatalog channel\tcatalog_pageAAAAAAAACPAAAAAA\t6001.13\t0.00\t-948.15\ncatalog channel\tcatalog_pageAAAAAAAACPCAAAAA\t6110.72\t0.00\t-34.56\ncatalog channel\tcatalog_pageAAAAAAAADABAAAAA\t0.00\t0.00\t-674.57\ncatalog channel\tcatalog_pageAAAAAAAADADAAAAA\t13815.26\t0.00\t4963.37\ncatalog channel\tcatalog_pageAAAAAAAADBBAAAAA\t4417.61\t0.00\t1955.36\ncatalog channel\tcatalog_pageAAAAAAAADBDAAAAA\t9645.90\t0.00\t2941.05\ncatalog channel\tcatalog_pageAAAAAAAADCBAAAAA\t4124.08\t0.00\t-7880.15\ncatalog channel\tcatalog_pageAAAAAAAADCDAAAAA\t20198.02\t0.00\t-3874.19\ncatalog channel\tcatalog_pageAAAAAAAADDBAAAAA\t7697.49\t875.52\t3573.72\ncatalog channel\tcatalog_pageAAAAAAAADDDAAAAA\t81.18\t0.00\t-40.95\ncatalog channel\tcatalog_pageAAAAAAAADEBAAAAA\t7221.62\t0.00\t-2080.43\ncatalog channel\tcatalog_pageAAAAAAAADEDAAAAA\t31633.70\t0.00\t1203.47\ncatalog channel\tcatalog_pageAAAAAAAADFDAAAAA\t4486.06\t0.00\t-194.53\ncatalog channel\tcatalog_pageAAAAAAAADHAAAAAA\t29478.34\t0.00\t2568.57\ncatalog channel\tcatalog_pageAAAAAAAADIAAAAAA\t3541.34\t0.00\t-5860.49\ncatalog channel\tcatalog_pageAAAAAAAADJAAAAAA\t27011.83\t995.61\t-13747.67\ncatalog channel\tcatalog_pageAAAAAAAADKAAAAAA\t22422.65\t0.00\t10022.26\ncatalog channel\tcatalog_pageAAAAAAAADLAAAAAA\t25580.22\t1878.40\t3693.93\ncatalog channel\tcatalog_pageAAAAAAAADMAAAAAA\t14964.93\t0.00\t-3775.51\ncatalog channel\tcatalog_pageAAAAAAAADNAAAAAA\t7954.26\t7.41\t-11362.44\ncatalog channel\tcatalog_pageAAAAAAAADOAAAAAA\t20282.62\t389.76\t20.54\ncatalog channel\tcatalog_pageAAAAAAAADOCAAAAA\t2056.15\t1394.00\t-96.79\ncatalog channel\tcatalog_pageAAAAAAAADPAAAAAA\t2325.25\t0.00\t-3003.94\ncatalog channel\tcatalog_pageAAAAAAAADPCAAAAA\t5.79\t0.00\t0.11\ncatalog channel\tcatalog_pageAAAAAAAAEABAAAAA\t5870.11\t0.00\t-2224.89\ncatalog channel\tcatalog_pageAAAAAAAAEADAAAAA\t15887.40\t294.38\t1553.93\ncatalog channel\tcatalog_pageAAAAAAAAEBBAAAAA\t750.60\t48.60\t-1121.83\ncatalog channel\tcatalog_pageAAAAAAAAEBDAAAAA\t2076.12\t1787.04\t-7886.05\ncatalog channel\tcatalog_pageAAAAAAAAECBAAAAA\t18310.64\t2076.12\t1185.01\ncatalog channel\tcatalog_pageAAAAAAAAECDAAAAA\t6384.54\t0.00\t-2290.17\ncatalog channel\tcatalog_pageAAAAAAAAEDBAAAAA\t6470.98\t0.00\t3092.44\ncatalog channel\tcatalog_pageAAAAAAAAEDDAAAAA\t7165.64\t0.00\t-169.94\ncatalog channel\tcatalog_pageAAAAAAAAEEBAAAAA\t8572.26\t697.82\t-1507.83\ncatalog 
channel\tcatalog_pageAAAAAAAAEEDAAAAA\t9693.59\t811.58\t-7987.68\ncatalog channel\tcatalog_pageAAAAAAAAEFDAAAAA\t5850.24\t0.00\t2898.90\ncatalog channel\tcatalog_pageAAAAAAAAEHAAAAAA\t18074.80\t2563.92\t-2315.60\ncatalog channel\tcatalog_pageAAAAAAAAEIAAAAAA\t1842.65\t0.00\t-4113.10\ncatalog channel\tcatalog_pageAAAAAAAAEJAAAAAA\t31976.85\t194.76\t4337.39\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q86a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<total_sum:decimal(27,2),i_category:string,i_class:string,lochierarchy:int,rank_within_parent:int>\n-- !query output\n329332948.59\tNULL\tNULL\t2\t1\n35765359.17\tBooks                                             \tNULL\t1\t1\n34301963.73\tMen                                               \tNULL\t1\t2\n34299263.29\tHome                                              \tNULL\t1\t3\n34185638.04\tChildren                                          \tNULL\t1\t4\n33632557.65\tElectronics                                       \tNULL\t1\t5\n32679955.12\tMusic                                             \tNULL\t1\t6\n31276464.49\tWomen                                             \tNULL\t1\t7\n30973438.42\tShoes                                             \tNULL\t1\t8\n30893463.64\tJewelry                                           \tNULL\t1\t9\n30492050.58\tSports                                            \tNULL\t1\t10\n832794.46\tNULL\tNULL\t1\t11\n487784.65\tNULL\tNULL\t0\t1\n91077.43\tNULL\tdresses                                           \t0\t2\n48174.49\tNULL\tathletic                                          \t0\t3\n45303.17\tNULL\taccessories                                       \t0\t4\n38544.63\tNULL\tcountry                                           \t0\t5\n36691.45\tNULL\tinfants                                           \t0\t6\n33102.66\tNULL\tshirts                                            \t0\t7\n31974.19\tNULL\tbaseball                                          \t0\t8\n20141.79\tNULL\tmens                                              \t0\t9\n2788922.44\tBooks                                             \thistory                                           \t0\t1\n2627303.83\tBooks                                             \tfiction                                           \t0\t2\n2559899.95\tBooks                                             \tbusiness                                          \t0\t3\n2523972.68\tBooks                                             \tsports                                            \t0\t4\n2478882.85\tBooks                                             \tparenting                                         \t0\t5\n2454802.70\tBooks                                             \thome repair                                       \t0\t6\n2311733.05\tBooks                                             \tscience                                           \t0\t7\n2267995.48\tBooks                                             \treference                                         \t0\t8\n2199284.41\tBooks                                             \ttravel                                            \t0\t9\n2143489.54\tBooks                                             \tself-help                                         \t0\t10\n2136284.47\tBooks                                             \tromance                                           \t0\t11\n2047067.66\tBooks                                             \tcomputers                                         \t0\t12\n2024086.51\tBooks                                             \tmystery                                           \t0\t13\n1841858.53\tBooks                                             \tcooking                                           \t0\t14\n1718405.56\tBooks                                             \tarts                                              \t0\t15\n1641369.51\tBooks                                    
         \tentertainments                                    \t0\t16\n9726340.87\tChildren                                          \tschool-uniforms                                   \t0\t1\n8423026.60\tChildren                                          \tinfants                                           \t0\t2\n8110705.74\tChildren                                          \ttoddlers                                          \t0\t3\n7857828.46\tChildren                                          \tnewborn                                           \t0\t4\n67736.37\tChildren                                          \tNULL\t0\t5\n2545288.50\tElectronics                                       \ttelevisions                                       \t0\t1\n2432288.94\tElectronics                                       \tmemory                                            \t0\t2\n2393855.24\tElectronics                                       \tstereo                                            \t0\t3\n2322478.19\tElectronics                                       \tkaroke                                            \t0\t4\n2312404.32\tElectronics                                       \taudio                                             \t0\t5\n2297793.09\tElectronics                                       \tautomotive                                        \t0\t6\n2175340.77\tElectronics                                       \tcamcorders                                        \t0\t7\n2155848.29\tElectronics                                       \tdvd/vcr players                                   \t0\t8\n2151409.30\tElectronics                                       \tportable                                          \t0\t9\n2126590.68\tElectronics                                       \tmonitors                                          \t0\t10\n2049010.73\tElectronics                                       \tpersonal                                          \t0\t11\n1975162.86\tElectronics                                       \tdisk drives                                       \t0\t12\n1824298.91\tElectronics                                       \twireless                                          \t0\t13\n1747334.28\tElectronics                                       \tmusical                                           \t0\t14\n1647065.60\tElectronics                                       \tscanners                                          \t0\t15\n1476387.95\tElectronics                                       \tcameras                                           \t0\t16\n2878895.80\tHome                                              \trugs                                              \t0\t1\n2433373.20\tHome                                              \twallpaper                                         \t0\t2\n2394255.96\tHome                                              \tbedding                                           \t0\t3\n2360342.61\tHome                                              \tmattresses                                        \t0\t4\n2349809.40\tHome                                              \tpaint                                             \t0\t5\n2342593.19\tHome                                              \ttables                                            \t0\t6\n2255327.66\tHome                                              \taccent                                            \t0\t7\n2238644.85\tHome                                              \tlighting                                      
    \t0\t8\n2195356.00\tHome                                              \tglassware                                         \t0\t9\n2001058.99\tHome                                              \tcurtains/drapes                                   \t0\t10\n1995854.06\tHome                                              \tbathroom                                          \t0\t11\n1932033.64\tHome                                              \tdecor                                             \t0\t12\n1789688.36\tHome                                              \tflatware                                          \t0\t13\n1767061.55\tHome                                              \tblinds/shades                                     \t0\t14\n1730423.41\tHome                                              \tkids                                              \t0\t15\n1585801.82\tHome                                              \tfurniture                                         \t0\t16\n48742.79\tHome                                              \tNULL\t0\t17\n2309049.65\tJewelry                                           \tloose stones                                      \t0\t1\n2291438.75\tJewelry                                           \tbracelets                                         \t0\t2\n2204443.86\tJewelry                                           \tcustom                                            \t0\t3\n1975110.14\tJewelry                                           \tmens watch                                        \t0\t4\n1964522.90\tJewelry                                           \tdiamonds                                          \t0\t5\n1950628.31\tJewelry                                           \testate                                            \t0\t6\n1950233.65\tJewelry                                           \twomens watch                                      \t0\t7\n1948966.11\tJewelry                                           \tbirdal                                            \t0\t8\n1934844.79\tJewelry                                           \trings                                             \t0\t9\n1920939.20\tJewelry                                           \tgold                                              \t0\t10\n1912539.77\tJewelry                                           \tsemi-precious                                     \t0\t11\n1816107.81\tJewelry                                           \tconsignment                                       \t0\t12\n1793972.84\tJewelry                                           \tearings                                           \t0\t13\n1783332.27\tJewelry                                           \tpendants                                          \t0\t14\n1720324.59\tJewelry                                           \tcostume                                           \t0\t15\n1417009.00\tJewelry                                           \tjewelry boxes                                     \t0\t16\n8970878.34\tMen                                               \tsports-apparel                                    \t0\t1\n8843683.02\tMen                                               \tpants                                             \t0\t2\n8340416.89\tMen                                               \taccessories                                       \t0\t3\n8054511.00\tMen                                               \tshirts                                            \t0\t4\n92474.48\tMen                         
                      \tNULL\t0\t5\n8987804.92\tMusic                                             \trock                                              \t0\t1\n8141376.53\tMusic                                             \tcountry                                           \t0\t2\n7793743.27\tMusic                                             \tclassical                                         \t0\t3\n7727726.97\tMusic                                             \tpop                                               \t0\t4\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7/q98.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<i_item_id:string,i_item_desc:string,i_category:string,i_class:string,i_current_price:decimal(7,2),itemrevenue:decimal(17,2),revenueratio:decimal(38,17)>\n-- !query output\nAAAAAAAAMJJBAAAA\tNULL\tBooks                                             \tNULL\tNULL\t557.55\t2.41577919115267179\nAAAAAAAAMLGDAAAA\tNULL\tBooks                                             \tNULL\t6.35\t361.02\t1.56424464817493959\nAAAAAAAAAELBAAAA\tPrecisely elderly bodies\tBooks                                             \tarts                                              \t1.40\t4303.08\t1.75015577438987686\nAAAAAAAAAHMAAAAA\tAbilities could affect cruel parts. Predominantly other events telephone strong signs. Accurate mate\tBooks                                             \tarts                                              \t25.69\t9236.96\t3.75687156218529913\nAAAAAAAAAIDDAAAA\tGerma\tBooks                                             \tarts                                              \t5.82\t3191.92\t1.29822295179047002\nAAAAAAAAALNCAAAA\tGreat, contemporary workers would not remove of course cultural values. Then due children might see positive seconds. Significant problems w\tBooks                                             \tarts                                              \t0.55\t2096.07\t0.85251703756969175\nAAAAAAAAANKAAAAA\tSmall objects stop etc mediterranean patterns; liberal, free initiatives would not leave less clear british attitudes; good, blue relationships find softly very\tBooks                                             \tarts                                              \t58.41\t5760.88\t2.34307458786895754\nAAAAAAAABGDAAAAA\tNewly national rights head curiously all electrical cells. Chinese, long values might not pull bad lines. High fun clothes ough\tBooks                                             \tarts                                              \t3.28\t571.10\t0.23227873122369528\nAAAAAAAABKLDAAAA\tForward psychological plants establish closely yet eastern changes. Likewise necessary techniques might drop. Pleasant operations like lonely things; dogs let regions. Forces might not result clearl\tBooks                                             \tarts                                              \t2.43\t3457.82\t1.40637023708618106\nAAAAAAAACMFDAAAA\tEarly, short v\tBooks                                             \tarts                                              \t75.57\t850.08\t0.34574593563060564\nAAAAAAAADLLDAAAA\tBlack, following services justify by a investors; dirty, different charts will fly however prizes. Temporary, l\tBooks                                             \tarts                                              \t5.56\t4798.20\t1.95153179505784395\nAAAAAAAAEIPCAAAA\tScientific, difficult polls would not achieve. Countries reach of course. Bad, new churches realize most english\tBooks                                             \tarts                                              \t3.98\t17.50\t0.00711762878027433\nAAAAAAAAFCIBAAAA\tUnited, important objectives put similarly large, previous phenomena; old, present days receive. Happy detectives assi\tBooks                                             \tarts                                              \t1.26\t11646.43\t4.73685516316858938\nAAAAAAAAFFIBAAAA\tNaturally new years put serious, negative vehicles. 
Fin\tBooks                                             \tarts                                              \t3.34\t20302.86\t8.25761260902173683\nAAAAAAAAFJGCAAAA\tAgo correct profits must not handle else. Healthy children may not go only ancient words. Later just characters ought to drink about. British parts must watch soon ago other clients. So vital d\tBooks                                             \tarts                                              \t4.03\t9410.73\t3.82754758236520025\nAAAAAAAAFLNCAAAA\tMuch new waters \tBooks                                             \tarts                                              \t1.85\t2864.61\t1.16509889030066491\nAAAAAAAAGLEBAAAA\tTall, following actions keep widely willing, secondary groups. Heads could afford however; agricultural, square pri\tBooks                                             \tarts                                              \t9.99\t1903.68\t0.77426786036757875\nAAAAAAAAGMFAAAAA\tAnonymous, useful women provoke slightly present persons. Ideas ought to cost almost competent, working parties; aspects provide thr\tBooks                                             \tarts                                              \t6.73\t7841.73\t3.18940132200803357\nAAAAAAAAHHEBAAAA\tPowerful walls will find; there scottish decades must not\tBooks                                             \tarts                                              \t4.16\t5934.47\t2.41367739815283298\nAAAAAAAAHMCEAAAA\tToo executive doors progress mainly seemingly possible parts; hundreds stay virtually simple workers. Sola\tBooks                                             \tarts                                              \t34.32\t10139.18\t4.12382396436467639\nAAAAAAAAIBOCAAAA\tCareful privileges ought to live rather to a boards. Possible, broad p\tBooks                                             \tarts                                              \t3.93\t10008.95\t4.07085660459009779\nAAAAAAAAICMBAAAA\tAside legitimate decisions may not stand probably sexual g\tBooks                                             \tarts                                              \t3.88\t874.84\t0.35581636355058234\nAAAAAAAAIFPBAAAA\tSpecially interesting crews continue current, foreign directions; only social men would not call at least political children; circumstances could not understand now in a assessme\tBooks                                             \tarts                                              \t2.13\t15380.51\t6.25558632178840388\nAAAAAAAAIHNAAAAA\tUnlikely states take later in general extra inf\tBooks                                             \tarts                                              \t0.32\t6478.12\t2.63479162023261224\nAAAAAAAAINFDAAAA\tSometimes careful things state probably so\tBooks                                             \tarts                                              \t5.08\t17118.92\t6.96263529595507190\nAAAAAAAAJGHDAAAA\tCircumstances would not use. Principles seem writers. Times go from a hands. Members find grounds. Central, only teachers pursue properly into a p\tBooks                                             \tarts                                              \t5.95\t7863.28\t3.19816617344888566\nAAAAAAAAJLHBAAAA\tInches may lose from a problems. Firm, other corporations shall protect ashamed, important practices. Materials shall not make then by a police. 
Weeks used\tBooks                                             \tarts                                              \t0.84\t6519.57\t2.65165023240074772\nAAAAAAAAKFGBAAAA\tSystems cannot await regions. Home appropr\tBooks                                             \tarts                                              \t7.30\t889.75\t0.36188058327137607\nAAAAAAAAKHJDAAAA\tRelevant lips take so sure, manufacturing \tBooks                                             \tarts                                              \t8.80\t932.33\t0.37919879089789497\nAAAAAAAAKHLBAAAA\tExtra, primitive weeks look obviou\tBooks                                             \tarts                                              \t1.18\t10006.38\t4.06981132996350893\nAAAAAAAALCFBAAAA\tMore than key reasons should remain. Words used to offer slowly british\tBooks                                             \tarts                                              \t0.28\t1075.86\t0.43757554854548205\nAAAAAAAALGEEAAAA\tChildren may turn also above, historical aspects. Surveys migh\tBooks                                             \tarts                                              \t7.22\t3864.60\t1.57181646767132336\nAAAAAAAALOKCAAAA\tTrustees know operations. Now past issues cut today german governments. British lines go critical, individual structures. Tonight adequate problems should no\tBooks                                             \tarts                                              \t4.05\t3061.36\t1.24512137387317768\nAAAAAAAAMACDAAAA\tUseful observers start often white colleagues; simple pro\tBooks                                             \tarts                                              \t3.47\t724.40\t0.29462915933889837\nAAAAAAAAMNPAAAAA\tMembers should say earnings. Detailed departments would not move just at the hopes. Figures can take. Actually open houses want. Good teachers combine the\tBooks                                             \tarts                                              \t3.09\t7805.88\t3.17482035104958588\nAAAAAAAAMPFCAAAA\tMajor, senior words afford economic libraries; successful seconds need outside. Clinical, new ideas put now red c\tBooks                                             \tarts                                              \t5.87\t616.62\t0.25079270048530027\nAAAAAAAANABCAAAA\tLikely states feel astonishingly working roads. Parents put so somewhere able policies. Others may rely shortly instead interesting bodies; bri\tBooks                                             \tarts                                              \t7.50\t8829.29\t3.59106334933647431\nAAAAAAAANMECAAAA\tFloors could not go only for a years. Special reasons shape consequently black, concerned instances. Mutual depths encourage both simple teachers. Cards favour massive \tBooks                                             \tarts                                              \t1.83\t6344.62\t2.58049428068023382\nAAAAAAAAOAHCAAAA\tAccurate years want then other organisations. Simple lines mean as well so red results. Orthodox, central scales will not in\tBooks                                             \tarts                                              \t7.69\t3640.38\t1.48062134052200283\nAAAAAAAAODBEAAAA\tCertain customers think exactly already necessary factories. Awkward doubts shall not forget fine\tBooks                                             \tarts                                              \t0.30\t12867.38\t5.23344195512721440\nAAAAAAAAOEIDAAAA\tWeak effects set far in the effects. 
Positive, true classes sell frankly ever open studies. Unique problems must mean as yet new genes. Essential businesses agree deep current stages. Not\tBooks                                             \tarts                                              \t4.40\t4471.87\t1.81880632077973420\nAAAAAAAAOKEDAAAA\tVisitors could not allow glad wages. Communist, real figures used to apply factors. Aggressive, optimistic days must mean about trees. Detailed courts consider really large pro\tBooks                                             \tarts                                              \t9.08\t1737.00\t0.70647549664780021\nAAAAAAAAOODBAAAA\tDeep, big areas take for a facilities. Words could replace certainly cases; lights test. Nevertheless practical arts cross. Fa\tBooks                                             \tarts                                              \t7.37\t8016.08\t3.26031324074179520\nAAAAAAAAPEBCAAAA\tSimilar situations come separate programmes. National, large others could not ask opportunities. Severe, large findings accept; twins go more. Tiny rights may see specifi\tBooks                                             \tarts                                              \t1.27\t2413.11\t0.98146406776958731\nAAAAAAAABBLDAAAA\tNatural plans might not like n\tBooks                                             \tbusiness                                          \t4.29\t8813.76\t2.98246237973343018\nAAAAAAAABINDAAAA\tYears shall want free objects. Old residents use absolutely so residential steps. Letters will share variables. Sure fres\tBooks                                             \tbusiness                                          \t40.76\t1861.77\t0.62999888636816844\nAAAAAAAACAADAAAA\tWhole, important problems make. Indeed industrial members go skills. Soft\tBooks                                             \tbusiness                                          \t3.22\t6877.01\t2.32709123121693768\nAAAAAAAACCMBAAAA\tCheap depths calm as in a traditi\tBooks                                             \tbusiness                                          \t7.92\t2554.82\t0.86451804190159048\nAAAAAAAACDDCAAAA\tSimple, great shops glance from a years. Lessons deepen here previous clients. Increased, silent flights open more great rocks. Brill\tBooks                                             \tbusiness                                          \t8.92\t8014.35\t2.71195237594586375\nAAAAAAAACGCAAAAA\tJust sudden ideas ought to serve full sources; uncertain, open qualifications shout questions; chronic, informal\tBooks                                             \tbusiness                                          \t4.62\t4172.49\t1.41191664565564981\nAAAAAAAACGIDAAAA\tGroups must not put new, civil moves. Correct men laugh slightly total novels. Relatively public girls set even scott\tBooks                                             \tbusiness                                          \t3.36\t414.96\t0.14041709657333354\nAAAAAAAACNEDAAAA\tJust young degrees stop posts. More than tight artists buy to a arts. European, essential techniques ought to sell to a offences. Sentences be\tBooks                                             \tbusiness                                          \t2.58\t6286.70\t2.12733796276165399\nAAAAAAAACPBBAAAA\tOther, black houses flow. New soldiers put only eastern hours. Applications reserve there methods; sources cry pretty scarcely special workers. 
Never british opportunities \tBooks                                             \tbusiness                                          \t8.20\t3426.05\t1.15933100471146462\nAAAAAAAADEDAAAAA\tJunior, severe restrictions ought to want principles. Sure,\tBooks                                             \tbusiness                                          \t9.77\t3899.69\t1.31960465427044307\nAAAAAAAAEBPAAAAA\tRows could not\tBooks                                             \tbusiness                                          \t1.65\t15875.48\t5.37205708576254358\nAAAAAAAAEEFDAAAA\tRemaining subjects handle even only certain ladies; eagerly literary days could not provide. Very different articles cut then. Boys see out of a houses. Governme\tBooks                                             \tbusiness                                          \t9.03\t916.17\t0.31002007751973922\nAAAAAAAAEGCCAAAA\tManufacturing, ready concerns see already then new pupils. Both stable types used to manage otherw\tBooks                                             \tbusiness                                          \t1.18\t8723.00\t2.95175036969632840\nAAAAAAAAEGKCAAAA\tRussian windows should see in a weapons. New, considerable branches walk. English regions apply neither alone police; very new years w\tBooks                                             \tbusiness                                          \t2.79\t8434.64\t2.85417307557668685\nAAAAAAAAEKDAAAAA\tLong groups used to create more tiny feet; tools used to dare still\tBooks                                             \tbusiness                                          \t57.04\t795.66\t0.26924105229308502\nAAAAAAAAEPLBAAAA\tDrugs must compensate dark, modest houses. Small pubs claim naturally accessible relationships. Distinguished\tBooks                                             \tbusiness                                          \t1.66\t11559.25\t3.91150068335575881\nAAAAAAAAFCGDAAAA\tSmall, capable centres\tBooks                                             \tbusiness                                          \t2.98\t19190.81\t6.49392187461561344\nAAAAAAAAFDLAAAAA\tPopular, different parameters might take open, used modules. Prisoners use pretty alternative lovers. Annual, professional others spend once true men. Other, small subsidies seem politically\tBooks                                             \tbusiness                                          \t7.25\t1761.61\t0.59610603791823330\nAAAAAAAAFEGEAAAA\tSupreme, free uses handle even in the customers. Other minutes might not make of course social neighbours. So environmental rights come other, able sales\tBooks                                             \tbusiness                                          \t8.08\t3500.18\t1.18441563785437289\nAAAAAAAAFHFCAAAA\tSound, original activities consider quite to a attitudes. In order weak improvements marry available, hard studie\tBooks                                             \tbusiness                                          \t71.27\t11431.86\t3.86839355512056274\nAAAAAAAAGIJAAAAA\tAlways other hours used to use. Women should jump then. Civil samples take therefore other offices. Concrete, major demands\tBooks                                             \tbusiness                                          \t1.42\t2038.40\t0.68976819369356825\nAAAAAAAAGLADAAAA\tChanges ensure different clients. Distinct, alone attacks think directly previous feelings. 
White children tell so medieval, responsible yea\tBooks                                             \tbusiness                                          \t5.89\t5116.38\t1.73131681262259552\nAAAAAAAAHDKCAAAA\tVisual fragments \tBooks                                             \tbusiness                                          \t6.77\t2739.02\t0.92684893931051673\nAAAAAAAAHDLBAAAA\tClassic issues will draw as european, engl\tBooks                                             \tbusiness                                          \t75.64\t14735.99\t4.98646840884344817\nAAAAAAAAHJAAAAAA\tAgain british shareholders see shares. American lives ought to establish horses. Then ideal conservatives might charge even nec\tBooks                                             \tbusiness                                          \t2.44\t9396.38\t3.17961345165736401\nAAAAAAAAHKACAAAA\tCritical cases tell anywhere to the circumstances. Dependent, new numbers must not\tBooks                                             \tbusiness                                          \t3.72\t726.75\t0.24592279963049486\nAAAAAAAAHLKAAAAA\tConfident, video-tape\tBooks                                             \tbusiness                                          \t3.17\t6173.95\t2.08918482116091330\nAAAAAAAAIHNDAAAA\tOf course fundamental children will not deal still including a suppliers. More crucial powers will not keep enough. As good comments used to devote even convenient electric problems. Publi\tBooks                                             \tbusiness                                          \t8.85\t2672.80\t0.90444094785330122\nAAAAAAAAIMJAAAAA\tDepartments could seek now for a commu\tBooks                                             \tbusiness                                          \t5.93\t3205.71\t1.08477079876638965\nAAAAAAAAJFBEAAAA\tPaintings must not know primary, royal stands; similar, available others ough\tBooks                                             \tbusiness                                          \t0.39\t12939.97\t4.37871847201185356\nAAAAAAAAJJGBAAAA\tMost present eyes restore fat, central relationships; again considerable habits must face in a discussions. Engineers help at all direct occasions. Curiously del\tBooks                                             \tbusiness                                          \t80.10\t6877.89\t2.32738901183430931\nAAAAAAAAKBMDAAAA\tSo white countries could secure more angry items. National feet must not defend too by the types; guidelines would not view more so flexible authorities. Critics will handle closely lig\tBooks                                             \tbusiness                                          \t2.50\t2135.27\t0.72254774869901171\nAAAAAAAAKJHDAAAA\tSimple changes ought to vote almost sudden techniques. Partial, golden faces mean in a officials; vertically minor \tBooks                                             \tbusiness                                          \t8.73\t5996.87\t2.02926323965617573\nAAAAAAAAKJOBAAAA\tChristian lines stand once deep formal aspirations. National, fine islands play together with a patterns. New journals lose etc positive armie\tBooks                                             \tbusiness                                          \t4.89\t6106.50\t2.06636061361350790\nAAAAAAAAKKDAAAAA\tChildren would not mean in favour of a parts. 
Heavy, whole others shall mean on\tBooks                                             \tbusiness                                          \t3.13\t5521.91\t1.86854291917113983\nAAAAAAAAKLCCAAAA\tLips will n\tBooks                                             \tbusiness                                          \t8.48\t7806.43\t2.64159493735051117\nAAAAAAAAKNJCAAAA\tWhite fees might combine reports. Tr\tBooks                                             \tbusiness                                          \t2.09\t6566.98\t2.22218108939451963\nAAAAAAAALAJCAAAA\tAsleep children invite more. Wealthy forms could expect as. Indeed statistical examinations could la\tBooks                                             \tbusiness                                          \t3.71\t11535.83\t3.90357565828889099\nAAAAAAAALDHBAAAA\tMost new weeks go yet members. Also encouraging delegates make publications. Different competitors run resources; somehow common views m\tBooks                                             \tbusiness                                          \t1.07\t9334.32\t3.15861315039135987\nAAAAAAAALHMBAAAA\tLocal, bloody names \tBooks                                             \tbusiness                                          \t4.40\t2927.75\t0.99071273012477651\nAAAAAAAALJJCAAAA\tLarge, larg\tBooks                                             \tbusiness                                          \t3.50\t5826.76\t1.97170021599584758\nAAAAAAAALOMDAAAA\tOnly new systems might join late speeches. Materials could stay on a benefits. Corporate regulations must crawl definitely practical deaths. Windows might soothe despite a organisations. Old\tBooks                                             \tbusiness                                          \t0.67\t123.41\t0.04176034771571981\nAAAAAAAAMBECAAAA\tProfessional managers form later initial grounds. Conscious, big risks restore. American, full rises say for a systems. Already\tBooks                                             \tbusiness                                          \t5.27\t1616.40\t0.54696885218126163\nAAAAAAAAMKGDAAAA\tMemories can earn particularly over quick contexts; alone differences make separate years; irish men mea\tBooks                                             \tbusiness                                          \t4.23\t6855.84\t2.31992757704675870\nAAAAAAAANJLBAAAA\tOnly, gothic\tBooks                                             \tbusiness                                          \t1.68\t7807.37\t2.64191302119179451\nAAAAAAAANKCAAAAA\tLow, large clouds will not visit for example as the notions. Small, unacceptable drugs might not negotiate environmental, happy keys.\tBooks                                             \tbusiness                                          \t3.11\t3933.56\t1.33106582416859905\nAAAAAAAAOAPAAAAA\tSilver, critical operations could help howev\tBooks                                             \tbusiness                                          \t5.56\t9087.69\t3.07515674850230731\nAAAAAAAAOBAEAAAA\tTerrible, psychiatric bones will destroy also used studies; solely usual windows should not make shares. Advances continue sufficiently. As key days might not use far artists. Offici\tBooks                                             \tbusiness                                          \t5.83\t2024.32\t0.68500370381562209\nAAAAAAAAOCHCAAAA\tToo white addresses end by the talks. Hands get only companies. Statements know. 
Sentences would pay around for a payments; papers wait actually drinks; men would \tBooks                                             \tbusiness                                          \t6.06\t5178.86\t1.75245923645598158\nAAAAAAAAAGLDAAAA\tNew, big arguments may not win since by a tenant\tBooks                                             \tcomputers                                         \t1.00\t1686.72\t0.45326198032962534\nAAAAAAAAALNBAAAA\tElse substantial problems slip months. Just unique corporations put vast areas. Supporters like far perfect chapters. Now young reports become wrong trials. Available ears shall\tBooks                                             \tcomputers                                         \t51.46\t471.00\t0.12656895793922734\nAAAAAAAABEBEAAAA\tCheap, desirable members take immediate, estimated debts; months must track typica\tBooks                                             \tcomputers                                         \t3.26\t7226.83\t1.94202195818247621\nAAAAAAAABHOAAAAA\tExpert, scottish terms will ask quiet demands; poor bits attempt northern, dangerous si\tBooks                                             \tcomputers                                         \t2.66\t4463.91\t1.19955931429829364\nAAAAAAAACCDBAAAA\tGradually serious visitors bear no doubt technical hearts. Critics continue earlier soviet, standard minute\tBooks                                             \tcomputers                                         \t6.40\t3244.45\t0.87186126451364360\nAAAAAAAACCPBAAAA\tClear, general goods must know never women. Communications meet about. Other rewards spot wide in a skills. Relative, empty drawings facilitate too rooms. Still asian police end speedily comp\tBooks                                             \tcomputers                                         \t7.64\t5385.04\t1.44708896233770017\nAAAAAAAACDFEAAAA\tWide, essential activities make steadily procedures. Modules\tBooks                                             \tcomputers                                         \t35.95\t7285.54\t1.95779873848101557\nAAAAAAAACFMBAAAA\tAt least remaining results shall keep cuts. Clients should meet policies. Glorious, local times could use enough; clever styles will live political parents. Single, gradual contracts will describe ho\tBooks                                             \tcomputers                                         \t9.51\t14393.77\t3.86795004186180949\nAAAAAAAACLPDAAAA\tEnvironmental, new women pay again fingers. Different, uncomfortable records miss far russian, dependent members. Enough double men will go here immediatel\tBooks                                             \tcomputers                                         \t89.89\t9487.37\t2.54948308807619376\nAAAAAAAACOFCAAAA\tYears learn here. Days make too. 
Only moving systems avoid old groups; short movements cannot see respectiv\tBooks                                             \tcomputers                                         \t0.60\t1706.66\t0.45862033493962150\nAAAAAAAACONDAAAA\tMagnetic\tBooks                                             \tcomputers                                         \t57.19\t7638.33\t2.05260184394042112\nAAAAAAAADAHAAAAA\tGa\tBooks                                             \tcomputers                                         \t5.53\t7904.13\t2.12402865714688954\nAAAAAAAADDBAAAAA\tS\tBooks                                             \tcomputers                                         \t65.78\t578.19\t0.15537347301673430\nAAAAAAAAEAHCAAAA\tSimple year\tBooks                                             \tcomputers                                         \t3.01\t5038.44\t1.35394925783295241\nAAAAAAAAECEEAAAA\tAgricultural players shall smoke. So full reasons undertake \tBooks                                             \tcomputers                                         \t0.70\t5739.18\t1.54225484506508439\nAAAAAAAAECGEAAAA\tThen basic years can encourage later traditions. For example christian parts subscribe informal, valuable gr\tBooks                                             \tcomputers                                         \t2.75\t11359.94\t3.05268740563088364\nAAAAAAAAECHAAAAA\tBoxes batt\tBooks                                             \tcomputers                                         \t0.83\t6659.54\t1.78957757569979198\nAAAAAAAAEFFAAAAA\tBlocks extend ev\tBooks                                             \tcomputers                                         \t9.29\t11249.90\t3.02311702743208836\nAAAAAAAAEIGCAAAA\tSeparate, dead buildings think possibly english, net policies. Big divisions can use almost\tBooks                                             \tcomputers                                         \t9.46\t3529.22\t0.94838577014496795\nAAAAAAAAEJECAAAA\tArtists make times. Rather ready functions must pre\tBooks                                             \tcomputers                                         \t5.71\t7805.93\t2.09763996995021836\nAAAAAAAAFDPCAAAA\tLimited, capable cities shall try during the bodies. Specially economic services ought to prevent old area\tBooks                                             \tcomputers                                         \t2.93\t6458.26\t1.73548882866368225\nAAAAAAAAFGLAAAAA\tSince other birds shall blame sudden\tBooks                                             \tcomputers                                         \t6.74\t2404.97\t0.64627292308939187\nAAAAAAAAFHNAAAAA\tLegs throw then. Old-fashioned develo\tBooks                                             \tcomputers                                         \t2.66\t12518.22\t3.36394492707854445\nAAAAAAAAFJBEAAAA\tOnly careful men define judicial, special lawyers. Now able funds will not put too black, economic terms. Objectives know both points. Teeth pay.\tBooks                                             \tcomputers                                         \t9.85\t911.50\t0.24494183686115864\nAAAAAAAAGHEAAAAA\tMen should not turn shadows. Different, single concessions guarantee only therefore alone products.\tBooks                                             \tcomputers                                         \t8.38\t11864.77\t3.18834729318175442\nAAAAAAAAGIFEAAAA\tEducational, white teachers should not fix. Considerable, other services might not cover today on a forms. 
Successful genes fall otherwise so\tBooks                                             \tcomputers                                         \t1.65\t7042.80\t1.89256869845942737\nAAAAAAAAGKKAAAAA\tWomen note days. Other, efficient qualificati\tBooks                                             \tcomputers                                         \t7.64\t8012.26\t2.15308577269247054\nAAAAAAAAHGCEAAAA\tPresent \tBooks                                             \tcomputers                                         \t2.84\t4786.32\t1.28619858760866792\nAAAAAAAAHHFDAAAA\tMultiple, dark feet mean more complex girls; schools may not answer frequently blue assets. Spiritual, dry patients may reply personnel\tBooks                                             \tcomputers                                         \t2.04\t2973.19\t0.79896721880112808\nAAAAAAAAIBDEAAAA\tPrivate teachers ap\tBooks                                             \tcomputers                                         \t5.27\t8109.35\t2.17917617635769258\nAAAAAAAAIDCDAAAA\tDaily numbers sense interesting players. General advantages would speak here. Shelves shall know with the reductions. Again wrong mothers provide ways; as hot pr\tBooks                                             \tcomputers                                         \t7.56\t13142.36\t3.53166626340166409\nAAAAAAAAIECAAAAA\tInc, corporate ships slow evident degrees. Chosen, acute prices throw always. Budgets spend points. Commonly large events may mean. Bottles c\tBooks                                             \tcomputers                                         \t68.38\t12405.10\t3.33354687926095337\nAAAAAAAAINGCAAAA\tEuropean, possible problems ought to restore then unfair interests. States would sleep in respect of the questions. Ideal stages affect only pressures. About spanish employees might kno\tBooks                                             \tcomputers                                         \t3.42\t6760.19\t1.81662463645686890\nAAAAAAAAIOJBAAAA\tUpper others narrow deaths. Situations could e\tBooks                                             \tcomputers                                         \t5.42\t10932.74\t2.93788855460829783\nAAAAAAAAIOKCAAAA\tHowever old hours ma\tBooks                                             \tcomputers                                         \t8.84\t5208.75\t1.39971562561772907\nAAAAAAAAJDOCAAAA\tIndeed other actions should provide after a ideas; exhibitio\tBooks                                             \tcomputers                                         \t6.95\t3949.76\t1.06139491997885895\nAAAAAAAAJJFCAAAA\tEffective females must answer too english projects. Firm, political experiments see in terms of \tBooks                                             \tcomputers                                         \t0.76\t246.87\t0.06633986973770075\nAAAAAAAAJPBDAAAA\tOf course responsible fears tell. Now clear substances might develop at least independent civil tourists.\tBooks                                             \tcomputers                                         \t4.95\t619.44\t0.16645833398274943\nAAAAAAAAKDGEAAAA\tPerfect days find at all. Crimes might develop hopes. Much socialist grants drive current, useful walls. Emissions open naturally. Combinations shall not know. 
Tragic things shall not receive just\tBooks                                             \tcomputers                                         \t6.71\t8038.78\t2.16021233057898500\nAAAAAAAAKEAAAAAA\tAdvantages apply almost on a services; materials defeat today individual ideas. Domestic divisions used to win smoothly irish talks. Subsequent quantities make only, automatic pounds. Flower\tBooks                                             \tcomputers                                         \t7.87\t442.26\t0.11884583298981461\nAAAAAAAAKJDBAAAA\tClose, historic tactics lead ago large, typical stars. Generally significant facilities check leading years; yesterday general years \tBooks                                             \tcomputers                                         \t3.87\t8448.38\t2.27028164092273769\nAAAAAAAAKJECAAAA\tInternal seats used to sell dark words. Universal items show now in the roles. Most wonderf\tBooks                                             \tcomputers                                         \t2.57\t870.24\t0.23385428865612144\nAAAAAAAAKKACAAAA\tLikely, separate attacks prefer seats. Informally equal women could use easy prime, big forces. Long technical women save conditions; fast alone rooms sell. Ne\tBooks                                             \tcomputers                                         \t3.77\t344.40\t0.09254851191989362\nAAAAAAAAKOJBAAAA\tEconomic customs should not put unpleasant shops. Colonial, middle goods used to see. Closely explicit legs continue\tBooks                                             \tcomputers                                         \t3.32\t8481.54\t2.27919252551990282\nAAAAAAAAKOKBAAAA\tHuman windows take right, variable steps. Years should buy often. Indeed thin figures may beat even up to a cars. Details may tell enough. Impossible, sufficient differences ought to return \tBooks                                             \tcomputers                                         \t4.47\t5466.16\t1.46888784528468556\nAAAAAAAAKPNDAAAA\tLeft diff\tBooks                                             \tcomputers                                         \t0.74\t3269.32\t0.87854442796151745\nAAAAAAAALBNAAAAA\tMale levels shall reduce else high, local conditions; further personal agencies control. Successful days wake eve\tBooks                                             \tcomputers                                         \t6.55\t2376.38\t0.63859010672531010\nAAAAAAAALEJBAAAA\tWide governments conform widely in proportion to a friends. So living rooms wear too clothes; most essential measures should not bring previously pains. Real accounts become also gue\tBooks                                             \tcomputers                                         \t9.35\t11110.42\t2.98563541755233586\nAAAAAAAALIHCAAAA\tPlaces transform \tBooks                                             \tcomputers                                         \t3.10\t5805.20\t1.55999599708875273\nAAAAAAAAMADEAAAA\tAppropriate effects beg passages. Running contracts must keep only upper sons. Safely available reports intend perhaps w\tBooks                                             \tcomputers                                         \t5.81\t8969.60\t2.41034591323077181\nAAAAAAAAMDKBAAAA\tFriendly, hot computers tax elsewhere units. New, real officials should l\tBooks                                             \tcomputers                                         \t3.19\t2999.57\t0.80605615534133364\nAAAAAAAAMELCAAAA\tPerfect members state democratic schools. 
Genuine, enormous knees must afford around the implications. Matters will indicate with a months. Still regular machines would not l\tBooks                                             \tcomputers                                         \t4.07\t2110.95\t0.56726272136265806\nAAAAAAAAMENDAAAA\tKinds relieve really major practices. Then capable reserves could not approve foundations. Pos\tBooks                                             \tcomputers                                         \t7.23\t1522.62\t0.40916438797755059\nAAAAAAAANANCAAAA\tOnly increased errors must submit as rich, main \tBooks                                             \tcomputers                                         \t6.94\t8287.27\t2.22698753303826016\nAAAAAAAANFHDAAAA\tMeals ought to test. Round days might need most urban years. Political, english pages must see on a eyes. Only subsequent women may come better methods; difficult, social childr\tBooks                                             \tcomputers                                         \t7.23\t15325.54\t4.11833891222069241\nAAAAAAAANHFDAAAA\tSystems cannot see fairly practitioners. Little ca\tBooks                                             \tcomputers                                         \t1.73\t2428.71\t0.65265242852777245\nAAAAAAAANKLDAAAA\tPast beautiful others might not like more than legislative, small products. Close, wh\tBooks                                             \tcomputers                                         \t3.02\t4174.86\t1.12188467036552578\nAAAAAAAAOGDDAAAA\tMain problems wait properly. Everyday, foreign offenders can worry activities. Social, important shoes will afford okay physical parts. Very\tBooks                                             \tcomputers                                         \t1.40\t939.26\t0.25240161238640906\nAAAAAAAAOGMDAAAA\tSchools offer quickly others. Further main buildings satisfy sadly great, productive figures. Years contribute acti\tBooks                                             \tcomputers                                         \t4.11\t885.92\t0.23806787944271822\nAAAAAAAAOKLAAAAA\tChief me\tBooks                                             \tcomputers                                         \t2.62\t9675.59\t2.60006230094948754\nAAAAAAAAOMDAAAAA\tTiny, rare leaders mention old, precious areas; students will improve much multiple stars. Even confident solutions will include clearly single women. Please little rights will not mention harder com\tBooks                                             \tcomputers                                         \t1.45\t3092.13\t0.83092923972956056\nAAAAAAAAONDCAAAA\tGuidelines should investigate so. Usual personnel look now old, modern aspects. Discussions could appear once br\tBooks                                             \tcomputers                                         \t2.44\t9923.72\t2.66674076280396839\nAAAAAAAAONHDAAAA\tFlat pleasant groups would go private, redundant eyes. Main devic\tBooks                                             \tcomputers                                         \t2.83\t2445.21\t0.65708637291417851\nAAAAAAAAOPDAAAAA\tPopular, obvious copies should believe still difficult parts. Forms ought to soften characteristic\tBooks                                             \tcomputers                                         \t1.05\t2156.19\t0.57941979069847684\nAAAAAAAAPAKAAAAA\tReal, domestic facilities turn often guilty symptoms. Winds get naturally intense islands. 
Products shall not travel a little clear shares; improved children may not apply wrong c\tBooks                                             \tcomputers                                         \t5.28\t1338.00\t0.35955258115219995\nAAAAAAAAAADCAAAA\tDirections would ask yet profits. Forthcoming, specified discussions ought\tBooks                                             \tcooking                                           \t0.58\t5750.02\t2.05632295473197783\nAAAAAAAAABIBAAAA\tThen irish champions must look now forward good women. Future, big models sign. Then different o\tBooks                                             \tcooking                                           \t85.81\t2279.71\t0.81527020830049933\nAAAAAAAAACLDAAAA\tBlack ears see sensibly glad months. Equal members must afford approximately o\tBooks                                             \tcooking                                           \t8.37\t10363.44\t3.70617485886789408\nAAAAAAAAAEFBAAAA\tConsiderable benefits should govern. Well experienced years provide please in an towns. Exc\tBooks                                             \tcooking                                           \t4.18\t0.00\t0.00000000000000000\nAAAAAAAAAGHBAAAA\tValuable studies should persist so concerned parties. Always polite songs include then from the holes. There conventional areas might not explain theore\tBooks                                             \tcooking                                           \t1.58\t1326.45\t0.47436523408687831\nAAAAAAAAAIJCAAAA\tMeanings occur in a things. Also essential features may not satisfy by the potatoes; happy words change childre\tBooks                                             \tcooking                                           \t3.46\t1262.55\t0.45151330717055917\nAAAAAAAAAJDBAAAA\tThen dominant goods should combine probably american items. Important artists guess only sill\tBooks                                             \tcooking                                           \t6.67\t5569.20\t1.99165808110116677\nAAAAAAAACDIBAAAA\tLibraries shall note still. Children would not concentrate. Local, public modes must mean low children. Outer, good years should vis\tBooks                                             \tcooking                                           \t1.42\t2178.99\t0.77925070784648269\nAAAAAAAACJJAAAAA\tChildren ought to miss historical effects. Honest girls may not think activities. Woo\tBooks                                             \tcooking                                           \t4.42\t348.88\t0.12476651428114901\nAAAAAAAACLFEAAAA\tSingle, past rates mark blue, evident discussions. Only literary children ought to publish exactly really recent themes; conscious, ready conditions would adopt advanced, ideal provisions. A\tBooks                                             \tcooking                                           \t4.99\t9499.97\t3.39738059698316657\nAAAAAAAACPEEAAAA\tStandards could lead no longer ago great tactics; difficult lives might feel french, easy costs. 
Students drop certainly unabl\tBooks                                             \tcooking                                           \t3.05\t16321.01\t5.83672187356046718\nAAAAAAAADDNAAAAA\tIndividual, remarkable services take by the interest\tBooks                                             \tcooking                                           \t6.05\t1054.65\t0.37716408016112647\nAAAAAAAADIGCAAAA\tPositions shall\tBooks                                             \tcooking                                           \t4.21\t2629.53\t0.94037288551281172\nAAAAAAAAEGFEAAAA\tUltimately senior elections marry at l\tBooks                                             \tcooking                                           \t5.06\t7756.87\t2.77401293175881769\nAAAAAAAAEINDAAAA\tHence young effects shall not solve however months. In order small activities must not return almost national foods. International decades take contributions. Sessions must see \tBooks                                             \tcooking                                           \t1.43\t19276.07\t6.89351084309627374\nAAAAAAAAEKJCAAAA\tMembers need for a regions. Leading needs go at least under the others; old police could play on a drinks. Very similar machines must consider fully effec\tBooks                                             \tcooking                                           \t9.86\t10833.86\t3.87440652490818908\nAAAAAAAAEKPDAAAA\tMainly catholic activities could assume just fat, o\tBooks                                             \tcooking                                           \t2.68\t2262.61\t0.80915490391444210\nAAAAAAAAENDCAAAA\tPoints trace so simple eyes. Short advisers shall not say limitations. Keys stretch in full now blue wings. Immediately strategic students would not make strangely for the players.\tBooks                                             \tcooking                                           \t1.69\t5132.94\t1.83564271902740482\nAAAAAAAAEPJDAAAA\tProjects become more from a pupils. Details may precede generally; good, marvellous birds could suffer fair\tBooks                                             \tcooking                                           \t9.88\t628.36\t0.22471419087853357\nAAAAAAAAFDIAAAAA\tGreat pp. will not r\tBooks                                             \tcooking                                           \t1.91\t2941.23\t1.05184308300603044\nAAAAAAAAFPEBAAAA\tNew, general students raise therefore to a women; united letters would start black positio\tBooks                                             \tcooking                                           \t4.03\t3747.49\t1.34017789670793138\nAAAAAAAAGFGCAAAA\tProducts may not resist further specif\tBooks                                             \tcooking                                           \t5.37\t8721.33\t3.11892325153523644\nAAAAAAAAHBGCAAAA\tDramatic months deal broadly in a films. Almost new occasions may get together sources. Under dry orders wor\tBooks                                             \tcooking                                           \t3.92\t1412.78\t0.50523858073297895\nAAAAAAAAHCGEAAAA\tThus certain stars appear totally even local guests. 
Urban friends might not take properly various vehicles\tBooks                                             \tcooking                                           \t4.55\t1446.44\t0.51727607462974425\nAAAAAAAAICAAAAAA\tSomet\tBooks                                             \tcooking                                           \t7.34\t6593.72\t2.35804706645808830\nAAAAAAAAIFHDAAAA\tGenetic properties might describe therefore leaves; right other organisers must not talk even lives; methods carry thus wrong minutes. Proud worke\tBooks                                             \tcooking                                           \t1.08\t119.92\t0.04288580713309846\nAAAAAAAAIHHDAAAA\tUrgent agencies mean over as a plants; then\tBooks                                             \tcooking                                           \t6.47\t9566.59\t3.42120525067902230\nAAAAAAAAIJJAAAAA\tMen could require evolutionary falls; taxes teach dead parents; only financial servants might not buy eastern things. Different payments develop. New inhabitants might not eat w\tBooks                                             \tcooking                                           \t80.50\t3855.42\t1.37877583836799906\nAAAAAAAAINMBAAAA\tHours ought to cope thus into the eyes. Dark states reduce most for the feelings. National, tragic children shall establish enough typical boats. In order secret hours must mean; sin\tBooks                                             \tcooking                                           \t2.30\t12966.63\t4.63712802990534045\nAAAAAAAAJHGDAAAA\tGuests agree around trying, young costs. Here annual banks appeas\tBooks                                             \tcooking                                           \t58.88\t8031.52\t2.87223330308224573\nAAAAAAAAJIHBAAAA\tWonderful qualities suffer of course light leaders. True clubs used to see early living operat\tBooks                                             \tcooking                                           \t9.91\t4482.62\t1.60307518988467144\nAAAAAAAAKABAAAAA\tHigh big appeals may\tBooks                                             \tcooking                                           \t36.23\t675.62\t0.24161531867298181\nAAAAAAAAKCGEAAAA\tFinal women should establish on a pupils. Full, northern years might not avoid full\tBooks                                             \tcooking                                           \t60.38\t5877.02\t2.10174071245298770\nAAAAAAAAKECDAAAA\tLittle part\tBooks                                             \tcooking                                           \t9.90\t4729.36\t1.69131438311366337\nAAAAAAAAKFIAAAAA\tHere other affairs afford directly effective leads. Plants cannot undertake as coming, huge photographs; d\tBooks                                             \tcooking                                           \t0.87\t20785.39\t7.43327407210001090\nAAAAAAAAKFNBAAAA\tStairs might bring early others. Large forms rel\tBooks                                             \tcooking                                           \t1.88\t2350.18\t0.84047169953356678\nAAAAAAAAKHADAAAA\tNow available m\tBooks                                             \tcooking                                           \t3.55\t1102.96\t0.39444070910208700\nAAAAAAAAKJNDAAAA\tMajor instructions put flatly british, other democrats. Operations represent well upon a stores. 
Thousands will not appear surely \tBooks                                             \tcooking                                           \t1.29\t582.88\t0.20844962693245854\nAAAAAAAAKLACAAAA\tNew, single products raise too extreme, efficient minutes; hands support leaders. Additional, english things prefer halfway private, slow churches. High white things could f\tBooks                                             \tcooking                                           \t4.13\t2472.08\t0.88406559454294555\nAAAAAAAAKLEDAAAA\tGolden, sure days fill of course. Early free minutes must not express only, cap\tBooks                                             \tcooking                                           \t9.44\t4521.21\t1.61687575106934680\nAAAAAAAAKLOCAAAA\tPurposes hide tears. Small laws award good eyes. \tBooks                                             \tcooking                                           \t55.11\t5382.78\t1.92499053468895684\nAAAAAAAALCJDAAAA\tYet religious animals ensure also. Rough, real heads resist dead. Civil, evolutionary votes dissuade rapidly left cars. Buyers \tBooks                                             \tcooking                                           \t2.20\t11624.81\t4.15726617427380135\nAAAAAAAALLNCAAAA\tHere comprehensive years should tend sensibly particular front sales. Official, coherent tears regulate animals. Rewards cannot w\tBooks                                             \tcooking                                           \t2.50\t2499.59\t0.89390372458156745\nAAAAAAAAMAODAAAA\tWidely splendid others deprive only. Different, main soldiers discover then other periods. Too main birds must change public, terrible houses. Different, armed females may foster; science\tBooks                                             \tcooking                                           \t4.26\t6853.89\t2.45108909816104214\nAAAAAAAAMGBDAAAA\tNew women add however. Scottish managers place mostly. Normal, financial purposes should lea\tBooks                                             \tcooking                                           \t4.74\t319.20\t0.11415234853973505\nAAAAAAAAMINCAAAA\tExtra theories drop. Other resources shall know eventually anyway open students. Long-term, liable di\tBooks                                             \tcooking                                           \t6.96\t5834.64\t2.08658477093947276\nAAAAAAAAMJIAAAAA\tSpecial, public skills agree at a parent\tBooks                                             \tcooking                                           \t5.87\t4713.66\t1.68569974692295585\nAAAAAAAAMKGAAAAA\tGentle fans cannot pay else can\tBooks                                             \tcooking                                           \t2.45\t7576.48\t2.70950183478800689\nAAAAAAAANHBBAAAA\tSound, new offices might equip hot, new reports; calculations fight great scientists. Professional, little issues learn of c\tBooks                                             \tcooking                                           \t66.16\t6628.48\t2.37047794250834265\nAAAAAAAAOLDBAAAA\tWell angry rebels drop in a methods. Studies argue most sometimes residential organisations. Rural, different children know o\tBooks                                             \tcooking                                           \t4.42\t453.06\t0.16202338041795852\nAAAAAAAAPAFCAAAA\tHalf general features used to defend as ready medical pounds. 
Turkish, trying rooms secure with a ci\tBooks                                             \tcooking                                           \t7.08\t683.53\t0.24444409397670770\nAAAAAAAAPHHCAAAA\tAfrican, elected carers would examine proba\tBooks                                             \tcooking                                           \t6.20\t15598.69\t5.57840569437117702\nAAAAAAAAAFBEAAAA\tAlready accessible clubs match all enough o\tBooks                                             \tentertainments                                    \t5.00\t1196.30\t0.46493128593083651\nAAAAAAAAAGKBAAAA\tLikely, various days develop no longer. Officials say before agricultural, rare ho\tBooks                                             \tentertainments                                    \t2.67\t23516.84\t9.13960934734576042\nAAAAAAAAAIKBAAAA\tLess progressive experiences would silence as economic, soviet specialists. Alone legal brothers fight only ears. Methods could not return records. E\tBooks                                             \tentertainments                                    \t8.36\t5931.28\t2.30513887621487248\nAAAAAAAAAJCDAAAA\tStrict heads discuss as categories. Alone, specific markets wait single, human numbers. External, various changes want very relatively nuclear orders. Old, pre\tBooks                                             \tentertainments                                    \t2.32\t4525.09\t1.75863572068274594\nAAAAAAAAAMFCAAAA\tInstances used to lower out of a costs. Irish supporters sign in a types. Bad things shall participate clear \tBooks                                             \tentertainments                                    \t1.58\t3570.57\t1.38767006737947580\nAAAAAAAAANJDAAAA\tTrustees may encourage today necessary, political tears; inner, foreign times pay in the historians. Areas may belie\tBooks                                             \tentertainments                                    \t1.79\t17322.75\t6.73233171726021741\nAAAAAAAABAJDAAAA\tRare, radical beds say over readers; han\tBooks                                             \tentertainments                                    \t7.10\t7808.46\t3.03468807902658165\nAAAAAAAACKAEAAAA\tL\tBooks                                             \tentertainments                                    \t1.63\t4264.23\t1.65725481685601518\nAAAAAAAACLMDAAAA\tAlways constitutional advertisements go for a spaces. Cars spend bad difficulties. Rights encourage further great qualities. Blue, high homes would produce u\tBooks                                             \tentertainments                                    \t2.63\t3974.52\t1.54466161878945775\nAAAAAAAACMOAAAAA\tCompanies ought to record now detailed, good roads. Muscles shall not argue significantly territorial months. Clearly new periods could write in a committees. Figures will not find more there\tBooks                                             \tentertainments                                    \t3.07\t7276.45\t2.82792715498740725\nAAAAAAAACMPAAAAA\tFalsely large trees shall reflect against a \tBooks                                             \tentertainments                                    \t0.70\t957.09\t0.37196446079707792\nAAAAAAAAEGNDAAAA\tDeep patterns shall find british, american expectations. Sufficient patients must see. English, large assets could not meet for the proceedings. 
White, chinese matches shal\tBooks                                             \tentertainments                                    \t0.56\t1499.01\t0.58257681762365897\nAAAAAAAAEKGAAAAA\tParticular, deliberate things rain however original ways. Virtually old deaths consider women. Notably w\tBooks                                             \tentertainments                                    \t9.71\t1611.84\t0.62642718708915783\nAAAAAAAAEPCEAAAA\tNew, previous police outline right in a persons. Wealthy quest\tBooks                                             \tentertainments                                    \t2.07\t5037.58\t1.95781037146155928\nAAAAAAAAFAKBAAAA\tDoors cannot happen here severe, old rates. Inevitable, int\tBooks                                             \tentertainments                                    \t2.29\t1047.84\t0.40723363591888968\nAAAAAAAAFGJCAAAA\tLimitations respond. Bare rivers will not create yesterday. Well local persons may unders\tBooks                                             \tentertainments                                    \t8.95\t2096.28\t0.81470045646668390\nAAAAAAAAGCBCAAAA\tSo perfect changes would twist again; legal standards like improvements; rights used to tell working stations. Official, immediate loans listen much possible pictures. Always d\tBooks                                             \tentertainments                                    \t6.32\t1017.52\t0.39545003933824690\nAAAAAAAAHFABAAAA\tPrisons take angry, logical sums. Now old grounds cannot help so increased problems. Blue, negative designs would freeze. Small payments ask alike to a hundreds. Exte\tBooks                                             \tentertainments                                    \t2.62\t11202.91\t4.35391068500161131\nAAAAAAAAHGMAAAAA\tHigh, official employees shall not start too left circumstances. Patients used to touch obviously popular, senior members. British, impossible theories make only. Young, international wo\tBooks                                             \tentertainments                                    \t4.85\t1041.70\t0.40484737988309988\nAAAAAAAAHPPDAAAA\tNow old tears give. Other kids coincide up a animals; specific procedures remove future, french levels. Coming, strong values a\tBooks                                             \tentertainments                                    \t5.08\t24460.84\t9.50648649682223761\nAAAAAAAAIACBAAAA\tLarge women establish today polite, easy horses. Details sha\tBooks                                             \tentertainments                                    \t5.06\t1748.58\t0.67956996401650263\nAAAAAAAAIDDAAAAA\tPlans would not claim; most particular horses will not tell simply cases; more british enquiries could not smile blue men. Old, dangerous play\tBooks                                             \tentertainments                                    \t0.95\t6942.27\t2.69805108950854163\nAAAAAAAAIKFAAAAA\tPieces threaten\tBooks                                             \tentertainments                                    \t0.69\t1273.35\t0.49487607869266126\nAAAAAAAAINCCAAAA\tCases can accept gmt sudden services; tools show all also capable meals; important, spatial days would not happen human, cold backs. 
Red, economic effects must s\tBooks                                             \tentertainments                                    \t9.58\t1334.73\t0.51873086622959576\nAAAAAAAAIPLAAAAA\tFinancial gods might presume divine, tiny \tBooks                                             \tentertainments                                    \t8.42\t731.84\t0.28442306469583164\nAAAAAAAAKBFEAAAA\tMarginal, available teeth pay recently little services. Then british times could require more scottish fair tea\tBooks                                             \tentertainments                                    \t95.74\t3018.65\t1.17317130007115240\nAAAAAAAAKFGCAAAA\tNow complete others shall pass. Just front advantages could exercise more establish\tBooks                                             \tentertainments                                    \t6.51\t5281.66\t2.05266987849992639\nAAAAAAAAKKIDAAAA\tYoung reasons could not say impossible experiences. Prisoners cancel particularly; forms might k\tBooks                                             \tentertainments                                    \t3.77\t3626.88\t1.40955444480216694\nAAAAAAAAKLNAAAAA\tJust particular actions seem very; necessarily poor eleme\tBooks                                             \tentertainments                                    \t0.26\t6872.96\t2.67111437845958545\nAAAAAAAAKMOCAAAA\tJapanese, efficient sports withdraw recently severe days; factors mean originally impossible items. Quiet workers become from a officers. Pieces explore. For example o\tBooks                                             \tentertainments                                    \t3.74\t16796.75\t6.52790652592057016\nAAAAAAAALMJCAAAA\tNever able feet go on the provisions. Services play brown studies. Cruel,\tBooks                                             \tentertainments                                    \t9.79\t12846.63\t4.99272774870656373\nAAAAAAAAMDOCAAAA\tInternal claims speculate perhaps through a expectations. Immediate courts appeal to a councils; transactions materialise entirely; fine, serious conditions may not use to a types. Short, large lip\tBooks                                             \tentertainments                                    \t3.11\t5231.34\t2.03311346095579892\nAAAAAAAAMJABAAAA\tFront, possible foundations hear well. Old, close inches change pointedly for the employees; odd, financial members work under on the arrangements; st\tBooks                                             \tentertainments                                    \t92.23\t225.66\t0.08770073893099771\nAAAAAAAAMJDCAAAA\tLocal words co\tBooks                                             \tentertainments                                    \t2.95\t9381.26\t3.64594271959501737\nAAAAAAAAMOFBAAAA\tHardly local women should tell easily tall, able issues. Important, available conditions could no\tBooks                                             \tentertainments                                    \t2.21\t15740.54\t6.11741996442846214\nAAAAAAAAMOHBAAAA\tGeneral, raw tests would not buy heavy, considerable blues. High, regional modules meet often young, responsible calculations. Things hesitat\tBooks                                             \tentertainments                                    \t2.00\t5567.90\t2.16391449212931922\nAAAAAAAAOAODAAAA\tH\tBooks                                             \tentertainments                                    \t4.80\t2493.52\t0.96908422644341674\nAAAAAAAAOBODAAAA\tNew hours borrow new poets. 
Youngsters mind especially. Laws must add there in a ends. Factors must not take strategic, royal tr\tBooks                                             \tentertainments                                    \t2.30\t4109.90\t1.59727584389128560\nAAAAAAAAOEGCAAAA\tClear materials will ship evidently literally scottish targets. Residential heads make prominent times. Internal, open subjects feel subsequent \tBooks                                             \tentertainments                                    \t0.75\t263.40\t0.10236805208909332\nAAAAAAAAOEMBAAAA\tOther practices get feet. Numbers will not increase now large, simple foreigners. Flowers cover\tBooks                                             \tentertainments                                    \t1.00\t315.51\t0.12262013710945267\nAAAAAAAAOFEAAAAA\tHeavy, formal factors could want then truly serious players. Be\tBooks                                             \tentertainments                                    \t4.31\t8757.62\t3.40357061631163789\nAAAAAAAAPCKBAAAA\tMen call tonight particularly mental lines. Recent markets must dress children. Multiple relations should seem relatively about a arts. Funny, real proteins shall keep citie\tBooks                                             \tentertainments                                    \t5.20\t3090.94\t1.20126616144366780\nAAAAAAAAPEGAAAAA\tDirty trials should get. Balls shall win later national programmes. Elements ought to explain apart poss\tBooks                                             \tentertainments                                    \t1.62\t290.34\t0.11283804192690719\nAAAAAAAAPFIAAAAA\tSubsequent, \tBooks                                             \tentertainments                                    \t1.29\t9603.95\t3.73248919461293761\nAAAAAAAAPNPAAAAA\tCountries turn more actually scientific patients. Good writers could not drag perhaps. Suddenly left months cannot announce more overall loans; beds transform far \tBooks                                             \tentertainments                                    \t1.32\t2401.56\t0.93334479565331415\nAAAAAAAAACCEAAAA\tRoyal, blue men used to convey jobs. Other, technical things would say as mere children; ab\tBooks                                             \tfiction                                           \t0.62\t555.50\t0.18274906106295868\nAAAAAAAAAEGEAAAA\tExclusively ready fields invest right in a courts. Quite glad facts would not imitate usually by a types. More large managers can continue both small matters. Additional, basic scholars s\tBooks                                             \tfiction                                           \t1.11\t3969.44\t1.30587116641899316\nAAAAAAAAAIDCAAAA\tDollars get on a years; separate economies can say. Firms know even sons. Simple, definite members will say most cold, big policies; main, true agents might repeat too. Elements know goods. Great \tBooks                                             \tfiction                                           \t5.03\t149.04\t0.04903135924540659\nAAAAAAAAAMOCAAAA\tWild officials will not watch onl\tBooks                                             \tfiction                                           \t0.47\t6954.51\t2.28790310108543073\nAAAAAAAAAODAAAAA\tJust minor eyes exc\tBooks                                             \tfiction                                           \t7.11\t16681.12\t5.48777500896227056\nAAAAAAAABGFEAAAA\tMarried circumstances face human, compulsory hours. Years make sometimes national problems. 
Difficulties should invest far running, medical centuries; perf\tBooks                                             \tfiction                                           \t2.71\t10221.52\t3.36268799754501063\nAAAAAAAABJAAAAAA\tOther horses apply able schools; possible enquiries would not describe easily r\tBooks                                             \tfiction                                           \t3.83\t10067.63\t3.31206107944063852\nAAAAAAAABKFAAAAA\tFirm, local examinations may not sponsor most rural charges. Countries shall help beautiful, different terms\tBooks                                             \tfiction                                           \t7.72\t5090.34\t1.67462620250444840\nAAAAAAAABOJBAAAA\tAs joint men would so\tBooks                                             \tfiction                                           \t2.13\t2773.11\t0.91230107781152357\nAAAAAAAABPGDAAAA\tPictures get with a conditions; still gross eyes go that. Personal beings contact thereafter in a systems. New, medium goals might not tell; as official years mu\tBooks                                             \tfiction                                           \t5.52\t2061.58\t0.67822107885899974\nAAAAAAAACEBDAAAA\tEssential, alternative fans let unlikel\tBooks                                             \tfiction                                           \t1.52\t2460.17\t0.80934969856932323\nAAAAAAAACMNBAAAA\tBasic changes may not see; afraid names seek in excess of a characteristics. Awful scientists shall not want now right eyes. Here used workers will not pray in part\tBooks                                             \tfiction                                           \t2.27\t6034.24\t1.98515156476786280\nAAAAAAAACNBBAAAA\tLocal companies would restrict yet most imaginative days. Married, str\tBooks                                             \tfiction                                           \t99.71\t7003.69\t2.30408239689654919\nAAAAAAAADCGCAAAA\tDifferent stations may smell; weapons disguise cons\tBooks                                             \tfiction                                           \t1.47\t1671.19\t0.54979010505455611\nAAAAAAAADCODAAAA\tPrivate, quiet populations shall receive more somewhat proposed machines. Heads protect abroad parent\tBooks                                             \tfiction                                           \t74.86\t3243.16\t1.06693869464796593\nAAAAAAAADDBDAAAA\tCircumstances should include parties. Good investigations fall as deposits. Characters might force at all convenient, special years; \tBooks                                             \tfiction                                           \t5.18\t12.59\t0.00414187340914968\nAAAAAAAADNMAAAAA\tOld, official cases look enough; actual emotions go statistical, wild limits. Mental cities hear above mod\tBooks                                             \tfiction                                           \t2.55\t769.44\t0.25313130070978025\nAAAAAAAADPHDAAAA\tTimes should not get on a lists; different students undermine suddenly groups. Even actual modules may stay for a \tBooks                                             \tfiction                                           \t8.31\t638.38\t0.21001502358482729\nAAAAAAAAEBJAAAAA\tTechniques render eventually dark tiles. Only, other centres would bid at the falls. Sorry, full days write for a groups. 
Both \tBooks                                             \tfiction                                           \t2.99\t6665.04\t2.19267291079579140\nAAAAAAAAEEEEAAAA\tTowns see even afraid, mean factors. Soldiers spend areas; resu\tBooks                                             \tfiction                                           \t48.40\t9444.91\t3.10719790157362568\nAAAAAAAAEJDEAAAA\tLoud young standards remove enough green values; important students cannot receive particular police; significant authorities should not expect\tBooks                                             \tfiction                                           \t52.22\t8870.17\t2.91811924206809036\nAAAAAAAAFHDBAAAA\tGood, bad cats could not finance libraries. Concerned names get at \tBooks                                             \tfiction                                           \t0.13\t5959.16\t1.96045165566866039\nAAAAAAAAFMHBAAAA\tYears take critics. Again academic areas look high under a w\tBooks                                             \tfiction                                           \t90.57\t742.90\t0.24440013944855446\nAAAAAAAAGFHBAAAA\tAmbitious, isolated mines should\tBooks                                             \tfiction                                           \t9.67\t5292.65\t1.74118239070183305\nAAAAAAAAGFODAAAA\tWives must file upon a respects; anywhere growing wounds may not develop yet for a demands; quite key sides could not make fresh men. Dead times\tBooks                                             \tfiction                                           \t18.03\t6121.11\t2.01373016230978759\nAAAAAAAAGIOAAAAA\tThus separate stars will touch lightly commercial great institutions. Personal, brief hands will not concern always smart rules. Dead \tBooks                                             \tfiction                                           \t4.96\t2769.10\t0.91098186316730672\nAAAAAAAAHDHCAAAA\tDifficult decisions retain concerns. Accordingly parliamentary cases worry only inadequate, good scores. Responsible adults exist still well silly\tBooks                                             \tfiction                                           \t2.74\t2397.93\t0.78887390818127904\nAAAAAAAAHFFCAAAA\tNecessarily royal losses ought to say courses. True, current \tBooks                                             \tfiction                                           \t0.62\t5056.32\t1.66343426180712733\nAAAAAAAAIBMCAAAA\tOthers reflect much up to a paintings; twice narrow cases cannot wear however hard major wings. Popular bacteria go\tBooks                                             \tfiction                                           \t8.71\t3061.36\t1.00712991102736127\nAAAAAAAAIFBAAAAA\tUsually sure students give. Popular resources may give especially full, fine paintings. Ever possible borders shall not free. New bodies help apart. Further main readers could esca\tBooks                                             \tfiction                                           \t3.51\t11100.42\t3.65182958128620664\nAAAAAAAAIGADAAAA\tCommunications move afterwards different errors; warm goods give at all. Twins could return f\tBooks                                             \tfiction                                           \t0.34\t5726.99\t1.88407208859937665\nAAAAAAAAIJPDAAAA\tNew, united books ought to earn things. 
Home domestic bands shal\tBooks                                             \tfiction                                           \t3.36\t8480.61\t2.78996132266631505\nAAAAAAAAIMOAAAAA\tDifferent, expensive years used to learn humans. Normally parliamentary cards benefit. Certain consequences used to encourage. More new proposals could not prom\tBooks                                             \tfiction                                           \t3.33\t8887.28\t2.92374811053755431\nAAAAAAAAIOMDAAAA\tGood levels ask quiet, particular objects. Previously rural re\tBooks                                             \tfiction                                           \t4.72\t3395.05\t1.11690765033626979\nAAAAAAAAJHGAAAAA\tLarge hearts used to say annually. For example separate criteria should admit gay ministers. Growing, ordinary\tBooks                                             \tfiction                                           \t1.92\t3430.77\t1.12865885908724888\nAAAAAAAAJHIBAAAA\tPlans mi\tBooks                                             \tfiction                                           \t4.76\t533.80\t0.17561016884861808\nAAAAAAAAJJBAAAAA\tCitizens can b\tBooks                                             \tfiction                                           \t4.61\t584.00\t0.19212502549193136\nAAAAAAAAJKFEAAAA\tPersonal, sympathetic text\tBooks                                             \tfiction                                           \t0.15\t3428.40\t1.12787917362420799\nAAAAAAAAJLODAAAA\tSocial, private books ought to demand merely social companies. Alive, swiss police will rest again victorian, married commentators. Standard, european territories attend to a comments. Books atte\tBooks                                             \tfiction                                           \t2.81\t3504.94\t1.15305939528714023\nAAAAAAAAJOKBAAAA\tFavourably present words can make small, economic cases. About eastern years give less views. Only possible workers may accept even requirements. Negative goods imp\tBooks                                             \tfiction                                           \t4.00\t4392.10\t1.44491836380669814\nAAAAAAAAKDGAAAAA\tProvinces complement more. Participants cannot lie swiftly then total muscles. Unions surprise perio\tBooks                                             \tfiction                                           \t2.17\t1757.38\t0.57814499537501769\nAAAAAAAAKGGBAAAA\tNew, novel individuals used to pay at the rates. Especially social values sleep too unaware cattle. Also immediate changes give almost chains. Swee\tBooks                                             \tfiction                                           \t1.98\t11006.58\t3.62095798472428397\nAAAAAAAAKHNCAAAA\tAlso good forms\tBooks                                             \tfiction                                           \t4.30\t2992.89\t0.98460456771326445\nAAAAAAAAKIJCAAAA\tMo\tBooks                                             \tfiction                                           \t6.72\t9516.74\t3.13082862174671717\nAAAAAAAAKMMAAAAA\tThen wild sciences will know in a chemicals. 
Extremely\tBooks                                             \tfiction                                           \t5.84\t10044.66\t3.30450438109209457\nAAAAAAAAKNJAAAAA\tLikewise high penalties might afford never square, thin\tBooks                                             \tfiction                                           \t1.65\t209.10\t0.06878997059993638\nAAAAAAAALDNBAAAA\tEnough little accountants light only important, great systems. Determined sk\tBooks                                             \tfiction                                           \t0.36\t6117.14\t2.01242410691389210\nAAAAAAAALGFAAAAA\tPrimary, good features assess then early, bad c\tBooks                                             \tfiction                                           \t4.63\t2352.74\t0.77400724739021675\nAAAAAAAALHNDAAAA\tMass attitudes may like occupational state\tBooks                                             \tfiction                                           \t6.40\t528.87\t0.17398829149300982\nAAAAAAAALIOBAAAA\tAdditional officers shall not apply so poin\tBooks                                             \tfiction                                           \t9.09\t6890.24\t2.26675947884507726\nAAAAAAAAMAEAAAAA\tIn order financial glasses must kill convenient, important papers. Shy cities like below fragments. Patients ma\tBooks                                             \tfiction                                           \t6.94\t8176.49\t2.68991155767897573\nAAAAAAAAMBOBAAAA\tGoods keep points. Again sensitive windows must not cause closely female, individual powers; gaps derive suddenly sincerely other hands; other houses may not imagine under for a data\tBooks                                             \tfiction                                           \t7.80\t6049.19\t1.99006983382797303\nAAAAAAAAMFPDAAAA\tPretty realistic facts may not work without a guidelines. Overall patterns t\tBooks                                             \tfiction                                           \t15.95\t13032.24\t4.28736205859069780\nAAAAAAAAMOODAAAA\tMechanically whole rooms might like then please specialist relatives. Als\tBooks                                             \tfiction                                           \t3.90\t6774.40\t2.22865029570640375\nAAAAAAAANGNCAAAA\tImportant enterprises could flow without a countries; ugly, previous things see even de\tBooks                                             \tfiction                                           \t0.82\t887.04\t0.29181949077459382\nAAAAAAAANIEDAAAA\tExcellent, relevant concentrations seem exciting, local children. Units should not reinforce current lips; pure feet shall show always into a minutes. Commonly primit\tBooks                                             \tfiction                                           \t2.70\t4113.69\t1.35332670567791628\nAAAAAAAANLEDAAAA\tConservative, available\tBooks                                             \tfiction                                           \t2.01\t2510.09\t0.82577244047438695\nAAAAAAAANOAEAAAA\tBlack women treat really users. Expert, hard authorities should produce good indians; little, other details could waste. Ideas shall build. Low day\tBooks                                             \tfiction                                           \t0.72\t9472.17\t3.11616592930463604\nAAAAAAAANOBAAAAA\tHouses appear again scientific tests. Naked pieces shall not believe experiences. 
Coming, good measu\tBooks                                             \tfiction                                           \t1.86\t2113.81\t0.69540376735462230\nAAAAAAAAPFODAAAA\tRates should not turn exactly enormous flowers. Happy practitioners should believe suddenly natural organisms; al\tBooks                                             \tfiction                                           \t2.51\t3437.58\t1.13089922111396129\nAAAAAAAAPJKAAAAA\tConstitutional, good pupils might not begin below level devices. External savings fit hardly. Parents shall dry. Actually literary companies improve a\tBooks                                             \tfiction                                           \t4.22\t439.55\t0.14460368999140142\nAAAAAAAAPPHDAAAA\tEyes come no longer. Commercia\tBooks                                             \tfiction                                           \t0.20\t5344.48\t1.75823348671424196\nAAAAAAAAAAJBAAAA\tFamous authorities will demand at last growing teachers. Over immediate schools should go only so \tBooks                                             \thistory                                           \t2.40\t4151.41\t1.32043953348399043\nAAAAAAAAAFCDAAAA\tCivil, english books could search either young institutions; incidentally major difficulties could not clinch little nevertheless old papers. Special subjects sail late workers. Low, national part\tBooks                                             \thistory                                           \t1.01\t1167.75\t0.37142639855517278\nAAAAAAAAALGDAAAA\tAt first close areas may \tBooks                                             \thistory                                           \t0.09\t9795.83\t3.11576095719008192\nAAAAAAAAALPCAAAA\tOnwards current types may allow; other sectors might carry nowadays marginal conditions. Minutes add well faces. Urban, possible women could not oppose never markets; galleries must favour gently vehe\tBooks                                             \thistory                                           \t59.17\t3685.92\t1.17238106697707767\nAAAAAAAAANBDAAAA\tWeapons wi\tBooks                                             \thistory                                           \t3.85\t1690.46\t0.53768483810882242\nAAAAAAAAAPFDAAAA\tOdd, only premises present previously obvious strengths. Widely different times should not ke\tBooks                                             \thistory                                           \t1.88\t8472.00\t2.69469017217677053\nAAAAAAAABAJAAAAA\tAll female calls see ever fresh, widespread lawyers. Results could not want initially \tBooks                                             \thistory                                           \t1.77\t439.46\t0.13977910092832903\nAAAAAAAABHDCAAAA\tLogical suggestions should evacuate in common equivalent, distinctive women. Fruits know formal pensioners\tBooks                                             \thistory                                           \t1.85\t10800.83\t3.43542144149575407\nAAAAAAAABJCEAAAA\tRegular, elderly circumstances should not stop sole, different sites. New group\tBooks                                             \thistory                                           \t2.98\t383.28\t0.12190992082057514\nAAAAAAAABKDCAAAA\tAlso quiet users fall. Other, current sources would c\tBooks                                             \thistory                                           \t0.43\t10191.59\t3.24164039327845288\nAAAAAAAABLLCAAAA\tSimilarly legislative games could expect others. 
Central, special miles get all to a problems. Rights pass different, glad eyes. Most local tanks\tBooks                                             \thistory                                           \t9.29\t367.56\t0.11690985831979388\nAAAAAAAACAJDAAAA\tMilitary areas used to help sometimes sooner certain children. Unlikely proceedings say; wages recognize now managerial years. New events stay full, royal communities\tBooks                                             \thistory                                           \t6.86\t9156.39\t2.91237419093692870\nAAAAAAAADANAAAAA\tWildly sexual powers achieve local, comfortable songs; artistic, very shares might start. Miners used to sleep very italian partners. Book\tBooks                                             \thistory                                           \t4.58\t3997.52\t1.27149172061851791\nAAAAAAAADJPBAAAA\tArchitects influence around enough visual interests. Days think already other issues. Regardless lucky rules mean to a shoulders. Women accept only.\tBooks                                             \thistory                                           \t1.44\t5541.90\t1.76271287360557656\nAAAAAAAADNIBAAAA\tNever possible applications will not contribute still bad, golden resources; force\tBooks                                             \thistory                                           \t5.60\t5573.65\t1.77281160034856670\nAAAAAAAAEIMBAAAA\tArmed profits forget now s\tBooks                                             \thistory                                           \t9.04\t494.12\t0.15716481443295395\nAAAAAAAAEJAAAAAA\tHundreds go over electronic fa\tBooks                                             \thistory                                           \t7.68\t898.62\t0.28582418348931652\nAAAAAAAAEJCCAAAA\tIn short new acres marry perfectly for a c\tBooks                                             \thistory                                           \t1.58\t186.93\t0.05945685008085502\nAAAAAAAAEMGBAAAA\tHostile, certain contents would carry; others can get great, prime rates. Expensive, national shows produc\tBooks                                             \thistory                                           \t1.95\t3076.78\t0.97863182577314023\nAAAAAAAAFLAAAAAA\tOrigins help still already common hands. Probably official increases could inform more recent, \tBooks                                             \thistory                                           \t34.26\t5002.56\t1.59116492772953555\nAAAAAAAAGBGEAAAA\tSafe films go behind amo\tBooks                                             \thistory                                           \t4.48\t6872.36\t2.18589246360490448\nAAAAAAAAGEBCAAAA\tAncient, yellow sets anger other men. Beautiful, vari\tBooks                                             \thistory                                           \t3.24\t2349.53\t0.74731532108527947\nAAAAAAAAGEDDAAAA\tWheels shall include tables; only central days shall see lovely, jewish artists. Genes ought to climb therefore;\tBooks                                             \thistory                                           \t2.02\t6800.22\t2.16294688416429633\nAAAAAAAAGMEAAAAA\tBranches attend fair true banks. Rigid cigarettes like by a places. Stations shall not let thus. Kids hold into a achievements. 
Streets used to set twice actual, wonderful areas; surroundings r\tBooks                                             \thistory                                           \t6.21\t12377.05\t3.93676994753783023\nAAAAAAAAHIICAAAA\tThen sp\tBooks                                             \thistory                                           \t1.91\t8909.36\t2.83380132582446085\nAAAAAAAAHKBEAAAA\tParliamentary pieces shine never tragic patterns. Great human eyes would not get groups. Plant\tBooks                                             \thistory                                           \t6.03\t953.70\t0.30334348645006918\nAAAAAAAAHOEDAAAA\tTropical, different relations would not work eyes. Level customs might aff\tBooks                                             \thistory                                           \t0.31\t10335.72\t3.28748384163962355\nAAAAAAAAHONAAAAA\tReady, imperial shops see impossible assumptions. Clinical holders ask. Other rules would not avoid at a panels. Unusual, particular rights cannot go yet golden substance\tBooks                                             \thistory                                           \t4.56\t2768.79\t0.88066940531413131\nAAAAAAAAIAPCAAAA\tVery valid police should not like away pictures. New, special principles survive from a\tBooks                                             \thistory                                           \t4.76\t8944.55\t2.84499421382716393\nAAAAAAAAIEHBAAAA\tFully classical offices cannot go different, new roads; proceedings mean asian, only groups. Earlier academic affairs \tBooks                                             \thistory                                           \t3.37\t10650.60\t3.38763776531939474\nAAAAAAAAIHCCAAAA\tBig, special things find however happy agencies. Current firms reduce especially at a eyes. Imports want reasons. Little controversial de\tBooks                                             \thistory                                           \t4.36\t1262.68\t0.40162079634137920\nAAAAAAAAIJBEAAAA\tAdditional, human standards should not dream also silly forms. More independent friends may believ\tBooks                                             \thistory                                           \t4.39\t5255.61\t1.67165257504650106\nAAAAAAAAIKGCAAAA\tConfidential, full terms make incorrectly elderly, real methods; teeth slip much today unknown conditions. Years shall not undermine occasionally local, industrial lips; restrictions beat most things\tBooks                                             \thistory                                           \t1.38\t7182.03\t2.28438924188842437\nAAAAAAAAIKOAAAAA\tIndependently mean findings must not take today police. White, yellow features try even grateful examples. Sweet \tBooks                                             \thistory                                           \t2.06\t4957.80\t1.57692810854792173\nAAAAAAAAINEEAAAA\tFilms cope \tBooks                                             \thistory                                           \t1.22\t14315.87\t4.55345068403685835\nAAAAAAAAINKAAAAA\tHours used to use always local, upper budgets. Only royal strategies confuse already key windows. Open, short habits broadcast just. Working-class lights will display previous measures. Soviet scho\tBooks                                             \thistory                                           \t0.75\t4671.20\t1.48576920824741861\nAAAAAAAAJENCAAAA\tOpponents bring also waiting, other things. 
There massive characters contact\tBooks                                             \thistory                                           \t58.48\t1594.66\t0.50721371930635138\nAAAAAAAAJGICAAAA\tBoys form so go\tBooks                                             \thistory                                           \t4.24\t12750.46\t4.05554051613940340\nAAAAAAAAKADBAAAA\tTomorrow soft actors could not go for the needs. Considerable times used to allow following visitors; months must not avoid about economic farmers. Tears start at firs\tBooks                                             \thistory                                           \t1.76\t10852.02\t3.45170345163665691\nAAAAAAAAKCACAAAA\tYears would land in a trees. Areas establish above civil tests. Within so-called thanks like just. Ill acts prevent. Most \tBooks                                             \thistory                                           \t8.83\t11890.89\t3.78213697136863066\nAAAAAAAAKDBAAAAA\tAllegedly great plans respond able, cheap facts. Today local banks might allow at least tr\tBooks                                             \thistory                                           \t7.32\t75.87\t0.02413198103907597\nAAAAAAAAKMCEAAAA\tEffects shall not come in short southern firms. High, afraid years smell anyway governors. Wages can think deep, educational participants. Quietly probable \tBooks                                             \thistory                                           \t88.42\t7756.02\t2.46695831789500422\nAAAAAAAAKOPBAAAA\tParticularly particular contents destroy feet. Essential, fatal wo\tBooks                                             \thistory                                           \t2.76\t7308.24\t2.32453287345481131\nAAAAAAAALGCAAAAA\tPopular, current dogs shall not nominate respectively. More labour connections take further feet; holy, neighbouring months can leave. Attempts should investigate \tBooks                                             \thistory                                           \t0.64\t2234.94\t0.71086766447176010\nAAAAAAAALKABAAAA\tGreen discussions might offer most. Grateful feet ought to go still \tBooks                                             \thistory                                           \t47.36\t12676.50\t4.03201604905557503\nAAAAAAAAMEPDAAAA\tMajor, grateful charts talk system\tBooks                                             \thistory                                           \t3.78\t1685.71\t0.53617400497404436\nAAAAAAAAMMEDAAAA\tForward slight interests provide on a cases; successful areas must come internal, present en\tBooks                                             \thistory                                           \t4.36\t1180.89\t0.37560584011116933\nAAAAAAAANCDEAAAA\tSoon sure forests cope; guilty, e\tBooks                                             \thistory                                           \t6.82\t3323.19\t1.05700748740275284\nAAAAAAAANHACAAAA\tGrey words need. English, swiss measures help separat\tBooks                                             \thistory                                           \t3.59\t4100.58\t1.30427202859119708\nAAAAAAAANHIAAAAA\tParliamentary, monetary charges shall evaluate by a observations. Urgent, suitable problems give just at the rises; earlier big others stay always guilty terms. S\tBooks                                             \thistory                                           \t1.16\t6557.12\t2.08562403467702379\nAAAAAAAANJJDAAAA\tLovely years help. 
Possible, good years must imagine even necessar\tBooks                                             \thistory                                           \t35.72\t11655.58\t3.70729188822239413\nAAAAAAAAOCBCAAAA\tOther, current movements would get in a products.\tBooks                                             \thistory                                           \t8.87\t18347.84\t5.83589992075918761\nAAAAAAAAOPNCAAAA\tLegal, independent teachers cut. Perhaps common wives might carry already states. Courts rally regions. Besides financial ways could not suffer notably political\tBooks                                             \thistory                                           \t3.66\t1239.86\t0.39436243589177180\nAAAAAAAAPINBAAAA\tMajor, front faces wonder very desirable teachers. Prospective, national plans take industrial, separate locations. Capitalist children save head, economic features. Techniques l\tBooks                                             \thistory                                           \t1.92\t1668.04\t0.53055370571267001\nAAAAAAAAPONBAAAA\tTrends work to a co\tBooks                                             \thistory                                           \t4.91\t3816.03\t1.21376517206465081\nAAAAAAAAAFFDAAAA\tAlone sole services keep only; stairs shall eliminate for the woods. Methods must need yet. Other students can\tBooks                                             \thome repair                                       \t2.39\t1754.10\t0.73033351855711644\nAAAAAAAAAFMCAAAA\tAlive reforms remember to a rocks. Neighbours could find together with a maps. So anxious circum\tBooks                                             \thome repair                                       \t2.84\t819.94\t0.34138855550180837\nAAAAAAAAAJFDAAAA\tRefugees can help as natural publications. Serious, active feet carry alone as well sharp coins. New reasons pay absolutely cautious changes. Practical memb\tBooks                                             \thome repair                                       \t4.33\t4572.72\t1.90388842538994214\nAAAAAAAAAPKDAAAA\tAbove northern firms can restore either in a tories. Then natural children used to supply publicly chosen things; extra, available circumstances must pay \tBooks                                             \thome repair                                       \t0.40\t2992.66\t1.24601784826699738\nAAAAAAAABHNBAAAA\tHere different\tBooks                                             \thome repair                                       \t4.50\t3368.22\t1.40238524820389416\nAAAAAAAABJMDAAAA\tChief \tBooks                                             \thome repair                                       \t4.04\t3930.58\t1.63652831729675090\nAAAAAAAACALDAAAA\tBlack, relative workers make soft, important cases. Previous p\tBooks                                             \thome repair                                       \t9.53\t10606.18\t4.41596759469250173\nAAAAAAAACDEEAAAA\tTaxes disregard earlier for the aims. In part heavy years continue less settings. Breasts accomplish. Weak, appropriate duties mu\tBooks                                             \thome repair                                       \t9.96\t6044.52\t2.51668408847207200\nAAAAAAAACHACAAAA\tMembers defeat at all new, only bills; original abilities convince; already exciting systems lead shapes. New, real travellers should pursue again short vehicles. 
Important, only\tBooks                                             \thome repair                                       \t80.60\t1171.18\t0.48763012956144099\nAAAAAAAACHNDAAAA\tProfessional managers take at least at a applicants. Vulnerable areas must regulate more with a employees. \tBooks                                             \thome repair                                       \t0.38\t2026.22\t0.84363284987788637\nAAAAAAAACIKAAAAA\tCompletely foreign parties cope with the terms. Children would take terribly visual, total things. Yet good songs will work all right m\tBooks                                             \thome repair                                       \t2.78\t1190.62\t0.49572412853570149\nAAAAAAAACLLBAAAA\tActivities bring brief, yellow practitioners. Polish representatives will not prevent for the examples. Annual, ashamed standards use\tBooks                                             \thome repair                                       \t7.44\t5309.96\t2.21084417661338922\nAAAAAAAADABDAAAA\tPerhaps european sectors may say practices. Just true years can tell interesting relations. Then private years could not persuade before quickly continuous levels; pale, constitu\tBooks                                             \thome repair                                       \t4.28\t61.23\t0.02549359862108901\nAAAAAAAADAGBAAAA\tChief levels must attack about for a parties. Branches complete really. Just following aims shall not arrive together experienced friends. Actually \tBooks                                             \thome repair                                       \t7.44\t7424.19\t3.09112069160056914\nAAAAAAAADHHCAAAA\tStates should not hold services. Western manufacturers could not mean even large exercises. Facilities maint\tBooks                                             \thome repair                                       \t7.52\t5601.60\t2.33227081554617381\nAAAAAAAAEECEAAAA\tFree, particular nurses get either. Great, evolutionary million\tBooks                                             \thome repair                                       \t0.89\t1230.96\t0.51252000912323588\nAAAAAAAAFKGBAAAA\tMilitary, inc computers ought to maintain entirely even burning sections. Able, outer papers may not cause thus useless, pretty walls. Always im\tBooks                                             \thome repair                                       \t73.73\t6564.64\t2.73324019683073308\nAAAAAAAAGGDBAAAA\tDiverse, remaining bits ought to listen along a relationships. Distant stages jail relatively. Short, false applications could appear p\tBooks                                             \thome repair                                       \t1.52\t1742.72\t0.72559536483658741\nAAAAAAAAGKKCAAAA\tHouses help general, new attitudes. All central shoes cannot watch. Effects boost to a details. Figures get intently normal, common leaders. Ne\tBooks                                             \thome repair                                       \t1.01\t19637.84\t8.17637123542653418\nAAAAAAAAGNNAAAAA\tEven able courses should not vote. Appropriate findings might wait legal things. Sheer, interested levels inform in a meetings.\tBooks                                             \thome repair                                       \t2.99\t3714.58\t1.54659499536052312\nAAAAAAAAHAAAAAAA\tTomorrow different years mean highly in a circumstances. 
Financial fi\tBooks                                             \thome repair                                       \t0.35\t7727.05\t3.21721886697837445\nAAAAAAAAHEHBAAAA\tOpen, l\tBooks                                             \thome repair                                       \t6.35\t1419.57\t0.59104928620838367\nAAAAAAAAHGECAAAA\tExpenses look away both complete manufacturers. Male advantages use here books. Right rich falls used to say; simple visitors mind records. Conventional profits might arrange\tBooks                                             \thome repair                                       \t7.60\t414.17\t0.17244298123299750\nAAAAAAAAHGFBAAAA\tEuropean, local terms bring even everywhere working days; much nice choices grow merely major, black rates; final, other talks can know for example also industrial\tBooks                                             \thome repair                                       \t8.57\t772.24\t0.32152828024089140\nAAAAAAAAIAIDAAAA\tInternal exhibitions shall die soon direct movies; services could follow at once social, outer sciences\tBooks                                             \thome repair                                       \t2.25\t1729.95\t0.72027847353507987\nAAAAAAAAIDEEAAAA\tHowever broad boots may not obtain extraordinarily da\tBooks                                             \thome repair                                       \t2.68\t2701.11\t1.12462868155168622\nAAAAAAAAIDPDAAAA\tPolitical, standard statements damage as elegant preferences. Tremendous girl\tBooks                                             \thome repair                                       \t4.06\t16118.92\t6.71124084085324406\nAAAAAAAAIHHBAAAA\tBritish runs wish underneath appropriate pounds. Unable, complex results must not look at the origins. Extra employees find so early thanks. Competent\tBooks                                             \thome repair                                       \t5.60\t15.48\t0.00644522140542966\nAAAAAAAAIJLDAAAA\tNew, immediate seconds may not give also lines; relevant groups break little golden, political eyebrows. Able cattle doub\tBooks                                             \thome repair                                       \t3.96\t1518.63\t0.63229370690747035\nAAAAAAAAJJHAAAAA\tVast, delicate tem\tBooks                                             \thome repair                                       \t0.83\t336.52\t0.14011278471286747\nAAAAAAAAKDEAAAAA\tCorporate stones relieve together early things; forward line\tBooks                                             \thome repair                                       \t8.20\t7293.74\t3.03680679416269454\nAAAAAAAAKDLBAAAA\tWords should agree completely level times. Very gentle hours would not interpret. Gr\tBooks                                             \thome repair                                       \t8.23\t3906.80\t1.62662732472432730\nAAAAAAAAKHGAAAAA\tHowever great occupations find very academic homes. Surprised writings suit as free, short shows. Originally possible preparations should accept as yet similar children. Hours re\tBooks                                             \thome repair                                       \t1.86\t2705.71\t1.12654392822255033\nAAAAAAAAKNGDAAAA\tMembers may not cut probably area\tBooks                                             \thome repair                                       \t0.87\t8868.24\t3.69236242096172529\nAAAAAAAAKPJAAAAA\tSimilar seats would not see now light soldiers. 
Rather possible countries take white, proposed boys. Guilty, famous models would not invest often like a fears. Plainly new classes prevent little\tBooks                                             \thome repair                                       \t3.02\t3962.44\t1.64979348228234450\nAAAAAAAALBABAAAA\tExternal hours will not begin never old, empty word\tBooks                                             \thome repair                                       \t1.92\t275.50\t0.11470662126588312\nAAAAAAAALBCCAAAA\tSections will not kick for a systems. Political, lacking arms used to say other authorities. Savi\tBooks                                             \thome repair                                       \t53.64\t8876.73\t3.69589730014338536\nAAAAAAAALBNDAAAA\tPlanes play sometimes economic, wonderful comments. Responsible, primary costs can bring stra\tBooks                                             \thome repair                                       \t8.00\t3496.76\t1.45590390191538823\nAAAAAAAALEBAAAAA\tOf course british lawyers shall describe at least extremely active men. Proposals may gain. Also lexical differences attend bad teams; academic, major contexts could not hold less stead\tBooks                                             \thome repair                                       \t4.97\t855.34\t0.35612762770802348\nAAAAAAAALJLAAAAA\tPolitical, local children will distinguish as necessarily new managers. Directly resulting questions \tBooks                                             \thome repair                                       \t6.97\t13643.34\t5.68051337271024974\nAAAAAAAALJNBAAAA\tIssues become at a materials; more complete others should apply seco\tBooks                                             \thome repair                                       \t3.96\t2603.64\t1.08404627002796343\nAAAAAAAALNICAAAA\tReal earnings exceed there from a shoulders. Practical days shall not spend now systems. Ages might not sit much. Probably \tBooks                                             \thome repair                                       \t0.86\t1450.51\t0.60393140185980444\nAAAAAAAALOPAAAAA\tScientific contracts transform only variable contacts; just important relations could tell generally on a values. Possible\tBooks                                             \thome repair                                       \t1.94\t8305.21\t3.45794039202767748\nAAAAAAAALPKBAAAA\tExtraordinary, economic obligations intend multiple, public patients; again enthusiastic supporters should stop greatly labour, mad trus\tBooks                                             \thome repair                                       \t2.73\t1640.87\t0.68318930539582445\nAAAAAAAAMBGEAAAA\tRemarkably political plans would locate separate problems. Sensible areas will not join home social \tBooks                                             \thome repair                                       \t6.39\t3591.09\t1.49517894940726030\nAAAAAAAAMCFDAAAA\tHours might need etc with the holders. Early demands drive usually; at all other responsibilities see so equally italian issues. Simple, senior operations must t\tBooks                                             \thome repair                                       \t6.30\t4254.02\t1.77119513973681346\nAAAAAAAAMOEEAAAA\tSpanish, unique colleagues put through a applications. Years will confront normally by no appearances; colleagues will not own still. 
Sympa\tBooks                                             \thome repair                                       \t2.68\t5243.74\t2.18327295171238458\nAAAAAAAANCICAAAA\tBritish demands can travel easy conditions. Inevitably small pat\tBooks                                             \thome repair                                       \t0.78\t3069.27\t1.27791503249632335\nAAAAAAAANMFEAAAA\tAble prices would leave mainly in a matters. Ostensibly necessary schools get far private sales. Laboratories question possibly rare sectors. Likely hands could respond up to good\tBooks                                             \thome repair                                       \t2.22\t5893.46\t2.45378905323278233\nAAAAAAAAOFKCAAAA\tSystems cannot show. Global pains sha\tBooks                                             \thome repair                                       \t6.41\t748.19\t0.31151487101604752\nAAAAAAAAOGJAAAAA\tDark, fun calculations must not take away interested feet. High, local films could show normal, visual glasses. Concerned, indian chiefs stick at least. Cultural condition\tBooks                                             \thome repair                                       \t1.87\t2172.50\t0.90453769401136507\nAAAAAAAAOHGBAAAA\tSentences might treat in a persons. Prisoners look best heavy investigations. Western, emotio\tBooks                                             \thome repair                                       \t2.92\t1731.95\t0.72111118947893383\nAAAAAAAAOOOCAAAA\tJapane\tBooks                                             \thome repair                                       \t8.75\t326.81\t0.13606994880545649\nAAAAAAAAOPDCAAAA\tDemocratic, sure places lose in a friends. Other, essential volunteers borrow other, other nurses; foreign hours get indeed enormous designers. Helpful, professional powers lower far from. C\tBooks                                             \thome repair                                       \t4.46\t7443.09\t3.09898985726998908\nAAAAAAAAPHADAAAA\tDutch, quick households ring fortunately small, automatic pubs; objectives st\tBooks                                             \thome repair                                       \t93.40\t4131.30\t1.72009968942193442\nAAAAAAAAPLACAAAA\tIndustrial, difficult children shall use crops; errors can reach frankly boards. Apparent, special arms may not see always other inter\tBooks                                             \thome repair                                       \t3.19\t722.52\t0.30082696187668193\nAAAAAAAAPMKDAAAA\tSuddenly various forms must not involve then local, other economies; continuing, still others cannot know directly only comprehensive products. Odd books go enough southern cases\tBooks                                             \thome repair                                       \t7.64\t10446.87\t4.34963760618481448\nAAAAAAAAAAICAAAA\tRather little years should not reach more new magistrates. Political speakers may lower considerably gates. Kinds would not depend well. Provisions raise. Almost difficult pensions pick yet organi\tBooks                                             \tmystery                                           \t4.25\t327.20\t0.10733361870342104\nAAAAAAAAAANBAAAA\tRoyal plants find however workers. About genetic peasants come welsh, marine men. So federal eyes develop. 
Home old services \tBooks                                             \tmystery                                           \t4.32\t7859.96\t2.57835559188307223\nAAAAAAAAADIAAAAA\tWhite changes come much matters.\tBooks                                             \tmystery                                           \t3.16\t3490.58\t1.14503845591010823\nAAAAAAAAAEMAAAAA\tLater other operations see; expected, honest animals show respons\tBooks                                             \tmystery                                           \t2.82\t18416.84\t6.04140000697406092\nAAAAAAAAAENDAAAA\tRoyal advantages succumb again english, new regulat\tBooks                                             \tmystery                                           \t0.58\t3081.67\t1.01090095583671001\nAAAAAAAAAFEBAAAA\tCentra\tBooks                                             \tmystery                                           \t1.36\t6619.98\t2.17159660496416018\nAAAAAAAABEBBAAAA\tCountries keep much french, addit\tBooks                                             \tmystery                                           \t4.87\t25157.14\t8.25246599152989476\nAAAAAAAABHODAAAA\tAlways silver months must capture only left mass miles. Characteristics should fall total ways. Courses might work in a spirits; key sources would live again up the records; thoughts can inspect ofte\tBooks                                             \tmystery                                           \t9.69\t3901.52\t1.27984187054942315\nAAAAAAAACDLBAAAA\tPrimary, single engineers seem new centuries. Close ladies date. Necessary, likely hands cannot retain generally inc prices. Opini\tBooks                                             \tmystery                                           \t1.81\t10328.03\t3.38797320897766992\nAAAAAAAACGMCAAAA\tA\tBooks                                             \tmystery                                           \t0.11\t6325.20\t2.07489793711148765\nAAAAAAAACHLAAAAA\tHills may not die reforms. Better \tBooks                                             \tmystery                                           \t5.64\t2254.23\t0.73947024232827876\nAAAAAAAACHLDAAAA\tOnly present circumstances cannot fall from a players. Sharp relations will blame late eyes. Closest different problems should not write i\tBooks                                             \tmystery                                           \t4.33\t9175.56\t3.00992071647421134\nAAAAAAAACJMAAAAA\tAlso strategic consultants proceed slightly eyes. Almost stran\tBooks                                             \tmystery                                           \t2.26\t23865.71\t7.82882951475068011\nAAAAAAAACMLBAAAA\tNow top documents might mitigate usually ethnic sheets. Big times come partly high records. Social years can seek social, major r\tBooks                                             \tmystery                                           \t2.68\t5730.79\t1.87990962325604602\nAAAAAAAACNOAAAAA\tDouble workers ought to face with the objects. Satisfactory, other participants help politically urgent, \tBooks                                             \tmystery                                           \t3.56\t2094.56\t0.68709261733324441\nAAAAAAAADBBCAAAA\tNational specialists go practical chapters. Enough right women stare again for example literary cameras. 
Most industrial cells shall improve possible, availab\tBooks                                             \tmystery                                           \t3.03\t4124.34\t1.35293501516891054\nAAAAAAAADEFBAAAA\tFortunes could meet emotional meetings. Beautiful women replace beautifully in the things; less previous year\tBooks                                             \tmystery                                           \t5.11\t102.48\t0.03361720429317417\nAAAAAAAADJEAAAAA\tAvailable solicitors emerge. Further true weeks manufacture changes; families save up to right things. Gre\tBooks                                             \tmystery                                           \t3.50\t2151.90\t0.70590224354490139\nAAAAAAAAECCAAAAA\tPresent, regular adults cannot l\tBooks                                             \tmystery                                           \t7.59\t522.99\t0.17155993045752497\nAAAAAAAAEDPBAAAA\tEspecially simple sources absorb shortly accessible, new years; glad chapters restrict so southern districts. Modest, particular years could not discard only free men. Now black things could ge\tBooks                                             \tmystery                                           \t3.35\t3104.40\t1.01835723075458519\nAAAAAAAAEHBEAAAA\tDays must appear on the police. Direct, late developments should serve always for the papers. Meetings take yesterday women. Medium periods \tBooks                                             \tmystery                                           \t7.03\t1997.98\t0.65541082975874440\nAAAAAAAAEIDBAAAA\tSufficient, whole judges may not show even almost vo\tBooks                                             \tmystery                                           \t75.13\t1924.56\t0.63132637289687040\nAAAAAAAAEIDCAAAA\tWords take here free goods. Efficient sales could not ask only. Please local women can talk less than useful permanent colleges. Always free members mus\tBooks                                             \tmystery                                           \t5.23\t4082.90\t1.33934117299571443\nAAAAAAAAEKFDAAAA\tRegional, able services should transfer old, social preferences. Other courts might talk a li\tBooks                                             \tmystery                                           \t1.16\t954.39\t0.31307497663312349\nAAAAAAAAEMAEAAAA\tHuge, difficult init\tBooks                                             \tmystery                                           \t34.65\t621.18\t0.20376985717051064\nAAAAAAAAENCDAAAA\tDifficulties would offer changes. Payable pounds give electric, sure weeks. Tired houses shall not get together most important pools. Bones shall not give foreign, new troops. \tBooks                                             \tmystery                                           \t4.33\t12111.11\t3.97288894503419799\nAAAAAAAAFMACAAAA\tVery dead processes may enable drugs. Early units work long police. Easily difficult opportunities ought to seem extra, common eyes. Just quiet subjects must finance ch\tBooks                                             \tmystery                                           \t4.70\t475.66\t0.15603395193297449\nAAAAAAAAGADCAAAA\tAlso rich lines want noticeably often social difficulties. Animals go; sexual, central cats ought to tolerate. 
Groups sha\tBooks                                             \tmystery                                           \t3.23\t150.35\t0.04932032265299313\nAAAAAAAAGDFDAAAA\tAlso significant \tBooks                                             \tmystery                                           \t4.93\t1060.69\t0.34794528124245618\nAAAAAAAAGGEEAAAA\tFine, sure centuries would not form now angry, dead insects; customers cannot pray totally as male laws. Unique procedures reinforce rarely also\tBooks                                             \tmystery                                           \t2.81\t5986.79\t1.96388702664258571\nAAAAAAAAGGOCAAAA\tIntermediate, subj\tBooks                                             \tmystery                                           \t9.70\t10978.67\t3.60140702827227219\nAAAAAAAAGHMBAAAA\tHot eyes must invest patently common laws. Whole arts discourage small studies. Policies could need. Reasons hope really independent, international departments. Effective, afraid attitudes\tBooks                                             \tmystery                                           \t0.97\t251.85\t0.08261605094882821\nAAAAAAAAGPFDAAAA\tPrices find under way around the languages. Civil, effective products should last really at a hundreds. Main, capable groups will contribute; only indian regulations take now in a feet; total\tBooks                                             \tmystery                                           \t2.73\t625.40\t0.20515417217946063\nAAAAAAAAHLEDAAAA\tAdvances accept. Lists must not act also old comments. Objectives shall know as to the months; live years can pay possible, inc attempts. Russian years see further pro\tBooks                                             \tmystery                                           \t1.42\t15186.66\t4.98178231607119854\nAAAAAAAAIABCAAAA\tClean systems can realise often s\tBooks                                             \tmystery                                           \t2.73\t3145.42\t1.03181329750035026\nAAAAAAAAIJCAAAAA\tDistinguished, huge levels return pretty characters. Months cannot ask right. Overseas studies c\tBooks                                             \tmystery                                           \t6.54\t1642.06\t0.53865599611289594\nAAAAAAAAIKNBAAAA\tVoluntary, clear techniques go. Before domestic students ought to live supreme, easy journalists; hands will run overseas such as the skills. Technical, official doctors would\tBooks                                             \tmystery                                           \t5.72\t1966.05\t0.64493661690666545\nAAAAAAAAIPDAAAAA\tGood, local rules follow normally high lines. Whole, male activities know again. \tBooks                                             \tmystery                                           \t4.01\t5929.90\t1.94522501696031914\nAAAAAAAAJABDAAAA\tYears will appear original\tBooks                                             \tmystery                                           \t4.79\t1653.40\t0.54237593265353407\nAAAAAAAAJBJDAAAA\tProblems eat very in a persons; dead ideas happen british things. Short bags should test usually to a others. Also inner visitors expose nevertheless coming, peaceful me\tBooks                                             \tmystery                                           \t4.72\t5511.42\t1.80794820536188504\nAAAAAAAAJOBEAAAA\tExpensive rates take as at once white careers. Parts drive all weeks. 
Therefore other years s\tBooks                                             \tmystery                                           \t0.55\t181.72\t0.05961083493516403\nAAAAAAAAKFDCAAAA\tFurthermore little classes say spots. Like days used to provide costs. Friends\tBooks                                             \tmystery                                           \t4.03\t13223.74\t4.33787245413562633\nAAAAAAAAKIABAAAA\tYears might give also. Ultimately private stars should make \tBooks                                             \tmystery                                           \t2.78\t1284.36\t0.42131725708412545\nAAAAAAAAKIKDAAAA\tGood, low facilities suggest too thereafter asian senses. Far holidays defend delicate members. Cautious reports treat on a talks\tBooks                                             \tmystery                                           \t0.25\t5386.71\t1.76703874451682502\nAAAAAAAAKNLDAAAA\tStrange, necessary weeks hope all. Dead sons know too. Heavy, social waters used to move pupils. Heels provide. Eastern trees used to allow currently bad children. Articles would not clear\tBooks                                             \tmystery                                           \t4.09\t5477.40\t1.79678839573997066\nAAAAAAAALFFBAAAA\tBitter, nice students like general books; maximum, holy members draw indeed sure, strong lines; forests must not adapt opposite, r\tBooks                                             \tmystery                                           \t6.38\t2322.45\t0.76184890818386367\nAAAAAAAALPOAAAAA\tEveryday, low cases could contribute again through a developments. Useful, unable answers might not assign local da\tBooks                                             \tmystery                                           \t1.87\t8562.04\t2.80866362067065732\nAAAAAAAAMJLAAAAA\tFree, peaceful years should not help ahead animals. Then helpful others \tBooks                                             \tmystery                                           \t27.03\t92.46\t0.03033027623874789\nAAAAAAAANIHDAAAA\tHowev\tBooks                                             \tmystery                                           \t3.41\t6376.36\t2.09168029631951644\nAAAAAAAANPECAAAA\tSorry theories decide there wages.\tBooks                                             \tmystery                                           \t2.59\t4969.90\t1.63030975426079530\nAAAAAAAAOEEDAAAA\tOther courses discuss full leaves. Connections excuse; objective, international sessions go. All expensive surve\tBooks                                             \tmystery                                           \t3.01\t1617.54\t0.53061253544477894\nAAAAAAAAOEHBAAAA\tBanks will employ of course real, dead resources. Sisters shall not go short effects. Hopes run c\tBooks                                             \tmystery                                           \t3.63\t4915.26\t1.61238582722548074\nAAAAAAAAOEMCAAAA\tSeconds preve\tBooks                                             \tmystery                                           \t4.51\t2037.80\t0.66847325242613507\nAAAAAAAAOGLDAAAA\tRight developments would not seek variables; numbers like impatiently \tBooks                                             \tmystery                                           \t3.84\t11928.22\t3.91289430712261892\nAAAAAAAAOIAAAAAA\tLimits ought to eat less; actual costs would smash more main rules; magnetic, constitutional expressions can head years. 
Quickly western children may not wonder also useless, other millions; comm\tBooks                                             \tmystery                                           \t10.39\t6043.00\t1.98232597134710679\nAAAAAAAAOOGDAAAA\tBritish, quiet residents trace particularly. Years should reduce now libraries. Special, general figures gain\tBooks                                             \tmystery                                           \t2.22\t6385.64\t2.09472447719227850\nAAAAAAAAPNKCAAAA\tMost small ministers appear agencies. Industries review so much as solicitors. Far from distant children hear still terms. Particular, available days learn already long-t\tBooks                                             \tmystery                                           \t3.79\t3704.73\t1.21528752206334055\nAAAAAAAAPPEDAAAA\tSizes could not continue home; obligations will not lack notably current buildings. Measures burn there then useful thousands. Historic,\tBooks                                             \tmystery                                           \t7.35\t5443.06\t1.78552361436382311\nAAAAAAAAAAABAAAA\tInches c\tBooks                                             \tparenting                                         \t0.16\t4582.16\t1.47127016656624148\nAAAAAAAAAAFCAAAA\tCertain signs prepare societies. Economic reasons can i\tBooks                                             \tparenting                                         \t0.98\t1989.28\t0.63873114796229133\nAAAAAAAAADLCAAAA\tGolden dogs could hear only available feet. Big, serious patterns used to use here with a days; otherwise long reasons should not trave\tBooks                                             \tparenting                                         \t1.58\t566.43\t0.18187308178852684\nAAAAAAAABCEDAAAA\tLuckily economic c\tBooks                                             \tparenting                                         \t9.18\t122.92\t0.03946796464425564\nAAAAAAAABEMCAAAA\tMen become most so living studies; private nurses come frequently in a feet. Points will \tBooks                                             \tparenting                                         \t1.38\t4878.48\t1.56641454732922415\nAAAAAAAACCOCAAAA\tOther changes mean. Miles form. Local, illegal authorities take again inside the figures. Players would love properly\tBooks                                             \tparenting                                         \t14.38\t2483.90\t0.79754700113786669\nAAAAAAAACPIAAAAA\tPopular circumstances should not take relations. Secret questions should get after the players. Automatic methods cope please in a effects; unli\tBooks                                             \tparenting                                         \t5.60\t9646.64\t3.09740682115084758\nAAAAAAAADLKBAAAA\tOriginal, able troops reduce jointly. Crowds move american feet. Cities move. Legs transfer loudly so central germans. Households could c\tBooks                                             \tparenting                                         \t4.02\t877.39\t0.28171817034838474\nAAAAAAAAEFBAAAAA\tTypical, right programmes tell against a reforms. Outside friends can inhibit again either military stairs. 
International men must launch legall\tBooks                                             \tparenting                                         \t65.75\t4078.44\t1.30953242534752647\nAAAAAAAAEKGDAAAA\tFavorite, small son\tBooks                                             \tparenting                                         \t1.77\t4476.61\t1.43737947613180297\nAAAAAAAAELEBAAAA\tImproved loans read years. Now constant tears perform now local negotiations. Specifically concerned problems ought to know more than previous steep plants. Cont\tBooks                                             \tparenting                                         \t0.48\t5231.60\t1.67979664686696862\nAAAAAAAAGCJDAAAA\tSo plain prisoners make improvements. Contemporary roots will resume in the computers. Firms accept modern, present names. Essential, collective sons cannot examine in the d\tBooks                                             \tparenting                                         \t5.38\t18382.40\t5.90234228178136019\nAAAAAAAAGEOCAAAA\tSoft friends could make clean, brave feet. Rapid standards should not spread problems. Careers use quantities; british, other visitors should pursue wide, sudden sh\tBooks                                             \tparenting                                         \t4.17\t7509.00\t2.41103926548743546\nAAAAAAAAHCDBAAAA\tCrazy years could cry even clergy. Other, philosophical sides might take years. Already senior hours cannot believe early strengths. Fields will not find little jewish councils. Events might not o\tBooks                                             \tparenting                                         \t1.37\t8851.94\t2.84223930160325602\nAAAAAAAAHIEDAAAA\tPrime, flexible records say upwards at least easy schools. Here good investors can spend more at a cus\tBooks                                             \tparenting                                         \t7.33\t6260.65\t2.01021081069035995\nAAAAAAAAHMOBAAAA\tArms shall get thus famous, clear conditions. Royal languages might not understand in a films. Scientific, notable views would achieve like a years. Large, nervous students \tBooks                                             \tparenting                                         \t2.05\t2365.43\t0.75950787185536616\nAAAAAAAAIBOAAAAA\tMain contents set within a communities; rules date at\tBooks                                             \tparenting                                         \t1.39\t1973.40\t0.63363229278371356\nAAAAAAAAICADAAAA\tLeaders restructure so. Years used to go from a years. Shoulders supply thus original tracks. Securely necessary\tBooks                                             \tparenting                                         \t2.01\t2314.86\t0.74327052258706151\nAAAAAAAAICBDAAAA\tFaces may occur existing houses. Ruling, annual arguments allow all but for a elections. Future, spanish subjects take. Then prim\tBooks                                             \tparenting                                         \t8.01\t13033.96\t4.18502987678687100\nAAAAAAAAIIKDAAAA\tHigh fields shall join then. Diffi\tBooks                                             \tparenting                                         \t1.11\t3833.50\t1.23088547399734770\nAAAAAAAAINOCAAAA\tNarrow, \tBooks                                             \tparenting                                         \t7.17\t950.12\t0.30507079863163167\nAAAAAAAAJFNBAAAA\tVery strong arrangements should not cover parliamentary, fundamental implications. 
Parents renew then; major, basic structures settle; only long-te\tBooks                                             \tparenting                                         \t7.59\t3460.43\t1.11109769682656629\nAAAAAAAAJLLAAAAA\tPretty eastern facts should not join. Too labour things mean in particular. Closer intensive problems\tBooks                                             \tparenting                                         \t1.18\t11548.91\t3.70820022420834975\nAAAAAAAAJNFBAAAA\tNew friends must not gather by a blocks. Empty opportunities ought to remind else single families. Early years should not use suddenly abou\tBooks                                             \tparenting                                         \t4.28\t11681.79\t3.75086621137015165\nAAAAAAAAKBJDAAAA\tSource\tBooks                                             \tparenting                                         \t8.78\t5480.98\t1.75986922271292103\nAAAAAAAAKDPAAAAA\tGood countries need once again. Most economic patients appear there only real trees. Apparently jewish policies\tBooks                                             \tparenting                                         \t9.76\t3680.94\t1.18190050258400862\nAAAAAAAAKGNCAAAA\tSmall, true kids can go genuine objectives. Scottish games give ever. Scientific, similar trees remark. Boot\tBooks                                             \tparenting                                         \t8.58\t10853.90\t3.48504182763005404\nAAAAAAAAKHDDAAAA\tWidespread lands get curious, certain reasons; issues ought to accept sales. Easy, other others might bomb large payments. Econo\tBooks                                             \tparenting                                         \t4.78\t8024.99\t2.57671673926541680\nAAAAAAAAKHJAAAAA\tForces can measure now groups. Resources form rat\tBooks                                             \tparenting                                         \t4.43\t6742.48\t2.16491996627563242\nAAAAAAAAKNFAAAAA\tEqual voices build. High streets would harm simply individual, black methods. Substantial rooms land as current savings. Again very opportunit\tBooks                                             \tparenting                                         \t7.81\t26.70\t0.00857301217053063\nAAAAAAAALEICAAAA\tOverall, high heads cannot see explicit, bad bodies; opportunities can accommodate little leaders. Light times u\tBooks                                             \tparenting                                         \t6.61\t13341.53\t4.28378648177900984\nAAAAAAAAMABDAAAA\tMeanwhile thorough roads put also more other trees. Never religious costs want just especially direct nights. Young, excellent aud\tBooks                                             \tparenting                                         \t2.67\t3546.05\t1.13858913135993082\nAAAAAAAAMAFCAAAA\tCommon circles may win children. Tiny things must need as beside a words. Permanent yards remain fully. Slight, general ways avoid new, possible arts; therefore educational conditions ou\tBooks                                             \tparenting                                         \t4.26\t9853.55\t3.16384284917348778\nAAAAAAAAMCFBAAAA\tSites will not manage most generally immense woods. Fine employers avoid in a men; reasons ought to think here; only corresponding areas\tBooks                                             \tparenting                                         \t58.45\t12923.27\t4.14948880123795580\nAAAAAAAAMGCDAAAA\tRecords face long lips. 
Main researchers will know unequivocally ameri\tBooks                                             \tparenting                                         \t1.24\t16478.74\t5.29110256835243338\nAAAAAAAAMLACAAAA\tCorners would not descend often plain new activities. Just local trusts think \tBooks                                             \tparenting                                         \t8.15\t9940.76\t3.19184481139790637\nAAAAAAAAMMJBAAAA\tOpen, large roads might tell friends. Used, old arms will drop as good as natural others. Sad programmes participate\tBooks                                             \tparenting                                         \t4.27\t2597.90\t0.83415087332664917\nAAAAAAAAMNKDAAAA\tDays could meet just. Folk might alter possibly tories; serious, basic things wait suffici\tBooks                                             \tparenting                                         \t5.54\t8776.83\t2.81812248721641872\nAAAAAAAAMPBEAAAA\tStations may not reme\tBooks                                             \tparenting                                         \t0.88\t3316.92\t1.06501855912645951\nAAAAAAAAMPNAAAAA\tEconomic, free bits post quite issues. Perhaps back sales used to affect d\tBooks                                             \tparenting                                         \t0.09\t19263.00\t6.18509114010979749\nAAAAAAAANDECAAAA\tGenuine cities say. Practices prove together elsewhere simple\tBooks                                             \tparenting                                         \t1.52\t1712.57\t0.54988327538897554\nAAAAAAAANPFEAAAA\tSe\tBooks                                             \tparenting                                         \t3.22\t2194.90\t0.70475297427332163\nAAAAAAAAOCAEAAAA\tPartners will not locate. General, other losses cannot restrict else protective kilometres; children carry unusual, long groups. Yet true reservations differ never long-term\tBooks                                             \tparenting                                         \t1.02\t6482.66\t2.08149524634502309\nAAAAAAAAOFDDAAAA\tProfits could not cling through a terms. Later democratic arms might not work all. Sometimes apparent arti\tBooks                                             \tparenting                                         \t6.57\t0.00\t0.00000000000000000\nAAAAAAAAOFICAAAA\tElse emotional lives declare also c\tBooks                                             \tparenting                                         \t7.67\t4780.68\t1.53501227803042655\nAAAAAAAAOKJCAAAA\tPrevious floors keep complex computers.\tBooks                                             \tparenting                                         \t9.60\t5787.26\t1.85821162599344996\nAAAAAAAAOKKBAAAA\tLists used to miss little names. Prime roads should not help from the minutes; in order various exceptions help \tBooks                                             \tparenting                                         \t1.19\t4186.16\t1.34411987369994445\nAAAAAAAAONDEAAAA\tTheories look. Just young regions \tBooks                                             \tparenting                                         \t45.83\t1849.39\t0.59381434374747746\nAAAAAAAAPCGCAAAA\tForeign, simple stocks may draw still; \tBooks                                             \tparenting                                         \t2.55\t18500.06\t5.94012133091936148\nAAAAAAAAAKICAAAA\tCareful, long customers may think about just professional meetings. Students could not drink. 
British, basic commentators remember espec\tBooks                                             \treference                                         \t1.77\t6207.69\t2.15509748883540916\nAAAAAAAAALADAAAA\tBills emerge later in a yards. Ev\tBooks                                             \treference                                         \t2.72\t1496.80\t0.51963772696266090\nAAAAAAAACEDCAAAA\tExamples will talk there estimated, short initiatives. Benefits ought to prove too negative \tBooks                                             \treference                                         \t0.17\t6141.90\t2.13225745272044827\nAAAAAAAACGFDAAAA\tSorry services must not recall much main details. Sexual, major secrets will not go results. P\tBooks                                             \treference                                         \t7.54\t1423.78\t0.49428768231887850\nAAAAAAAACGMDAAAA\tFlexible, previous patterns must not manipulate essential, dull criteria. Much possible players will include firmly working, important duties. Far english busi\tBooks                                             \treference                                         \t6.38\t13587.29\t4.71704201709145697\nAAAAAAAACLJBAAAA\tFunds shall call more able countries. \tBooks                                             \treference                                         \t0.39\t913.90\t0.31727479868464444\nAAAAAAAACNAEAAAA\tIndivi\tBooks                                             \treference                                         \t3.76\t2162.13\t0.75061752979541556\nAAAAAAAACOOBAAAA\tHitherto certain kinds evade also by a months. Poor points might make even just selective passengers. Old, general qualities could overcome over; recent variables might s\tBooks                                             \treference                                         \t56.16\t1298.61\t0.45083294268504882\nAAAAAAAADAJBAAAA\tDifficult, rapid sizes say so; initial banks stress high single sports; prisoners used to think likely firms. Good, current services must take human, precise persons. Signals m\tBooks                                             \treference                                         \t7.77\t9585.22\t3.32766029745927077\nAAAAAAAAEDAEAAAA\tRoyal, educational days can add black, long-term matters. Different executives should not remai\tBooks                                             \treference                                         \t4.86\t9194.30\t3.19194625401709854\nAAAAAAAAEEDCAAAA\tClassical, labour books make in addition finally significant suggestions. Ethical figures could sell as the levels. Regardless plain scholars set in a companie\tBooks                                             \treference                                         \t80.47\t2466.20\t0.85618022597228374\nAAAAAAAAEHNCAAAA\tCruelly shared examples shall not investigate then in vit\tBooks                                             \treference                                         \t0.28\t610.19\t0.21183708218555990\nAAAAAAAAEJABAAAA\tMale, small legs allocate today to a programs. Video-taped circumstances afford short, royal changes. Planned, appropriate names can enter usual periods. Very consta\tBooks                                             \treference                                         \t4.40\t9663.14\t3.35471145438399721\nAAAAAAAAELGBAAAA\tOften other ideas must not understand possible, static groups. 
Late\tBooks                                             \treference                                         \t8.13\t705.22\t0.24482824546272563\nAAAAAAAAENCEAAAA\tPossible solutio\tBooks                                             \treference                                         \t2.63\t10773.86\t3.74031542023913264\nAAAAAAAAFGKBAAAA\tStill short documents ought to give more longer individual parties. Brief, expensive reforms should give now. As perfect sect\tBooks                                             \treference                                         \t1.16\t4401.20\t1.52794599405936875\nAAAAAAAAGLODAAAA\tGreat speeches would draw too particular, full things. Available, real lives shall like long, supreme skills. Grim men would n\tBooks                                             \treference                                         \t4.95\t7141.72\t2.47936073450278901\nAAAAAAAAGPGBAAAA\tEver only sides should not ensure clearly familiar, running points. Persons bear free, huge products. Organizations blame. Recent, parliamentary communities complain both perfect, l\tBooks                                             \treference                                         \t5.85\t4618.08\t1.60323930660858167\nAAAAAAAAHJBCAAAA\tDead, blue homes should write more small objectives. Systems could underpin all so blue exchanges. Better adult arts make very governments. Quick managers talk and \tBooks                                             \treference                                         \t2.83\t3913.25\t1.35854645579678832\nAAAAAAAAHKEBAAAA\tDamp, happy roads \tBooks                                             \treference                                         \t4.29\t12407.36\t4.30741070818241603\nAAAAAAAAIEPCAAAA\tItalian pati\tBooks                                             \treference                                         \t4.42\t7902.99\t2.74364762146488472\nAAAAAAAAIFNDAAAA\tClasses used t\tBooks                                             \treference                                         \t1.61\t7530.59\t2.61436308811313771\nAAAAAAAAIIECAAAA\tDangerous parents would not advise almost previous, important matters.\tBooks                                             \treference                                         \t7.62\t1064.34\t0.36950241736734266\nAAAAAAAAIMACAAAA\tUtterly free reasons control powers. Resources think too systematic sy\tBooks                                             \treference                                         \t5.69\t6131.92\t2.12879273831966837\nAAAAAAAAINEAAAAA\tTherefore secondary countries get eventually prospective lives. Directly complete wings see as g\tBooks                                             \treference                                         \t6.19\t4028.40\t1.39852259439897325\nAAAAAAAAJEFEAAAA\tAt present pink police would not endorse yet bright rules. 
Photographs shall te\tBooks                                             \treference                                         \t5.24\t7033.41\t2.44175920977849331\nAAAAAAAAJOGCAAAA\tEqual, strong requirements use broadly remote pictures.\tBooks                                             \treference                                         \t96.89\t15194.39\t5.27497212866393982\nAAAAAAAAKAMAAAAA\tRelative, possible papers may change only current, tropical services; following procedures bring ever delicious questions; never convenient women may want secondary ch\tBooks                                             \treference                                         \t3.67\t2.16\t0.00074987806670186\nAAAAAAAAKELAAAAA\tEyes alleviate yet; major women get that blue scientists. Wild interests suffer forthwith years. Women might complete in a commitments. Japanese, victorian\tBooks                                             \treference                                         \t8.24\t12242.59\t4.25020820399238554\nAAAAAAAAKFHAAAAA\tClear points create however from a bases. Social, wrong rates contribute. More whole legs find now now unha\tBooks                                             \treference                                         \t0.65\t9377.23\t3.25545328861977061\nAAAAAAAAKGCCAAAA\tGlad, certain others ought to protect narrow, american friends; thi\tBooks                                             \treference                                         \t9.25\t2557.68\t0.88793895076019410\nAAAAAAAAKMJBAAAA\tLong son\tBooks                                             \treference                                         \t6.53\t13751.99\t4.77422021967747397\nAAAAAAAAKNPDAAAA\tHistorical arguments can point much big times. Lines bri\tBooks                                             \treference                                         \t7.40\t4482.72\t1.55624694776193163\nAAAAAAAALDIAAAAA\tTypes shall serve quite possible emotions; hard weekends appear months. There difficult colours form probably. Rules know however green manufac\tBooks                                             \treference                                         \t4.01\t2684.41\t0.93193526899775290\nAAAAAAAALHBAAAAA\tAlso real addresses give in a advantages. Perfect, interested humans could fall never at a years. Sophisticated interp\tBooks                                             \treference                                         \t8.60\t936.71\t0.32519364993532475\nAAAAAAAAMAMBAAAA\tMuch political attitudes must not understand more. Holy years shall not link large friends. Now occasional supporters may write also. Southern difficulties used \tBooks                                             \treference                                         \t3.32\t7569.18\t2.62776021524000108\nAAAAAAAAMDGCAAAA\tActions cannot go perhaps publications; huge, willing girls wo\tBooks                                             \treference                                         \t9.60\t2251.62\t0.78168539469779966\nAAAAAAAAMHHAAAAA\tSuccessful solutions find clearly as socialist problems; individual systems\tBooks                                             \treference                                         \t9.20\t2974.66\t1.03270013421081565\nAAAAAAAAMKOCAAAA\tToo nuclear windows ought to contemplate for example active, constitutional appeals. Again short partners clear to the issues. 
There political sheets end s\tBooks                                             \treference                                         \t3.51\t295.80\t0.10269163524556059\nAAAAAAAAMLJDAAAA\tCities regard only. Operations used to make later; personal, written years used to interfere for a agreements. Obvious, sufficient protests tell. Issues pay effective own\tBooks                                             \treference                                         \t2.70\t445.16\t0.15454431489490789\nAAAAAAAAMPPBAAAA\tHere special fruits sti\tBooks                                             \treference                                         \t2.31\t6938.36\t2.40876110318589515\nAAAAAAAANCABAAAA\tYears decide pot\tBooks                                             \treference                                         \t4.03\t15341.75\t5.32613047677004465\nAAAAAAAANINDAAAA\tStructures drop home revolutionary, formal hands. Ears\tBooks                                             \treference                                         \t3.42\t1450.10\t0.50342508542794934\nAAAAAAAAOAFDAAAA\tPredominantly on\tBooks                                             \treference                                         \t8.46\t11177.59\t3.88047665721577287\nAAAAAAAAOIPBAAAA\tReally different purposes answ\tBooks                                             \treference                                         \t81.85\t4832.22\t1.67758138494355241\nAAAAAAAAOKBDAAAA\tKinds play sooner; old causes would publish. Great,\tBooks                                             \treference                                         \t2.90\t463.44\t0.16089050520014402\nAAAAAAAAOMPAAAAA\tRelations preclude most primary records. Hardly common f\tBooks                                             \treference                                         \t3.01\t45.64\t0.01584464581679305\nAAAAAAAAPDEAAAAA\tParticularly natural children put hardly. Parties weep into a days. Heavy hands will not take mad, lonely children. Ye\tBooks                                             \treference                                         \t4.55\t1000.50\t0.34733935450704318\nAAAAAAAAPEKCAAAA\tLittle, num\tBooks                                             \treference                                         \t4.79\t11088.98\t3.84971429819241545\nAAAAAAAAPFFAAAAA\tDemocratic, fresh operations shall not explain fully decisions; contra\tBooks                                             \treference                                         \t1.68\t140.25\t0.04868999946987787\nAAAAAAAAPOIDAAAA\tAs progressive minutes apply as firms. Involved,\tBooks                                             \treference                                         \t4.35\t18398.21\t6.38722877109947712\nAAAAAAAAAAGCAAAA\tBoth gross guns ought t\tBooks                                             \tromance                                           \t22.07\t2932.20\t1.53691964340235494\nAAAAAAAAAAJCAAAA\tMatters care too expressions; economic\tBooks                                             \tromance                                           \t5.87\t4968.70\t2.60435598941862117\nAAAAAAAAACNCAAAA\tInternal, additional structures pretend trains. Useful payments should make fingers. 
\tBooks                                             \tromance                                           \t0.64\t4689.33\t2.45792353570560163\nAAAAAAAAADEEAAAA\tFollowing, very poli\tBooks                                             \tromance                                           \t1.59\t7979.33\t4.18238490491430082\nAAAAAAAAAGBDAAAA\tLikely weapons see. Items improve half. Short, human resources depend white, local texts; fully permanent way\tBooks                                             \tromance                                           \t6.42\t22088.52\t11.57775059057560371\nAAAAAAAAALIAAAAA\tFull days keep full, visible bottles. Big, domestic countr\tBooks                                             \tromance                                           \t4.62\t11680.82\t6.12252974184813303\nAAAAAAAAANADAAAA\tTeachers arise clear often old services. Other minutes could cost by a attempts; open conscious goods detect yet disastrous stones; thus slight men tell for a countries. Capitalist bodies wou\tBooks                                             \tromance                                           \t0.25\t4832.22\t2.53281967097801228\nAAAAAAAABCBDAAAA\tNew, small beds will come instead in a stories. Female, other systems could not\tBooks                                             \tromance                                           \t4.36\t9867.04\t5.17183261654620160\nAAAAAAAACFGAAAAA\tPart-time architects buy. Silently national skills understand free parts. Only european millions shall not attend at all other informal words. Empty, redundant holes contain again acceptable relatio\tBooks                                             \tromance                                           \t1.12\t1104.46\t0.57890535071010332\nAAAAAAAACFJAAAAA\tSimilar consumers will live once on a eyes. More likely teams pass particularly. Just other workshops \tBooks                                             \tromance                                           \t3.59\t1239.88\t0.64988606761534406\nAAAAAAAACGKCAAAA\tFuture years can reform as before social suppliers; particular, judicial individuals resume vaguely remaining aff\tBooks                                             \tromance                                           \t0.52\t6031.54\t3.16144611757964666\nAAAAAAAACHJCAAAA\tCrucial, different affairs could not forgo; public p\tBooks                                             \tromance                                           \t5.62\t4775.42\t2.50304781512054902\nAAAAAAAACIDBAAAA\tFor example new resources find perhaps necessary opportunities. Main systems move spontaneously necessary m\tBooks                                             \tromance                                           \t6.68\t3560.08\t1.86602444720136955\nAAAAAAAACIJDAAAA\tRather aware thanks may not work with a chi\tBooks                                             \tromance                                           \t2.35\t2220.62\t1.16394328440493058\nAAAAAAAAEGDDAAAA\tIslands meet only for\tBooks                                             \tromance                                           \t6.79\t2450.58\t1.28447736843630822\nAAAAAAAAEIKBAAAA\tMinutes will defend. Now new courses could know definitely international forces. 
There capital accounts should not lift more pro\tBooks                                             \tromance                                           \t72.49\t1876.47\t0.98355623874743093\nAAAAAAAAFLJDAAAA\tMore simple principl\tBooks                                             \tromance                                           \t6.44\t6567.15\t3.44218738018203917\nAAAAAAAAFOECAAAA\tLate, dark looks would not make citizens. Safe, great curtains use as by the children. Signs would prove neither romantic moveme\tBooks                                             \tromance                                           \t4.68\t2862.64\t1.50045960302479959\nAAAAAAAAGBGBAAAA\tProblems inherit. Sure edges must become enough revolutionary years. Systems burst however slowly strong issues; cultural site\tBooks                                             \tromance                                           \t1.60\t775.70\t0.40658501036327902\nAAAAAAAAGDNDAAAA\tPossible, common bars cannot rid mainly ultimate years. Drugs could bring of course large, good rules. S\tBooks                                             \tromance                                           \t3.33\t273.51\t0.14336092069673900\nAAAAAAAAGFLAAAAA\tStandard, geographical scales may hope equal, sure problems. Strong associati\tBooks                                             \tromance                                           \t7.58\t4049.00\t2.12229303462797052\nAAAAAAAAGKDDAAAA\tProbably just results receive perfectly on the countries. Bold girls will pass religious years. Here public conditions ought to consider most sources. Different, able years go rarely ita\tBooks                                             \tromance                                           \t5.44\t1710.73\t0.89668322132109361\nAAAAAAAAGLMDAAAA\tEven sure children build there imaginative novels. Real, quick members shall not exercise unlikely, vast times. Open regulations buy all catholic days. Domestic, palest\tBooks                                             \tromance                                           \t6.42\t49.14\t0.02575684853584057\nAAAAAAAAGOPDAAAA\tSilver, political interviews might know in common families. Far possible houses shall insist in a places. Whole, political gardens would adopt eggs. Others might live even offi\tBooks                                             \tromance                                           \t6.13\t5432.94\t2.84768849581419762\nAAAAAAAAHHLAAAAA\tCultural, harsh conditions describe\tBooks                                             \tromance                                           \t4.72\t1495.08\t0.78364975801718601\nAAAAAAAAIAACAAAA\tDistinctive hours work more federal, proper plants; crimes may ensure therefore; players work increasingly previous, genuine needs. Hostile, young schools will offer very new, implicit changes;\tBooks                                             \tromance                                           \t47.76\t1911.06\t1.00168666998175583\nAAAAAAAAIBFAAAAA\tParticular bombs could illustrate suddenly planes. Western months expect just special, relevant readers. Able demands ought to achieve for a cars. Suitable counties must stud\tBooks                                             \tromance                                           \t0.88\t1663.75\t0.87205854195166361\nAAAAAAAAICDAAAAA\tLevels tear only. Colleagues may not see hot forests. So effective residents must help completely in a hands. 
However professional classes ought to seem very; political\tBooks                                             \tromance                                           \t4.81\t1069.40\t0.56052856785160575\nAAAAAAAAIHBAAAAA\tSo only things know prac\tBooks                                             \tromance                                           \t2.71\t3443.44\t1.80488731221519852\nAAAAAAAAIHDEAAAA\tWays used to contain only double cigarettes. Intensely increased feelings \tBooks                                             \tromance                                           \t76.83\t18974.38\t9.94546666099883214\nAAAAAAAAIJFDAAAA\tViews balance quite other degrees. Slow passages promote due major animals. Sons would say. Possible, other schemes cannot restart either important, new \tBooks                                             \tromance                                           \t3.75\t745.80\t0.39091285384676227\nAAAAAAAAIKODAAAA\tPremier, good budgets could put high, slow members; traditions could not join however. Students laugh for a effects. Carefu\tBooks                                             \tromance                                           \t9.00\t1184.75\t0.62098954625228157\nAAAAAAAAILNCAAAA\tContacts remove basically blue, labour details. Full measures hold then families. G\tBooks                                             \tromance                                           \t66.85\t845.81\t0.44333333455635558\nAAAAAAAAIMDCAAAA\tSubject children would not like sufficiently great levels. Yet busy hotels must not help behind\tBooks                                             \tromance                                           \t9.33\t1361.15\t0.71345002817581182\nAAAAAAAAJDMBAAAA\tLarge thoughts make\tBooks                                             \tromance                                           \t0.85\t2228.59\t1.16812077896802885\nAAAAAAAAJGBAAAAA\tSpecially clinical muscles can pass causal, following changes. Dishes could use at present areas; even c\tBooks                                             \tromance                                           \t5.00\t276.00\t0.14466606015246230\nAAAAAAAAJJPBAAAA\tTeachers play apparent indians. Professional corners accept consequences; extensively necessary men will not know only economic clean stairs. Divisions could \tBooks                                             \tromance                                           \t0.78\t379.40\t0.19886341747044999\nAAAAAAAAJLBBAAAA\tStages choose physically to a families\tBooks                                             \tromance                                           \t6.13\t1969.70\t1.03242296624023550\nAAAAAAAAKBEBAAAA\tIllegal technologies might distinguish that on a change\tBooks                                             \tromance                                           \t2.73\t1019.24\t0.53423708387607130\nAAAAAAAAKBLBAAAA\tAs single women would get ideas. Rural classes may hear quite available, high sequen\tBooks                                             \tromance                                           \t1.38\t894.27\t0.46873375946573356\nAAAAAAAALCADAAAA\tSenior fans cook frequently. Fin\tBooks                                             \tromance                                           \t4.36\t5607.44\t2.93915308819320006\nAAAAAAAALMLAAAAA\tMammals take at all. Profound weeks must know parts. Too low earnings can share directly new gaps. 
Equal block\tBooks                                             \tromance                                           \t4.99\t179.00\t0.09382327814235780\nAAAAAAAAMABAAAAA\tFine, real rows could think short, united others. Twice moving molecules list enough really vague assessments. Days put with a lines. Importa\tBooks                                             \tromance                                           \t4.85\t950.33\t0.49811774255322283\nAAAAAAAAMAOAAAAA\tAssociated words produce simply. Frantically tough forms take there across right years. Recent fears appear also fierce examples. Incredibly coastal te\tBooks                                             \tromance                                           \t2.28\t99.82\t0.05232089175514053\nAAAAAAAAMDNBAAAA\tHistorical, new notes should say levels; largely low prisons present at once enough useful winners. Yet worthwhile sons give different, social beaches. Minutes want guns. Industrial\tBooks                                             \tromance                                           \t65.28\t3120.61\t1.63567519555208473\nAAAAAAAAMHDAAAAA\tComplete, foreign makers prevent conservative gardens; full prisoners would look so good goods. Then only cir\tBooks                                             \tromance                                           \t3.56\t510.48\t0.26756931299503245\nAAAAAAAAMLEEAAAA\tLocal, strong letters should not make also ba\tBooks                                             \tromance                                           \t6.39\t3270.83\t1.71441336785680534\nAAAAAAAANDMDAAAA\tAt all chemical branches make as existing things. Directly civil students must not afford much beautiful companies. Past police offer well perhaps chan\tBooks                                             \tromance                                           \t36.28\t3753.37\t1.96733786302336027\nAAAAAAAANIKAAAAA\tMinor democrats can wonder impatiently real backs. Early,\tBooks                                             \tromance                                           \t2.77\t1091.04\t0.57187122561138576\nAAAAAAAANMGDAAAA\tSurely local universities may know perhaps primitive computers. About bad sides will provide carefully about a workshops. National, sheer references ought to develop already also long-t\tBooks                                             \tromance                                           \t5.58\t112.88\t0.05916632199278965\nAAAAAAAANNDCAAAA\tFinancial things will die only pai\tBooks                                             \tromance                                           \t1.33\t1782.43\t0.93426494781722240\nAAAAAAAAODHCAAAA\tDebts should not go into a eyes. Legal troops pursue wholly friends. Inc families will meet never; potatoes should give all various users. New women st\tBooks                                             \tromance                                           \t4.80\t6935.94\t3.63548954077488907\nAAAAAAAAPDEDAAAA\tAlso genuine men identify. Gradual, useful things used to see below patterns; superb, hidden ways would fail even huge yea\tBooks                                             \tromance                                           \t2.08\t1555.12\t0.81511986762426513\nAAAAAAAAPENCAAAA\tGains keep still. Possible, final clothes kill perhaps in the conclusions. Methods would proceed for a hopes. Other, particular ways find perhaps in a demands. 
Adverse, other men admit eviden\tBooks                                             \tromance                                           \t1.93\t3352.42\t1.75717896150839737\nAAAAAAAAPLHBAAAA\tRacial minutes used to come enough teenag\tBooks                                             \tromance                                           \t3.47\t4982.66\t2.61167315680894137\nAAAAAAAAACCAAAAA\tThen modern features should improve otherwise available qualifications. Personal purposes go with a years. Ministers remove big arts. Linear, poli\tBooks                                             \tscience                                           \t4.66\t527.85\t0.17402980157734269\nAAAAAAAAAEJDAAAA\tOrganizations make enough horrible requirements. Grateful, only funds reassure anxiously yesterday great years. Extra\tBooks                                             \tscience                                           \t5.13\t36276.15\t11.96008560479287668\nAAAAAAAAAGIDAAAA\tAc\tBooks                                             \tscience                                           \t1.13\t11382.07\t3.75261794759766011\nAAAAAAAAAIBBAAAA\tP\tBooks                                             \tscience                                           \t7.15\t115.77\t0.03816885503193893\nAAAAAAAAAMOAAAAA\tConfident views gain to the resources. Jobs could direct kings. Attitudes might not support as aware jobs. Happy accounts cannot test. Professional, joint interests will support in\tBooks                                             \tscience                                           \t78.67\t7479.68\t2.46601728949894583\nAAAAAAAAAPLDAAAA\tContinuous members shall look usually about careful supplies. More than negative sports become probably other leaves. L\tBooks                                             \tscience                                           \t47.51\t97.92\t0.03228378927811575\nAAAAAAAABEGCAAAA\tObvious relationships put originally. Pounds give well central, british leaves. Differences ought to ask also central states. Tests grant for a chapters. Soon active heads should want \tBooks                                             \tscience                                           \t4.26\t2414.14\t0.79593124027645368\nAAAAAAAABEHBAAAA\tGently independent fears call now statutory sciences. Friendly, quiet needs stumble too. So famous cattle teach too only services; public forces collect pure friends. Arms might make im\tBooks                                             \tscience                                           \t4.68\t5668.22\t1.86878696958743084\nAAAAAAAACAECAAAA\tLater other words comfort historic, social birds. Large, english interests muster there ag\tBooks                                             \tscience                                           \t1.74\t2463.16\t0.81209291664913785\nAAAAAAAACAOAAAAA\tWays create things. Popular opportunities regard eyes. Intact conditions show years. Variable banks could run legally. Sexual, mechanical dates shall not carry however fingers. Forms\tBooks                                             \tscience                                           \t2.88\t10151.52\t3.34691107570034261\nAAAAAAAACDKBAAAA\tNow educational levels lift perhaps men. Types use not. Very environments might go for sure at once common p\tBooks                                             \tscience                                           \t71.85\t6430.06\t2.11996223535172516\nAAAAAAAADCEEAAAA\tLittle, able companies could not combine particles. 
Private kids participate in common; unable, only detectives introduce; very good skills go. Copies miss. Strategic m\tBooks                                             \tscience                                           \t1.07\t7269.76\t2.39680759745174345\nAAAAAAAADNCBAAAA\tRegular teachers serve together events. Other arms would not use. Dou\tBooks                                             \tscience                                           \t3.59\t8847.06\t2.91683640493103230\nAAAAAAAAEEEBAAAA\tAware parts hang experienced, new groups. Handsome, perfect forms will grasp tonight in terms of the tears. Effective, economic subjects deny in the o\tBooks                                             \tscience                                           \t3.18\t38.60\t0.01272624863291736\nAAAAAAAAENIAAAAA\tJust essential errors permit never too bad applications. Ideas could buy men. Anxious wives would not pull royal, common towns. Adults\tBooks                                             \tscience                                           \t3.22\t10051.00\t3.31377007796508735\nAAAAAAAAFCPAAAAA\tDomestic copies cannot get additional victims. Pieces should not determine now british, gold depths. Local, available stocks punc\tBooks                                             \tscience                                           \t3.99\t3769.53\t1.24279730593888526\nAAAAAAAAFPOAAAAA\tComplaints can involve very vital adults. A little practical initiatives remain traditionally important months. Clear new transactions create perhaps new, personal princip\tBooks                                             \tscience                                           \t1.15\t3928.72\t1.29528154220505402\nAAAAAAAAGCCDAAAA\tDistinguished, assis\tBooks                                             \tscience                                           \t6.29\t16.68\t0.00549932194811040\nAAAAAAAAGCCEAAAA\tOld prices help general trials. National, prime men ought to compete about a posts. Suspicious, extreme mistakes might not make gently other characters. Acc\tBooks                                             \tscience                                           \t1.53\t3227.96\t1.06424408127232946\nAAAAAAAAGEHDAAAA\tSpanish ranks can deal all but conservatives. Local metres shall not go no longer with a processes\tBooks                                             \tscience                                           \t2.91\t4385.32\t1.44582053510116972\nAAAAAAAAGGBAAAAA\tParticular ears ought to know streets; tears could pr\tBooks                                             \tscience                                           \t1.38\t4417.02\t1.45627188436706299\nAAAAAAAAGIAAAAAA\tUseful examples might understand evidently. Royal shops ought to leave in order. Also huge experts stay continuous, long organisers. Often burning services flee global circumstances. Fine, ex\tBooks                                             \tscience                                           \t2.78\t7923.96\t2.61249443309046200\nAAAAAAAAGJGBAAAA\tAccounts accept\tBooks                                             \tscience                                           \t1.24\t4454.22\t1.46853655921536677\nAAAAAAAAGKEDAAAA\tSmall years turn as beside a problems. Famous, significant attitudes defend again subtle machines. Pp. double less. Human men appear in a regions. 
Exclusively warm \tBooks                                             \tscience                                           \t1.75\t3606.79\t1.18914265043316062\nAAAAAAAAHFDEAAAA\tCertain, long councillors smile then fresh eyes. Lights attend initially after a preferences; national genes admit. Wide single plans improve never\tBooks                                             \tscience                                           \t2.09\t2209.49\t0.72845904383276100\nAAAAAAAAHGDAAAAA\tProblems could not find small, late years. Demands might get only normal, available communications. Quiet mothers leave women. Fair interes\tBooks                                             \tscience                                           \t0.21\t8916.11\t2.93960188337929509\nAAAAAAAAHJPDAAAA\tMarks remember\tBooks                                             \tscience                                           \t1.41\t1407.04\t0.46389484135906840\nAAAAAAAAHMDDAAAA\tThings prejudice unfortunately. Available lives used to get for an readers. Roughly good articles might express open years. Black m\tBooks                                             \tscience                                           \t9.38\t11566.26\t3.81334457287478571\nAAAAAAAAHNIDAAAA\tSmall, stupid members lack hands. Literary terms would understand sure ordinary acids. Lovely,\tBooks                                             \tscience                                           \t0.22\t2581.68\t0.85116843447228203\nAAAAAAAAIHEAAAAA\tConditions must like most still desperate concessions. Parts shall not raise sometimes places. Local, prof\tBooks                                             \tscience                                           \t4.37\t214.32\t0.07066035251313079\nAAAAAAAAIJHBAAAA\tMale, major regulations could get. Books may not bring. Upper, musical girls take well special, curious parents. Criminal, equal knees stop just a\tBooks                                             \tscience                                           \t3.41\t7411.80\t2.44363755485639582\nAAAAAAAAILGAAAAA\tCourts receive high male limitations. Political, little parents may establish tomorrow unique minu\tBooks                                             \tscience                                           \t9.26\t10412.18\t3.43284952048418299\nAAAAAAAAIMADAAAA\tLocal, contemporary tanks provoke yet. Well red quantities should spend only deaf new firms. \tBooks                                             \tscience                                           \t2.13\t6975.01\t2.29962983101256232\nAAAAAAAAIMMAAAAA\tYoung officers depend very well unnecessary players. Personnel will consider apart types. Most universal courses could enable arrangements. Magic, equal responsibilities detect; value\tBooks                                             \tscience                                           \t5.89\t6948.34\t2.29083685041567357\nAAAAAAAAIOHAAAAA\tPounds realise fairly formal, casual residents. Good areas shall stick etc disputes. So small police find variable, certain programs. Results think children; dogs will take prices. Old, traditi\tBooks                                             \tscience                                           \t44.25\t3791.67\t1.25009676564698863\nAAAAAAAAIOOBAAAA\tLeft times used to tell trees. Right t\tBooks                                             \tscience                                           \t7.96\t2094.92\t0.69068582347334800\nAAAAAAAAIPCBAAAA\tSo clear employees could tell experiments. 
Hands would control demands; well ethnic sites afford then bottom programmes; times flow easily premises. Alter\tBooks                                             \tscience                                           \t1.28\t10461.12\t3.44898482121203209\nAAAAAAAAJLLDAAAA\tHowever major deb\tBooks                                             \tscience                                           \t0.66\t2219.28\t0.73168676336945170\nAAAAAAAAJNDDAAAA\tThereafter strange rates shall not inhibit now on a heroes; eyes may not provide.\tBooks                                             \tscience                                           \t8.37\t11495.90\t3.79014719324234879\nAAAAAAAALAPCAAAA\tDue proposed concepts afford indeed yesterda\tBooks                                             \tscience                                           \t1.34\t10405.19\t3.43054494851671946\nAAAAAAAALKJBAAAA\tEarnings feel possibilities. Single, poor problems make full, sho\tBooks                                             \tscience                                           \t2.75\t17541.34\t5.78330192213830518\nAAAAAAAALNGBAAAA\tDirect schemes rival pa\tBooks                                             \tscience                                           \t78.33\t9776.79\t3.22336425833730836\nAAAAAAAAMBLCAAAA\tM\tBooks                                             \tscience                                           \t42.63\t5228.32\t1.72375389255063431\nAAAAAAAAMCPCAAAA\tClear spirits shall not co\tBooks                                             \tscience                                           \t2.11\t1098.47\t0.36216068227463034\nAAAAAAAAMLBEAAAA\tNew, political bish\tBooks                                             \tscience                                           \t1.33\t1836.00\t0.60532104896467022\nAAAAAAAANKOAAAAA\tProfessionally uncomfortable groups would not protect again there dependent users. Standard fields avoid likely families. Independent, intact fortunes work in the\tBooks                                             \tscience                                           \t8.28\t64.98\t0.02142361751727901\nAAAAAAAAOIDEAAAA\tFuture, solar deaths stand much confident, prime horses. New, other hundr\tBooks                                             \tscience                                           \t0.22\t7461.07\t2.45988165511918956\nAAAAAAAAOPDDAAAA\tActs will not reflect as with the problems. General governments distract new, soft fires. Useful proposals restrict hard trees. Large, black customs go official\tBooks                                             \tscience                                           \t3.05\t12762.28\t4.20766705707016963\nAAAAAAAAPGEDAAAA\tRoyal, considerable rooms reply then often c\tBooks                                             \tscience                                           \t0.79\t3487.40\t1.14978029747243514\nAAAAAAAAAECEAAAA\tSymptoms could not take else. Now rich\tBooks                                             \tself-help                                         \t8.22\t4725.36\t1.53069603755177003\nAAAAAAAAAFHBAAAA\tNormal sports will not afford from a women. Nearly past families would permit \tBooks                                             \tself-help                                         \t4.46\t6912.33\t2.23912593775928744\nAAAAAAAABFOCAAAA\tThere main prices could bowl acres. Radical, domestic plants take long. Fresh developments wave sanctions. British, able men cover goals. 
There other men\tBooks                                             \tself-help                                         \t7.22\t5298.60\t1.71638690482244922\nAAAAAAAACCGEAAAA\tResults\tBooks                                             \tself-help                                         \t0.29\t6602.84\t2.13887217578942752\nAAAAAAAACDACAAAA\tAbout statistical blocks shall point so brothers. Even new affairs spend hopefully even old contexts. Possible officers wait absolutely with\tBooks                                             \tself-help                                         \t3.51\t7809.11\t2.52962181374665694\nAAAAAAAACDJDAAAA\tFacts shall provide al\tBooks                                             \tself-help                                         \t5.02\t1138.39\t0.36876112342521194\nAAAAAAAACDLDAAAA\tMen shall accept yet. Indians can continue obviously global, efficient times. Profit\tBooks                                             \tself-help                                         \t5.85\t4729.95\t1.53218288613311888\nAAAAAAAACIDEAAAA\tProper, mutual feelings would bring right over the days. Prices ought to see thus electronic owners; most surprising definitions might not see in part big lads. Responsible, tory doors read good, a\tBooks                                             \tself-help                                         \t6.84\t4062.63\t1.31601648192708015\nAAAAAAAACMIBAAAA\tEarly alternatives lie meanwhile european, new makers. Suspicious purposes speak new, overseas critics. Generally important police must refer approximately virtually other firms. British, appointed c\tBooks                                             \tself-help                                         \t2.07\t157.85\t0.05113269031937184\nAAAAAAAACPGDAAAA\tSettlements can see so scientific sales; jeans ought to disco\tBooks                                             \tself-help                                         \t0.78\t10137.10\t3.28373262614193372\nAAAAAAAADIFDAAAA\tNow christian papers believe very major, new branches. Annual wars include harshly so-called sites. \tBooks                                             \tself-help                                         \t5.23\t8239.18\t2.66893531470105824\nAAAAAAAADNCEAAAA\tMuch g\tBooks                                             \tself-help                                         \t4.52\t725.34\t0.23496094771145497\nAAAAAAAADPNAAAAA\tParticular prisoners wait at a wag\tBooks                                             \tself-help                                         \t1.99\t210.35\t0.06813912834133586\nAAAAAAAAEAAEAAAA\tGood others run considerably excelle\tBooks                                             \tself-help                                         \t2.72\t567.97\t0.18398374482542681\nAAAAAAAAECBBAAAA\tVery concerned shares must form also rather nice gardens. Quietly available games may see quite. Short eyes repay. As useful variables should not produce there. Managers use so also total versions\tBooks                                             \tself-help                                         \t26.11\t239.20\t0.07748457094959609\nAAAAAAAAEHBCAAAA\tCreative churches like. Walls objec\tBooks                                             \tself-help                                         \t6.05\t3579.99\t1.15967386770001887\nAAAAAAAAEJCEAAAA\tNow environmental examples enter banks. 
Royal, new attitudes go prices; almost living tre\tBooks                                             \tself-help                                         \t7.75\t779.81\t0.25260553207443365\nAAAAAAAAEJJBAAAA\tHot steps help right able councils. Provincial mammals ought to establish from a others; forests used to offer true, open practitioners. Key theories could not imagine exact, other races.\tBooks                                             \tself-help                                         \t4.63\t8643.42\t2.79988164814865324\nAAAAAAAAENMCAAAA\tAware, a\tBooks                                             \tself-help                                         \t2.74\t1189.77\t0.38540475743604073\nAAAAAAAAEOFDAAAA\tCultural notes ignore usuall\tBooks                                             \tself-help                                         \t9.32\t5567.49\t1.80348902138865697\nAAAAAAAAEPICAAAA\tPositive, recent adults cannot tell fortunately laboratories. Frequent performances may get labour buildings; vocational windows will talk; similar seeds must replace better. Other merch\tBooks                                             \tself-help                                         \t9.69\t10154.63\t3.28941115678050571\nAAAAAAAAFEAEAAAA\tTonight single claims used to compete seriously. Frequently magic advances concentrate very political men. Again damp types will apply also pol\tBooks                                             \tself-help                                         \t0.56\t8790.86\t2.84764220475738421\nAAAAAAAAFFGAAAAA\tAreas increase well final, peculiar findings. Fat possibilities will say now at all sure dogs\tBooks                                             \tself-help                                         \t5.11\t3770.90\t1.22151575499093605\nAAAAAAAAGEPAAAAA\tClearly legal servants should not investigate however early difficult women. Increased laboratories tell home samples. Still wooden institutions avoid undoubtedly. Policies will \tBooks                                             \tself-help                                         \t9.11\t9124.75\t2.95579991125554742\nAAAAAAAAGKLBAAAA\tPhysical, political issues must not increase. Teeth go there particular prices. Words mi\tBooks                                             \tself-help                                         \t4.82\t1881.44\t0.60945890956274278\nAAAAAAAAGLECAAAA\tOld, acceptable respects imply around banks. Rights will not spare so existing reasons. Old eggs must claim. Patients might not stop there military,\tBooks                                             \tself-help                                         \t7.89\t15529.28\t5.03043310182334282\nAAAAAAAAGNJBAAAA\tNational, dreadful opportunities give. Lucky, low rules should start away from the girls. Available words will not leave now. Stor\tBooks                                             \tself-help                                         \t5.53\t6895.58\t2.23370007419989892\nAAAAAAAAGPFAAAAA\tDominant, useful restaurants might not say contrary eyes. 
Modest years may not confirm again just other stage\tBooks                                             \tself-help                                         \t3.87\t12631.86\t4.09186560365955223\nAAAAAAAAHAFBAAAA\tVarious\tBooks                                             \tself-help                                         \t6.24\t3437.60\t1.11354916846292444\nAAAAAAAAHBBEAAAA\tThere political deta\tBooks                                             \tself-help                                         \t8.83\t4867.67\t1.57679482221664051\nAAAAAAAAICMDAAAA\tOther, established programmes used to avoid good organisations. Forward, simple changes might enter straight. Papers cal\tBooks                                             \tself-help                                         \t1.63\t3028.98\t0.98118401218606844\nAAAAAAAAIECDAAAA\tCards insist sad males. Instruments turn later instructions. Economic, white \tBooks                                             \tself-help                                         \t2.64\t3883.30\t1.25792572896557903\nAAAAAAAAIEDBAAAA\tOther, precious services can stem; grounds will set in particular friendly factors. Ports will provide. So complete moments diversify morally different, open pupi\tBooks                                             \tself-help                                         \t6.72\tNULL\tNULL\nAAAAAAAAIHIDAAAA\tMetres must not go more soft attacks. Northern, central changes see all right inherent metres; women shall reduce together always private efforts. Extra, secret dates ought to sa\tBooks                                             \tself-help                                         \t36.51\t215.49\t0.06980413960672434\nAAAAAAAAIPODAAAA\tOutside, remaining problems must come only new politicians. Readers would not tell right, modern products. Particular threats become legally among a beaches\tBooks                                             \tself-help                                         \t1.38\t24121.05\t7.81358365427991146\nAAAAAAAAJCEEAAAA\tIn order excellent words could go old costs. Surp\tBooks                                             \tself-help                                         \t1.45\t3398.74\t1.10096116500514307\nAAAAAAAAJCMCAAAA\tLogic\tBooks                                             \tself-help                                         \t1.29\t3676.91\t1.19106937194932846\nAAAAAAAAJJOCAAAA\tSufficiently great tears may see. Much short standards duck over a pap\tBooks                                             \tself-help                                         \t8.57\t1508.73\t0.48872615689291017\nAAAAAAAAJMABAAAA\tAgain right years welcome to the months. Once competitive years could benefit great, social projects. Actually old expectations must not spin \tBooks                                             \tself-help                                         \t2.42\t1824.90\t0.59114378564346952\nAAAAAAAAKCEAAAAA\tActions need qualifications. Expert sales see. Guests look evidently dead roots. 
Activities \tBooks                                             \tself-help                                         \t2.20\t1248.95\t0.40457506223870418\nAAAAAAAAKDCEAAAA\tStill social transactions provide both most existing vi\tBooks                                             \tself-help                                         \t6.50\t2330.32\t0.75486557431129919\nAAAAAAAAKHEBAAAA\tPrime even\tBooks                                             \tself-help                                         \t4.28\t3438.17\t1.11373380979002005\nAAAAAAAAKHMAAAAA\tConfidential, japanese reports discuss ever forms. Initiatives say now pregnant, sad sites. Neither round eyes may ask more w\tBooks                                             \tself-help                                         \t1.72\t3385.13\t1.09655244840554440\nAAAAAAAAKLCAAAAA\tClever, informal negotiations study sharply with a leaders. Professionals come noble officials. Plans continue pa\tBooks                                             \tself-help                                         \t4.69\t2768.44\t0.89678672909573497\nAAAAAAAAKLEAAAAA\tBritish, \tBooks                                             \tself-help                                         \t1.52\t4014.40\t1.30039323419756920\nAAAAAAAALBBAAAAA\tHighly other times could stay no longer huge symbolic results. Most narrow police chan\tBooks                                             \tself-help                                         \t7.99\t660.44\t0.21393775099477944\nAAAAAAAAMBHAAAAA\tHands can ensure. Dead schools concentrate by a years. Increased authorities should not stop natural, following guards. Principal years might secure. Long, criti\tBooks                                             \tself-help                                         \t4.23\t4140.99\t1.34139980542043446\nAAAAAAAAMCODAAAA\tRights could not talk. Miners shall clear various outcomes. Relative, western forms locate communist, local prices. Items would not disappear probably likely women. Bare conclusions mark in gener\tBooks                                             \tself-help                                         \t8.57\t3116.42\t1.00950863962684053\nAAAAAAAAMHEDAAAA\tOther changes shall seek \tBooks                                             \tself-help                                         \t2.51\t2862.54\t0.92726874467415049\nAAAAAAAAMLOBAAAA\tSo ashamed periods could give there on the operations. Potatoes must order very noble systems; labour years should not escape so formal, ready \tBooks                                             \tself-help                                         \t1.94\t11014.72\t3.56802196208166835\nAAAAAAAANBMCAAAA\tAlso crucial weeks will consider just then close parts. Long values prove then reco\tBooks                                             \tself-help                                         \t3.91\t65.52\t0.02122403465141110\nAAAAAAAANDDDAAAA\tSincerely important experiments should hear surprised, unchanged sorts. Else financial democrats will not start so major bodies. E\tBooks                                             \tself-help                                         \t1.90\t5855.42\t1.89675880614416367\nAAAAAAAAOACAAAAA\tCities practise a\tBooks                                             \tself-help                                         \t2.94\t9089.11\t2.94425496932977984\nAAAAAAAAOJMCAAAA\tNearly northern eyes would not use further buyers. Ever independent advertisements comment also nice, old schemes. Firm members would restore as a doors. 
Problems \tBooks                                             \tself-help                                         \t8.02\t14009.14\t4.53801087906699247\nAAAAAAAAOKBEAAAA\tEssential, modern goods help friendly roads. Cultures\tBooks                                             \tself-help                                         \t1.13\t8764.28\t2.83903208813597843\nAAAAAAAAOLEDAAAA\tGentlemen construct. Inevitable proposals tell more subject troops. Feelings used to come thus a\tBooks                                             \tself-help                                         \t1.73\t8962.10\t2.90311234660273887\nAAAAAAAAONJCAAAA\tMiles kiss silently difficult streets. Less social rules see never \tBooks                                             \tself-help                                         \t7.03\t283.44\t0.09181532938943778\nAAAAAAAAONPCAAAA\tYards shall build gradually steep, possible players. Foreign, wild lines used to understand vital layers. Problems shall go likely, parliamentary rats. Suspicious, wrong thousands \tBooks                                             \tself-help                                         \t7.63\t7823.86\t2.53439981300044683\nAAAAAAAAPEECAAAA\tResults\tBooks                                             \tself-help                                         \t9.21\t3280.19\t1.06255900829078431\nAAAAAAAAPPNDAAAA\tSmooth, othe\tBooks                                             \tself-help                                         \t8.62\t11533.69\t3.73613303141992873\nAAAAAAAAABJAAAAA\tAvailable, other responsibilities ban common, english authorities. Participants save little for a years. Well local plans look. As entir\tBooks                                             \tsports                                            \t2.98\t624.68\t0.24146901355107034\nAAAAAAAAAIOAAAAA\tNow beautiful results scream just official payments. Carefully \tBooks                                             \tsports                                            \t4.89\t12518.36\t4.83895120778186737\nAAAAAAAAAJABAAAA\tAgricultural elections go users. Popular customers could threaten upside down hard, able pages. European, interesting bases spend at a fingers. \tBooks                                             \tsports                                            \t2.47\t7461.50\t2.88423039734153702\nAAAAAAAAALMDAAAA\tLevels should rethink really typically other women. Elections respond long numbers. Firms might sum nearly present, personal homes. Again clear\tBooks                                             \tsports                                            \t3.91\t6886.83\t2.66209266599525798\nAAAAAAAAAMGBAAAA\tVery social engineers ask facilities. Numerous, stupid \tBooks                                             \tsports                                            \t7.36\t4152.23\t1.60503759066587821\nAAAAAAAABAGAAAAA\tGreen levels provide. Students would agree. Very upper states get finally for a\tBooks                                             \tsports                                            \t1.29\t4251.46\t1.64339478189126194\nAAAAAAAABLKAAAAA\tIn order\tBooks                                             \tsports                                            \t9.54\t5723.96\t2.21258720433787633\nAAAAAAAABMIBAAAA\tAs specific characteristics contain for the hours. Free, double men avoid in the meals. Trying, potential institutions share above from the months. 
Contemporary problems could cheer only heav\tBooks                                             \tsports                                            \t1.58\t1246.89\t0.48198325271610120\nAAAAAAAABNPCAAAA\tGrounds ought \tBooks                                             \tsports                                            \t1.69\t6467.35\t2.49994337066900616\nAAAAAAAABOPBAAAA\tCompletely particular voices shall not say straight. Used ideas must recall current colonies. New techniques could not make naturally old, great versions; great adults test\tBooks                                             \tsports                                            \t2.88\t6653.24\t2.57179884055600185\nAAAAAAAACBHBAAAA\tProcedures order here shops. Late static sciences shall not see cultures. Polite implications cover underway. That is right communications might not say cool principles. Strange keys\tBooks                                             \tsports                                            \t1.34\t2498.12\t0.96564412520362400\nAAAAAAAACDJAAAAA\tMore big results develop again on a politicians. Characteristics live flowers. Children wipe perhaps appropriate roles. Wrong, external shows want somewhat little ways. Then difficult\tBooks                                             \tsports                                            \t3.64\t4362.77\t1.68642147699654727\nAAAAAAAACGPAAAAA\tBasic, functional circumstances must \tBooks                                             \tsports                                            \t7.87\t2947.46\t1.13933575379592397\nAAAAAAAACLNAAAAA\tNeighbours shall not represent overall dramatic trees. Random chiefs could not interfere basic, special fruits. A little poli\tBooks                                             \tsports                                            \t5.46\t3974.85\t1.53647164710487281\nAAAAAAAACPDDAAAA\tImmediately impossible teachers cut kindly busy, national products. Important, principal communities could die all very video-taped words. Short children doubt windows. Sometimes russian developm\tBooks                                             \tsports                                            \t96.08\t4160.79\t1.60834644440858994\nAAAAAAAAFBKDAAAA\tTwice detailed customers know women; economic, intense values listen wide industr\tBooks                                             \tsports                                            \t0.74\t6802.45\t2.62947571753614401\nAAAAAAAAFIECAAAA\tSad, very sales could gather hence on a pounds. Issues see just within a things. Eastern directors put very in a initiatives. \tBooks                                             \tsports                                            \t3.99\t5533.59\t2.13899999791263899\nAAAAAAAAGBBAAAAA\tSick organizations cannot cause to the situations. Direct nations seek to a genes. Able, invisible polls c\tBooks                                             \tsports                                            \t52.92\t10879.04\t4.20527479218581719\nAAAAAAAAGBECAAAA\tLetters help; international directions should hu\tBooks                                             \tsports                                            \t37.74\t460.35\t0.17794752575436260\nAAAAAAAAGCFDAAAA\tAppointments might not hold to a tickets. Proper, private areas describe and so on prime, natural calls. Miners shall receive typically safe units. 
Little times will develop pointedly very mus\tBooks                                             \tsports                                            \t6.13\t3351.79\t1.29562884185557735\nAAAAAAAAGJJBAAAA\tMinisters prove perhaps social processes. Aggressive characters could get open signals. Products try at all public, loyal councils; wholly historical respondents see there from a statements. C\tBooks                                             \tsports                                            \t7.24\t13142.40\t5.08017283039890319\nAAAAAAAAGJKBAAAA\tLikely days shall get. Great users would love even. However acceptable walls\tBooks                                             \tsports                                            \t8.23\t2406.70\t0.93030587647013029\nAAAAAAAAGPODAAAA\tJust average men might make so faintly free parents. J\tBooks                                             \tsports                                            \t1.41\t9937.58\t3.84135499725434718\nAAAAAAAAHACBAAAA\tPapers conceive over immediate victims. Local, expert members add ill over immediate tiles. Profits pay even. Tall classes begin for instance grand fields; ru\tBooks                                             \tsports                                            \t0.25\t3880.85\t1.50013610366855243\nAAAAAAAAHEJCAAAA\tGreat, reliable children see french, proper dates. Public passages like closely traditionally academic books. Values used to distinguish leaders. Much key oper\tBooks                                             \tsports                                            \t31.97\t1293.62\t0.50004665638396557\nAAAAAAAAHLHDAAAA\tDual months should survive only large, political eyes; new, new merchants pass fairly conseque\tBooks                                             \tsports                                            \t6.26\t4192.74\t1.62069666369359458\nAAAAAAAAIACEAAAA\tConversely good eggs would not call too. Police happen present courses. Fine procedures finish well forward private\tBooks                                             \tsports                                            \t6.31\t6912.27\t2.67192645562313022\nAAAAAAAAIAMDAAAA\tReal, japanese systems would need downstairs for the phrases; level waters might not go about existing, little friends. Nation\tBooks                                             \tsports                                            \t5.90\t2794.92\t1.08037167086213344\nAAAAAAAAIBLDAAAA\tDevices take truly afraid, great men. Both true parties hurt even with a proposals. All internal candidates prevent more. Distinctive, prime women would say. Little, english departme\tBooks                                             \tsports                                            \t0.63\t1050.56\t0.40609221821766738\nAAAAAAAAIHLCAAAA\tParents prevent alone little children. Cases might dispose again lives; very strange windows violate officially. Improved, cheap critics should alert plates. Expressions build c\tBooks                                             \tsports                                            \t5.56\t4342.45\t1.67856681484095121\nAAAAAAAAJBBCAAAA\tWrong others miss less to the respects. Especially other standards start in order regula\tBooks                                             \tsports                                            \t7.53\t11059.22\t4.27492307108322362\nAAAAAAAAJCCDAAAA\tAdults will foresee most left, social children. Different eyes make personal counties. 
Readers would not admit more musical proceedings; titles take here away fast institutions; bird\tBooks                                             \tsports                                            \t3.83\t10985.10\t4.24627210853535058\nAAAAAAAAKEOBAAAA\tInternational, coloured contexts think. Relevant, british conservatives ought to happen ago. Perhaps human shops must see animals; rights must h\tBooks                                             \tsports                                            \t44.83\t10933.78\t4.22643444801245737\nAAAAAAAAKMFBAAAA\tYears should comment then central, internal implications; directly collective feet may find around extra, victorian crimes. Short\tBooks                                             \tsports                                            \t2.75\t1868.42\t0.72223463901372038\nAAAAAAAAKNODAAAA\tSo single phrases could not sum; desirable friends see times. French efforts think\tBooks                                             \tsports                                            \t4.59\t4611.30\t1.78249033455217177\nAAAAAAAALEHDAAAA\tCentral, visible moments \tBooks                                             \tsports                                            \t57.64\t1362.54\t0.52668756759280813\nAAAAAAAALJLDAAAA\tOld, straight enemies obtain however years. Largely social questions disrupt never. Measures rule fine, extensive trees. Already economic friends would not show more beautiful engines. Systems ret\tBooks                                             \tsports                                            \t9.99\t4644.12\t1.79517685088812959\nAAAAAAAALNHAAAAA\tFreely proud children cannot continue countries. Rates shall not look applications. Compl\tBooks                                             \tsports                                            \t4.13\t886.97\t0.34285677618843706\nAAAAAAAAMDOAAAAA\tAlready secondary year\tBooks                                             \tsports                                            \t72.51\t8152.72\t3.15142033706550904\nAAAAAAAAMLOAAAAA\tDevelopers ought to recognize again. British, fast artists shall experi\tBooks                                             \tsports                                            \t79.00\t2317.17\t0.89569820408870728\nAAAAAAAAMOFEAAAA\tPaths check still international attitudes. Immediate\tBooks                                             \tsports                                            \t0.37\t2211.39\t0.85480912127281399\nAAAAAAAANDFBAAAA\tAll capital bacteria make jobs. Again appropriate eyes may not leave others. There fixed ways\tBooks                                             \tsports                                            \t0.32\t7910.07\t3.05762438371632671\nAAAAAAAANDNCAAAA\tPapers occur critically relatively happy numbers; related, soviet genes experiment governments; voluntary devices\tBooks                                             \tsports                                            \t2.52\t3864.91\t1.49397452321775512\nAAAAAAAANMJDAAAA\tIndeed similar changes might drink too national careful areas. Wise, good rooms give large opportunities. Various patients shall research directly plants. International hands can get pieces\tBooks                                             \tsports                                            \t9.31\t3710.53\t1.43429919134861534\nAAAAAAAAOCLBAAAA\tHere familiar rooms would not believe particularly new, fresh rights. Levels allow then wives; temporary, big ears may sound always others. 
Lovely, essentia\tBooks                                             \tsports                                            \t9.23\t1808.93\t0.69923887859854273\nAAAAAAAAOKLCAAAA\tLines might clear too high eyes. Great women balance as the things. Natural requirements\tBooks                                             \tsports                                            \t8.76\t5395.16\t2.08549011197764081\nAAAAAAAAOLJBAAAA\tGeneral, local thanks must soar actually about p\tBooks                                             \tsports                                            \t22.08\t7752.94\t2.99688604392750734\nAAAAAAAAOMFAAAAA\tInc others look in the varieties. Cold methods write values. Partners will make often times. Democratic, dramatic personnel shall not see\tBooks                                             \tsports                                            \t3.64\t473.00\t0.18283736218488870\nAAAAAAAAOMKBAAAA\tOthers used to coincide there so as historical sites; syste\tBooks                                             \tsports                                            \t4.08\t4391.31\t1.69745356646114923\nAAAAAAAAOMMAAAAA\tPoor, major pairs affect complex, redundant results. Different animals could find so great, honest designs. Dull, linguistic studies might not get more cons\tBooks                                             \tsports                                            \t33.21\t1010.07\t0.39044087615663959\nAAAAAAAAPLNBAAAA\tOpen prod\tBooks                                             \tsports                                            \t2.74\t12438.41\t4.80804666844427361\nAAAAAAAAACIBAAAA\tBloody masters pull only women; shops take aggressively also legal cells. Continually underlying grounds would interfere. Entries shall not separate. Senior techniques see in \tBooks                                             \ttravel                                            \t2.25\t4171.41\t1.64665291182793628\nAAAAAAAAACKCAAAA\tActive, mi\tBooks                                             \ttravel                                            \t1.40\t12936.19\t5.10652631399441219\nAAAAAAAAADDEAAAA\tVoluntary others will imply again international, important birds; ill old publishers can get dark powers. Features stretch now progressive procedures. Tough n\tBooks                                             \ttravel                                            \t1.83\t3612.43\t1.42599705573765030\nAAAAAAAAAGAEAAAA\tCold terms shall comply only early claims; head, different politicians shall not commend good, foreign organizations; criminal, po\tBooks                                             \ttravel                                            \t1.03\t5504.86\t2.17302872367020583\nAAAAAAAACACEAAAA\tOperations s\tBooks                                             \ttravel                                            \t1.00\t193.62\t0.07643097580629212\nAAAAAAAACBLAAAAA\tApplications might gather rather out of a problems. Scales could observe presumably for a directors; totally empty questions will forget. Just, symbolic question\tBooks                                             \ttravel                                            \t21.48\t5351.75\t2.11258896173599765\nAAAAAAAACDDDAAAA\tFor example influential subjects shall work for example. Modules should persuade aside overall preliminary relatives. 
American, available reasons may use to the weekends; streets used t\tBooks                                             \ttravel                                            \t2.18\t6997.28\t2.76215751673304277\nAAAAAAAACGADAAAA\tSimilar sides assess more yet complete improvements. Bacteria would stay; general, curious trends used to reac\tBooks                                             \ttravel                                            \t1.61\t221.43\t0.08740889873353613\nAAAAAAAACHBBAAAA\tCommunist, small cases may not turn other rules. Little, forward men should assist quite available technique\tBooks                                             \ttravel                                            \t2.29\t16204.92\t6.39684871636659094\nAAAAAAAACPDCAAAA\tConflicts could give really sole institutions. Then advanced proceedings could not receive. Black experiences shall \tBooks                                             \ttravel                                            \t1.91\t5880.48\t2.32130371144192077\nAAAAAAAADACCAAAA\tLeading players will sa\tBooks                                             \ttravel                                            \t4.51\t262.65\t0.10368038320174892\nAAAAAAAADADBAAAA\tThere european members turn; industrial, honest leaders cut exactly happy, consistent reasons. Incidentally european millions worry at first aware \tBooks                                             \ttravel                                            \t3.81\t2395.24\t0.94551456714318326\nAAAAAAAADEPDAAAA\tDeliberately ordinary procedures will not pay by a months. Feet reach very s\tBooks                                             \ttravel                                            \t9.43\t1776.74\t0.70136335065629308\nAAAAAAAAEEHCAAAA\tGood, national parts remove animals; \tBooks                                             \ttravel                                            \t2.57\t3370.45\t1.33047609960911726\nAAAAAAAAEIICAAAA\tOdd, artistic databases open now; female, left days use all obligations. Simple, early sites may not hesitate statements. Left, free s\tBooks                                             \ttravel                                            \t2.31\t9717.76\t3.83605970174234756\nAAAAAAAAEJPAAAAA\tHowever solid hours visit painfully things. Clubs must take most other words; officials will follow necessary developers. Alternative, great decisio\tBooks                                             \ttravel                                            \t2.68\t1892.11\t0.74690534879063830\nAAAAAAAAFEBAAAAA\tFinally surprising cells cannot look better points. Elections might choo\tBooks                                             \ttravel                                            \t1.98\t3145.02\t1.24148821160161580\nAAAAAAAAFGBEAAAA\tRight schools go now; average, invisible hands should get also good persons. Usually good ministers will make. Notes ought to stem average words. Heavy, certain suggestions summaris\tBooks                                             \ttravel                                            \t4.55\t337.50\t0.13322721999082528\nAAAAAAAAGEEDAAAA\tThanks could talk well individually national records; just simple officials go then encouraging, remarkable needs. Signals assess now. Upper, cheap pp. would not see. Hard trousers shall send whol\tBooks                                             \ttravel                                            \t4.23\t6920.66\t2.73191197719023675\nAAAAAAAAGFHCAAAA\tReports used to think characteristics. 
True types break extremely deliberately white tasks. Courses must cost. Economic, nervous resou\tBooks                                             \ttravel                                            \t0.74\t1273.19\t0.50258833842998175\nAAAAAAAAGMGCAAAA\tDear signals know finally. Positions answer payable payments. Superior babies can exis\tBooks                                             \ttravel                                            \t1.78\t16390.16\t6.46997170964392568\nAAAAAAAAGNFBAAAA\tHorizontal problems continue members; modern, other interactio\tBooks                                             \ttravel                                            \t8.51\t2371.88\t0.93629326978322569\nAAAAAAAAHAECAAAA\tOpen conditio\tBooks                                             \ttravel                                            \t8.17\t9456.37\t3.73287670016189772\nAAAAAAAAIACAAAAA\tPractical writers used to succeed recent arms. \tBooks                                             \ttravel                                            \t9.48\t10115.82\t3.99319281934100804\nAAAAAAAAIFJAAAAA\tMembers show yards. Economic stones get newspapers. Only magic views lea\tBooks                                             \ttravel                                            \t9.23\t1653.26\t0.65261995176898313\nAAAAAAAAIGDAAAAA\tInvestments ought to use still also professional developments. Only fresh visitors know steadily never main occ\tBooks                                             \ttravel                                            \t1.37\t4036.41\t1.59336202383160616\nAAAAAAAAIGEEAAAA\tConclusions might take on a ch\tBooks                                             \ttravel                                            \t4.48\t4341.46\t1.71377969333738765\nAAAAAAAAILBCAAAA\tSmall, original things announce in addition at last other functions. Best political women make even old materials. Downstairs wet arr\tBooks                                             \ttravel                                            \t0.34\t8289.45\t3.27223815926799005\nAAAAAAAAIMLAAAAA\tAgain english deals cut for the cases. Yet normal systems reach biological, original reasons. So other remains spread steadily. Much inadequate members consider\tBooks                                             \ttravel                                            \t1.92\t7192.94\t2.83939377706905721\nAAAAAAAAINFCAAAA\tLater severe rules would\tBooks                                             \ttravel                                            \t1.57\t3713.31\t1.46581916522705609\nAAAAAAAAINIDAAAA\tMovements may describe quite southern, nervous views. Young notes imagine. Sensitive women might excuse then sales. Proportions may not exist only from a controls. Are\tBooks                                             \ttravel                                            \t2.49\t6651.86\t2.62580389797976612\nAAAAAAAAJGKDAAAA\tThat is fine terms know to the goods; useful colleagues us\tBooks                                             \ttravel                                            \t6.31\t6202.60\t2.44845971767434933\nAAAAAAAAKBODAAAA\tYesterday long babies may not include as else able companies. Large, true d\tBooks                                             \ttravel                                            \t4.19\t1813.84\t0.71600847617232157\nAAAAAAAAKEKBAAAA\tWords see low courts. National, democratic plants avoid. Days should go stupid, apparent days. Dependent hours should not want police. 
Also urban wages shall not define so great, typic\tBooks                                             \ttravel                                            \t8.88\t8312.77\t3.28144366673520796\nAAAAAAAAKGPBAAAA\tMasses can contain as. Military men retain in a earnings; british, related instructions shall know different, precise needs; favorite\tBooks                                             \ttravel                                            \t5.09\t959.36\t0.37870478746784635\nAAAAAAAAKIIAAAAA\tBehind relevant areas find then necessary papers. Copies might come envi\tBooks                                             \ttravel                                            \t7.07\t7437.38\t2.93588581160107894\nAAAAAAAAKLHBAAAA\tRemarkably good bishops would deprive transactions. I\tBooks                                             \ttravel                                            \t0.59\t7014.30\t2.76887611609376528\nAAAAAAAAKNEEAAAA\tRunning businesses find emotions; \tBooks                                             \ttravel                                            \t4.40\t2300.61\t0.90815962839434831\nAAAAAAAALJDCAAAA\tPink, central countries shall defend rapidly \tBooks                                             \ttravel                                            \t6.87\t6536.14\t2.58012373828394893\nAAAAAAAALNEAAAAA\tLocal, conservati\tBooks                                             \ttravel                                            \t1.68\t8121.86\t3.20608245616202735\nAAAAAAAALNHDAAAA\tStrong women know also also obvious votes. Private, natural areas should play strongly for \tBooks                                             \ttravel                                            \t2.11\t184.12\t0.07268087628062445\nAAAAAAAAMENAAAAA\tColours meet certainly hours; aw\tBooks                                             \ttravel                                            \t1.63\t5441.98\t2.14820701228347073\nAAAAAAAAMHJAAAAA\tToo full weeks might obtain most today vital cities. Police shall take for example full sto\tBooks                                             \ttravel                                            \t3.82\t5904.69\t2.33086054402259597\nAAAAAAAAMKAAAAAA\tExceptional hundreds compare else then previous scientists. Rapid, popular differences get exactly now social persons. Naturally fundamental dreams hold on a changes. Brilliant birds pursue te\tBooks                                             \ttravel                                            \t5.39\t3124.51\t1.23339194409935853\nAAAAAAAAOBIDAAAA\tBritish leaders can focus. Different workers cannot breathe only in an objectives; arrangements might enter predictably hours; reduced, effective phases operate ready men. Others say o\tBooks                                             \ttravel                                            \t4.95\t1624.50\t0.64126701888917236\nAAAAAAAAOHHBAAAA\tYesterday public notes work at least students; accidents might not apply today rural, subject premises. National, particular organisations could not endorse simply under a respondents. Sti\tBooks                                             \ttravel                                            \t9.83\t531.86\t0.20995030881280099\nAAAAAAAAONIAAAAA\tMaybe gastric variations will see as. However physical plants would not choose for example wi\tBooks                                             \ttravel                                            \t6.36\t1691.34\t0.66765192965713314\nAAAAAAAAPMICAAAA\tLittle arts can grow directly rights. 
Full, slim argum\tBooks                                             \ttravel                                            \t4.77\t16542.31\t6.53003251415238218\nAAAAAAAAPMNDAAAA\tAbout right clothes must get thoughtfully to a cases. Eastern improvements \tBooks                                             \ttravel                                            \t98.75\t2730.37\t1.07780623598918408\nAAAAAAAAPPDEAAAA\tCountries want incorr\tBooks                                             \ttravel                                            \t63.33\t473.46\t0.18689706541290708\nAAAAAAAADKHCAAAA\tFields would die clear horses. However new problems go nasty, smooth ways. Interested others go great societies. Familiar patients shall seem trends. Yellow, r\tHome                                              \tNULL\tNULL\t7995.48\t34.64319649767261090\nAAAAAAAAGAMCAAAA\tNULL\tHome                                              \tNULL\t0.87\t14048.70\t60.87087637475838958\nAAAAAAAAHGGDAAAA\tNULL\tHome                                              \tNULL\tNULL\t116.76\t0.50590328824138814\nAAAAAAAAAEPBAAAA\tNeat, desirable words make especially gradu\tHome                                              \taccent                                            \t7.11\t1583.88\t0.73384072874422647\nAAAAAAAABCDBAAAA\tCommon males protest probably statements. Subsequent, main ways begin then titles. Rights come therefore interesting, ordinary thin\tHome                                              \taccent                                            \t8.82\t1429.40\t0.66226730413099308\nAAAAAAAABCNAAAAA\tOffers go odds. Black, certain readers prove again in a cases. Public, black things watch as else modern forces. Difficult, new crops comp\tHome                                              \taccent                                            \t3.59\t4707.69\t2.18115934307012370\nAAAAAAAABDMDAAAA\tNational, round fields would not accomp\tHome                                              \taccent                                            \t0.17\t1970.93\t0.91316811090730250\nAAAAAAAABDPDAAAA\tMore general applications work also moves. Final, equal instruction\tHome                                              \taccent                                            \t33.79\t1466.94\t0.67966027642501678\nAAAAAAAABIDBAAAA\tSevere plants filter fair with the days. Both great hills bring still. Military standards ask now for a conditions. Ago new proposals may like particularly men. Then alone a\tHome                                              \taccent                                            \t5.54\t6369.32\t2.95102307649896240\nAAAAAAAABMNCAAAA\tPresent, good grounds fall students. Big, long nerves remain events. Important, black years must not use principles. Fatal mines cannot order hospitals. Forces apply elsewhere; now final members\tHome                                              \taccent                                            \t5.37\t187.59\t0.08691389644741359\nAAAAAAAACBIAAAAA\tTerms must work slow signs. Just american movements make surprisingly\tHome                                              \taccent                                            \t0.26\t481.20\t0.22294880841460324\nAAAAAAAACBIDAAAA\tDiscussions could inform; legitimately potential miles remember again from the factors. 
Then administrative changes may\tHome                                              \taccent                                            \t2.20\t1475.60\t0.68367261366705848\nAAAAAAAACLEDAAAA\tAgo light fingers blame enough green, british years. Children go general stands. Economic, great numbers affect deputies. Purposes urge annually. Always electrical ways vote judicial, regular ac\tHome                                              \taccent                                            \t6.86\t11873.28\t5.50110895256222018\nAAAAAAAADCIDAAAA\tDays shall want later romantic, american changes. Reasons read; great reasons may occupy economically. Strong, new films go then objects. English relations would resolve over. New, crazy feelin\tHome                                              \taccent                                            \t1.78\t715.86\t0.33167110139583931\nAAAAAAAADIJCAAAA\tNew, large words stop more strong cars. Back views leave other, young shoes. White conte\tHome                                              \taccent                                            \t2.81\t9585.07\t4.44093918343840622\nAAAAAAAADKJDAAAA\tDecades try then. Different leaders stray examples. Things would not participate too good, good messages. Exactly new thanks can forget; companies u\tHome                                              \taccent                                            \t3.51\t4955.85\t2.29613643429241784\nAAAAAAAADNPDAAAA\tVery afraid concepts will not disentangle with a days. Long-term, civil points c\tHome                                              \taccent                                            \t8.15\t3501.80\t1.62244833189164095\nAAAAAAAAEEPAAAAA\tNew measures shall pay under a agencies; comparatively heavy police shall beat similarly concepts. However japanese times cannot check like a police. Long, long-term auth\tHome                                              \taccent                                            \t1.87\t5547.93\t2.57045798559357804\nAAAAAAAAELJCAAAA\tUseful, n\tHome                                              \taccent                                            \t9.44\t3014.70\t1.39676594498650122\nAAAAAAAAFAJCAAAA\tDays give briefly vulnerable months. Sexual feelings create just animals. Charts study; changes knock rapidly aware sites. Schemes include sufficiently. For example speci\tHome                                              \taccent                                            \t7.15\t303.87\t0.14078855863039378\nAAAAAAAAFKLCAAAA\tConnections must not come right finally certain parties. Wild parties fi\tHome                                              \taccent                                            \t2.55\t1293.30\t0.59920967149336320\nAAAAAAAAFOADAAAA\tLittle powers reach by a subjects; traditional insects make also others. Numbers shall make. Products take serious, military rules. Curiously economic methods approac\tHome                                              \taccent                                            \t3.52\t99.03\t0.04588241998607265\nAAAAAAAAGCJAAAAA\tOld buildings must proceed;\tHome                                              \taccent                                            \t9.33\t595.01\t0.27567907417866391\nAAAAAAAAGEPDAAAA\tAdditional eyes give nationally. Territorial groups should talk previously strange differences. 
Small discus\tHome                                              \taccent                                            \t6.07\t18159.55\t8.41365343691896978\nAAAAAAAAGHFBAAAA\tAlmost busy pounds lose at last for an factors. Good mothers would\tHome                                              \taccent                                            \t1.45\t2292.51\t1.06216203819318802\nAAAAAAAAGKMDAAAA\tBenefits might choose only by a directors. Continued eggs must not make much black, back arrangements. Living,\tHome                                              \taccent                                            \t1.62\t9494.68\t4.39905983432661074\nAAAAAAAAGNOBAAAA\tHoles may avoid of course genuine\tHome                                              \taccent                                            \t3.27\t409.64\t0.18979374455311320\nAAAAAAAAGOEEAAAA\tSupporters will laugh well indirect, old reductions. Men can increase critical words. Eyes ought to drift better parties. Other, social goods avoid costs; similar, substantial days learn;\tHome                                              \taccent                                            \t63.79\t5475.88\t2.53707589572185700\nAAAAAAAAHKFAAAAA\tMain, powerful kilometres should like certainly political directors. Left families go tall, clear organizatio\tHome                                              \taccent                                            \t0.18\t11613.93\t5.38094732857567124\nAAAAAAAAHOAEAAAA\tPromptly soviet faces could confirm now consistent new procedure\tHome                                              \taccent                                            \t1.85\t5675.68\t2.62964690968951645\nAAAAAAAAHPCEAAAA\tOld events can try far natural genes. Primary months explain at all par\tHome                                              \taccent                                            \t0.15\t20335.22\t9.42168135463177076\nAAAAAAAAIEODAAAA\tWomen should hear among a pages. Everywhere main techniques go just unlikely principles. Broad, willing differences can make also short, modern roots. Together sorry thoug\tHome                                              \taccent                                            \t8.25\t1632.64\t0.75643213335415177\nAAAAAAAAIKDBAAAA\tAttractive, pale rights stop in a delegates. Answers go as; variable, alone roles ought to relax quickly concerned, detailed parents. Poor, physical matches would send as for a details; cent\tHome                                              \taccent                                            \t1.45\t989.82\t0.45860180703437776\nAAAAAAAAILOBAAAA\tAncient periods will not see in a affairs. Fun\tHome                                              \taccent                                            \t4.09\t8014.62\t3.71332082064806196\nAAAAAAAAJNKCAAAA\tPerhaps material e\tHome                                              \taccent                                            \t6.64\t2552.44\t1.18259238684490834\nAAAAAAAAKMBDAAAA\tHere german thanks trust further remarkable towns. Other years\tHome                                              \taccent                                            \t2.04\t7200.88\t3.33630011541261051\nAAAAAAAAKOEAAAAA\tSupreme others can decide. Unfair, short presents give. Activities give simply police. Dark, impossible \tHome                                              \taccent                                            \t0.13\t2033.98\t0.94238033528498482\nAAAAAAAAKOEBAAAA\tStill different holes ought to enjoy early problems. 
Mammals see usually. Powerful, public \tHome                                              \taccent                                            \t6.84\t1085.87\t0.50310353822353537\nAAAAAAAALGMCAAAA\tAlways potential wages shall not restart sometimes at the efforts. Mere, high weapons would not go there physical pr\tHome                                              \taccent                                            \t66.58\t7246.44\t3.35740890118021093\nAAAAAAAALIMDAAAA\tBoys ought to answer. International citizens call areas. All quick cuts might back most white, central amounts. Strong mice make on a lines. Cultures would dismiss changes. Left chil\tHome                                              \taccent                                            \t5.45\t18131.76\t8.40077781891015469\nAAAAAAAALOADAAAA\tMost main firms would know highly for an companies. D\tHome                                              \taccent                                            \t1.31\t5733.85\t2.65659814033265334\nAAAAAAAAMBBDAAAA\tNew investors think especially secondary parties. Farmers detect adequately. Hum\tHome                                              \taccent                                            \t38.04\t1460.72\t0.67677843605024781\nAAAAAAAAMDCAAAAA\tInternational, nice forces will turn modest ways. Trees might not deal eastern others. Responsibilities ought t\tHome                                              \taccent                                            \t2.75\t6806.25\t3.15346077986677743\nAAAAAAAAMOFDAAAA\tQuite political women like home seriously formal chains. Certainly male lips \tHome                                              \taccent                                            \t4.86\t1551.13\t0.71866705152980782\nAAAAAAAANGKCAAAA\tRules meet as; authorities shall not kill moreover near a \tHome                                              \taccent                                            \t3.55\t651.58\t0.30188899540063836\nAAAAAAAANMBCAAAA\tAlso possible systems could go forward. Local, british babies d\tHome                                              \taccent                                            \t2.53\t2797.54\t1.29615172379922932\nAAAAAAAAOCADAAAA\tBritish results cou\tHome                                              \taccent                                            \t4.30\t118.60\t0.05494956084366572\nAAAAAAAAOFFEAAAA\tSimply perfect shareholders come others. Other, tired eyes contact therefore educational jobs. Over cathol\tHome                                              \taccent                                            \t7.12\t11929.65\t5.52722621010654933\nAAAAAAAAOIKDAAAA\tEnough labour losses demonstrate also quickly happy women; near available things might surrender also ge\tHome                                              \taccent                                            \t1.26\t1093.19\t0.50649502882535352\nAAAAAAAAPABAAAAA\tRoyal children \tHome                                              \taccent                                            \t3.70\t188.00\t0.08710385698658647\nAAAAAAAAAALAAAAA\tFuture, real fears mean far interests; ill, mean payments speak far so labour lights. Already other applicants might not go so powerful lengths; japanese, central modes boil. 
Old homes ough\tHome                                              \tbathroom                                          \t1.70\t19546.11\t7.34362930968507144\nAAAAAAAAAAOAAAAA\tAlso eastern matters should not enable now irish, \tHome                                              \tbathroom                                          \t3.46\t2574.19\t0.96714369931910820\nAAAAAAAAABFEAAAA\tQuite public shoulders help even ministers. Short, tall groups cannot overcome too other notes. Thus surprising reasons find\tHome                                              \tbathroom                                          \t1.77\t11046.40\t4.15022051991445731\nAAAAAAAAAEHCAAAA\tIn\tHome                                              \tbathroom                                          \t0.42\t1225.60\t0.46046768804381146\nAAAAAAAAAHKDAAAA\tNecessary, p\tHome                                              \tbathroom                                          \t8.13\t5680.58\t2.13423918027734537\nAAAAAAAAAJCAAAAA\tLetter\tHome                                              \tbathroom                                          \t9.54\t6366.89\t2.39209131717465953\nAAAAAAAAALCEAAAA\tModern companies shall not become also old, grateful agents. Enough joint programs approve titles. Jeans will not fall already wrong teachers. High, silver children manage a\tHome                                              \tbathroom                                          \t2.28\t16790.19\t6.30820820097611185\nAAAAAAAAANBAAAAA\tDetailed, unhappy groups play old, human others. Well anxious councils will study whole, democratic employees. Educational, english customers get more. Explicitly cold deci\tHome                                              \tbathroom                                          \t79.37\t2249.42\t0.84512502189907830\nAAAAAAAAAPICAAAA\tPp. may not record also human rocks. Extraordinary, industrial measures may not operate only out of a officials. Ready subjects show clearly new things. Projects should enable\tHome                                              \tbathroom                                          \t3.56\t11356.89\t4.26687408752274959\nAAAAAAAABLEAAAAA\tHere economic areas develop too sole processes; grateful, new children pass shares; fat, proposed aspects affect gmt on the terms. Years remind e\tHome                                              \tbathroom                                          \t6.16\t5399.13\t2.02849617211813296\nAAAAAAAACGECAAAA\tAppropriate, active areas change alternative books. Clients will not look now only, other rates. Usually effecti\tHome                                              \tbathroom                                          \t2.89\t2344.36\t0.88079473657179327\nAAAAAAAACLKCAAAA\tEmployees watch never at the imports. Cases resist actually reliable prices. Alive, var\tHome                                              \tbathroom                                          \t7.17\t2759.95\t1.03693521182809843\nAAAAAAAACONAAAAA\tVery oral hands ought to smoke military, independent issues. Moving sons play. Patients contradict to a measures. Other cattle enable significant goods. Initial, possible groups let soci\tHome                                              \tbathroom                                          \t7.17\t3821.04\t1.43559518172562445\nAAAAAAAADHLBAAAA\tNew sports will give now students. Scarcely free countries damage there prime, necessary members. 
Big units should not fill probably mental child\tHome                                              \tbathroom                                          \t4.29\t1777.37\t0.66777207465602902\nAAAAAAAAEABDAAAA\tUnions last moving pur\tHome                                              \tbathroom                                          \t2.72\t3881.21\t1.45820153028110433\nAAAAAAAAEBAAAAAA\tIndeed political miles imagine. Urgent, able males can explain companies. Accor\tHome                                              \tbathroom                                          \t5.47\t2914.22\t1.09489568036148517\nAAAAAAAAEDMBAAAA\tAlmost other bodies call cars. So international benefits ought to suppose in a points. Officers can ensure also for a books. Carefully different police sleep. Irish, u\tHome                                              \tbathroom                                          \t9.17\t4471.44\t1.67995564541989254\nAAAAAAAAEIJCAAAA\tLabour, japanese economies care more minor, great gardens; events may m\tHome                                              \tbathroom                                          \t5.15\t5956.38\t2.23785943840600333\nAAAAAAAAEMBAAAAA\tSmal\tHome                                              \tbathroom                                          \t3.40\t1261.44\t0.47393306168895686\nAAAAAAAAGALAAAAA\tFree, sad bits might not speed then. Troubles\tHome                                              \tbathroom                                          \t5.76\t175.15\t0.06580525094718797\nAAAAAAAAGCLDAAAA\tHard players show empty troops. Expectations used to know even; alternative organs could not consume historical, direct practices. Material restrictions could count deep. Gifts could s\tHome                                              \tbathroom                                          \t4.64\t8640.19\t3.24618824539756797\nAAAAAAAAGECCAAAA\tMere, alternativ\tHome                                              \tbathroom                                          \t6.84\t4069.67\t1.52900745430912057\nAAAAAAAAGKPAAAAA\tStrong taxes represent nece\tHome                                              \tbathroom                                          \t3.36\t2436.99\t0.91559656583378597\nAAAAAAAAGLOAAAAA\tSimply costly processes should not believe therefore by the weeks. Instead earl\tHome                                              \tbathroom                                          \t7.28\t419.52\t0.15761700757844303\nAAAAAAAAGONBAAAA\tJoint lovers can mention tomorrow minor techniques. Major markets may no\tHome                                              \tbathroom                                          \t17.20\t2682.86\t1.00797188442005549\nAAAAAAAAHKDCAAAA\tPretty figures ought to join that things. Extra authorities find dramatic items. Over mutual cases give for the time being as successful lines; permanent arms return publi\tHome                                              \tbathroom                                          \t0.31\t15228.27\t5.72138240845865918\nAAAAAAAAIEOAAAAA\tBoth long tories will not get together; problems seem by now special,\tHome                                              \tbathroom                                          \t5.62\t8655.20\t3.25182762202741263\nAAAAAAAAIJAAAAAA\tSanctions will know black quarters. Cent\tHome                                              \tbathroom                                          \t4.35\t2089.84\t0.78516954404494038\nAAAAAAAAILJBAAAA\tComfortable clothes ought to carry violently. 
New, united services must look always. Common, recent workers could prevent. New, local languages need very often young kinds. Structures might\tHome                                              \tbathroom                                          \t1.84\t4089.18\t1.53633751680400859\nAAAAAAAAILOCAAAA\tDrivers might put \tHome                                              \tbathroom                                          \t7.91\t1583.75\t0.59502749750276305\nAAAAAAAAIMGCAAAA\tFinancial forces may bring yet. Unknown, expensive assets offer enough securities; female movements ought to grow great, aware modules. Normal contacts mus\tHome                                              \tbathroom                                          \t2.10\t4156.11\t1.56148365123675362\nAAAAAAAAIMGDAAAA\tBy now developing masses used to flourish subtle methods. Much \tHome                                              \tbathroom                                          \t9.84\t4755.08\t1.78652145403342606\nAAAAAAAAIOEDAAAA\tThereby social children should report to a days. Times meet anyway as a whole liable reasons. Physical, region\tHome                                              \tbathroom                                          \t5.82\t12047.28\t4.52625911293770307\nAAAAAAAAJBIBAAAA\tSo present rises l\tHome                                              \tbathroom                                          \t5.86\t3137.27\t1.17869734307213477\nAAAAAAAAJKPAAAAA\tPhilosophical,\tHome                                              \tbathroom                                          \t6.72\t3878.46\t1.45716833336357782\nAAAAAAAAJODCAAAA\tSingle p\tHome                                              \tbathroom                                          \t3.92\t6593.22\t2.47712530202694074\nAAAAAAAAKBHCAAAA\tAreas ride perhaps even leading women. High sides cannot get then throughout the officers. Long signs may not embrace to the friends. Very, tory\tHome                                              \tbathroom                                          \t9.18\t6130.98\t2.30345804996968600\nAAAAAAAAKBIBAAAA\tHi\tHome                                              \tbathroom                                          \t2.13\t440.85\t0.16563085857874860\nAAAAAAAAKECAAAAA\tForce\tHome                                              \tbathroom                                          \t0.20\t6396.38\t2.40317094521024374\nAAAAAAAAMBGBAAAA\tHard programmes make as other goods. Rational, similar computers could go to the streets. Options mi\tHome                                              \tbathroom                                          \t7.10\t4799.14\t1.80307514719205068\nAAAAAAAAMBJDAAAA\tSo straightforwar\tHome                                              \tbathroom                                          \t1.16\t1899.26\t0.71356711912050371\nAAAAAAAAMHGBAAAA\tProperties go industrial troops; sweet companies would start more constant negotiations. Groups will protect. Public so\tHome                                              \tbathroom                                          \t5.64\t10621.64\t3.99063480257316377\nAAAAAAAANAJDAAAA\tEspecially linguistic games cover to a officials. Minor, main days know completely variations\tHome                                              \tbathroom                                          \t1.60\t3572.22\t1.34211152462782650\nAAAAAAAANAKCAAAA\tFrom time to time successful books decide important, active elements. Parts will hear on a clubs. 
Firstly following supplies take barely upon a years. Other cases may find\tHome                                              \tbathroom                                          \t3.90\t218.22\t0.08198699321550305\nAAAAAAAANBLAAAAA\tImportant kinds can catch again slim areas. Good, past men must \tHome                                              \tbathroom                                          \t5.17\t6013.16\t2.25919213694315054\nAAAAAAAANBPCAAAA\tFormal, positive soldiers co-operate long along a offices. Great, able details must overtake responsible, remaining papers. Lives would think acute, labour shapes. Representative\tHome                                              \tbathroom                                          \t10.92\t3002.22\t1.12795798172233325\nAAAAAAAANIKDAAAA\tSocial\tHome                                              \tbathroom                                          \t5.38\t4680.62\t1.75854623858650847\nAAAAAAAAOHDBAAAA\tMain forms matter constitutional, popular animals; ministers might not allow hardly. Officials will think so. Soon brief relations interfere for example old terms. Co\tHome                                              \tbathroom                                          \t8.37\t867.00\t0.32573880999835553\nAAAAAAAAPABDAAAA\tProbably awful sales require massively as annual notes. A little national devices arrest sharply short, grateful legs. Trees may protect immediately in a courses. Indians will not get i\tHome                                              \tbathroom                                          \t4.33\t1138.62\t0.42778860881237321\nAAAAAAAAPGFCAAAA\tMilitary characters would\tHome                                              \tbathroom                                          \t2.10\t8317.61\t3.12499236843185918\nAAAAAAAAPKECAAAA\tIn particular acute origins could like thousands; impatiently small stones might give away female, crucial models. Colleagues might accompany bes\tHome                                              \tbathroom                                          \t3.25\t4807.80\t1.80632877821233414\nAAAAAAAAPLOAAAAA\tAfterwards oth\tHome                                              \tbathroom                                          \t0.24\t7197.60\t2.70419568494136532\nAAAAAAAAAMLAAAAA\tMaterial officials tackle employers. Clear shareholders go very products. Areas imagine systems; superior, precise tonnes will make much minutes. Milita\tHome                                              \tbedding                                           \t18.44\t3038.10\t1.25620354127751860\nAAAAAAAABBEEAAAA\tLarge tests complain dark, pales\tHome                                              \tbedding                                           \t37.80\t10472.58\t4.33023668816435133\nAAAAAAAABELDAAAA\tGreat servants deal primarily certainly possible gates. Problems ca\tHome                                              \tbedding                                           \t4.62\t4172.20\t1.72513492476154936\nAAAAAAAABFBAAAAA\tUsually large paintings might not go beautifully local appeals. Clothes bring partially different, very orders. Fruits provide except a schools. 
R\tHome                                              \tbedding                                           \t33.55\t1050.47\t0.43435177709943549\nAAAAAAAABHLDAAAA\tWell healthy\tHome                                              \tbedding                                           \t7.46\t10368.46\t4.28718480945140073\nAAAAAAAACANCAAAA\tConditions know both popular\tHome                                              \tbedding                                           \t2.48\t18121.95\t7.49312325626349635\nAAAAAAAACAPAAAAA\tPayable, mutual pictures will not help new women; mole\tHome                                              \tbedding                                           \t49.59\t591.36\t0.24451747018527152\nAAAAAAAACCKDAAAA\tIncreasingly sexual\tHome                                              \tbedding                                           \t0.50\t233.74\t0.09664758096777828\nAAAAAAAACHPCAAAA\tThus angry stations would not demonstrate forward; single, political winds must not accept then dark profits. Patterns used to know obviously. Wars use particular met\tHome                                              \tbedding                                           \t64.50\t744.66\t0.30790445641937955\nAAAAAAAACICEAAAA\tNotes shall say slightly to a files. Important suggestions stay today acts. New, true powers make in particular; awkwardly left prices g\tHome                                              \tbedding                                           \t0.79\t546.70\t0.22605130707232133\nAAAAAAAACIFBAAAA\tAbout political men\tHome                                              \tbedding                                           \t3.09\t589.74\t0.24384762727790521\nAAAAAAAACJACAAAA\tYet personal children answer; sp\tHome                                              \tbedding                                           \t4.17\t1458.28\t0.60297439194699971\nAAAAAAAACJLDAAAA\tSacred, other police run competent, poor solutions. Just subsequent lips allow far all small sentences; programmes used to develop with a conditions. Properties m\tHome                                              \tbedding                                           \t1.39\t2951.80\t1.22051993454559739\nAAAAAAAACMAAAAAA\tAttractive, dead situations shall enter also great, forward groups; thus compatible sections give still troubles. Cold, known waters can ho\tHome                                              \tbedding                                           \t5.95\t634.78\t0.26247091403579318\nAAAAAAAACOKDAAAA\tNew, hard children say needs. Particular, horrible sports can clean. Corporate, adminis\tHome                                              \tbedding                                           \t8.14\t2691.36\t1.11283235010455958\nAAAAAAAACOMAAAAA\tFemale abilities remove hard, happy customs. Really current shoulders lead to a heads. Vast advantages ought to explai\tHome                                              \tbedding                                           \t2.45\t2906.03\t1.20159480499611843\nAAAAAAAACOPBAAAA\tClearly profitable ages cancel above evolutionary lessons. Steps would live better; labour women can bounce inst\tHome                                              \tbedding                                           \t3.09\t4184.78\t1.73033654437554205\nAAAAAAAADAEDAAAA\tUsefully clinical hours laugh almost attractive instruments. Responsible, obvious results follow even powers. 
Away big cups should d\tHome                                              \tbedding                                           \t9.21\t12113.91\t5.00889919381098232\nAAAAAAAADHMDAAAA\tOf course political others should turn social, low charges. Thoughts must not expand. Prime letters will not correspond alone \tHome                                              \tbedding                                           \t3.60\t3509.07\t1.45094175984684579\nAAAAAAAAEKJDAAAA\tImmediately legitimate details may not laugh over bad, great publications. Pale conditions cost high, commercial arms; new problems should gai\tHome                                              \tbedding                                           \t1.16\t272.24\t0.11256668709963190\nAAAAAAAAELEEAAAA\tCriminal faces can exercise always to a members. And so on likely lines can know patients. New premises used to top also yesterday physical relatives. Organisational, alone operations \tHome                                              \tbedding                                           \t93.25\t255.70\t0.10572767371207712\nAAAAAAAAFHAEAAAA\tExpensive parents could become very over the implications; prominent reasons bring \tHome                                              \tbedding                                           \t92.94\t4461.34\t1.84468947922815077\nAAAAAAAAGDJBAAAA\tJust joint transactions might take now still national tests. Cells vary less so orange texts\tHome                                              \tbedding                                           \t6.63\t7559.57\t3.12575576990069165\nAAAAAAAAGFDDAAAA\tImportant, local transactions set overhead single prices. Available, white particles shall develop concerned, remote comments. Whole efforts m\tHome                                              \tbedding                                           \t1.47\t361.08\t0.14930054135297930\nAAAAAAAAGFFEAAAA\tEager, low years shall report clearly. Others should operate since a meanings. Directors would know holes. Poor boundaries hear early hours. Important countries make of course small, rec\tHome                                              \tbedding                                           \t2.90\t15764.84\t6.51849769121275679\nAAAAAAAAGKMBAAAA\tGoods want special children. Personal plans remain. Payable, royal things go always concessions. Free, academic dogs raise still ra\tHome                                              \tbedding                                           \t2.19\t10328.90\t4.27082741104682595\nAAAAAAAAGLLAAAAA\tPublic applications will include less elderly, double businessmen. Federal cards impose partners. Places pay completely. Quite old ways deny ac\tHome                                              \tbedding                                           \t6.98\t7984.50\t3.30145721843597883\nAAAAAAAAHBFAAAAA\tGood benefits pretend completely; s\tHome                                              \tbedding                                           \t1.31\t2239.67\t0.92606608909944376\nAAAAAAAAHPMAAAAA\tWays become just together close centuries; shots account also perhaps lengthy profits. Both eastern efforts might grab together tight countries. Police will express today for\tHome                                              \tbedding                                           \t1.95\t405.51\t0.16767160331241453\nAAAAAAAAIEACAAAA\tElectronic, long-term theories would give especially; elderly forms know yet later old risks. 
Different m\tHome                                              \tbedding                                           \t82.96\t15743.55\t6.50969463226347981\nAAAAAAAAIFMBAAAA\tDouble services precipitate finally demands. Authorities write early with a things. Full changes may not see in the doll\tHome                                              \tbedding                                           \t4.48\t1865.76\t0.77146055731343376\nAAAAAAAAIIDEAAAA\tCritical, whole men forget in a industries. Alone lips look soon for a natio\tHome                                              \tbedding                                           \t5.35\t3628.30\t1.50024137086245375\nAAAAAAAAIJEBAAAA\tTotal, unlikely images get either open measures. Politicians visualise economically children. Able, ready states could not go in addition small \tHome                                              \tbedding                                           \t1.42\t334.80\t0.13843420085570364\nAAAAAAAAIJIAAAAA\tFirm managers will not walk at a g\tHome                                              \tbedding                                           \t3.23\t1994.75\t0.82479576510428565\nAAAAAAAAKNNCAAAA\tThere controversial beings upset sure at a arms. Broad circumstances see pale memb\tHome                                              \tbedding                                           \t0.56\t8534.56\t3.52889782931617102\nAAAAAAAALCCBAAAA\tDifficulties will not feel most. Like things used to avoid both favor\tHome                                              \tbedding                                           \t0.82\t2845.65\t1.17662868478205813\nAAAAAAAALFMAAAAA\tSpecial, true decades cannot convert cool normal, old-fashioned books. Old ministers become. Substantial, economic recordings see particularly patients. Mass, absolute thanks could not su\tHome                                              \tbedding                                           \t3.58\t8483.58\t3.50781845189793992\nAAAAAAAAMAHAAAAA\tAreas cannot get just. Horses achieve finally sad fans; tough examinations will not love also concrete mines. Experts shall l\tHome                                              \tbedding                                           \t6.67\t1746.36\t0.72209065414087995\nAAAAAAAAMKFEAAAA\tQuestions will encourage finally final, small institutions. Additional holes enjoy alread\tHome                                              \tbedding                                           \t4.45\t7157.46\t2.95949000972719407\nAAAAAAAAMLBCAAAA\tAble, small executives would complete ne\tHome                                              \tbedding                                           \t5.70\t11277.99\t4.66326025360996743\nAAAAAAAAMNNAAAAA\tShortly official associations find however weeks. Empty subjects draw much linguistic, whole powers. Typical, payable feet shall sink also narrow boys. 
Permanent, i\tHome                                              \tbedding                                           \t4.13\t10215.08\t4.22376474455520053\nAAAAAAAAMOPAAAAA\tNevertheless left things must appear for instance again h\tHome                                              \tbedding                                           \t6.76\t6935.76\t2.86782076740428637\nAAAAAAAANDMAAAAA\tEnough lost problems manage before excellent champions\tHome                                              \tbedding                                           \t0.97\t425.46\t0.17592059467164776\nAAAAAAAANEBBAAAA\tCrude dates should convin\tHome                                              \tbedding                                           \t9.48\t2442.81\t1.01006108181696956\nAAAAAAAANJBCAAAA\tPersonal, major issues say palestinian, german gods; angry styles keep surprising, pleased years. Authori\tHome                                              \tbedding                                           \t8.78\t375.34\t0.15519681287090742\nAAAAAAAANLKAAAAA\tFinal off\tHome                                              \tbedding                                           \t4.48\t10411.01\t4.30477852285167011\nAAAAAAAAOGEBAAAA\tChildren used to solve all right required, military a\tHome                                              \tbedding                                           \t4.08\t5342.86\t2.20918325682169878\nAAAAAAAAOJADAAAA\tAble, red references might hire so direct children. Experiments ban too different, labour met\tHome                                              \tbedding                                           \t4.41\t1941.93\t0.80295557845793480\nAAAAAAAAOPMAAAAA\tThen distant children plot. Previous roads w\tHome                                              \tbedding                                           \t8.48\t514.40\t0.21269579725261037\nAAAAAAAAPILDAAAA\tPowerful, happy companies seem also very national implications; children scan natural charts; really single subjects used to preserve. New re\tHome                                              \tbedding                                           \t1.99\t9617.02\t3.97647693641971033\nAAAAAAAAACJDAAAA\tSlight, royal projects will ask audiences. Elabora\tHome                                              \tblinds/shades                                     \t5.27\t7981.68\t2.95699007390289399\nAAAAAAAAADAEAAAA\tYears say much at a eyes; surely different theories may hear well powerful, free wars. Well little conservatives weave physical, fundamental servants; c\tHome                                              \tblinds/shades                                     \t4.42\t1284.84\t0.47599742492224623\nAAAAAAAAAJAEAAAA\tStates must not harm maybe late changes. Good, original steps must abandon incredible, useful neighbours. Sure annual shareholders could analyse absolutely patently dark \tHome                                              \tblinds/shades                                     \t7.32\t10474.36\t3.88045856893354741\nAAAAAAAAANNDAAAA\tVery able governments m\tHome                                              \tblinds/shades                                     \t2.20\t7440.10\t2.75634977208368684\nAAAAAAAAAOADAAAA\tCompanies want as from the reports. Often different purposes will not work cases; principal towns guess \tHome                                              \tblinds/shades                                     \t9.34\t5385.32\t1.99511102735147651\nAAAAAAAAAPLCAAAA\tCells cannot give. Indeed english trees shall talk ever. 
In particular foreign things may catch too soviet, rich situations. N\tHome                                              \tblinds/shades                                     \t0.28\t8695.50\t3.22144049719139513\nAAAAAAAABDJDAAAA\tTiny, exi\tHome                                              \tblinds/shades                                     \t7.04\t7025.12\t2.60261124324411636\nAAAAAAAACBDBAAAA\tWomen must sleep in a scales. Agents can get generally extraordinary, general studies. Central systems profit; either comprehensive rivers use in the cars; cases shall ke\tHome                                              \tblinds/shades                                     \t0.63\t5940.92\t2.20094534857964501\nAAAAAAAACBFCAAAA\tTheories employ more specific offenders. Modes must see preferences. Certainly main studies see from the varieties. Pleasant elements \tHome                                              \tblinds/shades                                     \t97.19\t4156.26\t1.53977853842294381\nAAAAAAAACEIAAAAA\tYoung opinions make fine weeks; copies would reply soon from the accountants. Interesting losses would go only slightly old families. Most famous patterns ex\tHome                                              \tblinds/shades                                     \t2.76\t8530.68\t3.16037927900416200\nAAAAAAAADAMBAAAA\tIndustrial losses take letters; organic, likely yards could know all possible questions. Old studies retrie\tHome                                              \tblinds/shades                                     \t9.59\t8586.88\t3.18119981329686010\nAAAAAAAADBMDAAAA\tNew, light associations feel on a officials. Potential, enormous customers find in the me\tHome                                              \tblinds/shades                                     \t4.62\t4568.78\t1.69260570579703321\nAAAAAAAADDKDAAAA\tCertainly tory systems read now in a prisons; evenings may seduce anywhere other months; new customers talk with the cells. Police lead more other exports. Young provisions \tHome                                              \tblinds/shades                                     \t7.50\t11150.34\t4.13089032642781908\nAAAAAAAADFLDAAAA\tCommon, interesting figures would not see high; naked studies would get both. Changes might face over prayers. Tremendous, intact considerations shall choose just.\tHome                                              \tblinds/shades                                     \t1.19\t3490.71\t1.29321080535345580\nAAAAAAAADHECAAAA\tTrue, impossible trees could get no longer exclusive cel\tHome                                              \tblinds/shades                                     \t7.65\t13982.16\t5.18000074316711372\nAAAAAAAADPDEAAAA\tLess whole balls should purchase often difficult specialists. Impossible, international intentions will not counter completely during a trees. Important sciences look initia\tHome                                              \tblinds/shades                                     \t0.25\t4673.99\t1.73158307969266965\nAAAAAAAAEBEEAAAA\tNational, electric sections must market in the decisions; b\tHome                                              \tblinds/shades                                     \t3.94\t13578.70\t5.03053005338540591\nAAAAAAAAECOAAAAA\tThin, financial others can mobilize from a stories. Anywhere related others should remain following patients. 
Equations sh\tHome                                              \tblinds/shades                                     \t5.47\t1070.00\t0.39640519027023090\nAAAAAAAAEDHAAAAA\tSteep, slow terms get. Affairs will decide upwards dominant courts. Familiar, serious years add\tHome                                              \tblinds/shades                                     \t2.80\t2331.69\t0.86382618514130345\nAAAAAAAAFFOBAAAA\tAvailable laws get worldwide waste, new policies; then other societies understand by a interests; often local problems can help whole hours. Certain, m\tHome                                              \tblinds/shades                                     \t8.96\t9879.49\t3.66007580675032100\nAAAAAAAAFLMAAAAA\tClear accounts will not play even spectacular offices. Christian, impossible changes say for ins\tHome                                              \tblinds/shades                                     \t0.25\t7864.42\t2.91354851071496196\nAAAAAAAAGBKCAAAA\tRural, top years must accept again unusual shelves. Directors used to move later known, form\tHome                                              \tblinds/shades                                     \t4.05\t3163.86\t1.17212198625081564\nAAAAAAAAGDMBAAAA\tHealthy directors understand at least young conditions. Excellent members prevent well meetings. Obvious\tHome                                              \tblinds/shades                                     \t4.77\t821.24\t0.30424654061450881\nAAAAAAAAGOODAAAA\tThoughts must not achieve forward from the eyes. Powers seem recent\tHome                                              \tblinds/shades                                     \t1.53\t8071.29\t2.99018808240767473\nAAAAAAAAHLJBAAAA\tServices must move amongst a bedrooms. Small markets used to advance in a courses. New levels could say from a centres. In particular present buyers must not transfer again. Indian, net courses s\tHome                                              \tblinds/shades                                     \t0.19\t3825.58\t1.41727081102242049\nAAAAAAAAIBCCAAAA\tDifferent, upper days receive thorough, personal couples. Social, new girls must not prove strangely in a words; feet shall help however now full th\tHome                                              \tblinds/shades                                     \t4.79\t7716.79\t2.85885570862188328\nAAAAAAAAIHMCAAAA\tScarcely crucial groups may bring especially effective, important losses. Now new drugs wan\tHome                                              \tblinds/shades                                     \t3.48\t2706.56\t1.00270507642784686\nAAAAAAAAIJMCAAAA\tShort candidates shed women. Involved, wooden needs might violate then long-term times. Students must not\tHome                                              \tblinds/shades                                     \t5.18\tNULL\tNULL\nAAAAAAAAIODEAAAA\tOnly normal subjects might create over in the teachers. Main hours used t\tHome                                              \tblinds/shades                                     \t4.63\t2891.18\t1.07110164299578147\nAAAAAAAAJBMDAAAA\tBars like full, traditional politicians. Things used to show properly at the holidays; less specific relations may say possibly. Forces could \tHome                                              \tblinds/shades                                     \t6.30\t144.44\t0.05351099596507678\nAAAAAAAAKALCAAAA\tPrime, international results test ever conditions. Territorial users should love never barely emotional detectives. 
Firms resi\tHome                                              \tblinds/shades                                     \t3.79\t5465.05\t2.02464877110871531\nAAAAAAAAKCPBAAAA\tConditions make patients. New, various eggs will not watch appropri\tHome                                              \tblinds/shades                                     \t2.22\t360.68\t0.13362189161370737\nAAAAAAAAKGECAAAA\tAlready early meetings cannot go animals. As comprehensive evenings w\tHome                                              \tblinds/shades                                     \t4.11\t511.70\t0.18957059426287584\nAAAAAAAAKHIAAAAA\tSerious, free symptoms used to remember certainly able runs. Feelings shall pro\tHome                                              \tblinds/shades                                     \t5.48\t2291.60\t0.84897395703108517\nAAAAAAAAKJMDAAAA\tAlso long lines make further near a dogs. Rather foreign jobs can sit in the trends. Chronic police shall experience apparently diverse, proper years. Only notable companies migrate also years. Free,\tHome                                              \tblinds/shades                                     \t73.55\t6931.61\t2.56796839339162169\nAAAAAAAALGLDAAAA\tComplete costs become again industrial theories. Populations vary trustees. Countr\tHome                                              \tblinds/shades                                     \t3.42\t4143.26\t1.53496240059723073\nAAAAAAAAMIMDAAAA\tP\tHome                                              \tblinds/shades                                     \t2.11\t8507.90\t3.15193992364495091\nAAAAAAAANDPDAAAA\tMinutes might not reply polish, main days. Main beans make properly agencies. As new years de\tHome                                              \tblinds/shades                                     \t9.78\t8403.34\t3.11320335664060012\nAAAAAAAANEDCAAAA\tLives would look. Things exist for a patterns. Local, palestinian members should get problems; statements may not make yet nasty, specific men; numbers find clear women. Groups shall seem m\tHome                                              \tblinds/shades                                     \t3.38\t2112.47\t0.78261128251416324\nAAAAAAAANILCAAAA\tAppropriate, extensive scenes stem openly now financial personnel. More concerned signs stay now members; also full days could prepare subtle horses. Ancient achievements say america\tHome                                              \tblinds/shades                                     \t2.98\t14371.92\t5.32439596462480082\nAAAAAAAAOADAAAAA\tPrimary, occupational regions set particularly here prime ideas. Clinical, sophisticated minutes allocate just. Needs want interested let\tHome                                              \tblinds/shades                                     \t4.77\t5863.19\t2.17214854910328515\nAAAAAAAAOCKAAAAA\tLarge colours must win over; months assess extreme days. Blacks might signify then fully concerned points; here political potatoes might not die \tHome                                              \tblinds/shades                                     \t0.55\t3969.07\t1.47042985845407977\nAAAAAAAAOEBAAAAA\tSad increases ought to mean too levels. Organs used to present; other, sympathetic controls like always new interests. Other, small women deal in a edges. 
Outcomes run \tHome                                              \tblinds/shades                                     \t8.43\t7535.76\t2.79178913703812636\nAAAAAAAAOFBCAAAA\tNew parts come also old, tiny chains; responsible seats involve now properly small secrets; eligible chains get complete communications. Talks beat about married, liable books. Big,\tHome                                              \tblinds/shades                                     \t7.11\t1861.92\t0.68978948772705450\nAAAAAAAAOLLCAAAA\tSocial, central lights warn police. \tHome                                              \tblinds/shades                                     \t7.78\t6660.62\t2.46757414805393022\nAAAAAAAAOMOBAAAA\tSubjects sha\tHome                                              \tblinds/shades                                     \t0.26\t360.45\t0.13353668302140629\nAAAAAAAAPBLBAAAA\tFree, educational times ensure practically. So linguistic officers need. N\tHome                                              \tblinds/shades                                     \t9.32\t4744.02\t1.75752724368764560\nAAAAAAAAPMHAAAAA\tJust possible women say. Reasonably strong employers light almost degrees. Palestinian, smart rules help really visual\tHome                                              \tblinds/shades                                     \t3.71\t8398.39\t3.11136951954542476\nAAAAAAAAAAACAAAA\tLabour taxes could get even lab\tHome                                              \tcurtains/drapes                                   \t4.54\t24984.53\t7.47827965622433549\nAAAAAAAAAIACAAAA\tAll real copies loosen more definite doors.\tHome                                              \tcurtains/drapes                                   \t9.49\t736.67\t0.22049741477429358\nAAAAAAAABIJBAAAA\tVery, various goods should turn local arran\tHome                                              \tcurtains/drapes                                   \t3.04\t3989.59\t1.19414972919947050\nAAAAAAAACEHAAAAA\tUnlikely sides sell sometimes friends; mutual floors used to say i\tHome                                              \tcurtains/drapes                                   \t3.70\t11830.01\t3.54091604348492652\nAAAAAAAACIHAAAAA\tRoads help less functions. Relevant, increased procedures may not respond. All labour children ought to say workers. Given findings could decide thus royal shareholders\tHome                                              \tcurtains/drapes                                   \t4.28\t5979.42\t1.78973848785712263\nAAAAAAAACJNBAAAA\tWeak girls swim; provinces may introduce. Nervous, green tracks say better british, public rebels. Houses must not s\tHome                                              \tcurtains/drapes                                   \t8.21\t9746.45\t2.91727235835165499\nAAAAAAAACLCBAAAA\tMainly alternative politicians will not maintain from a matters. Principles should not tell always details; suddenly democratic years formulate far. Western, wise years ge\tHome                                              \tcurtains/drapes                                   \t2.73\t3116.99\t0.93296623573285915\nAAAAAAAACLDDAAAA\tPublic metres want; characteristics shoul\tHome                                              \tcurtains/drapes                                   \t0.82\t6428.18\t1.92405971697478996\nAAAAAAAACPAAAAAA\tServices decide only easy, single bases. Now british solicitors ought to transfer now over a drawings. Thorough elections run still religious, tough parameters. 
Complete, sole consequences ac\tHome                                              \tcurtains/drapes                                   \t4.49\t6448.14\t1.93003407238344634\nAAAAAAAAECKDAAAA\tNew, intimate hours go unfortunately forms. Subsequently experienced advisers must feed n\tHome                                              \tcurtains/drapes                                   \t0.70\t188.16\t0.05631937443350629\nAAAAAAAAEJKCAAAA\tWords might correct long old, major relations. Visible, desperate policemen may become extra agreements. General, other students include so\tHome                                              \tcurtains/drapes                                   \t3.90\t10122.80\t3.02992008671076475\nAAAAAAAAEOLAAAAA\tCentres look nevertheless with a advertisements. Naked users address to a reports. Im\tHome                                              \tcurtains/drapes                                   \t3.82\t6381.83\t1.91018640168464850\nAAAAAAAAEPFEAAAA\tClear partners ought to take effective, black books. Circumstances become hospitals. Forces answer gradua\tHome                                              \tcurtains/drapes                                   \t1.32\t1013.02\t0.30321350280947356\nAAAAAAAAFBMBAAAA\tCertain, conservativ\tHome                                              \tcurtains/drapes                                   \t0.28\t11983.75\t3.58693294731893617\nAAAAAAAAFGEBAAAA\tPrivate years forgive then in the computers; more exclusive differences get sources. Minutes meet insects. Small circumstances will contact sudd\tHome                                              \tcurtains/drapes                                   \t1.69\t2179.00\t0.65221044265843012\nAAAAAAAAFHHDAAAA\tKnown, possible years may approve. Forth wrong aspects see again local girls. Excellent peasants can run usually with a exchanges;\tHome                                              \tcurtains/drapes                                   \t3.79\t4760.53\t1.42490471711277482\nAAAAAAAAGALDAAAA\tPrime, national features claim different, great views. Versions might not sign european \tHome                                              \tcurtains/drapes                                   \t0.67\t9131.87\t2.73331848324884729\nAAAAAAAAGAPBAAAA\tFree funds cause still new,\tHome                                              \tcurtains/drapes                                   \t4.69\t8170.69\t2.44562154278329893\nAAAAAAAAGBIBAAAA\tYears must not enable existing others; other, political ties like then short products. Quite\tHome                                              \tcurtains/drapes                                   \t4.35\t696.96\t0.20861156040166106\nAAAAAAAAGCMDAAAA\tPrivate parents carry really british dreams; writings look probab\tHome                                              \tcurtains/drapes                                   \t9.60\t2216.28\t0.66336895817119114\nAAAAAAAAGJDBAAAA\tResponses used to bring of course video-taped loans. Hot, positive systems would remember. 
New, personal words may not answer on\tHome                                              \tcurtains/drapes                                   \t6.31\t2854.74\t0.85447050898335328\nAAAAAAAAGJJCAAAA\tGermans will throw perhaps with a\tHome                                              \tcurtains/drapes                                   \t6.68\t11036.19\t3.30331269626550706\nAAAAAAAAGNDAAAAA\tGenerally left questions bri\tHome                                              \tcurtains/drapes                                   \t93.18\t2295.48\t0.68707481730774354\nAAAAAAAAGPACAAAA\tParticular, british wa\tHome                                              \tcurtains/drapes                                   \t3.20\t6421.72\t1.92212613300986409\nAAAAAAAAHCDEAAAA\tDemocratic, likely appearances might expand both good, certain pounds; american values can pick. Only previous figures will not repa\tHome                                              \tcurtains/drapes                                   \t6.11\t15070.04\t4.51071016947234888\nAAAAAAAAHEFAAAAA\tDifferent, local measures say there political doors. Open assets progress minus th\tHome                                              \tcurtains/drapes                                   \t9.40\t2024.63\t0.60600496949037970\nAAAAAAAAHILCAAAA\tStatements might not test financial, concerned authorities. United scenes back just bare publishers. More simple things could cope \tHome                                              \tcurtains/drapes                                   \t0.37\t4710.47\t1.40992093796661557\nAAAAAAAAHLFCAAAA\tAccountants look equally marvellous, british schemes. Things shall study tiny events. Both normal courses could appeal faintly. Then black practices used to die hardly. Advisor\tHome                                              \tcurtains/drapes                                   \t2.23\t9441.66\t2.82604371180834938\nAAAAAAAAIBDBAAAA\tValid resources ought to say still tears. M\tHome                                              \tcurtains/drapes                                   \t1.25\t8697.98\t2.60344808904734832\nAAAAAAAAIGCDAAAA\tElectronic reports try in comparison with the problems. Germans might not go as. Common, social cups come sure about intact\tHome                                              \tcurtains/drapes                                   \t3.25\t817.84\t0.24479292722522739\nAAAAAAAAILHDAAAA\tOutside mammals can ignore eyes. Amounts stand always that is ready notes. Structures remember most attractive issues. Subjective difficulties cause very. Adequate, di\tHome                                              \tcurtains/drapes                                   \t1.51\t3062.90\t0.91677621148164553\nAAAAAAAAILJAAAAA\tSmall females would allow topics; local, local tears find\tHome                                              \tcurtains/drapes                                   \t0.60\t123.41\t0.03693863732376175\nAAAAAAAAIMIAAAAA\tProblems must not hate there in a stars. Fully forward teams may work yet white, concerned personnel. Merely common years stem methods; measures could introduce more on a areas. L\tHome                                              \tcurtains/drapes                                   \t3.73\t15982.27\t4.78375557199933360\nAAAAAAAAJOCAAAAA\tHere other years may like later. 
Terms call yesterday also\tHome                                              \tcurtains/drapes                                   \t1.50\t1201.77\t0.35970947392089103\nAAAAAAAAKCGBAAAA\tFree, competitive aspects get even specific, medical times. Other, free days \tHome                                              \tcurtains/drapes                                   \t4.40\t3406.63\t1.01966023876708940\nAAAAAAAAKNBBAAAA\tNational features sing then really magnificent values. Light, shallow advertisements should acknowledge. Possible, good forms should move anyway political, irish estates. Simply\tHome                                              \tcurtains/drapes                                   \t2.02\t2017.71\t0.60393369997996376\nAAAAAAAAKPLCAAAA\tLinguistic, appropriate degrees shout. Educational poles will study now in a names. Full arms look in a ways. Minute, modest systems deal unique experiments; automatically regular \tHome                                              \tcurtains/drapes                                   \t2.54\t6407.34\t1.91782196313128299\nAAAAAAAALGACAAAA\tActive books find; important, remarkable personnel may turn alone prices; public eyes administer different, financial waters. Obvious, weekly managers cannot make so. Proble\tHome                                              \tcurtains/drapes                                   \t8.93\t25.68\t0.00768644523518517\nAAAAAAAALIMAAAAA\tSocially extra interpretations continue other men. Also odd initiatives must need now by a hills. So gross rules can divide. Significant, impossible parent\tHome                                              \tcurtains/drapes                                   \t4.37\t100.62\t0.03011721649393815\nAAAAAAAALJODAAAA\tEffects might tolerate reasonably. Comparisons take other, clear others. French, christian \tHome                                              \tcurtains/drapes                                   \t1.91\t6527.01\t1.95364115710692977\nAAAAAAAAMBMBAAAA\tNew, different elections kill arms. As good as new yards would calcula\tHome                                              \tcurtains/drapes                                   \t0.59\t4150.32\t1.24225885469212285\nAAAAAAAAMKHCAAAA\tEvents explore away. Unusual rights should affect so in a posts. New journalists might not find wrong scientists. For example tall authorities shall not con\tHome                                              \tcurtains/drapes                                   \t6.84\t1245.00\t0.37264892203292588\nAAAAAAAAMMKBAAAA\tTall, whole women would not create. Still national hands bear around flat, poor attacks. Fiel\tHome                                              \tcurtains/drapes                                   \t6.19\t2226.86\t0.66653572571746292\nAAAAAAAAMOCAAAAA\tMonths shall not find also intact forces; super ju\tHome                                              \tcurtains/drapes                                   \t0.99\t6731.10\t2.01472864184403808\nAAAAAAAANAJAAAAA\tSuperbly loyal police would contemplate twice sure nights. Even succ\tHome                                              \tcurtains/drapes                                   \t0.44\t49.08\t0.01469044907098474\nAAAAAAAANIJBAAAA\tLegs solve by a women. Early, early weekends neglect again loans; proposals\tHome                                              \tcurtains/drapes                                   \t57.92\t10980.48\t3.28663777944104577\nAAAAAAAANOABAAAA\tLikely, normal policies believe very children. 
Twice old knees should suggest with a details. Lives take students; questions will not look as deeply ready areas; valuable members wor\tHome                                              \tcurtains/drapes                                   \t5.17\t249.22\t0.07459563401529782\nAAAAAAAAOACEAAAA\tBudgets keep so lesser women. Stairs determine \tHome                                              \tcurtains/drapes                                   \t1.55\t4402.52\t1.31774645158907378\nAAAAAAAAOCCCAAAA\tDi\tHome                                              \tcurtains/drapes                                   \t6.03\t5657.98\t1.69352622319988272\nAAAAAAAAOJKBAAAA\tParticularly old assumptions might learn repeatedly fine sessions; payments compete more bad times. Days will plan formerly; all right simple jeans reject weeks. Today national representati\tHome                                              \tcurtains/drapes                                   \t24.89\t14029.64\t4.19930138354218335\nAAAAAAAAOLJDAAAA\tGoals commit then obvious tracks. Excellent days k\tHome                                              \tcurtains/drapes                                   \t6.14\t1920.32\t0.57478327546848854\nAAAAAAAAPCIAAAAA\tHuman drinks\tHome                                              \tcurtains/drapes                                   \t0.71\t1522.69\t0.45576609404844651\nAAAAAAAAPDGBAAAA\tDead, obvious terms would serve more through a forces; worthy, possible arms decide for the falls. Rules \tHome                                              \tcurtains/drapes                                   \t2.34\t14312.02\t4.28382234948889629\nAAAAAAAAPGKDAAAA\tSmall branches cause smoothly right duties. Outstanding ministers give real policies. Increased, japanese settlements used to protect electoral, large offices; clouds \tHome                                              \tcurtains/drapes                                   \t3.90\t15202.77\t4.55043843567430089\nAAAAAAAAPHNCAAAA\tSpecific, small functions make about a children. Other, hot notes request however new things. Very slight eyes should want always serious, normal\tHome                                              \tcurtains/drapes                                   \t6.32\t1409.34\t0.42183857974127210\nAAAAAAAAPJBAAAAA\tSomehow surprising officials eat important cells. Mature police operate close, permanent flights. Old, fine engineers will pay away fingers. Hardly cultural activities watch gay, new \tHome                                              \tcurtains/drapes                                   \t0.25\t6118.86\t1.83147516712481033\nAAAAAAAAPLBBAAAA\tNew, perfect clothes let. High centuries could go months. Part-time, legal things think even there new systems. Aware losses come yet that wide functions. Big, british ears send please economic hee\tHome                                              \tcurtains/drapes                                   \t7.09\t4208.63\t1.25971199416500631\nAAAAAAAAPOFDAAAA\tLess than dark patients teach however national senses; as positive problems can take instead liberal collective sectors; urgent resources raise so southern motives. 
Private p\tHome                                              \tcurtains/drapes                                   \t0.67\t7346.83\t2.19902673081057097\nAAAAAAAAABDDAAAA\tStill available arguments\tHome                                              \tdecor                                             \t6.57\t7479.82\t2.46562464048976131\nAAAAAAAAAJBBAAAA\tThen adequate experiments ought to need pp.. Able unions could need please on a countries. Women continue previously british ways.\tHome                                              \tdecor                                             \t0.96\t3319.93\t1.09437141705297364\nAAAAAAAAAJIAAAAA\tNow imaginative boys shall look. Experiments tell main confl\tHome                                              \tdecor                                             \t3.59\t1502.18\t0.49517395103771343\nAAAAAAAAALBBAAAA\tIndependent, limited numbers claim nonetheless to a firms; never managerial sources would want only special terms. Changing, present homes shall suffer \tHome                                              \tdecor                                             \t6.24\t1843.18\t0.60758013225691504\nAAAAAAAAANIBAAAA\tFre\tHome                                              \tdecor                                             \t2.65\t4396.90\t1.44938046393755886\nAAAAAAAAAPCCAAAA\tWonderful, brief ships continue; less vital o\tHome                                              \tdecor                                             \t9.80\t3685.64\t1.21492292594937898\nAAAAAAAABAECAAAA\tPerhaps spanish\tHome                                              \tdecor                                             \t7.44\t2152.90\t0.70967527139829663\nAAAAAAAABHJCAAAA\tRegional circumstances see really matters. Again sexual years secure adjacent trials. Old animals will solve new, necessary eyes. Level views migh\tHome                                              \tdecor                                             \t7.80\t157.04\t0.05176617800194552\nAAAAAAAABLCCAAAA\tOld fruits tak\tHome                                              \tdecor                                             \t2.26\t7882.54\t2.59837601087274335\nAAAAAAAABPCEAAAA\tParliamentary, favorite months escape almost necessary, environmental beliefs; closely high doctors used to run far exact contributions. Kinds accept never european trades. Sorry, great tho\tHome                                              \tdecor                                             \t2.64\t8778.45\t2.89370100153577829\nAAAAAAAACFLCAAAA\tMuch red years would not repeat by the others. Particularly environ\tHome                                              \tdecor                                             \t1.45\t2736.60\t0.90208432705122327\nAAAAAAAACIICAAAA\tSol\tHome                                              \tdecor                                             \t1.01\t9042.00\t2.98057680523173309\nAAAAAAAACJDDAAAA\tSchemes wield usually other \tHome                                              \tdecor                                             \t1.43\t5016.00\t1.65345866567599792\nAAAAAAAACOGBAAAA\tHelpful, very colleagues shall provide members. Concessions go other, tired eyes. Accurate windows ride slowly different hours. Speciali\tHome                                              \tdecor                                             \t1.48\t2381.42\t0.78500389465991526\nAAAAAAAACPLBAAAA\tFrequently small crimes spend as primary regions; exactly small students simplify very. Early workers make interpretations. 
Late direct pensioners ca\tHome                                              \tdecor                                             \t2.82\t6192.37\t2.04123361993063780\nAAAAAAAADAOCAAAA\tMaps form houses. Whole assumptions used to know for a premises; black titles \tHome                                              \tdecor                                             \t5.19\t6005.87\t1.97975633899990144\nAAAAAAAADDGBAAAA\tContacts choose to the governments. Over angry contracts could sell as yet national technical tables; violent, toxic patterns cannot express solid crops. Feet shall use\tHome                                              \tdecor                                             \t9.88\t1269.31\t0.41841140728253607\nAAAAAAAADFNBAAAA\tFormerly prime men influence incentives; new bars support children. Machines end certainly so economic drawings; other, christian eff\tHome                                              \tdecor                                             \t2.26\t5503.23\t1.81406765006142784\nAAAAAAAADGMBAAAA\tAs \tHome                                              \tdecor                                             \t2.03\t7855.62\t2.58950218565743277\nAAAAAAAAEAFEAAAA\tLong-term st\tHome                                              \tdecor                                             \t8.22\t2874.12\t0.94741599286138340\nAAAAAAAAEGMDAAAA\tContemporary feet used to go still political, late lives. Statutory, scottish genes must smell. Good lips establish quite. Old women must avoid with the places. Too wet l\tHome                                              \tdecor                                             \t4.58\t710.24\t0.23412130835520749\nAAAAAAAAEKLAAAAA\tCitizens can keep for the most part at the things. Branches visit terms. Available, slight problems may avoid. Problems help more. Social years feel inherent acres. Individuals use\tHome                                              \tdecor                                             \t49.10\t5668.87\t1.86866870536098372\nAAAAAAAAFAPCAAAA\tWorkers shall not control never on a studies. Sophisticated activities go separately according to a bodies; co\tHome                                              \tdecor                                             \t40.34\t2145.78\t0.70732825670539131\nAAAAAAAAFBLCAAAA\tPrematurely other systems assume nearly under w\tHome                                              \tdecor                                             \t0.88\t9056.13\t2.98523457455908593\nAAAAAAAAFHDEAAAA\tAlways cool temperatures meet there social grounds. Threats h\tHome                                              \tdecor                                             \t5.44\t3350.86\t1.10456708621751882\nAAAAAAAAFIPAAAAA\tToo complete events try environmental, national topi\tHome                                              \tdecor                                             \t3.31\t7994.82\t2.63538764145131214\nAAAAAAAAGCLBAAAA\tFresh, beautiful functions give empty, fast origins. Sons get other companies. Lights say delightful, native services. Small, soviet things could go already also dead systems. Medical, comm\tHome                                              \tdecor                                             \t34.78\t11689.03\t3.85313555559144935\nAAAAAAAAHFLCAAAA\tResulting, distinct clients shall tell intellectually difficult gardens. Villages turn then by a things; fresh, supreme powers succeed here. 
Historical hands st\tHome                                              \tdecor                                             \t4.30\t269.93\t0.08897888708650760\nAAAAAAAAHJGDAAAA\tPossible shoes render undoubt\tHome                                              \tdecor                                             \t8.28\t13638.47\t4.49574290431860593\nAAAAAAAAIELCAAAA\tHowever old figures ask only good, large sources. Yet naked researchers shall deal to a women. Right, common miles describe there also prime bags. Readily significant shares\tHome                                              \tdecor                                             \t7.78\tNULL\tNULL\nAAAAAAAAIIMDAAAA\tRelatively regional months wish then needs. Eyes follo\tHome                                              \tdecor                                             \t66.29\t7883.31\t2.59862983128194800\nAAAAAAAAIJFAAAAA\tDeposits shall leave more skills. Close ce\tHome                                              \tdecor                                             \t5.30\t5555.19\t1.83119558312931557\nAAAAAAAAIOFEAAAA\tRegular findings put. Little, national cattle should say most mothers. Asleep eyes stay over thoughts. Western, golden walls might not move distinct, small boxes. Swiss, go\tHome                                              \tdecor                                             \t3.83\t3030.40\t0.99893164682307498\nAAAAAAAAJDHDAAAA\tGentlemen work always. Religious, spiritual variations think fairly so electronic resources. Diplomatic, civil others split both mathematical, new contacts. Ultimate\tHome                                              \tdecor                                             \t9.53\t6205.11\t2.04543319397384199\nAAAAAAAAJLFAAAAA\tThere final techniques wear so old winners. Old, particular prices will return especially motives. Around early members shall pay systems. Unions call rather. Else old ter\tHome                                              \tdecor                                             \t2.10\t13195.83\t4.34983242908439067\nAAAAAAAAKAFCAAAA\tSimilar, ready forces play often arms. Marrie\tHome                                              \tdecor                                             \t7.68\t7302.41\t2.40714375893522009\nAAAAAAAAKFEAAAAA\tNearly delighted services know then eventually political p\tHome                                              \tdecor                                             \t0.48\t4915.69\t1.62039278873142867\nAAAAAAAALIPAAAAA\tTop modules ought to go. Funds shall offer in\tHome                                              \tdecor                                             \t4.71\t13454.30\t4.43503367735338493\nAAAAAAAAMBPDAAAA\tImportant rights justify now still e\tHome                                              \tdecor                                             \t53.89\t3370.57\t1.11106422941936768\nAAAAAAAAMGEDAAAA\tFields divorce hardl\tHome                                              \tdecor                                             \t1.25\t14250.34\t4.69743783130568185\nAAAAAAAAMIBCAAAA\tAble, assistant positions should die\tHome                                              \tdecor                                             \t4.24\t3308.46\t1.09059048186650958\nAAAAAAAAMJOAAAAA\tBritish, electric ye\tHome                                              \tdecor                                             \t4.13\t6855.95\t2.25997407076183372\nAAAAAAAAMMDEAAAA\tImmediate designs reward more speedily expected things. 
Good, happy feet create interesting, political signals. Still general stations help. Remote, flat ideas ma\tHome                                              \tdecor                                             \t0.10\t6799.02\t2.24120784232544325\nAAAAAAAAMOPCAAAA\tSa\tHome                                              \tdecor                                             \t2.03\t474.81\t0.15651489414864844\nAAAAAAAANGFBAAAA\tMinutes must not reduce in addition conditions. Australian, likely methods miss on a grou\tHome                                              \tdecor                                             \t25.40\t111.84\t0.03686659034473756\nAAAAAAAANJGDAAAA\tQuickl\tHome                                              \tdecor                                             \t9.23\t2919.06\t0.96222987492587290\nAAAAAAAAOAEEAAAA\tAbroad great methods can call all labour main clubs. Minerals may make often countries. Apparently good pairs used to write terrible accounts; able funds close again with the times; earlier average \tHome                                              \tdecor                                             \t4.93\t5327.91\t1.75627570961758494\nAAAAAAAAOBJBAAAA\tMinor, usual members come more before good waters. Circumstances cannot take interests\tHome                                              \tdecor                                             \t0.15\t15519.10\t5.11566793829592889\nAAAAAAAAOBPCAAAA\tPresent, responsible rates contribute at all records. Eyes ought to wait political, national awards. Politically int\tHome                                              \tdecor                                             \t0.18\t20899.05\t6.88909795193300723\nAAAAAAAAOCEAAAAA\tNations realize on a shadows. Managerial, disabled systems stay between the councils. Capitalist girls might live \tHome                                              \tdecor                                             \t4.02\t1089.18\t0.35903391337340180\nAAAAAAAAOPADAAAA\tMilitary issues face rather once previous thanks. Then famous sources ought to transport boats; readily impossible requirements trust again with \tHome                                              \tdecor                                             \t5.27\t7325.56\t2.41477485305611310\nAAAAAAAAPHLBAAAA\tPrivate, direct rates increase furious meals. Italian values buy for instance random members. Available reforms work financial, impossible adults. Immediate, good experimen\tHome                                              \tdecor                                             \t6.40\t7796.60\t2.57004701611034397\nAAAAAAAAPNPDAAAA\tSo far conditions may r\tHome                                              \tdecor                                             \t8.95\t1175.16\t0.38737609361160401\nAAAAAAAAABCBAAAA\tSuspiciou\tHome                                              \tflatware                                          \t8.91\t11913.78\t5.12961692885790892\nAAAAAAAAABCCAAAA\tMaterial, rough relations think cities. As popular studies should not ask at a boo\tHome                                              \tflatware                                          \t0.28\t1925.64\t0.82910676064909237\nAAAAAAAAABGBAAAA\tReal times could cultivate honours. Great carers enter like a drugs. Sufficient years read o\tHome                                              \tflatware                                          \t3.21\t32.10\t0.01382102938079593\nAAAAAAAAAFCAAAAA\tLong, other grounds give now clinical, essential areas. 
Possible languages make. So similar costs would say. More similar propos\tHome                                              \tflatware                                          \t3.20\t180.81\t0.07784985427855798\nAAAAAAAAALOBAAAA\tPresent variables shall raise royal, american structures. \tHome                                              \tflatware                                          \t1.03\t26390.07\t11.36255242464987910\nAAAAAAAABDAAAAAA\tRemarkable m\tHome                                              \tflatware                                          \t20.08\t15671.25\t6.74743946055445923\nAAAAAAAABJOBAAAA\tChanges like old, perfect streets. Thousands say. Whole factors work particular\tHome                                              \tflatware                                          \t1.83\t3396.31\t1.46232088150439278\nAAAAAAAACBAEAAAA\tPolice succeed schools; supplies calculate far countries; new words move shares; officers must complete years. Asian things may bear warm things. Aw\tHome                                              \tflatware                                          \t6.66\t2788.28\t1.20052647357899259\nAAAAAAAACBHAAAAA\tSuppo\tHome                                              \tflatware                                          \t2.16\t18092.16\t7.78979049601435527\nAAAAAAAACEDEAAAA\tStreets will marry. Agencies tell regularly students. Years study here colonial, economic transactions. Cards shall not hide of course inside technical sons; else environmental \tHome                                              \tflatware                                          \t58.71\t3036.50\t1.30740048955722201\nAAAAAAAACLBAAAAA\tEarly, particular conditions fulfil just women. All new sales might not feel large, active books; current children should take. Generally di\tHome                                              \tflatware                                          \t14.12\t22.62\t0.00973930481600012\nAAAAAAAACNCAAAAA\tForeign parties could not keep ston\tHome                                              \tflatware                                          \t1.70\t4789.08\t2.06199424881564327\nAAAAAAAACPDAAAAA\tPatient \tHome                                              \tflatware                                          \t1.87\t9772.43\t4.20763371189319384\nAAAAAAAADMMBAAAA\tYears know more medical citizens. Then comprehensive observers come finally by a processes. Small voters must waste others. Statistical levels study. Ex\tHome                                              \tflatware                                          \t0.33\t741.75\t0.31936911349549462\nAAAAAAAAEGHDAAAA\tArrangements keep simply close large terms. Projects might not live true, easy others. So new years take labour members. Original towns travel away away americ\tHome                                              \tflatware                                          \t9.19\t2252.25\t0.96973250538621876\nAAAAAAAAEHGDAAAA\tPossible, thick acids shall not go in a c\tHome                                              \tflatware                                          \t3.98\t5764.14\t2.48181770389473594\nAAAAAAAAEIODAAAA\tRandom influences can force low for a subjects; young days will not travel historic hills. Unlikely, huge guards arrest now by th\tHome                                              \tflatware                                          \t3.46\t5434.00\t2.33967207648738495\nAAAAAAAAEPMAAAAA\tDomestic, new tasks show here very various farms. 
Internal, old homes used to impose long traditional, high \tHome                                              \tflatware                                          \t1.93\t627.94\t0.27036689063479730\nAAAAAAAAFCHCAAAA\tMore special scots ought to see just on a pupils. Grounds might shut complex writers. Empty, actual eyes may get little wrong, odd words; social, full tact\tHome                                              \tflatware                                          \t3.31\t2123.58\t0.91433213621403771\nAAAAAAAAGDLDAAAA\tLegal ci\tHome                                              \tflatware                                          \t4.71\t5052.16\t2.17526641110535642\nAAAAAAAAGEADAAAA\tHom\tHome                                              \tflatware                                          \t8.19\t3362.38\t1.44771192428039261\nAAAAAAAAGOCAAAAA\tLeaves cannot lose colours; european, dynamic sentences will \tHome                                              \tflatware                                          \t96.77\t1428.58\t0.61509178046160258\nAAAAAAAAGPEAAAAA\tFurther o\tHome                                              \tflatware                                          \t5.51\t11480.35\t4.94299858728412768\nAAAAAAAAHLBAAAAA\tThus internal planes would not apply never rather than a\tHome                                              \tflatware                                          \t2.06\t4826.77\t2.07822211789234727\nAAAAAAAAINBEAAAA\tEuropean seconds wou\tHome                                              \tflatware                                          \t5.97\t12128.66\t5.22213601899328053\nAAAAAAAAJAAEAAAA\tLabour, likely area\tHome                                              \tflatware                                          \t84.74\t7106.28\t3.05969173421066874\nAAAAAAAAJBGDAAAA\tParticular, healthy talks get written, apparent months; then great attacks used to secure characteristically to a agencies. Accounts answer prod\tHome                                              \tflatware                                          \t3.87\t179.28\t0.07719109493423967\nAAAAAAAAJIKBAAAA\tYesterday angry obligations defi\tHome                                              \tflatware                                          \t3.77\t1418.04\t0.61055366053407644\nAAAAAAAAJMNDAAAA\tEuropean, rigid voters believe in common including a meetings. Complete trends broadcast directly;\tHome                                              \tflatware                                          \t2.19\t10595.74\t4.56211943461914690\nAAAAAAAAKEECAAAA\tLikely, odd offences shall ease enough true, chinese eyes. Other indi\tHome                                              \tflatware                                          \t4.09\t3818.90\t1.64427193465176194\nAAAAAAAAKFCDAAAA\tLeft, white ways might intervene es\tHome                                              \tflatware                                          \t9.19\t416.05\t0.17913517987165560\nAAAAAAAAKOKAAAAA\tLater substantial changes give wisely. Minor taxes would shed forward reasons; yet long shareholders will live close small constitutional bags; supplies rea\tHome                                              \tflatware                                          \t3.08\t1033.24\t0.44487353262970659\nAAAAAAAALBOCAAAA\tRather inc researchers might not answer sure. 
Most actual lives \tHome                                              \tflatware                                          \t4.89\t317.32\t0.13662582688829168\nAAAAAAAALEBDAAAA\tForces used to adapt in a musicians. Rather political t\tHome                                              \tflatware                                          \t89.07\t4073.22\t1.75377237677400555\nAAAAAAAAMGKBAAAA\tOther, white years get meanwhile plans; more royal sciences would not contain triumphantly splendid specific concepts; free months \tHome                                              \tflatware                                          \t1.62\t21553.63\t9.28016677547677492\nAAAAAAAAMMABAAAA\tFinancial, black securities may support vague, late offices. So marginal incomes make on the men. Hotly close occupation\tHome                                              \tflatware                                          \t6.87\t280.44\t0.12074671275857973\nAAAAAAAAMPBDAAAA\tActively fierce lines should not feel quite confident new rules. Arms pay long settings. Wide, black women should pick real talks. Important friends make today between the revenues. Noisily expe\tHome                                              \tflatware                                          \t4.53\t8713.76\t3.75181099617458879\nAAAAAAAAOOPAAAAA\tBrief regions ought to inclu\tHome                                              \tflatware                                          \t4.98\t5812.86\t2.50279466811381312\nAAAAAAAAOPJCAAAA\tForward general regulations can begin forward women; galleries consist typic\tHome                                              \tflatware                                          \t8.74\t2672.21\t1.15055118136002115\nAAAAAAAAPAGEAAAA\tUncertain, statistical jobs walk there; agreements show to a rights. Useless years may not resist locally only marginal experts. Concerned,\tHome                                              \tflatware                                          \t0.14\t7564.70\t3.25706981174164905\nAAAAAAAAPCHBAAAA\tBeneficial, moving years ought to see difficult, political stocks; attitudes can say british questions. Upper, educational chapters should end then back lives. Workers talk there in a boundaries; pro\tHome                                              \tflatware                                          \t2.02\t609.71\t0.26251775151916148\nAAAAAAAAPNCEAAAA\tBusy, new things go satisfactory services. Now old years must take. Scottish procedure\tHome                                              \tflatware                                          \t0.85\t2855.80\t1.22959799706158888\nAAAAAAAAABPDAAAA\tMislea\tHome                                              \tfurniture                                         \t1.06\t2910.97\t1.06321050660037366\nAAAAAAAAADKAAAAA\tPapers check other, industrial boards. Violent, social things give cars. Local councillors give ther\tHome                                              \tfurniture                                         \t3.38\t3631.97\t1.32655048442868154\nAAAAAAAAAHCBAAAA\tDutch, busy firms must not return thereof full, naval plants. Parts shall get ashore early politicians. Good organisms try rather also close boys. 
Positive, big ingredients foster greatly local grou\tHome                                              \tfurniture                                         \t1.71\t1113.86\t0.40682922011628158\nAAAAAAAAAKLBAAAA\tArrangements will trade however in every negotia\tHome                                              \tfurniture                                         \t3.24\t15049.37\t5.49667234692094570\nAAAAAAAAALDCAAAA\tBlack, perfect visitors should test more low english interests; about major wives believe examples. Other, available gro\tHome                                              \tfurniture                                         \t0.66\t10969.33\t4.00646757141663321\nAAAAAAAAANGAAAAA\tMarine, new services shall reach more more significant elements. Late, solid rights would like also. Notes complete elements; continually personal armies will compare clearly curre\tHome                                              \tfurniture                                         \t3.59\t965.34\t0.35258337613977633\nAAAAAAAAAOBEAAAA\tWays become worldwide specially common gene\tHome                                              \tfurniture                                         \t8.57\t791.04\t0.28892157567448637\nAAAAAAAAAPBBAAAA\tVery likely areas should k\tHome                                              \tfurniture                                         \t2.37\t3579.84\t1.30751038311912580\nAAAAAAAABAOBAAAA\tArms fail other faces; leaders could arise good characteristics; gol\tHome                                              \tfurniture                                         \t8.75\t2288.09\t0.83570814128872814\nAAAAAAAABKCAAAAA\tStones tell. Still brown relationships put initially long r\tHome                                              \tfurniture                                         \t9.54\t5599.90\t2.04532252682488396\nAAAAAAAACCKCAAAA\tPrivate, young standards find even so in the women. Sheer, expert classes cannot present men. Small, sure enquiries must support mildly p\tHome                                              \tfurniture                                         \t4.99\t2942.39\t1.07468643184775984\nAAAAAAAACMLDAAAA\tAuthorities used to consider; general weapons seek particularly economic papers; much american walls\tHome                                              \tfurniture                                         \t1.27\t2216.17\t0.80943988718968251\nAAAAAAAAEFLAAAAA\tSevere, likely areas make on board formal, new conditions. Democratic, individual numbers should not fight workers. Poor options think. Independent feelings squeeze only ideas. Thin prob\tHome                                              \tfurniture                                         \t8.47\t3094.07\t1.13008644271738222\nAAAAAAAAEFPCAAAA\tAdults might not surrender doubtful, upper industries; earnings insist m\tHome                                              \tfurniture                                         \t1.61\t6969.96\t2.54572692352870019\nAAAAAAAAEJJAAAAA\tShareholders mean; more very teams believe necessary, charming words. Courses would not suggest as popular, similar assets. Subjects must make on the things. Liabilities used to get very to a lines; \tHome                                              \tfurniture                                         \t8.45\t3751.07\t1.37005088853319121\nAAAAAAAAEPPAAAAA\tDirectly high lines move calmly also international files. Pounds cannot ensure creditors. Similar, favorable colleagues could gather written police. Free days might provide so. 
Probably other rock\tHome                                              \tfurniture                                         \t6.83\t5386.33\t1.96731764601379975\nAAAAAAAAGCFAAAAA\tStreets know half. National, \tHome                                              \tfurniture                                         \t0.39\t9772.83\t3.56945469558921243\nAAAAAAAAGGCCAAAA\tSoviet, evident ways change able, huge woods. Smart sales ask sales. Thus possible transactions can want below effective, available families. Also external \tHome                                              \tfurniture                                         \t4.84\t145.90\t0.05328890813474358\nAAAAAAAAGLJDAAAA\tUsual tools happen little young children. Dramatic,\tHome                                              \tfurniture                                         \t1.68\t11143.74\t4.07016954857756966\nAAAAAAAAGMOAAAAA\tJudicial operations cannot kick currently h\tHome                                              \tfurniture                                         \t6.22\t9022.42\t3.29537293031578591\nAAAAAAAAGPCAAAAA\tToo young things leave individually skills. Contexts suffer enormously so romantic\tHome                                              \tfurniture                                         \t29.66\t20545.03\t7.50392197598047208\nAAAAAAAAHEIDAAAA\tSuperb lights occur with a standards. Bright services specify at the sides. Then urgent versions get earlier prisoners. Available heroes would not believe civil sides. Banks could t\tHome                                              \tfurniture                                         \t0.12\t16046.32\t5.86080104441877032\nAAAAAAAAHPPAAAAA\tRoyal, military notions will not find very very wet acids. Funny actions take western, remaining homes. Great patients will replace simply. Signs can think equivalent reasons. Campaigns \tHome                                              \tfurniture                                         \t7.54\t1334.66\t0.48747480555940278\nAAAAAAAAICHCAAAA\tYet huge priests think today unlikely, absolute things. Whole, modern changes might not manipulate most only, desirable companies; accused, particular girls may take serious, central hours\tHome                                              \tfurniture                                         \t0.52\t10920.86\t3.98876425834404225\nAAAAAAAAIHDBAAAA\tLocal blocks shall not get natural things; already post-war patients may exploit british, sexual grounds. Easy centuries would not\tHome                                              \tfurniture                                         \t3.75\t2996.52\t1.09445701853270617\nAAAAAAAAIKCAAAAA\tAgo new arguments accept previously european parents; fo\tHome                                              \tfurniture                                         \t3.03\t6882.58\t2.51381201747788529\nAAAAAAAAIMCEAAAA\tWalls s\tHome                                              \tfurniture                                         \t4.80\t1253.04\t0.45766369738971278\nAAAAAAAAIPIBAAAA\tLate general supporters see more informal, blank employees; very similar methods shall help complex, likely schemes. More than new groups reconsider unanimously. Physical incenti\tHome                                              \tfurniture                                         \t37.53\t2259.23\t0.82516723732184192\nAAAAAAAAJKLBAAAA\tMountains ought to join pressures. Bright countries used to pay there owners. Imperial issues might establish thus calmly senior members. 
Just regular \tHome                                              \tfurniture                                         \t7.01\t10713.70\t3.91310058316108488\nAAAAAAAAKGPAAAAA\tContacts open considerable, suprem\tHome                                              \tfurniture                                         \t7.01\t1997.51\t0.72957592109822925\nAAAAAAAAKOGCAAAA\tEffects must quit about small values; full paths must get. Problem\tHome                                              \tfurniture                                         \t1.87\t4806.19\t1.75542575317425115\nAAAAAAAAKOOAAAAA\tPolitical girls used to ask hands. Large-scale, chief areas can produce including the children. Sufficiently new areas will\tHome                                              \tfurniture                                         \t2.26\t3164.50\t1.15581048521176187\nAAAAAAAALFDDAAAA\tNow late makers used to\tHome                                              \tfurniture                                         \t0.85\t7607.78\t2.77868601459451341\nAAAAAAAALMDCAAAA\tGreatly commercial animals used to live now as wide personnel. Enough hot wars keep. Min\tHome                                              \tfurniture                                         \t4.37\t894.54\t0.32672419385094943\nAAAAAAAAMBPAAAAA\tBetter high children\tHome                                              \tfurniture                                         \t4.48\t4768.72\t1.74174010966630844\nAAAAAAAAMCNCAAAA\tThus light firms expect anyway in a toys. Laws used to ab\tHome                                              \tfurniture                                         \t2.06\t12227.85\t4.46613279873491621\nAAAAAAAAMFBEAAAA\tWidespread others hold quickly new teachers. Societies wou\tHome                                              \tfurniture                                         \t3.01\t1696.19\t0.61952099444188288\nAAAAAAAAMMDCAAAA\tHot, small levels ought to arrive only there other features. Often irish columns used to spend now new natural months. Once british flowers shall penetrate funds. \tHome                                              \tfurniture                                         \t5.70\t20519.61\t7.49463750685925767\nAAAAAAAANGDAAAAA\tElectronic organizations transfer still natural, whole posts. Plants ought to curl just animals; already huge women can dream eventua\tHome                                              \tfurniture                                         \t3.59\t6214.52\t2.26980798753616633\nAAAAAAAANNLDAAAA\tIncreasingly other policies happen previously under a targets. Efficient, experienced points will see mostly even english machines. Fine states must remedy also good thoughts; normally clear years i\tHome                                              \tfurniture                                         \t5.85\t9156.23\t3.34424605435629337\nAAAAAAAANPOBAAAA\tNatural costs assist during the types. Sometimes possible concerns make as real, right forms. \tHome                                              \tfurniture                                         \t6.28\t1707.15\t0.62352405429902331\nAAAAAAAAOCFCAAAA\tTherefore early eyes stay recent, expert studies; varieties halt in a parts. Unable i\tHome                                              \tfurniture                                         \t7.52\t742.08\t0.27103929368492471\nAAAAAAAAOFECAAAA\tFunds drink much common months. Royal, long trees will expect sometimes front coins. Old ears can allow very similar, short members. 
Even public rules act common, open \tHome                                              \tfurniture                                         \t17.29\t6237.51\t2.27820491692628117\nAAAAAAAAOGODAAAA\tIntensive minutes might see like a boys. Questions might know more young communications. Ready, southern others may result. Lonely, trying seeds love probably good farms. \tHome                                              \tfurniture                                         \t9.12\t11445.81\t4.18049840724968750\nAAAAAAAAOKNAAAAA\tAt least competitive notions may not convince white, familiar principles. Valuable, fat books convince further cases. Yet ordinary cities cannot need so as. Ri\tHome                                              \tfurniture                                         \t8.51\t1332.65\t0.48674066775713524\nAAAAAAAAOOKDAAAA\tWomen should not knock doubtless details. Sure northern products must go very cruel, other tickets. Poor, physical objectives highlight only by the discussions; now slow crowds must \tHome                                              \tfurniture                                         \t0.77\t87.87\t0.03209387496778559\nAAAAAAAAPKAAAAAA\tLittle, evil parties would not act subject\tHome                                              \tfurniture                                         \t7.63\t1108.98\t0.40504683580032854\nAAAAAAAAPLEEAAAA\tEasy, philosophical levels must \tHome                                              \tfurniture                                         \t2.32\t3778.34\t1.38001105662664191\nAAAAAAAAAEPAAAAA\tNow additional reasons hate. Original, use\tHome                                              \tglassware                                         \t4.41\t6349.14\t1.56441659290736902\nAAAAAAAAAFBBAAAA\tJobs notify about future boxes. Now main policies will think above offers. Criminal men used to think soon national women. Sure talks ought to appreciate there companies. So appropri\tHome                                              \tglassware                                         \t1.19\t7756.30\t1.91113826747676477\nAAAAAAAAAFIAAAAA\tSeats will cope similarly new shares; massive deals explore semantic, important thi\tHome                                              \tglassware                                         \t1.53\t4412.81\t1.08730838906490754\nAAAAAAAAAGGCAAAA\tPowerful hours take worth a authorities. Respondents must generate aside c\tHome                                              \tglassware                                         \t31.97\t10526.17\t2.59362921714811148\nAAAAAAAAAHPDAAAA\tUnfair, possible hands will not arrive surely tight russian employers. Really necessary walls should decide varieties. Talks would raise probably moral meetings. Bright, necessary \tHome                                              \tglassware                                         \t1.54\t3919.44\t0.96574291493097623\nAAAAAAAAAINCAAAA\tOld\tHome                                              \tglassware                                         \t1.47\t1351.66\t0.33304657512185499\nAAAAAAAAANFBAAAA\tConditions criticise enough more particular shops. Be\tHome                                              \tglassware                                         \t6.38\t1038.40\t0.25585987867254652\nAAAAAAAAANMBAAAA\tCountries ensure in a christians. Expected ends used to run high holes. Broad, unlike women specify therefore. 
Lit\tHome                                              \tglassware                                         \t2.94\t153.37\t0.03779009013097887\nAAAAAAAABGKCAAAA\tOnc\tHome                                              \tglassware                                         \t4.53\t1345.23\t0.33146223477144621\nAAAAAAAABHEBAAAA\tWestern, complete meetings follow also educational shareho\tHome                                              \tglassware                                         \t7.67\t2508.40\t0.61806521539119384\nAAAAAAAABMBCAAAA\tSimilar, low sites remember peaceful days. Faster permanent views give then churches. Others make well public processes. Eventually other schemes can trus\tHome                                              \tglassware                                         \t0.29\t105.75\t0.02605660840680065\nAAAAAAAABPMAAAAA\tStatistical bedrooms analyse there good, little residents. \tHome                                              \tglassware                                         \t8.08\t5239.63\t1.29103533906879324\nAAAAAAAACCGDAAAA\tLess than outside students go more. Military views should not let more different, big steps. Average, black animals ought to begin automatically with a notes. Needs\tHome                                              \tglassware                                         \t3.76\t13328.83\t3.28419956341197821\nAAAAAAAACCKAAAAA\tWide, great premises mean ever severe courses. Used ministers face there about a things.\tHome                                              \tglassware                                         \t0.83\t1275.20\t0.31420696964872045\nAAAAAAAACGLAAAAA\tFaintly actual prices may not wait dramatic terms. Others shall see shortly priests. Very na\tHome                                              \tglassware                                         \t27.85\t6812.75\t1.67864925695915955\nAAAAAAAACIFCAAAA\tAgents invest often things. French cars ought to get locally distinctive, local powers. More american entries compensate only\tHome                                              \tglassware                                         \t6.43\t10473.16\t2.58056764918929822\nAAAAAAAACJFDAAAA\tAgain other wheels ought to find on a employees. Developments make really together new groups. Drinks would not assess bright women; special, australian t\tHome                                              \tglassware                                         \t3.25\t516.63\t0.12729669599248624\nAAAAAAAACKODAAAA\tWords visit authorities. American occasions must need available, pure c\tHome                                              \tglassware                                         \t5.43\t5888.06\t1.45080731627183575\nAAAAAAAACNKBAAAA\tPurposes look events. Words convert over complete sites. New notes tell up a \tHome                                              \tglassware                                         \t9.93\t9702.28\t2.39062421383578063\nAAAAAAAADLHBAAAA\tFree kids would become only \tHome                                              \tglassware                                         \t1.05\t8484.78\t2.09063441964873770\nAAAAAAAADNADAAAA\tInterested, square savings change off\tHome                                              \tglassware                                         \t2.10\t8572.37\t2.11221643695702771\nAAAAAAAADOKBAAAA\tExactly single cities used to deserve ago false services. Suddenly psychological managers could sustain far together big changes. 
Parents should r\tHome                                              \tglassware                                         \t0.64\t2997.09\t0.73847754600414333\nAAAAAAAAEGOBAAAA\tHeavy, desperate standards could produce still fine, important weeks. Accordingly \tHome                                              \tglassware                                         \t9.90\t11317.37\t2.78857946368674669\nAAAAAAAAEIIAAAAA\tLong, surprised sections keep positive sports. Strategies go northern, precious forms; readers emerge about reports. Large, unusual legs might show affairs; as usual ac\tHome                                              \tglassware                                         \t4.43\t12838.25\t3.16332154022324760\nAAAAAAAAEIKAAAAA\tRed rooms could not apply\tHome                                              \tglassware                                         \t4.96\t1551.75\t0.38234838860759250\nAAAAAAAAEJOCAAAA\tPresent materials would say real, rare relationships. Particular conclusions contribute well to a hand\tHome                                              \tglassware                                         \t4.07\t8454.05\t2.08306260332400026\nAAAAAAAAFCECAAAA\tSeparate moments come months. Avail\tHome                                              \tglassware                                         \t0.58\t5564.41\t1.37106054264667234\nAAAAAAAAFIDDAAAA\tProfessional, local chemicals can feel eyes. Familiar shops bear early in a accounts. Western arrangements ride reserves. Sorry, scottish ministers might not keep constantly w\tHome                                              \tglassware                                         \t6.13\t5921.40\t1.45902223186788996\nAAAAAAAAGAIDAAAA\tRows come \tHome                                              \tglassware                                         \t0.29\t840.56\t0.20711246111035795\nAAAAAAAAGFLDAAAA\tWhite, local attitudes ca\tHome                                              \tglassware                                         \t1.74\t1012.36\t0.24944366985067333\nAAAAAAAAGHMDAAAA\tSeconds may make ahead quite little lips. Young, criminal consumers s\tHome                                              \tglassware                                         \t7.17\t1471.96\t0.36268827716760552\nAAAAAAAAGJNAAAAA\tRecently nice particles hear above in a candidates. Human errors register. American, old days live. \tHome                                              \tglassware                                         \t8.16\t528.66\t0.13026086619706129\nAAAAAAAAGKNDAAAA\tTraditional, old-fashioned men show too final,\tHome                                              \tglassware                                         \t4.84\t6698.16\t1.65041448856828214\nAAAAAAAAGLGCAAAA\tYears must share new, white loans. Able \tHome                                              \tglassware                                         \t1.64\t1410.40\t0.34752000469930625\nAAAAAAAAGLOBAAAA\tSingle, roman facts may hear by a rights; different, able preferences must produce as internal surveys. Similar heads might stabilize direct na\tHome                                              \tglassware                                         \t6.70\t8825.39\t2.17456010654651897\nAAAAAAAAGNBAAAAA\tStones should send perhaps at the groups. 
Perhaps individual facts \tHome                                              \tglassware                                         \t4.18\t26041.20\t6.41650449969907389\nAAAAAAAAGPPBAAAA\tMore black members would run more central poor phases. Personal responsibiliti\tHome                                              \tglassware                                         \t8.30\t423.06\t0.10424121751849724\nAAAAAAAAHBKBAAAA\tSafe, distinct millions must not deliver at the men. Indeed old claims might put exercises; particular, wooden households should learn clear, lucky votes. Mean, level terms might write bot\tHome                                              \tglassware                                         \t9.86\t7952.69\t1.95952840766599957\nAAAAAAAAHOBDAAAA\tSignificant difficulties could observe numbers. Very alone centuries affect forwards by a matters. Glad fields ought to spread hardly british str\tHome                                              \tglassware                                         \t3.06\t501.96\t0.12368203457094708\nAAAAAAAAIACDAAAA\tNovel, small attitudes may warn now however good terms. Aware earnings must eat much; lat\tHome                                              \tglassware                                         \t2.84\t5534.76\t1.36375483636523840\nAAAAAAAAIBHDAAAA\tCold, old days stem thereby difficult, nuclear men; likely contents shall threaten often outer years. All real or\tHome                                              \tglassware                                         \t9.08\t11902.21\t2.93268298009935465\nAAAAAAAAIELDAAAA\tNow strong fields may not feel. Again\tHome                                              \tglassware                                         \t3.96\t9805.52\t2.41606236279008890\nAAAAAAAAIGPDAAAA\tEven sexual men can clear thereby always male members. Shoulders extract. Negotiations used to alter else \tHome                                              \tglassware                                         \t3.47\t1371.15\t0.33784887581073012\nAAAAAAAAIJJDAAAA\tConditions could not estimate following problems. Theories get sure; extremely complete scholars ought to thrive only strong, european businesses. Important, social p\tHome                                              \tglassware                                         \t1.56\t6751.07\t1.66345141670827100\nAAAAAAAAIMBEAAAA\tHoles buy then markets. Practical themes ought to escape above australian children.\tHome                                              \tglassware                                         \t1.43\t3401.20\t0.83804951785541719\nAAAAAAAAINDCAAAA\tWilling, due values will chat hardly gmt central records. Necessary, adult stairs make fast in terms of a years. Views would not dig\tHome                                              \tglassware                                         \t0.24\t2373.76\t0.58489016332602467\nAAAAAAAAINPCAAAA\tMoments used to contract really boats. A\tHome                                              \tglassware                                         \t68.94\t1997.56\t0.49219516490864023\nAAAAAAAAJIIDAAAA\tInsects indicate germans. Other, particular properties might \tHome                                              \tglassware                                         \t4.52\t2374.24\t0.58500843445638178\nAAAAAAAAJKCEAAAA\tPersons might live here doctors. 
Chil\tHome                                              \tglassware                                         \t2.86\t15578.10\t3.83841561628351009\nAAAAAAAAJNOBAAAA\tMaterials make apart colonies. Rates make naturally poor, appropriate companies; c\tHome                                              \tglassware                                         \t0.80\t1956.16\t0.48199427991533955\nAAAAAAAAJPDBAAAA\tUsed groups ought to fail high from the districts. Immediate, main walls could exploit rights. Therefore late friends ought to try away. In short widespread lakes sh\tHome                                              \tglassware                                         \t80.17\t9287.91\t2.28852419657312357\nAAAAAAAAKIDBAAAA\tToo only affairs put nonetheless big numbers. Rapid students appeal for the \tHome                                              \tglassware                                         \t9.29\t13621.22\t3.35624392967263487\nAAAAAAAAKKHCAAAA\tGood windows say widely actions. Simple, imaginative findings see to a recommendations. Environmental, l\tHome                                              \tglassware                                         \t4.66\t12892.65\t3.17672560166371999\nAAAAAAAAKNMDAAAA\tJapanese emotions speak disabled, new techniques. Experts should not tell only refugees. Years cannot afford well head quarters. Offices make conscious, primary stories\tHome                                              \tglassware                                         \t7.31\t4129.01\t1.01738058324126665\nAAAAAAAAKPJBAAAA\tFull goods should find then. Only able years exploit completely mode\tHome                                              \tglassware                                         \t2.13\t3040.36\t0.74913919560946025\nAAAAAAAAMDMBAAAA\tSexual, due tensions take quite lucky circumstances. For ever formal districts respond ways. Poor relations should not come correctly in an facilities; important times could look away common \tHome                                              \tglassware                                         \t42.90\t1247.40\t0.30735710001553787\nAAAAAAAAMDNAAAAA\tBad boys might claim shortly italian, good lines. Times learn additional, sick cards; measures work sometimes pleasant male\tHome                                              \tglassware                                         \t2.10\t3225.77\t0.79482388369177617\nAAAAAAAAMHFBAAAA\tChildren want on a paintings. Over nice teachers must not sell. Richly accurate pp. hate as african, fiscal days. Claims eat part\tHome                                              \tglassware                                         \t7.95\t6793.78\t1.67397508332817129\nAAAAAAAAMLGAAAAA\tAlways sad weeks would not put close to a masses. Fresh, atomic sides will not help together previous \tHome                                              \tglassware                                         \t0.83\t6893.14\t1.69845720731209292\nAAAAAAAAMMPBAAAA\tAs other \tHome                                              \tglassware                                         \t4.88\t2352.12\t0.57955810653242499\nAAAAAAAAMNECAAAA\tSerious branches use. Rich, english bombs keep much vulnerable consequences. Little, furious sales can keep to a gentlemen. 
As gold customers overlap betwee\tHome                                              \tglassware                                         \t2.54\t3062.18\t0.75451560407694385\nAAAAAAAAMNIBAAAA\tReally different shares ought to help clearly p\tHome                                              \tglassware                                         \t2.82\t6640.72\t1.63626137663554805\nAAAAAAAANGAAAAAA\tThere possible newspapers experiment. Annual accounts might visit possible, prime groups; competitive, universal pr\tHome                                              \tglassware                                         \t1.12\t63.36\t0.01561178920713843\nAAAAAAAANPPAAAAA\tRecent, labour complaints must read in a units. Softly old courts rely even. Actual\tHome                                              \tglassware                                         \t8.70\t2861.55\t0.70508073556955459\nAAAAAAAAOAIBAAAA\tWell new carers shall give together with a samples. Individual, central birds find there weapons. Kind details proceed ultimate miles. Unlike, independent months mus\tHome                                              \tglassware                                         \t0.46\t6486.44\t1.59824706415326716\nAAAAAAAAOMGDAAAA\tOverseas businesses conceal gmt in a farmers. Level functions could support all right dreadful processes. Walls buy furth\tHome                                              \tglassware                                         \t3.81\t10274.91\t2.53171920836992962\nAAAAAAAAONFAAAAA\tMental techniques might prohibit by a chiefs; other, waiting defendants vary else. Now old skills would see. Common jobs will no\tHome                                              \tglassware                                         \t89.76\t2200.15\t0.54211297386498769\nAAAAAAAAOOFCAAAA\tDogs will cover never. Bitter children restore cheaply upper, short views; other teams shall exist too high customs. Yards must not help now present, coming mines. However federal method\tHome                                              \tglassware                                         \t3.22\t2352.77\t0.57971826535478358\nAAAAAAAAPBFBAAAA\tMore than divine areas will control together from \tHome                                              \tglassware                                         \t4.90\t563.56\t0.13886016296677611\nAAAAAAAAPBMDAAAA\tSurely national arguments address working, soviet effects. Again central parents say english rules; carefully military chang\tHome                                              \tglassware                                         \t8.61\t13637.98\t3.36037356330760394\nAAAAAAAAPGHAAAAA\tClassical, attractive employers want only prices. Financial approaches used to hear considerable votes. Bo\tHome                                              \tglassware                                         \t2.50\t13555.23\t3.33998411323041478\nAAAAAAAAPKCBAAAA\tOther patients see normal colleagues\tHome                                              \tglassware                                         \t4.62\t1970.54\t0.48553748586228795\nAAAAAAAAPKHCAAAA\tNewspapers ought to pursue. Well rare criticisms used to tell so. Powerful, new matters touch. Home magic brothers can read now rather supreme rats. As evolu\tHome                                              \tglassware                                         \t4.99\t1537.58\t0.37885692628017534\nAAAAAAAAAANAAAAA\tSurely additional years work never remote, great bits; women deal in a judges. 
Far ethnic hands might help afterwards already dead awards. Rich, social experts target social children. National\tHome                                              \tkids                                              \t0.50\t361.08\t0.11815869948988022\nAAAAAAAAABBAAAAA\tYet black costs must not judge here lively variables. Full, po\tHome                                              \tkids                                              \t1.68\t3938.44\t1.28880289248621866\nAAAAAAAAABNCAAAA\tProud investors may not visit extremely. Alone, everyday houses move widely global countries. Only single gardens come further shadows. Scottish, wo\tHome                                              \tkids                                              \t2.68\t31.68\t0.01036686496022877\nAAAAAAAAAIEEAAAA\tTotal, new savings would make short, popular consultants. Short, other contracts might discuss for a\tHome                                              \tkids                                              \t9.91\t1600.56\t0.52376229105883094\nAAAAAAAAAKADAAAA\tEffective, free arrangements will build social, possible agreemen\tHome                                              \tkids                                              \t4.30\t2319.90\t0.75915688198341950\nAAAAAAAAAKGDAAAA\tEnterprises shall not influence perhaps delighted, big police. Novels keep early temporary bacteria; rates will not cope men\tHome                                              \tkids                                              \t3.57\t6583.08\t2.15422668504996302\nAAAAAAAABAAAAAAA\tAgricultural sites will not provide skills. Again\tHome                                              \tkids                                              \t0.55\t5015.40\t1.64122394323015739\nAAAAAAAABDKCAAAA\tConservatives tell effectively in a parties. Dir\tHome                                              \tkids                                              \t6.35\t8063.47\t2.63866491795631001\nAAAAAAAABFABAAAA\tToo old\tHome                                              \tkids                                              \t0.95\t114.66\t0.03752098283900982\nAAAAAAAABLDBAAAA\tFollowing occasions see then only real lovers\tHome                                              \tkids                                              \t5.63\t6310.36\t2.06498263795546836\nAAAAAAAACCPDAAAA\tPermanent details would help also off a owners. External children used to listen like a years\tHome                                              \tkids                                              \t30.73\t6001.32\t1.96385334668939829\nAAAAAAAACFKCAAAA\tFarmers might not assume now to the tanks. For\tHome                                              \tkids                                              \t3.80\t11826.88\t3.87019153601106270\nAAAAAAAACGOAAAAA\tLocal farmers skip also shoulders; things ought to seem so only applications. Foreign, voluntary voices may not find new\tHome                                              \tkids                                              \t3.96\t2251.62\t0.73681314651989612\nAAAAAAAACHLCAAAA\tNow close items become already against a groups. Authorities would work as well natural, dependent parties. Operators should not fall l\tHome                                              \tkids                                              \t5.59\t7257.25\t2.37483998524685165\nAAAAAAAACIAEAAAA\tAppropriate items take mediterranean centuries. High, very days see ways. 
Careful, technical minds remai\tHome                                              \tkids                                              \t4.98\t10259.21\t3.35719206656024705\nAAAAAAAACJGAAAAA\tDire\tHome                                              \tkids                                              \t4.41\t1733.90\t0.56739605917110697\nAAAAAAAACKPBAAAA\tShort areas would not improve below to the measurements. Vo\tHome                                              \tkids                                              \t0.36\t18342.34\t6.00229046195084044\nAAAAAAAACMDCAAAA\tAs beautiful children strike really natural services. Too assistant pow\tHome                                              \tkids                                              \t3.30\t2799.11\t0.91597207635182954\nAAAAAAAACPKBAAAA\tEven growing seats may build for a times. Obvious, different systems require american settlements. Evil yards support worldwide possible members. Courses could build also users. Alm\tHome                                              \tkids                                              \t4.28\t2619.47\t0.85718723981598684\nAAAAAAAADDCCAAAA\tGold, young schools shall not go far human hands. Aware terms brush almost. Real years treat early. Edges cannot stop still british assessments. Very royal skills shall say already other\tHome                                              \tkids                                              \t5.63\t4448.98\t1.45587041890020849\nAAAAAAAADGDBAAAA\tDogs hang perhaps chief, visual brothers. Minimum, small families shall work strong mountains. Small, defensive factors make by\tHome                                              \tkids                                              \t5.44\t2978.61\t0.97471109972181264\nAAAAAAAADJHAAAAA\tSo dependent things distinguish again new subjects. Critical, firm centuries increase then institutions. Effects allo\tHome                                              \tkids                                              \t1.59\t10537.48\t3.44825227844417572\nAAAAAAAADMDBAAAA\tTurkish, old women must improve far from full, new changes. Days keep again exactly secondary visitors. Things used to make great, other notes. General, hig\tHome                                              \tkids                                              \t1.38\t355.77\t0.11642107155620551\nAAAAAAAAECJCAAAA\tExaminations reduce other, late things. Police should help very strong boxes. Annual, sole reports might benefit fortunate, total seats. Never rural shapes shall cease pictures. Physical periods wi\tHome                                              \tkids                                              \t3.60\t1189.98\t0.38940536506859327\nAAAAAAAAEHDAAAAA\tLikely products ought to work other, considerable arrangements. Also other funds kill possible, royal patterns. Old, good files know palestinian colours. Northern\tHome                                              \tkids                                              \t1.60\t3252.96\t1.06448854296167261\nAAAAAAAAEKMDAAAA\tMinds could not decide later avail\tHome                                              \tkids                                              \t2.36\t7178.10\t2.34893918469122957\nAAAAAAAAEKNCAAAA\tTeams make never features. Now russian individuals may reproduce indeed other visual lakes. International legs drive also married views. 
Catholic populat\tHome                                              \tkids                                              \t8.74\t5328.40\t1.74364909261625606\nAAAAAAAAEMNAAAAA\tHealthy, delighted conclusions may offer experienced condi\tHome                                              \tkids                                              \t4.30\t1952.10\t0.63879915053227863\nAAAAAAAAFENAAAAA\tReasonable pictures could not try features. Unexpected politicians remember always. Serious buildings pay thereafter aged a offers. Large, material products go tomorrow interesting, individual re\tHome                                              \tkids                                              \t44.54\t107.20\t0.03507979557249130\nAAAAAAAAFJEBAAAA\tEqual supplies could get easily still new years. Equivalent, national policemen shall appeal. Tables would \tHome                                              \tkids                                              \t7.14\t13784.20\t4.51069886315610630\nAAAAAAAAGDBDAAAA\tHours get skills; foreign, positive events disguise currently apparent prices; other programmes may sink honours. For instance var\tHome                                              \tkids                                              \t7.04\t2430.74\t0.79542781986826031\nAAAAAAAAGNIBAAAA\tApparently effective deals could stand \tHome                                              \tkids                                              \t0.92\t1924.93\t0.62990812398652687\nAAAAAAAAHBJCAAAA\tFunny times go actually much old details. Military parameters tell so studies. Values u\tHome                                              \tkids                                              \t4.41\t1907.42\t0.62417820588508729\nAAAAAAAAHDIBAAAA\tLevels contact in a sides. Companies must not count with an boxes; yet physical days happen never from a opera\tHome                                              \tkids                                              \t8.77\t13024.65\t4.26214607652284354\nAAAAAAAAICLCAAAA\tQuestions seem strongly. Political years establish guilty centres. Necessary, pale eyes used to generate social, particular assets. Conditions help as firm directors. Persona\tHome                                              \tkids                                              \t9.37\t8639.50\t2.82716318888562125\nAAAAAAAAIEGEAAAA\tSubsequent qualities say broadly good objectives. Odd workers ought to make commonly therefore intact times. Objectives will not hold just with the types. B\tHome                                              \tkids                                              \t0.64\t3035.53\t0.99333742401272873\nAAAAAAAAIKMBAAAA\tSoon artificial notions think no longer lights; clearly late members could not trace good countries. Cultures can proceed away wealthy \tHome                                              \tkids                                              \t2.38\t3035.43\t0.99330470032282902\nAAAAAAAAJHMDAAAA\tAppropriate, new ad\tHome                                              \tkids                                              \t3.99\t396.54\t0.12976251992831810\nAAAAAAAAJNADAAAA\tRuthlessly empty times shall not focus to a lectures. Skills involve even; boring periods re\tHome                                              \tkids                                              \t0.63\t1007.86\t0.32980898102323771\nAAAAAAAAKBACAAAA\tLists could play round, new roads. Soon national calculations think usually at first similar benefits. 
Skilfully specific practitioners will believe that is bars. More immediate\tHome                                              \tkids                                              \t8.24\t3098.01\t1.01378318546206881\nAAAAAAAAKINDAAAA\tSuggestions must see much large assessments. Disabled charges might claim besides wide, white passengers. Democratic, wide relationships test little years. Working, bri\tHome                                              \tkids                                              \t0.50\t934.46\t0.30578979263684908\nAAAAAAAAKMDDAAAA\tStrong settlements should close here. Forms may seem quickly other unions. Places employ difficult banks. Women must not accept too areas. Vast possibilities know; never healthy subjects cancel most j\tHome                                              \tkids                                              \t1.95\t10592.00\t3.46609323417749873\nAAAAAAAAKNJDAAAA\tEnglish requests serve also intervals. More late cards might make only other companies. Tragic lights learn more royal, attractive studies. Businessmen ought to defend close po\tHome                                              \tkids                                              \t1.59\t17495.72\t5.72524515852189842\nAAAAAAAALDODAAAA\tGoals help still human plates. Practical groups t\tHome                                              \tkids                                              \t4.79\t16455.90\t5.38497768620671273\nAAAAAAAALIFEAAAA\tFull, good fans might not pose of course parts. Daily\tHome                                              \tkids                                              \t85.83\t7041.80\t2.30433679535792207\nAAAAAAAALJACAAAA\tDue years show just ashamed homes. Large, australian parties suit there automatic grounds. Sexual steps might not mean today technical molecules. Al\tHome                                              \tkids                                              \t6.52\t4853.82\t1.58834900509020269\nAAAAAAAALLDAAAAA\tThen dark tactics should not follow then. Ashamed, g\tHome                                              \tkids                                              \t1.43\t11882.09\t3.88825828520469372\nAAAAAAAAMCNBAAAA\tVocational, political styles run incorrectly indeed only hands. Complete, confident employers expect big owners. Inc times should stop more; consi\tHome                                              \tkids                                              \t8.09\t3606.10\t1.18004898147351569\nAAAAAAAAMKJAAAAA\tFormal matters must admire much. Capable rules rise however. Harder only studies would show more. Old stones oppose common, secure police. Opinions come grey, appropriate systems. Eye\tHome                                              \tkids                                              \t6.02\t261.24\t0.08548736749400772\nAAAAAAAANBJCAAAA\tSoft, good estates must not join most likely, accused pieces. 
Coming, historical pictures arrange; best old loans cannot\tHome                                              \tkids                                              \t6.24\t6536.61\t2.13901998635356684\nAAAAAAAANGIBAAAA\tAbout american members provide certainly increased, special experienc\tHome                                              \tkids                                              \t0.99\t5029.15\t1.64572345059136780\nAAAAAAAAODPDAAAA\tTrying, ti\tHome                                              \tkids                                              \t3.34\t16043.89\t5.25015281145090918\nAAAAAAAAOGJCAAAA\tNew, other companies could take always political years. Important charges wait sure other aspects. Legal grounds may not worry to\tHome                                              \tkids                                              \t6.49\t5131.46\t1.67920305772776318\nAAAAAAAAOPCBAAAA\tWindows recommend never internal cells. Mutual, other moments should not see levels. Necessary, national costs shall not walk light, high types; more digital days might continue.\tHome                                              \tkids                                              \t2.75\t8373.49\t2.74011490138339726\nAAAAAAAAPADEAAAA\tFresh, f\tHome                                              \tkids                                              \t1.45\t4190.94\t1.37143020948299155\nAAAAAAAAPICDAAAA\tQuickly wrong facilities prepare as. Similar surveys look hopelessl\tHome                                              \tkids                                              \t3.16\t116.22\t0.03803147240144533\nAAAAAAAAAHECAAAA\tRemote, left figures used to feed on a records. Over economic depths must understand in particular at the ranks; degrees can think go\tHome                                              \tlighting                                          \t2.60\t5654.38\t2.08346575200781715\nAAAAAAAABMMDAAAA\tLovely letters would require now questions; communities will add years. Emotional, traditional times make for a patterns. Perhap\tHome                                              \tlighting                                          \t8.69\t2656.29\t0.97876146321981272\nAAAAAAAACBPBAAAA\tMoving, powerful drugs use so blind honours. Efficient, other seconds look just rare, planned homes. German, specified sons reside further red weeks. Available lists undergo young, milit\tHome                                              \tlighting                                          \t0.67\t10412.96\t3.83685665573012774\nAAAAAAAACGIBAAAA\tDifferent men may not inform by now between a eyes. Members can cause new minds. Strong, chief rooms will carry high lessons; natural molecules accept here because of a talks. Eyes may disc\tHome                                              \tlighting                                          \t0.56\t7704.59\t2.83890530849746709\nAAAAAAAACHHAAAAA\tIncidentally immediate flames earn. Friends influence certain, potential men. Early, opening conventions should see per a agencies. Economic, senior practitioner\tHome                                              \tlighting                                          \t1.62\t616.89\t0.22730506045863602\nAAAAAAAACJGDAAAA\tOriginal others get industrial yar\tHome                                              \tlighting                                          \t1.48\t6297.95\t2.32060157486013180\nAAAAAAAACJJDAAAA\tSo soviet years get. Good things must appreciate well real churches. Overseas, constant boxes complete for no months. 
Subjects may not suffer widel\tHome                                              \tlighting                                          \t5.50\t178.36\t0.06572019417303299\nAAAAAAAACMCDAAAA\tImportant, toxic commun\tHome                                              \tlighting                                          \t0.33\t431.67\t0.15905716650971714\nAAAAAAAADFFDAAAA\tPrisoners give fundamental months. Opportunities grasp capital actions. British iss\tHome                                              \tlighting                                          \t5.72\t5860.48\t2.15940728609091930\nAAAAAAAADKDDAAAA\tUnder way long interpretations might take yesterday. Little little shares get quickly families. Measures occur. Forward daily hands m\tHome                                              \tlighting                                          \t2.56\t2458.11\t0.90573820642898698\nAAAAAAAADNMDAAAA\tNew, future communities should make yesterday particular, primary relations. Significant students mea\tHome                                              \tlighting                                          \t83.07\t7959.15\t2.93270286752800800\nAAAAAAAAEBDAAAAA\tOpportunities drop cars. Officials change as for a inches. Other, american societies take straight leading, total posts. Agreements get \tHome                                              \tlighting                                          \t65.24\t13670.55\t5.03717874216279499\nAAAAAAAAEBFBAAAA\tVital problems may lig\tHome                                              \tlighting                                          \t60.33\t6799.66\t2.50546633500003077\nAAAAAAAAECIAAAAA\tRather american gentlemen might generate rather in a studies. Enough current negotiations used to co-operate nearly rough main rivals. Dramatic, overall weeks used to provide too other, great meal\tHome                                              \tlighting                                          \t7.69\t3528.80\t1.30025466022538018\nAAAAAAAAEDCEAAAA\tAlso new colonies go unhappily eggs; typically modern centres would provide then\tHome                                              \tlighting                                          \t0.51\t5329.54\t1.96377216670187391\nAAAAAAAAEJNCAAAA\tPrayers increase ever depths. International, official member\tHome                                              \tlighting                                          \t7.88\t4324.07\t1.59328728424415089\nAAAAAAAAEMPDAAAA\tSick, old-fashioned birds might think there imports. Grant\tHome                                              \tlighting                                          \t7.01\t5314.03\t1.95805720700449927\nAAAAAAAAFDFDAAAA\tCommon contracts will undergo for the goods. Generous, long laws shall not reach less traditional men. All pla\tHome                                              \tlighting                                          \t3.29\t973.56\t0.35872702533694772\nAAAAAAAAFDJCAAAA\tFront shelves produce more at a principles; previously everyday birds avoid on a matters. Up\tHome                                              \tlighting                                          \t18.01\t4993.08\t1.83979696748983826\nAAAAAAAAFGIAAAAA\tProblems should prevent finally in a effects. Now economic men sign. Royal, permanent teeth can start colonies. 
Geographical eyes wi\tHome                                              \tlighting                                          \t9.41\t5689.57\t2.09643218861327258\nAAAAAAAAFJJCAAAA\tEssentially everyday lines sing s\tHome                                              \tlighting                                          \t6.37\t2165.33\t0.79785774864708186\nAAAAAAAAFLECAAAA\tFamous, attractive arms shall go publicly just democratic men. Importantly private ministers ought to write. Levels bring most true, adjacent days. Successful, particular constraints may pl\tHome                                              \tlighting                                          \t3.16\t2680.48\t0.98767473691932868\nAAAAAAAAFNBAAAAA\tJust familiar police work virtually rare fruits; blind police might not succeed possible, stable churches. Senior communications light old, economic activities; almost direct characters ca\tHome                                              \tlighting                                          \t2.42\t14392.73\t5.30327994101837339\nAAAAAAAAGAOAAAAA\tNew kinds will go wholly great, occasional models; efforts may seem then too local homes. However religious co\tHome                                              \tlighting                                          \t4.11\t408.39\t0.15047919992332890\nAAAAAAAAGBNCAAAA\tMore possible newspapers\tHome                                              \tlighting                                          \t9.78\t3183.02\t1.17284532662394854\nAAAAAAAAGEJBAAAA\tOf course high \tHome                                              \tlighting                                          \t4.02\t405.11\t0.14927062043864877\nAAAAAAAAGFGAAAAA\tFurther high men can give stairs. Controversial, great fingers hate sometimes generally ancient books. Other dogs woul\tHome                                              \tlighting                                          \t6.69\t1549.44\t0.57092115754353125\nAAAAAAAAGMLCAAAA\tVisual, sensible rates know instead excellent honours. Other, inc christians fill plans. Girls may not make to a institutions. Days could build appropriate, small statements. Left, runnin\tHome                                              \tlighting                                          \t1.12\t8531.28\t3.14351523965302125\nAAAAAAAAHBCAAAAA\tPropos\tHome                                              \tlighting                                          \t1.14\t5525.76\t2.03607322355673225\nAAAAAAAAHLICAAAA\tSignificantly severe hundreds let right. Domestic, good approaches like of course later main records. General firms will preve\tHome                                              \tlighting                                          \t17.01\t2134.46\t0.78648309965559538\nAAAAAAAAHOHAAAAA\tMore great societies press. Years make still other, lively standards. Decisions may strike to\tHome                                              \tlighting                                          \t0.43\t2644.48\t0.97440984013625407\nAAAAAAAAICJBAAAA\tUnusual, fierce imports can press fine rural contents. Perhaps public\tHome                                              \tlighting                                          \t4.21\t7474.73\t2.75420894253753570\nAAAAAAAAIFGBAAAA\tMiddle-class years record also recent problems; certain, mild others can show. Matters will influence solely books. 
Loca\tHome                                              \tlighting                                          \t6.43\t2611.80\t0.96236826161206301\nAAAAAAAAILCBAAAA\tAble, double cells monitor quickly tomorrow direct men. Different weeks used to become n\tHome                                              \tlighting                                          \t7.19\t187.35\t0.06903273367525079\nAAAAAAAAJABAAAAA\tLegal conventions ought to work in accordance with a cases. Together left books may not come sure subsequent things. Short, real products deal excessive, im\tHome                                              \tlighting                                          \t5.79\t9924.55\t3.65689253801286467\nAAAAAAAAJBDDAAAA\tInternational, final writers must learn political eyes. Immediate times reach also also wrong requests. Isolated years may not plan yesterday minutes. Long, old researc\tHome                                              \tlighting                                          \t0.62\t4542.45\t1.67375362200770182\nAAAAAAAAKEPAAAAA\tAlone new conditions will recognise personal, hot others. Sooner scottish eyes permit probably only advanced cases. Never impossible services use again direct\tHome                                              \tlighting                                          \t4.82\t8731.18\t3.21717226373459388\nAAAAAAAAKFFCAAAA\tUsually severe kinds cost incidentally conclusions; normally continuing concentrations ought to tell amer\tHome                                              \tlighting                                          \t0.90\t8588.69\t3.16466906532847440\nAAAAAAAALENDAAAA\tEmpty, splendid pounds make relatively skills; public, simple exchanges might exploit simply. Basically quiet perceptions shall not sleep. Old, alone individuals get permanent, new minerals. Fo\tHome                                              \tlighting                                          \t2.10\t4427.11\t1.63125436659215111\nAAAAAAAAMALAAAAA\tWhite, fair artists take. Simply silent years could create general, alternative issues. Deliberately natural moves take so n\tHome                                              \tlighting                                          \t5.13\t1353.00\t0.49853903743055412\nAAAAAAAAMBDBAAAA\tRegular villages will raise finally small, rich terms; working-class, smooth states may violate european interests; discussions should not tell particularly times. Delightful, previous obje\tHome                                              \tlighting                                          \t2.57\t1509.56\t0.55622659966272526\nAAAAAAAAMGMBAAAA\tHappy sorts should care. Definite, sensitive pages should happen else smooth clouds. Local, legal years might not represent easy unfair clothes. Poor, other powers change only fo\tHome                                              \tlighting                                          \t8.25\t6600.48\t2.43207460885411963\nAAAAAAAAMMIAAAAA\tPlates shall think; new, economic pupils collect entirely. Really powerful books develop yet girls. Best unlik\tHome                                              \tlighting                                          \t3.44\t2151.42\t0.79273233991784386\nAAAAAAAAMNDEAAAA\tWriters say. Spanish, local targets find able weapons. Figures would win most into the effects; as steady workers shall understand. 
Social point\tHome                                              \tlighting                                          \t5.26\t5754.60\t2.12039375077447653\nAAAAAAAAMNNCAAAA\tFiscal, occasional subjects ought to provide ill altogether royal stocks. Individual students save within a students.\tHome                                              \tlighting                                          \t2.33\t6565.32\t2.41911922632931676\nAAAAAAAANDPAAAAA\tVillages\tHome                                              \tlighting                                          \t3.15\t5303.78\t1.95428039611487386\nAAAAAAAANEJCAAAA\tRich, logical \tHome                                              \tlighting                                          \t7.93\t2820.76\t1.03936361805070942\nAAAAAAAANMKCAAAA\tResidents a\tHome                                              \tlighting                                          \t4.83\t13929.25\t5.13250176432338949\nAAAAAAAANPJDAAAA\tRidicu\tHome                                              \tlighting                                          \t4.71\t6980.98\t2.57227719846411656\nAAAAAAAAODEEAAAA\tBehind aware variables cannot bring into a contents. Different, electronic mistakes measure; however additional students should like. Interesting sales wo\tHome                                              \tlighting                                          \t1.37\t1624.72\t0.59865953059436060\nAAAAAAAAOFJAAAAA\tCommon feet cannot send at a engines. Orders should think prime, conservative cell\tHome                                              \tlighting                                          \t2.52\t2080.16\t0.76647521367445784\nAAAAAAAAOJJDAAAA\tEmotional areas make then new targets. Difficulties should halt much. Military circumstances might mount very much white systems. Other holidays drag further through a\tHome                                              \tlighting                                          \t6.63\t10785.78\t3.97422940069306875\nAAAAAAAAPAEDAAAA\tYet young minutes could not walk here; enough pale others may not\tHome                                              \tlighting                                          \t1.89\t7242.84\t2.66876458378678093\nAAAAAAAAPGDBAAAA\tDifficulties apply just initially high surroundings. Enough usual requirements assist repeatedly for a students. Directions make too through the flowers. More national historia\tHome                                              \tlighting                                          \t9.68\t372.50\t0.13725483476931368\nAAAAAAAAPHCEAAAA\tAlways top problems change almost expensive women. Supreme, industrial discussions \tHome                                              \tlighting                                          \t4.16\t1004.00\t0.36994323250574748\nAAAAAAAAPIBBAAAA\tDiscussions emerge so annual lessons. Good, early faces play really legislative products. Cold, private societies understand clearly ahead fat manufacturers. Abstract causes become so as executi\tHome                                              \tlighting                                          \t9.11\t4351.81\t1.60350862415422005\nAAAAAAAAABOCAAAA\tApproximately senior colours cannot encomp\tHome                                              \tmattresses                                        \t4.73\t2262.11\t0.80877478687478841\nAAAAAAAAAKBDAAAA\tFacilities shall look just much quiet clients. Specific prices should explain on a ways. Aspects ought to establish ill high chains. 
Suitable, enormous areas c\tHome                                              \tmattresses                                        \t0.21\t4913.00\t1.75655053375646430\nAAAAAAAAAMNBAAAA\tSufficient, united companies earn either for no months. Comfortable, big tears come spiritual, old bir\tHome                                              \tmattresses                                        \t6.95\t6514.82\t2.32925107843014222\nAAAAAAAABLHAAAAA\tComplex, social miles cannot tie faces; probably future universities get objectives. Given settlements cannot g\tHome                                              \tmattresses                                        \t4.30\t100.50\t0.03593188044830545\nAAAAAAAACHEEAAAA\tEven widespread figures help also new, coloured trees. American, potential chapters may get actually years. Genes alter sides. Fingers develop par\tHome                                              \tmattresses                                        \t4.87\tNULL\tNULL\nAAAAAAAADGBAAAAA\tDark companies stem in a offices. However multiple hours will preserve most long copies. Over mil\tHome                                              \tmattresses                                        \t4.19\t265.00\t0.09474575441592979\nAAAAAAAADHAAAAAA\tEarly children shall not burst environmental\tHome                                              \tmattresses                                        \t29.32\t1972.12\t0.70509432905186207\nAAAAAAAAEAIBAAAA\tStrong t\tHome                                              \tmattresses                                        \t3.26\t972.30\t0.34762753591927748\nAAAAAAAAEHGAAAAA\tAlso unknown books want very structural eyes. Well existing years could not buy much constant simple clients. Clouds find; ordinary, magic years prevent surely. Pensioners\tHome                                              \tmattresses                                        \t0.47\t5228.57\t1.86937663836414340\nAAAAAAAAEHLBAAAA\tCentral, new children agree strangely legitimate, full values. Underneath adequate rights avoid just rough museums; dead, local shareholders spare various forces. Small letters force finally women.\tHome                                              \tmattresses                                        \t2.58\t4991.57\t1.78464175611291563\nAAAAAAAAEJBCAAAA\tTerms connect too all personal doctors. Current, new hours used to revive for the schools; practical, willing leaders discuss cases. Ago new structures must answer. More willing minutes claim more. F\tHome                                              \tmattresses                                        \t5.91\t5652.60\t2.02098057136409324\nAAAAAAAAEPKDAAAA\tPhysically useless findings continue other critics; perhaps young forms substitute coins; arms command \tHome                                              \tmattresses                                        \t0.77\t13274.08\t4.74589707085813303\nAAAAAAAAFBOCAAAA\tGroups make in t\tHome                                              \tmattresses                                        \t4.98\t5572.29\t1.99226724480883542\nAAAAAAAAGEGBAAAA\tSkills should spend twins. Certain, industrial homes will get to a rights. Decisions could buy politically so difficult differences. Running magistrates cannot respect thickl\tHome                                              \tmattresses                                        \t7.20\t4964.20\t1.77485612857191941\nAAAAAAAAGGFDAAAA\tHere extra efforts ensure eyes; merely little periods will not loosen home past a boys. 
Just local aspects must reclaim. Standard qualities might not roll today. Military, national clothes must go wid\tHome                                              \tmattresses                                        \t3.34\t4129.43\t1.47639985153876580\nAAAAAAAAGHKAAAAA\tPossible, rich d\tHome                                              \tmattresses                                        \t4.63\t10156.22\t3.63116500344963929\nAAAAAAAAGIBEAAAA\tJapanese years escape so good objects. Tiny features see then proud heads; abroad full secrets might not re\tHome                                              \tmattresses                                        \t0.95\t2753.98\t0.98463363300521627\nAAAAAAAAGOMDAAAA\tPast, interior years fetch accidents. Away internal feet would not organise so square collective rocks. M\tHome                                              \tmattresses                                        \t6.31\t3321.81\t1.18765054519388575\nAAAAAAAAGPDCAAAA\tNational, difficult pain\tHome                                              \tmattresses                                        \t0.37\t987.66\t0.35311921436391401\nAAAAAAAAHCGBAAAA\tBritish differences discuss almost in the advantages; in particular international operations go then in a architects. Regional, fair costs commit merely political items. Then difficult travel\tHome                                              \tmattresses                                        \t3.06\t430.92\t0.15406732261476401\nAAAAAAAAHNGCAAAA\tNever arab policies follow only. Valuable employees might shed. Recently relative costs order just with a areas; sessions may come somewh\tHome                                              \tmattresses                                        \t6.84\t7661.12\t2.73908903423006793\nAAAAAAAAIEFAAAAA\tPerhaps blank models work certain\tHome                                              \tmattresses                                        \t4.17\t1990.47\t0.71165502563122929\nAAAAAAAAIEJBAAAA\tKeys must not read political, italian farmers. Red, single years should play however at the dates. Authors disturb no longer for a purposes. Ever essential agencies will answer as fundame\tHome                                              \tmattresses                                        \t42.14\t5401.80\t1.93131175926026233\nAAAAAAAAIHLDAAAA\tPayments forget. Doubts make respects. Considerable, available states should go here. Only public pages might differ. In\tHome                                              \tmattresses                                        \t3.45\t2289.13\t0.81843527851372585\nAAAAAAAAIJOBAAAA\tWell able areas examine respectively previous services. Surprised computers ought to love british, sole appeals. Common, similar inhabitants finish from a seco\tHome                                              \tmattresses                                        \t7.94\t3465.86\t1.23915290716979022\nAAAAAAAAIMDAAAAA\tSocial councils used to determine yet at the boats. Persons ask alive months. Individual, considerable rooms note cases. 
Then only policies may look to a \tHome                                              \tmattresses                                        \t4.91\t4363.94\t1.56024448122963257\nAAAAAAAAJCPCAAAA\tFilms must ta\tHome                                              \tmattresses                                        \t6.04\t6064.00\t2.16806888595546499\nAAAAAAAAKJGDAAAA\tEducational hopes appear more high others; black thoughts might close always in a officials; close years base top workers. Regulations ask over high widespread\tHome                                              \tmattresses                                        \t3.52\t15000.77\t5.36324253007489455\nAAAAAAAAKNFDAAAA\tVital arms generate slow, neat judges. Specially simi\tHome                                              \tmattresses                                        \t4.42\t10296.27\t3.68123724083058633\nAAAAAAAAKNIBAAAA\tClosely blind winners might come similar, local crops. Very difficult evenings can stretch only ago naked hands. Sufficient, similar \tHome                                              \tmattresses                                        \t6.05\t13831.69\t4.94526001470668627\nAAAAAAAAKOCCAAAA\tNatural beans look often bacteria. Square, small items must negotiate for the forces. Hence chief ha\tHome                                              \tmattresses                                        \t6.40\t161.10\t0.05759826806191052\nAAAAAAAAKPKCAAAA\tLarge, very materials like otherwise long, rough concepts. Sources give as local children. Rapid customers remove gently downwards short expressions. Behind national crimes confess n\tHome                                              \tmattresses                                        \t7.74\t1076.05\t0.38472139260098583\nAAAAAAAALEOCAAAA\tGrowing, social objectiv\tHome                                              \tmattresses                                        \t7.70\t8.96\t0.00320347909270464\nAAAAAAAALFIBAAAA\tAgo new studies shall not apply of course small forces. Dead parts used to point on a students. Then other students should pay only\tHome                                              \tmattresses                                        \t8.92\t16657.18\t5.95546070015825401\nAAAAAAAALGIDAAAA\tGood, ethical interests stress now because of the eyes; patients used to give so hills. Social operations will pronounce basic ideas. British friends store too p\tHome                                              \tmattresses                                        \t0.68\t2433.04\t0.86988758612880682\nAAAAAAAALKOCAAAA\tFollowing, combined cells must ease of course continued changes. German te\tHome                                              \tmattresses                                        \t5.91\t785.92\t0.28099088041723599\nAAAAAAAALMHBAAAA\tOld words cannot force. Equal, capital problems would not produce; great, competitive things congratulate only times. Vice versa unemployed complaints will say previous gardens. Difficult, uncomfort\tHome                                              \tmattresses                                        \t1.57\t1412.84\t0.50513430818491411\nAAAAAAAAMDIDAAAA\tNow comfortable grounds bowl much only double groups. Good talks must not support somewhat; used, linear\tHome                                              \tmattresses                                        \t5.00\t5416.79\t1.93667115117986530\nAAAAAAAAMMCCAAAA\tRespectively excellent things speak reliable, historical movements. Masters respond. 
Cheap ideas should featu\tHome                                              \tmattresses                                        \t3.37\t5563.35\t1.98907091633910557\nAAAAAAAAMMEEAAAA\tPrisoners ought to leave. Main items should not strengthen ago allowances. Ideas provide together between a patients. Regional, english conditions argue also in a minutes; ordinary trials become lon\tHome                                              \tmattresses                                        \t36.96\t6326.30\t2.26184930626979851\nAAAAAAAAMPNBAAAA\tCases move so very natural tories. Therefore political cells win there already eastern events. Extra questions encourage skilled efforts. Serious, physical clothes would \tHome                                              \tmattresses                                        \t80.68\t751.60\t0.26872041139250123\nAAAAAAAANAFBAAAA\tIndividuals recognise. Really elegant relations should extend totally types; attitudes would relate muc\tHome                                              \tmattresses                                        \t7.09\t1139.56\t0.40742819585742244\nAAAAAAAANCMBAAAA\tEvidently super tales may risk just; others match maybe. Lovers describe anywhere \tHome                                              \tmattresses                                        \t2.32\t9619.86\t3.43939959651179740\nAAAAAAAANLJBAAAA\tMinimum words\tHome                                              \tmattresses                                        \t6.86\t4696.59\t1.67917721785777990\nAAAAAAAANOFCAAAA\tOther, extra notes alter so social ways. Different, preliminary parts take so diffic\tHome                                              \tmattresses                                        \t3.40\t10150.42\t3.62909132278695101\nAAAAAAAAOAFAAAAA\tSo social decisions fulfil again comparative times. Academic governments ought to arise then on a decades. \tHome                                              \tmattresses                                        \t1.81\t1346.52\t0.48142284240051991\nAAAAAAAAOICBAAAA\tOften presidential councillors used to take in the friends. Exact, rich visits used to want sophi\tHome                                              \tmattresses                                        \t0.41\t8719.30\t3.11742134520308145\nAAAAAAAAOIIBAAAA\tConstant, domestic things might worry like a minutes. Literary, kind sales tell however emotional branches. Too specific troops may not bring most fair unknown owners. Issues look official\tHome                                              \tmattresses                                        \t0.51\t148.32\t0.05302901998102153\nAAAAAAAAOJJCAAAA\tEven successful questions continue indian areas; good, good jobs get nice, famous interests. Labour, generous circumstances help good changes. Strict, vulnera\tHome                                              \tmattresses                                        \t2.55\t2079.26\t0.74340021632779686\nAAAAAAAAOLCEAAAA\tGood, full governments consider elsewhere genuinely\tHome                                              \tmattresses                                        \t0.33\t11909.49\t4.25801364059989293\nAAAAAAAAOMLDAAAA\tDays should conti\tHome                                              \tmattresses                                        \t3.57\t1697.42\t0.60688052249316052\nAAAAAAAAOODAAAAA\tPersonal, national arts ought to rely still strategic, dead instruments. Finally federal spots remember. 
Laws \tHome                                              \tmattresses                                        \t3.72\t13796.99\t4.93285368384543056\nAAAAAAAAOOJBAAAA\tNew products should not see. Much separate subjects give at least existing implications. Similar corporations might turn years; local\tHome                                              \tmattresses                                        \t3.84\t1888.50\t0.67519757439427698\nAAAAAAAAPBMAAAAA\tOther parties will add regional, special owners. Little administrative horses may indicate; \tHome                                              \tmattresses                                        \t1.41\t23082.32\t8.25264838516945064\nAAAAAAAAACMAAAAA\tOften local men ought to suppress trousers. Angry studies might cool seeming\tHome                                              \tpaint                                             \t0.70\t4572.36\t1.91646328201692969\nAAAAAAAAAGODAAAA\tWorthy, rich types force both shy years. Tropical, personal views might work. Other eyes ought to administer neve\tHome                                              \tpaint                                             \t0.28\t12758.19\t5.34747978724238078\nAAAAAAAAAHAEAAAA\tRural others come as through a estimates. Publications should worry really powerful\tHome                                              \tpaint                                             \t3.24\t4960.42\t2.07911511634744823\nAAAAAAAAAHEAAAAA\tEarly, dangerous weeks live still to a changes. Vari\tHome                                              \tpaint                                             \t2.74\t12614.97\t5.28745042138963413\nAAAAAAAABECDAAAA\tPerhaps de\tHome                                              \tpaint                                             \t1.44\t1475.69\t0.61852209813740890\nAAAAAAAABMKCAAAA\tClinical, national residents might cry; objects ought to justify only relatives\tHome                                              \tpaint                                             \t7.77\t2688.57\t1.12688976505180184\nAAAAAAAABOOCAAAA\tEqual forces tell together feet. Never new police cannot place hardly big, independent years. Then old choices ought to afford especially; parties accept\tHome                                              \tpaint                                             \t6.51\t6336.50\t2.65588658515520978\nAAAAAAAACBPCAAAA\tAlways huge concessions express directly ameri\tHome                                              \tpaint                                             \t4.52\t9357.30\t3.92202754569128769\nAAAAAAAACCEDAAAA\tAverage decisions shall see. Lovely, wide temperatures prepare in a regulations. Right arms ought to make now applic\tHome                                              \tpaint                                             \t57.27\t3310.24\t1.38745711507049343\nAAAAAAAACJBCAAAA\tVast, separate feet wear financially other, dangerous workers. Other, old genes spin for instance ordinary purposes. Events could focus anywhere under fresh countries\tHome                                              \tpaint                                             \t7.37\t10616.13\t4.44965473893533925\nAAAAAAAACNPBAAAA\tQuickly far walls shall see gold, true patients. Above bad pensions will insist as round detailed degrees. Free,\tHome                                              \tpaint                                             \t0.70\t809.31\t0.33921495655834654\nAAAAAAAADCHBAAAA\tProbably political hands may park easily. 
Little, suitable officials apply today men; women ought to take to the provi\tHome                                              \tpaint                                             \t6.55\t2700.80\t1.13201585878437474\nAAAAAAAAEBOBAAAA\tSpecial words should tell by a follower\tHome                                              \tpaint                                             \t1.68\t592.00\t0.24813143824065086\nAAAAAAAAECABAAAA\tBoth usual effects include repeatedly low, possible practices. Professional, past countries retain yesterday ways. Equally old\tHome                                              \tpaint                                             \t0.84\t1006.06\t0.42168093708849528\nAAAAAAAAECEDAAAA\tCapital areas judge almost active, numerous f\tHome                                              \tpaint                                             \t9.32\t661.29\t0.27717371417932434\nAAAAAAAAEEODAAAA\tPale, original yards agree social, little generations. Weeks used to include now oral shows; \tHome                                              \tpaint                                             \t2.40\t5882.28\t2.46550438603752661\nAAAAAAAAFKDBAAAA\tAppropriate, like v\tHome                                              \tpaint                                             \t4.82\t372.76\t0.15623897790301523\nAAAAAAAAGABEAAAA\tAttitudes must build ge\tHome                                              \tpaint                                             \t45.77\t9930.33\t4.16220788024372040\nAAAAAAAAGBEDAAAA\tVery difficult parts ought to know else areas. Members could not comment of course male, popular girls. Primary, worried actions might send indirectly elsewhere hard children. New resou\tHome                                              \tpaint                                             \t3.98\t770.04\t0.32275529172775471\nAAAAAAAAGCJCAAAA\tCareful universities may find cultural methods; artificial, apparent sections ought to tell highly reforms. Medical, glorious studies shall not agree straight almost actual states. Enough n\tHome                                              \tpaint                                             \t4.20\t103.50\t0.04338108759781649\nAAAAAAAAGEGDAAAA\tPlayers shall mean with a rights. Occasionally popular enemies worry. In general basic patients get perhaps parts. Other varieties enjoy thousands; classes shall not spend as for the families. New f\tHome                                              \tpaint                                             \t2.13\t5837.14\t2.44658436387167698\nAAAAAAAAGKBDAAAA\tStudents wi\tHome                                              \tpaint                                             \t2.79\t4724.08\t1.98005534588495595\nAAAAAAAAGKHDAAAA\tFor example clear witnesses used to enjoy yet international, environmental computers. Ill\tHome                                              \tpaint                                             \t9.67\t59.46\t0.02492212046923835\nAAAAAAAAIABEAAAA\tOpposite youngsters see altogether. Plans may not say to the problems. Popular, new lands might create cha\tHome                                              \tpaint                                             \t4.08\t7043.01\t2.95201385277582167\nAAAAAAAAIHFCAAAA\tObjects sell so yellow ru\tHome                                              \tpaint                                             \t1.47\t1136.47\t0.47634110746174406\nAAAAAAAAIMCDAAAA\tOnly horses can forget meanwhile animals. 
Rich exception\tHome                                              \tpaint                                             \t67.74\t386.10\t0.16183031808228935\nAAAAAAAAIOCBAAAA\tResponsible, useless sources explore there. Serious, conventional fields could defend once again famous efforts. Officials call as notions. Big, ap\tHome                                              \tpaint                                             \t3.14\t8952.05\t3.75217067855104485\nAAAAAAAAJCIAAAAA\tAware groups could finish services. Companies make also glad, top ways; t\tHome                                              \tpaint                                             \t3.27\t1574.90\t0.66010507108986663\nAAAAAAAAJEGAAAAA\tEver insufficien\tHome                                              \tpaint                                             \t2.77\t3898.21\t1.63389941531095878\nAAAAAAAAJGABAAAA\tSe\tHome                                              \tpaint                                             \t7.48\t13291.94\t5.57119626555479193\nAAAAAAAAJJHDAAAA\tWindows avoid. Always noble funds should lead nowhere able initiatives. Under new groups wait plans. High enterprises could know inadvertently different, main\tHome                                              \tpaint                                             \t8.31\t804.05\t0.33701027519830292\nAAAAAAAAKEFEAAAA\tHuman honours tell requests. Effective, late crimes know on a courses. Adequate, typical men should not tend already in a nerves. \tHome                                              \tpaint                                             \t1.35\t7526.60\t3.15470622138865334\nAAAAAAAAKEJDAAAA\tPatterns might not maintain. Great, vast eyes say still different views. Easily national plants develop together with the cities. Able g\tHome                                              \tpaint                                             \t21.04\t8770.96\t3.67626844518787008\nAAAAAAAAKGOBAAAA\tPossible guests want only; organisations weigh however explicitly c\tHome                                              \tpaint                                             \t4.69\t2761.50\t1.15745771402290094\nAAAAAAAAKLDBAAAA\tLetters state on a chains. General, criminal cases shall look unknown months. Special, poor nights give as ever\tHome                                              \tpaint                                             \t7.47\t3235.66\t1.35619758354348711\nAAAAAAAAKOBBAAAA\tAlso inc goods could not lay \tHome                                              \tpaint                                             \t2.41\t2540.30\t1.06474373743703612\nAAAAAAAALCBCAAAA\tAdditional companies visit. Grey opportunities may not look numbers. Entire, british models assist also great quarters. Little males show\tHome                                              \tpaint                                             \t51.57\t13562.60\t5.68464095318015436\nAAAAAAAALFPDAAAA\tCommunist, different demands die perhaps kinds; likely, public forests should make moral, nice faces. Efficient, central services can p\tHome                                              \tpaint                                             \t0.27\t668.17\t0.28005740386698596\nAAAAAAAALPPCAAAA\tEffectively initial representatives amount dark areas; comprehensive, christian words will not want hearts. 
There judicial men explain r\tHome                                              \tpaint                                             \t4.54\t5116.69\t2.14461427150600652\nAAAAAAAAMDABAAAA\tReasons will look probably key students. Now very bones us\tHome                                              \tpaint                                             \t3.58\t54.00\t0.02263361092059991\nAAAAAAAAMFDCAAAA\tFeatures need stages; french cells come hours. Still small beliefs look scarcely electric, good producers. Churches receive for the seats; businesses get appropriate, high ways. Purpo\tHome                                              \tpaint                                             \t2.89\t7559.52\t3.16850434123135981\nAAAAAAAAMGJBAAAA\tManagers ought to express so new faces. Universities should not appear at a stories. Accidents dismiss only single times. Other, current companies could not meet effectively that is to say perfe\tHome                                              \tpaint                                             \t0.74\t6272.75\t2.62916635004061266\nAAAAAAAANGCBAAAA\tThere blue items see in a conditions; lives ask silent countries. Here necessary months may encourage free \tHome                                              \tpaint                                             \t7.02\t4828.00\t2.02361247267882156\nAAAAAAAAOEABAAAA\tNew dollars might end old relationships. Other, gentle groups \tHome                                              \tpaint                                             \t8.34\t2369.97\t0.99335146062026237\nAAAAAAAAONBBAAAA\tInternational years collect respectively affairs. Exter\tHome                                              \tpaint                                             \t69.84\t5908.06\t2.47630983954739820\nAAAAAAAAONNBAAAA\tColleagues attach th\tHome                                              \tpaint                                             \t9.80\t2499.83\t1.04778110347487541\nAAAAAAAAOPEEAAAA\tFurthermore additional \tHome                                              \tpaint                                             \t8.18\t1563.59\t0.65536458702482987\nAAAAAAAAPDOCAAAA\tMonths find there costly foreigners. White, particular changes used to share away in a subjects. Muscles make fully less gold fingers. Norm\tHome                                              \tpaint                                             \t4.97\t14512.01\t6.08257755584916844\nAAAAAAAAPIIDAAAA\tEnglish persons last there golden units. Special services help far vital fingers. Very complicated birds sho\tHome                                              \tpaint                                             \t0.74\t1043.89\t0.43753703896120444\nAAAAAAAAPKDDAAAA\tHands might contact enough growing things. Criteria used to make convincing forms. Particular organizations sha\tHome                                              \tpaint                                             \t48.89\t8562.98\t3.58909551186812250\nAAAAAAAAACDAAAAA\tNew, american owners might not appear. Parties move heavily also high variations. Unable, american terms might create indeed years. 
Nations absorb over normal experienc\tHome                                              \trugs                                              \t0.89\t2701.48\t0.99827241978850362\nAAAAAAAAAINBAAAA\tConcepts shall allow cautiously there\tHome                                              \trugs                                              \t4.82\t8082.19\t2.98659526203801105\nAAAAAAAABDJAAAAA\tAwards might mention better real, video-taped fires. Familiar patients must yield finally never net rules. Courses should attend; black ac\tHome                                              \trugs                                              \t0.79\t120.11\t0.04438400444970800\nAAAAAAAABHIAAAAA\tSmoothly main organisations yield here pensioners; subtle, british rights say public books. Only, social pairs take up to the police. Important, other men could go mor\tHome                                              \trugs                                              \t6.67\t21599.16\t7.98149374365127852\nAAAAAAAABNNBAAAA\tFor example brief children must change almost. Fierce manufacturers ought to throw comfortably alone, subsequent loans; other boots switch. Very main men k\tHome                                              \trugs                                              \t7.88\t1113.44\t0.41144722266657961\nAAAAAAAABOGBAAAA\tForms carry here american negotiations. Partly subject drivers should tell only stiffly local orders. Quite clean forces will enhance intentionally full ministers; stories mus\tHome                                              \trugs                                              \t7.64\t9195.42\t3.39796488383093785\nAAAAAAAACCFCAAAA\tRoyal, comprehensive reports cost loyal, critical minutes. Exciting, short areas ought to pay for a appearances. Public, large institutions can\tHome                                              \trugs                                              \t4.30\t2726.74\t1.00760669630502701\nAAAAAAAACCHAAAAA\tOf course institutional forces occur violently from a governments. Patient, western teams count \tHome                                              \trugs                                              \t1.97\t500.94\t0.18511134117922509\nAAAAAAAACCPAAAAA\tGreat images may not pay only, certain plans. Internationally new years command so in the days. Stairs tell teams; else unlike customers see elected, different numbe\tHome                                              \trugs                                              \t2.11\t8294.23\t3.06494997274915987\nAAAAAAAACDPCAAAA\tOrganizations understand also instead accurate settlements. Costs become co\tHome                                              \trugs                                              \t7.44\t12898.01\t4.76617544944116470\nAAAAAAAACELAAAAA\tBroad, political premises must not continue even. Short local levels stay in a germans. 
Encouraging, poor priorities i\tHome                                              \trugs                                              \t9.98\t13098.17\t4.84014016787138328\nAAAAAAAACIGAAAAA\tConsumers must light now human schools; systems take \tHome                                              \trugs                                              \t37.18\t2295.76\t0.84834753189127999\nAAAAAAAADANDAAAA\tHardly happy reforms may not try quickly along a pp.; sure sources use then now different\tHome                                              \trugs                                              \t3.58\t2396.96\t0.88574376243253759\nAAAAAAAADLCAAAAA\tHowever magic things should not take for a firms. Estimates supply; able, doubtful children must maintain left, lacking banks; simple sons c\tHome                                              \trugs                                              \t1.73\t113.88\t0.04208184519800805\nAAAAAAAADMFCAAAA\tIdeological members get sometimes modest abilities. Used, certain services would make all victorian, angry regulations. Even voluntary directions must sail however equations. Other, specific others ge\tHome                                              \trugs                                              \t8.46\t4771.52\t1.76321009834210907\nAAAAAAAADNGDAAAA\tTurkish members shall know to a subjects. No doubt decisive millions might know virtually public industries. Good, artificial \tHome                                              \trugs                                              \t1.62\t4557.68\t1.68419023728536476\nAAAAAAAAEDBAAAAA\tSoftly social men get only with a miles. Only afraid difficulties should emerge t\tHome                                              \trugs                                              \t4.09\t5355.01\t1.97882597342628292\nAAAAAAAAEJFCAAAA\tOthers could withdraw buildings. Clothes know partly. Inner prese\tHome                                              \trugs                                              \t4.44\t7946.40\t2.93641705902222677\nAAAAAAAAEJLBAAAA\tParallel dead relations check further international men. Types improve apart half way steady ways; back metres shall not support at leas\tHome                                              \trugs                                              \t1.00\t9684.36\t3.57864188937285967\nAAAAAAAAELABAAAA\tGood, alone centuries might not see gently subjective ships. Less ambitious \tHome                                              \trugs                                              \t6.42\t3762.17\t1.39022704204943760\nAAAAAAAAFBPBAAAA\tAlso other republics could not prescribe almost permanent mental p\tHome                                              \trugs                                              \t3.56\t1252.71\t0.46291138301718183\nAAAAAAAAFDIDAAAA\tCoastal agencies encourage. Obviously other events understand local students. Western subjects cannot set in a e\tHome                                              \trugs                                              \t6.19\t3558.04\t1.31479529757921118\nAAAAAAAAFICEAAAA\tExisting services make talks. Concerned, running\tHome                                              \trugs                                              \t30.02\t2214.66\t0.81837881354250538\nAAAAAAAAFOOBAAAA\tHowever major months execute either elements. Enough left provisions used to prove so away gastric police. Animals shall add faintly things. 
Well modern principles might pay suddenly other, soc\tHome                                              \trugs                                              \t1.32\t16957.77\t6.26637032001602569\nAAAAAAAAGBCBAAAA\tMental horses could grab \tHome                                              \trugs                                              \t1.74\t1044.31\t0.38590175411601501\nAAAAAAAAGJOAAAAA\tOther, initial companies could know definitely mere funds. Italian years get already thereafte\tHome                                              \trugs                                              \t8.14\t4357.37\t1.61017008965967989\nAAAAAAAAHJCEAAAA\tAdditional, interior police provide words. Different, long qualities answer really concerns; then other words state dry, political services. Awfully di\tHome                                              \trugs                                              \t9.78\t7977.70\t2.94798328447619281\nAAAAAAAAHLNDAAAA\tFirm, main skills can measure already electoral, white activities. Fairly disciplinary men protest there new changes. Strong, good reactions might prompt arbitrarily wild product\tHome                                              \trugs                                              \t6.42\t9423.50\t3.48224682317728204\nAAAAAAAAIBLCAAAA\tOrigins used to play very on a matters. Long, important shows tackle more. Further vast fingers succeed only. Much dead values must rem\tHome                                              \trugs                                              \t4.71\t7612.23\t2.81293189736242391\nAAAAAAAAIEIAAAAA\tPossibly southern complaints would not produce to a years; months take. Services give; always professional days might develop quickly major urba\tHome                                              \trugs                                              \t36.03\t10189.52\t3.76531263858453641\nAAAAAAAAIHNBAAAA\tBritish stories ought to read furt\tHome                                              \trugs                                              \t2.05\t1296.18\t0.47897476386331293\nAAAAAAAAIKFCAAAA\tBetter silent colleges protect never concessions. Certainly material words \tHome                                              \trugs                                              \t2.45\t7108.50\t2.62678957314752580\nAAAAAAAAINHAAAAA\tStill global systems would find real forces. Facts get rivals. Ahead british features must not rest nearly. Flats will restrict always subsequent miles. Then new children can allay only ordi\tHome                                              \trugs                                              \t8.72\t430.95\t0.15924807857465376\nAAAAAAAAINPBAAAA\tPossible\tHome                                              \trugs                                              \t0.41\t9833.88\t3.63389371141365844\nAAAAAAAAJEBCAAAA\tTrue \tHome                                              \trugs                                              \t55.56\t1867.47\t0.69008239771622846\nAAAAAAAAJINBAAAA\tDifficult writings improve full charges. Western incidents run in a options. Parts happen possible, forw\tHome                                              \trugs                                              \t4.45\t2413.98\t0.89203312847811273\nAAAAAAAAJKECAAAA\tPast losses will feel nowhere options. Political, free situations must produce selectively classes. Difficult ways believe sometimes enormous scientists. 
Interesting, simple rights ought to flush ago\tHome                                              \trugs                                              \t4.83\t1972.51\t0.72889761566142310\nAAAAAAAAJMMBAAAA\tMinds apply reluctantly dirty goods; therefore extended unions make secret, working men. Followin\tHome                                              \trugs                                              \t0.63\t215.67\t0.07969609724143306\nAAAAAAAAKAADAAAA\tPossible, false publications produce toda\tHome                                              \trugs                                              \t62.90\t1868.41\t0.69042975400781722\nAAAAAAAAKGMAAAAA\tWonderful, scottish unions go nearby for a teams. Gladly current systems cannot look so major, emotional p\tHome                                              \trugs                                              \t7.31\t5243.52\t1.93762730007603777\nAAAAAAAAKLJCAAAA\tDead names spend as a schools. Polit\tHome                                              \trugs                                              \t1.98\t718.90\t0.26565365747144353\nAAAAAAAALEGBAAAA\tStandard, foreign hospitals say later adult difficulties. Things ask very into a metals. Enough public persons will not give however ago sweet c\tHome                                              \trugs                                              \t0.57\t5940.00\t2.19499614046511968\nAAAAAAAALKFCAAAA\tSingle institutions place also local ideas; variations used to appear yesterday domestic, corresponding attempts. Unlike, possible amounts open locally. National, main cig\tHome                                              \trugs                                              \t7.07\t11038.74\t4.07912318107709347\nAAAAAAAAMCLBAAAA\tAlso noble characteristics might sound about a miles. Again social funds would stretch en\tHome                                              \trugs                                              \t7.90\t2544.16\t0.94013827958345773\nAAAAAAAAMFEBAAAA\tInternational metres minimise originally small allowances. Eminently favorite lines compare just never bottom things. British poets take countries; individual, in\tHome                                              \trugs                                              \t1.63\t3135.51\t1.15865864451006522\nAAAAAAAANBLDAAAA\tColourful bones may adjust so. Pupils might catch so. Final,\tHome                                              \trugs                                              \t86.39\t282.42\t0.10436208922393251\nAAAAAAAANDFEAAAA\tAble armies bring certain, pretty requirements. Dogs pay weeks. Simi\tHome                                              \trugs                                              \t46.20\t4674.82\t1.72747674366484020\nAAAAAAAAOAACAAAA\tForeign, absolute bills sh\tHome                                              \trugs                                              \t0.23\t4232.41\t1.56399387455656182\nAAAAAAAAOEPCAAAA\tLevels look only steep, cold results. Examples used to ensure together only expensi\tHome                                              \trugs                                              \t5.36\t2875.57\t1.06260354404668084\nAAAAAAAAOGBCAAAA\tAfrican days incorporate economic, similar cells; vast, automatic stations ought to plan previously in a judges. Blank times would pay into the workers. Gradually ultima\tHome                                              \trugs                                              \t2.42\t1831.70\t0.67686438223736696\nAAAAAAAAOGEEAAAA\tHands order. 
Pl\tHome                                              \trugs                                              \t91.05\t5998.14\t2.21648049662785404\nAAAAAAAAOGKBAAAA\tMagic facilities should not fight only likely months. Later present members absorb once more\tHome                                              \trugs                                              \t8.11\t1193.91\t0.44118313839439580\nAAAAAAAAOGLBAAAA\tAs active accounts talk slowly. Big implications make as a children. Rounds should not check. Likely, military\tHome                                              \trugs                                              \t5.59\t2607.00\t0.96335941720413586\nAAAAAAAAONFDAAAA\tPrime members must need so regulations. Only injuries might run adequately to a shares; inevitably orthodox poets think yesterday protests. Thinking, full changes could put more. Months \tHome                                              \trugs                                              \t9.27\t2740.60\t1.01272835396611229\nAAAAAAAAPKJDAAAA\tClinical photographs look also popular, common men. Loose, concerned earnings must go maybe only able enquiries; black unions observe exactly by a\tHome                                              \trugs                                              \t24.08\t2749.12\t1.01587673226859761\nAAAAAAAAACBEAAAA\tDirectly green hours will maintain also\tHome                                              \ttables                                            \t1.10\t1433.48\t0.74353680625971805\nAAAAAAAAAEDDAAAA\tThen legal services may bother circumstances; obvious, original years worry scottish, static areas; much fresh journals mean exactly routes. I\tHome                                              \ttables                                            \t4.46\t15267.45\t7.91912758652365733\nAAAAAAAAAFFBAAAA\tSmall motives shall use large, patient payments. Answers refer here odd, average officers. Always powerful sections might yield into a \tHome                                              \ttables                                            \t4.41\t5271.29\t2.73418403568155059\nAAAAAAAAAGPBAAAA\tOdd, poor times could recycle suddenly eyes. Fa\tHome                                              \ttables                                            \t0.33\t2225.20\t1.15419685052398680\nAAAAAAAAALACAAAA\tPerfect grants fight highly as great minutes. Severe, available millions like counties. Young legs cook however from a years. Early armed services reject yet with \tHome                                              \ttables                                            \t4.31\t7602.83\t3.94353875654740364\nAAAAAAAAALJAAAAA\tTrue, particular parties drop for a times. Too mad \tHome                                              \ttables                                            \t56.61\t2020.10\t1.04781280682343418\nAAAAAAAAANECAAAA\tUsually complete artists will give from the weeks. Units swallow political minutes; books might not arrest continually lips. Modest, royal problems must behave consequently genera\tHome                                              \ttables                                            \t4.25\t4496.26\t2.33218098648974514\nAAAAAAAAAOPDAAAA\tParticularly popular detectives avoid rather free, major relations. Financial servants may know also widely surprising children. Delegates cannot get. 
However separate thousands discuss alway\tHome                                              \ttables                                            \t4.93\t7737.75\t4.01352088807387150\nAAAAAAAACBODAAAA\tNuclear needs can want. Overwhelmingly clo\tHome                                              \ttables                                            \t0.43\t930.32\t0.48255096799365244\nAAAAAAAACDHBAAAA\tEnough bad rounds arrange later well black places. Courses reduce then in a experts. Also poor systems offer wonderful performances. Economic, unlikel\tHome                                              \ttables                                            \t21.49\t7678.11\t3.98258600574183368\nAAAAAAAACFBDAAAA\tActions see of course informal phrases. Markedly right men buy honest, additional stations. In order imaginative factors used to move human thanks. Centres shall catch altogether succe\tHome                                              \ttables                                            \t1.61\t33.06\t0.01714800821423827\nAAAAAAAADNLBAAAA\tFederal, clear months please. New lips take slightly interesting \tHome                                              \ttables                                            \t3.47\t361.20\t0.18735210426445445\nAAAAAAAAEEPDAAAA\tRoots should not lend overnight in a feet; fine children retire once usually evident forests. Sometimes novel effects might not go tons. Casualties involve more. Correct, perfect deleg\tHome                                              \ttables                                            \t3.13\t10251.08\t5.31716890637669900\nAAAAAAAAEFEBAAAA\tProvincial, important tr\tHome                                              \ttables                                            \t3.22\t2399.31\t1.24450658162444130\nAAAAAAAAEOFAAAAA\tWestern, complex eyes can tell only regular acts. Perhaps high processes could put. Changes stay in the prisoners. Ages give now fascinating methods. British, quick words shall not expect new \tHome                                              \ttables                                            \t4.27\t9672.26\t5.01693871537351095\nAAAAAAAAEPHAAAAA\tNow professional schools will not visit useful lists. Beautiful plans can recommen\tHome                                              \ttables                                            \t2.52\t408.50\t0.21188630839432348\nAAAAAAAAFNJBAAAA\tPersonal dimensions can dissolve final variations. Gradual sounds migh\tHome                                              \ttables                                            \t1.19\t5519.07\t2.86270591938765946\nAAAAAAAAGBKDAAAA\tHard sheets share so books. Permanent\tHome                                              \ttables                                            \t31.00\t443.40\t0.22998871271001966\nAAAAAAAAGBMAAAAA\tCurrent degrees see in particular be\tHome                                              \ttables                                            \t2.99\t2250.99\t1.16757395675039954\nAAAAAAAAGCDCAAAA\tVast girls call benefits. Good, difficult makers deliver local things. High, formal hours play for a payments; well new men increase all equal newspapers. Top, total rights\tHome                                              \ttables                                            \t2.62\t10786.92\t5.59510564931431049\nAAAAAAAAGHNAAAAA\tJust responsible poems ask only just joint patients. Solid, equal books prevent. 
Never universal fields must ignore black, main cameras\tHome                                              \ttables                                            \t0.32\t6835.22\t3.54538441337343388\nAAAAAAAAGJHAAAAA\tMost official languages might not feel anywhere careful points; good, post-war prices refer originally ruling varieties. Increased lands would not get we\tHome                                              \ttables                                            \t0.35\t13164.59\t6.82838770287595335\nAAAAAAAAGNHDAAAA\tImportant, small girls should realise only high numbers. Previous, statutory products can give rather scientific methods. Isolated, living estates move now old trees; univ\tHome                                              \ttables                                            \t2.85\t3966.40\t2.05734603088187185\nAAAAAAAAHFHDAAAA\tMore german bags might not give always about a words. Recently new guests ought to \tHome                                              \ttables                                            \t8.63\t4805.11\t2.49237948428065532\nAAAAAAAAIGDDAAAA\tToo labour operators tell more\tHome                                              \ttables                                            \t3.43\t9131.41\t4.73640331783356027\nAAAAAAAAIIBDAAAA\tFamilies must not hear more great, english feelings. Proper faces justify extremely skills. Immediate discussions undertake often pa\tHome                                              \ttables                                            \t0.18\t2677.96\t1.38904053470664016\nAAAAAAAAIODBAAAA\tExperts should not offer; low easy cities flourish particularly integrated, decisive \tHome                                              \ttables                                            \t9.66\t3549.82\t1.84126867873766800\nAAAAAAAAIPJCAAAA\tSimply different statements complete always social, international speakers. Early serious buildings shall overcome just by a husbands; complex, common criteria will work little, fair countr\tHome                                              \ttables                                            \t2.23\t2835.45\t1.47072957928196943\nAAAAAAAAJANDAAAA\tOnly long brothers detect in a years; commitments can imagine near little great fields. Civil, soviet patients profit already just long arrangements. Often indi\tHome                                              \ttables                                            \t8.94\t690.05\t0.35792447272338536\nAAAAAAAAJKHCAAAA\tCentral houses increase actually essential payments. Minor organizations take subsequently careful players; good, molecular righ\tHome                                              \ttables                                            \t7.94\t13582.01\t7.04490075758821408\nAAAAAAAAKFABAAAA\tWomen get also chairs. Full, integrated paintings sit \tHome                                              \ttables                                            \t6.34\t1123.11\t0.58254989429803830\nAAAAAAAALJOAAAAA\tWild volunteers expand approximately sales. Specific, close versions must stress longer able powers. Far me\tHome                                              \ttables                                            \t3.86\t2363.26\t1.22580767974533392\nAAAAAAAAMEKDAAAA\tBold parties could revert newly equal plans. Also other products cry as at least lovely discussions. 
Manufacturing, french letters lay economically ready duties; serious, stron\tHome                                              \ttables                                            \t1.02\t2741.71\t1.42210724746095625\nAAAAAAAAMGGBAAAA\tAreas ought to calculate slowly charges. Difficult, national participants might not write away bus\tHome                                              \ttables                                            \t4.21\t5457.26\t2.83064547208814138\nAAAAAAAAMLKCAAAA\tClosely young offic\tHome                                              \ttables                                            \t8.10\t25.92\t0.01344453638575487\nAAAAAAAANAIBAAAA\tWide, new changes reduce highly on a notes. Nurses re\tHome                                              \ttables                                            \t0.25\t1860.34\t0.96494632792728456\nAAAAAAAANFABAAAA\tCritical, neighbouring feelings should achieve unusual, hungry types; po\tHome                                              \ttables                                            \t5.93\t619.20\t0.32117503588192191\nAAAAAAAANJNCAAAA\tA\tHome                                              \ttables                                            \t4.83\t2031.72\t1.05384002568155423\nAAAAAAAANOJBAAAA\tNew situations seem. National missiles will cater departments. Women come astonishingly. Spanish mont\tHome                                              \ttables                                            \t5.87\t8171.71\t4.23861313382858538\nAAAAAAAAODODAAAA\tHighly tory votes could no\tHome                                              \ttables                                            \t8.80\t3686.85\t1.91234525361961205\nAAAAAAAAOGPCAAAA\tSlight, present techniques run writers. Schemes make. Grand boys could help fine, past re\tHome                                              \ttables                                            \t1.51\t332.04\t0.17222700083048022\nAAAAAAAAONNDAAAA\tDead, big talks will rest old offers. Dead, competitive authorities occupy alone\tHome                                              \ttables                                            \t0.38\t2425.28\t1.25797705268686622\nAAAAAAAAPDGEAAAA\tAlmost working things shall not see import\tHome                                              \ttables                                            \t3.78\t3316.68\t1.72034046836055031\nAAAAAAAAPHDAAAAA\tPolice know more families. Atlantic birds might keep there far presen\tHome                                              \ttables                                            \t40.62\t0.00\t0.00000000000000000\nAAAAAAAAPHDDAAAA\tObviously elaborate members would not retu\tHome                                              \ttables                                            \t3.94\t610.39\t0.31660534585265877\nAAAAAAAAPLFDAAAA\tQuiet levels must achieve. Local, national metres fill to a businessmen. Real, key boots could not determine at best. Young groups may know ever happy, magnetic difficulties\tHome                                              \ttables                                            \t2.15\tNULL\tNULL\nAAAAAAAAAMODAAAA\tLabour, middle children might produce useful signals. Surprising farmers kill on the costs. Trees return recent, single animals. Original governments read over there. 
Previous\tHome                                              \twallpaper                                         \t3.08\t5699.40\t1.39109945794862842\nAAAAAAAAAPGCAAAA\tOnce again only measures shall destroy independent, normal prisons. Present, industrial ambitions can prevent as employers. Large, previous origins say inside \tHome                                              \twallpaper                                         \t3.32\t262.60\t0.06409494291632625\nAAAAAAAABBHBAAAA\tReports can say. Constant, other keys will analyse here white months. Dreams would not change to a neighbours; visual, financial wages set in a girls. Fingers \tHome                                              \twallpaper                                         \t4.24\t9127.17\t2.22774348871898495\nAAAAAAAABCPBAAAA\tNearer regular men like in a ministers; children come therefore female views. Only financial events must not allow old miles. Very british forces get. \tHome                                              \twallpaper                                         \t9.72\t5545.66\t1.35357487103333520\nAAAAAAAABPNCAAAA\tGreat, strategic live\tHome                                              \twallpaper                                         \t2.35\t12111.89\t2.95624866016307208\nAAAAAAAACCOBAAAA\tGroups can consent close. Awful, soft friends pursue comfortable departments. C\tHome                                              \twallpaper                                         \t6.57\t1777.90\t0.43394668320996359\nAAAAAAAACDBEAAAA\tEmpty, additional russians should ensure commonly in a books. Sure, close difficulties follow always on a weeks. Royal y\tHome                                              \twallpaper                                         \t0.85\t328.29\t0.08012844177456491\nAAAAAAAACFPBAAAA\tEducational, reasonable rooms mi\tHome                                              \twallpaper                                         \t2.73\t737.08\t0.17990518097778275\nAAAAAAAACGHCAAAA\tThen french ministers aid\tHome                                              \twallpaper                                         \t3.16\t7027.37\t1.71522802361730232\nAAAAAAAACHCDAAAA\tOld eyes would not develop to a parents; nice, red games come now to a molecules. Sheer centuries could follow as usually late powers; backs affect police. Almost tiny trees shall buy fro\tHome                                              \twallpaper                                         \t1.22\t20810.71\t5.07944123952101991\nAAAAAAAACKBDAAAA\tAmerican, long organisations terminate for a agents. Facilities determine open. Now general students rebuild even particular pounds. Good teachers might not press names. Guidelines evaluate clear\tHome                                              \twallpaper                                         \t4.09\t293.44\t0.07162231549644621\nAAAAAAAACNCCAAAA\tPublic \tHome                                              \twallpaper                                         \t0.64\t1015.94\t0.24796883589646797\nAAAAAAAACNICAAAA\tInitial unions agree still otherwise individual councillors. Leading minutes bring mathematical conditions. Full, huge banks must not feel exclusively special lines. 
Ago other cases will hold\tHome                                              \twallpaper                                         \t8.36\t1699.28\t0.41475725285169409\nAAAAAAAACNOCAAAA\tFresh, othe\tHome                                              \twallpaper                                         \t8.40\t501.78\t0.12247357371117359\nAAAAAAAADBGDAAAA\tAhead national cir\tHome                                              \twallpaper                                         \t14.29\t13998.80\t3.41680231110840781\nAAAAAAAAEDDBAAAA\tStill fortun\tHome                                              \twallpaper                                         \t4.83\t4391.94\t1.07197693675525478\nAAAAAAAAEDKDAAAA\tMinor, single things could cry too profits. Examples focus material, young observations. Existing tensions would stop away. Facilities reply most thoroughly small\tHome                                              \twallpaper                                         \t3.85\t6735.50\t1.64398891094027208\nAAAAAAAAEEPCAAAA\tWooden, clear considerations will not become now proceedings. A bit institutional firms will \tHome                                              \twallpaper                                         \t4.94\t9408.96\t2.29652229284842735\nAAAAAAAAEKDCAAAA\tThick, other ways come completely. Careful men would find later there valid children. Interesting owners allow a bit best wide polls. Miles behave other, considerable heads; inte\tHome                                              \twallpaper                                         \t0.96\t3860.39\t0.94223715416891351\nAAAAAAAAEKECAAAA\tMarked, free flowers carry restrict\tHome                                              \twallpaper                                         \t0.67\t4918.41\t1.20047680193864503\nAAAAAAAAFEKDAAAA\tLess western books give physically only \tHome                                              \twallpaper                                         \t4.22\t5084.28\t1.24096205777082719\nAAAAAAAAFJACAAAA\tNULL\tHome                                              \twallpaper                                         \tNULL\t15833.49\t3.86461019693915650\nAAAAAAAAGCIDAAAA\tLiable, other others provide also in a resources. Months get briefly long sheets. Windows talk activities. American\tHome                                              \twallpaper                                         \t5.42\t151.36\t0.03694368073044608\nAAAAAAAAGDIDAAAA\tNew citiz\tHome                                              \twallpaper                                         \t3.50\t6508.22\t1.58851481106966039\nAAAAAAAAGGABAAAA\tMain elements write generally however secondary periods. Documents persuade empty, labour margins. Over other friends contend afterwards friendly, labour buildings. Canadian birds \tHome                                              \twallpaper                                         \t4.07\t2883.10\t0.70370194182048822\nAAAAAAAAGHAEAAAA\tShortly economic records cause nevertheless by a requirements. Privately silent forms take. Pink leaves aba\tHome                                              \twallpaper                                         \t8.70\t0.00\t0.00000000000000000\nAAAAAAAAGKADAAAA\tStores visit values. Others cannot hang around rather civil brothers. Direct systems go then free, other instructions. 
Difficult, top feet will al\tHome                                              \twallpaper                                         \t13.91\t2088.96\t0.50986965710010998\nAAAAAAAAGLJCAAAA\tSmall, social patterns design deeply without a judges. Moving feet arrange in the developments; sports say\tHome                                              \twallpaper                                         \t0.63\t13980.62\t3.41236496890650830\nAAAAAAAAGMMCAAAA\tTests should allow finally times. Thus other differences act already important weapons. So ridiculous spor\tHome                                              \twallpaper                                         \t3.26\t12082.76\t2.94913866135441792\nAAAAAAAAHEGCAAAA\tCourts must not understand ideas. British figures would isolate sure young preparations; able, short governments should express more private properties. Countries de\tHome                                              \twallpaper                                         \t0.28\t15297.35\t3.73375009528203862\nAAAAAAAAHKHBAAAA\tMilitary, poor questions challenge that with a costs. Appropriate, main patients will not see concerned, industrial findings; terrible, concerned eyes decl\tHome                                              \twallpaper                                         \t3.37\t3242.71\t0.79147491372505823\nAAAAAAAAIALAAAAA\tGreen, european terms privatise new arms; also local duties need damp, successful professionals. Fresh, furious odds will undertake too only probable players.\tHome                                              \twallpaper                                         \t2.81\t227.73\t0.05558393507362900\nAAAAAAAAICBCAAAA\tImpossible, other patients provide somewhat. Initially helpful ref\tHome                                              \twallpaper                                         \t2.44\t10361.84\t2.52909955562873563\nAAAAAAAAIDFEAAAA\tAlways western women run soon in the solutions; left members should allow national, innocent products. Of course left tonnes thank unduly especially interested customers. Elderly pen\tHome                                              \twallpaper                                         \t0.99\t7449.54\t1.81827052952356833\nAAAAAAAAIEOCAAAA\tArtificial, careful years behave even specialist volumes. Assistant che\tHome                                              \twallpaper                                         \t7.43\t6528.95\t1.59357455275532468\nAAAAAAAAIGLBAAAA\tShort things come from a activities. Losses should not work ro\tHome                                              \twallpaper                                         \t9.19\t3438.64\t0.83929716111879700\nAAAAAAAAILHCAAAA\tCourts can pu\tHome                                              \twallpaper                                         \t9.63\t7132.45\t1.74087576391298992\nAAAAAAAAINEBAAAA\tRepresentative, keen problems might exam\tHome                                              \twallpaper                                         \t6.78\t17424.37\t4.25290936977512414\nAAAAAAAAIPFEAAAA\tUseful developments might control effective, unknown homes. Other, right marks cannot become by the moments. Natural, christian bars used to enable also new\tHome                                              \twallpaper                                         \t75.10\t6730.56\t1.64278316448937089\nAAAAAAAAJCCAAAAA\tPerhaps different figures hang new women. Dynamic goods finance now; birds keep already proposals. 
Schemes guess animal\tHome                                              \twallpaper                                         \t4.93\t11316.14\t2.76202340949412078\nAAAAAAAAJGBDAAAA\tS\tHome                                              \twallpaper                                         \t2.23\t2663.69\t0.65014873761153490\nAAAAAAAAKAMDAAAA\tDifficulties should \tHome                                              \twallpaper                                         \t3.85\t3734.34\t0.91147109341261905\nAAAAAAAAKCCAAAAA\tNew, poor adults used to fear; new offers may make undoubtedly cells. Clinical dogs decide. Then poor models know then entirely rea\tHome                                              \twallpaper                                         \t0.20\t10778.60\t2.63082159831650459\nAAAAAAAAKGHCAAAA\tSignificantly poor employees will not attend over interactions. Other babies used to choose departments. Young members repair. Easy patients find ever pers\tHome                                              \twallpaper                                         \t6.87\t6138.42\t1.49825468201232053\nAAAAAAAAKKFBAAAA\tPerfectly tall bodies think there a\tHome                                              \twallpaper                                         \t6.25\t2518.24\t0.61464755921404955\nAAAAAAAALCKCAAAA\tAreas would stop also logical, local initiatives. Existing, increasing words should take open concerns. Objectives protect jointly at t\tHome                                              \twallpaper                                         \t6.48\t7065.22\t1.72446638458220312\nAAAAAAAALIGDAAAA\tHuman, back b\tHome                                              \twallpaper                                         \t4.28\t8161.86\t1.99213233355310951\nAAAAAAAAMAMCAAAA\tMeasures should make rec\tHome                                              \twallpaper                                         \t2.45\t3471.50\t0.84731757172135024\nAAAAAAAAMBAAAAAA\tFamiliar thanks should see proposals; more single lakes shall not announce employees. Specified lawyers canno\tHome                                              \twallpaper                                         \t7.89\t509.65\t0.12439446937283196\nAAAAAAAAMDDBAAAA\tBasic moves mig\tHome                                              \twallpaper                                         \t0.30\t11860.26\t2.89483125541807904\nAAAAAAAAMDDEAAAA\tComponents could identify hopef\tHome                                              \twallpaper                                         \t1.39\t1204.56\t0.29400687143674770\nAAAAAAAAMDEDAAAA\tSocial dealers shall emerge even figures. Clear prayers could not send. \tHome                                              \twallpaper                                         \t6.93\t6706.36\t1.63687647134932864\nAAAAAAAAMDEEAAAA\tActual, urban police learn quickly low, british years; ethnic, common months should fail then overall markets. Years get. Criminal statio\tHome                                              \twallpaper                                         \t7.74\t1379.50\t0.33670591680530107\nAAAAAAAAMEMAAAAA\tParticularly tight problems cannot lead special, simple sales. Warm bodies get. New, primary attempts wo\tHome                                              \twallpaper                                         \t5.23\t15517.89\t3.78757910788967986\nAAAAAAAAMHKDAAAA\tChief, other others speak fairly; established years may reduce political museums. 
Vulnerable, male features sug\tHome                                              \twallpaper                                         \t4.79\t7653.42\t1.86803319883727966\nAAAAAAAAMIPDAAAA\tMuch following charges cannot complete difficult, effective jews. Poor, commercial pro\tHome                                              \twallpaper                                         \t1.85\t5730.05\t1.39858045566525218\nAAAAAAAAMMLCAAAA\tSpecial, long-term cases may not like sharply favorite arms. Insufficient papers bring. Legal cheeks could not apply with a sales. Terms give. Judicial, natural sets see at the cells.\tHome                                              \twallpaper                                         \t2.40\t15153.09\t3.69853936997697683\nAAAAAAAAOAJDAAAA\tSensitive, labour areas would not suffer general, successful seconds; golden, substantial methods pay then available beliefs. Afterwards round years will defeat \tHome                                              \twallpaper                                         \t1.96\t4949.14\t1.20797732591358298\nAAAAAAAAOCKDAAAA\tThat positive banks ought to fall perhaps eligible, white proceedings. Voluntary, political bodies suggest united, unlikely women; soviet, long comm\tHome                                              \twallpaper                                         \t5.69\tNULL\tNULL\nAAAAAAAAODACAAAA\tLater recent years could take further; opening intervals weaken; personal years say often. Main pairs generalize articles; functions know quite other varieties. Pounds include to the hands. Claims h\tHome                                              \twallpaper                                         \t1.19\t7033.67\t1.71676571645954473\nAAAAAAAAODCEAAAA\tLong potential cards make previous subjects. Continued, firm rounds might support. Royal, powerful vessels exist employees\tHome                                              \twallpaper                                         \t1.91\t7286.37\t1.77844428490949006\nAAAAAAAAOGIDAAAA\tSocieties could make now below a lev\tHome                                              \twallpaper                                         \t6.61\t5369.24\t1.31051458988596934\nAAAAAAAAOGPDAAAA\tBoxes would not let further level groups. Different priests get chapters. Languages may stop still legs. Blocks must make good, important securities. Complete diffe\tHome                                              \twallpaper                                         \t4.83\t1053.00\t0.25701437506051615\nAAAAAAAAOMBEAAAA\tProtective, absolute fingers could hear usually daily, rapid schemes. Normal y\tHome                                              \twallpaper                                         \t6.16\t437.24\t0.10672076481620141\nAAAAAAAAONGBAAAA\tBrown, natural periods might avoid in a changes; standard, military improvements should seem enough. Things commit easily from a hopes. General courts could close part\tHome                                              \twallpaper                                         \t2.54\t1591.79\t0.38852128402429154\nAAAAAAAAOOMBAAAA\tTimes used to remember to the trains. Evidently chief tests will not look often apparent foreign holidays. Images will not meet earlier rows. 
Today happy months cannot like as against th\tHome                                              \twallpaper                                         \t5.03\t5511.22\t1.34516881682907673\nAAAAAAAAPCIDAAAA\tProteins must remember above beneath available rights; good pl\tHome                                              \twallpaper                                         \t0.82\t8210.81\t2.00407996285910406\nAAAAAAAAADKDAAAA\tNo equal occasions care poor materials. Students could not operate briefly shares. Very determined effects go already heavy factors; full possibilities make certainly by the posi\tSports                                            \tarchery                                           \t6.40\t8728.20\t2.57886262177629984\nAAAAAAAAAEJBAAAA\tAppointments will not go inc, temporary factors. Static, specific proport\tSports                                            \tarchery                                           \t1.85\t1021.30\t0.30175665035404036\nAAAAAAAAAMIAAAAA\tLives shall mean servants. Short inner balls shall take policies.\tSports                                            \tarchery                                           \t0.82\t20373.51\t6.01962413938563079\nAAAAAAAABAFBAAAA\tEyes can go extremely hard numbers. Early, real others would discuss also. Good members \tSports                                            \tarchery                                           \t4.61\t3215.40\t0.95003263835149453\nAAAAAAAACDFDAAAA\tDays can establish routine members. Associations replace both simple, crucial areas. Parties transmit variables. Statistical foreigners should not play \tSports                                            \tarchery                                           \t2.48\t2613.03\t0.77205442090925102\nAAAAAAAACECCAAAA\tPlayers will come just about senior matters; external hours may become natural principles. Smooth, national sentences can support public women. Protests tell too in a leaders. Labour studi\tSports                                            \tarchery                                           \t1.36\t426.80\t0.12610372894458477\nAAAAAAAACIIBAAAA\tJust silver assets move quite both statistical difficulties. Mainly national hours must prevent. Electronic\tSports                                            \tarchery                                           \t9.78\t10843.65\t3.20390042260999677\nAAAAAAAADGNDAAAA\tEntirely social buildings step all including the standards. Massive months read children; irish years come for a words.\tSports                                            \tarchery                                           \t5.76\t12915.10\t3.81593783901641692\nAAAAAAAAEAEDAAAA\tReligious, subsequent views cannot meet around important min\tSports                                            \tarchery                                           \t5.76\t23175.78\t6.84759203186346949\nAAAAAAAAEFIAAAAA\tShares take. Consequences warn liberal, fresh workshops; illustrations ought to measure sports. White, universal organizations assist young provisions\tSports                                            \tarchery                                           \t5.83\t3736.19\t1.10390696121243713\nAAAAAAAAEHDDAAAA\tLong, immediate cars\tSports                                            \tarchery                                           \t0.47\t7961.21\t2.35224523877909490\nAAAAAAAAEHGCAAAA\tHoly days like new years. Excellent, standard standards regain more simply friendly others. 
Easily previous texts can\tSports                                            \tarchery                                           \t1.24\t2736.34\t0.80848799826669420\nAAAAAAAAEIEBAAAA\tLow days go photographs; attacks may not tear probably similar, mathematical police. Likely, small name\tSports                                            \tarchery                                           \t2.59\t11492.70\t3.39567086607645118\nAAAAAAAAEMADAAAA\tNow public weapons used to specialise always limited\tSports                                            \tarchery                                           \t6.16\t609.03\t0.17994600290328131\nAAAAAAAAFADCAAAA\tMaterials go furt\tSports                                            \tarchery                                           \t3.67\t48.41\t0.01430337750282884\nAAAAAAAAFHBAAAAA\tPreviously white patients should set sometimes level theoretical studies. Federal, european trends keep. Social, other hills can leave opportunities. Organisers lower experiences. Recent criteri\tSports                                            \tarchery                                           \t2.18\t4063.94\t1.20074505203152723\nAAAAAAAAFLFBAAAA\tScientific, elegant blues must eliminate. Basically local musicians might slow never now spiritual bedrooms. Wrong studies ought to impose relations. S\tSports                                            \tarchery                                           \t1.70\t4653.68\t1.37499156821657742\nAAAAAAAAGAKDAAAA\tConstant, olympic languages could not bow other\tSports                                            \tarchery                                           \t2.01\t7616.46\t2.25038427215855694\nAAAAAAAAGDNAAAAA\tStrong, essential rates could\tSports                                            \tarchery                                           \t8.43\t4002.55\t1.18260656112265174\nAAAAAAAAGEMDAAAA\tCritical, secondary cars will extend social parts; together serious voices see personally a\tSports                                            \tarchery                                           \t42.19\t29.70\t0.00877525948841183\nAAAAAAAAGNGBAAAA\tWomen aim entirely reasonable, whole surfaces. Young drawings meet then sure, executive projects. Public, new offers used to sweep too light, old ar\tSports                                            \tarchery                                           \t65.59\t3949.47\t1.16692337009083694\nAAAAAAAAHADDAAAA\tMarginal, bright boats re-open subsequent figures. Most anxious positions produce nearly together with a causes. Invariably necessary hands must not le\tSports                                            \tarchery                                           \t8.66\t312.64\t0.09237364062145029\nAAAAAAAAHCAEAAAA\tSo blue parents will get at a conferences. Toxic methods \tSports                                            \tarchery                                           \t1.14\t2037.09\t0.60188529802184673\nAAAAAAAAHCEAAAAA\tDifferences give financial, current reasons. Working, legal memories cannot launch into a activities; small, difficult parties coul\tSports                                            \tarchery                                           \t1.62\t7284.54\t2.15231409945169992\nAAAAAAAAHFAEAAAA\tCompetitive holidays should not keep all democratic, o\tSports                                            \tarchery                                           \t61.08\t8753.34\t2.58629056869679390\nAAAAAAAAHKMCAAAA\tCrude, silly estates walk. 
Specific eyes mus\tSports                                            \tarchery                                           \t3.16\t11104.29\t3.28090997254466541\nAAAAAAAAHMKCAAAA\tNormally eastern men protect also only explicit quantities. Royal, modest miles build by a opportunities. Shoulders judge more slightl\tSports                                            \tarchery                                           \t5.58\t12487.62\t3.68963319503977423\nAAAAAAAAICGBAAAA\tNowhere other groups say home chief members. Contemporary letters practi\tSports                                            \tarchery                                           \t8.43\t2359.96\t0.69728152802263887\nAAAAAAAAIHCAAAAA\tCurrent children take additional workers; far waiting arguments like bad, little days. Comp\tSports                                            \tarchery                                           \t2.50\t7478.91\t2.20974329765919510\nAAAAAAAAIIDBAAAA\tAware families give men. All social winners pose confident, new increases; most glad years wonder in genera\tSports                                            \tarchery                                           \t1.55\t2973.81\t0.87865166394727186\nAAAAAAAAIJGAAAAA\tWelcome, united enemies fall. Nationally late profits complete mili\tSports                                            \tarchery                                           \t7.03\t11118.64\t3.28514987064765227\nAAAAAAAAILDDAAAA\tFrench photographs shall not advise clearly for an demands. Important, statutory cases rate well times. Other, local doctors assess terms. Normally white considerati\tSports                                            \tarchery                                           \t7.09\t408.72\t0.12076175279810376\nAAAAAAAAILFBAAAA\tDesigns would throw especially linear, horizontal characters. Fundament\tSports                                            \tarchery                                           \t3.73\t8691.82\t2.56811366756120145\nAAAAAAAAILPAAAAA\tChanges set shortly. Mental, different jobs need more with a solicitors. Other, federal pieces thank then to a chang\tSports                                            \tarchery                                           \t1.50\t15462.27\t4.56853304814429410\nAAAAAAAAIMJDAAAA\tOther consequences may shape quite. Personal, particular lawyers take brown, large men. Skills would gather as busy fears. Days will \tSports                                            \tarchery                                           \t3.96\t12677.69\t3.74579190113278554\nAAAAAAAAIPADAAAA\tPolitical troops forget plates. Emotional lists must intervene virtually in the children. Ready, only \tSports                                            \tarchery                                           \t7.31\t402.50\t0.11892397118133873\nAAAAAAAAJDNAAAAA\tMonths could not change curiously public contexts. Confident hotels would motivate in a studies. Workers sing fully again due positions. Irrelevant hands might create otherwise here strategic po\tSports                                            \tarchery                                           \t0.40\t1385.73\t0.40943233437296029\nAAAAAAAAJJNDAAAA\tIn short major reasons ought to sell already professional local institutions; corporate, able jobs will insure so su\tSports                                            \tarchery                                           \t9.22\t989.95\t0.29249387644960565\nAAAAAAAAKAECAAAA\tPrivileges face mostly solicitors. Different soldiers suggest home. 
Deep stations make right parents. Safe, central things would tackle just. As famil\tSports                                            \tarchery                                           \t37.12\t16530.14\t4.88404942356147718\nAAAAAAAAKGCBAAAA\tGoods go only. Accountants may unite. Almost agricultural muscles go just regional police. Real samples used to build auditors; following women can believe. Very concerned tonnes would fit there\tSports                                            \tarchery                                           \t7.66\t2295.32\t0.67818278144583953\nAAAAAAAAKIPCAAAA\tYoung countries should restore increasingly others. Combined, large activities match in a cases. Positions can \tSports                                            \tarchery                                           \t4.34\t2791.69\t0.82484189094964351\nAAAAAAAAKJEDAAAA\tLocal, main troops cannot support never diffe\tSports                                            \tarchery                                           \t3.65\t463.60\t0.13697677773830717\nAAAAAAAAKKHBAAAA\tEarlier controversial auditors s\tSports                                            \tarchery                                           \t2.90\t258.93\t0.07650430772169947\nAAAAAAAAKNBEAAAA\tOld relationships in\tSports                                            \tarchery                                           \t0.71\t2104.62\t0.62183793348489221\nAAAAAAAAKPABAAAA\tIndividual, grand relatives must provide much areas. Italian, respectable experts might revise nationally public standards. Comfortable forces record forward importan\tSports                                            \tarchery                                           \t3.59\t7433.10\t2.19620812469070534\nAAAAAAAAMCKBAAAA\tPatient teachers shall stop already serious weeks\tSports                                            \tarchery                                           \t2.66\t11143.58\t3.29251872491165869\nAAAAAAAAMGEAAAAA\tSchools will get financial, small years. Chronic, real societies \tSports                                            \tarchery                                           \t93.67\t840.45\t0.24832211572510841\nAAAAAAAAMIFEAAAA\tMore leading requirements cross; elderly, able structures know obviously only houses. Enough light populations postpone blank payment\tSports                                            \tarchery                                           \t2.76\t5506.32\t1.62691538135460637\nAAAAAAAANDOBAAAA\tReal pupils could adopt fine years. Big neighbours must understand for a visitors. Duties would not give almost at last blue priests. Previous, small miles make finally\tSports                                            \tarchery                                           \t7.47\t1309.14\t0.38680280157102555\nAAAAAAAANKJCAAAA\tDomestic, chief devices can capture provincial lin\tSports                                            \tarchery                                           \t3.78\t18988.01\t5.61025976156763126\nAAAAAAAANOHAAAAA\tStrings ought to include even. Difficult, medical \tSports                                            \tarchery                                           \t64.26\t5845.14\t1.72702425071028634\nAAAAAAAAOAGAAAAA\tBig affa\tSports                                            \tarchery                                           \t7.86\t4365.75\t1.28991882530417280\nAAAAAAAAOILBAAAA\tThere aware passengers allow all after a reservations. Simply environmental feet may close hardly labour members. 
Influential, old shareholders must\tSports                                            \tarchery                                           \t2.48\t5434.29\t1.60563316112058941\nAAAAAAAAOKNBAAAA\tBad publications improve by the years. Regular movements might give at least profits. Hard tests might not meet \tSports                                            \tarchery                                           \t9.45\t12999.48\t3.84086903078854452\nAAAAAAAAAGCAAAAA\tSources make visual representatives. European regions will not run unacceptable loans. Right, natural firms get merely moral buildings. Virtually various sa\tSports                                            \tathletic shoes                                    \t2.23\t3212.86\t1.46013319558188889\nAAAAAAAAAJLCAAAA\tDistinguished powers upset very at a makers; animals shall see afterwards complete, working institutions. \tSports                                            \tathletic shoes                                    \t4.30\t909.15\t0.41317707424639551\nAAAAAAAAAKEDAAAA\tSeriously social measures might give. Less technical travellers contradict entirely for a possibilities. Major, young police give only; more than important findings be\tSports                                            \tathletic shoes                                    \t35.35\t15716.62\t7.14265750276894310\nAAAAAAAAAOECAAAA\tPriorities jump now important drawings. Both still movements will determine early massive, right patients. As huge goods might include at least chi\tSports                                            \tathletic shoes                                    \t1.75\t11184.41\t5.08292559090593238\nAAAAAAAABCCCAAAA\tDegrees know as after a heads; new, complex ma\tSports                                            \tathletic shoes                                    \t1.41\t3007.89\t1.36698145504591167\nAAAAAAAACCFEAAAA\tReal, comparative methods insta\tSports                                            \tathletic shoes                                    \t1.70\t11493.02\t5.22317810906375025\nAAAAAAAACFNDAAAA\tDevelop\tSports                                            \tathletic shoes                                    \t6.28\t2742.72\t1.24647090697582786\nAAAAAAAADOGCAAAA\tHowever local things might not feel regional, responsible roots. Local, suitable nations set strong days. Patients might seem to a rooms. Sure othe\tSports                                            \tathletic shoes                                    \t2.00\t303.48\t0.13792111146928022\nAAAAAAAAEBIDAAAA\tEnormous, pure beaches lie highly financial periods. So united ships used to stay. Simply famous tons shall ensure separately extensive needs. In order educational statements must not pa\tSports                                            \tathletic shoes                                    \t3.52\t3499.90\t1.59058289848205428\nAAAAAAAAECBDAAAA\tGrey problems must not acquire detailed times.\tSports                                            \tathletic shoes                                    \t16.36\t1039.15\t0.47225755563233998\nAAAAAAAAECBEAAAA\tCurrent, political advantages will g\tSports                                            \tathletic shoes                                    \t3.15\t125.13\t0.05686723566017871\nAAAAAAAAEGLCAAAA\tPrices ought to go yesterday. Interests might rest here funds. Letters damage also rich agreements. 
Central, i\tSports                                            \tathletic shoes                                    \t1.72\t128.63\t0.05845786400518490\nAAAAAAAAELIAAAAA\tGenerally top practices can reduce most links. Earnings will tell as techniques. Flat direct measures would not go far material whole sentences. Simply defensive services evaluate nat\tSports                                            \tathletic shoes                                    \t6.06\t794.64\t0.36113625945020704\nAAAAAAAAELNBAAAA\tSentences will retire always from the marks. Modern activities may perform lon\tSports                                            \tathletic shoes                                    \t4.66\t1180.16\t0.53634169932643252\nAAAAAAAAENFEAAAA\tAlmost uncomfortable shares may believe wrongly constant levels. Red, other words used to resist more frien\tSports                                            \tathletic shoes                                    \t0.12\t23738.70\t10.78841402674246177\nAAAAAAAAFHMBAAAA\tItems used to thin\tSports                                            \tathletic shoes                                    \t4.26\t23.25\t0.01056631686325545\nAAAAAAAAGNCCAAAA\tEyes may not give children. Good great beans shall cook. Visible,\tSports                                            \tathletic shoes                                    \t36.86\t5204.23\t2.36514164340902922\nAAAAAAAAGNPAAAAA\tReligious, alone results go all investigations. Banks ma\tSports                                            \tathletic shoes                                    \t1.04\t3489.00\t1.58562922735046355\nAAAAAAAAHFNDAAAA\tHomes cannot inform almost fresh hotels. Plans could kill today hi\tSports                                            \tathletic shoes                                    \t3.62\t7136.25\t3.24317757915727874\nAAAAAAAAHJFEAAAA\tWoods wear indeed from a numbers. Counties must not receive as greatly public windows. Above hostile groups breed of course usually true members. Sources introduce similarly words. Largel\tSports                                            \tathletic shoes                                    \t8.59\t4113.45\t1.86942004736164067\nAAAAAAAAHOFCAAAA\tMilitary, considerable sizes wash social consultants. Equal ways stand detailed writings. Tough, potential directions interpret then. Free wives would restore still. Better fresh men carry others. St\tSports                                            \tathletic shoes                                    \t8.09\t4091.45\t1.85942181205017314\nAAAAAAAAIGKCAAAA\tAs usual religious variables may believe heavy, available sister\tSports                                            \tathletic shoes                                    \t6.51\t590.67\t0.26843898415566016\nAAAAAAAAIIFEAAAA\tObjectives shall get with a years. Huma\tSports                                            \tathletic shoes                                    \t6.42\t6968.96\t3.16715008891839681\nAAAAAAAAJFCDAAAA\tExisting theories wait supplies. Proper partners measure things. Areas must not thank a little. Hard white rules formulate then major, institutional differences.\tSports                                            \tathletic shoes                                    \t1.47\t16050.71\t7.29448979527840609\nAAAAAAAAJLCAAAAA\tAbsolute companies might not raise in order powerful, recent waves. 
Major chil\tSports                                            \tathletic shoes                                    \t0.18\t14627.31\t6.64760397062645716\nAAAAAAAAKAJBAAAA\tNULL\tSports                                            \tathletic shoes                                    \t0.74\t2201.76\t1.00062338997167000\nAAAAAAAAKAOBAAAA\tClean, large conditions would understand labour dates. Large clergy should give high jobs. Patients might escape. As national polic\tSports                                            \tathletic shoes                                    \t5.50\t257.64\t0.11708842480211334\nAAAAAAAAKCAEAAAA\tParticular, financial years shape then twice visual friends. Limited, future women ought to come casual scots. Relations concentrate charges. Shares shall establish in a plants. Then double\tSports                                            \tathletic shoes                                    \t4.22\t164.92\t0.07495040761669202\nAAAAAAAAKGKDAAAA\tPresumably yo\tSports                                            \tathletic shoes                                    \t4.44\t163.80\t0.07444140654629003\nAAAAAAAAKIBBAAAA\tIn particular financial studies can gain less than huge, model consequences. Really other activities walk o\tSports                                            \tathletic shoes                                    \t47.58\t1719.85\t0.78161204547397384\nAAAAAAAAKIEDAAAA\tNow political women could\tSports                                            \tathletic shoes                                    \t8.57\t57.80\t0.02626809095467377\nAAAAAAAAKIFCAAAA\tChronic lines shall take enough by the sales; international, welsh angles used to rule now front powers. Standard othe\tSports                                            \tathletic shoes                                    \t3.00\t16781.46\t7.62659027045362857\nAAAAAAAAKLABAAAA\tSkills use rather than a principles. Easy employe\tSports                                            \tathletic shoes                                    \t6.29\t9250.24\t4.20391255488860762\nAAAAAAAAKLNDAAAA\tAccounts could think aspects. Industrial, large\tSports                                            \tathletic shoes                                    \t1.92\t6322.30\t2.87326559589505180\nAAAAAAAAKMECAAAA\tCells call no doubt pilots. Arms should pay rather good duties. Thus long s\tSports                                            \tathletic shoes                                    \t9.73\t857.50\t0.38970394452651834\nAAAAAAAAMDHAAAAA\tFriends cry easily sure varieties. Appropriate proposals provide recently between a books. New, considerable forces seem like the elections. Right big clothes fr\tSports                                            \tathletic shoes                                    \t9.64\t2708.86\t1.23108271390099647\nAAAAAAAAMPHBAAAA\tWords live only anxious countries. British, northern substances criticise most extra,\tSports                                            \tathletic shoes                                    \t3.18\t2390.50\t1.08639915963923277\nAAAAAAAANCCCAAAA\tNew rules continue wet cuts. German, following procedures shall see findings. As good charges cannot pay notably routine, short plates. Problems used to alleviate a\tSports                                            \tathletic shoes                                    \t30.73\t3030.00\t1.37702968153393653\nAAAAAAAANCNAAAAA\tSupposedly parental instructions see. Broken, raw habits should not issue at all friendly beliefs. 
Certain constraints know \tSports                                            \tathletic shoes                                    \t0.59\t5983.42\t2.71925641487913747\nAAAAAAAANGOBAAAA\tAlso other measurements pay at least around the artists. Perfect, good cul\tSports                                            \tathletic shoes                                    \t2.83\t4854.06\t2.20600154981736633\nAAAAAAAANLBAAAAA\tDemocratic forests use on a communities. Potatoes could not include still easy movies. Direct leads could sh\tSports                                            \tathletic shoes                                    \t3.61\t1739.94\t0.79074225217430942\nAAAAAAAAOAPCAAAA\tLevels may not go involved issues. Miles will beat good institutions. Tiny, c\tSports                                            \tathletic shoes                                    \t9.51\t9805.35\t4.45619075505900481\nAAAAAAAAPCODAAAA\tNever national communities could turn so. National, whole styles buy far really high leaders. Indeed beautiful others liv\tSports                                            \tathletic shoes                                    \t5.39\t306.50\t0.13929359649839985\nAAAAAAAAPDABAAAA\tMore than hot women govern only full polic\tSports                                            \tathletic shoes                                    \t1.64\t3354.48\t1.52449456307325393\nAAAAAAAAPIFAAAAA\tNotably international minutes write too national, important visits. Human, clean patients\tSports                                            \tathletic shoes                                    \t1.21\t6716.71\t3.05251123176759302\nAAAAAAAAAJIDAAAA\tMajor missiles may reply british dogs. Other, c\tSports                                            \tbaseball                                          \t1.15\t12361.94\t4.21788969172030922\nAAAAAAAAALEBAAAA\tAlso other adults ought to uphold usually in a hills; carefully good signs would ensure with an benefits. Continuous, nuclear days shall feel just in the politicia\tSports                                            \tbaseball                                          \t0.75\t3265.70\t1.11425572088612417\nAAAAAAAAANCBAAAA\tTherefore unexp\tSports                                            \tbaseball                                          \t3.99\t3063.58\t1.04529244615007878\nAAAAAAAAAOMCAAAA\tOften unnec\tSports                                            \tbaseball                                          \t6.08\t2524.58\t0.86138583085852692\nAAAAAAAABDOBAAAA\tEggs shall not encourage as. Economic classes act other girls. Technical features wash even. Social goods can monitor probably \tSports                                            \tbaseball                                          \t2.18\t3658.98\t1.24844272211406762\nAAAAAAAACBGCAAAA\tManagers shall put even. Physically additional guests help therefore high times; here specialist successes tend old plan\tSports                                            \tbaseball                                          \t9.08\t251.02\t0.08564793797863701\nAAAAAAAACGMAAAAA\tDreams cannot need further at a securities. Modern societies devote once again for a businesses; ways used to say to a\tSports                                            \tbaseball                                          \t1.06\t4758.65\t1.62364974927113782\nAAAAAAAACJJCAAAA\tFun activities cost white camps. Bare, solar databases go especially days. 
More subject sites deal certainly; partly equal occasions hear subs\tSports                                            \tbaseball                                          \t6.89\t1014.60\t0.34618117230947778\nAAAAAAAADDLCAAAA\tMost other delegates enhance natural, successful shows. American, similar times can derive easy, small departments. Artificial, other manager\tSports                                            \tbaseball                                          \t4.91\t1022.10\t0.34874016973932312\nAAAAAAAAEHFAAAAA\tFully silent bishops ought to seek only. Just new forms change immediately deeply raw cells. White corners shall lighten really reportedly glad games; teachers think at pre\tSports                                            \tbaseball                                          \t3.06\t14501.24\t4.94781811860939439\nAAAAAAAAEHFEAAAA\tWinds owe urgently military managers. Internal conditions used to satisfy now as disable\tSports                                            \tbaseball                                          \t7.10\t7772.75\t2.65205963637738361\nAAAAAAAAEHNBAAAA\tOrganisations restore far. Far notes might not ask very places. Innocent requirements would not change to a children. Cer\tSports                                            \tbaseball                                          \t1.20\t8146.44\t2.77956253631857102\nAAAAAAAAEHOAAAAA\tAlso international soldiers use shortly decisive parties. Major, above advertisements expect about already loyal stairs. Lucky, small towns appear. Then english children corresp\tSports                                            \tbaseball                                          \t1.92\t3722.51\t1.27011913634314422\nAAAAAAAAEIKDAAAA\tGuilty, oth\tSports                                            \tbaseball                                          \t3.01\t5530.46\t1.88699105678166221\nAAAAAAAAFOIBAAAA\tRather american exercises might remember times. Below western accidents give\tSports                                            \tbaseball                                          \t0.71\t7321.35\t2.49804211106642533\nAAAAAAAAFOMDAAAA\tLater federal objectives\tSports                                            \tbaseball                                          \t5.97\t7447.00\t2.54091384800776761\nAAAAAAAAGFKCAAAA\tFeet used to make import\tSports                                            \tbaseball                                          \t2.92\t798.30\t0.27237968643273813\nAAAAAAAAGKHCAAAA\tParents induce free deaths. Empty, red rec\tSports                                            \tbaseball                                          \t39.45\t15343.37\t5.23515258602214870\nAAAAAAAAGKMAAAAA\tSymbols could enable too wrong problems. Real, old\tSports                                            \tbaseball                                          \t0.29\t5569.42\t1.90028419543056548\nAAAAAAAAGLNCAAAA\tElements shall arrange more. Coins would constitute however. Departments subscribe only in a children. And so on significant areas protect within\tSports                                            \tbaseball                                          \t1.17\t1171.52\t0.39972222253498857\nAAAAAAAAHDFBAAAA\tResidents will happen here police. Owners might not match lines. Temporary, good symptoms used to achieve about in a issues. Troops can arrange. 
Even true comments shall not get ba\tSports                                            \tbaseball                                          \t3.86\t3886.24\t1.32598375623495459\nAAAAAAAAHMMAAAAA\tRelevant numbers happen by the variables. Basic, italian fingers l\tSports                                            \tbaseball                                          \t8.19\t5295.33\t1.80676478135772420\nAAAAAAAAHNDCAAAA\tFascinating companies could tell partly about a \tSports                                            \tbaseball                                          \t8.54\t2203.05\t0.75167990504277057\nAAAAAAAAIAOBAAAA\tRig\tSports                                            \tbaseball                                          \t4.47\t7838.81\t2.67459928573946137\nAAAAAAAAIBNAAAAA\tEasily natural relatives used to walk thorough, real rocks. Front implications tell either. Members achieve in a words. So black ages help far \tSports                                            \tbaseball                                          \t90.17\t13337.28\t4.55067536548368992\nAAAAAAAAIFAAAAAA\tTeachers might not send unusual arrangements. Complex steps ought to hold all but statistical, recent pr\tSports                                            \tbaseball                                          \t7.75\t1162.44\t0.39662412964658915\nAAAAAAAAIGCEAAAA\tKids live so other goods. Colleagues ought to gain at first burning guidelines. Electronic, public figures give. Little leaves interfere. Stages could not determine yet environm\tSports                                            \tbaseball                                          \t3.90\t6580.60\t2.24529846491203378\nAAAAAAAAIHACAAAA\tOnly solid days cannot cope ever suitable recordings. Inches go ever chro\tSports                                            \tbaseball                                          \t9.36\t11126.11\t3.79622491922354013\nAAAAAAAAJCEBAAAA\tCities ought to assess to the parties. Likely organs help domestic, passive stages. Therefore private obligati\tSports                                            \tbaseball                                          \t1.03\t7447.72\t2.54115951176103277\nAAAAAAAAJFKBAAAA\tHundreds would give seldom national, philosophical words. Obvious things li\tSports                                            \tbaseball                                          \t2.21\t83.50\t0.02849017138561147\nAAAAAAAAKDOCAAAA\tMost local companies shall see already. Politicia\tSports                                            \tbaseball                                          \t18.00\t3997.41\t1.36391492213840880\nAAAAAAAAKEJAAAAA\tSurprising applications could not explore. Tonight expensive layers meet then between a statements. Days de\tSports                                            \tbaseball                                          \t0.95\t4521.40\t1.54270013057369686\nAAAAAAAAKJDDAAAA\tOffices obtain surprisingly according to the cups. Separate, only children work also social rates. Public conflicts force at least. Gradually australian storie\tSports                                            \tbaseball                                          \t1.45\t8302.97\t2.83297051867772986\nAAAAAAAAKKOAAAAA\tConscious, solar ambitions support outside countries; warm facilities rise occupations. Appropriate columns grow. 
Availabl\tSports                                            \tbaseball                                          \t3.35\t2187.71\t0.74644590229959357\nAAAAAAAAKLHAAAAA\tCertain places kn\tSports                                            \tbaseball                                          \t4.63\t546.48\t0.18645878872825095\nAAAAAAAAKMDAAAAA\tSingle, wonderful departments will appea\tSports                                            \tbaseball                                          \t3.19\t5797.68\t1.97816642920876516\nAAAAAAAAKNMAAAAA\tStatutory\tSports                                            \tbaseball                                          \t4.72\t3059.64\t1.04394811950026670\nAAAAAAAAKNPCAAAA\tNo scottish accidents will rely chan\tSports                                            \tbaseball                                          \t4.35\t25561.00\t8.72140444057023607\nAAAAAAAALKHAAAAA\tProperly common methods remember thus successful levels. Statistical families exist; trees will not go simply. Bottom, full things could see in the feet. Used, de\tSports                                            \tbaseball                                          \t2.57\t12848.83\t4.38401639286929566\nAAAAAAAALOGAAAAA\tGood effe\tSports                                            \tbaseball                                          \t9.77\t8394.54\t2.86421417129785492\nAAAAAAAALPIAAAAA\tCentral standards ease written eyes. Simple, brief groups send in the ideas. Technical, possible islands try on a parties; activities must change adul\tSports                                            \tbaseball                                          \t5.06\t9693.92\t3.30756218201684687\nAAAAAAAAMBFEAAAA\tLegal, other houses compete well new companies. Young, able layers would find orders. Rather good beaches die finally historical applications. Comments\tSports                                            \tbaseball                                          \t89.48\t2008.38\t0.68525856775370489\nAAAAAAAAMHBAAAAA\tClubs may take major changes. Procedures need. Lawyers shall not say pretty\tSports                                            \tbaseball                                          \t1.61\t8727.74\t2.97790189711445061\nAAAAAAAAMKECAAAA\tClear practices might not own as. External\tSports                                            \tbaseball                                          \t1.32\t525.24\t0.17921170800692895\nAAAAAAAAMLNBAAAA\tAs simple views visit only japanese, direct differences. Hours assist locally too severe products. Else lesser dangers telephone \tSports                                            \tbaseball                                          \t7.20\t316.92\t0.10813299539554474\nAAAAAAAANBEBAAAA\tAnxious, just years must come various schools; rarely surprising students ought to talk complex hundreds. Thin, other makers shall look actually american, ta\tSports                                            \tbaseball                                          \t7.88\t11407.21\t3.89213614289414352\nAAAAAAAANCKAAAAA\tToo particular pages used to give here by a markets; capital, different researchers gain specialist, small directors. Required patie\tSports                                            \tbaseball                                          \t60.56\t503.66\t0.17184861940212062\nAAAAAAAANEODAAAA\tNew friends would leave long motives. Dogs shall face occasionally increased schools. New, green parents decide also probably beautiful men. 
Real tanks shall \tSports                                            \tbaseball                                          \t0.54\t928.53\t0.31681411780457264\nAAAAAAAANNKBAAAA\tImportant, private results identify sh\tSports                                            \tbaseball                                          \t1.25\t4287.60\t1.46292765069398475\nAAAAAAAAOCNAAAAA\tOther, significant materials could not mention economic, current races. Animals go straight living, young groups; masters may date. Top, able computers avoid less hours; questions recommend \tSports                                            \tbaseball                                          \t0.56\t225.54\t0.07695417071030911\nAAAAAAAAOLFCAAAA\tOnly warm clouds ought to hold really \tSports                                            \tbaseball                                          \t4.99\t1216.60\t0.41510350308664564\nAAAAAAAAOOEAAAAA\tBooks change slightly. Radical, distinguished characteristics imagine always as a ministers. Red strings deal late, sexual states. Peculiar, strong patterns live always. N\tSports                                            \tbaseball                                          \t1.51\t2123.42\t0.72451017633095930\nAAAAAAAAOONAAAAA\tReal, social cigarettes wou\tSports                                            \tbaseball                                          \t0.29\t5316.32\t1.81392656216471802\nAAAAAAAAPLODAAAA\tAt least middle departments arrange international, environmental sites. More key kids might take up to the relations. Policie\tSports                                            \tbaseball                                          \t4.87\t2378.20\t0.81144102502109211\nAAAAAAAAABKDAAAA\tYoung workers ac\tSports                                            \tbasketball                                        \t7.78\t1526.51\t0.57071382054190651\nAAAAAAAAADCAAAAA\tInter\tSports                                            \tbasketball                                        \t85.58\t1184.67\t0.44291065357015702\nAAAAAAAAAIADAAAA\tLevels evaluate old arms. Attractive, dangerous men isolate very poor things; solid, sorry others shall leave now\tSports                                            \tbasketball                                        \t1.44\t153.89\t0.05753460497683867\nAAAAAAAABEJCAAAA\tOthers ought to ensure still buildings; new patients keep notably in a drivers. Relative, good im\tSports                                            \tbasketball                                        \t1.20\t625.50\t0.23385467160317491\nAAAAAAAABHCAAAAA\tFavorite, pure features see green decisions. Imp\tSports                                            \tbasketball                                        \t8.03\t5094.18\t1.90455282332128144\nAAAAAAAABIHDAAAA\tAlso federal cells shou\tSports                                            \tbasketball                                        \t6.62\t8298.39\t3.10250562475630792\nAAAAAAAACAMCAAAA\tConsiderable ears cross during a members; very southern politicians allow numbers. Patients deprive earlier shares. Men used to press beautiful tactics. Eyes might develop on a co\tSports                                            \tbasketball                                        \t4.97\t937.69\t0.35057264111204009\nAAAAAAAACBKDAAAA\tYoun\tSports                                            \tbasketball                                        \t3.28\t1166.47\t0.43610624905668334\nAAAAAAAACECDAAAA\tAlways front rumours ought to improve. Hours use about a centuries. 
As uncomfortable links learn neither about real reasons. Dark days could deal much big, sole \tSports                                            \tbasketball                                        \t6.68\t10473.18\t3.91559083859462726\nAAAAAAAACIFEAAAA\tAbout national assets raise much. Other inhabitants may like thick annual characteri\tSports                                            \tbasketball                                        \t6.72\t1181.36\t0.44167314923281648\nAAAAAAAACKABAAAA\tEarly types tell links. Local reasons succeed probably properties. Friends carry low fruits. Able, old tensions get. Recently other vegetables\tSports                                            \tbasketball                                        \t3.00\t11903.67\t4.45040581730226223\nAAAAAAAACKKDAAAA\tCases should soften courses; complex letters use experimentally far whole parties. Great, liberal decisions confirm. Households know very reasonable users. New, short feature\tSports                                            \tbasketball                                        \t2.58\t5361.15\t2.00436446469282357\nAAAAAAAACMJAAAAA\tAt all attractive answers would beat. Trousers might take of course fine constant lives. Ladies shall not challen\tSports                                            \tbasketball                                        \t8.87\t19675.51\t7.35605104664266008\nAAAAAAAACPGAAAAA\tWhole councils would see again white\tSports                                            \tbasketball                                        \t4.23\t4485.02\t1.67680716104503839\nAAAAAAAADGNAAAAA\tSo early systems would place only to a m\tSports                                            \tbasketball                                        \t2.69\t249.12\t0.09313809079101988\nAAAAAAAAEILCAAAA\tDifferent plans may make so in a trials. Provincial, perfect items must wear together. Simple aspects must not prefer then in the sections; alone, good rights can put psycho\tSports                                            \tbasketball                                        \t4.46\t9055.60\t3.38560250067100027\nAAAAAAAAENEBAAAA\tS\tSports                                            \tbasketball                                        \t1.06\t458.00\t0.17123171797642543\nAAAAAAAAFEDEAAAA\tOften final groups participate with the characters. Superior, in\tSports                                            \tbasketball                                        \t62.36\t9883.09\t3.69497484632233713\nAAAAAAAAFKMBAAAA\tDecisions bring young farmers; easy other minerals credit preliminary, basic offices.\tSports                                            \tbasketball                                        \t0.22\t9644.13\t3.60563525827070695\nAAAAAAAAFMCDAAAA\tProperly large others say briefly other police. Results used to prefer worried, old opportunities. Very big contents create forces. Possible, famous clu\tSports                                            \tbasketball                                        \t4.35\t9926.05\t3.71103623192117389\nAAAAAAAAHPIBAAAA\tSucc\tSports                                            \tbasketball                                        \t9.92\t8445.05\t3.15733716134675017\nAAAAAAAAIBBBAAAA\tSimilar, new events may need sometimes combined prisons. 
Communications pay from a relat\tSports                                            \tbasketball                                        \t20.67\t3976.01\t1.48650441701189361\nAAAAAAAAIBFCAAAA\tCharming, general guns would look superficially; big heads can set essentially straight voluntary refuge\tSports                                            \tbasketball                                        \t0.21\t5246.26\t1.96141072653057135\nAAAAAAAAICGEAAAA\tAuthorities might destroy however to the profits. S\tSports                                            \tbasketball                                        \t2.28\t2179.53\t0.81485734995886139\nAAAAAAAAINLDAAAA\tFavourably major feelings used to turn new, necessary years. Labour products go pr\tSports                                            \tbasketball                                        \t7.28\t256.36\t0.09584489786121490\nAAAAAAAAIOKDAAAA\tDifferent organizations shall split; emotional, com\tSports                                            \tbasketball                                        \t2.22\t12749.88\t4.76677697902460058\nAAAAAAAAJGLCAAAA\tSmooth years help more british, curious arms. Inter alia acute members must improve also in a years. Now regional\tSports                                            \tbasketball                                        \t3.91\t2159.38\t0.80732390210465840\nAAAAAAAAJNMAAAAA\tWomen may not represent very common muscles. More late stones smile again on the surveys. Topics must not find as variations. Economic boots\tSports                                            \tbasketball                                        \t60.56\t202.95\t0.07587658769282869\nAAAAAAAAKACBAAAA\tHeavy paintings s\tSports                                            \tbasketball                                        \t4.08\t4622.30\t1.72813181223238268\nAAAAAAAAKADAAAAA\tHuge, helpful heads think low policies. Absolute tons restore generally. Tradit\tSports                                            \tbasketball                                        \t5.01\t24011.93\t8.97730136644032550\nAAAAAAAAKDDDAAAA\tWhite interests might\tSports                                            \tbasketball                                        \t53.99\t3630.36\t1.35727681151287298\nAAAAAAAAKOJCAAAA\tOutstanding friends must reduce figures. Travellers \tSports                                            \tbasketball                                        \t0.95\t3994.52\t1.49342472072312426\nAAAAAAAALAEEAAAA\tRedundant, new writers draw sharp\tSports                                            \tbasketball                                        \t4.80\t9195.80\t3.43801884752753924\nAAAAAAAALJEEAAAA\tClear members work national, personal operations. He\tSports                                            \tbasketball                                        \t4.17\t4072.64\t1.52263131855788049\nAAAAAAAALKKAAAAA\tTimes remove other effects. Almost english conservatives can measure however new, normal r\tSports                                            \tbasketball                                        \t7.65\t1107.60\t0.41409661753425504\nAAAAAAAAMHCCAAAA\tNow due eyes keep about. Then annual progr\tSports                                            \tbasketball                                        \t0.83\t3016.20\t1.12766180733732398\nAAAAAAAAMHPBAAAA\tAddresses retain once more applicable events. 
Following blocks follow for a develo\tSports                                            \tbasketball                                        \t70.89\t268.59\t0.10041730814691726\nAAAAAAAAMKFCAAAA\tOther, g\tSports                                            \tbasketball                                        \t0.70\t15012.84\t5.61282616791528113\nAAAAAAAAMNFBAAAA\tPolitical aspects ought to say months. Of course \tSports                                            \tbasketball                                        \t3.77\t123.24\t0.04607553913409317\nAAAAAAAANNHBAAAA\tFortunately favorite decisio\tSports                                            \tbasketball                                        \t2.86\t9079.28\t3.39445570390611328\nAAAAAAAANOEDAAAA\tAncient, similar ways equip immediately. Never european leader\tSports                                            \tbasketball                                        \t0.67\t5371.94\t2.00839850451152582\nAAAAAAAAOCJAAAAA\tNo doubt established kinds ensure both comparative buildings. Threats attract almost traditional students; questions must not fight widely clean, minor relations. National, famous assets go commer\tSports                                            \tbasketball                                        \t9.10\t1401.61\t0.52401765989724377\nAAAAAAAAODJCAAAA\tOnly social changes could achieve again soon go\tSports                                            \tbasketball                                        \t9.05\t4303.38\t1.60889770852705168\nAAAAAAAAOEFCAAAA\tEarly favorite contexts will save quite as empty pages. Unusual languages suffer soon actual cars; corporate businesses ought \tSports                                            \tbasketball                                        \t54.80\t7564.49\t2.82812362077617992\nAAAAAAAAOHGCAAAA\tRecently free woods know \tSports                                            \tbasketball                                        \t2.84\t3637.05\t1.35977799097414435\nAAAAAAAAOJLBAAAA\tConfidential members cannot modify either dirty organisations. Men might think increasingly failures. Internationa\tSports                                            \tbasketball                                        \t1.70\t6383.10\t2.38643925549196761\nAAAAAAAAOLMDAAAA\tOld, poor pp. form seconds; bags know much; \tSports                                            \tbasketball                                        \t9.50\t5416.98\t2.02523753634047386\nAAAAAAAAPLEBAAAA\tComparatively unable miles show already; interesting drugs will not run parts. Yet political priests will run strangely left, d\tSports                                            \tbasketball                                        \t4.52\t1863.76\t0.69680093165009314\nAAAAAAAAPNOBAAAA\tHowever comprehensive times ought to level even for a blacks. New employers see; far presidenti\tSports                                            \tbasketball                                        \t4.48\t4373.10\t1.63496381197097391\nAAAAAAAAPPGBAAAA\tAreas expect. Organic, democratic resources would last previously. Cheap, residential fields cannot achieve seriously about \tSports                                            \tbasketball                                        \t0.77\t2524.50\t0.94383072495957642\nAAAAAAAAACAAAAAA\tAutomatically competitive deaths jump wooden friends. Average, legal events know. Losses ought to cross. 
Conventional toys st\tSports                                            \tcamping                                           \t4.38\t8168.10\t2.37504813353538829\nAAAAAAAAANABAAAA\tOnly far tests take to a others. Appropriate comparisons will say fully musical personnel. Beautiful, administrative aspects get standards. Huge, sin\tSports                                            \tcamping                                           \t1.74\t11263.88\t3.27521175920551774\nAAAAAAAABCABAAAA\tCells give only serious walls; arrangemen\tSports                                            \tcamping                                           \t0.18\t151.45\t0.04403729628970441\nAAAAAAAABGCBAAAA\tSorry eyes could shake obviously across a commentators; more other numbers may control schools. Children maintain. Powerful elements gather very then active opportun\tSports                                            \tcamping                                           \t3.69\t5210.19\t1.51497313143383954\nAAAAAAAABHCDAAAA\tA bit important \tSports                                            \tcamping                                           \t3.97\t2060.79\t0.59921835471020101\nAAAAAAAABHEEAAAA\tStraightforward deal\tSports                                            \tcamping                                           \t4.48\t14808.62\t4.30592001704617007\nAAAAAAAACCNCAAAA\tWhole services live since the wheels. \tSports                                            \tcamping                                           \t2.26\t8417.24\t2.44749086709509087\nAAAAAAAACFDEAAAA\tSo-called, classical travellers contain capital, new paintings. Japanese stories \tSports                                            \tcamping                                           \t6.17\t18270.48\t5.31252915889810863\nAAAAAAAACIABAAAA\tFinancial, massive ideas might boil also leading companies. Even long \tSports                                            \tcamping                                           \t9.92\t4367.79\t1.27002748340183563\nAAAAAAAACLOCAAAA\tGroups should display of course possibly productive areas. Gro\tSports                                            \tcamping                                           \t2.04\t12234.96\t3.55757384359644646\nAAAAAAAACOJAAAAA\tHowever general jobs tell basic results. Issues lose critical husbands. Back,\tSports                                            \tcamping                                           \t21.20\t4822.68\t1.40229638871199501\nAAAAAAAADBIBAAAA\tEqual, different facts emphasise necessary inhabitants. Complex, active moves might put in a reports. Commercial groups can restrict curiously to a players; identical purposes cou\tSports                                            \tcamping                                           \t8.94\t13999.26\t4.07058144903669396\nAAAAAAAADCLDAAAA\tAlways opposite skills take well in the prices. Colonial, weak issues shall deny more just respective funds; mental, creative patients would not play even in\tSports                                            \tcamping                                           \t16.73\t5674.31\t1.64992585480113970\nAAAAAAAADJFCAAAA\tProcedures find groups. Possible\tSports                                            \tcamping                                           \t4.18\t5862.76\t1.70472168501437704\nAAAAAAAADKIBAAAA\tWild changes shall delay soon representatives; other countries die also legal, superb boys. 
Never video-taped sounds think substantially previous approa\tSports                                            \tcamping                                           \t75.50\t1678.45\t0.48804489902577986\nAAAAAAAAEADDAAAA\tDear officers communicate much long interested relationships. Casualties position normal, good issues. Aspirations remind now quick words. Financial, l\tSports                                            \tcamping                                           \t3.38\t1526.49\t0.44385930943064297\nAAAAAAAAEGBCAAAA\tExceptions say richly normal losses; british, old individuals used to win. Childr\tSports                                            \tcamping                                           \t4.27\t4862.61\t1.41390688221379690\nAAAAAAAAEGCEAAAA\tThen bad merch\tSports                                            \tcamping                                           \t0.84\t409.38\t0.11903590858421386\nAAAAAAAAEGKAAAAA\tMore fine pressures free there into the records; rights turn seconds; great areas ought to drain allegedly especially gothic dealers; programs speak even european, o\tSports                                            \tcamping                                           \t2.25\t4430.31\t1.28820649802073507\nAAAAAAAAEJGAAAAA\tNational systems must believe old issues. Long police would make able towns. Slow years earn exactly nearer the terms. Social, old comparisons shall survive wildly previous children\tSports                                            \tcamping                                           \t2.12\t4781.18\t1.39022938444641077\nAAAAAAAAEMKCAAAA\tWell main goods share probably traditional times. Enorm\tSports                                            \tcamping                                           \t5.17\t3862.11\t1.12299030949772389\nAAAAAAAAEOKAAAAA\tTerms reduce standards. Free things put\tSports                                            \tcamping                                           \t2.60\t1759.84\t0.51171076594568109\nAAAAAAAAEPAAAAAA\tPlayers must argue away significantly national sides. Elections might\tSports                                            \tcamping                                           \t3.53\t14678.84\t4.26818373238141050\nAAAAAAAAFDBBAAAA\tLabour, bright taxes could not shock still with a reasons. Dominant weapons will cause home; women say therefore bloody, complete areas; dem\tSports                                            \tcamping                                           \t30.04\t3575.90\t1.03976868803138980\nAAAAAAAAFJODAAAA\tUnable school\tSports                                            \tcamping                                           \t2.63\t9178.29\t2.66878227905467845\nAAAAAAAAFLMDAAAA\tStill royal companies reach years. Complex, british plants must tell however such as a detectives. Ite\tSports                                            \tcamping                                           \t6.35\t8374.50\t2.43506330655747472\nAAAAAAAAFNHDAAAA\tJust capitalist exceptions communicate \tSports                                            \tcamping                                           \t7.91\t397.64\t0.11562225484739558\nAAAAAAAAFPNBAAAA\tAvailable tests forgive also policies. Almost local rights used to argue there new only men. Chi\tSports                                            \tcamping                                           \t2.78\t316.16\t0.09193021852065332\nAAAAAAAAGNGDAAAA\tNever top observations spend appropriate, common states. Homes make. 
There available hospitals will appreciate away upon a years. Roots hang \tSports                                            \tcamping                                           \t2.07\t4784.91\t1.39131396097437774\nAAAAAAAAHEBBAAAA\tResidents will l\tSports                                            \tcamping                                           \t7.50\t7103.96\t2.06562688247083863\nAAAAAAAAHGIBAAAA\tBold campaigns get with a numbers. Public, medical emotions recognize sources. Very single countries shall fit enough along with\tSports                                            \tcamping                                           \t4.72\t5615.05\t1.63269475425225965\nAAAAAAAAHGJDAAAA\tDemocrats may say again. There private services can think about fa\tSports                                            \tcamping                                           \t1.65\t18235.67\t5.30240741387437400\nAAAAAAAAHKNBAAAA\tDifferent, ltd. students may not try scottish sheets. Almost likely schools may not order. Partly effective c\tSports                                            \tcamping                                           \t3.91\t11958.94\t3.47731518052689077\nAAAAAAAAIIHCAAAA\tCertain, official generations might allow polish letters. Months provide equally product\tSports                                            \tcamping                                           \t8.26\t3715.04\t1.08022659100761608\nAAAAAAAAIKDCAAAA\tCentral, clear fingers must \tSports                                            \tcamping                                           \t5.58\t104.95\t0.03051643608850761\nAAAAAAAAILADAAAA\tAlways clinical doors\tSports                                            \tcamping                                           \t33.45\t2954.82\t0.85917651913334019\nAAAAAAAAJMICAAAA\tAvailable implications try only; magistrates must reduce quite black, ugly girls. Animals read. Chief pupils will manipulate easy more real seconds. Men might throw only british policies. Aspects ex\tSports                                            \tcamping                                           \t6.42\t12904.54\t3.75226841506993789\nAAAAAAAAKCJBAAAA\tAffectionately sad chains answer sideways small, concerned documents. Interested minutes notice as a yards. Difficult c\tSports                                            \tcamping                                           \t0.18\t7683.32\t2.23408807744213704\nAAAAAAAALALAAAAA\tCrucial sources make to a police. Great farmers make recent limitations. Yet indian colleagues should get. Mea\tSports                                            \tcamping                                           \t7.95\t1656.32\t0.48161013265475868\nAAAAAAAALDKBAAAA\tGood, white statements de\tSports                                            \tcamping                                           \t8.79\t4572.10\t1.32943494464283601\nAAAAAAAALHICAAAA\tConventional workers shall not take numbers. French, premier things could remember as to a gardens. Red districts ought to implement flowers. Fiscal, curious terms study much explicit words. Third\tSports                                            \tcamping                                           \t3.61\t5559.40\t1.61651333768889187\nAAAAAAAAMBGDAAAA\tFresh, electoral doors get at a teachers; children become more ministers; comfortable places shall not lift much safe, genuine procedures; official, extra beliefs break. 
Openly new days find ther\tSports                                            \tcamping                                           \t1.27\t4702.53\t1.36736023057922522\nAAAAAAAAMELBAAAA\tMuch basic birds can light apparently. Normal, close teeth cannot expect as civil ends. Long principal conditions could not cover less more new officers. Efficient words get to a years. Real, able \tSports                                            \tcamping                                           \t1.68\t3665.26\t1.06575200131265745\nAAAAAAAAMMOAAAAA\tFar specific clothes learn indeed times. Gastric, steady criteria imagine again in n\tSports                                            \tcamping                                           \t50.85\t6713.37\t1.95205456449265676\nAAAAAAAANGMAAAAA\tGrounds will take then by the boards. Historical candidates feel suitable numbers. Normally inevitable savings return systems. Psychological rooms would turn almost\tSports                                            \tcamping                                           \t2.39\t16931.42\t4.92316909306983803\nAAAAAAAANJJAAAAA\tAccounts listen firmly incredible trends. Votes must not exert away natural fears. Able terms reflect well golden others. British feet could not re\tSports                                            \tcamping                                           \t8.64\t12203.84\t3.54852504425319390\nAAAAAAAAOBLCAAAA\tLabour patients shall\tSports                                            \tcamping                                           \t2.75\t7756.62\t2.25540160545821715\nAAAAAAAAOHPBAAAA\tPowerful populations can produce honest lines; soviet, working-class feet w\tSports                                            \tcamping                                           \t2.14\t2940.02\t0.85487310556392702\nAAAAAAAAOKHDAAAA\tMinutes can compete much mathematical areas; pages put for example more good passengers. Differences undertake to a parts. About conscious situations know light studies; mad, l\tSports                                            \tcamping                                           \t1.46\t2184.90\t0.63530596674397594\nAAAAAAAAOLDEAAAA\tVisual, surprising parties earn resources. Particular, just situations can lose currently to a others. Social actors want loudly prime years. Fresh, other responsibilities obtain offices. Afraid t\tSports                                            \tcamping                                           \t9.02\t6215.95\t1.80741916059417696\nAAAAAAAAONBDAAAA\tGreat explanations would not fill; sure, political powers let eventually horses. Continually public examples ask yet wrong, dependent officials. Early, g\tSports                                            \tcamping                                           \t1.82\t3966.35\t1.15330029804337451\nAAAAAAAAONHBAAAA\tTrustees could respond further precise surveys. Conditions would weigh. White areas secure particularly living costs. Strong, bare provisions can keep so useful, physical feet. Demanding, supreme\tSports                                            \tcamping                                           \t4.48\t9027.65\t2.62498050742654327\nAAAAAAAAPBAAAAAA\tJust available years undertake social units. Alone long-term years communicate very huge workers. Relevant, false farmers start hardly bottom windows. 
Associations shall \tSports                                            \tcamping                                           \t7.57\t5611.89\t1.63177591730095251\nAAAAAAAAABGAAAAA\tSteps would make repeatedly short pairs. As good stages protect skills. Plants could not sweep observations. C\tSports                                            \tfishing                                           \t8.71\t4424.59\t1.05964726402462346\nAAAAAAAAACDDAAAA\tChrist\tSports                                            \tfishing                                           \t9.05\t1582.84\t0.37907514038334286\nAAAAAAAAAFIBAAAA\tAlmost personal matters may deal; major, australian offences happen prime, usual hours. Functions might visit at the followers. Championships shall smile observations; compani\tSports                                            \tfishing                                           \t2.61\t1554.46\t0.37227840004061759\nAAAAAAAAAGICAAAA\tAccidentally wrong communities look more goods. Rural matters recognize. Large, new days go hap\tSports                                            \tfishing                                           \t1.32\t4303.95\t1.03075513030558269\nAAAAAAAAAGMCAAAA\tProblems ought to remove rapidly then new authorities. Half way exotic months bar systems. Front, new models cause too; difficult, full others comprehend eve\tSports                                            \tfishing                                           \t2.89\t2105.84\t0.50432867101214193\nAAAAAAAAAJPBAAAA\tDelightful, married guns should go much tremendous, clear networks. Again just hours shall know there. Large, whole years cannot want \tSports                                            \tfishing                                           \t9.33\t2187.51\t0.52388786001109799\nAAAAAAAAALDAAAAA\tVery modern weeks must prevent hotly to a situations. Points look strongly regulations. Times take good groups. As af\tSports                                            \tfishing                                           \t68.83\t2026.90\t0.48542329107363830\nAAAAAAAABEFDAAAA\tMembers support general, mysterious programmes. Front times want with the services. Now new details should impose never cheap live activiti\tSports                                            \tfishing                                           \t4.96\t11202.69\t2.68293781078382606\nAAAAAAAABHGCAAAA\tTests shall see famous, good words; sexual, significant theo\tSports                                            \tfishing                                           \t8.63\t11684.99\t2.79844407813042221\nAAAAAAAACBEEAAAA\tPersonal, lacking artists cut pieces. Prices make quickly for a rooms. High, overall types ought to use together supposed women; reductions shall give prices. Lengthy, fundamental meas\tSports                                            \tfishing                                           \t9.23\t13101.80\t3.13775661107533389\nAAAAAAAACEMBAAAA\tOther offices shall embark blindly resources. Spectacular copies may look also old, other offices. Properties fill better important others. Very wrong supplies will not own both aspects. Certainly\tSports                                            \tfishing                                           \t7.25\t386.95\t0.09267084833042791\nAAAAAAAACKDBAAAA\tSheets identify in a persons. Successful studies cannot solve for instance impressive governments; public buildings can move to a women. 
Substances sweep even on a tales; however great spac\tSports                                            \tfishing                                           \t4.50\t5339.33\t1.27871880247087137\nAAAAAAAACLCEAAAA\tInherent, public girls run. Opposite, similar players might adjust though central ties. Entirely small generations achieve rats. At all western boxes prosecute almost suspicious, ordinary v\tSports                                            \tfishing                                           \t0.46\t2861.92\t0.68540264699268189\nAAAAAAAACPJAAAAA\tDifficult skills can tell specifically significant applicants. Irish women find si\tSports                                            \tfishing                                           \t8.65\t0.00\t0.00000000000000000\nAAAAAAAADCPCAAAA\tUsually english commentators will indicate still dangerous, spiritu\tSports                                            \tfishing                                           \t9.90\t13087.32\t3.13428878865945433\nAAAAAAAADDDBAAAA\tEarly, associated parents continue stories. Alive, key costs will not supply. For example excellent wi\tSports                                            \tfishing                                           \t0.65\t9375.15\t2.24525934545809862\nAAAAAAAADEDDAAAA\tJust left grounds would not shoot other, accessible readers. Long, true winners shall vary; male conditions must hear never local, clean studies. Major, generous pp. must not get always gre\tSports                                            \tfishing                                           \t3.62\t8.19\t0.00196142718135729\nAAAAAAAADFMCAAAA\tGroups deserve also only members. Inevitable, rare dreams worry much old enquiries. Please clear nerves turn necessar\tSports                                            \tfishing                                           \t2.58\t3603.80\t0.86307585789687587\nAAAAAAAADFPCAAAA\tForeign advances expand never new, colonial players. Colours confess lines. Urgent, massive items sit then men. Different countries cut however. Effectively old ideas suggest only actually particul\tSports                                            \tfishing                                           \t4.19\t20.28\t0.00485686730621806\nAAAAAAAADGICAAAA\tSole, public skills require long opportunities. Parents destroy how\tSports                                            \tfishing                                           \t4.84\t1396.88\t0.33453948731311060\nAAAAAAAADKFBAAAA\tCourses try military parents. Fast, w\tSports                                            \tfishing                                           \t1.64\t6454.18\t1.54571478453878082\nAAAAAAAADLIAAAAA\tNew parties strengthen please at all current things. Similar teams must lead most real firms. Simply tiny planes will set moving advantages. Concerned, average memories use\tSports                                            \tfishing                                           \t2.13\t5552.34\t1.32973267352104439\nAAAAAAAADOHBAAAA\tInternational, new heads succeed altogether. Inc men see about accord\tSports                                            \tfishing                                           \t4.11\t4917.54\t1.17770410517847910\nAAAAAAAAEABAAAAA\tIllegal campaigns ought to become here western, certain abilities. Indirect teachers would not tend no longer long, main agreements. 
Twice sweet patients ought to enjoy\tSports                                            \tfishing                                           \t0.33\t2469.18\t0.59134514867689882\nAAAAAAAAEAOBAAAA\tCommon, preliminary children will not maintain early further international \tSports                                            \tfishing                                           \t3.67\t4265.38\t1.02151798178483168\nAAAAAAAAEBGCAAAA\tNorthern, de\tSports                                            \tfishing                                           \t15.22\t1489.04\t0.35661093163959266\nAAAAAAAAEJHDAAAA\tUnable occasions command more effective, other birds. Proper songs know in a ports. Later wealthy details look now hours. Aware, black issues \tSports                                            \tfishing                                           \t0.59\t4257.58\t1.01964995589782473\nAAAAAAAAFEICAAAA\tPoints can appoint even pregnant ideas. Other, basic bodies shall frighten too modern laws; features accompa\tSports                                            \tfishing                                           \t1.97\t15202.78\t3.64092135826557149\nAAAAAAAAFFMAAAAA\tHome available features need with a questions. Hard waters can operate still more content bands. Organic, large ideas contribute points. Difficult, right produc\tSports                                            \tfishing                                           \t2.47\t7589.73\t1.81766821992220870\nAAAAAAAAFJCAAAAA\tCollective, full signals will assume only services. Political villages think children. So old\tSports                                            \tfishing                                           \t2.56\t2552.33\t0.61125878361338953\nAAAAAAAAGDABAAAA\tIndustrial, slight needs would disturb too for a folk. Now known buildings ought to suggest so. Papers create colours. Good levels tell into a r\tSports                                            \tfishing                                           \t2.72\t5261.10\t1.25998346078618504\nAAAAAAAAGNJAAAAA\tNorma\tSports                                            \tfishing                                           \t1.01\t8662.39\t2.07456009786539724\nAAAAAAAAGOCBAAAA\tOnwards horizontal sports find. Normal, powerful eyes come american, commercial situations. Major, enormo\tSports                                            \tfishing                                           \t1.89\t13071.78\t3.13056710631534049\nAAAAAAAAHACEAAAA\tShoes give more now annual ch\tSports                                            \tfishing                                           \t1.18\t6235.99\t1.49346035270723652\nAAAAAAAAHIJBAAAA\tAs modern women may find only with a bones. However simple minutes end changes. Catholic hands provide hard able rights. Weeks used to affect then tiny c\tSports                                            \tfishing                                           \t2.55\t3728.50\t0.89294032303915358\nAAAAAAAAHODBAAAA\tStrong, southern weeks use to a exceptions. Shoulders write natural, particular courses. Cold, labour things will hang. New authorities may bu\tSports                                            \tfishing                                           \t1.08\t5888.16\t1.41015837267164344\nAAAAAAAAHPCBAAAA\tAutomatically private stands go always easy terms. 
Well distinctive depar\tSports                                            \tfishing                                           \t1.17\t5365.88\t1.28507727520164501\nAAAAAAAAIBBDAAAA\tInternatio\tSports                                            \tfishing                                           \t1.86\t8437.51\t2.02070347459999698\nAAAAAAAAIBIAAAAA\tApparent,\tSports                                            \tfishing                                           \t7.13\t2649.10\t0.63443427913719237\nAAAAAAAAICNDAAAA\tSpecial, easy things invest here hot \tSports                                            \tfishing                                           \t4.61\t8905.67\t2.13282334630014721\nAAAAAAAAIDMDAAAA\tLeaves could not help accounts; maximum, supreme expenses may not build in a officers; r\tSports                                            \tfishing                                           \t0.44\t13341.40\t3.19513853447621392\nAAAAAAAAIGFAAAAA\tStill original workers solve merely villages. Only long years punish already. Scottish features should not take from th\tSports                                            \tfishing                                           \t4.81\t3.50\t0.00083821674416978\nAAAAAAAAJFODAAAA\tSettlements must make significa\tSports                                            \tfishing                                           \t7.42\t7154.29\t1.71338447732755427\nAAAAAAAAJHDDAAAA\tShortly new terms would recover yet satisfactory, previou\tSports                                            \tfishing                                           \t2.86\t3393.96\t0.81282117172642234\nAAAAAAAAJHKCAAAA\tPublic, certain lives could not choose indeed in a tools. Then bad things gain women.\tSports                                            \tfishing                                           \t2.62\t392.55\t0.09401199512109957\nAAAAAAAAJJEAAAAA\tCircumstances cannot take lines. Modern goods would make corresponding tools. Subsequently toxic practices see annually alm\tSports                                            \tfishing                                           \t3.56\t12846.92\t3.07671527285990692\nAAAAAAAAKGAEAAAA\tAlso normal groups must not keep possibly others. Rates will not depend centuries. Fields could indicate already in a months; important arti\tSports                                            \tfishing                                           \t64.57\t16106.48\t3.85734892161020958\nAAAAAAAAKIACAAAA\tCrops shall argue already for the responses. Easy committees like as with a figures. Easy current groups should not meet nevertheless; evident, international forces sen\tSports                                            \tfishing                                           \t6.00\t1274.25\t0.30517076750238473\nAAAAAAAAKILCAAAA\tElements take further vital, typical \tSports                                            \tfishing                                           \t1.73\t6796.42\t1.62767801268868558\nAAAAAAAAKIMBAAAA\tGood, silent examples close so literary, financial years. Often foreign interests discourage best suddenly whi\tSports                                            \tfishing                                           \t4.19\t4776.06\t1.14382098947415311\nAAAAAAAAKKBCAAAA\tProjects support indeed in a departments. Populations involve even with a terms; fine, classical miles visit continuously crucial, great days. 
Steady, sc\tSports                                            \tfishing                                           \t0.68\t7528.93\t1.80310719762348789\nAAAAAAAAKOACAAAA\tDirections use none the less. Military, new recordings pass yellow tasks. Frequently wor\tSports                                            \tfishing                                           \t1.49\t1880.44\t0.45034751268760788\nAAAAAAAAKOIDAAAA\tPoor networks explain personally to a funds. Already federal words accelerate companie\tSports                                            \tfishing                                           \t2.01\t7024.79\t1.68237045779327228\nAAAAAAAALBICAAAA\tSectors might not know properly. Large, electric workers used to drop even as ca\tSports                                            \tfishing                                           \t6.89\t1774.46\t0.42496630967414683\nAAAAAAAALOIBAAAA\tOld others will cut other activities. Sharp passages avoid allegedly orthodox, additional firms. High officers must form.\tSports                                            \tfishing                                           \t0.25\t2612.13\t0.62558031541377612\nAAAAAAAAMAEBAAAA\tVery acids should depend much a little christian tons. Warm rules defeat at large details. Banks should not seek then. Times can back stiffly ordinary, chemical\tSports                                            \tfishing                                           \t6.07\t10778.84\t2.58142976306486528\nAAAAAAAAMEMDAAAA\tFactors might assist now absent, voluntary demands; political companies might know no longer concerned things; autonomous, possible events can dry at \tSports                                            \tfishing                                           \t6.68\t6637.53\t1.58962536740836076\nAAAAAAAAMIIDAAAA\tPale, other animals assist in a words. Is\tSports                                            \tfishing                                           \t3.40\t1226.34\t0.29369677772719206\nAAAAAAAAMKLCAAAA\tNecessary women know little international troops. Immediate, possible drugs can try effectively too gentle spots. Northern, german ideas tell to a areas. False appropriat\tSports                                            \tfishing                                           \t2.18\t505.79\t0.12113189915246708\nAAAAAAAAMNAAAAAA\tWestern, social things will go in order. Warm, male cards used to describe. High, briti\tSports                                            \tfishing                                           \t0.51\t2346.30\t0.56191655624158939\nAAAAAAAAMNEDAAAA\tDifferent, common buildings could not comply even. Impossible transactions build always qualities. Police move tiles. Options must use just different stages; words\tSports                                            \tfishing                                           \t8.87\t4167.10\t0.99798085560854416\nAAAAAAAANCGBAAAA\tMembers like very. Then interested principles could remember yet important, new agents. Necessarily due assets generate across. Areas give anyway as social projects. Main, \tSports                                            \tfishing                                           \t1.79\t7991.56\t1.91390268686784986\nAAAAAAAANECDAAAA\tBlind sessions hurt traditionally public, various clothes. High, southern schools might not tal\tSports                                            \tfishing                                           \t1.43\t1122.60\t0.26885203342999968\nAAAAAAAANMFBAAAA\tPractical roads mean either dishes. 
Necessary issues determine simply fund\tSports                                            \tfishing                                           \t3.40\t4810.52\t1.15207383204675046\nAAAAAAAAOCHDAAAA\tJust formal teams ask still, certain interests. Well l\tSports                                            \tfishing                                           \t9.79\t2218.77\t0.53137433298902583\nAAAAAAAAOEKDAAAA\tBooks must work major, able forces. Clearly future teachers would measure certain, direct measures. Hard tears go main nurses. Cruel patients used to leave much only days. Yet social defence\tSports                                            \tfishing                                           \t8.56\t1810.80\t0.43366939438361253\nAAAAAAAAOGNCAAAA\tComprehensive, able acts must not resign. British, red forces convict perhaps; years want as well problems\tSports                                            \tfishing                                           \t54.91\t119.66\t0.02865743303067322\nAAAAAAAAOIOCAAAA\tNew companies must benefit always to a companies; adults might get yet international, comfortable indicators. Dual bones shall find ever like parents. Wars need new, heavy purposes\tSports                                            \tfishing                                           \t3.43\t7734.29\t1.85228896636140409\nAAAAAAAAOLKDAAAA\tBacks think now back, british wines. Very fine shows get often serious, fatal prisoners. Good terms ought to come far easy, obvious shoulders. Machines play more ac\tSports                                            \tfishing                                           \t2.94\t7583.99\t1.81629354446177025\nAAAAAAAAOOFEAAAA\tTiny values allow equations. \tSports                                            \tfishing                                           \t4.39\t7729.84\t1.85122323364381680\nAAAAAAAAPBKCAAAA\tIll, simple objects shall bear solid trees. Ears should use there minimum, inappropriate personnel. Available practices should not apply increasingly pr\tSports                                            \tfishing                                           \t7.87\t15557.69\t3.72591893102937088\nAAAAAAAAPDKAAAAA\tSure reliable suppliers show upright other women. Maybe\tSports                                            \tfishing                                           \t1.11\t12392.70\t2.96793389870653577\nAAAAAAAAPEFEAAAA\tMuch common times achieve existing, continuing positions. Los\tSports                                            \tfishing                                           \t8.20\t9965.46\t2.38663298152977430\nAAAAAAAAPHKCAAAA\tGood, whole facilities maintain for a points. More worthwhile directors battle annual hours. Yes\tSports                                            \tfishing                                           \t8.90\t603.00\t0.14441277049553697\nAAAAAAAAPLGCAAAA\tRules offer. Important, italian goo\tSports                                            \tfishing                                           \t4.06\t3150.39\t0.75448847104715544\nAAAAAAAAPLIAAAAA\tVital, similar activities change thickly. Seats would sit essentially brilliant words. Hig\tSports                                            \tfishing                                           \t68.38\t6302.32\t1.50934575746174558\nAAAAAAAAPMMBAAAA\tEven useless times make old, old studies. Early public employees must open together warm consequences. Sufficient, evident men would operate stars. 
Various, other sections control l\tSports                                            \tfishing                                           \t89.62\t2679.48\t0.64171000047658609\nAAAAAAAAAFKCAAAA\tA\tSports                                            \tfitness                                           \t7.12\t10468.61\t4.22441966945963305\nAAAAAAAAAHMDAAAA\tFast bizarre situations fulfil all as political plans. Thus labour conventions run more part-time experiments. Early considerable own\tSports                                            \tfitness                                           \t0.81\t5713.17\t2.30544721056249987\nAAAAAAAAAPNAAAAA\tOther, cultural differences might take. Musical branches take only new defences. \tSports                                            \tfitness                                           \t3.76\t18567.33\t7.49251276543379958\nAAAAAAAABDGDAAAA\tIncreased machines claim \tSports                                            \tfitness                                           \t1.76\t2327.22\t0.93910786084875139\nAAAAAAAACAHCAAAA\tNew parties survive\tSports                                            \tfitness                                           \t1.06\t5055.94\t2.04023384036732070\nAAAAAAAACEFDAAAA\tAbruptly real years cope together; significant accounts provide at a others. Twice competent languages cannot impose most protests. Identical leaders \tSports                                            \tfitness                                           \t3.76\t11311.78\t4.56466578930728034\nAAAAAAAACGPCAAAA\tClinical, real figures figure effects. Full, pleased bacteria used to fit immediately more main things. Windows will not present perhaps\tSports                                            \tfitness                                           \t4.25\t1715.39\t0.69221484579083182\nAAAAAAAACMADAAAA\tConcerned clothes comment less small probl\tSports                                            \tfitness                                           \t0.73\t1855.00\t0.74855195549816254\nAAAAAAAACOKCAAAA\tLarge, working matters oppose etc far remote aspects; today amer\tSports                                            \tfitness                                           \t3.52\t11563.15\t4.66610164108818229\nAAAAAAAADKMDAAAA\tPhysical questions confirm much to the marks. Irish, pleased eyes would know in an subsi\tSports                                            \tfitness                                           \t2.86\t8639.15\t3.48617392255630775\nAAAAAAAADOCAAAAA\tLittle, national services will buy young molecules. In part video-taped activities join now\tSports                                            \tfitness                                           \t5.91\t408.38\t0.16479441918401058\nAAAAAAAAEBBEAAAA\tIntelligent trends used to bother open. Bedrooms will not hit all senior, economic boys; objects would sum. Often blue times should deal in a\tSports                                            \tfitness                                           \t3.84\t1925.10\t0.77683955230701493\nAAAAAAAAEBMDAAAA\tAbsolutely wild standards impose only so scottish schools. New, complex incomes can establish children. Certainly free groups will rest. Impressive teeth must go front s\tSports                                            \tfitness                                           \t4.00\t2927.91\t1.18150552885316716\nAAAAAAAAEHJDAAAA\tPolicies think races. 
Loc\tSports                                            \tfitness                                           \t40.32\t1793.89\t0.72389211183212873\nAAAAAAAAEIOAAAAA\tShares could release barely months. Aware writings used to use so very impossible authorities. \tSports                                            \tfitness                                           \t6.66\t3449.47\t1.39197170562385268\nAAAAAAAAEMMBAAAA\tBoys might not fill in a problems. Military, young ways will encourage somehow inner, large matters. Ways will begin today whole firm\tSports                                            \tfitness                                           \t3.62\t2731.00\t1.10204603259594711\nAAAAAAAAFFPDAAAA\tCorporate heroes examine forth technical, formal shares; buildings may not emphasize abo\tSports                                            \tfitness                                           \t68.11\t4428.60\t1.78708204319092324\nAAAAAAAAFHBDAAAA\tBelow old resources could cover lo\tSports                                            \tfitness                                           \t2.86\t2908.84\t1.17381017263141516\nAAAAAAAAFKKAAAAA\tRunning children may continue common, small wives; great, subtle teams shall change bad, good lines; others may want; parties used to like near a sty\tSports                                            \tfitness                                           \t2.32\t2591.76\t1.04585822974766455\nAAAAAAAAFLCBAAAA\tLabour, dominant dreams come. Please various symptoms cannot persuade so owners. Primary colours would argue once small posts. Live, asia\tSports                                            \tfitness                                           \t48.03\t4332.46\t1.74828647176149287\nAAAAAAAAFMOAAAAA\tDeep, light measures could ask around experimental sections. Days attend social, wise cases. Children should find; as\tSports                                            \tfitness                                           \t3.91\t12590.50\t5.08067029417769025\nAAAAAAAAFNPBAAAA\tTimes force also years. Emotional solutions ought to allow elderly differences. Too urban parents shall accommodate so. Traditional, effective im\tSports                                            \tfitness                                           \t3.60\t8417.45\t3.39671086674286159\nAAAAAAAAFOGAAAAA\tPrincipal eyes should pay frequently relevant areas. Light police m\tSports                                            \tfitness                                           \t3.17\t451.78\t0.18230771021830721\nAAAAAAAAGAABAAAA\tOriginal hands know as. So prime things might not identify. Less than little journals let very hard things; nurses see; large bodies name once political, national c\tSports                                            \tfitness                                           \t6.83\t1540.63\t0.62169358447392677\nAAAAAAAAGCMAAAAA\tMethods develop warm males. Governments depend please with the hospitals. At random tory weaknesses enter approximately simply young me\tSports                                            \tfitness                                           \t6.01\t24.98\t0.01008023064600760\nAAAAAAAAGENAAAAA\tAlso new activities would not drop immediately fina\tSports                                            \tfitness                                           \t6.42\t9171.55\t3.70101438676505262\nAAAAAAAAGPBAAAAA\tBeings should affect close projects. 
In common labour metres might call directly \tSports                                            \tfitness                                           \t2.85\t837.90\t0.33811950593633983\nAAAAAAAAIAFBAAAA\tMen could not escape so old victims. Tiny horses give together effective teeth; little, beneficial bones used to forget again days. Of course\tSports                                            \tfitness                                           \t71.90\t2421.19\t0.97702776772646693\nAAAAAAAAIKEEAAAA\tRegions see in the cop\tSports                                            \tfitness                                           \t1.90\t8595.06\t3.46838219440648889\nAAAAAAAAINBDAAAA\tAsleep, fat topics pick into a rul\tSports                                            \tfitness                                           \t2.70\t3452.62\t1.39324283158601937\nAAAAAAAAJDJBAAAA\tConscious, central results play above about the hands. Stages stay so available universities. Tomorrow professional birds decide; enthusiastically big views appear new window\tSports                                            \tfitness                                           \t9.62\t412.47\t0.16644486527456987\nAAAAAAAAJPAEAAAA\tPlease positive sys\tSports                                            \tfitness                                           \t0.31\t4494.44\t1.81365059346046449\nAAAAAAAAKABCAAAA\tSimply necessary girls could not take supreme hospitals. Issues ought to \tSports                                            \tfitness                                           \t93.50\t342.93\t0.13838324641454710\nAAAAAAAAKAGEAAAA\tOverseas campaigns must finance just. Researchers believe sure, positive days. Workers appear from a values. Periods can lift ago related, extens\tSports                                            \tfitness                                           \t8.92\t691.02\t0.27884871821473869\nAAAAAAAAKDJAAAAA\tRegular, gold effects take gently for a terms. Good, strong difficulties attract articles. Ultimate farmers develop \tSports                                            \tfitness                                           \t1.12\t3313.24\t1.33699853425052940\nAAAAAAAAKFPCAAAA\tRound prisoners go at all into a lives. Streets find again places. Kindly liable men offer plainly on a contents. Early accurate regions should no\tSports                                            \tfitness                                           \t4.49\t3281.89\t1.32434780443658472\nAAAAAAAAKLAEAAAA\tMore \tSports                                            \tfitness                                           \t0.82\t1089.45\t0.43962799348650845\nAAAAAAAAKNGAAAAA\tSolid, romantic feet would come so equations. Only economic feet will n\tSports                                            \tfitness                                           \t0.36\t6592.06\t2.66010749528906595\nAAAAAAAAKOICAAAA\tOnly subjects think for a goods. Windows wo\tSports                                            \tfitness                                           \t3.66\t9334.78\t3.76688292352837611\nAAAAAAAALCIBAAAA\tSpecial miles must ease under across a conditions. Points might continue australian, australian places. Entirely \tSports                                            \tfitness                                           \t3.17\t0.00\t0.00000000000000000\nAAAAAAAALLDDAAAA\tMen mean also weapons. Individual proposals ought to mean farmers. Sometimes valuable eyes might take rights. Rough, different rewards cost real, alone ministers. 
Requirements may no\tSports                                            \tfitness                                           \t64.89\t3913.00\t1.57902091744706739\nAAAAAAAAMDKDAAAA\tTogether working cases used to buy in a structures. Millions must \tSports                                            \tfitness                                           \t1.88\t3472.20\t1.40114398915402693\nAAAAAAAAMGABAAAA\tSure, coming sessions could not pass very. Concerned children pick on a individuals. Easy pairs shall return. Reports consider subsequently rough sites. Vital, normal w\tSports                                            \tfitness                                           \t2.27\t5967.84\t2.40821471811329074\nAAAAAAAAMHMAAAAA\tGirls move ways. Other, human actors should participate serious families. New di\tSports                                            \tfitness                                           \t4.79\t10717.00\t4.32465299572712017\nAAAAAAAAMKDDAAAA\tQuick reasons could set only distant a\tSports                                            \tfitness                                           \t1.29\t968.12\t0.39066744968025936\nAAAAAAAANBGCAAAA\tSo close miles would seem american, emotional horses. Other, alive operations ought to want further red layers. Parameters might faint bad, significant stations. So prime newspapers wou\tSports                                            \tfitness                                           \t2.97\t9281.14\t3.74523746428690903\nAAAAAAAANNEBAAAA\tRoyal speeches take evil, front margins. For example hard events ought to go angles. Possible, foreign lakes shall not reconsider. Other honours hear momen\tSports                                            \tfitness                                           \t8.13\t0.00\t0.00000000000000000\nAAAAAAAAPDHAAAAA\tPoints force into the symptoms. Local, strong negotiations get examples. For the time being fat result\tSports                                            \tfitness                                           \t5.61\t19543.75\t7.88652953114135530\nAAAAAAAAPFFDAAAA\tSubject, dead qualifications benefit more real nurses. Up to special writers give most responses; social circumstances de\tSports                                            \tfitness                                           \t2.69\t12178.65\t4.91447561877503891\nAAAAAAAAAIMAAAAA\tJust ready clothes try live skills. Girls investigate up \tSports                                            \tfootball                                          \t1.80\t3028.92\t1.26615656780156976\nAAAAAAAAAKKAAAAA\tMostly furious applications cut in a workers; successful, substantia\tSports                                            \tfootball                                          \t3.20\t4690.04\t1.96054202463322710\nAAAAAAAAAMBDAAAA\tDynamic, technical problems cannot go important, general sources. Overall inevitable subjects may take. Recent ends would n\tSports                                            \tfootball                                          \t2.51\t10300.92\t4.30601584472305176\nAAAAAAAAAOCEAAAA\tAllowances might lay at best children. Academic sections burst hot times. Short-term, warm goods\tSports                                            \tfootball                                          \t4.96\t652.80\t0.27288505720219244\nAAAAAAAABDPAAAAA\tSophisticated, unfair questions may remove separate premises. Typical patterns intervene typically walls. 
Naked areas ought to return now military, necessary children; young met\tSports                                            \tfootball                                          \t33.19\t7921.58\t3.31139830182558766\nAAAAAAAACAGAAAAA\tOnly available cars could not allow during a films. Cuts might not grow also unfortunately poor names. Windows go at first so key effects. Leading, possible relationships used to rec\tSports                                            \tfootball                                          \t1.80\t5455.78\t2.28063853765713464\nAAAAAAAACCDDAAAA\tPupils talk tonight even expected rights. However federal costs may not borrow large decisions. Social, american soldiers repair legal, economi\tSports                                            \tfootball                                          \t11.06\t1681.47\t0.70289221374658476\nAAAAAAAACEBEAAAA\tBritish components must go. Wrong, overseas jobs explain with a towns. Quite ideological habits may\tSports                                            \tfootball                                          \t0.63\t8173.32\t3.41663127409899441\nAAAAAAAACGFAAAAA\tGirls would face criminal, special offenders. Healthy principles get very greek, ade\tSports                                            \tfootball                                          \t1.47\t435.76\t0.18215746404170861\nAAAAAAAACIECAAAA\tDelicate readers gain too able officers. Feet see as international appearances; just prominent samples halt just. Substantia\tSports                                            \tfootball                                          \t94.83\t12471.06\t5.21318309049015641\nAAAAAAAACNDCAAAA\tDaily, level areas fetch known, other \tSports                                            \tfootball                                          \t69.68\t818.79\t0.34227260414611390\nAAAAAAAACPNBAAAA\tMore reasonable opp\tSports                                            \tfootball                                          \t3.70\t3418.34\t1.42894287137950754\nAAAAAAAADEKCAAAA\tAwful eyes get now like a gentlemen. Final countries may become french, turkish sciences. French lives repeat great, big standards. Large, able roads cl\tSports                                            \tfootball                                          \t6.18\t5009.22\t2.09396643112494858\nAAAAAAAADGHAAAAA\tThanks may add suddenly strong weeks. Times abandon as files. Systems feel cheap targets. Green, formal events understand french, rea\tSports                                            \tfootball                                          \t0.97\t2280.64\t0.95335872680393409\nAAAAAAAADGKDAAAA\tMiserable officers introduce clearly. Much mathematical eyes could change so before prominent plans. Prices i\tSports                                            \tfootball                                          \t4.67\t20055.07\t8.38346955291662626\nAAAAAAAADLIDAAAA\tElse social offenders will not support mines. Gently intelligent expressions speed days. 
Sometimes old houses offer really important, local month\tSports                                            \tfootball                                          \t2.19\t15388.53\t6.43275105592471583\nAAAAAAAADPEAAAAA\tCritics can cover only str\tSports                                            \tfootball                                          \t1.79\t10295.54\t4.30376688392686948\nAAAAAAAAEBNBAAAA\tSources negotiate never books.\tSports                                            \tfootball                                          \t12.71\t1473.07\t0.61577633457848288\nAAAAAAAAEHMDAAAA\tYoung, previous metals keep here due, equal churches. Strong temperatures avoid. Established, average children could help also technical aspects. Feelings navigate now weekl\tSports                                            \tfootball                                          \t1.45\t8988.48\t3.75738645674136449\nAAAAAAAAEIACAAAA\tWhite, vital departments should become aga\tSports                                            \tfootball                                          \t2.88\t4166.35\t1.74162784631488126\nAAAAAAAAEJPDAAAA\tDaily, marked years may not save players. Then hot families please universally always parental opportunities. Closely medic\tSports                                            \tfootball                                          \t3.21\t1605.80\t0.67126045474154508\nAAAAAAAAELMCAAAA\tPopular, strong farms worry certainly followers. New documents will argue considerably under a men. Catholic, exist\tSports                                            \tfootball                                          \t0.10\t1110.81\t0.46434352081919024\nAAAAAAAAEOODAAAA\tClearly great options cannot believe. Responsible products ought to condemn at a systems. Dull types assure; real ser\tSports                                            \tfootball                                          \t3.03\t8226.16\t3.43871958050610814\nAAAAAAAAGDEAAAAA\tSucc\tSports                                            \tfootball                                          \t4.47\t9246.93\t3.86542435967320677\nAAAAAAAAGFFDAAAA\tAlmost busy threats go together recent sides; still tired wines shall not admit on a\tSports                                            \tfootball                                          \t3.88\t7510.88\t3.13971648045159802\nAAAAAAAAHAFEAAAA\tEconomic, crude hands put available payments; irish months pay main, tropical members. Neither soft syste\tSports                                            \tfootball                                          \t4.23\t2877.00\t1.20265059676885365\nAAAAAAAAHAGAAAAA\tInternational, profitable schools sit rather di\tSports                                            \tfootball                                          \t81.85\t205.56\t0.08592869540208744\nAAAAAAAAIPLBAAAA\tYoung features may seem actually for the plans. Unduly\tSports                                            \tfootball                                          \t9.86\t3012.65\t1.25935534249415605\nAAAAAAAAJLIAAAAA\tStandards must pa\tSports                                            \tfootball                                          \t3.63\t836.01\t0.34947095078370849\nAAAAAAAAJLOAAAAA\tVery aspects use then. Popular, weste\tSports                                            \tfootball                                          \t6.30\t1501.17\t0.62752276550278069\nAAAAAAAAJPFCAAAA\tModels may register still digital, professional birds. There necessary things can fail never irish forces. 
All corporate readers identify more\tSports                                            \tfootball                                          \t68.59\t9190.37\t3.84178100974159524\nAAAAAAAAKGCEAAAA\tAgain sexual officials shall not\tSports                                            \tfootball                                          \t7.81\t11678.56\t4.88190029662873252\nAAAAAAAAKNDCAAAA\tAges must answer even such as a citizens. Fatal candidates say also. Thus great friends create normally \tSports                                            \tfootball                                          \t19.60\t1325.80\t0.55421416795138901\nAAAAAAAALDPCAAAA\tSuccessive, joint\tSports                                            \tfootball                                          \t4.67\t4363.92\t1.82421654231892103\nAAAAAAAALHDBAAAA\tDemocrats take before. Joint years woul\tSports                                            \tfootball                                          \t65.80\t7674.39\t3.20806733171252094\nAAAAAAAALHEAAAAA\tHours take so. Now new things want common, recent drugs. Ships will st\tSports                                            \tfootball                                          \t3.32\t1013.26\t0.42356543054640551\nAAAAAAAALMIDAAAA\tQuiet, small objectives should stay as matches. In particular formal students allow then. Professional, other demands drop\tSports                                            \tfootball                                          \t1.58\t2487.00\t1.03962184016827912\nAAAAAAAAMDLCAAAA\tSuper stars might like approximately stories. Major practices might allow more fresh decisions. Advanced organisations wield. Towns must not protect quickly. Active, righ\tSports                                            \tfootball                                          \t4.05\t6655.69\t2.78222785902276383\nAAAAAAAAMHAAAAAA\tCheaply financial tales allow unfortunately safe, red meals. Who\tSports                                            \tfootball                                          \t2.91\t5952.36\t2.48822012727947644\nAAAAAAAAMIEBAAAA\tHard figures will not help twice central principles. Collective, impor\tSports                                            \tfootball                                          \t2.33\t468.64\t0.19590204229049551\nAAAAAAAAMJKAAAAA\tAdvanced, foreign stories would greet always corporate games. Recent dev\tSports                                            \tfootball                                          \t3.00\t634.63\t0.26528958923441696\nAAAAAAAANIEAAAAA\tVery questions make secret stocks. Aggressive, major years qualify for example senio\tSports                                            \tfootball                                          \t4.39\t292.60\t0.12231336969571310\nAAAAAAAAODBBAAAA\tMatters reserve more proper, concerned birds. True months result together more chemical columns. Social views reduce in a affairs. Medieval, serious sports may n\tSports                                            \tfootball                                          \t0.16\t7175.77\t2.99963297628642230\nAAAAAAAAOGADAAAA\tProud things mus\tSports                                            \tfootball                                          \t28.70\t17469.96\t7.30283552990198210\nAAAAAAAAACEEAAAA\tUnacceptable flowers should not give reasonable, ethnic governments. 
Employees shall complain \tSports                                            \tgolf                                              \t8.39\t4100.46\t1.45417454300510042\nAAAAAAAAAHOCAAAA\tCrucial products would carry silently double groups. Really full systems run usual structures. Financial departments must meet well c\tSports                                            \tgolf                                              \t1.50\t12212.90\t4.33114535351326216\nAAAAAAAAANGBAAAA\tDifferent hours must not know towards a weapons. Facilities shall not know items. Today established fl\tSports                                            \tgolf                                              \t5.73\t437.77\t0.15524940852766344\nAAAAAAAABEEBAAAA\tEducational terms must apply automatic, other objectives. Indeed financial sources pass very unacceptabl\tSports                                            \tgolf                                              \t6.99\t16143.50\t5.72508126771211978\nAAAAAAAABPHCAAAA\tMore black mothers shall repea\tSports                                            \tgolf                                              \t14.90\t7660.56\t2.71671747490846200\nAAAAAAAACCMAAAAA\tAdmini\tSports                                            \tgolf                                              \t9.35\t2840.01\t1.00717242550345943\nAAAAAAAACGGCAAAA\tSeparate, rapid bodies will start too religious surveys. Geographical, loyal things involve in order. Notes need dead for a members; at last economic managers look once more nervous skills; joint\tSports                                            \tgolf                                              \t6.57\t2341.31\t0.83031498887521685\nAAAAAAAACIGDAAAA\tEuropean quantities would wait\tSports                                            \tgolf                                              \t0.73\t9236.58\t3.27563236818065546\nAAAAAAAACKEAAAAA\tWet, suitable projects shall follow voluntarily all of a sudden resulting negotiations. High, video-taped services should not take all full eyes; wrong representatives follow royal, full figures. Fre\tSports                                            \tgolf                                              \t3.35\t7298.73\t2.58839919478975935\nAAAAAAAACNHBAAAA\tGood, interior faces contribute with a rights. Social, certain versions pick furiously between a troops. Forward political countries bec\tSports                                            \tgolf                                              \t7.89\t4757.12\t1.68705042898124194\nAAAAAAAACPMCAAAA\tGreat, new errors w\tSports                                            \tgolf                                              \t3.21\t791.01\t0.28052135742391451\nAAAAAAAADJICAAAA\tStairs say long words. Newspapers will go exceedingly. Other, empty numbers must not provide therefore environmental months. Entirely bare groups buy. New days\tSports                                            \tgolf                                              \t20.77\t1505.63\t0.53395199982069557\nAAAAAAAAEBEBAAAA\tLabour parties worry far well clear files. 
Finally domestic generations would not announce too; continuous, possible patterns might conceal\tSports                                            \tgolf                                              \t4.32\t2152.66\t0.76341273216794201\nAAAAAAAAENPDAAAA\tLive processes review home at pres\tSports                                            \tgolf                                              \t2.74\t4204.30\t1.49100004174076658\nAAAAAAAAFCFEAAAA\tJudicial models should not pick. Close dogs can refuse exactly. European, r\tSports                                            \tgolf                                              \t5.70\t6536.36\t2.31803463902021193\nAAAAAAAAFIJAAAAA\tPages could watch fundamental, literary components. Financial, royal elements should overcome environmental trustees. Shared areas \tSports                                            \tgolf                                              \t3.07\t4544.08\t1.61149857756900853\nAAAAAAAAFOBCAAAA\tDemands could treat lines. Conditions suck studies. Documents could not hide local things; gold calls see together. Preferences may refuse indeed in a pieces. Old, unknown boys emerge more opposite, \tSports                                            \tgolf                                              \t2.87\t625.67\t0.22188568753798383\nAAAAAAAAGBJDAAAA\tNew sources play just. English groups evaluate here indian changes. Familiar, able authorities get direct important, emotional orde\tSports                                            \tgolf                                              \t6.52\t7170.18\t2.54281061753176740\nAAAAAAAAGCHBAAAA\tMost angry years help intimate conditions. By far urgent police would agree \tSports                                            \tgolf                                              \t1.81\t13747.41\t4.87533926785135024\nAAAAAAAAGFCAAAAA\tThen growing levels light sometimes human, fellow cities. Users may derive odd championships. Stages support right \tSports                                            \tgolf                                              \t8.86\t5586.76\t1.98127141098295675\nAAAAAAAAGHBCAAAA\tBrown customers can detect too. Then human numbers cannot prepare never victorian long accountants; interests share open in the years. Full-time, underlying \tSports                                            \tgolf                                              \t92.44\t6716.33\t2.38185864718140065\nAAAAAAAAGPGCAAAA\tSecondary, normal \tSports                                            \tgolf                                              \t6.04\t7486.01\t2.65481559890393074\nAAAAAAAAHCABAAAA\tWishes might behave environmental regions. Statements conflict now nuclear\tSports                                            \tgolf                                              \t7.46\t16077.73\t5.70175679687386128\nAAAAAAAAHLDBAAAA\tHorses say. Other peasants can keep at first large kilometres. Necessarily new miles separate for an poems; interestingly indian teeth used to make further.\tSports                                            \tgolf                                              \t3.40\t752.00\t0.26668697081299062\nAAAAAAAAIBLAAAAA\tRussians receive then definit\tSports                                            \tgolf                                              \t8.76\t20347.14\t7.21584724907956645\nAAAAAAAAIDOBAAAA\tIndependent, scientific subsidies might contain. Here certain instructions shall not imagine exhibitions. Either other attitudes buy finally. 
Public, right p\tSports                                            \tgolf                                              \t4.05\t198.74\t0.07048054332363531\nAAAAAAAAIIAEAAAA\tMarried professionals clarify plans. All basic children could prove more religious big trees.\tSports                                            \tgolf                                              \t4.01\t7501.44\t2.66028764672260686\nAAAAAAAAIKPCAAAA\tRoles shall not remember primary, inc years. Young feelings can s\tSports                                            \tgolf                                              \t5.74\t3892.36\t1.38037459802347363\nAAAAAAAAILFEAAAA\tParticular, complete artists belong much enough active cheeks; profits may see able, complete processes. Here available officials take aside at a eyebrows. \tSports                                            \tgolf                                              \t4.07\t10080.46\t3.57490338005521200\nAAAAAAAAINHDAAAA\tPoles decide over for a managers. Properly other views include slim functions. Bright, other minutes should talk exactly certain weeks.\tSports                                            \tgolf                                              \t6.56\t1356.03\t0.48089831520151552\nAAAAAAAAKBNCAAAA\tInevitably dead trees establish original, primary events. Other women ought to issue almost long medical achievements. Catholic, hard cars need here difficult humans. Great,\tSports                                            \tgolf                                              \t0.80\t5928.82\t2.10257851900994022\nAAAAAAAAKGHAAAAA\tStrong changes stay. Future claims will not recoup fo\tSports                                            \tgolf                                              \t2.23\t9989.59\t3.54267752229221140\nAAAAAAAAKGMDAAAA\tImpressive records lie easy origins. Social schools shall bend else different details. Novel chemicals present primarily by a bags. Molecules shall see repeated\tSports                                            \tgolf                                              \t3.63\t4279.32\t1.51760490417479657\nAAAAAAAAKJGAAAAA\tAlso major pieces resign never. Substan\tSports                                            \tgolf                                              \t4.63\t55.04\t0.01951921658716357\nAAAAAAAAKNCBAAAA\tAssets may not engage heavily always formal groups. Local, genetic offices cannot keep still sad, annual troops; supreme, natural gaps can see. Nearl\tSports                                            \tgolf                                              \t7.20\t4005.33\t1.42043793192339857\nAAAAAAAAMCBBAAAA\tSo overall investor\tSports                                            \tgolf                                              \t2.54\t15395.25\t5.45972418538390139\nAAAAAAAAMDBDAAAA\tBrothers appoint even.\tSports                                            \tgolf                                              \t3.65\t3103.75\t1.10070436922981335\nAAAAAAAAMDEAAAAA\tClosely substantial instructions wait for a companies; members may bring then characters; recent views should indicate near early days; objectives could not arrive categories. High gains speak\tSports                                            \tgolf                                              \t7.73\t77.67\t0.02754465029660237\nAAAAAAAAMFADAAAA\tNeighbours shall send important, excellent games. 
Plain important ways note monthly, japanese figures; routinely \tSports                                            \tgolf                                              \t4.81\t616.44\t0.21861238868079779\nAAAAAAAANGBCAAAA\tCertainly persistent players move often respective minutes; amer\tSports                                            \tgolf                                              \t7.78\t74.48\t0.02641335849222279\nAAAAAAAANMHCAAAA\tImpossible, natural cases may wait then products. Political sectors go here sure consultants. Me\tSports                                            \tgolf                                              \t2.14\t2979.66\t1.05669747267637717\nAAAAAAAANOGBAAAA\tClassical, small perceptions finance again ideas. Obligations determine. Clear, useful crowds could take thus formal, genetic individuals. Int\tSports                                            \tgolf                                              \t0.68\t14169.23\t5.02493221735711581\nAAAAAAAAOEIAAAAA\tForward working birds ought to try already then public pounds. Black, similar hands cover still at a rights. Right contracts save for example general, able feet. Systems could not t\tSports                                            \tgolf                                              \t8.61\t291.36\t0.10332701571286296\nAAAAAAAAOKCDAAAA\tYoung, severe parts must not act therefore rath\tSports                                            \tgolf                                              \t2.17\t1012.25\t0.35898123165618319\nAAAAAAAAOLHDAAAA\tOnly concerned times used to know however in the trees. Developers might not wear in the times. Studies see far variations. Calculations must not transport hardl\tSports                                            \tgolf                                              \t0.15\t8494.93\t3.01261588958563618\nAAAAAAAAPEOBAAAA\tSales include easier from the times. Significant, young features should not keep hardly social\tSports                                            \tgolf                                              \t4.30\t403.10\t0.14295414618978261\nAAAAAAAAPJLCAAAA\tLikely, exciting negotiations disrupt even communications; all normal girls may think about years; allegedly old hands end darkly musical years. Individual, similar \tSports                                            \tgolf                                              \t4.26\t9885.12\t3.50562860229110351\nAAAAAAAAPNFBAAAA\tBasic differences stem \tSports                                            \tgolf                                              \t0.88\t12915.95\t4.58047284663835931\nAAAAAAAAAILAAAAA\tContinental issues need famous areas. Thus christian years shall agree just foreign negotiations. Sensitive centres may not assess large remains. Men eat from the ideas. Other, specific plants say \tSports                                            \tguns                                              \t0.19\t6159.12\t2.82446517920513238\nAAAAAAAAAJGCAAAA\tRevolutionary son\tSports                                            \tguns                                              \t4.83\t7287.25\t3.34180595233776919\nAAAAAAAACCDEAAAA\tBusinesses may keep also behind a workers. Early, previous objectives hit wet, bottom requests. Under true hours touch similar, long sources. 
Widely able attitudes must appear now politica\tSports                                            \tguns                                              \t2.73\t6762.87\t3.10133441571052580\nAAAAAAAACHGBAAAA\tOccasional, biological questions make usually for a tools; parts will use between a machines. Languages swim alive commitments. Other russians shall finish b\tSports                                            \tguns                                              \t4.12\t2865.32\t1.31398585630415545\nAAAAAAAADDFCAAAA\tAgain dull trials ensure suddenly; communities should produce terms. Too extra notes might choose properly social, absolute talks\tSports                                            \tguns                                              \t6.99\t8342.32\t3.82564268171208874\nAAAAAAAADKLBAAAA\tOnly other packages shall not lift procedures. Available, only types result obviously rough parts. Deep, back boundaries assert english, blue police; findings will declare restaurants. Little, daily s\tSports                                            \tguns                                              \t2.81\t10686.60\t4.90068866722739088\nAAAAAAAADMJBAAAA\tComplicated, right projects forget naturally british, true weapons. Employers step also as continuous tickets. Ev\tSports                                            \tguns                                              \t5.02\t8567.83\t3.92905764075860015\nAAAAAAAAEBCBAAAA\tThen vague profits used to buy tonnes. I\tSports                                            \tguns                                              \t0.44\t2445.30\t1.12137199838780706\nAAAAAAAAEECDAAAA\tNULL\tSports                                            \tguns                                              \t8.03\t272.49\t0.12495916895296837\nAAAAAAAAEHLAAAAA\tVital, possible communications go yet operational effects; \tSports                                            \tguns                                              \t1.48\t11987.62\t5.49731378371310009\nAAAAAAAAEIBBAAAA\tNow good properties see quite mere exceptions; long publications ought to make alone facilities. Certa\tSports                                            \tguns                                              \t4.20\t3874.40\t1.77673237253249895\nAAAAAAAAEKKBAAAA\tNegative patients may not get for a eyes. Past little questions perform highly only, afraid acts. Again co\tSports                                            \tguns                                              \t1.13\t5931.38\t2.72002758099107309\nAAAAAAAAEMHCAAAA\tDifferences imagine up a feet. Tender methods shall complet\tSports                                            \tguns                                              \t93.05\t1128.12\t0.51733618730677336\nAAAAAAAAEMOBAAAA\tAnnual communications use enough in a standards; only famous conservatives used to kill new, public children. Men dance so examples. Christian patients shall cause as busy te\tSports                                            \tguns                                              \t2.43\t22127.23\t10.14716236203600213\nAAAAAAAAENIBAAAA\tCourts define so. Appropriate tables surprise well to a agreemen\tSports                                            \tguns                                              \t7.17\t131.70\t0.06039532662154917\nAAAAAAAAENNBAAAA\tExamples should not monitor firms. 
Fo\tSports                                            \tguns                                              \t3.84\t535.99\t0.24579568045470114\nAAAAAAAAEONAAAAA\tNew years can lend elements. Other, typical figures return under a flowers. Due, following others used to reject in full strong, lik\tSports                                            \tguns                                              \t0.78\t4193.11\t1.92288722862630256\nAAAAAAAAFJLAAAAA\tOther aspects might appear quite good\tSports                                            \tguns                                              \t0.21\t5214.14\t2.39111380676146088\nAAAAAAAAFKFCAAAA\tStrong chips meet to a connections; necessary, suprem\tSports                                            \tguns                                              \t2.74\t4156.55\t1.90612144926955361\nAAAAAAAAGACCAAAA\tArtistic children can stay significant\tSports                                            \tguns                                              \t5.71\t4613.16\t2.11551484402024129\nAAAAAAAAGDCAAAAA\tOld ideas must withdraw holy pensioners. Additional bo\tSports                                            \tguns                                              \t7.83\t1028.06\t0.47145041371715901\nAAAAAAAAGFCDAAAA\tHigh, capital clothes can show. Prob\tSports                                            \tguns                                              \t28.98\t231.55\t0.10618479786803121\nAAAAAAAAGMDBAAAA\tSettlements relocate colleagues. Well \tSports                                            \tguns                                              \t0.15\t9689.92\t4.44362857506971716\nAAAAAAAAHCLCAAAA\tMajor, late transactions ought to determine interested, industrial group\tSports                                            \tguns                                              \t3.27\t2963.68\t1.35909203949698443\nAAAAAAAAHMCBAAAA\tFilms exclude british, young members; spots decide other, poor agents. Black, \tSports                                            \tguns                                              \t7.63\t834.49\t0.38268258247848571\nAAAAAAAAIDNCAAAA\tBadly heavy reports shall keep there important, given women. Vice versa pure plants reliev\tSports                                            \tguns                                              \t2.78\t1558.80\t0.71483853559355238\nAAAAAAAAIKACAAAA\tUpwards new instructions help enough firms. Funds see then. Mines might play girls; odd difficulties bid complaints. Others go slightly at a fees. Empty awards find necessarily fi\tSports                                            \tguns                                              \t5.31\t4316.40\t1.97942587569669586\nAAAAAAAAIKAEAAAA\tPolitical, appointed actors might not take formal resources. Possibly new programmes might not use in a waves. Racial, suspicious reader\tSports                                            \tguns                                              \t1.08\t15990.81\t7.33310700754088619\nAAAAAAAAJLEBAAAA\tGolden, royal counties work then jobs. Patterns would take efficiently compl\tSports                                            \tguns                                              \t42.09\t2480.64\t1.13757830698921593\nAAAAAAAAKBECAAAA\tGirls help diverse, clear workers. Classes improve no longer\tSports                                            \tguns                                              \t3.07\t147.44\t0.06761341653060903\nAAAAAAAAKJICAAAA\tSocial, large demands may attend subsequent, french sales. 
Small, able others will react in a principles. Enormous procedures could not move terms. Important members take so\tSports                                            \tguns                                              \t6.84\t266.10\t0.12202882622622805\nAAAAAAAAKLICAAAA\tWooden, english birds say so to a states; key, video-taped trends check largely ago fast ways. Urban patients promote and so on political minu\tSports                                            \tguns                                              \t7.33\t4309.42\t1.97622496924401239\nAAAAAAAALKBAAAAA\tAlone, fortunate minutes can put particularly out of a consequences. Darling costs run already in a laws. Molecules discover. Temporary, political ty\tSports                                            \tguns                                              \t5.47\t1876.47\t0.86051646579755789\nAAAAAAAALKKDAAAA\tGood definitions deliver a bit international childre\tSports                                            \tguns                                              \t4.27\t10401.45\t4.76992384273130321\nAAAAAAAAMABBAAAA\tSuggestions go instead reasonable figures. More fat practices imagine \tSports                                            \tguns                                              \t1.92\t7358.08\t3.37428735692853857\nAAAAAAAAMAKBAAAA\tHowever old days hold perhaps new, gentle bones. Rules achieve also. Fine, vocational proble\tSports                                            \tguns                                              \t7.68\t1967.40\t0.90221538037384845\nAAAAAAAAMFDDAAAA\tChips ought to finish. Bottles may not clear. Right, white wives used to accommodate about a words. Courts choose well new, future rewards. Permanent tourists serve ahead polit\tSports                                            \tguns                                              \t5.55\t2717.44\t1.24617066343555491\nAAAAAAAAMGMCAAAA\tCold clients see lengthy, only spirits; numbers must not want once again tall leads; once naked lads make. Minutes lose front expenses. Probably alive p\tSports                                            \tguns                                              \t0.47\t3757.58\t1.72316075479575351\nAAAAAAAAMIIAAAAA\tRight, vital dreams vary most; documents\tSports                                            \tguns                                              \t4.18\t2652.80\t1.21652788505425697\nAAAAAAAANCGEAAAA\tDirectly essential organisations introduce onwards atomic words. Much famous steps ma\tSports                                            \tguns                                              \t62.90\t380.00\t0.17426138281084803\nAAAAAAAAOBHBAAAA\tToday keen pages wil\tSports                                            \tguns                                              \t8.17\t1181.16\t0.54165940768647699\nAAAAAAAAOILCAAAA\tPossible roots must reveal at least upper, previous populations. So gr\tSports                                            \tguns                                              \t3.01\t21554.07\t9.88432116684688198\nAAAAAAAAOJOAAAAA\tUnusually global cattle shall tempt great prices. Worlds would not sign certainly deposits. Contributions predict als\tSports                                            \tguns                                              \t4.06\t1782.00\t0.81719416886560838\nAAAAAAAAPGHDAAAA\tIn full possible products bear to a components. Lovely boards help alongside at the possibilities. 
True, dry papers should disagree into a c\tSports                                            \tguns                                              \t0.52\t763.63\t0.35018742041012600\nAAAAAAAAPOBEAAAA\tResources go in a records. Permanent, flat applications would work\tSports                                            \tguns                                              \t7.43\t571.34\t0.26200657488197345\nAAAAAAAAAAFDAAAA\tNegative in\tSports                                            \thockey                                            \t1.63\t5985.40\t2.60825063748619267\nAAAAAAAAABMDAAAA\tModern facilities see; certain procedures lure for a features. Still dependent companies put little persons; procedures find to a employers. Public boards know almost also tory considerations.\tSports                                            \thockey                                            \t8.87\t6280.74\t2.73695059793581544\nAAAAAAAAAEABAAAA\tContracts will improve just by a services. Strange, educational passengers resist only english days. Difficulties should debate then impressive, linguistic applications; fine, new eyes build; roya\tSports                                            \thockey                                            \t6.73\t11482.83\t5.00385916858448520\nAAAAAAAAAFADAAAA\tFollowing parts treat perhaps appearances. Coming studies perform loudly so professional streets. Lesser, elderly years wear equ\tSports                                            \thockey                                            \t2.07\t8396.19\t3.65879772779683831\nAAAAAAAAAICDAAAA\tGirls would not enhance here inner authorities. Commercial others might not think normally problems. Loudly bright peasants see yellow candidates. Comfortable sessions may\tSports                                            \thockey                                            \t5.75\t3982.08\t1.73526626433003944\nAAAAAAAAALPDAAAA\tDepen\tSports                                            \thockey                                            \t3.19\t1800.84\t0.78474990443589989\nAAAAAAAACBICAAAA\tThen sophisticated numbers might not facilitate alway\tSports                                            \thockey                                            \t1.14\t1035.30\t0.45115144935834786\nAAAAAAAACDBBAAAA\tSpeakers get more with a\tSports                                            \thockey                                            \t37.55\t4112.16\t1.79195107118074348\nAAAAAAAACGHBAAAA\tPublic, available symptoms take somewhat in a minutes; nerves seem. Curious, certain islands contact again vital respects; mass rules might recognise primary,\tSports                                            \thockey                                            \t8.68\t334.35\t0.14569930174148904\nAAAAAAAACMEEAAAA\tForeign children increase about so tall leaders. Available, domestic telecommunications mess subsequently primary characteristics. Cities risk businesses. Elegant views cannot use f\tSports                                            \thockey                                            \t7.88\t2922.03\t1.27332953691545754\nAAAAAAAADGAEAAAA\tAll british ways trap stages. Accidents welcom\tSports                                            \thockey                                            \t3.21\t4828.96\t2.10431015444169561\nAAAAAAAAECKAAAAA\tMuch catholic guests invite highest problems. Long men must assume maps. 
Passive applications want independen\tSports                                            \thockey                                            \t5.63\t10772.75\t4.69442845172910449\nAAAAAAAAEKMCAAAA\tEyes must increase roughly. Services should love now senior, rapid sales. \tSports                                            \thockey                                            \t0.88\t9712.50\t4.23240457055245201\nAAAAAAAAELCAAAAA\tInternational places\tSports                                            \thockey                                            \t7.18\t5185.35\t2.25961380076336237\nAAAAAAAAEOBBAAAA\tReasonable laws shall pay significant boys. Widespread operations would not run then words. Substantial paintings make stil\tSports                                            \thockey                                            \t0.88\t10680.29\t4.65413726752387621\nAAAAAAAAFALAAAAA\tMilitary, special factors may adopt often young names. Actually large-scale workers make here advantages. Precious, odd customers study in the careers; usual women win then firms. S\tSports                                            \thockey                                            \t3.48\t7195.62\t3.13562676715146818\nAAAAAAAAFIKCAAAA\tParts work only windows. Positive, vital eyes could happen without a minds; common payments must not investigate only important seeds. Here different\tSports                                            \thockey                                            \t8.94\t1422.63\t0.61993778267233306\nAAAAAAAAGJKAAAAA\tColleagues come so; great places finish only large years. Regulations would know genuinely most other services. Opi\tSports                                            \thockey                                            \t9.08\t3086.02\t1.34479126412522810\nAAAAAAAAHIDBAAAA\tMain months answer weapons. Little, norma\tSports                                            \thockey                                            \t1.15\t619.92\t0.27014180091396407\nAAAAAAAAICCCAAAA\tWorkers ought to widen late, close benefits. Final eyes restore yesterday high, public funds. Quickly educational days go perhap\tSports                                            \thockey                                            \t3.55\t11162.51\t4.86427370325224722\nAAAAAAAAJBHCAAAA\tThen suspicious authorities can advertise perhaps important massive mammals. Easy lawyers will put. Respectively responsible pounds might acknowledge ti\tSports                                            \thockey                                            \t4.00\t4553.02\t1.98406410891291892\nAAAAAAAAJIMCAAAA\tFlights might work bits. Appropriate powers ought to lie just very parental pounds\tSports                                            \thockey                                            \t3.03\t1200.96\t0.52334091048140775\nAAAAAAAAJNLBAAAA\tLittle hearts must not get here. Best professional hospitals achieve there foreign shoulders. Women should not forestall certainly able deals. Projects sound years. Facilities shall find dry, \tSports                                            \thockey                                            \t47.20\t1750.77\t0.76293096010153065\nAAAAAAAAKFEBAAAA\tAs able participants arise. As red years must make often versus a models. 
Alone techni\tSports                                            \thockey                                            \t0.13\t10294.75\t4.48613096038042269\nAAAAAAAAKHDAAAAA\tSmall regions allow so new deaths; slowly late attacks would install automatically acc\tSports                                            \thockey                                            \t5.69\t12283.72\t5.35286205110801192\nAAAAAAAAKHEEAAAA\tInteresting, complete times join secure reports. Ancient, traditional markets go lessons. Rapid terms figh\tSports                                            \thockey                                            \t3.26\t12950.49\t5.64341962078700893\nAAAAAAAAKOBAAAAA\tReports may develop relevant, clear cells. Intently inc\tSports                                            \thockey                                            \t7.52\t1084.78\t0.47271329009460889\nAAAAAAAALLPDAAAA\tForces trust together from the systems. Reasons exploit even mar\tSports                                            \thockey                                            \t3.36\t2768.45\t1.20640416302146057\nAAAAAAAALMPCAAAA\tAnnual priests look often practical genes. Needs may n\tSports                                            \thockey                                            \t0.72\t2604.48\t1.13495115118789706\nAAAAAAAAMBDDAAAA\tTenants shall not know so realistic years. Recommendations tell. Successful, proposed actions used to link also. Holes will not become only later previo\tSports                                            \thockey                                            \t5.91\t6583.03\t2.86867915161739080\nAAAAAAAAMEGAAAAA\tThen royal plans would afford certain, terrible days. Priests ought to care rarely \tSports                                            \thockey                                            \t4.15\t6918.52\t3.01487522980268214\nAAAAAAAAMEJBAAAA\tComplete clubs engage to a classes; other, small estates rob sl\tSports                                            \thockey                                            \t8.86\t2201.70\t0.95943218975395970\nAAAAAAAAMFBBAAAA\tDetails accompany ok. Black savings go ju\tSports                                            \thockey                                            \t7.28\t15049.92\t6.55828573430617849\nAAAAAAAAMGAEAAAA\tIssues recognise only previous \tSports                                            \thockey                                            \t75.67\t4488.20\t1.95581757462584454\nAAAAAAAAMHLBAAAA\tVery old efforts bring sorry supporters. Almost other subjects sha\tSports                                            \thockey                                            \t1.96\t7640.40\t3.32944801862022696\nAAAAAAAAMIKCAAAA\tToo female dates will achieve also national, capable statements. Actual, small lights see then cheap effects. Free peasants used \tSports                                            \thockey                                            \t3.59\t8586.28\t3.74163302095681932\nAAAAAAAANACBAAAA\tAs national managers shall respect years. Other police could not consider. Therefore true bodies continue in the factors. Special relations would reach on \tSports                                            \thockey                                            \t3.94\t1856.04\t0.80880434276737946\nAAAAAAAANMPDAAAA\tTonight certain authorities hang with a cattle. 
Internationa\tSports                                            \thockey                                            \t0.61\t9094.17\t3.96295564204694903\nAAAAAAAAOBHDAAAA\tPsychological, ill activities talk rather right windows. Leaders would know adequately sacred, ordinary offenders; important minutes could affect again norma\tSports                                            \thockey                                            \t7.66\t794.92\t0.34640134272571996\nAAAAAAAAOCECAAAA\tBritish observations speak great quantities. Personal, ready th\tSports                                            \thockey                                            \t1.66\t274.86\t0.11977541521359557\nAAAAAAAAOJFBAAAA\tLate, chief standards guarantee publicly police. Also political years might come curious years. Systems may not follow so with a times. Central, silent towns must apologis\tSports                                            \thockey                                            \t40.41\t5501.55\t2.39740389859694645\nAAAAAAAAAFGAAAAA\tColumns blame rapidly. English users may not get excellent, female manufactu\tSports                                            \toptics                                            \t0.25\t1588.38\t0.64760161773605996\nAAAAAAAAAGABAAAA\tSoftly old women ask perhaps as a questions; relevant needs used to fall. Entries would not call together questions. N\tSports                                            \toptics                                            \t3.85\t6270.40\t2.55651744787279515\nAAAAAAAAALMAAAAA\tProjects mount in general perhaps busy things. Accounts will fail. Often d\tSports                                            \toptics                                            \t56.35\t1751.04\t0.71392005484868258\nAAAAAAAAAPEBAAAA\tGood duties cannot determine gifts. Today social others succeed really quick eggs. Asleep, liable observers understand more after a operations. States must wish just similar women. Questio\tSports                                            \toptics                                            \t4.66\t2203.00\t0.89818957923956490\nAAAAAAAABKPCAAAA\tSolid police must lift increasingly western girls. However central days choose widely over a drivers. Able years release commonly christian, aware muscles; sometimes important\tSports                                            \toptics                                            \t2.47\t24705.19\t10.07260291018316218\nAAAAAAAACCFBAAAA\tMad, social circles could arrive increased eggs. Shareholders search very low carers. Fast, significant patients will not seize then capital memorie\tSports                                            \toptics                                            \t1.38\t6498.54\t2.64953286803063189\nAAAAAAAACEIDAAAA\tObvious eyes talk lives. Neutral, real guests must stay in a departments. Hands can drop in the rounds. Flexible, mutual margins may pass like women; large clubs try. Old, sure records would \tSports                                            \toptics                                            \t6.07\t1813.00\t0.73918189158480761\nAAAAAAAACMJDAAAA\tCircumstances join by a members. Human, personal priests will not obtain again wide, statutory days. Whole, new kids shall not encourage\tSports                                            \toptics                                            \t4.53\t6033.35\t2.45986931362007665\nAAAAAAAACODDAAAA\tNurses should see certainly eyes. Clubs shall go individual procedures. 
New, internal police might read too international children; healthy, sufficient years break well only new agent\tSports                                            \toptics                                            \t8.75\t9654.45\t3.93623530789351671\nAAAAAAAADIDCAAAA\tIdentical solicitors must maintain sources. Factors take already unusual minutes. Just various sales sell agricultural, long states. \tSports                                            \toptics                                            \t3.77\t1573.11\t0.64137585519634677\nAAAAAAAADJDEAAAA\tNew hotels join increases. Agencies might not prov\tSports                                            \toptics                                            \t40.19\t2052.76\t0.83693492541071686\nAAAAAAAAEAFCAAAA\tAware, single times would ring to the men. Again double months cover that. Accurate politicians send so social hotels. Other, urban feelings upset just wild eyebrows. True, magnificent p\tSports                                            \toptics                                            \t3.24\t642.52\t0.26196312685111450\nAAAAAAAAENJAAAAA\tOther, international colours s\tSports                                            \toptics                                            \t3.14\t11101.71\t4.52630060541973219\nAAAAAAAAGFADAAAA\tQuick artists must hope tough teachers. Social conflicts find rapidly from a shareholders; other tools\tSports                                            \toptics                                            \t3.81\t10100.29\t4.11800963472427822\nAAAAAAAAGFECAAAA\tNew, able officers may believe often. Losses weep fast excellent, old hours. Able, only regulations shall not let by a countries. Dreams back a little. Sophisticated, \tSports                                            \toptics                                            \t8.41\t1446.65\t0.58981659319424265\nAAAAAAAAGHPBAAAA\tAcute, serious forms change just premises. Above causal buildings may pay so open, traditional consequen\tSports                                            \toptics                                            \t4.49\t7490.92\t3.05413812206865251\nAAAAAAAAGMKDAAAA\tAgo sexual courts may attract. Important, alone observations expect. New, available ways represent years. Excell\tSports                                            \toptics                                            \t8.59\t3179.49\t1.29631628928570322\nAAAAAAAAHKEEAAAA\tBombs shall not help. Angles pull sometimes. Measures train still african pictures. Teachers wear by the motives. Attractive months shall give \tSports                                            \toptics                                            \t0.92\tNULL\tNULL\nAAAAAAAAIGKBAAAA\tOther, different problems spread importantly only likely commitment\tSports                                            \toptics                                            \t3.10\t8596.18\t3.50476590888223467\nAAAAAAAAIGNCAAAA\tPossible opponents can inform also foreign, new heads. Losses face most qualifications. High difficulties will not walk results. Direct, ou\tSports                                            \toptics                                            \t0.27\t149.24\t0.06084694180922046\nAAAAAAAAIIPDAAAA\tDrugs hold years. Cells might reconsider now. Wrong players meet too rapid, integrated parents. 
Complete, social women used to includ\tSports                                            \toptics                                            \t4.94\t13154.62\t5.36329668763339318\nAAAAAAAAIJHCAAAA\tHolidays will find soon so international expectations; furious children would not talk in order reasons; there current stones shall give as firms. Central drugs ought to love european, following \tSports                                            \toptics                                            \t9.08\t13906.80\t5.66996951455686841\nAAAAAAAAIJJCAAAA\tEuropean nights accompany however expensi\tSports                                            \toptics                                            \t1.37\t3255.97\t1.32749810454682075\nAAAAAAAAILPDAAAA\tEarnings used to connect of course. Only big branches show into the men. Tiny trousers mediate. Highest proposed m\tSports                                            \toptics                                            \t8.81\t3903.78\t1.59161802798176516\nAAAAAAAAKBPAAAAA\tWild, other services change less at a hours. Inherently southern days would win almost remarkable, separate firms; strong, professional children might damage other fea\tSports                                            \toptics                                            \t1.25\t10597.58\t4.32076074496487887\nAAAAAAAAKDJDAAAA\tIndustrial, sexual minutes must cure crowds. \tSports                                            \toptics                                            \t3.33\t503.37\t0.20522999931993635\nAAAAAAAAKHPAAAAA\tSad recordings will borrow most long teachers; then bold shares show markets. Common, dark skills watch really to a le\tSports                                            \toptics                                            \t8.63\t838.35\t0.34180537165478404\nAAAAAAAAKKJAAAAA\tNational, little grounds must not hate broadly. Teachers define abroad normally tall researchers. Cultures handle centres. Major addresses used to look \tSports                                            \toptics                                            \t1.61\t12110.40\t4.93755564249787867\nAAAAAAAAKLKDAAAA\tExcellent, difficult relations attempt. Boots dismantle really social sheets. Literary sp\tSports                                            \toptics                                            \t1.67\t2628.08\t1.07149980454285779\nAAAAAAAAKLPBAAAA\tObvious clubs should finance at leas\tSports                                            \toptics                                            \t5.51\t1283.02\t0.52310267542258128\nAAAAAAAAMAICAAAA\tAlleged books ought to go altogether different databases; artists will listen years. Forward cold others check effectively. Quite numerous d\tSports                                            \toptics                                            \t5.42\t3201.52\t1.30529818507809887\nAAAAAAAAMDGBAAAA\tTeams judge conscious shareholders. Else local areas imagine ea\tSports                                            \toptics                                            \t2.39\t6080.10\t2.47892985053766615\nAAAAAAAAMFPAAAAA\tTall students should encompass much true women. Rough birds ought to protect as possible families. 
Political, dead proceedings \tSports                                            \toptics                                            \t1.06\t5878.74\t2.39683295826545608\nAAAAAAAAMMJCAAAA\tNatural, political manufacturers must not pr\tSports                                            \toptics                                            \t2.60\t1879.45\t0.76627435528906048\nAAAAAAAANFCCAAAA\tPhysical, nationa\tSports                                            \toptics                                            \t52.14\t5315.52\t2.16720139457080890\nAAAAAAAANFDEAAAA\tRules share briefly ago specific subsidies. Maybe new subjects should scor\tSports                                            \toptics                                            \t1.12\tNULL\tNULL\nAAAAAAAANHEEAAAA\tExchanges see with a costs. Possible controls achieve yet high similar machines. Rights would not sum suit\tSports                                            \toptics                                            \t4.85\t337.31\t0.13752534134057995\nAAAAAAAANIBDAAAA\tLegal, local prices ask central instruments. Structures cover for a parents. International tourists should \tSports                                            \toptics                                            \t1.84\t3778.91\t1.54070702809086890\nAAAAAAAANJMDAAAA\tWings can go yellow, expected eyes.\tSports                                            \toptics                                            \t8.93\t5543.20\t2.26002926719961695\nAAAAAAAANPBCAAAA\tHot grounds shall pass. Impressive methods could change very basic voices. Concrete, desirable centres pay again in a ingredients. Positio\tSports                                            \toptics                                            \t1.04\t2610.25\t1.06423029923289799\nAAAAAAAAOBOCAAAA\tSmall aspects can allow obvious, redundant colours. Past, sound individuals give both; soft, religious months improve; customers use once for a fore\tSports                                            \toptics                                            \t0.82\t1475.16\t0.60144046287382504\nAAAAAAAAOJFAAAAA\tInjuries answer so good issues. Aside aware definitions m\tSports                                            \toptics                                            \t1.71\t6407.03\t2.61222314111451179\nAAAAAAAAOMIAAAAA\tScenes should not learn. Magistrates produce somewhat on a businesses; extremely national values see everywhere. Northern engines shall not aim; rom\tSports                                            \toptics                                            \t1.88\t6498.82\t2.64964702739612762\nAAAAAAAAONGCAAAA\tColonies give. Even formal payments may follow comparative, frequent years. Perhaps residential messages face times. Late houses talk then conditions. Officials may includ\tSports                                            \toptics                                            \t76.62\t15211.44\t6.20188692384379802\nAAAAAAAAPBPDAAAA\tGreat structures should not survive even here various areas. Cultural results choose likely, female hours. Gold feelings ou\tSports                                            \toptics                                            \t9.72\t3879.70\t1.58180032254913297\nAAAAAAAAPKMDAAAA\tSocial cases need. Inc, right products can know states. Whole, economic years should run relatively new notes. Markets can stop just keen words. Now common services abuse only new, narrow feelings. 
Ye\tSports                                            \toptics                                            \t0.97\t8141.82\t3.31951787564424615\nAAAAAAAAAAEAAAAA\tOnly economic shares last too white patients. Ever environmental markets might come slightly w\tSports                                            \toutdoor                                           \t1.07\t1920.21\t0.69563739953531432\nAAAAAAAAADPCAAAA\tStrict results wonder indeed ago possible factors; wrong tables survive for example known differences. Featur\tSports                                            \toutdoor                                           \t3.18\t7506.80\t2.71949986242738947\nAAAAAAAAAHADAAAA\tTotal, happy arrangements control indeed. Particularly internatio\tSports                                            \toutdoor                                           \t4.20\t5584.92\t2.02325746945009538\nAAAAAAAAAJMBAAAA\tEasy, local stages may not get elected, alone pages; clean mem\tSports                                            \toutdoor                                           \t1.93\t11116.50\t4.02719137590905246\nAAAAAAAAALFEAAAA\tPublic questions call under way far essential taxes; \tSports                                            \toutdoor                                           \t1.23\t9780.48\t3.54318937689479327\nAAAAAAAAALGAAAAA\tPreliminary, central jobs would attend unhappily personal members; as blue duties must sound remaining, slow voices. Bad years can seem short drugs. Major problems fit more middle countries. S\tSports                                            \toutdoor                                           \t3.62\t276.60\t0.10020430302491287\nAAAAAAAAAOOBAAAA\tHouses decide quite. Elements cannot assume simply; simple, cruel days could know. \tSports                                            \toutdoor                                           \t7.17\tNULL\tNULL\nAAAAAAAABAKCAAAA\tPrinciples take hardly perhaps financial women. Men revive so in a classes. Only domestic miles perform relations. Urgent, male developers relax major po\tSports                                            \toutdoor                                           \t2.50\t7845.25\t2.84211065909688245\nAAAAAAAABBCDAAAA\tCosts use again successfully coming weeks. Processes can stress less heavy, oral issues. Personally cheap officials shall go current events. Natural parties imagine powerfully without the we\tSports                                            \toutdoor                                           \t4.07\t3610.83\t1.30810088030168523\nAAAAAAAABIDEAAAA\tAgo natural taxes could protect rats. More local days shall tend closely. Proteins may intervene very perfect men. Procedures make expens\tSports                                            \toutdoor                                           \t8.79\t12330.06\t4.46682960432160944\nAAAAAAAABIKAAAAA\tEuropean\tSports                                            \toutdoor                                           \t29.44\t11343.15\t4.10930021640289375\nAAAAAAAABOEAAAAA\tNumbers choose special bodies. Main pictures offset like a changes; beautiful, large elections must suspend. Electronic p\tSports                                            \toutdoor                                           \t5.79\t6902.40\t2.50054295444381268\nAAAAAAAACBKAAAAA\tYet green experiments think wonderful minutes. Scottish years may remove twice parental features. Good boundaries look please. 
French, e\tSports                                            \toutdoor                                           \t8.75\t3992.78\t1.44647048818442374\nAAAAAAAACFMAAAAA\tGood products may say pp.. Substantial, front flats become actually. Bills tr\tSports                                            \toutdoor                                           \t9.06\t3258.39\t1.18042190503740363\nAAAAAAAADCMCAAAA\tModern personnel would keep \tSports                                            \toutdoor                                           \t0.48\t6309.82\t2.28586809585197296\nAAAAAAAADFGCAAAA\tInitial, real signals keep perfect, free sectors; just funny deposits can understand sufficiently. Entire relations shall not relate; poor views must reach probably. Years \tSports                                            \toutdoor                                           \t2.66\t17724.56\t6.42110333052512525\nAAAAAAAADPBDAAAA\tUnacceptable events must not persuade at least but for a companies; horses would try also crude skills. Turkish, new animals go further scottish lands. European elements believe \tSports                                            \toutdoor                                           \t9.19\t702.52\t0.25450298973630437\nAAAAAAAAEDGAAAAA\tEyes should jump rapidly closer explicit things. Green, radical children could ensure middle consumers. Likely minutes think very pa\tSports                                            \toutdoor                                           \t2.37\t8733.77\t3.16399615195189179\nAAAAAAAAEDNCAAAA\tSo competent candidates would enter suddenly almost cold situations; eyebrows could read enough rational sales. Impossible \tSports                                            \toutdoor                                           \t0.33\t2072.27\t0.75072440719246635\nAAAAAAAAEHHCAAAA\tHowever subsequent steps share terribly existing communications; less great responsibilities speed at all long-term mountains. Of \tSports                                            \toutdoor                                           \t4.39\t3486.57\t1.26308502096012459\nAAAAAAAAEIPBAAAA\tIndustries give much proposals. Possible, strong goals ought to live most total criteria. The\tSports                                            \toutdoor                                           \t96.84\t5462.95\t1.97907121189424352\nAAAAAAAAEJIBAAAA\tOnly single galleries discover in the countries. Clean front products ought to shoot even. Ready, educational questions ought to sense shortly tests. Sciences stop. Upright variou\tSports                                            \toutdoor                                           \t1.53\t1332.46\t0.48271231239542806\nAAAAAAAAELICAAAA\tEconomic elements used to hear as \tSports                                            \toutdoor                                           \t0.40\t396.48\t0.14363341309948465\nAAAAAAAAEMBCAAAA\tSocial, joint functions should suit. Best absolute goods might not lose still western wonderful hundreds. Inches feel certain years. 
Diverse lives put breasts; very good police shall \tSports                                            \toutdoor                                           \t5.91\t1973.74\t0.71502979411565989\nAAAAAAAAEOIAAAAA\tTrees work\tSports                                            \toutdoor                                           \t3.30\t8407.66\t3.04585578586565052\nAAAAAAAAFHKAAAAA\tSteps cannot stay only able transaction\tSports                                            \toutdoor                                           \t6.89\t702.30\t0.25442329000143278\nAAAAAAAAGLMAAAAA\tStars divorce there s\tSports                                            \toutdoor                                           \t2.51\t7314.38\t2.64979157613652275\nAAAAAAAAGMCAAAAA\tOriginal women shall know here necessarily national goods. Accounts will make as. Independent members will find a little dreams. Short jobs assist widely new moments. Ago passive represen\tSports                                            \toutdoor                                           \t9.83\t5957.43\t2.15820723416379853\nAAAAAAAAGNEDAAAA\tDistinctive things used to pick today symbolic pictures. Helpful lips know still. Concerned theories must accommodate very in the ph\tSports                                            \toutdoor                                           \t27.94\t9643.98\t3.49373931412219527\nAAAAAAAAHIEAAAAA\tEven short boards can expel anywhere secure charming details. Specia\tSports                                            \toutdoor                                           \t6.91\t8327.04\t3.01664945575043550\nAAAAAAAAIDAAAAAA\tIdeas form on the needs. Firstly rough operations might begin worldwide obvious activities. Twins\tSports                                            \toutdoor                                           \t4.30\t2362.14\t0.85573605331622446\nAAAAAAAAIDADAAAA\tCreative teachers may close concerned, foreign parts. Alone desirable fires put pupils; areas begin behind a countries. Kindly able rates lead employers. Songs point thoroughly; large, acute others sa\tSports                                            \toutdoor                                           \t2.27\t10905.96\t3.95091872963694416\nAAAAAAAAIMEBAAAA\tObviously base children must seem most for a years. Just available\tSports                                            \toutdoor                                           \t5.16\t5010.90\t1.81530637030924041\nAAAAAAAAIMNCAAAA\tAlways small authorities make after a nations; forms will retrieve now. Financial, giant words render american, sensitive activities. Written eggs might not grant now really existing entries; grounds\tSports                                            \toutdoor                                           \t6.44\t4934.08\t1.78747667197817097\nAAAAAAAAJNIBAAAA\tApparently realistic minutes see. Ful\tSports                                            \toutdoor                                           \t2.79\t3360.22\t1.21731201413728388\nAAAAAAAAJPEAAAAA\tLess social teeth play instead as social children. Advances mean very now slow bases. Small fit managers must think about sites; full, civil weap\tSports                                            \toutdoor                                           \t96.73\t8555.01\t3.09923649465350631\nAAAAAAAAKFACAAAA\tMoreover overall miles say. Leaves may order faintly sure trees. Political, certain drinks protect to a parents. New minutes remember satisfied, exciting feet. 
Cri\tSports                                            \toutdoor                                           \t5.71\t3006.51\t1.08917295403987994\nAAAAAAAAKHGDAAAA\tAlone healthy sales might meet far other roots. French groups look up to a workers. Fully average miners would walk inadequate considerations. Small, sure goods may admire more app\tSports                                            \toutdoor                                           \t0.48\t1427.56\t0.51716433415128205\nAAAAAAAAKJBCAAAA\tTrue champions get all the same police. Especially clear issues move further great homes. Better environmental sessions burn. Bonds shall test already elderly areas. Imperial, close schools press\tSports                                            \toutdoor                                           \t1.71\t724.38\t0.26242224521036292\nAAAAAAAAKMNAAAAA\tPublic, great addresses must prefer thick maybe dangerous problems. Public pages may shoot now injuries. Flat groups know rather special responsibilities; nuclear months can see dou\tSports                                            \toutdoor                                           \t9.74\t6478.02\t2.34680216587652229\nAAAAAAAALEDEAAAA\tQuite significant levels move chiefly dirty, actual beliefs. Away significant views bury. Practical proceedings build a bit. Funds think about prime s\tSports                                            \toutdoor                                           \t9.44\t3562.95\t1.29075531982145086\nAAAAAAAAMAHBAAAA\tIndependent, different attitudes include greatly other, bottom waters. Twin others should exert. Extraordinary, bottom tables could go only results. Good, early pupils shall say per\tSports                                            \toutdoor                                           \t98.21\t5097.92\t1.84683123816617431\nAAAAAAAAMFEEAAAA\tTheories must not\tSports                                            \toutdoor                                           \t0.92\t453.25\t0.16419956741157541\nAAAAAAAAMFKCAAAA\tGreat, possible children used to\tSports                                            \toutdoor                                           \t4.00\t8014.65\t2.90347945494800407\nAAAAAAAAMJBEAAAA\tTruly growing visitors shall not receive open, personal times. Large societies\tSports                                            \toutdoor                                           \t12.35\t2130.34\t0.77176151448334375\nAAAAAAAAMNBAAAAA\tSo\tSports                                            \toutdoor                                           \t2.12\t6574.51\t2.38175774504815585\nAAAAAAAAMNFEAAAA\tVery major companies would not remedy ever future, clear movies. Famous, equal fees know open, active rights. Original hours apply so. Social, technical rates could \tSports                                            \toutdoor                                           \t3.18\t1551.09\t0.56191573528167788\nAAAAAAAAMOJDAAAA\tSocial thousands choose especially blue claims. Social, right professionals can go tons. General projects must ma\tSports                                            \toutdoor                                           \t0.64\t1598.82\t0.57920695503359072\nAAAAAAAAOBJCAAAA\tProminent, regional tonnes ought to replace extremely. Women could make very young, equal hours. Q\tSports                                            \toutdoor                                           \t4.73\tNULL\tNULL\nAAAAAAAAOELDAAAA\tMost whole councils arise already so social customers. 
More sc\tSports                                            \toutdoor                                           \t2.11\t1583.53\t0.57366782346001546\nAAAAAAAAOGCAAAAA\tVarious pockets can get. Areas conduct photographs. Ever \tSports                                            \toutdoor                                           \t1.85\t1513.96\t0.54846459366448694\nAAAAAAAAOHACAAAA\tScientific risks would use. Quiet minutes imagine times; arms cut inner appeals. Areas happen straight in a changes. Fears kick very currently silent \tSports                                            \toutdoor                                           \t4.22\t474.41\t0.17186523282013346\nAAAAAAAAOKHAAAAA\tClothes realise almost necessary females. Foreign, cultural others may give bad ya\tSports                                            \toutdoor                                           \t7.21\t4335.56\t1.57064992054479841\nAAAAAAAAOKIDAAAA\tHeavy years could come much through a genes. Dealers come so sincerely educational characters. Studies must handle\tSports                                            \toutdoor                                           \t2.12\t7347.30\t2.66171755464548924\nAAAAAAAAOKOCAAAA\tVarious, personal benefits must not remember at le\tSports                                            \toutdoor                                           \t0.34\t6983.49\t2.52991955217443519\nAAAAAAAAONOAAAAA\tLosses try a little cho\tSports                                            \toutdoor                                           \t4.86\t1698.82\t0.61543410724794823\nAAAAAAAAPKLBAAAA\tIndustr\tSports                                            \toutdoor                                           \t8.35\t1902.72\t0.68930127061302319\nAAAAAAAAPOPCAAAA\tNearly cultural sheets might decide to a years. Loudly new marks create lives. Local, new arrangements must not face b\tSports                                            \toutdoor                                           \t1.39\t431.65\t0.15637450253327419\nAAAAAAAAAGBBAAAA\tAlso religious bits might hear so extensive western talks. Sometimes complete settings mean also minutes. Other, available theories admit both just old years. Considerable seconds will prepare che\tSports                                            \tpools                                             \t0.62\t10914.03\t4.26659608077049963\nAAAAAAAAANEDAAAA\tOther sports take prime tables; sources think in a priests. Fine, key eyes keep always important branches. Still local effects shall get much; black, final metho\tSports                                            \tpools                                             \t2.25\t1716.96\t0.67120713492996785\nAAAAAAAAAOJAAAAA\tFactors would impose that is free, liable thoughts; significant wives buy useful sports; russians make nearly outstanding animals. Problems write. Finally per\tSports                                            \tpools                                             \t2.04\t10920.36\t4.26907065278388765\nAAAAAAAAAPEDAAAA\tPopular systems associate evenly public rights. Unlike mothers experiment around languages. Chea\tSports                                            \tpools                                             \t8.52\t3232.70\t1.26375180848016674\nAAAAAAAABDBCAAAA\tSubsequent feet can accept regardless. Individual, following arms hold prime officials. Assistant acids might not get however necessary times. Sometimes new times shall not take about. 
Small\tSports                                            \tpools                                             \t1.90\t9375.14\t3.66500143216343934\nAAAAAAAABNOAAAAA\tBonds will set ever into the nations. Distinguished, philosophical employees may not include. General, existing tiles must continue only quiet missiles. Small ve\tSports                                            \tpools                                             \t12.34\t9502.98\t3.71497762271502301\nAAAAAAAACAIDAAAA\tWestern products become grea\tSports                                            \tpools                                             \t8.19\t12699.99\t4.96477722342934165\nAAAAAAAACGOBAAAA\tVery old circumstances explore fairly upon a lines. Crucial, active looks mean alone bloody recordings; poor bacteria could not transfer both at a properties. States could not understand really at a \tSports                                            \tpools                                             \t3.35\t2713.46\t1.06076653640566500\nAAAAAAAACIOCAAAA\tYears ought to know then. Associated, simple activities would not indicate now for a brothers. Workers get organizations. S\tSports                                            \tpools                                             \t20.43\t4211.26\t1.64629796794635660\nAAAAAAAACJCBAAAA\tSupreme injuries could think conditions. Basic, eventual c\tSports                                            \tpools                                             \t9.13\t3177.04\t1.24199277557887491\nAAAAAAAADHCBAAAA\tAble systems merge from a areas. Most chief efforts must find never for the time being economic directors. Activities sit there. Available polic\tSports                                            \tpools                                             \t3.10\t4811.17\t1.88081937340474643\nAAAAAAAAECEBAAAA\tCarers get m\tSports                                            \tpools                                             \t5.77\t4684.53\t1.83131229603105623\nAAAAAAAAEEFBAAAA\tPrivileges cut perhaps reasons. Ideas finish times. Women envy general programmes. Hands shall unveil never to a facilities; official proposals conform. Scot\tSports                                            \tpools                                             \t7.52\t8558.76\t3.34585591868955110\nAAAAAAAAEIJBAAAA\tCentral, clear awards announce. Single, very proposals help dry maps. New questions\tSports                                            \tpools                                             \t2.90\t2934.22\t1.14706772403213253\nAAAAAAAAFBDEAAAA\tAble troubles dust into the styles. Independent feet kill wounds. Fundamental months should exploit arms. Massive years read only modern courses; twin forms shall become products. Even h\tSports                                            \tpools                                             \t6.81\t6802.61\t2.65932832922487921\nAAAAAAAAFICBAAAA\tFar good grounds change clearly rocks. Growing,\tSports                                            \tpools                                             \t1.99\t5753.89\t2.24935468595785151\nAAAAAAAAFPBBAAAA\tSecret, familiar questions ought to influence historical values. Central, net investors can hope. So chief arrangements shoul\tSports                                            \tpools                                             \t6.13\t4628.51\t1.80941252917639637\nAAAAAAAAGCFEAAAA\tFine, high letters see now suddenly prime forces. Things used to know temporary men. 
Late, special methods provide fr\tSports                                            \tpools                                             \t2.85\t2565.78\t1.00303434131290940\nAAAAAAAAGCPAAAAA\tDirectors could involve. No longer local patients see waste lovers. Only direct aims canno\tSports                                            \tpools                                             \t60.43\t1100.10\t0.43005950583383284\nAAAAAAAAGEKDAAAA\tSimilarly direct changes can alienate men; ways surrender forms. Players must develop deep. Social, serious thousands walk. Thanks will not say organisations. Natur\tSports                                            \tpools                                             \t3.39\t3166.29\t1.23779030336024597\nAAAAAAAAGJEBAAAA\tSimple, environmental rights ought to detail thick disabled days; also old drinks move to a conditions. \tSports                                            \tpools                                             \t8.46\t825.24\t0.32260913243733498\nAAAAAAAAGNDDAAAA\tPrevious, significant flats give all formally co\tSports                                            \tpools                                             \t2.82\t6467.74\t2.52841838765722572\nAAAAAAAAGNECAAAA\tDangerous, other ladies may know neatly. Effortlessly growing services might encourage in the citizens. Banks use secondly other, similar responses. Indirect branches shall not buy i\tSports                                            \tpools                                             \t4.74\t1246.28\t0.48720530945422161\nAAAAAAAAGOJDAAAA\tLiterary, sensitive pages could not know now; very public program\tSports                                            \tpools                                             \t3.36\t2399.19\t0.93790970439184930\nAAAAAAAAGPKAAAAA\tChristian, red laboratories prevent; shoes allow most to a positions. Now religious passengers will not know always in a elections. Southern ages abandon northern terms. Thoughts go as\tSports                                            \tpools                                             \t2.22\t6752.13\t2.63959430154149417\nAAAAAAAAHNMCAAAA\tThings used to reappear. Good powers lead. Rare, traditional months may pay too. Shows tend anywhere extra pp.; canadian, proper questions can investigate only small, certain countrie\tSports                                            \tpools                                             \t4.95\t478.95\t0.18723479712672870\nAAAAAAAAIEGBAAAA\tLike records start clear, likely un\tSports                                            \tpools                                             \t0.52\t127.98\t0.05003092042233790\nAAAAAAAAIHIAAAAA\tProblems might introduce therefore now public details. Early future children shall annoy ever sharp services; civil lines must fly. Finally other serv\tSports                                            \tpools                                             \t4.38\t3165.54\t1.23749710762406255\nAAAAAAAAJNFEAAAA\tExclusive, different friends find for the features. 
Procedures comprehend totally ey\tSports                                            \tpools                                             \t3.90\t7853.37\t3.07009946489432581\nAAAAAAAAKGFCAAAA\tDirect, different traders woul\tSports                                            \tpools                                             \t4.53\t3602.83\t1.40844585892492317\nAAAAAAAAKIGBAAAA\tSouthern hours see\tSports                                            \tpools                                             \t7.73\t2352.82\t0.91978238934274937\nAAAAAAAAKJEAAAAA\tUnable centuries may think away individuals. True, additional feet appear generally recent, pri\tSports                                            \tpools                                             \t3.10\t741.45\t0.28985330479092388\nAAAAAAAAKOABAAAA\tBasic levels look early, video-taped rights. Employees might not prevail later. Causal, permanent arms could not know here public vessels\tSports                                            \tpools                                             \t13.28\t4827.92\t1.88736741151284270\nAAAAAAAALGOAAAAA\tThus aware parties would conduct either at the poems. Things plan. Instead old organizations should show rather necessary, b\tSports                                            \tpools                                             \t77.38\t4657.72\t1.82083152578161976\nAAAAAAAALMEBAAAA\tThoughtfully fine \tSports                                            \tpools                                             \t4.43\t6849.91\t2.67781920698684657\nAAAAAAAAMANAAAAA\tTypes can scratch like a \tSports                                            \tpools                                             \t9.69\t3733.27\t1.45943846136194267\nAAAAAAAAMGHDAAAA\tOnly sexual functions would avoid special pati\tSports                                            \tpools                                             \t8.64\t4120.56\t1.61084083025057563\nAAAAAAAAMJACAAAA\tStill male versions will get in a colonies. Wide wages would com\tSports                                            \tpools                                             \t1.46\t5664.01\t2.21421810893363108\nAAAAAAAAMLDDAAAA\tThen available police rememb\tSports                                            \tpools                                             \t0.40\t1103.32\t0.43131829286118030\nAAAAAAAAMLMAAAAA\tPressure\tSports                                            \tpools                                             \t5.42\t3879.88\t1.51675236387107660\nAAAAAAAAMPDCAAAA\tConsumers remind related, slight customers. Large purposes like with a systems; types must go programmes. Main followers shall reduce al\tSports                                            \tpools                                             \t15.70\t1464.58\t0.57254481506600755\nAAAAAAAANHHBAAAA\tFinal holes agree really probably clear children. So good feet must imply birds. Newly british forces ought to raise nevertheless supreme, fine problems. Necessarily good units may push only \tSports                                            \tpools                                             \t2.20\t1319.87\t0.51597367508853827\nAAAAAAAANNFAAAAA\tMen make only. 
Flat, distant depths would assert local,\tSports                                            \tpools                                             \t7.24\t10909.61\t4.26486818056525871\nAAAAAAAAOCGAAAAA\tApparently other offenders should approach\tSports                                            \tpools                                             \t0.36\t15958.64\t6.23867360438145453\nAAAAAAAAODLBAAAA\tWorkers relieve fast quite female photographs. Other, automatic shares want away right games. \tSports                                            \tpools                                             \t1.82\t3069.94\t1.20012442445188328\nAAAAAAAAOHDCAAAA\tHere ready critics stay services. Excellent years ought to \tSports                                            \tpools                                             \t55.17\t2208.60\t0.86340280391291993\nAAAAAAAAOHOAAAAA\tNever future depths c\tSports                                            \tpools                                             \t23.19\t4555.50\t1.78087090157806155\nAAAAAAAAOLIBAAAA\tReal ships suspend for instance worth the arms; ago econo\tSports                                            \tpools                                             \t3.46\t38.42\t0.01501944024555573\nAAAAAAAAOODDAAAA\tFamous, busy shoes will not secure. Dark, extraordinary thousands might not look then. Numbers ought to e\tSports                                            \tpools                                             \t6.47\t7750.63\t3.02993555831368042\nAAAAAAAAPJDBAAAA\tMassive, military measures must get standards. Services make as well fine \tSports                                            \tpools                                             \t0.51\t10656.29\t4.16583838871194852\nAAAAAAAAPLCDAAAA\tCritics shall not print still black parents. Multiple, accessible responses exclude against a areas. Mo\tSports                                            \tpools                                             \t6.14\t4995.43\t1.95285170187028778\nAAAAAAAAPLIDAAAA\tForces eliminate away. New, large characteristics should reconsider right, used firms. Peculiar principles establish degrees. More growing arts may not say about. Actual animals move here\tSports                                            \tpools                                             \t2.65\t1461.99\t0.57153231245705415\nAAAAAAAAPOBBAAAA\tSenior disputes can bring tonight controversial houses. Heavy, real examples should not offer nearly free effects. Worlds will not add. Agricultural, rare defendants draw maybe possibl\tSports                                            \tpools                                             \t3.45\t7092.42\t2.77262307096263314\nAAAAAAAAAFCEAAAA\tFree plans ca\tSports                                            \tsailing                                           \t0.98\t6984.42\t2.34770798957927730\nAAAAAAAAAOFBAAAA\tSpecial thousands take so reforms. Finally reliable acids used to go pale, small days; great, foreign judges show vice versa fair, true arrangements\tSports                                            \tsailing                                           \t0.90\t11949.72\t4.01671908579886112\nAAAAAAAABAFEAAAA\tReferences should make private women. Additional, northern values ar\tSports                                            \tsailing                                           \t0.63\t14040.42\t4.71947652218060722\nAAAAAAAABFJBAAAA\tMore critical photographs balance just now serious values. 
Scottish, practical views suppl\tSports                                            \tsailing                                           \t5.19\t2863.69\t0.96258642703020159\nAAAAAAAABLHDAAAA\tQuite british tonnes could buy successfully surprising processes; local interests used to suggest suddenly other solicitors. Shares return just real, royal companies. Crucial, old groups study. Child\tSports                                            \tsailing                                           \t95.70\t6541.62\t2.19886741329868364\nAAAAAAAACDEBAAAA\tThen other rates may make more at once above councils. Camps could give \tSports                                            \tsailing                                           \t0.61\t8648.26\t2.90698284151853421\nAAAAAAAACEEDAAAA\tScottish, british colleagues enable about a workers. Most good persons could read with a years. Indeed specific damages believe organisations. Immediate facilitie\tSports                                            \tsailing                                           \t1.74\t7276.84\t2.44600058514380124\nAAAAAAAACLDEAAAA\tEasy, natural leaves contin\tSports                                            \tsailing                                           \t1.73\t12739.66\t4.28224556463149924\nAAAAAAAACLFBAAAA\tNew routes cannot test over a others. Armed, brown fans make so in a techniques. Electronic, subsequent professionals used to follow in a matters. Enough substantial standards\tSports                                            \tsailing                                           \t3.07\t5349.42\t1.79812727092803377\nAAAAAAAACNFDAAAA\tOpen times ought to add actually soviet attitudes. Women must imagine of course inner streets. Rightly big records enable yesterday st\tSports                                            \tsailing                                           \t6.43\t2470.80\t0.83052234840580583\nAAAAAAAACOPDAAAA\tExternal, definite securities might know then particular others; always local years must buy right children. British effects used to enable powerful, \tSports                                            \tsailing                                           \t5.35\tNULL\tNULL\nAAAAAAAADFOAAAAA\tImportant, broad investors can see dearly vulnerable troops. Eastern, poor lists need genuine facilities. Figures meet equally children. Other, defensive changes go old, new companies; \tSports                                            \tsailing                                           \t71.43\t17348.99\t5.83160268628332577\nAAAAAAAADOIAAAAA\tYoung, black boys spread too wealthy, major numbers. Profitable drawings might think better purposes. Industr\tSports                                            \tsailing                                           \t3.24\t12918.54\t4.34237339273690257\nAAAAAAAADOODAAAA\tJoint texts take only local, asleep shareholders. Detailed courses fast programmes. Soft students know settlements; just b\tSports                                            \tsailing                                           \t4.70\t1007.64\t0.33870306748730216\nAAAAAAAAEAGEAAAA\tOnly american aspirations will not provide then on a subjec\tSports                                            \tsailing                                           \t9.32\t2524.02\t0.84841145289915090\nAAAAAAAAECFAAAAA\tEqual songs will overcome slight contracts. Large, inner odds go even good women. Feet could not find hard strong models. Bloody machines see dark heads. 
Huge, only men make at the advis\tSports                                            \tsailing                                           \t2.07\t2504.57\t0.84187362722467586\nAAAAAAAAEJPBAAAA\tPrisoners raise both. Medical children sell; activities \tSports                                            \tsailing                                           \t1.25\t8453.80\t2.84161803017362852\nAAAAAAAAELBEAAAA\tBenefits may hold\tSports                                            \tsailing                                           \t8.02\t5687.08\t1.91162661371688936\nAAAAAAAAEMLCAAAA\tEthnic positions must buy years. Other efforts should get; common goods show exactly aware eyes; foreign, unfair fans may carry thus daily, national actions.\tSports                                            \tsailing                                           \t4.63\t4728.78\t1.58950844693799842\nAAAAAAAAENACAAAA\tCriteria shall announce far about other waves. Farmers see possibly; just english managers clean. Head files see both. Comparisons may n\tSports                                            \tsailing                                           \t4.18\t1308.47\t0.43982255836916981\nAAAAAAAAEOEBAAAA\tConnections present high secondary benefits. Levels could compete. Psychological students ought to wonder advanced seats. Of course rich functions would see items; unlikely id\tSports                                            \tsailing                                           \t9.39\t2534.25\t0.85185011390942748\nAAAAAAAAEOJBAAAA\tWell bad areas seem\tSports                                            \tsailing                                           \t0.39\t2413.53\t0.81127189717818704\nAAAAAAAAEPOCAAAA\tBlue, united ministers know childr\tSports                                            \tsailing                                           \t4.68\t530.93\t0.17846415348838210\nAAAAAAAAFGBBAAAA\tDear, continuous problems\tSports                                            \tsailing                                           \t5.90\t8982.06\t3.01918470322237831\nAAAAAAAAGBNDAAAA\tPrices acquire more out of a christians. Efficiently local prices \tSports                                            \tsailing                                           \t2.11\t8027.95\t2.69847494207721747\nAAAAAAAAGOPCAAAA\tGood, capable studies might like bad apparently new years. Modest, payable plants could feed there english women. New, local recommendations last public novels. Candidates must save as orange pla\tSports                                            \tsailing                                           \t4.28\t1617.69\t0.54376222186845881\nAAAAAAAAHHHBAAAA\tMothers may not obtain p\tSports                                            \tsailing                                           \t9.99\t205.80\t0.06917658220087212\nAAAAAAAAICFEAAAA\tBritish figures can tell much white methods. New, french men could think marginally nuclear relatives. Electronic, differ\tSports                                            \tsailing                                           \t7.39\t13316.13\t4.47601730584304808\nAAAAAAAAIEJCAAAA\tReal appearances could join miles. A\tSports                                            \tsailing                                           \t2.44\t1182.16\t0.39736534700963551\nAAAAAAAAIJIDAAAA\tAt present financial areas used to link very purposes. 
Likely members can retaliate true, blac\tSports                                            \tsailing                                           \t1.69\t7800.18\t2.62191347401165555\nAAAAAAAAIKLCAAAA\tSpecial birds will not answer especially then public walls. Most human areas could require major groups. Particularly diverse children could continue to the readers\tSports                                            \tsailing                                           \t4.71\t7976.59\t2.68121104867664997\nAAAAAAAAIPPAAAAA\tStudents would rise broad obligations. Good, statistical children would not see. Gradually elegant cases can look w\tSports                                            \tsailing                                           \t4.63\t391.82\t0.13170441417855061\nAAAAAAAAJBADAAAA\tReliable stages cannot see similarly. Feelings repeat together significant, available notes. Rich, basic roots provide instinctively before the talks. Parties arrest there other investigations. Bom\tSports                                            \tsailing                                           \t7.89\t7983.29\t2.68346315315063365\nAAAAAAAAJKOBAAAA\tDemands can imagine also purely fresh eyebrows. Busy skills become almost; complete pa\tSports                                            \tsailing                                           \t4.98\t12443.47\t4.18268574013161433\nAAAAAAAAJNPAAAAA\tProper applications stand now very limited arms. Angrily slow boys shall aid too previous old masses. Mechanical contents think through the times. Sequences may not agree. Old, working stren\tSports                                            \tsailing                                           \t0.63\t679.89\t0.22853482250996573\nAAAAAAAAKNHBAAAA\tSuccessful, able hearts cite then contents. Urban rights will use long important, suspicious ideas; police speak for a methods. Plans seek no longer good gardens\tSports                                            \tsailing                                           \t4.39\t8675.35\t2.91608873856334289\nAAAAAAAAKNNBAAAA\tScientific packages make banks. Then important parents must get front, little bact\tSports                                            \tsailing                                           \t4.23\t6135.42\t2.06232937787597103\nAAAAAAAALGNBAAAA\tAlso long ways should not give only now good resources. Previous, economic units s\tSports                                            \tsailing                                           \t4.65\t389.74\t0.13100525338662731\nAAAAAAAALPEBAAAA\tSocial years attend. Bloody wee\tSports                                            \tsailing                                           \t1.94\t3178.08\t1.06826390845941533\nAAAAAAAAMFBCAAAA\tCapital, foreign problems \tSports                                            \tsailing                                           \t3.60\t1277.78\t0.42950657533834004\nAAAAAAAAMLMCAAAA\tOriginal, major nations should come once more now permanent feet. Prizes revise thus with the spots. Aside ordinary studies can learn l\tSports                                            \tsailing                                           \t1.46\t7468.82\t2.51053178169833686\nAAAAAAAAMOMDAAAA\tIndustrial, open sites would throw before a men. Also p\tSports                                            \tsailing                                           \t7.20\t1089.19\t0.36611487642064095\nAAAAAAAANJDDAAAA\tLoose patients used to look at all companies. 
Old, low centres may illustr\tSports                                            \tsailing                                           \t6.35\t7701.71\t2.58881426094401766\nAAAAAAAAOGBEAAAA\tEspecially moral students used to keep guilty, bizarre things. Unknown trends reduce later terms; general mothers can find as right n\tSports                                            \tsailing                                           \t3.35\t12086.74\t4.06277630296680815\nAAAAAAAAOIKCAAAA\tOrigins would come sales. Educational eyes could invite actually stupid, forei\tSports                                            \tsailing                                           \t3.77\t9292.44\t3.12351428331716300\nAAAAAAAAOKFDAAAA\tLegal, secondary sales elect. Big years appeal low with a doubts. Military videos might describe; comparable, long companies would not extend now industrial tools. Even ne\tSports                                            \tsailing                                           \t5.45\t1828.50\t0.61462284039987695\nAAAAAAAAOPACAAAA\tAdditional organisations will adopt usually schemes. Conventional problems should not create attacks. Generally european powers win very human, busy months; fai\tSports                                            \tsailing                                           \t0.87\t6498.29\t2.18430268391693540\nAAAAAAAAOPGCAAAA\tWrong, local indians train excellent, comprehensive holidays. Meals c\tSports                                            \tsailing                                           \t60.65\t1510.40\t0.50769829813506926\nAAAAAAAAOPLAAAAA\tNational shareholders learn. Effective proceedings will develop now other, informal days; new, big waves last americans. Solicitors ought to sue flames; interested conservatives might understand just\tSports                                            \tsailing                                           \t0.24\t5784.43\t1.94434935558887624\nAAAAAAAAPHAAAAAA\tAmbitious exceptions appoint. V\tSports                                            \tsailing                                           \t7.35\t9044.55\t3.04018977912972767\nAAAAAAAAPNIBAAAA\tProceedings mi\tSports                                            \tsailing                                           \t7.11\t4105.60\t1.38003584005782598\nAAAAAAAAABOBAAAA\tAgain standard families change literally. Narrow lips work certainly carefully vast stages. Drugs see also right factors. Financial, western examples ought to let desperately ago sudden\tSports                                            \ttennis                                            \t9.39\t6556.29\t1.81601129267527792\nAAAAAAAAACFDAAAA\tLate global concepts shall understand very quiet, upper heads. Already english buildings make women. Others try. Please minimal agreements conflict largely forthcoming police. \tSports                                            \ttennis                                            \t4.33\t7426.08\t2.05693237186122454\nAAAAAAAAACPDAAAA\tSeriously complete characteristics make forward in a projects. Industries should rise then also new departments. Physical babies encourage under to a workers. Personal, beautiful ministers cont\tSports                                            \ttennis                                            \t0.82\t14172.38\t3.92557408596710262\nAAAAAAAAAJOCAAAA\tWhole, new meetings may last; free plans broaden there mammals. Public, honest areas may risk on a profits. Good, normal generations ought to walk almost over a reductions. 
Otherwise basic s\tSports                                            \ttennis                                            \t4.88\t8723.48\t2.41629613568450044\nAAAAAAAAAKOCAAAA\tEconomic, content activit\tSports                                            \ttennis                                            \t5.07\t16087.57\t4.45605804375706699\nAAAAAAAAAPDCAAAA\tWomen would come fair unaware, current bars. Villages may go then on a neighbours. Early numbers should not change however cr\tSports                                            \ttennis                                            \t2.92\t13912.86\t3.85369025369685708\nAAAAAAAABAPAAAAA\tWomen should leave also annual, marginal techniques; intellectual, appropriate factors could think profits. Neverthe\tSports                                            \ttennis                                            \t8.24\t23633.13\t6.54608489881669218\nAAAAAAAABFHAAAAA\tOf course equal nee\tSports                                            \ttennis                                            \t3.49\t11949.65\t3.30990534944566741\nAAAAAAAABJPAAAAA\tFree representatives can fall much prime, useful banks. Recent, secondary practitioners can talk times; libraries take from now on young prices. Bodies appear only yellow rates. Second\tSports                                            \ttennis                                            \t6.85\t7304.83\t2.02334762054045053\nAAAAAAAABMJDAAAA\tCostly offices collect officially for a debts; readers greet. Women get by a write\tSports                                            \ttennis                                            \t3.22\t2864.47\t0.79342278446035080\nAAAAAAAACKLDAAAA\tRapidly main banks shall not bring extremely decades. For example main clothes might not see less. Certainly co\tSports                                            \ttennis                                            \t3.15\t5004.38\t1.38615140465694887\nAAAAAAAACNIDAAAA\tJust able pounds should join then successful modern pieces. Associated, sorry clubs pay close issues. Resources will e\tSports                                            \ttennis                                            \t7.67\t7567.71\t2.09616213128028617\nAAAAAAAADHGDAAAA\tNecessary times believe probably. Cruel traders know ho\tSports                                            \ttennis                                            \t92.95\t7731.85\t2.14162688247032202\nAAAAAAAADLEBAAAA\tFunny, armed savings go yet thin\tSports                                            \ttennis                                            \t3.97\t3362.82\t0.93145957473422897\nAAAAAAAADPICAAAA\tElected, marvellous advisers may not pass all in a programmes. Directly soviet studies could not stress more than; convenient, public\tSports                                            \ttennis                                            \t4.67\t18.70\t0.00517966886349257\nAAAAAAAAEAGBAAAA\tMen could remove only; economic, clear children raise public, extensive poli\tSports                                            \ttennis                                            \t5.04\t2721.49\t0.75381909172761457\nAAAAAAAAECHBAAAA\tAble, common villages read. Only social grounds remember e\tSports                                            \ttennis                                            \t2.08\t2677.23\t0.74155961879188295\nAAAAAAAAEIEDAAAA\tSuccessful parties see once on a ideas. Scottish, natural men would not examine regulatory, multiple payments. 
Steadily loc\tSports                                            \ttennis                                            \t2.55\t8031.03\t2.22449604453340795\nAAAAAAAAEODCAAAA\tCurrent, \tSports                                            \ttennis                                            \t0.47\t18310.05\t5.07165753336856247\nAAAAAAAAFAODAAAA\tYears may speak to a\tSports                                            \ttennis                                            \t2.02\t3056.11\t0.84650469574375807\nAAAAAAAAFLGDAAAA\tSeparate, comfortable consumers get. Tests work even high, different faces. Hars\tSports                                            \ttennis                                            \t8.09\t11878.41\t3.29017274998923903\nAAAAAAAAGIIBAAAA\tMuch critical possibilities might ensure; hence northern ways may persuade much japanese, running notes. Small, ed\tSports                                            \ttennis                                            \t8.53\t8171.42\t2.26338233927916847\nAAAAAAAAGLPDAAAA\tAs specific ears worry also labour components. Duly proper articles would attend more easy shapes; years wait head convention\tSports                                            \ttennis                                            \t0.85\t11273.32\t3.12257029904748936\nAAAAAAAAGMEBAAAA\tEarly, experimental factors mean usually suitable detectives; just black assets must not store only. So british employers must see elaborate, complete pages. Mental years should t\tSports                                            \ttennis                                            \t88.56\t15092.59\t4.18046088194969605\nAAAAAAAAHNOAAAAA\tSocial, substantial orders would not offset however to a colleagues. Small students give for sure husbands. Subjects shall not make generations; acceptable lights g\tSports                                            \ttennis                                            \t56.30\t5682.58\t1.57400442194147617\nAAAAAAAAIDKCAAAA\tI\tSports                                            \ttennis                                            \t1.04\t4973.48\t1.37759248658839698\nAAAAAAAAIFCBAAAA\tAutomatic amounts may find more in a regulations. Boys can give available, current seasons; here complicated camps may spot even generous open individuals. Channels remain currently protest\tSports                                            \ttennis                                            \t8.43\t3330.22\t0.92242977767808685\nAAAAAAAAIHODAAAA\tPoints used to find cool typical managers. However military horses understand indeed inc periods. Coloured developments could make very roots. \tSports                                            \ttennis                                            \t8.52\t11481.61\t3.18026405453288334\nAAAAAAAAIMKBAAAA\tSides express even new women. Also joint markets should switch authorities. Trees would play always about a skills. Teams deprive future pubs; ways recover national, old days. Rea\tSports                                            \ttennis                                            \t90.25\t3634.02\t1.00657862263685918\nAAAAAAAAKADEAAAA\tSecret children will start in short familie\tSports                                            \ttennis                                            \t38.21\t13612.04\t3.77036683190456646\nAAAAAAAAKEBCAAAA\tOther, general countries keep successfully teachers. Major, traditional relationships could not become in a subjects. 
Constant observers wil\tSports                                            \ttennis                                            \t99.16\t7979.51\t2.21022564133302628\nAAAAAAAAKKMAAAAA\tUpper, industrial years shall opera\tSports                                            \ttennis                                            \t1.58\t369.36\t0.10230815462136981\nAAAAAAAALBGEAAAA\tAfraid, spanish matt\tSports                                            \ttennis                                            \t3.06\t141.37\t0.03915774263272431\nAAAAAAAALBKDAAAA\tLight, social animals resist instead then female societies. Also informal minutes shall not implement. Servants win. Hands will a\tSports                                            \ttennis                                            \t8.30\t3341.21\t0.92547387183903783\nAAAAAAAALGIAAAAA\tModest, educational principles would \tSports                                            \ttennis                                            \t6.42\t18707.39\t5.18171580215038800\nAAAAAAAALHPBAAAA\tFar little eyes can happen pp.. Related margins will suffer low below active children; times feel just similar, nervous birds. Vegetabl\tSports                                            \ttennis                                            \t9.01\t813.78\t0.22540700148304722\nAAAAAAAALKHDAAAA\tThen various shoes date good, bad shops. Here open rats match badly well dual games. No doubt small kids answer much points. Completely free services shall understand. Following patients\tSports                                            \ttennis                                            \t5.46\t1154.69\t0.31983485775327459\nAAAAAAAALODAAAAA\tWidely free parties would find in a problems. Men like parties; straight a\tSports                                            \ttennis                                            \t8.95\t10297.10\t2.85216942536199653\nAAAAAAAAMDACAAAA\tTired rights free. Paintings sell\tSports                                            \ttennis                                            \t8.06\t5429.22\t1.50382683353214583\nAAAAAAAANAAAAAAA\tMeetings improve early women. Even likely variables might want approxi\tSports                                            \ttennis                                            \t2.56\t7342.79\t2.03386207134570068\nAAAAAAAANKODAAAA\tGrowing jews see only grey tactics. Also indian parts ought to provide pretty other, canadian ways. Countries shall correspond really to a doubts. Star sounds ought to mean further at a steps. \tSports                                            \ttennis                                            \t8.04\t4423.03\t1.22512464028307694\nAAAAAAAAOCCBAAAA\tElse single arrangements will not keep approximately from a teachers. Large levels tolerate daily financial, critical others. Properties make a\tSports                                            \ttennis                                            \t0.30\t5475.18\t1.51655718545546767\nAAAAAAAAOEBEAAAA\tEquivalent, important points would not reject foreign, high mountains. Always alive cups mark near the games. Sons will not stay extremely. Unfortunatel\tSports                                            \ttennis                                            \t0.19\t5314.97\t1.47218099568968454\nAAAAAAAAOEDBAAAA\tConfidential companies can write highly; potentially new children mix sympathetically military, economic gains. Various, traditional designers make in a measurements. 
Individuals tell only se\tSports                                            \ttennis                                            \t7.12\t1837.86\t0.50906450360740392\nAAAAAAAAONEBAAAA\tExamples show waves. Currently representative farmers should put like a customers. Both full rights practise with a police. Legal re\tSports                                            \ttennis                                            \t4.24\t735.27\t0.20366070188557120\nAAAAAAAAPCOAAAAA\tPart\tSports                                            \ttennis                                            \t6.53\t4928.46\t1.36512250304644856\nAAAAAAAAPGOCAAAA\tGreat, big arts will not let brilliant pp.. Real, new or\tSports                                            \ttennis                                            \t0.88\t13772.83\t3.81490367450140978\nAAAAAAAAPHPDAAAA\tInc presents cannot break often subjects. Of course capital services would pay. Systems cannot\tSports                                            \ttennis                                            \t9.67\t3395.45\t0.94049768141956387\nAAAAAAAAPJNAAAAA\tParts may refuse primarily old holidays. Scottish, good tests handle however for the households; low measurements will remember into a calls; inc, genuine events used to think again r\tSports                                            \ttennis                                            \t6.88\t733.87\t0.20327291918990865\nAAAAAAAAPMOCAAAA\tLiterary pai\tSports                                            \ttennis                                            \t2.68\t3317.04\t0.91877908058606374\nAAAAAAAAPOHBAAAA\tThemes would not reflect on the jeans. Traditional relations would not force mildly smal\tSports                                            \ttennis                                            \t9.89\t1274.76\t0.35309276365913303\n"
  },
  {
    "path": "spark/src/test/resources/tpcds-query-results/v2_7-spark4_0/q36a.sql.out",
    "content": "-- Automatically generated by CometTPCDSQuerySuite\n\n-- !query schema\nstruct<gross_margin:decimal(38,11),i_category:string,i_class:string,lochierarchy:int,rank_within_parent:int>\n-- !query output\n-0.43310777865\tNULL\tNULL\t2\t1\n-0.44057752675\tHome                                              \tNULL\t1\t1\n-0.43759152110\tMusic                                             \tNULL\t1\t2\n-0.43708103961\tNULL\tNULL\t1\t3\n-0.43616253139\tShoes                                             \tNULL\t1\t4\n-0.43567118609\tChildren                                          \tNULL\t1\t5\n-0.43423932352\tSports                                            \tNULL\t1\t6\n-0.43342977300\tElectronics                                       \tNULL\t1\t7\n-0.43243283121\tWomen                                             \tNULL\t1\t8\n-0.43164166900\tMen                                               \tNULL\t1\t9\n-0.42516187690\tBooks                                             \tNULL\t1\t10\n-0.42448713381\tJewelry                                           \tNULL\t1\t11\n-0.73902664239\tNULL\tshirts                                            \t0\t1\n-0.61125804874\tNULL\tcountry                                           \t0\t2\n-0.53129803597\tNULL\tdresses                                           \t0\t3\n-0.51266635289\tNULL\tathletic                                          \t0\t4\n-0.45290387784\tNULL\tmens                                              \t0\t5\n-0.41288056662\tNULL\taccessories                                       \t0\t6\n-0.40784754677\tNULL\tNULL\t0\t7\n-0.34254844861\tNULL\tbaseball                                          \t0\t8\n-0.32511461676\tNULL\tinfants                                           \t0\t9\n-0.44733955705\tBooks                                             \tcomputers                                         \t0\t1\n-0.44221358113\tBooks                                             \thome repair                                       \t0\t2\n-0.44131129175\tBooks                                             \tromance                                           \t0\t3\n-0.43954111564\tBooks                                             \thistory                                           \t0\t4\n-0.43921337505\tBooks                                             \tmystery                                           \t0\t5\n-0.43904020269\tBooks                                             \tsports                                            \t0\t6\n-0.42821477000\tBooks                                             \ttravel                                            \t0\t7\n-0.42609067296\tBooks                                             \tcooking                                           \t0\t8\n-0.42538995145\tBooks                                             \tfiction                                           \t0\t9\n-0.42446563616\tBooks                                             \tarts                                              \t0\t10\n-0.42424821312\tBooks                                             \tparenting                                         \t0\t11\n-0.41822014479\tBooks                                             \treference                                         \t0\t12\n-0.41350839326\tBooks                                             \tbusiness                                          \t0\t13\n-0.40935208137\tBooks                                             \tscience                                           \t0\t14\n-0.40159380736\tBooks     
                                        \tself-help                                         \t0\t15\n-0.36957884843\tBooks                                             \tentertainments                                    \t0\t16\n-0.44602461557\tChildren                                          \tschool-uniforms                                   \t0\t1\n-0.44141106040\tChildren                                          \ttoddlers                                          \t0\t2\n-0.43479886701\tChildren                                          \tinfants                                           \t0\t3\n-0.41900662972\tChildren                                          \tnewborn                                           \t0\t4\n-0.41526603782\tChildren                                          \tNULL\t0\t5\n-0.45347482219\tElectronics                                       \tpersonal                                          \t0\t1\n-0.44349670350\tElectronics                                       \tstereo                                            \t0\t2\n-0.44262427233\tElectronics                                       \tautomotive                                        \t0\t3\n-0.44115886173\tElectronics                                       \tportable                                          \t0\t4\n-0.43972786652\tElectronics                                       \tmemory                                            \t0\t5\n-0.43889275272\tElectronics                                       \tscanners                                          \t0\t6\n-0.43879181695\tElectronics                                       \tkaroke                                            \t0\t7\n-0.43743655150\tElectronics                                       \tdvd/vcr players                                   \t0\t8\n-0.43737666391\tElectronics                                       \tcameras                                           \t0\t9\n-0.43390499017\tElectronics                                       \twireless                                          \t0\t10\n-0.43163869754\tElectronics                                       \taudio                                             \t0\t11\n-0.42958938670\tElectronics                                       \tcamcorders                                        \t0\t12\n-0.42872845804\tElectronics                                       \tmusical                                           \t0\t13\n-0.42228240153\tElectronics                                       \ttelevisions                                       \t0\t14\n-0.41893847772\tElectronics                                       \tmonitors                                          \t0\t15\n-0.39793878023\tElectronics                                       \tdisk drives                                       \t0\t16\n-0.49051156861\tHome                                              \tNULL\t0\t1\n-0.48431476751\tHome                                              \tblinds/shades                                     \t0\t2\n-0.47545837942\tHome                                              \tbathroom                                          \t0\t3\n-0.45726228921\tHome                                              \trugs                                              \t0\t4\n-0.45540507569\tHome                                              \tfurniture                                         \t0\t5\n-0.45303572267\tHome                                              \tflatware                                          
\t0\t6\n-0.44755542058\tHome                                              \ttables                                            \t0\t7\n-0.44419847781\tHome                                              \twallpaper                                         \t0\t8\n-0.44092345227\tHome                                              \tglassware                                         \t0\t9\n-0.43877591834\tHome                                              \tdecor                                             \t0\t10\n-0.43765482554\tHome                                              \taccent                                            \t0\t11\n-0.43188199219\tHome                                              \tbedding                                           \t0\t12\n-0.43107417904\tHome                                              \tkids                                              \t0\t13\n-0.42474436356\tHome                                              \tlighting                                          \t0\t14\n-0.41783311109\tHome                                              \tcurtains/drapes                                   \t0\t15\n-0.41767111807\tHome                                              \tmattresses                                        \t0\t16\n-0.40562188699\tHome                                              \tpaint                                             \t0\t17\n-0.45165056505\tJewelry                                           \tjewelry boxes                                     \t0\t1\n-0.44372227805\tJewelry                                           \testate                                            \t0\t2\n-0.44251815033\tJewelry                                           \tgold                                              \t0\t3\n-0.43978127754\tJewelry                                           \tconsignment                                       \t0\t4\n-0.43821750044\tJewelry                                           \tcustom                                            \t0\t5\n-0.43439645036\tJewelry                                           \tbracelets                                         \t0\t6\n-0.43208398326\tJewelry                                           \tloose stones                                      \t0\t7\n-0.43060897375\tJewelry                                           \tdiamonds                                          \t0\t8\n-0.42847505749\tJewelry                                           \tcostume                                           \t0\t9\n-0.42667449062\tJewelry                                           \trings                                             \t0\t10\n-0.41987969012\tJewelry                                           \tmens watch                                        \t0\t11\n-0.41624621973\tJewelry                                           \tsemi-precious                                     \t0\t12\n-0.41148949162\tJewelry                                           \twomens watch                                      \t0\t13\n-0.39725668175\tJewelry                                           \tbirdal                                            \t0\t14\n-0.39665274052\tJewelry                                           \tpendants                                          \t0\t15\n-0.38423525233\tJewelry                                           \tearings                                           \t0\t16\n-0.44464388888\tMen                                               \tshirts                                            
\t0\t1\n-0.43719860801\tMen                                               \taccessories                                       \t0\t2\n-0.43164606665\tMen                                               \tsports-apparel                                    \t0\t3\n-0.41530906677\tMen                                               \tpants                                             \t0\t4\n-0.38332708895\tMen                                               \tNULL\t0\t5\n-0.47339698706\tMusic                                             \tNULL\t0\t1\n-0.44193214675\tMusic                                             \trock                                              \t0\t2\n-0.44008174914\tMusic                                             \tcountry                                           \t0\t3\n-0.43863444992\tMusic                                             \tpop                                               \t0\t4\n"
  },
  {
    "path": "spark/src/test/resources/tpch-extended/q1.sql",
    "content": "select\n        sum(o_custkey)\nfrom\n        orders\nwhere\n        o_orderpriority = '1-URGENT'\n        or o_orderpriority = '2-HIGH'\ngroup by\n        o_orderkey\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q1.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<l_returnflag:string,l_linestatus:string,sum_qty:decimal(22,2),sum_base_price:decimal(22,2),sum_disc_price:decimal(36,4),sum_charge:decimal(38,6),avg_qty:decimal(16,6),avg_price:decimal(16,6),avg_disc:decimal(16,6),count_order:bigint>\n-- !query output\nA\tF\t37734107.00\t56586554400.73\t53758257134.8700\t55909065222.827692\t25.522006\t38273.129735\t0.049985\t1478493\nN\tF\t991417.00\t1487504710.38\t1413082168.0541\t1469649223.194375\t25.516472\t38284.467761\t0.050093\t38854\nN\tO\t74476040.00\t111701729697.74\t106118230307.6056\t110367043872.497010\t25.502227\t38249.117989\t0.049997\t2920374\nR\tF\t37719753.00\t56568041380.90\t53741292684.6040\t55889619119.831932\t25.505794\t38250.854626\t0.050009\t1478870\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q10.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<c_custkey:bigint,c_name:string,revenue:decimal(36,4),c_acctbal:decimal(12,2),n_name:string,c_address:string,c_phone:string,c_comment:string>\n-- !query output\n57040\tCustomer#x\t734235.2455\t632.87\tJAPAN\tEioyzjf4pp\t22-895-641-3466\tsits. slyly regular requests sleep alongside of the regular inst\n143347\tCustomer#x\t721002.6948\t2557.47\tEGYPT\t1aReFYv,Kw4\t14-742-935-3718\tggle carefully enticing requests. final deposits use bold, bold pinto beans. ironic, idle re\n60838\tCustomer#x\t679127.3077\t2454.77\tBRAZIL\t64EaJ5vMAHWJlBOxJklpNc2RJiWE\t12-913-494-9813\t need to boost against the slyly regular account\n101998\tCustomer#x\t637029.5667\t3790.89\tUNITED KINGDOM\t01c9CILnNtfOQYmZj\t33-593-865-6378\tress foxes wake slyly after the bold excuses. ironic platelets are furiously carefully bold theodolites\n125341\tCustomer#x\t633508.0860\t4983.51\tGERMANY\tS29ODD6bceU8QSuuEJznkNaK\t17-582-695-5962\tarefully even depths. blithely even excuses sleep furiously. foxes use except the dependencies. ca\n25501\tCustomer#x\t620269.7849\t7725.04\tETHIOPIA\t  W556MXuoiaYCCZamJI,Rn0B4ACUGdkQ8DZ\t15-874-808-6793\the pending instructions wake carefully at the pinto beans. regular, final instructions along the slyly fina\n115831\tCustomer#x\t596423.8672\t5098.10\tFRANCE\trFeBbEEyk dl ne7zV5fDrmiq1oK09wV7pxqCgIc\t16-715-386-3788\tl somas sleep. furiously final deposits wake blithely regular pinto b\n84223\tCustomer#x\t594998.0239\t528.65\tUNITED KINGDOM\tnAVZCs6BaWap rrM27N 2qBnzc5WBauxbA\t33-442-824-8191\t slyly final deposits haggle regular, pending dependencies. pending escapades wake \n54289\tCustomer#x\t585603.3918\t5583.02\tIRAN\tvXCxoCsU0Bad5JQI ,oobkZ\t20-834-292-4707\tely special foxes are quickly finally ironic p\n39922\tCustomer#x\t584878.1134\t7321.11\tGERMANY\tZgy4s50l2GKN4pLDPBU8m342gIw6R\t17-147-757-8036\ty final requests. furiously final foxes cajole blithely special platelets. f\n6226\tCustomer#x\t576783.7606\t2230.09\tUNITED KINGDOM\t8gPu8,NPGkfyQQ0hcIYUGPIBWc,ybP5g,\t33-657-701-3391\tending platelets along the express deposits cajole carefully final \n922\tCustomer#x\t576767.5333\t3869.25\tGERMANY\tAz9RFaut7NkPnc5zSD2PwHgVwr4jRzq\t17-945-916-9648\tluffily fluffy deposits. packages c\n147946\tCustomer#x\t576455.1320\t2030.13\tALGERIA\tiANyZHjqhyy7Ajah0pTrYyhJ\t10-886-956-3143\tithely ironic deposits haggle blithely ironic requests. quickly regu\n115640\tCustomer#x\t569341.1933\t6436.10\tARGENTINA\tVtgfia9qI 7EpHgecU1X\t11-411-543-4901\tost slyly along the patterns; pinto be\n73606\tCustomer#x\t568656.8578\t1785.67\tJAPAN\txuR0Tro5yChDfOCrjkd2ol\t22-437-653-6966\the furiously regular ideas. slowly\n110246\tCustomer#x\t566842.9815\t7763.35\tVIETNAM\t7KzflgX MDOq7sOkI\t31-943-426-9837\tegular deposits serve blithely above the fl\n142549\tCustomer#x\t563537.2368\t5085.99\tINDONESIA\tChqEoK43OysjdHbtKCp6dKqjNyvvi9\t19-955-562-2398\tsleep pending courts. ironic deposits against the carefully unusual platelets cajole carefully express accounts.\n146149\tCustomer#x\t557254.9865\t1791.55\tROMANIA\ts87fvzFQpU\t29-744-164-6487\t of the slyly silent accounts. quickly final accounts across the \n52528\tCustomer#x\t556397.3509\t551.79\tARGENTINA\tNFztyTOR10UOJ\t11-208-192-3205\t deposits hinder. blithely pending asymptotes breach slyly regular re\n23431\tCustomer#x\t554269.5360\t3381.86\tROMANIA\tHgiV0phqhaIa9aydNoIlb\t29-915-458-2654\tnusual, even instructions: furiously stealthy n\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q11.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<ps_partkey:bigint,value:decimal(33,2)>\n-- !query output\n129760\t17538456.86\n166726\t16503353.92\n191287\t16474801.97\n161758\t16101755.54\n34452\t15983844.72\n139035\t15907078.34\n9403\t15451755.62\n154358\t15212937.88\n38823\t15064802.86\n85606\t15053957.15\n33354\t14408297.40\n154747\t14407580.68\n82865\t14235489.78\n76094\t14094247.04\n222\t13937777.74\n121271\t13908336.00\n55221\t13716120.47\n22819\t13666434.28\n76281\t13646853.68\n85298\t13581154.93\n85158\t13554904.00\n139684\t13535538.72\n31034\t13498025.25\n87305\t13482847.04\n10181\t13445148.75\n62323\t13411824.30\n26489\t13377256.38\n96493\t13339057.83\n56548\t13329014.97\n55576\t13306843.35\n159751\t13306614.48\n92406\t13287414.50\n182636\t13223726.74\n199969\t13135288.21\n62865\t13001926.94\n7284\t12945298.19\n197867\t12944510.52\n11562\t12931575.51\n75165\t12916918.12\n97175\t12911283.50\n140840\t12896562.23\n65241\t12890600.46\n166120\t12876927.22\n9035\t12863828.70\n144616\t12853549.30\n176723\t12832309.74\n170884\t12792136.58\n29790\t12723300.33\n95213\t12555483.73\n183873\t12550533.05\n171235\t12476538.30\n21533\t12437821.32\n17290\t12432159.50\n156397\t12260623.50\n122611\t12222812.98\n139155\t12220319.25\n146316\t12215800.61\n171381\t12199734.52\n198633\t12078226.95\n167417\t12046637.62\n59512\t12043468.76\n31688\t12034893.64\n159586\t12001505.84\n8993\t11963814.30\n120302\t11857707.55\n43536\t11779340.52\n9552\t11776909.16\n86223\t11772205.08\n53776\t11758669.65\n131285\t11616953.74\n91628\t11611114.83\n169644\t11567959.72\n182299\t11567462.05\n33107\t11453818.76\n104184\t11436657.44\n67027\t11419127.14\n176869\t11371451.71\n30885\t11369674.79\n54420\t11345076.88\n72240\t11313951.05\n178708\t11294635.17\n81298\t11273686.13\n158324\t11243442.72\n117095\t11242535.24\n176793\t11237733.38\n86091\t11177793.79\n116033\t11145434.36\n129058\t11119112.20\n193714\t11104706.39\n117195\t11077217.96\n49851\t11043701.78\n19791\t11030662.62\n75800\t11012401.62\n161562\t10996371.69\n10119\t10980015.75\n39185\t10970042.56\n47223\t10950022.13\n175594\t10942923.05\n111295\t10893675.61\n155446\t10852764.57\n156391\t10839810.38\n40884\t10837234.19\n141288\t10837130.21\n152388\t10830977.82\n33449\t10830858.72\n149035\t10826130.02\n162620\t10814275.68\n118324\t10791788.10\n38932\t10777541.75\n121294\t10764225.22\n48721\t10762582.49\n63342\t10740132.60\n5614\t10724668.80\n62266\t10711143.10\n100202\t10696675.55\n197741\t10688560.72\n169178\t10648522.80\n5271\t10639392.65\n34499\t10584177.10\n71108\t10569117.56\n137132\t10539880.47\n78451\t10524873.24\n150827\t10503810.48\n107237\t10488030.84\n101727\t10473558.10\n58708\t10466280.44\n89768\t10465477.22\n146493\t10444291.58\n55424\t10444006.48\n16560\t10425574.74\n133114\t10415097.90\n195810\t10413625.20\n76673\t10391977.18\n97305\t10390890.57\n134210\t10387210.02\n188536\t10386529.92\n122255\t10335760.32\n2682\t10312966.10\n43814\t10303086.61\n34767\t10290405.18\n165584\t10273705.89\n2231\t10270415.55\n111259\t10263256.56\n195578\t10239795.82\n21093\t10217531.30\n29856\t10216932.54\n133686\t10213345.76\n87745\t10185509.40\n135153\t10179379.70\n11773\t10167410.84\n76316\t10165151.70\n123076\t10161225.78\n91894\t10130462.19\n39741\t10128387.52\n111753\t10119780.98\n142729\t10104748.89\n116775\t10097750.42\n102589\t10034784.36\n186268\t10012181.57\n44545\t10000286.48\n23307\t9966577.50\n124281\t9930018.90\n69604\t9925730.64\n21971\t9908982.03\n58148\t9895894.40\n16532\t9886529.90\n159180\t9883744.43\n74
733\t9877582.88\n35173\t9858275.92\n7116\t9856881.02\n124620\t9838589.14\n122108\t9829949.35\n67200\t9828690.69\n164775\t9821424.44\n9039\t9816447.72\n14912\t9803102.20\n190906\t9791315.70\n130398\t9781674.27\n119310\t9776927.21\n10132\t9770930.78\n107211\t9757586.25\n113958\t9757065.50\n37009\t9748362.69\n66746\t9743528.76\n134486\t9731922.00\n15945\t9731096.45\n55307\t9717745.80\n56362\t9714922.83\n57726\t9711792.10\n57256\t9708621.00\n112292\t9701653.08\n87514\t9699492.53\n174206\t9680562.02\n72865\t9679043.34\n114357\t9671017.44\n112807\t9665019.21\n115203\t9661018.73\n177454\t9658906.35\n161275\t9634313.71\n61893\t9617095.44\n122219\t9604888.20\n183427\t9601362.58\n59158\t9599705.96\n61931\t9584918.98\n5532\t9579964.14\n20158\t9576714.38\n167199\t9557413.08\n38869\t9550279.53\n86949\t9541943.70\n198544\t9538613.92\n193762\t9538238.94\n108807\t9536247.16\n168324\t9535647.99\n115588\t9532195.04\n141372\t9529702.14\n175120\t9526068.66\n163851\t9522808.83\n160954\t9520359.45\n117757\t9517882.80\n52594\t9508325.76\n60960\t9498843.06\n70272\t9495775.62\n44050\t9495515.36\n152213\t9494756.96\n121203\t9492601.30\n70114\t9491012.30\n167588\t9484741.11\n136455\t9476241.78\n4357\t9464355.64\n6786\t9463632.57\n61345\t9455336.70\n160826\t9446754.84\n71275\t9440138.40\n77746\t9439118.35\n91289\t9437472.00\n56723\t9435102.16\n86647\t9434604.18\n131234\t9432120.00\n198129\t9427651.36\n165530\t9426193.68\n69233\t9425053.92\n6243\t9423304.66\n90110\t9420422.70\n191980\t9419368.36\n38461\t9419316.07\n167873\t9419024.49\n159373\t9416950.15\n128707\t9413428.50\n45267\t9410863.78\n48460\t9409793.93\n197672\t9406887.68\n60884\t9403442.40\n15209\t9403245.31\n138049\t9401262.10\n199286\t9391770.70\n19629\t9391236.40\n134019\t9390615.15\n169475\t9387639.58\n165918\t9379510.44\n135602\t9374251.54\n162323\t9367566.51\n96277\t9360850.68\n98336\t9359671.29\n119781\t9356395.73\n34440\t9355365.00\n57362\t9355180.10\n167236\t9352973.84\n38463\t9347530.94\n86749\t9346826.44\n170007\t9345699.90\n193087\t9343744.00\n150383\t9332576.75\n60932\t9329582.02\n128420\t9328206.35\n162145\t9327722.88\n55686\t9320304.40\n163080\t9304916.96\n160583\t9303515.92\n118153\t9298606.56\n152634\t9282184.57\n84731\t9276586.92\n119989\t9273814.20\n114584\t9269698.65\n131817\t9268570.08\n29068\t9256583.88\n44116\t9255922.00\n115818\t9253311.91\n103388\t9239218.08\n186118\t9236209.12\n155809\t9235410.84\n147003\t9234847.99\n27769\t9232511.64\n112779\t9231927.36\n124851\t9228982.68\n158488\t9227216.40\n83328\t9224792.20\n136797\t9222927.09\n141730\t9216370.68\n87304\t9215695.50\n156004\t9215557.90\n140740\t9215329.20\n100648\t9212185.08\n174774\t9211718.00\n37644\t9211578.60\n48807\t9209496.24\n95940\t9207948.40\n141586\t9206699.22\n147248\t9205654.95\n61372\t9205228.76\n52970\t9204415.95\n26430\t9203710.51\n28504\t9201669.20\n25810\t9198878.50\n125329\t9198688.50\n167867\t9194022.72\n134767\t9191444.72\n127745\t9191271.56\n69208\t9187110.00\n155222\t9186469.16\n196916\t9182995.82\n195590\t9176353.12\n169155\t9175176.09\n81558\t9171946.50\n185136\t9171293.04\n114790\t9168509.10\n194142\t9165836.61\n167639\t9161165.00\n11241\t9160789.46\n82628\t9160155.54\n41399\t9148338.00\n30755\t9146196.84\n6944\t9143574.58\n6326\t9138803.16\n101296\t9135657.62\n181479\t9121093.30\n76898\t9120983.10\n64274\t9118745.25\n175826\t9117387.99\n142215\t9116876.88\n103415\t9113128.62\n119765\t9110768.79\n107624\t9108837.45\n84215\t9105257.36\n73774\t9102651.92\n173972\t9102069.00\n69817\t9095513.88\n86943\t9092253.00\n138859\t9087719.30\n162273\t9085296.48\n175945
\t9080401.21\n16836\t9075715.44\n70224\t9075265.95\n139765\t9074755.89\n30319\t9073233.10\n3851\t9072657.24\n181271\t9070631.52\n162184\t9068835.78\n81683\t9067258.47\n153028\t9067010.51\n123324\t9061870.95\n186481\t9058608.30\n167680\t9052908.76\n165293\t9050545.70\n122148\t9046298.17\n138604\t9045840.80\n78851\t9044822.60\n137280\t9042355.34\n8823\t9040855.10\n163900\t9040848.48\n75600\t9035392.45\n81676\t9031999.40\n46033\t9031460.58\n194917\t9028500.00\n133936\t9026949.02\n33182\t9024971.10\n34220\t9021485.39\n20118\t9019942.60\n178258\t9019881.66\n15560\t9017687.28\n111425\t9016198.56\n95942\t9015585.12\n132709\t9015240.15\n39731\t9014746.95\n154307\t9012571.20\n23769\t9008157.60\n93328\t9007211.20\n142826\t8998297.44\n188792\t8996014.00\n68703\t8994982.22\n145280\t8990941.05\n150725\t8985686.16\n172046\t8982469.52\n70476\t8967629.50\n124988\t8966805.22\n17937\t8963319.76\n177372\t8954873.64\n137994\t8950916.79\n84019\t8950039.98\n40389\t8946158.20\n69187\t8941054.14\n4863\t8939044.92\n50465\t8930503.14\n43686\t8915543.84\n131352\t8909053.59\n198916\t8906940.03\n135932\t8905282.95\n104673\t8903682.00\n152308\t8903244.08\n135298\t8900323.20\n156873\t8899429.10\n157454\t8897339.20\n75415\t8897068.09\n46325\t8895569.09\n1966\t8895117.06\n24576\t8895034.75\n19425\t8890156.60\n169735\t8890085.56\n32225\t8889829.28\n124537\t8889770.71\n146327\t8887836.23\n121562\t8887740.40\n44731\t8882444.95\n93141\t8881850.88\n187871\t8873506.18\n71709\t8873057.28\n151913\t8869321.17\n33786\t8868955.39\n35902\t8868126.06\n23588\t8867769.90\n24508\t8867616.00\n161282\t8866661.43\n188061\t8862304.00\n132847\t8862082.00\n166843\t8861200.80\n30609\t8860214.73\n56191\t8856546.96\n160740\t8852685.43\n71229\t8846106.99\n91208\t8845541.28\n10995\t8845306.56\n78094\t8839938.29\n36489\t8838538.10\n198437\t8836494.84\n151693\t8833807.64\n185367\t8829791.37\n65682\t8820622.89\n65421\t8819329.24\n122225\t8816821.86\n85330\t8811013.16\n64555\t8810643.12\n104188\t8808211.02\n54411\t8805703.40\n39438\t8805282.56\n70795\t8800060.92\n20383\t8799073.28\n21952\t8798624.19\n63584\t8796590.00\n158768\t8796422.95\n166588\t8796214.38\n120600\t8793558.06\n157202\t8788287.88\n55358\t8786820.75\n168322\t8786670.73\n25143\t8786324.80\n5368\t8786274.14\n114025\t8786201.12\n97744\t8785315.94\n164327\t8784503.86\n76542\t8782613.28\n4731\t8772846.70\n157590\t8772006.45\n154276\t8771733.91\n28705\t8771576.64\n100226\t8769455.00\n179195\t8769185.16\n184355\t8768118.05\n120408\t8768011.12\n63145\t8761991.96\n53135\t8753491.80\n173071\t8750508.80\n41087\t8749436.79\n194830\t8747438.40\n43496\t8743359.30\n30235\t8741611.00\n26391\t8741399.64\n191816\t8740258.72\n47616\t8737229.68\n152101\t8734432.76\n163784\t8730514.34\n5134\t8728424.64\n155241\t8725429.86\n188814\t8724182.40\n140782\t8720378.75\n153141\t8719407.51\n169373\t8718609.06\n41335\t8714773.80\n197450\t8714617.32\n87004\t8714017.79\n181804\t8712257.76\n122814\t8711119.14\n109939\t8709193.16\n98094\t8708780.04\n74630\t8708040.75\n197291\t8706519.09\n184173\t8705467.45\n192175\t8705411.12\n19471\t8702536.12\n18052\t8702155.70\n135560\t8698137.72\n152791\t8697325.80\n170953\t8696909.19\n116137\t8696687.17\n7722\t8696589.40\n49788\t8694846.71\n13252\t8694822.42\n12633\t8694559.36\n193438\t8690426.72\n17326\t8689329.16\n96124\t8679794.58\n143802\t8676626.48\n30389\t8675826.60\n75250\t8675257.14\n72613\t8673524.94\n123520\t8672456.25\n325\t8667741.28\n167291\t8667556.18\n150119\t8663403.54\n88420\t8663355.40\n179784\t8653021.34\n130884\t8651970.00\n172611\t8648217.00\n85373\t8647796.22\n1
22717\t8646758.54\n113431\t8646348.34\n66015\t8643349.40\n33141\t8643243.18\n69786\t8637396.92\n181857\t8637393.28\n122939\t8636378.00\n196223\t8635391.02\n50532\t8632648.24\n58102\t8632614.54\n93581\t8632372.36\n52804\t8632109.25\n755\t8627091.68\n16597\t8623357.05\n119041\t8622397.00\n89050\t8621185.98\n98696\t8620784.82\n94399\t8620524.00\n151295\t8616671.02\n56417\t8613450.35\n121322\t8612948.23\n126883\t8611373.42\n29155\t8610163.64\n114530\t8608471.74\n131007\t8607394.82\n128715\t8606833.62\n72522\t8601479.98\n144061\t8595718.74\n83503\t8595034.20\n112199\t8590717.44\n9227\t8587350.42\n116318\t8585910.66\n41248\t8585559.64\n159398\t8584821.00\n105966\t8582308.79\n137876\t8580641.30\n122272\t8580400.77\n195717\t8577278.10\n165295\t8571121.92\n5840\t8570728.74\n120860\t8570610.44\n66692\t8567540.52\n135596\t8563276.31\n150576\t8562794.10\n7500\t8562393.84\n107716\t8561541.56\n100611\t8559995.85\n171192\t8557390.08\n107660\t8556696.60\n13461\t8556545.12\n90310\t8555131.51\n141493\t8553782.93\n71286\t8552682.00\n136423\t8551300.76\n54241\t8550785.25\n120325\t8549976.60\n424\t8547527.10\n196543\t8545907.09\n13042\t8542717.18\n58332\t8536074.69\n9191\t8535663.92\n134357\t8535429.90\n96207\t8534900.60\n92292\t8530618.78\n181093\t8528303.52\n105064\t8527491.60\n59635\t8526854.08\n136974\t8524351.56\n126694\t8522783.37\n6247\t8522606.90\n139447\t8522521.92\n96313\t8520949.92\n108454\t8520916.25\n181254\t8519496.10\n71117\t8519223.00\n131703\t8517215.28\n59312\t8510568.36\n2903\t8509960.35\n102838\t8509527.69\n162806\t8508906.05\n41527\t8508222.36\n118416\t8505858.36\n180203\t8505024.16\n14773\t8500598.28\n140446\t8499514.24\n199641\t8497362.59\n109240\t8494617.12\n150268\t8494188.38\n45310\t8492380.65\n36552\t8490733.60\n199690\t8490145.80\n185353\t8488726.68\n163615\t8484985.01\n196520\t8483545.04\n133438\t8483482.35\n77285\t8481442.32\n55824\t8476893.90\n76753\t8475522.12\n46129\t8472717.96\n28358\t8472515.50\n9317\t8472145.32\n33823\t8469721.44\n39055\t8469145.07\n91471\t8468874.56\n142299\t8466039.55\n97672\t8464119.80\n134712\t8461781.79\n157988\t8460123.20\n102284\t8458652.44\n73533\t8458453.32\n90599\t8457874.86\n112160\t8457863.36\n124792\t8457633.70\n66097\t8457573.15\n165271\t8456969.01\n146925\t8454887.91\n164277\t8454838.50\n131290\t8454811.20\n179386\t8450909.90\n90486\t8447873.86\n175924\t8444421.66\n185922\t8442394.88\n38492\t8436438.32\n172511\t8436287.34\n139539\t8434180.29\n11926\t8433199.52\n55889\t8431449.88\n163068\t8431116.40\n138772\t8428406.36\n126821\t8425180.68\n22091\t8420687.88\n55981\t8419434.38\n100960\t8419403.46\n172568\t8417955.21\n63135\t8415945.53\n137651\t8413170.35\n191353\t8413039.84\n62988\t8411571.48\n103417\t8411541.12\n12052\t8411519.28\n104260\t8408516.55\n157129\t8405730.08\n77254\t8405537.22\n112966\t8403512.89\n168114\t8402764.56\n49940\t8402328.20\n52017\t8398753.60\n176179\t8398087.00\n100215\t8395906.61\n61256\t8392811.20\n15366\t8388907.80\n109479\t8388027.20\n66202\t8386522.83\n81707\t8385761.19\n51727\t8385426.40\n9980\t8382754.62\n174403\t8378575.73\n54558\t8378041.92\n3141\t8377378.22\n134829\t8377105.52\n145056\t8376920.76\n194020\t8375157.64\n7117\t8373982.27\n120146\t8373796.20\n126843\t8370761.28\n62117\t8369493.44\n111221\t8367525.81\n159337\t8366092.26\n173903\t8365428.48\n136438\t8364065.45\n56684\t8363198.00\n137597\t8363185.94\n20039\t8361138.24\n121326\t8359635.52\n48435\t8352863.10\n1712\t8349107.00\n167190\t8347238.70\n32113\t8346452.04\n40580\t8342983.32\n74785\t8342519.13\n14799\t8342236.75\n177291\t8341736.83\n198956\t8340370
.65\n69179\t8338465.99\n118764\t8337616.56\n128814\t8336435.56\n82729\t8331766.88\n152048\t8330638.99\n171085\t8326259.50\n126730\t8325974.40\n77525\t8323282.50\n170653\t8322840.50\n5257\t8320350.78\n67350\t8318987.56\n109008\t8317836.54\n199043\t8316603.54\n139969\t8316551.54\n22634\t8316531.24\n173309\t8315750.25\n10887\t8315019.36\n42392\t8312895.96\n126040\t8312623.20\n101590\t8304555.42\n46891\t8302192.12\n138721\t8301745.62\n113715\t8301533.20\n78778\t8299685.64\n142908\t8299447.77\n64419\t8297631.80\n21396\t8296272.27\n4180\t8295646.92\n63534\t8295383.67\n135957\t8294389.86\n30126\t8291920.32\n158427\t8288938.00\n14545\t8288395.92\n75548\t8288287.20\n64473\t8286137.44\n149553\t8285714.88\n151284\t8283526.65\n171091\t8282934.36\n194256\t8278985.34\n952\t8276136.00\n121541\t8275390.26\n177664\t8275315.20\n51117\t8274504.30\n66770\t8273407.80\n37238\t8272728.06\n46679\t8270486.55\n165852\t8268312.60\n99458\t8266564.47\n114519\t8265493.54\n7231\t8264881.50\n19033\t8264826.56\n125123\t8262732.65\n18642\t8261578.99\n50386\t8261380.05\n193770\t8259578.82\n7276\t8258101.60\n178045\t8253904.15\n49033\t8253696.23\n187195\t8251334.58\n10590\t8249227.40\n143779\t8247057.70\n35205\t8245675.17\n19729\t8245081.60\n144946\t8240479.80\n123786\t8239581.24\n70843\t8237973.20\n112437\t8236907.52\n5436\t8236039.57\n163754\t8235471.16\n115945\t8234811.36\n27918\t8233957.88\n105712\t8233571.86\n41007\t8229431.79\n40476\t8226640.41\n145620\t8221371.60\n7771\t8220413.33\n86424\t8215572.61\n129137\t8215478.40\n76020\t8210495.36\n140213\t8209831.80\n32379\t8208338.88\n130616\t8207715.75\n195469\t8206609.80\n191805\t8205147.75\n90906\t8200951.20\n170910\t8195558.01\n105399\t8193122.63\n123798\t8192385.97\n90218\t8191689.16\n114766\t8189339.54\n11289\t8187354.72\n178308\t8185750.50\n71271\t8185519.24\n1115\t8184903.38\n152636\t8184530.72\n151619\t8182909.05\n116943\t8181072.69\n28891\t8181051.54\n47049\t8180955.00\n158827\t8180470.90\n92620\t8179671.55\n20814\t8176953.54\n179323\t8176795.55\n193453\t8174343.94\n56888\t8173342.00\n28087\t8169876.30\n164254\t8169632.35\n57661\t8168848.16\n7363\t8167538.05\n164499\t8167512.08\n197557\t8165940.45\n5495\t8164805.22\n966\t8163824.79\n98435\t8161771.45\n127227\t8161344.92\n194100\t8160978.78\n40134\t8160358.08\n107341\t8159952.05\n6790\t8158792.66\n43851\t8157101.40\n51295\t8156419.20\n69512\t8151537.00\n164274\t8149869.93\n130854\t8145338.85\n186865\t8143586.82\n176629\t8141411.20\n193739\t8141377.77\n6810\t8139822.60\n27732\t8136724.96\n50616\t8134089.82\n123908\t8128920.54\n140994\t8128470.82\n99039\t8128290.78\n62735\t8124940.50\n47829\t8122796.50\n192635\t8122687.57\n192429\t8119268.00\n145812\t8119165.63\n42896\t8118529.80\n146877\t8118266.16\n60882\t8116095.04\n18254\t8114783.04\n165464\t8114571.80\n57936\t8111927.25\n52226\t8110723.32\n128571\t8106788.80\n100308\t8105837.04\n8872\t8102395.62\n58867\t8102033.19\n145153\t8100222.84\n172088\t8098138.20\n59398\t8095845.45\n89395\t8093576.10\n171961\t8093538.00\n88736\t8090762.16\n174053\t8090350.11\n102237\t8089103.22\n43041\t8086537.90\n110219\t8085296.90\n126738\t8084199.20\n44787\t8083628.40\n31277\t8083580.76\n93595\t8082188.80\n189040\t8080257.21\n59851\t8079024.24\n175100\t8077904.01\n43429\t8076729.96\n154199\t8074940.76\n60963\t8073894.40\n8768\t8072760.96\n66095\t8071421.70\n111552\t8068184.48\n24563\t8067500.40\n16167\t8067495.24\n12662\t8067248.85\n94540\t8063727.16\n23308\t8063463.18\n27390\t8062823.25\n130660\t8062787.48\n8608\t8062411.16\n181552\t8062008.30\n199319\t8060248.56\n55475\t8058850.92\n1427
11\t8057926.58\n103499\t8056978.00\n105943\t8056698.75\n8432\t8053052.16\n149392\t8049675.69\n101248\t8048855.49\n140962\t8047260.70\n87101\t8046651.83\n133107\t8046476.73\n45126\t8045924.40\n87508\t8042966.39\n124711\t8042722.72\n173169\t8042224.41\n175161\t8041331.98\n167787\t8040075.78\n3242\t8038855.53\n114789\t8038628.35\n43833\t8038545.83\n141198\t8035110.72\n137248\t8034109.35\n96673\t8033491.20\n32180\t8032380.72\n166493\t8031902.40\n66959\t8031839.40\n85628\t8029693.44\n110971\t8029469.70\n130395\t8027463.92\n7757\t8026840.37\n178446\t8025379.09\n41295\t8024785.53\n100956\t8024179.30\n131917\t8021604.78\n24224\t8020463.52\n2073\t8020009.64\n121622\t8018462.17\n14357\t8016906.30\n135601\t8016209.44\n58458\t8016192.52\n73036\t8015799.00\n184722\t8015680.31\n151664\t8014821.96\n195090\t8012680.20\n162609\t8011241.00\n83532\t8009753.85\n50166\t8007137.89\n181562\t8006805.96\n175165\t8005319.76\n62500\t8005316.28\n36342\t8004333.40\n128435\t8004242.88\n92516\t8003836.80\n30802\t8003710.88\n107418\t8000430.30\n46620\t7999778.35\n191803\t7994734.15\n106343\t7993087.76\n59362\t7990397.46\n8329\t7990052.90\n75133\t7988244.00\n179023\t7986829.62\n135899\t7985726.64\n5824\t7985340.02\n148579\t7984889.56\n95888\t7984735.72\n9791\t7982699.79\n170437\t7982370.72\n39782\t7977858.24\n20605\t7977556.00\n28682\t7976960.00\n42172\t7973399.00\n56137\t7971405.40\n64729\t7970769.72\n98643\t7968603.73\n153787\t7967535.58\n8932\t7967222.19\n20134\t7965713.28\n197635\t7963507.58\n80408\t7963312.17\n37728\t7961875.68\n26624\t7961772.31\n44736\t7961144.10\n29763\t7960605.03\n36147\t7959463.68\n146040\t7957587.66\n115469\t7957485.14\n142276\t7956790.63\n181280\t7954037.35\n115096\t7953047.55\n109650\t7952258.73\n93862\t7951992.24\n158325\t7950728.30\n55952\t7950387.06\n122397\t7947106.27\n28114\t7946945.72\n11966\t7945197.48\n47814\t7944083.00\n85096\t7943691.06\n51657\t7943593.77\n196680\t7943578.89\n13141\t7942730.34\n193327\t7941036.25\n152612\t7940663.71\n139680\t7939242.36\n31134\t7938318.30\n45636\t7937240.85\n56694\t7936015.95\n8114\t7933921.88\n71518\t7932261.69\n72922\t7930400.64\n146699\t7929167.40\n92387\t7928972.67\n186289\t7928786.19\n95952\t7927972.78\n196514\t7927180.70\n4403\t7925729.04\n2267\t7925649.37\n45924\t7925047.68\n11493\t7916722.23\n104478\t7916253.60\n166794\t7913842.00\n161995\t7910874.27\n23538\t7909752.06\n41093\t7909579.92\n112073\t7908617.57\n92814\t7908262.50\n88919\t7907992.50\n79753\t7907933.88\n108765\t7905338.98\n146530\t7905336.60\n71475\t7903367.58\n36289\t7901946.50\n61739\t7900794.00\n52338\t7898638.08\n194299\t7898421.24\n105235\t7897829.94\n77207\t7897752.72\n96712\t7897575.27\n10157\t7897046.25\n171154\t7896814.50\n79373\t7896186.00\n113808\t7893353.88\n27901\t7892952.00\n128820\t7892882.72\n25891\t7890511.20\n122819\t7888881.02\n154731\t7888301.33\n101674\t7879324.60\n51968\t7879102.21\n72073\t7877736.11\n5182\t7874521.73\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q12.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<l_shipmode:string,high_line_count:bigint,low_line_count:bigint>\n-- !query output\nMAIL\t6202\t9324\nSHIP\t6200\t9262\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q13.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<c_count:bigint,custdist:bigint>\n-- !query output\n0\t50005\n9\t6641\n10\t6532\n11\t6014\n8\t5937\n12\t5639\n13\t5024\n19\t4793\n7\t4687\n17\t4587\n18\t4529\n20\t4516\n15\t4505\n14\t4446\n16\t4273\n21\t4190\n22\t3623\n6\t3265\n23\t3225\n24\t2742\n25\t2086\n5\t1948\n26\t1612\n27\t1179\n4\t1007\n28\t893\n29\t593\n3\t415\n30\t376\n31\t226\n32\t148\n2\t134\n33\t75\n34\t50\n35\t37\n1\t17\n36\t14\n38\t5\n37\t5\n40\t4\n41\t2\n39\t1\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q14.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<promo_revenue:decimal(38,6)>\n-- !query output\n16.380779\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q15.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<s_suppkey:bigint,s_name:string,s_address:string,s_phone:string,total_revenue:decimal(36,4)>\n-- !query output\n8449\tSupplier#x\tWp34zim9qYFbVctdW\t20-469-856-8873\t1772627.2087\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q16.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<p_brand:string,p_type:string,p_size:int,supplier_cnt:bigint>\n-- !query output\nBrand#x\tMEDIUM BRUSHED TIN\t3\t28\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t27\nBrand#x\tSTANDARD BRUSHED TIN\t23\t24\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t24\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t24\nBrand#x\tSMALL ANODIZED BRASS\t45\t24\nBrand#x\tSMALL BURNISHED NICKEL\t19\t24\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t24\nBrand#x\tSMALL BRUSHED NICKEL\t3\t24\nBrand#x\tSMALL BURNISHED BRASS\t19\t24\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t24\nBrand#x\tPROMO POLISHED COPPER\t36\t24\nBrand#x\tLARGE POLISHED TIN\t23\t24\nBrand#x\tPROMO POLISHED STEEL\t14\t24\nBrand#x\tPROMO BRUSHED NICKEL\t14\t24\nBrand#x\tECONOMY BRUSHED STEEL\t9\t24\nBrand#x\tECONOMY POLISHED TIN\t19\t24\nBrand#x\tLARGE PLATED COPPER\t36\t24\nBrand#x\tECONOMY PLATED BRASS\t3\t24\nBrand#x\tSTANDARD POLISHED TIN\t49\t24\nBrand#x\tPROMO BRUSHED TIN\t3\t24\nBrand#x\tSMALL ANODIZED COPPER\t36\t24\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t24\nBrand#x\tECONOMY PLATED TIN\t14\t24\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t24\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t24\nBrand#x\tPROMO ANODIZED NICKEL\t45\t23\nBrand#x\tECONOMY PLATED BRASS\t9\t23\nBrand#x\tSMALL ANODIZED COPPER\t3\t23\nBrand#x\tECONOMY BRUSHED COPPER\t45\t20\nBrand#x\tECONOMY PLATED BRASS\t23\t20\nBrand#x\tLARGE BRUSHED COPPER\t49\t20\nBrand#x\tLARGE POLISHED COPPER\t49\t20\nBrand#x\tSTANDARD ANODIZED TIN\t49\t20\nBrand#x\tSTANDARD PLATED BRASS\t19\t20\nBrand#x\tECONOMY BRUSHED BRASS\t9\t20\nBrand#x\tECONOMY BURNISHED STEEL\t14\t20\nBrand#x\tLARGE BURNISHED NICKEL\t19\t20\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t20\nBrand#x\tSMALL BRUSHED TIN\t45\t20\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t20\nBrand#x\tSTANDARD PLATED NICKEL\t23\t20\nBrand#x\tECONOMY ANODIZED COPPER\t14\t20\nBrand#x\tECONOMY PLATED TIN\t36\t20\nBrand#x\tECONOMY POLISHED NICKEL\t3\t20\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t20\nBrand#x\tSMALL POLISHED TIN\t14\t20\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t20\nBrand#x\tMEDIUM PLATED TIN\t23\t20\nBrand#x\tPROMO PLATED BRASS\t14\t20\nBrand#x\tSMALL ANODIZED COPPER\t45\t20\nBrand#x\tSMALL PLATED COPPER\t49\t20\nBrand#x\tSTANDARD PLATED TIN\t3\t20\nBrand#x\tLARGE ANODIZED COPPER\t36\t20\nBrand#x\tLARGE BRUSHED TIN\t3\t20\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t20\nBrand#x\tPROMO BRUSHED TIN\t36\t20\nBrand#x\tPROMO POLISHED NICKEL\t45\t20\nBrand#x\tSMALL ANODIZED COPPER\t9\t20\nBrand#x\tSMALL POLISHED NICKEL\t23\t20\nBrand#x\tLARGE ANODIZED COPPER\t36\t20\nBrand#x\tLARGE BRUSHED COPPER\t49\t20\nBrand#x\tPROMO ANODIZED TIN\t49\t20\nBrand#x\tPROMO POLISHED BRASS\t45\t20\nBrand#x\tSMALL BURNISHED STEEL\t45\t20\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t20\nBrand#x\tPROMO POLISHED STEEL\t23\t20\nBrand#x\tSTANDARD BRUSHED TIN\t14\t20\nBrand#x\tSTANDARD PLATED NICKEL\t36\t20\nBrand#x\tPROMO PLATED COPPER\t49\t20\nBrand#x\tPROMO PLATED STEEL\t49\t20\nBrand#x\tPROMO POLISHED STEEL\t9\t20\nBrand#x\tSTANDARD BRUSHED TIN\t36\t20\nBrand#x\tLARGE ANODIZED BRASS\t3\t20\nBrand#x\tPROMO BURNISHED TIN\t3\t20\nBrand#x\tECONOMY POLISHED NICKEL\t3\t20\nBrand#x\tMEDIUM PLATED TIN\t45\t20\nBrand#x\tSMALL ANODIZED STEEL\t14\t20\nBrand#x\tECONOMY ANODIZED COPPER\t36\t20\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t20\nBrand#x\tLARGE ANODIZED TIN\t19\t20\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t20\nBrand#x\tSMALL ANODIZED STEEL\t45\t20\nBrand#x\tECONOMY POLISHED COPPER\t19\t20\nBrand#x\tPROMO PLATED NICKEL\t14\t20\nBrand#x\tSMALL POLISHED 
TIN\t9\t20\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t20\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t20\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t20\nBrand#x\tLARGE BRUSHED BRASS\t19\t20\nBrand#x\tSMALL BRUSHED TIN\t3\t20\nBrand#x\tSTANDARD PLATED COPPER\t9\t20\nBrand#x\tLARGE ANODIZED NICKEL\t3\t20\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t20\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t20\nBrand#x\tPROMO ANODIZED COPPER\t49\t20\nBrand#x\tSMALL POLISHED COPPER\t14\t20\nBrand#x\tLARGE ANODIZED STEEL\t3\t20\nBrand#x\tLARGE BRUSHED NICKEL\t23\t20\nBrand#x\tLARGE BURNISHED COPPER\t3\t20\nBrand#x\tMEDIUM PLATED STEEL\t19\t20\nBrand#x\tSMALL BURNISHED COPPER\t23\t20\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t20\nBrand#x\tSMALL BURNISHED COPPER\t3\t20\nBrand#x\tECONOMY POLISHED COPPER\t9\t20\nBrand#x\tSMALL PLATED STEEL\t3\t20\nBrand#x\tSTANDARD BURNISHED TIN\t23\t20\nBrand#x\tLARGE ANODIZED STEEL\t23\t20\nBrand#x\tPROMO ANODIZED TIN\t23\t20\nBrand#x\tECONOMY BRUSHED BRASS\t49\t20\nBrand#x\tECONOMY POLISHED NICKEL\t9\t20\nBrand#x\tMEDIUM BRUSHED TIN\t9\t20\nBrand#x\tMEDIUM PLATED BRASS\t9\t20\nBrand#x\tPROMO BURNISHED BRASS\t9\t20\nBrand#x\tSMALL PLATED NICKEL\t49\t20\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t20\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t20\nBrand#x\tECONOMY ANODIZED BRASS\t3\t20\nBrand#x\tECONOMY BRUSHED COPPER\t49\t20\nBrand#x\tLARGE ANODIZED NICKEL\t45\t20\nBrand#x\tMEDIUM ANODIZED TIN\t23\t20\nBrand#x\tMEDIUM BURNISHED TIN\t45\t20\nBrand#x\tSMALL PLATED COPPER\t36\t20\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t20\nBrand#x\tECONOMY PLATED COPPER\t45\t20\nBrand#x\tPROMO ANODIZED COPPER\t49\t20\nBrand#x\tPROMO BRUSHED COPPER\t23\t20\nBrand#x\tPROMO PLATED TIN\t19\t20\nBrand#x\tPROMO POLISHED NICKEL\t3\t20\nBrand#x\tSMALL ANODIZED STEEL\t9\t20\nBrand#x\tSMALL BRUSHED COPPER\t3\t20\nBrand#x\tSMALL BRUSHED NICKEL\t3\t20\nBrand#x\tECONOMY PLATED STEEL\t9\t20\nBrand#x\tECONOMY POLISHED TIN\t3\t20\nBrand#x\tSMALL BRUSHED BRASS\t19\t20\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t20\nBrand#x\tPROMO BURNISHED STEEL\t14\t20\nBrand#x\tPROMO POLISHED NICKEL\t49\t20\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t20\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t20\nBrand#x\tECONOMY ANODIZED TIN\t3\t19\nBrand#x\tECONOMY ANODIZED BRASS\t14\t16\nBrand#x\tECONOMY ANODIZED BRASS\t23\t16\nBrand#x\tECONOMY ANODIZED COPPER\t14\t16\nBrand#x\tECONOMY BRUSHED BRASS\t49\t16\nBrand#x\tECONOMY BRUSHED STEEL\t19\t16\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t16\nBrand#x\tLARGE ANODIZED COPPER\t14\t16\nBrand#x\tLARGE BRUSHED TIN\t45\t16\nBrand#x\tLARGE BURNISHED COPPER\t23\t16\nBrand#x\tLARGE BURNISHED NICKEL\t36\t16\nBrand#x\tLARGE PLATED STEEL\t14\t16\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t16\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t16\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t16\nBrand#x\tMEDIUM BURNISHED TIN\t3\t16\nBrand#x\tMEDIUM PLATED COPPER\t9\t16\nBrand#x\tPROMO ANODIZED BRASS\t19\t16\nBrand#x\tPROMO ANODIZED BRASS\t49\t16\nBrand#x\tPROMO ANODIZED STEEL\t45\t16\nBrand#x\tPROMO PLATED BRASS\t45\t16\nBrand#x\tSMALL ANODIZED TIN\t45\t16\nBrand#x\tSMALL BRUSHED STEEL\t49\t16\nBrand#x\tSMALL BURNISHED COPPER\t19\t16\nBrand#x\tSMALL BURNISHED COPPER\t45\t16\nBrand#x\tSMALL BURNISHED NICKEL\t14\t16\nBrand#x\tSMALL POLISHED NICKEL\t36\t16\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t16\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t16\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t16\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t16\nBrand#x\tECONOMY ANODIZED TIN\t14\t16\nBrand#x\tECONOMY BRUSHED COPPER\t9\t16\nBrand#x\tECONOMY BRUSHED COPPER\t36\t16\nBrand#x\tECONOMY BURNISHED 
BRASS\t9\t16\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t16\nBrand#x\tLARGE ANODIZED BRASS\t14\t16\nBrand#x\tLARGE ANODIZED COPPER\t9\t16\nBrand#x\tLARGE ANODIZED STEEL\t23\t16\nBrand#x\tLARGE BURNISHED TIN\t36\t16\nBrand#x\tLARGE PLATED COPPER\t49\t16\nBrand#x\tLARGE POLISHED COPPER\t49\t16\nBrand#x\tMEDIUM PLATED COPPER\t19\t16\nBrand#x\tMEDIUM PLATED NICKEL\t23\t16\nBrand#x\tPROMO ANODIZED BRASS\t45\t16\nBrand#x\tPROMO ANODIZED STEEL\t49\t16\nBrand#x\tPROMO BURNISHED STEEL\t9\t16\nBrand#x\tSMALL BRUSHED NICKEL\t36\t16\nBrand#x\tSMALL BRUSHED TIN\t45\t16\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t16\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t16\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t16\nBrand#x\tSTANDARD BRUSHED TIN\t9\t16\nBrand#x\tSTANDARD BRUSHED TIN\t36\t16\nBrand#x\tSTANDARD POLISHED COPPER\t9\t16\nBrand#x\tECONOMY ANODIZED STEEL\t45\t16\nBrand#x\tECONOMY POLISHED BRASS\t3\t16\nBrand#x\tLARGE BRUSHED NICKEL\t23\t16\nBrand#x\tLARGE BURNISHED NICKEL\t9\t16\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t16\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t16\nBrand#x\tMEDIUM PLATED BRASS\t49\t16\nBrand#x\tPROMO ANODIZED BRASS\t14\t16\nBrand#x\tPROMO ANODIZED COPPER\t3\t16\nBrand#x\tSMALL ANODIZED STEEL\t45\t16\nBrand#x\tSMALL BURNISHED STEEL\t19\t16\nBrand#x\tSMALL PLATED BRASS\t36\t16\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t16\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t16\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t16\nBrand#x\tSTANDARD PLATED NICKEL\t9\t16\nBrand#x\tSTANDARD PLATED TIN\t23\t16\nBrand#x\tECONOMY BRUSHED STEEL\t3\t16\nBrand#x\tECONOMY PLATED NICKEL\t9\t16\nBrand#x\tECONOMY PLATED STEEL\t9\t16\nBrand#x\tECONOMY POLISHED NICKEL\t19\t16\nBrand#x\tLARGE ANODIZED COPPER\t14\t16\nBrand#x\tLARGE BRUSHED NICKEL\t19\t16\nBrand#x\tLARGE POLISHED STEEL\t3\t16\nBrand#x\tLARGE POLISHED TIN\t23\t16\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t16\nBrand#x\tPROMO ANODIZED STEEL\t36\t16\nBrand#x\tPROMO PLATED BRASS\t9\t16\nBrand#x\tPROMO PLATED NICKEL\t49\t16\nBrand#x\tPROMO POLISHED BRASS\t19\t16\nBrand#x\tPROMO POLISHED STEEL\t19\t16\nBrand#x\tPROMO POLISHED TIN\t45\t16\nBrand#x\tSMALL BRUSHED BRASS\t14\t16\nBrand#x\tSMALL BURNISHED COPPER\t45\t16\nBrand#x\tSTANDARD BRUSHED TIN\t19\t16\nBrand#x\tSTANDARD PLATED COPPER\t45\t16\nBrand#x\tSTANDARD PLATED TIN\t9\t16\nBrand#x\tSTANDARD POLISHED TIN\t49\t16\nBrand#x\tECONOMY BRUSHED STEEL\t19\t16\nBrand#x\tLARGE BRUSHED BRASS\t14\t16\nBrand#x\tLARGE BRUSHED STEEL\t14\t16\nBrand#x\tLARGE BURNISHED NICKEL\t3\t16\nBrand#x\tLARGE PLATED COPPER\t49\t16\nBrand#x\tPROMO ANODIZED NICKEL\t3\t16\nBrand#x\tPROMO BURNISHED TIN\t49\t16\nBrand#x\tPROMO PLATED STEEL\t3\t16\nBrand#x\tPROMO POLISHED STEEL\t49\t16\nBrand#x\tSMALL BRUSHED COPPER\t9\t16\nBrand#x\tSMALL BRUSHED NICKEL\t23\t16\nBrand#x\tSMALL PLATED BRASS\t49\t16\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t16\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t16\nBrand#x\tSTANDARD PLATED TIN\t36\t16\nBrand#x\tECONOMY ANODIZED STEEL\t45\t16\nBrand#x\tECONOMY BRUSHED COPPER\t9\t16\nBrand#x\tECONOMY POLISHED STEEL\t19\t16\nBrand#x\tLARGE ANODIZED STEEL\t14\t16\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t16\nBrand#x\tPROMO POLISHED BRASS\t14\t16\nBrand#x\tPROMO POLISHED TIN\t49\t16\nBrand#x\tSMALL BRUSHED COPPER\t3\t16\nBrand#x\tSMALL PLATED STEEL\t45\t16\nBrand#x\tSMALL PLATED TIN\t45\t16\nBrand#x\tSTANDARD POLISHED STEEL\t36\t16\nBrand#x\tECONOMY BRUSHED BRASS\t9\t16\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t16\nBrand#x\tECONOMY POLISHED TIN\t36\t16\nBrand#x\tLARGE BRUSHED COPPER\t19\t16\nBrand#x\tLARGE BRUSHED TIN\t36\t16\nBrand#x\tLARGE POLISHED 
COPPER\t19\t16\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t16\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t16\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t16\nBrand#x\tMEDIUM PLATED NICKEL\t23\t16\nBrand#x\tPROMO ANODIZED TIN\t45\t16\nBrand#x\tPROMO POLISHED STEEL\t49\t16\nBrand#x\tSMALL BRUSHED NICKEL\t45\t16\nBrand#x\tSMALL POLISHED BRASS\t36\t16\nBrand#x\tSMALL POLISHED STEEL\t9\t16\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t16\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t16\nBrand#x\tSTANDARD PLATED BRASS\t9\t16\nBrand#x\tECONOMY BRUSHED TIN\t49\t16\nBrand#x\tECONOMY BURNISHED COPPER\t45\t16\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t16\nBrand#x\tECONOMY BURNISHED TIN\t9\t16\nBrand#x\tECONOMY PLATED BRASS\t9\t16\nBrand#x\tECONOMY PLATED COPPER\t14\t16\nBrand#x\tLARGE ANODIZED STEEL\t23\t16\nBrand#x\tLARGE ANODIZED STEEL\t49\t16\nBrand#x\tLARGE BURNISHED COPPER\t23\t16\nBrand#x\tLARGE POLISHED NICKEL\t9\t16\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t16\nBrand#x\tPROMO ANODIZED COPPER\t19\t16\nBrand#x\tPROMO ANODIZED TIN\t3\t16\nBrand#x\tPROMO BURNISHED COPPER\t14\t16\nBrand#x\tPROMO PLATED BRASS\t3\t16\nBrand#x\tSMALL ANODIZED BRASS\t23\t16\nBrand#x\tSMALL BRUSHED BRASS\t45\t16\nBrand#x\tSMALL POLISHED TIN\t3\t16\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t16\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t16\nBrand#x\tSTANDARD PLATED BRASS\t9\t16\nBrand#x\tSTANDARD PLATED COPPER\t45\t16\nBrand#x\tSTANDARD POLISHED BRASS\t9\t16\nBrand#x\tECONOMY ANODIZED BRASS\t3\t16\nBrand#x\tECONOMY BRUSHED COPPER\t36\t16\nBrand#x\tECONOMY BRUSHED STEEL\t14\t16\nBrand#x\tECONOMY POLISHED COPPER\t36\t16\nBrand#x\tECONOMY POLISHED NICKEL\t3\t16\nBrand#x\tLARGE ANODIZED BRASS\t23\t16\nBrand#x\tLARGE BURNISHED BRASS\t45\t16\nBrand#x\tLARGE BURNISHED STEEL\t14\t16\nBrand#x\tLARGE PLATED TIN\t9\t16\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t16\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t16\nBrand#x\tPROMO BURNISHED COPPER\t49\t16\nBrand#x\tPROMO BURNISHED STEEL\t49\t16\nBrand#x\tPROMO POLISHED STEEL\t23\t16\nBrand#x\tSMALL ANODIZED NICKEL\t19\t16\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t16\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t16\nBrand#x\tSTANDARD PLATED NICKEL\t23\t16\nBrand#x\tSTANDARD PLATED TIN\t49\t16\nBrand#x\tECONOMY ANODIZED COPPER\t14\t16\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t16\nBrand#x\tECONOMY PLATED TIN\t14\t16\nBrand#x\tECONOMY POLISHED TIN\t45\t16\nBrand#x\tLARGE ANODIZED STEEL\t9\t16\nBrand#x\tLARGE ANODIZED TIN\t45\t16\nBrand#x\tLARGE BRUSHED NICKEL\t36\t16\nBrand#x\tLARGE BURNISHED NICKEL\t14\t16\nBrand#x\tLARGE POLISHED STEEL\t19\t16\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t16\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t16\nBrand#x\tMEDIUM BURNISHED TIN\t3\t16\nBrand#x\tMEDIUM PLATED STEEL\t9\t16\nBrand#x\tPROMO ANODIZED BRASS\t49\t16\nBrand#x\tPROMO ANODIZED STEEL\t19\t16\nBrand#x\tPROMO ANODIZED TIN\t23\t16\nBrand#x\tPROMO BURNISHED COPPER\t49\t16\nBrand#x\tPROMO POLISHED COPPER\t14\t16\nBrand#x\tSMALL ANODIZED COPPER\t23\t16\nBrand#x\tSMALL BRUSHED STEEL\t23\t16\nBrand#x\tSMALL POLISHED COPPER\t23\t16\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t16\nBrand#x\tSTANDARD BURNISHED TIN\t3\t16\nBrand#x\tSTANDARD BURNISHED TIN\t36\t16\nBrand#x\tSTANDARD PLATED BRASS\t45\t16\nBrand#x\tSTANDARD PLATED COPPER\t49\t16\nBrand#x\tECONOMY ANODIZED BRASS\t45\t16\nBrand#x\tECONOMY BRUSHED COPPER\t14\t16\nBrand#x\tECONOMY BRUSHED COPPER\t36\t16\nBrand#x\tLARGE ANODIZED STEEL\t45\t16\nBrand#x\tLARGE BURNISHED NICKEL\t45\t16\nBrand#x\tLARGE PLATED TIN\t14\t16\nBrand#x\tLARGE POLISHED COPPER\t49\t16\nBrand#x\tMEDIUM ANODIZED NICKEL\t49\t16\nBrand#x\tMEDIUM BURNISHED 
BRASS\t19\t16\nBrand#x\tPROMO ANODIZED NICKEL\t14\t16\nBrand#x\tPROMO BRUSHED TIN\t45\t16\nBrand#x\tPROMO BURNISHED STEEL\t36\t16\nBrand#x\tSMALL ANODIZED NICKEL\t23\t16\nBrand#x\tSMALL BRUSHED NICKEL\t14\t16\nBrand#x\tSMALL BRUSHED TIN\t19\t16\nBrand#x\tSMALL PLATED NICKEL\t23\t16\nBrand#x\tSMALL POLISHED BRASS\t23\t16\nBrand#x\tSMALL POLISHED TIN\t14\t16\nBrand#x\tSMALL POLISHED TIN\t45\t16\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t16\nBrand#x\tSTANDARD POLISHED STEEL\t36\t16\nBrand#x\tECONOMY BRUSHED STEEL\t9\t16\nBrand#x\tECONOMY PLATED STEEL\t14\t16\nBrand#x\tLARGE ANODIZED BRASS\t36\t16\nBrand#x\tLARGE BURNISHED NICKEL\t36\t16\nBrand#x\tLARGE PLATED BRASS\t36\t16\nBrand#x\tLARGE PLATED STEEL\t23\t16\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t16\nBrand#x\tMEDIUM BRUSHED TIN\t9\t16\nBrand#x\tMEDIUM PLATED COPPER\t36\t16\nBrand#x\tPROMO ANODIZED TIN\t36\t16\nBrand#x\tPROMO BRUSHED BRASS\t9\t16\nBrand#x\tPROMO BURNISHED STEEL\t36\t16\nBrand#x\tPROMO PLATED STEEL\t3\t16\nBrand#x\tPROMO PLATED TIN\t45\t16\nBrand#x\tSMALL BURNISHED TIN\t49\t16\nBrand#x\tSMALL PLATED NICKEL\t36\t16\nBrand#x\tSMALL POLISHED NICKEL\t36\t16\nBrand#x\tSMALL POLISHED STEEL\t9\t16\nBrand#x\tSMALL POLISHED TIN\t36\t16\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t16\nBrand#x\tSTANDARD ANODIZED TIN\t9\t16\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t16\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t16\nBrand#x\tSTANDARD POLISHED BRASS\t14\t16\nBrand#x\tSTANDARD POLISHED STEEL\t14\t16\nBrand#x\tECONOMY ANODIZED STEEL\t49\t16\nBrand#x\tECONOMY PLATED BRASS\t36\t16\nBrand#x\tECONOMY PLATED COPPER\t19\t16\nBrand#x\tECONOMY POLISHED NICKEL\t19\t16\nBrand#x\tLARGE ANODIZED STEEL\t45\t16\nBrand#x\tLARGE ANODIZED TIN\t45\t16\nBrand#x\tLARGE BURNISHED COPPER\t45\t16\nBrand#x\tLARGE POLISHED STEEL\t3\t16\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t16\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t16\nBrand#x\tMEDIUM ANODIZED TIN\t14\t16\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t16\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t16\nBrand#x\tPROMO BURNISHED BRASS\t9\t16\nBrand#x\tPROMO BURNISHED BRASS\t19\t16\nBrand#x\tPROMO PLATED STEEL\t49\t16\nBrand#x\tSMALL ANODIZED BRASS\t36\t16\nBrand#x\tSMALL BRUSHED BRASS\t3\t16\nBrand#x\tSMALL BRUSHED STEEL\t9\t16\nBrand#x\tSMALL POLISHED BRASS\t14\t16\nBrand#x\tSMALL POLISHED COPPER\t36\t16\nBrand#x\tSMALL POLISHED NICKEL\t19\t16\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t16\nBrand#x\tSTANDARD ANODIZED TIN\t3\t16\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t16\nBrand#x\tSTANDARD PLATED NICKEL\t49\t16\nBrand#x\tSTANDARD POLISHED BRASS\t9\t16\nBrand#x\tSTANDARD POLISHED BRASS\t14\t16\nBrand#x\tSTANDARD POLISHED COPPER\t49\t16\nBrand#x\tSTANDARD POLISHED STEEL\t3\t16\nBrand#x\tECONOMY BURNISHED BRASS\t14\t16\nBrand#x\tECONOMY POLISHED STEEL\t36\t16\nBrand#x\tLARGE BRUSHED BRASS\t23\t16\nBrand#x\tLARGE PLATED BRASS\t36\t16\nBrand#x\tLARGE PLATED TIN\t3\t16\nBrand#x\tLARGE POLISHED COPPER\t14\t16\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t16\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t16\nBrand#x\tMEDIUM PLATED NICKEL\t23\t16\nBrand#x\tPROMO BRUSHED NICKEL\t45\t16\nBrand#x\tPROMO POLISHED TIN\t3\t16\nBrand#x\tSMALL ANODIZED NICKEL\t14\t16\nBrand#x\tSMALL BURNISHED TIN\t3\t16\nBrand#x\tSMALL POLISHED NICKEL\t36\t16\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t16\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t16\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t16\nBrand#x\tSTANDARD POLISHED COPPER\t23\t16\nBrand#x\tECONOMY ANODIZED COPPER\t36\t16\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t16\nBrand#x\tECONOMY BURNISHED TIN\t9\t16\nBrand#x\tECONOMY PLATED STEEL\t14\t16\nBrand#x\tLARGE ANODIZED 
BRASS\t9\t16\nBrand#x\tLARGE ANODIZED COPPER\t49\t16\nBrand#x\tLARGE ANODIZED NICKEL\t9\t16\nBrand#x\tLARGE BRUSHED TIN\t49\t16\nBrand#x\tLARGE BURNISHED COPPER\t23\t16\nBrand#x\tLARGE BURNISHED NICKEL\t9\t16\nBrand#x\tLARGE BURNISHED STEEL\t3\t16\nBrand#x\tLARGE PLATED COPPER\t19\t16\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t16\nBrand#x\tMEDIUM PLATED NICKEL\t23\t16\nBrand#x\tPROMO BRUSHED NICKEL\t19\t16\nBrand#x\tSMALL ANODIZED BRASS\t45\t16\nBrand#x\tSMALL BRUSHED TIN\t49\t16\nBrand#x\tECONOMY ANODIZED STEEL\t49\t16\nBrand#x\tECONOMY PLATED STEEL\t3\t16\nBrand#x\tECONOMY PLATED TIN\t3\t16\nBrand#x\tECONOMY POLISHED STEEL\t19\t16\nBrand#x\tECONOMY POLISHED STEEL\t45\t16\nBrand#x\tLARGE ANODIZED BRASS\t36\t16\nBrand#x\tLARGE BURNISHED BRASS\t23\t16\nBrand#x\tLARGE POLISHED BRASS\t36\t16\nBrand#x\tLARGE POLISHED NICKEL\t3\t16\nBrand#x\tMEDIUM BURNISHED TIN\t3\t16\nBrand#x\tMEDIUM PLATED STEEL\t3\t16\nBrand#x\tPROMO PLATED BRASS\t9\t16\nBrand#x\tPROMO PLATED STEEL\t36\t16\nBrand#x\tPROMO POLISHED STEEL\t36\t16\nBrand#x\tPROMO POLISHED TIN\t19\t16\nBrand#x\tSMALL ANODIZED COPPER\t23\t16\nBrand#x\tSMALL ANODIZED STEEL\t45\t16\nBrand#x\tSMALL BRUSHED NICKEL\t45\t16\nBrand#x\tSMALL BURNISHED NICKEL\t36\t16\nBrand#x\tSMALL POLISHED NICKEL\t9\t16\nBrand#x\tSMALL POLISHED STEEL\t45\t16\nBrand#x\tSMALL POLISHED TIN\t14\t16\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t16\nBrand#x\tECONOMY BRUSHED STEEL\t14\t16\nBrand#x\tECONOMY BURNISHED STEEL\t9\t16\nBrand#x\tECONOMY BURNISHED STEEL\t45\t16\nBrand#x\tLARGE ANODIZED TIN\t23\t16\nBrand#x\tLARGE BRUSHED STEEL\t14\t16\nBrand#x\tLARGE BURNISHED NICKEL\t19\t16\nBrand#x\tLARGE PLATED STEEL\t45\t16\nBrand#x\tLARGE POLISHED STEEL\t14\t16\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t16\nBrand#x\tMEDIUM ANODIZED TIN\t19\t16\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t16\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t16\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t16\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t16\nBrand#x\tMEDIUM BURNISHED TIN\t49\t16\nBrand#x\tPROMO ANODIZED NICKEL\t49\t16\nBrand#x\tPROMO ANODIZED STEEL\t49\t16\nBrand#x\tPROMO BURNISHED TIN\t49\t16\nBrand#x\tSMALL ANODIZED BRASS\t23\t16\nBrand#x\tSMALL ANODIZED NICKEL\t19\t16\nBrand#x\tSMALL ANODIZED TIN\t49\t16\nBrand#x\tSMALL PLATED COPPER\t23\t16\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t16\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t16\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t16\nBrand#x\tSTANDARD BRUSHED TIN\t45\t16\nBrand#x\tSTANDARD PLATED TIN\t23\t16\nBrand#x\tECONOMY BRUSHED STEEL\t23\t16\nBrand#x\tECONOMY PLATED TIN\t49\t16\nBrand#x\tECONOMY POLISHED TIN\t14\t16\nBrand#x\tLARGE BRUSHED COPPER\t9\t16\nBrand#x\tLARGE BURNISHED STEEL\t9\t16\nBrand#x\tLARGE PLATED BRASS\t14\t16\nBrand#x\tLARGE PLATED BRASS\t19\t16\nBrand#x\tLARGE PLATED NICKEL\t45\t16\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t16\nBrand#x\tPROMO BRUSHED BRASS\t36\t16\nBrand#x\tPROMO BRUSHED STEEL\t49\t16\nBrand#x\tPROMO PLATED BRASS\t45\t16\nBrand#x\tSMALL BURNISHED COPPER\t19\t16\nBrand#x\tSMALL BURNISHED TIN\t23\t16\nBrand#x\tSMALL BURNISHED TIN\t45\t16\nBrand#x\tSMALL PLATED COPPER\t23\t16\nBrand#x\tSMALL POLISHED STEEL\t19\t16\nBrand#x\tSTANDARD ANODIZED TIN\t45\t16\nBrand#x\tSTANDARD PLATED BRASS\t3\t16\nBrand#x\tECONOMY ANODIZED BRASS\t45\t16\nBrand#x\tECONOMY BRUSHED TIN\t45\t16\nBrand#x\tECONOMY PLATED COPPER\t23\t16\nBrand#x\tECONOMY PLATED STEEL\t3\t16\nBrand#x\tLARGE BRUSHED BRASS\t9\t16\nBrand#x\tLARGE PLATED BRASS\t49\t16\nBrand#x\tLARGE PLATED STEEL\t14\t16\nBrand#x\tLARGE POLISHED TIN\t19\t16\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t16\nBrand#x\tMEDIUM ANODIZED 
TIN\t49\t16\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t16\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t16\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t16\nBrand#x\tMEDIUM PLATED BRASS\t9\t16\nBrand#x\tMEDIUM PLATED STEEL\t49\t16\nBrand#x\tPROMO BURNISHED TIN\t3\t16\nBrand#x\tSMALL ANODIZED COPPER\t9\t16\nBrand#x\tSMALL ANODIZED STEEL\t14\t16\nBrand#x\tSMALL BRUSHED STEEL\t19\t16\nBrand#x\tSMALL BRUSHED TIN\t14\t16\nBrand#x\tSMALL BURNISHED STEEL\t23\t16\nBrand#x\tSMALL PLATED STEEL\t19\t16\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t16\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t16\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t16\nBrand#x\tSTANDARD PLATED BRASS\t49\t16\nBrand#x\tSTANDARD PLATED NICKEL\t45\t16\nBrand#x\tSTANDARD PLATED STEEL\t36\t16\nBrand#x\tECONOMY ANODIZED STEEL\t9\t16\nBrand#x\tECONOMY BRUSHED STEEL\t23\t16\nBrand#x\tECONOMY PLATED STEEL\t9\t16\nBrand#x\tLARGE BURNISHED COPPER\t14\t16\nBrand#x\tLARGE PLATED BRASS\t3\t16\nBrand#x\tLARGE PLATED BRASS\t36\t16\nBrand#x\tLARGE PLATED BRASS\t49\t16\nBrand#x\tLARGE POLISHED BRASS\t3\t16\nBrand#x\tLARGE POLISHED NICKEL\t19\t16\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t16\nBrand#x\tMEDIUM ANODIZED TIN\t9\t16\nBrand#x\tMEDIUM PLATED BRASS\t14\t16\nBrand#x\tPROMO BURNISHED NICKEL\t14\t16\nBrand#x\tPROMO BURNISHED TIN\t9\t16\nBrand#x\tPROMO PLATED NICKEL\t14\t16\nBrand#x\tSMALL ANODIZED COPPER\t45\t16\nBrand#x\tSMALL BURNISHED COPPER\t36\t16\nBrand#x\tSMALL BURNISHED TIN\t9\t16\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t16\nBrand#x\tSTANDARD BURNISHED TIN\t9\t16\nBrand#x\tSTANDARD PLATED BRASS\t36\t16\nBrand#x\tSTANDARD PLATED STEEL\t45\t16\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t16\nBrand#x\tECONOMY BURNISHED COPPER\t9\t16\nBrand#x\tECONOMY BURNISHED STEEL\t14\t16\nBrand#x\tLARGE ANODIZED BRASS\t23\t16\nBrand#x\tLARGE BRUSHED BRASS\t14\t16\nBrand#x\tLARGE BURNISHED TIN\t23\t16\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t16\nBrand#x\tPROMO BRUSHED STEEL\t36\t16\nBrand#x\tPROMO PLATED COPPER\t14\t16\nBrand#x\tSMALL PLATED COPPER\t3\t16\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t16\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t16\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t16\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t16\nBrand#x\tECONOMY ANODIZED BRASS\t19\t16\nBrand#x\tLARGE BRUSHED COPPER\t14\t16\nBrand#x\tLARGE BRUSHED NICKEL\t45\t16\nBrand#x\tLARGE BURNISHED COPPER\t36\t16\nBrand#x\tLARGE PLATED COPPER\t36\t16\nBrand#x\tLARGE PLATED STEEL\t36\t16\nBrand#x\tLARGE PLATED TIN\t14\t16\nBrand#x\tLARGE POLISHED BRASS\t14\t16\nBrand#x\tLARGE POLISHED STEEL\t49\t16\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t16\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t16\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t16\nBrand#x\tPROMO ANODIZED COPPER\t36\t16\nBrand#x\tPROMO ANODIZED NICKEL\t3\t16\nBrand#x\tPROMO BURNISHED STEEL\t9\t16\nBrand#x\tPROMO PLATED COPPER\t3\t16\nBrand#x\tSMALL ANODIZED TIN\t9\t16\nBrand#x\tSTANDARD PLATED BRASS\t23\t16\nBrand#x\tECONOMY BRUSHED BRASS\t45\t16\nBrand#x\tECONOMY BRUSHED COPPER\t14\t16\nBrand#x\tLARGE ANODIZED NICKEL\t49\t16\nBrand#x\tLARGE BURNISHED BRASS\t49\t16\nBrand#x\tLARGE BURNISHED COPPER\t19\t16\nBrand#x\tLARGE POLISHED NICKEL\t36\t16\nBrand#x\tPROMO BURNISHED TIN\t19\t16\nBrand#x\tPROMO PLATED BRASS\t49\t16\nBrand#x\tPROMO POLISHED TIN\t23\t16\nBrand#x\tSMALL ANODIZED COPPER\t14\t16\nBrand#x\tSMALL BRUSHED COPPER\t9\t16\nBrand#x\tSMALL PLATED NICKEL\t9\t16\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t16\nBrand#x\tSTANDARD ANODIZED TIN\t14\t16\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t16\nBrand#x\tSTANDARD PLATED COPPER\t23\t16\nBrand#x\tSTANDARD PLATED COPPER\t45\t16\nBrand#x\tSTANDARD POLISHED 
BRASS\t19\t16\nBrand#x\tSTANDARD POLISHED STEEL\t14\t16\nBrand#x\tECONOMY BRUSHED TIN\t36\t16\nBrand#x\tECONOMY POLISHED TIN\t14\t16\nBrand#x\tLARGE PLATED BRASS\t9\t16\nBrand#x\tLARGE POLISHED STEEL\t9\t16\nBrand#x\tMEDIUM BURNISHED TIN\t36\t16\nBrand#x\tPROMO ANODIZED BRASS\t14\t16\nBrand#x\tPROMO ANODIZED COPPER\t14\t16\nBrand#x\tSMALL BURNISHED STEEL\t9\t16\nBrand#x\tSTANDARD POLISHED COPPER\t19\t16\nBrand#x\tPROMO POLISHED COPPER\t36\t15\nBrand#x\tPROMO POLISHED STEEL\t9\t15\nBrand#x\tLARGE BURNISHED BRASS\t23\t15\nBrand#x\tPROMO ANODIZED BRASS\t49\t15\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t12\nBrand#x\tECONOMY ANODIZED STEEL\t36\t12\nBrand#x\tECONOMY ANODIZED TIN\t14\t12\nBrand#x\tECONOMY BRUSHED COPPER\t14\t12\nBrand#x\tECONOMY BURNISHED BRASS\t36\t12\nBrand#x\tECONOMY BURNISHED COPPER\t3\t12\nBrand#x\tECONOMY BURNISHED COPPER\t49\t12\nBrand#x\tECONOMY PLATED COPPER\t3\t12\nBrand#x\tECONOMY PLATED COPPER\t19\t12\nBrand#x\tECONOMY PLATED NICKEL\t14\t12\nBrand#x\tECONOMY POLISHED COPPER\t14\t12\nBrand#x\tECONOMY POLISHED TIN\t23\t12\nBrand#x\tLARGE ANODIZED NICKEL\t9\t12\nBrand#x\tLARGE ANODIZED STEEL\t23\t12\nBrand#x\tLARGE ANODIZED TIN\t36\t12\nBrand#x\tLARGE BRUSHED BRASS\t19\t12\nBrand#x\tLARGE BRUSHED STEEL\t19\t12\nBrand#x\tLARGE BRUSHED STEEL\t36\t12\nBrand#x\tLARGE BURNISHED BRASS\t3\t12\nBrand#x\tLARGE PLATED TIN\t19\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t12\nBrand#x\tMEDIUM PLATED BRASS\t14\t12\nBrand#x\tMEDIUM PLATED COPPER\t3\t12\nBrand#x\tMEDIUM PLATED STEEL\t14\t12\nBrand#x\tPROMO ANODIZED BRASS\t45\t12\nBrand#x\tPROMO BRUSHED NICKEL\t9\t12\nBrand#x\tPROMO BRUSHED STEEL\t45\t12\nBrand#x\tPROMO BURNISHED BRASS\t23\t12\nBrand#x\tPROMO BURNISHED COPPER\t23\t12\nBrand#x\tPROMO BURNISHED NICKEL\t36\t12\nBrand#x\tPROMO PLATED BRASS\t14\t12\nBrand#x\tPROMO PLATED COPPER\t14\t12\nBrand#x\tPROMO PLATED STEEL\t49\t12\nBrand#x\tPROMO PLATED TIN\t3\t12\nBrand#x\tPROMO POLISHED COPPER\t14\t12\nBrand#x\tPROMO POLISHED NICKEL\t3\t12\nBrand#x\tPROMO POLISHED STEEL\t3\t12\nBrand#x\tPROMO POLISHED STEEL\t23\t12\nBrand#x\tPROMO POLISHED TIN\t14\t12\nBrand#x\tSMALL ANODIZED BRASS\t49\t12\nBrand#x\tSMALL ANODIZED COPPER\t49\t12\nBrand#x\tSMALL ANODIZED NICKEL\t9\t12\nBrand#x\tSMALL ANODIZED STEEL\t45\t12\nBrand#x\tSMALL BURNISHED BRASS\t19\t12\nBrand#x\tSMALL BURNISHED BRASS\t49\t12\nBrand#x\tSMALL BURNISHED NICKEL\t9\t12\nBrand#x\tSMALL BURNISHED NICKEL\t49\t12\nBrand#x\tSMALL PLATED COPPER\t45\t12\nBrand#x\tSMALL PLATED NICKEL\t45\t12\nBrand#x\tSMALL PLATED TIN\t36\t12\nBrand#x\tSMALL POLISHED BRASS\t14\t12\nBrand#x\tSMALL POLISHED BRASS\t19\t12\nBrand#x\tSMALL POLISHED STEEL\t3\t12\nBrand#x\tSMALL POLISHED STEEL\t36\t12\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t12\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t12\nBrand#x\tSTANDARD PLATED STEEL\t19\t12\nBrand#x\tSTANDARD PLATED TIN\t45\t12\nBrand#x\tSTANDARD POLISHED STEEL\t9\t12\nBrand#x\tSTANDARD POLISHED STEEL\t19\t12\nBrand#x\tSTANDARD POLISHED TIN\t14\t12\nBrand#x\tECONOMY ANODIZED BRASS\t49\t12\nBrand#x\tECONOMY ANODIZED COPPER\t14\t12\nBrand#x\tECONOMY ANODIZED 
NICKEL\t19\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t12\nBrand#x\tECONOMY BRUSHED BRASS\t23\t12\nBrand#x\tECONOMY BRUSHED STEEL\t9\t12\nBrand#x\tECONOMY BRUSHED TIN\t3\t12\nBrand#x\tECONOMY BRUSHED TIN\t19\t12\nBrand#x\tECONOMY BURNISHED BRASS\t19\t12\nBrand#x\tECONOMY BURNISHED COPPER\t49\t12\nBrand#x\tECONOMY BURNISHED STEEL\t9\t12\nBrand#x\tECONOMY BURNISHED STEEL\t36\t12\nBrand#x\tECONOMY PLATED BRASS\t3\t12\nBrand#x\tECONOMY PLATED NICKEL\t9\t12\nBrand#x\tECONOMY PLATED TIN\t45\t12\nBrand#x\tECONOMY POLISHED NICKEL\t45\t12\nBrand#x\tECONOMY POLISHED STEEL\t9\t12\nBrand#x\tECONOMY POLISHED STEEL\t19\t12\nBrand#x\tECONOMY POLISHED TIN\t14\t12\nBrand#x\tLARGE ANODIZED COPPER\t19\t12\nBrand#x\tLARGE ANODIZED NICKEL\t49\t12\nBrand#x\tLARGE ANODIZED TIN\t49\t12\nBrand#x\tLARGE BRUSHED BRASS\t9\t12\nBrand#x\tLARGE BRUSHED BRASS\t23\t12\nBrand#x\tLARGE BRUSHED BRASS\t49\t12\nBrand#x\tLARGE BURNISHED NICKEL\t45\t12\nBrand#x\tLARGE PLATED BRASS\t3\t12\nBrand#x\tLARGE POLISHED BRASS\t23\t12\nBrand#x\tLARGE POLISHED COPPER\t19\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t12\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t12\nBrand#x\tMEDIUM BRUSHED TIN\t14\t12\nBrand#x\tMEDIUM BRUSHED TIN\t36\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t12\nBrand#x\tMEDIUM PLATED BRASS\t23\t12\nBrand#x\tMEDIUM PLATED NICKEL\t45\t12\nBrand#x\tMEDIUM PLATED STEEL\t19\t12\nBrand#x\tMEDIUM PLATED TIN\t23\t12\nBrand#x\tPROMO BRUSHED COPPER\t36\t12\nBrand#x\tPROMO BRUSHED STEEL\t19\t12\nBrand#x\tPROMO BRUSHED STEEL\t45\t12\nBrand#x\tPROMO PLATED COPPER\t14\t12\nBrand#x\tPROMO PLATED STEEL\t19\t12\nBrand#x\tPROMO POLISHED COPPER\t45\t12\nBrand#x\tPROMO POLISHED STEEL\t45\t12\nBrand#x\tPROMO POLISHED TIN\t3\t12\nBrand#x\tPROMO POLISHED TIN\t14\t12\nBrand#x\tSMALL ANODIZED BRASS\t9\t12\nBrand#x\tSMALL ANODIZED STEEL\t14\t12\nBrand#x\tSMALL BRUSHED BRASS\t36\t12\nBrand#x\tSMALL BRUSHED NICKEL\t3\t12\nBrand#x\tSMALL BRUSHED NICKEL\t9\t12\nBrand#x\tSMALL BURNISHED BRASS\t14\t12\nBrand#x\tSMALL BURNISHED BRASS\t23\t12\nBrand#x\tSMALL BURNISHED TIN\t14\t12\nBrand#x\tSMALL POLISHED NICKEL\t23\t12\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t12\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t12\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t12\nBrand#x\tSTANDARD BRUSHED TIN\t45\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t12\nBrand#x\tSTANDARD BURNISHED TIN\t3\t12\nBrand#x\tSTANDARD PLATED COPPER\t49\t12\nBrand#x\tSTANDARD PLATED NICKEL\t19\t12\nBrand#x\tSTANDARD PLATED NICKEL\t45\t12\nBrand#x\tSTANDARD PLATED STEEL\t19\t12\nBrand#x\tSTANDARD PLATED STEEL\t36\t12\nBrand#x\tSTANDARD POLISHED BRASS\t45\t12\nBrand#x\tECONOMY ANODIZED BRASS\t36\t12\nBrand#x\tECONOMY ANODIZED BRASS\t45\t12\nBrand#x\tECONOMY ANODIZED COPPER\t14\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t12\nBrand#x\tECONOMY ANODIZED TIN\t23\t12\nBrand#x\tECONOMY BRUSHED BRASS\t45\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t12\nBrand#x\tECONOMY BURNISHED BRASS\t3\t12\nBrand#x\tECONOMY BURNISHED COPPER\t19\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t12\nBrand#x\tECONOMY PLATED COPPER\t49\t12\nBrand#x\tECONOMY PLATED NICKEL\t3\t12\nBrand#x\tECONOMY PLATED NICKEL\t19\t12\nBrand#x\tECONOMY PLATED 
STEEL\t23\t12\nBrand#x\tECONOMY POLISHED STEEL\t19\t12\nBrand#x\tECONOMY POLISHED STEEL\t36\t12\nBrand#x\tLARGE ANODIZED BRASS\t49\t12\nBrand#x\tLARGE ANODIZED TIN\t9\t12\nBrand#x\tLARGE ANODIZED TIN\t19\t12\nBrand#x\tLARGE BRUSHED BRASS\t3\t12\nBrand#x\tLARGE BRUSHED COPPER\t9\t12\nBrand#x\tLARGE BRUSHED NICKEL\t3\t12\nBrand#x\tLARGE BURNISHED COPPER\t45\t12\nBrand#x\tLARGE PLATED COPPER\t23\t12\nBrand#x\tLARGE PLATED COPPER\t36\t12\nBrand#x\tLARGE PLATED NICKEL\t23\t12\nBrand#x\tLARGE PLATED NICKEL\t49\t12\nBrand#x\tLARGE PLATED STEEL\t14\t12\nBrand#x\tLARGE PLATED TIN\t9\t12\nBrand#x\tLARGE POLISHED BRASS\t49\t12\nBrand#x\tLARGE POLISHED STEEL\t9\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t9\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t12\nBrand#x\tMEDIUM BRUSHED TIN\t19\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t12\nBrand#x\tMEDIUM PLATED BRASS\t9\t12\nBrand#x\tPROMO ANODIZED COPPER\t45\t12\nBrand#x\tPROMO BRUSHED NICKEL\t23\t12\nBrand#x\tPROMO BRUSHED STEEL\t45\t12\nBrand#x\tPROMO BRUSHED TIN\t3\t12\nBrand#x\tPROMO BURNISHED BRASS\t19\t12\nBrand#x\tPROMO BURNISHED COPPER\t19\t12\nBrand#x\tPROMO BURNISHED NICKEL\t3\t12\nBrand#x\tPROMO BURNISHED NICKEL\t49\t12\nBrand#x\tPROMO PLATED COPPER\t3\t12\nBrand#x\tPROMO PLATED NICKEL\t3\t12\nBrand#x\tPROMO PLATED STEEL\t45\t12\nBrand#x\tPROMO POLISHED NICKEL\t3\t12\nBrand#x\tPROMO POLISHED STEEL\t14\t12\nBrand#x\tSMALL ANODIZED BRASS\t49\t12\nBrand#x\tSMALL ANODIZED COPPER\t36\t12\nBrand#x\tSMALL ANODIZED TIN\t9\t12\nBrand#x\tSMALL ANODIZED TIN\t23\t12\nBrand#x\tSMALL BRUSHED COPPER\t14\t12\nBrand#x\tSMALL BRUSHED COPPER\t45\t12\nBrand#x\tSMALL BURNISHED NICKEL\t3\t12\nBrand#x\tSMALL PLATED BRASS\t45\t12\nBrand#x\tSMALL PLATED NICKEL\t45\t12\nBrand#x\tSMALL PLATED TIN\t14\t12\nBrand#x\tSMALL POLISHED BRASS\t49\t12\nBrand#x\tSMALL POLISHED NICKEL\t19\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t12\nBrand#x\tSTANDARD BURNISHED TIN\t23\t12\nBrand#x\tSTANDARD PLATED BRASS\t14\t12\nBrand#x\tSTANDARD PLATED COPPER\t45\t12\nBrand#x\tSTANDARD PLATED NICKEL\t45\t12\nBrand#x\tSTANDARD PLATED STEEL\t9\t12\nBrand#x\tSTANDARD POLISHED BRASS\t19\t12\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t12\nBrand#x\tECONOMY ANODIZED COPPER\t9\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t12\nBrand#x\tECONOMY ANODIZED STEEL\t45\t12\nBrand#x\tECONOMY BRUSHED BRASS\t23\t12\nBrand#x\tECONOMY BRUSHED COPPER\t19\t12\nBrand#x\tECONOMY BRUSHED COPPER\t45\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t12\nBrand#x\tECONOMY BRUSHED TIN\t14\t12\nBrand#x\tECONOMY BURNISHED COPPER\t9\t12\nBrand#x\tECONOMY BURNISHED COPPER\t23\t12\nBrand#x\tECONOMY BURNISHED STEEL\t9\t12\nBrand#x\tECONOMY BURNISHED STEEL\t14\t12\nBrand#x\tECONOMY PLATED BRASS\t9\t12\nBrand#x\tECONOMY POLISHED BRASS\t19\t12\nBrand#x\tECONOMY POLISHED COPPER\t23\t12\nBrand#x\tECONOMY POLISHED STEEL\t45\t12\nBrand#x\tLARGE ANODIZED COPPER\t49\t12\nBrand#x\tLARGE ANODIZED NICKEL\t23\t12\nBrand#x\tLARGE ANODIZED NICKEL\t45\t12\nBrand#x\tLARGE ANODIZED STEEL\t9\t12\nBrand#x\tLARGE BRUSHED COPPER\t14\t12\nBrand#x\tLARGE 
BRUSHED TIN\t3\t12\nBrand#x\tLARGE BRUSHED TIN\t45\t12\nBrand#x\tLARGE BURNISHED COPPER\t49\t12\nBrand#x\tLARGE PLATED BRASS\t19\t12\nBrand#x\tLARGE PLATED COPPER\t3\t12\nBrand#x\tLARGE PLATED NICKEL\t36\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t9\t12\nBrand#x\tMEDIUM BRUSHED TIN\t19\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t12\nBrand#x\tMEDIUM PLATED COPPER\t36\t12\nBrand#x\tMEDIUM PLATED TIN\t49\t12\nBrand#x\tPROMO ANODIZED NICKEL\t36\t12\nBrand#x\tPROMO BRUSHED COPPER\t14\t12\nBrand#x\tPROMO BURNISHED NICKEL\t14\t12\nBrand#x\tPROMO PLATED COPPER\t45\t12\nBrand#x\tPROMO PLATED NICKEL\t36\t12\nBrand#x\tPROMO PLATED STEEL\t9\t12\nBrand#x\tPROMO PLATED TIN\t19\t12\nBrand#x\tPROMO PLATED TIN\t45\t12\nBrand#x\tPROMO PLATED TIN\t49\t12\nBrand#x\tPROMO POLISHED BRASS\t9\t12\nBrand#x\tPROMO POLISHED COPPER\t14\t12\nBrand#x\tPROMO POLISHED NICKEL\t9\t12\nBrand#x\tSMALL ANODIZED NICKEL\t45\t12\nBrand#x\tSMALL ANODIZED TIN\t45\t12\nBrand#x\tSMALL BRUSHED NICKEL\t19\t12\nBrand#x\tSMALL BRUSHED TIN\t19\t12\nBrand#x\tSMALL BURNISHED STEEL\t9\t12\nBrand#x\tSMALL BURNISHED STEEL\t36\t12\nBrand#x\tSMALL PLATED BRASS\t23\t12\nBrand#x\tSMALL PLATED COPPER\t9\t12\nBrand#x\tSMALL PLATED STEEL\t23\t12\nBrand#x\tSMALL POLISHED BRASS\t3\t12\nBrand#x\tSMALL POLISHED BRASS\t9\t12\nBrand#x\tSMALL POLISHED COPPER\t36\t12\nBrand#x\tSMALL POLISHED NICKEL\t49\t12\nBrand#x\tSMALL POLISHED STEEL\t14\t12\nBrand#x\tSMALL POLISHED TIN\t49\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t12\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t12\nBrand#x\tSTANDARD PLATED NICKEL\t49\t12\nBrand#x\tSTANDARD POLISHED COPPER\t36\t12\nBrand#x\tSTANDARD POLISHED COPPER\t45\t12\nBrand#x\tECONOMY ANODIZED TIN\t19\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t12\nBrand#x\tECONOMY BURNISHED STEEL\t19\t12\nBrand#x\tECONOMY PLATED NICKEL\t9\t12\nBrand#x\tECONOMY PLATED STEEL\t3\t12\nBrand#x\tECONOMY PLATED STEEL\t19\t12\nBrand#x\tECONOMY PLATED TIN\t9\t12\nBrand#x\tECONOMY POLISHED COPPER\t36\t12\nBrand#x\tECONOMY POLISHED NICKEL\t45\t12\nBrand#x\tLARGE ANODIZED BRASS\t19\t12\nBrand#x\tLARGE ANODIZED STEEL\t14\t12\nBrand#x\tLARGE ANODIZED TIN\t23\t12\nBrand#x\tLARGE BRUSHED BRASS\t19\t12\nBrand#x\tLARGE BRUSHED BRASS\t49\t12\nBrand#x\tLARGE BURNISHED BRASS\t3\t12\nBrand#x\tLARGE BURNISHED BRASS\t23\t12\nBrand#x\tLARGE BURNISHED COPPER\t9\t12\nBrand#x\tLARGE BURNISHED COPPER\t49\t12\nBrand#x\tLARGE BURNISHED STEEL\t9\t12\nBrand#x\tLARGE PLATED BRASS\t9\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t12\nBrand#x\tMEDIUM BRUSHED TIN\t14\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t12\nBrand#x\tMEDIUM PLATED TIN\t9\t12\nBrand#x\tMEDIUM PLATED TIN\t45\t12\nBrand#x\tPROMO BRUSHED BRASS\t36\t12\nBrand#x\tPROMO BRUSHED STEEL\t9\t12\nBrand#x\tPROMO BURNISHED NICKEL\t9\t12\nBrand#x\tPROMO PLATED COPPER\t36\t12\nBrand#x\tPROMO POLISHED BRASS\t14\t12\nBrand#x\tPROMO POLISHED COPPER\t9\t12\nBrand#x\tPROMO POLISHED NICKEL\t36\t12\nBrand#x\tPROMO POLISHED TIN\t49\t12\nBrand#x\tSMALL ANODIZED STEEL\t45\t12\nBrand#x\tSMALL BRUSHED BRASS\t45\t12\nBrand#x\tSMALL BRUSHED COPPER\t14\t12\nBrand#x\tSMALL BRUSHED COPPER\t19\t12\nBrand#x\tSMALL BRUSHED NICKEL\t36\t12\nBrand#x\tSMALL BURNISHED 
BRASS\t3\t12\nBrand#x\tSMALL PLATED COPPER\t19\t12\nBrand#x\tSMALL PLATED COPPER\t23\t12\nBrand#x\tSMALL PLATED NICKEL\t19\t12\nBrand#x\tSMALL POLISHED BRASS\t45\t12\nBrand#x\tSMALL POLISHED NICKEL\t19\t12\nBrand#x\tSMALL POLISHED NICKEL\t23\t12\nBrand#x\tSMALL POLISHED TIN\t3\t12\nBrand#x\tSMALL POLISHED TIN\t49\t12\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t12\nBrand#x\tSTANDARD ANODIZED TIN\t36\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t49\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t12\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t12\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t12\nBrand#x\tSTANDARD BURNISHED TIN\t49\t12\nBrand#x\tSTANDARD PLATED COPPER\t14\t12\nBrand#x\tSTANDARD PLATED STEEL\t3\t12\nBrand#x\tSTANDARD PLATED TIN\t9\t12\nBrand#x\tSTANDARD PLATED TIN\t45\t12\nBrand#x\tSTANDARD POLISHED TIN\t14\t12\nBrand#x\tECONOMY ANODIZED STEEL\t19\t12\nBrand#x\tECONOMY BRUSHED COPPER\t14\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t12\nBrand#x\tECONOMY BRUSHED STEEL\t45\t12\nBrand#x\tECONOMY BRUSHED TIN\t19\t12\nBrand#x\tECONOMY BURNISHED BRASS\t19\t12\nBrand#x\tECONOMY BURNISHED COPPER\t45\t12\nBrand#x\tECONOMY BURNISHED STEEL\t9\t12\nBrand#x\tECONOMY BURNISHED STEEL\t14\t12\nBrand#x\tECONOMY BURNISHED TIN\t49\t12\nBrand#x\tECONOMY PLATED BRASS\t49\t12\nBrand#x\tECONOMY PLATED COPPER\t14\t12\nBrand#x\tECONOMY PLATED NICKEL\t3\t12\nBrand#x\tECONOMY PLATED STEEL\t9\t12\nBrand#x\tECONOMY PLATED TIN\t19\t12\nBrand#x\tECONOMY PLATED TIN\t23\t12\nBrand#x\tECONOMY POLISHED BRASS\t9\t12\nBrand#x\tECONOMY POLISHED STEEL\t14\t12\nBrand#x\tLARGE ANODIZED COPPER\t3\t12\nBrand#x\tLARGE ANODIZED TIN\t3\t12\nBrand#x\tLARGE ANODIZED TIN\t14\t12\nBrand#x\tLARGE ANODIZED TIN\t45\t12\nBrand#x\tLARGE BRUSHED COPPER\t23\t12\nBrand#x\tLARGE BRUSHED NICKEL\t36\t12\nBrand#x\tLARGE BRUSHED STEEL\t23\t12\nBrand#x\tLARGE BRUSHED TIN\t45\t12\nBrand#x\tLARGE BRUSHED TIN\t49\t12\nBrand#x\tLARGE BURNISHED BRASS\t14\t12\nBrand#x\tLARGE BURNISHED NICKEL\t14\t12\nBrand#x\tLARGE BURNISHED STEEL\t19\t12\nBrand#x\tLARGE PLATED BRASS\t14\t12\nBrand#x\tLARGE PLATED COPPER\t19\t12\nBrand#x\tLARGE PLATED COPPER\t49\t12\nBrand#x\tLARGE POLISHED COPPER\t14\t12\nBrand#x\tLARGE POLISHED STEEL\t45\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t12\nBrand#x\tMEDIUM BURNISHED TIN\t9\t12\nBrand#x\tMEDIUM PLATED BRASS\t36\t12\nBrand#x\tMEDIUM PLATED NICKEL\t36\t12\nBrand#x\tMEDIUM PLATED STEEL\t36\t12\nBrand#x\tMEDIUM PLATED TIN\t9\t12\nBrand#x\tPROMO ANODIZED BRASS\t9\t12\nBrand#x\tPROMO ANODIZED COPPER\t9\t12\nBrand#x\tPROMO ANODIZED NICKEL\t19\t12\nBrand#x\tPROMO ANODIZED STEEL\t36\t12\nBrand#x\tPROMO ANODIZED TIN\t45\t12\nBrand#x\tPROMO BRUSHED NICKEL\t9\t12\nBrand#x\tPROMO BRUSHED STEEL\t14\t12\nBrand#x\tPROMO BRUSHED STEEL\t19\t12\nBrand#x\tPROMO BRUSHED STEEL\t45\t12\nBrand#x\tPROMO BRUSHED TIN\t14\t12\nBrand#x\tPROMO BURNISHED COPPER\t3\t12\nBrand#x\tPROMO BURNISHED STEEL\t14\t12\nBrand#x\tPROMO PLATED BRASS\t36\t12\nBrand#x\tPROMO PLATED COPPER\t49\t12\nBrand#x\tPROMO PLATED TIN\t45\t12\nBrand#x\tPROMO POLISHED COPPER\t9\t12\nBrand#x\tPROMO POLISHED COPPER\t19\t12\nBrand#x\tPROMO POLISHED NICKEL\t23\t12\nBrand#x\tPROMO POLISHED STEEL\t3\t12\nBrand#x\tPROMO POLISHED STEEL\t9\t12\nBrand#x\tPROMO POLISHED TIN\t9\t12\nBrand#x\tPROMO 
POLISHED TIN\t14\t12\nBrand#x\tPROMO POLISHED TIN\t19\t12\nBrand#x\tSMALL BRUSHED NICKEL\t9\t12\nBrand#x\tSMALL BRUSHED NICKEL\t45\t12\nBrand#x\tSMALL BRUSHED STEEL\t3\t12\nBrand#x\tSMALL BRUSHED STEEL\t9\t12\nBrand#x\tSMALL BRUSHED TIN\t14\t12\nBrand#x\tSMALL PLATED BRASS\t36\t12\nBrand#x\tSMALL PLATED COPPER\t14\t12\nBrand#x\tSMALL PLATED COPPER\t23\t12\nBrand#x\tSMALL POLISHED NICKEL\t9\t12\nBrand#x\tSMALL POLISHED STEEL\t3\t12\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t12\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t12\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t12\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t12\nBrand#x\tSTANDARD PLATED COPPER\t45\t12\nBrand#x\tSTANDARD PLATED NICKEL\t49\t12\nBrand#x\tSTANDARD PLATED STEEL\t36\t12\nBrand#x\tSTANDARD PLATED TIN\t9\t12\nBrand#x\tSTANDARD POLISHED COPPER\t49\t12\nBrand#x\tECONOMY ANODIZED COPPER\t36\t12\nBrand#x\tECONOMY ANODIZED COPPER\t45\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t12\nBrand#x\tECONOMY ANODIZED STEEL\t45\t12\nBrand#x\tECONOMY ANODIZED TIN\t49\t12\nBrand#x\tECONOMY BRUSHED STEEL\t45\t12\nBrand#x\tECONOMY BRUSHED TIN\t49\t12\nBrand#x\tECONOMY BURNISHED BRASS\t19\t12\nBrand#x\tECONOMY BURNISHED BRASS\t23\t12\nBrand#x\tECONOMY BURNISHED BRASS\t45\t12\nBrand#x\tECONOMY BURNISHED COPPER\t3\t12\nBrand#x\tECONOMY BURNISHED COPPER\t9\t12\nBrand#x\tECONOMY BURNISHED COPPER\t49\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t12\nBrand#x\tECONOMY BURNISHED STEEL\t23\t12\nBrand#x\tECONOMY BURNISHED STEEL\t45\t12\nBrand#x\tECONOMY BURNISHED STEEL\t49\t12\nBrand#x\tECONOMY BURNISHED TIN\t9\t12\nBrand#x\tECONOMY BURNISHED TIN\t19\t12\nBrand#x\tECONOMY PLATED BRASS\t36\t12\nBrand#x\tECONOMY PLATED COPPER\t3\t12\nBrand#x\tECONOMY PLATED STEEL\t23\t12\nBrand#x\tECONOMY POLISHED COPPER\t14\t12\nBrand#x\tECONOMY POLISHED TIN\t49\t12\nBrand#x\tLARGE ANODIZED NICKEL\t14\t12\nBrand#x\tLARGE ANODIZED TIN\t14\t12\nBrand#x\tLARGE BRUSHED BRASS\t9\t12\nBrand#x\tLARGE BRUSHED BRASS\t49\t12\nBrand#x\tLARGE BRUSHED COPPER\t14\t12\nBrand#x\tLARGE BRUSHED STEEL\t19\t12\nBrand#x\tLARGE BRUSHED TIN\t23\t12\nBrand#x\tLARGE BURNISHED BRASS\t14\t12\nBrand#x\tLARGE BURNISHED TIN\t36\t12\nBrand#x\tLARGE PLATED STEEL\t9\t12\nBrand#x\tLARGE PLATED TIN\t49\t12\nBrand#x\tLARGE POLISHED COPPER\t23\t12\nBrand#x\tLARGE POLISHED NICKEL\t19\t12\nBrand#x\tLARGE POLISHED NICKEL\t23\t12\nBrand#x\tLARGE POLISHED STEEL\t3\t12\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t12\nBrand#x\tMEDIUM PLATED BRASS\t36\t12\nBrand#x\tMEDIUM PLATED NICKEL\t14\t12\nBrand#x\tPROMO ANODIZED COPPER\t45\t12\nBrand#x\tPROMO ANODIZED STEEL\t36\t12\nBrand#x\tPROMO BURNISHED BRASS\t3\t12\nBrand#x\tPROMO BURNISHED BRASS\t23\t12\nBrand#x\tPROMO BURNISHED STEEL\t3\t12\nBrand#x\tPROMO PLATED BRASS\t14\t12\nBrand#x\tPROMO POLISHED BRASS\t14\t12\nBrand#x\tPROMO POLISHED COPPER\t3\t12\nBrand#x\tPROMO POLISHED COPPER\t23\t12\nBrand#x\tPROMO POLISHED NICKEL\t19\t12\nBrand#x\tPROMO POLISHED NICKEL\t36\t12\nBrand#x\tPROMO POLISHED STEEL\t36\t12\nBrand#x\tSMALL ANODIZED COPPER\t9\t12\nBrand#x\tSMALL ANODIZED STEEL\t19\t12\nBrand#x\tSMALL ANODIZED TIN\t19\t12\nBrand#x\tSMALL ANODIZED TIN\t49\t12\nBrand#x\tSMALL BRUSHED COPPER\t36\t12\nBrand#x\tSMALL BRUSHED TIN\t45\t12\nBrand#x\tSMALL BURNISHED COPPER\t49\t12\nBrand#x\tSMALL BURNISHED NICKEL\t9\t12\nBrand#x\tSMALL PLATED BRASS\t9\t12\nBrand#x\tSMALL 
PLATED COPPER\t3\t12\nBrand#x\tSMALL POLISHED NICKEL\t9\t12\nBrand#x\tSMALL POLISHED NICKEL\t49\t12\nBrand#x\tSMALL POLISHED STEEL\t49\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t12\nBrand#x\tSTANDARD BRUSHED TIN\t19\t12\nBrand#x\tSTANDARD BRUSHED TIN\t49\t12\nBrand#x\tSTANDARD BURNISHED TIN\t14\t12\nBrand#x\tSTANDARD PLATED BRASS\t45\t12\nBrand#x\tSTANDARD PLATED COPPER\t36\t12\nBrand#x\tSTANDARD PLATED NICKEL\t9\t12\nBrand#x\tSTANDARD PLATED STEEL\t36\t12\nBrand#x\tSTANDARD PLATED STEEL\t49\t12\nBrand#x\tSTANDARD PLATED TIN\t3\t12\nBrand#x\tSTANDARD PLATED TIN\t36\t12\nBrand#x\tSTANDARD PLATED TIN\t49\t12\nBrand#x\tSTANDARD POLISHED BRASS\t19\t12\nBrand#x\tSTANDARD POLISHED COPPER\t9\t12\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t12\nBrand#x\tSTANDARD POLISHED STEEL\t9\t12\nBrand#x\tSTANDARD POLISHED TIN\t45\t12\nBrand#x\tECONOMY ANODIZED BRASS\t36\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t12\nBrand#x\tECONOMY ANODIZED STEEL\t49\t12\nBrand#x\tECONOMY BRUSHED COPPER\t3\t12\nBrand#x\tECONOMY BRUSHED COPPER\t49\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t12\nBrand#x\tECONOMY BURNISHED STEEL\t49\t12\nBrand#x\tECONOMY BURNISHED TIN\t3\t12\nBrand#x\tECONOMY PLATED STEEL\t14\t12\nBrand#x\tECONOMY PLATED TIN\t49\t12\nBrand#x\tECONOMY POLISHED COPPER\t23\t12\nBrand#x\tECONOMY POLISHED NICKEL\t36\t12\nBrand#x\tECONOMY POLISHED TIN\t3\t12\nBrand#x\tLARGE ANODIZED TIN\t14\t12\nBrand#x\tLARGE BURNISHED STEEL\t23\t12\nBrand#x\tLARGE BURNISHED TIN\t19\t12\nBrand#x\tLARGE PLATED COPPER\t14\t12\nBrand#x\tLARGE PLATED STEEL\t9\t12\nBrand#x\tLARGE POLISHED BRASS\t19\t12\nBrand#x\tLARGE POLISHED COPPER\t45\t12\nBrand#x\tLARGE POLISHED COPPER\t49\t12\nBrand#x\tLARGE POLISHED TIN\t3\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t9\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t12\nBrand#x\tMEDIUM PLATED COPPER\t19\t12\nBrand#x\tMEDIUM PLATED STEEL\t14\t12\nBrand#x\tPROMO ANODIZED BRASS\t9\t12\nBrand#x\tPROMO ANODIZED BRASS\t19\t12\nBrand#x\tPROMO ANODIZED NICKEL\t3\t12\nBrand#x\tPROMO ANODIZED STEEL\t36\t12\nBrand#x\tPROMO BRUSHED COPPER\t36\t12\nBrand#x\tPROMO BURNISHED BRASS\t9\t12\nBrand#x\tPROMO BURNISHED STEEL\t9\t12\nBrand#x\tPROMO BURNISHED TIN\t3\t12\nBrand#x\tPROMO BURNISHED TIN\t45\t12\nBrand#x\tPROMO PLATED BRASS\t19\t12\nBrand#x\tPROMO PLATED BRASS\t23\t12\nBrand#x\tPROMO PLATED BRASS\t49\t12\nBrand#x\tPROMO PLATED NICKEL\t3\t12\nBrand#x\tPROMO PLATED TIN\t14\t12\nBrand#x\tPROMO POLISHED TIN\t45\t12\nBrand#x\tSMALL ANODIZED STEEL\t3\t12\nBrand#x\tSMALL ANODIZED TIN\t45\t12\nBrand#x\tSMALL BRUSHED BRASS\t19\t12\nBrand#x\tSMALL BRUSHED STEEL\t3\t12\nBrand#x\tSMALL BURNISHED BRASS\t14\t12\nBrand#x\tSMALL BURNISHED COPPER\t36\t12\nBrand#x\tSMALL BURNISHED STEEL\t45\t12\nBrand#x\tSMALL PLATED BRASS\t49\t12\nBrand#x\tSMALL PLATED STEEL\t23\t12\nBrand#x\tSMALL PLATED TIN\t14\t12\nBrand#x\tSMALL POLISHED COPPER\t49\t12\nBrand#x\tSMALL POLISHED TIN\t23\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t12\nBrand#x\tSTANDARD ANODIZED TIN\t3\t12\nBrand#x\tSTANDARD ANODIZED TIN\t45\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t12\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t12\nBrand#x\tSTANDARD BRUSHED TIN\t19\t12\nBrand#x\tSTANDARD PLATED BRASS\t3\t12\nBrand#x\tSTANDARD PLATED NICKEL\t49\t12\nBrand#x\tSTANDARD PLATED TIN\t9\t12\nBrand#x\tSTANDARD PLATED TIN\t19\t12\nBrand#x\tSTANDARD POLISHED 
STEEL\t23\t12\nBrand#x\tSTANDARD POLISHED TIN\t23\t12\nBrand#x\tECONOMY ANODIZED BRASS\t19\t12\nBrand#x\tECONOMY ANODIZED COPPER\t36\t12\nBrand#x\tECONOMY ANODIZED COPPER\t49\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t12\nBrand#x\tECONOMY ANODIZED STEEL\t23\t12\nBrand#x\tECONOMY ANODIZED STEEL\t45\t12\nBrand#x\tECONOMY BRUSHED STEEL\t9\t12\nBrand#x\tECONOMY BRUSHED TIN\t49\t12\nBrand#x\tECONOMY BURNISHED BRASS\t14\t12\nBrand#x\tECONOMY BURNISHED COPPER\t3\t12\nBrand#x\tECONOMY BURNISHED COPPER\t19\t12\nBrand#x\tECONOMY BURNISHED STEEL\t45\t12\nBrand#x\tECONOMY PLATED COPPER\t49\t12\nBrand#x\tECONOMY PLATED STEEL\t45\t12\nBrand#x\tECONOMY POLISHED BRASS\t23\t12\nBrand#x\tECONOMY POLISHED STEEL\t14\t12\nBrand#x\tECONOMY POLISHED TIN\t14\t12\nBrand#x\tECONOMY POLISHED TIN\t45\t12\nBrand#x\tECONOMY POLISHED TIN\t49\t12\nBrand#x\tLARGE ANODIZED BRASS\t3\t12\nBrand#x\tLARGE ANODIZED BRASS\t45\t12\nBrand#x\tLARGE BRUSHED BRASS\t14\t12\nBrand#x\tLARGE BRUSHED BRASS\t45\t12\nBrand#x\tLARGE BRUSHED STEEL\t23\t12\nBrand#x\tLARGE BRUSHED STEEL\t45\t12\nBrand#x\tLARGE BURNISHED STEEL\t3\t12\nBrand#x\tLARGE BURNISHED TIN\t23\t12\nBrand#x\tLARGE PLATED COPPER\t23\t12\nBrand#x\tLARGE PLATED STEEL\t3\t12\nBrand#x\tLARGE POLISHED COPPER\t9\t12\nBrand#x\tLARGE POLISHED TIN\t14\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t12\nBrand#x\tMEDIUM BURNISHED TIN\t23\t12\nBrand#x\tMEDIUM PLATED BRASS\t3\t12\nBrand#x\tMEDIUM PLATED NICKEL\t36\t12\nBrand#x\tPROMO ANODIZED NICKEL\t19\t12\nBrand#x\tPROMO ANODIZED NICKEL\t45\t12\nBrand#x\tPROMO ANODIZED TIN\t14\t12\nBrand#x\tPROMO BRUSHED COPPER\t23\t12\nBrand#x\tPROMO BRUSHED COPPER\t49\t12\nBrand#x\tPROMO BRUSHED NICKEL\t3\t12\nBrand#x\tPROMO BURNISHED BRASS\t36\t12\nBrand#x\tPROMO BURNISHED STEEL\t14\t12\nBrand#x\tPROMO BURNISHED TIN\t14\t12\nBrand#x\tPROMO PLATED STEEL\t3\t12\nBrand#x\tPROMO POLISHED BRASS\t3\t12\nBrand#x\tPROMO POLISHED BRASS\t14\t12\nBrand#x\tPROMO POLISHED COPPER\t45\t12\nBrand#x\tSMALL ANODIZED COPPER\t3\t12\nBrand#x\tSMALL ANODIZED NICKEL\t23\t12\nBrand#x\tSMALL BRUSHED BRASS\t45\t12\nBrand#x\tSMALL BRUSHED COPPER\t9\t12\nBrand#x\tSMALL BRUSHED NICKEL\t49\t12\nBrand#x\tSMALL BURNISHED BRASS\t3\t12\nBrand#x\tSMALL BURNISHED BRASS\t14\t12\nBrand#x\tSMALL BURNISHED COPPER\t19\t12\nBrand#x\tSMALL BURNISHED NICKEL\t9\t12\nBrand#x\tSMALL PLATED BRASS\t3\t12\nBrand#x\tSMALL PLATED BRASS\t14\t12\nBrand#x\tSMALL PLATED NICKEL\t14\t12\nBrand#x\tSMALL POLISHED BRASS\t3\t12\nBrand#x\tSMALL POLISHED NICKEL\t19\t12\nBrand#x\tSMALL POLISHED TIN\t9\t12\nBrand#x\tSTANDARD ANODIZED TIN\t49\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t12\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t12\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t12\nBrand#x\tSTANDARD PLATED BRASS\t36\t12\nBrand#x\tSTANDARD PLATED COPPER\t49\t12\nBrand#x\tSTANDARD PLATED NICKEL\t36\t12\nBrand#x\tSTANDARD POLISHED BRASS\t9\t12\nBrand#x\tSTANDARD POLISHED COPPER\t9\t12\nBrand#x\tECONOMY ANODIZED STEEL\t14\t12\nBrand#x\tECONOMY ANODIZED STEEL\t45\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t12\nBrand#x\tECONOMY BRUSHED STEEL\t3\t12\nBrand#x\tECONOMY BRUSHED TIN\t14\t12\nBrand#x\tECONOMY PLATED COPPER\t3\t12\nBrand#x\tECONOMY PLATED NICKEL\t19\t12\nBrand#x\tECONOMY PLATED STEEL\t9\t12\nBrand#x\tECONOMY POLISHED BRASS\t3\t12\nBrand#x\tECONOMY POLISHED 
BRASS\t9\t12\nBrand#x\tECONOMY POLISHED NICKEL\t3\t12\nBrand#x\tLARGE ANODIZED BRASS\t14\t12\nBrand#x\tLARGE ANODIZED BRASS\t23\t12\nBrand#x\tLARGE ANODIZED COPPER\t19\t12\nBrand#x\tLARGE ANODIZED COPPER\t36\t12\nBrand#x\tLARGE BRUSHED BRASS\t19\t12\nBrand#x\tLARGE BRUSHED NICKEL\t49\t12\nBrand#x\tLARGE BRUSHED STEEL\t36\t12\nBrand#x\tLARGE BRUSHED TIN\t3\t12\nBrand#x\tLARGE BRUSHED TIN\t9\t12\nBrand#x\tLARGE BURNISHED BRASS\t23\t12\nBrand#x\tLARGE BURNISHED STEEL\t36\t12\nBrand#x\tLARGE BURNISHED TIN\t14\t12\nBrand#x\tLARGE BURNISHED TIN\t36\t12\nBrand#x\tLARGE PLATED NICKEL\t45\t12\nBrand#x\tLARGE PLATED TIN\t23\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t12\nBrand#x\tMEDIUM ANODIZED TIN\t3\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t12\nBrand#x\tMEDIUM BRUSHED TIN\t9\t12\nBrand#x\tMEDIUM BRUSHED TIN\t49\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t12\nBrand#x\tMEDIUM PLATED COPPER\t14\t12\nBrand#x\tMEDIUM PLATED COPPER\t23\t12\nBrand#x\tMEDIUM PLATED STEEL\t36\t12\nBrand#x\tMEDIUM PLATED TIN\t14\t12\nBrand#x\tPROMO ANODIZED COPPER\t3\t12\nBrand#x\tPROMO ANODIZED NICKEL\t23\t12\nBrand#x\tPROMO ANODIZED TIN\t36\t12\nBrand#x\tPROMO BURNISHED COPPER\t19\t12\nBrand#x\tPROMO BURNISHED COPPER\t36\t12\nBrand#x\tPROMO BURNISHED COPPER\t45\t12\nBrand#x\tPROMO BURNISHED STEEL\t9\t12\nBrand#x\tPROMO PLATED BRASS\t9\t12\nBrand#x\tPROMO POLISHED BRASS\t3\t12\nBrand#x\tPROMO POLISHED BRASS\t49\t12\nBrand#x\tPROMO POLISHED NICKEL\t36\t12\nBrand#x\tPROMO POLISHED STEEL\t45\t12\nBrand#x\tSMALL ANODIZED COPPER\t45\t12\nBrand#x\tSMALL ANODIZED TIN\t14\t12\nBrand#x\tSMALL BRUSHED COPPER\t14\t12\nBrand#x\tSMALL BURNISHED BRASS\t3\t12\nBrand#x\tSMALL BURNISHED NICKEL\t45\t12\nBrand#x\tSMALL BURNISHED STEEL\t14\t12\nBrand#x\tSMALL PLATED BRASS\t19\t12\nBrand#x\tSMALL PLATED BRASS\t49\t12\nBrand#x\tSMALL PLATED COPPER\t23\t12\nBrand#x\tSMALL PLATED TIN\t3\t12\nBrand#x\tSMALL POLISHED COPPER\t9\t12\nBrand#x\tSTANDARD BRUSHED TIN\t45\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t12\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t12\nBrand#x\tSTANDARD PLATED COPPER\t9\t12\nBrand#x\tSTANDARD PLATED COPPER\t23\t12\nBrand#x\tSTANDARD PLATED NICKEL\t36\t12\nBrand#x\tSTANDARD PLATED NICKEL\t49\t12\nBrand#x\tSTANDARD PLATED TIN\t36\t12\nBrand#x\tSTANDARD POLISHED COPPER\t23\t12\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t12\nBrand#x\tSTANDARD POLISHED TIN\t3\t12\nBrand#x\tECONOMY ANODIZED BRASS\t19\t12\nBrand#x\tECONOMY ANODIZED TIN\t36\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t12\nBrand#x\tECONOMY BURNISHED COPPER\t14\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t12\nBrand#x\tECONOMY PLATED NICKEL\t9\t12\nBrand#x\tECONOMY POLISHED COPPER\t3\t12\nBrand#x\tECONOMY POLISHED TIN\t36\t12\nBrand#x\tLARGE ANODIZED COPPER\t3\t12\nBrand#x\tLARGE ANODIZED COPPER\t14\t12\nBrand#x\tLARGE ANODIZED STEEL\t36\t12\nBrand#x\tLARGE ANODIZED TIN\t3\t12\nBrand#x\tLARGE BRUSHED BRASS\t36\t12\nBrand#x\tLARGE BRUSHED NICKEL\t19\t12\nBrand#x\tLARGE BRUSHED STEEL\t36\t12\nBrand#x\tLARGE BRUSHED TIN\t14\t12\nBrand#x\tLARGE BURNISHED BRASS\t36\t12\nBrand#x\tLARGE BURNISHED NICKEL\t14\t12\nBrand#x\tLARGE PLATED STEEL\t23\t12\nBrand#x\tLARGE POLISHED BRASS\t9\t12\nBrand#x\tLARGE POLISHED STEEL\t45\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t12\nBrand#x\tMEDIUM ANODIZED TIN\t9\t12\nBrand#x\tMEDIUM ANODIZED TIN\t23\t12\nBrand#x\tMEDIUM 
BRUSHED BRASS\t23\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t12\nBrand#x\tPROMO ANODIZED COPPER\t14\t12\nBrand#x\tPROMO ANODIZED TIN\t36\t12\nBrand#x\tPROMO BRUSHED BRASS\t3\t12\nBrand#x\tPROMO BRUSHED COPPER\t23\t12\nBrand#x\tPROMO BRUSHED STEEL\t23\t12\nBrand#x\tPROMO BURNISHED BRASS\t49\t12\nBrand#x\tPROMO BURNISHED STEEL\t3\t12\nBrand#x\tPROMO PLATED BRASS\t36\t12\nBrand#x\tPROMO POLISHED NICKEL\t49\t12\nBrand#x\tSMALL ANODIZED COPPER\t3\t12\nBrand#x\tSMALL ANODIZED NICKEL\t9\t12\nBrand#x\tSMALL ANODIZED TIN\t3\t12\nBrand#x\tSMALL BRUSHED COPPER\t14\t12\nBrand#x\tSMALL BRUSHED COPPER\t19\t12\nBrand#x\tSMALL BRUSHED NICKEL\t3\t12\nBrand#x\tSMALL BRUSHED NICKEL\t23\t12\nBrand#x\tSMALL BRUSHED NICKEL\t36\t12\nBrand#x\tSMALL BURNISHED BRASS\t3\t12\nBrand#x\tSMALL BURNISHED NICKEL\t9\t12\nBrand#x\tSMALL BURNISHED TIN\t23\t12\nBrand#x\tSMALL PLATED STEEL\t19\t12\nBrand#x\tSMALL PLATED STEEL\t23\t12\nBrand#x\tSMALL POLISHED STEEL\t3\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t12\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t12\nBrand#x\tSTANDARD PLATED BRASS\t3\t12\nBrand#x\tSTANDARD PLATED BRASS\t19\t12\nBrand#x\tSTANDARD PLATED STEEL\t19\t12\nBrand#x\tSTANDARD POLISHED BRASS\t23\t12\nBrand#x\tSTANDARD POLISHED COPPER\t45\t12\nBrand#x\tECONOMY ANODIZED BRASS\t14\t12\nBrand#x\tECONOMY ANODIZED STEEL\t23\t12\nBrand#x\tECONOMY ANODIZED STEEL\t49\t12\nBrand#x\tECONOMY ANODIZED TIN\t23\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t12\nBrand#x\tECONOMY BRUSHED STEEL\t36\t12\nBrand#x\tECONOMY BRUSHED TIN\t19\t12\nBrand#x\tECONOMY BURNISHED TIN\t19\t12\nBrand#x\tECONOMY PLATED BRASS\t19\t12\nBrand#x\tECONOMY PLATED NICKEL\t23\t12\nBrand#x\tECONOMY PLATED TIN\t45\t12\nBrand#x\tLARGE ANODIZED NICKEL\t3\t12\nBrand#x\tLARGE ANODIZED STEEL\t14\t12\nBrand#x\tLARGE BRUSHED BRASS\t45\t12\nBrand#x\tLARGE BRUSHED NICKEL\t3\t12\nBrand#x\tLARGE BRUSHED STEEL\t45\t12\nBrand#x\tLARGE BRUSHED TIN\t19\t12\nBrand#x\tLARGE PLATED BRASS\t3\t12\nBrand#x\tLARGE PLATED BRASS\t9\t12\nBrand#x\tLARGE POLISHED COPPER\t19\t12\nBrand#x\tLARGE POLISHED NICKEL\t3\t12\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t12\nBrand#x\tMEDIUM ANODIZED TIN\t45\t12\nBrand#x\tMEDIUM ANODIZED TIN\t49\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t12\nBrand#x\tMEDIUM PLATED BRASS\t49\t12\nBrand#x\tMEDIUM PLATED TIN\t3\t12\nBrand#x\tPROMO ANODIZED NICKEL\t49\t12\nBrand#x\tPROMO BRUSHED COPPER\t45\t12\nBrand#x\tPROMO BRUSHED STEEL\t23\t12\nBrand#x\tPROMO BRUSHED STEEL\t49\t12\nBrand#x\tPROMO BRUSHED TIN\t14\t12\nBrand#x\tPROMO BRUSHED TIN\t36\t12\nBrand#x\tPROMO BURNISHED NICKEL\t45\t12\nBrand#x\tPROMO BURNISHED TIN\t49\t12\nBrand#x\tPROMO PLATED COPPER\t49\t12\nBrand#x\tPROMO PLATED STEEL\t49\t12\nBrand#x\tPROMO POLISHED STEEL\t49\t12\nBrand#x\tPROMO POLISHED TIN\t19\t12\nBrand#x\tPROMO POLISHED TIN\t23\t12\nBrand#x\tPROMO POLISHED TIN\t45\t12\nBrand#x\tSMALL ANODIZED NICKEL\t9\t12\nBrand#x\tSMALL BRUSHED TIN\t3\t12\nBrand#x\tSMALL BRUSHED TIN\t9\t12\nBrand#x\tSMALL BURNISHED TIN\t23\t12\nBrand#x\tSMALL BURNISHED TIN\t36\t12\nBrand#x\tSMALL PLATED BRASS\t36\t12\nBrand#x\tSMALL PLATED COPPER\t14\t12\nBrand#x\tSMALL PLATED COPPER\t45\t12\nBrand#x\tSMALL PLATED STEEL\t36\t12\nBrand#x\tSMALL PLATED TIN\t14\t12\nBrand#x\tSMALL POLISHED NICKEL\t45\t12\nBrand#x\tSMALL POLISHED 
STEEL\t23\t12\nBrand#x\tSMALL POLISHED STEEL\t36\t12\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t12\nBrand#x\tSTANDARD ANODIZED TIN\t14\t12\nBrand#x\tSTANDARD ANODIZED TIN\t19\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t12\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t12\nBrand#x\tSTANDARD BRUSHED TIN\t9\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t12\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t12\nBrand#x\tSTANDARD PLATED STEEL\t9\t12\nBrand#x\tSTANDARD PLATED STEEL\t49\t12\nBrand#x\tSTANDARD POLISHED COPPER\t36\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t12\nBrand#x\tECONOMY ANODIZED STEEL\t23\t12\nBrand#x\tECONOMY ANODIZED STEEL\t45\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t12\nBrand#x\tECONOMY BURNISHED TIN\t45\t12\nBrand#x\tECONOMY PLATED STEEL\t3\t12\nBrand#x\tECONOMY PLATED TIN\t3\t12\nBrand#x\tECONOMY PLATED TIN\t9\t12\nBrand#x\tECONOMY POLISHED BRASS\t3\t12\nBrand#x\tECONOMY POLISHED BRASS\t14\t12\nBrand#x\tLARGE ANODIZED BRASS\t3\t12\nBrand#x\tLARGE ANODIZED BRASS\t36\t12\nBrand#x\tLARGE ANODIZED NICKEL\t23\t12\nBrand#x\tLARGE ANODIZED STEEL\t3\t12\nBrand#x\tLARGE ANODIZED TIN\t36\t12\nBrand#x\tLARGE BRUSHED BRASS\t23\t12\nBrand#x\tLARGE BRUSHED STEEL\t3\t12\nBrand#x\tLARGE BRUSHED TIN\t36\t12\nBrand#x\tLARGE BURNISHED BRASS\t19\t12\nBrand#x\tLARGE BURNISHED BRASS\t49\t12\nBrand#x\tLARGE PLATED NICKEL\t9\t12\nBrand#x\tLARGE PLATED NICKEL\t19\t12\nBrand#x\tLARGE POLISHED BRASS\t9\t12\nBrand#x\tLARGE POLISHED NICKEL\t45\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t12\nBrand#x\tMEDIUM ANODIZED TIN\t49\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t12\nBrand#x\tMEDIUM BURNISHED TIN\t3\t12\nBrand#x\tMEDIUM BURNISHED TIN\t49\t12\nBrand#x\tMEDIUM PLATED STEEL\t3\t12\nBrand#x\tMEDIUM PLATED TIN\t23\t12\nBrand#x\tPROMO ANODIZED STEEL\t23\t12\nBrand#x\tPROMO ANODIZED TIN\t9\t12\nBrand#x\tPROMO ANODIZED TIN\t49\t12\nBrand#x\tPROMO BRUSHED BRASS\t3\t12\nBrand#x\tPROMO BRUSHED BRASS\t19\t12\nBrand#x\tPROMO BRUSHED TIN\t49\t12\nBrand#x\tPROMO BURNISHED NICKEL\t23\t12\nBrand#x\tPROMO BURNISHED TIN\t3\t12\nBrand#x\tPROMO BURNISHED TIN\t19\t12\nBrand#x\tPROMO BURNISHED TIN\t23\t12\nBrand#x\tPROMO BURNISHED TIN\t36\t12\nBrand#x\tPROMO BURNISHED TIN\t49\t12\nBrand#x\tPROMO PLATED BRASS\t23\t12\nBrand#x\tPROMO PLATED BRASS\t36\t12\nBrand#x\tPROMO POLISHED COPPER\t3\t12\nBrand#x\tPROMO POLISHED NICKEL\t3\t12\nBrand#x\tPROMO POLISHED STEEL\t23\t12\nBrand#x\tSMALL ANODIZED STEEL\t14\t12\nBrand#x\tSMALL ANODIZED STEEL\t49\t12\nBrand#x\tSMALL ANODIZED TIN\t19\t12\nBrand#x\tSMALL BRUSHED BRASS\t36\t12\nBrand#x\tSMALL BRUSHED NICKEL\t19\t12\nBrand#x\tSMALL BRUSHED NICKEL\t45\t12\nBrand#x\tSMALL BURNISHED BRASS\t36\t12\nBrand#x\tSMALL BURNISHED TIN\t9\t12\nBrand#x\tSMALL PLATED BRASS\t14\t12\nBrand#x\tSMALL PLATED NICKEL\t49\t12\nBrand#x\tSMALL PLATED STEEL\t3\t12\nBrand#x\tSMALL POLISHED NICKEL\t9\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t12\nBrand#x\tSTANDARD ANODIZED TIN\t9\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t12\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t12\nBrand#x\tSTANDARD BURNISHED TIN\t23\t12\nBrand#x\tSTANDARD POLISHED STEEL\t45\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t12\nBrand#x\tECONOMY ANODIZED STEEL\t45\t12\nBrand#x\tECONOMY BURNISHED 
COPPER\t9\t12\nBrand#x\tECONOMY BURNISHED COPPER\t23\t12\nBrand#x\tECONOMY BURNISHED COPPER\t36\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t12\nBrand#x\tECONOMY BURNISHED STEEL\t9\t12\nBrand#x\tECONOMY BURNISHED TIN\t14\t12\nBrand#x\tECONOMY PLATED BRASS\t3\t12\nBrand#x\tECONOMY PLATED COPPER\t3\t12\nBrand#x\tECONOMY PLATED TIN\t3\t12\nBrand#x\tECONOMY PLATED TIN\t14\t12\nBrand#x\tECONOMY POLISHED TIN\t36\t12\nBrand#x\tLARGE ANODIZED COPPER\t3\t12\nBrand#x\tLARGE ANODIZED NICKEL\t3\t12\nBrand#x\tLARGE ANODIZED NICKEL\t49\t12\nBrand#x\tLARGE BRUSHED COPPER\t36\t12\nBrand#x\tLARGE BRUSHED NICKEL\t19\t12\nBrand#x\tLARGE BRUSHED NICKEL\t49\t12\nBrand#x\tLARGE BURNISHED COPPER\t23\t12\nBrand#x\tLARGE BURNISHED NICKEL\t23\t12\nBrand#x\tLARGE BURNISHED TIN\t14\t12\nBrand#x\tLARGE BURNISHED TIN\t23\t12\nBrand#x\tLARGE BURNISHED TIN\t49\t12\nBrand#x\tLARGE PLATED COPPER\t9\t12\nBrand#x\tLARGE PLATED TIN\t14\t12\nBrand#x\tLARGE POLISHED BRASS\t3\t12\nBrand#x\tLARGE POLISHED BRASS\t45\t12\nBrand#x\tLARGE POLISHED COPPER\t3\t12\nBrand#x\tLARGE POLISHED NICKEL\t3\t12\nBrand#x\tLARGE POLISHED NICKEL\t49\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t12\nBrand#x\tMEDIUM BRUSHED TIN\t36\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t12\nBrand#x\tMEDIUM PLATED BRASS\t23\t12\nBrand#x\tPROMO ANODIZED NICKEL\t3\t12\nBrand#x\tPROMO BRUSHED COPPER\t49\t12\nBrand#x\tPROMO BRUSHED NICKEL\t49\t12\nBrand#x\tPROMO BURNISHED STEEL\t14\t12\nBrand#x\tPROMO PLATED BRASS\t3\t12\nBrand#x\tPROMO PLATED BRASS\t36\t12\nBrand#x\tPROMO PLATED TIN\t49\t12\nBrand#x\tPROMO POLISHED BRASS\t14\t12\nBrand#x\tPROMO POLISHED COPPER\t23\t12\nBrand#x\tPROMO POLISHED NICKEL\t49\t12\nBrand#x\tSMALL ANODIZED BRASS\t19\t12\nBrand#x\tSMALL ANODIZED COPPER\t14\t12\nBrand#x\tSMALL ANODIZED STEEL\t19\t12\nBrand#x\tSMALL ANODIZED TIN\t9\t12\nBrand#x\tSMALL BRUSHED COPPER\t14\t12\nBrand#x\tSMALL BURNISHED BRASS\t9\t12\nBrand#x\tSMALL BURNISHED BRASS\t23\t12\nBrand#x\tSMALL BURNISHED COPPER\t9\t12\nBrand#x\tSMALL BURNISHED COPPER\t36\t12\nBrand#x\tSMALL BURNISHED NICKEL\t9\t12\nBrand#x\tSMALL BURNISHED NICKEL\t14\t12\nBrand#x\tSMALL BURNISHED NICKEL\t36\t12\nBrand#x\tSMALL BURNISHED STEEL\t14\t12\nBrand#x\tSMALL PLATED BRASS\t14\t12\nBrand#x\tSMALL PLATED TIN\t45\t12\nBrand#x\tSMALL POLISHED STEEL\t19\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t12\nBrand#x\tSTANDARD ANODIZED TIN\t3\t12\nBrand#x\tSTANDARD ANODIZED TIN\t14\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t12\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t12\nBrand#x\tSTANDARD BRUSHED TIN\t45\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t12\nBrand#x\tSTANDARD BURNISHED TIN\t45\t12\nBrand#x\tSTANDARD POLISHED COPPER\t14\t12\nBrand#x\tECONOMY ANODIZED BRASS\t14\t12\nBrand#x\tECONOMY ANODIZED COPPER\t19\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t12\nBrand#x\tECONOMY ANODIZED STEEL\t14\t12\nBrand#x\tECONOMY ANODIZED STEEL\t45\t12\nBrand#x\tECONOMY BRUSHED BRASS\t36\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t12\nBrand#x\tECONOMY BURNISHED BRASS\t19\t12\nBrand#x\tECONOMY BURNISHED BRASS\t36\t12\nBrand#x\tECONOMY BURNISHED STEEL\t36\t12\nBrand#x\tECONOMY PLATED TIN\t45\t12\nBrand#x\tECONOMY PLATED TIN\t49\t12\nBrand#x\tECONOMY POLISHED COPPER\t9\t12\nBrand#x\tECONOMY POLISHED 
NICKEL\t23\t12\nBrand#x\tECONOMY POLISHED STEEL\t9\t12\nBrand#x\tECONOMY POLISHED TIN\t23\t12\nBrand#x\tLARGE ANODIZED BRASS\t3\t12\nBrand#x\tLARGE ANODIZED BRASS\t45\t12\nBrand#x\tLARGE ANODIZED COPPER\t19\t12\nBrand#x\tLARGE ANODIZED COPPER\t36\t12\nBrand#x\tLARGE ANODIZED STEEL\t45\t12\nBrand#x\tLARGE ANODIZED TIN\t45\t12\nBrand#x\tLARGE BRUSHED COPPER\t23\t12\nBrand#x\tLARGE BRUSHED NICKEL\t36\t12\nBrand#x\tLARGE BRUSHED STEEL\t3\t12\nBrand#x\tLARGE BRUSHED TIN\t36\t12\nBrand#x\tLARGE BURNISHED BRASS\t45\t12\nBrand#x\tLARGE BURNISHED STEEL\t9\t12\nBrand#x\tLARGE BURNISHED STEEL\t45\t12\nBrand#x\tLARGE BURNISHED TIN\t49\t12\nBrand#x\tLARGE PLATED BRASS\t3\t12\nBrand#x\tLARGE PLATED BRASS\t23\t12\nBrand#x\tLARGE PLATED STEEL\t19\t12\nBrand#x\tLARGE PLATED STEEL\t49\t12\nBrand#x\tMEDIUM ANODIZED TIN\t3\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t12\nBrand#x\tMEDIUM PLATED NICKEL\t45\t12\nBrand#x\tMEDIUM PLATED STEEL\t3\t12\nBrand#x\tMEDIUM PLATED TIN\t36\t12\nBrand#x\tPROMO ANODIZED BRASS\t14\t12\nBrand#x\tPROMO ANODIZED STEEL\t3\t12\nBrand#x\tPROMO ANODIZED STEEL\t23\t12\nBrand#x\tPROMO ANODIZED TIN\t49\t12\nBrand#x\tPROMO BRUSHED COPPER\t9\t12\nBrand#x\tPROMO BRUSHED COPPER\t23\t12\nBrand#x\tPROMO BRUSHED STEEL\t36\t12\nBrand#x\tPROMO BURNISHED NICKEL\t19\t12\nBrand#x\tPROMO BURNISHED STEEL\t3\t12\nBrand#x\tPROMO BURNISHED STEEL\t14\t12\nBrand#x\tPROMO BURNISHED STEEL\t49\t12\nBrand#x\tPROMO BURNISHED TIN\t9\t12\nBrand#x\tPROMO BURNISHED TIN\t14\t12\nBrand#x\tPROMO POLISHED BRASS\t19\t12\nBrand#x\tPROMO POLISHED COPPER\t49\t12\nBrand#x\tPROMO POLISHED NICKEL\t49\t12\nBrand#x\tPROMO POLISHED STEEL\t9\t12\nBrand#x\tPROMO POLISHED TIN\t36\t12\nBrand#x\tSMALL ANODIZED BRASS\t9\t12\nBrand#x\tSMALL ANODIZED BRASS\t19\t12\nBrand#x\tSMALL BRUSHED NICKEL\t19\t12\nBrand#x\tSMALL BRUSHED STEEL\t45\t12\nBrand#x\tSMALL BRUSHED TIN\t45\t12\nBrand#x\tSMALL BURNISHED BRASS\t9\t12\nBrand#x\tSMALL BURNISHED BRASS\t23\t12\nBrand#x\tSMALL BURNISHED BRASS\t36\t12\nBrand#x\tSMALL BURNISHED BRASS\t49\t12\nBrand#x\tSMALL BURNISHED COPPER\t45\t12\nBrand#x\tSMALL PLATED BRASS\t9\t12\nBrand#x\tSMALL PLATED BRASS\t36\t12\nBrand#x\tSMALL PLATED TIN\t36\t12\nBrand#x\tSTANDARD ANODIZED TIN\t3\t12\nBrand#x\tSTANDARD ANODIZED TIN\t9\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t12\nBrand#x\tSTANDARD PLATED BRASS\t49\t12\nBrand#x\tSTANDARD PLATED COPPER\t9\t12\nBrand#x\tSTANDARD PLATED NICKEL\t23\t12\nBrand#x\tSTANDARD PLATED NICKEL\t49\t12\nBrand#x\tSTANDARD PLATED STEEL\t23\t12\nBrand#x\tSTANDARD PLATED TIN\t45\t12\nBrand#x\tSTANDARD POLISHED STEEL\t23\t12\nBrand#x\tSTANDARD POLISHED TIN\t3\t12\nBrand#x\tECONOMY ANODIZED BRASS\t45\t12\nBrand#x\tECONOMY ANODIZED TIN\t14\t12\nBrand#x\tECONOMY BRUSHED BRASS\t23\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t12\nBrand#x\tECONOMY BRUSHED STEEL\t36\t12\nBrand#x\tECONOMY BRUSHED TIN\t45\t12\nBrand#x\tECONOMY BURNISHED COPPER\t3\t12\nBrand#x\tECONOMY BURNISHED COPPER\t45\t12\nBrand#x\tECONOMY PLATED NICKEL\t23\t12\nBrand#x\tECONOMY PLATED STEEL\t36\t12\nBrand#x\tECONOMY PLATED TIN\t23\t12\nBrand#x\tECONOMY POLISHED BRASS\t36\t12\nBrand#x\tECONOMY POLISHED COPPER\t49\t12\nBrand#x\tECONOMY POLISHED NICKEL\t9\t12\nBrand#x\tECONOMY POLISHED NICKEL\t19\t12\nBrand#x\tECONOMY POLISHED NICKEL\t23\t12\nBrand#x\tECONOMY POLISHED STEEL\t49\t12\nBrand#x\tLARGE ANODIZED BRASS\t14\t12\nBrand#x\tLARGE 
ANODIZED BRASS\t23\t12\nBrand#x\tLARGE ANODIZED COPPER\t36\t12\nBrand#x\tLARGE ANODIZED STEEL\t23\t12\nBrand#x\tLARGE BRUSHED BRASS\t9\t12\nBrand#x\tLARGE BRUSHED COPPER\t23\t12\nBrand#x\tLARGE BURNISHED BRASS\t36\t12\nBrand#x\tLARGE BURNISHED STEEL\t23\t12\nBrand#x\tLARGE PLATED NICKEL\t14\t12\nBrand#x\tLARGE POLISHED BRASS\t45\t12\nBrand#x\tLARGE POLISHED COPPER\t23\t12\nBrand#x\tLARGE POLISHED COPPER\t36\t12\nBrand#x\tLARGE POLISHED STEEL\t3\t12\nBrand#x\tLARGE POLISHED STEEL\t9\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t12\nBrand#x\tMEDIUM ANODIZED TIN\t3\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t12\nBrand#x\tMEDIUM BURNISHED TIN\t14\t12\nBrand#x\tMEDIUM BURNISHED TIN\t45\t12\nBrand#x\tMEDIUM PLATED BRASS\t19\t12\nBrand#x\tMEDIUM PLATED COPPER\t19\t12\nBrand#x\tMEDIUM PLATED COPPER\t45\t12\nBrand#x\tPROMO ANODIZED BRASS\t14\t12\nBrand#x\tPROMO ANODIZED NICKEL\t49\t12\nBrand#x\tPROMO ANODIZED TIN\t9\t12\nBrand#x\tPROMO BURNISHED COPPER\t49\t12\nBrand#x\tPROMO BURNISHED TIN\t14\t12\nBrand#x\tPROMO PLATED NICKEL\t14\t12\nBrand#x\tPROMO PLATED STEEL\t45\t12\nBrand#x\tPROMO PLATED TIN\t3\t12\nBrand#x\tPROMO PLATED TIN\t36\t12\nBrand#x\tPROMO POLISHED COPPER\t23\t12\nBrand#x\tPROMO POLISHED NICKEL\t19\t12\nBrand#x\tSMALL ANODIZED BRASS\t3\t12\nBrand#x\tSMALL ANODIZED COPPER\t14\t12\nBrand#x\tSMALL ANODIZED NICKEL\t36\t12\nBrand#x\tSMALL BRUSHED STEEL\t36\t12\nBrand#x\tSMALL BRUSHED TIN\t14\t12\nBrand#x\tSMALL BURNISHED TIN\t3\t12\nBrand#x\tSMALL PLATED BRASS\t14\t12\nBrand#x\tSMALL PLATED STEEL\t14\t12\nBrand#x\tSMALL POLISHED COPPER\t36\t12\nBrand#x\tSMALL POLISHED TIN\t36\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t12\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t12\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t12\nBrand#x\tSTANDARD BURNISHED TIN\t3\t12\nBrand#x\tSTANDARD PLATED BRASS\t45\t12\nBrand#x\tSTANDARD PLATED COPPER\t49\t12\nBrand#x\tSTANDARD POLISHED COPPER\t23\t12\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t12\nBrand#x\tECONOMY ANODIZED BRASS\t36\t12\nBrand#x\tECONOMY ANODIZED STEEL\t9\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t12\nBrand#x\tECONOMY BRUSHED TIN\t14\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t12\nBrand#x\tECONOMY BURNISHED STEEL\t49\t12\nBrand#x\tECONOMY BURNISHED TIN\t19\t12\nBrand#x\tECONOMY PLATED COPPER\t14\t12\nBrand#x\tECONOMY PLATED NICKEL\t9\t12\nBrand#x\tECONOMY POLISHED COPPER\t9\t12\nBrand#x\tLARGE ANODIZED BRASS\t49\t12\nBrand#x\tLARGE ANODIZED COPPER\t36\t12\nBrand#x\tLARGE BURNISHED COPPER\t9\t12\nBrand#x\tLARGE BURNISHED COPPER\t19\t12\nBrand#x\tLARGE BURNISHED TIN\t9\t12\nBrand#x\tLARGE PLATED BRASS\t23\t12\nBrand#x\tLARGE PLATED BRASS\t36\t12\nBrand#x\tLARGE PLATED NICKEL\t23\t12\nBrand#x\tLARGE PLATED TIN\t9\t12\nBrand#x\tLARGE PLATED TIN\t19\t12\nBrand#x\tLARGE POLISHED BRASS\t36\t12\nBrand#x\tLARGE POLISHED STEEL\t9\t12\nBrand#x\tLARGE POLISHED STEEL\t45\t12\nBrand#x\tLARGE POLISHED TIN\t14\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t12\nBrand#x\tMEDIUM ANODIZED TIN\t49\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t12\nBrand#x\tMEDIUM BRUSHED TIN\t14\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t12\nBrand#x\tMEDIUM PLATED BRASS\t36\t12\nBrand#x\tMEDIUM PLATED COPPER\t36\t12\nBrand#x\tMEDIUM PLATED COPPER\t45\t12\nBrand#x\tMEDIUM PLATED STEEL\t3\t12\nBrand#x\tMEDIUM PLATED TIN\t45\t12\nBrand#x\tPROMO ANODIZED TIN\t23\t12\nBrand#x\tPROMO 
BRUSHED BRASS\t19\t12\nBrand#x\tPROMO BRUSHED NICKEL\t3\t12\nBrand#x\tPROMO BRUSHED TIN\t45\t12\nBrand#x\tPROMO BURNISHED BRASS\t19\t12\nBrand#x\tPROMO BURNISHED NICKEL\t3\t12\nBrand#x\tPROMO BURNISHED TIN\t9\t12\nBrand#x\tPROMO PLATED BRASS\t14\t12\nBrand#x\tPROMO PLATED BRASS\t23\t12\nBrand#x\tPROMO PLATED STEEL\t19\t12\nBrand#x\tPROMO POLISHED STEEL\t45\t12\nBrand#x\tSMALL ANODIZED BRASS\t36\t12\nBrand#x\tSMALL BRUSHED BRASS\t36\t12\nBrand#x\tSMALL BURNISHED BRASS\t3\t12\nBrand#x\tSMALL BURNISHED BRASS\t36\t12\nBrand#x\tSMALL BURNISHED STEEL\t23\t12\nBrand#x\tSMALL BURNISHED TIN\t9\t12\nBrand#x\tSMALL BURNISHED TIN\t49\t12\nBrand#x\tSMALL PLATED COPPER\t9\t12\nBrand#x\tSMALL PLATED COPPER\t19\t12\nBrand#x\tSMALL POLISHED BRASS\t3\t12\nBrand#x\tSMALL POLISHED COPPER\t36\t12\nBrand#x\tSMALL POLISHED NICKEL\t23\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t12\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t12\nBrand#x\tSTANDARD ANODIZED TIN\t23\t12\nBrand#x\tSTANDARD BRUSHED TIN\t3\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t12\nBrand#x\tSTANDARD BURNISHED TIN\t23\t12\nBrand#x\tSTANDARD PLATED COPPER\t9\t12\nBrand#x\tSTANDARD PLATED TIN\t3\t12\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t12\nBrand#x\tSTANDARD POLISHED STEEL\t14\t12\nBrand#x\tECONOMY ANODIZED BRASS\t14\t12\nBrand#x\tECONOMY ANODIZED COPPER\t9\t12\nBrand#x\tECONOMY ANODIZED COPPER\t19\t12\nBrand#x\tECONOMY ANODIZED COPPER\t45\t12\nBrand#x\tECONOMY BRUSHED STEEL\t9\t12\nBrand#x\tECONOMY BRUSHED STEEL\t14\t12\nBrand#x\tECONOMY BRUSHED STEEL\t36\t12\nBrand#x\tECONOMY BRUSHED STEEL\t45\t12\nBrand#x\tECONOMY BRUSHED TIN\t49\t12\nBrand#x\tECONOMY BURNISHED BRASS\t3\t12\nBrand#x\tECONOMY BURNISHED BRASS\t49\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t12\nBrand#x\tECONOMY BURNISHED STEEL\t9\t12\nBrand#x\tECONOMY BURNISHED TIN\t19\t12\nBrand#x\tECONOMY PLATED COPPER\t3\t12\nBrand#x\tECONOMY PLATED STEEL\t3\t12\nBrand#x\tECONOMY POLISHED BRASS\t45\t12\nBrand#x\tECONOMY POLISHED NICKEL\t45\t12\nBrand#x\tECONOMY POLISHED TIN\t49\t12\nBrand#x\tLARGE ANODIZED TIN\t14\t12\nBrand#x\tLARGE BRUSHED NICKEL\t23\t12\nBrand#x\tLARGE BRUSHED STEEL\t45\t12\nBrand#x\tLARGE BURNISHED COPPER\t14\t12\nBrand#x\tLARGE BURNISHED NICKEL\t3\t12\nBrand#x\tLARGE BURNISHED STEEL\t3\t12\nBrand#x\tLARGE BURNISHED TIN\t45\t12\nBrand#x\tLARGE PLATED TIN\t9\t12\nBrand#x\tLARGE POLISHED BRASS\t9\t12\nBrand#x\tLARGE POLISHED COPPER\t23\t12\nBrand#x\tLARGE POLISHED NICKEL\t9\t12\nBrand#x\tLARGE POLISHED TIN\t45\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t12\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t12\nBrand#x\tMEDIUM PLATED BRASS\t3\t12\nBrand#x\tMEDIUM PLATED BRASS\t49\t12\nBrand#x\tMEDIUM PLATED COPPER\t19\t12\nBrand#x\tPROMO ANODIZED NICKEL\t19\t12\nBrand#x\tPROMO ANODIZED STEEL\t9\t12\nBrand#x\tPROMO ANODIZED TIN\t9\t12\nBrand#x\tPROMO BRUSHED NICKEL\t23\t12\nBrand#x\tPROMO BRUSHED TIN\t49\t12\nBrand#x\tPROMO BURNISHED STEEL\t36\t12\nBrand#x\tPROMO BURNISHED STEEL\t45\t12\nBrand#x\tPROMO BURNISHED TIN\t14\t12\nBrand#x\tPROMO PLATED NICKEL\t9\t12\nBrand#x\tPROMO PLATED NICKEL\t14\t12\nBrand#x\tPROMO PLATED STEEL\t9\t12\nBrand#x\tPROMO POLISHED 
COPPER\t23\t12\nBrand#x\tPROMO POLISHED NICKEL\t3\t12\nBrand#x\tPROMO POLISHED STEEL\t3\t12\nBrand#x\tPROMO POLISHED STEEL\t36\t12\nBrand#x\tSMALL ANODIZED NICKEL\t3\t12\nBrand#x\tSMALL ANODIZED NICKEL\t23\t12\nBrand#x\tSMALL BRUSHED BRASS\t49\t12\nBrand#x\tSMALL BRUSHED COPPER\t36\t12\nBrand#x\tSMALL BRUSHED NICKEL\t36\t12\nBrand#x\tSMALL BRUSHED STEEL\t9\t12\nBrand#x\tSMALL BURNISHED COPPER\t49\t12\nBrand#x\tSMALL BURNISHED NICKEL\t45\t12\nBrand#x\tSMALL PLATED BRASS\t36\t12\nBrand#x\tSMALL PLATED COPPER\t9\t12\nBrand#x\tSMALL PLATED COPPER\t49\t12\nBrand#x\tSMALL POLISHED NICKEL\t14\t12\nBrand#x\tSMALL POLISHED TIN\t49\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t12\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t12\nBrand#x\tSTANDARD ANODIZED TIN\t9\t12\nBrand#x\tSTANDARD ANODIZED TIN\t49\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t12\nBrand#x\tSTANDARD BURNISHED TIN\t14\t12\nBrand#x\tSTANDARD PLATED BRASS\t19\t12\nBrand#x\tSTANDARD PLATED NICKEL\t14\t12\nBrand#x\tSTANDARD PLATED NICKEL\t23\t12\nBrand#x\tSTANDARD PLATED NICKEL\t36\t12\nBrand#x\tSTANDARD POLISHED COPPER\t3\t12\nBrand#x\tSTANDARD POLISHED STEEL\t36\t12\nBrand#x\tSTANDARD POLISHED TIN\t9\t12\nBrand#x\tECONOMY ANODIZED COPPER\t9\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t12\nBrand#x\tECONOMY ANODIZED STEEL\t14\t12\nBrand#x\tECONOMY BRUSHED COPPER\t19\t12\nBrand#x\tECONOMY BURNISHED STEEL\t45\t12\nBrand#x\tECONOMY POLISHED TIN\t36\t12\nBrand#x\tECONOMY POLISHED TIN\t49\t12\nBrand#x\tLARGE ANODIZED TIN\t3\t12\nBrand#x\tLARGE BRUSHED COPPER\t36\t12\nBrand#x\tLARGE BRUSHED STEEL\t36\t12\nBrand#x\tLARGE BRUSHED TIN\t3\t12\nBrand#x\tLARGE BRUSHED TIN\t19\t12\nBrand#x\tLARGE BURNISHED BRASS\t19\t12\nBrand#x\tLARGE BURNISHED BRASS\t49\t12\nBrand#x\tLARGE BURNISHED NICKEL\t9\t12\nBrand#x\tLARGE PLATED BRASS\t9\t12\nBrand#x\tLARGE PLATED NICKEL\t3\t12\nBrand#x\tLARGE PLATED NICKEL\t14\t12\nBrand#x\tLARGE PLATED NICKEL\t36\t12\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t12\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t12\nBrand#x\tMEDIUM ANODIZED TIN\t9\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t12\nBrand#x\tMEDIUM PLATED STEEL\t19\t12\nBrand#x\tMEDIUM PLATED TIN\t23\t12\nBrand#x\tMEDIUM PLATED TIN\t36\t12\nBrand#x\tPROMO ANODIZED BRASS\t9\t12\nBrand#x\tPROMO ANODIZED COPPER\t19\t12\nBrand#x\tPROMO ANODIZED NICKEL\t19\t12\nBrand#x\tPROMO ANODIZED STEEL\t36\t12\nBrand#x\tPROMO BRUSHED NICKEL\t3\t12\nBrand#x\tPROMO BURNISHED BRASS\t19\t12\nBrand#x\tPROMO BURNISHED NICKEL\t49\t12\nBrand#x\tPROMO PLATED BRASS\t19\t12\nBrand#x\tPROMO PLATED STEEL\t14\t12\nBrand#x\tPROMO PLATED STEEL\t36\t12\nBrand#x\tPROMO POLISHED COPPER\t14\t12\nBrand#x\tPROMO POLISHED COPPER\t23\t12\nBrand#x\tPROMO POLISHED COPPER\t45\t12\nBrand#x\tPROMO POLISHED STEEL\t36\t12\nBrand#x\tSMALL ANODIZED STEEL\t36\t12\nBrand#x\tSMALL BRUSHED COPPER\t19\t12\nBrand#x\tSMALL BRUSHED COPPER\t45\t12\nBrand#x\tSMALL BRUSHED NICKEL\t3\t12\nBrand#x\tSMALL BRUSHED NICKEL\t9\t12\nBrand#x\tSMALL BURNISHED COPPER\t14\t12\nBrand#x\tSMALL BURNISHED NICKEL\t3\t12\nBrand#x\tSMALL BURNISHED TIN\t3\t12\nBrand#x\tSMALL BURNISHED TIN\t36\t12\nBrand#x\tSMALL PLATED BRASS\t23\t12\nBrand#x\tSMALL PLATED BRASS\t49\t12\nBrand#x\tSMALL PLATED STEEL\t3\t12\nBrand#x\tSMALL PLATED STEEL\t45\t12\nBrand#x\tSMALL POLISHED BRASS\t3\t12\nBrand#x\tSMALL 
POLISHED COPPER\t14\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t12\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t12\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t12\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t12\nBrand#x\tSTANDARD ANODIZED TIN\t9\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t12\nBrand#x\tSTANDARD BRUSHED TIN\t49\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t12\nBrand#x\tSTANDARD BURNISHED TIN\t36\t12\nBrand#x\tSTANDARD PLATED COPPER\t14\t12\nBrand#x\tSTANDARD PLATED COPPER\t45\t12\nBrand#x\tSTANDARD PLATED TIN\t9\t12\nBrand#x\tSTANDARD PLATED TIN\t23\t12\nBrand#x\tSTANDARD POLISHED BRASS\t14\t12\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t12\nBrand#x\tECONOMY ANODIZED BRASS\t9\t12\nBrand#x\tECONOMY ANODIZED BRASS\t36\t12\nBrand#x\tECONOMY ANODIZED BRASS\t45\t12\nBrand#x\tECONOMY ANODIZED COPPER\t19\t12\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t12\nBrand#x\tECONOMY ANODIZED TIN\t9\t12\nBrand#x\tECONOMY BRUSHED STEEL\t36\t12\nBrand#x\tECONOMY BRUSHED STEEL\t45\t12\nBrand#x\tECONOMY BRUSHED TIN\t36\t12\nBrand#x\tECONOMY BURNISHED COPPER\t45\t12\nBrand#x\tECONOMY PLATED STEEL\t19\t12\nBrand#x\tECONOMY PLATED STEEL\t23\t12\nBrand#x\tECONOMY PLATED TIN\t45\t12\nBrand#x\tLARGE ANODIZED COPPER\t19\t12\nBrand#x\tLARGE BRUSHED COPPER\t36\t12\nBrand#x\tLARGE BRUSHED NICKEL\t49\t12\nBrand#x\tLARGE BURNISHED STEEL\t3\t12\nBrand#x\tLARGE PLATED COPPER\t9\t12\nBrand#x\tLARGE PLATED NICKEL\t45\t12\nBrand#x\tLARGE PLATED TIN\t19\t12\nBrand#x\tLARGE PLATED TIN\t23\t12\nBrand#x\tLARGE POLISHED COPPER\t3\t12\nBrand#x\tLARGE POLISHED COPPER\t23\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t12\nBrand#x\tMEDIUM ANODIZED TIN\t14\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t12\nBrand#x\tMEDIUM BRUSHED TIN\t49\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t12\nBrand#x\tMEDIUM PLATED NICKEL\t45\t12\nBrand#x\tPROMO ANODIZED BRASS\t3\t12\nBrand#x\tPROMO ANODIZED COPPER\t23\t12\nBrand#x\tPROMO ANODIZED NICKEL\t9\t12\nBrand#x\tPROMO ANODIZED NICKEL\t14\t12\nBrand#x\tPROMO ANODIZED TIN\t23\t12\nBrand#x\tPROMO ANODIZED TIN\t49\t12\nBrand#x\tPROMO BRUSHED BRASS\t23\t12\nBrand#x\tPROMO BRUSHED COPPER\t19\t12\nBrand#x\tPROMO BRUSHED STEEL\t36\t12\nBrand#x\tPROMO BRUSHED TIN\t3\t12\nBrand#x\tPROMO BURNISHED COPPER\t3\t12\nBrand#x\tPROMO BURNISHED COPPER\t19\t12\nBrand#x\tPROMO PLATED COPPER\t9\t12\nBrand#x\tPROMO PLATED STEEL\t45\t12\nBrand#x\tPROMO PLATED TIN\t14\t12\nBrand#x\tSMALL ANODIZED NICKEL\t9\t12\nBrand#x\tSMALL BRUSHED BRASS\t19\t12\nBrand#x\tSMALL BRUSHED NICKEL\t3\t12\nBrand#x\tSMALL BRUSHED TIN\t19\t12\nBrand#x\tSMALL BURNISHED NICKEL\t14\t12\nBrand#x\tSMALL BURNISHED NICKEL\t23\t12\nBrand#x\tSMALL BURNISHED STEEL\t45\t12\nBrand#x\tSMALL BURNISHED STEEL\t49\t12\nBrand#x\tSMALL BURNISHED TIN\t23\t12\nBrand#x\tSMALL PLATED COPPER\t14\t12\nBrand#x\tSMALL PLATED COPPER\t36\t12\nBrand#x\tSMALL PLATED NICKEL\t14\t12\nBrand#x\tSMALL PLATED STEEL\t9\t12\nBrand#x\tSMALL POLISHED COPPER\t23\t12\nBrand#x\tSMALL POLISHED NICKEL\t19\t12\nBrand#x\tSMALL POLISHED NICKEL\t23\t12\nBrand#x\tSMALL POLISHED STEEL\t3\t12\nBrand#x\tSMALL POLISHED TIN\t36\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t12\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t12\nBrand#x\tSTANDARD ANODIZED 
NICKEL\t23\t12\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t12\nBrand#x\tSTANDARD ANODIZED TIN\t19\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t12\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t12\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t12\nBrand#x\tSTANDARD BRUSHED TIN\t36\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t12\nBrand#x\tSTANDARD PLATED BRASS\t3\t12\nBrand#x\tSTANDARD POLISHED COPPER\t45\t12\nBrand#x\tSTANDARD POLISHED STEEL\t36\t12\nBrand#x\tSTANDARD POLISHED STEEL\t45\t12\nBrand#x\tSTANDARD POLISHED TIN\t3\t12\nBrand#x\tECONOMY ANODIZED COPPER\t19\t12\nBrand#x\tECONOMY ANODIZED STEEL\t14\t12\nBrand#x\tECONOMY ANODIZED TIN\t9\t12\nBrand#x\tECONOMY ANODIZED TIN\t19\t12\nBrand#x\tECONOMY BURNISHED COPPER\t14\t12\nBrand#x\tECONOMY BURNISHED COPPER\t19\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t12\nBrand#x\tECONOMY PLATED STEEL\t45\t12\nBrand#x\tECONOMY POLISHED BRASS\t14\t12\nBrand#x\tECONOMY POLISHED BRASS\t19\t12\nBrand#x\tECONOMY POLISHED COPPER\t3\t12\nBrand#x\tECONOMY POLISHED COPPER\t14\t12\nBrand#x\tECONOMY POLISHED COPPER\t19\t12\nBrand#x\tLARGE ANODIZED COPPER\t14\t12\nBrand#x\tLARGE ANODIZED NICKEL\t3\t12\nBrand#x\tLARGE BRUSHED BRASS\t23\t12\nBrand#x\tLARGE BRUSHED STEEL\t23\t12\nBrand#x\tLARGE BURNISHED BRASS\t14\t12\nBrand#x\tLARGE BURNISHED NICKEL\t23\t12\nBrand#x\tLARGE PLATED BRASS\t23\t12\nBrand#x\tLARGE PLATED COPPER\t19\t12\nBrand#x\tLARGE PLATED NICKEL\t19\t12\nBrand#x\tLARGE PLATED NICKEL\t45\t12\nBrand#x\tLARGE PLATED STEEL\t49\t12\nBrand#x\tLARGE PLATED TIN\t3\t12\nBrand#x\tLARGE PLATED TIN\t19\t12\nBrand#x\tLARGE POLISHED BRASS\t3\t12\nBrand#x\tLARGE POLISHED BRASS\t9\t12\nBrand#x\tLARGE POLISHED BRASS\t23\t12\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t12\nBrand#x\tMEDIUM ANODIZED TIN\t3\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t12\nBrand#x\tMEDIUM BURNISHED TIN\t23\t12\nBrand#x\tMEDIUM PLATED BRASS\t14\t12\nBrand#x\tMEDIUM PLATED TIN\t36\t12\nBrand#x\tMEDIUM PLATED TIN\t49\t12\nBrand#x\tPROMO ANODIZED BRASS\t9\t12\nBrand#x\tPROMO ANODIZED BRASS\t23\t12\nBrand#x\tPROMO ANODIZED COPPER\t14\t12\nBrand#x\tPROMO ANODIZED COPPER\t49\t12\nBrand#x\tPROMO ANODIZED STEEL\t36\t12\nBrand#x\tPROMO ANODIZED TIN\t3\t12\nBrand#x\tPROMO BRUSHED COPPER\t49\t12\nBrand#x\tPROMO BRUSHED NICKEL\t3\t12\nBrand#x\tPROMO BRUSHED TIN\t36\t12\nBrand#x\tPROMO BURNISHED NICKEL\t36\t12\nBrand#x\tPROMO BURNISHED STEEL\t19\t12\nBrand#x\tPROMO BURNISHED STEEL\t45\t12\nBrand#x\tPROMO BURNISHED TIN\t19\t12\nBrand#x\tPROMO BURNISHED TIN\t45\t12\nBrand#x\tPROMO PLATED BRASS\t14\t12\nBrand#x\tPROMO PLATED NICKEL\t14\t12\nBrand#x\tPROMO PLATED NICKEL\t49\t12\nBrand#x\tPROMO PLATED STEEL\t9\t12\nBrand#x\tPROMO PLATED TIN\t3\t12\nBrand#x\tPROMO POLISHED BRASS\t23\t12\nBrand#x\tPROMO POLISHED COPPER\t45\t12\nBrand#x\tPROMO POLISHED NICKEL\t49\t12\nBrand#x\tSMALL ANODIZED COPPER\t36\t12\nBrand#x\tSMALL ANODIZED NICKEL\t19\t12\nBrand#x\tSMALL ANODIZED NICKEL\t36\t12\nBrand#x\tSMALL BRUSHED BRASS\t14\t12\nBrand#x\tSMALL BRUSHED BRASS\t19\t12\nBrand#x\tSMALL BRUSHED COPPER\t9\t12\nBrand#x\tSMALL BRUSHED STEEL\t45\t12\nBrand#x\tSMALL BURNISHED BRASS\t14\t12\nBrand#x\tSMALL BURNISHED COPPER\t23\t12\nBrand#x\tSMALL BURNISHED NICKEL\t9\t12\nBrand#x\tSMALL BURNISHED 
NICKEL\t36\t12\nBrand#x\tSMALL BURNISHED NICKEL\t49\t12\nBrand#x\tSMALL BURNISHED STEEL\t23\t12\nBrand#x\tSMALL BURNISHED TIN\t3\t12\nBrand#x\tSMALL PLATED BRASS\t36\t12\nBrand#x\tSMALL PLATED NICKEL\t19\t12\nBrand#x\tSMALL PLATED NICKEL\t23\t12\nBrand#x\tSMALL POLISHED NICKEL\t9\t12\nBrand#x\tSMALL POLISHED NICKEL\t19\t12\nBrand#x\tSTANDARD ANODIZED TIN\t14\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t12\nBrand#x\tSTANDARD BRUSHED TIN\t36\t12\nBrand#x\tSTANDARD BRUSHED TIN\t49\t12\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t12\nBrand#x\tSTANDARD BURNISHED TIN\t9\t12\nBrand#x\tSTANDARD PLATED COPPER\t45\t12\nBrand#x\tSTANDARD PLATED NICKEL\t3\t12\nBrand#x\tSTANDARD PLATED NICKEL\t45\t12\nBrand#x\tSTANDARD PLATED STEEL\t9\t12\nBrand#x\tSTANDARD PLATED TIN\t23\t12\nBrand#x\tSTANDARD POLISHED BRASS\t36\t12\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t12\nBrand#x\tECONOMY ANODIZED COPPER\t23\t12\nBrand#x\tECONOMY ANODIZED COPPER\t36\t12\nBrand#x\tECONOMY ANODIZED STEEL\t9\t12\nBrand#x\tECONOMY BRUSHED BRASS\t3\t12\nBrand#x\tECONOMY BRUSHED BRASS\t23\t12\nBrand#x\tECONOMY BRUSHED COPPER\t45\t12\nBrand#x\tECONOMY BRUSHED STEEL\t19\t12\nBrand#x\tECONOMY BURNISHED BRASS\t49\t12\nBrand#x\tECONOMY BURNISHED COPPER\t45\t12\nBrand#x\tECONOMY BURNISHED TIN\t14\t12\nBrand#x\tECONOMY PLATED BRASS\t36\t12\nBrand#x\tECONOMY PLATED BRASS\t45\t12\nBrand#x\tECONOMY PLATED STEEL\t36\t12\nBrand#x\tECONOMY PLATED TIN\t3\t12\nBrand#x\tECONOMY PLATED TIN\t23\t12\nBrand#x\tECONOMY POLISHED STEEL\t14\t12\nBrand#x\tECONOMY POLISHED STEEL\t36\t12\nBrand#x\tECONOMY POLISHED STEEL\t45\t12\nBrand#x\tECONOMY POLISHED STEEL\t49\t12\nBrand#x\tECONOMY POLISHED TIN\t19\t12\nBrand#x\tECONOMY POLISHED TIN\t36\t12\nBrand#x\tLARGE ANODIZED COPPER\t45\t12\nBrand#x\tLARGE ANODIZED NICKEL\t9\t12\nBrand#x\tLARGE ANODIZED STEEL\t19\t12\nBrand#x\tLARGE BRUSHED BRASS\t9\t12\nBrand#x\tLARGE BRUSHED BRASS\t19\t12\nBrand#x\tLARGE BRUSHED NICKEL\t23\t12\nBrand#x\tLARGE BRUSHED STEEL\t19\t12\nBrand#x\tLARGE BURNISHED BRASS\t9\t12\nBrand#x\tLARGE BURNISHED STEEL\t14\t12\nBrand#x\tLARGE PLATED COPPER\t3\t12\nBrand#x\tLARGE PLATED NICKEL\t45\t12\nBrand#x\tLARGE POLISHED COPPER\t49\t12\nBrand#x\tLARGE POLISHED STEEL\t36\t12\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t12\nBrand#x\tMEDIUM ANODIZED TIN\t23\t12\nBrand#x\tMEDIUM ANODIZED TIN\t36\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t12\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t12\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t12\nBrand#x\tMEDIUM PLATED BRASS\t49\t12\nBrand#x\tMEDIUM PLATED COPPER\t14\t12\nBrand#x\tMEDIUM PLATED COPPER\t23\t12\nBrand#x\tMEDIUM PLATED STEEL\t14\t12\nBrand#x\tMEDIUM PLATED TIN\t45\t12\nBrand#x\tPROMO ANODIZED COPPER\t14\t12\nBrand#x\tPROMO BRUSHED COPPER\t3\t12\nBrand#x\tPROMO BURNISHED COPPER\t36\t12\nBrand#x\tPROMO BURNISHED NICKEL\t36\t12\nBrand#x\tPROMO BURNISHED STEEL\t36\t12\nBrand#x\tPROMO BURNISHED STEEL\t49\t12\nBrand#x\tPROMO PLATED COPPER\t14\t12\nBrand#x\tPROMO PLATED TIN\t3\t12\nBrand#x\tPROMO PLATED TIN\t23\t12\nBrand#x\tPROMO POLISHED COPPER\t49\t12\nBrand#x\tPROMO POLISHED NICKEL\t9\t12\nBrand#x\tPROMO POLISHED TIN\t14\t12\nBrand#x\tSMALL ANODIZED COPPER\t36\t12\nBrand#x\tSMALL ANODIZED NICKEL\t36\t12\nBrand#x\tSMALL ANODIZED STEEL\t19\t12\nBrand#x\tSMALL BRUSHED COPPER\t14\t12\nBrand#x\tSMALL BURNISHED BRASS\t9\t12\nBrand#x\tSMALL BURNISHED COPPER\t9\t12\nBrand#x\tSMALL BURNISHED NICKEL\t36\t12\nBrand#x\tSMALL BURNISHED 
STEEL\t19\t12\nBrand#x\tSMALL PLATED COPPER\t3\t12\nBrand#x\tSMALL POLISHED BRASS\t3\t12\nBrand#x\tSMALL POLISHED BRASS\t9\t12\nBrand#x\tSMALL POLISHED STEEL\t36\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t12\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t12\nBrand#x\tSTANDARD BRUSHED TIN\t14\t12\nBrand#x\tSTANDARD BRUSHED TIN\t19\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t12\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t12\nBrand#x\tSTANDARD PLATED BRASS\t3\t12\nBrand#x\tSTANDARD PLATED BRASS\t36\t12\nBrand#x\tSTANDARD PLATED COPPER\t36\t12\nBrand#x\tSTANDARD PLATED COPPER\t45\t12\nBrand#x\tSTANDARD POLISHED BRASS\t19\t12\nBrand#x\tSTANDARD POLISHED COPPER\t14\t12\nBrand#x\tSTANDARD POLISHED TIN\t19\t12\nBrand#x\tECONOMY ANODIZED COPPER\t19\t12\nBrand#x\tECONOMY BRUSHED STEEL\t19\t12\nBrand#x\tECONOMY BRUSHED STEEL\t45\t12\nBrand#x\tECONOMY BRUSHED TIN\t45\t12\nBrand#x\tECONOMY BURNISHED BRASS\t19\t12\nBrand#x\tECONOMY BURNISHED BRASS\t45\t12\nBrand#x\tECONOMY BURNISHED COPPER\t14\t12\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t12\nBrand#x\tECONOMY POLISHED NICKEL\t14\t12\nBrand#x\tECONOMY POLISHED NICKEL\t45\t12\nBrand#x\tECONOMY POLISHED TIN\t23\t12\nBrand#x\tLARGE ANODIZED TIN\t36\t12\nBrand#x\tLARGE BRUSHED COPPER\t9\t12\nBrand#x\tLARGE BRUSHED COPPER\t23\t12\nBrand#x\tLARGE BURNISHED BRASS\t45\t12\nBrand#x\tLARGE BURNISHED COPPER\t3\t12\nBrand#x\tLARGE BURNISHED COPPER\t45\t12\nBrand#x\tLARGE BURNISHED NICKEL\t14\t12\nBrand#x\tLARGE PLATED COPPER\t9\t12\nBrand#x\tLARGE PLATED COPPER\t45\t12\nBrand#x\tLARGE PLATED STEEL\t49\t12\nBrand#x\tLARGE POLISHED BRASS\t23\t12\nBrand#x\tLARGE POLISHED COPPER\t3\t12\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t12\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t12\nBrand#x\tPROMO ANODIZED COPPER\t49\t12\nBrand#x\tPROMO ANODIZED STEEL\t19\t12\nBrand#x\tPROMO BRUSHED BRASS\t14\t12\nBrand#x\tPROMO BRUSHED COPPER\t14\t12\nBrand#x\tPROMO BRUSHED STEEL\t14\t12\nBrand#x\tPROMO BRUSHED STEEL\t45\t12\nBrand#x\tPROMO BRUSHED TIN\t14\t12\nBrand#x\tPROMO BURNISHED BRASS\t9\t12\nBrand#x\tPROMO BURNISHED COPPER\t49\t12\nBrand#x\tPROMO BURNISHED NICKEL\t23\t12\nBrand#x\tPROMO BURNISHED NICKEL\t36\t12\nBrand#x\tPROMO BURNISHED STEEL\t23\t12\nBrand#x\tPROMO BURNISHED TIN\t9\t12\nBrand#x\tPROMO BURNISHED TIN\t23\t12\nBrand#x\tPROMO PLATED BRASS\t23\t12\nBrand#x\tPROMO PLATED STEEL\t9\t12\nBrand#x\tPROMO PLATED TIN\t3\t12\nBrand#x\tPROMO PLATED TIN\t49\t12\nBrand#x\tPROMO POLISHED STEEL\t19\t12\nBrand#x\tPROMO POLISHED STEEL\t45\t12\nBrand#x\tPROMO POLISHED TIN\t19\t12\nBrand#x\tSMALL ANODIZED COPPER\t49\t12\nBrand#x\tSMALL BRUSHED BRASS\t23\t12\nBrand#x\tSMALL BRUSHED BRASS\t36\t12\nBrand#x\tSMALL BRUSHED COPPER\t19\t12\nBrand#x\tSMALL BRUSHED TIN\t14\t12\nBrand#x\tSMALL BURNISHED BRASS\t3\t12\nBrand#x\tSMALL BURNISHED COPPER\t49\t12\nBrand#x\tSMALL BURNISHED NICKEL\t14\t12\nBrand#x\tSMALL BURNISHED STEEL\t19\t12\nBrand#x\tSMALL BURNISHED TIN\t9\t12\nBrand#x\tSMALL PLATED BRASS\t23\t12\nBrand#x\tSMALL PLATED COPPER\t36\t12\nBrand#x\tSMALL PLATED NICKEL\t36\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t12\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t12\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t12\nBrand#x\tSTANDARD PLATED BRASS\t45\t12\nBrand#x\tSTANDARD PLATED 
COPPER\t9\t12\nBrand#x\tSTANDARD PLATED COPPER\t19\t12\nBrand#x\tSTANDARD PLATED NICKEL\t49\t12\nBrand#x\tSTANDARD PLATED TIN\t45\t12\nBrand#x\tSTANDARD POLISHED STEEL\t49\t12\nBrand#x\tECONOMY BRUSHED BRASS\t3\t12\nBrand#x\tECONOMY BRUSHED COPPER\t9\t12\nBrand#x\tECONOMY BRUSHED COPPER\t14\t12\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t12\nBrand#x\tECONOMY BRUSHED STEEL\t3\t12\nBrand#x\tECONOMY BURNISHED COPPER\t9\t12\nBrand#x\tECONOMY PLATED STEEL\t9\t12\nBrand#x\tECONOMY POLISHED STEEL\t3\t12\nBrand#x\tLARGE ANODIZED NICKEL\t9\t12\nBrand#x\tLARGE BRUSHED COPPER\t14\t12\nBrand#x\tLARGE BRUSHED COPPER\t23\t12\nBrand#x\tLARGE BRUSHED COPPER\t49\t12\nBrand#x\tLARGE BURNISHED COPPER\t14\t12\nBrand#x\tLARGE BURNISHED NICKEL\t14\t12\nBrand#x\tLARGE PLATED BRASS\t45\t12\nBrand#x\tLARGE PLATED NICKEL\t14\t12\nBrand#x\tLARGE PLATED STEEL\t23\t12\nBrand#x\tLARGE POLISHED NICKEL\t3\t12\nBrand#x\tLARGE POLISHED STEEL\t45\t12\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t12\nBrand#x\tMEDIUM ANODIZED TIN\t49\t12\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t12\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t12\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t12\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t12\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t12\nBrand#x\tMEDIUM PLATED NICKEL\t23\t12\nBrand#x\tMEDIUM PLATED STEEL\t3\t12\nBrand#x\tMEDIUM PLATED TIN\t19\t12\nBrand#x\tPROMO ANODIZED TIN\t19\t12\nBrand#x\tPROMO BRUSHED BRASS\t23\t12\nBrand#x\tPROMO BRUSHED BRASS\t45\t12\nBrand#x\tPROMO BRUSHED NICKEL\t23\t12\nBrand#x\tPROMO BRUSHED TIN\t9\t12\nBrand#x\tPROMO BURNISHED STEEL\t23\t12\nBrand#x\tPROMO POLISHED BRASS\t45\t12\nBrand#x\tSMALL ANODIZED STEEL\t23\t12\nBrand#x\tSMALL ANODIZED STEEL\t45\t12\nBrand#x\tSMALL BRUSHED STEEL\t36\t12\nBrand#x\tSMALL BRUSHED TIN\t3\t12\nBrand#x\tSMALL BURNISHED BRASS\t49\t12\nBrand#x\tSMALL BURNISHED TIN\t49\t12\nBrand#x\tSMALL PLATED NICKEL\t36\t12\nBrand#x\tSMALL PLATED NICKEL\t45\t12\nBrand#x\tSMALL PLATED STEEL\t9\t12\nBrand#x\tSMALL PLATED STEEL\t19\t12\nBrand#x\tSMALL POLISHED STEEL\t14\t12\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t12\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t12\nBrand#x\tSTANDARD ANODIZED TIN\t9\t12\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t12\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t12\nBrand#x\tSTANDARD BRUSHED TIN\t36\t12\nBrand#x\tSTANDARD BRUSHED TIN\t45\t12\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t12\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t12\nBrand#x\tSTANDARD BURNISHED TIN\t3\t12\nBrand#x\tSTANDARD PLATED BRASS\t3\t12\nBrand#x\tSTANDARD PLATED COPPER\t3\t12\nBrand#x\tSTANDARD PLATED COPPER\t19\t12\nBrand#x\tSTANDARD PLATED NICKEL\t9\t12\nBrand#x\tSTANDARD PLATED TIN\t19\t12\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t12\nBrand#x\tECONOMY POLISHED BRASS\t14\t11\nBrand#x\tSMALL PLATED BRASS\t14\t11\nBrand#x\tMEDIUM BURNISHED TIN\t45\t11\nBrand#x\tSMALL BURNISHED COPPER\t23\t11\nBrand#x\tSMALL PLATED NICKEL\t45\t11\nBrand#x\tECONOMY PLATED COPPER\t3\t11\nBrand#x\tSMALL BRUSHED TIN\t19\t11\nBrand#x\tLARGE BRUSHED NICKEL\t23\t11\nBrand#x\tPROMO BRUSHED NICKEL\t9\t11\nBrand#x\tSMALL PLATED TIN\t23\t11\nBrand#x\tECONOMY POLISHED COPPER\t14\t11\nBrand#x\tSMALL PLATED NICKEL\t45\t11\nBrand#x\tPROMO ANODIZED TIN\t19\t11\nBrand#x\tPROMO BRUSHED NICKEL\t9\t11\nBrand#x\tLARGE PLATED STEEL\t3\t11\nBrand#x\tECONOMY ANODIZED COPPER\t36\t11\nBrand#x\tSMALL POLISHED BRASS\t49\t11\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t11\nBrand#x\tPROMO BRUSHED NICKEL\t3\t11\nBrand#x\tLARGE PLATED BRASS\t19\t11\nBrand#x\tLARGE POLISHED NICKEL\t3\t11\nBrand#x\tPROMO ANODIZED STEEL\t45\t11\nBrand#x\tSTANDARD 
POLISHED STEEL\t19\t11\nBrand#x\tECONOMY ANODIZED BRASS\t19\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t8\nBrand#x\tECONOMY ANODIZED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED COPPER\t9\t8\nBrand#x\tECONOMY BRUSHED COPPER\t49\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t8\nBrand#x\tECONOMY BRUSHED STEEL\t9\t8\nBrand#x\tECONOMY BRUSHED STEEL\t14\t8\nBrand#x\tECONOMY BRUSHED STEEL\t23\t8\nBrand#x\tECONOMY BRUSHED TIN\t19\t8\nBrand#x\tECONOMY BRUSHED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED TIN\t49\t8\nBrand#x\tECONOMY BURNISHED BRASS\t23\t8\nBrand#x\tECONOMY BURNISHED COPPER\t9\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t8\nBrand#x\tECONOMY BURNISHED TIN\t9\t8\nBrand#x\tECONOMY BURNISHED TIN\t14\t8\nBrand#x\tECONOMY BURNISHED TIN\t49\t8\nBrand#x\tECONOMY PLATED COPPER\t14\t8\nBrand#x\tECONOMY PLATED COPPER\t49\t8\nBrand#x\tECONOMY PLATED NICKEL\t23\t8\nBrand#x\tECONOMY PLATED NICKEL\t36\t8\nBrand#x\tECONOMY PLATED NICKEL\t45\t8\nBrand#x\tECONOMY PLATED STEEL\t23\t8\nBrand#x\tECONOMY PLATED TIN\t49\t8\nBrand#x\tECONOMY POLISHED BRASS\t3\t8\nBrand#x\tECONOMY POLISHED COPPER\t45\t8\nBrand#x\tECONOMY POLISHED COPPER\t49\t8\nBrand#x\tECONOMY POLISHED NICKEL\t3\t8\nBrand#x\tECONOMY POLISHED NICKEL\t9\t8\nBrand#x\tECONOMY POLISHED NICKEL\t14\t8\nBrand#x\tECONOMY POLISHED NICKEL\t23\t8\nBrand#x\tECONOMY POLISHED STEEL\t19\t8\nBrand#x\tECONOMY POLISHED TIN\t3\t8\nBrand#x\tECONOMY POLISHED TIN\t14\t8\nBrand#x\tECONOMY POLISHED TIN\t36\t8\nBrand#x\tLARGE ANODIZED BRASS\t49\t8\nBrand#x\tLARGE ANODIZED COPPER\t23\t8\nBrand#x\tLARGE ANODIZED NICKEL\t36\t8\nBrand#x\tLARGE ANODIZED NICKEL\t45\t8\nBrand#x\tLARGE ANODIZED NICKEL\t49\t8\nBrand#x\tLARGE ANODIZED STEEL\t9\t8\nBrand#x\tLARGE ANODIZED TIN\t23\t8\nBrand#x\tLARGE ANODIZED TIN\t45\t8\nBrand#x\tLARGE BRUSHED BRASS\t14\t8\nBrand#x\tLARGE BRUSHED BRASS\t23\t8\nBrand#x\tLARGE BRUSHED COPPER\t19\t8\nBrand#x\tLARGE BRUSHED COPPER\t23\t8\nBrand#x\tLARGE BRUSHED COPPER\t36\t8\nBrand#x\tLARGE BRUSHED NICKEL\t3\t8\nBrand#x\tLARGE BRUSHED NICKEL\t14\t8\nBrand#x\tLARGE BRUSHED NICKEL\t19\t8\nBrand#x\tLARGE BRUSHED STEEL\t49\t8\nBrand#x\tLARGE BRUSHED TIN\t14\t8\nBrand#x\tLARGE BRUSHED TIN\t23\t8\nBrand#x\tLARGE BURNISHED BRASS\t14\t8\nBrand#x\tLARGE BURNISHED BRASS\t23\t8\nBrand#x\tLARGE BURNISHED BRASS\t45\t8\nBrand#x\tLARGE BURNISHED BRASS\t49\t8\nBrand#x\tLARGE BURNISHED COPPER\t9\t8\nBrand#x\tLARGE BURNISHED COPPER\t36\t8\nBrand#x\tLARGE BURNISHED NICKEL\t45\t8\nBrand#x\tLARGE BURNISHED STEEL\t36\t8\nBrand#x\tLARGE BURNISHED STEEL\t49\t8\nBrand#x\tLARGE BURNISHED TIN\t14\t8\nBrand#x\tLARGE BURNISHED TIN\t23\t8\nBrand#x\tLARGE PLATED BRASS\t14\t8\nBrand#x\tLARGE PLATED BRASS\t23\t8\nBrand#x\tLARGE PLATED NICKEL\t3\t8\nBrand#x\tLARGE PLATED NICKEL\t36\t8\nBrand#x\tLARGE PLATED STEEL\t3\t8\nBrand#x\tLARGE PLATED STEEL\t23\t8\nBrand#x\tLARGE PLATED STEEL\t36\t8\nBrand#x\tLARGE PLATED TIN\t9\t8\nBrand#x\tLARGE PLATED TIN\t14\t8\nBrand#x\tLARGE POLISHED BRASS\t49\t8\nBrand#x\tLARGE POLISHED COPPER\t14\t8\nBrand#x\tLARGE POLISHED NICKEL\t14\t8\nBrand#x\tLARGE POLISHED STEEL\t36\t8\nBrand#x\tLARGE POLISHED TIN\t3\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t8\nBrand#x\tMEDIUM ANODIZED 
STEEL\t9\t8\nBrand#x\tMEDIUM ANODIZED TIN\t23\t8\nBrand#x\tMEDIUM ANODIZED TIN\t49\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t36\t8\nBrand#x\tMEDIUM PLATED BRASS\t3\t8\nBrand#x\tMEDIUM PLATED BRASS\t36\t8\nBrand#x\tMEDIUM PLATED NICKEL\t14\t8\nBrand#x\tMEDIUM PLATED NICKEL\t45\t8\nBrand#x\tMEDIUM PLATED STEEL\t3\t8\nBrand#x\tMEDIUM PLATED STEEL\t9\t8\nBrand#x\tMEDIUM PLATED STEEL\t23\t8\nBrand#x\tMEDIUM PLATED STEEL\t36\t8\nBrand#x\tMEDIUM PLATED TIN\t3\t8\nBrand#x\tMEDIUM PLATED TIN\t19\t8\nBrand#x\tMEDIUM PLATED TIN\t23\t8\nBrand#x\tMEDIUM PLATED TIN\t45\t8\nBrand#x\tPROMO ANODIZED COPPER\t14\t8\nBrand#x\tPROMO ANODIZED NICKEL\t3\t8\nBrand#x\tPROMO ANODIZED NICKEL\t45\t8\nBrand#x\tPROMO ANODIZED STEEL\t23\t8\nBrand#x\tPROMO ANODIZED STEEL\t49\t8\nBrand#x\tPROMO ANODIZED TIN\t36\t8\nBrand#x\tPROMO BRUSHED BRASS\t3\t8\nBrand#x\tPROMO BRUSHED BRASS\t36\t8\nBrand#x\tPROMO BRUSHED COPPER\t14\t8\nBrand#x\tPROMO BRUSHED COPPER\t19\t8\nBrand#x\tPROMO BRUSHED NICKEL\t19\t8\nBrand#x\tPROMO BRUSHED STEEL\t49\t8\nBrand#x\tPROMO BRUSHED TIN\t19\t8\nBrand#x\tPROMO BRUSHED TIN\t36\t8\nBrand#x\tPROMO BURNISHED BRASS\t3\t8\nBrand#x\tPROMO BURNISHED BRASS\t19\t8\nBrand#x\tPROMO BURNISHED BRASS\t36\t8\nBrand#x\tPROMO BURNISHED BRASS\t49\t8\nBrand#x\tPROMO BURNISHED COPPER\t14\t8\nBrand#x\tPROMO BURNISHED NICKEL\t3\t8\nBrand#x\tPROMO BURNISHED NICKEL\t14\t8\nBrand#x\tPROMO BURNISHED STEEL\t14\t8\nBrand#x\tPROMO BURNISHED STEEL\t19\t8\nBrand#x\tPROMO BURNISHED STEEL\t36\t8\nBrand#x\tPROMO BURNISHED STEEL\t49\t8\nBrand#x\tPROMO PLATED BRASS\t23\t8\nBrand#x\tPROMO PLATED NICKEL\t14\t8\nBrand#x\tPROMO PLATED NICKEL\t49\t8\nBrand#x\tPROMO PLATED STEEL\t19\t8\nBrand#x\tPROMO PLATED STEEL\t23\t8\nBrand#x\tPROMO POLISHED BRASS\t3\t8\nBrand#x\tPROMO POLISHED BRASS\t19\t8\nBrand#x\tPROMO POLISHED BRASS\t36\t8\nBrand#x\tPROMO POLISHED COPPER\t45\t8\nBrand#x\tPROMO POLISHED TIN\t3\t8\nBrand#x\tPROMO POLISHED TIN\t9\t8\nBrand#x\tPROMO POLISHED TIN\t49\t8\nBrand#x\tSMALL ANODIZED COPPER\t19\t8\nBrand#x\tSMALL ANODIZED NICKEL\t49\t8\nBrand#x\tSMALL ANODIZED STEEL\t3\t8\nBrand#x\tSMALL ANODIZED STEEL\t14\t8\nBrand#x\tSMALL ANODIZED TIN\t9\t8\nBrand#x\tSMALL ANODIZED TIN\t19\t8\nBrand#x\tSMALL BRUSHED BRASS\t45\t8\nBrand#x\tSMALL BRUSHED BRASS\t49\t8\nBrand#x\tSMALL BRUSHED COPPER\t14\t8\nBrand#x\tSMALL BRUSHED COPPER\t19\t8\nBrand#x\tSMALL BRUSHED NICKEL\t3\t8\nBrand#x\tSMALL BRUSHED NICKEL\t45\t8\nBrand#x\tSMALL BRUSHED NICKEL\t49\t8\nBrand#x\tSMALL BRUSHED TIN\t14\t8\nBrand#x\tSMALL BURNISHED COPPER\t23\t8\nBrand#x\tSMALL BURNISHED COPPER\t36\t8\nBrand#x\tSMALL BURNISHED COPPER\t49\t8\nBrand#x\tSMALL BURNISHED STEEL\t3\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t36\t8\nBrand#x\tSMALL BURNISHED STEEL\t45\t8\nBrand#x\tSMALL BURNISHED TIN\t3\t8\nBrand#x\tSMALL BURNISHED TIN\t19\t8\nBrand#x\tSMALL BURNISHED TIN\t45\t8\nBrand#x\tSMALL PLATED BRASS\t3\t8\nBrand#x\tSMALL PLATED BRASS\t19\t8\nBrand#x\tSMALL PLATED BRASS\t36\t8\nBrand#x\tSMALL PLATED COPPER\t49\t8\nBrand#x\tSMALL PLATED NICKEL\t9\t8\nBrand#x\tSMALL PLATED NICKEL\t49\t8\nBrand#x\tSMALL PLATED STEEL\t9\t8\nBrand#x\tSMALL PLATED TIN\t9\t8\nBrand#x\tSMALL 
PLATED TIN\t19\t8\nBrand#x\tSMALL PLATED TIN\t45\t8\nBrand#x\tSMALL POLISHED BRASS\t23\t8\nBrand#x\tSMALL POLISHED COPPER\t36\t8\nBrand#x\tSMALL POLISHED NICKEL\t45\t8\nBrand#x\tSMALL POLISHED STEEL\t14\t8\nBrand#x\tSMALL POLISHED STEEL\t23\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t8\nBrand#x\tSTANDARD BRUSHED TIN\t19\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t8\nBrand#x\tSTANDARD BURNISHED TIN\t49\t8\nBrand#x\tSTANDARD PLATED BRASS\t19\t8\nBrand#x\tSTANDARD PLATED BRASS\t23\t8\nBrand#x\tSTANDARD PLATED BRASS\t49\t8\nBrand#x\tSTANDARD PLATED NICKEL\t36\t8\nBrand#x\tSTANDARD PLATED NICKEL\t45\t8\nBrand#x\tSTANDARD PLATED STEEL\t23\t8\nBrand#x\tSTANDARD PLATED STEEL\t45\t8\nBrand#x\tSTANDARD PLATED TIN\t36\t8\nBrand#x\tSTANDARD POLISHED BRASS\t9\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t8\nBrand#x\tSTANDARD POLISHED STEEL\t49\t8\nBrand#x\tECONOMY ANODIZED STEEL\t3\t8\nBrand#x\tECONOMY ANODIZED STEEL\t19\t8\nBrand#x\tECONOMY ANODIZED STEEL\t23\t8\nBrand#x\tECONOMY ANODIZED TIN\t23\t8\nBrand#x\tECONOMY BRUSHED COPPER\t3\t8\nBrand#x\tECONOMY BRUSHED COPPER\t14\t8\nBrand#x\tECONOMY BRUSHED COPPER\t19\t8\nBrand#x\tECONOMY BRUSHED COPPER\t49\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t8\nBrand#x\tECONOMY BRUSHED STEEL\t3\t8\nBrand#x\tECONOMY BRUSHED STEEL\t49\t8\nBrand#x\tECONOMY BRUSHED TIN\t9\t8\nBrand#x\tECONOMY BRUSHED TIN\t49\t8\nBrand#x\tECONOMY BURNISHED BRASS\t49\t8\nBrand#x\tECONOMY BURNISHED COPPER\t3\t8\nBrand#x\tECONOMY BURNISHED COPPER\t19\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t8\nBrand#x\tECONOMY BURNISHED STEEL\t3\t8\nBrand#x\tECONOMY BURNISHED TIN\t14\t8\nBrand#x\tECONOMY BURNISHED TIN\t19\t8\nBrand#x\tECONOMY PLATED BRASS\t19\t8\nBrand#x\tECONOMY PLATED BRASS\t49\t8\nBrand#x\tECONOMY PLATED COPPER\t23\t8\nBrand#x\tECONOMY PLATED STEEL\t23\t8\nBrand#x\tECONOMY PLATED TIN\t36\t8\nBrand#x\tECONOMY PLATED TIN\t49\t8\nBrand#x\tECONOMY POLISHED BRASS\t9\t8\nBrand#x\tECONOMY POLISHED BRASS\t14\t8\nBrand#x\tECONOMY POLISHED COPPER\t9\t8\nBrand#x\tECONOMY POLISHED COPPER\t49\t8\nBrand#x\tECONOMY POLISHED TIN\t3\t8\nBrand#x\tECONOMY POLISHED TIN\t19\t8\nBrand#x\tECONOMY POLISHED TIN\t36\t8\nBrand#x\tLARGE ANODIZED BRASS\t23\t8\nBrand#x\tLARGE ANODIZED BRASS\t36\t8\nBrand#x\tLARGE ANODIZED COPPER\t14\t8\nBrand#x\tLARGE ANODIZED COPPER\t45\t8\nBrand#x\tLARGE ANODIZED NICKEL\t9\t8\nBrand#x\tLARGE ANODIZED STEEL\t9\t8\nBrand#x\tLARGE ANODIZED STEEL\t49\t8\nBrand#x\tLARGE ANODIZED TIN\t14\t8\nBrand#x\tLARGE BRUSHED BRASS\t14\t8\nBrand#x\tLARGE BRUSHED NICKEL\t9\t8\nBrand#x\tLARGE BRUSHED STEEL\t3\t8\nBrand#x\tLARGE BRUSHED STEEL\t23\t8\nBrand#x\tLARGE BRUSHED STEEL\t49\t8\nBrand#x\tLARGE BRUSHED TIN\t9\t8\nBrand#x\tLARGE BRUSHED TIN\t45\t8\nBrand#x\tLARGE BURNISHED BRASS\t14\t8\nBrand#x\tLARGE BURNISHED BRASS\t45\t8\nBrand#x\tLARGE BURNISHED COPPER\t19\t8\nBrand#x\tLARGE BURNISHED COPPER\t49\t8\nBrand#x\tLARGE BURNISHED NICKEL\t3\t8\nBrand#x\tLARGE BURNISHED NICKEL\t14\t8\nBrand#x\tLARGE BURNISHED STEEL\t3\t8\nBrand#x\tLARGE BURNISHED STEEL\t45\t8\nBrand#x\tLARGE BURNISHED TIN\t9\t8\nBrand#x\tLARGE BURNISHED TIN\t14\t8\nBrand#x\tLARGE BURNISHED TIN\t45\t8\nBrand#x\tLARGE BURNISHED TIN\t49\t8\nBrand#x\tLARGE PLATED 
BRASS\t49\t8\nBrand#x\tLARGE PLATED COPPER\t3\t8\nBrand#x\tLARGE PLATED COPPER\t36\t8\nBrand#x\tLARGE PLATED COPPER\t45\t8\nBrand#x\tLARGE PLATED NICKEL\t49\t8\nBrand#x\tLARGE PLATED STEEL\t3\t8\nBrand#x\tLARGE PLATED STEEL\t36\t8\nBrand#x\tLARGE PLATED TIN\t14\t8\nBrand#x\tLARGE POLISHED BRASS\t9\t8\nBrand#x\tLARGE POLISHED BRASS\t19\t8\nBrand#x\tLARGE POLISHED COPPER\t9\t8\nBrand#x\tLARGE POLISHED COPPER\t36\t8\nBrand#x\tLARGE POLISHED NICKEL\t23\t8\nBrand#x\tLARGE POLISHED NICKEL\t36\t8\nBrand#x\tLARGE POLISHED NICKEL\t49\t8\nBrand#x\tLARGE POLISHED STEEL\t49\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t8\nBrand#x\tMEDIUM ANODIZED TIN\t9\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t14\t8\nBrand#x\tMEDIUM PLATED BRASS\t14\t8\nBrand#x\tMEDIUM PLATED BRASS\t49\t8\nBrand#x\tMEDIUM PLATED NICKEL\t9\t8\nBrand#x\tMEDIUM PLATED NICKEL\t36\t8\nBrand#x\tMEDIUM PLATED NICKEL\t49\t8\nBrand#x\tMEDIUM PLATED STEEL\t14\t8\nBrand#x\tMEDIUM PLATED STEEL\t23\t8\nBrand#x\tMEDIUM PLATED STEEL\t45\t8\nBrand#x\tMEDIUM PLATED TIN\t14\t8\nBrand#x\tMEDIUM PLATED TIN\t19\t8\nBrand#x\tMEDIUM PLATED TIN\t45\t8\nBrand#x\tPROMO ANODIZED BRASS\t49\t8\nBrand#x\tPROMO ANODIZED COPPER\t3\t8\nBrand#x\tPROMO ANODIZED COPPER\t36\t8\nBrand#x\tPROMO ANODIZED COPPER\t45\t8\nBrand#x\tPROMO ANODIZED COPPER\t49\t8\nBrand#x\tPROMO ANODIZED NICKEL\t14\t8\nBrand#x\tPROMO ANODIZED NICKEL\t23\t8\nBrand#x\tPROMO ANODIZED TIN\t19\t8\nBrand#x\tPROMO ANODIZED TIN\t36\t8\nBrand#x\tPROMO BRUSHED BRASS\t3\t8\nBrand#x\tPROMO BRUSHED BRASS\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t49\t8\nBrand#x\tPROMO BRUSHED COPPER\t9\t8\nBrand#x\tPROMO BRUSHED COPPER\t23\t8\nBrand#x\tPROMO BRUSHED NICKEL\t23\t8\nBrand#x\tPROMO BRUSHED STEEL\t23\t8\nBrand#x\tPROMO BURNISHED BRASS\t3\t8\nBrand#x\tPROMO BURNISHED COPPER\t3\t8\nBrand#x\tPROMO BURNISHED NICKEL\t9\t8\nBrand#x\tPROMO BURNISHED NICKEL\t49\t8\nBrand#x\tPROMO BURNISHED TIN\t9\t8\nBrand#x\tPROMO BURNISHED TIN\t14\t8\nBrand#x\tPROMO BURNISHED TIN\t45\t8\nBrand#x\tPROMO PLATED BRASS\t36\t8\nBrand#x\tPROMO PLATED BRASS\t45\t8\nBrand#x\tPROMO PLATED BRASS\t49\t8\nBrand#x\tPROMO PLATED COPPER\t23\t8\nBrand#x\tPROMO PLATED COPPER\t36\t8\nBrand#x\tPROMO PLATED NICKEL\t14\t8\nBrand#x\tPROMO PLATED STEEL\t3\t8\nBrand#x\tPROMO PLATED STEEL\t49\t8\nBrand#x\tPROMO PLATED TIN\t3\t8\nBrand#x\tPROMO PLATED TIN\t9\t8\nBrand#x\tPROMO PLATED TIN\t23\t8\nBrand#x\tPROMO PLATED TIN\t36\t8\nBrand#x\tPROMO POLISHED BRASS\t3\t8\nBrand#x\tPROMO POLISHED BRASS\t9\t8\nBrand#x\tPROMO POLISHED BRASS\t19\t8\nBrand#x\tPROMO POLISHED COPPER\t19\t8\nBrand#x\tPROMO POLISHED COPPER\t23\t8\nBrand#x\tPROMO POLISHED NICKEL\t3\t8\nBrand#x\tPROMO POLISHED NICKEL\t19\t8\nBrand#x\tPROMO POLISHED STEEL\t3\t8\nBrand#x\tPROMO POLISHED STEEL\t19\t8\nBrand#x\tPROMO POLISHED STEEL\t36\t8\nBrand#x\tPROMO POLISHED TIN\t23\t8\nBrand#x\tPROMO POLISHED TIN\t36\t8\nBrand#x\tSMALL ANODIZED BRASS\t36\t8\nBrand#x\tSMALL 
ANODIZED COPPER\t9\t8\nBrand#x\tSMALL ANODIZED STEEL\t9\t8\nBrand#x\tSMALL ANODIZED STEEL\t45\t8\nBrand#x\tSMALL ANODIZED STEEL\t49\t8\nBrand#x\tSMALL ANODIZED TIN\t14\t8\nBrand#x\tSMALL ANODIZED TIN\t19\t8\nBrand#x\tSMALL ANODIZED TIN\t23\t8\nBrand#x\tSMALL BRUSHED BRASS\t14\t8\nBrand#x\tSMALL BRUSHED BRASS\t45\t8\nBrand#x\tSMALL BRUSHED COPPER\t36\t8\nBrand#x\tSMALL BRUSHED NICKEL\t14\t8\nBrand#x\tSMALL BRUSHED NICKEL\t19\t8\nBrand#x\tSMALL BRUSHED NICKEL\t23\t8\nBrand#x\tSMALL BRUSHED NICKEL\t45\t8\nBrand#x\tSMALL BRUSHED STEEL\t49\t8\nBrand#x\tSMALL BRUSHED TIN\t9\t8\nBrand#x\tSMALL BURNISHED BRASS\t9\t8\nBrand#x\tSMALL BURNISHED COPPER\t36\t8\nBrand#x\tSMALL BURNISHED COPPER\t49\t8\nBrand#x\tSMALL BURNISHED NICKEL\t3\t8\nBrand#x\tSMALL BURNISHED NICKEL\t9\t8\nBrand#x\tSMALL BURNISHED NICKEL\t19\t8\nBrand#x\tSMALL BURNISHED NICKEL\t36\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED TIN\t23\t8\nBrand#x\tSMALL BURNISHED TIN\t45\t8\nBrand#x\tSMALL PLATED BRASS\t45\t8\nBrand#x\tSMALL PLATED COPPER\t19\t8\nBrand#x\tSMALL PLATED NICKEL\t23\t8\nBrand#x\tSMALL PLATED TIN\t45\t8\nBrand#x\tSMALL POLISHED BRASS\t36\t8\nBrand#x\tSMALL POLISHED BRASS\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t49\t8\nBrand#x\tSMALL POLISHED TIN\t36\t8\nBrand#x\tSMALL POLISHED TIN\t45\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t8\nBrand#x\tSTANDARD ANODIZED TIN\t36\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t8\nBrand#x\tSTANDARD BRUSHED TIN\t14\t8\nBrand#x\tSTANDARD BRUSHED TIN\t23\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t8\nBrand#x\tSTANDARD BURNISHED TIN\t14\t8\nBrand#x\tSTANDARD BURNISHED TIN\t23\t8\nBrand#x\tSTANDARD BURNISHED TIN\t36\t8\nBrand#x\tSTANDARD PLATED BRASS\t49\t8\nBrand#x\tSTANDARD PLATED COPPER\t14\t8\nBrand#x\tSTANDARD PLATED STEEL\t45\t8\nBrand#x\tSTANDARD PLATED TIN\t9\t8\nBrand#x\tSTANDARD PLATED TIN\t45\t8\nBrand#x\tSTANDARD POLISHED COPPER\t3\t8\nBrand#x\tSTANDARD POLISHED COPPER\t23\t8\nBrand#x\tSTANDARD POLISHED COPPER\t36\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t8\nBrand#x\tSTANDARD POLISHED STEEL\t3\t8\nBrand#x\tSTANDARD POLISHED STEEL\t14\t8\nBrand#x\tSTANDARD POLISHED STEEL\t19\t8\nBrand#x\tSTANDARD POLISHED STEEL\t45\t8\nBrand#x\tECONOMY ANODIZED COPPER\t9\t8\nBrand#x\tECONOMY ANODIZED COPPER\t23\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t8\nBrand#x\tECONOMY ANODIZED STEEL\t3\t8\nBrand#x\tECONOMY BRUSHED COPPER\t3\t8\nBrand#x\tECONOMY BRUSHED COPPER\t9\t8\nBrand#x\tECONOMY BRUSHED COPPER\t49\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t8\nBrand#x\tECONOMY BRUSHED TIN\t14\t8\nBrand#x\tECONOMY BRUSHED TIN\t23\t8\nBrand#x\tECONOMY BURNISHED BRASS\t45\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t8\nBrand#x\tECONOMY BURNISHED STEEL\t3\t8\nBrand#x\tECONOMY BURNISHED 
STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED TIN\t49\t8\nBrand#x\tECONOMY PLATED BRASS\t36\t8\nBrand#x\tECONOMY PLATED COPPER\t3\t8\nBrand#x\tECONOMY PLATED COPPER\t9\t8\nBrand#x\tECONOMY PLATED COPPER\t19\t8\nBrand#x\tECONOMY PLATED NICKEL\t14\t8\nBrand#x\tECONOMY PLATED STEEL\t45\t8\nBrand#x\tECONOMY PLATED TIN\t3\t8\nBrand#x\tECONOMY PLATED TIN\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t9\t8\nBrand#x\tECONOMY POLISHED BRASS\t36\t8\nBrand#x\tECONOMY POLISHED COPPER\t9\t8\nBrand#x\tECONOMY POLISHED COPPER\t49\t8\nBrand#x\tECONOMY POLISHED STEEL\t3\t8\nBrand#x\tECONOMY POLISHED STEEL\t23\t8\nBrand#x\tECONOMY POLISHED STEEL\t45\t8\nBrand#x\tECONOMY POLISHED STEEL\t49\t8\nBrand#x\tECONOMY POLISHED TIN\t3\t8\nBrand#x\tECONOMY POLISHED TIN\t36\t8\nBrand#x\tLARGE ANODIZED COPPER\t3\t8\nBrand#x\tLARGE ANODIZED COPPER\t19\t8\nBrand#x\tLARGE ANODIZED STEEL\t19\t8\nBrand#x\tLARGE ANODIZED STEEL\t45\t8\nBrand#x\tLARGE ANODIZED TIN\t45\t8\nBrand#x\tLARGE BRUSHED BRASS\t9\t8\nBrand#x\tLARGE BRUSHED BRASS\t19\t8\nBrand#x\tLARGE BRUSHED BRASS\t45\t8\nBrand#x\tLARGE BRUSHED BRASS\t49\t8\nBrand#x\tLARGE BRUSHED COPPER\t45\t8\nBrand#x\tLARGE BRUSHED COPPER\t49\t8\nBrand#x\tLARGE BRUSHED NICKEL\t9\t8\nBrand#x\tLARGE BRUSHED STEEL\t19\t8\nBrand#x\tLARGE BRUSHED STEEL\t36\t8\nBrand#x\tLARGE BRUSHED TIN\t9\t8\nBrand#x\tLARGE BURNISHED BRASS\t3\t8\nBrand#x\tLARGE BURNISHED COPPER\t3\t8\nBrand#x\tLARGE BURNISHED COPPER\t23\t8\nBrand#x\tLARGE BURNISHED NICKEL\t14\t8\nBrand#x\tLARGE BURNISHED STEEL\t14\t8\nBrand#x\tLARGE BURNISHED STEEL\t45\t8\nBrand#x\tLARGE PLATED BRASS\t9\t8\nBrand#x\tLARGE PLATED COPPER\t14\t8\nBrand#x\tLARGE PLATED NICKEL\t19\t8\nBrand#x\tLARGE PLATED STEEL\t3\t8\nBrand#x\tLARGE PLATED STEEL\t36\t8\nBrand#x\tLARGE PLATED TIN\t14\t8\nBrand#x\tLARGE PLATED TIN\t45\t8\nBrand#x\tLARGE POLISHED BRASS\t23\t8\nBrand#x\tLARGE POLISHED NICKEL\t45\t8\nBrand#x\tLARGE POLISHED STEEL\t36\t8\nBrand#x\tLARGE POLISHED TIN\t3\t8\nBrand#x\tLARGE POLISHED TIN\t9\t8\nBrand#x\tLARGE POLISHED TIN\t14\t8\nBrand#x\tLARGE POLISHED TIN\t45\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t8\nBrand#x\tMEDIUM ANODIZED TIN\t9\t8\nBrand#x\tMEDIUM ANODIZED TIN\t45\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t8\nBrand#x\tMEDIUM BRUSHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t8\nBrand#x\tMEDIUM BURNISHED TIN\t45\t8\nBrand#x\tMEDIUM BURNISHED TIN\t49\t8\nBrand#x\tMEDIUM PLATED BRASS\t19\t8\nBrand#x\tMEDIUM PLATED BRASS\t23\t8\nBrand#x\tMEDIUM PLATED COPPER\t14\t8\nBrand#x\tMEDIUM PLATED COPPER\t19\t8\nBrand#x\tMEDIUM PLATED NICKEL\t3\t8\nBrand#x\tMEDIUM PLATED NICKEL\t36\t8\nBrand#x\tMEDIUM PLATED STEEL\t3\t8\nBrand#x\tMEDIUM PLATED STEEL\t9\t8\nBrand#x\tMEDIUM PLATED STEEL\t19\t8\nBrand#x\tMEDIUM PLATED STEEL\t36\t8\nBrand#x\tMEDIUM PLATED TIN\t36\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED COPPER\t9\t8\nBrand#x\tPROMO ANODIZED COPPER\t14\t8\nBrand#x\tPROMO ANODIZED COPPER\t23\t8\nBrand#x\tPROMO ANODIZED NICKEL\t3\t8\nBrand#x\tPROMO 
ANODIZED NICKEL\t9\t8\nBrand#x\tPROMO ANODIZED NICKEL\t45\t8\nBrand#x\tPROMO ANODIZED STEEL\t19\t8\nBrand#x\tPROMO ANODIZED TIN\t36\t8\nBrand#x\tPROMO ANODIZED TIN\t49\t8\nBrand#x\tPROMO BRUSHED BRASS\t3\t8\nBrand#x\tPROMO BRUSHED BRASS\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t49\t8\nBrand#x\tPROMO BRUSHED COPPER\t14\t8\nBrand#x\tPROMO BRUSHED COPPER\t19\t8\nBrand#x\tPROMO BRUSHED COPPER\t49\t8\nBrand#x\tPROMO BRUSHED NICKEL\t14\t8\nBrand#x\tPROMO BRUSHED TIN\t45\t8\nBrand#x\tPROMO BURNISHED BRASS\t9\t8\nBrand#x\tPROMO BURNISHED BRASS\t23\t8\nBrand#x\tPROMO BURNISHED BRASS\t36\t8\nBrand#x\tPROMO BURNISHED COPPER\t9\t8\nBrand#x\tPROMO BURNISHED COPPER\t23\t8\nBrand#x\tPROMO BURNISHED COPPER\t45\t8\nBrand#x\tPROMO BURNISHED NICKEL\t9\t8\nBrand#x\tPROMO BURNISHED STEEL\t9\t8\nBrand#x\tPROMO BURNISHED STEEL\t14\t8\nBrand#x\tPROMO BURNISHED STEEL\t23\t8\nBrand#x\tPROMO BURNISHED STEEL\t45\t8\nBrand#x\tPROMO BURNISHED TIN\t9\t8\nBrand#x\tPROMO BURNISHED TIN\t14\t8\nBrand#x\tPROMO BURNISHED TIN\t19\t8\nBrand#x\tPROMO PLATED BRASS\t14\t8\nBrand#x\tPROMO PLATED BRASS\t49\t8\nBrand#x\tPROMO PLATED NICKEL\t14\t8\nBrand#x\tPROMO PLATED NICKEL\t36\t8\nBrand#x\tPROMO PLATED STEEL\t9\t8\nBrand#x\tPROMO PLATED TIN\t14\t8\nBrand#x\tPROMO PLATED TIN\t23\t8\nBrand#x\tPROMO POLISHED BRASS\t3\t8\nBrand#x\tPROMO POLISHED BRASS\t19\t8\nBrand#x\tPROMO POLISHED BRASS\t45\t8\nBrand#x\tPROMO POLISHED BRASS\t49\t8\nBrand#x\tPROMO POLISHED NICKEL\t23\t8\nBrand#x\tPROMO POLISHED STEEL\t36\t8\nBrand#x\tPROMO POLISHED STEEL\t45\t8\nBrand#x\tPROMO POLISHED TIN\t19\t8\nBrand#x\tPROMO POLISHED TIN\t36\t8\nBrand#x\tSMALL ANODIZED BRASS\t14\t8\nBrand#x\tSMALL ANODIZED COPPER\t9\t8\nBrand#x\tSMALL ANODIZED COPPER\t23\t8\nBrand#x\tSMALL ANODIZED NICKEL\t14\t8\nBrand#x\tSMALL ANODIZED NICKEL\t45\t8\nBrand#x\tSMALL ANODIZED STEEL\t14\t8\nBrand#x\tSMALL ANODIZED STEEL\t23\t8\nBrand#x\tSMALL ANODIZED TIN\t45\t8\nBrand#x\tSMALL BRUSHED BRASS\t9\t8\nBrand#x\tSMALL BRUSHED BRASS\t14\t8\nBrand#x\tSMALL BRUSHED BRASS\t36\t8\nBrand#x\tSMALL BRUSHED BRASS\t49\t8\nBrand#x\tSMALL BRUSHED COPPER\t23\t8\nBrand#x\tSMALL BRUSHED COPPER\t36\t8\nBrand#x\tSMALL BRUSHED NICKEL\t9\t8\nBrand#x\tSMALL BRUSHED NICKEL\t19\t8\nBrand#x\tSMALL BRUSHED STEEL\t23\t8\nBrand#x\tSMALL BRUSHED STEEL\t49\t8\nBrand#x\tSMALL BRUSHED TIN\t9\t8\nBrand#x\tSMALL BRUSHED TIN\t49\t8\nBrand#x\tSMALL BURNISHED BRASS\t19\t8\nBrand#x\tSMALL BURNISHED BRASS\t45\t8\nBrand#x\tSMALL BURNISHED COPPER\t9\t8\nBrand#x\tSMALL BURNISHED NICKEL\t19\t8\nBrand#x\tSMALL BURNISHED NICKEL\t45\t8\nBrand#x\tSMALL BURNISHED STEEL\t23\t8\nBrand#x\tSMALL BURNISHED TIN\t9\t8\nBrand#x\tSMALL BURNISHED TIN\t14\t8\nBrand#x\tSMALL BURNISHED TIN\t36\t8\nBrand#x\tSMALL PLATED BRASS\t9\t8\nBrand#x\tSMALL PLATED BRASS\t49\t8\nBrand#x\tSMALL PLATED COPPER\t3\t8\nBrand#x\tSMALL PLATED COPPER\t23\t8\nBrand#x\tSMALL PLATED COPPER\t36\t8\nBrand#x\tSMALL PLATED NICKEL\t14\t8\nBrand#x\tSMALL PLATED STEEL\t19\t8\nBrand#x\tSMALL PLATED TIN\t19\t8\nBrand#x\tSMALL POLISHED BRASS\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t3\t8\nBrand#x\tSMALL POLISHED COPPER\t49\t8\nBrand#x\tSMALL POLISHED NICKEL\t3\t8\nBrand#x\tSMALL POLISHED STEEL\t36\t8\nBrand#x\tSMALL POLISHED STEEL\t45\t8\nBrand#x\tSMALL POLISHED STEEL\t49\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t8\nBrand#x\tSTANDARD ANODIZED TIN\t9\t8\nBrand#x\tSTANDARD ANODIZED TIN\t36\t8\nBrand#x\tSTANDARD BRUSHED 
BRASS\t36\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t8\nBrand#x\tSTANDARD BRUSHED TIN\t9\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t8\nBrand#x\tSTANDARD PLATED BRASS\t9\t8\nBrand#x\tSTANDARD PLATED BRASS\t23\t8\nBrand#x\tSTANDARD PLATED BRASS\t49\t8\nBrand#x\tSTANDARD PLATED COPPER\t36\t8\nBrand#x\tSTANDARD PLATED NICKEL\t3\t8\nBrand#x\tSTANDARD PLATED STEEL\t14\t8\nBrand#x\tSTANDARD PLATED STEEL\t49\t8\nBrand#x\tSTANDARD PLATED TIN\t14\t8\nBrand#x\tSTANDARD PLATED TIN\t36\t8\nBrand#x\tSTANDARD POLISHED COPPER\t14\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t8\nBrand#x\tSTANDARD POLISHED STEEL\t14\t8\nBrand#x\tSTANDARD POLISHED STEEL\t23\t8\nBrand#x\tSTANDARD POLISHED STEEL\t36\t8\nBrand#x\tSTANDARD POLISHED TIN\t23\t8\nBrand#x\tECONOMY ANODIZED BRASS\t36\t8\nBrand#x\tECONOMY ANODIZED COPPER\t36\t8\nBrand#x\tECONOMY ANODIZED COPPER\t45\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t8\nBrand#x\tECONOMY ANODIZED STEEL\t3\t8\nBrand#x\tECONOMY ANODIZED STEEL\t9\t8\nBrand#x\tECONOMY ANODIZED STEEL\t19\t8\nBrand#x\tECONOMY ANODIZED TIN\t19\t8\nBrand#x\tECONOMY BRUSHED BRASS\t3\t8\nBrand#x\tECONOMY BRUSHED BRASS\t14\t8\nBrand#x\tECONOMY BRUSHED BRASS\t49\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t8\nBrand#x\tECONOMY BRUSHED STEEL\t49\t8\nBrand#x\tECONOMY BRUSHED TIN\t3\t8\nBrand#x\tECONOMY BRUSHED TIN\t49\t8\nBrand#x\tECONOMY BURNISHED BRASS\t14\t8\nBrand#x\tECONOMY BURNISHED COPPER\t36\t8\nBrand#x\tECONOMY BURNISHED STEEL\t23\t8\nBrand#x\tECONOMY BURNISHED STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED TIN\t19\t8\nBrand#x\tECONOMY BURNISHED TIN\t23\t8\nBrand#x\tECONOMY BURNISHED TIN\t36\t8\nBrand#x\tECONOMY PLATED BRASS\t36\t8\nBrand#x\tECONOMY PLATED COPPER\t3\t8\nBrand#x\tECONOMY PLATED COPPER\t9\t8\nBrand#x\tECONOMY PLATED NICKEL\t3\t8\nBrand#x\tECONOMY PLATED STEEL\t36\t8\nBrand#x\tECONOMY PLATED TIN\t9\t8\nBrand#x\tECONOMY POLISHED BRASS\t3\t8\nBrand#x\tECONOMY POLISHED BRASS\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t36\t8\nBrand#x\tECONOMY POLISHED COPPER\t14\t8\nBrand#x\tECONOMY POLISHED STEEL\t3\t8\nBrand#x\tECONOMY POLISHED TIN\t19\t8\nBrand#x\tLARGE ANODIZED BRASS\t19\t8\nBrand#x\tLARGE ANODIZED COPPER\t23\t8\nBrand#x\tLARGE ANODIZED NICKEL\t9\t8\nBrand#x\tLARGE ANODIZED NICKEL\t49\t8\nBrand#x\tLARGE ANODIZED STEEL\t3\t8\nBrand#x\tLARGE ANODIZED STEEL\t45\t8\nBrand#x\tLARGE ANODIZED TIN\t9\t8\nBrand#x\tLARGE ANODIZED TIN\t19\t8\nBrand#x\tLARGE ANODIZED TIN\t23\t8\nBrand#x\tLARGE BRUSHED BRASS\t23\t8\nBrand#x\tLARGE BRUSHED BRASS\t45\t8\nBrand#x\tLARGE BRUSHED COPPER\t49\t8\nBrand#x\tLARGE BRUSHED NICKEL\t23\t8\nBrand#x\tLARGE BRUSHED NICKEL\t45\t8\nBrand#x\tLARGE BRUSHED TIN\t9\t8\nBrand#x\tLARGE BURNISHED BRASS\t14\t8\nBrand#x\tLARGE BURNISHED COPPER\t19\t8\nBrand#x\tLARGE BURNISHED NICKEL\t3\t8\nBrand#x\tLARGE BURNISHED NICKEL\t49\t8\nBrand#x\tLARGE BURNISHED STEEL\t3\t8\nBrand#x\tLARGE BURNISHED STEEL\t9\t8\nBrand#x\tLARGE BURNISHED STEEL\t14\t8\nBrand#x\tLARGE BURNISHED STEEL\t19\t8\nBrand#x\tLARGE BURNISHED STEEL\t45\t8\nBrand#x\tLARGE BURNISHED TIN\t19\t8\nBrand#x\tLARGE BURNISHED TIN\t23\t8\nBrand#x\tLARGE BURNISHED TIN\t45\t8\nBrand#x\tLARGE PLATED BRASS\t23\t8\nBrand#x\tLARGE PLATED 
COPPER\t36\t8\nBrand#x\tLARGE PLATED NICKEL\t23\t8\nBrand#x\tLARGE PLATED NICKEL\t49\t8\nBrand#x\tLARGE PLATED STEEL\t49\t8\nBrand#x\tLARGE POLISHED BRASS\t3\t8\nBrand#x\tLARGE POLISHED BRASS\t9\t8\nBrand#x\tLARGE POLISHED BRASS\t14\t8\nBrand#x\tLARGE POLISHED BRASS\t19\t8\nBrand#x\tLARGE POLISHED BRASS\t36\t8\nBrand#x\tLARGE POLISHED COPPER\t9\t8\nBrand#x\tLARGE POLISHED COPPER\t23\t8\nBrand#x\tLARGE POLISHED NICKEL\t14\t8\nBrand#x\tLARGE POLISHED NICKEL\t36\t8\nBrand#x\tLARGE POLISHED STEEL\t23\t8\nBrand#x\tLARGE POLISHED TIN\t36\t8\nBrand#x\tLARGE POLISHED TIN\t45\t8\nBrand#x\tLARGE POLISHED TIN\t49\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t8\nBrand#x\tMEDIUM BRUSHED TIN\t3\t8\nBrand#x\tMEDIUM BRUSHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t8\nBrand#x\tMEDIUM BURNISHED TIN\t49\t8\nBrand#x\tMEDIUM PLATED BRASS\t9\t8\nBrand#x\tMEDIUM PLATED BRASS\t23\t8\nBrand#x\tMEDIUM PLATED BRASS\t36\t8\nBrand#x\tMEDIUM PLATED BRASS\t45\t8\nBrand#x\tMEDIUM PLATED BRASS\t49\t8\nBrand#x\tMEDIUM PLATED NICKEL\t23\t8\nBrand#x\tMEDIUM PLATED STEEL\t36\t8\nBrand#x\tMEDIUM PLATED STEEL\t49\t8\nBrand#x\tMEDIUM PLATED TIN\t3\t8\nBrand#x\tMEDIUM PLATED TIN\t14\t8\nBrand#x\tMEDIUM PLATED TIN\t45\t8\nBrand#x\tPROMO ANODIZED BRASS\t23\t8\nBrand#x\tPROMO ANODIZED BRASS\t36\t8\nBrand#x\tPROMO ANODIZED COPPER\t19\t8\nBrand#x\tPROMO ANODIZED COPPER\t36\t8\nBrand#x\tPROMO ANODIZED COPPER\t45\t8\nBrand#x\tPROMO ANODIZED STEEL\t45\t8\nBrand#x\tPROMO ANODIZED TIN\t14\t8\nBrand#x\tPROMO ANODIZED TIN\t19\t8\nBrand#x\tPROMO BRUSHED BRASS\t14\t8\nBrand#x\tPROMO BRUSHED BRASS\t19\t8\nBrand#x\tPROMO BRUSHED BRASS\t36\t8\nBrand#x\tPROMO BRUSHED BRASS\t45\t8\nBrand#x\tPROMO BRUSHED COPPER\t23\t8\nBrand#x\tPROMO BRUSHED COPPER\t49\t8\nBrand#x\tPROMO BRUSHED NICKEL\t19\t8\nBrand#x\tPROMO BRUSHED NICKEL\t36\t8\nBrand#x\tPROMO BRUSHED STEEL\t9\t8\nBrand#x\tPROMO BRUSHED STEEL\t36\t8\nBrand#x\tPROMO BRUSHED STEEL\t49\t8\nBrand#x\tPROMO BURNISHED BRASS\t9\t8\nBrand#x\tPROMO BURNISHED BRASS\t23\t8\nBrand#x\tPROMO BURNISHED BRASS\t36\t8\nBrand#x\tPROMO BURNISHED BRASS\t45\t8\nBrand#x\tPROMO BURNISHED NICKEL\t9\t8\nBrand#x\tPROMO BURNISHED STEEL\t36\t8\nBrand#x\tPROMO BURNISHED TIN\t49\t8\nBrand#x\tPROMO PLATED BRASS\t14\t8\nBrand#x\tPROMO PLATED BRASS\t45\t8\nBrand#x\tPROMO PLATED COPPER\t23\t8\nBrand#x\tPROMO PLATED NICKEL\t9\t8\nBrand#x\tPROMO PLATED STEEL\t3\t8\nBrand#x\tPROMO PLATED STEEL\t14\t8\nBrand#x\tPROMO PLATED STEEL\t19\t8\nBrand#x\tPROMO PLATED STEEL\t49\t8\nBrand#x\tPROMO PLATED TIN\t3\t8\nBrand#x\tPROMO PLATED TIN\t9\t8\nBrand#x\tPROMO POLISHED BRASS\t36\t8\nBrand#x\tPROMO POLISHED COPPER\t3\t8\nBrand#x\tPROMO POLISHED NICKEL\t3\t8\nBrand#x\tPROMO POLISHED NICKEL\t45\t8\nBrand#x\tPROMO POLISHED TIN\t9\t8\nBrand#x\tPROMO POLISHED TIN\t49\t8\nBrand#x\tSMALL ANODIZED BRASS\t9\t8\nBrand#x\tSMALL ANODIZED BRASS\t14\t8\nBrand#x\tSMALL ANODIZED COPPER\t14\t8\nBrand#x\tSMALL ANODIZED 
NICKEL\t36\t8\nBrand#x\tSMALL ANODIZED STEEL\t23\t8\nBrand#x\tSMALL ANODIZED TIN\t19\t8\nBrand#x\tSMALL BRUSHED BRASS\t19\t8\nBrand#x\tSMALL BRUSHED BRASS\t45\t8\nBrand#x\tSMALL BRUSHED COPPER\t36\t8\nBrand#x\tSMALL BRUSHED COPPER\t49\t8\nBrand#x\tSMALL BRUSHED TIN\t9\t8\nBrand#x\tSMALL BRUSHED TIN\t14\t8\nBrand#x\tSMALL BRUSHED TIN\t36\t8\nBrand#x\tSMALL BURNISHED BRASS\t19\t8\nBrand#x\tSMALL BURNISHED BRASS\t45\t8\nBrand#x\tSMALL BURNISHED COPPER\t14\t8\nBrand#x\tSMALL BURNISHED COPPER\t36\t8\nBrand#x\tSMALL BURNISHED NICKEL\t36\t8\nBrand#x\tSMALL BURNISHED NICKEL\t45\t8\nBrand#x\tSMALL BURNISHED STEEL\t14\t8\nBrand#x\tSMALL BURNISHED STEEL\t45\t8\nBrand#x\tSMALL BURNISHED TIN\t19\t8\nBrand#x\tSMALL BURNISHED TIN\t23\t8\nBrand#x\tSMALL PLATED BRASS\t14\t8\nBrand#x\tSMALL PLATED COPPER\t23\t8\nBrand#x\tSMALL PLATED NICKEL\t19\t8\nBrand#x\tSMALL PLATED STEEL\t14\t8\nBrand#x\tSMALL PLATED STEEL\t36\t8\nBrand#x\tSMALL PLATED TIN\t9\t8\nBrand#x\tSMALL PLATED TIN\t49\t8\nBrand#x\tSMALL POLISHED BRASS\t19\t8\nBrand#x\tSMALL POLISHED BRASS\t36\t8\nBrand#x\tSMALL POLISHED BRASS\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t3\t8\nBrand#x\tSMALL POLISHED NICKEL\t9\t8\nBrand#x\tSMALL POLISHED NICKEL\t19\t8\nBrand#x\tSMALL POLISHED STEEL\t49\t8\nBrand#x\tSMALL POLISHED TIN\t3\t8\nBrand#x\tSMALL POLISHED TIN\t36\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t8\nBrand#x\tSTANDARD BRUSHED TIN\t9\t8\nBrand#x\tSTANDARD BRUSHED TIN\t45\t8\nBrand#x\tSTANDARD BRUSHED TIN\t49\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t8\nBrand#x\tSTANDARD BURNISHED TIN\t3\t8\nBrand#x\tSTANDARD BURNISHED TIN\t9\t8\nBrand#x\tSTANDARD PLATED BRASS\t9\t8\nBrand#x\tSTANDARD PLATED BRASS\t45\t8\nBrand#x\tSTANDARD PLATED COPPER\t14\t8\nBrand#x\tSTANDARD PLATED NICKEL\t14\t8\nBrand#x\tSTANDARD PLATED STEEL\t23\t8\nBrand#x\tSTANDARD PLATED TIN\t3\t8\nBrand#x\tSTANDARD POLISHED BRASS\t19\t8\nBrand#x\tSTANDARD POLISHED COPPER\t3\t8\nBrand#x\tSTANDARD POLISHED COPPER\t49\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t8\nBrand#x\tECONOMY ANODIZED BRASS\t3\t8\nBrand#x\tECONOMY ANODIZED BRASS\t9\t8\nBrand#x\tECONOMY ANODIZED STEEL\t3\t8\nBrand#x\tECONOMY ANODIZED STEEL\t14\t8\nBrand#x\tECONOMY ANODIZED STEEL\t36\t8\nBrand#x\tECONOMY ANODIZED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED BRASS\t19\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t8\nBrand#x\tECONOMY BRUSHED STEEL\t23\t8\nBrand#x\tECONOMY BRUSHED STEEL\t45\t8\nBrand#x\tECONOMY BRUSHED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED TIN\t49\t8\nBrand#x\tECONOMY BURNISHED BRASS\t9\t8\nBrand#x\tECONOMY BURNISHED BRASS\t14\t8\nBrand#x\tECONOMY BURNISHED COPPER\t23\t8\nBrand#x\tECONOMY BURNISHED COPPER\t45\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t8\nBrand#x\tECONOMY PLATED BRASS\t14\t8\nBrand#x\tECONOMY PLATED COPPER\t36\t8\nBrand#x\tECONOMY PLATED 
COPPER\t45\t8\nBrand#x\tECONOMY POLISHED COPPER\t49\t8\nBrand#x\tECONOMY POLISHED NICKEL\t9\t8\nBrand#x\tECONOMY POLISHED NICKEL\t14\t8\nBrand#x\tECONOMY POLISHED STEEL\t49\t8\nBrand#x\tLARGE ANODIZED BRASS\t9\t8\nBrand#x\tLARGE ANODIZED BRASS\t36\t8\nBrand#x\tLARGE ANODIZED COPPER\t23\t8\nBrand#x\tLARGE ANODIZED COPPER\t36\t8\nBrand#x\tLARGE ANODIZED COPPER\t45\t8\nBrand#x\tLARGE ANODIZED NICKEL\t23\t8\nBrand#x\tLARGE ANODIZED NICKEL\t49\t8\nBrand#x\tLARGE ANODIZED TIN\t14\t8\nBrand#x\tLARGE BRUSHED BRASS\t9\t8\nBrand#x\tLARGE BRUSHED COPPER\t3\t8\nBrand#x\tLARGE BRUSHED COPPER\t14\t8\nBrand#x\tLARGE BRUSHED STEEL\t19\t8\nBrand#x\tLARGE BRUSHED STEEL\t23\t8\nBrand#x\tLARGE BRUSHED TIN\t3\t8\nBrand#x\tLARGE BRUSHED TIN\t9\t8\nBrand#x\tLARGE BRUSHED TIN\t19\t8\nBrand#x\tLARGE BRUSHED TIN\t49\t8\nBrand#x\tLARGE BURNISHED BRASS\t9\t8\nBrand#x\tLARGE BURNISHED BRASS\t14\t8\nBrand#x\tLARGE BURNISHED BRASS\t19\t8\nBrand#x\tLARGE BURNISHED COPPER\t23\t8\nBrand#x\tLARGE BURNISHED COPPER\t45\t8\nBrand#x\tLARGE BURNISHED NICKEL\t36\t8\nBrand#x\tLARGE BURNISHED STEEL\t36\t8\nBrand#x\tLARGE BURNISHED STEEL\t49\t8\nBrand#x\tLARGE BURNISHED TIN\t49\t8\nBrand#x\tLARGE PLATED COPPER\t19\t8\nBrand#x\tLARGE PLATED COPPER\t45\t8\nBrand#x\tLARGE PLATED NICKEL\t14\t8\nBrand#x\tLARGE PLATED STEEL\t9\t8\nBrand#x\tLARGE PLATED TIN\t49\t8\nBrand#x\tLARGE POLISHED BRASS\t23\t8\nBrand#x\tLARGE POLISHED STEEL\t36\t8\nBrand#x\tLARGE POLISHED STEEL\t49\t8\nBrand#x\tLARGE POLISHED TIN\t19\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t8\nBrand#x\tMEDIUM ANODIZED TIN\t14\t8\nBrand#x\tMEDIUM ANODIZED TIN\t19\t8\nBrand#x\tMEDIUM ANODIZED TIN\t23\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t8\nBrand#x\tMEDIUM BRUSHED TIN\t9\t8\nBrand#x\tMEDIUM BRUSHED TIN\t23\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t8\nBrand#x\tMEDIUM BURNISHED TIN\t45\t8\nBrand#x\tMEDIUM PLATED BRASS\t36\t8\nBrand#x\tMEDIUM PLATED NICKEL\t23\t8\nBrand#x\tMEDIUM PLATED NICKEL\t49\t8\nBrand#x\tMEDIUM PLATED STEEL\t9\t8\nBrand#x\tMEDIUM PLATED STEEL\t19\t8\nBrand#x\tMEDIUM PLATED STEEL\t49\t8\nBrand#x\tMEDIUM PLATED TIN\t19\t8\nBrand#x\tMEDIUM PLATED TIN\t49\t8\nBrand#x\tPROMO ANODIZED BRASS\t14\t8\nBrand#x\tPROMO ANODIZED BRASS\t36\t8\nBrand#x\tPROMO ANODIZED COPPER\t45\t8\nBrand#x\tPROMO ANODIZED NICKEL\t36\t8\nBrand#x\tPROMO ANODIZED NICKEL\t49\t8\nBrand#x\tPROMO ANODIZED STEEL\t14\t8\nBrand#x\tPROMO BRUSHED BRASS\t14\t8\nBrand#x\tPROMO BRUSHED COPPER\t9\t8\nBrand#x\tPROMO BRUSHED COPPER\t19\t8\nBrand#x\tPROMO BRUSHED NICKEL\t9\t8\nBrand#x\tPROMO BRUSHED NICKEL\t19\t8\nBrand#x\tPROMO BRUSHED NICKEL\t23\t8\nBrand#x\tPROMO BRUSHED STEEL\t14\t8\nBrand#x\tPROMO BRUSHED STEEL\t23\t8\nBrand#x\tPROMO BRUSHED STEEL\t49\t8\nBrand#x\tPROMO BRUSHED TIN\t3\t8\nBrand#x\tPROMO BRUSHED TIN\t23\t8\nBrand#x\tPROMO BRUSHED TIN\t36\t8\nBrand#x\tPROMO BURNISHED BRASS\t23\t8\nBrand#x\tPROMO BURNISHED BRASS\t36\t8\nBrand#x\tPROMO BURNISHED COPPER\t3\t8\nBrand#x\tPROMO 
BURNISHED COPPER\t9\t8\nBrand#x\tPROMO BURNISHED COPPER\t19\t8\nBrand#x\tPROMO BURNISHED NICKEL\t23\t8\nBrand#x\tPROMO BURNISHED NICKEL\t36\t8\nBrand#x\tPROMO BURNISHED NICKEL\t49\t8\nBrand#x\tPROMO BURNISHED STEEL\t9\t8\nBrand#x\tPROMO BURNISHED TIN\t14\t8\nBrand#x\tPROMO BURNISHED TIN\t45\t8\nBrand#x\tPROMO PLATED BRASS\t36\t8\nBrand#x\tPROMO PLATED BRASS\t49\t8\nBrand#x\tPROMO PLATED COPPER\t3\t8\nBrand#x\tPROMO PLATED COPPER\t9\t8\nBrand#x\tPROMO PLATED COPPER\t14\t8\nBrand#x\tPROMO PLATED NICKEL\t36\t8\nBrand#x\tPROMO PLATED NICKEL\t45\t8\nBrand#x\tPROMO PLATED STEEL\t14\t8\nBrand#x\tPROMO PLATED TIN\t3\t8\nBrand#x\tPROMO PLATED TIN\t9\t8\nBrand#x\tPROMO PLATED TIN\t19\t8\nBrand#x\tPROMO POLISHED COPPER\t3\t8\nBrand#x\tPROMO POLISHED COPPER\t14\t8\nBrand#x\tPROMO POLISHED COPPER\t19\t8\nBrand#x\tPROMO POLISHED COPPER\t49\t8\nBrand#x\tPROMO POLISHED NICKEL\t19\t8\nBrand#x\tPROMO POLISHED STEEL\t3\t8\nBrand#x\tPROMO POLISHED STEEL\t14\t8\nBrand#x\tPROMO POLISHED STEEL\t19\t8\nBrand#x\tPROMO POLISHED TIN\t23\t8\nBrand#x\tSMALL ANODIZED BRASS\t14\t8\nBrand#x\tSMALL ANODIZED BRASS\t19\t8\nBrand#x\tSMALL ANODIZED NICKEL\t3\t8\nBrand#x\tSMALL ANODIZED NICKEL\t14\t8\nBrand#x\tSMALL ANODIZED NICKEL\t36\t8\nBrand#x\tSMALL ANODIZED STEEL\t3\t8\nBrand#x\tSMALL ANODIZED TIN\t45\t8\nBrand#x\tSMALL BRUSHED BRASS\t3\t8\nBrand#x\tSMALL BRUSHED BRASS\t9\t8\nBrand#x\tSMALL BRUSHED BRASS\t19\t8\nBrand#x\tSMALL BRUSHED NICKEL\t9\t8\nBrand#x\tSMALL BRUSHED NICKEL\t49\t8\nBrand#x\tSMALL BRUSHED STEEL\t14\t8\nBrand#x\tSMALL BRUSHED STEEL\t23\t8\nBrand#x\tSMALL BRUSHED TIN\t9\t8\nBrand#x\tSMALL BRUSHED TIN\t23\t8\nBrand#x\tSMALL BRUSHED TIN\t36\t8\nBrand#x\tSMALL BRUSHED TIN\t45\t8\nBrand#x\tSMALL BURNISHED BRASS\t19\t8\nBrand#x\tSMALL BURNISHED COPPER\t14\t8\nBrand#x\tSMALL BURNISHED COPPER\t49\t8\nBrand#x\tSMALL BURNISHED NICKEL\t3\t8\nBrand#x\tSMALL BURNISHED NICKEL\t9\t8\nBrand#x\tSMALL BURNISHED NICKEL\t36\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t19\t8\nBrand#x\tSMALL BURNISHED TIN\t14\t8\nBrand#x\tSMALL BURNISHED TIN\t19\t8\nBrand#x\tSMALL BURNISHED TIN\t23\t8\nBrand#x\tSMALL PLATED STEEL\t3\t8\nBrand#x\tSMALL PLATED STEEL\t9\t8\nBrand#x\tSMALL PLATED TIN\t9\t8\nBrand#x\tSMALL POLISHED COPPER\t3\t8\nBrand#x\tSMALL POLISHED COPPER\t9\t8\nBrand#x\tSMALL POLISHED NICKEL\t14\t8\nBrand#x\tSMALL POLISHED STEEL\t3\t8\nBrand#x\tSMALL POLISHED STEEL\t9\t8\nBrand#x\tSMALL POLISHED STEEL\t23\t8\nBrand#x\tSMALL POLISHED STEEL\t36\t8\nBrand#x\tSMALL POLISHED TIN\t9\t8\nBrand#x\tSMALL POLISHED TIN\t19\t8\nBrand#x\tSMALL POLISHED TIN\t45\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t8\nBrand#x\tSTANDARD BRUSHED TIN\t45\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t8\nBrand#x\tSTANDARD PLATED BRASS\t14\t8\nBrand#x\tSTANDARD PLATED BRASS\t36\t8\nBrand#x\tSTANDARD PLATED COPPER\t9\t8\nBrand#x\tSTANDARD PLATED NICKEL\t9\t8\nBrand#x\tSTANDARD PLATED STEEL\t23\t8\nBrand#x\tSTANDARD POLISHED BRASS\t3\t8\nBrand#x\tSTANDARD POLISHED BRASS\t9\t8\nBrand#x\tSTANDARD POLISHED BRASS\t14\t8\nBrand#x\tSTANDARD POLISHED 
COPPER\t3\t8\nBrand#x\tSTANDARD POLISHED COPPER\t23\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t8\nBrand#x\tSTANDARD POLISHED TIN\t3\t8\nBrand#x\tSTANDARD POLISHED TIN\t36\t8\nBrand#x\tECONOMY ANODIZED BRASS\t14\t8\nBrand#x\tECONOMY ANODIZED COPPER\t3\t8\nBrand#x\tECONOMY ANODIZED COPPER\t14\t8\nBrand#x\tECONOMY ANODIZED COPPER\t36\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t8\nBrand#x\tECONOMY ANODIZED STEEL\t36\t8\nBrand#x\tECONOMY ANODIZED STEEL\t49\t8\nBrand#x\tECONOMY ANODIZED TIN\t9\t8\nBrand#x\tECONOMY BRUSHED BRASS\t14\t8\nBrand#x\tECONOMY BRUSHED BRASS\t36\t8\nBrand#x\tECONOMY BRUSHED COPPER\t45\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t8\nBrand#x\tECONOMY BURNISHED BRASS\t3\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t8\nBrand#x\tECONOMY BURNISHED STEEL\t23\t8\nBrand#x\tECONOMY BURNISHED STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED TIN\t14\t8\nBrand#x\tECONOMY BURNISHED TIN\t19\t8\nBrand#x\tECONOMY BURNISHED TIN\t45\t8\nBrand#x\tECONOMY PLATED BRASS\t9\t8\nBrand#x\tECONOMY PLATED NICKEL\t49\t8\nBrand#x\tECONOMY PLATED STEEL\t19\t8\nBrand#x\tECONOMY PLATED STEEL\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t23\t8\nBrand#x\tECONOMY POLISHED COPPER\t3\t8\nBrand#x\tECONOMY POLISHED NICKEL\t3\t8\nBrand#x\tECONOMY POLISHED NICKEL\t19\t8\nBrand#x\tECONOMY POLISHED NICKEL\t36\t8\nBrand#x\tECONOMY POLISHED STEEL\t36\t8\nBrand#x\tECONOMY POLISHED STEEL\t49\t8\nBrand#x\tECONOMY POLISHED TIN\t3\t8\nBrand#x\tECONOMY POLISHED TIN\t45\t8\nBrand#x\tLARGE ANODIZED BRASS\t45\t8\nBrand#x\tLARGE ANODIZED NICKEL\t9\t8\nBrand#x\tLARGE ANODIZED NICKEL\t19\t8\nBrand#x\tLARGE ANODIZED NICKEL\t49\t8\nBrand#x\tLARGE ANODIZED STEEL\t3\t8\nBrand#x\tLARGE ANODIZED STEEL\t36\t8\nBrand#x\tLARGE BRUSHED BRASS\t19\t8\nBrand#x\tLARGE BRUSHED BRASS\t23\t8\nBrand#x\tLARGE BRUSHED BRASS\t45\t8\nBrand#x\tLARGE BRUSHED COPPER\t19\t8\nBrand#x\tLARGE BRUSHED NICKEL\t14\t8\nBrand#x\tLARGE BRUSHED NICKEL\t45\t8\nBrand#x\tLARGE BRUSHED STEEL\t45\t8\nBrand#x\tLARGE BRUSHED TIN\t9\t8\nBrand#x\tLARGE BRUSHED TIN\t19\t8\nBrand#x\tLARGE BRUSHED TIN\t36\t8\nBrand#x\tLARGE BURNISHED COPPER\t3\t8\nBrand#x\tLARGE BURNISHED COPPER\t9\t8\nBrand#x\tLARGE BURNISHED COPPER\t14\t8\nBrand#x\tLARGE BURNISHED COPPER\t19\t8\nBrand#x\tLARGE BURNISHED COPPER\t23\t8\nBrand#x\tLARGE BURNISHED NICKEL\t9\t8\nBrand#x\tLARGE BURNISHED NICKEL\t36\t8\nBrand#x\tLARGE BURNISHED STEEL\t14\t8\nBrand#x\tLARGE BURNISHED STEEL\t45\t8\nBrand#x\tLARGE BURNISHED STEEL\t49\t8\nBrand#x\tLARGE BURNISHED TIN\t14\t8\nBrand#x\tLARGE BURNISHED TIN\t49\t8\nBrand#x\tLARGE PLATED BRASS\t19\t8\nBrand#x\tLARGE PLATED BRASS\t23\t8\nBrand#x\tLARGE PLATED NICKEL\t23\t8\nBrand#x\tLARGE PLATED STEEL\t3\t8\nBrand#x\tLARGE PLATED STEEL\t19\t8\nBrand#x\tLARGE PLATED STEEL\t45\t8\nBrand#x\tLARGE PLATED TIN\t9\t8\nBrand#x\tLARGE PLATED TIN\t23\t8\nBrand#x\tLARGE POLISHED BRASS\t36\t8\nBrand#x\tLARGE POLISHED BRASS\t49\t8\nBrand#x\tLARGE POLISHED COPPER\t23\t8\nBrand#x\tLARGE POLISHED NICKEL\t3\t8\nBrand#x\tLARGE POLISHED NICKEL\t23\t8\nBrand#x\tLARGE POLISHED NICKEL\t45\t8\nBrand#x\tLARGE POLISHED STEEL\t3\t8\nBrand#x\tLARGE POLISHED STEEL\t9\t8\nBrand#x\tLARGE POLISHED STEEL\t23\t8\nBrand#x\tLARGE POLISHED TIN\t3\t8\nBrand#x\tLARGE POLISHED TIN\t19\t8\nBrand#x\tLARGE POLISHED TIN\t45\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t8\nBrand#x\tMEDIUM ANODIZED 
BRASS\t23\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t8\nBrand#x\tMEDIUM ANODIZED TIN\t9\t8\nBrand#x\tMEDIUM ANODIZED TIN\t14\t8\nBrand#x\tMEDIUM ANODIZED TIN\t23\t8\nBrand#x\tMEDIUM ANODIZED TIN\t45\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t8\nBrand#x\tMEDIUM PLATED BRASS\t45\t8\nBrand#x\tMEDIUM PLATED COPPER\t23\t8\nBrand#x\tMEDIUM PLATED COPPER\t49\t8\nBrand#x\tMEDIUM PLATED TIN\t36\t8\nBrand#x\tPROMO ANODIZED BRASS\t14\t8\nBrand#x\tPROMO ANODIZED BRASS\t19\t8\nBrand#x\tPROMO ANODIZED COPPER\t14\t8\nBrand#x\tPROMO ANODIZED COPPER\t23\t8\nBrand#x\tPROMO ANODIZED COPPER\t45\t8\nBrand#x\tPROMO ANODIZED NICKEL\t14\t8\nBrand#x\tPROMO ANODIZED NICKEL\t23\t8\nBrand#x\tPROMO ANODIZED STEEL\t3\t8\nBrand#x\tPROMO ANODIZED STEEL\t9\t8\nBrand#x\tPROMO ANODIZED TIN\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t45\t8\nBrand#x\tPROMO BRUSHED BRASS\t49\t8\nBrand#x\tPROMO BRUSHED NICKEL\t45\t8\nBrand#x\tPROMO BRUSHED STEEL\t23\t8\nBrand#x\tPROMO BURNISHED BRASS\t19\t8\nBrand#x\tPROMO BURNISHED BRASS\t23\t8\nBrand#x\tPROMO BURNISHED COPPER\t9\t8\nBrand#x\tPROMO BURNISHED COPPER\t49\t8\nBrand#x\tPROMO BURNISHED NICKEL\t19\t8\nBrand#x\tPROMO BURNISHED NICKEL\t23\t8\nBrand#x\tPROMO BURNISHED STEEL\t23\t8\nBrand#x\tPROMO PLATED BRASS\t14\t8\nBrand#x\tPROMO PLATED BRASS\t23\t8\nBrand#x\tPROMO PLATED COPPER\t3\t8\nBrand#x\tPROMO PLATED NICKEL\t3\t8\nBrand#x\tPROMO PLATED STEEL\t9\t8\nBrand#x\tPROMO PLATED STEEL\t23\t8\nBrand#x\tPROMO PLATED STEEL\t49\t8\nBrand#x\tPROMO PLATED TIN\t3\t8\nBrand#x\tPROMO POLISHED COPPER\t14\t8\nBrand#x\tPROMO POLISHED STEEL\t19\t8\nBrand#x\tPROMO POLISHED STEEL\t23\t8\nBrand#x\tPROMO POLISHED STEEL\t45\t8\nBrand#x\tSMALL ANODIZED BRASS\t45\t8\nBrand#x\tSMALL ANODIZED COPPER\t49\t8\nBrand#x\tSMALL ANODIZED NICKEL\t9\t8\nBrand#x\tSMALL ANODIZED NICKEL\t14\t8\nBrand#x\tSMALL ANODIZED NICKEL\t19\t8\nBrand#x\tSMALL ANODIZED NICKEL\t23\t8\nBrand#x\tSMALL ANODIZED NICKEL\t45\t8\nBrand#x\tSMALL ANODIZED STEEL\t49\t8\nBrand#x\tSMALL ANODIZED TIN\t9\t8\nBrand#x\tSMALL ANODIZED TIN\t14\t8\nBrand#x\tSMALL ANODIZED TIN\t19\t8\nBrand#x\tSMALL ANODIZED TIN\t36\t8\nBrand#x\tSMALL ANODIZED TIN\t49\t8\nBrand#x\tSMALL BRUSHED BRASS\t36\t8\nBrand#x\tSMALL BRUSHED NICKEL\t36\t8\nBrand#x\tSMALL BRUSHED STEEL\t14\t8\nBrand#x\tSMALL BRUSHED TIN\t45\t8\nBrand#x\tSMALL BURNISHED BRASS\t36\t8\nBrand#x\tSMALL BURNISHED COPPER\t3\t8\nBrand#x\tSMALL BURNISHED COPPER\t14\t8\nBrand#x\tSMALL BURNISHED COPPER\t19\t8\nBrand#x\tSMALL BURNISHED COPPER\t45\t8\nBrand#x\tSMALL BURNISHED NICKEL\t14\t8\nBrand#x\tSMALL BURNISHED STEEL\t49\t8\nBrand#x\tSMALL PLATED BRASS\t14\t8\nBrand#x\tSMALL PLATED COPPER\t3\t8\nBrand#x\tSMALL PLATED COPPER\t9\t8\nBrand#x\tSMALL PLATED COPPER\t45\t8\nBrand#x\tSMALL PLATED NICKEL\t3\t8\nBrand#x\tSMALL PLATED STEEL\t3\t8\nBrand#x\tSMALL PLATED STEEL\t9\t8\nBrand#x\tSMALL PLATED TIN\t19\t8\nBrand#x\tSMALL POLISHED STEEL\t36\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t8\nBrand#x\tSTANDARD ANODIZED TIN\t9\t8\nBrand#x\tSTANDARD 
ANODIZED TIN\t49\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t8\nBrand#x\tSTANDARD BRUSHED TIN\t23\t8\nBrand#x\tSTANDARD BRUSHED TIN\t36\t8\nBrand#x\tSTANDARD BRUSHED TIN\t45\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t8\nBrand#x\tSTANDARD BURNISHED TIN\t19\t8\nBrand#x\tSTANDARD BURNISHED TIN\t36\t8\nBrand#x\tSTANDARD PLATED BRASS\t14\t8\nBrand#x\tSTANDARD PLATED BRASS\t19\t8\nBrand#x\tSTANDARD PLATED BRASS\t49\t8\nBrand#x\tSTANDARD PLATED COPPER\t19\t8\nBrand#x\tSTANDARD PLATED COPPER\t23\t8\nBrand#x\tSTANDARD PLATED COPPER\t49\t8\nBrand#x\tSTANDARD PLATED NICKEL\t3\t8\nBrand#x\tSTANDARD PLATED NICKEL\t45\t8\nBrand#x\tSTANDARD PLATED TIN\t14\t8\nBrand#x\tSTANDARD PLATED TIN\t49\t8\nBrand#x\tSTANDARD POLISHED BRASS\t9\t8\nBrand#x\tSTANDARD POLISHED BRASS\t19\t8\nBrand#x\tSTANDARD POLISHED BRASS\t45\t8\nBrand#x\tSTANDARD POLISHED COPPER\t23\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t8\nBrand#x\tSTANDARD POLISHED STEEL\t19\t8\nBrand#x\tSTANDARD POLISHED TIN\t36\t8\nBrand#x\tECONOMY ANODIZED BRASS\t3\t8\nBrand#x\tECONOMY ANODIZED BRASS\t36\t8\nBrand#x\tECONOMY ANODIZED COPPER\t14\t8\nBrand#x\tECONOMY ANODIZED STEEL\t23\t8\nBrand#x\tECONOMY ANODIZED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED BRASS\t14\t8\nBrand#x\tECONOMY BRUSHED COPPER\t3\t8\nBrand#x\tECONOMY BRUSHED COPPER\t9\t8\nBrand#x\tECONOMY BRUSHED COPPER\t23\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t8\nBrand#x\tECONOMY BRUSHED STEEL\t49\t8\nBrand#x\tECONOMY BRUSHED TIN\t45\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t8\nBrand#x\tECONOMY BURNISHED STEEL\t9\t8\nBrand#x\tECONOMY BURNISHED TIN\t23\t8\nBrand#x\tECONOMY PLATED BRASS\t3\t8\nBrand#x\tECONOMY PLATED STEEL\t3\t8\nBrand#x\tECONOMY PLATED TIN\t9\t8\nBrand#x\tECONOMY POLISHED COPPER\t49\t8\nBrand#x\tECONOMY POLISHED NICKEL\t45\t8\nBrand#x\tECONOMY POLISHED STEEL\t9\t8\nBrand#x\tECONOMY POLISHED STEEL\t14\t8\nBrand#x\tECONOMY POLISHED TIN\t14\t8\nBrand#x\tECONOMY POLISHED TIN\t45\t8\nBrand#x\tLARGE ANODIZED BRASS\t14\t8\nBrand#x\tLARGE ANODIZED NICKEL\t23\t8\nBrand#x\tLARGE ANODIZED TIN\t19\t8\nBrand#x\tLARGE ANODIZED TIN\t23\t8\nBrand#x\tLARGE ANODIZED TIN\t49\t8\nBrand#x\tLARGE BRUSHED BRASS\t23\t8\nBrand#x\tLARGE BRUSHED BRASS\t36\t8\nBrand#x\tLARGE BRUSHED COPPER\t9\t8\nBrand#x\tLARGE BRUSHED NICKEL\t23\t8\nBrand#x\tLARGE BRUSHED NICKEL\t45\t8\nBrand#x\tLARGE BRUSHED STEEL\t23\t8\nBrand#x\tLARGE BURNISHED BRASS\t23\t8\nBrand#x\tLARGE BURNISHED COPPER\t9\t8\nBrand#x\tLARGE BURNISHED COPPER\t19\t8\nBrand#x\tLARGE BURNISHED NICKEL\t36\t8\nBrand#x\tLARGE BURNISHED NICKEL\t49\t8\nBrand#x\tLARGE BURNISHED STEEL\t9\t8\nBrand#x\tLARGE BURNISHED TIN\t45\t8\nBrand#x\tLARGE PLATED BRASS\t45\t8\nBrand#x\tLARGE PLATED COPPER\t45\t8\nBrand#x\tLARGE PLATED NICKEL\t9\t8\nBrand#x\tLARGE PLATED NICKEL\t19\t8\nBrand#x\tLARGE PLATED NICKEL\t23\t8\nBrand#x\tLARGE PLATED 
STEEL\t14\t8\nBrand#x\tLARGE PLATED STEEL\t19\t8\nBrand#x\tLARGE PLATED STEEL\t23\t8\nBrand#x\tLARGE PLATED TIN\t14\t8\nBrand#x\tLARGE POLISHED BRASS\t23\t8\nBrand#x\tLARGE POLISHED BRASS\t45\t8\nBrand#x\tLARGE POLISHED BRASS\t49\t8\nBrand#x\tLARGE POLISHED COPPER\t9\t8\nBrand#x\tLARGE POLISHED COPPER\t49\t8\nBrand#x\tLARGE POLISHED NICKEL\t45\t8\nBrand#x\tLARGE POLISHED NICKEL\t49\t8\nBrand#x\tLARGE POLISHED STEEL\t49\t8\nBrand#x\tLARGE POLISHED TIN\t49\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t8\nBrand#x\tMEDIUM ANODIZED TIN\t45\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t8\nBrand#x\tMEDIUM BURNISHED TIN\t3\t8\nBrand#x\tMEDIUM BURNISHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED TIN\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t49\t8\nBrand#x\tMEDIUM PLATED COPPER\t19\t8\nBrand#x\tMEDIUM PLATED NICKEL\t3\t8\nBrand#x\tMEDIUM PLATED NICKEL\t45\t8\nBrand#x\tMEDIUM PLATED NICKEL\t49\t8\nBrand#x\tMEDIUM PLATED STEEL\t3\t8\nBrand#x\tMEDIUM PLATED STEEL\t9\t8\nBrand#x\tMEDIUM PLATED STEEL\t19\t8\nBrand#x\tMEDIUM PLATED TIN\t49\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED BRASS\t9\t8\nBrand#x\tPROMO ANODIZED BRASS\t45\t8\nBrand#x\tPROMO ANODIZED COPPER\t3\t8\nBrand#x\tPROMO ANODIZED COPPER\t9\t8\nBrand#x\tPROMO ANODIZED COPPER\t23\t8\nBrand#x\tPROMO ANODIZED NICKEL\t9\t8\nBrand#x\tPROMO ANODIZED NICKEL\t45\t8\nBrand#x\tPROMO ANODIZED STEEL\t19\t8\nBrand#x\tPROMO ANODIZED TIN\t14\t8\nBrand#x\tPROMO ANODIZED TIN\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t3\t8\nBrand#x\tPROMO BRUSHED BRASS\t19\t8\nBrand#x\tPROMO BRUSHED BRASS\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t36\t8\nBrand#x\tPROMO BRUSHED NICKEL\t19\t8\nBrand#x\tPROMO BRUSHED NICKEL\t45\t8\nBrand#x\tPROMO BRUSHED STEEL\t36\t8\nBrand#x\tPROMO BRUSHED TIN\t3\t8\nBrand#x\tPROMO BURNISHED COPPER\t14\t8\nBrand#x\tPROMO BURNISHED COPPER\t36\t8\nBrand#x\tPROMO BURNISHED STEEL\t14\t8\nBrand#x\tPROMO BURNISHED STEEL\t36\t8\nBrand#x\tPROMO BURNISHED TIN\t3\t8\nBrand#x\tPROMO PLATED BRASS\t36\t8\nBrand#x\tPROMO PLATED BRASS\t45\t8\nBrand#x\tPROMO PLATED COPPER\t36\t8\nBrand#x\tPROMO PLATED NICKEL\t9\t8\nBrand#x\tPROMO PLATED NICKEL\t19\t8\nBrand#x\tPROMO PLATED NICKEL\t49\t8\nBrand#x\tPROMO PLATED STEEL\t45\t8\nBrand#x\tPROMO PLATED TIN\t3\t8\nBrand#x\tPROMO PLATED TIN\t23\t8\nBrand#x\tPROMO POLISHED BRASS\t3\t8\nBrand#x\tPROMO POLISHED BRASS\t9\t8\nBrand#x\tPROMO POLISHED NICKEL\t9\t8\nBrand#x\tPROMO POLISHED STEEL\t14\t8\nBrand#x\tPROMO POLISHED TIN\t3\t8\nBrand#x\tPROMO POLISHED TIN\t23\t8\nBrand#x\tPROMO POLISHED TIN\t49\t8\nBrand#x\tSMALL ANODIZED BRASS\t14\t8\nBrand#x\tSMALL 
ANODIZED COPPER\t3\t8\nBrand#x\tSMALL ANODIZED NICKEL\t3\t8\nBrand#x\tSMALL ANODIZED NICKEL\t36\t8\nBrand#x\tSMALL ANODIZED STEEL\t23\t8\nBrand#x\tSMALL BRUSHED BRASS\t36\t8\nBrand#x\tSMALL BRUSHED TIN\t19\t8\nBrand#x\tSMALL BRUSHED TIN\t23\t8\nBrand#x\tSMALL BURNISHED BRASS\t3\t8\nBrand#x\tSMALL BURNISHED NICKEL\t3\t8\nBrand#x\tSMALL BURNISHED NICKEL\t49\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t23\t8\nBrand#x\tSMALL BURNISHED STEEL\t49\t8\nBrand#x\tSMALL BURNISHED TIN\t45\t8\nBrand#x\tSMALL PLATED BRASS\t23\t8\nBrand#x\tSMALL PLATED COPPER\t14\t8\nBrand#x\tSMALL PLATED COPPER\t36\t8\nBrand#x\tSMALL PLATED NICKEL\t3\t8\nBrand#x\tSMALL PLATED NICKEL\t19\t8\nBrand#x\tSMALL PLATED STEEL\t3\t8\nBrand#x\tSMALL PLATED STEEL\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t9\t8\nBrand#x\tSMALL POLISHED COPPER\t23\t8\nBrand#x\tSMALL POLISHED NICKEL\t14\t8\nBrand#x\tSMALL POLISHED NICKEL\t19\t8\nBrand#x\tSMALL POLISHED STEEL\t14\t8\nBrand#x\tSMALL POLISHED TIN\t14\t8\nBrand#x\tSMALL POLISHED TIN\t19\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t8\nBrand#x\tSTANDARD ANODIZED TIN\t3\t8\nBrand#x\tSTANDARD ANODIZED TIN\t49\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t8\nBrand#x\tSTANDARD BRUSHED TIN\t3\t8\nBrand#x\tSTANDARD BRUSHED TIN\t45\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t8\nBrand#x\tSTANDARD PLATED BRASS\t14\t8\nBrand#x\tSTANDARD PLATED COPPER\t19\t8\nBrand#x\tSTANDARD PLATED NICKEL\t19\t8\nBrand#x\tSTANDARD PLATED NICKEL\t36\t8\nBrand#x\tSTANDARD PLATED STEEL\t9\t8\nBrand#x\tSTANDARD PLATED STEEL\t14\t8\nBrand#x\tSTANDARD POLISHED BRASS\t14\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t8\nBrand#x\tSTANDARD POLISHED STEEL\t36\t8\nBrand#x\tSTANDARD POLISHED STEEL\t45\t8\nBrand#x\tSTANDARD POLISHED STEEL\t49\t8\nBrand#x\tSTANDARD POLISHED TIN\t9\t8\nBrand#x\tSTANDARD POLISHED TIN\t19\t8\nBrand#x\tECONOMY ANODIZED COPPER\t3\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t8\nBrand#x\tECONOMY ANODIZED STEEL\t14\t8\nBrand#x\tECONOMY ANODIZED TIN\t14\t8\nBrand#x\tECONOMY ANODIZED TIN\t19\t8\nBrand#x\tECONOMY ANODIZED TIN\t45\t8\nBrand#x\tECONOMY ANODIZED TIN\t49\t8\nBrand#x\tECONOMY BRUSHED BRASS\t3\t8\nBrand#x\tECONOMY BRUSHED BRASS\t36\t8\nBrand#x\tECONOMY BRUSHED COPPER\t9\t8\nBrand#x\tECONOMY BRUSHED TIN\t9\t8\nBrand#x\tECONOMY BRUSHED TIN\t19\t8\nBrand#x\tECONOMY BRUSHED TIN\t23\t8\nBrand#x\tECONOMY BURNISHED BRASS\t9\t8\nBrand#x\tECONOMY BURNISHED BRASS\t14\t8\nBrand#x\tECONOMY BURNISHED COPPER\t14\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t8\nBrand#x\tECONOMY BURNISHED STEEL\t45\t8\nBrand#x\tECONOMY PLATED BRASS\t23\t8\nBrand#x\tECONOMY PLATED 
COPPER\t23\t8\nBrand#x\tECONOMY PLATED NICKEL\t3\t8\nBrand#x\tECONOMY PLATED NICKEL\t23\t8\nBrand#x\tECONOMY PLATED STEEL\t19\t8\nBrand#x\tECONOMY PLATED TIN\t3\t8\nBrand#x\tECONOMY PLATED TIN\t19\t8\nBrand#x\tECONOMY PLATED TIN\t23\t8\nBrand#x\tECONOMY PLATED TIN\t36\t8\nBrand#x\tECONOMY POLISHED BRASS\t36\t8\nBrand#x\tECONOMY POLISHED COPPER\t3\t8\nBrand#x\tECONOMY POLISHED COPPER\t14\t8\nBrand#x\tECONOMY POLISHED COPPER\t49\t8\nBrand#x\tECONOMY POLISHED STEEL\t3\t8\nBrand#x\tECONOMY POLISHED STEEL\t23\t8\nBrand#x\tECONOMY POLISHED STEEL\t36\t8\nBrand#x\tECONOMY POLISHED TIN\t45\t8\nBrand#x\tLARGE ANODIZED BRASS\t9\t8\nBrand#x\tLARGE ANODIZED BRASS\t14\t8\nBrand#x\tLARGE ANODIZED COPPER\t9\t8\nBrand#x\tLARGE ANODIZED COPPER\t45\t8\nBrand#x\tLARGE ANODIZED COPPER\t49\t8\nBrand#x\tLARGE ANODIZED STEEL\t19\t8\nBrand#x\tLARGE ANODIZED STEEL\t36\t8\nBrand#x\tLARGE BRUSHED BRASS\t9\t8\nBrand#x\tLARGE BRUSHED NICKEL\t3\t8\nBrand#x\tLARGE BRUSHED NICKEL\t45\t8\nBrand#x\tLARGE BURNISHED COPPER\t3\t8\nBrand#x\tLARGE BURNISHED COPPER\t9\t8\nBrand#x\tLARGE BURNISHED NICKEL\t9\t8\nBrand#x\tLARGE BURNISHED NICKEL\t19\t8\nBrand#x\tLARGE BURNISHED STEEL\t3\t8\nBrand#x\tLARGE BURNISHED STEEL\t9\t8\nBrand#x\tLARGE BURNISHED STEEL\t14\t8\nBrand#x\tLARGE BURNISHED STEEL\t49\t8\nBrand#x\tLARGE PLATED BRASS\t3\t8\nBrand#x\tLARGE PLATED BRASS\t9\t8\nBrand#x\tLARGE PLATED BRASS\t14\t8\nBrand#x\tLARGE PLATED COPPER\t19\t8\nBrand#x\tLARGE PLATED NICKEL\t23\t8\nBrand#x\tLARGE PLATED NICKEL\t49\t8\nBrand#x\tLARGE PLATED STEEL\t3\t8\nBrand#x\tLARGE PLATED STEEL\t14\t8\nBrand#x\tLARGE PLATED STEEL\t45\t8\nBrand#x\tLARGE POLISHED NICKEL\t3\t8\nBrand#x\tLARGE POLISHED NICKEL\t49\t8\nBrand#x\tLARGE POLISHED TIN\t9\t8\nBrand#x\tLARGE POLISHED TIN\t14\t8\nBrand#x\tLARGE POLISHED TIN\t36\t8\nBrand#x\tLARGE POLISHED TIN\t49\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t8\nBrand#x\tMEDIUM ANODIZED TIN\t14\t8\nBrand#x\tMEDIUM ANODIZED TIN\t23\t8\nBrand#x\tMEDIUM ANODIZED TIN\t45\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t8\nBrand#x\tMEDIUM BRUSHED TIN\t14\t8\nBrand#x\tMEDIUM BRUSHED TIN\t45\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t8\nBrand#x\tMEDIUM BURNISHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED TIN\t14\t8\nBrand#x\tMEDIUM PLATED BRASS\t9\t8\nBrand#x\tMEDIUM PLATED BRASS\t14\t8\nBrand#x\tMEDIUM PLATED BRASS\t19\t8\nBrand#x\tMEDIUM PLATED NICKEL\t3\t8\nBrand#x\tMEDIUM PLATED NICKEL\t9\t8\nBrand#x\tMEDIUM PLATED NICKEL\t23\t8\nBrand#x\tMEDIUM PLATED NICKEL\t36\t8\nBrand#x\tMEDIUM PLATED STEEL\t23\t8\nBrand#x\tMEDIUM PLATED TIN\t49\t8\nBrand#x\tPROMO ANODIZED COPPER\t3\t8\nBrand#x\tPROMO ANODIZED COPPER\t36\t8\nBrand#x\tPROMO ANODIZED COPPER\t45\t8\nBrand#x\tPROMO ANODIZED NICKEL\t45\t8\nBrand#x\tPROMO ANODIZED TIN\t14\t8\nBrand#x\tPROMO BRUSHED BRASS\t19\t8\nBrand#x\tPROMO BRUSHED BRASS\t36\t8\nBrand#x\tPROMO BRUSHED COPPER\t14\t8\nBrand#x\tPROMO BRUSHED NICKEL\t3\t8\nBrand#x\tPROMO BRUSHED 
NICKEL\t49\t8\nBrand#x\tPROMO BRUSHED TIN\t9\t8\nBrand#x\tPROMO BRUSHED TIN\t49\t8\nBrand#x\tPROMO BURNISHED BRASS\t14\t8\nBrand#x\tPROMO BURNISHED BRASS\t45\t8\nBrand#x\tPROMO BURNISHED COPPER\t49\t8\nBrand#x\tPROMO BURNISHED NICKEL\t9\t8\nBrand#x\tPROMO BURNISHED NICKEL\t23\t8\nBrand#x\tPROMO BURNISHED STEEL\t14\t8\nBrand#x\tPROMO BURNISHED TIN\t14\t8\nBrand#x\tPROMO BURNISHED TIN\t49\t8\nBrand#x\tPROMO PLATED BRASS\t14\t8\nBrand#x\tPROMO PLATED COPPER\t14\t8\nBrand#x\tPROMO PLATED NICKEL\t23\t8\nBrand#x\tPROMO PLATED NICKEL\t45\t8\nBrand#x\tPROMO PLATED STEEL\t3\t8\nBrand#x\tPROMO PLATED STEEL\t49\t8\nBrand#x\tPROMO PLATED TIN\t3\t8\nBrand#x\tPROMO PLATED TIN\t23\t8\nBrand#x\tPROMO PLATED TIN\t36\t8\nBrand#x\tPROMO PLATED TIN\t45\t8\nBrand#x\tPROMO POLISHED BRASS\t14\t8\nBrand#x\tPROMO POLISHED COPPER\t23\t8\nBrand#x\tPROMO POLISHED NICKEL\t19\t8\nBrand#x\tPROMO POLISHED NICKEL\t23\t8\nBrand#x\tPROMO POLISHED NICKEL\t36\t8\nBrand#x\tPROMO POLISHED STEEL\t3\t8\nBrand#x\tPROMO POLISHED STEEL\t14\t8\nBrand#x\tPROMO POLISHED TIN\t23\t8\nBrand#x\tPROMO POLISHED TIN\t49\t8\nBrand#x\tSMALL ANODIZED BRASS\t36\t8\nBrand#x\tSMALL ANODIZED BRASS\t49\t8\nBrand#x\tSMALL ANODIZED COPPER\t14\t8\nBrand#x\tSMALL ANODIZED STEEL\t14\t8\nBrand#x\tSMALL ANODIZED STEEL\t23\t8\nBrand#x\tSMALL ANODIZED TIN\t3\t8\nBrand#x\tSMALL BRUSHED BRASS\t49\t8\nBrand#x\tSMALL BRUSHED COPPER\t23\t8\nBrand#x\tSMALL BRUSHED COPPER\t45\t8\nBrand#x\tSMALL BRUSHED NICKEL\t3\t8\nBrand#x\tSMALL BRUSHED STEEL\t23\t8\nBrand#x\tSMALL BRUSHED STEEL\t45\t8\nBrand#x\tSMALL BRUSHED STEEL\t49\t8\nBrand#x\tSMALL BRUSHED TIN\t3\t8\nBrand#x\tSMALL BRUSHED TIN\t14\t8\nBrand#x\tSMALL BURNISHED BRASS\t3\t8\nBrand#x\tSMALL BURNISHED BRASS\t9\t8\nBrand#x\tSMALL BURNISHED BRASS\t49\t8\nBrand#x\tSMALL BURNISHED COPPER\t45\t8\nBrand#x\tSMALL BURNISHED NICKEL\t3\t8\nBrand#x\tSMALL BURNISHED NICKEL\t49\t8\nBrand#x\tSMALL BURNISHED STEEL\t19\t8\nBrand#x\tSMALL BURNISHED STEEL\t49\t8\nBrand#x\tSMALL PLATED BRASS\t3\t8\nBrand#x\tSMALL PLATED BRASS\t45\t8\nBrand#x\tSMALL PLATED COPPER\t14\t8\nBrand#x\tSMALL PLATED COPPER\t36\t8\nBrand#x\tSMALL PLATED COPPER\t45\t8\nBrand#x\tSMALL PLATED NICKEL\t23\t8\nBrand#x\tSMALL PLATED STEEL\t19\t8\nBrand#x\tSMALL PLATED STEEL\t36\t8\nBrand#x\tSMALL PLATED STEEL\t49\t8\nBrand#x\tSMALL PLATED TIN\t19\t8\nBrand#x\tSMALL PLATED TIN\t23\t8\nBrand#x\tSMALL PLATED TIN\t45\t8\nBrand#x\tSMALL PLATED TIN\t49\t8\nBrand#x\tSMALL POLISHED BRASS\t19\t8\nBrand#x\tSMALL POLISHED BRASS\t49\t8\nBrand#x\tSMALL POLISHED COPPER\t9\t8\nBrand#x\tSMALL POLISHED NICKEL\t3\t8\nBrand#x\tSMALL POLISHED NICKEL\t23\t8\nBrand#x\tSMALL POLISHED NICKEL\t49\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t8\nBrand#x\tSTANDARD BRUSHED TIN\t3\t8\nBrand#x\tSTANDARD BRUSHED TIN\t36\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t8\nBrand#x\tSTANDARD BURNISHED 
NICKEL\t45\t8\nBrand#x\tSTANDARD BURNISHED TIN\t49\t8\nBrand#x\tSTANDARD PLATED BRASS\t23\t8\nBrand#x\tSTANDARD PLATED COPPER\t3\t8\nBrand#x\tSTANDARD PLATED COPPER\t14\t8\nBrand#x\tSTANDARD PLATED COPPER\t23\t8\nBrand#x\tSTANDARD PLATED COPPER\t36\t8\nBrand#x\tSTANDARD PLATED STEEL\t3\t8\nBrand#x\tSTANDARD PLATED STEEL\t19\t8\nBrand#x\tSTANDARD PLATED STEEL\t36\t8\nBrand#x\tSTANDARD PLATED STEEL\t49\t8\nBrand#x\tSTANDARD PLATED TIN\t3\t8\nBrand#x\tSTANDARD PLATED TIN\t23\t8\nBrand#x\tSTANDARD POLISHED BRASS\t36\t8\nBrand#x\tSTANDARD POLISHED BRASS\t49\t8\nBrand#x\tSTANDARD POLISHED COPPER\t14\t8\nBrand#x\tSTANDARD POLISHED COPPER\t45\t8\nBrand#x\tSTANDARD POLISHED TIN\t3\t8\nBrand#x\tECONOMY ANODIZED BRASS\t23\t8\nBrand#x\tECONOMY ANODIZED COPPER\t9\t8\nBrand#x\tECONOMY ANODIZED COPPER\t14\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t8\nBrand#x\tECONOMY ANODIZED STEEL\t3\t8\nBrand#x\tECONOMY ANODIZED STEEL\t36\t8\nBrand#x\tECONOMY ANODIZED TIN\t19\t8\nBrand#x\tECONOMY ANODIZED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED BRASS\t19\t8\nBrand#x\tECONOMY BRUSHED COPPER\t23\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t8\nBrand#x\tECONOMY BRUSHED STEEL\t36\t8\nBrand#x\tECONOMY BRUSHED TIN\t9\t8\nBrand#x\tECONOMY BRUSHED TIN\t14\t8\nBrand#x\tECONOMY BRUSHED TIN\t36\t8\nBrand#x\tECONOMY BURNISHED BRASS\t23\t8\nBrand#x\tECONOMY BURNISHED BRASS\t49\t8\nBrand#x\tECONOMY BURNISHED COPPER\t45\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t8\nBrand#x\tECONOMY BURNISHED STEEL\t9\t8\nBrand#x\tECONOMY BURNISHED STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED STEEL\t49\t8\nBrand#x\tECONOMY PLATED BRASS\t49\t8\nBrand#x\tECONOMY PLATED COPPER\t36\t8\nBrand#x\tECONOMY PLATED COPPER\t45\t8\nBrand#x\tECONOMY PLATED NICKEL\t9\t8\nBrand#x\tECONOMY PLATED NICKEL\t36\t8\nBrand#x\tECONOMY PLATED STEEL\t14\t8\nBrand#x\tECONOMY POLISHED BRASS\t3\t8\nBrand#x\tECONOMY POLISHED BRASS\t9\t8\nBrand#x\tECONOMY POLISHED BRASS\t14\t8\nBrand#x\tECONOMY POLISHED BRASS\t36\t8\nBrand#x\tECONOMY POLISHED COPPER\t23\t8\nBrand#x\tECONOMY POLISHED NICKEL\t23\t8\nBrand#x\tECONOMY POLISHED NICKEL\t36\t8\nBrand#x\tECONOMY POLISHED STEEL\t23\t8\nBrand#x\tECONOMY POLISHED STEEL\t36\t8\nBrand#x\tECONOMY POLISHED TIN\t9\t8\nBrand#x\tECONOMY POLISHED TIN\t23\t8\nBrand#x\tLARGE ANODIZED COPPER\t23\t8\nBrand#x\tLARGE ANODIZED NICKEL\t3\t8\nBrand#x\tLARGE ANODIZED NICKEL\t23\t8\nBrand#x\tLARGE ANODIZED NICKEL\t49\t8\nBrand#x\tLARGE ANODIZED STEEL\t14\t8\nBrand#x\tLARGE ANODIZED STEEL\t49\t8\nBrand#x\tLARGE ANODIZED TIN\t9\t8\nBrand#x\tLARGE BRUSHED COPPER\t19\t8\nBrand#x\tLARGE BRUSHED COPPER\t49\t8\nBrand#x\tLARGE BRUSHED NICKEL\t36\t8\nBrand#x\tLARGE BRUSHED STEEL\t9\t8\nBrand#x\tLARGE BRUSHED STEEL\t19\t8\nBrand#x\tLARGE BRUSHED TIN\t45\t8\nBrand#x\tLARGE BRUSHED TIN\t49\t8\nBrand#x\tLARGE BURNISHED BRASS\t3\t8\nBrand#x\tLARGE BURNISHED BRASS\t23\t8\nBrand#x\tLARGE BURNISHED COPPER\t3\t8\nBrand#x\tLARGE BURNISHED NICKEL\t14\t8\nBrand#x\tLARGE BURNISHED NICKEL\t19\t8\nBrand#x\tLARGE BURNISHED TIN\t45\t8\nBrand#x\tLARGE PLATED BRASS\t9\t8\nBrand#x\tLARGE PLATED BRASS\t23\t8\nBrand#x\tLARGE PLATED COPPER\t45\t8\nBrand#x\tLARGE PLATED COPPER\t49\t8\nBrand#x\tLARGE PLATED NICKEL\t14\t8\nBrand#x\tLARGE PLATED NICKEL\t49\t8\nBrand#x\tLARGE PLATED STEEL\t19\t8\nBrand#x\tLARGE PLATED STEEL\t36\t8\nBrand#x\tLARGE PLATED 
TIN\t19\t8\nBrand#x\tLARGE POLISHED BRASS\t3\t8\nBrand#x\tLARGE POLISHED BRASS\t14\t8\nBrand#x\tLARGE POLISHED BRASS\t36\t8\nBrand#x\tLARGE POLISHED NICKEL\t9\t8\nBrand#x\tLARGE POLISHED NICKEL\t19\t8\nBrand#x\tLARGE POLISHED NICKEL\t36\t8\nBrand#x\tLARGE POLISHED STEEL\t23\t8\nBrand#x\tLARGE POLISHED STEEL\t49\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t8\nBrand#x\tMEDIUM BRUSHED TIN\t49\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t8\nBrand#x\tMEDIUM PLATED BRASS\t19\t8\nBrand#x\tMEDIUM PLATED BRASS\t23\t8\nBrand#x\tMEDIUM PLATED COPPER\t3\t8\nBrand#x\tMEDIUM PLATED COPPER\t9\t8\nBrand#x\tMEDIUM PLATED COPPER\t23\t8\nBrand#x\tMEDIUM PLATED COPPER\t45\t8\nBrand#x\tMEDIUM PLATED NICKEL\t3\t8\nBrand#x\tMEDIUM PLATED NICKEL\t19\t8\nBrand#x\tMEDIUM PLATED STEEL\t14\t8\nBrand#x\tMEDIUM PLATED STEEL\t19\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED NICKEL\t14\t8\nBrand#x\tPROMO ANODIZED STEEL\t9\t8\nBrand#x\tPROMO ANODIZED STEEL\t45\t8\nBrand#x\tPROMO BRUSHED BRASS\t19\t8\nBrand#x\tPROMO BRUSHED COPPER\t3\t8\nBrand#x\tPROMO BRUSHED COPPER\t45\t8\nBrand#x\tPROMO BRUSHED NICKEL\t19\t8\nBrand#x\tPROMO BRUSHED NICKEL\t45\t8\nBrand#x\tPROMO BRUSHED NICKEL\t49\t8\nBrand#x\tPROMO BRUSHED STEEL\t19\t8\nBrand#x\tPROMO BRUSHED TIN\t14\t8\nBrand#x\tPROMO BURNISHED BRASS\t49\t8\nBrand#x\tPROMO BURNISHED STEEL\t3\t8\nBrand#x\tPROMO PLATED BRASS\t3\t8\nBrand#x\tPROMO PLATED BRASS\t9\t8\nBrand#x\tPROMO PLATED BRASS\t19\t8\nBrand#x\tPROMO PLATED BRASS\t49\t8\nBrand#x\tPROMO PLATED COPPER\t9\t8\nBrand#x\tPROMO PLATED NICKEL\t9\t8\nBrand#x\tPROMO PLATED STEEL\t36\t8\nBrand#x\tPROMO PLATED TIN\t23\t8\nBrand#x\tPROMO PLATED TIN\t49\t8\nBrand#x\tPROMO POLISHED BRASS\t45\t8\nBrand#x\tPROMO POLISHED COPPER\t49\t8\nBrand#x\tPROMO POLISHED NICKEL\t45\t8\nBrand#x\tPROMO POLISHED NICKEL\t49\t8\nBrand#x\tPROMO POLISHED STEEL\t14\t8\nBrand#x\tPROMO POLISHED STEEL\t36\t8\nBrand#x\tPROMO POLISHED TIN\t3\t8\nBrand#x\tPROMO POLISHED TIN\t14\t8\nBrand#x\tPROMO POLISHED TIN\t45\t8\nBrand#x\tSMALL ANODIZED BRASS\t19\t8\nBrand#x\tSMALL ANODIZED BRASS\t23\t8\nBrand#x\tSMALL ANODIZED COPPER\t36\t8\nBrand#x\tSMALL ANODIZED NICKEL\t9\t8\nBrand#x\tSMALL ANODIZED NICKEL\t45\t8\nBrand#x\tSMALL ANODIZED NICKEL\t49\t8\nBrand#x\tSMALL ANODIZED STEEL\t45\t8\nBrand#x\tSMALL ANODIZED TIN\t9\t8\nBrand#x\tSMALL ANODIZED TIN\t23\t8\nBrand#x\tSMALL ANODIZED TIN\t36\t8\nBrand#x\tSMALL BRUSHED BRASS\t9\t8\nBrand#x\tSMALL BRUSHED COPPER\t19\t8\nBrand#x\tSMALL BRUSHED NICKEL\t36\t8\nBrand#x\tSMALL BRUSHED STEEL\t9\t8\nBrand#x\tSMALL BRUSHED STEEL\t19\t8\nBrand#x\tSMALL BRUSHED STEEL\t36\t8\nBrand#x\tSMALL BRUSHED TIN\t3\t8\nBrand#x\tSMALL BRUSHED TIN\t14\t8\nBrand#x\tSMALL BRUSHED TIN\t36\t8\nBrand#x\tSMALL BRUSHED TIN\t49\t8\nBrand#x\tSMALL BURNISHED BRASS\t19\t8\nBrand#x\tSMALL BURNISHED BRASS\t36\t8\nBrand#x\tSMALL BURNISHED BRASS\t49\t8\nBrand#x\tSMALL BURNISHED NICKEL\t19\t8\nBrand#x\tSMALL BURNISHED NICKEL\t23\t8\nBrand#x\tSMALL BURNISHED NICKEL\t36\t8\nBrand#x\tSMALL BURNISHED 
TIN\t9\t8\nBrand#x\tSMALL PLATED BRASS\t23\t8\nBrand#x\tSMALL PLATED BRASS\t36\t8\nBrand#x\tSMALL PLATED COPPER\t3\t8\nBrand#x\tSMALL PLATED COPPER\t23\t8\nBrand#x\tSMALL PLATED NICKEL\t49\t8\nBrand#x\tSMALL PLATED STEEL\t3\t8\nBrand#x\tSMALL PLATED STEEL\t14\t8\nBrand#x\tSMALL PLATED STEEL\t49\t8\nBrand#x\tSMALL PLATED TIN\t3\t8\nBrand#x\tSMALL PLATED TIN\t14\t8\nBrand#x\tSMALL POLISHED BRASS\t14\t8\nBrand#x\tSMALL POLISHED BRASS\t23\t8\nBrand#x\tSMALL POLISHED NICKEL\t3\t8\nBrand#x\tSMALL POLISHED NICKEL\t9\t8\nBrand#x\tSMALL POLISHED NICKEL\t36\t8\nBrand#x\tSMALL POLISHED NICKEL\t45\t8\nBrand#x\tSMALL POLISHED STEEL\t9\t8\nBrand#x\tSMALL POLISHED TIN\t3\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t8\nBrand#x\tSTANDARD BRUSHED TIN\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t8\nBrand#x\tSTANDARD BURNISHED TIN\t14\t8\nBrand#x\tSTANDARD BURNISHED TIN\t23\t8\nBrand#x\tSTANDARD PLATED BRASS\t14\t8\nBrand#x\tSTANDARD PLATED COPPER\t14\t8\nBrand#x\tSTANDARD PLATED NICKEL\t3\t8\nBrand#x\tSTANDARD PLATED NICKEL\t14\t8\nBrand#x\tSTANDARD PLATED NICKEL\t45\t8\nBrand#x\tSTANDARD PLATED STEEL\t19\t8\nBrand#x\tSTANDARD PLATED STEEL\t49\t8\nBrand#x\tSTANDARD PLATED TIN\t36\t8\nBrand#x\tSTANDARD PLATED TIN\t45\t8\nBrand#x\tSTANDARD POLISHED BRASS\t49\t8\nBrand#x\tSTANDARD POLISHED COPPER\t23\t8\nBrand#x\tSTANDARD POLISHED COPPER\t45\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t8\nBrand#x\tSTANDARD POLISHED STEEL\t3\t8\nBrand#x\tSTANDARD POLISHED STEEL\t9\t8\nBrand#x\tSTANDARD POLISHED STEEL\t19\t8\nBrand#x\tSTANDARD POLISHED STEEL\t23\t8\nBrand#x\tECONOMY ANODIZED BRASS\t49\t8\nBrand#x\tECONOMY ANODIZED COPPER\t9\t8\nBrand#x\tECONOMY ANODIZED COPPER\t23\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t8\nBrand#x\tECONOMY ANODIZED STEEL\t3\t8\nBrand#x\tECONOMY ANODIZED STEEL\t19\t8\nBrand#x\tECONOMY ANODIZED TIN\t49\t8\nBrand#x\tECONOMY BRUSHED BRASS\t36\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t8\nBrand#x\tECONOMY BRUSHED STEEL\t49\t8\nBrand#x\tECONOMY BURNISHED COPPER\t9\t8\nBrand#x\tECONOMY BURNISHED COPPER\t14\t8\nBrand#x\tECONOMY BURNISHED COPPER\t45\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t8\nBrand#x\tECONOMY BURNISHED STEEL\t9\t8\nBrand#x\tECONOMY PLATED NICKEL\t45\t8\nBrand#x\tECONOMY PLATED STEEL\t49\t8\nBrand#x\tECONOMY PLATED TIN\t3\t8\nBrand#x\tECONOMY PLATED TIN\t19\t8\nBrand#x\tECONOMY PLATED TIN\t36\t8\nBrand#x\tECONOMY POLISHED BRASS\t36\t8\nBrand#x\tECONOMY POLISHED BRASS\t45\t8\nBrand#x\tECONOMY POLISHED COPPER\t9\t8\nBrand#x\tECONOMY POLISHED TIN\t36\t8\nBrand#x\tLARGE ANODIZED BRASS\t45\t8\nBrand#x\tLARGE ANODIZED NICKEL\t36\t8\nBrand#x\tLARGE ANODIZED STEEL\t3\t8\nBrand#x\tLARGE BRUSHED BRASS\t3\t8\nBrand#x\tLARGE BRUSHED NICKEL\t19\t8\nBrand#x\tLARGE BURNISHED BRASS\t9\t8\nBrand#x\tLARGE BURNISHED BRASS\t49\t8\nBrand#x\tLARGE 
BURNISHED COPPER\t3\t8\nBrand#x\tLARGE BURNISHED COPPER\t14\t8\nBrand#x\tLARGE BURNISHED COPPER\t19\t8\nBrand#x\tLARGE BURNISHED COPPER\t45\t8\nBrand#x\tLARGE BURNISHED TIN\t3\t8\nBrand#x\tLARGE BURNISHED TIN\t9\t8\nBrand#x\tLARGE PLATED COPPER\t36\t8\nBrand#x\tLARGE PLATED NICKEL\t36\t8\nBrand#x\tLARGE PLATED STEEL\t9\t8\nBrand#x\tLARGE PLATED STEEL\t23\t8\nBrand#x\tLARGE PLATED STEEL\t49\t8\nBrand#x\tLARGE PLATED TIN\t3\t8\nBrand#x\tLARGE PLATED TIN\t9\t8\nBrand#x\tLARGE PLATED TIN\t19\t8\nBrand#x\tLARGE PLATED TIN\t45\t8\nBrand#x\tLARGE POLISHED BRASS\t9\t8\nBrand#x\tLARGE POLISHED BRASS\t19\t8\nBrand#x\tLARGE POLISHED COPPER\t14\t8\nBrand#x\tLARGE POLISHED COPPER\t23\t8\nBrand#x\tLARGE POLISHED COPPER\t36\t8\nBrand#x\tLARGE POLISHED NICKEL\t14\t8\nBrand#x\tLARGE POLISHED STEEL\t9\t8\nBrand#x\tLARGE POLISHED STEEL\t36\t8\nBrand#x\tLARGE POLISHED STEEL\t45\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t8\nBrand#x\tMEDIUM ANODIZED TIN\t9\t8\nBrand#x\tMEDIUM ANODIZED TIN\t49\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t8\nBrand#x\tMEDIUM BURNISHED TIN\t9\t8\nBrand#x\tMEDIUM PLATED BRASS\t14\t8\nBrand#x\tMEDIUM PLATED BRASS\t45\t8\nBrand#x\tMEDIUM PLATED COPPER\t49\t8\nBrand#x\tMEDIUM PLATED NICKEL\t9\t8\nBrand#x\tMEDIUM PLATED NICKEL\t19\t8\nBrand#x\tMEDIUM PLATED NICKEL\t23\t8\nBrand#x\tMEDIUM PLATED NICKEL\t36\t8\nBrand#x\tMEDIUM PLATED NICKEL\t45\t8\nBrand#x\tMEDIUM PLATED TIN\t3\t8\nBrand#x\tPROMO ANODIZED BRASS\t23\t8\nBrand#x\tPROMO ANODIZED BRASS\t45\t8\nBrand#x\tPROMO ANODIZED COPPER\t19\t8\nBrand#x\tPROMO ANODIZED COPPER\t45\t8\nBrand#x\tPROMO ANODIZED TIN\t3\t8\nBrand#x\tPROMO ANODIZED TIN\t14\t8\nBrand#x\tPROMO ANODIZED TIN\t19\t8\nBrand#x\tPROMO BRUSHED COPPER\t49\t8\nBrand#x\tPROMO BRUSHED NICKEL\t3\t8\nBrand#x\tPROMO BRUSHED NICKEL\t19\t8\nBrand#x\tPROMO BRUSHED TIN\t14\t8\nBrand#x\tPROMO BRUSHED TIN\t19\t8\nBrand#x\tPROMO BURNISHED COPPER\t14\t8\nBrand#x\tPROMO BURNISHED NICKEL\t3\t8\nBrand#x\tPROMO BURNISHED NICKEL\t36\t8\nBrand#x\tPROMO BURNISHED STEEL\t19\t8\nBrand#x\tPROMO BURNISHED TIN\t36\t8\nBrand#x\tPROMO PLATED BRASS\t14\t8\nBrand#x\tPROMO PLATED BRASS\t19\t8\nBrand#x\tPROMO PLATED NICKEL\t3\t8\nBrand#x\tPROMO PLATED NICKEL\t9\t8\nBrand#x\tPROMO PLATED NICKEL\t49\t8\nBrand#x\tPROMO PLATED STEEL\t23\t8\nBrand#x\tPROMO PLATED STEEL\t36\t8\nBrand#x\tPROMO POLISHED BRASS\t14\t8\nBrand#x\tPROMO POLISHED COPPER\t9\t8\nBrand#x\tPROMO POLISHED COPPER\t36\t8\nBrand#x\tPROMO POLISHED NICKEL\t19\t8\nBrand#x\tPROMO POLISHED NICKEL\t45\t8\nBrand#x\tPROMO POLISHED TIN\t19\t8\nBrand#x\tSMALL ANODIZED BRASS\t3\t8\nBrand#x\tSMALL ANODIZED BRASS\t9\t8\nBrand#x\tSMALL ANODIZED BRASS\t14\t8\nBrand#x\tSMALL ANODIZED BRASS\t23\t8\nBrand#x\tSMALL ANODIZED NICKEL\t14\t8\nBrand#x\tSMALL ANODIZED NICKEL\t19\t8\nBrand#x\tSMALL 
ANODIZED NICKEL\t49\t8\nBrand#x\tSMALL ANODIZED STEEL\t23\t8\nBrand#x\tSMALL ANODIZED TIN\t3\t8\nBrand#x\tSMALL ANODIZED TIN\t45\t8\nBrand#x\tSMALL BRUSHED BRASS\t3\t8\nBrand#x\tSMALL BRUSHED BRASS\t49\t8\nBrand#x\tSMALL BRUSHED COPPER\t36\t8\nBrand#x\tSMALL BRUSHED NICKEL\t9\t8\nBrand#x\tSMALL BRUSHED NICKEL\t49\t8\nBrand#x\tSMALL BRUSHED STEEL\t3\t8\nBrand#x\tSMALL BRUSHED TIN\t36\t8\nBrand#x\tSMALL BRUSHED TIN\t49\t8\nBrand#x\tSMALL BURNISHED BRASS\t23\t8\nBrand#x\tSMALL BURNISHED COPPER\t45\t8\nBrand#x\tSMALL BURNISHED NICKEL\t23\t8\nBrand#x\tSMALL BURNISHED STEEL\t3\t8\nBrand#x\tSMALL BURNISHED TIN\t3\t8\nBrand#x\tSMALL PLATED COPPER\t36\t8\nBrand#x\tSMALL PLATED COPPER\t49\t8\nBrand#x\tSMALL PLATED STEEL\t23\t8\nBrand#x\tSMALL PLATED STEEL\t36\t8\nBrand#x\tSMALL PLATED STEEL\t45\t8\nBrand#x\tSMALL PLATED TIN\t49\t8\nBrand#x\tSMALL POLISHED BRASS\t3\t8\nBrand#x\tSMALL POLISHED BRASS\t19\t8\nBrand#x\tSMALL POLISHED BRASS\t23\t8\nBrand#x\tSMALL POLISHED BRASS\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t3\t8\nBrand#x\tSMALL POLISHED NICKEL\t9\t8\nBrand#x\tSMALL POLISHED NICKEL\t14\t8\nBrand#x\tSMALL POLISHED NICKEL\t19\t8\nBrand#x\tSMALL POLISHED STEEL\t14\t8\nBrand#x\tSMALL POLISHED TIN\t45\t8\nBrand#x\tSMALL POLISHED TIN\t49\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t8\nBrand#x\tSTANDARD ANODIZED TIN\t9\t8\nBrand#x\tSTANDARD ANODIZED TIN\t14\t8\nBrand#x\tSTANDARD ANODIZED TIN\t23\t8\nBrand#x\tSTANDARD ANODIZED TIN\t49\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t8\nBrand#x\tSTANDARD BRUSHED TIN\t36\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t8\nBrand#x\tSTANDARD BURNISHED TIN\t14\t8\nBrand#x\tSTANDARD BURNISHED TIN\t19\t8\nBrand#x\tSTANDARD PLATED BRASS\t19\t8\nBrand#x\tSTANDARD PLATED COPPER\t14\t8\nBrand#x\tSTANDARD PLATED COPPER\t36\t8\nBrand#x\tSTANDARD PLATED COPPER\t45\t8\nBrand#x\tSTANDARD PLATED STEEL\t45\t8\nBrand#x\tSTANDARD PLATED TIN\t49\t8\nBrand#x\tSTANDARD POLISHED BRASS\t19\t8\nBrand#x\tSTANDARD POLISHED BRASS\t49\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t8\nBrand#x\tSTANDARD POLISHED STEEL\t19\t8\nBrand#x\tSTANDARD POLISHED TIN\t36\t8\nBrand#x\tSTANDARD POLISHED TIN\t45\t8\nBrand#x\tSTANDARD POLISHED TIN\t49\t8\nBrand#x\tECONOMY ANODIZED COPPER\t23\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t8\nBrand#x\tECONOMY ANODIZED STEEL\t3\t8\nBrand#x\tECONOMY ANODIZED TIN\t3\t8\nBrand#x\tECONOMY ANODIZED TIN\t19\t8\nBrand#x\tECONOMY BRUSHED COPPER\t3\t8\nBrand#x\tECONOMY BRUSHED COPPER\t9\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t8\nBrand#x\tECONOMY BRUSHED STEEL\t3\t8\nBrand#x\tECONOMY BRUSHED STEEL\t19\t8\nBrand#x\tECONOMY BRUSHED TIN\t23\t8\nBrand#x\tECONOMY BURNISHED COPPER\t45\t8\nBrand#x\tECONOMY BURNISHED STEEL\t3\t8\nBrand#x\tECONOMY BURNISHED STEEL\t14\t8\nBrand#x\tECONOMY BURNISHED STEEL\t19\t8\nBrand#x\tECONOMY BURNISHED TIN\t3\t8\nBrand#x\tECONOMY BURNISHED TIN\t45\t8\nBrand#x\tECONOMY BURNISHED TIN\t49\t8\nBrand#x\tECONOMY PLATED BRASS\t36\t8\nBrand#x\tECONOMY PLATED COPPER\t19\t8\nBrand#x\tECONOMY PLATED 
COPPER\t49\t8\nBrand#x\tECONOMY PLATED NICKEL\t36\t8\nBrand#x\tECONOMY PLATED STEEL\t23\t8\nBrand#x\tECONOMY PLATED TIN\t3\t8\nBrand#x\tECONOMY PLATED TIN\t36\t8\nBrand#x\tECONOMY POLISHED BRASS\t9\t8\nBrand#x\tECONOMY POLISHED BRASS\t23\t8\nBrand#x\tECONOMY POLISHED COPPER\t49\t8\nBrand#x\tECONOMY POLISHED NICKEL\t9\t8\nBrand#x\tECONOMY POLISHED NICKEL\t45\t8\nBrand#x\tECONOMY POLISHED STEEL\t49\t8\nBrand#x\tECONOMY POLISHED TIN\t45\t8\nBrand#x\tLARGE ANODIZED BRASS\t3\t8\nBrand#x\tLARGE ANODIZED BRASS\t14\t8\nBrand#x\tLARGE ANODIZED BRASS\t36\t8\nBrand#x\tLARGE ANODIZED COPPER\t23\t8\nBrand#x\tLARGE ANODIZED COPPER\t45\t8\nBrand#x\tLARGE ANODIZED NICKEL\t49\t8\nBrand#x\tLARGE ANODIZED STEEL\t3\t8\nBrand#x\tLARGE ANODIZED STEEL\t9\t8\nBrand#x\tLARGE ANODIZED TIN\t9\t8\nBrand#x\tLARGE BRUSHED BRASS\t45\t8\nBrand#x\tLARGE BRUSHED BRASS\t49\t8\nBrand#x\tLARGE BRUSHED COPPER\t19\t8\nBrand#x\tLARGE BRUSHED NICKEL\t14\t8\nBrand#x\tLARGE BRUSHED NICKEL\t23\t8\nBrand#x\tLARGE BRUSHED STEEL\t14\t8\nBrand#x\tLARGE BRUSHED TIN\t45\t8\nBrand#x\tLARGE BURNISHED BRASS\t19\t8\nBrand#x\tLARGE BURNISHED BRASS\t23\t8\nBrand#x\tLARGE BURNISHED COPPER\t23\t8\nBrand#x\tLARGE BURNISHED COPPER\t45\t8\nBrand#x\tLARGE BURNISHED NICKEL\t9\t8\nBrand#x\tLARGE BURNISHED NICKEL\t19\t8\nBrand#x\tLARGE BURNISHED STEEL\t9\t8\nBrand#x\tLARGE BURNISHED STEEL\t36\t8\nBrand#x\tLARGE BURNISHED TIN\t14\t8\nBrand#x\tLARGE BURNISHED TIN\t23\t8\nBrand#x\tLARGE BURNISHED TIN\t49\t8\nBrand#x\tLARGE PLATED BRASS\t14\t8\nBrand#x\tLARGE PLATED COPPER\t19\t8\nBrand#x\tLARGE PLATED TIN\t9\t8\nBrand#x\tLARGE PLATED TIN\t36\t8\nBrand#x\tLARGE POLISHED BRASS\t45\t8\nBrand#x\tLARGE POLISHED COPPER\t3\t8\nBrand#x\tLARGE POLISHED COPPER\t9\t8\nBrand#x\tLARGE POLISHED COPPER\t19\t8\nBrand#x\tLARGE POLISHED NICKEL\t14\t8\nBrand#x\tLARGE POLISHED NICKEL\t19\t8\nBrand#x\tLARGE POLISHED TIN\t14\t8\nBrand#x\tLARGE POLISHED TIN\t19\t8\nBrand#x\tLARGE POLISHED TIN\t23\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t8\nBrand#x\tMEDIUM ANODIZED TIN\t19\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t8\nBrand#x\tMEDIUM BRUSHED TIN\t49\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t3\t8\nBrand#x\tMEDIUM BURNISHED TIN\t14\t8\nBrand#x\tMEDIUM BURNISHED TIN\t23\t8\nBrand#x\tMEDIUM PLATED BRASS\t3\t8\nBrand#x\tMEDIUM PLATED COPPER\t14\t8\nBrand#x\tMEDIUM PLATED COPPER\t19\t8\nBrand#x\tMEDIUM PLATED TIN\t19\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED BRASS\t9\t8\nBrand#x\tPROMO ANODIZED BRASS\t14\t8\nBrand#x\tPROMO ANODIZED BRASS\t23\t8\nBrand#x\tPROMO ANODIZED COPPER\t23\t8\nBrand#x\tPROMO ANODIZED NICKEL\t3\t8\nBrand#x\tPROMO ANODIZED NICKEL\t36\t8\nBrand#x\tPROMO ANODIZED NICKEL\t45\t8\nBrand#x\tPROMO ANODIZED STEEL\t9\t8\nBrand#x\tPROMO ANODIZED STEEL\t49\t8\nBrand#x\tPROMO BRUSHED BRASS\t19\t8\nBrand#x\tPROMO BRUSHED BRASS\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t36\t8\nBrand#x\tPROMO BRUSHED COPPER\t45\t8\nBrand#x\tPROMO BRUSHED NICKEL\t23\t8\nBrand#x\tPROMO BRUSHED NICKEL\t49\t8\nBrand#x\tPROMO BRUSHED 
STEEL\t49\t8\nBrand#x\tPROMO BRUSHED TIN\t9\t8\nBrand#x\tPROMO BRUSHED TIN\t36\t8\nBrand#x\tPROMO BURNISHED COPPER\t3\t8\nBrand#x\tPROMO BURNISHED COPPER\t14\t8\nBrand#x\tPROMO BURNISHED COPPER\t19\t8\nBrand#x\tPROMO BURNISHED COPPER\t36\t8\nBrand#x\tPROMO BURNISHED NICKEL\t45\t8\nBrand#x\tPROMO BURNISHED STEEL\t19\t8\nBrand#x\tPROMO PLATED COPPER\t19\t8\nBrand#x\tPROMO PLATED COPPER\t36\t8\nBrand#x\tPROMO PLATED COPPER\t49\t8\nBrand#x\tPROMO PLATED NICKEL\t36\t8\nBrand#x\tPROMO PLATED STEEL\t19\t8\nBrand#x\tPROMO PLATED STEEL\t23\t8\nBrand#x\tPROMO PLATED TIN\t3\t8\nBrand#x\tPROMO PLATED TIN\t49\t8\nBrand#x\tPROMO POLISHED BRASS\t3\t8\nBrand#x\tPROMO POLISHED BRASS\t49\t8\nBrand#x\tPROMO POLISHED NICKEL\t3\t8\nBrand#x\tPROMO POLISHED TIN\t9\t8\nBrand#x\tPROMO POLISHED TIN\t45\t8\nBrand#x\tSMALL ANODIZED BRASS\t9\t8\nBrand#x\tSMALL ANODIZED BRASS\t36\t8\nBrand#x\tSMALL ANODIZED COPPER\t36\t8\nBrand#x\tSMALL ANODIZED COPPER\t45\t8\nBrand#x\tSMALL ANODIZED NICKEL\t14\t8\nBrand#x\tSMALL ANODIZED NICKEL\t49\t8\nBrand#x\tSMALL ANODIZED STEEL\t9\t8\nBrand#x\tSMALL ANODIZED STEEL\t45\t8\nBrand#x\tSMALL ANODIZED TIN\t23\t8\nBrand#x\tSMALL BRUSHED BRASS\t23\t8\nBrand#x\tSMALL BRUSHED NICKEL\t19\t8\nBrand#x\tSMALL BRUSHED NICKEL\t49\t8\nBrand#x\tSMALL BRUSHED STEEL\t36\t8\nBrand#x\tSMALL BRUSHED STEEL\t49\t8\nBrand#x\tSMALL BRUSHED TIN\t9\t8\nBrand#x\tSMALL BRUSHED TIN\t45\t8\nBrand#x\tSMALL BURNISHED NICKEL\t23\t8\nBrand#x\tSMALL BURNISHED STEEL\t3\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t19\t8\nBrand#x\tSMALL BURNISHED STEEL\t23\t8\nBrand#x\tSMALL BURNISHED STEEL\t36\t8\nBrand#x\tSMALL BURNISHED STEEL\t49\t8\nBrand#x\tSMALL BURNISHED TIN\t36\t8\nBrand#x\tSMALL PLATED BRASS\t23\t8\nBrand#x\tSMALL PLATED COPPER\t14\t8\nBrand#x\tSMALL PLATED COPPER\t19\t8\nBrand#x\tSMALL PLATED NICKEL\t36\t8\nBrand#x\tSMALL PLATED STEEL\t14\t8\nBrand#x\tSMALL PLATED STEEL\t36\t8\nBrand#x\tSMALL PLATED TIN\t3\t8\nBrand#x\tSMALL PLATED TIN\t36\t8\nBrand#x\tSMALL POLISHED BRASS\t3\t8\nBrand#x\tSMALL POLISHED BRASS\t14\t8\nBrand#x\tSMALL POLISHED BRASS\t19\t8\nBrand#x\tSMALL POLISHED COPPER\t3\t8\nBrand#x\tSMALL POLISHED COPPER\t36\t8\nBrand#x\tSMALL POLISHED NICKEL\t14\t8\nBrand#x\tSMALL POLISHED STEEL\t49\t8\nBrand#x\tSMALL POLISHED TIN\t23\t8\nBrand#x\tSMALL POLISHED TIN\t49\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t8\nBrand#x\tSTANDARD ANODIZED TIN\t3\t8\nBrand#x\tSTANDARD ANODIZED TIN\t9\t8\nBrand#x\tSTANDARD ANODIZED TIN\t19\t8\nBrand#x\tSTANDARD ANODIZED TIN\t36\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t8\nBrand#x\tSTANDARD BRUSHED TIN\t23\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t8\nBrand#x\tSTANDARD BURNISHED TIN\t3\t8\nBrand#x\tSTANDARD BURNISHED TIN\t9\t8\nBrand#x\tSTANDARD BURNISHED TIN\t36\t8\nBrand#x\tSTANDARD PLATED COPPER\t3\t8\nBrand#x\tSTANDARD PLATED COPPER\t23\t8\nBrand#x\tSTANDARD PLATED COPPER\t49\t8\nBrand#x\tSTANDARD PLATED NICKEL\t3\t8\nBrand#x\tSTANDARD 
PLATED NICKEL\t9\t8\nBrand#x\tSTANDARD PLATED NICKEL\t36\t8\nBrand#x\tSTANDARD PLATED TIN\t14\t8\nBrand#x\tSTANDARD PLATED TIN\t19\t8\nBrand#x\tSTANDARD POLISHED BRASS\t14\t8\nBrand#x\tSTANDARD POLISHED COPPER\t3\t8\nBrand#x\tSTANDARD POLISHED COPPER\t23\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t8\nBrand#x\tSTANDARD POLISHED STEEL\t14\t8\nBrand#x\tSTANDARD POLISHED TIN\t36\t8\nBrand#x\tSTANDARD POLISHED TIN\t45\t8\nBrand#x\tECONOMY ANODIZED BRASS\t19\t8\nBrand#x\tECONOMY ANODIZED BRASS\t45\t8\nBrand#x\tECONOMY ANODIZED COPPER\t49\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t8\nBrand#x\tECONOMY ANODIZED STEEL\t3\t8\nBrand#x\tECONOMY ANODIZED STEEL\t45\t8\nBrand#x\tECONOMY ANODIZED TIN\t49\t8\nBrand#x\tECONOMY BRUSHED COPPER\t23\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t8\nBrand#x\tECONOMY BRUSHED STEEL\t14\t8\nBrand#x\tECONOMY BRUSHED TIN\t14\t8\nBrand#x\tECONOMY BRUSHED TIN\t45\t8\nBrand#x\tECONOMY BURNISHED COPPER\t9\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t8\nBrand#x\tECONOMY BURNISHED STEEL\t14\t8\nBrand#x\tECONOMY BURNISHED STEEL\t19\t8\nBrand#x\tECONOMY BURNISHED STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED TIN\t9\t8\nBrand#x\tECONOMY BURNISHED TIN\t45\t8\nBrand#x\tECONOMY BURNISHED TIN\t49\t8\nBrand#x\tECONOMY PLATED NICKEL\t3\t8\nBrand#x\tECONOMY PLATED NICKEL\t14\t8\nBrand#x\tECONOMY PLATED STEEL\t49\t8\nBrand#x\tECONOMY PLATED TIN\t9\t8\nBrand#x\tECONOMY POLISHED BRASS\t14\t8\nBrand#x\tECONOMY POLISHED BRASS\t19\t8\nBrand#x\tECONOMY POLISHED BRASS\t49\t8\nBrand#x\tECONOMY POLISHED COPPER\t3\t8\nBrand#x\tECONOMY POLISHED COPPER\t23\t8\nBrand#x\tECONOMY POLISHED NICKEL\t9\t8\nBrand#x\tECONOMY POLISHED NICKEL\t23\t8\nBrand#x\tECONOMY POLISHED NICKEL\t49\t8\nBrand#x\tECONOMY POLISHED STEEL\t19\t8\nBrand#x\tECONOMY POLISHED STEEL\t36\t8\nBrand#x\tECONOMY POLISHED TIN\t9\t8\nBrand#x\tLARGE ANODIZED BRASS\t9\t8\nBrand#x\tLARGE ANODIZED NICKEL\t45\t8\nBrand#x\tLARGE ANODIZED STEEL\t3\t8\nBrand#x\tLARGE ANODIZED STEEL\t49\t8\nBrand#x\tLARGE ANODIZED TIN\t9\t8\nBrand#x\tLARGE BRUSHED BRASS\t3\t8\nBrand#x\tLARGE BRUSHED COPPER\t9\t8\nBrand#x\tLARGE BRUSHED COPPER\t14\t8\nBrand#x\tLARGE BRUSHED NICKEL\t45\t8\nBrand#x\tLARGE BRUSHED TIN\t36\t8\nBrand#x\tLARGE BURNISHED BRASS\t9\t8\nBrand#x\tLARGE BURNISHED BRASS\t23\t8\nBrand#x\tLARGE BURNISHED BRASS\t36\t8\nBrand#x\tLARGE BURNISHED NICKEL\t3\t8\nBrand#x\tLARGE BURNISHED STEEL\t49\t8\nBrand#x\tLARGE BURNISHED TIN\t23\t8\nBrand#x\tLARGE BURNISHED TIN\t45\t8\nBrand#x\tLARGE BURNISHED TIN\t49\t8\nBrand#x\tLARGE PLATED BRASS\t14\t8\nBrand#x\tLARGE PLATED BRASS\t45\t8\nBrand#x\tLARGE PLATED BRASS\t49\t8\nBrand#x\tLARGE PLATED NICKEL\t14\t8\nBrand#x\tLARGE PLATED STEEL\t19\t8\nBrand#x\tLARGE PLATED TIN\t14\t8\nBrand#x\tLARGE POLISHED BRASS\t45\t8\nBrand#x\tLARGE POLISHED COPPER\t3\t8\nBrand#x\tLARGE POLISHED COPPER\t9\t8\nBrand#x\tLARGE POLISHED STEEL\t49\t8\nBrand#x\tLARGE POLISHED TIN\t14\t8\nBrand#x\tLARGE POLISHED TIN\t49\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t8\nBrand#x\tMEDIUM ANODIZED TIN\t9\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t8\nBrand#x\tMEDIUM BRUSHED TIN\t3\t8\nBrand#x\tMEDIUM BRUSHED TIN\t23\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t8\nBrand#x\tMEDIUM 
BURNISHED BRASS\t49\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t8\nBrand#x\tMEDIUM BURNISHED TIN\t9\t8\nBrand#x\tMEDIUM PLATED BRASS\t3\t8\nBrand#x\tMEDIUM PLATED COPPER\t3\t8\nBrand#x\tMEDIUM PLATED COPPER\t9\t8\nBrand#x\tMEDIUM PLATED COPPER\t23\t8\nBrand#x\tMEDIUM PLATED NICKEL\t23\t8\nBrand#x\tMEDIUM PLATED STEEL\t3\t8\nBrand#x\tMEDIUM PLATED STEEL\t9\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED BRASS\t9\t8\nBrand#x\tPROMO ANODIZED COPPER\t19\t8\nBrand#x\tPROMO ANODIZED NICKEL\t9\t8\nBrand#x\tPROMO ANODIZED NICKEL\t14\t8\nBrand#x\tPROMO ANODIZED NICKEL\t19\t8\nBrand#x\tPROMO ANODIZED STEEL\t3\t8\nBrand#x\tPROMO ANODIZED STEEL\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t36\t8\nBrand#x\tPROMO BRUSHED COPPER\t23\t8\nBrand#x\tPROMO BRUSHED NICKEL\t23\t8\nBrand#x\tPROMO BRUSHED NICKEL\t36\t8\nBrand#x\tPROMO BRUSHED STEEL\t3\t8\nBrand#x\tPROMO BRUSHED TIN\t23\t8\nBrand#x\tPROMO BURNISHED BRASS\t14\t8\nBrand#x\tPROMO BURNISHED BRASS\t45\t8\nBrand#x\tPROMO BURNISHED COPPER\t3\t8\nBrand#x\tPROMO BURNISHED COPPER\t19\t8\nBrand#x\tPROMO BURNISHED COPPER\t49\t8\nBrand#x\tPROMO BURNISHED NICKEL\t3\t8\nBrand#x\tPROMO BURNISHED NICKEL\t19\t8\nBrand#x\tPROMO BURNISHED NICKEL\t49\t8\nBrand#x\tPROMO BURNISHED TIN\t3\t8\nBrand#x\tPROMO BURNISHED TIN\t14\t8\nBrand#x\tPROMO BURNISHED TIN\t45\t8\nBrand#x\tPROMO PLATED BRASS\t9\t8\nBrand#x\tPROMO PLATED COPPER\t19\t8\nBrand#x\tPROMO PLATED NICKEL\t49\t8\nBrand#x\tPROMO PLATED STEEL\t14\t8\nBrand#x\tPROMO PLATED TIN\t19\t8\nBrand#x\tPROMO POLISHED BRASS\t3\t8\nBrand#x\tPROMO POLISHED BRASS\t23\t8\nBrand#x\tPROMO POLISHED BRASS\t49\t8\nBrand#x\tPROMO POLISHED NICKEL\t3\t8\nBrand#x\tPROMO POLISHED NICKEL\t36\t8\nBrand#x\tPROMO POLISHED STEEL\t3\t8\nBrand#x\tPROMO POLISHED TIN\t3\t8\nBrand#x\tPROMO POLISHED TIN\t9\t8\nBrand#x\tPROMO POLISHED TIN\t14\t8\nBrand#x\tPROMO POLISHED TIN\t36\t8\nBrand#x\tSMALL ANODIZED BRASS\t3\t8\nBrand#x\tSMALL ANODIZED BRASS\t49\t8\nBrand#x\tSMALL ANODIZED COPPER\t23\t8\nBrand#x\tSMALL ANODIZED STEEL\t3\t8\nBrand#x\tSMALL ANODIZED STEEL\t23\t8\nBrand#x\tSMALL ANODIZED TIN\t49\t8\nBrand#x\tSMALL BRUSHED BRASS\t36\t8\nBrand#x\tSMALL BRUSHED COPPER\t14\t8\nBrand#x\tSMALL BRUSHED COPPER\t23\t8\nBrand#x\tSMALL BRUSHED NICKEL\t3\t8\nBrand#x\tSMALL BRUSHED NICKEL\t36\t8\nBrand#x\tSMALL BRUSHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED BRASS\t9\t8\nBrand#x\tSMALL BURNISHED BRASS\t36\t8\nBrand#x\tSMALL BURNISHED BRASS\t49\t8\nBrand#x\tSMALL BURNISHED COPPER\t45\t8\nBrand#x\tSMALL BURNISHED NICKEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t3\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t14\t8\nBrand#x\tSMALL BURNISHED STEEL\t23\t8\nBrand#x\tSMALL BURNISHED TIN\t19\t8\nBrand#x\tSMALL BURNISHED TIN\t45\t8\nBrand#x\tSMALL PLATED COPPER\t23\t8\nBrand#x\tSMALL PLATED COPPER\t36\t8\nBrand#x\tSMALL PLATED NICKEL\t9\t8\nBrand#x\tSMALL PLATED STEEL\t3\t8\nBrand#x\tSMALL PLATED STEEL\t19\t8\nBrand#x\tSMALL PLATED TIN\t23\t8\nBrand#x\tSMALL PLATED TIN\t36\t8\nBrand#x\tSMALL PLATED TIN\t45\t8\nBrand#x\tSMALL POLISHED BRASS\t3\t8\nBrand#x\tSMALL POLISHED BRASS\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t9\t8\nBrand#x\tSMALL POLISHED COPPER\t14\t8\nBrand#x\tSMALL POLISHED NICKEL\t49\t8\nBrand#x\tSMALL POLISHED STEEL\t3\t8\nBrand#x\tSMALL POLISHED STEEL\t14\t8\nBrand#x\tSMALL POLISHED STEEL\t45\t8\nBrand#x\tSMALL POLISHED TIN\t19\t8\nBrand#x\tSMALL POLISHED TIN\t45\t8\nBrand#x\tSTANDARD ANODIZED 
BRASS\t19\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t8\nBrand#x\tSTANDARD ANODIZED TIN\t36\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t8\nBrand#x\tSTANDARD BRUSHED TIN\t45\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t8\nBrand#x\tSTANDARD BURNISHED TIN\t14\t8\nBrand#x\tSTANDARD BURNISHED TIN\t23\t8\nBrand#x\tSTANDARD PLATED BRASS\t3\t8\nBrand#x\tSTANDARD PLATED BRASS\t9\t8\nBrand#x\tSTANDARD PLATED COPPER\t9\t8\nBrand#x\tSTANDARD PLATED COPPER\t14\t8\nBrand#x\tSTANDARD PLATED NICKEL\t19\t8\nBrand#x\tSTANDARD PLATED TIN\t9\t8\nBrand#x\tSTANDARD POLISHED COPPER\t3\t8\nBrand#x\tSTANDARD POLISHED COPPER\t23\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t8\nBrand#x\tSTANDARD POLISHED STEEL\t23\t8\nBrand#x\tECONOMY ANODIZED BRASS\t3\t8\nBrand#x\tECONOMY ANODIZED BRASS\t19\t8\nBrand#x\tECONOMY ANODIZED COPPER\t3\t8\nBrand#x\tECONOMY ANODIZED COPPER\t9\t8\nBrand#x\tECONOMY ANODIZED COPPER\t23\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t8\nBrand#x\tECONOMY ANODIZED STEEL\t19\t8\nBrand#x\tECONOMY BRUSHED BRASS\t14\t8\nBrand#x\tECONOMY BRUSHED BRASS\t45\t8\nBrand#x\tECONOMY BRUSHED COPPER\t9\t8\nBrand#x\tECONOMY BRUSHED COPPER\t23\t8\nBrand#x\tECONOMY BRUSHED COPPER\t45\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t8\nBrand#x\tECONOMY BRUSHED STEEL\t19\t8\nBrand#x\tECONOMY BRUSHED TIN\t3\t8\nBrand#x\tECONOMY BURNISHED BRASS\t3\t8\nBrand#x\tECONOMY BURNISHED BRASS\t45\t8\nBrand#x\tECONOMY BURNISHED BRASS\t49\t8\nBrand#x\tECONOMY BURNISHED COPPER\t3\t8\nBrand#x\tECONOMY BURNISHED COPPER\t14\t8\nBrand#x\tECONOMY BURNISHED COPPER\t19\t8\nBrand#x\tECONOMY BURNISHED STEEL\t9\t8\nBrand#x\tECONOMY BURNISHED STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED TIN\t9\t8\nBrand#x\tECONOMY PLATED BRASS\t3\t8\nBrand#x\tECONOMY PLATED BRASS\t9\t8\nBrand#x\tECONOMY PLATED BRASS\t19\t8\nBrand#x\tECONOMY PLATED COPPER\t9\t8\nBrand#x\tECONOMY PLATED COPPER\t14\t8\nBrand#x\tECONOMY PLATED COPPER\t23\t8\nBrand#x\tECONOMY PLATED COPPER\t49\t8\nBrand#x\tECONOMY PLATED NICKEL\t3\t8\nBrand#x\tECONOMY PLATED NICKEL\t36\t8\nBrand#x\tECONOMY PLATED STEEL\t9\t8\nBrand#x\tECONOMY PLATED STEEL\t19\t8\nBrand#x\tECONOMY PLATED STEEL\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t19\t8\nBrand#x\tECONOMY POLISHED NICKEL\t14\t8\nBrand#x\tECONOMY POLISHED STEEL\t9\t8\nBrand#x\tLARGE ANODIZED BRASS\t9\t8\nBrand#x\tLARGE ANODIZED BRASS\t49\t8\nBrand#x\tLARGE ANODIZED COPPER\t3\t8\nBrand#x\tLARGE ANODIZED COPPER\t19\t8\nBrand#x\tLARGE ANODIZED NICKEL\t19\t8\nBrand#x\tLARGE ANODIZED NICKEL\t45\t8\nBrand#x\tLARGE ANODIZED NICKEL\t49\t8\nBrand#x\tLARGE BRUSHED BRASS\t3\t8\nBrand#x\tLARGE BRUSHED BRASS\t14\t8\nBrand#x\tLARGE BRUSHED BRASS\t19\t8\nBrand#x\tLARGE BRUSHED BRASS\t49\t8\nBrand#x\tLARGE BRUSHED COPPER\t19\t8\nBrand#x\tLARGE BRUSHED COPPER\t49\t8\nBrand#x\tLARGE BRUSHED NICKEL\t9\t8\nBrand#x\tLARGE BRUSHED NICKEL\t14\t8\nBrand#x\tLARGE BRUSHED NICKEL\t49\t8\nBrand#x\tLARGE BRUSHED STEEL\t36\t8\nBrand#x\tLARGE BRUSHED 
TIN\t3\t8\nBrand#x\tLARGE BRUSHED TIN\t23\t8\nBrand#x\tLARGE BURNISHED BRASS\t3\t8\nBrand#x\tLARGE BURNISHED BRASS\t9\t8\nBrand#x\tLARGE BURNISHED COPPER\t14\t8\nBrand#x\tLARGE BURNISHED NICKEL\t3\t8\nBrand#x\tLARGE BURNISHED TIN\t9\t8\nBrand#x\tLARGE BURNISHED TIN\t14\t8\nBrand#x\tLARGE BURNISHED TIN\t45\t8\nBrand#x\tLARGE PLATED BRASS\t3\t8\nBrand#x\tLARGE PLATED BRASS\t45\t8\nBrand#x\tLARGE PLATED NICKEL\t3\t8\nBrand#x\tLARGE PLATED STEEL\t3\t8\nBrand#x\tLARGE PLATED STEEL\t14\t8\nBrand#x\tLARGE PLATED TIN\t36\t8\nBrand#x\tLARGE POLISHED BRASS\t23\t8\nBrand#x\tLARGE POLISHED NICKEL\t3\t8\nBrand#x\tLARGE POLISHED NICKEL\t14\t8\nBrand#x\tLARGE POLISHED STEEL\t36\t8\nBrand#x\tLARGE POLISHED TIN\t9\t8\nBrand#x\tLARGE POLISHED TIN\t36\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t8\nBrand#x\tMEDIUM ANODIZED TIN\t9\t8\nBrand#x\tMEDIUM ANODIZED TIN\t23\t8\nBrand#x\tMEDIUM ANODIZED TIN\t36\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t8\nBrand#x\tMEDIUM BURNISHED TIN\t36\t8\nBrand#x\tMEDIUM PLATED BRASS\t3\t8\nBrand#x\tMEDIUM PLATED BRASS\t19\t8\nBrand#x\tMEDIUM PLATED NICKEL\t3\t8\nBrand#x\tMEDIUM PLATED NICKEL\t9\t8\nBrand#x\tMEDIUM PLATED NICKEL\t23\t8\nBrand#x\tMEDIUM PLATED NICKEL\t36\t8\nBrand#x\tMEDIUM PLATED NICKEL\t45\t8\nBrand#x\tMEDIUM PLATED STEEL\t14\t8\nBrand#x\tMEDIUM PLATED TIN\t14\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED BRASS\t9\t8\nBrand#x\tPROMO ANODIZED BRASS\t49\t8\nBrand#x\tPROMO ANODIZED COPPER\t14\t8\nBrand#x\tPROMO ANODIZED COPPER\t19\t8\nBrand#x\tPROMO ANODIZED NICKEL\t45\t8\nBrand#x\tPROMO ANODIZED STEEL\t9\t8\nBrand#x\tPROMO ANODIZED TIN\t45\t8\nBrand#x\tPROMO BRUSHED COPPER\t3\t8\nBrand#x\tPROMO BRUSHED COPPER\t9\t8\nBrand#x\tPROMO BRUSHED COPPER\t45\t8\nBrand#x\tPROMO BRUSHED COPPER\t49\t8\nBrand#x\tPROMO BRUSHED NICKEL\t14\t8\nBrand#x\tPROMO BRUSHED NICKEL\t36\t8\nBrand#x\tPROMO BRUSHED NICKEL\t49\t8\nBrand#x\tPROMO BRUSHED STEEL\t9\t8\nBrand#x\tPROMO BRUSHED STEEL\t49\t8\nBrand#x\tPROMO BRUSHED TIN\t3\t8\nBrand#x\tPROMO BRUSHED TIN\t9\t8\nBrand#x\tPROMO BURNISHED BRASS\t45\t8\nBrand#x\tPROMO BURNISHED BRASS\t49\t8\nBrand#x\tPROMO BURNISHED COPPER\t3\t8\nBrand#x\tPROMO BURNISHED COPPER\t23\t8\nBrand#x\tPROMO BURNISHED NICKEL\t3\t8\nBrand#x\tPROMO BURNISHED NICKEL\t14\t8\nBrand#x\tPROMO BURNISHED NICKEL\t19\t8\nBrand#x\tPROMO BURNISHED STEEL\t3\t8\nBrand#x\tPROMO BURNISHED STEEL\t14\t8\nBrand#x\tPROMO BURNISHED STEEL\t19\t8\nBrand#x\tPROMO BURNISHED STEEL\t36\t8\nBrand#x\tPROMO BURNISHED STEEL\t45\t8\nBrand#x\tPROMO PLATED BRASS\t3\t8\nBrand#x\tPROMO PLATED BRASS\t9\t8\nBrand#x\tPROMO PLATED BRASS\t45\t8\nBrand#x\tPROMO PLATED COPPER\t14\t8\nBrand#x\tPROMO PLATED COPPER\t19\t8\nBrand#x\tPROMO PLATED COPPER\t45\t8\nBrand#x\tPROMO PLATED NICKEL\t45\t8\nBrand#x\tPROMO PLATED STEEL\t9\t8\nBrand#x\tPROMO POLISHED BRASS\t3\t8\nBrand#x\tPROMO POLISHED BRASS\t9\t8\nBrand#x\tPROMO POLISHED BRASS\t14\t8\nBrand#x\tPROMO POLISHED BRASS\t36\t8\nBrand#x\tPROMO POLISHED 
BRASS\t49\t8\nBrand#x\tPROMO POLISHED COPPER\t45\t8\nBrand#x\tPROMO POLISHED NICKEL\t9\t8\nBrand#x\tPROMO POLISHED NICKEL\t49\t8\nBrand#x\tPROMO POLISHED STEEL\t3\t8\nBrand#x\tPROMO POLISHED STEEL\t19\t8\nBrand#x\tPROMO POLISHED TIN\t14\t8\nBrand#x\tPROMO POLISHED TIN\t45\t8\nBrand#x\tPROMO POLISHED TIN\t49\t8\nBrand#x\tSMALL ANODIZED BRASS\t23\t8\nBrand#x\tSMALL ANODIZED COPPER\t3\t8\nBrand#x\tSMALL ANODIZED COPPER\t14\t8\nBrand#x\tSMALL ANODIZED COPPER\t45\t8\nBrand#x\tSMALL ANODIZED COPPER\t49\t8\nBrand#x\tSMALL ANODIZED NICKEL\t3\t8\nBrand#x\tSMALL ANODIZED NICKEL\t45\t8\nBrand#x\tSMALL ANODIZED STEEL\t3\t8\nBrand#x\tSMALL ANODIZED STEEL\t9\t8\nBrand#x\tSMALL ANODIZED TIN\t49\t8\nBrand#x\tSMALL BRUSHED BRASS\t9\t8\nBrand#x\tSMALL BRUSHED BRASS\t23\t8\nBrand#x\tSMALL BRUSHED BRASS\t49\t8\nBrand#x\tSMALL BRUSHED STEEL\t3\t8\nBrand#x\tSMALL BRUSHED TIN\t9\t8\nBrand#x\tSMALL BRUSHED TIN\t19\t8\nBrand#x\tSMALL BURNISHED BRASS\t9\t8\nBrand#x\tSMALL BURNISHED BRASS\t14\t8\nBrand#x\tSMALL BURNISHED BRASS\t23\t8\nBrand#x\tSMALL BURNISHED COPPER\t36\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t14\t8\nBrand#x\tSMALL BURNISHED TIN\t23\t8\nBrand#x\tSMALL BURNISHED TIN\t36\t8\nBrand#x\tSMALL BURNISHED TIN\t45\t8\nBrand#x\tSMALL PLATED BRASS\t9\t8\nBrand#x\tSMALL PLATED BRASS\t49\t8\nBrand#x\tSMALL PLATED NICKEL\t14\t8\nBrand#x\tSMALL PLATED NICKEL\t19\t8\nBrand#x\tSMALL PLATED NICKEL\t36\t8\nBrand#x\tSMALL PLATED STEEL\t14\t8\nBrand#x\tSMALL PLATED STEEL\t23\t8\nBrand#x\tSMALL PLATED TIN\t23\t8\nBrand#x\tSMALL PLATED TIN\t36\t8\nBrand#x\tSMALL PLATED TIN\t49\t8\nBrand#x\tSMALL POLISHED BRASS\t9\t8\nBrand#x\tSMALL POLISHED BRASS\t23\t8\nBrand#x\tSMALL POLISHED BRASS\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t3\t8\nBrand#x\tSMALL POLISHED STEEL\t23\t8\nBrand#x\tSMALL POLISHED STEEL\t49\t8\nBrand#x\tSMALL POLISHED TIN\t19\t8\nBrand#x\tSMALL POLISHED TIN\t23\t8\nBrand#x\tSMALL POLISHED TIN\t45\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t8\nBrand#x\tSTANDARD ANODIZED TIN\t23\t8\nBrand#x\tSTANDARD ANODIZED TIN\t49\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t8\nBrand#x\tSTANDARD BRUSHED TIN\t19\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t8\nBrand#x\tSTANDARD PLATED BRASS\t14\t8\nBrand#x\tSTANDARD PLATED BRASS\t36\t8\nBrand#x\tSTANDARD PLATED BRASS\t45\t8\nBrand#x\tSTANDARD PLATED BRASS\t49\t8\nBrand#x\tSTANDARD PLATED COPPER\t14\t8\nBrand#x\tSTANDARD PLATED COPPER\t19\t8\nBrand#x\tSTANDARD PLATED COPPER\t45\t8\nBrand#x\tSTANDARD PLATED COPPER\t49\t8\nBrand#x\tSTANDARD PLATED NICKEL\t36\t8\nBrand#x\tSTANDARD PLATED STEEL\t3\t8\nBrand#x\tSTANDARD PLATED STEEL\t9\t8\nBrand#x\tSTANDARD PLATED STEEL\t23\t8\nBrand#x\tSTANDARD PLATED STEEL\t49\t8\nBrand#x\tSTANDARD PLATED TIN\t14\t8\nBrand#x\tSTANDARD PLATED TIN\t49\t8\nBrand#x\tSTANDARD POLISHED 
BRASS\t19\t8\nBrand#x\tSTANDARD POLISHED COPPER\t3\t8\nBrand#x\tSTANDARD POLISHED COPPER\t9\t8\nBrand#x\tSTANDARD POLISHED COPPER\t23\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t8\nBrand#x\tSTANDARD POLISHED STEEL\t14\t8\nBrand#x\tSTANDARD POLISHED STEEL\t19\t8\nBrand#x\tSTANDARD POLISHED STEEL\t49\t8\nBrand#x\tECONOMY ANODIZED BRASS\t14\t8\nBrand#x\tECONOMY ANODIZED COPPER\t9\t8\nBrand#x\tECONOMY ANODIZED COPPER\t14\t8\nBrand#x\tECONOMY ANODIZED COPPER\t45\t8\nBrand#x\tECONOMY ANODIZED STEEL\t49\t8\nBrand#x\tECONOMY ANODIZED TIN\t19\t8\nBrand#x\tECONOMY ANODIZED TIN\t23\t8\nBrand#x\tECONOMY ANODIZED TIN\t36\t8\nBrand#x\tECONOMY ANODIZED TIN\t49\t8\nBrand#x\tECONOMY BRUSHED BRASS\t9\t8\nBrand#x\tECONOMY BRUSHED BRASS\t14\t8\nBrand#x\tECONOMY BRUSHED BRASS\t36\t8\nBrand#x\tECONOMY BRUSHED COPPER\t3\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t8\nBrand#x\tECONOMY BRUSHED STEEL\t3\t8\nBrand#x\tECONOMY BRUSHED STEEL\t19\t8\nBrand#x\tECONOMY BRUSHED TIN\t14\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t8\nBrand#x\tECONOMY BURNISHED TIN\t3\t8\nBrand#x\tECONOMY BURNISHED TIN\t9\t8\nBrand#x\tECONOMY BURNISHED TIN\t19\t8\nBrand#x\tECONOMY PLATED BRASS\t9\t8\nBrand#x\tECONOMY PLATED BRASS\t14\t8\nBrand#x\tECONOMY PLATED BRASS\t45\t8\nBrand#x\tECONOMY PLATED COPPER\t49\t8\nBrand#x\tECONOMY PLATED NICKEL\t23\t8\nBrand#x\tECONOMY PLATED NICKEL\t36\t8\nBrand#x\tECONOMY PLATED NICKEL\t45\t8\nBrand#x\tECONOMY PLATED STEEL\t3\t8\nBrand#x\tECONOMY PLATED STEEL\t9\t8\nBrand#x\tECONOMY PLATED TIN\t45\t8\nBrand#x\tECONOMY POLISHED BRASS\t14\t8\nBrand#x\tECONOMY POLISHED BRASS\t19\t8\nBrand#x\tECONOMY POLISHED BRASS\t36\t8\nBrand#x\tECONOMY POLISHED COPPER\t14\t8\nBrand#x\tECONOMY POLISHED COPPER\t19\t8\nBrand#x\tECONOMY POLISHED COPPER\t45\t8\nBrand#x\tECONOMY POLISHED STEEL\t14\t8\nBrand#x\tECONOMY POLISHED STEEL\t23\t8\nBrand#x\tECONOMY POLISHED STEEL\t45\t8\nBrand#x\tLARGE ANODIZED TIN\t36\t8\nBrand#x\tLARGE BRUSHED BRASS\t14\t8\nBrand#x\tLARGE BRUSHED BRASS\t49\t8\nBrand#x\tLARGE BRUSHED STEEL\t19\t8\nBrand#x\tLARGE BRUSHED STEEL\t49\t8\nBrand#x\tLARGE BRUSHED TIN\t9\t8\nBrand#x\tLARGE BURNISHED BRASS\t36\t8\nBrand#x\tLARGE BURNISHED BRASS\t45\t8\nBrand#x\tLARGE BURNISHED COPPER\t3\t8\nBrand#x\tLARGE BURNISHED COPPER\t14\t8\nBrand#x\tLARGE BURNISHED COPPER\t36\t8\nBrand#x\tLARGE BURNISHED NICKEL\t3\t8\nBrand#x\tLARGE BURNISHED STEEL\t19\t8\nBrand#x\tLARGE BURNISHED TIN\t3\t8\nBrand#x\tLARGE PLATED COPPER\t3\t8\nBrand#x\tLARGE PLATED COPPER\t14\t8\nBrand#x\tLARGE PLATED COPPER\t36\t8\nBrand#x\tLARGE PLATED COPPER\t49\t8\nBrand#x\tLARGE PLATED NICKEL\t14\t8\nBrand#x\tLARGE PLATED STEEL\t23\t8\nBrand#x\tLARGE PLATED TIN\t19\t8\nBrand#x\tLARGE POLISHED BRASS\t23\t8\nBrand#x\tLARGE POLISHED COPPER\t9\t8\nBrand#x\tLARGE POLISHED NICKEL\t19\t8\nBrand#x\tLARGE POLISHED STEEL\t3\t8\nBrand#x\tLARGE POLISHED STEEL\t23\t8\nBrand#x\tLARGE POLISHED STEEL\t36\t8\nBrand#x\tLARGE POLISHED TIN\t19\t8\nBrand#x\tLARGE POLISHED TIN\t36\t8\nBrand#x\tLARGE POLISHED TIN\t45\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t8\nBrand#x\tMEDIUM ANODIZED TIN\t9\t8\nBrand#x\tMEDIUM ANODIZED TIN\t14\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t8\nBrand#x\tMEDIUM BRUSHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t8\nBrand#x\tMEDIUM BURNISHED 
NICKEL\t36\t8\nBrand#x\tMEDIUM BURNISHED TIN\t3\t8\nBrand#x\tMEDIUM PLATED BRASS\t19\t8\nBrand#x\tMEDIUM PLATED COPPER\t14\t8\nBrand#x\tMEDIUM PLATED COPPER\t49\t8\nBrand#x\tMEDIUM PLATED STEEL\t14\t8\nBrand#x\tMEDIUM PLATED STEEL\t23\t8\nBrand#x\tMEDIUM PLATED TIN\t14\t8\nBrand#x\tMEDIUM PLATED TIN\t19\t8\nBrand#x\tMEDIUM PLATED TIN\t36\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED COPPER\t14\t8\nBrand#x\tPROMO ANODIZED COPPER\t45\t8\nBrand#x\tPROMO ANODIZED NICKEL\t14\t8\nBrand#x\tPROMO ANODIZED NICKEL\t19\t8\nBrand#x\tPROMO ANODIZED STEEL\t14\t8\nBrand#x\tPROMO ANODIZED STEEL\t23\t8\nBrand#x\tPROMO ANODIZED TIN\t3\t8\nBrand#x\tPROMO ANODIZED TIN\t9\t8\nBrand#x\tPROMO ANODIZED TIN\t14\t8\nBrand#x\tPROMO BRUSHED BRASS\t9\t8\nBrand#x\tPROMO BRUSHED BRASS\t19\t8\nBrand#x\tPROMO BRUSHED BRASS\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t45\t8\nBrand#x\tPROMO BRUSHED COPPER\t14\t8\nBrand#x\tPROMO BRUSHED STEEL\t36\t8\nBrand#x\tPROMO BRUSHED TIN\t3\t8\nBrand#x\tPROMO BRUSHED TIN\t45\t8\nBrand#x\tPROMO BURNISHED BRASS\t14\t8\nBrand#x\tPROMO BURNISHED BRASS\t36\t8\nBrand#x\tPROMO BURNISHED NICKEL\t19\t8\nBrand#x\tPROMO BURNISHED STEEL\t9\t8\nBrand#x\tPROMO BURNISHED STEEL\t45\t8\nBrand#x\tPROMO BURNISHED STEEL\t49\t8\nBrand#x\tPROMO BURNISHED TIN\t14\t8\nBrand#x\tPROMO BURNISHED TIN\t36\t8\nBrand#x\tPROMO PLATED BRASS\t9\t8\nBrand#x\tPROMO PLATED BRASS\t23\t8\nBrand#x\tPROMO PLATED BRASS\t49\t8\nBrand#x\tPROMO PLATED NICKEL\t23\t8\nBrand#x\tPROMO PLATED STEEL\t9\t8\nBrand#x\tPROMO PLATED STEEL\t14\t8\nBrand#x\tPROMO POLISHED BRASS\t23\t8\nBrand#x\tPROMO POLISHED COPPER\t14\t8\nBrand#x\tPROMO POLISHED NICKEL\t19\t8\nBrand#x\tPROMO POLISHED STEEL\t9\t8\nBrand#x\tPROMO POLISHED STEEL\t19\t8\nBrand#x\tPROMO POLISHED STEEL\t49\t8\nBrand#x\tPROMO POLISHED TIN\t9\t8\nBrand#x\tPROMO POLISHED TIN\t45\t8\nBrand#x\tSMALL ANODIZED BRASS\t49\t8\nBrand#x\tSMALL ANODIZED NICKEL\t23\t8\nBrand#x\tSMALL ANODIZED NICKEL\t36\t8\nBrand#x\tSMALL ANODIZED NICKEL\t49\t8\nBrand#x\tSMALL ANODIZED STEEL\t49\t8\nBrand#x\tSMALL ANODIZED TIN\t49\t8\nBrand#x\tSMALL BRUSHED BRASS\t14\t8\nBrand#x\tSMALL BRUSHED NICKEL\t14\t8\nBrand#x\tSMALL BRUSHED NICKEL\t45\t8\nBrand#x\tSMALL BRUSHED STEEL\t9\t8\nBrand#x\tSMALL BRUSHED STEEL\t14\t8\nBrand#x\tSMALL BURNISHED BRASS\t19\t8\nBrand#x\tSMALL BURNISHED BRASS\t45\t8\nBrand#x\tSMALL BURNISHED COPPER\t14\t8\nBrand#x\tSMALL BURNISHED COPPER\t23\t8\nBrand#x\tSMALL BURNISHED NICKEL\t3\t8\nBrand#x\tSMALL BURNISHED NICKEL\t49\t8\nBrand#x\tSMALL BURNISHED TIN\t36\t8\nBrand#x\tSMALL BURNISHED TIN\t49\t8\nBrand#x\tSMALL PLATED BRASS\t23\t8\nBrand#x\tSMALL PLATED COPPER\t19\t8\nBrand#x\tSMALL PLATED COPPER\t23\t8\nBrand#x\tSMALL PLATED COPPER\t49\t8\nBrand#x\tSMALL PLATED NICKEL\t3\t8\nBrand#x\tSMALL PLATED NICKEL\t9\t8\nBrand#x\tSMALL PLATED NICKEL\t23\t8\nBrand#x\tSMALL PLATED STEEL\t9\t8\nBrand#x\tSMALL PLATED STEEL\t45\t8\nBrand#x\tSMALL PLATED TIN\t14\t8\nBrand#x\tSMALL PLATED TIN\t19\t8\nBrand#x\tSMALL POLISHED BRASS\t14\t8\nBrand#x\tSMALL POLISHED COPPER\t9\t8\nBrand#x\tSMALL POLISHED COPPER\t45\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t8\nBrand#x\tSTANDARD ANODIZED TIN\t36\t8\nBrand#x\tSTANDARD BRUSHED 
COPPER\t49\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t8\nBrand#x\tSTANDARD BRUSHED TIN\t49\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t8\nBrand#x\tSTANDARD BURNISHED TIN\t9\t8\nBrand#x\tSTANDARD BURNISHED TIN\t23\t8\nBrand#x\tSTANDARD PLATED NICKEL\t3\t8\nBrand#x\tSTANDARD PLATED NICKEL\t19\t8\nBrand#x\tSTANDARD PLATED NICKEL\t36\t8\nBrand#x\tSTANDARD PLATED STEEL\t23\t8\nBrand#x\tSTANDARD PLATED STEEL\t49\t8\nBrand#x\tSTANDARD PLATED TIN\t14\t8\nBrand#x\tSTANDARD PLATED TIN\t19\t8\nBrand#x\tSTANDARD PLATED TIN\t45\t8\nBrand#x\tSTANDARD POLISHED BRASS\t9\t8\nBrand#x\tSTANDARD POLISHED BRASS\t19\t8\nBrand#x\tSTANDARD POLISHED COPPER\t36\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t8\nBrand#x\tSTANDARD POLISHED STEEL\t3\t8\nBrand#x\tSTANDARD POLISHED STEEL\t9\t8\nBrand#x\tSTANDARD POLISHED STEEL\t23\t8\nBrand#x\tSTANDARD POLISHED TIN\t3\t8\nBrand#x\tECONOMY ANODIZED BRASS\t23\t8\nBrand#x\tECONOMY ANODIZED COPPER\t3\t8\nBrand#x\tECONOMY ANODIZED COPPER\t49\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t8\nBrand#x\tECONOMY ANODIZED STEEL\t36\t8\nBrand#x\tECONOMY ANODIZED TIN\t19\t8\nBrand#x\tECONOMY ANODIZED TIN\t23\t8\nBrand#x\tECONOMY BRUSHED BRASS\t3\t8\nBrand#x\tECONOMY BRUSHED COPPER\t23\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t8\nBrand#x\tECONOMY BRUSHED STEEL\t23\t8\nBrand#x\tECONOMY BRUSHED STEEL\t36\t8\nBrand#x\tECONOMY BRUSHED STEEL\t45\t8\nBrand#x\tECONOMY BRUSHED TIN\t3\t8\nBrand#x\tECONOMY BRUSHED TIN\t9\t8\nBrand#x\tECONOMY BRUSHED TIN\t23\t8\nBrand#x\tECONOMY BRUSHED TIN\t36\t8\nBrand#x\tECONOMY BURNISHED BRASS\t3\t8\nBrand#x\tECONOMY BURNISHED COPPER\t19\t8\nBrand#x\tECONOMY BURNISHED COPPER\t23\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t8\nBrand#x\tECONOMY BURNISHED TIN\t3\t8\nBrand#x\tECONOMY BURNISHED TIN\t23\t8\nBrand#x\tECONOMY BURNISHED TIN\t45\t8\nBrand#x\tECONOMY PLATED BRASS\t45\t8\nBrand#x\tECONOMY PLATED BRASS\t49\t8\nBrand#x\tECONOMY PLATED COPPER\t19\t8\nBrand#x\tECONOMY PLATED NICKEL\t3\t8\nBrand#x\tECONOMY PLATED NICKEL\t23\t8\nBrand#x\tECONOMY PLATED STEEL\t19\t8\nBrand#x\tECONOMY PLATED STEEL\t23\t8\nBrand#x\tECONOMY PLATED TIN\t36\t8\nBrand#x\tECONOMY POLISHED BRASS\t14\t8\nBrand#x\tECONOMY POLISHED BRASS\t36\t8\nBrand#x\tECONOMY POLISHED COPPER\t3\t8\nBrand#x\tECONOMY POLISHED COPPER\t14\t8\nBrand#x\tECONOMY POLISHED NICKEL\t9\t8\nBrand#x\tECONOMY POLISHED TIN\t19\t8\nBrand#x\tLARGE ANODIZED BRASS\t23\t8\nBrand#x\tLARGE ANODIZED BRASS\t36\t8\nBrand#x\tLARGE ANODIZED COPPER\t9\t8\nBrand#x\tLARGE ANODIZED COPPER\t14\t8\nBrand#x\tLARGE ANODIZED STEEL\t9\t8\nBrand#x\tLARGE ANODIZED STEEL\t19\t8\nBrand#x\tLARGE ANODIZED STEEL\t23\t8\nBrand#x\tLARGE ANODIZED TIN\t9\t8\nBrand#x\tLARGE ANODIZED TIN\t14\t8\nBrand#x\tLARGE BRUSHED BRASS\t23\t8\nBrand#x\tLARGE BRUSHED COPPER\t9\t8\nBrand#x\tLARGE BRUSHED COPPER\t19\t8\nBrand#x\tLARGE BRUSHED STEEL\t14\t8\nBrand#x\tLARGE BRUSHED STEEL\t19\t8\nBrand#x\tLARGE BRUSHED TIN\t23\t8\nBrand#x\tLARGE BRUSHED TIN\t45\t8\nBrand#x\tLARGE BURNISHED BRASS\t14\t8\nBrand#x\tLARGE BURNISHED BRASS\t19\t8\nBrand#x\tLARGE BURNISHED BRASS\t36\t8\nBrand#x\tLARGE BURNISHED COPPER\t3\t8\nBrand#x\tLARGE BURNISHED NICKEL\t14\t8\nBrand#x\tLARGE 
BURNISHED NICKEL\t23\t8\nBrand#x\tLARGE BURNISHED TIN\t3\t8\nBrand#x\tLARGE BURNISHED TIN\t9\t8\nBrand#x\tLARGE BURNISHED TIN\t19\t8\nBrand#x\tLARGE BURNISHED TIN\t23\t8\nBrand#x\tLARGE BURNISHED TIN\t36\t8\nBrand#x\tLARGE PLATED BRASS\t9\t8\nBrand#x\tLARGE PLATED BRASS\t45\t8\nBrand#x\tLARGE PLATED COPPER\t23\t8\nBrand#x\tLARGE PLATED NICKEL\t3\t8\nBrand#x\tLARGE PLATED NICKEL\t19\t8\nBrand#x\tLARGE PLATED STEEL\t23\t8\nBrand#x\tLARGE PLATED TIN\t9\t8\nBrand#x\tLARGE PLATED TIN\t45\t8\nBrand#x\tLARGE POLISHED BRASS\t19\t8\nBrand#x\tLARGE POLISHED BRASS\t49\t8\nBrand#x\tLARGE POLISHED STEEL\t14\t8\nBrand#x\tLARGE POLISHED STEEL\t36\t8\nBrand#x\tLARGE POLISHED TIN\t9\t8\nBrand#x\tLARGE POLISHED TIN\t14\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t8\nBrand#x\tMEDIUM ANODIZED TIN\t14\t8\nBrand#x\tMEDIUM ANODIZED TIN\t45\t8\nBrand#x\tMEDIUM ANODIZED TIN\t49\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t8\nBrand#x\tMEDIUM BRUSHED TIN\t14\t8\nBrand#x\tMEDIUM BRUSHED TIN\t19\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t8\nBrand#x\tMEDIUM BURNISHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED TIN\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t23\t8\nBrand#x\tMEDIUM BURNISHED TIN\t36\t8\nBrand#x\tMEDIUM BURNISHED TIN\t45\t8\nBrand#x\tMEDIUM PLATED BRASS\t3\t8\nBrand#x\tMEDIUM PLATED BRASS\t23\t8\nBrand#x\tMEDIUM PLATED BRASS\t36\t8\nBrand#x\tMEDIUM PLATED COPPER\t3\t8\nBrand#x\tMEDIUM PLATED COPPER\t9\t8\nBrand#x\tMEDIUM PLATED COPPER\t19\t8\nBrand#x\tMEDIUM PLATED NICKEL\t49\t8\nBrand#x\tMEDIUM PLATED STEEL\t14\t8\nBrand#x\tMEDIUM PLATED STEEL\t23\t8\nBrand#x\tMEDIUM PLATED STEEL\t36\t8\nBrand#x\tMEDIUM PLATED TIN\t23\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED COPPER\t3\t8\nBrand#x\tPROMO ANODIZED COPPER\t36\t8\nBrand#x\tPROMO ANODIZED NICKEL\t36\t8\nBrand#x\tPROMO ANODIZED NICKEL\t45\t8\nBrand#x\tPROMO ANODIZED NICKEL\t49\t8\nBrand#x\tPROMO ANODIZED STEEL\t45\t8\nBrand#x\tPROMO ANODIZED TIN\t14\t8\nBrand#x\tPROMO BRUSHED BRASS\t14\t8\nBrand#x\tPROMO BRUSHED BRASS\t45\t8\nBrand#x\tPROMO BRUSHED COPPER\t3\t8\nBrand#x\tPROMO BRUSHED COPPER\t14\t8\nBrand#x\tPROMO BRUSHED NICKEL\t9\t8\nBrand#x\tPROMO BRUSHED STEEL\t9\t8\nBrand#x\tPROMO BRUSHED TIN\t19\t8\nBrand#x\tPROMO BRUSHED TIN\t45\t8\nBrand#x\tPROMO BURNISHED BRASS\t3\t8\nBrand#x\tPROMO BURNISHED BRASS\t19\t8\nBrand#x\tPROMO BURNISHED COPPER\t9\t8\nBrand#x\tPROMO BURNISHED COPPER\t14\t8\nBrand#x\tPROMO BURNISHED COPPER\t19\t8\nBrand#x\tPROMO BURNISHED NICKEL\t14\t8\nBrand#x\tPROMO BURNISHED TIN\t3\t8\nBrand#x\tPROMO BURNISHED TIN\t45\t8\nBrand#x\tPROMO PLATED BRASS\t19\t8\nBrand#x\tPROMO PLATED COPPER\t23\t8\nBrand#x\tPROMO PLATED NICKEL\t9\t8\nBrand#x\tPROMO PLATED NICKEL\t23\t8\nBrand#x\tPROMO PLATED NICKEL\t45\t8\nBrand#x\tPROMO PLATED STEEL\t9\t8\nBrand#x\tPROMO PLATED STEEL\t23\t8\nBrand#x\tPROMO PLATED STEEL\t36\t8\nBrand#x\tPROMO PLATED TIN\t3\t8\nBrand#x\tPROMO PLATED TIN\t9\t8\nBrand#x\tPROMO PLATED TIN\t19\t8\nBrand#x\tPROMO PLATED TIN\t36\t8\nBrand#x\tPROMO PLATED TIN\t45\t8\nBrand#x\tPROMO POLISHED BRASS\t3\t8\nBrand#x\tPROMO POLISHED BRASS\t9\t8\nBrand#x\tPROMO POLISHED BRASS\t23\t8\nBrand#x\tPROMO POLISHED NICKEL\t9\t8\nBrand#x\tPROMO POLISHED NICKEL\t23\t8\nBrand#x\tPROMO POLISHED TIN\t3\t8\nBrand#x\tPROMO POLISHED TIN\t23\t8\nBrand#x\tPROMO POLISHED TIN\t45\t8\nBrand#x\tSMALL 
ANODIZED BRASS\t49\t8\nBrand#x\tSMALL ANODIZED NICKEL\t9\t8\nBrand#x\tSMALL ANODIZED NICKEL\t19\t8\nBrand#x\tSMALL ANODIZED STEEL\t19\t8\nBrand#x\tSMALL ANODIZED TIN\t14\t8\nBrand#x\tSMALL ANODIZED TIN\t36\t8\nBrand#x\tSMALL BRUSHED BRASS\t14\t8\nBrand#x\tSMALL BRUSHED COPPER\t49\t8\nBrand#x\tSMALL BRUSHED NICKEL\t3\t8\nBrand#x\tSMALL BRUSHED NICKEL\t9\t8\nBrand#x\tSMALL BRUSHED NICKEL\t49\t8\nBrand#x\tSMALL BRUSHED STEEL\t9\t8\nBrand#x\tSMALL BRUSHED STEEL\t23\t8\nBrand#x\tSMALL BRUSHED STEEL\t36\t8\nBrand#x\tSMALL BRUSHED STEEL\t49\t8\nBrand#x\tSMALL BRUSHED TIN\t19\t8\nBrand#x\tSMALL BRUSHED TIN\t23\t8\nBrand#x\tSMALL BURNISHED COPPER\t49\t8\nBrand#x\tSMALL BURNISHED NICKEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t3\t8\nBrand#x\tSMALL BURNISHED STEEL\t14\t8\nBrand#x\tSMALL BURNISHED STEEL\t23\t8\nBrand#x\tSMALL BURNISHED STEEL\t36\t8\nBrand#x\tSMALL PLATED COPPER\t45\t8\nBrand#x\tSMALL PLATED NICKEL\t9\t8\nBrand#x\tSMALL PLATED NICKEL\t23\t8\nBrand#x\tSMALL PLATED NICKEL\t36\t8\nBrand#x\tSMALL PLATED NICKEL\t45\t8\nBrand#x\tSMALL PLATED STEEL\t3\t8\nBrand#x\tSMALL PLATED STEEL\t14\t8\nBrand#x\tSMALL PLATED TIN\t9\t8\nBrand#x\tSMALL POLISHED BRASS\t9\t8\nBrand#x\tSMALL POLISHED BRASS\t23\t8\nBrand#x\tSMALL POLISHED BRASS\t36\t8\nBrand#x\tSMALL POLISHED COPPER\t3\t8\nBrand#x\tSMALL POLISHED COPPER\t23\t8\nBrand#x\tSMALL POLISHED COPPER\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t49\t8\nBrand#x\tSMALL POLISHED NICKEL\t14\t8\nBrand#x\tSMALL POLISHED NICKEL\t19\t8\nBrand#x\tSMALL POLISHED STEEL\t23\t8\nBrand#x\tSMALL POLISHED STEEL\t49\t8\nBrand#x\tSMALL POLISHED TIN\t9\t8\nBrand#x\tSMALL POLISHED TIN\t23\t8\nBrand#x\tSMALL POLISHED TIN\t45\t8\nBrand#x\tSMALL POLISHED TIN\t49\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t8\nBrand#x\tSTANDARD ANODIZED TIN\t23\t8\nBrand#x\tSTANDARD ANODIZED TIN\t45\t8\nBrand#x\tSTANDARD ANODIZED TIN\t49\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t8\nBrand#x\tSTANDARD BRUSHED TIN\t3\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t8\nBrand#x\tSTANDARD BURNISHED TIN\t36\t8\nBrand#x\tSTANDARD PLATED BRASS\t23\t8\nBrand#x\tSTANDARD PLATED COPPER\t3\t8\nBrand#x\tSTANDARD PLATED COPPER\t19\t8\nBrand#x\tSTANDARD PLATED COPPER\t36\t8\nBrand#x\tSTANDARD PLATED NICKEL\t14\t8\nBrand#x\tSTANDARD PLATED TIN\t19\t8\nBrand#x\tSTANDARD PLATED TIN\t23\t8\nBrand#x\tSTANDARD PLATED TIN\t49\t8\nBrand#x\tSTANDARD POLISHED BRASS\t19\t8\nBrand#x\tSTANDARD POLISHED BRASS\t36\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t8\nBrand#x\tSTANDARD POLISHED STEEL\t14\t8\nBrand#x\tSTANDARD POLISHED STEEL\t45\t8\nBrand#x\tSTANDARD POLISHED STEEL\t49\t8\nBrand#x\tSTANDARD POLISHED TIN\t45\t8\nBrand#x\tECONOMY ANODIZED BRASS\t14\t8\nBrand#x\tECONOMY ANODIZED BRASS\t19\t8\nBrand#x\tECONOMY ANODIZED COPPER\t23\t8\nBrand#x\tECONOMY ANODIZED 
NICKEL\t19\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t8\nBrand#x\tECONOMY ANODIZED STEEL\t45\t8\nBrand#x\tECONOMY BRUSHED BRASS\t3\t8\nBrand#x\tECONOMY BRUSHED BRASS\t14\t8\nBrand#x\tECONOMY BRUSHED BRASS\t36\t8\nBrand#x\tECONOMY BRUSHED COPPER\t3\t8\nBrand#x\tECONOMY BRUSHED COPPER\t14\t8\nBrand#x\tECONOMY BRUSHED COPPER\t19\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t8\nBrand#x\tECONOMY BRUSHED STEEL\t3\t8\nBrand#x\tECONOMY BRUSHED STEEL\t45\t8\nBrand#x\tECONOMY BRUSHED TIN\t14\t8\nBrand#x\tECONOMY BRUSHED TIN\t36\t8\nBrand#x\tECONOMY BURNISHED BRASS\t3\t8\nBrand#x\tECONOMY BURNISHED BRASS\t45\t8\nBrand#x\tECONOMY BURNISHED COPPER\t9\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t8\nBrand#x\tECONOMY BURNISHED STEEL\t23\t8\nBrand#x\tECONOMY BURNISHED TIN\t3\t8\nBrand#x\tECONOMY PLATED BRASS\t49\t8\nBrand#x\tECONOMY PLATED COPPER\t14\t8\nBrand#x\tECONOMY PLATED NICKEL\t14\t8\nBrand#x\tECONOMY PLATED NICKEL\t45\t8\nBrand#x\tECONOMY PLATED STEEL\t9\t8\nBrand#x\tECONOMY PLATED STEEL\t23\t8\nBrand#x\tECONOMY PLATED STEEL\t45\t8\nBrand#x\tECONOMY PLATED TIN\t19\t8\nBrand#x\tECONOMY PLATED TIN\t49\t8\nBrand#x\tECONOMY POLISHED BRASS\t14\t8\nBrand#x\tECONOMY POLISHED BRASS\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t49\t8\nBrand#x\tECONOMY POLISHED COPPER\t14\t8\nBrand#x\tECONOMY POLISHED NICKEL\t49\t8\nBrand#x\tECONOMY POLISHED TIN\t45\t8\nBrand#x\tECONOMY POLISHED TIN\t49\t8\nBrand#x\tLARGE ANODIZED BRASS\t3\t8\nBrand#x\tLARGE ANODIZED BRASS\t45\t8\nBrand#x\tLARGE ANODIZED COPPER\t14\t8\nBrand#x\tLARGE ANODIZED NICKEL\t3\t8\nBrand#x\tLARGE ANODIZED STEEL\t14\t8\nBrand#x\tLARGE ANODIZED STEEL\t36\t8\nBrand#x\tLARGE ANODIZED TIN\t45\t8\nBrand#x\tLARGE BRUSHED BRASS\t23\t8\nBrand#x\tLARGE BRUSHED COPPER\t49\t8\nBrand#x\tLARGE BRUSHED TIN\t14\t8\nBrand#x\tLARGE BRUSHED TIN\t19\t8\nBrand#x\tLARGE BRUSHED TIN\t49\t8\nBrand#x\tLARGE BURNISHED BRASS\t19\t8\nBrand#x\tLARGE BURNISHED COPPER\t14\t8\nBrand#x\tLARGE BURNISHED COPPER\t49\t8\nBrand#x\tLARGE BURNISHED NICKEL\t14\t8\nBrand#x\tLARGE BURNISHED STEEL\t3\t8\nBrand#x\tLARGE BURNISHED STEEL\t14\t8\nBrand#x\tLARGE BURNISHED STEEL\t45\t8\nBrand#x\tLARGE BURNISHED STEEL\t49\t8\nBrand#x\tLARGE BURNISHED TIN\t3\t8\nBrand#x\tLARGE BURNISHED TIN\t9\t8\nBrand#x\tLARGE BURNISHED TIN\t36\t8\nBrand#x\tLARGE PLATED BRASS\t3\t8\nBrand#x\tLARGE PLATED BRASS\t14\t8\nBrand#x\tLARGE PLATED BRASS\t19\t8\nBrand#x\tLARGE PLATED BRASS\t45\t8\nBrand#x\tLARGE PLATED COPPER\t14\t8\nBrand#x\tLARGE PLATED COPPER\t23\t8\nBrand#x\tLARGE PLATED NICKEL\t3\t8\nBrand#x\tLARGE PLATED NICKEL\t9\t8\nBrand#x\tLARGE PLATED NICKEL\t36\t8\nBrand#x\tLARGE PLATED STEEL\t3\t8\nBrand#x\tLARGE PLATED STEEL\t23\t8\nBrand#x\tLARGE PLATED STEEL\t36\t8\nBrand#x\tLARGE PLATED STEEL\t49\t8\nBrand#x\tLARGE PLATED TIN\t3\t8\nBrand#x\tLARGE POLISHED BRASS\t19\t8\nBrand#x\tLARGE POLISHED COPPER\t3\t8\nBrand#x\tLARGE POLISHED COPPER\t19\t8\nBrand#x\tLARGE POLISHED COPPER\t49\t8\nBrand#x\tLARGE POLISHED NICKEL\t23\t8\nBrand#x\tLARGE POLISHED STEEL\t14\t8\nBrand#x\tLARGE POLISHED TIN\t9\t8\nBrand#x\tLARGE POLISHED TIN\t14\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t8\nBrand#x\tMEDIUM ANODIZED TIN\t9\t8\nBrand#x\tMEDIUM ANODIZED 
TIN\t19\t8\nBrand#x\tMEDIUM ANODIZED TIN\t45\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t8\nBrand#x\tMEDIUM BRUSHED TIN\t9\t8\nBrand#x\tMEDIUM BRUSHED TIN\t23\t8\nBrand#x\tMEDIUM BRUSHED TIN\t49\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t8\nBrand#x\tMEDIUM PLATED BRASS\t45\t8\nBrand#x\tMEDIUM PLATED COPPER\t9\t8\nBrand#x\tMEDIUM PLATED COPPER\t49\t8\nBrand#x\tMEDIUM PLATED NICKEL\t19\t8\nBrand#x\tMEDIUM PLATED NICKEL\t45\t8\nBrand#x\tMEDIUM PLATED STEEL\t9\t8\nBrand#x\tMEDIUM PLATED STEEL\t23\t8\nBrand#x\tPROMO ANODIZED COPPER\t14\t8\nBrand#x\tPROMO ANODIZED NICKEL\t3\t8\nBrand#x\tPROMO ANODIZED NICKEL\t19\t8\nBrand#x\tPROMO ANODIZED STEEL\t9\t8\nBrand#x\tPROMO ANODIZED TIN\t36\t8\nBrand#x\tPROMO BRUSHED BRASS\t9\t8\nBrand#x\tPROMO BRUSHED BRASS\t14\t8\nBrand#x\tPROMO BRUSHED BRASS\t19\t8\nBrand#x\tPROMO BRUSHED BRASS\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t36\t8\nBrand#x\tPROMO BRUSHED COPPER\t36\t8\nBrand#x\tPROMO BRUSHED STEEL\t9\t8\nBrand#x\tPROMO BRUSHED STEEL\t36\t8\nBrand#x\tPROMO BRUSHED STEEL\t49\t8\nBrand#x\tPROMO BRUSHED TIN\t9\t8\nBrand#x\tPROMO BRUSHED TIN\t49\t8\nBrand#x\tPROMO BURNISHED BRASS\t9\t8\nBrand#x\tPROMO BURNISHED BRASS\t14\t8\nBrand#x\tPROMO BURNISHED COPPER\t36\t8\nBrand#x\tPROMO BURNISHED COPPER\t45\t8\nBrand#x\tPROMO BURNISHED NICKEL\t36\t8\nBrand#x\tPROMO BURNISHED STEEL\t14\t8\nBrand#x\tPROMO BURNISHED STEEL\t36\t8\nBrand#x\tPROMO BURNISHED TIN\t3\t8\nBrand#x\tPROMO BURNISHED TIN\t23\t8\nBrand#x\tPROMO PLATED BRASS\t14\t8\nBrand#x\tPROMO PLATED BRASS\t36\t8\nBrand#x\tPROMO PLATED COPPER\t14\t8\nBrand#x\tPROMO PLATED COPPER\t23\t8\nBrand#x\tPROMO PLATED NICKEL\t49\t8\nBrand#x\tPROMO PLATED STEEL\t3\t8\nBrand#x\tPROMO PLATED STEEL\t14\t8\nBrand#x\tPROMO PLATED TIN\t45\t8\nBrand#x\tPROMO POLISHED BRASS\t9\t8\nBrand#x\tPROMO POLISHED COPPER\t3\t8\nBrand#x\tPROMO POLISHED COPPER\t19\t8\nBrand#x\tPROMO POLISHED COPPER\t49\t8\nBrand#x\tPROMO POLISHED NICKEL\t3\t8\nBrand#x\tPROMO POLISHED STEEL\t49\t8\nBrand#x\tPROMO POLISHED TIN\t14\t8\nBrand#x\tPROMO POLISHED TIN\t45\t8\nBrand#x\tSMALL ANODIZED BRASS\t14\t8\nBrand#x\tSMALL ANODIZED BRASS\t36\t8\nBrand#x\tSMALL ANODIZED COPPER\t49\t8\nBrand#x\tSMALL ANODIZED NICKEL\t14\t8\nBrand#x\tSMALL ANODIZED NICKEL\t19\t8\nBrand#x\tSMALL ANODIZED TIN\t3\t8\nBrand#x\tSMALL ANODIZED TIN\t9\t8\nBrand#x\tSMALL ANODIZED TIN\t23\t8\nBrand#x\tSMALL BRUSHED BRASS\t9\t8\nBrand#x\tSMALL BRUSHED BRASS\t23\t8\nBrand#x\tSMALL BRUSHED COPPER\t45\t8\nBrand#x\tSMALL BRUSHED COPPER\t49\t8\nBrand#x\tSMALL BRUSHED NICKEL\t14\t8\nBrand#x\tSMALL BRUSHED NICKEL\t36\t8\nBrand#x\tSMALL BRUSHED STEEL\t19\t8\nBrand#x\tSMALL BRUSHED TIN\t3\t8\nBrand#x\tSMALL BRUSHED TIN\t19\t8\nBrand#x\tSMALL BURNISHED BRASS\t14\t8\nBrand#x\tSMALL BURNISHED BRASS\t19\t8\nBrand#x\tSMALL BURNISHED COPPER\t9\t8\nBrand#x\tSMALL BURNISHED COPPER\t19\t8\nBrand#x\tSMALL BURNISHED NICKEL\t3\t8\nBrand#x\tSMALL BURNISHED NICKEL\t19\t8\nBrand#x\tSMALL BURNISHED NICKEL\t45\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t23\t8\nBrand#x\tSMALL BURNISHED STEEL\t45\t8\nBrand#x\tSMALL BURNISHED STEEL\t49\t8\nBrand#x\tSMALL BURNISHED TIN\t14\t8\nBrand#x\tSMALL PLATED BRASS\t3\t8\nBrand#x\tSMALL PLATED COPPER\t9\t8\nBrand#x\tSMALL 
PLATED COPPER\t14\t8\nBrand#x\tSMALL PLATED NICKEL\t3\t8\nBrand#x\tSMALL PLATED NICKEL\t36\t8\nBrand#x\tSMALL PLATED STEEL\t9\t8\nBrand#x\tSMALL PLATED STEEL\t36\t8\nBrand#x\tSMALL PLATED TIN\t19\t8\nBrand#x\tSMALL PLATED TIN\t49\t8\nBrand#x\tSMALL POLISHED BRASS\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t3\t8\nBrand#x\tSMALL POLISHED COPPER\t14\t8\nBrand#x\tSMALL POLISHED COPPER\t23\t8\nBrand#x\tSMALL POLISHED NICKEL\t3\t8\nBrand#x\tSMALL POLISHED STEEL\t49\t8\nBrand#x\tSMALL POLISHED TIN\t9\t8\nBrand#x\tSMALL POLISHED TIN\t45\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t8\nBrand#x\tSTANDARD ANODIZED TIN\t19\t8\nBrand#x\tSTANDARD ANODIZED TIN\t23\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t8\nBrand#x\tSTANDARD BRUSHED TIN\t9\t8\nBrand#x\tSTANDARD BRUSHED TIN\t19\t8\nBrand#x\tSTANDARD BRUSHED TIN\t45\t8\nBrand#x\tSTANDARD BRUSHED TIN\t49\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t8\nBrand#x\tSTANDARD BURNISHED TIN\t19\t8\nBrand#x\tSTANDARD BURNISHED TIN\t45\t8\nBrand#x\tSTANDARD PLATED BRASS\t19\t8\nBrand#x\tSTANDARD PLATED NICKEL\t14\t8\nBrand#x\tSTANDARD PLATED NICKEL\t19\t8\nBrand#x\tSTANDARD PLATED NICKEL\t49\t8\nBrand#x\tSTANDARD PLATED STEEL\t3\t8\nBrand#x\tSTANDARD PLATED STEEL\t19\t8\nBrand#x\tSTANDARD PLATED STEEL\t49\t8\nBrand#x\tSTANDARD PLATED TIN\t45\t8\nBrand#x\tSTANDARD PLATED TIN\t49\t8\nBrand#x\tSTANDARD POLISHED BRASS\t14\t8\nBrand#x\tSTANDARD POLISHED BRASS\t36\t8\nBrand#x\tSTANDARD POLISHED COPPER\t14\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t8\nBrand#x\tSTANDARD POLISHED STEEL\t3\t8\nBrand#x\tSTANDARD POLISHED STEEL\t36\t8\nBrand#x\tSTANDARD POLISHED TIN\t19\t8\nBrand#x\tSTANDARD POLISHED TIN\t45\t8\nBrand#x\tECONOMY ANODIZED BRASS\t9\t8\nBrand#x\tECONOMY ANODIZED BRASS\t19\t8\nBrand#x\tECONOMY ANODIZED BRASS\t23\t8\nBrand#x\tECONOMY ANODIZED COPPER\t23\t8\nBrand#x\tECONOMY ANODIZED COPPER\t49\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t8\nBrand#x\tECONOMY ANODIZED STEEL\t49\t8\nBrand#x\tECONOMY BRUSHED COPPER\t3\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t8\nBrand#x\tECONOMY BRUSHED STEEL\t23\t8\nBrand#x\tECONOMY BRUSHED STEEL\t49\t8\nBrand#x\tECONOMY BRUSHED TIN\t9\t8\nBrand#x\tECONOMY BRUSHED TIN\t19\t8\nBrand#x\tECONOMY BRUSHED TIN\t49\t8\nBrand#x\tECONOMY BURNISHED COPPER\t3\t8\nBrand#x\tECONOMY BURNISHED COPPER\t49\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t8\nBrand#x\tECONOMY BURNISHED TIN\t14\t8\nBrand#x\tECONOMY BURNISHED TIN\t45\t8\nBrand#x\tECONOMY PLATED BRASS\t9\t8\nBrand#x\tECONOMY PLATED COPPER\t23\t8\nBrand#x\tECONOMY PLATED COPPER\t36\t8\nBrand#x\tECONOMY PLATED NICKEL\t19\t8\nBrand#x\tECONOMY PLATED NICKEL\t49\t8\nBrand#x\tECONOMY PLATED STEEL\t49\t8\nBrand#x\tECONOMY PLATED TIN\t3\t8\nBrand#x\tECONOMY POLISHED BRASS\t9\t8\nBrand#x\tECONOMY POLISHED NICKEL\t49\t8\nBrand#x\tECONOMY POLISHED STEEL\t9\t8\nBrand#x\tECONOMY POLISHED STEEL\t36\t8\nBrand#x\tECONOMY POLISHED TIN\t36\t8\nBrand#x\tLARGE ANODIZED BRASS\t3\t8\nBrand#x\tLARGE ANODIZED 
BRASS\t23\t8\nBrand#x\tLARGE ANODIZED COPPER\t3\t8\nBrand#x\tLARGE ANODIZED COPPER\t14\t8\nBrand#x\tLARGE ANODIZED COPPER\t49\t8\nBrand#x\tLARGE ANODIZED NICKEL\t9\t8\nBrand#x\tLARGE ANODIZED NICKEL\t45\t8\nBrand#x\tLARGE ANODIZED NICKEL\t49\t8\nBrand#x\tLARGE ANODIZED STEEL\t3\t8\nBrand#x\tLARGE ANODIZED STEEL\t9\t8\nBrand#x\tLARGE ANODIZED TIN\t14\t8\nBrand#x\tLARGE ANODIZED TIN\t45\t8\nBrand#x\tLARGE BRUSHED BRASS\t49\t8\nBrand#x\tLARGE BRUSHED COPPER\t9\t8\nBrand#x\tLARGE BRUSHED NICKEL\t19\t8\nBrand#x\tLARGE BRUSHED NICKEL\t36\t8\nBrand#x\tLARGE BRUSHED NICKEL\t49\t8\nBrand#x\tLARGE BRUSHED TIN\t23\t8\nBrand#x\tLARGE BRUSHED TIN\t49\t8\nBrand#x\tLARGE BURNISHED BRASS\t3\t8\nBrand#x\tLARGE BURNISHED BRASS\t49\t8\nBrand#x\tLARGE BURNISHED TIN\t45\t8\nBrand#x\tLARGE PLATED COPPER\t9\t8\nBrand#x\tLARGE PLATED COPPER\t45\t8\nBrand#x\tLARGE PLATED NICKEL\t45\t8\nBrand#x\tLARGE PLATED TIN\t3\t8\nBrand#x\tLARGE PLATED TIN\t45\t8\nBrand#x\tLARGE POLISHED COPPER\t49\t8\nBrand#x\tLARGE POLISHED NICKEL\t23\t8\nBrand#x\tLARGE POLISHED NICKEL\t36\t8\nBrand#x\tLARGE POLISHED STEEL\t3\t8\nBrand#x\tLARGE POLISHED TIN\t3\t8\nBrand#x\tLARGE POLISHED TIN\t19\t8\nBrand#x\tLARGE POLISHED TIN\t45\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t8\nBrand#x\tMEDIUM ANODIZED TIN\t14\t8\nBrand#x\tMEDIUM ANODIZED TIN\t36\t8\nBrand#x\tMEDIUM ANODIZED TIN\t45\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t8\nBrand#x\tMEDIUM BURNISHED TIN\t3\t8\nBrand#x\tMEDIUM PLATED BRASS\t49\t8\nBrand#x\tMEDIUM PLATED COPPER\t3\t8\nBrand#x\tMEDIUM PLATED COPPER\t49\t8\nBrand#x\tMEDIUM PLATED NICKEL\t9\t8\nBrand#x\tMEDIUM PLATED STEEL\t9\t8\nBrand#x\tMEDIUM PLATED STEEL\t14\t8\nBrand#x\tMEDIUM PLATED STEEL\t36\t8\nBrand#x\tMEDIUM PLATED TIN\t9\t8\nBrand#x\tMEDIUM PLATED TIN\t14\t8\nBrand#x\tPROMO ANODIZED BRASS\t9\t8\nBrand#x\tPROMO ANODIZED BRASS\t36\t8\nBrand#x\tPROMO ANODIZED BRASS\t45\t8\nBrand#x\tPROMO ANODIZED COPPER\t3\t8\nBrand#x\tPROMO ANODIZED COPPER\t23\t8\nBrand#x\tPROMO ANODIZED COPPER\t45\t8\nBrand#x\tPROMO ANODIZED NICKEL\t9\t8\nBrand#x\tPROMO ANODIZED TIN\t3\t8\nBrand#x\tPROMO BRUSHED COPPER\t14\t8\nBrand#x\tPROMO BRUSHED STEEL\t19\t8\nBrand#x\tPROMO BRUSHED STEEL\t23\t8\nBrand#x\tPROMO BRUSHED STEEL\t45\t8\nBrand#x\tPROMO BURNISHED BRASS\t14\t8\nBrand#x\tPROMO BURNISHED BRASS\t45\t8\nBrand#x\tPROMO BURNISHED BRASS\t49\t8\nBrand#x\tPROMO BURNISHED COPPER\t45\t8\nBrand#x\tPROMO BURNISHED NICKEL\t36\t8\nBrand#x\tPROMO PLATED NICKEL\t23\t8\nBrand#x\tPROMO PLATED STEEL\t45\t8\nBrand#x\tPROMO PLATED TIN\t9\t8\nBrand#x\tPROMO PLATED TIN\t19\t8\nBrand#x\tPROMO PLATED TIN\t23\t8\nBrand#x\tPROMO PLATED TIN\t36\t8\nBrand#x\tPROMO PLATED TIN\t45\t8\nBrand#x\tPROMO POLISHED BRASS\t19\t8\nBrand#x\tPROMO POLISHED BRASS\t23\t8\nBrand#x\tPROMO POLISHED BRASS\t45\t8\nBrand#x\tPROMO POLISHED COPPER\t36\t8\nBrand#x\tPROMO POLISHED NICKEL\t3\t8\nBrand#x\tPROMO POLISHED NICKEL\t9\t8\nBrand#x\tPROMO POLISHED 
STEEL\t9\t8\nBrand#x\tPROMO POLISHED STEEL\t23\t8\nBrand#x\tPROMO POLISHED TIN\t3\t8\nBrand#x\tPROMO POLISHED TIN\t9\t8\nBrand#x\tSMALL ANODIZED BRASS\t19\t8\nBrand#x\tSMALL ANODIZED COPPER\t14\t8\nBrand#x\tSMALL ANODIZED COPPER\t19\t8\nBrand#x\tSMALL ANODIZED COPPER\t36\t8\nBrand#x\tSMALL ANODIZED NICKEL\t14\t8\nBrand#x\tSMALL ANODIZED NICKEL\t23\t8\nBrand#x\tSMALL ANODIZED NICKEL\t45\t8\nBrand#x\tSMALL ANODIZED STEEL\t3\t8\nBrand#x\tSMALL ANODIZED STEEL\t9\t8\nBrand#x\tSMALL ANODIZED STEEL\t36\t8\nBrand#x\tSMALL ANODIZED TIN\t3\t8\nBrand#x\tSMALL ANODIZED TIN\t19\t8\nBrand#x\tSMALL BRUSHED COPPER\t9\t8\nBrand#x\tSMALL BRUSHED COPPER\t36\t8\nBrand#x\tSMALL BRUSHED NICKEL\t23\t8\nBrand#x\tSMALL BRUSHED STEEL\t3\t8\nBrand#x\tSMALL BRUSHED STEEL\t9\t8\nBrand#x\tSMALL BRUSHED STEEL\t14\t8\nBrand#x\tSMALL BRUSHED STEEL\t36\t8\nBrand#x\tSMALL BRUSHED STEEL\t45\t8\nBrand#x\tSMALL BRUSHED TIN\t9\t8\nBrand#x\tSMALL BRUSHED TIN\t14\t8\nBrand#x\tSMALL BRUSHED TIN\t45\t8\nBrand#x\tSMALL BRUSHED TIN\t49\t8\nBrand#x\tSMALL BURNISHED BRASS\t23\t8\nBrand#x\tSMALL BURNISHED NICKEL\t19\t8\nBrand#x\tSMALL BURNISHED STEEL\t14\t8\nBrand#x\tSMALL PLATED BRASS\t19\t8\nBrand#x\tSMALL PLATED COPPER\t36\t8\nBrand#x\tSMALL PLATED STEEL\t3\t8\nBrand#x\tSMALL PLATED STEEL\t23\t8\nBrand#x\tSMALL PLATED STEEL\t36\t8\nBrand#x\tSMALL PLATED TIN\t14\t8\nBrand#x\tSMALL PLATED TIN\t19\t8\nBrand#x\tSMALL PLATED TIN\t36\t8\nBrand#x\tSMALL POLISHED BRASS\t23\t8\nBrand#x\tSMALL POLISHED BRASS\t45\t8\nBrand#x\tSMALL POLISHED COPPER\t23\t8\nBrand#x\tSMALL POLISHED COPPER\t45\t8\nBrand#x\tSMALL POLISHED NICKEL\t14\t8\nBrand#x\tSMALL POLISHED NICKEL\t19\t8\nBrand#x\tSMALL POLISHED NICKEL\t45\t8\nBrand#x\tSMALL POLISHED STEEL\t49\t8\nBrand#x\tSMALL POLISHED TIN\t14\t8\nBrand#x\tSMALL POLISHED TIN\t36\t8\nBrand#x\tSMALL POLISHED TIN\t49\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t8\nBrand#x\tSTANDARD ANODIZED TIN\t3\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t8\nBrand#x\tSTANDARD BRUSHED TIN\t49\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t8\nBrand#x\tSTANDARD BURNISHED TIN\t9\t8\nBrand#x\tSTANDARD PLATED COPPER\t49\t8\nBrand#x\tSTANDARD PLATED NICKEL\t14\t8\nBrand#x\tSTANDARD PLATED NICKEL\t45\t8\nBrand#x\tSTANDARD PLATED STEEL\t14\t8\nBrand#x\tSTANDARD PLATED STEEL\t19\t8\nBrand#x\tSTANDARD PLATED STEEL\t36\t8\nBrand#x\tSTANDARD PLATED STEEL\t45\t8\nBrand#x\tSTANDARD PLATED TIN\t9\t8\nBrand#x\tSTANDARD PLATED TIN\t14\t8\nBrand#x\tSTANDARD POLISHED BRASS\t19\t8\nBrand#x\tSTANDARD POLISHED BRASS\t36\t8\nBrand#x\tSTANDARD POLISHED COPPER\t14\t8\nBrand#x\tSTANDARD POLISHED COPPER\t19\t8\nBrand#x\tSTANDARD POLISHED COPPER\t49\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t8\nBrand#x\tSTANDARD POLISHED STEEL\t23\t8\nBrand#x\tSTANDARD POLISHED TIN\t14\t8\nBrand#x\tSTANDARD POLISHED TIN\t23\t8\nBrand#x\tSTANDARD POLISHED TIN\t36\t8\nBrand#x\tECONOMY ANODIZED 
BRASS\t3\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t8\nBrand#x\tECONOMY ANODIZED STEEL\t23\t8\nBrand#x\tECONOMY ANODIZED STEEL\t36\t8\nBrand#x\tECONOMY ANODIZED TIN\t49\t8\nBrand#x\tECONOMY BRUSHED COPPER\t45\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t8\nBrand#x\tECONOMY BRUSHED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED TIN\t45\t8\nBrand#x\tECONOMY BURNISHED BRASS\t19\t8\nBrand#x\tECONOMY BURNISHED COPPER\t14\t8\nBrand#x\tECONOMY BURNISHED COPPER\t36\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t8\nBrand#x\tECONOMY BURNISHED STEEL\t3\t8\nBrand#x\tECONOMY BURNISHED STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED TIN\t3\t8\nBrand#x\tECONOMY BURNISHED TIN\t49\t8\nBrand#x\tECONOMY PLATED COPPER\t19\t8\nBrand#x\tECONOMY PLATED NICKEL\t9\t8\nBrand#x\tECONOMY PLATED STEEL\t19\t8\nBrand#x\tECONOMY PLATED TIN\t9\t8\nBrand#x\tECONOMY PLATED TIN\t19\t8\nBrand#x\tECONOMY POLISHED BRASS\t19\t8\nBrand#x\tECONOMY POLISHED COPPER\t19\t8\nBrand#x\tECONOMY POLISHED COPPER\t36\t8\nBrand#x\tECONOMY POLISHED NICKEL\t19\t8\nBrand#x\tECONOMY POLISHED NICKEL\t36\t8\nBrand#x\tECONOMY POLISHED STEEL\t3\t8\nBrand#x\tECONOMY POLISHED TIN\t9\t8\nBrand#x\tECONOMY POLISHED TIN\t36\t8\nBrand#x\tECONOMY POLISHED TIN\t45\t8\nBrand#x\tLARGE ANODIZED BRASS\t14\t8\nBrand#x\tLARGE ANODIZED BRASS\t36\t8\nBrand#x\tLARGE ANODIZED COPPER\t19\t8\nBrand#x\tLARGE ANODIZED NICKEL\t3\t8\nBrand#x\tLARGE ANODIZED NICKEL\t23\t8\nBrand#x\tLARGE ANODIZED NICKEL\t36\t8\nBrand#x\tLARGE ANODIZED STEEL\t23\t8\nBrand#x\tLARGE ANODIZED STEEL\t49\t8\nBrand#x\tLARGE ANODIZED TIN\t19\t8\nBrand#x\tLARGE BRUSHED BRASS\t23\t8\nBrand#x\tLARGE BRUSHED COPPER\t19\t8\nBrand#x\tLARGE BRUSHED COPPER\t36\t8\nBrand#x\tLARGE BRUSHED NICKEL\t14\t8\nBrand#x\tLARGE BRUSHED NICKEL\t19\t8\nBrand#x\tLARGE BRUSHED NICKEL\t36\t8\nBrand#x\tLARGE BRUSHED NICKEL\t49\t8\nBrand#x\tLARGE BRUSHED STEEL\t3\t8\nBrand#x\tLARGE BRUSHED TIN\t23\t8\nBrand#x\tLARGE BURNISHED BRASS\t9\t8\nBrand#x\tLARGE BURNISHED BRASS\t14\t8\nBrand#x\tLARGE BURNISHED BRASS\t49\t8\nBrand#x\tLARGE BURNISHED COPPER\t3\t8\nBrand#x\tLARGE BURNISHED NICKEL\t36\t8\nBrand#x\tLARGE BURNISHED TIN\t23\t8\nBrand#x\tLARGE PLATED BRASS\t9\t8\nBrand#x\tLARGE PLATED BRASS\t45\t8\nBrand#x\tLARGE PLATED COPPER\t36\t8\nBrand#x\tLARGE PLATED NICKEL\t3\t8\nBrand#x\tLARGE PLATED NICKEL\t14\t8\nBrand#x\tLARGE PLATED NICKEL\t49\t8\nBrand#x\tLARGE PLATED STEEL\t3\t8\nBrand#x\tLARGE PLATED STEEL\t14\t8\nBrand#x\tLARGE PLATED STEEL\t49\t8\nBrand#x\tLARGE PLATED TIN\t23\t8\nBrand#x\tLARGE PLATED TIN\t36\t8\nBrand#x\tLARGE PLATED TIN\t45\t8\nBrand#x\tLARGE POLISHED BRASS\t36\t8\nBrand#x\tLARGE POLISHED COPPER\t3\t8\nBrand#x\tLARGE POLISHED COPPER\t14\t8\nBrand#x\tLARGE POLISHED COPPER\t36\t8\nBrand#x\tLARGE POLISHED NICKEL\t3\t8\nBrand#x\tLARGE POLISHED STEEL\t9\t8\nBrand#x\tLARGE POLISHED STEEL\t14\t8\nBrand#x\tLARGE POLISHED STEEL\t19\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t9\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t8\nBrand#x\tMEDIUM BRUSHED 
STEEL\t19\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t8\nBrand#x\tMEDIUM BRUSHED TIN\t49\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t8\nBrand#x\tMEDIUM BURNISHED TIN\t14\t8\nBrand#x\tMEDIUM BURNISHED TIN\t36\t8\nBrand#x\tMEDIUM PLATED BRASS\t19\t8\nBrand#x\tMEDIUM PLATED BRASS\t36\t8\nBrand#x\tMEDIUM PLATED COPPER\t3\t8\nBrand#x\tMEDIUM PLATED COPPER\t49\t8\nBrand#x\tMEDIUM PLATED NICKEL\t36\t8\nBrand#x\tMEDIUM PLATED NICKEL\t45\t8\nBrand#x\tMEDIUM PLATED TIN\t45\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED BRASS\t9\t8\nBrand#x\tPROMO ANODIZED BRASS\t45\t8\nBrand#x\tPROMO ANODIZED NICKEL\t14\t8\nBrand#x\tPROMO ANODIZED NICKEL\t45\t8\nBrand#x\tPROMO ANODIZED STEEL\t49\t8\nBrand#x\tPROMO ANODIZED TIN\t3\t8\nBrand#x\tPROMO ANODIZED TIN\t14\t8\nBrand#x\tPROMO ANODIZED TIN\t19\t8\nBrand#x\tPROMO ANODIZED TIN\t49\t8\nBrand#x\tPROMO BRUSHED BRASS\t3\t8\nBrand#x\tPROMO BRUSHED BRASS\t45\t8\nBrand#x\tPROMO BRUSHED COPPER\t23\t8\nBrand#x\tPROMO BRUSHED NICKEL\t14\t8\nBrand#x\tPROMO BRUSHED NICKEL\t19\t8\nBrand#x\tPROMO BRUSHED STEEL\t14\t8\nBrand#x\tPROMO BURNISHED BRASS\t3\t8\nBrand#x\tPROMO BURNISHED BRASS\t49\t8\nBrand#x\tPROMO BURNISHED COPPER\t14\t8\nBrand#x\tPROMO BURNISHED NICKEL\t49\t8\nBrand#x\tPROMO BURNISHED STEEL\t49\t8\nBrand#x\tPROMO BURNISHED TIN\t9\t8\nBrand#x\tPROMO BURNISHED TIN\t36\t8\nBrand#x\tPROMO BURNISHED TIN\t49\t8\nBrand#x\tPROMO PLATED BRASS\t14\t8\nBrand#x\tPROMO PLATED COPPER\t45\t8\nBrand#x\tPROMO PLATED NICKEL\t45\t8\nBrand#x\tPROMO PLATED STEEL\t45\t8\nBrand#x\tPROMO PLATED TIN\t23\t8\nBrand#x\tPROMO POLISHED BRASS\t23\t8\nBrand#x\tPROMO POLISHED COPPER\t3\t8\nBrand#x\tPROMO POLISHED COPPER\t14\t8\nBrand#x\tPROMO POLISHED COPPER\t36\t8\nBrand#x\tPROMO POLISHED NICKEL\t14\t8\nBrand#x\tPROMO POLISHED NICKEL\t19\t8\nBrand#x\tPROMO POLISHED STEEL\t14\t8\nBrand#x\tPROMO POLISHED STEEL\t23\t8\nBrand#x\tPROMO POLISHED TIN\t3\t8\nBrand#x\tPROMO POLISHED TIN\t36\t8\nBrand#x\tSMALL ANODIZED BRASS\t19\t8\nBrand#x\tSMALL ANODIZED COPPER\t14\t8\nBrand#x\tSMALL ANODIZED COPPER\t19\t8\nBrand#x\tSMALL ANODIZED COPPER\t49\t8\nBrand#x\tSMALL ANODIZED NICKEL\t14\t8\nBrand#x\tSMALL ANODIZED NICKEL\t45\t8\nBrand#x\tSMALL ANODIZED STEEL\t49\t8\nBrand#x\tSMALL ANODIZED TIN\t49\t8\nBrand#x\tSMALL BRUSHED COPPER\t19\t8\nBrand#x\tSMALL BRUSHED COPPER\t49\t8\nBrand#x\tSMALL BRUSHED NICKEL\t9\t8\nBrand#x\tSMALL BRUSHED NICKEL\t49\t8\nBrand#x\tSMALL BRUSHED STEEL\t45\t8\nBrand#x\tSMALL BRUSHED TIN\t3\t8\nBrand#x\tSMALL BURNISHED COPPER\t23\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t45\t8\nBrand#x\tSMALL BURNISHED TIN\t9\t8\nBrand#x\tSMALL BURNISHED TIN\t49\t8\nBrand#x\tSMALL PLATED BRASS\t23\t8\nBrand#x\tSMALL PLATED BRASS\t45\t8\nBrand#x\tSMALL PLATED COPPER\t45\t8\nBrand#x\tSMALL PLATED NICKEL\t3\t8\nBrand#x\tSMALL PLATED NICKEL\t19\t8\nBrand#x\tSMALL PLATED NICKEL\t23\t8\nBrand#x\tSMALL PLATED NICKEL\t45\t8\nBrand#x\tSMALL PLATED NICKEL\t49\t8\nBrand#x\tSMALL PLATED STEEL\t14\t8\nBrand#x\tSMALL PLATED STEEL\t36\t8\nBrand#x\tSMALL PLATED TIN\t14\t8\nBrand#x\tSMALL POLISHED BRASS\t9\t8\nBrand#x\tSMALL POLISHED BRASS\t19\t8\nBrand#x\tSMALL POLISHED COPPER\t9\t8\nBrand#x\tSMALL POLISHED COPPER\t19\t8\nBrand#x\tSMALL POLISHED NICKEL\t3\t8\nBrand#x\tSMALL POLISHED NICKEL\t36\t8\nBrand#x\tSMALL POLISHED STEEL\t45\t8\nBrand#x\tSMALL POLISHED STEEL\t49\t8\nBrand#x\tSMALL POLISHED TIN\t36\t8\nBrand#x\tSTANDARD ANODIZED 
COPPER\t3\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t8\nBrand#x\tSTANDARD ANODIZED TIN\t14\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t8\nBrand#x\tSTANDARD BRUSHED TIN\t19\t8\nBrand#x\tSTANDARD BRUSHED TIN\t45\t8\nBrand#x\tSTANDARD BRUSHED TIN\t49\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t8\nBrand#x\tSTANDARD BURNISHED TIN\t19\t8\nBrand#x\tSTANDARD PLATED COPPER\t36\t8\nBrand#x\tSTANDARD PLATED NICKEL\t19\t8\nBrand#x\tSTANDARD PLATED NICKEL\t49\t8\nBrand#x\tSTANDARD PLATED TIN\t14\t8\nBrand#x\tSTANDARD PLATED TIN\t23\t8\nBrand#x\tSTANDARD POLISHED BRASS\t19\t8\nBrand#x\tSTANDARD POLISHED COPPER\t45\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t8\nBrand#x\tSTANDARD POLISHED STEEL\t14\t8\nBrand#x\tSTANDARD POLISHED STEEL\t49\t8\nBrand#x\tECONOMY ANODIZED BRASS\t3\t8\nBrand#x\tECONOMY ANODIZED COPPER\t23\t8\nBrand#x\tECONOMY ANODIZED COPPER\t49\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t8\nBrand#x\tECONOMY ANODIZED STEEL\t19\t8\nBrand#x\tECONOMY ANODIZED STEEL\t45\t8\nBrand#x\tECONOMY ANODIZED TIN\t14\t8\nBrand#x\tECONOMY ANODIZED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED COPPER\t23\t8\nBrand#x\tECONOMY BRUSHED STEEL\t9\t8\nBrand#x\tECONOMY BRUSHED STEEL\t19\t8\nBrand#x\tECONOMY BRUSHED TIN\t19\t8\nBrand#x\tECONOMY BRUSHED TIN\t49\t8\nBrand#x\tECONOMY BURNISHED COPPER\t3\t8\nBrand#x\tECONOMY BURNISHED COPPER\t9\t8\nBrand#x\tECONOMY BURNISHED COPPER\t14\t8\nBrand#x\tECONOMY BURNISHED COPPER\t23\t8\nBrand#x\tECONOMY BURNISHED COPPER\t49\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t8\nBrand#x\tECONOMY BURNISHED STEEL\t9\t8\nBrand#x\tECONOMY BURNISHED STEEL\t19\t8\nBrand#x\tECONOMY BURNISHED STEEL\t49\t8\nBrand#x\tECONOMY BURNISHED TIN\t3\t8\nBrand#x\tECONOMY BURNISHED TIN\t19\t8\nBrand#x\tECONOMY BURNISHED TIN\t45\t8\nBrand#x\tECONOMY PLATED COPPER\t45\t8\nBrand#x\tECONOMY PLATED NICKEL\t23\t8\nBrand#x\tECONOMY PLATED STEEL\t14\t8\nBrand#x\tECONOMY PLATED STEEL\t23\t8\nBrand#x\tECONOMY PLATED STEEL\t36\t8\nBrand#x\tECONOMY PLATED TIN\t19\t8\nBrand#x\tECONOMY POLISHED BRASS\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t36\t8\nBrand#x\tECONOMY POLISHED COPPER\t9\t8\nBrand#x\tECONOMY POLISHED COPPER\t19\t8\nBrand#x\tECONOMY POLISHED NICKEL\t23\t8\nBrand#x\tECONOMY POLISHED NICKEL\t36\t8\nBrand#x\tECONOMY POLISHED NICKEL\t45\t8\nBrand#x\tECONOMY POLISHED NICKEL\t49\t8\nBrand#x\tECONOMY POLISHED STEEL\t9\t8\nBrand#x\tECONOMY POLISHED STEEL\t49\t8\nBrand#x\tECONOMY POLISHED TIN\t3\t8\nBrand#x\tECONOMY POLISHED TIN\t19\t8\nBrand#x\tLARGE ANODIZED BRASS\t3\t8\nBrand#x\tLARGE ANODIZED BRASS\t23\t8\nBrand#x\tLARGE ANODIZED BRASS\t49\t8\nBrand#x\tLARGE ANODIZED 
COPPER\t9\t8\nBrand#x\tLARGE ANODIZED COPPER\t45\t8\nBrand#x\tLARGE ANODIZED NICKEL\t49\t8\nBrand#x\tLARGE ANODIZED STEEL\t19\t8\nBrand#x\tLARGE ANODIZED TIN\t14\t8\nBrand#x\tLARGE BRUSHED BRASS\t14\t8\nBrand#x\tLARGE BRUSHED COPPER\t14\t8\nBrand#x\tLARGE BRUSHED NICKEL\t19\t8\nBrand#x\tLARGE BRUSHED NICKEL\t23\t8\nBrand#x\tLARGE BRUSHED NICKEL\t45\t8\nBrand#x\tLARGE BRUSHED TIN\t23\t8\nBrand#x\tLARGE BURNISHED COPPER\t9\t8\nBrand#x\tLARGE BURNISHED COPPER\t19\t8\nBrand#x\tLARGE BURNISHED COPPER\t23\t8\nBrand#x\tLARGE BURNISHED NICKEL\t36\t8\nBrand#x\tLARGE BURNISHED NICKEL\t49\t8\nBrand#x\tLARGE BURNISHED STEEL\t23\t8\nBrand#x\tLARGE BURNISHED STEEL\t49\t8\nBrand#x\tLARGE BURNISHED TIN\t14\t8\nBrand#x\tLARGE PLATED BRASS\t19\t8\nBrand#x\tLARGE PLATED COPPER\t14\t8\nBrand#x\tLARGE PLATED COPPER\t19\t8\nBrand#x\tLARGE PLATED NICKEL\t9\t8\nBrand#x\tLARGE PLATED NICKEL\t23\t8\nBrand#x\tLARGE PLATED STEEL\t23\t8\nBrand#x\tLARGE PLATED TIN\t14\t8\nBrand#x\tLARGE PLATED TIN\t19\t8\nBrand#x\tLARGE PLATED TIN\t36\t8\nBrand#x\tLARGE PLATED TIN\t49\t8\nBrand#x\tLARGE POLISHED BRASS\t9\t8\nBrand#x\tLARGE POLISHED BRASS\t19\t8\nBrand#x\tLARGE POLISHED BRASS\t23\t8\nBrand#x\tLARGE POLISHED COPPER\t9\t8\nBrand#x\tLARGE POLISHED COPPER\t49\t8\nBrand#x\tLARGE POLISHED NICKEL\t23\t8\nBrand#x\tLARGE POLISHED NICKEL\t36\t8\nBrand#x\tLARGE POLISHED STEEL\t45\t8\nBrand#x\tLARGE POLISHED TIN\t9\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t8\nBrand#x\tMEDIUM ANODIZED TIN\t45\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t8\nBrand#x\tMEDIUM BURNISHED TIN\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t23\t8\nBrand#x\tMEDIUM PLATED BRASS\t3\t8\nBrand#x\tMEDIUM PLATED BRASS\t23\t8\nBrand#x\tMEDIUM PLATED COPPER\t3\t8\nBrand#x\tMEDIUM PLATED NICKEL\t23\t8\nBrand#x\tMEDIUM PLATED NICKEL\t49\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED BRASS\t14\t8\nBrand#x\tPROMO ANODIZED BRASS\t49\t8\nBrand#x\tPROMO ANODIZED COPPER\t23\t8\nBrand#x\tPROMO ANODIZED NICKEL\t23\t8\nBrand#x\tPROMO ANODIZED NICKEL\t36\t8\nBrand#x\tPROMO ANODIZED STEEL\t9\t8\nBrand#x\tPROMO ANODIZED STEEL\t49\t8\nBrand#x\tPROMO BRUSHED BRASS\t9\t8\nBrand#x\tPROMO BRUSHED COPPER\t9\t8\nBrand#x\tPROMO BRUSHED COPPER\t23\t8\nBrand#x\tPROMO BRUSHED COPPER\t36\t8\nBrand#x\tPROMO BRUSHED NICKEL\t23\t8\nBrand#x\tPROMO BRUSHED NICKEL\t45\t8\nBrand#x\tPROMO BRUSHED STEEL\t3\t8\nBrand#x\tPROMO BRUSHED STEEL\t9\t8\nBrand#x\tPROMO BRUSHED STEEL\t45\t8\nBrand#x\tPROMO BRUSHED STEEL\t49\t8\nBrand#x\tPROMO BRUSHED TIN\t3\t8\nBrand#x\tPROMO BRUSHED TIN\t19\t8\nBrand#x\tPROMO BRUSHED TIN\t45\t8\nBrand#x\tPROMO BURNISHED BRASS\t36\t8\nBrand#x\tPROMO BURNISHED NICKEL\t3\t8\nBrand#x\tPROMO BURNISHED STEEL\t9\t8\nBrand#x\tPROMO BURNISHED STEEL\t19\t8\nBrand#x\tPROMO BURNISHED STEEL\t49\t8\nBrand#x\tPROMO PLATED BRASS\t23\t8\nBrand#x\tPROMO PLATED NICKEL\t9\t8\nBrand#x\tPROMO PLATED NICKEL\t23\t8\nBrand#x\tPROMO PLATED STEEL\t23\t8\nBrand#x\tPROMO PLATED STEEL\t49\t8\nBrand#x\tPROMO PLATED 
TIN\t14\t8\nBrand#x\tPROMO PLATED TIN\t36\t8\nBrand#x\tPROMO POLISHED BRASS\t36\t8\nBrand#x\tPROMO POLISHED COPPER\t9\t8\nBrand#x\tPROMO POLISHED NICKEL\t45\t8\nBrand#x\tPROMO POLISHED STEEL\t9\t8\nBrand#x\tPROMO POLISHED STEEL\t45\t8\nBrand#x\tPROMO POLISHED TIN\t14\t8\nBrand#x\tPROMO POLISHED TIN\t23\t8\nBrand#x\tPROMO POLISHED TIN\t36\t8\nBrand#x\tPROMO POLISHED TIN\t45\t8\nBrand#x\tPROMO POLISHED TIN\t49\t8\nBrand#x\tSMALL ANODIZED BRASS\t3\t8\nBrand#x\tSMALL ANODIZED BRASS\t9\t8\nBrand#x\tSMALL ANODIZED BRASS\t36\t8\nBrand#x\tSMALL ANODIZED COPPER\t14\t8\nBrand#x\tSMALL ANODIZED COPPER\t19\t8\nBrand#x\tSMALL ANODIZED COPPER\t23\t8\nBrand#x\tSMALL ANODIZED NICKEL\t23\t8\nBrand#x\tSMALL ANODIZED TIN\t14\t8\nBrand#x\tSMALL ANODIZED TIN\t19\t8\nBrand#x\tSMALL ANODIZED TIN\t23\t8\nBrand#x\tSMALL ANODIZED TIN\t45\t8\nBrand#x\tSMALL BRUSHED BRASS\t14\t8\nBrand#x\tSMALL BRUSHED COPPER\t23\t8\nBrand#x\tSMALL BRUSHED TIN\t36\t8\nBrand#x\tSMALL BURNISHED BRASS\t3\t8\nBrand#x\tSMALL BURNISHED BRASS\t36\t8\nBrand#x\tSMALL BURNISHED BRASS\t49\t8\nBrand#x\tSMALL BURNISHED NICKEL\t14\t8\nBrand#x\tSMALL BURNISHED NICKEL\t45\t8\nBrand#x\tSMALL BURNISHED TIN\t9\t8\nBrand#x\tSMALL BURNISHED TIN\t23\t8\nBrand#x\tSMALL BURNISHED TIN\t49\t8\nBrand#x\tSMALL PLATED BRASS\t36\t8\nBrand#x\tSMALL PLATED COPPER\t14\t8\nBrand#x\tSMALL PLATED NICKEL\t45\t8\nBrand#x\tSMALL PLATED NICKEL\t49\t8\nBrand#x\tSMALL PLATED TIN\t19\t8\nBrand#x\tSMALL POLISHED COPPER\t9\t8\nBrand#x\tSMALL POLISHED COPPER\t49\t8\nBrand#x\tSMALL POLISHED NICKEL\t9\t8\nBrand#x\tSMALL POLISHED NICKEL\t14\t8\nBrand#x\tSMALL POLISHED NICKEL\t19\t8\nBrand#x\tSMALL POLISHED NICKEL\t23\t8\nBrand#x\tSMALL POLISHED NICKEL\t45\t8\nBrand#x\tSMALL POLISHED STEEL\t3\t8\nBrand#x\tSMALL POLISHED TIN\t3\t8\nBrand#x\tSMALL POLISHED TIN\t14\t8\nBrand#x\tSMALL POLISHED TIN\t19\t8\nBrand#x\tSMALL POLISHED TIN\t23\t8\nBrand#x\tSMALL POLISHED TIN\t45\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t8\nBrand#x\tSTANDARD ANODIZED TIN\t23\t8\nBrand#x\tSTANDARD ANODIZED TIN\t36\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t8\nBrand#x\tSTANDARD BRUSHED TIN\t9\t8\nBrand#x\tSTANDARD BRUSHED TIN\t19\t8\nBrand#x\tSTANDARD BRUSHED TIN\t23\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t8\nBrand#x\tSTANDARD BURNISHED TIN\t19\t8\nBrand#x\tSTANDARD PLATED BRASS\t9\t8\nBrand#x\tSTANDARD PLATED BRASS\t45\t8\nBrand#x\tSTANDARD PLATED COPPER\t9\t8\nBrand#x\tSTANDARD PLATED COPPER\t23\t8\nBrand#x\tSTANDARD PLATED COPPER\t49\t8\nBrand#x\tSTANDARD PLATED NICKEL\t14\t8\nBrand#x\tSTANDARD PLATED NICKEL\t19\t8\nBrand#x\tSTANDARD PLATED TIN\t19\t8\nBrand#x\tSTANDARD PLATED TIN\t49\t8\nBrand#x\tSTANDARD POLISHED COPPER\t14\t8\nBrand#x\tSTANDARD POLISHED COPPER\t19\t8\nBrand#x\tSTANDARD POLISHED COPPER\t45\t8\nBrand#x\tSTANDARD POLISHED COPPER\t49\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t8\nBrand#x\tSTANDARD POLISHED TIN\t9\t8\nBrand#x\tSTANDARD POLISHED TIN\t19\t8\nBrand#x\tECONOMY ANODIZED 
BRASS\t49\t8\nBrand#x\tECONOMY ANODIZED COPPER\t3\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t8\nBrand#x\tECONOMY ANODIZED STEEL\t36\t8\nBrand#x\tECONOMY ANODIZED STEEL\t45\t8\nBrand#x\tECONOMY ANODIZED STEEL\t49\t8\nBrand#x\tECONOMY ANODIZED TIN\t23\t8\nBrand#x\tECONOMY BRUSHED BRASS\t3\t8\nBrand#x\tECONOMY BRUSHED COPPER\t36\t8\nBrand#x\tECONOMY BRUSHED COPPER\t45\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t8\nBrand#x\tECONOMY BRUSHED STEEL\t9\t8\nBrand#x\tECONOMY BRUSHED STEEL\t14\t8\nBrand#x\tECONOMY BRUSHED STEEL\t49\t8\nBrand#x\tECONOMY BRUSHED TIN\t19\t8\nBrand#x\tECONOMY BURNISHED BRASS\t14\t8\nBrand#x\tECONOMY BURNISHED STEEL\t14\t8\nBrand#x\tECONOMY BURNISHED STEEL\t19\t8\nBrand#x\tECONOMY BURNISHED STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED TIN\t14\t8\nBrand#x\tECONOMY BURNISHED TIN\t45\t8\nBrand#x\tECONOMY PLATED BRASS\t3\t8\nBrand#x\tECONOMY PLATED BRASS\t23\t8\nBrand#x\tECONOMY PLATED BRASS\t36\t8\nBrand#x\tECONOMY PLATED COPPER\t49\t8\nBrand#x\tECONOMY PLATED NICKEL\t9\t8\nBrand#x\tECONOMY PLATED NICKEL\t14\t8\nBrand#x\tECONOMY PLATED NICKEL\t49\t8\nBrand#x\tECONOMY PLATED TIN\t36\t8\nBrand#x\tECONOMY PLATED TIN\t49\t8\nBrand#x\tECONOMY POLISHED BRASS\t14\t8\nBrand#x\tECONOMY POLISHED BRASS\t36\t8\nBrand#x\tECONOMY POLISHED BRASS\t49\t8\nBrand#x\tECONOMY POLISHED COPPER\t9\t8\nBrand#x\tECONOMY POLISHED NICKEL\t19\t8\nBrand#x\tECONOMY POLISHED NICKEL\t36\t8\nBrand#x\tECONOMY POLISHED STEEL\t3\t8\nBrand#x\tECONOMY POLISHED STEEL\t9\t8\nBrand#x\tECONOMY POLISHED STEEL\t14\t8\nBrand#x\tECONOMY POLISHED STEEL\t36\t8\nBrand#x\tECONOMY POLISHED TIN\t14\t8\nBrand#x\tECONOMY POLISHED TIN\t19\t8\nBrand#x\tLARGE ANODIZED BRASS\t19\t8\nBrand#x\tLARGE ANODIZED BRASS\t23\t8\nBrand#x\tLARGE ANODIZED COPPER\t36\t8\nBrand#x\tLARGE ANODIZED COPPER\t49\t8\nBrand#x\tLARGE ANODIZED NICKEL\t14\t8\nBrand#x\tLARGE ANODIZED NICKEL\t45\t8\nBrand#x\tLARGE ANODIZED STEEL\t45\t8\nBrand#x\tLARGE ANODIZED TIN\t19\t8\nBrand#x\tLARGE BRUSHED BRASS\t9\t8\nBrand#x\tLARGE BRUSHED BRASS\t23\t8\nBrand#x\tLARGE BRUSHED COPPER\t23\t8\nBrand#x\tLARGE BRUSHED COPPER\t49\t8\nBrand#x\tLARGE BRUSHED NICKEL\t9\t8\nBrand#x\tLARGE BRUSHED NICKEL\t19\t8\nBrand#x\tLARGE BRUSHED NICKEL\t45\t8\nBrand#x\tLARGE BURNISHED BRASS\t3\t8\nBrand#x\tLARGE BURNISHED BRASS\t14\t8\nBrand#x\tLARGE BURNISHED BRASS\t36\t8\nBrand#x\tLARGE BURNISHED NICKEL\t23\t8\nBrand#x\tLARGE BURNISHED STEEL\t9\t8\nBrand#x\tLARGE BURNISHED STEEL\t36\t8\nBrand#x\tLARGE PLATED BRASS\t23\t8\nBrand#x\tLARGE PLATED COPPER\t49\t8\nBrand#x\tLARGE PLATED NICKEL\t3\t8\nBrand#x\tLARGE PLATED NICKEL\t36\t8\nBrand#x\tLARGE PLATED STEEL\t3\t8\nBrand#x\tLARGE PLATED TIN\t9\t8\nBrand#x\tLARGE PLATED TIN\t36\t8\nBrand#x\tLARGE POLISHED BRASS\t9\t8\nBrand#x\tLARGE POLISHED COPPER\t14\t8\nBrand#x\tLARGE POLISHED COPPER\t45\t8\nBrand#x\tLARGE POLISHED NICKEL\t14\t8\nBrand#x\tLARGE POLISHED STEEL\t3\t8\nBrand#x\tLARGE POLISHED TIN\t14\t8\nBrand#x\tLARGE POLISHED TIN\t23\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t8\nBrand#x\tMEDIUM ANODIZED TIN\t3\t8\nBrand#x\tMEDIUM ANODIZED TIN\t19\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t8\nBrand#x\tMEDIUM BRUSHED 
NICKEL\t14\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t8\nBrand#x\tMEDIUM BURNISHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED TIN\t49\t8\nBrand#x\tMEDIUM PLATED BRASS\t49\t8\nBrand#x\tMEDIUM PLATED COPPER\t9\t8\nBrand#x\tMEDIUM PLATED COPPER\t19\t8\nBrand#x\tMEDIUM PLATED NICKEL\t3\t8\nBrand#x\tMEDIUM PLATED NICKEL\t9\t8\nBrand#x\tMEDIUM PLATED STEEL\t9\t8\nBrand#x\tMEDIUM PLATED STEEL\t49\t8\nBrand#x\tPROMO ANODIZED COPPER\t49\t8\nBrand#x\tPROMO ANODIZED NICKEL\t19\t8\nBrand#x\tPROMO ANODIZED TIN\t14\t8\nBrand#x\tPROMO ANODIZED TIN\t19\t8\nBrand#x\tPROMO BRUSHED BRASS\t19\t8\nBrand#x\tPROMO BRUSHED NICKEL\t9\t8\nBrand#x\tPROMO BRUSHED NICKEL\t14\t8\nBrand#x\tPROMO BRUSHED STEEL\t49\t8\nBrand#x\tPROMO BRUSHED TIN\t45\t8\nBrand#x\tPROMO BURNISHED BRASS\t3\t8\nBrand#x\tPROMO BURNISHED BRASS\t19\t8\nBrand#x\tPROMO BURNISHED BRASS\t23\t8\nBrand#x\tPROMO BURNISHED NICKEL\t3\t8\nBrand#x\tPROMO BURNISHED STEEL\t14\t8\nBrand#x\tPROMO BURNISHED TIN\t3\t8\nBrand#x\tPROMO BURNISHED TIN\t36\t8\nBrand#x\tPROMO BURNISHED TIN\t45\t8\nBrand#x\tPROMO PLATED BRASS\t19\t8\nBrand#x\tPROMO PLATED BRASS\t49\t8\nBrand#x\tPROMO PLATED COPPER\t19\t8\nBrand#x\tPROMO PLATED NICKEL\t23\t8\nBrand#x\tPROMO PLATED STEEL\t3\t8\nBrand#x\tPROMO PLATED STEEL\t23\t8\nBrand#x\tPROMO PLATED STEEL\t49\t8\nBrand#x\tPROMO PLATED TIN\t3\t8\nBrand#x\tPROMO PLATED TIN\t19\t8\nBrand#x\tPROMO POLISHED BRASS\t3\t8\nBrand#x\tPROMO POLISHED BRASS\t9\t8\nBrand#x\tPROMO POLISHED BRASS\t19\t8\nBrand#x\tPROMO POLISHED BRASS\t23\t8\nBrand#x\tPROMO POLISHED COPPER\t9\t8\nBrand#x\tPROMO POLISHED COPPER\t14\t8\nBrand#x\tPROMO POLISHED STEEL\t36\t8\nBrand#x\tPROMO POLISHED STEEL\t45\t8\nBrand#x\tSMALL ANODIZED BRASS\t9\t8\nBrand#x\tSMALL ANODIZED COPPER\t49\t8\nBrand#x\tSMALL ANODIZED NICKEL\t14\t8\nBrand#x\tSMALL ANODIZED STEEL\t3\t8\nBrand#x\tSMALL ANODIZED STEEL\t14\t8\nBrand#x\tSMALL ANODIZED STEEL\t23\t8\nBrand#x\tSMALL ANODIZED STEEL\t45\t8\nBrand#x\tSMALL ANODIZED TIN\t19\t8\nBrand#x\tSMALL BRUSHED BRASS\t9\t8\nBrand#x\tSMALL BRUSHED COPPER\t3\t8\nBrand#x\tSMALL BRUSHED COPPER\t19\t8\nBrand#x\tSMALL BRUSHED COPPER\t45\t8\nBrand#x\tSMALL BRUSHED NICKEL\t23\t8\nBrand#x\tSMALL BRUSHED STEEL\t3\t8\nBrand#x\tSMALL BRUSHED STEEL\t9\t8\nBrand#x\tSMALL BRUSHED STEEL\t14\t8\nBrand#x\tSMALL BRUSHED TIN\t9\t8\nBrand#x\tSMALL BRUSHED TIN\t36\t8\nBrand#x\tSMALL BURNISHED BRASS\t36\t8\nBrand#x\tSMALL BURNISHED BRASS\t49\t8\nBrand#x\tSMALL BURNISHED COPPER\t14\t8\nBrand#x\tSMALL BURNISHED COPPER\t23\t8\nBrand#x\tSMALL BURNISHED NICKEL\t19\t8\nBrand#x\tSMALL BURNISHED NICKEL\t49\t8\nBrand#x\tSMALL BURNISHED STEEL\t14\t8\nBrand#x\tSMALL BURNISHED STEEL\t19\t8\nBrand#x\tSMALL BURNISHED TIN\t49\t8\nBrand#x\tSMALL PLATED COPPER\t45\t8\nBrand#x\tSMALL PLATED COPPER\t49\t8\nBrand#x\tSMALL PLATED NICKEL\t9\t8\nBrand#x\tSMALL PLATED STEEL\t36\t8\nBrand#x\tSMALL PLATED STEEL\t45\t8\nBrand#x\tSMALL PLATED TIN\t19\t8\nBrand#x\tSMALL POLISHED BRASS\t19\t8\nBrand#x\tSMALL POLISHED COPPER\t36\t8\nBrand#x\tSMALL POLISHED STEEL\t23\t8\nBrand#x\tSMALL POLISHED STEEL\t45\t8\nBrand#x\tSMALL POLISHED TIN\t49\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t8\nBrand#x\tSTANDARD ANODIZED 
NICKEL\t9\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t8\nBrand#x\tSTANDARD ANODIZED TIN\t9\t8\nBrand#x\tSTANDARD ANODIZED TIN\t23\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t8\nBrand#x\tSTANDARD BURNISHED TIN\t14\t8\nBrand#x\tSTANDARD PLATED BRASS\t49\t8\nBrand#x\tSTANDARD PLATED COPPER\t19\t8\nBrand#x\tSTANDARD PLATED COPPER\t45\t8\nBrand#x\tSTANDARD PLATED NICKEL\t19\t8\nBrand#x\tSTANDARD PLATED STEEL\t19\t8\nBrand#x\tSTANDARD PLATED TIN\t9\t8\nBrand#x\tSTANDARD POLISHED BRASS\t3\t8\nBrand#x\tSTANDARD POLISHED BRASS\t45\t8\nBrand#x\tSTANDARD POLISHED COPPER\t9\t8\nBrand#x\tSTANDARD POLISHED COPPER\t49\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t8\nBrand#x\tSTANDARD POLISHED STEEL\t9\t8\nBrand#x\tSTANDARD POLISHED STEEL\t14\t8\nBrand#x\tSTANDARD POLISHED STEEL\t49\t8\nBrand#x\tSTANDARD POLISHED TIN\t14\t8\nBrand#x\tSTANDARD POLISHED TIN\t23\t8\nBrand#x\tSTANDARD POLISHED TIN\t49\t8\nBrand#x\tECONOMY ANODIZED BRASS\t14\t8\nBrand#x\tECONOMY ANODIZED BRASS\t36\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t8\nBrand#x\tECONOMY ANODIZED STEEL\t3\t8\nBrand#x\tECONOMY ANODIZED STEEL\t19\t8\nBrand#x\tECONOMY ANODIZED TIN\t3\t8\nBrand#x\tECONOMY ANODIZED TIN\t14\t8\nBrand#x\tECONOMY ANODIZED TIN\t49\t8\nBrand#x\tECONOMY BRUSHED BRASS\t36\t8\nBrand#x\tECONOMY BRUSHED STEEL\t3\t8\nBrand#x\tECONOMY BRUSHED STEEL\t14\t8\nBrand#x\tECONOMY BRUSHED TIN\t9\t8\nBrand#x\tECONOMY BRUSHED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED TIN\t49\t8\nBrand#x\tECONOMY BURNISHED COPPER\t45\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t8\nBrand#x\tECONOMY BURNISHED STEEL\t9\t8\nBrand#x\tECONOMY BURNISHED STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED TIN\t3\t8\nBrand#x\tECONOMY PLATED BRASS\t3\t8\nBrand#x\tECONOMY PLATED BRASS\t14\t8\nBrand#x\tECONOMY PLATED BRASS\t23\t8\nBrand#x\tECONOMY PLATED BRASS\t45\t8\nBrand#x\tECONOMY PLATED COPPER\t49\t8\nBrand#x\tECONOMY PLATED NICKEL\t3\t8\nBrand#x\tECONOMY PLATED NICKEL\t49\t8\nBrand#x\tECONOMY PLATED STEEL\t3\t8\nBrand#x\tECONOMY PLATED STEEL\t14\t8\nBrand#x\tECONOMY PLATED TIN\t3\t8\nBrand#x\tECONOMY PLATED TIN\t19\t8\nBrand#x\tECONOMY PLATED TIN\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t45\t8\nBrand#x\tECONOMY POLISHED BRASS\t49\t8\nBrand#x\tECONOMY POLISHED NICKEL\t19\t8\nBrand#x\tECONOMY POLISHED NICKEL\t23\t8\nBrand#x\tECONOMY POLISHED NICKEL\t45\t8\nBrand#x\tECONOMY POLISHED STEEL\t9\t8\nBrand#x\tECONOMY POLISHED STEEL\t45\t8\nBrand#x\tECONOMY POLISHED STEEL\t49\t8\nBrand#x\tECONOMY POLISHED TIN\t19\t8\nBrand#x\tECONOMY POLISHED TIN\t36\t8\nBrand#x\tECONOMY POLISHED TIN\t49\t8\nBrand#x\tLARGE ANODIZED BRASS\t14\t8\nBrand#x\tLARGE ANODIZED STEEL\t9\t8\nBrand#x\tLARGE ANODIZED STEEL\t19\t8\nBrand#x\tLARGE ANODIZED STEEL\t36\t8\nBrand#x\tLARGE ANODIZED STEEL\t45\t8\nBrand#x\tLARGE ANODIZED TIN\t9\t8\nBrand#x\tLARGE ANODIZED TIN\t14\t8\nBrand#x\tLARGE ANODIZED TIN\t36\t8\nBrand#x\tLARGE BRUSHED BRASS\t19\t8\nBrand#x\tLARGE BRUSHED COPPER\t14\t8\nBrand#x\tLARGE BRUSHED 
COPPER\t49\t8\nBrand#x\tLARGE BRUSHED NICKEL\t36\t8\nBrand#x\tLARGE BRUSHED TIN\t19\t8\nBrand#x\tLARGE BRUSHED TIN\t49\t8\nBrand#x\tLARGE BURNISHED BRASS\t19\t8\nBrand#x\tLARGE BURNISHED BRASS\t49\t8\nBrand#x\tLARGE BURNISHED COPPER\t3\t8\nBrand#x\tLARGE BURNISHED COPPER\t23\t8\nBrand#x\tLARGE BURNISHED NICKEL\t3\t8\nBrand#x\tLARGE BURNISHED NICKEL\t9\t8\nBrand#x\tLARGE BURNISHED STEEL\t9\t8\nBrand#x\tLARGE BURNISHED STEEL\t14\t8\nBrand#x\tLARGE BURNISHED TIN\t14\t8\nBrand#x\tLARGE BURNISHED TIN\t45\t8\nBrand#x\tLARGE PLATED BRASS\t14\t8\nBrand#x\tLARGE PLATED COPPER\t3\t8\nBrand#x\tLARGE PLATED COPPER\t14\t8\nBrand#x\tLARGE PLATED COPPER\t45\t8\nBrand#x\tLARGE PLATED NICKEL\t14\t8\nBrand#x\tLARGE PLATED NICKEL\t49\t8\nBrand#x\tLARGE PLATED TIN\t45\t8\nBrand#x\tLARGE POLISHED COPPER\t14\t8\nBrand#x\tLARGE POLISHED NICKEL\t23\t8\nBrand#x\tLARGE POLISHED NICKEL\t49\t8\nBrand#x\tLARGE POLISHED TIN\t9\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t8\nBrand#x\tMEDIUM ANODIZED TIN\t19\t8\nBrand#x\tMEDIUM ANODIZED TIN\t49\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t9\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t8\nBrand#x\tMEDIUM BRUSHED TIN\t19\t8\nBrand#x\tMEDIUM BRUSHED TIN\t23\t8\nBrand#x\tMEDIUM BRUSHED TIN\t49\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED TIN\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t49\t8\nBrand#x\tMEDIUM PLATED COPPER\t14\t8\nBrand#x\tMEDIUM PLATED COPPER\t19\t8\nBrand#x\tMEDIUM PLATED COPPER\t36\t8\nBrand#x\tMEDIUM PLATED NICKEL\t3\t8\nBrand#x\tMEDIUM PLATED STEEL\t36\t8\nBrand#x\tMEDIUM PLATED TIN\t3\t8\nBrand#x\tMEDIUM PLATED TIN\t9\t8\nBrand#x\tMEDIUM PLATED TIN\t14\t8\nBrand#x\tPROMO ANODIZED BRASS\t36\t8\nBrand#x\tPROMO ANODIZED COPPER\t19\t8\nBrand#x\tPROMO ANODIZED COPPER\t23\t8\nBrand#x\tPROMO ANODIZED COPPER\t36\t8\nBrand#x\tPROMO ANODIZED TIN\t9\t8\nBrand#x\tPROMO ANODIZED TIN\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t3\t8\nBrand#x\tPROMO BRUSHED BRASS\t14\t8\nBrand#x\tPROMO BRUSHED BRASS\t45\t8\nBrand#x\tPROMO BRUSHED COPPER\t45\t8\nBrand#x\tPROMO BRUSHED NICKEL\t45\t8\nBrand#x\tPROMO BRUSHED NICKEL\t49\t8\nBrand#x\tPROMO BRUSHED STEEL\t9\t8\nBrand#x\tPROMO BRUSHED STEEL\t14\t8\nBrand#x\tPROMO BRUSHED STEEL\t23\t8\nBrand#x\tPROMO BURNISHED BRASS\t14\t8\nBrand#x\tPROMO BURNISHED BRASS\t23\t8\nBrand#x\tPROMO BURNISHED COPPER\t45\t8\nBrand#x\tPROMO BURNISHED COPPER\t49\t8\nBrand#x\tPROMO BURNISHED NICKEL\t9\t8\nBrand#x\tPROMO BURNISHED NICKEL\t14\t8\nBrand#x\tPROMO BURNISHED NICKEL\t49\t8\nBrand#x\tPROMO PLATED BRASS\t3\t8\nBrand#x\tPROMO PLATED BRASS\t45\t8\nBrand#x\tPROMO PLATED BRASS\t49\t8\nBrand#x\tPROMO PLATED COPPER\t3\t8\nBrand#x\tPROMO PLATED COPPER\t9\t8\nBrand#x\tPROMO PLATED COPPER\t45\t8\nBrand#x\tPROMO PLATED NICKEL\t19\t8\nBrand#x\tPROMO PLATED NICKEL\t23\t8\nBrand#x\tPROMO PLATED NICKEL\t36\t8\nBrand#x\tPROMO PLATED NICKEL\t45\t8\nBrand#x\tPROMO PLATED STEEL\t3\t8\nBrand#x\tPROMO PLATED 
STEEL\t23\t8\nBrand#x\tPROMO PLATED STEEL\t49\t8\nBrand#x\tPROMO POLISHED BRASS\t36\t8\nBrand#x\tPROMO POLISHED COPPER\t23\t8\nBrand#x\tPROMO POLISHED COPPER\t49\t8\nBrand#x\tPROMO POLISHED NICKEL\t14\t8\nBrand#x\tPROMO POLISHED STEEL\t45\t8\nBrand#x\tPROMO POLISHED TIN\t3\t8\nBrand#x\tPROMO POLISHED TIN\t9\t8\nBrand#x\tPROMO POLISHED TIN\t14\t8\nBrand#x\tPROMO POLISHED TIN\t19\t8\nBrand#x\tPROMO POLISHED TIN\t45\t8\nBrand#x\tSMALL ANODIZED BRASS\t3\t8\nBrand#x\tSMALL ANODIZED BRASS\t14\t8\nBrand#x\tSMALL ANODIZED BRASS\t23\t8\nBrand#x\tSMALL ANODIZED COPPER\t23\t8\nBrand#x\tSMALL ANODIZED NICKEL\t45\t8\nBrand#x\tSMALL ANODIZED STEEL\t23\t8\nBrand#x\tSMALL ANODIZED TIN\t19\t8\nBrand#x\tSMALL ANODIZED TIN\t23\t8\nBrand#x\tSMALL ANODIZED TIN\t49\t8\nBrand#x\tSMALL BRUSHED BRASS\t9\t8\nBrand#x\tSMALL BRUSHED BRASS\t49\t8\nBrand#x\tSMALL BRUSHED COPPER\t23\t8\nBrand#x\tSMALL BRUSHED NICKEL\t19\t8\nBrand#x\tSMALL BRUSHED TIN\t3\t8\nBrand#x\tSMALL BRUSHED TIN\t19\t8\nBrand#x\tSMALL BRUSHED TIN\t45\t8\nBrand#x\tSMALL BRUSHED TIN\t49\t8\nBrand#x\tSMALL BURNISHED BRASS\t9\t8\nBrand#x\tSMALL BURNISHED BRASS\t45\t8\nBrand#x\tSMALL BURNISHED COPPER\t9\t8\nBrand#x\tSMALL BURNISHED COPPER\t45\t8\nBrand#x\tSMALL BURNISHED NICKEL\t3\t8\nBrand#x\tSMALL BURNISHED NICKEL\t14\t8\nBrand#x\tSMALL BURNISHED TIN\t36\t8\nBrand#x\tSMALL PLATED BRASS\t3\t8\nBrand#x\tSMALL PLATED BRASS\t45\t8\nBrand#x\tSMALL PLATED BRASS\t49\t8\nBrand#x\tSMALL PLATED COPPER\t49\t8\nBrand#x\tSMALL PLATED NICKEL\t14\t8\nBrand#x\tSMALL PLATED NICKEL\t36\t8\nBrand#x\tSMALL POLISHED BRASS\t23\t8\nBrand#x\tSMALL POLISHED COPPER\t9\t8\nBrand#x\tSMALL POLISHED COPPER\t36\t8\nBrand#x\tSMALL POLISHED COPPER\t45\t8\nBrand#x\tSMALL POLISHED STEEL\t3\t8\nBrand#x\tSMALL POLISHED STEEL\t9\t8\nBrand#x\tSMALL POLISHED STEEL\t49\t8\nBrand#x\tSMALL POLISHED TIN\t9\t8\nBrand#x\tSMALL POLISHED TIN\t14\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t8\nBrand#x\tSTANDARD ANODIZED TIN\t3\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t8\nBrand#x\tSTANDARD BRUSHED TIN\t9\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t8\nBrand#x\tSTANDARD BURNISHED TIN\t19\t8\nBrand#x\tSTANDARD PLATED BRASS\t49\t8\nBrand#x\tSTANDARD PLATED STEEL\t14\t8\nBrand#x\tSTANDARD PLATED STEEL\t36\t8\nBrand#x\tSTANDARD POLISHED BRASS\t3\t8\nBrand#x\tSTANDARD POLISHED BRASS\t9\t8\nBrand#x\tSTANDARD POLISHED BRASS\t49\t8\nBrand#x\tSTANDARD POLISHED COPPER\t9\t8\nBrand#x\tSTANDARD POLISHED COPPER\t14\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t8\nBrand#x\tSTANDARD POLISHED STEEL\t45\t8\nBrand#x\tSTANDARD POLISHED TIN\t19\t8\nBrand#x\tECONOMY ANODIZED BRASS\t9\t8\nBrand#x\tECONOMY ANODIZED BRASS\t36\t8\nBrand#x\tECONOMY ANODIZED BRASS\t45\t8\nBrand#x\tECONOMY ANODIZED COPPER\t45\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t8\nBrand#x\tECONOMY ANODIZED STEEL\t45\t8\nBrand#x\tECONOMY ANODIZED TIN\t14\t8\nBrand#x\tECONOMY ANODIZED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED COPPER\t3\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t8\nBrand#x\tECONOMY BRUSHED STEEL\t23\t8\nBrand#x\tECONOMY BRUSHED 
STEEL\t49\t8\nBrand#x\tECONOMY BRUSHED TIN\t3\t8\nBrand#x\tECONOMY BURNISHED BRASS\t9\t8\nBrand#x\tECONOMY BURNISHED BRASS\t45\t8\nBrand#x\tECONOMY BURNISHED COPPER\t9\t8\nBrand#x\tECONOMY BURNISHED COPPER\t14\t8\nBrand#x\tECONOMY BURNISHED COPPER\t19\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t8\nBrand#x\tECONOMY BURNISHED STEEL\t19\t8\nBrand#x\tECONOMY BURNISHED STEEL\t23\t8\nBrand#x\tECONOMY BURNISHED STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED TIN\t3\t8\nBrand#x\tECONOMY BURNISHED TIN\t49\t8\nBrand#x\tECONOMY PLATED BRASS\t14\t8\nBrand#x\tECONOMY PLATED BRASS\t19\t8\nBrand#x\tECONOMY PLATED COPPER\t3\t8\nBrand#x\tECONOMY PLATED TIN\t19\t8\nBrand#x\tECONOMY POLISHED COPPER\t14\t8\nBrand#x\tECONOMY POLISHED COPPER\t19\t8\nBrand#x\tECONOMY POLISHED NICKEL\t36\t8\nBrand#x\tECONOMY POLISHED STEEL\t3\t8\nBrand#x\tECONOMY POLISHED STEEL\t9\t8\nBrand#x\tLARGE ANODIZED BRASS\t19\t8\nBrand#x\tLARGE ANODIZED BRASS\t45\t8\nBrand#x\tLARGE ANODIZED STEEL\t45\t8\nBrand#x\tLARGE ANODIZED TIN\t23\t8\nBrand#x\tLARGE ANODIZED TIN\t45\t8\nBrand#x\tLARGE ANODIZED TIN\t49\t8\nBrand#x\tLARGE BRUSHED COPPER\t19\t8\nBrand#x\tLARGE BRUSHED COPPER\t45\t8\nBrand#x\tLARGE BRUSHED STEEL\t9\t8\nBrand#x\tLARGE BRUSHED STEEL\t45\t8\nBrand#x\tLARGE BRUSHED TIN\t3\t8\nBrand#x\tLARGE BRUSHED TIN\t9\t8\nBrand#x\tLARGE BRUSHED TIN\t36\t8\nBrand#x\tLARGE BURNISHED BRASS\t3\t8\nBrand#x\tLARGE BURNISHED NICKEL\t14\t8\nBrand#x\tLARGE BURNISHED NICKEL\t23\t8\nBrand#x\tLARGE BURNISHED STEEL\t3\t8\nBrand#x\tLARGE BURNISHED STEEL\t19\t8\nBrand#x\tLARGE BURNISHED STEEL\t23\t8\nBrand#x\tLARGE BURNISHED STEEL\t45\t8\nBrand#x\tLARGE BURNISHED TIN\t9\t8\nBrand#x\tLARGE PLATED BRASS\t9\t8\nBrand#x\tLARGE PLATED BRASS\t49\t8\nBrand#x\tLARGE PLATED NICKEL\t49\t8\nBrand#x\tLARGE PLATED STEEL\t45\t8\nBrand#x\tLARGE PLATED TIN\t23\t8\nBrand#x\tLARGE POLISHED BRASS\t3\t8\nBrand#x\tLARGE POLISHED BRASS\t23\t8\nBrand#x\tLARGE POLISHED COPPER\t23\t8\nBrand#x\tLARGE POLISHED NICKEL\t3\t8\nBrand#x\tLARGE POLISHED NICKEL\t14\t8\nBrand#x\tLARGE POLISHED NICKEL\t23\t8\nBrand#x\tLARGE POLISHED STEEL\t3\t8\nBrand#x\tLARGE POLISHED STEEL\t23\t8\nBrand#x\tLARGE POLISHED TIN\t9\t8\nBrand#x\tLARGE POLISHED TIN\t49\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t8\nBrand#x\tMEDIUM ANODIZED TIN\t3\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t8\nBrand#x\tMEDIUM BRUSHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t8\nBrand#x\tMEDIUM BURNISHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED TIN\t14\t8\nBrand#x\tMEDIUM PLATED BRASS\t9\t8\nBrand#x\tMEDIUM PLATED BRASS\t19\t8\nBrand#x\tMEDIUM PLATED NICKEL\t23\t8\nBrand#x\tMEDIUM PLATED NICKEL\t36\t8\nBrand#x\tMEDIUM PLATED NICKEL\t45\t8\nBrand#x\tMEDIUM PLATED STEEL\t19\t8\nBrand#x\tMEDIUM PLATED STEEL\t45\t8\nBrand#x\tPROMO ANODIZED BRASS\t19\t8\nBrand#x\tPROMO ANODIZED BRASS\t23\t8\nBrand#x\tPROMO ANODIZED 
BRASS\t36\t8\nBrand#x\tPROMO ANODIZED COPPER\t3\t8\nBrand#x\tPROMO ANODIZED COPPER\t9\t8\nBrand#x\tPROMO ANODIZED NICKEL\t36\t8\nBrand#x\tPROMO ANODIZED STEEL\t3\t8\nBrand#x\tPROMO ANODIZED STEEL\t14\t8\nBrand#x\tPROMO ANODIZED TIN\t19\t8\nBrand#x\tPROMO ANODIZED TIN\t49\t8\nBrand#x\tPROMO BRUSHED BRASS\t45\t8\nBrand#x\tPROMO BRUSHED COPPER\t9\t8\nBrand#x\tPROMO BRUSHED COPPER\t14\t8\nBrand#x\tPROMO BRUSHED NICKEL\t14\t8\nBrand#x\tPROMO BRUSHED NICKEL\t49\t8\nBrand#x\tPROMO BRUSHED STEEL\t3\t8\nBrand#x\tPROMO BRUSHED TIN\t23\t8\nBrand#x\tPROMO BURNISHED BRASS\t14\t8\nBrand#x\tPROMO BURNISHED BRASS\t23\t8\nBrand#x\tPROMO BURNISHED BRASS\t36\t8\nBrand#x\tPROMO BURNISHED COPPER\t14\t8\nBrand#x\tPROMO BURNISHED NICKEL\t14\t8\nBrand#x\tPROMO BURNISHED STEEL\t23\t8\nBrand#x\tPROMO BURNISHED TIN\t3\t8\nBrand#x\tPROMO BURNISHED TIN\t9\t8\nBrand#x\tPROMO BURNISHED TIN\t19\t8\nBrand#x\tPROMO BURNISHED TIN\t45\t8\nBrand#x\tPROMO PLATED BRASS\t45\t8\nBrand#x\tPROMO PLATED BRASS\t49\t8\nBrand#x\tPROMO PLATED COPPER\t23\t8\nBrand#x\tPROMO PLATED COPPER\t45\t8\nBrand#x\tPROMO PLATED COPPER\t49\t8\nBrand#x\tPROMO PLATED NICKEL\t49\t8\nBrand#x\tPROMO PLATED STEEL\t19\t8\nBrand#x\tPROMO PLATED TIN\t45\t8\nBrand#x\tPROMO PLATED TIN\t49\t8\nBrand#x\tPROMO POLISHED BRASS\t14\t8\nBrand#x\tPROMO POLISHED BRASS\t19\t8\nBrand#x\tPROMO POLISHED BRASS\t36\t8\nBrand#x\tPROMO POLISHED NICKEL\t19\t8\nBrand#x\tPROMO POLISHED NICKEL\t23\t8\nBrand#x\tPROMO POLISHED NICKEL\t45\t8\nBrand#x\tPROMO POLISHED STEEL\t3\t8\nBrand#x\tPROMO POLISHED STEEL\t9\t8\nBrand#x\tPROMO POLISHED TIN\t36\t8\nBrand#x\tPROMO POLISHED TIN\t45\t8\nBrand#x\tSMALL ANODIZED BRASS\t3\t8\nBrand#x\tSMALL ANODIZED BRASS\t9\t8\nBrand#x\tSMALL ANODIZED BRASS\t45\t8\nBrand#x\tSMALL ANODIZED COPPER\t3\t8\nBrand#x\tSMALL ANODIZED COPPER\t19\t8\nBrand#x\tSMALL ANODIZED COPPER\t23\t8\nBrand#x\tSMALL ANODIZED NICKEL\t9\t8\nBrand#x\tSMALL ANODIZED NICKEL\t19\t8\nBrand#x\tSMALL ANODIZED STEEL\t23\t8\nBrand#x\tSMALL ANODIZED STEEL\t45\t8\nBrand#x\tSMALL ANODIZED TIN\t36\t8\nBrand#x\tSMALL BRUSHED BRASS\t14\t8\nBrand#x\tSMALL BRUSHED BRASS\t36\t8\nBrand#x\tSMALL BRUSHED STEEL\t45\t8\nBrand#x\tSMALL BRUSHED TIN\t3\t8\nBrand#x\tSMALL BRUSHED TIN\t14\t8\nBrand#x\tSMALL BRUSHED TIN\t19\t8\nBrand#x\tSMALL BRUSHED TIN\t45\t8\nBrand#x\tSMALL BRUSHED TIN\t49\t8\nBrand#x\tSMALL BURNISHED BRASS\t45\t8\nBrand#x\tSMALL BURNISHED BRASS\t49\t8\nBrand#x\tSMALL BURNISHED COPPER\t19\t8\nBrand#x\tSMALL BURNISHED COPPER\t23\t8\nBrand#x\tSMALL BURNISHED COPPER\t36\t8\nBrand#x\tSMALL BURNISHED COPPER\t45\t8\nBrand#x\tSMALL BURNISHED COPPER\t49\t8\nBrand#x\tSMALL BURNISHED NICKEL\t14\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED STEEL\t36\t8\nBrand#x\tSMALL BURNISHED TIN\t14\t8\nBrand#x\tSMALL BURNISHED TIN\t23\t8\nBrand#x\tSMALL PLATED BRASS\t9\t8\nBrand#x\tSMALL PLATED BRASS\t36\t8\nBrand#x\tSMALL PLATED NICKEL\t9\t8\nBrand#x\tSMALL PLATED NICKEL\t14\t8\nBrand#x\tSMALL PLATED NICKEL\t23\t8\nBrand#x\tSMALL PLATED STEEL\t19\t8\nBrand#x\tSMALL PLATED STEEL\t23\t8\nBrand#x\tSMALL PLATED TIN\t9\t8\nBrand#x\tSMALL POLISHED BRASS\t36\t8\nBrand#x\tSMALL POLISHED COPPER\t23\t8\nBrand#x\tSMALL POLISHED NICKEL\t3\t8\nBrand#x\tSMALL POLISHED NICKEL\t19\t8\nBrand#x\tSMALL POLISHED STEEL\t3\t8\nBrand#x\tSMALL POLISHED STEEL\t23\t8\nBrand#x\tSMALL POLISHED TIN\t23\t8\nBrand#x\tSMALL POLISHED TIN\t36\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t8\nBrand#x\tSTANDARD ANODIZED 
COPPER\t36\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t8\nBrand#x\tSTANDARD ANODIZED TIN\t14\t8\nBrand#x\tSTANDARD ANODIZED TIN\t49\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t8\nBrand#x\tSTANDARD BURNISHED TIN\t3\t8\nBrand#x\tSTANDARD BURNISHED TIN\t14\t8\nBrand#x\tSTANDARD BURNISHED TIN\t19\t8\nBrand#x\tSTANDARD BURNISHED TIN\t36\t8\nBrand#x\tSTANDARD PLATED BRASS\t19\t8\nBrand#x\tSTANDARD PLATED COPPER\t3\t8\nBrand#x\tSTANDARD PLATED NICKEL\t14\t8\nBrand#x\tSTANDARD PLATED NICKEL\t36\t8\nBrand#x\tSTANDARD PLATED STEEL\t14\t8\nBrand#x\tSTANDARD PLATED STEEL\t23\t8\nBrand#x\tSTANDARD PLATED STEEL\t45\t8\nBrand#x\tSTANDARD PLATED TIN\t9\t8\nBrand#x\tSTANDARD PLATED TIN\t14\t8\nBrand#x\tSTANDARD PLATED TIN\t19\t8\nBrand#x\tSTANDARD PLATED TIN\t23\t8\nBrand#x\tSTANDARD POLISHED BRASS\t36\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t8\nBrand#x\tSTANDARD POLISHED TIN\t9\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t8\nBrand#x\tECONOMY ANODIZED STEEL\t19\t8\nBrand#x\tECONOMY ANODIZED STEEL\t23\t8\nBrand#x\tECONOMY ANODIZED TIN\t3\t8\nBrand#x\tECONOMY ANODIZED TIN\t45\t8\nBrand#x\tECONOMY BRUSHED BRASS\t14\t8\nBrand#x\tECONOMY BRUSHED BRASS\t19\t8\nBrand#x\tECONOMY BRUSHED BRASS\t23\t8\nBrand#x\tECONOMY BRUSHED COPPER\t9\t8\nBrand#x\tECONOMY BRUSHED COPPER\t45\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t8\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t8\nBrand#x\tECONOMY BRUSHED STEEL\t3\t8\nBrand#x\tECONOMY BRUSHED STEEL\t14\t8\nBrand#x\tECONOMY BURNISHED COPPER\t9\t8\nBrand#x\tECONOMY BURNISHED COPPER\t36\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t8\nBrand#x\tECONOMY BURNISHED STEEL\t14\t8\nBrand#x\tECONOMY BURNISHED STEEL\t36\t8\nBrand#x\tECONOMY BURNISHED TIN\t9\t8\nBrand#x\tECONOMY BURNISHED TIN\t14\t8\nBrand#x\tECONOMY BURNISHED TIN\t23\t8\nBrand#x\tECONOMY PLATED COPPER\t14\t8\nBrand#x\tECONOMY PLATED COPPER\t19\t8\nBrand#x\tECONOMY PLATED NICKEL\t23\t8\nBrand#x\tECONOMY PLATED NICKEL\t45\t8\nBrand#x\tECONOMY PLATED STEEL\t3\t8\nBrand#x\tECONOMY PLATED STEEL\t19\t8\nBrand#x\tECONOMY PLATED TIN\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t36\t8\nBrand#x\tECONOMY POLISHED BRASS\t49\t8\nBrand#x\tECONOMY POLISHED COPPER\t9\t8\nBrand#x\tECONOMY POLISHED COPPER\t19\t8\nBrand#x\tECONOMY POLISHED COPPER\t23\t8\nBrand#x\tECONOMY POLISHED COPPER\t45\t8\nBrand#x\tECONOMY POLISHED STEEL\t14\t8\nBrand#x\tECONOMY POLISHED STEEL\t19\t8\nBrand#x\tECONOMY POLISHED STEEL\t23\t8\nBrand#x\tLARGE ANODIZED COPPER\t3\t8\nBrand#x\tLARGE ANODIZED COPPER\t45\t8\nBrand#x\tLARGE ANODIZED STEEL\t9\t8\nBrand#x\tLARGE ANODIZED STEEL\t14\t8\nBrand#x\tLARGE ANODIZED 
TIN\t23\t8\nBrand#x\tLARGE BRUSHED BRASS\t3\t8\nBrand#x\tLARGE BRUSHED BRASS\t14\t8\nBrand#x\tLARGE BRUSHED BRASS\t45\t8\nBrand#x\tLARGE BRUSHED COPPER\t14\t8\nBrand#x\tLARGE BRUSHED COPPER\t45\t8\nBrand#x\tLARGE BRUSHED NICKEL\t3\t8\nBrand#x\tLARGE BRUSHED STEEL\t36\t8\nBrand#x\tLARGE BRUSHED STEEL\t49\t8\nBrand#x\tLARGE BRUSHED TIN\t36\t8\nBrand#x\tLARGE BURNISHED BRASS\t23\t8\nBrand#x\tLARGE BURNISHED COPPER\t49\t8\nBrand#x\tLARGE BURNISHED NICKEL\t23\t8\nBrand#x\tLARGE BURNISHED NICKEL\t49\t8\nBrand#x\tLARGE BURNISHED STEEL\t49\t8\nBrand#x\tLARGE BURNISHED TIN\t14\t8\nBrand#x\tLARGE BURNISHED TIN\t49\t8\nBrand#x\tLARGE PLATED BRASS\t23\t8\nBrand#x\tLARGE PLATED BRASS\t45\t8\nBrand#x\tLARGE PLATED COPPER\t49\t8\nBrand#x\tLARGE PLATED STEEL\t3\t8\nBrand#x\tLARGE PLATED STEEL\t9\t8\nBrand#x\tLARGE PLATED STEEL\t19\t8\nBrand#x\tLARGE PLATED STEEL\t36\t8\nBrand#x\tLARGE PLATED TIN\t9\t8\nBrand#x\tLARGE POLISHED BRASS\t49\t8\nBrand#x\tLARGE POLISHED COPPER\t45\t8\nBrand#x\tLARGE POLISHED NICKEL\t14\t8\nBrand#x\tLARGE POLISHED STEEL\t3\t8\nBrand#x\tLARGE POLISHED STEEL\t14\t8\nBrand#x\tLARGE POLISHED STEEL\t19\t8\nBrand#x\tLARGE POLISHED TIN\t36\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t8\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t8\nBrand#x\tMEDIUM BRUSHED TIN\t14\t8\nBrand#x\tMEDIUM BRUSHED TIN\t49\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t8\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t9\t8\nBrand#x\tMEDIUM BURNISHED TIN\t19\t8\nBrand#x\tMEDIUM BURNISHED TIN\t36\t8\nBrand#x\tMEDIUM PLATED BRASS\t3\t8\nBrand#x\tMEDIUM PLATED BRASS\t23\t8\nBrand#x\tMEDIUM PLATED COPPER\t9\t8\nBrand#x\tMEDIUM PLATED COPPER\t45\t8\nBrand#x\tMEDIUM PLATED COPPER\t49\t8\nBrand#x\tMEDIUM PLATED NICKEL\t45\t8\nBrand#x\tMEDIUM PLATED TIN\t19\t8\nBrand#x\tMEDIUM PLATED TIN\t23\t8\nBrand#x\tPROMO ANODIZED BRASS\t3\t8\nBrand#x\tPROMO ANODIZED BRASS\t9\t8\nBrand#x\tPROMO ANODIZED BRASS\t14\t8\nBrand#x\tPROMO ANODIZED BRASS\t23\t8\nBrand#x\tPROMO ANODIZED BRASS\t36\t8\nBrand#x\tPROMO ANODIZED COPPER\t23\t8\nBrand#x\tPROMO ANODIZED STEEL\t36\t8\nBrand#x\tPROMO ANODIZED TIN\t19\t8\nBrand#x\tPROMO BRUSHED BRASS\t23\t8\nBrand#x\tPROMO BRUSHED BRASS\t49\t8\nBrand#x\tPROMO BRUSHED COPPER\t3\t8\nBrand#x\tPROMO BRUSHED COPPER\t45\t8\nBrand#x\tPROMO BRUSHED NICKEL\t3\t8\nBrand#x\tPROMO BRUSHED NICKEL\t23\t8\nBrand#x\tPROMO BRUSHED STEEL\t36\t8\nBrand#x\tPROMO BRUSHED STEEL\t49\t8\nBrand#x\tPROMO BRUSHED TIN\t45\t8\nBrand#x\tPROMO BURNISHED COPPER\t14\t8\nBrand#x\tPROMO BURNISHED NICKEL\t19\t8\nBrand#x\tPROMO BURNISHED NICKEL\t49\t8\nBrand#x\tPROMO BURNISHED STEEL\t3\t8\nBrand#x\tPROMO BURNISHED TIN\t3\t8\nBrand#x\tPROMO BURNISHED TIN\t14\t8\nBrand#x\tPROMO BURNISHED 
TIN\t45\t8\nBrand#x\tPROMO PLATED COPPER\t36\t8\nBrand#x\tPROMO PLATED COPPER\t45\t8\nBrand#x\tPROMO PLATED COPPER\t49\t8\nBrand#x\tPROMO PLATED NICKEL\t3\t8\nBrand#x\tPROMO PLATED NICKEL\t14\t8\nBrand#x\tPROMO PLATED NICKEL\t19\t8\nBrand#x\tPROMO PLATED STEEL\t45\t8\nBrand#x\tPROMO PLATED TIN\t14\t8\nBrand#x\tPROMO PLATED TIN\t23\t8\nBrand#x\tPROMO POLISHED BRASS\t14\t8\nBrand#x\tPROMO POLISHED BRASS\t36\t8\nBrand#x\tPROMO POLISHED COPPER\t14\t8\nBrand#x\tPROMO POLISHED COPPER\t23\t8\nBrand#x\tPROMO POLISHED NICKEL\t14\t8\nBrand#x\tPROMO POLISHED NICKEL\t19\t8\nBrand#x\tPROMO POLISHED NICKEL\t23\t8\nBrand#x\tPROMO POLISHED STEEL\t23\t8\nBrand#x\tPROMO POLISHED STEEL\t36\t8\nBrand#x\tPROMO POLISHED TIN\t49\t8\nBrand#x\tSMALL ANODIZED BRASS\t19\t8\nBrand#x\tSMALL ANODIZED COPPER\t23\t8\nBrand#x\tSMALL ANODIZED COPPER\t45\t8\nBrand#x\tSMALL ANODIZED NICKEL\t14\t8\nBrand#x\tSMALL ANODIZED STEEL\t9\t8\nBrand#x\tSMALL ANODIZED TIN\t14\t8\nBrand#x\tSMALL BRUSHED BRASS\t9\t8\nBrand#x\tSMALL BRUSHED BRASS\t49\t8\nBrand#x\tSMALL BRUSHED COPPER\t45\t8\nBrand#x\tSMALL BRUSHED TIN\t19\t8\nBrand#x\tSMALL BRUSHED TIN\t36\t8\nBrand#x\tSMALL BURNISHED BRASS\t9\t8\nBrand#x\tSMALL BURNISHED BRASS\t14\t8\nBrand#x\tSMALL BURNISHED COPPER\t3\t8\nBrand#x\tSMALL BURNISHED COPPER\t14\t8\nBrand#x\tSMALL BURNISHED STEEL\t9\t8\nBrand#x\tSMALL BURNISHED TIN\t23\t8\nBrand#x\tSMALL BURNISHED TIN\t49\t8\nBrand#x\tSMALL PLATED COPPER\t14\t8\nBrand#x\tSMALL PLATED COPPER\t23\t8\nBrand#x\tSMALL PLATED COPPER\t45\t8\nBrand#x\tSMALL PLATED NICKEL\t14\t8\nBrand#x\tSMALL PLATED STEEL\t49\t8\nBrand#x\tSMALL PLATED TIN\t14\t8\nBrand#x\tSMALL PLATED TIN\t23\t8\nBrand#x\tSMALL PLATED TIN\t36\t8\nBrand#x\tSMALL POLISHED BRASS\t9\t8\nBrand#x\tSMALL POLISHED BRASS\t36\t8\nBrand#x\tSMALL POLISHED COPPER\t3\t8\nBrand#x\tSMALL POLISHED COPPER\t49\t8\nBrand#x\tSMALL POLISHED NICKEL\t3\t8\nBrand#x\tSMALL POLISHED NICKEL\t14\t8\nBrand#x\tSMALL POLISHED NICKEL\t23\t8\nBrand#x\tSMALL POLISHED STEEL\t3\t8\nBrand#x\tSMALL POLISHED STEEL\t23\t8\nBrand#x\tSMALL POLISHED TIN\t45\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t8\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t8\nBrand#x\tSTANDARD ANODIZED TIN\t45\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t8\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t8\nBrand#x\tSTANDARD BRUSHED TIN\t19\t8\nBrand#x\tSTANDARD BRUSHED TIN\t23\t8\nBrand#x\tSTANDARD BRUSHED TIN\t49\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t8\nBrand#x\tSTANDARD PLATED BRASS\t3\t8\nBrand#x\tSTANDARD PLATED BRASS\t23\t8\nBrand#x\tSTANDARD PLATED COPPER\t36\t8\nBrand#x\tSTANDARD PLATED NICKEL\t36\t8\nBrand#x\tSTANDARD PLATED STEEL\t45\t8\nBrand#x\tSTANDARD PLATED TIN\t49\t8\nBrand#x\tSTANDARD POLISHED BRASS\t49\t8\nBrand#x\tSTANDARD POLISHED COPPER\t19\t8\nBrand#x\tSTANDARD POLISHED COPPER\t23\t8\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t8\nBrand#x\tSTANDARD POLISHED STEEL\t19\t8\nBrand#x\tSTANDARD POLISHED TIN\t9\t8\nBrand#x\tSTANDARD POLISHED TIN\t14\t8\nBrand#x\tECONOMY ANODIZED COPPER\t23\t8\nBrand#x\tECONOMY ANODIZED 
NICKEL\t9\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t8\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t8\nBrand#x\tECONOMY ANODIZED STEEL\t9\t8\nBrand#x\tECONOMY ANODIZED STEEL\t49\t8\nBrand#x\tECONOMY ANODIZED TIN\t9\t8\nBrand#x\tECONOMY ANODIZED TIN\t14\t8\nBrand#x\tECONOMY ANODIZED TIN\t19\t8\nBrand#x\tECONOMY ANODIZED TIN\t23\t8\nBrand#x\tECONOMY ANODIZED TIN\t36\t8\nBrand#x\tECONOMY BRUSHED COPPER\t23\t8\nBrand#x\tECONOMY BRUSHED STEEL\t49\t8\nBrand#x\tECONOMY BRUSHED TIN\t3\t8\nBrand#x\tECONOMY BRUSHED TIN\t23\t8\nBrand#x\tECONOMY BURNISHED BRASS\t3\t8\nBrand#x\tECONOMY BURNISHED BRASS\t14\t8\nBrand#x\tECONOMY BURNISHED COPPER\t23\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t8\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t8\nBrand#x\tECONOMY BURNISHED STEEL\t9\t8\nBrand#x\tECONOMY BURNISHED STEEL\t19\t8\nBrand#x\tECONOMY BURNISHED STEEL\t49\t8\nBrand#x\tECONOMY BURNISHED TIN\t45\t8\nBrand#x\tECONOMY PLATED BRASS\t45\t8\nBrand#x\tECONOMY PLATED COPPER\t49\t8\nBrand#x\tECONOMY PLATED NICKEL\t19\t8\nBrand#x\tECONOMY PLATED NICKEL\t36\t8\nBrand#x\tECONOMY PLATED TIN\t23\t8\nBrand#x\tECONOMY POLISHED BRASS\t19\t8\nBrand#x\tECONOMY POLISHED BRASS\t23\t8\nBrand#x\tECONOMY POLISHED COPPER\t23\t8\nBrand#x\tECONOMY POLISHED COPPER\t45\t8\nBrand#x\tECONOMY POLISHED NICKEL\t9\t8\nBrand#x\tECONOMY POLISHED NICKEL\t14\t8\nBrand#x\tECONOMY POLISHED NICKEL\t19\t8\nBrand#x\tECONOMY POLISHED NICKEL\t45\t8\nBrand#x\tECONOMY POLISHED TIN\t9\t8\nBrand#x\tLARGE ANODIZED BRASS\t36\t8\nBrand#x\tLARGE ANODIZED COPPER\t9\t8\nBrand#x\tLARGE ANODIZED COPPER\t36\t8\nBrand#x\tLARGE ANODIZED COPPER\t45\t8\nBrand#x\tLARGE ANODIZED NICKEL\t36\t8\nBrand#x\tLARGE ANODIZED STEEL\t9\t8\nBrand#x\tLARGE ANODIZED TIN\t14\t8\nBrand#x\tLARGE BRUSHED COPPER\t9\t8\nBrand#x\tLARGE BRUSHED COPPER\t19\t8\nBrand#x\tLARGE BRUSHED NICKEL\t14\t8\nBrand#x\tLARGE BRUSHED TIN\t9\t8\nBrand#x\tLARGE BURNISHED BRASS\t3\t8\nBrand#x\tLARGE BURNISHED BRASS\t49\t8\nBrand#x\tLARGE BURNISHED COPPER\t36\t8\nBrand#x\tLARGE BURNISHED COPPER\t49\t8\nBrand#x\tLARGE BURNISHED NICKEL\t19\t8\nBrand#x\tLARGE BURNISHED NICKEL\t45\t8\nBrand#x\tLARGE BURNISHED STEEL\t3\t8\nBrand#x\tLARGE BURNISHED STEEL\t23\t8\nBrand#x\tLARGE PLATED COPPER\t14\t8\nBrand#x\tLARGE PLATED NICKEL\t9\t8\nBrand#x\tLARGE PLATED STEEL\t19\t8\nBrand#x\tLARGE PLATED STEEL\t36\t8\nBrand#x\tLARGE PLATED STEEL\t49\t8\nBrand#x\tLARGE PLATED TIN\t9\t8\nBrand#x\tLARGE PLATED TIN\t14\t8\nBrand#x\tLARGE PLATED TIN\t36\t8\nBrand#x\tLARGE PLATED TIN\t45\t8\nBrand#x\tLARGE POLISHED BRASS\t3\t8\nBrand#x\tLARGE POLISHED COPPER\t9\t8\nBrand#x\tLARGE POLISHED COPPER\t36\t8\nBrand#x\tLARGE POLISHED TIN\t9\t8\nBrand#x\tLARGE POLISHED TIN\t45\t8\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t8\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t8\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t8\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t8\nBrand#x\tMEDIUM ANODIZED TIN\t45\t8\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t8\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t8\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t8\nBrand#x\tMEDIUM BRUSHED TIN\t45\t8\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t8\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t8\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t8\nBrand#x\tMEDIUM BURNISHED TIN\t45\t8\nBrand#x\tMEDIUM PLATED 
BRASS\t23\t8\nBrand#x\tMEDIUM PLATED COPPER\t9\t8\nBrand#x\tMEDIUM PLATED COPPER\t45\t8\nBrand#x\tMEDIUM PLATED NICKEL\t49\t8\nBrand#x\tMEDIUM PLATED TIN\t3\t8\nBrand#x\tMEDIUM PLATED TIN\t14\t8\nBrand#x\tMEDIUM PLATED TIN\t36\t8\nBrand#x\tPROMO ANODIZED BRASS\t45\t8\nBrand#x\tPROMO ANODIZED BRASS\t49\t8\nBrand#x\tPROMO ANODIZED COPPER\t3\t8\nBrand#x\tPROMO ANODIZED COPPER\t45\t8\nBrand#x\tPROMO ANODIZED COPPER\t49\t8\nBrand#x\tPROMO ANODIZED NICKEL\t3\t8\nBrand#x\tPROMO ANODIZED NICKEL\t14\t8\nBrand#x\tPROMO ANODIZED NICKEL\t36\t8\nBrand#x\tPROMO ANODIZED STEEL\t3\t8\nBrand#x\tPROMO ANODIZED STEEL\t36\t8\nBrand#x\tPROMO ANODIZED STEEL\t49\t8\nBrand#x\tPROMO ANODIZED TIN\t36\t8\nBrand#x\tPROMO ANODIZED TIN\t49\t8\nBrand#x\tPROMO BRUSHED BRASS\t9\t8\nBrand#x\tPROMO BRUSHED COPPER\t9\t8\nBrand#x\tPROMO BRUSHED NICKEL\t36\t8\nBrand#x\tPROMO BRUSHED NICKEL\t49\t8\nBrand#x\tPROMO BRUSHED STEEL\t3\t8\nBrand#x\tPROMO BRUSHED STEEL\t9\t8\nBrand#x\tPROMO BRUSHED STEEL\t36\t8\nBrand#x\tPROMO BRUSHED STEEL\t45\t8\nBrand#x\tPROMO BRUSHED TIN\t49\t8\nBrand#x\tPROMO BURNISHED BRASS\t49\t8\nBrand#x\tPROMO BURNISHED COPPER\t14\t8\nBrand#x\tPROMO BURNISHED STEEL\t9\t8\nBrand#x\tPROMO BURNISHED TIN\t45\t8\nBrand#x\tPROMO BURNISHED TIN\t49\t8\nBrand#x\tPROMO PLATED BRASS\t9\t8\nBrand#x\tPROMO PLATED BRASS\t36\t8\nBrand#x\tPROMO PLATED BRASS\t45\t8\nBrand#x\tPROMO PLATED COPPER\t14\t8\nBrand#x\tPROMO PLATED COPPER\t23\t8\nBrand#x\tPROMO PLATED NICKEL\t14\t8\nBrand#x\tPROMO PLATED NICKEL\t49\t8\nBrand#x\tPROMO PLATED TIN\t36\t8\nBrand#x\tPROMO PLATED TIN\t45\t8\nBrand#x\tPROMO POLISHED BRASS\t3\t8\nBrand#x\tPROMO POLISHED COPPER\t36\t8\nBrand#x\tPROMO POLISHED STEEL\t3\t8\nBrand#x\tPROMO POLISHED STEEL\t14\t8\nBrand#x\tPROMO POLISHED STEEL\t36\t8\nBrand#x\tSMALL ANODIZED BRASS\t19\t8\nBrand#x\tSMALL ANODIZED COPPER\t14\t8\nBrand#x\tSMALL ANODIZED NICKEL\t3\t8\nBrand#x\tSMALL ANODIZED STEEL\t14\t8\nBrand#x\tSMALL ANODIZED STEEL\t19\t8\nBrand#x\tSMALL ANODIZED STEEL\t49\t8\nBrand#x\tSMALL ANODIZED TIN\t3\t8\nBrand#x\tSMALL BRUSHED BRASS\t19\t8\nBrand#x\tSMALL BRUSHED BRASS\t49\t8\nBrand#x\tSMALL BRUSHED COPPER\t14\t8\nBrand#x\tSMALL BRUSHED COPPER\t36\t8\nBrand#x\tSMALL BRUSHED COPPER\t45\t8\nBrand#x\tSMALL BRUSHED TIN\t23\t8\nBrand#x\tSMALL BURNISHED BRASS\t9\t8\nBrand#x\tSMALL BURNISHED COPPER\t45\t8\nBrand#x\tSMALL BURNISHED NICKEL\t3\t8\nBrand#x\tSMALL BURNISHED STEEL\t19\t8\nBrand#x\tSMALL BURNISHED STEEL\t23\t8\nBrand#x\tSMALL BURNISHED TIN\t3\t8\nBrand#x\tSMALL BURNISHED TIN\t14\t8\nBrand#x\tSMALL BURNISHED TIN\t19\t8\nBrand#x\tSMALL BURNISHED TIN\t36\t8\nBrand#x\tSMALL PLATED BRASS\t45\t8\nBrand#x\tSMALL PLATED COPPER\t9\t8\nBrand#x\tSMALL PLATED COPPER\t19\t8\nBrand#x\tSMALL PLATED COPPER\t23\t8\nBrand#x\tSMALL PLATED COPPER\t45\t8\nBrand#x\tSMALL PLATED NICKEL\t9\t8\nBrand#x\tSMALL PLATED NICKEL\t23\t8\nBrand#x\tSMALL PLATED STEEL\t49\t8\nBrand#x\tSMALL PLATED TIN\t3\t8\nBrand#x\tSMALL PLATED TIN\t9\t8\nBrand#x\tSMALL PLATED TIN\t14\t8\nBrand#x\tSMALL PLATED TIN\t49\t8\nBrand#x\tSMALL POLISHED BRASS\t14\t8\nBrand#x\tSMALL POLISHED COPPER\t3\t8\nBrand#x\tSMALL POLISHED TIN\t19\t8\nBrand#x\tSMALL POLISHED TIN\t49\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t8\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t8\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t8\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t8\nBrand#x\tSTANDARD ANODIZED TIN\t19\t8\nBrand#x\tSTANDARD ANODIZED TIN\t49\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t8\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t8\nBrand#x\tSTANDARD BRUSHED 
COPPER\t3\t8\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t8\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t8\nBrand#x\tSTANDARD BRUSHED TIN\t23\t8\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t8\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t8\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t8\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t8\nBrand#x\tSTANDARD PLATED BRASS\t19\t8\nBrand#x\tSTANDARD PLATED BRASS\t23\t8\nBrand#x\tSTANDARD PLATED COPPER\t23\t8\nBrand#x\tSTANDARD PLATED NICKEL\t49\t8\nBrand#x\tSTANDARD PLATED TIN\t23\t8\nBrand#x\tSTANDARD POLISHED BRASS\t19\t8\nBrand#x\tSTANDARD POLISHED BRASS\t49\t8\nBrand#x\tSTANDARD POLISHED COPPER\t9\t8\nBrand#x\tSTANDARD POLISHED COPPER\t36\t8\nBrand#x\tSTANDARD POLISHED STEEL\t9\t8\nBrand#x\tSTANDARD POLISHED STEEL\t36\t8\nBrand#x\tSTANDARD POLISHED STEEL\t45\t8\nBrand#x\tSTANDARD POLISHED STEEL\t49\t8\nBrand#x\tPROMO ANODIZED NICKEL\t49\t7\nBrand#x\tLARGE PLATED TIN\t23\t7\nBrand#x\tPROMO PLATED BRASS\t19\t7\nBrand#x\tSTANDARD POLISHED TIN\t3\t7\nBrand#x\tECONOMY PLATED NICKEL\t19\t7\nBrand#x\tLARGE BURNISHED NICKEL\t14\t7\nBrand#x\tPROMO BRUSHED NICKEL\t14\t7\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t7\nBrand#x\tLARGE BRUSHED COPPER\t3\t7\nBrand#x\tLARGE POLISHED NICKEL\t23\t7\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t7\nBrand#x\tECONOMY BRUSHED BRASS\t3\t7\nBrand#x\tPROMO PLATED NICKEL\t9\t7\nBrand#x\tSMALL ANODIZED COPPER\t23\t7\nBrand#x\tECONOMY BRUSHED COPPER\t36\t7\nBrand#x\tPROMO POLISHED BRASS\t45\t7\nBrand#x\tMEDIUM PLATED STEEL\t45\t7\nBrand#x\tSTANDARD PLATED COPPER\t19\t7\nBrand#x\tLARGE POLISHED COPPER\t19\t7\nBrand#x\tPROMO BURNISHED STEEL\t45\t7\nBrand#x\tSTANDARD PLATED TIN\t45\t7\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t7\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t7\nBrand#x\tECONOMY POLISHED TIN\t19\t7\nBrand#x\tSMALL BURNISHED STEEL\t3\t7\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t6\nBrand#x\tECONOMY ANODIZED BRASS\t3\t4\nBrand#x\tECONOMY ANODIZED BRASS\t45\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED COPPER\t45\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED STEEL\t23\t4\nBrand#x\tECONOMY ANODIZED STEEL\t45\t4\nBrand#x\tECONOMY ANODIZED STEEL\t49\t4\nBrand#x\tECONOMY ANODIZED TIN\t3\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t3\t4\nBrand#x\tECONOMY BRUSHED BRASS\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t3\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t3\t4\nBrand#x\tECONOMY BRUSHED STEEL\t36\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t14\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t14\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t23\t4\nBrand#x\tECONOMY BURNISHED TIN\t45\t4\nBrand#x\tECONOMY PLATED 
BRASS\t3\t4\nBrand#x\tECONOMY PLATED BRASS\t9\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t49\t4\nBrand#x\tECONOMY PLATED COPPER\t36\t4\nBrand#x\tECONOMY PLATED NICKEL\t3\t4\nBrand#x\tECONOMY PLATED NICKEL\t49\t4\nBrand#x\tECONOMY PLATED STEEL\t3\t4\nBrand#x\tECONOMY PLATED STEEL\t14\t4\nBrand#x\tECONOMY PLATED STEEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t3\t4\nBrand#x\tECONOMY PLATED TIN\t9\t4\nBrand#x\tECONOMY PLATED TIN\t19\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED BRASS\t19\t4\nBrand#x\tECONOMY POLISHED BRASS\t23\t4\nBrand#x\tECONOMY POLISHED BRASS\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t3\t4\nBrand#x\tECONOMY POLISHED COPPER\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t9\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED STEEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t36\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t49\t4\nBrand#x\tLARGE ANODIZED BRASS\t3\t4\nBrand#x\tLARGE ANODIZED BRASS\t9\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t3\t4\nBrand#x\tLARGE ANODIZED COPPER\t9\t4\nBrand#x\tLARGE ANODIZED COPPER\t36\t4\nBrand#x\tLARGE ANODIZED COPPER\t45\t4\nBrand#x\tLARGE ANODIZED NICKEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED STEEL\t49\t4\nBrand#x\tLARGE ANODIZED TIN\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t9\t4\nBrand#x\tLARGE ANODIZED TIN\t14\t4\nBrand#x\tLARGE ANODIZED TIN\t19\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t45\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED NICKEL\t36\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED STEEL\t23\t4\nBrand#x\tLARGE BRUSHED STEEL\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t3\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t36\t4\nBrand#x\tLARGE BURNISHED COPPER\t19\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t3\t4\nBrand#x\tLARGE BURNISHED NICKEL\t49\t4\nBrand#x\tLARGE BURNISHED STEEL\t14\t4\nBrand#x\tLARGE BURNISHED STEEL\t23\t4\nBrand#x\tLARGE BURNISHED STEEL\t45\t4\nBrand#x\tLARGE BURNISHED TIN\t3\t4\nBrand#x\tLARGE BURNISHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t45\t4\nBrand#x\tLARGE PLATED BRASS\t9\t4\nBrand#x\tLARGE PLATED BRASS\t36\t4\nBrand#x\tLARGE PLATED BRASS\t45\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED COPPER\t3\t4\nBrand#x\tLARGE PLATED COPPER\t9\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t19\t4\nBrand#x\tLARGE PLATED COPPER\t23\t4\nBrand#x\tLARGE PLATED COPPER\t36\t4\nBrand#x\tLARGE PLATED COPPER\t45\t4\nBrand#x\tLARGE PLATED COPPER\t49\t4\nBrand#x\tLARGE PLATED NICKEL\t9\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t49\t4\nBrand#x\tLARGE PLATED STEEL\t9\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t23\t4\nBrand#x\tLARGE PLATED TIN\t45\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t14\t4\nBrand#x\tLARGE POLISHED BRASS\t19\t4\nBrand#x\tLARGE POLISHED 
BRASS\t23\t4\nBrand#x\tLARGE POLISHED BRASS\t45\t4\nBrand#x\tLARGE POLISHED COPPER\t3\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t49\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED STEEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t45\t4\nBrand#x\tLARGE POLISHED TIN\t9\t4\nBrand#x\tLARGE POLISHED TIN\t14\t4\nBrand#x\tLARGE POLISHED TIN\t45\t4\nBrand#x\tLARGE POLISHED TIN\t49\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t9\t4\nBrand#x\tMEDIUM BRUSHED TIN\t49\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t45\t4\nBrand#x\tMEDIUM PLATED BRASS\t19\t4\nBrand#x\tMEDIUM PLATED COPPER\t23\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t36\t4\nBrand#x\tMEDIUM PLATED NICKEL\t49\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t36\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t3\t4\nBrand#x\tPROMO ANODIZED BRASS\t9\t4\nBrand#x\tPROMO ANODIZED BRASS\t14\t4\nBrand#x\tPROMO ANODIZED BRASS\t23\t4\nBrand#x\tPROMO ANODIZED COPPER\t3\t4\nBrand#x\tPROMO ANODIZED COPPER\t23\t4\nBrand#x\tPROMO ANODIZED COPPER\t45\t4\nBrand#x\tPROMO ANODIZED NICKEL\t14\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED NICKEL\t49\t4\nBrand#x\tPROMO ANODIZED STEEL\t9\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED TIN\t14\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t19\t4\nBrand#x\tPROMO BRUSHED BRASS\t23\t4\nBrand#x\tPROMO BRUSHED BRASS\t45\t4\nBrand#x\tPROMO BRUSHED COPPER\t3\t4\nBrand#x\tPROMO BRUSHED COPPER\t23\t4\nBrand#x\tPROMO BRUSHED COPPER\t45\t4\nBrand#x\tPROMO BRUSHED COPPER\t49\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED NICKEL\t23\t4\nBrand#x\tPROMO BRUSHED NICKEL\t45\t4\nBrand#x\tPROMO BRUSHED NICKEL\t49\t4\nBrand#x\tPROMO BRUSHED STEEL\t3\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t3\t4\nBrand#x\tPROMO BRUSHED TIN\t9\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t14\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t9\t4\nBrand#x\tPROMO 
BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t19\t4\nBrand#x\tPROMO BURNISHED NICKEL\t49\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t3\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t14\t4\nBrand#x\tPROMO BURNISHED TIN\t19\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO PLATED BRASS\t9\t4\nBrand#x\tPROMO PLATED BRASS\t36\t4\nBrand#x\tPROMO PLATED COPPER\t9\t4\nBrand#x\tPROMO PLATED COPPER\t23\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t23\t4\nBrand#x\tPROMO PLATED NICKEL\t36\t4\nBrand#x\tPROMO PLATED NICKEL\t45\t4\nBrand#x\tPROMO PLATED STEEL\t36\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED TIN\t45\t4\nBrand#x\tPROMO POLISHED BRASS\t9\t4\nBrand#x\tPROMO POLISHED BRASS\t45\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t3\t4\nBrand#x\tPROMO POLISHED COPPER\t36\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t19\t4\nBrand#x\tPROMO POLISHED STEEL\t9\t4\nBrand#x\tPROMO POLISHED STEEL\t14\t4\nBrand#x\tPROMO POLISHED STEEL\t36\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tPROMO POLISHED TIN\t45\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t14\t4\nBrand#x\tSMALL ANODIZED BRASS\t19\t4\nBrand#x\tSMALL ANODIZED BRASS\t36\t4\nBrand#x\tSMALL ANODIZED COPPER\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t23\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED NICKEL\t14\t4\nBrand#x\tSMALL ANODIZED NICKEL\t19\t4\nBrand#x\tSMALL ANODIZED NICKEL\t45\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL ANODIZED TIN\t14\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t9\t4\nBrand#x\tSMALL BRUSHED BRASS\t14\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED COPPER\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t9\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t36\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL BRUSHED TIN\t9\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BRUSHED TIN\t45\t4\nBrand#x\tSMALL BURNISHED BRASS\t3\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED BRASS\t36\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t14\t4\nBrand#x\tSMALL BURNISHED NICKEL\t36\t4\nBrand#x\tSMALL BURNISHED NICKEL\t45\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED STEEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t23\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t9\t4\nBrand#x\tSMALL PLATED BRASS\t23\t4\nBrand#x\tSMALL PLATED COPPER\t3\t4\nBrand#x\tSMALL PLATED COPPER\t14\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED NICKEL\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED STEEL\t23\t4\nBrand#x\tSMALL PLATED STEEL\t36\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL 
POLISHED BRASS\t45\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t3\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t49\t4\nBrand#x\tSMALL POLISHED NICKEL\t3\t4\nBrand#x\tSMALL POLISHED NICKEL\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t19\t4\nBrand#x\tSMALL POLISHED STEEL\t9\t4\nBrand#x\tSMALL POLISHED STEEL\t49\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSMALL POLISHED TIN\t19\t4\nBrand#x\tSMALL POLISHED TIN\t36\t4\nBrand#x\tSMALL POLISHED TIN\t45\t4\nBrand#x\tSMALL POLISHED TIN\t49\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t4\nBrand#x\tSTANDARD ANODIZED TIN\t3\t4\nBrand#x\tSTANDARD ANODIZED TIN\t19\t4\nBrand#x\tSTANDARD ANODIZED TIN\t36\t4\nBrand#x\tSTANDARD ANODIZED TIN\t49\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t45\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t19\t4\nBrand#x\tSTANDARD BURNISHED TIN\t23\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD PLATED BRASS\t3\t4\nBrand#x\tSTANDARD PLATED BRASS\t14\t4\nBrand#x\tSTANDARD PLATED BRASS\t36\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t14\t4\nBrand#x\tSTANDARD PLATED COPPER\t45\t4\nBrand#x\tSTANDARD PLATED NICKEL\t3\t4\nBrand#x\tSTANDARD PLATED NICKEL\t9\t4\nBrand#x\tSTANDARD PLATED NICKEL\t23\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t9\t4\nBrand#x\tSTANDARD PLATED STEEL\t36\t4\nBrand#x\tSTANDARD PLATED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED BRASS\t19\t4\nBrand#x\tSTANDARD POLISHED BRASS\t36\t4\nBrand#x\tSTANDARD POLISHED BRASS\t49\t4\nBrand#x\tSTANDARD POLISHED COPPER\t3\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED 
STEEL\t14\t4\nBrand#x\tSTANDARD POLISHED STEEL\t23\t4\nBrand#x\tSTANDARD POLISHED STEEL\t36\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t36\t4\nBrand#x\tSTANDARD POLISHED TIN\t45\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED BRASS\t19\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t23\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED COPPER\t45\t4\nBrand#x\tECONOMY ANODIZED COPPER\t49\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t49\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t36\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t14\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t4\nBrand#x\tECONOMY BRUSHED STEEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t14\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t45\t4\nBrand#x\tECONOMY BURNISHED COPPER\t9\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t45\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED STEEL\t19\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t49\t4\nBrand#x\tECONOMY PLATED BRASS\t9\t4\nBrand#x\tECONOMY PLATED BRASS\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t23\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED COPPER\t49\t4\nBrand#x\tECONOMY PLATED NICKEL\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t23\t4\nBrand#x\tECONOMY PLATED NICKEL\t36\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t49\t4\nBrand#x\tECONOMY PLATED STEEL\t3\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t14\t4\nBrand#x\tECONOMY PLATED STEEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t9\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t19\t4\nBrand#x\tECONOMY PLATED TIN\t23\t4\nBrand#x\tECONOMY POLISHED BRASS\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t23\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED NICKEL\t9\t4\nBrand#x\tECONOMY POLISHED NICKEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED STEEL\t36\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t23\t4\nBrand#x\tECONOMY POLISHED TIN\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t3\t4\nBrand#x\tLARGE ANODIZED BRASS\t9\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t3\t4\nBrand#x\tLARGE ANODIZED 
COPPER\t23\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t14\t4\nBrand#x\tLARGE ANODIZED NICKEL\t19\t4\nBrand#x\tLARGE ANODIZED NICKEL\t23\t4\nBrand#x\tLARGE ANODIZED NICKEL\t45\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED STEEL\t19\t4\nBrand#x\tLARGE ANODIZED STEEL\t45\t4\nBrand#x\tLARGE ANODIZED TIN\t9\t4\nBrand#x\tLARGE ANODIZED TIN\t36\t4\nBrand#x\tLARGE ANODIZED TIN\t45\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t9\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t19\t4\nBrand#x\tLARGE BRUSHED NICKEL\t45\t4\nBrand#x\tLARGE BRUSHED STEEL\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t36\t4\nBrand#x\tLARGE BRUSHED TIN\t49\t4\nBrand#x\tLARGE BURNISHED BRASS\t3\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t23\t4\nBrand#x\tLARGE BURNISHED BRASS\t36\t4\nBrand#x\tLARGE BURNISHED BRASS\t49\t4\nBrand#x\tLARGE BURNISHED COPPER\t9\t4\nBrand#x\tLARGE BURNISHED COPPER\t14\t4\nBrand#x\tLARGE BURNISHED COPPER\t23\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t9\t4\nBrand#x\tLARGE BURNISHED NICKEL\t23\t4\nBrand#x\tLARGE BURNISHED NICKEL\t36\t4\nBrand#x\tLARGE BURNISHED NICKEL\t49\t4\nBrand#x\tLARGE BURNISHED STEEL\t14\t4\nBrand#x\tLARGE BURNISHED STEEL\t19\t4\nBrand#x\tLARGE BURNISHED STEEL\t23\t4\nBrand#x\tLARGE BURNISHED STEEL\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t19\t4\nBrand#x\tLARGE PLATED BRASS\t14\t4\nBrand#x\tLARGE PLATED BRASS\t19\t4\nBrand#x\tLARGE PLATED BRASS\t23\t4\nBrand#x\tLARGE PLATED BRASS\t36\t4\nBrand#x\tLARGE PLATED BRASS\t45\t4\nBrand#x\tLARGE PLATED COPPER\t9\t4\nBrand#x\tLARGE PLATED COPPER\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t45\t4\nBrand#x\tLARGE PLATED STEEL\t23\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t3\t4\nBrand#x\tLARGE PLATED TIN\t23\t4\nBrand#x\tLARGE POLISHED BRASS\t14\t4\nBrand#x\tLARGE POLISHED BRASS\t36\t4\nBrand#x\tLARGE POLISHED BRASS\t45\t4\nBrand#x\tLARGE POLISHED COPPER\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t3\t4\nBrand#x\tLARGE POLISHED NICKEL\t9\t4\nBrand#x\tLARGE POLISHED STEEL\t3\t4\nBrand#x\tLARGE POLISHED STEEL\t19\t4\nBrand#x\tLARGE POLISHED STEEL\t45\t4\nBrand#x\tLARGE POLISHED TIN\t14\t4\nBrand#x\tLARGE POLISHED TIN\t23\t4\nBrand#x\tLARGE POLISHED TIN\t49\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t49\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM ANODIZED TIN\t45\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t4\nBrand#x\tMEDIUM 
BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t4\nBrand#x\tMEDIUM BRUSHED TIN\t23\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t36\t4\nBrand#x\tMEDIUM BURNISHED TIN\t49\t4\nBrand#x\tMEDIUM PLATED BRASS\t19\t4\nBrand#x\tMEDIUM PLATED BRASS\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t3\t4\nBrand#x\tMEDIUM PLATED COPPER\t9\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t23\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t36\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t9\t4\nBrand#x\tPROMO ANODIZED BRASS\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t36\t4\nBrand#x\tPROMO ANODIZED COPPER\t9\t4\nBrand#x\tPROMO ANODIZED COPPER\t14\t4\nBrand#x\tPROMO ANODIZED COPPER\t23\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t9\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t3\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED COPPER\t14\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t45\t4\nBrand#x\tPROMO BRUSHED COPPER\t49\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED NICKEL\t9\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED NICKEL\t19\t4\nBrand#x\tPROMO BRUSHED NICKEL\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t45\t4\nBrand#x\tPROMO BRUSHED NICKEL\t49\t4\nBrand#x\tPROMO BRUSHED STEEL\t36\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t19\t4\nBrand#x\tPROMO BURNISHED BRASS\t23\t4\nBrand#x\tPROMO BURNISHED BRASS\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t49\t4\nBrand#x\tPROMO BURNISHED COPPER\t9\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t23\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED COPPER\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t49\t4\nBrand#x\tPROMO BURNISHED NICKEL\t3\t4\nBrand#x\tPROMO BURNISHED NICKEL\t19\t4\nBrand#x\tPROMO BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED NICKEL\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t14\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t23\t4\nBrand#x\tPROMO BURNISHED STEEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t3\t4\nBrand#x\tPROMO BURNISHED TIN\t19\t4\nBrand#x\tPROMO PLATED BRASS\t14\t4\nBrand#x\tPROMO PLATED BRASS\t23\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t9\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t49\t4\nBrand#x\tPROMO 
PLATED STEEL\t9\t4\nBrand#x\tPROMO PLATED STEEL\t14\t4\nBrand#x\tPROMO PLATED STEEL\t23\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED TIN\t14\t4\nBrand#x\tPROMO PLATED TIN\t19\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED BRASS\t14\t4\nBrand#x\tPROMO POLISHED BRASS\t45\t4\nBrand#x\tPROMO POLISHED COPPER\t3\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED COPPER\t36\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t23\t4\nBrand#x\tPROMO POLISHED NICKEL\t45\t4\nBrand#x\tPROMO POLISHED STEEL\t9\t4\nBrand#x\tPROMO POLISHED STEEL\t14\t4\nBrand#x\tPROMO POLISHED TIN\t9\t4\nBrand#x\tPROMO POLISHED TIN\t45\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t14\t4\nBrand#x\tSMALL ANODIZED BRASS\t19\t4\nBrand#x\tSMALL ANODIZED BRASS\t23\t4\nBrand#x\tSMALL ANODIZED COPPER\t19\t4\nBrand#x\tSMALL ANODIZED COPPER\t23\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED COPPER\t49\t4\nBrand#x\tSMALL ANODIZED NICKEL\t9\t4\nBrand#x\tSMALL ANODIZED NICKEL\t14\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL ANODIZED TIN\t36\t4\nBrand#x\tSMALL BRUSHED BRASS\t9\t4\nBrand#x\tSMALL BRUSHED BRASS\t19\t4\nBrand#x\tSMALL BRUSHED COPPER\t9\t4\nBrand#x\tSMALL BRUSHED COPPER\t14\t4\nBrand#x\tSMALL BRUSHED COPPER\t19\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED COPPER\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t3\t4\nBrand#x\tSMALL BRUSHED TIN\t14\t4\nBrand#x\tSMALL BRUSHED TIN\t19\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t19\t4\nBrand#x\tSMALL BURNISHED COPPER\t45\t4\nBrand#x\tSMALL BURNISHED NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED NICKEL\t49\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED STEEL\t19\t4\nBrand#x\tSMALL BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t45\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t9\t4\nBrand#x\tSMALL PLATED BRASS\t36\t4\nBrand#x\tSMALL PLATED COPPER\t3\t4\nBrand#x\tSMALL PLATED COPPER\t9\t4\nBrand#x\tSMALL PLATED COPPER\t14\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED COPPER\t49\t4\nBrand#x\tSMALL PLATED NICKEL\t9\t4\nBrand#x\tSMALL PLATED NICKEL\t36\t4\nBrand#x\tSMALL PLATED STEEL\t14\t4\nBrand#x\tSMALL PLATED TIN\t3\t4\nBrand#x\tSMALL PLATED TIN\t9\t4\nBrand#x\tSMALL PLATED TIN\t14\t4\nBrand#x\tSMALL PLATED TIN\t19\t4\nBrand#x\tSMALL PLATED TIN\t36\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t3\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t3\t4\nBrand#x\tSMALL POLISHED COPPER\t9\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t3\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t19\t4\nBrand#x\tSMALL POLISHED NICKEL\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED STEEL\t3\t4\nBrand#x\tSMALL POLISHED STEEL\t9\t4\nBrand#x\tSMALL POLISHED STEEL\t14\t4\nBrand#x\tSMALL POLISHED 
STEEL\t23\t4\nBrand#x\tSMALL POLISHED STEEL\t36\t4\nBrand#x\tSMALL POLISHED STEEL\t49\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t9\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSMALL POLISHED TIN\t49\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t4\nBrand#x\tSTANDARD ANODIZED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t49\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t4\nBrand#x\tSTANDARD BRUSHED TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED TIN\t49\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t19\t4\nBrand#x\tSTANDARD PLATED BRASS\t14\t4\nBrand#x\tSTANDARD PLATED BRASS\t23\t4\nBrand#x\tSTANDARD PLATED BRASS\t36\t4\nBrand#x\tSTANDARD PLATED BRASS\t45\t4\nBrand#x\tSTANDARD PLATED COPPER\t3\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED COPPER\t45\t4\nBrand#x\tSTANDARD PLATED NICKEL\t23\t4\nBrand#x\tSTANDARD PLATED NICKEL\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t9\t4\nBrand#x\tSTANDARD PLATED TIN\t14\t4\nBrand#x\tSTANDARD PLATED TIN\t23\t4\nBrand#x\tSTANDARD PLATED TIN\t49\t4\nBrand#x\tSTANDARD POLISHED BRASS\t9\t4\nBrand#x\tSTANDARD POLISHED BRASS\t19\t4\nBrand#x\tSTANDARD POLISHED BRASS\t49\t4\nBrand#x\tSTANDARD POLISHED COPPER\t14\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED STEEL\t36\t4\nBrand#x\tSTANDARD POLISHED TIN\t14\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED BRASS\t3\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED BRASS\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED BRASS\t49\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED COPPER\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED STEEL\t19\t4\nBrand#x\tECONOMY ANODIZED STEEL\t36\t4\nBrand#x\tECONOMY ANODIZED STEEL\t49\t4\nBrand#x\tECONOMY ANODIZED 
TIN\t3\t4\nBrand#x\tECONOMY ANODIZED TIN\t14\t4\nBrand#x\tECONOMY ANODIZED TIN\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t3\t4\nBrand#x\tECONOMY BRUSHED BRASS\t14\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t23\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t19\t4\nBrand#x\tECONOMY BRUSHED STEEL\t23\t4\nBrand#x\tECONOMY BRUSHED STEEL\t36\t4\nBrand#x\tECONOMY BRUSHED TIN\t3\t4\nBrand#x\tECONOMY BRUSHED TIN\t36\t4\nBrand#x\tECONOMY BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t14\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED COPPER\t9\t4\nBrand#x\tECONOMY BURNISHED COPPER\t49\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t9\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t49\t4\nBrand#x\tECONOMY BURNISHED TIN\t3\t4\nBrand#x\tECONOMY BURNISHED TIN\t9\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t45\t4\nBrand#x\tECONOMY PLATED BRASS\t3\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t45\t4\nBrand#x\tECONOMY PLATED COPPER\t23\t4\nBrand#x\tECONOMY PLATED COPPER\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t14\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t19\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t3\t4\nBrand#x\tECONOMY POLISHED COPPER\t14\t4\nBrand#x\tECONOMY POLISHED COPPER\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t9\t4\nBrand#x\tECONOMY POLISHED NICKEL\t14\t4\nBrand#x\tECONOMY POLISHED NICKEL\t19\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t45\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t49\t4\nBrand#x\tLARGE ANODIZED BRASS\t3\t4\nBrand#x\tLARGE ANODIZED BRASS\t9\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t9\t4\nBrand#x\tLARGE ANODIZED COPPER\t14\t4\nBrand#x\tLARGE ANODIZED COPPER\t36\t4\nBrand#x\tLARGE ANODIZED COPPER\t45\t4\nBrand#x\tLARGE ANODIZED COPPER\t49\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t23\t4\nBrand#x\tLARGE ANODIZED TIN\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t23\t4\nBrand#x\tLARGE BRUSHED BRASS\t14\t4\nBrand#x\tLARGE BRUSHED BRASS\t23\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t14\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED COPPER\t36\t4\nBrand#x\tLARGE BRUSHED NICKEL\t14\t4\nBrand#x\tLARGE BRUSHED NICKEL\t19\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED 
STEEL\t14\t4\nBrand#x\tLARGE BRUSHED STEEL\t45\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t19\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t49\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t36\t4\nBrand#x\tLARGE BURNISHED BRASS\t49\t4\nBrand#x\tLARGE BURNISHED COPPER\t9\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t3\t4\nBrand#x\tLARGE BURNISHED NICKEL\t23\t4\nBrand#x\tLARGE BURNISHED NICKEL\t36\t4\nBrand#x\tLARGE BURNISHED STEEL\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t14\t4\nBrand#x\tLARGE BURNISHED TIN\t19\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t49\t4\nBrand#x\tLARGE PLATED BRASS\t3\t4\nBrand#x\tLARGE PLATED BRASS\t14\t4\nBrand#x\tLARGE PLATED BRASS\t23\t4\nBrand#x\tLARGE PLATED BRASS\t36\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED COPPER\t45\t4\nBrand#x\tLARGE PLATED NICKEL\t3\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t23\t4\nBrand#x\tLARGE PLATED TIN\t3\t4\nBrand#x\tLARGE PLATED TIN\t19\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t45\t4\nBrand#x\tLARGE POLISHED COPPER\t3\t4\nBrand#x\tLARGE POLISHED COPPER\t9\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t23\t4\nBrand#x\tLARGE POLISHED COPPER\t36\t4\nBrand#x\tLARGE POLISHED COPPER\t49\t4\nBrand#x\tLARGE POLISHED NICKEL\t3\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t36\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED STEEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t49\t4\nBrand#x\tLARGE POLISHED TIN\t49\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t49\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t49\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t14\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t49\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED TIN\t9\t4\nBrand#x\tMEDIUM BURNISHED TIN\t14\t4\nBrand#x\tMEDIUM BURNISHED 
TIN\t23\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t14\t4\nBrand#x\tMEDIUM PLATED BRASS\t36\t4\nBrand#x\tMEDIUM PLATED BRASS\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t3\t4\nBrand#x\tMEDIUM PLATED COPPER\t9\t4\nBrand#x\tMEDIUM PLATED COPPER\t23\t4\nBrand#x\tMEDIUM PLATED NICKEL\t9\t4\nBrand#x\tMEDIUM PLATED NICKEL\t49\t4\nBrand#x\tMEDIUM PLATED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t14\t4\nBrand#x\tMEDIUM PLATED TIN\t23\t4\nBrand#x\tMEDIUM PLATED TIN\t45\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t9\t4\nBrand#x\tPROMO ANODIZED BRASS\t36\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED COPPER\t36\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t14\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED NICKEL\t36\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t9\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t23\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED STEEL\t49\t4\nBrand#x\tPROMO ANODIZED TIN\t3\t4\nBrand#x\tPROMO ANODIZED TIN\t9\t4\nBrand#x\tPROMO ANODIZED TIN\t14\t4\nBrand#x\tPROMO ANODIZED TIN\t19\t4\nBrand#x\tPROMO ANODIZED TIN\t23\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t23\t4\nBrand#x\tPROMO BRUSHED COPPER\t45\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED NICKEL\t45\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED STEEL\t36\t4\nBrand#x\tPROMO BRUSHED STEEL\t49\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t3\t4\nBrand#x\tPROMO BURNISHED BRASS\t14\t4\nBrand#x\tPROMO BURNISHED BRASS\t49\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t19\t4\nBrand#x\tPROMO BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO PLATED BRASS\t9\t4\nBrand#x\tPROMO PLATED BRASS\t19\t4\nBrand#x\tPROMO PLATED BRASS\t23\t4\nBrand#x\tPROMO PLATED BRASS\t36\t4\nBrand#x\tPROMO PLATED BRASS\t45\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED COPPER\t23\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t45\t4\nBrand#x\tPROMO PLATED STEEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t14\t4\nBrand#x\tPROMO PLATED STEEL\t23\t4\nBrand#x\tPROMO PLATED STEEL\t36\t4\nBrand#x\tPROMO PLATED STEEL\t49\t4\nBrand#x\tPROMO PLATED TIN\t3\t4\nBrand#x\tPROMO PLATED TIN\t9\t4\nBrand#x\tPROMO PLATED TIN\t19\t4\nBrand#x\tPROMO PLATED TIN\t36\t4\nBrand#x\tPROMO PLATED TIN\t45\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED BRASS\t9\t4\nBrand#x\tPROMO POLISHED BRASS\t14\t4\nBrand#x\tPROMO POLISHED BRASS\t23\t4\nBrand#x\tPROMO POLISHED COPPER\t3\t4\nBrand#x\tPROMO POLISHED COPPER\t23\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t19\t4\nBrand#x\tPROMO POLISHED STEEL\t3\t4\nBrand#x\tPROMO POLISHED STEEL\t9\t4\nBrand#x\tPROMO POLISHED STEEL\t19\t4\nBrand#x\tPROMO POLISHED 
STEEL\t49\t4\nBrand#x\tPROMO POLISHED TIN\t3\t4\nBrand#x\tPROMO POLISHED TIN\t14\t4\nBrand#x\tPROMO POLISHED TIN\t49\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t9\t4\nBrand#x\tSMALL ANODIZED BRASS\t23\t4\nBrand#x\tSMALL ANODIZED BRASS\t45\t4\nBrand#x\tSMALL ANODIZED COPPER\t3\t4\nBrand#x\tSMALL ANODIZED COPPER\t14\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED COPPER\t49\t4\nBrand#x\tSMALL ANODIZED NICKEL\t9\t4\nBrand#x\tSMALL ANODIZED NICKEL\t23\t4\nBrand#x\tSMALL ANODIZED NICKEL\t36\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED STEEL\t49\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED NICKEL\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t9\t4\nBrand#x\tSMALL BRUSHED STEEL\t14\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED TIN\t14\t4\nBrand#x\tSMALL BRUSHED TIN\t19\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t9\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED BRASS\t36\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t14\t4\nBrand#x\tSMALL BURNISHED COPPER\t19\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED NICKEL\t14\t4\nBrand#x\tSMALL BURNISHED NICKEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t3\t4\nBrand#x\tSMALL BURNISHED TIN\t23\t4\nBrand#x\tSMALL BURNISHED TIN\t45\t4\nBrand#x\tSMALL PLATED BRASS\t3\t4\nBrand#x\tSMALL PLATED BRASS\t14\t4\nBrand#x\tSMALL PLATED COPPER\t9\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t9\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED STEEL\t3\t4\nBrand#x\tSMALL PLATED STEEL\t45\t4\nBrand#x\tSMALL PLATED STEEL\t49\t4\nBrand#x\tSMALL PLATED TIN\t9\t4\nBrand#x\tSMALL PLATED TIN\t23\t4\nBrand#x\tSMALL PLATED TIN\t45\t4\nBrand#x\tSMALL POLISHED BRASS\t3\t4\nBrand#x\tSMALL POLISHED BRASS\t19\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t23\t4\nBrand#x\tSMALL POLISHED NICKEL\t49\t4\nBrand#x\tSMALL POLISHED STEEL\t9\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t9\t4\nBrand#x\tSMALL POLISHED TIN\t19\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSMALL POLISHED TIN\t36\t4\nBrand#x\tSMALL POLISHED TIN\t45\t4\nBrand#x\tSMALL POLISHED TIN\t49\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t4\nBrand#x\tSTANDARD ANODIZED TIN\t3\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t19\t4\nBrand#x\tSTANDARD ANODIZED TIN\t45\t4\nBrand#x\tSTANDARD ANODIZED TIN\t49\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t4\nBrand#x\tSTANDARD BRUSHED 
BRASS\t49\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t36\t4\nBrand#x\tSTANDARD BRUSHED TIN\t45\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t4\nBrand#x\tSTANDARD BURNISHED TIN\t14\t4\nBrand#x\tSTANDARD BURNISHED TIN\t45\t4\nBrand#x\tSTANDARD PLATED COPPER\t3\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED COPPER\t49\t4\nBrand#x\tSTANDARD PLATED NICKEL\t19\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t45\t4\nBrand#x\tSTANDARD PLATED TIN\t3\t4\nBrand#x\tSTANDARD PLATED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t9\t4\nBrand#x\tSTANDARD POLISHED BRASS\t14\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED BRASS\t49\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED COPPER\t19\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t4\nBrand#x\tSTANDARD POLISHED STEEL\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED BRASS\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t23\t4\nBrand#x\tECONOMY ANODIZED COPPER\t49\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t4\nBrand#x\tECONOMY ANODIZED STEEL\t23\t4\nBrand#x\tECONOMY ANODIZED STEEL\t36\t4\nBrand#x\tECONOMY ANODIZED TIN\t14\t4\nBrand#x\tECONOMY ANODIZED TIN\t36\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t19\t4\nBrand#x\tECONOMY BRUSHED BRASS\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED COPPER\t9\t4\nBrand#x\tECONOMY BRUSHED COPPER\t14\t4\nBrand#x\tECONOMY BRUSHED COPPER\t23\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t4\nBrand#x\tECONOMY BRUSHED STEEL\t9\t4\nBrand#x\tECONOMY BRUSHED STEEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t19\t4\nBrand#x\tECONOMY BRUSHED STEEL\t23\t4\nBrand#x\tECONOMY BRUSHED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY BRUSHED TIN\t36\t4\nBrand#x\tECONOMY BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED 
COPPER\t14\t4\nBrand#x\tECONOMY BURNISHED COPPER\t19\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED TIN\t3\t4\nBrand#x\tECONOMY BURNISHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t49\t4\nBrand#x\tECONOMY PLATED BRASS\t3\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t23\t4\nBrand#x\tECONOMY PLATED BRASS\t49\t4\nBrand#x\tECONOMY PLATED COPPER\t36\t4\nBrand#x\tECONOMY PLATED COPPER\t45\t4\nBrand#x\tECONOMY PLATED COPPER\t49\t4\nBrand#x\tECONOMY PLATED NICKEL\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t14\t4\nBrand#x\tECONOMY PLATED STEEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t23\t4\nBrand#x\tECONOMY PLATED STEEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t3\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t23\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED BRASS\t14\t4\nBrand#x\tECONOMY POLISHED BRASS\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t3\t4\nBrand#x\tECONOMY POLISHED COPPER\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t36\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED NICKEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t36\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t36\t4\nBrand#x\tECONOMY POLISHED TIN\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED BRASS\t36\t4\nBrand#x\tLARGE ANODIZED BRASS\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t9\t4\nBrand#x\tLARGE ANODIZED COPPER\t36\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t19\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED STEEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t49\t4\nBrand#x\tLARGE ANODIZED TIN\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t36\t4\nBrand#x\tLARGE ANODIZED TIN\t45\t4\nBrand#x\tLARGE ANODIZED TIN\t49\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED BRASS\t19\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED NICKEL\t36\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t14\t4\nBrand#x\tLARGE BRUSHED STEEL\t23\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t19\t4\nBrand#x\tLARGE BRUSHED TIN\t23\t4\nBrand#x\tLARGE BURNISHED BRASS\t3\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t36\t4\nBrand#x\tLARGE BURNISHED COPPER\t3\t4\nBrand#x\tLARGE BURNISHED COPPER\t23\t4\nBrand#x\tLARGE BURNISHED COPPER\t36\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t14\t4\nBrand#x\tLARGE BURNISHED NICKEL\t19\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t49\t4\nBrand#x\tLARGE BURNISHED TIN\t3\t4\nBrand#x\tLARGE BURNISHED TIN\t14\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t49\t4\nBrand#x\tLARGE PLATED BRASS\t3\t4\nBrand#x\tLARGE PLATED BRASS\t9\t4\nBrand#x\tLARGE PLATED COPPER\t9\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t19\t4\nBrand#x\tLARGE PLATED COPPER\t45\t4\nBrand#x\tLARGE PLATED 
NICKEL\t3\t4\nBrand#x\tLARGE PLATED NICKEL\t9\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED STEEL\t14\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED TIN\t3\t4\nBrand#x\tLARGE PLATED TIN\t9\t4\nBrand#x\tLARGE PLATED TIN\t19\t4\nBrand#x\tLARGE PLATED TIN\t23\t4\nBrand#x\tLARGE PLATED TIN\t45\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t49\t4\nBrand#x\tLARGE POLISHED COPPER\t3\t4\nBrand#x\tLARGE POLISHED COPPER\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t36\t4\nBrand#x\tLARGE POLISHED COPPER\t49\t4\nBrand#x\tLARGE POLISHED NICKEL\t3\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t49\t4\nBrand#x\tLARGE POLISHED STEEL\t9\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED STEEL\t36\t4\nBrand#x\tLARGE POLISHED STEEL\t49\t4\nBrand#x\tLARGE POLISHED TIN\t3\t4\nBrand#x\tLARGE POLISHED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t49\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t4\nBrand#x\tMEDIUM ANODIZED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t23\t4\nBrand#x\tMEDIUM ANODIZED TIN\t45\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t49\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED TIN\t9\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM PLATED BRASS\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t3\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t19\t4\nBrand#x\tMEDIUM PLATED NICKEL\t36\t4\nBrand#x\tMEDIUM PLATED NICKEL\t45\t4\nBrand#x\tMEDIUM PLATED STEEL\t3\t4\nBrand#x\tMEDIUM PLATED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t3\t4\nBrand#x\tPROMO ANODIZED BRASS\t9\t4\nBrand#x\tPROMO ANODIZED BRASS\t14\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t23\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t3\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED STEEL\t9\t4\nBrand#x\tPROMO ANODIZED STEEL\t49\t4\nBrand#x\tPROMO ANODIZED TIN\t3\t4\nBrand#x\tPROMO 
ANODIZED TIN\t23\t4\nBrand#x\tPROMO ANODIZED TIN\t36\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t49\t4\nBrand#x\tPROMO BRUSHED BRASS\t3\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t3\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED NICKEL\t9\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED STEEL\t23\t4\nBrand#x\tPROMO BRUSHED STEEL\t45\t4\nBrand#x\tPROMO BRUSHED TIN\t14\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t45\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t3\t4\nBrand#x\tPROMO BURNISHED BRASS\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t3\t4\nBrand#x\tPROMO BURNISHED COPPER\t9\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t49\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t3\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t23\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO PLATED BRASS\t23\t4\nBrand#x\tPROMO PLATED BRASS\t49\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t9\t4\nBrand#x\tPROMO PLATED COPPER\t36\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t14\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED STEEL\t36\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED TIN\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t3\t4\nBrand#x\tPROMO POLISHED BRASS\t45\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED COPPER\t23\t4\nBrand#x\tPROMO POLISHED COPPER\t36\t4\nBrand#x\tPROMO POLISHED COPPER\t45\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t19\t4\nBrand#x\tPROMO POLISHED NICKEL\t23\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED NICKEL\t49\t4\nBrand#x\tPROMO POLISHED STEEL\t9\t4\nBrand#x\tPROMO POLISHED STEEL\t45\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t19\t4\nBrand#x\tSMALL ANODIZED BRASS\t23\t4\nBrand#x\tSMALL ANODIZED BRASS\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t45\t4\nBrand#x\tSMALL ANODIZED BRASS\t49\t4\nBrand#x\tSMALL ANODIZED COPPER\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t19\t4\nBrand#x\tSMALL ANODIZED COPPER\t23\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED NICKEL\t14\t4\nBrand#x\tSMALL ANODIZED NICKEL\t23\t4\nBrand#x\tSMALL ANODIZED STEEL\t45\t4\nBrand#x\tSMALL ANODIZED TIN\t9\t4\nBrand#x\tSMALL ANODIZED TIN\t14\t4\nBrand#x\tSMALL ANODIZED TIN\t23\t4\nBrand#x\tSMALL ANODIZED TIN\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t9\t4\nBrand#x\tSMALL BRUSHED COPPER\t14\t4\nBrand#x\tSMALL BRUSHED COPPER\t19\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED COPPER\t45\t4\nBrand#x\tSMALL BRUSHED NICKEL\t3\t4\nBrand#x\tSMALL BRUSHED NICKEL\t14\t4\nBrand#x\tSMALL BRUSHED NICKEL\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t9\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL 
BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t3\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t45\t4\nBrand#x\tSMALL BURNISHED BRASS\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t19\t4\nBrand#x\tSMALL BURNISHED COPPER\t23\t4\nBrand#x\tSMALL BURNISHED COPPER\t49\t4\nBrand#x\tSMALL BURNISHED NICKEL\t3\t4\nBrand#x\tSMALL BURNISHED NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t3\t4\nBrand#x\tSMALL BURNISHED TIN\t3\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t45\t4\nBrand#x\tSMALL PLATED BRASS\t3\t4\nBrand#x\tSMALL PLATED BRASS\t19\t4\nBrand#x\tSMALL PLATED COPPER\t14\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t9\t4\nBrand#x\tSMALL PLATED NICKEL\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t3\t4\nBrand#x\tSMALL PLATED STEEL\t45\t4\nBrand#x\tSMALL PLATED TIN\t3\t4\nBrand#x\tSMALL PLATED TIN\t23\t4\nBrand#x\tSMALL PLATED TIN\t36\t4\nBrand#x\tSMALL POLISHED COPPER\t9\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t45\t4\nBrand#x\tSMALL POLISHED NICKEL\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t23\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSMALL POLISHED TIN\t45\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t4\nBrand#x\tSTANDARD ANODIZED TIN\t9\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t19\t4\nBrand#x\tSTANDARD ANODIZED TIN\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t36\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t4\nBrand#x\tSTANDARD BURNISHED TIN\t23\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD BURNISHED TIN\t45\t4\nBrand#x\tSTANDARD PLATED BRASS\t23\t4\nBrand#x\tSTANDARD PLATED 
BRASS\t36\t4\nBrand#x\tSTANDARD PLATED COPPER\t3\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED NICKEL\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t45\t4\nBrand#x\tSTANDARD PLATED STEEL\t14\t4\nBrand#x\tSTANDARD PLATED STEEL\t19\t4\nBrand#x\tSTANDARD PLATED STEEL\t45\t4\nBrand#x\tSTANDARD PLATED STEEL\t49\t4\nBrand#x\tSTANDARD PLATED TIN\t14\t4\nBrand#x\tSTANDARD PLATED TIN\t23\t4\nBrand#x\tSTANDARD PLATED TIN\t36\t4\nBrand#x\tSTANDARD PLATED TIN\t45\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t36\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED COPPER\t23\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t3\t4\nBrand#x\tSTANDARD POLISHED STEEL\t9\t4\nBrand#x\tSTANDARD POLISHED STEEL\t14\t4\nBrand#x\tSTANDARD POLISHED STEEL\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t23\t4\nBrand#x\tSTANDARD POLISHED TIN\t36\t4\nBrand#x\tECONOMY ANODIZED BRASS\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t19\t4\nBrand#x\tECONOMY ANODIZED BRASS\t45\t4\nBrand#x\tECONOMY ANODIZED BRASS\t49\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t14\t4\nBrand#x\tECONOMY ANODIZED COPPER\t23\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t19\t4\nBrand#x\tECONOMY ANODIZED STEEL\t45\t4\nBrand#x\tECONOMY ANODIZED STEEL\t49\t4\nBrand#x\tECONOMY ANODIZED TIN\t3\t4\nBrand#x\tECONOMY ANODIZED TIN\t14\t4\nBrand#x\tECONOMY ANODIZED TIN\t23\t4\nBrand#x\tECONOMY ANODIZED TIN\t45\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t14\t4\nBrand#x\tECONOMY BRUSHED BRASS\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t14\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED COPPER\t49\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t4\nBrand#x\tECONOMY BRUSHED STEEL\t3\t4\nBrand#x\tECONOMY BRUSHED STEEL\t14\t4\nBrand#x\tECONOMY BRUSHED TIN\t3\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t36\t4\nBrand#x\tECONOMY BURNISHED TIN\t3\t4\nBrand#x\tECONOMY BURNISHED TIN\t14\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t9\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t23\t4\nBrand#x\tECONOMY PLATED BRASS\t45\t4\nBrand#x\tECONOMY PLATED BRASS\t49\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED COPPER\t19\t4\nBrand#x\tECONOMY PLATED NICKEL\t3\t4\nBrand#x\tECONOMY PLATED NICKEL\t23\t4\nBrand#x\tECONOMY PLATED NICKEL\t49\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t23\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED STEEL\t45\t4\nBrand#x\tECONOMY PLATED 
STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t3\t4\nBrand#x\tECONOMY PLATED TIN\t19\t4\nBrand#x\tECONOMY PLATED TIN\t23\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY PLATED TIN\t45\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED BRASS\t23\t4\nBrand#x\tECONOMY POLISHED BRASS\t45\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t14\t4\nBrand#x\tECONOMY POLISHED COPPER\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t19\t4\nBrand#x\tECONOMY POLISHED TIN\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t49\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED BRASS\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t3\t4\nBrand#x\tLARGE ANODIZED COPPER\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t45\t4\nBrand#x\tLARGE ANODIZED STEEL\t9\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t49\t4\nBrand#x\tLARGE ANODIZED TIN\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t9\t4\nBrand#x\tLARGE ANODIZED TIN\t19\t4\nBrand#x\tLARGE ANODIZED TIN\t45\t4\nBrand#x\tLARGE ANODIZED TIN\t49\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED COPPER\t49\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t14\t4\nBrand#x\tLARGE BRUSHED NICKEL\t23\t4\nBrand#x\tLARGE BRUSHED NICKEL\t36\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED STEEL\t36\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BURNISHED BRASS\t49\t4\nBrand#x\tLARGE BURNISHED COPPER\t3\t4\nBrand#x\tLARGE BURNISHED COPPER\t14\t4\nBrand#x\tLARGE BURNISHED NICKEL\t14\t4\nBrand#x\tLARGE BURNISHED NICKEL\t23\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t3\t4\nBrand#x\tLARGE BURNISHED TIN\t3\t4\nBrand#x\tLARGE BURNISHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED TIN\t19\t4\nBrand#x\tLARGE BURNISHED TIN\t23\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t45\t4\nBrand#x\tLARGE PLATED BRASS\t3\t4\nBrand#x\tLARGE PLATED BRASS\t14\t4\nBrand#x\tLARGE PLATED BRASS\t19\t4\nBrand#x\tLARGE PLATED BRASS\t23\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED COPPER\t3\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t3\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t9\t4\nBrand#x\tLARGE PLATED TIN\t19\t4\nBrand#x\tLARGE PLATED TIN\t36\t4\nBrand#x\tLARGE PLATED TIN\t45\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t9\t4\nBrand#x\tLARGE POLISHED BRASS\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t9\t4\nBrand#x\tLARGE POLISHED COPPER\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t3\t4\nBrand#x\tLARGE POLISHED NICKEL\t14\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t23\t4\nBrand#x\tLARGE POLISHED NICKEL\t36\t4\nBrand#x\tLARGE POLISHED NICKEL\t49\t4\nBrand#x\tLARGE POLISHED STEEL\t3\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t4\nBrand#x\tMEDIUM ANODIZED 
COPPER\t3\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t49\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t4\nBrand#x\tMEDIUM ANODIZED TIN\t9\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM ANODIZED TIN\t45\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t9\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BRUSHED TIN\t49\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t4\nBrand#x\tMEDIUM BURNISHED TIN\t3\t4\nBrand#x\tMEDIUM BURNISHED TIN\t19\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t36\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t9\t4\nBrand#x\tMEDIUM PLATED BRASS\t19\t4\nBrand#x\tMEDIUM PLATED BRASS\t23\t4\nBrand#x\tMEDIUM PLATED BRASS\t49\t4\nBrand#x\tMEDIUM PLATED COPPER\t9\t4\nBrand#x\tMEDIUM PLATED COPPER\t19\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t3\t4\nBrand#x\tMEDIUM PLATED NICKEL\t9\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t19\t4\nBrand#x\tMEDIUM PLATED NICKEL\t36\t4\nBrand#x\tMEDIUM PLATED NICKEL\t45\t4\nBrand#x\tMEDIUM PLATED STEEL\t3\t4\nBrand#x\tMEDIUM PLATED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t23\t4\nBrand#x\tMEDIUM PLATED STEEL\t36\t4\nBrand#x\tMEDIUM PLATED TIN\t14\t4\nBrand#x\tPROMO ANODIZED BRASS\t3\t4\nBrand#x\tPROMO ANODIZED BRASS\t9\t4\nBrand#x\tPROMO ANODIZED BRASS\t19\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t3\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED COPPER\t23\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED STEEL\t23\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t23\t4\nBrand#x\tPROMO ANODIZED TIN\t36\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t3\t4\nBrand#x\tPROMO BRUSHED BRASS\t23\t4\nBrand#x\tPROMO BRUSHED BRASS\t45\t4\nBrand#x\tPROMO BRUSHED COPPER\t14\t4\nBrand#x\tPROMO BRUSHED COPPER\t49\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED NICKEL\t45\t4\nBrand#x\tPROMO BRUSHED STEEL\t3\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t9\t4\nBrand#x\tPROMO BRUSHED TIN\t14\t4\nBrand#x\tPROMO BRUSHED TIN\t45\t4\nBrand#x\tPROMO BURNISHED BRASS\t3\t4\nBrand#x\tPROMO BURNISHED 
BRASS\t19\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t23\t4\nBrand#x\tPROMO BURNISHED COPPER\t49\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t14\t4\nBrand#x\tPROMO BURNISHED STEEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t3\t4\nBrand#x\tPROMO BURNISHED TIN\t23\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO PLATED BRASS\t9\t4\nBrand#x\tPROMO PLATED BRASS\t45\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t49\t4\nBrand#x\tPROMO PLATED STEEL\t9\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED STEEL\t49\t4\nBrand#x\tPROMO PLATED TIN\t14\t4\nBrand#x\tPROMO PLATED TIN\t36\t4\nBrand#x\tPROMO PLATED TIN\t45\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED BRASS\t19\t4\nBrand#x\tPROMO POLISHED BRASS\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t36\t4\nBrand#x\tPROMO POLISHED BRASS\t45\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t23\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t45\t4\nBrand#x\tPROMO POLISHED STEEL\t23\t4\nBrand#x\tPROMO POLISHED STEEL\t36\t4\nBrand#x\tPROMO POLISHED STEEL\t45\t4\nBrand#x\tPROMO POLISHED TIN\t14\t4\nBrand#x\tPROMO POLISHED TIN\t19\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t36\t4\nBrand#x\tSMALL ANODIZED COPPER\t3\t4\nBrand#x\tSMALL ANODIZED COPPER\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t14\t4\nBrand#x\tSMALL ANODIZED COPPER\t19\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED COPPER\t49\t4\nBrand#x\tSMALL ANODIZED NICKEL\t45\t4\nBrand#x\tSMALL ANODIZED NICKEL\t49\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL ANODIZED TIN\t9\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED NICKEL\t3\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t3\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t3\t4\nBrand#x\tSMALL BRUSHED TIN\t14\t4\nBrand#x\tSMALL BRUSHED TIN\t49\t4\nBrand#x\tSMALL BURNISHED BRASS\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t45\t4\nBrand#x\tSMALL BURNISHED BRASS\t49\t4\nBrand#x\tSMALL BURNISHED COPPER\t23\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED COPPER\t45\t4\nBrand#x\tSMALL BURNISHED NICKEL\t14\t4\nBrand#x\tSMALL BURNISHED NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED NICKEL\t49\t4\nBrand#x\tSMALL BURNISHED STEEL\t3\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED STEEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t45\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t3\t4\nBrand#x\tSMALL PLATED BRASS\t9\t4\nBrand#x\tSMALL PLATED BRASS\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t36\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED TIN\t3\t4\nBrand#x\tSMALL PLATED TIN\t23\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t14\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL POLISHED 
COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED NICKEL\t3\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t49\t4\nBrand#x\tSMALL POLISHED STEEL\t14\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t4\nBrand#x\tSTANDARD ANODIZED TIN\t9\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t23\t4\nBrand#x\tSTANDARD ANODIZED TIN\t49\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t4\nBrand#x\tSTANDARD BRUSHED TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED TIN\t9\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED TIN\t36\t4\nBrand#x\tSTANDARD BRUSHED TIN\t49\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t4\nBrand#x\tSTANDARD BURNISHED TIN\t19\t4\nBrand#x\tSTANDARD BURNISHED TIN\t23\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD PLATED BRASS\t19\t4\nBrand#x\tSTANDARD PLATED BRASS\t49\t4\nBrand#x\tSTANDARD PLATED COPPER\t3\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED COPPER\t23\t4\nBrand#x\tSTANDARD PLATED NICKEL\t19\t4\nBrand#x\tSTANDARD PLATED NICKEL\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t45\t4\nBrand#x\tSTANDARD PLATED STEEL\t9\t4\nBrand#x\tSTANDARD PLATED STEEL\t45\t4\nBrand#x\tSTANDARD PLATED TIN\t19\t4\nBrand#x\tSTANDARD PLATED TIN\t49\t4\nBrand#x\tSTANDARD POLISHED BRASS\t36\t4\nBrand#x\tSTANDARD POLISHED BRASS\t49\t4\nBrand#x\tSTANDARD POLISHED COPPER\t14\t4\nBrand#x\tSTANDARD POLISHED COPPER\t19\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t9\t4\nBrand#x\tSTANDARD POLISHED STEEL\t14\t4\nBrand#x\tSTANDARD POLISHED STEEL\t19\t4\nBrand#x\tSTANDARD POLISHED STEEL\t36\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED 
TIN\t45\t4\nBrand#x\tECONOMY ANODIZED BRASS\t3\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED BRASS\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t23\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED STEEL\t23\t4\nBrand#x\tECONOMY ANODIZED TIN\t14\t4\nBrand#x\tECONOMY ANODIZED TIN\t19\t4\nBrand#x\tECONOMY ANODIZED TIN\t23\t4\nBrand#x\tECONOMY ANODIZED TIN\t45\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t3\t4\nBrand#x\tECONOMY BRUSHED COPPER\t23\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED COPPER\t49\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t4\nBrand#x\tECONOMY BRUSHED STEEL\t9\t4\nBrand#x\tECONOMY BRUSHED STEEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t19\t4\nBrand#x\tECONOMY BRUSHED STEEL\t23\t4\nBrand#x\tECONOMY BRUSHED STEEL\t36\t4\nBrand#x\tECONOMY BRUSHED TIN\t3\t4\nBrand#x\tECONOMY BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BRUSHED TIN\t49\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t9\t4\nBrand#x\tECONOMY BURNISHED COPPER\t14\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t4\nBrand#x\tECONOMY BURNISHED STEEL\t3\t4\nBrand#x\tECONOMY BURNISHED STEEL\t19\t4\nBrand#x\tECONOMY BURNISHED STEEL\t49\t4\nBrand#x\tECONOMY BURNISHED TIN\t23\t4\nBrand#x\tECONOMY BURNISHED TIN\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t45\t4\nBrand#x\tECONOMY PLATED COPPER\t9\t4\nBrand#x\tECONOMY PLATED COPPER\t19\t4\nBrand#x\tECONOMY PLATED COPPER\t23\t4\nBrand#x\tECONOMY PLATED NICKEL\t9\t4\nBrand#x\tECONOMY PLATED NICKEL\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t9\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t45\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t19\t4\nBrand#x\tECONOMY POLISHED BRASS\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t49\t4\nBrand#x\tECONOMY POLISHED NICKEL\t9\t4\nBrand#x\tECONOMY POLISHED NICKEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t3\t4\nBrand#x\tECONOMY POLISHED STEEL\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t36\t4\nBrand#x\tLARGE ANODIZED BRASS\t36\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t9\t4\nBrand#x\tLARGE ANODIZED COPPER\t14\t4\nBrand#x\tLARGE ANODIZED COPPER\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t45\t4\nBrand#x\tLARGE ANODIZED COPPER\t49\t4\nBrand#x\tLARGE ANODIZED NICKEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t19\t4\nBrand#x\tLARGE ANODIZED TIN\t9\t4\nBrand#x\tLARGE ANODIZED TIN\t23\t4\nBrand#x\tLARGE ANODIZED TIN\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED BRASS\t14\t4\nBrand#x\tLARGE BRUSHED 
BRASS\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t14\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED NICKEL\t19\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t19\t4\nBrand#x\tLARGE BRUSHED STEEL\t36\t4\nBrand#x\tLARGE BURNISHED BRASS\t3\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t23\t4\nBrand#x\tLARGE BURNISHED BRASS\t49\t4\nBrand#x\tLARGE BURNISHED COPPER\t36\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t19\t4\nBrand#x\tLARGE BURNISHED NICKEL\t23\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t49\t4\nBrand#x\tLARGE BURNISHED STEEL\t9\t4\nBrand#x\tLARGE BURNISHED STEEL\t23\t4\nBrand#x\tLARGE BURNISHED TIN\t19\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE PLATED BRASS\t3\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED NICKEL\t3\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t36\t4\nBrand#x\tLARGE PLATED NICKEL\t49\t4\nBrand#x\tLARGE PLATED STEEL\t9\t4\nBrand#x\tLARGE PLATED STEEL\t23\t4\nBrand#x\tLARGE PLATED TIN\t19\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t9\t4\nBrand#x\tLARGE POLISHED BRASS\t45\t4\nBrand#x\tLARGE POLISHED COPPER\t3\t4\nBrand#x\tLARGE POLISHED COPPER\t36\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t9\t4\nBrand#x\tLARGE POLISHED NICKEL\t14\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED STEEL\t19\t4\nBrand#x\tLARGE POLISHED STEEL\t36\t4\nBrand#x\tLARGE POLISHED STEEL\t49\t4\nBrand#x\tLARGE POLISHED TIN\t9\t4\nBrand#x\tLARGE POLISHED TIN\t14\t4\nBrand#x\tLARGE POLISHED TIN\t23\t4\nBrand#x\tLARGE POLISHED TIN\t49\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t49\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t4\nBrand#x\tMEDIUM ANODIZED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t14\t4\nBrand#x\tMEDIUM BRUSHED TIN\t49\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t4\nBrand#x\tMEDIUM BURNISHED 
STEEL\t49\t4\nBrand#x\tMEDIUM BURNISHED TIN\t3\t4\nBrand#x\tMEDIUM BURNISHED TIN\t19\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t36\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t19\t4\nBrand#x\tMEDIUM PLATED BRASS\t23\t4\nBrand#x\tMEDIUM PLATED BRASS\t49\t4\nBrand#x\tMEDIUM PLATED COPPER\t3\t4\nBrand#x\tMEDIUM PLATED COPPER\t19\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED NICKEL\t3\t4\nBrand#x\tMEDIUM PLATED NICKEL\t9\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t45\t4\nBrand#x\tMEDIUM PLATED NICKEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t19\t4\nBrand#x\tMEDIUM PLATED TIN\t45\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t3\t4\nBrand#x\tPROMO ANODIZED BRASS\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t45\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t3\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t36\t4\nBrand#x\tPROMO ANODIZED NICKEL\t45\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t23\t4\nBrand#x\tPROMO ANODIZED TIN\t3\t4\nBrand#x\tPROMO ANODIZED TIN\t9\t4\nBrand#x\tPROMO ANODIZED TIN\t36\t4\nBrand#x\tPROMO ANODIZED TIN\t49\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t36\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t14\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t23\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED NICKEL\t23\t4\nBrand#x\tPROMO BRUSHED STEEL\t9\t4\nBrand#x\tPROMO BRUSHED STEEL\t49\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t3\t4\nBrand#x\tPROMO BURNISHED BRASS\t14\t4\nBrand#x\tPROMO BURNISHED BRASS\t36\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t23\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED COPPER\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t14\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t49\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED TIN\t3\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t14\t4\nBrand#x\tPROMO BURNISHED TIN\t19\t4\nBrand#x\tPROMO BURNISHED TIN\t23\t4\nBrand#x\tPROMO PLATED BRASS\t9\t4\nBrand#x\tPROMO PLATED BRASS\t45\t4\nBrand#x\tPROMO PLATED COPPER\t36\t4\nBrand#x\tPROMO PLATED COPPER\t45\t4\nBrand#x\tPROMO PLATED NICKEL\t9\t4\nBrand#x\tPROMO PLATED NICKEL\t36\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED TIN\t9\t4\nBrand#x\tPROMO PLATED TIN\t19\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED BRASS\t36\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t23\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t19\t4\nBrand#x\tPROMO POLISHED NICKEL\t49\t4\nBrand#x\tPROMO POLISHED TIN\t3\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t9\t4\nBrand#x\tSMALL ANODIZED BRASS\t14\t4\nBrand#x\tSMALL ANODIZED BRASS\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t49\t4\nBrand#x\tSMALL ANODIZED COPPER\t3\t4\nBrand#x\tSMALL ANODIZED 
COPPER\t14\t4\nBrand#x\tSMALL ANODIZED COPPER\t23\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED STEEL\t9\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL ANODIZED TIN\t45\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t9\t4\nBrand#x\tSMALL BRUSHED BRASS\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t49\t4\nBrand#x\tSMALL BRUSHED COPPER\t19\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED NICKEL\t3\t4\nBrand#x\tSMALL BRUSHED NICKEL\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t23\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BRUSHED TIN\t49\t4\nBrand#x\tSMALL BURNISHED BRASS\t3\t4\nBrand#x\tSMALL BURNISHED BRASS\t9\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED BRASS\t45\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t23\t4\nBrand#x\tSMALL BURNISHED NICKEL\t3\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t3\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED STEEL\t19\t4\nBrand#x\tSMALL BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t45\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t19\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t45\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t19\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED BRASS\t49\t4\nBrand#x\tSMALL PLATED COPPER\t19\t4\nBrand#x\tSMALL PLATED COPPER\t49\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t14\t4\nBrand#x\tSMALL PLATED STEEL\t36\t4\nBrand#x\tSMALL PLATED TIN\t3\t4\nBrand#x\tSMALL PLATED TIN\t9\t4\nBrand#x\tSMALL PLATED TIN\t14\t4\nBrand#x\tSMALL PLATED TIN\t23\t4\nBrand#x\tSMALL POLISHED BRASS\t3\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t23\t4\nBrand#x\tSMALL POLISHED BRASS\t45\t4\nBrand#x\tSMALL POLISHED COPPER\t3\t4\nBrand#x\tSMALL POLISHED COPPER\t9\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t45\t4\nBrand#x\tSMALL POLISHED NICKEL\t3\t4\nBrand#x\tSMALL POLISHED NICKEL\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED STEEL\t14\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED STEEL\t49\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t9\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSMALL POLISHED TIN\t36\t4\nBrand#x\tSMALL POLISHED TIN\t45\t4\nBrand#x\tSMALL POLISHED TIN\t49\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t19\t4\nBrand#x\tSTANDARD ANODIZED TIN\t23\t4\nBrand#x\tSTANDARD ANODIZED TIN\t45\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t4\nBrand#x\tSTANDARD BRUSHED 
COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t4\nBrand#x\tSTANDARD BRUSHED TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED TIN\t9\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED TIN\t49\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t4\nBrand#x\tSTANDARD BURNISHED TIN\t3\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD PLATED BRASS\t3\t4\nBrand#x\tSTANDARD PLATED BRASS\t9\t4\nBrand#x\tSTANDARD PLATED BRASS\t45\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED NICKEL\t9\t4\nBrand#x\tSTANDARD PLATED NICKEL\t14\t4\nBrand#x\tSTANDARD PLATED NICKEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t9\t4\nBrand#x\tSTANDARD PLATED STEEL\t19\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t45\t4\nBrand#x\tSTANDARD PLATED TIN\t19\t4\nBrand#x\tSTANDARD PLATED TIN\t23\t4\nBrand#x\tSTANDARD PLATED TIN\t36\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED BRASS\t36\t4\nBrand#x\tSTANDARD POLISHED COPPER\t3\t4\nBrand#x\tSTANDARD POLISHED COPPER\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t9\t4\nBrand#x\tSTANDARD POLISHED STEEL\t23\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED STEEL\t49\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t23\t4\nBrand#x\tSTANDARD POLISHED TIN\t45\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED BRASS\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED BRASS\t45\t4\nBrand#x\tECONOMY ANODIZED BRASS\t49\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t3\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED STEEL\t19\t4\nBrand#x\tECONOMY ANODIZED STEEL\t36\t4\nBrand#x\tECONOMY ANODIZED STEEL\t49\t4\nBrand#x\tECONOMY ANODIZED TIN\t3\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t19\t4\nBrand#x\tECONOMY BRUSHED BRASS\t3\t4\nBrand#x\tECONOMY BRUSHED BRASS\t36\t4\nBrand#x\tECONOMY BRUSHED COPPER\t14\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED COPPER\t49\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t4\nBrand#x\tECONOMY BRUSHED STEEL\t9\t4\nBrand#x\tECONOMY BRUSHED STEEL\t14\t4\nBrand#x\tECONOMY BRUSHED 
STEEL\t23\t4\nBrand#x\tECONOMY BRUSHED STEEL\t36\t4\nBrand#x\tECONOMY BRUSHED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t14\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t19\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED STEEL\t3\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED TIN\t3\t4\nBrand#x\tECONOMY BURNISHED TIN\t14\t4\nBrand#x\tECONOMY BURNISHED TIN\t36\t4\nBrand#x\tECONOMY BURNISHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t49\t4\nBrand#x\tECONOMY PLATED BRASS\t9\t4\nBrand#x\tECONOMY PLATED BRASS\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t23\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED COPPER\t23\t4\nBrand#x\tECONOMY PLATED COPPER\t36\t4\nBrand#x\tECONOMY PLATED COPPER\t45\t4\nBrand#x\tECONOMY PLATED COPPER\t49\t4\nBrand#x\tECONOMY PLATED NICKEL\t19\t4\nBrand#x\tECONOMY PLATED NICKEL\t23\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t3\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t23\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY PLATED TIN\t45\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED BRASS\t14\t4\nBrand#x\tECONOMY POLISHED BRASS\t19\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t3\t4\nBrand#x\tECONOMY POLISHED COPPER\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t3\t4\nBrand#x\tECONOMY POLISHED NICKEL\t14\t4\nBrand#x\tECONOMY POLISHED NICKEL\t19\t4\nBrand#x\tECONOMY POLISHED NICKEL\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t3\t4\nBrand#x\tECONOMY POLISHED TIN\t3\t4\nBrand#x\tECONOMY POLISHED TIN\t23\t4\nBrand#x\tLARGE ANODIZED BRASS\t3\t4\nBrand#x\tLARGE ANODIZED BRASS\t9\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED BRASS\t36\t4\nBrand#x\tLARGE ANODIZED BRASS\t45\t4\nBrand#x\tLARGE ANODIZED COPPER\t14\t4\nBrand#x\tLARGE ANODIZED COPPER\t45\t4\nBrand#x\tLARGE ANODIZED COPPER\t49\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t36\t4\nBrand#x\tLARGE ANODIZED NICKEL\t49\t4\nBrand#x\tLARGE ANODIZED STEEL\t3\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED STEEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t49\t4\nBrand#x\tLARGE ANODIZED TIN\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t19\t4\nBrand#x\tLARGE BRUSHED NICKEL\t36\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED STEEL\t45\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t3\t4\nBrand#x\tLARGE BRUSHED TIN\t9\t4\nBrand#x\tLARGE BRUSHED TIN\t19\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t49\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t45\t4\nBrand#x\tLARGE BURNISHED BRASS\t49\t4\nBrand#x\tLARGE BURNISHED COPPER\t3\t4\nBrand#x\tLARGE BURNISHED COPPER\t14\t4\nBrand#x\tLARGE BURNISHED COPPER\t36\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED 
COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t14\t4\nBrand#x\tLARGE BURNISHED STEEL\t3\t4\nBrand#x\tLARGE BURNISHED STEEL\t19\t4\nBrand#x\tLARGE BURNISHED STEEL\t23\t4\nBrand#x\tLARGE BURNISHED STEEL\t45\t4\nBrand#x\tLARGE BURNISHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED TIN\t14\t4\nBrand#x\tLARGE BURNISHED TIN\t49\t4\nBrand#x\tLARGE PLATED BRASS\t9\t4\nBrand#x\tLARGE PLATED BRASS\t14\t4\nBrand#x\tLARGE PLATED BRASS\t36\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED COPPER\t9\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t49\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED NICKEL\t49\t4\nBrand#x\tLARGE PLATED STEEL\t3\t4\nBrand#x\tLARGE PLATED STEEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t9\t4\nBrand#x\tLARGE PLATED TIN\t19\t4\nBrand#x\tLARGE POLISHED BRASS\t9\t4\nBrand#x\tLARGE POLISHED BRASS\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t9\t4\nBrand#x\tLARGE POLISHED NICKEL\t36\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED STEEL\t19\t4\nBrand#x\tLARGE POLISHED STEEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t36\t4\nBrand#x\tLARGE POLISHED TIN\t3\t4\nBrand#x\tLARGE POLISHED TIN\t19\t4\nBrand#x\tLARGE POLISHED TIN\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t4\nBrand#x\tMEDIUM ANODIZED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED TIN\t9\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM ANODIZED TIN\t49\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t9\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED TIN\t9\t4\nBrand#x\tMEDIUM BRUSHED TIN\t14\t4\nBrand#x\tMEDIUM BRUSHED TIN\t19\t4\nBrand#x\tMEDIUM BRUSHED TIN\t23\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t45\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t19\t4\nBrand#x\tMEDIUM PLATED BRASS\t45\t4\nBrand#x\tMEDIUM PLATED BRASS\t49\t4\nBrand#x\tMEDIUM PLATED COPPER\t9\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t23\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t36\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t3\t4\nBrand#x\tMEDIUM PLATED 
TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t14\t4\nBrand#x\tPROMO ANODIZED BRASS\t14\t4\nBrand#x\tPROMO ANODIZED COPPER\t14\t4\nBrand#x\tPROMO ANODIZED COPPER\t36\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t3\t4\nBrand#x\tPROMO ANODIZED NICKEL\t14\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t49\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t23\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t3\t4\nBrand#x\tPROMO ANODIZED TIN\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t3\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t14\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED NICKEL\t23\t4\nBrand#x\tPROMO BRUSHED STEEL\t9\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED STEEL\t23\t4\nBrand#x\tPROMO BRUSHED STEEL\t49\t4\nBrand#x\tPROMO BRUSHED TIN\t14\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t45\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t9\t4\nBrand#x\tPROMO BURNISHED BRASS\t19\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t3\t4\nBrand#x\tPROMO BURNISHED COPPER\t9\t4\nBrand#x\tPROMO BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED NICKEL\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t49\t4\nBrand#x\tPROMO BURNISHED STEEL\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t19\t4\nBrand#x\tPROMO BURNISHED TIN\t23\t4\nBrand#x\tPROMO BURNISHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t45\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t49\t4\nBrand#x\tPROMO PLATED COPPER\t9\t4\nBrand#x\tPROMO PLATED COPPER\t23\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t14\t4\nBrand#x\tPROMO PLATED NICKEL\t36\t4\nBrand#x\tPROMO PLATED STEEL\t14\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED STEEL\t49\t4\nBrand#x\tPROMO PLATED TIN\t9\t4\nBrand#x\tPROMO PLATED TIN\t14\t4\nBrand#x\tPROMO PLATED TIN\t45\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED BRASS\t19\t4\nBrand#x\tPROMO POLISHED BRASS\t23\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED COPPER\t14\t4\nBrand#x\tPROMO POLISHED COPPER\t36\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED STEEL\t3\t4\nBrand#x\tPROMO POLISHED STEEL\t9\t4\nBrand#x\tPROMO POLISHED STEEL\t23\t4\nBrand#x\tPROMO POLISHED STEEL\t45\t4\nBrand#x\tPROMO POLISHED TIN\t9\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tPROMO POLISHED TIN\t45\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t9\t4\nBrand#x\tSMALL ANODIZED BRASS\t23\t4\nBrand#x\tSMALL ANODIZED BRASS\t45\t4\nBrand#x\tSMALL ANODIZED COPPER\t14\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED NICKEL\t9\t4\nBrand#x\tSMALL ANODIZED NICKEL\t14\t4\nBrand#x\tSMALL ANODIZED NICKEL\t19\t4\nBrand#x\tSMALL ANODIZED NICKEL\t49\t4\nBrand#x\tSMALL ANODIZED STEEL\t3\t4\nBrand#x\tSMALL ANODIZED STEEL\t9\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED STEEL\t49\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL ANODIZED TIN\t14\t4\nBrand#x\tSMALL ANODIZED TIN\t36\t4\nBrand#x\tSMALL BRUSHED 
BRASS\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t49\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t14\t4\nBrand#x\tSMALL BRUSHED COPPER\t19\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED NICKEL\t14\t4\nBrand#x\tSMALL BRUSHED NICKEL\t19\t4\nBrand#x\tSMALL BRUSHED NICKEL\t36\t4\nBrand#x\tSMALL BRUSHED STEEL\t3\t4\nBrand#x\tSMALL BRUSHED STEEL\t9\t4\nBrand#x\tSMALL BRUSHED STEEL\t14\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t36\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t3\t4\nBrand#x\tSMALL BRUSHED TIN\t9\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t45\t4\nBrand#x\tSMALL BURNISHED BRASS\t49\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t23\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED NICKEL\t14\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED NICKEL\t36\t4\nBrand#x\tSMALL BURNISHED NICKEL\t45\t4\nBrand#x\tSMALL BURNISHED STEEL\t3\t4\nBrand#x\tSMALL BURNISHED STEEL\t19\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL PLATED BRASS\t3\t4\nBrand#x\tSMALL PLATED BRASS\t19\t4\nBrand#x\tSMALL PLATED BRASS\t36\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED COPPER\t9\t4\nBrand#x\tSMALL PLATED COPPER\t19\t4\nBrand#x\tSMALL PLATED COPPER\t23\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t23\t4\nBrand#x\tSMALL PLATED NICKEL\t36\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t9\t4\nBrand#x\tSMALL PLATED TIN\t3\t4\nBrand#x\tSMALL PLATED TIN\t9\t4\nBrand#x\tSMALL PLATED TIN\t14\t4\nBrand#x\tSMALL PLATED TIN\t19\t4\nBrand#x\tSMALL PLATED TIN\t36\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t23\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t36\t4\nBrand#x\tSMALL POLISHED STEEL\t3\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED STEEL\t23\t4\nBrand#x\tSMALL POLISHED STEEL\t36\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t9\t4\nBrand#x\tSMALL POLISHED TIN\t36\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t4\nBrand#x\tSTANDARD ANODIZED TIN\t19\t4\nBrand#x\tSTANDARD ANODIZED TIN\t23\t4\nBrand#x\tSTANDARD ANODIZED TIN\t36\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t49\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t4\nBrand#x\tSTANDARD BURNISHED 
BRASS\t9\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD BURNISHED TIN\t49\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t45\t4\nBrand#x\tSTANDARD PLATED COPPER\t49\t4\nBrand#x\tSTANDARD PLATED NICKEL\t3\t4\nBrand#x\tSTANDARD PLATED NICKEL\t14\t4\nBrand#x\tSTANDARD PLATED NICKEL\t45\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED TIN\t9\t4\nBrand#x\tSTANDARD PLATED TIN\t14\t4\nBrand#x\tSTANDARD PLATED TIN\t19\t4\nBrand#x\tSTANDARD PLATED TIN\t45\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED COPPER\t3\t4\nBrand#x\tSTANDARD POLISHED COPPER\t14\t4\nBrand#x\tSTANDARD POLISHED COPPER\t23\t4\nBrand#x\tSTANDARD POLISHED COPPER\t36\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t3\t4\nBrand#x\tSTANDARD POLISHED STEEL\t23\t4\nBrand#x\tSTANDARD POLISHED TIN\t14\t4\nBrand#x\tSTANDARD POLISHED TIN\t23\t4\nBrand#x\tSTANDARD POLISHED TIN\t36\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED BRASS\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t19\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED BRASS\t45\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t14\t4\nBrand#x\tECONOMY ANODIZED COPPER\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED COPPER\t45\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t4\nBrand#x\tECONOMY ANODIZED STEEL\t3\t4\nBrand#x\tECONOMY ANODIZED STEEL\t19\t4\nBrand#x\tECONOMY ANODIZED TIN\t3\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t4\nBrand#x\tECONOMY BRUSHED STEEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t36\t4\nBrand#x\tECONOMY BRUSHED STEEL\t45\t4\nBrand#x\tECONOMY BRUSHED TIN\t3\t4\nBrand#x\tECONOMY BRUSHED TIN\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t19\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t19\t4\nBrand#x\tECONOMY BURNISHED STEEL\t36\t4\nBrand#x\tECONOMY BURNISHED TIN\t14\t4\nBrand#x\tECONOMY BURNISHED TIN\t23\t4\nBrand#x\tECONOMY PLATED BRASS\t3\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED COPPER\t3\t4\nBrand#x\tECONOMY PLATED COPPER\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t36\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED 
STEEL\t23\t4\nBrand#x\tECONOMY PLATED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t14\t4\nBrand#x\tECONOMY POLISHED BRASS\t23\t4\nBrand#x\tECONOMY POLISHED BRASS\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t9\t4\nBrand#x\tECONOMY POLISHED NICKEL\t14\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t9\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t19\t4\nBrand#x\tECONOMY POLISHED TIN\t23\t4\nBrand#x\tECONOMY POLISHED TIN\t36\t4\nBrand#x\tLARGE ANODIZED BRASS\t3\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t14\t4\nBrand#x\tLARGE ANODIZED COPPER\t23\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t45\t4\nBrand#x\tLARGE ANODIZED NICKEL\t49\t4\nBrand#x\tLARGE ANODIZED STEEL\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t9\t4\nBrand#x\tLARGE ANODIZED TIN\t23\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED BRASS\t19\t4\nBrand#x\tLARGE BRUSHED BRASS\t23\t4\nBrand#x\tLARGE BRUSHED BRASS\t49\t4\nBrand#x\tLARGE BRUSHED COPPER\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED COPPER\t49\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED NICKEL\t19\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t23\t4\nBrand#x\tLARGE BRUSHED TIN\t36\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BURNISHED BRASS\t3\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t14\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t36\t4\nBrand#x\tLARGE BURNISHED BRASS\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t23\t4\nBrand#x\tLARGE BURNISHED STEEL\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t3\t4\nBrand#x\tLARGE BURNISHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t45\t4\nBrand#x\tLARGE PLATED BRASS\t19\t4\nBrand#x\tLARGE PLATED BRASS\t23\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED COPPER\t3\t4\nBrand#x\tLARGE PLATED COPPER\t36\t4\nBrand#x\tLARGE PLATED COPPER\t49\t4\nBrand#x\tLARGE PLATED NICKEL\t3\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t36\t4\nBrand#x\tLARGE PLATED TIN\t9\t4\nBrand#x\tLARGE PLATED TIN\t14\t4\nBrand#x\tLARGE PLATED TIN\t19\t4\nBrand#x\tLARGE PLATED TIN\t23\t4\nBrand#x\tLARGE PLATED TIN\t36\t4\nBrand#x\tLARGE PLATED TIN\t45\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t14\t4\nBrand#x\tLARGE POLISHED BRASS\t23\t4\nBrand#x\tLARGE POLISHED BRASS\t36\t4\nBrand#x\tLARGE POLISHED BRASS\t45\t4\nBrand#x\tLARGE POLISHED BRASS\t49\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t14\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t23\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t9\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED STEEL\t19\t4\nBrand#x\tLARGE POLISHED STEEL\t36\t4\nBrand#x\tLARGE POLISHED TIN\t19\t4\nBrand#x\tLARGE POLISHED TIN\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t4\nBrand#x\tMEDIUM ANODIZED 
COPPER\t9\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t49\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED TIN\t9\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM ANODIZED TIN\t49\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t9\t4\nBrand#x\tMEDIUM BRUSHED TIN\t19\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t4\nBrand#x\tMEDIUM BURNISHED TIN\t3\t4\nBrand#x\tMEDIUM BURNISHED TIN\t19\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t49\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t23\t4\nBrand#x\tMEDIUM PLATED BRASS\t36\t4\nBrand#x\tMEDIUM PLATED BRASS\t49\t4\nBrand#x\tMEDIUM PLATED COPPER\t3\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t45\t4\nBrand#x\tMEDIUM PLATED STEEL\t3\t4\nBrand#x\tMEDIUM PLATED STEEL\t9\t4\nBrand#x\tMEDIUM PLATED STEEL\t45\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t14\t4\nBrand#x\tMEDIUM PLATED TIN\t36\t4\nBrand#x\tPROMO ANODIZED BRASS\t14\t4\nBrand#x\tPROMO ANODIZED BRASS\t36\t4\nBrand#x\tPROMO ANODIZED BRASS\t45\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t9\t4\nBrand#x\tPROMO ANODIZED COPPER\t14\t4\nBrand#x\tPROMO ANODIZED NICKEL\t9\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t49\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED STEEL\t49\t4\nBrand#x\tPROMO ANODIZED TIN\t36\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t3\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t49\t4\nBrand#x\tPROMO BRUSHED COPPER\t3\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t49\t4\nBrand#x\tPROMO BRUSHED NICKEL\t9\t4\nBrand#x\tPROMO BRUSHED NICKEL\t36\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED STEEL\t23\t4\nBrand#x\tPROMO BRUSHED STEEL\t36\t4\nBrand#x\tPROMO BRUSHED STEEL\t45\t4\nBrand#x\tPROMO BRUSHED STEEL\t49\t4\nBrand#x\tPROMO BRUSHED TIN\t14\t4\nBrand#x\tPROMO BRUSHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t3\t4\nBrand#x\tPROMO BURNISHED BRASS\t19\t4\nBrand#x\tPROMO BURNISHED BRASS\t23\t4\nBrand#x\tPROMO BURNISHED BRASS\t36\t4\nBrand#x\tPROMO BURNISHED COPPER\t45\t4\nBrand#x\tPROMO BURNISHED 
NICKEL\t3\t4\nBrand#x\tPROMO BURNISHED NICKEL\t14\t4\nBrand#x\tPROMO BURNISHED NICKEL\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t36\t4\nBrand#x\tPROMO BURNISHED STEEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t19\t4\nBrand#x\tPROMO BURNISHED TIN\t23\t4\nBrand#x\tPROMO PLATED BRASS\t9\t4\nBrand#x\tPROMO PLATED BRASS\t36\t4\nBrand#x\tPROMO PLATED BRASS\t45\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t9\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t14\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t49\t4\nBrand#x\tPROMO PLATED STEEL\t36\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED BRASS\t3\t4\nBrand#x\tPROMO POLISHED BRASS\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t36\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t3\t4\nBrand#x\tPROMO POLISHED COPPER\t14\t4\nBrand#x\tPROMO POLISHED COPPER\t19\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t49\t4\nBrand#x\tPROMO POLISHED STEEL\t9\t4\nBrand#x\tPROMO POLISHED STEEL\t36\t4\nBrand#x\tPROMO POLISHED STEEL\t45\t4\nBrand#x\tPROMO POLISHED TIN\t3\t4\nBrand#x\tPROMO POLISHED TIN\t9\t4\nBrand#x\tPROMO POLISHED TIN\t19\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t3\t4\nBrand#x\tSMALL ANODIZED COPPER\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t23\t4\nBrand#x\tSMALL ANODIZED COPPER\t49\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED NICKEL\t9\t4\nBrand#x\tSMALL ANODIZED NICKEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t9\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t14\t4\nBrand#x\tSMALL ANODIZED TIN\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t23\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t14\t4\nBrand#x\tSMALL BRUSHED BRASS\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t14\t4\nBrand#x\tSMALL BRUSHED COPPER\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED NICKEL\t19\t4\nBrand#x\tSMALL BRUSHED NICKEL\t36\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t9\t4\nBrand#x\tSMALL BRUSHED STEEL\t14\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED TIN\t9\t4\nBrand#x\tSMALL BRUSHED TIN\t19\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BRUSHED TIN\t49\t4\nBrand#x\tSMALL BURNISHED BRASS\t36\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t19\t4\nBrand#x\tSMALL BURNISHED COPPER\t49\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t3\t4\nBrand#x\tSMALL BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t19\t4\nBrand#x\tSMALL BURNISHED TIN\t23\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t14\t4\nBrand#x\tSMALL PLATED BRASS\t19\t4\nBrand#x\tSMALL PLATED BRASS\t23\t4\nBrand#x\tSMALL PLATED BRASS\t36\t4\nBrand#x\tSMALL PLATED COPPER\t9\t4\nBrand#x\tSMALL PLATED COPPER\t19\t4\nBrand#x\tSMALL PLATED COPPER\t23\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t3\t4\nBrand#x\tSMALL PLATED 
STEEL\t45\t4\nBrand#x\tSMALL PLATED TIN\t36\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t14\t4\nBrand#x\tSMALL POLISHED BRASS\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t36\t4\nBrand#x\tSMALL POLISHED COPPER\t45\t4\nBrand#x\tSMALL POLISHED STEEL\t3\t4\nBrand#x\tSMALL POLISHED STEEL\t9\t4\nBrand#x\tSMALL POLISHED STEEL\t14\t4\nBrand#x\tSMALL POLISHED STEEL\t45\t4\nBrand#x\tSMALL POLISHED STEEL\t49\t4\nBrand#x\tSMALL POLISHED TIN\t9\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSMALL POLISHED TIN\t36\t4\nBrand#x\tSMALL POLISHED TIN\t45\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t4\nBrand#x\tSTANDARD ANODIZED TIN\t36\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t49\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED TIN\t9\t4\nBrand#x\tSTANDARD BRUSHED TIN\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED TIN\t3\t4\nBrand#x\tSTANDARD BURNISHED TIN\t23\t4\nBrand#x\tSTANDARD PLATED BRASS\t14\t4\nBrand#x\tSTANDARD PLATED BRASS\t45\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED NICKEL\t9\t4\nBrand#x\tSTANDARD PLATED NICKEL\t45\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED TIN\t49\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t14\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED COPPER\t3\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t3\t4\nBrand#x\tSTANDARD POLISHED STEEL\t9\t4\nBrand#x\tSTANDARD POLISHED STEEL\t19\t4\nBrand#x\tSTANDARD POLISHED STEEL\t36\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED STEEL\t49\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t14\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED BRASS\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t36\t4\nBrand#x\tECONOMY ANODIZED BRASS\t45\t4\nBrand#x\tECONOMY 
ANODIZED BRASS\t49\t4\nBrand#x\tECONOMY ANODIZED COPPER\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t45\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t9\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED COPPER\t49\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t4\nBrand#x\tECONOMY BRUSHED STEEL\t3\t4\nBrand#x\tECONOMY BRUSHED STEEL\t19\t4\nBrand#x\tECONOMY BRUSHED STEEL\t45\t4\nBrand#x\tECONOMY BRUSHED TIN\t3\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t45\t4\nBrand#x\tECONOMY BURNISHED COPPER\t9\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED TIN\t3\t4\nBrand#x\tECONOMY BURNISHED TIN\t9\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t45\t4\nBrand#x\tECONOMY PLATED BRASS\t3\t4\nBrand#x\tECONOMY PLATED BRASS\t9\t4\nBrand#x\tECONOMY PLATED BRASS\t23\t4\nBrand#x\tECONOMY PLATED BRASS\t45\t4\nBrand#x\tECONOMY PLATED COPPER\t3\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED COPPER\t23\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t49\t4\nBrand#x\tECONOMY PLATED STEEL\t3\t4\nBrand#x\tECONOMY PLATED STEEL\t23\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t19\t4\nBrand#x\tECONOMY PLATED TIN\t23\t4\nBrand#x\tECONOMY PLATED TIN\t45\t4\nBrand#x\tECONOMY POLISHED BRASS\t19\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t14\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED NICKEL\t9\t4\nBrand#x\tECONOMY POLISHED NICKEL\t19\t4\nBrand#x\tECONOMY POLISHED NICKEL\t45\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED STEEL\t49\t4\nBrand#x\tECONOMY POLISHED TIN\t3\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t3\t4\nBrand#x\tLARGE ANODIZED COPPER\t9\t4\nBrand#x\tLARGE ANODIZED COPPER\t36\t4\nBrand#x\tLARGE ANODIZED COPPER\t49\t4\nBrand#x\tLARGE ANODIZED NICKEL\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t19\t4\nBrand#x\tLARGE ANODIZED NICKEL\t36\t4\nBrand#x\tLARGE ANODIZED NICKEL\t45\t4\nBrand#x\tLARGE ANODIZED STEEL\t9\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE ANODIZED TIN\t14\t4\nBrand#x\tLARGE ANODIZED TIN\t36\t4\nBrand#x\tLARGE ANODIZED TIN\t45\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED BRASS\t23\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED COPPER\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED 
NICKEL\t19\t4\nBrand#x\tLARGE BRUSHED NICKEL\t23\t4\nBrand#x\tLARGE BRUSHED STEEL\t14\t4\nBrand#x\tLARGE BRUSHED STEEL\t36\t4\nBrand#x\tLARGE BRUSHED TIN\t3\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t49\t4\nBrand#x\tLARGE BURNISHED COPPER\t9\t4\nBrand#x\tLARGE BURNISHED COPPER\t14\t4\nBrand#x\tLARGE BURNISHED COPPER\t19\t4\nBrand#x\tLARGE BURNISHED COPPER\t23\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t3\t4\nBrand#x\tLARGE BURNISHED NICKEL\t9\t4\nBrand#x\tLARGE BURNISHED NICKEL\t23\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t9\t4\nBrand#x\tLARGE BURNISHED STEEL\t49\t4\nBrand#x\tLARGE BURNISHED TIN\t3\t4\nBrand#x\tLARGE BURNISHED TIN\t19\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE PLATED BRASS\t3\t4\nBrand#x\tLARGE PLATED BRASS\t14\t4\nBrand#x\tLARGE PLATED BRASS\t36\t4\nBrand#x\tLARGE PLATED BRASS\t45\t4\nBrand#x\tLARGE PLATED COPPER\t36\t4\nBrand#x\tLARGE PLATED NICKEL\t3\t4\nBrand#x\tLARGE PLATED NICKEL\t9\t4\nBrand#x\tLARGE PLATED NICKEL\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t36\t4\nBrand#x\tLARGE PLATED NICKEL\t45\t4\nBrand#x\tLARGE PLATED STEEL\t9\t4\nBrand#x\tLARGE PLATED STEEL\t14\t4\nBrand#x\tLARGE PLATED STEEL\t23\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t36\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t9\t4\nBrand#x\tLARGE POLISHED BRASS\t19\t4\nBrand#x\tLARGE POLISHED BRASS\t23\t4\nBrand#x\tLARGE POLISHED BRASS\t49\t4\nBrand#x\tLARGE POLISHED COPPER\t3\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t36\t4\nBrand#x\tLARGE POLISHED COPPER\t49\t4\nBrand#x\tLARGE POLISHED NICKEL\t3\t4\nBrand#x\tLARGE POLISHED NICKEL\t14\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED TIN\t3\t4\nBrand#x\tLARGE POLISHED TIN\t9\t4\nBrand#x\tLARGE POLISHED TIN\t19\t4\nBrand#x\tLARGE POLISHED TIN\t36\t4\nBrand#x\tLARGE POLISHED TIN\t45\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t49\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED TIN\t9\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t23\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM ANODIZED TIN\t45\t4\nBrand#x\tMEDIUM ANODIZED TIN\t49\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t9\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t4\nBrand#x\tMEDIUM BRUSHED TIN\t19\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED 
COPPER\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t4\nBrand#x\tMEDIUM BURNISHED TIN\t3\t4\nBrand#x\tMEDIUM BURNISHED TIN\t19\t4\nBrand#x\tMEDIUM BURNISHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED TIN\t49\t4\nBrand#x\tMEDIUM PLATED BRASS\t9\t4\nBrand#x\tMEDIUM PLATED BRASS\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t23\t4\nBrand#x\tMEDIUM PLATED NICKEL\t49\t4\nBrand#x\tMEDIUM PLATED STEEL\t3\t4\nBrand#x\tMEDIUM PLATED STEEL\t23\t4\nBrand#x\tMEDIUM PLATED TIN\t3\t4\nBrand#x\tMEDIUM PLATED TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t14\t4\nBrand#x\tMEDIUM PLATED TIN\t19\t4\nBrand#x\tMEDIUM PLATED TIN\t23\t4\nBrand#x\tMEDIUM PLATED TIN\t36\t4\nBrand#x\tMEDIUM PLATED TIN\t45\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t9\t4\nBrand#x\tPROMO ANODIZED BRASS\t14\t4\nBrand#x\tPROMO ANODIZED BRASS\t19\t4\nBrand#x\tPROMO ANODIZED BRASS\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t36\t4\nBrand#x\tPROMO ANODIZED BRASS\t45\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t14\t4\nBrand#x\tPROMO ANODIZED COPPER\t23\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t9\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED NICKEL\t49\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t49\t4\nBrand#x\tPROMO ANODIZED TIN\t36\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t3\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t36\t4\nBrand#x\tPROMO BRUSHED BRASS\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t49\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t23\t4\nBrand#x\tPROMO BRUSHED STEEL\t9\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t36\t4\nBrand#x\tPROMO BRUSHED STEEL\t45\t4\nBrand#x\tPROMO BRUSHED STEEL\t49\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t45\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t3\t4\nBrand#x\tPROMO BURNISHED BRASS\t9\t4\nBrand#x\tPROMO BURNISHED BRASS\t19\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t3\t4\nBrand#x\tPROMO BURNISHED COPPER\t9\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t23\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t3\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t36\t4\nBrand#x\tPROMO PLATED BRASS\t14\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED COPPER\t23\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t45\t4\nBrand#x\tPROMO PLATED NICKEL\t49\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED TIN\t3\t4\nBrand#x\tPROMO PLATED TIN\t9\t4\nBrand#x\tPROMO PLATED TIN\t45\t4\nBrand#x\tPROMO POLISHED BRASS\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t36\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED 
NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t19\t4\nBrand#x\tPROMO POLISHED NICKEL\t23\t4\nBrand#x\tPROMO POLISHED STEEL\t3\t4\nBrand#x\tPROMO POLISHED STEEL\t19\t4\nBrand#x\tPROMO POLISHED STEEL\t45\t4\nBrand#x\tPROMO POLISHED STEEL\t49\t4\nBrand#x\tPROMO POLISHED TIN\t19\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tPROMO POLISHED TIN\t49\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t9\t4\nBrand#x\tSMALL ANODIZED BRASS\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t45\t4\nBrand#x\tSMALL ANODIZED BRASS\t49\t4\nBrand#x\tSMALL ANODIZED COPPER\t14\t4\nBrand#x\tSMALL ANODIZED COPPER\t23\t4\nBrand#x\tSMALL ANODIZED COPPER\t49\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED NICKEL\t14\t4\nBrand#x\tSMALL ANODIZED NICKEL\t36\t4\nBrand#x\tSMALL ANODIZED STEEL\t14\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL ANODIZED TIN\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED BRASS\t14\t4\nBrand#x\tSMALL BRUSHED BRASS\t49\t4\nBrand#x\tSMALL BRUSHED COPPER\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED NICKEL\t3\t4\nBrand#x\tSMALL BRUSHED NICKEL\t9\t4\nBrand#x\tSMALL BRUSHED NICKEL\t14\t4\nBrand#x\tSMALL BRUSHED NICKEL\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t3\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t45\t4\nBrand#x\tSMALL BURNISHED BRASS\t9\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED BRASS\t45\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t14\t4\nBrand#x\tSMALL BURNISHED NICKEL\t49\t4\nBrand#x\tSMALL BURNISHED STEEL\t3\t4\nBrand#x\tSMALL BURNISHED STEEL\t9\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED STEEL\t19\t4\nBrand#x\tSMALL BURNISHED STEEL\t45\t4\nBrand#x\tSMALL BURNISHED TIN\t3\t4\nBrand#x\tSMALL BURNISHED TIN\t19\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t49\t4\nBrand#x\tSMALL PLATED COPPER\t9\t4\nBrand#x\tSMALL PLATED COPPER\t14\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED COPPER\t49\t4\nBrand#x\tSMALL PLATED NICKEL\t9\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED NICKEL\t23\t4\nBrand#x\tSMALL PLATED NICKEL\t36\t4\nBrand#x\tSMALL PLATED NICKEL\t45\t4\nBrand#x\tSMALL PLATED STEEL\t9\t4\nBrand#x\tSMALL PLATED STEEL\t45\t4\nBrand#x\tSMALL PLATED TIN\t19\t4\nBrand#x\tSMALL PLATED TIN\t36\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t19\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL POLISHED BRASS\t45\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t9\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t49\t4\nBrand#x\tSMALL POLISHED NICKEL\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t23\t4\nBrand#x\tSMALL POLISHED STEEL\t23\t4\nBrand#x\tSMALL POLISHED STEEL\t36\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t4\nBrand#x\tSTANDARD ANODIZED 
COPPER\t45\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t4\nBrand#x\tSTANDARD ANODIZED TIN\t9\t4\nBrand#x\tSTANDARD ANODIZED TIN\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t4\nBrand#x\tSTANDARD BRUSHED TIN\t9\t4\nBrand#x\tSTANDARD BRUSHED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED TIN\t45\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t19\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD BURNISHED TIN\t49\t4\nBrand#x\tSTANDARD PLATED BRASS\t3\t4\nBrand#x\tSTANDARD PLATED BRASS\t19\t4\nBrand#x\tSTANDARD PLATED BRASS\t45\t4\nBrand#x\tSTANDARD PLATED BRASS\t49\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED COPPER\t45\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t36\t4\nBrand#x\tSTANDARD PLATED TIN\t3\t4\nBrand#x\tSTANDARD PLATED TIN\t14\t4\nBrand#x\tSTANDARD PLATED TIN\t19\t4\nBrand#x\tSTANDARD PLATED TIN\t23\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t19\t4\nBrand#x\tSTANDARD POLISHED BRASS\t36\t4\nBrand#x\tSTANDARD POLISHED COPPER\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t4\nBrand#x\tSTANDARD POLISHED STEEL\t36\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED STEEL\t49\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t14\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t36\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED BRASS\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED BRASS\t45\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED COPPER\t45\t4\nBrand#x\tECONOMY ANODIZED COPPER\t49\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t23\t4\nBrand#x\tECONOMY ANODIZED TIN\t3\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t14\t4\nBrand#x\tECONOMY ANODIZED TIN\t19\t4\nBrand#x\tECONOMY ANODIZED 
TIN\t23\t4\nBrand#x\tECONOMY ANODIZED TIN\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t23\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED COPPER\t49\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t4\nBrand#x\tECONOMY BRUSHED STEEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t23\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED STEEL\t19\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t3\t4\nBrand#x\tECONOMY BURNISHED TIN\t9\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t49\t4\nBrand#x\tECONOMY PLATED BRASS\t9\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t45\t4\nBrand#x\tECONOMY PLATED BRASS\t49\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED COPPER\t23\t4\nBrand#x\tECONOMY PLATED COPPER\t36\t4\nBrand#x\tECONOMY PLATED COPPER\t49\t4\nBrand#x\tECONOMY PLATED NICKEL\t3\t4\nBrand#x\tECONOMY PLATED NICKEL\t9\t4\nBrand#x\tECONOMY PLATED NICKEL\t23\t4\nBrand#x\tECONOMY PLATED NICKEL\t49\t4\nBrand#x\tECONOMY PLATED STEEL\t3\t4\nBrand#x\tECONOMY PLATED STEEL\t14\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED STEEL\t45\t4\nBrand#x\tECONOMY PLATED TIN\t9\t4\nBrand#x\tECONOMY PLATED TIN\t23\t4\nBrand#x\tECONOMY PLATED TIN\t45\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t14\t4\nBrand#x\tECONOMY POLISHED BRASS\t23\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t49\t4\nBrand#x\tECONOMY POLISHED NICKEL\t19\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t3\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED STEEL\t49\t4\nBrand#x\tECONOMY POLISHED TIN\t3\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tLARGE ANODIZED BRASS\t9\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t36\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t49\t4\nBrand#x\tLARGE ANODIZED NICKEL\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t19\t4\nBrand#x\tLARGE ANODIZED NICKEL\t23\t4\nBrand#x\tLARGE ANODIZED NICKEL\t49\t4\nBrand#x\tLARGE ANODIZED STEEL\t19\t4\nBrand#x\tLARGE ANODIZED STEEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t49\t4\nBrand#x\tLARGE ANODIZED TIN\t14\t4\nBrand#x\tLARGE ANODIZED TIN\t49\t4\nBrand#x\tLARGE BRUSHED BRASS\t14\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t45\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t9\t4\nBrand#x\tLARGE BRUSHED COPPER\t19\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED 
COPPER\t45\t4\nBrand#x\tLARGE BRUSHED COPPER\t49\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t23\t4\nBrand#x\tLARGE BRUSHED NICKEL\t45\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED STEEL\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t19\t4\nBrand#x\tLARGE BRUSHED TIN\t23\t4\nBrand#x\tLARGE BRUSHED TIN\t36\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED COPPER\t9\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t3\t4\nBrand#x\tLARGE BURNISHED STEEL\t3\t4\nBrand#x\tLARGE BURNISHED STEEL\t9\t4\nBrand#x\tLARGE BURNISHED STEEL\t19\t4\nBrand#x\tLARGE BURNISHED STEEL\t49\t4\nBrand#x\tLARGE BURNISHED TIN\t19\t4\nBrand#x\tLARGE BURNISHED TIN\t45\t4\nBrand#x\tLARGE BURNISHED TIN\t49\t4\nBrand#x\tLARGE PLATED BRASS\t14\t4\nBrand#x\tLARGE PLATED BRASS\t45\t4\nBrand#x\tLARGE PLATED COPPER\t19\t4\nBrand#x\tLARGE PLATED COPPER\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t3\t4\nBrand#x\tLARGE PLATED NICKEL\t9\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t23\t4\nBrand#x\tLARGE PLATED STEEL\t14\t4\nBrand#x\tLARGE PLATED STEEL\t36\t4\nBrand#x\tLARGE PLATED TIN\t14\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t45\t4\nBrand#x\tLARGE POLISHED BRASS\t49\t4\nBrand#x\tLARGE POLISHED COPPER\t3\t4\nBrand#x\tLARGE POLISHED COPPER\t9\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t49\t4\nBrand#x\tLARGE POLISHED NICKEL\t3\t4\nBrand#x\tLARGE POLISHED NICKEL\t9\t4\nBrand#x\tLARGE POLISHED NICKEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t3\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED TIN\t9\t4\nBrand#x\tLARGE POLISHED TIN\t19\t4\nBrand#x\tLARGE POLISHED TIN\t36\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t23\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t4\nBrand#x\tMEDIUM BURNISHED 
STEEL\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t36\t4\nBrand#x\tMEDIUM BURNISHED TIN\t45\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t9\t4\nBrand#x\tMEDIUM PLATED BRASS\t19\t4\nBrand#x\tMEDIUM PLATED BRASS\t23\t4\nBrand#x\tMEDIUM PLATED BRASS\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t3\t4\nBrand#x\tMEDIUM PLATED COPPER\t19\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED NICKEL\t3\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t19\t4\nBrand#x\tMEDIUM PLATED TIN\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t3\t4\nBrand#x\tPROMO ANODIZED BRASS\t19\t4\nBrand#x\tPROMO ANODIZED COPPER\t9\t4\nBrand#x\tPROMO ANODIZED COPPER\t14\t4\nBrand#x\tPROMO ANODIZED NICKEL\t9\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t36\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t3\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t3\t4\nBrand#x\tPROMO BRUSHED COPPER\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t23\t4\nBrand#x\tPROMO BRUSHED NICKEL\t49\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED STEEL\t36\t4\nBrand#x\tPROMO BRUSHED TIN\t3\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t9\t4\nBrand#x\tPROMO BURNISHED COPPER\t3\t4\nBrand#x\tPROMO BURNISHED COPPER\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t14\t4\nBrand#x\tPROMO BURNISHED NICKEL\t19\t4\nBrand#x\tPROMO BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t23\t4\nBrand#x\tPROMO BURNISHED TIN\t45\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t36\t4\nBrand#x\tPROMO PLATED BRASS\t45\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t14\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED COPPER\t45\t4\nBrand#x\tPROMO PLATED NICKEL\t14\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t23\t4\nBrand#x\tPROMO PLATED NICKEL\t45\t4\nBrand#x\tPROMO PLATED STEEL\t14\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED TIN\t3\t4\nBrand#x\tPROMO PLATED TIN\t19\t4\nBrand#x\tPROMO PLATED TIN\t23\t4\nBrand#x\tPROMO PLATED TIN\t36\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED BRASS\t9\t4\nBrand#x\tPROMO POLISHED BRASS\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t45\t4\nBrand#x\tPROMO POLISHED COPPER\t3\t4\nBrand#x\tPROMO POLISHED COPPER\t45\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED STEEL\t19\t4\nBrand#x\tPROMO POLISHED TIN\t3\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tPROMO POLISHED TIN\t45\t4\nBrand#x\tPROMO POLISHED TIN\t49\t4\nBrand#x\tSMALL ANODIZED BRASS\t45\t4\nBrand#x\tSMALL ANODIZED COPPER\t3\t4\nBrand#x\tSMALL ANODIZED COPPER\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t14\t4\nBrand#x\tSMALL ANODIZED COPPER\t19\t4\nBrand#x\tSMALL ANODIZED COPPER\t49\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED NICKEL\t9\t4\nBrand#x\tSMALL ANODIZED NICKEL\t23\t4\nBrand#x\tSMALL ANODIZED NICKEL\t45\t4\nBrand#x\tSMALL ANODIZED STEEL\t3\t4\nBrand#x\tSMALL ANODIZED STEEL\t9\t4\nBrand#x\tSMALL ANODIZED STEEL\t14\t4\nBrand#x\tSMALL ANODIZED 
STEEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t45\t4\nBrand#x\tSMALL ANODIZED STEEL\t49\t4\nBrand#x\tSMALL ANODIZED TIN\t9\t4\nBrand#x\tSMALL ANODIZED TIN\t19\t4\nBrand#x\tSMALL BRUSHED BRASS\t9\t4\nBrand#x\tSMALL BRUSHED BRASS\t14\t4\nBrand#x\tSMALL BRUSHED BRASS\t19\t4\nBrand#x\tSMALL BRUSHED BRASS\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t9\t4\nBrand#x\tSMALL BRUSHED COPPER\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED NICKEL\t19\t4\nBrand#x\tSMALL BRUSHED NICKEL\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t36\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t36\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t9\t4\nBrand#x\tSMALL BRUSHED TIN\t14\t4\nBrand#x\tSMALL BRUSHED TIN\t19\t4\nBrand#x\tSMALL BURNISHED BRASS\t14\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED BRASS\t45\t4\nBrand#x\tSMALL BURNISHED BRASS\t49\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t14\t4\nBrand#x\tSMALL BURNISHED COPPER\t19\t4\nBrand#x\tSMALL BURNISHED COPPER\t23\t4\nBrand#x\tSMALL BURNISHED NICKEL\t14\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED STEEL\t9\t4\nBrand#x\tSMALL BURNISHED STEEL\t19\t4\nBrand#x\tSMALL BURNISHED STEEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t23\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t3\t4\nBrand#x\tSMALL PLATED BRASS\t23\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED COPPER\t3\t4\nBrand#x\tSMALL PLATED COPPER\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED NICKEL\t23\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t3\t4\nBrand#x\tSMALL PLATED STEEL\t14\t4\nBrand#x\tSMALL PLATED TIN\t9\t4\nBrand#x\tSMALL PLATED TIN\t14\t4\nBrand#x\tSMALL PLATED TIN\t19\t4\nBrand#x\tSMALL PLATED TIN\t36\t4\nBrand#x\tSMALL PLATED TIN\t45\t4\nBrand#x\tSMALL POLISHED BRASS\t14\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t49\t4\nBrand#x\tSMALL POLISHED STEEL\t9\t4\nBrand#x\tSMALL POLISHED STEEL\t49\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t4\nBrand#x\tSTANDARD ANODIZED TIN\t36\t4\nBrand#x\tSTANDARD ANODIZED TIN\t45\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t4\nBrand#x\tSTANDARD BRUSHED 
TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED TIN\t9\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t45\t4\nBrand#x\tSTANDARD PLATED BRASS\t3\t4\nBrand#x\tSTANDARD PLATED BRASS\t36\t4\nBrand#x\tSTANDARD PLATED COPPER\t3\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED NICKEL\t9\t4\nBrand#x\tSTANDARD PLATED NICKEL\t19\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED TIN\t3\t4\nBrand#x\tSTANDARD PLATED TIN\t9\t4\nBrand#x\tSTANDARD PLATED TIN\t14\t4\nBrand#x\tSTANDARD PLATED TIN\t19\t4\nBrand#x\tSTANDARD PLATED TIN\t45\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t14\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED BRASS\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED COPPER\t19\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t49\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t23\t4\nBrand#x\tECONOMY ANODIZED BRASS\t3\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t14\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED COPPER\t45\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t19\t4\nBrand#x\tECONOMY ANODIZED STEEL\t23\t4\nBrand#x\tECONOMY ANODIZED STEEL\t49\t4\nBrand#x\tECONOMY ANODIZED TIN\t14\t4\nBrand#x\tECONOMY ANODIZED TIN\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t14\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t23\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED COPPER\t49\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t4\nBrand#x\tECONOMY BRUSHED STEEL\t45\t4\nBrand#x\tECONOMY BRUSHED TIN\t3\t4\nBrand#x\tECONOMY BRUSHED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t4\nBrand#x\tECONOMY BURNISHED 
NICKEL\t9\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t9\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t36\t4\nBrand#x\tECONOMY BURNISHED STEEL\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t3\t4\nBrand#x\tECONOMY PLATED BRASS\t9\t4\nBrand#x\tECONOMY PLATED BRASS\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t49\t4\nBrand#x\tECONOMY PLATED COPPER\t9\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED COPPER\t23\t4\nBrand#x\tECONOMY PLATED COPPER\t36\t4\nBrand#x\tECONOMY PLATED COPPER\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t3\t4\nBrand#x\tECONOMY PLATED NICKEL\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t19\t4\nBrand#x\tECONOMY PLATED NICKEL\t23\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t49\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t14\t4\nBrand#x\tECONOMY PLATED STEEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED TIN\t9\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t19\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t23\t4\nBrand#x\tECONOMY POLISHED COPPER\t36\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED NICKEL\t19\t4\nBrand#x\tECONOMY POLISHED NICKEL\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t36\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t19\t4\nBrand#x\tECONOMY POLISHED TIN\t23\t4\nBrand#x\tECONOMY POLISHED TIN\t49\t4\nBrand#x\tLARGE ANODIZED BRASS\t9\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED STEEL\t19\t4\nBrand#x\tLARGE ANODIZED STEEL\t23\t4\nBrand#x\tLARGE ANODIZED TIN\t23\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED BRASS\t14\t4\nBrand#x\tLARGE BRUSHED BRASS\t19\t4\nBrand#x\tLARGE BRUSHED COPPER\t14\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED COPPER\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t49\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED STEEL\t19\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t3\t4\nBrand#x\tLARGE BRUSHED TIN\t9\t4\nBrand#x\tLARGE BRUSHED TIN\t19\t4\nBrand#x\tLARGE BRUSHED TIN\t23\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t14\t4\nBrand#x\tLARGE BURNISHED COPPER\t3\t4\nBrand#x\tLARGE BURNISHED COPPER\t14\t4\nBrand#x\tLARGE BURNISHED COPPER\t19\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t3\t4\nBrand#x\tLARGE BURNISHED NICKEL\t23\t4\nBrand#x\tLARGE BURNISHED STEEL\t14\t4\nBrand#x\tLARGE BURNISHED STEEL\t19\t4\nBrand#x\tLARGE BURNISHED STEEL\t45\t4\nBrand#x\tLARGE BURNISHED TIN\t3\t4\nBrand#x\tLARGE BURNISHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED 
TIN\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t45\t4\nBrand#x\tLARGE PLATED BRASS\t19\t4\nBrand#x\tLARGE PLATED BRASS\t36\t4\nBrand#x\tLARGE PLATED COPPER\t9\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t36\t4\nBrand#x\tLARGE PLATED COPPER\t49\t4\nBrand#x\tLARGE PLATED NICKEL\t3\t4\nBrand#x\tLARGE PLATED NICKEL\t9\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED STEEL\t3\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t3\t4\nBrand#x\tLARGE PLATED TIN\t19\t4\nBrand#x\tLARGE PLATED TIN\t45\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t14\t4\nBrand#x\tLARGE POLISHED BRASS\t19\t4\nBrand#x\tLARGE POLISHED BRASS\t36\t4\nBrand#x\tLARGE POLISHED BRASS\t49\t4\nBrand#x\tLARGE POLISHED COPPER\t36\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t9\t4\nBrand#x\tLARGE POLISHED NICKEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t3\t4\nBrand#x\tLARGE POLISHED STEEL\t9\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED STEEL\t19\t4\nBrand#x\tLARGE POLISHED TIN\t36\t4\nBrand#x\tLARGE POLISHED TIN\t45\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM ANODIZED TIN\t45\t4\nBrand#x\tMEDIUM ANODIZED TIN\t49\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t4\nBrand#x\tMEDIUM BRUSHED TIN\t19\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t4\nBrand#x\tMEDIUM BURNISHED TIN\t9\t4\nBrand#x\tMEDIUM BURNISHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED TIN\t49\t4\nBrand#x\tMEDIUM PLATED BRASS\t14\t4\nBrand#x\tMEDIUM PLATED BRASS\t36\t4\nBrand#x\tMEDIUM PLATED BRASS\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t19\t4\nBrand#x\tMEDIUM PLATED NICKEL\t45\t4\nBrand#x\tMEDIUM PLATED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t3\t4\nBrand#x\tMEDIUM PLATED TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t14\t4\nBrand#x\tMEDIUM PLATED TIN\t36\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t19\t4\nBrand#x\tPROMO ANODIZED BRASS\t45\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED COPPER\t36\t4\nBrand#x\tPROMO ANODIZED COPPER\t45\t4\nBrand#x\tPROMO ANODIZED NICKEL\t9\t4\nBrand#x\tPROMO ANODIZED NICKEL\t49\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED 
STEEL\t23\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t9\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t49\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t45\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t36\t4\nBrand#x\tPROMO BRUSHED COPPER\t49\t4\nBrand#x\tPROMO BRUSHED NICKEL\t19\t4\nBrand#x\tPROMO BRUSHED NICKEL\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t45\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED STEEL\t36\t4\nBrand#x\tPROMO BRUSHED TIN\t14\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t23\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t23\t4\nBrand#x\tPROMO BURNISHED COPPER\t49\t4\nBrand#x\tPROMO BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED NICKEL\t36\t4\nBrand#x\tPROMO BURNISHED STEEL\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t3\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t14\t4\nBrand#x\tPROMO BURNISHED TIN\t19\t4\nBrand#x\tPROMO BURNISHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t45\t4\nBrand#x\tPROMO PLATED BRASS\t9\t4\nBrand#x\tPROMO PLATED BRASS\t14\t4\nBrand#x\tPROMO PLATED BRASS\t19\t4\nBrand#x\tPROMO PLATED BRASS\t49\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t9\t4\nBrand#x\tPROMO PLATED COPPER\t23\t4\nBrand#x\tPROMO PLATED COPPER\t45\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t9\t4\nBrand#x\tPROMO PLATED NICKEL\t14\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t23\t4\nBrand#x\tPROMO PLATED NICKEL\t49\t4\nBrand#x\tPROMO PLATED STEEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t9\t4\nBrand#x\tPROMO PLATED STEEL\t14\t4\nBrand#x\tPROMO PLATED TIN\t9\t4\nBrand#x\tPROMO PLATED TIN\t36\t4\nBrand#x\tPROMO POLISHED BRASS\t14\t4\nBrand#x\tPROMO POLISHED BRASS\t36\t4\nBrand#x\tPROMO POLISHED COPPER\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED STEEL\t19\t4\nBrand#x\tPROMO POLISHED STEEL\t45\t4\nBrand#x\tPROMO POLISHED STEEL\t49\t4\nBrand#x\tPROMO POLISHED TIN\t3\t4\nBrand#x\tPROMO POLISHED TIN\t14\t4\nBrand#x\tPROMO POLISHED TIN\t19\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t14\t4\nBrand#x\tSMALL ANODIZED BRASS\t23\t4\nBrand#x\tSMALL ANODIZED BRASS\t45\t4\nBrand#x\tSMALL ANODIZED BRASS\t49\t4\nBrand#x\tSMALL ANODIZED COPPER\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t19\t4\nBrand#x\tSMALL ANODIZED COPPER\t23\t4\nBrand#x\tSMALL ANODIZED NICKEL\t19\t4\nBrand#x\tSMALL ANODIZED NICKEL\t36\t4\nBrand#x\tSMALL ANODIZED NICKEL\t45\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t23\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED STEEL\t49\t4\nBrand#x\tSMALL ANODIZED TIN\t9\t4\nBrand#x\tSMALL ANODIZED TIN\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t45\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED BRASS\t9\t4\nBrand#x\tSMALL BRUSHED BRASS\t14\t4\nBrand#x\tSMALL BRUSHED BRASS\t19\t4\nBrand#x\tSMALL BRUSHED BRASS\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED NICKEL\t9\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL 
BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED BRASS\t45\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t14\t4\nBrand#x\tSMALL BURNISHED COPPER\t23\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED COPPER\t45\t4\nBrand#x\tSMALL BURNISHED COPPER\t49\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED NICKEL\t36\t4\nBrand#x\tSMALL BURNISHED NICKEL\t45\t4\nBrand#x\tSMALL BURNISHED TIN\t3\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t19\t4\nBrand#x\tSMALL PLATED BRASS\t9\t4\nBrand#x\tSMALL PLATED BRASS\t19\t4\nBrand#x\tSMALL PLATED BRASS\t36\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED COPPER\t3\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t9\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t3\t4\nBrand#x\tSMALL PLATED STEEL\t49\t4\nBrand#x\tSMALL PLATED TIN\t14\t4\nBrand#x\tSMALL PLATED TIN\t19\t4\nBrand#x\tSMALL PLATED TIN\t23\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL POLISHED BRASS\t45\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t45\t4\nBrand#x\tSMALL POLISHED COPPER\t49\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t23\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED NICKEL\t49\t4\nBrand#x\tSMALL POLISHED STEEL\t36\t4\nBrand#x\tSMALL POLISHED STEEL\t45\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t49\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t4\nBrand#x\tSTANDARD BRUSHED TIN\t49\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD 
BURNISHED NICKEL\t36\t4\nBrand#x\tSTANDARD BURNISHED TIN\t14\t4\nBrand#x\tSTANDARD BURNISHED TIN\t23\t4\nBrand#x\tSTANDARD BURNISHED TIN\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t49\t4\nBrand#x\tSTANDARD PLATED BRASS\t14\t4\nBrand#x\tSTANDARD PLATED BRASS\t23\t4\nBrand#x\tSTANDARD PLATED BRASS\t45\t4\nBrand#x\tSTANDARD PLATED BRASS\t49\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED COPPER\t45\t4\nBrand#x\tSTANDARD PLATED NICKEL\t14\t4\nBrand#x\tSTANDARD PLATED NICKEL\t19\t4\nBrand#x\tSTANDARD PLATED NICKEL\t45\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t14\t4\nBrand#x\tSTANDARD PLATED STEEL\t36\t4\nBrand#x\tSTANDARD PLATED STEEL\t45\t4\nBrand#x\tSTANDARD PLATED STEEL\t49\t4\nBrand#x\tSTANDARD PLATED TIN\t3\t4\nBrand#x\tSTANDARD PLATED TIN\t45\t4\nBrand#x\tSTANDARD PLATED TIN\t49\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t9\t4\nBrand#x\tSTANDARD POLISHED BRASS\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED COPPER\t36\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t9\t4\nBrand#x\tSTANDARD POLISHED STEEL\t49\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t36\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED STEEL\t36\t4\nBrand#x\tECONOMY ANODIZED TIN\t3\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t14\t4\nBrand#x\tECONOMY ANODIZED TIN\t19\t4\nBrand#x\tECONOMY ANODIZED TIN\t36\t4\nBrand#x\tECONOMY ANODIZED TIN\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t14\t4\nBrand#x\tECONOMY BRUSHED BRASS\t19\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t36\t4\nBrand#x\tECONOMY BRUSHED COPPER\t9\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t4\nBrand#x\tECONOMY BRUSHED STEEL\t19\t4\nBrand#x\tECONOMY BRUSHED STEEL\t23\t4\nBrand#x\tECONOMY BRUSHED STEEL\t45\t4\nBrand#x\tECONOMY BRUSHED STEEL\t49\t4\nBrand#x\tECONOMY BRUSHED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t14\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t19\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t45\t4\nBrand#x\tECONOMY BURNISHED COPPER\t49\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t45\t4\nBrand#x\tECONOMY BURNISHED STEEL\t49\t4\nBrand#x\tECONOMY BURNISHED TIN\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t23\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED 
COPPER\t3\t4\nBrand#x\tECONOMY PLATED COPPER\t9\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED COPPER\t23\t4\nBrand#x\tECONOMY PLATED COPPER\t36\t4\nBrand#x\tECONOMY PLATED COPPER\t45\t4\nBrand#x\tECONOMY PLATED COPPER\t49\t4\nBrand#x\tECONOMY PLATED NICKEL\t9\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t45\t4\nBrand#x\tECONOMY PLATED TIN\t3\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t14\t4\nBrand#x\tECONOMY POLISHED COPPER\t19\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t45\t4\nBrand#x\tECONOMY POLISHED STEEL\t3\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED STEEL\t49\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t36\t4\nBrand#x\tECONOMY POLISHED TIN\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t49\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t9\t4\nBrand#x\tLARGE ANODIZED COPPER\t14\t4\nBrand#x\tLARGE ANODIZED COPPER\t36\t4\nBrand#x\tLARGE ANODIZED COPPER\t45\t4\nBrand#x\tLARGE ANODIZED NICKEL\t14\t4\nBrand#x\tLARGE ANODIZED NICKEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t19\t4\nBrand#x\tLARGE ANODIZED STEEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t45\t4\nBrand#x\tLARGE ANODIZED TIN\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t45\t4\nBrand#x\tLARGE BRUSHED BRASS\t9\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t49\t4\nBrand#x\tLARGE BRUSHED COPPER\t19\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED COPPER\t36\t4\nBrand#x\tLARGE BRUSHED NICKEL\t23\t4\nBrand#x\tLARGE BRUSHED NICKEL\t36\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t14\t4\nBrand#x\tLARGE BRUSHED STEEL\t19\t4\nBrand#x\tLARGE BRUSHED STEEL\t36\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t3\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t49\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED COPPER\t3\t4\nBrand#x\tLARGE BURNISHED COPPER\t9\t4\nBrand#x\tLARGE BURNISHED COPPER\t19\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t14\t4\nBrand#x\tLARGE BURNISHED NICKEL\t23\t4\nBrand#x\tLARGE BURNISHED NICKEL\t49\t4\nBrand#x\tLARGE BURNISHED STEEL\t3\t4\nBrand#x\tLARGE BURNISHED STEEL\t36\t4\nBrand#x\tLARGE BURNISHED STEEL\t45\t4\nBrand#x\tLARGE BURNISHED TIN\t19\t4\nBrand#x\tLARGE PLATED COPPER\t3\t4\nBrand#x\tLARGE PLATED COPPER\t9\t4\nBrand#x\tLARGE PLATED COPPER\t23\t4\nBrand#x\tLARGE PLATED COPPER\t45\t4\nBrand#x\tLARGE PLATED NICKEL\t9\t4\nBrand#x\tLARGE PLATED NICKEL\t49\t4\nBrand#x\tLARGE PLATED STEEL\t3\t4\nBrand#x\tLARGE PLATED STEEL\t9\t4\nBrand#x\tLARGE PLATED STEEL\t14\t4\nBrand#x\tLARGE PLATED STEEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t19\t4\nBrand#x\tLARGE PLATED TIN\t23\t4\nBrand#x\tLARGE PLATED TIN\t45\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t14\t4\nBrand#x\tLARGE POLISHED BRASS\t49\t4\nBrand#x\tLARGE POLISHED COPPER\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t36\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED COPPER\t49\t4\nBrand#x\tLARGE POLISHED NICKEL\t14\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t36\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED 
NICKEL\t49\t4\nBrand#x\tLARGE POLISHED STEEL\t3\t4\nBrand#x\tLARGE POLISHED STEEL\t9\t4\nBrand#x\tLARGE POLISHED TIN\t23\t4\nBrand#x\tLARGE POLISHED TIN\t36\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t23\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t4\nBrand#x\tMEDIUM BRUSHED TIN\t14\t4\nBrand#x\tMEDIUM BRUSHED TIN\t49\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED TIN\t19\t4\nBrand#x\tMEDIUM BURNISHED TIN\t36\t4\nBrand#x\tMEDIUM BURNISHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED TIN\t49\t4\nBrand#x\tMEDIUM PLATED BRASS\t19\t4\nBrand#x\tMEDIUM PLATED BRASS\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t3\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t19\t4\nBrand#x\tMEDIUM PLATED NICKEL\t36\t4\nBrand#x\tMEDIUM PLATED NICKEL\t45\t4\nBrand#x\tMEDIUM PLATED NICKEL\t49\t4\nBrand#x\tMEDIUM PLATED STEEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t36\t4\nBrand#x\tMEDIUM PLATED TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t45\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t19\t4\nBrand#x\tPROMO ANODIZED BRASS\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t14\t4\nBrand#x\tPROMO ANODIZED COPPER\t36\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED NICKEL\t45\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t9\t4\nBrand#x\tPROMO ANODIZED TIN\t19\t4\nBrand#x\tPROMO ANODIZED TIN\t23\t4\nBrand#x\tPROMO BRUSHED BRASS\t23\t4\nBrand#x\tPROMO BRUSHED BRASS\t45\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED NICKEL\t19\t4\nBrand#x\tPROMO BRUSHED NICKEL\t49\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED STEEL\t36\t4\nBrand#x\tPROMO BRUSHED 
TIN\t3\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BURNISHED BRASS\t9\t4\nBrand#x\tPROMO BURNISHED BRASS\t23\t4\nBrand#x\tPROMO BURNISHED BRASS\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t49\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t23\t4\nBrand#x\tPROMO BURNISHED COPPER\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t19\t4\nBrand#x\tPROMO PLATED BRASS\t14\t4\nBrand#x\tPROMO PLATED BRASS\t19\t4\nBrand#x\tPROMO PLATED BRASS\t45\t4\nBrand#x\tPROMO PLATED BRASS\t49\t4\nBrand#x\tPROMO PLATED COPPER\t9\t4\nBrand#x\tPROMO PLATED COPPER\t14\t4\nBrand#x\tPROMO PLATED COPPER\t36\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t14\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t23\t4\nBrand#x\tPROMO PLATED NICKEL\t45\t4\nBrand#x\tPROMO PLATED STEEL\t9\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED TIN\t14\t4\nBrand#x\tPROMO PLATED TIN\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t9\t4\nBrand#x\tPROMO POLISHED BRASS\t19\t4\nBrand#x\tPROMO POLISHED BRASS\t36\t4\nBrand#x\tPROMO POLISHED COPPER\t3\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED COPPER\t14\t4\nBrand#x\tPROMO POLISHED COPPER\t19\t4\nBrand#x\tPROMO POLISHED COPPER\t23\t4\nBrand#x\tPROMO POLISHED COPPER\t36\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t19\t4\nBrand#x\tPROMO POLISHED NICKEL\t45\t4\nBrand#x\tPROMO POLISHED STEEL\t9\t4\nBrand#x\tPROMO POLISHED STEEL\t23\t4\nBrand#x\tPROMO POLISHED STEEL\t45\t4\nBrand#x\tSMALL ANODIZED BRASS\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t3\t4\nBrand#x\tSMALL ANODIZED COPPER\t19\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED NICKEL\t14\t4\nBrand#x\tSMALL ANODIZED NICKEL\t19\t4\nBrand#x\tSMALL ANODIZED NICKEL\t36\t4\nBrand#x\tSMALL ANODIZED NICKEL\t45\t4\nBrand#x\tSMALL ANODIZED STEEL\t9\t4\nBrand#x\tSMALL ANODIZED STEEL\t14\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t9\t4\nBrand#x\tSMALL ANODIZED TIN\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t23\t4\nBrand#x\tSMALL ANODIZED TIN\t45\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t9\t4\nBrand#x\tSMALL BRUSHED BRASS\t19\t4\nBrand#x\tSMALL BRUSHED BRASS\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t9\t4\nBrand#x\tSMALL BRUSHED COPPER\t45\t4\nBrand#x\tSMALL BRUSHED NICKEL\t9\t4\nBrand#x\tSMALL BRUSHED NICKEL\t14\t4\nBrand#x\tSMALL BRUSHED NICKEL\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t3\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t23\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t19\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BRUSHED TIN\t45\t4\nBrand#x\tSMALL BRUSHED TIN\t49\t4\nBrand#x\tSMALL BURNISHED BRASS\t3\t4\nBrand#x\tSMALL BURNISHED BRASS\t14\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t14\t4\nBrand#x\tSMALL BURNISHED COPPER\t23\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED COPPER\t49\t4\nBrand#x\tSMALL BURNISHED NICKEL\t14\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED NICKEL\t36\t4\nBrand#x\tSMALL BURNISHED 
NICKEL\t45\t4\nBrand#x\tSMALL BURNISHED NICKEL\t49\t4\nBrand#x\tSMALL BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t45\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL PLATED BRASS\t14\t4\nBrand#x\tSMALL PLATED BRASS\t23\t4\nBrand#x\tSMALL PLATED BRASS\t49\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED NICKEL\t23\t4\nBrand#x\tSMALL PLATED STEEL\t45\t4\nBrand#x\tSMALL PLATED STEEL\t49\t4\nBrand#x\tSMALL PLATED TIN\t9\t4\nBrand#x\tSMALL PLATED TIN\t19\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t19\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED NICKEL\t3\t4\nBrand#x\tSMALL POLISHED NICKEL\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t19\t4\nBrand#x\tSMALL POLISHED STEEL\t49\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSMALL POLISHED TIN\t49\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t4\nBrand#x\tSTANDARD ANODIZED TIN\t3\t4\nBrand#x\tSTANDARD ANODIZED TIN\t23\t4\nBrand#x\tSTANDARD ANODIZED TIN\t49\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED TIN\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t4\nBrand#x\tSTANDARD BURNISHED TIN\t3\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t19\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD PLATED BRASS\t23\t4\nBrand#x\tSTANDARD PLATED BRASS\t45\t4\nBrand#x\tSTANDARD PLATED COPPER\t3\t4\nBrand#x\tSTANDARD PLATED COPPER\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t19\t4\nBrand#x\tSTANDARD PLATED STEEL\t36\t4\nBrand#x\tSTANDARD PLATED TIN\t14\t4\nBrand#x\tSTANDARD PLATED TIN\t23\t4\nBrand#x\tSTANDARD PLATED TIN\t36\t4\nBrand#x\tSTANDARD PLATED TIN\t49\t4\nBrand#x\tSTANDARD POLISHED BRASS\t19\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED BRASS\t45\t4\nBrand#x\tSTANDARD POLISHED BRASS\t49\t4\nBrand#x\tSTANDARD POLISHED COPPER\t19\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED 
NICKEL\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED STEEL\t3\t4\nBrand#x\tSTANDARD POLISHED STEEL\t19\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED STEEL\t49\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t36\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED BRASS\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED BRASS\t36\t4\nBrand#x\tECONOMY ANODIZED COPPER\t14\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t3\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t14\t4\nBrand#x\tECONOMY ANODIZED TIN\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED COPPER\t49\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t4\nBrand#x\tECONOMY BRUSHED STEEL\t3\t4\nBrand#x\tECONOMY BRUSHED STEEL\t9\t4\nBrand#x\tECONOMY BRUSHED STEEL\t45\t4\nBrand#x\tECONOMY BRUSHED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t14\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY BRUSHED TIN\t49\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t9\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t45\t4\nBrand#x\tECONOMY BURNISHED COPPER\t49\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t3\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t14\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t23\t4\nBrand#x\tECONOMY PLATED COPPER\t3\t4\nBrand#x\tECONOMY PLATED COPPER\t36\t4\nBrand#x\tECONOMY PLATED COPPER\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t9\t4\nBrand#x\tECONOMY PLATED NICKEL\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t23\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t49\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t19\t4\nBrand#x\tECONOMY PLATED TIN\t23\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t3\t4\nBrand#x\tECONOMY POLISHED NICKEL\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t3\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t23\t4\nBrand#x\tECONOMY POLISHED TIN\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED COPPER\t14\t4\nBrand#x\tLARGE ANODIZED COPPER\t45\t4\nBrand#x\tLARGE ANODIZED COPPER\t49\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t14\t4\nBrand#x\tLARGE ANODIZED NICKEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED 
STEEL\t19\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE ANODIZED TIN\t14\t4\nBrand#x\tLARGE ANODIZED TIN\t19\t4\nBrand#x\tLARGE BRUSHED BRASS\t45\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t14\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED COPPER\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED STEEL\t14\t4\nBrand#x\tLARGE BRUSHED STEEL\t19\t4\nBrand#x\tLARGE BRUSHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED COPPER\t3\t4\nBrand#x\tLARGE BURNISHED COPPER\t23\t4\nBrand#x\tLARGE BURNISHED COPPER\t36\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t23\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t19\t4\nBrand#x\tLARGE BURNISHED STEEL\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t19\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE PLATED BRASS\t9\t4\nBrand#x\tLARGE PLATED BRASS\t23\t4\nBrand#x\tLARGE PLATED BRASS\t36\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED COPPER\t9\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t19\t4\nBrand#x\tLARGE PLATED COPPER\t23\t4\nBrand#x\tLARGE PLATED COPPER\t45\t4\nBrand#x\tLARGE PLATED COPPER\t49\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED NICKEL\t45\t4\nBrand#x\tLARGE PLATED STEEL\t9\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t23\t4\nBrand#x\tLARGE PLATED STEEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t9\t4\nBrand#x\tLARGE PLATED TIN\t14\t4\nBrand#x\tLARGE PLATED TIN\t23\t4\nBrand#x\tLARGE PLATED TIN\t45\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t19\t4\nBrand#x\tLARGE POLISHED BRASS\t45\t4\nBrand#x\tLARGE POLISHED BRASS\t49\t4\nBrand#x\tLARGE POLISHED COPPER\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t23\t4\nBrand#x\tLARGE POLISHED COPPER\t49\t4\nBrand#x\tLARGE POLISHED NICKEL\t23\t4\nBrand#x\tLARGE POLISHED NICKEL\t36\t4\nBrand#x\tLARGE POLISHED STEEL\t9\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED STEEL\t19\t4\nBrand#x\tLARGE POLISHED STEEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t49\t4\nBrand#x\tLARGE POLISHED TIN\t45\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t4\nBrand#x\tMEDIUM ANODIZED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t14\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED 
BRASS\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t4\nBrand#x\tMEDIUM BURNISHED TIN\t19\t4\nBrand#x\tMEDIUM PLATED BRASS\t9\t4\nBrand#x\tMEDIUM PLATED BRASS\t36\t4\nBrand#x\tMEDIUM PLATED BRASS\t45\t4\nBrand#x\tMEDIUM PLATED BRASS\t49\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t9\t4\nBrand#x\tMEDIUM PLATED STEEL\t23\t4\nBrand#x\tMEDIUM PLATED STEEL\t36\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t3\t4\nBrand#x\tPROMO ANODIZED BRASS\t36\t4\nBrand#x\tPROMO ANODIZED COPPER\t3\t4\nBrand#x\tPROMO ANODIZED COPPER\t9\t4\nBrand#x\tPROMO ANODIZED NICKEL\t3\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t49\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t49\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t36\t4\nBrand#x\tPROMO BRUSHED BRASS\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t49\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t23\t4\nBrand#x\tPROMO BRUSHED COPPER\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED NICKEL\t23\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED STEEL\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t14\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t36\t4\nBrand#x\tPROMO BRUSHED TIN\t45\t4\nBrand#x\tPROMO BURNISHED BRASS\t14\t4\nBrand#x\tPROMO BURNISHED BRASS\t23\t4\nBrand#x\tPROMO BURNISHED COPPER\t9\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t23\t4\nBrand#x\tPROMO BURNISHED STEEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t14\t4\nBrand#x\tPROMO PLATED BRASS\t19\t4\nBrand#x\tPROMO PLATED COPPER\t9\t4\nBrand#x\tPROMO PLATED COPPER\t23\t4\nBrand#x\tPROMO PLATED COPPER\t36\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t36\t4\nBrand#x\tPROMO PLATED STEEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t14\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED TIN\t3\t4\nBrand#x\tPROMO PLATED TIN\t9\t4\nBrand#x\tPROMO PLATED TIN\t19\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED COPPER\t19\t4\nBrand#x\tPROMO POLISHED COPPER\t23\t4\nBrand#x\tPROMO POLISHED NICKEL\t19\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED STEEL\t36\t4\nBrand#x\tPROMO POLISHED TIN\t9\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tSMALL ANODIZED BRASS\t14\t4\nBrand#x\tSMALL ANODIZED BRASS\t19\t4\nBrand#x\tSMALL ANODIZED BRASS\t45\t4\nBrand#x\tSMALL ANODIZED BRASS\t49\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED NICKEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t23\t4\nBrand#x\tSMALL ANODIZED STEEL\t45\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL ANODIZED TIN\t14\t4\nBrand#x\tSMALL ANODIZED TIN\t36\t4\nBrand#x\tSMALL BRUSHED BRASS\t19\t4\nBrand#x\tSMALL BRUSHED BRASS\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t19\t4\nBrand#x\tSMALL BRUSHED 
NICKEL\t14\t4\nBrand#x\tSMALL BRUSHED NICKEL\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t36\t4\nBrand#x\tSMALL BRUSHED NICKEL\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t14\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t36\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t3\t4\nBrand#x\tSMALL BRUSHED TIN\t14\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t49\t4\nBrand#x\tSMALL BURNISHED BRASS\t3\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED BRASS\t49\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t14\t4\nBrand#x\tSMALL BURNISHED COPPER\t23\t4\nBrand#x\tSMALL BURNISHED NICKEL\t3\t4\nBrand#x\tSMALL BURNISHED NICKEL\t9\t4\nBrand#x\tSMALL BURNISHED NICKEL\t14\t4\nBrand#x\tSMALL BURNISHED NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED NICKEL\t45\t4\nBrand#x\tSMALL BURNISHED NICKEL\t49\t4\nBrand#x\tSMALL BURNISHED STEEL\t3\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t3\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL PLATED BRASS\t3\t4\nBrand#x\tSMALL PLATED BRASS\t36\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED COPPER\t14\t4\nBrand#x\tSMALL PLATED COPPER\t19\t4\nBrand#x\tSMALL PLATED COPPER\t23\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED COPPER\t49\t4\nBrand#x\tSMALL PLATED NICKEL\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t9\t4\nBrand#x\tSMALL PLATED STEEL\t9\t4\nBrand#x\tSMALL PLATED STEEL\t36\t4\nBrand#x\tSMALL PLATED STEEL\t45\t4\nBrand#x\tSMALL PLATED TIN\t3\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t45\t4\nBrand#x\tSMALL POLISHED NICKEL\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t23\t4\nBrand#x\tSMALL POLISHED NICKEL\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED STEEL\t36\t4\nBrand#x\tSMALL POLISHED STEEL\t45\t4\nBrand#x\tSMALL POLISHED TIN\t36\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t45\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t4\nBrand#x\tSTANDARD BRUSHED TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED TIN\t9\t4\nBrand#x\tSTANDARD BRUSHED TIN\t36\t4\nBrand#x\tSTANDARD BRUSHED TIN\t45\t4\nBrand#x\tSTANDARD BRUSHED TIN\t49\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD 
BURNISHED COPPER\t49\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t49\t4\nBrand#x\tSTANDARD PLATED BRASS\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t3\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t14\t4\nBrand#x\tSTANDARD PLATED NICKEL\t19\t4\nBrand#x\tSTANDARD PLATED NICKEL\t23\t4\nBrand#x\tSTANDARD PLATED NICKEL\t45\t4\nBrand#x\tSTANDARD PLATED STEEL\t36\t4\nBrand#x\tSTANDARD PLATED STEEL\t45\t4\nBrand#x\tSTANDARD PLATED TIN\t19\t4\nBrand#x\tSTANDARD PLATED TIN\t23\t4\nBrand#x\tSTANDARD PLATED TIN\t36\t4\nBrand#x\tSTANDARD PLATED TIN\t45\t4\nBrand#x\tSTANDARD POLISHED BRASS\t45\t4\nBrand#x\tSTANDARD POLISHED BRASS\t49\t4\nBrand#x\tSTANDARD POLISHED COPPER\t19\t4\nBrand#x\tSTANDARD POLISHED COPPER\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t4\nBrand#x\tSTANDARD POLISHED STEEL\t9\t4\nBrand#x\tSTANDARD POLISHED STEEL\t36\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t23\t4\nBrand#x\tSTANDARD POLISHED TIN\t36\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED BRASS\t36\t4\nBrand#x\tECONOMY ANODIZED BRASS\t45\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED COPPER\t49\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED STEEL\t19\t4\nBrand#x\tECONOMY ANODIZED STEEL\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t4\nBrand#x\tECONOMY BRUSHED STEEL\t45\t4\nBrand#x\tECONOMY BRUSHED STEEL\t49\t4\nBrand#x\tECONOMY BRUSHED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY BRUSHED TIN\t36\t4\nBrand#x\tECONOMY BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED COPPER\t49\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t19\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t36\t4\nBrand#x\tECONOMY BURNISHED STEEL\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t23\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t49\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED COPPER\t19\t4\nBrand#x\tECONOMY PLATED NICKEL\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t23\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED STEEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED 
TIN\t19\t4\nBrand#x\tECONOMY PLATED TIN\t23\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t23\t4\nBrand#x\tECONOMY POLISHED BRASS\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t3\t4\nBrand#x\tECONOMY POLISHED COPPER\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t23\t4\nBrand#x\tECONOMY POLISHED COPPER\t49\t4\nBrand#x\tECONOMY POLISHED NICKEL\t3\t4\nBrand#x\tECONOMY POLISHED NICKEL\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED TIN\t3\t4\nBrand#x\tECONOMY POLISHED TIN\t19\t4\nBrand#x\tECONOMY POLISHED TIN\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t3\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED BRASS\t36\t4\nBrand#x\tLARGE ANODIZED BRASS\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t14\t4\nBrand#x\tLARGE ANODIZED COPPER\t19\t4\nBrand#x\tLARGE ANODIZED COPPER\t36\t4\nBrand#x\tLARGE ANODIZED COPPER\t45\t4\nBrand#x\tLARGE ANODIZED NICKEL\t14\t4\nBrand#x\tLARGE ANODIZED NICKEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t9\t4\nBrand#x\tLARGE ANODIZED STEEL\t23\t4\nBrand#x\tLARGE ANODIZED TIN\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t9\t4\nBrand#x\tLARGE ANODIZED TIN\t19\t4\nBrand#x\tLARGE ANODIZED TIN\t49\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t14\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED COPPER\t49\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED NICKEL\t23\t4\nBrand#x\tLARGE BRUSHED NICKEL\t45\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t14\t4\nBrand#x\tLARGE BRUSHED STEEL\t23\t4\nBrand#x\tLARGE BRUSHED TIN\t19\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BURNISHED BRASS\t3\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t49\t4\nBrand#x\tLARGE BURNISHED COPPER\t9\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t9\t4\nBrand#x\tLARGE BURNISHED NICKEL\t19\t4\nBrand#x\tLARGE BURNISHED NICKEL\t36\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t3\t4\nBrand#x\tLARGE BURNISHED STEEL\t23\t4\nBrand#x\tLARGE BURNISHED STEEL\t49\t4\nBrand#x\tLARGE BURNISHED TIN\t19\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE PLATED BRASS\t3\t4\nBrand#x\tLARGE PLATED BRASS\t14\t4\nBrand#x\tLARGE PLATED BRASS\t23\t4\nBrand#x\tLARGE PLATED BRASS\t45\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED COPPER\t23\t4\nBrand#x\tLARGE PLATED COPPER\t45\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t36\t4\nBrand#x\tLARGE PLATED NICKEL\t49\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t9\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t9\t4\nBrand#x\tLARGE POLISHED COPPER\t49\t4\nBrand#x\tLARGE POLISHED NICKEL\t23\t4\nBrand#x\tLARGE POLISHED NICKEL\t36\t4\nBrand#x\tLARGE POLISHED STEEL\t9\t4\nBrand#x\tLARGE POLISHED STEEL\t45\t4\nBrand#x\tLARGE POLISHED TIN\t9\t4\nBrand#x\tLARGE POLISHED TIN\t49\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED 
COPPER\t9\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t14\t4\nBrand#x\tMEDIUM BRUSHED TIN\t19\t4\nBrand#x\tMEDIUM BRUSHED TIN\t23\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t4\nBrand#x\tMEDIUM BURNISHED TIN\t9\t4\nBrand#x\tMEDIUM BURNISHED TIN\t14\t4\nBrand#x\tMEDIUM BURNISHED TIN\t19\t4\nBrand#x\tMEDIUM BURNISHED TIN\t49\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t14\t4\nBrand#x\tMEDIUM PLATED BRASS\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t3\t4\nBrand#x\tMEDIUM PLATED COPPER\t23\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED NICKEL\t3\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t3\t4\nBrand#x\tMEDIUM PLATED STEEL\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t3\t4\nBrand#x\tMEDIUM PLATED TIN\t45\t4\nBrand#x\tPROMO ANODIZED BRASS\t19\t4\nBrand#x\tPROMO ANODIZED BRASS\t45\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED COPPER\t23\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED NICKEL\t49\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t36\t4\nBrand#x\tPROMO ANODIZED STEEL\t49\t4\nBrand#x\tPROMO ANODIZED TIN\t19\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t3\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t36\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t23\t4\nBrand#x\tPROMO BRUSHED COPPER\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED NICKEL\t9\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED NICKEL\t23\t4\nBrand#x\tPROMO BRUSHED NICKEL\t36\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t23\t4\nBrand#x\tPROMO BRUSHED STEEL\t49\t4\nBrand#x\tPROMO BRUSHED TIN\t9\t4\nBrand#x\tPROMO BRUSHED 
TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t3\t4\nBrand#x\tPROMO BURNISHED BRASS\t19\t4\nBrand#x\tPROMO BURNISHED BRASS\t23\t4\nBrand#x\tPROMO BURNISHED BRASS\t49\t4\nBrand#x\tPROMO BURNISHED COPPER\t9\t4\nBrand#x\tPROMO BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t23\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED COPPER\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t3\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t36\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t3\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t19\t4\nBrand#x\tPROMO BURNISHED TIN\t23\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t14\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t9\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t45\t4\nBrand#x\tPROMO PLATED STEEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED STEEL\t49\t4\nBrand#x\tPROMO PLATED TIN\t3\t4\nBrand#x\tPROMO PLATED TIN\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t3\t4\nBrand#x\tPROMO POLISHED BRASS\t36\t4\nBrand#x\tPROMO POLISHED BRASS\t45\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t3\t4\nBrand#x\tPROMO POLISHED COPPER\t45\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t23\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED NICKEL\t45\t4\nBrand#x\tPROMO POLISHED STEEL\t36\t4\nBrand#x\tPROMO POLISHED STEEL\t45\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t45\t4\nBrand#x\tSMALL ANODIZED COPPER\t3\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED NICKEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t3\t4\nBrand#x\tSMALL ANODIZED STEEL\t14\t4\nBrand#x\tSMALL ANODIZED STEEL\t23\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL ANODIZED TIN\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t23\t4\nBrand#x\tSMALL ANODIZED TIN\t36\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t36\t4\nBrand#x\tSMALL BRUSHED BRASS\t45\t4\nBrand#x\tSMALL BRUSHED BRASS\t49\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t9\t4\nBrand#x\tSMALL BRUSHED NICKEL\t3\t4\nBrand#x\tSMALL BRUSHED NICKEL\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t36\t4\nBrand#x\tSMALL BRUSHED NICKEL\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t23\t4\nBrand#x\tSMALL BRUSHED STEEL\t36\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t9\t4\nBrand#x\tSMALL BRUSHED TIN\t14\t4\nBrand#x\tSMALL BRUSHED TIN\t19\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BRUSHED TIN\t45\t4\nBrand#x\tSMALL BURNISHED BRASS\t3\t4\nBrand#x\tSMALL BURNISHED BRASS\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t49\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t49\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t3\t4\nBrand#x\tSMALL BURNISHED STEEL\t9\t4\nBrand#x\tSMALL BURNISHED STEEL\t19\t4\nBrand#x\tSMALL 
BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t23\t4\nBrand#x\tSMALL BURNISHED TIN\t45\t4\nBrand#x\tSMALL PLATED BRASS\t9\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED COPPER\t3\t4\nBrand#x\tSMALL PLATED COPPER\t9\t4\nBrand#x\tSMALL PLATED COPPER\t14\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t3\t4\nBrand#x\tSMALL PLATED STEEL\t14\t4\nBrand#x\tSMALL PLATED STEEL\t23\t4\nBrand#x\tSMALL PLATED STEEL\t36\t4\nBrand#x\tSMALL PLATED STEEL\t49\t4\nBrand#x\tSMALL PLATED TIN\t3\t4\nBrand#x\tSMALL PLATED TIN\t23\t4\nBrand#x\tSMALL PLATED TIN\t36\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t3\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t19\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t3\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED NICKEL\t49\t4\nBrand#x\tSMALL POLISHED STEEL\t3\t4\nBrand#x\tSMALL POLISHED STEEL\t14\t4\nBrand#x\tSMALL POLISHED STEEL\t23\t4\nBrand#x\tSMALL POLISHED STEEL\t45\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t9\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSMALL POLISHED TIN\t19\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSMALL POLISHED TIN\t45\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t4\nBrand#x\tSTANDARD ANODIZED TIN\t9\t4\nBrand#x\tSTANDARD ANODIZED TIN\t19\t4\nBrand#x\tSTANDARD ANODIZED TIN\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t9\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED TIN\t23\t4\nBrand#x\tSTANDARD BRUSHED TIN\t36\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t3\t4\nBrand#x\tSTANDARD BURNISHED TIN\t14\t4\nBrand#x\tSTANDARD BURNISHED TIN\t19\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD PLATED BRASS\t9\t4\nBrand#x\tSTANDARD PLATED BRASS\t23\t4\nBrand#x\tSTANDARD PLATED 
BRASS\t36\t4\nBrand#x\tSTANDARD PLATED COPPER\t3\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED COPPER\t49\t4\nBrand#x\tSTANDARD PLATED NICKEL\t9\t4\nBrand#x\tSTANDARD PLATED NICKEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t14\t4\nBrand#x\tSTANDARD PLATED STEEL\t19\t4\nBrand#x\tSTANDARD PLATED TIN\t23\t4\nBrand#x\tSTANDARD PLATED TIN\t49\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t14\t4\nBrand#x\tSTANDARD POLISHED COPPER\t3\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED TIN\t14\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED COPPER\t14\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t4\nBrand#x\tECONOMY ANODIZED STEEL\t3\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t3\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED COPPER\t9\t4\nBrand#x\tECONOMY BRUSHED COPPER\t14\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED COPPER\t49\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t4\nBrand#x\tECONOMY BRUSHED STEEL\t3\t4\nBrand#x\tECONOMY BRUSHED STEEL\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t14\t4\nBrand#x\tECONOMY BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED COPPER\t49\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED STEEL\t3\t4\nBrand#x\tECONOMY BURNISHED STEEL\t9\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t49\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t36\t4\nBrand#x\tECONOMY BURNISHED TIN\t49\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED COPPER\t36\t4\nBrand#x\tECONOMY PLATED COPPER\t49\t4\nBrand#x\tECONOMY PLATED NICKEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t3\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t45\t4\nBrand#x\tECONOMY PLATED TIN\t3\t4\nBrand#x\tECONOMY PLATED TIN\t9\t4\nBrand#x\tECONOMY PLATED TIN\t19\t4\nBrand#x\tECONOMY PLATED TIN\t23\t4\nBrand#x\tECONOMY POLISHED BRASS\t19\t4\nBrand#x\tECONOMY POLISHED BRASS\t23\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t23\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t49\t4\nBrand#x\tECONOMY POLISHED NICKEL\t3\t4\nBrand#x\tECONOMY POLISHED NICKEL\t14\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t45\t4\nBrand#x\tECONOMY POLISHED STEEL\t23\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t36\t4\nBrand#x\tECONOMY POLISHED TIN\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED 
BRASS\t19\t4\nBrand#x\tLARGE ANODIZED NICKEL\t19\t4\nBrand#x\tLARGE ANODIZED NICKEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t9\t4\nBrand#x\tLARGE BRUSHED BRASS\t19\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t45\t4\nBrand#x\tLARGE BRUSHED BRASS\t49\t4\nBrand#x\tLARGE BRUSHED COPPER\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED COPPER\t49\t4\nBrand#x\tLARGE BRUSHED NICKEL\t14\t4\nBrand#x\tLARGE BRUSHED NICKEL\t45\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED STEEL\t45\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t3\t4\nBrand#x\tLARGE BRUSHED TIN\t9\t4\nBrand#x\tLARGE BRUSHED TIN\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t23\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t36\t4\nBrand#x\tLARGE BURNISHED STEEL\t23\t4\nBrand#x\tLARGE BURNISHED TIN\t45\t4\nBrand#x\tLARGE PLATED COPPER\t3\t4\nBrand#x\tLARGE PLATED COPPER\t9\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t36\t4\nBrand#x\tLARGE PLATED COPPER\t49\t4\nBrand#x\tLARGE PLATED NICKEL\t9\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED NICKEL\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t49\t4\nBrand#x\tLARGE PLATED STEEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED TIN\t3\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t9\t4\nBrand#x\tLARGE POLISHED BRASS\t14\t4\nBrand#x\tLARGE POLISHED BRASS\t23\t4\nBrand#x\tLARGE POLISHED BRASS\t36\t4\nBrand#x\tLARGE POLISHED BRASS\t45\t4\nBrand#x\tLARGE POLISHED COPPER\t9\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t3\t4\nBrand#x\tLARGE POLISHED NICKEL\t9\t4\nBrand#x\tLARGE POLISHED NICKEL\t14\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t49\t4\nBrand#x\tLARGE POLISHED STEEL\t3\t4\nBrand#x\tLARGE POLISHED STEEL\t9\t4\nBrand#x\tLARGE POLISHED STEEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t49\t4\nBrand#x\tLARGE POLISHED TIN\t19\t4\nBrand#x\tLARGE POLISHED TIN\t36\t4\nBrand#x\tLARGE POLISHED TIN\t45\t4\nBrand#x\tLARGE POLISHED TIN\t49\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t49\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t4\nBrand#x\tMEDIUM ANODIZED TIN\t9\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t3\t4\nBrand#x\tMEDIUM BURNISHED 
COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED BRASS\t9\t4\nBrand#x\tMEDIUM PLATED BRASS\t19\t4\nBrand#x\tMEDIUM PLATED BRASS\t49\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t3\t4\nBrand#x\tMEDIUM PLATED NICKEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t9\t4\nBrand#x\tMEDIUM PLATED STEEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t45\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t3\t4\nBrand#x\tMEDIUM PLATED TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t45\t4\nBrand#x\tPROMO ANODIZED BRASS\t19\t4\nBrand#x\tPROMO ANODIZED BRASS\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t36\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t9\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED STEEL\t9\t4\nBrand#x\tPROMO ANODIZED STEEL\t19\t4\nBrand#x\tPROMO ANODIZED TIN\t3\t4\nBrand#x\tPROMO ANODIZED TIN\t19\t4\nBrand#x\tPROMO ANODIZED TIN\t23\t4\nBrand#x\tPROMO ANODIZED TIN\t36\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t19\t4\nBrand#x\tPROMO BRUSHED BRASS\t36\t4\nBrand#x\tPROMO BRUSHED BRASS\t49\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t45\t4\nBrand#x\tPROMO BRUSHED NICKEL\t23\t4\nBrand#x\tPROMO BRUSHED STEEL\t3\t4\nBrand#x\tPROMO BRUSHED STEEL\t45\t4\nBrand#x\tPROMO BRUSHED STEEL\t49\t4\nBrand#x\tPROMO BRUSHED TIN\t9\t4\nBrand#x\tPROMO BRUSHED TIN\t14\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t9\t4\nBrand#x\tPROMO BURNISHED BRASS\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t23\t4\nBrand#x\tPROMO BURNISHED STEEL\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO PLATED BRASS\t9\t4\nBrand#x\tPROMO PLATED BRASS\t36\t4\nBrand#x\tPROMO PLATED COPPER\t9\t4\nBrand#x\tPROMO PLATED COPPER\t14\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED COPPER\t45\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t36\t4\nBrand#x\tPROMO PLATED NICKEL\t49\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED BRASS\t14\t4\nBrand#x\tPROMO POLISHED BRASS\t36\t4\nBrand#x\tPROMO POLISHED BRASS\t45\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED COPPER\t45\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED STEEL\t3\t4\nBrand#x\tPROMO POLISHED STEEL\t23\t4\nBrand#x\tPROMO POLISHED STEEL\t36\t4\nBrand#x\tPROMO POLISHED STEEL\t49\t4\nBrand#x\tPROMO POLISHED TIN\t9\t4\nBrand#x\tPROMO POLISHED TIN\t19\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED COPPER\t3\t4\nBrand#x\tSMALL ANODIZED COPPER\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t23\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED COPPER\t49\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED NICKEL\t14\t4\nBrand#x\tSMALL 
ANODIZED NICKEL\t45\t4\nBrand#x\tSMALL ANODIZED STEEL\t3\t4\nBrand#x\tSMALL ANODIZED STEEL\t9\t4\nBrand#x\tSMALL ANODIZED STEEL\t23\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t9\t4\nBrand#x\tSMALL ANODIZED TIN\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t23\t4\nBrand#x\tSMALL ANODIZED TIN\t45\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t9\t4\nBrand#x\tSMALL BRUSHED BRASS\t19\t4\nBrand#x\tSMALL BRUSHED BRASS\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t14\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED COPPER\t36\t4\nBrand#x\tSMALL BRUSHED NICKEL\t36\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t3\t4\nBrand#x\tSMALL BRUSHED STEEL\t14\t4\nBrand#x\tSMALL BRUSHED TIN\t3\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t3\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t19\t4\nBrand#x\tSMALL BURNISHED COPPER\t23\t4\nBrand#x\tSMALL BURNISHED NICKEL\t14\t4\nBrand#x\tSMALL BURNISHED NICKEL\t45\t4\nBrand#x\tSMALL BURNISHED NICKEL\t49\t4\nBrand#x\tSMALL BURNISHED STEEL\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t3\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t23\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t3\t4\nBrand#x\tSMALL PLATED BRASS\t14\t4\nBrand#x\tSMALL PLATED BRASS\t23\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED BRASS\t49\t4\nBrand#x\tSMALL PLATED COPPER\t9\t4\nBrand#x\tSMALL PLATED COPPER\t19\t4\nBrand#x\tSMALL PLATED COPPER\t23\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED NICKEL\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED STEEL\t9\t4\nBrand#x\tSMALL PLATED STEEL\t19\t4\nBrand#x\tSMALL PLATED STEEL\t45\t4\nBrand#x\tSMALL PLATED STEEL\t49\t4\nBrand#x\tSMALL PLATED TIN\t19\t4\nBrand#x\tSMALL PLATED TIN\t23\t4\nBrand#x\tSMALL POLISHED BRASS\t19\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t3\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t23\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED STEEL\t3\t4\nBrand#x\tSMALL POLISHED STEEL\t9\t4\nBrand#x\tSMALL POLISHED STEEL\t14\t4\nBrand#x\tSMALL POLISHED TIN\t36\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t49\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD 
BRUSHED STEEL\t19\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t4\nBrand#x\tSTANDARD BRUSHED TIN\t9\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED TIN\t36\t4\nBrand#x\tSTANDARD BRUSHED TIN\t49\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t14\t4\nBrand#x\tSTANDARD BURNISHED TIN\t23\t4\nBrand#x\tSTANDARD PLATED BRASS\t3\t4\nBrand#x\tSTANDARD PLATED BRASS\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t23\t4\nBrand#x\tSTANDARD PLATED COPPER\t49\t4\nBrand#x\tSTANDARD PLATED NICKEL\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t45\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t14\t4\nBrand#x\tSTANDARD PLATED STEEL\t49\t4\nBrand#x\tSTANDARD PLATED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t14\t4\nBrand#x\tSTANDARD POLISHED BRASS\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t3\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED COPPER\t14\t4\nBrand#x\tSTANDARD POLISHED COPPER\t19\t4\nBrand#x\tSTANDARD POLISHED COPPER\t36\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED STEEL\t9\t4\nBrand#x\tSTANDARD POLISHED STEEL\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t23\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED BRASS\t3\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED STEEL\t23\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t19\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t19\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t9\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t4\nBrand#x\tECONOMY BRUSHED STEEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t23\t4\nBrand#x\tECONOMY BRUSHED STEEL\t49\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED COPPER\t19\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t4\nBrand#x\tECONOMY BURNISHED 
NICKEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t9\t4\nBrand#x\tECONOMY BURNISHED STEEL\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t49\t4\nBrand#x\tECONOMY PLATED COPPER\t3\t4\nBrand#x\tECONOMY PLATED COPPER\t9\t4\nBrand#x\tECONOMY PLATED COPPER\t19\t4\nBrand#x\tECONOMY PLATED COPPER\t23\t4\nBrand#x\tECONOMY PLATED COPPER\t36\t4\nBrand#x\tECONOMY PLATED NICKEL\t19\t4\nBrand#x\tECONOMY PLATED NICKEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t3\t4\nBrand#x\tECONOMY POLISHED COPPER\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t3\t4\nBrand#x\tECONOMY POLISHED NICKEL\t14\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED STEEL\t9\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED STEEL\t36\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t19\t4\nBrand#x\tLARGE ANODIZED COPPER\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t49\t4\nBrand#x\tLARGE ANODIZED NICKEL\t14\t4\nBrand#x\tLARGE ANODIZED NICKEL\t23\t4\nBrand#x\tLARGE ANODIZED NICKEL\t36\t4\nBrand#x\tLARGE ANODIZED NICKEL\t45\t4\nBrand#x\tLARGE ANODIZED NICKEL\t49\t4\nBrand#x\tLARGE ANODIZED STEEL\t9\t4\nBrand#x\tLARGE ANODIZED STEEL\t45\t4\nBrand#x\tLARGE ANODIZED STEEL\t49\t4\nBrand#x\tLARGE ANODIZED TIN\t9\t4\nBrand#x\tLARGE ANODIZED TIN\t14\t4\nBrand#x\tLARGE ANODIZED TIN\t36\t4\nBrand#x\tLARGE ANODIZED TIN\t49\t4\nBrand#x\tLARGE BRUSHED BRASS\t19\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t45\t4\nBrand#x\tLARGE BRUSHED BRASS\t49\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t14\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t19\t4\nBrand#x\tLARGE BRUSHED TIN\t9\t4\nBrand#x\tLARGE BRUSHED TIN\t23\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t14\t4\nBrand#x\tLARGE BURNISHED BRASS\t45\t4\nBrand#x\tLARGE BURNISHED BRASS\t49\t4\nBrand#x\tLARGE BURNISHED COPPER\t9\t4\nBrand#x\tLARGE BURNISHED COPPER\t36\t4\nBrand#x\tLARGE BURNISHED NICKEL\t3\t4\nBrand#x\tLARGE BURNISHED NICKEL\t9\t4\nBrand#x\tLARGE BURNISHED NICKEL\t23\t4\nBrand#x\tLARGE BURNISHED STEEL\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t23\t4\nBrand#x\tLARGE BURNISHED TIN\t49\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED NICKEL\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t45\t4\nBrand#x\tLARGE PLATED STEEL\t9\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED TIN\t9\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t9\t4\nBrand#x\tLARGE POLISHED BRASS\t23\t4\nBrand#x\tLARGE POLISHED COPPER\t9\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t9\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t36\t4\nBrand#x\tLARGE POLISHED STEEL\t19\t4\nBrand#x\tLARGE POLISHED STEEL\t36\t4\nBrand#x\tLARGE POLISHED STEEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t49\t4\nBrand#x\tLARGE POLISHED TIN\t23\t4\nBrand#x\tLARGE POLISHED TIN\t36\t4\nBrand#x\tLARGE POLISHED TIN\t45\t4\nBrand#x\tLARGE POLISHED TIN\t49\t4\nBrand#x\tMEDIUM ANODIZED 
BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t4\nBrand#x\tMEDIUM BRUSHED TIN\t14\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t4\nBrand#x\tMEDIUM BURNISHED TIN\t9\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t36\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t9\t4\nBrand#x\tMEDIUM PLATED BRASS\t14\t4\nBrand#x\tMEDIUM PLATED BRASS\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t3\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED NICKEL\t3\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED TIN\t14\t4\nBrand#x\tMEDIUM PLATED TIN\t19\t4\nBrand#x\tMEDIUM PLATED TIN\t23\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t19\t4\nBrand#x\tPROMO ANODIZED BRASS\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t45\t4\nBrand#x\tPROMO ANODIZED COPPER\t9\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED COPPER\t23\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t9\t4\nBrand#x\tPROMO ANODIZED NICKEL\t14\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED NICKEL\t36\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t36\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t3\t4\nBrand#x\tPROMO ANODIZED TIN\t14\t4\nBrand#x\tPROMO ANODIZED TIN\t19\t4\nBrand#x\tPROMO ANODIZED TIN\t23\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t49\t4\nBrand#x\tPROMO BRUSHED BRASS\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t49\t4\nBrand#x\tPROMO BRUSHED COPPER\t3\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t23\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED NICKEL\t19\t4\nBrand#x\tPROMO BRUSHED NICKEL\t45\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED TIN\t3\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED 
BRASS\t3\t4\nBrand#x\tPROMO BURNISHED BRASS\t19\t4\nBrand#x\tPROMO BURNISHED BRASS\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED BRASS\t49\t4\nBrand#x\tPROMO BURNISHED COPPER\t3\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED NICKEL\t3\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t49\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t9\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t23\t4\nBrand#x\tPROMO BURNISHED STEEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t45\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t19\t4\nBrand#x\tPROMO PLATED BRASS\t23\t4\nBrand#x\tPROMO PLATED BRASS\t45\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t23\t4\nBrand#x\tPROMO PLATED NICKEL\t45\t4\nBrand#x\tPROMO PLATED STEEL\t9\t4\nBrand#x\tPROMO PLATED STEEL\t23\t4\nBrand#x\tPROMO PLATED TIN\t9\t4\nBrand#x\tPROMO PLATED TIN\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t3\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t23\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED NICKEL\t45\t4\nBrand#x\tPROMO POLISHED NICKEL\t49\t4\nBrand#x\tPROMO POLISHED STEEL\t14\t4\nBrand#x\tPROMO POLISHED STEEL\t23\t4\nBrand#x\tPROMO POLISHED TIN\t3\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tPROMO POLISHED TIN\t49\t4\nBrand#x\tSMALL ANODIZED BRASS\t19\t4\nBrand#x\tSMALL ANODIZED BRASS\t49\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED NICKEL\t23\t4\nBrand#x\tSMALL ANODIZED NICKEL\t49\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t14\t4\nBrand#x\tSMALL ANODIZED TIN\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED BRASS\t14\t4\nBrand#x\tSMALL BRUSHED BRASS\t19\t4\nBrand#x\tSMALL BRUSHED BRASS\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED COPPER\t36\t4\nBrand#x\tSMALL BRUSHED NICKEL\t3\t4\nBrand#x\tSMALL BRUSHED NICKEL\t19\t4\nBrand#x\tSMALL BRUSHED NICKEL\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t9\t4\nBrand#x\tSMALL BRUSHED STEEL\t14\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t45\t4\nBrand#x\tSMALL BRUSHED TIN\t49\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED BRASS\t36\t4\nBrand#x\tSMALL BURNISHED COPPER\t14\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED COPPER\t49\t4\nBrand#x\tSMALL BURNISHED NICKEL\t14\t4\nBrand#x\tSMALL BURNISHED NICKEL\t49\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED STEEL\t19\t4\nBrand#x\tSMALL BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t19\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t45\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t19\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED COPPER\t3\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED COPPER\t49\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t3\t4\nBrand#x\tSMALL PLATED STEEL\t19\t4\nBrand#x\tSMALL PLATED STEEL\t23\t4\nBrand#x\tSMALL 
PLATED TIN\t14\t4\nBrand#x\tSMALL PLATED TIN\t36\t4\nBrand#x\tSMALL PLATED TIN\t45\t4\nBrand#x\tSMALL POLISHED BRASS\t3\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t14\t4\nBrand#x\tSMALL POLISHED BRASS\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t9\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t49\t4\nBrand#x\tSMALL POLISHED NICKEL\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED STEEL\t3\t4\nBrand#x\tSMALL POLISHED STEEL\t9\t4\nBrand#x\tSMALL POLISHED STEEL\t14\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED STEEL\t23\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t4\nBrand#x\tSTANDARD ANODIZED TIN\t9\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t36\t4\nBrand#x\tSTANDARD ANODIZED TIN\t45\t4\nBrand#x\tSTANDARD ANODIZED TIN\t49\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t49\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t49\t4\nBrand#x\tSTANDARD PLATED BRASS\t3\t4\nBrand#x\tSTANDARD PLATED BRASS\t23\t4\nBrand#x\tSTANDARD PLATED COPPER\t14\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED COPPER\t23\t4\nBrand#x\tSTANDARD PLATED NICKEL\t3\t4\nBrand#x\tSTANDARD PLATED NICKEL\t36\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t45\t4\nBrand#x\tSTANDARD PLATED TIN\t19\t4\nBrand#x\tSTANDARD PLATED TIN\t23\t4\nBrand#x\tSTANDARD PLATED TIN\t36\t4\nBrand#x\tSTANDARD POLISHED BRASS\t9\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED BRASS\t45\t4\nBrand#x\tSTANDARD POLISHED BRASS\t49\t4\nBrand#x\tSTANDARD POLISHED COPPER\t19\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t4\nBrand#x\tSTANDARD 
POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t9\t4\nBrand#x\tSTANDARD POLISHED STEEL\t14\t4\nBrand#x\tSTANDARD POLISHED STEEL\t19\t4\nBrand#x\tSTANDARD POLISHED STEEL\t23\t4\nBrand#x\tSTANDARD POLISHED STEEL\t49\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED BRASS\t3\t4\nBrand#x\tECONOMY ANODIZED BRASS\t45\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED STEEL\t36\t4\nBrand#x\tECONOMY ANODIZED TIN\t3\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t14\t4\nBrand#x\tECONOMY BRUSHED BRASS\t19\t4\nBrand#x\tECONOMY BRUSHED BRASS\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED COPPER\t14\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t23\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t4\nBrand#x\tECONOMY BRUSHED STEEL\t36\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t9\t4\nBrand#x\tECONOMY BURNISHED COPPER\t14\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t45\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED STEEL\t3\t4\nBrand#x\tECONOMY BURNISHED STEEL\t36\t4\nBrand#x\tECONOMY BURNISHED TIN\t3\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t45\t4\nBrand#x\tECONOMY PLATED COPPER\t19\t4\nBrand#x\tECONOMY PLATED COPPER\t45\t4\nBrand#x\tECONOMY PLATED COPPER\t49\t4\nBrand#x\tECONOMY PLATED NICKEL\t3\t4\nBrand#x\tECONOMY PLATED NICKEL\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t23\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t3\t4\nBrand#x\tECONOMY PLATED STEEL\t23\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t14\t4\nBrand#x\tECONOMY POLISHED BRASS\t19\t4\nBrand#x\tECONOMY POLISHED BRASS\t23\t4\nBrand#x\tECONOMY POLISHED BRASS\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t45\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t14\t4\nBrand#x\tECONOMY POLISHED COPPER\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t49\t4\nBrand#x\tECONOMY POLISHED NICKEL\t3\t4\nBrand#x\tECONOMY POLISHED NICKEL\t9\t4\nBrand#x\tECONOMY POLISHED NICKEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t3\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED STEEL\t49\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t19\t4\nBrand#x\tECONOMY POLISHED TIN\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t49\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED BRASS\t36\t4\nBrand#x\tLARGE ANODIZED 
COPPER\t9\t4\nBrand#x\tLARGE ANODIZED COPPER\t19\t4\nBrand#x\tLARGE ANODIZED COPPER\t45\t4\nBrand#x\tLARGE ANODIZED NICKEL\t14\t4\nBrand#x\tLARGE ANODIZED NICKEL\t19\t4\nBrand#x\tLARGE ANODIZED NICKEL\t23\t4\nBrand#x\tLARGE ANODIZED NICKEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t19\t4\nBrand#x\tLARGE ANODIZED STEEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t45\t4\nBrand#x\tLARGE ANODIZED STEEL\t49\t4\nBrand#x\tLARGE ANODIZED TIN\t19\t4\nBrand#x\tLARGE ANODIZED TIN\t36\t4\nBrand#x\tLARGE ANODIZED TIN\t49\t4\nBrand#x\tLARGE BRUSHED BRASS\t9\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t14\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED COPPER\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED NICKEL\t14\t4\nBrand#x\tLARGE BRUSHED NICKEL\t45\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t36\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t9\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t36\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t23\t4\nBrand#x\tLARGE BURNISHED BRASS\t36\t4\nBrand#x\tLARGE BURNISHED BRASS\t45\t4\nBrand#x\tLARGE BURNISHED COPPER\t3\t4\nBrand#x\tLARGE BURNISHED COPPER\t23\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t36\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t14\t4\nBrand#x\tLARGE BURNISHED STEEL\t19\t4\nBrand#x\tLARGE BURNISHED STEEL\t45\t4\nBrand#x\tLARGE BURNISHED TIN\t3\t4\nBrand#x\tLARGE BURNISHED TIN\t14\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE PLATED BRASS\t45\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED COPPER\t3\t4\nBrand#x\tLARGE PLATED COPPER\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t36\t4\nBrand#x\tLARGE PLATED NICKEL\t49\t4\nBrand#x\tLARGE PLATED STEEL\t3\t4\nBrand#x\tLARGE PLATED STEEL\t14\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t23\t4\nBrand#x\tLARGE PLATED STEEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t23\t4\nBrand#x\tLARGE PLATED TIN\t36\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t9\t4\nBrand#x\tLARGE POLISHED BRASS\t23\t4\nBrand#x\tLARGE POLISHED BRASS\t45\t4\nBrand#x\tLARGE POLISHED BRASS\t49\t4\nBrand#x\tLARGE POLISHED COPPER\t9\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t3\t4\nBrand#x\tLARGE POLISHED NICKEL\t9\t4\nBrand#x\tLARGE POLISHED NICKEL\t14\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t19\t4\nBrand#x\tLARGE POLISHED STEEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t49\t4\nBrand#x\tLARGE POLISHED TIN\t36\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t14\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t4\nBrand#x\tMEDIUM ANODIZED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED TIN\t9\t4\nBrand#x\tMEDIUM ANODIZED TIN\t23\t4\nBrand#x\tMEDIUM BRUSHED 
BRASS\t14\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t9\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t49\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t4\nBrand#x\tMEDIUM BURNISHED TIN\t9\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t45\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t14\t4\nBrand#x\tMEDIUM PLATED BRASS\t23\t4\nBrand#x\tMEDIUM PLATED COPPER\t9\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t19\t4\nBrand#x\tMEDIUM PLATED NICKEL\t3\t4\nBrand#x\tMEDIUM PLATED NICKEL\t45\t4\nBrand#x\tMEDIUM PLATED NICKEL\t49\t4\nBrand#x\tMEDIUM PLATED STEEL\t23\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t3\t4\nBrand#x\tMEDIUM PLATED TIN\t19\t4\nBrand#x\tMEDIUM PLATED TIN\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t3\t4\nBrand#x\tPROMO ANODIZED BRASS\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED COPPER\t36\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t3\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t36\t4\nBrand#x\tPROMO ANODIZED STEEL\t9\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t9\t4\nBrand#x\tPROMO ANODIZED TIN\t19\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t3\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t23\t4\nBrand#x\tPROMO BRUSHED COPPER\t3\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t23\t4\nBrand#x\tPROMO BRUSHED COPPER\t36\t4\nBrand#x\tPROMO BRUSHED COPPER\t45\t4\nBrand#x\tPROMO BRUSHED COPPER\t49\t4\nBrand#x\tPROMO BRUSHED NICKEL\t9\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t3\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t49\t4\nBrand#x\tPROMO BRUSHED TIN\t9\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t9\t4\nBrand#x\tPROMO BURNISHED BRASS\t36\t4\nBrand#x\tPROMO BURNISHED COPPER\t3\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t19\t4\nBrand#x\tPROMO BURNISHED NICKEL\t49\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t9\t4\nBrand#x\tPROMO BURNISHED STEEL\t14\t4\nBrand#x\tPROMO BURNISHED STEEL\t36\t4\nBrand#x\tPROMO BURNISHED STEEL\t45\t4\nBrand#x\tPROMO BURNISHED TIN\t3\t4\nBrand#x\tPROMO BURNISHED TIN\t19\t4\nBrand#x\tPROMO BURNISHED TIN\t36\t4\nBrand#x\tPROMO PLATED BRASS\t45\t4\nBrand#x\tPROMO PLATED BRASS\t49\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t14\t4\nBrand#x\tPROMO PLATED 
COPPER\t23\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t9\t4\nBrand#x\tPROMO PLATED NICKEL\t14\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t49\t4\nBrand#x\tPROMO PLATED STEEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t9\t4\nBrand#x\tPROMO PLATED STEEL\t36\t4\nBrand#x\tPROMO PLATED TIN\t3\t4\nBrand#x\tPROMO POLISHED BRASS\t3\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED COPPER\t23\t4\nBrand#x\tPROMO POLISHED COPPER\t45\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t23\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED NICKEL\t45\t4\nBrand#x\tPROMO POLISHED NICKEL\t49\t4\nBrand#x\tPROMO POLISHED TIN\t14\t4\nBrand#x\tPROMO POLISHED TIN\t19\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tPROMO POLISHED TIN\t45\t4\nBrand#x\tSMALL ANODIZED BRASS\t9\t4\nBrand#x\tSMALL ANODIZED BRASS\t14\t4\nBrand#x\tSMALL ANODIZED BRASS\t49\t4\nBrand#x\tSMALL ANODIZED COPPER\t3\t4\nBrand#x\tSMALL ANODIZED COPPER\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED STEEL\t14\t4\nBrand#x\tSMALL ANODIZED STEEL\t45\t4\nBrand#x\tSMALL ANODIZED TIN\t9\t4\nBrand#x\tSMALL ANODIZED TIN\t14\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t9\t4\nBrand#x\tSMALL BRUSHED BRASS\t19\t4\nBrand#x\tSMALL BRUSHED BRASS\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t49\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED COPPER\t45\t4\nBrand#x\tSMALL BRUSHED NICKEL\t19\t4\nBrand#x\tSMALL BRUSHED NICKEL\t36\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED TIN\t3\t4\nBrand#x\tSMALL BRUSHED TIN\t19\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t14\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED BRASS\t45\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t14\t4\nBrand#x\tSMALL BURNISHED COPPER\t19\t4\nBrand#x\tSMALL BURNISHED COPPER\t23\t4\nBrand#x\tSMALL BURNISHED COPPER\t49\t4\nBrand#x\tSMALL BURNISHED NICKEL\t9\t4\nBrand#x\tSMALL BURNISHED NICKEL\t14\t4\nBrand#x\tSMALL BURNISHED STEEL\t9\t4\nBrand#x\tSMALL BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t45\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t3\t4\nBrand#x\tSMALL BURNISHED TIN\t45\t4\nBrand#x\tSMALL PLATED BRASS\t3\t4\nBrand#x\tSMALL PLATED BRASS\t9\t4\nBrand#x\tSMALL PLATED BRASS\t23\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED BRASS\t49\t4\nBrand#x\tSMALL PLATED COPPER\t3\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t9\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t36\t4\nBrand#x\tSMALL PLATED NICKEL\t45\t4\nBrand#x\tSMALL PLATED STEEL\t9\t4\nBrand#x\tSMALL PLATED STEEL\t14\t4\nBrand#x\tSMALL PLATED STEEL\t45\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t14\t4\nBrand#x\tSMALL POLISHED BRASS\t19\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t9\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t49\t4\nBrand#x\tSMALL POLISHED NICKEL\t3\t4\nBrand#x\tSMALL POLISHED NICKEL\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t49\t4\nBrand#x\tSMALL POLISHED STEEL\t3\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t4\nBrand#x\tSTANDARD ANODIZED 
BRASS\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t19\t4\nBrand#x\tSTANDARD ANODIZED TIN\t49\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t19\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t3\t4\nBrand#x\tSTANDARD BURNISHED TIN\t14\t4\nBrand#x\tSTANDARD BURNISHED TIN\t19\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD PLATED BRASS\t3\t4\nBrand#x\tSTANDARD PLATED BRASS\t9\t4\nBrand#x\tSTANDARD PLATED BRASS\t19\t4\nBrand#x\tSTANDARD PLATED BRASS\t23\t4\nBrand#x\tSTANDARD PLATED BRASS\t36\t4\nBrand#x\tSTANDARD PLATED BRASS\t49\t4\nBrand#x\tSTANDARD PLATED COPPER\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t9\t4\nBrand#x\tSTANDARD PLATED NICKEL\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t9\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t14\t4\nBrand#x\tSTANDARD POLISHED BRASS\t45\t4\nBrand#x\tSTANDARD POLISHED BRASS\t49\t4\nBrand#x\tSTANDARD POLISHED COPPER\t3\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED COPPER\t36\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tECONOMY ANODIZED BRASS\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t23\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED COPPER\t49\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t4\nBrand#x\tECONOMY ANODIZED STEEL\t3\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED STEEL\t19\t4\nBrand#x\tECONOMY ANODIZED TIN\t19\t4\nBrand#x\tECONOMY 
ANODIZED TIN\t23\t4\nBrand#x\tECONOMY ANODIZED TIN\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t3\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t14\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t4\nBrand#x\tECONOMY BRUSHED STEEL\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t3\t4\nBrand#x\tECONOMY BRUSHED TIN\t14\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t14\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t45\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED COPPER\t19\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t45\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED STEEL\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t14\t4\nBrand#x\tECONOMY BURNISHED TIN\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t3\t4\nBrand#x\tECONOMY PLATED BRASS\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t23\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t49\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED COPPER\t36\t4\nBrand#x\tECONOMY PLATED NICKEL\t36\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t3\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY PLATED TIN\t45\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED BRASS\t14\t4\nBrand#x\tECONOMY POLISHED BRASS\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t3\t4\nBrand#x\tECONOMY POLISHED COPPER\t14\t4\nBrand#x\tECONOMY POLISHED COPPER\t23\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED NICKEL\t3\t4\nBrand#x\tECONOMY POLISHED NICKEL\t9\t4\nBrand#x\tECONOMY POLISHED NICKEL\t14\t4\nBrand#x\tECONOMY POLISHED NICKEL\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t3\t4\nBrand#x\tLARGE ANODIZED COPPER\t36\t4\nBrand#x\tLARGE ANODIZED COPPER\t45\t4\nBrand#x\tLARGE ANODIZED NICKEL\t14\t4\nBrand#x\tLARGE ANODIZED STEEL\t3\t4\nBrand#x\tLARGE ANODIZED STEEL\t9\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED TIN\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t49\t4\nBrand#x\tLARGE BRUSHED BRASS\t14\t4\nBrand#x\tLARGE BRUSHED BRASS\t19\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t45\t4\nBrand#x\tLARGE BRUSHED STEEL\t19\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t3\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t49\t4\nBrand#x\tLARGE BURNISHED BRASS\t3\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED COPPER\t9\t4\nBrand#x\tLARGE 
BURNISHED COPPER\t19\t4\nBrand#x\tLARGE BURNISHED COPPER\t23\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t9\t4\nBrand#x\tLARGE BURNISHED NICKEL\t19\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t19\t4\nBrand#x\tLARGE BURNISHED STEEL\t23\t4\nBrand#x\tLARGE BURNISHED STEEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t49\t4\nBrand#x\tLARGE BURNISHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED TIN\t49\t4\nBrand#x\tLARGE PLATED BRASS\t3\t4\nBrand#x\tLARGE PLATED BRASS\t36\t4\nBrand#x\tLARGE PLATED COPPER\t3\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t19\t4\nBrand#x\tLARGE PLATED COPPER\t23\t4\nBrand#x\tLARGE PLATED COPPER\t49\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t9\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED TIN\t3\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t19\t4\nBrand#x\tLARGE POLISHED BRASS\t23\t4\nBrand#x\tLARGE POLISHED BRASS\t45\t4\nBrand#x\tLARGE POLISHED BRASS\t49\t4\nBrand#x\tLARGE POLISHED COPPER\t9\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t14\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t36\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t49\t4\nBrand#x\tLARGE POLISHED STEEL\t3\t4\nBrand#x\tLARGE POLISHED STEEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t49\t4\nBrand#x\tLARGE POLISHED TIN\t3\t4\nBrand#x\tLARGE POLISHED TIN\t19\t4\nBrand#x\tLARGE POLISHED TIN\t23\t4\nBrand#x\tLARGE POLISHED TIN\t36\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t45\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t4\nBrand#x\tMEDIUM ANODIZED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED TIN\t9\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t49\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t4\nBrand#x\tMEDIUM BRUSHED TIN\t9\t4\nBrand#x\tMEDIUM BRUSHED TIN\t14\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM PLATED BRASS\t9\t4\nBrand#x\tMEDIUM PLATED BRASS\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED 
COPPER\t45\t4\nBrand#x\tMEDIUM PLATED NICKEL\t23\t4\nBrand#x\tMEDIUM PLATED NICKEL\t49\t4\nBrand#x\tMEDIUM PLATED STEEL\t9\t4\nBrand#x\tMEDIUM PLATED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t23\t4\nBrand#x\tMEDIUM PLATED TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t14\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t19\t4\nBrand#x\tPROMO ANODIZED BRASS\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t3\t4\nBrand#x\tPROMO ANODIZED COPPER\t9\t4\nBrand#x\tPROMO ANODIZED COPPER\t14\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t9\t4\nBrand#x\tPROMO ANODIZED NICKEL\t36\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t19\t4\nBrand#x\tPROMO ANODIZED STEEL\t36\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t36\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t23\t4\nBrand#x\tPROMO BRUSHED BRASS\t49\t4\nBrand#x\tPROMO BRUSHED COPPER\t14\t4\nBrand#x\tPROMO BRUSHED COPPER\t45\t4\nBrand#x\tPROMO BRUSHED COPPER\t49\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED STEEL\t3\t4\nBrand#x\tPROMO BRUSHED STEEL\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t9\t4\nBrand#x\tPROMO BRUSHED TIN\t14\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t36\t4\nBrand#x\tPROMO BRUSHED TIN\t45\t4\nBrand#x\tPROMO BURNISHED BRASS\t9\t4\nBrand#x\tPROMO BURNISHED BRASS\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t3\t4\nBrand#x\tPROMO BURNISHED COPPER\t9\t4\nBrand#x\tPROMO BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t23\t4\nBrand#x\tPROMO BURNISHED COPPER\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t19\t4\nBrand#x\tPROMO BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED NICKEL\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t9\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t23\t4\nBrand#x\tPROMO BURNISHED TIN\t19\t4\nBrand#x\tPROMO BURNISHED TIN\t45\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO PLATED BRASS\t9\t4\nBrand#x\tPROMO PLATED BRASS\t19\t4\nBrand#x\tPROMO PLATED BRASS\t23\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t23\t4\nBrand#x\tPROMO PLATED COPPER\t36\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t36\t4\nBrand#x\tPROMO PLATED NICKEL\t49\t4\nBrand#x\tPROMO PLATED STEEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED STEEL\t23\t4\nBrand#x\tPROMO PLATED STEEL\t36\t4\nBrand#x\tPROMO PLATED STEEL\t49\t4\nBrand#x\tPROMO PLATED TIN\t3\t4\nBrand#x\tPROMO PLATED TIN\t14\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED BRASS\t3\t4\nBrand#x\tPROMO POLISHED BRASS\t14\t4\nBrand#x\tPROMO POLISHED BRASS\t19\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED COPPER\t45\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t23\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED NICKEL\t45\t4\nBrand#x\tPROMO POLISHED NICKEL\t49\t4\nBrand#x\tPROMO POLISHED STEEL\t19\t4\nBrand#x\tPROMO POLISHED STEEL\t45\t4\nBrand#x\tPROMO POLISHED STEEL\t49\t4\nBrand#x\tPROMO POLISHED 
TIN\t9\t4\nBrand#x\tPROMO POLISHED TIN\t19\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tPROMO POLISHED TIN\t49\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t9\t4\nBrand#x\tSMALL ANODIZED BRASS\t14\t4\nBrand#x\tSMALL ANODIZED COPPER\t23\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED NICKEL\t9\t4\nBrand#x\tSMALL ANODIZED NICKEL\t49\t4\nBrand#x\tSMALL ANODIZED STEEL\t9\t4\nBrand#x\tSMALL ANODIZED STEEL\t14\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t45\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t14\t4\nBrand#x\tSMALL BRUSHED BRASS\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t14\t4\nBrand#x\tSMALL BRUSHED NICKEL\t3\t4\nBrand#x\tSMALL BRUSHED NICKEL\t14\t4\nBrand#x\tSMALL BRUSHED NICKEL\t19\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t23\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED BRASS\t49\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED NICKEL\t3\t4\nBrand#x\tSMALL BURNISHED NICKEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED STEEL\t19\t4\nBrand#x\tSMALL BURNISHED STEEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t19\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL PLATED BRASS\t3\t4\nBrand#x\tSMALL PLATED BRASS\t9\t4\nBrand#x\tSMALL PLATED BRASS\t14\t4\nBrand#x\tSMALL PLATED COPPER\t3\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t36\t4\nBrand#x\tSMALL PLATED STEEL\t9\t4\nBrand#x\tSMALL PLATED STEEL\t23\t4\nBrand#x\tSMALL PLATED STEEL\t45\t4\nBrand#x\tSMALL PLATED STEEL\t49\t4\nBrand#x\tSMALL PLATED TIN\t3\t4\nBrand#x\tSMALL PLATED TIN\t36\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t49\t4\nBrand#x\tSMALL POLISHED STEEL\t3\t4\nBrand#x\tSMALL POLISHED STEEL\t14\t4\nBrand#x\tSMALL POLISHED STEEL\t23\t4\nBrand#x\tSMALL POLISHED STEEL\t36\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t9\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t4\nBrand#x\tSTANDARD ANODIZED TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED 
COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t14\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t23\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t4\nBrand#x\tSTANDARD BRUSHED TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t23\t4\nBrand#x\tSTANDARD BRUSHED TIN\t36\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t45\t4\nBrand#x\tSTANDARD PLATED BRASS\t9\t4\nBrand#x\tSTANDARD PLATED BRASS\t36\t4\nBrand#x\tSTANDARD PLATED BRASS\t49\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t14\t4\nBrand#x\tSTANDARD PLATED COPPER\t49\t4\nBrand#x\tSTANDARD PLATED NICKEL\t3\t4\nBrand#x\tSTANDARD PLATED NICKEL\t9\t4\nBrand#x\tSTANDARD PLATED NICKEL\t45\t4\nBrand#x\tSTANDARD PLATED STEEL\t9\t4\nBrand#x\tSTANDARD PLATED STEEL\t14\t4\nBrand#x\tSTANDARD PLATED STEEL\t19\t4\nBrand#x\tSTANDARD PLATED STEEL\t45\t4\nBrand#x\tSTANDARD PLATED STEEL\t49\t4\nBrand#x\tSTANDARD PLATED TIN\t36\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED BRASS\t45\t4\nBrand#x\tSTANDARD POLISHED BRASS\t49\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED COPPER\t14\t4\nBrand#x\tSTANDARD POLISHED COPPER\t23\t4\nBrand#x\tSTANDARD POLISHED COPPER\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t19\t4\nBrand#x\tSTANDARD POLISHED STEEL\t23\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t45\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED BRASS\t36\t4\nBrand#x\tECONOMY ANODIZED BRASS\t49\t4\nBrand#x\tECONOMY ANODIZED COPPER\t14\t4\nBrand#x\tECONOMY ANODIZED COPPER\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t36\t4\nBrand#x\tECONOMY ANODIZED STEEL\t49\t4\nBrand#x\tECONOMY ANODIZED TIN\t23\t4\nBrand#x\tECONOMY ANODIZED TIN\t45\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t3\t4\nBrand#x\tECONOMY BRUSHED BRASS\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t19\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t3\t4\nBrand#x\tECONOMY BRUSHED COPPER\t9\t4\nBrand#x\tECONOMY BRUSHED COPPER\t14\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t3\t4\nBrand#x\tECONOMY BRUSHED STEEL\t23\t4\nBrand#x\tECONOMY BRUSHED STEEL\t45\t4\nBrand#x\tECONOMY BRUSHED STEEL\t49\t4\nBrand#x\tECONOMY BRUSHED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY BRUSHED TIN\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t14\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED 
BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t19\t4\nBrand#x\tECONOMY BURNISHED COPPER\t45\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t19\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED STEEL\t3\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED TIN\t23\t4\nBrand#x\tECONOMY BURNISHED TIN\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t45\t4\nBrand#x\tECONOMY PLATED COPPER\t19\t4\nBrand#x\tECONOMY PLATED COPPER\t49\t4\nBrand#x\tECONOMY PLATED NICKEL\t3\t4\nBrand#x\tECONOMY PLATED NICKEL\t9\t4\nBrand#x\tECONOMY PLATED NICKEL\t19\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t49\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t3\t4\nBrand#x\tECONOMY PLATED TIN\t9\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t23\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED BRASS\t14\t4\nBrand#x\tECONOMY POLISHED COPPER\t3\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t49\t4\nBrand#x\tECONOMY POLISHED NICKEL\t3\t4\nBrand#x\tECONOMY POLISHED NICKEL\t14\t4\nBrand#x\tECONOMY POLISHED NICKEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t3\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t36\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t23\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t36\t4\nBrand#x\tLARGE ANODIZED COPPER\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t49\t4\nBrand#x\tLARGE ANODIZED NICKEL\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t45\t4\nBrand#x\tLARGE ANODIZED STEEL\t3\t4\nBrand#x\tLARGE ANODIZED STEEL\t9\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t45\t4\nBrand#x\tLARGE ANODIZED STEEL\t49\t4\nBrand#x\tLARGE ANODIZED TIN\t9\t4\nBrand#x\tLARGE ANODIZED TIN\t19\t4\nBrand#x\tLARGE ANODIZED TIN\t36\t4\nBrand#x\tLARGE ANODIZED TIN\t45\t4\nBrand#x\tLARGE ANODIZED TIN\t49\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED BRASS\t23\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t45\t4\nBrand#x\tLARGE BRUSHED BRASS\t49\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t19\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED COPPER\t49\t4\nBrand#x\tLARGE BRUSHED NICKEL\t36\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t19\t4\nBrand#x\tLARGE BRUSHED STEEL\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t36\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t49\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t23\t4\nBrand#x\tLARGE BURNISHED BRASS\t45\t4\nBrand#x\tLARGE BURNISHED COPPER\t3\t4\nBrand#x\tLARGE BURNISHED COPPER\t36\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t19\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t9\t4\nBrand#x\tLARGE BURNISHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED TIN\t45\t4\nBrand#x\tLARGE BURNISHED TIN\t49\t4\nBrand#x\tLARGE 
PLATED BRASS\t36\t4\nBrand#x\tLARGE PLATED COPPER\t3\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t45\t4\nBrand#x\tLARGE PLATED NICKEL\t49\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t23\t4\nBrand#x\tLARGE PLATED TIN\t45\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED COPPER\t3\t4\nBrand#x\tLARGE POLISHED COPPER\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t14\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t3\t4\nBrand#x\tLARGE POLISHED STEEL\t14\t4\nBrand#x\tLARGE POLISHED STEEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t49\t4\nBrand#x\tLARGE POLISHED TIN\t23\t4\nBrand#x\tLARGE POLISHED TIN\t45\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t23\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t49\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t23\t4\nBrand#x\tMEDIUM BRUSHED TIN\t49\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t4\nBrand#x\tMEDIUM BURNISHED TIN\t3\t4\nBrand#x\tMEDIUM BURNISHED TIN\t9\t4\nBrand#x\tMEDIUM BURNISHED TIN\t14\t4\nBrand#x\tMEDIUM BURNISHED TIN\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t23\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED NICKEL\t9\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t19\t4\nBrand#x\tMEDIUM PLATED NICKEL\t36\t4\nBrand#x\tMEDIUM PLATED STEEL\t3\t4\nBrand#x\tMEDIUM PLATED STEEL\t36\t4\nBrand#x\tMEDIUM PLATED TIN\t19\t4\nBrand#x\tMEDIUM PLATED TIN\t45\t4\nBrand#x\tPROMO ANODIZED BRASS\t23\t4\nBrand#x\tPROMO ANODIZED BRASS\t45\t4\nBrand#x\tPROMO ANODIZED COPPER\t3\t4\nBrand#x\tPROMO ANODIZED COPPER\t9\t4\nBrand#x\tPROMO ANODIZED COPPER\t14\t4\nBrand#x\tPROMO ANODIZED COPPER\t49\t4\nBrand#x\tPROMO ANODIZED NICKEL\t3\t4\nBrand#x\tPROMO ANODIZED NICKEL\t49\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t19\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t9\t4\nBrand#x\tPROMO ANODIZED TIN\t14\t4\nBrand#x\tPROMO ANODIZED TIN\t36\t4\nBrand#x\tPROMO ANODIZED TIN\t49\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t23\t4\nBrand#x\tPROMO BRUSHED BRASS\t36\t4\nBrand#x\tPROMO BRUSHED BRASS\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t49\t4\nBrand#x\tPROMO BRUSHED COPPER\t14\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t45\t4\nBrand#x\tPROMO 
BRUSHED COPPER\t49\t4\nBrand#x\tPROMO BRUSHED NICKEL\t9\t4\nBrand#x\tPROMO BRUSHED NICKEL\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t49\t4\nBrand#x\tPROMO BRUSHED TIN\t14\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t9\t4\nBrand#x\tPROMO BURNISHED BRASS\t14\t4\nBrand#x\tPROMO BURNISHED BRASS\t23\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t19\t4\nBrand#x\tPROMO BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t23\t4\nBrand#x\tPROMO BURNISHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO PLATED BRASS\t49\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t9\t4\nBrand#x\tPROMO PLATED COPPER\t14\t4\nBrand#x\tPROMO PLATED COPPER\t36\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t14\t4\nBrand#x\tPROMO PLATED NICKEL\t49\t4\nBrand#x\tPROMO PLATED STEEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t9\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED TIN\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t3\t4\nBrand#x\tPROMO POLISHED BRASS\t14\t4\nBrand#x\tPROMO POLISHED BRASS\t19\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t19\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED NICKEL\t23\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED NICKEL\t49\t4\nBrand#x\tPROMO POLISHED STEEL\t14\t4\nBrand#x\tPROMO POLISHED STEEL\t23\t4\nBrand#x\tPROMO POLISHED TIN\t9\t4\nBrand#x\tSMALL ANODIZED BRASS\t14\t4\nBrand#x\tSMALL ANODIZED BRASS\t23\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED NICKEL\t9\t4\nBrand#x\tSMALL ANODIZED NICKEL\t14\t4\nBrand#x\tSMALL ANODIZED NICKEL\t19\t4\nBrand#x\tSMALL ANODIZED NICKEL\t36\t4\nBrand#x\tSMALL ANODIZED NICKEL\t45\t4\nBrand#x\tSMALL ANODIZED NICKEL\t49\t4\nBrand#x\tSMALL ANODIZED STEEL\t3\t4\nBrand#x\tSMALL ANODIZED STEEL\t23\t4\nBrand#x\tSMALL ANODIZED STEEL\t49\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL ANODIZED TIN\t9\t4\nBrand#x\tSMALL ANODIZED TIN\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t9\t4\nBrand#x\tSMALL BRUSHED BRASS\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t9\t4\nBrand#x\tSMALL BRUSHED COPPER\t14\t4\nBrand#x\tSMALL BRUSHED NICKEL\t14\t4\nBrand#x\tSMALL BRUSHED NICKEL\t36\t4\nBrand#x\tSMALL BRUSHED NICKEL\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t3\t4\nBrand#x\tSMALL BRUSHED STEEL\t9\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t9\t4\nBrand#x\tSMALL BRUSHED TIN\t23\t4\nBrand#x\tSMALL BURNISHED BRASS\t9\t4\nBrand#x\tSMALL BURNISHED BRASS\t14\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED BRASS\t45\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t19\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED COPPER\t45\t4\nBrand#x\tSMALL BURNISHED COPPER\t49\t4\nBrand#x\tSMALL BURNISHED NICKEL\t9\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED 
NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED NICKEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t45\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t19\t4\nBrand#x\tSMALL PLATED BRASS\t9\t4\nBrand#x\tSMALL PLATED BRASS\t14\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED COPPER\t9\t4\nBrand#x\tSMALL PLATED COPPER\t19\t4\nBrand#x\tSMALL PLATED COPPER\t23\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED COPPER\t49\t4\nBrand#x\tSMALL PLATED NICKEL\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED NICKEL\t23\t4\nBrand#x\tSMALL PLATED STEEL\t23\t4\nBrand#x\tSMALL PLATED STEEL\t36\t4\nBrand#x\tSMALL PLATED TIN\t3\t4\nBrand#x\tSMALL PLATED TIN\t23\t4\nBrand#x\tSMALL PLATED TIN\t45\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t14\t4\nBrand#x\tSMALL POLISHED BRASS\t19\t4\nBrand#x\tSMALL POLISHED BRASS\t23\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL POLISHED COPPER\t3\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t45\t4\nBrand#x\tSMALL POLISHED NICKEL\t36\t4\nBrand#x\tSMALL POLISHED STEEL\t23\t4\nBrand#x\tSMALL POLISHED STEEL\t36\t4\nBrand#x\tSMALL POLISHED STEEL\t45\t4\nBrand#x\tSMALL POLISHED STEEL\t49\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t36\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t49\t4\nBrand#x\tSTANDARD ANODIZED TIN\t3\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t49\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t49\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t36\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t3\t4\nBrand#x\tSTANDARD BURNISHED TIN\t23\t4\nBrand#x\tSTANDARD BURNISHED TIN\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t49\t4\nBrand#x\tSTANDARD PLATED BRASS\t14\t4\nBrand#x\tSTANDARD PLATED BRASS\t19\t4\nBrand#x\tSTANDARD PLATED BRASS\t23\t4\nBrand#x\tSTANDARD PLATED COPPER\t3\t4\nBrand#x\tSTANDARD PLATED COPPER\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t3\t4\nBrand#x\tSTANDARD PLATED NICKEL\t9\t4\nBrand#x\tSTANDARD PLATED NICKEL\t23\t4\nBrand#x\tSTANDARD PLATED NICKEL\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t9\t4\nBrand#x\tSTANDARD PLATED STEEL\t14\t4\nBrand#x\tSTANDARD PLATED 
STEEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t49\t4\nBrand#x\tSTANDARD PLATED TIN\t14\t4\nBrand#x\tSTANDARD PLATED TIN\t36\t4\nBrand#x\tSTANDARD PLATED TIN\t45\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t9\t4\nBrand#x\tSTANDARD POLISHED BRASS\t19\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t3\t4\nBrand#x\tSTANDARD POLISHED STEEL\t36\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t49\t4\nBrand#x\tECONOMY ANODIZED BRASS\t3\t4\nBrand#x\tECONOMY ANODIZED BRASS\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t14\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t9\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t19\t4\nBrand#x\tECONOMY ANODIZED STEEL\t23\t4\nBrand#x\tECONOMY ANODIZED TIN\t3\t4\nBrand#x\tECONOMY ANODIZED TIN\t45\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t14\t4\nBrand#x\tECONOMY BRUSHED BRASS\t19\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED COPPER\t9\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t23\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t4\nBrand#x\tECONOMY BRUSHED STEEL\t3\t4\nBrand#x\tECONOMY BRUSHED STEEL\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t3\t4\nBrand#x\tECONOMY BRUSHED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t14\t4\nBrand#x\tECONOMY BRUSHED TIN\t49\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED COPPER\t9\t4\nBrand#x\tECONOMY BURNISHED COPPER\t14\t4\nBrand#x\tECONOMY BURNISHED COPPER\t19\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t49\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t23\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED STEEL\t3\t4\nBrand#x\tECONOMY BURNISHED STEEL\t45\t4\nBrand#x\tECONOMY BURNISHED STEEL\t49\t4\nBrand#x\tECONOMY BURNISHED TIN\t9\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t49\t4\nBrand#x\tECONOMY PLATED BRASS\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t49\t4\nBrand#x\tECONOMY PLATED COPPER\t3\t4\nBrand#x\tECONOMY PLATED COPPER\t9\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED COPPER\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t3\t4\nBrand#x\tECONOMY PLATED STEEL\t14\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED STEEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED BRASS\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t3\t4\nBrand#x\tECONOMY POLISHED COPPER\t14\t4\nBrand#x\tECONOMY POLISHED COPPER\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t49\t4\nBrand#x\tECONOMY POLISHED NICKEL\t45\t4\nBrand#x\tECONOMY POLISHED 
NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t49\t4\nBrand#x\tECONOMY POLISHED TIN\t3\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t23\t4\nBrand#x\tECONOMY POLISHED TIN\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t49\t4\nBrand#x\tLARGE ANODIZED BRASS\t9\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED BRASS\t36\t4\nBrand#x\tLARGE ANODIZED BRASS\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t3\t4\nBrand#x\tLARGE ANODIZED COPPER\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED STEEL\t19\t4\nBrand#x\tLARGE ANODIZED STEEL\t23\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t49\t4\nBrand#x\tLARGE ANODIZED TIN\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED BRASS\t14\t4\nBrand#x\tLARGE BRUSHED BRASS\t19\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t23\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED STEEL\t14\t4\nBrand#x\tLARGE BRUSHED STEEL\t23\t4\nBrand#x\tLARGE BRUSHED TIN\t3\t4\nBrand#x\tLARGE BRUSHED TIN\t9\t4\nBrand#x\tLARGE BRUSHED TIN\t19\t4\nBrand#x\tLARGE BRUSHED TIN\t23\t4\nBrand#x\tLARGE BRUSHED TIN\t36\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t23\t4\nBrand#x\tLARGE BURNISHED COPPER\t19\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t9\t4\nBrand#x\tLARGE BURNISHED NICKEL\t14\t4\nBrand#x\tLARGE BURNISHED NICKEL\t19\t4\nBrand#x\tLARGE BURNISHED NICKEL\t36\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t49\t4\nBrand#x\tLARGE BURNISHED STEEL\t19\t4\nBrand#x\tLARGE BURNISHED STEEL\t45\t4\nBrand#x\tLARGE BURNISHED TIN\t3\t4\nBrand#x\tLARGE BURNISHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED TIN\t14\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t49\t4\nBrand#x\tLARGE PLATED BRASS\t9\t4\nBrand#x\tLARGE PLATED BRASS\t14\t4\nBrand#x\tLARGE PLATED BRASS\t19\t4\nBrand#x\tLARGE PLATED COPPER\t3\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED STEEL\t9\t4\nBrand#x\tLARGE PLATED STEEL\t14\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t23\t4\nBrand#x\tLARGE PLATED TIN\t45\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t14\t4\nBrand#x\tLARGE POLISHED BRASS\t19\t4\nBrand#x\tLARGE POLISHED BRASS\t49\t4\nBrand#x\tLARGE POLISHED COPPER\t9\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t49\t4\nBrand#x\tLARGE POLISHED NICKEL\t3\t4\nBrand#x\tLARGE POLISHED NICKEL\t23\t4\nBrand#x\tLARGE POLISHED NICKEL\t36\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t19\t4\nBrand#x\tLARGE POLISHED STEEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t36\t4\nBrand#x\tLARGE POLISHED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t19\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED 
TIN\t49\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t23\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t23\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t45\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t19\t4\nBrand#x\tMEDIUM BRUSHED TIN\t36\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t19\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t45\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t19\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED TIN\t3\t4\nBrand#x\tMEDIUM BURNISHED TIN\t14\t4\nBrand#x\tMEDIUM BURNISHED TIN\t19\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t36\t4\nBrand#x\tMEDIUM BURNISHED TIN\t45\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t23\t4\nBrand#x\tMEDIUM PLATED BRASS\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t3\t4\nBrand#x\tMEDIUM PLATED COPPER\t14\t4\nBrand#x\tMEDIUM PLATED COPPER\t23\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t23\t4\nBrand#x\tMEDIUM PLATED STEEL\t36\t4\nBrand#x\tMEDIUM PLATED STEEL\t45\t4\nBrand#x\tMEDIUM PLATED TIN\t3\t4\nBrand#x\tMEDIUM PLATED TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t19\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t45\t4\nBrand#x\tPROMO ANODIZED BRASS\t49\t4\nBrand#x\tPROMO ANODIZED COPPER\t3\t4\nBrand#x\tPROMO ANODIZED COPPER\t9\t4\nBrand#x\tPROMO ANODIZED COPPER\t14\t4\nBrand#x\tPROMO ANODIZED NICKEL\t3\t4\nBrand#x\tPROMO ANODIZED NICKEL\t36\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t23\t4\nBrand#x\tPROMO ANODIZED TIN\t3\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t3\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t36\t4\nBrand#x\tPROMO BRUSHED BRASS\t49\t4\nBrand#x\tPROMO BRUSHED COPPER\t3\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t14\t4\nBrand#x\tPROMO BRUSHED COPPER\t45\t4\nBrand#x\tPROMO BRUSHED NICKEL\t45\t4\nBrand#x\tPROMO BRUSHED STEEL\t3\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t23\t4\nBrand#x\tPROMO BRUSHED STEEL\t45\t4\nBrand#x\tPROMO BRUSHED TIN\t9\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t49\t4\nBrand#x\tPROMO BURNISHED COPPER\t14\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED COPPER\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t49\t4\nBrand#x\tPROMO BURNISHED NICKEL\t9\t4\nBrand#x\tPROMO BURNISHED NICKEL\t19\t4\nBrand#x\tPROMO 
BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED NICKEL\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t49\t4\nBrand#x\tPROMO BURNISHED STEEL\t3\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t45\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO PLATED BRASS\t23\t4\nBrand#x\tPROMO PLATED BRASS\t45\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t45\t4\nBrand#x\tPROMO PLATED COPPER\t49\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED TIN\t23\t4\nBrand#x\tPROMO PLATED TIN\t36\t4\nBrand#x\tPROMO PLATED TIN\t45\t4\nBrand#x\tPROMO POLISHED BRASS\t14\t4\nBrand#x\tPROMO POLISHED BRASS\t36\t4\nBrand#x\tPROMO POLISHED BRASS\t45\t4\nBrand#x\tPROMO POLISHED COPPER\t23\t4\nBrand#x\tPROMO POLISHED COPPER\t45\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED NICKEL\t45\t4\nBrand#x\tPROMO POLISHED NICKEL\t49\t4\nBrand#x\tPROMO POLISHED STEEL\t3\t4\nBrand#x\tPROMO POLISHED STEEL\t9\t4\nBrand#x\tPROMO POLISHED STEEL\t14\t4\nBrand#x\tPROMO POLISHED STEEL\t23\t4\nBrand#x\tPROMO POLISHED STEEL\t49\t4\nBrand#x\tPROMO POLISHED TIN\t3\t4\nBrand#x\tPROMO POLISHED TIN\t19\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tPROMO POLISHED TIN\t45\t4\nBrand#x\tPROMO POLISHED TIN\t49\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t14\t4\nBrand#x\tSMALL ANODIZED BRASS\t19\t4\nBrand#x\tSMALL ANODIZED BRASS\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t49\t4\nBrand#x\tSMALL ANODIZED COPPER\t3\t4\nBrand#x\tSMALL ANODIZED COPPER\t14\t4\nBrand#x\tSMALL ANODIZED COPPER\t19\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED NICKEL\t23\t4\nBrand#x\tSMALL ANODIZED STEEL\t9\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t23\t4\nBrand#x\tSMALL ANODIZED TIN\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t45\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED BRASS\t14\t4\nBrand#x\tSMALL BRUSHED BRASS\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t14\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t19\t4\nBrand#x\tSMALL BRUSHED NICKEL\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t23\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t3\t4\nBrand#x\tSMALL BRUSHED TIN\t14\t4\nBrand#x\tSMALL BRUSHED TIN\t49\t4\nBrand#x\tSMALL BURNISHED BRASS\t3\t4\nBrand#x\tSMALL BURNISHED BRASS\t45\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t49\t4\nBrand#x\tSMALL BURNISHED NICKEL\t9\t4\nBrand#x\tSMALL BURNISHED NICKEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t3\t4\nBrand#x\tSMALL BURNISHED STEEL\t9\t4\nBrand#x\tSMALL BURNISHED STEEL\t23\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t19\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t45\t4\nBrand#x\tSMALL PLATED BRASS\t9\t4\nBrand#x\tSMALL PLATED BRASS\t14\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED BRASS\t49\t4\nBrand#x\tSMALL PLATED COPPER\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED NICKEL\t23\t4\nBrand#x\tSMALL PLATED NICKEL\t45\t4\nBrand#x\tSMALL PLATED STEEL\t3\t4\nBrand#x\tSMALL PLATED STEEL\t14\t4\nBrand#x\tSMALL PLATED STEEL\t23\t4\nBrand#x\tSMALL PLATED 
TIN\t3\t4\nBrand#x\tSMALL PLATED TIN\t45\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t3\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t14\t4\nBrand#x\tSMALL POLISHED BRASS\t23\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t9\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t49\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED NICKEL\t49\t4\nBrand#x\tSMALL POLISHED STEEL\t9\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED STEEL\t36\t4\nBrand#x\tSMALL POLISHED STEEL\t49\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t9\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSMALL POLISHED TIN\t45\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t45\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t23\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t4\nBrand#x\tSTANDARD ANODIZED TIN\t3\t4\nBrand#x\tSTANDARD ANODIZED TIN\t36\t4\nBrand#x\tSTANDARD ANODIZED TIN\t49\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t14\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t49\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t4\nBrand#x\tSTANDARD BRUSHED TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t49\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t19\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t49\t4\nBrand#x\tSTANDARD BURNISHED TIN\t3\t4\nBrand#x\tSTANDARD BURNISHED TIN\t23\t4\nBrand#x\tSTANDARD BURNISHED TIN\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t49\t4\nBrand#x\tSTANDARD PLATED BRASS\t9\t4\nBrand#x\tSTANDARD PLATED BRASS\t14\t4\nBrand#x\tSTANDARD PLATED COPPER\t3\t4\nBrand#x\tSTANDARD PLATED COPPER\t14\t4\nBrand#x\tSTANDARD PLATED COPPER\t23\t4\nBrand#x\tSTANDARD PLATED COPPER\t49\t4\nBrand#x\tSTANDARD PLATED NICKEL\t3\t4\nBrand#x\tSTANDARD PLATED NICKEL\t23\t4\nBrand#x\tSTANDARD PLATED NICKEL\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t45\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t9\t4\nBrand#x\tSTANDARD PLATED STEEL\t14\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t36\t4\nBrand#x\tSTANDARD PLATED STEEL\t49\t4\nBrand#x\tSTANDARD PLATED TIN\t3\t4\nBrand#x\tSTANDARD PLATED TIN\t49\t4\nBrand#x\tSTANDARD POLISHED BRASS\t9\t4\nBrand#x\tSTANDARD POLISHED BRASS\t14\t4\nBrand#x\tSTANDARD POLISHED 
BRASS\t19\t4\nBrand#x\tSTANDARD POLISHED BRASS\t36\t4\nBrand#x\tSTANDARD POLISHED BRASS\t49\t4\nBrand#x\tSTANDARD POLISHED COPPER\t14\t4\nBrand#x\tSTANDARD POLISHED COPPER\t19\t4\nBrand#x\tSTANDARD POLISHED COPPER\t23\t4\nBrand#x\tSTANDARD POLISHED COPPER\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t14\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t4\nBrand#x\tSTANDARD POLISHED STEEL\t3\t4\nBrand#x\tSTANDARD POLISHED STEEL\t23\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t36\t4\nBrand#x\tSTANDARD POLISHED TIN\t45\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t14\t4\nBrand#x\tECONOMY ANODIZED COPPER\t45\t4\nBrand#x\tECONOMY ANODIZED COPPER\t49\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t9\t4\nBrand#x\tECONOMY ANODIZED STEEL\t45\t4\nBrand#x\tECONOMY ANODIZED TIN\t23\t4\nBrand#x\tECONOMY BRUSHED BRASS\t3\t4\nBrand#x\tECONOMY BRUSHED BRASS\t23\t4\nBrand#x\tECONOMY BRUSHED COPPER\t3\t4\nBrand#x\tECONOMY BRUSHED COPPER\t9\t4\nBrand#x\tECONOMY BRUSHED COPPER\t14\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t4\nBrand#x\tECONOMY BRUSHED STEEL\t45\t4\nBrand#x\tECONOMY BRUSHED STEEL\t49\t4\nBrand#x\tECONOMY BRUSHED TIN\t3\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t9\t4\nBrand#x\tECONOMY BURNISHED BRASS\t14\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t49\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t4\nBrand#x\tECONOMY BURNISHED STEEL\t3\t4\nBrand#x\tECONOMY BURNISHED STEEL\t23\t4\nBrand#x\tECONOMY BURNISHED STEEL\t49\t4\nBrand#x\tECONOMY BURNISHED TIN\t9\t4\nBrand#x\tECONOMY BURNISHED TIN\t23\t4\nBrand#x\tECONOMY BURNISHED TIN\t36\t4\nBrand#x\tECONOMY BURNISHED TIN\t45\t4\nBrand#x\tECONOMY PLATED BRASS\t9\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED COPPER\t23\t4\nBrand#x\tECONOMY PLATED COPPER\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t9\t4\nBrand#x\tECONOMY PLATED NICKEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t9\t4\nBrand#x\tECONOMY PLATED STEEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t23\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED TIN\t45\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t3\t4\nBrand#x\tECONOMY POLISHED NICKEL\t9\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t14\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t23\t4\nBrand#x\tECONOMY POLISHED STEEL\t36\t4\nBrand#x\tECONOMY POLISHED TIN\t3\t4\nBrand#x\tECONOMY POLISHED TIN\t9\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t36\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED 
COPPER\t3\t4\nBrand#x\tLARGE ANODIZED COPPER\t9\t4\nBrand#x\tLARGE ANODIZED COPPER\t19\t4\nBrand#x\tLARGE ANODIZED COPPER\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t36\t4\nBrand#x\tLARGE ANODIZED NICKEL\t9\t4\nBrand#x\tLARGE ANODIZED NICKEL\t14\t4\nBrand#x\tLARGE ANODIZED NICKEL\t19\t4\nBrand#x\tLARGE ANODIZED NICKEL\t49\t4\nBrand#x\tLARGE ANODIZED STEEL\t3\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED TIN\t19\t4\nBrand#x\tLARGE ANODIZED TIN\t23\t4\nBrand#x\tLARGE ANODIZED TIN\t45\t4\nBrand#x\tLARGE ANODIZED TIN\t49\t4\nBrand#x\tLARGE BRUSHED BRASS\t9\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t9\t4\nBrand#x\tLARGE BRUSHED COPPER\t19\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED NICKEL\t19\t4\nBrand#x\tLARGE BRUSHED NICKEL\t23\t4\nBrand#x\tLARGE BRUSHED NICKEL\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED STEEL\t45\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t3\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t36\t4\nBrand#x\tLARGE BURNISHED BRASS\t3\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t23\t4\nBrand#x\tLARGE BURNISHED BRASS\t45\t4\nBrand#x\tLARGE BURNISHED COPPER\t36\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t14\t4\nBrand#x\tLARGE BURNISHED NICKEL\t19\t4\nBrand#x\tLARGE BURNISHED NICKEL\t36\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED TIN\t19\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE BURNISHED TIN\t49\t4\nBrand#x\tLARGE PLATED BRASS\t3\t4\nBrand#x\tLARGE PLATED COPPER\t9\t4\nBrand#x\tLARGE PLATED COPPER\t49\t4\nBrand#x\tLARGE PLATED NICKEL\t9\t4\nBrand#x\tLARGE PLATED NICKEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t9\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED TIN\t9\t4\nBrand#x\tLARGE POLISHED BRASS\t36\t4\nBrand#x\tLARGE POLISHED COPPER\t23\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t3\t4\nBrand#x\tLARGE POLISHED NICKEL\t14\t4\nBrand#x\tLARGE POLISHED NICKEL\t19\t4\nBrand#x\tLARGE POLISHED NICKEL\t36\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t3\t4\nBrand#x\tLARGE POLISHED STEEL\t9\t4\nBrand#x\tLARGE POLISHED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t49\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t19\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t36\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED 
COPPER\t19\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t14\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t49\t4\nBrand#x\tMEDIUM BURNISHED TIN\t36\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t9\t4\nBrand#x\tMEDIUM PLATED BRASS\t19\t4\nBrand#x\tMEDIUM PLATED BRASS\t36\t4\nBrand#x\tMEDIUM PLATED BRASS\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t3\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t9\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t45\t4\nBrand#x\tMEDIUM PLATED STEEL\t3\t4\nBrand#x\tMEDIUM PLATED STEEL\t9\t4\nBrand#x\tMEDIUM PLATED STEEL\t14\t4\nBrand#x\tMEDIUM PLATED STEEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t23\t4\nBrand#x\tMEDIUM PLATED STEEL\t45\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t19\t4\nBrand#x\tPROMO ANODIZED BRASS\t14\t4\nBrand#x\tPROMO ANODIZED BRASS\t19\t4\nBrand#x\tPROMO ANODIZED COPPER\t3\t4\nBrand#x\tPROMO ANODIZED COPPER\t9\t4\nBrand#x\tPROMO ANODIZED COPPER\t45\t4\nBrand#x\tPROMO ANODIZED NICKEL\t14\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED NICKEL\t36\t4\nBrand#x\tPROMO ANODIZED NICKEL\t45\t4\nBrand#x\tPROMO ANODIZED STEEL\t3\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED TIN\t45\t4\nBrand#x\tPROMO BRUSHED BRASS\t19\t4\nBrand#x\tPROMO BRUSHED BRASS\t23\t4\nBrand#x\tPROMO BRUSHED BRASS\t49\t4\nBrand#x\tPROMO BRUSHED COPPER\t3\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t23\t4\nBrand#x\tPROMO BRUSHED COPPER\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED NICKEL\t36\t4\nBrand#x\tPROMO BRUSHED STEEL\t3\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED STEEL\t45\t4\nBrand#x\tPROMO BRUSHED STEEL\t49\t4\nBrand#x\tPROMO BRUSHED TIN\t3\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t45\t4\nBrand#x\tPROMO BRUSHED TIN\t49\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED BRASS\t49\t4\nBrand#x\tPROMO BURNISHED COPPER\t9\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t9\t4\nBrand#x\tPROMO BURNISHED STEEL\t14\t4\nBrand#x\tPROMO BURNISHED STEEL\t23\t4\nBrand#x\tPROMO BURNISHED STEEL\t36\t4\nBrand#x\tPROMO BURNISHED STEEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t9\t4\nBrand#x\tPROMO BURNISHED TIN\t14\t4\nBrand#x\tPROMO BURNISHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t19\t4\nBrand#x\tPROMO PLATED BRASS\t23\t4\nBrand#x\tPROMO PLATED BRASS\t36\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED COPPER\t23\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t36\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED TIN\t14\t4\nBrand#x\tPROMO PLATED TIN\t19\t4\nBrand#x\tPROMO PLATED TIN\t49\t4\nBrand#x\tPROMO POLISHED BRASS\t9\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t3\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED 
NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t19\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED NICKEL\t45\t4\nBrand#x\tPROMO POLISHED STEEL\t3\t4\nBrand#x\tPROMO POLISHED STEEL\t9\t4\nBrand#x\tPROMO POLISHED STEEL\t14\t4\nBrand#x\tPROMO POLISHED STEEL\t36\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t49\t4\nBrand#x\tSMALL ANODIZED COPPER\t49\t4\nBrand#x\tSMALL ANODIZED NICKEL\t9\t4\nBrand#x\tSMALL ANODIZED NICKEL\t23\t4\nBrand#x\tSMALL ANODIZED NICKEL\t49\t4\nBrand#x\tSMALL ANODIZED STEEL\t9\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED STEEL\t49\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t19\t4\nBrand#x\tSMALL BRUSHED COPPER\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED NICKEL\t3\t4\nBrand#x\tSMALL BRUSHED NICKEL\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t36\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t3\t4\nBrand#x\tSMALL BRUSHED STEEL\t14\t4\nBrand#x\tSMALL BRUSHED STEEL\t23\t4\nBrand#x\tSMALL BRUSHED TIN\t9\t4\nBrand#x\tSMALL BRUSHED TIN\t14\t4\nBrand#x\tSMALL BURNISHED BRASS\t3\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED BRASS\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t49\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED COPPER\t49\t4\nBrand#x\tSMALL BURNISHED NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t45\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t19\t4\nBrand#x\tSMALL BURNISHED TIN\t23\t4\nBrand#x\tSMALL BURNISHED TIN\t45\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t14\t4\nBrand#x\tSMALL PLATED BRASS\t19\t4\nBrand#x\tSMALL PLATED COPPER\t9\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t9\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t9\t4\nBrand#x\tSMALL PLATED STEEL\t49\t4\nBrand#x\tSMALL PLATED TIN\t9\t4\nBrand#x\tSMALL PLATED TIN\t45\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t36\t4\nBrand#x\tSMALL POLISHED BRASS\t45\t4\nBrand#x\tSMALL POLISHED COPPER\t3\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED NICKEL\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t23\t4\nBrand#x\tSMALL POLISHED NICKEL\t36\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED NICKEL\t49\t4\nBrand#x\tSMALL POLISHED STEEL\t45\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSMALL POLISHED TIN\t36\t4\nBrand#x\tSMALL POLISHED TIN\t45\t4\nBrand#x\tSMALL POLISHED TIN\t49\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t14\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t9\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t19\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t36\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t4\nBrand#x\tSTANDARD ANODIZED TIN\t9\t4\nBrand#x\tSTANDARD ANODIZED 
TIN\t23\t4\nBrand#x\tSTANDARD ANODIZED TIN\t36\t4\nBrand#x\tSTANDARD ANODIZED TIN\t49\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t49\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t49\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t45\t4\nBrand#x\tSTANDARD BRUSHED TIN\t14\t4\nBrand#x\tSTANDARD BRUSHED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED TIN\t23\t4\nBrand#x\tSTANDARD BRUSHED TIN\t45\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t45\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t23\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t45\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t3\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD PLATED BRASS\t3\t4\nBrand#x\tSTANDARD PLATED BRASS\t9\t4\nBrand#x\tSTANDARD PLATED BRASS\t14\t4\nBrand#x\tSTANDARD PLATED COPPER\t14\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED COPPER\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t19\t4\nBrand#x\tSTANDARD PLATED NICKEL\t23\t4\nBrand#x\tSTANDARD PLATED NICKEL\t36\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t49\t4\nBrand#x\tSTANDARD PLATED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED BRASS\t19\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED COPPER\t3\t4\nBrand#x\tSTANDARD POLISHED COPPER\t19\t4\nBrand#x\tSTANDARD POLISHED COPPER\t23\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t4\nBrand#x\tSTANDARD POLISHED STEEL\t3\t4\nBrand#x\tSTANDARD POLISHED STEEL\t14\t4\nBrand#x\tSTANDARD POLISHED STEEL\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t45\t4\nBrand#x\tECONOMY ANODIZED BRASS\t3\t4\nBrand#x\tECONOMY ANODIZED BRASS\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t14\t4\nBrand#x\tECONOMY ANODIZED COPPER\t49\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t49\t4\nBrand#x\tECONOMY ANODIZED STEEL\t3\t4\nBrand#x\tECONOMY ANODIZED STEEL\t19\t4\nBrand#x\tECONOMY ANODIZED STEEL\t36\t4\nBrand#x\tECONOMY ANODIZED STEEL\t49\t4\nBrand#x\tECONOMY ANODIZED TIN\t19\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED BRASS\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t14\t4\nBrand#x\tECONOMY BRUSHED COPPER\t9\t4\nBrand#x\tECONOMY BRUSHED COPPER\t14\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t23\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t4\nBrand#x\tECONOMY BRUSHED STEEL\t9\t4\nBrand#x\tECONOMY BRUSHED STEEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t36\t4\nBrand#x\tECONOMY BRUSHED TIN\t14\t4\nBrand#x\tECONOMY BRUSHED TIN\t23\t4\nBrand#x\tECONOMY 
BRUSHED TIN\t45\t4\nBrand#x\tECONOMY BRUSHED TIN\t49\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t14\t4\nBrand#x\tECONOMY BURNISHED BRASS\t19\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED COPPER\t49\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t3\t4\nBrand#x\tECONOMY BURNISHED STEEL\t9\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED STEEL\t49\t4\nBrand#x\tECONOMY BURNISHED TIN\t9\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t36\t4\nBrand#x\tECONOMY BURNISHED TIN\t45\t4\nBrand#x\tECONOMY PLATED BRASS\t3\t4\nBrand#x\tECONOMY PLATED BRASS\t49\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t19\t4\nBrand#x\tECONOMY PLATED NICKEL\t36\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED NICKEL\t49\t4\nBrand#x\tECONOMY PLATED STEEL\t14\t4\nBrand#x\tECONOMY PLATED STEEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t23\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED BRASS\t23\t4\nBrand#x\tECONOMY POLISHED BRASS\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t45\t4\nBrand#x\tECONOMY POLISHED BRASS\t49\t4\nBrand#x\tECONOMY POLISHED COPPER\t9\t4\nBrand#x\tECONOMY POLISHED COPPER\t36\t4\nBrand#x\tECONOMY POLISHED COPPER\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t49\t4\nBrand#x\tECONOMY POLISHED NICKEL\t14\t4\nBrand#x\tECONOMY POLISHED NICKEL\t19\t4\nBrand#x\tECONOMY POLISHED NICKEL\t45\t4\nBrand#x\tECONOMY POLISHED NICKEL\t49\t4\nBrand#x\tECONOMY POLISHED STEEL\t19\t4\nBrand#x\tECONOMY POLISHED TIN\t23\t4\nBrand#x\tLARGE ANODIZED BRASS\t3\t4\nBrand#x\tLARGE ANODIZED BRASS\t9\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t3\t4\nBrand#x\tLARGE ANODIZED COPPER\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t36\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t14\t4\nBrand#x\tLARGE ANODIZED NICKEL\t19\t4\nBrand#x\tLARGE ANODIZED NICKEL\t23\t4\nBrand#x\tLARGE ANODIZED NICKEL\t36\t4\nBrand#x\tLARGE ANODIZED NICKEL\t45\t4\nBrand#x\tLARGE ANODIZED NICKEL\t49\t4\nBrand#x\tLARGE ANODIZED STEEL\t9\t4\nBrand#x\tLARGE ANODIZED STEEL\t14\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE ANODIZED TIN\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t14\t4\nBrand#x\tLARGE ANODIZED TIN\t19\t4\nBrand#x\tLARGE BRUSHED BRASS\t3\t4\nBrand#x\tLARGE BRUSHED BRASS\t23\t4\nBrand#x\tLARGE BRUSHED BRASS\t45\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t9\t4\nBrand#x\tLARGE BRUSHED COPPER\t23\t4\nBrand#x\tLARGE BRUSHED NICKEL\t3\t4\nBrand#x\tLARGE BRUSHED NICKEL\t14\t4\nBrand#x\tLARGE BRUSHED NICKEL\t19\t4\nBrand#x\tLARGE BRUSHED NICKEL\t36\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t14\t4\nBrand#x\tLARGE BRUSHED STEEL\t23\t4\nBrand#x\tLARGE BRUSHED STEEL\t49\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t49\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t23\t4\nBrand#x\tLARGE BURNISHED BRASS\t36\t4\nBrand#x\tLARGE BURNISHED BRASS\t45\t4\nBrand#x\tLARGE BURNISHED COPPER\t19\t4\nBrand#x\tLARGE BURNISHED 
COPPER\t45\t4\nBrand#x\tLARGE BURNISHED COPPER\t49\t4\nBrand#x\tLARGE BURNISHED NICKEL\t36\t4\nBrand#x\tLARGE BURNISHED STEEL\t9\t4\nBrand#x\tLARGE BURNISHED STEEL\t49\t4\nBrand#x\tLARGE BURNISHED TIN\t3\t4\nBrand#x\tLARGE BURNISHED TIN\t23\t4\nBrand#x\tLARGE BURNISHED TIN\t49\t4\nBrand#x\tLARGE PLATED BRASS\t14\t4\nBrand#x\tLARGE PLATED BRASS\t19\t4\nBrand#x\tLARGE PLATED BRASS\t45\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t23\t4\nBrand#x\tLARGE PLATED COPPER\t45\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t36\t4\nBrand#x\tLARGE PLATED STEEL\t19\t4\nBrand#x\tLARGE PLATED STEEL\t49\t4\nBrand#x\tLARGE PLATED TIN\t3\t4\nBrand#x\tLARGE PLATED TIN\t19\t4\nBrand#x\tLARGE POLISHED BRASS\t9\t4\nBrand#x\tLARGE POLISHED BRASS\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t36\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t9\t4\nBrand#x\tLARGE POLISHED TIN\t14\t4\nBrand#x\tLARGE POLISHED TIN\t19\t4\nBrand#x\tLARGE POLISHED TIN\t36\t4\nBrand#x\tLARGE POLISHED TIN\t45\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t9\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t49\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t3\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t19\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED TIN\t9\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t45\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t14\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t19\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t3\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t23\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t36\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t9\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t4\nBrand#x\tMEDIUM BRUSHED TIN\t14\t4\nBrand#x\tMEDIUM BRUSHED TIN\t49\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t36\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t23\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t3\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t45\t4\nBrand#x\tMEDIUM BURNISHED TIN\t3\t4\nBrand#x\tMEDIUM BURNISHED TIN\t19\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t36\t4\nBrand#x\tMEDIUM BURNISHED TIN\t49\t4\nBrand#x\tMEDIUM PLATED BRASS\t3\t4\nBrand#x\tMEDIUM PLATED BRASS\t23\t4\nBrand#x\tMEDIUM PLATED COPPER\t36\t4\nBrand#x\tMEDIUM PLATED COPPER\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t9\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t19\t4\nBrand#x\tMEDIUM PLATED NICKEL\t49\t4\nBrand#x\tMEDIUM PLATED STEEL\t3\t4\nBrand#x\tMEDIUM PLATED STEEL\t9\t4\nBrand#x\tMEDIUM PLATED STEEL\t36\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t3\t4\nBrand#x\tMEDIUM PLATED TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t19\t4\nBrand#x\tMEDIUM PLATED TIN\t23\t4\nBrand#x\tMEDIUM PLATED TIN\t36\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t14\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED 
COPPER\t45\t4\nBrand#x\tPROMO ANODIZED NICKEL\t9\t4\nBrand#x\tPROMO ANODIZED NICKEL\t14\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED NICKEL\t45\t4\nBrand#x\tPROMO ANODIZED STEEL\t23\t4\nBrand#x\tPROMO ANODIZED STEEL\t36\t4\nBrand#x\tPROMO ANODIZED STEEL\t49\t4\nBrand#x\tPROMO ANODIZED TIN\t3\t4\nBrand#x\tPROMO ANODIZED TIN\t9\t4\nBrand#x\tPROMO ANODIZED TIN\t14\t4\nBrand#x\tPROMO ANODIZED TIN\t23\t4\nBrand#x\tPROMO BRUSHED BRASS\t3\t4\nBrand#x\tPROMO BRUSHED BRASS\t9\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t19\t4\nBrand#x\tPROMO BRUSHED BRASS\t23\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t45\t4\nBrand#x\tPROMO BRUSHED NICKEL\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t45\t4\nBrand#x\tPROMO BRUSHED STEEL\t9\t4\nBrand#x\tPROMO BRUSHED STEEL\t36\t4\nBrand#x\tPROMO BRUSHED STEEL\t45\t4\nBrand#x\tPROMO BRUSHED TIN\t3\t4\nBrand#x\tPROMO BRUSHED TIN\t45\t4\nBrand#x\tPROMO BURNISHED BRASS\t3\t4\nBrand#x\tPROMO BURNISHED BRASS\t9\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t3\t4\nBrand#x\tPROMO BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t23\t4\nBrand#x\tPROMO BURNISHED NICKEL\t3\t4\nBrand#x\tPROMO BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED TIN\t14\t4\nBrand#x\tPROMO BURNISHED TIN\t36\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO PLATED BRASS\t9\t4\nBrand#x\tPROMO PLATED BRASS\t14\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t9\t4\nBrand#x\tPROMO PLATED NICKEL\t14\t4\nBrand#x\tPROMO PLATED NICKEL\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t23\t4\nBrand#x\tPROMO PLATED NICKEL\t45\t4\nBrand#x\tPROMO PLATED STEEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t14\t4\nBrand#x\tPROMO PLATED STEEL\t23\t4\nBrand#x\tPROMO PLATED STEEL\t36\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED TIN\t36\t4\nBrand#x\tPROMO POLISHED BRASS\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED COPPER\t14\t4\nBrand#x\tPROMO POLISHED COPPER\t36\t4\nBrand#x\tPROMO POLISHED COPPER\t45\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t36\t4\nBrand#x\tPROMO POLISHED STEEL\t14\t4\nBrand#x\tPROMO POLISHED STEEL\t19\t4\nBrand#x\tPROMO POLISHED STEEL\t23\t4\nBrand#x\tPROMO POLISHED TIN\t3\t4\nBrand#x\tPROMO POLISHED TIN\t9\t4\nBrand#x\tPROMO POLISHED TIN\t19\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tSMALL ANODIZED BRASS\t14\t4\nBrand#x\tSMALL ANODIZED BRASS\t36\t4\nBrand#x\tSMALL ANODIZED COPPER\t14\t4\nBrand#x\tSMALL ANODIZED COPPER\t45\t4\nBrand#x\tSMALL ANODIZED COPPER\t49\t4\nBrand#x\tSMALL ANODIZED NICKEL\t14\t4\nBrand#x\tSMALL ANODIZED STEEL\t14\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t14\t4\nBrand#x\tSMALL ANODIZED TIN\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t19\t4\nBrand#x\tSMALL BRUSHED BRASS\t23\t4\nBrand#x\tSMALL BRUSHED BRASS\t45\t4\nBrand#x\tSMALL BRUSHED BRASS\t49\t4\nBrand#x\tSMALL BRUSHED COPPER\t9\t4\nBrand#x\tSMALL BRUSHED COPPER\t19\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED COPPER\t45\t4\nBrand#x\tSMALL BRUSHED NICKEL\t9\t4\nBrand#x\tSMALL BRUSHED NICKEL\t14\t4\nBrand#x\tSMALL BRUSHED NICKEL\t19\t4\nBrand#x\tSMALL BRUSHED NICKEL\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t36\t4\nBrand#x\tSMALL BRUSHED STEEL\t14\t4\nBrand#x\tSMALL BRUSHED 
STEEL\t19\t4\nBrand#x\tSMALL BRUSHED TIN\t9\t4\nBrand#x\tSMALL BRUSHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED NICKEL\t3\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED STEEL\t3\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED STEEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t45\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t19\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t45\t4\nBrand#x\tSMALL BURNISHED TIN\t49\t4\nBrand#x\tSMALL PLATED BRASS\t14\t4\nBrand#x\tSMALL PLATED BRASS\t19\t4\nBrand#x\tSMALL PLATED BRASS\t23\t4\nBrand#x\tSMALL PLATED COPPER\t45\t4\nBrand#x\tSMALL PLATED NICKEL\t36\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t9\t4\nBrand#x\tSMALL PLATED STEEL\t45\t4\nBrand#x\tSMALL PLATED STEEL\t49\t4\nBrand#x\tSMALL PLATED TIN\t3\t4\nBrand#x\tSMALL PLATED TIN\t23\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t3\t4\nBrand#x\tSMALL POLISHED COPPER\t14\t4\nBrand#x\tSMALL POLISHED COPPER\t36\t4\nBrand#x\tSMALL POLISHED COPPER\t45\t4\nBrand#x\tSMALL POLISHED COPPER\t49\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED STEEL\t9\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED TIN\t9\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t3\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t49\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t23\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t4\nBrand#x\tSTANDARD ANODIZED TIN\t9\t4\nBrand#x\tSTANDARD ANODIZED TIN\t36\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t45\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t36\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t45\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t23\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t36\t4\nBrand#x\tSTANDARD BRUSHED TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED TIN\t9\t4\nBrand#x\tSTANDARD BRUSHED TIN\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t3\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t14\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t45\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t9\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t49\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t14\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t14\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t23\t4\nBrand#x\tSTANDARD BURNISHED TIN\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t49\t4\nBrand#x\tSTANDARD PLATED BRASS\t14\t4\nBrand#x\tSTANDARD PLATED BRASS\t45\t4\nBrand#x\tSTANDARD PLATED 
BRASS\t49\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t14\t4\nBrand#x\tSTANDARD PLATED COPPER\t19\t4\nBrand#x\tSTANDARD PLATED COPPER\t23\t4\nBrand#x\tSTANDARD PLATED COPPER\t49\t4\nBrand#x\tSTANDARD PLATED NICKEL\t3\t4\nBrand#x\tSTANDARD PLATED NICKEL\t9\t4\nBrand#x\tSTANDARD PLATED NICKEL\t23\t4\nBrand#x\tSTANDARD PLATED NICKEL\t49\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t9\t4\nBrand#x\tSTANDARD PLATED STEEL\t36\t4\nBrand#x\tSTANDARD PLATED STEEL\t49\t4\nBrand#x\tSTANDARD PLATED TIN\t3\t4\nBrand#x\tSTANDARD PLATED TIN\t49\t4\nBrand#x\tSTANDARD POLISHED BRASS\t9\t4\nBrand#x\tSTANDARD POLISHED BRASS\t14\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED COPPER\t9\t4\nBrand#x\tSTANDARD POLISHED COPPER\t23\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED STEEL\t3\t4\nBrand#x\tSTANDARD POLISHED STEEL\t36\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t36\t4\nBrand#x\tECONOMY ANODIZED BRASS\t9\t4\nBrand#x\tECONOMY ANODIZED BRASS\t19\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED BRASS\t45\t4\nBrand#x\tECONOMY ANODIZED BRASS\t49\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t9\t4\nBrand#x\tECONOMY ANODIZED COPPER\t23\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED COPPER\t45\t4\nBrand#x\tECONOMY ANODIZED COPPER\t49\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t14\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t45\t4\nBrand#x\tECONOMY ANODIZED STEEL\t3\t4\nBrand#x\tECONOMY ANODIZED STEEL\t14\t4\nBrand#x\tECONOMY ANODIZED STEEL\t36\t4\nBrand#x\tECONOMY ANODIZED STEEL\t45\t4\nBrand#x\tECONOMY ANODIZED TIN\t9\t4\nBrand#x\tECONOMY ANODIZED TIN\t23\t4\nBrand#x\tECONOMY ANODIZED TIN\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t23\t4\nBrand#x\tECONOMY BRUSHED COPPER\t36\t4\nBrand#x\tECONOMY BRUSHED COPPER\t49\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t19\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t45\t4\nBrand#x\tECONOMY BRUSHED STEEL\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t49\t4\nBrand#x\tECONOMY BURNISHED BRASS\t3\t4\nBrand#x\tECONOMY BURNISHED BRASS\t23\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t23\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t3\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t14\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t49\t4\nBrand#x\tECONOMY BURNISHED STEEL\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t3\t4\nBrand#x\tECONOMY BURNISHED TIN\t19\t4\nBrand#x\tECONOMY BURNISHED TIN\t49\t4\nBrand#x\tECONOMY PLATED BRASS\t9\t4\nBrand#x\tECONOMY PLATED BRASS\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t19\t4\nBrand#x\tECONOMY PLATED BRASS\t23\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED BRASS\t45\t4\nBrand#x\tECONOMY PLATED COPPER\t3\t4\nBrand#x\tECONOMY PLATED COPPER\t23\t4\nBrand#x\tECONOMY PLATED NICKEL\t3\t4\nBrand#x\tECONOMY PLATED NICKEL\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t14\t4\nBrand#x\tECONOMY PLATED STEEL\t23\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED STEEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t49\t4\nBrand#x\tECONOMY PLATED TIN\t3\t4\nBrand#x\tECONOMY PLATED TIN\t9\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED 
TIN\t19\t4\nBrand#x\tECONOMY PLATED TIN\t45\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t19\t4\nBrand#x\tECONOMY POLISHED COPPER\t14\t4\nBrand#x\tECONOMY POLISHED NICKEL\t19\t4\nBrand#x\tECONOMY POLISHED STEEL\t9\t4\nBrand#x\tECONOMY POLISHED TIN\t14\t4\nBrand#x\tECONOMY POLISHED TIN\t49\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED BRASS\t23\t4\nBrand#x\tLARGE ANODIZED BRASS\t36\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t19\t4\nBrand#x\tLARGE ANODIZED COPPER\t23\t4\nBrand#x\tLARGE ANODIZED COPPER\t36\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t14\t4\nBrand#x\tLARGE ANODIZED NICKEL\t19\t4\nBrand#x\tLARGE ANODIZED NICKEL\t36\t4\nBrand#x\tLARGE ANODIZED NICKEL\t45\t4\nBrand#x\tLARGE ANODIZED STEEL\t3\t4\nBrand#x\tLARGE ANODIZED STEEL\t19\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE ANODIZED TIN\t3\t4\nBrand#x\tLARGE ANODIZED TIN\t9\t4\nBrand#x\tLARGE ANODIZED TIN\t19\t4\nBrand#x\tLARGE ANODIZED TIN\t45\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t3\t4\nBrand#x\tLARGE BRUSHED COPPER\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t49\t4\nBrand#x\tLARGE BRUSHED NICKEL\t14\t4\nBrand#x\tLARGE BRUSHED NICKEL\t19\t4\nBrand#x\tLARGE BRUSHED NICKEL\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t9\t4\nBrand#x\tLARGE BRUSHED STEEL\t19\t4\nBrand#x\tLARGE BRUSHED STEEL\t23\t4\nBrand#x\tLARGE BRUSHED STEEL\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t19\t4\nBrand#x\tLARGE BRUSHED TIN\t45\t4\nBrand#x\tLARGE BURNISHED BRASS\t14\t4\nBrand#x\tLARGE BURNISHED BRASS\t19\t4\nBrand#x\tLARGE BURNISHED BRASS\t36\t4\nBrand#x\tLARGE BURNISHED NICKEL\t3\t4\nBrand#x\tLARGE BURNISHED NICKEL\t19\t4\nBrand#x\tLARGE BURNISHED NICKEL\t45\t4\nBrand#x\tLARGE BURNISHED STEEL\t9\t4\nBrand#x\tLARGE BURNISHED STEEL\t36\t4\nBrand#x\tLARGE BURNISHED STEEL\t45\t4\nBrand#x\tLARGE BURNISHED TIN\t9\t4\nBrand#x\tLARGE BURNISHED TIN\t23\t4\nBrand#x\tLARGE BURNISHED TIN\t36\t4\nBrand#x\tLARGE PLATED BRASS\t3\t4\nBrand#x\tLARGE PLATED BRASS\t14\t4\nBrand#x\tLARGE PLATED COPPER\t14\t4\nBrand#x\tLARGE PLATED COPPER\t36\t4\nBrand#x\tLARGE PLATED NICKEL\t9\t4\nBrand#x\tLARGE PLATED NICKEL\t14\t4\nBrand#x\tLARGE PLATED NICKEL\t19\t4\nBrand#x\tLARGE PLATED NICKEL\t45\t4\nBrand#x\tLARGE PLATED NICKEL\t49\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED TIN\t3\t4\nBrand#x\tLARGE PLATED TIN\t14\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t3\t4\nBrand#x\tLARGE POLISHED BRASS\t14\t4\nBrand#x\tLARGE POLISHED BRASS\t19\t4\nBrand#x\tLARGE POLISHED BRASS\t36\t4\nBrand#x\tLARGE POLISHED COPPER\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t23\t4\nBrand#x\tLARGE POLISHED COPPER\t36\t4\nBrand#x\tLARGE POLISHED COPPER\t49\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED NICKEL\t49\t4\nBrand#x\tLARGE POLISHED STEEL\t9\t4\nBrand#x\tLARGE POLISHED STEEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t36\t4\nBrand#x\tLARGE POLISHED TIN\t3\t4\nBrand#x\tLARGE POLISHED TIN\t9\t4\nBrand#x\tLARGE POLISHED TIN\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t23\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t3\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t14\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t36\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t14\t4\nBrand#x\tMEDIUM ANODIZED 
STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t49\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t45\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t49\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t3\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t19\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t45\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t14\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t19\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t23\t4\nBrand#x\tMEDIUM BRUSHED TIN\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t19\t4\nBrand#x\tMEDIUM BRUSHED TIN\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t3\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t14\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t19\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t49\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t23\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t23\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t36\t4\nBrand#x\tMEDIUM BURNISHED TIN\t14\t4\nBrand#x\tMEDIUM BURNISHED TIN\t49\t4\nBrand#x\tMEDIUM PLATED BRASS\t9\t4\nBrand#x\tMEDIUM PLATED BRASS\t14\t4\nBrand#x\tMEDIUM PLATED BRASS\t19\t4\nBrand#x\tMEDIUM PLATED BRASS\t45\t4\nBrand#x\tMEDIUM PLATED COPPER\t3\t4\nBrand#x\tMEDIUM PLATED COPPER\t19\t4\nBrand#x\tMEDIUM PLATED NICKEL\t3\t4\nBrand#x\tMEDIUM PLATED NICKEL\t36\t4\nBrand#x\tMEDIUM PLATED STEEL\t3\t4\nBrand#x\tMEDIUM PLATED STEEL\t9\t4\nBrand#x\tMEDIUM PLATED STEEL\t19\t4\nBrand#x\tMEDIUM PLATED STEEL\t23\t4\nBrand#x\tMEDIUM PLATED STEEL\t36\t4\nBrand#x\tMEDIUM PLATED STEEL\t45\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t3\t4\nBrand#x\tMEDIUM PLATED TIN\t9\t4\nBrand#x\tMEDIUM PLATED TIN\t14\t4\nBrand#x\tMEDIUM PLATED TIN\t36\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t3\t4\nBrand#x\tPROMO ANODIZED NICKEL\t9\t4\nBrand#x\tPROMO ANODIZED NICKEL\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t45\t4\nBrand#x\tPROMO ANODIZED NICKEL\t49\t4\nBrand#x\tPROMO ANODIZED STEEL\t45\t4\nBrand#x\tPROMO ANODIZED STEEL\t49\t4\nBrand#x\tPROMO ANODIZED TIN\t3\t4\nBrand#x\tPROMO ANODIZED TIN\t23\t4\nBrand#x\tPROMO ANODIZED TIN\t36\t4\nBrand#x\tPROMO BRUSHED BRASS\t3\t4\nBrand#x\tPROMO BRUSHED BRASS\t36\t4\nBrand#x\tPROMO BRUSHED BRASS\t45\t4\nBrand#x\tPROMO BRUSHED COPPER\t9\t4\nBrand#x\tPROMO BRUSHED COPPER\t19\t4\nBrand#x\tPROMO BRUSHED COPPER\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t14\t4\nBrand#x\tPROMO BRUSHED NICKEL\t36\t4\nBrand#x\tPROMO BRUSHED NICKEL\t45\t4\nBrand#x\tPROMO BRUSHED NICKEL\t49\t4\nBrand#x\tPROMO BRUSHED STEEL\t9\t4\nBrand#x\tPROMO BRUSHED STEEL\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t23\t4\nBrand#x\tPROMO BRUSHED TIN\t36\t4\nBrand#x\tPROMO BURNISHED BRASS\t3\t4\nBrand#x\tPROMO BURNISHED BRASS\t23\t4\nBrand#x\tPROMO BURNISHED BRASS\t45\t4\nBrand#x\tPROMO BURNISHED COPPER\t3\t4\nBrand#x\tPROMO BURNISHED COPPER\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t23\t4\nBrand#x\tPROMO BURNISHED COPPER\t36\t4\nBrand#x\tPROMO BURNISHED COPPER\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t3\t4\nBrand#x\tPROMO BURNISHED NICKEL\t14\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t45\t4\nBrand#x\tPROMO BURNISHED STEEL\t49\t4\nBrand#x\tPROMO BURNISHED TIN\t49\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO 
PLATED BRASS\t9\t4\nBrand#x\tPROMO PLATED BRASS\t14\t4\nBrand#x\tPROMO PLATED BRASS\t36\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t14\t4\nBrand#x\tPROMO PLATED COPPER\t19\t4\nBrand#x\tPROMO PLATED NICKEL\t23\t4\nBrand#x\tPROMO PLATED NICKEL\t36\t4\nBrand#x\tPROMO PLATED NICKEL\t45\t4\nBrand#x\tPROMO PLATED STEEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t14\t4\nBrand#x\tPROMO PLATED STEEL\t19\t4\nBrand#x\tPROMO PLATED STEEL\t23\t4\nBrand#x\tPROMO PLATED TIN\t9\t4\nBrand#x\tPROMO PLATED TIN\t19\t4\nBrand#x\tPROMO POLISHED BRASS\t3\t4\nBrand#x\tPROMO POLISHED BRASS\t19\t4\nBrand#x\tPROMO POLISHED BRASS\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t45\t4\nBrand#x\tPROMO POLISHED BRASS\t49\t4\nBrand#x\tPROMO POLISHED COPPER\t9\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED NICKEL\t9\t4\nBrand#x\tPROMO POLISHED NICKEL\t45\t4\nBrand#x\tPROMO POLISHED NICKEL\t49\t4\nBrand#x\tPROMO POLISHED TIN\t9\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t3\t4\nBrand#x\tSMALL ANODIZED BRASS\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t49\t4\nBrand#x\tSMALL ANODIZED COPPER\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t36\t4\nBrand#x\tSMALL ANODIZED NICKEL\t3\t4\nBrand#x\tSMALL ANODIZED NICKEL\t9\t4\nBrand#x\tSMALL ANODIZED NICKEL\t36\t4\nBrand#x\tSMALL ANODIZED STEEL\t14\t4\nBrand#x\tSMALL ANODIZED STEEL\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t3\t4\nBrand#x\tSMALL ANODIZED TIN\t9\t4\nBrand#x\tSMALL ANODIZED TIN\t19\t4\nBrand#x\tSMALL ANODIZED TIN\t23\t4\nBrand#x\tSMALL ANODIZED TIN\t45\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED BRASS\t3\t4\nBrand#x\tSMALL BRUSHED BRASS\t14\t4\nBrand#x\tSMALL BRUSHED BRASS\t45\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t14\t4\nBrand#x\tSMALL BRUSHED COPPER\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t49\t4\nBrand#x\tSMALL BRUSHED NICKEL\t3\t4\nBrand#x\tSMALL BRUSHED NICKEL\t9\t4\nBrand#x\tSMALL BRUSHED NICKEL\t19\t4\nBrand#x\tSMALL BRUSHED NICKEL\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t3\t4\nBrand#x\tSMALL BRUSHED STEEL\t9\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED BRASS\t19\t4\nBrand#x\tSMALL BURNISHED BRASS\t36\t4\nBrand#x\tSMALL BURNISHED BRASS\t49\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED NICKEL\t9\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED NICKEL\t23\t4\nBrand#x\tSMALL BURNISHED NICKEL\t45\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED STEEL\t23\t4\nBrand#x\tSMALL BURNISHED STEEL\t36\t4\nBrand#x\tSMALL BURNISHED STEEL\t49\t4\nBrand#x\tSMALL BURNISHED TIN\t3\t4\nBrand#x\tSMALL BURNISHED TIN\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t36\t4\nBrand#x\tSMALL BURNISHED TIN\t45\t4\nBrand#x\tSMALL PLATED BRASS\t36\t4\nBrand#x\tSMALL PLATED BRASS\t45\t4\nBrand#x\tSMALL PLATED BRASS\t49\t4\nBrand#x\tSMALL PLATED COPPER\t9\t4\nBrand#x\tSMALL PLATED COPPER\t49\t4\nBrand#x\tSMALL PLATED NICKEL\t3\t4\nBrand#x\tSMALL PLATED NICKEL\t19\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t3\t4\nBrand#x\tSMALL PLATED STEEL\t9\t4\nBrand#x\tSMALL PLATED STEEL\t19\t4\nBrand#x\tSMALL PLATED STEEL\t36\t4\nBrand#x\tSMALL PLATED STEEL\t45\t4\nBrand#x\tSMALL PLATED TIN\t9\t4\nBrand#x\tSMALL PLATED TIN\t49\t4\nBrand#x\tSMALL POLISHED BRASS\t14\t4\nBrand#x\tSMALL POLISHED BRASS\t23\t4\nBrand#x\tSMALL POLISHED 
BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t9\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t19\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED STEEL\t14\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED STEEL\t36\t4\nBrand#x\tSMALL POLISHED STEEL\t45\t4\nBrand#x\tSMALL POLISHED TIN\t3\t4\nBrand#x\tSMALL POLISHED TIN\t9\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSMALL POLISHED TIN\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t14\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t19\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t36\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t3\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t19\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t9\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t19\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t36\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t45\t4\nBrand#x\tSTANDARD ANODIZED TIN\t3\t4\nBrand#x\tSTANDARD ANODIZED TIN\t23\t4\nBrand#x\tSTANDARD ANODIZED TIN\t36\t4\nBrand#x\tSTANDARD ANODIZED TIN\t49\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t3\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t23\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t3\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t9\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t19\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t45\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t3\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t4\nBrand#x\tSTANDARD BRUSHED TIN\t3\t4\nBrand#x\tSTANDARD BRUSHED TIN\t36\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t49\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t14\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t23\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t19\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t36\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t3\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t9\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t3\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD BURNISHED TIN\t45\t4\nBrand#x\tSTANDARD PLATED BRASS\t9\t4\nBrand#x\tSTANDARD PLATED BRASS\t14\t4\nBrand#x\tSTANDARD PLATED BRASS\t36\t4\nBrand#x\tSTANDARD PLATED BRASS\t49\t4\nBrand#x\tSTANDARD PLATED COPPER\t14\t4\nBrand#x\tSTANDARD PLATED NICKEL\t3\t4\nBrand#x\tSTANDARD PLATED NICKEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t3\t4\nBrand#x\tSTANDARD PLATED STEEL\t9\t4\nBrand#x\tSTANDARD PLATED STEEL\t14\t4\nBrand#x\tSTANDARD PLATED STEEL\t19\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t49\t4\nBrand#x\tSTANDARD PLATED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED BRASS\t36\t4\nBrand#x\tSTANDARD POLISHED COPPER\t36\t4\nBrand#x\tSTANDARD POLISHED COPPER\t49\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t9\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t19\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t3\t4\nBrand#x\tSTANDARD POLISHED STEEL\t23\t4\nBrand#x\tSTANDARD POLISHED STEEL\t45\t4\nBrand#x\tSTANDARD POLISHED TIN\t3\t4\nBrand#x\tSTANDARD POLISHED TIN\t23\t4\nBrand#x\tECONOMY ANODIZED BRASS\t3\t4\nBrand#x\tECONOMY ANODIZED BRASS\t14\t4\nBrand#x\tECONOMY ANODIZED BRASS\t19\t4\nBrand#x\tECONOMY ANODIZED BRASS\t23\t4\nBrand#x\tECONOMY ANODIZED 
BRASS\t49\t4\nBrand#x\tECONOMY ANODIZED COPPER\t3\t4\nBrand#x\tECONOMY ANODIZED COPPER\t19\t4\nBrand#x\tECONOMY ANODIZED COPPER\t36\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t3\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t19\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t23\t4\nBrand#x\tECONOMY ANODIZED NICKEL\t36\t4\nBrand#x\tECONOMY ANODIZED STEEL\t3\t4\nBrand#x\tECONOMY ANODIZED STEEL\t23\t4\nBrand#x\tECONOMY ANODIZED STEEL\t45\t4\nBrand#x\tECONOMY ANODIZED TIN\t3\t4\nBrand#x\tECONOMY BRUSHED BRASS\t9\t4\nBrand#x\tECONOMY BRUSHED BRASS\t14\t4\nBrand#x\tECONOMY BRUSHED BRASS\t19\t4\nBrand#x\tECONOMY BRUSHED BRASS\t36\t4\nBrand#x\tECONOMY BRUSHED BRASS\t45\t4\nBrand#x\tECONOMY BRUSHED BRASS\t49\t4\nBrand#x\tECONOMY BRUSHED COPPER\t3\t4\nBrand#x\tECONOMY BRUSHED COPPER\t19\t4\nBrand#x\tECONOMY BRUSHED COPPER\t45\t4\nBrand#x\tECONOMY BRUSHED COPPER\t49\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t3\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t9\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t14\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t36\t4\nBrand#x\tECONOMY BRUSHED NICKEL\t49\t4\nBrand#x\tECONOMY BRUSHED STEEL\t14\t4\nBrand#x\tECONOMY BRUSHED STEEL\t19\t4\nBrand#x\tECONOMY BRUSHED STEEL\t23\t4\nBrand#x\tECONOMY BRUSHED STEEL\t45\t4\nBrand#x\tECONOMY BRUSHED TIN\t9\t4\nBrand#x\tECONOMY BRUSHED TIN\t14\t4\nBrand#x\tECONOMY BRUSHED TIN\t19\t4\nBrand#x\tECONOMY BRUSHED TIN\t49\t4\nBrand#x\tECONOMY BURNISHED BRASS\t36\t4\nBrand#x\tECONOMY BURNISHED BRASS\t45\t4\nBrand#x\tECONOMY BURNISHED BRASS\t49\t4\nBrand#x\tECONOMY BURNISHED COPPER\t3\t4\nBrand#x\tECONOMY BURNISHED COPPER\t14\t4\nBrand#x\tECONOMY BURNISHED COPPER\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t9\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t36\t4\nBrand#x\tECONOMY BURNISHED NICKEL\t45\t4\nBrand#x\tECONOMY BURNISHED STEEL\t3\t4\nBrand#x\tECONOMY BURNISHED STEEL\t14\t4\nBrand#x\tECONOMY BURNISHED STEEL\t36\t4\nBrand#x\tECONOMY BURNISHED STEEL\t45\t4\nBrand#x\tECONOMY BURNISHED TIN\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t3\t4\nBrand#x\tECONOMY PLATED BRASS\t9\t4\nBrand#x\tECONOMY PLATED BRASS\t14\t4\nBrand#x\tECONOMY PLATED BRASS\t23\t4\nBrand#x\tECONOMY PLATED BRASS\t36\t4\nBrand#x\tECONOMY PLATED COPPER\t3\t4\nBrand#x\tECONOMY PLATED COPPER\t9\t4\nBrand#x\tECONOMY PLATED COPPER\t14\t4\nBrand#x\tECONOMY PLATED NICKEL\t45\t4\nBrand#x\tECONOMY PLATED STEEL\t3\t4\nBrand#x\tECONOMY PLATED STEEL\t19\t4\nBrand#x\tECONOMY PLATED STEEL\t36\t4\nBrand#x\tECONOMY PLATED TIN\t3\t4\nBrand#x\tECONOMY PLATED TIN\t14\t4\nBrand#x\tECONOMY PLATED TIN\t36\t4\nBrand#x\tECONOMY PLATED TIN\t45\t4\nBrand#x\tECONOMY PLATED TIN\t49\t4\nBrand#x\tECONOMY POLISHED BRASS\t3\t4\nBrand#x\tECONOMY POLISHED BRASS\t9\t4\nBrand#x\tECONOMY POLISHED BRASS\t14\t4\nBrand#x\tECONOMY POLISHED BRASS\t36\t4\nBrand#x\tECONOMY POLISHED BRASS\t45\t4\nBrand#x\tECONOMY POLISHED COPPER\t14\t4\nBrand#x\tECONOMY POLISHED NICKEL\t3\t4\nBrand#x\tECONOMY POLISHED NICKEL\t23\t4\nBrand#x\tECONOMY POLISHED NICKEL\t36\t4\nBrand#x\tECONOMY POLISHED STEEL\t9\t4\nBrand#x\tECONOMY POLISHED STEEL\t36\t4\nBrand#x\tECONOMY POLISHED STEEL\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t3\t4\nBrand#x\tECONOMY POLISHED TIN\t23\t4\nBrand#x\tECONOMY POLISHED TIN\t45\t4\nBrand#x\tECONOMY POLISHED TIN\t49\t4\nBrand#x\tLARGE ANODIZED BRASS\t3\t4\nBrand#x\tLARGE ANODIZED BRASS\t14\t4\nBrand#x\tLARGE ANODIZED BRASS\t19\t4\nBrand#x\tLARGE ANODIZED BRASS\t45\t4\nBrand#x\tLARGE ANODIZED BRASS\t49\t4\nBrand#x\tLARGE ANODIZED COPPER\t19\t4\nBrand#x\tLARGE ANODIZED COPPER\t49\t4\nBrand#x\tLARGE ANODIZED NICKEL\t3\t4\nBrand#x\tLARGE ANODIZED NICKEL\t49\t4\nBrand#x\tLARGE 
ANODIZED STEEL\t3\t4\nBrand#x\tLARGE ANODIZED STEEL\t19\t4\nBrand#x\tLARGE ANODIZED STEEL\t36\t4\nBrand#x\tLARGE ANODIZED STEEL\t45\t4\nBrand#x\tLARGE ANODIZED STEEL\t49\t4\nBrand#x\tLARGE ANODIZED TIN\t9\t4\nBrand#x\tLARGE ANODIZED TIN\t23\t4\nBrand#x\tLARGE BRUSHED BRASS\t19\t4\nBrand#x\tLARGE BRUSHED BRASS\t23\t4\nBrand#x\tLARGE BRUSHED BRASS\t36\t4\nBrand#x\tLARGE BRUSHED BRASS\t45\t4\nBrand#x\tLARGE BRUSHED COPPER\t36\t4\nBrand#x\tLARGE BRUSHED COPPER\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t9\t4\nBrand#x\tLARGE BRUSHED NICKEL\t45\t4\nBrand#x\tLARGE BRUSHED NICKEL\t49\t4\nBrand#x\tLARGE BRUSHED STEEL\t3\t4\nBrand#x\tLARGE BRUSHED STEEL\t19\t4\nBrand#x\tLARGE BRUSHED STEEL\t36\t4\nBrand#x\tLARGE BRUSHED STEEL\t45\t4\nBrand#x\tLARGE BRUSHED TIN\t3\t4\nBrand#x\tLARGE BRUSHED TIN\t14\t4\nBrand#x\tLARGE BRUSHED TIN\t23\t4\nBrand#x\tLARGE BRUSHED TIN\t36\t4\nBrand#x\tLARGE BURNISHED BRASS\t9\t4\nBrand#x\tLARGE BURNISHED BRASS\t23\t4\nBrand#x\tLARGE BURNISHED BRASS\t36\t4\nBrand#x\tLARGE BURNISHED COPPER\t23\t4\nBrand#x\tLARGE BURNISHED COPPER\t45\t4\nBrand#x\tLARGE BURNISHED NICKEL\t3\t4\nBrand#x\tLARGE BURNISHED NICKEL\t9\t4\nBrand#x\tLARGE BURNISHED STEEL\t14\t4\nBrand#x\tLARGE BURNISHED TIN\t23\t4\nBrand#x\tLARGE BURNISHED TIN\t45\t4\nBrand#x\tLARGE PLATED BRASS\t19\t4\nBrand#x\tLARGE PLATED BRASS\t36\t4\nBrand#x\tLARGE PLATED BRASS\t49\t4\nBrand#x\tLARGE PLATED COPPER\t3\t4\nBrand#x\tLARGE PLATED COPPER\t19\t4\nBrand#x\tLARGE PLATED COPPER\t36\t4\nBrand#x\tLARGE PLATED NICKEL\t3\t4\nBrand#x\tLARGE PLATED NICKEL\t23\t4\nBrand#x\tLARGE PLATED NICKEL\t45\t4\nBrand#x\tLARGE PLATED NICKEL\t49\t4\nBrand#x\tLARGE PLATED STEEL\t9\t4\nBrand#x\tLARGE PLATED STEEL\t45\t4\nBrand#x\tLARGE PLATED TIN\t3\t4\nBrand#x\tLARGE PLATED TIN\t23\t4\nBrand#x\tLARGE PLATED TIN\t49\t4\nBrand#x\tLARGE POLISHED BRASS\t9\t4\nBrand#x\tLARGE POLISHED BRASS\t14\t4\nBrand#x\tLARGE POLISHED BRASS\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t3\t4\nBrand#x\tLARGE POLISHED COPPER\t14\t4\nBrand#x\tLARGE POLISHED COPPER\t19\t4\nBrand#x\tLARGE POLISHED COPPER\t23\t4\nBrand#x\tLARGE POLISHED COPPER\t45\t4\nBrand#x\tLARGE POLISHED COPPER\t49\t4\nBrand#x\tLARGE POLISHED NICKEL\t23\t4\nBrand#x\tLARGE POLISHED NICKEL\t45\t4\nBrand#x\tLARGE POLISHED STEEL\t23\t4\nBrand#x\tLARGE POLISHED STEEL\t36\t4\nBrand#x\tLARGE POLISHED STEEL\t49\t4\nBrand#x\tLARGE POLISHED TIN\t3\t4\nBrand#x\tLARGE POLISHED TIN\t19\t4\nBrand#x\tLARGE POLISHED TIN\t23\t4\nBrand#x\tLARGE POLISHED TIN\t49\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t3\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t14\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t19\t4\nBrand#x\tMEDIUM ANODIZED BRASS\t45\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t9\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t23\t4\nBrand#x\tMEDIUM ANODIZED COPPER\t45\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t3\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t9\t4\nBrand#x\tMEDIUM ANODIZED NICKEL\t23\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t9\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t23\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t36\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t45\t4\nBrand#x\tMEDIUM ANODIZED STEEL\t49\t4\nBrand#x\tMEDIUM ANODIZED TIN\t3\t4\nBrand#x\tMEDIUM ANODIZED TIN\t9\t4\nBrand#x\tMEDIUM ANODIZED TIN\t14\t4\nBrand#x\tMEDIUM ANODIZED TIN\t19\t4\nBrand#x\tMEDIUM ANODIZED TIN\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t3\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t9\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t36\t4\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t9\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t14\t4\nBrand#x\tMEDIUM BRUSHED COPPER\t19\t4\nBrand#x\tMEDIUM BRUSHED 
COPPER\t36\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t14\t4\nBrand#x\tMEDIUM BRUSHED NICKEL\t49\t4\nBrand#x\tMEDIUM BRUSHED STEEL\t3\t4\nBrand#x\tMEDIUM BRUSHED TIN\t23\t4\nBrand#x\tMEDIUM BRUSHED TIN\t49\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t9\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t23\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t45\t4\nBrand#x\tMEDIUM BURNISHED BRASS\t49\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t9\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t14\t4\nBrand#x\tMEDIUM BURNISHED COPPER\t19\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t3\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t9\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t14\t4\nBrand#x\tMEDIUM BURNISHED NICKEL\t36\t4\nBrand#x\tMEDIUM BURNISHED STEEL\t9\t4\nBrand#x\tMEDIUM BURNISHED TIN\t3\t4\nBrand#x\tMEDIUM BURNISHED TIN\t23\t4\nBrand#x\tMEDIUM BURNISHED TIN\t49\t4\nBrand#x\tMEDIUM PLATED BRASS\t9\t4\nBrand#x\tMEDIUM PLATED BRASS\t14\t4\nBrand#x\tMEDIUM PLATED BRASS\t36\t4\nBrand#x\tMEDIUM PLATED BRASS\t49\t4\nBrand#x\tMEDIUM PLATED COPPER\t49\t4\nBrand#x\tMEDIUM PLATED NICKEL\t9\t4\nBrand#x\tMEDIUM PLATED NICKEL\t14\t4\nBrand#x\tMEDIUM PLATED NICKEL\t45\t4\nBrand#x\tMEDIUM PLATED STEEL\t9\t4\nBrand#x\tMEDIUM PLATED STEEL\t23\t4\nBrand#x\tMEDIUM PLATED STEEL\t49\t4\nBrand#x\tMEDIUM PLATED TIN\t45\t4\nBrand#x\tMEDIUM PLATED TIN\t49\t4\nBrand#x\tPROMO ANODIZED BRASS\t9\t4\nBrand#x\tPROMO ANODIZED COPPER\t19\t4\nBrand#x\tPROMO ANODIZED NICKEL\t23\t4\nBrand#x\tPROMO ANODIZED NICKEL\t45\t4\nBrand#x\tPROMO ANODIZED STEEL\t9\t4\nBrand#x\tPROMO ANODIZED STEEL\t14\t4\nBrand#x\tPROMO ANODIZED STEEL\t23\t4\nBrand#x\tPROMO ANODIZED TIN\t9\t4\nBrand#x\tPROMO ANODIZED TIN\t23\t4\nBrand#x\tPROMO BRUSHED BRASS\t14\t4\nBrand#x\tPROMO BRUSHED BRASS\t19\t4\nBrand#x\tPROMO BRUSHED BRASS\t49\t4\nBrand#x\tPROMO BRUSHED COPPER\t14\t4\nBrand#x\tPROMO BRUSHED COPPER\t23\t4\nBrand#x\tPROMO BRUSHED COPPER\t36\t4\nBrand#x\tPROMO BRUSHED COPPER\t45\t4\nBrand#x\tPROMO BRUSHED COPPER\t49\t4\nBrand#x\tPROMO BRUSHED NICKEL\t3\t4\nBrand#x\tPROMO BRUSHED NICKEL\t19\t4\nBrand#x\tPROMO BRUSHED STEEL\t14\t4\nBrand#x\tPROMO BRUSHED STEEL\t19\t4\nBrand#x\tPROMO BRUSHED TIN\t19\t4\nBrand#x\tPROMO BURNISHED BRASS\t9\t4\nBrand#x\tPROMO BURNISHED BRASS\t14\t4\nBrand#x\tPROMO BURNISHED BRASS\t19\t4\nBrand#x\tPROMO BURNISHED COPPER\t3\t4\nBrand#x\tPROMO BURNISHED COPPER\t49\t4\nBrand#x\tPROMO BURNISHED NICKEL\t14\t4\nBrand#x\tPROMO BURNISHED NICKEL\t19\t4\nBrand#x\tPROMO BURNISHED NICKEL\t23\t4\nBrand#x\tPROMO BURNISHED NICKEL\t45\t4\nBrand#x\tPROMO BURNISHED NICKEL\t49\t4\nBrand#x\tPROMO BURNISHED STEEL\t19\t4\nBrand#x\tPROMO BURNISHED STEEL\t36\t4\nBrand#x\tPROMO BURNISHED TIN\t3\t4\nBrand#x\tPROMO BURNISHED TIN\t14\t4\nBrand#x\tPROMO BURNISHED TIN\t23\t4\nBrand#x\tPROMO PLATED BRASS\t3\t4\nBrand#x\tPROMO PLATED BRASS\t49\t4\nBrand#x\tPROMO PLATED COPPER\t3\t4\nBrand#x\tPROMO PLATED COPPER\t45\t4\nBrand#x\tPROMO PLATED NICKEL\t3\t4\nBrand#x\tPROMO PLATED NICKEL\t23\t4\nBrand#x\tPROMO PLATED STEEL\t3\t4\nBrand#x\tPROMO PLATED STEEL\t23\t4\nBrand#x\tPROMO PLATED STEEL\t36\t4\nBrand#x\tPROMO PLATED STEEL\t45\t4\nBrand#x\tPROMO PLATED STEEL\t49\t4\nBrand#x\tPROMO PLATED TIN\t3\t4\nBrand#x\tPROMO PLATED TIN\t19\t4\nBrand#x\tPROMO PLATED TIN\t23\t4\nBrand#x\tPROMO POLISHED BRASS\t14\t4\nBrand#x\tPROMO POLISHED COPPER\t3\t4\nBrand#x\tPROMO POLISHED COPPER\t19\t4\nBrand#x\tPROMO POLISHED COPPER\t45\t4\nBrand#x\tPROMO POLISHED COPPER\t49\t4\nBrand#x\tPROMO POLISHED NICKEL\t3\t4\nBrand#x\tPROMO POLISHED NICKEL\t14\t4\nBrand#x\tPROMO POLISHED NICKEL\t19\t4\nBrand#x\tPROMO POLISHED NICKEL\t23\t4\nBrand#x\tPROMO POLISHED 
NICKEL\t36\t4\nBrand#x\tPROMO POLISHED STEEL\t19\t4\nBrand#x\tPROMO POLISHED STEEL\t45\t4\nBrand#x\tPROMO POLISHED STEEL\t49\t4\nBrand#x\tPROMO POLISHED TIN\t3\t4\nBrand#x\tPROMO POLISHED TIN\t9\t4\nBrand#x\tPROMO POLISHED TIN\t14\t4\nBrand#x\tPROMO POLISHED TIN\t19\t4\nBrand#x\tPROMO POLISHED TIN\t23\t4\nBrand#x\tPROMO POLISHED TIN\t36\t4\nBrand#x\tPROMO POLISHED TIN\t45\t4\nBrand#x\tPROMO POLISHED TIN\t49\t4\nBrand#x\tSMALL ANODIZED BRASS\t23\t4\nBrand#x\tSMALL ANODIZED BRASS\t36\t4\nBrand#x\tSMALL ANODIZED BRASS\t45\t4\nBrand#x\tSMALL ANODIZED COPPER\t9\t4\nBrand#x\tSMALL ANODIZED COPPER\t19\t4\nBrand#x\tSMALL ANODIZED COPPER\t23\t4\nBrand#x\tSMALL ANODIZED NICKEL\t9\t4\nBrand#x\tSMALL ANODIZED NICKEL\t14\t4\nBrand#x\tSMALL ANODIZED NICKEL\t23\t4\nBrand#x\tSMALL ANODIZED NICKEL\t36\t4\nBrand#x\tSMALL ANODIZED NICKEL\t45\t4\nBrand#x\tSMALL ANODIZED STEEL\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t9\t4\nBrand#x\tSMALL ANODIZED TIN\t36\t4\nBrand#x\tSMALL ANODIZED TIN\t45\t4\nBrand#x\tSMALL ANODIZED TIN\t49\t4\nBrand#x\tSMALL BRUSHED BRASS\t9\t4\nBrand#x\tSMALL BRUSHED BRASS\t36\t4\nBrand#x\tSMALL BRUSHED COPPER\t3\t4\nBrand#x\tSMALL BRUSHED COPPER\t9\t4\nBrand#x\tSMALL BRUSHED COPPER\t19\t4\nBrand#x\tSMALL BRUSHED COPPER\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t3\t4\nBrand#x\tSMALL BRUSHED NICKEL\t9\t4\nBrand#x\tSMALL BRUSHED NICKEL\t19\t4\nBrand#x\tSMALL BRUSHED NICKEL\t23\t4\nBrand#x\tSMALL BRUSHED NICKEL\t45\t4\nBrand#x\tSMALL BRUSHED NICKEL\t49\t4\nBrand#x\tSMALL BRUSHED STEEL\t3\t4\nBrand#x\tSMALL BRUSHED STEEL\t14\t4\nBrand#x\tSMALL BRUSHED STEEL\t19\t4\nBrand#x\tSMALL BRUSHED STEEL\t23\t4\nBrand#x\tSMALL BRUSHED STEEL\t45\t4\nBrand#x\tSMALL BRUSHED STEEL\t49\t4\nBrand#x\tSMALL BRUSHED TIN\t9\t4\nBrand#x\tSMALL BRUSHED TIN\t49\t4\nBrand#x\tSMALL BURNISHED BRASS\t14\t4\nBrand#x\tSMALL BURNISHED BRASS\t23\t4\nBrand#x\tSMALL BURNISHED COPPER\t3\t4\nBrand#x\tSMALL BURNISHED COPPER\t9\t4\nBrand#x\tSMALL BURNISHED COPPER\t36\t4\nBrand#x\tSMALL BURNISHED NICKEL\t9\t4\nBrand#x\tSMALL BURNISHED NICKEL\t19\t4\nBrand#x\tSMALL BURNISHED NICKEL\t36\t4\nBrand#x\tSMALL BURNISHED NICKEL\t45\t4\nBrand#x\tSMALL BURNISHED STEEL\t14\t4\nBrand#x\tSMALL BURNISHED TIN\t9\t4\nBrand#x\tSMALL BURNISHED TIN\t23\t4\nBrand#x\tSMALL PLATED COPPER\t3\t4\nBrand#x\tSMALL PLATED COPPER\t14\t4\nBrand#x\tSMALL PLATED COPPER\t36\t4\nBrand#x\tSMALL PLATED COPPER\t49\t4\nBrand#x\tSMALL PLATED NICKEL\t14\t4\nBrand#x\tSMALL PLATED NICKEL\t49\t4\nBrand#x\tSMALL PLATED STEEL\t3\t4\nBrand#x\tSMALL PLATED STEEL\t23\t4\nBrand#x\tSMALL PLATED STEEL\t36\t4\nBrand#x\tSMALL PLATED TIN\t36\t4\nBrand#x\tSMALL PLATED TIN\t45\t4\nBrand#x\tSMALL POLISHED BRASS\t9\t4\nBrand#x\tSMALL POLISHED BRASS\t19\t4\nBrand#x\tSMALL POLISHED BRASS\t49\t4\nBrand#x\tSMALL POLISHED COPPER\t19\t4\nBrand#x\tSMALL POLISHED COPPER\t23\t4\nBrand#x\tSMALL POLISHED COPPER\t36\t4\nBrand#x\tSMALL POLISHED COPPER\t45\t4\nBrand#x\tSMALL POLISHED COPPER\t49\t4\nBrand#x\tSMALL POLISHED NICKEL\t9\t4\nBrand#x\tSMALL POLISHED NICKEL\t14\t4\nBrand#x\tSMALL POLISHED NICKEL\t19\t4\nBrand#x\tSMALL POLISHED NICKEL\t23\t4\nBrand#x\tSMALL POLISHED NICKEL\t45\t4\nBrand#x\tSMALL POLISHED NICKEL\t49\t4\nBrand#x\tSMALL POLISHED STEEL\t19\t4\nBrand#x\tSMALL POLISHED STEEL\t45\t4\nBrand#x\tSMALL POLISHED TIN\t14\t4\nBrand#x\tSMALL POLISHED TIN\t23\t4\nBrand#x\tSMALL POLISHED TIN\t45\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t9\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t23\t4\nBrand#x\tSTANDARD ANODIZED BRASS\t49\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t9\t4\nBrand#x\tSTANDARD ANODIZED 
COPPER\t14\t4\nBrand#x\tSTANDARD ANODIZED COPPER\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t3\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t14\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t45\t4\nBrand#x\tSTANDARD ANODIZED NICKEL\t49\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t3\t4\nBrand#x\tSTANDARD ANODIZED STEEL\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t14\t4\nBrand#x\tSTANDARD ANODIZED TIN\t36\t4\nBrand#x\tSTANDARD ANODIZED TIN\t45\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t9\t4\nBrand#x\tSTANDARD BRUSHED BRASS\t19\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t14\t4\nBrand#x\tSTANDARD BRUSHED COPPER\t19\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t3\t4\nBrand#x\tSTANDARD BRUSHED NICKEL\t36\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t9\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t14\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t19\t4\nBrand#x\tSTANDARD BRUSHED STEEL\t49\t4\nBrand#x\tSTANDARD BRUSHED TIN\t19\t4\nBrand#x\tSTANDARD BRUSHED TIN\t49\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t9\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t19\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t23\t4\nBrand#x\tSTANDARD BURNISHED BRASS\t36\t4\nBrand#x\tSTANDARD BURNISHED COPPER\t3\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t9\t4\nBrand#x\tSTANDARD BURNISHED NICKEL\t49\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t19\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t23\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t36\t4\nBrand#x\tSTANDARD BURNISHED STEEL\t45\t4\nBrand#x\tSTANDARD BURNISHED TIN\t9\t4\nBrand#x\tSTANDARD BURNISHED TIN\t19\t4\nBrand#x\tSTANDARD BURNISHED TIN\t36\t4\nBrand#x\tSTANDARD BURNISHED TIN\t49\t4\nBrand#x\tSTANDARD PLATED BRASS\t9\t4\nBrand#x\tSTANDARD PLATED BRASS\t45\t4\nBrand#x\tSTANDARD PLATED BRASS\t49\t4\nBrand#x\tSTANDARD PLATED COPPER\t9\t4\nBrand#x\tSTANDARD PLATED COPPER\t45\t4\nBrand#x\tSTANDARD PLATED NICKEL\t3\t4\nBrand#x\tSTANDARD PLATED NICKEL\t19\t4\nBrand#x\tSTANDARD PLATED NICKEL\t45\t4\nBrand#x\tSTANDARD PLATED STEEL\t14\t4\nBrand#x\tSTANDARD PLATED STEEL\t23\t4\nBrand#x\tSTANDARD PLATED STEEL\t49\t4\nBrand#x\tSTANDARD PLATED TIN\t9\t4\nBrand#x\tSTANDARD PLATED TIN\t14\t4\nBrand#x\tSTANDARD PLATED TIN\t36\t4\nBrand#x\tSTANDARD POLISHED BRASS\t3\t4\nBrand#x\tSTANDARD POLISHED BRASS\t9\t4\nBrand#x\tSTANDARD POLISHED BRASS\t23\t4\nBrand#x\tSTANDARD POLISHED COPPER\t3\t4\nBrand#x\tSTANDARD POLISHED COPPER\t23\t4\nBrand#x\tSTANDARD POLISHED COPPER\t45\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t3\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t23\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t36\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t45\t4\nBrand#x\tSTANDARD POLISHED NICKEL\t49\t4\nBrand#x\tSTANDARD POLISHED STEEL\t14\t4\nBrand#x\tSTANDARD POLISHED STEEL\t23\t4\nBrand#x\tSTANDARD POLISHED TIN\t9\t4\nBrand#x\tSTANDARD POLISHED TIN\t19\t4\nBrand#x\tSTANDARD POLISHED TIN\t36\t4\nBrand#x\tSMALL BRUSHED TIN\t19\t3\nBrand#x\tLARGE PLATED NICKEL\t45\t3\nBrand#x\tLARGE POLISHED NICKEL\t9\t3\nBrand#x\tPROMO BURNISHED STEEL\t45\t3\nBrand#x\tSTANDARD PLATED STEEL\t23\t3\nBrand#x\tLARGE PLATED STEEL\t19\t3\nBrand#x\tSTANDARD ANODIZED COPPER\t23\t3\nBrand#x\tSMALL ANODIZED BRASS\t9\t3\nBrand#x\tMEDIUM ANODIZED TIN\t19\t3\nBrand#x\tSMALL PLATED BRASS\t23\t3\nBrand#x\tMEDIUM BRUSHED BRASS\t45\t3\nBrand#x\tMEDIUM BRUSHED TIN\t45\t3\nBrand#x\tECONOMY POLISHED BRASS\t9\t3\nBrand#x\tPROMO PLATED BRASS\t19\t3\nBrand#x\tSTANDARD PLATED TIN\t49\t3\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q17.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<avg_yearly:decimal(27,6)>\n-- !query output\n348406.054286\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q18.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<c_name:string,c_custkey:bigint,o_orderkey:bigint,o_orderdate:date,o_totalprice:decimal(12,2),sum(l_quantity):decimal(22,2)>\n-- !query output\nCustomer#x\t128120\t4722021\t1994-04-07\t544089.09\t323.00\nCustomer#x\t144617\t3043270\t1997-02-12\t530604.44\t317.00\nCustomer#x\t13940\t2232932\t1997-04-13\t522720.61\t304.00\nCustomer#x\t66790\t2199712\t1996-09-30\t515531.82\t327.00\nCustomer#x\t46435\t4745607\t1997-07-03\t508047.99\t309.00\nCustomer#x\t15272\t3883783\t1993-07-28\t500241.33\t302.00\nCustomer#x\t146608\t3342468\t1994-06-12\t499794.58\t303.00\nCustomer#x\t96103\t5984582\t1992-03-16\t494398.79\t312.00\nCustomer#x\t24341\t1474818\t1992-11-15\t491348.26\t302.00\nCustomer#x\t137446\t5489475\t1997-05-23\t487763.25\t311.00\nCustomer#x\t107590\t4267751\t1994-11-04\t485141.38\t301.00\nCustomer#x\t50008\t2366755\t1996-12-09\t483891.26\t302.00\nCustomer#x\t15619\t3767271\t1996-08-07\t480083.96\t318.00\nCustomer#x\t77260\t1436544\t1992-09-12\t479499.43\t307.00\nCustomer#x\t109379\t5746311\t1996-10-10\t478064.11\t302.00\nCustomer#x\t54602\t5832321\t1997-02-09\t471220.08\t307.00\nCustomer#x\t105995\t2096705\t1994-07-03\t469692.58\t307.00\nCustomer#x\t148885\t2942469\t1992-05-31\t469630.44\t313.00\nCustomer#x\t114586\t551136\t1993-05-19\t469605.59\t308.00\nCustomer#x\t105260\t5296167\t1996-09-06\t469360.57\t303.00\nCustomer#x\t147197\t1263015\t1997-02-02\t467149.67\t320.00\nCustomer#x\t64483\t2745894\t1996-07-04\t466991.35\t304.00\nCustomer#x\t136573\t2761378\t1996-05-31\t461282.73\t301.00\nCustomer#x\t16384\t502886\t1994-04-12\t458378.92\t312.00\nCustomer#x\t117919\t2869152\t1996-06-20\t456815.92\t317.00\nCustomer#x\t12251\t735366\t1993-11-24\t455107.26\t309.00\nCustomer#x\t120098\t1971680\t1995-06-14\t453451.23\t308.00\nCustomer#x\t66098\t5007490\t1992-08-07\t453436.16\t304.00\nCustomer#x\t117076\t4290656\t1997-02-05\t449545.85\t301.00\nCustomer#x\t129379\t4720454\t1997-06-07\t448665.79\t303.00\nCustomer#x\t126865\t4702759\t1994-11-07\t447606.65\t320.00\nCustomer#x\t88876\t983201\t1993-12-30\t446717.46\t304.00\nCustomer#x\t36619\t4806726\t1995-01-17\t446704.09\t328.00\nCustomer#x\t141823\t2806245\t1996-12-29\t446269.12\t310.00\nCustomer#x\t53029\t2662214\t1993-08-13\t446144.49\t302.00\nCustomer#x\t18188\t3037414\t1995-01-25\t443807.22\t308.00\nCustomer#x\t66533\t29158\t1995-10-21\t443576.50\t305.00\nCustomer#x\t37729\t4134341\t1995-06-29\t441082.97\t309.00\nCustomer#x\t3566\t2329187\t1998-01-04\t439803.36\t304.00\nCustomer#x\t45538\t4527553\t1994-05-22\t436275.31\t305.00\nCustomer#x\t81581\t4739650\t1995-11-04\t435405.90\t305.00\nCustomer#x\t119989\t1544643\t1997-09-20\t434568.25\t320.00\nCustomer#x\t3680\t3861123\t1998-07-03\t433525.97\t301.00\nCustomer#x\t113131\t967334\t1995-12-15\t432957.75\t301.00\nCustomer#x\t141098\t565574\t1995-09-24\t430986.69\t301.00\nCustomer#x\t93392\t5200102\t1997-01-22\t425487.51\t304.00\nCustomer#x\t15631\t1845057\t1994-05-12\t419879.59\t302.00\nCustomer#x\t112987\t4439686\t1996-09-17\t418161.49\t305.00\nCustomer#x\t12599\t4259524\t1998-02-12\t415200.61\t304.00\nCustomer#x\t105410\t4478371\t1996-03-05\t412754.51\t302.00\nCustomer#x\t149842\t5156581\t1994-05-30\t411329.35\t302.00\nCustomer#x\t10129\t5849444\t1994-03-21\t409129.85\t309.00\nCustomer#x\t69904\t1742403\t1996-10-19\t408513.00\t305.00\nCustomer#x\t17746\t6882\t1997-04-09\t408446.93\t303.00\nCustomer#x\t13072\t1481925\t1998-03-15\t399195.47\t301.00\nCustomer#x\t82441\t857959\t1994-02-07\t382579.74\t305.00\nCust
omer#x\t88703\t2995076\t1994-01-30\t363812.12\t302.00\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q19.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<revenue:decimal(36,4)>\n-- !query output\n3083843.0578\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q2.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<s_acctbal:decimal(12,2),s_name:string,n_name:string,p_partkey:bigint,p_mfgr:string,s_address:string,s_phone:string,s_comment:string>\n-- !query output\n9938.53\tSupplier#x\tUNITED KINGDOM\t185358\tManufacturer#x\tQKuHYh,vZGiwu2FWEJoLDx04\t33-429-790-6131\turiously regular requests hag\n9937.84\tSupplier#x\tROMANIA\t108438\tManufacturer#x\tANDENSOSmk,miq23Xfb5RWt6dvUcvt6Qa\t29-520-692-3537\tefully express instructions. regular requests against the slyly fin\n9936.22\tSupplier#x\tUNITED KINGDOM\t249\tManufacturer#x\tB3rqp0xbSEim4Mpy2RH J\t33-320-228-2957\tetect about the furiously final accounts. slyly ironic pinto beans sleep inside the furiously\n9923.77\tSupplier#x\tGERMANY\t29821\tManufacturer#x\ty3OD9UywSTOk\t17-779-299-1839\tackages boost blithely. blithely regular deposits c\n9871.22\tSupplier#x\tGERMANY\t43868\tManufacturer#x\tJ8fcXWsTqM\t17-813-485-8637\tetect blithely bold asymptotes. fluffily ironic platelets wake furiously; blit\n9870.78\tSupplier#x\tGERMANY\t81285\tManufacturer#x\tYKA,E2fjiVd7eUrzp2Ef8j1QxGo2DFnosaTEH\t17-516-924-4574\t regular accounts. furiously unusual courts above the fi\n9870.78\tSupplier#x\tGERMANY\t181285\tManufacturer#x\tYKA,E2fjiVd7eUrzp2Ef8j1QxGo2DFnosaTEH\t17-516-924-4574\t regular accounts. furiously unusual courts above the fi\n9852.52\tSupplier#x\tRUSSIA\t18972\tManufacturer#x\tt5L67YdBYYH6o,Vz24jpDyQ9\t32-188-594-7038\trns wake final foxes. carefully unusual depende\n9847.83\tSupplier#x\tRUSSIA\t130557\tManufacturer#x\txMe97bpE69NzdwLoX\t32-375-640-3593\t the special excuses. silent sentiments serve carefully final ac\n9847.57\tSupplier#x\tFRANCE\t86344\tManufacturer#x\tVSt3rzk3qG698u6ld8HhOByvrTcSTSvQlDQDag\t16-886-766-7945\tges. slyly regular requests are. ruthless, express excuses cajole blithely across the unu\n9847.57\tSupplier#x\tFRANCE\t173827\tManufacturer#x\tVSt3rzk3qG698u6ld8HhOByvrTcSTSvQlDQDag\t16-886-766-7945\tges. slyly regular requests are. ruthless, express excuses cajole blithely across the unu\n9836.93\tSupplier#x\tRUSSIA\t4841\tManufacturer#x\tJOlK7C1,7xrEZSSOw\t32-399-414-5385\tblithely carefully bold theodolites. fur\n9817.10\tSupplier#x\tRUSSIA\t124815\tManufacturer#x\t4LfoHUZjgjEbAKw TgdKcgOc4D4uCYw\t32-551-831-1437\twake carefully alongside of the carefully final ex\n9817.10\tSupplier#x\tRUSSIA\t152351\tManufacturer#x\t4LfoHUZjgjEbAKw TgdKcgOc4D4uCYw\t32-551-831-1437\twake carefully alongside of the carefully final ex\n9739.86\tSupplier#x\tFRANCE\t138357\tManufacturer#x\to,Z3v4POifevE k9U1b 6J1ucX,I\t16-494-913-5925\ts after the furiously bold packages sleep fluffily idly final requests: quickly final\n9721.95\tSupplier#x\tUNITED KINGDOM\t156241\tManufacturer#x\tAtg6GnM4dT2\t33-821-407-2995\teep furiously sauternes; quickl\n9681.33\tSupplier#x\tRUSSIA\t78405\tManufacturer#x\t,qUuXcftUl\t32-139-873-8571\thaggle slyly regular excuses. quic\n9643.55\tSupplier#x\tROMANIA\t107617\tManufacturer#x\tkT4ciVFslx9z4s79p Js825\t29-252-617-4850\tfinal excuses. final ideas boost quickly furiously speci\n9624.82\tSupplier#x\tFRANCE\t34306\tManufacturer#x\te7vab91vLJPWxxZnewmnDBpDmxYHrb\t16-392-237-6726\te packages are around the special ideas. special, pending foxes us\n9624.78\tSupplier#x\tROMANIA\t189657\tManufacturer#x\toE9uBgEfSS4opIcepXyAYM,x\t29-748-876-2014\tronic asymptotes wake bravely final\n9612.94\tSupplier#x\tROMANIA\t120715\tManufacturer#x\tKDdpNKN3cWu7ZSrbdqp7AfSLxx,qWB\t29-325-784-8187\twarhorses. 
quickly even deposits sublate daringly ironic instructions. slyly blithe t\n9612.94\tSupplier#x\tROMANIA\t198189\tManufacturer#x\tKDdpNKN3cWu7ZSrbdqp7AfSLxx,qWB\t29-325-784-8187\twarhorses. quickly even deposits sublate daringly ironic instructions. slyly blithe t\n9571.83\tSupplier#x\tROMANIA\t179270\tManufacturer#x\tqNHZ7WmCzygwMPRDO9Ps\t29-973-481-1831\tkly carefully express asymptotes. furiou\n9558.10\tSupplier#x\tUNITED KINGDOM\t88515\tManufacturer#x\tEOeuiiOn21OVpTlGguufFDFsbN1p0lhpxHp\t33-152-301-2164\t foxes. quickly even excuses use. slyly special foxes nag bl\n9492.79\tSupplier#x\tGERMANY\t25974\tManufacturer#x\tS6mIiCTx82z7lV\t17-992-579-4839\tarefully pending accounts. blithely regular excuses boost carefully carefully ironic p\n9461.05\tSupplier#x\tUNITED KINGDOM\t20033\tManufacturer#x\t8mmGbyzaU 7ZS2wJumTibypncu9pNkDc4FYA\t33-556-973-5522\t. slyly regular deposits wake slyly. furiously regular warthogs are.\n9453.01\tSupplier#x\tROMANIA\t175767\tManufacturer#x\t,6HYXb4uaHITmtMBj4Ak57Pd\t29-342-882-6463\tgular frets. permanently special multipliers believe blithely alongs\n9408.65\tSupplier#x\tUNITED KINGDOM\t117771\tManufacturer#x\tAiC5YAH,gdu0i7\t33-152-491-1126\tnag against the final requests. furiously unusual packages cajole blit\n9359.61\tSupplier#x\tROMANIA\t62349\tManufacturer#x\tHYogcF3Jb yh1\t29-334-870-9731\ty ironic theodolites. blithely sile\n9357.45\tSupplier#x\tUNITED KINGDOM\t138648\tManufacturer#x\tg801,ssP8wpTk4Hm\t33-583-607-1633\tously always regular packages. fluffily even accounts beneath the furiously final pack\n9352.04\tSupplier#x\tGERMANY\t170921\tManufacturer#x\tqYPDgoiBGhCYxjgC\t17-128-996-4650\t according to the carefully bold ideas\n9312.97\tSupplier#x\tRUSSIA\t90279\tManufacturer#x\toGYMPCk9XHGB2PBfKRnHA\t32-673-872-5854\tecial packages among the pending, even requests use regula\n9312.97\tSupplier#x\tRUSSIA\t100276\tManufacturer#x\toGYMPCk9XHGB2PBfKRnHA\t32-673-872-5854\tecial packages among the pending, even requests use regula\n9280.27\tSupplier#x\tROMANIA\t47193\tManufacturer#x\tzhRUQkBSrFYxIAXTfInj vyGRQjeK\t29-318-454-2133\to beans haggle after the furiously unusual deposits. carefully silent dolphins cajole carefully\n9274.80\tSupplier#x\tRUSSIA\t76346\tManufacturer#x\t1xhLoOUM7I3mZ1mKnerw OSqdbb4QbGa\t32-524-148-5221\ty. courts do wake slyly. carefully ironic platelets haggle above the slyly regular the\n9249.35\tSupplier#x\tFRANCE\t26466\tManufacturer#x\td18GiDsL6Wm2IsGXM,RZf1jCsgZAOjNYVThTRP4\t16-722-866-1658\tuests are furiously. regular tithes through the regular, final accounts cajole furiously above the q\n9249.35\tSupplier#x\tFRANCE\t33972\tManufacturer#x\td18GiDsL6Wm2IsGXM,RZf1jCsgZAOjNYVThTRP4\t16-722-866-1658\tuests are furiously. regular tithes through the regular, final accounts cajole furiously above the q\n9208.70\tSupplier#x\tROMANIA\t40256\tManufacturer#x\trsimdze 5o9P Ht7xS\t29-964-424-9649\tlites was quickly above the furiously ironic requests. slyly even foxes against the blithely bold \n9201.47\tSupplier#x\tUNITED KINGDOM\t67183\tManufacturer#x\tCB BnUTlmi5zdeEl7R7\t33-121-267-9529\te even, even foxes. blithely ironic packages cajole regular packages. slyly final ide\n9192.10\tSupplier#x\tUNITED KINGDOM\t85098\tManufacturer#x\tnJ 2t0f7Ve,wL1,6WzGBJLNBUCKlsV\t33-597-248-1220\tes across the carefully express accounts boost caref\n9189.98\tSupplier#x\tGERMANY\t21225\tManufacturer#x\tqsLCqSvLyZfuXIpjz\t17-725-903-1381\t deposits. 
blithely bold excuses about the slyly bold forges wake \n9128.97\tSupplier#x\tRUSSIA\t146768\tManufacturer#x\tI8IjnXd7NSJRs594RxsRR0\t32-155-440-7120\trefully. blithely unusual asymptotes haggle \n9104.83\tSupplier#x\tGERMANY\t150974\tManufacturer#x\tRqRVDgD0ER J9 b41vR2,3\t17-728-804-1793\tly about the blithely ironic depths. slyly final theodolites among the fluffily bold ideas print\n9101.00\tSupplier#x\tROMANIA\t128254\tManufacturer#x\tzub2zCV,jhHPPQqi,P2INAjE1zI n66cOEoXFG\t29-549-251-5384\tts. notornis detect blithely above the carefully bold requests. blithely even package\n9094.57\tSupplier#x\tRUSSIA\t39575\tManufacturer#x\tWB0XkCSG3r,mnQ n,h9VIxjjr9ARHFvKgMDf\t32-587-577-1351\tjole. regular accounts sleep blithely frets. final pinto beans play furiously past the \n8996.87\tSupplier#x\tFRANCE\t102191\tManufacturer#x\t8XVcQK23akp\t16-811-269-8946\tickly final packages along the express plat\n8996.14\tSupplier#x\tROMANIA\t139813\tManufacturer#x\taf0O5pg83lPU4IDVmEylXZVqYZQzSDlYLAmR\t29-995-571-8781\t dependencies boost quickly across the furiously pending requests! unusual dolphins play sl\n8968.42\tSupplier#x\tROMANIA\t119999\tManufacturer#x\taTGLEusCiL4F PDBdv665XBJhPyCOB0i\t29-578-432-2146\tly regular foxes boost slyly. quickly special waters boost carefully ironi\n8936.82\tSupplier#x\tUNITED KINGDOM\t109512\tManufacturer#x\tFVajceZInZdbJE6Z9XsRUxrUEpiwHDrOXi,1Rz\t33-784-177-8208\tefully regular courts. furiousl\n8929.42\tSupplier#x\tFRANCE\t173735\tManufacturer#x\tR7cG26TtXrHAP9 HckhfRi\t16-242-746-9248\tcajole furiously unusual requests. quickly stealthy requests are. \n8920.59\tSupplier#x\tROMANIA\t26460\tManufacturer#x\teHoAXe62SY9\t29-194-731-3944\taters. express, pending instructions sleep. brave, r\n8920.59\tSupplier#x\tROMANIA\t173966\tManufacturer#x\teHoAXe62SY9\t29-194-731-3944\taters. express, pending instructions sleep. brave, r\n8913.96\tSupplier#x\tUNITED KINGDOM\t137063\tManufacturer#x\tOUzlvMUr7n,utLxmPNeYKSf3T24OXskxB5\t33-789-255-7342\t haggle slyly above the furiously regular pinto beans. even \n8877.82\tSupplier#x\tFRANCE\t167966\tManufacturer#x\tA3pi1BARM4nx6R,qrwFoRPU\t16-442-147-9345\tously foxes. express, ironic requests im\n8862.24\tSupplier#x\tROMANIA\t73322\tManufacturer#x\tW9 lYcsC9FwBqk3ItL\t29-736-951-3710\tly pending ideas sleep about the furiously unu\n8841.59\tSupplier#x\tROMANIA\t100729\tManufacturer#x\tErx3lAgu0g62iaHF9x50uMH4EgeN9hEG\t29-344-502-5481\tgainst the pinto beans. fluffily unusual dependencies affix slyly even deposits.\n8781.71\tSupplier#x\tROMANIA\t13120\tManufacturer#x\twNqTogx238ZYCamFb,50v,bj 4IbNFW9Bvw1xP\t29-707-291-5144\ts wake quickly ironic ideas\n8754.24\tSupplier#x\tUNITED KINGDOM\t179406\tManufacturer#x\tCHRCbkaWcf5B\t33-903-970-9604\te ironic requests. carefully even foxes above the furious\n8691.06\tSupplier#x\tUNITED KINGDOM\t126892\tManufacturer#x\tk,BQms5UhoAF1B2Asi,fLib\t33-964-337-5038\tefully express deposits kindle after the deposits. final \n8655.99\tSupplier#x\tRUSSIA\t193810\tManufacturer#x\tUozlaENr0ytKe2w6CeIEWFWn iO3S8Rae7Ou\t32-561-198-3705\tsymptotes use about the express dolphins. requests use after the express platelets. final, ex\n8638.36\tSupplier#x\tRUSSIA\t75398\tManufacturer#x\tJe2a8bszf3L\t32-122-621-7549\tly quickly ironic requests. even requests whithout t\n8638.36\tSupplier#x\tRUSSIA\t170402\tManufacturer#x\tJe2a8bszf3L\t32-122-621-7549\tly quickly ironic requests. 
even requests whithout t\n8607.69\tSupplier#x\tUNITED KINGDOM\t76002\tManufacturer#x\tEH9wADcEiuenM0NR08zDwMidw,52Y2RyILEiA\t33-416-807-5206\tar, pending accounts. pending depende\n8569.52\tSupplier#x\tRUSSIA\t5935\tManufacturer#x\tjXaNZ6vwnEWJ2ksLZJpjtgt0bY2a3AU\t32-644-251-7916\t. regular foxes nag carefully atop the regular, silent deposits. quickly regular packages \n8564.12\tSupplier#x\tGERMANY\t110032\tManufacturer#x\tgfeKpYw3400L0SDywXA6Ya1Qmq1w6YB9f3R\t17-138-897-9374\tn sauternes along the regular asymptotes are regularly along the \n8553.82\tSupplier#x\tROMANIA\t143978\tManufacturer#x\tBfmVhCAnCMY3jzpjUMy4CNWs9 HzpdQR7INJU\t29-124-646-4897\tic requests wake against the blithely unusual accounts. fluffily r\n8517.23\tSupplier#x\tRUSSIA\t37025\tManufacturer#x\te44R8o7JAIS9iMcr\t32-565-297-8775\tove the even courts. furiously special platelets \n8517.23\tSupplier#x\tRUSSIA\t59528\tManufacturer#x\te44R8o7JAIS9iMcr\t32-565-297-8775\tove the even courts. furiously special platelets \n8503.70\tSupplier#x\tRUSSIA\t44325\tManufacturer#x\tBC4WFCYRUZyaIgchU 4S\t32-147-878-5069\tpades cajole. furious packages among the carefully express excuses boost furiously across th\n8457.09\tSupplier#x\tUNITED KINGDOM\t19455\tManufacturer#x\t7SBhZs8gP1cJjT0Qf433YBk\t33-858-440-4349\tcing requests along the furiously unusual deposits promise among the furiously unus\n8441.40\tSupplier#x\tFRANCE\t141302\tManufacturer#x\thU3fz3xL78\t16-339-356-5115\tely even ideas. ideas wake slyly furiously unusual instructions. pinto beans sleep ag\n8432.89\tSupplier#x\tRUSSIA\t191470\tManufacturer#x\twehBBp1RQbfxAYDASS75MsywmsKHRVdkrvNe6m\t32-839-509-9301\tep furiously. packages should have to haggle slyly across the deposits. furiously regu\n8431.40\tSupplier#x\tROMANIA\t5174\tManufacturer#x\tHJFStOu9R5NGPOegKhgbzBdyvrG2yh8w\t29-474-643-1443\tithely express pinto beans. blithely even foxes haggle. furiously regular theodol\n8407.04\tSupplier#x\tRUSSIA\t162889\tManufacturer#x\tj7 gYF5RW8DC5UrjKC\t32-626-152-4621\tr the blithely regular packages. slyly ironic theodoli\n8386.08\tSupplier#x\tFRANCE\t36014\tManufacturer#x\t2jqzqqAVe9crMVGP,n9nTsQXulNLTUYoJjEDcqWV\t16-618-780-7481\tblithely bold pains are carefully platelets. finally regular pinto beans sleep carefully special\n8376.52\tSupplier#x\tUNITED KINGDOM\t190267\tManufacturer#x\t9t8Y8 QqSIsoADPt6NLdk,TP5zyRx41oBUlgoGc9\t33-632-514-7931\tly final accounts sleep special, regular requests. furiously regular\n8348.74\tSupplier#x\tFRANCE\t66344\tManufacturer#x\tnWxi7GwEbjhw1\t16-796-240-2472\t boldly final deposits. regular, even instructions detect slyly. fluffily unusual pinto bea\n8338.58\tSupplier#x\tFRANCE\t17268\tManufacturer#x\tZwhJSwABUoiB04,3\t16-267-277-4365\tiously final accounts. even pinto beans cajole slyly regular\n8328.46\tSupplier#x\tROMANIA\t69237\tManufacturer#x\toLo3fV64q2,FKHa3p,qHnS7Yzv,ps8\t29-330-728-5873\tep carefully-- even, careful packages are slyly along t\n8307.93\tSupplier#x\tGERMANY\t18139\tManufacturer#x\tdqblvV8dCNAorGlJ\t17-595-447-6026\tolites wake furiously regular decoys. final requests nod \n8231.61\tSupplier#x\tRUSSIA\t192000\tManufacturer#x\tmcdgen,yT1iJDHDS5fV\t32-762-137-5858\t foxes according to the furi\n8152.61\tSupplier#x\tROMANIA\t15227\tManufacturer#x\t nluXJCuY1tu\t29-805-463-2030\t special requests. 
even, regular warhorses affix among the final gr\n8109.09\tSupplier#x\tFRANCE\t99185\tManufacturer#x\twgfosrVPexl9pEXWywaqlBMDYYf\t16-668-570-1402\ttions haggle slyly about the sil\n8102.62\tSupplier#x\tUNITED KINGDOM\t18344\tManufacturer#x\tm CtXS2S16i\t33-454-274-8532\tegrate with the slyly bold instructions. special foxes haggle silently among the\n8046.07\tSupplier#x\tFRANCE\t191222\tManufacturer#x\tAczzuE0UK9osj ,Lx0Jmh\t16-473-215-6395\tonic platelets cajole after the regular instructions. permanently bold excuses\n8042.09\tSupplier#x\tRUSSIA\t135705\tManufacturer#x\tDh8Ikg39onrbOL4DyTfGw8a9oKUX3d9Y\t32-836-132-8872\tosits. packages cajole slyly. furiously regular deposits cajole slyly. q\n8042.09\tSupplier#x\tRUSSIA\t150729\tManufacturer#x\tDh8Ikg39onrbOL4DyTfGw8a9oKUX3d9Y\t32-836-132-8872\tosits. packages cajole slyly. furiously regular deposits cajole slyly. q\n7992.40\tSupplier#x\tFRANCE\t118574\tManufacturer#x\t8tBydnTDwUqfBfFV4l3\t16-974-998-8937\t ironic ideas? fluffily even instructions wake. blithel\n7980.65\tSupplier#x\tFRANCE\t13784\tManufacturer#x\tzE,7HgVPrCn\t16-646-464-8247\tully bold courts. escapades nag slyly. furiously fluffy theodo\n7950.37\tSupplier#x\tGERMANY\t33094\tManufacturer#x\tkkYvL6IuvojJgTNG IKkaXQDYgx8ILohj\t17-627-663-8014\tarefully unusual requests x-ray above the quickly final deposits. \n7937.93\tSupplier#x\tROMANIA\t83995\tManufacturer#x\tiUiTziH,Ek3i4lwSgunXMgrcTzwdb\t29-250-925-9690\tto the blithely ironic deposits nag sly\n7914.45\tSupplier#x\tRUSSIA\t125988\tManufacturer#x\triRcntps4KEDtYScjpMIWeYF6mNnR\t32-194-698-3365\t busily bold packages are dolphi\n7912.91\tSupplier#x\tGERMANY\t159180\tManufacturer#x\t2wQRVovHrm3,v03IKzfTd,1PYsFXQFFOG\t17-266-947-7315\tay furiously regular platelets. cou\n7912.91\tSupplier#x\tGERMANY\t184210\tManufacturer#x\t2wQRVovHrm3,v03IKzfTd,1PYsFXQFFOG\t17-266-947-7315\tay furiously regular platelets. cou\n7894.56\tSupplier#x\tGERMANY\t85472\tManufacturer#x\tNSJ96vMROAbeXP\t17-963-404-3760\tic platelets affix after the furiously\n7887.08\tSupplier#x\tGERMANY\t164759\tManufacturer#x\tY28ITVeYriT3kIGdV2K8fSZ V2UqT5H1Otz\t17-988-938-4296\tckly around the carefully fluffy theodolites. slyly ironic pack\n7871.50\tSupplier#x\tRUSSIA\t104695\tManufacturer#x\t3w fNCnrVmvJjE95sgWZzvW\t32-432-452-7731\tironic requests. furiously final theodolites cajole. final, express packages sleep. quickly reg\n7852.45\tSupplier#x\tRUSSIA\t8363\tManufacturer#x\tWCNfBPZeSXh3h,c\t32-454-883-3821\tusly unusual pinto beans. brave ideas sleep carefully quickly ironi\n7850.66\tSupplier#x\tUNITED KINGDOM\t86501\tManufacturer#x\tONda3YJiHKJOC\t33-730-383-3892\tifts haggle fluffily pending pai\n7843.52\tSupplier#x\tFRANCE\t11680\tManufacturer#x\t2Z0JGkiv01Y00oCFwUGfviIbhzCdy\t16-464-517-8943\t express, final pinto beans x-ray slyly asymptotes. unusual, unusual\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q20.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<s_name:string,s_address:string>\n-- !query output\nSupplier#x\tiybAE,RmTymrZVYaFZva2SH,j\nSupplier#x\tYV45D7TkfdQanOOZ7q9QxkyGUapU1oOWU6q3\nSupplier#x\trF uV8d0JNEk\nSupplier#x\tBr7e1nnt1yxrw6ImgpJ7YdhFDjuBf\nSupplier#x\t7a9SP7qW5Yku5PvSg\nSupplier#x\tw8fOo5W,aS\nSupplier#x\tFfbhyCxWvcPrO8ltp9\nSupplier#x\ti9Sw4DoyMhzhKXCH9By,AYSgmD\nSupplier#x\t0qwCMwobKY OcmLyfRXlagA8ukENJv,\nSupplier#x\tTfB,a5bfl3Ah 3Z 74GqnNs6zKVGM\nSupplier#x\tmvvtlQKsTOsJj5Ihk7,cq\nSupplier#x\tpqck2ppy758TQpZCUAjPvlU55K3QjfL7Bi\nSupplier#x\tl6i2nMwVuovfKnuVgaSGK2rDy65DlAFLegiL7\nSupplier#x\tzlSLelQUj2XrvTTFnv7WAcYZGvvMTx882d4\nSupplier#x\turEaTejH5POADP2ARrf\nSupplier#x\tij98czM 2KzWe7dDTOxB8sq0UfCdvrX\nSupplier#x\t,AC e,tBpNwKb5xMUzeohxlRn, hdZJo73gFQF8y\nSupplier#x\trQWr6nf8ZhB2TAiIDIvo5Io\nSupplier#x\t42YSkFcAXMMcucsqeEefOE4HeCC\nSupplier#x\tbPOCc086oFm8sLtS,fGrH\nSupplier#x\tlch9HMNU1R7a0LIybsUodVknk6\nSupplier#x\twDmF5xLxtQch9ctVu,\nSupplier#x\tuKNWIeafaM644\nSupplier#x\tUhxNRzUu1dtFmp0\nSupplier#x\tpXTkGxrTQVyH1Rr\nSupplier#x\t7hMlCof1Y5zLFg\nSupplier#x\tTeRY7TtTH24sEword7yAaSkjx8\nSupplier#x\tRc8e,1Pybn r6zo0VJIEiD0UD vhk\nSupplier#x\tqWsendlOekQG1aW4uq06uQaCm51se8lirv7 hBRd\nSupplier#x\tM934fuZSnLW\nSupplier#x\tMWk6EAeozXb\nSupplier#x\tFpJbMU2h6ZR2eBv8I9NIxF\nSupplier#x\t dwebGX7Id2pc25YvY33\nSupplier#x\t20ytTtVObjKUUI2WCB0A\nSupplier#x\tkuxseyLtq QPLXxm9ZUrnB6Kkh92JtK5cQzzXNU \nSupplier#x\tMRtkgKolHJ9Wh X9J,urANHKDzvjr\nSupplier#x\tuYmlr46C06udCqanj0KiRsoTQakZsEyssL\nSupplier#x\tnODZw5q4dx kp0K5\nSupplier#x\tnSOEV3JeOU79\nSupplier#x\thz2qWXWVjOyKhqPYMoEwz6zFkrTaDM\nSupplier#x\tES21K9dxoW1I1TzWCj7ekdlNwSWnv1Z  6mQ,BKn\nSupplier#x\tnCoWfpB6YOymbgOht7ltfklpkHl\nSupplier#x\tWRh2w5WFvRg7Z0S1AvSvHCL\nSupplier#x\tRzHSxOTQmElCjxIBiVA52Z JB58rJhPRylR\nSupplier#x\tqydBQd14I5l5mVXa4fYY\nSupplier#x\tJZUugz04c iJFLrlGsz9O N,W 1rVHNIReyq\nSupplier#x\tCsPoKpw2QuTY4AV1NkWuttneIa4SN\nSupplier#x\t0Bw,q5Zp8su9XrzoCngZ3cAEXZwZ\nSupplier#x\tHVdFAN2JHMQSpKm\nSupplier#x\tlIFxR4fzm31C6,muzJwl84z\nSupplier#x\tyDclaDaBD4ihH\nSupplier#x\tlwr, 6L3gdfc79PQut,4XO6nQsTJY63cAyYO\nSupplier#x\tm,trBENywSArwg3DhB\nSupplier#x\tNaddba 8YTEKekZyP0\nSupplier#x\tjouzgX0WZjhNMWLaH4fy\nSupplier#x\tHxON3jJhUi3zjt,r mTD\nSupplier#x\thdolgh608uTkHh7t6qfSqkifKaiFjnCH\nSupplier#x\thMa535Cbf2mj1Nw4OWOKWVrsK0VdDkJURrdjSIJe\nSupplier#x\tDWdPxt7 RnkZv6VOByR0em\nSupplier#x\tE87yws6I,t0qNs4QW7UzExKiJnJDZWue\nSupplier#x\tpxrRP4irQ1VoyfQ,dTf3\nSupplier#x\t9xO4nyJ2QJcX6vGf\nSupplier#x\tEDdfNt7E5Uc,xLTupoIgYL4yY7ujh,\nSupplier#x\tjnisU8MzqO4iUB3zsPcrysMw3DDUojS4q7LD\nSupplier#x\tiy8VM48ynpc3N2OsBwAvhYakO2us9R1bi\nSupplier#x\tSh3dt9W5oeofFWovnFhrg,\nSupplier#x\tDJoCEapUeBXoV1iYiCcPFQvzsTv2ZI960\nSupplier#x\tzvFJIzS,oUuShHjpcX\nSupplier#x\tsy79CMLxqb,Cbo\nSupplier#x\tlNqFHQYjwSAkf\nSupplier#x\tqY588W0Yk5iaUy1RXTgNrEKrMAjBYHcKs\nSupplier#x\tjZEp0OEythCLcS OmJSrFtxJ66bMlzSp\nSupplier#x\tKgbZEaRk,6Q3mWvwh6uptrs1KRUHg 0\nSupplier#x\tvvGC rameLOk\nSupplier#x\tPmb05mQfBMS618O7WKqZJ 9vyv\nSupplier#x\tumEYZSq9RJ2WEzdsv9meU8rmqwzVLRgiZwC\nSupplier#x\ttF64pwiOM4IkWjN3mS,e06WuAjLx\nSupplier#x\tdl,HPtJmGipxYsSqn9wmqkuWjst,mCeJ8O6T\nSupplier#x\tbBddbpBxIVp Di9\nSupplier#x\t1OwPHh Pgiyeus,iZS5eA23JDOipwk\nSupplier#x\thQCAz59k,HLlp2CKUrcBIL\nSupplier#x\tS3076LEOwo\nSupplier#x\tAh0ZaLu6VwufPWUz,7kbXgYZhauEaHqGIg\nSupplier#x\tyvSsKNSTL5HLXBET4luOsPNLxKzAMk\nSupplier#x\tp pVXCnxgcklWF6A1o3OHY3qW6\nSupplier#x\t67NqBc4 t3PG3F8aO 
IsqWNq4kGaPowYL\nSupplier#x\tRj,x6IgLT7kBL99nqp\nSupplier#x\t,phpt6AWEnUS8t4Avb50rFfdg7O9c6nU8xxv8eC5\nSupplier#x\t42Z1uLye9nsn6aTGBNd dI8 x\nSupplier#x\tGPq5PMKY6Wy\nSupplier#x\tXl7h9ifgvIHmqxFLgWfHK4Gjav BkP\nSupplier#x\tWoi3b2ZaicPh ZSfu1EfXhE\nSupplier#x\tOnc3t57VAMchm,pmoVLaU8bONni9NsuaM PzMMFz\nSupplier#x\tf9g8SEHB7obMj3QXAjXS2vfYY22\nSupplier#x\tgXG28YqpxU\nSupplier#x\ttMCkdqbDoyNo8vMIkzjBqYexoRAuv,T6 qzcu\nSupplier#x\tUb6AAfHpWLWP\nSupplier#x\t9Dz2OVT1q sb4BK71ljQ1XjPBYRPvO\nSupplier#x\t63cYZenZBRZ613Q1FaoG0,smnC5zl9\nSupplier#x\tsaFdOR qW7AFY,3asPqiiAa11Mo22pCoN0BtPrKo\nSupplier#x\td2sbjG43KwMPX\nSupplier#x\tOn f5ypzoWgB\nSupplier#x\t14TVrjlzo2SJEBYCDgpMwTlvwSqC\nSupplier#x\tZwKxAv3V40tW E8P7Qwu,zlu,kPsL\nSupplier#x\tf2RBKec2T1NIi7yS M\nSupplier#x\t5rkb0PSews HvxkL8JaD41UpnSF2cg8H1\nSupplier#x\t2dq XTYhtYWSfp\nSupplier#x\tdmEWcS32C3kx,d,B95 OmYn48\nSupplier#x\t,o,OebwRbSDmVl9gN9fpWPCiqB UogvlSR\nSupplier#x\tlK,sYiGzB94hSyHy9xvSZFbVQNCZe2LXZuGbS\nSupplier#x\tREhR5jE,lLusQXvf54SwYySgsSSVFhu\nSupplier#x\t4m0cv8MwJ9yX2vlwI Z\nSupplier#x\tUiI2Cy3W4Tu5sLk LuvXLRy6KihlGv\nSupplier#x\tKJNUg1odUT2wtCS2s6PrH3D6fd\nSupplier#x\taZilwQKYDTVPoK\nSupplier#x\trY5gbfh3dKHnylcQUTPGCwnbe\nSupplier#x\tRVN23SYT9jenUeaWGXUd\nSupplier#x\t73VRDOO56GUCyvc40oYJ\nSupplier#x\txIgE69XszYbnO4Eon7cHHO8y\nSupplier#x\t7 wkdj2EO49iotley2kmIM ADpLSszGV3RNWj\nSupplier#x\tbQYPnj9lpmW3U\nSupplier#x\tb9 2zjHzxR\nSupplier#x\tN,CUclSqRLJcS8zQ\nSupplier#x\tiTLsnvD8D2GzWNUv kRInwRjk5rDeEmfup1\nSupplier#x\tNQ4Yryj624p7K53\nSupplier#x\trC,2rEn8gKDIS5Q0dJEoiF\nSupplier#x\tn4jhxGMqB5prD1HhpLvwrWStOLlla\nSupplier#x\tHGd2Xo 9nEcHJhZvXjXxWKIpApT\nSupplier#x\tfnlINT885vBBhsWwTGiZ0o22thwGY16h GHJj21\nSupplier#x\tTo6Slo0GJTqcIvD\nSupplier#x\tmLxYUJhsGcLtKe ,GFirNu183AvT\nSupplier#x\t2tRyX9M1a 4Rcm57s779F1ANG9jlpK\nSupplier#x\tG3j8g0KC4OcbAu2OVoPHrXQWMCUdjq8wgCHOExu\nSupplier#x\txonvn0KAQIL3p8kYk HC1FSSDSUSTC\nSupplier#x\tls DoKV7V5ulfQy9V\nSupplier#x\tXzb16kC63wmLVYexUEgB0hXFvHkjT5iPpq\nSupplier#x\tTqDGBULB3cTqIT6FKDvm9BS4e4v,zwYiQPb\nSupplier#x\ttEc95D2moN9S84nd55O,dlnW\nSupplier#x\tI2ae3rS7KVF8GVHtB\nSupplier#x\t51xhROLvQMJ05DndtZWt\nSupplier#x\tV8eE6oZ00OFNU,\nSupplier#x\t4UVv58ery1rjmqSR5\nSupplier#x\tyhhpWiJi7EJ6Q5VCaQ\nSupplier#x\tBYuucapYkptZl6fnd2QaDyZmI9gR1Ih16e\nSupplier#x\t9m9j0wfhWzCvVHxkU,PpAxwSH0h\nSupplier#x\tq8,V6LJRoHJjHcOuSG7aLTMg\nSupplier#x\trMcFg2530VC\nSupplier#x\tR IovIqzDi3,QHnaqZk1xS4hGAgelhP4yj\nSupplier#x\tJsPE18PvcdFTK\nSupplier#x\t69fi,U1r6enUb \nSupplier#x\t5cDGCS,T6N\nSupplier#x\tu3sicchh5ZpyTUpN1cJKNcAoabIWgY\nSupplier#x\tErzCF80K9Uy\nSupplier#x\tLnASFBfYRFOo9d6d,asBvVq9Lo2P\nSupplier#x\teonbJZvoDFYBNUinYfp6yERIg\nSupplier#x\tTWxt9f,LVER\nSupplier#x\tIK7eGw Yj90sTdpsP,vcqWxLB\nSupplier#x\t2AyePMkDqmzVzjGTizXthFLo8h EiudCMxOmIIG\nSupplier#x\t75I18sZmASwm POeheRMdj9tmpyeQ,BfCXN5BIAb\nSupplier#x\th778cEj14BuW9OEKlvPTWq4iwASR6EBBXN7zeS8\nSupplier#x\tUc29q4,5xVdDOF87UZrxhr4xWS0ihEUXuh\nSupplier#x\tMH0iB73GQ3z UW3O DbCbqmc\nSupplier#x\tSgVgP90vP452sUNTgzL9zKwXHXAzV6tV\nSupplier#x\tgLuGcugfpJSeGQARnaHNCaWnGaqsNnjyl20\nSupplier#x\taE,trRNdPx,4yinTD9O3DebDIp\nSupplier#x\tHmPlQEzKCPEcTUL14,kKq\nSupplier#x\tI 85Lu1sekbg2xrSIzm0\nSupplier#x\t8D 45GgxJO2OwwYP9S4AaXJKvDwPfLM\nSupplier#x\trDSA,D9oPM,65NMWEFrmGKAu\nSupplier#x\t2kwEHyMG 7FwozNImAUE6mH0hYtqYculJM\nSupplier#x\tw2vF6 D5YZO3visPXsqVfLADTK\nSupplier#x\tqK,trB6Sdy4Dz1BRUFNy\nSupplier#x\t57OPvKH4qyXIZ7IzYeCaw11a5N1Ki9f1WWmVQ,\nSupplier#x\tRqYTzgxj93CLX 0mcYfCENOefD\nSupplier#x\tXmiC,uy36B9,fb0zhcjaagiXQutg\nSupplier#x\tigRqmneFt 
\nSupplier#x\th3RVchUf8MzY46IzbZ0ng09\nSupplier#x\t51m637bO,Rw5DnHWFUvLacRx9\nSupplier#x\trRnCbHYgDgl9PZYnyWKVYSUW0vKg\nSupplier#x\twLhVEcRmd7PkJF4FBnGK7Z\nSupplier#x\t 4wNjXGa4OKWl\nSupplier#x\tE3iuyq7UnZxU7oPZIe2Gu6\nSupplier#x\tAPFRMy3lCbgFga53n5t9DxzFPQPgnjrGt32\nSupplier#x\t57sNwJJ3PtBDu,hMPP5QvpcOcSNRXn3PypJJrh\nSupplier#x\t7XdpAHrzr1t,UQFZE\nSupplier#x\t7wJ,J5DKcxSU4Kp1cQLpbcAvB5AsvKT\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q21.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<s_name:string,numwait:bigint>\n-- !query output\nSupplier#x\t20\nSupplier#x\t18\nSupplier#x\t17\nSupplier#x\t17\nSupplier#x\t17\nSupplier#x\t17\nSupplier#x\t17\nSupplier#x\t17\nSupplier#x\t17\nSupplier#x\t17\nSupplier#x\t16\nSupplier#x\t16\nSupplier#x\t16\nSupplier#x\t16\nSupplier#x\t16\nSupplier#x\t16\nSupplier#x\t16\nSupplier#x\t16\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t15\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t14\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t13\nSupplier#x\t12\nSupplier#x\t12\nSupplier#x\t12\nSupplier#x\t12\nSupplier#x\t12\nSupplier#x\t12\nSupplier#x\t12\nSupplier#x\t12\nSupplier#x\t12\nSupplier#x\t12\nSupplier#x\t12\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q22.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<cntrycode:string,numcust:bigint,totacctbal:decimal(22,2)>\n-- !query output\n13\t888\t6737713.99\n17\t861\t6460573.72\n18\t964\t7236687.40\n23\t892\t6701457.95\n29\t948\t7158866.63\n30\t909\t6808436.13\n31\t922\t6806670.18\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q3.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<l_orderkey:bigint,revenue:decimal(36,4),o_orderdate:date,o_shippriority:int>\n-- !query output\n2456423\t406181.0111\t1995-03-05\t0\n3459808\t405838.6989\t1995-03-04\t0\n492164\t390324.0610\t1995-02-19\t0\n1188320\t384537.9359\t1995-03-09\t0\n2435712\t378673.0558\t1995-02-26\t0\n4878020\t378376.7952\t1995-03-12\t0\n5521732\t375153.9215\t1995-03-13\t0\n2628192\t373133.3094\t1995-02-22\t0\n993600\t371407.4595\t1995-03-05\t0\n2300070\t367371.1452\t1995-03-13\t0\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q4.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<o_orderpriority:string,order_count:bigint>\n-- !query output\n1-URGENT\t10594\n2-HIGH\t10476\n3-MEDIUM\t10410\n4-NOT SPECIFIED\t10556\n5-LOW\t10487\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q5.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<n_name:string,revenue:decimal(36,4)>\n-- !query output\nINDONESIA\t55502041.1697\nVIETNAM\t55295086.9967\nCHINA\t53724494.2566\nINDIA\t52035512.0002\nJAPAN\t45410175.6954\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q6.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<revenue:decimal(35,4)>\n-- !query output\n123141078.2283\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q7.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<supp_nation:string,cust_nation:string,l_year:int,revenue:decimal(36,4)>\n-- !query output\nFRANCE\tGERMANY\t1995\t54639732.7336\nFRANCE\tGERMANY\t1996\t54633083.3076\nGERMANY\tFRANCE\t1995\t52531746.6697\nGERMANY\tFRANCE\t1996\t52520549.0224\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q8.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<o_year:int,mkt_share:decimal(38,6)>\n-- !query output\n1995\t0.034436\n1996\t0.041486\n"
  },
  {
    "path": "spark/src/test/resources/tpch-query-results/q9.sql.out",
    "content": "-- Automatically generated by CometTPCHQuerySuite\n\n-- !query schema\nstruct<nation:string,o_year:int,sum_profit:decimal(37,4)>\n-- !query output\nALGERIA\t1998\t27136900.1803\nALGERIA\t1997\t48611833.4962\nALGERIA\t1996\t48285482.6782\nALGERIA\t1995\t44402273.5999\nALGERIA\t1994\t48694008.0668\nALGERIA\t1993\t46044207.7838\nALGERIA\t1992\t45636849.4881\nARGENTINA\t1998\t28341663.7848\nARGENTINA\t1997\t47143964.1176\nARGENTINA\t1996\t45255278.6021\nARGENTINA\t1995\t45631769.2054\nARGENTINA\t1994\t48268856.3547\nARGENTINA\t1993\t48605593.6162\nARGENTINA\t1992\t46654240.7487\nBRAZIL\t1998\t26527736.3960\nBRAZIL\t1997\t45640660.7677\nBRAZIL\t1996\t45090647.1630\nBRAZIL\t1995\t44015888.5132\nBRAZIL\t1994\t44854218.8932\nBRAZIL\t1993\t45766603.7379\nBRAZIL\t1992\t45280216.8027\nCANADA\t1998\t26828985.3944\nCANADA\t1997\t44849954.3186\nCANADA\t1996\t46307936.1108\nCANADA\t1995\t47311993.0441\nCANADA\t1994\t46691491.9596\nCANADA\t1993\t46634791.1121\nCANADA\t1992\t45873849.6882\nCHINA\t1998\t27510180.1657\nCHINA\t1997\t46123865.4097\nCHINA\t1996\t49532807.0601\nCHINA\t1995\t46734651.4838\nCHINA\t1994\t46397896.6097\nCHINA\t1993\t49634673.9463\nCHINA\t1992\t46949457.6426\nEGYPT\t1998\t28401491.7968\nEGYPT\t1997\t47674857.6783\nEGYPT\t1996\t47745727.5450\nEGYPT\t1995\t45897160.6783\nEGYPT\t1994\t47194895.2280\nEGYPT\t1993\t49133627.6471\nEGYPT\t1992\t47000574.5027\nETHIOPIA\t1998\t25135046.1377\nETHIOPIA\t1997\t43010596.0838\nETHIOPIA\t1996\t43636287.1922\nETHIOPIA\t1995\t43575757.3343\nETHIOPIA\t1994\t41597208.5283\nETHIOPIA\t1993\t42622804.1616\nETHIOPIA\t1992\t44385735.6813\nFRANCE\t1998\t26210392.2804\nFRANCE\t1997\t42392969.4731\nFRANCE\t1996\t43306317.9749\nFRANCE\t1995\t46377408.4328\nFRANCE\t1994\t43447352.9922\nFRANCE\t1993\t43729961.0639\nFRANCE\t1992\t44052308.4290\nGERMANY\t1998\t25991257.1071\nGERMANY\t1997\t43968355.8079\nGERMANY\t1996\t45882074.8049\nGERMANY\t1995\t43314338.3077\nGERMANY\t1994\t44616995.4369\nGERMANY\t1993\t45126645.9113\nGERMANY\t1992\t44361141.2107\nINDIA\t1998\t29626417.2379\nINDIA\t1997\t51386111.3448\nINDIA\t1996\t47571018.5122\nINDIA\t1995\t49344062.2829\nINDIA\t1994\t50106952.4261\nINDIA\t1993\t48112766.6987\nINDIA\t1992\t47914303.1234\nINDONESIA\t1998\t27734909.6763\nINDONESIA\t1997\t44593812.9863\nINDONESIA\t1996\t44746729.8078\nINDONESIA\t1995\t45593622.6993\nINDONESIA\t1994\t45988483.8772\nINDONESIA\t1993\t46147963.7895\nINDONESIA\t1992\t45185777.0688\nIRAN\t1998\t26661608.9301\nIRAN\t1997\t45019114.1696\nIRAN\t1996\t45891397.0992\nIRAN\t1995\t44414285.2348\nIRAN\t1994\t43696360.4795\nIRAN\t1993\t45362775.8094\nIRAN\t1992\t43052338.4143\nIRAQ\t1998\t31188498.1914\nIRAQ\t1997\t48585307.5222\nIRAQ\t1996\t50036593.8404\nIRAQ\t1995\t48774801.7275\nIRAQ\t1994\t48795847.2310\nIRAQ\t1993\t47435691.5082\nIRAQ\t1992\t47562355.6571\nJAPAN\t1998\t24694102.1720\nJAPAN\t1997\t42377052.3454\nJAPAN\t1996\t40267778.9094\nJAPAN\t1995\t40925317.4650\nJAPAN\t1994\t41159518.3058\nJAPAN\t1993\t39589074.2771\nJAPAN\t1992\t39113493.9052\nJORDAN\t1998\t23489867.7893\nJORDAN\t1997\t41615962.6619\nJORDAN\t1996\t41860855.4684\nJORDAN\t1995\t39931672.0908\nJORDAN\t1994\t40707555.4638\nJORDAN\t1993\t39060405.4658\nJORDAN\t1992\t41657604.2684\nKENYA\t1998\t25566337.4303\nKENYA\t1997\t43108847.9024\nKENYA\t1996\t43482953.5430\nKENYA\t1995\t42517988.9814\nKENYA\t1994\t43612479.4523\nKENYA\t1993\t42724038.7571\nKENYA\t1992\t43217106.2068\nMOROCCO\t1998\t24915496.8756\nMOROCCO\t1997\t42698382.8550\nMOROCCO\t1996\t42986113.5049\nMOROCCO\t1995\t42316089.1593\nMOROCCO\t1994\t4
3458604.6029\nMOROCCO\t1993\t42672288.0699\nMOROCCO\t1992\t42800781.6415\nMOZAMBIQUE\t1998\t28279876.0301\nMOZAMBIQUE\t1997\t51159216.2298\nMOZAMBIQUE\t1996\t48072525.0645\nMOZAMBIQUE\t1995\t48905200.6007\nMOZAMBIQUE\t1994\t46092076.2805\nMOZAMBIQUE\t1993\t48555926.2669\nMOZAMBIQUE\t1992\t47809075.1192\nPERU\t1998\t26713966.2678\nPERU\t1997\t48324008.6011\nPERU\t1996\t50310008.8629\nPERU\t1995\t49647080.9629\nPERU\t1994\t46420910.2773\nPERU\t1993\t51536906.2487\nPERU\t1992\t47711665.3137\nROMANIA\t1998\t27271993.1010\nROMANIA\t1997\t45063059.1953\nROMANIA\t1996\t47492335.0323\nROMANIA\t1995\t45710636.2909\nROMANIA\t1994\t46088041.1066\nROMANIA\t1993\t47515092.5613\nROMANIA\t1992\t44111439.8044\nRUSSIA\t1998\t27935323.7271\nRUSSIA\t1997\t48222347.2924\nRUSSIA\t1996\t47553559.4932\nRUSSIA\t1995\t46755990.0976\nRUSSIA\t1994\t48000515.6191\nRUSSIA\t1993\t48569624.5082\nRUSSIA\t1992\t47672831.5329\nSAUDI ARABIA\t1998\t27113516.8424\nSAUDI ARABIA\t1997\t46690468.9649\nSAUDI ARABIA\t1996\t47775782.6670\nSAUDI ARABIA\t1995\t46657107.8287\nSAUDI ARABIA\t1994\t48181672.8100\nSAUDI ARABIA\t1993\t45692556.4438\nSAUDI ARABIA\t1992\t48924913.2717\nUNITED KINGDOM\t1998\t26366682.8786\nUNITED KINGDOM\t1997\t44518130.1851\nUNITED KINGDOM\t1996\t45539729.6166\nUNITED KINGDOM\t1995\t46845879.3390\nUNITED KINGDOM\t1994\t43081609.5737\nUNITED KINGDOM\t1993\t44770146.7555\nUNITED KINGDOM\t1992\t44123402.5484\nUNITED STATES\t1998\t27826593.6825\nUNITED STATES\t1997\t46638572.3648\nUNITED STATES\t1996\t46688280.5474\nUNITED STATES\t1995\t48951591.6156\nUNITED STATES\t1994\t45099092.0598\nUNITED STATES\t1993\t46181600.5278\nUNITED STATES\t1992\t46168214.0901\nVIETNAM\t1998\t27281931.0011\nVIETNAM\t1997\t48735914.1796\nVIETNAM\t1996\t47824595.9040\nVIETNAM\t1995\t48235135.8016\nVIETNAM\t1994\t47729256.3324\nVIETNAM\t1993\t45352676.8672\nVIETNAM\t1992\t47846355.6485\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometArrayExpressionSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.catalyst.expressions.{ArrayAppend, ArrayExcept, ArrayInsert, ArrayIntersect, ArrayJoin, ArrayRepeat, ArraysOverlap, ArrayUnion}\nimport org.apache.spark.sql.catalyst.expressions.{ArrayContains, ArrayRemove}\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions._\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.ArrayType\n\nimport org.apache.comet.CometSparkSessionExtensions.{isSpark35Plus, isSpark40Plus}\nimport org.apache.comet.DataTypeSupport.isComplexType\nimport org.apache.comet.serde.{CometArrayExcept, CometArrayRemove, CometArrayReverse, CometFlatten}\nimport org.apache.comet.testing.{DataGenOptions, ParquetGenerator, SchemaGenOptions}\n\nclass CometArrayExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  test(\"array_remove - integer\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayRemove]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempView(\"t1\") {\n          withTempDir { dir =>\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, 10000)\n            spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array_remove(array(_2, _3,_4), _2) from t1 where _2 is null\"))\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array_remove(array(_2, _3,_4), _3) from t1 where _3 is not null\"))\n            checkSparkAnswerAndOperator(sql(\n              \"SELECT array_remove(case when _2 = _3 THEN array(_2, _3,_4) ELSE null END, _3) from t1\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_remove - test all types (native Parquet reader)\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayRemove]) -> \"true\") {\n      withTempDir { dir =>\n        withTempView(\"t1\") {\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          val filename = path.toString\n          val random = new Random(42)\n          withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n            ParquetGenerator.makeParquetFile(\n              random,\n              spark,\n              filename,\n              100,\n              SchemaGenOptions(\n                generateArray = false,\n                generateStruct = false,\n                generateMap = false),\n              
DataGenOptions(allowNull = true, generateNegativeZero = true))\n          }\n          val table = spark.read.parquet(filename)\n          table.createOrReplaceTempView(\"t1\")\n          // test with array of each column\n          val fieldNames =\n            table.schema.fields\n              .filter(field => CometArrayRemove.isTypeSupported(field.dataType))\n              .map(_.name)\n          for (fieldName <- fieldNames) {\n            sql(s\"SELECT array($fieldName, $fieldName) as a, $fieldName as b FROM t1\")\n              .createOrReplaceTempView(\"t2\")\n            val df = sql(\"SELECT array_remove(a, b) FROM t2\")\n            checkSparkAnswerAndOperator(df)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_remove - test all types (convert from Parquet)\") {\n    withTempDir { dir =>\n      withTempView(\"t1\") {\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        val filename = path.toString\n        val random = new Random(42)\n        withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n          ParquetGenerator.makeParquetFile(\n            random,\n            spark,\n            filename,\n            100,\n            SchemaGenOptions(generateArray = true, generateStruct = true, generateMap = false),\n            DataGenOptions(allowNull = true, generateNegativeZero = true))\n        }\n        withSQLConf(\n          CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n          CometConf.COMET_SPARK_TO_ARROW_ENABLED.key -> \"true\",\n          CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\") {\n          val table = spark.read.parquet(filename)\n          table.createOrReplaceTempView(\"t1\")\n          // test with array of each column\n          for (field <- table.schema.fields) {\n            val fieldName = field.name\n            sql(s\"SELECT array($fieldName, $fieldName) as a, $fieldName as b FROM t1\")\n              .createOrReplaceTempView(\"t2\")\n            val df = sql(\"SELECT array_remove(a, b) FROM t2\")\n            checkSparkAnswer(df)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_remove - fallback for unsupported type struct\") {\n    withTempDir { dir =>\n      withTempView(\"t1\", \"t2\") {\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = true, 100)\n        spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n        sql(\"SELECT array(struct(_1, _2)) as a, struct(_1, _2) as b FROM t1\")\n          .createOrReplaceTempView(\"t2\")\n        val expectedFallbackReason =\n          \"data type not supported\"\n        checkSparkAnswerAndFallbackReason(\n          sql(\"SELECT array_remove(a, b) FROM t2\"),\n          expectedFallbackReason)\n      }\n    }\n  }\n\n  test(\"array_append\") {\n    val incompatKey = if (isSpark40Plus) {\n      classOf[ArrayInsert]\n    } else {\n      classOf[ArrayAppend]\n    }\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(incompatKey) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          withTempView(\"t1\") {\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n            spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\");\n            checkSparkAnswerAndOperator(spark.sql(\"Select array_append(array(_1),false) from t1\"))\n            
checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_append(array(_2, _3, _4), 4) FROM t1\"))\n            checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_append(array(_2, _3, _4), null) FROM t1\"));\n            checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_append(array(_6, _7), CAST(6.5 AS DOUBLE)) FROM t1\"));\n            checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_append(array(_8), 'test') FROM t1\"));\n            checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_append(array(_19), _19) FROM t1\"));\n            checkSparkAnswerAndOperator(\n              spark.sql(\n                \"SELECT array_append((CASE WHEN _2 =_3 THEN array(_4) END), _4) FROM t1\"));\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_prepend\") {\n    assume(isSpark35Plus) // in Spark 3.5 array_prepend is implemented via array_insert\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayInsert]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          withTempView(\"t1\") {\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n            spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\");\n            checkSparkAnswerAndOperator(\n              spark.sql(\"Select array_prepend(array(_1),false) from t1\"))\n            checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_prepend(array(_2, _3, _4), 4) FROM t1\"))\n            checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_prepend(array(_2, _3, _4), null) FROM t1\"));\n            checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_prepend(array(_6, _7), CAST(6.5 AS DOUBLE)) FROM t1\"));\n            checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_prepend(array(_8), 'test') FROM t1\"));\n            checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_prepend(array(_19), _19) FROM t1\"));\n            checkSparkAnswerAndOperator(\n              spark.sql(\n                \"SELECT array_prepend((CASE WHEN _2 =_3 THEN array(_4) END), _4) FROM t1\"));\n          }\n        }\n      }\n    }\n  }\n\n  test(\"ArrayInsert\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayInsert]) -> \"true\") {\n      Seq(true, false).foreach(dictionaryEnabled =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, 10000)\n          val df = spark.read\n            .parquet(path.toString)\n            .withColumn(\"arr\", array(col(\"_4\"), lit(null), col(\"_4\")))\n            .withColumn(\"arrInsertResult\", expr(\"array_insert(arr, 1, 1)\"))\n            .withColumn(\"arrInsertNegativeIndexResult\", expr(\"array_insert(arr, -1, 1)\"))\n            .withColumn(\"arrPosGreaterThanSize\", expr(\"array_insert(arr, 8, 1)\"))\n            .withColumn(\"arrPosIsNull\", expr(\"array_insert(arr, cast(null as int), 1)\"))\n            .withColumn(\"arrNegPosGreaterThanSize\", expr(\"array_insert(arr, -8, 1)\"))\n            .withColumn(\"arrInsertNone\", expr(\"array_insert(arr, 1, null)\"))\n          checkSparkAnswerAndOperator(df.select(\"arrInsertResult\"))\n          checkSparkAnswerAndOperator(df.select(\"arrInsertNegativeIndexResult\"))\n      
    checkSparkAnswerAndOperator(df.select(\"arrPosGreaterThanSize\"))\n          checkSparkAnswerAndOperator(df.select(\"arrPosIsNull\"))\n          checkSparkAnswerAndOperator(df.select(\"arrNegPosGreaterThanSize\"))\n          checkSparkAnswerAndOperator(df.select(\"arrInsertNone\"))\n        })\n    }\n  }\n\n  test(\"ArrayInsertUnsupportedArgs\") {\n    // This test checks that the else branch in the ArrayInsert\n    // mapping to Comet is valid and that the fallback to Spark works correctly.\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayInsert]) -> \"true\") {\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = false, 10000)\n        val df = spark.read\n          .parquet(path.toString)\n          .withColumn(\"arr\", array(col(\"_4\"), lit(null), col(\"_4\")))\n          .withColumn(\"idx\", udf((_: Int) => 1).apply(col(\"_4\")))\n          .withColumn(\"arrUnsupportedArgs\", expr(\"array_insert(arr, idx, 1)\"))\n        checkSparkAnswerAndFallbackReasons(\n          df.select(\"arrUnsupportedArgs\"),\n          Set(\"scalaudf is not supported\", \"unsupported arguments for ArrayInsert\"))\n      }\n    }\n  }\n\n  test(\"array_contains - int values\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayContains]) -> \"true\") {\n      withTempDir { dir =>\n        withTempView(\"t1\") {\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = false, n = 10000)\n          spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\");\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_contains(array(_2, _3, _4), _2) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\n              \"SELECT array_contains((CASE WHEN _2 =_3 THEN array(_4) END), _4) FROM t1\"));\n        }\n      }\n    }\n  }\n\n  test(\"array_contains - test all types (native Parquet reader)\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayContains]) -> \"true\") {\n      withTempDir { dir =>\n        withTempView(\"t1\", \"t2\", \"t3\") {\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          val filename = path.toString\n          val random = new Random(42)\n          withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n            ParquetGenerator.makeParquetFile(\n              random,\n              spark,\n              filename,\n              100,\n              SchemaGenOptions(generateArray = true, generateStruct = true, generateMap = false),\n              DataGenOptions(allowNull = true, generateNegativeZero = true))\n          }\n          val table = spark.read.parquet(filename)\n          table.createOrReplaceTempView(\"t1\")\n          val complexTypeFields =\n            table.schema.fields.filter(field => isComplexType(field.dataType))\n          val primitiveTypeFields =\n            table.schema.fields.filterNot(field => isComplexType(field.dataType))\n          for (field <- primitiveTypeFields) {\n            val fieldName = field.name\n            val typeName = field.dataType.typeName\n            sql(s\"SELECT array($fieldName, $fieldName) as a, $fieldName as b FROM t1\")\n              .createOrReplaceTempView(\"t2\")\n            checkSparkAnswerAndOperator(sql(\"SELECT array_contains(a, b) FROM t2\"))\n            checkSparkAnswerAndOperator(\n              
sql(s\"SELECT array_contains(a, cast(null as $typeName)) FROM t2\"))\n          }\n          for (field <- complexTypeFields) {\n            val fieldName = field.name\n            sql(s\"SELECT array($fieldName, $fieldName) as a, $fieldName as b FROM t1\")\n              .createOrReplaceTempView(\"t3\")\n            checkSparkAnswer(sql(\"SELECT array_contains(a, b) FROM t3\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_contains - array literals\") {\n    withTempDir { dir =>\n      withTempView(\"t2\") {\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        val filename = path.toString\n        val random = new Random(42)\n        withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n          ParquetGenerator.makeParquetFile(\n            random,\n            spark,\n            filename,\n            100,\n            SchemaGenOptions(generateArray = false, generateStruct = false, generateMap = false),\n            DataGenOptions(allowNull = true, generateNegativeZero = true))\n        }\n        val table = spark.read.parquet(filename)\n        table.createOrReplaceTempView(\"t2\")\n        for (field <- table.schema.fields) {\n          val typeName = field.dataType.typeName\n          checkSparkAnswerAndOperator(sql(\n            s\"SELECT array_contains(cast(null as array<$typeName>), cast(null as $typeName)) FROM t2\"))\n        }\n        checkSparkAnswerAndOperator(sql(\"SELECT array_contains(array(), 1) FROM t2\"))\n      }\n    }\n  }\n\n  test(\"array_contains - NULL array returns NULL\") {\n    // Test that array_contains returns NULL when the array argument is NULL\n    // This matches Spark's SQL three-valued logic behavior\n    withTempDir { dir =>\n      withTempView(\"t1\") {\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = false, n = 100)\n        spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n\n        // Test NULL array with non-null value\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT array_contains(cast(null as array<int>), 1) FROM t1\"))\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT array_contains(cast(null as array<string>), 'test') FROM t1\"))\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT array_contains(cast(null as array<double>), 1.5) FROM t1\"))\n\n        // Test NULL array with NULL value\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT array_contains(cast(null as array<int>), cast(null as int)) FROM t1\"))\n\n        // Test NULL array with column value\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT array_contains(cast(null as array<int>), _2) FROM t1\"))\n\n        // Test non-null array with values (to ensure fix doesn't break normal operation)\n        checkSparkAnswerAndOperator(sql(\"SELECT array_contains(array(1, 2, 3), 2) FROM t1\"))\n        checkSparkAnswerAndOperator(sql(\"SELECT array_contains(array(1, 2, 3), 5) FROM t1\"))\n      }\n    }\n  }\n\n  test(\"array_contains - test all types (convert from Parquet)\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          SchemaGenOptions(generateArray = true, generateStruct = true, 
generateMap = false),\n          DataGenOptions(allowNull = true, generateNegativeZero = true))\n      }\n      withSQLConf(\n        CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n        CometConf.COMET_SPARK_TO_ARROW_ENABLED.key -> \"true\",\n        CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\") {\n        withTempView(\"t1\", \"t2\") {\n          val table = spark.read.parquet(filename)\n          table.createOrReplaceTempView(\"t1\")\n          for (field <- table.schema.fields) {\n            val fieldName = field.name\n            sql(s\"SELECT array($fieldName, $fieldName) as a, $fieldName as b FROM t1\")\n              .createOrReplaceTempView(\"t2\")\n            checkSparkAnswer(sql(\"SELECT array_contains(a, b) FROM t2\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_distinct\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        withTempView(\"t1\") {\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, n = 10000)\n          spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_distinct(array(_3, _2, _4, _2, _4)) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_distinct((CASE WHEN _2 =_3 THEN array(_4) END)) FROM t1\"))\n          checkSparkAnswerAndOperator(spark.sql(\n            \"SELECT array_distinct((CASE WHEN _2 =_3 THEN array(_2, _2, _4, _4, _5) END)) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\n              \"SELECT array_distinct(array(_2, _2, CAST(NULL AS INT), _3, _4, _4)) FROM t1\"))\n          checkSparkAnswerAndOperator(spark.sql(\n            \"SELECT array_distinct(array(_2, _2, CAST(NULL AS INT), CAST(NULL AS INT), _3, _4, _4)) FROM t1\"))\n        }\n      }\n    }\n  }\n\n  test(\"array_union\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayUnion]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          withTempView(\"t1\") {\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, n = 10000)\n            spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n            checkSparkAnswerAndOperator(\n              spark.sql(\"SELECT array_union(array(_2, _3, _4), array(_3, _4)) FROM t1\"))\n            checkSparkAnswerAndOperator(sql(\"SELECT array_union(array(_18), array(_19)) from t1\"))\n            checkSparkAnswerAndOperator(spark.sql(\n              \"SELECT array_union(array(CAST(NULL AS INT), _2, _3, _4), array(CAST(NULL AS INT), _2, _3)) FROM t1\"))\n            checkSparkAnswerAndOperator(spark.sql(\n              \"SELECT array_union(array(CAST(NULL AS INT), CAST(NULL AS INT), _2, _3, _4), array(CAST(NULL AS INT), CAST(NULL AS INT), _2, _3)) FROM t1\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_max\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        withTempView(\"t1\") {\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, n = 10000)\n          spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\");\n          checkSparkAnswerAndOperator(spark.sql(\"SELECT array_max(array(_2, _3, _4)) FROM 
t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_max((CASE WHEN _2 =_3 THEN array(_4) END)) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_max((CASE WHEN _2 =_3 THEN array(_2, _4) END)) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_max(array(CAST(NULL AS INT), CAST(NULL AS INT))) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_max(array(_2, CAST(NULL AS INT))) FROM t1\"))\n          checkSparkAnswerAndOperator(spark.sql(\"SELECT array_max(array()) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\n              \"SELECT array_max(array(double('-Infinity'), 0.0, double('Infinity'))) FROM t1\"))\n        }\n      }\n    }\n  }\n\n  test(\"array_min\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        withTempView(\"t1\") {\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, n = 10000)\n          spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\");\n          checkSparkAnswerAndOperator(spark.sql(\"SELECT array_min(array(_2, _3, _4)) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_min((CASE WHEN _2 =_3 THEN array(_4) END)) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_min((CASE WHEN _2 =_3 THEN array(_2, _4) END)) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_min(array(CAST(NULL AS INT), CAST(NULL AS INT))) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\"SELECT array_min(array(_2, CAST(NULL AS INT))) FROM t1\"))\n          checkSparkAnswerAndOperator(spark.sql(\"SELECT array_min(array()) FROM t1\"))\n          checkSparkAnswerAndOperator(\n            spark.sql(\n              \"SELECT array_min(array(double('-Infinity'), 0.0, double('Infinity'))) FROM t1\"))\n        }\n      }\n    }\n  }\n\n  test(\"array_intersect\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayIntersect]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          withTempView(\"t1\") {\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, 10000)\n            spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array_intersect(array(_2, _3, _4), array(_3, _4)) from t1\"))\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array_intersect(array(_4 * -1), array(_5)) from t1\"))\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array_intersect(array(_18), array(_19)) from t1\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_join\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayJoin]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          withTempView(\"t1\") {\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, 10000)\n            spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n            checkSparkAnswerAndOperator(sql(\n              \"SELECT 
array_join(array(cast(_1 as string), cast(_2 as string), cast(_6 as string)), ' @ ') from t1\"))\n            checkSparkAnswerAndOperator(sql(\n              \"SELECT array_join(array(cast(_1 as string), cast(_2 as string), cast(_6 as string)), ' @ ', ' +++ ') from t1\"))\n            checkSparkAnswerAndOperator(sql(\n              \"SELECT array_join(array('hello', 'world', cast(_2 as string)), ' ') from t1 where _2 is not null\"))\n            checkSparkAnswerAndOperator(sql(\n              \"SELECT array_join(array('hello', '-', 'world', cast(_2 as string)), ' ') from t1\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"arrays_overlap\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArraysOverlap]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          withTempView(\"t1\") {\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, 10000)\n            spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n            checkSparkAnswerAndOperator(sql(\n              \"SELECT arrays_overlap(array(_2, _3, _4), array(_3, _4)) from t1 where _2 is not null\"))\n            checkSparkAnswerAndOperator(sql(\n              \"SELECT arrays_overlap(array('a', null, cast(_1 as string)), array('b', cast(_1 as string), cast(_2 as string))) from t1 where _1 is not null\"))\n            checkSparkAnswerAndOperator(sql(\n              \"SELECT arrays_overlap(array('a', null), array('b', null)) from t1 where _1 is not null\"))\n            checkSparkAnswerAndOperator(spark.sql(\n              \"SELECT arrays_overlap((CASE WHEN _2 = _3 THEN array(_6, _7) END), array(_6, _7)) FROM t1\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_compact\") {\n    // TODO fix for Spark 4.0.0\n    assume(!isSpark40Plus)\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        withTempView(\"t1\") {\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, n = 10000)\n          spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n\n          checkSparkAnswerAndOperator(\n            sql(\"SELECT array_compact(array(_2)) FROM t1 WHERE _2 IS NULL\"))\n          checkSparkAnswerAndOperator(\n            sql(\"SELECT array_compact(array(_2)) FROM t1 WHERE _2 IS NOT NULL\"))\n          checkSparkAnswerAndOperator(\n            sql(\"SELECT array_compact(array(_2, _3, null)) FROM t1 WHERE _2 IS NOT NULL\"))\n        }\n      }\n    }\n  }\n\n  test(\"array_except - basic test (only integer values)\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayExcept]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          withTempView(\"t1\") {\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, 10000)\n            spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array_except(array(_2, _3, _4), array(_3, _4)) from t1\"))\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array_except(array(_18), array(_19)) from t1\"))\n            checkSparkAnswerAndOperator(\n              spark.sql(\n                \"SELECT 
array_except(array(_2, _2, _4), array(_4)) FROM t1 WHERE _2 IS NOT NULL\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_except - test all types (native Parquet reader)\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          SchemaGenOptions(generateArray = false, generateStruct = false, generateMap = false),\n          DataGenOptions(allowNull = true, generateNegativeZero = true))\n      }\n      withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[ArrayExcept]) -> \"true\") {\n        withTempView(\"t1\", \"t2\") {\n          val table = spark.read.parquet(filename)\n          table.createOrReplaceTempView(\"t1\")\n          // test with array of each column\n          val fields =\n            table.schema.fields.filter(field => CometArrayExcept.isTypeSupported(field.dataType))\n          for (field <- fields) {\n            val fieldName = field.name\n            val typeName = field.dataType.typeName\n            sql(\n              s\"SELECT cast(array($fieldName, $fieldName) as array<$typeName>) as a, cast(array($fieldName) as array<$typeName>) as b FROM t1\")\n              .createOrReplaceTempView(\"t2\")\n            val df = sql(\"SELECT array_except(a, b) FROM t2\")\n            checkSparkAnswerAndOperator(df)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_except - test all types (convert from Parquet)\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          SchemaGenOptions(generateArray = true, generateStruct = true, generateMap = false),\n          DataGenOptions(allowNull = true, generateNegativeZero = true))\n      }\n      withSQLConf(\n        CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n        CometConf.COMET_SPARK_TO_ARROW_ENABLED.key -> \"true\",\n        CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\") {\n        withTempView(\"t1\", \"t2\") {\n          val table = spark.read.parquet(filename)\n          table.createOrReplaceTempView(\"t1\")\n          // test with array of each column\n          val fields =\n            table.schema.fields.filter(field => CometArrayExcept.isTypeSupported(field.dataType))\n          for (field <- fields) {\n            val fieldName = field.name\n            sql(s\"SELECT array($fieldName, $fieldName) as a, array($fieldName) as b FROM t1\")\n              .createOrReplaceTempView(\"t2\")\n            val df = sql(\"SELECT array_except(a, b) FROM t2\")\n            checkSparkAnswer(df)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_repeat\") {\n    withSQLConf(\n      CometConf.getExprAllowIncompatConfigKey(classOf[ArrayRepeat]) -> \"true\",\n      CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          withTempView(\"t1\") {\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, 
dictionaryEnabled, 100)\n            spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n\n            checkSparkAnswerAndOperator(sql(\"SELECT array_repeat(_4, null) from t1\"))\n            checkSparkAnswerAndOperator(sql(\"SELECT array_repeat(_4, 0) from t1\"))\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array_repeat(_2, 5) from t1 where _2 is not null\"))\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array_repeat(_2, 5) from t1 where _2 is null\"))\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array_repeat(_3, _4) from t1 where _3 is not null\"))\n            checkSparkAnswerAndOperator(sql(\"SELECT array_repeat(cast(_3 as string), 2) from t1\"))\n            checkSparkAnswerAndOperator(sql(\"SELECT array_repeat(array(_2, _3, _4), 2) from t1\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"flatten - test all types (native Parquet reader)\") {\n    withTempDir { dir =>\n      withTempView(\"t1\", \"t2\") {\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        val filename = path.toString\n        val random = new Random(42)\n        withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n          ParquetGenerator.makeParquetFile(\n            random,\n            spark,\n            filename,\n            100,\n            SchemaGenOptions(generateArray = false, generateStruct = false, generateMap = false),\n            DataGenOptions(allowNull = true, generateNegativeZero = true))\n        }\n        val table = spark.read.parquet(filename)\n        table.createOrReplaceTempView(\"t1\")\n        val fieldNames =\n          table.schema.fields\n            .filter(field => CometFlatten.isTypeSupported(field.dataType))\n            .map(_.name)\n        for (fieldName <- fieldNames) {\n          sql(s\"SELECT array(array($fieldName, $fieldName), array($fieldName)) as a FROM t1\")\n            .createOrReplaceTempView(\"t2\")\n          checkSparkAnswerAndOperator(sql(\"SELECT flatten(a) FROM t2\"))\n        }\n      }\n    }\n  }\n\n  test(\"flatten - test all types (convert from Parquet)\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          SchemaGenOptions(generateArray = true, generateStruct = true, generateMap = false),\n          DataGenOptions(allowNull = true, generateNegativeZero = true))\n      }\n      withSQLConf(\n        CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n        CometConf.COMET_SPARK_TO_ARROW_ENABLED.key -> \"true\",\n        CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\") {\n        withTempView(\"t1\", \"t2\") {\n          val table = spark.read.parquet(filename)\n          table.createOrReplaceTempView(\"t1\")\n          val fieldNames =\n            table.schema.fields\n              .filter(field => CometFlatten.isTypeSupported(field.dataType))\n              .map(_.name)\n          for (fieldName <- fieldNames) {\n            sql(s\"SELECT array(array($fieldName, $fieldName), array($fieldName)) as a FROM t1\")\n              .createOrReplaceTempView(\"t2\")\n            checkSparkAnswer(sql(\"SELECT flatten(a) FROM t2\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array literals\") {\n    
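// the nested array literal below is foldable, so Spark likely constant-folds it into a single\n    // literal; this checks that Comet still handles a literal array containing nulls and empty arrays\n    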
withSQLConf(CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          withTempView(\"t1\") {\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, 100)\n            spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT array(array(1, 2, 3), null, array(), array(null), array(1)) from t1\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"array_reverse\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          SchemaGenOptions(generateArray = true, generateStruct = true, generateMap = false),\n          DataGenOptions(allowNull = true, generateNegativeZero = true))\n      }\n      withSQLConf(\n        CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n        CometConf.COMET_SPARK_TO_ARROW_ENABLED.key -> \"true\",\n        CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\") {\n        withTempView(\"t1\", \"t2\") {\n          val table = spark.read.parquet(filename)\n          table.createOrReplaceTempView(\"t1\")\n          val fieldNames =\n            table.schema.fields\n              .filter(field => CometArrayReverse.isTypeSupported(field.dataType))\n              .map(_.name)\n          for (fieldName <- fieldNames) {\n            sql(s\"SELECT $fieldName as a FROM t1\")\n              .createOrReplaceTempView(\"t2\")\n            checkSparkAnswer(sql(\"SELECT reverse(a) FROM t2\"))\n          }\n        }\n      }\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/2612\n  test(\"array_reverse - fallback for binary array\") {\n    val fallbackReason = CometArrayReverse.unsupportedReason\n    withTable(\"t1\") {\n      sql(\"\"\"create table t1 using parquet as\n          select cast(null as array<binary>) c1, cast(array() as array<binary>) c2\n          from range(10)\n        \"\"\")\n\n      checkSparkAnswerAndFallbackReason(\n        \"select reverse(array(c1, c2)) AS x FROM t1\",\n        fallbackReason)\n\n      checkSparkAnswerAndFallbackReason(\n        \"select reverse(array(c1, c1)) AS x FROM t1\",\n        fallbackReason)\n\n      checkSparkAnswerAndFallbackReason(\n        \"select reverse(array(array(c1), array(c2))) AS x FROM t1\",\n        fallbackReason)\n    }\n  }\n\n  test(\"array_reverse 2\") {\n    // This test validates data correctness for array<binary> columns with nullable elements.\n    // See https://github.com/apache/datafusion-comet/issues/2612\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val schemaOptions =\n          SchemaGenOptions(generateArray = true, generateStruct = false, generateMap = false)\n        val dataOptions = DataGenOptions(allowNull = true, generateNegativeZero = false)\n        ParquetGenerator.makeParquetFile(random, spark, filename, 100, schemaOptions, dataOptions)\n      }\n      withTempView(\"t1\") {\n        val table = 
spark.read.parquet(filename)\n        table.createOrReplaceTempView(\"t1\")\n        for (field <- table.schema.fields.filter(_.dataType.isInstanceOf[ArrayType])) {\n          val sql = s\"SELECT ${field.name}, reverse(${field.name}) FROM t1 ORDER BY ${field.name}\"\n          checkSparkAnswer(sql)\n        }\n      }\n    }\n  }\n\n  test(\"size with array input\") {\n    withTempDir { dir =>\n      withTempView(\"t1\") {\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = true, 100)\n        spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n\n        // Test size function with arrays built from columns (ensures native execution)\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT size(array(_2, _3, _4)) from t1 where _2 is not null order by _2, _3, _4\"))\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT size(array(_1)) from t1 where _1 is not null order by _1\"))\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT size(array(_2, _3)) from t1 where _2 is null order by _2, _3\"))\n\n        // Test with conditional arrays (forces runtime evaluation)\n        checkSparkAnswerAndOperator(sql(\n          \"SELECT size(case when _2 > 0 then array(_2, _3, _4) else array(_2) end) from t1 order by _2, _3, _4\"))\n        checkSparkAnswerAndOperator(sql(\n          \"SELECT size(case when _1 then array(_8, _9) else array(_8, _9, _10) end) from t1 order by _1, _8, _9, _10\"))\n\n        // Test empty arrays using conditional logic to avoid constant folding\n        checkSparkAnswerAndOperator(sql(\n          \"SELECT size(case when _2 < 0 then array(_2, _3) else array() end) from t1 order by _2, _3\"))\n\n        // Test null arrays using conditional logic\n        checkSparkAnswerAndOperator(sql(\n          \"SELECT size(case when _2 is null then cast(null as array<int>) else array(_2) end) from t1 order by _2\"))\n\n        // Test with different data types using column references\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT size(array(_8, _9, _10)) from t1 where _8 is not null order by _8, _9, _10\")\n        ) // string arrays\n        checkSparkAnswerAndOperator(\n          sql(\n            \"SELECT size(array(_2, _3, _4, _5, _6)) from t1 where _2 is not null order by _2, _3, _4, _5, _6\"\n          )\n        ) // int arrays\n      }\n    }\n  }\n\n  test(\"size - respect to legacySizeOfNull\") {\n    val table = \"t1\"\n    withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_ICEBERG_COMPAT) {\n      withTable(table) {\n        sql(s\"create table $table(col array<string>) using parquet\")\n        sql(s\"insert into $table values(null)\")\n        withSQLConf(SQLConf.LEGACY_SIZE_OF_NULL.key -> \"false\") {\n          checkSparkAnswerAndOperator(sql(s\"select size(col) from $table\"))\n        }\n        withSQLConf(\n          SQLConf.LEGACY_SIZE_OF_NULL.key -> \"true\",\n          SQLConf.ANSI_ENABLED.key -> \"false\") {\n          checkSparkAnswerAndOperator(sql(s\"select size(col) from $table\"))\n        }\n      }\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/3375\n  test(\"(ansi) array access out of bounds - GetArrayItem\") {\n    withSQLConf(\n      SQLConf.ANSI_ENABLED.key -> \"true\",\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n      withTable(\"test_array_get_item\") {\n        sql(\"CREATE TABLE test_array_get_item(arr 
ARRAY<INT>) USING parquet\")\n        sql(\"INSERT INTO test_array_get_item VALUES (array(1, 2, 3))\")\n        // Try to access array with out-of-bounds index\n        val exception = intercept[Exception] {\n          sql(\"select arr[5] from test_array_get_item\").collect()\n        }\n        val errorMessage = exception.getMessage\n        // Verify error message contains the expected error code\n        assert(\n          errorMessage.contains(\"INVALID_ARRAY_INDEX\"),\n          s\"Error message should contain array index error: $errorMessage\")\n\n        assert(errorMessage.contains(\"The index 5 is out of bounds. The array has 3 elements.\" +\n          \" Use the SQL function `get()` to tolerate accessing element at invalid index and return NULL instead.\"))\n\n        assert(\n          errorMessage.contains(\"select arr[5] from test_array_get_item\"),\n          s\"Error message should contain SQL query text but got: $errorMessage\")\n      }\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/3375\n  test(\"(ansi) array access out of bounds - element_at with invalid index\") {\n    withSQLConf(\n      SQLConf.ANSI_ENABLED.key -> \"true\",\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n      withTable(\"test_element_at_invalid\") {\n        sql(\"CREATE TABLE test_element_at_invalid(arr ARRAY<INT>) USING parquet\")\n        sql(\"INSERT INTO test_element_at_invalid VALUES (array(1, 2, 3))\")\n        // Try to access array with out-of-bounds index using element_at\n        val exception = intercept[Exception] {\n          sql(\"select element_at(arr, 10) from test_element_at_invalid\").collect()\n        }\n        val errorMessage = exception.getMessage\n        // Verify error message contains the expected error code\n        assert(\n          errorMessage.contains(\"INVALID_ARRAY_INDEX_IN_ELEMENT_AT\"),\n          s\"Error message should contain array index error: $errorMessage\")\n\n        assert(errorMessage.contains(\"The index 10 is out of bounds. The array has 3 elements.\" +\n          \" Use `try_element_at` to tolerate accessing element at invalid index and return NULL instead\"))\n\n        assert(\n          errorMessage.contains(\"select element_at(arr, 10) from test_element_at_invalid\"),\n          s\"Error message should contain SQL query text but got: $errorMessage\")\n      }\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/3375\n  test(\"(ansi) array access with zero index - element_at\") {\n    withSQLConf(\n      SQLConf.ANSI_ENABLED.key -> \"true\",\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n      withTable(\"test_element_at_zero\") {\n        sql(\"CREATE TABLE test_element_at_zero(arr ARRAY<INT>) USING parquet\")\n        sql(\"INSERT INTO test_element_at_zero VALUES (array(1, 2, 3))\")\n        // Try to access array with zero index (invalid in Spark)\n        val exception = intercept[Exception] {\n          sql(\"select element_at(arr, 0) from test_element_at_zero\").collect()\n        }\n        val errorMessage = exception.getMessage\n        // Verify error message contains the expected error code\n        assert(\n          errorMessage.contains(\"INVALID_INDEX_OF_ZERO\"),\n          s\"Error message should contain zero index error: $errorMessage\")\n\n        assert(\n          errorMessage.contains(\"The index 0 is invalid. 
An index shall be either < 0 or > 0\" +\n            \" (the first element has index 1)\"))\n\n        assert(\n          errorMessage.contains(\"select element_at(arr, 0) from test_element_at_zero\"),\n          s\"Error message should contain SQL query text but got: $errorMessage\")\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometBitwiseExpressionSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n\nimport org.apache.comet.testing.{DataGenOptions, ParquetGenerator, SchemaGenOptions}\n\nclass CometBitwiseExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  test(\"bitwise expressions\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(col1 int, col2 int) using parquet\")\n          sql(s\"insert into $table values(1111, 2)\")\n          sql(s\"insert into $table values(1111, 2)\")\n          sql(s\"insert into $table values(3333, 4)\")\n          sql(s\"insert into $table values(5555, 6)\")\n\n          checkSparkAnswerAndOperator(\n            s\"SELECT col1 & col2,  col1 | col2, col1 ^ col2 FROM $table\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT col1 & 1234,  col1 | 1234, col1 ^ 1234 FROM $table\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT shiftright(col1, 2), shiftright(col1, col2) FROM $table\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT shiftleft(col1, 2), shiftleft(col1, col2) FROM $table\")\n          checkSparkAnswerAndOperator(s\"SELECT ~(11), ~col1, ~col2 FROM $table\")\n        }\n      }\n    }\n  }\n\n  test(\"bitwise shift with different left/right types\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(col1 long, col2 int) using parquet\")\n          sql(s\"insert into $table values(1111, 2)\")\n          sql(s\"insert into $table values(1111, 2)\")\n          sql(s\"insert into $table values(3333, 4)\")\n          sql(s\"insert into $table values(5555, 6)\")\n\n          checkSparkAnswerAndOperator(\n            s\"SELECT shiftright(col1, 2), shiftright(col1, col2) FROM $table\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT shiftleft(col1, 2), shiftleft(col1, col2) FROM $table\")\n        }\n      }\n    }\n  }\n\n  test(\"bitwise_get - throws exceptions\") {\n    def checkSparkAndCometEqualThrows(query: String): Unit = {\n      checkSparkAnswerMaybeThrows(sql(query)) match {\n        case (Some(sparkExc), Some(cometExc)) =>\n          assert(sparkExc.getMessage == cometExc.getMessage)\n        case _ => fail(\"Exception should be thrown\")\n      }\n    }\n    checkSparkAndCometEqualThrows(\"select 
bit_get(1000, -30)\")\n    checkSparkAndCometEqualThrows(\"select bit_get(cast(1000 as byte), 9)\")\n    checkSparkAndCometEqualThrows(\"select bit_count(cast(null as byte), 4)\")\n    checkSparkAndCometEqualThrows(\"select bit_count(1000, cast(null as int))\")\n  }\n\n  test(\"bitwise_get - random values (spark parquet gen)\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          SchemaGenOptions(generateArray = false, generateStruct = false, generateMap = false),\n          DataGenOptions(allowNull = true, generateNegativeZero = true))\n      }\n      val table = spark.read.parquet(filename)\n      checkSparkAnswerAndOperator(\n        table\n          .selectExpr(\"bit_get(c1, 7)\", \"bit_get(c2, 10)\", \"bit_get(c3, 12)\", \"bit_get(c4, 16)\"))\n    }\n  }\n\n  test(\"bitwise_get - random values (native parquet gen)\") {\n    def randomBitPosition(maxBitPosition: Int): Int = {\n      Random.nextInt(maxBitPosition)\n    }\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, 0, 10000, nullEnabled = false)\n        val table = spark.read.parquet(path.toString)\n        (0 to 10).foreach { _ =>\n          val byteBitPosition = randomBitPosition(java.lang.Byte.SIZE)\n          val shortBitPosition = randomBitPosition(java.lang.Short.SIZE)\n          val intBitPosition = randomBitPosition(java.lang.Integer.SIZE)\n          val longBitPosition = randomBitPosition(java.lang.Long.SIZE)\n          checkSparkAnswerAndOperator(\n            table\n              .selectExpr(\n                s\"bit_get(_2, $byteBitPosition)\",\n                s\"bit_get(_3, $shortBitPosition)\",\n                s\"bit_get(_4, $intBitPosition)\",\n                s\"bit_get(_5, $longBitPosition)\",\n                s\"bit_get(_11, $longBitPosition)\"))\n        }\n      }\n    }\n  }\n\n  test(\"bitwise_count - min/max values\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"bitwise_count_test\"\n        withTable(table) {\n          sql(s\"create table $table(col1 long, col2 int, col3 short, col4 byte) using parquet\")\n          sql(s\"insert into $table values(1111, 2222, 17, 7)\")\n          sql(\n            s\"insert into $table values(${Long.MaxValue}, ${Int.MaxValue}, ${Short.MaxValue}, ${Byte.MaxValue})\")\n          sql(\n            s\"insert into $table values(${Long.MinValue}, ${Int.MinValue}, ${Short.MinValue}, ${Byte.MinValue})\")\n\n          checkSparkAnswerAndOperator(sql(s\"SELECT bit_count(col1) FROM $table\"))\n          checkSparkAnswerAndOperator(sql(s\"SELECT bit_count(col2) FROM $table\"))\n          checkSparkAnswerAndOperator(sql(s\"SELECT bit_count(col3) FROM $table\"))\n          checkSparkAnswerAndOperator(sql(s\"SELECT bit_count(col4) FROM $table\"))\n          checkSparkAnswerAndOperator(sql(s\"SELECT bit_count(true) FROM $table\"))\n          checkSparkAnswerAndOperator(sql(s\"SELECT bit_count(false) FROM $table\"))\n        }\n      }\n    }\n  }\n\n  test(\"bitwise_count - random values (spark gen)\") {\n    withTempDir { dir 
=>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          10,\n          SchemaGenOptions(generateArray = false, generateStruct = false, generateMap = false),\n          DataGenOptions(allowNull = true, generateNegativeZero = true))\n      }\n      val table = spark.read.parquet(filename)\n      val df =\n        table.selectExpr(\"bit_count(c1)\", \"bit_count(c2)\", \"bit_count(c3)\", \"bit_count(c4)\")\n\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"bitwise_count - random values (native parquet gen)\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, 0, 10000, nullEnabled = false)\n        val table = spark.read.parquet(path.toString)\n        checkSparkAnswerAndOperator(\n          table\n            .selectExpr(\n              \"bit_count(_2)\",\n              \"bit_count(_3)\",\n              \"bit_count(_4)\",\n              \"bit_count(_5)\",\n              \"bit_count(_11)\"))\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometCastSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.io.File\n\nimport scala.collection.mutable.ListBuffer\nimport scala.util.Random\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.{CometTestBase, DataFrame, Row, SaveMode}\nimport org.apache.spark.sql.catalyst.expressions.Cast\nimport org.apache.spark.sql.catalyst.parser.ParseException\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions.{col, monotonically_increasing_id}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{ArrayType, BinaryType, BooleanType, ByteType, DataType, DataTypes, DateType, DecimalType, DoubleType, FloatType, IntegerType, LongType, ShortType, StringType, StructField, StructType, TimestampType}\n\nimport org.apache.comet.expressions.{CometCast, CometEvalMode}\nimport org.apache.comet.rules.CometScanTypeChecker\nimport org.apache.comet.serde.{Compatible, Incompatible}\n\nclass CometCastSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  import testImplicits._\n\n  /** Create a data generator using a fixed seed so that tests are reproducible */\n  private val gen = DataGenerator.DEFAULT\n\n  /** Number of random data items to generate in each test */\n  private val dataSize = 10000\n\n  // we should eventually add more whitespace chars here as documented in\n  // https://docs.oracle.com/javase/8/docs/api/java/lang/Character.html#isWhitespace-char-\n  // but this is likely a reasonable starting point for now\n  private val whitespaceChars = \" \\t\\r\\n\"\n\n  /**\n   * We use these characters to construct strings that potentially represent valid numbers such as\n   * `-12.34d` or `4e7`. 
Invalid numeric strings will also be generated, such as `+e.-d`.\n   */\n  private val numericPattern = \"0123456789deEf+-.\" + whitespaceChars\n\n  private val datePattern = \"0123456789/\" + whitespaceChars\n\n  private val timestampPattern = \"0123456789/:T\" + whitespaceChars\n\n  lazy val usingParquetExecWithIncompatTypes: Boolean =\n    hasUnsignedSmallIntSafetyCheck(conf)\n\n  // Timezone list to check temporal type casts\n  private val compatibleTimezones = Seq(\n    \"UTC\",\n    \"America/New_York\",\n    \"America/Chicago\",\n    \"America/Denver\",\n    \"America/Los_Angeles\",\n    \"Europe/London\",\n    \"Europe/Paris\",\n    \"Europe/Berlin\",\n    \"Asia/Tokyo\",\n    \"Asia/Shanghai\",\n    \"Asia/Singapore\",\n    \"Asia/Kolkata\",\n    \"Australia/Sydney\",\n    \"Pacific/Auckland\")\n\n  test(\"all valid cast combinations covered\") {\n    val names = testNames\n\n    def assertTestsExist(fromTypes: Seq[DataType], toTypes: Seq[DataType]): Unit = {\n      for (fromType <- fromTypes) {\n        for (toType <- toTypes) {\n          val expectedTestName = s\"cast $fromType to $toType\"\n          val testExists = names.contains(expectedTestName)\n          if (Cast.canCast(fromType, toType)) {\n            if (fromType == toType) {\n              if (testExists) {\n                fail(s\"Found redundant test for no-op cast: $expectedTestName\")\n              }\n            } else if (!testExists) {\n              fail(s\"Missing test: $expectedTestName\")\n            } else {\n              val testIgnored =\n                tags.get(expectedTestName).exists(s => s.contains(\"org.scalatest.Ignore\"))\n              CometCast.isSupported(fromType, toType, None, CometEvalMode.LEGACY) match {\n                case Compatible(_) =>\n                  if (testIgnored) {\n                    fail(\n                      s\"Cast from $fromType to $toType is reported as compatible \" +\n                        \"with Spark but the test is ignored\")\n                  }\n                case _ =>\n                  if (!testIgnored) {\n                    fail(\n                      s\"We claim that cast from $fromType to $toType is not compatible \" +\n                        \"with Spark but the test is not ignored\")\n                  }\n              }\n            }\n          } else if (testExists) {\n            fail(s\"Found test for cast that Spark does not support: $expectedTestName\")\n          }\n        }\n      }\n    }\n\n    assertTestsExist(CometCast.supportedTypes, CometCast.supportedTypes)\n  }\n\n  // CAST from BooleanType\n\n  test(\"cast BooleanType to ByteType\") {\n    castTest(generateBools(), DataTypes.ByteType)\n  }\n\n  test(\"cast BooleanType to ShortType\") {\n    castTest(generateBools(), DataTypes.ShortType)\n  }\n\n  test(\"cast BooleanType to IntegerType\") {\n    castTest(generateBools(), DataTypes.IntegerType)\n  }\n\n  test(\"cast BooleanType to LongType\") {\n    castTest(generateBools(), DataTypes.LongType)\n  }\n\n  test(\"cast BooleanType to FloatType\") {\n    castTest(generateBools(), DataTypes.FloatType)\n  }\n\n  test(\"cast BooleanType to DoubleType\") {\n    castTest(generateBools(), DataTypes.DoubleType)\n  }\n\n  test(\"cast BooleanType to DecimalType(10,2)\") {\n    castTest(generateBools(), DataTypes.createDecimalType(10, 2))\n  }\n\n  test(\"cast BooleanType to DecimalType(14,4)\") {\n    castTest(generateBools(), DataTypes.createDecimalType(14, 4))\n  }\n\n  test(\"cast BooleanType to DecimalType(30,0)\") {\n    
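// precision 30 exceeds the 18 digits that fit in a 64-bit long, so this likely exercises\n    // a wider (128-bit) decimal representation\n    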
castTest(generateBools(), DataTypes.createDecimalType(30, 0))\n  }\n\n  test(\"cast BooleanType to StringType\") {\n    castTest(generateBools(), DataTypes.StringType)\n  }\n\n  test(\"cast BooleanType to TimestampType\") {\n    // Spark does not support ANSI or Try mode for Boolean to Timestamp casts\n    castTest(generateBools(), DataTypes.TimestampType, testAnsi = false, testTry = false)\n  }\n\n  // CAST from ByteType\n\n  test(\"cast ByteType to BooleanType\") {\n    castTest(\n      generateBytes(),\n      DataTypes.BooleanType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ByteType to ShortType\") {\n    castTest(\n      generateBytes(),\n      DataTypes.ShortType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ByteType to IntegerType\") {\n    castTest(\n      generateBytes(),\n      DataTypes.IntegerType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ByteType to LongType\") {\n    castTest(\n      generateBytes(),\n      DataTypes.LongType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ByteType to FloatType\") {\n    castTest(\n      generateBytes(),\n      DataTypes.FloatType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ByteType to DoubleType\") {\n    castTest(\n      generateBytes(),\n      DataTypes.DoubleType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ByteType to DecimalType(10,2)\") {\n    castTest(\n      generateBytes(),\n      DataTypes.createDecimalType(10, 2),\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ByteType to StringType\") {\n    castTest(\n      generateBytes(),\n      DataTypes.StringType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ByteType to BinaryType\") {\n    //    Spark does not support ANSI or Try mode\n    castTest(\n      generateBytes(),\n      DataTypes.BinaryType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes,\n      testAnsi = false,\n      testTry = false)\n  }\n\n  test(\"cast ByteType to TimestampType\") {\n    compatibleTimezones.foreach { tz =>\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n        castTest(\n          generateBytes(),\n          DataTypes.TimestampType,\n          hasIncompatibleType = usingParquetExecWithIncompatTypes)\n      }\n    }\n  }\n\n  // CAST from ShortType\n\n  test(\"cast ShortType to BooleanType\") {\n    castTest(\n      generateShorts(),\n      DataTypes.BooleanType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ShortType to ByteType\") {\n    // https://github.com/apache/datafusion-comet/issues/311\n    castTest(\n      generateShorts(),\n      DataTypes.ByteType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ShortType to IntegerType\") {\n    castTest(\n      generateShorts(),\n      DataTypes.IntegerType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ShortType to LongType\") {\n    castTest(\n      generateShorts(),\n      DataTypes.LongType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ShortType to FloatType\") {\n    castTest(\n      generateShorts(),\n      DataTypes.FloatType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ShortType to DoubleType\") 
{\n    castTest(\n      generateShorts(),\n      DataTypes.DoubleType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ShortType to DecimalType(10,2)\") {\n    castTest(\n      generateShorts(),\n      DataTypes.createDecimalType(10, 2),\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ShortType to StringType\") {\n    castTest(\n      generateShorts(),\n      DataTypes.StringType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes)\n  }\n\n  test(\"cast ShortType to BinaryType\") {\n    //    Spark does not support ANSI or Try mode\n    castTest(\n      generateShorts(),\n      DataTypes.BinaryType,\n      hasIncompatibleType = usingParquetExecWithIncompatTypes,\n      testAnsi = false,\n      testTry = false)\n  }\n\n  test(\"cast ShortType to TimestampType\") {\n    compatibleTimezones.foreach { tz =>\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n        castTest(\n          generateShorts(),\n          DataTypes.TimestampType,\n          hasIncompatibleType = usingParquetExecWithIncompatTypes)\n      }\n    }\n  }\n\n  // CAST from IntegerType\n\n  test(\"cast IntegerType to BooleanType\") {\n    castTest(generateInts(), DataTypes.BooleanType)\n  }\n\n  test(\"cast IntegerType to ByteType\") {\n    // https://github.com/apache/datafusion-comet/issues/311\n    castTest(generateInts(), DataTypes.ByteType)\n  }\n\n  test(\"cast IntegerType to ShortType\") {\n    // https://github.com/apache/datafusion-comet/issues/311\n    castTest(generateInts(), DataTypes.ShortType)\n  }\n\n  test(\"cast IntegerType to LongType\") {\n    castTest(generateInts(), DataTypes.LongType)\n  }\n\n  test(\"cast IntegerType to FloatType\") {\n    castTest(generateInts(), DataTypes.FloatType)\n  }\n\n  test(\"cast IntegerType to DoubleType\") {\n    castTest(generateInts(), DataTypes.DoubleType)\n  }\n\n  test(\"cast IntegerType to DecimalType(10,2)\") {\n    castTest(generateInts(), DataTypes.createDecimalType(10, 2))\n  }\n\n  test(\"cast IntegerType to DecimalType(10,2) overflow check\") {\n    val intToDecimal10OverflowValues =\n      Seq(Int.MinValue, -100000000, -100000001, 100000000, 100000001, Int.MaxValue).toDF(\"a\")\n    castTest(intToDecimal10OverflowValues, DataTypes.createDecimalType(10, 2))\n  }\n\n  test(\"cast IntegerType to DecimalType check arbitrary scale and precision\") {\n    Seq(DecimalType.MAX_PRECISION, DecimalType.MAX_SCALE, 0, 10, 15)\n      .combinations(2)\n      // combinations returns a lazy iterator, so use foreach (map would never run the casts);\n      // skip combinations where the scale would exceed the precision\n      .filter(c => c.head >= c.last)\n      .foreach { c =>\n        castTest(generateInts(), DataTypes.createDecimalType(c.head, c.last))\n      }\n  }\n\n  test(\"cast IntegerType to StringType\") {\n    castTest(generateInts(), DataTypes.StringType)\n  }\n\n  test(\"cast IntegerType to BinaryType\") {\n    //    Spark does not support ANSI or Try mode\n    castTest(generateInts(), DataTypes.BinaryType, testAnsi = false, testTry = false)\n  }\n\n  test(\"cast IntegerType to TimestampType\") {\n    compatibleTimezones.foreach { tz =>\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n        castTest(generateInts(), DataTypes.TimestampType)\n      }\n    }\n  }\n\n  // CAST from LongType\n\n  test(\"cast LongType to BooleanType\") {\n    castTest(generateLongs(), DataTypes.BooleanType)\n  }\n\n  test(\"cast LongType to ByteType\") {\n    // https://github.com/apache/datafusion-comet/issues/311\n    castTest(generateLongs(), DataTypes.ByteType)\n  }\n\n  test(\"cast LongType to ShortType\") {\n    // https://github.com/apache/datafusion-comet/issues/311\n    
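// in LEGACY (non-ANSI) mode Spark narrows an overflowing long like Java's (short) cast,\n    // truncating to the low-order 16 bits\n    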
castTest(generateLongs(), DataTypes.ShortType)\n  }\n\n  test(\"cast LongType to IntegerType\") {\n    // https://github.com/apache/datafusion-comet/issues/311\n    castTest(generateLongs(), DataTypes.IntegerType)\n  }\n\n  test(\"cast LongType to FloatType\") {\n    castTest(generateLongs(), DataTypes.FloatType)\n  }\n\n  test(\"cast LongType to DoubleType\") {\n    castTest(generateLongs(), DataTypes.DoubleType)\n  }\n\n  test(\"cast LongType to DecimalType(10,2)\") {\n    castTest(generateLongs(), DataTypes.createDecimalType(10, 2))\n  }\n\n  test(\"cast LongType to StringType\") {\n    castTest(generateLongs(), DataTypes.StringType)\n  }\n\n  test(\"cast LongType to BinaryType\") {\n    //    Spark does not support ANSI or Try mode\n    castTest(generateLongs(), DataTypes.BinaryType, testAnsi = false, testTry = false)\n  }\n\n  test(\"cast LongType to TimestampType\") {\n    // Cast back to long avoids java.sql.Timestamp overflow during collect() for extreme values\n    compatibleTimezones.foreach { tz =>\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n        withTable(\"t1\") {\n          generateLongs().write.saveAsTable(\"t1\")\n          val df = spark.sql(\"select a, cast(cast(a as timestamp) as long) from t1\")\n          checkSparkAnswerAndOperator(df)\n        }\n      }\n    }\n  }\n\n  // CAST from FloatType\n\n  test(\"cast FloatType to BooleanType\") {\n    castTest(generateFloats(), DataTypes.BooleanType)\n  }\n\n  test(\"cast FloatType to ByteType\") {\n    castTest(generateFloats(), DataTypes.ByteType)\n  }\n\n  test(\"cast FloatType to ShortType\") {\n    castTest(generateFloats(), DataTypes.ShortType)\n  }\n\n  test(\"cast FloatType to IntegerType\") {\n    castTest(generateFloats(), DataTypes.IntegerType)\n  }\n\n  test(\"cast FloatType to LongType\") {\n    castTest(generateFloats(), DataTypes.LongType)\n  }\n\n  test(\"cast FloatType to DoubleType\") {\n    castTest(generateFloats(), DataTypes.DoubleType)\n  }\n\n  ignore(\"cast FloatType to DecimalType(10,2)\") {\n    // https://github.com/apache/datafusion-comet/issues/1371\n    castTest(generateFloats(), DataTypes.createDecimalType(10, 2))\n  }\n\n  test(\"cast FloatType to DecimalType(10,2) - allow incompat\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\") {\n      castTest(generateFloats(), DataTypes.createDecimalType(10, 2))\n    }\n  }\n\n  test(\"cast FloatType to StringType\") {\n    // https://github.com/apache/datafusion-comet/issues/312\n    val r = new Random(0)\n    val values = Seq(\n      Float.MaxValue,\n      Float.MinValue,\n      Float.NaN,\n      Float.PositiveInfinity,\n      Float.NegativeInfinity,\n      1.0f,\n      -1.0f,\n      Short.MinValue.toFloat,\n      Short.MaxValue.toFloat,\n      -0.0f,\n      0.0f) ++\n      Range(0, dataSize).map(_ => r.nextFloat())\n    castTest(withNulls(values).toDF(\"a\"), DataTypes.StringType)\n  }\n\n  test(\"cast FloatType to TimestampType\") {\n    compatibleTimezones.foreach { tz =>\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n        // Use useDataFrameDiff to avoid collect() which fails on extreme timestamp values\n        castTest(generateFloats(), DataTypes.TimestampType, useDataFrameDiff = true)\n      }\n    }\n  }\n\n  // CAST from DoubleType\n\n  test(\"cast DoubleType to BooleanType\") {\n    castTest(generateDoubles(), DataTypes.BooleanType)\n  }\n\n  test(\"cast DoubleType to ByteType\") {\n    castTest(generateDoubles(), DataTypes.ByteType)\n  }\n\n  test(\"cast 
DoubleType to ShortType\") {\n    castTest(generateDoubles(), DataTypes.ShortType)\n  }\n\n  test(\"cast DoubleType to IntegerType\") {\n    castTest(generateDoubles(), DataTypes.IntegerType)\n  }\n\n  test(\"cast DoubleType to LongType\") {\n    castTest(generateDoubles(), DataTypes.LongType)\n  }\n\n  test(\"cast DoubleType to FloatType\") {\n    castTest(generateDoubles(), DataTypes.FloatType)\n  }\n\n  ignore(\"cast DoubleType to DecimalType(10,2)\") {\n    // https://github.com/apache/datafusion-comet/issues/1371\n    castTest(generateDoubles(), DataTypes.createDecimalType(10, 2))\n  }\n\n  test(\"cast DoubleType to DecimalType(10,2) - allow incompat\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\") {\n      castTest(generateDoubles(), DataTypes.createDecimalType(10, 2))\n    }\n  }\n\n  test(\"cast DoubleType to StringType\") {\n    // https://github.com/apache/datafusion-comet/issues/312\n    val r = new Random(0)\n    val values = Seq(\n      Double.MaxValue,\n      Double.MinValue,\n      Double.NaN,\n      Double.PositiveInfinity,\n      Double.NegativeInfinity,\n      1.0d,\n      -1.0d,\n      Int.MinValue.toDouble,\n      Int.MaxValue.toDouble,\n      -0.0d,\n      0.0d) ++\n      Range(0, dataSize).map(_ => r.nextDouble())\n    castTest(withNulls(values).toDF(\"a\"), DataTypes.StringType)\n  }\n\n  test(\"cast DoubleType to TimestampType\") {\n    compatibleTimezones.foreach { tz =>\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n        // Use useDataFrameDiff to avoid collect() which fails on extreme timestamp values\n        castTest(generateDoubles(), DataTypes.TimestampType, useDataFrameDiff = true)\n      }\n    }\n  }\n\n  // CAST from DecimalType(10,2)\n\n  test(\"cast DecimalType(10,2) to BooleanType\") {\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.BooleanType)\n  }\n\n  test(\"cast DecimalType(10,2) to ByteType\") {\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.ByteType)\n  }\n\n  test(\"cast DecimalType(10,2) to ShortType\") {\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.ShortType)\n    castTest(\n      generateDecimalsPrecision10Scale2(Seq(BigDecimal(\"-96833550.07\"))),\n      DataTypes.ShortType)\n  }\n\n  test(\"cast DecimalType(10,2) to IntegerType\") {\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.IntegerType)\n  }\n\n  test(\"cast DecimalType(10,2) to LongType\") {\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.LongType)\n  }\n\n  test(\"cast DecimalType(10,2) to FloatType\") {\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.FloatType)\n  }\n\n  test(\"cast DecimalType(10,2) to DoubleType\") {\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.DoubleType)\n  }\n\n  // CAST from DecimalType(15,5): fractional truncation for int/long; int overflow possible\n\n  test(\"cast DecimalType(15,5) to IntegerType\") {\n    castTest(generateDecimalsPrecision15Scale5(), DataTypes.IntegerType)\n  }\n\n  test(\"cast DecimalType(15,5) to LongType\") {\n    castTest(generateDecimalsPrecision15Scale5(), DataTypes.LongType)\n  }\n\n  test(\"cast DecimalType(15,5) to BooleanType\") {\n    castTest(generateDecimalsPrecision15Scale5(), DataTypes.BooleanType)\n  }\n\n  // CAST from DecimalType(20,0): large integers with no fractional part; long overflow possible\n\n  test(\"cast DecimalType(20,0) to IntegerType\") {\n    castTest(generateDecimalsPrecision20Scale0(), DataTypes.IntegerType)\n  }\n\n  test(\"cast DecimalType(20,0) to 
LongType\") {\n    castTest(generateDecimalsPrecision20Scale0(), DataTypes.LongType)\n  }\n\n  test(\"cast DecimalType(20,0) to BooleanType\") {\n    castTest(generateDecimalsPrecision20Scale0(), DataTypes.BooleanType)\n  }\n\n  test(\"cast DecimalType(38,18) to ByteType\") {\n    castTest(generateDecimalsPrecision38Scale18(), DataTypes.ByteType)\n  }\n\n  test(\"cast DecimalType(38,18) to ShortType\") {\n    castTest(generateDecimalsPrecision38Scale18(), DataTypes.ShortType)\n    castTest(\n      generateDecimalsPrecision38Scale18(Seq(BigDecimal(\"-99999999999999999999.07\"))),\n      DataTypes.ShortType)\n  }\n\n  test(\"cast DecimalType(38,18) to IntegerType\") {\n    castTest(generateDecimalsPrecision38Scale18(), DataTypes.IntegerType)\n    castTest(\n      generateDecimalsPrecision38Scale18(Seq(BigDecimal(\"-99999999999999999999.07\"))),\n      DataTypes.IntegerType)\n  }\n\n  test(\"cast DecimalType(38,18) to LongType\") {\n    castTest(generateDecimalsPrecision38Scale18(), DataTypes.LongType)\n    castTest(\n      generateDecimalsPrecision38Scale18(Seq(BigDecimal(\"-99999999999999999999.07\"))),\n      DataTypes.LongType)\n  }\n\n  test(\"cast DecimalType(38,18) to FloatType\") {\n    castTest(generateDecimalsPrecision38Scale18(), DataTypes.FloatType)\n    // small fractions exercise the i128 / 10^scale precision path\n    castTest(\n      generateDecimalsPrecision38Scale18(\n        Seq(\n          BigDecimal(\"0.000000000000000001\"),\n          BigDecimal(\"-0.000000000000000001\"),\n          BigDecimal(\"1.500000000000000000\"),\n          BigDecimal(\"123456789.123456789\"))),\n      DataTypes.FloatType)\n  }\n\n  test(\"cast DecimalType(38,18) to DoubleType\") {\n    castTest(generateDecimalsPrecision38Scale18(), DataTypes.DoubleType)\n    // small fractions exercise the i128 / 10^scale precision path\n    castTest(\n      generateDecimalsPrecision38Scale18(\n        Seq(\n          BigDecimal(\"0.000000000000000001\"),\n          BigDecimal(\"-0.000000000000000001\"),\n          BigDecimal(\"1.500000000000000000\"),\n          BigDecimal(\"123456789.123456789\"))),\n      DataTypes.DoubleType)\n  }\n\n  test(\"cast DecimalType(38,18) to BooleanType\") {\n    castTest(generateDecimalsPrecision38Scale18(), DataTypes.BooleanType)\n    // tiny non-zero values must be true; only exact zero is false\n    castTest(\n      generateDecimalsPrecision38Scale18(\n        Seq(BigDecimal(\"0.000000000000000001\"), BigDecimal(\"-0.000000000000000001\"))),\n      DataTypes.BooleanType)\n  }\n\n  test(\"cast DecimalType(10,2) to StringType\") {\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.StringType)\n  }\n\n  test(\"cast DecimalType(38,18) to StringType\") {\n    castTest(generateDecimalsPrecision38Scale18(), DataTypes.StringType)\n  }\n\n  test(\"cast DecimalType with negative scale to StringType\") {\n    // Negative-scale decimals are a legacy Spark feature gated on\n    // spark.sql.legacy.allowNegativeScaleOfDecimal=true. Spark LEGACY cast uses Java's\n    // BigDecimal.toString() which produces scientific notation for negative-scale values\n    // (e.g. 
12300 stored as Decimal(7,-2) with unscaled=123 → \"1.23E+4\").\n    // CometCast.canCastToString checks the config and returns Incompatible when it is false.\n    //\n    // Parquet does not support negative-scale decimals, so the test data is kept in memory\n    // (no Parquet round-trip) to avoid schema coercion.\n\n    // With the config enabled, localTableScan is also enabled so Comet can take over the\n    // full in-memory plan and execute the cast natively.\n    withSQLConf(\n      \"spark.sql.legacy.allowNegativeScaleOfDecimal\" -> \"true\",\n      \"spark.comet.exec.localTableScan.enabled\" -> \"true\") {\n      val dfNeg2 = Seq(\n        Some(BigDecimal(\"0\")),\n        Some(BigDecimal(\"100\")),\n        Some(BigDecimal(\"12300\")),\n        Some(BigDecimal(\"-99900\")),\n        Some(BigDecimal(\"9999900\")),\n        None)\n        .toDF(\"b\")\n        .withColumn(\"a\", col(\"b\").cast(DecimalType(7, -2)))\n        .drop(\"b\")\n        .select(col(\"a\").cast(DataTypes.StringType).as(\"result\"))\n      checkSparkAnswerAndOperator(dfNeg2)\n\n      val dfNeg4 = Seq(\n        Some(BigDecimal(\"0\")),\n        Some(BigDecimal(\"10000\")),\n        Some(BigDecimal(\"120000\")),\n        Some(BigDecimal(\"-9990000\")),\n        None)\n        .toDF(\"b\")\n        .withColumn(\"a\", col(\"b\").cast(DecimalType(7, -4)))\n        .drop(\"b\")\n        .select(col(\"a\").cast(DataTypes.StringType).as(\"result\"))\n      checkSparkAnswerAndOperator(dfNeg4)\n    }\n\n    // With config disabled (default): the SQL parser rejects negative scale, so\n    // negative-scale decimals cannot be created through normal SQL paths.\n    // CometCast.isSupported returns Incompatible for this case, ensuring Comet does\n    // not attempt native execution if such a value ever reaches the planner.\n    // Note: DecimalType(7, -2) must be constructed while config=true, because the\n    // constructor itself checks the config and throws if negative scale is disallowed.\n    var negScaleType: DecimalType = null\n    withSQLConf(\"spark.sql.legacy.allowNegativeScaleOfDecimal\" -> \"true\") {\n      negScaleType = DecimalType(7, -2)\n    }\n    withSQLConf(\"spark.sql.legacy.allowNegativeScaleOfDecimal\" -> \"false\") {\n      assert(\n        CometCast.isSupported(\n          negScaleType,\n          DataTypes.StringType,\n          None,\n          CometEvalMode.LEGACY) == Incompatible())\n    }\n  }\n\n  test(\"cast DecimalType(10,2) to TimestampType\") {\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.TimestampType)\n  }\n\n  test(\"cast DecimalType(38,10) to TimestampType\") {\n    castTest(generateDecimalsPrecision38Scale18(), DataTypes.TimestampType)\n  }\n\n  // CAST from StringType\n\n  test(\"cast StringType to BooleanType\") {\n    val testValues =\n      (Seq(\"TRUE\", \"True\", \"true\", \"FALSE\", \"False\", \"false\", \"1\", \"0\", \"\", null) ++\n        gen.generateStrings(dataSize, \"truefalseTRUEFALSEyesno10\" + whitespaceChars, 8)).toDF(\"a\")\n    castTest(testValues, DataTypes.BooleanType)\n  }\n\n  private val castStringToIntegralInputs: Seq[String] = Seq(\n    \"\",\n    \".\",\n    \"+\",\n    \"-\",\n    \"+.\",\n    \"-.\",\n    \"-0\",\n    \"+1\",\n    \"-1\",\n    \".2\",\n    \"-.2\",\n    \"1e1\",\n    \"1.1d\",\n    \"1.1f\",\n    Byte.MinValue.toString,\n    (Byte.MinValue.toShort - 1).toString,\n    Byte.MaxValue.toString,\n    
(Byte.MaxValue.toShort + 1).toString,\n    Short.MinValue.toString,\n    (Short.MinValue.toInt - 1).toString,\n    Short.MaxValue.toString,\n    (Short.MaxValue.toInt + 1).toString,\n    Int.MinValue.toString,\n    (Int.MinValue.toLong - 1).toString,\n    Int.MaxValue.toString,\n    (Int.MaxValue.toLong + 1).toString,\n    Long.MinValue.toString,\n    Long.MaxValue.toString,\n    \"-9223372036854775809\", // Long.MinValue - 1\n    \"9223372036854775808\" // Long.MaxValue + 1\n  )\n\n  test(\"cast StringType to ByteType\") {\n    // test with hand-picked values\n    castTest(castStringToIntegralInputs.toDF(\"a\"), DataTypes.ByteType)\n    // fuzz test\n    castTest(gen.generateStrings(dataSize, numericPattern, 4).toDF(\"a\"), DataTypes.ByteType)\n  }\n\n  test(\"cast StringType to ShortType\") {\n    // test with hand-picked values\n    castTest(castStringToIntegralInputs.toDF(\"a\"), DataTypes.ShortType)\n    // fuzz test\n    castTest(gen.generateStrings(dataSize, numericPattern, 5).toDF(\"a\"), DataTypes.ShortType)\n  }\n\n  test(\"cast StringType to IntegerType\") {\n    // test with hand-picked values\n    castTest(castStringToIntegralInputs.toDF(\"a\"), DataTypes.IntegerType)\n    // fuzz test\n    castTest(gen.generateStrings(dataSize, numericPattern, 8).toDF(\"a\"), DataTypes.IntegerType)\n  }\n\n  test(\"cast StringType to LongType\") {\n    // test with hand-picked values\n    castTest(castStringToIntegralInputs.toDF(\"a\"), DataTypes.LongType)\n    // fuzz test\n    castTest(gen.generateStrings(dataSize, numericPattern, 8).toDF(\"a\"), DataTypes.LongType)\n  }\n\n  test(\"cast StringType to DoubleType\") {\n    // https://github.com/apache/datafusion-comet/issues/326\n    castTest(gen.generateStrings(dataSize, numericPattern, 8).toDF(\"a\"), DataTypes.DoubleType)\n  }\n\n  test(\"cast StringType to FloatType\") {\n    castTest(gen.generateStrings(dataSize, numericPattern, 8).toDF(\"a\"), DataTypes.FloatType)\n  }\n\n  val specialValues: Seq[String] = Seq(\n    \"1.5f\",\n    \"1.5F\",\n    \"2.0d\",\n    \"2.0D\",\n    \"3.14159265358979d\",\n    \"inf\",\n    \"Inf\",\n    \"INF\",\n    \"+inf\",\n    \"+Infinity\",\n    \"-inf\",\n    \"-Infinity\",\n    \"NaN\",\n    \"nan\",\n    \"NAN\",\n    \"1.23e4\",\n    \"1.23E4\",\n    \"-1.23e-4\",\n    \"  123.456789  \",\n    \"0.0\",\n    \"-0.0\",\n    \"\",\n    \"xyz\",\n    null)\n\n  test(\"cast StringType to FloatType special values\") {\n    Seq(true, false).foreach { ansiMode =>\n      castTest(specialValues.toDF(\"a\"), DataTypes.FloatType, testAnsi = ansiMode)\n    }\n  }\n\n  test(\"cast StringType to DoubleType special values\") {\n    Seq(true, false).foreach { ansiMode =>\n      castTest(specialValues.toDF(\"a\"), DataTypes.DoubleType, testAnsi = ansiMode)\n    }\n  }\n\n  // This test exists so that the first `all cast combinations are covered` check passes\n  test(\"cast StringType to DecimalType(10,2)\") {\n    val values = gen.generateStrings(dataSize, numericPattern, 12).toDF(\"a\")\n    castTest(values, DataTypes.createDecimalType(10, 2), testAnsi = false)\n  }\n\n  test(\"cast StringType to DecimalType(10,2) fuzz\") {\n    val values = gen.generateStrings(dataSize, numericPattern, 12).toDF(\"a\")\n    Seq(true, false).foreach(ansiEnabled =>\n      castTest(values, DataTypes.createDecimalType(10, 2), testAnsi = ansiEnabled))\n  }\n\n  test(\"cast StringType to DecimalType(2,2)\") {\n    val values = gen.generateStrings(dataSize, numericPattern, 12).toDF(\"a\")\n    Seq(true, false).foreach(ansiEnabled =>\n      castTest(values, 
DataTypes.createDecimalType(2, 2), testAnsi = ansiEnabled))\n  }\n\n  test(\"cast StringType to DecimalType check that the right exception message is thrown\") {\n    val values = Seq(\"d11307\\n\").toDF(\"a\")\n    Seq(true, false).foreach(ansiEnabled =>\n      castTest(values, DataTypes.createDecimalType(2, 2), testAnsi = ansiEnabled))\n  }\n\n  test(\"cast StringType to DecimalType(2,2) check that the right exception is thrown\") {\n    val values = gen.generateInts(10000).map(\"    \" + _).toDF(\"a\")\n    Seq(true, false).foreach(ansiEnabled =>\n      castTest(values, DataTypes.createDecimalType(2, 2), testAnsi = ansiEnabled))\n  }\n\n  test(\"cast StringType to DecimalType(38,10) high precision - check 0 mantissa\") {\n    val values = Seq(\"0e31\", \"000e3375\", \"0e40\", \"0E+695\", \"0e5887677\").toDF(\"a\")\n    Seq(true, false).foreach(ansiEnabled =>\n      castTest(values, DataTypes.createDecimalType(38, 10), testAnsi = ansiEnabled))\n  }\n\n  test(\"cast StringType to DecimalType(38,10) high precision\") {\n    val values = gen.generateStrings(dataSize, numericPattern, 38).toDF(\"a\")\n    Seq(true, false).foreach(ansiEnabled =>\n      castTest(values, DataTypes.createDecimalType(38, 10), testAnsi = ansiEnabled))\n  }\n\n  test(\"cast StringType to DecimalType - null bytes and fullwidth digits\") {\n    // Spark trims null bytes (\\u0000) from both ends of a string before parsing,\n    // matching its whitespace-trim behavior. Null bytes in the middle produce NULL.\n    // Fullwidth digits (U+FF10-U+FF19) are treated as numeric equivalents to ASCII digits.\n    val values = Seq(\n      // null byte positions\n      \"123\\u0000\",\n      \"\\u0000123\",\n      \"12\\u00003\",\n      \"1\\u00002\\u00003\",\n      \"\\u0000\",\n      // null byte with decimal point\n      \"12\\u0000.45\",\n      \"12.\\u000045\",\n      // fullwidth digits (U+FF10-U+FF19)\n      \"１２３.４５\", // \"123.45\" in fullwidth\n      \"１２３\",\n      \"-１２３.４５\",\n      \"+１２３.４５\",\n      \"１２３.４５E２\",\n      // mixed fullwidth and ASCII\n      \"1２3.4５\",\n      null).toDF(\"a\")\n    castTest(values, DataTypes.createDecimalType(10, 2))\n  }\n\n  test(\"cast StringType to DecimalType(10,2) basic values\") {\n    val values = Seq(\n      \"123.45\",\n      \"-67.89\",\n      \"-67.89\",\n      \"-67.895\",\n      \"67.895\",\n      \"0.001\",\n      \"999.99\",\n      \"123.456\",\n      \"123.45D\",\n      \".5\",\n      \"5.\",\n      \"+123.45\",\n      \"  123.45  \",\n      \"inf\",\n      \"\",\n      \"abc\",\n      // values from https://github.com/apache/datafusion-comet/issues/325\n      \"0\",\n      \"1\",\n      \"+1.0\",\n      \".34\",\n      \"-10.0\",\n      \"4e7\",\n      null).toDF(\"a\")\n    Seq(true, false).foreach(ansiEnabled =>\n      castTest(values, DataTypes.createDecimalType(10, 2), testAnsi = ansiEnabled))\n  }\n\n  test(\"cast StringType to DecimalType scientific notation\") {\n    val values = Seq(\n      \"1.23E-5\",\n      \"1.23e10\",\n      \"1.23E+10\",\n      \"-1.23e-5\",\n      \"1e5\",\n      \"1E-2\",\n      \"-1.5e3\",\n      \"1.23E0\",\n      \"0e0\",\n      \"1.23e\",\n      \"e5\",\n      null).toDF(\"a\")\n    Seq(true, false).foreach(ansiEnabled =>\n      castTest(values, DataTypes.createDecimalType(23, 8), testAnsi = ansiEnabled))\n  }\n\n  test(\"cast StringType to BinaryType\") {\n    castTest(gen.generateStrings(dataSize, numericPattern, 8).toDF(\"a\"), DataTypes.BinaryType)\n  }\n\n  test(\"cast StringType to DateType\") {\n    val validDates = Seq(\n      
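// extreme years at the chrono::NaiveDate limits (see the year filter on fuzzDates below)\n      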
\"262142-01-01\",\n      \"262142-01-01 \",\n      \"262142-01-01T \",\n      \"262142-01-01T 123123123\",\n      \"-262143-12-31\",\n      \"-262143-12-31 \",\n      \"-262143-12-31T\",\n      \"-262143-12-31T \",\n      \"-262143-12-31T 123123123\",\n      \"2020\",\n      \"2020-1\",\n      \"2020-1-1\",\n      \"2020-01\",\n      \"2020-01-01\",\n      \"2020-1-01 \",\n      \"2020-01-1\",\n      \"02020-01-01\",\n      \"2020-01-01T\",\n      \"2020-10-01T  1221213\",\n      \"002020-01-01  \",\n      \"0002020-01-01  123344\",\n      \"-3638-5\")\n    val invalidDates = Seq(\n      \"0\",\n      \"202\",\n      \"3/\",\n      \"3/3/\",\n      \"3/3/2020\",\n      \"3#3#2020\",\n      \"2020-010-01\",\n      \"2020-10-010\",\n      \"2020-10-010T\",\n      \"--262143-12-31\",\n      \"--262143-12-31T 1234 \",\n      \"abc-def-ghi\",\n      \"abc-def-ghi jkl\",\n      \"2020-mar-20\",\n      \"not_a_date\",\n      \"T2\",\n      \"\\t\\n3938\\n8\",\n      \"8701\\t\",\n      \"\\n8757\",\n      \"7593\\t\\t\\t\",\n      \"\\t9374 \\n \",\n      \"\\n 9850 \\t\",\n      \"\\r\\n\\t9840\",\n      \"\\t9629\\n\",\n      \"\\r\\n 9629 \\r\\n\",\n      \"\\r\\n 962 \\r\\n\",\n      \"\\r\\n 62 \\r\\n\")\n\n    // due to limitations of NaiveDate we only support years between 262143 BC and 262142 AD\n    // Filter out strings where the leading digit sequence represents a year > 262142.\n    // All 5-digit years (10000-99999) are within bounds; only 6-digit years may exceed the limit.\n    val fuzzDates = gen\n      .generateStrings(dataSize, datePattern, 8)\n      .filterNot { str =>\n        val yearStr = str.trim.takeWhile(_.isDigit)\n        yearStr.length > 6 || (yearStr.length == 6 && yearStr.toInt > 262142)\n      }\n    castTest((validDates ++ invalidDates ++ fuzzDates).toDF(\"a\"), DataTypes.DateType)\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/2215\n  test(\"(ansi) cast error message should include SQL query\") {\n    withSQLConf(\n      SQLConf.ANSI_ENABLED.key -> \"true\",\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n      withTable(\"cast_error_msg\") {\n        // Create a table with string data using DataFrame API\n        Seq(\"a\").toDF(\"s\").write.format(\"parquet\").saveAsTable(\"cast_error_msg\")\n        // Try to cast invalid string to date - should throw exception with SQL context\n        val exception = intercept[Exception] {\n          sql(\"select cast(s as date) from cast_error_msg\").collect()\n        }\n        val errorMessage = exception.getMessage\n        // Verify error message contains the cast invalid input error\n        assert(\n          errorMessage.contains(\"CAST_INVALID_INPUT\") ||\n            errorMessage.contains(\"cannot be cast to\"),\n          s\"Error message should contain cast error: $errorMessage\")\n\n        assert(\n          errorMessage.contains(\"select cast(s as date) from cast_error_msg\"),\n          s\"Error message should contain SQL query text but got: $errorMessage\")\n      }\n    }\n  }\n\n  test(\"cast StringType to TimestampType - UTC\") {\n    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"UTC\") {\n      val values = Seq(\n        \"2020\",\n        \"2020-01\",\n        \"2020-01-01\",\n        \"2020-01-01T12\",\n        \"2020-01-01T12:34\",\n        \"2020-01-01T12:34:56\",\n        \"2020-01-01T12:34:56.123456\",\n        \"T2\",\n        \"-9?\",\n        \"0100\",\n        \"0100-01\",\n        \"0100-01-01\",\n        
\"0100-01-01T12\",\n        \"0100-01-01T12:34\",\n        \"0100-01-01T12:34:56\",\n        \"0100-01-01T12:34:56.123456\",\n        \"10000\",\n        \"10000-01\",\n        \"10000-01-01\",\n        \"10000-01-01T12\",\n        \"10000-01-01T12:34\",\n        \"10000-01-01T12:34:56\",\n        \"10000-01-01T12:34:56.123456\",\n        \"213170\",\n        \"213170-06\",\n        \"213170-06-15\",\n        \"213170-06-15T12\",\n        \"213170-06-15T12:34\",\n        \"213170-06-15T12:34:56\",\n        \"213170-06-15T12:34:56.123456\")\n      castTimestampTest(values.toDF(\"a\"), DataTypes.TimestampType, assertNative = true)\n    }\n  }\n\n  test(\"cast StringType to TimestampType\") {\n    withSQLConf((SQLConf.SESSION_LOCAL_TIMEZONE.key, \"UTC\")) {\n      val values = Seq(\"2020-01-01T12:34:56.123456\", \"T2\") ++ gen.generateStrings(\n        dataSize,\n        timestampPattern,\n        8)\n      castTest(values.toDF(\"a\"), DataTypes.TimestampType)\n    }\n  }\n\n  test(\"cast StringType to TimestampType - subset of supported values\") {\n    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"Asia/Kathmandu\") {\n      val values = Seq(\n        \"2020\",\n        \"2020-01\",\n        \"2020-01-01\",\n        \"2020-01-01T12\",\n        \"2020-01-01T12:34\",\n        \"2020-01-01T12:34:56\",\n        \"2020-01-01T12:34:56.123456\",\n        \"T2\",\n        \"-9?\",\n        \"100\",\n        \"100-01\",\n        \"100-01-01\",\n        \"100-01-01T12\",\n        \"100-01-01T12:34\",\n        \"100-01-01T12:34:56\",\n        \"100-01-01T12:34:56.123456\",\n        \"0100\",\n        \"0100-01\",\n        \"0100-01-01\",\n        \"0100-01-01T12\",\n        \"0100-01-01T12:34\",\n        \"0100-01-01T12:34:56\",\n        \"0100-01-01T12:34:56.123456\",\n        \"10000\",\n        \"10000-01\",\n        \"10000-01-01\",\n        \"10000-01-01T12\",\n        \"10000-01-01T12:34\",\n        \"10000-01-01T12:34:56\",\n        \"10000-01-01T12:34:56.123456\",\n        // Space separator\n        \"2020-01-01 12\",\n        \"2020-01-01 12:34\",\n        \"2020-01-01 12:34:56\",\n        \"2020-01-01 12:34:56.123456\",\n        // Z and offset suffixes\n        \"2020-01-01T12:34:56Z\",\n        \"2020-01-01T12:34:56+05:30\",\n        \"2020-01-01T12:34:56-08:00\",\n        // Single-digit hour offset (extract_offset_suffix supports ±H:MM)\n        \"2020-01-01T12:34:56+5:30\",\n        // T-prefixed time-only with colon\n        \"T12:34\",\n        \"T12:34:56\",\n        \"T12:34:56.123456\",\n        // Bare time-only (hour:minute)\n        \"12:34\",\n        \"12:34:56\",\n        // Negative year\n        \"-0001-01-01T12:34:56\")\n      castTimestampTest(values.toDF(\"a\"), DataTypes.TimestampType, assertNative = true)\n    }\n  }\n\n  test(\"cast StringType to TimestampType - T-hour-only whitespace handling\") {\n    // Spark 4.0+ changed whitespace handling for T-prefixed time-only strings:\n    //   - Spark 3.x: trims all whitespace first, so \" T2\" → valid timestamp\n    //   - Spark 4.0+: raw bytes are used; leading whitespace causes the T-check to fail → null\n    // Comet matches the behaviour of whichever Spark version is running (controlled via\n    // is_spark4_plus in the Cast proto, set from CometSparkSessionExtensions.isSpark40Plus).\n    // This test compares Comet output against Spark output for all cases — no hard-coded\n    // null/valid assertions needed.\n    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"UTC\") {\n      val values = Seq(\n        // 
Bare T-hour-only: no leading whitespace (valid on all versions)\n        \"T2\", // single-digit hour\n        \"T23\", // two-digit hour\n        \"T0\", // midnight\n        // Bare T-hour-only: trailing whitespace only (valid on all versions)\n        \"T2 \", // trailing space\n        \"T2\\t\", // trailing tab\n        \"T2\\n\", // trailing newline\n        // Bare T-hour-only: leading whitespace (null on 4.0+, valid on 3.x)\n        \" T2\", // leading space\n        \"\\tT2\", // leading tab\n        \"\\nT2\", // leading newline\n        \"\\r\\nT2\", // leading CRLF\n        \"\\t T2\", // tab then space\n        \"  T2\", // double space\n        // T-hour:minute with leading whitespace (null on 4.0+, valid on 3.x)\n        \" T2:30\",\n        \"\\tT2:30\",\n        \"\\nT2:30\",\n        // Full datetime: leading whitespace (valid on all versions — full trim applies)\n        \" 2020-01-01T12:34:56\",\n        \"\\t2020-01-01T12:34:56\",\n        \"\\n2020-01-01T12:34:56\",\n        \"\\r\\n2020-01-01T12:34:56\")\n      castTimestampTest(values.toDF(\"a\"), DataTypes.TimestampType, assertNative = true)\n    }\n  }\n\n  // CAST from BinaryType\n\n  test(\"cast BinaryType to StringType\") {\n    castTest(generateBinary(), DataTypes.StringType)\n  }\n\n  test(\"cast BinaryType to StringType - valid UTF-8 inputs\") {\n    castTest(gen.generateStrings(dataSize, numericPattern, 8).toDF(\"a\"), DataTypes.StringType)\n  }\n\n  // CAST from DateType\n\n  // Date to Boolean/Byte/Short/Int/Long/Float/Double/Decimal casts always return NULL\n  // in LEGACY mode. In ANSI and TRY mode, Spark throws AnalysisException at\n  // query parsing time. Hence, ANSI and Try mode are disabled in tests\n\n  test(\"cast DateType to BooleanType\") {\n    castTest(generateDates(), DataTypes.BooleanType, testAnsi = false, testTry = false)\n  }\n\n  test(\"cast DateType to ByteType\") {\n    castTest(generateDates(), DataTypes.ByteType, testAnsi = false, testTry = false)\n  }\n\n  test(\"cast DateType to ShortType\") {\n    castTest(generateDates(), DataTypes.ShortType, testAnsi = false, testTry = false)\n  }\n\n  test(\"cast DateType to IntegerType\") {\n    castTest(generateDates(), DataTypes.IntegerType, testAnsi = false, testTry = false)\n  }\n\n  test(\"cast DateType to LongType\") {\n    castTest(generateDates(), DataTypes.LongType, testAnsi = false, testTry = false)\n  }\n\n  test(\"cast DateType to FloatType\") {\n    castTest(generateDates(), DataTypes.FloatType, testAnsi = false, testTry = false)\n  }\n\n  test(\"cast DateType to DoubleType\") {\n    castTest(generateDates(), DataTypes.DoubleType, testAnsi = false, testTry = false)\n  }\n\n  test(\"cast DateType to DecimalType(10,2)\") {\n    castTest(\n      generateDates(),\n      DataTypes.createDecimalType(10, 2),\n      testAnsi = false,\n      testTry = false)\n  }\n\n  test(\"cast DateType to StringType\") {\n    // generateDates() covers: 1970-2027 sampled monthly, DST transition dates, and edge\n    // cases including \"999-01-01\" (year < 1000, zero-padded to \"0999-01-01\") and\n    // \"12345-01-01\" (year > 9999, no truncation). 
Date→String is timezone-independent.\n    castTest(generateDates(), DataTypes.StringType)\n  }\n\n  test(\"cast DateType to TimestampType\") {\n    val compatibleTimezones = Seq(\n      \"UTC\",\n      \"America/New_York\",\n      \"America/Chicago\",\n      \"America/Denver\",\n      \"America/Los_Angeles\",\n      \"Europe/London\",\n      \"Europe/Paris\",\n      \"Europe/Berlin\",\n      \"Asia/Tokyo\",\n      \"Asia/Shanghai\",\n      \"Asia/Singapore\",\n      \"Asia/Kolkata\",\n      \"Australia/Sydney\",\n      \"Pacific/Auckland\")\n    compatibleTimezones.map { tz =>\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n        castTest(generateDates(), DataTypes.TimestampType)\n      }\n    }\n  }\n\n  // CAST from TimestampType\n\n  ignore(\"cast TimestampType to BooleanType\") {\n    // Arrow error: Cast error: Casting from Timestamp(Microsecond, Some(\"America/Los_Angeles\")) to Boolean not supported\n    castTest(generateTimestamps(), DataTypes.BooleanType)\n  }\n\n  ignore(\"cast TimestampType to ByteType\") {\n    // https://github.com/apache/datafusion-comet/issues/352\n    // input: 2023-12-31 10:00:00.0, expected: 32, actual: null\n    castTest(generateTimestamps(), DataTypes.ByteType)\n  }\n\n  ignore(\"cast TimestampType to ShortType\") {\n    // https://github.com/apache/datafusion-comet/issues/352\n    // input: 2023-12-31 10:00:00.0, expected: -21472, actual: null\n    castTest(generateTimestamps(), DataTypes.ShortType)\n  }\n\n  ignore(\"cast TimestampType to IntegerType\") {\n    // https://github.com/apache/datafusion-comet/issues/352\n    // input: 2023-12-31 10:00:00.0, expected: 1704045600, actual: null\n    castTest(generateTimestamps(), DataTypes.IntegerType)\n  }\n\n  test(\"cast TimestampType to LongType\") {\n    castTest(generateTimestampsExtended(), DataTypes.LongType)\n  }\n\n  ignore(\"cast TimestampType to FloatType\") {\n    // https://github.com/apache/datafusion-comet/issues/352\n    // input: 2023-12-31 10:00:00.0, expected: 1.7040456E9, actual: 1.7040456E15\n    castTest(generateTimestamps(), DataTypes.FloatType)\n  }\n\n  ignore(\"cast TimestampType to DoubleType\") {\n    // https://github.com/apache/datafusion-comet/issues/352\n    // input: 2023-12-31 10:00:00.0, expected: 1.7040456E9, actual: 1.7040456E15\n    castTest(generateTimestamps(), DataTypes.DoubleType)\n  }\n\n  ignore(\"cast TimestampType to DecimalType(10,2)\") {\n    // https://github.com/apache/datafusion-comet/issues/1280\n    // Native cast invoked for unsupported cast from Timestamp(Microsecond, Some(\"Etc/UTC\")) to Decimal128(10, 2)\n    castTest(generateTimestamps(), DataTypes.createDecimalType(10, 2))\n  }\n\n  test(\"cast TimestampType to StringType\") {\n    castTest(generateTimestamps(), DataTypes.StringType)\n  }\n\n  test(\"cast TimestampType to DateType\") {\n    castTest(generateTimestamps(), DataTypes.DateType)\n  }\n\n  // Complex Types\n\n  test(\"cast StructType to StringType\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        withParquetTable(path.toString, \"tbl\") {\n          // primitives\n          checkSparkAnswerAndOperator(\n            \"SELECT CAST(struct(_1, _2, _3, _4, _5, _6, _7, _8) as string) FROM tbl\")\n          // the same field, add _11 and _12 again when\n          // https://github.com/apache/datafusion-comet/issues/2256 
is resolved\n          checkSparkAnswerAndOperator(\"SELECT CAST(struct(_11, _12) as string) FROM tbl\")\n          // decimals\n          // TODO add _16 when https://github.com/apache/datafusion-comet/issues/1068 is resolved\n          checkSparkAnswerAndOperator(\"SELECT CAST(struct(_15, _17) as string) FROM tbl\")\n          // dates & timestamps\n          checkSparkAnswerAndOperator(\"SELECT CAST(struct(_18, _19, _20) as string) FROM tbl\")\n          // named struct\n          checkSparkAnswerAndOperator(\n            \"SELECT CAST(named_struct('a', _1, 'b', _2) as string) FROM tbl\")\n          // nested struct\n          checkSparkAnswerAndOperator(\n            \"SELECT CAST(named_struct('a', named_struct('b', _1, 'c', _2)) as string) FROM tbl\")\n        }\n      }\n    }\n  }\n\n  test(\"cast StructType to StructType\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        withParquetTable(path.toString, \"tbl\") {\n          checkSparkAnswerAndOperator(\n            \"SELECT CAST(CASE WHEN _1 THEN struct(_1, _2, _3, _4) ELSE null END as \" +\n              \"struct<_1:string, _2:string, _3:string, _4:string>) FROM tbl\")\n        }\n      }\n    }\n  }\n\n  test(\"cast StructType to StructType with different names\") {\n    withTable(\"tab1\") {\n      sql(\"\"\"\n           |CREATE TABLE tab1 (s struct<a: string, b: string>)\n           |USING parquet\n         \"\"\".stripMargin)\n      sql(\"INSERT INTO TABLE tab1 SELECT named_struct('col1','1','col2','2')\")\n      checkSparkAnswerAndOperator(\n        \"SELECT CAST(s AS struct<field1:string, field2:string>) AS new_struct FROM tab1\")\n    }\n  }\n\n  test(\"cast between decimals with different precision and scale\") {\n    val rowData = Seq(\n      Row(BigDecimal(\"12345.6789\")),\n      Row(BigDecimal(\"9876.5432\")),\n      Row(BigDecimal(\"123.4567\")))\n    val df = spark.createDataFrame(\n      spark.sparkContext.parallelize(rowData),\n      StructType(Seq(StructField(\"a\", DataTypes.createDecimalType(10, 4)))))\n\n    castTest(df, DecimalType(6, 2))\n  }\n\n  test(\"cast between decimals with higher scale than source\") {\n    // cast from Decimal(10, 2) to Decimal(10, 4)\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.createDecimalType(10, 4))\n  }\n\n  test(\"cast StringType to DecimalType with negative scale (allowNegativeScaleOfDecimal)\") {\n    // With allowNegativeScaleOfDecimal=true, Spark allows DECIMAL(p, s) where s < 0.\n    // The value is rounded to the nearest 10^|s| — e.g. DECIMAL(10,-4) rounds to\n    // the nearest 10000. 
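Of the values below, 12500 is nearer to 10000 while 15000 is an exact tie.\n    // 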
This requires the legacy SQL parser config to be enabled.\n    withSQLConf(\"spark.sql.legacy.allowNegativeScaleOfDecimal\" -> \"true\") {\n      val values =\n        Seq(\"12500\", \"15000\", \"99990000\", \"-12500\", \"0\", \"0.001\", \"abc\", null).toDF(\"a\")\n      // testTry=false: try_cast uses SQL string interpolation (toType.sql → \"DECIMAL(10,-4)\")\n      // which the SQL parser rejects regardless of allowNegativeScaleOfDecimal.\n      castTest(values, DataTypes.createDecimalType(10, -4), testTry = false)\n    }\n  }\n\n  test(\"cast between decimals with negative scale\") {\n    // cast to negative scale\n    checkSparkAnswerMaybeThrows(\n      spark.sql(\"select a, cast(a as DECIMAL(10,-4)) from t order by a\")) match {\n      case (Some(expected: ParseException), Some(actual: ParseException)) =>\n        assert(\n          expected.getMessage.contains(\"PARSE_SYNTAX_ERROR\") && actual.getMessage.contains(\n            \"PARSE_SYNTAX_ERROR\"))\n      case (expected, actual) =>\n        fail(\n          s\"Expected Spark and Comet to throw ParseException, but got Spark=$expected and Comet=$actual\")\n    }\n  }\n\n  test(\"cast between decimals with zero scale\") {\n    // cast from Decimal(10, 2) to Decimal(10, 0)\n    castTest(generateDecimalsPrecision10Scale2(), DataTypes.createDecimalType(10, 0))\n  }\n\n  test(\"cast ArrayType to StringType\") {\n    val hasIncompatibleType = (dt: DataType) =>\n      if (CometConf.COMET_NATIVE_SCAN_IMPL.get() == \"auto\") {\n        true\n      } else {\n        !CometScanTypeChecker(CometConf.COMET_NATIVE_SCAN_IMPL.get())\n          .isTypeSupported(dt, \"a\", ListBuffer.empty)\n      }\n    Seq(\n      BooleanType,\n      StringType,\n      ByteType,\n      IntegerType,\n      LongType,\n      ShortType,\n      // FloatType,\n      // DoubleType,\n      // BinaryType\n      DecimalType(10, 2),\n      DecimalType(38, 18)).foreach { dt =>\n      val input = generateArrays(100, dt)\n      castTest(input, StringType, hasIncompatibleType = hasIncompatibleType(input.schema))\n    }\n  }\n\n  test(\"cast ArrayType to ArrayType\") {\n    val types = Seq(\n      BooleanType,\n      StringType,\n      ByteType,\n      IntegerType,\n      LongType,\n      ShortType,\n      FloatType,\n      DoubleType,\n      DecimalType(10, 2),\n      // DecimalType(38, 18) is excluded here: random data exposes a ~1 ULP difference between\n      // DataFusion's (i128 as f64) / 10^scale path and Spark's BigDecimal.doubleValue() for\n      // float/double casts; and extreme boundary values that would avoid the ULP issue overflow\n      // byte/short/int in ANSI mode, causing non-deterministic exception-message differences\n      // between Spark's row-at-a-time and Comet's vectorized execution. The individual scalar\n      // tests (cast DecimalType(38,18) to FloatType / DoubleType / BooleanType / etc.) 
already\n      // cover this type fully.\n      DateType,\n      TimestampType,\n      BinaryType)\n    testArrayCastMatrix(types, ArrayType(_), generateArrays(100, _))\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/3906\n  ignore(\"cast nested ArrayType to nested ArrayType\") {\n    val types = Seq(\n      BooleanType,\n      StringType,\n      ByteType,\n      IntegerType,\n      LongType,\n      ShortType,\n      FloatType,\n      DoubleType,\n      DecimalType(10, 2),\n      DecimalType(38, 18),\n      DateType,\n      TimestampType,\n      BinaryType)\n    testArrayCastMatrix(\n      types,\n      dt => ArrayType(ArrayType(dt)),\n      dt => generateArrays(100, ArrayType(dt)))\n  }\n\n  private def testArrayCastMatrix(\n      elementTypes: Seq[DataType],\n      wrapType: DataType => DataType,\n      generateInput: DataType => DataFrame): Unit = {\n    for (fromType <- elementTypes) {\n      val input = generateInput(fromType)\n      for (toType <- elementTypes) {\n        val name = s\"cast $fromType to $toType\"\n        val fromWrappedType = wrapType(fromType)\n        val toWrappedType = wrapType(toType)\n        if (fromType != toType &&\n          testNames.contains(name) &&\n          !tags\n            .get(name)\n            .exists(s => s.contains(\"org.scalatest.Ignore\")) &&\n          Cast.canCast(fromWrappedType, toWrappedType) &&\n          isCompatible(fromWrappedType, toWrappedType, CometEvalMode.LEGACY)) {\n          val ansiSupported =\n            isCompatible(fromWrappedType, toWrappedType, CometEvalMode.ANSI)\n          val trySupported =\n            isCompatible(fromWrappedType, toWrappedType, CometEvalMode.TRY)\n          castTest(input, toWrappedType, testAnsi = ansiSupported, testTry = trySupported)\n        }\n      }\n    }\n  }\n\n  private def isCompatible(\n      fromType: DataType,\n      toType: DataType,\n      evalMode: CometEvalMode.Value): Boolean =\n    CometCast.isSupported(fromType, toType, None, evalMode) match {\n      case Compatible(_) => true\n      case _ => false\n    }\n\n  private def generateFloats(): DataFrame = {\n    withNulls(gen.generateFloats(dataSize)).toDF(\"a\")\n  }\n\n  private def generateSafeFloatValues(): Seq[Float] = {\n    Seq(-123456.75f, -1.0f, 0.0f, 1.0f, 123456.75f)\n  }\n\n  private def generateDoubles(): DataFrame = {\n    withNulls(gen.generateDoubles(dataSize)).toDF(\"a\")\n  }\n\n  private def generateSafeDoubleValues(): Seq[Double] = {\n    Seq(-123456.75d, -1.0d, 0.0d, 1.0d, 123456.75d)\n  }\n\n  private def generateBools(): DataFrame = {\n    withNulls(Seq(true, false)).toDF(\"a\")\n  }\n\n  private def generateBytes(): DataFrame = {\n    withNulls(gen.generateBytes(dataSize)).toDF(\"a\")\n  }\n\n  private def generateShorts(): DataFrame = {\n    withNulls(gen.generateShorts(dataSize)).toDF(\"a\")\n  }\n\n  private def generateInts(): DataFrame = {\n    withNulls(gen.generateInts(dataSize)).toDF(\"a\")\n  }\n\n  private def generateLongs(): DataFrame = {\n    withNulls(gen.generateLongs(dataSize)).toDF(\"a\")\n  }\n\n  private def generateSafeLongValues(): Seq[Long] = {\n    Seq(-123456789L, -1L, 0L, 1L, 123456789L)\n  }\n\n  private def generateArrays(rowNum: Int, elementType: DataType): DataFrame = {\n    import scala.collection.JavaConverters._\n    val schema = StructType(Seq(StructField(\"a\", ArrayType(elementType), true)))\n    def buildRows(values: Seq[Any]): Seq[Row] = {\n      Range(0, rowNum).map { i =>\n        Row(\n          Seq[Any](\n            values(i % values.length),\n    
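        // every third row gets a null middle element\n    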
        if (i % 3 == 0) null else values((i + 1) % values.length),\n            values((i + 2) % values.length)))\n      }\n    }\n\n    def withEdgeCaseRows(generatedRows: Seq[Row]): Seq[Row] = {\n      val sampleValue =\n        generatedRows.find(_.get(0) != null).flatMap(_.getSeq[Any](0).headOption).orNull\n      Seq(Row(Seq(sampleValue, null, sampleValue)), Row(Seq.empty[Any]), Row(null)) ++\n        generatedRows\n    }\n\n    elementType match {\n      case DateType =>\n        val stringSchema = StructType(Seq(StructField(\"a\", ArrayType(StringType), true)))\n        spark\n          .createDataFrame(\n            withEdgeCaseRows(buildRows(generateDateLiterals())).asJava,\n            stringSchema)\n          .select(col(\"a\").cast(ArrayType(DateType)).as(\"a\"))\n      case TimestampType =>\n        val stringSchema = StructType(Seq(StructField(\"a\", ArrayType(StringType), true)))\n        spark\n          .createDataFrame(\n            withEdgeCaseRows(buildRows(generateTimestampLiterals())).asJava,\n            stringSchema)\n          .select(col(\"a\").cast(ArrayType(TimestampType)).as(\"a\"))\n      case FloatType =>\n        spark.createDataFrame(\n          withEdgeCaseRows(buildRows(generateSafeFloatValues())).asJava,\n          schema)\n      case DoubleType =>\n        spark.createDataFrame(\n          withEdgeCaseRows(buildRows(generateSafeDoubleValues())).asJava,\n          schema)\n      case LongType =>\n        spark.createDataFrame(\n          withEdgeCaseRows(buildRows(generateSafeLongValues())).asJava,\n          schema)\n      case BinaryType =>\n        val values = generateBinary().collect().map(_.getAs[Array[Byte]](\"a\")).toSeq\n        spark.createDataFrame(withEdgeCaseRows(buildRows(values)).asJava, schema)\n      case _ =>\n        spark.createDataFrame(withEdgeCaseRows(gen.generateRows(rowNum, schema)).asJava, schema)\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/2038\n  test(\"test implicit cast to dictionary with case when and dictionary type\") {\n    withSQLConf(\"parquet.enable.dictionary\" -> \"true\") {\n      withParquetTable((0 until 10000).map(i => (i < 5000, \"one\")), \"tbl\") {\n        val df = spark.sql(\"select case when (_1 = true) then _2 else '' end as aaa from tbl\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  private def generateDecimalsPrecision10Scale2(): DataFrame = {\n    val values = Seq(\n      BigDecimal(\"-99999999.999\"),\n      BigDecimal(\"-123456.789\"),\n      // Short Min\n      BigDecimal(\"-32768.678\"),\n      BigDecimal(\"-32767.123\"),\n      // Byte Min\n      BigDecimal(\"-128.12312\"),\n      BigDecimal(\"-127.123\"),\n      BigDecimal(\"0.0\"),\n      // Byte Max\n      BigDecimal(\"127.123\"),\n      BigDecimal(\"128.12312\"),\n      // Short Max\n      BigDecimal(\"32767.122\"),\n      BigDecimal(\"32768.678\"),\n      BigDecimal(\"123456.789\"),\n      BigDecimal(\"99999999.999\"))\n    generateDecimalsPrecision10Scale2(values)\n  }\n\n  private def generateDecimalsPrecision10Scale2(values: Seq[BigDecimal]): DataFrame = {\n    withNulls(values).toDF(\"b\").withColumn(\"a\", col(\"b\").cast(DecimalType(10, 2))).drop(\"b\")\n  }\n\n  private def generateDecimalsPrecision15Scale5(): DataFrame = {\n    val values = Seq(\n      // just above Int.MAX_VALUE (2147483647) — overflows IntegerType\n      BigDecimal(\"2147483648.12345\"),\n      BigDecimal(\"-2147483649.12345\"),\n      // fits in both int and long; exercises fractional truncation\n      
BigDecimal(\"123.45678\"),\n      BigDecimal(\"-123.45678\"),\n      // tiny non-zero — boolean must be true\n      BigDecimal(\"0.00001\"),\n      BigDecimal(\"-0.00001\"),\n      BigDecimal(\"0.00000\"))\n    withNulls(values).toDF(\"b\").withColumn(\"a\", col(\"b\").cast(DecimalType(15, 5))).drop(\"b\")\n  }\n\n  private def generateDecimalsPrecision20Scale0(): DataFrame = {\n    val values = Seq(\n      // just above Long.MAX_VALUE (9223372036854775807) — overflows LongType\n      BigDecimal(\"9223372036854775808\"),\n      BigDecimal(\"-9223372036854775809\"),\n      // overflows IntegerType, fits in LongType\n      BigDecimal(\"2147483648\"),\n      BigDecimal(\"-2147483649\"),\n      BigDecimal(\"1\"),\n      BigDecimal(\"-1\"),\n      BigDecimal(\"0\"))\n    withNulls(values).toDF(\"b\").withColumn(\"a\", col(\"b\").cast(DecimalType(20, 0))).drop(\"b\")\n  }\n\n  private def generateDecimalsPrecision38Scale18(): DataFrame = {\n    val values = Seq(\n      BigDecimal(\"-99999999999999999999.999999999999\"),\n      BigDecimal(\"-9223372036854775808.234567\"),\n      // Long Min\n      BigDecimal(\"-9223372036854775807.123123\"),\n      BigDecimal(\"-2147483648.123123123\"),\n      // Int Min\n      BigDecimal(\"-2147483647.123123123\"),\n      BigDecimal(\"-123456.789\"),\n      BigDecimal(\"0.00000000000\"),\n      // Small-magnitude non-zero: adj_exp = -9 + 0 = -9 < -6, so LEGACY produces\n      // scientific notation \"1E-9\" / \"1.000000000E-9\" rather than plain \"0.000000001\".\n      BigDecimal(\"0.000000001\"),\n      BigDecimal(\"-0.000000001\"),\n      BigDecimal(\"123456.789\"),\n      // Int Max\n      BigDecimal(\"2147483647.123123123\"),\n      BigDecimal(\"2147483648.123123123\"),\n      BigDecimal(\"9223372036854775807.123123\"),\n      // Long Max\n      BigDecimal(\"9223372036854775808.234567\"),\n      BigDecimal(\"99999999999999999999.999999999999\"))\n    generateDecimalsPrecision38Scale18(values)\n  }\n\n  private def generateDecimalsPrecision38Scale18(values: Seq[BigDecimal]): DataFrame = {\n    withNulls(values).toDF(\"a\")\n  }\n\n  private def generateDateLiterals(): Seq[String] = {\n    // add 1st, 10th, 20th of each month from epoch to 2027\n    val sampledDates = (1970 to 2027).flatMap { year =>\n      (1 to 12).flatMap { month =>\n        Seq(1, 10, 20).map(day => f\"$year-$month%02d-$day%02d\")\n      }\n    }\n\n    // DST transition dates (1970-2099) for US, EU, Australia\n    val dstDates = (1970 to 2099).flatMap { year =>\n      Seq(\n        // spring forward\n        s\"$year-03-08\",\n        s\"$year-03-09\",\n        s\"$year-03-10\",\n        s\"$year-03-11\",\n        s\"$year-03-14\",\n        s\"$year-03-15\",\n        s\"$year-03-25\",\n        s\"$year-03-26\",\n        s\"$year-03-27\",\n        s\"$year-03-28\",\n        s\"$year-03-29\",\n        s\"$year-03-30\",\n        s\"$year-03-31\",\n        // April (Australia fall back)\n        s\"$year-04-01\",\n        s\"$year-04-02\",\n        s\"$year-04-03\",\n        s\"$year-04-04\",\n        s\"$year-04-05\",\n        // October (EU fall back and Australia spring forward)\n        s\"$year-10-01\",\n        s\"$year-10-02\",\n        s\"$year-10-03\",\n        s\"$year-10-04\",\n        s\"$year-10-05\",\n        s\"$year-10-25\",\n        s\"$year-10-26\",\n        s\"$year-10-27\",\n        s\"$year-10-28\",\n        s\"$year-10-29\",\n        s\"$year-10-30\",\n        s\"$year-10-31\",\n        // US fall back\n        s\"$year-11-01\",\n        s\"$year-11-02\",\n        
s\"$year-11-03\",\n        s\"$year-11-04\",\n        s\"$year-11-05\",\n        s\"$year-11-06\",\n        s\"$year-11-07\",\n        s\"$year-11-08\")\n    }\n\n    // Edge cases\n    val edgeCases = Seq(\"1969-12-31\", \"2000-02-29\", \"999-01-01\", \"12345-01-01\")\n    (sampledDates ++ dstDates ++ edgeCases).distinct\n  }\n\n  private def generateDates(): DataFrame = {\n    val values = generateDateLiterals()\n    withNulls(values).toDF(\"b\").withColumn(\"a\", col(\"b\").cast(DataTypes.DateType)).drop(\"b\")\n  }\n\n  // Extended values are Timestamps that are outside dates supported chrono::DateTime and\n  // therefore not supported by operations using it.\n  private def generateTimestampsExtended(): DataFrame = {\n    val values = Seq(\"290000-12-31T01:00:00+02:00\")\n    generateTimestamps().unionByName(\n      values.toDF(\"str\").select(col(\"str\").cast(DataTypes.TimestampType).as(\"a\")))\n  }\n\n  private def generateTimestampLiterals(): Seq[String] =\n    Seq(\n      \"2024-01-01T12:34:56.123456\",\n      \"2024-01-01T01:00:00Z\",\n      \"9999-12-31T01:00:00-02:00\",\n      \"2024-12-31T01:00:00+02:00\")\n\n  private def generateTimestamps(): DataFrame = {\n    val values = generateTimestampLiterals()\n    withNulls(values)\n      .toDF(\"str\")\n      .withColumn(\"a\", col(\"str\").cast(DataTypes.TimestampType))\n      .drop(\"str\")\n  }\n\n  private def generateBinary(): DataFrame = {\n    val r = new Random(0)\n    val bytes = new Array[Byte](8)\n    val values: Seq[Array[Byte]] = Range(0, dataSize).map(_ => {\n      r.nextBytes(bytes)\n      bytes.clone()\n    })\n    values.toDF(\"a\")\n  }\n\n  private def withNulls[T](values: Seq[T]): Seq[Option[T]] = {\n    values.map(v => Some(v)) ++ Seq(None)\n  }\n\n  private def castFallbackTest(\n      input: DataFrame,\n      toType: DataType,\n      expectedMessage: String): Unit = {\n    withTempPath { dir =>\n      val data = roundtripParquet(input, dir).coalesce(1)\n      data.createOrReplaceTempView(\"t\")\n\n      withSQLConf((SQLConf.ANSI_ENABLED.key, \"false\")) {\n        val df = data.withColumn(\"converted\", col(\"a\").cast(toType))\n        df.collect()\n        val str =\n          new ExtendedExplainInfo().generateExtendedInfo(df.queryExecution.executedPlan)\n        assert(str.contains(expectedMessage))\n      }\n    }\n  }\n\n  private def castTimestampTest(\n      input: DataFrame,\n      toType: DataType,\n      assertNative: Boolean = false) = {\n    withTempPath { dir =>\n      val data = roundtripParquet(input, dir).coalesce(1)\n      data.createOrReplaceTempView(\"t\")\n\n      withSQLConf((SQLConf.ANSI_ENABLED.key, \"false\")) {\n        // cast() should return null for invalid inputs when ansi mode is disabled\n        val df = data.withColumn(\"converted\", col(\"a\").cast(toType))\n        if (assertNative) {\n          checkSparkAnswerAndOperator(df)\n        } else {\n          checkSparkAnswer(df)\n        }\n\n        // try_cast() should always return null for invalid inputs\n        val df2 = spark.sql(s\"select try_cast(a as ${toType.sql}) from t\")\n        checkSparkAnswer(df2)\n      }\n\n      // with ANSI enabled, we should produce the same exception as Spark\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> \"true\") {\n        val df = data.withColumn(\"converted\", col(\"a\").cast(toType))\n        checkSparkAnswerMaybeThrows(df) match {\n          case (None, None) =>\n          // both succeeded, results already compared\n          case (None, Some(e)) =>\n            throw e\n        
  case (Some(e), None) =>\n            val msg = if (e.getCause != null) e.getCause.getMessage else e.getMessage\n            fail(s\"Comet should have failed with $msg\")\n          case (Some(sparkException), Some(cometException)) =>\n            val sparkMessage =\n              if (sparkException.getCause != null) sparkException.getCause.getMessage\n              else sparkException.getMessage\n            val cometMessage =\n              if (cometException.getCause != null) cometException.getCause.getMessage\n              else cometException.getMessage\n            if (CometSparkSessionExtensions.isSpark40Plus) {\n              assert(sparkMessage.contains(\"SQLSTATE\"))\n              assert(\n                sparkMessage.startsWith(\n                  cometMessage.substring(0, math.min(40, cometMessage.length))))\n            } else {\n              assert(cometMessage == sparkMessage)\n            }\n        }\n        // try_cast()\n        val dfTryCast = spark.sql(s\"select try_cast(a as ${toType.sql}) from t\")\n        checkSparkAnswer(dfTryCast)\n      }\n    }\n  }\n\n  private def castTest(\n      input: DataFrame,\n      toType: DataType,\n      hasIncompatibleType: Boolean = false,\n      testAnsi: Boolean = true,\n      testTry: Boolean = true,\n      useDataFrameDiff: Boolean = false): Unit = {\n\n    withTempPath { dir =>\n      val data = roundtripParquet(input, dir).coalesce(1)\n      val dataWithRowId = data.withColumn(\"__row_id\", monotonically_increasing_id())\n\n      withSQLConf((SQLConf.ANSI_ENABLED.key, \"false\")) {\n        // cast() should return null for invalid inputs when ansi mode is disabled\n        val df = dataWithRowId\n          .select(col(\"__row_id\"), col(\"a\"), col(\"a\").cast(toType).as(\"casted\"))\n          .orderBy(col(\"__row_id\"))\n          .drop(\"__row_id\")\n        if (useDataFrameDiff) {\n          assertDataFrameEqualsWithExceptions(df, assertCometNative = !hasIncompatibleType)\n        } else {\n          if (hasIncompatibleType) {\n            checkSparkAnswer(df)\n          } else {\n            checkSparkAnswerAndOperator(df)\n          }\n        }\n\n        if (testTry) {\n          val df2 = tryCastWithRowId(dataWithRowId, toType)\n          if (hasIncompatibleType) {\n            checkSparkAnswer(df2)\n          } else {\n            if (useDataFrameDiff) {\n              assertDataFrameEqualsWithExceptions(df2, assertCometNative = !hasIncompatibleType)\n            } else {\n              checkSparkAnswerAndOperator(df2)\n            }\n          }\n        }\n      }\n\n      if (testAnsi) {\n        // with ANSI enabled, we should produce the same exception as Spark\n        withSQLConf(\n          SQLConf.ANSI_ENABLED.key -> \"true\",\n          CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\") {\n\n          // cast() should throw exception on invalid inputs when ansi mode is enabled\n          val df = dataWithRowId\n            .select(col(\"__row_id\"), col(\"a\"), col(\"a\").cast(toType).as(\"converted\"))\n            .orderBy(col(\"__row_id\"))\n            .drop(\"__row_id\")\n          val res = if (useDataFrameDiff) {\n            assertDataFrameEqualsWithExceptions(df, assertCometNative = !hasIncompatibleType)\n          } else {\n            checkSparkAnswerMaybeThrows(df)\n          }\n          res match {\n            case (None, None) =>\n            // neither system threw an exception\n            case (None, Some(e)) =>\n              throw e\n            case (Some(e), 
None) =>\n              // Spark failed but Comet succeeded\n              val msg = if (e.getCause != null) e.getCause.getMessage else e.getMessage\n              fail(s\"Comet should have failed with $msg\")\n            case (Some(sparkException), Some(cometException)) =>\n              // both systems threw an exception so we make sure they are the same\n              val sparkMessage =\n                if (sparkException.getCause != null) sparkException.getCause.getMessage\n                else sparkException.getMessage\n              val cometMessage =\n                if (cometException.getCause != null) cometException.getCause.getMessage\n                else cometException.getMessage\n              // this branch only applies to decimal-to-decimal casts where the output\n              // precision/scale causes an overflow\n              if (df.schema(\"a\").dataType.typeName.contains(\"decimal\") && toType.typeName\n                  .contains(\"decimal\") && sparkMessage.contains(\"cannot be represented as\")) {\n                assert(cometMessage.contains(\"too large to store\"))\n              } else {\n                if (CometSparkSessionExtensions.isSpark40Plus) {\n                  // for Spark 4 we expect the sparkException to carry the message\n                  assert(sparkMessage.contains(\"SQLSTATE\"))\n                  // we compare a subset of the error message. Comet grabs the query\n                  // context eagerly so it displays the call site at the\n                  // line of code where the cast method was called, whereas Spark grabs the context\n                  // lazily and displays the call site at the line of code where the error is checked.\n                  assert(\n                    sparkMessage.startsWith(\n                      cometMessage.substring(0, math.min(40, cometMessage.length))))\n                } else {\n                  // for Spark 3.x we expect to reproduce the error message exactly\n                  assert(cometMessage == sparkMessage)\n                }\n              }\n          }\n        }\n\n        if (testTry) {\n          val df2 = tryCastWithRowId(dataWithRowId, toType)\n          if (useDataFrameDiff) {\n            assertDataFrameEqualsWithExceptions(df2, assertCometNative = !hasIncompatibleType)\n          } else {\n            if (hasIncompatibleType) {\n              checkSparkAnswer(df2)\n            } else {\n              checkSparkAnswerAndOperator(df2)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  private def tryCastWithRowId(dataWithRowId: DataFrame, toType: DataType): DataFrame = {\n    dataWithRowId.createOrReplaceTempView(\"t\")\n    // try_cast() should always return null for invalid inputs\n    // not using the Spark DSL since `try_cast` is only available from Spark 4.x\n    spark\n      .sql(s\"select __row_id, a, try_cast(a as ${toType.sql}) as casted from t order by __row_id\")\n      .drop(\"__row_id\")\n  }\n\n  private def roundtripParquet(df: DataFrame, tempDir: File): DataFrame = {\n    val filename = new File(tempDir, s\"castTest_${System.currentTimeMillis()}.parquet\").toString\n    df.write.mode(SaveMode.Overwrite).parquet(filename)\n    spark.read.parquet(filename)\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometCsvExpressionSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.jdk.CollectionConverters._\nimport scala.util.Random\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.catalyst.expressions.StructsToCsv\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions._\nimport org.apache.spark.sql.types.StringType\n\nimport org.apache.comet.testing.{DataGenOptions, ParquetGenerator, SchemaGenOptions}\n\nclass CometCsvExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  test(\"to_csv - default options\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          SchemaGenOptions(generateArray = false, generateStruct = false, generateMap = false),\n          DataGenOptions(allowNull = true, generateNegativeZero = true))\n      }\n      withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[StructsToCsv]) -> \"true\") {\n        val df = spark.read\n          .parquet(filename)\n          .select(\n            to_csv(\n              struct(\n                col(\"c0\"),\n                col(\"c1\"),\n                col(\"c2\"),\n                col(\"c3\"),\n                col(\"c4\"),\n                col(\"c5\"),\n                col(\"c7\"),\n                col(\"c8\"),\n                col(\"c9\"),\n                col(\"c12\"))))\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"to_csv - with configurable formatting options\") {\n    val table = \"t1\"\n    withSQLConf(\n      CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_ICEBERG_COMPAT,\n      CometConf.getExprAllowIncompatConfigKey(classOf[StructsToCsv]) -> \"true\") {\n      withTable(table) {\n        val newLinesStr =\n          \"\"\" abc\n            | bcde\"\"\".stripMargin\n        sql(s\"create table $table(col string) using parquet\")\n        sql(s\"insert into $table values('')\")\n        sql(s\"insert into $table values(cast(null as string))\")\n        sql(s\"insert into $table values('   abc')\")\n        sql(s\"insert into $table values('abc   ')\")\n        sql(s\"insert into $table values('  abc   ')\")\n        sql(s\"\"\"insert into $table values('abc \\\"abc\\\"')\"\"\")\n        sql(s\"\"\"insert into $table values('$newLinesStr')\"\"\")\n        sql(s\"\"\"insert into $table values('abc,def')\"\"\")\n        sql(s\"\"\"insert into $table 
values('abc;def;ghi')\"\"\")\n        sql(s\"\"\"insert into $table values('abc\\tdef')\"\"\")\n        sql(s\"\"\"insert into $table values('a\"b\"c')\"\"\")\n        sql(s\"\"\"insert into $table values('\"quoted\"')\"\"\")\n        sql(s\"\"\"insert into $table values('line1\\nline2')\"\"\")\n        sql(s\"\"\"insert into $table values('line1\\rline2')\"\"\")\n        sql(s\"\"\"insert into $table values('line1\\r\\nline2')\"\"\")\n        sql(s\"\"\"insert into $table values('a''b')\"\"\")\n        sql(s\"\"\"insert into $table values('a\\\\\\\\b')\"\"\")\n\n        val df = sql(s\"select * from $table order by col\")\n\n        // Default options\n        checkSparkAnswerAndOperator(df.select(to_csv(struct(col(\"col\"), lit(1)))))\n\n        // Custom delimiter\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(struct(col(\"col\"), lit(1)), Map(\"delimiter\" -> \";\").asJava)))\n\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(struct(col(\"col\"), lit(1)), Map(\"delimiter\" -> \"|\").asJava)))\n\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(struct(col(\"col\"), lit(1)), Map(\"delimiter\" -> \"\\t\").asJava)))\n\n        // Whitespace handling\n        checkSparkAnswerAndOperator(\n          df.select(\n            to_csv(\n              struct(col(\"col\"), lit(1)),\n              Map(\n                \"delimiter\" -> \";\",\n                \"ignoreLeadingWhiteSpace\" -> \"false\",\n                \"ignoreTrailingWhiteSpace\" -> \"false\").asJava)))\n\n        checkSparkAnswerAndOperator(\n          df.select(\n            to_csv(\n              struct(col(\"col\"), lit(1)),\n              Map(\n                \"ignoreLeadingWhiteSpace\" -> \"true\",\n                \"ignoreTrailingWhiteSpace\" -> \"false\").asJava)))\n\n        checkSparkAnswerAndOperator(\n          df.select(\n            to_csv(\n              struct(col(\"col\"), lit(1)),\n              Map(\n                \"ignoreLeadingWhiteSpace\" -> \"false\",\n                \"ignoreTrailingWhiteSpace\" -> \"true\").asJava)))\n\n        checkSparkAnswerAndOperator(df.select(to_csv(\n          struct(col(\"col\"), lit(1)),\n          Map(\"ignoreLeadingWhiteSpace\" -> \"true\", \"ignoreTrailingWhiteSpace\" -> \"true\").asJava)))\n\n        // Escape character\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(struct(col(\"col\"), lit(1)), Map(\"escape\" -> \"\\\\\").asJava)))\n\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(struct(col(\"col\"), lit(1)), Map(\"escape\" -> \"/\").asJava)))\n\n        // Quote options\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(struct(col(\"col\"), lit(1)), Map(\"quoteAll\" -> \"true\").asJava)))\n\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(struct(col(\"col\"), lit(1)), Map(\"quoteAll\" -> \"false\").asJava)))\n\n        // Null value representation\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(struct(col(\"col\"), lit(1)), Map(\"nullValue\" -> \"NULL\").asJava)))\n\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(struct(col(\"col\"), lit(1)), Map(\"nullValue\" -> \"N/A\").asJava)))\n\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(struct(col(\"col\"), lit(1)), Map(\"nullValue\" -> \"\").asJava)))\n\n        // Combined options\n        checkSparkAnswerAndOperator(\n          df.select(\n            to_csv(\n              struct(col(\"col\"), lit(1)),\n              Map(\n                \"delimiter\" 
-> \"|\",\n                \"quoteAll\" -> \"false\",\n                \"escape\" -> \"\\\\\",\n                \"nullValue\" -> \"NULL\").asJava)))\n\n        checkSparkAnswerAndOperator(\n          df.select(to_csv(\n            struct(col(\"col\"), lit(1)),\n            Map(\n              \"delimiter\" -> \";\",\n              \"quoteAll\" -> \"false\",\n              \"ignoreLeadingWhiteSpace\" -> \"true\",\n              \"ignoreTrailingWhiteSpace\" -> \"true\",\n              \"nullValue\" -> \"N/A\").asJava)))\n\n        // Edge cases with multiple columns\n        checkSparkAnswerAndOperator(\n          df.select(\n            to_csv(\n              struct(col(\"col\"), lit(1), lit(\"test\"), lit(null).cast(StringType)),\n              Map(\"delimiter\" -> \",\", \"quoteAll\" -> \"true\").asJava)))\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometDateTimeUtilsSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.io.File\n\nimport org.apache.spark.sql.{CometTestBase, DataFrame, SaveMode}\nimport org.apache.spark.sql.functions.col\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.DataTypes\n\n/**\n * Tests for string-to-timestamp (and string-to-date) cast correctness, ported from Spark's\n * DateTimeUtilsSuite. That suite lives in sql/catalyst and does not extend CometTestBase, so it\n * never runs in Comet CI.\n */\nclass CometDateTimeUtilsSuite extends CometTestBase {\n\n  import testImplicits._\n\n  private def roundtripParquet(df: DataFrame, tempDir: File): DataFrame = {\n    val filename = new File(tempDir, s\"dtutils_${System.currentTimeMillis()}.parquet\").toString\n    df.write.mode(SaveMode.Overwrite).parquet(filename)\n    spark.read.parquet(filename)\n  }\n\n  /**\n   * Writes values to Parquet (to prevent constant folding), casts the \"a\" column to\n   * TimestampType, and asserts that Comet produces the same result as Spark.\n   */\n  private def checkCastToTimestamp(values: Seq[String]): Unit = {\n    withTempPath { dir =>\n      val df = roundtripParquet(values.toDF(\"a\"), dir).coalesce(1)\n      checkSparkAnswer(df.withColumn(\"ts\", col(\"a\").cast(DataTypes.TimestampType)))\n    }\n  }\n\n  /**\n   * Same as checkCastToTimestamp but casts the result to BIGINT (seconds since epoch) before\n   * collecting. Use for extreme-year values (e.g. year 294247 or -290308) where both\n   * toJavaTimestamp and timestamp-to-string fail in the test harness due to calendar range\n   * limits. 
Casting to LongType is pure arithmetic on the i64 epoch value and has no range\n   * restriction.\n   */\n  private def checkCastToTimestampAsLong(values: Seq[String]): Unit = {\n    withTempPath { dir =>\n      val df = roundtripParquet(values.toDF(\"a\"), dir).coalesce(1)\n      checkSparkAnswer(\n        df.withColumn(\"ts\", col(\"a\").cast(DataTypes.TimestampType).cast(DataTypes.LongType)))\n    }\n  }\n\n  private def checkCastToTimestampNTZ(values: Seq[String]): Unit = {\n    withTempPath { dir =>\n      val df = roundtripParquet(values.toDF(\"a\"), dir).coalesce(1)\n      checkSparkAnswer(df.withColumn(\"ts\", col(\"a\").cast(DataTypes.TimestampNTZType)))\n    }\n  }\n\n  private def checkCastToDate(values: Seq[String]): Unit = {\n    withTempPath { dir =>\n      val df = roundtripParquet(values.toDF(\"a\"), dir).coalesce(1)\n      checkSparkAnswer(df.withColumn(\"dt\", col(\"a\").cast(DataTypes.DateType)))\n    }\n  }\n\n  test(\"string to timestamp - basic date and datetime formats\") {\n    // Run for a few representative session timezones instead of the ALL_TIMEZONES loop in\n    // Spark's DateTimeUtilsSuite. The exact UTC epoch depends on the session timezone for\n    // values without an embedded offset.\n    for (tz <- Seq(\"UTC\", \"America/Los_Angeles\", \"Asia/Kathmandu\")) {\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n        checkCastToTimestamp(\n          Seq(\n            \"1969-12-31 16:00:00\",\n            \"0001\",\n            \"2015-03\",\n            // Leading/trailing whitespace is accepted\n            \"2015-03-18\",\n            \"2015-03-18 \",\n            \" 2015-03-18\",\n            \" 2015-03-18 \",\n            \"2015-03-18 12:03:17\",\n            \"2015-03-18T12:03:17\",\n            // Milliseconds\n            \"2015-03-18 12:03:17.123\",\n            \"2015-03-18T12:03:17.123\",\n            // Microseconds\n            \"2015-03-18T12:03:17.123121\",\n            \"2015-03-18T12:03:17.12312\",\n            // More than 6 fractional digits — truncated to microseconds\n            \"2015-03-18T12:03:17.123456789\",\n            // Time-only: bare HH:MM:SS\n            \"18:12:15\",\n            // Four fractional digits, trailing zeros (parsed as 0.1 seconds)\n            \"2011-05-06 07:08:09.1000\"))\n      }\n    }\n  }\n\n  test(\"string to timestamp - embedded timezone offsets\") {\n    // When the string includes a timezone offset it determines the UTC epoch regardless of the\n    // session timezone. 
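For example, \"2015-03-18T12:03:17+07:30\" denotes 2015-03-18T04:33:17Z\n    // in every session timezone. 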
These cases exercise the offset-parsing logic.\n    checkCastToTimestamp(\n      Seq(\n        // ±HH:MM\n        \"2015-03-18T12:03:17-13:53\",\n        \"2015-03-18T12:03:17-01:00\",\n        \"2015-03-18T12:03:17+07:30\",\n        \"2015-03-18T12:03:17+07:03\",\n        // Z and UTC\n        \"2015-03-18T12:03:17Z\",\n        \"2015-03-18 12:03:17Z\",\n        \"2015-03-18 12:03:17UTC\",\n        // ±HHMM (no colon) — issue #3775\n        \"2015-03-18T12:03:17-1353\",\n        \"2015-03-18T12:03:17-0100\",\n        \"2015-03-18T12:03:17+0730\",\n        \"2015-03-18T12:03:17+0703\",\n        // ±H:MM or ±H:M (single-digit hour) — issue #3775\n        \"2015-03-18T12:03:17-1:0\",\n        // GMT±HH:MM — issue #3775\n        \"2015-03-18T12:03:17GMT-13:53\",\n        \"2015-03-18T12:03:17GMT-01:00\",\n        \"2015-03-18T12:03:17 GMT+07:30\",\n        \"2015-03-18T12:03:17GMT+07:03\",\n        // With milliseconds\n        \"2015-03-18T12:03:17.456Z\",\n        \"2015-03-18 12:03:17.456Z\",\n        \"2015-03-18 12:03:17.456 UTC\",\n        \"2015-03-18T12:03:17.123-1:0\",\n        \"2015-03-18T12:03:17.123-01:00\",\n        \"2015-03-18T12:03:17.123 GMT-01:00\",\n        \"2015-03-18T12:03:17.123-0100\",\n        \"2015-03-18T12:03:17.123+07:30\",\n        \"2015-03-18T12:03:17.123 GMT+07:30\",\n        \"2015-03-18T12:03:17.123+0730\",\n        \"2015-03-18T12:03:17.123GMT+07:30\",\n        // With microseconds\n        \"2015-03-18T12:03:17.123121+7:30\",\n        \"2015-03-18T12:03:17.123121 GMT+0730\",\n        \"2015-03-18T12:03:17.12312+7:30\",\n        \"2015-03-18T12:03:17.12312 UT+07:30\",\n        \"2015-03-18T12:03:17.12312+0730\",\n        // Nanoseconds truncated to microseconds\n        \"2015-03-18T12:03:17.123456789+0:00\",\n        \"2015-03-18T12:03:17.123456789 UTC+0\",\n        \"2015-03-18T12:03:17.123456789GMT+00:00\",\n        // Named timezone — issue #3775\n        \"2015-03-18T12:03:17.123456 Europe/Moscow\",\n        // T-prefixed time-only with offset\n        \"T18:12:15.12312+7:30\",\n        \"T18:12:15.12312 UTC+07:30\",\n        \"T18:12:15.12312+0730\",\n        // Bare time-only with offset\n        \"18:12:15.12312+7:30\",\n        \"18:12:15.12312 GMT+07:30\",\n        \"18:12:15.12312+0730\"))\n  }\n\n  test(\"string to timestamp - invalid formats return null\") {\n    // All of these should produce null (not throw) in non-ANSI mode.\n    for (tz <- Seq(\"UTC\", \"America/Los_Angeles\")) {\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n        checkCastToTimestamp(\n          Seq(\n            \"238\",\n            \"2015-03-18 123142\",\n            \"2015-03-18T123123\",\n            \"2015-03-18X\",\n            \"2015/03/18\",\n            \"2015.03.18\",\n            \"20150318\",\n            \"2015-031-8\",\n            \"015-01-18\",\n            \"2015-03-18T12:03.17-20:0\",\n            \"2015-03-18T12:03.17-0:70\",\n            \"2015-03-18T12:03.17-1:0:0\",\n            \"1999 08 01\",\n            \"1999-08 01\",\n            \"1999 08\",\n            \"\",\n            \"    \",\n            \"+\",\n            \"T\",\n            \"2015-03-18T\",\n            \"12::\",\n            \"2015-03-18T12:03:17-8:\",\n            \"2015-03-18T12:03:17-8:30:\"))\n      }\n    }\n  }\n\n  // \"SPARK-35780: support full range of timestamp string\"\n  test(\"SPARK-35780: full range of timestamp string\") {\n    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"UTC\") {\n      // Normal-range cases: collect() as TimestampType 
directly.\n      checkCastToTimestamp(\n        Seq(\n          // Negative year\n          \"-1969-12-31 16:00:00\",\n          // Zero-padded year\n          \"02015-03-18 16:00:00\",\n          \"000001\",\n          \"-000001\",\n          \"00238\",\n          // 5-digit year (within NaiveDate range)\n          \"99999-03-01T12:03:17\",\n          // These look like time-only but with a leading sign — invalid\n          \"+12:12:12\",\n          \"-12:12:12\",\n          // Positive year-sign prefix IS accepted by Spark for timestamps (same value as without +)\n          \"+2020-01-01T12:34:56\",\n          // Empty / whitespace\n          \"\",\n          \"    \",\n          \"+\",\n          // One microsecond past Long.MaxValue — overflows to null\n          \"294247-01-10T04:00:54.775808Z\",\n          // One microsecond before Long.MinValue — overflows to null\n          \"-290308-12-21T19:59:05.224191Z\",\n          // Integer overflow in individual fields\n          \"4294967297\",\n          \"2021-4294967297-11\",\n          \"4294967297:30:00\",\n          \"2021-11-4294967297T12:30:00\",\n          \"2021-01-01T12:4294967297:00\",\n          \"2021-01-01T12:30:4294967297\",\n          \"2021-01-01T12:30:4294967297.123456\",\n          \"2021-01-01T12:30:4294967297+07:30\",\n          \"2021-01-01T12:30:4294967297UTC\",\n          \"2021-01-01T12:30:4294967297+4294967297:30\"))\n\n      // Extreme-year boundary cases: collecting a TimestampType value for year 294247 or -290308\n      // overflows in toJavaTimestamp, and casting to StringType fails in Comet (native engine also\n      // limits timestamp-to-string to ±262143 CE). Cast to LongType (seconds since epoch) instead —\n      // it is pure i64 arithmetic with no calendar range restriction.\n      checkCastToTimestampAsLong(\n        Seq(\n          // Long.MaxValue boundary — valid, equals Long.MaxValue microseconds\n          \"294247-01-10T04:00:54.775807Z\",\n          // Long.MinValue boundary — valid, equals Long.MinValue microseconds\n          \"-290308-12-21T19:59:05.224192Z\"))\n    }\n  }\n\n  test(\"SPARK-15379: invalid calendar dates in string to date cast\") {\n    // Feb 29 on a non-leap year and Apr 31 must produce null for both DATE and TIMESTAMP.\n    checkCastToDate(Seq(\"2015-02-29 00:00:00\", \"2015-04-31 00:00:00\", \"2015-02-29\", \"2015-04-31\"))\n\n    checkCastToTimestamp(\n      Seq(\"2015-02-29 00:00:00\", \"2015-04-31 00:00:00\", \"2015-02-29\", \"2015-04-31\"))\n  }\n\n  test(\"trailing characters while converting string to timestamp\") {\n    // Garbage after a valid ISO timestamp must make the whole value null.\n    checkCastToTimestamp(Seq(\"2019-10-31T10:59:23Z:::\"))\n  }\n\n  test(\"DST spring-forward gap and fall-back overlap\") {\n    // America/New_York: spring-forward 2020-03-08 02:00 -> 03:00 (gap [02:00, 03:00)).\n    // Spark advances the non-existent time forward by the gap duration:\n    //   \"2020-03-08 02:30:00\" -> 03:30 EDT (UTC-4) = 2020-03-08T07:30:00Z.\n    //\n    // Fall-back 2020-11-01 02:00 EDT -> 01:00 EST (overlap [01:00, 02:00)).\n    // Spark picks the earlier (pre-transition) UTC instant:\n    //   \"2020-11-01 01:30:00\" (EDT, UTC-4) = 2020-11-01T05:30:00Z.\n    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"America/New_York\") {\n      checkCastToTimestamp(\n        Seq(\n          // Spring-forward gap: 02:30 does not exist, Spark advances to 03:30 EDT\n          \"2020-03-08 02:30:00\",\n          // Boundary: just before the gap\n          
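// (still EST, UTC-5, i.e. 2020-03-08T06:59:59Z)\n          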
\"2020-03-08 01:59:59\",\n          // Boundary: just after the gap (first valid post-transition time)\n          \"2020-03-08 03:00:00\",\n          // Fall-back overlap: 01:30 exists twice; Spark picks earlier UTC (EDT = UTC-4)\n          \"2020-11-01 01:30:00\",\n          // Boundary: just before fall-back\n          \"2020-11-01 01:59:59\",\n          // Just after fall-back (unambiguous EST)\n          \"2020-11-01 02:00:00\"))\n    }\n  }\n\n  test(\"SPARK-37326: cast string to TIMESTAMP_NTZ rejects timezone offsets\") {\n    checkCastToTimestampNTZ(\n      Seq(\n        // Has offset — null\n        \"2021-11-22 10:54:27 +08:00\",\n        // No offset — parsed as local datetime\n        \"2021-11-22 10:54:27\",\n        // More NTZ-compatible values\n        \"2021-11-22\",\n        \"2021-11-22T10:54:27.123456\"))\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometExpressionSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.time.{Duration, Period}\n\nimport scala.util.Random\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.{CometTestBase, DataFrame, Row}\nimport org.apache.spark.sql.catalyst.expressions.{Alias, Cast, FromUnixTime, Literal, StructsToJson, TruncDate, TruncTimestamp}\nimport org.apache.spark.sql.catalyst.optimizer.SimplifyExtractValueOps\nimport org.apache.spark.sql.comet.CometProjectExec\nimport org.apache.spark.sql.execution.{ProjectExec, SparkPlan}\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions._\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.internal.SQLConf.SESSION_LOCAL_TIMEZONE\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.CometSparkSessionExtensions.isSpark40Plus\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator}\n\nclass CometExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n  import testImplicits._\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_AUTO) {\n        testFun\n      }\n    }\n  }\n\n  val ARITHMETIC_OVERFLOW_EXCEPTION_MSG =\n    \"\"\"[ARITHMETIC_OVERFLOW] integer overflow. If necessary set \"spark.sql.ansi.enabled\" to \"false\" to bypass this error\"\"\"\n  val DIVIDE_BY_ZERO_EXCEPTION_MSG =\n    \"\"\"Division by zero. 
Use `try_divide` to tolerate divisor being 0 and return NULL instead\"\"\"\n\n  // Temporary test to verify checkSparkAnswer failure output labels Comet/Spark correctly.\n  ignore(\"check output labels on mismatch\") {\n    val cometDf = Seq((1, \"apple\"), (2, \"banana\"), (3, \"cherry\")).toDF(\"id\", \"fruit\")\n    val sparkAnswer = Seq(Row(1, \"apple\"), Row(2, \"BANANA\"), Row(3, \"cherry\"))\n    checkCometAnswer(cometDf, sparkAnswer)\n  }\n\n  test(\"sort floating point with negative zero\") {\n    val schema = StructType(\n      Seq(\n        StructField(\"c0\", DataTypes.FloatType, true),\n        StructField(\"c1\", DataTypes.DoubleType, true)))\n    val df = FuzzDataGenerator.generateDataFrame(\n      new Random(42),\n      spark,\n      schema,\n      1000,\n      DataGenOptions(generateNegativeZero = true))\n    df.createOrReplaceTempView(\"tbl\")\n\n    withSQLConf(\n      CometConf.getExprAllowIncompatConfigKey(\"SortOrder\") -> \"false\",\n      CometConf.COMET_EXEC_STRICT_FLOATING_POINT.key -> \"true\") {\n      checkSparkAnswerAndFallbackReasons(\n        \"select * from tbl order by 1, 2\",\n        Set(\n          \"unsupported range partitioning sort order\",\n          \"Sorting on floating-point is not 100% compatible with Spark\"))\n    }\n  }\n\n  test(\"sort array of floating point with negative zero\") {\n    val schema = StructType(\n      Seq(\n        StructField(\"c0\", DataTypes.createArrayType(DataTypes.FloatType), true),\n        StructField(\"c1\", DataTypes.createArrayType(DataTypes.DoubleType), true)))\n    val df = FuzzDataGenerator.generateDataFrame(\n      new Random(42),\n      spark,\n      schema,\n      1000,\n      DataGenOptions(generateNegativeZero = true))\n    df.createOrReplaceTempView(\"tbl\")\n\n    withSQLConf(\n      CometConf.getExprAllowIncompatConfigKey(\"SortOrder\") -> \"false\",\n      CometConf.COMET_EXEC_STRICT_FLOATING_POINT.key -> \"true\") {\n      checkSparkAnswerAndFallbackReason(\n        \"select * from tbl order by 1, 2\",\n        \"unsupported range partitioning sort order\")\n    }\n  }\n\n  test(\"sort struct containing floating point with negative zero\") {\n    val schema = StructType(\n      Seq(\n        StructField(\n          \"float_struct\",\n          StructType(Seq(StructField(\"c0\", DataTypes.FloatType, true)))),\n        StructField(\n          \"float_double\",\n          StructType(Seq(StructField(\"c0\", DataTypes.DoubleType, true))))))\n    val df = FuzzDataGenerator.generateDataFrame(\n      new Random(42),\n      spark,\n      schema,\n      1000,\n      DataGenOptions(generateNegativeZero = true))\n    df.createOrReplaceTempView(\"tbl\")\n\n    withSQLConf(\n      CometConf.getExprAllowIncompatConfigKey(\"SortOrder\") -> \"false\",\n      CometConf.COMET_EXEC_STRICT_FLOATING_POINT.key -> \"true\") {\n      checkSparkAnswerAndFallbackReason(\n        \"select * from tbl order by 1, 2\",\n        \"unsupported range partitioning sort order\")\n    }\n  }\n\n  test(\"compare true/false to negative zero\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(col1 boolean, col2 float) using parquet\")\n          sql(s\"insert into $table values(true, -0.0)\")\n          sql(s\"insert into $table values(false, -0.0)\")\n\n          checkSparkAnswerAndOperator(\n            s\"SELECT col1, negative(col2), cast(col1 as float), col1 = 
negative(col2) FROM $table\")\n        }\n      }\n    }\n  }\n\n  test(\"parquet default values\") {\n    withTable(\"t1\") {\n      sql(\"create table t1(col1 boolean) using parquet\")\n      sql(\"insert into t1 values(true)\")\n      sql(\"alter table t1 add column col2 string default 'hello'\")\n      checkSparkAnswerAndOperator(\"select * from t1\")\n    }\n  }\n\n  test(\"decimals divide by zero\") {\n    Seq(true, false).foreach { dictionary =>\n      withSQLConf(\n        SQLConf.PARQUET_WRITE_LEGACY_FORMAT.key -> \"false\",\n        \"parquet.enable.dictionary\" -> dictionary.toString) {\n        withTempPath { dir =>\n          val data = makeDecimalRDD(10, DecimalType(18, 10), dictionary)\n          data.write.parquet(dir.getCanonicalPath)\n          readParquetFile(dir.getCanonicalPath) { df =>\n            {\n              val decimalLiteral = Decimal(0.00)\n              val cometDf = df.select($\"dec\" / decimalLiteral, $\"dec\" % decimalLiteral)\n              checkSparkSchema(cometDf)\n              checkSparkAnswerAndOperator(cometDf)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"Integral Division Overflow Handling Matches Spark Behavior\") {\n    withTable(\"t1\") {\n      val value = Long.MinValue\n      sql(\"create table t1(c1 long, c2 short) using parquet\")\n      sql(s\"insert into t1 values($value, -1)\")\n      val res = sql(\"select c1 div c2 from t1 order by c1\")\n      checkSparkAnswerAndOperator(res)\n    }\n  }\n\n  test(\"basic data type support - excluding u8/u16\") {\n    // variant that skips _9 (UINT_8) and _10 (UINT_16) for default scan impl\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        withParquetTable(path.toString, \"tbl\") {\n          // select all columns except _9 (UINT_8) and _10 (UINT_16)\n          checkSparkAnswerAndOperator(\n            \"\"\"select _1, _2, _3, _4, _5, _6, _7, _8, _11, _12, _13, _14, _15, _16, _17,\n              |_18, _19, _20, _21, _id FROM tbl WHERE _2 > 100\"\"\".stripMargin)\n        }\n      }\n    }\n  }\n\n  test(\"uint data type support - excluding u8/u16\") {\n    // variant that tests UINT_32 and UINT_64, skipping _9 (UINT_8) and _10 (UINT_16)\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"testuint.parquet\")\n        makeParquetFileAllPrimitiveTypes(\n          path,\n          dictionaryEnabled = dictionaryEnabled,\n          Byte.MinValue,\n          Byte.MaxValue)\n        withParquetTable(path.toString, \"tbl\") {\n          // test UINT_32 (_11) and UINT_64 (_12) only\n          checkSparkAnswerAndOperator(\"select _11, _12 from tbl order by _11\")\n        }\n      }\n    }\n  }\n\n  test(\"null literals\") {\n    val batchSize = 1000\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, batchSize)\n        withParquetTable(path.toString, \"tbl\") {\n          val sqlString =\n            \"\"\"SELECT\n              |_4 + null,\n              |_15 - null,\n              |_16 * null,\n              |cast(null as struct<_1:int>),\n              |cast(null as map<int, int>),\n              |cast(null as array<int>)\n   
           |FROM tbl\"\"\".stripMargin\n          val df2 = sql(sqlString)\n          val rows = df2.collect()\n          assert(rows.length == batchSize)\n          assert(rows.forall(_ == Row(null, null, null, null, null, null)))\n\n          checkSparkAnswerAndOperator(sqlString)\n        }\n      }\n\n    }\n  }\n\n  test(\"date and timestamp type literals\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        withParquetTable(path.toString, \"tbl\") {\n          checkSparkAnswerAndOperator(\n            \"SELECT _4 FROM tbl WHERE \" +\n              \"_20 > CAST('2020-01-01' AS DATE) AND _18 < CAST('2020-01-01' AS TIMESTAMP)\")\n        }\n      }\n    }\n  }\n\n  test(\"date_add with int scalars\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      Seq(\"TINYINT\", \"SHORT\", \"INT\").foreach { intType =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n          withParquetTable(path.toString, \"tbl\") {\n            checkSparkAnswerAndOperator(f\"SELECT _20 + CAST(2 as $intType) from tbl\")\n          }\n        }\n      }\n    }\n  }\n\n  // TODO: https://github.com/apache/datafusion-comet/issues/2539\n  ignore(\"date_add with scalar overflow\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        withParquetTable(path.toString, \"tbl\") {\n          val (sparkErr, cometErr) =\n            checkSparkAnswerMaybeThrows(sql(s\"SELECT _20 + ${Int.MaxValue} FROM tbl\"))\n          if (isSpark40Plus) {\n            assert(sparkErr.get.getMessage.contains(\"EXPRESSION_DECODING_FAILED\"))\n          } else {\n            assert(sparkErr.get.getMessage.contains(\"integer overflow\"))\n          }\n          assert(cometErr.get.getMessage.contains(\"attempt to add with overflow\"))\n        }\n      }\n    }\n  }\n\n  test(\"date_add with int arrays\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      Seq(\"_2\", \"_3\", \"_4\").foreach { intColumn => // tinyint, short, int columns\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n          withParquetTable(path.toString, \"tbl\") {\n            checkSparkAnswerAndOperator(f\"SELECT _20 + $intColumn FROM tbl\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"date_sub with int scalars\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      Seq(\"TINYINT\", \"SHORT\", \"INT\").foreach { intType =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n          withParquetTable(path.toString, \"tbl\") {\n            checkSparkAnswerAndOperator(f\"SELECT _20 - CAST(2 as $intType) from tbl\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"date_sub with scalar overflow\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new 
Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        withParquetTable(path.toString, \"tbl\") {\n          val (sparkErr, cometErr) =\n            checkSparkAnswerMaybeThrows(sql(s\"SELECT _20 - ${Int.MaxValue} FROM tbl\"))\n          if (isSpark40Plus) {\n            assert(sparkErr.get.getMessage.contains(\"EXPRESSION_DECODING_FAILED\"))\n            assert(cometErr.get.getMessage.contains(\"EXPRESSION_DECODING_FAILED\"))\n          } else {\n            assert(sparkErr.get.getMessage.contains(\"integer overflow\"))\n            assert(cometErr.get.getMessage.contains(\"integer overflow\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"date_sub with int arrays\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      Seq(\"_2\", \"_3\", \"_4\").foreach { intColumn => // tinyint, short, int columns\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n          withParquetTable(path.toString, \"tbl\") {\n            checkSparkAnswerAndOperator(f\"SELECT _20 - $intColumn FROM tbl\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"try_add\") {\n    val data = Seq((1, 1))\n    withParquetTable(data, \"tbl\") {\n      checkSparkAnswerAndOperator(spark.sql(\"\"\"\n          |SELECT\n          |  try_add(2147483647, 1),\n          |  try_add(-2147483648, -1),\n          |  try_add(NULL, 5),\n          |  try_add(5, NULL),\n          |  try_add(9223372036854775807, 1),\n          |  try_add(-9223372036854775808, -1)\n          |  from tbl\n          |  \"\"\".stripMargin))\n    }\n  }\n\n  test(\"try_subtract\") {\n    val data = Seq((1, 1))\n    withParquetTable(data, \"tbl\") {\n      checkSparkAnswerAndOperator(spark.sql(\"\"\"\n          |SELECT\n          |  try_subtract(2147483647, -1),\n          |  try_subtract(-2147483648, 1),\n          |  try_subtract(NULL, 5),\n          |  try_subtract(5, NULL),\n          |  try_subtract(9223372036854775807, -1),\n          |  try_subtract(-9223372036854775808, 1)\n          |  FROM tbl\n           \"\"\".stripMargin))\n    }\n  }\n\n  test(\"try_multiply\") {\n    val data = Seq((1, 1))\n    withParquetTable(data, \"tbl\") {\n      checkSparkAnswerAndOperator(spark.sql(\"\"\"\n          |SELECT\n          |  try_multiply(1073741824, 4),\n          |  try_multiply(-1073741824, 4),\n          |  try_multiply(NULL, 5),\n          |  try_multiply(5, NULL),\n          |  try_multiply(3037000499, 3037000500),\n          |  try_multiply(-3037000499, 3037000500)\n          |FROM tbl\n           \"\"\".stripMargin))\n    }\n  }\n\n  test(\"try_divide\") {\n    val data = Seq((15121991, 0))\n    withParquetTable(data, \"tbl\") {\n      checkSparkAnswerAndOperator(\"SELECT try_divide(_1, _2) FROM tbl\")\n      checkSparkAnswerAndOperator(\"\"\"\n            |SELECT\n            |  try_divide(10, 0),\n            |  try_divide(NULL, 5),\n            |  try_divide(5, NULL),\n            |  try_divide(-2147483648, -1),\n            |  try_divide(-9223372036854775808, -1),\n            |  try_divide(DECIMAL('9999999999999999999999999999'), 0.1)\n            |  from tbl\n            |\"\"\".stripMargin)\n    }\n  }\n\n  test(\"try_integral_divide overflow cases\") {\n    val data = Seq((15121991, 0))\n    withParquetTable(data, \"tbl\") {\n      checkSparkAnswerAndOperator(\"SELECT try_divide(_1, _2) FROM 
tbl\")\n      checkSparkAnswerAndOperator(\"\"\"\n                                    |SELECT try_divide(-128, -1),\n                                    |try_divide(-32768, -1),\n                                    |try_divide(-2147483648, -1),\n                                    |try_divide(-9223372036854775808, -1),\n                                    |try_divide(CAST(99999 AS DECIMAL(5,0)), CAST(0.0001 AS DECIMAL(5,4)))\n                                    |from tbl\n                                    |\"\"\".stripMargin)\n    }\n  }\n\n  test(\"dictionary arithmetic\") {\n    // TODO: test ANSI mode\n    withSQLConf(SQLConf.ANSI_ENABLED.key -> \"false\", \"parquet.enable.dictionary\" -> \"true\") {\n      withParquetTable((0 until 10).map(i => (i % 5, i % 3)), \"tbl\") {\n        checkSparkAnswerAndOperator(\"SELECT _1 + _2, _1 - _2, _1 * _2, _1 / _2, _1 % _2 FROM tbl\")\n      }\n    }\n  }\n\n  test(\"dictionary arithmetic with scalar\") {\n    withSQLConf(\"parquet.enable.dictionary\" -> \"true\") {\n      withParquetTable((0 until 10).map(i => (i % 5, i % 3)), \"tbl\") {\n        checkSparkAnswerAndOperator(\"SELECT _1 + 1, _1 - 1, _1 * 2, _1 / 2, _1 % 2 FROM tbl\")\n      }\n    }\n  }\n\n  test(\"string type and substring\") {\n    withParquetTable((0 until 5).map(i => (i.toString, (i + 100).toString)), \"tbl\") {\n      checkSparkAnswerAndOperator(\"SELECT _1, substring(_2, 2, 2) FROM tbl\")\n      checkSparkAnswerAndOperator(\"SELECT _1, substring(_2, 2, -2) FROM tbl\")\n      checkSparkAnswerAndOperator(\"SELECT _1, substring(_2, -2, 2) FROM tbl\")\n      checkSparkAnswerAndOperator(\"SELECT _1, substring(_2, -2, -2) FROM tbl\")\n      checkSparkAnswerAndOperator(\"SELECT _1, substring(_2, -2, 10) FROM tbl\")\n      checkSparkAnswerAndOperator(\"SELECT _1, substring(_2, 0, 0) FROM tbl\")\n      checkSparkAnswerAndOperator(\"SELECT _1, substring(_2, 1, 0) FROM tbl\")\n    }\n  }\n\n  test(\"substring with start < 1\") {\n    withTempPath { _ =>\n      withTable(\"t\") {\n        sql(\"create table t (col string) using parquet\")\n        sql(\"insert into t values('123456')\")\n        checkSparkAnswerAndOperator(sql(\"select substring(col, 0) from t\"))\n        checkSparkAnswerAndOperator(sql(\"select substring(col, -1) from t\"))\n      }\n    }\n  }\n\n  test(\"substring with dictionary\") {\n    val data = (0 until 1000)\n      .map(_ % 5) // reduce value space to trigger dictionary encoding\n      .map(i => (i.toString, (i + 100).toString))\n    withParquetTable(data, \"tbl\") {\n      checkSparkAnswerAndOperator(\"SELECT _1, substring(_2, 2, 2) FROM tbl\")\n    }\n  }\n\n  test(\"hour, minute, second\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        val expected = makeRawTimeParquetFile(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        readParquetFile(path.toString) { df =>\n          val query = df.select(expr(\"hour(_1)\"), expr(\"minute(_1)\"), expr(\"second(_1)\"))\n\n          checkAnswer(\n            query,\n            expected.map {\n              case None =>\n                Row(null, null, null)\n              case Some(i) =>\n                val timestamp = new java.sql.Timestamp(i).toLocalDateTime\n                val hour = timestamp.getHour\n                val minute = timestamp.getMinute\n                val second = timestamp.getSecond\n\n                Row(hour, minute, second)\n            })\n        }\n      }\n    }\n  
}\n\n  test(\"time expressions folded on jvm\") {\n    val ts = \"1969-12-31 16:23:45\"\n\n    val functions = Map(\"hour\" -> 16, \"minute\" -> 23, \"second\" -> 45)\n\n    functions.foreach { case (func, expectedValue) =>\n      val query = s\"SELECT $func('$ts') AS result\"\n      val df = spark.sql(query)\n      val optimizedPlan = df.queryExecution.optimizedPlan\n\n      val isFolded = optimizedPlan.expressions.exists {\n        case alias: Alias =>\n          alias.child match {\n            case Literal(value, _) => value == expectedValue\n            case _ => false\n          }\n        case _ => false\n      }\n\n      assert(isFolded, s\"Expected '$func(...)' to be constant-folded to Literal($expectedValue)\")\n    }\n  }\n\n  test(\"hour on int96 timestamp column\") {\n    import testImplicits._\n\n    val N = 100\n    val ts = \"2020-01-01 01:02:03.123456\"\n    Seq(true, false).foreach { dictionaryEnabled =>\n      Seq(false, true).foreach { conversionEnabled =>\n        withSQLConf(\n          SQLConf.PARQUET_OUTPUT_TIMESTAMP_TYPE.key -> \"INT96\",\n          SQLConf.PARQUET_INT96_TIMESTAMP_CONVERSION.key -> conversionEnabled.toString) {\n          withTempPath { path =>\n            Seq\n              .tabulate(N)(_ => ts)\n              .toDF(\"ts1\")\n              .select($\"ts1\".cast(\"timestamp\").as(\"ts\"))\n              .repartition(1)\n              .write\n              .option(\"parquet.enable.dictionary\", dictionaryEnabled)\n              .parquet(path.getCanonicalPath)\n\n            checkAnswer(\n              spark.read.parquet(path.getCanonicalPath).select(expr(\"hour(ts)\")),\n              Seq.tabulate(N)(_ => Row(1)))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"cast timestamp and timestamp_ntz\") {\n    withSQLConf(\n      SESSION_LOCAL_TIMEZONE.key -> \"Asia/Kathmandu\",\n      CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"timestamp_trunc.parquet\")\n          makeRawTimeParquetFile(path, dictionaryEnabled = dictionaryEnabled, 10000)\n          withParquetTable(path.toString, \"timetbl\") {\n            checkSparkAnswerAndOperator(\n              \"SELECT \" +\n                \"cast(_2 as timestamp) tz_millis, \" +\n                \"cast(_3 as timestamp) ntz_millis, \" +\n                \"cast(_4 as timestamp) tz_micros, \" +\n                \"cast(_5 as timestamp) ntz_micros \" +\n                \" from timetbl\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"cast timestamp and timestamp_ntz to string\") {\n    withSQLConf(\n      SESSION_LOCAL_TIMEZONE.key -> \"Asia/Kathmandu\",\n      CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"timestamp_trunc.parquet\")\n          makeRawTimeParquetFile(path, dictionaryEnabled = dictionaryEnabled, 2001)\n          withParquetTable(path.toString, \"timetbl\") {\n            checkSparkAnswerAndOperator(\n              \"SELECT \" +\n                \"cast(_2 as string) tz_millis, \" +\n                \"cast(_3 as string) ntz_millis, \" +\n                \"cast(_4 as string) tz_micros, \" +\n                \"cast(_5 as string) ntz_micros \" +\n                \" from timetbl\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"cast timestamp and timestamp_ntz 
to long, date\") {\n    withSQLConf(\n      SESSION_LOCAL_TIMEZONE.key -> \"Asia/Kathmandu\",\n      CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"timestamp_trunc.parquet\")\n          makeRawTimeParquetFile(path, dictionaryEnabled = dictionaryEnabled, 10000)\n          withParquetTable(path.toString, \"timetbl\") {\n            checkSparkAnswerAndOperator(\n              \"SELECT \" +\n                \"cast(_2 as long) tz_millis, \" +\n                \"cast(_4 as long) tz_micros, \" +\n                \"cast(_2 as date) tz_millis_to_date, \" +\n                \"cast(_3 as date) ntz_millis_to_date, \" +\n                \"cast(_4 as date) tz_micros_to_date, \" +\n                \"cast(_5 as date) ntz_micros_to_date \" +\n                \" from timetbl\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"trunc\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"date_trunc.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        withParquetTable(path.toString, \"tbl\") {\n          Seq(\"YEAR\", \"YYYY\", \"YY\", \"QUARTER\", \"MON\", \"MONTH\", \"MM\", \"WEEK\").foreach { format =>\n            checkSparkAnswerAndOperator(s\"SELECT trunc(_20, '$format') from tbl\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"trunc with format array\") {\n    val numRows = 1000\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"date_trunc_with_format.parquet\")\n        makeDateTimeWithFormatTable(path, dictionaryEnabled = dictionaryEnabled, numRows)\n        withParquetTable(path.toString, \"dateformattbl\") {\n          withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[TruncDate]) -> \"true\") {\n            checkSparkAnswerAndOperator(\n              \"SELECT \" +\n                \"dateformat, _7, \" +\n                \"trunc(_7, dateformat) \" +\n                \" from dateformattbl \")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"date_trunc\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[TruncTimestamp]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"timestamp_trunc.parquet\")\n          makeRawTimeParquetFile(path, dictionaryEnabled = dictionaryEnabled, 10000)\n          withParquetTable(path.toString, \"timetbl\") {\n            Seq(\n              \"YEAR\",\n              \"YYYY\",\n              \"YY\",\n              \"MON\",\n              \"MONTH\",\n              \"MM\",\n              \"QUARTER\",\n              \"WEEK\",\n              \"DAY\",\n              \"DD\",\n              \"HOUR\",\n              \"MINUTE\",\n              \"SECOND\",\n              \"MILLISECOND\",\n              \"MICROSECOND\").foreach { format =>\n              checkSparkAnswerAndOperator(\n                \"SELECT \" +\n                  s\"date_trunc('$format', _0), \" +\n                  s\"date_trunc('$format', _1), \" +\n                  s\"date_trunc('$format', _2), \" +\n                  s\"date_trunc('$format', _4) \" +\n                  \" from timetbl\")\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"date_trunc with 
timestamp_ntz\") {\n    withSQLConf(\n      CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\",\n      CometConf.getExprAllowIncompatConfigKey(classOf[TruncTimestamp]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"timestamp_trunc.parquet\")\n          makeRawTimeParquetFile(path, dictionaryEnabled = dictionaryEnabled, 10000)\n          withParquetTable(path.toString, \"timetbl\") {\n            Seq(\n              \"YEAR\",\n              \"YYYY\",\n              \"YY\",\n              \"MON\",\n              \"MONTH\",\n              \"MM\",\n              \"QUARTER\",\n              \"WEEK\",\n              \"DAY\",\n              \"DD\",\n              \"HOUR\",\n              \"MINUTE\",\n              \"SECOND\",\n              \"MILLISECOND\",\n              \"MICROSECOND\").foreach { format =>\n              checkSparkAnswerAndOperator(\n                \"SELECT \" +\n                  s\"date_trunc('$format', _3), \" +\n                  s\"date_trunc('$format', _5)  \" +\n                  \" from timetbl\")\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"date_trunc with format array\") {\n    val numRows = 1000\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"timestamp_trunc_with_format.parquet\")\n        makeDateTimeWithFormatTable(path, dictionaryEnabled = dictionaryEnabled, numRows)\n        withParquetTable(path.toString, \"timeformattbl\") {\n          withSQLConf(\n            CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\",\n            CometConf.getExprAllowIncompatConfigKey(classOf[TruncTimestamp]) -> \"true\") {\n            checkSparkAnswerAndOperator(\n              \"SELECT \" +\n                \"format, _0, _1, _2, _3, _4, _5, \" +\n                \"date_trunc(format, _0), \" +\n                \"date_trunc(format, _1), \" +\n                \"date_trunc(format, _2), \" +\n                \"date_trunc(format, _3), \" +\n                \"date_trunc(format, _4), \" +\n                \"date_trunc(format, _5) \" +\n                \" from timeformattbl \")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"date_trunc on int96 timestamp column\") {\n    import testImplicits._\n\n    val N = 100\n    val ts = \"2020-01-01 01:02:03.123456\"\n    Seq(true, false).foreach { dictionaryEnabled =>\n      Seq(false, true).foreach { conversionEnabled =>\n        withSQLConf(\n          SQLConf.PARQUET_OUTPUT_TIMESTAMP_TYPE.key -> \"INT96\",\n          SQLConf.PARQUET_INT96_TIMESTAMP_CONVERSION.key -> conversionEnabled.toString,\n          CometConf.getExprAllowIncompatConfigKey(classOf[TruncTimestamp]) -> \"true\") {\n          withTempPath { path =>\n            Seq\n              .tabulate(N)(_ => ts)\n              .toDF(\"ts1\")\n              .select($\"ts1\".cast(\"timestamp\").as(\"ts\"))\n              .repartition(1)\n              .write\n              .option(\"parquet.enable.dictionary\", dictionaryEnabled)\n              .parquet(path.getCanonicalPath)\n\n            withParquetTable(path.toString, \"int96timetbl\") {\n              Seq(\n                \"YEAR\",\n                \"YYYY\",\n                \"YY\",\n                \"MON\",\n                \"MONTH\",\n                \"MM\",\n                \"QUARTER\",\n                \"WEEK\",\n                \"DAY\",\n                \"DD\",\n             
   \"HOUR\",\n                \"MINUTE\",\n                \"SECOND\",\n                \"MILLISECOND\",\n                \"MICROSECOND\").foreach { format =>\n                val sql = \"SELECT \" +\n                  s\"date_trunc('$format', ts )\" +\n                  \" from int96timetbl\"\n\n                if (conversionEnabled) {\n                  // plugin is disabled if PARQUET_INT96_TIMESTAMP_CONVERSION is true\n                  checkSparkAnswer(sql)\n                } else {\n                  checkSparkAnswerAndOperator(sql)\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"charvarchar\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"char_tbl4\"\n        withTable(table) {\n          val view = \"str_view\"\n          withView(view) {\n            sql(s\"\"\"create temporary view $view as select c, v from values\n                   | (null, null), (null, null),\n                   | (null, 'S'), (null, 'S'),\n                   | ('N', 'N '), ('N', 'N '),\n                   | ('Ne', 'Sp'), ('Ne', 'Sp'),\n                   | ('Net  ', 'Spa  '), ('Net  ', 'Spa  '),\n                   | ('NetE', 'Spar'), ('NetE', 'Spar'),\n                   | ('NetEa ', 'Spark '), ('NetEa ', 'Spark '),\n                   | ('NetEas ', 'Spark'), ('NetEas ', 'Spark'),\n                   | ('NetEase', 'Spark-'), ('NetEase', 'Spark-') t(c, v);\"\"\".stripMargin)\n            sql(\n              s\"create table $table(c7 char(7), c8 char(8), v varchar(6), s string) using parquet;\")\n            sql(s\"insert into $table select c, c, v, c from $view;\")\n            val df = sql(s\"\"\"select substring(c7, 2), substring(c8, 2),\n                            | substring(v, 3), substring(s, 2) from $table;\"\"\".stripMargin)\n\n            val expected = Row(\"      \", \"       \", \"\", \"\") ::\n              Row(null, null, \"\", null) :: Row(null, null, null, null) ::\n              Row(\"e     \", \"e      \", \"\", \"e\") :: Row(\"et    \", \"et     \", \"a  \", \"et  \") ::\n              Row(\"etE   \", \"etE    \", \"ar\", \"etE\") ::\n              Row(\"etEa  \", \"etEa   \", \"ark \", \"etEa \") ::\n              Row(\"etEas \", \"etEas  \", \"ark\", \"etEas \") ::\n              Row(\"etEase\", \"etEase \", \"ark-\", \"etEase\") :: Nil\n            checkAnswer(df, expected ::: expected)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"char varchar over length values\") {\n    Seq(\"char\", \"varchar\").foreach { typ =>\n      withTempPath { dir =>\n        withTable(\"t\") {\n          sql(\"select '123456' as col\").write.format(\"parquet\").save(dir.toString)\n          sql(s\"create table t (col $typ(2)) using parquet location '$dir'\")\n          sql(\"insert into t values('1')\")\n          checkSparkAnswerAndOperator(sql(\"select substring(col, 1) from t\"))\n          checkSparkAnswerAndOperator(sql(\"select substring(col, 0) from t\"))\n          checkSparkAnswerAndOperator(sql(\"select substring(col, -1) from t\"))\n        }\n      }\n    }\n  }\n\n  test(\"like (LikeSimplification enabled)\") {\n    val table = \"names\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      sql(s\"insert into $table values(1,'James Smith')\")\n      sql(s\"insert into $table values(2,'Michael Rose')\")\n      sql(s\"insert into $table values(3,'Robert Williams')\")\n      
sql(s\"insert into $table values(4,'Rames Rose')\")\n      sql(s\"insert into $table values(5,'Rames rose')\")\n\n      // Filter column having values 'Rames _ose', where any character matches for '_'\n      val query = sql(s\"select id from $table where name like 'Rames _ose'\")\n      checkAnswer(query, Row(4) :: Row(5) :: Nil)\n\n      // Filter rows that contains 'rose' in 'name' column\n      val queryContains = sql(s\"select id from $table where name like '%rose%'\")\n      checkAnswer(queryContains, Row(5) :: Nil)\n\n      // Filter rows that starts with 'R' following by any characters\n      val queryStartsWith = sql(s\"select id from $table where name like 'R%'\")\n      checkAnswer(queryStartsWith, Row(3) :: Row(4) :: Row(5) :: Nil)\n\n      // Filter rows that ends with 's' following by any characters\n      val queryEndsWith = sql(s\"select id from $table where name like '%s'\")\n      checkAnswer(queryEndsWith, Row(3) :: Nil)\n    }\n  }\n\n  test(\"like with custom escape\") {\n    val table = \"names\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      sql(s\"insert into $table values(1,'James Smith')\")\n      sql(s\"insert into $table values(2,'Michael_Rose')\")\n      sql(s\"insert into $table values(3,'Robert_R_Williams')\")\n\n      // Filter column having values that include underscores\n      val queryDefaultEscape = sql(\"select id from names where name like '%\\\\_%'\")\n      checkSparkAnswerAndOperator(queryDefaultEscape)\n\n      val queryCustomEscape = sql(\"select id from names where name like '%$_%' escape '$'\")\n      checkAnswer(queryCustomEscape, Row(2) :: Row(3) :: Nil)\n\n    }\n  }\n\n  test(\"rlike simple case\") {\n    val table = \"rlike_names\"\n    Seq(false, true).foreach { withDictionary =>\n      val data = Seq(\"James Smith\", \"Michael Rose\", \"Rames Rose\", \"Rames rose\") ++\n        // add repetitive data to trigger dictionary encoding\n        Range(0, 100).map(_ => \"John Smith\")\n      withParquetFile(data.zipWithIndex, withDictionary) { file =>\n        withSQLConf(CometConf.getExprAllowIncompatConfigKey(\"regexp\") -> \"true\") {\n          spark.read.parquet(file).createOrReplaceTempView(table)\n          val query = sql(s\"select _2 as id, _1 rlike 'R[a-z]+s [Rr]ose' from $table\")\n          checkSparkAnswerAndOperator(query)\n        }\n      }\n    }\n  }\n\n  test(\"withInfo\") {\n    val table = \"with_info\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      sql(s\"insert into $table values(1,'James Smith')\")\n      val query = sql(s\"select cast(id as string) from $table\")\n      val (_, cometPlan) = checkSparkAnswerAndOperator(query)\n      val project = stripAQEPlan(cometPlan).collectFirst { case p: CometProjectExec => p }.get\n      val id = project.expressions.head\n      CometSparkSessionExtensions.withInfo(id, \"reason 1\")\n      CometSparkSessionExtensions.withInfo(project, \"reason 2\")\n      CometSparkSessionExtensions.withInfo(project, \"reason 3\", id)\n      CometSparkSessionExtensions.withInfo(project, id)\n      CometSparkSessionExtensions.withInfo(project, \"reason 4\")\n      CometSparkSessionExtensions.withInfo(project, \"reason 5\", id)\n      CometSparkSessionExtensions.withInfo(project, id)\n      CometSparkSessionExtensions.withInfo(project, \"reason 6\")\n      val explain = new ExtendedExplainInfo().generateExtendedInfo(project)\n      for (i <- 1 until 7) {\n        
assert(explain.contains(s\"reason $i\"))\n      }\n    }\n  }\n\n  test(\"rlike fallback for non scalar pattern\") {\n    val table = \"rlike_fallback\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      sql(s\"insert into $table values(1,'James Smith')\")\n      withSQLConf(CometConf.getExprAllowIncompatConfigKey(\"regexp\") -> \"true\") {\n        val query2 = sql(s\"select id from $table where name rlike name\")\n        val (_, cometPlan) = checkSparkAnswer(query2)\n        val explain = new ExtendedExplainInfo().generateExtendedInfo(cometPlan)\n        assert(explain.contains(\"Only scalar regexp patterns are supported\"))\n      }\n    }\n  }\n\n  test(\"rlike whitespace\") {\n    val table = \"rlike_whitespace\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      val values =\n        Seq(\"James Smith\", \"\\rJames\\rSmith\\r\", \"\\nJames\\nSmith\\n\", \"\\r\\nJames\\r\\nSmith\\r\\n\")\n      values.zipWithIndex.foreach { x =>\n        sql(s\"insert into $table values (${x._2}, '${x._1}')\")\n      }\n      val patterns = Seq(\n        \"James\",\n        \"J[a-z]mes\",\n        \"^James\",\n        \"\\\\AJames\",\n        \"Smith\",\n        \"James$\",\n        \"James\\\\Z\",\n        \"James\\\\z\",\n        \"^Smith\",\n        \"\\\\ASmith\",\n        // $ produces different results - we could potentially transpile this to a different\n        // expression or just fall back to Spark for this case\n        // \"Smith$\",\n        \"Smith\\\\Z\",\n        \"Smith\\\\z\")\n      withSQLConf(CometConf.getExprAllowIncompatConfigKey(\"regexp\") -> \"true\") {\n        patterns.foreach { pattern =>\n          val query2 = sql(s\"select name, '$pattern', name rlike '$pattern' from $table\")\n          checkSparkAnswerAndOperator(query2)\n        }\n      }\n    }\n  }\n\n  test(\"rlike\") {\n    val table = \"rlike_fuzz\"\n    val gen = new DataGenerator(new Random(42))\n    withTable(table) {\n      // generate some data\n      // newline characters are intentionally omitted for now\n      val dataChars = \"\\t abc123\"\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      gen.generateStrings(100, dataChars, 6).zipWithIndex.foreach { x =>\n        sql(s\"insert into $table values(${x._2}, '${x._1}')\")\n      }\n\n      // test some common cases - this is far from comprehensive\n      // see https://docs.oracle.com/javase/8/docs/api/java/util/regex/Pattern.html\n      // for all valid patterns in Java's regexp engine\n      //\n      // patterns not currently covered:\n      // - octal values\n      // - hex values\n      // - specific character matches\n      // - specific whitespace/newline matches\n      // - complex character classes (union, intersection, subtraction)\n      // - POSIX character classes\n      // - java.lang.Character classes\n      // - Classes for Unicode scripts, blocks, categories and binary properties\n      // - reluctant quantifiers\n      // - possessive quantifiers\n      // - logical operators\n      // - back-references\n      // - quotations\n      // - special constructs (name capturing and non-capturing)\n      val startPatterns = Seq(\"\", \"^\", \"\\\\A\")\n      val endPatterns = Seq(\"\", \"$\", \"\\\\Z\", \"\\\\z\")\n      val patternParts = Seq(\n        \"[0-9]\",\n        \"[a-z]\",\n        \"[^a-z]\",\n        \"\\\\d\",\n        \"\\\\D\",\n        \"\\\\w\",\n        \"\\\\W\",\n        
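// \\b/\\B are word-boundary anchors; \\h/\\H and \\v/\\V are Java's horizontal and\n        // vertical whitespace classes\n        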
\"\\\\b\",\n        \"\\\\B\",\n        \"\\\\h\",\n        \"\\\\H\",\n        \"\\\\s\",\n        \"\\\\S\",\n        \"\\\\v\",\n        \"\\\\V\")\n      val qualifiers = Seq(\"\", \"+\", \"*\", \"?\", \"{1,}\")\n\n      withSQLConf(CometConf.getExprAllowIncompatConfigKey(\"regexp\") -> \"true\") {\n        // testing every possible combination takes too long, so we pick some\n        // random combinations\n        for (_ <- 0 until 100) {\n          val pattern = gen.pickRandom(startPatterns) +\n            gen.pickRandom(patternParts) +\n            gen.pickRandom(qualifiers) +\n            gen.pickRandom(endPatterns)\n          val query = sql(s\"select id, name, name rlike '$pattern' from $table\")\n          checkSparkAnswerAndOperator(query)\n        }\n      }\n    }\n  }\n\n  test(\"contains\") {\n    val table = \"names\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      sql(s\"insert into $table values(1,'James Smith')\")\n      sql(s\"insert into $table values(2,'Michael Rose')\")\n      sql(s\"insert into $table values(3,'Robert Williams')\")\n      sql(s\"insert into $table values(4,'Rames Rose')\")\n      sql(s\"insert into $table values(5,'Rames rose')\")\n\n      // Filter rows that contains 'rose' in 'name' column\n      val queryContains = sql(s\"select id from $table where contains (name, 'rose')\")\n      checkSparkAnswerAndOperator(queryContains)\n\n      // Additional test cases for optimized contains implementation\n      // Test with empty pattern (should match all non-null rows)\n      val queryEmptyPattern = sql(s\"select id from $table where contains (name, '')\")\n      checkSparkAnswerAndOperator(queryEmptyPattern)\n\n      // Test with pattern not found\n      val queryNotFound = sql(s\"select id from $table where contains (name, 'xyz')\")\n      checkSparkAnswerAndOperator(queryNotFound)\n\n      // Test with pattern at start\n      val queryStart = sql(s\"select id from $table where contains (name, 'James')\")\n      checkSparkAnswerAndOperator(queryStart)\n\n      // Test with pattern at end\n      val queryEnd = sql(s\"select id from $table where contains (name, 'Smith')\")\n      checkSparkAnswerAndOperator(queryEnd)\n\n      // Test with null haystack\n      sql(s\"insert into $table values(6, null)\")\n      checkSparkAnswerAndOperator(sql(s\"select id, contains(name, 'Rose') from $table\"))\n\n      // Test case sensitivity (should not match)\n      checkSparkAnswerAndOperator(sql(s\"select id from $table where contains(name, 'james')\"))\n    }\n  }\n\n  test(\"contains with both columns\") {\n    withParquetTable(\n      Seq((\"hello world\", \"world\"), (\"foo bar\", \"baz\"), (\"abc\", \"\"), (null, \"x\"), (\"test\", null)),\n      \"tbl\") {\n      checkSparkAnswerAndOperator(sql(\"select contains(_1, _2) from tbl\"))\n    }\n  }\n\n  test(\"startswith\") {\n    val table = \"names\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      sql(s\"insert into $table values(1,'James Smith')\")\n      sql(s\"insert into $table values(2,'Michael Rose')\")\n      sql(s\"insert into $table values(3,'Robert Williams')\")\n      sql(s\"insert into $table values(4,'Rames Rose')\")\n      sql(s\"insert into $table values(5,'Rames rose')\")\n\n      // Filter rows that starts with 'R' following by any characters\n      val queryStartsWith = sql(s\"select id from $table where startswith (name, 'R')\")\n      checkAnswer(queryStartsWith, Row(3) :: 
Row(4) :: Row(5) :: Nil)\n    }\n  }\n\n  test(\"endswith\") {\n    val table = \"names\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      sql(s\"insert into $table values(1,'James Smith')\")\n      sql(s\"insert into $table values(2,'Michael Rose')\")\n      sql(s\"insert into $table values(3,'Robert Williams')\")\n      sql(s\"insert into $table values(4,'Rames Rose')\")\n      sql(s\"insert into $table values(5,'Rames rose')\")\n\n      // Filter rows whose name ends with 's'\n      val queryEndsWith = sql(s\"select id from $table where endswith (name, 's')\")\n      checkAnswer(queryEndsWith, Row(3) :: Nil)\n    }\n  }\n\n  test(\"add overflow (ANSI disable)\") {\n    // Enabling ANSI would cause a native engine failure, but since we cannot catch\n    // native errors yet, we cannot test that here.\n    withSQLConf(SQLConf.ANSI_ENABLED.key -> \"false\") {\n      withParquetTable(Seq((Int.MaxValue, 1)), \"tbl\") {\n        checkSparkAnswerAndOperator(\"SELECT _1 + _2 FROM tbl\")\n      }\n    }\n  }\n\n  test(\"divide by zero (ANSI disable)\") {\n    // Enabling ANSI would cause a native engine failure, but since we cannot catch\n    // native errors yet, we cannot test that here.\n    withSQLConf(SQLConf.ANSI_ENABLED.key -> \"false\") {\n      withParquetTable(Seq((1, 0, 1.0, 0.0, -0.0)), \"tbl\") {\n        checkSparkAnswerAndOperator(\"SELECT _1 / _2, _3 / _4, _3 / _5 FROM tbl\")\n        checkSparkAnswerAndOperator(\"SELECT _1 % _2, _3 % _4, _3 % _5 FROM tbl\")\n        checkSparkAnswerAndOperator(\"SELECT _1 / 0, _3 / 0.0, _3 / -0.0 FROM tbl\")\n        checkSparkAnswerAndOperator(\"SELECT _1 % 0, _3 % 0.0, _3 % -0.0 FROM tbl\")\n      }\n    }\n  }\n\n  test(\"decimals arithmetic and comparison\") {\n    def makeDecimalRDD(num: Int, decimal: DecimalType, useDictionary: Boolean): DataFrame = {\n      val div = if (useDictionary) 5 else num // narrow the space to make it dictionary encoded\n      spark\n        .range(num)\n        .map(_ % div)\n        // Parquet doesn't allow column names with spaces, so we add aliases here.\n        // Minus 500 here so that negative decimals are also tested.\n        .select(\n          (($\"value\" - 500) / 100.0) cast decimal as Symbol(\"dec1\"),\n          (($\"value\" - 600) / 100.0) cast decimal as Symbol(\"dec2\"))\n        .coalesce(1)\n    }\n\n    Seq(true, false).foreach { dictionary =>\n      Seq(16, 1024).foreach { batchSize =>\n        withSQLConf(\n          CometConf.COMET_BATCH_SIZE.key -> batchSize.toString,\n          SQLConf.PARQUET_WRITE_LEGACY_FORMAT.key -> \"false\",\n          \"parquet.enable.dictionary\" -> dictionary.toString) {\n          var combinations = Seq((5, 2), (1, 0), (18, 10), (18, 17), (19, 0), (38, 37))\n          // If ANSI mode is on, the combination (1, 1) will cause a runtime error. 
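DECIMAL(1, 1)\n          // only holds values strictly between -1 and 1, while the generated values are around\n          // -5 and -6, so the ANSI cast overflows. 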
Otherwise, the\n          // decimal RDD contains all null values and should be able to read back from Parquet.\n\n          if (!SQLConf.get.ansiEnabled) {\n            combinations = combinations ++ Seq((1, 1))\n          }\n\n          for ((precision, scale) <- combinations) {\n            withTempPath { dir =>\n              val data = makeDecimalRDD(10, DecimalType(precision, scale), dictionary)\n              data.write.parquet(dir.getCanonicalPath)\n              readParquetFile(dir.getCanonicalPath) { df =>\n                {\n                  val decimalLiteral1 = Decimal(1.00)\n                  val decimalLiteral2 = Decimal(123.456789)\n                  val cometDf = df.select(\n                    $\"dec1\" + $\"dec2\",\n                    $\"dec1\" - $\"dec2\",\n                    $\"dec1\" / $\"dec2\",\n                    $\"dec1\" % $\"dec2\",\n                    $\"dec1\" >= $\"dec1\",\n                    $\"dec1\" === \"1.0\",\n                    $\"dec1\" + decimalLiteral1,\n                    $\"dec1\" - decimalLiteral1,\n                    $\"dec1\" + decimalLiteral2,\n                    $\"dec1\" - decimalLiteral2)\n\n                  checkAnswer(\n                    cometDf,\n                    data\n                      .select(\n                        $\"dec1\" + $\"dec2\",\n                        $\"dec1\" - $\"dec2\",\n                        $\"dec1\" / $\"dec2\",\n                        $\"dec1\" % $\"dec2\",\n                        $\"dec1\" >= $\"dec1\",\n                        $\"dec1\" === \"1.0\",\n                        $\"dec1\" + decimalLiteral1,\n                        $\"dec1\" - decimalLiteral1,\n                        $\"dec1\" + decimalLiteral2,\n                        $\"dec1\" - decimalLiteral2)\n                      .collect()\n                      .toSeq)\n                  checkSparkSchema(cometDf)\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"scalar decimal arithmetic operations\") {\n    withTable(\"tbl\") {\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"true\") {\n        sql(\"CREATE TABLE tbl (a INT) USING PARQUET\")\n        sql(\"INSERT INTO tbl VALUES (0)\")\n\n        val combinations = Seq((7, 3), (18, 10), (27, 2), (38, 4))\n        for ((precision, scale) <- combinations) {\n          for (op <- Seq(\"+\", \"-\", \"*\", \"/\", \"%\")) {\n            val left = s\"CAST(1.00 AS DECIMAL($precision, $scale))\"\n            val right = s\"CAST(123.45 AS DECIMAL($precision, $scale))\"\n\n            withSQLConf(\n              \"spark.sql.optimizer.excludedRules\" ->\n                \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n              val df = sql(s\"SELECT $left $op $right FROM tbl\")\n              checkSparkSchema(df)\n              checkSparkAnswerAndOperator(df)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"decimal division result type matches Spark\") {\n    // Regression test for Comet applying DecimalPrecision.promote() on Spark 4, which overrides\n    // Spark's computed result type. 
For example, decimal(27,2)/decimal(27,2) should produce\n    // decimal(38,20) per Spark 4 semantics, but promote() would give decimal(38,11).\n    // checkSparkAnswerAndOperator verifies both the schema and the numeric values match.\n    withTable(\"tbl\") {\n      sql(\"CREATE TABLE tbl (a INT) USING PARQUET\")\n      sql(\"INSERT INTO tbl VALUES (1)\")\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        \"spark.sql.optimizer.excludedRules\" ->\n          \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n        // (27, 2) hits the overflow-adjustment path where promote() and Spark 4 diverge\n        val combinations = Seq((5, 2), (18, 10), (27, 2), (38, 4))\n        for ((p, s) <- combinations) {\n          val df =\n            sql(s\"SELECT CAST(1.00 AS DECIMAL($p, $s)) / CAST(3.00 AS DECIMAL($p, $s)) FROM tbl\")\n          checkSparkSchema(df)\n          checkSparkAnswerAndOperator(df)\n        }\n      }\n    }\n  }\n\n  test(\"scalar decimal overflow - legacy mode produces null\") {\n    // 1.1e19 * 1.1e19 = 1.21e38 fits in i128 (max ~1.7e38) but exceeds DECIMAL(38,0)'s\n    // max of 10^38-1, so CheckOverflow nulls the result in legacy (non-ANSI) mode.\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"true\", SQLConf.ANSI_ENABLED.key -> \"false\") {\n      withParquetTable(Seq((BigDecimal(\"11000000000000000000\"), 0)), \"tbl\") {\n        checkSparkAnswerAndOperator(\"SELECT _1 * _1 FROM tbl\")\n      }\n    }\n  }\n\n  test(\"scalar decimal overflow - ANSI mode throws ArithmeticException\") {\n    // 1.1e19 * 1.1e19 = 1.21e38 overflows DECIMAL(38,0). With ANSI mode on, both Spark and\n    // Comet must throw — Comet must not panic or silently return null. Spark reports\n    // NUMERIC_VALUE_OUT_OF_RANGE; Comet's WideDecimalBinaryExpr catches the overflow first\n    // and surfaces it as an arithmetic overflow error.\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"true\", SQLConf.ANSI_ENABLED.key -> \"true\") {\n      withParquetTable(Seq((BigDecimal(\"11000000000000000000\"), 0)), \"tbl\") {\n        val res = sql(\"SELECT _1 * _1 FROM tbl\")\n        checkSparkAnswerMaybeThrows(res) match {\n          case (Some(sparkExc), Some(cometExc)) =>\n            assert(sparkExc.getMessage.contains(\"NUMERIC_VALUE_OUT_OF_RANGE\"))\n            assert(cometExc.getMessage.toLowerCase.contains(\"overflow\"))\n          case _ =>\n            fail(\"Expected exception for decimal overflow in ANSI mode\")\n        }\n      }\n    }\n  }\n\n  test(\"cast decimals to int\") {\n    Seq(16, 1024).foreach { batchSize =>\n      withSQLConf(\n        CometConf.COMET_BATCH_SIZE.key -> batchSize.toString,\n        SQLConf.PARQUET_WRITE_LEGACY_FORMAT.key -> \"false\") {\n        var combinations = Seq((5, 2), (1, 0), (18, 10), (18, 17), (19, 0), (38, 37))\n        // If ANSI mode is on, the combination (1, 1) will cause a runtime error. 
Otherwise, the\n        // decimal RDD contains all null values and should be able to read back from Parquet.\n\n        if (!SQLConf.get.ansiEnabled) {\n          combinations = combinations ++ Seq((1, 1))\n        }\n\n        for ((precision, scale) <- combinations; useDictionary <- Seq(false)) {\n          withTempPath { dir =>\n            val data = makeDecimalRDD(10, DecimalType(precision, scale), useDictionary)\n            data.write.parquet(dir.getCanonicalPath)\n            readParquetFile(dir.getCanonicalPath) { df =>\n              {\n                val cometDf = df.select($\"dec\".cast(\"int\"))\n\n                // `data` is not read from Parquet, so it doesn't go through Comet exec.\n                checkAnswer(cometDf, data.select($\"dec\".cast(\"int\")).collect().toSeq)\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  private val doubleValues: Seq[Double] = Seq(\n    -1.0,\n    -0.0,\n    0.0,\n    +1.0,\n    Double.MinValue,\n    Double.MaxValue,\n    Double.NaN,\n    Double.MinPositiveValue,\n    Double.PositiveInfinity,\n    Double.NegativeInfinity)\n\n  test(\"various math scalar functions\") {\n    withParquetTable(doubleValues.map(n => (n, n)), \"tbl\") {\n      // expressions with single arg\n      for (expr <- Seq(\n          \"acos\",\n          \"asin\",\n          \"atan\",\n          \"cos\",\n          \"cosh\",\n          \"exp\",\n          \"ln\",\n          \"log10\",\n          \"log2\",\n          \"sin\",\n          \"sinh\",\n          \"sqrt\",\n          \"tan\",\n          \"tanh\",\n          \"cot\")) {\n        val (_, cometPlan) =\n          checkSparkAnswerAndOperatorWithTol(sql(s\"SELECT $expr(_1), $expr(_2) FROM tbl\"))\n        val cometProjectExecs = collect(cometPlan) { case op: CometProjectExec =>\n          op\n        }\n        assert(cometProjectExecs.length == 1, expr)\n      }\n    }\n    withParquetTable(doubleValues.flatMap(m => doubleValues.map(n => (m, n))), \"tbl\") {\n      // expressions with two args\n      for (expr <- Seq(\"atan2\", \"pow\")) {\n        val (_, cometPlan) =\n          checkSparkAnswerAndOperatorWithTol(sql(s\"SELECT $expr(_1, _2) FROM tbl\"))\n        val cometProjectExecs = collect(cometPlan) { case op: CometProjectExec =>\n          op\n        }\n        assert(cometProjectExecs.length == 1, expr)\n      }\n    }\n  }\n\n  private def testDoubleScalarExpr(expr: String): Unit = {\n    val testValuesRepeated = doubleValues.flatMap(v => Seq.fill(1000)(v))\n    for (withDictionary <- Seq(true, false)) {\n      withParquetTable(testValuesRepeated.map(n => (n, n)), \"tbl\", withDictionary) {\n        val (_, cometPlan) = checkSparkAnswerAndOperatorWithTol(sql(s\"SELECT $expr(_1) FROM tbl\"))\n        val projections = collect(cometPlan) { case p: CometProjectExec =>\n          p\n        }\n        assert(projections.length == 1)\n      }\n    }\n  }\n\n  test(\"disable expression using dynamic config\") {\n    def countSparkProjectExec(plan: SparkPlan) = {\n      plan.collect { case _: ProjectExec =>\n        true\n      }.length\n    }\n    withParquetTable(Seq(0, 1, 2).map(n => (n, n)), \"tbl\") {\n      val sql = \"select _1+_2 from tbl\"\n      val (_, cometPlan) = checkSparkAnswerAndOperator(sql)\n      assert(0 == countSparkProjectExec(cometPlan))\n      withSQLConf(CometConf.getExprEnabledConfigKey(\"Add\") -> \"false\") {\n        val (_, cometPlan) = checkSparkAnswerAndFallbackReason(\n          sql,\n          \"Expression support is disabled. 
Set spark.comet.expression.Add.enabled=true to enable it.\")\n        assert(1 == countSparkProjectExec(cometPlan))\n      }\n    }\n  }\n\n  test(\"enable incompat expression using dynamic config\") {\n    def countSparkProjectExec(plan: SparkPlan) = {\n      plan.collect { case _: ProjectExec =>\n        true\n      }.length\n    }\n    withParquetTable(Seq(0, 1, 2).map(n => (n.toString, n.toString)), \"tbl\") {\n      val sql = \"select initcap(_1) from tbl\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == countSparkProjectExec(cometPlan))\n      withSQLConf(CometConf.getExprAllowIncompatConfigKey(\"InitCap\") -> \"true\") {\n        val (_, cometPlan) = checkSparkAnswerAndOperator(sql)\n        assert(0 == countSparkProjectExec(cometPlan))\n      }\n    }\n  }\n\n  test(\"signum\") {\n    testDoubleScalarExpr(\"signum\")\n  }\n\n  test(\"expm1\") {\n    testDoubleScalarExpr(\"expm1\")\n  }\n\n  test(\"ceil and floor\") {\n    Seq(\"true\", \"false\").foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary) {\n        withParquetTable(\n          (-5 until 5).map(i => (i.toDouble + 0.3, i.toDouble + 0.8)),\n          \"tbl\",\n          withDictionary = dictionary.toBoolean) {\n          checkSparkAnswerAndOperator(\"SELECT ceil(_1), ceil(_2), floor(_1), floor(_2) FROM tbl\")\n          checkSparkAnswerAndOperator(\n            \"SELECT ceil(0.0), ceil(-0.0), ceil(-0.5), ceil(0.5), ceil(-1.2), ceil(1.2) FROM tbl\")\n          checkSparkAnswerAndOperator(\n            \"SELECT floor(0.0), floor(-0.0), floor(-0.5), floor(0.5), \" +\n              \"floor(-1.2), floor(1.2) FROM tbl\")\n        }\n        withParquetTable(\n          (-5 until 5).map(i => (i.toLong, i.toLong)),\n          \"tbl\",\n          withDictionary = dictionary.toBoolean) {\n          checkSparkAnswerAndOperator(\"SELECT ceil(_1), ceil(_2), floor(_1), floor(_2) FROM tbl\")\n          checkSparkAnswerAndOperator(\n            \"SELECT ceil(0), ceil(-0), ceil(-5), ceil(5), ceil(-1), ceil(1) FROM tbl\")\n          checkSparkAnswerAndOperator(\n            \"SELECT floor(0), floor(-0), floor(-5), floor(5), \" +\n              \"floor(-1), floor(1) FROM tbl\")\n        }\n        withParquetTable(\n          (-33L to 33L by 3L).map(i => Tuple1(Decimal(i, 21, 1))), // -3.3 ~ +3.3\n          \"tbl\",\n          withDictionary = dictionary.toBoolean) {\n          checkSparkAnswerAndOperator(\"SELECT ceil(_1), floor(_1) FROM tbl\")\n          checkSparkAnswerAndOperator(\"SELECT ceil(cast(_1 as decimal(20, 0))) FROM tbl\")\n          checkSparkAnswerAndOperator(\"SELECT floor(cast(_1 as decimal(20, 0))) FROM tbl\")\n          withSQLConf(\n            // Exclude the constant folding optimizer in order to actually execute the native ceil\n            // and floor operations for scalar (literal) values.\n            \"spark.sql.optimizer.excludedRules\" -> \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n            for (n <- Seq(\"0.0\", \"-0.0\", \"0.5\", \"-0.5\", \"1.2\", \"-1.2\")) {\n              checkSparkAnswerAndOperator(s\"SELECT ceil(cast(${n} as decimal(38, 18))) FROM tbl\")\n              checkSparkAnswerAndOperator(s\"SELECT ceil(cast(${n} as decimal(20, 0))) FROM tbl\")\n              checkSparkAnswerAndOperator(s\"SELECT floor(cast(${n} as decimal(38, 18))) FROM tbl\")\n              checkSparkAnswerAndOperator(s\"SELECT floor(cast(${n} as decimal(20, 0))) FROM tbl\")\n            }\n            for (n <- Seq(\"123.45\", \"125.00\", 
\"-129.99\")) {\n              checkSparkAnswerAndOperator(s\"SELECT ceil(cast(${n} as decimal(5, 2))) FROM tbl\")\n              checkSparkAnswerAndOperator(s\"SELECT floor(cast(${n} as decimal(5, 2))) FROM tbl\")\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"md5\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(col String) using parquet\")\n          sql(\n            s\"insert into $table values ('test1'), ('test1'), ('test2'), ('test2'), (NULL), ('')\")\n          checkSparkAnswerAndOperator(s\"select md5(col) FROM $table\")\n        }\n      }\n    }\n  }\n\n  test(\"unhex\") {\n    val table = \"unhex_table\"\n    withTable(table) {\n      sql(s\"create table $table(col string) using parquet\")\n\n      sql(s\"\"\"INSERT INTO $table VALUES\n        |('537061726B2053514C'),\n        |('737472696E67'),\n        |('\\\\0'),\n        |(''),\n        |('###'),\n        |('G123'),\n        |('hello'),\n        |('A1B'),\n        |('0A1B')\"\"\".stripMargin)\n\n      checkSparkAnswerAndOperator(s\"SELECT unhex(col) FROM $table\")\n    }\n  }\n\n  test(\"EqualNullSafe should preserve comet filter\") {\n    Seq(\"true\", \"false\").foreach(b =>\n      withParquetTable(\n        data = (0 until 8).map(i => (i, if (i > 5) None else Some(i % 2 == 0))),\n        tableName = \"tbl\",\n        withDictionary = b.toBoolean) {\n        // IS TRUE\n        Seq(\"SELECT * FROM tbl where _2 is true\", \"SELECT * FROM tbl where _2 <=> true\")\n          .foreach(s => checkSparkAnswerAndOperator(s))\n\n        // IS FALSE\n        Seq(\"SELECT * FROM tbl where _2 is false\", \"SELECT * FROM tbl where _2 <=> false\")\n          .foreach(s => checkSparkAnswerAndOperator(s))\n\n        // IS NOT TRUE\n        Seq(\"SELECT * FROM tbl where _2 is not true\", \"SELECT * FROM tbl where not _2 <=> true\")\n          .foreach(s => checkSparkAnswerAndOperator(s))\n\n        // IS NOT FALSE\n        Seq(\"SELECT * FROM tbl where _2 is not false\", \"SELECT * FROM tbl where not _2 <=> false\")\n          .foreach(s => checkSparkAnswerAndOperator(s))\n      })\n  }\n\n  test(\"test in(set)/not in(set)\") {\n    Seq(\"100\", \"0\").foreach { inSetThreshold =>\n      Seq(false, true).foreach { dictionary =>\n        withSQLConf(\n          SQLConf.OPTIMIZER_INSET_CONVERSION_THRESHOLD.key -> inSetThreshold,\n          \"parquet.enable.dictionary\" -> dictionary.toString) {\n          val table = \"names\"\n          withTable(table) {\n            sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n            sql(\n              s\"insert into $table values(1, 'James'), (1, 'Jones'), (2, 'Smith'), (3, 'Smith'),\" +\n                \"(NULL, 'Jones'), (4, NULL)\")\n\n            checkSparkAnswerAndOperator(s\"SELECT * FROM $table WHERE id in (1, 2, 4, NULL)\")\n            checkSparkAnswerAndOperator(\n              s\"SELECT * FROM $table WHERE name in ('Smith', 'Brown', NULL)\")\n\n            // TODO: why with not in, the plan is only `LocalTableScan`?\n            checkSparkAnswerAndOperator(s\"SELECT * FROM $table WHERE id not in (1)\")\n            checkSparkAnswer(s\"SELECT * FROM $table WHERE name not in ('Smith', 'Brown', NULL)\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"not\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> 
dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(col1 int, col2 boolean) using parquet\")\n          sql(s\"insert into $table values(1, false), (2, true), (3, true), (3, false)\")\n          checkSparkAnswerAndOperator(s\"SELECT col1, col2, NOT(col2), !(col2) FROM $table\")\n        }\n      }\n    }\n  }\n\n  test(\"negative\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(col1 int) using parquet\")\n          sql(s\"insert into $table values(1), (2), (3), (3)\")\n          checkSparkAnswerAndOperator(s\"SELECT negative(col1), -(col1) FROM $table\")\n        }\n      }\n    }\n  }\n\n  test(\"basic arithmetic\") {\n    withSQLConf(\"parquet.enable.dictionary\" -> \"false\") {\n      withParquetTable((1 until 10).map(i => (i, i + 1)), \"tbl\", false) {\n        checkSparkAnswerAndOperator(\"SELECT _1 + _2, _1 - _2, _1 * _2, _1 / _2, _1 % _2 FROM tbl\")\n      }\n    }\n\n    withSQLConf(\"parquet.enable.dictionary\" -> \"false\") {\n      withParquetTable((1 until 10).map(i => (i.toFloat, i.toFloat + 0.5)), \"tbl\", false) {\n        checkSparkAnswerAndOperator(\"SELECT _1 + _2, _1 - _2, _1 * _2, _1 / _2, _1 % _2 FROM tbl\")\n      }\n    }\n\n    withSQLConf(\"parquet.enable.dictionary\" -> \"false\") {\n      withParquetTable((1 until 10).map(i => (i.toDouble, i.toDouble + 0.5d)), \"tbl\", false) {\n        checkSparkAnswerAndOperator(\"SELECT _1 + _2, _1 - _2, _1 * _2, _1 / _2, _1 % _2 FROM tbl\")\n      }\n    }\n  }\n\n  test(\"date partition column does not forget date type\") {\n    withTable(\"t1\") {\n      sql(\"CREATE TABLE t1(flag LONG, cal_dt DATE) USING PARQUET PARTITIONED BY (cal_dt)\")\n      sql(\"\"\"\n            |INSERT INTO t1 VALUES\n            |(2, date'2021-06-27'),\n            |(2, date'2021-06-28'),\n            |(2, date'2021-06-29'),\n            |(2, date'2021-06-30')\"\"\".stripMargin)\n      checkSparkAnswerAndOperator(sql(\"SELECT CAST(cal_dt as STRING) FROM t1\"))\n      checkSparkAnswer(\"SHOW PARTITIONS t1\")\n    }\n  }\n\n  test(\"DatePart functions: Year/Month/DayOfMonth/DayOfWeek/DayOfYear/WeekOfYear/Quarter\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(col timestamp) using parquet\")\n          sql(s\"insert into $table values (now()), (timestamp('1900-01-01')), (null)\")\n          checkSparkAnswerAndOperator(\n            \"SELECT col, year(col), month(col), day(col), weekday(col), \" +\n              s\" dayofweek(col), dayofyear(col), weekofyear(col), quarter(col) FROM $table\")\n        }\n      }\n    }\n  }\n\n  test(\"from_unixtime\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\n        \"parquet.enable.dictionary\" -> dictionary.toString,\n        CometConf.getExprAllowIncompatConfigKey(classOf[FromUnixTime]) -> \"true\") {\n        val table = \"test\"\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(\n            path,\n            dictionaryEnabled = dictionary,\n            -128,\n            128,\n            randomSize = 100)\n          withParquetTable(path.toString, table) {\n            // TODO: 
DataFusion supports only -8334601211038 <= sec <= 8210266876799\n            // https://github.com/apache/datafusion/issues/16594\n            // After fixing this issue, remove the where clause below\n            val where = \"where _5 BETWEEN -8334601211038 AND 8210266876799\"\n            checkSparkAnswerAndOperator(s\"SELECT from_unixtime(_5) FROM $table $where\")\n            checkSparkAnswerAndOperator(s\"SELECT from_unixtime(_8) FROM $table $where\")\n            // TODO: DataFusion toChar does not support Spark datetime pattern format\n            // https://github.com/apache/datafusion/issues/16577\n            // https://github.com/apache/datafusion/issues/14536\n            // After fixing these issues, change checkSparkAnswer to checkSparkAnswerAndOperator\n            checkSparkAnswer(s\"SELECT from_unixtime(_5, 'yyyy') FROM $table $where\")\n            checkSparkAnswer(s\"SELECT from_unixtime(_8, 'yyyy') FROM $table $where\")\n            withSQLConf(SESSION_LOCAL_TIMEZONE.key -> \"Asia/Kathmandu\") {\n              checkSparkAnswerAndOperator(s\"SELECT from_unixtime(_5) FROM $table $where\")\n              checkSparkAnswerAndOperator(s\"SELECT from_unixtime(_8) FROM $table $where\")\n              checkSparkAnswer(s\"SELECT from_unixtime(_5, 'yyyy') FROM $table $where\")\n              checkSparkAnswer(s\"SELECT from_unixtime(_8, 'yyyy') FROM $table $where\")\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"Decimal binary ops multiply is aligned to Spark\") {\n    Seq(true, false).foreach { allowPrecisionLoss =>\n      withSQLConf(\n        \"spark.sql.decimalOperations.allowPrecisionLoss\" -> allowPrecisionLoss.toString) {\n\n        testSingleLineQuery(\n          \"select cast(1.23456 as decimal(10,9)) c1, cast(2.345678 as decimal(10,9)) c2\",\n          \"select a, b, typeof(a), typeof(b) from (select c1 * 2.345678 a, c2 * c1 b from tbl)\",\n          s\"basic_positive_numbers (allowPrecisionLoss = ${allowPrecisionLoss})\")\n\n        testSingleLineQuery(\n          \"select cast(1.23456 as decimal(10,9)) c1, cast(-2.345678 as decimal(10,9)) c2\",\n          \"select a, b, typeof(a), typeof(b) from (select c1 * -2.345678 a, c2 * c1 b from tbl)\",\n          s\"basic_neg_numbers (allowPrecisionLoss = ${allowPrecisionLoss})\")\n\n        testSingleLineQuery(\n          \"select cast(1.23456 as decimal(10,9)) c1, cast(0 as decimal(10,9)) c2\",\n          \"select a, b, typeof(a), typeof(b) from (select c1 * 0.0 a, c2 * c1 b from tbl)\",\n          s\"zero (allowPrecisionLoss = ${allowPrecisionLoss})\")\n\n        testSingleLineQuery(\n          \"select cast(1.23456 as decimal(10,9)) c1, cast(1 as decimal(10,9)) c2\",\n          \"select a, b, typeof(a), typeof(b) from (select c1 * 1.0 a, c2 * c1 b from tbl)\",\n          s\"identity (allowPrecisionLoss = ${allowPrecisionLoss})\")\n\n        testSingleLineQuery(\n          \"select cast(123456789.1234567890 as decimal(20,10)) c1, cast(987654321.9876543210 as decimal(20,10)) c2\",\n          \"select a, b, typeof(a), typeof(b) from (select c1 * cast(987654321.9876543210 as decimal(20,10)) a, c2 * c1 b from tbl)\",\n          s\"large_numbers (allowPrecisionLoss = ${allowPrecisionLoss})\")\n\n        testSingleLineQuery(\n          \"select cast(0.00000000123456789 as decimal(20,19)) c1, cast(0.00000000987654321 as decimal(20,19)) c2\",\n          \"select a, b, typeof(a), typeof(b) from (select c1 * cast(0.00000000987654321 as decimal(20,19)) a, c2 * c1 b from tbl)\",\n          s\"small_numbers 
(allowPrecisionLoss = ${allowPrecisionLoss})\")\n\n        testSingleLineQuery(\n          \"select cast(64053151420411946063694043751862251568 as decimal(38,0)) c1, cast(12345 as decimal(10,0)) c2\",\n          \"select a, b, typeof(a), typeof(b) from (select c1 * cast(12345 as decimal(10,0)) a, c2 * c1 b from tbl)\",\n          s\"overflow_precision (allowPrecisionLoss = ${allowPrecisionLoss})\")\n\n        testSingleLineQuery(\n          \"select cast(6.4053151420411946063694043751862251568 as decimal(38,37)) c1, cast(1.2345 as decimal(10,9)) c2\",\n          \"select a, b, typeof(a), typeof(b) from (select c1 * cast(1.2345 as decimal(10,9)) a, c2 * c1 b from tbl)\",\n          s\"overflow_scale (allowPrecisionLoss = ${allowPrecisionLoss})\")\n\n        testSingleLineQuery(\n          \"\"\"\n            |select cast(6.4053151420411946063694043751862251568 as decimal(38,37)) c1, cast(1.2345 as decimal(10,9)) c2\n            |union all\n            |select cast(1.23456 as decimal(10,9)) c1, cast(1 as decimal(10,9)) c2\n            |\"\"\".stripMargin,\n          \"select a, typeof(a) from (select c1 * c2 a from tbl)\",\n          s\"mixed_errs_and_results (allowPrecisionLoss = ${allowPrecisionLoss})\")\n      }\n    }\n  }\n\n  test(\"Decimal modulus with Decimal256 intermediate type\") {\n    // regression test for https://github.com/apache/datafusion-comet/issues/2911\n    withTable(\"test\") {\n      sql(\"create table test(a decimal(33, 29), b decimal(28, 17)) using parquet\")\n      sql(\n        \"insert into test values (-6788.53035340376888409034576923353, \" +\n          \"70948216565.90127985418365471)\")\n      withSQLConf(\n        \"spark.comet.enabled\" -> \"true\",\n        \"spark.sql.decimalOperations.allowPrecisionLoss\" -> \"true\") {\n        val df = sql(\"select a, b, a % b from test\")\n        df.collect()\n      }\n    }\n  }\n\n  test(\"Decimal random number tests\") {\n    val rand = new scala.util.Random(42)\n    def makeNum(p: Int, s: Int): String = {\n      val int1 = rand.nextLong()\n      val int2 = rand.nextLong().abs\n      val frac1 = rand.nextLong().abs\n      val frac2 = rand.nextLong().abs\n      s\"$int1$int2\".take(p - s + (int1 >>> 63).toInt) + \".\" + s\"$frac1$frac2\".take(s)\n    }\n\n    val table = \"test\"\n    (0 until 10).foreach { _ =>\n      val p1 = rand.nextInt(38) + 1 // 1 <= p1 <= 38\n      val s1 = rand.nextInt(p1 + 1) // 0 <= s1 <= p1\n      val p2 = rand.nextInt(38) + 1\n      val s2 = rand.nextInt(p2 + 1)\n\n      withTable(table) {\n        sql(s\"create table $table(a decimal($p1, $s1), b decimal($p2, $s2)) using parquet\")\n        val values =\n          (0 until 10).map(_ => s\"(${makeNum(p1, s1)}, ${makeNum(p2, s2)})\").mkString(\",\")\n        sql(s\"insert into $table values $values\")\n        Seq(true, false).foreach { allowPrecisionLoss =>\n          withSQLConf(\n            \"spark.sql.decimalOperations.allowPrecisionLoss\" -> allowPrecisionLoss.toString) {\n            val a = makeNum(p1, s1)\n            val b = makeNum(p2, s2)\n            val ops = Seq(\"+\", \"-\", \"*\", \"/\", \"%\", \"div\")\n            for (op <- ops) {\n              checkSparkAnswerAndOperator(s\"select a, b, a $op b from $table\")\n              checkSparkAnswerAndOperator(s\"select $a, b, $a $op b from $table\")\n              checkSparkAnswerAndOperator(s\"select a, $b, a $op $b from $table\")\n              checkSparkAnswerAndOperator(\n                s\"select $a, $b, decimal($a) $op decimal($b) from $table\")\n            }\n       
   }\n        }\n      }\n    }\n  }\n\n  test(\"test cast utf8 to boolean as compatible with Spark\") {\n    def testCastedColumn(inputValues: Seq[String]): Unit = {\n      val table = \"test_table\"\n      withTable(table) {\n        val values = inputValues.map(x => s\"('$x')\").mkString(\",\")\n        sql(s\"create table $table(base_column char(20)) using parquet\")\n        sql(s\"insert into $table values $values\")\n        checkSparkAnswerAndOperator(\n          s\"select base_column, cast(base_column as boolean) as casted_column from $table\")\n      }\n    }\n\n    // Boolean values that both Arrow and Spark accept as true\n    testCastedColumn(inputValues = Seq(\"t\", \"true\", \"y\", \"yes\", \"1\", \"T\", \"TrUe\", \"Y\", \"YES\"))\n    // Boolean values that both Arrow and Spark accept as false\n    testCastedColumn(inputValues = Seq(\"f\", \"false\", \"n\", \"no\", \"0\", \"F\", \"FaLSe\", \"N\", \"No\"))\n    // Boolean values accepted by Arrow but not by Spark\n    testCastedColumn(inputValues =\n      Seq(\"TR\", \"FA\", \"tr\", \"tru\", \"ye\", \"on\", \"fa\", \"fal\", \"fals\", \"of\", \"off\"))\n    // Values that neither Arrow nor Spark can cast to boolean\n    testCastedColumn(inputValues = Seq(\"car\", \"Truck\"))\n  }\n\n  test(\"explain comet\") {\n    withSQLConf(\n      SQLConf.ANSI_ENABLED.key -> \"false\",\n      SQLConf.COALESCE_PARTITIONS_ENABLED.key -> \"true\",\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\",\n      EXTENDED_EXPLAIN_PROVIDERS_KEY -> \"org.apache.comet.ExtendedExplainInfo\") {\n      val table = \"test\"\n      withTable(table) {\n        sql(s\"create table $table(c0 int, c1 int , c2 float) using parquet\")\n        sql(s\"insert into $table values(0, 1, 100.000001)\")\n\n        Seq(\n          (\n            s\"SELECT cast(make_interval(c0, c1, c0, c1, c0, c0, c2) as string) as C from $table\",\n            Set(\"Cast from CalendarIntervalType to StringType is not supported\")),\n          (\n            \"SELECT \"\n              + \"date_part('YEAR', make_interval(c0, c1, c0, c1, c0, c0, c2))\"\n              + \" + \"\n              + \"date_part('MONTH', make_interval(c0, c1, c0, c1, c0, c0, c2))\"\n              + s\" as yrs_and_mths from $table\",\n            Set(\n              \"extractintervalyears is not supported\",\n              \"extractintervalmonths is not supported\")),\n          (\n            s\"SELECT sum(c0), sum(c2) from $table group by c1\",\n            Set(\"Comet shuffle is not enabled: spark.comet.exec.shuffle.enabled is not enabled\")),\n          (\n            \"SELECT A.c1, A.sum_c0, A.sum_c2, B.casted from \"\n              + s\"(SELECT c1, sum(c0) as sum_c0, sum(c2) as sum_c2 from $table group by c1) as A, \"\n              + s\"(SELECT c1, cast(make_interval(c0, c1, c0, c1, c0, c0, c2) as string) as casted from $table) as B \"\n              + \"where A.c1 = B.c1 \",\n            Set(\n              \"Cast from CalendarIntervalType to StringType is not supported\",\n              \"Comet shuffle is not enabled: spark.comet.exec.shuffle.enabled is not enabled\")),\n          (s\"select * from $table LIMIT 10 OFFSET 3\", Set(\"Comet shuffle is not enabled\")))\n          .foreach(test => {\n            val qry = test._1\n            val expected = test._2\n            val df = sql(qry)\n            df.collect() // force an execution\n            checkSparkAnswerAndFallbackReasons(df, expected)\n  
        })\n      }\n    }\n  }\n\n  test(\"explain: CollectLimit disabled\") {\n    withSQLConf(\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_COLLECT_LIMIT_ENABLED.key -> \"false\",\n      EXTENDED_EXPLAIN_PROVIDERS_KEY -> \"org.apache.comet.ExtendedExplainInfo\") {\n      val table = \"test\"\n      withTable(table) {\n        sql(s\"create table $table(c0 int, c1 int , c2 float) using parquet\")\n        sql(s\"insert into $table values(0, 1, 100.000001)\")\n        Seq(\n          (\n            s\"select * from $table LIMIT 10\",\n            Set(\"Native support for operator CollectLimitExec is disabled. \" +\n              \"Set spark.comet.exec.collectLimit.enabled=true to enable it\")))\n          .foreach(test => {\n            val qry = test._1\n            val expected = test._2\n            val df = sql(qry)\n            df.collect() // force an execution\n            checkSparkAnswerAndFallbackReasons(df, expected)\n          })\n      }\n    }\n  }\n\n  test(\"hash functions\") {\n    Seq(true, false).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(col string, a int, b float) using parquet\")\n          sql(s\"\"\"\n              |insert into $table values\n              |('Spark SQL  ', 10, 1.2), (NULL, NULL, NULL), ('', 0, 0.0), ('苹果手机', NULL, 3.999999)\n              |, ('Spark SQL  ', 10, 1.2), (NULL, NULL, NULL), ('', 0, 0.0), ('苹果手机', NULL, 3.999999)\n              |\"\"\".stripMargin)\n          checkSparkAnswerAndOperator(\"\"\"\n              |select\n              |md5(col), md5(cast(a as string)), md5(cast(b as string)),\n              |hash(col), hash(col, 1), hash(col, 0), hash(col, a, b), hash(b, a, col),\n              |xxhash64(col), xxhash64(col, 1), xxhash64(col, 0), xxhash64(col, a, b), xxhash64(b, a, col),\n              |crc32(col), crc32(cast(a as string)), crc32(cast(b as string)),\n              |sha2(col, 0), sha2(col, 256), sha2(col, 224), sha2(col, 384), sha2(col, 512), sha2(col, 128), sha2(col, -1),\n              |sha1(col), sha1(cast(a as string)), sha1(cast(b as string))\n              |from test\n              |\"\"\".stripMargin)\n        }\n      }\n    }\n  }\n\n  test(\"remainder function\") {\n    def withAnsiMode(enabled: Boolean)(f: => Unit): Unit = {\n      withSQLConf(\n        SQLConf.ANSI_ENABLED.key -> enabled.toString,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\")(f)\n    }\n\n    def verifyResult(query: String): Unit = {\n      val expectedDivideByZeroError =\n        \"[DIVIDE_BY_ZERO] Division by zero. Use `try_divide` to tolerate divisor being 0 and return NULL instead.\"\n\n      checkSparkAnswerMaybeThrows(sql(query)) match {\n        case (Some(sparkException), Some(cometException)) =>\n          assert(sparkException.getMessage.contains(expectedDivideByZeroError))\n          assert(cometException.getMessage.contains(expectedDivideByZeroError))\n        case (None, None) => checkSparkAnswerAndOperator(sql(query))\n        case (None, Some(ex)) =>\n          fail(\"Comet threw an exception but Spark did not. Comet exception: \" + ex.getMessage)\n        case (Some(sparkException), None) =>\n          fail(\n            \"Spark threw an exception but Comet did not. 
Spark exception: \" +\n              sparkException.getMessage)\n      }\n    }\n\n    val tableName = \"remainder_table\"\n    withTable(tableName) {\n      sql(s\"\"\"\n          |create table $tableName (\n          |a_int int,\n          |b_int int,\n          |a_float float,\n          |b_float float,\n          |a_decimal decimal(18,4),\n          |b_decimal decimal(18,4)\n          |) using parquet\n          \"\"\".stripMargin)\n\n      val minInt = Int.MinValue\n\n      sql(s\"\"\"\n          |insert into $tableName values\n          |(3, 0, 3.0, 0.0, cast(3 as decimal(18,4)), cast(0 as decimal(18,4))),\n          |(10, 3, 10.5, 3.0, cast(10 as decimal(18,4)), cast(3 as decimal(18,4))),\n          |(-10, 3, -10.5, 3.0, cast(-10 as decimal(18,4)), cast(3 as decimal(18,4))),\n          |($minInt, -1, $minInt, -1.0, cast($minInt as decimal(18,4)), cast(-1 as decimal(18,4))),\n          |(null, 3, null, 3.0, null, cast(3 as decimal(18,4))),\n          |(3, null, 3.0, null, cast(3 as decimal(18,4)), null)\n          \"\"\".stripMargin)\n\n      Seq(true, false).foreach(enabled => {\n        withAnsiMode(enabled = enabled) {\n          verifyResult(s\"\"\"\n          |select\n          |a_int, b_int, a_float, b_float, a_decimal, b_decimal,\n          |a_int % b_int as int_int_modulo,\n          |a_int % b_float as int_float_modulo,\n          |mod(a_int, b_int) as int_int_mod,\n          |mod(a_int, b_float) as int_float_mod,\n          |a_float % b_float as float_float_modulo,\n          |a_float % b_decimal as float_decimal_modulo,\n          |mod(a_float, b_float) as float_float_mod,\n          |mod(a_float, b_decimal) as float_decimal_mod,\n          |a_decimal % b_decimal as decimal_decimal_modulo,\n          |mod(a_decimal, b_decimal) as decimal_decimal_mod\n          |from $tableName\n          \"\"\".stripMargin)\n        }\n      })\n    }\n  }\n\n  test(\"hash functions with random input\") {\n    val dataGen = DataGenerator.DEFAULT\n    // sufficient number of rows to create dictionary encoded ArrowArray.\n    val randomNumRows = 1000\n\n    val whitespaceChars = \" \\t\\r\\n\"\n    val timestampPattern = \"0123456789/:T\" + whitespaceChars\n    Seq(true, false).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(col string, a int, b float) using parquet\")\n          val tableSchema = spark.table(table).schema\n          val rows = dataGen.generateRows(\n            randomNumRows,\n            tableSchema,\n            Some(() => dataGen.generateString(timestampPattern, 6)))\n          val data = spark.createDataFrame(spark.sparkContext.parallelize(rows), tableSchema)\n          data.write\n            .mode(\"append\")\n            .insertInto(table)\n          // with random generated data\n          // disable cast(b as string) for now, as the cast from float to string may produce incompatible result\n          checkSparkAnswerAndOperator(\"\"\"\n              |select\n              |md5(col), md5(cast(a as string)), --md5(cast(b as string)),\n              |hash(col), hash(col, 1), hash(col, 0), hash(col, a, b), hash(b, a, col),\n              |xxhash64(col), xxhash64(col, 1), xxhash64(col, 0), xxhash64(col, a, b), xxhash64(b, a, col),\n              |crc32(col), crc32(cast(a as string)),\n              |sha2(col, 0), sha2(col, 256), sha2(col, 224), sha2(col, 384), sha2(col, 512), sha2(col, 128), sha2(col, -1),\n              
|sha1(col), sha1(cast(a as string))\n              |from test\n              |\"\"\".stripMargin)\n        }\n      }\n    }\n  }\n\n  test(\"hash function with decimal input\") {\n    val testPrecisionScales: Seq[(Int, Int)] = Seq(\n      (1, 0),\n      (17, 2),\n      (18, 2),\n      (19, 2),\n      (DecimalType.MAX_PRECISION, DecimalType.MAX_SCALE - 1))\n    for ((p, s) <- testPrecisionScales) {\n      withTable(\"t1\") {\n        sql(s\"create table t1(c1 decimal($p, $s)) using parquet\")\n        sql(\"insert into t1 values(1.23), (-1.23), (0.0), (null)\")\n        if (p <= 18) {\n          checkSparkAnswerAndOperator(\"select c1, hash(c1) from t1 order by c1\")\n        } else {\n          // not supported natively yet\n          checkSparkAnswer(\"select c1, hash(c1) from t1 order by c1\")\n        }\n      }\n    }\n  }\n\n  test(\"xxhash64 function with decimal input\") {\n    val testPrecisionScales: Seq[(Int, Int)] = Seq(\n      (1, 0),\n      (17, 2),\n      (18, 2),\n      (19, 2),\n      (DecimalType.MAX_PRECISION, DecimalType.MAX_SCALE - 1))\n    for ((p, s) <- testPrecisionScales) {\n      withTable(\"t1\") {\n        sql(s\"create table t1(c1 decimal($p, $s)) using parquet\")\n        sql(\"insert into t1 values(1.23), (-1.23), (0.0), (null)\")\n        if (p <= 18) {\n          checkSparkAnswerAndOperator(\"select c1, xxhash64(c1) from t1 order by c1\")\n        } else {\n          // not supported natively yet\n          checkSparkAnswer(\"select c1, xxhash64(c1) from t1 order by c1\")\n        }\n      }\n    }\n  }\n\n  test(\"unary negative integer overflow test\") {\n    def withAnsiMode(enabled: Boolean)(f: => Unit): Unit = {\n      withSQLConf(\n        SQLConf.ANSI_ENABLED.key -> enabled.toString,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\")(f)\n    }\n\n    def checkOverflow(query: String, dtype: String): Unit = {\n      checkSparkAnswerMaybeThrows(sql(query)) match {\n        case (Some(sparkException), Some(cometException)) =>\n          assert(sparkException.getMessage.contains(dtype + \" overflow\"))\n          assert(cometException.getMessage.contains(dtype + \" overflow\"))\n        case (None, None) => checkSparkAnswerAndOperator(sql(query))\n        case (None, Some(ex)) =>\n          fail(\"Comet threw an exception but Spark did not \" + ex.getMessage)\n        case (Some(_), None) =>\n          fail(\"Spark threw an exception but Comet did not\")\n      }\n    }\n\n    def runArrayTest(query: String, dtype: String, path: String): Unit = {\n      withParquetTable(path, \"t\") {\n        withAnsiMode(enabled = false) {\n          checkSparkAnswerAndOperator(sql(query))\n        }\n        withAnsiMode(enabled = true) {\n          checkOverflow(query, dtype)\n        }\n      }\n    }\n\n    withTempDir { dir =>\n      // Array values test\n      val dataTypes = Seq(\n        (\"array_test.parquet\", Seq(Int.MaxValue, Int.MinValue).toDF(\"a\"), \"integer\"),\n        (\"long_array_test.parquet\", Seq(Long.MaxValue, Long.MinValue).toDF(\"a\"), \"long\"),\n        (\"short_array_test.parquet\", Seq(Short.MaxValue, Short.MinValue).toDF(\"a\"), \"\"),\n        (\"byte_array_test.parquet\", Seq(Byte.MaxValue, Byte.MinValue).toDF(\"a\"), \"\"))\n\n      dataTypes.foreach { case (fileName, df, dtype) =>\n        val path = new Path(dir.toURI.toString, fileName).toString\n        df.write.mode(\"overwrite\").parquet(path)\n        val query = \"select a, -a from t\"\n        runArrayTest(query, dtype, 
path)\n      }\n\n      withParquetTable((0 until 5).map(i => (i % 5, i % 3)), \"tbl\") {\n        withAnsiMode(enabled = true) {\n          // interval test without cast\n          val longDf = Seq(Long.MaxValue, Long.MaxValue, 2)\n          val yearMonthDf = Seq(Int.MaxValue, Int.MaxValue, 2)\n            .map(Period.ofMonths)\n          val dayTimeDf = Seq(106751991L, 106751991L, 2L)\n            .map(Duration.ofDays)\n          Seq(longDf, yearMonthDf, dayTimeDf).foreach { _ =>\n            checkOverflow(\"select -(_1) FROM tbl\", \"\")\n          }\n        }\n      }\n\n      // scalar tests\n      withParquetTable((0 until 5).map(i => (i % 5, i % 3)), \"tbl\") {\n        withSQLConf(\n          \"spark.sql.optimizer.excludedRules\" -> \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\",\n          SQLConf.ANSI_ENABLED.key -> \"true\",\n          CometConf.COMET_ENABLED.key -> \"true\",\n          CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n          for (n <- Seq(\"2147483647\", \"-2147483648\")) {\n            checkOverflow(s\"select -(cast(${n} as int)) FROM tbl\", \"integer\")\n          }\n          for (n <- Seq(\"32767\", \"-32768\")) {\n            checkOverflow(s\"select -(cast(${n} as short)) FROM tbl\", \"\")\n          }\n          for (n <- Seq(\"127\", \"-128\")) {\n            checkOverflow(s\"select -(cast(${n} as byte)) FROM tbl\", \"\")\n          }\n          for (n <- Seq(\"9223372036854775807\", \"-9223372036854775808\")) {\n            checkOverflow(s\"select -(cast(${n} as long)) FROM tbl\", \"long\")\n          }\n          for (n <- Seq(\"3.4028235E38\", \"-3.4028235E38\")) {\n            checkOverflow(s\"select -(cast(${n} as float)) FROM tbl\", \"float\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"readSidePadding\") {\n    // https://stackoverflow.com/a/46290728\n    val table = \"test\"\n    withTable(table) {\n      sql(s\"create table $table(col1 CHAR(2)) using parquet\")\n      sql(s\"insert into $table values('é')\") // unicode 'e\\\\u{301}'\n      sql(s\"insert into $table values('é')\") // unicode '\\\\u{e9}'\n      sql(s\"insert into $table values('')\")\n      sql(s\"insert into $table values('ab')\")\n\n      checkSparkAnswerAndOperator(s\"SELECT * FROM $table\")\n    }\n  }\n\n  test(\"isnan\") {\n    Seq(\"true\", \"false\").foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary) {\n        withParquetTable(\n          Seq(Some(1.0), Some(Double.NaN), None).map(i => Tuple1(i)),\n          \"tbl\",\n          withDictionary = dictionary.toBoolean) {\n          checkSparkAnswerAndOperator(\"SELECT isnan(_1), isnan(cast(_1 as float)) FROM tbl\")\n          // Use inside a nullable statement to make sure isnan has correct behavior for null input\n          checkSparkAnswerAndOperator(\n            \"SELECT CASE WHEN (_1 > 0) THEN NULL ELSE isnan(_1) END FROM tbl\")\n        }\n      }\n    }\n  }\n\n  test(\"named_struct\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        withParquetTable(path.toString, \"tbl\") {\n          checkSparkAnswerAndOperator(\"SELECT named_struct('a', _1, 'b', _2) FROM tbl\")\n          checkSparkAnswerAndOperator(\"SELECT named_struct('a', _1, 'b', 2) FROM tbl\")\n          checkSparkAnswerAndOperator(\n            \"SELECT named_struct('a', named_struct('b', 
_1, 'c', _2)) FROM tbl\")\n        }\n      }\n    }\n  }\n\n  test(\"named_struct with duplicate field names\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        withParquetTable(path.toString, \"tbl\") {\n          checkSparkAnswerAndOperator(\n            \"SELECT named_struct('a', _1, 'a', _2) FROM tbl\",\n            classOf[ProjectExec])\n          checkSparkAnswerAndOperator(\n            \"SELECT named_struct('a', _1, 'a', 2) FROM tbl\",\n            classOf[ProjectExec])\n          checkSparkAnswerAndOperator(\n            \"SELECT named_struct('a', named_struct('b', _1, 'b', _2)) FROM tbl\",\n            classOf[ProjectExec])\n        }\n      }\n    }\n  }\n\n  test(\"to_json\") {\n    // TODO fix for Spark 4.0.0\n    assume(!isSpark40Plus)\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[StructsToJson]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withParquetTable(\n          (0 until 100).map(i => {\n            val str = if (i % 2 == 0) {\n              \"even\"\n            } else {\n              \"odd\"\n            }\n            (i.toByte, i.toShort, i, i.toLong, i * 1.2f, -i * 1.2d, str, i.toString)\n          }),\n          \"tbl\",\n          withDictionary = dictionaryEnabled) {\n\n          val fields = Range(1, 8).map(n => s\"'col$n', _$n\").mkString(\", \")\n\n          checkSparkAnswerAndOperator(s\"SELECT to_json(named_struct($fields)) FROM tbl\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT to_json(named_struct('nested', named_struct($fields))) FROM tbl\")\n        }\n      }\n    }\n  }\n\n  test(\"to_json escaping of field names and string values\") {\n    // TODO fix for Spark 4.0.0\n    assume(!isSpark40Plus)\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[StructsToJson]) -> \"true\") {\n      val gen = new DataGenerator(new Random(42))\n      val chars = \"\\\\'\\\"abc\\t\\r\\n\\f\\b\"\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withParquetTable(\n          (0 until 100).map(i => {\n            val str1 = gen.generateString(chars, 8)\n            val str2 = gen.generateString(chars, 8)\n            (i.toString, str1, str2)\n          }),\n          \"tbl\",\n          withDictionary = dictionaryEnabled) {\n\n          val fields = Range(1, 3)\n            .map(n => {\n              val columnName = s\"\"\"column \"$n\"\"\"\"\n              s\"'$columnName', _$n\"\n            })\n            .mkString(\", \")\n\n          checkSparkAnswerAndOperator(\n            \"\"\"SELECT 'column \"1\"' x, \"\"\" +\n              s\"to_json(named_struct($fields)) FROM tbl ORDER BY x\")\n        }\n      }\n    }\n  }\n\n  test(\"to_json unicode\") {\n    // TODO fix for Spark 4.0.0\n    assume(!isSpark40Plus)\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(classOf[StructsToJson]) -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withParquetTable(\n          (0 until 100).map(i => {\n            (i.toString, \"\\uD83E\\uDD11\", \"\\u018F\")\n          }),\n          \"tbl\",\n          withDictionary = dictionaryEnabled) {\n\n          val fields = Range(1, 3)\n            .map(n => {\n              val columnName = s\"\"\"column \"$n\"\"\"\"\n              s\"'$columnName', _$n\"\n            })\n            .mkString(\", \")\n\n 
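        // verify that multi-byte characters and surrogate pairs survive to_json\n 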
         checkSparkAnswerAndOperator(\n            \"\"\"SELECT 'column \"1\"' x, \"\"\" +\n              s\"to_json(named_struct($fields)) FROM tbl ORDER BY x\")\n        }\n      }\n    }\n  }\n\n  test(\"struct and named_struct with dictionary\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        (0 until 100).map(i =>\n          (\n            i,\n            if (i % 2 == 0) { \"even\" }\n            else { \"odd\" })),\n        \"tbl\",\n        withDictionary = dictionaryEnabled) {\n        checkSparkAnswerAndOperator(\"SELECT struct(_1, _2) FROM tbl\")\n        checkSparkAnswerAndOperator(\"SELECT named_struct('a', _1, 'b', _2) FROM tbl\")\n      }\n    }\n  }\n\n  test(\"get_struct_field\") {\n    Seq(\"\", \"parquet\").foreach { v1List =>\n      withSQLConf(\n        SQLConf.USE_V1_SOURCE_LIST.key -> v1List,\n        CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n        CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\") {\n        withTempPath { dir =>\n          var df = spark\n            .range(5)\n            // Add both a null struct and null inner value\n            .select(\n              when(\n                col(\"id\") > 1,\n                struct(\n                  when(col(\"id\") > 2, col(\"id\")).alias(\"id\"),\n                  when(col(\"id\") > 2, struct(when(col(\"id\") > 3, col(\"id\")).alias(\"id\")))\n                    .as(\"nested2\")))\n                .alias(\"nested1\"))\n\n          df.write.parquet(dir.toString())\n\n          df = spark.read.parquet(dir.toString())\n          checkSparkAnswerAndOperator(df.select(\"nested1.id\"))\n          checkSparkAnswerAndOperator(df.select(\"nested1.nested2.id\"))\n        }\n      }\n    }\n  }\n\n  test(\"get_struct_field - select primitive fields\") {\n    val scanImpl = CometConf.COMET_NATIVE_SCAN_IMPL.get()\n    assume(!(scanImpl == CometConf.SCAN_AUTO && CometSparkSessionExtensions.isSpark40Plus))\n    withTempPath { dir =>\n      // create input file with Comet disabled\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val df = spark\n          .range(5)\n          // Add both a null struct and null inner value\n          .select(when(col(\"id\") > 1, struct(when(col(\"id\") > 2, col(\"id\")).alias(\"id\")))\n            .alias(\"nested1\"))\n\n        df.write.parquet(dir.toString())\n      }\n      val df = spark.read.parquet(dir.toString()).select(\"nested1.id\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"get_struct_field - select subset of struct\") {\n    val scanImpl = CometConf.COMET_NATIVE_SCAN_IMPL.get()\n    assume(!(scanImpl == CometConf.SCAN_AUTO && CometSparkSessionExtensions.isSpark40Plus))\n    withTempPath { dir =>\n      // create input file with Comet disabled\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val df = spark\n          .range(5)\n          // Add both a null struct and null inner value\n          .select(\n            when(\n              col(\"id\") > 1,\n              struct(\n                when(col(\"id\") > 2, col(\"id\")).alias(\"id\"),\n                when(col(\"id\") > 2, struct(when(col(\"id\") > 3, col(\"id\")).alias(\"id\")))\n                  .as(\"nested2\")))\n              .alias(\"nested1\"))\n\n        df.write.parquet(dir.toString())\n      }\n\n      val df = spark.read.parquet(dir.toString())\n      checkSparkAnswerAndOperator(df.select(\"nested1.id\"))\n      checkSparkAnswerAndOperator(df.select(\"nested1.nested2\"))\n      
checkSparkAnswerAndOperator(df.select(\"nested1.nested2.id\"))\n      checkSparkAnswerAndOperator(df.select(\"nested1.id\", \"nested1.nested2.id\"))\n    }\n  }\n\n  test(\"get_struct_field - read entire struct\") {\n    val scanImpl = CometConf.COMET_NATIVE_SCAN_IMPL.get()\n    assume(!(scanImpl == CometConf.SCAN_AUTO && CometSparkSessionExtensions.isSpark40Plus))\n    withTempPath { dir =>\n      // create input file with Comet disabled\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val df = spark\n          .range(5)\n          // Add both a null struct and null inner value\n          .select(\n            when(\n              col(\"id\") > 1,\n              struct(\n                when(col(\"id\") > 2, col(\"id\")).alias(\"id\"),\n                when(col(\"id\") > 2, struct(when(col(\"id\") > 3, col(\"id\")).alias(\"id\")))\n                  .as(\"nested2\")))\n              .alias(\"nested1\"))\n\n        df.write.parquet(dir.toString())\n      }\n\n      val df = spark.read.parquet(dir.toString()).select(\"nested1.id\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  private def testV1AndV2(testName: String)(f: => Unit): Unit = {\n    test(s\"$testName - V1\") {\n      withSQLConf(SQLConf.USE_V1_SOURCE_LIST.key -> \"parquet\") { f }\n    }\n\n    // The V2 test would fail because V2 produces a different plan and the operator check would\n    // fail. We could get the test to pass by skipping the operator check, but marking the test\n    // as ignored makes the gap more obvious; once V2 is supported, we want to be sure to\n    // re-enable the operator check.\n    //\n    ignore(s\"$testName - V2\") {\n      withSQLConf(SQLConf.USE_V1_SOURCE_LIST.key -> \"\") { f }\n    }\n  }\n\n  testV1AndV2(\"get_struct_field with DataFusion ParquetExec - simple case\") {\n    withTempPath { dir =>\n      // create input file with Comet disabled\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val df = spark\n          .range(5)\n          // Add both a null struct and null inner value\n          .select(when(col(\"id\") > 1, struct(when(col(\"id\") > 2, col(\"id\")).alias(\"id\")))\n            .alias(\"nested1\"))\n\n        df.write.parquet(dir.toString())\n      }\n\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION,\n        CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"true\") {\n\n        val df = spark.read.parquet(dir.toString())\n        checkSparkAnswerAndOperator(df.select(\"nested1.id\"))\n      }\n    }\n  }\n\n  testV1AndV2(\"get_struct_field with DataFusion ParquetExec - select subset of struct\") {\n    withTempPath { dir =>\n      // create input file with Comet disabled\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val df = spark\n          .range(5)\n          // Add both a null struct and null inner value\n          .select(\n            when(\n              col(\"id\") > 1,\n              struct(\n                when(col(\"id\") > 2, col(\"id\")).alias(\"id\"),\n                when(col(\"id\") > 2, struct(when(col(\"id\") > 3, col(\"id\")).alias(\"id\")))\n                  .as(\"nested2\")))\n              .alias(\"nested1\"))\n\n        df.write.parquet(dir.toString())\n      }\n\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION,\n        
CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"true\") {\n\n        val df = spark.read.parquet(dir.toString())\n\n        checkSparkAnswerAndOperator(df.select(\"nested1.id\"))\n        checkSparkAnswerAndOperator(df.select(\"nested1.id\", \"nested1.nested2.id\"))\n        checkSparkAnswerAndOperator(df.select(\"nested1.nested2.id\"))\n      }\n    }\n  }\n\n  test(\"get_struct_field with DataFusion ParquetExec - read entire struct\") {\n    withTempPath { dir =>\n      // create input file with Comet disabled\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val df = spark\n          .range(5)\n          // Add both a null struct and null inner value\n          .select(\n            when(\n              col(\"id\") > 1,\n              struct(\n                when(col(\"id\") > 2, col(\"id\")).alias(\"id\"),\n                when(col(\"id\") > 2, struct(when(col(\"id\") > 3, col(\"id\")).alias(\"id\")))\n                  .as(\"nested2\")))\n              .alias(\"nested1\"))\n\n        df.write.parquet(dir.toString())\n      }\n\n      Seq(\"\", \"parquet\").foreach { v1List =>\n        withSQLConf(\n          SQLConf.USE_V1_SOURCE_LIST.key -> v1List,\n          CometConf.COMET_ENABLED.key -> \"true\",\n          CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"true\") {\n\n          val df = spark.read.parquet(dir.toString())\n          if (v1List.isEmpty) {\n            checkSparkAnswer(df.select(\"nested1\"))\n          } else {\n            checkSparkAnswerAndOperator(df.select(\"nested1\"))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"read array[int] from parquet\") {\n\n    withTempPath { dir =>\n      // create input file with Comet disabled\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val df = spark\n          .range(5)\n          // rows with id <= 1 get a null array value, so both null and\n          // non-null arrays are exercised\n          .select(when(col(\"id\") > 1, sequence(lit(0), col(\"id\") * 2)).alias(\"array1\"))\n        df.write.parquet(dir.toString())\n      }\n\n      Seq(\"\", \"parquet\").foreach { v1List =>\n        withSQLConf(SQLConf.USE_V1_SOURCE_LIST.key -> v1List) {\n          val df = spark.read.parquet(dir.toString())\n          if (v1List.isEmpty) {\n            checkSparkAnswer(df.select(\"array1\"))\n            checkSparkAnswer(df.select(element_at(col(\"array1\"), lit(1))))\n          } else {\n            checkSparkAnswerAndOperator(df.select(\"array1\"))\n            checkSparkAnswerAndOperator(df.select(element_at(col(\"array1\"), lit(1))))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"CreateArray\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        val df = spark.read.parquet(path.toString)\n        checkSparkAnswerAndOperator(df.select(array(col(\"_2\"), col(\"_3\"), col(\"_4\"))))\n        checkSparkAnswerAndOperator(df.select(array(col(\"_4\"), col(\"_11\"), lit(null))))\n        checkSparkAnswerAndOperator(\n          df.select(array(array(col(\"_4\")), array(col(\"_4\"), lit(null)))))\n        checkSparkAnswerAndOperator(df.select(array(col(\"_8\"), col(\"_13\"))))\n        // This ends up returning empty strings instead of nulls for the last element\n        checkSparkAnswerAndOperator(df.select(array(col(\"_8\"), col(\"_13\"), lit(null))))\n        
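// nested arrays, then arrays of structs\n        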
checkSparkAnswerAndOperator(df.select(array(array(col(\"_8\")), array(col(\"_13\")))))\n        checkSparkAnswerAndOperator(df.select(array(col(\"_8\"), col(\"_8\"), lit(null))))\n        checkSparkAnswerAndOperator(df.select(array(struct(\"_4\"), struct(\"_4\"))))\n        checkSparkAnswerAndOperator(\n          df.select(array(struct(col(\"_8\").alias(\"a\")), struct(col(\"_13\").alias(\"a\")))))\n      }\n    }\n  }\n\n  test(\"ListExtract\") {\n    def assertBothThrow(df: DataFrame): Unit = {\n      checkSparkAnswerMaybeThrows(df) match {\n        case (Some(_), Some(_)) => ()\n        case (spark, comet) =>\n          fail(\n            s\"Expected Spark and Comet to throw exception, but got\\nSpark: $spark\\nComet: $comet\")\n      }\n    }\n\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 100)\n\n        Seq(true, false).foreach { ansiEnabled =>\n          withSQLConf(\n            SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString(),\n            // Prevent the optimizer from collapsing an extract value of a create array\n            SQLConf.OPTIMIZER_EXCLUDED_RULES.key -> SimplifyExtractValueOps.ruleName) {\n            val df = spark.read.parquet(path.toString)\n\n            val stringArray = df.select(array(col(\"_8\"), col(\"_8\"), lit(null)).alias(\"arr\"))\n            checkSparkAnswerAndOperator(\n              stringArray\n                .select(col(\"arr\").getItem(0), col(\"arr\").getItem(1), col(\"arr\").getItem(2)))\n\n            checkSparkAnswerAndOperator(\n              stringArray.select(\n                element_at(col(\"arr\"), -3),\n                element_at(col(\"arr\"), -2),\n                element_at(col(\"arr\"), -1),\n                element_at(col(\"arr\"), 1),\n                element_at(col(\"arr\"), 2),\n                element_at(col(\"arr\"), 3)))\n\n            // 0 is an invalid index for element_at\n            assertBothThrow(stringArray.select(element_at(col(\"arr\"), 0)))\n\n            if (ansiEnabled) {\n              assertBothThrow(stringArray.select(col(\"arr\").getItem(-1)))\n              assertBothThrow(stringArray.select(col(\"arr\").getItem(3)))\n              assertBothThrow(stringArray.select(element_at(col(\"arr\"), -4)))\n              assertBothThrow(stringArray.select(element_at(col(\"arr\"), 4)))\n            } else {\n              checkSparkAnswerAndOperator(stringArray.select(col(\"arr\").getItem(-1)))\n              checkSparkAnswerAndOperator(stringArray.select(col(\"arr\").getItem(3)))\n              checkSparkAnswerAndOperator(stringArray.select(element_at(col(\"arr\"), -4)))\n              checkSparkAnswerAndOperator(stringArray.select(element_at(col(\"arr\"), 4)))\n            }\n\n            val intArray =\n              df.select(when(col(\"_4\").isNotNull, array(col(\"_4\"), col(\"_4\"))).alias(\"arr\"))\n            checkSparkAnswerAndOperator(\n              intArray\n                .select(col(\"arr\").getItem(0), col(\"arr\").getItem(1)))\n\n            checkSparkAnswerAndOperator(\n              intArray.select(\n                element_at(col(\"arr\"), 1),\n                element_at(col(\"arr\"), 2),\n                element_at(col(\"arr\"), -1),\n                element_at(col(\"arr\"), -2)))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"GetArrayStructFields\") {\n    Seq(true, false).foreach { 
dictionaryEnabled =>\n      withSQLConf(SQLConf.OPTIMIZER_EXCLUDED_RULES.key -> SimplifyExtractValueOps.ruleName) {\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n          val df = spark.read\n            .parquet(path.toString)\n            .select(\n              array(struct(col(\"_2\"), col(\"_3\"), col(\"_4\"), col(\"_8\")), lit(null)).alias(\"arr\"))\n          checkSparkAnswerAndOperator(df.select(\"arr._2\", \"arr._3\", \"arr._4\"))\n\n          val complex = spark.read\n            .parquet(path.toString)\n            .select(array(struct(struct(col(\"_4\"), col(\"_8\")).alias(\"nested\"))).alias(\"arr\"))\n\n          checkSparkAnswerAndOperator(complex.select(col(\"arr.nested._4\")))\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for add\") {\n    val data = Seq((Integer.MAX_VALUE, 1), (Integer.MIN_VALUE, -1))\n    withSQLConf(SQLConf.ANSI_ENABLED.key -> \"true\") {\n      withParquetTable(data, \"tbl\") {\n        val res = spark.sql(\"\"\"\n                              |SELECT\n                              |  _1 + _2\n                              |  from tbl\n                              |  \"\"\".stripMargin)\n\n        checkSparkAnswerMaybeThrows(res) match {\n          case (Some(sparkExc), Some(cometExc)) =>\n            assert(cometExc.getMessage.contains(ARITHMETIC_OVERFLOW_EXCEPTION_MSG))\n            assert(sparkExc.getMessage.contains(\"overflow\"))\n          case _ => fail(\"Exception should be thrown\")\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for subtract\") {\n    val data = Seq((Integer.MIN_VALUE, 1))\n    withSQLConf(SQLConf.ANSI_ENABLED.key -> \"true\") {\n      withParquetTable(data, \"tbl\") {\n        val res = spark.sql(\"\"\"\n                              |SELECT\n                              |  _1 - _2\n                              |  from tbl\n                              |  \"\"\".stripMargin)\n        checkSparkAnswerMaybeThrows(res) match {\n          case (Some(sparkExc), Some(cometExc)) =>\n            assert(cometExc.getMessage.contains(ARITHMETIC_OVERFLOW_EXCEPTION_MSG))\n            assert(sparkExc.getMessage.contains(\"overflow\"))\n          case _ => fail(\"Exception should be thrown\")\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for multiply\") {\n    val data = Seq((Integer.MAX_VALUE, 10))\n    withSQLConf(SQLConf.ANSI_ENABLED.key -> \"true\") {\n      withParquetTable(data, \"tbl\") {\n        val res = spark.sql(\"\"\"\n                              |SELECT\n                              |  _1 * _2\n                              |  from tbl\n                              |  \"\"\".stripMargin)\n\n        checkSparkAnswerMaybeThrows(res) match {\n          case (Some(sparkExc), Some(cometExc)) =>\n            assert(cometExc.getMessage.contains(ARITHMETIC_OVERFLOW_EXCEPTION_MSG))\n            assert(sparkExc.getMessage.contains(\"overflow\"))\n          case _ => fail(\"Exception should be thrown\")\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for divide (division by zero)\") {\n    val data = Seq((Integer.MIN_VALUE, 0))\n    withSQLConf(SQLConf.ANSI_ENABLED.key -> \"true\") {\n      withParquetTable(data, \"tbl\") {\n        val res = spark.sql(\"\"\"\n                              |SELECT\n                              |  _1 / _2\n                              |  from tbl\n                              |  \"\"\".stripMargin)\n\n     
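   // Under ANSI mode both Spark and Comet are expected to fail with a divide-by-zero error\n     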
   checkSparkAnswerMaybeThrows(res) match {\n          case (Some(sparkExc), Some(cometExc)) =>\n            assert(cometExc.getMessage.contains(DIVIDE_BY_ZERO_EXCEPTION_MSG))\n            assert(sparkExc.getMessage.contains(\"Division by zero\"))\n          case _ => fail(\"Exception should be thrown\")\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for divide (division by zero) float division\") {\n    val data = Seq((Float.MinPositiveValue, 0.0))\n    withSQLConf(SQLConf.ANSI_ENABLED.key -> \"true\") {\n      withParquetTable(data, \"tbl\") {\n        val res = spark.sql(\"\"\"\n                              |SELECT\n                              |  _1 / _2\n                              |  from tbl\n                              |  \"\"\".stripMargin)\n\n        checkSparkAnswerMaybeThrows(res) match {\n          case (Some(sparkExc), Some(cometExc)) =>\n            assert(cometExc.getMessage.contains(DIVIDE_BY_ZERO_EXCEPTION_MSG))\n            assert(sparkExc.getMessage.contains(\"Division by zero\"))\n          case _ => fail(\"Exception should be thrown\")\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for integral divide (division by zero)\") {\n    val data = Seq((Integer.MAX_VALUE, 0))\n    Seq(\"true\", \"false\").foreach { p =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> p) {\n        withParquetTable(data, \"tbl\") {\n          val res = spark.sql(\"\"\"\n                |SELECT\n                |  _1 div _2\n                |  from tbl\n                |  \"\"\".stripMargin)\n\n          checkSparkAnswerMaybeThrows(res) match {\n            case (Some(sparkException), Some(cometException)) =>\n              assert(sparkException.getMessage.contains(DIVIDE_BY_ZERO_EXCEPTION_MSG))\n              assert(cometException.getMessage.contains(DIVIDE_BY_ZERO_EXCEPTION_MSG))\n            case (None, None) => checkSparkAnswerAndOperator(res)\n            case (None, Some(ex)) =>\n              fail(\n                \"Comet threw an exception but Spark did not. Comet exception: \" + ex.getMessage)\n            case (Some(sparkException), None) =>\n              fail(\n                \"Spark threw an exception but Comet did not. Spark exception: \" +\n                  sparkException.getMessage)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for round function\") {\n    Seq(\n      (\n        Int.MaxValue,\n        Int.MinValue,\n        Long.MinValue,\n        Long.MaxValue,\n        Byte.MinValue,\n        Byte.MaxValue,\n        Short.MinValue,\n        Short.MaxValue)).foreach { value =>\n      val data = Seq(value)\n      withParquetTable(data, \"tbl\") {\n        Seq(-1000, -100, -10, -1, 0, 1, 10, 100, 1000).foreach { scale =>\n          Seq(true, false).foreach { ansi =>\n            withSQLConf(SQLConf.ANSI_ENABLED.key -> ansi.toString) {\n              val res = spark.sql(s\"SELECT round(_1, $scale) from tbl\")\n              checkSparkAnswerMaybeThrows(res) match {\n                case (Some(sparkException), Some(cometException)) =>\n                  assert(sparkException.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n                  assert(cometException.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n                case (None, None) => checkSparkAnswerAndOperator(res)\n                case (None, Some(ex)) =>\n                  fail(\"Comet threw an exception but Spark did not. 
Comet exception: \" + ex.getMessage)\n                case (Some(sparkException), None) =>\n                  fail(\"Spark threw an exception but Comet did not. Spark exception: \" +\n                    sparkException.getMessage)\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"test integral divide overflow for decimal\") {\n    // All inserted values produce a quotient > Decimal(38,0).max (~1e38), so they overflow\n    // the intermediate decimal result type.  In legacy/try mode both Spark and Comet return\n    // null; in ANSI mode both must throw NUMERIC_VALUE_OUT_OF_RANGE.\n    Seq(true, false).foreach { ansiMode =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiMode.toString) {\n        withTable(\"t1\") {\n          sql(\"create table t1(a decimal(38,0), b decimal(2,2)) using parquet\")\n          sql(\n            \"insert into t1 values(-62672277069777110394022909049981876593,-0.40),\" +\n              \" (-68299431870253176399167726913574455270,-0.22), (-77532633078952291817347741106477071062,0.36),\" +\n              \" (-79918484954351746825313746420585672848,0.44), (54400354300704342908577384819323710194,0.18),\" +\n              \" (78585488402645143056239590008272527352,-0.51)\")\n          if (ansiMode) {\n            // In ANSI mode the overflow must surface as an exception in both Spark and Comet.\n            checkSparkAnswerMaybeThrows(sql(\"select a div b from t1\")) match {\n              case (Some(_), Some(_)) => // expected: both throw\n              case (None, None) =>\n                fail(\n                  \"Expected both Spark and Comet to throw for decimal integral divide overflow \" +\n                    \"in ANSI mode, but neither threw\")\n              case (Some(sparkExc), None) =>\n                fail(\"Spark threw but Comet did not. Spark exception: \" + sparkExc.getMessage)\n              case (None, Some(cometExc)) =>\n                fail(\"Comet threw but Spark did not. 
Comet exception: \" + cometExc.getMessage)\n            }\n          } else {\n            // In legacy mode overflow produces null; results must match Spark exactly.\n            checkSparkAnswerAndOperator(\"select a div b from t1\")\n          }\n        }\n      }\n    }\n  }\n\n  private def testOnShuffledRangeWithRandomParameters(testLogic: DataFrame => Unit): Unit = {\n    val partitionsNumber = Random.nextInt(10) + 1\n    val rowsNumber = Random.nextInt(500)\n    // use this value to have both single-batch and multi-batch partitions\n    val cometBatchSize = math.max(1, math.floor(rowsNumber.toDouble / partitionsNumber).toInt)\n    withSQLConf(\"spark.comet.batchSize\" -> cometBatchSize.toString) {\n      withParquetDataFrame((0 until rowsNumber).map(Tuple1.apply)) { df =>\n        testLogic(df.repartition(partitionsNumber))\n      }\n    }\n  }\n\n  test(\"rand expression with random parameters\") {\n    testOnShuffledRangeWithRandomParameters { df =>\n      val seed = Random.nextLong()\n      val dfWithRandParameters = df.withColumn(\"rnd\", rand(seed))\n      checkSparkAnswerAndOperator(dfWithRandParameters)\n      val dfWithOverflowSeed = df.withColumn(\"rnd\", rand(Long.MaxValue))\n      checkSparkAnswerAndOperator(dfWithOverflowSeed)\n      val dfWithNullSeed = df.selectExpr(\"_1\", \"rand(null) as rnd\")\n      checkSparkAnswerAndOperator(dfWithNullSeed)\n    }\n  }\n\n  test(\"randn expression with random parameters\") {\n    testOnShuffledRangeWithRandomParameters { df =>\n      val seed = Random.nextLong()\n      val dfWithRandParameters = df.withColumn(\"randn\", randn(seed))\n      checkSparkAnswerAndOperatorWithTol(dfWithRandParameters)\n      val dfWithOverflowSeed = df.withColumn(\"randn\", randn(Long.MaxValue))\n      checkSparkAnswerAndOperatorWithTol(dfWithOverflowSeed)\n      val dfWithNullSeed = df.selectExpr(\"_1\", \"randn(null) as randn\")\n      checkSparkAnswerAndOperatorWithTol(dfWithNullSeed)\n    }\n  }\n\n  test(\"spark_partition_id expression on random dataset\") {\n    testOnShuffledRangeWithRandomParameters { df =>\n      val dfWithRandParameters =\n        df.withColumn(\"pid1\", spark_partition_id())\n          .repartition(3)\n          .withColumn(\"pid2\", spark_partition_id())\n      checkSparkAnswerAndOperator(dfWithRandParameters)\n    }\n  }\n\n  test(\"monotonically_increasing_id expression on random dataset\") {\n    testOnShuffledRangeWithRandomParameters { df =>\n      val dfWithRandParameters =\n        df.withColumn(\"mid1\", monotonically_increasing_id())\n          .repartition(3)\n          .withColumn(\"mid2\", monotonically_increasing_id())\n      checkSparkAnswerAndOperator(dfWithRandParameters)\n    }\n  }\n\n  test(\"multiple nondetermenistic expressions with shuffle\") {\n    testOnShuffledRangeWithRandomParameters { df =>\n      val seed1 = Random.nextLong()\n      val seed2 = Random.nextLong()\n      val complexRandDf = df\n        .withColumn(\"rand1\", rand(seed1))\n        .withColumn(\"randn1\", randn(seed1))\n        .repartition(2, col(\"_1\"))\n        .sortWithinPartitions(\"_1\")\n        .withColumn(\"rand2\", rand(seed2))\n        .withColumn(\"randn2\", randn(seed2))\n      checkSparkAnswerAndOperatorWithTol(complexRandDf)\n    }\n  }\n\n  test(\"vectorized reader: missing all struct fields\") {\n    Seq(true, false).foreach { offheapEnabled =>\n      withSQLConf(\n        SQLConf.USE_V1_SOURCE_LIST.key -> \"parquet\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ENABLED.key -> 
\"true\",\n        CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"false\",\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> \"native_datafusion\",\n        SQLConf.PARQUET_VECTORIZED_READER_NESTED_COLUMN_ENABLED.key -> \"true\",\n        SQLConf.COLUMN_VECTOR_OFFHEAP_ENABLED.key -> offheapEnabled.toString) {\n        val data = Seq(Tuple1((1, \"a\")), Tuple1((2, null)), Tuple1(null))\n\n        val readSchema = new StructType().add(\n          \"_1\",\n          new StructType()\n            .add(\"_3\", IntegerType, nullable = false)\n            .add(\"_4\", StringType, nullable = false),\n          nullable = false)\n\n        withParquetFile(data) { file =>\n          checkAnswer(\n            spark.read.schema(readSchema).parquet(file),\n            Row(null) :: Row(null) :: Row(null) :: Nil)\n        }\n      }\n    }\n  }\n\n  test(\"test length function\") {\n    withTable(\"t1\") {\n      sql(\n        \"create table t1 using parquet as select cast(id as string) as c1, cast(id as binary) as c2 from range(10)\")\n      // FIXME: Change checkSparkAnswer to checkSparkAnswerAndOperator after resolving\n      //  https://github.com/apache/datafusion-comet/issues/2348\n      checkSparkAnswer(\"select length(c1), length(c2) AS x FROM t1 ORDER BY c1\")\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/2813\n  test(\"make decimal using DataFrame API - integer\") {\n    withTable(\"t1\") {\n      sql(\"create table t1 using parquet as select 123456 as c1 from range(1)\")\n\n      withSQLConf(\n        SQLConf.USE_V1_SOURCE_LIST.key -> \"parquet\",\n        SQLConf.ANSI_ENABLED.key -> \"false\",\n        SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\",\n        SQLConf.ADAPTIVE_OPTIMIZER_EXCLUDED_RULES.key -> \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n\n        val df = sql(\"select * from t1\")\n        val makeDecimalColumn = createMakeDecimalColumn(df.col(\"c1\").expr, 3, 0)\n        val df1 = df.withColumn(\"result\", makeDecimalColumn)\n\n        checkSparkAnswerAndFallbackReason(df1, \"Unsupported input data type: IntegerType\")\n      }\n    }\n  }\n\n  test(\"make decimal using DataFrame API - long\") {\n    withTable(\"t1\") {\n      sql(\"create table t1 using parquet as select cast(123456 as long) as c1 from range(1)\")\n\n      withSQLConf(\n        SQLConf.USE_V1_SOURCE_LIST.key -> \"parquet\",\n        SQLConf.ANSI_ENABLED.key -> \"false\",\n        SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\",\n        SQLConf.ADAPTIVE_OPTIMIZER_EXCLUDED_RULES.key -> \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n\n        val df = sql(\"select * from t1\")\n        val makeDecimalColumn = createMakeDecimalColumn(df.col(\"c1\").expr, 3, 0)\n        val df1 = df.withColumn(\"result\", makeDecimalColumn)\n\n        checkSparkAnswerAndOperator(df1)\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometFuzzAggregateSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.apache.comet.DataTypeSupport.isComplexType\n\nclass CometFuzzAggregateSuite extends CometFuzzTestBase {\n\n  test(\"count distinct - simple columns\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    for (col <- df.schema.fields.filterNot(f => isComplexType(f.dataType)).map(_.name)) {\n      val sql = s\"SELECT count(distinct $col) FROM t1\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == collectNativeScans(cometPlan).length)\n\n      checkSparkAnswerAndOperator(sql)\n    }\n  }\n\n  // Aggregate by complex columns not yet supported\n  // https://github.com/apache/datafusion-comet/issues/2382\n  test(\"count distinct - complex columns\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    for (col <- df.schema.fields.filter(f => isComplexType(f.dataType)).map(_.name)) {\n      val sql = s\"SELECT count(distinct $col) FROM t1\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == collectNativeScans(cometPlan).length)\n    }\n  }\n\n  test(\"count distinct group by multiple column - simple columns \") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    for (col <- df.schema.fields.filterNot(f => isComplexType(f.dataType)).map(_.name)) {\n      val sql = s\"SELECT c1, c2, c3, count(distinct $col) FROM t1 group by c1, c2, c3\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == collectNativeScans(cometPlan).length)\n\n      checkSparkAnswerAndOperator(sql)\n    }\n  }\n\n  // Aggregate by complex columns not yet supported\n  // https://github.com/apache/datafusion-comet/issues/2382\n  test(\"count distinct group by multiple column - complex columns \") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    for (col <- df.schema.fields.filter(f => isComplexType(f.dataType)).map(_.name)) {\n      val sql = s\"SELECT c1, c2, c3, count(distinct $col) FROM t1 group by c1, c2, c3\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == collectNativeScans(cometPlan).length)\n    }\n  }\n\n  // COUNT(distinct x, y, z, ...) 
not yet supported\n  // https://github.com/apache/datafusion-comet/issues/2292\n  test(\"count distinct multiple values and group by multiple column\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    for (col <- df.columns) {\n      val sql = s\"SELECT c1, c2, c3, count(distinct $col, c4, c5) FROM t1 group by c1, c2, c3\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == collectNativeScans(cometPlan).length)\n    }\n  }\n\n  test(\"count(*) group by single column\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    for (col <- df.columns) {\n      val sql = s\"SELECT $col, count(*) FROM t1 GROUP BY $col ORDER BY $col\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == collectNativeScans(cometPlan).length)\n    }\n  }\n\n  test(\"count(col) group by single column\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    val groupCol = df.columns.head\n    for (col <- df.columns.drop(1)) {\n      val sql = s\"SELECT $groupCol, count($col) FROM t1 GROUP BY $groupCol ORDER BY $groupCol\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == collectNativeScans(cometPlan).length)\n    }\n  }\n\n  test(\"count(col1, col2, ..) group by single column\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    val groupCol = df.columns.head\n    val otherCol = df.columns.drop(1)\n    val sql = s\"SELECT $groupCol, count(${otherCol.mkString(\", \")}) FROM t1 \" +\n      s\"GROUP BY $groupCol ORDER BY $groupCol\"\n    val (_, cometPlan) = checkSparkAnswer(sql)\n    assert(1 == collectNativeScans(cometPlan).length)\n  }\n\n  test(\"min/max aggregate\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    for (col <- df.columns) {\n      // cannot run fully native due to HashAggregate\n      val sql = s\"SELECT min($col), max($col) FROM t1\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == collectNativeScans(cometPlan).length)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometFuzzIcebergBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.io.File\nimport java.nio.file.Files\nimport java.text.SimpleDateFormat\n\nimport scala.util.Random\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.comet.CometIcebergNativeScanExec\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.DecimalType\n\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator, SchemaGenOptions}\n\nclass CometFuzzIcebergBase extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  var warehouseDir: File = null\n  val icebergTableName: String = \"hadoop_catalog.db.fuzz_test\"\n\n  // Skip these tests if Iceberg is not available in classpath\n  private def icebergAvailable: Boolean = {\n    try {\n      Class.forName(\"org.apache.iceberg.catalog.Catalog\")\n      true\n    } catch {\n      case _: ClassNotFoundException => false\n    }\n  }\n\n  private def isIcebergVersionLessThan(targetVersion: String): Boolean = {\n    try {\n      val icebergVersion = org.apache.iceberg.IcebergBuild.version()\n      // Parse version strings like \"1.5.2\" or \"1.6.0-SNAPSHOT\"\n      val current = parseVersion(icebergVersion)\n      val target = parseVersion(targetVersion)\n      // Compare tuples: first by major, then minor, then patch\n      if (current._1 != target._1) current._1 < target._1\n      else if (current._2 != target._2) current._2 < target._2\n      else current._3 < target._3\n    } catch {\n      case _: Exception =>\n        // If we can't determine the version, assume it's old to be safe\n        true\n    }\n  }\n\n  private def parseVersion(version: String): (Int, Int, Int) = {\n    val parts = version.split(\"[.-]\").take(3).map(_.filter(_.isDigit))\n    val major = if (parts.length > 0 && parts(0).nonEmpty) parts(0).toInt else 0\n    val minor = if (parts.length > 1 && parts(1).nonEmpty) parts(1).toInt else 0\n    val patch = if (parts.length > 2 && parts(2).nonEmpty) parts(2).toInt else 0\n    (major, minor, patch)\n  }\n\n  /**\n   * We use Asia/Kathmandu because it has a non-zero number of minutes as the offset, so is an\n   * interesting edge case. 
Also, this timezone tends to be different from the default system\n   * timezone.\n   *\n   * Represents UTC+5:45\n   */\n  val defaultTimezone = \"Asia/Kathmandu\"\n\n  override def beforeAll(): Unit = {\n    super.beforeAll()\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n    warehouseDir = Files.createTempDirectory(\"comet-iceberg-fuzz-test\").toFile\n    val random = new Random(42)\n    withSQLConf(\n      \"spark.sql.catalog.hadoop_catalog\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n      \"spark.sql.catalog.hadoop_catalog.type\" -> \"hadoop\",\n      \"spark.sql.catalog.hadoop_catalog.warehouse\" -> warehouseDir.getAbsolutePath,\n      CometConf.COMET_ENABLED.key -> \"false\",\n      SQLConf.SESSION_LOCAL_TIMEZONE.key -> defaultTimezone) {\n\n      val schema = FuzzDataGenerator.generateSchema(\n        SchemaGenOptions(\n          generateArray = true,\n          generateStruct = true,\n          primitiveTypes = SchemaGenOptions.defaultPrimitiveTypes.filterNot { dataType =>\n            // Disable decimals - iceberg-rust doesn't support FIXED_LEN_BYTE_ARRAY in page index yet\n            dataType.isInstanceOf[DecimalType] ||\n            // Disable ByteType and ShortType for Iceberg < 1.6.0\n            // Fixed in https://github.com/apache/iceberg/pull/10349\n            (isIcebergVersionLessThan(\n              \"1.6.0\") && (dataType == org.apache.spark.sql.types.DataTypes.ByteType ||\n              dataType == org.apache.spark.sql.types.DataTypes.ShortType))\n          }))\n\n      val options =\n        DataGenOptions(\n          generateNegativeZero = false,\n          baseDate =\n            new SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss\").parse(\"2024-05-25 12:34:56\").getTime)\n\n      val df = FuzzDataGenerator.generateDataFrame(random, spark, schema, 1000, options)\n      df.writeTo(icebergTableName).using(\"iceberg\").create()\n    }\n  }\n\n  protected override def afterAll(): Unit = {\n    try {\n      spark.sql(s\"DROP TABLE IF EXISTS $icebergTableName\")\n    } catch {\n      case _: Exception =>\n    }\n\n    if (warehouseDir != null) {\n      def deleteRecursively(file: File): Unit = {\n        if (file.isDirectory) {\n          file.listFiles().foreach(deleteRecursively)\n        }\n        file.delete()\n      }\n\n      deleteRecursively(warehouseDir)\n    }\n    super.afterAll()\n  }\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(\n        \"spark.sql.catalog.hadoop_catalog\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.hadoop_catalog.type\" -> \"hadoop\",\n        \"spark.sql.catalog.hadoop_catalog.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n        testFun\n      }\n    }\n  }\n\n  def collectIcebergNativeScans(plan: SparkPlan): Seq[CometIcebergNativeScanExec] = {\n    collect(plan) { case scan: CometIcebergNativeScanExec =>\n      scan\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometFuzzIcebergSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.DataTypeSupport.isComplexType\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator, ParquetGenerator, SchemaGenOptions}\n\nclass CometFuzzIcebergSuite extends CometFuzzIcebergBase {\n\n  test(\"select *\") {\n    val sql = s\"SELECT * FROM $icebergTableName\"\n    val (_, cometPlan) = checkSparkAnswer(sql)\n    assert(1 == collectIcebergNativeScans(cometPlan).length)\n  }\n\n  test(\"select * with limit\") {\n    val sql = s\"SELECT * FROM $icebergTableName LIMIT 500\"\n    val (_, cometPlan) = checkSparkAnswer(sql)\n    assert(1 == collectIcebergNativeScans(cometPlan).length)\n  }\n\n  test(\"order by single column\") {\n    val df = spark.table(icebergTableName)\n    for (col <- df.columns) {\n      val sql = s\"SELECT $col FROM $icebergTableName ORDER BY $col\"\n      // cannot run fully natively due to range partitioning and sort\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == collectIcebergNativeScans(cometPlan).length)\n    }\n  }\n\n  test(\"order by multiple columns\") {\n    val df = spark.table(icebergTableName)\n    val allCols = df.columns.mkString(\",\")\n    val sql = s\"SELECT $allCols FROM $icebergTableName ORDER BY $allCols\"\n    // cannot run fully natively due to range partitioning and sort\n    val (_, cometPlan) = checkSparkAnswer(sql)\n    assert(1 == collectIcebergNativeScans(cometPlan).length)\n  }\n\n  test(\"order by random columns\") {\n    val df = spark.table(icebergTableName)\n\n    for (_ <- 1 to 10) {\n      // We only do order by permutations of primitive types to exercise native shuffle's\n      // RangePartitioning which only supports those types.\n      val primitiveColumns =\n        df.schema.fields.filterNot(f => isComplexType(f.dataType)).map(_.name)\n      val shuffledPrimitiveCols = Random.shuffle(primitiveColumns.toList)\n      val randomSize = Random.nextInt(shuffledPrimitiveCols.length) + 1\n      val randomColsSubset = shuffledPrimitiveCols.take(randomSize).toArray.mkString(\",\")\n      val sql = s\"SELECT $randomColsSubset FROM $icebergTableName ORDER BY $randomColsSubset\"\n      checkSparkAnswerAndOperator(sql)\n    }\n  }\n\n  test(\"distribute by single column (complex types)\") {\n    val df = spark.table(icebergTableName)\n    val columns = df.schema.fields.filter(f => isComplexType(f.dataType)).map(_.name)\n    for (col <- columns) {\n      // DISTRIBUTE BY is 
equivalent to df.repartition($col) and uses hash partitioning\n      val sql = s\"SELECT $col FROM $icebergTableName DISTRIBUTE BY $col\"\n      val resultDf = spark.sql(sql)\n      resultDf.collect()\n      // check for Comet shuffle\n      val plan =\n        resultDf.queryExecution.executedPlan.asInstanceOf[AdaptiveSparkPlanExec].executedPlan\n      val cometShuffleExchanges = collectCometShuffleExchanges(plan)\n      // Iceberg native scan supports complex types\n      assert(cometShuffleExchanges.length == 1)\n    }\n  }\n\n  test(\"shuffle supports all types\") {\n    val df = spark.table(icebergTableName)\n    val df2 = df.repartition(8, df.col(\"c0\")).sort(\"c1\")\n    df2.collect()\n    val cometShuffles = collectCometShuffleExchanges(df2.queryExecution.executedPlan)\n    // Iceberg native scan supports complex types\n    assert(cometShuffles.length == 2)\n  }\n\n  test(\"join\") {\n    val df = spark.table(icebergTableName)\n    df.createOrReplaceTempView(\"t1\")\n    df.createOrReplaceTempView(\"t2\")\n    // Filter out complex types - iceberg-rust can't create predicates for struct/array/map equality\n    val primitiveColumns = df.schema.fields.filterNot(f => isComplexType(f.dataType)).map(_.name)\n    for (col <- primitiveColumns) {\n      // cannot run fully native due to HashAggregate\n      val sql = s\"SELECT count(*) FROM t1 JOIN t2 ON t1.$col = t2.$col\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(2 == collectIcebergNativeScans(cometPlan).length)\n    }\n  }\n\n  test(\"decode\") {\n    val df = spark.table(icebergTableName)\n    // We want to make sure that the schema generator wasn't modified to accidentally omit\n    // BinaryType, since then this test would not run any queries and silently pass.\n    var testedBinary = false\n    for (field <- df.schema.fields if field.dataType == BinaryType) {\n      testedBinary = true\n      // Intentionally use odd capitalization of 'utf-8' to test normalization.\n      val sql = s\"SELECT decode(${field.name}, 'utF-8') FROM $icebergTableName\"\n      checkSparkAnswerAndOperator(sql)\n    }\n    assert(testedBinary)\n  }\n\n  test(\"regexp_replace\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(\"regexp\") -> \"true\") {\n      val df = spark.table(icebergTableName)\n      // We want to make sure that the schema generator wasn't modified to accidentally omit\n      // StringType, since then this test would not run any queries and silently pass.\n      var testedString = false\n      for (field <- df.schema.fields if field.dataType == StringType) {\n        testedString = true\n        val sql = s\"SELECT regexp_replace(${field.name}, 'a', 'b') FROM $icebergTableName\"\n        checkSparkAnswerAndOperator(sql)\n      }\n      assert(testedString)\n    }\n  }\n\n  test(\"Iceberg temporal types written as INT96\") {\n    testIcebergTemporalTypes(ParquetOutputTimestampType.INT96)\n  }\n\n  test(\"Iceberg temporal types written as TIMESTAMP_MICROS\") {\n    testIcebergTemporalTypes(ParquetOutputTimestampType.TIMESTAMP_MICROS)\n  }\n\n  test(\"Iceberg temporal types written as TIMESTAMP_MILLIS\") {\n    testIcebergTemporalTypes(ParquetOutputTimestampType.TIMESTAMP_MILLIS)\n  }\n\n  private def testIcebergTemporalTypes(\n      outputTimestampType: ParquetOutputTimestampType.Value,\n      generateArray: Boolean = true,\n      generateStruct: Boolean = true): Unit = {\n\n    val schema = FuzzDataGenerator.generateSchema(\n      SchemaGenOptions(\n        generateArray = generateArray,\n        generateStruct = 
generateStruct,\n        primitiveTypes = SchemaGenOptions.defaultPrimitiveTypes.filterNot { dataType =>\n          // Disable decimals - iceberg-rust doesn't support FIXED_LEN_BYTE_ARRAY in page index yet\n          dataType.isInstanceOf[DecimalType]\n        }))\n\n    val options =\n      DataGenOptions(generateNegativeZero = false)\n\n    withTempPath { filename =>\n      val random = new Random(42)\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"false\",\n        SQLConf.PARQUET_OUTPUT_TIMESTAMP_TYPE.key -> outputTimestampType.toString,\n        SQLConf.SESSION_LOCAL_TIMEZONE.key -> defaultTimezone) {\n        ParquetGenerator.makeParquetFile(random, spark, filename.toString, schema, 100, options)\n      }\n\n      Seq(defaultTimezone, \"UTC\", \"America/Denver\").foreach { tz =>\n        Seq(true, false).foreach { inferTimestampNtzEnabled =>\n          Seq(true, false).foreach { int96TimestampConversion =>\n            Seq(true, false).foreach { int96AsTimestamp =>\n              withSQLConf(\n                CometConf.COMET_ENABLED.key -> \"true\",\n                SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz,\n                SQLConf.PARQUET_INT96_AS_TIMESTAMP.key -> int96AsTimestamp.toString,\n                SQLConf.PARQUET_INT96_TIMESTAMP_CONVERSION.key -> int96TimestampConversion.toString,\n                SQLConf.PARQUET_INFER_TIMESTAMP_NTZ_ENABLED.key -> inferTimestampNtzEnabled.toString) {\n\n                val df = spark.table(icebergTableName)\n\n                Seq(defaultTimezone, \"UTC\", \"America/Denver\").foreach { tz =>\n                  withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n                    def hasTemporalType(t: DataType): Boolean = t match {\n                      case DataTypes.DateType | DataTypes.TimestampType |\n                          DataTypes.TimestampNTZType =>\n                        true\n                      case t: StructType => t.exists(f => hasTemporalType(f.dataType))\n                      case t: ArrayType => hasTemporalType(t.elementType)\n                      case _ => false\n                    }\n\n                    val columns =\n                      df.schema.fields.filter(f => hasTemporalType(f.dataType)).map(_.name)\n\n                    for (col <- columns) {\n                      checkSparkAnswer(s\"SELECT $col FROM $icebergTableName ORDER BY $col\")\n                    }\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  def collectCometShuffleExchanges(plan: org.apache.spark.sql.execution.SparkPlan)\n      : Seq[org.apache.spark.sql.execution.SparkPlan] = {\n    collect(plan) {\n      case exchange: org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec =>\n        exchange\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometFuzzMathSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.apache.spark.sql.types.{DecimalType, IntegerType, LongType}\n\nclass CometFuzzMathSuite extends CometFuzzTestBase {\n\n  for (op <- Seq(\"+\", \"-\", \"*\", \"/\", \"div\")) {\n    test(s\"integer math: $op\") {\n      val df = spark.read.parquet(filename)\n      val cols = df.schema.fields\n        .filter(_.dataType match {\n          case _: IntegerType => true\n          case _: LongType => true\n          case _ => false\n        })\n        .map(_.name)\n      df.createOrReplaceTempView(\"t1\")\n      val sql =\n        s\"SELECT ${cols(0)} $op ${cols(1)} FROM t1 ORDER BY ${cols(0)}, ${cols(1)} LIMIT 500\"\n      if (op == \"div\") {\n        // cast(cast(c3#1975 as bigint) as decimal(19,0)) is not fully compatible with Spark (No overflow check)\n        checkSparkAnswer(sql)\n      } else {\n        checkSparkAnswerAndOperator(sql)\n      }\n    }\n  }\n\n  for (op <- Seq(\"+\", \"-\", \"*\", \"/\", \"div\")) {\n    test(s\"decimal math: $op\") {\n      val df = spark.read.parquet(filename)\n      val cols = df.schema.fields.filter(_.dataType.isInstanceOf[DecimalType]).map(_.name)\n      df.createOrReplaceTempView(\"t1\")\n      val sql =\n        s\"SELECT ${cols(0)} $op ${cols(1)} FROM t1 ORDER BY ${cols(0)}, ${cols(1)} LIMIT 500\"\n      checkSparkAnswerAndOperator(sql)\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometFuzzTestBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.io.File\nimport java.text.SimpleDateFormat\n\nimport scala.util.Random\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.commons.io.FileUtils\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator, ParquetGenerator, SchemaGenOptions}\n\nclass CometFuzzTestBase extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  var filename: String = null\n\n  /** Filename for data file with deeply nested complex types */\n  var complexTypesFilename: String = null\n\n  /**\n   * We use Asia/Kathmandu because it has a non-zero number of minutes as the offset, so is an\n   * interesting edge case. 
Also, this timezone tends to be different from the default system\n   * timezone.\n   *\n   * Represents UTC+5:45\n   */\n  val defaultTimezone = \"Asia/Kathmandu\"\n\n  override def beforeAll(): Unit = {\n    super.beforeAll()\n    val tempDir = System.getProperty(\"java.io.tmpdir\")\n    val random = new Random(42)\n    val dataGenOptions = DataGenOptions(\n      generateNegativeZero = false,\n      // override base date due to known issues with experimental scans\n      baseDate = new SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss\").parse(\"2024-05-25 12:34:56\").getTime)\n\n    // generate Parquet file with primitives, structs, and arrays, but no maps\n    // and no nested complex types\n    filename = s\"$tempDir/CometFuzzTestSuite_${System.currentTimeMillis()}.parquet\"\n    withSQLConf(\n      CometConf.COMET_ENABLED.key -> \"false\",\n      SQLConf.SESSION_LOCAL_TIMEZONE.key -> defaultTimezone) {\n      val schemaGenOptions =\n        SchemaGenOptions(generateArray = true, generateStruct = true)\n      ParquetGenerator.makeParquetFile(\n        random,\n        spark,\n        filename,\n        1000,\n        schemaGenOptions,\n        dataGenOptions)\n    }\n\n    // generate Parquet file with complex nested types\n    complexTypesFilename =\n      s\"$tempDir/CometFuzzTestSuite_nested_${System.currentTimeMillis()}.parquet\"\n    withSQLConf(\n      CometConf.COMET_ENABLED.key -> \"false\",\n      SQLConf.SESSION_LOCAL_TIMEZONE.key -> defaultTimezone) {\n      val schemaGenOptions =\n        SchemaGenOptions(generateArray = true, generateStruct = true, generateMap = true)\n      val schema = FuzzDataGenerator.generateNestedSchema(\n        random,\n        numCols = 10,\n        minDepth = 2,\n        maxDepth = 4,\n        options = schemaGenOptions)\n      ParquetGenerator.makeParquetFile(\n        random,\n        spark,\n        complexTypesFilename,\n        schema,\n        1000,\n        dataGenOptions)\n    }\n\n  }\n\n  protected override def afterAll(): Unit = {\n    super.afterAll()\n    FileUtils.deleteDirectory(new File(filename))\n    FileUtils.deleteDirectory(new File(complexTypesFilename))\n  }\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    Seq((\"native\", \"false\"), (\"jvm\", \"true\"), (\"jvm\", \"false\")).foreach {\n      case (shuffleMode, nativeC2R) =>\n        super.test(testName + s\" ($shuffleMode shuffle, nativeC2R=$nativeC2R)\", testTags: _*) {\n          withSQLConf(\n            CometConf.COMET_NATIVE_COLUMNAR_TO_ROW_ENABLED.key -> nativeC2R,\n            CometConf.COMET_PARQUET_UNSIGNED_SMALL_INT_CHECK.key -> \"false\",\n            CometConf.COMET_SHUFFLE_MODE.key -> shuffleMode) {\n            testFun\n          }\n        }\n    }\n  }\n\n  def collectNativeScans(plan: SparkPlan): Seq[SparkPlan] = {\n    collect(plan) {\n      case scan: CometScanExec => scan\n      case scan: CometNativeScanExec => scan\n    }\n  }\n\n  def collectCometShuffleExchanges(plan: SparkPlan): Seq[SparkPlan] = {\n    collect(plan) { case exchange: CometShuffleExchangeExec =>\n      exchange\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometFuzzTestSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.apache.commons.codec.binary.Hex\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.internal.SQLConf.ParquetOutputTimestampType\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.DataTypeSupport.isComplexType\nimport org.apache.comet.testing.{DataGenOptions, ParquetGenerator}\nimport org.apache.comet.testing.FuzzDataGenerator.{doubleNaNLiteral, floatNaNLiteral}\n\nclass CometFuzzTestSuite extends CometFuzzTestBase {\n\n  test(\"select *\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    val sql = \"SELECT * FROM t1\"\n    checkSparkAnswerAndOperator(sql)\n  }\n\n  test(\"select * with deeply nested complex types\") {\n    val df = spark.read.parquet(complexTypesFilename)\n    df.createOrReplaceTempView(\"t1\")\n    val sql = \"SELECT * FROM t1\"\n    checkSparkAnswerAndOperator(sql)\n  }\n\n  test(\"select * with limit\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    val sql = \"SELECT * FROM t1 LIMIT 500\"\n    checkSparkAnswerAndOperator(sql)\n  }\n\n  test(\"select column with default value\") {\n    // This test fails in Spark's vectorized Parquet reader for DECIMAL(36,18) or BINARY default values.\n    withSQLConf(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\") {\n      // This test relies on two tables: 1) t1 the Parquet file generated by ParquetGenerator with random values, and\n      // 2) t2 is a new table created with one column which we add a second column with different types and random values.\n      // We use the schema and values of t1 to simplify random value generation for the default column value in t2.\n      val df = spark.read.parquet(filename)\n      df.createOrReplaceTempView(\"t1\")\n      val columns = df.schema.fields.filter(f => !isComplexType(f.dataType)).map(_.name)\n      for (col <- columns) {\n        // Select the first non-null value from our target column type.\n        val defaultValueRow =\n          spark.sql(s\"SELECT $col FROM t1 WHERE $col IS NOT NULL LIMIT 1\").collect()(0)\n        val defaultValueType = defaultValueRow.schema.fields(0).dataType.sql\n        // Construct the string for the default value based on the column type.\n        val defaultValueString = defaultValueType match {\n          // These explicit type definitions for TINYINT, SMALLINT, FLOAT, DOUBLE, and DATE are only needed for 3.4.\n          case \"TINYINT\" | \"SMALLINT\" =>\n            s\"$defaultValueType(${defaultValueRow.get(0)})\"\n          case \"FLOAT\" =>\n            
if (Float.NaN.equals(defaultValueRow.get(0))) {\n              floatNaNLiteral\n            } else {\n              s\"$defaultValueType(${defaultValueRow.get(0)})\"\n            }\n          case \"DOUBLE\" =>\n            if (Double.NaN.equals(defaultValueRow.get(0))) {\n              doubleNaNLiteral\n            } else {\n              s\"$defaultValueType(${defaultValueRow.get(0)})\"\n            }\n          case \"DATE\" => s\"$defaultValueType('${defaultValueRow.get(0)}')\"\n          case \"STRING\" => s\"'${defaultValueRow.get(0)}'\"\n          case \"TIMESTAMP\" | \"TIMESTAMP_NTZ\" => s\"TIMESTAMP '${defaultValueRow.get(0)}'\"\n          case \"BINARY\" =>\n            s\"X'${Hex.encodeHexString(defaultValueRow.get(0).asInstanceOf[Array[Byte]])}'\"\n          case _ => defaultValueRow.get(0).toString\n        }\n        // Create table t2 and then add the second column with our desired type and default value.\n        withTable(\"t2\") {\n          spark.sql(\"create table t2(col1 boolean) using parquet\")\n          spark.sql(\"insert into t2 values(true)\")\n          spark.sql(\n            s\"alter table t2 add column col2 $defaultValueType default $defaultValueString\")\n          // Verify that our default value matches Spark's answer\n          val sql = \"select col2 from t2\"\n          checkSparkAnswerAndOperator(sql)\n          // Verify that our default value matches what we originally selected out of t1.\n          if (defaultValueType == \"BINARY\") {\n            assert(\n              defaultValueRow\n                .get(0)\n                .asInstanceOf[Array[Byte]]\n                .sameElements(spark.sql(sql).collect()(0).get(0).asInstanceOf[Array[Byte]]))\n          } else {\n            assert(defaultValueRow.get(0).equals(spark.sql(sql).collect()(0).get(0)))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"order by single column\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    for (col <- df.columns) {\n      val sql = s\"SELECT $col FROM t1 ORDER BY $col\"\n      // cannot run fully natively due to range partitioning and sort\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(1 == collectNativeScans(cometPlan).length)\n    }\n  }\n\n  test(\"order by multiple columns\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    val allCols = df.columns.mkString(\",\")\n    val sql = s\"SELECT $allCols FROM t1 ORDER BY $allCols\"\n    // cannot run fully natively due to range partitioning and sort\n    val (_, cometPlan) = checkSparkAnswer(sql)\n    assert(1 == collectNativeScans(cometPlan).length)\n  }\n\n  test(\"order by random columns\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n\n    for (_ <- 1 to 10) {\n      // We only do order by permutations of primitive types to exercise native shuffle's\n      // RangePartitioning which only supports those types.\n      val primitiveColumns =\n        df.schema.fields.filterNot(f => isComplexType(f.dataType)).map(_.name)\n      val shuffledPrimitiveCols = Random.shuffle(primitiveColumns.toList)\n      val randomSize = Random.nextInt(shuffledPrimitiveCols.length) + 1\n      val randomColsSubset = shuffledPrimitiveCols.take(randomSize).toArray.mkString(\",\")\n      val sql = s\"SELECT $randomColsSubset FROM t1 ORDER BY $randomColsSubset\"\n      checkSparkAnswerAndOperator(sql)\n    }\n  }\n\n  test(\"distribute by single column (complex types)\") {\n    val df = 
spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    val columns = df.schema.fields.filter(f => isComplexType(f.dataType)).map(_.name)\n    for (col <- columns) {\n      // DISTRIBUTE BY is equivalent to df.repartition($col) and uses hash partitioning\n      val sql = s\"SELECT $col FROM t1 DISTRIBUTE BY $col\"\n      val df = spark.sql(sql)\n      df.collect()\n      // check for Comet shuffle\n      val plan = df.queryExecution.executedPlan.asInstanceOf[AdaptiveSparkPlanExec].executedPlan\n      val cometShuffleExchanges = collectCometShuffleExchanges(plan)\n      val expectedNumCometShuffles = CometConf.COMET_SHUFFLE_MODE.get() match {\n        case \"jvm\" =>\n          1\n        case \"native\" =>\n          // native shuffle does not support complex types as partitioning keys\n          0\n      }\n      assert(cometShuffleExchanges.length == expectedNumCometShuffles)\n    }\n  }\n\n  test(\"shuffle supports all types\") {\n    val df = spark.read.parquet(filename)\n    val df2 = df.repartition(8, df.col(\"c0\")).sort(\"c1\")\n    df2.collect()\n    val cometShuffles = collectCometShuffleExchanges(df2.queryExecution.executedPlan)\n    val expectedNumCometShuffles = CometConf.COMET_SHUFFLE_MODE.get() match {\n      case \"jvm\" =>\n        1\n      case \"native\" =>\n        2\n    }\n    assert(cometShuffles.length == expectedNumCometShuffles)\n  }\n\n  test(\"join\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    df.createOrReplaceTempView(\"t2\")\n    for (col <- df.columns) {\n      // cannot run fully native due to HashAggregate\n      val sql = s\"SELECT count(*) FROM t1 JOIN t2 ON t1.$col = t2.$col\"\n      val (_, cometPlan) = checkSparkAnswer(sql)\n      assert(2 == collectNativeScans(cometPlan).length)\n    }\n  }\n\n  test(\"decode\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    // We want to make sure that the schema generator wasn't modified to accidentally omit\n    // BinaryType, since then this test would not run any queries and silently pass.\n    var testedBinary = false\n    for (field <- df.schema.fields if field.dataType == BinaryType) {\n      testedBinary = true\n      // Intentionally use odd capitalization of 'utf-8' to test normalization.\n      val sql = s\"SELECT decode(${field.name}, 'utF-8') FROM t1\"\n      checkSparkAnswerAndOperator(sql)\n    }\n    assert(testedBinary)\n  }\n\n  test(\"regexp_replace\") {\n    withSQLConf(CometConf.getExprAllowIncompatConfigKey(\"regexp\") -> \"true\") {\n      val df = spark.read.parquet(filename)\n      df.createOrReplaceTempView(\"t1\")\n      // We want to make sure that the schema generator wasn't modified to accidentally omit\n      // StringType, since then this test would not run any queries and silently pass.\n      var testedString = false\n      for (field <- df.schema.fields if field.dataType == StringType) {\n        testedString = true\n        val sql = s\"SELECT regexp_replace(${field.name}, 'a', 'b') FROM t1\"\n        checkSparkAnswerAndOperator(sql)\n      }\n      assert(testedString)\n    }\n  }\n\n  test(\"Parquet temporal types written as INT96\") {\n    testParquetTemporalTypes(ParquetOutputTimestampType.INT96)\n  }\n\n  test(\"Parquet temporal types written as TIMESTAMP_MICROS\") {\n    testParquetTemporalTypes(ParquetOutputTimestampType.TIMESTAMP_MICROS)\n  }\n\n  test(\"Parquet temporal types written as TIMESTAMP_MILLIS\") {\n    testParquetTemporalTypes(ParquetOutputTimestampType.TIMESTAMP_MILLIS)\n 
 }\n\n  private def testParquetTemporalTypes(\n      outputTimestampType: ParquetOutputTimestampType.Value): Unit = {\n\n    val dataGenOptions = DataGenOptions(generateNegativeZero = false)\n\n    withTempPath { filename =>\n      val random = new Random(42)\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"false\",\n        SQLConf.PARQUET_OUTPUT_TIMESTAMP_TYPE.key -> outputTimestampType.toString,\n        SQLConf.SESSION_LOCAL_TIMEZONE.key -> defaultTimezone) {\n\n        // TODO test with MapType\n        // https://github.com/apache/datafusion-comet/issues/2945\n        val schema = StructType(\n          Seq(\n            StructField(\"c0\", DataTypes.DateType),\n            StructField(\"c1\", DataTypes.createArrayType(DataTypes.DateType)),\n            StructField(\n              \"c2\",\n              DataTypes.createStructType(Array(StructField(\"c3\", DataTypes.DateType))))))\n\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename.toString,\n          schema,\n          100,\n          dataGenOptions)\n      }\n\n      Seq(defaultTimezone, \"UTC\", \"America/Denver\").foreach { tz =>\n        Seq(true, false).foreach { inferTimestampNtzEnabled =>\n          Seq(true, false).foreach { int96TimestampConversion =>\n            Seq(true, false).foreach { int96AsTimestamp =>\n              withSQLConf(\n                CometConf.COMET_ENABLED.key -> \"true\",\n                SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz,\n                SQLConf.PARQUET_INT96_AS_TIMESTAMP.key -> int96AsTimestamp.toString,\n                SQLConf.PARQUET_INT96_TIMESTAMP_CONVERSION.key -> int96TimestampConversion.toString,\n                SQLConf.PARQUET_INFER_TIMESTAMP_NTZ_ENABLED.key -> inferTimestampNtzEnabled.toString) {\n\n                val df = spark.read.parquet(filename.toString)\n                df.createOrReplaceTempView(\"t1\")\n                val columns =\n                  df.schema.fields\n                    .filter(f => DataTypeSupport.hasTemporalType(f.dataType))\n                    .map(_.name)\n\n                for (col <- columns) {\n                  checkSparkAnswer(s\"SELECT $col FROM t1 ORDER BY $col\")\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometHashExpressionSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator, ParquetGenerator, SchemaGenOptions}\n\n/**\n * Test suite for Spark murmur3 hash function compatibility between Spark and Comet.\n *\n * These tests verify that Comet's native implementation of murmur3 hash produces identical\n * results to Spark's implementation for all supported data types.\n */\nclass CometHashExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_AUTO) {\n        testFun\n      }\n    }\n  }\n\n  test(\"hash - boolean\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c BOOLEAN) USING parquet\")\n      sql(\"INSERT INTO t VALUES (true), (false), (null)\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - byte/tinyint\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c TINYINT) USING parquet\")\n      sql(\"INSERT INTO t VALUES (1), (0), (-1), (127), (-128), (null)\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - short/smallint\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c SMALLINT) USING parquet\")\n      sql(\"INSERT INTO t VALUES (1), (0), (-1), (32767), (-32768), (null)\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - int\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c INT) USING parquet\")\n      sql(\"INSERT INTO t VALUES (1), (0), (-1), (2147483647), (-2147483648), (null)\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - long/bigint\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c BIGINT) USING parquet\")\n      sql(\n        \"INSERT INTO t VALUES (1), (0), (-1), (9223372036854775807), (-9223372036854775808), (null)\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - float\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c FLOAT) USING parquet\")\n      sql(\"INSERT INTO t VALUES (1.0), (0.0), (-0.0), (-1.0), (3.14159), (null)\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - 
double\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c DOUBLE) USING parquet\")\n      sql(\"INSERT INTO t VALUES (1.0), (0.0), (-0.0), (-1.0), (3.14159265358979), (null)\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - string\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c STRING) USING parquet\")\n      sql(\"INSERT INTO t VALUES ('hello'), (''), ('Spark SQL'), ('苹果手机'), (null)\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - binary\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c BINARY) USING parquet\")\n      sql(\"INSERT INTO t VALUES (X''), (X'00'), (X'0102030405'), (null)\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - date\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c DATE) USING parquet\")\n      sql(\n        \"INSERT INTO t VALUES (DATE '2023-01-01'), (DATE '1970-01-01'), (DATE '2000-12-31'), (null)\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - timestamp\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c TIMESTAMP) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (TIMESTAMP '2023-01-01 12:00:00'),\n            (TIMESTAMP '1970-01-01 00:00:00'),\n            (TIMESTAMP '2000-12-31 23:59:59'),\n            (null)\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - decimal (precision <= 18)\") {\n    Seq((10, 2), (18, 0), (18, 10)).foreach { case (precision, scale) =>\n      withTable(\"t\") {\n        sql(s\"CREATE TABLE t(c DECIMAL($precision, $scale)) USING parquet\")\n        sql(\"INSERT INTO t VALUES (1.23), (-1.23), (0.0), (null)\")\n        checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n      }\n    }\n  }\n\n  test(\"hash - decimal (precision > 18)\") {\n    Seq((20, 2), (38, 10)).foreach { case (precision, scale) =>\n      withTable(\"t\") {\n        sql(s\"CREATE TABLE t(c DECIMAL($precision, $scale)) USING parquet\")\n        sql(\"INSERT INTO t VALUES (1.23), (-1.23), (0.0), (null)\")\n        // Large decimals may fall back to Spark, so just check the answer\n        checkSparkAnswer(\"SELECT c, hash(c) FROM t ORDER BY c\")\n      }\n    }\n  }\n\n  test(\"hash - array of decimal (precision > 18) falls back to Spark\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c ARRAY<DECIMAL(20, 2)>) USING parquet\")\n      sql(\"INSERT INTO t VALUES (array(1.23, 2.34)), (null)\")\n      // Should fall back to Spark due to nested high-precision decimal\n      checkSparkAnswerAndFallbackReason(\"SELECT c, hash(c) FROM t\", \"precision > 18\")\n    }\n  }\n\n  test(\"hash - struct with decimal (precision > 18) falls back to Spark\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c STRUCT<a: INT, b: DECIMAL(20, 2)>) USING parquet\")\n      sql(\"INSERT INTO t VALUES (named_struct('a', 1, 'b', 1.23)), (null)\")\n      // Should fall back to Spark due to nested high-precision decimal\n      checkSparkAnswerAndFallbackReason(\"SELECT c, hash(c) FROM t\", \"precision > 18\")\n    }\n  }\n\n  test(\"hash - map with decimal (precision > 18) value falls back to Spark\") {\n    withSQLConf(\"spark.sql.legacy.allowHashOnMapType\" -> \"true\") {\n      withTable(\"t\") {\n        sql(\"CREATE TABLE t(c MAP<STRING, DECIMAL(20, 2)>) USING parquet\")\n        sql(\"INSERT INTO t 
VALUES (map('a', 1.23)), (null)\")\n        // Should fall back to Spark due to nested high-precision decimal\n        checkSparkAnswerAndFallbackReason(\"SELECT c, hash(c) FROM t\", \"precision > 18\")\n      }\n    }\n  }\n\n  test(\"hash - array of integers\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c ARRAY<INT>) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (array(1, 2, 3)),\n            (array(-1, 0, 1)),\n            (array()),\n            (null),\n            (array(null)),\n            (array(1, null, 3))\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - array of strings\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c ARRAY<STRING>) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (array('hello', 'world')),\n            (array('Spark', 'SQL')),\n            (array('')),\n            (array()),\n            (null),\n            (array(null)),\n            (array('a', null, 'b'))\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - array of doubles\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c ARRAY<DOUBLE>) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (array(1.0, 2.0, 3.0)),\n            (array(-1.0, 0.0, 1.0)),\n            (array()),\n            (null)\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - nested array (array of arrays)\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c ARRAY<ARRAY<INT>>) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (array(array(1, 2), array(3, 4))),\n            (array(array(), array(1))),\n            (array()),\n            (null)\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - struct\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c STRUCT<a: INT, b: STRING>) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (named_struct('a', 1, 'b', 'hello')),\n            (named_struct('a', -1, 'b', '')),\n            (named_struct('a', null, 'b', 'test')),\n            (named_struct('a', 42, 'b', null)),\n            (null)\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - nested struct\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c STRUCT<a: INT, b: STRUCT<x: STRING, y: DOUBLE>>) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (named_struct('a', 1, 'b', named_struct('x', 'hello', 'y', 3.14))),\n            (named_struct('a', 2, 'b', named_struct('x', '', 'y', 0.0))),\n            (named_struct('a', 3, 'b', null)),\n            (null)\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - struct with array field\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c STRUCT<a: INT, b: ARRAY<STRING>>) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (named_struct('a', 1, 'b', array('x', 'y'))),\n            (named_struct('a', 2, 'b', array())),\n            (named_struct('a', 3, 'b', null)),\n            (null)\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - array of structs\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c ARRAY<STRUCT<a: INT, b: STRING>>) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (array(named_struct('a', 1, 'b', 'x'), named_struct('a', 
2, 'b', 'y'))),\n            (array(named_struct('a', 3, 'b', ''))),\n            (array()),\n            (null)\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - map\") {\n    // Spark prohibits hash on map types by default, enable legacy mode for testing\n    withSQLConf(\"spark.sql.legacy.allowHashOnMapType\" -> \"true\") {\n      withTable(\"t\") {\n        sql(\"CREATE TABLE t(c MAP<STRING, INT>) USING parquet\")\n        sql(\"\"\"INSERT INTO t VALUES\n              (map('a', 1, 'b', 2)),\n              (map('x', -1)),\n              (map()),\n              (null)\"\"\")\n        checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n      }\n    }\n  }\n\n  test(\"hash - map with complex value type\") {\n    // Spark prohibits hash on map types by default, enable legacy mode for testing\n    withSQLConf(\"spark.sql.legacy.allowHashOnMapType\" -> \"true\") {\n      withTable(\"t\") {\n        sql(\"CREATE TABLE t(c MAP<STRING, ARRAY<INT>>) USING parquet\")\n        sql(\"\"\"INSERT INTO t VALUES\n              (map('a', array(1, 2), 'b', array(3))),\n              (map('x', array())),\n              (map()),\n              (null)\"\"\")\n        checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n      }\n    }\n  }\n\n  test(\"hash - multiple primitive columns\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(a INT, b STRING, c DOUBLE) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (1, 'hello', 3.14),\n            (2, '', 0.0),\n            (null, null, null),\n            (-1, 'test', -1.5)\"\"\")\n      checkSparkAnswerAndOperator(\n        \"SELECT hash(a, b, c), hash(c, b, a), hash(a), hash(b), hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - multiple columns with arrays\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(a INT, b ARRAY<INT>, c STRING) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (1, array(1, 2, 3), 'hello'),\n            (2, array(), ''),\n            (null, null, null),\n            (3, array(-1, 0, 1), 'test')\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT hash(a, b, c), hash(b), hash(a, c) FROM t\")\n    }\n  }\n\n  test(\"hash - multiple columns with structs\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(a INT, b STRUCT<x: INT, y: STRING>) USING parquet\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (1, named_struct('x', 10, 'y', 'hello')),\n            (2, named_struct('x', 20, 'y', '')),\n            (null, null),\n            (3, named_struct('x', null, 'y', 'test'))\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT hash(a, b), hash(b, a), hash(b) FROM t\")\n    }\n  }\n\n  test(\"hash - empty strings and arrays\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(s STRING, a ARRAY<INT>) USING parquet\")\n      sql(\"INSERT INTO t VALUES ('', array()), ('a', array(1))\")\n      checkSparkAnswerAndOperator(\"SELECT hash(s), hash(a), hash(s, a) FROM t\")\n    }\n  }\n\n  test(\"hash - all nulls\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(a INT, b STRING, c ARRAY<INT>) USING parquet\")\n      sql(\"INSERT INTO t VALUES (null, null, null)\")\n      checkSparkAnswerAndOperator(\"SELECT hash(a), hash(b), hash(c), hash(a, b, c) FROM t\")\n    }\n  }\n\n  test(\"hash - with custom seed\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c INT) USING parquet\")\n      sql(\"INSERT INTO t VALUES (1), (2), (3), (null)\")\n      // hash() always uses seed 42; the extra literals below are hashed as\n      // additional input values, not as seeds\n      
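// e.g. hash(c, 0) folds the literal 0 into the running hash of c (each input\n      // is hashed using the result so far as its seed), so it differs from hash(c)\n      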
checkSparkAnswerAndOperator(\n        \"SELECT hash(c), hash(c, 0), hash(c, 42), hash(c, -1) FROM t ORDER BY c\")\n    }\n  }\n\n  test(\"hash - large array\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(c ARRAY<INT>) USING parquet\")\n      // Create an array with 1000 elements\n      val largeArray = (1 to 1000).mkString(\"array(\", \", \", \")\")\n      sql(s\"INSERT INTO t VALUES ($largeArray)\")\n      checkSparkAnswerAndOperator(\"SELECT hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - deeply nested structure\") {\n    withTable(\"t\") {\n      sql(\"\"\"CREATE TABLE t(c STRUCT<\n              a: INT,\n              b: STRUCT<\n                x: STRING,\n                y: ARRAY<STRUCT<p: INT, q: STRING>>\n              >\n            >) USING parquet\"\"\")\n      sql(\"\"\"INSERT INTO t VALUES\n            (named_struct('a', 1, 'b', named_struct('x', 'hello', 'y',\n              array(named_struct('p', 10, 'q', 'foo'), named_struct('p', 20, 'q', 'bar'))))),\n            (null)\"\"\")\n      checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n    }\n  }\n\n  test(\"hash - with dictionary encoding\") {\n    Seq(true, false).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        withTable(\"t\") {\n          sql(\"CREATE TABLE t(c STRING) USING parquet\")\n          // Repeated values to trigger dictionary encoding\n          sql(\"INSERT INTO t VALUES ('a'), ('b'), ('a'), ('b'), ('a'), ('c'), (null)\")\n          checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t ORDER BY c\")\n        }\n      }\n    }\n  }\n\n  test(\"hash - array with dictionary encoding\") {\n    Seq(true, false).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        withTable(\"t\") {\n          sql(\"CREATE TABLE t(c ARRAY<STRING>) USING parquet\")\n          sql(\"\"\"INSERT INTO t VALUES\n                (array('a', 'b')),\n                (array('a', 'b')),\n                (array('c')),\n                (null)\"\"\")\n          checkSparkAnswerAndOperator(\"SELECT c, hash(c) FROM t\")\n        }\n      }\n    }\n  }\n\n  test(\"hash - fuzz test\") {\n    val r = new Random(42)\n    val options = SchemaGenOptions(generateStruct = true)\n    val schema = FuzzDataGenerator.generateNestedSchema(r, 50, 1, 2, options)\n    withTempPath { filename =>\n      ParquetGenerator.makeParquetFile(\n        r,\n        spark,\n        filename.toString,\n        schema,\n        1000,\n        DataGenOptions())\n      spark.read.parquet(filename.toString).createOrReplaceTempView(\"t1\")\n      for (col <- schema.fields) {\n        val name = col.name\n        checkSparkAnswer(s\"select $name, hash($name) from t1 order by $name\")\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometIcebergNativeSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.io.File\nimport java.nio.file.Files\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.comet.CometIcebergNativeScanExec\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{StringType, TimestampType}\n\nimport org.apache.comet.iceberg.RESTCatalogHelper\nimport org.apache.comet.testing.{FuzzDataGenerator, SchemaGenOptions}\n\n/**\n * Test suite for native Iceberg scan using FileScanTasks and iceberg-rust.\n *\n * Note: Requires Iceberg dependencies to be added to pom.xml\n */\nclass CometIcebergNativeSuite extends CometTestBase with RESTCatalogHelper {\n\n  // Skip these tests if Iceberg is not available in classpath\n  private def icebergAvailable: Boolean = {\n    try {\n      Class.forName(\"org.apache.iceberg.catalog.Catalog\")\n      true\n    } catch {\n      case _: ClassNotFoundException => false\n    }\n  }\n\n  /** Collects all CometIcebergNativeScanExec nodes from a plan */\n  private def collectIcebergNativeScans(plan: SparkPlan): Seq[CometIcebergNativeScanExec] = {\n    collect(plan) { case scan: CometIcebergNativeScanExec =>\n      scan\n    }\n  }\n\n  /**\n   * Helper to verify query correctness and that exactly one CometIcebergNativeScanExec is used.\n   * This ensures both correct results and that the native Iceberg scan operator is being used.\n   */\n  private def checkIcebergNativeScan(query: String): Unit = {\n    val (_, cometPlan) = checkSparkAnswer(query)\n    val icebergScans = collectIcebergNativeScans(cometPlan)\n    assert(\n      icebergScans.length == 1,\n      s\"Expected exactly 1 CometIcebergNativeScanExec but found ${icebergScans.length}. 
Plan:\\n$cometPlan\")\n  }\n\n  test(\"create and query simple Iceberg table with Hadoop catalog\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.hadoop_catalog\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.hadoop_catalog.type\" -> \"hadoop\",\n        \"spark.sql.catalog.hadoop_catalog.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE hadoop_catalog.db.test_table (\n            id INT,\n            name STRING,\n            value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO hadoop_catalog.db.test_table\n          VALUES (1, 'Alice', 10.5), (2, 'Bob', 20.3), (3, 'Charlie', 30.7)\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM hadoop_catalog.db.test_table ORDER BY id\")\n\n        spark.sql(\"DROP TABLE hadoop_catalog.db.test_table\")\n      }\n    }\n  }\n\n  test(\"filter pushdown - equality predicates\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.filter_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.filter_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.filter_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE filter_cat.db.filter_test (\n            id INT,\n            name STRING,\n            value DOUBLE,\n            active BOOLEAN\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO filter_cat.db.filter_test VALUES\n          (1, 'Alice', 10.5, true),\n          (2, 'Bob', 20.3, false),\n          (3, 'Charlie', 30.7, true),\n          (4, 'Diana', 15.2, false),\n          (5, 'Eve', 25.8, true)\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.filter_test WHERE id = 3\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.filter_test WHERE name = 'Bob'\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.filter_test WHERE active = true\")\n\n        spark.sql(\"DROP TABLE filter_cat.db.filter_test\")\n      }\n    }\n  }\n\n  test(\"filter pushdown - comparison operators\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.filter_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.filter_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.filter_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE filter_cat.db.comparison_test (\n            id INT,\n            value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO filter_cat.db.comparison_test VALUES\n    
      (1, 10.5), (2, 20.3), (3, 30.7), (4, 15.2), (5, 25.8)\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.comparison_test WHERE value > 20.0\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.comparison_test WHERE value >= 20.3\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.comparison_test WHERE value < 20.0\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.comparison_test WHERE value <= 20.3\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.comparison_test WHERE id != 3\")\n\n        spark.sql(\"DROP TABLE filter_cat.db.comparison_test\")\n      }\n    }\n  }\n\n  test(\"filter pushdown - AND/OR combinations\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.filter_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.filter_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.filter_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE filter_cat.db.logical_test (\n            id INT,\n            category STRING,\n            value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO filter_cat.db.logical_test VALUES\n          (1, 'A', 10.5), (2, 'B', 20.3), (3, 'A', 30.7),\n          (4, 'B', 15.2), (5, 'A', 25.8), (6, 'C', 35.0)\n        \"\"\")\n\n        checkIcebergNativeScan(\n          \"SELECT * FROM filter_cat.db.logical_test WHERE category = 'A' AND value > 20.0\")\n\n        checkIcebergNativeScan(\n          \"SELECT * FROM filter_cat.db.logical_test WHERE category = 'B' OR value > 30.0\")\n\n        checkIcebergNativeScan(\"\"\"SELECT * FROM filter_cat.db.logical_test\n             WHERE (category = 'A' AND value > 20.0) OR category = 'C'\"\"\")\n\n        spark.sql(\"DROP TABLE filter_cat.db.logical_test\")\n      }\n    }\n  }\n\n  test(\"filter pushdown - NULL checks\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.filter_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.filter_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.filter_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE filter_cat.db.null_test (\n            id INT,\n            optional_value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO filter_cat.db.null_test VALUES\n          (1, 10.5), (2, NULL), (3, 30.7), (4, NULL), (5, 25.8)\n        \"\"\")\n\n        checkIcebergNativeScan(\n          \"SELECT * FROM filter_cat.db.null_test WHERE optional_value IS NULL\")\n\n        checkIcebergNativeScan(\n          \"SELECT * FROM filter_cat.db.null_test WHERE optional_value IS NOT NULL\")\n\n        spark.sql(\"DROP TABLE filter_cat.db.null_test\")\n      }\n    }\n  }\n\n  test(\"filter pushdown - IN list\") {\n    assume(icebergAvailable, \"Iceberg not available in 
classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.filter_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.filter_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.filter_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE filter_cat.db.in_test (\n            id INT,\n            name STRING\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO filter_cat.db.in_test VALUES\n          (1, 'Alice'), (2, 'Bob'), (3, 'Charlie'),\n          (4, 'Diana'), (5, 'Eve'), (6, 'Frank')\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.in_test WHERE id IN (2, 4, 6)\")\n\n        checkIcebergNativeScan(\n          \"SELECT * FROM filter_cat.db.in_test WHERE name IN ('Alice', 'Charlie', 'Eve')\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.in_test WHERE id IS NOT NULL\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.in_test WHERE id NOT IN (1, 3, 5)\")\n\n        spark.sql(\"DROP TABLE filter_cat.db.in_test\")\n      }\n    }\n  }\n\n  test(\"verify filters are pushed to native scan\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.filter_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.filter_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.filter_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE filter_cat.db.filter_debug (\n            id INT,\n            value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO filter_cat.db.filter_debug VALUES\n          (1, 10.5), (2, 20.3), (3, 30.7), (4, 15.2), (5, 25.8)\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM filter_cat.db.filter_debug WHERE id > 2\")\n\n        spark.sql(\"DROP TABLE filter_cat.db.filter_debug\")\n      }\n    }\n  }\n\n  test(\"small table - verify no duplicate rows (1 file)\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.small_table (\n            id INT,\n            name STRING\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.small_table\n          VALUES (1, 'Alice'), (2, 'Bob'), (3, 'Charlie')\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.small_table ORDER BY id\")\n        checkIcebergNativeScan(\"SELECT COUNT(DISTINCT id) FROM 
test_cat.db.small_table\")\n\n        spark.sql(\"DROP TABLE test_cat.db.small_table\")\n      }\n    }\n  }\n\n  test(\"medium table - verify correct partition count (multiple files)\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\",\n        \"spark.sql.files.maxRecordsPerFile\" -> \"10\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.medium_table (\n            id INT,\n            value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        // Insert 100 rows - should create multiple files with maxRecordsPerFile=10\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.medium_table\n          SELECT id, CAST(id * 1.5 AS DOUBLE) as value\n          FROM range(100)\n        \"\"\")\n\n        // Verify results match Spark native (catches duplicates across partitions)\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.medium_table ORDER BY id\")\n        checkIcebergNativeScan(\"SELECT COUNT(DISTINCT id) FROM test_cat.db.medium_table\")\n        checkIcebergNativeScan(\"SELECT SUM(value) FROM test_cat.db.medium_table\")\n\n        spark.sql(\"DROP TABLE test_cat.db.medium_table\")\n      }\n    }\n  }\n\n  test(\"large table - verify no duplicates with many files\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\",\n        \"spark.sql.files.maxRecordsPerFile\" -> \"100\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.large_table (\n            id BIGINT,\n            category STRING,\n            value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        // Insert 10,000 rows - with maxRecordsPerFile=100, creates ~100 files\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.large_table\n          SELECT\n            id,\n            CASE WHEN id % 3 = 0 THEN 'A' WHEN id % 3 = 1 THEN 'B' ELSE 'C' END as category,\n            CAST(id * 2.5 AS DOUBLE) as value\n          FROM range(10000)\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT COUNT(DISTINCT id) FROM test_cat.db.large_table\")\n        checkIcebergNativeScan(\"SELECT SUM(value) FROM test_cat.db.large_table\")\n        checkIcebergNativeScan(\n          \"SELECT category, COUNT(*) FROM test_cat.db.large_table GROUP BY category ORDER BY category\")\n\n        spark.sql(\"DROP TABLE test_cat.db.large_table\")\n      }\n    }\n  }\n\n  test(\"partitioned table - verify key-grouped partitioning\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> 
\"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.partitioned_table (\n            id INT,\n            category STRING,\n            value DOUBLE\n          ) USING iceberg\n          PARTITIONED BY (category)\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.partitioned_table VALUES\n          (1, 'A', 10.5), (2, 'B', 20.3), (3, 'C', 30.7),\n          (4, 'A', 15.2), (5, 'B', 25.8), (6, 'C', 35.0),\n          (7, 'A', 12.1), (8, 'B', 22.5), (9, 'C', 32.9)\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.partitioned_table ORDER BY id\")\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.partitioned_table WHERE category = 'A' ORDER BY id\")\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.partitioned_table WHERE category = 'B' ORDER BY id\")\n        checkIcebergNativeScan(\n          \"SELECT category, COUNT(*) FROM test_cat.db.partitioned_table GROUP BY category ORDER BY category\")\n\n        spark.sql(\"DROP TABLE test_cat.db.partitioned_table\")\n      }\n    }\n  }\n\n  test(\"empty table - verify graceful handling\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.empty_table (\n            id INT,\n            name STRING\n          ) USING iceberg\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.empty_table\")\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.empty_table WHERE id > 0\")\n\n        spark.sql(\"DROP TABLE test_cat.db.empty_table\")\n      }\n    }\n  }\n\n  // MOR (Merge-On-Read) delete file tests.\n  // Delete files are extracted from FileScanTasks and handled by iceberg-rust's ArrowReader,\n  // which automatically applies both positional and equality deletes during scan execution.\n  test(\"MOR table with POSITIONAL deletes - verify deletes are applied\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.positional_delete_test (\n            id INT,\n            name STRING,\n            value DOUBLE\n          ) USING iceberg\n          TBLPROPERTIES (\n            
'write.delete.mode' = 'merge-on-read',\n            'write.merge.mode' = 'merge-on-read'\n          )\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.positional_delete_test\n          VALUES\n            (1, 'Alice', 10.5), (2, 'Bob', 20.3), (3, 'Charlie', 30.7),\n            (4, 'Diana', 15.2), (5, 'Eve', 25.8), (6, 'Frank', 35.0),\n            (7, 'Grace', 12.1), (8, 'Hank', 22.5)\n        \"\"\")\n\n        spark.sql(\"DELETE FROM test_cat.db.positional_delete_test WHERE id IN (2, 4, 6)\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.positional_delete_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.positional_delete_test\")\n      }\n    }\n  }\n\n  test(\"MOR table with EQUALITY deletes - verify deletes are applied\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Create table with equality delete columns specified\n        // This forces Spark to use equality deletes instead of positional deletes\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.equality_delete_test (\n            id INT,\n            category STRING,\n            value DOUBLE\n          ) USING iceberg\n          TBLPROPERTIES (\n            'write.delete.mode' = 'merge-on-read',\n            'write.merge.mode' = 'merge-on-read',\n            'write.delete.equality-delete-columns' = 'id'\n          )\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.equality_delete_test\n          VALUES\n            (1, 'A', 10.5), (2, 'B', 20.3), (3, 'A', 30.7),\n            (4, 'B', 15.2), (5, 'A', 25.8), (6, 'C', 35.0)\n        \"\"\")\n\n        spark.sql(\"DELETE FROM test_cat.db.equality_delete_test WHERE id IN (2, 4)\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.equality_delete_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.equality_delete_test\")\n      }\n    }\n  }\n\n  test(\"MOR table with multiple delete operations - mixed delete types\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.multi_delete_test (\n            id INT,\n            data STRING\n          ) USING iceberg\n          TBLPROPERTIES (\n            'write.delete.mode' = 'merge-on-read',\n            'write.merge.mode' = 'merge-on-read'\n          )\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.multi_delete_test\n          SELECT id, CONCAT('data_', CAST(id AS STRING)) as data\n          FROM range(100)\n        \"\"\")\n\n        
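// In merge-on-read mode each DELETE below adds delete files rather than\n        // rewriting data files, so the scan must apply all three rounds of deletes\n        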
spark.sql(\"DELETE FROM test_cat.db.multi_delete_test WHERE id < 10\")\n        spark.sql(\"DELETE FROM test_cat.db.multi_delete_test WHERE id > 90\")\n        spark.sql(\"DELETE FROM test_cat.db.multi_delete_test WHERE id % 10 = 5\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.multi_delete_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.multi_delete_test\")\n      }\n    }\n  }\n\n  test(\"verify no duplicate rows across multiple partitions\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\",\n        // Create multiple files to ensure multiple partitions\n        \"spark.sql.files.maxRecordsPerFile\" -> \"50\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.multipart_test (\n            id INT,\n            data STRING\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.multipart_test\n          SELECT id, CONCAT('data_', CAST(id AS STRING)) as data\n          FROM range(500)\n        \"\"\")\n\n        // Critical: COUNT(*) vs COUNT(DISTINCT id) catches duplicates across partitions\n        checkIcebergNativeScan(\"SELECT COUNT(DISTINCT id) FROM test_cat.db.multipart_test\")\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.multipart_test WHERE id < 10 ORDER BY id\")\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.multipart_test WHERE id >= 490 ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.multipart_test\")\n      }\n    }\n  }\n\n  test(\"filter pushdown with multi-partition table\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\",\n        \"spark.sql.files.maxRecordsPerFile\" -> \"20\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.filter_multipart (\n            id INT,\n            category STRING,\n            value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.filter_multipart\n          SELECT\n            id,\n            CASE WHEN id % 2 = 0 THEN 'even' ELSE 'odd' END as category,\n            CAST(id * 1.5 AS DOUBLE) as value\n          FROM range(200)\n        \"\"\")\n\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.filter_multipart WHERE id > 150 ORDER BY id\")\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.filter_multipart WHERE category = 'even' AND id < 50 ORDER BY id\")\n        checkIcebergNativeScan(\n          \"SELECT COUNT(DISTINCT id) FROM test_cat.db.filter_multipart WHERE id BETWEEN 50 AND 100\")\n        
checkIcebergNativeScan(\n          \"SELECT SUM(value) FROM test_cat.db.filter_multipart WHERE category = 'odd'\")\n\n        spark.sql(\"DROP TABLE test_cat.db.filter_multipart\")\n      }\n    }\n  }\n\n  test(\"date partitioned table with date range queries\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.date_partitioned (\n            id INT,\n            event_date DATE,\n            value STRING\n          ) USING iceberg\n          PARTITIONED BY (days(event_date))\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.date_partitioned VALUES\n          (1, DATE '2024-01-01', 'a'), (2, DATE '2024-01-02', 'b'),\n          (3, DATE '2024-01-03', 'c'), (4, DATE '2024-01-15', 'd'),\n          (5, DATE '2024-01-16', 'e'), (6, DATE '2024-02-01', 'f')\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.date_partitioned ORDER BY id\")\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.date_partitioned WHERE event_date = DATE '2024-01-01'\")\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.date_partitioned WHERE event_date BETWEEN DATE '2024-01-01' AND DATE '2024-01-03' ORDER BY id\")\n        checkIcebergNativeScan(\n          \"SELECT event_date, COUNT(*) FROM test_cat.db.date_partitioned GROUP BY event_date ORDER BY event_date\")\n\n        spark.sql(\"DROP TABLE test_cat.db.date_partitioned\")\n      }\n    }\n  }\n\n  test(\"bucket partitioned table\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.bucket_partitioned (\n            id INT,\n            value DOUBLE\n          ) USING iceberg\n          PARTITIONED BY (bucket(4, id))\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.bucket_partitioned\n          SELECT id, CAST(id * 1.5 AS DOUBLE) as value\n          FROM range(100)\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.bucket_partitioned ORDER BY id\")\n        checkIcebergNativeScan(\"SELECT COUNT(DISTINCT id) FROM test_cat.db.bucket_partitioned\")\n        checkIcebergNativeScan(\"SELECT SUM(value) FROM test_cat.db.bucket_partitioned\")\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.bucket_partitioned WHERE id < 20 ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.bucket_partitioned\")\n      }\n    }\n  }\n\n  test(\"partition pruning - bucket transform verifies files are skipped\") {\n    
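// Iceberg's bucket(N, col) transform partitions rows by a 32-bit Murmur3 hash\n    // of the value modulo N, so equality predicates on id can prune whole buckets\n    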
assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.bucket_pruning (\n            id INT,\n            data STRING\n          ) USING iceberg\n          PARTITIONED BY (bucket(8, id))\n        \"\"\")\n\n        (0 until 8).foreach { bucket =>\n          spark.sql(s\"\"\"\n            INSERT INTO test_cat.db.bucket_pruning\n            SELECT id, CONCAT('data_', CAST(id AS STRING)) as data\n            FROM range(${bucket * 100}, ${(bucket + 1) * 100})\n          \"\"\")\n        }\n\n        val specificIds = Seq(5, 15, 25)\n        val df = spark.sql(s\"\"\"\n          SELECT * FROM test_cat.db.bucket_pruning\n          WHERE id IN (${specificIds.mkString(\",\")})\n        \"\"\")\n\n        val scanNodes = df.queryExecution.executedPlan\n          .collectLeaves()\n          .collect { case s: CometIcebergNativeScanExec => s }\n\n        assert(scanNodes.nonEmpty, \"Expected at least one CometIcebergNativeScanExec node\")\n\n        val metrics = scanNodes.head.metrics\n\n        val result = df.collect()\n        assert(result.length == specificIds.length)\n\n        // With bucket partitioning, pruning occurs at the file level, not manifest level\n        // Bucket transforms use hash-based bucketing, so manifests may contain files from\n        // multiple buckets. 
Iceberg can skip individual files based on bucket metadata,\n        // but cannot skip entire manifests.\n        assert(\n          metrics(\"resultDataFiles\").value < 8,\n          \"Bucket pruning should skip some files, but read \" +\n            s\"${metrics(\"resultDataFiles\").value} out of 8\")\n        assert(\n          metrics(\"skippedDataFiles\").value > 0,\n          \"Expected skipped data files due to bucket pruning, got \" +\n            s\"${metrics(\"skippedDataFiles\").value}\")\n\n        spark.sql(\"DROP TABLE test_cat.db.bucket_pruning\")\n      }\n    }\n  }\n\n  test(\"partition pruning - truncate transform verifies files are skipped\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.truncate_pruning (\n            id INT,\n            message STRING\n          ) USING iceberg\n          PARTITIONED BY (truncate(5, message))\n        \"\"\")\n\n        val prefixes = Seq(\"alpha\", \"bravo\", \"charlie\", \"delta\", \"echo\")\n        prefixes.zipWithIndex.foreach { case (prefix, idx) =>\n          spark.sql(s\"\"\"\n            INSERT INTO test_cat.db.truncate_pruning\n            SELECT\n              id,\n              CONCAT('$prefix', '_suffix_', CAST(id AS STRING)) as message\n            FROM range(${idx * 10}, ${(idx + 1) * 10})\n          \"\"\")\n        }\n\n        val df = spark.sql(\"\"\"\n          SELECT * FROM test_cat.db.truncate_pruning\n          WHERE message LIKE 'alpha%'\n        \"\"\")\n\n        val scanNodes = df.queryExecution.executedPlan\n          .collectLeaves()\n          .collect { case s: CometIcebergNativeScanExec => s }\n\n        assert(scanNodes.nonEmpty, \"Expected at least one CometIcebergNativeScanExec node\")\n\n        val metrics = scanNodes.head.metrics\n\n        val result = df.collect()\n        assert(result.length == 10)\n        assert(result.forall(_.getString(1).startsWith(\"alpha\")))\n\n        // Partition pruning occurs at the manifest level, not file level\n        // Each INSERT creates one manifest, so we verify skippedDataManifests\n        assert(\n          metrics(\"resultDataFiles\").value == 1,\n          s\"Truncate pruning should only read 1 file, read ${metrics(\"resultDataFiles\").value}\")\n        assert(\n          metrics(\"skippedDataManifests\").value == 4,\n          s\"Expected 4 skipped manifests, got ${metrics(\"skippedDataManifests\").value}\")\n\n        spark.sql(\"DROP TABLE test_cat.db.truncate_pruning\")\n      }\n    }\n  }\n\n  test(\"partition pruning - hour transform verifies files are skipped\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        
CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.hour_pruning (\n            id INT,\n            event_time TIMESTAMP,\n            data STRING\n          ) USING iceberg\n          PARTITIONED BY (hour(event_time))\n        \"\"\")\n\n        (0 until 6).foreach { hour =>\n          spark.sql(s\"\"\"\n            INSERT INTO test_cat.db.hour_pruning\n            SELECT\n              id,\n              CAST('2024-01-01 $hour:00:00' AS TIMESTAMP) as event_time,\n              CONCAT('event_', CAST(id AS STRING)) as data\n            FROM range(${hour * 10}, ${(hour + 1) * 10})\n          \"\"\")\n        }\n\n        val df = spark.sql(\"\"\"\n          SELECT * FROM test_cat.db.hour_pruning\n          WHERE event_time >= CAST('2024-01-01 04:00:00' AS TIMESTAMP)\n        \"\"\")\n\n        val scanNodes = df.queryExecution.executedPlan\n          .collectLeaves()\n          .collect { case s: CometIcebergNativeScanExec => s }\n\n        assert(scanNodes.nonEmpty, \"Expected at least one CometIcebergNativeScanExec node\")\n\n        val metrics = scanNodes.head.metrics\n\n        val result = df.collect()\n        assert(result.length == 20)\n\n        // Partition pruning occurs at the manifest level, not file level\n        // Each INSERT creates one manifest, so we verify skippedDataManifests\n        assert(\n          metrics(\"resultDataFiles\").value == 2,\n          s\"Hour pruning should read 2 files (hours 4-5), read ${metrics(\"resultDataFiles\").value}\")\n        assert(\n          metrics(\"skippedDataManifests\").value == 4,\n          s\"Expected 4 skipped manifests (hours 0-3), got ${metrics(\"skippedDataManifests\").value}\")\n\n        spark.sql(\"DROP TABLE test_cat.db.hour_pruning\")\n      }\n    }\n  }\n\n  test(\"schema evolution - add column\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.schema_evolution (\n            id INT,\n            name STRING\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.schema_evolution VALUES (1, 'Alice'), (2, 'Bob')\n        \"\"\")\n\n        spark.sql(\"ALTER TABLE test_cat.db.schema_evolution ADD COLUMN age INT\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.schema_evolution VALUES (3, 'Charlie', 30), (4, 'Diana', 25)\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.schema_evolution ORDER BY id\")\n        checkIcebergNativeScan(\"SELECT id, name FROM test_cat.db.schema_evolution ORDER BY id\")\n        checkIcebergNativeScan(\n          \"SELECT id, age FROM test_cat.db.schema_evolution WHERE age IS NOT NULL ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.schema_evolution\")\n      }\n    }\n  }\n\n  test(\"schema evolution - drop column\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { 
warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.drop_column_test (\n            id INT,\n            name STRING,\n            age INT\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.drop_column_test VALUES (1, 'Alice', 30), (2, 'Bob', 25)\n        \"\"\")\n\n        // Drop the age column\n        spark.sql(\"ALTER TABLE test_cat.db.drop_column_test DROP COLUMN age\")\n\n        // Insert new data without the age column\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.drop_column_test VALUES (3, 'Charlie'), (4, 'Diana')\n        \"\"\")\n\n        // Read all data - must handle old files (with age) and new files (without age)\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.drop_column_test ORDER BY id\")\n        checkIcebergNativeScan(\"SELECT id, name FROM test_cat.db.drop_column_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.drop_column_test\")\n      }\n    }\n  }\n\n  test(\"migration - basic read after migration (fallback for no field ID)\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        val sourceName = \"parquet_source\"\n        val destName = \"test_cat.db.iceberg_dest\"\n        val dataPath = s\"${warehouseDir.getAbsolutePath}/source_data\"\n\n        // Step 1: Create regular Parquet table (without field IDs)\n        spark\n          .range(10)\n          .selectExpr(\n            \"CAST(id AS INT) as id\",\n            \"CONCAT('name_', CAST(id AS STRING)) as name\",\n            \"CAST(id * 2 AS DOUBLE) as value\")\n          .write\n          .mode(\"overwrite\")\n          .option(\"path\", dataPath)\n          .saveAsTable(sourceName)\n\n        // Step 2: Snapshot the Parquet table into Iceberg using SparkActions API\n        try {\n          val actionsClass = Class.forName(\"org.apache.iceberg.spark.actions.SparkActions\")\n          val getMethod = actionsClass.getMethod(\"get\")\n          val actions = getMethod.invoke(null)\n          val snapshotMethod = actions.getClass.getMethod(\"snapshotTable\", classOf[String])\n          val snapshotAction = snapshotMethod.invoke(actions, sourceName)\n          val asMethod = snapshotAction.getClass.getMethod(\"as\", classOf[String])\n          val snapshotWithDest = asMethod.invoke(snapshotAction, destName)\n          val executeMethod = snapshotWithDest.getClass.getMethod(\"execute\")\n          executeMethod.invoke(snapshotWithDest)\n\n          // Step 3: Read the Iceberg table - Parquet files have no field IDs, so position-based mapping is used\n          
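// The reflective chain above is equivalent to the fluent call\n          //   SparkActions.get().snapshotTable(sourceName).as(destName).execute()\n          // assuming iceberg-spark-runtime were available at compile time; reflection keeps\n          // this suite compiling when that jar is not on the classpath (see the catch below).\n          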
checkIcebergNativeScan(s\"SELECT * FROM $destName ORDER BY id\")\n          checkIcebergNativeScan(s\"SELECT id, name FROM $destName ORDER BY id\")\n          checkIcebergNativeScan(s\"SELECT value FROM $destName WHERE id < 5 ORDER BY id\")\n\n          spark.sql(s\"DROP TABLE $destName\")\n          spark.sql(s\"DROP TABLE $sourceName\")\n        } catch {\n          case _: ClassNotFoundException =>\n            cancel(\"Iceberg Actions API not available - requires iceberg-spark-runtime\")\n        }\n      }\n    }\n  }\n\n  test(\"migration - hive-style partitioned table has partition values\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        val sourceName = \"parquet_partitioned_source\"\n        val destName = \"test_cat.db.iceberg_partitioned\"\n        val dataPath = s\"${warehouseDir.getAbsolutePath}/partitioned_data\"\n\n        // Hive-style partitioning stores partition values in directory paths, not in data files\n        spark\n          .range(10)\n          .selectExpr(\n            \"CAST(id AS INT) as partition_col\",\n            \"CONCAT('data_', CAST(id AS STRING)) as data\")\n          .write\n          .mode(\"overwrite\")\n          .partitionBy(\"partition_col\")\n          .option(\"path\", dataPath)\n          .saveAsTable(sourceName)\n\n        try {\n          val actionsClass = Class.forName(\"org.apache.iceberg.spark.actions.SparkActions\")\n          val getMethod = actionsClass.getMethod(\"get\")\n          val actions = getMethod.invoke(null)\n          val snapshotMethod = actions.getClass.getMethod(\"snapshotTable\", classOf[String])\n          val snapshotAction = snapshotMethod.invoke(actions, sourceName)\n          val asMethod = snapshotAction.getClass.getMethod(\"as\", classOf[String])\n          val snapshotWithDest = asMethod.invoke(snapshotAction, destName)\n          val executeMethod = snapshotWithDest.getClass.getMethod(\"execute\")\n          executeMethod.invoke(snapshotWithDest)\n\n          // Partition columns must have actual values from manifests, not NULL\n          checkIcebergNativeScan(s\"SELECT * FROM $destName ORDER BY partition_col\")\n          checkIcebergNativeScan(\n            s\"SELECT partition_col, data FROM $destName WHERE partition_col < 5 ORDER BY partition_col\")\n\n          spark.sql(s\"DROP TABLE $destName\")\n          spark.sql(s\"DROP TABLE $sourceName\")\n        } catch {\n          case _: ClassNotFoundException =>\n            cancel(\"Iceberg Actions API not available - requires iceberg-spark-runtime\")\n        }\n      }\n    }\n  }\n\n  test(\"projection - column subset, reordering, and duplication\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n     
   CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Create table with multiple columns\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.proj_test (\n            id INT,\n            name STRING,\n            value DOUBLE,\n            flag BOOLEAN\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.proj_test\n          VALUES (1, 'Alice', 10.5, true),\n                 (2, 'Bob', 20.3, false),\n                 (3, 'Charlie', 30.7, true)\n        \"\"\")\n\n        // Test 1: Column subset (only 2 of 4 columns)\n        checkIcebergNativeScan(\"SELECT name, value FROM test_cat.db.proj_test ORDER BY id\")\n\n        // Test 2: Reordered columns (reverse order)\n        checkIcebergNativeScan(\"SELECT value, name, id FROM test_cat.db.proj_test ORDER BY id\")\n\n        // Test 3: Duplicate columns\n        checkIcebergNativeScan(\n          \"SELECT id, name, id AS id2 FROM test_cat.db.proj_test ORDER BY id\")\n\n        // Test 4: Single column\n        checkIcebergNativeScan(\"SELECT name FROM test_cat.db.proj_test ORDER BY name\")\n\n        // Test 5: Different ordering with subset\n        checkIcebergNativeScan(\"SELECT flag, id FROM test_cat.db.proj_test ORDER BY id\")\n\n        // Test 6: Multiple duplicates\n        checkIcebergNativeScan(\n          \"SELECT name, value, name AS name2, value AS value2 FROM test_cat.db.proj_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.proj_test\")\n      }\n    }\n  }\n\n  test(\"complex type - array\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.array_test (\n            id INT,\n            name STRING,\n            values ARRAY<INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.array_test\n          VALUES (1, 'Alice', array(1, 2, 3)), (2, 'Bob', array(4, 5, 6))\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.array_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.array_test\")\n      }\n    }\n  }\n\n  test(\"complex type - map\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.map_test (\n            id INT,\n            name STRING,\n            properties MAP<STRING, INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO 
test_cat.db.map_test\n          VALUES (1, 'Alice', map('age', 30, 'score', 95)), (2, 'Bob', map('age', 25, 'score', 87))\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.map_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.map_test\")\n      }\n    }\n  }\n\n  test(\"complex type - struct\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.struct_test (\n            id INT,\n            name STRING,\n            address STRUCT<city: STRING, zip: INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.struct_test\n          VALUES (1, 'Alice', struct('NYC', 10001)), (2, 'Bob', struct('LA', 90001))\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.struct_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.struct_test\")\n      }\n    }\n  }\n\n  test(\"UUID type - native Iceberg UUID column (reproduces type mismatch)\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        import org.apache.iceberg.catalog.TableIdentifier\n        import org.apache.iceberg.spark.SparkCatalog\n        import org.apache.iceberg.types.Types\n        import org.apache.iceberg.{PartitionSpec, Schema}\n\n        // Use Iceberg API to create table with native UUID type\n        // (not possible via Spark SQL CREATE TABLE)\n        // Get Spark's catalog instance to ensure the table is visible to Spark\n        val sparkCatalog = spark.sessionState.catalogManager\n          .catalog(\"test_cat\")\n          .asInstanceOf[SparkCatalog]\n\n        spark.sql(\"CREATE NAMESPACE IF NOT EXISTS test_cat.db\")\n\n        // UUID is stored as FixedSizeBinary(16) but must be presented as Utf8 to Spark\n        val schema = new Schema(\n          Types.NestedField.required(1, \"id\", Types.IntegerType.get()),\n          Types.NestedField.optional(2, \"uuid\", Types.UUIDType.get()))\n        val tableIdent = TableIdentifier.of(\"db\", \"uuid_test\")\n        sparkCatalog.icebergCatalog.createTable(tableIdent, schema, PartitionSpec.unpartitioned())\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.uuid_test VALUES\n          (1, 'a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11'),\n          (2, 'b1ffcd88-8d1a-3de7-aa5c-5aa8ac269a00'),\n          (3, 'c2aade77-7e0b-2cf6-99e4-4998bc158b22')\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.uuid_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.uuid_test\")\n      
}\n    }\n  }\n\n  test(\"verify all Iceberg planning metrics are populated\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    val icebergPlanningMetricNames = Seq(\n      \"totalPlanningDuration\",\n      \"totalDataManifest\",\n      \"scannedDataManifests\",\n      \"skippedDataManifests\",\n      \"resultDataFiles\",\n      \"skippedDataFiles\",\n      \"totalDataFileSize\",\n      \"totalDeleteManifests\",\n      \"scannedDeleteManifests\",\n      \"skippedDeleteManifests\",\n      \"totalDeleteFileSize\",\n      \"resultDeleteFiles\",\n      \"equalityDeleteFiles\",\n      \"indexedDeleteFiles\",\n      \"positionalDeleteFiles\",\n      \"skippedDeleteFiles\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.metrics_test (\n            id INT,\n            value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        // Create multiple files to ensure non-zero manifest/file counts\n        spark\n          .range(10000)\n          .selectExpr(\"CAST(id AS INT)\", \"CAST(id * 1.5 AS DOUBLE) as value\")\n          .coalesce(1)\n          .write\n          .format(\"iceberg\")\n          .mode(\"append\")\n          .saveAsTable(\"test_cat.db.metrics_test\")\n\n        spark\n          .range(10001, 20000)\n          .selectExpr(\"CAST(id AS INT)\", \"CAST(id * 1.5 AS DOUBLE) as value\")\n          .coalesce(1)\n          .write\n          .format(\"iceberg\")\n          .mode(\"append\")\n          .saveAsTable(\"test_cat.db.metrics_test\")\n\n        val df = spark.sql(\"SELECT * FROM test_cat.db.metrics_test WHERE id < 10000\")\n\n        // Must extract metrics before collect() because planning happens at plan creation\n        val scanNodes = df.queryExecution.executedPlan\n          .collectLeaves()\n          .collect { case s: CometIcebergNativeScanExec => s }\n\n        assert(scanNodes.nonEmpty, \"Expected at least one CometIcebergNativeScanExec node\")\n\n        val metrics = scanNodes.head.metrics\n\n        icebergPlanningMetricNames.foreach { metricName =>\n          assert(metrics.contains(metricName), s\"metric $metricName was not found\")\n        }\n\n        // Planning metrics are populated during plan creation, so they're already available\n        assert(metrics(\"totalDataManifest\").value > 0, \"totalDataManifest should be > 0\")\n        assert(metrics(\"resultDataFiles\").value > 0, \"resultDataFiles should be > 0\")\n        assert(metrics(\"totalDataFileSize\").value > 0, \"totalDataFileSize should be > 0\")\n\n        df.collect()\n\n        assert(metrics(\"output_rows\").value == 10000)\n        assert(metrics(\"num_splits\").value > 0)\n        // ImmutableSQLMetric prevents these from being reset to 0 after execution\n        assert(\n          metrics(\"totalDataManifest\").value > 0,\n          \"totalDataManifest should still be > 0 after execution\")\n        assert(\n          metrics(\"resultDataFiles\").value > 0,\n          \"resultDataFiles should still be > 0 after execution\")\n\n        spark.sql(\"DROP TABLE 
test_cat.db.metrics_test\")\n      }\n    }\n  }\n\n  test(\"verify manifest pruning metrics\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Partition by category to enable manifest-level pruning\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.pruning_test (\n            id INT,\n            category STRING,\n            value DOUBLE\n          ) USING iceberg\n          PARTITIONED BY (category)\n        \"\"\")\n\n        // Each category gets its own manifest entry\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.pruning_test\n          SELECT id, 'A' as category, CAST(id * 1.5 AS DOUBLE) as value\n          FROM range(1000)\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.pruning_test\n          SELECT id, 'B' as category, CAST(id * 2.0 AS DOUBLE) as value\n          FROM range(1000, 2000)\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.pruning_test\n          SELECT id, 'C' as category, CAST(id * 2.5 AS DOUBLE) as value\n          FROM range(2000, 3000)\n        \"\"\")\n\n        // Filter should prune B and C partitions at manifest level\n        val df = spark.sql(\"SELECT * FROM test_cat.db.pruning_test WHERE category = 'A'\")\n\n        val scanNodes = df.queryExecution.executedPlan\n          .collectLeaves()\n          .collect { case s: CometIcebergNativeScanExec => s }\n\n        assert(scanNodes.nonEmpty, \"Expected at least one CometIcebergNativeScanExec node\")\n\n        val metrics = scanNodes.head.metrics\n\n        // Iceberg prunes entire manifests when all files in a manifest don't match the filter\n        assert(\n          metrics(\"resultDataFiles\").value == 1,\n          s\"Expected 1 result data file, got ${metrics(\"resultDataFiles\").value}\")\n        assert(\n          metrics(\"scannedDataManifests\").value == 1,\n          s\"Expected 1 scanned manifest, got ${metrics(\"scannedDataManifests\").value}\")\n        assert(\n          metrics(\"skippedDataManifests\").value == 2,\n          s\"Expected 2 skipped manifests, got ${metrics(\"skippedDataManifests\").value}\")\n\n        // Verify the query actually returns correct results\n        val result = df.collect()\n        assert(metrics(\"output_rows\").value == 1000)\n        assert(result.length == 1000, s\"Expected 1000 rows, got ${result.length}\")\n\n        spark.sql(\"DROP TABLE test_cat.db.pruning_test\")\n      }\n    }\n  }\n\n  test(\"verify delete file metrics - MOR table\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n   
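     // MOR (merge-on-read) tables write delete files that the scan must apply at read time,\n        // whereas COW (copy-on-write) rewrites the affected data files when rows are deleted.\n   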
     // Equality delete columns force MOR behavior instead of COW\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.delete_metrics (\n            id INT,\n            category STRING,\n            value DOUBLE\n          ) USING iceberg\n          TBLPROPERTIES (\n            'write.delete.mode' = 'merge-on-read',\n            'write.merge.mode' = 'merge-on-read',\n            'write.delete.equality-delete-columns' = 'id'\n          )\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.delete_metrics\n          VALUES\n            (1, 'A', 10.5), (2, 'B', 20.3), (3, 'A', 30.7),\n            (4, 'B', 15.2), (5, 'A', 25.8), (6, 'C', 35.0)\n        \"\"\")\n\n        spark.sql(\"DELETE FROM test_cat.db.delete_metrics WHERE id IN (2, 4, 6)\")\n\n        val df = spark.sql(\"SELECT * FROM test_cat.db.delete_metrics\")\n\n        val scanNodes = df.queryExecution.executedPlan\n          .collectLeaves()\n          .collect { case s: CometIcebergNativeScanExec => s }\n\n        assert(scanNodes.nonEmpty, \"Expected at least one CometIcebergNativeScanExec node\")\n\n        val metrics = scanNodes.head.metrics\n\n        // Iceberg may convert equality deletes to positional deletes internally\n        assert(\n          metrics(\"resultDeleteFiles\").value > 0,\n          s\"Expected result delete files > 0, got ${metrics(\"resultDeleteFiles\").value}\")\n        assert(\n          metrics(\"totalDeleteFileSize\").value > 0,\n          s\"Expected total delete file size > 0, got ${metrics(\"totalDeleteFileSize\").value}\")\n\n        val hasDeletes = metrics(\"positionalDeleteFiles\").value > 0 ||\n          metrics(\"equalityDeleteFiles\").value > 0\n        assert(hasDeletes, \"Expected either positional or equality delete files > 0\")\n\n        val result = df.collect()\n        assert(metrics(\"output_rows\").value == 3)\n        assert(result.length == 3, s\"Expected 3 rows after deletes, got ${result.length}\")\n\n        spark.sql(\"DROP TABLE test_cat.db.delete_metrics\")\n      }\n    }\n  }\n\n  test(\"verify output_rows metric reflects row-level filtering in scan\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\",\n        // Create many relatively small files so each file/row group covers a narrow range of ids\n        \"spark.sql.files.maxRecordsPerFile\" -> \"1000\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.filter_metric_test (\n            id INT,\n            category STRING,\n            value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        // Insert 10,000 rows with mixed category values\n        // This ensures row groups will have mixed data that can't be completely eliminated\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.filter_metric_test\n          SELECT\n            id,\n            CASE WHEN id % 2 = 0 THEN 'even' ELSE 'odd' END as category,\n            CAST(id * 1.5 AS DOUBLE) as value\n          FROM range(10000)\n        \"\"\")\n\n        // Apply a highly selective filter on id that will filter ~99% of rows\n        
// This filter requires row-level evaluation because:\n        // - Row groups contain ranges of IDs (0-999, 1000-1999, etc.)\n        // - The first row group (0-999) cannot be fully eliminated by stats alone\n        // - Row-level filtering must apply \"id < 100\" to filter out rows 100-999\n        val df = spark.sql(\"\"\"\n          SELECT * FROM test_cat.db.filter_metric_test\n          WHERE id < 100\n        \"\"\")\n\n        val scanNodes = df.queryExecution.executedPlan\n          .collectLeaves()\n          .collect { case s: CometIcebergNativeScanExec => s }\n\n        assert(scanNodes.nonEmpty, \"Expected at least one CometIcebergNativeScanExec node\")\n\n        val metrics = scanNodes.head.metrics\n\n        // Execute the query to populate metrics\n        val result = df.collect()\n\n        // The filter \"id < 100\" should match exactly 100 rows (0-99)\n        assert(result.length == 100, s\"Expected 100 rows after filter, got ${result.length}\")\n\n        // CRITICAL: Verify output_rows metric matches the filtered count\n        // If row-level filtering is working, this should be 100\n        // If only row group filtering is working, this would be ~1000 (entire first row group)\n        assert(\n          metrics(\"output_rows\").value == 100,\n          s\"Expected output_rows=100 (filtered count), got ${metrics(\"output_rows\").value}. \" +\n            \"This indicates row-level filtering may not be working correctly.\")\n\n        // Verify the filter actually selected the right rows\n        val ids = result.map(_.getInt(0)).sorted\n        assert(ids.head == 0, s\"Expected first id=0, got ${ids.head}\")\n        assert(ids.last == 99, s\"Expected last id=99, got ${ids.last}\")\n        assert(ids.forall(_ < 100), \"All IDs should be < 100\")\n\n        spark.sql(\"DROP TABLE test_cat.db.filter_metric_test\")\n      }\n    }\n  }\n\n  test(\"schema evolution - read old snapshot after column drop (VERSION AS OF)\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\",\n        // Force LOCAL mode to use iceberg-rust\n        \"spark.sql.iceberg.read.data-planning-mode\" -> \"local\") {\n\n        // This test verifies that Comet correctly handles reading old snapshots after schema changes,\n        // which is a form of backward schema evolution. 
This corresponds to these Iceberg Java tests:\n        // - TestIcebergSourceHadoopTables::testSnapshotReadAfterDropColumn\n        // - TestIcebergSourceHadoopTables::testSnapshotReadAfterAddAndDropColumn\n        // - TestIcebergSourceHiveTables::testSnapshotReadAfterDropColumn\n        // - TestIcebergSourceHiveTables::testSnapshotReadAfterAddAndDropColumn\n        // - TestSnapshotSelection::testSnapshotSelectionByTagWithSchemaChange\n\n        // Step 1: Create table with columns (id, data, category)\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.schema_evolution_test (\n            id INT,\n            data STRING,\n            category STRING\n          ) USING iceberg\n        \"\"\")\n\n        // Step 2: Write data with all three columns\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.schema_evolution_test\n          VALUES (1, 'x', 'A'), (2, 'y', 'A'), (3, 'z', 'B')\n        \"\"\")\n\n        // Get snapshot ID before schema change\n        val snapshotIdBefore = spark\n          .sql(\"SELECT snapshot_id FROM test_cat.db.schema_evolution_test.snapshots ORDER BY committed_at DESC LIMIT 1\")\n          .collect()(0)\n          .getLong(0)\n\n        // Verify data is correct before schema change\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.schema_evolution_test ORDER BY id\")\n\n        // Step 3: Drop the \"data\" column\n        spark.sql(\"ALTER TABLE test_cat.db.schema_evolution_test DROP COLUMN data\")\n\n        // Step 4: Read the old snapshot (before column was dropped) using VERSION AS OF\n        // This requires using the snapshot's schema, not the current table schema\n        checkIcebergNativeScan(\n          s\"SELECT * FROM test_cat.db.schema_evolution_test VERSION AS OF $snapshotIdBefore ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.schema_evolution_test\")\n      }\n    }\n  }\n\n  test(\"schema evolution - branch read after adding DATE column\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\",\n        \"spark.sql.iceberg.read.data-planning-mode\" -> \"local\") {\n\n        // Reproduces: TestSelect::readAndWriteWithBranchAfterSchemaChange\n        // Error: \"Iceberg scan error: Unexpected => unexpected target column type Date32\"\n        //\n        // Issue: When reading old data from a branch after the table schema evolved to add\n        // a DATE column, the schema adapter fails to handle Date32 type conversion.\n\n        // Step 1: Create table with (id, data, float_col)\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.date_branch_test (\n            id BIGINT,\n            data STRING,\n            float_col FLOAT\n          ) USING iceberg\n        \"\"\")\n\n        // Step 2: Insert data\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.date_branch_test\n          VALUES (1, 'a', 1.0), (2, 'b', 2.0), (3, 'c', CAST('NaN' AS FLOAT))\n        \"\"\")\n\n        // Step 3: Create a branch at this point using Iceberg API\n        val catalog = 
spark.sessionState.catalogManager.catalog(\"test_cat\")\n        val ident =\n          org.apache.spark.sql.connector.catalog.Identifier.of(Array(\"db\"), \"date_branch_test\")\n        val sparkTable = catalog\n          .asInstanceOf[org.apache.iceberg.spark.SparkCatalog]\n          .loadTable(ident)\n          .asInstanceOf[org.apache.iceberg.spark.source.SparkTable]\n        val table = sparkTable.table()\n        val snapshotId = table.currentSnapshot().snapshotId()\n        table.manageSnapshots().createBranch(\"test_branch\", snapshotId).commit()\n\n        // Step 4: Evolve schema - drop float_col, add date_col\n        spark.sql(\"ALTER TABLE test_cat.db.date_branch_test DROP COLUMN float_col\")\n        spark.sql(\"ALTER TABLE test_cat.db.date_branch_test ADD COLUMN date_col DATE\")\n\n        // Step 5: Insert more data with the new schema\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.date_branch_test\n          VALUES (4, 'd', DATE '2024-04-04'), (5, 'e', DATE '2024-05-05')\n        \"\"\")\n\n        // Step 6: Read from the branch using VERSION AS OF\n        // This reads old data (id, data, float_col) but applies the current schema (id, data, date_col)\n        // The old data files don't have date_col, so it should be NULL\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.date_branch_test VERSION AS OF 'test_branch' ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.date_branch_test\")\n      }\n    }\n  }\n\n  // Complex type filter tests\n  test(\"complex type filter - struct column IS NULL\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.struct_filter_test (\n            id INT,\n            name STRING,\n            address STRUCT<city: STRING, zip: INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.struct_filter_test\n          VALUES\n            (1, 'Alice', struct('NYC', 10001)),\n            (2, 'Bob', struct('LA', 90001)),\n            (3, 'Charlie', NULL)\n        \"\"\")\n\n        // Test filtering on struct IS NULL - this should fall back to Spark\n        // (iceberg-rust doesn't support IS NULL on complex type columns yet)\n        checkSparkAnswer(\n          \"SELECT * FROM test_cat.db.struct_filter_test WHERE address IS NULL ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.struct_filter_test\")\n      }\n    }\n  }\n\n  test(\"complex type filter - struct field filter\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> 
\"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.struct_field_filter_test (\n            id INT,\n            name STRING,\n            address STRUCT<city: STRING, zip: INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.struct_field_filter_test\n          VALUES\n            (1, 'Alice', struct('NYC', 10001)),\n            (2, 'Bob', struct('LA', 90001)),\n            (3, 'Charlie', struct('NYC', 10002))\n        \"\"\")\n\n        // Test filtering on struct field - this should use native scan now!\n        // iceberg-rust supports nested field filters like address.city = 'NYC'\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.struct_field_filter_test WHERE address.city = 'NYC' ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.struct_field_filter_test\")\n      }\n    }\n  }\n\n  test(\"complex type filter - entire struct value\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.struct_value_filter_test (\n            id INT,\n            name STRING,\n            address STRUCT<city: STRING, zip: INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.struct_value_filter_test\n          VALUES\n            (1, 'Alice', named_struct('city', 'NYC', 'zip', 10001)),\n            (2, 'Bob', named_struct('city', 'LA', 'zip', 90001)),\n            (3, 'Charlie', named_struct('city', 'NYC', 'zip', 10001))\n        \"\"\")\n\n        // Test filtering on entire struct value - this falls back to Spark\n        // (Iceberg Java doesn't push down this type of filter)\n        checkSparkAnswer(\n          \"SELECT * FROM test_cat.db.struct_value_filter_test WHERE address = named_struct('city', 'NYC', 'zip', 10001) ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.struct_value_filter_test\")\n      }\n    }\n  }\n\n  test(\"complex type filter - array column IS NULL\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.array_filter_test (\n            id INT,\n            name STRING,\n            values ARRAY<INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.array_filter_test\n          VALUES\n            (1, 'Alice', array(1, 2, 3)),\n            (2, 'Bob', array(4, 5, 6)),\n            (3, 'Charlie', NULL)\n        \"\"\")\n\n        // Test filtering on array IS NULL - 
this should fall back to Spark\n        // (iceberg-rust doesn't support IS NULL on complex type columns yet)\n        checkSparkAnswer(\n          \"SELECT * FROM test_cat.db.array_filter_test WHERE values IS NULL ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.array_filter_test\")\n      }\n    }\n  }\n\n  test(\"complex type filter - array element filter\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.array_element_filter_test (\n            id INT,\n            name STRING,\n            values ARRAY<INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.array_element_filter_test\n          VALUES\n            (1, 'Alice', array(1, 2, 3)),\n            (2, 'Bob', array(4, 5, 6)),\n            (3, 'Charlie', array(1, 7, 8))\n        \"\"\")\n\n        // Test filtering with array_contains - this should fall back to Spark\n        // (Iceberg Java only pushes down NOT NULL, which fails in iceberg-rust)\n        checkSparkAnswer(\n          \"SELECT * FROM test_cat.db.array_element_filter_test WHERE array_contains(values, 1) ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.array_element_filter_test\")\n      }\n    }\n  }\n\n  test(\"complex type filter - entire array value\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.array_value_filter_test (\n            id INT,\n            name STRING,\n            values ARRAY<INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.array_value_filter_test\n          VALUES\n            (1, 'Alice', array(1, 2, 3)),\n            (2, 'Bob', array(4, 5, 6)),\n            (3, 'Charlie', array(1, 2, 3))\n        \"\"\")\n\n        // Test filtering on entire array value - this should fall back to Spark\n        // (Iceberg Java only pushes down NOT NULL, which fails in iceberg-rust)\n        checkSparkAnswer(\n          \"SELECT * FROM test_cat.db.array_value_filter_test WHERE values = array(1, 2, 3) ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.array_value_filter_test\")\n      }\n    }\n  }\n\n  test(\"complex type filter - map column IS NULL\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> 
\"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.map_filter_test (\n            id INT,\n            name STRING,\n            properties MAP<STRING, INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.map_filter_test\n          VALUES\n            (1, 'Alice', map('age', 30, 'score', 95)),\n            (2, 'Bob', map('age', 25, 'score', 87)),\n            (3, 'Charlie', NULL)\n        \"\"\")\n\n        // Test filtering on map IS NULL - this should fall back to Spark\n        // (iceberg-rust doesn't support IS NULL on complex type columns yet)\n        checkSparkAnswer(\n          \"SELECT * FROM test_cat.db.map_filter_test WHERE properties IS NULL ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.map_filter_test\")\n      }\n    }\n  }\n\n  test(\"complex type filter - map key access filter\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.map_key_filter_test (\n            id INT,\n            name STRING,\n            properties MAP<STRING, INT>\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.map_key_filter_test\n          VALUES\n            (1, 'Alice', map('age', 30, 'score', 95)),\n            (2, 'Bob', map('age', 25, 'score', 87)),\n            (3, 'Charlie', map('age', 30, 'score', 80))\n        \"\"\")\n\n        // Test filtering with map key access - this should fall back to Spark\n        // (Iceberg Java only pushes down NOT NULL, which fails in iceberg-rust)\n        checkSparkAnswer(\n          \"SELECT * FROM test_cat.db.map_key_filter_test WHERE properties['age'] = 30 ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.map_key_filter_test\")\n      }\n    }\n  }\n\n  // Test to reproduce \"Field X not found in schema\" errors\n  // Mimics TestAggregatePushDown.testNaN() where aggregate output schema differs from table schema\n  test(\"partitioned table with aggregates - reproduces Field not found error\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Create table partitioned by id, like TestAggregatePushDown.testNaN\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.agg_test (\n            id INT,\n        
    data FLOAT\n          ) USING iceberg\n          PARTITIONED BY (id)\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.agg_test VALUES\n          (1, CAST('NaN' AS FLOAT)),\n          (1, CAST('NaN' AS FLOAT)),\n          (2, 2.0),\n          (2, CAST('NaN' AS FLOAT)),\n          (3, CAST('NaN' AS FLOAT)),\n          (3, 1.0)\n        \"\"\")\n\n        // This aggregate query's output schema is completely different from table schema\n        // When iceberg-rust tries to look up partition field 'id' (field 1 in table schema),\n        // it needs to find it in the full table schema, not the aggregate output schema\n        checkIcebergNativeScan(\n          \"SELECT count(*), max(data), min(data), count(data) FROM test_cat.db.agg_test\")\n\n        spark.sql(\"DROP TABLE test_cat.db.agg_test\")\n      }\n    }\n  }\n\n  test(\"MOR partitioned table with timestamp_ntz - reproduces NULL partition issue\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Create partitioned table like TestRewritePositionDeleteFiles.testTimestampNtz\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.timestamp_ntz_partition_test (\n            id LONG,\n            ts TIMESTAMP_NTZ,\n            c1 STRING,\n            c2 STRING\n          ) USING iceberg\n          PARTITIONED BY (ts)\n          TBLPROPERTIES (\n            'format-version' = '2',\n            'write.delete.mode' = 'merge-on-read',\n            'write.merge.mode' = 'merge-on-read'\n          )\n        \"\"\")\n\n        // Insert data into multiple partitions\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.timestamp_ntz_partition_test\n          VALUES\n            (1, TIMESTAMP_NTZ '2023-01-01 15:30:00', 'a', 'b'),\n            (2, TIMESTAMP_NTZ '2023-01-02 15:30:00', 'c', 'd'),\n            (3, TIMESTAMP_NTZ '2023-01-03 15:30:00', 'e', 'f')\n        \"\"\")\n\n        // Delete some rows to create position delete files\n        spark.sql(\"DELETE FROM test_cat.db.timestamp_ntz_partition_test WHERE id = 2\")\n\n        // Query should work with NULL partition handling\n        checkIcebergNativeScan(\n          \"SELECT * FROM test_cat.db.timestamp_ntz_partition_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.timestamp_ntz_partition_test\")\n      }\n    }\n  }\n\n  test(\"MOR partitioned table with decimal - reproduces NULL partition issue\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Create partitioned table like TestRewritePositionDeleteFiles.testDecimalPartition\n        
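// Like the timestamp_ntz test above, this exercises partition-value decoding for a\n        // partition type whose mishandling can surface as NULL partition values under MOR deletes.\n        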
spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.decimal_partition_test (\n            id LONG,\n            dec DECIMAL(18, 10),\n            c1 STRING,\n            c2 STRING\n          ) USING iceberg\n          PARTITIONED BY (dec)\n          TBLPROPERTIES (\n            'format-version' = '2',\n            'write.delete.mode' = 'merge-on-read',\n            'write.merge.mode' = 'merge-on-read'\n          )\n        \"\"\")\n\n        // Insert data into multiple partitions\n        spark.sql(\"\"\"\n          INSERT INTO test_cat.db.decimal_partition_test\n          VALUES\n            (1, 1.0, 'a', 'b'),\n            (2, 2.0, 'c', 'd'),\n            (3, 3.0, 'e', 'f')\n        \"\"\")\n\n        // Delete some rows to create position delete files\n        spark.sql(\"DELETE FROM test_cat.db.decimal_partition_test WHERE id = 2\")\n\n        // Query should work with NULL partition handling\n        checkIcebergNativeScan(\"SELECT * FROM test_cat.db.decimal_partition_test ORDER BY id\")\n\n        spark.sql(\"DROP TABLE test_cat.db.decimal_partition_test\")\n      }\n    }\n  }\n\n  // Regression test for https://github.com/apache/datafusion-comet/issues/3856\n  // Fixed in https://github.com/apache/iceberg-rust/pull/2301\n  test(\"migration - INT96 timestamp\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        import org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator}\n        import org.apache.spark.sql.functions.monotonically_increasing_id\n        import org.apache.spark.sql.types._\n\n        val dataPath = s\"${warehouseDir.getAbsolutePath}/int96_data\"\n        val numRows = 50\n        val r = new scala.util.Random(42)\n\n        // Exercise INT96 coercion in flat columns, structs, arrays, and maps\n        val fuzzSchema = StructType(\n          Seq(\n            StructField(\"ts\", TimestampType, nullable = true),\n            StructField(\"value\", DoubleType, nullable = true),\n            StructField(\n              \"ts_struct\",\n              StructType(\n                Seq(\n                  StructField(\"inner_ts\", TimestampType, nullable = true),\n                  StructField(\"inner_val\", DoubleType, nullable = true))),\n              nullable = true),\n            StructField(\n              \"ts_array\",\n              ArrayType(TimestampType, containsNull = true),\n              nullable = true),\n            StructField(\"ts_map\", MapType(IntegerType, TimestampType), nullable = true)))\n\n        // Default FuzzDataGenerator baseDate is year 3333, outside the i64 nanosecond\n        // range (~1677-2262). 
This triggers the INT96 overflow bug if coercion is missing.\n        val dataGenOptions = DataGenOptions(allowNull = false)\n        val fuzzDf =\n          FuzzDataGenerator.generateDataFrame(r, spark, fuzzSchema, numRows, dataGenOptions)\n\n        val df = fuzzDf.withColumn(\"id\", monotonically_increasing_id())\n\n        // Write Parquet with INT96 timestamps\n        withSQLConf(SQLConf.PARQUET_OUTPUT_TIMESTAMP_TYPE.key -> \"INT96\") {\n          df.write.mode(\"overwrite\").parquet(dataPath)\n        }\n\n        // Verify all timestamp columns in the Parquet file use INT96\n        val parquetFiles = new java.io.File(dataPath)\n          .listFiles()\n          .filter(f => f.getName.endsWith(\".parquet\"))\n        assert(parquetFiles.nonEmpty, \"Expected at least one Parquet file\")\n\n        val parquetFile = parquetFiles.head\n        val reader = org.apache.parquet.hadoop.ParquetFileReader.open(\n          org.apache.parquet.hadoop.util.HadoopInputFile.fromPath(\n            new org.apache.hadoop.fs.Path(parquetFile.getAbsolutePath),\n            spark.sessionState.newHadoopConf()))\n        try {\n          val parquetSchema = reader.getFooter.getFileMetaData.getSchema\n          val int96Columns = parquetSchema.getColumns.asScala\n            .filter(_.getPrimitiveType.getPrimitiveTypeName ==\n              org.apache.parquet.schema.PrimitiveType.PrimitiveTypeName.INT96)\n            .map(_.getPath.mkString(\".\"))\n          // Expect INT96 for: ts, ts_struct.inner_ts, ts_array.list.element, ts_map.value\n          assert(\n            int96Columns.size >= 4,\n            s\"Expected at least 4 INT96 columns but found ${int96Columns.size}: ${int96Columns.mkString(\", \")}\")\n        } finally {\n          reader.close()\n        }\n\n        // Create an unpartitioned Iceberg table and import the Parquet files\n        spark.sql(\"CREATE NAMESPACE IF NOT EXISTS test_cat.db\")\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.int96_test (\n            ts TIMESTAMP,\n            value DOUBLE,\n            ts_struct STRUCT<inner_ts: TIMESTAMP, inner_val: DOUBLE>,\n            ts_array ARRAY<TIMESTAMP>,\n            ts_map MAP<INT, TIMESTAMP>,\n            id BIGINT\n          ) USING iceberg\n        \"\"\")\n\n        try {\n          val tableUtilClass = Class.forName(\"org.apache.iceberg.spark.SparkTableUtil\")\n          val sparkCatalog = spark.sessionState.catalogManager\n            .catalog(\"test_cat\")\n            .asInstanceOf[org.apache.iceberg.spark.SparkCatalog]\n          val ident =\n            org.apache.spark.sql.connector.catalog.Identifier.of(Array(\"db\"), \"int96_test\")\n          val sparkTable = sparkCatalog\n            .loadTable(ident)\n            .asInstanceOf[org.apache.iceberg.spark.source.SparkTable]\n          val table = sparkTable.table()\n\n          val stagingDir = s\"${warehouseDir.getAbsolutePath}/staging\"\n\n          spark.sql(s\"\"\"CREATE TABLE parquet_temp USING parquet LOCATION '$dataPath'\"\"\")\n          val sourceIdent = new org.apache.spark.sql.catalyst.TableIdentifier(\"parquet_temp\")\n\n          val importMethod = tableUtilClass.getMethod(\n            \"importSparkTable\",\n            classOf[org.apache.spark.sql.SparkSession],\n            classOf[org.apache.spark.sql.catalyst.TableIdentifier],\n            classOf[org.apache.iceberg.Table],\n            classOf[String])\n          importMethod.invoke(null, spark, sourceIdent, table, stagingDir)\n\n          val distinctCount = spark\n            
.sql(\"SELECT COUNT(DISTINCT id) FROM test_cat.db.int96_test\")\n            .collect()(0)\n            .getLong(0)\n          assert(\n            distinctCount == numRows,\n            s\"Expected $numRows distinct IDs but got $distinctCount\")\n\n          // Spark's Iceberg reader returns null for INT96 timestamps inside structs,\n          // so we can't use checkIcebergNativeScan (which compares against Spark) for\n          // ts_struct. Instead, compare Comet's read against the raw Parquet source.\n          checkIcebergNativeScan(\n            \"SELECT id, ts, value, ts_array, ts_map FROM test_cat.db.int96_test ORDER BY id\")\n          checkIcebergNativeScan(\"SELECT id, ts FROM test_cat.db.int96_test ORDER BY id\")\n          checkIcebergNativeScan(\"SELECT id, ts_array FROM test_cat.db.int96_test ORDER BY id\")\n          checkIcebergNativeScan(\"SELECT id, ts_map FROM test_cat.db.int96_test ORDER BY id\")\n\n          // Validate ts_struct against raw Parquet since Spark's Iceberg reader can't read it\n          val icebergStructDf = spark\n            .sql(\"SELECT id, ts_struct FROM test_cat.db.int96_test ORDER BY id\")\n            .collect()\n          val parquetStructDf = spark.read\n            .parquet(dataPath)\n            .select(\"id\", \"ts_struct\")\n            .orderBy(\"id\")\n            .collect()\n          assert(\n            icebergStructDf.sameElements(parquetStructDf),\n            \"ts_struct mismatch between Comet Iceberg read and raw Parquet\")\n\n          spark.sql(\"DROP TABLE test_cat.db.int96_test\")\n          spark.sql(\"DROP TABLE parquet_temp\")\n        } catch {\n          case _: ClassNotFoundException =>\n            cancel(\"SparkTableUtil not available\")\n        }\n      }\n    }\n  }\n\n  test(\"REST catalog with native Iceberg scan\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withRESTCatalog { (restUri, _, warehouseDir) =>\n      withSQLConf(\n        \"spark.sql.catalog.rest_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.rest_cat.catalog-impl\" -> \"org.apache.iceberg.rest.RESTCatalog\",\n        \"spark.sql.catalog.rest_cat.uri\" -> restUri,\n        \"spark.sql.catalog.rest_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\",\n        CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"true\") {\n\n        // Create namespace first (REST catalog requires explicit namespace creation)\n        spark.sql(\"CREATE NAMESPACE rest_cat.db\")\n\n        // Create a table via REST catalog\n        spark.sql(\"\"\"\n          CREATE TABLE rest_cat.db.test_table (\n            id INT,\n            name STRING,\n            value DOUBLE\n          ) USING iceberg\n        \"\"\")\n\n        spark.sql(\"\"\"\n          INSERT INTO rest_cat.db.test_table\n          VALUES (1, 'Alice', 10.5), (2, 'Bob', 20.3), (3, 'Charlie', 30.7)\n        \"\"\")\n\n        checkIcebergNativeScan(\"SELECT * FROM rest_cat.db.test_table ORDER BY id\")\n\n        spark.sql(\"DROP TABLE rest_cat.db.test_table\")\n        spark.sql(\"DROP NAMESPACE rest_cat.db\")\n      }\n    }\n  }\n\n  // Helper to create temp directory\n  def withTempIcebergDir(f: File => Unit): Unit = {\n    val dir = Files.createTempDirectory(\"comet-iceberg-test\").toFile\n    try {\n      f(dir)\n    } finally {\n      def deleteRecursively(file: File): Unit 
= {\n        if (file.isDirectory) {\n          file.listFiles().foreach(deleteRecursively)\n        }\n        file.delete()\n      }\n\n      deleteRecursively(dir)\n    }\n  }\n\n  test(\"runtime filtering - multiple DPP filters on two partition columns\") {\n    assume(icebergAvailable, \"Iceberg not available\")\n    withTempIcebergDir { warehouseDir =>\n      val dimDir = new File(warehouseDir, \"dim_parquet\")\n      withSQLConf(\n        \"spark.sql.catalog.runtime_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.runtime_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.runtime_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        \"spark.sql.autoBroadcastJoinThreshold\" -> \"1KB\",\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Create table partitioned by TWO columns: (data, bucket(8, id))\n        // This mimics Iceberg's testMultipleRuntimeFilters\n        spark.sql(\"\"\"\n          CREATE TABLE runtime_cat.db.multi_dpp_fact (\n            id BIGINT,\n            data STRING,\n            date DATE,\n            ts TIMESTAMP\n          ) USING iceberg\n          PARTITIONED BY (data, bucket(8, id))\n        \"\"\")\n\n        // Insert data - 99 rows with varying data and id values\n        val df = spark\n          .range(1, 100)\n          .selectExpr(\n            \"id\",\n            \"CAST(DATE_ADD(DATE '1970-01-01', CAST(id % 4 AS INT)) AS STRING) as data\",\n            \"DATE_ADD(DATE '1970-01-01', CAST(id % 4 AS INT)) as date\",\n            \"CAST(DATE_ADD(DATE '1970-01-01', CAST(id % 4 AS INT)) AS TIMESTAMP) as ts\")\n        df.coalesce(1)\n          .write\n          .format(\"iceberg\")\n          .option(\"fanout-enabled\", \"true\")\n          .mode(\"append\")\n          .saveAsTable(\"runtime_cat.db.multi_dpp_fact\")\n\n        // Create dimension table with specific id=1, data='1970-01-02'\n        spark\n          .createDataFrame(Seq((1L, java.sql.Date.valueOf(\"1970-01-02\"), \"1970-01-02\")))\n          .toDF(\"id\", \"date\", \"data\")\n          .write\n          .parquet(dimDir.getAbsolutePath)\n        spark.read.parquet(dimDir.getAbsolutePath).createOrReplaceTempView(\"dim\")\n\n        // Join on BOTH partition columns - this creates TWO DPP filters\n        val query =\n          \"\"\"SELECT /*+ BROADCAST(d) */ f.*\n            |FROM runtime_cat.db.multi_dpp_fact f\n            |JOIN dim d ON f.id = d.id AND f.data = d.data\n            |WHERE d.date = DATE '1970-01-02'\"\"\".stripMargin\n\n        // Verify plan has 2 dynamic pruning expressions\n        val df2 = spark.sql(query)\n        val planStr = df2.queryExecution.executedPlan.toString\n        // Count \"dynamicpruningexpression(\" to avoid matching \"dynamicpruning#N\" references\n        val dppCount = \"dynamicpruningexpression\\\\(\".r.findAllIn(planStr).length\n        assert(dppCount == 2, s\"Expected 2 DPP expressions but found $dppCount in:\\n$planStr\")\n\n        // Verify native Iceberg scan is used and DPP actually pruned partitions\n        val (_, cometPlan) = checkSparkAnswer(query)\n        val icebergScans = collectIcebergNativeScans(cometPlan)\n        assert(\n          icebergScans.nonEmpty,\n          s\"Expected CometIcebergNativeScanExec but found none. 
Plan:\\n$cometPlan\")\n        // With 4 data values x 8 buckets = up to 32 partitions total\n        // DPP on (data='1970-01-02', bucket(id=1)) should prune to 1\n        val numPartitions = icebergScans.head.numPartitions\n        assert(numPartitions == 1, s\"Expected DPP to prune to 1 partition but got $numPartitions\")\n\n        spark.sql(\"DROP TABLE runtime_cat.db.multi_dpp_fact\")\n      }\n    }\n  }\n\n  test(\"runtime filtering - join with dynamic partition pruning\") {\n    assume(icebergAvailable, \"Iceberg not available\")\n    withTempIcebergDir { warehouseDir =>\n      val dimDir = new File(warehouseDir, \"dim_parquet\")\n      withSQLConf(\n        \"spark.sql.catalog.runtime_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.runtime_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.runtime_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        // Prevent fact table from being broadcast (force dimension to be broadcast)\n        \"spark.sql.autoBroadcastJoinThreshold\" -> \"1KB\",\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Create partitioned Iceberg table (fact table) with 3 partitions\n        // Add enough data to prevent broadcast\n        spark.sql(\"\"\"\n          CREATE TABLE runtime_cat.db.fact_table (\n            id BIGINT,\n            data STRING,\n            date DATE\n          ) USING iceberg\n          PARTITIONED BY (date)\n        \"\"\")\n\n        // Insert data across multiple partitions\n        spark.sql(\"\"\"\n          INSERT INTO runtime_cat.db.fact_table VALUES\n          (1, 'a', DATE '1970-01-01'),\n          (2, 'b', DATE '1970-01-02'),\n          (3, 'c', DATE '1970-01-02'),\n          (4, 'd', DATE '1970-01-03'),\n          (5, 'e', DATE '1970-01-01'),\n          (6, 'f', DATE '1970-01-02'),\n          (7, 'g', DATE '1970-01-03'),\n          (8, 'h', DATE '1970-01-01')\n        \"\"\")\n\n        // Create dimension table (Parquet) in temp directory\n        spark\n          .createDataFrame(Seq((1L, java.sql.Date.valueOf(\"1970-01-02\"))))\n          .toDF(\"id\", \"date\")\n          .write\n          .parquet(dimDir.getAbsolutePath)\n        spark.read.parquet(dimDir.getAbsolutePath).createOrReplaceTempView(\"dim\")\n\n        // This join should trigger dynamic partition pruning\n        // Use BROADCAST hint to force dimension table to be broadcast\n        val query =\n          \"\"\"SELECT /*+ BROADCAST(d) */ f.* FROM runtime_cat.db.fact_table f\n            |JOIN dim d ON f.date = d.date AND d.id = 1\n            |ORDER BY f.id\"\"\".stripMargin\n\n        // Verify the initial plan contains dynamic pruning expression\n        val df = spark.sql(query)\n        val initialPlan = df.queryExecution.executedPlan\n        val planStr = initialPlan.toString\n        assert(\n          planStr.contains(\"dynamicpruning\"),\n          s\"Expected dynamic pruning in plan but got:\\n$planStr\")\n\n        // Should now use native Iceberg scan with DPP\n        checkIcebergNativeScan(query)\n\n        // Verify DPP actually pruned partitions (should only scan 1 of 3 partitions)\n        val (_, cometPlan) = checkSparkAnswer(query)\n        val icebergScans = collectIcebergNativeScans(cometPlan)\n        assert(\n          icebergScans.nonEmpty,\n          s\"Expected CometIcebergNativeScanExec but found none. 
Plan:\\n$cometPlan\")\n        val numPartitions = icebergScans.head.numPartitions\n        assert(numPartitions == 1, s\"Expected DPP to prune to 1 partition but got $numPartitions\")\n\n        spark.sql(\"DROP TABLE runtime_cat.db.fact_table\")\n      }\n    }\n  }\n\n  // Regression test for a user reported issue\n  test(\"double partitioning with range filter on top-level partition\") {\n    assume(icebergAvailable, \"Iceberg not available\")\n\n    // Generate Iceberg table without Comet enabled\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        \"spark.sql.files.maxRecordsPerFile\" -> \"50\") {\n\n        // timestamp + geohash with multi-column partitioning\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.geolocation_trips (\n            outputTimestamp TIMESTAMP,\n            geohash7 STRING,\n            tripId STRING\n          ) USING iceberg\n          PARTITIONED BY (hours(outputTimestamp), truncate(3, geohash7))\n          TBLPROPERTIES (\n            'format-version' = '2',\n            'write.distribution-mode' = 'range',\n            'write.target-file-size-bytes' = '1073741824'\n          )\n        \"\"\")\n        val schema = FuzzDataGenerator.generateSchema(\n          SchemaGenOptions(primitiveTypes = Seq(TimestampType, StringType, StringType)))\n\n        val random = new scala.util.Random(42)\n        // Set baseDate to match our filter range (around 2024-01-01)\n        val options = testing.DataGenOptions(\n          allowNull = false,\n          baseDate = 1704067200000L\n        ) // 2024-01-01 00:00:00\n\n        val df = FuzzDataGenerator\n          .generateDataFrame(random, spark, schema, 1000, options)\n          .toDF(\"outputTimestamp\", \"geohash7\", \"tripId\")\n\n        df.writeTo(\"test_cat.db.geolocation_trips\").append()\n      }\n\n      // Query using Comet native Iceberg scan\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Filter for a range that does not align with hour boundaries\n        // Partitioning is hours(outputTimestamp), so filter in middle of hours forces residual filter\n        val startMs = 1704067200000L + 30 * 60 * 1000L // 2024-01-01 01:30:00 (30 min into hour)\n        val endMs = 1704078000000L - 15 * 60 * 1000L // 2024-01-01 03:45:00 (15 min before hour)\n\n        checkIcebergNativeScan(s\"\"\"\n          SELECT COUNT(DISTINCT(tripId)) FROM test_cat.db.geolocation_trips\n          WHERE timestamp_millis($startMs) <= outputTimestamp\n            AND outputTimestamp < timestamp_millis($endMs)\n        \"\"\")\n\n        spark.sql(\"DROP TABLE test_cat.db.geolocation_trips\")\n      }\n    }\n  }\n\n  // Regression test for https://github.com/apache/datafusion-comet/issues/3860\n  // Fixed in https://github.com/apache/iceberg-rust/pull/2307\n  test(\"filter with nested types in migrated table\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    
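// The \"migrated table\" scenario below writes plain Parquet first and then imports it,\n    // so the data files carry no Iceberg field IDs and the reader must resolve columns\n    // through Iceberg name mapping.\n    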
withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.test_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.test_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.test_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        val dataPath = s\"${warehouseDir.getAbsolutePath}/nested_data\"\n\n        // Write Parquet WITHOUT Iceberg (simulates pre-migration data)\n        // id is last so its leaf index is after all nested type leaves\n        spark\n          .sql(\"\"\"\n          SELECT\n            named_struct('age', id * 10, 'score', id * 1.5) AS info,\n            array(id, id + 1) AS tags,\n            map('key', id) AS props,\n            id\n          FROM range(10)\n        \"\"\")\n          .write\n          .parquet(dataPath)\n\n        spark.sql(\"CREATE NAMESPACE IF NOT EXISTS test_cat.db\")\n        spark.sql(\"\"\"\n          CREATE TABLE test_cat.db.nested_migrate (\n            info STRUCT<age: BIGINT, score: DOUBLE>,\n            tags ARRAY<BIGINT>,\n            props MAP<STRING, BIGINT>,\n            id BIGINT\n          ) USING iceberg\n        \"\"\")\n\n        try {\n          val tableUtilClass = Class.forName(\"org.apache.iceberg.spark.SparkTableUtil\")\n          val sparkCatalog = spark.sessionState.catalogManager\n            .catalog(\"test_cat\")\n            .asInstanceOf[org.apache.iceberg.spark.SparkCatalog]\n          val ident =\n            org.apache.spark.sql.connector.catalog.Identifier.of(Array(\"db\"), \"nested_migrate\")\n          val sparkTable = sparkCatalog\n            .loadTable(ident)\n            .asInstanceOf[org.apache.iceberg.spark.source.SparkTable]\n          val table = sparkTable.table()\n\n          val stagingDir = s\"${warehouseDir.getAbsolutePath}/staging\"\n          spark.sql(s\"\"\"CREATE TABLE parquet_temp USING parquet LOCATION '$dataPath'\"\"\")\n          val sourceIdent = new org.apache.spark.sql.catalyst.TableIdentifier(\"parquet_temp\")\n\n          val importMethod = tableUtilClass.getMethod(\n            \"importSparkTable\",\n            classOf[org.apache.spark.sql.SparkSession],\n            classOf[org.apache.spark.sql.catalyst.TableIdentifier],\n            classOf[org.apache.iceberg.Table],\n            classOf[String])\n          importMethod.invoke(null, spark, sourceIdent, table, stagingDir)\n\n          // Select only flat columns to avoid Spark's Iceberg reader returning\n          // null for struct fields in migrated tables (separate Spark bug)\n          checkIcebergNativeScan(\"SELECT id FROM test_cat.db.nested_migrate ORDER BY id\")\n\n          // Filter on root column with nested types in migrated table:\n          // Parquet files lack Iceberg field IDs, so iceberg-rust falls back to\n          // name mapping where column_map resolution was broken for nested types\n          checkIcebergNativeScan(\n            \"SELECT id FROM test_cat.db.nested_migrate WHERE id > 5 ORDER BY id\")\n\n          spark.sql(\"DROP TABLE test_cat.db.nested_migrate\")\n          spark.sql(\"DROP TABLE parquet_temp\")\n        } catch {\n          case _: ClassNotFoundException =>\n            cancel(\"SparkTableUtil not available\")\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometJsonExpressionSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.catalyst.expressions.{JsonToStructs, StructsToJson}\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions._\n\nimport org.apache.comet.CometSparkSessionExtensions.isSpark40Plus\nimport org.apache.comet.serde.CometStructsToJson\nimport org.apache.comet.testing.{DataGenOptions, ParquetGenerator, SchemaGenOptions}\n\nclass CometJsonExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(\n        CometConf.getExprAllowIncompatConfigKey(classOf[JsonToStructs]) -> \"true\",\n        CometConf.getExprAllowIncompatConfigKey(classOf[StructsToJson]) -> \"true\") {\n        testFun\n      }\n    }\n  }\n\n  test(\"to_json - all supported types\") {\n    assume(!isSpark40Plus)\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          SchemaGenOptions(generateArray = false, generateStruct = false, generateMap = false),\n          DataGenOptions(generateNaN = false, generateInfinity = false))\n      }\n      val table = spark.read.parquet(filename)\n      val fieldsNames = table.schema.fields\n        .filter(sf => CometStructsToJson.isSupportedType(sf.dataType))\n        .map(sf => col(sf.name))\n        .toSeq\n      val df = table.select(to_json(struct(fieldsNames: _*)))\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"from_json - basic primitives\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        (0 until 100).map(i => {\n          val json = s\"\"\"{\"a\":$i,\"b\":\"str_$i\"}\"\"\"\n          (i, json)\n        }),\n        \"tbl\",\n        withDictionary = dictionaryEnabled) {\n\n        val schema = \"a INT, b STRING\"\n        checkSparkAnswerAndOperator(s\"SELECT from_json(_2, '$schema') FROM tbl\")\n        checkSparkAnswerAndOperator(s\"SELECT from_json(_2, '$schema').a FROM tbl\")\n        checkSparkAnswerAndOperator(s\"SELECT from_json(_2, '$schema').b FROM tbl\")\n      }\n    }\n  }\n\n  test(\"from_json - null and error handling\") {\n   
 Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        Seq(\n          (1, \"\"\"{\"a\":123,\"b\":\"test\"}\"\"\"), // Valid JSON\n          (2, \"\"\"{\"a\":456}\"\"\"), // Missing field b\n          (3, \"\"\"{\"a\":null}\"\"\"), // Explicit null\n          (4, \"\"\"invalid json\"\"\"), // Parse error\n          (5, \"\"\"{}\"\"\"), // Empty object\n          (6, \"\"\"null\"\"\"), // JSON null\n          (7, null) // Null input\n        ),\n        \"tbl\",\n        withDictionary = dictionaryEnabled) {\n\n        val schema = \"a INT, b STRING\"\n        checkSparkAnswerAndOperator(\n          s\"SELECT _1, from_json(_2, '$schema') as parsed FROM tbl ORDER BY _1\")\n      }\n    }\n  }\n\n  test(\"from_json - all primitive types\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        (0 until 50).map(i => {\n          val sign = if (i % 2 == 0) 1 else -1\n          val json =\n            s\"\"\"{\"i32\":${sign * i},\"i64\":${sign * i * 1000000000L},\"f32\":${sign * i * 1.5},\"f64\":${sign * i * 2.5},\"bool\":${i % 2 == 0},\"str\":\"value_$i\"}\"\"\"\n          (i, json)\n        }),\n        \"tbl\",\n        withDictionary = dictionaryEnabled) {\n\n        val schema = \"i32 INT, i64 BIGINT, f32 FLOAT, f64 DOUBLE, bool BOOLEAN, str STRING\"\n        checkSparkAnswerAndOperator(s\"SELECT from_json(_2, '$schema') FROM tbl\")\n        checkSparkAnswerAndOperator(s\"SELECT from_json(_2, '$schema').i32 FROM tbl\")\n        checkSparkAnswerAndOperator(s\"SELECT from_json(_2, '$schema').str FROM tbl\")\n      }\n    }\n  }\n\n  test(\"from_json - null input produces null struct\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        Seq(\n          (1, \"\"\"{\"a\":1,\"b\":\"x\"}\"\"\"), // Valid JSON to establish column type\n          (2, null) // Null input\n        ),\n        \"tbl\",\n        withDictionary = dictionaryEnabled) {\n\n        val schema = \"a INT, b STRING\"\n        // Verify that null input produces a NULL struct (not a struct with null fields)\n        checkSparkAnswerAndOperator(\n          s\"SELECT _1, from_json(_2, '$schema') IS NULL as struct_is_null FROM tbl WHERE _1 = 2\")\n        // Field access on null struct should return null\n        checkSparkAnswerAndOperator(\n          s\"SELECT _1, from_json(_2, '$schema').a FROM tbl WHERE _1 = 2\")\n      }\n    }\n  }\n\n  test(\"from_json - nested struct\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        (0 until 50).map(i => {\n          val json = s\"\"\"{\"outer\":{\"inner_a\":$i,\"inner_b\":\"nested_$i\"},\"top_level\":${i * 10}}\"\"\"\n          (i, json)\n        }),\n        \"tbl\",\n        withDictionary = dictionaryEnabled) {\n\n        val schema = \"outer STRUCT<inner_a: INT, inner_b: STRING>, top_level INT\"\n        checkSparkAnswerAndOperator(s\"SELECT from_json(_2, '$schema') FROM tbl\")\n        checkSparkAnswerAndOperator(s\"SELECT from_json(_2, '$schema').outer FROM tbl\")\n        checkSparkAnswerAndOperator(s\"SELECT from_json(_2, '$schema').outer.inner_a FROM tbl\")\n        checkSparkAnswerAndOperator(s\"SELECT from_json(_2, '$schema').top_level FROM tbl\")\n      }\n    }\n  }\n\n  test(\"from_json - valid json with incompatible schema\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        Seq(\n          (1, \"\"\"{\"a\":\"not_a_number\",\"b\":\"test\"}\"\"\"), // String where INT expected\n          (2, 
\"\"\"{\"a\":123,\"b\":456}\"\"\"), // Number where STRING expected\n          (3, \"\"\"{\"a\":{\"nested\":\"value\"},\"b\":\"test\"}\"\"\"), // Object where INT expected\n          (4, \"\"\"{\"a\":[1,2,3],\"b\":\"test\"}\"\"\"), // Array where INT expected\n          (5, \"\"\"{\"a\":123.456,\"b\":\"test\"}\"\"\"), // Float where INT expected\n          (6, \"\"\"{\"a\":true,\"b\":\"test\"}\"\"\"), // Boolean where INT expected\n          (7, \"\"\"{\"a\":123,\"b\":null}\"\"\") // Null value for STRING field\n        ),\n        \"tbl\",\n        withDictionary = dictionaryEnabled) {\n\n        val schema = \"a INT, b STRING\"\n        // When types don't match, Spark typically returns null for that field\n        checkSparkAnswerAndOperator(\n          s\"SELECT _1, from_json(_2, '$schema') as parsed FROM tbl ORDER BY _1\")\n        checkSparkAnswerAndOperator(s\"SELECT _1, from_json(_2, '$schema').a FROM tbl ORDER BY _1\")\n        checkSparkAnswerAndOperator(s\"SELECT _1, from_json(_2, '$schema').b FROM tbl ORDER BY _1\")\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometMapExpressionSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.functions._\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.BinaryType\n\nimport org.apache.comet.serde.CometMapFromEntries\nimport org.apache.comet.testing.{DataGenOptions, ParquetGenerator, SchemaGenOptions}\n\nclass CometMapExpressionSuite extends CometTestBase {\n\n  test(\"read map[int, int] from parquet\") {\n\n    withTempPath { dir =>\n      // create input file with Comet disabled\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val df = spark\n          .range(5)\n          // Spark does not allow null as a key but does allow null as a\n          // value, and the entire map be null\n          .select(\n            when(col(\"id\") > 1, map(col(\"id\"), when(col(\"id\") > 2, col(\"id\")))).alias(\"map1\"))\n        df.write.parquet(dir.toString())\n      }\n\n      Seq(\"\", \"parquet\").foreach { v1List =>\n        withSQLConf(SQLConf.USE_V1_SOURCE_LIST.key -> v1List) {\n          val df = spark.read.parquet(dir.toString())\n          if (v1List.isEmpty) {\n            checkSparkAnswer(df.select(\"map1\"))\n          } else {\n            checkSparkAnswerAndOperator(df.select(\"map1\"))\n          }\n          // we fall back to Spark for map_keys and map_values\n          checkSparkAnswer(df.select(map_keys(col(\"map1\"))))\n          checkSparkAnswer(df.select(map_values(col(\"map1\"))))\n        }\n      }\n    }\n  }\n\n  // repro for https://github.com/apache/datafusion-comet/issues/1754\n  test(\"read map[struct, struct] from parquet\") {\n\n    withTempPath { dir =>\n      // create input file with Comet disabled\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val df = spark\n          .range(5)\n          .withColumn(\"id2\", col(\"id\"))\n          .withColumn(\"id3\", col(\"id\"))\n          // Spark does not allow null as a key but does allow null as a\n          // value, and the entire map be null\n          .select(\n            when(\n              col(\"id\") > 1,\n              map(\n                struct(col(\"id\"), col(\"id2\"), col(\"id3\")),\n                when(col(\"id\") > 2, struct(col(\"id\"), col(\"id2\"), col(\"id3\"))))).alias(\"map1\"))\n        df.write.parquet(dir.toString())\n      }\n\n      Seq(\"\", \"parquet\").foreach { v1List =>\n        withSQLConf(SQLConf.USE_V1_SOURCE_LIST.key -> v1List) {\n          val df = spark.read.parquet(dir.toString())\n          df.createOrReplaceTempView(\"tbl\")\n          if (v1List.isEmpty) {\n            
checkSparkAnswer(df.select(\"map1\"))\n          } else {\n            checkSparkAnswerAndOperator(df.select(\"map1\"))\n          }\n          // we fall back to Spark for map_keys and map_values\n          checkSparkAnswer(df.select(map_keys(col(\"map1\"))))\n          checkSparkAnswer(df.select(map_values(col(\"map1\"))))\n          checkSparkAnswer(spark.sql(\"SELECT map_keys(map1).id2 FROM tbl\"))\n          checkSparkAnswer(spark.sql(\"SELECT map_values(map1).id2 FROM tbl\"))\n        }\n      }\n    }\n  }\n\n  test(\"map_from_arrays\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val schemaGenOptions =\n          SchemaGenOptions(generateArray = true, generateStruct = false, generateMap = false)\n        val dataGenOptions = DataGenOptions(allowNull = false, generateNegativeZero = false)\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          schemaGenOptions,\n          dataGenOptions)\n      }\n      spark.read.parquet(filename).createOrReplaceTempView(\"t1\")\n      val df = spark.sql(\"SELECT map_from_arrays(array(c12), array(c3)) FROM t1\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"fallback for size with map input\") {\n    withTempDir { dir =>\n      withTempView(\"t1\") {\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = true, 100)\n        spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n\n        // Use column references in maps to avoid constant folding\n        checkSparkAnswerAndFallbackReason(\n          sql(\"SELECT size(case when _2 < 0 then map(_8, _9) else map() end) from t1\"),\n          \"size does not support map inputs\")\n      }\n    }\n  }\n\n  // fails with \"map is not supported\"\n  ignore(\"size with map input\") {\n    withTempDir { dir =>\n      withTempView(\"t1\") {\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = true, 100)\n        spark.read.parquet(path.toString).createOrReplaceTempView(\"t1\")\n\n        // Use column references in maps to avoid constant folding\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT size(map(_8, _9, _10, _11)) from t1 where _8 is not null\"))\n        checkSparkAnswerAndOperator(\n          sql(\"SELECT size(case when _2 < 0 then map(_8, _9) else map() end) from t1\"))\n      }\n    }\n  }\n\n  test(\"map_from_entries - convert from Parquet\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val schemaGenOptions =\n          SchemaGenOptions(\n            generateArray = false,\n            generateStruct = false,\n            primitiveTypes = SchemaGenOptions.defaultPrimitiveTypes.filterNot(_ == BinaryType))\n        val dataGenOptions = DataGenOptions(allowNull = false, generateNegativeZero = false)\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          schemaGenOptions,\n          dataGenOptions)\n      }\n      withSQLConf(\n        CometConf.COMET_NATIVE_SCAN_ENABLED.key -> 
\"false\",\n        CometConf.COMET_SPARK_TO_ARROW_ENABLED.key -> \"true\",\n        CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\") {\n        val df = spark.read.parquet(filename)\n        df.createOrReplaceTempView(\"t1\")\n        for (field <- df.schema.fieldNames) {\n          checkSparkAnswerAndOperator(\n            spark.sql(\n              s\"SELECT map_from_entries(array(struct($field as a, $field as b))) FROM t1\"))\n        }\n      }\n    }\n  }\n\n  test(\"map_from_entries - native Parquet reader\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val schemaGenOptions =\n          SchemaGenOptions(\n            generateArray = false,\n            generateStruct = false,\n            primitiveTypes = SchemaGenOptions.defaultPrimitiveTypes.filterNot(_ == BinaryType))\n        val dataGenOptions = DataGenOptions(allowNull = false, generateNegativeZero = false)\n        ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          schemaGenOptions,\n          dataGenOptions)\n      }\n      val df = spark.read.parquet(filename)\n      df.createOrReplaceTempView(\"t1\")\n      for (field <- df.schema.fieldNames) {\n        checkSparkAnswerAndOperator(\n          spark.sql(s\"SELECT map_from_entries(array(struct($field as a, $field as b))) FROM t1\"))\n      }\n    }\n  }\n\n  test(\"map_from_entries - fallback for binary type\") {\n    def fallbackReason(reason: String) = reason\n    val table = \"t2\"\n    withTable(table) {\n      sql(\n        s\"create table $table using parquet as select cast(array() as array<binary>) as c1 from range(10)\")\n      checkSparkAnswerAndFallbackReason(\n        sql(s\"select map_from_entries(array(struct(c1, 0))) from $table\"),\n        fallbackReason(CometMapFromEntries.keyUnsupportedReason))\n      checkSparkAnswerAndFallbackReason(\n        sql(s\"select map_from_entries(array(struct(0, c1))) from $table\"),\n        fallbackReason(CometMapFromEntries.valueUnsupportedReason))\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometMathExpressionSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{DataTypes, StructField, StructType}\n\nimport org.apache.comet.CometSparkSessionExtensions.isSpark35Plus\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator}\n\nclass CometMathExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  test(\"abs\") {\n    val df = createTestData(generateNegativeZero = false)\n    df.createOrReplaceTempView(\"tbl\")\n    for (field <- df.schema.fields) {\n      val col = field.name\n      checkSparkAnswerAndOperator(s\"SELECT $col, abs($col) FROM tbl ORDER BY $col\")\n    }\n  }\n\n  test(\"abs - negative zero\") {\n    val df = createTestData(generateNegativeZero = true)\n    df.createOrReplaceTempView(\"tbl\")\n    for (field <- df.schema.fields.filter(f =>\n        f.dataType == DataTypes.FloatType || f.dataType == DataTypes.DoubleType)) {\n      val col = field.name\n      checkSparkAnswerAndOperator(\n        s\"SELECT $col, abs($col) FROM tbl WHERE CAST($col as string) = '-0.0' ORDER BY $col\")\n    }\n  }\n\n  test(\"abs (ANSI mode)\") {\n    val df = createTestData(generateNegativeZero = false)\n    df.createOrReplaceTempView(\"tbl\")\n    withSQLConf(SQLConf.ANSI_ENABLED.key -> \"true\") {\n      for (field <- df.schema.fields) {\n        val col = field.name\n        checkSparkAnswerMaybeThrows(sql(s\"SELECT $col, abs($col) FROM tbl ORDER BY $col\")) match {\n          case (Some(sparkExc), Some(cometExc)) =>\n            val cometErrorPattern =\n              \"\"\".+[ARITHMETIC_OVERFLOW].+overflow. 
If necessary set \"spark.sql.ansi.enabled\" to \"false\" to bypass this error.\"\"\".r\n            assert(cometErrorPattern.findFirstIn(cometExc.getMessage).isDefined)\n            assert(sparkExc.getMessage.contains(\"overflow\"))\n          case (Some(_), None) =>\n            fail(\"Exception should be thrown\")\n          case (None, Some(cometExc)) =>\n            throw cometExc\n          case _ =>\n        }\n      }\n    }\n  }\n\n  private def createTestData(generateNegativeZero: Boolean) = {\n    val r = new Random(42)\n    val schema = StructType(\n      Seq(\n        StructField(\"c0\", DataTypes.ByteType, nullable = true),\n        StructField(\"c1\", DataTypes.ShortType, nullable = true),\n        StructField(\"c2\", DataTypes.IntegerType, nullable = true),\n        StructField(\"c3\", DataTypes.LongType, nullable = true),\n        StructField(\"c4\", DataTypes.FloatType, nullable = true),\n        StructField(\"c5\", DataTypes.DoubleType, nullable = true),\n        StructField(\"c6\", DataTypes.createDecimalType(10, 2), nullable = true)))\n    FuzzDataGenerator.generateDataFrame(\n      r,\n      spark,\n      schema,\n      1000,\n      DataGenOptions(generateNegativeZero = generateNegativeZero))\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/3561\n  ignore(\"width_bucket\") {\n    assume(isSpark35Plus, \"width_bucket was added in Spark 3.5\")\n    withSQLConf(\"spark.comet.exec.localTableScan.enabled\" -> \"true\") {\n      spark\n        .createDataFrame(\n          Seq((5.3, 0.2, 10.6, 5), (8.1, 0.0, 5.7, 4), (-0.9, 5.2, 0.5, 2), (-2.1, 1.3, 3.4, 3)))\n        .toDF(\"c1\", \"c2\", \"c3\", \"c4\")\n        .createOrReplaceTempView(\"width_bucket_test\")\n      checkSparkAnswerAndOperator(\n        \"SELECT c1, width_bucket(c1, c2, c3, c4) FROM width_bucket_test\")\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/3561\n  ignore(\"width_bucket - edge cases\") {\n    assume(isSpark35Plus, \"width_bucket was added in Spark 3.5\")\n    withSQLConf(\"spark.comet.exec.localTableScan.enabled\" -> \"true\") {\n      spark\n        .createDataFrame(Seq(\n          (0.0, 10.0, 0.0, 5), // Value equals max (reversed bounds)\n          (10.0, 0.0, 10.0, 5), // Value equals max (normal bounds)\n          (10.0, 0.0, 0.0, 5), // Min equals max - returns NULL\n          (5.0, 0.0, 10.0, 0) // Zero buckets - returns NULL\n        ))\n        .toDF(\"c1\", \"c2\", \"c3\", \"c4\")\n        .createOrReplaceTempView(\"width_bucket_edge\")\n      checkSparkAnswerAndOperator(\n        \"SELECT c1, width_bucket(c1, c2, c3, c4) FROM width_bucket_edge\")\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/3561\n  ignore(\"width_bucket - NaN values\") {\n    assume(isSpark35Plus, \"width_bucket was added in Spark 3.5\")\n    withSQLConf(\"spark.comet.exec.localTableScan.enabled\" -> \"true\") {\n      spark\n        .createDataFrame(\n          Seq((Double.NaN, 5.0, 0.0), (5.0, Double.NaN, 0.0), (5.0, 0.0, Double.NaN)))\n        .toDF(\"c1\", \"c2\", \"c3\")\n        .createOrReplaceTempView(\"width_bucket_nan\")\n      checkSparkAnswerAndOperator(\"SELECT c1, width_bucket(c1, c2, c3, 5) FROM width_bucket_nan\")\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/3561\n  ignore(\"width_bucket - with range data\") {\n    assume(isSpark35Plus, \"width_bucket was added in Spark 3.5\")\n    withSQLConf(\"spark.comet.exec.localTableScan.enabled\" -> \"true\") {\n      spark\n        .range(10)\n        .selectExpr(\"id\", 
\"CAST(id AS DOUBLE) as value\")\n        .createOrReplaceTempView(\"width_bucket_range\")\n      checkSparkAnswerAndOperator(\n        \"SELECT id, width_bucket(value, 0.0, 10.0, 5) FROM width_bucket_range ORDER BY id\")\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometNativeSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.apache.spark.{SparkEnv, SparkException}\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.catalyst.expressions.PrettyAttribute\nimport org.apache.spark.sql.comet.{CometBroadcastExchangeExec, CometExec, CometExecUtils}\nimport org.apache.spark.sql.types.LongType\nimport org.apache.spark.sql.vectorized.ColumnarBatch\n\nclass CometNativeSuite extends CometTestBase {\n  test(\"test handling NPE thrown by JVM\") {\n    val rdd = spark.range(0, 1).rdd.map { value =>\n      val limitOp =\n        CometExecUtils.getLimitNativePlan(Seq(PrettyAttribute(\"test\", LongType)), 100).get\n      val cometIter = CometExec.getCometIterator(\n        Seq(new Iterator[ColumnarBatch] {\n          override def hasNext: Boolean = true\n          override def next(): ColumnarBatch = throw new NullPointerException()\n        }),\n        1,\n        limitOp,\n        1,\n        0)\n      try {\n        cometIter.next()\n      } finally {\n        cometIter.close()\n      }\n      value\n    }\n\n    val exception = intercept[SparkException] {\n      rdd.collect()\n    }\n    assert(exception.getMessage contains \"java.lang.NullPointerException\")\n  }\n\n  test(\"handling NPE when closing null handle of parquet reader\") {\n    assert(NativeBase.isLoaded)\n    val exception1 = intercept[NullPointerException] {\n      parquet.Native.closeRecordBatchReader(0)\n    }\n    assert(exception1.getMessage contains \"null batch context handle\")\n\n    val exception2 = intercept[NullPointerException] {\n      parquet.Native.closeColumnReader(0)\n    }\n    assert(exception2.getMessage contains \"null context handle\")\n  }\n\n  test(\"Comet native should use spark local dir as temp dir\") {\n    withParquetTable((0 until 100000).map(i => (i, i + 1)), \"table\") {\n      val dirs = SparkEnv.get.blockManager.getLocalDiskDirs\n      dirs.foreach { dir =>\n        val files = new java.io.File(dir).listFiles()\n        assert(!files.exists(f => f.isDirectory && f.getName.startsWith(\"datafusion-\")))\n      }\n\n      // Check if the DataFusion temporary dir exists in the Spark local dirs when a spark job involving\n      // Comet native operator is running.\n      val observedDataFusionDir = spark\n        .table(\"table\")\n        .selectExpr(\"_1 + _2 as value\")\n        .rdd\n        .mapPartitions { _ =>\n          dirs.map { dir =>\n            val files = new java.io.File(dir).listFiles()\n            files.count(f => f.isDirectory && f.getName.startsWith(\"datafusion-\"))\n          }.iterator\n        }\n        .sum()\n      assert(observedDataFusionDir > 0)\n\n      // DataFusion temporary dir should be cleaned up after 
the job is done.\n      dirs.foreach { dir =>\n        val files = new java.io.File(dir).listFiles()\n        assert(!files.exists(f => f.isDirectory && f.getName.startsWith(\"datafusion-\")))\n      }\n    }\n  }\n\n  test(\"test maxBroadcastTableSize\") {\n    withSQLConf(\"spark.sql.maxBroadcastTableSize\" -> \"10B\") {\n      spark.range(0, 1000).createOrReplaceTempView(\"t1\")\n      spark.range(0, 100).createOrReplaceTempView(\"t2\")\n      val df = spark.sql(\"select /*+ BROADCAST(t2) */ * from t1 join t2 on t1.id = t2.id\")\n      val exception = intercept[SparkException] {\n        df.collect()\n      }\n      assert(\n        exception.getMessage.contains(\"Cannot broadcast the table that is larger than 10.0 B\"))\n      val broadcasts = collect(df.queryExecution.executedPlan) {\n        case p: CometBroadcastExchangeExec => p\n      }\n      assert(broadcasts.size == 1)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometS3TestBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.net.URI\n\nimport scala.util.Try\n\nimport org.testcontainers.containers.MinIOContainer\nimport org.testcontainers.utility.DockerImageName\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.sql.CometTestBase\n\nimport software.amazon.awssdk.auth.credentials.{AwsBasicCredentials, StaticCredentialsProvider}\nimport software.amazon.awssdk.services.s3.S3Client\nimport software.amazon.awssdk.services.s3.model.{CreateBucketRequest, HeadBucketRequest}\n\ntrait CometS3TestBase extends CometTestBase {\n\n  protected var minioContainer: MinIOContainer = _\n  protected val userName = \"minio-test-user\"\n  protected val password = \"minio-test-password\"\n\n  protected def testBucketName: String\n\n  override def beforeAll(): Unit = {\n    minioContainer = new MinIOContainer(DockerImageName.parse(\"minio/minio:latest\"))\n      .withUserName(userName)\n      .withPassword(password)\n    minioContainer.start()\n    createBucketIfNotExists(testBucketName)\n\n    super.beforeAll()\n  }\n\n  override def afterAll(): Unit = {\n    super.afterAll()\n    if (minioContainer != null) {\n      minioContainer.stop()\n    }\n  }\n\n  override protected def sparkConf: SparkConf = {\n    val conf = super.sparkConf\n    conf.set(\"spark.hadoop.fs.s3a.access.key\", userName)\n    conf.set(\"spark.hadoop.fs.s3a.secret.key\", password)\n    conf.set(\"spark.hadoop.fs.s3a.endpoint\", minioContainer.getS3URL)\n    conf.set(\"spark.hadoop.fs.s3a.path.style.access\", \"true\")\n  }\n\n  protected def createBucketIfNotExists(bucketName: String): Unit = {\n    val credentials = AwsBasicCredentials.create(userName, password)\n    val s3Client = S3Client\n      .builder()\n      .endpointOverride(URI.create(minioContainer.getS3URL))\n      .credentialsProvider(StaticCredentialsProvider.create(credentials))\n      .forcePathStyle(true)\n      .build()\n    try {\n      val bucketExists = Try {\n        s3Client.headBucket(HeadBucketRequest.builder().bucket(bucketName).build())\n        true\n      }.getOrElse(false)\n\n      if (!bucketExists) {\n        val request = CreateBucketRequest.builder().bucket(bucketName).build()\n        s3Client.createBucket(request)\n      }\n    } finally {\n      s3Client.close()\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometSparkSessionExtensionsSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.sql._\nimport org.apache.spark.sql.internal.SQLConf\n\nclass CometSparkSessionExtensionsSuite extends CometTestBase {\n\n  import CometSparkSessionExtensions._\n\n  test(\"isCometLoaded\") {\n    val conf = new SQLConf\n\n    conf.setConfString(CometConf.COMET_ENABLED.key, \"false\")\n    assert(!isCometLoaded(conf))\n\n    // Since the native lib is probably already loaded due to previous tests, we reset it here\n    NativeBase.setLoaded(false)\n\n    conf.setConfString(CometConf.COMET_ENABLED.key, \"true\")\n    val oldProperty = System.getProperty(\"os.name\")\n    System.setProperty(\"os.name\", \"foo\")\n    assert(!isCometLoaded(conf))\n\n    System.setProperty(\"os.name\", oldProperty)\n\n    conf.setConf(SQLConf.PARQUET_INT96_TIMESTAMP_CONVERSION, true)\n    assert(!isCometLoaded(conf))\n\n    // Restore the original state\n    NativeBase.setLoaded(true)\n  }\n\n  test(\"Arrow properties\") {\n    NativeBase.setLoaded(false)\n    NativeBase.load()\n\n    assert(System.getProperty(NativeBase.ARROW_UNSAFE_MEMORY_ACCESS) == \"true\")\n    assert(System.getProperty(NativeBase.ARROW_NULL_CHECK_FOR_GET) == \"false\")\n\n    System.setProperty(NativeBase.ARROW_UNSAFE_MEMORY_ACCESS, \"false\")\n    NativeBase.setLoaded(false)\n    NativeBase.load()\n    assert(System.getProperty(NativeBase.ARROW_UNSAFE_MEMORY_ACCESS) == \"false\")\n\n    // Should not enable when debug mode is on\n    System.clearProperty(NativeBase.ARROW_UNSAFE_MEMORY_ACCESS)\n    SQLConf.get.setConfString(CometConf.COMET_DEBUG_ENABLED.key, \"true\")\n    NativeBase.setLoaded(false)\n    NativeBase.load()\n    assert(System.getProperty(NativeBase.ARROW_UNSAFE_MEMORY_ACCESS) == null)\n\n    // Restore the original state\n    NativeBase.setLoaded(true)\n    SQLConf.get.setConfString(CometConf.COMET_DEBUG_ENABLED.key, \"false\")\n  }\n\n  def getBytesFromMib(mib: Long): Long = mib * 1024 * 1024\n\n  test(\"Default Comet memory overhead\") {\n    val conf = new SparkConf()\n    assert(getCometMemoryOverhead(conf) == getBytesFromMib(1024))\n  }\n\n  test(\"Comet memory overhead\") {\n    val sparkConf = new SparkConf()\n    sparkConf.set(CometConf.COMET_ONHEAP_MEMORY_OVERHEAD.key, \"10g\")\n    assert(getCometMemoryOverhead(sparkConf) == getBytesFromMib(1024 * 10))\n    assert(shouldOverrideMemoryConf(sparkConf))\n  }\n\n  test(\"Comet memory overhead (off heap)\") {\n    val sparkConf = new SparkConf()\n    sparkConf.set(CometConf.COMET_ONHEAP_MEMORY_OVERHEAD.key, \"64g\")\n    sparkConf.set(\"spark.memory.offHeap.enabled\", \"true\")\n    sparkConf.set(\"spark.memory.offHeap.size\", \"10g\")\n    
assert(getCometMemoryOverhead(sparkConf) == 0)\n    assert(!shouldOverrideMemoryConf(sparkConf))\n  }\n\n  test(\"Comet shuffle memory factor\") {\n    val conf = new SparkConf()\n\n    val sqlConf = new SQLConf\n    sqlConf.setConfString(CometConf.COMET_ONHEAP_SHUFFLE_MEMORY_FACTOR.key, \"0.2\")\n\n    // Shuffle memory is the default memory overhead (1024 MiB) scaled by the factor\n    assert(\n      getCometShuffleMemorySize(conf, sqlConf) ==\n        getBytesFromMib((1024 * 0.2).toLong))\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometSqlFileTestSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.io.File\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n\nclass CometSqlFileTestSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_AUTO) {\n        testFun\n      }\n    }\n  }\n\n  /** Check if the current Spark version meets a minimum version requirement. */\n  private def meetsMinSparkVersion(minVersion: String): Boolean = {\n    val current = org.apache.spark.SPARK_VERSION.split(\"[.-]\").take(2).map(_.toInt)\n    val required = minVersion.split(\"[.-]\").take(2).map(_.toInt)\n    (current(0) > required(0)) ||\n    (current(0) == required(0) && current(1) >= required(1))\n  }\n\n  private val testResourceDir = {\n    val url = getClass.getClassLoader.getResource(\"sql-tests\")\n    assert(url != null, \"Could not find sql-tests resource directory\")\n    new File(url.toURI)\n  }\n\n  private def discoverTestFiles(dir: File): Seq[File] = {\n    if (!dir.exists()) return Seq.empty\n    val files = dir.listFiles().toSeq\n    val sqlFiles = files.filter(f => f.isFile && f.getName.endsWith(\".sql\"))\n    val subDirFiles = files.filter(_.isDirectory).flatMap(discoverTestFiles)\n    sqlFiles ++ subDirFiles\n  }\n\n  /** Generate all config combinations from a ConfigMatrix specification. 
*/\n  private def configMatrix(matrix: Seq[(String, Seq[String])]): Seq[Seq[(String, String)]] = {\n    if (matrix.isEmpty) return Seq(Seq.empty)\n    val (key, values) = matrix.head\n    val rest = configMatrix(matrix.tail)\n    for {\n      value <- values\n      combo <- rest\n    } yield (key, value) +: combo\n  }\n\n  // Disable constant folding so that literal expressions are evaluated by Comet's\n  // native engine rather than being folded away by Spark's optimizer at plan time.\n  private val constantFoldingExcluded = Seq(\n    \"spark.sql.optimizer.excludedRules\" ->\n      \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\")\n\n  private def runTestFile(relativePath: String, file: SqlTestFile): Unit = {\n    val allConfigs = file.configs ++ constantFoldingExcluded\n    withSQLConf(allConfigs: _*) {\n      withTable(file.tables: _*) {\n        file.records.foreach {\n          case SqlStatement(sql, line) =>\n            try {\n              val location = if (line > 0) s\"$relativePath:$line\" else relativePath\n              withClue(s\"In SQL file $location, executing statement:\\n$sql\\n\") {\n                spark.sql(sql)\n              }\n            } catch {\n              case e: Exception =>\n                throw new RuntimeException(s\"Error executing SQL '$sql' ${e.getMessage}\", e)\n            }\n          case SqlQuery(sql, mode, line) =>\n            try {\n              val location = if (line > 0) s\"$relativePath:$line\" else relativePath\n              withClue(s\"In SQL file $location, executing query:\\n$sql\\n\") {\n                mode match {\n                  case CheckCoverageAndAnswer =>\n                    checkSparkAnswerAndOperator(sql)\n                  case SparkAnswerOnly =>\n                    checkSparkAnswer(sql)\n                  case WithTolerance(tol) =>\n                    checkSparkAnswerAndOperatorWithTolerance(sql, tol)\n                  case ExpectFallback(reason) =>\n                    checkSparkAnswerAndFallbackReason(sql, reason)\n                  case Ignore(reason) =>\n                    logInfo(s\"IGNORED query ($reason): $sql\")\n                  case ExpectError(pattern) =>\n                    val (sparkError, cometError) = checkSparkAnswerMaybeThrows(spark.sql(sql))\n                    assert(\n                      sparkError.isDefined,\n                      s\"Expected Spark to throw an error matching '$pattern' but query succeeded\")\n                    assert(\n                      cometError.isDefined,\n                      s\"Expected Comet to throw an error matching '$pattern' but query succeeded\")\n                    assert(\n                      sparkError.get.getMessage.contains(pattern),\n                      s\"Spark error '${sparkError.get.getMessage}' does not contain '$pattern'\")\n                    assert(\n                      cometError.get.getMessage.contains(pattern),\n                      s\"Comet error '${cometError.get.getMessage}' does not contain '$pattern'\")\n                }\n              }\n\n            } catch {\n              case e: Exception =>\n                throw new RuntimeException(s\"Error executing SQL '$sql' ${e.getMessage}\", e)\n            }\n        }\n      }\n    }\n  }\n\n  // Discover and register all .sql test files\n  discoverTestFiles(testResourceDir).foreach { file =>\n    val relativePath = testResourceDir.toURI.relativize(file.toURI).getPath\n    val parsed = SqlFileTestParser.parse(file)\n    val combinations = 
configMatrix(parsed.configMatrix)\n\n    // Skip tests that require a newer Spark version\n    val skip = parsed.minSparkVersion.exists(!meetsMinSparkVersion(_))\n\n    if (combinations.size <= 1) {\n      // No matrix or single combination\n      test(s\"sql-file: $relativePath\") {\n        if (skip) {\n          logInfo(s\"SKIPPED (requires Spark ${parsed.minSparkVersion.get}): $relativePath\")\n        } else {\n          val effectiveConfigs = parsed.configs ++ combinations.headOption.getOrElse(Seq.empty)\n          runTestFile(relativePath, parsed.copy(configs = effectiveConfigs))\n        }\n      }\n    } else {\n      // Multiple combinations: generate one test per combination\n      combinations.foreach { matrixConfigs =>\n        val label = matrixConfigs.map { case (k, v) => s\"$k=$v\" }.mkString(\", \")\n        test(s\"sql-file: $relativePath [$label]\") {\n          if (skip) {\n            logInfo(s\"SKIPPED (requires Spark ${parsed.minSparkVersion.get}): $relativePath\")\n          } else {\n            runTestFile(relativePath, parsed.copy(configs = parsed.configs ++ matrixConfigs))\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometStringExpressionSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.apache.parquet.hadoop.ParquetOutputFormat\nimport org.apache.spark.sql.{CometTestBase, DataFrame}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{DataTypes, StructField, StructType}\n\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator}\n\nclass CometStringExpressionSuite extends CometTestBase {\n\n  test(\"lpad string\") {\n    testStringPadding(\"lpad\")\n  }\n\n  test(\"rpad string\") {\n    testStringPadding(\"rpad\")\n  }\n\n  test(\"lpad binary\") {\n    testBinaryPadding(\"lpad\")\n  }\n\n  test(\"rpad binary\") {\n    testBinaryPadding(\"rpad\")\n  }\n\n  private def testStringPadding(expr: String): Unit = {\n    val r = new Random(42)\n    val schema = StructType(\n      Seq(\n        StructField(\"str\", DataTypes.StringType, nullable = true),\n        StructField(\"len\", DataTypes.IntegerType, nullable = true),\n        StructField(\"pad\", DataTypes.StringType, nullable = true)))\n    // scalastyle:off\n    val edgeCases = Seq(\n      \"é\", // unicode 'e\\\\u{301}'\n      \"é\", // unicode '\\\\u{e9}'\n      \"తెలుగు\")\n    // scalastyle:on\n    val df = FuzzDataGenerator.generateDataFrame(\n      r,\n      spark,\n      schema,\n      1000,\n      DataGenOptions(maxStringLength = 6, customStrings = edgeCases))\n    df.createOrReplaceTempView(\"t1\")\n\n    // test all combinations of scalar and array arguments\n    for (str <- Seq(\"'hello'\", \"str\")) {\n      for (len <- Seq(\"6\", \"-6\", \"0\", \"len % 10\")) {\n        for (pad <- Seq(Some(\"'x'\"), Some(\"'zzz'\"), Some(\"pad\"), None)) {\n          val sql = pad match {\n            case Some(p) =>\n              // 3 args\n              s\"SELECT $str, $len, $expr($str, $len, $p) FROM t1 ORDER BY str, len, pad\"\n            case _ =>\n              // 2 args (default pad of ' ')\n              s\"SELECT $str, $len, $expr($str, $len) FROM t1 ORDER BY str, len, pad\"\n          }\n          val isLiteralStr = str == \"'hello'\"\n          val isLiteralLen = !len.contains(\"len\")\n          val isLiteralPad = !pad.contains(\"pad\")\n          if (isLiteralStr && isLiteralLen && isLiteralPad) {\n            // all arguments are literal, so Spark constant folding will kick in\n            // and pad function will not be evaluated by Comet\n            checkSparkAnswerAndOperator(sql)\n          } else if (isLiteralStr) {\n            checkSparkAnswerAndFallbackReason(\n              sql,\n              \"Scalar values are not supported for the str argument\")\n          } else if (!isLiteralPad) {\n            checkSparkAnswerAndFallbackReason(\n              sql,\n      
        \"Only scalar values are supported for the pad argument\")\n          } else {\n            checkSparkAnswerAndOperator(sql)\n          }\n        }\n      }\n    }\n  }\n\n  private def testBinaryPadding(expr: String): Unit = {\n    val r = new Random(42)\n    val schema = StructType(\n      Seq(\n        StructField(\"str\", DataTypes.BinaryType, nullable = true),\n        StructField(\"len\", DataTypes.IntegerType, nullable = true),\n        StructField(\"pad\", DataTypes.BinaryType, nullable = true)))\n    val df = FuzzDataGenerator.generateDataFrame(r, spark, schema, 1000, DataGenOptions())\n    df.createOrReplaceTempView(\"t1\")\n\n    // test all combinations of scalar and array arguments\n    for (str <- Seq(\"unhex('DDEEFF')\", \"str\")) {\n      // Spark does not support negative length for lpad/rpad with binary input and Comet does\n      // not support abs yet, so use `10 + len % 10` to avoid negative length\n      for (len <- Seq(\"6\", \"0\", \"10 + len % 10\")) {\n        for (pad <- Seq(Some(\"unhex('CAFE')\"), Some(\"pad\"), None)) {\n\n          val sql = pad match {\n            case Some(p) =>\n              // 3 args\n              s\"SELECT $str, $len, $expr($str, $len, $p) FROM t1 ORDER BY str, len, pad\"\n            case _ =>\n              // 2 args (default pad of ' ')\n              s\"SELECT $str, $len, $expr($str, $len) FROM t1 ORDER BY str, len, pad\"\n          }\n\n          val isLiteralStr = str != \"str\"\n          val isLiteralLen = !len.contains(\"len\")\n          val isLiteralPad = !pad.contains(\"pad\")\n\n          if (isLiteralStr && isLiteralLen && isLiteralPad) {\n            // all arguments are literal, so Spark constant folding will kick in\n            // and pad function will not be evaluated by Comet\n            checkSparkAnswerAndOperator(sql)\n          } else {\n            // Comet will fall back to Spark because the plan contains a staticinvoke instruction\n            // which is not supported\n            checkSparkAnswerAndFallbackReason(\n              sql,\n              s\"Static invoke expression: $expr is not supported\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"split string basic\") {\n    withSQLConf(\"spark.comet.expression.StringSplit.allowIncompatible\" -> \"true\") {\n      withParquetTable((0 until 5).map(i => (s\"value$i,test$i\", i)), \"tbl\") {\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',') FROM tbl\")\n        checkSparkAnswerAndOperator(\"SELECT split('one,two,three', ',') FROM tbl\")\n        checkSparkAnswerAndOperator(\"SELECT split(_1, '-') FROM tbl\")\n      }\n    }\n  }\n\n  test(\"split string with limit\") {\n    withSQLConf(\"spark.comet.expression.StringSplit.allowIncompatible\" -> \"true\") {\n      withParquetTable((0 until 5).map(i => (\"a,b,c,d,e\", i)), \"tbl\") {\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',', 2) FROM tbl\")\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',', 3) FROM tbl\")\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',', -1) FROM tbl\")\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',', 0) FROM tbl\")\n      }\n    }\n  }\n\n  test(\"split string with regex patterns\") {\n    withSQLConf(\"spark.comet.expression.StringSplit.allowIncompatible\" -> \"true\") {\n      withParquetTable((0 until 5).map(i => (\"word1 word2  word3\", i)), \"tbl\") {\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ' ') FROM tbl\")\n        checkSparkAnswerAndOperator(\"SELECT split(_1, '\\\\\\\\s+') FROM tbl\")\n   
   }\n\n      withParquetTable((0 until 5).map(i => (\"foo123bar456baz\", i)), \"tbl2\") {\n        checkSparkAnswerAndOperator(\"SELECT split(_1, '\\\\\\\\d+') FROM tbl2\")\n      }\n    }\n  }\n\n  test(\"split string edge cases\") {\n    withSQLConf(\"spark.comet.expression.StringSplit.allowIncompatible\" -> \"true\") {\n      withParquetTable(Seq((\"\", 0), (\"single\", 1), (null, 2), (\"a\", 3)), \"tbl\") {\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',') FROM tbl\")\n      }\n    }\n  }\n\n  test(\"split string with UTF-8 characters\") {\n    withSQLConf(\"spark.comet.expression.StringSplit.allowIncompatible\" -> \"true\") {\n      // CJK characters\n      withParquetTable(Seq((\"你好,世界\", 0), (\"こんにちは,世界\", 1)), \"tbl_cjk\") {\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',') FROM tbl_cjk\")\n      }\n\n      // Emoji and symbols\n      withParquetTable(Seq((\"😀,😃,😄\", 0), (\"🔥,💧,🌍\", 1), (\"α,β,γ\", 2)), \"tbl_emoji\") {\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',') FROM tbl_emoji\")\n      }\n\n      // Combining characters / grapheme clusters\n      withParquetTable(\n        Seq(\n          (\"café,naïve\", 0), // precomposed\n          (\"café,naïve\", 1), // combining (if your editor supports it)\n          (\"मानक,हिन्दी\", 2)\n        ), // Devanagari script\n        \"tbl_graphemes\") {\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',') FROM tbl_graphemes\")\n      }\n\n      // Mixed ASCII and multi-byte with regex patterns\n      withParquetTable(\n        Seq((\"hello世界test你好\", 0), (\"foo😀bar😃baz\", 1), (\"abc한글def\", 2)), // Korean Hangul\n        \"tbl_mixed\") {\n        // Split on ASCII word boundaries\n        checkSparkAnswerAndOperator(\"SELECT split(_1, '[a-z]+') FROM tbl_mixed\")\n      }\n\n      // RTL (Right-to-Left) characters\n      withParquetTable(Seq((\"مرحبا,عالم\", 0), (\"שלום,עולם\", 1)), \"tbl_rtl\") { // Arabic, Hebrew\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',') FROM tbl_rtl\")\n      }\n\n      // Zero-width characters and special Unicode\n      withParquetTable(\n        Seq(\n          (\"test\\u200Bword\", 0), // Zero-width space\n          (\"foo\\u00ADbar\", 1)\n        ), // Soft hyphen\n        \"tbl_special\") {\n        checkSparkAnswerAndOperator(\"SELECT split(_1, '\\u200B') FROM tbl_special\")\n      }\n\n      // Surrogate pairs (4-byte UTF-8)\n      withParquetTable(\n        Seq(\n          (\"𝐇𝐞𝐥𝐥𝐨,𝐖𝐨𝐫𝐥𝐝\", 0), // Mathematical bold letters (U+1D400 range)\n          (\"𠜎,𠜱,𠝹\", 1)\n        ), // CJK Extension B\n        \"tbl_surrogate\") {\n        checkSparkAnswerAndOperator(\"SELECT split(_1, ',') FROM tbl_surrogate\")\n      }\n    }\n  }\n\n  test(\"Various String scalar functions\") {\n    val table = \"names\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      sql(\n        s\"insert into $table values(1, 'James Smith'), (2, 'Michael Rose'),\" +\n          \" (3, 'Robert Williams'), (4, 'Rames Rose'), (5, 'James Smith')\")\n      checkSparkAnswerAndOperator(\n        s\"SELECT ascii(name), bit_length(name), octet_length(name) FROM $table\")\n    }\n  }\n\n  test(\"Upper and Lower\") {\n    withSQLConf(CometConf.COMET_CASE_CONVERSION_ENABLED.key -> \"true\") {\n      val table = \"names\"\n      withTable(table) {\n        sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n        sql(\n          s\"insert into $table values(1, 'James Smith'), (2, 'Michael Rose'),\" +\n            \" 
(3, 'Robert Williams'), (4, 'Rames Rose'), (5, 'James Smith')\")\n        checkSparkAnswerAndOperator(s\"SELECT name, upper(name), lower(name) FROM $table\")\n      }\n    }\n  }\n\n  test(\"Chr\") {\n    val table = \"test\"\n    withTable(table) {\n      sql(s\"create table $table(col varchar(20)) using parquet\")\n      sql(\n        s\"insert into $table values('65'), ('66'), ('67'), ('68'), ('65'), ('66'), ('67'), ('68')\")\n      checkSparkAnswerAndOperator(s\"SELECT chr(col) FROM $table\")\n    }\n  }\n\n  test(\"Chr with null character\") {\n    // test compatibility with Spark, which supports chr(0)\n    val table = \"test0\"\n    withTable(table) {\n      sql(s\"create table $table(c9 int, c4 int) using parquet\")\n      sql(s\"insert into $table values(0, 0), (66, null), (null, 70), (null, null)\")\n      val query = s\"SELECT chr(c9), chr(c4) FROM $table\"\n      checkSparkAnswerAndOperator(query)\n    }\n  }\n\n  test(\"Chr with negative and large values\") {\n    val table = \"test0\"\n    withTable(table) {\n      sql(s\"create table $table(c9 int, c4 int) using parquet\")\n      sql(\n        s\"insert into $table values(0, 0), (61231, -61231), (-1700, 1700), (0, -4000), (-40, 40), (256, 512)\")\n      val query = s\"SELECT chr(c9), chr(c4) FROM $table\"\n      checkSparkAnswerAndOperator(query)\n    }\n\n    withParquetTable((0 until 5).map(i => (i % 5, i % 3)), \"tbl\") {\n      withSQLConf(\n        \"spark.sql.optimizer.excludedRules\" -> \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n        for (n <- Seq(\"0\", \"-0\", \"0.5\", \"-0.5\", \"555\", \"-555\", \"null\")) {\n          checkSparkAnswerAndOperator(s\"select chr(cast(${n} as int)) FROM tbl\")\n        }\n      }\n    }\n  }\n\n  test(\"InitCap compatible cases\") {\n    val table = \"names\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      withSQLConf(CometConf.getExprAllowIncompatConfigKey(\"InitCap\") -> \"true\") {\n        sql(\n          s\"insert into $table values(1, 'james smith'), (2, 'michael rose'), \" +\n            \"(3, 'robert williams'), (4, 'rames rose'), (5, 'james smith'), \" +\n            \"(7, 'james ähtäri')\")\n        checkSparkAnswerAndOperator(s\"SELECT initcap(name) FROM $table\")\n      }\n    }\n  }\n\n  test(\"InitCap incompatible cases\") {\n    val table = \"names\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      // Comet and Spark differ on hyphenated names\n      sql(s\"insert into $table values(6, 'robert rose-smith')\")\n      checkSparkAnswer(s\"SELECT initcap(name) FROM $table\")\n    }\n  }\n\n  test(\"trim\") {\n    withSQLConf(CometConf.COMET_CASE_CONVERSION_ENABLED.key -> \"true\") {\n      val table = \"test\"\n      withTable(table) {\n        sql(s\"create table $table(col varchar(20)) using parquet\")\n        sql(s\"insert into $table values('    SparkSQL   '), ('SSparkSQLS')\")\n\n        checkSparkAnswerAndOperator(s\"SELECT upper(trim(col)) FROM $table\")\n        checkSparkAnswerAndOperator(s\"SELECT trim('SL', col) FROM $table\")\n\n        checkSparkAnswerAndOperator(s\"SELECT upper(btrim(col)) FROM $table\")\n        checkSparkAnswerAndOperator(s\"SELECT btrim('SL', col) FROM $table\")\n\n        checkSparkAnswerAndOperator(s\"SELECT upper(ltrim(col)) FROM $table\")\n        checkSparkAnswerAndOperator(s\"SELECT ltrim('SL', col) FROM $table\")\n\n        checkSparkAnswerAndOperator(s\"SELECT upper(rtrim(col)) FROM 
$table\")\n        checkSparkAnswerAndOperator(s\"SELECT rtrim('SL', col) FROM $table\")\n      }\n    }\n  }\n\n  test(\"string repeat\") {\n    val table = \"names\"\n    withTable(table) {\n      sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n      sql(s\"insert into $table values(1, 'James'), (2, 'Smith'), (3, 'Smith')\")\n      checkSparkAnswerAndOperator(s\"SELECT repeat(name, 3) FROM $table\")\n    }\n  }\n\n  test(\"length, reverse, instr, replace, translate\") {\n    val table = \"test\"\n    withTable(table) {\n      sql(s\"create table $table(col string) using parquet\")\n      sql(\n        s\"insert into $table values('Spark SQL  '), (NULL), (''), ('苹果手机'), ('Spark SQL  '), (NULL), (''), ('苹果手机')\")\n      checkSparkAnswerAndOperator(\"select length(col), reverse(col), instr(col, 'SQL'), instr(col, '手机'), replace(col, 'SQL', '123'),\" +\n        s\" replace(col, 'SQL'), replace(col, '手机', '平板'), translate(col, 'SL苹', '123') from $table\")\n    }\n  }\n\n  // Simplified version of \"filter pushdown - StringPredicate\" that does not generate dictionaries\n  test(\"string predicate filter\") {\n    Seq(false, true).foreach { pushdown =>\n      withSQLConf(\n        SQLConf.PARQUET_FILTER_PUSHDOWN_STRING_PREDICATE_ENABLED.key -> pushdown.toString) {\n        val table = \"names\"\n        withTable(table) {\n          sql(s\"create table $table(name varchar(20)) using parquet\")\n          for (ch <- Range('a', 'z')) {\n            sql(s\"insert into $table values('$ch$ch$ch')\")\n          }\n          checkSparkAnswerAndOperator(s\"SELECT * FROM $table WHERE name LIKE 'a%'\")\n          checkSparkAnswerAndOperator(s\"SELECT * FROM $table WHERE name LIKE '%a'\")\n          checkSparkAnswerAndOperator(s\"SELECT * FROM $table WHERE name LIKE '%a%'\")\n        }\n      }\n    }\n  }\n\n  // Modified from Spark test \"filter pushdown - StringPredicate\"\n  private def testStringPredicate(\n      dataFrame: DataFrame,\n      filter: String,\n      enableDictionary: Boolean = true): Unit = {\n    withTempPath { dir =>\n      val path = dir.getCanonicalPath\n      dataFrame.write\n        .option(\"parquet.block.size\", 512)\n        .option(ParquetOutputFormat.ENABLE_DICTIONARY, enableDictionary)\n        .parquet(path)\n      Seq(true, false).foreach { pushDown =>\n        withSQLConf(\n          SQLConf.PARQUET_FILTER_PUSHDOWN_STRING_PREDICATE_ENABLED.key -> pushDown.toString) {\n          val df = spark.read.parquet(path).filter(filter)\n          checkSparkAnswerAndOperator(df)\n        }\n      }\n    }\n  }\n\n  // Modified from Spark test \"filter pushdown - StringPredicate\"\n  test(\"filter pushdown - StringPredicate\") {\n    import testImplicits._\n    // keep() should take effect on StartsWith/EndsWith/Contains\n    Seq(\n      \"value like 'a%'\", // StartsWith\n      \"value like '%a'\", // EndsWith\n      \"value like '%a%'\" // Contains\n    ).foreach { filter =>\n      testStringPredicate(\n        // dictionary will be generated since there are duplicated values\n        spark.range(1000).map(t => (t % 10).toString).toDF(),\n        filter)\n    }\n\n    // canDrop() should take effect on StartsWith,\n    // and has no effect on EndsWith/Contains\n    Seq(\n      \"value like 'a%'\", // StartsWith\n      \"value like '%a'\", // EndsWith\n      \"value like '%a%'\" // Contains\n    ).foreach { filter =>\n      testStringPredicate(\n        spark.range(1024).map(_.toString).toDF(),\n        filter,\n        enableDictionary = false)\n    }\n\n    
// inverseCanDrop() should take effect on StartsWith,\n    // and has no effect on EndsWith/Contains\n    Seq(\n      \"value not like '10%'\", // StartsWith\n      \"value not like '%10'\", // EndsWith\n      \"value not like '%10%'\" // Contains\n    ).foreach { filter =>\n      testStringPredicate(\n        spark.range(1024).map(_ => \"100\").toDF(),\n        filter,\n        enableDictionary = false)\n    }\n  }\n\n  test(\"string_space\") {\n    withParquetTable((0 until 5).map(i => (-i, i + 1)), \"tbl\") {\n      checkSparkAnswerAndOperator(\"SELECT space(_1), space(_2) FROM tbl\")\n    }\n  }\n\n  test(\"string_space with dictionary\") {\n    val data = (0 until 1000).map(i => Tuple1(i % 5))\n\n    withSQLConf(\"parquet.enable.dictionary\" -> \"true\") {\n      withParquetTable(data, \"tbl\") {\n        checkSparkAnswerAndOperator(\"SELECT space(_1) FROM tbl\")\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/CometTemporalExpressionSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.apache.spark.sql.{CometTestBase, DataFrame, Row, SaveMode}\nimport org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute\nimport org.apache.spark.sql.catalyst.expressions.{Days, Hours, Literal}\nimport org.apache.spark.sql.catalyst.util.DateTimeUtils\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions.col\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{DataTypes, StructField, StructType}\n\nimport org.apache.comet.serde.{CometDateFormat, CometTruncDate, CometTruncTimestamp}\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator}\n\nclass CometTemporalExpressionSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  test(\"trunc (TruncDate)\") {\n    val supportedFormats = CometTruncDate.supportedFormats\n    val unsupportedFormats = Seq(\"invalid\")\n\n    val r = new Random(42)\n    val schema = StructType(\n      Seq(\n        StructField(\"c0\", DataTypes.DateType, true),\n        StructField(\"c1\", DataTypes.StringType, true)))\n    val df = FuzzDataGenerator.generateDataFrame(r, spark, schema, 1000, DataGenOptions())\n\n    df.createOrReplaceTempView(\"tbl\")\n\n    for (format <- supportedFormats) {\n      checkSparkAnswerAndOperator(s\"SELECT c0, trunc(c0, '$format') from tbl order by c0, c1\")\n    }\n    for (format <- unsupportedFormats) {\n      // Comet should fall back to Spark for unsupported or invalid formats\n      checkSparkAnswerAndFallbackReason(\n        s\"SELECT c0, trunc(c0, '$format') from tbl order by c0, c1\",\n        s\"Format $format is not supported\")\n    }\n\n    // Comet should fall back to Spark if format is not a literal\n    checkSparkAnswerAndFallbackReason(\n      \"SELECT c0, trunc(c0, c1) from tbl order by c0, c1\",\n      \"Invalid format strings will throw an exception instead of returning NULL\")\n  }\n\n  test(\"date_trunc (TruncTimestamp) - reading from DataFrame\") {\n    val supportedFormats = CometTruncTimestamp.supportedFormats\n    val unsupportedFormats = Seq(\"invalid\")\n\n    createTimestampTestData.createOrReplaceTempView(\"tbl\")\n\n    // TODO test fails with non-UTC timezone\n    // https://github.com/apache/datafusion-comet/issues/2649\n    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"UTC\") {\n      for (format <- supportedFormats) {\n        checkSparkAnswerAndOperator(s\"SELECT c0, date_trunc('$format', c0) from tbl order by c0\")\n      }\n      for (format <- unsupportedFormats) {\n        // Comet should fall back to Spark for unsupported or invalid formats\n        checkSparkAnswerAndFallbackReason(\n 
         s\"SELECT c0, date_trunc('$format', c0) from tbl order by c0\",\n          s\"Format $format is not supported\")\n      }\n      // Comet should fall back to Spark if format is not a literal\n      checkSparkAnswerAndFallbackReason(\n        \"SELECT c0, date_trunc(fmt, c0) from tbl order by c0, fmt\",\n        \"Invalid format strings will throw an exception instead of returning NULL\")\n    }\n  }\n\n  test(\"date_trunc (TruncTimestamp) - reading from Parquet\") {\n    val supportedFormats = CometTruncTimestamp.supportedFormats\n    val unsupportedFormats = Seq(\"invalid\")\n\n    withTempDir { path =>\n      createTimestampTestData.write.mode(SaveMode.Overwrite).parquet(path.toString)\n      spark.read.parquet(path.toString).createOrReplaceTempView(\"tbl\")\n\n      // TODO test fails with non-UTC timezone\n      // https://github.com/apache/datafusion-comet/issues/2649\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"UTC\") {\n        for (format <- supportedFormats) {\n          checkSparkAnswerAndOperator(\n            s\"SELECT c0, date_trunc('$format', c0) from tbl order by c0\")\n        }\n        for (format <- unsupportedFormats) {\n          // Comet should fall back to Spark for unsupported or invalid formats\n          checkSparkAnswerAndFallbackReason(\n            s\"SELECT c0, date_trunc('$format', c0) from tbl order by c0\",\n            s\"Format $format is not supported\")\n        }\n        // Comet should fall back to Spark if format is not a literal\n        checkSparkAnswerAndFallbackReason(\n          \"SELECT c0, date_trunc(fmt, c0) from tbl order by c0, fmt\",\n          \"Invalid format strings will throw an exception instead of returning NULL\")\n      }\n    }\n  }\n\n  test(\"unix_timestamp - timestamp input\") {\n    createTimestampTestData.createOrReplaceTempView(\"tbl\")\n    for (timezone <- Seq(\"UTC\", \"America/Los_Angeles\")) {\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> timezone) {\n        checkSparkAnswerAndOperator(\"SELECT c0, unix_timestamp(c0) from tbl order by c0\")\n      }\n    }\n  }\n\n  test(\"unix_timestamp - date input\") {\n    val r = new Random(42)\n    val dateSchema = StructType(Seq(StructField(\"d\", DataTypes.DateType, true)))\n    val dateDF = FuzzDataGenerator.generateDataFrame(r, spark, dateSchema, 100, DataGenOptions())\n    dateDF.createOrReplaceTempView(\"date_tbl\")\n    for (timezone <- Seq(\"UTC\", \"America/Los_Angeles\")) {\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> timezone) {\n        checkSparkAnswerAndOperator(\"SELECT d, unix_timestamp(d) from date_tbl order by d\")\n      }\n    }\n  }\n\n  test(\"unix_timestamp - timestamp_ntz input falls back to Spark\") {\n    // TimestampNTZ is not supported because Comet incorrectly applies timezone\n    // conversion. 
TimestampNTZ stores local time without timezone, so the unix\n    // timestamp should just be the value divided by microseconds per second.\n    val r = new Random(42)\n    val ntzSchema = StructType(Seq(StructField(\"ts_ntz\", DataTypes.TimestampNTZType, true)))\n    val ntzDF = FuzzDataGenerator.generateDataFrame(r, spark, ntzSchema, 100, DataGenOptions())\n    ntzDF.createOrReplaceTempView(\"ntz_tbl\")\n    checkSparkAnswerAndFallbackReason(\n      \"SELECT ts_ntz, unix_timestamp(ts_ntz) from ntz_tbl order by ts_ntz\",\n      \"unix_timestamp does not support input type: TimestampNTZType\")\n  }\n\n  test(\"unix_timestamp - string input falls back to Spark\") {\n    withTempView(\"string_tbl\") {\n      // Create test data with timestamp strings\n      val schema = StructType(Seq(StructField(\"ts_str\", DataTypes.StringType, true)))\n      val data = Seq(\n        Row(\"2020-01-01 00:00:00\"),\n        Row(\"2021-06-15 12:30:45\"),\n        Row(\"2022-12-31 23:59:59\"),\n        Row(null))\n      spark\n        .createDataFrame(spark.sparkContext.parallelize(data), schema)\n        .createOrReplaceTempView(\"string_tbl\")\n\n      // String input should fall back to Spark\n      checkSparkAnswerAndFallbackReason(\n        \"SELECT ts_str, unix_timestamp(ts_str) from string_tbl order by ts_str\",\n        \"unix_timestamp does not support input type: StringType\")\n\n      // String input with custom format should also fall back\n      checkSparkAnswerAndFallbackReason(\n        \"SELECT ts_str, unix_timestamp(ts_str, 'yyyy-MM-dd HH:mm:ss') from string_tbl\",\n        \"unix_timestamp does not support input type: StringType\")\n    }\n  }\n\n  private def createTimestampTestData = {\n    val r = new Random(42)\n    val schema = StructType(\n      Seq(\n        StructField(\"c0\", DataTypes.TimestampType, true),\n        StructField(\"fmt\", DataTypes.StringType, true)))\n    FuzzDataGenerator.generateDataFrame(r, spark, schema, 1000, DataGenOptions())\n  }\n\n  test(\"last_day\") {\n    val r = new Random(42)\n    val schema = StructType(Seq(StructField(\"c0\", DataTypes.DateType, true)))\n    val df = FuzzDataGenerator.generateDataFrame(r, spark, schema, 1000, DataGenOptions())\n    df.createOrReplaceTempView(\"tbl\")\n\n    // Basic test with random dates\n    checkSparkAnswerAndOperator(\"SELECT c0, last_day(c0) FROM tbl ORDER BY c0\")\n\n    // Disable constant folding to ensure literal expressions are executed by Comet\n    withSQLConf(\n      SQLConf.OPTIMIZER_EXCLUDED_RULES.key ->\n        \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n      // Test with literal dates - various months\n      checkSparkAnswerAndOperator(\n        \"SELECT last_day(DATE('2024-01-15')), last_day(DATE('2024-02-15')), last_day(DATE('2024-12-01'))\")\n\n      // Test leap year handling (February)\n      checkSparkAnswerAndOperator(\n        \"SELECT last_day(DATE('2024-02-01')), last_day(DATE('2023-02-01'))\")\n\n      // Test null handling\n      checkSparkAnswerAndOperator(\"SELECT last_day(NULL)\")\n    }\n  }\n\n  test(\"datediff\") {\n    val r = new Random(42)\n    val schema = StructType(\n      Seq(\n        StructField(\"c0\", DataTypes.DateType, true),\n        StructField(\"c1\", DataTypes.DateType, true)))\n    val df = FuzzDataGenerator.generateDataFrame(r, spark, schema, 1000, DataGenOptions())\n    df.createOrReplaceTempView(\"tbl\")\n\n    // Basic test with random dates\n    checkSparkAnswerAndOperator(\"SELECT c0, c1, datediff(c0, c1) FROM tbl ORDER BY c0, c1\")\n\n    
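// Note that Spark's datediff(endDate, startDate) returns endDate - startDate in days,\n    // so swapping the arguments negates the result.\n\n    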
// Disable constant folding to ensure literal expressions are executed by Comet\n    withSQLConf(\n      SQLConf.OPTIMIZER_EXCLUDED_RULES.key ->\n        \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n      // Test positive difference (end date > start date)\n      checkSparkAnswerAndOperator(\"SELECT datediff(DATE('2009-07-31'), DATE('2009-07-30'))\")\n\n      // Test negative difference (end date < start date)\n      checkSparkAnswerAndOperator(\"SELECT datediff(DATE('2009-07-30'), DATE('2009-07-31'))\")\n\n      // Test same dates (should be 0)\n      checkSparkAnswerAndOperator(\"SELECT datediff(DATE('2009-07-30'), DATE('2009-07-30'))\")\n\n      // Test larger date differences\n      checkSparkAnswerAndOperator(\"SELECT datediff(DATE('2024-01-01'), DATE('2020-01-01'))\")\n\n      // Test null handling\n      checkSparkAnswerAndOperator(\"SELECT datediff(NULL, DATE('2009-07-30'))\")\n      checkSparkAnswerAndOperator(\"SELECT datediff(DATE('2009-07-30'), NULL)\")\n\n      // Test leap year edge cases\n      // 1900 was NOT a leap year (divisible by 100 but not 400)\n      // 2000 WAS a leap year (divisible by 400)\n      // So Feb 27 to Mar 1 spans a different number of days:\n      // 1900: 2 days (Feb 28, Mar 1)\n      // 2000: 3 days (Feb 28, Feb 29, Mar 1)\n      checkSparkAnswerAndOperator(\"SELECT datediff(DATE('1900-03-01'), DATE('1900-02-27'))\")\n      checkSparkAnswerAndOperator(\"SELECT datediff(DATE('2000-03-01'), DATE('2000-02-27'))\")\n\n      // Additional leap year tests\n      // 2004 was a leap year (divisible by 4, not by 100)\n      checkSparkAnswerAndOperator(\"SELECT datediff(DATE('2004-03-01'), DATE('2004-02-28'))\")\n      // 2100 will NOT be a leap year (divisible by 100 but not 400)\n      checkSparkAnswerAndOperator(\"SELECT datediff(DATE('2100-03-01'), DATE('2100-02-28'))\")\n    }\n  }\n\n  test(\"date_format with timestamp column\") {\n    // Filter out formats with embedded quotes that need special handling\n    val supportedFormats = CometDateFormat.supportedFormats.keys.toSeq\n      .filterNot(_.contains(\"'\"))\n\n    createTimestampTestData.createOrReplaceTempView(\"tbl\")\n\n    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"UTC\") {\n      for (format <- supportedFormats) {\n        checkSparkAnswerAndOperator(s\"SELECT c0, date_format(c0, '$format') from tbl order by c0\")\n      }\n      // Test the ISO format with embedded quotes separately, using a double-quoted string\n      checkSparkAnswerAndOperator(\n        \"SELECT c0, date_format(c0, \\\"yyyy-MM-dd'T'HH:mm:ss\\\") from tbl order by c0\")\n    }\n  }\n\n  test(\"date_format with specific format strings\") {\n    // Test specific format strings with explicit timestamp data\n    createTimestampTestData.createOrReplaceTempView(\"tbl\")\n\n    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"UTC\") {\n      // Date formats\n      checkSparkAnswerAndOperator(\"SELECT c0, date_format(c0, 'yyyy-MM-dd') from tbl order by c0\")\n      checkSparkAnswerAndOperator(\"SELECT c0, date_format(c0, 'yyyy/MM/dd') from tbl order by c0\")\n\n      // Time formats\n      checkSparkAnswerAndOperator(\"SELECT c0, date_format(c0, 'HH:mm:ss') from tbl order by c0\")\n      checkSparkAnswerAndOperator(\"SELECT c0, date_format(c0, 'HH:mm') from tbl order by c0\")\n\n      // Combined formats\n      checkSparkAnswerAndOperator(\n        \"SELECT c0, date_format(c0, 'yyyy-MM-dd HH:mm:ss') from tbl order by c0\")\n\n      // Day/month names\n      checkSparkAnswerAndOperator(\"SELECT c0, 
date_format(c0, 'EEEE') from tbl order by c0\")\n      checkSparkAnswerAndOperator(\"SELECT c0, date_format(c0, 'MMMM') from tbl order by c0\")\n\n      // 12-hour time\n      checkSparkAnswerAndOperator(\"SELECT c0, date_format(c0, 'hh:mm:ss a') from tbl order by c0\")\n\n      // ISO format (use a double-quoted SQL string so the pattern can contain 'T' quoted literally)\n      checkSparkAnswerAndOperator(\n        \"SELECT c0, date_format(c0, \\\"yyyy-MM-dd'T'HH:mm:ss\\\") from tbl order by c0\")\n    }\n  }\n\n  test(\"date_format with literal timestamp\") {\n    // Test specific literal timestamp formats\n    // Disable constant folding to ensure Comet actually executes the expression\n    withSQLConf(\n      SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"UTC\",\n      SQLConf.OPTIMIZER_EXCLUDED_RULES.key ->\n        \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n      checkSparkAnswerAndOperator(\n        \"SELECT date_format(TIMESTAMP '2024-03-15 14:30:45', 'yyyy-MM-dd')\")\n      checkSparkAnswerAndOperator(\n        \"SELECT date_format(TIMESTAMP '2024-03-15 14:30:45', 'yyyy-MM-dd HH:mm:ss')\")\n      checkSparkAnswerAndOperator(\n        \"SELECT date_format(TIMESTAMP '2024-03-15 14:30:45', 'HH:mm:ss')\")\n      checkSparkAnswerAndOperator(\"SELECT date_format(TIMESTAMP '2024-03-15 14:30:45', 'EEEE')\")\n      checkSparkAnswerAndOperator(\n        \"SELECT date_format(TIMESTAMP '2024-03-15 14:30:45', 'hh:mm:ss a')\")\n    }\n  }\n\n  test(\"date_format with null\") {\n    withSQLConf(\n      SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"UTC\",\n      SQLConf.OPTIMIZER_EXCLUDED_RULES.key ->\n        \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n      checkSparkAnswerAndOperator(\"SELECT date_format(CAST(NULL AS TIMESTAMP), 'yyyy-MM-dd')\")\n    }\n  }\n\n  test(\"date_format unsupported format falls back to Spark\") {\n    createTimestampTestData.createOrReplaceTempView(\"tbl\")\n\n    withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"UTC\") {\n      // Unsupported format string\n      checkSparkAnswerAndFallbackReason(\n        \"SELECT c0, date_format(c0, 'yyyy-MM-dd EEEE') from tbl order by c0\",\n        \"Format 'yyyy-MM-dd EEEE' is not supported\")\n    }\n  }\n\n  test(\"date_format with non-UTC timezone falls back to Spark\") {\n    createTimestampTestData.createOrReplaceTempView(\"tbl\")\n\n    val nonUtcTimezones =\n      Seq(\"America/New_York\", \"America/Los_Angeles\", \"Europe/London\", \"Asia/Tokyo\")\n\n    for (tz <- nonUtcTimezones) {\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz) {\n        // Non-UTC timezones should fall back to Spark as Incompatible\n        checkSparkAnswerAndFallbackReason(\n          \"SELECT c0, date_format(c0, 'yyyy-MM-dd HH:mm:ss') from tbl order by c0\",\n          s\"Non-UTC timezone '$tz' may produce different results\")\n      }\n    }\n  }\n\n  test(\"date_format with non-UTC timezone works when allowIncompatible is enabled\") {\n    createTimestampTestData.createOrReplaceTempView(\"tbl\")\n\n    val nonUtcTimezones = Seq(\"America/New_York\", \"Europe/London\", \"Asia/Tokyo\")\n\n    for (tz <- nonUtcTimezones) {\n      withSQLConf(\n        SQLConf.SESSION_LOCAL_TIMEZONE.key -> tz,\n        \"spark.comet.expression.DateFormatClass.allowIncompatible\" -> \"true\") {\n        // With allowIncompatible enabled, Comet will execute the expression\n        // Results may differ from Spark but should not throw errors\n        checkSparkAnswer(\"SELECT c0, date_format(c0, 'yyyy-MM-dd') from tbl order by c0\")\n      }\n    }\n  }\n\n  
test(\"unix_date\") {\n    val r = new Random(42)\n    val schema = StructType(Seq(StructField(\"c0\", DataTypes.DateType, true)))\n    val df = FuzzDataGenerator.generateDataFrame(r, spark, schema, 1000, DataGenOptions())\n    df.createOrReplaceTempView(\"tbl\")\n\n    // Basic test\n    checkSparkAnswerAndOperator(\"SELECT c0, unix_date(c0) FROM tbl ORDER BY c0\")\n\n    // Test with literal dates\n    checkSparkAnswerAndOperator(\n      \"SELECT unix_date(DATE('1970-01-01')), unix_date(DATE('1970-01-02')), unix_date(DATE('2024-01-01'))\")\n\n    // Test dates before Unix epoch (should return negative values)\n    checkSparkAnswerAndOperator(\n      \"SELECT unix_date(DATE('1969-12-31')), unix_date(DATE('1960-01-01'))\")\n\n    // Test null handling\n    checkSparkAnswerAndOperator(\"SELECT unix_date(NULL)\")\n  }\n\n  /**\n   * Checks that the Comet-evaluated DataFrame produces the same results as the baseline DataFrame\n   * evaluated by native Spark JVM, and that Comet native operators are used. This is needed\n   * because Days is a PartitionTransformExpression that extends Unevaluable, so\n   * checkSparkAnswerAndOperator cannot be used directly.\n   */\n  private def checkDays(cometDF: DataFrame, baselineDF: DataFrame): Unit = {\n    // Ensure the expected answer is evaluated solely by native Spark JVM (Comet off)\n    var expected: Array[Row] = Array.empty\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n      expected = baselineDF.collect()\n    }\n    checkAnswer(cometDF, expected.toSeq)\n    checkCometOperators(stripAQEPlan(cometDF.queryExecution.executedPlan))\n  }\n\n  test(\"days - date input\") {\n    val r = new Random(42)\n    val dateSchema = StructType(Seq(StructField(\"d\", DataTypes.DateType, true)))\n    val dateDF = FuzzDataGenerator.generateDataFrame(r, spark, dateSchema, 1000, DataGenOptions())\n\n    checkDays(\n      dateDF.select(col(\"d\"), getColumnFromExpression(Days(UnresolvedAttribute(\"d\")))),\n      dateDF.selectExpr(\"d\", \"unix_date(d)\"))\n  }\n\n  test(\"days - timestamp input\") {\n    val r = new Random(42)\n    val tsSchema = StructType(Seq(StructField(\"ts\", DataTypes.TimestampType, true)))\n    val tsDF = FuzzDataGenerator.generateDataFrame(r, spark, tsSchema, 1000, DataGenOptions())\n\n    for (timezone <- Seq(\"UTC\", \"America/Los_Angeles\", \"Asia/Tokyo\")) {\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> timezone) {\n        checkDays(\n          tsDF.select(col(\"ts\"), getColumnFromExpression(Days(UnresolvedAttribute(\"ts\")))),\n          tsDF.selectExpr(\"ts\", \"unix_date(cast(ts as date))\"))\n      }\n    }\n  }\n\n  test(\"days - literal edge cases\") {\n    withSQLConf(\n      SQLConf.OPTIMIZER_EXCLUDED_RULES.key ->\n        \"org.apache.spark.sql.catalyst.optimizer.ConstantFolding\") {\n\n      val dummyDF = spark.range(1)\n\n      // Pre-epoch (should return negative day numbers)\n      checkDays(\n        dummyDF.select(\n          getColumnFromExpression(\n            Days(Literal.create(java.sql.Date.valueOf(\"1969-12-31\"), DataTypes.DateType))),\n          getColumnFromExpression(\n            Days(Literal.create(java.sql.Date.valueOf(\"1960-01-01\"), DataTypes.DateType)))),\n        dummyDF.selectExpr(\"unix_date(DATE('1969-12-31'))\", \"unix_date(DATE('1960-01-01'))\"))\n\n      // Epoch and post-epoch\n      checkDays(\n        dummyDF.select(\n          getColumnFromExpression(\n            Days(Literal.create(java.sql.Date.valueOf(\"1970-01-01\"), DataTypes.DateType))),\n          
getColumnFromExpression(\n            Days(Literal.create(java.sql.Date.valueOf(\"1970-01-02\"), DataTypes.DateType))),\n          getColumnFromExpression(\n            Days(Literal.create(java.sql.Date.valueOf(\"2024-01-01\"), DataTypes.DateType)))),\n        dummyDF.selectExpr(\n          \"unix_date(DATE('1970-01-01'))\",\n          \"unix_date(DATE('1970-01-02'))\",\n          \"unix_date(DATE('2024-01-01'))\"))\n\n      // Timestamp literals\n      checkDays(\n        dummyDF.select(\n          getColumnFromExpression(Days(Literal\n            .create(java.sql.Timestamp.valueOf(\"1970-01-01 00:00:00\"), DataTypes.TimestampType))),\n          getColumnFromExpression(\n            Days(\n              Literal.create(\n                java.sql.Timestamp.valueOf(\"2024-06-15 10:30:00\"),\n                DataTypes.TimestampType)))),\n        dummyDF.selectExpr(\n          \"unix_date(cast(TIMESTAMP('1970-01-01 00:00:00') as date))\",\n          \"unix_date(cast(TIMESTAMP('2024-06-15 10:30:00') as date))\"))\n\n      // Null handling\n      checkDays(\n        dummyDF.select(getColumnFromExpression(Days(Literal.create(null, DataTypes.DateType)))),\n        dummyDF.selectExpr(\"unix_date(cast(NULL as date))\"))\n    }\n  }\n\n  /**\n   * Checks that the Comet-evaluated DataFrame produces the same results as the baseline DataFrame\n   * evaluated by native Spark JVM, and that Comet native operators are used. This is needed\n   * because Hours is a PartitionTransformExpression that extends Unevaluable.\n   */\n  private def checkHours(cometDF: DataFrame, baselineDF: DataFrame): Unit = {\n    // Ensure the expected answer is evaluated solely by native Spark JVM (Comet off)\n    var expected: Array[Row] = Array.empty\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n      expected = baselineDF.collect()\n    }\n    checkAnswer(cometDF, expected.toSeq)\n    checkCometOperators(stripAQEPlan(cometDF.queryExecution.executedPlan))\n  }\n\n  test(\"hours - timestamp input\") {\n    val r = new Random(42)\n    val tsSchema = StructType(Seq(StructField(\"ts\", DataTypes.TimestampType, true)))\n    val tsDF = FuzzDataGenerator.generateDataFrame(r, spark, tsSchema, 1000, DataGenOptions())\n\n    for (timezone <- Seq(\"UTC\", \"America/Los_Angeles\", \"Asia/Tokyo\")) {\n      withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> timezone) {\n        checkHours(\n          tsDF.select(col(\"ts\"), getColumnFromExpression(Hours(UnresolvedAttribute(\"ts\")))),\n          tsDF.selectExpr(\"ts\", \"cast(floor(unix_micros(ts) / 3600000000D) as int)\"))\n      }\n    }\n  }\n\n  test(\"hours - timestamp_ntz input\") {\n    val r = new Random(42)\n    val ntzSchema = StructType(Seq(StructField(\"ts\", DataTypes.TimestampNTZType, true)))\n    val ntzDF = FuzzDataGenerator.generateDataFrame(r, spark, ntzSchema, 1000, DataGenOptions())\n\n    // Spark 3.4: unix_micros() does not accept TIMESTAMP_NTZ; baseline matches micros / hour.\n    val _spark = spark\n    import _spark.implicits._\n    val expectedDF = ntzDF\n      .map { row =>\n        val ts = row.getAs[java.time.LocalDateTime](\"ts\")\n        val hoursCol: java.lang.Integer =\n          if (ts == null) null\n          else\n            Integer.valueOf(\n              Math.floorDiv(DateTimeUtils.localDateTimeToMicros(ts), 3600000000L).toInt)\n        (ts, hoursCol)\n      }\n      .toDF(\"ts\", \"hours\")\n\n    checkHours(\n      ntzDF.select(col(\"ts\"), getColumnFromExpression(Hours(UnresolvedAttribute(\"ts\")))),\n      expectedDF)\n  }\n\n  
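// In Europe/London, clocks spring forward on 2024-03-31 (01:30 does not exist as a\n  // local time) and fall back on 2024-10-27 (01:30 occurs twice), so the test below\n  // exercises nonexistent and ambiguous local times.\n\n  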
test(\"cast TimestampNTZ to Timestamp - DST edge cases\") {\n    val data = Seq(\n      Row(java.time.LocalDateTime.parse(\"2024-03-31T01:30:00\")), // Spring forward (Europe/London)\n      Row(java.time.LocalDateTime.parse(\"2024-10-27T01:30:00\")) // Fall back (Europe/London)\n    )\n    val schema = StructType(Seq(StructField(\"ts_ntz\", DataTypes.TimestampNTZType, true)))\n    spark\n      .createDataFrame(spark.sparkContext.parallelize(data), schema)\n      .createOrReplaceTempView(\"dst_tbl\")\n\n    // We `allowIncompatible` here because casts involving TimestampNTZ are marked\n    // as Incompatible (due to incorrect behaviour when casting from a string)\n    withSQLConf(\n      SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"Europe/London\",\n      \"spark.comet.expression.Cast.allowIncompatible\" -> \"true\") {\n      checkSparkAnswerAndOperator(\n        \"SELECT ts_ntz, CAST(ts_ntz AS TIMESTAMP) FROM dst_tbl ORDER BY ts_ntz\")\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/DataGenerator.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.apache.spark.sql.{RandomDataGenerator, Row}\nimport org.apache.spark.sql.types.{StringType, StructType}\n\nobject DataGenerator {\n  // note that we use `def` rather than `val` intentionally here so that\n  // each test suite starts with a fresh data generator to help ensure\n  // that tests are deterministic\n  def DEFAULT = new DataGenerator(new Random(42))\n  // matches the probability of nulls in Spark's RandomDataGenerator\n  private val PROBABILITY_OF_NULL: Float = 0.1f\n}\n\nclass DataGenerator(r: Random) {\n  import DataGenerator._\n\n  /** Pick a random item from a sequence */\n  def pickRandom[T](items: Seq[T]): T = {\n    items(r.nextInt(items.length))\n  }\n\n  /** Generate a random string using the specified characters */\n  def generateString(chars: String, maxLen: Int): String = {\n    val len = r.nextInt(maxLen)\n    Range(0, len).map(_ => chars.charAt(r.nextInt(chars.length))).mkString\n  }\n\n  /** Generate random strings */\n  def generateStrings(n: Int, maxLen: Int): Seq[String] = {\n    Range(0, n).map(_ => r.nextString(maxLen))\n  }\n\n  /** Generate random strings using the specified characters */\n  def generateStrings(n: Int, chars: String, maxLen: Int): Seq[String] = {\n    Range(0, n).map(_ => generateString(chars, maxLen))\n  }\n\n  def generateFloats(n: Int): Seq[Float] = {\n    Seq(\n      Float.MaxValue,\n      Float.MinPositiveValue,\n      Float.MinValue,\n      Float.NaN,\n      Float.PositiveInfinity,\n      Float.NegativeInfinity,\n      1.0f,\n      -1.0f,\n      Short.MinValue.toFloat,\n      Short.MaxValue.toFloat,\n      0.0f) ++\n      Range(0, n).map(_ => r.nextFloat())\n  }\n\n  def generateDoubles(n: Int): Seq[Double] = {\n    Seq(\n      Double.MaxValue,\n      Double.MinPositiveValue,\n      Double.MinValue,\n      Double.NaN,\n      Double.PositiveInfinity,\n      Double.NegativeInfinity,\n      0.0d) ++\n      Range(0, n).map(_ => r.nextDouble())\n  }\n\n  def generateBytes(n: Int): Seq[Byte] = {\n    Seq(Byte.MinValue, Byte.MaxValue) ++\n      Range(0, n).map(_ => r.nextInt().toByte)\n  }\n\n  def generateShorts(n: Int): Seq[Short] = {\n    val r = new Random(0)\n    Seq(Short.MinValue, Short.MaxValue) ++\n      Range(0, n).map(_ => r.nextInt().toShort)\n  }\n\n  def generateInts(n: Int): Seq[Int] = {\n    Seq(Int.MinValue, Int.MaxValue) ++\n      Range(0, n).map(_ => r.nextInt())\n  }\n\n  def generateLongs(n: Int): Seq[Long] = {\n    Seq(Long.MinValue, Long.MaxValue) ++\n      Range(0, n).map(_ => r.nextLong())\n  }\n\n  // Generate a random row according to the schema, the string filed in the struct could be\n  // configured to generate 
strings by passing a stringGen function. Other types are delegated\n  // to Spark's RandomDataGenerator.\n  def generateRow(schema: StructType, stringGen: Option[() => String] = None): Row = {\n    val fields = schema.fields.map { f =>\n      f.dataType match {\n        case StructType(children) =>\n          generateRow(StructType(children), stringGen)\n        case StringType if stringGen.isDefined =>\n          val gen = stringGen.get\n          val data = if (f.nullable && r.nextFloat() <= PROBABILITY_OF_NULL) {\n            null\n          } else {\n            gen()\n          }\n          data\n        case _ =>\n          val gen = RandomDataGenerator.forType(f.dataType, f.nullable, r) match {\n            case Some(g) => g\n            case None =>\n              throw new IllegalStateException(s\"No RandomDataGenerator for type ${f.dataType}\")\n          }\n          gen()\n      }\n    }.toSeq\n    Row.fromSeq(fields)\n  }\n\n  def generateRows(\n      num: Int,\n      schema: StructType,\n      stringGen: Option[() => String] = None): Seq[Row] = {\n    Range(0, num).map(_ => generateRow(schema, stringGen))\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/DataGeneratorSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport scala.util.Random\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.types.{ArrayType, DataType, MapType, StructType}\n\nimport org.apache.comet.testing.{FuzzDataGenerator, SchemaGenOptions}\n\nclass DataGeneratorSuite extends CometTestBase {\n\n  test(\"generate nested schema has at least minDepth levels\") {\n    val minDepth = 3\n    val numCols = 4\n    val schema = FuzzDataGenerator.generateNestedSchema(\n      new Random(42),\n      numCols,\n      minDepth = minDepth,\n      maxDepth = minDepth + 1,\n      options = SchemaGenOptions(generateMap = true, generateArray = true, generateStruct = true))\n    assert(schema.fields.length == numCols)\n\n    def calculateDepth(dataType: DataType): Int = {\n      dataType match {\n        case ArrayType(elementType, _) => 1 + calculateDepth(elementType)\n        case StructType(fields) =>\n          if (fields.isEmpty) 1\n          else 1 + fields.map(f => calculateDepth(f.dataType)).max\n        case MapType(k, v, _) =>\n          calculateDepth(k).max(calculateDepth(v))\n        case _ =>\n          // primitive type\n          1\n      }\n    }\n\n    val actualDepth = schema.fields.map(f => calculateDepth(f.dataType)).max\n    assert(\n      actualDepth >= minDepth,\n      s\"Generated schema depth $actualDepth is less than required minimum depth $minDepth\")\n  }\n\n  test(\"test configurable stringGen in row generator\") {\n    val gen = DataGenerator.DEFAULT\n    val chars = \"abcde\"\n    val maxLen = 10\n    val stringGen = () => gen.generateString(chars, maxLen)\n    val numRows = 100\n    val schema = new StructType().add(\"a\", \"string\")\n    var numNulls = 0\n    gen\n      .generateRows(numRows, schema, Some(stringGen))\n      .foreach(row => {\n        if (row.getString(0) != null) {\n          assert(row.getString(0).forall(chars.toSeq.contains))\n          assert(row.getString(0).length <= maxLen)\n        } else {\n          numNulls += 1\n        }\n      })\n    // 0.1 null probability\n    assert(numNulls >= 0.05 * numRows && numNulls <= 0.15 * numRows)\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/IcebergReadFromS3Suite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.sql.comet.CometIcebergNativeScanExec\nimport org.apache.spark.sql.execution.SparkPlan\n\nimport org.apache.comet.iceberg.RESTCatalogHelper\n\nclass IcebergReadFromS3Suite extends CometS3TestBase with RESTCatalogHelper {\n\n  override protected val testBucketName = \"test-iceberg-bucket\"\n\n  private def icebergAvailable: Boolean = {\n    try {\n      Class.forName(\"org.apache.iceberg.catalog.Catalog\")\n      true\n    } catch {\n      case _: ClassNotFoundException => false\n    }\n  }\n\n  override protected def sparkConf: SparkConf = {\n    val conf = super.sparkConf\n\n    conf.set(\"spark.sql.catalog.s3_catalog\", \"org.apache.iceberg.spark.SparkCatalog\")\n    conf.set(\"spark.sql.catalog.s3_catalog.type\", \"hadoop\")\n    conf.set(\"spark.sql.catalog.s3_catalog.warehouse\", s\"s3a://$testBucketName/warehouse\")\n\n    conf.set(CometConf.COMET_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_EXEC_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_ICEBERG_NATIVE_ENABLED.key, \"true\")\n\n    conf\n  }\n\n  /** Collects all CometIcebergNativeScanExec nodes from a plan */\n  private def collectIcebergNativeScans(plan: SparkPlan): Seq[CometIcebergNativeScanExec] = {\n    collect(plan) { case scan: CometIcebergNativeScanExec =>\n      scan\n    }\n  }\n\n  /**\n   * Helper to verify query correctness and that exactly one CometIcebergNativeScanExec is used.\n   */\n  private def checkIcebergNativeScan(query: String): Unit = {\n    val (_, cometPlan) = checkSparkAnswer(query)\n    val icebergScans = collectIcebergNativeScans(cometPlan)\n    assert(\n      icebergScans.length == 1,\n      s\"Expected exactly 1 CometIcebergNativeScanExec but found ${icebergScans.length}. 
Plan:\\n$cometPlan\")\n  }\n\n  test(\"create and query simple Iceberg table from MinIO\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    spark.sql(\"\"\"\n      CREATE TABLE s3_catalog.db.simple_table (\n        id INT,\n        name STRING,\n        value DOUBLE\n      ) USING iceberg\n    \"\"\")\n\n    spark.sql(\"\"\"\n      INSERT INTO s3_catalog.db.simple_table\n      VALUES (1, 'Alice', 10.5), (2, 'Bob', 20.3), (3, 'Charlie', 30.7)\n    \"\"\")\n\n    checkIcebergNativeScan(\"SELECT * FROM s3_catalog.db.simple_table ORDER BY id\")\n\n    spark.sql(\"DROP TABLE s3_catalog.db.simple_table\")\n  }\n\n  test(\"read partitioned Iceberg table from MinIO\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    spark.sql(\"\"\"\n      CREATE TABLE s3_catalog.db.partitioned_table (\n        id INT,\n        category STRING,\n        value DOUBLE\n      ) USING iceberg\n      PARTITIONED BY (category)\n    \"\"\")\n\n    spark.sql(\"\"\"\n      INSERT INTO s3_catalog.db.partitioned_table VALUES\n      (1, 'A', 10.5), (2, 'B', 20.3), (3, 'C', 30.7),\n      (4, 'A', 15.2), (5, 'B', 25.8), (6, 'C', 35.0)\n    \"\"\")\n\n    checkIcebergNativeScan(\"SELECT * FROM s3_catalog.db.partitioned_table ORDER BY id\")\n    checkIcebergNativeScan(\n      \"SELECT * FROM s3_catalog.db.partitioned_table WHERE category = 'A' ORDER BY id\")\n\n    spark.sql(\"DROP TABLE s3_catalog.db.partitioned_table\")\n  }\n\n  test(\"filter pushdown to S3-backed Iceberg table\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    spark.sql(\"\"\"\n      CREATE TABLE s3_catalog.db.filter_test (\n        id INT,\n        name STRING,\n        value DOUBLE\n      ) USING iceberg\n    \"\"\")\n\n    spark.sql(\"\"\"\n      INSERT INTO s3_catalog.db.filter_test VALUES\n      (1, 'Alice', 10.5), (2, 'Bob', 20.3), (3, 'Charlie', 30.7),\n      (4, 'Diana', 15.2), (5, 'Eve', 25.8)\n    \"\"\")\n\n    checkIcebergNativeScan(\"SELECT * FROM s3_catalog.db.filter_test WHERE id = 3\")\n    checkIcebergNativeScan(\"SELECT * FROM s3_catalog.db.filter_test WHERE value > 20.0\")\n    checkIcebergNativeScan(\"SELECT * FROM s3_catalog.db.filter_test WHERE name = 'Alice'\")\n\n    spark.sql(\"DROP TABLE s3_catalog.db.filter_test\")\n  }\n\n  test(\"multiple files in S3 - verify no duplicates\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withSQLConf(\"spark.sql.files.maxRecordsPerFile\" -> \"50\") {\n      spark.sql(\"\"\"\n        CREATE TABLE s3_catalog.db.multifile_test (\n          id INT,\n          data STRING\n        ) USING iceberg\n      \"\"\")\n\n      spark.sql(\"\"\"\n        INSERT INTO s3_catalog.db.multifile_test\n        SELECT id, CONCAT('data_', CAST(id AS STRING)) as data\n        FROM range(200)\n      \"\"\")\n\n      checkIcebergNativeScan(\"SELECT COUNT(DISTINCT id) FROM s3_catalog.db.multifile_test\")\n      checkIcebergNativeScan(\n        \"SELECT * FROM s3_catalog.db.multifile_test WHERE id < 10 ORDER BY id\")\n\n      spark.sql(\"DROP TABLE s3_catalog.db.multifile_test\")\n    }\n  }\n\n  test(\"large scale partitioned table - 100 partitions with many files\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    withSQLConf(\n      \"spark.sql.files.maxRecordsPerFile\" -> \"50\",\n      \"spark.sql.adaptive.enabled\" -> \"false\") {\n      spark.sql(\"\"\"\n        CREATE TABLE s3_catalog.db.large_partitioned_test (\n          id INT,\n          data STRING,\n          
partition_id INT\n        ) USING iceberg\n        PARTITIONED BY (partition_id)\n      \"\"\")\n\n      spark.sql(\"\"\"\n        INSERT INTO s3_catalog.db.large_partitioned_test\n        SELECT\n          id,\n          CONCAT('data_', CAST(id AS STRING)) as data,\n          (id % 100) as partition_id\n        FROM range(500000)\n      \"\"\")\n\n      checkIcebergNativeScan(\n        \"SELECT COUNT(DISTINCT id) FROM s3_catalog.db.large_partitioned_test\")\n      checkIcebergNativeScan(\n        \"SELECT * FROM s3_catalog.db.large_partitioned_test WHERE id < 10 ORDER BY id\")\n      checkIcebergNativeScan(\n        \"SELECT SUM(id) FROM s3_catalog.db.large_partitioned_test WHERE partition_id = 0\")\n      checkIcebergNativeScan(\n        \"SELECT SUM(id) FROM s3_catalog.db.large_partitioned_test WHERE partition_id IN (0, 50, 99)\")\n\n      spark.sql(\"DROP TABLE s3_catalog.db.large_partitioned_test PURGE\")\n    }\n  }\n\n  test(\"MOR table with deletes in S3\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    spark.sql(\"\"\"\n      CREATE TABLE s3_catalog.db.mor_delete_test (\n        id INT,\n        name STRING,\n        value DOUBLE\n      ) USING iceberg\n      TBLPROPERTIES (\n        'write.delete.mode' = 'merge-on-read',\n        'write.merge.mode' = 'merge-on-read'\n      )\n    \"\"\")\n\n    spark.sql(\"\"\"\n      INSERT INTO s3_catalog.db.mor_delete_test VALUES\n      (1, 'Alice', 10.5), (2, 'Bob', 20.3), (3, 'Charlie', 30.7),\n      (4, 'Diana', 15.2), (5, 'Eve', 25.8)\n    \"\"\")\n\n    spark.sql(\"DELETE FROM s3_catalog.db.mor_delete_test WHERE id IN (2, 4)\")\n\n    checkIcebergNativeScan(\"SELECT * FROM s3_catalog.db.mor_delete_test ORDER BY id\")\n\n    spark.sql(\"DROP TABLE s3_catalog.db.mor_delete_test\")\n  }\n\n  test(\"REST catalog credential vending rejects wrong credentials\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    val wrongCreds = Map(\n      \"s3.access-key-id\" -> \"WRONG_ACCESS_KEY\",\n      \"s3.secret-access-key\" -> \"WRONG_SECRET_KEY\",\n      \"s3.endpoint\" -> minioContainer.getS3URL,\n      \"s3.path-style-access\" -> \"true\")\n    val warehouse = s\"s3a://$testBucketName/warehouse-bad-creds\"\n\n    withRESTCatalog(vendedCredentials = wrongCreds, warehouseLocation = Some(warehouse)) {\n      (restUri, _, _) =>\n        withSQLConf(\n          \"spark.sql.catalog.bad_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n          \"spark.sql.catalog.bad_cat.catalog-impl\" -> \"org.apache.iceberg.rest.RESTCatalog\",\n          \"spark.sql.catalog.bad_cat.uri\" -> restUri,\n          \"spark.sql.catalog.bad_cat.warehouse\" -> warehouse) {\n\n          spark.sql(\"CREATE NAMESPACE bad_cat.db\")\n\n          // CREATE TABLE succeeds (metadata only, no S3 access needed)\n          spark.sql(\"CREATE TABLE bad_cat.db.test (id INT) USING iceberg\")\n\n          // INSERT fails because S3FileIO uses the wrong vended credentials\n          val e = intercept[Exception] {\n            spark.sql(\"INSERT INTO bad_cat.db.test VALUES (1)\")\n          }\n          assert(e.getMessage.contains(\"403\"), s\"Expected S3 403 error but got: ${e.getMessage}\")\n        }\n    }\n  }\n\n  test(\"REST catalog credential vending with native Iceberg scan on S3\") {\n    assume(icebergAvailable, \"Iceberg not available in classpath\")\n\n    val vendedCreds = Map(\n      \"s3.access-key-id\" -> userName,\n      \"s3.secret-access-key\" -> password,\n      \"s3.endpoint\" -> minioContainer.getS3URL,\n    
  \"s3.path-style-access\" -> \"true\")\n    val warehouse = s\"s3a://$testBucketName/warehouse-vending\"\n\n    withRESTCatalog(vendedCredentials = vendedCreds, warehouseLocation = Some(warehouse)) {\n      (restUri, _, _) =>\n        withSQLConf(\n          \"spark.sql.catalog.vend_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n          \"spark.sql.catalog.vend_cat.catalog-impl\" -> \"org.apache.iceberg.rest.RESTCatalog\",\n          \"spark.sql.catalog.vend_cat.uri\" -> restUri,\n          \"spark.sql.catalog.vend_cat.warehouse\" -> warehouse,\n          CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"true\") {\n\n          spark.sql(\"CREATE NAMESPACE vend_cat.db\")\n\n          spark.sql(\"\"\"\n            CREATE TABLE vend_cat.db.simple (\n              id INT, name STRING, value DOUBLE\n            ) USING iceberg\n          \"\"\")\n          spark.sql(\"\"\"\n            INSERT INTO vend_cat.db.simple\n            VALUES (1, 'Alice', 10.5), (2, 'Bob', 20.3), (3, 'Charlie', 30.7)\n          \"\"\")\n          checkIcebergNativeScan(\"SELECT * FROM vend_cat.db.simple ORDER BY id\")\n\n          spark.sql(\"DROP TABLE vend_cat.db.simple\")\n          spark.sql(\"DROP NAMESPACE vend_cat.db\")\n        }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/SparkErrorConverterSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport org.scalatest.funsuite.AnyFunSuite\n\nclass SparkErrorConverterSuite extends AnyFunSuite {\n  private def castOverflowError(fromType: String, value: String): Throwable = {\n    SparkErrorConverter\n      .convertErrorType(\n        \"CastOverFlow\",\n        \"CAST_OVERFLOW\",\n        Map(\"fromType\" -> fromType, \"toType\" -> \"INT\", \"value\" -> value),\n        Array.empty,\n        null)\n      .getOrElse(fail(\"Expected CastOverFlow to be converted to a Spark exception\"))\n  }\n\n  private def assertCastOverflowContains(\n      fromType: String,\n      value: String,\n      expectedMessagePart: String): Unit = {\n    val err = castOverflowError(fromType, value)\n    assert(\n      !err.isInstanceOf[NumberFormatException],\n      s\"Unexpected parse failure for $fromType $value\")\n    assert(\n      err.getMessage.contains(expectedMessagePart),\n      s\"Expected '${err.getMessage}' to contain '$expectedMessagePart' for $fromType $value\")\n  }\n\n  private def assertCastOverflowContainsNaN(fromType: String, value: String): Unit = {\n    val err = castOverflowError(fromType, value)\n    assert(\n      !err.isInstanceOf[NumberFormatException],\n      s\"Unexpected parse failure for $fromType $value\")\n    assert(\n      err.getMessage.toLowerCase.contains(\"nan\"),\n      s\"Expected '${err.getMessage}' to contain NaN for $fromType $value\")\n  }\n\n  test(\"CastOverFlow conversion handles all float positive infinity literals\") {\n    Seq(\"inf\", \"+inf\", \"infinity\", \"+infinity\").foreach { value =>\n      assertCastOverflowContains(\"FLOAT\", value, \"Infinity\")\n    }\n  }\n\n  test(\"CastOverFlow conversion handles all float negative infinity literals\") {\n    Seq(\"-inf\", \"-infinity\").foreach { value =>\n      assertCastOverflowContains(\"FLOAT\", value, \"-Infinity\")\n    }\n  }\n\n  test(\"CastOverFlow conversion handles all float NaN literals\") {\n    Seq(\"nan\", \"+nan\", \"-nan\").foreach { value =>\n      assertCastOverflowContainsNaN(\"FLOAT\", value)\n    }\n  }\n\n  test(\"CastOverFlow conversion handles float standard numeric literal fallback\") {\n    assertCastOverflowContains(\"FLOAT\", \"1.5\", \"1.5\")\n  }\n\n  test(\"CastOverFlow conversion handles all double positive infinity literals\") {\n    Seq(\"inf\", \"infd\", \"+inf\", \"+infd\", \"infinity\", \"infinityd\", \"+infinity\", \"+infinityd\")\n      .foreach { value =>\n        assertCastOverflowContains(\"DOUBLE\", value, \"Infinity\")\n      }\n  }\n\n  test(\"CastOverFlow conversion handles all double negative infinity literals\") {\n    Seq(\"-inf\", \"-infd\", \"-infinity\", \"-infinityd\").foreach { value =>\n      
assertCastOverflowContains(\"DOUBLE\", value, \"-Infinity\")\n    }\n  }\n\n  test(\"CastOverFlow conversion handles all double NaN literals\") {\n    Seq(\"nan\", \"nand\", \"+nan\", \"+nand\", \"-nan\", \"-nand\").foreach { value =>\n      assertCastOverflowContainsNaN(\"DOUBLE\", value)\n    }\n  }\n\n  test(\"CastOverFlow conversion handles double standard numeric literal fallback\") {\n    assertCastOverflowContains(\"DOUBLE\", \"1.5\", \"1.5\")\n    assertCastOverflowContains(\"DOUBLE\", \"1.5d\", \"1.5\")\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/SqlFileTestParser.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.io.File\n\nimport scala.io.Source\n\n/** A record in a SQL test file: either a statement (DDL/DML) or a query (SELECT). */\nsealed trait SqlTestRecord\n\n/**\n * A SQL statement to execute (CREATE TABLE, INSERT, etc.).\n *\n * @param sql\n *   The SQL text.\n * @param line\n *   1-based line number in the original .sql file where the statement starts.\n */\ncase class SqlStatement(sql: String, line: Int) extends SqlTestRecord\n\n/**\n * A SQL query whose results are compared between Spark and Comet.\n *\n * @param sql\n *   The SQL text.\n * @param mode\n *   How to validate the query.\n * @param line\n *   1-based line number in the original .sql file where the query starts.\n */\ncase class SqlQuery(sql: String, mode: QueryAssertionMode = CheckCoverageAndAnswer, line: Int)\n    extends SqlTestRecord\n\nsealed trait QueryAssertionMode\ncase object CheckCoverageAndAnswer extends QueryAssertionMode\ncase object SparkAnswerOnly extends QueryAssertionMode\ncase class WithTolerance(tol: Double) extends QueryAssertionMode\ncase class ExpectFallback(reason: String) extends QueryAssertionMode\ncase class Ignore(reason: String) extends QueryAssertionMode\ncase class ExpectError(pattern: String) extends QueryAssertionMode\n\n/**\n * Parsed representation of a .sql test file.\n *\n * @param configs\n *   Spark SQL configs to set for this test file.\n * @param configMatrix\n *   Map of config key to list of values. The test will run once per combination.\n * @param records\n *   Ordered list of statements and queries.\n * @param tables\n *   Table names extracted from CREATE TABLE statements (for cleanup).\n * @param minSparkVersion\n *   Optional minimum Spark version required to run this test (e.g. 
\"3.5\").\n */\ncase class SqlTestFile(\n    configs: Seq[(String, String)],\n    configMatrix: Seq[(String, Seq[String])],\n    records: Seq[SqlTestRecord],\n    tables: Seq[String],\n    minSparkVersion: Option[String] = None)\n\nobject SqlFileTestParser {\n\n  private val ConfigPattern = \"\"\"--\\s*Config:\\s*(.+)=(.+)\"\"\".r\n  private val ConfigMatrixPattern = \"\"\"--\\s*ConfigMatrix:\\s*(.+)=(.+)\"\"\".r\n  private val MinSparkVersionPattern = \"\"\"--\\s*MinSparkVersion:\\s*(.+)\"\"\".r\n  private val CreateTablePattern = \"\"\"(?i)CREATE\\s+TABLE\\s+(\\w+)\"\"\".r.unanchored\n\n  def parse(file: File): SqlTestFile = {\n    val source = Source.fromFile(file, \"UTF-8\")\n    try {\n      parse(source.getLines().toSeq)\n    } finally {\n      source.close()\n    }\n  }\n\n  def parse(lines: Seq[String]): SqlTestFile = {\n    var configs = Seq.empty[(String, String)]\n    var configMatrix = Seq.empty[(String, Seq[String])]\n    var minSparkVersion: Option[String] = None\n    val records = Seq.newBuilder[SqlTestRecord]\n    val tables = Seq.newBuilder[String]\n\n    var lineIdx = 0\n    while (lineIdx < lines.length) {\n      val line = lines(lineIdx).trim\n\n      line match {\n        case ConfigPattern(key, value) =>\n          configs :+= (key.trim -> value.trim)\n          lineIdx += 1\n\n        case ConfigMatrixPattern(key, values) =>\n          configMatrix :+= (key.trim -> values.split(\",\").map(_.trim).toSeq)\n          lineIdx += 1\n\n        case MinSparkVersionPattern(version) =>\n          minSparkVersion = Some(version.trim)\n          lineIdx += 1\n\n        case \"statement\" =>\n          lineIdx += 1\n          val startLine = lineIdx + 1\n          val (sql, nextIdx) = collectSql(lines, lineIdx)\n          // Extract table names for cleanup\n          CreateTablePattern.findFirstMatchIn(sql).foreach(m => tables += m.group(1))\n          records += SqlStatement(sql, startLine)\n          lineIdx = nextIdx\n\n        case s if s.startsWith(\"query\") =>\n          val mode = parseQueryAssertionMode(s)\n          lineIdx += 1\n          val startLine = lineIdx + 1\n          val (sql, nextIdx) = collectSql(lines, lineIdx)\n          records += SqlQuery(sql, mode, startLine)\n          lineIdx = nextIdx\n\n        case _ =>\n          // Skip blank lines and comments\n          lineIdx += 1\n      }\n    }\n\n    SqlTestFile(configs, configMatrix, records.result(), tables.result(), minSparkVersion)\n  }\n\n  private val FallbackPattern = \"\"\"query\\s+expect_fallback\\((.+)\\)\"\"\".r\n  private val IgnorePattern = \"\"\"query\\s+ignore\\((.+)\\)\"\"\".r\n  private val ErrorPattern = \"\"\"query\\s+expect_error\\((.+)\\)\"\"\".r\n\n  private def parseQueryAssertionMode(directive: String): QueryAssertionMode = {\n    directive match {\n      case FallbackPattern(reason) =>\n        ExpectFallback(reason.trim)\n      case IgnorePattern(reason) =>\n        Ignore(reason.trim)\n      case ErrorPattern(pattern) =>\n        ExpectError(pattern.trim)\n      case _ =>\n        val parts = directive.split(\"\\\\s+\")\n        if (parts.length == 1) return CheckCoverageAndAnswer\n        parts(1) match {\n          case \"spark_answer_only\" => SparkAnswerOnly\n          case s if s.startsWith(\"tolerance=\") =>\n            WithTolerance(s.stripPrefix(\"tolerance=\").toDouble)\n          case _ => CheckCoverageAndAnswer\n        }\n    }\n  }\n\n  /** Collect SQL lines until a blank line or end of file. 
*/\n  private def collectSql(lines: Seq[String], start: Int): (String, Int) = {\n    val sb = new StringBuilder\n    var lineIdx = start\n    while (lineIdx < lines.length && lines(lineIdx).trim.nonEmpty) {\n      if (sb.nonEmpty) sb.append(\"\\n\")\n      sb.append(lines(lineIdx))\n      lineIdx += 1\n    }\n    (sb.toString, lineIdx)\n  }\n}\n"
  },
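  {
    "path": "spark/src/test/resources/sql-tests/example-format.sql",
    "content": "-- Hypothetical illustration (this path and file are examples only, not part of\n-- the original suite): the directive format accepted by SqlFileTestParser.\n-- Header directives set configs; each 'statement' or 'query' directive is\n-- followed by SQL collected until the next blank line.\n-- Config: spark.sql.ansi.enabled=false\n-- ConfigMatrix: parquet.enable.dictionary=true,false\n-- MinSparkVersion: 3.5\n\nstatement\nCREATE TABLE t(a INT, b STRING) USING parquet\n\nstatement\nINSERT INTO t VALUES (1, 'x'), (2, 'y')\n\nquery\nSELECT b, SUM(a) FROM t GROUP BY b\n\nquery tolerance=0.001\nSELECT AVG(a) FROM t\n\nquery expect_fallback(example fallback reason)\nSELECT a FROM t\n"
  },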
  {
    "path": "spark/src/test/scala/org/apache/comet/WithHdfsCluster.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet\n\nimport java.io.{File, FileWriter}\nimport java.net.InetAddress\nimport java.nio.file.Files\nimport java.util.UUID\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.commons.io.FileUtils\nimport org.apache.hadoop.conf.Configuration\nimport org.apache.hadoop.fs.{FileSystem, Path}\nimport org.apache.hadoop.hdfs.MiniDFSCluster\nimport org.apache.hadoop.hdfs.client.HdfsClientConfigKeys\nimport org.apache.spark.internal.Logging\n\n/**\n * Trait for starting and stopping a MiniDFSCluster for testing.\n *\n * Most copy from:\n * https://github.com/apache/kyuubi/blob/master/kyuubi-server/src/test/scala/org/apache/kyuubi/server/MiniDFSService.scala\n */\ntrait WithHdfsCluster extends Logging {\n\n  private var hadoopConfDir: File = _\n  private var hdfsCluster: MiniDFSCluster = _\n  private var hdfsConf: Configuration = _\n  private var tmpRootDir: Path = _\n  private var fileSystem: FileSystem = _\n\n  def startHdfsCluster(): Unit = {\n    hdfsConf = new Configuration()\n    // before HADOOP-18206 (3.4.0), HDFS MetricsLogger strongly depends on\n    // commons-logging, we should disable it explicitly, otherwise, it throws\n    // ClassNotFound: org.apache.commons.logging.impl.Log4JLogger\n    hdfsConf.set(\"dfs.namenode.metrics.logger.period.seconds\", \"0\")\n    hdfsConf.set(\"dfs.datanode.metrics.logger.period.seconds\", \"0\")\n    // Set bind host to localhost to avoid java.net.BindException\n    hdfsConf.setIfUnset(\"dfs.namenode.rpc-bind-host\", \"localhost\")\n\n    hdfsCluster = new MiniDFSCluster.Builder(hdfsConf)\n      .checkDataNodeAddrConfig(true)\n      .checkDataNodeHostConfig(true)\n      .build()\n    logInfo(\n      \"NameNode address in configuration is \" +\n        s\"${hdfsConf.get(HdfsClientConfigKeys.DFS_NAMENODE_RPC_ADDRESS_KEY)}\")\n    hadoopConfDir =\n      Files.createTempDirectory(s\"comet_hdfs_conf_${UUID.randomUUID().toString}\").toFile\n    saveHadoopConf(hadoopConfDir)\n\n    fileSystem = hdfsCluster.getFileSystem\n    tmpRootDir = new Path(\"/tmp\")\n    fileSystem.mkdirs(tmpRootDir)\n  }\n\n  def stopHdfsCluster(): Unit = {\n    if (hdfsCluster != null) hdfsCluster.shutdown(true)\n    if (hadoopConfDir != null) FileUtils.deleteDirectory(hadoopConfDir)\n  }\n\n  private def saveHadoopConf(hadoopConfDir: File): Unit = {\n    val configToWrite = new Configuration(false)\n    val hostName = InetAddress.getLocalHost.getHostName\n    hdfsConf.iterator().asScala.foreach { kv =>\n      val key = kv.getKey\n      val value = kv.getValue.replaceAll(hostName, \"localhost\")\n      configToWrite.set(key, value)\n    }\n    val file = new File(hadoopConfDir, \"core-site.xml\")\n    val writer = new 
FileWriter(file)\n    configToWrite.writeXml(writer)\n    writer.close()\n  }\n\n  def getHadoopConf: Configuration = hdfsConf\n  def getDFSPort: Int = hdfsCluster.getNameNodePort\n  def getHadoopConfDir: String = hadoopConfDir.getAbsolutePath\n  def getHadoopConfFile: Path = new Path(hadoopConfDir.toURI.toURL.toString, \"core-site.xml\")\n  def getTmpRootDir: Path = tmpRootDir\n  def getFileSystem: FileSystem = fileSystem\n\n  def withTmpHdfsDir(tmpDir: Path => Unit): Unit = {\n    val tempPath = new Path(tmpRootDir, UUID.randomUUID().toString)\n    fileSystem.mkdirs(tempPath)\n    try tmpDir(tempPath)\n    finally fileSystem.delete(tempPath, true)\n  }\n\n}\n"
  },
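  {
    "path": "spark/src/test/scala/org/apache/comet/WithHdfsClusterExampleSuite.scala",
    "content": "package org.apache.comet\n\nimport org.apache.spark.sql.CometTestBase\n\n// Hypothetical sketch (this file name and test are illustrative, not part of\n// the original suite): how a suite can mix WithHdfsCluster into CometTestBase\n// to run reads against a MiniDFSCluster. Assumes CometTestBase provides the\n// usual ScalaTest beforeAll/afterAll lifecycle hooks.\nclass WithHdfsClusterExampleSuite extends CometTestBase with WithHdfsCluster {\n\n  override def beforeAll(): Unit = {\n    // Start the in-process HDFS cluster before any test uses the Spark session.\n    startHdfsCluster()\n    super.beforeAll()\n  }\n\n  override def afterAll(): Unit = {\n    super.afterAll()\n    stopHdfsCluster()\n  }\n\n  test(\"round-trip Parquet on MiniDFSCluster\") {\n    // withTmpHdfsDir creates a unique directory under /tmp on HDFS and\n    // deletes it recursively when the body completes.\n    withTmpHdfsDir { dir =>\n      // Fully qualified hdfs:// URI, so Spark needs no extra Hadoop conf.\n      val path = s\"${getFileSystem.getUri}$dir/data.parquet\"\n      spark.range(10).toDF(\"id\").write.parquet(path)\n      checkSparkAnswer(spark.read.parquet(path))\n    }\n  }\n}\n"
  },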
  {
    "path": "spark/src/test/scala/org/apache/comet/csv/CometCsvNativeReadSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.csv\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{IntegerType, StringType, StructType}\n\nimport org.apache.comet.CometConf\n\nclass CometCsvNativeReadSuite extends CometTestBase {\n  private val TEST_CSV_PATH_NO_HEADER = \"src/test/resources/test-data/csv-test-1.csv\"\n  private val TEST_CSV_PATH_HAS_HEADER = \"src/test/resources/test-data/csv-test-2.csv\"\n\n  test(\"Native csv read - with schema\") {\n    withSQLConf(\n      CometConf.COMET_CSV_V2_NATIVE_ENABLED.key -> \"true\",\n      SQLConf.USE_V1_SOURCE_LIST.key -> \"\") {\n      val schema = new StructType()\n        .add(\"a\", IntegerType)\n        .add(\"b\", IntegerType)\n        .add(\"c\", IntegerType)\n      val df = spark.read\n        .options(Map(\"header\" -> \"false\", \"delimiter\" -> \",\"))\n        .schema(schema)\n        .csv(TEST_CSV_PATH_NO_HEADER)\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"Native csv read - without schema\") {\n    withSQLConf(\n      CometConf.COMET_CSV_V2_NATIVE_ENABLED.key -> \"true\",\n      SQLConf.USE_V1_SOURCE_LIST.key -> \"\") {\n      val df = spark.read\n        .options(Map(\"header\" -> \"true\", \"delimiter\" -> \",\"))\n        .csv(TEST_CSV_PATH_HAS_HEADER)\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"Native csv read - test fallback reasons\") {\n    withSQLConf(\n      CometConf.COMET_CSV_V2_NATIVE_ENABLED.key -> \"true\",\n      SQLConf.USE_V1_SOURCE_LIST.key -> \"\") {\n      val columnNameOfCorruptedRecords =\n        SQLConf.get.getConf(SQLConf.COLUMN_NAME_OF_CORRUPT_RECORD)\n      val schema = new StructType()\n        .add(\"a\", IntegerType)\n        .add(\"b\", IntegerType)\n        .add(\"c\", IntegerType)\n        .add(columnNameOfCorruptedRecords, StringType)\n      var df = spark.read\n        .options(Map(\"header\" -> \"false\", \"delimiter\" -> \",\"))\n        .schema(schema)\n        .csv(TEST_CSV_PATH_NO_HEADER)\n      checkSparkAnswerAndFallbackReason(\n        df,\n        \"Comet doesn't support the processing of corrupted records\")\n      df = spark.read\n        .options(Map(\"header\" -> \"false\", \"delimiter\" -> \",\", \"inferSchema\" -> \"true\"))\n        .csv(TEST_CSV_PATH_NO_HEADER)\n      checkSparkAnswerAndFallbackReason(df, \"Comet doesn't support inferSchema=true option\")\n      df = spark.read\n        .options(Map(\"header\" -> \"false\", \"delimiter\" -> \",,\"))\n        .csv(TEST_CSV_PATH_NO_HEADER)\n      checkSparkAnswerAndFallbackReason(\n        df,\n        \"Comet supports only single-character delimiters, but got: ',,'\")\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/exec/CometAggregateSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport scala.util.Random\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.{CometTestBase, DataFrame, Row}\nimport org.apache.spark.sql.catalyst.expressions.Cast\nimport org.apache.spark.sql.catalyst.optimizer.EliminateSorts\nimport org.apache.spark.sql.comet.CometHashAggregateExec\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions.{avg, col, count_distinct, sum}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{DataTypes, StructField, StructType}\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometConf.COMET_EXEC_STRICT_FLOATING_POINT\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator, ParquetGenerator, SchemaGenOptions}\n\n/**\n * Test suite dedicated to Comet native aggregate operator\n */\nclass CometAggregateSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n  import testImplicits._\n\n  test(\"min/max floating point with negative zero\") {\n    val r = new Random(42)\n    val schema = StructType(\n      Seq(\n        StructField(\"float_col\", DataTypes.FloatType, nullable = true),\n        StructField(\"double_col\", DataTypes.DoubleType, nullable = true)))\n    val df = FuzzDataGenerator.generateDataFrame(\n      r,\n      spark,\n      schema,\n      1000,\n      DataGenOptions(generateNegativeZero = true))\n    df.createOrReplaceTempView(\"tbl\")\n\n    for (col <- Seq(\"float_col\", \"double_col\")) {\n      // assert that data contains positive and negative zero\n      assert(spark.sql(s\"select * from tbl where cast($col as string) = '0.0'\").count() > 0)\n      assert(spark.sql(s\"select * from tbl where cast($col as string) = '-0.0'\").count() > 0)\n      for (agg <- Seq(\"min\", \"max\")) {\n        withSQLConf(COMET_EXEC_STRICT_FLOATING_POINT.key -> \"true\") {\n          checkSparkAnswerAndFallbackReasons(\n            s\"select $agg($col) from tbl where cast($col as string) in ('0.0', '-0.0')\",\n            Set(\n              \"Unsupported aggregate expression(s)\",\n              s\"floating-point not supported when ${COMET_EXEC_STRICT_FLOATING_POINT.key}=true\"))\n        }\n        checkSparkAnswer(\n          s\"select $col, count(*) from tbl \" +\n            s\"where cast($col as string) in ('0.0', '-0.0') group by $col\")\n      }\n    }\n  }\n\n  test(\"avg decimal\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        ParquetGenerator.makeParquetFile(\n          
random,\n          spark,\n          filename,\n          10000,\n          SchemaGenOptions(),\n          DataGenOptions())\n      }\n      val tableName = \"avg_decimal\"\n      withTable(tableName) {\n        val table = spark.read.parquet(filename).coalesce(1)\n        table.createOrReplaceTempView(tableName)\n        // we fall back to Spark for avg on decimal due to the following issue\n        // https://github.com/apache/datafusion-comet/issues/1371\n        // once this is fixed, we should change this test to\n        // checkSparkAnswerAndNumOfAggregates\n        checkSparkAnswer(s\"SELECT c1, avg(c7) FROM $tableName GROUP BY c1 ORDER BY c1\")\n      }\n    }\n  }\n\n  test(\"stddev_pop should return NaN for some cases\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n      Seq(true, false).foreach { nullOnDivideByZero =>\n        withSQLConf(\"spark.sql.legacy.statisticalAggregate\" -> nullOnDivideByZero.toString) {\n\n          val data: Seq[(Float, Int)] = Seq((Float.PositiveInfinity, 1))\n          withParquetTable(data, \"tbl\", false) {\n            val df = sql(\"SELECT stddev_pop(_1), stddev_pop(_2) FROM tbl\")\n            checkSparkAnswerAndOperator(df)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"count with aggregation filter\") {\n    withSQLConf(\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      val df1 = sql(\"SELECT count(DISTINCT 2), count(DISTINCT 2,3)\")\n      checkSparkAnswer(df1)\n\n      val df2 = sql(\"SELECT count(DISTINCT 2), count(DISTINCT 3,2)\")\n      checkSparkAnswer(df2)\n    }\n  }\n\n  test(\"multiple column distinct count\") {\n    withSQLConf(\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      val df1 = Seq(\n        (\"a\", \"b\", \"c\"),\n        (\"a\", \"b\", \"c\"),\n        (\"a\", \"b\", \"d\"),\n        (\"x\", \"y\", \"z\"),\n        (\"x\", \"q\", null.asInstanceOf[String]))\n        .toDF(\"key1\", \"key2\", \"key3\")\n\n      checkSparkAnswer(df1.agg(count_distinct($\"key1\", $\"key2\")))\n      checkSparkAnswer(df1.agg(count_distinct($\"key1\", $\"key2\", $\"key3\")))\n      checkSparkAnswer(df1.groupBy($\"key1\").agg(count_distinct($\"key2\", $\"key3\")))\n    }\n  }\n\n  test(\"Only trigger Comet Final aggregation on Comet partial aggregation\") {\n    withTempView(\"lowerCaseData\") {\n      lowerCaseData.createOrReplaceTempView(\"lowerCaseData\")\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n        CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n        val df = sql(\"SELECT LAST(n) FROM lowerCaseData\")\n        checkSparkAnswer(df)\n      }\n    }\n  }\n\n  test(\n    \"Average expression in Comet Final should handle \" +\n      \"all null inputs from partial Spark aggregation\") {\n    withTempView(\"allNulls\") {\n      allNulls.createOrReplaceTempView(\"allNulls\")\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n        CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n        val df = sql(\"select sum(a), avg(a) from allNulls\")\n        checkSparkAnswer(df)\n      }\n    }\n  }\n\n  test(\"Aggregation without aggregate expressions should use correct result expressions\") {\n    
withSQLConf(\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test\")\n        makeParquetFile(path, 10000, 10, false)\n        withParquetTable(path.toUri.toString, \"tbl\") {\n          val df = sql(\"SELECT _g5 FROM tbl GROUP BY _g1, _g2, _g3, _g4, _g5\")\n          checkSparkAnswerAndOperator(df)\n        }\n      }\n    }\n  }\n\n  test(\"Final aggregation should not bind to the input of partial aggregation\") {\n    withSQLConf(\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test\")\n          makeParquetFile(path, 10000, 10, dictionaryEnabled)\n          withParquetTable(path.toUri.toString, \"tbl\") {\n            val df = sql(\"SELECT * FROM tbl\").groupBy(\"_g1\").agg(sum($\"_3\" + $\"_g3\"))\n            checkSparkAnswerAndOperator(df)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"Ensure traversed operators during finding first partial aggregation are all native\") {\n    withTable(\"lineitem\", \"part\") {\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n        CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n\n        sql(\n          \"CREATE TABLE lineitem(l_extendedprice DOUBLE, l_quantity DOUBLE, l_partkey STRING) USING PARQUET\")\n        sql(\"INSERT INTO TABLE lineitem VALUES (1.0, 1.0, '1')\")\n\n        sql(\n          \"CREATE TABLE part(p_partkey STRING, p_brand STRING, p_container STRING) USING PARQUET\")\n        sql(\"INSERT INTO TABLE part VALUES ('1', 'Brand#23', 'MED BOX')\")\n\n        val df = sql(\"\"\"select\n            sum(l_extendedprice) / 7.0 as avg_yearly\n            from\n            lineitem,\n            part\n              where\n              p_partkey = l_partkey\n              and p_brand = 'Brand#23'\n          and p_container = 'MED BOX'\n          and l_quantity < (\n            select\n          0.2 * avg(l_quantity)\n          from\n          lineitem\n          where\n          l_partkey = p_partkey\n          )\"\"\")\n        checkAnswer(df, Row(null))\n      }\n    }\n  }\n\n  test(\"SUM decimal supports emit.first\") {\n    withSQLConf(\n      SQLConf.OPTIMIZER_EXCLUDED_RULES.key -> EliminateSorts.ruleName,\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test\")\n          makeParquetFile(path, 10000, 10, dictionaryEnabled)\n          withParquetTable(path.toUri.toString, \"tbl\") {\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT * FROM tbl\").sort(\"_g1\").groupBy(\"_g1\").agg(sum(\"_8\")))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"AVG decimal supports emit.first\") {\n    withSQLConf(\n      SQLConf.OPTIMIZER_EXCLUDED_RULES.key -> EliminateSorts.ruleName,\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") 
{\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test\")\n          makeParquetFile(path, 10000, 10, dictionaryEnabled)\n          withParquetTable(path.toUri.toString, \"tbl\") {\n            checkSparkAnswerAndOperator(\n              sql(\"SELECT * FROM tbl\").sort(\"_g1\").groupBy(\"_g1\").agg(avg(\"_8\")))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"Fix NPE in partial decimal sum\") {\n    val table = \"tbl\"\n    withTable(table) {\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\",\n        CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n        withTable(table) {\n          sql(s\"CREATE TABLE $table(col DECIMAL(5, 2)) USING PARQUET\")\n          sql(s\"INSERT INTO TABLE $table VALUES (CAST(12345.01 AS DECIMAL(5, 2)))\")\n          val df = sql(s\"SELECT SUM(col + 100000.01) FROM $table\")\n          checkAnswer(df, Row(null))\n        }\n      }\n    }\n  }\n\n  test(\"fix: Decimal Average should not enable native final aggregation\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"test\")\n          makeParquetFile(path, 1000, 10, dictionaryEnabled)\n          withParquetTable(path.toUri.toString, \"tbl\") {\n            checkSparkAnswer(\"SELECT _g1, AVG(_7) FROM tbl GROUP BY _g1\")\n            checkSparkAnswer(\"SELECT _g1, AVG(_8) FROM tbl GROUP BY _g1\")\n            checkSparkAnswer(\"SELECT _g1, AVG(_9) FROM tbl GROUP BY _g1\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"trivial case\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable((0 until 5).map(i => (i, i)), \"tbl\", dictionaryEnabled) {\n        val df1 = sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(df1, Row(0, 0) :: Row(1, 1) :: Row(2, 2) :: Row(3, 3) :: Row(4, 4) :: Nil)\n\n        val df2 = sql(\"SELECT _2, COUNT(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(df2, Row(0, 1) :: Row(1, 1) :: Row(2, 1) :: Row(3, 1) :: Row(4, 1) :: Nil)\n\n        val df3 = sql(\"SELECT COUNT(_1), COUNT(_2) FROM tbl\")\n        checkAnswer(df3, Row(5, 5) :: Nil)\n\n        checkSparkAnswerAndOperator(\"SELECT _2, MIN(_1), MAX(_1) FROM tbl GROUP BY _2\")\n      }\n    }\n  }\n\n  test(\"avg\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        (0 until 10).map(i => ((i + 1) * (i + 1), (i + 1) / 2)),\n        \"tbl\",\n        dictionaryEnabled) {\n\n        checkSparkAnswerAndOperator(\"SELECT _2, AVG(_1) FROM tbl GROUP BY _2\")\n        checkSparkAnswerAndOperator(\"SELECT AVG(_2) FROM tbl\")\n      }\n    }\n  }\n\n  test(\"count, avg with null\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(col1 int, col2 int) using parquet\")\n          sql(s\"insert into $table values(1, 1), (2, 1), (3, 2), (null, 2), (null, 1)\")\n          checkSparkAnswerAndOperator(s\"SELECT COUNT(col1) FROM $table\")\n          checkSparkAnswerAndOperator(s\"SELECT col2, COUNT(col1) FROM $table GROUP BY col2\")\n          checkSparkAnswerAndOperator(s\"SELECT avg(col1) FROM $table\")\n          checkSparkAnswerAndOperator(s\"SELECT 
col2, avg(col1) FROM $table GROUP BY col2\")\n        }\n      }\n    }\n  }\n\n  test(\"SUM/AVG non-decimal overflow\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(Seq((0, 100.toLong), (0, Long.MaxValue)), \"tbl\", dictionaryEnabled) {\n        checkSparkAnswerAndOperator(\"SELECT SUM(_2) FROM tbl GROUP BY _1\")\n        checkSparkAnswerAndOperator(\"SELECT AVG(_2) FROM tbl GROUP BY _1\")\n      }\n    }\n  }\n\n  test(\"simple SUM, COUNT, MIN, MAX, AVG with non-distinct group keys\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable((0 until 5).map(i => (i, i % 2)), \"tbl\", dictionaryEnabled) {\n        val df1 = sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(df1, Row(0, 6) :: Row(1, 4) :: Nil)\n        val df2 = sql(\"SELECT _2, COUNT(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(df2, Row(0, 3) :: Row(1, 2) :: Nil)\n        checkSparkAnswerAndOperator(\"SELECT _2, MIN(_1), MAX(_1) FROM tbl GROUP BY _2\")\n        checkSparkAnswerAndOperator(\"SELECT _2, AVG(_1) FROM tbl GROUP BY _2\")\n      }\n    }\n  }\n\n  test(\"group-by on variable length types\") {\n    Seq(true, false).foreach { nativeShuffleEnabled =>\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withSQLConf(\n          CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> nativeShuffleEnabled.toString,\n          CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n          withParquetTable(\n            (0 until 100).map(i => (i, (i % 10).toString)),\n            \"tbl\",\n            dictionaryEnabled) {\n            val n = if (nativeShuffleEnabled) 2 else 1\n            checkSparkAnswerAndNumOfAggregates(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\", n)\n            checkSparkAnswerAndNumOfAggregates(\"SELECT _2, COUNT(_1) FROM tbl GROUP BY _2\", n)\n            checkSparkAnswerAndNumOfAggregates(\"SELECT _2, MIN(_1) FROM tbl GROUP BY _2\", n)\n            checkSparkAnswerAndNumOfAggregates(\"SELECT _2, MAX(_1) FROM tbl GROUP BY _2\", n)\n            checkSparkAnswerAndNumOfAggregates(\"SELECT _2, AVG(_1) FROM tbl GROUP BY _2\", n)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"simple SUM, COUNT, MIN, MAX, AVG with non-distinct + null group keys\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        (0 until 10).map { i =>\n          (i, if (i % 3 == 0) null.asInstanceOf[Int] else i % 3)\n        },\n        \"tbl\",\n        dictionaryEnabled) {\n        val df1 = sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(df1, Row(null.asInstanceOf[Int], 18) :: Row(1, 12) :: Row(2, 15) :: Nil)\n\n        val df2 = sql(\"SELECT _2, COUNT(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(df2, Row(null.asInstanceOf[Int], 4) :: Row(1, 3) :: Row(2, 3) :: Nil)\n\n        val df3 = sql(\"SELECT _2, MIN(_1), MAX(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(df3, Row(null.asInstanceOf[Int], 0, 9) :: Row(1, 1, 7) :: Row(2, 2, 8) :: Nil)\n        checkSparkAnswerAndOperator(sql(\"SELECT _2, AVG(_1) FROM tbl GROUP BY _2\"))\n      }\n    }\n  }\n\n  test(\"simple SUM, COUNT, MIN, MAX, AVG with null aggregates\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        (0 until 10).map { i =>\n          (\n            if (i % 2 == 0) null.asInstanceOf[Int] else i,\n            if (i % 3 == 0) null.asInstanceOf[Int] else i % 3)\n        },\n        \"tbl\",\n        dictionaryEnabled) {\n        val df1 = sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY 
_2\")\n        checkAnswer(df1, Row(null.asInstanceOf[Int], 12) :: Row(1, 8) :: Row(2, 5) :: Nil)\n\n        val df2 = sql(\"SELECT _2, COUNT(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(df2, Row(null.asInstanceOf[Int], 4) :: Row(1, 3) :: Row(2, 3) :: Nil)\n\n        val df3 = sql(\"SELECT _2, MIN(_1), MAX(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(df3, Row(null.asInstanceOf[Int], 0, 9) :: Row(1, 0, 7) :: Row(2, 0, 5) :: Nil)\n\n        checkSparkAnswerAndOperator(sql(\"SELECT _2, AVG(_1) FROM tbl GROUP BY _2\"))\n      }\n    }\n  }\n\n  test(\"simple SUM, MIN, MAX, AVG with all nulls\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        (0 until 10).map { i =>\n          (null.asInstanceOf[Int], if (i % 3 == 0) null.asInstanceOf[Int] else i % 3)\n        },\n        \"tbl\",\n        dictionaryEnabled) {\n        val df = sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(\n          df,\n          Seq(\n            Row(null.asInstanceOf[Int], null.asInstanceOf[Int]),\n            Row(1, null.asInstanceOf[Int]),\n            Row(2, null.asInstanceOf[Int])))\n\n        val df2 = sql(\"SELECT _2, MIN(_1), MAX(_1) FROM tbl GROUP BY _2\")\n        checkAnswer(\n          df2,\n          Seq(\n            Row(null.asInstanceOf[Int], null.asInstanceOf[Int], null.asInstanceOf[Int]),\n            Row(1, null.asInstanceOf[Int], null.asInstanceOf[Int]),\n            Row(2, null.asInstanceOf[Int], null.asInstanceOf[Int])))\n        checkSparkAnswerAndOperator(sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\"))\n      }\n    }\n  }\n\n  test(\"SUM, COUNT, MIN, MAX, AVG on float & double\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test\")\n        makeParquetFile(path, 1000, 10, dictionaryEnabled)\n        withParquetTable(path.toUri.toString, \"tbl\") {\n          checkSparkAnswerAndOperator(\n            \"SELECT _g5, SUM(_5), COUNT(_5), MIN(_5), MAX(_5), AVG(_5) FROM tbl GROUP BY _g5\")\n          checkSparkAnswerAndOperator(\n            \"SELECT _g6, SUM(_6), COUNT(_6), MIN(_6), MAX(_6), AVG(_6) FROM tbl GROUP BY _g6\")\n        }\n      }\n    }\n  }\n\n  test(\"SUM, MIN, MAX, AVG for NaN, -0.0 and 0.0\") {\n    // NaN should be grouped together, and -0.0 and 0.0 should also be grouped together\n    Seq(true, false).foreach { dictionaryEnabled =>\n      val data: Seq[(Float, Int)] = Seq(\n        (Float.NaN, 1),\n        (-0.0.asInstanceOf[Float], 2),\n        (0.0.asInstanceOf[Float], 3),\n        (Float.NaN, 4))\n      withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n        withParquetTable(data, \"tbl\", dictionaryEnabled) {\n          checkSparkAnswer(\"SELECT SUM(_2), MIN(_2), MAX(_2), _1 FROM tbl GROUP BY _1\")\n          checkSparkAnswer(\"SELECT MIN(_1), MAX(_1), MIN(_2), MAX(_2) FROM tbl\")\n          checkSparkAnswer(\"SELECT AVG(_2), _1 FROM tbl GROUP BY _1\")\n          checkSparkAnswer(\"SELECT AVG(_1), AVG(_2) FROM tbl\")\n        }\n      }\n    }\n  }\n\n  test(\"SUM/MIN/MAX/AVG on decimal\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test\")\n        makeParquetFile(path, 1000, 10, dictionaryEnabled)\n        withParquetTable(path.toUri.toString, \"tbl\") {\n          checkSparkAnswer(\"SELECT _g1, SUM(_7), MIN(_7), MAX(_7), AVG(_7) FROM tbl GROUP BY _g1\")\n          checkSparkAnswer(\"SELECT _g1, SUM(_8), 
MIN(_8), MAX(_8), AVG(_8) FROM tbl GROUP BY _g1\")\n          checkSparkAnswer(\"SELECT _g1, SUM(_9), MIN(_9), MAX(_9), AVG(_9) FROM tbl GROUP BY _g1\")\n        }\n      }\n    }\n  }\n\n  test(\"multiple SUM/MIN/MAX/AVG on decimal and non-decimal\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test\")\n        makeParquetFile(path, 1000, 10, dictionaryEnabled)\n        withParquetTable(path.toUri.toString, \"tbl\") {\n          checkSparkAnswer(\n            \"SELECT _g1, COUNT(_6), COUNT(_7), SUM(_6), SUM(_7), MIN(_6), MIN(_7), MAX(_6), MAX(_7), AVG(_6), AVG(_7) FROM tbl GROUP BY _g1\")\n          checkSparkAnswer(\n            \"SELECT _g1, COUNT(_7), COUNT(_8), SUM(_7), SUM(_8), MIN(_7), MIN(_8), MAX(_7), MAX(_8), AVG(_7), AVG(_8) FROM tbl GROUP BY _g1\")\n          checkSparkAnswer(\n            \"SELECT _g1, COUNT(_8), COUNT(_9), SUM(_8), SUM(_9), MIN(_8), MIN(_9), MAX(_8), MAX(_9), AVG(_8), AVG(_9) FROM tbl GROUP BY _g1\")\n          checkSparkAnswer(\n            \"SELECT _g1, COUNT(_9), COUNT(_1), SUM(_9), SUM(_1), MIN(_9), MIN(_1), MAX(_9), MAX(_1), AVG(_9), AVG(_1) FROM tbl GROUP BY _g1\")\n        }\n      }\n    }\n  }\n\n  test(\"SUM/AVG on decimal with different precisions\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test\")\n        makeParquetFile(path, 1000, 10, dictionaryEnabled)\n        withParquetTable(path.toUri.toString, \"tbl\") {\n          Seq(\"SUM\", \"AVG\").foreach { FN =>\n            checkSparkAnswerAndOperator(\n              s\"SELECT _g1, $FN(_8 + CAST(1 AS DECIMAL(20, 10))) FROM tbl GROUP BY _g1\")\n            checkSparkAnswerAndOperator(\n              s\"SELECT _g1, $FN(_8 - CAST(-1 AS DECIMAL(10, 3))) FROM tbl GROUP BY _g1\")\n            checkSparkAnswerAndOperator(\n              s\"SELECT _g1, $FN(_9 * CAST(3.14 AS DECIMAL(4, 3))) FROM tbl GROUP BY _g1\")\n            checkSparkAnswerAndOperator(\n              s\"SELECT _g1, $FN(_9 / CAST(1.2345 AS DECIMAL(35, 10))) FROM tbl GROUP BY _g1\")\n          }\n        }\n      }\n    }\n  }\n\n  test(\"SUM decimal with DF\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      Seq(true, false).foreach { nativeShuffleEnabled =>\n        withSQLConf(\n          CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> nativeShuffleEnabled.toString,\n          CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n          withTempDir { dir =>\n            val path = new Path(dir.toURI.toString, \"test\")\n            makeParquetFile(path, 1000, 20, dictionaryEnabled)\n            withParquetTable(path.toUri.toString, \"tbl\") {\n              val expectedNumOfCometAggregates = if (nativeShuffleEnabled) 2 else 1\n\n              checkSparkAnswerAndNumOfAggregates(\n                \"SELECT _g2, SUM(_7) FROM tbl GROUP BY _g2\",\n                expectedNumOfCometAggregates)\n              checkSparkAnswerAndNumOfAggregates(\n                \"SELECT _g3, SUM(_8) FROM tbl GROUP BY _g3\",\n                expectedNumOfCometAggregates)\n              checkSparkAnswerAndNumOfAggregates(\n                \"SELECT _g4, SUM(_9) FROM tbl GROUP BY _g4\",\n                expectedNumOfCometAggregates)\n              checkSparkAnswerAndNumOfAggregates(\n                \"SELECT SUM(_7) FROM tbl\",\n                expectedNumOfCometAggregates)\n              checkSparkAnswerAndNumOfAggregates(\n                \"SELECT SUM(_8) FROM tbl\",\n      
          expectedNumOfCometAggregates)\n              checkSparkAnswerAndNumOfAggregates(\n                \"SELECT SUM(_9) FROM tbl\",\n                expectedNumOfCometAggregates)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"COUNT/MIN/MAX on date, timestamp\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test\")\n        makeParquetFile(path, 1000, 10, dictionaryEnabled)\n        withParquetTable(path.toUri.toString, \"tbl\") {\n          checkSparkAnswerAndOperator(\n            \"SELECT _g1, COUNT(_10), MIN(_10), MAX(_10) FROM tbl GROUP BY _g1\")\n          checkSparkAnswerAndOperator(\n            \"SELECT _g1, COUNT(_11), MIN(_11), MAX(_11) FROM tbl GROUP BY _g1\")\n          checkSparkAnswerAndOperator(\n            \"SELECT _g1, COUNT(_12), MIN(_12), MAX(_12) FROM tbl GROUP BY _g1\")\n        }\n      }\n    }\n  }\n\n  test(\"single group-by column + aggregate column, multiple batches, no null\") {\n    val numValues = 10000\n\n    Seq(1, 100, 10000).foreach { numGroups =>\n      Seq(128, 1024, numValues + 1).foreach { batchSize =>\n        Seq(true, false).foreach { dictionaryEnabled =>\n          withSQLConf(\n            SQLConf.COALESCE_PARTITIONS_ENABLED.key -> \"true\",\n            CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n            withParquetTable(\n              (0 until numValues).map(i => (i, Random.nextInt() % numGroups)),\n              \"tbl\",\n              dictionaryEnabled) {\n              withView(\"v\") {\n                sql(\"CREATE TEMP VIEW v AS SELECT _1, _2 FROM tbl ORDER BY _1\")\n                checkSparkAnswerAndFallbackReason(\n                  \"SELECT _2, SUM(_1), SUM(DISTINCT _1), MIN(_1), MAX(_1), COUNT(_1),\" +\n                    \" COUNT(DISTINCT _1), AVG(_1), FIRST(_1), LAST(_1) FROM v GROUP BY _2\",\n                  \"Unsupported aggregation mode PartialMerge\")\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"multiple group-by columns + single aggregate column (first/last), with nulls\") {\n    val numValues = 10000\n\n    Seq(1, 100, numValues).foreach { numGroups =>\n      Seq(128, numValues + 100).foreach { batchSize =>\n        Seq(true, false).foreach { dictionaryEnabled =>\n          withSQLConf(\n            SQLConf.COALESCE_PARTITIONS_ENABLED.key -> \"true\",\n            CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n            withTempPath { dir =>\n              val path = new Path(dir.toURI.toString, \"test.parquet\")\n              makeParquetFile(path, numValues, numGroups, dictionaryEnabled)\n              withParquetTable(path.toUri.toString, \"tbl\") {\n                withView(\"v\") {\n                  sql(\"CREATE TEMP VIEW v AS SELECT _g1, _g2, _3 FROM tbl ORDER BY _3\")\n                  checkSparkAnswerAndOperator(\n                    \"SELECT _g1, _g2, FIRST(_3) FROM v GROUP BY _g1, _g2 ORDER BY _g1, _g2\")\n                  checkSparkAnswerAndOperator(\n                    \"SELECT _g1, _g2, LAST(_3) FROM v GROUP BY _g1, _g2 ORDER BY _g1, _g2\")\n                  checkSparkAnswerAndOperator(\n                    \"SELECT _g1, _g2, FIRST(_3) IGNORE NULLS FROM v GROUP BY _g1, _g2 ORDER BY _g1, _g2\")\n                  checkSparkAnswerAndOperator(\n                    \"SELECT _g1, _g2, LAST(_3) IGNORE NULLS FROM v GROUP BY _g1, _g2 ORDER BY _g1, _g2\")\n                }\n              }\n            }\n          }\n        
}\n      }\n    }\n  }\n\n  test(\"multiple group-by columns + single aggregate column, with nulls\") {\n    val numValues = 10000\n\n    Seq(1, 100, numValues).foreach { numGroups =>\n      Seq(128, numValues + 100).foreach { batchSize =>\n        Seq(true, false).foreach { dictionaryEnabled =>\n          withSQLConf(\n            SQLConf.COALESCE_PARTITIONS_ENABLED.key -> \"true\",\n            CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n            withTempPath { dir =>\n              val path = new Path(dir.toURI.toString, \"test.parquet\")\n              makeParquetFile(path, numValues, numGroups, dictionaryEnabled)\n              withParquetTable(path.toUri.toString, \"tbl\") {\n                checkSparkAnswer(\n                  \"SELECT _g1, _g2, SUM(_3) FROM tbl GROUP BY _g1, _g2 ORDER BY _g1, _g2\")\n                checkSparkAnswer(\n                  \"SELECT _g1, _g2, COUNT(_3) FROM tbl GROUP BY _g1, _g2 ORDER BY _g1, _g2\")\n                checkSparkAnswer(\n                  \"SELECT _g1, _g2, SUM(DISTINCT _3) FROM tbl GROUP BY _g1, _g2 ORDER BY _g1, _g2\")\n                checkSparkAnswer(\n                  \"SELECT _g1, _g2, COUNT(DISTINCT _3) FROM tbl GROUP BY _g1, _g2 ORDER BY _g1, _g2\")\n                checkSparkAnswer(\n                  \"SELECT _g1, _g2, MIN(_3), MAX(_3) FROM tbl GROUP BY _g1, _g2 ORDER BY _g1, _g2\")\n                checkSparkAnswer(\n                  \"SELECT _g1, _g2, AVG(_3) FROM tbl GROUP BY _g1, _g2 ORDER BY _g1, _g2\")\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"string should be supported\") {\n    withTable(\"t\") {\n      sql(\"CREATE TABLE t(v VARCHAR(3), i INT) USING PARQUET\")\n      sql(\"INSERT INTO t VALUES ('c', 1)\")\n      withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n        checkSparkAnswerAndNumOfAggregates(\"SELECT v, sum(i) FROM t GROUP BY v ORDER BY v\", 1)\n      }\n    }\n  }\n\n  test(\"multiple group-by columns + multiple aggregate column (first/last), with nulls\") {\n    val numValues = 10000\n\n    Seq(1, 100, numValues).foreach { numGroups =>\n      Seq(128, numValues + 100).foreach { batchSize =>\n        Seq(true, false).foreach { dictionaryEnabled =>\n          withSQLConf(\n            SQLConf.COALESCE_PARTITIONS_ENABLED.key -> \"true\",\n            CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n            withTempPath { dir =>\n              val path = new Path(dir.toURI.toString, \"test.parquet\")\n              makeParquetFile(path, numValues, numGroups, dictionaryEnabled)\n              withParquetTable(path.toUri.toString, \"tbl\") {\n                withView(\"v\") {\n                  sql(\"CREATE TEMP VIEW v AS SELECT _g3, _g4, _3, _4 FROM tbl ORDER BY _3, _4\")\n                  checkSparkAnswerAndOperator(\n                    \"SELECT _g3, _g4, FIRST(_3), FIRST(_4) FROM v GROUP BY _g3, _g4 ORDER BY _g3, _g4\")\n                  checkSparkAnswerAndOperator(\n                    \"SELECT _g3, _g4, LAST(_3), LAST(_4) FROM v GROUP BY _g3, _g4 ORDER BY _g3, _g4\")\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n\n  }\n\n  test(\"multiple group-by columns + multiple aggregate column, with nulls\") {\n    val numValues = 10000\n\n    Seq(1, 100, numValues).foreach { numGroups =>\n      Seq(128, numValues + 100).foreach { batchSize =>\n        Seq(true, false).foreach { dictionaryEnabled =>\n          withSQLConf(\n            SQLConf.COALESCE_PARTITIONS_ENABLED.key 
-> \"true\",\n            CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n            withTempPath { dir =>\n              val path = new Path(dir.toURI.toString, \"test.parquet\")\n              makeParquetFile(path, numValues, numGroups, dictionaryEnabled)\n              withParquetTable(path.toUri.toString, \"tbl\") {\n                checkSparkAnswer(\n                  \"SELECT _g3, _g4, SUM(_3), SUM(_4) FROM tbl GROUP BY _g3, _g4 ORDER BY _g3, _g4\")\n                checkSparkAnswer(\n                  \"SELECT _g3, _g4, SUM(DISTINCT _3), SUM(DISTINCT _4) FROM tbl GROUP BY _g3, _g4 ORDER BY _g3, _g4\")\n                checkSparkAnswer(\n                  \"SELECT _g3, _g4, COUNT(_3), COUNT(_4) FROM tbl GROUP BY _g3, _g4 ORDER BY _g3, _g4\")\n                checkSparkAnswer(\n                  \"SELECT _g3, _g4, COUNT(DISTINCT _3), COUNT(DISTINCT _4) FROM tbl GROUP BY _g3, _g4 ORDER BY _g3, _g4\")\n                checkSparkAnswer(\n                  \"SELECT _g3, _g4, MIN(_3), MAX(_3), MIN(_4), MAX(_4) FROM tbl GROUP BY _g3, _g4 ORDER BY _g3, _g4\")\n                checkSparkAnswer(\n                  \"SELECT _g3, _g4, AVG(_3), AVG(_4) FROM tbl GROUP BY _g3, _g4 ORDER BY _g3, _g4\")\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"all types first/last, with nulls\") {\n    val numValues = 2048\n\n    Seq(1, 100, numValues).foreach { numGroups =>\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempPath { dir =>\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFile(path, numValues, numGroups, dictionaryEnabled)\n          withParquetTable(path.toUri.toString, \"tbl\") {\n            Seq(128, numValues + 100).foreach { batchSize =>\n              withSQLConf(\n                CometConf.COMET_BATCH_SIZE.key -> batchSize.toString,\n                CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n\n                // Test all combinations of different aggregation & group-by types\n                (1 to 14).foreach { gCol =>\n                  withView(\"v\") {\n                    sql(s\"CREATE TEMP VIEW v AS SELECT _g$gCol, _1, _2, _3, _4 \" +\n                      \"FROM tbl ORDER BY _1, _2, _3, _4\")\n                    checkSparkAnswerAndOperator(\n                      s\"SELECT _g$gCol, FIRST(_1), FIRST(_2), FIRST(_3), \" +\n                        s\"FIRST(_4), LAST(_1), LAST(_2), LAST(_3), LAST(_4) FROM v GROUP BY _g$gCol ORDER BY _g$gCol\")\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"first/last with ignore null\") {\n    val data = Range(0, 8192).flatMap(n => Seq((n, 1), (n, 2))).toDF(\"a\", \"b\")\n    withTempDir { dir =>\n      val filename = s\"${dir.getAbsolutePath}/first_last_ignore_null.parquet\"\n      data.write.parquet(filename)\n      withSQLConf(CometConf.COMET_BATCH_SIZE.key -> \"100\") {\n        spark.read.parquet(filename).createOrReplaceTempView(\"t1\")\n        for (expr <- Seq(\"first\", \"last\")) {\n          // deterministic query that should return one non-null value per group\n          val df = spark.sql(\n            s\"SELECT a, $expr(IF(b==1,null,b)) IGNORE NULLS FROM t1 GROUP BY a ORDER BY a\")\n          checkSparkAnswerAndOperator(df)\n        }\n      }\n    }\n  }\n\n  test(\"all types, with nulls\") {\n    val numValues = 2048\n\n    Seq(1, 100, numValues).foreach { numGroups =>\n      Seq(true, false).foreach { dictionaryEnabled =>\n       
 withTempPath { dir =>\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFile(path, numValues, numGroups, dictionaryEnabled)\n          withParquetTable(path.toUri.toString, \"tbl\") {\n            Seq(128, numValues + 100).foreach { batchSize =>\n              withSQLConf(\n                CometConf.COMET_BATCH_SIZE.key -> batchSize.toString,\n                CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n\n                // Test all combinations of different aggregation & group-by types\n                (1 to 14).foreach { gCol =>\n                  checkSparkAnswer(s\"SELECT _g$gCol, SUM(_1), SUM(_2), COUNT(_3), COUNT(_4), \" +\n                    s\"MIN(_1), MAX(_4), AVG(_2), AVG(_4) FROM tbl GROUP BY _g$gCol ORDER BY _g$gCol\")\n                  checkSparkAnswer(\n                    s\"SELECT _g$gCol, SUM(DISTINCT _3) FROM tbl GROUP BY _g$gCol ORDER BY _g$gCol\")\n                  checkSparkAnswer(\n                    s\"SELECT _g$gCol, COUNT(DISTINCT _1) FROM tbl GROUP BY _g$gCol ORDER BY _g$gCol\")\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"test final count\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n      Seq(false, true).foreach { dictionaryEnabled =>\n        withParquetTable((0 until 5).map(i => (i, i % 2)), \"tbl\", dictionaryEnabled) {\n          checkSparkAnswerAndNumOfAggregates(\"SELECT _2, COUNT(_1) FROM tbl GROUP BY _2\", 2)\n          checkSparkAnswerAndNumOfAggregates(\"select count(_1) from tbl\", 2)\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT _2, COUNT(_1), SUM(_1) FROM tbl GROUP BY _2\",\n            2)\n          checkSparkAnswerAndNumOfAggregates(\"SELECT COUNT(_1), COUNT(_2) FROM tbl\", 2)\n        }\n      }\n    }\n  }\n\n  test(\"test final min/max\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withParquetTable((0 until 5).map(i => (i, i % 2)), \"tbl\", dictionaryEnabled) {\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT _2, MIN(_1), MAX(_1), COUNT(_1) FROM tbl GROUP BY _2\",\n            2)\n          checkSparkAnswerAndNumOfAggregates(\"SELECT MIN(_1), MAX(_1), COUNT(_1) FROM tbl\", 2)\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT _2, MIN(_1), MAX(_1), COUNT(_1), SUM(_1) FROM tbl GROUP BY _2\",\n            2)\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT MIN(_1), MIN(_2), MAX(_1), MAX(_2), COUNT(_1), COUNT(_2) FROM tbl\",\n            2)\n        }\n      }\n    }\n  }\n\n  test(\"test final min/max/count with result expressions\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withParquetTable((0 until 5).map(i => (i, i % 2)), \"tbl\", dictionaryEnabled) {\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT _2, MIN(_1) + 2, COUNT(_1) FROM tbl GROUP BY _2\",\n            2)\n          checkSparkAnswerAndNumOfAggregates(\"SELECT _2, COUNT(_1) + 2 FROM tbl GROUP BY _2\", 2)\n          checkSparkAnswerAndNumOfAggregates(\"SELECT _2 + 2, COUNT(_1) FROM tbl GROUP BY _2\", 2)\n          checkSparkAnswerAndNumOfAggregates(\n           
 \"SELECT _2, MIN(_1) + MAX(_1) FROM tbl GROUP BY _2\",\n            2)\n          checkSparkAnswerAndNumOfAggregates(\"SELECT _2, MIN(_1) + _2 FROM tbl GROUP BY _2\", 2)\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT _2 + 2, MIN(_1), MAX(_1), COUNT(_1) FROM tbl GROUP BY _2\",\n            2)\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT _2, MIN(_1), MAX(_1) + 2, COUNT(_1) FROM tbl GROUP BY _2\",\n            2)\n          checkSparkAnswerAndNumOfAggregates(\"SELECT _2, SUM(_1) + 2 FROM tbl GROUP BY _2\", 2)\n          checkSparkAnswerAndNumOfAggregates(\"SELECT _2 + 2, SUM(_1) FROM tbl GROUP BY _2\", 2)\n          checkSparkAnswerAndNumOfAggregates(\"SELECT _2, SUM(_1 + 1) FROM tbl GROUP BY _2\", 2)\n\n          // result expression is unsupported by Comet, so only partial aggregation should be used\n          val df = sql(\n            \"SELECT _2, MIN(_1) + java_method('java.lang.Math', 'random') \" +\n              \"FROM tbl GROUP BY _2\")\n          assert(getNumCometHashAggregate(df) == 1)\n        }\n      }\n    }\n  }\n\n  test(\"test final sum\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n      Seq(false, true).foreach { dictionaryEnabled =>\n        withParquetTable((0L until 5L).map(i => (i, i % 2)), \"tbl\", dictionaryEnabled) {\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT _2, SUM(_1), MIN(_1) FROM tbl GROUP BY _2\",\n            2)\n          checkSparkAnswerAndNumOfAggregates(\"SELECT SUM(_1) FROM tbl\", 2)\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT _2, MIN(_1), MAX(_1), COUNT(_1), SUM(_1), AVG(_1) FROM tbl GROUP BY _2\",\n            2)\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT MIN(_1), MIN(_2), MAX(_1), MAX(_2), COUNT(_1), COUNT(_2), SUM(_1), SUM(_2) FROM tbl\",\n            2)\n        }\n      }\n    }\n  }\n\n  test(\"avg/sum overflow on decimal(38, _)\") {\n    val table = \"overflow_decimal_38\"\n    withTable(table) {\n      sql(s\"create table $table(a decimal(38, 2), b INT) using parquet\")\n      sql(s\"insert into $table values(42.00, 1), (999999999999999999999999999999999999.99, 1)\")\n      checkSparkAnswerAndNumOfAggregates(s\"select sum(a) from $table\", 2)\n      sql(s\"insert into $table values(42.00, 2), (99999999999999999999999999999999.99, 2)\")\n      sql(s\"insert into $table values(999999999999999999999999999999999999.99, 3)\")\n      sql(s\"insert into $table values(99999999999999999999999999999999.99, 4)\")\n      checkSparkAnswerAndNumOfAggregates(\n        s\"select avg(a), sum(a) from $table group by b order by b\",\n        2)\n    }\n  }\n\n  test(\"test final avg\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withParquetTable(\n          (0 until 5).map(i => (i.toDouble, i.toDouble % 2)),\n          \"tbl\",\n          dictionaryEnabled) {\n          checkSparkAnswerAndNumOfAggregates(\"SELECT _2 , AVG(_1) FROM tbl GROUP BY _2\", 2)\n          checkSparkAnswerAndNumOfAggregates(\"SELECT AVG(_1) FROM tbl\", 2)\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT _2, MIN(_1), MAX(_1), COUNT(_1), SUM(_1), AVG(_1) FROM tbl GROUP BY _2\",\n            2)\n          checkSparkAnswerAndNumOfAggregates(\n            \"SELECT MIN(_1), MIN(_2), MAX(_1), MAX(_2), 
COUNT(_1), COUNT(_2), SUM(_1), SUM(_2), AVG(_1), AVG(_2) FROM tbl\",\n            2)\n        }\n      }\n    }\n  }\n\n  test(\"final decimal avg\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withSQLConf(\"parquet.enable.dictionary\" -> dictionaryEnabled.toString) {\n          val table = s\"final_decimal_avg_$dictionaryEnabled\"\n          withTable(table) {\n            sql(s\"create table $table(a decimal(38, 37), b INT) using parquet\")\n            sql(s\"insert into $table values(-0.0000000000000000000000000000000000002, 1)\")\n            sql(s\"insert into $table values(-0.0000000000000000000000000000000000002, 1)\")\n            sql(s\"insert into $table values(-0.0000000000000000000000000000000000004, 2)\")\n            sql(s\"insert into $table values(-0.0000000000000000000000000000000000004, 2)\")\n            sql(s\"insert into $table values(-0.00000000000000000000000000000000000002, 3)\")\n            sql(s\"insert into $table values(-0.00000000000000000000000000000000000002, 3)\")\n            sql(s\"insert into $table values(-0.00000000000000000000000000000000000004, 4)\")\n            sql(s\"insert into $table values(-0.00000000000000000000000000000000000004, 4)\")\n            sql(s\"insert into $table values(0.13344406545919155429936259114971302408, 5)\")\n            sql(s\"insert into $table values(0.13344406545919155429936259114971302408, 5)\")\n\n            checkSparkAnswerAndNumOfAggregates(s\"SELECT b , AVG(a) FROM $table GROUP BY b\", 2)\n            checkSparkAnswerAndNumOfAggregates(s\"SELECT AVG(a) FROM $table\", 2)\n            checkSparkAnswerAndNumOfAggregates(\n              s\"SELECT b, MIN(a), MAX(a), COUNT(a), SUM(a), AVG(a) FROM $table GROUP BY b\",\n              2)\n            checkSparkAnswerAndNumOfAggregates(\n              s\"SELECT MIN(a), MAX(a), COUNT(a), SUM(a), AVG(a) FROM $table\",\n              2)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"test partial avg\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withParquetTable(\n        (0 until 5).map(i => (i.toDouble, i.toDouble % 2)),\n        \"tbl\",\n        dictionaryEnabled) {\n        withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n          checkSparkAnswerAndNumOfAggregates(\"SELECT _2 , AVG(_1) FROM tbl GROUP BY _2\", 1)\n        }\n      }\n    }\n  }\n\n  test(\"avg null handling\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n      val table = \"avg_null_handling\"\n      withTable(table) {\n        sql(s\"create table $table(a double, b double) using parquet\")\n        sql(s\"insert into $table values(1, 1.0)\")\n        sql(s\"insert into $table values(null, null)\")\n        sql(s\"insert into $table values(1, 2.0)\")\n        sql(s\"insert into $table values(null, null)\")\n        sql(s\"insert into $table values(2, null)\")\n        sql(s\"insert into $table values(2, null)\")\n\n        val query = sql(s\"select a, AVG(b) from $table GROUP BY a\")\n        checkSparkAnswerAndOperator(query)\n      }\n    }\n  }\n\n  test(\"Decimal Avg with DF\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      Seq(true, false).foreach { nativeShuffleEnabled =>\n        withSQLConf(\n          CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> nativeShuffleEnabled.toString,\n        
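  // native shuffle mode, plus the allow-incompat opt-in for Comet's Cast expression\n        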
  CometConf.COMET_SHUFFLE_MODE.key -> \"native\",\n          CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\") {\n          withTempDir { dir =>\n            val path = new Path(dir.toURI.toString, \"test\")\n            makeParquetFile(path, 1000, 20, dictionaryEnabled)\n            withParquetTable(path.toUri.toString, \"tbl\") {\n              val expectedNumOfCometAggregates = if (nativeShuffleEnabled) 2 else 1\n\n              checkSparkAnswerAndNumOfAggregates(\n                \"SELECT _g2, AVG(_7) FROM tbl GROUP BY _g2\",\n                expectedNumOfCometAggregates)\n\n              checkSparkAnswerWithTolerance(\"SELECT _g3, AVG(_8) FROM tbl GROUP BY _g3\")\n              assert(getNumCometHashAggregate(\n                sql(\"SELECT _g3, AVG(_8) FROM tbl GROUP BY _g3\")) == expectedNumOfCometAggregates)\n\n              checkSparkAnswerAndNumOfAggregates(\n                \"SELECT _g4, AVG(_9) FROM tbl GROUP BY _g4\",\n                expectedNumOfCometAggregates)\n\n              checkSparkAnswerAndNumOfAggregates(\n                \"SELECT AVG(_7) FROM tbl\",\n                expectedNumOfCometAggregates)\n\n              checkSparkAnswerWithTolerance(\"SELECT AVG(_8) FROM tbl\")\n              assert(getNumCometHashAggregate(\n                sql(\"SELECT AVG(_8) FROM tbl\")) == expectedNumOfCometAggregates)\n\n              checkSparkAnswerAndNumOfAggregates(\n                \"SELECT AVG(_9) FROM tbl\",\n                expectedNumOfCometAggregates)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  // TODO enable once https://github.com/apache/datafusion-comet/issues/1267 is implemented\n  ignore(\"distinct\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n      Seq(\"native\", \"jvm\").foreach { cometShuffleMode =>\n        withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> cometShuffleMode) {\n          Seq(true, false).foreach { dictionary =>\n            withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n              val cometColumnShuffleEnabled = cometShuffleMode == \"jvm\"\n              val table = \"test\"\n              withTable(table) {\n                sql(s\"create table $table(col1 int, col2 int, col3 int) using parquet\")\n                sql(\n                  s\"insert into $table values(1, 1, 1), (1, 1, 1), (1, 3, 1), (1, 4, 2), (5, 3, 2)\")\n\n                var expectedNumOfCometAggregates = 2\n\n                checkSparkAnswerAndNumOfAggregates(\n                  s\"SELECT DISTINCT(col2) FROM $table\",\n                  expectedNumOfCometAggregates)\n\n                expectedNumOfCometAggregates = 4\n\n                checkSparkAnswerAndNumOfAggregates(\n                  s\"SELECT COUNT(distinct col2) FROM $table\",\n                  expectedNumOfCometAggregates)\n\n                checkSparkAnswerAndNumOfAggregates(\n                  s\"SELECT COUNT(distinct col2), col1 FROM $table group by col1\",\n                  expectedNumOfCometAggregates)\n\n                checkSparkAnswerAndNumOfAggregates(\n                  s\"SELECT SUM(distinct col2) FROM $table\",\n                  expectedNumOfCometAggregates)\n\n                checkSparkAnswerAndNumOfAggregates(\n                  s\"SELECT SUM(distinct col2), col1 FROM $table group by col1\",\n                  expectedNumOfCometAggregates)\n\n                checkSparkAnswerAndNumOfAggregates(\n                  \"SELECT COUNT(distinct col2), SUM(distinct col2), col1, COUNT(distinct col2),\" +\n                 
   s\" SUM(distinct col2) FROM $table group by col1\",\n                  expectedNumOfCometAggregates)\n\n                expectedNumOfCometAggregates = if (cometColumnShuffleEnabled) 2 else 1\n                checkSparkAnswerAndNumOfAggregates(\n                  \"SELECT COUNT(col2), MIN(col2), COUNT(DISTINCT col2), SUM(col2),\" +\n                    s\" SUM(DISTINCT col2), COUNT(DISTINCT col2), col1 FROM $table group by col1\",\n                  expectedNumOfCometAggregates)\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"first/last\") {\n    withSQLConf(\n      SQLConf.COALESCE_PARTITIONS_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      Seq(true, false).foreach { dictionary =>\n        withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n          val table = \"test\"\n          withTable(table) {\n            sql(s\"create table $table(col1 int, col2 int, col3 int) using parquet\")\n            sql(\n              s\"insert into $table values(4, 1, 1), (4, 1, 1), (3, 3, 1),\" +\n                \" (2, 4, 2), (1, 3, 2), (null, 1, 1)\")\n            withView(\"t\") {\n              sql(\"CREATE VIEW t AS SELECT col1, col3 FROM test ORDER BY col1\")\n\n              checkSparkAnswerAndOperator(\"SELECT FIRST(col1), LAST(col1) FROM t\")\n\n              checkSparkAnswerAndOperator(\n                \"SELECT FIRST(col1), LAST(col1), MIN(col1), COUNT(col1) FROM t\")\n\n              checkSparkAnswerAndOperator(\n                \"SELECT FIRST(col1), LAST(col1), col3 FROM t GROUP BY col3\")\n\n              checkSparkAnswerAndOperator(\n                \"SELECT FIRST(col1), LAST(col1), MIN(col1), COUNT(col1), col3 FROM t GROUP BY col3\")\n\n              checkSparkAnswerAndOperator(\"SELECT FIRST(col1, true), LAST(col1) FROM t\")\n\n              checkSparkAnswerAndOperator(\n                \"SELECT FIRST(col1), LAST(col1, true), col3 FROM t GROUP BY col3\")\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"test bool_and/bool_or\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n      Seq(\"native\", \"jvm\").foreach { cometShuffleMode =>\n        withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> cometShuffleMode) {\n          Seq(true, false).foreach { dictionary =>\n            withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n              val table = \"test\"\n              withTable(table) {\n                sql(s\"create table $table(a boolean, b int) using parquet\")\n                sql(s\"insert into $table values(true, 1)\")\n                sql(s\"insert into $table values(false, 2)\")\n                sql(s\"insert into $table values(true, 3)\")\n                sql(s\"insert into $table values(true, 3)\")\n                // Spark maps BOOL_AND to MIN and BOOL_OR to MAX\n                checkSparkAnswerAndNumOfAggregates(\n                  s\"SELECT MIN(a), MAX(a), BOOL_AND(a), BOOL_OR(a) FROM $table\",\n                  2)\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"bitwise aggregate\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      Seq(true, false).foreach { dictionary =>\n        withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n          val table = \"test\"\n          withTable(table) {\n       
     sql(s\"create table $table(col1 long, col2 int, col3 short, col4 byte) using parquet\")\n            sql(\n              s\"insert into $table values(4, 1, 1, 3), (4, 1, 1, 3), (3, 3, 1, 4),\" +\n                \" (2, 4, 2, 5), (1, 3, 2, 6), (null, 1, 1, 7)\")\n            val expectedNumOfCometAggregates = 2\n            checkSparkAnswerAndNumOfAggregates(\n              \"SELECT BIT_AND(col1), BIT_OR(col1), BIT_XOR(col1),\" +\n                \" BIT_AND(col2), BIT_OR(col2), BIT_XOR(col2),\" +\n                \" BIT_AND(col3), BIT_OR(col3), BIT_XOR(col3),\" +\n                \" BIT_AND(col4), BIT_OR(col4), BIT_XOR(col4) FROM test\",\n              expectedNumOfCometAggregates)\n\n            // Make sure the combination of BITWISE aggregates and other aggregates work OK\n            checkSparkAnswerAndNumOfAggregates(\n              \"SELECT BIT_AND(col1), BIT_OR(col1), BIT_XOR(col1),\" +\n                \" BIT_AND(col2), BIT_OR(col2), BIT_XOR(col2),\" +\n                \" BIT_AND(col3), BIT_OR(col3), BIT_XOR(col3),\" +\n                \" BIT_AND(col4), BIT_OR(col4), BIT_XOR(col4), MIN(col1), COUNT(col1) FROM test\",\n              expectedNumOfCometAggregates)\n\n            checkSparkAnswerAndNumOfAggregates(\n              \"SELECT BIT_AND(col1), BIT_OR(col1), BIT_XOR(col1),\" +\n                \" BIT_AND(col2), BIT_OR(col2), BIT_XOR(col2),\" +\n                \" BIT_AND(col3), BIT_OR(col3), BIT_XOR(col3),\" +\n                \" BIT_AND(col4), BIT_OR(col4), BIT_XOR(col4), col3 FROM test GROUP BY col3\",\n              expectedNumOfCometAggregates)\n\n            // Make sure the combination of BITWISE aggregates and other aggregates work OK\n            // with group by\n            checkSparkAnswerAndNumOfAggregates(\n              \"SELECT BIT_AND(col1), BIT_OR(col1), BIT_XOR(col1),\" +\n                \" BIT_AND(col2), BIT_OR(col2), BIT_XOR(col2),\" +\n                \" BIT_AND(col3), BIT_OR(col3), BIT_XOR(col3),\" +\n                \" BIT_AND(col4), BIT_OR(col4), BIT_XOR(col4),\" +\n                \" MIN(col1), COUNT(col1), col3 FROM test GROUP BY col3\",\n              expectedNumOfCometAggregates)\n          }\n        }\n      }\n    }\n  }\n\n  def setupAndTestAggregates(\n      table: String,\n      data: Seq[(Any, Any, Any)],\n      dataTypes: (String, String, String),\n      aggregates: String): Unit = {\n    val (type1, type2, type3) = dataTypes\n    withTable(table) {\n      sql(s\"create table $table(col1 $type1, col2 $type2, col3 $type3) using parquet\")\n      val values = data\n        .map { case (c1, c2, c3) =>\n          s\"($c1, $c2, $c3)\"\n        }\n        .mkString(\", \")\n      sql(s\"insert into $table values $values\")\n\n      val expectedNumOfCometAggregates = 2\n\n      checkSparkAnswerWithTolAndNumOfAggregates(\n        s\"SELECT $aggregates FROM $table\",\n        expectedNumOfCometAggregates)\n\n      checkSparkAnswerWithTolAndNumOfAggregates(\n        s\"SELECT $aggregates FROM $table GROUP BY col3\",\n        expectedNumOfCometAggregates)\n    }\n  }\n\n  test(\"covariance & correlation\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n      Seq(\"jvm\", \"native\").foreach { cometShuffleMode =>\n        withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> cometShuffleMode) {\n          Seq(true, false).foreach { dictionary =>\n            withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n              Seq(true, false).foreach { nullOnDivideByZero =>\n                withSQLConf(\n        
          \"spark.sql.legacy.statisticalAggregate\" -> nullOnDivideByZero.toString) {\n                  val table = \"test\"\n                  val aggregates =\n                    \"covar_samp(col1, col2), covar_pop(col1, col2), corr(col1, col2)\"\n                  setupAndTestAggregates(\n                    table,\n                    Seq((1, 4, 1), (2, 5, 1), (3, 6, 2)),\n                    (\"double\", \"double\", \"double\"),\n                    aggregates)\n                  setupAndTestAggregates(\n                    table,\n                    Seq((1, 4, 3), (2, -5, 3), (3, 6, 1)),\n                    (\"double\", \"double\", \"double\"),\n                    aggregates)\n                  setupAndTestAggregates(\n                    table,\n                    Seq((1.1, 4.1, 2.3), (2, 5, 1.5), (3, 6, 2.3)),\n                    (\"double\", \"double\", \"double\"),\n                    aggregates)\n                  setupAndTestAggregates(\n                    table,\n                    Seq(\n                      (1, 4, 1),\n                      (2, 5, 2),\n                      (3, 6, 3),\n                      (1.1, 4.4, 1),\n                      (2.2, 5.5, 2),\n                      (3.3, 6.6, 3)),\n                    (\"double\", \"double\", \"double\"),\n                    aggregates)\n                  setupAndTestAggregates(\n                    table,\n                    Seq((1, 4, 1), (2, 5, 2), (3, 6, 3)),\n                    (\"int\", \"int\", \"int\"),\n                    aggregates)\n                  setupAndTestAggregates(\n                    table,\n                    Seq((1, 4, 2), (null, null, 2), (3, 6, 1), (3, 3, 1)),\n                    (\"int\", \"int\", \"int\"),\n                    aggregates)\n                  setupAndTestAggregates(\n                    table,\n                    Seq((1, 4, 1), (null, 5, 1), (2, 5, 2), (9, null, 2), (3, 6, 2)),\n                    (\"int\", \"int\", \"int\"),\n                    aggregates)\n                  setupAndTestAggregates(\n                    table,\n                    Seq((null, null, 1), (1, 2, 1), (null, null, 2)),\n                    (\"int\", \"int\", \"int\"),\n                    aggregates)\n                  setupAndTestAggregates(\n                    table,\n                    Seq((null, null, 1), (null, null, 1), (null, null, 2)),\n                    (\"int\", \"int\", \"int\"),\n                    aggregates)\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"corr - nan/null\") {\n    Seq(true, false).foreach { nullOnDivideByZero =>\n      withSQLConf(\"spark.sql.legacy.statisticalAggregate\" -> nullOnDivideByZero.toString) {\n        withTable(\"t\") {\n          sql(\"\"\"create table t using parquet as\n              select cast(null as float) f1, CAST('NaN' AS float) f2, cast(null as double) d1, CAST('NaN' AS double) d2\n              from range(1)\n            \"\"\")\n\n          checkSparkAnswerAndOperator(\"\"\"\n              |select\n              | corr(f1, f2) c1,\n              | corr(f1, f1) c2,\n              | corr(f2, f1) c3,\n              | corr(f2, f2) c4,\n              | corr(d1, d2) c5,\n              | corr(d1, d1) c6,\n              | corr(d2, d1) c7,\n              | corr(d2, d2) c8\n              | FROM t\"\"\".stripMargin)\n        }\n      }\n    }\n  }\n\n  test(\"var_pop and var_samp\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n      Seq(\"native\", 
\"jvm\").foreach { cometShuffleMode =>\n        withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> cometShuffleMode) {\n          Seq(true, false).foreach { dictionary =>\n            withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n              Seq(true, false).foreach { nullOnDivideByZero =>\n                withSQLConf(\n                  \"spark.sql.legacy.statisticalAggregate\" -> nullOnDivideByZero.toString) {\n                  val table = \"test\"\n                  withTable(table) {\n                    sql(s\"create table $table(col1 int, col2 int, col3 int, col4 float, col5 double, col6 int) using parquet\")\n                    sql(s\"insert into $table values(1, null, null, 1.1, 2.2, 1),\" +\n                      \" (2, null, null, 3.4, 5.6, 1), (3, null, 4, 7.9, 2.4, 2)\")\n                    val expectedNumOfCometAggregates = 2\n                    checkSparkAnswerWithTolAndNumOfAggregates(\n                      \"SELECT var_samp(col1), var_samp(col2), var_samp(col3), var_samp(col4), var_samp(col5) FROM test\",\n                      expectedNumOfCometAggregates)\n                    checkSparkAnswerWithTolAndNumOfAggregates(\n                      \"SELECT var_pop(col1), var_pop(col2), var_pop(col3), var_pop(col4), var_samp(col5) FROM test\",\n                      expectedNumOfCometAggregates)\n                    checkSparkAnswerAndNumOfAggregates(\n                      \"SELECT var_samp(col1), var_samp(col2), var_samp(col3), var_samp(col4), var_samp(col5)\" +\n                        \" FROM test GROUP BY col6\",\n                      expectedNumOfCometAggregates)\n                    checkSparkAnswerAndNumOfAggregates(\n                      \"SELECT var_pop(col1), var_pop(col2), var_pop(col3), var_pop(col4), var_samp(col5)\" +\n                        \" FROM test GROUP BY col6\",\n                      expectedNumOfCometAggregates)\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"stddev_pop and stddev_samp\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n      Seq(\"native\", \"jvm\").foreach { cometShuffleMode =>\n        withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> cometShuffleMode) {\n          Seq(true, false).foreach { dictionary =>\n            withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n              Seq(true, false).foreach { nullOnDivideByZero =>\n                withSQLConf(\n                  \"spark.sql.legacy.statisticalAggregate\" -> nullOnDivideByZero.toString) {\n                  val table = \"test\"\n                  withTable(table) {\n                    sql(s\"create table $table(col1 int, col2 int, col3 int, col4 float, \" +\n                      \"col5 double, col6 int) using parquet\")\n                    sql(s\"insert into $table values(1, null, null, 1.1, 2.2, 1), \" +\n                      \"(2, null, null, 3.4, 5.6, 1), (3, null, 4, 7.9, 2.4, 2)\")\n                    val expectedNumOfCometAggregates = 2\n                    checkSparkAnswerWithTolAndNumOfAggregates(\n                      \"SELECT stddev_samp(col1), stddev_samp(col2), stddev_samp(col3), \" +\n                        \"stddev_samp(col4), stddev_samp(col5) FROM test\",\n                      expectedNumOfCometAggregates)\n                    checkSparkAnswerWithTolAndNumOfAggregates(\n                      \"SELECT stddev_pop(col1), stddev_pop(col2), stddev_pop(col3), \" +\n                        
\"stddev_pop(col4), stddev_pop(col5) FROM test\",\n                      expectedNumOfCometAggregates)\n                    checkSparkAnswerAndNumOfAggregates(\n                      \"SELECT stddev_samp(col1), stddev_samp(col2), stddev_samp(col3), \" +\n                        \"stddev_samp(col4), stddev_samp(col5) FROM test GROUP BY col6\",\n                      expectedNumOfCometAggregates)\n                    checkSparkAnswerWithTolAndNumOfAggregates(\n                      \"SELECT stddev_pop(col1), stddev_pop(col2), stddev_pop(col3), \" +\n                        \"stddev_pop(col4), stddev_pop(col5) FROM test GROUP BY col6\",\n                      expectedNumOfCometAggregates)\n                  }\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"AVG and try_avg - basic functionality\") {\n    withParquetTable(\n      Seq(\n        (10L, 1),\n        (20L, 1),\n        (null.asInstanceOf[Long], 1),\n        (100L, 2),\n        (200L, 2),\n        (null.asInstanceOf[Long], 3)),\n      \"tbl\") {\n\n      Seq(true, false).foreach({ ansiMode =>\n        // without GROUP BY\n        withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiMode.toString) {\n          val res = sql(\"SELECT avg(_1) FROM tbl\")\n          checkSparkAnswerAndOperator(res)\n        }\n\n        // with GROUP BY\n        withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiMode.toString) {\n          val res = sql(\"SELECT _2, avg(_1) FROM tbl GROUP BY _2\")\n          checkSparkAnswerAndOperator(res)\n        }\n      })\n\n      // try_avg without GROUP BY\n      val resTry = sql(\"SELECT try_avg(_1) FROM tbl\")\n      checkSparkAnswerAndOperator(resTry)\n\n      // try_avg with GROUP BY\n      val resTryGroup = sql(\"SELECT _2, try_avg(_1) FROM tbl GROUP BY _2\")\n      checkSparkAnswerAndOperator(resTryGroup)\n\n    }\n  }\n\n  test(\"AVG and try_avg - special numbers\") {\n\n    val negativeNumbers: Seq[(Long, Int)] = Seq(\n      (-1L, 1),\n      (-123L, 1),\n      (-456L, 1),\n      (Long.MinValue, 1),\n      (Long.MinValue, 1),\n      (Long.MinValue, 2),\n      (Long.MinValue, 2),\n      (null.asInstanceOf[Long], 3))\n\n    val zeroSeq: Seq[(Long, Int)] =\n      Seq((0L, 1), (-0L, 1), (+0L, 2), (+0L, 2), (null.asInstanceOf[Long], 3))\n\n    val highValNumbers: Seq[(Long, Int)] = Seq(\n      (Long.MaxValue, 1),\n      (Long.MaxValue, 1),\n      (Long.MaxValue, 2),\n      (Long.MaxValue, 2),\n      (null.asInstanceOf[Long], 3))\n\n    val inputs = Seq(negativeNumbers, highValNumbers, zeroSeq)\n    inputs.foreach(inputSeq => {\n      withParquetTable(inputSeq, \"tbl\") {\n        Seq(true, false).foreach({ ansiMode =>\n          // without GROUP BY\n          withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiMode.toString) {\n            checkSparkAnswerAndOperator(\"SELECT avg(_1) FROM tbl\")\n          }\n\n          // with GROUP BY\n          withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiMode.toString) {\n            checkSparkAnswerAndOperator(\"SELECT _2, avg(_1) FROM tbl GROUP BY _2\")\n          }\n        })\n\n        // try_avg without GROUP BY\n        checkSparkAnswerAndOperator(\"SELECT try_avg(_1) FROM tbl\")\n\n        // try_avg with GROUP BY\n        checkSparkAnswerAndOperator(\"SELECT _2, try_avg(_1) FROM tbl GROUP BY _2\")\n\n      }\n    })\n  }\n\n  test(\"ANSI support for sum - null test\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(\n          
Seq((null.asInstanceOf[java.lang.Long], \"a\"), (null.asInstanceOf[java.lang.Long], \"b\")),\n          \"null_tbl\") {\n          val res = sql(\"SELECT sum(_1) FROM null_tbl\")\n          checkSparkAnswerAndOperator(res)\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for decimal sum - null test\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(\n          Seq(\n            (null.asInstanceOf[java.math.BigDecimal], \"a\"),\n            (null.asInstanceOf[java.math.BigDecimal], \"b\")),\n          \"null_tbl\") {\n          val res = sql(\"SELECT sum(_1) FROM null_tbl\")\n          checkSparkAnswerAndOperator(res)\n          assert(res.collect() === Array(Row(null)))\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for try_sum - null test\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(\n          Seq((null.asInstanceOf[java.lang.Long], \"a\"), (null.asInstanceOf[java.lang.Long], \"b\")),\n          \"null_tbl\") {\n          val res = sql(\"SELECT try_sum(_1) FROM null_tbl\")\n          checkSparkAnswerAndOperator(res)\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for try_sum decimal - null test\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(\n          Seq(\n            (null.asInstanceOf[java.math.BigDecimal], \"a\"),\n            (null.asInstanceOf[java.math.BigDecimal], \"b\")),\n          \"null_tbl\") {\n          val res = sql(\"SELECT try_sum(_1) FROM null_tbl\")\n          checkSparkAnswerAndOperator(res)\n          assert(res.collect() === Array(Row(null)))\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for sum - null test (group by)\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(\n          Seq(\n            (null.asInstanceOf[java.lang.Long], \"a\"),\n            (null.asInstanceOf[java.lang.Long], \"a\"),\n            (null.asInstanceOf[java.lang.Long], \"b\"),\n            (null.asInstanceOf[java.lang.Long], \"b\"),\n            (null.asInstanceOf[java.lang.Long], \"b\")),\n          \"tbl\") {\n          val res = sql(\"SELECT _2, sum(_1) FROM tbl group by 1\")\n          checkSparkAnswerAndOperator(res)\n          assert(res.orderBy(col(\"_2\")).collect() === Array(Row(\"a\", null), Row(\"b\", null)))\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for decimal sum - null test (group by)\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(\n          Seq(\n            (null.asInstanceOf[java.math.BigDecimal], \"a\"),\n            (null.asInstanceOf[java.math.BigDecimal], \"a\"),\n            (null.asInstanceOf[java.math.BigDecimal], \"b\"),\n            (null.asInstanceOf[java.math.BigDecimal], \"b\"),\n            (null.asInstanceOf[java.math.BigDecimal], \"b\")),\n          \"tbl\") {\n          val res = sql(\"SELECT _2, sum(_1) FROM tbl group by 1\")\n          checkSparkAnswerAndOperator(res)\n          assert(res.orderBy(col(\"_2\")).collect() === Array(Row(\"a\", null), Row(\"b\", null)))\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for try_sum - null test (group by)\") {\n    Seq(true, false).foreach { ansiEnabled 
=>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(\n          Seq(\n            (null.asInstanceOf[java.lang.Long], \"a\"),\n            (null.asInstanceOf[java.lang.Long], \"a\"),\n            (null.asInstanceOf[java.lang.Long], \"b\"),\n            (null.asInstanceOf[java.lang.Long], \"b\"),\n            (null.asInstanceOf[java.lang.Long], \"b\")),\n          \"tbl\") {\n          val res = sql(\"SELECT _2, try_sum(_1) FROM tbl group by 1\")\n          checkSparkAnswerAndOperator(res)\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for try_sum decimal - null test (group by)\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(\n          Seq(\n            (null.asInstanceOf[java.math.BigDecimal], \"a\"),\n            (null.asInstanceOf[java.math.BigDecimal], \"a\"),\n            (null.asInstanceOf[java.math.BigDecimal], \"b\"),\n            (null.asInstanceOf[java.math.BigDecimal], \"b\"),\n            (null.asInstanceOf[java.math.BigDecimal], \"b\")),\n          \"tbl\") {\n          val res = sql(\"SELECT _2, try_sum(_1) FROM tbl group by 1\")\n          checkSparkAnswerAndOperator(res)\n        }\n      }\n    }\n  }\n\n  protected def generateOverflowDecimalInputs: Seq[(java.math.BigDecimal, Int)] = {\n    val maxDec38_0 = new java.math.BigDecimal(\"99999999999999999999\")\n    (1 to 50).flatMap(_ => Seq((maxDec38_0, 1)))\n  }\n\n  test(\"ANSI support - SUM function\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        // Test long overflow\n        withParquetTable(Seq((Long.MaxValue, 1L), (100L, 1L)), \"tbl\") {\n          val res = sql(\"SELECT SUM(_1) FROM tbl\")\n          if (ansiEnabled) {\n            checkSparkAnswerMaybeThrows(res) match {\n              case (Some(sparkExc), Some(cometExc)) =>\n                // make sure that both Spark and Comet fail specifically with an overflow error\n                assert(sparkExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n                assert(cometExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n              case _ => fail(\"Exception should be thrown for Long overflow in ANSI mode\")\n            }\n          } else {\n            checkSparkAnswerAndOperator(res)\n          }\n        }\n        // Test long underflow\n        withParquetTable(Seq((Long.MinValue, 1L), (-100L, 1L)), \"tbl\") {\n          val res = sql(\"SELECT SUM(_1) FROM tbl\")\n          if (ansiEnabled) {\n            checkSparkAnswerMaybeThrows(res) match {\n              case (Some(sparkExc), Some(cometExc)) =>\n                assert(sparkExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n                assert(cometExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n              case _ => fail(\"Exception should be thrown for Long underflow in ANSI mode\")\n            }\n          } else {\n            checkSparkAnswerAndOperator(res)\n          }\n        }\n        // Test Int SUM (should not overflow)\n        withParquetTable(Seq((Int.MaxValue, 1), (Int.MaxValue, 1), (100, 1)), \"tbl\") {\n          val res = sql(\"SELECT SUM(_1) FROM tbl\")\n          checkSparkAnswerAndOperator(res)\n        }\n        // Test Short SUM (should not overflow)\n        withParquetTable(\n          Seq((Short.MaxValue, 1.toShort), (Short.MaxValue, 1.toShort), (100.toShort, 1.toShort)),\n          \"tbl\") {\n          val res = sql(\"SELECT 
SUM(_1) FROM tbl\")\n          checkSparkAnswerAndOperator(res)\n        }\n        // Test Byte SUM (should not overflow)\n        withParquetTable(\n          Seq((Byte.MaxValue, 1.toByte), (Byte.MaxValue, 1.toByte), (10.toByte, 1.toByte)),\n          \"tbl\") {\n          val res = sql(\"SELECT SUM(_1) FROM tbl\")\n          checkSparkAnswerAndOperator(res)\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for decimal SUM function\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(generateOverflowDecimalInputs, \"tbl\") {\n          val res = sql(\"SELECT SUM(_1) FROM tbl\")\n          if (ansiEnabled) {\n            checkSparkAnswerMaybeThrows(res) match {\n              case (Some(sparkExc), Some(cometExc)) =>\n                assert(sparkExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n                assert(cometExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n              case _ =>\n                fail(\"Exception should be thrown for decimal overflow in ANSI mode\")\n            }\n          } else {\n            checkSparkAnswerAndOperator(res)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for SUM - GROUP BY\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(\n          Seq((Long.MaxValue, 1), (100L, 1), (Long.MaxValue, 2), (200L, 2)),\n          \"tbl\") {\n          val res = sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\").repartition(2)\n          if (ansiEnabled) {\n            checkSparkAnswerMaybeThrows(res) match {\n              case (Some(sparkExc), Some(cometExc)) =>\n                assert(sparkExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n                assert(cometExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n              case _ =>\n                fail(\"Exception should be thrown for Long overflow with GROUP BY in ANSI mode\")\n            }\n          } else {\n            checkSparkAnswerAndOperator(res)\n          }\n        }\n\n        withParquetTable(\n          Seq((Long.MinValue, 1), (-100L, 1), (Long.MinValue, 2), (-200L, 2)),\n          \"tbl\") {\n          val res = sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\")\n          if (ansiEnabled) {\n            checkSparkAnswerMaybeThrows(res) match {\n              case (Some(sparkExc), Some(cometExc)) =>\n                assert(sparkExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n                assert(cometExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n              case _ =>\n                fail(\"Exception should be thrown for Long underflow with GROUP BY in ANSI mode\")\n            }\n          } else {\n            checkSparkAnswerAndOperator(res)\n          }\n        }\n        // Test Int with GROUP BY\n        withParquetTable(Seq((Int.MaxValue, 1), (Int.MaxValue, 1), (100, 2), (200, 2)), \"tbl\") {\n          val res = sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\")\n          checkSparkAnswerAndOperator(res)\n        }\n        // Test Short with GROUP BY\n        withParquetTable(\n          Seq((Short.MaxValue, 1), (Short.MaxValue, 1), (100.toShort, 2), (200.toShort, 2)),\n          \"tbl\") {\n          val res = sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\")\n          checkSparkAnswerAndOperator(res)\n        }\n        // Test Byte with GROUP BY\n        withParquetTable(\n          Seq((Byte.MaxValue, 1), (Byte.MaxValue, 1), 
(10.toByte, 2), (20.toByte, 2)),\n          \"tbl\") {\n          val res = sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\")\n          checkSparkAnswerAndOperator(res)\n        }\n      }\n    }\n  }\n\n  test(\"ANSI support for decimal SUM - GROUP BY\") {\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(generateOverflowDecimalInputs, \"tbl\") {\n          val res =\n            sql(\"SELECT _2, SUM(_1) FROM tbl GROUP BY _2\").repartition(2)\n          if (ansiEnabled) {\n            checkSparkAnswerMaybeThrows(res) match {\n              case (Some(sparkExc), Some(cometExc)) =>\n                assert(sparkExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n                assert(cometExc.getMessage.contains(\"ARITHMETIC_OVERFLOW\"))\n              case _ =>\n                fail(\"Exception should be thrown for decimal overflow with GROUP BY in ANSI mode\")\n            }\n          } else {\n            checkSparkAnswerAndOperator(res)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"try_sum overflow - with GROUP BY\") {\n    // Test Long overflow with GROUP BY - some groups overflow while some don't\n    withParquetTable(Seq((Long.MaxValue, 1), (100L, 1), (200L, 2), (300L, 2)), \"tbl\") {\n      // repartition to trigger merge batch and state checks\n      val res = sql(\"SELECT _2, try_sum(_1) FROM tbl GROUP BY _2\").repartition(2, col(\"_2\"))\n      // first group should return NULL (overflow) and group 2 should return 500\n      checkSparkAnswerAndOperator(res)\n    }\n\n    // Test Long underflow with GROUP BY\n    withParquetTable(Seq((Long.MinValue, 1), (-100L, 1), (-200L, 2), (-300L, 2)), \"tbl\") {\n      val res = sql(\"SELECT _2, try_sum(_1) FROM tbl GROUP BY _2\").repartition(2, col(\"_2\"))\n      // first group should return NULL (underflow), second group should return neg 500\n      checkSparkAnswerAndOperator(res)\n    }\n\n    // Test all groups overflow\n    withParquetTable(Seq((Long.MaxValue, 1), (100L, 1), (Long.MaxValue, 2), (100L, 2)), \"tbl\") {\n      val res = sql(\"SELECT _2, try_sum(_1) FROM tbl GROUP BY _2\").repartition(2, col(\"_2\"))\n      // Both groups should return NULL\n      checkSparkAnswerAndOperator(res)\n    }\n\n    // Test Short with GROUP BY (should NOT overflow)\n    withParquetTable(\n      Seq((Short.MaxValue, 1), (Short.MaxValue, 1), (100.toShort, 2), (200.toShort, 2)),\n      \"tbl\") {\n      val res = sql(\"SELECT _2, try_sum(_1) FROM tbl GROUP BY _2\").repartition(2, col(\"_2\"))\n      checkSparkAnswerAndOperator(res)\n    }\n\n    // Test Byte with GROUP BY (no overflow)\n    withParquetTable(\n      Seq((Byte.MaxValue, 1), (Byte.MaxValue, 1), (10.toByte, 2), (20.toByte, 2)),\n      \"tbl\") {\n      val res = sql(\"SELECT _2, try_sum(_1) FROM tbl GROUP BY _2\").repartition(2, col(\"_2\"))\n      checkSparkAnswerAndOperator(res)\n    }\n  }\n\n  test(\"try_sum decimal overflow\") {\n    withParquetTable(generateOverflowDecimalInputs, \"tbl\") {\n      val res = sql(\"SELECT try_sum(_1) FROM tbl\")\n      checkSparkAnswerAndOperator(res)\n    }\n  }\n\n  test(\"try_sum decimal overflow - with GROUP BY\") {\n    withParquetTable(generateOverflowDecimalInputs, \"tbl\") {\n      val res = sql(\"SELECT _2, try_sum(_1) FROM tbl GROUP BY _2\").repartition(2, col(\"_2\"))\n      checkSparkAnswerAndOperator(res)\n    }\n  }\n\n  test(\"try_sum decimal partial overflow - with GROUP BY\") {\n    // Group 1 overflows, Group 2 succeeds\n    
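// (generateOverflowDecimalInputs puts all fifty max-value decimal rows in group 1)\n    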
val data: Seq[(java.math.BigDecimal, Int)] = generateOverflowDecimalInputs ++ Seq(\n      (new java.math.BigDecimal(300), 2),\n      (new java.math.BigDecimal(200), 2))\n    withParquetTable(data, \"tbl\") {\n      val res = sql(\"SELECT _2, try_sum(_1) FROM tbl GROUP BY _2\")\n      // Group 1 should be NULL, Group 2 should be 500\n      checkSparkAnswerAndOperator(res)\n    }\n  }\n\n  test(\"SumDecimal and AvgDecimal nullable should always be true\") {\n    // SumDecimal and AvgDecimal currently hardcode nullable=true.\n    // This matches Spark's Sum.nullable and Average.nullable which always return true,\n    // regardless of ANSI mode or input nullability.\n    val nonNullableData: Seq[(java.math.BigDecimal, Int)] = Seq(\n      (new java.math.BigDecimal(\"10.00\"), 1),\n      (new java.math.BigDecimal(\"20.00\"), 1),\n      (new java.math.BigDecimal(\"30.00\"), 2))\n\n    val nullableData: Seq[(java.math.BigDecimal, Int)] = Seq(\n      (new java.math.BigDecimal(\"10.00\"), 1),\n      (null.asInstanceOf[java.math.BigDecimal], 1),\n      (new java.math.BigDecimal(\"30.00\"), 2))\n\n    Seq(true, false).foreach { ansiEnabled =>\n      withSQLConf(SQLConf.ANSI_ENABLED.key -> ansiEnabled.toString) {\n        withParquetTable(nonNullableData, \"tbl\") {\n          val sumRes = sql(\"SELECT _2, sum(_1) FROM tbl GROUP BY _2\")\n          checkSparkAnswerAndOperator(sumRes)\n          assert(sumRes.schema.fields(1).nullable == true)\n\n          val avgRes = sql(\"SELECT _2, avg(_1) FROM tbl GROUP BY _2\")\n          checkSparkAnswerAndOperator(avgRes)\n          assert(avgRes.schema.fields(1).nullable == true)\n        }\n\n        withParquetTable(nullableData, \"tbl\") {\n          val sumRes = sql(\"SELECT _2, sum(_1) FROM tbl GROUP BY _2\")\n          checkSparkAnswerAndOperator(sumRes)\n          assert(sumRes.schema.fields(1).nullable == true)\n\n          val avgRes = sql(\"SELECT _2, avg(_1) FROM tbl GROUP BY _2\")\n          checkSparkAnswerAndOperator(avgRes)\n          assert(avgRes.schema.fields(1).nullable == true)\n        }\n      }\n    }\n  }\n\n  protected def checkSparkAnswerAndNumOfAggregates(query: String, numAggregates: Int): Unit = {\n    val df = sql(query)\n    checkSparkAnswer(df)\n    val actualNumAggregates = getNumCometHashAggregate(df)\n    assert(\n      actualNumAggregates == numAggregates,\n      s\"Expected $numAggregates Comet aggregate operators, but found $actualNumAggregates\")\n  }\n\n  protected def checkSparkAnswerWithTolAndNumOfAggregates(\n      query: String,\n      numAggregates: Int,\n      absTol: Double = 1e-6): Unit = {\n    val df = sql(query)\n    checkSparkAnswerWithTolerance(df, absTol)\n    val actualNumAggregates = getNumCometHashAggregate(df)\n    assert(\n      actualNumAggregates == numAggregates,\n      s\"Expected $numAggregates Comet aggregate operators, but found $actualNumAggregates\")\n  }\n\n  def getNumCometHashAggregate(df: DataFrame): Int = {\n    val sparkPlan = stripAQEPlan(df.queryExecution.executedPlan)\n    sparkPlan.collect { case s: CometHashAggregateExec => s }.size\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/exec/CometColumnarShuffleSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport java.nio.file.{Files, Paths}\n\nimport scala.reflect.runtime.universe._\nimport scala.util.Random\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.{Partitioner, SparkConf}\nimport org.apache.spark.sql.{CometTestBase, DataFrame, RandomDataGenerator, Row}\nimport org.apache.spark.sql.comet.execution.shuffle.{CometShuffleDependency, CometShuffleExchangeExec, CometShuffleManager}\nimport org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanHelper, AQEShuffleReadExec, ShuffleQueryStageExec}\nimport org.apache.spark.sql.execution.exchange.ReusedExchangeExec\nimport org.apache.spark.sql.execution.joins.SortMergeJoinExec\nimport org.apache.spark.sql.functions.col\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometSparkSessionExtensions.isSpark40Plus\n\nabstract class CometColumnarShuffleSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n  protected val adaptiveExecutionEnabled: Boolean\n  protected val numElementsForceSpillThreshold: Int = 10\n\n  override protected def sparkConf: SparkConf = {\n    val conf = super.sparkConf\n    conf\n      .set(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key, adaptiveExecutionEnabled.toString)\n  }\n\n  protected val asyncShuffleEnable: Boolean\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(\n        CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED.key -> asyncShuffleEnable.toString,\n        CometConf.COMET_COLUMNAR_SHUFFLE_SPILL_THRESHOLD.key -> numElementsForceSpillThreshold.toString,\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\",\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n        testFun\n      }\n    }\n  }\n\n  import testImplicits._\n\n  setupTestData()\n\n  test(\"Fallback to Spark when shuffling on struct with duplicate field name\") {\n    val df = sql(\"\"\"\n        | SELECT max(struct(a, record.*, b)) as r FROM\n        |   (select a as a, b as b, struct(a,b) as record from testData2) tmp\n        | GROUP BY a\n      \"\"\".stripMargin).select($\"r.*\")\n\n    checkSparkAnswer(df)\n  }\n\n  test(\"Unsupported types for SinglePartition should fallback to Spark\") {\n    checkSparkAnswer(spark.sql(\"\"\"\n        |SELECT\n        |  AVG(null),\n        |  COUNT(null),\n        |  FIRST(null),\n        |  LAST(null),\n        |  MAX(null),\n        |  MIN(null),\n        |  
SUM(null)\n        \"\"\".stripMargin))\n  }\n\n  test(\"Fallback to Spark for unsupported input besides ordering\") {\n    val dataGenerator = RandomDataGenerator\n      .forType(\n        dataType = NullType,\n        nullable = true,\n        new Random(System.nanoTime()),\n        validJulianDatetime = false)\n      .get\n\n    val schema = new StructType()\n      .add(\"index\", IntegerType, nullable = false)\n      .add(\"col\", NullType, nullable = true)\n    val rdd =\n      spark.sparkContext.parallelize((1 to 20).map(i => Row(i, dataGenerator())))\n    val df = spark.createDataFrame(rdd, schema).orderBy(\"index\").coalesce(1)\n    checkSparkAnswer(df)\n  }\n\n  test(\"columnar shuffle on nested struct including nulls\") {\n    Seq(10, 201).foreach { numPartitions =>\n      Seq(\"1.0\", \"10.0\").foreach { ratio =>\n        withSQLConf(CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> ratio) {\n          withParquetTable(\n            (0 until 50).map(i =>\n              (i, Seq((i + 1, i.toString), null, (i + 3, (i + 3).toString)), i + 1)),\n            \"tbl\") {\n            val df = sql(\"SELECT * FROM tbl\")\n              .filter($\"_1\" > 1)\n              .repartition(numPartitions, $\"_1\", $\"_2\", $\"_3\")\n              .sortWithinPartitions($\"_1\")\n\n            checkShuffleAnswer(df, 1)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"columnar shuffle on struct including nulls\") {\n    Seq(10, 201).foreach { numPartitions =>\n      Seq(\"1.0\", \"10.0\").foreach { ratio =>\n        withSQLConf(CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> ratio) {\n          val data: Seq[(Int, (Int, String))] =\n            Seq((1, (0, \"1\")), (2, (3, \"3\")), (3, null))\n          withParquetTable(data, \"tbl\") {\n            val df = sql(\"SELECT * FROM tbl\")\n              .filter($\"_1\" > 1)\n              .repartition(numPartitions, $\"_1\", $\"_2\")\n              .sortWithinPartitions($\"_1\")\n\n            checkShuffleAnswer(df, 1)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"columnar shuffle on array/struct map key/value\") {\n    Seq(\"false\", \"true\").foreach { execEnabled =>\n      Seq(10, 201).foreach { numPartitions =>\n        Seq(\"1.0\", \"10.0\").foreach { ratio =>\n          withSQLConf(\n            CometConf.COMET_EXEC_ENABLED.key -> execEnabled,\n            CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> ratio) {\n            withParquetTable((0 until 50).map(i => (Map(Seq(i, i + 1) -> i), i + 1)), \"tbl\") {\n              val df = sql(\"SELECT * FROM tbl\")\n                .filter($\"_2\" > 10)\n                .repartition(numPartitions, $\"_1\", $\"_2\")\n                .sortWithinPartitions($\"_2\")\n\n              if (isSpark40Plus) {\n                // https://github.com/apache/datafusion-comet/issues/1941\n                // Spark 4.0 introduces a mapsort which falls back\n                checkShuffleAnswer(df, 0)\n              } else {\n                checkShuffleAnswer(df, 1)\n              }\n            }\n\n            withParquetTable((0 until 50).map(i => (Map(i -> Seq(i, i + 1)), i + 1)), \"tbl\") {\n              val df = sql(\"SELECT * FROM tbl\")\n                .filter($\"_2\" > 10)\n                .repartition(numPartitions, $\"_1\", $\"_2\")\n                .sortWithinPartitions($\"_2\")\n\n              if (isSpark40Plus) {\n                // https://github.com/apache/datafusion-comet/issues/1941\n                // Spark 4.0 introduces a mapsort which falls back\n              
  checkShuffleAnswer(df, 0)\n              } else {\n                checkShuffleAnswer(df, 1)\n              }\n            }\n\n            withParquetTable((0 until 50).map(i => (Map((i, i.toString) -> i), i + 1)), \"tbl\") {\n              val df = sql(\"SELECT * FROM tbl\")\n                .filter($\"_2\" > 10)\n                .repartition(numPartitions, $\"_1\", $\"_2\")\n                .sortWithinPartitions($\"_2\")\n\n              if (isSpark40Plus) {\n                // https://github.com/apache/datafusion-comet/issues/1941\n                // Spark 4.0 introduces a mapsort which falls back\n                checkShuffleAnswer(df, 0)\n              } else {\n                checkShuffleAnswer(df, 1)\n              }\n            }\n\n            withParquetTable((0 until 50).map(i => (Map(i -> (i, i.toString)), i + 1)), \"tbl\") {\n              val df = sql(\"SELECT * FROM tbl\")\n                .filter($\"_2\" > 10)\n                .repartition(numPartitions, $\"_1\", $\"_2\")\n                .sortWithinPartitions($\"_2\")\n\n              if (isSpark40Plus) {\n                // https://github.com/apache/datafusion-comet/issues/1941\n                // Spark 4.0 introduces a mapsort which falls back\n                checkShuffleAnswer(df, 0)\n              } else {\n                checkShuffleAnswer(df, 1)\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"columnar shuffle on map array element\") {\n    Seq(\"false\", \"true\").foreach { execEnabled =>\n      Seq(10, 201).foreach { numPartitions =>\n        Seq(\"1.0\", \"10.0\").foreach { ratio =>\n          withSQLConf(\n            CometConf.COMET_EXEC_ENABLED.key -> execEnabled,\n            CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> ratio) {\n            withParquetTable(\n              (0 until 50).map(i => ((Seq(Map(1 -> i)), Map(2 -> i), Map(3 -> i)), i + 1)),\n              \"tbl\") {\n              val df = sql(\"SELECT * FROM tbl\")\n                .filter($\"_2\" > 10)\n                .repartition(numPartitions, $\"_1\", $\"_2\")\n                .sortWithinPartitions($\"_2\")\n\n              if (isSpark40Plus) {\n                // https://github.com/apache/datafusion-comet/issues/1941\n                // Spark 4.0 introduces a mapsort which falls back\n                checkShuffleAnswer(df, 0)\n              } else {\n                checkShuffleAnswer(df, 1)\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"RoundRobinPartitioning is supported by columnar shuffle\") {\n    withSQLConf(\n      // AQE has `ShuffleStage`, a leaf node that blocks\n      // collecting the `CometShuffleExchangeExec` node.\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\") {\n      withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n        val df = sql(\"SELECT * FROM tbl\")\n        val shuffled = df\n          .select($\"_1\" + 1 as (\"a\"))\n          .filter($\"a\" > 4)\n          .repartition(10)\n          .limit(2)\n\n        checkAnswer(shuffled, Row(5) :: Nil)\n        val cometShuffleExecs = checkCometExchange(shuffled, 1, false)\n\n        // repartition(n) produces RoundRobinPartitioning; assert it so the check actually fails on regression\n        assert(\n          cometShuffleExecs(0).outputPartitioning.getClass.getName\n            .contains(\"RoundRobinPartitioning\"))\n      }\n    }\n  }\n\n  def genTuples[K](num: Int, keys: Seq[K]): Seq[(\n      Int,\n      Map[K, Boolean],\n      Map[K, Byte],\n      Map[K, Short],\n      Map[K, Int],\n      Map[K, Long],\n      Map[K, Float],\n      Map[K, Double],\n      Map[K, 
java.sql.Date],\n      Map[K, java.sql.Timestamp],\n      Map[K, java.math.BigDecimal],\n      Map[K, Array[Byte]],\n      Map[K, String])] = {\n    (0 until num).map(i =>\n      (\n        i + 1,\n        Map(keys(0) -> (i > 10), keys(1) -> (i > 20)),\n        Map(keys(0) -> i.toByte, keys(1) -> (i + 1).toByte),\n        Map(keys(0) -> i.toShort, keys(1) -> (i + 1).toShort),\n        Map(keys(0) -> i, keys(1) -> (i + 1)),\n        Map(keys(0) -> i.toLong, keys(1) -> (i + 1).toLong),\n        Map(keys(0) -> i.toFloat, keys(1) -> (i + 1).toFloat),\n        Map(keys(0) -> i.toDouble, keys(1) -> (i + 1).toDouble),\n        Map(keys(0) -> new java.sql.Date(i.toLong), keys(1) -> new java.sql.Date((i + 1).toLong)),\n        Map(\n          keys(0) -> new java.sql.Timestamp(i.toLong),\n          keys(1) -> new java.sql.Timestamp((i + 1).toLong)),\n        Map(\n          keys(0) -> new java.math.BigDecimal(i.toLong),\n          keys(1) -> new java.math.BigDecimal((i + 1).toLong)),\n        Map(keys(0) -> i.toString.getBytes(), keys(1) -> (i + 1).toString.getBytes()),\n        Map(keys(0) -> i.toString, keys(1) -> (i + 1).toString)))\n  }\n\n  def repartitionAndSort(numPartitions: Int): Unit = {\n    val df = sql(\"SELECT * FROM tbl\")\n      .filter($\"_1\" > 10)\n      .repartition(\n        numPartitions,\n        $\"_2\",\n        $\"_3\",\n        $\"_4\",\n        $\"_5\",\n        $\"_6\",\n        $\"_7\",\n        $\"_8\",\n        $\"_9\",\n        $\"_10\",\n        $\"_11\",\n        $\"_12\",\n        $\"_13\")\n      .sortWithinPartitions($\"_1\")\n\n    checkShuffleAnswer(df, 1)\n  }\n\n  def columnarShuffleOnMapTest[K: TypeTag](num: Int, keys: Seq[K]): Unit = {\n    Seq(10, 201).foreach { numPartitions =>\n      Seq(\"1.0\", \"10.0\").foreach { ratio =>\n        withSQLConf(CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> ratio) {\n          withParquetTable(genTuples(num, keys), \"tbl\") {\n            repartitionAndSort(numPartitions)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"columnar shuffle on map [bool]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(50, Seq(true, false))\n  }\n\n  test(\"columnar shuffle on map [byte]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(50, Seq(0.toByte, 1.toByte))\n  }\n\n  test(\"columnar shuffle on map [short]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(50, Seq(0.toShort, 1.toShort))\n  }\n\n  test(\"columnar shuffle on map [int]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(50, Seq(0, 1))\n  }\n\n  test(\"columnar shuffle on map [long]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(50, Seq(0.toLong, 1.toLong))\n  }\n\n  test(\"columnar shuffle on map [float]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(50, Seq(0.toFloat, 1.toFloat))\n  }\n\n  test(\"columnar shuffle on map [double]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(50, Seq(0.toDouble, 1.toDouble))\n  }\n\n  test(\"columnar shuffle on map [date]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n  
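  // map shuffle keys hit the Spark 4.0+ MapSort fallback described above,\n    // so these tests only run on earlier Spark versions\n  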
  assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(50, Seq(new java.sql.Date(0.toLong), new java.sql.Date(1.toLong)))\n  }\n\n  test(\"columnar shuffle on map [timestamp]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(\n      50,\n      Seq(new java.sql.Timestamp(0.toLong), new java.sql.Timestamp(1.toLong)))\n  }\n\n  test(\"columnar shuffle on map [decimal]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(\n      50,\n      Seq(new java.math.BigDecimal(0.toLong), new java.math.BigDecimal(1.toLong)))\n  }\n\n  test(\"columnar shuffle on map [string]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(50, Seq(0.toString, 1.toString))\n  }\n\n  test(\"columnar shuffle on map [binary]\") {\n    // https://github.com/apache/datafusion-comet/issues/1941\n    assume(!isSpark40Plus)\n    columnarShuffleOnMapTest(50, Seq(0.toString.getBytes(), 1.toString.getBytes()))\n  }\n\n  test(\"columnar shuffle on array\") {\n    Seq(10, 201).foreach { numPartitions =>\n      Seq(\"1.0\", \"10.0\").foreach { ratio =>\n        withSQLConf(CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> ratio) {\n          withParquetTable(\n            (0 until 50).map(i =>\n              (\n                Seq(i + 1, i + 2, i + 3),\n                Seq(i.toLong, (i + 2).toLong, (i + 5).toLong),\n                Seq(i.toString, (i + 3).toString, (i + 2).toString),\n                Seq(\n                  (\n                    i + 1,\n                    Seq(i + 3, i + 1, i + 2), // nested array in struct\n                    Seq(i.toLong, (i + 2).toLong, (i + 5).toLong),\n                    Seq(i.toString, (i + 3).toString, (i + 2).toString),\n                    (i + 2).toString)),\n                i + 1)),\n            \"tbl\") {\n            val df = sql(\"SELECT * FROM tbl\")\n              .filter($\"_5\" > 10)\n              .repartition(numPartitions, $\"_1\", $\"_2\", $\"_3\", $\"_4\", $\"_5\")\n              .sortWithinPartitions($\"_1\")\n\n            checkShuffleAnswer(df, 1)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"columnar shuffle on nested array\") {\n    Seq(\"false\", \"true\").foreach { execEnabled =>\n      Seq(10, 201).foreach { numPartitions =>\n        Seq(\"1.0\", \"10.0\").foreach { ratio =>\n          withSQLConf(\n            CometConf.COMET_EXEC_ENABLED.key -> execEnabled,\n            CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> ratio) {\n            withParquetTable(\n              (0 until 50).map(i => (Seq(Seq(i + 1), Seq(i + 2), Seq(i + 3)), i + 1)),\n              \"tbl\") {\n              val df = sql(\"SELECT * FROM tbl\")\n                .filter($\"_2\" > 10)\n                .repartition(numPartitions, $\"_1\", $\"_2\")\n                .sortWithinPartitions($\"_1\")\n\n              checkShuffleAnswer(df, 1)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"columnar shuffle on nested struct\") {\n    Seq(10, 201).foreach { numPartitions =>\n      Seq(\"1.0\", \"10.0\").foreach { ratio =>\n        withSQLConf(CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> ratio) {\n          withParquetTable(\n            (0 until 50).map(i =>\n              ((i, 2.toString, (i + 1).toLong, (3.toString, i + 1, (i + 2).toLong)), i + 1)),\n            \"tbl\") {\n            val df = sql(\"SELECT * FROM tbl\")\n              .filter($\"_2\" > 10)\n              .repartition(numPartitions, 
$\"_1\", $\"_2\")\n              .sortWithinPartitions($\"_1\")\n\n            checkShuffleAnswer(df, 1)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"fix: closing sliced dictionary Comet vector should not close dictionary array\") {\n    (0 to 10).foreach { _ =>\n      withSQLConf(\n        SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n        CometConf.COMET_BATCH_SIZE.key -> \"10\",\n        CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> \"1.1\",\n        CometConf.COMET_COLUMNAR_SHUFFLE_SPILL_THRESHOLD.key -> \"1000000000\") {\n        val table1 = (0 until 1000)\n          .map(i => (111111.toString, 2222222.toString, 3333333.toString, i.toLong))\n          .toDF(\"a\", \"b\", \"c\", \"d\")\n        val table2 = (0 until 1000)\n          .map(i => (3333333.toString, 2222222.toString, 111111.toString, i.toLong))\n          .toDF(\"e\", \"f\", \"g\", \"h\")\n        withParquetTable(table1, \"tbl_a\") {\n          withParquetTable(table2, \"tbl_b\") {\n            val df = sql(\n              \"select a, b, count(distinct h) from tbl_a, tbl_b \" +\n                \"where c = e and b = '2222222' and a not like '2' group by a, b\")\n            checkShuffleAnswer(df, 4)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"fix: Dictionary field should have distinct dict_id\") {\n    Seq(10, 201).foreach { numPartitions =>\n      withSQLConf(CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> \"2.0\") {\n        withParquetTable(\n          (0 until 10000).map(i => (1.toString, 2.toString, (i + 1).toLong)),\n          \"tbl\") {\n          assert(\n            sql(\"SELECT * FROM tbl\").repartition(numPartitions, $\"_1\", $\"_2\").count() == sql(\n              \"SELECT * FROM tbl\")\n              .count())\n          val shuffled = sql(\"SELECT * FROM tbl\").repartition(numPartitions, $\"_1\")\n          checkShuffleAnswer(shuffled, 1)\n        }\n      }\n    }\n  }\n\n  test(\"dictionary shuffle\") {\n    Seq(10, 201).foreach { numPartitions =>\n      withSQLConf(CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> \"2.0\") {\n        withParquetTable((0 until 10000).map(i => (1.toString, (i + 1).toLong)), \"tbl\") {\n          assert(\n            sql(\"SELECT * FROM tbl\").repartition(numPartitions, $\"_1\").count() == sql(\n              \"SELECT * FROM tbl\")\n              .count())\n          val shuffled = sql(\"SELECT * FROM tbl\").select($\"_1\").repartition(numPartitions, $\"_1\")\n          checkShuffleAnswer(shuffled, 1)\n        }\n      }\n    }\n  }\n\n  test(\"dictionary shuffle: fallback to string\") {\n    Seq(10, 201).foreach { numPartitions =>\n      withSQLConf(CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> \"1000000000.0\") {\n        withParquetTable((0 until 10000).map(i => (1.toString, (i + 1).toLong)), \"tbl\") {\n          assert(\n            sql(\"SELECT * FROM tbl\").repartition(numPartitions, $\"_1\").count() == sql(\n              \"SELECT * FROM tbl\")\n              .count())\n          val shuffled = sql(\"SELECT * FROM tbl\").select($\"_1\").repartition(numPartitions, $\"_1\")\n          checkShuffleAnswer(shuffled, 1)\n        }\n      }\n    }\n  }\n\n  test(\"fix: inMemSorter should be reset after spilling\") {\n    withParquetTable((0 until 10000).map(i => (1, (i + 1).toLong)), \"tbl\") {\n      assert(\n        sql(\"SELECT * FROM tbl\").repartition(201, $\"_1\").count() == sql(\"SELECT * FROM tbl\")\n          .count())\n    }\n  }\n\n  test(\"fix: StreamReader should always set useDecimal128 as 
true\") {\n    Seq(10, 201).foreach { numPartitions =>\n      withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n        withTempPath { dir =>\n          val data = makeDecimalRDD(1000, DecimalType(12, 2), false)\n          data.write.parquet(dir.getCanonicalPath)\n          readParquetFile(dir.getCanonicalPath) { df =>\n            {\n              val shuffled = df.repartition(numPartitions, $\"dec\")\n              checkShuffleAnswer(shuffled, 1)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"fix: Native Unsafe decimal accessors return incorrect results\") {\n    Seq(10, 201).foreach { numPartitions =>\n      withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n        withTempPath { dir =>\n          val data = makeDecimalRDD(1000, DecimalType(22, 2), false)\n          data.write.parquet(dir.getCanonicalPath)\n          readParquetFile(dir.getCanonicalPath) { df =>\n            val shuffled = df.repartition(numPartitions, $\"dec\")\n            checkShuffleAnswer(shuffled, 1)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"Comet shuffle reader should respect spark.comet.batchSize\") {\n    Seq(10, 201).foreach { numPartitions =>\n      withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n        withParquetTable((0 until 10000).map(i => (1, (i + 1).toLong)), \"tbl\") {\n          assert(\n            sql(\"SELECT * FROM tbl\").repartition(numPartitions, $\"_1\").count() == sql(\n              \"SELECT * FROM tbl\").count())\n        }\n      }\n    }\n  }\n\n  test(\"columnar shuffle should work with BatchScan\") {\n    withSQLConf(\n      SQLConf.USE_V1_SOURCE_LIST.key -> \"\", // Use DataSourceV2\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\", // Disable AQE\n      CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\") { // Disable CometScan to use Spark BatchScan\n      withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n        val df = sql(\"SELECT * FROM tbl\")\n        val shuffled = df\n          .select($\"_1\" + 1 as (\"a\"))\n          .filter($\"a\" > 4)\n          .repartitionByRange(10, $\"_2\")\n          .limit(2)\n          .repartition(10, $\"_1\")\n\n        checkAnswer(shuffled, Row(5) :: Nil)\n      }\n    }\n  }\n\n  test(\"Columnar shuffle for large shuffle partition number\") {\n    Seq(10, 200, 201).foreach { numPartitions =>\n      withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n        val df = sql(\"SELECT * FROM tbl\")\n\n        val shuffled = df.repartitionByRange(numPartitions, $\"_2\")\n\n        val cometShuffleExecs = checkCometExchange(shuffled, 1, false)\n        // `CometSerializedShuffleHandle` is used for large shuffle partition number,\n        // i.e., sort-based shuffle writer\n        cometShuffleExecs(0).shuffleDependency.shuffleHandle.getClass.getName\n          .contains(\"CometSerializedShuffleHandle\")\n\n        checkSparkAnswer(shuffled)\n      }\n    }\n  }\n\n  test(\"hash-based columnar shuffle\") {\n    Seq(10, 200, 201).foreach { numPartitions =>\n      withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n        val df = sql(\"SELECT * FROM tbl\")\n\n        val shuffled1 =\n          df.repartitionByRange(numPartitions, $\"_2\").limit(2).repartition(numPartitions, $\"_1\")\n\n        // 3 exchanges are expected: 1) shuffle to repartition by range, 2) shuffle to global limit, 3) hash shuffle\n        checkShuffleAnswer(shuffled1, 3)\n\n        val shuffled2 = df\n          
.repartitionByRange(numPartitions, $\"_2\")\n          .limit(2)\n          .repartition(numPartitions, $\"_1\", $\"_2\")\n\n        checkShuffleAnswer(shuffled2, 3)\n\n        val shuffled3 = df\n          .repartitionByRange(numPartitions, $\"_2\")\n          .limit(2)\n          .repartition(numPartitions, $\"_2\", $\"_1\")\n\n        checkShuffleAnswer(shuffled3, 3)\n      }\n    }\n  }\n\n  test(\"columnar shuffle: different data type\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 1000)\n\n        Seq(10, 201).foreach { numPartitions =>\n          (1 to 20).map(i => s\"_$i\").foreach { c =>\n            readParquetFile(path.toString) { df =>\n              val shuffled = df\n                .select($\"_1\")\n                .repartition(numPartitions, col(c))\n              val cometShuffleExecs = checkCometExchange(shuffled, 1, false)\n              if (numPartitions > 200) {\n                // For sort-based shuffle writer\n                assert(\n                  cometShuffleExecs(0).shuffleDependency.shuffleHandle.getClass.getName\n                    .contains(\"CometSerializedShuffleHandle\"))\n              }\n              checkSparkAnswer(shuffled)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"native operator after columnar shuffle\") {\n    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n      val df = sql(\"SELECT * FROM tbl\")\n\n      val shuffled1 = df\n        .repartition(10, $\"_2\")\n        .select($\"_1\", $\"_1\" + 1, $\"_2\" + 2)\n        .repartition(10, $\"_1\")\n        .filter($\"_1\" > 1)\n\n      // 2 Comet shuffle exchanges are expected\n      checkShuffleAnswer(shuffled1, 2)\n\n      val shuffled2 = df\n        .repartitionByRange(10, $\"_2\")\n        .select($\"_1\", $\"_1\" + 1, $\"_2\" + 2)\n        .repartition(10, $\"_1\")\n        .filter($\"_1\" > 1)\n\n      // 2 Comet shuffle exchanges are expected, if columnar shuffle is enabled\n      checkShuffleAnswer(shuffled2, 2)\n    }\n  }\n\n  test(\"columnar shuffle: single partition\") {\n    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n      val df = sql(\"SELECT * FROM tbl\").sortWithinPartitions($\"_1\".desc)\n\n      val shuffled = df.repartition(1)\n\n      checkShuffleAnswer(shuffled, 1)\n    }\n  }\n\n  test(\"sort-based columnar shuffle metrics\") {\n    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n      val df = sql(\"SELECT * FROM tbl\").sortWithinPartitions($\"_1\".desc)\n      val shuffled = df.repartition(201, $\"_1\")\n\n      checkShuffleAnswer(shuffled, 1)\n\n      // Materialize the shuffled data\n      shuffled.collect()\n      val metrics = find(shuffled.queryExecution.executedPlan) {\n        case _: CometShuffleExchangeExec => true\n        case _ => false\n      }.map(_.metrics).get\n\n      assert(metrics.contains(\"shuffleRecordsWritten\"))\n      assert(metrics(\"shuffleRecordsWritten\").value == 5L)\n\n      assert(metrics.contains(\"shuffleBytesWritten\"))\n      assert(metrics(\"shuffleBytesWritten\").value > 0)\n\n      assert(metrics.contains(\"shuffleWriteTime\"))\n      assert(metrics(\"shuffleWriteTime\").value > 0)\n    }\n  }\n\n  test(\"columnar shuffle on null struct fields\") {\n    withTempDir { dir =>\n      val testData = \"{}\\n\"\n      val path = Paths.get(dir.toString, \"test.json\")\n      
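// the file contains a single empty JSON object, so every field of the\n      // nested schema below is read back as null\n      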
Files.write(path, testData.getBytes)\n\n      // Define the nested struct schema\n      val readSchema = StructType(\n        Array(\n          StructField(\n            \"metaData\",\n            StructType(\n              Array(StructField(\n                \"format\",\n                StructType(Array(StructField(\"provider\", StringType, nullable = true))),\n                nullable = true))),\n            nullable = true)))\n\n      // Read JSON with custom schema and repartition, this will repartition rows that contain\n      // null struct fields.\n      val df = spark.read.format(\"json\").schema(readSchema).load(path.toString).repartition(2)\n      assert(df.count() == 1)\n      val row = df.collect()(0)\n      assert(row.getAs[org.apache.spark.sql.Row](\"metaData\") == null)\n    }\n  }\n\n  /**\n   * Checks that `df` produces the same answer as Spark does, and has the `expectedNum` Comet\n   * exchange operators.\n   */\n  private def checkShuffleAnswer(df: DataFrame, expectedNum: Int): Unit = {\n    checkCometExchange(df, expectedNum, false)\n    checkSparkAnswer(df)\n  }\n}\n\nclass CometAsyncShuffleSuite extends CometColumnarShuffleSuite {\n  override protected val asyncShuffleEnable: Boolean = true\n\n  protected val adaptiveExecutionEnabled: Boolean = true\n}\n\nclass CometShuffleSuite extends CometColumnarShuffleSuite {\n  override protected val asyncShuffleEnable: Boolean = false\n\n  protected val adaptiveExecutionEnabled: Boolean = true\n\n  import testImplicits._\n\n  test(\"Comet native operator after ShuffleQueryStage\") {\n    withSQLConf(\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      withParquetTable((0 until 10).map(i => (i, i % 5)), \"tbl_a\") {\n        val df = sql(\"SELECT * FROM tbl_a\")\n        val shuffled = df\n          .select($\"_1\" + 1 as (\"a\"))\n          .filter($\"a\" > 4)\n          .repartition(10)\n          .sortWithinPartitions($\"a\")\n          .filter($\"a\" >= 10)\n        checkSparkAnswerAndOperator(shuffled, classOf[ShuffleQueryStageExec])\n      }\n    }\n  }\n\n  test(\"Comet native operator after ShuffleQueryStage + ReusedExchange\") {\n    withSQLConf(\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      withParquetTable((0 until 10).map(i => (i, i % 5)), \"tbl_a\") {\n        withParquetTable((0 until 10).map(i => (i % 10, i + 2)), \"tbl_b\") {\n          val df = sql(\"SELECT * FROM tbl_a\")\n          val left = df\n            .select($\"_1\" + 1 as (\"a\"))\n            .filter($\"a\" > 4)\n          val right = left.select($\"a\" as (\"b\"))\n          val join = left.join(right, $\"a\" === $\"b\")\n          checkSparkAnswerAndOperator(\n            join,\n            classOf[ShuffleQueryStageExec],\n            classOf[SortMergeJoinExec],\n            classOf[AQEShuffleReadExec])\n        }\n      }\n    }\n  }\n}\n\nclass DisableAQECometShuffleSuite extends CometColumnarShuffleSuite {\n  override protected val asyncShuffleEnable: Boolean = false\n\n  protected val adaptiveExecutionEnabled: Boolean = false\n\n  import 
testImplicits._\n\n  test(\"Comet native operator after ReusedExchange\") {\n    withSQLConf(\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      withParquetTable((0 until 10).map(i => (i, i % 5)), \"tbl_a\") {\n        withParquetTable((0 until 10).map(i => (i % 10, i + 2)), \"tbl_b\") {\n          val df = sql(\"SELECT * FROM tbl_a\")\n          val left = df\n            .select($\"_1\" + 1 as (\"a\"))\n            .filter($\"a\" > 4)\n          val right = left.select($\"a\" as (\"b\"))\n          val join = left.join(right, $\"a\" === $\"b\")\n          checkSparkAnswerAndOperator(\n            join,\n            classOf[ReusedExchangeExec],\n            classOf[SortMergeJoinExec])\n        }\n      }\n    }\n  }\n}\n\nclass DisableAQECometAsyncShuffleSuite extends CometColumnarShuffleSuite {\n  override protected val asyncShuffleEnable: Boolean = true\n\n  protected val adaptiveExecutionEnabled: Boolean = false\n}\n\nclass CometShuffleEncryptionSuite extends CometTestBase {\n\n  import testImplicits._\n\n  override protected def sparkConf: SparkConf = {\n    val conf = super.sparkConf\n    conf.set(\"spark.io.encryption.enabled\", \"true\")\n  }\n\n  test(\"comet columnar shuffle with encryption\") {\n    Seq(10, 201).foreach { numPartitions =>\n      Seq(true, false).foreach { dictionaryEnabled =>\n        Seq(true, false).foreach { asyncEnabled =>\n          withTempDir { dir =>\n            val path = new Path(dir.toURI.toString, \"test.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 1000)\n\n            (1 until 10).map(i => $\"_$i\").foreach { col =>\n              withSQLConf(\n                CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n                CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n                CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\",\n                CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED.key -> asyncEnabled.toString) {\n                readParquetFile(path.toString) { df =>\n                  val shuffled = df\n                    .select($\"_1\")\n                    .repartition(numPartitions, col)\n                  checkSparkAnswer(shuffled)\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n}\n\nclass CometShuffleManagerSuite extends CometTestBase {\n\n  test(\"should not bypass merge sort if executor cores are too high\") {\n    withSQLConf(CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_MAX_THREAD_NUM.key -> \"100\") {\n      val conf = new SparkConf()\n      conf.set(\"spark.executor.cores\", \"1\")\n\n      val rdd = spark.emptyDataFrame.rdd.map(x => (0, x))\n\n      val dependency = new CometShuffleDependency[Int, Row, Row](\n        _rdd = rdd,\n        serializer = null,\n        shuffleWriterProcessor = null,\n        partitioner = new Partitioner {\n          override def numPartitions: Int = 50\n\n          override def getPartition(key: Any): Int = key.asInstanceOf[Int]\n        },\n        decodeTime = null)\n\n      assert(CometShuffleManager.shouldBypassMergeSort(conf, dependency))\n\n      conf.set(\"spark.executor.cores\", \"10\")\n      assert(!CometShuffleManager.shouldBypassMergeSort(conf, dependency))\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/exec/CometExec3_4PlusSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport java.io.ByteArrayOutputStream\n\nimport scala.util.Random\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.catalyst.FunctionIdentifier\nimport org.apache.spark.sql.catalyst.expressions.{BloomFilterMightContain, Expression, ExpressionInfo}\nimport org.apache.spark.sql.functions.{col, lit}\nimport org.apache.spark.util.sketch.BloomFilter\n\nimport org.apache.comet.CometConf\n\n/**\n * This test suite contains tests for only Spark 3.4+.\n */\nclass CometExec3_4PlusSuite extends CometTestBase {\n  import testImplicits._\n\n  val func_might_contain = new FunctionIdentifier(\"might_contain\")\n\n  override def beforeAll(): Unit = {\n    super.beforeAll()\n    // Register 'might_contain' to builtin.\n    spark.sessionState.functionRegistry.registerFunction(\n      func_might_contain,\n      new ExpressionInfo(classOf[BloomFilterMightContain].getName, \"might_contain\"),\n      (children: Seq[Expression]) => BloomFilterMightContain(children.head, children(1)))\n  }\n\n  override def afterAll(): Unit = {\n    spark.sessionState.functionRegistry.dropFunction(func_might_contain)\n    super.afterAll()\n  }\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n        testFun\n      }\n    }\n  }\n\n  // The syntax is only supported by Spark 3.4+.\n  test(\"subquery limit: limit with offset should return correct results\") {\n    withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      withTable(\"t1\", \"t2\") {\n        val table1 =\n          \"\"\"create temporary view t1 as select * from values\n            |  (\"val1a\", 6S, 8, 10L, float(15.0), 20D, 20E2BD, timestamp '2014-04-04 01:00:00.000', date '2014-04-04'),\n            |  (\"val1b\", 8S, 16, 19L, float(17.0), 25D, 26E2BD, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),\n            |  (\"val1a\", 16S, 12, 21L, float(15.0), 20D, 20E2BD, timestamp '2014-06-04 01:02:00.001', date '2014-06-04'),\n            |  (\"val1a\", 16S, 12, 10L, float(15.0), 20D, 20E2BD, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),\n            |  (\"val1c\", 8S, 16, 19L, float(17.0), 25D, 26E2BD, timestamp '2014-05-04 01:02:00.001', date '2014-05-05'),\n            |  (\"val1d\", null, 16, 22L, float(17.0), 25D, 26E2BD, timestamp '2014-06-04 01:01:00.000', null),\n            |  (\"val1d\", null, 16, 19L, float(17.0), 25D, 26E2BD, timestamp '2014-07-04 01:02:00.001', null),\n            |  
(\"val1e\", 10S, null, 25L, float(17.0), 25D, 26E2BD, timestamp '2014-08-04 01:01:00.000', date '2014-08-04'),\n            |  (\"val1e\", 10S, null, 19L, float(17.0), 25D, 26E2BD, timestamp '2014-09-04 01:02:00.001', date '2014-09-04'),\n            |  (\"val1d\", 10S, null, 12L, float(17.0), 25D, 26E2BD, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),\n            |  (\"val1a\", 6S, 8, 10L, float(15.0), 20D, 20E2BD, timestamp '2014-04-04 01:02:00.001', date '2014-04-04'),\n            |  (\"val1e\", 10S, null, 19L, float(17.0), 25D, 26E2BD, timestamp '2014-05-04 01:01:00.000', date '2014-05-04')\n            |  as t1(t1a, t1b, t1c, t1d, t1e, t1f, t1g, t1h, t1i);\"\"\".stripMargin\n        val table2 =\n          \"\"\"create temporary view t2 as select * from values\n            |  (\"val2a\", 6S, 12, 14L, float(15), 20D, 20E2BD, timestamp '2014-04-04 01:01:00.000', date '2014-04-04'),\n            |  (\"val1b\", 10S, 12, 19L, float(17), 25D, 26E2BD, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),\n            |  (\"val1b\", 8S, 16, 119L, float(17), 25D, 26E2BD, timestamp '2015-05-04 01:01:00.000', date '2015-05-04'),\n            |  (\"val1c\", 12S, 16, 219L, float(17), 25D, 26E2BD, timestamp '2016-05-04 01:01:00.000', date '2016-05-04'),\n            |  (\"val1b\", null, 16, 319L, float(17), 25D, 26E2BD, timestamp '2017-05-04 01:01:00.000', null),\n            |  (\"val2e\", 8S, null, 419L, float(17), 25D, 26E2BD, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),\n            |  (\"val1f\", 19S, null, 519L, float(17), 25D, 26E2BD, timestamp '2014-05-04 01:01:00.000', date '2014-05-04'),\n            |  (\"val1b\", 10S, 12, 19L, float(17), 25D, 26E2BD, timestamp '2014-06-04 01:01:00.000', date '2014-06-04'),\n            |  (\"val1b\", 8S, 16, 19L, float(17), 25D, 26E2BD, timestamp '2014-07-04 01:01:00.000', date '2014-07-04'),\n            |  (\"val1c\", 12S, 16, 19L, float(17), 25D, 26E2BD, timestamp '2014-08-04 01:01:00.000', date '2014-08-05'),\n            |  (\"val1e\", 8S, null, 19L, float(17), 25D, 26E2BD, timestamp '2014-09-04 01:01:00.000', date '2014-09-04'),\n            |  (\"val1f\", 19S, null, 19L, float(17), 25D, 26E2BD, timestamp '2014-10-04 01:01:00.000', date '2014-10-04'),\n            |  (\"val1b\", null, 16, 19L, float(17), 25D, 26E2BD, timestamp '2014-05-04 01:01:00.000', null)\n            |  as t2(t2a, t2b, t2c, t2d, t2e, t2f, t2g, t2h, t2i);\"\"\".stripMargin\n        sql(table1)\n        sql(table2)\n\n        val df = sql(\"\"\"SELECT *\n                       |FROM   t1\n                       |WHERE  t1c IN (SELECT t2c\n                       |               FROM   t2\n                       |               WHERE  t2b >= 8\n                       |               LIMIT  2\n                       |               OFFSET 2)\n                       |LIMIT 4\n                       |OFFSET 2;\"\"\".stripMargin)\n        checkSparkAnswer(df)\n      }\n    }\n  }\n\n  // Dataset.offset API is not available before Spark 3.4\n  test(\"offset\") {\n    withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      checkSparkAnswer(testData.offset(90))\n      checkSparkAnswer(arrayData.toDF().offset(99))\n      checkSparkAnswer(mapData.toDF().offset(99))\n    }\n  }\n\n  test(\"test BloomFilterMightContain can take a constant value input\") {\n    val table = \"test\"\n\n    withTable(table) {\n      sql(s\"create table $table(col1 long, col2 int) using parquet\")\n      sql(s\"insert into $table values (201, 1)\")\n      
checkSparkAnswerAndOperator(s\"\"\"\n           |SELECT might_contain(\n           |X'00000001000000050000000343A2EC6EA8C117E2D3CDB767296B144FC5BFBCED9737F267', col1) FROM $table\n           |\"\"\".stripMargin)\n    }\n  }\n\n  test(\"test NULL inputs for BloomFilterMightContain\") {\n    val table = \"test\"\n\n    withTable(table) {\n      sql(s\"create table $table(col1 long, col2 int) using parquet\")\n      sql(s\"insert into $table values (201, 1), (null, 2)\")\n      checkSparkAnswerAndOperator(s\"\"\"\n           |SELECT might_contain(null, null) both_null,\n           |       might_contain(null, 1L) null_bf,\n           |       might_contain(\n           |         X'00000001000000050000000343A2EC6EA8C117E2D3CDB767296B144FC5BFBCED9737F267', col1) null_value\n           |       FROM $table\n           |\"\"\".stripMargin)\n    }\n  }\n\n  test(\"test BloomFilterMightContain from random input\") {\n    val (longs, bfBytes) = bloomFilterFromRandomInput(10000, 10000)\n    val table = \"test\"\n\n    withTable(table) {\n      sql(s\"create table $table(col1 long, col2 binary) using parquet\")\n      spark\n        .createDataset(longs)\n        .map(x => (x, bfBytes))\n        .toDF(\"col1\", \"col2\")\n        .write\n        .insertInto(table)\n      val expr = BloomFilterMightContain(lit(bfBytes).expr, col(\"col1\").expr)\n      val df = spark.table(table).select(getColumnFromExpression(expr))\n      checkSparkAnswerAndOperator(df)\n      // check with scalar subquery\n      checkSparkAnswerAndOperator(s\"\"\"\n           |SELECT might_contain((select first(col2) as col2 from $table), col1) FROM $table\n           |\"\"\".stripMargin)\n    }\n  }\n\n  private def bloomFilterFromRandomInput(\n      expectedItems: Long,\n      expectedBits: Long): (Seq[Long], Array[Byte]) = {\n    val bf = BloomFilter.create(expectedItems, expectedBits)\n    val longs = (0 until expectedItems.toInt).map(_ => Random.nextLong())\n    longs.foreach(bf.put)\n    val os = new ByteArrayOutputStream()\n    bf.writeTo(os)\n    (longs, os.toByteArray)\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/exec/CometExecSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport java.sql.Date\nimport java.time.{Duration, Period}\n\nimport scala.util.Random\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql._\nimport org.apache.spark.sql.catalyst.{FunctionIdentifier, TableIdentifier}\nimport org.apache.spark.sql.catalyst.catalog.{BucketSpec, CatalogStatistics, CatalogTable}\nimport org.apache.spark.sql.catalyst.expressions.{Expression, ExpressionInfo, Hex}\nimport org.apache.spark.sql.catalyst.expressions.aggregate.{AggregateMode, BloomFilterAggregate}\nimport org.apache.spark.sql.comet._\nimport org.apache.spark.sql.comet.execution.shuffle.{CometColumnarShuffle, CometShuffleExchangeExec}\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.adaptive.{AdaptiveSparkPlanExec, BroadcastQueryStageExec}\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetFileFormat\nimport org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ReusedExchangeExec, ShuffleExchangeExec}\nimport org.apache.spark.sql.execution.joins.{BroadcastHashJoinExec, BroadcastNestedLoopJoinExec, CartesianProductExec, SortMergeJoinExec}\nimport org.apache.spark.sql.execution.reuse.ReuseExchangeAndSubquery\nimport org.apache.spark.sql.execution.window.WindowExec\nimport org.apache.spark.sql.functions._\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.internal.SQLConf.SESSION_LOCAL_TIMEZONE\nimport org.apache.spark.unsafe.types.UTF8String\n\nimport org.apache.comet.{CometConf, CometExecIterator, ExtendedExplainInfo}\nimport org.apache.comet.CometSparkSessionExtensions.{isSpark35Plus, isSpark40Plus}\nimport org.apache.comet.serde.Config.ConfigMap\nimport org.apache.comet.testing.{DataGenOptions, ParquetGenerator, SchemaGenOptions}\n\nclass CometExecSuite extends CometTestBase {\n\n  import testImplicits._\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_AUTO) {\n        testFun\n      }\n    }\n  }\n\n  test(\"SQLConf serde\") {\n\n    def roundtrip = {\n      val protobuf = CometExecIterator.serializeCometSQLConfs()\n      ConfigMap.parseFrom(protobuf)\n    }\n\n    // test not setting the config\n    val deserialized: ConfigMap = roundtrip\n    assert(null == deserialized.getEntriesMap.get(CometConf.COMET_EXPLAIN_NATIVE_ENABLED.key))\n\n    // test explicitly setting the config\n    for (value <- Seq(\"true\", \"false\")) 
{\n      withSQLConf(CometConf.COMET_EXPLAIN_NATIVE_ENABLED.key -> value) {\n        val deserialized: ConfigMap = roundtrip\n        assert(\n          value == deserialized.getEntriesMap.get(CometConf.COMET_EXPLAIN_NATIVE_ENABLED.key))\n      }\n    }\n  }\n\n  test(\"TopK operator should return correct results on dictionary column with nulls\") {\n    withSQLConf(SQLConf.USE_V1_SOURCE_LIST.key -> \"\") {\n      withTable(\"test_data\") {\n        val data = (0 to 8000)\n          .flatMap(_ => Seq((1, null, \"A\"), (2, \"BBB\", \"B\"), (3, \"BBB\", \"B\"), (4, \"BBB\", \"B\")))\n        val tableDF = spark.sparkContext\n          .parallelize(data, 3)\n          .toDF(\"c1\", \"c2\", \"c3\")\n        tableDF\n          .coalesce(1)\n          .sortWithinPartitions(\"c1\")\n          .writeTo(\"test_data\")\n          .using(\"parquet\")\n          .create()\n\n        val df = sql(\"SELECT * FROM test_data ORDER BY c1 LIMIT 3\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"DPP fallback\") {\n    withTempDir { path =>\n      // create test data\n      val factPath = s\"${path.getAbsolutePath}/fact.parquet\"\n      val dimPath = s\"${path.getAbsolutePath}/dim.parquet\"\n      withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n        val one_day = 24 * 60 * 60000\n        val fact = Range(0, 100)\n          .map(i => (i, new java.sql.Date(System.currentTimeMillis() + i * one_day), i.toString))\n          .toDF(\"fact_id\", \"fact_date\", \"fact_str\")\n        fact.write.partitionBy(\"fact_date\").parquet(factPath)\n        val dim = Range(0, 10)\n          .map(i => (i, new java.sql.Date(System.currentTimeMillis() + i * one_day), i.toString))\n          .toDF(\"dim_id\", \"dim_date\", \"dim_str\")\n        dim.write.parquet(dimPath)\n      }\n\n      // note that this test does not trigger DPP with v2 data source\n      Seq(\"parquet\").foreach { v1List =>\n        withSQLConf(\n          SQLConf.USE_V1_SOURCE_LIST.key -> v1List,\n          CometConf.COMET_DPP_FALLBACK_ENABLED.key -> \"true\") {\n          spark.read.parquet(factPath).createOrReplaceTempView(\"dpp_fact\")\n          spark.read.parquet(dimPath).createOrReplaceTempView(\"dpp_dim\")\n          val df =\n            spark.sql(\n              \"select * from dpp_fact join dpp_dim on fact_date = dim_date where dim_id > 7\")\n          val (_, cometPlan) = checkSparkAnswer(df)\n          val infos = new ExtendedExplainInfo().generateExtendedInfo(cometPlan)\n          assert(infos.contains(\"Dynamic Partition Pruning is not supported\"))\n        }\n      }\n    }\n  }\n\n  test(\"DPP fallback avoids inefficient Comet shuffle (#3874)\") {\n    withTempDir { path =>\n      val factPath = s\"${path.getAbsolutePath}/fact.parquet\"\n      val dimPath = s\"${path.getAbsolutePath}/dim.parquet\"\n      withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n        val one_day = 24 * 60 * 60000\n        val fact = Range(0, 100)\n          .map(i => (i, new java.sql.Date(System.currentTimeMillis() + i * one_day), i.toString))\n          .toDF(\"fact_id\", \"fact_date\", \"fact_str\")\n        fact.write.partitionBy(\"fact_date\").parquet(factPath)\n        val dim = Range(0, 10)\n          .map(i => (i, new java.sql.Date(System.currentTimeMillis() + i * one_day), i.toString))\n          .toDF(\"dim_id\", \"dim_date\", \"dim_str\")\n        dim.write.parquet(dimPath)\n      }\n\n      // Force sort-merge join to get a shuffle exchange above the DPP scan\n      Seq(\"parquet\").foreach { 
v1List =>\n        withSQLConf(\n          SQLConf.USE_V1_SOURCE_LIST.key -> v1List,\n          SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n          CometConf.COMET_DPP_FALLBACK_ENABLED.key -> \"true\") {\n          spark.read.parquet(factPath).createOrReplaceTempView(\"dpp_fact2\")\n          spark.read.parquet(dimPath).createOrReplaceTempView(\"dpp_dim2\")\n          val df =\n            spark.sql(\n              \"select * from dpp_fact2 join dpp_dim2 on fact_date = dim_date where dim_id > 7\")\n          val (_, cometPlan) = checkSparkAnswer(df)\n\n          // Verify no CometShuffleExchangeExec wraps the DPP stage\n          assert(\n            !cometPlan.toString().contains(\"CometColumnarShuffle\"),\n            \"Should not use Comet columnar shuffle for stages with DPP scans\")\n        }\n      }\n    }\n  }\n\n  test(\"ShuffleQueryStageExec could be direct child node of CometBroadcastExchangeExec\") {\n    withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      val table = \"src\"\n      withTable(table) {\n        withView(\"lv_noalias\") {\n          sql(s\"CREATE TABLE $table (key INT, value STRING) USING PARQUET\")\n          sql(s\"INSERT INTO $table VALUES(238, 'val_238')\")\n\n          sql(\n            \"CREATE VIEW lv_noalias AS SELECT myTab.* FROM src \" +\n              \"LATERAL VIEW explode(map('key1', 100, 'key2', 200)) myTab LIMIT 2\")\n          val df = sql(\"SELECT * FROM lv_noalias a JOIN lv_noalias b ON a.key=b.key\");\n          checkSparkAnswer(df)\n        }\n      }\n    }\n  }\n\n  // repro for https://github.com/apache/datafusion-comet/issues/1251\n  test(\"subquery/exists-subquery/exists-orderby-limit.sql\") {\n    withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      val table = \"src\"\n      withTable(table) {\n        sql(s\"CREATE TABLE $table (key INT, value STRING) USING PARQUET\")\n        sql(s\"INSERT INTO $table VALUES(238, 'val_238')\")\n\n        // the subquery returns the distinct group by values\n        checkSparkAnswerAndOperator(s\"\"\"SELECT * FROM $table\n             |WHERE EXISTS (SELECT MAX(key)\n             |FROM $table\n             |GROUP BY value\n             |LIMIT 1\n             |OFFSET 2)\"\"\".stripMargin)\n\n        checkSparkAnswerAndOperator(s\"\"\"SELECT * FROM $table\n             |WHERE NOT EXISTS (SELECT MAX(key)\n             |FROM $table\n             |GROUP BY value\n             |LIMIT 1\n             |OFFSET 2)\"\"\".stripMargin)\n      }\n    }\n  }\n\n  test(\"Sort on single struct should fallback to Spark\") {\n    withSQLConf(\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n      SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      val data1 =\n        Seq(Tuple1(null), Tuple1((1, \"a\")), Tuple1((2, null)), Tuple1((3, \"b\")), Tuple1(null))\n\n      withParquetFile(data1) { file =>\n        readParquetFile(file) { df =>\n          val sort = df.sort(\"_1\")\n          checkSparkAnswer(sort)\n        }\n      }\n\n      val data2 =\n        Seq(\n          Tuple2(null, 1),\n          Tuple2((1, \"a\"), 2),\n          Tuple2((2, null), 3),\n          Tuple2((3, \"b\"), 5),\n          Tuple2(null, 6))\n\n      withParquetFile(data2) { file =>\n        readParquetFile(file) { df =>\n          val sort = df.sort(\"_1\")\n          checkSparkAnswer(sort)\n        }\n      }\n    }\n 
 }\n\n  test(\"Sort on array of boolean\") {\n    withSQLConf(\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n      SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n\n      sql(\"\"\"\n          |CREATE OR REPLACE TEMPORARY VIEW test_list AS SELECT * FROM VALUES\n          | (array(true)),\n          | (array(false)),\n          | (array(false)),\n          | (array(false)) AS test(arr)\n          |\"\"\".stripMargin)\n\n      val df = sql(\"\"\"\n          SELECT * FROM test_list ORDER BY arr\n          |\"\"\".stripMargin)\n      val sort = stripAQEPlan(df.queryExecution.executedPlan).collect { case s: CometSortExec =>\n        s\n      }.headOption\n      assert(sort.isDefined)\n    }\n  }\n\n  test(\"Sort on TimestampNTZType\") {\n    withSQLConf(\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n      SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n\n      sql(\"\"\"\n          |CREATE OR REPLACE TEMPORARY VIEW test_list AS SELECT * FROM VALUES\n          | (TIMESTAMP_NTZ'2025-08-29 00:00:00'),\n          | (TIMESTAMP_NTZ'2023-07-07 00:00:00'),\n          | (convert_timezone('Asia/Kathmandu', 'UTC', TIMESTAMP_NTZ'2023-07-07 00:00:00')),\n          | (convert_timezone('America/Los_Angeles', 'UTC', TIMESTAMP_NTZ'2023-07-07 00:00:00')),\n          | (TIMESTAMP_NTZ'1969-12-31 00:00:00') AS test(ts_ntz)\n          |\"\"\".stripMargin)\n\n      val df = sql(\"\"\"\n          SELECT * FROM test_list ORDER BY ts_ntz\n          |\"\"\".stripMargin)\n      checkSparkAnswer(df)\n      val sort = stripAQEPlan(df.queryExecution.executedPlan).collect { case s: CometSortExec =>\n        s\n      }.headOption\n      assert(sort.isDefined)\n    }\n  }\n\n  test(\"Sort on map w/ TimestampNTZType values\") {\n    withSQLConf(\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n      SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n\n      sql(\"\"\"\n          |CREATE OR REPLACE TEMPORARY VIEW test_map AS SELECT * FROM VALUES\n          | (map('a', TIMESTAMP_NTZ'2025-08-29 00:00:00')),\n          | (map('b', TIMESTAMP_NTZ'2023-07-07 00:00:00')),\n          | (map('c', convert_timezone('Asia/Kathmandu', 'UTC', TIMESTAMP_NTZ'2023-07-07 00:00:00'))),\n          | (map('d', convert_timezone('America/Los_Angeles', 'UTC', TIMESTAMP_NTZ'2023-07-07 00:00:00'))) AS test(map)\n          |\"\"\".stripMargin)\n\n      val df = sql(\"\"\"\n          SELECT * FROM test_map ORDER BY map_values(map) DESC\n          |\"\"\".stripMargin)\n      checkSparkAnswer(df)\n      val sort = stripAQEPlan(df.queryExecution.executedPlan).collect { case s: CometSortExec =>\n        s\n      }.headOption\n      assert(sort.isDefined)\n    }\n  }\n\n  test(\"Sort on map w/ boolean values\") {\n    withSQLConf(\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n      SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> \"false\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      
CometConf.COMET_EXEC_SORT_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n\n      sql(\"\"\"\n          |CREATE OR REPLACE TEMPORARY VIEW test_map AS SELECT * FROM VALUES\n          | (map('a', true)),\n          | (map('b', true)),\n          | (map('c', false)),\n          | (map('d', true)) AS test(map)\n          |\"\"\".stripMargin)\n\n      val df = sql(\"\"\"\n          SELECT * FROM test_map ORDER BY map_values(map) DESC\n          |\"\"\".stripMargin)\n      val sort = stripAQEPlan(df.queryExecution.executedPlan).collect { case s: CometSortExec =>\n        s\n      }.headOption\n      assert(sort.isDefined)\n    }\n  }\n\n  test(\"subquery execution under CometTakeOrderedAndProjectExec should not fail\") {\n    assume(isSpark35Plus, \"SPARK-45584 is fixed in Spark 3.5+\")\n\n    withTable(\"t1\") {\n      sql(\"\"\"\n          |CREATE TABLE t1 USING PARQUET\n          |AS SELECT * FROM VALUES\n          |(1, \"a\"),\n          |(2, \"a\"),\n          |(3, \"a\") t(id, value)\n          |\"\"\".stripMargin)\n      val df = sql(\"\"\"\n          |WITH t2 AS (\n          |  SELECT * FROM t1 ORDER BY id\n          |)\n          |SELECT *, (SELECT COUNT(*) FROM t2) FROM t2 LIMIT 10\n          |\"\"\".stripMargin)\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"fix CometNativeExec.doCanonicalize for ReusedExchangeExec\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_BROADCAST_FORCE_ENABLED.key -> \"true\",\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\") {\n      withTable(\"td\") {\n        testData\n          .withColumn(\"bucket\", $\"key\" % 3)\n          .write\n          .mode(SaveMode.Overwrite)\n          .bucketBy(2, \"bucket\")\n          .format(\"parquet\")\n          .saveAsTable(\"td\")\n        val df = sql(\"\"\"\n            |SELECT t1.key, t2.key, t3.key\n            |FROM td AS t1\n            |JOIN td AS t2 ON t2.key = t1.key\n            |JOIN td AS t3 ON t3.key = t2.key\n            |WHERE t1.bucket = 1 AND t2.bucket = 1 AND t3.bucket = 1\n            |\"\"\".stripMargin)\n        val reusedPlan = ReuseExchangeAndSubquery.apply(df.queryExecution.executedPlan)\n        val reusedExchanges = collect(reusedPlan) { case r: ReusedExchangeExec =>\n          r\n        }\n        assert(reusedExchanges.size == 1)\n        assert(reusedExchanges.head.child.isInstanceOf[CometBroadcastExchangeExec])\n      }\n    }\n  }\n\n  test(\"CometShuffleExchangeExec logical link should be correct\") {\n    withTempView(\"v\") {\n      spark.sparkContext\n        .parallelize((1 to 4).map(i => TestData(i, i.toString)), 2)\n        .toDF(\"c1\", \"c2\")\n        .createOrReplaceTempView(\"v\")\n\n      Seq(\"native\", \"jvm\").foreach { columnarShuffleMode =>\n        withSQLConf(\n          SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\",\n          CometConf.COMET_SHUFFLE_MODE.key -> columnarShuffleMode) {\n          val df = sql(\"SELECT * FROM v where c1 = 1 order by c1, c2\")\n          val shuffle = find(df.queryExecution.executedPlan) {\n            case _: CometShuffleExchangeExec if columnarShuffleMode.equalsIgnoreCase(\"jvm\") =>\n              true\n            case _: ShuffleExchangeExec if !columnarShuffleMode.equalsIgnoreCase(\"jvm\") => true\n            case _ => false\n          }.get\n          assert(shuffle.logicalLink.isEmpty)\n        }\n      }\n    }\n  }\n\n  test(\"Ensure the correct outputPartitioning of CometSort\") {\n    withTable(\"test_data\") {\n      val tableDF = spark.sparkContext\n  
      .parallelize(\n          (1 to 10).map { i =>\n            (if (i > 4) 5 else i, i.toString, Date.valueOf(s\"${2020 + i}-$i-$i\"))\n          },\n          3)\n        .toDF(\"id\", \"data\", \"day\")\n      tableDF.write.saveAsTable(\"test_data\")\n\n      val df = sql(\"SELECT * FROM test_data\")\n        .repartition($\"data\")\n        .sortWithinPartitions($\"id\", $\"data\", $\"day\")\n      df.collect()\n      val sort = stripAQEPlan(df.queryExecution.executedPlan).collect { case s: CometSortExec =>\n        s\n      }.head\n      assert(sort.outputPartitioning == sort.child.outputPartitioning)\n    }\n  }\n\n  test(\"try_sum should return null if overflow happens before merging\") {\n    val longDf = Seq(Long.MaxValue, Long.MaxValue, 2).toDF(\"v\")\n    val yearMonthDf = Seq(Int.MaxValue, Int.MaxValue, 2)\n      .map(Period.ofMonths)\n      .toDF(\"v\")\n    val dayTimeDf = Seq(106751991L, 106751991L, 2L)\n      .map(Duration.ofDays)\n      .toDF(\"v\")\n    Seq(longDf, yearMonthDf, dayTimeDf).foreach { df =>\n      checkSparkAnswer(df.repartitionByRange(2, col(\"v\")).selectExpr(\"try_sum(v)\"))\n    }\n  }\n\n  test(\"Fix corrupted AggregateMode when transforming plan parameters\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"table\") {\n      val df = sql(\"SELECT * FROM table\").groupBy($\"_1\").agg(sum(\"_2\"))\n      val agg = stripAQEPlan(df.queryExecution.executedPlan).collectFirst {\n        case s: CometHashAggregateExec => s\n      }.get\n\n      assert(agg.mode.isDefined && agg.mode.get.isInstanceOf[AggregateMode])\n      val newAgg = agg.cleanBlock().asInstanceOf[CometHashAggregateExec]\n      assert(newAgg.mode.isDefined && newAgg.mode.get.isInstanceOf[AggregateMode])\n    }\n  }\n\n  test(\"CometBroadcastExchangeExec\") {\n    withSQLConf(CometConf.COMET_EXEC_BROADCAST_FORCE_ENABLED.key -> \"true\") {\n      withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl_a\") {\n        withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl_b\") {\n          val df = sql(\n            \"SELECT tbl_a._1, tbl_b._2 FROM tbl_a JOIN tbl_b \" +\n              \"WHERE tbl_a._1 > tbl_a._2 LIMIT 2\")\n\n          val nativeBroadcast = find(df.queryExecution.executedPlan) {\n            case _: CometBroadcastExchangeExec => true\n            case _ => false\n          }.get.asInstanceOf[CometBroadcastExchangeExec]\n\n          val numParts = nativeBroadcast.executeColumnar().getNumPartitions\n\n          val rows = nativeBroadcast.executeCollect().toSeq.sortBy(row => row.getInt(0))\n          val rowContents = rows.map(row => row.getInt(0))\n          val expected = (0 until numParts).flatMap(_ => (0 until 5).map(i => i + 1)).sorted\n\n          assert(rowContents === expected)\n\n          val metrics = nativeBroadcast.metrics\n          assert(metrics(\"numCoalescedBatches\").value == 5L)\n          assert(metrics(\"numCoalescedRows\").value == 5L)\n        }\n      }\n    }\n  }\n\n  test(\"CometBroadcastExchangeExec: empty broadcast\") {\n    withSQLConf(CometConf.COMET_EXEC_BROADCAST_FORCE_ENABLED.key -> \"true\") {\n      withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl_a\") {\n        withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl_b\") {\n          val df = sql(\n            \"SELECT /*+ BROADCAST(a) */ *\" +\n              \" FROM (SELECT * FROM tbl_a WHERE _1 < 0) a JOIN tbl_b b\" +\n              \" ON a._1 = b._1\")\n          val nativeBroadcast = find(df.queryExecution.executedPlan) {\n            case _: 
CometBroadcastExchangeExec => true\n            case _ => false\n          }.get.asInstanceOf[CometBroadcastExchangeExec]\n          val rows = nativeBroadcast.executeCollect()\n          assert(rows.isEmpty)\n\n          val metrics = nativeBroadcast.metrics\n          assert(metrics(\"numCoalescedBatches\").value == 0L)\n          assert(metrics(\"numCoalescedRows\").value == 0L)\n        }\n      }\n    }\n  }\n\n  test(\"scalar subquery\") {\n    val dataTypes =\n      Seq(\n        \"BOOLEAN\",\n        \"BYTE\",\n        \"SHORT\",\n        \"INT\",\n        \"BIGINT\",\n        \"FLOAT\",\n        \"DOUBLE\",\n        // \"DATE\": TODO: needs to address issue #1364 first\n        // \"TIMESTAMP\", TODO: needs to address issue #1364 first\n        \"STRING\",\n        \"BINARY\",\n        \"DECIMAL(38, 10)\")\n    dataTypes.map { subqueryType =>\n      withSQLConf(\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n        CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n        withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n          var column1 = s\"CAST(max(_1) AS $subqueryType)\"\n          if (subqueryType == \"BINARY\") {\n            // arrow-rs doesn't support casting integer to binary yet.\n            // We added it to upstream but it's not released yet.\n            column1 = \"CAST(CAST(max(_1) AS STRING) AS BINARY)\"\n          }\n\n          val df1 = sql(s\"SELECT (SELECT $column1 FROM tbl) AS a, _1, _2 FROM tbl\")\n          checkSparkAnswerAndOperator(df1)\n\n          var column2 = s\"CAST(_1 AS $subqueryType)\"\n          if (subqueryType == \"BINARY\") {\n            // arrow-rs doesn't support casting integer to binary yet.\n            // We added it to upstream but it's not released yet.\n            column2 = \"CAST(CAST(_1 AS STRING) AS BINARY)\"\n          }\n\n          val df2 = sql(s\"SELECT _1, _2 FROM tbl WHERE $column2 > (SELECT $column1 FROM tbl)\")\n          checkSparkAnswerAndOperator(df2)\n\n          // Non-correlated exists subquery will be rewritten to scalar subquery\n          val df3 = sql(\n            \"SELECT * FROM tbl WHERE EXISTS \" +\n              s\"(SELECT $column2 FROM tbl WHERE _1 > 1)\")\n          checkSparkAnswerAndOperator(df3)\n\n          // Null value\n          column1 = s\"CAST(NULL AS $subqueryType)\"\n          if (subqueryType == \"BINARY\") {\n            column1 = \"CAST(CAST(NULL AS STRING) AS BINARY)\"\n          }\n\n          val df4 = sql(s\"SELECT (SELECT $column1 FROM tbl LIMIT 1) AS a, _1, _2 FROM tbl\")\n          checkSparkAnswerAndOperator(df4)\n        }\n      }\n    }\n  }\n\n  test(\"Comet native metrics: scan\") {\n    Seq(CometConf.SCAN_NATIVE_DATAFUSION, CometConf.SCAN_NATIVE_ICEBERG_COMPAT).foreach {\n      scanMode =>\n        withSQLConf(\n          CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n          CometConf.COMET_NATIVE_SCAN_IMPL.key -> scanMode) {\n          withTempDir { dir =>\n            val path = new Path(dir.toURI.toString, \"native-scan.parquet\")\n            makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = true, 10000)\n            withParquetTable(path.toString, \"tbl\") {\n              val df = sql(\"SELECT * FROM tbl\")\n              df.collect()\n\n              val scan = find(df.queryExecution.executedPlan)(s =>\n                s.isInstanceOf[CometScanExec] || s.isInstanceOf[CometNativeScanExec])\n              assert(scan.isDefined, s\"Expected to find a Comet scan node for $scanMode\")\n              val metrics = 
scan.get.metrics\n\n              assert(\n                metrics.contains(\"time_elapsed_scanning_total\"),\n                s\"[$scanMode] Missing time_elapsed_scanning_total. Available: ${metrics.keys}\")\n              assert(metrics.contains(\"bytes_scanned\"))\n              assert(metrics.contains(\"output_rows\"))\n              assert(metrics.contains(\"time_elapsed_opening\"))\n              assert(metrics.contains(\"time_elapsed_processing\"))\n              assert(metrics.contains(\"time_elapsed_scanning_until_data\"))\n              assert(\n                metrics(\"time_elapsed_scanning_total\").value > 0,\n                s\"[$scanMode] time_elapsed_scanning_total should be > 0\")\n              assert(\n                metrics(\"bytes_scanned\").value > 0,\n                s\"[$scanMode] bytes_scanned should be > 0\")\n              assert(metrics(\"output_rows\").value > 0, s\"[$scanMode] output_rows should be > 0\")\n              assert(\n                metrics(\"time_elapsed_opening\").value > 0,\n                s\"[$scanMode] time_elapsed_opening should be > 0\")\n              assert(\n                metrics(\"time_elapsed_processing\").value > 0,\n                s\"[$scanMode] time_elapsed_processing should be > 0\")\n              assert(\n                metrics(\"time_elapsed_scanning_until_data\").value > 0,\n                s\"[$scanMode] time_elapsed_scanning_until_data should be > 0\")\n            }\n          }\n        }\n    }\n  }\n\n  test(\"Comet native metrics: project and filter\") {\n    withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n      withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n        val df = sql(\"SELECT _1 + 1, _2 + 2 FROM tbl WHERE _1 > 3\")\n        df.collect()\n\n        var metrics = find(df.queryExecution.executedPlan) {\n          case _: CometProjectExec => true\n          case _ => false\n        }.map(_.metrics).get\n\n        assert(metrics.contains(\"output_rows\"))\n        assert(metrics(\"output_rows\").value == 1L)\n\n        metrics = find(df.queryExecution.executedPlan) {\n          case _: CometFilterExec => true\n          case _ => false\n        }.map(_.metrics).get\n\n        assert(metrics.contains(\"output_rows\"))\n        assert(metrics(\"output_rows\").value == 1L)\n      }\n    }\n  }\n\n  test(\"Comet native metrics: SortMergeJoin\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      \"spark.sql.adaptive.autoBroadcastJoinThreshold\" -> \"-1\",\n      \"spark.sql.autoBroadcastJoinThreshold\" -> \"-1\",\n      \"spark.sql.join.preferSortMergeJoin\" -> \"true\") {\n      withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl1\") {\n        withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl2\") {\n          val df = sql(\"SELECT * FROM tbl1 INNER JOIN tbl2 ON tbl1._1 = tbl2._1\")\n          df.collect()\n\n          val metrics = find(df.queryExecution.executedPlan) {\n            case _: CometSortMergeJoinExec => true\n            case _ => false\n          }.map(_.metrics).get\n\n          assert(metrics.contains(\"input_batches\"))\n          assert(metrics(\"input_batches\").value == 2L)\n          assert(metrics.contains(\"input_rows\"))\n          assert(metrics(\"input_rows\").value == 10L)\n          assert(metrics.contains(\"output_batches\"))\n          assert(metrics(\"output_batches\").value == 1L)\n          assert(metrics.contains(\"output_rows\"))\n          assert(metrics(\"output_rows\").value == 5L)\n          
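// Timing and memory metrics come from the native DataFusion sort-merge join; the assertions\n          // below only check that they are non-trivial rather than pinning exact values.\n          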
assert(metrics.contains(\"peak_mem_used\"))\n          assert(metrics(\"peak_mem_used\").value > 1L)\n          assert(metrics.contains(\"join_time\"))\n          assert(metrics(\"join_time\").value > 1L)\n          assert(metrics.contains(\"spill_count\"))\n          assert(metrics(\"spill_count\").value == 0)\n        }\n      }\n    }\n  }\n\n  test(\"Comet native metrics: HashJoin\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"t1\") {\n      withParquetTable((0 until 5).map(i => (i, i + 1)), \"t2\") {\n        val df = sql(\"SELECT /*+ SHUFFLE_HASH(t1) */ * FROM t1 INNER JOIN t2 ON t1._1 = t2._1\")\n        df.collect()\n\n        val metrics = find(df.queryExecution.executedPlan) {\n          case _: CometHashJoinExec => true\n          case _ => false\n        }.map(_.metrics).get\n\n        assert(metrics.contains(\"build_time\"))\n        assert(metrics(\"build_time\").value > 1L)\n        assert(metrics.contains(\"build_input_batches\"))\n        assert(metrics(\"build_input_batches\").value == 5L)\n        assert(metrics.contains(\"build_mem_used\"))\n        assert(metrics(\"build_mem_used\").value > 1L)\n        assert(metrics.contains(\"build_input_rows\"))\n        assert(metrics(\"build_input_rows\").value == 5L)\n        assert(metrics.contains(\"input_batches\"))\n        assert(metrics(\"input_batches\").value == 5L)\n        assert(metrics.contains(\"input_rows\"))\n        assert(metrics(\"input_rows\").value == 5L)\n        assert(metrics.contains(\"output_batches\"))\n        assert(metrics(\"output_batches\").value == 1L)\n        assert(metrics.contains(\"output_rows\"))\n        assert(metrics(\"output_rows\").value == 5L)\n        assert(metrics.contains(\"join_time\"))\n        assert(metrics(\"join_time\").value > 1L)\n      }\n    }\n  }\n\n  test(\"Comet native metrics: BroadcastHashJoin\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"t1\") {\n      withParquetTable((0 until 5).map(i => (i, i + 1)), \"t2\") {\n        val df = sql(\"SELECT /*+ BROADCAST(t1) */ * FROM t1 INNER JOIN t2 ON t1._1 = t2._1\")\n        df.collect()\n\n        val metrics = find(df.queryExecution.executedPlan) {\n          case _: CometBroadcastHashJoinExec => true\n          case _ => false\n        }.map(_.metrics).get\n\n        assert(metrics.contains(\"build_time\"))\n        assert(metrics(\"build_time\").value > 1L)\n        assert(metrics.contains(\"build_input_batches\"))\n        assert(metrics(\"build_input_batches\").value == 5L)\n        assert(metrics.contains(\"build_mem_used\"))\n        assert(metrics(\"build_mem_used\").value > 1L)\n        assert(metrics.contains(\"build_input_rows\"))\n        assert(metrics(\"build_input_rows\").value == 25L)\n        assert(metrics.contains(\"input_batches\"))\n        assert(metrics(\"input_batches\").value == 5L)\n        assert(metrics.contains(\"input_rows\"))\n        assert(metrics(\"input_rows\").value == 5L)\n        assert(metrics.contains(\"output_batches\"))\n        assert(metrics(\"output_batches\").value == 5L)\n        assert(metrics.contains(\"output_rows\"))\n        assert(metrics(\"output_rows\").value == 5L)\n        assert(metrics.contains(\"join_time\"))\n        assert(metrics(\"join_time\").value > 1L)\n      }\n    }\n  }\n\n  test(\n    \"fix: ReusedExchangeExec + CometShuffleExchangeExec under QueryStageExec \" +\n      \"should be CometRoot\") {\n    val tableName = \"table1\"\n    val dim = \"dim\"\n\n    withSQLConf(\n      SQLConf.EXCHANGE_REUSE_ENABLED.key -> 
\"true\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      withTable(tableName, dim) {\n\n        sql(\n          s\"CREATE TABLE $tableName (id BIGINT, price FLOAT, date DATE, ts TIMESTAMP) USING parquet \" +\n            \"PARTITIONED BY (id)\")\n        sql(s\"CREATE TABLE $dim (id BIGINT, date DATE) USING parquet\")\n\n        spark\n          .range(1, 100)\n          .withColumn(\"date\", date_add(expr(\"DATE '1970-01-01'\"), expr(\"CAST(id % 4 AS INT)\")))\n          .withColumn(\"ts\", expr(\"TO_TIMESTAMP(date)\"))\n          .withColumn(\"price\", expr(\"CAST(id AS FLOAT)\"))\n          .select(\"id\", \"price\", \"date\", \"ts\")\n          .coalesce(1)\n          .write\n          .mode(SaveMode.Append)\n          .partitionBy(\"id\")\n          .saveAsTable(tableName)\n\n        spark\n          .range(1, 10)\n          .withColumn(\"date\", expr(\"DATE '1970-01-02'\"))\n          .select(\"id\", \"date\")\n          .coalesce(1)\n          .write\n          .mode(SaveMode.Append)\n          .saveAsTable(dim)\n\n        val query =\n          s\"\"\"\n             |SELECT $tableName.id, sum(price) as sum_price\n             |FROM $tableName, $dim\n             |WHERE $tableName.id = $dim.id AND $tableName.date = $dim.date\n             |GROUP BY $tableName.id HAVING sum(price) > (\n             |  SELECT sum(price) * 0.0001 FROM $tableName, $dim WHERE $tableName.id = $dim.id AND $tableName.date = $dim.date\n             |  )\n             |ORDER BY sum_price\n             |\"\"\".stripMargin\n\n        val df = sql(query)\n        checkSparkAnswerAndOperator(df)\n        val exchanges = stripAQEPlan(df.queryExecution.executedPlan).collect {\n          case s: CometShuffleExchangeExec if s.shuffleType == CometColumnarShuffle =>\n            s\n        }\n        assert(exchanges.length == 4)\n      }\n    }\n  }\n\n  test(\"Comet Shuffled Join should be optimized to CometBroadcastHashJoin by AQE\") {\n    withSQLConf(\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"10485760\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n      withParquetTable((0 until 100).map(i => (i, i + 1)), \"tbl_a\") {\n        withParquetTable((0 until 100).map(i => (i, i + 2)), \"tbl_b\") {\n          withParquetTable((0 until 100).map(i => (i, i + 3)), \"tbl_c\") {\n            val df = sql(\"\"\"SELECT /*+ BROADCAST(c) */ a1, sum_b2, c._2 FROM (\n                |  SELECT a._1 a1, SUM(b._2) sum_b2 FROM tbl_a a\n                |  JOIN tbl_b b ON a._1 = b._1\n                |  GROUP BY a._1) t\n                |JOIN tbl_c c ON t.a1 = c._1\n                |\"\"\".stripMargin)\n            checkSparkAnswerAndOperator(df)\n\n            // Before AQE: 1 broadcast join\n            var broadcastHashJoinExec = stripAQEPlan(df.queryExecution.executedPlan).collect {\n              case s: CometBroadcastHashJoinExec => s\n            }\n            assert(broadcastHashJoinExec.length == 1)\n\n            // After AQE: shuffled join optimized to broadcast join\n            df.collect()\n            broadcastHashJoinExec = stripAQEPlan(df.queryExecution.executedPlan).collect {\n              case s: CometBroadcastHashJoinExec => s\n            }\n            
assert(broadcastHashJoinExec.length == 2)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"CometBroadcastExchange could be converted to rows using CometColumnarToRow\") {\n    withSQLConf(\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"10485760\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"auto\") {\n      withParquetTable((0 until 100).map(i => (i, i + 1)), \"tbl_a\") {\n        withParquetTable((0 until 100).map(i => (i, i + 2)), \"tbl_b\") {\n          withParquetTable((0 until 100).map(i => (i, i + 3)), \"tbl_c\") {\n            val df = sql(\"\"\"SELECT /*+ BROADCAST(c) */ a1, sum_b2, c._2 FROM (\n                |  SELECT a._1 a1, SUM(b._2) sum_b2 FROM tbl_a a\n                |  JOIN tbl_b b ON a._1 = b._1\n                |  GROUP BY a._1) t\n                |JOIN tbl_c c ON t.a1 = c._1\n                |\"\"\".stripMargin)\n            checkSparkAnswerAndOperator(df)\n\n            // Before AQE: one CometBroadcastExchange, no CometColumnarToRow\n            var columnarToRowExec: Seq[SparkPlan] =\n              stripAQEPlan(df.queryExecution.executedPlan).collect {\n                case s: CometColumnarToRowExec => s\n                case s: CometNativeColumnarToRowExec => s\n              }\n            assert(columnarToRowExec.isEmpty)\n\n            // Disable CometExecRule after the initial plan is generated. The CometSortMergeJoin and\n            // CometBroadcastHashJoin nodes in the initial plan will be converted to Spark BroadcastHashJoin\n            // during AQE. This causes CometBroadcastExchangeExec to be converted to rows so that it\n            // can be consumed by Spark BroadcastHashJoin.\n            withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n              df.collect()\n            }\n\n            // After AQE: CometBroadcastExchange has to be converted to rows to conform to Spark\n            // BroadcastHashJoin.\n            val plan = stripAQEPlan(df.queryExecution.executedPlan)\n            columnarToRowExec = plan.collect {\n              case s: CometColumnarToRowExec => s\n              case s: CometNativeColumnarToRowExec => s\n            }\n            assert(columnarToRowExec.length == 1)\n\n            // This ColumnarToRowExec should be a descendant of BroadcastHashJoinExec (possibly\n            // wrapped by InputAdapter for codegen).\n            val broadcastJoins = plan.collect { case b: BroadcastHashJoinExec => b }\n            assert(broadcastJoins.nonEmpty, s\"Expected BroadcastHashJoinExec in plan:\\n$plan\")\n            val hasC2RDescendant = broadcastJoins.exists { join =>\n              join.find {\n                case _: CometColumnarToRowExec | _: CometNativeColumnarToRowExec => true\n                case _ => false\n              }.isDefined\n            }\n            assert(\n              hasC2RDescendant,\n              \"BroadcastHashJoinExec should have a columnar-to-row descendant\")\n\n            // There should be a CometBroadcastExchangeExec under CometColumnarToRowExec\n            val broadcastQueryStage =\n              columnarToRowExec.head.find(_.isInstanceOf[BroadcastQueryStageExec])\n            assert(broadcastQueryStage.isDefined)\n            assert(\n              broadcastQueryStage.get\n                .asInstanceOf[BroadcastQueryStageExec]\n                .broadcast\n                .isInstanceOf[CometBroadcastExchangeExec])\n          }\n        }\n      
}\n    }\n  }\n\n  test(\"expand operator\") {\n    val data1 = (0 until 1000)\n      .map(_ % 5) // reduce value space to trigger dictionary encoding\n      .map(i => (i, i + 100, i + 10))\n    val data2 = (0 until 5).map(i => (i, i + 1, i * 1000))\n\n    Seq(data1, data2).foreach { tableData =>\n      withParquetTable(tableData, \"tbl\") {\n        val df = sql(\"SELECT _1, _2, SUM(_3) FROM tbl GROUP BY _1, _2 GROUPING SETS ((_1), (_2))\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"multiple distinct multiple columns sets\") {\n    withTable(\"agg2\") {\n      val data2 = Seq[(Integer, Integer, Integer)](\n        (1, 10, -10),\n        (null, -60, 60),\n        (1, 30, -30),\n        (1, 30, 30),\n        (2, 1, 1),\n        (null, -10, 10),\n        (2, -1, null),\n        (2, 1, 1),\n        (2, null, 1),\n        (null, 100, -10),\n        (3, null, 3),\n        (null, null, null),\n        (3, null, null)).toDF(\"key\", \"value1\", \"value2\")\n      data2.write.saveAsTable(\"agg2\")\n\n      val df = spark.sql(\"\"\"\n          |SELECT\n          |  key,\n          |  count(distinct value1),\n          |  sum(distinct value1),\n          |  count(distinct value2),\n          |  sum(distinct value2),\n          |  count(distinct value1, value2),\n          |  count(value1),\n          |  sum(value1),\n          |  count(value2),\n          |  sum(value2),\n          |  count(*),\n          |  count(1)\n          |FROM agg2\n          |GROUP BY key\n              \"\"\".stripMargin)\n\n      // The above query uses SUM(DISTINCT) and count(distinct value1, value2)\n      // which are not yet supported\n      checkSparkAnswer(df)\n      val subPlan = stripAQEPlan(df.queryExecution.executedPlan).collectFirst {\n        case s: CometHashAggregateExec => s\n      }\n      assert(subPlan.isDefined)\n      checkCometOperators(subPlan.get)\n    }\n  }\n\n  test(\"explain native plan\") {\n    withSQLConf(\n      CometConf.COMET_EXPLAIN_NATIVE_ENABLED.key -> \"true\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n      withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n        val df = sql(\"select * FROM tbl a join tbl b on a._1 = b._2\").select(\"a._1\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"transformed cometPlan\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n      val df = sql(\"select * FROM tbl where _1 >= 2\").select(\"_1\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"project\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n      val df = sql(\"SELECT _1 + 1, _2 + 2, _1 - 1, _2 * 2, _2 / 2 FROM tbl\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"project + filter on arrays\") {\n    withParquetTable((0 until 5).map(i => (i, i)), \"tbl\") {\n      val df = sql(\"SELECT _1 FROM tbl WHERE _1 == _2\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"project + filter\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n      val df = sql(\"SELECT _1 + 1, _2 + 2 FROM tbl WHERE _1 > 3\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"empty projection\") {\n    withParquetDataFrame((0 until 5).map(i => (i, i + 1))) { df =>\n      assert(df.where(\"_1 IS NOT NULL\").count() == 5)\n      checkSparkAnswerAndOperator(df)\n      assert(df.select().limit(2).count() === 2)\n    }\n  }\n\n  test(\"filter on string\") {\n    withParquetTable((0 until 5).map(i => (i, 
i.toString)), \"tbl\") {\n      val df = sql(\"SELECT _1 + 1, _2 FROM tbl WHERE _2 = '3'\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"filter on dictionary string\") {\n    val data = (0 until 1000)\n      .map(_ % 5) // reduce value space to trigger dictionary encoding\n      .map(i => (i.toString, (i + 100).toString))\n\n    withParquetTable(data, \"tbl\") {\n      val df = sql(\"SELECT _1, _2 FROM tbl WHERE _1 = '3'\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"TakeOrderedAndProjectExec with positive offset\") {\n    withParquetTable((0 until 50).map(i => (i, i + 42)), \"tbl\") {\n      val regularDfWithOffset = sql(\"SELECT _1, _2 FROM tbl order by _1 LIMIT 5 OFFSET 7\")\n      checkSparkAnswerAndOperator(regularDfWithOffset)\n      val dfWithOffsetOnly = sql(\"SELECT _1, _2 FROM tbl order by _1 OFFSET 12\")\n      checkSparkAnswerAndOperator(dfWithOffsetOnly)\n      val incompleteDf = sql(\"SELECT _1, _2 FROM tbl order by _1 LIMIT 5 OFFSET 47\")\n      checkSparkAnswerAndOperator(incompleteDf)\n      val emptyDf = sql(\"SELECT _1, _2 FROM tbl order by _1 LIMIT 50 OFFSET 1000\")\n      checkSparkAnswerAndOperator(emptyDf)\n    }\n  }\n\n  test(\"CollectLimitExec with positive offset\") {\n    withParquetTable((0 until 50).map(i => (i, i + 12)), \"tbl\") {\n      // disable top-k sort to switch from TakeOrderedAndProjectExec to CollectLimitExec in the execution plan\n      withSQLConf(SQLConf.TOP_K_SORT_FALLBACK_THRESHOLD.key -> \"0\") {\n        val regularDfWithOffset = sql(\"SELECT _1, _2 FROM tbl order by _2 LIMIT 4 OFFSET 4\")\n        checkSparkAnswerAndOperator(regularDfWithOffset)\n        val dfWithOffsetOnly = sql(\"SELECT _1, _2 FROM tbl order by _2 OFFSET 15\")\n        checkSparkAnswerAndOperator(dfWithOffsetOnly)\n        val incompleteDf = sql(\"SELECT _1, _2 FROM tbl order by _1 LIMIT 25 OFFSET 40\")\n        checkSparkAnswerAndOperator(incompleteDf)\n        val emptyDf = sql(\"SELECT _1, _2 FROM tbl order by _1 LIMIT 50 OFFSET 1000\")\n        checkSparkAnswerAndOperator(emptyDf)\n      }\n    }\n  }\n\n  test(\"GlobalLimit with positive offset\") {\n    withParquetTable((0 until 50).map(i => (i, i + 13)), \"tbl\") {\n      val regularDfWithOffset =\n        sql(\"SELECT _1, _2 FROM tbl order by _2 LIMIT 4 OFFSET 1\").groupBy(\"_1\").agg(sum(\"_2\"))\n      checkSparkAnswerAndOperator(regularDfWithOffset)\n      val dfWithOffsetOnly = sql(\"SELECT _1, _2 FROM tbl order by _2 OFFSET 15\").agg(sum(\"_2\"))\n      checkSparkAnswerAndOperator(dfWithOffsetOnly)\n    }\n  }\n\n  test(\"explicit zero limit and offset\") {\n    withParquetTable((0 until 50).map(i => (i, i + 8)), \"tbl\") {\n      withSQLConf(\n        \"spark.sql.optimizer.excludedRules\" -> \"org.apache.spark.sql.catalyst.optimizer.EliminateLimits\") {\n        val dfWithZeroLimitAndOffsetOrdered =\n          sql(\"SELECT _1, _2 FROM tbl order by _2 LIMIT 0 OFFSET 0\")\n        checkSparkAnswerAndOperator(dfWithZeroLimitAndOffsetOrdered)\n        val dfWithZeroLimitAndOffsetUnordered =\n          sql(\"SELECT _1, _2 FROM tbl LIMIT 0 OFFSET 0\")\n        checkSparkAnswerAndOperator(dfWithZeroLimitAndOffsetUnordered)\n      }\n    }\n  }\n\n  test(\"sort with dictionary\") {\n    withSQLConf(CometConf.COMET_BATCH_SIZE.key -> 8192.toString) {\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test\")\n        spark\n          .createDataFrame((0 until 1000).map(i => (i % 5, (i % 7).toLong)))\n          .write\n          .option(\"compression\", 
\"none\")\n          .parquet(path.toString)\n\n        spark\n          .createDataFrame((0 until 1000).map(i => (i % 3 + 7, (i % 13 + 10).toLong)))\n          .write\n          .option(\"compression\", \"none\")\n          .mode(SaveMode.Append)\n          .parquet(path.toString)\n\n        val df = spark.read\n          .format(\"parquet\")\n          .load(path.toString)\n          .sortWithinPartitions($\"_1\".asc, $\"_2\".desc)\n\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"final aggregation\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n      withParquetTable(\n        (0 until 100)\n          .map(_ => (Random.nextInt(), Random.nextInt() % 5)),\n        \"tbl\") {\n        val df = sql(\"SELECT _2, COUNT(*) FROM tbl GROUP BY _2\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"bloom_filter_agg\") {\n    val funcId_bloom_filter_agg = new FunctionIdentifier(\"bloom_filter_agg\")\n    spark.sessionState.functionRegistry.registerFunction(\n      funcId_bloom_filter_agg,\n      new ExpressionInfo(classOf[BloomFilterAggregate].getName, \"bloom_filter_agg\"),\n      (children: Seq[Expression]) =>\n        children.size match {\n          case 1 => new BloomFilterAggregate(children.head)\n          case 2 => new BloomFilterAggregate(children.head, children(1))\n          case 3 => new BloomFilterAggregate(children.head, children(1), children(2))\n        })\n\n    withParquetTable(\n      (0 until 100)\n        .map(_ => (Random.nextInt(), Random.nextInt() % 5)),\n      \"tbl\") {\n\n      (if (isSpark35Plus) Seq(\"tinyint\", \"short\", \"int\", \"long\", \"string\") else Seq(\"long\"))\n        .foreach { input_type =>\n          val df = sql(f\"SELECT bloom_filter_agg(cast(_2 as $input_type)) FROM tbl\")\n          checkSparkAnswerAndOperator(df)\n        }\n    }\n\n    spark.sessionState.functionRegistry.dropFunction(funcId_bloom_filter_agg)\n  }\n\n  test(\"sort (non-global)\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n      val df = sql(\"SELECT * FROM tbl\").sortWithinPartitions($\"_1\".desc)\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"global sort (columnar shuffle only)\") {\n    withSQLConf(\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n        val df = sql(\"SELECT * FROM tbl\").sort($\"_1\".desc)\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"spill sort with (multiple) dictionaries\") {\n    withSQLConf(CometConf.COMET_ONHEAP_MEMORY_OVERHEAD.key -> \"15MB\") {\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        makeRawTimeParquetFileColumns(path, dictionaryEnabled = true, n = 1000, rowGroupSize = 10)\n        readParquetFile(path.toString) { df =>\n          Seq(\n            $\"_0\".desc_nulls_first,\n            $\"_0\".desc_nulls_last,\n            $\"_0\".asc_nulls_first,\n            $\"_0\".asc_nulls_last).foreach { colOrder =>\n            val query = df.sortWithinPartitions(colOrder)\n            checkSparkAnswerAndOperator(query)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"spill sort with (multiple) dictionaries on mixed columns\") {\n    withSQLConf(CometConf.COMET_ONHEAP_MEMORY_OVERHEAD.key -> \"15MB\") {\n      withTempDir { dir =>\n        val 
path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        makeRawTimeParquetFile(path, dictionaryEnabled = true, n = 1000, rowGroupSize = 10)\n        readParquetFile(path.toString) { df =>\n          Seq(\n            $\"_6\".desc_nulls_first,\n            $\"_6\".desc_nulls_last,\n            $\"_6\".asc_nulls_first,\n            $\"_6\".asc_nulls_last).foreach { colOrder =>\n            // TODO: We should be able to sort on dictionary timestamp column\n            val query = df.sortWithinPartitions(colOrder)\n            checkSparkAnswerAndOperator(query)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"limit\") {\n    Seq(\"native\", \"jvm\").foreach { columnarShuffleMode =>\n      withSQLConf(\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n        CometConf.COMET_SHUFFLE_MODE.key -> columnarShuffleMode) {\n        withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl_a\") {\n          val df = sql(\"SELECT * FROM tbl_a\")\n            .repartition(10, $\"_1\")\n            .limit(2)\n            .sort($\"_2\".desc)\n          checkSparkAnswerAndOperator(df)\n        }\n      }\n    }\n  }\n\n  test(\"limit (cartesian product)\") {\n    withSQLConf(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n      withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl_a\") {\n        withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl_b\") {\n          val df = sql(\"SELECT tbl_a._1, tbl_b._2 FROM tbl_a JOIN tbl_b LIMIT 2\")\n          checkSparkAnswerAndOperator(\n            df,\n            classOf[CollectLimitExec],\n            classOf[CartesianProductExec])\n        }\n      }\n    }\n  }\n\n  test(\"limit with more than one batch\") {\n    withSQLConf(CometConf.COMET_BATCH_SIZE.key -> \"1\") {\n      withParquetTable((0 until 50).map(i => (i, i + 1)), \"tbl_a\") {\n        withParquetTable((0 until 50).map(i => (i, i + 1)), \"tbl_b\") {\n          val df = sql(\"SELECT tbl_a._1, tbl_b._2 FROM tbl_a JOIN tbl_b LIMIT 2\")\n          checkSparkAnswerAndOperator(\n            df,\n            classOf[CollectLimitExec],\n            classOf[BroadcastNestedLoopJoinExec],\n            classOf[BroadcastExchangeExec])\n        }\n      }\n    }\n  }\n\n  test(\"limit less than rows\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl_a\") {\n      withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl_b\") {\n        val df = sql(\n          \"SELECT tbl_a._1, tbl_b._2 FROM tbl_a JOIN tbl_b \" +\n            \"WHERE tbl_a._1 > tbl_a._2 LIMIT 2\")\n        checkSparkAnswerAndOperator(\n          df,\n          classOf[CollectLimitExec],\n          classOf[BroadcastNestedLoopJoinExec],\n          classOf[BroadcastExchangeExec])\n      }\n    }\n  }\n\n  test(\"empty-column input (read schema is empty)\") {\n    withTable(\"t1\") {\n      Seq((1, true), (2, false))\n        .toDF(\"l\", \"b\")\n        .repartition(2)\n        .write\n        .saveAsTable(\"t1\")\n      val query = spark.table(\"t1\").selectExpr(\"IF(l > 1 AND null, 5, 1) AS out\")\n      checkSparkAnswerAndOperator(query)\n    }\n  }\n\n  test(\"empty-column aggregation\") {\n    withTable(\"t1\") {\n      Seq((1, true), (2, false))\n        .toDF(\"l\", \"b\")\n        .repartition(2)\n        .write\n        .saveAsTable(\"t1\")\n      val query = sql(\"SELECT count(1) FROM t1\")\n      checkSparkAnswerAndOperator(query)\n    }\n  }\n\n  test(\"null handling\") {\n    Seq(\"true\", \"false\").foreach { pushDown =>\n      val table = \"t1\"\n      
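// Run with Parquet filter pushdown enabled and disabled so that the null semantics of the\n      // predicate are exercised on both the pushed-down and post-scan filtering paths.\n      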
withSQLConf(SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED.key -> pushDown) {\n        withTable(table) {\n          sql(s\"create table $table(a int, b int, c int) using parquet\")\n          sql(s\"insert into $table values(1,0,0)\")\n          sql(s\"insert into $table values(2,0,1)\")\n          sql(s\"insert into $table values(3,1,0)\")\n          sql(s\"insert into $table values(4,1,1)\")\n          sql(s\"insert into $table values(5,null,0)\")\n          sql(s\"insert into $table values(6,null,1)\")\n          sql(s\"insert into $table values(7,null,null)\")\n\n          val query = sql(s\"select a+120 from $table where b<10 OR c=1\")\n          checkSparkAnswerAndOperator(query)\n        }\n      }\n    }\n  }\n\n  test(\"float4.sql\") {\n    val table = \"t1\"\n    withTable(table) {\n      sql(s\"CREATE TABLE $table (f1  float) USING parquet\")\n      sql(s\"INSERT INTO $table VALUES (float('    0.0'))\")\n      sql(s\"INSERT INTO $table VALUES (float('1004.30   '))\")\n      sql(s\"INSERT INTO $table VALUES (float('     -34.84    '))\")\n      sql(s\"INSERT INTO $table VALUES (float('1.2345678901234e+20'))\")\n      sql(s\"INSERT INTO $table VALUES (float('1.2345678901234e-20'))\")\n\n      val query = sql(s\"SELECT '' AS four, f.* FROM $table f WHERE '1004.3' > f.f1\")\n      checkSparkAnswerAndOperator(query)\n    }\n  }\n\n  test(\"NaN in predicate expression\") {\n    val t = \"test_table\"\n\n    withTable(t) {\n      Seq[(Integer, java.lang.Short, java.lang.Float)](\n        (1, 100.toShort, 3.14.toFloat),\n        (2, Short.MaxValue, Float.NaN),\n        (3, Short.MinValue, Float.PositiveInfinity),\n        (4, 0.toShort, Float.MaxValue),\n        (5, null, null))\n        .toDF(\"c1\", \"c2\", \"c3\")\n        .write\n        .saveAsTable(t)\n\n      val df = spark.table(t)\n\n      var query = df.where(\"c3 > double('nan')\").select(\"c1\")\n      checkSparkAnswer(query)\n      // Empty result will be optimized to a local relation. 
No CometExec involved.\n      // checkCometExec(query, 0, cometExecs => {})\n\n      query = df.where(\"c3 >= double('nan')\").select(\"c1\")\n      checkSparkAnswerAndOperator(query)\n      // checkCometExec(query, 1, cometExecs => {})\n\n      query = df.where(\"c3 == double('nan')\").select(\"c1\")\n      checkSparkAnswerAndOperator(query)\n\n      query = df.where(\"c3 <=> double('nan')\").select(\"c1\")\n      checkSparkAnswerAndOperator(query)\n\n      query = df.where(\"c3 != double('nan')\").select(\"c1\")\n      checkSparkAnswerAndOperator(query)\n\n      query = df.where(\"c3 <= double('nan')\").select(\"c1\")\n      checkSparkAnswerAndOperator(query)\n\n      query = df.where(\"c3 < double('nan')\").select(\"c1\")\n      checkSparkAnswerAndOperator(query)\n    }\n  }\n\n  test(\"table statistics\") {\n    withTempDatabase { database =>\n      spark.catalog.setCurrentDatabase(database)\n      withTempDir { dir =>\n        withTable(\"t1\", \"t2\") {\n          spark.range(10).write.saveAsTable(\"t1\")\n          sql(\n            s\"CREATE EXTERNAL TABLE t2 USING parquet LOCATION '${dir.toURI}' \" +\n              \"AS SELECT * FROM range(20)\")\n\n          sql(s\"ANALYZE TABLES IN $database COMPUTE STATISTICS NOSCAN\")\n          checkTableStats(\"t1\", hasSizeInBytes = true, expectedRowCounts = None)\n          checkTableStats(\"t2\", hasSizeInBytes = true, expectedRowCounts = None)\n\n          sql(\"ANALYZE TABLES COMPUTE STATISTICS\")\n          checkTableStats(\"t1\", hasSizeInBytes = true, expectedRowCounts = Some(10))\n          checkTableStats(\"t2\", hasSizeInBytes = true, expectedRowCounts = Some(20))\n        }\n      }\n    }\n  }\n\n  test(\"like (LikeSimplification disabled)\") {\n    val table = \"names\"\n    withSQLConf(\n      SQLConf.OPTIMIZER_EXCLUDED_RULES.key -> \"org.apache.spark.sql.catalyst.optimizer.LikeSimplification\") {\n      withTable(table) {\n        sql(s\"create table $table(id int, name varchar(20)) using parquet\")\n        sql(s\"insert into $table values(1,'James Smith')\")\n        sql(s\"insert into $table values(2,'Michael Rose')\")\n        sql(s\"insert into $table values(3,'Robert Williams')\")\n        sql(s\"insert into $table values(4,'Rames Rose')\")\n        sql(s\"insert into $table values(5,'Rames rose')\")\n\n        // Filter rows whose name matches 'Rames _ose', where '_' matches any single character\n        val query = sql(s\"select id from $table where name like 'Rames _ose'\")\n        checkSparkAnswerAndOperator(query)\n\n        // Filter rows that contain 'rose' in the 'name' column\n        val queryContains = sql(s\"select id from $table where name like '%rose%'\")\n        checkSparkAnswerAndOperator(queryContains)\n\n        // Filter rows that start with 'R', followed by any characters\n        val queryStartsWith = sql(s\"select id from $table where name like 'R%'\")\n        checkSparkAnswerAndOperator(queryStartsWith)\n\n        // Filter rows that end with 's', preceded by any characters\n        val queryEndsWith = sql(s\"select id from $table where name like '%s'\")\n        checkSparkAnswerAndOperator(queryEndsWith)\n      }\n    }\n  }\n\n  test(\"sum overflow (ANSI disable)\") {\n    Seq(\"true\", \"false\").foreach { dictionary =>\n      withSQLConf(\n        SQLConf.ANSI_ENABLED.key -> \"false\",\n        \"parquet.enable.dictionary\" -> dictionary) {\n        withParquetTable(Seq((Long.MaxValue, 1), (Long.MaxValue, 2)), \"tbl\") {\n          val df = sql(\"SELECT sum(_1) FROM tbl\")\n          
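// With ANSI mode disabled, Spark wraps around on long overflow instead of throwing, so\n          // Comet is expected to produce the same wrapped result here.\n          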
checkSparkAnswerAndOperator(df)\n        }\n      }\n    }\n  }\n\n  test(\"partition col\") {\n    withSQLConf(SESSION_LOCAL_TIMEZONE.key -> \"Asia/Kathmandu\") {\n      withTable(\"t1\") {\n        sql(\"\"\"\n            | CREATE TABLE t1(name STRING, part1 TIMESTAMP)\n            | USING PARQUET PARTITIONED BY (part1)\n       \"\"\".stripMargin)\n\n        sql(\"\"\"\n            | INSERT OVERWRITE t1 PARTITION(\n            | part1 = timestamp'2019-01-01 11:11:11'\n            | ) VALUES('a')\n      \"\"\".stripMargin)\n        checkSparkAnswerAndOperator(sql(\"\"\"\n            | SELECT\n            |   name,\n            |   CAST(part1 AS STRING)\n            | FROM t1\n      \"\"\".stripMargin))\n      }\n    }\n  }\n\n  test(\"SPARK-33474: Support typed literals as partition spec values\") {\n    withSQLConf(SESSION_LOCAL_TIMEZONE.key -> \"Asia/Kathmandu\") {\n      withTable(\"t1\") {\n        val binaryStr = \"Spark SQL\"\n        val binaryHexStr = Hex.hex(UTF8String.fromString(binaryStr).getBytes).toString\n        sql(\"\"\"\n            | CREATE TABLE t1(name STRING, part1 DATE, part2 TIMESTAMP, part3 BINARY,\n            |  part4 STRING, part5 STRING, part6 STRING, part7 STRING)\n            | USING PARQUET PARTITIONED BY (part1, part2, part3, part4, part5, part6, part7)\n         \"\"\".stripMargin)\n\n        sql(s\"\"\"\n             | INSERT OVERWRITE t1 PARTITION(\n             | part1 = date'2019-01-01',\n             | part2 = timestamp'2019-01-01 11:11:11',\n             | part3 = X'$binaryHexStr',\n             | part4 = 'p1',\n             | part5 = date'2019-01-01',\n             | part6 = timestamp'2019-01-01 11:11:11',\n             | part7 = X'$binaryHexStr'\n             | ) VALUES('a')\n        \"\"\".stripMargin)\n        checkSparkAnswerAndOperator(sql(\"\"\"\n            | SELECT\n            |   name,\n            |   CAST(part1 AS STRING),\n            |   CAST(part2 as STRING),\n            |   CAST(part3 as STRING),\n            |   part4,\n            |   part5,\n            |   part6,\n            |   part7\n            | FROM t1\n        \"\"\".stripMargin))\n\n        val e = intercept[AnalysisException] {\n          sql(\"CREATE TABLE t2(name STRING, part INTERVAL) USING PARQUET PARTITIONED BY (part)\")\n        }.getMessage\n        if (isSpark40Plus) {\n          assert(e.contains(\" Cannot use \\\"INTERVAL\\\"\"))\n        } else {\n          assert(e.contains(\"Cannot use interval\"))\n        }\n      }\n    }\n  }\n\n  def getCatalogTable(tableName: String): CatalogTable = {\n    spark.sessionState.catalog.getTableMetadata(TableIdentifier(tableName))\n  }\n\n  def checkTableStats(\n      tableName: String,\n      hasSizeInBytes: Boolean,\n      expectedRowCounts: Option[Int]): Option[CatalogStatistics] = {\n    val stats = getCatalogTable(tableName).stats\n    if (hasSizeInBytes || expectedRowCounts.nonEmpty) {\n      assert(stats.isDefined)\n      assert(stats.get.sizeInBytes >= 0)\n      assert(stats.get.rowCount === expectedRowCounts)\n    } else {\n      assert(stats.isEmpty)\n    }\n\n    stats\n  }\n\n  def joinCondition(joinCols: Seq[String])(left: DataFrame, right: DataFrame): Column = {\n    joinCols.map(col => left(col) === right(col)).reduce(_ && _)\n  }\n\n  def testBucketing(\n      bucketedTableTestSpecLeft: BucketedTableTestSpec,\n      bucketedTableTestSpecRight: BucketedTableTestSpec,\n      joinType: String = \"inner\",\n      joinCondition: (DataFrame, DataFrame) => Column): Unit = {\n    val df1 =\n      (0 until 
50).map(i => (i % 5, i % 13, i.toString)).toDF(\"i\", \"j\", \"k\").as(\"df1\")\n    val df2 =\n      (0 until 50).map(i => (i % 7, i % 11, i.toString)).toDF(\"i\", \"j\", \"k\").as(\"df2\")\n\n    val BucketedTableTestSpec(\n      bucketSpecLeft,\n      numPartitionsLeft,\n      shuffleLeft,\n      sortLeft,\n      numOutputPartitionsLeft) = bucketedTableTestSpecLeft\n\n    val BucketedTableTestSpec(\n      bucketSpecRight,\n      numPartitionsRight,\n      shuffleRight,\n      sortRight,\n      numOutputPartitionsRight) = bucketedTableTestSpecRight\n\n    withTable(\"bucketed_table1\", \"bucketed_table2\") {\n      withBucket(df1.repartition(numPartitionsLeft).write.format(\"parquet\"), bucketSpecLeft)\n        .saveAsTable(\"bucketed_table1\")\n      withBucket(df2.repartition(numPartitionsRight).write.format(\"parquet\"), bucketSpecRight)\n        .saveAsTable(\"bucketed_table2\")\n\n      withSQLConf(\n        CometConf.COMET_EXEC_SORT_MERGE_JOIN_ENABLED.key -> \"false\",\n        SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"0\",\n        SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key -> \"false\") {\n        val t1 = spark.table(\"bucketed_table1\")\n        val t2 = spark.table(\"bucketed_table2\")\n        val joined = t1.join(t2, joinCondition(t1, t2), joinType)\n\n        val df = joined.sort(\"bucketed_table1.k\", \"bucketed_table2.k\")\n        checkSparkAnswer(df)\n\n        // The sub-plan should contain all native operators except the SMJ\n        val subPlan = stripAQEPlan(df.queryExecution.executedPlan).collectFirst {\n          case s: SortMergeJoinExec => s\n        }\n        assert(subPlan.isDefined)\n        checkCometOperators(subPlan.get, classOf[SortMergeJoinExec])\n      }\n    }\n  }\n\n  test(\"bucketed table\") {\n    val bucketSpec = Some(BucketSpec(8, Seq(\"i\", \"j\"), Nil))\n    val bucketedTableTestSpecLeft = BucketedTableTestSpec(bucketSpec, expectedShuffle = false)\n    val bucketedTableTestSpecRight = BucketedTableTestSpec(bucketSpec, expectedShuffle = false)\n\n    testBucketing(\n      bucketedTableTestSpecLeft = bucketedTableTestSpecLeft,\n      bucketedTableTestSpecRight = bucketedTableTestSpecRight,\n      joinCondition = joinCondition(Seq(\"i\", \"j\")))\n  }\n\n  test(\"bucketed table with bucket pruning\") {\n    withSQLConf(\n      SQLConf.BUCKETING_ENABLED.key -> \"true\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"0\") {\n      val df1 = (0 until 100).map(i => (i, i % 13, s\"left_$i\")).toDF(\"id\", \"bucket_col\", \"value\")\n      val df2 = (0 until 100).map(i => (i, i % 13, s\"right_$i\")).toDF(\"id\", \"bucket_col\", \"value\")\n      val bucketSpec = Some(BucketSpec(8, Seq(\"bucket_col\"), Nil))\n\n      withTable(\"bucketed_table1\", \"bucketed_table2\") {\n        withBucket(df1.write.format(\"parquet\"), bucketSpec).saveAsTable(\"bucketed_table1\")\n        withBucket(df2.write.format(\"parquet\"), bucketSpec).saveAsTable(\"bucketed_table2\")\n\n        // Join two bucketed tables, but filter one side to trigger bucket pruning\n        // Only buckets where hash(bucket_col) % 8 matches hash(1) % 8 will be read from the left side\n        val left = spark.table(\"bucketed_table1\").filter(\"bucket_col = 1\")\n        val right = spark.table(\"bucketed_table2\")\n\n        val result = left.join(right, Seq(\"bucket_col\"), \"inner\")\n\n        checkSparkAnswerAndOperator(result)\n      }\n    }\n  }\n\n  def withBucket(\n      writer: DataFrameWriter[Row],\n      bucketSpec: Option[BucketSpec]): DataFrameWriter[Row] = {\n    
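// Apply the optional bucketing (and sort) spec to the writer; when no spec is given the\n    // writer is returned unchanged.\n    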
bucketSpec\n      .map { spec =>\n        writer.bucketBy(\n          spec.numBuckets,\n          spec.bucketColumnNames.head,\n          spec.bucketColumnNames.tail: _*)\n\n        if (spec.sortColumnNames.nonEmpty) {\n          writer.sortBy(spec.sortColumnNames.head, spec.sortColumnNames.tail: _*)\n        } else {\n          writer\n        }\n      }\n      .getOrElse(writer)\n  }\n\n  test(\"union\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n      val df1 = sql(\"select * FROM tbl where _1 >= 2\").select(\"_1\")\n      val df2 = sql(\"select * FROM tbl where _1 >= 2\").select(\"_2\")\n      val df3 = sql(\"select * FROM tbl where _1 >= 3\").select(\"_2\")\n\n      val unionDf1 = df1.union(df2)\n      checkSparkAnswerAndOperator(unionDf1)\n\n      // Test union with different number of rows from inputs\n      val unionDf2 = df1.union(df3)\n      checkSparkAnswerAndOperator(unionDf2)\n\n      val unionDf3 = df1.union(df2).union(df3)\n      checkSparkAnswerAndOperator(unionDf3)\n    }\n  }\n\n  test(\"native execution after union\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n      val df1 = sql(\"select * FROM tbl where _1 >= 2\").select(\"_1\")\n      val df2 = sql(\"select * FROM tbl where _1 >= 2\").select(\"_2\")\n      val df3 = sql(\"select * FROM tbl where _1 >= 3\").select(\"_2\")\n\n      val unionDf1 = df1.union(df2).select($\"_1\" + 1).sortWithinPartitions($\"_1\")\n      checkSparkAnswerAndOperator(unionDf1)\n\n      // Test union with different number of rows from inputs\n      val unionDf2 = df1.union(df3).select($\"_1\" + 1).sortWithinPartitions($\"_1\")\n      checkSparkAnswerAndOperator(unionDf2)\n\n      val unionDf3 = df1.union(df2).union(df3).select($\"_1\" + 1).sortWithinPartitions($\"_1\")\n      checkSparkAnswerAndOperator(unionDf3)\n    }\n  }\n\n  test(\"native execution after coalesce\") {\n    withTable(\"t1\") {\n      (0 until 5)\n        .map(i => (i, (i + 1).toLong))\n        .toDF(\"l\", \"b\")\n        .write\n        .saveAsTable(\"t1\")\n\n      val df = sql(\"SELECT * FROM t1\")\n        .sortWithinPartitions($\"l\".desc)\n        .repartition(10, $\"l\")\n\n      val rdd = df.rdd\n      assert(rdd.partitions.length == 10)\n\n      val coalesced = df.coalesce(2).select($\"l\" + 1).sortWithinPartitions($\"l\")\n      checkSparkAnswerAndOperator(coalesced)\n    }\n  }\n\n  test(\"disabled/unsupported exec with multiple children should not disappear\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_PROJECT_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_UNION_ENABLED.key -> \"false\") {\n      withParquetDataFrame((0 until 5).map(Tuple1(_))) { df =>\n        val projected = df.selectExpr(\"_1 as x\")\n        val unioned = projected.union(df)\n        val p = unioned.queryExecution.executedPlan.find(_.isInstanceOf[UnionExec])\n        assert(\n          p.get\n            .collectLeaves()\n            .forall(o => o.isInstanceOf[CometScanExec] || o.isInstanceOf[CometNativeScanExec]))\n      }\n    }\n  }\n\n  test(\"coalesce\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n      withTable(\"t1\") {\n        (0 until 5)\n          .map(i => (i, (i + 1).toLong))\n          .toDF(\"l\", \"b\")\n          .write\n          .saveAsTable(\"t1\")\n\n        val df = sql(\"SELECT * FROM t1\")\n          .sortWithinPartitions($\"l\".desc)\n          .repartition(10, $\"l\")\n\n        val rdd = df.rdd\n        assert(rdd.getNumPartitions == 10)\n\n        val coalesced = 
df.coalesce(2)\n        checkSparkAnswerAndOperator(coalesced)\n\n        val coalescedRdd = coalesced.rdd\n        assert(coalescedRdd.getNumPartitions == 2)\n      }\n    }\n  }\n\n  test(\"TakeOrderedAndProjectExec\") {\n    Seq(\"true\", \"false\").foreach(aqeEnabled =>\n      withSQLConf(\n        SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqeEnabled,\n        CometConf.COMET_EXEC_WINDOW_ENABLED.key -> \"true\") {\n        withTable(\"t1\") {\n          val numRows = 10\n          spark\n            .range(numRows)\n            .selectExpr(\"if (id % 2 = 0, null, id) AS a\", s\"$numRows - id AS b\")\n            .repartition(3) // Move data across multiple partitions\n            .write\n            .saveAsTable(\"t1\")\n\n          val df1 = spark.sql(\"\"\"\n              |SELECT a, b, ROW_NUMBER() OVER(ORDER BY a, b) AS rn\n              |FROM t1 LIMIT 3\n              |\"\"\".stripMargin)\n\n          assert(df1.rdd.getNumPartitions == 1)\n          checkSparkAnswerAndOperator(df1, classOf[WindowExec])\n\n          val df2 = spark.sql(\"\"\"\n              |SELECT b, RANK() OVER(ORDER BY a, b) AS rk, DENSE_RANK(b) OVER(ORDER BY a, b) AS s\n              |FROM t1 LIMIT 2\n              |\"\"\".stripMargin)\n          assert(df2.rdd.getNumPartitions == 1)\n          checkSparkAnswerAndOperator(df2, classOf[WindowExec], classOf[ProjectExec])\n\n          // Other Comet native operators can take input from `CometTakeOrderedAndProjectExec`.\n          val df3 = sql(\"SELECT * FROM t1 ORDER BY a, b LIMIT 3\").groupBy($\"a\").sum(\"b\")\n          checkSparkAnswerAndOperator(df3)\n        }\n      })\n  }\n\n  test(\"TakeOrderedAndProjectExec without sorting\") {\n    Seq(\"true\", \"false\").foreach(aqeEnabled =>\n      withSQLConf(\n        SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqeEnabled,\n        SQLConf.OPTIMIZER_EXCLUDED_RULES.key ->\n          \"org.apache.spark.sql.catalyst.optimizer.EliminateSorts\") {\n        withTable(\"t1\") {\n          val numRows = 10\n          spark\n            .range(numRows)\n            .selectExpr(\"if (id % 2 = 0, null, id) AS a\", s\"$numRows - id AS b\")\n            .repartition(3) // Repartition so the test exercises merging data into a single partition\n            .write\n            .saveAsTable(\"t1\")\n\n          val df = spark\n            .table(\"t1\")\n            .select(\"a\", \"b\")\n            .sortWithinPartitions(\"b\", \"a\")\n            .orderBy(\"b\")\n            .select($\"b\" + 1, $\"a\")\n            .limit(3)\n\n          val takeOrdered = stripAQEPlan(df.queryExecution.executedPlan).collect {\n            case b: CometTakeOrderedAndProjectExec => b\n          }\n          assert(takeOrdered.length == 1)\n          assert(takeOrdered.head.orderingSatisfies)\n\n          checkSparkAnswerAndOperator(df)\n        }\n      })\n  }\n\n  test(\"TakeOrderedAndProjectExec with offset\") {\n    Seq(\"true\", \"false\").foreach(aqeEnabled =>\n      withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqeEnabled) {\n        withTable(\"t1\") {\n          val numRows = 10\n          spark\n            .range(numRows)\n            .selectExpr(\"if (id % 2 = 0, null, id) AS a\", s\"$numRows - id AS b\")\n            .repartition(3) // Repartition so the test exercises merging data into a single partition\n            .write\n            .saveAsTable(\"t1\")\n          val df = sql(\"SELECT * FROM t1 ORDER BY a, b LIMIT 3 OFFSET 1\").groupBy($\"a\").sum(\"b\")\n          checkSparkAnswerAndOperator(df)\n        }\n      })\n  }\n\n  test(\"collect 
limit\") {\n    Seq(\"true\", \"false\").foreach(aqe => {\n      withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqe) {\n        withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n          val df = sql(\"SELECT _1 as id, _2 as value FROM tbl limit 2\")\n          assert(df.queryExecution.executedPlan.execute().getNumPartitions === 1)\n          checkSparkAnswerAndOperator(df, Seq(classOf[CometCollectLimitExec]))\n          assert(df.collect().length === 2)\n\n          val qe = df.queryExecution\n          // make sure the root node is CometCollectLimitExec\n          assert(qe.executedPlan.isInstanceOf[CometCollectLimitExec])\n          // executes CometCollectExec directly to check doExecuteColumnar implementation\n          SQLExecution.withNewExecutionId(qe, Some(\"count\")) {\n            qe.executedPlan.resetMetrics()\n            assert(qe.executedPlan.execute().count() === 2)\n          }\n\n          assert(df.isEmpty === false)\n\n          // follow up native operation is possible\n          val df3 = df.groupBy(\"id\").sum(\"value\")\n          checkSparkAnswerAndOperator(df3)\n        }\n      }\n    })\n  }\n\n  test(\"SparkToColumnar over RangeExec\") {\n    Seq(\"true\", \"false\").foreach(aqe => {\n      Seq(500, 900).foreach { batchSize =>\n        withSQLConf(\n          SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqe,\n          SQLConf.ARROW_EXECUTION_MAX_RECORDS_PER_BATCH.key -> batchSize.toString) {\n          val df = spark.range(1000).selectExpr(\"id\", \"id % 8 as k\").groupBy(\"k\").sum(\"id\")\n          checkSparkAnswerAndOperator(df)\n          // empty record batch should also be handled\n          val df2 = spark.range(0).selectExpr(\"id\", \"id % 8 as k\").groupBy(\"k\").sum(\"id\")\n          checkSparkAnswerAndOperator(\n            df2,\n            includeClasses = Seq(classOf[CometSparkToColumnarExec]))\n        }\n      }\n    })\n  }\n\n  test(\"SparkToColumnar over RangeExec directly is eliminated for row output\") {\n    Seq(\"true\", \"false\").foreach(aqe => {\n      Seq(500, 900).foreach { batchSize =>\n        withSQLConf(\n          SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqe,\n          SQLConf.ARROW_EXECUTION_MAX_RECORDS_PER_BATCH.key -> batchSize.toString) {\n          val df = spark.range(1000)\n          val qe = df.queryExecution\n          qe.executedPlan.collectFirst({ case r: CometSparkToColumnarExec => r }) match {\n            case Some(_) => fail(\"CometSparkToColumnarExec should be eliminated\")\n            case _ =>\n          }\n        }\n      }\n    })\n  }\n\n  test(\"SparkToColumnar over BatchScan (Spark Parquet reader)\") {\n    Seq(\"\", \"parquet\").foreach { v1List =>\n      Seq(true, false).foreach { parquetVectorized =>\n        Seq(\n          \"cast(id as tinyint)\",\n          \"cast(id as smallint)\",\n          \"cast(id as integer)\",\n          \"cast(id as bigint)\",\n          \"cast(id as float)\",\n          \"cast(id as double)\",\n          \"cast(id as decimal)\",\n          \"cast(id as timestamp)\",\n          \"cast(id as string)\",\n          \"cast(id as binary)\",\n          \"struct(id)\").foreach { valueType =>\n          {\n            withSQLConf(\n              SQLConf.USE_V1_SOURCE_LIST.key -> v1List,\n              CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n              CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\",\n              SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key -> parquetVectorized.toString) {\n              withTempPath { dir 
=>\n                var df = spark\n                  .range(10000)\n                  .selectExpr(\"id as key\", s\"$valueType as value\")\n                  .toDF(\"key\", \"value\")\n\n                df.write.parquet(dir.toString)\n\n                df = spark.read.parquet(dir.toString)\n                checkSparkAnswerAndOperator(\n                  df.select(\"*\").groupBy(\"key\", \"value\").count(),\n                  includeClasses = Seq(classOf[CometSparkToColumnarExec]))\n\n                // Verify that the BatchScanExec nodes support columnar output when requested on Spark 3.4+.\n                // Earlier versions support columnar output for fewer types.\n                val leaves = df.queryExecution.executedPlan.collectLeaves()\n                if (parquetVectorized) {\n                  assert(leaves.forall(_.supportsColumnar))\n                } else {\n                  assert(!leaves.forall(_.supportsColumnar))\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"SparkToColumnar over InMemoryTableScanExec\") {\n    Seq(\"true\", \"false\").foreach(cacheVectorized => {\n      withSQLConf(\n        SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\",\n        CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\",\n        SQLConf.CACHE_VECTORIZED_READER_ENABLED.key -> cacheVectorized) {\n        spark\n          .range(1000)\n          .selectExpr(\"id as key\", \"id % 8 as value\")\n          .toDF(\"key\", \"value\")\n          .selectExpr(\"key\", \"value\", \"key+1\")\n          .createOrReplaceTempView(\"abc\")\n        spark.catalog.cacheTable(\"abc\")\n        val df = spark.sql(\"SELECT * FROM abc\").groupBy(\"key\").count()\n        checkSparkAnswerAndOperator(df, includeClasses = Seq(classOf[CometSparkToColumnarExec]))\n        df.collect() // Without this collect we don't get an aggregation of the metrics.\n\n        val metrics = find(df.queryExecution.executedPlan) {\n          case _: CometSparkToColumnarExec => true\n          case _ => false\n        }.map(_.metrics).get\n\n        assert(metrics.contains(\"conversionTime\"))\n        assert(metrics(\"conversionTime\").value > 0)\n\n      }\n    })\n  }\n\n  test(\"SparkToColumnar eliminate redundant in AQE\") {\n    // TODO fix for Spark 4.0.0\n    assume(!isSpark40Plus)\n    withSQLConf(\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      val df = spark\n        .range(1000)\n        .selectExpr(\"id as key\", \"id % 8 as value\")\n        .toDF(\"key\", \"value\")\n        .groupBy(\"key\")\n        .count()\n      df.collect()\n\n      val planAfter = df.queryExecution.executedPlan\n      assert(planAfter.toString.startsWith(\"AdaptiveSparkPlan isFinalPlan=true\"))\n      val adaptivePlan = planAfter.asInstanceOf[AdaptiveSparkPlanExec].executedPlan\n      val numOperators = adaptivePlan.collect { case c: CometSparkToColumnarExec =>\n        c\n      }\n      assert(numOperators.length == 1)\n    }\n  }\n\n  test(\"SparkToColumnar read all types\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"test.parquet\")\n      val filename = path.toString\n      val random = new Random(42)\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val schemaGenOptions =\n          SchemaGenOptions(generateArray = true, generateStruct = true, generateMap = true)\n        val dataGenOptions = DataGenOptions(allowNull = true, generateNegativeZero = true)\n      
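  // Generate a 100-row Parquet file covering primitive types plus arrays, structs, and maps,\n        // with nulls and negative zeros included.\n      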
  ParquetGenerator.makeParquetFile(\n          random,\n          spark,\n          filename,\n          100,\n          schemaGenOptions,\n          dataGenOptions)\n      }\n      withSQLConf(\n        CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n        CometConf.COMET_SPARK_TO_ARROW_ENABLED.key -> \"true\",\n        CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\") {\n        val table = spark.read.parquet(filename)\n        table.createOrReplaceTempView(\"t1\")\n        checkSparkAnswer(sql(\"SELECT * FROM t1\"))\n      }\n    }\n  }\n\n  test(\"read CSV file\") {\n    Seq(\"\", \"csv\").foreach { v1List =>\n      withSQLConf(\n        SQLConf.USE_V1_SOURCE_LIST.key -> v1List,\n        CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"true\",\n        CometConf.COMET_CONVERT_FROM_CSV_ENABLED.key -> \"true\") {\n        spark.read\n          .csv(\"src/test/resources/test-data/csv-test-1.csv\")\n          .createOrReplaceTempView(\"tbl\")\n        // use a projection with an expression otherwise we end up with\n        // just the file scan\n        checkSparkAnswerAndOperator(\"SELECT cast(_c0 as int), _c1, _c2 FROM tbl\")\n      }\n    }\n  }\n\n  test(\"read JSON file\") {\n    Seq(\"\", \"json\").foreach { v1List =>\n      withSQLConf(\n        SQLConf.USE_V1_SOURCE_LIST.key -> v1List,\n        CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"true\",\n        CometConf.COMET_CONVERT_FROM_JSON_ENABLED.key -> \"true\") {\n        spark.read\n          .json(\"src/test/resources/test-data/json-test-1.ndjson\")\n          .createOrReplaceTempView(\"tbl\")\n        checkSparkAnswerAndOperator(\"SELECT a, b.c, b.d FROM tbl\")\n      }\n    }\n  }\n\n  test(\"Supported file formats for CometScanExec\") {\n    assert(CometScanExec.isFileFormatSupported(new ParquetFileFormat()))\n\n    class CustomParquetFileFormat extends ParquetFileFormat {}\n\n    assert(!CometScanExec.isFileFormatSupported(new CustomParquetFileFormat()))\n  }\n\n  test(\"SparkToColumnar override node name for row input\") {\n    // TODO fix for Spark 4.0.0\n    assume(!isSpark40Plus)\n    withSQLConf(\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      val df = spark\n        .range(1000)\n        .selectExpr(\"id as key\", \"id % 8 as value\")\n        .toDF(\"key\", \"value\")\n        .groupBy(\"key\")\n        .count()\n      df.collect()\n\n      val planAfter = df.queryExecution.executedPlan\n      assert(planAfter.toString.startsWith(\"AdaptiveSparkPlan isFinalPlan=true\"))\n      val adaptivePlan = planAfter.asInstanceOf[AdaptiveSparkPlanExec].executedPlan\n      val nodeNames = adaptivePlan.collect { case c: CometSparkToColumnarExec =>\n        c.nodeName\n      }\n      assert(nodeNames.length == 1)\n      assert(nodeNames.head == \"CometSparkRowToColumnar\")\n    }\n  }\n\n  test(\"ReusedExchange broadcast with incompatible partitions number does not fail\") {\n    withTable(\"tbl1\", \"tbl2\", \"tbl3\") {\n      // enforce different number of partitions for future broadcasts/exchanges\n      spark\n        .range(50)\n        .withColumnRenamed(\"id\", \"x\")\n        .repartition(2)\n        .writeTo(\"tbl1\")\n        .using(\"parquet\")\n        .create()\n      spark\n        .range(50)\n        .withColumnRenamed(\"id\", \"y\")\n        .repartition(3)\n        .writeTo(\"tbl2\")\n        .using(\"parquet\")\n        .create()\n      spark\n        .range(50)\n        .withColumnRenamed(\"id\", \"z\")\n        
.repartition(4)\n        .writeTo(\"tbl3\")\n        .using(\"parquet\")\n        .create()\n      val df1 = spark.table(\"tbl1\")\n      val df2 = spark.table(\"tbl2\")\n      val df3 = spark.table(\"tbl3\")\n      Seq(\"true\", \"false\").foreach(aqeEnabled =>\n        withSQLConf(SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqeEnabled) {\n          val dfWithReusedExchange = df1\n            .join(df3.hint(\"broadcast\").join(df1, $\"x\" === $\"z\"), \"x\", \"right\")\n            .join(\n              df3\n                .hint(\"broadcast\")\n                .join(df2, $\"y\" === $\"z\", \"right\")\n                .withColumnRenamed(\"z\", \"z1\"),\n              $\"x\" === $\"y\")\n          checkSparkAnswerAndOperator(dfWithReusedExchange, classOf[ReusedExchangeExec])\n        })\n    }\n  }\n\n  test(\"SparkToColumnar override node name for columnar input\") {\n    withSQLConf(\n      SQLConf.USE_V1_SOURCE_LIST.key -> \"\",\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\",\n      CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n      CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\") {\n      withTempDir { dir =>\n        var df = spark\n          .range(10000)\n          .selectExpr(\"id as key\", \"id % 8 as value\")\n          .toDF(\"key\", \"value\")\n\n        df.write.mode(\"overwrite\").parquet(dir.toString)\n        df = spark.read.parquet(dir.toString)\n        df = df.groupBy(\"key\", \"value\").count()\n        df.collect()\n\n        val planAfter = df.queryExecution.executedPlan\n        val nodeNames = planAfter.collect { case c: CometSparkToColumnarExec =>\n          c.nodeName\n        }\n        assert(nodeNames.length == 1)\n        assert(nodeNames.head == \"CometSparkColumnarToColumnar\")\n      }\n    }\n  }\n\n  test(\"LocalTableScanExec spark fallback\") {\n    withSQLConf(CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"false\") {\n      val df = Seq.range(0, 10).toDF(\"id\")\n      checkSparkAnswerAndFallbackReason(\n        df,\n        \"Native support for operator LocalTableScanExec is disabled\")\n    }\n  }\n\n  test(\"LocalTableScanExec with filter\") {\n    withSQLConf(CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\") {\n      val df = Seq.range(0, 10).toDF(\"id\").filter(col(\"id\") > 5)\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"LocalTableScanExec with groupBy\") {\n    withSQLConf(CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\") {\n      val df = (Seq.range(0, 10) ++ Seq.range(0, 20))\n        .toDF(\"id\")\n        .groupBy(col(\"id\"))\n        .agg(count(\"*\"))\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"LocalTableScanExec with timestamps in non-UTC timezone\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      SESSION_LOCAL_TIMEZONE.key -> \"America/Los_Angeles\") {\n      val df = Seq(\n        (1, java.sql.Timestamp.valueOf(\"2024-01-15 10:30:00\")),\n        (2, java.sql.Timestamp.valueOf(\"2024-06-15 14:00:00\")),\n        (3, java.sql.Timestamp.valueOf(\"2024-12-25 08:00:00\")))\n        .toDF(\"id\", \"ts\")\n        .orderBy(\"ts\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"SparkToColumnar with timestamps in non-UTC timezone\") {\n    withTempDir { dir =>\n      val path = new java.io.File(dir, \"data\").getAbsolutePath\n      Seq(\n        (1, java.sql.Timestamp.valueOf(\"2024-01-15 10:30:00\")),\n        (2, java.sql.Timestamp.valueOf(\"2024-06-15 14:00:00\")),\n        
(3, java.sql.Timestamp.valueOf(\"2024-12-25 08:00:00\")))\n        .toDF(\"id\", \"ts\")\n        .write\n        .parquet(path)\n      withSQLConf(\n        CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n        CometConf.COMET_SPARK_TO_ARROW_ENABLED.key -> \"true\",\n        CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\",\n        SESSION_LOCAL_TIMEZONE.key -> \"America/Los_Angeles\") {\n        val df = spark.read.parquet(path).orderBy(\"ts\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"sort on timestamps with non-UTC timezone via LocalTableScan\") {\n    // When session timezone is non-UTC, CometLocalTableScanExec and\n    // CometSparkToColumnarExec must use UTC for the Arrow schema timezone\n    // to match the native side's expectations. Without this, the native\n    // ScanExec sees a timezone mismatch and performs an unnecessary cast.\n    // The cast is currently a no-op (Arrow timestamps with timezone are\n    // always UTC microseconds), but using UTC avoids the overhead and\n    // keeps schemas consistent throughout the native plan.\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      SESSION_LOCAL_TIMEZONE.key -> \"America/Los_Angeles\") {\n      val df = Seq(\n        (1, java.sql.Timestamp.valueOf(\"2024-01-15 10:30:00\")),\n        (2, java.sql.Timestamp.valueOf(\"2024-06-15 14:00:00\")),\n        (3, java.sql.Timestamp.valueOf(\"2024-12-25 08:00:00\")))\n        .toDF(\"id\", \"ts\")\n        .repartition(2)\n        .orderBy(\"ts\")\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"Native_datafusion reports correct number of files scanned\") {\n    val inputFiles = 2\n\n    withTempDir { dir =>\n      val path = new java.io.File(dir, \"test_metrics\").getAbsolutePath\n      spark.range(100).repartition(inputFiles).write.mode(\"overwrite\").parquet(path)\n\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> \"native_datafusion\") {\n        val df = spark.read.parquet(path)\n\n        // Trigger two different actions to ensure metrics are not duplicated\n        df.count()\n        df.collect()\n\n        val scanNode = stripAQEPlan(df.queryExecution.executedPlan)\n          .collectFirst {\n            case n: org.apache.spark.sql.comet.CometNativeScanExec => n\n            case n: org.apache.spark.sql.comet.CometScanExec => n\n          }\n          .getOrElse {\n            fail(\n              s\"Comet scan node not found in the physical plan. Plan: \\n${df.queryExecution.executedPlan}\")\n          }\n\n        val numFiles = scanNode.metrics(\"numFiles\").value\n        assert(\n          numFiles == inputFiles,\n          s\"Expected exactly $inputFiles files to be scanned, but got metrics reporting $numFiles\")\n      }\n    }\n  }\n\n}\n\ncase class BucketedTableTestSpec(\n    bucketSpec: Option[BucketSpec],\n    numPartitions: Int = 10,\n    expectedShuffle: Boolean = true,\n    expectedSort: Boolean = true,\n    expectedNumOutputPartitions: Option[Int] = None)\n\ncase class TestData(key: Int, value: String)\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/exec/CometGenerateExecSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.execution.GenerateExec\nimport org.apache.spark.sql.functions.col\n\nimport org.apache.comet.CometConf\n\nclass CometGenerateExecSuite extends CometTestBase {\n\n  import testImplicits._\n\n  test(\"explode with simple array\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, Array(1, 2, 3)), (2, Array(4, 5)), (3, Array(6)))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"explode with empty array\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, Array(1, 2)), (2, Array.empty[Int]), (3, Array(3)))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"explode with null array\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, Some(Array(1, 2))), (2, None), (3, Some(Array(3))))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"explode_outer with simple array\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.getOperatorAllowIncompatConfigKey(classOf[GenerateExec]) -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, Array(1, 2, 3)), (2, Array(4, 5)), (3, Array(6)))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode_outer(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/2838\n  ignore(\"explode_outer with empty array\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, Array(1, 2)), (2, Array.empty[Int]), (3, Array(3)))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode_outer(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"explode_outer with null array\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      
CometConf.getOperatorAllowIncompatConfigKey(classOf[GenerateExec]) -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, Some(Array(1, 2))), (2, None), (3, Some(Array(3))))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode_outer(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"explode with multiple columns\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, \"A\", Array(1, 2, 3)), (2, \"B\", Array(4, 5)), (3, \"C\", Array(6)))\n        .toDF(\"id\", \"name\", \"arr\")\n        .selectExpr(\"id\", \"name\", \"explode(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"explode with array of strings\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, Array(\"a\", \"b\", \"c\")), (2, Array(\"d\", \"e\")), (3, Array(\"f\")))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"explode with filter\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, Array(1, 2, 3)), (2, Array(4, 5, 6)), (3, Array(7, 8, 9)))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode(arr) as value\")\n        .filter(col(\"value\") > 5)\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"explode fallback when disabled\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"false\") {\n      val df = Seq((1, Array(1, 2, 3)), (2, Array(4, 5)))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode(arr) as value\")\n      checkSparkAnswerAndFallbackReason(\n        df,\n        \"Native support for operator GenerateExec is disabled\")\n    }\n  }\n\n  test(\"explode with map input falls back\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, Map(\"a\" -> 1, \"b\" -> 2)), (2, Map(\"c\" -> 3)))\n        .toDF(\"id\", \"map\")\n        .selectExpr(\"id\", \"explode(map) as (key, value)\")\n      checkSparkAnswerAndFallbackReason(\n        df,\n        \"Comet only supports explode/explode_outer for arrays, not maps\")\n    }\n  }\n\n  test(\"explode with nullable projected column\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq((1, Some(\"A\"), Array(1, 2)), (2, None, Array(3, 4)), (3, Some(\"C\"), Array(5)))\n        .toDF(\"id\", \"name\", \"arr\")\n        .selectExpr(\"id\", \"name\", \"explode(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/2838\n  ignore(\"explode_outer with nullable projected column\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df =\n        Seq((1, Some(\"A\"), Array(1, 2)), (2, None, Array.empty[Int]), (3, Some(\"C\"), 
Array(5)))\n          .toDF(\"id\", \"name\", \"arr\")\n          .selectExpr(\"id\", \"name\", \"explode_outer(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"explode with mixed null, empty, and non-empty arrays\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq(\n        (1, Some(Array(1, 2))),\n        (2, None),\n        (3, Some(Array.empty[Int])),\n        (4, Some(Array(3))),\n        (5, None),\n        (6, Some(Array(4, 5, 6))))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // https://github.com/apache/datafusion-comet/issues/2838\n  ignore(\"explode_outer with mixed null, empty, and non-empty arrays\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq(\n        (1, Some(Array(1, 2))),\n        (2, None),\n        (3, Some(Array.empty[Int])),\n        (4, Some(Array(3))),\n        (5, None),\n        (6, Some(Array(4, 5, 6))))\n        .toDF(\"id\", \"arr\")\n        .selectExpr(\"id\", \"explode_outer(arr) as value\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"explode with multiple nullable columns\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_EXPLODE_ENABLED.key -> \"true\") {\n      val df = Seq(\n        (Some(1), Some(\"A\"), Some(100), Array(1, 2)),\n        (None, Some(\"B\"), None, Array(3)),\n        (Some(3), None, Some(300), Array(4, 5)),\n        (None, None, None, Array(6)))\n        .toDF(\"id\", \"name\", \"value\", \"arr\")\n        .selectExpr(\"id\", \"name\", \"value\", \"explode(arr) as element\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/exec/CometJoinSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.catalyst.TableIdentifier\nimport org.apache.spark.sql.catalyst.analysis.UnresolvedRelation\nimport org.apache.spark.sql.comet.{CometBroadcastExchangeExec, CometBroadcastHashJoinExec}\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.CometConf\n\nclass CometJoinSuite extends CometTestBase {\n\n  import testImplicits._\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n        testFun\n      }\n    }\n  }\n\n  test(\"join - self join\") {\n    val df1 = testData.select(testData(\"key\")).as(\"df1\")\n    val df2 = testData.select(testData(\"key\")).as(\"df2\")\n\n    checkAnswer(\n      df1.join(df2, $\"df1.key\" === $\"df2.key\"),\n      sql(\"SELECT a.key, b.key FROM testData a JOIN testData b ON a.key = b.key\")\n        .collect()\n        .toSeq)\n  }\n\n  test(\"SortMergeJoin with unsupported key type should fall back to Spark\") {\n    withSQLConf(\n      SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"Asia/Kathmandu\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n      withTable(\"t1\", \"t2\") {\n        sql(\"CREATE TABLE t1(name STRING, time TIMESTAMP) USING PARQUET\")\n        sql(\"INSERT OVERWRITE t1 VALUES('a', timestamp'2019-01-01 11:11:11')\")\n\n        sql(\"CREATE TABLE t2(name STRING, time TIMESTAMP) USING PARQUET\")\n        sql(\"INSERT OVERWRITE t2 VALUES('a', timestamp'2019-01-01 11:11:11')\")\n\n        val df = sql(\"SELECT * FROM t1 JOIN t2 ON t1.time = t2.time\")\n        val (sparkPlan, cometPlan) = checkSparkAnswer(df)\n        assert(sparkPlan.canonicalized === cometPlan.canonicalized)\n      }\n    }\n  }\n\n  test(\"Broadcast HashJoin without join filter\") {\n    withSQLConf(\n      CometConf.COMET_BATCH_SIZE.key -> \"100\",\n      SQLConf.PREFER_SORTMERGEJOIN.key -> \"false\",\n      \"spark.sql.join.forceApplyShuffledHashJoin\" -> \"true\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n      withParquetTable((0 until 1000).map(i => (i, i % 5)), \"tbl_a\") {\n        withParquetTable((0 until 1000).map(i => (i % 10, i + 2)), \"tbl_b\") {\n          // Inner join: build left\n          val df1 =\n            sql(\"SELECT /*+ BROADCAST(tbl_a) */ * FROM tbl_a JOIN tbl_b ON tbl_a._2 = 
tbl_b._1\")\n          checkSparkAnswerAndOperator(\n            df1,\n            Seq(classOf[CometBroadcastExchangeExec], classOf[CometBroadcastHashJoinExec]))\n\n          // Right join: build left\n          val df2 =\n            sql(\"SELECT /*+ BROADCAST(tbl_a) */ * FROM tbl_a RIGHT JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(\n            df2,\n            Seq(classOf[CometBroadcastExchangeExec], classOf[CometBroadcastHashJoinExec]))\n        }\n      }\n    }\n  }\n\n  test(\"Broadcast HashJoin with join filter\") {\n    withSQLConf(\n      CometConf.COMET_BATCH_SIZE.key -> \"100\",\n      SQLConf.PREFER_SORTMERGEJOIN.key -> \"false\",\n      \"spark.sql.join.forceApplyShuffledHashJoin\" -> \"true\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n      withParquetTable((0 until 1000).map(i => (i, i % 5)), \"tbl_a\") {\n        withParquetTable((0 until 1000).map(i => (i % 10, i + 2)), \"tbl_b\") {\n          // Inner join: build left\n          val df1 =\n            sql(\n              \"SELECT /*+ BROADCAST(tbl_a) */ * FROM tbl_a JOIN tbl_b \" +\n                \"ON tbl_a._2 = tbl_b._1 AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(\n            df1,\n            Seq(classOf[CometBroadcastExchangeExec], classOf[CometBroadcastHashJoinExec]))\n\n          // Right join: build left\n          val df2 =\n            sql(\n              \"SELECT /*+ BROADCAST(tbl_a) */ * FROM tbl_a RIGHT JOIN tbl_b \" +\n                \"ON tbl_a._2 = tbl_b._1 AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(\n            df2,\n            Seq(classOf[CometBroadcastExchangeExec], classOf[CometBroadcastHashJoinExec]))\n        }\n      }\n    }\n  }\n\n  test(\"HashJoin without join filter\") {\n    withSQLConf(\n      \"spark.sql.join.forceApplyShuffledHashJoin\" -> \"true\",\n      SQLConf.PREFER_SORTMERGEJOIN.key -> \"false\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n      withParquetTable((0 until 10).map(i => (i, i % 5)), \"tbl_a\") {\n        withParquetTable((0 until 10).map(i => (i % 10, i + 2)), \"tbl_b\") {\n          // Inner join: build left\n          val df1 =\n            sql(\n              \"SELECT /*+ SHUFFLE_HASH(tbl_a) */ * FROM tbl_a JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df1)\n\n          // Right join: build left\n          val df2 =\n            sql(\"SELECT /*+ SHUFFLE_HASH(tbl_a) */ * FROM tbl_a RIGHT JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df2)\n\n          // Full join: build left\n          val df3 =\n            sql(\"SELECT /*+ SHUFFLE_HASH(tbl_a) */ * FROM tbl_a FULL JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df3)\n\n          // TODO: Spark 3.4 returns SortMergeJoin for this query even with SHUFFLE_HASH hint.\n          // Left join with build left and right join with build right in hash join is only supported\n          // in Spark 3.5 or above. 
See SPARK-36612.\n          //\n          // Left join: build left\n          // sql(\"SELECT /*+ SHUFFLE_HASH(tbl_a) */ * FROM tbl_a LEFT JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n\n          // Inner join: build right\n          val df4 =\n            sql(\n              \"SELECT /*+ SHUFFLE_HASH(tbl_b) */ * FROM tbl_a JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df4)\n\n          // Left join: build right\n          val df5 =\n            sql(\"SELECT /*+ SHUFFLE_HASH(tbl_b) */ * FROM tbl_a LEFT JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df5)\n\n          // Right join: build right\n          val df6 =\n            sql(\"SELECT /*+ SHUFFLE_HASH(tbl_b) */ * FROM tbl_a RIGHT JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df6)\n\n          // Full join: build right\n          val df7 =\n            sql(\"SELECT /*+ SHUFFLE_HASH(tbl_b) */ * FROM tbl_a FULL JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df7)\n\n          // Left semi and anti joins are only supported with build right in Spark.\n          val left = sql(\"SELECT * FROM tbl_a\")\n          val right = sql(\"SELECT * FROM tbl_b\")\n\n          val df8 = left.join(right, left(\"_2\") === right(\"_1\"), \"leftsemi\")\n          checkSparkAnswerAndOperator(df8)\n\n          // DataFusion HashJoin LeftAnti has bugs in handling nulls and is disabled for now.\n          // left.join(right, left(\"_2\") === right(\"_1\"), \"leftanti\")\n        }\n      }\n    }\n  }\n\n  test(\"HashJoin with join filter\") {\n    withSQLConf(\n      SQLConf.PREFER_SORTMERGEJOIN.key -> \"false\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n      withParquetTable((0 until 10).map(i => (i, i % 5)), \"tbl_a\") {\n        withParquetTable((0 until 10).map(i => (i % 10, i + 2)), \"tbl_b\") {\n          // Inner join: build left\n          val df1 =\n            sql(\n              \"SELECT /*+ SHUFFLE_HASH(tbl_a) */ * FROM tbl_a JOIN tbl_b \" +\n                \"ON tbl_a._2 = tbl_b._1 AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(df1)\n\n          // Right join: build left\n          val df2 =\n            sql(\n              \"SELECT /*+ SHUFFLE_HASH(tbl_a) */ * FROM tbl_a RIGHT JOIN tbl_b \" +\n                \"ON tbl_a._2 = tbl_b._1 AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(df2)\n\n          // Full join: build left\n          val df3 =\n            sql(\n              \"SELECT /*+ SHUFFLE_HASH(tbl_a) */ * FROM tbl_a FULL JOIN tbl_b \" +\n                \"ON tbl_a._2 = tbl_b._1 AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(df3)\n        }\n      }\n    }\n  }\n\n  test(\"SortMergeJoin without join filter\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SORT_MERGE_JOIN_ENABLED.key -> \"true\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n      withParquetTable((0 until 10).map(i => (i, i % 5)), \"tbl_a\") {\n        withParquetTable((0 until 10).map(i => (i % 10, i + 2)), \"tbl_b\") {\n          val df1 = sql(\"SELECT * FROM tbl_a JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df1)\n\n          val df2 = sql(\"SELECT * FROM tbl_a LEFT JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df2)\n\n          val df3 = sql(\"SELECT * FROM tbl_b LEFT JOIN 
tbl_a ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df3)\n\n          val df4 = sql(\"SELECT * FROM tbl_a RIGHT JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df4)\n\n          val df5 = sql(\"SELECT * FROM tbl_b RIGHT JOIN tbl_a ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df5)\n\n          val df6 = sql(\"SELECT * FROM tbl_a FULL JOIN tbl_b ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df6)\n\n          val df7 = sql(\"SELECT * FROM tbl_b FULL JOIN tbl_a ON tbl_a._2 = tbl_b._1\")\n          checkSparkAnswerAndOperator(df7)\n\n          val left = sql(\"SELECT * FROM tbl_a\")\n          val right = sql(\"SELECT * FROM tbl_b\")\n\n          val df8 = left.join(right, left(\"_2\") === right(\"_1\"), \"leftsemi\")\n          checkSparkAnswerAndOperator(df8)\n\n          val df9 = right.join(left, left(\"_2\") === right(\"_1\"), \"leftsemi\")\n          checkSparkAnswerAndOperator(df9)\n\n          val df10 = left.join(right, left(\"_2\") === right(\"_1\"), \"leftanti\")\n          checkSparkAnswerAndOperator(df10)\n\n          val df11 = right.join(left, left(\"_2\") === right(\"_1\"), \"leftanti\")\n          checkSparkAnswerAndOperator(df11)\n        }\n      }\n    }\n  }\n\n  test(\"SortMergeJoin with join filter\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SORT_MERGE_JOIN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SORT_MERGE_JOIN_WITH_JOIN_FILTER_ENABLED.key -> \"true\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\") {\n      withParquetTable((0 until 10).map(i => (i, i % 5)), \"tbl_a\") {\n        withParquetTable((0 until 10).map(i => (i % 10, i + 2)), \"tbl_b\") {\n          val df1 = sql(\n            \"SELECT * FROM tbl_a JOIN tbl_b ON tbl_a._2 = tbl_b._1 AND \" +\n              \"tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(df1)\n\n          val df2 = sql(\n            \"SELECT * FROM tbl_a LEFT JOIN tbl_b ON tbl_a._2 = tbl_b._1 \" +\n              \"AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(df2)\n\n          val df3 = sql(\n            \"SELECT * FROM tbl_b LEFT JOIN tbl_a ON tbl_a._2 = tbl_b._1 \" +\n              \"AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(df3)\n\n          val df4 = sql(\n            \"SELECT * FROM tbl_a RIGHT JOIN tbl_b ON tbl_a._2 = tbl_b._1 \" +\n              \"AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(df4)\n\n          val df5 = sql(\n            \"SELECT * FROM tbl_b RIGHT JOIN tbl_a ON tbl_a._2 = tbl_b._1 \" +\n              \"AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(df5)\n\n          val df6 = sql(\n            \"SELECT * FROM tbl_a FULL JOIN tbl_b ON tbl_a._2 = tbl_b._1 \" +\n              \"AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(df6)\n\n          val df7 = sql(\n            \"SELECT * FROM tbl_b FULL JOIN tbl_a ON tbl_a._2 = tbl_b._1 \" +\n              \"AND tbl_a._1 > tbl_b._2\")\n          checkSparkAnswerAndOperator(df7)\n\n          val df8 = sql(\n            \"SELECT * FROM tbl_a LEFT SEMI JOIN tbl_b ON tbl_a._2 = tbl_b._1 \" +\n              \"AND tbl_a._2 >= tbl_b._1\")\n          checkSparkAnswerAndOperator(df8)\n\n          val df9 = sql(\n            \"SELECT * FROM tbl_b LEFT SEMI JOIN tbl_a ON tbl_a._2 = tbl_b._1 \" +\n              \"AND tbl_a._2 >= tbl_b._1\")\n          checkSparkAnswerAndOperator(df9)\n\n          
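// LEFT ANTI keeps left-side rows with no match; with the extra join filter,\n          // a match requires both the equi-condition and the filter to hold.\n          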
val df10 = sql(\n            \"SELECT * FROM tbl_a LEFT ANTI JOIN tbl_b ON tbl_a._2 = tbl_b._1 \" +\n              \"AND tbl_a._2 >= tbl_b._1\")\n          checkSparkAnswerAndOperator(df10)\n\n          val df11 = sql(\n            \"SELECT * FROM tbl_b LEFT ANTI JOIN tbl_a ON tbl_a._2 = tbl_b._1 \" +\n              \"AND tbl_a._2 >= tbl_b._1\")\n          checkSparkAnswerAndOperator(df11)\n        }\n      }\n    }\n  }\n\n  test(\"full outer join\") {\n    withTempView(\"`left`\", \"`right`\", \"allNulls\") {\n      allNulls.createOrReplaceTempView(\"allNulls\")\n\n      upperCaseData.where($\"N\" <= 4).createOrReplaceTempView(\"`left`\")\n      upperCaseData.where($\"N\" >= 3).createOrReplaceTempView(\"`right`\")\n\n      val left = UnresolvedRelation(TableIdentifier(\"left\"))\n      val right = UnresolvedRelation(TableIdentifier(\"right\"))\n\n      checkSparkAnswer(left.join(right, $\"left.N\" === $\"right.N\", \"full\"))\n\n      checkSparkAnswer(left.join(right, ($\"left.N\" === $\"right.N\") && ($\"left.N\" =!= 3), \"full\"))\n\n      checkSparkAnswer(left.join(right, ($\"left.N\" === $\"right.N\") && ($\"right.N\" =!= 3), \"full\"))\n\n      checkSparkAnswer(sql(\"\"\"\n          |SELECT l.a, count(*)\n          |FROM allNulls l FULL OUTER JOIN upperCaseData r ON (l.a = r.N)\n          |GROUP BY l.a\n        \"\"\".stripMargin))\n\n      checkSparkAnswer(sql(\"\"\"\n          |SELECT r.N, count(*)\n          |FROM allNulls l FULL OUTER JOIN upperCaseData r ON (l.a = r.N)\n          |GROUP BY r.N\n          \"\"\".stripMargin))\n\n      checkSparkAnswer(sql(\"\"\"\n          |SELECT l.N, count(*)\n          |FROM upperCaseData l FULL OUTER JOIN allNulls r ON (l.N = r.a)\n          |GROUP BY l.N\n          \"\"\".stripMargin))\n\n      checkSparkAnswer(sql(\"\"\"\n          |SELECT r.a, count(*)\n          |FROM upperCaseData l FULL OUTER JOIN allNulls r ON (l.N = r.a)\n          |GROUP BY r.a\n        \"\"\".stripMargin))\n    }\n  }\n\n  test(\"Broadcast hash join build-side batch coalescing\") {\n    // Use many shuffle partitions to produce many small broadcast batches,\n    // then verify that coalescing reduces the build-side batch count to 1 per task.\n    val numPartitions = 512\n    withSQLConf(\n      CometConf.COMET_BATCH_SIZE.key -> \"100\",\n      SQLConf.PREFER_SORTMERGEJOIN.key -> \"false\",\n      \"spark.sql.join.forceApplyShuffledHashJoin\" -> \"true\",\n      SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"false\",\n      SQLConf.SHUFFLE_PARTITIONS.key -> numPartitions.toString) {\n      withParquetTable((0 until 10000).map(i => (i, i % 5)), \"tbl_a\") {\n        withParquetTable((0 until 10000).map(i => (i % 10, i + 2)), \"tbl_b\") {\n          // Force a shuffle on tbl_a before broadcast so the broadcast source has\n          // numPartitions partitions, not just the number of parquet files.\n          val query =\n            s\"\"\"SELECT /*+ BROADCAST(a) */ *\n               |FROM (SELECT /*+ REPARTITION($numPartitions) */ * FROM tbl_a) a\n               |JOIN tbl_b ON a._2 = tbl_b._1\"\"\".stripMargin\n\n          val (_, cometPlan) = checkSparkAnswerAndOperator(\n            sql(query),\n            Seq(classOf[CometBroadcastExchangeExec], classOf[CometBroadcastHashJoinExec]))\n\n          val joins = collect(cometPlan) { case j: CometBroadcastHashJoinExec =>\n            j\n          }\n          assert(joins.nonEmpty, \"Expected 
CometBroadcastHashJoinExec in plan\")\n\n          val join = joins.head\n          val buildBatches = join.metrics(\"build_input_batches\").value\n\n          // Without coalescing, build_input_batches would be ~numPartitions per task,\n          // totaling ~numPartitions * numPartitions across all tasks.\n          // With coalescing, each task gets 1 batch, so total ≈ numPartitions.\n          assert(\n            buildBatches <= numPartitions,\n            s\"Expected at most $numPartitions build batches (1 per task), got $buildBatches. \" +\n              \"Broadcast batch coalescing may not be working.\")\n\n          val broadcasts = collect(cometPlan) { case b: CometBroadcastExchangeExec =>\n            b\n          }\n          assert(broadcasts.nonEmpty, \"Expected CometBroadcastExchangeExec in plan\")\n\n          val broadcast = broadcasts.head\n          val coalescedBatches = broadcast.metrics(\"numCoalescedBatches\").value\n          val coalescedRows = broadcast.metrics(\"numCoalescedRows\").value\n\n          assert(\n            coalescedBatches >= numPartitions,\n            s\"Expected at least $numPartitions coalesced batches, got $coalescedBatches\")\n          assert(coalescedRows == 10000, s\"Expected 10000 coalesced rows, got $coalescedRows\")\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/exec/CometNativeColumnarToRowSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport java.sql.Date\nimport java.sql.Timestamp\n\nimport scala.util.Random\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.spark.sql.{CometTestBase, Row}\nimport org.apache.spark.sql.comet.CometNativeColumnarToRowExec\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.{CometConf, NativeColumnarToRowConverter}\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator, SchemaGenOptions}\n\n/**\n * Test suite for native columnar to row conversion.\n *\n * These tests verify that CometNativeColumnarToRowExec produces correct results for all supported\n * data types.\n */\nclass CometNativeColumnarToRowSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(\n        CometConf.COMET_NATIVE_COLUMNAR_TO_ROW_ENABLED.key -> \"true\",\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_AUTO) {\n        testFun\n      }\n    }\n  }\n\n  /**\n   * Helper to verify that CometNativeColumnarToRowExec is present in the plan.\n   */\n  private def assertNativeC2RPresent(df: org.apache.spark.sql.DataFrame): Unit = {\n    val plan = stripAQEPlan(df.queryExecution.executedPlan)\n    val nativeC2R = plan.collect { case c: CometNativeColumnarToRowExec => c }\n    assert(\n      nativeC2R.nonEmpty,\n      s\"Expected CometNativeColumnarToRowExec in plan but found none.\\nPlan: $plan\")\n  }\n\n  test(\"primitive types: boolean, byte, short, int, long\") {\n    val data = (0 until 100).map { i =>\n      (i % 2 == 0, i.toByte, i.toShort, i, i.toLong)\n    }\n    withParquetTable(data, \"primitives\") {\n      // Force row output by using a UDF or collect\n      val df = sql(\"SELECT * FROM primitives\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"primitive types: float, double\") {\n    val data = (0 until 100).map { i =>\n      (i.toFloat / 10.0f, i.toDouble / 10.0)\n    }\n    withParquetTable(data, \"floats\") {\n      val df = sql(\"SELECT * FROM floats\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"primitive types with nulls\") {\n    val data = (0 until 100).map { i =>\n      val isNull = i % 5 == 0\n      (\n        if (isNull) null else (i % 2 == 0),\n        if (isNull) null else i.toByte,\n        if (isNull) null else i.toShort,\n        if (isNull) null else i,\n        if (isNull) null else i.toLong,\n        if (isNull) null 
else i.toFloat,\n        if (isNull) null else i.toDouble)\n    }\n    val df = spark\n      .createDataFrame(\n        spark.sparkContext.parallelize(data.map(Row.fromTuple(_))),\n        StructType(\n          Seq(\n            StructField(\"bool\", BooleanType, nullable = true),\n            StructField(\"byte\", ByteType, nullable = true),\n            StructField(\"short\", ShortType, nullable = true),\n            StructField(\"int\", IntegerType, nullable = true),\n            StructField(\"long\", LongType, nullable = true),\n            StructField(\"float\", FloatType, nullable = true),\n            StructField(\"double\", DoubleType, nullable = true))))\n    withParquetDataFrame(df) { parquetDf =>\n      assertNativeC2RPresent(parquetDf)\n      checkSparkAnswer(parquetDf)\n    }\n  }\n\n  test(\"string type\") {\n    val data = (0 until 100).map { i =>\n      (i, s\"string_value_$i\", if (i % 10 == 0) null else s\"nullable_$i\")\n    }\n    val df = spark\n      .createDataFrame(\n        spark.sparkContext.parallelize(data.map(Row.fromTuple(_))),\n        StructType(\n          Seq(\n            StructField(\"id\", IntegerType),\n            StructField(\"str\", StringType),\n            StructField(\"nullable_str\", StringType, nullable = true))))\n    withParquetDataFrame(df) { parquetDf =>\n      assertNativeC2RPresent(parquetDf)\n      checkSparkAnswer(parquetDf)\n    }\n  }\n\n  test(\"string type with various lengths\") {\n    val random = new Random(42)\n    val data = (0 until 100).map { i =>\n      val len = random.nextInt(1000)\n      (i, random.alphanumeric.take(len).mkString)\n    }\n    withParquetTable(data, \"strings\") {\n      val df = sql(\"SELECT * FROM strings\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"binary type\") {\n    val data = (0 until 100).map { i =>\n      (i, s\"binary_$i\".getBytes)\n    }\n    val df = spark\n      .createDataFrame(\n        spark.sparkContext.parallelize(data.map { case (id, bytes) => Row(id, bytes) }),\n        StructType(\n          Seq(StructField(\"id\", IntegerType), StructField(\"bin\", BinaryType, nullable = true))))\n    withParquetDataFrame(df) { parquetDf =>\n      assertNativeC2RPresent(parquetDf)\n      checkSparkAnswer(parquetDf)\n    }\n  }\n\n  test(\"date type\") {\n    val baseDate = Date.valueOf(\"2024-01-01\")\n    val data = (0 until 100).map { i =>\n      (i, new Date(baseDate.getTime + i * 24 * 60 * 60 * 1000L))\n    }\n    withParquetTable(data, \"dates\") {\n      val df = sql(\"SELECT * FROM dates\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"timestamp type\") {\n    val baseTs = Timestamp.valueOf(\"2024-01-01 00:00:00\")\n    val data = (0 until 100).map { i =>\n      (i, new Timestamp(baseTs.getTime + i * 1000L))\n    }\n    withParquetTable(data, \"timestamps\") {\n      val df = sql(\"SELECT * FROM timestamps\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"decimal type - inline precision (precision <= 18)\") {\n    val data = (0 until 100).map { i =>\n      (i, BigDecimal(i * 100 + i) / 100)\n    }\n    val df = spark\n      .createDataFrame(\n        spark.sparkContext.parallelize(data.map { case (id, dec) =>\n          Row(id, dec.bigDecimal)\n        }),\n        StructType(\n          Seq(\n            StructField(\"id\", IntegerType),\n            StructField(\"dec\", DecimalType(10, 2), nullable = true))))\n    withParquetDataFrame(df) { parquetDf =>\n      
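// Decimals with precision <= 18 fit in a long, so Spark's UnsafeRow stores\n      // them inline rather than as variable-length bytes.\n      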
assertNativeC2RPresent(parquetDf)\n      checkSparkAnswer(parquetDf)\n    }\n  }\n\n  test(\"decimal type - variable length precision (precision > 18)\") {\n    val data = (0 until 100).map { i =>\n      (i, BigDecimal(s\"12345678901234567890.$i\"))\n    }\n    val df = spark\n      .createDataFrame(\n        spark.sparkContext.parallelize(data.map { case (id, dec) =>\n          Row(id, dec.bigDecimal)\n        }),\n        StructType(\n          Seq(\n            StructField(\"id\", IntegerType),\n            StructField(\"dec\", DecimalType(30, 5), nullable = true))))\n    withParquetDataFrame(df) { parquetDf =>\n      assertNativeC2RPresent(parquetDf)\n      checkSparkAnswer(parquetDf)\n    }\n  }\n\n  test(\"struct type\") {\n    val data = (0 until 100).map { i =>\n      (i, (i * 2, s\"nested_$i\"))\n    }\n    withParquetTable(data, \"structs\") {\n      val df = sql(\"SELECT * FROM structs\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"array type\") {\n    val data = (0 until 100).map { i =>\n      (i, (0 to i % 10).toArray)\n    }\n    withParquetTable(data, \"arrays\") {\n      val df = sql(\"SELECT * FROM arrays\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"array of strings\") {\n    val data = (0 until 100).map { i =>\n      (i, (0 to i % 5).map(j => s\"elem_${i}_$j\").toArray)\n    }\n    withParquetTable(data, \"string_arrays\") {\n      val df = sql(\"SELECT * FROM string_arrays\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"map type\") {\n    val data = (0 until 100).map { i =>\n      (i, (0 to i % 5).map(j => (s\"key_$j\", j * 10)).toMap)\n    }\n    withParquetTable(data, \"maps\") {\n      val df = sql(\"SELECT * FROM maps\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"nested struct in array\") {\n    val data = (0 until 100).map { i =>\n      (i, (0 to i % 5).map(j => (j, s\"nested_$j\")).toArray)\n    }\n    withParquetTable(data, \"nested_structs\") {\n      val df = sql(\"SELECT * FROM nested_structs\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"deeply nested: array of arrays\") {\n    val data = (0 until 100).map { i =>\n      (i, (0 to i % 3).map(j => (0 to j).toArray).toArray)\n    }\n    withParquetTable(data, \"nested_arrays\") {\n      val df = sql(\"SELECT * FROM nested_arrays\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"deeply nested: map with array values\") {\n    val data = (0 until 100).map { i =>\n      (i, (0 to i % 3).map(j => (s\"key_$j\", (0 to j).toArray)).toMap)\n    }\n    withParquetTable(data, \"map_array_values\") {\n      val df = sql(\"SELECT * FROM map_array_values\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"deeply nested: struct containing array of maps\") {\n    val data = (0 until 100).map { i =>\n      (\n        i,\n        ((0 to i % 3).map(j => (0 to j % 2).map(k => (s\"k$k\", k * 10)).toMap).toArray, s\"str_$i\"))\n    }\n    withParquetTable(data, \"struct_array_maps\") {\n      val df = sql(\"SELECT * FROM struct_array_maps\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"all null values\") {\n    val df = spark\n      .createDataFrame(\n        spark.sparkContext.parallelize((0 until 100).map(_ => Row(null, null, null))),\n        StructType(\n          Seq(\n            StructField(\"int_col\", 
IntegerType, nullable = true),\n            StructField(\"str_col\", StringType, nullable = true),\n            StructField(\"double_col\", DoubleType, nullable = true))))\n    withParquetDataFrame(df) { parquetDf =>\n      assertNativeC2RPresent(parquetDf)\n      checkSparkAnswer(parquetDf)\n    }\n  }\n\n  test(\"empty batch\") {\n    val df = spark\n      .createDataFrame(\n        spark.sparkContext.parallelize(Seq.empty[Row]),\n        StructType(Seq(StructField(\"int_col\", IntegerType), StructField(\"str_col\", StringType))))\n    withParquetDataFrame(df) { parquetDf =>\n      assertNativeC2RPresent(parquetDf)\n      checkSparkAnswer(parquetDf)\n    }\n  }\n\n  test(\"mixed types - comprehensive\") {\n    val baseDate = Date.valueOf(\"2024-01-01\")\n    val baseTs = Timestamp.valueOf(\"2024-01-01 00:00:00\")\n\n    val data = (0 until 100).map { i =>\n      val isNull = i % 7 == 0\n      Row(\n        i,\n        if (isNull) null else (i % 2 == 0),\n        if (isNull) null else i.toByte,\n        if (isNull) null else i.toShort,\n        if (isNull) null else i.toLong,\n        if (isNull) null else i.toFloat,\n        if (isNull) null else i.toDouble,\n        if (isNull) null else s\"string_$i\",\n        if (isNull) null else new Date(baseDate.getTime + i * 24 * 60 * 60 * 1000L),\n        if (isNull) null else new Timestamp(baseTs.getTime + i * 1000L),\n        if (isNull) null else BigDecimal(i * 100 + i, 2).bigDecimal)\n    }\n\n    val schema = StructType(\n      Seq(\n        StructField(\"id\", IntegerType),\n        StructField(\"bool_col\", BooleanType, nullable = true),\n        StructField(\"byte_col\", ByteType, nullable = true),\n        StructField(\"short_col\", ShortType, nullable = true),\n        StructField(\"long_col\", LongType, nullable = true),\n        StructField(\"float_col\", FloatType, nullable = true),\n        StructField(\"double_col\", DoubleType, nullable = true),\n        StructField(\"string_col\", StringType, nullable = true),\n        StructField(\"date_col\", DateType, nullable = true),\n        StructField(\"timestamp_col\", TimestampType, nullable = true),\n        StructField(\"decimal_col\", DecimalType(10, 2), nullable = true)))\n\n    val df = spark.createDataFrame(spark.sparkContext.parallelize(data), schema)\n    withParquetDataFrame(df) { parquetDf =>\n      assertNativeC2RPresent(parquetDf)\n      checkSparkAnswer(parquetDf)\n    }\n  }\n\n  test(\"large batch\") {\n    val data = (0 until 10000).map { i =>\n      (i, s\"string_value_$i\", i.toDouble)\n    }\n    withParquetTable(data, \"large_batch\") {\n      val df = sql(\"SELECT * FROM large_batch\")\n      assertNativeC2RPresent(df)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"disabled by default\") {\n    withSQLConf(CometConf.COMET_NATIVE_COLUMNAR_TO_ROW_ENABLED.key -> \"false\") {\n      val data = (0 until 10).map(i => (i, s\"value_$i\"))\n      withParquetTable(data, \"test_disabled\") {\n        val df = sql(\"SELECT * FROM test_disabled\")\n        val plan = stripAQEPlan(df.queryExecution.executedPlan)\n        val nativeC2R = plan.collect { case c: CometNativeColumnarToRowExec => c }\n        assert(\n          nativeC2R.isEmpty,\n          s\"Expected no CometNativeColumnarToRowExec when disabled.\\nPlan: $plan\")\n        checkSparkAnswer(df)\n      }\n    }\n  }\n\n  test(\"fuzz test with nested types\") {\n    val random = new Random(42)\n    val schemaGenOptions =\n      SchemaGenOptions(generateArray = true, generateStruct = true, generateMap = true)\n    
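// -0.0, NaN, and infinity are disabled below to keep results comparable,\n    // since handling of these float edge cases can differ between the JVM and native code.\n    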
val dataGenOptions =\n      DataGenOptions(generateNegativeZero = false, generateNaN = false, generateInfinity = false)\n\n    // Use generateSchema which creates various nested types including arrays, structs, and maps.\n    // Not all generated types may be supported by native C2R, so we just verify correctness.\n    val schema = FuzzDataGenerator.generateSchema(schemaGenOptions)\n    val df = FuzzDataGenerator.generateDataFrame(random, spark, schema, 100, dataGenOptions)\n\n    withParquetDataFrame(df) { parquetDf =>\n      checkSparkAnswer(parquetDf)\n    }\n  }\n\n  test(\"fuzz test with generateNestedSchema\") {\n    val random = new Random(42)\n\n    // Use only primitive types supported by native C2R (excludes TimestampNTZType)\n    val supportedPrimitiveTypes: Seq[DataType] = Seq(\n      DataTypes.BooleanType,\n      DataTypes.ByteType,\n      DataTypes.ShortType,\n      DataTypes.IntegerType,\n      DataTypes.LongType,\n      DataTypes.FloatType,\n      DataTypes.DoubleType,\n      DataTypes.createDecimalType(10, 2),\n      DataTypes.DateType,\n      DataTypes.TimestampType,\n      DataTypes.StringType,\n      DataTypes.BinaryType)\n\n    val schemaGenOptions = SchemaGenOptions(\n      generateArray = true,\n      generateStruct = true,\n      generateMap = true,\n      primitiveTypes = supportedPrimitiveTypes)\n    val dataGenOptions =\n      DataGenOptions(generateNegativeZero = false, generateNaN = false, generateInfinity = false)\n\n    // Test with multiple random deeply nested schemas\n    for (iteration <- 1 to 3) {\n      val schema = FuzzDataGenerator.generateNestedSchema(\n        random,\n        numCols = 5,\n        minDepth = 1,\n        maxDepth = 3,\n        options = schemaGenOptions)\n\n      val df = FuzzDataGenerator.generateDataFrame(random, spark, schema, 100, dataGenOptions)\n\n      withParquetDataFrame(df) { parquetDf =>\n        assertNativeC2RPresent(parquetDf)\n        checkSparkAnswer(parquetDf)\n      }\n    }\n  }\n\n  // Regression test for https://github.com/apache/datafusion-comet/issues/3308\n  // Native columnar-to-row returns UnsafeRow pointing into a Rust-owned buffer that is\n  // cleared/reused on each convert() call. This test directly exercises the converter:\n  // it converts multiple batches and holds row references from earlier batches, then\n  // verifies they still contain correct data. 
Without a fix (e.g., copying rows),\n  // rows from earlier batches will contain corrupted data from buffer reuse.\n  test(\"rows from earlier batches are not corrupted by subsequent convert() calls\") {\n    import org.apache.spark.sql.catalyst.InternalRow\n    import org.apache.spark.sql.comet.execution.arrow.CometArrowConverters\n    import org.apache.spark.unsafe.types.UTF8String\n\n    import scala.collection.mutable.ArrayBuffer\n\n    val schema = new StructType().add(\"id\", IntegerType).add(\"str\", StringType)\n\n    // Create multiple small batches using CometArrowConverters\n    val numBatches = 10\n    val rowsPerBatch = 5\n    val totalRows = numBatches * rowsPerBatch\n\n    val rows = (0 until totalRows).map { i =>\n      InternalRow(i, UTF8String.fromString(s\"value_$i\"))\n    }\n\n    // Create batches using rowToArrowBatchIter which handles shading internally\n    val batchIter = CometArrowConverters\n      .rowToArrowBatchIter(rows.iterator, schema, rowsPerBatch, \"UTC\", null)\n\n    val converter = new NativeColumnarToRowConverter(schema, rowsPerBatch)\n    try {\n      // Collect all rows from all batches into a single array\n      // The converter returns rows that should be independent copies\n      val allRows = new ArrayBuffer[InternalRow]()\n      var batchCount = 0\n\n      while (batchIter.hasNext) {\n        val batch = batchIter.next()\n        batchCount += 1\n        // Convert this batch and collect all rows\n        val rowIter = converter.convert(batch)\n        while (rowIter.hasNext) {\n          allRows += rowIter.next()\n        }\n        batch.close()\n      }\n\n      assert(batchCount == numBatches, s\"Expected $numBatches batches, got $batchCount\")\n      assert(allRows.length == totalRows, s\"Expected $totalRows rows, got ${allRows.length}\")\n\n      // Verify that reading through held references produces all expected\n      // distinct values. If rows weren't copied, all entries would point\n      // to the same reused UnsafeRow object with stale data.\n      val distinctIds = allRows.map(_.getInt(0)).toSet\n      assert(\n        distinctIds.size == totalRows,\n        s\"UnsafeRow reuse bug: expected $totalRows distinct row IDs but got \" +\n          s\"${distinctIds.size} (values: ${distinctIds.toSeq.sorted.mkString(\", \")}). \" +\n          \"This means rows were not copied and all references point to the same \" +\n          \"reused UnsafeRow object.\")\n    } finally {\n      converter.close()\n    }\n  }\n\n  /**\n   * Helper to create a parquet table from a DataFrame and run a function with it.\n   */\n  private def withParquetDataFrame(df: org.apache.spark.sql.DataFrame)(\n      f: org.apache.spark.sql.DataFrame => Unit): Unit = {\n    withTempPath { path =>\n      df.write.parquet(path.getAbsolutePath)\n      spark.read.parquet(path.getAbsolutePath).createOrReplaceTempView(\"test_table\")\n      f(sql(\"SELECT * FROM test_table\"))\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/exec/CometNativeReaderSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.hadoop.conf.Configuration\nimport org.apache.hadoop.fs.Path\nimport org.apache.parquet.hadoop.ParquetWriter\nimport org.apache.parquet.hadoop.api.WriteSupport\nimport org.apache.parquet.hadoop.api.WriteSupport.WriteContext\nimport org.apache.parquet.io.api.RecordConsumer\nimport org.apache.parquet.schema.MessageTypeParser\nimport org.apache.spark.sql.{CometTestBase, Row}\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions.{array, col}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{IntegerType, StringType, StructType}\n\nimport org.apache.comet.CometConf\n\nclass CometNativeReaderSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    Seq(CometConf.SCAN_NATIVE_DATAFUSION, CometConf.SCAN_NATIVE_ICEBERG_COMPAT).foreach(scan =>\n      super.test(s\"$testName - $scan\", testTags: _*) {\n        withSQLConf(\n          CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n          SQLConf.USE_V1_SOURCE_LIST.key -> \"parquet\",\n          CometConf.COMET_ENABLED.key -> \"true\",\n          CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"false\",\n          CometConf.COMET_NATIVE_SCAN_IMPL.key -> scan) {\n          testFun\n        }\n      })\n  }\n\n  test(\"native reader case sensitivity\") {\n    withTempPath { path =>\n      spark.range(10).toDF(\"a\").write.parquet(path.toString)\n      Seq(true, false).foreach { caseSensitive =>\n        withSQLConf(SQLConf.CASE_SENSITIVE.key -> caseSensitive.toString) {\n          val tbl = s\"case_sensitivity_${caseSensitive}_${System.currentTimeMillis()}\"\n          sql(s\"create table $tbl (A long) using parquet options (path '\" + path + \"')\")\n          val df = sql(s\"select A from $tbl\")\n          checkSparkAnswer(df)\n        }\n      }\n    }\n  }\n\n  test(\"native reader duplicate fields in case-insensitive mode\") {\n    withTempPath { path =>\n      // Write parquet with columns A, B, b (B and b are duplicates case-insensitively)\n      withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"true\") {\n        spark\n          .range(5)\n          .selectExpr(\"id as A\", \"id as B\", \"id as b\")\n          .write\n          .mode(\"overwrite\")\n          .parquet(path.toString)\n      }\n      val tbl = s\"dup_fields_${System.currentTimeMillis()}\"\n      withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"true\") {\n        sql(s\"create table $tbl (A long, B long) using parquet options 
(path '${path}')\")\n      }\n      // In case-insensitive mode, selecting B should fail because both B and b match\n      withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"false\") {\n        val e = intercept[Exception] {\n          sql(s\"select B from $tbl\").collect()\n        }\n        assert(\n          e.getMessage.contains(\"duplicate field\") ||\n            e.getMessage.contains(\"Found duplicate field\") ||\n            (e.getCause != null && e.getCause.getMessage.contains(\"duplicate field\")) ||\n            (e.getCause != null && e.getCause.getMessage.contains(\"Found duplicate field\")),\n          s\"Expected duplicate field error, got: ${e.getMessage}\")\n      }\n      // In case-sensitive mode, selecting B should work fine\n      withSQLConf(SQLConf.CASE_SENSITIVE.key -> \"true\") {\n        val df = sql(s\"select A from $tbl\")\n        assert(df.collect().length == 5)\n      }\n      sql(s\"drop table if exists $tbl\")\n    }\n  }\n\n  test(\"native reader - read simple STRUCT fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select named_struct('firstName', 'John', 'lastName', 'Doe', 'age', 35) as personal_info union all\n        |select named_struct('firstName', 'Jane', 'lastName', 'Doe', 'age', 40) as personal_info\n        |\"\"\".stripMargin,\n      \"select personal_info.* from tbl\")\n  }\n\n  test(\"native reader - read simple ARRAY fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select array(1, 2, 3) as arr union all\n        |select array(2, 3, 4) as arr\n        |\"\"\".stripMargin,\n      \"select arr from tbl\")\n  }\n\n  test(\"native reader - read STRUCT of ARRAY fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select named_struct('col', arr) c0 from\n        |(\n        |  select array(1, 2, 3) as arr union all\n        |  select array(2, 3, 4) as arr\n        |)\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read ARRAY of ARRAY fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select array(arr0, arr1) c0 from\n        |(\n        |  select array(1, 2, 3) as arr0, array(2, 3, 4) as arr1\n        |)\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read ARRAY of STRUCT fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select array(str0, str1) c0 from\n        |(\n        |  select named_struct('a', 1, 'b', 'n') str0, named_struct('a', 2, 'b', 'w') str1\n        |)\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read STRUCT of STRUCT fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select named_struct('a', str0, 'b', str1) c0 from\n        |(\n        |  select named_struct('a', 1, 'b', 'n') str0, named_struct('c', 2, 'd', 'w') str1\n        |)\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read STRUCT of ARRAY of STRUCT fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select named_struct('a', array(str0, str1), 'b', array(str2, str3)) c0 from\n        |(\n        |  select named_struct('a', 1, 'b', 'n') str0,\n        |         named_struct('a', 2, 'b', 'w') str1,\n        |         named_struct('x', 3, 'y', 'a') str2,\n        |         named_struct('x', 4, 'y', 'c') str3\n        |)\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read ARRAY of STRUCT of ARRAY fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 
array(named_struct('a', a0, 'b', a1), named_struct('a', a2, 'b', a3)) c0 from\n        |(\n        |  select array(1, 2, 3) a0,\n        |         array(2, 3, 4) a1,\n        |         array(3, 4, 5) a2,\n        |         array(4, 5, 6) a3\n        |)\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read simple MAP fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select map('a', 1) as c0 union all\n        |select map('b', 2)\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read MAP of value ARRAY fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select map('a', array(1), 'c', array(3)) as c0 union all\n        |select map('b', array(2))\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read MAP of value STRUCT fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select map('a', named_struct('f0', 0, 'f1', 'foo'), 'b', named_struct('f0', 1, 'f1', 'bar')) as c0 union all\n        |select map('c', named_struct('f2', 0, 'f1', 'baz')) as c0\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read MAP of value MAP fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select map('a', map('a1', 1, 'b1', 2), 'b', map('a2', 2, 'b2', 3)) as c0 union all\n        |select map('c', map('a3', 3, 'b3', 4))\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read STRUCT of MAP fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select named_struct('m0', map('a', 1)) as c0 union all\n        |select named_struct('m1', map('b', 2))\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read ARRAY of MAP fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select array(map('a', 1), map('b', 2)) as c0 union all\n        |select array(map('c', 3))\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read ARRAY of MAP of ARRAY value fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select array(map('a', array(1, 2, 3), 'b', array(2, 3, 4)), map('c', array(4, 5, 6), 'd', array(7, 8, 9))) as c0 union all\n        |select array(map('x', array(1, 2, 3), 'y', array(2, 3, 4)), map('c', array(4, 5, 6), 'z', array(7, 8, 9)))\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read STRUCT of MAP of STRUCT value fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select named_struct('m0', map('a', named_struct('f0', 1)), 'm1', map('b', named_struct('f1', 1))) as c0 union all\n        |select named_struct('m0', map('c', named_struct('f2', 1)), 'm1', map('d', named_struct('f3', 1))) as c0\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read MAP of ARRAY of MAP fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select map('a', array(map(1, 'a', 2, 'b'), map(1, 'a', 2, 'b'))) as c0 union all\n        |select map('b', array(map(1, 'a', 2, 'b'), map(1, 'a', 2, 'b'))) as c0\n        |\"\"\".stripMargin,\n      \"select c0 from tbl\")\n  }\n\n  test(\"native reader - read MAP of STRUCT of MAP fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select map('a', named_struct('f0', map(1, 'b')), 'b', named_struct('f0', map(1, 'b'))) as c0 union all\n        |select map('c', named_struct('f0', map(1, 'b'))) as c0\n        |\"\"\".stripMargin,\n      
\"select c0 from tbl\")\n  }\n  test(\"native reader - read a STRUCT subfield from ARRAY of STRUCTS - second field\") {\n    testSingleLineQuery(\n      \"\"\"\n        | select array(str0, str1) c0 from\n        | (\n        |   select\n        |     named_struct('a', 1, 'b', 'n', 'c', 'x') str0,\n        |     named_struct('a', 2, 'b', 'w', 'c', 'y') str1\n        | )\n        |\"\"\".stripMargin,\n      \"select c0[0].b col0 from tbl\")\n  }\n\n  test(\"native reader - read a STRUCT subfield - field from second\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a, named_struct('a', 1, 'b', 'n') c0\n        |\"\"\".stripMargin,\n      \"select c0.b from tbl\")\n  }\n\n  test(\"native reader - read a STRUCT subfield from ARRAY of STRUCTS - field from first\") {\n    testSingleLineQuery(\n      \"\"\"\n        | select array(str0, str1) c0 from\n        | (\n        |   select\n        |     named_struct('a', 1, 'b', 'n', 'c', 'x') str0,\n        |     named_struct('a', 2, 'b', 'w', 'c', 'y') str1\n        | )\n        |\"\"\".stripMargin,\n      \"select c0[0].a, c0[0].b, c0[0].c from tbl\")\n  }\n\n  test(\"native reader - read a STRUCT subfield from ARRAY of STRUCTS - reverse fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        | select array(str0, str1) c0 from\n        | (\n        |   select\n        |     named_struct('a', 1, 'b', 'n', 'c', 'x') str0,\n        |     named_struct('a', 2, 'b', 'w', 'c', 'y') str1\n        | )\n        |\"\"\".stripMargin,\n      \"select c0[0].c, c0[0].b, c0[0].a from tbl\")\n  }\n\n  test(\"native reader - read a STRUCT subfield from ARRAY of STRUCTS - skip field\") {\n    testSingleLineQuery(\n      \"\"\"\n        | select array(str0, str1) c0 from\n        | (\n        |   select\n        |     named_struct('a', 1, 'b', 'n', 'c', 'x') str0,\n        |     named_struct('a', 2, 'b', 'w', 'c', 'y') str1\n        | )\n        |\"\"\".stripMargin,\n      \"select c0[0].a, c0[0].c from tbl\")\n  }\n\n  test(\"native reader - read a STRUCT subfield from ARRAY of STRUCTS - duplicate first field\") {\n    testSingleLineQuery(\n      \"\"\"\n        | select array(str0, str1) c0 from\n        | (\n        |   select\n        |     named_struct('a', 1, 'b', 'n', 'c', 'x') str0,\n        |     named_struct('a', 2, 'b', 'w', 'c', 'y') str1\n        | )\n        |\"\"\".stripMargin,\n      \"select c0[0].a, c0[0].a from tbl\")\n  }\n\n  test(\"native reader - select nested field from a complex map[struct, struct] using map_keys\") {\n    testSingleLineQuery(\n      \"\"\"\n        | select map(str0, str1) c0 from\n        | (\n        |   select named_struct('a', cast(1 as long), 'b', cast(2 as long), 'c', cast(3 as long)) str0,\n        |          named_struct('x', cast(8 as long), 'y', cast(9 as long), 'z', cast(0 as long)) str1 union all\n        |   select named_struct('a', cast(3 as long), 'b', cast(4 as long), 'c', cast(5 as long)) str0,\n        |          named_struct('x', cast(6 as long), 'y', cast(7 as long), 'z', cast(8 as long)) str1\n        | )\n        |\"\"\".stripMargin,\n      \"select map_keys(c0).b from tbl\")\n  }\n\n  test(\n    \"native reader - select nested field from a complex map[struct, struct] using map_values\") {\n    testSingleLineQuery(\n      \"\"\"\n        | select map(str0, str1) c0 from\n        | (\n        |   select named_struct('a', cast(1 as long), 'b', cast(2 as long), 'c', cast(3 as long)) str0,\n        |          named_struct('x', cast(8 as long), 'y', cast(9 as long), 'z', cast(0 as long)) 
str1 union all\n        |   select named_struct('a', cast(3 as long), 'b', cast(4 as long), 'c', cast(5 as long)) str0,\n        |          named_struct('x', cast(6 as long), 'y', cast(7 as long), 'z', cast(8 as long)) str1 union all\n        |   select named_struct('a', cast(31 as long), 'b', cast(41 as long), 'c', cast(51 as long)), null\n        | )\n        |\"\"\".stripMargin,\n      \"select map_values(c0).y from tbl\")\n  }\n\n  test(\"native reader - select struct field with user defined schema\") {\n    // extract existing A column\n    var readSchema = new StructType().add(\n      \"c0\",\n      new StructType()\n        .add(\"a\", IntegerType, nullable = true),\n      nullable = true)\n\n    testSingleLineQuery(\n      \"\"\"\n        | select named_struct('a', 0, 'b', 'xyz') c0\n        |\"\"\".stripMargin,\n      \"select * from tbl\",\n      readSchema = Some(readSchema))\n\n    // extract existing A column, nonexisting X\n    readSchema = new StructType().add(\n      \"c0\",\n      new StructType()\n        .add(\"a\", IntegerType, nullable = true)\n        .add(\"x\", StringType, nullable = true),\n      nullable = true)\n\n    testSingleLineQuery(\n      \"\"\"\n        | select named_struct('a', 0, 'b', 'xyz') c0\n        |\"\"\".stripMargin,\n      \"select * from tbl\",\n      readSchema = Some(readSchema))\n\n    // extract nonexisting X, Y columns\n    readSchema = new StructType().add(\n      \"c0\",\n      new StructType()\n        .add(\"y\", IntegerType, nullable = true)\n        .add(\"x\", StringType, nullable = true),\n      nullable = true)\n\n    testSingleLineQuery(\n      \"\"\"\n        | select named_struct('a', 0, 'b', 'xyz') c0\n        |\"\"\".stripMargin,\n      \"select * from tbl\",\n      readSchema = Some(readSchema))\n  }\n\n  test(\"native reader - extract map by key\") {\n    // existing key\n    testSingleLineQuery(\n      \"\"\"\n        | select map(str0, str1) c0 from\n        | (\n        |    select 'key0' str0, named_struct('a', 1, 'b', 'str') str1\n        | )\n        |\"\"\".stripMargin,\n      \"select c0['key0'] from tbl\")\n\n    // existing key, existing struct subfield\n    testSingleLineQuery(\n      \"\"\"\n        | select map(str0, str1) c0 from\n        | (\n        |    select 'key0' str0, named_struct('a', 1, 'b', 'str') str1\n        | )\n        |\"\"\".stripMargin,\n      \"select c0['key0'].b from tbl\")\n\n    // nonexisting key\n    testSingleLineQuery(\n      \"\"\"\n        | select map(str0, str1) c0 from\n        | (\n        |    select 'key0' str0, named_struct('a', 1, 'b', 'str') str1\n        | )\n        |\"\"\".stripMargin,\n      \"select c0['key1'] from tbl\")\n\n    // nonexisting key, existing struct subfield\n    testSingleLineQuery(\n      \"\"\"\n        | select map(str0, str1) c0 from\n        | (\n        |    select 'key0' str0, named_struct('a', 1, 'b', 'str') str1\n        | )\n        |\"\"\".stripMargin,\n      \"select c0['key1'].b from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal INT fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(1, 2, null, 3, null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal BOOL fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(true, null, false, null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal NULL fields\") {\n    testSingleLineQuery(\n      \"\"\"\n      
  |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(null) from tbl\")\n  }\n\n  test(\"native reader - support empty ARRAY literal\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array() from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal BYTE fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(cast(1 as byte), cast(2 as byte), null, cast(3 as byte), null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal SHORT fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(cast(1 as short), cast(2 as short), null, cast(3 as short), null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal DATE fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(CAST('2024-01-01' AS DATE), CAST('2024-02-01' AS DATE), null, CAST('2024-03-01' AS DATE), null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal LONG fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(cast(1 as bigint), cast(2 as bigint), null, cast(3 as bigint), null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal TIMESTAMP fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(CAST('2024-01-01 10:00:00' AS TIMESTAMP), CAST('2024-01-02 10:00:00' AS TIMESTAMP), null, CAST('2024-01-03 10:00:00' AS TIMESTAMP), null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal TIMESTAMP NTZ fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(CAST('2024-01-01 10:00:00' AS TIMESTAMP_NTZ), CAST('2024-01-02 10:00:00' AS TIMESTAMP_NTZ), null, CAST('2024-01-03 10:00:00' AS TIMESTAMP_NTZ), null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal FLOAT fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(cast(1 as float), cast(2 as float), null, cast(3 as float), null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal DOUBLE fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(cast(1 as double), cast(2 as double), null, cast(3 as double), null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal STRING fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array('a', 'bc', null, 'def', null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal DECIMAL fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(cast(1 as decimal(10, 2)), cast(2.5 as decimal(10, 2)), null, cast(3.75 as decimal(10, 2)), null) from tbl\")\n  }\n\n  test(\"native reader - support ARRAY literal BINARY fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(cast('a' as binary), cast('bc' as binary), null, cast('def' as binary), null) from tbl\")\n  }\n\n  test(\"SPARK-18053: ARRAY equality is broken\") {\n    withTable(\"array_tbl\") {\n      
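// Regression test for SPARK-18053: an equality filter against an ARRAY literal\n      // should match exactly one row.\n      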
spark.range(10).select(array(col(\"id\")).as(\"arr\")).write.saveAsTable(\"array_tbl\")\n      assert(sql(\"SELECT * FROM array_tbl where arr = ARRAY(1L)\").count == 1)\n    }\n  }\n\n  test(\"native reader - support ARRAY literal nested ARRAY fields\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select 1 a\n        |\"\"\".stripMargin,\n      \"select array(array(1, 2, null), array(), array(10), null, array(null)) from tbl\")\n  }\n\n  // Regression test found during DataFusion 53 upgrade (PR #3629).\n  // Spark's SchemaPruningSuite tests (e.g. \"select a single complex field array\n  // and in clause\", \"select explode of nested field of array of struct\") were\n  // failing because wrap_all_type_mismatches in Comet's schema adapter looked up\n  // the logical field by column index instead of by name. Filter expressions\n  // built against the pruned required_schema had \"friends\" at index 0, but the\n  // full logical_file_schema had \"id: Int32\" at index 0.\n  test(\"native reader - nested schema pruning with array of struct and filter\") {\n    testSingleLineQuery(\n      \"\"\"\n        |select\n        |  0 as id,\n        |  named_struct('first', 'Jane', 'middle', 'X.', 'last', 'Doe') as name,\n        |  '123 Main Street' as address,\n        |  1 as pets,\n        |  array(\n        |    named_struct('first', 'Susan', 'middle', 'Z.', 'last', 'Smith')\n        |  ) as friends\n        |union all\n        |select\n        |  1 as id,\n        |  named_struct('first', 'John', 'middle', 'Y.', 'last', 'Doe') as name,\n        |  '321 Wall Street' as address,\n        |  3 as pets,\n        |  array(\n        |    named_struct('first', 'Alice', 'middle', 'A.', 'last', 'Jones')\n        |  ) as friends\n        |\"\"\".stripMargin,\n      \"select friends.middle from tbl where friends.first[0] = 'Susan'\")\n  }\n\n  // SPARK-39393: a bare \"repeated int32 f\" (protobuf-style, no wrapping LIST group)\n  // must be readable without crashing on missing def levels. Parquet does not\n  // support predicate pushdown on repeated columns, so IsNotNull must not be\n  // pushed into the Parquet reader. Comet filters these out in\n  // CometScanExec.supportedDataFilters so the predicate is evaluated after\n  // reading. 
Without that, DataFusion's list predicate pushdown would push\n  // IsNotNull as a RowFilter, triggering an arrow-rs ListArrayReader crash.\n  test(\"native reader - read bare repeated primitive field\") {\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"protobuf-parquet\").toString\n      val schema =\n        \"\"\"message protobuf_style {\n          |  repeated int32 f;\n          |}\n        \"\"\".stripMargin\n\n      writeDirect(\n        path,\n        schema,\n        { rc =>\n          rc.startMessage()\n          rc.startField(\"f\", 0)\n          rc.addInteger(1)\n          rc.addInteger(2)\n          rc.endField(\"f\", 0)\n          rc.endMessage()\n        })\n\n      // Read without filter\n      checkAnswer(spark.read.parquet(dir.getCanonicalPath), Seq(Row(Seq(1, 2))))\n\n      // Read with isnotnull filter — the filter should not be pushed down into\n      // the Parquet reader for repeated primitive fields (SPARK-39393), but the\n      // query should still return correct results by evaluating the filter after\n      // reading.\n      checkAnswer(\n        spark.read.parquet(dir.getCanonicalPath).filter(\"isnotnull(f)\"),\n        Seq(Row(Seq(1, 2))))\n    }\n  }\n\n  /** Write a Parquet file using a raw RecordConsumer for full schema control. */\n  private def writeDirect(\n      path: String,\n      schema: String,\n      recordWriters: (RecordConsumer => Unit)*): Unit = {\n    val messageType = MessageTypeParser.parseMessageType(schema)\n    val writeSupport = new DirectWriteSupport(messageType)\n    class Builder extends ParquetWriter.Builder[RecordConsumer => Unit, Builder](new Path(path)) {\n      override def getWriteSupport(conf: Configuration): WriteSupport[RecordConsumer => Unit] =\n        writeSupport\n      override def self(): Builder = this\n    }\n    val writer = new Builder().build()\n    try recordWriters.foreach(writer.write)\n    finally writer.close()\n  }\n}\n\nprivate class DirectWriteSupport(schema: org.apache.parquet.schema.MessageType)\n    extends WriteSupport[RecordConsumer => Unit] {\n  private var recordConsumer: RecordConsumer = _\n\n  override def init(configuration: Configuration): WriteContext =\n    new WriteContext(schema, java.util.Collections.emptyMap())\n\n  override def write(recordWriter: RecordConsumer => Unit): Unit =\n    recordWriter(recordConsumer)\n\n  override def prepareForWrite(rc: RecordConsumer): Unit =\n    this.recordConsumer = rc\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/exec/CometNativeShuffleSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport scala.concurrent.duration.DurationInt\nimport scala.util.Random\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.SparkEnv\nimport org.apache.spark.sql.{CometTestBase, DataFrame, Dataset, Row}\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions.{col, count, sum}\n\nimport org.apache.comet.CometConf\n\nclass CometNativeShuffleSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_SHUFFLE_MODE.key -> \"native\",\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n        testFun\n      }\n    }\n  }\n\n  import testImplicits._\n\n  // TODO: this test takes a long time to run, we should reduce the test time.\n  test(\"fix: Too many task completion listener of ArrowReaderIterator causes OOM\") {\n    withSQLConf(CometConf.COMET_BATCH_SIZE.key -> \"1\") {\n      withParquetTable((0 until 100000).map(i => (1, (i + 1).toLong)), \"tbl\") {\n        assert(\n          sql(\"SELECT * FROM tbl\").repartition(201, $\"_1\").count() == sql(\"SELECT * FROM tbl\")\n            .count())\n      }\n    }\n  }\n\n  test(\"native shuffle: different data type\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 1000)\n        var allTypes: Seq[Int] = (1 to 20)\n        allTypes.map(i => s\"_$i\").foreach { c =>\n          withSQLConf(\"parquet.enable.dictionary\" -> dictionaryEnabled.toString) {\n            readParquetFile(path.toString) { df =>\n              val shuffled = df\n                .select($\"_1\")\n                .repartition(10, col(c))\n              checkShuffleAnswer(shuffled, 1, checkNativeOperators = true)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"native shuffle on nested array\") {\n    Seq(\"false\", \"true\").foreach { _ =>\n      Seq(10, 201).foreach { numPartitions =>\n        Seq(\"1.0\", \"10.0\").foreach { ratio =>\n          withSQLConf(\n            CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> ratio,\n            CometConf.COMET_NATIVE_SCAN_IMPL.key -> \"native_datafusion\") {\n            
withParquetTable(\n              (0 until 50).map(i => (i, Seq(Seq(i + 1), Seq(i + 2), Seq(i + 3)), i + 1)),\n              \"tbl\") {\n              var df = sql(\"SELECT * FROM tbl\")\n                .filter($\"_3\" > 10)\n                .repartition(numPartitions, $\"_2\")\n\n              // Partitioning on nested array falls back to Spark\n              checkShuffleAnswer(df, 0)\n\n              df = sql(\"SELECT * FROM tbl\")\n                .filter($\"_3\" > 10)\n                .repartition(numPartitions, $\"_1\")\n\n              // Partitioning on a primitive type works natively, even with nested arrays in other columns.\n              checkShuffleAnswer(df, 1)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"hash-based native shuffle\") {\n    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n      val df = sql(\"SELECT * FROM tbl\").sortWithinPartitions($\"_1\".desc)\n      val shuffled1 = df.repartition(10, $\"_1\")\n      checkShuffleAnswer(shuffled1, 1)\n\n      val shuffled2 = df.repartition(10, $\"_1\", $\"_2\")\n      checkShuffleAnswer(shuffled2, 1)\n\n      val shuffled3 = df.repartition(10, $\"_2\", $\"_1\")\n      checkShuffleAnswer(shuffled3, 1)\n    }\n  }\n\n  test(\"native shuffle: single partition\") {\n    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n      val df = sql(\"SELECT * FROM tbl\").sortWithinPartitions($\"_1\".desc)\n\n      val shuffled = df.repartition(1)\n      checkShuffleAnswer(shuffled, 1)\n    }\n  }\n\n  test(\"native shuffle with dictionary of binary\") {\n    Seq(\"true\", \"false\").foreach { dictionaryEnabled =>\n      withParquetTable(\n        (0 until 1000).map(i => (i % 5, (i % 5).toString.getBytes())),\n        \"tbl\",\n        dictionaryEnabled.toBoolean) {\n        val shuffled = sql(\"SELECT * FROM tbl\").repartition(2, $\"_2\")\n        checkShuffleAnswer(shuffled, 1)\n      }\n    }\n  }\n\n  test(\"native operator after native shuffle\") {\n    Seq((\"true\", \"true\"), (\"false\", \"false\")).foreach { partitioning =>\n      withSQLConf(\n        CometConf.COMET_EXEC_SHUFFLE_WITH_HASH_PARTITIONING_ENABLED.key -> partitioning._1,\n        CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> partitioning._2) {\n        withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n          val df = sql(\"SELECT * FROM tbl\")\n\n          val shuffled = df\n            .repartition(10, $\"_2\")\n            .select($\"_1\", $\"_1\" + 1, $\"_2\" + 2)\n            .repartition(10, $\"_1\")\n            .filter($\"_1\" > 1)\n\n          // We expect hash- and range-partitioned exchanges. If both configs are enabled, we'll get two\n          // native exchanges. 
Otherwise both will fall back.\n          if (partitioning._1 == \"true\" && partitioning._2 == \"true\") {\n            checkShuffleAnswer(shuffled, 2)\n          } else {\n            checkShuffleAnswer(shuffled, 0)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"grouped aggregate: native shuffle\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n      val df = sql(\"SELECT count(_2), sum(_2) FROM tbl GROUP BY _1\")\n      checkShuffleAnswer(df, 1, checkNativeOperators = true)\n    }\n  }\n\n  test(\"native shuffle metrics\") {\n    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n      val df = sql(\"SELECT * FROM tbl\").sortWithinPartitions($\"_1\".desc)\n      val shuffled = df.repartition(10, $\"_1\")\n\n      checkShuffleAnswer(shuffled, 1)\n\n      // Materialize the shuffled data\n      shuffled.collect()\n      val metrics = find(shuffled.queryExecution.executedPlan) {\n        case _: CometShuffleExchangeExec => true\n        case _ => false\n      }.map(_.metrics).get\n\n      assert(metrics.contains(\"shuffleRecordsWritten\"))\n      assert(metrics(\"shuffleRecordsWritten\").value == 5L)\n\n      assert(metrics.contains(\"shuffleBytesWritten\"))\n      assert(metrics(\"shuffleBytesWritten\").value > 0)\n\n      assert(metrics.contains(\"dataSize\"))\n      assert(metrics(\"dataSize\").value > 0L)\n\n      assert(metrics.contains(\"shuffleWriteTime\"))\n      assert(metrics(\"shuffleWriteTime\").value > 0L)\n    }\n  }\n\n  test(\"fix: Dictionary arrays imported from native should not be overridden\") {\n    Seq(10, 201).foreach { numPartitions =>\n      withSQLConf(CometConf.COMET_BATCH_SIZE.key -> \"10\") {\n        withParquetTable((0 until 50).map(i => (1.toString, 2.toString, (i + 1).toLong)), \"tbl\") {\n          val df = sql(\"SELECT * FROM tbl\")\n            .filter($\"_1\" === 1.toString)\n            .repartition(numPartitions, $\"_1\", $\"_2\")\n            .sortWithinPartitions($\"_1\")\n          checkSparkAnswerAndOperator(df)\n        }\n      }\n    }\n  }\n\n  test(\"fix: Comet native shuffle with binary data\") {\n    withParquetTable((0 until 5).map(i => (i, (i + 1).toLong)), \"tbl\") {\n      val df = sql(\"SELECT cast(cast(_1 as STRING) as BINARY) as binary, _2 FROM tbl\")\n\n      val shuffled = df.repartition(1, $\"binary\")\n      checkShuffleAnswer(shuffled, 1)\n    }\n  }\n\n  test(\"fix: Comet native shuffle deletes shuffle files after query\") {\n    withParquetTable((0 until 5).map(i => (i, i + 1)), \"tbl\") {\n      var df = sql(\"SELECT count(_2), sum(_2) FROM tbl GROUP BY _1\")\n      df.collect()\n      val diskBlockManager = SparkEnv.get.blockManager.diskBlockManager\n      assert(diskBlockManager.getAllFiles().nonEmpty)\n      df = null\n      eventually(timeout(30.seconds), interval(1.seconds)) {\n        System.gc()\n        assert(diskBlockManager.getAllFiles().isEmpty)\n      }\n    }\n  }\n\n  // This duplicates behavior seen in a much more complicated Spark SQL test\n  // \"SPARK-44647: test join key is the second cluster key\"\n  test(\"range partitioning with duplicate column references\") {\n    // Test with wider schema and non-adjacent duplicate columns\n    withParquetTable(\n      (0 until 100).map(i => (i % 10, (i % 5).toByte, i % 3, i % 7, (i % 4).toShort, i.toString)),\n      \"tbl\") {\n\n      val df = sql(\"SELECT * FROM tbl\")\n\n      // Test case 1: Adjacent duplicates (original case)\n      val rangePartitioned1 = df.repartitionByRange(3, df.col(\"_1\"), 
df.col(\"_1\"), df.col(\"_2\"))\n      checkShuffleAnswer(rangePartitioned1, 1)\n\n      // Test case 2: Non-adjacent duplicates in wider schema\n      // Duplicate _1 at positions 0 and 3, with different columns in between\n      val rangePartitioned2 =\n        df.repartitionByRange(4, df.col(\"_1\"), df.col(\"_3\"), df.col(\"_5\"), df.col(\"_1\"))\n      checkShuffleAnswer(rangePartitioned2, 1)\n\n      // Test case 3: Multiple duplicate pairs\n      // _1 duplicated at positions 0,2 and _4 duplicated at positions 1,3\n      val rangePartitioned3 =\n        df.repartitionByRange(4, df.col(\"_1\"), df.col(\"_4\"), df.col(\"_1\"), df.col(\"_4\"))\n      checkShuffleAnswer(rangePartitioned3, 1)\n\n      // Test case 4: Triple duplicates with gaps\n      val rangePartitioned4 = df.repartitionByRange(\n        5,\n        df.col(\"_1\"),\n        df.col(\"_2\"),\n        df.col(\"_1\"),\n        df.col(\"_3\"),\n        df.col(\"_1\"))\n      checkShuffleAnswer(rangePartitioned4, 1)\n    }\n  }\n\n  // Asserts ordering properties of partitions in a Dataset that has been RangePartitioned\n  private def checkRangePartitionedDataset(df_range_partitioned: Dataset[Row]): Unit = {\n    val partition_bounds = df_range_partitioned.rdd\n      .mapPartitionsWithIndex((idx: Int, iterator: Iterator[Row]) => {\n        // Find the min and max value in each partition\n        var min: Option[Int] = None\n        var max: Option[Int] = None\n        iterator.foreach((row: Row) => {\n          val row_val = row.get(0).asInstanceOf[Int]\n          if (min.isEmpty || row_val < min.get) {\n            min = Some(row_val)\n          }\n          if (max.isEmpty || row_val > max.get) {\n            max = Some(row_val)\n          }\n        })\n        Iterator.single((idx, min, max))\n      })\n      .collect()\n\n    // Check min and max values in each partition\n    for (i <- partition_bounds.indices.init) {\n      val currentPartition = partition_bounds(i)\n      val nextPartition = partition_bounds(i + 1)\n\n      if (currentPartition._2.isDefined && currentPartition._3.isDefined) {\n        val currentMin = currentPartition._2.get\n        val currentMax = currentPartition._3.get\n        assert(currentMin <= currentMax)\n      }\n\n      if (currentPartition._3.isDefined && nextPartition._2.isDefined) {\n        val currentMax = currentPartition._3.get\n        val nextMin = nextPartition._2.get\n        assert(currentMax < nextMin)\n      }\n    }\n  }\n\n  // This adapts the PySpark example in https://github.com/apache/datafusion-comet/issues/1906 to\n  // test for incorrect partition values after native RangePartitioning\n  test(\"fix: range partitioning #1906\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"true\") {\n      withParquetTable((0 until 100000).map(i => (i, i + 1)), \"tbl\") {\n        val df = sql(\"SELECT * from tbl\")\n\n        // Repartition with two sort columns\n        val repartitioned_df = df.repartitionByRange(10, df.col(\"_1\"))\n        checkSparkAnswerAndOperator(repartitioned_df)\n        checkRangePartitionedDataset(repartitioned_df)\n      }\n    }\n  }\n\n  // This adapts the PySpark example in https://github.com/apache/datafusion-comet/issues/1906 to\n  // test for incorrect partition values after native RangePartitioning\n  test(\"fix: range partitioning #1906, two columns\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"true\") {\n      withParquetTable((0 until 100000).map(i => (i, i 
+ 1)), \"tbl\") {\n        val df = sql(\"SELECT * from tbl\")\n\n        // Repartition with two sort columns\n        val repartitioned_df = df.repartitionByRange(10, df.col(\"_1\"), df.col(\"_2\"))\n        checkSparkAnswerAndOperator(repartitioned_df)\n        checkRangePartitionedDataset(repartitioned_df)\n      }\n    }\n  }\n\n  // This adapts the PySpark example in https://github.com/apache/datafusion-comet/issues/1906 to\n  // test for incorrect partition values after native RangePartitioning\n  test(\"fix: range partitioning #1906, random sort column with duplicates\") {\n    withSQLConf(CometConf.COMET_EXEC_SHUFFLE_WITH_RANGE_PARTITIONING_ENABLED.key -> \"true\") {\n      val random = new Random(42)\n      withParquetTable((0 until 100000).map(i => (random.nextInt(10000), i)), \"tbl\") {\n        val df = sql(\"SELECT * from tbl\")\n\n        // Repartition with two sort columns\n        val repartitioned_df = df.repartitionByRange(10, df.col(\"_1\"))\n        checkSparkAnswerAndOperator(repartitioned_df)\n        checkRangePartitionedDataset(repartitioned_df)\n      }\n    }\n  }\n\n  /**\n   * Checks that `df` produces the same answer as Spark does, and has the `expectedNum` Comet\n   * exchange operators. When `checkNativeOperators` is true, this also checks that all operators\n   * used by `df` are Comet native operators.\n   */\n  private def checkShuffleAnswer(\n      df: DataFrame,\n      expectedNum: Int,\n      checkNativeOperators: Boolean = false): Unit = {\n    checkCometExchange(df, expectedNum, true)\n    if (checkNativeOperators) {\n      checkSparkAnswerAndOperator(df)\n    } else {\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"native shuffle: round robin partitioning\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_WITH_ROUND_ROBIN_PARTITIONING_ENABLED.key -> \"true\") {\n      withParquetTable((0 until 100).map(i => (i, (i + 1).toLong, s\"str$i\")), \"tbl\") {\n        val df = sql(\"SELECT * FROM tbl\")\n\n        // Test basic round robin repartitioning\n        val shuffled = df.repartition(10)\n\n        // Just collect and verify row count - simpler test\n        val result = shuffled.collect()\n        assert(result.length == 100, s\"Expected 100 rows, got ${result.length}\")\n      }\n    }\n  }\n\n  test(\"native shuffle: round robin deterministic behavior\") {\n    // Test that round robin produces consistent results across multiple executions\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_WITH_ROUND_ROBIN_PARTITIONING_ENABLED.key -> \"true\") {\n      withParquetTable((0 until 1000).map(i => (i, (i + 1).toLong, s\"str$i\")), \"tbl\") {\n        val df = sql(\"SELECT * FROM tbl\")\n\n        // Execute shuffle twice and compare results\n        val result1 = df.repartition(8).collect().toSeq\n        val result2 = df.repartition(8).collect().toSeq\n\n        // Results should be identical (deterministic ordering)\n        assert(result1 == result2, \"Round robin shuffle should produce deterministic results\")\n      }\n    }\n  }\n\n  test(\"native shuffle: round robin with filter\") {\n    withSQLConf(\n      CometConf.COMET_EXEC_SHUFFLE_WITH_ROUND_ROBIN_PARTITIONING_ENABLED.key -> \"true\") {\n      withParquetTable((0 until 100).map(i => (i, (i + 1).toLong)), \"tbl\") {\n        val df = sql(\"SELECT * FROM tbl\")\n        val shuffled = df\n          .filter($\"_1\" < 50)\n          .repartition(10)\n\n        // Just collect and verify - simpler test\n        val result = shuffled.collect()\n        assert(result.length == 
50, s\"Expected 50 rows after filter, got ${result.length}\")\n      }\n    }\n  }\n\n  test(\"shuffle direct read produces same results as FFI path\") {\n    Seq(true, false).foreach { directRead =>\n      withSQLConf(CometConf.COMET_SHUFFLE_DIRECT_READ_ENABLED.key -> directRead.toString) {\n        val df = spark\n          .range(1000)\n          .selectExpr(\"id\", \"id % 10 as key\", \"cast(id as string) as value\")\n          .repartition(4, col(\"key\"))\n          .groupBy(\"key\")\n          .agg(sum(\"id\").as(\"total\"), count(\"value\").as(\"cnt\"))\n          .orderBy(\"key\")\n        checkSparkAnswer(df)\n      }\n    }\n  }\n\n  test(\"shuffle direct read with multiple shuffles in plan\") {\n    Seq(true, false).foreach { directRead =>\n      withSQLConf(CometConf.COMET_SHUFFLE_DIRECT_READ_ENABLED.key -> directRead.toString) {\n        // Join two shuffled datasets to produce a plan with multiple shuffle reads\n        val left = spark\n          .range(100)\n          .selectExpr(\"id as l_id\", \"id % 10 as key\")\n          .repartition(4, col(\"key\"))\n        val right = spark\n          .range(100)\n          .selectExpr(\"id as r_id\", \"id % 10 as key\")\n          .repartition(4, col(\"key\"))\n        val df = left\n          .join(right, \"key\")\n          .groupBy(\"key\")\n          .agg(count(\"l_id\").as(\"cnt\"))\n          .orderBy(\"key\")\n        checkSparkAnswer(df)\n      }\n    }\n  }\n\n  // Regression test for https://github.com/apache/datafusion-comet/issues/3846\n  test(\"repartition count\") {\n    withTempPath { dir =>\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        spark\n          .range(1000)\n          .selectExpr(\"id\", \"concat('name_', id) as name\")\n          .repartition(100)\n          .write\n          .parquet(dir.toString)\n      }\n      withSQLConf(\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION,\n        CometConf.COMET_EXEC_SHUFFLE_WITH_ROUND_ROBIN_PARTITIONING_ENABLED.key -> \"true\") {\n        val testDF = spark.read.parquet(dir.toString).repartition(10)\n        // Verify CometShuffleExchangeExec is in the plan\n        assert(\n          find(testDF.queryExecution.executedPlan) {\n            case _: CometShuffleExchangeExec => true\n            case _ => false\n          }.isDefined,\n          \"Expected CometShuffleExchangeExec in the plan\")\n        // Actual validation, no crash\n        val count = testDF.count()\n        assert(count == 1000)\n        // Ensure test df evaluated by Comet\n        checkSparkAnswerAndOperator(testDF)\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/exec/CometWindowExecSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.{CometTestBase, Row}\nimport org.apache.spark.sql.comet.CometWindowExec\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\nimport org.apache.spark.sql.execution.window.WindowExec\nimport org.apache.spark.sql.expressions.Window\nimport org.apache.spark.sql.functions.{count, lead, sum}\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.CometConf\n\nclass CometWindowExecSuite extends CometTestBase {\n\n  import testImplicits._\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_WINDOW_ENABLED.key -> \"true\",\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_AUTO) {\n        testFun\n      }\n    }\n  }\n\n  test(\"lead/lag should return the default value if the offset row does not exist\") {\n    withSQLConf(\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n      checkSparkAnswer(sql(\"\"\"\n                             |SELECT\n                             |  lag(123, 100, 321) OVER (ORDER BY id) as lag,\n                             |  lead(123, 100, 321) OVER (ORDER BY id) as lead\n                             |FROM (SELECT 1 as id) tmp\n      \"\"\".stripMargin))\n\n      checkSparkAnswer(sql(\"\"\"\n                             |SELECT\n                             |  lag(123, 100, a) OVER (ORDER BY id) as lag,\n                             |  lead(123, 100, a) OVER (ORDER BY id) as lead\n                             |FROM (SELECT 1 as id, 2 as a) tmp\n      \"\"\".stripMargin))\n    }\n  }\n\n  test(\"window query with rangeBetween\") {\n\n    // values are int\n    val df = Seq(1, 2, 4, 3, 2, 1).toDF(\"value\")\n    val window = Window.orderBy($\"value\".desc)\n\n    // ranges are long\n    val df2 = df.select(\n      $\"value\",\n      sum($\"value\").over(window.rangeBetween(Window.unboundedPreceding, 1L)),\n      sum($\"value\").over(window.rangeBetween(1L, Window.unboundedFollowing)))\n\n    // Comet does not support RANGE BETWEEN\n    // https://github.com/apache/datafusion-comet/issues/1246\n    val (_, cometPlan) = checkSparkAnswer(df2)\n    val cometWindowExecs = collect(cometPlan) { case w: CometWindowExec =>\n      w\n    }\n    assert(cometWindowExecs.isEmpty)\n  }\n\n  // based on 
Spark's SQLWindowFunctionSuite test of the same name\n  test(\"window function: partition and order expressions\") {\n    for (shuffleMode <- Seq(\"auto\", \"native\", \"jvm\")) {\n      withSQLConf(CometConf.COMET_SHUFFLE_MODE.key -> shuffleMode) {\n        val df =\n          Seq((1, \"a\", 5), (2, \"a\", 6), (3, \"b\", 7), (4, \"b\", 8), (5, \"c\", 9), (6, \"c\", 10)).toDF(\n            \"month\",\n            \"area\",\n            \"product\")\n        df.createOrReplaceTempView(\"windowData\")\n        val df2 = sql(\"\"\"\n                        |select month, area, product, sum(product + 1) over (partition by 1 order by 2)\n                        |from windowData\n          \"\"\".stripMargin)\n        checkSparkAnswer(df2)\n        val cometShuffles = collect(df2.queryExecution.executedPlan) {\n          case _: CometShuffleExchangeExec => true\n        }\n        if (shuffleMode == \"jvm\" || shuffleMode == \"auto\") {\n          assert(cometShuffles.length == 1)\n        } else {\n          // we fall back to Spark for shuffle because we do not support\n          // native shuffle with a LocalTableScan input, and we do not fall\n          // back to Comet columnar shuffle due to\n          // https://github.com/apache/datafusion-comet/issues/1248\n          assert(cometShuffles.isEmpty)\n        }\n      }\n    }\n  }\n\n  test(\n    \"fall back to Spark when the partition spec and order spec are not the same for window function\") {\n    withTempView(\"test_agg\") {\n      sql(\"\"\"\n            |CREATE OR REPLACE TEMPORARY VIEW test_agg AS SELECT * FROM VALUES\n            |(1, true), (1, false),\n            |(2, true), (3, false), (4, true) AS test(k, v)\n            |\"\"\".stripMargin)\n\n      val df = sql(\"\"\"\n          |SELECT k, v, every(v) OVER (PARTITION BY k ORDER BY v) FROM test_agg\n          |\"\"\".stripMargin)\n      checkSparkAnswer(df)\n    }\n  }\n\n  test(\"Native window operator should be CometUnaryExec\") {\n    withTempView(\"testData\") {\n      sql(\"\"\"\n            |CREATE OR REPLACE TEMPORARY VIEW testData AS SELECT * FROM VALUES\n            |(null, 1L, 1.0D, date(\"2017-08-01\"), timestamp_seconds(1501545600), \"a\"),\n            |(1, 1L, 1.0D, date(\"2017-08-01\"), timestamp_seconds(1501545600), \"a\"),\n            |(1, 2L, 2.5D, date(\"2017-08-02\"), timestamp_seconds(1502000000), \"a\"),\n            |(2, 2147483650L, 100.001D, date(\"2020-12-31\"), timestamp_seconds(1609372800), \"a\"),\n            |(1, null, 1.0D, date(\"2017-08-01\"), timestamp_seconds(1501545600), \"b\"),\n            |(2, 3L, 3.3D, date(\"2017-08-03\"), timestamp_seconds(1503000000), \"b\"),\n            |(3, 2147483650L, 100.001D, date(\"2020-12-31\"), timestamp_seconds(1609372800), \"b\"),\n            |(null, null, null, null, null, null),\n            |(3, 1L, 1.0D, date(\"2017-08-01\"), timestamp_seconds(1501545600), null)\n            |AS testData(val, val_long, val_double, val_date, val_timestamp, cate)\n            |\"\"\".stripMargin)\n      val df1 = sql(\"\"\"\n                      |SELECT val, cate, count(val) OVER(PARTITION BY cate ORDER BY val ROWS CURRENT ROW)\n                      |FROM testData ORDER BY cate, val\n                      |\"\"\".stripMargin)\n      checkSparkAnswer(df1)\n    }\n  }\n\n  test(\"Window range frame with long boundary should not fail\") {\n    val df =\n      Seq((1L, \"1\"), (1L, \"1\"), (2147483650L, \"1\"), (3L, \"2\"), (2L, \"1\"), (2147483650L, \"2\"))\n        .toDF(\"key\", \"value\")\n\n    
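// The frame bounds 2147483648L and -2147483649L lie just outside the 32-bit Int\n    // range, so evaluating the range frame must not overflow Int arithmetic.\n    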
checkSparkAnswer(\n      df.select(\n        $\"key\",\n        count(\"key\").over(\n          Window.partitionBy($\"value\").orderBy($\"key\").rangeBetween(0, 2147483648L))))\n    checkSparkAnswer(\n      df.select(\n        $\"key\",\n        count(\"key\").over(\n          Window.partitionBy($\"value\").orderBy($\"key\").rangeBetween(-2147483649L, 0))))\n  }\n\n  test(\"Unsupported window expression should fall back to Spark\") {\n    checkAnswer(\n      spark.sql(\"select sum(a) over () from values 1.0, 2.0, 3.0 T(a)\"),\n      Row(6.0) :: Row(6.0) :: Row(6.0) :: Nil)\n    checkAnswer(\n      spark.sql(\"select avg(a) over () from values 1.0, 2.0, 3.0 T(a)\"),\n      Row(2.0) :: Row(2.0) :: Row(2.0) :: Nil)\n  }\n\n  test(\"Repeated shuffle exchanges don't fail\") {\n    Seq(\"true\", \"false\").foreach { aqeEnabled =>\n      withSQLConf(\n        SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqeEnabled,\n        SQLConf.REQUIRE_ALL_CLUSTER_KEYS_FOR_DISTRIBUTION.key -> \"true\",\n        CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n        val df =\n          Seq((\"a\", 1, 1), (\"a\", 2, 2), (\"b\", 1, 3), (\"b\", 1, 4)).toDF(\"key1\", \"key2\", \"value\")\n        val windowSpec = Window.partitionBy(\"key1\", \"key2\").orderBy(\"value\")\n\n        val windowed = df\n          // repartition by subset of window partitionBy keys which satisfies ClusteredDistribution\n          .repartition($\"key1\")\n          .select(lead($\"key1\", 1).over(windowSpec), lead($\"value\", 1).over(windowSpec))\n\n        checkSparkAnswer(windowed)\n      }\n    }\n  }\n\n  ignore(\"aggregate window function for all types\") {\n    val numValues = 2048\n\n    Seq(1, 100, numValues).foreach { numGroups =>\n      Seq(true, false).foreach { dictionaryEnabled =>\n        withTempPath { dir =>\n          val path = new Path(dir.toURI.toString, \"test.parquet\")\n          makeParquetFile(path, numValues, numGroups, dictionaryEnabled)\n          withParquetTable(path.toUri.toString, \"tbl\") {\n            Seq(128, numValues + 100).foreach { batchSize =>\n              withSQLConf(CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n                (1 to 11).foreach { col =>\n                  val aggregateFunctions =\n                    List(s\"COUNT(_$col)\", s\"MAX(_$col)\", s\"MIN(_$col)\", s\"SUM(_$col)\")\n                  aggregateFunctions.foreach { function =>\n                    val df1 = sql(s\"SELECT $function OVER() FROM tbl\")\n                    checkSparkAnswerWithTolerance(df1, 1e-6)\n\n                    val df2 = sql(s\"SELECT $function OVER(order by _2) FROM tbl\")\n                    checkSparkAnswerWithTolerance(df2, 1e-6)\n\n                    val df3 = sql(s\"SELECT $function OVER(order by _2 desc) FROM tbl\")\n                    checkSparkAnswerWithTolerance(df3, 1e-6)\n\n                    val df4 = sql(s\"SELECT $function OVER(partition by _2 order by _2) FROM tbl\")\n                    checkSparkAnswerWithTolerance(df4, 1e-6)\n                  }\n                }\n\n                // SUM is not supported for the Date type; 
org.apache.spark.sql.AnalysisException will be thrown.\n                val aggregateFunctionsWithoutSum = List(\"COUNT(_12)\", \"MAX(_12)\", \"MIN(_12)\")\n                aggregateFunctionsWithoutSum.foreach { function =>\n                  val df1 = sql(s\"SELECT $function OVER() FROM tbl\")\n                  checkSparkAnswerWithTolerance(df1, 1e-6)\n\n                  val df2 = sql(s\"SELECT $function OVER(order by _2) FROM tbl\")\n                  checkSparkAnswerWithTolerance(df2, 1e-6)\n\n                  val df3 = sql(s\"SELECT $function OVER(order by _2 desc) FROM tbl\")\n                  checkSparkAnswerWithTolerance(df3, 1e-6)\n\n                  val df4 = sql(s\"SELECT $function OVER(partition by _2 order by _2) FROM tbl\")\n                  checkSparkAnswerWithTolerance(df4, 1e-6)\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  ignore(\"Windows support\") {\n    Seq(\"true\", \"false\").foreach(aqeEnabled =>\n      withSQLConf(\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n        SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> aqeEnabled) {\n        withParquetTable((0 until 10).map(i => (i, 10 - i)), \"t1\") { // TODO: test nulls\n          val aggregateFunctions =\n            List(\n              \"COUNT(_1)\",\n              \"COUNT(*)\",\n              \"MAX(_1)\",\n              \"MIN(_1)\",\n              \"SUM(_1)\"\n            ) // TODO: Test all the aggregates\n\n          aggregateFunctions.foreach { function =>\n            val queries = Seq(\n              s\"SELECT $function OVER() FROM t1\",\n              s\"SELECT $function OVER(order by _2) FROM t1\",\n              s\"SELECT $function OVER(order by _2 desc) FROM t1\",\n              s\"SELECT $function OVER(partition by _2 order by _2) FROM t1\",\n              s\"SELECT $function OVER(rows between 1 preceding and 1 following) FROM t1\",\n              s\"SELECT $function OVER(order by _2 rows between 1 preceding and current row) FROM t1\",\n              s\"SELECT $function OVER(order by _2 rows between current row and 1 following) FROM t1\")\n\n            queries.foreach { query =>\n              checkSparkAnswerAndFallbackReason(\n                query,\n                \"Native WindowExec has known correctness issues\")\n            }\n          }\n        }\n      })\n  }\n\n  test(\"window: simple COUNT(*) without frame\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"SELECT a, b, c, COUNT(*) OVER () as cnt FROM window_test\")\n      checkSparkAnswerAndFallbackReason(df, \"Native WindowExec has known correctness issues\")\n    }\n  }\n\n  test(\"window: simple SUM with PARTITION BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"SELECT a, b, c, SUM(c) OVER (PARTITION BY a) as sum_c FROM window_test\")\n      checkSparkAnswerAndFallbackReason(df, \"Native WindowExec has known correctness issues\")\n    }\n  }\n\n  // TODO: AVG with PARTITION BY and ORDER BY not 
supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: AVG with PARTITION BY and ORDER BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df =\n        sql(\"SELECT a, b, c, AVG(c) OVER (PARTITION BY a ORDER BY b) as avg_c FROM window_test\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"window: MIN and MAX with ORDER BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          MIN(c) OVER (ORDER BY b) as min_c,\n          MAX(c) OVER (ORDER BY b) as max_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndFallbackReason(df, \"Native WindowExec has known correctness issues\")\n    }\n  }\n\n  // TODO: COUNT with ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW produces incorrect results\n  // Returns wrong cnt values - ordering issue causes swapped values for rows with same partition\n  ignore(\"window: COUNT with ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          COUNT(*) OVER (PARTITION BY a ORDER BY b ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as cnt\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: SUM with ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING produces incorrect results\n  ignore(\"window: SUM with ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          SUM(c) OVER (PARTITION BY a ORDER BY b ROWS BETWEEN CURRENT ROW AND UNBOUNDED FOLLOWING) as sum_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: AVG with ROWS BETWEEN produces incorrect results\n  // Returns wrong avg_c values - calculation appears to be off\n  ignore(\"window: AVG with ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          AVG(c) OVER (PARTITION BY a ORDER BY b ROWS BETWEEN 1 PRECEDING AND 1 FOLLOWING) as avg_c\n        
FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: SUM with ROWS BETWEEN produces incorrect results\n  ignore(\"window: SUM with ROWS BETWEEN 2 PRECEDING AND CURRENT ROW\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          SUM(c) OVER (PARTITION BY a ORDER BY b ROWS BETWEEN 2 PRECEDING AND CURRENT ROW) as sum_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: COUNT with ROWS BETWEEN not supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: COUNT with ROWS BETWEEN CURRENT ROW AND 2 FOLLOWING\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          COUNT(*) OVER (PARTITION BY a ORDER BY b ROWS BETWEEN CURRENT ROW AND 2 FOLLOWING) as cnt\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: MAX with ROWS BETWEEN UNBOUNDED not supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: MAX with ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          MAX(c) OVER (PARTITION BY a ORDER BY b ROWS BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED FOLLOWING) as max_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: ROW_NUMBER not supported\n  // Falls back to Spark Window operator\n  ignore(\"window: ROW_NUMBER with PARTITION BY and ORDER BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          ROW_NUMBER() OVER (PARTITION BY a ORDER BY b, c) as row_num\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: RANK not supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: RANK with PARTITION BY and ORDER BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      
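// read the written data back and register it as the view that the RANK query below selects from\n      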
spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          RANK() OVER (PARTITION BY a ORDER BY b) as rnk\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: DENSE_RANK not supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: DENSE_RANK with PARTITION BY and ORDER BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          DENSE_RANK() OVER (PARTITION BY a ORDER BY b) as dense_rnk\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: PERCENT_RANK not supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: PERCENT_RANK with PARTITION BY and ORDER BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          PERCENT_RANK() OVER (PARTITION BY a ORDER BY b) as pct_rnk\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: NTILE not supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: NTILE with PARTITION BY and ORDER BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          NTILE(4) OVER (PARTITION BY a ORDER BY b) as ntile_4\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  test(\"window: LAG with default offset\") {\n    withSQLConf(CometConf.getOperatorAllowIncompatConfigKey(classOf[WindowExec]) -> \"true\") {\n      withTempDir { dir =>\n        (0 until 30)\n          .map(i => (i % 3, i % 5, i))\n          .toDF(\"a\", \"b\", \"c\")\n          .repartition(3)\n          .write\n          .mode(\"overwrite\")\n          .parquet(dir.toString)\n\n        spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n        val df = sql(\"\"\"\n          SELECT a, b, c,\n            LAG(c) OVER (PARTITION BY a ORDER BY b, c) as lag_c\n          FROM window_test\n        \"\"\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"window: LAG with offset 2 and default value\") {\n    withSQLConf(CometConf.getOperatorAllowIncompatConfigKey(classOf[WindowExec]) -> \"true\") {\n      withTempDir { dir =>\n        (0 until 30)\n          .map(i => (i % 3, i % 5, i))\n          .toDF(\"a\", \"b\", \"c\")\n          .repartition(3)\n          .write\n          .mode(\"overwrite\")\n          
.parquet(dir.toString)\n\n        spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n        val df = sql(\"\"\"\n          SELECT a, b, c,\n            LAG(c, 2, -1) OVER (PARTITION BY a ORDER BY b, c) as lag_c_2\n          FROM window_test\n        \"\"\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"window: LAG with IGNORE NULLS\") {\n    withSQLConf(CometConf.getOperatorAllowIncompatConfigKey(classOf[WindowExec]) -> \"true\") {\n      withTempDir { dir =>\n        Seq((1, 1, Some(10)), (1, 2, None), (1, 3, Some(30)), (2, 1, None), (2, 2, Some(20)))\n          .toDF(\"a\", \"b\", \"c\")\n          .write\n          .mode(\"overwrite\")\n          .parquet(dir.toString)\n\n        spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n        val df = sql(\"\"\"\n          SELECT a, b, c,\n            LAG(c) IGNORE NULLS OVER (PARTITION BY a ORDER BY b) as lag_c\n          FROM window_test\n        \"\"\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"window: LEAD with default offset\") {\n    withSQLConf(CometConf.getOperatorAllowIncompatConfigKey(classOf[WindowExec]) -> \"true\") {\n      withTempDir { dir =>\n        (0 until 30)\n          .map(i => (i % 3, i % 5, i))\n          .toDF(\"a\", \"b\", \"c\")\n          .repartition(3)\n          .write\n          .mode(\"overwrite\")\n          .parquet(dir.toString)\n\n        spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n        val df = sql(\"\"\"\n          SELECT a, b, c,\n            LEAD(c) OVER (PARTITION BY a ORDER BY b, c) as lead_c\n          FROM window_test\n        \"\"\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"window: LEAD with offset 2 and default value\") {\n    withSQLConf(CometConf.getOperatorAllowIncompatConfigKey(classOf[WindowExec]) -> \"true\") {\n      withTempDir { dir =>\n        (0 until 30)\n          .map(i => (i % 3, i % 5, i))\n          .toDF(\"a\", \"b\", \"c\")\n          .repartition(3)\n          .write\n          .mode(\"overwrite\")\n          .parquet(dir.toString)\n\n        spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n        val df = sql(\"\"\"\n          SELECT a, b, c,\n            LEAD(c, 2, -1) OVER (PARTITION BY a ORDER BY b, c) as lead_c_2\n          FROM window_test\n        \"\"\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  test(\"window: LEAD with IGNORE NULLS\") {\n    withSQLConf(CometConf.getOperatorAllowIncompatConfigKey(classOf[WindowExec]) -> \"true\") {\n      withTempDir { dir =>\n        Seq((1, 1, Some(10)), (1, 2, None), (1, 3, Some(30)), (2, 1, None), (2, 2, Some(20)))\n          .toDF(\"a\", \"b\", \"c\")\n          .write\n          .mode(\"overwrite\")\n          .parquet(dir.toString)\n\n        spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n        val df = sql(\"\"\"\n          SELECT a, b, c,\n            LEAD(c) IGNORE NULLS OVER (PARTITION BY a ORDER BY b) as lead_c\n          FROM window_test\n        \"\"\")\n        checkSparkAnswerAndOperator(df)\n      }\n    }\n  }\n\n  // TODO: FIRST_VALUE causes encoder error\n  // org.apache.spark.SparkUnsupportedOperationException: [ENCODER_NOT_FOUND] Not found an encoder of the type Any\n  ignore(\"window: FIRST_VALUE with default ignore nulls\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, if (i % 7 == 0) null else i))\n        .toDF(\"a\", 
\"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          FIRST_VALUE(c) OVER (PARTITION BY a ORDER BY b) as first_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: LAST_VALUE causes encoder error\n  // org.apache.spark.SparkUnsupportedOperationException: [ENCODER_NOT_FOUND] Not found an encoder of the type Any\n  ignore(\"window: LAST_VALUE with ROWS frame\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, if (i % 7 == 0) null else i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          LAST_VALUE(c) OVER (PARTITION BY a ORDER BY b ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as last_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: NTH_VALUE returns incorrect results - produces 0 instead of null for first row,\n  ignore(\"window: NTH_VALUE with position 2\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          NTH_VALUE(c, 2) OVER (PARTITION BY a ORDER BY b ROWS BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as nth_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: CUME_DIST not supported - falls back to Spark Window operator\n  // Error: \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: CUME_DIST with PARTITION BY and ORDER BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          CUME_DIST() OVER (PARTITION BY a ORDER BY b) as cume_dist\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: Multiple window functions with mixed frame types (RowFrame and RangeFrame)\n  ignore(\"window: multiple window functions in single query\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          ROW_NUMBER() OVER (PARTITION BY a ORDER BY b) as row_num,\n          RANK() OVER (PARTITION BY a ORDER BY b) as rnk,\n          SUM(c) OVER (PARTITION BY a ORDER BY b) as sum_c,\n          AVG(c) OVER (PARTITION BY a ORDER BY b) as avg_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: Different window 
specifications not fully supported\n  // Falls back to Spark Project and Window operators\n  ignore(\"window: different window specifications in single query\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          SUM(c) OVER (PARTITION BY a ORDER BY b) as sum_by_a,\n          SUM(c) OVER (PARTITION BY b ORDER BY a) as sum_by_b,\n          COUNT(*) OVER () as total_count\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: ORDER BY DESC with aggregation not supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: ORDER BY DESC with aggregation\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          SUM(c) OVER (PARTITION BY a ORDER BY b DESC) as sum_c_desc\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: Multiple PARTITION BY columns not supported\n  // Falls back to Spark Window operator\n  ignore(\"window: multiple PARTITION BY columns\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i % 2, i))\n        .toDF(\"a\", \"b\", \"c\", \"d\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, d, c,\n          SUM(c) OVER (PARTITION BY a, b ORDER BY d) as sum_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: Multiple ORDER BY columns not supported\n  // Falls back to Spark Window operator\n  ignore(\"window: multiple ORDER BY columns\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i % 2, i))\n        .toDF(\"a\", \"b\", \"c\", \"d\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, d, c,\n          ROW_NUMBER() OVER (PARTITION BY a ORDER BY b, d, c) as row_num\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: RANGE BETWEEN with numeric ORDER BY not supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: RANGE BETWEEN with numeric ORDER BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i, i * 2))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          SUM(c) OVER (PARTITION BY a ORDER BY b RANGE BETWEEN 2 
PRECEDING AND 2 FOLLOWING) as sum_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW not supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i, i * 2))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          SUM(c) OVER (PARTITION BY a ORDER BY b RANGE BETWEEN UNBOUNDED PRECEDING AND CURRENT ROW) as sum_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: Complex expressions in window functions not fully supported\n  // Falls back to Spark Project operator\n  ignore(\"window: complex expression in window function\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          SUM(c * 2 + 1) OVER (PARTITION BY a ORDER BY b) as sum_expr\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: Window function with WHERE clause not supported\n  // Falls back to Spark Window operator - \"Partitioning and sorting specifications must be the same\"\n  ignore(\"window: window function with WHERE clause\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          SUM(c) OVER (PARTITION BY a ORDER BY b) as sum_c\n        FROM window_test\n        WHERE a > 0\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: Window function with GROUP BY not fully supported\n  // Falls back to Spark Project and Window operators\n  ignore(\"window: window function with GROUP BY\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, SUM(c) as total_c,\n          RANK() OVER (ORDER BY SUM(c) DESC) as rnk\n        FROM window_test\n        GROUP BY a, b\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: ROWS BETWEEN with negative offset produces incorrect results\n  ignore(\"window: ROWS BETWEEN with negative offset\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      
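// frame ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING ends before the current row, so each average covers earlier rows only\n      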
spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          AVG(c) OVER (PARTITION BY a ORDER BY b ROWS BETWEEN 3 PRECEDING AND 1 PRECEDING) as avg_c\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n\n  // TODO: All ranking functions together produce incorrect row_num values\n  ignore(\"window: all ranking functions together\") {\n    withTempDir { dir =>\n      (0 until 30)\n        .map(i => (i % 3, i % 5, i))\n        .toDF(\"a\", \"b\", \"c\")\n        .repartition(3)\n        .write\n        .mode(\"overwrite\")\n        .parquet(dir.toString)\n\n      spark.read.parquet(dir.toString).createOrReplaceTempView(\"window_test\")\n      val df = sql(\"\"\"\n        SELECT a, b, c,\n          ROW_NUMBER() OVER (PARTITION BY a ORDER BY b) as row_num,\n          RANK() OVER (PARTITION BY a ORDER BY b) as rnk,\n          DENSE_RANK() OVER (PARTITION BY a ORDER BY b) as dense_rnk,\n          PERCENT_RANK() OVER (PARTITION BY a ORDER BY b) as pct_rnk,\n          CUME_DIST() OVER (PARTITION BY a ORDER BY b) as cume_dist,\n          NTILE(3) OVER (PARTITION BY a ORDER BY b) as ntile_3\n        FROM window_test\n      \"\"\")\n      checkSparkAnswerAndOperator(df)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/expressions/conditional/CometCaseWhenSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.expressions.conditional\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n\nimport org.apache.comet.CometConf\n\nclass CometCaseWhenSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_AUTO) {\n        testFun\n      }\n    }\n  }\n\n  test(\"case_when\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test\"\n        withTable(table) {\n          sql(s\"create table $table(id int) using parquet\")\n          sql(s\"insert into $table values(1), (NULL), (2), (2), (3), (3), (4), (5), (NULL)\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT CASE WHEN id > 2 THEN 3333 WHEN id > 1 THEN 2222 ELSE 1111 END FROM $table\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT CASE WHEN id > 2 THEN NULL WHEN id > 1 THEN 2222 ELSE 1111 END FROM $table\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT CASE id WHEN 1 THEN 1111 WHEN 2 THEN 2222 ELSE 3333 END FROM $table\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT CASE id WHEN 1 THEN 1111 WHEN 2 THEN 2222 ELSE NULL END FROM $table\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT CASE id WHEN 1 THEN 1111 WHEN 2 THEN 2222 WHEN 3 THEN 3333 WHEN 4 THEN 4444 END FROM $table\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT CASE id WHEN NULL THEN 0 WHEN 1 THEN 1111 WHEN 2 THEN 2222 ELSE 3333 END FROM $table\")\n        }\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/expressions/conditional/CometCoalesceSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.expressions.conditional\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.CometConf\n\nclass CometCoalesceSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_AUTO) {\n        testFun\n      }\n    }\n  }\n\n  test(\"coalesce should return correct datatype\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"test.parquet\")\n        makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        withParquetTable(path.toString, \"tbl\") {\n          checkSparkAnswerAndOperator(\n            \"SELECT coalesce(cast(_18 as date), cast(_19 as date), _20) FROM tbl\")\n        }\n      }\n    }\n  }\n\n  test(\"test coalesce lazy eval\") {\n    withSQLConf(SQLConf.ANSI_ENABLED.key -> \"true\") {\n      val data = Seq((9999999999999L, 0))\n      withParquetTable(data, \"t1\") {\n        val res = spark.sql(\"\"\"\n            |SELECT coalesce(_1, CAST(_1 AS TINYINT)) from t1;\n            |  \"\"\".stripMargin)\n        checkSparkAnswerAndOperator(res)\n      }\n    }\n  }\n\n  test(\"string with coalesce\") {\n    withParquetTable(\n      (0 until 10).map(i => (i.toString, if (i > 5) None else Some((i + 100).toString))),\n      \"tbl\") {\n      checkSparkAnswerAndOperator(\n        \"SELECT coalesce(_1), coalesce(_1, 1), coalesce(null, _1), coalesce(null, 1), coalesce(_2, _1), coalesce(null) FROM tbl\")\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/expressions/conditional/CometIfSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.expressions.conditional\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n\nimport org.apache.comet.CometConf\n\nclass CometIfSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*) {\n      withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_AUTO) {\n        testFun\n      }\n    }\n  }\n\n  test(\"if expression\") {\n    Seq(false, true).foreach { dictionary =>\n      withSQLConf(\"parquet.enable.dictionary\" -> dictionary.toString) {\n        val table = \"test1\"\n        withTable(table) {\n          sql(s\"create table $table(c1 int, c2 string, c3 int) using parquet\")\n          sql(\n            s\"insert into $table values(1, 'comet', 1), (2, 'comet', 3), (null, 'spark', 4),\" +\n              \" (null, null, 4), (2, 'spark', 3), (2, 'comet', 3)\")\n          checkSparkAnswerAndOperator(s\"SELECT if (c1 < 2, 1111, 2222) FROM $table\")\n          checkSparkAnswerAndOperator(s\"SELECT if (c1 < c3, 1111, 2222) FROM $table\")\n          checkSparkAnswerAndOperator(\n            s\"SELECT if (c2 == 'comet', 'native execution', 'non-native execution') FROM $table\")\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/objectstore/NativeConfigSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.objectstore\n\nimport java.net.URI\n\nimport scala.collection.mutable\n\nimport org.scalatest.BeforeAndAfterEach\nimport org.scalatest.funsuite.AnyFunSuite\nimport org.scalatest.matchers.should.Matchers\n\nimport org.apache.hadoop.conf.Configuration\n\nimport org.apache.comet.rules.CometScanRule\n\nclass NativeConfigSuite extends AnyFunSuite with Matchers with BeforeAndAfterEach {\n\n  override protected def beforeEach(): Unit = {\n    CometScanRule.configValidityMap.clear()\n  }\n\n  test(\"extractObjectStoreOptions - multiple cloud provider configurations\") {\n    val hadoopConf = new Configuration()\n    // S3A configs\n    hadoopConf.set(\"fs.s3a.access.key\", \"s3-access-key\")\n    hadoopConf.set(\"fs.s3a.secret.key\", \"s3-secret-key\")\n    hadoopConf.set(\"fs.s3a.endpoint.region\", \"us-east-1\")\n    hadoopConf.set(\"fs.s3a.bucket.special-bucket.access.key\", \"special-access-key\")\n    hadoopConf.set(\"fs.s3a.bucket.special-bucket.endpoint.region\", \"eu-central-1\")\n\n    // GCS configs\n    hadoopConf.set(\"fs.gs.project.id\", \"gcp-project\")\n\n    // Azure configs\n    hadoopConf.set(\"fs.azure.account.key.testaccount.blob.core.windows.net\", \"azure-key\")\n\n    // Should extract s3 options\n    Seq(\"s3a://test-bucket/test-object\", \"s3://test-bucket/test-object\").foreach { path =>\n      val options = NativeConfig.extractObjectStoreOptions(hadoopConf, new URI(path))\n      assert(options(\"fs.s3a.access.key\") == \"s3-access-key\")\n      assert(options(\"fs.s3a.secret.key\") == \"s3-secret-key\")\n      assert(options(\"fs.s3a.endpoint.region\") == \"us-east-1\")\n      assert(options(\"fs.s3a.bucket.special-bucket.access.key\") == \"special-access-key\")\n      assert(options(\"fs.s3a.bucket.special-bucket.endpoint.region\") == \"eu-central-1\")\n      assert(!options.contains(\"fs.gs.project.id\"))\n    }\n    val gsOptions =\n      NativeConfig.extractObjectStoreOptions(hadoopConf, new URI(\"gs://test-bucket/test-object\"))\n    assert(gsOptions(\"fs.gs.project.id\") == \"gcp-project\")\n    assert(!gsOptions.contains(\"fs.s3a.access.key\"))\n\n    val azureOptions = NativeConfig.extractObjectStoreOptions(\n      hadoopConf,\n      new URI(\"wasb://test-bucket/test-object\"))\n    assert(azureOptions(\"fs.azure.account.key.testaccount.blob.core.windows.net\") == \"azure-key\")\n    assert(!azureOptions.contains(\"fs.s3a.access.key\"))\n\n    // Unsupported scheme should return empty options\n    val unsupportedOptions = NativeConfig.extractObjectStoreOptions(\n      hadoopConf,\n      new URI(\"unsupported://test-bucket/test-object\"))\n    assert(unsupportedOptions.isEmpty, \"Unsupported scheme should 
return empty options\")\n  }\n\n  test(\"validate object store config - no provider\") {\n    val hadoopConf = new Configuration()\n    validate(hadoopConf)\n  }\n\n  test(\"validate object store config - valid providers\") {\n    val hadoopConf = new Configuration()\n    val provider1 = \"com.amazonaws.auth.EnvironmentVariableCredentialsProvider\"\n    val provider2 = \"com.amazonaws.auth.WebIdentityTokenCredentialsProvider\"\n    hadoopConf.set(\"fs.s3a.aws.credentials.provider\", Seq(provider1, provider2).mkString(\",\"))\n    validate(hadoopConf)\n  }\n\n  test(\"validate object store config - invalid provider\") {\n    val hadoopConf = new Configuration()\n    hadoopConf.set(\"fs.s3a.aws.credentials.provider\", \"invalid\")\n    val fallbackReasons = validate(hadoopConf)\n    val expectedError = \"Unsupported credential provider: invalid\"\n    assert(fallbackReasons.exists(_.contains(expectedError)))\n  }\n\n  test(\"validate object store config - mixed anonymous providers\") {\n    val hadoopConf = new Configuration()\n    val provider1 = \"com.amazonaws.auth.AnonymousAWSCredentials\"\n    val provider2 = \"software.amazon.awssdk.auth.credentials.AnonymousCredentialsProvider\"\n    hadoopConf.set(\"fs.s3a.aws.credentials.provider\", Seq(provider1, provider2).mkString(\",\"))\n    val fallbackReasons = validate(hadoopConf)\n    val expectedError =\n      \"Anonymous credential provider cannot be mixed with other credential providers\"\n    assert(fallbackReasons.exists(_.contains(expectedError)))\n  }\n\n  test(\"validity cache\") {\n    val hadoopConf = new Configuration()\n    val provider1 = \"com.amazonaws.auth.AnonymousAWSCredentials\"\n    val provider2 = \"software.amazon.awssdk.auth.credentials.AnonymousCredentialsProvider\"\n    hadoopConf.set(\"fs.s3a.aws.credentials.provider\", Seq(provider1, provider2).mkString(\",\"))\n\n    assert(CometScanRule.configValidityMap.isEmpty)\n    for (_ <- 0 until 5) {\n      assert(validate(hadoopConf).nonEmpty)\n      assert(CometScanRule.configValidityMap.size == 1)\n    }\n\n    // set the same providers but in a different order\n    hadoopConf.set(\"fs.s3a.aws.credentials.provider\", Seq(provider2, provider1).mkString(\",\"))\n    assert(validate(hadoopConf).nonEmpty)\n    assert(CometScanRule.configValidityMap.size == 2)\n  }\n\n  private def validate(hadoopConf: Configuration): Set[String] = {\n    val path = \"s3a://path/to/file.parquet\"\n    val fallbackReasons = mutable.ListBuffer[String]()\n    CometScanRule.validateObjectStoreConfig(path, hadoopConf, fallbackReasons)\n    fallbackReasons.toSet\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/parquet/CometParquetWriterSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet\n\nimport java.io.File\n\nimport scala.util.Random\n\nimport org.apache.spark.sql.{CometTestBase, DataFrame, Row}\nimport org.apache.spark.sql.comet.{CometBatchScanExec, CometNativeScanExec, CometNativeWriteExec, CometScanExec}\nimport org.apache.spark.sql.execution.{FileSourceScanExec, QueryExecution, SparkPlan}\nimport org.apache.spark.sql.execution.command.DataWritingCommandExec\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.StructType\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator, SchemaGenOptions}\n\nclass CometParquetWriterSuite extends CometTestBase {\n\n  import testImplicits._\n\n  test(\"basic parquet write\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      // Create test data and write it to a temp parquet file first\n      withTempPath { inputDir =>\n        val inputPath = createTestData(inputDir)\n\n        withSQLConf(\n          CometConf.COMET_NATIVE_PARQUET_WRITE_ENABLED.key -> \"true\",\n          SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"America/Halifax\",\n          CometConf.COMET_OPERATOR_DATA_WRITING_COMMAND_ALLOW_INCOMPAT.key -> \"true\",\n          CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n\n          writeWithCometNativeWriteExec(inputPath, outputPath)\n\n          verifyWrittenFile(outputPath)\n        }\n      }\n    }\n  }\n\n  test(\"basic parquet write with native scan child\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      // Create test data and write it to a temp parquet file first\n      withTempPath { inputDir =>\n        val inputPath = createTestData(inputDir)\n\n        withSQLConf(\n          CometConf.COMET_NATIVE_PARQUET_WRITE_ENABLED.key -> \"true\",\n          SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"America/Halifax\",\n          CometConf.COMET_OPERATOR_DATA_WRITING_COMMAND_ALLOW_INCOMPAT.key -> \"true\",\n          CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n\n          withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> \"native_datafusion\") {\n            val capturedPlan = writeWithCometNativeWriteExec(inputPath, outputPath)\n            capturedPlan.foreach { plan =>\n              val hasNativeScan = plan.exists {\n                case _: CometNativeScanExec => true\n                case _ => false\n              }\n\n              assert(\n                hasNativeScan,\n                s\"Expected CometNativeScanExec in the plan, but got:\\n${plan.treeString}\")\n            }\n\n            verifyWrittenFile(outputPath)\n          }\n        
}\n      }\n    }\n  }\n\n  test(\"basic parquet write with repartition\") {\n    withTempPath { dir =>\n      // Create test data and write it to a temp parquet file first\n      withTempPath { inputDir =>\n        val inputPath = createTestData(inputDir)\n        Seq(true, false).foreach(adaptive => {\n          // Create a new output path for each AQE value\n          val outputPath = new File(dir, s\"output_aqe_$adaptive.parquet\").getAbsolutePath\n\n          withSQLConf(\n            CometConf.COMET_NATIVE_PARQUET_WRITE_ENABLED.key -> \"true\",\n            \"spark.sql.adaptive.enabled\" -> adaptive.toString,\n            SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"America/Halifax\",\n            CometConf.getOperatorAllowIncompatConfigKey(\n              classOf[DataWritingCommandExec]) -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n\n            writeWithCometNativeWriteExec(inputPath, outputPath, Some(10))\n            verifyWrittenFile(outputPath)\n          }\n        })\n      }\n    }\n  }\n\n  test(\"parquet write with array type\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      val df = Seq((1, Seq(1, 2, 3)), (2, Seq(4, 5)), (3, Seq[Int]()), (4, Seq(6, 7, 8, 9)))\n        .toDF(\"id\", \"values\")\n\n      writeComplexTypeData(df, outputPath, 4)\n    }\n  }\n\n  test(\"parquet write with struct type\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      val df =\n        Seq((1, (\"Alice\", 30)), (2, (\"Bob\", 25)), (3, (\"Charlie\", 35))).toDF(\"id\", \"person\")\n\n      writeComplexTypeData(df, outputPath, 3)\n    }\n  }\n\n  test(\"parquet write with map type\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      val df = Seq(\n        (1, Map(\"a\" -> 1, \"b\" -> 2)),\n        (2, Map(\"c\" -> 3)),\n        (3, Map[String, Int]()),\n        (4, Map(\"d\" -> 4, \"e\" -> 5, \"f\" -> 6))).toDF(\"id\", \"properties\")\n\n      writeComplexTypeData(df, outputPath, 4)\n    }\n  }\n\n  test(\"parquet write with array of structs\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      val df = Seq(\n        (1, Seq((\"Alice\", 30), (\"Bob\", 25))),\n        (2, Seq((\"Charlie\", 35))),\n        (3, Seq[(String, Int)]())).toDF(\"id\", \"people\")\n\n      writeComplexTypeData(df, outputPath, 3)\n    }\n  }\n\n  test(\"parquet write with struct containing array\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      val df = spark.sql(\"\"\"\n        SELECT\n          1 as id,\n          named_struct('name', 'Team A', 'scores', array(95, 87, 92)) as team\n        UNION ALL SELECT\n          2 as id,\n          named_struct('name', 'Team B', 'scores', array(88, 91)) as team\n        UNION ALL SELECT\n          3 as id,\n          named_struct('name', 'Team C', 'scores', array(100)) as team\n      \"\"\")\n\n      writeComplexTypeData(df, outputPath, 3)\n    }\n  }\n\n  test(\"parquet write with map with struct values\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      val df = spark.sql(\"\"\"\n        SELECT\n          1 as id,\n          map('emp1', named_struct('name', 'Alice', 'age', 30),\n              'emp2', named_struct('name', 'Bob', 'age', 25)) as employees\n        UNION ALL SELECT\n      
    2 as id,\n          map('emp3', named_struct('name', 'Charlie', 'age', 35)) as employees\n      \"\"\")\n\n      writeComplexTypeData(df, outputPath, 2)\n    }\n  }\n\n  test(\"parquet write with deeply nested types\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      // Create deeply nested structure: array of maps containing arrays\n      val df = spark.sql(\"\"\"\n        SELECT\n          1 as id,\n          array(\n            map('key1', array(1, 2, 3), 'key2', array(4, 5)),\n            map('key3', array(6, 7, 8, 9))\n          ) as nested_data\n        UNION ALL SELECT\n          2 as id,\n          array(\n            map('key4', array(10, 11))\n          ) as nested_data\n      \"\"\")\n\n      writeComplexTypeData(df, outputPath, 2)\n    }\n  }\n\n  test(\"parquet write with nullable complex types\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      // Test nulls at various levels\n      val df = spark.sql(\"\"\"\n        SELECT\n          1 as id,\n          array(1, null, 3) as arr_with_nulls,\n          named_struct('a', 1, 'b', cast(null as int)) as struct_with_nulls,\n          map('x', 1, 'y', cast(null as int)) as map_with_nulls\n        UNION ALL SELECT\n          2 as id,\n          cast(null as array<int>) as arr_with_nulls,\n          cast(null as struct<a:int, b:int>) as struct_with_nulls,\n          cast(null as map<string, int>) as map_with_nulls\n        UNION ALL SELECT\n          3 as id,\n          array(4, 5, 6) as arr_with_nulls,\n          named_struct('a', 7, 'b', 8) as struct_with_nulls,\n          map('z', 9) as map_with_nulls\n      \"\"\")\n\n      writeComplexTypeData(df, outputPath, 3)\n    }\n  }\n\n  test(\"parquet write with decimal types within complex types\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      val df = spark.sql(\"\"\"\n        SELECT\n          1 as id,\n          array(cast(1.23 as decimal(10,2)), cast(4.56 as decimal(10,2))) as decimal_arr,\n          named_struct('amount', cast(99.99 as decimal(10,2))) as decimal_struct,\n          map('price', cast(19.99 as decimal(10,2))) as decimal_map\n        UNION ALL SELECT\n          2 as id,\n          array(cast(7.89 as decimal(10,2))) as decimal_arr,\n          named_struct('amount', cast(0.01 as decimal(10,2))) as decimal_struct,\n          map('price', cast(0.50 as decimal(10,2))) as decimal_map\n      \"\"\")\n\n      writeComplexTypeData(df, outputPath, 2)\n    }\n  }\n\n  test(\"parquet write with temporal types within complex types\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      val df = spark.sql(\"\"\"\n        SELECT\n          1 as id,\n          array(date'2024-01-15', date'2024-02-20') as date_arr,\n          named_struct('ts', timestamp'2024-01-15 10:30:00') as ts_struct,\n          map('event', timestamp'2024-03-01 14:00:00') as ts_map\n        UNION ALL SELECT\n          2 as id,\n          array(date'2024-06-30') as date_arr,\n          named_struct('ts', timestamp'2024-07-04 12:00:00') as ts_struct,\n          map('event', timestamp'2024-12-25 00:00:00') as ts_map\n      \"\"\")\n\n      writeComplexTypeData(df, outputPath, 2)\n    }\n  }\n\n  test(\"parquet write with empty arrays and maps\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      val df = 
Seq(\n        (1, Seq[Int](), Map[String, Int]()),\n        (2, Seq(1, 2), Map(\"a\" -> 1)),\n        (3, Seq[Int](), Map[String, Int]())).toDF(\"id\", \"arr\", \"mp\")\n\n      writeComplexTypeData(df, outputPath, 3)\n    }\n  }\n\n  test(\"parquet write complex types fuzz test\") {\n    withTempPath { dir =>\n      val outputPath = new File(dir, \"output.parquet\").getAbsolutePath\n\n      // Generate test data with complex types enabled\n      val schema = FuzzDataGenerator.generateSchema(\n        SchemaGenOptions(generateArray = true, generateStruct = true, generateMap = true))\n      val df = FuzzDataGenerator.generateDataFrame(\n        new Random(42),\n        spark,\n        schema,\n        500,\n        DataGenOptions(generateNegativeZero = false))\n\n      writeComplexTypeData(df, outputPath, 500)\n    }\n  }\n\n  private def createTestData(inputDir: File): String = {\n    val inputPath = new File(inputDir, \"input.parquet\").getAbsolutePath\n    val schema = FuzzDataGenerator.generateSchema(\n      SchemaGenOptions(generateArray = false, generateStruct = false, generateMap = false))\n    val df = FuzzDataGenerator.generateDataFrame(\n      new Random(42),\n      spark,\n      schema,\n      1000,\n      DataGenOptions(generateNegativeZero = false))\n    withSQLConf(\n      CometConf.COMET_EXEC_ENABLED.key -> \"false\",\n      SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"America/Denver\") {\n      df.write.parquet(inputPath)\n    }\n    inputPath\n  }\n\n  /**\n   * Captures the execution plan during a write operation.\n   *\n   * @param writeOp\n   *   The write operation to execute (takes output path as parameter)\n   * @param outputPath\n   *   The path to write to\n   * @return\n   *   The captured execution plan\n   */\n  private def captureWritePlan(writeOp: String => Unit, outputPath: String): SparkPlan = {\n    var capturedPlan: Option[QueryExecution] = None\n\n    val listener = new org.apache.spark.sql.util.QueryExecutionListener {\n      override def onSuccess(funcName: String, qe: QueryExecution, durationNs: Long): Unit = {\n        if (funcName == \"save\" || funcName.contains(\"command\")) {\n          capturedPlan = Some(qe)\n        }\n      }\n\n      override def onFailure(\n          funcName: String,\n          qe: QueryExecution,\n          exception: Exception): Unit = {}\n    }\n\n    spark.listenerManager.register(listener)\n\n    try {\n      writeOp(outputPath)\n\n      // Wait for listener to be called with timeout\n      val maxWaitTimeMs = 15000\n      val checkIntervalMs = 100\n      val maxIterations = maxWaitTimeMs / checkIntervalMs\n      var iterations = 0\n\n      while (capturedPlan.isEmpty && iterations < maxIterations) {\n        Thread.sleep(checkIntervalMs)\n        iterations += 1\n      }\n\n      assert(\n        capturedPlan.isDefined,\n        s\"Listener was not called within ${maxWaitTimeMs}ms - no execution plan captured\")\n\n      stripAQEPlan(capturedPlan.get.executedPlan)\n    } finally {\n      spark.listenerManager.unregister(listener)\n    }\n  }\n\n  private def assertHasCometNativeWriteExec(plan: SparkPlan): Unit = {\n    var nativeWriteCount = 0\n    plan.foreach {\n      case _: CometNativeWriteExec =>\n        nativeWriteCount += 1\n      case d: DataWritingCommandExec =>\n        d.child.foreach {\n          case _: CometNativeWriteExec =>\n            nativeWriteCount += 1\n          case _ =>\n        }\n      case _ =>\n    }\n\n    assert(\n      nativeWriteCount == 1,\n      s\"Expected exactly one 
CometNativeWriteExec in the plan, but found $nativeWriteCount:\\n${plan.treeString}\")\n  }\n\n  private def writeWithCometNativeWriteExec(\n      inputPath: String,\n      outputPath: String,\n      numPartitions: Option[Int] = None): Option[SparkPlan] = {\n    val df = spark.read.parquet(inputPath)\n\n    val plan = captureWritePlan(\n      path => numPartitions.fold(df)(n => df.repartition(n)).write.parquet(path),\n      outputPath)\n\n    assertHasCometNativeWriteExec(plan)\n\n    Some(plan)\n  }\n\n  private def verifyWrittenFile(outputPath: String): Unit = {\n    // Verify the data was written correctly\n    val resultDf = spark.read.parquet(outputPath)\n    assert(resultDf.count() == 1000, \"Expected 1000 rows to be written\")\n\n    // Verify multiple part files were created\n    val outputDir = new File(outputPath)\n    val partFiles = outputDir.listFiles().filter(_.getName.startsWith(\"part-\"))\n    // With 1000 rows and default parallelism, we should get multiple partitions\n    assert(partFiles.length > 1, \"Expected multiple part files to be created\")\n\n    // read with and without Comet and compare\n    val sparkRows = readSparkRows(outputPath)\n    val cometRows = readCometRows(outputPath)\n    val schema = spark.read.parquet(outputPath).schema\n    compareRows(schema, sparkRows, cometRows)\n  }\n\n  private def writeComplexTypeData(\n      inputDf: DataFrame,\n      outputPath: String,\n      expectedRows: Int): Unit = {\n    withTempPath { inputDir =>\n      val inputPath = new File(inputDir, \"input.parquet\").getAbsolutePath\n\n      // First write the input data without Comet\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"false\",\n        SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"America/Denver\") {\n        inputDf.write.parquet(inputPath)\n      }\n\n      // read the generated Parquet file and write with Comet native writer\n      withSQLConf(\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        // enable experimental native writes\n        CometConf.COMET_OPERATOR_DATA_WRITING_COMMAND_ALLOW_INCOMPAT.key -> \"true\",\n        CometConf.COMET_NATIVE_PARQUET_WRITE_ENABLED.key -> \"true\",\n        // explicitly set scan impl to override CI defaults\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> \"auto\",\n        // Disable unsigned small int safety check for ShortType columns\n        CometConf.COMET_PARQUET_UNSIGNED_SMALL_INT_CHECK.key -> \"false\",\n        // use a different timezone to make sure that timezone handling works with nested types\n        SQLConf.SESSION_LOCAL_TIMEZONE.key -> \"America/Halifax\") {\n\n        val parquetDf = spark.read.parquet(inputPath)\n\n        // Capture plan and verify CometNativeWriteExec is used\n        val plan = captureWritePlan(path => parquetDf.write.parquet(path), outputPath)\n        assertHasCometNativeWriteExec(plan)\n      }\n\n      // Verify round-trip: read with Spark and Comet, compare results\n      val sparkRows = readSparkRows(outputPath)\n      val cometRows = readCometRows(outputPath)\n      assert(sparkRows.length == expectedRows, s\"Expected $expectedRows rows\")\n      val schema = spark.read.parquet(outputPath).schema\n      compareRows(schema, sparkRows, cometRows)\n    }\n  }\n\n  private def compareRows(\n      schema: StructType,\n      sparkRows: Array[Row],\n      cometRows: Array[Row]): Unit = {\n    import scala.jdk.CollectionConverters._\n    // Convert collected rows back to DataFrames for checkAnswer\n    val sparkDf = 
spark.createDataFrame(sparkRows.toSeq.asJava, schema)\n    val cometDf = spark.createDataFrame(cometRows.toSeq.asJava, schema)\n    checkAnswer(sparkDf, cometDf)\n  }\n\n  private def hasCometScan(plan: SparkPlan): Boolean = {\n    stripAQEPlan(plan).exists {\n      case _: CometScanExec => true\n      case _: CometNativeScanExec => true\n      case _: CometBatchScanExec => true\n      case _ => false\n    }\n  }\n\n  private def hasSparkScan(plan: SparkPlan): Boolean = {\n    stripAQEPlan(plan).exists {\n      case _: FileSourceScanExec => true\n      case _ => false\n    }\n  }\n\n  private def readSparkRows(path: String): Array[Row] = {\n    var rows: Array[Row] = null\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n      val df = spark.read.parquet(path)\n      val plan = df.queryExecution.executedPlan\n      assert(\n        hasSparkScan(plan) && !hasCometScan(plan),\n        s\"Expected Spark scan (not Comet) when COMET_ENABLED=false:\\n${plan.treeString}\")\n      rows = df.collect()\n    }\n    rows\n  }\n\n  private def readCometRows(path: String): Array[Row] = {\n    var rows: Array[Row] = null\n    withSQLConf(\n      CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"true\",\n      // Override CI setting to use a scan impl that supports complex types\n      CometConf.COMET_NATIVE_SCAN_IMPL.key -> \"auto\") {\n      val df = spark.read.parquet(path)\n      val plan = df.queryExecution.executedPlan\n      assert(\n        hasCometScan(plan),\n        s\"Expected Comet scan when COMET_NATIVE_SCAN_ENABLED=true:\\n${plan.treeString}\")\n      rows = df.collect()\n    }\n    rows\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/parquet/ParquetReadFromFakeHadoopFsSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet\n\nimport java.io.File\nimport java.nio.file.Files\nimport java.util.UUID\n\nimport org.apache.commons.io.FileUtils\nimport org.apache.spark.SparkConf\nimport org.apache.spark.sql.{CometTestBase, DataFrame, SaveMode}\nimport org.apache.spark.sql.comet.CometNativeScanExec\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions.{col, sum}\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.hadoop.fs.FakeHDFSFileSystem\n\nclass ParquetReadFromFakeHadoopFsSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  private var fake_root_dir: File = _\n\n  override protected def sparkConf: SparkConf = {\n    val conf = super.sparkConf\n    conf.set(\"spark.hadoop.fs.fake.impl\", \"org.apache.comet.hadoop.fs.FakeHDFSFileSystem\")\n    conf.set(\"spark.hadoop.fs.defaultFS\", FakeHDFSFileSystem.PREFIX)\n    conf.set(CometConf.COMET_LIBHDFS_SCHEMES.key, \"fake,hdfs\")\n  }\n\n  override def beforeAll(): Unit = {\n    // Initialize fake root dir\n    fake_root_dir = Files.createTempDirectory(s\"comet_fake_${UUID.randomUUID().toString}\").toFile\n    // Initialize Spark session\n    super.beforeAll()\n  }\n\n  protected override def afterAll(): Unit = {\n    if (fake_root_dir != null) FileUtils.deleteDirectory(fake_root_dir)\n    super.afterAll()\n  }\n\n  private def writeTestParquetFile(filePath: String): Unit = {\n    val df = spark.range(0, 1000)\n    df.write.format(\"parquet\").mode(SaveMode.Overwrite).save(filePath)\n  }\n\n  private def assertCometNativeScanOnFakeFs(df: DataFrame): Unit = {\n    val scans = collect(df.queryExecution.executedPlan) { case p: CometNativeScanExec =>\n      p\n    }\n    assert(scans.size == 1)\n    // File partitions are now accessed from the scan field, not from the protobuf\n    val filePartitions = scans.head.scan.getFilePartitions()\n    assert(filePartitions.nonEmpty)\n    assert(\n      filePartitions.head.files.head.filePath.toString\n        .startsWith(FakeHDFSFileSystem.PREFIX))\n  }\n\n  // This test fails for 'hdfs' but succeeds for 'open-dal'. 
'hdfs' requires this fix\n  // https://github.com/datafusion-contrib/fs-hdfs/pull/29\n  test(\"test native_datafusion scan on fake fs\") {\n    // Skip test if HDFS feature is not enabled in native library\n    assume(isFeatureEnabled(\"hdfs-opendal\"))\n    val testFilePath =\n      s\"${FakeHDFSFileSystem.PREFIX}${fake_root_dir.getAbsolutePath}/data/test-file.parquet\"\n    writeTestParquetFile(testFilePath)\n    withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION) {\n      val df = spark.read.format(\"parquet\").load(testFilePath).agg(sum(col(\"id\")))\n      assertCometNativeScanOnFakeFs(df)\n      assert(df.first().getLong(0) == 499500)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/parquet/ParquetReadFromS3Suite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet\n\nimport java.nio.charset.StandardCharsets\nimport java.util.Base64\n\nimport org.apache.parquet.crypto.DecryptionPropertiesFactory\nimport org.apache.parquet.crypto.keytools.{KeyToolkit, PropertiesDrivenCryptoFactory}\nimport org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\nimport org.apache.spark.sql.{DataFrame, SaveMode}\nimport org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.functions.{col, expr, max, sum}\n\nimport org.apache.comet.CometS3TestBase\n\nclass ParquetReadFromS3Suite extends CometS3TestBase with AdaptiveSparkPlanHelper {\n\n  override protected val testBucketName = \"test-bucket\"\n\n  // Encryption keys for testing parquet encryption\n  private val encoder = Base64.getEncoder\n  private val footerKey =\n    encoder.encodeToString(\"0123456789012345\".getBytes(StandardCharsets.UTF_8))\n  private val key1 = encoder.encodeToString(\"1234567890123450\".getBytes(StandardCharsets.UTF_8))\n  private val key2 = encoder.encodeToString(\"1234567890123451\".getBytes(StandardCharsets.UTF_8))\n  private val cryptoFactoryClass =\n    \"org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory\"\n\n  private def writeTestParquetFile(filePath: String): Unit = {\n    val df = spark.range(0, 1000)\n    df.write.format(\"parquet\").mode(SaveMode.Overwrite).save(filePath)\n  }\n\n  private def writePartitionedParquetFile(filePath: String): Unit = {\n    val df = spark.range(0, 1000).withColumn(\"val\", expr(\"concat('val#', id % 10)\"))\n    df.write.format(\"parquet\").partitionBy(\"val\").mode(SaveMode.Overwrite).save(filePath)\n  }\n\n  private def assertCometScan(df: DataFrame): Unit = {\n    val scans = collect(df.queryExecution.executedPlan) {\n      case p: CometScanExec => p\n      case p: CometNativeScanExec => p\n    }\n    assert(scans.size == 1)\n  }\n\n  test(\"read parquet file from MinIO\") {\n    val testFilePath = s\"s3a://$testBucketName/data/test-file.parquet\"\n    writeTestParquetFile(testFilePath)\n\n    val df = spark.read.format(\"parquet\").load(testFilePath).agg(sum(col(\"id\")))\n    assertCometScan(df)\n    assert(df.first().getLong(0) == 499500)\n  }\n\n  test(\"read partitioned parquet file from MinIO\") {\n    val testFilePath = s\"s3a://$testBucketName/data/test-partitioned-file.parquet\"\n    writePartitionedParquetFile(testFilePath)\n\n    val df = spark.read.format(\"parquet\").load(testFilePath).agg(sum(col(\"id\")), max(col(\"val\")))\n    val firstRow = df.first()\n    assert(firstRow.getLong(0) == 499500)\n    assert(firstRow.getString(1) == \"val#9\")\n  }\n\n  
test(\"read parquet file from MinIO with URL escape sequences in path\") {\n    // Path with '%23' and '%20' which are URL escape sequences for '#' and ' '\n    val testFilePath = s\"s3a://$testBucketName/data/Brand%2321/test%20file.parquet\"\n    writeTestParquetFile(testFilePath)\n\n    val df = spark.read.format(\"parquet\").load(testFilePath).agg(sum(col(\"id\")))\n    assertCometScan(df)\n    assert(df.first().getLong(0) == 499500)\n  }\n\n  test(\"write and read encrypted parquet from S3\") {\n    import testImplicits._\n\n    withSQLConf(\n      DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n      KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n        \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n      InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n        s\"footerKey: ${footerKey}, key1: ${key1}, key2: ${key2}\") {\n\n      val inputDF = spark\n        .range(0, 1000)\n        .map(i => (i, i.toString, i.toFloat))\n        .repartition(5)\n        .toDF(\"a\", \"b\", \"c\")\n\n      val testFilePath = s\"s3a://$testBucketName/data/encrypted-test.parquet\"\n      inputDF.write\n        .option(PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME, \"key1: a, b; key2: c\")\n        .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n        .parquet(testFilePath)\n\n      val df = spark.read.parquet(testFilePath).agg(sum(col(\"a\")))\n      assertCometScan(df)\n      assert(df.first().getLong(0) == 499500)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/parquet/ParquetReadSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.parquet\n\nimport java.io.File\nimport java.math.{BigDecimal, BigInteger}\nimport java.time.{ZoneId, ZoneOffset}\n\nimport scala.reflect.ClassTag\nimport scala.reflect.runtime.universe.TypeTag\nimport scala.util.control.Breaks.breakable\n\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.parquet.example.data.simple.SimpleGroup\nimport org.apache.parquet.schema.MessageTypeParser\nimport org.apache.spark.SparkException\nimport org.apache.spark.sql.{CometTestBase, DataFrame, Row}\nimport org.apache.spark.sql.catalyst.util.DateTimeUtils\nimport org.apache.spark.sql.comet.{CometNativeScanExec, CometScanExec}\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.execution.datasources.parquet.ParquetUtils\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types._\n\nimport com.google.common.primitives.UnsignedLong\n\nimport org.apache.comet.CometConf\n\nabstract class ParquetReadSuite extends CometTestBase {\n  import testImplicits._\n\n  testStandardAndLegacyModes(\"decimals\") {\n    Seq(true, false).foreach { useDecimal128 =>\n      Seq(16, 1024).foreach { batchSize =>\n        withSQLConf(\n          CometConf.COMET_EXEC_ENABLED.key -> false.toString,\n          CometConf.COMET_USE_DECIMAL_128.key -> useDecimal128.toString,\n          CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n          var combinations = Seq((5, 2), (1, 0), (18, 10), (18, 17), (19, 0), (38, 37))\n          // If ANSI mode is on, the combination (1, 1) will cause a runtime error. 
Otherwise, the\n          // decimal RDD contains all null values and should be able to read back from Parquet.\n\n          if (!SQLConf.get.ansiEnabled) {\n            combinations = combinations ++ Seq((1, 1))\n          }\n          for ((precision, scale) <- combinations; useDictionary <- Seq(false, true)) {\n            withTempPath { dir =>\n              val data = makeDecimalRDD(1000, DecimalType(precision, scale), useDictionary)\n              data.write.parquet(dir.getCanonicalPath)\n              readParquetFile(dir.getCanonicalPath) { df =>\n                {\n                  checkAnswer(df, data.collect().toSeq)\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"simple count\") {\n    withParquetTable((0 until 10).map(i => (i, i.toString)), \"tbl\") {\n      assert(sql(\"SELECT * FROM tbl WHERE _1 % 2 == 0\").count() == 5)\n    }\n  }\n\n  test(\"basic data types\") {\n    Seq(7, 1024).foreach { batchSize =>\n      withSQLConf(CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n        val data = (-100 to 100).map { i =>\n          (\n            i % 2 == 0,\n            i,\n            i.toByte,\n            i.toShort,\n            i.toLong,\n            i.toFloat,\n            i.toDouble,\n            DateTimeUtils.toJavaDate(i))\n        }\n        if (!hasUnsignedSmallIntSafetyCheck(conf)) {\n          checkParquetScan(data)\n        }\n        checkParquetFile(data)\n      }\n    }\n  }\n\n  test(\"basic data types with dictionary\") {\n    Seq(7, 1024).foreach { batchSize =>\n      withSQLConf(CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n        val data = (-100 to 100).map(_ % 4).map { i =>\n          (\n            i % 2 == 0,\n            i,\n            i.toByte,\n            i.toShort,\n            i.toLong,\n            i.toFloat,\n            i.toDouble,\n            DateTimeUtils.toJavaDate(i))\n        }\n        if (!hasUnsignedSmallIntSafetyCheck(conf)) {\n          checkParquetScan(data)\n        }\n        checkParquetFile(data)\n      }\n    }\n  }\n\n  test(\"basic filters\") {\n    val data = (-100 to 100).map { i =>\n      (\n        i % 2 == 0,\n        i,\n        i.toByte,\n        i.toShort,\n        i.toLong,\n        i.toFloat,\n        i.toDouble,\n        DateTimeUtils.toJavaDate(i))\n    }\n    val filter = (row: Row) => row.getBoolean(0)\n    if (!hasUnsignedSmallIntSafetyCheck(conf)) {\n      checkParquetScan(data, filter)\n    }\n    checkParquetFile(data, filter)\n  }\n\n  test(\"raw binary test\") {\n    val data = (1 to 4).map(i => Tuple1(Array.fill(3)(i.toByte)))\n    withParquetDataFrame(data) { df =>\n      assertResult(data.map(_._1.mkString(\",\")).sorted) {\n        df.collect().map(_.getAs[Array[Byte]](0).mkString(\",\")).sorted\n      }\n    }\n  }\n\n  test(\"string\") {\n    val data = (1 to 4).map(i => Tuple1(i.toString))\n    // Property spark.sql.parquet.binaryAsString shouldn't affect Parquet files written by Spark SQL\n    // as we store Spark SQL schema in the extra metadata.\n    withSQLConf(SQLConf.PARQUET_BINARY_AS_STRING.key -> \"false\")(checkParquetFile(data))\n    withSQLConf(SQLConf.PARQUET_BINARY_AS_STRING.key -> \"true\")(checkParquetFile(data))\n  }\n\n  test(\"string with dictionary\") {\n    Seq((100, 5), (1000, 10)).foreach { case (total, divisor) =>\n      val data = (1 to total).map(i => Tuple1((i % divisor).toString))\n      // Property spark.sql.parquet.binaryAsString shouldn't affect Parquet files written by Spark SQL\n      // 
as we store Spark SQL schema in the extra metadata.\n      withSQLConf(SQLConf.PARQUET_BINARY_AS_STRING.key -> \"false\")(checkParquetFile(data))\n      withSQLConf(SQLConf.PARQUET_BINARY_AS_STRING.key -> \"true\")(checkParquetFile(data))\n    }\n  }\n\n  test(\"long string + reserve additional space for value buffer\") {\n    withSQLConf(CometConf.COMET_BATCH_SIZE.key -> 16.toString) {\n      val data = (1 to 100).map(i => (i, i.toString * 10))\n      checkParquetFile(data)\n    }\n  }\n\n  test(\"timestamp\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        val expected = makeRawTimeParquetFile(path, dictionaryEnabled = dictionaryEnabled, 10000)\n        readParquetFile(path.toString) { df =>\n          checkAnswer(\n            df.select($\"_0\", $\"_1\", $\"_2\", $\"_3\", $\"_4\", $\"_5\"),\n            expected.map {\n              case None =>\n                Row(null, null, null, null, null, null)\n              case Some(i) =>\n                val ts = new java.sql.Timestamp(i)\n                val ldt = ts.toLocalDateTime\n                  .atZone(ZoneId.systemDefault())\n                  .withZoneSameInstant(ZoneOffset.UTC)\n                  .toLocalDateTime\n                Row(ts, ts, ts, ldt, ts, ldt)\n            })\n        }\n      }\n    }\n  }\n\n  test(\"timestamp as int96\") {\n    import testImplicits._\n\n    val N = 100\n    val ts = \"2020-01-01 01:02:03.123456\"\n    Seq(false, true).foreach { dictionaryEnabled =>\n      Seq(false, true).foreach { conversionEnabled =>\n        withSQLConf(\n          SQLConf.PARQUET_OUTPUT_TIMESTAMP_TYPE.key -> \"INT96\",\n          SQLConf.PARQUET_INT96_TIMESTAMP_CONVERSION.key -> conversionEnabled.toString) {\n          withTempPath { path =>\n            Seq\n              .tabulate(N)(_ => ts)\n              .toDF(\"ts1\")\n              .select($\"ts1\".cast(\"timestamp\").as(\"ts\"))\n              .repartition(1)\n              .write\n              .option(\"parquet.enable.dictionary\", dictionaryEnabled)\n              .parquet(path.getCanonicalPath)\n\n            checkAnswer(\n              spark.read.parquet(path.getCanonicalPath).select($\"ts\".cast(\"string\")),\n              Seq.tabulate(N)(_ => Row(ts)))\n          }\n        }\n      }\n    }\n  }\n\n  test(\"batch paging on basic types\") {\n    Seq(1, 2, 4, 9).foreach { batchSize =>\n      withSQLConf(CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n        val data = (1 to 10).map(i => (i, i.toByte, i.toShort, i.toFloat, i.toDouble, i.toString))\n        checkParquetFile(data)\n      }\n    }\n  }\n\n  test(\"nulls\") {\n    val allNulls = (\n      null.asInstanceOf[java.lang.Boolean],\n      null.asInstanceOf[Integer],\n      null.asInstanceOf[java.lang.Long],\n      null.asInstanceOf[java.lang.Float],\n      null.asInstanceOf[java.lang.Double],\n      null.asInstanceOf[java.lang.String])\n\n    withParquetDataFrame(allNulls :: Nil) { df =>\n      val rows = df.collect()\n      assert(rows.length === 1)\n      assert(rows.head === Row(Seq.fill(6)(null): _*))\n      assert(df.where(\"_1 is null\").count() == 1)\n    }\n  }\n\n  test(\"mixed nulls and non-nulls\") {\n    val rand = new scala.util.Random(42)\n    val data = (0 to 100).map { i =>\n      val row: (Boolean, Integer, java.lang.Long, java.lang.Float, java.lang.Double, String) = {\n        if (rand.nextBoolean()) {\n          (i % 2 == 0, i, i.toLong, i.toFloat, i.toDouble, 
i.toString)\n        } else {\n          (\n            null.asInstanceOf[java.lang.Boolean],\n            null.asInstanceOf[Integer],\n            null.asInstanceOf[java.lang.Long],\n            null.asInstanceOf[java.lang.Float],\n            null.asInstanceOf[java.lang.Double],\n            null.asInstanceOf[String])\n        }\n      }\n      row\n    }\n    checkParquetFile(data)\n  }\n\n  test(\"vector reloading with all non-null values\") {\n    def makeRawParquetFile(\n        path: Path,\n        dictionaryEnabled: Boolean,\n        n: Int,\n        numNonNulls: Int): Seq[Option[Int]] = {\n      val schemaStr =\n        \"\"\"\n          |message root {\n          | optional int32 _1;\n          |}\n        \"\"\".stripMargin\n\n      val schema = MessageTypeParser.parseMessageType(schemaStr)\n      val writer = createParquetWriter(schema, path, dictionaryEnabled = dictionaryEnabled)\n\n      val expected = (0 until n).map { i =>\n        if (i >= numNonNulls) {\n          None\n        } else {\n          Some(i)\n        }\n      }\n      expected.foreach { opt =>\n        val record = new SimpleGroup(schema)\n        opt match {\n          case Some(i) =>\n            record.add(0, i)\n          case _ =>\n        }\n        writer.write(record)\n      }\n\n      writer.close()\n      expected\n    }\n\n    Seq(2, 99, 1024).foreach { numNonNulls =>\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        val expected = makeRawParquetFile(path, dictionaryEnabled = false, 1024, numNonNulls)\n        withSQLConf(CometConf.COMET_BATCH_SIZE.key -> \"2\") {\n          readParquetFile(path.toString) { df =>\n            checkAnswer(\n              df,\n              expected.map {\n                case None =>\n                  Row(null)\n                case Some(i) =>\n                  Row(i)\n              })\n          }\n        }\n      }\n    }\n  }\n\n  test(\"test lazy materialization skipping\") {\n    def makeRawParquetFile(\n        path: Path,\n        dictionaryEnabled: Boolean,\n        pageSize: Int,\n        pageRowCountLimit: Int,\n        expected: Seq[Row]): Unit = {\n      val schemaStr =\n        \"\"\"\n          |message root {\n          |  optional int32   _1;\n          |  optional binary  _2(UTF8);\n          |}\n        \"\"\".stripMargin\n\n      val schema = MessageTypeParser.parseMessageType(schemaStr)\n      val writer = createParquetWriter(\n        schema,\n        path,\n        dictionaryEnabled = dictionaryEnabled,\n        pageSize = pageSize,\n        dictionaryPageSize = pageSize,\n        pageRowCountLimit = pageRowCountLimit)\n\n      expected.foreach { row =>\n        val record = new SimpleGroup(schema)\n        record.add(0, row.getInt(0))\n        record.add(1, row.getString(1))\n        writer.write(record)\n      }\n\n      writer.close()\n    }\n\n    val skip = Row(0, \"a\") // row to skip by lazy materialization\n    val read = Row(1, \"b\") // row not to skip\n    // The initial page row count is always 100 in ParquetWriter, even with pageRowCountLimit config\n    // Thus, use this header to fill in the first 100\n    val header = Seq.fill(100)(skip)\n\n    val expected = Seq( // spotless:off\n      read, read, read, read, // read all rows in the page\n      skip, skip, skip, skip, // skip all rows in the page\n      skip, skip, skip, skip, // consecutively skip all rows in the page\n      read, skip, skip, read, // skip middle rows in the page\n      skip, read, read, skip, // 
read middle rows in the page\n      skip, read, skip, read, // skip and read in turns\n      read, skip, read, skip // skip and read in turns\n    ) // spotless:on\n\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n      withSQLConf(\n        CometConf.COMET_BATCH_SIZE.key -> \"4\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n        makeRawParquetFile(path, dictionaryEnabled = false, 1024, 4, header ++ expected)\n        readParquetFile(path.toString) { df =>\n          checkAnswer(df.filter(\"_1 != 0\"), expected.filter(_.getInt(0) != 0))\n        }\n      }\n    }\n  }\n\n  test(\"test multiple pages with mixed PLAIN_DICTIONARY and PLAIN encoding\") {\n    // TODO: consider merging this with the same method above\n    def makeRawParquetFile(path: Path, n: Int): Seq[Option[Int]] = {\n      val dictionaryPageSize = 1024\n      val pageRowCount = 500\n      val schemaStr =\n        \"\"\"\n          |message root {\n          |  optional boolean _1;\n          |  optional int32   _2(INT_8);\n          |  optional int32   _3(INT_16);\n          |  optional int32   _4;\n          |  optional int64   _5;\n          |  optional float   _6;\n          |  optional double  _7;\n          |  optional binary  _8(UTF8);\n          |}\n        \"\"\".stripMargin\n\n      val schema = MessageTypeParser.parseMessageType(schemaStr)\n      val writer = createParquetWriter(\n        schema,\n        path,\n        dictionaryEnabled = true,\n        dictionaryPageSize = dictionaryPageSize,\n        pageRowCountLimit = pageRowCount)\n\n      val rand = new scala.util.Random(42)\n      val expected = (0 until n).map { i =>\n        // use a single value for the first page, to make sure dictionary encoding kicks in\n        val value = if (i < pageRowCount) i % 8 else i\n        if (rand.nextBoolean()) None\n        else Some(value)\n      }\n\n      expected.foreach { opt =>\n        val record = new SimpleGroup(schema)\n        opt match {\n          case Some(i) =>\n            record.add(0, i % 2 == 0)\n            record.add(1, i.toByte)\n            record.add(2, i.toShort)\n            record.add(3, i)\n            record.add(4, i.toLong)\n            record.add(5, i.toFloat)\n            record.add(6, i.toDouble)\n            record.add(7, i.toString * 100)\n          case _ =>\n        }\n        writer.write(record)\n      }\n\n      writer.close()\n      expected\n    }\n\n    Seq(16, 128).foreach { batchSize =>\n      withSQLConf(CometConf.COMET_BATCH_SIZE.key -> batchSize.toString) {\n        withTempDir { dir =>\n          val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n          val expected = makeRawParquetFile(path, 10000)\n          readParquetFile(path.toString) { df =>\n            checkAnswer(\n              df,\n              expected.map {\n                case None =>\n                  Row(null, null, null, null, null, null, null, null)\n                case Some(i) =>\n                  Row(\n                    i % 2 == 0,\n                    i.toByte,\n                    i.toShort,\n                    i,\n                    i.toLong,\n                    i.toFloat,\n                    i.toDouble,\n                    i.toString * 100)\n              })\n          }\n        }\n      }\n    }\n  }\n\n  test(\"skip vector re-loading\") {\n    Seq(false, true).foreach { enableDictionary =>\n      withSQLConf(\n        CometConf.COMET_BATCH_SIZE.key -> 7.toString,\n        CometConf.COMET_EXEC_ENABLED.key 
-> \"true\") {\n        // Make sure this works with Comet native execution too\n        val data = (1 to 100)\n          .map(_ % 5) // trigger dictionary encoding\n          .map(i => (i, i.toByte, i.toShort, i.toFloat, i.toDouble, i.toString))\n        withParquetTable(data, \"tbl\", withDictionary = enableDictionary) {\n          val df = sql(\"SELECT count(*) FROM tbl WHERE _1 >= 0\")\n          checkAnswer(df, Row(100) :: Nil)\n        }\n      }\n    }\n  }\n\n  test(\"partition columns - multiple batch\") {\n    withSQLConf(\n      CometConf.COMET_BATCH_SIZE.key -> 7.toString,\n      CometConf.COMET_EXEC_ENABLED.key -> \"false\",\n      CometConf.COMET_ENABLED.key -> \"true\") {\n      Seq(\"a\", null).foreach { partValue =>\n        withTempPath { dir =>\n          (1 to 100)\n            .map(v => (partValue.asInstanceOf[String], v))\n            .toDF(\"pcol\", \"col\")\n            .repartition(1)\n            .write\n            .format(\"parquet\")\n            .partitionBy(\"pcol\")\n            .save(dir.getCanonicalPath)\n          val df = spark.read.format(\"parquet\").load(dir.getCanonicalPath)\n          assert(df.filter(\"col > 90\").count() == 10)\n        }\n      }\n    }\n  }\n\n  test(\"fix: string partition column with incorrect offset buffer\") {\n    def makeRawParquetFile(\n        path: Path,\n        dictionaryEnabled: Boolean,\n        n: Int,\n        pageSize: Int): Seq[Option[Int]] = {\n      val schemaStr =\n        \"\"\"\n          |message root {\n          |  optional binary                  _1(UTF8);\n          |}\n    \"\"\".stripMargin\n\n      val schema = MessageTypeParser.parseMessageType(schemaStr)\n      val writer = createParquetWriter(\n        schema,\n        path,\n        dictionaryEnabled = dictionaryEnabled,\n        pageSize = pageSize,\n        dictionaryPageSize = pageSize,\n        rowGroupSize = 1024 * 128)\n\n      val rand = new scala.util.Random(42)\n      val expected = (0 until n).map { i =>\n        if (rand.nextBoolean()) {\n          None\n        } else {\n          Some(i)\n        }\n      }\n      expected.foreach { opt =>\n        val record = new SimpleGroup(schema)\n        opt match {\n          case Some(i) =>\n            record.add(0, i.toString * 48)\n          case _ =>\n        }\n        writer.write(record)\n      }\n\n      writer.close()\n      expected\n    }\n\n    withTable(\"tbl\") {\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        makeRawParquetFile(path, false, 10000, 128)\n\n        sql(\"CREATE TABLE tbl (value STRING, p STRING) USING PARQUET PARTITIONED BY (p) \")\n        sql(s\"ALTER TABLE tbl ADD PARTITION (p='a') LOCATION '$dir'\")\n        assert(sql(\"SELECT DISTINCT p FROM tbl\").count() == 1)\n      }\n    }\n\n  }\n\n  test(\"missing columns\") {\n    withTempPath { dir =>\n      Seq(\"a\", \"b\").toDF(\"col1\").write.parquet(dir.getCanonicalPath)\n\n      // Create a schema where `col2` doesn't exist in the file schema\n      var schema =\n        StructType(Seq(StructField(\"col1\", StringType), StructField(\"col2\", IntegerType)))\n      var df = spark.read.schema(schema).parquet(dir.getCanonicalPath)\n      checkAnswer(df, Row(\"a\", null) :: Row(\"b\", null) :: Nil)\n\n      // Should be the same when the missing column is at the beginning of the schema\n\n      schema = StructType(Seq(StructField(\"col0\", BooleanType), StructField(\"col1\", StringType)))\n      df = 
spark.read.schema(schema).parquet(dir.getCanonicalPath)\n      checkAnswer(df, Row(null, \"a\") :: Row(null, \"b\") :: Nil)\n    }\n  }\n\n  test(\"nested struct with new child field\") {\n    withTempPath { dir =>\n      val path = dir.getCanonicalPath\n\n      val df1 = sql(\"SELECT 1 c1, named_struct('c3', 2, 'c4', named_struct('c5', 3, 'c6', 4)) c2\")\n      val df2 = sql(\n        \"SELECT 1 c1, named_struct('c3', 2, 'c4', named_struct('c5', 3, 'c6', 4, 'c7', 5)) c2\")\n      val dir1 = s\"$path${File.separator}part=one\"\n      val dir2 = s\"$path${File.separator}part=two\"\n\n      df1.write.parquet(dir1)\n      df2.write.parquet(dir2)\n\n      var df = spark.read\n        .schema(df2.schema)\n        .load(path)\n        .select($\"c1\", $\"c2.c3\")\n\n      // read a leaf field\n      val expected1 =\n        Seq(Row(1, 2), Row(1, 2))\n      checkAnswer(df, expected1)\n\n      // read a struct field\n      df = spark.read\n        .schema(df2.schema)\n        .load(path)\n        .select($\"c1\", $\"c2\")\n      val expected2 =\n        Seq(Row(1, Row(2, Row(3, 4, null))), Row(1, Row(2, Row(3, 4, 5))))\n      checkAnswer(df, expected2)\n\n      // read a missing field\n      df = spark.read\n        .schema(df2.schema)\n        .load(path)\n        .select($\"c1\", $\"c2.c4.c7\")\n      val expected3 =\n        Seq(Row(1, null), Row(1, 5))\n      checkAnswer(df, expected3)\n    }\n  }\n\n  test(\"reading required fields in structs\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      def makeRawParquetFile(path: Path): Unit = {\n        val schemaStr =\n          \"\"\"message schema {\n            |  optional int32 a;\n            |  required group b {\n            |    optional int32 b1;\n            |    required int32 b2;\n            |  }\n            |  optional group c {\n            |    required int32 c1;\n            |    optional int32 c2;\n            |  }\n            |}\n        \"\"\".stripMargin\n        val schema = MessageTypeParser.parseMessageType(schemaStr)\n\n        val writer = createParquetWriter(schema, path, dictionaryEnabled)\n\n        // All optional fields are specified\n        var record = new SimpleGroup(schema)\n        record.add(\"a\", 1)\n        var b = record.addGroup(\"b\")\n        b.add(\"b1\", 1)\n        b.add(\"b2\", 1)\n        var c = record.addGroup(\"c\")\n        c.add(\"c1\", 1)\n        c.add(\"c2\", 1)\n        writer.write(record)\n\n        // No optional values\n        record = new SimpleGroup(schema)\n        record.add(\"a\", 2)\n        b = record.addGroup(\"b\")\n        b.add(\"b1\", 2)\n        b.add(\"b2\", 2)\n        writer.write(record)\n\n        writer.close()\n      }\n\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        makeRawParquetFile(path)\n\n        withParquetTable(spark.read.format(\"parquet\").load(path.toString), \"complex_types\") {\n          // required struct field with optional field\n          var df = sql(\"select a, b from complex_types\")\n          checkSparkAnswer(df)\n\n          // Missing optional struct field\n          df = sql(\"select a, c from complex_types\")\n          checkSparkAnswer(df)\n\n          // Missing optional struct field with nested required field\n          // TODO: This produces incorrect results in both native_datafusion and native_iceberg_compat\n          //          df = sql(\"select a, c.c1 from complex_types\")\n          //          checkSparkAnswer(df)\n\n          // required nested field 
in a struct\n          df = sql(\"select a, b.b2 from complex_types\")\n          checkSparkAnswer(df)\n\n        }\n      }\n    }\n  }\n\n  // TODO: This test fails for both native_datafusion and native_iceberg_compat\n  ignore(\"Missing optional struct field with nested required field\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      def makeRawParquetFile(path: Path): Unit = {\n        val schemaStr =\n          \"\"\"message schema {\n            |  optional int32 a;\n            |  required group b {\n            |    optional int32 b1;\n            |    required int32 b2;\n            |  }\n            |  optional group c {\n            |    required int32 c1;\n            |    optional int32 c2;\n            |  }\n            |}\n        \"\"\".stripMargin\n        val schema = MessageTypeParser.parseMessageType(schemaStr)\n\n        val writer = createParquetWriter(schema, path, dictionaryEnabled)\n\n        // All optional fields are specified\n        var record = new SimpleGroup(schema)\n        record.add(\"a\", 1)\n        var b = record.addGroup(\"b\")\n        b.add(\"b1\", 1)\n        b.add(\"b2\", 1)\n        var c = record.addGroup(\"c\")\n        c.add(\"c1\", 1)\n        c.add(\"c2\", 1)\n        writer.write(record)\n\n        // No optional values\n        record = new SimpleGroup(schema)\n        record.add(\"a\", 2)\n        b = record.addGroup(\"b\")\n        b.add(\"b1\", 2)\n        b.add(\"b2\", 2)\n        writer.write(record)\n\n        writer.close()\n      }\n\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        makeRawParquetFile(path)\n\n        withParquetTable(spark.read.format(\"parquet\").load(path.toString), \"complex_types\") {\n          // Missing optional struct field with nested required field\n          // TODO: This produces incorrect results in both native_datafusion and native_iceberg_compat\n          val df = sql(\"select a, c.c1 from complex_types\")\n          checkSparkAnswer(df)\n\n        }\n      }\n    }\n  }\n\n  test(\"missing required fields in structs\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      def makeRawParquetFile(path: Path): Unit = {\n        val schemaStr =\n          \"\"\"message schema {\n            |  optional int32 a;\n            |  required group b {\n            |    optional int32 b1;\n            |    required int32 b2;\n            |  }\n            |  optional group c {\n            |    required int32 c1;\n            |    optional int32 c2;\n            |  }\n            |}\n        \"\"\".stripMargin\n        val schema = MessageTypeParser.parseMessageType(schemaStr)\n        val writer = createParquetWriter(schema, path, dictionaryEnabled)\n        // Missing required field b.b2\n        val record = new SimpleGroup(schema)\n        record.add(\"a\", 3)\n        val b = record.addGroup(\"b\")\n        b.add(\"b1\", 4)\n        writer.write(record)\n\n        writer.close()\n      }\n\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        makeRawParquetFile(path)\n\n        withParquetTable(\n          spark.read.format(\"parquet\").load(path.toString),\n          \"complex_types_missing_struct\") {\n          // missing required nested field in a struct\n          var failed = false\n          try {\n            val df = sql(\"select a, b.b2 from complex_types_missing_struct\")\n            checkSparkAnswer(df)\n          } catch {\n            case _: Exception 
=> failed = true\n          }\n          assert(failed)\n          // missing required struct field\n          failed = false\n          try {\n            val df = sql(\"select a, b from complex_types_missing_struct\")\n            checkSparkAnswer(df)\n          } catch {\n            case _: Exception => failed = true\n          }\n          assert(failed)\n        }\n      }\n    }\n  }\n\n  test(\"unsigned int supported\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      def makeRawParquetFile(path: Path): Unit = {\n        val schemaStr =\n          \"\"\"message root {\n            |  required INT32 a(UINT_8);\n            |  required INT32 b(UINT_16);\n            |  required INT32 c(UINT_32);\n            |}\n        \"\"\".stripMargin\n        val schema = MessageTypeParser.parseMessageType(schemaStr)\n\n        val writer = createParquetWriter(schema, path, dictionaryEnabled)\n\n        (0 until 10).foreach { n =>\n          val record = new SimpleGroup(schema)\n          record.add(0, n.toByte + Byte.MaxValue)\n          record.add(1, n.toShort + Short.MaxValue)\n          record.add(2, n + Int.MaxValue)\n          writer.write(record)\n        }\n        writer.close()\n      }\n\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        makeRawParquetFile(path)\n        readParquetFile(path.toString) { df =>\n          checkAnswer(\n            df,\n            (0 until 10).map(n =>\n              Row(n.toByte + Byte.MaxValue, n.toShort + Short.MaxValue, n + Int.MaxValue.toLong)))\n        }\n      }\n    }\n  }\n\n  test(\"unsigned long supported\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      def makeRawParquetFile(path: Path): Unit = {\n        val schemaStr =\n          \"\"\"message root {\n            |  required INT64 a(UINT_64);\n            |}\n        \"\"\".stripMargin\n        val schema = MessageTypeParser.parseMessageType(schemaStr)\n\n        val writer = createParquetWriter(schema, path, dictionaryEnabled)\n\n        (0 until 10).map(_.toLong).foreach { n =>\n          val record = new SimpleGroup(schema)\n          record.add(0, n + Long.MaxValue)\n          writer.write(record)\n        }\n        writer.close()\n      }\n\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        makeRawParquetFile(path)\n        readParquetFile(path.toString) { df =>\n          checkAnswer(\n            df,\n            (0 until 10).map(n =>\n              Row(\n                new BigDecimal(UnsignedLong.fromLongBits(n + Long.MaxValue).bigIntegerValue()))))\n        }\n      }\n    }\n  }\n\n  test(\"enum support\") {\n    // https://github.com/apache/parquet-format/blob/master/LogicalTypes.md\n    // \"enum type should interpret ENUM annotated field as a UTF-8\"\n    Seq(true, false).foreach { dictionaryEnabled =>\n      def makeRawParquetFile(path: Path): Unit = {\n        val schemaStr =\n          \"\"\"message root {\n            |  required BINARY a(ENUM);\n            |}\n        \"\"\".stripMargin\n        val schema = MessageTypeParser.parseMessageType(schemaStr)\n\n        val writer = createParquetWriter(schema, path, dictionaryEnabled)\n\n        (0 until 10).map(_.toLong).foreach { n =>\n          val record = new SimpleGroup(schema)\n          record.add(0, n.toString)\n          writer.write(record)\n        }\n        writer.close()\n      }\n\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, 
\"part-r-0.parquet\")\n        makeRawParquetFile(path)\n        readParquetFile(path.toString) { df =>\n          checkAnswer(df, (0 until 10).map(n => Row(n.toString)))\n        }\n      }\n    }\n  }\n\n  test(\"FIXED_LEN_BYTE_ARRAY support\") {\n    Seq(true, false).foreach { dictionaryEnabled =>\n      def makeRawParquetFile(path: Path): Unit = {\n        val schemaStr =\n          \"\"\"message root {\n            |  required FIXED_LEN_BYTE_ARRAY(1) a;\n            |  required FIXED_LEN_BYTE_ARRAY(3) b;\n            |}\n        \"\"\".stripMargin\n        val schema = MessageTypeParser.parseMessageType(schemaStr)\n\n        val writer = createParquetWriter(schema, path, dictionaryEnabled)\n\n        (0 until 10).map(_.toString).foreach { n =>\n          val record = new SimpleGroup(schema)\n          record.add(0, n)\n          record.add(1, n + n + n)\n          writer.write(record)\n        }\n        writer.close()\n      }\n\n      withTempDir { dir =>\n        val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n        makeRawParquetFile(path)\n        readParquetFile(path.toString) { df =>\n          checkAnswer(\n            df,\n            (48 until 58).map(n => // char '0' is 48 in ascii\n              Row(Array(n), Array(n, n, n))))\n        }\n      }\n    }\n  }\n\n  test(\"schema evolution\") {\n    Seq(true, false).foreach { enableSchemaEvolution =>\n      Seq(true, false).foreach { useDictionary =>\n        {\n          withSQLConf(\n            CometConf.COMET_SCHEMA_EVOLUTION_ENABLED.key -> enableSchemaEvolution.toString) {\n            val data = (0 until 100).map(i => {\n              val v = if (useDictionary) i % 5 else i\n              (v, v.toFloat)\n            })\n            val readSchema =\n              StructType(\n                Seq(StructField(\"_1\", LongType, false), StructField(\"_2\", DoubleType, false)))\n\n            withParquetDataFrame(data, schema = Some(readSchema)) { df =>\n              val scan = CometConf.COMET_NATIVE_SCAN_IMPL.get(conf)\n              val isNativeDataFusionScan =\n                scan == CometConf.SCAN_NATIVE_DATAFUSION || scan == CometConf.SCAN_AUTO\n              if (enableSchemaEvolution || isNativeDataFusionScan) {\n                // native_datafusion has more permissive schema evolution\n                // https://github.com/apache/datafusion-comet/issues/3720\n                checkAnswer(df, data.map(Row.fromTuple))\n              } else {\n                assertThrows[SparkException](df.collect())\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"type widening: byte → short/int/long, short → int/long, int → long\") {\n    withSQLConf(CometConf.COMET_SCHEMA_EVOLUTION_ENABLED.key -> \"true\") {\n      withTempPath { dir =>\n        val path = dir.getCanonicalPath\n        val values = 1 to 10\n        val options: Map[String, String] = Map.empty[String, String]\n\n        // Input types and corresponding DataFrames\n        val inputDFs = Seq(\n          \"byte\" -> values.map(_.toByte).toDF(\"col1\"),\n          \"short\" -> values.map(_.toShort).toDF(\"col1\"),\n          \"int\" -> values.map(_.toInt).toDF(\"col1\"))\n\n        // Target Spark read schemas for widening\n        val widenTargets = Seq(\n          \"short\" -> values.map(_.toShort).toDF(\"col1\"),\n          \"int\" -> values.map(_.toInt).toDF(\"col1\"),\n          \"long\" -> values.map(_.toLong).toDF(\"col1\"))\n\n        for ((inputType, inputDF) <- inputDFs) {\n          val writePath = 
s\"$path/$inputType\"\n          inputDF.write.format(\"parquet\").options(options).save(writePath)\n\n          for ((targetType, targetDF) <- widenTargets) {\n            // Only test valid widenings (e.g., don't test int → short)\n            val wideningValid = (inputType, targetType) match {\n              case (\"byte\", \"short\" | \"int\" | \"long\") => true\n              case (\"short\", \"int\" | \"long\") => true\n              case (\"int\", \"long\") => true\n              case _ => false\n            }\n\n            if (wideningValid) {\n              val reader = spark.read\n                .schema(s\"col1 $targetType\")\n                .format(\"parquet\")\n                .options(options)\n                .load(writePath)\n\n              checkAnswer(reader, targetDF)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"read byte, int, short, long together\") {\n    withSQLConf(CometConf.COMET_SCHEMA_EVOLUTION_ENABLED.key -> \"true\") {\n      withTempPath { dir =>\n        val path = dir.getCanonicalPath\n\n        val byteDF = (Byte.MaxValue - 2 to Byte.MaxValue).map(_.toByte).toDF(\"col1\")\n        val shortDF = (Short.MaxValue - 2 to Short.MaxValue).map(_.toShort).toDF(\"col1\")\n        val intDF = (Int.MaxValue - 2 to Int.MaxValue).toDF(\"col1\")\n        val longDF = (Long.MaxValue - 2 to Long.MaxValue).toDF(\"col1\")\n        val unionDF = byteDF.union(shortDF).union(intDF).union(longDF)\n\n        val byteDir = s\"$path${File.separator}part=byte\"\n        val shortDir = s\"$path${File.separator}part=short\"\n        val intDir = s\"$path${File.separator}part=int\"\n        val longDir = s\"$path${File.separator}part=long\"\n\n        val options: Map[String, String] = Map.empty[String, String]\n\n        byteDF.write.format(\"parquet\").options(options).save(byteDir)\n        shortDF.write.format(\"parquet\").options(options).save(shortDir)\n        intDF.write.format(\"parquet\").options(options).save(intDir)\n        longDF.write.format(\"parquet\").options(options).save(longDir)\n\n        val df = spark.read\n          .schema(unionDF.schema)\n          .format(\"parquet\")\n          .options(options)\n          .load(path)\n          .select(\"col1\")\n\n        checkAnswer(df, unionDF)\n      }\n    }\n  }\n\n  test(\"read dictionary encoded decimals written as FIXED_LEN_BYTE_ARRAY\") {\n    // In this test, data is encoded using Parquet page v2 format, but with PLAIN encoding\n    checkAnswer(\n      // Decimal column in this file is encoded using plain dictionary\n      readResourceParquetFile(\"test-data/dec-in-fixed-len.parquet\"),\n      spark.range(1 << 4).select('id % 10 cast DecimalType(10, 2) as 'fixed_len_dec))\n  }\n\n  test(\"read long decimals with precision <= 9\") {\n    // decimal32-written-as-64-bit.snappy.parquet was generated using a 3rd-party library. It has\n    // 10 rows of Decimal(9, 1) written as LongDecimal instead of an IntDecimal\n    var df = readResourceParquetFile(\"test-data/decimal32-written-as-64-bit.snappy.parquet\")\n    assert(10 == df.collect().length)\n    var first10Df = df.head(10)\n    assert(\n      Seq(792059492, 986842987, 540247998, null, 357991078, 494131059, 92536396, 426847157,\n        -999999999, 204486094)\n        .zip(first10Df)\n        .forall(d =>\n          d._2.isNullAt(0) && d._1 == null ||\n            d._1 == d._2.getDecimal(0).unscaledValue().intValue()))\n\n    // decimal32-written-as-64-bit-dict.snappy.parquet was generated using a 3rd-party library. 
It\n    // has 2048 rows of Decimal(3, 1) written as LongDecimal instead of an IntDecimal\n    df = readResourceParquetFile(\"test-data/decimal32-written-as-64-bit-dict.snappy.parquet\")\n    assert(2048 == df.collect().length)\n    first10Df = df.head(10)\n    assert(\n      Seq(751, 937, 511, null, 337, 467, 84, 403, -999, 190)\n        .zip(first10Df)\n        .forall(d =>\n          d._2.isNullAt(0) && d._1 == null ||\n            d._1 == d._2.getDecimal(0).unscaledValue().intValue()))\n\n    val last10Df = df.tail(10)\n    assert(\n      Seq(866, 20, 492, 76, 824, 604, 343, 820, 864, 243)\n        .zip(last10Df)\n        .forall(d => d._1 == d._2.getDecimal(0).unscaledValue().intValue()))\n  }\n\n  private val actions: Seq[DataFrame => DataFrame] = Seq(\n    \"_1 = 500\",\n    \"_1 = 500 or _1 = 1500\",\n    \"_1 = 500 or _1 = 501 or _1 = 1500\",\n    \"_1 = 500 or _1 = 501 or _1 = 1000 or _1 = 1500\",\n    \"_1 >= 500 and _1 < 1000\",\n    \"(_1 >= 500 and _1 < 1000) or (_1 >= 1500 and _1 < 1600)\").map(f =>\n    (df: DataFrame) => df.filter(f))\n\n  test(\"test lazy materialization when batch size is small\") {\n    val df = spark.range(0, 2000).selectExpr(\"id as _1\", \"cast(id as string) as _11\")\n    checkParquetDataFrame(df)(actions: _*)\n  }\n\n  test(\"test lazy materialization when batch size is small (dict encode)\") {\n    val df = spark.range(0, 2000).selectExpr(\"id as _1\", \"cast(id % 10 as string) as _11\")\n    checkParquetDataFrame(df)(actions: _*)\n  }\n\n  private def testStandardAndLegacyModes(testName: String)(f: => Unit): Unit = {\n    test(s\"Standard mode - $testName\") {\n      withSQLConf(SQLConf.PARQUET_WRITE_LEGACY_FORMAT.key -> \"false\") { f }\n    }\n\n    test(s\"Legacy mode - $testName\") {\n      withSQLConf(SQLConf.PARQUET_WRITE_LEGACY_FORMAT.key -> \"true\") { f }\n    }\n  }\n\n  private def checkParquetFile[T <: Product: ClassTag: TypeTag](\n      data: Seq[T],\n      f: Row => Boolean = _ => true): Unit = {\n    withParquetDataFrame(data)(r => checkAnswer(r.filter(f), data.map(Row.fromTuple).filter(f)))\n  }\n\n  protected def checkParquetScan[T <: Product: ClassTag: TypeTag](\n      data: Seq[T],\n      f: Row => Boolean = _ => true): Unit\n\n  /**\n   * create parquet file with various page sizes and batch sizes\n   */\n  private def checkParquetDataFrame(df: DataFrame)(actions: (DataFrame => DataFrame)*): Unit = {\n    Seq(true, false).foreach { enableDictionary =>\n      Seq(64, 127, 4049).foreach { pageSize =>\n        withTempPath(file => {\n          df.coalesce(1)\n            .write\n            .option(\"parquet.page.size\", pageSize.toString)\n            .option(\"parquet.enable.dictionary\", enableDictionary.toString)\n            .parquet(file.getCanonicalPath)\n\n          Seq(true, false).foreach { useLazyMaterialization =>\n            Seq(true, false).foreach { enableCometExec =>\n              Seq(4, 13, 4049).foreach { batchSize =>\n                withSQLConf(\n                  CometConf.COMET_BATCH_SIZE.key -> batchSize.toString,\n                  CometConf.COMET_EXEC_ENABLED.key -> enableCometExec.toString,\n                  CometConf.COMET_USE_LAZY_MATERIALIZATION.key -> useLazyMaterialization.toString) {\n                  readParquetFile(file.getCanonicalPath) { parquetDf =>\n                    actions.foreach { action =>\n                      checkAnswer(action(parquetDf), action(df))\n                    }\n                  }\n                }\n              }\n            }\n          }\n        })\n      
}\n    }\n  }\n\n  test(\"test merge scan range\") {\n    def makeRawParquetFile(path: Path, n: Int): Seq[Option[Int]] = {\n      val dictionaryPageSize = 1024\n      val pageRowCount = 500\n      val schemaStr =\n        \"\"\"\n          |message root {\n          |  optional int32   _1(INT_16);\n          |  optional int32   _2;\n          |  optional int64   _3;\n          |}\n        \"\"\".stripMargin\n\n      val schema = MessageTypeParser.parseMessageType(schemaStr)\n      val writer = createParquetWriter(\n        schema,\n        path,\n        dictionaryEnabled = true,\n        dictionaryPageSize = dictionaryPageSize,\n        pageRowCountLimit = pageRowCount)\n\n      val rand = new scala.util.Random(42)\n      val expected = (0 until n).map { i =>\n        // use a single value for the first page, to make sure dictionary encoding kicks in\n        val value = if (i < pageRowCount) i % 8 else i\n        if (rand.nextBoolean()) None\n        else Some(value)\n      }\n\n      expected.foreach { opt =>\n        val record = new SimpleGroup(schema)\n        opt match {\n          case Some(i) =>\n            record.add(0, i.toShort)\n            record.add(1, i)\n            record.add(2, i.toLong)\n          case _ =>\n        }\n        writer.write(record)\n      }\n\n      writer.close()\n      expected\n    }\n\n    Seq(16, 128).foreach { batchSize =>\n      Seq(1024, 1024 * 1024).foreach { mergeRangeDelta =>\n        {\n          withSQLConf(\n            CometConf.COMET_BATCH_SIZE.key -> batchSize.toString,\n            CometConf.COMET_IO_MERGE_RANGES.key -> \"true\",\n            CometConf.COMET_IO_MERGE_RANGES_DELTA.key -> mergeRangeDelta.toString) {\n            withTempDir { dir =>\n              val path = new Path(dir.toURI.toString, \"part-r-0.parquet\")\n              val expected = makeRawParquetFile(path, 10000)\n              val schema = StructType(\n                Seq(StructField(\"_1\", ShortType, true), StructField(\"_3\", LongType, true)))\n              readParquetFile(path.toString, Some(schema)) { df =>\n                {\n                  // CometScanExec calls sessionState.newHadoopConfWithOptions which copies\n                  // the sqlConf and some additional options to the hadoopConf and then\n                  // uses the result to create the inputRDD (https://github.com/apache/datafusion-comet/blob/3783faaa01078a35bee93b299368f8c72869198d/spark/src/main/scala/org/apache/spark/sql/comet/CometScanExec.scala#L181).\n                  // We don't have access to the created hadoop Conf, but can confirm that the\n                  // result does contain the correct configuration\n                  assert(\n                    df.sparkSession.sessionState\n                      .newHadoopConfWithOptions(Map.empty)\n                      .get(CometConf.COMET_IO_MERGE_RANGES_DELTA.key)\n                      .equals(mergeRangeDelta.toString))\n                  checkAnswer(\n                    df,\n                    expected.map {\n                      case None =>\n                        Row(null, null)\n                      case Some(i) =>\n                        Row(i.toShort, i.toLong)\n                    })\n                }\n              }\n            }\n          }\n        }\n      }\n    }\n  }\n\n  def testScanner(\n      cometEnabled: String,\n      cometNativeScanImpl: String,\n      scanner: String,\n      v1: Option[String] = None): Unit = {\n    withSQLConf(\n      CometConf.COMET_ENABLED.key -> cometEnabled,\n      
CometConf.COMET_EXEC_ENABLED.key -> cometEnabled,\n      CometConf.COMET_NATIVE_SCAN_IMPL.key -> cometNativeScanImpl,\n      SQLConf.USE_V1_SOURCE_LIST.key -> v1.getOrElse(\"\")) {\n      withParquetTable(Seq((Long.MaxValue, 1), (Long.MaxValue, 2)), \"tbl\") {\n        val df = spark.sql(\"select * from tbl\")\n        assert(\n          stripAQEPlan(df.queryExecution.executedPlan)\n            .collectLeaves()\n            .head\n            .toString()\n            .startsWith(scanner))\n      }\n    }\n  }\n\n  private def withId(id: Int) =\n    new MetadataBuilder().putLong(ParquetUtils.FIELD_ID_METADATA_KEY, id).build()\n\n  // Based on Spark ParquetIOSuite.test(\"vectorized reader: array of nested struct\")\n  test(\"array of nested struct with and without field id\") {\n    val nestedSchema = StructType(\n      Seq(StructField(\n        \"_1\",\n        StructType(Seq(\n          StructField(\"_1\", StringType, nullable = true, withId(1)), // Field ID 1\n          StructField(\n            \"_2\",\n            ArrayType(StructType(Seq(\n              StructField(\"_1\", StringType, nullable = true, withId(2)), // Field ID 2\n              StructField(\"_2\", StringType, nullable = true, withId(3)) // Field ID 3\n            ))),\n            nullable = true))),\n        nullable = true)))\n    val nestedSchemaNoId = StructType(\n      Seq(StructField(\n        \"_1\",\n        StructType(Seq(\n          StructField(\"_1\", StringType, nullable = true),\n          StructField(\n            \"_2\",\n            ArrayType(StructType(Seq(\n              StructField(\"_1\", StringType, nullable = true),\n              StructField(\"_2\", StringType, nullable = true)))),\n            nullable = true))),\n        nullable = true)))\n    // data matching the schema\n    val data = Seq(\n      Row(Row(\"a\", null)),\n      Row(Row(\"b\", Seq(Row(\"c\", \"d\")))),\n      Row(null),\n      Row(Row(\"e\", Seq(Row(\"f\", null), Row(null, \"g\")))),\n      Row(Row(null, null)),\n      Row(Row(null, Seq(null))),\n      Row(Row(null, Seq(Row(null, null), Row(\"h\", null), null))),\n      Row(Row(\"i\", Seq())),\n      Row(null))\n    val answer =\n      Row(Row(\"a\", null)) ::\n        Row(Row(\"b\", Seq(Row(\"c\", \"d\")))) ::\n        Row(null) ::\n        Row(Row(\"e\", Seq(Row(\"f\", null), Row(null, \"g\")))) ::\n        Row(Row(null, null)) ::\n        Row(Row(null, Seq(null))) ::\n        Row(Row(null, Seq(Row(null, null), Row(\"h\", null), null))) ::\n        Row(Row(\"i\", Seq())) ::\n        Row(null) ::\n        Nil\n\n    withSQLConf(SQLConf.PARQUET_FIELD_ID_READ_ENABLED.key -> \"true\") {\n      val df = spark.createDataFrame(spark.sparkContext.parallelize(data), nestedSchema)\n      withTempPath { path =>\n        df.write.parquet(path.getCanonicalPath)\n        readParquetFile(path.getCanonicalPath) { df =>\n          checkAnswer(df, answer)\n        }\n      }\n      val df2 = spark.createDataFrame(spark.sparkContext.parallelize(data), nestedSchemaNoId)\n      withTempPath { path =>\n        df2.write.parquet(path.getCanonicalPath)\n        readParquetFile(path.getCanonicalPath) { df =>\n          checkAnswer(df, answer)\n        }\n      }\n    }\n  }\n}\n\nclass ParquetReadV1Suite extends ParquetReadSuite with AdaptiveSparkPlanHelper {\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n    super.test(testName, testTags: _*)(withSQLConf(SQLConf.USE_V1_SOURCE_LIST.key -> \"parquet\") {\n      testFun\n    
})(pos)\n  }\n\n  override def checkParquetScan[T <: Product: ClassTag: TypeTag](\n      data: Seq[T],\n      f: Row => Boolean = _ => true): Unit = {\n    withParquetDataFrame(data) { r =>\n      val scans = collect(r.filter(f).queryExecution.executedPlan) {\n        case p: CometScanExec =>\n          p\n        case p: CometNativeScanExec =>\n          p\n      }\n      if (CometConf.COMET_ENABLED.get()) {\n        assert(scans.nonEmpty)\n      } else {\n        assert(scans.isEmpty)\n      }\n    }\n  }\n\n  test(\"Test V1 parquet scan uses respective scanner\") {\n    Seq(\n      (\"false\", CometConf.SCAN_NATIVE_DATAFUSION, \"FileScan parquet\"),\n      (\"true\", CometConf.SCAN_NATIVE_DATAFUSION, \"CometNativeScan\"),\n      (\"true\", CometConf.SCAN_NATIVE_ICEBERG_COMPAT, \"CometScan [native_iceberg_compat] parquet\"))\n      .foreach { case (cometEnabled, cometNativeScanImpl, expectedScanner) =>\n        testScanner(\n          cometEnabled,\n          cometNativeScanImpl,\n          scanner = expectedScanner,\n          v1 = Some(\"parquet\"))\n      }\n  }\n\n  test(\"test V1 parquet native scan -- case insensitive\") {\n    withTempPath { path =>\n      spark.range(10).toDF(\"a\").write.parquet(path.toString)\n      Seq(CometConf.SCAN_NATIVE_DATAFUSION, CometConf.SCAN_NATIVE_ICEBERG_COMPAT).foreach(\n        scanMode => {\n          withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> scanMode) {\n            withTable(\"test\") {\n              sql(\"create table test (A long) using parquet options (path '\" + path + \"')\")\n              val df = sql(\"select A from test\")\n              checkSparkAnswer(df)\n              // TODO: pushed-down filters do not use the schema adapter in DataFusion, which causes an empty result\n              // val df = sql(\"select * from test where A > 5\")\n              // checkSparkAnswer(df)\n            }\n          }\n        })\n    }\n  }\n\n  test(\"test V1 parquet scan filter pushdown of primitive types uses native_iceberg_compat\") {\n    withTempPath { dir =>\n      val path = new Path(dir.toURI.toString, \"test1.parquet\")\n      val rows = 1000\n      withSQLConf(\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_ICEBERG_COMPAT,\n        CometConf.COMET_PARQUET_UNSIGNED_SMALL_INT_CHECK.key -> \"true\") {\n        makeParquetFileAllPrimitiveTypes(\n          path,\n          dictionaryEnabled = false,\n          0,\n          rows,\n          nullEnabled = false)\n      }\n      Seq(CometConf.SCAN_NATIVE_DATAFUSION, CometConf.SCAN_NATIVE_ICEBERG_COMPAT).foreach {\n        scanMode =>\n          Seq(true, false).foreach { pushDown =>\n            breakable {\n              withSQLConf(\n                CometConf.COMET_NATIVE_SCAN_IMPL.key -> scanMode,\n                SQLConf.PARQUET_FILTER_PUSHDOWN_ENABLED.key -> pushDown.toString) {\n                Seq(\n                  (\"_1 = true\", Math.ceil(rows.toDouble / 2)), // Boolean\n                  (\"_2 = 1\", Math.ceil(rows.toDouble / 256)), // Byte\n                  (\"_3 = 1\", 1), // Short\n                  (\"_4 = 1\", 1), // Integer\n                  (\"_5 = 1\", 1), // Long\n                  (\"_6 = 1.0\", 1), // Float\n                  (\"_7 = 1.0\", 1), // Double\n                  (s\"_8 = '${1.toString * 48}'\", 1), // String\n                  (\"_21 = to_binary('1', 'utf-8')\", 1), // binary\n                  (\"_15 = 0.0\", 1), // DECIMAL(5, 2)\n                  (\"_16 = 0.0\", 1), // DECIMAL(18, 10)\n                  (\n                    
s\"_17 = ${new BigDecimal(new BigInteger((\"1\" * 16).getBytes), 37).toString}\",\n                    Math.ceil(rows.toDouble / 10)\n                  ), // DECIMAL(38, 37)\n                  (s\"_19 = TIMESTAMP '${DateTimeUtils.toJavaTimestamp(1)}'\", 1), // Timestamp\n                  (\"_20 = DATE '1970-01-02'\", 1) // Date\n                ).foreach { case (whereClause, expectedRows) =>\n                  val df = spark.read\n                    .parquet(path.toString)\n                    .where(whereClause)\n                  val (_, cometPlan) = checkSparkAnswer(df)\n                  val scan = collect(cometPlan) {\n                    case p: CometScanExec =>\n                      assert(p.dataFilters.nonEmpty)\n                      p\n                    case p: CometNativeScanExec =>\n                      assert(p.dataFilters.nonEmpty)\n                      p\n                  }\n                  assert(scan.size == 1)\n\n                  if (pushDown) {\n                    assert(scan.head.metrics(\"output_rows\").value == expectedRows)\n                  } else {\n                    assert(scan.head.metrics(\"output_rows\").value == rows)\n                  }\n                }\n              }\n            }\n          }\n      }\n    }\n  }\n\n  test(\"read basic complex types\") {\n    Seq(true, false).foreach(dictionaryEnabled => {\n      withTempPath { dir =>\n        val path = new Path(dir.toURI.toString, \"complex_types.parquet\")\n        makeParquetFileComplexTypes(path, dictionaryEnabled, 10)\n        withParquetTable(path.toUri.toString, \"complex_types\") {\n          Seq(CometConf.SCAN_NATIVE_DATAFUSION, CometConf.SCAN_NATIVE_ICEBERG_COMPAT).foreach(\n            scanMode => {\n              withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> scanMode) {\n                checkSparkAnswerAndOperator(sql(\"select * from complex_types\"))\n                // First level\n                checkSparkAnswerAndOperator(sql(\n                  \"select optional_array, array_of_struct, optional_map, complex_map from complex_types\"))\n                // Second nested level\n                checkSparkAnswerAndOperator(\n                  sql(\n                    \"select optional_array[0], \" +\n                      \"array_of_struct[0].field1, \" +\n                      \"array_of_struct[0].optional_nested_array, \" +\n                      \"optional_map.key, \" +\n                      \"optional_map.value, \" +\n                      \"map_keys(complex_map), \" +\n                      \"map_entries(complex_map), \" +\n                      \"map_values(complex_map) \" +\n                      \"from complex_types\"))\n                // leaf fields\n                checkSparkAnswerAndOperator(\n                  sql(\n                    \"select optional_array[0], \" +\n                      \"array_of_struct[0].field1, \" +\n                      \"array_of_struct[0].optional_nested_array[0], \" +\n                      \"optional_map.key, \" +\n                      \"optional_map.value, \" +\n                      \"map_keys(complex_map)[0].key_field1, \" +\n                      \"map_keys(complex_map)[0].key_field2, \" +\n                      \"map_entries(complex_map)[0].key, \" +\n                      \"map_entries(complex_map)[0].value, \" +\n                      \"map_values(complex_map)[0].value_field1, \" +\n                      \"map_values(complex_map)[0].value_field2 \" +\n                      \"from complex_types\"))\n              }\n  
          })\n        }\n      }\n    })\n  }\n\n  test(\"reading ancient dates before 1582\") {\n    // Verify that legacy dates (before 1582-10-15) are read without error.\n    // Comet does not support datetime rebasing, so these dates are read as if they were\n    // written using the Proleptic Gregorian calendar (no rebase, no exception).\n    val file =\n      getResourceParquetFilePath(\"test-data/before_1582_date_v3_2_0.snappy.parquet\")\n\n    Seq(CometConf.SCAN_NATIVE_ICEBERG_COMPAT, CometConf.SCAN_NATIVE_DATAFUSION).foreach {\n      scanImpl =>\n        withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> scanImpl) {\n          val df = spark.read.parquet(file)\n\n          // Verify Comet scan is in the plan\n          val plan = df.queryExecution.executedPlan\n          checkCometOperators(plan)\n\n          // Verify all 8 rows are read and contain dates before 1582\n          val rows = df.collect()\n          assert(rows.length == 8, s\"Expected 8 rows with $scanImpl, got ${rows.length}\")\n          rows.foreach { row =>\n            val date = row.getDate(0)\n            assert(\n              date.toLocalDate.getYear < 1582,\n              s\"Expected date before 1582 with $scanImpl, got $date\")\n          }\n        }\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/rules/CometExecRuleSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.rules\n\nimport scala.util.Random\n\nimport org.apache.spark.sql._\nimport org.apache.spark.sql.comet._\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.adaptive.QueryStageExec\nimport org.apache.spark.sql.execution.aggregate.HashAggregateExec\nimport org.apache.spark.sql.execution.exchange.{BroadcastExchangeExec, ShuffleExchangeExec}\nimport org.apache.spark.sql.types.{DataTypes, StructField, StructType}\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator}\n\n/**\n * Test suite specifically for CometExecRule transformation logic. Tests the rule's ability to\n * transform Spark operators to Comet operators, fallback mechanisms, configuration handling, and\n * edge cases.\n */\nclass CometExecRuleSuite extends CometTestBase {\n\n  /** Helper method to apply CometExecRule and return the transformed plan */\n  private def applyCometExecRule(plan: SparkPlan): SparkPlan = {\n    CometExecRule(spark).apply(stripAQEPlan(plan))\n  }\n\n  /** Create a test data frame that is used in all tests */\n  private def createTestDataFrame = {\n    val testSchema = new StructType(\n      Array(\n        StructField(\"id\", DataTypes.IntegerType, nullable = true),\n        StructField(\"name\", DataTypes.StringType, nullable = true)))\n    FuzzDataGenerator.generateDataFrame(new Random(42), spark, testSchema, 100, DataGenOptions())\n  }\n\n  /** Create a SparkPlan from the specified SQL with Comet disabled */\n  private def createSparkPlan(spark: SparkSession, sql: String): SparkPlan = {\n    var sparkPlan: SparkPlan = null\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n      val df = spark.sql(sql)\n      sparkPlan = df.queryExecution.executedPlan\n    }\n    sparkPlan\n  }\n\n  /** Count the number of the specified operator in the plan */\n  private def countOperators(plan: SparkPlan, opClass: Class[_]): Int = {\n    stripAQEPlan(plan).collect {\n      case stage: QueryStageExec =>\n        countOperators(stage.plan, opClass)\n      case op if op.getClass.isAssignableFrom(opClass) => 1\n    }.sum\n  }\n\n  test(\n    \"CometExecRule should apply basic operator transformations, but only when Comet is enabled\") {\n    withTempView(\"test_data\") {\n      createTestDataFrame.createOrReplaceTempView(\"test_data\")\n\n      val sparkPlan =\n        createSparkPlan(spark, \"SELECT id, id * 2 as doubled FROM test_data WHERE id % 2 == 0\")\n\n      // Count original Spark operators\n      assert(countOperators(sparkPlan, classOf[ProjectExec]) == 1)\n      
assert(countOperators(sparkPlan, classOf[FilterExec]) == 1)\n\n      for (cometEnabled <- Seq(true, false)) {\n        withSQLConf(\n          CometConf.COMET_ENABLED.key -> cometEnabled.toString,\n          CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\") {\n\n          val transformedPlan = applyCometExecRule(sparkPlan)\n\n          if (cometEnabled) {\n            assert(countOperators(transformedPlan, classOf[ProjectExec]) == 0)\n            assert(countOperators(transformedPlan, classOf[FilterExec]) == 0)\n            assert(countOperators(transformedPlan, classOf[CometProjectExec]) == 1)\n            assert(countOperators(transformedPlan, classOf[CometFilterExec]) == 1)\n          } else {\n            assert(countOperators(transformedPlan, classOf[ProjectExec]) == 1)\n            assert(countOperators(transformedPlan, classOf[FilterExec]) == 1)\n            assert(countOperators(transformedPlan, classOf[CometProjectExec]) == 0)\n            assert(countOperators(transformedPlan, classOf[CometFilterExec]) == 0)\n          }\n        }\n      }\n    }\n  }\n\n  test(\"CometExecRule should apply hash aggregate transformations\") {\n    withTempView(\"test_data\") {\n      createTestDataFrame.createOrReplaceTempView(\"test_data\")\n\n      val sparkPlan =\n        createSparkPlan(spark, \"SELECT COUNT(*), SUM(id) FROM test_data GROUP BY (id % 3)\")\n\n      // Count original Spark operators\n      val originalHashAggCount = countOperators(sparkPlan, classOf[HashAggregateExec])\n      assert(originalHashAggCount == 2)\n\n      withSQLConf(CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\") {\n        val transformedPlan = applyCometExecRule(sparkPlan)\n\n        assert(countOperators(transformedPlan, classOf[HashAggregateExec]) == 0)\n        assert(\n          countOperators(\n            transformedPlan,\n            classOf[CometHashAggregateExec]) == originalHashAggCount)\n      }\n    }\n  }\n\n  // TODO this test exposes the bug described in\n  // https://github.com/apache/datafusion-comet/issues/1389\n  ignore(\"CometExecRule should not allow Comet partial and Spark final hash aggregate\") {\n    withTempView(\"test_data\") {\n      createTestDataFrame.createOrReplaceTempView(\"test_data\")\n\n      val sparkPlan =\n        createSparkPlan(spark, \"SELECT COUNT(*), SUM(id) FROM test_data GROUP BY (id % 3)\")\n\n      // Count original Spark operators\n      val originalHashAggCount = countOperators(sparkPlan, classOf[HashAggregateExec])\n      assert(originalHashAggCount == 2)\n\n      withSQLConf(\n        CometConf.COMET_ENABLE_FINAL_HASH_AGGREGATE.key -> \"false\",\n        CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\") {\n        val transformedPlan = applyCometExecRule(sparkPlan)\n\n        // if the final aggregate cannot be converted to Comet, then neither should be\n        assert(\n          countOperators(transformedPlan, classOf[HashAggregateExec]) == originalHashAggCount)\n        assert(countOperators(transformedPlan, classOf[CometHashAggregateExec]) == 0)\n      }\n    }\n  }\n\n  test(\"CometExecRule should not allow Spark partial and Comet final hash aggregate\") {\n    withTempView(\"test_data\") {\n      createTestDataFrame.createOrReplaceTempView(\"test_data\")\n\n      val sparkPlan =\n        createSparkPlan(spark, \"SELECT COUNT(*), SUM(id) FROM test_data GROUP BY (id % 3)\")\n\n      // Count original Spark operators\n      val originalHashAggCount = countOperators(sparkPlan, classOf[HashAggregateExec])\n      
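// two aggregates are expected because Spark plans a partial and a final HashAggregateExec\n      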
assert(originalHashAggCount == 2)\n\n      withSQLConf(\n        CometConf.COMET_ENABLE_PARTIAL_HASH_AGGREGATE.key -> \"false\",\n        CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\") {\n        val transformedPlan = applyCometExecRule(sparkPlan)\n\n        // if the partial aggregate cannot be converted to Comet, then neither should be\n        assert(\n          countOperators(transformedPlan, classOf[HashAggregateExec]) == originalHashAggCount)\n        assert(countOperators(transformedPlan, classOf[CometHashAggregateExec]) == 0)\n      }\n    }\n  }\n\n  test(\"CometExecRule should apply broadcast exchange transformations\") {\n    withTempView(\"test_data\") {\n      createTestDataFrame.createOrReplaceTempView(\"test_data\")\n\n      val sparkPlan = createSparkPlan(\n        spark,\n        \"SELECT /*+ BROADCAST(b) */ a.id, b.name FROM test_data a JOIN test_data b ON a.id = b.id\")\n\n      // Count original Spark operators\n      val originalBroadcastExchangeCount =\n        countOperators(sparkPlan, classOf[BroadcastExchangeExec])\n      assert(originalBroadcastExchangeCount == 1)\n\n      withSQLConf(CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\") {\n        val transformedPlan = applyCometExecRule(sparkPlan)\n\n        assert(countOperators(transformedPlan, classOf[BroadcastExchangeExec]) == 0)\n        assert(\n          countOperators(\n            transformedPlan,\n            classOf[CometBroadcastExchangeExec]) == originalBroadcastExchangeCount)\n      }\n    }\n  }\n\n  test(\"CometExecRule should apply shuffle exchange transformations\") {\n    withTempView(\"test_data\") {\n      createTestDataFrame.createOrReplaceTempView(\"test_data\")\n\n      val sparkPlan =\n        createSparkPlan(spark, \"SELECT id, COUNT(*) FROM test_data GROUP BY id ORDER BY id\")\n\n      // Count original Spark operators\n      val originalShuffleExchangeCount = countOperators(sparkPlan, classOf[ShuffleExchangeExec])\n      assert(originalShuffleExchangeCount == 2)\n\n      withSQLConf(CometConf.COMET_EXEC_LOCAL_TABLE_SCAN_ENABLED.key -> \"true\") {\n        val transformedPlan = applyCometExecRule(sparkPlan)\n\n        assert(countOperators(transformedPlan, classOf[ShuffleExchangeExec]) == 0)\n        assert(\n          countOperators(\n            transformedPlan,\n            classOf[CometShuffleExchangeExec]) == originalShuffleExchangeCount)\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/comet/rules/CometScanRuleSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.rules\n\nimport scala.util.Random\n\nimport org.apache.spark.sql._\nimport org.apache.spark.sql.comet._\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.adaptive.QueryStageExec\nimport org.apache.spark.sql.types.{DataTypes, StructField, StructType}\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator}\n\n/**\n * Test suite specifically for CometScanRule transformation logic.\n */\nclass CometScanRuleSuite extends CometTestBase {\n\n  /** Helper method to apply CometExecRule and return the transformed plan */\n  private def applyCometScanRule(plan: SparkPlan): SparkPlan = {\n    CometScanRule(spark).apply(stripAQEPlan(plan))\n  }\n\n  /** Create a test data frame that is used in all tests */\n  private def createTestDataFrame = {\n    val testSchema = new StructType(\n      Array(\n        StructField(\"id\", DataTypes.IntegerType, nullable = true),\n        StructField(\"name\", DataTypes.StringType, nullable = true)))\n    FuzzDataGenerator.generateDataFrame(new Random(42), spark, testSchema, 100, DataGenOptions())\n  }\n\n  /** Create a SparkPlan from the specified SQL with Comet disabled */\n  private def createSparkPlan(spark: SparkSession, sql: String): SparkPlan = {\n    var sparkPlan: SparkPlan = null\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n      val df = spark.sql(sql)\n      sparkPlan = df.queryExecution.executedPlan\n    }\n    sparkPlan\n  }\n\n  /** Count the number of the specified operator in the plan */\n  private def countOperators(plan: SparkPlan, opClass: Class[_]): Int = {\n    stripAQEPlan(plan).collect {\n      case stage: QueryStageExec =>\n        countOperators(stage.plan, opClass)\n      case op if op.getClass.isAssignableFrom(opClass) => 1\n    }.sum\n  }\n\n  test(\"CometExecRule should replace FileSourceScanExec, but only when Comet is enabled\") {\n    withTempPath { path =>\n      createTestDataFrame.write.parquet(path.toString)\n      withTempView(\"test_data\") {\n        spark.read.parquet(path.toString).createOrReplaceTempView(\"test_data\")\n\n        val sparkPlan =\n          createSparkPlan(spark, \"SELECT id, id * 2 as doubled FROM test_data WHERE id % 2 == 0\")\n\n        // Count original Spark operators\n        assert(countOperators(sparkPlan, classOf[FileSourceScanExec]) == 1)\n\n        for (cometEnabled <- Seq(true, false)) {\n          withSQLConf(CometConf.COMET_ENABLED.key -> cometEnabled.toString) {\n\n            val transformedPlan = applyCometScanRule(sparkPlan)\n\n            if (cometEnabled) {\n              assert(countOperators(transformedPlan, classOf[FileSourceScanExec]) 
== 0)\n              assert(countOperators(transformedPlan, classOf[CometScanExec]) == 1)\n            } else {\n              assert(countOperators(transformedPlan, classOf[FileSourceScanExec]) == 1)\n              assert(countOperators(transformedPlan, classOf[CometScanExec]) == 0)\n            }\n          }\n        }\n      }\n    }\n  }\n\n  test(\"CometScanRule should fallback to Spark for ShortType when safety check enabled\") {\n    withTempPath { path =>\n      // Create test data with ShortType which may be from unsigned UINT_8\n      import org.apache.spark.sql.types._\n      val unsupportedSchema = new StructType(\n        Array(\n          StructField(\"id\", DataTypes.IntegerType, nullable = true),\n          StructField(\n            \"value\",\n            DataTypes.ShortType,\n            nullable = true\n          ), // May be from unsigned UINT_8\n          StructField(\"name\", DataTypes.StringType, nullable = true)))\n\n      val testData = Seq(Row(1, 1.toShort, \"test1\"), Row(2, -1.toShort, \"test2\"))\n\n      val df = spark.createDataFrame(spark.sparkContext.parallelize(testData), unsupportedSchema)\n      df.write.parquet(path.toString)\n\n      withTempView(\"unsupported_data\") {\n        spark.read.parquet(path.toString).createOrReplaceTempView(\"unsupported_data\")\n\n        val sparkPlan =\n          createSparkPlan(spark, \"SELECT id, value FROM unsupported_data WHERE id = 1\")\n\n        withSQLConf(\n          CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_ICEBERG_COMPAT,\n          CometConf.COMET_PARQUET_UNSIGNED_SMALL_INT_CHECK.key -> \"true\") {\n          val transformedPlan = applyCometScanRule(sparkPlan)\n\n          // Should fallback to Spark due to ShortType (may be from unsigned UINT_8)\n          assert(countOperators(transformedPlan, classOf[FileSourceScanExec]) == 1)\n          assert(countOperators(transformedPlan, classOf[CometScanExec]) == 0)\n        }\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/CometPluginsSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark\n\nimport java.io.File\n\nimport org.apache.spark.sql.{CometTestBase, SaveMode}\nimport org.apache.spark.sql.internal.StaticSQLConf\n\nclass CometPluginsSuite extends CometTestBase {\n  override protected def sparkConf: SparkConf = {\n    val conf = new SparkConf()\n    conf.set(\"spark.driver.memory\", \"1G\")\n    conf.set(\"spark.executor.memory\", \"1G\")\n    conf.set(\"spark.executor.memoryOverhead\", \"2G\")\n    conf.set(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n    conf.set(\"spark.comet.enabled\", \"true\")\n    conf.set(\"spark.comet.exec.enabled\", \"true\")\n    conf.set(\"spark.comet.exec.onHeap.enabled\", \"true\")\n    conf.set(\"spark.comet.metrics.enabled\", \"true\")\n    conf\n  }\n\n  test(\"Register Comet extension\") {\n    // test common case where no extensions are previously registered\n    {\n      val conf = new SparkConf()\n      CometDriverPlugin.registerCometSessionExtension(conf)\n      assert(\n        \"org.apache.comet.CometSparkSessionExtensions\" == conf.get(\n          StaticSQLConf.SPARK_SESSION_EXTENSIONS.key))\n    }\n    // test case where Comet is already registered\n    {\n      val conf = new SparkConf()\n      conf.set(\n        StaticSQLConf.SPARK_SESSION_EXTENSIONS.key,\n        \"org.apache.comet.CometSparkSessionExtensions\")\n      CometDriverPlugin.registerCometSessionExtension(conf)\n      assert(\n        \"org.apache.comet.CometSparkSessionExtensions\" == conf.get(\n          StaticSQLConf.SPARK_SESSION_EXTENSIONS.key))\n    }\n    // test case where other extensions are already registered\n    {\n      val conf = new SparkConf()\n      conf.set(StaticSQLConf.SPARK_SESSION_EXTENSIONS.key, \"foo,bar\")\n      CometDriverPlugin.registerCometSessionExtension(conf)\n      assert(\n        \"foo,bar,org.apache.comet.CometSparkSessionExtensions\" == conf.get(\n          StaticSQLConf.SPARK_SESSION_EXTENSIONS.key))\n    }\n    // test case where other extensions, including Comet, are already registered\n    {\n      val conf = new SparkConf()\n      conf.set(\n        StaticSQLConf.SPARK_SESSION_EXTENSIONS.key,\n        \"foo,bar,org.apache.comet.CometSparkSessionExtensions\")\n      CometDriverPlugin.registerCometSessionExtension(conf)\n      assert(\n        \"foo,bar,org.apache.comet.CometSparkSessionExtensions\" == conf.get(\n          StaticSQLConf.SPARK_SESSION_EXTENSIONS.key))\n    }\n  }\n\n  test(\"CometSource metrics are recorded\") {\n    val nativeBefore = CometSource.NATIVE_OPERATORS.getCount\n    val queriesBefore = CometSource.QUERIES_PLANNED.getCount\n\n    withTempPath { dir =>\n      val path = new File(dir, \"test.parquet\").toString\n      
spark.range(1000).toDF(\"id\").write.mode(SaveMode.Overwrite).parquet(path)\n      spark.read.parquet(path).filter(\"id > 500\").collect()\n    }\n    spark.sparkContext.listenerBus.waitUntilEmpty()\n    assert(\n      CometSource.QUERIES_PLANNED.getCount > queriesBefore,\n      \"queries.planned should increment after query\")\n    assert(\n      CometSource.NATIVE_OPERATORS.getCount > nativeBefore,\n      \"operators.native should increment for native execution\")\n  }\n\n  test(\"metrics not double counted with AQE\") {\n    withSQLConf(\"spark.sql.adaptive.enabled\" -> \"true\") {\n      withTempPath { dir =>\n        val path = new File(dir, \"test.parquet\").toString\n        spark.range(10000).toDF(\"id\").write.mode(SaveMode.Overwrite).parquet(path)\n\n        spark.sparkContext.listenerBus.waitUntilEmpty()\n        val queriesBefore = CometSource.QUERIES_PLANNED.getCount\n        spark.read.parquet(path).filter(\"id > 100\").collect()\n        spark.read.parquet(path).filter(\"id > 200\").collect()\n        spark.sparkContext.listenerBus.waitUntilEmpty()\n        val queriesAfter = CometSource.QUERIES_PLANNED.getCount\n        assert(\n          queriesAfter == queriesBefore + 2,\n          s\"Expected 2 queries, got ${queriesAfter - queriesBefore}\")\n      }\n    }\n  }\n\n  test(\"Default Comet memory overhead\") {\n    val execMemOverhead1 = spark.conf.get(\"spark.executor.memoryOverhead\")\n    val execMemOverhead2 = spark.sessionState.conf.getConfString(\"spark.executor.memoryOverhead\")\n    val execMemOverhead3 = spark.sparkContext.getConf.get(\"spark.executor.memoryOverhead\")\n    val execMemOverhead4 = spark.sparkContext.conf.get(\"spark.executor.memoryOverhead\")\n\n    // 2GB + 384MB (default Comet memory overhead)\n    assert(execMemOverhead1 == \"3072M\")\n    assert(execMemOverhead2 == \"3072M\")\n    assert(execMemOverhead3 == \"3072M\")\n    assert(execMemOverhead4 == \"3072M\")\n  }\n}\n\nclass CometPluginsDefaultSuite extends CometTestBase {\n  override protected def sparkConf: SparkConf = {\n    val conf = new SparkConf()\n    conf.set(\"spark.driver.memory\", \"1G\")\n    conf.set(\"spark.executor.memory\", \"1G\")\n    conf.set(\"spark.executor.memoryOverheadFactor\", \"0.5\")\n    conf.set(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n    conf.set(\"spark.comet.enabled\", \"true\")\n    conf.set(\"spark.comet.exec.shuffle.enabled\", \"true\")\n    conf.set(\"spark.comet.exec.onHeap.enabled\", \"true\")\n    conf\n  }\n\n  test(\"Default executor memory overhead + Comet memory overhead\") {\n    val execMemOverhead1 = spark.conf.get(\"spark.executor.memoryOverhead\")\n    val execMemOverhead2 = spark.sessionState.conf.getConfString(\"spark.executor.memoryOverhead\")\n    val execMemOverhead3 = spark.sparkContext.getConf.get(\"spark.executor.memoryOverhead\")\n    val execMemOverhead4 = spark.sparkContext.conf.get(\"spark.executor.memoryOverhead\")\n\n    // Spark executor memory overhead = executor memory (1G) * memoryOverheadFactor (0.5) = 512MB\n    // 512MB + 384MB (default Comet memory overhead)\n    assert(execMemOverhead1 == \"1536M\")\n    assert(execMemOverhead2 == \"1536M\")\n    assert(execMemOverhead3 == \"1536M\")\n    assert(execMemOverhead4 == \"1536M\")\n  }\n}\n\nclass CometPluginsNonOverrideSuite extends CometTestBase {\n  override protected def sparkConf: SparkConf = {\n    val conf = new SparkConf()\n    conf.set(\"spark.driver.memory\", \"1G\")\n    conf.set(\"spark.executor.memory\", \"1G\")\n    
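// an explicit overhead is configured here; the Comet plugin must not override it\n    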
conf.set(\"spark.executor.memoryOverhead\", \"2G\")\n    conf.set(\"spark.executor.memoryOverheadFactor\", \"0.5\")\n    conf.set(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n    conf.set(\"spark.comet.enabled\", \"true\")\n    conf.set(\"spark.comet.exec.shuffle.enabled\", \"false\")\n    conf.set(\"spark.comet.exec.enabled\", \"false\")\n    conf.set(\"spark.comet.exec.onHeap.enabled\", \"true\")\n    conf\n  }\n\n  test(\"executor memory overhead is not overridden\") {\n    val execMemOverhead1 = spark.conf.get(\"spark.executor.memoryOverhead\")\n    val execMemOverhead2 = spark.sessionState.conf.getConfString(\"spark.executor.memoryOverhead\")\n    val execMemOverhead3 = spark.sparkContext.getConf.get(\"spark.executor.memoryOverhead\")\n    val execMemOverhead4 = spark.sparkContext.conf.get(\"spark.executor.memoryOverhead\")\n\n    assert(execMemOverhead1 == \"2G\")\n    assert(execMemOverhead2 == \"2G\")\n    assert(execMemOverhead3 == \"2G\")\n    assert(execMemOverhead4 == \"2G\")\n  }\n}\n\nclass CometPluginsUnifiedModeOverrideSuite extends CometTestBase {\n  override protected def sparkConf: SparkConf = {\n    val conf = new SparkConf()\n    conf.set(\"spark.driver.memory\", \"1G\")\n    conf.set(\"spark.executor.memory\", \"1G\")\n    conf.set(\"spark.executor.memoryOverhead\", \"1G\")\n    conf.set(\"spark.plugins\", \"org.apache.spark.CometPlugin\")\n    conf.set(\"spark.comet.enabled\", \"true\")\n    conf.set(\"spark.memory.offHeap.enabled\", \"true\")\n    conf.set(\"spark.memory.offHeap.size\", \"2G\")\n    conf.set(\"spark.comet.exec.shuffle.enabled\", \"true\")\n    conf.set(\"spark.comet.exec.enabled\", \"true\")\n    conf.set(\"spark.comet.memory.overhead.factor\", \"0.5\")\n    conf\n  }\n\n  /*\n   * When unified memory is used, the executor memory overhead should not be overridden.\n   */\n  test(\"executor memory overhead is not overridden\") {\n    val execMemOverhead1 = spark.conf.get(\"spark.executor.memoryOverhead\")\n    val execMemOverhead2 = spark.sessionState.conf.getConfString(\"spark.executor.memoryOverhead\")\n    val execMemOverhead3 = spark.sparkContext.getConf.get(\"spark.executor.memoryOverhead\")\n    val execMemOverhead4 = spark.sparkContext.conf.get(\"spark.executor.memoryOverhead\")\n\n    // In unified memory mode, the Comet memory overhead is\n    // spark.memory.offHeap.size (2G) * spark.comet.memory.overhead.factor (0.5) = 1G,\n    // and the explicitly configured overhead is not overridden\n    assert(execMemOverhead1 == \"1G\")\n    assert(execMemOverhead2 == \"1G\")\n    assert(execMemOverhead3 == \"1G\")\n    assert(execMemOverhead4 == \"1G\")\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/shuffle/sort/SpillSorterSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.shuffle.sort\n\nimport java.util.concurrent.atomic.AtomicInteger\n\nimport org.scalatest.BeforeAndAfterEach\nimport org.scalatest.funsuite.AnyFunSuite\n\nimport org.apache.spark.{SparkConf, TaskContext}\nimport org.apache.spark.executor.ShuffleWriteMetrics\nimport org.apache.spark.memory.{TaskMemoryManager, TestMemoryManager}\nimport org.apache.spark.shuffle.comet.CometShuffleMemoryAllocator\nimport org.apache.spark.sql.types._\nimport org.apache.spark.unsafe.Platform\nimport org.apache.spark.unsafe.UnsafeAlignedOffset\n\n/**\n * Unit tests for [[SpillSorter]].\n *\n * These tests verify SpillSorter behavior using Spark's test memory management infrastructure,\n * without needing a full SparkContext.\n */\nclass SpillSorterSuite extends AnyFunSuite with BeforeAndAfterEach {\n\n  private val INITIAL_SIZE = 1024\n  private val UAO_SIZE = UnsafeAlignedOffset.getUaoSize\n  private val PAGE_SIZE = 4 * 1024 * 1024 // 4MB\n\n  private var conf: SparkConf = _\n  private var memoryManager: TestMemoryManager = _\n  private var taskMemoryManager: TaskMemoryManager = _\n\n  override def beforeEach(): Unit = {\n    super.beforeEach()\n    conf = new SparkConf()\n      .set(\"spark.memory.offHeap.enabled\", \"false\")\n    memoryManager = new TestMemoryManager(conf)\n    memoryManager.limit(100 * 1024 * 1024) // 100MB\n    taskMemoryManager = new TaskMemoryManager(memoryManager, 0)\n  }\n\n  override def afterEach(): Unit = {\n    if (taskMemoryManager != null) {\n      taskMemoryManager.cleanUpAllAllocatedMemory()\n      taskMemoryManager = null\n    }\n    memoryManager = null\n    super.afterEach()\n  }\n\n  private def createTestSchema(): StructType = {\n    new StructType().add(\"id\", IntegerType)\n  }\n\n  private def createSpillSorter(\n      spillCallback: SpillSorter.SpillCallback = () => {},\n      spills: java.util.LinkedList[org.apache.spark.sql.comet.execution.shuffle.SpillInfo] =\n        new java.util.LinkedList[org.apache.spark.sql.comet.execution.shuffle.SpillInfo](),\n      partitionChecksums: Array[Long] = new Array[Long](10)): SpillSorter = {\n    val allocator = CometShuffleMemoryAllocator.getInstance(conf, taskMemoryManager, PAGE_SIZE)\n    val schema = createTestSchema()\n    val writeMetrics = new ShuffleWriteMetrics()\n    val taskContext = TaskContext.empty()\n\n    new SpillSorter(\n      allocator,\n      INITIAL_SIZE,\n      schema,\n      UAO_SIZE,\n      0.5, // preferDictionaryRatio\n      \"zstd\", // compressionCodec\n      1, // compressionLevel\n      \"adler32\", // checksumAlgorithm\n      partitionChecksums,\n      writeMetrics,\n      taskContext,\n      spills,\n      spillCallback)\n  }\n\n  
test(\"initial state\") {\n    val sorter = createSpillSorter()\n    try {\n      assert(sorter.numRecords() === 0)\n      assert(sorter.hasSpaceForAnotherRecord())\n      assert(sorter.getMemoryUsage() > 0)\n    } finally {\n      sorter.freeMemory()\n      sorter.freeArray()\n    }\n  }\n\n  test(\"insert single record\") {\n    val sorter = createSpillSorter()\n    try {\n      val recordData = Array[Byte](1, 2, 3, 4)\n      val partitionId = 0\n\n      sorter.initialCurrentPage(recordData.length + UAO_SIZE)\n      sorter.insertRecord(recordData, Platform.BYTE_ARRAY_OFFSET, recordData.length, partitionId)\n\n      assert(sorter.numRecords() === 1)\n    } finally {\n      sorter.freeMemory()\n      sorter.freeArray()\n    }\n  }\n\n  test(\"insert multiple records\") {\n    val sorter = createSpillSorter()\n    try {\n      val recordData = Array[Byte](1, 2, 3, 4)\n      val numRecords = 100\n\n      sorter.initialCurrentPage(numRecords * (recordData.length + UAO_SIZE))\n\n      for (i <- 0 until numRecords) {\n        val partitionId = i % 10\n        sorter.insertRecord(\n          recordData,\n          Platform.BYTE_ARRAY_OFFSET,\n          recordData.length,\n          partitionId)\n      }\n\n      assert(sorter.numRecords() === numRecords)\n    } finally {\n      sorter.freeMemory()\n      sorter.freeArray()\n    }\n  }\n\n  test(\"reset after free\") {\n    val sorter = createSpillSorter()\n    try {\n      val recordData = Array[Byte](1, 2, 3, 4)\n      sorter.initialCurrentPage(recordData.length + UAO_SIZE)\n      sorter.insertRecord(recordData, Platform.BYTE_ARRAY_OFFSET, recordData.length, 0)\n\n      assert(sorter.numRecords() === 1)\n\n      sorter.freeMemory()\n      sorter.reset()\n\n      assert(sorter.numRecords() === 0)\n      assert(sorter.hasSpaceForAnotherRecord())\n    } finally {\n      sorter.freeMemory()\n      sorter.freeArray()\n    }\n  }\n\n  test(\"free memory returns correct value\") {\n    val sorter = createSpillSorter()\n    try {\n      sorter.initialCurrentPage(1024)\n      val memoryBefore = sorter.getMemoryUsage()\n      assert(memoryBefore > 0)\n\n      val freed = sorter.freeMemory()\n      assert(freed > 0)\n    } finally {\n      sorter.freeArray()\n    }\n  }\n\n  test(\"spill callback not triggered during normal operations\") {\n    val spillCount = new AtomicInteger(0)\n    val callback: SpillSorter.SpillCallback = () => spillCount.incrementAndGet()\n\n    val sorter = createSpillSorter(spillCallback = callback)\n    try {\n      sorter.initialCurrentPage(1024)\n      val recordData = Array[Byte](1, 2, 3, 4)\n      sorter.insertRecord(recordData, Platform.BYTE_ARRAY_OFFSET, recordData.length, 0)\n\n      assert(spillCount.get() === 0, \"Spill callback should not be triggered during normal ops\")\n    } finally {\n      sorter.freeMemory()\n      sorter.freeArray()\n    }\n  }\n\n  test(\"getMemoryUsage is thread-safe\") {\n    val sorter = createSpillSorter()\n    try {\n      sorter.initialCurrentPage(1024)\n\n      val threads = (0 until 10).map { _ =>\n        new Thread(() => {\n          for (_ <- 0 until 100) {\n            sorter.getMemoryUsage()\n          }\n        })\n      }\n\n      threads.foreach(_.start())\n      threads.foreach(_.join())\n      // Test passes if no exceptions thrown\n    } finally {\n      sorter.freeMemory()\n      sorter.freeArray()\n    }\n  }\n\n  test(\"expand pointer array\") {\n    val sorter = createSpillSorter()\n    try {\n      val initialMemory = sorter.getMemoryUsage()\n      val allocator = 
CometShuffleMemoryAllocator.getInstance(conf, taskMemoryManager, PAGE_SIZE)\n      val newArray = allocator.allocateArray(INITIAL_SIZE * 2)\n      sorter.expandPointerArray(newArray)\n\n      assert(sorter.getMemoryUsage() >= initialMemory)\n    } finally {\n      sorter.freeMemory()\n      sorter.freeArray()\n    }\n  }\n\n  test(\"records distributed across partitions\") {\n    val sorter = createSpillSorter()\n    try {\n      val recordData = Array[Byte](1, 2, 3, 4)\n      val numPartitions = 5\n      val recordsPerPartition = 20\n\n      sorter.initialCurrentPage(\n        numPartitions * recordsPerPartition * (recordData.length + UAO_SIZE))\n\n      for (p <- 0 until numPartitions) {\n        for (_ <- 0 until recordsPerPartition) {\n          sorter.insertRecord(recordData, Platform.BYTE_ARRAY_OFFSET, recordData.length, p)\n        }\n      }\n\n      assert(sorter.numRecords() === numPartitions * recordsPerPartition)\n    } finally {\n      sorter.freeMemory()\n      sorter.freeArray()\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/CometSQLQueryTestHelper.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport scala.util.control.NonFatal\n\nimport org.apache.spark.{SparkException, SparkThrowable}\nimport org.apache.spark.sql.catalyst.planning.PhysicalOperation\nimport org.apache.spark.sql.catalyst.plans.logical._\nimport org.apache.spark.sql.execution.HiveResult.hiveResultString\nimport org.apache.spark.sql.execution.SQLExecution\nimport org.apache.spark.sql.execution.command.{DescribeColumnCommand, DescribeCommandBase}\nimport org.apache.spark.sql.types.StructType\n\ntrait CometSQLQueryTestHelper {\n\n  private val notIncludedMsg = \"[not included in comparison]\"\n  private val clsName = this.getClass.getCanonicalName\n  protected val emptySchema: String = StructType(Seq.empty).catalogString\n\n  protected def replaceNotIncludedMsg(line: String): String = {\n    line\n      .replaceAll(\"#\\\\d+\", \"#x\")\n      .replaceAll(\"plan_id=\\\\d+\", \"plan_id=x\")\n      .replaceAll(s\"Location.*$clsName/\", s\"Location $notIncludedMsg/{warehouse_dir}/\")\n      .replaceAll(s\"file:[^\\\\s,]*$clsName\", s\"file:$notIncludedMsg/{warehouse_dir}\")\n      .replaceAll(\"Created By.*\", s\"Created By $notIncludedMsg\")\n      .replaceAll(\"Created Time.*\", s\"Created Time $notIncludedMsg\")\n      .replaceAll(\"Last Access.*\", s\"Last Access $notIncludedMsg\")\n      .replaceAll(\"Partition Statistics\\t\\\\d+\", s\"Partition Statistics\\t$notIncludedMsg\")\n      .replaceAll(\"\\\\*\\\\(\\\\d+\\\\) \", \"*\") // remove the WholeStageCodegen codegenStageIds\n  }\n\n  /** Executes a query and returns the result as (schema of the output, normalized output). 
*/\n  protected def getNormalizedResult(session: SparkSession, sql: String): (String, Seq[String]) = {\n    // Returns true if the plan is supposed to be sorted.\n    def isSorted(plan: LogicalPlan): Boolean = plan match {\n      case _: Join | _: Aggregate | _: Generate | _: Sample | _: Distinct => false\n      case _: DescribeCommandBase | _: DescribeColumnCommand | _: DescribeRelation |\n          _: DescribeColumn =>\n        true\n      case PhysicalOperation(_, _, s: Sort) if s.global => true\n\n      case _ => plan.children.iterator.exists(isSorted)\n    }\n\n    val df = session.sql(sql)\n    val schema = df.schema.catalogString\n    // Get answer, but also get rid of the #1234 expression ids that show up in explain plans\n    val answer = SQLExecution.withNewExecutionId(df.queryExecution, Some(sql)) {\n      hiveResultString(df.queryExecution.executedPlan).map(replaceNotIncludedMsg)\n    }\n\n    // If the output is not pre-sorted, sort it.\n    if (isSorted(df.queryExecution.analyzed)) (schema, answer) else (schema, answer.sorted)\n  }\n\n  /**\n   * This method handles exceptions that occur during query execution, as they may need special\n   * care to become comparable to the expected output.\n   *\n   * @param result\n   *   a function that returns a pair of schema and output\n   */\n  protected def handleExceptions(result: => (String, Seq[String])): (String, Seq[String]) = {\n    try {\n      result\n    } catch {\n      case e: SparkThrowable with Throwable if e.getErrorClass != null =>\n        (emptySchema, Seq(e.getClass.getName, e.getMessage))\n      case a: AnalysisException =>\n        // Do not output the logical plan tree, which contains expression IDs.\n        // Also implement a crude way of masking expression IDs in the error message\n        // with a generic pattern \"#x\".\n        val msg = a.getMessage\n        (emptySchema, Seq(a.getClass.getName, msg.replaceAll(\"#\\\\d+\", \"#x\")))\n      case s: SparkException if s.getCause != null =>\n        // For a runtime exception, it is hard to match because its message contains\n        // information about the stage, task ID, etc.\n        // To make result matching simpler, here we match the cause of the exception if it exists.\n        s.getCause match {\n          case e: SparkThrowable with Throwable if e.getErrorClass != null =>\n            (emptySchema, Seq(e.getClass.getName, e.getMessage))\n          case cause =>\n            (emptySchema, Seq(cause.getClass.getName, cause.getMessage))\n        }\n      case NonFatal(e) =>\n        // If there is an exception, put the exception class followed by the message.\n        (emptySchema, Seq(e.getClass.getName, e.getMessage))\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/CometTPCDSQueriesList.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.benchmark.CometTPCDSQueryBenchmark.{nameSuffixForQueriesV2_7, tpcdsQueries, tpcdsQueriesV2_7}\nimport org.apache.spark.sql.benchmark.TPCDSSchemaHelper\nimport org.apache.spark.sql.execution.benchmark.TPCDSQueryBenchmark.tables\nimport org.apache.spark.sql.execution.benchmark.TPCDSQueryBenchmarkArguments\n\n/**\n * Utility to list Comet execution enabling status for TPCDS queries.\n *\n * To run this benchmark:\n * {{{\n * // Build [tpcds-kit](https://github.com/databricks/tpcds-kit)\n * cd /tmp && git clone https://github.com/databricks/tpcds-kit.git\n * cd tpcds-kit/tools && make OS=MACOS\n *\n * // GenTPCDSData\n * cd $COMET_HOME && mkdir /tmp/tpcds\n * make benchmark-org.apache.spark.sql.GenTPCDSData -- --dsdgenDir /tmp/tpcds-kit/tools --location /tmp/tpcds --scaleFactor 1\n *\n * // CometTPCDSQueriesList\n * make benchmark-org.apache.spark.sql.CometTPCDSQueriesList -- --data-location /tmp/tpcds\n * }}}\n *\n * Results will be written to \"spark/inspections/CometTPCDSQueriesList-results.txt\".\n */\nobject CometTPCDSQueriesList extends CometTPCQueryListBase with CometTPCQueryBase with Logging {\n  override def runSuite(mainArgs: Array[String]): Unit = {\n    val benchmarkArgs = new TPCDSQueryBenchmarkArguments(mainArgs)\n\n    // If `--query-filter` defined, filters the queries that this option selects\n    val queriesV1_4ToRun = filterQueries(tpcdsQueries, benchmarkArgs.queryFilter)\n    val queriesV2_7ToRun = filterQueries(\n      tpcdsQueriesV2_7,\n      benchmarkArgs.queryFilter,\n      nameSuffix = nameSuffixForQueriesV2_7)\n\n    if ((queriesV1_4ToRun ++ queriesV2_7ToRun).isEmpty) {\n      throw new RuntimeException(\n        s\"Empty queries to run. Bad query name filter: ${benchmarkArgs.queryFilter}\")\n    }\n\n    setupTables(\n      benchmarkArgs.dataLocation,\n      createTempView = false,\n      tables,\n      TPCDSSchemaHelper.getTableColumns)\n\n    setupCBO(cometSpark, benchmarkArgs.cboEnabled, tables)\n\n    runQueries(\"tpcds\", queries = queriesV1_4ToRun)\n    runQueries(\"tpcds-v2.7.0\", queries = queriesV2_7ToRun, nameSuffix = nameSuffixForQueriesV2_7)\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/CometTPCDSQuerySuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.internal.config.{MEMORY_OFFHEAP_ENABLED, MEMORY_OFFHEAP_SIZE}\nimport org.apache.spark.sql.comet.shims.ShimCometTPCDSQuerySuite\n\nimport org.apache.comet.CometConf\n\nclass CometTPCDSQuerySuite\n    extends {\n      val tpcdsAllQueries: Seq[String] = Seq(\n        \"q1\",\n        \"q2\",\n        \"q3\",\n        \"q4\",\n        \"q5\",\n        \"q6\",\n        \"q7\",\n        \"q8\",\n        \"q9\",\n        \"q10\",\n        \"q11\",\n        \"q12\",\n        \"q13\",\n        \"q14a\",\n        \"q14b\",\n        \"q15\",\n        \"q16\",\n        \"q17\",\n        \"q18\",\n        \"q19\",\n        \"q20\",\n        \"q21\",\n        \"q22\",\n        \"q23a\",\n        \"q23b\",\n        \"q24a\",\n        \"q24b\",\n        \"q25\",\n        \"q26\",\n        \"q27\",\n        \"q28\",\n        \"q29\",\n        \"q30\",\n        \"q31\",\n        \"q32\",\n        \"q33\",\n        \"q34\",\n        \"q35\",\n        \"q36\",\n        \"q37\",\n        \"q38\",\n        // TODO: https://github.com/apache/datafusion-comet/issues/392\n        //  comment out 39a and 39b for now because the expected result for stddev failed:\n        //  expected: 1.5242630430075292, actual: 1.524263043007529.\n        //  Will change the comparison logic to detect floating-point numbers and compare\n        //  with epsilon\n        // \"q39a\",\n        // \"q39b\",\n        \"q40\",\n        \"q41\",\n        \"q42\",\n        \"q43\",\n        \"q44\",\n        \"q45\",\n        \"q46\",\n        \"q47\",\n        \"q48\",\n        \"q49\",\n        \"q50\",\n        \"q51\",\n        \"q52\",\n        \"q53\",\n        \"q54\",\n        \"q55\",\n        \"q56\",\n        \"q57\",\n        \"q58\",\n        \"q59\",\n        \"q60\",\n        \"q61\",\n        \"q62\",\n        \"q63\",\n        \"q64\",\n        \"q65\",\n        \"q66\",\n        \"q67\",\n        \"q68\",\n        \"q69\",\n        \"q70\",\n        \"q71\",\n        \"q72\",\n        \"q73\",\n        \"q74\",\n        \"q75\",\n        \"q76\",\n        \"q77\",\n        \"q78\",\n        \"q79\",\n        \"q80\",\n        \"q81\",\n        \"q82\",\n        \"q83\",\n        \"q84\",\n        \"q85\",\n        \"q86\",\n        \"q87\",\n        \"q88\",\n        \"q89\",\n        \"q90\",\n        \"q91\",\n        \"q92\",\n        \"q93\",\n        \"q94\",\n        \"q95\",\n        \"q96\",\n        \"q97\",\n        \"q98\",\n        \"q99\")\n\n      val tpcdsAllQueriesV2_7_0: Seq[String] = Seq(\n        \"q5a\",\n        \"q6\",\n        \"q10a\",\n        \"q11\",\n        \"q12\",\n   
     \"q14\",\n        \"q14a\",\n        \"q18a\",\n        \"q20\",\n        \"q22\",\n        \"q22a\",\n        \"q24\",\n        \"q27a\",\n        \"q34\",\n        \"q35\",\n        \"q35a\",\n        \"q36a\",\n        \"q47\",\n        \"q49\",\n        \"q51a\",\n        \"q57\",\n        \"q64\",\n        \"q67a\",\n        \"q70a\",\n        \"q72\",\n        \"q74\",\n        \"q75\",\n        \"q77a\",\n        \"q78\",\n        \"q80a\",\n        \"q86a\",\n        \"q98\")\n\n      override val tpcdsQueries: Seq[String] = tpcdsAllQueries\n\n      override val tpcdsQueriesV2_7_0: Seq[String] = tpcdsAllQueriesV2_7_0\n    }\n    with CometTPCDSQueryTestSuite\n    with ShimCometTPCDSQuerySuite {\n  override def sparkConf: SparkConf = {\n    val conf = super.sparkConf\n    conf.set(\"spark.sql.extensions\", \"org.apache.comet.CometSparkSessionExtensions\")\n    conf.set(\n      \"spark.shuffle.manager\",\n      \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n    conf.set(CometConf.COMET_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_EXEC_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_NATIVE_SCAN_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_ONHEAP_MEMORY_OVERHEAD.key, \"15g\")\n    conf.set(CometConf.COMET_EXPLAIN_TRANSFORMATIONS.key, \"true\")\n    conf.set(\"spark.sql.adaptive.coalescePartitions.enabled\", \"true\")\n    conf.set(MEMORY_OFFHEAP_ENABLED.key, \"true\")\n    conf.set(MEMORY_OFFHEAP_SIZE.key, \"15g\")\n    conf\n  }\n\n  override protected val baseResourcePath: String = {\n    getWorkspaceFilePath(\n      \"spark\",\n      \"src\",\n      \"test\",\n      \"resources\",\n      \"tpcds-query-results\").toFile.getAbsolutePath\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/CometTPCDSQueryTestSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport java.io.File\nimport java.nio.file.{Files, Paths}\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.{SparkConf, SparkContext}\nimport org.apache.spark.sql.catalyst.util.{resourceToString, stringToFile}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.test.TestSparkSession\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometSparkSessionExtensions.isSpark40Plus\n\n/**\n * Because we need to modify some methods of Spark `TPCDSQueryTestSuite` but they are private, we\n * copy Spark `TPCDSQueryTestSuite`.\n */\nclass CometTPCDSQueryTestSuite extends QueryTest with TPCDSBase with CometSQLQueryTestHelper {\n\n  val tpcdsExtendedQueries: Seq[String] = Seq(\"q72\")\n\n  private val tpcdsDataPath = sys.env.get(\"SPARK_TPCDS_DATA\")\n\n  private val regenGoldenFiles: Boolean = System.getenv(\"SPARK_GENERATE_GOLDEN_FILES\") == \"1\"\n\n  // To make output results deterministic\n  override protected def sparkConf: SparkConf = super.sparkConf\n    .set(SQLConf.SHUFFLE_PARTITIONS.key, \"1\")\n\n  protected override def createSparkSession: TestSparkSession = {\n    new TestSparkSession(new SparkContext(\"local[1]\", this.getClass.getSimpleName, sparkConf))\n  }\n\n  // We use SF=1 table data here, so we cannot use SF=100 stats\n  protected override val injectStats: Boolean = false\n\n  if (tpcdsDataPath.nonEmpty) {\n    val nonExistentTables = tableNames.filterNot { tableName =>\n      Files.exists(Paths.get(s\"${tpcdsDataPath.get}/$tableName\"))\n    }\n    if (nonExistentTables.nonEmpty) {\n      fail(\n        s\"Non-existent TPCDS table paths found in ${tpcdsDataPath.get}: \" +\n          nonExistentTables.mkString(\", \"))\n    }\n  }\n\n  protected val baseResourcePath: String = {\n    // use the same way as `SQLQueryTestSuite` to get the resource path\n    getWorkspaceFilePath(\n      \"spark\",\n      \"src\",\n      \"test\",\n      \"resources\",\n      \"tpcds-query-results\").toFile.getAbsolutePath\n  }\n\n  override def createTable(\n      spark: SparkSession,\n      tableName: String,\n      format: String = \"parquet\",\n      options: scala.Seq[String]): Unit = {\n    spark.sql(s\"\"\"\n         |CREATE TABLE `$tableName` (${tableColumns(tableName)})\n         |USING $format\n         |LOCATION '${tpcdsDataPath.get}/$tableName'\n         |${options.mkString(\"\\n\")}\n       \"\"\".stripMargin)\n  }\n\n  private def runQuery(query: String, goldenFile: File, conf: Map[String, String]): Unit = {\n    // This is `sortMergeJoinConf != conf` in Spark, i.e., it sorts results for other joins\n    // than sort merge join. 
But in some queries DataFusion sort returns correct results\n    // in terms of required sorting columns, but the results are not the same as Spark's in terms\n    // of the order of irrelevant columns. So, we need to sort the results for all joins.\n    val shouldSortResults = true\n    withSQLConf(conf.toSeq: _*) {\n      try {\n        val (schema, output) = handleExceptions(getNormalizedResult(spark, query))\n        val queryString = query.trim\n        val outputString = output.mkString(\"\\n\").replaceAll(\"\\\\s+$\", \"\")\n        if (regenGoldenFiles) {\n          val goldenOutput = {\n            s\"-- Automatically generated by ${getClass.getSimpleName}\\n\\n\" +\n              \"-- !query schema\\n\" +\n              schema + \"\\n\" +\n              \"-- !query output\\n\" +\n              outputString +\n              \"\\n\"\n          }\n          val parent = goldenFile.getParentFile\n          if (!parent.exists()) {\n            assert(parent.mkdirs(), \"Could not create directory: \" + parent)\n          }\n          stringToFile(goldenFile, goldenOutput)\n        }\n\n        // Read back the golden file.\n        val (expectedSchema, expectedOutput) = {\n          val goldenOutput = Files.readString(goldenFile.toPath)\n          val segments = goldenOutput.split(\"-- !query.*\\n\")\n\n          // three segments are expected: the generated header, the schema, and the output\n          assert(\n            segments.size == 3,\n            s\"Expected 3 blocks in result file but got ${segments.size}. \" +\n              \"Try regenerating the result files.\")\n\n          (segments(1).trim, segments(2).replaceAll(\"\\\\s+$\", \"\"))\n        }\n\n        val notMatchedSchemaOutput = if (schema == emptySchema) {\n          // There might be an exception. See `handleExceptions`.\n          s\"Schema did not match\\n$queryString\\nOutput/Exception: $outputString\"\n        } else {\n          s\"Schema did not match\\n$queryString\"\n        }\n\n        assertResult(expectedSchema, notMatchedSchemaOutput) {\n          schema\n        }\n        if (shouldSortResults) {\n          val expectSorted = expectedOutput\n            .split(\"\\n\")\n            .sorted\n            .map(_.trim)\n            .mkString(\"\\n\")\n            .replaceAll(\"\\\\s+$\", \"\")\n          val outputSorted = output.sorted.map(_.trim).mkString(\"\\n\").replaceAll(\"\\\\s+$\", \"\")\n          assertResult(expectSorted, s\"Result did not match\\n$queryString\") {\n            outputSorted\n          }\n        } else {\n          assertResult(expectedOutput, s\"Result did not match\\n$queryString\") {\n            outputString\n          }\n        }\n      } catch {\n        case e: Throwable =>\n          val configs = conf.map { case (k, v) =>\n            s\"$k=$v\"\n          }\n          throw new Exception(\n            s\"${e.getMessage}\\nError using configs:\\n${configs.mkString(\"\\n\")}\",\n            e)\n      }\n    }\n  }\n\n  val baseConf: Map[String, String] = Map(\n    CometConf.COMET_ENABLED.key -> \"true\",\n    CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"true\",\n    CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n    CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\")\n\n  val sortMergeJoinConf: Map[String, String] = Map(\n    SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n    SQLConf.PREFER_SORTMERGEJOIN.key -> \"true\")\n\n  val broadcastHashJoinConf: Map[String, String] =\n    Map(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"10485760\")\n\n  val shuffledHashJoinConf: Map[String, String] = Map(\n    
SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n    \"spark.sql.join.forceApplyShuffledHashJoin\" -> \"true\")\n\n  val allJoinConfCombinations: Seq[Map[String, String]] =\n    Seq(sortMergeJoinConf, broadcastHashJoinConf, shuffledHashJoinConf)\n\n  val joinConfs: Seq[Map[String, String]] = if (regenGoldenFiles) {\n    require(\n      !sys.env.contains(\"SPARK_TPCDS_JOIN_CONF\"),\n      \"'SPARK_TPCDS_JOIN_CONF' cannot be set together with 'SPARK_GENERATE_GOLDEN_FILES'\")\n    Seq(sortMergeJoinConf)\n  } else {\n    sys.env\n      .get(\"SPARK_TPCDS_JOIN_CONF\")\n      .map { s =>\n        val p = new java.util.Properties()\n        p.load(new java.io.StringReader(s))\n        Seq(p.asScala.toMap)\n      }\n      .getOrElse(allJoinConfCombinations)\n  }\n\n  assert(joinConfs.nonEmpty)\n  joinConfs.foreach(conf =>\n    require(\n      allJoinConfCombinations.contains(conf),\n      s\"Join configurations [$conf] should be one of $allJoinConfCombinations\"))\n\n  if (tpcdsDataPath.nonEmpty) {\n    tpcdsQueries.foreach { name =>\n      val queryString = resourceToString(\n        s\"tpcds/$name.sql\",\n        classLoader = Thread.currentThread().getContextClassLoader)\n      test(name) {\n        val goldenFile = new File(s\"$baseResourcePath/v1_4\", s\"$name.sql.out\")\n        joinConfs.foreach { conf =>\n          val sortMergeJoin = sortMergeJoinConf == conf\n          // Skip q72 for sort-merge join because it uses too many resources\n          // that can cause OOM in GitHub Actions\n          if (!(sortMergeJoin && name == \"q72\")) {\n            System.gc() // Workaround for GitHub Actions memory limitation, see also SPARK-37368\n            runQuery(queryString, goldenFile, baseConf ++ conf)\n          }\n        }\n      }\n    }\n\n    tpcdsQueriesV2_7_0.foreach { name =>\n      val queryString = resourceToString(\n        s\"tpcds-v2.7.0/$name.sql\",\n        classLoader = Thread.currentThread().getContextClassLoader)\n      test(s\"$name-v2.7\") {\n        val spark4File = new File(s\"$baseResourcePath/v2_7-spark4_0\", s\"$name.sql.out\")\n        val goldenFile =\n          if (isSpark40Plus && spark4File.exists()) spark4File\n          else new File(s\"$baseResourcePath/v2_7\", s\"$name.sql.out\")\n        joinConfs.foreach { conf =>\n          val sortMergeJoin = sortMergeJoinConf == conf\n          // Skip q72 for sort-merge join because it uses too many resources\n          // that can cause OOM in GitHub Actions\n          if (!(sortMergeJoin && name == \"q72\")) {\n            System.gc() // SPARK-37368\n            runQuery(queryString, goldenFile, baseConf ++ conf)\n          }\n        }\n      }\n    }\n\n    tpcdsExtendedQueries.foreach { name =>\n      val queryString = resourceToString(\n        s\"tpcds-extended/$name.sql\",\n        classLoader = Thread.currentThread().getContextClassLoader)\n      test(s\"extended $name\") {\n        val goldenFile = new File(s\"$baseResourcePath/extended\", s\"$name.sql.out\")\n        joinConfs.foreach { conf =>\n          System.gc() // SPARK-37368\n          runQuery(queryString, goldenFile, baseConf ++ conf)\n        }\n      }\n    }\n  } else {\n    ignore(\"skipped because env `SPARK_TPCDS_DATA` is not set\") {}\n  }\n}\n"
  },
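The golden-file round trip in `CometTPCDSQueryTestSuite` above writes a generated header, a `-- !query schema` segment, and a `-- !query output` segment, then splits the file back apart on the marker lines. A minimal standalone sketch of that parse step (the `GoldenFileSketch` object and its `parse` helper are illustrative, not part of the repo):

```scala
import java.nio.file.{Files, Paths}

// Illustrative helper: parse a golden file in the layout produced above, i.e.
//   -- Automatically generated by <suite>   (header)
//   -- !query schema
//   <schema>
//   -- !query output
//   <rows>
object GoldenFileSketch {
  def parse(path: String): (String, String) = {
    val text = Files.readString(Paths.get(path))
    // Splitting on the "-- !query ..." marker lines yields three segments:
    // segments(0) is the header, segments(1) the schema, segments(2) the output.
    val segments = text.split("-- !query.*\n")
    require(segments.length == 3, s"expected 3 segments, got ${segments.length}")
    (segments(1).trim, segments(2).replaceAll("\\s+$", ""))
  }
}
```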
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/CometTPCHQueriesList.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.execution.benchmark.TPCDSQueryBenchmarkArguments\n\n/**\n * Utility to list Comet execution enabling status for TPCH queries.\n *\n * To run this benchmark:\n * {{{\n * // Set scale factor in GB\n * scale_factor=1\n *\n * // GenTPCHData to create the data set at /tmp/tpch/sf1_parquet\n * cd $COMET_HOME\n * make benchmark-org.apache.spark.sql.GenTPCHData -- --location /tmp --scaleFactor ${scale_factor}\n *\n * // CometTPCHQueriesList\n * make benchmark-org.apache.spark.sql.CometTPCHQueriesList -- --data-location /tmp/tpch/sf${scale_factor}_parquet\n * }}}\n *\n * Results will be written to \"spark/inspections/CometTPCHQueriesList-results.txt\".\n */\nobject CometTPCHQueriesList extends CometTPCQueryListBase with CometTPCQueryBase with Logging {\n  val tables: Seq[String] =\n    Seq(\"customer\", \"lineitem\", \"nation\", \"orders\", \"part\", \"partsupp\", \"region\", \"supplier\")\n\n  override def runSuite(mainArgs: Array[String]): Unit = {\n    val benchmarkArgs = new TPCDSQueryBenchmarkArguments(mainArgs)\n\n    // List of all TPC-H queries\n    val tpchQueries = (1 to 22).map(n => s\"q$n\")\n    // Only q1 in the extended queries\n    val tpchExtendedQueries = Seq(\"q1\")\n\n    // If `--query-filter` defined, filters the queries that this option selects\n    val queries = filterQueries(tpchQueries, benchmarkArgs.queryFilter)\n    val extendedQueries = filterQueries(tpchExtendedQueries, benchmarkArgs.queryFilter)\n\n    if (queries.isEmpty) {\n      throw new RuntimeException(\n        s\"Empty queries to run. Bad query name filter: ${benchmarkArgs.queryFilter}\")\n    }\n\n    setupTables(benchmarkArgs.dataLocation, createTempView = !benchmarkArgs.cboEnabled, tables)\n\n    setupCBO(cometSpark, benchmarkArgs.cboEnabled, tables)\n\n    runQueries(\"tpch\", queries, \" TPCH Snappy\")\n    runQueries(\"tpch-extended\", extendedQueries, \" TPCH Extended Snappy\")\n  }\n}\n"
  },
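`runSuite` above builds the TPC-H query list programmatically and narrows it with `--query-filter`. A small self-contained sketch of that selection logic for the simple no-suffix case (the `QueryFilterSketch` object is illustrative only):

```scala
// Illustrative only: how runSuite derives the TPC-H query names and how a
// --query-filter selection narrows them down.
object QueryFilterSketch extends App {
  val tpchQueries: Seq[String] = (1 to 22).map(n => s"q$n") // q1 ... q22

  // Mirrors filterQueries from CometTPCQueryBase: keep a name only if the filter selects it.
  def filterQueries(orig: Seq[String], filter: Set[String]): Seq[String] =
    if (filter.nonEmpty) orig.filter(filter.contains) else orig

  assert(filterQueries(tpchQueries, Set.empty) == tpchQueries) // no filter: run everything
  assert(filterQueries(tpchQueries, Set("q1", "q5")) == Seq("q1", "q5"))
}
```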
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/CometTPCHQuerySuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport java.io.File\nimport java.nio.file.{Files, Paths}\n\nimport scala.jdk.CollectionConverters._\n\nimport org.apache.spark.{SparkConf, SparkContext}\nimport org.apache.spark.internal.config.{MEMORY_OFFHEAP_ENABLED, MEMORY_OFFHEAP_SIZE}\nimport org.apache.spark.sql.catalyst.TableIdentifier\nimport org.apache.spark.sql.catalyst.util.{resourceToString, stringToFile}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.test.TestSparkSession\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.shims.ShimCometTPCHQuerySuite\n\n/**\n * End-to-end tests to check TPCH query results.\n *\n * To run this test suite:\n * {{{\n *   SPARK_TPCH_DATA=<path of TPCH SF=1 data>\n *     ./mvnw -Dsuites=org.apache.spark.sql.CometTPCHQuerySuite test\n * }}}\n *\n * To re-generate golden files for this suite, run:\n * {{{\n *   SPARK_GENERATE_GOLDEN_FILES=1 SPARK_TPCH_DATA=<path of TPCH SF=1 data>\n *     ./mvnw -Dsuites=org.apache.spark.sql.CometTPCHQuerySuite test\n * }}}\n */\nclass CometTPCHQuerySuite extends QueryTest with TPCBase with ShimCometTPCHQuerySuite {\n\n  private val tpchDataPath = sys.env.get(\"SPARK_TPCH_DATA\")\n\n  val tpchQueries: Seq[String] = Seq(\n    \"q1\",\n    \"q2\",\n    \"q3\",\n    \"q4\",\n    \"q5\",\n    \"q6\",\n    \"q7\",\n    \"q8\",\n    \"q9\",\n    \"q10\",\n    \"q11\",\n    \"q12\",\n    \"q13\",\n    \"q14\",\n    \"q15\",\n    \"q16\",\n    \"q17\",\n    \"q18\",\n    \"q19\",\n    \"q20\",\n    \"q21\",\n    \"q22\")\n  val disabledTpchQueries: Seq[String] = Seq()\n\n  // To make output results deterministic\n  def testSparkConf: SparkConf = {\n    val conf = super.sparkConf\n    conf.set(SQLConf.SHUFFLE_PARTITIONS.key, \"1\")\n    conf.set(\"spark.sql.extensions\", \"org.apache.comet.CometSparkSessionExtensions\")\n    conf.set(\n      \"spark.shuffle.manager\",\n      \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n    conf.set(CometConf.COMET_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_EXEC_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_NATIVE_SCAN_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key, \"true\")\n    conf.set(MEMORY_OFFHEAP_ENABLED.key, \"true\")\n    conf.set(MEMORY_OFFHEAP_SIZE.key, \"2g\")\n  }\n\n  protected override def createSparkSession: TestSparkSession = {\n    new TestSparkSession(new SparkContext(\"local[1]\", this.getClass.getSimpleName, testSparkConf))\n  }\n\n  val tableNames: Seq[String] =\n    Seq(\"customer\", \"lineitem\", \"nation\", \"orders\", \"part\", \"partsupp\", \"region\", \"supplier\")\n\n  // We use SF=1 table data here, so we cannot use SF=100 stats\n  
protected override val injectStats: Boolean = false\n\n  if (tpchDataPath.nonEmpty) {\n    val nonExistentTables = tableNames.filterNot { tableName =>\n      Files.exists(Paths.get(s\"${tpchDataPath.get}/$tableName\"))\n    }\n    if (nonExistentTables.nonEmpty) {\n      fail(\n        s\"Non-existent TPCH table paths found in ${tpchDataPath.get}: \" +\n          nonExistentTables.mkString(\", \"))\n    }\n  }\n\n  protected val baseResourcePath: String = {\n    // use the same way as `SQLQueryTestSuite` to get the resource path\n    getWorkspaceFilePath(\n      \"spark\",\n      \"src\",\n      \"test\",\n      \"resources\",\n      \"tpch-query-results\").toFile.getAbsolutePath\n  }\n\n  override def createTables(): Unit = {\n    tableNames.foreach { tableName =>\n      spark.catalog.createTable(tableName, s\"${tpchDataPath.get}/$tableName\", \"parquet\")\n    }\n  }\n\n  override def dropTables(): Unit = {\n    tableNames.foreach { tableName =>\n      spark.sessionState.catalog.dropTable(TableIdentifier(tableName), true, true)\n    }\n  }\n\n  private def runQuery(query: String, goldenFile: File, conf: Map[String, String]): Unit = {\n    val shouldSortResults = sortMergeJoinConf != conf // Sort for other joins\n    withSQLConf(conf.toSeq: _*) {\n      try {\n        val (schema, output) = handleExceptions(getNormalizedQueryExecutionResult(spark, query))\n        val queryString = query.trim\n        val outputString = output.mkString(\"\\n\").replaceAll(\"\\\\s+$\", \"\")\n        if (regenerateGoldenFiles) {\n          val goldenOutput = {\n            s\"-- Automatically generated by ${getClass.getSimpleName}\\n\\n\" +\n              \"-- !query schema\\n\" +\n              schema + \"\\n\" +\n              \"-- !query output\\n\" +\n              outputString +\n              \"\\n\"\n          }\n          val parent = goldenFile.getParentFile\n          if (!parent.exists()) {\n            assert(parent.mkdirs(), \"Could not create directory: \" + parent)\n          }\n          stringToFile(goldenFile, goldenOutput)\n        }\n\n        // Read back the golden file.\n        val (expectedSchema, expectedOutput) = {\n          val goldenOutput = Files.readString(goldenFile.toPath)\n          val segments = goldenOutput.split(\"-- !query.*\\n\")\n\n          // the golden file has 3 segments: the header, the schema, and the output\n          assert(\n            segments.size == 3,\n            s\"Expected 3 blocks in result file but got ${segments.size}. \" +\n              \"Try regenerating the result files.\")\n\n          (segments(1).trim, segments(2).replaceAll(\"\\\\s+$\", \"\"))\n        }\n\n        // Expose thrown exception when executing the query\n        val notMatchedSchemaOutput = if (schema == emptySchema) {\n          // There might be an exception. 
See `handleExceptions`.\n          s\"Schema did not match\\n$queryString\\nOutput/Exception: $outputString\"\n        } else {\n          s\"Schema did not match\\n$queryString\"\n        }\n\n        assertResult(expectedSchema, notMatchedSchemaOutput) {\n          schema\n        }\n        if (shouldSortResults) {\n          val expectSorted = expectedOutput\n            .split(\"\\n\")\n            .sorted\n            .map(_.trim)\n            .mkString(\"\\n\")\n            .replaceAll(\"\\\\s+$\", \"\")\n          val outputSorted = output.sorted.map(_.trim).mkString(\"\\n\").replaceAll(\"\\\\s+$\", \"\")\n          assertResult(expectSorted, s\"Result did not match\\n$queryString\") {\n            outputSorted\n          }\n        } else {\n          assertResult(expectedOutput, s\"Result did not match\\n$queryString\") {\n            outputString\n          }\n        }\n      } catch {\n        case e: Throwable =>\n          val configs = conf.map { case (k, v) =>\n            s\"$k=$v\"\n          }\n          throw new Exception(\n            s\"${e.getMessage}\\nError using configs:\\n${configs.mkString(\"\\n\")}\",\n            e)\n      }\n    }\n  }\n\n  val sortMergeJoinConf: Map[String, String] = Map(\n    SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n    SQLConf.PREFER_SORTMERGEJOIN.key -> \"true\")\n\n  val broadcastHashJoinConf: Map[String, String] = Map(\n    SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"10485760\")\n\n  val shuffledHashJoinConf: Map[String, String] = Map(\n    SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n    \"spark.sql.join.forceApplyShuffledHashJoin\" -> \"true\")\n\n  val allJoinConfCombinations: Seq[Map[String, String]] =\n    Seq(sortMergeJoinConf, broadcastHashJoinConf, shuffledHashJoinConf)\n\n  val joinConfs: Seq[Map[String, String]] = if (regenerateGoldenFiles) {\n    require(\n      !sys.env.contains(\"SPARK_TPCH_JOIN_CONF\"),\n      \"'SPARK_TPCH_JOIN_CONF' cannot be set together with 'SPARK_GENERATE_GOLDEN_FILES'\")\n    Seq(sortMergeJoinConf)\n  } else {\n    sys.env\n      .get(\"SPARK_TPCH_JOIN_CONF\")\n      .map { s =>\n        val p = new java.util.Properties()\n        p.load(new java.io.StringReader(s))\n        Seq(p.asScala.toMap)\n      }\n      .getOrElse(allJoinConfCombinations)\n  }\n\n  assert(joinConfs.nonEmpty)\n  joinConfs.foreach(conf =>\n    require(\n      allJoinConfCombinations.contains(conf),\n      s\"Join configurations [$conf] should be one of $allJoinConfCombinations\"))\n\n  if (tpchDataPath.nonEmpty) {\n    tpchQueries.foreach { name =>\n      if (disabledTpchQueries.contains(name)) {\n        ignore(s\"skipped because $name is disabled\") {}\n      } else {\n        val queryString = resourceToString(\n          s\"tpch/$name.sql\",\n          classLoader = Thread.currentThread().getContextClassLoader)\n        test(name) {\n          val goldenFile = new File(s\"$baseResourcePath\", s\"$name.sql.out\")\n          joinConfs.foreach { conf =>\n            System.gc() // Workaround for GitHub Actions memory limitation, see also SPARK-37368\n            runQuery(queryString, goldenFile, conf)\n          }\n        }\n      }\n    }\n  } else {\n    ignore(\"skipped because env `SPARK_TPCH_DATA` is not set\") {}\n  }\n}\n"
  },
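Both TPC suites accept a join configuration through an environment variable (`SPARK_TPCH_JOIN_CONF` here, `SPARK_TPCDS_JOIN_CONF` in the TPCDS suite), parsed as `java.util.Properties` text and then validated against the three predefined join configurations. A minimal sketch of that parsing step; the `JoinConfSketch` object is illustrative only, and the config keys below are assumed to be the string values behind `SQLConf.AUTO_BROADCASTJOIN_THRESHOLD` and `SQLConf.PREFER_SORTMERGEJOIN`:

```scala
import scala.jdk.CollectionConverters._

// Illustrative only: the env var holds newline-separated key=value pairs,
// exactly what java.util.Properties.load understands.
object JoinConfSketch extends App {
  def parseJoinConf(s: String): Map[String, String] = {
    val p = new java.util.Properties()
    p.load(new java.io.StringReader(s))
    p.asScala.toMap
  }

  val conf = parseJoinConf(
    "spark.sql.autoBroadcastJoinThreshold=-1\nspark.sql.join.preferSortMergeJoin=true")
  assert(conf == Map(
    "spark.sql.autoBroadcastJoinThreshold" -> "-1",
    "spark.sql.join.preferSortMergeJoin" -> "true"))
}
```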
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/CometTPCQueryBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport java.io.{File, FileNotFoundException}\n\nimport scala.util.Try\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.catalyst.util.DateTimeConstants.NANOS_PER_SECOND\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.StructType\n\nimport org.apache.comet.{CometConf, CometSparkSessionExtensions}\n\n/**\n * Base trait for TPC related query execution\n */\ntrait CometTPCQueryBase extends Logging {\n  protected val cometSpark: SparkSession = {\n    val conf = new SparkConf()\n      .setMaster(System.getProperty(\"spark.sql.test.master\", \"local[*]\"))\n      .setAppName(this.getClass.getSimpleName.stripSuffix(\"$\"))\n      .set(\"spark.sql.parquet.compression.codec\", \"snappy\")\n      .set(\n        \"spark.sql.shuffle.partitions\",\n        System.getProperty(\"spark.sql.shuffle.partitions\", \"4\"))\n      .set(\"spark.driver.memory\", \"3g\")\n      .set(\"spark.executor.memory\", \"3g\")\n      .set(\"spark.sql.autoBroadcastJoinThreshold\", (20 * 1024 * 1024).toString)\n      .set(\"spark.sql.crossJoin.enabled\", \"true\")\n      .setIfMissing(\"parquet.enable.dictionary\", \"true\")\n      .set(\n        \"spark.shuffle.manager\",\n        \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n\n    val sparkSession = SparkSession.builder\n      .config(conf)\n      .withExtensions(new CometSparkSessionExtensions)\n      .getOrCreate()\n\n    // Set default configs. 
Individual cases will change them if necessary.\n    sparkSession.conf.set(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key, \"true\")\n    sparkSession.conf.set(SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key, \"true\")\n    sparkSession.conf.set(CometConf.COMET_ENABLED.key, \"false\")\n    sparkSession.conf.set(CometConf.COMET_EXEC_ENABLED.key, \"false\")\n\n    sparkSession\n  }\n\n  protected def setupTables(\n      dataLocation: String,\n      createTempView: Boolean,\n      tables: Seq[String],\n      tableColumns: Map[String, StructType] = Map.empty): Map[String, Long] = {\n    tables.map { tableName =>\n      // some tools will generate TPC-* Parquet files with a `.parquet` extension\n      // and others will not, so let's support both cases\n      val tableFilePathDefault = s\"$dataLocation/$tableName\"\n      val tableFilePathAlt = s\"$dataLocation/$tableName.parquet\"\n      val tableFilePath = if (new File(tableFilePathDefault).exists()) {\n        tableFilePathDefault\n      } else if (new File(tableFilePathAlt).exists()) {\n        tableFilePathAlt\n      } else {\n        throw new FileNotFoundException(\n          s\"Table does not exist at $tableFilePathDefault or $tableFilePathAlt\")\n      }\n      if (createTempView) {\n        cometSpark.read.parquet(tableFilePath).createOrReplaceTempView(tableName)\n      } else {\n        cometSpark.sql(s\"DROP TABLE IF EXISTS $tableName\")\n        if (tableColumns.isEmpty) {\n          cometSpark.catalog.createTable(tableName, tableFilePath, \"parquet\")\n        } else {\n          // SPARK-39584: Fix TPCDSQueryBenchmark Measuring Performance of Wrong Query Results\n          val options = Map(\"path\" -> tableFilePath)\n          cometSpark.catalog.createTable(tableName, \"parquet\", tableColumns(tableName), options)\n        }\n        // Recover partitions but don't fail if a table is not partitioned.\n        Try {\n          cometSpark.sql(s\"ALTER TABLE $tableName RECOVER PARTITIONS\")\n        }.getOrElse {\n          logInfo(s\"Recovering partitions of table $tableName failed\")\n        }\n      }\n      tableName -> cometSpark.table(tableName).count()\n    }.toMap\n  }\n\n  protected def setupCBO(spark: SparkSession, cboEnabled: Boolean, tables: Seq[String]): Unit = {\n    if (cboEnabled) {\n      spark.sql(s\"SET ${SQLConf.CBO_ENABLED.key}=true\")\n      spark.sql(s\"SET ${SQLConf.PLAN_STATS_ENABLED.key}=true\")\n      spark.sql(s\"SET ${SQLConf.JOIN_REORDER_ENABLED.key}=true\")\n      spark.sql(s\"SET ${SQLConf.HISTOGRAM_ENABLED.key}=true\")\n\n      // Analyze all the tables before running queries\n      val startTime = System.nanoTime()\n      tables.foreach { tableName =>\n        spark.sql(s\"ANALYZE TABLE $tableName COMPUTE STATISTICS FOR ALL COLUMNS\")\n      }\n      logInfo(\n        \"The elapsed time to analyze all the tables is \" +\n          s\"${(System.nanoTime() - startTime) / NANOS_PER_SECOND.toDouble} seconds\")\n    } else {\n      spark.sql(s\"SET ${SQLConf.CBO_ENABLED.key}=false\")\n    }\n  }\n\n  protected def filterQueries(\n      origQueries: Seq[String],\n      queryFilter: Set[String],\n      nameSuffix: String = \"\"): Seq[String] = {\n    if (queryFilter.nonEmpty) {\n      if (nameSuffix.nonEmpty) {\n        origQueries.filter(name => queryFilter.contains(s\"$name$nameSuffix\"))\n      } else {\n        origQueries.filter(queryFilter.contains)\n      }\n    } else {\n      origQueries\n    }\n  }\n}\n"
  },
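`setupTables` in `CometTPCQueryBase` probes two directory layouts because some TPC data generators append a `.parquet` suffix to table directories and others do not. A minimal restatement of that probing (the `TablePathSketch` object and the example paths are illustrative):

```scala
import java.io.{File, FileNotFoundException}

// Illustrative restatement of the path resolution in setupTables above.
object TablePathSketch {
  def resolveTablePath(dataLocation: String, tableName: String): String = {
    val plain = s"$dataLocation/$tableName" // e.g. /tmp/tpch/sf1_parquet/lineitem
    val suffixed = s"$plain.parquet" // e.g. /tmp/tpch/sf1_parquet/lineitem.parquet
    if (new File(plain).exists()) plain
    else if (new File(suffixed).exists()) suffixed
    else throw new FileNotFoundException(s"Table does not exist at $plain or $suffixed")
  }
}
```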
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/CometTPCQueryListBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport java.io.{File, FileOutputStream, OutputStream, PrintStream}\n\nimport scala.collection.mutable\n\nimport org.apache.commons.io.output.TeeOutputStream\nimport org.apache.spark.benchmark.Benchmarks\nimport org.apache.spark.sql.catalyst.plans.SQLHelper\nimport org.apache.spark.sql.catalyst.util.resourceToString\nimport org.apache.spark.sql.comet.CometExec\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n\nimport org.apache.comet.{CometConf, ExtendedExplainInfo}\nimport org.apache.comet.shims.ShimCometSparkSessionExtensions\n\ntrait CometTPCQueryListBase\n    extends CometTPCQueryBase\n    with AdaptiveSparkPlanHelper\n    with SQLHelper\n    with ShimCometSparkSessionExtensions {\n  var output: Option[OutputStream] = None\n\n  def main(args: Array[String]): Unit = {\n    val resultFileName =\n      s\"${this.getClass.getSimpleName.replace(\"$\", \"\")}-results.txt\"\n    val prefix = Benchmarks.currentProjectRoot.map(_ + \"/\").getOrElse(\"\")\n    val dir = new File(s\"${prefix}inspections/\")\n    if (!dir.exists()) {\n      // scalastyle:off println\n      println(s\"Creating ${dir.getAbsolutePath} for query inspection results.\")\n      // scalastyle:on println\n      dir.mkdirs()\n    }\n    val file = new File(dir, resultFileName)\n    if (!file.exists()) {\n      file.createNewFile()\n    }\n    output = Some(new FileOutputStream(file))\n\n    runSuite(args)\n\n    output.foreach { o =>\n      if (o != null) {\n        o.close()\n      }\n    }\n  }\n\n  protected def runQueries(\n      queryLocation: String,\n      queries: Seq[String],\n      nameSuffix: String = \"\"): Unit = {\n\n    val out = if (output.isDefined) {\n      new PrintStream(new TeeOutputStream(System.out, output.get))\n    } else {\n      System.out\n    }\n\n    queries.foreach { name =>\n      val queryString = resourceToString(\n        s\"$queryLocation/$name.sql\",\n        classLoader = Thread.currentThread().getContextClassLoader)\n\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n        // Lower bloom filter thresholds to allows us to simulate the plan produced at larger scale.\n        \"spark.sql.optimizer.runtime.bloomFilter.creationSideThreshold\" -> \"1MB\",\n        \"spark.sql.optimizer.runtime.bloomFilter.applicationSideScanSizeThreshold\" -> \"1MB\") {\n\n        val df = cometSpark.sql(queryString)\n        val cometPlans = mutable.HashSet.empty[String]\n        val executedPlan = df.queryExecution.executedPlan\n       
 stripAQEPlan(executedPlan).foreach {\n          case op: CometExec =>\n            cometPlans += s\"${op.nodeName}\"\n          case _ =>\n        }\n\n        if (cometPlans.nonEmpty) {\n          out.println(\n            s\"Query: $name$nameSuffix. Comet Exec: Enabled (${cometPlans.mkString(\", \")})\")\n        } else {\n          out.println(s\"Query: $name$nameSuffix. Comet Exec: Disabled\")\n        }\n        out.println(\n          s\"Query: $name$nameSuffix: ExplainInfo:\\n\" +\n            s\"${new ExtendedExplainInfo().generateExtendedInfo(executedPlan)}\\n\")\n      }\n    }\n  }\n\n  protected def checkCometExec(df: DataFrame, f: SparkPlan => Unit): Unit = {\n    if (CometConf.COMET_ENABLED.get() && CometConf.COMET_EXEC_ENABLED.get()) {\n      f(stripAQEPlan(df.queryExecution.executedPlan))\n    }\n  }\n\n  def runSuite(mainArgs: Array[String]): Unit\n}\n"
  },
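`CometTPCQueryListBase.runQueries` wraps `System.out` and the results file in a commons-io `TeeOutputStream`, so every report line lands in both places at once. A minimal sketch of that pattern with the file side swapped for an in-memory buffer (the `TeeSketch` object is illustrative only):

```scala
import java.io.{ByteArrayOutputStream, PrintStream}

import org.apache.commons.io.output.TeeOutputStream

// Illustrative only: everything printed to `out` reaches both underlying streams.
object TeeSketch extends App {
  val fileSide = new ByteArrayOutputStream() // stands in for the results file
  val out = new PrintStream(new TeeOutputStream(System.out, fileSide))
  out.println("Query: q1. Comet Exec: Enabled (CometProject, CometFilter)")
  out.flush()
  assert(fileSide.toString.contains("CometProject"))
}
```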
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/CometTestBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport java.util.concurrent.atomic.AtomicInteger\n\nimport scala.concurrent.duration._\nimport scala.reflect.ClassTag\nimport scala.reflect.runtime.universe.TypeTag\nimport scala.util.{Success, Try}\n\nimport org.scalatest.BeforeAndAfterEach\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.parquet.column.ParquetProperties\nimport org.apache.parquet.example.data.Group\nimport org.apache.parquet.example.data.simple.{SimpleGroup, SimpleGroupFactory}\nimport org.apache.parquet.hadoop.ParquetWriter\nimport org.apache.parquet.hadoop.example.{ExampleParquetWriter, GroupWriteSupport}\nimport org.apache.parquet.schema.{MessageType, MessageTypeParser}\nimport org.apache.spark._\nimport org.apache.spark.internal.config.{MEMORY_OFFHEAP_ENABLED, MEMORY_OFFHEAP_SIZE, SHUFFLE_MANAGER}\nimport org.apache.spark.sql.catalyst.plans.logical\nimport org.apache.spark.sql.catalyst.util.sideBySide\nimport org.apache.spark.sql.comet.CometPlanChecker\nimport org.apache.spark.sql.comet.execution.shuffle.{CometColumnarShuffle, CometNativeShuffle, CometShuffleExchangeExec}\nimport org.apache.spark.sql.execution._\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.internal._\nimport org.apache.spark.sql.test._\nimport org.apache.spark.sql.types.{DecimalType, StructType}\n\nimport org.apache.comet._\nimport org.apache.comet.shims.ShimCometSparkSessionExtensions\n\n/**\n * Base class for testing. 
This exists in `org.apache.spark.sql` since [[SQLTestUtils]] is\n * package-private.\n */\nabstract class CometTestBase\n    extends QueryTest\n    with SQLTestUtils\n    with BeforeAndAfterEach\n    with AdaptiveSparkPlanHelper\n    with ShimCometSparkSessionExtensions\n    with ShimCometTestBase\n    with CometPlanChecker {\n  import testImplicits._\n\n  protected val shuffleManager: String =\n    \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\"\n\n  protected def sparkConf: SparkConf = {\n    val conf = new SparkConf()\n    conf.set(\"spark.hadoop.fs.file.impl\", classOf[DebugFilesystem].getName)\n    conf.set(\"spark.ui.enabled\", \"false\")\n    conf.set(SQLConf.SHUFFLE_PARTITIONS, 10) // reduce parallelism in tests\n    conf.set(SQLConf.ANSI_ENABLED.key, \"false\")\n    conf.set(SHUFFLE_MANAGER, shuffleManager)\n    conf.set(MEMORY_OFFHEAP_ENABLED.key, \"true\")\n    conf.set(MEMORY_OFFHEAP_SIZE.key, \"2g\")\n    conf.set(SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key, \"1g\")\n    conf.set(SQLConf.ADAPTIVE_AUTO_BROADCASTJOIN_THRESHOLD.key, \"1g\")\n    conf.set(CometConf.COMET_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_ONHEAP_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_EXEC_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_RESPECT_PARQUET_FILTER_PUSHDOWN.key, \"true\")\n    conf.set(CometConf.COMET_SPARK_TO_ARROW_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_NATIVE_SCAN_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_PARQUET_UNSIGNED_SMALL_INT_CHECK.key, \"false\")\n    conf.set(CometConf.COMET_ONHEAP_MEMORY_OVERHEAD.key, \"2g\")\n    conf.set(CometConf.COMET_EXEC_SORT_MERGE_JOIN_WITH_JOIN_FILTER_ENABLED.key, \"true\")\n    // SortOrder is incompatible for mixed zero and negative zero floating point values, but\n    // this is an edge case, and we expect most users to allow sorts on floating point, so we\n    // enable this for the tests\n    conf.set(CometConf.getExprAllowIncompatConfigKey(\"SortOrder\"), \"true\")\n    // For Spark 4.0 tests, we need to limit the thread threshold to avoid OOM, see:\n    //  https://github.com/apache/datafusion-comet/issues/2965\n    conf.set(\n      \"spark.sql.shuffleExchange.maxThreadThreshold\",\n      sys.env.getOrElse(\"SPARK_TEST_SQL_SHUFFLE_EXCHANGE_MAX_THREAD_THRESHOLD\", \"1024\"))\n    conf.set(\n      \"spark.sql.resultQueryStage.maxThreadThreshold\",\n      sys.env.getOrElse(\"SPARK_TEST_SQL_RESULT_QUERY_STAGE_MAX_THREAD_THRESHOLD\", \"1024\"))\n    conf\n  }\n\n  protected def isFeatureEnabled(feature: String): Boolean = {\n    try {\n      NativeBase.isFeatureEnabled(feature)\n    } catch {\n      case _: Throwable =>\n        false\n    }\n  }\n\n  protected def internalCheckSparkAnswer(\n      df: => DataFrame,\n      assertCometNative: Boolean,\n      includeClasses: Seq[Class[_]] = Seq.empty,\n      excludedClasses: Seq[Class[_]] = Seq.empty,\n      withTol: Option[Double] = None): (SparkPlan, SparkPlan) = {\n\n    var expected: Array[Row] = Array.empty\n    var sparkPlan = null.asInstanceOf[SparkPlan]\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n      val dfSpark = datasetOfRows(spark, df.logicalPlan)\n      expected = dfSpark.collect()\n      sparkPlan = dfSpark.queryExecution.executedPlan\n    }\n    val dfComet = datasetOfRows(spark, df.logicalPlan)\n    if (withTol.isDefined) {\n      checkAnswerWithTolerance(dfComet, expected, withTol.get)\n    } else {\n      checkCometAnswer(dfComet, expected)\n  
  }\n\n    if (assertCometNative) {\n      checkCometOperators(stripAQEPlan(df.queryExecution.executedPlan), excludedClasses: _*)\n    } else {\n      if (CometConf.COMET_STRICT_TESTING.get()) {\n        if (findFirstNonCometOperator(\n            stripAQEPlan(df.queryExecution.executedPlan),\n            excludedClasses: _*).isEmpty) {\n          fail(\"Plan was fully native in Comet. Call checkSparkAnswerAndOperator instead.\")\n        }\n      }\n    }\n\n    if (includeClasses.nonEmpty) {\n      checkPlanContains(stripAQEPlan(df.queryExecution.executedPlan), includeClasses: _*)\n    }\n\n    (sparkPlan, dfComet.queryExecution.executedPlan)\n  }\n\n  /**\n   * Check that the query returns the correct results when Comet is enabled, but do not check if\n   * Comet accelerated any operators\n   */\n  protected def checkSparkAnswer(query: String): (SparkPlan, SparkPlan) = {\n    internalCheckSparkAnswer(sql(query), assertCometNative = false)\n  }\n\n  /**\n   * Check that the query returns the correct results when Comet is enabled, but do not check if\n   * Comet accelerated any operators\n   */\n  protected def checkSparkAnswer(df: => DataFrame): (SparkPlan, SparkPlan) = {\n    internalCheckSparkAnswer(df, assertCometNative = false)\n  }\n\n  /**\n   * Check that the query returns the correct results when Comet is enabled, but do not check if\n   * Comet accelerated any operators\n   *\n   * Use the provided `tol` when comparing floating-point results.\n   */\n  protected def checkSparkAnswerWithTolerance(\n      query: String,\n      absTol: Double = 1e-6): (SparkPlan, SparkPlan) = {\n    checkSparkAnswerWithTolerance(sql(query), absTol)\n  }\n\n  /**\n   * Check that the query returns the correct results when Comet is enabled, but do not check if\n   * Comet accelerated any operators\n   *\n   * Use the provided `tol` when comparing floating-point results.\n   */\n  protected def checkSparkAnswerWithTolerance(\n      df: => DataFrame,\n      absTol: Double): (SparkPlan, SparkPlan) = {\n    internalCheckSparkAnswer(df, assertCometNative = false, withTol = Some(absTol))\n  }\n\n  /**\n   * Assert that the schema produced by a Comet-enabled execution matches the schema produced by\n   * vanilla Spark for the same logical plan. Useful for catching regressions where Comet modifies\n   * expression result types (e.g. decimal precision promotion).\n   */\n  protected def checkSparkSchema(df: DataFrame): Unit = {\n    var sparkSchema: StructType = null\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n      sparkSchema = datasetOfRows(spark, df.logicalPlan).schema\n    }\n    assert(\n      df.schema == sparkSchema,\n      s\"Schema mismatch:\\nSpark:  $sparkSchema\\nComet: ${df.schema}\")\n  }\n\n  /**\n   * Check that the query returns the correct results when Comet is enabled and that Comet\n   * replaced all possible operators. 
Use the provided `absTol` when comparing floating-point\n   * results.\n   */\n  protected def checkSparkAnswerAndOperatorWithTolerance(\n      query: String,\n      absTol: Double = 1e-6): (SparkPlan, SparkPlan) = {\n    checkSparkAnswerAndOperatorWithTol(sql(query), absTol)\n  }\n\n  /**\n   * Check that the query returns the correct results when Comet is enabled and that Comet\n   * replaced all possible operators except for those specified in the excluded list.\n   */\n  protected def checkSparkAnswerAndOperator(\n      query: String,\n      excludedClasses: Class[_]*): (SparkPlan, SparkPlan) = {\n    checkSparkAnswerAndOperator(sql(query), excludedClasses: _*)\n  }\n\n  /**\n   * Check that the query returns the correct results when Comet is enabled and that Comet\n   * replaced all possible operators except for those specified in the excluded list.\n   */\n  protected def checkSparkAnswerAndOperator(\n      df: => DataFrame,\n      excludedClasses: Class[_]*): (SparkPlan, SparkPlan) = {\n    internalCheckSparkAnswer(\n      df,\n      assertCometNative = true,\n      excludedClasses = Seq(excludedClasses: _*))\n  }\n\n  /**\n   * Check that the query returns the correct results when Comet is enabled and that Comet\n   * replaced all possible operators except for those specified in the excluded list.\n   *\n   * Also check that the plan included all operators specified in `includeClasses`.\n   */\n  protected def checkSparkAnswerAndOperator(\n      df: => DataFrame,\n      includeClasses: Seq[Class[_]],\n      excludedClasses: Class[_]*): (SparkPlan, SparkPlan) = {\n    internalCheckSparkAnswer(\n      df,\n      assertCometNative = true,\n      includeClasses,\n      excludedClasses = Seq(excludedClasses: _*))\n  }\n\n  /**\n   * Check that the query returns the correct results when Comet is enabled and that Comet\n   * replaced all possible operators except for those specified in the excluded list.\n   *\n   * Also check that the plan included all operators specified in `includeClasses`.\n   *\n   * Use the provided `tol` when comparing floating-point results.\n   */\n  protected def checkSparkAnswerAndOperatorWithTol(\n      df: => DataFrame,\n      tol: Double = 1e-6): (SparkPlan, SparkPlan) = {\n    checkSparkAnswerAndOperatorWithTol(df, tol, Seq.empty)\n  }\n\n  /**\n   * Check that the query returns the correct results when Comet is enabled and that Comet\n   * replaced all possible operators except for those specified in the excluded list.\n   *\n   * Also check that the plan included all operators specified in `includeClasses`.\n   *\n   * Use the provided `tol` when comparing floating-point results.\n   */\n  protected def checkSparkAnswerAndOperatorWithTol(\n      df: => DataFrame,\n      tol: Double,\n      includeClasses: Seq[Class[_]],\n      excludedClasses: Class[_]*): (SparkPlan, SparkPlan) = {\n    internalCheckSparkAnswer(\n      df,\n      assertCometNative = true,\n      includeClasses = Seq(includeClasses: _*),\n      excludedClasses = Seq(excludedClasses: _*),\n      withTol = Some(tol))\n  }\n\n  /** Check for the correct results as well as the expected fallback reason */\n  protected def checkSparkAnswerAndFallbackReason(\n      query: String,\n      fallbackReason: String): (SparkPlan, SparkPlan) = {\n    checkSparkAnswerAndFallbackReasons(sql(query), Set(fallbackReason))\n  }\n\n  /** Check for the correct results as well as the expected fallback reason */\n  protected def checkSparkAnswerAndFallbackReason(\n      df: => DataFrame,\n      fallbackReason: 
String): (SparkPlan, SparkPlan) = {\n    checkSparkAnswerAndFallbackReasons(df, Set(fallbackReason))\n  }\n\n  /** Check for the correct results as well as the expected fallback reasons */\n  protected def checkSparkAnswerAndFallbackReasons(\n      query: String,\n      fallbackReasons: Set[String]): (SparkPlan, SparkPlan) = {\n    checkSparkAnswerAndFallbackReasons(sql(query), fallbackReasons)\n  }\n\n  /** Check for the correct results as well as the expected fallback reasons */\n  protected def checkSparkAnswerAndFallbackReasons(\n      df: => DataFrame,\n      fallbackReasons: Set[String]): (SparkPlan, SparkPlan) = {\n    val (sparkPlan, cometPlan) = internalCheckSparkAnswer(df, assertCometNative = false)\n    val explainInfo = new ExtendedExplainInfo()\n    val actualFallbacks = explainInfo.getFallbackReasons(cometPlan)\n    for (reason <- fallbackReasons) {\n      if (!actualFallbacks.exists(_.contains(reason))) {\n        if (actualFallbacks.isEmpty) {\n          fail(\n            s\"Expected fallback reason '$reason' but no fallback reasons were found. Explain: ${explainInfo\n                .generateExtendedInfo(cometPlan)}\")\n        } else {\n          fail(\n            s\"Expected fallback reason '$reason' not found in [${actualFallbacks.mkString(\", \")}]\")\n        }\n      }\n    }\n    (sparkPlan, cometPlan)\n  }\n\n  /**\n   * Try executing the query against Spark and Comet and return the results or the exception.\n   *\n   * This method does not check that Comet replaced any operators or that the results match in the\n   * case where the query is successful against both Spark and Comet.\n   */\n  protected def checkSparkAnswerMaybeThrows(\n      df: => DataFrame): (Option[Throwable], Option[Throwable]) = {\n    var expected: Try[Array[Row]] = null\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n      expected = Try(datasetOfRows(spark, df.logicalPlan).collect())\n    }\n    val actual = Try(datasetOfRows(spark, df.logicalPlan).collect())\n\n    (expected, actual) match {\n      case (Success(_), Success(_)) =>\n        // compare results and confirm that they match\n        checkSparkAnswer(df)\n        (None, None)\n      case _ =>\n        (expected.failed.toOption, actual.failed.toOption)\n    }\n  }\n\n  /**\n   * Compares the Comet DataFrame result against the expected Spark answer, using labels that\n   * correctly identify which side is Comet and which is Spark. 
This avoids the misleading \"Spark\n   * Answer\" label that Spark's built-in `checkAnswer` would apply to the Comet result.\n   */\n  protected def checkCometAnswer(cometDf: DataFrame, sparkAnswer: Seq[Row]): Unit = {\n    val isSorted = cometDf.logicalPlan.collect { case s: logical.Sort => s }.nonEmpty\n    val cometAnswer =\n      try cometDf.collect().toSeq\n      catch {\n        case e: Exception =>\n          fail(s\"\"\"Exception thrown while executing query in Comet:\n             |${cometDf.queryExecution}\n             |== Exception ==\n             |$e\n             |${org.apache.spark.sql.catalyst.util.stackTraceToString(e)}\n           \"\"\".stripMargin)\n      }\n    if (!QueryTest.compare(\n        QueryTest.prepareAnswer(sparkAnswer, isSorted),\n        QueryTest.prepareAnswer(cometAnswer, isSorted))) {\n      val getRowType: Option[Row] => String = row =>\n        row\n          .map(r => if (r.schema == null) \"struct<>\" else r.schema.catalogString)\n          .getOrElse(\"struct<>\")\n      fail(s\"\"\"Results do not match for query:\n           |Timezone: ${java.util.TimeZone.getDefault}\n           |Timezone Env: ${sys.env.getOrElse(\"TZ\", \"\")}\n           |\n           |${cometDf.queryExecution}\n           |== Results ==\n           |${sideBySide(\n               s\"== Spark Answer - ${sparkAnswer.size} ==\" +:\n                 getRowType(sparkAnswer.headOption) +:\n                 QueryTest.prepareAnswer(sparkAnswer, isSorted).map(_.toString()),\n               s\"== Comet Answer - ${cometAnswer.size} ==\" +:\n                 getRowType(cometAnswer.headOption) +:\n                 QueryTest.prepareAnswer(cometAnswer, isSorted).map(_.toString())).mkString(\"\\n\")}\n         \"\"\".stripMargin)\n    }\n  }\n\n  /**\n   * A helper function for comparing Comet DataFrame with Spark result using absolute tolerance.\n   */\n  private def checkAnswerWithTolerance(\n      dataFrame: DataFrame,\n      expectedAnswer: Seq[Row],\n      absTol: Double): Unit = {\n    val actualAnswer = dataFrame.collect()\n    require(\n      actualAnswer.length == expectedAnswer.length,\n      s\"actual num rows ${actualAnswer.length} != expected num of rows ${expectedAnswer.length}\")\n\n    actualAnswer.zip(expectedAnswer).foreach { case (actualRow, expectedRow) =>\n      checkAnswerWithTolerance(actualRow, expectedRow, absTol)\n    }\n  }\n\n  /**\n   * Compares two answers and makes sure the answer is within absTol of the expected result.\n   */\n  private def checkAnswerWithTolerance(\n      actualAnswer: Row,\n      expectedAnswer: Row,\n      absTol: Double): Unit = {\n    require(\n      actualAnswer.length == expectedAnswer.length,\n      s\"actual answer length ${actualAnswer.length} != \" +\n        s\"expected answer length ${expectedAnswer.length}\")\n    require(absTol > 0 && absTol <= 1e-6, s\"absTol $absTol is out of range (0, 1e-6]\")\n\n    actualAnswer.toSeq.zip(expectedAnswer.toSeq).foreach {\n      case (actual: Float, expected: Float) =>\n        if (actual.isInfinity || expected.isInfinity) {\n          assert(actual.isInfinity == expected.isInfinity, s\"actual answer $actual != $expected\")\n        } else if (!actual.isNaN && !expected.isNaN) {\n          assert(\n            math.abs(actual - expected) < absTol,\n            s\"actual answer $actual not within $absTol of correct answer $expected\")\n        }\n      case (actual: Double, expected: Double) =>\n        if (actual.isInfinity || expected.isInfinity) {\n          assert(actual.isInfinity == 
expected.isInfinity, s\"actual answer $actual != $expected\")\n        } else if (!actual.isNaN && !expected.isNaN) {\n          assert(\n            math.abs(actual - expected) < absTol,\n            s\"actual answer $actual not within $absTol of correct answer $expected\")\n        }\n      case (actual, expected) =>\n        assert(actual == expected, s\"$actualAnswer did not equal $expectedAnswer\")\n    }\n  }\n\n  protected def checkCometOperators(plan: SparkPlan, excludedClasses: Class[_]*): Unit = {\n    findFirstNonCometOperator(plan, excludedClasses: _*) match {\n      case Some(op) =>\n        assert(\n          false,\n          s\"Expected only Comet native operators, but found ${op.nodeName}.\\n\" +\n            s\"plan: ${new ExtendedExplainInfo().generateExtendedInfo(plan)}\")\n      case _ =>\n    }\n\n    checkPlanNotMissingInput(plan)\n  }\n\n  // Checks that the plan node has no missing inputs.\n  // Such nodes are rendered in the plan with an exclamation mark (!),\n  // for example: !CometWindowExec\n  protected def checkPlanNotMissingInput(plan: SparkPlan): Unit = {\n    def hasMissingInput(node: SparkPlan): Boolean = {\n      node.missingInput.nonEmpty && node.children.nonEmpty\n    }\n\n    val isCometNode = plan.nodeName.startsWith(\"Comet\")\n\n    if (isCometNode && hasMissingInput(plan)) {\n      assert(\n        false,\n        s\"Plan node `${plan.nodeName}` has invalid missingInput: ${plan.missingInput}\")\n    }\n\n    // Otherwise recursively check children\n    plan.children.foreach { child =>\n      checkPlanNotMissingInput(child)\n    }\n  }\n\n  private def checkPlanContains(plan: SparkPlan, includePlans: Class[_]*): Unit = {\n    includePlans.foreach { case planClass =>\n      if (plan.find(op => planClass.isAssignableFrom(op.getClass)).isEmpty) {\n        assert(\n          false,\n          s\"Expected plan to contain ${planClass.getSimpleName}, but it did not.\\n\" +\n            s\"plan: $plan\")\n      }\n    }\n  }\n\n  private var _spark: SparkSessionType = _\n  override protected implicit def spark: SparkSessionType = _spark\n  protected implicit def sqlContext: SQLContext = _spark.sqlContext\n\n  override protected def sparkContext: SparkContext = {\n    SparkContext.clearActiveContext()\n\n    val conf = sparkConf\n\n    if (!conf.contains(\"spark.master\")) {\n      conf.setMaster(\"local[5]\")\n    }\n\n    if (!conf.contains(\"spark.app.name\")) {\n      conf.setAppName(java.util.UUID.randomUUID().toString)\n    }\n\n    SparkContext.getOrCreate(conf)\n  }\n\n  protected def createSparkSession: SparkSessionType = {\n    SparkSession.clearActiveSession()\n    SparkSession.clearDefaultSession()\n\n    SparkSession\n      .builder()\n      .config(\n        sparkContext.getConf\n      ) // Don't use `sparkConf` as we may have overridden it in the plugin\n      .withExtensions(new CometSparkSessionExtensions)\n      .getOrCreate()\n  }\n\n  protected def initializeSession(): Unit = {\n    if (_spark == null) _spark = createSparkSession\n  }\n\n  protected override def beforeAll(): Unit = {\n    initializeSession()\n    super.beforeAll()\n  }\n\n  protected override def afterAll(): Unit = {\n    try {\n      super.afterAll()\n    } finally {\n      if (_spark != null) {\n        try {\n          _spark.stop()\n        } finally {\n          _spark = null\n          SparkSession.clearActiveSession()\n          SparkSession.clearDefaultSession()\n        }\n      }\n    }\n  }\n\n  protected override def beforeEach(): Unit = {\n    super.beforeEach()\n    
DebugFilesystem.clearOpenStreams()\n  }\n\n  protected override def afterEach(): Unit = {\n    super.afterEach()\n    spark.sharedState.cacheManager.clearCache()\n    eventually(timeout(10.seconds), interval(2.seconds)) {\n      DebugFilesystem.assertNoOpenStreams()\n    }\n  }\n\n  protected def readResourceParquetFile(name: String): DataFrame = {\n    spark.read.parquet(getResourceParquetFilePath(name))\n  }\n\n  protected def getResourceParquetFilePath(name: String): String = {\n    Thread.currentThread().getContextClassLoader.getResource(name).toString\n  }\n\n  protected def withParquetDataFrame[T <: Product: ClassTag: TypeTag](\n      data: Seq[T],\n      withDictionary: Boolean = true,\n      schema: Option[StructType] = None)(f: DataFrame => Unit): Unit = {\n    withParquetFile(data, withDictionary)(path => readParquetFile(path, schema)(f))\n  }\n\n  protected def withParquetTable[T <: Product: ClassTag: TypeTag](\n      data: Seq[T],\n      tableName: String,\n      withDictionary: Boolean = true)(f: => Unit): Unit = {\n    withParquetDataFrame(data, withDictionary) { df =>\n      df.createOrReplaceTempView(tableName)\n      withTempView(tableName)(f)\n    }\n  }\n\n  protected def withParquetTable(df: DataFrame, tableName: String)(f: => Unit): Unit = {\n    df.createOrReplaceTempView(tableName)\n    withTempView(tableName)(f)\n  }\n\n  protected def withParquetTable(path: String, tableName: String)(f: => Unit): Unit = {\n    val df = spark.read.format(\"parquet\").load(path)\n    withParquetTable(df, tableName)(f)\n  }\n\n  protected def withParquetFile[T <: Product: ClassTag: TypeTag](\n      data: Seq[T],\n      withDictionary: Boolean = true)(f: String => Unit): Unit = {\n    withTempPath { file =>\n      spark\n        .createDataFrame(data)\n        .write\n        .option(\"parquet.enable.dictionary\", withDictionary.toString)\n        .parquet(file.getCanonicalPath)\n      f(file.getCanonicalPath)\n    }\n  }\n\n  protected def readParquetFile(path: String, schema: Option[StructType] = None)(\n      f: DataFrame => Unit): Unit = schema match {\n    case Some(s) => f(spark.read.format(\"parquet\").schema(s).load(path))\n    case None => f(spark.read.format(\"parquet\").load(path))\n  }\n\n  protected def createParquetWriter(\n      schema: MessageType,\n      path: Path,\n      dictionaryEnabled: Boolean = false,\n      pageSize: Int = 1024,\n      dictionaryPageSize: Int = 1024,\n      pageRowCountLimit: Int = ParquetProperties.DEFAULT_PAGE_ROW_COUNT_LIMIT,\n      rowGroupSize: Long = 1024 * 1024L): ParquetWriter[Group] = {\n    val hadoopConf = spark.sessionState.newHadoopConf()\n\n    ExampleParquetWriter\n      .builder(path)\n      .withDictionaryEncoding(dictionaryEnabled)\n      .withType(schema)\n      // TODO we need to shim this and use withRowGroupSize(Long) with later parquet-hadoop versions to remove\n      // the deprecated warning here\n      .withRowGroupSize(rowGroupSize.toInt)\n      .withPageSize(pageSize)\n      .withDictionaryPageSize(dictionaryPageSize)\n      .withPageRowCountLimit(pageRowCountLimit)\n      .withConf(hadoopConf)\n      .build()\n  }\n\n  // Maps `i` to both positive and negative to test timestamp after and before the Unix epoch\n  protected def getValue(i: Long, div: Long): Long = {\n    val value = if (i % 2 == 0) i else -i\n    value % div\n  }\n\n  def makeParquetFileAllPrimitiveTypes(path: Path, dictionaryEnabled: Boolean, n: Int): Unit = {\n    makeParquetFileAllPrimitiveTypes(path, dictionaryEnabled, 0, n)\n  }\n\n  def 
getPrimitiveTypesParquetSchema: String = {\n    if (hasUnsignedSmallIntSafetyCheck(conf)) {\n      // Comet complex type reader has different behavior for uint_8, uint_16 types.\n      // The issue stems from undefined behavior in the parquet spec and is tracked\n      // here: https://github.com/apache/parquet-java/issues/3142\n      // here: https://github.com/apache/arrow-rs/issues/7040\n      // and here: https://github.com/apache/datafusion-comet/issues/1348\n      \"\"\"\n       |message root {\n       |  optional boolean                  _1;\n       |  optional int32                    _2(INT_8);\n       |  optional int32                    _3(INT_16);\n       |  optional int32                    _4;\n       |  optional int64                    _5;\n       |  optional float                    _6;\n       |  optional double                   _7;\n       |  optional binary                   _8(UTF8);\n       |  optional int32                    _9(UINT_32);\n       |  optional int32                    _10(UINT_32);\n       |  optional int32                    _11(UINT_32);\n       |  optional int64                    _12(UINT_64);\n       |  optional binary                   _13(ENUM);\n       |  optional FIXED_LEN_BYTE_ARRAY(3)  _14;\n       |  optional int32                    _15(DECIMAL(5, 2));\n       |  optional int64                    _16(DECIMAL(18, 10));\n       |  optional FIXED_LEN_BYTE_ARRAY(16) _17(DECIMAL(38, 37));\n       |  optional INT64                    _18(TIMESTAMP(MILLIS,true));\n       |  optional INT64                    _19(TIMESTAMP(MICROS,true));\n       |  optional INT32                    _20(DATE);\n       |  optional binary                   _21;\n       |  optional INT32                    _id;\n       |}\n      \"\"\".stripMargin\n    } else {\n      \"\"\"\n       |message root {\n       |  optional boolean                  _1;\n       |  optional int32                    _2(INT_8);\n       |  optional int32                    _3(INT_16);\n       |  optional int32                    _4;\n       |  optional int64                    _5;\n       |  optional float                    _6;\n       |  optional double                   _7;\n       |  optional binary                   _8(UTF8);\n       |  optional int32                    _9(UINT_8);\n       |  optional int32                    _10(UINT_16);\n       |  optional int32                    _11(UINT_32);\n       |  optional int64                    _12(UINT_64);\n       |  optional binary                   _13(ENUM);\n       |  optional FIXED_LEN_BYTE_ARRAY(3)  _14;\n       |  optional int32                    _15(DECIMAL(5, 2));\n       |  optional int64                    _16(DECIMAL(18, 10));\n       |  optional FIXED_LEN_BYTE_ARRAY(16) _17(DECIMAL(38, 37));\n       |  optional INT64                    _18(TIMESTAMP(MILLIS,true));\n       |  optional INT64                    _19(TIMESTAMP(MICROS,true));\n       |  optional INT32                    _20(DATE);\n       |  optional binary                   _21;\n       |  optional INT32                    _id;\n       |}\n      \"\"\".stripMargin\n    }\n  }\n\n  def makeParquetFileAllPrimitiveTypes(\n      path: Path,\n      dictionaryEnabled: Boolean,\n      begin: Int,\n      end: Int,\n      nullEnabled: Boolean = true,\n      pageSize: Int = 128,\n      randomSize: Int = 0): Unit = {\n    // alwaysIncludeUnsignedIntTypes means we include unsignedIntTypes in the test even if the\n    // reader does not support them\n    val schemaStr = 
getPrimitiveTypesParquetSchema\n\n    val schema = MessageTypeParser.parseMessageType(schemaStr)\n    val writer = createParquetWriter(\n      schema,\n      path,\n      dictionaryEnabled = dictionaryEnabled,\n      pageSize = pageSize,\n      dictionaryPageSize = pageSize)\n\n    val idGenerator = new AtomicInteger(0)\n\n    val rand = new scala.util.Random(42)\n    val data = (begin until end).map { i =>\n      if (nullEnabled && rand.nextBoolean()) {\n        None\n      } else {\n        if (dictionaryEnabled) Some(i % 4) else Some(i)\n      }\n    }\n    data.foreach { opt =>\n      val record = new SimpleGroup(schema)\n      opt match {\n        case Some(i) =>\n          record.add(0, i % 2 == 0)\n          record.add(1, i.toByte)\n          record.add(2, i.toShort)\n          record.add(3, i)\n          record.add(4, i.toLong)\n          record.add(5, i.toFloat)\n          record.add(6, i.toDouble)\n          record.add(7, i.toString * 48)\n          record.add(8, (-i).toByte)\n          record.add(9, (-i).toShort)\n          record.add(10, -i)\n          record.add(11, (-i).toLong)\n          record.add(12, i.toString)\n          record.add(13, ((i % 10).toString * 3).take(3))\n          record.add(14, i)\n          record.add(15, i.toLong)\n          record.add(16, ((i % 10).toString * 16).take(16))\n          record.add(17, i.toLong)\n          record.add(18, i.toLong)\n          record.add(19, i)\n          record.add(20, i.toString)\n          record.add(21, idGenerator.getAndIncrement())\n        case _ =>\n      }\n      writer.write(record)\n    }\n    (0 until randomSize).foreach { _ =>\n      val i = rand.nextLong()\n      val record = new SimpleGroup(schema)\n      record.add(0, i % 2 == 0)\n      record.add(1, i.toByte)\n      record.add(2, i.toShort)\n      record.add(3, i.toInt)\n      record.add(4, i)\n      record.add(5, java.lang.Float.intBitsToFloat(i.toInt))\n      record.add(6, java.lang.Double.longBitsToDouble(i))\n      record.add(7, i.toString * 24)\n      record.add(8, (-i).toByte)\n      record.add(9, (-i).toShort)\n      record.add(10, (-i).toInt)\n      record.add(11, -i)\n      record.add(12, i.toString)\n      record.add(13, i.toString.take(3).padTo(3, '0'))\n      record.add(14, i.toInt % 100000)\n      record.add(15, i % 1000000000000000000L)\n      record.add(16, i.toString.take(16).padTo(16, '0'))\n      record.add(17, i)\n      record.add(18, i)\n      record.add(19, i.toInt)\n      record.add(20, i.toString)\n      record.add(21, idGenerator.getAndIncrement())\n      writer.write(record)\n    }\n\n    writer.close()\n  }\n\n  protected def makeRawTimeParquetFileColumns(\n      path: Path,\n      dictionaryEnabled: Boolean,\n      n: Int,\n      rowGroupSize: Long = 1024 * 1024L): Seq[Option[Long]] = {\n    val schemaStr =\n      \"\"\"\n        |message root {\n        |  optional int64 _0(INT_64);\n        |  optional int64 _1(INT_64);\n        |  optional int64 _2(INT_64);\n        |  optional int64 _3(INT_64);\n        |  optional int64 _4(INT_64);\n        |  optional int64 _5(INT_64);\n        |}\n        \"\"\".stripMargin\n\n    val schema = MessageTypeParser.parseMessageType(schemaStr)\n    val writer = createParquetWriter(\n      schema,\n      path,\n      dictionaryEnabled = dictionaryEnabled,\n      rowGroupSize = rowGroupSize)\n    val div = if (dictionaryEnabled) 10 else n // maps value to a small range for dict to kick in\n\n    val rand = new scala.util.Random(42)\n    val expected = (0 until n).map { i =>\n      if 
(rand.nextBoolean()) {\n        None\n      } else {\n        Some(getValue(i, div))\n      }\n    }\n    expected.foreach { opt =>\n      val record = new SimpleGroup(schema)\n      opt match {\n        case Some(i) =>\n          record.add(0, i)\n          record.add(1, i * 1000) // convert millis to micros, same below\n          record.add(2, i)\n          record.add(3, i)\n          record.add(4, i * 1000)\n          record.add(5, i * 1000)\n        case _ =>\n      }\n      writer.write(record)\n    }\n\n    writer.close()\n    expected\n  }\n\n  // Creates Parquet file of timestamp values\n  protected def makeRawTimeParquetFile(\n      path: Path,\n      dictionaryEnabled: Boolean,\n      n: Int,\n      rowGroupSize: Long = 1024 * 1024L): Seq[Option[Long]] = {\n    val schemaStr =\n      \"\"\"\n        |message root {\n        |  optional int64 _0(TIMESTAMP_MILLIS);\n        |  optional int64 _1(TIMESTAMP_MICROS);\n        |  optional int64 _2(TIMESTAMP(MILLIS,true));\n        |  optional int64 _3(TIMESTAMP(MILLIS,false));\n        |  optional int64 _4(TIMESTAMP(MICROS,true));\n        |  optional int64 _5(TIMESTAMP(MICROS,false));\n        |  optional int64 _6(INT_64);\n        |}\n        \"\"\".stripMargin\n\n    val schema = MessageTypeParser.parseMessageType(schemaStr)\n    val writer = createParquetWriter(\n      schema,\n      path,\n      dictionaryEnabled = dictionaryEnabled,\n      rowGroupSize = rowGroupSize)\n    val div = if (dictionaryEnabled) 10 else n // maps value to a small range for dict to kick in\n\n    val rand = new scala.util.Random(42)\n    val expected = (0 until n).map { i =>\n      if (rand.nextBoolean()) {\n        None\n      } else {\n        Some(getValue(i, div))\n      }\n    }\n    expected.foreach { opt =>\n      val record = new SimpleGroup(schema)\n      opt match {\n        case Some(i) =>\n          record.add(0, i)\n          record.add(1, i * 1000) // convert millis to micros, same below\n          record.add(2, i)\n          record.add(3, i)\n          record.add(4, i * 1000)\n          record.add(5, i * 1000)\n          record.add(6, i * 1000)\n        case _ =>\n      }\n      writer.write(record)\n    }\n\n    writer.close()\n    expected\n  }\n\n  // Generate a file based on a complex schema. 
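It exercises optional and required\n  // LIST, STRUCT, and MAP fields, including nested lists and a map with group keys\n  // and values.\n  // 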
Schema derived from https://arrow.apache.org/blog/2022/10/17/arrow-parquet-encoding-part-3/\n  def makeParquetFileComplexTypes(\n      path: Path,\n      dictionaryEnabled: Boolean,\n      numRows: Integer = 10000): Unit = {\n    val schemaString =\n      \"\"\"\n      message ComplexDataSchema {\n        optional group optional_array (LIST) {\n          repeated group list {\n            optional int32 element;\n          }\n        }\n        required group array_of_struct (LIST) {\n          repeated group list {\n            optional group struct_element {\n              required int32 field1;\n              optional group optional_nested_array (LIST) {\n                repeated group list {\n                  required int32 element;\n                }\n              }\n            }\n          }\n        }\n        optional group optional_map (MAP) {\n          repeated group key_value {\n            required int32 key;\n            optional int32 value;\n          }\n        }\n        required group complex_map (MAP) {\n          repeated group key_value {\n            required group map_key {\n              required int32 key_field1;\n              optional int32 key_field2;\n            }\n            required group map_value {\n              required int32 value_field1;\n              repeated int32 value_field2;\n            }\n          }\n        }\n      }\n    \"\"\"\n\n    val schema: MessageType = MessageTypeParser.parseMessageType(schemaString)\n    GroupWriteSupport.setSchema(schema, spark.sparkContext.hadoopConfiguration)\n\n    val writer = createParquetWriter(schema, path, dictionaryEnabled)\n\n    val groupFactory = new SimpleGroupFactory(schema)\n\n    for (i <- 0 until numRows) {\n      val record = groupFactory.newGroup()\n\n      // Optional array of optional integers\n      if (i % 2 == 0) { // optional_array for every other row\n        val optionalArray = record.addGroup(\"optional_array\")\n        for (j <- 0 until (i % 5)) {\n          val elementGroup = optionalArray.addGroup(\"list\")\n          if (j % 2 == 0) { // some elements are optional\n            elementGroup.append(\"element\", j)\n          }\n        }\n      }\n\n      // Required array of structs\n      val arrayOfStruct = record.addGroup(\"array_of_struct\")\n      for (j <- 0 until (i % 3) + 1) { // Add one to three elements\n        val structElementGroup = arrayOfStruct.addGroup(\"list\").addGroup(\"struct_element\")\n        structElementGroup.append(\"field1\", i * 10 + j)\n\n        // Optional nested array\n        if (j % 2 != 0) { // optional nested array for every other struct\n          val nestedArray = structElementGroup.addGroup(\"optional_nested_array\")\n          for (k <- 0 until (i % 4)) { // Add zero to three elements.\n            val nestedElementGroup = nestedArray.addGroup(\"list\")\n            nestedElementGroup.append(\"element\", i + j + k)\n          }\n        }\n      }\n\n      // Optional map\n      if (i % 3 == 0) { // optional_map every third row\n        val optionalMap = record.addGroup(\"optional_map\")\n        optionalMap\n          .addGroup(\"key_value\")\n          .append(\"key\", i)\n          .append(\"value\", i)\n        if (i % 5 == 0) { // another optional entry\n          optionalMap\n            .addGroup(\"key_value\")\n            .append(\"key\", i)\n          // Value is optional\n          if (i % 10 == 0) {\n            optionalMap\n              .addGroup(\"key_value\")\n              .append(\"key\", i)\n              
.append(\"value\", i)\n          }\n        }\n      }\n\n      // Required map with complex key and value types\n      val complexMap = record.addGroup(\"complex_map\")\n      val complexMapKeyVal = complexMap.addGroup(\"key_value\")\n\n      complexMapKeyVal\n        .addGroup(\"map_key\")\n        .append(\"key_field1\", i)\n\n      complexMapKeyVal\n        .addGroup(\"map_value\")\n        .append(\"value_field1\", i)\n        .append(\"value_field2\", i * 100)\n        .append(\"value_field2\", i * 101)\n        .append(\"value_field2\", i * 102)\n\n      writer.write(record)\n    }\n\n    writer.close()\n  }\n\n  protected def makeDateTimeWithFormatTable(\n      path: Path,\n      dictionaryEnabled: Boolean,\n      n: Int,\n      rowGroupSize: Long = 1024 * 1024L): Seq[Option[Long]] = {\n    val schemaStr =\n      \"\"\"\n        |message root {\n        |  optional int64 _0(TIMESTAMP_MILLIS);\n        |  optional int64 _1(TIMESTAMP_MICROS);\n        |  optional int64 _2(TIMESTAMP(MILLIS,true));\n        |  optional int64 _3(TIMESTAMP(MILLIS,false));\n        |  optional int64 _4(TIMESTAMP(MICROS,true));\n        |  optional int64 _5(TIMESTAMP(MICROS,false));\n        |  optional int64 _6(INT_64);\n        |  optional int32 _7(DATE);\n        |  optional binary format(UTF8);\n        |  optional binary dateFormat(UTF8);\n        |  }\n      \"\"\".stripMargin\n\n    val schema = MessageTypeParser.parseMessageType(schemaStr)\n    val writer = createParquetWriter(\n      schema,\n      path,\n      dictionaryEnabled = dictionaryEnabled,\n      rowGroupSize = rowGroupSize)\n    val div = if (dictionaryEnabled) 10 else n // maps value to a small range for dict to kick in\n\n    val expected = (0 until n).map { i =>\n      Some(getValue(i, div))\n    }\n    expected.foreach { opt =>\n      val timestampFormats = List(\n        \"YEAR\",\n        \"YYYY\",\n        \"YY\",\n        \"MON\",\n        \"MONTH\",\n        \"MM\",\n        \"QUARTER\",\n        \"WEEK\",\n        \"DAY\",\n        \"DD\",\n        \"HOUR\",\n        \"MINUTE\",\n        \"SECOND\",\n        \"MILLISECOND\",\n        \"MICROSECOND\")\n      val dateFormats = List(\"YEAR\", \"YYYY\", \"YY\", \"MON\", \"MONTH\", \"MM\", \"QUARTER\", \"WEEK\")\n      val formats = timestampFormats.zipAll(dateFormats, \"NONE\", \"YEAR\")\n\n      formats.foreach { format =>\n        val record = new SimpleGroup(schema)\n        opt match {\n          case Some(i) =>\n            record.add(0, i)\n            record.add(1, i * 1000) // convert millis to micros, same below\n            record.add(2, i)\n            record.add(3, i)\n            record.add(4, i * 1000)\n            record.add(5, i * 1000)\n            record.add(6, i * 1000)\n            record.add(7, i.toInt)\n            record.add(8, format._1)\n            record.add(9, format._2)\n          case _ =>\n        }\n        writer.write(record)\n      }\n    }\n    writer.close()\n    expected\n  }\n\n  def makeDecimalRDD(num: Int, decimal: DecimalType, useDictionary: Boolean): DataFrame = {\n    val div = if (useDictionary) 5 else num // narrow the space to make it dictionary encoded\n    spark\n      .range(num)\n      .map(_ % div)\n      // Parquet doesn't allow column names with spaces, have to add an alias here.\n      // Minus 500 here so that negative decimals are also tested.\n      .select((($\"value\" - 500) / 100.0) cast decimal as Symbol(\"dec\"))\n      .coalesce(1)\n  }\n\n  def stripRandomPlanParts(plan: String): String = {\n    
plan.replaceFirst(\"file:.*,\", \"\").replaceAll(raw\"#\\d+\", \"\")\n  }\n\n  protected def checkCometExchange(\n      df: DataFrame,\n      cometExchangeNum: Int,\n      native: Boolean): Seq[CometShuffleExchangeExec] = {\n    if (CometConf.COMET_EXEC_SHUFFLE_ENABLED.get()) {\n      val sparkPlan = stripAQEPlan(df.queryExecution.executedPlan)\n\n      val cometShuffleExecs = sparkPlan.collect { case b: CometShuffleExchangeExec => b }\n      assert(\n        cometShuffleExecs.length == cometExchangeNum,\n        s\"$sparkPlan has ${cometShuffleExecs.length} \" +\n          s\" CometShuffleExchangeExec node which doesn't match the expected: $cometExchangeNum\")\n\n      if (native) {\n        cometShuffleExecs.foreach { b =>\n          assert(b.shuffleType == CometNativeShuffle)\n        }\n      } else {\n        cometShuffleExecs.foreach { b =>\n          assert(b.shuffleType == CometColumnarShuffle)\n        }\n      }\n\n      cometShuffleExecs\n    } else {\n      Seq.empty\n    }\n  }\n\n  /**\n   * The test encapsulates integration Comet test and does following:\n   *   - prepares data using SELECT query and saves it to the Parquet file in temp folder\n   *   - creates a temporary table with name `tableName` on top of temporary parquet file\n   *   - runs the query `testQuery` reading data from `tableName`\n   *\n   * Asserts the `testQuery` data with Comet is the same is with Apache Spark and also asserts\n   * only Comet operator are in the physical plan\n   *\n   * Example:\n   *\n   * {{{\n   *  test(\"native reader - read simple ARRAY fields with SHORT field\") {\n   *     testSingleLineQuery(\n   *       \"\"\"\n   *         |select array(cast(1 as short)) arr\n   *         |\"\"\".stripMargin,\n   *       \"select arr from tbl\",\n   *       sqlConf = Seq(\n   *         CometConf.COMET_SCAN_UNSIGNED_SMALL_INT_SAFETY_CHECK.key -> \"true\",\n   *         \"spark.comet.explainFallback.enabled\" -> \"false\"\n   *       ),\n   *       debugCometDF = df => {\n   *         df.printSchema()\n   *         df.explain(\"extended\")\n   *         df.show()\n   *       })\n   *   }\n   * }}}\n   *\n   * @param prepareQuery\n   *   prepare sample data with Comet disabled\n   * @param testQuery\n   *   the query to test. 
Typically with Comet enabled + other SQL config applied\n   * @param testName\n   *   test name\n   * @param tableName\n   *   table name where sample data stored\n   * @param sqlConf\n   *   additional spark sql configuration\n   * @param debugCometDF\n   *   optional debug access to DataFrame for `testQuery`\n   */\n  def testSingleLineQuery(\n      prepareQuery: String,\n      testQuery: String,\n      testName: String = \"test\",\n      tableName: String = \"tbl\",\n      sqlConf: Seq[(String, String)] = Seq.empty,\n      readSchema: Option[StructType] = None,\n      debugCometDF: DataFrame => Unit = _ => (),\n      checkCometOperator: Boolean = true): Unit = {\n\n    withTempDir { dir =>\n      val path = new Path(dir.toURI.toString, testName).toUri.toString\n      var data: java.util.List[Row] = new java.util.ArrayList()\n      var schema: StructType = null\n\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        val df = spark.sql(prepareQuery)\n        data = df.collectAsList()\n        schema = df.schema\n      }\n\n      spark.createDataFrame(data, schema).repartition(1).write.parquet(path)\n      readParquetFile(path, readSchema.orElse(Some(schema))) { df =>\n        df.createOrReplaceTempView(tableName)\n      }\n\n      withSQLConf(sqlConf: _*) {\n        val cometDF = sql(testQuery)\n        debugCometDF(cometDF)\n        if (checkCometOperator) checkSparkAnswerAndOperator(cometDF)\n        else checkSparkAnswer(cometDF)\n      }\n    }\n  }\n\n  def showString[T](\n      df: Dataset[T],\n      _numRows: Int,\n      truncate: Int = 20,\n      vertical: Boolean = false): String = {\n    df.showString(_numRows, truncate, vertical)\n  }\n\n  def makeParquetFile(\n      path: Path,\n      total: Int,\n      numGroups: Int,\n      dictionaryEnabled: Boolean): Unit = {\n    val schemaStr =\n      \"\"\"\n        |message root {\n        |  optional INT32                    _1(INT_8);\n        |  optional INT32                    _2(INT_16);\n        |  optional INT32                    _3;\n        |  optional INT64                    _4;\n        |  optional FLOAT                    _5;\n        |  optional DOUBLE                   _6;\n        |  optional INT32                    _7(DECIMAL(5, 2));\n        |  optional INT64                    _8(DECIMAL(18, 10));\n        |  optional FIXED_LEN_BYTE_ARRAY(16) _9(DECIMAL(38, 37));\n        |  optional INT64                    _10(TIMESTAMP(MILLIS,true));\n        |  optional INT64                    _11(TIMESTAMP(MICROS,true));\n        |  optional INT32                    _12(DATE);\n        |  optional INT32                    _g1(INT_8);\n        |  optional INT32                    _g2(INT_16);\n        |  optional INT32                    _g3;\n        |  optional INT64                    _g4;\n        |  optional FLOAT                    _g5;\n        |  optional DOUBLE                   _g6;\n        |  optional INT32                    _g7(DECIMAL(5, 2));\n        |  optional INT64                    _g8(DECIMAL(18, 10));\n        |  optional FIXED_LEN_BYTE_ARRAY(16) _g9(DECIMAL(38, 37));\n        |  optional INT64                    _g10(TIMESTAMP(MILLIS,true));\n        |  optional INT64                    _g11(TIMESTAMP(MICROS,true));\n        |  optional INT32                    _g12(DATE);\n        |  optional BINARY                   _g13(UTF8);\n        |  optional BINARY                   _g14;\n        |}\n      \"\"\".stripMargin\n\n    val schema = MessageTypeParser.parseMessageType(schemaStr)\n 
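    // Note: the writer below always enables dictionary encoding; the dictionaryEnabled\n    // flag instead narrows the generated values (i % 10) so that dictionary encoding\n    // actually kicks in.\n 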
   val writer = createParquetWriter(schema, path, dictionaryEnabled = true)\n\n    val rand = new scala.util.Random(42)\n    val expected = (0 until total).map { i =>\n      // use a single value for the first page, to make sure dictionary encoding kicks in\n      if (rand.nextBoolean()) None\n      else {\n        if (dictionaryEnabled) Some(i % 10) else Some(i)\n      }\n    }\n\n    expected.foreach { opt =>\n      val record = new SimpleGroup(schema)\n      opt match {\n        case Some(i) =>\n          record.add(0, i.toByte)\n          record.add(1, i.toShort)\n          record.add(2, i)\n          record.add(3, i.toLong)\n          record.add(4, rand.nextFloat())\n          record.add(5, rand.nextDouble())\n          record.add(6, i)\n          record.add(7, i.toLong)\n          record.add(8, (i % 10).toString * 16)\n          record.add(9, i.toLong)\n          record.add(10, i.toLong)\n          record.add(11, i)\n          record.add(12, i.toByte % numGroups)\n          record.add(13, i.toShort % numGroups)\n          record.add(14, i % numGroups)\n          record.add(15, i.toLong % numGroups)\n          record.add(16, rand.nextFloat())\n          record.add(17, rand.nextDouble())\n          record.add(18, i)\n          record.add(19, i.toLong)\n          record.add(20, (i % 10).toString * 16)\n          record.add(21, i.toLong)\n          record.add(22, i.toLong)\n          record.add(23, i)\n          record.add(24, (i % 10).toString * 24)\n          record.add(25, (i % 10).toString * 36)\n        case _ =>\n      }\n      writer.write(record)\n    }\n\n    writer.close()\n  }\n\n  def hasUnsignedSmallIntSafetyCheck(conf: SQLConf): Boolean = {\n    CometConf.COMET_PARQUET_UNSIGNED_SMALL_INT_CHECK.get(conf)\n  }\n\n  /**\n   * Compares Spark and Comet results using foreach() and exceptAll() to avoid collect()\n   */\n  protected def assertDataFrameEqualsWithExceptions(\n      df: => DataFrame,\n      assertCometNative: Boolean = true): (Option[Throwable], Option[Throwable]) = {\n\n    var expected: Try[Unit] = null\n    withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n      expected = Try(datasetOfRows(spark, df.logicalPlan).foreach(_ => ()))\n    }\n    val actual = Try(datasetOfRows(spark, df.logicalPlan).foreach(_ => ()))\n\n    (expected, actual) match {\n      case (Success(_), Success(_)) =>\n        // compare results and confirm that they match\n        var dfSpark: DataFrame = null\n        withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n          dfSpark = datasetOfRows(spark, df.logicalPlan)\n        }\n        val dfComet = datasetOfRows(spark, df.logicalPlan)\n\n        // Compare schemas\n        assert(\n          dfSpark.schema == dfComet.schema,\n          s\"Schema mismatch:\\nSpark: ${dfSpark.schema}\\nComet: ${dfComet.schema}\")\n\n        val sparkMinusComet = dfSpark.exceptAll(dfComet)\n        val cometMinusSpark = dfComet.exceptAll(dfSpark)\n        val diffCount1 = sparkMinusComet.count()\n        val diffCount2 = cometMinusSpark.count()\n\n        if (diffCount1 > 0 || diffCount2 > 0) {\n          fail(\n            \"Results do not match. \" +\n              s\"Rows in Spark but not Comet: $diffCount1. \" +\n              s\"Rows in Comet but not Spark: $diffCount2.\")\n        }\n\n        if (assertCometNative) {\n          checkCometOperators(stripAQEPlan(dfComet.queryExecution.executedPlan))\n        }\n\n        (None, None)\n      case _ =>\n        (expected.failed.toOption, actual.failed.toOption)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/GenTPCHData.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport java.io.{File, PrintWriter}\nimport java.nio.file.Files\nimport java.nio.file.Path\n\nimport scala.sys.process._\nimport scala.util.Try\n\nimport org.apache.commons.io.FileUtils\nimport org.apache.spark.deploy.SparkHadoopUtil\n\n/**\n * This class generates TPCH table data by using tpch-dbgen:\n *   - https://github.com/databricks/tpch-dbgen\n *\n * To run this:\n * {{{\n *   make benchmark-org.apache.spark.sql.GenTPCHData -- --location <path> --scaleFactor 1\n * }}}\n */\nobject GenTPCHData {\n  val TEMP_DBGEN_DIR: Path = new File(\"/tmp\").toPath\n  val DBGEN_DIR_PREFIX = \"tempTPCHGen\"\n\n  def main(args: Array[String]): Unit = {\n    val config = new GenTPCHDataConfig(args)\n\n    val spark = SparkSession\n      .builder()\n      .appName(getClass.getName)\n      .master(config.master)\n      .getOrCreate()\n\n    setScaleConfig(spark, config.scaleFactor)\n\n    // Number of worker nodes\n    val workers = spark.sparkContext.getExecutorMemoryStatus.size\n\n    var defaultDbgenDir: File = null\n\n    val dbgenDir = if (config.dbgenDir == null) {\n      defaultDbgenDir = Files.createTempDirectory(TEMP_DBGEN_DIR, DBGEN_DIR_PREFIX).toFile\n      val baseDir = defaultDbgenDir.getAbsolutePath\n      defaultDbgenDir.delete()\n      // Install the data generators in all nodes\n      // TODO: think a better way to install on each worker node\n      //       such as https://stackoverflow.com/a/40876671\n      spark.range(0, workers, 1, workers).foreach(worker => installDBGEN(baseDir)(worker))\n      s\"${baseDir}/dbgen\"\n    } else {\n      config.dbgenDir\n    }\n\n    val tables = new TPCHTables(spark.sqlContext, dbgenDir, config.scaleFactor.toString)\n\n    // Generate data\n    // Since dbgen may uses stdout to output the data, tables.genData needs to run table by table\n    val tableNames =\n      if (config.tableFilter.trim.isEmpty) tables.tables.map(_.name) else Seq(config.tableFilter)\n    tableNames.foreach { tableName =>\n      tables.genData(\n        location = s\"${config.location}/tpch/sf${config.scaleFactor}_${config.format}\",\n        format = config.format,\n        overwrite = config.overwrite,\n        partitionTables = config.partitionTables,\n        clusterByPartitionColumns = config.clusterByPartitionColumns,\n        filterOutNullPartitionValues = config.filterOutNullPartitionValues,\n        tableFilter = tableName,\n        numPartitions = config.numPartitions)\n    }\n\n    // Clean up\n    if (defaultDbgenDir != null) {\n      spark.range(0, workers, 1, workers).foreach { _ =>\n        val _ = FileUtils.deleteQuietly(defaultDbgenDir)\n      }\n    }\n\n    spark.stop()\n  }\n\n  def 
(spark: SparkSession, scaleFactor: Int): Unit = {\n    // Avoid OOM when shuffling large scale factors, and errors such as the 2GB shuffle\n    // frame limit at 10TB, e.g.:\n    // org.apache.spark.shuffle.FetchFailedException: Too large frame: 9640891355\n    // For 10TB, 16x4-core nodes were needed with the config below; 8x for 1TB and below.\n    // About 24 hrs. for SF 1 to 10,000.\n    if (scaleFactor >= 10000) {\n      spark.conf.set(\"spark.sql.shuffle.partitions\", \"20000\")\n      SparkHadoopUtil.get.conf.set(\"parquet.memory.pool.ratio\", \"0.1\")\n    } else if (scaleFactor >= 1000) {\n      spark.conf.set(\n        \"spark.sql.shuffle.partitions\",\n        \"2001\"\n      ) // one above 2000 to use HighlyCompressedMapStatus\n      SparkHadoopUtil.get.conf.set(\"parquet.memory.pool.ratio\", \"0.3\")\n    } else {\n      spark.conf.set(\"spark.sql.shuffle.partitions\", \"200\") // default\n      SparkHadoopUtil.get.conf.set(\"parquet.memory.pool.ratio\", \"0.5\")\n    }\n  }\n\n  // Install tpch-dbgen (with the stdout patch)\n  def installDBGEN(\n      baseDir: String,\n      url: String = \"https://github.com/databricks/tpch-dbgen.git\",\n      useStdout: Boolean = true)(i: java.lang.Long): Unit = {\n    // Check if we want the revision which makes dbgen output to stdout\n    val checkoutRevision: String =\n      if (useStdout) \"git checkout 0469309147b42abac8857fa61b4cf69a6d3128a8 -- bm_utils.c\" else \"\"\n\n    Seq(\"mkdir\", \"-p\", baseDir).!\n    val pw = new PrintWriter(s\"${baseDir}/dbgen_$i.sh\")\n    pw.write(s\"\"\"\n      |rm -rf ${baseDir}/dbgen\n      |rm -rf ${baseDir}/dbgen_install_$i\n      |mkdir ${baseDir}/dbgen_install_$i\n      |cd ${baseDir}/dbgen_install_$i\n      |git clone '$url'\n      |cd tpch-dbgen\n      |$checkoutRevision\n      |sed -i'' -e 's/#include <malloc\\\\.h>/#ifndef __APPLE__\\\\n#include <malloc\\\\.h>\\\\n#endif/' bm_utils.c\n      |sed -i'' -e 's/#include <malloc\\\\.h>/#if defined(__MACH__)\\\\n#include <stdlib\\\\.h>\\\\n#else\\\\n#include <malloc\\\\.h>\\\\n#endif/' varsub.c\n      |make\n      |ln -sf ${baseDir}/dbgen_install_$i/tpch-dbgen ${baseDir}/dbgen || echo \"ln -sf failed\"\n      |test -e ${baseDir}/dbgen/dbgen\n      |echo \"OK\"\n      \"\"\".stripMargin)\n    pw.close()\n    Seq(\"chmod\", \"+x\", s\"${baseDir}/dbgen_$i.sh\").!\n    Seq(s\"${baseDir}/dbgen_$i.sh\").!!\n  }\n}\n\nclass GenTPCHDataConfig(args: Array[String]) {\n  var master: String = \"local[*]\"\n  var dbgenDir: String = null\n  var location: String = null\n  var scaleFactor: Int = 1\n  var format: String = \"parquet\"\n  var overwrite: Boolean = false\n  var partitionTables: Boolean = false\n  var clusterByPartitionColumns: Boolean = false\n  var filterOutNullPartitionValues: Boolean = false\n  var tableFilter: String = \"\"\n  var numPartitions: Int = 100\n\n  parseArgs(args.toList)\n\n  private def parseArgs(inputArgs: List[String]): Unit = {\n    var args = inputArgs\n\n    while (args.nonEmpty) {\n      args match {\n        case \"--master\" :: value :: tail =>\n          master = value\n          args = tail\n\n        case \"--dbgenDir\" :: value :: tail =>\n          dbgenDir = value\n          args = tail\n\n        case \"--location\" :: value :: tail =>\n          location = value\n          args = tail\n\n        case \"--scaleFactor\" :: value :: tail =>\n          scaleFactor = toPositiveIntValue(\"Scale factor\", value)\n          args = tail\n\n        case \"--format\" :: value :: tail =>\n          format = value\n          args = tail\n\n
case \"--overwrite\" :: tail =>\n          overwrite = true\n          args = tail\n\n        case \"--partitionTables\" :: tail =>\n          partitionTables = true\n          args = tail\n\n        case \"--clusterByPartitionColumns\" :: tail =>\n          clusterByPartitionColumns = true\n          args = tail\n\n        case \"--filterOutNullPartitionValues\" :: tail =>\n          filterOutNullPartitionValues = true\n          args = tail\n\n        case \"--tableFilter\" :: value :: tail =>\n          tableFilter = value\n          args = tail\n\n        case \"--numPartitions\" :: value :: tail =>\n          numPartitions = toPositiveIntValue(\"Number of partitions\", value)\n          args = tail\n\n        case \"--help\" :: _ =>\n          printUsageAndExit(0)\n\n        case _ =>\n          // scalastyle:off println\n          System.err.println(\"Unknown/unsupported param \" + args)\n          // scalastyle:on println\n          printUsageAndExit(1)\n      }\n    }\n\n    checkRequiredArguments()\n  }\n\n  private def printUsageAndExit(exitCode: Int): Unit = {\n    // scalastyle:off\n    System.err.println(\"\"\"\n      |make benchmark-org.apache.spark.sql.GenTPCHData -- [Options]\n      |Options:\n      |  --master                        the Spark master to use, default to local[*]\n      |  --dbgenDir                      location of dbgen on worker nodes, if not provided, installs default dbgen\n      |  --location                      root directory of location to generate data in\n      |  --scaleFactor                   size of the dataset to generate (in GB)\n      |  --format                        generated data format, Parquet, ORC ...\n      |  --overwrite                     whether to overwrite the data that is already there\n      |  --partitionTables               whether to create the partitioned fact tables\n      |  --clusterByPartitionColumns     whether to shuffle to get partitions coalesced into single files\n      |  --filterOutNullPartitionValues  whether to filter out the partition with NULL key value\n      |  --tableFilter                   comma-separated list of table names to generate (e.g., store_sales,store_returns),\n      |                                  all the tables are generated by default\n      |  --numPartitions                 how many dbgen partitions to run - number of input tasks\n      \"\"\".stripMargin)\n    // scalastyle:on\n    System.exit(exitCode)\n  }\n\n  private def toPositiveIntValue(name: String, v: String): Int = {\n    if (Try(v.toInt).getOrElse(-1) <= 0) {\n      // scalastyle:off println\n      System.err.println(s\"$name must be a positive number\")\n      // scalastyle:on println\n      printUsageAndExit(-1)\n    }\n    v.toInt\n  }\n\n  private def checkRequiredArguments(): Unit = {\n    if (location == null) {\n      // scalastyle:off println\n      System.err.println(\"Must specify an output location\")\n      // scalastyle:on println\n      printUsageAndExit(-1)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/TPCDSQueries.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\ntrait TPCDSQueries {\n  // List of all TPC-DS v1.4 queries\n  val tpcdsQueries: Seq[String] = Seq(\n    \"q1\",\n    \"q2\",\n    \"q3\",\n    \"q4\",\n    \"q5\",\n    \"q6\",\n    \"q7\",\n    \"q8\",\n    \"q9\",\n    \"q10\",\n    \"q11\",\n    \"q12\",\n    \"q13\",\n    \"q14a\",\n    \"q14b\",\n    \"q15\",\n    \"q16\",\n    \"q17\",\n    \"q18\",\n    \"q19\",\n    \"q20\",\n    \"q21\",\n    \"q22\",\n    \"q23a\",\n    \"q23b\",\n    \"q24a\",\n    \"q24b\",\n    \"q25\",\n    \"q26\",\n    \"q27\",\n    \"q28\",\n    \"q29\",\n    \"q30\",\n    \"q31\",\n    \"q32\",\n    \"q33\",\n    \"q34\",\n    \"q35\",\n    \"q36\",\n    \"q37\",\n    \"q38\",\n    \"q39a\",\n    \"q39b\",\n    \"q40\",\n    \"q41\",\n    \"q42\",\n    \"q43\",\n    \"q44\",\n    \"q45\",\n    \"q46\",\n    \"q47\",\n    \"q48\",\n    \"q49\",\n    \"q50\",\n    \"q51\",\n    \"q52\",\n    \"q53\",\n    \"q54\",\n    \"q55\",\n    \"q56\",\n    \"q57\",\n    \"q58\",\n    \"q59\",\n    \"q60\",\n    \"q61\",\n    \"q62\",\n    \"q63\",\n    \"q64\",\n    \"q65\",\n    \"q66\",\n    \"q67\",\n    \"q68\",\n    \"q69\",\n    \"q70\",\n    \"q71\",\n    \"q72\",\n    \"q73\",\n    \"q74\",\n    \"q75\",\n    \"q76\",\n    \"q77\",\n    \"q78\",\n    \"q79\",\n    \"q80\",\n    \"q81\",\n    \"q82\",\n    \"q83\",\n    \"q84\",\n    \"q85\",\n    \"q86\",\n    \"q87\",\n    \"q88\",\n    \"q89\",\n    \"q90\",\n    \"q91\",\n    \"q92\",\n    \"q93\",\n    \"q94\",\n    \"q95\",\n    \"q96\",\n    \"q97\",\n    \"q98\",\n    \"q99\")\n\n  // This list only includes TPC-DS v2.7 queries that are different from v1.4 ones\n  val nameSuffixForQueriesV2_7 = \"-v2.7\"\n  val tpcdsQueriesV2_7: Seq[String] = Seq(\n    \"q5a\",\n    \"q6\",\n    \"q10a\",\n    \"q11\",\n    \"q12\",\n    \"q14\",\n    \"q14a\",\n    \"q18a\",\n    \"q20\",\n    \"q22\",\n    \"q22a\",\n    \"q24\",\n    \"q27a\",\n    \"q34\",\n    \"q35\",\n    \"q35a\",\n    \"q36a\",\n    \"q47\",\n    \"q49\",\n    \"q51a\",\n    \"q57\",\n    \"q64\",\n    \"q67a\",\n    \"q70a\",\n    \"q72\",\n    \"q74\",\n    \"q75\",\n    \"q77a\",\n    \"q78\",\n    \"q80a\",\n    \"q86a\",\n    \"q98\")\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/TPCH.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport org.apache.spark.SparkContext\nimport org.apache.spark.rdd.RDD\n\n/**\n * Mostly copied from\n * https://github.com/databricks/spark-sql-perf/blob/master/src/main/scala/com/databricks/spark/sql/perf/tpch/TPCH.scala\n *\n * TODO: port this back to the upstream Spark similar to TPCDS benchmark\n */\n\nclass Dbgen(dbgenDir: String, params: Seq[String]) extends DataGenerator {\n  val dbgen: String = s\"$dbgenDir/dbgen\"\n\n  def generate(\n      sparkContext: SparkContext,\n      name: String,\n      partitions: Int,\n      scaleFactor: String): RDD[String] = {\n    val smallTables = Seq(\"nation\", \"region\")\n    val numPartitions = if (partitions > 1 && !smallTables.contains(name)) partitions else 1\n    val generatedData = {\n      sparkContext\n        .parallelize(1 to numPartitions, numPartitions)\n        .flatMap { i =>\n          val localToolsDir = if (new java.io.File(dbgen).exists) {\n            dbgenDir\n          } else if (new java.io.File(s\"/$dbgenDir\").exists) {\n            s\"/$dbgenDir\"\n          } else {\n            sys.error(s\"Could not find dbgen at $dbgen or /$dbgenDir. 
Run install\")\n          }\n          val parallel = if (numPartitions > 1) s\"-C $partitions -S $i\" else \"\"\n          val shortTableNames = Map(\n            \"customer\" -> \"c\",\n            \"lineitem\" -> \"L\",\n            \"nation\" -> \"n\",\n            \"orders\" -> \"O\",\n            \"part\" -> \"P\",\n            \"region\" -> \"r\",\n            \"supplier\" -> \"s\",\n            \"partsupp\" -> \"S\")\n          val paramsString = params.mkString(\" \")\n          val commands = Seq(\n            \"bash\",\n            \"-c\",\n            s\"cd $localToolsDir && ./dbgen -q $paramsString -T ${shortTableNames(name)} \" +\n              s\"-s $scaleFactor $parallel\")\n          println(commands)\n          BlockingLineStream(commands)\n        }\n        .repartition(numPartitions)\n    }\n\n    generatedData.setName(s\"$name, sf=$scaleFactor, strings\")\n    generatedData\n  }\n}\n\nclass TPCHTables(\n    sqlContext: SQLContext,\n    dbgenDir: String,\n    scaleFactor: String,\n    useDoubleForDecimal: Boolean = false,\n    useStringForDate: Boolean = false,\n    generatorParams: Seq[String] = Nil)\n    extends Tables(sqlContext, scaleFactor, useDoubleForDecimal, useStringForDate) {\n  import sqlContext.implicits._\n\n  val dataGenerator = new Dbgen(dbgenDir, generatorParams)\n\n  val tables: Seq[Table] = Seq(\n    Table(\n      \"part\",\n      partitionColumns = \"p_brand\" :: Nil,\n      'p_partkey.long,\n      'p_name.string,\n      'p_mfgr.string,\n      'p_brand.string,\n      'p_type.string,\n      'p_size.int,\n      'p_container.string,\n      'p_retailprice.decimal(12, 2),\n      'p_comment.string),\n    Table(\n      \"supplier\",\n      partitionColumns = Nil,\n      's_suppkey.long,\n      's_name.string,\n      's_address.string,\n      's_nationkey.long,\n      's_phone.string,\n      's_acctbal.decimal(12, 2),\n      's_comment.string),\n    Table(\n      \"partsupp\",\n      partitionColumns = Nil,\n      'ps_partkey.long,\n      'ps_suppkey.long,\n      'ps_availqty.int,\n      'ps_supplycost.decimal(12, 2),\n      'ps_comment.string),\n    Table(\n      \"customer\",\n      partitionColumns = \"c_mktsegment\" :: Nil,\n      'c_custkey.long,\n      'c_name.string,\n      'c_address.string,\n      'c_nationkey.long,\n      'c_phone.string,\n      'c_acctbal.decimal(12, 2),\n      'c_mktsegment.string,\n      'c_comment.string),\n    Table(\n      \"orders\",\n      partitionColumns = \"o_orderdate\" :: Nil,\n      'o_orderkey.long,\n      'o_custkey.long,\n      'o_orderstatus.string,\n      'o_totalprice.decimal(12, 2),\n      'o_orderdate.date,\n      'o_orderpriority.string,\n      'o_clerk.string,\n      'o_shippriority.int,\n      'o_comment.string),\n    Table(\n      \"lineitem\",\n      partitionColumns = \"l_shipdate\" :: Nil,\n      'l_orderkey.long,\n      'l_partkey.long,\n      'l_suppkey.long,\n      'l_linenumber.int,\n      'l_quantity.decimal(12, 2),\n      'l_extendedprice.decimal(12, 2),\n      'l_discount.decimal(12, 2),\n      'l_tax.decimal(12, 2),\n      'l_returnflag.string,\n      'l_linestatus.string,\n      'l_shipdate.date,\n      'l_commitdate.date,\n      'l_receiptdate.date,\n      'l_shipinstruct.string,\n      'l_shipmode.string,\n      'l_comment.string),\n    Table(\n      \"nation\",\n      partitionColumns = Nil,\n      'n_nationkey.long,\n      'n_name.string,\n      'n_regionkey.long,\n      'n_comment.string),\n    Table(\"region\", partitionColumns = Nil, 'r_regionkey.long, 'r_name.string, 
'r_comment.string))\n    .map(_.convertTypes())\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/Tables.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport scala.util.Try\n\nimport org.slf4j.LoggerFactory\n\nimport org.apache.spark.SparkContext\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.functions._\nimport org.apache.spark.sql.types._\n\n/**\n * Mostly copied from\n * https://github.com/databricks/spark-sql-perf/blob/master/src/main/scala/com/databricks/spark/sql/perf/Tables.scala\n *\n * TODO: port this back to the upstream Spark similar to TPCDS benchmark\n */\ntrait DataGenerator extends Serializable {\n  def generate(\n      sparkContext: SparkContext,\n      name: String,\n      partitions: Int,\n      scaleFactor: String): RDD[String]\n}\n\nabstract class Tables(\n    sqlContext: SQLContext,\n    scaleFactor: String,\n    useDoubleForDecimal: Boolean = false,\n    useStringForDate: Boolean = false)\n    extends Serializable {\n\n  def dataGenerator: DataGenerator\n  def tables: Seq[Table]\n\n  private val log = LoggerFactory.getLogger(getClass)\n\n  def sparkContext = sqlContext.sparkContext\n\n  case class Table(name: String, partitionColumns: Seq[String], fields: StructField*) {\n    val schema: StructType = StructType(fields)\n\n    def nonPartitioned: Table = {\n      Table(name, Nil, fields: _*)\n    }\n\n    /**\n     * If convertToSchema is true, the data from generator will be parsed into columns and\n     * converted to `schema`. 
Otherwise, it just outputs the raw data (as a single STRING column).\n     */\n    def df(convertToSchema: Boolean, numPartition: Int): DataFrame = {\n      val generatedData = dataGenerator.generate(sparkContext, name, numPartition, scaleFactor)\n      val rows = generatedData.mapPartitions { iter =>\n        iter.map { l =>\n          if (convertToSchema) {\n            val values = l.split(\"\\\\|\", -1).dropRight(1).map { v =>\n              if (v.equals(\"\")) {\n                // If the string value is an empty string, we turn it to a null\n                null\n              } else {\n                v\n              }\n            }\n            Row.fromSeq(values)\n          } else {\n            Row.fromSeq(Seq(l))\n          }\n        }\n      }\n\n      if (convertToSchema) {\n        val stringData =\n          sqlContext.createDataFrame(\n            rows,\n            StructType(schema.fields.map(f => StructField(f.name, StringType))))\n\n        val convertedData = {\n          val columns = schema.fields.map { f =>\n            col(f.name).cast(f.dataType).as(f.name)\n          }\n          stringData.select(columns: _*)\n        }\n\n        convertedData\n      } else {\n        sqlContext.createDataFrame(rows, StructType(Seq(StructField(\"value\", StringType))))\n      }\n    }\n\n    def convertTypes(): Table = {\n      val newFields = fields.map { field =>\n        val newDataType = field.dataType match {\n          case _: DecimalType if useDoubleForDecimal => DoubleType\n          case _: DateType if useStringForDate => StringType\n          case other => other\n        }\n        field.copy(dataType = newDataType)\n      }\n\n      Table(name, partitionColumns, newFields: _*)\n    }\n\n    def genData(\n        location: String,\n        format: String,\n        overwrite: Boolean,\n        clusterByPartitionColumns: Boolean,\n        filterOutNullPartitionValues: Boolean,\n        numPartitions: Int): Unit = {\n      val mode = if (overwrite) SaveMode.Overwrite else SaveMode.Ignore\n\n      val data = df(format != \"text\", numPartitions)\n      val tempTableName = s\"${name}_text\"\n      data.createOrReplaceTempView(tempTableName)\n\n      val writer = if (partitionColumns.nonEmpty) {\n        if (clusterByPartitionColumns) {\n          val columnString = data.schema.fields\n            .map { field =>\n              field.name\n            }\n            .mkString(\",\")\n          val partitionColumnString = partitionColumns.mkString(\",\")\n          val predicates = if (filterOutNullPartitionValues) {\n            partitionColumns.map(col => s\"$col IS NOT NULL\").mkString(\"WHERE \", \" AND \", \"\")\n          } else {\n            \"\"\n          }\n\n          val query =\n            s\"\"\"\n               |SELECT\n               |  $columnString\n               |FROM\n               |  $tempTableName\n               |$predicates\n               |DISTRIBUTE BY\n               |  $partitionColumnString\n            \"\"\".stripMargin\n          val grouped = sqlContext.sql(query)\n          println(s\"Pre-clustering with partitioning columns with query $query.\")\n          log.info(s\"Pre-clustering with partitioning columns with query $query.\")\n          grouped.write\n        } else {\n          data.write\n        }\n      } else {\n        // treat non-partitioned tables as \"one partition\" that we want to coalesce\n        if (clusterByPartitionColumns) {\n          // in case data has more than maxRecordsPerFile, split into multiple writers 
to improve\n          // datagen speed; files will be truncated to the maxRecordsPerFile value, so the\n          // final result will be the same\n          val numRows = data.count\n          val maxRecordPerFile =\n            Try(sqlContext.getConf(\"spark.sql.files.maxRecordsPerFile\").toInt).getOrElse(0)\n\n          println(\n            s\"Data has $numRows rows clustered $clusterByPartitionColumns for $maxRecordPerFile\")\n          log.info(\n            s\"Data has $numRows rows clustered $clusterByPartitionColumns for $maxRecordPerFile\")\n\n          if (maxRecordPerFile > 0 && numRows > maxRecordPerFile) {\n            val numFiles = (numRows.toDouble / maxRecordPerFile).ceil.toInt\n            println(s\"Coalescing into $numFiles files\")\n            log.info(s\"Coalescing into $numFiles files\")\n            data.coalesce(numFiles).write\n          } else {\n            data.coalesce(1).write\n          }\n        } else {\n          data.write\n        }\n      }\n      writer.format(format).mode(mode)\n      if (partitionColumns.nonEmpty) {\n        writer.partitionBy(partitionColumns: _*)\n      }\n      println(s\"Generating table $name in database to $location with save mode $mode.\")\n      log.info(s\"Generating table $name in database to $location with save mode $mode.\")\n      writer.save(location)\n      sqlContext.dropTempTable(tempTableName)\n    }\n  }\n\n  def genData(\n      location: String,\n      format: String,\n      overwrite: Boolean,\n      partitionTables: Boolean,\n      clusterByPartitionColumns: Boolean,\n      filterOutNullPartitionValues: Boolean,\n      tableFilter: String = \"\",\n      numPartitions: Int = 100): Unit = {\n    var tablesToBeGenerated = if (partitionTables) {\n      tables\n    } else {\n      tables.map(_.nonPartitioned)\n    }\n\n    if (tableFilter.nonEmpty) {\n      tablesToBeGenerated = tablesToBeGenerated.filter(_.name == tableFilter)\n      if (tablesToBeGenerated.isEmpty) {\n        throw new RuntimeException(\"Bad table name filter: \" + tableFilter)\n      }\n    }\n\n    tablesToBeGenerated.foreach { table =>\n      val tableLocation = s\"$location/${table.name}\"\n      table.genData(\n        tableLocation,\n        format,\n        overwrite,\n        clusterByPartitionColumns,\n        filterOutNullPartitionValues,\n        numPartitions)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometAggregateExpressionBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class AggExprConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Comprehensive benchmark for Comet aggregate functions. To run this benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometAggregateExpressionBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometAggregateFunctionBenchmark-**results.txt\".\n */\nobject CometAggregateExpressionBenchmark extends CometBenchmarkBase {\n\n  private val basicAggregates = List(\n    AggExprConfig(\"count\", \"SELECT COUNT(*) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"count_col\", \"SELECT COUNT(c_int) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\n      \"count_distinct\",\n      \"SELECT COUNT(DISTINCT c_int) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"min_int\", \"SELECT MIN(c_int) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"max_int\", \"SELECT MAX(c_int) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"min_double\", \"SELECT MIN(c_double) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"max_double\", \"SELECT MAX(c_double) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"sum_int\", \"SELECT SUM(c_int) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"sum_long\", \"SELECT SUM(c_long) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"sum_double\", \"SELECT SUM(c_double) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"avg_int\", \"SELECT AVG(c_int) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"avg_double\", \"SELECT AVG(c_double) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"first\", \"SELECT FIRST(c_int) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\n      \"first_ignore_nulls\",\n      \"SELECT FIRST(c_int, true) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"last\", \"SELECT LAST(c_int) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\n      \"last_ignore_nulls\",\n      \"SELECT LAST(c_int, true) FROM parquetV1Table GROUP BY grp\"))\n\n  private val statisticalAggregates = List(\n    AggExprConfig(\"var_samp\", \"SELECT VAR_SAMP(c_double) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"var_pop\", \"SELECT VAR_POP(c_double) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"stddev_samp\", \"SELECT STDDEV_SAMP(c_double) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"stddev_pop\", \"SELECT STDDEV_POP(c_double) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\n      \"covar_samp\",\n      \"SELECT COVAR_SAMP(c_double, c_double2) FROM 
parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\n      \"covar_pop\",\n      \"SELECT COVAR_POP(c_double, c_double2) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"corr\", \"SELECT CORR(c_double, c_double2) FROM parquetV1Table GROUP BY grp\"))\n\n  private val bitwiseAggregates = List(\n    AggExprConfig(\"bit_and\", \"SELECT BIT_AND(c_long) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"bit_or\", \"SELECT BIT_OR(c_long) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"bit_xor\", \"SELECT BIT_XOR(c_long) FROM parquetV1Table GROUP BY grp\"))\n\n  // Additional structural tests (multiple group keys, multiple aggregates)\n  private val multiKeyAggregates = List(\n    AggExprConfig(\"sum_multi_key\", \"SELECT SUM(c_int) FROM parquetV1Table GROUP BY grp, grp2\"),\n    AggExprConfig(\"avg_multi_key\", \"SELECT AVG(c_double) FROM parquetV1Table GROUP BY grp, grp2\"))\n\n  private val multiAggregates = List(\n    AggExprConfig(\"sum_sum\", \"SELECT SUM(c_int), SUM(c_long) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"min_max\", \"SELECT MIN(c_int), MAX(c_int) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\n      \"count_sum_avg\",\n      \"SELECT COUNT(*), SUM(c_int), AVG(c_double) FROM parquetV1Table GROUP BY grp\"))\n\n  // Decimal aggregates\n  private val decimalAggregates = List(\n    AggExprConfig(\"sum_decimal\", \"SELECT SUM(c_decimal) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"avg_decimal\", \"SELECT AVG(c_decimal) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"min_decimal\", \"SELECT MIN(c_decimal) FROM parquetV1Table GROUP BY grp\"),\n    AggExprConfig(\"max_decimal\", \"SELECT MAX(c_decimal) FROM parquetV1Table GROUP BY grp\"))\n\n  // High cardinality tests\n  private val highCardinalityAggregates = List(\n    AggExprConfig(\n      \"sum_high_card\",\n      \"SELECT SUM(c_int) FROM parquetV1Table GROUP BY high_card_grp\"),\n    AggExprConfig(\n      \"count_distinct_high_card\",\n      \"SELECT COUNT(DISTINCT c_int) FROM parquetV1Table GROUP BY high_card_grp\"))\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024\n\n    runBenchmarkWithTable(\"Aggregate function benchmarks\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT\n                CAST(value % 1000 AS INT) AS grp,\n                CAST(value % 100 AS INT) AS grp2,\n                CAST(value % 100000 AS INT) AS high_card_grp,\n                CASE WHEN value % 100 = 0 THEN NULL ELSE CAST((value % 10000) - 5000 AS INT) END AS c_int,\n                CASE WHEN value % 100 = 1 THEN NULL ELSE CAST(value * 1000 AS LONG) END AS c_long,\n                CASE WHEN value % 100 = 2 THEN NULL ELSE CAST((value % 10000) / 100.0 AS DOUBLE) END AS c_double,\n                CASE WHEN value % 100 = 3 THEN NULL ELSE CAST((value % 5000) / 50.0 AS DOUBLE) END AS c_double2,\n                CASE WHEN value % 100 = 4 THEN NULL ELSE CAST((value % 10000 - 5000) / 100.0 AS DECIMAL(18, 10)) END AS c_decimal\n              FROM $tbl\n            \"\"\"))\n\n          val allAggregates = basicAggregates ++ statisticalAggregates ++ bitwiseAggregates ++\n            multiKeyAggregates ++ multiAggregates ++ decimalAggregates ++ highCardinalityAggregates\n\n          allAggregates.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n 
         }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometArithmeticBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.sql.types._\n\n/**\n * Benchmark to measure Comet expression evaluation performance. To run this benchmark:\n * `SPARK_GENERATE_BENCHMARK_FILES=1 make\n * benchmark-org.apache.spark.sql.benchmark.CometArithmeticBenchmark` Results will be written to\n * \"spark/benchmarks/CometArithmeticBenchmark-**results.txt\".\n */\nobject CometArithmeticBenchmark extends CometBenchmarkBase {\n  private val table = \"parquetV1Table\"\n\n  def integerArithmeticBenchmark(values: Int, op: BinaryOp, useDictionary: Boolean): Unit = {\n    val dataType = IntegerType\n\n    withTempPath { dir =>\n      withTempTable(table) {\n        prepareTable(\n          dir,\n          spark.sql(\n            s\"SELECT CAST(value AS ${dataType.sql}) AS c1, \" +\n              s\"CAST(value AS ${dataType.sql}) c2 FROM $tbl\"))\n\n        val name = s\"Binary op ${dataType.sql}, dictionary = $useDictionary\"\n        val query = s\"SELECT c1 ${op.sig} c2 FROM $table\"\n\n        runExpressionBenchmark(name, values, query)\n      }\n    }\n  }\n\n  def decimalArithmeticBenchmark(\n      values: Int,\n      dataType: DecimalType,\n      op: BinaryOp,\n      useDictionary: Boolean): Unit = {\n    val df = makeDecimalDataFrame(values, dataType, useDictionary)\n\n    withTempPath { dir =>\n      withTempTable(table) {\n        df.createOrReplaceTempView(tbl)\n        prepareTable(dir, spark.sql(s\"SELECT dec AS c1, dec AS c2 FROM $tbl\"))\n\n        val name = s\"Binary op ${dataType.sql}, dictionary = $useDictionary\"\n        val query = s\"SELECT c1 ${op.sig} c2 FROM $table\"\n\n        runExpressionBenchmark(name, values, query)\n      }\n    }\n  }\n\n  private val TOTAL: Int = 1024 * 1024 * 10\n\n  override def runCometBenchmark(args: Array[String]): Unit = {\n    Seq(true, false).foreach { useDictionary =>\n      Seq(Plus, Mul, Div).foreach { op =>\n        for ((precision, scale) <- Seq((5, 2), (18, 10), (38, 37))) {\n          runBenchmark(op.name) {\n            decimalArithmeticBenchmark(TOTAL, DecimalType(precision, scale), op, useDictionary)\n          }\n        }\n      }\n    }\n\n    Seq(true, false).foreach { useDictionary =>\n      Seq(Minus, Mul).foreach { op =>\n        runBenchmarkWithTable(op.name, TOTAL, useDictionary) { v =>\n          integerArithmeticBenchmark(v, op, useDictionary)\n        }\n      }\n    }\n  }\n\n  private val Plus: BinaryOp = BinaryOp(\"plus\", \"+\")\n  private val Minus: BinaryOp = BinaryOp(\"minus\", \"-\")\n  private val Mul: BinaryOp = BinaryOp(\"mul\", \"*\")\n  private val Div: BinaryOp = BinaryOp(\"div\", \"/\")\n\n  case class BinaryOp(name: String, sig: String) {\n    override 
def toString: String = name\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometArrayExpressionBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\n/**\n * Benchmark to measure performance of Comet array expressions. To run this benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometArrayExpressionBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometArrayExpressionBenchmark-**results.txt\".\n */\nobject CometArrayExpressionBenchmark extends CometBenchmarkBase {\n\n  private def buildWideIntArrayExpr(width: Int, modulus: Int): String = {\n    require(width > 0, \"width must be positive\")\n\n    (0 until width)\n      .map { i =>\n        val seed = 13 + i * 17\n        if (i % 11 == 0) {\n          s\"CASE WHEN value % 32 = 0 THEN NULL ELSE CAST((value * $seed + $i) % $modulus AS INT) END\"\n        } else {\n          s\"CAST((value * $seed + $i) % $modulus AS INT)\"\n        }\n      }\n      .mkString(\"array(\", \",\\n                \", \")\")\n  }\n\n  private def prepareSortArrayTable(width: Int)(f: => Unit): Unit = {\n    val intArrayExpr = buildWideIntArrayExpr(width, modulus = width * 32)\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(s\"\"\"\n            SELECT\n              $intArrayExpr AS int_arr\n            FROM $tbl\n          \"\"\"))\n        f\n      }\n    }\n  }\n\n  def sortArrayIntAscBenchmark(values: Int, width: Int): Unit = {\n    prepareSortArrayTable(width) {\n      runExpressionBenchmark(\n        s\"sort_array int ascending (width=$width)\",\n        values,\n        \"SELECT sort_array(int_arr) FROM parquetV1Table\")\n    }\n  }\n\n  def sortArrayIntDescBenchmark(values: Int, width: Int): Unit = {\n    prepareSortArrayTable(width) {\n      runExpressionBenchmark(\n        s\"sort_array int descending (width=$width)\",\n        values,\n        \"SELECT sort_array(int_arr, false) FROM parquetV1Table\")\n    }\n  }\n\n  def sortArrayIntAscFirstElementBenchmark(values: Int, width: Int): Unit = {\n    prepareSortArrayTable(width) {\n      runExpressionBenchmark(\n        s\"element_at(sort_array(int_arr), 1) (width=$width)\",\n        values,\n        \"SELECT element_at(sort_array(int_arr), 1) FROM parquetV1Table\")\n    }\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 4 * 1024 * 1024\n\n    runBenchmarkWithTable(\"sortArrayIntAsc\", values) { v =>\n      sortArrayIntAscBenchmark(v, width = 16)\n    }\n\n    runBenchmarkWithTable(\"sortArrayIntDesc\", values) { v =>\n      sortArrayIntDescBenchmark(v, width = 16)\n    }\n\n    runBenchmarkWithTable(\"sortArrayIntAscWide\", values) { v =>\n      sortArrayIntAscBenchmark(v, width = 
32)\n    }\n\n    runBenchmarkWithTable(\"sortArrayIntAscFirstElement\", values) { v =>\n      sortArrayIntAscFirstElementBenchmark(v, width = 32)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometBenchmarkBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport java.io.File\nimport java.nio.charset.StandardCharsets\nimport java.util.Base64\n\nimport scala.util.Random\n\nimport org.apache.parquet.crypto.DecryptionPropertiesFactory\nimport org.apache.parquet.crypto.keytools.{KeyToolkit, PropertiesDrivenCryptoFactory}\nimport org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\nimport org.apache.spark.SparkConf\nimport org.apache.spark.benchmark.Benchmark\nimport org.apache.spark.sql.{DataFrame, DataFrameWriter, Row, SparkSession}\nimport org.apache.spark.sql.comet.CometPlanChecker\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\nimport org.apache.spark.sql.execution.benchmark.SqlBasedBenchmark\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.DecimalType\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometConf.{SCAN_NATIVE_DATAFUSION, SCAN_NATIVE_ICEBERG_COMPAT}\nimport org.apache.comet.CometSparkSessionExtensions\n\ntrait CometBenchmarkBase\n    extends SqlBasedBenchmark\n    with AdaptiveSparkPlanHelper\n    with CometPlanChecker {\n  override def getSparkSession: SparkSession = {\n    val conf = new SparkConf()\n      .setAppName(\"CometReadBenchmark\")\n      // Since `spark.master` always exists, overrides this value\n      .set(\"spark.master\", \"local[1]\")\n      .setIfMissing(\"spark.driver.memory\", \"3g\")\n      .setIfMissing(\"spark.executor.memory\", \"3g\")\n\n    val sparkSession = SparkSession.builder\n      .config(conf)\n      .withExtensions(new CometSparkSessionExtensions)\n      .getOrCreate()\n\n    // Set default configs. 
Individual cases will change them if necessary.\n    sparkSession.conf.set(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key, \"true\")\n    sparkSession.conf.set(SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key, \"true\")\n    sparkSession.conf.set(CometConf.COMET_ENABLED.key, \"false\")\n    sparkSession.conf.set(CometConf.COMET_EXEC_ENABLED.key, \"false\")\n    // Benchmarks use invalid input values that should produce NULL, not exceptions\n    sparkSession.conf.set(SQLConf.ANSI_ENABLED.key, \"false\")\n\n    sparkSession\n  }\n\n  def runCometBenchmark(args: Array[String]): Unit\n\n  override def runBenchmarkSuite(mainArgs: Array[String]): Unit = {\n    runCometBenchmark(mainArgs)\n  }\n\n  protected val tbl = \"comet_table\"\n\n  protected def withTempTable(tableNames: String*)(f: => Unit): Unit = {\n    try f\n    finally tableNames.foreach(spark.catalog.dropTempView)\n  }\n\n  protected def runBenchmarkWithTable(\n      benchmarkName: String,\n      values: Int,\n      useDictionary: Boolean = false)(f: Int => Any): Unit = {\n    withTempTable(tbl) {\n      import spark.implicits._\n      spark\n        .range(values)\n        .map(_ => if (useDictionary) Random.nextLong % 5 else Random.nextLong)\n        .createOrReplaceTempView(tbl)\n      runBenchmark(benchmarkName)(f(values))\n    }\n  }\n\n  /**\n   * Runs an expression benchmark with standard cases: Spark, Comet (Scan), Comet (Scan + Exec).\n   * This provides a consistent benchmark structure for expression evaluation.\n   *\n   * @param name\n   *   Benchmark name\n   * @param cardinality\n   *   Number of rows being processed\n   * @param query\n   *   SQL query to benchmark\n   * @param extraCometConfigs\n   *   Additional configurations to apply for Comet cases (optional)\n   */\n  final def runExpressionBenchmark(\n      name: String,\n      cardinality: Long,\n      query: String,\n      extraCometConfigs: Map[String, String] = Map.empty): Unit = {\n    val benchmark = new Benchmark(name, cardinality, output = output)\n\n    benchmark.addCase(\"Spark\") { _ =>\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        spark.sql(query).noop()\n      }\n    }\n\n    benchmark.addCase(\"Comet (Scan)\") { _ =>\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n        spark.sql(query).noop()\n      }\n    }\n\n    val cometExecConfigs = Map(\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      \"spark.sql.optimizer.constantFolding.enabled\" -> \"false\") ++ extraCometConfigs\n\n    // Check that the plan is fully Comet native before running the benchmark\n    withSQLConf(cometExecConfigs.toSeq: _*) {\n      val df = spark.sql(query)\n      df.noop()\n      val plan = stripAQEPlan(df.queryExecution.executedPlan)\n      findFirstNonCometOperator(plan) match {\n        case Some(op) =>\n          // scalastyle:off println\n          println()\n          println(\"=\" * 80)\n          println(\"WARNING: Benchmark plan is NOT fully Comet native!\")\n          println(s\"First non-Comet operator: ${op.nodeName}\")\n          println(\"=\" * 80)\n          println(\"Query plan:\")\n          println(plan.treeString)\n          println(\"=\" * 80)\n          println()\n        // scalastyle:on println\n        case None =>\n        // All operators are Comet native, no warning needed\n      }\n    }\n\n    benchmark.addCase(\"Comet (Scan + Exec)\") { _ =>\n      withSQLConf(cometExecConfigs.toSeq: _*) 
{\n        spark.sql(query).noop()\n      }\n    }\n\n    benchmark.run()\n  }\n\n  protected def addParquetScanCases(\n      benchmark: Benchmark,\n      query: String,\n      caseSuffix: String = \"\",\n      extraConf: Map[String, String] = Map.empty): Unit = {\n    val suffix = if (caseSuffix.nonEmpty) s\" ($caseSuffix)\" else \"\"\n\n    benchmark.addCase(s\"SQL Parquet - Spark$suffix\") { _ =>\n      withSQLConf(extraConf.toSeq: _*) {\n        spark.sql(query).noop()\n      }\n    }\n\n    for (scanImpl <- Seq(SCAN_NATIVE_DATAFUSION, SCAN_NATIVE_ICEBERG_COMPAT)) {\n      benchmark.addCase(s\"SQL Parquet - Comet ($scanImpl)$suffix\") { _ =>\n        withSQLConf(\n          (extraConf ++ Map(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_NATIVE_SCAN_IMPL.key -> scanImpl)).toSeq: _*) {\n          spark.sql(query).noop()\n        }\n      }\n    }\n  }\n\n  protected def prepareTable(dir: File, df: DataFrame, partition: Option[String] = None): Unit = {\n    val testDf = if (partition.isDefined) {\n      df.write.partitionBy(partition.get)\n    } else {\n      df.write\n    }\n\n    saveAsParquetV1Table(testDf, dir.getCanonicalPath + \"/parquetV1\")\n  }\n\n  protected def saveAsParquetV1Table(df: DataFrameWriter[Row], dir: String): Unit = {\n    df.mode(\"overwrite\").option(\"compression\", \"snappy\").parquet(dir)\n    spark.read.parquet(dir).createOrReplaceTempView(\"parquetV1Table\")\n  }\n\n  protected def prepareEncryptedTable(\n      dir: File,\n      df: DataFrame,\n      partition: Option[String] = None): Unit = {\n    val testDf = if (partition.isDefined) {\n      df.write.partitionBy(partition.get)\n    } else {\n      df.write\n    }\n\n    saveAsEncryptedParquetV1Table(testDf, dir.getCanonicalPath + \"/parquetV1\")\n  }\n\n  protected def prepareIcebergTable(\n      dir: File,\n      df: DataFrame,\n      tableName: String = \"icebergTable\",\n      partition: Option[String] = None): Unit = {\n    val warehouseDir = new File(dir, \"iceberg-warehouse\")\n\n    // Configure Hadoop catalog (same pattern as CometIcebergNativeSuite)\n    spark.conf.set(\"spark.sql.catalog.benchmark_cat\", \"org.apache.iceberg.spark.SparkCatalog\")\n    spark.conf.set(\"spark.sql.catalog.benchmark_cat.type\", \"hadoop\")\n    spark.conf.set(\"spark.sql.catalog.benchmark_cat.warehouse\", warehouseDir.getAbsolutePath)\n\n    val fullTableName = s\"benchmark_cat.db.$tableName\"\n\n    // Drop table if exists\n    spark.sql(s\"DROP TABLE IF EXISTS $fullTableName\")\n\n    // Create a temp view from the DataFrame\n    df.createOrReplaceTempView(\"temp_df_for_iceberg\")\n\n    // Create Iceberg table from temp view\n    val partitionClause = partition.map(p => s\"PARTITIONED BY ($p)\").getOrElse(\"\")\n    spark.sql(s\"\"\"\n      CREATE TABLE $fullTableName\n      USING iceberg\n      TBLPROPERTIES ('format-version'='2', 'write.parquet.compression-codec' = 'snappy')\n      $partitionClause\n      AS SELECT * FROM temp_df_for_iceberg\n    \"\"\")\n\n    // Create temp view for benchmarking\n    spark.table(fullTableName).createOrReplaceTempView(tableName)\n\n    spark.catalog.dropTempView(\"temp_df_for_iceberg\")\n  }\n\n  protected def saveAsEncryptedParquetV1Table(df: DataFrameWriter[Row], dir: String): Unit = {\n    val encoder = Base64.getEncoder\n    val footerKey =\n      encoder.encodeToString(\"0123456789012345\".getBytes(StandardCharsets.UTF_8))\n    val key1 = 
encoder.encodeToString(\"1234567890123450\".getBytes(StandardCharsets.UTF_8))\n    val cryptoFactoryClass =\n      \"org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory\"\n    withSQLConf(\n      DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n      KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n        \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n      InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n        s\"footerKey: ${footerKey}, key1: ${key1}\") {\n      df.mode(\"overwrite\")\n        .option(\"compression\", \"snappy\")\n        .option(PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME, \"key1: id\")\n        .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n        .parquet(dir)\n      spark.read.parquet(dir).createOrReplaceTempView(\"parquetV1Table\")\n    }\n  }\n\n  protected def makeDecimalDataFrame(\n      values: Int,\n      decimal: DecimalType,\n      useDictionary: Boolean): DataFrame = {\n    import spark.implicits._\n\n    val div = if (useDictionary) 5 else values\n    spark\n      .range(values)\n      .map(_ % div)\n      .select((($\"value\" - 500) / 100.0) cast decimal as Symbol(\"dec\"))\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometCastBooleanBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class CastBooleanConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Benchmark to measure performance of Comet cast operations involving Boolean type. To run this\n * benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometCastBooleanBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometCastBooleanBenchmark-**results.txt\".\n */\nobject CometCastBooleanBenchmark extends CometBenchmarkBase {\n\n  private val castFunctions = Seq(\"CAST\", \"TRY_CAST\")\n\n  // Boolean to String\n  private val boolToStringConfigs = for {\n    castFunc <- castFunctions\n  } yield CastBooleanConfig(\n    s\"$castFunc Boolean to String\",\n    s\"SELECT $castFunc(c_bool AS STRING) FROM parquetV1Table\")\n\n  // Boolean to numeric types\n  private val boolToNumericTypes =\n    Seq(\"BYTE\", \"SHORT\", \"INT\", \"LONG\", \"FLOAT\", \"DOUBLE\", \"DECIMAL(10,2)\")\n  private val boolToNumericConfigs = for {\n    castFunc <- castFunctions\n    targetType <- boolToNumericTypes\n  } yield CastBooleanConfig(\n    s\"$castFunc Boolean to $targetType\",\n    s\"SELECT $castFunc(c_bool AS $targetType) FROM parquetV1Table\")\n\n  // Numeric to Boolean\n  private val numericTypes = Seq(\n    (\"BYTE\", \"c_byte\"),\n    (\"SHORT\", \"c_short\"),\n    (\"INT\", \"c_int\"),\n    (\"LONG\", \"c_long\"),\n    (\"FLOAT\", \"c_float\"),\n    (\"DOUBLE\", \"c_double\"),\n    (\"DECIMAL(10,2)\", \"c_decimal\"))\n\n  private val numericToBoolConfigs = for {\n    castFunc <- castFunctions\n    (sourceType, colName) <- numericTypes\n  } yield CastBooleanConfig(\n    s\"$castFunc $sourceType to Boolean\",\n    s\"SELECT $castFunc($colName AS BOOLEAN) FROM parquetV1Table\")\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024 // 1M rows\n\n    // Generate boolean data for boolean-to-other casts\n    runBenchmarkWithTable(\"Boolean to other types casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL, 50/50 true/false\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 100 = 0 THEN NULL\n                ELSE (value % 2 = 0)\n              END AS c_bool\n              FROM $tbl\n            \"\"\"))\n\n          (boolToStringConfigs ++ boolToNumericConfigs).foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n\n    // 
Generate numeric data for numeric-to-boolean casts\n    runBenchmarkWithTable(\"Numeric to Boolean casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL per column, values in {-1, 0, 1} (~33% each)\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT\n                CASE WHEN value % 100 = 0 THEN NULL ELSE CAST((value % 3) - 1 AS BYTE) END AS c_byte,\n                CASE WHEN value % 100 = 1 THEN NULL ELSE CAST((value % 3) - 1 AS SHORT) END AS c_short,\n                CASE WHEN value % 100 = 2 THEN NULL ELSE CAST((value % 3) - 1 AS INT) END AS c_int,\n                CASE WHEN value % 100 = 3 THEN NULL ELSE CAST((value % 3) - 1 AS LONG) END AS c_long,\n                CASE WHEN value % 100 = 4 THEN NULL ELSE CAST((value % 3) - 1 AS FLOAT) END AS c_float,\n                CASE WHEN value % 100 = 5 THEN NULL ELSE CAST((value % 3) - 1 AS DOUBLE) END AS c_double,\n                CASE WHEN value % 100 = 6 THEN NULL ELSE CAST((value % 3) - 1 AS DECIMAL(10,2)) END AS c_decimal\n              FROM $tbl\n            \"\"\"))\n\n          numericToBoolConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometCastNumericToNumericBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class CastNumericToNumericConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Benchmark to measure performance of Comet cast between numeric types. To run this benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometCastNumericToNumericBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometCastNumericToNumericBenchmark-**results.txt\".\n */\nobject CometCastNumericToNumericBenchmark extends CometBenchmarkBase {\n\n  private val castFunctions = Seq(\"CAST\", \"TRY_CAST\")\n\n  // Integer widening conversions\n  private val integerWideningPairs = Seq(\n    (\"BYTE\", \"c_byte\", \"SHORT\"),\n    (\"BYTE\", \"c_byte\", \"INT\"),\n    (\"BYTE\", \"c_byte\", \"LONG\"),\n    (\"SHORT\", \"c_short\", \"INT\"),\n    (\"SHORT\", \"c_short\", \"LONG\"),\n    (\"INT\", \"c_int\", \"LONG\"))\n\n  // Integer narrowing conversions\n  private val integerNarrowingPairs = Seq(\n    (\"LONG\", \"c_long\", \"INT\"),\n    (\"LONG\", \"c_long\", \"SHORT\"),\n    (\"LONG\", \"c_long\", \"BYTE\"),\n    (\"INT\", \"c_int\", \"SHORT\"),\n    (\"INT\", \"c_int\", \"BYTE\"),\n    (\"SHORT\", \"c_short\", \"BYTE\"))\n\n  // Floating point conversions\n  private val floatPairs = Seq((\"FLOAT\", \"c_float\", \"DOUBLE\"), (\"DOUBLE\", \"c_double\", \"FLOAT\"))\n\n  // Integer to floating point conversions\n  private val intToFloatPairs = Seq(\n    (\"BYTE\", \"c_byte\", \"FLOAT\"),\n    (\"SHORT\", \"c_short\", \"FLOAT\"),\n    (\"INT\", \"c_int\", \"FLOAT\"),\n    (\"LONG\", \"c_long\", \"FLOAT\"),\n    (\"INT\", \"c_int\", \"DOUBLE\"),\n    (\"LONG\", \"c_long\", \"DOUBLE\"))\n\n  // Floating point to integer conversions\n  private val floatToIntPairs = Seq(\n    (\"FLOAT\", \"c_float\", \"INT\"),\n    (\"FLOAT\", \"c_float\", \"LONG\"),\n    (\"DOUBLE\", \"c_double\", \"INT\"),\n    (\"DOUBLE\", \"c_double\", \"LONG\"))\n\n  // Decimal conversions\n  private val decimalPairs = Seq(\n    (\"INT\", \"c_int\", \"DECIMAL(10,2)\"),\n    (\"LONG\", \"c_long\", \"DECIMAL(20,4)\"),\n    (\"DOUBLE\", \"c_double\", \"DECIMAL(15,5)\"),\n    (\"DECIMAL(10,2)\", \"c_decimal\", \"INT\"),\n    (\"DECIMAL(10,2)\", \"c_decimal\", \"LONG\"),\n    (\"DECIMAL(10,2)\", \"c_decimal\", \"DOUBLE\"))\n\n  private def generateConfigs(\n      pairs: Seq[(String, String, String)]): Seq[CastNumericToNumericConfig] = {\n    for {\n      castFunc <- castFunctions\n      (sourceType, colName, targetType) <- pairs\n    } yield CastNumericToNumericConfig(\n      s\"$castFunc $sourceType to $targetType\",\n      s\"SELECT $castFunc($colName 
AS $targetType) FROM parquetV1Table\")\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024 // 1M rows\n\n    // Generate input data once with all numeric types\n    runBenchmarkWithTable(\"Numeric to Numeric casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL per column\n          // - c_byte: full range -64 to 63\n          // - c_short: full range -16384 to 16383\n          // - c_int: centered around 0 (-2.5M to +2.5M)\n          // - c_long: large positive values (0 to ~5 billion)\n          // - c_float/c_double: 3% special values (NaN/Infinity), rest centered around 0\n          // - c_decimal: values from -25000.00 to +25000.00\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT\n                CASE WHEN value % 100 = 0 THEN NULL ELSE CAST((value % 128) - 64 AS BYTE) END AS c_byte,\n                CASE WHEN value % 100 = 1 THEN NULL ELSE CAST((value % 32768) - 16384 AS SHORT) END AS c_short,\n                CASE WHEN value % 100 = 2 THEN NULL ELSE CAST(value - 2500000 AS INT) END AS c_int,\n                CASE WHEN value % 100 = 3 THEN NULL ELSE CAST(value * 1000 AS LONG) END AS c_long,\n                CASE\n                  WHEN value % 100 = 4 THEN NULL\n                  WHEN value % 100 = 5 THEN CAST('NaN' AS FLOAT)\n                  WHEN value % 100 = 6 THEN CAST('Infinity' AS FLOAT)\n                  WHEN value % 100 = 7 THEN CAST('-Infinity' AS FLOAT)\n                  ELSE CAST((value - 2500000) / 100.0 AS FLOAT)\n                END AS c_float,\n                CASE\n                  WHEN value % 100 = 8 THEN NULL\n                  WHEN value % 100 = 9 THEN CAST('NaN' AS DOUBLE)\n                  WHEN value % 100 = 10 THEN CAST('Infinity' AS DOUBLE)\n                  WHEN value % 100 = 11 THEN CAST('-Infinity' AS DOUBLE)\n                  ELSE CAST((value - 2500000) / 100.0 AS DOUBLE)\n                END AS c_double,\n                CASE WHEN value % 100 = 12 THEN NULL ELSE CAST((value - 2500000) / 100.0 AS DECIMAL(10,2)) END AS c_decimal\n              FROM $tbl\n            \"\"\"))\n\n          // Run all benchmark categories\n          (generateConfigs(integerWideningPairs) ++\n            generateConfigs(integerNarrowingPairs) ++\n            generateConfigs(floatPairs) ++\n            generateConfigs(intToFloatPairs) ++\n            generateConfigs(floatToIntPairs) ++\n            generateConfigs(decimalPairs)).foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometCastNumericToStringBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class CastNumericToStringConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Benchmark to measure performance of Comet cast from numeric types to String. To run this\n * benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometCastNumericToStringBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometCastNumericToStringBenchmark-**results.txt\".\n */\nobject CometCastNumericToStringBenchmark extends CometBenchmarkBase {\n\n  private val castFunctions = Seq(\"CAST\", \"TRY_CAST\")\n  private val sourceTypes =\n    Seq(\n      (\"BOOLEAN\", \"c_bool\"),\n      (\"BYTE\", \"c_byte\"),\n      (\"SHORT\", \"c_short\"),\n      (\"INT\", \"c_int\"),\n      (\"LONG\", \"c_long\"),\n      (\"FLOAT\", \"c_float\"),\n      (\"DOUBLE\", \"c_double\"),\n      (\"DECIMAL(10,2)\", \"c_decimal\"))\n\n  private val castConfigs = for {\n    castFunc <- castFunctions\n    (sourceType, colName) <- sourceTypes\n  } yield CastNumericToStringConfig(\n    s\"$castFunc $sourceType to String\",\n    s\"SELECT $castFunc($colName AS STRING) FROM parquetV1Table\")\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024 // 1M rows\n\n    // Generate input data once with all numeric types\n    runBenchmarkWithTable(\"Numeric to String casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL per column\n          // - c_bool: 50/50 true/false\n          // - c_byte: full range -64 to 63\n          // - c_short: full range -16384 to 16383\n          // - c_int/c_long: large values centered around 0\n          // - c_float/c_double: 3% special values (NaN/Infinity), rest centered around 0\n          // - c_decimal: values from -25000.00 to +25000.00\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT\n                CASE WHEN value % 100 = 0 THEN NULL ELSE (value % 2 = 0) END AS c_bool,\n                CASE WHEN value % 100 = 1 THEN NULL ELSE CAST((value % 128) - 64 AS BYTE) END AS c_byte,\n                CASE WHEN value % 100 = 2 THEN NULL ELSE CAST((value % 32768) - 16384 AS SHORT) END AS c_short,\n                CASE WHEN value % 100 = 3 THEN NULL ELSE CAST(value - 2500000 AS INT) END AS c_int,\n                CASE WHEN value % 100 = 4 THEN NULL ELSE CAST(value * 1000000 AS LONG) END AS c_long,\n                CASE\n                  WHEN value % 100 = 5 THEN NULL\n                  WHEN value % 100 = 6 THEN CAST('NaN' AS FLOAT)\n          
        WHEN value % 100 = 7 THEN CAST('Infinity' AS FLOAT)\n                  WHEN value % 100 = 8 THEN CAST('-Infinity' AS FLOAT)\n                  ELSE CAST((value - 2500000) / 1000.0 AS FLOAT)\n                END AS c_float,\n                CASE\n                  WHEN value % 100 = 9 THEN NULL\n                  WHEN value % 100 = 10 THEN CAST('NaN' AS DOUBLE)\n                  WHEN value % 100 = 11 THEN CAST('Infinity' AS DOUBLE)\n                  WHEN value % 100 = 12 THEN CAST('-Infinity' AS DOUBLE)\n                  ELSE CAST((value - 2500000) / 100.0 AS DOUBLE)\n                END AS c_double,\n                CASE WHEN value % 100 = 13 THEN NULL ELSE CAST((value - 2500000) / 100.0 AS DECIMAL(10,2)) END AS c_decimal\n              FROM $tbl\n            \"\"\"))\n\n          castConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometCastNumericToTemporalBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class CastNumericToTemporalConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Benchmark to measure performance of Comet cast from numeric types to temporal types. To run\n * this benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometCastNumericToTemporalBenchmark\n * }}}\n * Results will be written to\n * \"spark/benchmarks/CometCastNumericToTemporalBenchmark-**results.txt\".\n */\nobject CometCastNumericToTemporalBenchmark extends CometBenchmarkBase {\n\n  private val castFunctions = Seq(\"CAST\", \"TRY_CAST\")\n\n  // INT to DATE (days since epoch)\n  private val intToDateConfigs = for {\n    castFunc <- castFunctions\n  } yield CastNumericToTemporalConfig(\n    s\"$castFunc Int to Date\",\n    s\"SELECT $castFunc(c_int AS DATE) FROM parquetV1Table\")\n\n  // LONG to TIMESTAMP (microseconds since epoch)\n  private val longToTimestampConfigs = for {\n    castFunc <- castFunctions\n  } yield CastNumericToTemporalConfig(\n    s\"$castFunc Long to Timestamp\",\n    s\"SELECT $castFunc(c_long AS TIMESTAMP) FROM parquetV1Table\")\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024 // 1M rows\n\n    // Generate data once for INT to DATE conversions\n    runBenchmarkWithTable(\"Int to Date casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL, days since epoch spanning ~100 years (1920-2020)\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 100 = 0 THEN NULL\n                ELSE CAST((value % 36500) - 18000 AS INT)\n              END AS c_int\n              FROM $tbl\n            \"\"\"))\n\n          intToDateConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n\n    // Generate data once for LONG to TIMESTAMP conversions\n    runBenchmarkWithTable(\"Long to Timestamp casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL, microseconds since epoch spanning ~1 year from 2020-01-01\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 100 = 0 THEN NULL\n                ELSE 1577836800000000 + (value % 31536000000000)\n              END AS c_long\n              FROM $tbl\n            \"\"\"))\n\n          
longToTimestampConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometCastStringToNumericBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.sql.catalyst.expressions.Cast\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.CometConf\n\ncase class CastStringToNumericConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Benchmark to measure performance of Comet cast from String to numeric types. To run this\n * benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometCastStringToNumericBenchmark\n * }}}\n */\nobject CometCastStringToNumericBenchmark extends CometBenchmarkBase {\n\n  private val castFunctions = Seq(\"CAST\", \"TRY_CAST\")\n  private val targetTypes =\n    Seq(\n      \"BOOLEAN\",\n      \"BYTE\",\n      \"SHORT\",\n      \"INT\",\n      \"LONG\",\n      \"FLOAT\",\n      \"DOUBLE\",\n      \"DECIMAL(10,2)\",\n      \"DECIMAL(38,19)\")\n\n  private val castConfigs = for {\n    castFunc <- castFunctions\n    targetType <- targetTypes\n  } yield CastStringToNumericConfig(\n    s\"$castFunc String to $targetType\",\n    s\"SELECT $castFunc(c1 AS $targetType) FROM parquetV1Table\",\n    Map(\n      SQLConf.ANSI_ENABLED.key -> \"false\",\n      CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\"))\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024 // 1M rows\n\n    // Generate input data once for all benchmarks\n    runBenchmarkWithTable(\"String to numeric casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution:\n          // - 2% NULL, 2% 'NaN', 2% 'Infinity', 2% '-Infinity'\n          // - 12% small integers (0-98)\n          // - 40% medium integers (0-999,998)\n          // - 40% decimals centered around 0 (approx -5000.00 to +5000.00)\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 50 = 0 THEN NULL\n                WHEN value % 50 = 1 THEN 'NaN'\n                WHEN value % 50 = 2 THEN 'Infinity'\n                WHEN value % 50 = 3 THEN '-Infinity'\n                WHEN value % 50 < 10 THEN CAST(value % 99 AS STRING)\n                WHEN value % 50 < 30 THEN CAST(value % 999999 AS STRING)\n                ELSE CAST((value - 500000) / 100.0 AS STRING)\n              END AS c1\n              FROM $tbl\n            \"\"\"))\n\n          castConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometCastStringToTemporalBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class CastStringToTemporalConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Benchmark to measure performance of Comet cast from String to temporal types. To run this\n * benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometCastStringToTemporalBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometCastStringToTemporalBenchmark-**results.txt\".\n */\nobject CometCastStringToTemporalBenchmark extends CometBenchmarkBase {\n\n  // Configuration for String to temporal cast benchmarks\n  private val dateCastConfigs = List(\n    CastStringToTemporalConfig(\n      \"Cast String to Date\",\n      \"SELECT CAST(c1 AS DATE) FROM parquetV1Table\"),\n    CastStringToTemporalConfig(\n      \"Try_Cast String to Date\",\n      \"SELECT TRY_CAST(c1 AS DATE) FROM parquetV1Table\"))\n\n  private val timestampCastConfigs = List(\n    CastStringToTemporalConfig(\n      \"Cast String to Timestamp\",\n      \"SELECT CAST(c1 AS TIMESTAMP) FROM parquetV1Table\"),\n    CastStringToTemporalConfig(\n      \"Try_Cast String to Timestamp\",\n      \"SELECT TRY_CAST(c1 AS TIMESTAMP) FROM parquetV1Table\"))\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024 // 1M rows\n\n    // Generate date data once with ~10% invalid values\n    runBenchmarkWithTable(\"date data generation\", values) { v =>\n      withTempPath { dateDir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 10% invalid strings, 90% valid date strings spanning ~10 years\n          prepareTable(\n            dateDir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 10 = 0 THEN 'invalid-date'\n                ELSE CAST(DATE_ADD('2020-01-01', CAST(value % 3650 AS INT)) AS STRING)\n              END AS c1\n              FROM $tbl\n            \"\"\"))\n\n          // Run date cast benchmarks with the same data\n          dateCastConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n\n    // Generate timestamp data once with ~10% invalid values\n    runBenchmarkWithTable(\"timestamp data generation\", values) { v =>\n      withTempPath { timestampDir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 10% invalid strings, 90% valid timestamp strings (1970 epoch range)\n          prepareTable(\n            timestampDir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN 
value % 10 = 0 THEN 'not-a-timestamp'\n                ELSE CAST(TIMESTAMP_MICROS(value % 9999999999) AS STRING)\n              END AS c1\n              FROM $tbl\n            \"\"\"))\n\n          // Run timestamp cast benchmarks with the same data\n          timestampCastConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometCastTemporalToNumericBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class CastTemporalToNumericConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Benchmark to measure performance of Comet cast from temporal types to numeric types. To run\n * this benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometCastTemporalToNumericBenchmark\n * }}}\n * Results will be written to\n * \"spark/benchmarks/CometCastTemporalToNumericBenchmark-**results.txt\".\n */\nobject CometCastTemporalToNumericBenchmark extends CometBenchmarkBase {\n\n  private val castFunctions = Seq(\"CAST\", \"TRY_CAST\")\n\n  // DATE to numeric types\n  private val dateToNumericTypes = Seq(\"BYTE\", \"SHORT\", \"INT\", \"LONG\")\n  private val dateToNumericConfigs = for {\n    castFunc <- castFunctions\n    targetType <- dateToNumericTypes\n  } yield CastTemporalToNumericConfig(\n    s\"$castFunc Date to $targetType\",\n    s\"SELECT $castFunc(c_date AS $targetType) FROM parquetV1Table\")\n\n  // TIMESTAMP to numeric types\n  private val timestampToNumericTypes = Seq(\"BYTE\", \"SHORT\", \"INT\", \"LONG\")\n  private val timestampToNumericConfigs = for {\n    castFunc <- castFunctions\n    targetType <- timestampToNumericTypes\n  } yield CastTemporalToNumericConfig(\n    s\"$castFunc Timestamp to $targetType\",\n    s\"SELECT $castFunc(c_timestamp AS $targetType) FROM parquetV1Table\")\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024 // 1M rows\n\n    // Generate DATE data once for all date-to-numeric benchmarks\n    runBenchmarkWithTable(\"Date to Numeric casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL, dates spanning ~10 years from 2020-01-01\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 100 = 0 THEN NULL\n                ELSE DATE_ADD('2020-01-01', CAST(value % 3650 AS INT))\n              END AS c_date\n              FROM $tbl\n            \"\"\"))\n\n          dateToNumericConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n\n    // Generate TIMESTAMP data once for all timestamp-to-numeric benchmarks\n    runBenchmarkWithTable(\"Timestamp to Numeric casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL, timestamps spanning ~1 year from 2020-01-01\n          prepareTable(\n            
dir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 100 = 0 THEN NULL\n                ELSE TIMESTAMP_MICROS(1577836800000000 + value % 31536000000000)\n              END AS c_timestamp\n              FROM $tbl\n            \"\"\"))\n\n          timestampToNumericConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometCastTemporalToStringBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class CastTemporalToStringConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Benchmark to measure performance of Comet cast from temporal types to String. To run this\n * benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometCastTemporalToStringBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometCastTemporalToStringBenchmark-**results.txt\".\n */\nobject CometCastTemporalToStringBenchmark extends CometBenchmarkBase {\n\n  private val castFunctions = Seq(\"CAST\", \"TRY_CAST\")\n\n  private val dateCastConfigs = for {\n    castFunc <- castFunctions\n  } yield CastTemporalToStringConfig(\n    s\"$castFunc Date to String\",\n    s\"SELECT $castFunc(c_date AS STRING) FROM parquetV1Table\")\n\n  private val timestampCastConfigs = for {\n    castFunc <- castFunctions\n  } yield CastTemporalToStringConfig(\n    s\"$castFunc Timestamp to String\",\n    s\"SELECT $castFunc(c_timestamp AS STRING) FROM parquetV1Table\")\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024 // 1M rows\n\n    // Generate temporal data once for date benchmarks\n    runBenchmarkWithTable(\"Date to String casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL, dates spanning ~10 years from 2020-01-01\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 100 = 0 THEN NULL\n                ELSE DATE_ADD('2020-01-01', CAST(value % 3650 AS INT))\n              END AS c_date\n              FROM $tbl\n            \"\"\"))\n\n          dateCastConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n\n    // Generate temporal data once for timestamp benchmarks\n    runBenchmarkWithTable(\"Timestamp to String casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL, timestamps spanning ~1 year from 2020-01-01\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 100 = 0 THEN NULL\n                ELSE TIMESTAMP_MICROS(1577836800000000 + value % 31536000000000)\n              END AS c_timestamp\n              FROM $tbl\n            \"\"\"))\n\n          timestampCastConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, 
config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometCastTemporalToTemporalBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class CastTemporalToTemporalConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Benchmark to measure performance of Comet cast between temporal types. To run this benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometCastTemporalToTemporalBenchmark\n * }}}\n * Results will be written to\n * \"spark/benchmarks/CometCastTemporalToTemporalBenchmark-**results.txt\".\n */\nobject CometCastTemporalToTemporalBenchmark extends CometBenchmarkBase {\n\n  private val castFunctions = Seq(\"CAST\", \"TRY_CAST\")\n\n  // Date to Timestamp\n  private val dateToTimestampConfigs = for {\n    castFunc <- castFunctions\n  } yield CastTemporalToTemporalConfig(\n    s\"$castFunc Date to Timestamp\",\n    s\"SELECT $castFunc(c_date AS TIMESTAMP) FROM parquetV1Table\")\n\n  // Timestamp to Date\n  private val timestampToDateConfigs = for {\n    castFunc <- castFunctions\n  } yield CastTemporalToTemporalConfig(\n    s\"$castFunc Timestamp to Date\",\n    s\"SELECT $castFunc(c_timestamp AS DATE) FROM parquetV1Table\")\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024 // 1M rows\n\n    // Generate DATE data for Date -> Timestamp benchmarks\n    runBenchmarkWithTable(\"Date to Timestamp casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL, dates spanning ~10 years from 2020-01-01\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 100 = 0 THEN NULL\n                ELSE DATE_ADD('2020-01-01', CAST(value % 3650 AS INT))\n              END AS c_date\n              FROM $tbl\n            \"\"\"))\n\n          dateToTimestampConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n\n    // Generate TIMESTAMP data for Timestamp -> Date benchmarks\n    runBenchmarkWithTable(\"Timestamp to Date casts\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL, timestamps spanning ~1 year from 2020-01-01\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT CASE\n                WHEN value % 100 = 0 THEN NULL\n                ELSE TIMESTAMP_MICROS(1577836800000000 + value % 31536000000000)\n              END AS c_timestamp\n              FROM $tbl\n            \"\"\"))\n\n          
timestampToDateConfigs.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometColumnarToRowBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.benchmark.Benchmark\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.{CometConf, CometSparkSessionExtensions}\n\n/**\n * Benchmark to compare Columnar to Row conversion performance:\n *   - Spark's default ColumnarToRowExec\n *   - Comet's JVM-based CometColumnarToRowExec\n *   - Comet's Native CometNativeColumnarToRowExec\n *\n * To run this benchmark:\n * {{{\n * SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometColumnarToRowBenchmark\n * }}}\n *\n * Results will be written to \"spark/benchmarks/CometColumnarToRowBenchmark-**results.txt\".\n */\nobject CometColumnarToRowBenchmark extends CometBenchmarkBase {\n  override def getSparkSession: SparkSession = {\n    val conf = new SparkConf()\n      .setAppName(\"CometColumnarToRowBenchmark\")\n      .set(\"spark.master\", \"local[1]\")\n      .setIfMissing(\"spark.driver.memory\", \"3g\")\n      .setIfMissing(\"spark.executor.memory\", \"3g\")\n      .set(\"spark.memory.offHeap.enabled\", \"true\")\n      .set(\"spark.memory.offHeap.size\", \"2g\")\n\n    val sparkSession = SparkSession.builder\n      .config(conf)\n      .withExtensions(new CometSparkSessionExtensions)\n      .getOrCreate()\n\n    // Set default configs\n    sparkSession.conf.set(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key, \"true\")\n    sparkSession.conf.set(SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key, \"true\")\n    sparkSession.conf.set(CometConf.COMET_ENABLED.key, \"false\")\n    sparkSession.conf.set(CometConf.COMET_EXEC_ENABLED.key, \"false\")\n    // Disable dictionary encoding to ensure consistent data representation\n    sparkSession.conf.set(\"parquet.enable.dictionary\", \"false\")\n\n    sparkSession\n  }\n\n  /**\n   * Helper method to add the standard benchmark cases for columnar to row conversion. 
Reduces\n   * code duplication across benchmark methods.\n   */\n  private def addC2RBenchmarkCases(benchmark: Benchmark, query: String): Unit = {\n    benchmark.addCase(\"Spark (ColumnarToRowExec)\") { _ =>\n      withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n        spark.sql(query).noop()\n      }\n    }\n\n    benchmark.addCase(\"Comet JVM (CometColumnarToRowExec)\") { _ =>\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_NATIVE_COLUMNAR_TO_ROW_ENABLED.key -> \"false\") {\n        spark.sql(query).noop()\n      }\n    }\n\n    benchmark.addCase(\"Comet Native (CometNativeColumnarToRowExec)\") { _ =>\n      withSQLConf(\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_NATIVE_COLUMNAR_TO_ROW_ENABLED.key -> \"true\") {\n        spark.sql(query).noop()\n      }\n    }\n  }\n\n  /**\n   * Benchmark columnar to row conversion for primitive types.\n   */\n  def primitiveTypesBenchmark(values: Int): Unit = {\n    val benchmark =\n      new Benchmark(\"Columnar to Row - Primitive Types\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        // Create a table with various primitive types (includes strings)\n        val df = spark\n          .range(values)\n          .selectExpr(\n            \"id as long_col\",\n            \"cast(id as int) as int_col\",\n            \"cast(id as short) as short_col\",\n            \"cast(id as byte) as byte_col\",\n            \"cast(id % 2 as boolean) as bool_col\",\n            \"cast(id as float) as float_col\",\n            \"cast(id as double) as double_col\",\n            \"cast(id as string) as string_col\",\n            \"date_add(to_date('2024-01-01'), cast(id % 365 as int)) as date_col\")\n\n        prepareTable(dir, df)\n        val query = \"SELECT * FROM parquetV1Table\"\n        addC2RBenchmarkCases(benchmark, query)\n        benchmark.run()\n      }\n    }\n  }\n\n  /**\n   * Benchmark columnar to row conversion for fixed-width types ONLY (no strings). 
This tests the\n   * fast path in native C2R that pre-allocates buffers.\n   */\n  def fixedWidthOnlyBenchmark(values: Int): Unit = {\n    val benchmark =\n      new Benchmark(\"Columnar to Row - Fixed Width Only (no strings)\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        // Create a table with ONLY fixed-width primitive types (no strings!)\n        val df = spark\n          .range(values)\n          .selectExpr(\n            \"id as long_col\",\n            \"cast(id as int) as int_col\",\n            \"cast(id as short) as short_col\",\n            \"cast(id as byte) as byte_col\",\n            \"cast(id % 2 as boolean) as bool_col\",\n            \"cast(id as float) as float_col\",\n            \"cast(id as double) as double_col\",\n            \"date_add(to_date('2024-01-01'), cast(id % 365 as int)) as date_col\",\n            \"cast(id * 2 as long) as long_col2\",\n            \"cast(id * 3 as int) as int_col2\")\n\n        prepareTable(dir, df)\n        val query = \"SELECT * FROM parquetV1Table\"\n        addC2RBenchmarkCases(benchmark, query)\n        benchmark.run()\n      }\n    }\n  }\n\n  /**\n   * Benchmark columnar to row conversion for string-heavy data.\n   */\n  def stringTypesBenchmark(values: Int): Unit = {\n    val benchmark =\n      new Benchmark(\"Columnar to Row - String Types\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        val df = spark\n          .range(values)\n          .selectExpr(\n            \"id\",\n            \"concat('short_', cast(id % 100 as string)) as short_str\",\n            \"concat('medium_string_value_', cast(id as string), '_with_more_content') as medium_str\",\n            \"repeat(concat('long_', cast(id as string)), 10) as long_str\")\n\n        prepareTable(dir, df)\n        val query = \"SELECT * FROM parquetV1Table\"\n        addC2RBenchmarkCases(benchmark, query)\n        benchmark.run()\n      }\n    }\n  }\n\n  /**\n   * Benchmark columnar to row conversion for nested struct types.\n   */\n  def structTypesBenchmark(values: Int): Unit = {\n    val benchmark =\n      new Benchmark(\"Columnar to Row - Struct Types\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        val df = spark\n          .range(values)\n          .selectExpr(\n            \"id\",\n            // Simple struct\n            \"named_struct('a', cast(id as int), 'b', cast(id as string)) as simple_struct\",\n            // Nested struct (2 levels)\n            \"\"\"named_struct(\n              'outer_int', cast(id as int),\n              'inner', named_struct('x', cast(id as double), 'y', cast(id as string))\n            ) as nested_struct\"\"\",\n            // Deeply nested struct (3 levels)\n            \"\"\"named_struct(\n              'level1', named_struct(\n                'level2', named_struct(\n                  'value', cast(id as int),\n                  'name', concat('item_', cast(id as string))\n                ),\n                'count', cast(id % 100 as int)\n              ),\n              'id', id\n            ) as deep_struct\"\"\")\n\n        prepareTable(dir, df)\n        val query = \"SELECT * FROM parquetV1Table\"\n        addC2RBenchmarkCases(benchmark, query)\n        benchmark.run()\n      }\n    }\n  }\n\n  /**\n   * Benchmark columnar to row conversion for array types.\n   */\n  def arrayTypesBenchmark(values: Int): Unit = {\n    val benchmark =\n      new 
Benchmark(\"Columnar to Row - Array Types\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        val df = spark\n          .range(values)\n          .selectExpr(\n            \"id\",\n            // Array of primitives\n            \"array(cast(id as int), cast(id + 1 as int), cast(id + 2 as int)) as int_array\",\n            // Array of strings\n            \"array(concat('a_', cast(id as string)), concat('b_', cast(id as string))) as str_array\",\n            // Longer array\n            \"\"\"array(\n              cast(id % 10 as int), cast((id + 1) % 10 as int), cast((id + 2) % 10 as int),\n              cast((id + 3) % 10 as int), cast((id + 4) % 10 as int)\n            ) as longer_array\"\"\")\n\n        prepareTable(dir, df)\n        val query = \"SELECT * FROM parquetV1Table\"\n        addC2RBenchmarkCases(benchmark, query)\n        benchmark.run()\n      }\n    }\n  }\n\n  /**\n   * Benchmark columnar to row conversion for map types.\n   */\n  def mapTypesBenchmark(values: Int): Unit = {\n    val benchmark =\n      new Benchmark(\"Columnar to Row - Map Types\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        val df = spark\n          .range(values)\n          .selectExpr(\n            \"id\",\n            // Map with string keys and int values\n            \"map('key1', cast(id as int), 'key2', cast(id + 1 as int)) as str_int_map\",\n            // Map with int keys and string values\n            \"map(cast(id % 10 as int), concat('val_', cast(id as string))) as int_str_map\",\n            // Larger map\n            \"\"\"map(\n              'a', cast(id as double),\n              'b', cast(id + 1 as double),\n              'c', cast(id + 2 as double)\n            ) as larger_map\"\"\")\n\n        prepareTable(dir, df)\n        val query = \"SELECT * FROM parquetV1Table\"\n        addC2RBenchmarkCases(benchmark, query)\n        benchmark.run()\n      }\n    }\n  }\n\n  /**\n   * Benchmark columnar to row conversion for complex nested types (arrays of structs, maps with\n   * array values, etc.)\n   */\n  def complexNestedTypesBenchmark(values: Int): Unit = {\n    val benchmark =\n      new Benchmark(\"Columnar to Row - Complex Nested Types\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        val df = spark\n          .range(values)\n          .selectExpr(\n            \"id\",\n            // Array of structs\n            \"\"\"array(\n              named_struct('id', cast(id as int), 'name', concat('item_', cast(id as string))),\n              named_struct('id', cast(id + 1 as int), 'name', concat('item_', cast(id + 1 as string)))\n            ) as array_of_structs\"\"\",\n            // Struct with array field\n            \"\"\"named_struct(\n              'values', array(cast(id as int), cast(id + 1 as int), cast(id + 2 as int)),\n              'label', concat('label_', cast(id as string))\n            ) as struct_with_array\"\"\",\n            // Map with array values\n            \"\"\"map(\n              'scores', array(cast(id % 100 as double), cast((id + 1) % 100 as double)),\n              'ranks', array(cast(id % 10 as double))\n            ) as map_with_arrays\"\"\")\n\n        prepareTable(dir, df)\n        val query = \"SELECT * FROM parquetV1Table\"\n        addC2RBenchmarkCases(benchmark, query)\n        benchmark.run()\n      }\n    }\n  }\n\n  /**\n   * Benchmark with wide rows (many columns) to 
stress test row conversion.\n   */\n  def wideRowsBenchmark(values: Int): Unit = {\n    val benchmark =\n      new Benchmark(\"Columnar to Row - Wide Rows (50 columns)\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        // Generate 50 columns of mixed types\n        val columns = (0 until 50).map { i =>\n          i % 5 match {\n            case 0 => s\"cast(id + $i as int) as int_col_$i\"\n            case 1 => s\"cast(id + $i as long) as long_col_$i\"\n            case 2 => s\"cast(id + $i as double) as double_col_$i\"\n            case 3 => s\"concat('str_${i}_', cast(id as string)) as str_col_$i\"\n            case 4 => s\"cast((id + $i) % 2 as boolean) as bool_col_$i\"\n          }\n        }\n\n        val df = spark.range(values).selectExpr(columns: _*)\n\n        prepareTable(dir, df)\n        val query = \"SELECT * FROM parquetV1Table\"\n        addC2RBenchmarkCases(benchmark, query)\n        benchmark.run()\n      }\n    }\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val numRows = 1024 * 1024 // 1M rows\n\n    runBenchmark(\"Columnar to Row Conversion - Fixed Width Only\") {\n      fixedWidthOnlyBenchmark(numRows)\n    }\n\n    runBenchmark(\"Columnar to Row Conversion - Primitive Types\") {\n      primitiveTypesBenchmark(numRows)\n    }\n\n    runBenchmark(\"Columnar to Row Conversion - String Types\") {\n      stringTypesBenchmark(numRows)\n    }\n\n    runBenchmark(\"Columnar to Row Conversion - Struct Types\") {\n      structTypesBenchmark(numRows)\n    }\n\n    runBenchmark(\"Columnar to Row Conversion - Array Types\") {\n      arrayTypesBenchmark(numRows)\n    }\n\n    runBenchmark(\"Columnar to Row Conversion - Map Types\") {\n      mapTypesBenchmark(numRows)\n    }\n\n    runBenchmark(\"Columnar to Row Conversion - Complex Nested Types\") {\n      complexNestedTypesBenchmark(numRows)\n    }\n\n    runBenchmark(\"Columnar to Row Conversion - Wide Rows\") {\n      wideRowsBenchmark(numRows)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometComparisonExpressionBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class ComparisonExprConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Comprehensive benchmark for Comet comparison and predicate expressions. To run this benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometComparisonExpressionBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometComparisonExpressionBenchmark-**results.txt\".\n */\nobject CometComparisonExpressionBenchmark extends CometBenchmarkBase {\n\n  private val comparisonExpressions = List(\n    ComparisonExprConfig(\"equal_to\", \"SELECT c_int = c_int2 FROM parquetV1Table\"),\n    ComparisonExprConfig(\"not_equal_to\", \"SELECT c_int != c_int2 FROM parquetV1Table\"),\n    ComparisonExprConfig(\"less_than\", \"SELECT c_int < c_int2 FROM parquetV1Table\"),\n    ComparisonExprConfig(\"less_than_or_equal\", \"SELECT c_int <= c_int2 FROM parquetV1Table\"),\n    ComparisonExprConfig(\"greater_than\", \"SELECT c_int > c_int2 FROM parquetV1Table\"),\n    ComparisonExprConfig(\"greater_than_or_equal\", \"SELECT c_int >= c_int2 FROM parquetV1Table\"),\n    ComparisonExprConfig(\"equal_null_safe\", \"SELECT c_int <=> c_int2 FROM parquetV1Table\"),\n    ComparisonExprConfig(\"is_null\", \"SELECT c_int IS NULL FROM parquetV1Table\"),\n    ComparisonExprConfig(\"is_not_null\", \"SELECT c_int IS NOT NULL FROM parquetV1Table\"),\n    ComparisonExprConfig(\"is_nan_float\", \"SELECT isnan(c_float) FROM parquetV1Table\"),\n    ComparisonExprConfig(\"is_nan_double\", \"SELECT isnan(c_double) FROM parquetV1Table\"),\n    ComparisonExprConfig(\"and\", \"SELECT (c_int > 0) AND (c_int2 < 100) FROM parquetV1Table\"),\n    ComparisonExprConfig(\"or\", \"SELECT (c_int > 0) OR (c_int2 < 100) FROM parquetV1Table\"),\n    ComparisonExprConfig(\"not\", \"SELECT NOT (c_int > 0) FROM parquetV1Table\"),\n    ComparisonExprConfig(\n      \"in_list\",\n      \"SELECT c_int IN (1, 10, 100, 1000, 10000) FROM parquetV1Table\"),\n    ComparisonExprConfig(\n      \"not_in_list\",\n      \"SELECT c_int NOT IN (1, 10, 100, 1000, 10000) FROM parquetV1Table\"))\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024\n\n    runBenchmarkWithTable(\"Comparison expression benchmarks\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution:\n          // - c_int: 10% NULL, integers -50,000 to 49,999\n          // - c_int2: 10% NULL, integers 0-999\n          // - c_float/c_double: 2% NULL, 2% NaN, rest are values 0.00-99.99\n          prepareTable(\n       
     dir,\n            spark.sql(s\"\"\"\n              SELECT\n                CASE WHEN value % 10 = 0 THEN NULL ELSE CAST((value % 100000) - 50000 AS INT) END AS c_int,\n                CASE WHEN value % 10 = 1 THEN NULL ELSE CAST((value % 1000) AS INT) END AS c_int2,\n                CASE\n                  WHEN value % 50 = 2 THEN NULL\n                  WHEN value % 50 = 3 THEN CAST('NaN' AS FLOAT)\n                  ELSE CAST((value % 10000) / 100.0 AS FLOAT)\n                END AS c_float,\n                CASE\n                  WHEN value % 50 = 4 THEN NULL\n                  WHEN value % 50 = 5 THEN CAST('NaN' AS DOUBLE)\n                  ELSE CAST((value % 10000) / 100.0 AS DOUBLE)\n                END AS c_double\n              FROM $tbl\n            \"\"\"))\n\n          comparisonExpressions.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometConditionalExpressionBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\n/**\n * Benchmark to measure Comet execution performance for conditional expressions. To run this\n * benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometConditionalExpressionBenchmark\n * }}}\n * Results will be written to\n * \"spark/benchmarks/CometConditionalExpressionBenchmark-**results.txt\".\n */\nobject CometConditionalExpressionBenchmark extends CometBenchmarkBase {\n\n  private def prepareTestTable(values: Int)(f: => Unit): Unit = {\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        // Create table with multiple columns for richer test scenarios:\n        // - c1: random long values (full range)\n        // - c2: values 0-99 for multi-branch testing\n        // - c3: secondary column for non-literal result expressions\n        // - c4: string column for string result expressions\n        prepareTable(\n          dir,\n          spark.sql(s\"\"\"\n            SELECT\n              value AS c1,\n              CAST(ABS(value % 100) AS INT) AS c2,\n              CAST(value * 2 AS LONG) AS c3,\n              CAST(value AS STRING) AS c4\n            FROM $tbl\n          \"\"\"))\n        f\n      }\n    }\n  }\n\n  def caseWhenLiteralBenchmark(values: Int): Unit = {\n    prepareTestTable(values) {\n      val query =\n        \"SELECT CASE WHEN c1 < 0 THEN '<0' WHEN c1 = 0 THEN '=0' ELSE '>0' END FROM parquetV1Table\"\n      runExpressionBenchmark(\"Case When Literal (3 branches)\", values, query)\n    }\n  }\n\n  def caseWhenManyBranchesLiteralBenchmark(values: Int): Unit = {\n    prepareTestTable(values) {\n      // 10 branches using c2 (values 0-99)\n      val query = \"\"\"\n        SELECT CASE\n          WHEN c2 < 10 THEN 'a'\n          WHEN c2 < 20 THEN 'b'\n          WHEN c2 < 30 THEN 'c'\n          WHEN c2 < 40 THEN 'd'\n          WHEN c2 < 50 THEN 'e'\n          WHEN c2 < 60 THEN 'f'\n          WHEN c2 < 70 THEN 'g'\n          WHEN c2 < 80 THEN 'h'\n          WHEN c2 < 90 THEN 'i'\n          ELSE 'j'\n        END FROM parquetV1Table\n      \"\"\"\n      runExpressionBenchmark(\"Case When Literal (10 branches)\", values, query)\n    }\n  }\n\n  def caseWhenColumnResultBenchmark(values: Int): Unit = {\n    prepareTestTable(values) {\n      // Result expressions are column references, not literals\n      val query =\n        \"SELECT CASE WHEN c1 < 0 THEN c3 WHEN c1 = 0 THEN c1 ELSE c3 + c1 END FROM parquetV1Table\"\n      runExpressionBenchmark(\"Case When Column Result (3 branches)\", values, query)\n    }\n  }\n\n  def caseWhenManyBranchesColumnResultBenchmark(values: Int): Unit = {\n    prepareTestTable(values) {\n      
// 10 branches with column expressions as results\n      val query = \"\"\"\n        SELECT CASE\n          WHEN c2 < 10 THEN c1\n          WHEN c2 < 20 THEN c3\n          WHEN c2 < 30 THEN c1 + c3\n          WHEN c2 < 40 THEN c1 - c2\n          WHEN c2 < 50 THEN c3 * 2\n          WHEN c2 < 60 THEN c1 / 2\n          WHEN c2 < 70 THEN c2 + c3\n          WHEN c2 < 80 THEN c1 * c2\n          WHEN c2 < 90 THEN c3 - c1\n          ELSE c1 + c2 + c3\n        END FROM parquetV1Table\n      \"\"\"\n      runExpressionBenchmark(\"Case When Column Result (10 branches)\", values, query)\n    }\n  }\n\n  def ifLiteralBenchmark(values: Int): Unit = {\n    prepareTestTable(values) {\n      val query = \"SELECT IF(c1 < 0, '<0', '>=0') FROM parquetV1Table\"\n      runExpressionBenchmark(\"If Literal\", values, query)\n    }\n  }\n\n  def ifColumnResultBenchmark(values: Int): Unit = {\n    prepareTestTable(values) {\n      // Result expressions are column references\n      val query = \"SELECT IF(c1 < 0, c3, c1 + c3) FROM parquetV1Table\"\n      runExpressionBenchmark(\"If Column Result\", values, query)\n    }\n  }\n\n  def nestedIfBenchmark(values: Int): Unit = {\n    prepareTestTable(values) {\n      // Nested IF expressions (equivalent to CASE WHEN with multiple branches)\n      val query = \"\"\"\n        SELECT IF(c2 < 25, 'a',\n                 IF(c2 < 50, 'b',\n                   IF(c2 < 75, 'c', 'd')))\n        FROM parquetV1Table\n      \"\"\"\n      runExpressionBenchmark(\"Nested If Literal (4 outcomes)\", values, query)\n    }\n  }\n\n  def nestedIfColumnResultBenchmark(values: Int): Unit = {\n    prepareTestTable(values) {\n      val query = \"\"\"\n        SELECT IF(c2 < 25, c1,\n                 IF(c2 < 50, c3,\n                   IF(c2 < 75, c1 + c3, c3 * 2)))\n        FROM parquetV1Table\n      \"\"\"\n      runExpressionBenchmark(\"Nested If Column Result (4 outcomes)\", values, query)\n    }\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024\n\n    // CASE WHEN with literal results\n    runBenchmarkWithTable(\"caseWhenLiteral\", values) { v =>\n      caseWhenLiteralBenchmark(v)\n    }\n\n    runBenchmarkWithTable(\"caseWhenManyBranchesLiteral\", values) { v =>\n      caseWhenManyBranchesLiteralBenchmark(v)\n    }\n\n    // CASE WHEN with column/expression results\n    runBenchmarkWithTable(\"caseWhenColumnResult\", values) { v =>\n      caseWhenColumnResultBenchmark(v)\n    }\n\n    runBenchmarkWithTable(\"caseWhenManyBranchesColumnResult\", values) { v =>\n      caseWhenManyBranchesColumnResultBenchmark(v)\n    }\n\n    // IF with literal results\n    runBenchmarkWithTable(\"ifLiteral\", values) { v =>\n      ifLiteralBenchmark(v)\n    }\n\n    // IF with column/expression results\n    runBenchmarkWithTable(\"ifColumnResult\", values) { v =>\n      ifColumnResultBenchmark(v)\n    }\n\n    // Nested IF expressions\n    runBenchmarkWithTable(\"nestedIfLiteral\", values) { v =>\n      nestedIfBenchmark(v)\n    }\n\n    runBenchmarkWithTable(\"nestedIfColumnResult\", values) { v =>\n      nestedIfColumnResultBenchmark(v)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometCsvExpressionBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.sql.catalyst.expressions.CsvToStructs\n\nimport org.apache.comet.CometConf\n\n/**\n * Configuration for a CSV expression benchmark.\n *\n * @param name\n *   Name for the benchmark\n * @param query\n *   SQL query to benchmark\n * @param extraCometConfigs\n *   Additional Comet configurations for the scan+exec case\n */\ncase class CsvExprConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n// spotless:off\n/**\n * Benchmark to measure performance of Comet CSV expressions. To run this benchmark:\n * `SPARK_GENERATE_BENCHMARK_FILES=1 make\n * benchmark-org.apache.spark.sql.benchmark.CometCsvExpressionBenchmark` Results will be written\n * to \"spark/benchmarks/CometCsvExpressionBenchmark-**results.txt\".\n */\n// spotless:on\nobject CometCsvExpressionBenchmark extends CometBenchmarkBase {\n\n  /**\n   * Generic method to run a CSV expression benchmark with the given configuration.\n   */\n  def runCsvExprBenchmark(config: CsvExprConfig, values: Int): Unit = {\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(\n            s\"SELECT CAST(value AS STRING) AS c1, CAST(value AS INT) AS c2, CAST(value AS LONG) AS c3 FROM $tbl\"))\n\n        val extraConfigs = Map(\n          CometConf.getExprAllowIncompatConfigKey(\n            classOf[CsvToStructs]) -> \"true\") ++ config.extraCometConfigs\n\n        runExpressionBenchmark(config.name, values, config.query, extraConfigs)\n      }\n    }\n  }\n\n  // Configuration for all CSV expression benchmarks\n  private val csvExpressions = List(\n    CsvExprConfig(\"to_csv\", \"SELECT to_csv(struct(c1, c2, c3)) FROM parquetV1Table\"))\n\n  override def runCometBenchmark(args: Array[String]): Unit = {\n    val values = 1024 * 1024\n\n    csvExpressions.foreach { config =>\n      runBenchmarkWithTable(config.name, values) { value =>\n        runCsvExprBenchmark(config, value)\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometDatetimeExpressionBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.sql.catalyst.util.DateTimeTestUtils.{withDefaultTimeZone, LA}\nimport org.apache.spark.sql.internal.SQLConf\n\n// spotless:off\n/**\n * Benchmark to measure Comet execution performance. To run this benchmark:\n * `SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometDatetimeExpressionBenchmark`\n * Results will be written to \"spark/benchmarks/CometDatetimeExpressionBenchmark-**results.txt\".\n */\n// spotless:on\nobject CometDatetimeExpressionBenchmark extends CometBenchmarkBase {\n\n  def dateTruncExprBenchmark(values: Int): Unit = {\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(\n            s\"select cast(timestamp_micros(cast(value/100000 as integer)) as date) as dt FROM $tbl\"))\n        Seq(\"YEAR\", \"MONTH\").foreach { level =>\n          val name = s\"Date Truncate - $level\"\n          val query = s\"select trunc(dt, '$level') from parquetV1Table\"\n          runExpressionBenchmark(name, values, query)\n        }\n      }\n    }\n  }\n\n  def timestampTruncExprBenchmark(values: Int): Unit = {\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(s\"select timestamp_micros(cast(value/100000 as integer)) as ts FROM $tbl\"))\n        Seq(\n          \"YEAR\",\n          \"QUARTER\",\n          \"MONTH\",\n          \"WEEK\",\n          \"DAY\",\n          \"HOUR\",\n          \"MINUTE\",\n          \"SECOND\",\n          \"MILLISECOND\",\n          \"MICROSECOND\").foreach { level =>\n          val name = s\"Timestamp Truncate - $level\"\n          val query = s\"select date_trunc('$level', ts) from parquetV1Table\"\n          runExpressionBenchmark(name, values, query)\n        }\n      }\n    }\n  }\n\n  def unixTimestampBenchmark(values: Int, timeZone: String): Unit = {\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(s\"select timestamp_micros(cast(value/100000 as integer)) as ts FROM $tbl\"))\n        withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> timeZone) {\n          val name = s\"Unix Timestamp from Timestamp ($timeZone)\"\n          val query = \"select unix_timestamp(ts) from parquetV1Table\"\n          runExpressionBenchmark(name, values, query)\n        }\n      }\n    }\n  }\n\n  def unixTimestampFromDateBenchmark(values: Int, timeZone: String): Unit = {\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(\n            
s\"select cast(timestamp_micros(cast(value/100000 as integer)) as date) as dt FROM $tbl\"))\n        withSQLConf(SQLConf.SESSION_LOCAL_TIMEZONE.key -> timeZone) {\n          val name = s\"Unix Timestamp from Date ($timeZone)\"\n          val query = \"select unix_timestamp(dt) from parquetV1Table\"\n          runExpressionBenchmark(name, values, query)\n        }\n      }\n    }\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024;\n\n    for (timeZone <- Seq(\"UTC\", \"America/Los_Angeles\")) {\n      withSQLConf(\"spark.sql.parquet.datetimeRebaseModeInWrite\" -> \"CORRECTED\") {\n        runBenchmarkWithTable(s\"UnixTimestamp(timestamp) - $timeZone\", values) { v =>\n          unixTimestampBenchmark(v, timeZone)\n        }\n        runBenchmarkWithTable(s\"UnixTimestamp(date) - $timeZone\", values) { v =>\n          unixTimestampFromDateBenchmark(v, timeZone)\n        }\n      }\n    }\n\n    withDefaultTimeZone(LA) {\n      withSQLConf(\n        SQLConf.SESSION_LOCAL_TIMEZONE.key -> LA.getId,\n        \"spark.sql.parquet.datetimeRebaseModeInWrite\" -> \"CORRECTED\") {\n\n        runBenchmarkWithTable(\"DateTrunc\", values) { v =>\n          dateTruncExprBenchmark(v)\n        }\n        runBenchmarkWithTable(\"TimestampTrunc\", values) { v =>\n          timestampTruncExprBenchmark(v)\n        }\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometExecBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.benchmark.Benchmark\nimport org.apache.spark.sql.SparkSession\nimport org.apache.spark.sql.catalyst.FunctionIdentifier\nimport org.apache.spark.sql.catalyst.expressions.{Expression, ExpressionInfo}\nimport org.apache.spark.sql.catalyst.expressions.aggregate.BloomFilterAggregate\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.{CometConf, CometSparkSessionExtensions}\n\n/**\n * Benchmark to measure Comet execution performance. To run this benchmark:\n * `SPARK_GENERATE_BENCHMARK_FILES=1 make\n * benchmark-org.apache.spark.sql.benchmark.CometExecBenchmark` Results will be written to\n * \"spark/benchmarks/CometExecBenchmark-**results.txt\".\n */\nobject CometExecBenchmark extends CometBenchmarkBase {\n  override def getSparkSession: SparkSession = {\n    val conf = new SparkConf()\n      .setAppName(\"CometExecBenchmark\")\n      .set(\"spark.master\", \"local[5]\")\n      .setIfMissing(\"spark.driver.memory\", \"3g\")\n      .setIfMissing(\"spark.executor.memory\", \"3g\")\n      .set(\"spark.executor.memoryOverhead\", \"10g\")\n      .set(\n        \"spark.shuffle.manager\",\n        \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n      .set(\"spark.comet.columnar.shuffle.async.thread.num\", \"7\")\n      .set(\"spark.comet.columnar.shuffle.spill.threshold\", \"30000\")\n\n    val sparkSession = SparkSession.builder\n      .config(conf)\n      .withExtensions(new CometSparkSessionExtensions)\n      .getOrCreate()\n\n    // Set default configs. 
Individual cases will change them if necessary.\n    sparkSession.conf.set(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key, \"true\")\n    sparkSession.conf.set(SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key, \"true\")\n    sparkSession.conf.set(CometConf.COMET_ENABLED.key, \"false\")\n    sparkSession.conf.set(CometConf.COMET_EXEC_ENABLED.key, \"false\")\n    sparkSession.conf.set(CometConf.COMET_ONHEAP_MEMORY_OVERHEAD.key, \"10g\")\n    // TODO: support dictionary encoding in vectorized execution\n    sparkSession.conf.set(\"parquet.enable.dictionary\", \"false\")\n    sparkSession.conf.set(\"spark.sql.shuffle.partitions\", \"2\")\n\n    sparkSession\n  }\n\n  def numericFilterExecBenchmark(values: Int, fractionOfZeros: Double): Unit = {\n    val percentageOfZeros = fractionOfZeros * 100\n    val benchmark =\n      new Benchmark(s\"Project + Filter Exec ($percentageOfZeros% zeros)\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(\n            s\"SELECT IF(RAND(1) < $fractionOfZeros, -1, value) AS c1, value AS c2 FROM \" +\n              s\"$tbl\"))\n\n        benchmark.addCase(\"SQL Parquet - Spark\") { _ =>\n          spark.sql(\"select c2 + 1, c1 + 2 from parquetV1Table where c1 + 1 > 0\").noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Scan)\") { _ =>\n          withSQLConf(CometConf.COMET_ENABLED.key -> \"true\") {\n            spark.sql(\"select c2 + 1, c1 + 2 from parquetV1Table where c1 + 1 > 0\").noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Scan, Exec)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n            spark.sql(\"select c2 + 1, c1 + 2 from parquetV1Table where c1 + 1 > 0\").noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Spark (Scan), Comet (Exec)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"false\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_CONVERT_FROM_PARQUET_ENABLED.key -> \"true\") {\n            spark.sql(\"select c2 + 1, c1 + 2 from parquetV1Table where c1 + 1 > 0\").noop()\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n  }\n\n  def subqueryExecBenchmark(values: Int): Unit = {\n    val benchmark = new Benchmark(\"Subquery\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(s\"SELECT value as col1, value + 100 as col2, value + 10 as col3 FROM $tbl\"))\n\n        benchmark.addCase(\"SQL Parquet - Spark\") { _ =>\n          spark.sql(\n            \"SELECT (SELECT max(col1) AS parquetV1Table FROM parquetV1Table) AS a, \" +\n              \"col2, col3 FROM parquetV1Table\").noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Scan)\") { _ =>\n          withSQLConf(CometConf.COMET_ENABLED.key -> \"true\") {\n            spark.sql(\n              \"SELECT (SELECT max(col1) AS parquetV1Table FROM parquetV1Table) AS a, \" +\n                \"col2, col3 FROM parquetV1Table\").noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Scan, Exec)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> 
\"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n            spark.sql(\n              \"SELECT (SELECT max(col1) AS parquetV1Table FROM parquetV1Table) AS a, \" +\n                \"col2, col3 FROM parquetV1Table\")\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n  }\n\n  def sortExecBenchmark(values: Int): Unit = {\n    val benchmark = new Benchmark(\"Sort Exec\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(dir, spark.sql(s\"SELECT * FROM $tbl\"))\n\n        benchmark.addCase(\"SQL Parquet - Spark\") { _ =>\n          spark.sql(\"select * from parquetV1Table\").sortWithinPartitions(\"value\").noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Scan)\") { _ =>\n          withSQLConf(CometConf.COMET_ENABLED.key -> \"true\") {\n            spark.sql(\"select * from parquetV1Table\").sortWithinPartitions(\"value\").noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Scan, Exec)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n            spark.sql(\"select * from parquetV1Table\").sortWithinPartitions(\"value\").noop()\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n  }\n\n  def expandExecBenchmark(values: Int): Unit = {\n    val benchmark = new Benchmark(\"Expand Exec\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(s\"SELECT value as col1, value + 100 as col2, value + 10 as col3 FROM $tbl\"))\n\n        benchmark.addCase(\"SQL Parquet - Spark\") { _ =>\n          spark\n            .sql(\"SELECT col1, col2, SUM(col3) FROM parquetV1Table \" +\n              \"GROUP BY col1, col2 GROUPING SETS ((col1), (col2))\")\n            .noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Scan)\") { _ =>\n          withSQLConf(CometConf.COMET_ENABLED.key -> \"true\") {\n            spark\n              .sql(\"SELECT col1, col2, SUM(col3) FROM parquetV1Table \" +\n                \"GROUP BY col1, col2 GROUPING SETS ((col1), (col2))\")\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Scan, Exec)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n            spark\n              .sql(\"SELECT col1, col2, SUM(col3) FROM parquetV1Table \" +\n                \"GROUP BY col1, col2 GROUPING SETS ((col1), (col2))\")\n              .noop()\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n  }\n\n  // BloomFilterAgg takes an argument for the expected number of distinct values, which determines filter size and\n  // number of hash functions. 
We use the cardinality as a hint to the aggregate, otherwise the default Spark values\n  // make a big filter with a lot of hash functions.\n  def bloomFilterAggregate(values: Int, cardinality: Int): Unit = {\n    val benchmark =\n      new Benchmark(\n        s\"BloomFilterAggregate Exec (cardinality $cardinality)\",\n        values,\n        output = output)\n\n    val funcId_bloom_filter_agg = new FunctionIdentifier(\"bloom_filter_agg\")\n    spark.sessionState.functionRegistry.registerFunction(\n      funcId_bloom_filter_agg,\n      new ExpressionInfo(classOf[BloomFilterAggregate].getName, \"bloom_filter_agg\"),\n      (children: Seq[Expression]) => new BloomFilterAggregate(children.head, children(1)))\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(dir, spark.sql(s\"SELECT floor(rand() * $cardinality) as key FROM $tbl\"))\n\n        val query =\n          s\"SELECT bloom_filter_agg(cast(key as long), cast($cardinality as long)) FROM parquetV1Table\"\n\n        benchmark.addCase(\"SQL Parquet - Spark (BloomFilterAgg)\") { _ =>\n          spark.sql(query).noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Scan) (BloomFilterAgg)\") { _ =>\n          withSQLConf(CometConf.COMET_ENABLED.key -> \"true\") {\n            spark.sql(query).noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Scan, Exec) (BloomFilterAgg)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n            spark.sql(query).noop()\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n\n    spark.sessionState.functionRegistry.dropFunction(funcId_bloom_filter_agg)\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    runBenchmarkWithTable(\"Subquery\", 1024 * 1024 * 10) { v =>\n      subqueryExecBenchmark(v)\n    }\n\n    runBenchmarkWithTable(\"Expand\", 1024 * 1024 * 10) { v =>\n      expandExecBenchmark(v)\n    }\n\n    runBenchmarkWithTable(\"Project + Filter\", 1024 * 1024 * 10) { v =>\n      for (fractionOfZeros <- List(0.0, 0.50, 0.95)) {\n        numericFilterExecBenchmark(v, fractionOfZeros)\n      }\n    }\n\n    runBenchmarkWithTable(\"Sort\", 1024 * 1024 * 10) { v =>\n      sortExecBenchmark(v)\n    }\n\n    runBenchmarkWithTable(\"BloomFilterAggregate\", 1024 * 1024 * 10) { v =>\n      for (card <- List(100, 1024, 1024 * 1024)) {\n        bloomFilterAggregate(v, card)\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometGetJsonObjectBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\n/**\n * Benchmark to measure performance of Comet get_json_object expression. To run this benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometGetJsonObjectBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometGetJsonObjectBenchmark-**results.txt\".\n */\nobject CometGetJsonObjectBenchmark extends CometBenchmarkBase {\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val numRows = 1024 * 1024\n    runBenchmarkWithTable(\"get_json_object\", numRows) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          import spark.implicits._\n\n          prepareTable(\n            dir,\n            spark\n              .range(numRows)\n              .map { i =>\n                val name = s\"user_$i\"\n                val age = (i % 80 + 18).toInt\n                val nested = s\"\"\"{\"city\":\"city_${i % 100}\",\"zip\":\"${10000 + i % 90000}\"}\"\"\"\n                val items =\n                  (0 until (i % 5 + 1).toInt).map(j => s\"\"\"\"item_$j\"\"\"\").mkString(\"[\", \",\", \"]\")\n                s\"\"\"{\"name\":\"$name\",\"age\":$age,\"address\":$nested,\"items\":$items}\"\"\"\n              }\n              .toDF(\"c1\"))\n\n          val extraConfigs =\n            Map(\"spark.comet.expression.GetJsonObject.allowIncompatible\" -> \"true\")\n\n          val benchmarks = List(\n            StringExprConfig(\n              \"simple field\",\n              \"select get_json_object(c1, '$.name') from parquetV1Table\",\n              extraConfigs),\n            StringExprConfig(\n              \"numeric field\",\n              \"select get_json_object(c1, '$.age') from parquetV1Table\",\n              extraConfigs),\n            StringExprConfig(\n              \"nested field\",\n              \"select get_json_object(c1, '$.address.city') from parquetV1Table\",\n              extraConfigs),\n            StringExprConfig(\n              \"array element\",\n              \"select get_json_object(c1, '$.items[0]') from parquetV1Table\",\n              extraConfigs),\n            StringExprConfig(\n              \"nested object\",\n              \"select get_json_object(c1, '$.address') from parquetV1Table\",\n              extraConfigs))\n\n          benchmarks.foreach { config =>\n            runBenchmark(config.name) {\n              runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n            }\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometHashExpressionBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\ncase class HashExprConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Comprehensive benchmark for Comet hash expressions. To run this benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometHashExpressionBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometHashExpressionBenchmark-**results.txt\".\n */\nobject CometHashExpressionBenchmark extends CometBenchmarkBase {\n\n  private val hashExpressions = List(\n    HashExprConfig(\"xxhash64_single\", \"SELECT xxhash64(c_str) FROM parquetV1Table\"),\n    HashExprConfig(\"xxhash64_multi\", \"SELECT xxhash64(c_str, c_int, c_long) FROM parquetV1Table\"),\n    HashExprConfig(\"murmur3_hash_single\", \"SELECT hash(c_str) FROM parquetV1Table\"),\n    HashExprConfig(\"murmur3_hash_multi\", \"SELECT hash(c_str, c_int, c_long) FROM parquetV1Table\"),\n    HashExprConfig(\"sha1\", \"SELECT sha1(c_str) FROM parquetV1Table\"),\n    HashExprConfig(\"sha2_224\", \"SELECT sha2(c_str, 224) FROM parquetV1Table\"),\n    HashExprConfig(\"sha2_256\", \"SELECT sha2(c_str, 256) FROM parquetV1Table\"),\n    HashExprConfig(\"sha2_384\", \"SELECT sha2(c_str, 384) FROM parquetV1Table\"),\n    HashExprConfig(\"sha2_512\", \"SELECT sha2(c_str, 512) FROM parquetV1Table\"))\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024\n\n    runBenchmarkWithTable(\"Hash expression benchmarks\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          // Data distribution: 1% NULL per column\n          // - c_str: unique strings \"string_0\" through \"string_N\"\n          // - c_int: integers 0-999,999 (cycling)\n          // - c_long: large values 0 to ~1 billion\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT\n                CASE WHEN value % 100 = 0 THEN NULL ELSE CONCAT('string_', CAST(value AS STRING)) END AS c_str,\n                CASE WHEN value % 100 = 1 THEN NULL ELSE CAST(value % 1000000 AS INT) END AS c_int,\n                CASE WHEN value % 100 = 2 THEN NULL ELSE CAST(value * 1000 AS LONG) END AS c_long\n              FROM $tbl\n            \"\"\"))\n\n          hashExpressions.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n\n    runMurmur3HashBenchmarks(values)\n  }\n\n  /**\n   * Comprehensive benchmarks for murmur3 hash function across different data types. 
These\n   * benchmarks cover primitive types, complex types (arrays, structs), and nested structures to\n   * measure hash performance comprehensively.\n   */\n  private def runMurmur3HashBenchmarks(values: Int): Unit = {\n    // Primitive type benchmarks\n    runPrimitiveTypeBenchmarks(values)\n    // Complex type benchmarks (arrays, structs)\n    runComplexTypeBenchmarks(values)\n    // Nested structure benchmarks\n    runNestedTypeBenchmarks(values)\n  }\n\n  private def runPrimitiveTypeBenchmarks(values: Int): Unit = {\n    runBenchmarkWithTable(\"Murmur3 hash - primitive types\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT\n                CASE WHEN value % 100 = 0 THEN NULL ELSE CAST(value % 2 = 0 AS BOOLEAN) END AS c_bool,\n                CASE WHEN value % 100 = 1 THEN NULL ELSE CAST(value % 128 AS TINYINT) END AS c_byte,\n                CASE WHEN value % 100 = 2 THEN NULL ELSE CAST(value % 32768 AS SMALLINT) END AS c_short,\n                CASE WHEN value % 100 = 3 THEN NULL ELSE CAST(value AS INT) END AS c_int,\n                CASE WHEN value % 100 = 4 THEN NULL ELSE CAST(value * 1000 AS BIGINT) END AS c_long,\n                CASE WHEN value % 100 = 5 THEN NULL ELSE CAST(value * 1.5 AS FLOAT) END AS c_float,\n                CASE WHEN value % 100 = 6 THEN NULL ELSE CAST(value * 1.5 AS DOUBLE) END AS c_double,\n                CASE WHEN value % 100 = 7 THEN NULL ELSE CONCAT('str_', CAST(value AS STRING)) END AS c_string,\n                CASE WHEN value % 100 = 8 THEN NULL ELSE CAST(CONCAT('bin_', CAST(value AS STRING)) AS BINARY) END AS c_binary,\n                CASE WHEN value % 100 = 9 THEN NULL ELSE DATE_ADD(DATE '2020-01-01', CAST(value % 1000 AS INT)) END AS c_date,\n                CASE WHEN value % 100 = 10 THEN NULL ELSE CAST(value AS DECIMAL(10, 2)) END AS c_decimal\n              FROM $tbl\n            \"\"\"))\n\n          val primitiveHashBenchmarks = List(\n            HashExprConfig(\"hash_boolean\", \"SELECT hash(c_bool) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_byte\", \"SELECT hash(c_byte) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_short\", \"SELECT hash(c_short) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_int\", \"SELECT hash(c_int) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_long\", \"SELECT hash(c_long) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_float\", \"SELECT hash(c_float) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_double\", \"SELECT hash(c_double) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_string\", \"SELECT hash(c_string) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_binary\", \"SELECT hash(c_binary) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_date\", \"SELECT hash(c_date) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_decimal\", \"SELECT hash(c_decimal) FROM parquetV1Table\"))\n\n          primitiveHashBenchmarks.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n\n  private def runComplexTypeBenchmarks(values: Int): Unit = {\n    runBenchmarkWithTable(\"Murmur3 hash - complex types\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n           
   SELECT\n                CASE WHEN value % 100 = 0 THEN NULL\n                     ELSE array(CAST(value AS INT), CAST(value + 1 AS INT), CAST(value + 2 AS INT))\n                END AS c_array_int,\n                CASE WHEN value % 100 = 1 THEN NULL\n                     ELSE array(CONCAT('s', CAST(value AS STRING)), CONCAT('t', CAST(value AS STRING)))\n                END AS c_array_string,\n                CASE WHEN value % 100 = 2 THEN NULL\n                     ELSE array(CAST(value * 1.1 AS DOUBLE), CAST(value * 2.2 AS DOUBLE))\n                END AS c_array_double,\n                CASE WHEN value % 100 = 3 THEN NULL\n                     ELSE named_struct('a', CAST(value AS INT), 'b', CONCAT('str_', CAST(value AS STRING)))\n                END AS c_struct,\n                CASE WHEN value % 100 = 4 THEN NULL\n                     ELSE named_struct(\n                       'x', CAST(value AS INT),\n                       'y', CAST(value * 1.5 AS DOUBLE),\n                       'z', CAST(value % 2 = 0 AS BOOLEAN)\n                     )\n                END AS c_struct_multi\n              FROM $tbl\n            \"\"\"))\n\n          val complexHashBenchmarks = List(\n            HashExprConfig(\"hash_array_int\", \"SELECT hash(c_array_int) FROM parquetV1Table\"),\n            HashExprConfig(\n              \"hash_array_string\",\n              \"SELECT hash(c_array_string) FROM parquetV1Table\"),\n            HashExprConfig(\n              \"hash_array_double\",\n              \"SELECT hash(c_array_double) FROM parquetV1Table\"),\n            HashExprConfig(\"hash_struct\", \"SELECT hash(c_struct) FROM parquetV1Table\"),\n            HashExprConfig(\n              \"hash_struct_multi_fields\",\n              \"SELECT hash(c_struct_multi) FROM parquetV1Table\"))\n\n          complexHashBenchmarks.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n\n  private def runNestedTypeBenchmarks(values: Int): Unit = {\n    runBenchmarkWithTable(\"Murmur3 hash - nested types\", values) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          prepareTable(\n            dir,\n            spark.sql(s\"\"\"\n              SELECT\n                CASE WHEN value % 100 = 0 THEN NULL\n                     ELSE array(\n                       array(CAST(value AS INT), CAST(value + 1 AS INT)),\n                       array(CAST(value + 2 AS INT))\n                     )\n                END AS c_nested_array,\n                CASE WHEN value % 100 = 1 THEN NULL\n                     ELSE named_struct(\n                       'a', CAST(value AS INT),\n                       'b', named_struct('x', CONCAT('s', CAST(value AS STRING)), 'y', CAST(value * 1.5 AS DOUBLE))\n                     )\n                END AS c_nested_struct,\n                CASE WHEN value % 100 = 2 THEN NULL\n                     ELSE named_struct(\n                       'id', CAST(value AS INT),\n                       'items', array(CONCAT('item_', CAST(value AS STRING)), CONCAT('item_', CAST(value + 1 AS STRING)))\n                     )\n                END AS c_struct_with_array,\n                CASE WHEN value % 100 = 3 THEN NULL\n                     ELSE array(\n                       named_struct('k', CAST(value AS INT), 'v', CONCAT('val_', CAST(value AS STRING))),\n                       named_struct('k', CAST(value + 1 AS INT), 'v', CONCAT('val_', CAST(value + 1 AS 
STRING)))\n                     )\n                END AS c_array_of_struct\n              FROM $tbl\n            \"\"\"))\n\n          val nestedHashBenchmarks = List(\n            HashExprConfig(\n              \"hash_nested_array\",\n              \"SELECT hash(c_nested_array) FROM parquetV1Table\"),\n            HashExprConfig(\n              \"hash_nested_struct\",\n              \"SELECT hash(c_nested_struct) FROM parquetV1Table\"),\n            HashExprConfig(\n              \"hash_struct_with_array\",\n              \"SELECT hash(c_struct_with_array) FROM parquetV1Table\"),\n            HashExprConfig(\n              \"hash_array_of_struct\",\n              \"SELECT hash(c_array_of_struct) FROM parquetV1Table\"))\n\n          nestedHashBenchmarks.foreach { config =>\n            runExpressionBenchmark(config.name, v, config.query, config.extraCometConfigs)\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometIcebergReadBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.benchmark.Benchmark\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.CometConf\n\n/**\n * Benchmark to measure Comet Iceberg read performance. To run this benchmark:\n * `SPARK_GENERATE_BENCHMARK_FILES=1 make\n * benchmark-org.apache.spark.sql.benchmark.CometIcebergReadBenchmark` Results will be written to\n * \"spark/benchmarks/CometIcebergReadBenchmark-**results.txt\".\n */\nobject CometIcebergReadBenchmark extends CometBenchmarkBase {\n\n  def icebergScanBenchmark(values: Int, dataType: DataType): Unit = {\n    val sqlBenchmark =\n      new Benchmark(s\"SQL Single ${dataType.sql} Iceberg Column Scan\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"icebergTable\") {\n        prepareIcebergTable(\n          dir,\n          spark.sql(s\"SELECT CAST(value as ${dataType.sql}) id FROM $tbl\"),\n          \"icebergTable\")\n\n        val query = dataType match {\n          case BooleanType => \"sum(cast(id as bigint))\"\n          case _ => \"sum(id)\"\n        }\n\n        sqlBenchmark.addCase(\"SQL Iceberg - Spark\") { _ =>\n          withSQLConf(\n            \"spark.memory.offHeap.enabled\" -> \"true\",\n            \"spark.memory.offHeap.size\" -> \"10g\") {\n            spark.sql(s\"select $query from icebergTable\").noop()\n          }\n        }\n\n        sqlBenchmark.addCase(\"SQL Iceberg - Comet Iceberg-Rust\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            \"spark.memory.offHeap.enabled\" -> \"true\",\n            \"spark.memory.offHeap.size\" -> \"10g\",\n            CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n            spark.sql(s\"select $query from icebergTable\").noop()\n          }\n        }\n\n        sqlBenchmark.run()\n      }\n    }\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    runBenchmarkWithTable(\"SQL Single Numeric Iceberg Column Scan\", 1024 * 1024 * 128) { v =>\n      Seq(BooleanType, ByteType, ShortType, IntegerType, LongType, FloatType, DoubleType)\n        .foreach(icebergScanBenchmark(v, _))\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometJsonExpressionBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.sql.catalyst.expressions.{JsonToStructs, StructsToJson}\n\nimport org.apache.comet.CometConf\n\n/**\n * Configuration for a JSON expression benchmark.\n * @param name\n *   Name for the benchmark\n * @param schema\n *   Target schema for from_json\n * @param query\n *   SQL query to benchmark\n * @param extraCometConfigs\n *   Additional Comet configurations for the scan+exec case\n */\ncase class JsonExprConfig(\n    name: String,\n    schema: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n// spotless:off\n/**\n * Benchmark to measure performance of Comet JSON expressions. To run this benchmark:\n * `SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometJsonExpressionBenchmark`\n * Results will be written to \"spark/benchmarks/CometJsonExpressionBenchmark-**results.txt\".\n */\n// spotless:on\nobject CometJsonExpressionBenchmark extends CometBenchmarkBase {\n\n  /**\n   * Generic method to run a JSON expression benchmark with the given configuration.\n   */\n  def runJsonExprBenchmark(config: JsonExprConfig, values: Int): Unit = {\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        // Generate data with specified JSON patterns\n        val jsonData = config.name match {\n          case \"from_json - simple primitives\" =>\n            spark.sql(s\"\"\"\n              SELECT\n                concat('{\"a\":', CAST(value AS STRING), ',\"b\":\"str_', CAST(value AS STRING), '\"}') AS json_str\n              FROM $tbl\n            \"\"\")\n\n          case \"from_json - all primitive types\" =>\n            spark.sql(s\"\"\"\n              SELECT\n                concat(\n                  '{\"i32\":', CAST(value % 1000 AS STRING),\n                  ',\"i64\":', CAST(value * 1000000000L AS STRING),\n                  ',\"f32\":', CAST(value * 1.5 AS STRING),\n                  ',\"f64\":', CAST(value * 2.5 AS STRING),\n                  ',\"bool\":', CASE WHEN value % 2 = 0 THEN 'true' ELSE 'false' END,\n                  ',\"str\":\"value_', CAST(value AS STRING), '\"}'\n                ) AS json_str\n              FROM $tbl\n            \"\"\")\n\n          case \"from_json - with nulls\" =>\n            spark.sql(s\"\"\"\n              SELECT\n                CASE\n                  WHEN value % 10 = 0 THEN NULL\n                  WHEN value % 5 = 0 THEN '{\"a\":null,\"b\":\"test\"}'\n                  WHEN value % 3 = 0 THEN '{\"a\":123}'\n                  ELSE concat('{\"a\":', CAST(value AS STRING), ',\"b\":\"str_', CAST(value AS STRING), '\"}')\n                END AS json_str\n              
FROM $tbl\n            \"\"\")\n\n          case \"from_json - nested struct\" =>\n            spark.sql(s\"\"\"\n              SELECT\n                concat(\n                  '{\"outer\":{\"inner_a\":', CAST(value AS STRING),\n                  ',\"inner_b\":\"nested_', CAST(value AS STRING), '\"}}') AS json_str\n              FROM $tbl\n            \"\"\")\n\n          case \"from_json - field access\" =>\n            spark.sql(s\"\"\"\n              SELECT\n                concat('{\"a\":', CAST(value AS STRING), ',\"b\":\"str_', CAST(value AS STRING), '\"}') AS json_str\n              FROM $tbl\n            \"\"\")\n\n          case \"to_json - simple primitives\" =>\n            spark.sql(\n              s\"\"\"SELECT named_struct(\"a\", CAST(value AS INT), \"b\", concat(\"str_\", CAST(value AS STRING))) AS json_struct FROM $tbl\"\"\")\n\n          case \"to_json - all primitive types\" =>\n            spark.sql(s\"\"\"\n              SELECT named_struct(\n                \"i32\", CAST(value % 1000 AS INT),\n                \"i64\", CAST(value * 1000000000L AS LONG),\n                \"f32\", CAST(value * 1.5 AS FLOAT),\n                \"f64\", CAST(value * 2.5 AS DOUBLE),\n                \"bool\", CASE WHEN value % 2 = 0 THEN true ELSE false END,\n                \"str\", concat(\"value_\", CAST(value AS STRING))\n              ) AS json_struct FROM $tbl\n            \"\"\")\n\n          case \"to_json - with nulls\" =>\n            spark.sql(s\"\"\"\n              SELECT\n                CASE\n                  WHEN value % 10 = 0 THEN CAST(NULL AS STRUCT<a: INT, b: STRING>)\n                  WHEN value % 5 = 0 THEN named_struct(\"a\", CAST(NULL AS INT), \"b\", \"test\")\n                  WHEN value % 3 = 0 THEN named_struct(\"a\", CAST(123 AS INT), \"b\", CAST(NULL AS STRING))\n                  ELSE named_struct(\"a\", CAST(value AS INT), \"b\", concat(\"str_\", CAST(value AS STRING)))\n                END AS json_struct\n              FROM $tbl\n            \"\"\")\n\n          case \"to_json - nested struct\" =>\n            spark.sql(s\"\"\"\n              SELECT named_struct(\n                \"outer\", named_struct(\n                  \"inner_a\", CAST(value AS INT),\n                  \"inner_b\", concat(\"nested_\", CAST(value AS STRING))\n                )\n              ) AS json_struct FROM $tbl\n            \"\"\")\n\n          case _ =>\n            spark.sql(s\"\"\"\n              SELECT\n                concat('{\"a\":', CAST(value AS STRING), ',\"b\":\"str_', CAST(value AS STRING), '\"}') AS json_str\n              FROM $tbl\n            \"\"\")\n        }\n\n        prepareTable(dir, jsonData)\n\n        val extraConfigs = Map(\n          CometConf.getExprAllowIncompatConfigKey(classOf[JsonToStructs]) -> \"true\",\n          CometConf.getExprAllowIncompatConfigKey(\n            classOf[StructsToJson]) -> \"true\") ++ config.extraCometConfigs\n\n        runExpressionBenchmark(config.name, values, config.query, extraConfigs)\n      }\n    }\n  }\n\n  // Configuration for all JSON expression benchmarks\n  private val jsonExpressions = List(\n    // from_json tests\n    JsonExprConfig(\n      \"from_json - simple primitives\",\n      \"a INT, b STRING\",\n      \"SELECT from_json(json_str, 'a INT, b STRING') FROM parquetV1Table\"),\n    JsonExprConfig(\n      \"from_json - all primitive types\",\n      \"i32 INT, i64 BIGINT, f32 FLOAT, f64 DOUBLE, bool BOOLEAN, str STRING\",\n      \"SELECT from_json(json_str, 'i32 INT, i64 BIGINT, f32 FLOAT, f64 DOUBLE, bool 
BOOLEAN, str STRING') FROM parquetV1Table\"),\n    JsonExprConfig(\n      \"from_json - with nulls\",\n      \"a INT, b STRING\",\n      \"SELECT from_json(json_str, 'a INT, b STRING') FROM parquetV1Table\"),\n    JsonExprConfig(\n      \"from_json - nested struct\",\n      \"outer STRUCT<inner_a: INT, inner_b: STRING>\",\n      \"SELECT from_json(json_str, 'outer STRUCT<inner_a: INT, inner_b: STRING>') FROM parquetV1Table\"),\n    JsonExprConfig(\n      \"from_json - field access\",\n      \"a INT, b STRING\",\n      \"SELECT from_json(json_str, 'a INT, b STRING').a FROM parquetV1Table\"),\n\n    // to_json tests\n    JsonExprConfig(\n      \"to_json - simple primitives\",\n      \"a INT, b STRING\",\n      \"SELECT to_json(json_struct) FROM parquetV1Table\"),\n    JsonExprConfig(\n      \"to_json - all primitive types\",\n      \"i32 INT, i64 BIGINT, f32 FLOAT, f64 DOUBLE, bool BOOLEAN, str STRING\",\n      \"SELECT to_json(json_struct) FROM parquetV1Table\"),\n    JsonExprConfig(\n      \"to_json - with nulls\",\n      \"a INT, b STRING\",\n      \"SELECT to_json(json_struct) FROM parquetV1Table\"),\n    JsonExprConfig(\n      \"to_json - nested struct\",\n      \"outer STRUCT<inner_a: INT, inner_b: STRING>\",\n      \"SELECT to_json(json_struct) FROM parquetV1Table\"))\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024\n\n    jsonExpressions.foreach { config =>\n      runBenchmarkWithTable(config.name, values) { v =>\n        runJsonExprBenchmark(config, v)\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometOperatorSerdeBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport java.io.File\nimport java.nio.file.Files\n\nimport org.apache.spark.benchmark.Benchmark\nimport org.apache.spark.sql.comet.{CometBatchScanExec, CometIcebergNativeScanExec}\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.rules.CometScanRule\nimport org.apache.comet.serde.OperatorOuterClass\nimport org.apache.comet.serde.operator.CometIcebergNativeScan\n\n/**\n * Benchmark for operator serialization/deserialization roundtrip performance.\n *\n * This benchmark measures the time to serialize Iceberg FileScanTask objects to protobuf,\n * starting from actual Iceberg Java objects rather than pre-constructed protobuf messages.\n *\n * To run this benchmark:\n * {{{\n * SPARK_GENERATE_BENCHMARK_FILES=1 make \\\n *   benchmark-org.apache.spark.sql.benchmark.CometOperatorSerdeBenchmark\n * }}}\n *\n * Results will be written to \"spark/benchmarks/CometOperatorSerdeBenchmark-**results.txt\".\n */\nobject CometOperatorSerdeBenchmark extends CometBenchmarkBase {\n\n  // Check if Iceberg is available in classpath\n  private def icebergAvailable: Boolean = {\n    try {\n      Class.forName(\"org.apache.iceberg.catalog.Catalog\")\n      true\n    } catch {\n      case _: ClassNotFoundException => false\n    }\n  }\n\n  // Helper to create temp directory for Iceberg warehouse\n  private def withTempIcebergDir(f: File => Unit): Unit = {\n    val dir = Files.createTempDirectory(\"comet-serde-benchmark\").toFile\n    try {\n      f(dir)\n    } finally {\n      def deleteRecursively(file: File): Unit = {\n        if (file.isDirectory) {\n          Option(file.listFiles()).foreach(_.foreach(deleteRecursively))\n        }\n        file.delete()\n      }\n      deleteRecursively(dir)\n    }\n  }\n\n  /**\n   * Extracts CometIcebergNativeScanExec from a query plan, unwrapping AQE if present.\n   */\n  private def extractIcebergNativeScanExec(\n      plan: SparkPlan): Option[CometIcebergNativeScanExec] = {\n    val unwrapped = plan match {\n      case aqe: AdaptiveSparkPlanExec => aqe.executedPlan\n      case other => other\n    }\n\n    def find(p: SparkPlan): Option[CometIcebergNativeScanExec] = {\n      p match {\n        case scan: CometIcebergNativeScanExec => Some(scan)\n        case _ => p.children.flatMap(find).headOption\n      }\n    }\n    find(unwrapped)\n  }\n\n  /**\n   * Reconstructs a CometBatchScanExec from CometIcebergNativeScanExec for benchmarking the\n   * conversion process.\n   */\n  private def reconstructBatchScanExec(\n      nativeScan: CometIcebergNativeScanExec): 
CometBatchScanExec = {\n    CometBatchScanExec(\n      wrapped = nativeScan.originalPlan,\n      runtimeFilters = Seq.empty,\n      nativeIcebergScanMetadata = Some(nativeScan.nativeIcebergScanMetadata))\n  }\n\n  /**\n   * Creates an Iceberg table with the specified number of partitions. Each partition contains one\n   * data file.\n   */\n  private def createPartitionedIcebergTable(\n      warehouseDir: File,\n      numPartitions: Int,\n      tableName: String = \"serde_bench_table\"): Unit = {\n    // Configure Hadoop catalog\n    spark.conf.set(\"spark.sql.catalog.bench_cat\", \"org.apache.iceberg.spark.SparkCatalog\")\n    spark.conf.set(\"spark.sql.catalog.bench_cat.type\", \"hadoop\")\n    spark.conf.set(\"spark.sql.catalog.bench_cat.warehouse\", warehouseDir.getAbsolutePath)\n\n    val fullTableName = s\"bench_cat.db.$tableName\"\n\n    // Drop table if exists\n    spark.sql(s\"DROP TABLE IF EXISTS $fullTableName\")\n    spark.sql(\"CREATE NAMESPACE IF NOT EXISTS bench_cat.db\")\n\n    // Create partitioned Iceberg table\n    spark.sql(s\"\"\"\n      CREATE TABLE $fullTableName (\n        id BIGINT,\n        name STRING,\n        value DOUBLE,\n        partition_col INT\n      ) USING iceberg\n      PARTITIONED BY (partition_col)\n      TBLPROPERTIES (\n        'format-version'='2',\n        'write.parquet.compression-codec' = 'snappy'\n      )\n    \"\"\")\n\n    // Insert data to create the specified number of partitions\n    // Use a range to create unique partition values\n    // scalastyle:off println\n    println(s\"Creating Iceberg table with $numPartitions partitions...\")\n    // scalastyle:on println\n\n    // Insert in batches to avoid memory issues\n    val batchSize = 1000\n    var partitionsCreated = 0\n\n    while (partitionsCreated < numPartitions) {\n      val batchEnd = math.min(partitionsCreated + batchSize, numPartitions)\n      val partitionRange = partitionsCreated until batchEnd\n\n      // Create DataFrame with partition data\n      import spark.implicits._\n      val df = partitionRange\n        .map { p =>\n          (p.toLong, s\"name_$p\", p * 1.5, p)\n        }\n        .toDF(\"id\", \"name\", \"value\", \"partition_col\")\n\n      df.writeTo(fullTableName).append()\n      partitionsCreated = batchEnd\n\n      if (partitionsCreated % 5000 == 0 || partitionsCreated == numPartitions) {\n        // scalastyle:off println\n        println(s\"  Created $partitionsCreated / $numPartitions partitions\")\n        // scalastyle:on println\n      }\n    }\n  }\n\n  /**\n   * Benchmarks the serialization of IcebergScan operator from FileScanTask objects.\n   */\n  def icebergScanSerdeBenchmark(numPartitions: Int): Unit = {\n    if (!icebergAvailable) {\n      // scalastyle:off println\n      println(\"Iceberg not available in classpath, skipping benchmark\")\n      // scalastyle:on println\n      return\n    }\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.bench_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.bench_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.bench_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Create the partitioned table\n        createPartitionedIcebergTable(warehouseDir, numPartitions)\n\n        val fullTableName = \"bench_cat.db.serde_bench_table\"\n\n        // Plan a 
query to get the CometIcebergNativeScanExec with FileScanTasks\n        val df = spark.sql(s\"SELECT * FROM $fullTableName\")\n        val plan = df.queryExecution.executedPlan\n\n        val nativeScanOpt = extractIcebergNativeScanExec(plan)\n\n        nativeScanOpt match {\n          case Some(nativeScan) =>\n            // Get metadata and tasks\n            val metadata = nativeScan.nativeIcebergScanMetadata\n            val tasks = metadata.tasks\n            // scalastyle:off println\n            println(s\"Found ${tasks.size()} FileScanTasks\")\n            // scalastyle:on println\n\n            // Reconstruct CometBatchScanExec for conversion benchmarking\n            val scanExec = reconstructBatchScanExec(nativeScan)\n\n            // Benchmark the serialization\n            val iterations = 100\n            val benchmark = new Benchmark(\n              s\"IcebergScan serde ($numPartitions partitions, ${tasks.size()} tasks)\",\n              iterations,\n              output = output)\n\n            // Benchmark: Convert FileScanTasks to protobuf (the convert() method)\n            benchmark.addCase(\"FileScanTask -> Protobuf (convert)\") { _ =>\n              var i = 0\n              while (i < iterations) {\n                val builder = OperatorOuterClass.Operator.newBuilder()\n                CometIcebergNativeScan.convert(scanExec, builder)\n                i += 1\n              }\n            }\n\n            // Benchmark: Full roundtrip - convert to protobuf and serialize to bytes\n            benchmark.addCase(\"FileScanTask -> Protobuf -> bytes\") { _ =>\n              var i = 0\n              while (i < iterations) {\n                val builder = OperatorOuterClass.Operator.newBuilder()\n                val operatorOpt = CometIcebergNativeScan.convert(scanExec, builder)\n                operatorOpt.foreach(_.toByteArray)\n                i += 1\n              }\n            }\n\n            // Get serialized bytes for deserialization benchmark\n            val builder = OperatorOuterClass.Operator.newBuilder()\n            val operatorOpt = CometIcebergNativeScan.convert(scanExec, builder)\n\n            operatorOpt match {\n              case Some(operator) =>\n                val serializedBytes = operator.toByteArray\n                val sizeKB = serializedBytes.length / 1024.0\n                val sizeMB = sizeKB / 1024.0\n\n                // scalastyle:off println\n                println(\n                  s\"Serialized IcebergScan size: ${f\"$sizeKB%.1f\"} KB (${f\"$sizeMB%.2f\"} MB)\")\n                // scalastyle:on println\n\n                // Benchmark: Deserialize from bytes\n                benchmark.addCase(\"bytes -> Protobuf (parseFrom)\") { _ =>\n                  var i = 0\n                  while (i < iterations) {\n                    OperatorOuterClass.Operator.parseFrom(serializedBytes)\n                    i += 1\n                  }\n                }\n\n                // Benchmark: Full roundtrip including deserialization\n                benchmark.addCase(\"Full roundtrip (convert + serialize + deserialize)\") { _ =>\n                  var i = 0\n                  while (i < iterations) {\n                    val b = OperatorOuterClass.Operator.newBuilder()\n                    val op = CometIcebergNativeScan.convert(scanExec, b)\n                    op.foreach { o =>\n                      val bytes = o.toByteArray\n                      OperatorOuterClass.Operator.parseFrom(bytes)\n                    }\n                    i += 1\n   
               }\n                }\n\n              case None =>\n                // scalastyle:off println\n                println(\"WARNING: convert() returned None, cannot benchmark serialization\")\n              // scalastyle:on println\n            }\n\n            benchmark.run()\n\n          case None =>\n            // scalastyle:off println\n            println(\"WARNING: Could not find CometIcebergNativeScanExec in query plan\")\n            println(s\"Plan:\\n$plan\")\n          // scalastyle:on println\n        }\n\n        // Cleanup\n        spark.sql(s\"DROP TABLE IF EXISTS $fullTableName\")\n      }\n    }\n  }\n\n  /**\n   * Benchmarks CometScanRule.apply() on Iceberg BatchScanExec plans.\n   *\n   * This measures the validation overhead when converting Spark Iceberg scans to Comet scans.\n   */\n  def icebergScanRuleBenchmark(numPartitions: Int): Unit = {\n    if (!icebergAvailable) {\n      // scalastyle:off println\n      println(\"Iceberg not available in classpath, skipping benchmark\")\n      // scalastyle:on println\n      return\n    }\n\n    withTempIcebergDir { warehouseDir =>\n      withSQLConf(\n        \"spark.sql.catalog.bench_cat\" -> \"org.apache.iceberg.spark.SparkCatalog\",\n        \"spark.sql.catalog.bench_cat.type\" -> \"hadoop\",\n        \"spark.sql.catalog.bench_cat.warehouse\" -> warehouseDir.getAbsolutePath,\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n        CometConf.COMET_ICEBERG_NATIVE_ENABLED.key -> \"true\") {\n\n        // Create the partitioned table\n        createPartitionedIcebergTable(warehouseDir, numPartitions)\n\n        val fullTableName = \"bench_cat.db.serde_bench_table\"\n\n        // Get the sparkPlan (before post-hoc rules like CometScanRule)\n        val df = spark.sql(s\"SELECT * FROM $fullTableName\")\n        val sparkPlan = df.queryExecution.sparkPlan\n\n        // scalastyle:off println\n        println(s\"SparkPlan class: ${sparkPlan.getClass.getSimpleName}\")\n        // scalastyle:on println\n\n        val rule = CometScanRule(spark)\n        val iterations = 100\n\n        val benchmark = new Benchmark(\n          s\"CometScanRule apply ($numPartitions partitions)\",\n          iterations,\n          output = output)\n\n        benchmark.addCase(\"CometScanRule.apply(sparkPlan)\") { _ =>\n          var i = 0\n          while (i < iterations) {\n            rule.apply(sparkPlan)\n            i += 1\n          }\n        }\n\n        benchmark.run()\n\n        // Cleanup\n        spark.sql(s\"DROP TABLE IF EXISTS $fullTableName\")\n      }\n    }\n  }\n\n  override def runCometBenchmark(args: Array[String]): Unit = {\n    val numPartitions = if (args.nonEmpty) args(0).toInt else 30000\n\n    runBenchmark(\"CometScanRule Benchmark\") {\n      icebergScanRuleBenchmark(numPartitions)\n    }\n\n    runBenchmark(\"IcebergScan Operator Serde Benchmark\") {\n      icebergScanSerdeBenchmark(numPartitions)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometPartitionColumnBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.benchmark.Benchmark\n\n/**\n * Benchmark to measure partition column scan performance. This exercises the CometConstantVector\n * path where constant columns are exported as 1-element Arrow arrays and expanded on the native\n * side.\n *\n * To run this benchmark:\n * {{{\n * SPARK_GENERATE_BENCHMARK_FILES=1 make \\\n *   benchmark-org.apache.spark.sql.benchmark.CometPartitionColumnBenchmark\n * }}}\n *\n * Results will be written to \"spark/benchmarks/CometPartitionColumnBenchmark-**results.txt\".\n */\nobject CometPartitionColumnBenchmark extends CometBenchmarkBase {\n\n  def partitionColumnScanBenchmark(values: Int, numPartitionCols: Int): Unit = {\n    val sqlBenchmark = new Benchmark(\n      s\"Partitioned Scan with $numPartitionCols partition column(s)\",\n      values,\n      output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        val partCols =\n          (1 to numPartitionCols).map(i => s\"'part$i' as p$i\").mkString(\", \")\n        val partNames = (1 to numPartitionCols).map(i => s\"p$i\")\n        val df = spark.sql(s\"SELECT value as id, $partCols FROM $tbl\")\n        val parquetDir = dir.getCanonicalPath + \"/parquetV1\"\n        df.write\n          .partitionBy(partNames: _*)\n          .mode(\"overwrite\")\n          .option(\"compression\", \"snappy\")\n          .parquet(parquetDir)\n        spark.read.parquet(parquetDir).createOrReplaceTempView(\"parquetV1Table\")\n\n        addParquetScanCases(sqlBenchmark, \"select sum(id) from parquetV1Table\")\n\n        // Also benchmark reading partition columns themselves\n        val partSumExpr =\n          (1 to numPartitionCols).map(i => s\"sum(length(p$i))\").mkString(\", \")\n\n        addParquetScanCases(\n          sqlBenchmark,\n          s\"select $partSumExpr from parquetV1Table\",\n          caseSuffix = \"partition cols\")\n\n        sqlBenchmark.run()\n      }\n    }\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    runBenchmarkWithTable(\"Partitioned Column Scan\", 1024 * 1024 * 15) { v =>\n      for (numPartCols <- List(1, 5)) {\n        partitionColumnScanBenchmark(v, numPartCols)\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometPredicateExpressionBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\n/**\n * Benchmark to measure Comet execution performance. To run this benchmark:\n * `SPARK_GENERATE_BENCHMARK_FILES=1 make\n * benchmark-org.apache.spark.sql.benchmark.CometPredicateExpressionBenchmark` Results will be\n * written to \"spark/benchmarks/CometPredicateExpressionBenchmark -**results.txt\".\n */\nobject CometPredicateExpressionBenchmark extends CometBenchmarkBase {\n\n  def inExprBenchmark(values: Int): Unit = {\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(\n            \"select CASE WHEN value < 0 THEN 'negative'\" +\n              s\" WHEN value = 0 THEN 'zero' ELSE 'positive' END c1 from $tbl\"))\n\n        val query = \"select * from parquetV1Table where c1 in ('positive', 'zero')\"\n\n        runExpressionBenchmark(\"in Expr\", values, query)\n      }\n    }\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    val values = 1024 * 1024;\n\n    runBenchmarkWithTable(\"inExpr\", values) { v =>\n      inExprBenchmark(v)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometReadBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport java.io.File\nimport java.nio.charset.StandardCharsets\nimport java.util.Base64\n\nimport scala.jdk.CollectionConverters._\nimport scala.util.Random\n\nimport org.apache.hadoop.fs.Path\nimport org.apache.parquet.crypto.DecryptionPropertiesFactory\nimport org.apache.parquet.crypto.keytools.KeyToolkit\nimport org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\nimport org.apache.spark.TestUtils\nimport org.apache.spark.benchmark.Benchmark\nimport org.apache.spark.sql.{DataFrame, SparkSession}\nimport org.apache.spark.sql.execution.datasources.parquet.VectorizedParquetRecordReader\nimport org.apache.spark.sql.types._\nimport org.apache.spark.sql.vectorized.ColumnVector\n\nimport org.apache.comet.{CometConf, WithHdfsCluster}\n\n/**\n * Benchmark to measure Comet read performance. To run this benchmark:\n * `SPARK_GENERATE_BENCHMARK_FILES=1 make\n * benchmark-org.apache.spark.sql.benchmark.CometReadBenchmark` Results will be written to\n * \"spark/benchmarks/CometReadBenchmark-**results.txt\".\n */\nclass CometReadBaseBenchmark extends CometBenchmarkBase {\n\n  def numericScanBenchmark(values: Int, dataType: DataType): Unit = {\n    val sqlBenchmark =\n      new Benchmark(s\"SQL Single ${dataType.sql} Column Scan\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(dir, spark.sql(s\"SELECT CAST(value as ${dataType.sql}) id FROM $tbl\"))\n\n        val query = dataType match {\n          case BooleanType => \"sum(cast(id as bigint))\"\n          case _ => \"sum(id)\"\n        }\n\n        addParquetScanCases(sqlBenchmark, s\"select $query from parquetV1Table\")\n        sqlBenchmark.run()\n      }\n    }\n  }\n\n  def encryptedScanBenchmark(values: Int, dataType: DataType): Unit = {\n    val sqlBenchmark =\n      new Benchmark(s\"SQL Single ${dataType.sql} Encrypted Column Scan\", values, output = output)\n\n    val encoder = Base64.getEncoder\n    val footerKey =\n      encoder.encodeToString(\"0123456789012345\".getBytes(StandardCharsets.UTF_8))\n    val key1 = encoder.encodeToString(\"1234567890123450\".getBytes(StandardCharsets.UTF_8))\n    val cryptoFactoryClass =\n      \"org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory\"\n\n    val cryptoConf = Map(\n      \"spark.memory.offHeap.enabled\" -> \"true\",\n      \"spark.memory.offHeap.size\" -> \"10g\",\n      DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n      KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n        \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n      InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n        s\"footerKey: 
${footerKey}, key1: ${key1}\")\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareEncryptedTable(\n          dir,\n          spark.sql(s\"SELECT CAST(value as ${dataType.sql}) id FROM $tbl\"))\n\n        val query = dataType match {\n          case BooleanType => \"sum(cast(id as bigint))\"\n          case _ => \"sum(id)\"\n        }\n\n        addParquetScanCases(\n          sqlBenchmark,\n          s\"select $query from parquetV1Table\",\n          extraConf = cryptoConf)\n        sqlBenchmark.run()\n      }\n    }\n  }\n\n  def decimalScanBenchmark(values: Int, precision: Int, scale: Int): Unit = {\n    val sqlBenchmark = new Benchmark(\n      s\"SQL Single Decimal(precision: $precision, scale: $scale) Column Scan\",\n      values,\n      output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(\n            s\"SELECT CAST(value / 10000000.0 as DECIMAL($precision, $scale)) \" +\n              s\"id FROM $tbl\"))\n\n        addParquetScanCases(sqlBenchmark, \"select sum(id) from parquetV1Table\")\n        sqlBenchmark.run()\n      }\n    }\n  }\n\n  def readerBenchmark(values: Int, dataType: DataType): Unit = {\n    val sqlBenchmark =\n      new Benchmark(s\"Parquet reader benchmark for $dataType\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(dir, spark.sql(s\"SELECT CAST(value as ${dataType.sql}) id FROM $tbl\"))\n\n        val enableOffHeapColumnVector = spark.sessionState.conf.offHeapColumnVectorEnabled\n        val vectorizedReaderBatchSize = CometConf.COMET_BATCH_SIZE.get(spark.sessionState.conf)\n\n        var longSum = 0L\n        var doubleSum = 0.0\n        val aggregateValue: (ColumnVector, Int) => Unit = dataType match {\n          case BooleanType => (col: ColumnVector, i: Int) => if (col.getBoolean(i)) longSum += 1\n          case ByteType => (col: ColumnVector, i: Int) => longSum += col.getByte(i)\n          case ShortType => (col: ColumnVector, i: Int) => longSum += col.getShort(i)\n          case IntegerType => (col: ColumnVector, i: Int) => longSum += col.getInt(i)\n          case LongType => (col: ColumnVector, i: Int) => longSum += col.getLong(i)\n          case FloatType => (col: ColumnVector, i: Int) => doubleSum += col.getFloat(i)\n          case DoubleType => (col: ColumnVector, i: Int) => doubleSum += col.getDouble(i)\n          case StringType =>\n            (col: ColumnVector, i: Int) => longSum += col.getUTF8String(i).toLongExact\n        }\n\n        val files = TestUtils.listDirectory(new File(dir, \"parquetV1\"))\n\n        sqlBenchmark.addCase(\"ParquetReader Spark\") { _ =>\n          files.map(_.asInstanceOf[String]).foreach { p =>\n            val reader = new VectorizedParquetRecordReader(\n              enableOffHeapColumnVector,\n              vectorizedReaderBatchSize)\n            try {\n              reader.initialize(p, (\"id\" :: Nil).asJava)\n              val batch = reader.resultBatch()\n              val column = batch.column(0)\n              var totalNumRows = 0\n              while (reader.nextBatch()) {\n                val numRows = batch.numRows()\n                var i = 0\n                while (i < numRows) {\n                  if (!column.isNullAt(i)) aggregateValue(column, i)\n                  i += 1\n                }\n                totalNumRows += batch.numRows()\n              }\n            } finally {\n              
reader.close()\n            }\n          }\n        }\n\n        sqlBenchmark.run()\n      }\n    }\n  }\n\n  def numericFilterScanBenchmark(values: Int, fractionOfZeros: Double): Unit = {\n    val percentageOfZeros = fractionOfZeros * 100\n    val benchmark =\n      new Benchmark(s\"Numeric Filter Scan ($percentageOfZeros% zeros)\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\", \"parquetV2Table\") {\n        prepareTable(\n          dir,\n          spark.sql(\n            s\"SELECT IF(RAND(1) < $fractionOfZeros, -1, value) AS c1, value AS c2 FROM \" +\n              s\"$tbl\"))\n\n        addParquetScanCases(benchmark, \"select sum(c2) from parquetV1Table where c1 + 1 > 0\")\n        benchmark.run()\n      }\n    }\n  }\n\n  def stringWithDictionaryScanBenchmark(values: Int): Unit = {\n    val sqlBenchmark =\n      new Benchmark(\"String Scan with Dictionary Encoding\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\", \"parquetV2Table\") {\n        prepareTable(\n          dir,\n          spark.sql(s\"\"\"\n                       |WITH tmp\n                       |  AS (SELECT RAND() r FROM $tbl)\n                       |SELECT\n                       |  CASE\n                       |    WHEN r < 0.2 THEN 'aaa'\n                       |    WHEN r < 0.4 THEN 'bbb'\n                       |    WHEN r < 0.6 THEN 'ccc'\n                       |    WHEN r < 0.8 THEN 'ddd'\n                       |    ELSE 'eee'\n                       |  END\n                       |AS id\n                       |FROM tmp\n                       |\"\"\".stripMargin))\n\n        addParquetScanCases(sqlBenchmark, \"select sum(length(id)) from parquetV1Table\")\n        sqlBenchmark.run()\n      }\n    }\n  }\n\n  def stringWithNullsScanBenchmark(values: Int, fractionOfNulls: Double): Unit = {\n    val percentageOfNulls = fractionOfNulls * 100\n    val benchmark =\n      new Benchmark(s\"String with Nulls Scan ($percentageOfNulls%)\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(\n            s\"SELECT IF(RAND(1) < $fractionOfNulls, NULL, CAST(value as STRING)) AS c1, \" +\n              s\"IF(RAND(2) < $fractionOfNulls, NULL, CAST(value as STRING)) AS c2 FROM $tbl\"))\n\n        addParquetScanCases(\n          benchmark,\n          \"select sum(length(c2)) from parquetV1Table where c1 is \" +\n            \"not NULL and c2 is not NULL\")\n        benchmark.run()\n      }\n    }\n  }\n\n  def columnsBenchmark(values: Int, width: Int): Unit = {\n    val benchmark =\n      new Benchmark(s\"Single Column Scan from $width columns\", values, output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"t1\", \"parquetV1Table\") {\n        val middle = width / 2\n        val selectExpr = (1 to width).map(i => s\"value as c$i\")\n        spark.table(tbl).selectExpr(selectExpr: _*).createOrReplaceTempView(\"t1\")\n\n        prepareTable(dir, spark.sql(\"SELECT * FROM t1\"))\n\n        addParquetScanCases(benchmark, s\"SELECT sum(c$middle) FROM parquetV1Table\")\n        benchmark.run()\n      }\n    }\n  }\n\n  def largeStringFilterScanBenchmark(values: Int, fractionOfZeros: Double): Unit = {\n    val percentageOfZeros = fractionOfZeros * 100\n    val benchmark =\n      new Benchmark(\n        s\"Large String Filter Scan ($percentageOfZeros% zeros)\",\n        values,\n        output = output)\n\n    
withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(\n            s\"SELECT IF(RAND(1) < $fractionOfZeros, -1, value) AS c1, \" +\n              s\"REPEAT(CAST(value AS STRING), 100) AS c2 FROM $tbl\"))\n\n        addParquetScanCases(benchmark, \"SELECT * FROM parquetV1Table WHERE c1 + 1 > 0\")\n        benchmark.run()\n      }\n    }\n  }\n\n  def sortedLgStrFilterScanBenchmark(values: Int, fractionOfZeros: Double): Unit = {\n    val percentageOfZeros = fractionOfZeros * 100\n    val benchmark =\n      new Benchmark(\n        s\"Sorted Lg Str Filter Scan ($percentageOfZeros% zeros)\",\n        values,\n        output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\", \"parquetV2Table\") {\n        prepareTable(\n          dir,\n          spark.sql(\n            s\"SELECT IF(RAND(1) < $fractionOfZeros, -1, value) AS c1, \" +\n              s\"REPEAT(CAST(value AS STRING), 100) AS c2 FROM $tbl ORDER BY c1, c2\"))\n\n        addParquetScanCases(benchmark, \"SELECT * FROM parquetV1Table WHERE c1 + 1 > 0\")\n        benchmark.run()\n      }\n    }\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    runBenchmarkWithTable(\"Parquet Reader\", 1024 * 1024 * 15) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType).foreach { dataType =>\n        readerBenchmark(v, dataType)\n      }\n    }\n\n    runBenchmarkWithTable(\"SQL Single Numeric Column Scan\", 1024 * 1024 * 128) { v =>\n      Seq(BooleanType, ByteType, ShortType, IntegerType, LongType, FloatType, DoubleType)\n        .foreach { dataType =>\n          numericScanBenchmark(v, dataType)\n        }\n    }\n\n    runBenchmarkWithTable(\"SQL Single Numeric Encrypted Column Scan\", 1024 * 1024 * 128) { v =>\n      Seq(BooleanType, ByteType, ShortType, IntegerType, LongType, FloatType, DoubleType)\n        .foreach { dataType =>\n          encryptedScanBenchmark(v, dataType)\n        }\n    }\n\n    runBenchmark(\"SQL Decimal Column Scan\") {\n      withTempTable(tbl) {\n        import spark.implicits._\n        spark.range(1024 * 1024 * 15).map(_ => Random.nextInt).createOrReplaceTempView(tbl)\n\n        Seq((5, 2), (18, 4), (20, 8)).foreach { case (precision, scale) =>\n          decimalScanBenchmark(1024 * 1024 * 15, precision, scale)\n        }\n      }\n    }\n\n    runBenchmarkWithTable(\"String Scan with Dictionary\", 1024 * 1024 * 15) { v =>\n      stringWithDictionaryScanBenchmark(v)\n    }\n\n    runBenchmarkWithTable(\"Numeric Filter Scan\", 1024 * 1024 * 10) { v =>\n      for (fractionOfZeros <- List(0.0, 0.50, 0.95)) {\n        numericFilterScanBenchmark(v, fractionOfZeros)\n      }\n    }\n\n    runBenchmarkWithTable(\"String with Nulls Scan\", 1024 * 1024 * 10) { v =>\n      for (fractionOfNulls <- List(0.0, 0.50, 0.95)) {\n        stringWithNullsScanBenchmark(v, fractionOfNulls)\n      }\n    }\n\n    runBenchmarkWithTable(\"Single Column Scan From Wide Columns\", 1024 * 1024 * 1) { v =>\n      for (columnWidth <- List(10, 50, 100)) {\n        columnsBenchmark(v, columnWidth)\n      }\n    }\n\n    runBenchmarkWithTable(\"Large String Filter Scan\", 1024 * 1024) { v =>\n      for (fractionOfZeros <- List(0.0, 0.50, 0.999)) {\n        largeStringFilterScanBenchmark(v, fractionOfZeros)\n      }\n    }\n\n    runBenchmarkWithTable(\"Sorted Lg Str Filter Scan\", 1024 * 1024) { v 
=>\n      for (fractionOfZeros <- List(0.0, 0.50, 0.999)) {\n        sortedLgStrFilterScanBenchmark(v, fractionOfZeros)\n      }\n    }\n  }\n}\n\nobject CometReadBenchmark extends CometReadBaseBenchmark {}\n\nobject CometReadHdfsBenchmark extends CometReadBaseBenchmark with WithHdfsCluster {\n\n  override def getSparkSession: SparkSession = {\n    // start HDFS cluster and add hadoop conf\n    startHdfsCluster()\n    val sparkSession = super.getSparkSession\n    sparkSession.sparkContext.hadoopConfiguration.addResource(getHadoopConfFile)\n    sparkSession\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    try {\n      super.runCometBenchmark(mainArgs)\n    } finally {\n      stopHdfsCluster()\n    }\n  }\n\n  override def readerBenchmark(values: Int, dataType: DataType): Unit = {\n    // ignore reader benchmark for HDFS\n  }\n\n  // mock local dir to hdfs\n  override protected def withTempPath(f: File => Unit): Unit = {\n    super.withTempPath { dir =>\n      val tempHdfsPath = new Path(getTmpRootDir, dir.getName)\n      getFileSystem.mkdirs(tempHdfsPath)\n      try f(dir)\n      finally getFileSystem.delete(tempHdfsPath, true)\n    }\n  }\n\n  override protected def prepareTable(\n      dir: File,\n      df: DataFrame,\n      partition: Option[String]): Unit = {\n    val testDf = if (partition.isDefined) {\n      df.write.partitionBy(partition.get)\n    } else {\n      df.write\n    }\n    val tempHdfsPath = getFileSystem.resolvePath(new Path(getTmpRootDir, dir.getName))\n    val parquetV1Path = new Path(tempHdfsPath, \"parquetV1\")\n    saveAsParquetV1Table(testDf, parquetV1Path.toString)\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometShuffleBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport java.text.SimpleDateFormat\n\nimport scala.util.Random\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.benchmark.Benchmark\nimport org.apache.spark.sql.{Column, SaveMode, SparkSession}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types._\n\nimport org.apache.comet.CometConf\nimport org.apache.comet.CometSparkSessionExtensions\nimport org.apache.comet.testing.{DataGenOptions, FuzzDataGenerator, SchemaGenOptions}\n\n// spotless:off\n/**\n * Benchmark to measure Comet shuffle performance. To run this benchmark:\n * `SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometShuffleBenchmark`\n * Results will be written to \"spark/benchmarks/CometShuffleBenchmark-**results.txt\".\n */\n// spotless:on\nobject CometShuffleBenchmark extends CometBenchmarkBase {\n\n  override def getSparkSession: SparkSession = {\n    val conf = new SparkConf()\n      .setAppName(\"CometShuffleBenchmark\")\n      // Since `spark.master` always exists, overrides this value\n      .set(\"spark.master\", \"local[5]\")\n      .setIfMissing(\"spark.driver.memory\", \"3g\")\n      .setIfMissing(\"spark.executor.memory\", \"3g\")\n      .set(\"spark.executor.memoryOverhead\", \"10g\")\n      .set(\n        \"spark.shuffle.manager\",\n        \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n      .set(\"spark.comet.columnar.shuffle.async.thread.num\", \"7\")\n      .set(\"spark.comet.columnar.shuffle.spill.threshold\", \"30000\")\n      .set(\"spark.comet.memoryOverhead\", \"10g\")\n\n    val sparkSession = SparkSession.builder\n      .config(conf)\n      .withExtensions(new CometSparkSessionExtensions)\n      .getOrCreate()\n\n    // Set default configs. 
Individual cases will change them if necessary.\n    sparkSession.conf.set(SQLConf.PARQUET_VECTORIZED_READER_ENABLED.key, \"true\")\n    sparkSession.conf.set(SQLConf.WHOLESTAGE_CODEGEN_ENABLED.key, \"true\")\n    sparkSession.conf.set(CometConf.COMET_ENABLED.key, \"false\")\n    sparkSession.conf.set(CometConf.COMET_EXEC_ENABLED.key, \"false\")\n    // TODO: support dictionary encoding in vectorized execution\n    sparkSession.conf.set(\"parquet.enable.dictionary\", \"false\")\n\n    sparkSession\n  }\n\n  def shuffleArrayBenchmark(values: Int, dataType: DataType, partitionNum: Int): Unit = {\n    val benchmark =\n      new Benchmark(\n        s\"SQL ${dataType.sql} shuffle on array ($partitionNum Partition)\",\n        values,\n        output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(dir, spark.sql(s\"SELECT CAST(1 AS ${dataType.sql}) AS c1 FROM $tbl\"))\n\n        benchmark.addCase(\"SQL Parquet - Spark\") { _ =>\n          spark\n            .sql(s\"SELECT ARRAY_REPEAT(CAST(1 AS ${dataType.sql}), 10) AS c1 FROM parquetV1Table\")\n            .repartition(partitionNum, Column(\"c1\"))\n            .noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Spark Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n            spark\n              .sql(\n                s\"SELECT ARRAY_REPEAT(CAST(1 AS ${dataType.sql}), 10) AS c1 FROM parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet JVM Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> \"1.0\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\",\n            CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED.key -> \"false\") {\n            spark\n              .sql(\n                s\"SELECT ARRAY_REPEAT(CAST(1 AS ${dataType.sql}), 10) AS c1 FROM parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n  }\n\n  def shuffleStructBenchmark(values: Int, dataType: DataType, partitionNum: Int): Unit = {\n    val benchmark =\n      new Benchmark(\n        s\"SQL ${dataType.sql} shuffle on struct ($partitionNum Partition)\",\n        values,\n        output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(dir, spark.sql(s\"SELECT CAST(1 AS ${dataType.sql}) AS c1 FROM $tbl\"))\n\n        benchmark.addCase(\"SQL Parquet - Spark\") { _ =>\n          spark\n            .sql(\n              s\"SELECT STRUCT(CAST(c1 AS ${dataType.sql}),\" +\n                s\"CAST(c1 AS ${dataType.sql}), \" +\n                s\"CAST(c1 AS ${dataType.sql})) AS c1 FROM parquetV1Table\")\n            .repartition(partitionNum, Column(\"c1\"))\n            .noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Spark Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n          
  CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n            spark\n              .sql(\n                s\"SELECT STRUCT(CAST(c1 AS ${dataType.sql}),\" +\n                  s\"CAST(c1 AS ${dataType.sql}), \" +\n                  s\"CAST(c1 AS ${dataType.sql})) AS c1 FROM parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet JVM Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> \"1.0\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\",\n            CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED.key -> \"false\") {\n            spark\n              .sql(\n                s\"SELECT STRUCT(CAST(c1 AS ${dataType.sql}),\" +\n                  s\"CAST(c1 AS ${dataType.sql}), \" +\n                  s\"CAST(c1 AS ${dataType.sql})) AS c1 FROM parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n  }\n\n  def shuffleDictionaryBenchmark(values: Int, dataType: DataType, partitionNum: Int): Unit = {\n    val benchmark =\n      new Benchmark(\n        s\"SQL ${dataType.sql} Dictionary Shuffle($partitionNum Partition)\",\n        values,\n        output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(\n          dir,\n          spark.sql(s\"SELECT REPEAT(CAST(1 AS ${dataType.sql}), 100) AS c1 FROM $tbl\"))\n\n        benchmark.addCase(\"SQL Parquet - Spark\") { _ =>\n          spark\n            .sql(\"select c1 from parquetV1Table\")\n            .repartition(partitionNum, Column(\"c1\"))\n            .noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Spark Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n            spark\n              .sql(\"select c1 from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet JVM Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> \"1.0\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\",\n            CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED.key -> \"false\") {\n            spark\n              .sql(\"select c1 from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet JVM Shuffle + Prefer Dictionary)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> \"2.0\",\n            
CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\",\n            CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED.key -> \"false\") {\n            spark\n              .sql(\"select c1 from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet JVM Shuffle + Fallback to string)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_PREFER_DICTIONARY_RATIO.key -> \"1000000000.0\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\",\n            CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED.key -> \"false\") {\n            spark\n              .sql(\"select c1 from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n  }\n\n  def shuffleBenchmark(\n      values: Int,\n      dataType: DataType,\n      random: Boolean,\n      partitionNum: Int): Unit = {\n    val randomTitle = if (random) {\n      \"With Random\"\n    } else {\n      \"\"\n    }\n    val benchmark =\n      new Benchmark(\n        s\"SQL Single ${dataType.sql} Shuffle($partitionNum Partition) $randomTitle\",\n        values,\n        output = output)\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        if (random) {\n          prepareTable(\n            dir,\n            spark.sql(\n              s\"SELECT CAST(CAST(RAND(1) * 100 AS INTEGER) AS ${dataType.sql}) AS c1 FROM $tbl\"))\n        } else {\n          prepareTable(dir, spark.sql(s\"SELECT CAST(1 AS ${dataType.sql}) AS c1 FROM $tbl\"))\n        }\n\n        benchmark.addCase(\"SQL Parquet - Spark\") { _ =>\n          spark\n            .sql(\"select c1 from parquetV1Table\")\n            .repartition(partitionNum, Column(\"c1\"))\n            .noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Spark Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n            spark\n              .sql(\"select c1 from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet JVM Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\",\n            CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED.key -> \"false\") {\n            spark\n              .sql(\"select c1 from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet Async JVM Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\",\n            CometConf.COMET_COLUMNAR_SHUFFLE_ASYNC_ENABLED.key -> 
\"true\") {\n            spark\n              .sql(\"select c1 from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n            spark\n              .sql(\"select c1 from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n  }\n\n  def shuffleWideBenchmark(\n      values: Int,\n      dataType: DataType,\n      width: Int,\n      partitionNum: Int): Unit = {\n    val benchmark =\n      new Benchmark(\n        s\"SQL Wide ($width cols) ${dataType.sql} Shuffle($partitionNum Partition)\",\n        values,\n        output = output)\n\n    val projection = (1 to width)\n      .map(i => s\"CAST(CAST(RAND(1) * 100 AS INTEGER) AS ${dataType.sql}) AS c$i\")\n      .mkString(\", \")\n    val columns = (1 to width).map(i => s\"c$i\").mkString(\", \")\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(dir, spark.sql(s\"SELECT $projection FROM $tbl\"))\n\n        benchmark.addCase(\"SQL Parquet - Spark\") { _ =>\n          spark\n            .sql(s\"select $columns from parquetV1Table\")\n            .repartition(partitionNum, Column(\"c1\"))\n            .noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Spark Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n            spark\n              .sql(s\"select $columns from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet JVM Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n            spark\n              .sql(s\"select $columns from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet Native Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n            spark\n              .sql(s\"select $columns from parquetV1Table\")\n              .repartition(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n  }\n\n  def shuffleRangePartitionBenchmark(\n      values: Int,\n      dataType: DataType,\n      width: Int,\n      partitionNum: Int): Unit = {\n    val benchmark =\n      new Benchmark(\n        s\"SQL Wide ($width cols) ${dataType.sql} Range Partition Shuffle($partitionNum Partition)\",\n        values,\n        output = output)\n\n    val projection = (1 to width)\n     
 .map(i => s\"CAST(CAST(RAND(1) * 100 AS INTEGER) AS ${dataType.sql}) AS c$i\")\n      .mkString(\", \")\n    val columns = (1 to width).map(i => s\"c$i\").mkString(\", \")\n\n    withTempPath { dir =>\n      withTempTable(\"parquetV1Table\") {\n        prepareTable(dir, spark.sql(s\"SELECT $projection FROM $tbl\"))\n\n        benchmark.addCase(\"SQL Parquet - Spark\") { _ =>\n          spark\n            .sql(s\"select $columns from parquetV1Table\")\n            .repartitionByRange(partitionNum, Column(\"c1\"))\n            .noop()\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Spark Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n            spark\n              .sql(s\"select $columns from parquetV1Table\")\n              .repartitionByRange(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet JVM Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"jvm\") {\n            spark\n              .sql(s\"select $columns from parquetV1Table\")\n              .repartitionByRange(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.addCase(\"SQL Parquet - Comet (Comet Native Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_MODE.key -> \"native\") {\n            spark\n              .sql(s\"select $columns from parquetV1Table\")\n              .repartitionByRange(partitionNum, Column(\"c1\"))\n              .noop()\n          }\n        }\n\n        benchmark.run()\n      }\n    }\n  }\n\n  def shuffleDeeplyNestedBenchmark(\n      name: String,\n      filename: String,\n      numRows: Int,\n      partitionNum: Int): Unit = {\n    val benchmark =\n      new Benchmark(s\"Shuffle with nested schema ($name)\", numRows, output = output)\n    val df = spark.read.parquet(filename)\n    withTempTable(\"deeplyNestedTable\") {\n      df.createOrReplaceTempView(\"deeplyNestedTable\")\n      val sql = \"select * from deeplyNestedTable\"\n\n      benchmark.addCase(\"Spark\") { _ =>\n        spark\n          .sql(sql)\n          .repartition(partitionNum)\n          .noop()\n      }\n\n      benchmark.addCase(\"Comet (Spark Shuffle)\") { _ =>\n        withSQLConf(\n          CometConf.COMET_ENABLED.key -> \"true\",\n          CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n          CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"false\") {\n          spark\n            .sql(sql)\n            .repartition(partitionNum)\n            .noop()\n        }\n      }\n\n      for (shuffle <- Seq(\"jvm\", \"native\")) {\n        benchmark.addCase(s\"Comet ($shuffle Shuffle)\") { _ =>\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n            CometConf.COMET_SHUFFLE_MODE.key -> shuffle) {\n            spark\n              
.sql(sql)\n              .repartition(partitionNum)\n              .noop()\n          }\n        }\n      }\n\n      benchmark.run()\n    }\n  }\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n\n    // nested type shuffle\n    val numRows = 1000\n    for (maxDepth <- Seq(2, 6)) {\n      val filename =\n        createDeeplyNestedParquetFile(numRows, maxDepth)\n      try {\n        for (partitionNum <- Seq(5, 201)) {\n          val name = s\"maxDepth=$maxDepth, partitionNum=$partitionNum\"\n          shuffleDeeplyNestedBenchmark(name, filename, numRows, partitionNum)\n        }\n      } finally {\n        new java.io.File(filename).delete()\n      }\n    }\n\n    runBenchmarkWithTable(\"Shuffle on array\", 1024 * 1024 * 1) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0)).foreach { dataType =>\n        Seq(5, 201).foreach { partitionNum =>\n          shuffleArrayBenchmark(v, dataType, partitionNum)\n        }\n      }\n    }\n\n    runBenchmarkWithTable(\"Shuffle on struct\", 1024 * 1024 * 100) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0)).foreach { dataType =>\n        Seq(5, 201).foreach { partitionNum =>\n          shuffleStructBenchmark(v, dataType, partitionNum)\n        }\n      }\n    }\n\n    runBenchmarkWithTable(\"Dictionary Shuffle\", 1024 * 1024 * 100) { v =>\n      Seq(BinaryType, StringType).foreach { dataType =>\n        Seq(5, 201).foreach { partitionNum =>\n          shuffleDictionaryBenchmark(v, dataType, partitionNum)\n        }\n      }\n    }\n\n    runBenchmarkWithTable(\"Shuffle\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleBenchmark(v, dataType, false, 5)\n        }\n    }\n\n    runBenchmarkWithTable(\"Shuffle\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleBenchmark(v, dataType, false, 201)\n        }\n    }\n\n    runBenchmarkWithTable(\"Shuffle with random values\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleBenchmark(v, dataType, true, 5)\n        }\n    }\n\n    runBenchmarkWithTable(\"Shuffle with random values\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleBenchmark(v, dataType, true, 201)\n        }\n    }\n\n    runBenchmarkWithTable(\"Wide Shuffle (10 cols)\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n       
 LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleWideBenchmark(v, dataType, 10, 5)\n        }\n    }\n\n    runBenchmarkWithTable(\"Wide Shuffle (20 cols)\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleWideBenchmark(v, dataType, 20, 5)\n        }\n    }\n\n    runBenchmarkWithTable(\"Wide Shuffle (10 cols)\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleWideBenchmark(v, dataType, 10, 201)\n        }\n    }\n\n    runBenchmarkWithTable(\"Wide Shuffle (20 cols)\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleWideBenchmark(v, dataType, 20, 201)\n        }\n    }\n\n    runBenchmarkWithTable(\"Wide Range Partition Shuffle (10 cols)\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleRangePartitionBenchmark(v, dataType, 10, 5)\n        }\n    }\n\n    runBenchmarkWithTable(\"Wide Range Partition Shuffle (20 cols)\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleRangePartitionBenchmark(v, dataType, 20, 5)\n        }\n    }\n\n    runBenchmarkWithTable(\"Wide Range Partition Shuffle (10 cols)\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleRangePartitionBenchmark(v, dataType, 10, 201)\n        }\n    }\n\n    runBenchmarkWithTable(\"Wide Range Partition Shuffle (20 cols)\", 1024 * 1024 * 10) { v =>\n      Seq(\n        BooleanType,\n        ByteType,\n        ShortType,\n        IntegerType,\n        LongType,\n        FloatType,\n        DoubleType,\n        StringType,\n        DecimalType(10, 0))\n        .foreach { dataType =>\n          shuffleRangePartitionBenchmark(v, dataType, 20, 201)\n        }\n    }\n  }\n\n  private def createDeeplyNestedParquetFile(numRows: Int, maxDepth: Int): String = {\n    val r = new Random(42)\n    val options =\n      SchemaGenOptions(generateArray = true, generateStruct = true, generateMap = true)\n    val schema = FuzzDataGenerator.generateNestedSchema(r, 100, maxDepth - 1, maxDepth, options)\n    val tempDir = System.getProperty(\"java.io.tmpdir\")\n    val filename = s\"$tempDir/CometShuffleBenchmark_${System.currentTimeMillis()}.parquet\"\n    
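// generate and write the input file with Comet disabled so vanilla Spark produces the data\n    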
withSQLConf(CometConf.COMET_ENABLED.key -> \"false\") {\n      val dataGenOptions = DataGenOptions(\n        generateNegativeZero = false,\n        // override base date due to known issues with experimental scans\n        baseDate =\n          new SimpleDateFormat(\"yyyy-MM-dd HH:mm:ss\").parse(\"2024-05-25 12:34:56\").getTime)\n      val df =\n        FuzzDataGenerator.generateDataFrame(r, spark, schema, numRows, dataGenOptions)\n      df.write.mode(SaveMode.Overwrite).parquet(filename)\n    }\n    filename\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometStringExpressionBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.comet.CometConf\n\n/**\n * Configuration for a string expression benchmark.\n * @param name\n *   Name for the benchmark\n * @param query\n *   SQL query to benchmark\n * @param extraCometConfigs\n *   Additional Comet configurations for the scan+exec case\n */\ncase class StringExprConfig(\n    name: String,\n    query: String,\n    extraCometConfigs: Map[String, String] = Map.empty)\n\n/**\n * Benchmark to measure performance of Comet string expressions. To run this benchmark:\n * {{{\n *   SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometStringExpressionBenchmark\n * }}}\n * Results will be written to \"spark/benchmarks/CometStringExpressionBenchmark-**results.txt\".\n */\nobject CometStringExpressionBenchmark extends CometBenchmarkBase {\n\n  // Configuration for all string expression benchmarks\n  private val stringExpressions = List(\n    StringExprConfig(\"ascii\", \"select ascii(c1) from parquetV1Table\"),\n    StringExprConfig(\"bit_length\", \"select bit_length(c1) from parquetV1Table\"),\n    StringExprConfig(\"btrim\", \"select btrim(c1) from parquetV1Table\"),\n    StringExprConfig(\"chr\", \"select chr(c1) from parquetV1Table\"),\n    StringExprConfig(\"concat\", \"select concat(c1, c1) from parquetV1Table\"),\n    StringExprConfig(\"concat_ws\", \"select concat_ws(' ', c1, c1) from parquetV1Table\"),\n    StringExprConfig(\"contains\", \"select contains(c1, '123') from parquetV1Table\"),\n    StringExprConfig(\"endswith\", \"select endswith(c1, '9') from parquetV1Table\"),\n    StringExprConfig(\"initCap\", \"select initCap(c1) from parquetV1Table\"),\n    StringExprConfig(\"instr\", \"select instr(c1, '123') from parquetV1Table\"),\n    StringExprConfig(\"length\", \"select length(c1) from parquetV1Table\"),\n    StringExprConfig(\"like\", \"select c1 like '%123%' from parquetV1Table\"),\n    StringExprConfig(\"lower\", \"select lower(c1) from parquetV1Table\"),\n    StringExprConfig(\"lpad\", \"select lpad(c1, 150, 'x') from parquetV1Table\"),\n    StringExprConfig(\"ltrim\", \"select ltrim(c1) from parquetV1Table\"),\n    StringExprConfig(\"octet_length\", \"select octet_length(c1) from parquetV1Table\"),\n    StringExprConfig(\n      \"regexp_replace\",\n      \"select regexp_replace(c1, '[0-9]', 'X') from parquetV1Table\"),\n    StringExprConfig(\"repeat\", \"select repeat(c1, 3) from parquetV1Table\"),\n    StringExprConfig(\"replace\", \"select replace(c1, '123', 'ab') from parquetV1Table\"),\n    StringExprConfig(\"reverse\", \"select reverse(c1) from parquetV1Table\"),\n    StringExprConfig(\"rlike\", \"select c1 rlike '[0-9]+' from 
parquetV1Table\"),\n    StringExprConfig(\"rpad\", \"select rpad(c1, 150, 'x') from parquetV1Table\"),\n    StringExprConfig(\"rtrim\", \"select rtrim(c1) from parquetV1Table\"),\n    StringExprConfig(\"space\", \"select space(2) from parquetV1Table\"),\n    StringExprConfig(\"startswith\", \"select startswith(c1, '1') from parquetV1Table\"),\n    StringExprConfig(\"substring\", \"select substring(c1, 1, 100) from parquetV1Table\"),\n    StringExprConfig(\"translate\", \"select translate(c1, '123456', 'aBcDeF') from parquetV1Table\"),\n    StringExprConfig(\"trim\", \"select trim(c1) from parquetV1Table\"),\n    StringExprConfig(\"upper\", \"select upper(c1) from parquetV1Table\"))\n\n  override def runCometBenchmark(mainArgs: Array[String]): Unit = {\n    runBenchmarkWithTable(\"String expressions\", 1024) { v =>\n      withTempPath { dir =>\n        withTempTable(\"parquetV1Table\") {\n          prepareTable(\n            dir,\n            spark.sql(s\"SELECT REPEAT(CAST(value AS STRING), 10) AS c1 FROM $tbl\"))\n\n          val extraConfigs = Map(CometConf.COMET_CASE_CONVERSION_ENABLED.key -> \"true\")\n\n          stringExpressions.foreach { config =>\n            val allConfigs = extraConfigs ++ config.extraCometConfigs\n            runBenchmark(config.name) {\n              runExpressionBenchmark(config.name, v, config.query, allConfigs)\n            }\n          }\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometTPCDSMicroBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport scala.io.Source\n\nimport org.apache.spark.benchmark.Benchmark\nimport org.apache.spark.sql.catalyst.catalog.HiveTableRelation\nimport org.apache.spark.sql.catalyst.plans.logical.SubqueryAlias\nimport org.apache.spark.sql.execution.benchmark.TPCDSQueryBenchmark.tables\nimport org.apache.spark.sql.execution.benchmark.TPCDSQueryBenchmarkArguments\nimport org.apache.spark.sql.execution.datasources.LogicalRelation\n\nimport org.apache.comet.CometConf\n\n/**\n * Benchmark to measure Comet query performance with micro benchmarks that use the TPCDS data.\n * These queries represent subsets of the full TPCDS queries.\n *\n * To run this benchmark:\n * {{{\n * // Build [tpcds-kit](https://github.com/databricks/tpcds-kit)\n * cd /tmp && git clone https://github.com/databricks/tpcds-kit.git\n * cd tpcds-kit/tools && make OS=MACOS\n *\n * // GenTPCDSData\n * cd $COMET_HOME && mkdir /tmp/tpcds\n * make benchmark-org.apache.spark.sql.GenTPCDSData -- --dsdgenDir /tmp/tpcds-kit/tools --location /tmp/tpcds --scaleFactor 1\n *\n * // CometTPCDSMicroBenchmark\n * SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometTPCDSMicroBenchmark -- --data-location /tmp/tpcds\n * }}}\n *\n * Results will be written to \"spark/benchmarks/CometTPCDSMicroBenchmark-**results.txt\".\n */\nobject CometTPCDSMicroBenchmark extends CometTPCQueryBenchmarkBase {\n\n  val queries: Seq[String] = Seq(\n    \"scan_decimal\",\n    \"add_decimals\",\n    \"add_many_decimals\",\n    \"add_many_integers\",\n    \"agg_high_cardinality\",\n    \"agg_low_cardinality\",\n    \"agg_sum_decimals_no_grouping\",\n    \"agg_sum_integers_no_grouping\",\n    \"agg_sum_integers_with_grouping\",\n    \"agg_stddev\",\n    \"case_when_column_or_null\",\n    \"case_when_scalar\",\n    \"char_type\",\n    \"filter_highly_selective\",\n    \"filter_less_selective\",\n    \"if_column_or_null\",\n    \"join_anti\",\n    \"join_condition\",\n    \"join_exploding_output\",\n    \"join_inner\",\n    \"join_left_outer\",\n    \"join_semi\",\n    \"rlike\",\n    \"explode\",\n    \"to_json\")\n\n  override def runQueries(\n      queryLocation: String,\n      queries: Seq[String],\n      tableSizes: Map[String, Long],\n      benchmarkName: String,\n      nameSuffix: String = \"\"): Unit = {\n    queries.foreach { name =>\n      val source = Source.fromFile(s\"src/test/resources/tpcds-micro-benchmarks/$name.sql\")\n      val queryString = source\n        .getLines()\n        .filterNot(_.startsWith(\"--\"))\n        .mkString(\"\\n\")\n      source.close()\n\n      // This is an indirect hack to estimate the size of each query's input by traversing the\n  
    // logical plan and adding up the sizes of all tables that appear in the plan.\n      val queryRelations = scala.collection.mutable.HashSet[String]()\n      cometSpark.sql(queryString).queryExecution.analyzed.foreach {\n        case SubqueryAlias(alias, _: LogicalRelation) =>\n          queryRelations.add(alias.name)\n        case rel: LogicalRelation if rel.catalogTable.isDefined =>\n          queryRelations.add(rel.catalogTable.get.identifier.table)\n        case HiveTableRelation(tableMeta, _, _, _, _) =>\n          queryRelations.add(tableMeta.identifier.table)\n        case _ =>\n      }\n      val numRows = queryRelations.map(tableSizes.getOrElse(_, 0L)).sum\n      val benchmark = new Benchmark(benchmarkName, numRows, 2, output = output)\n      benchmark.addCase(s\"$name$nameSuffix\") { _ =>\n        cometSpark.sql(queryString).noop()\n      }\n      benchmark.addCase(s\"$name$nameSuffix: Comet\") { _ =>\n        withSQLConf(\n          CometConf.COMET_ENABLED.key -> \"true\",\n          CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n          CometConf.getExprAllowIncompatConfigKey(\"regexp\") -> \"true\",\n          // enabling COMET_EXPLAIN_NATIVE_ENABLED may add overhead but is useful for debugging\n          CometConf.COMET_EXPLAIN_NATIVE_ENABLED.key -> \"false\") {\n          cometSpark.sql(queryString).noop()\n        }\n      }\n      benchmark.run()\n    }\n  }\n\n  override def runBenchmarkSuite(mainArgs: Array[String]): Unit = {\n    val benchmarkArgs = new TPCDSQueryBenchmarkArguments(mainArgs)\n\n    // If `--query-filter` defined, filters the queries that this option selects\n    val queriesToRun = filterQueries(queries, benchmarkArgs.queryFilter)\n\n    val tableSizes = setupTables(\n      benchmarkArgs.dataLocation,\n      createTempView = false,\n      tables,\n      TPCDSSchemaHelper.getTableColumns)\n\n    setupCBO(cometSpark, benchmarkArgs.cboEnabled, tables)\n\n    runQueries(\"tpcdsmicro\", queries = queriesToRun, tableSizes, \"TPCDS Micro Benchmarks\")\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometTPCDSQueryBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.sql.{TPCDSQueries, TPCDSSchema}\nimport org.apache.spark.sql.execution.benchmark.TPCDSQueryBenchmark.tables\nimport org.apache.spark.sql.execution.benchmark.TPCDSQueryBenchmarkArguments\nimport org.apache.spark.sql.types.StructType\n\n/**\n * Benchmark to measure Comet TPCDS query performance.\n *\n * To run this benchmark:\n * {{{\n * // Build [tpcds-kit](https://github.com/databricks/tpcds-kit)\n * cd /tmp && git clone https://github.com/databricks/tpcds-kit.git\n * cd tpcds-kit/tools && make OS=MACOS\n *\n * // GenTPCDSData\n * cd $COMET_HOME && mkdir /tmp/tpcds\n * make benchmark-org.apache.spark.sql.GenTPCDSData -- --dsdgenDir /tmp/tpcds-kit/tools --location /tmp/tpcds --scaleFactor 1\n *\n * // CometTPCDSQueryBenchmark\n * SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometTPCDSQueryBenchmark -- --data-location /tmp/tpcds\n * }}}\n *\n * Results will be written to \"spark/benchmarks/CometTPCDSQueryBenchmark-**results.txt\".\n */\nobject CometTPCDSQueryBenchmark extends CometTPCQueryBenchmarkBase with TPCDSQueries {\n  override def runBenchmarkSuite(mainArgs: Array[String]): Unit = {\n    val benchmarkArgs = new TPCDSQueryBenchmarkArguments(mainArgs)\n\n    // If `--query-filter` defined, filters the queries that this option selects\n    val queriesV1_4ToRun = filterQueries(tpcdsQueries, benchmarkArgs.queryFilter)\n    val queriesV2_7ToRun = filterQueries(\n      tpcdsQueriesV2_7,\n      benchmarkArgs.queryFilter,\n      nameSuffix = nameSuffixForQueriesV2_7)\n\n    if ((queriesV1_4ToRun ++ queriesV2_7ToRun).isEmpty) {\n      throw new RuntimeException(\n        s\"Empty queries to run. Bad query name filter: ${benchmarkArgs.queryFilter}\")\n    }\n\n    val tableSizes = setupTables(\n      benchmarkArgs.dataLocation,\n      createTempView = false,\n      tables,\n      TPCDSSchemaHelper.getTableColumns)\n\n    setupCBO(cometSpark, benchmarkArgs.cboEnabled, tables)\n\n    runQueries(\"tpcds\", queries = queriesV1_4ToRun, tableSizes, \"TPCDS Snappy\")\n    runQueries(\n      \"tpcds-v2.7.0\",\n      queries = queriesV2_7ToRun,\n      tableSizes,\n      \"TPCDS Snappy\",\n      nameSuffix = nameSuffixForQueriesV2_7)\n  }\n}\n\nobject TPCDSSchemaHelper extends TPCDSSchema {\n  def getTableColumns: Map[String, StructType] =\n    tableColumns.map(kv => kv._1 -> StructType.fromDDL(kv._2))\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometTPCHQueryBenchmark.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport java.util.Locale\n\n/**\n * Benchmark to measure Comet TPCH query performance.\n *\n * To run this benchmark:\n * {{{\n * // Set scale factor in GB\n * scale_factor=1\n *\n * // GenTPCHData to create the data set at /tmp/tpch/sf1_parquet\n * cd $COMET_HOME\n * make benchmark-org.apache.spark.sql.GenTPCHData -- --location /tmp --scaleFactor ${scale_factor}\n *\n * // CometTPCHQueryBenchmark\n * SPARK_GENERATE_BENCHMARK_FILES=1 make benchmark-org.apache.spark.sql.benchmark.CometTPCHQueryBenchmark -- --data-location /tmp/tpch/sf${scale_factor}_parquet\n * }}}\n *\n * Results will be written to \"spark/benchmarks/CometTPCHQueryBenchmark-**results.txt\".\n */\nobject CometTPCHQueryBenchmark extends CometTPCQueryBenchmarkBase {\n  val tables: Seq[String] =\n    Seq(\"customer\", \"lineitem\", \"nation\", \"orders\", \"part\", \"partsupp\", \"region\", \"supplier\")\n\n  override def runBenchmarkSuite(mainArgs: Array[String]): Unit = {\n    val benchmarkArgs = new TPCHQueryBenchmarkArguments(mainArgs)\n\n    // List of all TPC-H queries\n    val tpchQueries = (1 to 22).map(n => s\"q$n\")\n\n    // If `--query-filter` defined, filters the queries that this option selects\n    val queries = filterQueries(tpchQueries, benchmarkArgs.queryFilter)\n\n    if (queries.isEmpty) {\n      throw new RuntimeException(\n        s\"Empty queries to run. Bad query name filter: ${benchmarkArgs.queryFilter}\")\n    }\n\n    val tableSizes =\n      setupTables(benchmarkArgs.dataLocation, createTempView = !benchmarkArgs.cboEnabled, tables)\n\n    setupCBO(cometSpark, benchmarkArgs.cboEnabled, tables)\n\n    runQueries(\"tpch\", queries, tableSizes, \"TPCH Snappy\")\n    runQueries(\"tpch-extended\", queries, tableSizes, \" TPCH Extended Snappy\")\n  }\n}\n\n/**\n * Mostly copied from TPCDSQueryBenchmarkArguments. 
Only the help message is different. TODO: make\n * TPCDSQueryBenchmarkArguments extensible to avoid copies\n */\nclass TPCHQueryBenchmarkArguments(val args: Array[String]) {\n  var dataLocation: String = null\n  var queryFilter: Set[String] = Set.empty\n  var cboEnabled: Boolean = false\n\n  parseArgs(args.toList)\n  validateArguments()\n\n  private def optionMatch(optionName: String, s: String): Boolean = {\n    optionName == s.toLowerCase(Locale.ROOT)\n  }\n\n  private def parseArgs(inputArgs: List[String]): Unit = {\n    var args = inputArgs\n\n    while (args.nonEmpty) {\n      args match {\n        case optName :: value :: tail if optionMatch(\"--data-location\", optName) =>\n          dataLocation = value\n          args = tail\n\n        case optName :: value :: tail if optionMatch(\"--query-filter\", optName) =>\n          queryFilter = value.toLowerCase(Locale.ROOT).split(\",\").map(_.trim).toSet\n          args = tail\n\n        case optName :: tail if optionMatch(\"--cbo\", optName) =>\n          cboEnabled = true\n          args = tail\n\n        case _ =>\n          // scalastyle:off println\n          System.err.println(\"Unknown/unsupported param \" + args)\n          // scalastyle:on println\n          printUsageAndExit(1)\n      }\n    }\n  }\n\n  private def printUsageAndExit(exitCode: Int): Unit = {\n    // scalastyle:off\n    System.err.println(\"\"\"\n      |Usage: spark-submit --class <this class> <spark sql test jar> [Options]\n      |Options:\n      |  --data-location      Path to TPCH data\n      |  --query-filter       Queries to filter, e.g., q3,q5,q13\n      |  --cbo                Whether to enable cost-based optimization\n      |\n      |------------------------------------------------------------------------------------------------------------------\n      |In order to run this benchmark, please follow the instructions of\n      |org.apache.spark.sql.GenTPCHData to generate the TPCH data.\n      |Thereafter, the value of <TPCH data location> needs to be set to the location where the generated data is stored.\n      \"\"\".stripMargin)\n    // scalastyle:on\n    System.exit(exitCode)\n  }\n\n  private def validateArguments(): Unit = {\n    if (dataLocation == null) {\n      // scalastyle:off println\n      System.err.println(\"Must specify a data location\")\n      // scalastyle:on println\n      printUsageAndExit(-1)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/benchmark/CometTPCQueryBenchmarkBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.benchmark\n\nimport org.apache.spark.benchmark.Benchmark\nimport org.apache.spark.internal.Logging\nimport org.apache.spark.sql.{CometTPCQueryBase, SparkSession}\nimport org.apache.spark.sql.catalyst.catalog.HiveTableRelation\nimport org.apache.spark.sql.catalyst.plans.logical.SubqueryAlias\nimport org.apache.spark.sql.catalyst.util.resourceToString\nimport org.apache.spark.sql.execution.benchmark.SqlBasedBenchmark\nimport org.apache.spark.sql.execution.datasources.LogicalRelation\n\nimport org.apache.comet.CometConf\n\n/**\n * Base class for CometTPCDSQueryBenchmark and CometTPCHQueryBenchmark Mostly copied from\n * TPCDSQueryBenchmark. TODO: make TPCDSQueryBenchmark extensible to avoid copies\n */\ntrait CometTPCQueryBenchmarkBase extends SqlBasedBenchmark with CometTPCQueryBase with Logging {\n  override def getSparkSession: SparkSession = cometSpark\n\n  protected def runQueries(\n      queryLocation: String,\n      queries: Seq[String],\n      tableSizes: Map[String, Long],\n      benchmarkName: String,\n      nameSuffix: String = \"\"): Unit = {\n    queries.foreach { name =>\n      val queryString = resourceToString(\n        s\"$queryLocation/$name.sql\",\n        classLoader = Thread.currentThread().getContextClassLoader)\n\n      // This is an indirect hack to estimate the size of each query's input by traversing the\n      // logical plan and adding up the sizes of all tables that appear in the plan.\n      val queryRelations = scala.collection.mutable.HashSet[String]()\n      cometSpark.sql(queryString).queryExecution.analyzed.foreach {\n        case SubqueryAlias(alias, _: LogicalRelation) =>\n          queryRelations.add(alias.name)\n        case rel: LogicalRelation if rel.catalogTable.isDefined =>\n          queryRelations.add(rel.catalogTable.get.identifier.table)\n        case HiveTableRelation(tableMeta, _, _, _, _) =>\n          queryRelations.add(tableMeta.identifier.table)\n        case _ =>\n      }\n      val numRows = queryRelations.map(tableSizes.getOrElse(_, 0L)).sum\n      val benchmark = new Benchmark(benchmarkName, numRows, 2, output = output)\n      benchmark.addCase(s\"$name$nameSuffix\") { _ =>\n        cometSpark.sql(queryString).noop()\n      }\n      benchmark.addCase(s\"$name$nameSuffix: Comet\") { _ =>\n        withSQLConf(\n          CometConf.COMET_ENABLED.key -> \"true\",\n          CometConf.COMET_EXEC_ENABLED.key -> \"true\") {\n          cometSpark.sql(queryString).noop()\n        }\n      }\n      benchmark.run()\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/comet/CometDppFallbackRepro3949Suite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.collection.mutable\n\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.{CometTestBase, Row}\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.{Attribute, PlanExpression}\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\nimport org.apache.spark.sql.execution.{FileSourceScanExec, LeafExecNode, SparkPlan}\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec\nimport org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.{CometConf, CometExplainInfo}\n\n/**\n * Attempts an end-to-end reproduction of issue #3949.\n *\n * #3949 reports `[INTERNAL_ERROR]` from `BroadcastExchangeExec.doCanonicalize`, ultimately caused\n * by `ColumnarToRowExec.<init>` asserting that its child `supportsColumnar`. The suspected root\n * cause is that Comet's DPP fallback (`CometShuffleExchangeExec.stageContainsDPPScan`) walks\n * `s.child.exists(...)` for a `FileSourceScanExec` with a `PlanExpression` partition filter, and\n * that walk is not stable across the two planning passes:\n *\n *   - initial planning: shuffle's child subtree includes the DPP scan -> `.exists` finds it ->\n *     fall back to Spark.\n *   - AQE stage prep: the inner stage that contained the scan has materialized and been replaced\n *     by a `ShuffleQueryStageExec` (a `LeafExecNode` whose `children == Seq.empty`). `.exists`\n *     can no longer descend into it, the DPP scan becomes invisible, the same shuffle is\n *     converted to Comet, and the plan shape changes between passes.\n *\n * This suite has two tests:\n *\n *   1. `mechanism`: synthetic. Builds a real DPP plan, observes the initial-pass decision is\n *      \"fall back\", then swaps the shuffle's child for an opaque `LeafExecNode` (mirroring how\n *      `ShuffleQueryStageExec` presents to `.exists`) and asserts the decision flips to\n *      \"convert\". Documents the mechanism without depending on AQE actually triggering it.\n *\n * 2. `endToEnd`: runs DPP-flavored queries with AQE on, sweeps a few seeds/variants, and asks\n * whether `df.collect()` ever throws or whether the final executed plan ever contains a Comet\n * shuffle whose child subtree (descending through `QueryStageExec.plan`) still contains a DPP\n * scan -- i.e. 
an inconsistency that the bug would produce.\n */\nclass CometDppFallbackRepro3949Suite extends CometTestBase {\n\n  // ----------------------------------------------------------------------\n  // Mechanism (synthetic): proves the AQE wrap flips the fallback decision.\n  // ----------------------------------------------------------------------\n\n  private def buildDppTables(dir: java.io.File, factPrefix: String): Unit = {\n    val factPath = s\"${dir.getAbsolutePath}/$factPrefix.parquet\"\n    val dimPath = s\"${dir.getAbsolutePath}/${factPrefix}_dim.parquet\"\n    withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n      val sess = spark\n      import sess.implicits._\n      val oneDay = 24L * 60L * 60000L\n      val now = System.currentTimeMillis()\n      (0 until 400)\n        .map(i => (i, new java.sql.Date(now + (i % 40) * oneDay), i.toString))\n        .toDF(\"fact_id\", \"fact_date\", \"fact_str\")\n        .write\n        .partitionBy(\"fact_date\")\n        .parquet(factPath)\n      (0 until 40)\n        .map(i => (i, new java.sql.Date(now + i * oneDay), i.toString))\n        .toDF(\"dim_id\", \"dim_date\", \"dim_str\")\n        .write\n        .parquet(dimPath)\n    }\n    spark.read.parquet(factPath).createOrReplaceTempView(s\"${factPrefix}_fact\")\n    spark.read.parquet(dimPath).createOrReplaceTempView(s\"${factPrefix}_dim\")\n  }\n\n  private def unwrapAqe(plan: SparkPlan): SparkPlan = plan match {\n    case a: AdaptiveSparkPlanExec => a.initialPlan\n    case other => other\n  }\n\n  private def findFirstShuffle(plan: SparkPlan): Option[ShuffleExchangeExec] = {\n    var found: Option[ShuffleExchangeExec] = None\n    plan.foreach {\n      case s: ShuffleExchangeExec if found.isEmpty => found = Some(s)\n      case _ =>\n    }\n    found\n  }\n\n  test(\"mechanism: DPP fallback decision is sticky across an AQE-style child wrap\") {\n    withTempDir { dir =>\n      buildDppTables(dir, \"mech\")\n      withSQLConf(\n        CometConf.COMET_DPP_FALLBACK_ENABLED.key -> \"true\",\n        SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n        SQLConf.PREFER_SORTMERGEJOIN.key -> \"true\",\n        SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n        SQLConf.USE_V1_SOURCE_LIST.key -> \"parquet\") {\n\n        val df = spark.sql(\n          \"select f.fact_date, count(*) c \" +\n            \"from mech_fact f join mech_dim d on f.fact_date = d.dim_date \" +\n            \"where d.dim_id > 35 group by f.fact_date\")\n        val initialPlan = unwrapAqe(df.queryExecution.executedPlan)\n        val shuffle = findFirstShuffle(initialPlan).getOrElse {\n          fail(s\"No ShuffleExchangeExec found in initial plan:\\n${initialPlan.treeString}\")\n        }\n\n        val initialDecision = CometShuffleExchangeExec.columnarShuffleSupported(shuffle)\n\n        val initialDppVisible = shuffle.child.exists {\n          case scan: FileSourceScanExec =>\n            scan.partitionFilters.exists(_.exists(_.isInstanceOf[PlanExpression[_]]))\n          case _ => false\n        }\n\n        // Simulate AQE stage prep: wrap the shuffle's child in an opaque LeafExecNode,\n        // matching how `ShuffleQueryStageExec` presents to `.exists` walks (its `children`\n        // is `Seq.empty`). `withNewChildren` preserves tree-node tags, so if the fix is in\n        // place the sticky CometFallback marker on `shuffle` carries over to\n        // `postAqeShuffle`, and the decision short-circuits to false. 
Without the fix,\n        // the DPP walk re-runs, fails to see the scan, and flips to true.\n        val hiddenChild = OpaqueStageStub(shuffle.child.output)\n        val postAqeShuffle =\n          shuffle.withNewChildren(Seq(hiddenChild)).asInstanceOf[ShuffleExchangeExec]\n        val postAqeDecision = CometShuffleExchangeExec.columnarShuffleSupported(postAqeShuffle)\n\n        val postAqeDppVisible = postAqeShuffle.child.exists {\n          case scan: FileSourceScanExec =>\n            scan.partitionFilters.exists(_.exists(_.isInstanceOf[PlanExpression[_]]))\n          case _ => false\n        }\n\n        assert(initialDppVisible, \"initial child tree should expose DPP scan\")\n        assert(!postAqeDppVisible, \"stage-wrapped child should hide DPP scan\")\n        assert(!initialDecision, s\"expected fall back initially, got $initialDecision\")\n        assert(\n          !postAqeDecision,\n          s\"decision must stay 'fall back' across the AQE-style wrap, got $postAqeDecision\")\n      }\n    }\n  }\n\n  // ----------------------------------------------------------------------\n  // End-to-end: actually run DPP-flavored queries and look for the bug.\n  // ----------------------------------------------------------------------\n\n  // Walk the executed plan descending into QueryStageExec.plan, and return any Comet\n  // shuffle whose subtree still contains a DPP scan -- the cross-pass inconsistency the\n  // bug would create.\n  private def findInconsistentCometShuffles(plan: SparkPlan): Seq[SparkPlan] = {\n    import org.apache.spark.sql.execution.adaptive.QueryStageExec\n    def containsDppScan(p: SparkPlan): Boolean = p match {\n      case scan: FileSourceScanExec =>\n        scan.partitionFilters.exists(_.exists(_.isInstanceOf[PlanExpression[_]]))\n      case stage: QueryStageExec => containsDppScan(stage.plan)\n      case other => other.children.exists(containsDppScan)\n    }\n    val acc = mutable.Buffer.empty[SparkPlan]\n    def walk(p: SparkPlan): Unit = {\n      p match {\n        case s: CometShuffleExchangeExec if containsDppScan(s) =>\n          acc += s\n        case _ =>\n      }\n      p match {\n        case stage: QueryStageExec => walk(stage.plan)\n        case _ => p.children.foreach(walk)\n      }\n    }\n    walk(plan)\n    acc.toSeq\n  }\n\n  // Match the executed plan: any Comet shuffle whose explain-info already says it should have\n  // fallen back due to DPP. 
That's the smoking-gun pattern from the issue:\n  //   CometColumnarExchange [COMET: Stage contains a scan with Dynamic Partition Pruning]\n  private def findCometShufflesTaggedAsDppFallback(plan: SparkPlan): Seq[SparkPlan] = {\n    import org.apache.spark.sql.execution.adaptive.QueryStageExec\n    val acc = mutable.Buffer.empty[SparkPlan]\n    def walk(p: SparkPlan): Unit = {\n      p match {\n        case s: CometShuffleExchangeExec =>\n          val tags = s.getTagValue(CometExplainInfo.EXTENSION_INFO).getOrElse(Set.empty[String])\n          if (tags.exists(_.contains(\"Dynamic Partition Pruning\"))) acc += s\n        case _ =>\n      }\n      p match {\n        case stage: QueryStageExec => walk(stage.plan)\n        case _ => p.children.foreach(walk)\n      }\n    }\n    walk(plan)\n    acc.toSeq\n  }\n\n  private def buildDppTablesShared(dir: java.io.File): Unit = {\n    val factPath = s\"${dir.getAbsolutePath}/e2e_fact.parquet\"\n    val dimPath = s\"${dir.getAbsolutePath}/e2e_dim.parquet\"\n    val dim2Path = s\"${dir.getAbsolutePath}/e2e_dim2.parquet\"\n    withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n      val sess = spark\n      import sess.implicits._\n      val oneDay = 24L * 60L * 60000L\n      val now = System.currentTimeMillis()\n      // larger fact so AQE actually has stages to materialize\n      (0 until 4000)\n        .map(i => (i, new java.sql.Date(now + (i % 80) * oneDay), i.toString, i % 10))\n        .toDF(\"fact_id\", \"fact_date\", \"fact_str\", \"fact_grp\")\n        .write\n        .partitionBy(\"fact_date\")\n        .parquet(factPath)\n      (0 until 80)\n        .map(i => (i, new java.sql.Date(now + i * oneDay), i.toString))\n        .toDF(\"dim_id\", \"dim_date\", \"dim_str\")\n        .write\n        .parquet(dimPath)\n      (0 until 10)\n        .map(i => (i, s\"g$i\"))\n        .toDF(\"grp_id\", \"grp_str\")\n        .write\n        .parquet(dim2Path)\n    }\n    spark.read.parquet(factPath).createOrReplaceTempView(\"e2e_fact\")\n    spark.read.parquet(dimPath).createOrReplaceTempView(\"e2e_dim\")\n    spark.read.parquet(dim2Path).createOrReplaceTempView(\"e2e_dim2\")\n  }\n\n  // A handful of DPP-flavored queries in roughly increasing complexity. The last few mimic the\n  // q14a structure (UNION ALL of multiple DPP-using subqueries with HAVING and outer aggregate)\n  // because the issue specifically hit q14a / q14b / q31 / q47 / q57.\n  private val queries: Seq[String] = Seq(\n    // 1. Plain SMJ DPP\n    \"\"\"select f.fact_date, count(*) c\n      |from e2e_fact f join e2e_dim d on f.fact_date = d.dim_date\n      |where d.dim_id > 70\n      |group by f.fact_date\"\"\".stripMargin,\n    // 2. Aggregation above DPP join, then second join on the aggregate\n    \"\"\"select g.grp_str, sum(t.c) total from e2e_dim2 g join (\n      |  select f.fact_grp, count(*) c\n      |  from e2e_fact f join e2e_dim d on f.fact_date = d.dim_date\n      |  where d.dim_id > 60\n      |  group by f.fact_grp\n      |) t on g.grp_id = t.fact_grp\n      |group by g.grp_str\"\"\".stripMargin,\n    // 3. 
UNION ALL of two DPP-using subqueries, outer aggregate -- q14a-style.\n    \"\"\"select channel, fact_grp, sum(c) total from (\n      |  select 'a' channel, f.fact_grp, count(*) c\n      |  from e2e_fact f join e2e_dim d on f.fact_date = d.dim_date\n      |  where d.dim_id > 60\n      |  group by f.fact_grp\n      |  union all\n      |  select 'b' channel, f.fact_grp, count(*) c\n      |  from e2e_fact f join e2e_dim d on f.fact_date = d.dim_date\n      |  where d.dim_id > 70\n      |  group by f.fact_grp\n      |) y group by channel, fact_grp\"\"\".stripMargin,\n    // 4. UNION ALL with rollup -- pushes the bug surface higher up the plan.\n    \"\"\"select channel, fact_grp, sum(c) total from (\n      |  select 'a' channel, f.fact_grp, count(*) c\n      |  from e2e_fact f join e2e_dim d on f.fact_date = d.dim_date\n      |  where d.dim_id > 60\n      |  group by f.fact_grp\n      |  union all\n      |  select 'b' channel, f.fact_grp, count(*) c\n      |  from e2e_fact f join e2e_dim d on f.fact_date = d.dim_date\n      |  where d.dim_id > 70\n      |  group by f.fact_grp\n      |  union all\n      |  select 'c' channel, f.fact_grp, count(*) c\n      |  from e2e_fact f join e2e_dim d on f.fact_date = d.dim_date\n      |  where d.dim_id > 50\n      |  group by f.fact_grp\n      |) y group by rollup (channel, fact_grp)\"\"\".stripMargin,\n    // 5. q14a-style with HAVING + scalar subquery (forces broadcast-side stage materialization\n    //    BEFORE outer planning -- the configuration that historically reproduced #3949).\n    \"\"\"with avg_sales as (\n      |  select avg(c) avg_c from (\n      |    select count(*) c from e2e_fact f\n      |    join e2e_dim d on f.fact_date = d.dim_date\n      |    where d.dim_id > 50\n      |    group by f.fact_grp\n      |  ) x\n      |)\n      |select 'store' channel, f.fact_grp, count(*) c\n      |from e2e_fact f join e2e_dim d on f.fact_date = d.dim_date\n      |where d.dim_id > 60\n      |group by f.fact_grp\n      |having count(*) > (select avg_c from avg_sales)\n      |union all\n      |select 'web' channel, f.fact_grp, count(*) c\n      |from e2e_fact f join e2e_dim d on f.fact_date = d.dim_date\n      |where d.dim_id > 70\n      |group by f.fact_grp\n      |having count(*) > (select avg_c from avg_sales)\"\"\".stripMargin)\n\n  private val variants: Seq[(String, Map[String, String])] = Seq(\n    \"smj+aqe\" -> Map(\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.PREFER_SORTMERGEJOIN.key -> \"true\",\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\"),\n    \"bhj+aqe\" -> Map(\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"10MB\",\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\"),\n    \"smj+aqe+coalesce\" -> Map(\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n      SQLConf.PREFER_SORTMERGEJOIN.key -> \"true\",\n      SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n      \"spark.sql.adaptive.coalescePartitions.enabled\" -> \"true\",\n      \"spark.sql.adaptive.coalescePartitions.minPartitionSize\" -> \"1b\",\n      \"spark.sql.adaptive.coalescePartitions.initialPartitionNum\" -> \"16\"))\n\n  test(\"end-to-end: collect DPP queries under AQE; look for #3949 symptoms\") {\n    withTempDir { dir =>\n      buildDppTablesShared(dir)\n\n      val failures = mutable.Buffer.empty[(String, Int, String, String)]\n      val suspicious = mutable.Buffer.empty[(String, Int, String)]\n\n      for ((variantName, variantConf) <- variants; (q, idx) <- queries.zipWithIndex) {\n        val conf = 
variantConf ++ Map(\n          CometConf.COMET_DPP_FALLBACK_ENABLED.key -> \"true\",\n          SQLConf.USE_V1_SOURCE_LIST.key -> \"parquet\")\n        try {\n          withSQLConf(conf.toSeq: _*) {\n            val df = spark.sql(q)\n            val rows: Array[Row] = df.collect()\n            rows.length // touch\n\n            val executedPlan = df.queryExecution.executedPlan\n            val taggedFallback = findCometShufflesTaggedAsDppFallback(executedPlan)\n            val inconsistent = findInconsistentCometShuffles(executedPlan)\n            if (taggedFallback.nonEmpty) {\n              suspicious += ((\n                variantName,\n                idx,\n                \"Comet shuffle tagged with DPP fallback reason but not fallen back \" +\n                  s\"(${taggedFallback.size}). Plan:\\n${executedPlan.treeString}\"))\n            }\n            if (inconsistent.nonEmpty) {\n              suspicious += ((\n                variantName,\n                idx,\n                s\"Comet shuffle still has DPP scan in subtree (${inconsistent.size}). \" +\n                  s\"Plan:\\n${executedPlan.treeString}\"))\n            }\n          }\n        } catch {\n          case t: Throwable =>\n            var c: Throwable = t\n            while (c.getCause != null && c.getCause != c) c = c.getCause\n            val sw = new java.io.StringWriter\n            c.printStackTrace(new java.io.PrintWriter(sw))\n            val detail = Option(c.getMessage).getOrElse(c.toString) + \"\\n\" + sw.toString\n            failures += ((variantName, idx, detail, c.getClass.getName))\n        }\n      }\n\n      // Demonstrate-the-bug assertion: if EITHER an #3949-shaped crash or a plan inconsistency\n      // was observed, the bug is reproduced. The 3949 signature is an AssertionError whose\n      // stack goes through ColumnarToRowExec.<init> (Columnar.scala:70) during\n      // BroadcastExchangeExec.doCanonicalize.\n      val anyEvidence = failures.exists { case (_, _, msg, _) =>\n        msg.contains(\"INTERNAL_ERROR\") ||\n        msg.contains(\"supportsColumnar\") ||\n        msg.contains(\"ColumnarToRowExec\") ||\n        msg.contains(\"doCanonicalize\")\n      } || suspicious.nonEmpty\n\n      val summary = new StringBuilder\n      summary.append(s\"#3949 reproduced: ${failures.size} collect failure(s) and \")\n      summary.append(s\"${suspicious.size} plan-shape inconsistencies\\n\")\n      summary.append(\"---- failures ----\\n\")\n      failures.foreach { case (v, i, msg, cls) =>\n        summary.append(s\"$v/q$i ($cls):\\n\").append(msg.take(4000)).append(\"\\n\")\n      }\n      summary.append(\"---- suspicious ----\\n\")\n      suspicious.foreach { case (v, i, note) =>\n        summary.append(s\"$v/q$i: ${note.take(1500)}\\n\")\n      }\n      assert(!anyEvidence, summary.toString)\n    }\n  }\n}\n\n/**\n * `LeafExecNode` stub mirroring how `ShuffleQueryStageExec` presents to a `.exists` walk:\n * `children == Seq.empty`, so descent stops at the wrapper. Used by the mechanism test only.\n */\nprivate case class OpaqueStageStub(output: Seq[Attribute]) extends LeafExecNode {\n  override protected def doExecute(): RDD[InternalRow] =\n    throw new UnsupportedOperationException(\"stub\")\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/comet/CometPlanChecker.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\nimport org.apache.spark.sql.execution.{ColumnarToRowExec, InputAdapter, SparkPlan, WholeStageCodegenExec}\n\n/**\n * Trait providing utilities to check if a Spark plan is fully running on Comet native operators.\n * Used by both CometTestBase and CometBenchmarkBase.\n */\ntrait CometPlanChecker {\n\n  /**\n   * Finds the first non-Comet operator in the plan, if any.\n   *\n   * @param plan\n   *   The SparkPlan to check\n   * @param excludedClasses\n   *   Classes to exclude from the check (these are allowed to be non-Comet)\n   * @return\n   *   Some(operator) if a non-Comet operator is found, None otherwise\n   */\n  protected def findFirstNonCometOperator(\n      plan: SparkPlan,\n      excludedClasses: Class[_]*): Option[SparkPlan] = {\n    val wrapped = wrapCometSparkToColumnar(plan)\n    wrapped.foreach {\n      case _: CometNativeScanExec | _: CometScanExec | _: CometBatchScanExec |\n          _: CometIcebergNativeScanExec =>\n      case _: CometSinkPlaceHolder | _: CometScanWrapper =>\n      case _: CometColumnarToRowExec | _: CometNativeColumnarToRowExec =>\n      case _: CometSparkToColumnarExec =>\n      case _: CometExec | _: CometShuffleExchangeExec =>\n      case _: CometBroadcastExchangeExec =>\n      case _: WholeStageCodegenExec | _: ColumnarToRowExec | _: InputAdapter =>\n      case op if !excludedClasses.exists(c => c.isAssignableFrom(op.getClass)) =>\n        return Some(op)\n      case _ =>\n    }\n    None\n  }\n\n  /** Wraps the CometSparkToColumnar as ScanWrapper, so the child operators will not be checked */\n  private def wrapCometSparkToColumnar(plan: SparkPlan): SparkPlan = {\n    plan.transformDown {\n      // don't care the native operators\n      case p: CometSparkToColumnarExec => CometScanWrapper(null, p)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/comet/CometPlanStabilitySuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport java.io.File\nimport java.nio.charset.StandardCharsets\n\nimport scala.collection.mutable\n\nimport org.apache.commons.io.FileUtils\nimport org.apache.spark.SparkContext\nimport org.apache.spark.internal.config.{MEMORY_OFFHEAP_ENABLED, MEMORY_OFFHEAP_SIZE}\nimport org.apache.spark.sql.TPCDSBase\nimport org.apache.spark.sql.catalyst.expressions.{AttributeSet, Cast}\nimport org.apache.spark.sql.catalyst.util.resourceToString\nimport org.apache.spark.sql.execution.{ReusedSubqueryExec, SparkPlan, SubqueryBroadcastExec, SubqueryExec}\nimport org.apache.spark.sql.execution.adaptive.DisableAdaptiveExecutionSuite\nimport org.apache.spark.sql.execution.exchange.{Exchange, ReusedExchangeExec, ValidateRequirements}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.test.TestSparkSession\n\nimport org.apache.comet.{CometConf, ExtendedExplainInfo}\nimport org.apache.comet.CometSparkSessionExtensions.{isSpark35Plus, isSpark40Plus}\n\n/**\n * Similar to [[org.apache.spark.sql.PlanStabilitySuite]], checks that TPC-DS Comet plans don't\n * change.\n *\n * If there are plan differences, the error message looks like this: Plans did not match: last\n * approved simplified plan: /path/to/tpcds-plan-stability/approved-plans-xxx/q1/simplified.txt\n * last approved explain plan: /path/to/tpcds-plan-stability/approved-plans-xxx/q1/explain.txt\n * [last approved simplified plan]\n *\n * actual simplified plan: /path/to/tmp/q1.actual.simplified.txt actual explain plan:\n * /path/to/tmp/q1.actual.explain.txt [actual simplified plan]\n *\n * To run the entire test suite, for instance `CometTPCDSV2_7_PlanStabilitySuite`:\n * {{{\n *   mvn -pl spark -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\" test\n * }}}\n *\n * To re-generate golden files for entire suite, run:\n * {{{\n *   SPARK_GENERATE_GOLDEN_FILES=1 mvn -pl spark -Dsuites=\"org.apache.spark.sql.comet.CometTPCDSV2_7_PlanStabilitySuite\" test\n * }}}\n */\ntrait CometPlanStabilitySuite extends DisableAdaptiveExecutionSuite with TPCDSBase {\n  protected val scanImpls: Seq[String] =\n    Seq(CometConf.SCAN_NATIVE_ICEBERG_COMPAT, CometConf.SCAN_NATIVE_DATAFUSION)\n\n  protected val baseResourcePath: File = {\n    getWorkspaceFilePath(\"spark\", \"src\", \"test\", \"resources\", \"tpcds-plan-stability\").toFile\n  }\n\n  private val referenceRegex = \"#\\\\d+\".r\n  private val normalizeRegex = \"#\\\\d+L?\".r\n  private val planIdRegex = \"plan_id=\\\\d+\".r\n\n  private val clsName = this.getClass.getCanonicalName\n\n  def goldenFilePath: String\n\n  private val approvedAnsiPlans: Seq[String] = Seq(\"q83\", \"q83.sf100\")\n\n  private def 
getDirForTest(name: String): File = {\n    var goldenFileName = if (SQLConf.get.ansiEnabled && approvedAnsiPlans.contains(name)) {\n      name + \".ansi\"\n    } else {\n      name\n    }\n    val nativeImpl = CometConf.COMET_NATIVE_SCAN_IMPL.get()\n    goldenFileName = s\"$goldenFileName.$nativeImpl\"\n    new File(goldenFilePath, goldenFileName)\n  }\n\n  private def writeGoldenFile(dir: File, filename: String, plan: String): Unit = {\n    FileUtils.writeStringToFile(new File(dir, s\"$filename.txt\"), plan, StandardCharsets.UTF_8)\n    logDebug(s\"APPROVED: $filename\")\n  }\n\n  private def checkWithApproved(dir: File, name: String, filename: String, plan: String): Unit = {\n    val tempDir = FileUtils.getTempDirectory\n    val approvedFile = new File(dir, s\"$filename.txt\")\n    val actualFile = new File(tempDir, s\"$name.actual.$filename.txt\")\n    FileUtils.writeStringToFile(actualFile, plan, StandardCharsets.UTF_8)\n    comparePlans(filename, approvedFile, actualFile)\n  }\n\n  private def comparePlans(planType: String, expectedFile: File, actualFile: File): Unit = {\n    val expected = FileUtils.readFileToString(expectedFile, StandardCharsets.UTF_8)\n    val actual = FileUtils.readFileToString(actualFile, StandardCharsets.UTF_8)\n    if (expected != actual) {\n      fail(s\"\"\"\n              |Plans did not match:\n              |last approved $planType plan: ${expectedFile.getAbsolutePath}\n              |\n              |$expected\n              |\n              |actual $planType plan: ${actualFile.getAbsolutePath}\n              |\n              |$actual\n        \"\"\".stripMargin)\n    }\n  }\n\n  /**\n   * Get the simplified plan for a specific SparkPlan. In the simplified plan, a node only has\n   * its name, its sorted reference and produced attribute names (without ExprId), and its\n   * simplified children. Only the performance-sensitive nodes, e.g., Exchange and Subquery, are\n   * identified in the simplified plan. 
Given such a stable simplified plan, we\n   * expect to avoid frequent plan changes and to catch meaningful regressions.\n   */\n  private def getSimplifiedPlan(plan: SparkPlan): String = {\n    val exchangeIdMap = new mutable.HashMap[Int, Int]()\n    val subqueriesMap = new mutable.HashMap[Int, Int]()\n\n    def getId(plan: SparkPlan): Int = plan match {\n      case exchange: Exchange =>\n        exchangeIdMap.getOrElseUpdate(exchange.id, exchangeIdMap.size + 1)\n      case ReusedExchangeExec(_, exchange) =>\n        exchangeIdMap.getOrElseUpdate(exchange.id, exchangeIdMap.size + 1)\n      case subquery: SubqueryExec =>\n        subqueriesMap.getOrElseUpdate(subquery.id, subqueriesMap.size + 1)\n      case subquery: SubqueryBroadcastExec =>\n        subqueriesMap.getOrElseUpdate(subquery.id, subqueriesMap.size + 1)\n      case ReusedSubqueryExec(subquery) =>\n        subqueriesMap.getOrElseUpdate(subquery.id, subqueriesMap.size + 1)\n      case _ => -1\n    }\n\n    /**\n     * Some expression names have ExprId in them due to using things such as\n     * \"sum(sr_return_amt#14)\", so we remove all of these using a regex\n     */\n    def cleanUpReferences(references: AttributeSet): String = {\n      referenceRegex.replaceAllIn(references.map(_.name).mkString(\",\"), \"\")\n    }\n\n    /**\n     * Generate a simplified plan as a string. Example output: TakeOrderedAndProject\n     * [c_customer_id] WholeStageCodegen Project [c_customer_id]\n     */\n    def simplifyNode(node: SparkPlan, depth: Int): String = {\n      val padding = \"  \" * depth\n      var thisNode = node.nodeName\n      if (node.references.nonEmpty) {\n        thisNode += s\" [${cleanUpReferences(node.references)}]\"\n      }\n      if (node.producedAttributes.nonEmpty) {\n        thisNode += s\" [${cleanUpReferences(node.producedAttributes)}]\"\n      }\n      val id = getId(node)\n      if (id > 0) {\n        thisNode += s\" #$id\"\n      }\n      val childrenSimplified = node.children.map(simplifyNode(_, depth + 1))\n      val subqueriesSimplified = node.subqueries.map(simplifyNode(_, depth + 1))\n      s\"$padding$thisNode\\n${subqueriesSimplified.mkString(\"\")}${childrenSimplified.mkString(\"\")}\"\n    }\n\n    simplifyNode(plan, 0)\n  }\n\n  private def normalizeIds(plan: String): String = {\n    val map = new mutable.HashMap[String, String]()\n    normalizeRegex\n      .findAllMatchIn(plan)\n      .map(_.toString)\n      .foreach(map.getOrElseUpdate(_, (map.size + 1).toString))\n    val exprIdNormalized =\n      normalizeRegex.replaceAllIn(plan, regexMatch => s\"#${map(regexMatch.toString)}\")\n\n    // Normalize the plan id in Exchange nodes. See `Exchange.stringArgs`.\n    val planIdMap = new mutable.HashMap[String, String]()\n    planIdRegex\n      .findAllMatchIn(exprIdNormalized)\n      .map(_.toString)\n      .foreach(planIdMap.getOrElseUpdate(_, (planIdMap.size + 1).toString))\n    planIdRegex.replaceAllIn(\n      exprIdNormalized,\n      regexMatch => s\"plan_id=${planIdMap(regexMatch.toString)}\")\n  }\n\n  private def normalizeLocation(plan: String): String = {\n    plan.replaceAll(\n      s\"Location.*$clsName/\",\n      \"Location [not included in comparison]/{warehouse_dir}/\")\n  }\n\n  /**\n   * Test a TPC-DS query. 
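The golden directory for each query is additionally suffixed with\n   * the active native scan implementation (see getDirForTest), so each scan impl keeps its own\n   * approved plans. 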
Depending on the settings this test will either check if the plan\n   * matches a golden file or it will create a new golden file.\n   */\n  protected def testQuery(tpcdsGroup: String, query: String, suffix: String = \"\"): Unit = {\n    val queryString = resourceToString(\n      s\"$tpcdsGroup/$query.sql\",\n      classLoader = Thread.currentThread().getContextClassLoader)\n\n    withSQLConf(\n      CometConf.COMET_EXPLAIN_FALLBACK_ENABLED.key -> \"true\",\n      CometConf.COMET_ENABLED.key -> \"true\",\n      CometConf.COMET_NATIVE_SCAN_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n      CometConf.COMET_DPP_FALLBACK_ENABLED.key -> \"false\",\n      CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\",\n      CometConf.COMET_EXEC_SORT_MERGE_JOIN_WITH_JOIN_FILTER_ENABLED.key -> \"true\",\n      // as well as for v1.4/q9, v1.4/q44, v2.7.0/q6, v2.7.0/q64\n      CometConf.getExprAllowIncompatConfigKey(classOf[Cast]) -> \"true\",\n      SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"10MB\") {\n      val qe = sql(queryString).queryExecution\n      val plan = qe.executedPlan\n      val extendedExplain = new ExtendedExplainInfo().generateExtendedInfo(qe.executedPlan)\n      val extended = normalizeLocation(normalizeIds(extendedExplain))\n      assert(ValidateRequirements.validate(plan))\n\n      val name = query + suffix\n      val dir = getDirForTest(name)\n\n      if (regenerateGoldenFiles) {\n        FileUtils.deleteDirectory(dir)\n        if (!dir.mkdirs()) {\n          fail(s\"Could not create dir: $dir\")\n        }\n        writeGoldenFile(dir, \"extended\", extended)\n      } else {\n        checkWithApproved(dir, name, \"extended\", extended)\n      }\n    }\n  }\n\n  protected override def createSparkSession: TestSparkSession = {\n    val conf = super.sparkConf\n    conf.set(\"spark.sql.extensions\", \"org.apache.comet.CometSparkSessionExtensions\")\n    conf.set(\n      \"spark.shuffle.manager\",\n      \"org.apache.spark.sql.comet.execution.shuffle.CometShuffleManager\")\n    conf.set(MEMORY_OFFHEAP_ENABLED.key, \"true\")\n    conf.set(MEMORY_OFFHEAP_SIZE.key, \"2g\")\n    conf.set(CometConf.COMET_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_EXEC_ENABLED.key, \"true\")\n    conf.set(CometConf.COMET_ONHEAP_MEMORY_OVERHEAD.key, \"1g\")\n    conf.set(CometConf.COMET_EXEC_SHUFFLE_ENABLED.key, \"true\")\n\n    new TestSparkSession(new SparkContext(\"local[1]\", this.getClass.getCanonicalName, conf))\n  }\n}\n\nclass CometTPCDSV1_4_PlanStabilitySuite extends CometPlanStabilitySuite {\n  private val planName = if (isSpark40Plus) {\n    \"approved-plans-v1_4-spark4_0\"\n  } else if (isSpark35Plus) {\n    \"approved-plans-v1_4-spark3_5\"\n  } else {\n    \"approved-plans-v1_4\"\n  }\n  override val goldenFilePath: String =\n    new File(baseResourcePath, planName).getAbsolutePath\n\n  scanImpls.foreach { scan =>\n    tpcdsQueries.foreach { q =>\n      test(s\"check simplified (tpcds-v1.4/$q) - $scan\") {\n        withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> scan) {\n          testQuery(\"tpcds\", q)\n        }\n      }\n    }\n  }\n}\n\nclass CometTPCDSV2_7_PlanStabilitySuite extends CometPlanStabilitySuite {\n  private val planName = if (isSpark40Plus) {\n    \"approved-plans-v2_7-spark4_0\"\n  } else if (isSpark35Plus) {\n    \"approved-plans-v2_7-spark3_5\"\n  } else {\n    \"approved-plans-v2_7\"\n  }\n  override val goldenFilePath: String =\n    new File(baseResourcePath, planName).getAbsolutePath\n\n  scanImpls.foreach { scan =>\n    
tpcdsQueriesV2_7_0.foreach { q =>\n      test(s\"check simplified (tpcds-v2.7.0/$q) - $scan\") {\n        withSQLConf(CometConf.COMET_NATIVE_SCAN_IMPL.key -> scan) {\n          testQuery(\"tpcds-v2.7.0\", q)\n        }\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/comet/CometShuffleFallbackStickinessSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport org.apache.spark.rdd.RDD\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.catalyst.InternalRow\nimport org.apache.spark.sql.catalyst.expressions.Attribute\nimport org.apache.spark.sql.catalyst.plans.physical.SinglePartition\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\nimport org.apache.spark.sql.execution.LeafExecNode\nimport org.apache.spark.sql.execution.exchange.ShuffleExchangeExec\nimport org.apache.spark.sql.internal.SQLConf\n\nimport org.apache.comet.{CometConf, CometFallback}\n\n/**\n * Pins the sticky-fallback invariant for Comet shuffle decisions: `nativeShuffleSupported` /\n * `columnarShuffleSupported` must return `false` whenever the shuffle already carries a\n * `CometFallback` marker from a prior rule pass.\n *\n * Without this behavior, AQE's stage-prep rule re-evaluation can flip the decision — e.g.,\n * `stageContainsDPPScan` walks the shuffle's child tree with `.exists`, but a materialized child\n * stage is wrapped in `ShuffleQueryStageExec` (a `LeafExecNode`) so `.exists` stops at the\n * wrapper and the DPP scan becomes invisible. That causes the same shuffle to fall back to Spark\n * at initial planning and then convert to Comet at stage prep, producing plan-shape\n * inconsistencies across the two passes (suspected mechanism behind #3949).\n *\n * Fallback decisions that must survive AQE replanning use `CometFallback.markForFallback`. The\n * shuffle-support predicates check `isMarkedForFallback` at the top and short-circuit.\n */\nclass CometShuffleFallbackStickinessSuite extends CometTestBase {\n\n  test(\"both support predicates fall back when the shuffle carries a CometFallback marker\") {\n    val shuffle = ShuffleExchangeExec(SinglePartition, SyntheticLeaf(Nil))\n    CometFallback.markForFallback(shuffle, \"pretend prior pass decided Spark fallback\")\n\n    withSQLConf(CometConf.COMET_DPP_FALLBACK_ENABLED.key -> \"true\") {\n      assert(\n        !CometShuffleExchangeExec.columnarShuffleSupported(shuffle),\n        \"marked shuffle must preserve its prior-pass fallback decision (columnar path)\")\n      assert(\n        !CometShuffleExchangeExec.nativeShuffleSupported(shuffle),\n        \"marked shuffle must preserve its prior-pass fallback decision (native path)\")\n    }\n  }\n\n  test(\"informational explain-info alone does NOT force fallback\") {\n    // A shuffle can accumulate explain info (e.g. 'Comet native shuffle not enabled') as\n    // informational output from earlier checks without being a full-fallback signal. 
That\n    // info must not cause the columnar path to decline.\n    val shuffle = ShuffleExchangeExec(SinglePartition, SyntheticLeaf(Nil))\n    // Note: withInfo, not markForFallback.\n    org.apache.comet.CometSparkSessionExtensions\n      .withInfo(shuffle, \"Comet native shuffle not enabled\")\n    assert(\n      !CometFallback.isMarkedForFallback(shuffle),\n      \"explain info alone must not imply a sticky fallback marker\")\n  }\n\n  test(\n    \"DPP fallback decision is sticky across two invocations even when the child tree changes\") {\n    withTempDir { dir =>\n      val factPath = s\"${dir.getAbsolutePath}/fact.parquet\"\n      val dimPath = s\"${dir.getAbsolutePath}/dim.parquet\"\n      withSQLConf(CometConf.COMET_EXEC_ENABLED.key -> \"false\") {\n        val sess = spark\n        import sess.implicits._\n        val oneDay = 24L * 60L * 60000L\n        val now = System.currentTimeMillis()\n        (0 until 400)\n          .map(i => (i, new java.sql.Date(now + (i % 40) * oneDay), i.toString))\n          .toDF(\"fact_id\", \"fact_date\", \"fact_str\")\n          .write\n          .partitionBy(\"fact_date\")\n          .parquet(factPath)\n        (0 until 40)\n          .map(i => (i, new java.sql.Date(now + i * oneDay), i.toString))\n          .toDF(\"dim_id\", \"dim_date\", \"dim_str\")\n          .write\n          .parquet(dimPath)\n      }\n      spark.read.parquet(factPath).createOrReplaceTempView(\"t_sticky_fact\")\n      spark.read.parquet(dimPath).createOrReplaceTempView(\"t_sticky_dim\")\n\n      withSQLConf(\n        CometConf.COMET_DPP_FALLBACK_ENABLED.key -> \"true\",\n        SQLConf.AUTO_BROADCASTJOIN_THRESHOLD.key -> \"-1\",\n        SQLConf.PREFER_SORTMERGEJOIN.key -> \"true\",\n        SQLConf.ADAPTIVE_EXECUTION_ENABLED.key -> \"true\",\n        SQLConf.USE_V1_SOURCE_LIST.key -> \"parquet\") {\n\n        val df = spark.sql(\n          \"select f.fact_date, count(*) c from t_sticky_fact f \" +\n            \"join t_sticky_dim d on f.fact_date = d.dim_date \" +\n            \"where d.dim_id > 35 group by f.fact_date\")\n        val initial = df.queryExecution.executedPlan match {\n          case a: org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanExec => a.initialPlan\n          case other => other\n        }\n        val shuffle =\n          initial\n            .collectFirst { case s: ShuffleExchangeExec => s }\n            .getOrElse(fail(s\"no shuffle found:\\n${initial.treeString}\"))\n\n        // Pass 1: real DPP subtree visible. Returns false AND marks the shuffle.\n        val first = CometShuffleExchangeExec.columnarShuffleSupported(shuffle)\n        assert(!first, \"initial pass must fall back (DPP visible)\")\n        assert(\n          CometFallback.isMarkedForFallback(shuffle),\n          \"fallback marker must be placed on the shuffle\")\n\n        // Pass 2 simulates AQE stage-prep: replace the child with an opaque leaf that hides\n        // the DPP subtree from tree walks. 
A naive `.exists`-based check would flip to true\n        // here; the sticky marker must keep the decision stable.\n        val reshapedShuffle =\n          shuffle\n            .withNewChildren(Seq(SyntheticLeaf(shuffle.child.output)))\n            .asInstanceOf[ShuffleExchangeExec]\n        val second = CometShuffleExchangeExec.columnarShuffleSupported(reshapedShuffle)\n        assert(\n          !second,\n          \"second pass must still fall back even though the DPP subtree is now hidden\")\n      }\n    }\n  }\n}\n\nprivate case class SyntheticLeaf(output: Seq[Attribute]) extends LeafExecNode {\n  override protected def doExecute(): RDD[InternalRow] =\n    throw new UnsupportedOperationException(\"stub\")\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/comet/CometTaskMetricsSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport scala.collection.mutable\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.executor.ShuffleReadMetrics\nimport org.apache.spark.executor.ShuffleWriteMetrics\nimport org.apache.spark.scheduler.SparkListener\nimport org.apache.spark.scheduler.SparkListenerTaskEnd\nimport org.apache.spark.sql.CometTestBase\nimport org.apache.spark.sql.comet.execution.shuffle.CometNativeShuffle\nimport org.apache.spark.sql.comet.execution.shuffle.CometShuffleExchangeExec\nimport org.apache.spark.sql.execution.SparkPlan\nimport org.apache.spark.sql.execution.adaptive.AdaptiveSparkPlanHelper\n\nimport org.apache.comet.CometConf\n\nclass CometTaskMetricsSuite extends CometTestBase with AdaptiveSparkPlanHelper {\n\n  override protected def sparkConf: SparkConf = {\n    super.sparkConf.set(\"spark.ui.enabled\", \"true\")\n  }\n\n  import testImplicits._\n\n  test(\"per-task native shuffle metrics\") {\n    withParquetTable((0 until 10000).map(i => (i, (i + 1).toLong)), \"tbl\") {\n      val df = sql(\"SELECT * FROM tbl\").sortWithinPartitions($\"_1\".desc)\n      val shuffled = df.repartition(1, $\"_1\")\n\n      val cometShuffle = find(shuffled.queryExecution.executedPlan) {\n        case _: CometShuffleExchangeExec => true\n        case _ => false\n      }\n      assert(cometShuffle.isDefined, \"CometShuffleExchangeExec not found in the plan\")\n      assert(\n        cometShuffle.get.asInstanceOf[CometShuffleExchangeExec].shuffleType == CometNativeShuffle)\n\n      val shuffleWriteMetricsList = mutable.ArrayBuffer.empty[ShuffleWriteMetrics]\n      val shuffleReadMetricsList = mutable.ArrayBuffer.empty[ShuffleReadMetrics]\n\n      spark.sparkContext.addSparkListener(new SparkListener {\n        override def onTaskEnd(taskEnd: SparkListenerTaskEnd): Unit = {\n          val taskMetrics = taskEnd.taskMetrics\n\n          if (taskEnd.taskType.contains(\"ShuffleMapTask\")) {\n            val shuffleWriteMetrics = taskMetrics.shuffleWriteMetrics\n            shuffleWriteMetricsList.synchronized {\n              shuffleWriteMetricsList += shuffleWriteMetrics\n            }\n          } else {\n            val shuffleReadMetrics = taskMetrics.shuffleReadMetrics\n            shuffleReadMetricsList.synchronized {\n              shuffleReadMetricsList += shuffleReadMetrics\n            }\n          }\n        }\n      })\n\n      // Avoid receiving earlier taskEnd events\n      spark.sparkContext.listenerBus.waitUntilEmpty()\n\n      // Run the action to trigger the shuffle\n      shuffled.collect()\n\n      spark.sparkContext.listenerBus.waitUntilEmpty()\n\n      // Check the shuffle write and read metrics\n      
assert(shuffleWriteMetricsList.nonEmpty, \"No shuffle write metrics found\")\n      shuffleWriteMetricsList.foreach { metrics =>\n        assert(metrics.writeTime > 0)\n        assert(metrics.bytesWritten > 0)\n        assert(metrics.recordsWritten > 0)\n      }\n\n      assert(shuffleReadMetricsList.nonEmpty, \"No shuffle read metrics found\")\n      shuffleReadMetricsList.foreach { metrics =>\n        assert(metrics.recordsRead > 0)\n        assert(metrics.totalBytesRead > 0)\n      }\n    }\n  }\n\n  test(\"native_datafusion scan reports task-level input metrics matching Spark\") {\n    val totalRows = 10000\n    withTempPath { dir =>\n      spark\n        .createDataFrame((0 until totalRows).map(i => (i, s\"elem_$i\")))\n        .repartition(5)\n        .write\n        .parquet(dir.getAbsolutePath)\n      spark.read.parquet(dir.getAbsolutePath).createOrReplaceTempView(\"tbl\")\n      // Collect baseline input metrics from vanilla Spark (Comet disabled)\n      val (sparkBytes, sparkRecords, _) =\n        collectInputMetrics(\n          \"SELECT * FROM tbl where _1 > 2000\",\n          CometConf.COMET_ENABLED.key -> \"false\")\n\n      // Collect input metrics from Comet native_datafusion scan.\n      val (cometBytes, cometRecords, cometPlan) = collectInputMetrics(\n        \"SELECT * FROM tbl where _1 > 2000\",\n        CometConf.COMET_ENABLED.key -> \"true\",\n        CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION)\n\n      // Verify the plan actually used CometNativeScanExec\n      assert(\n        find(cometPlan)(_.isInstanceOf[CometNativeScanExec]).isDefined,\n        s\"Expected CometNativeScanExec in plan:\\n${cometPlan.treeString}\")\n\n      assert(sparkRecords > 0, s\"Spark outputRecords should be > 0, got $sparkRecords\")\n      assert(cometRecords > 0, s\"Comet outputRecords should be > 0, got $cometRecords\")\n\n      assert(\n        cometRecords == sparkRecords,\n        s\"recordsRead mismatch: comet=$cometRecords, sparkRecords=$sparkRecords\")\n\n      // Bytes should be in the same ballpark -- both read the same Parquet file(s),\n      // but the exact byte count can differ due to reader implementation details\n      // (e.g. 
footer reads, page headers, buffering granularity).\n      assert(sparkBytes > 0, s\"Spark bytesRead should be > 0, got $sparkBytes\")\n      assert(cometBytes > 0, s\"Comet bytesRead should be > 0, got $cometBytes\")\n      val ratio = cometBytes.toDouble / sparkBytes.toDouble\n      assert(\n        ratio >= 0.7 && ratio <= 1.3,\n        s\"bytesRead ratio out of range: comet=$cometBytes, spark=$sparkBytes, ratio=$ratio\")\n    }\n  }\n\n  test(\"input metrics aggregate across multiple native scans in a join\") {\n    withTempPath { dir1 =>\n      withTempPath { dir2 =>\n        // Create two separate parquet tables\n        spark\n          .createDataFrame((0 until 5000).map(i => (i, s\"left_$i\")))\n          .repartition(3)\n          .write\n          .parquet(dir1.getAbsolutePath)\n        spark\n          .createDataFrame((0 until 5000).map(i => (i, s\"right_$i\")))\n          .repartition(3)\n          .write\n          .parquet(dir2.getAbsolutePath)\n\n        spark.read.parquet(dir1.getAbsolutePath).createOrReplaceTempView(\"left_tbl\")\n        spark.read.parquet(dir2.getAbsolutePath).createOrReplaceTempView(\"right_tbl\")\n\n        val joinQuery = \"SELECT * FROM left_tbl JOIN right_tbl ON left_tbl._1 = right_tbl._1\"\n\n        // Collect baseline from vanilla Spark\n        val (sparkBytes, sparkRecords, _) =\n          collectInputMetrics(joinQuery, CometConf.COMET_ENABLED.key -> \"false\")\n\n        // Collect from Comet native scan\n        val (cometBytes, cometRecords, cometPlan) = collectInputMetrics(\n          joinQuery,\n          CometConf.COMET_ENABLED.key -> \"true\",\n          CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION)\n\n        // Verify the plan has multiple CometNativeScanExec nodes\n        val scanCount = collect(cometPlan) { case s: CometNativeScanExec =>\n          s\n        }.size\n        assert(\n          scanCount >= 2,\n          s\"Expected at least 2 CometNativeScanExec in plan, found $scanCount:\\n\" +\n            cometPlan.treeString)\n\n        assert(sparkBytes > 0, s\"Spark bytesRead should be > 0, got $sparkBytes\")\n        assert(cometBytes > 0, s\"Comet bytesRead should be > 0, got $cometBytes\")\n        assert(sparkRecords > 0, s\"Spark recordsRead should be > 0, got $sparkRecords\")\n        assert(cometRecords > 0, s\"Comet recordsRead should be > 0, got $cometRecords\")\n\n        // Both sides should contribute to the total bytes\n        val ratio = cometBytes.toDouble / sparkBytes.toDouble\n        assert(\n          ratio >= 0.7 && ratio <= 1.3,\n          s\"bytesRead ratio out of range: comet=$cometBytes, spark=$sparkBytes, ratio=$ratio\")\n      }\n    }\n  }\n\n  test(\"input metrics aggregate across multiple native scans in a union\") {\n    withTempPath { dir1 =>\n      withTempPath { dir2 =>\n        spark\n          .createDataFrame((0 until 5000).map(i => (i, s\"left_$i\")))\n          .repartition(3)\n          .write\n          .parquet(dir1.getAbsolutePath)\n        spark\n          .createDataFrame((5000 until 10000).map(i => (i, s\"right_$i\")))\n          .repartition(3)\n          .write\n          .parquet(dir2.getAbsolutePath)\n\n        spark.read.parquet(dir1.getAbsolutePath).createOrReplaceTempView(\"union_left\")\n        spark.read.parquet(dir2.getAbsolutePath).createOrReplaceTempView(\"union_right\")\n\n        val unionQuery = \"SELECT * FROM union_left UNION ALL SELECT * FROM union_right\"\n\n        // Collect baseline from vanilla Spark\n        val (sparkBytes, 
sparkRecords, _) =\n          collectInputMetrics(unionQuery, CometConf.COMET_ENABLED.key -> \"false\")\n\n        // Collect from Comet native scan\n        val (cometBytes, cometRecords, cometPlan) = collectInputMetrics(\n          unionQuery,\n          CometConf.COMET_ENABLED.key -> \"true\",\n          CometConf.COMET_NATIVE_SCAN_IMPL.key -> CometConf.SCAN_NATIVE_DATAFUSION)\n\n        // Verify the plan has multiple CometNativeScanExec nodes\n        val scanCount = collect(cometPlan) { case s: CometNativeScanExec =>\n          s\n        }.size\n        assert(\n          scanCount >= 2,\n          s\"Expected at least 2 CometNativeScanExec in plan, found $scanCount:\\n\" +\n            cometPlan.treeString)\n\n        assert(sparkBytes > 0, s\"Spark bytesRead should be > 0, got $sparkBytes\")\n        assert(cometBytes > 0, s\"Comet bytesRead should be > 0, got $cometBytes\")\n        assert(sparkRecords > 0, s\"Spark recordsRead should be > 0, got $sparkRecords\")\n        assert(cometRecords > 0, s\"Comet recordsRead should be > 0, got $cometRecords\")\n\n        val ratio = cometBytes.toDouble / sparkBytes.toDouble\n        assert(\n          ratio >= 0.7 && ratio <= 1.3,\n          s\"bytesRead ratio out of range: comet=$cometBytes, spark=$sparkBytes, ratio=$ratio\")\n      }\n    }\n  }\n\n  /**\n   * Runs the given query with the given SQL config overrides and returns the aggregated\n   * (bytesRead, recordsRead) across all tasks, along with the executed plan.\n   *\n   * Uses AppStatusStore (same source as Spark UI) to read task-level input metrics.\n   * AppStatusStore stores immutable snapshots of metric values, unlike SparkListener's\n   * InputMetrics which are backed by mutable accumulators that can be reset.\n   */\n  private def collectInputMetrics(\n      query: String,\n      confs: (String, String)*): (Long, Long, SparkPlan) = {\n    val store = spark.sparkContext.statusStore\n\n    // Record existing stage IDs so we only look at stages from our query\n    val stagesBefore = store.stageList(null).map(_.stageId).toSet\n\n    var plan: SparkPlan = null\n    withSQLConf(confs: _*) {\n      val df = sql(query)\n      df.collect()\n      plan = stripAQEPlan(df.queryExecution.executedPlan)\n    }\n\n    // Wait for listener bus to flush all events into the status store\n    spark.sparkContext.listenerBus.waitUntilEmpty()\n\n    // Sum input metrics from stages created by our query\n    val newStages = store.stageList(null).filterNot(s => stagesBefore.contains(s.stageId))\n    assert(newStages.nonEmpty, s\"No new stages found for confs=$confs\")\n\n    val totalBytes = newStages.map(_.inputBytes).sum\n    val totalRecords = newStages.map(_.inputRecords).sum\n\n    (totalBytes, totalRecords, plan)\n  }\n}\n"
  },
  {
    "path": "spark/src/test/scala/org/apache/spark/sql/comet/ParquetEncryptionITCase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet\n\nimport java.io.{File, RandomAccessFile}\nimport java.nio.charset.StandardCharsets\nimport java.util.Base64\n\nimport org.junit.runner.RunWith\nimport org.scalactic.source.Position\nimport org.scalatest.Tag\nimport org.scalatestplus.junit.JUnitRunner\n\nimport org.apache.parquet.crypto.DecryptionPropertiesFactory\nimport org.apache.parquet.crypto.keytools.{KeyToolkit, PropertiesDrivenCryptoFactory}\nimport org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\nimport org.apache.spark.{DebugFilesystem, SparkConf}\nimport org.apache.spark.sql.{functions, CometTestBase, SQLContext}\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.test.SQLTestUtils\n\nimport org.apache.comet.{CometConf, IntegrationTestSuite}\nimport org.apache.comet.CometConf.{SCAN_NATIVE_DATAFUSION, SCAN_NATIVE_ICEBERG_COMPAT}\n\n/**\n * A integration test suite that tests parquet modular encryption usage.\n */\n@RunWith(classOf[JUnitRunner])\n@IntegrationTestSuite\nclass ParquetEncryptionITCase extends CometTestBase with SQLTestUtils {\n  private val encoder = Base64.getEncoder\n  private val footerKey =\n    encoder.encodeToString(\"0123456789012345\".getBytes(StandardCharsets.UTF_8))\n  private val key1 = encoder.encodeToString(\"1234567890123450\".getBytes(StandardCharsets.UTF_8))\n  private val key2 = encoder.encodeToString(\"1234567890123451\".getBytes(StandardCharsets.UTF_8))\n  private val cryptoFactoryClass =\n    \"org.apache.parquet.crypto.keytools.PropertiesDrivenCryptoFactory\"\n\n  test(\"SPARK-34990: Write and read an encrypted parquet\") {\n\n    import testImplicits._\n\n    withTempDir { dir =>\n      withSQLConf(\n        DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n        KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n          \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n        InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n          s\"footerKey: ${footerKey}, key1: ${key1}, key2: ${key2}\") {\n\n        // Make sure encryption works with multiple Parquet files\n        val inputDF = spark\n          .range(0, 2000)\n          .map(i => (i, i.toString, i.toFloat))\n          .repartition(10)\n          .toDF(\"a\", \"b\", \"c\")\n        val parquetDir = new File(dir, \"parquet\").getCanonicalPath\n        inputDF.write\n          .option(PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME, \"key1: a, b; key2: c\")\n          .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n          .parquet(parquetDir)\n\n        verifyParquetEncrypted(parquetDir)\n\n        val parquetDF = spark.read.parquet(parquetDir)\n        
assert(parquetDF.inputFiles.nonEmpty)\n        val readDataset = parquetDF.select(\"a\", \"b\", \"c\")\n\n        if (CometConf.COMET_ENABLED.get(conf)) {\n          checkSparkAnswerAndOperator(readDataset)\n        } else {\n          checkAnswer(readDataset, inputDF)\n        }\n      }\n    }\n  }\n\n  test(\"SPARK-37117: Can't read files in Parquet encryption external key material mode\") {\n\n    import testImplicits._\n\n    withTempDir { dir =>\n      withSQLConf(\n        DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n        KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n          \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n        KeyToolkit.KEY_MATERIAL_INTERNAL_PROPERTY_NAME -> \"false\", // default is true\n        InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n          s\"footerKey: ${footerKey}, key1: ${key1}, key2: ${key2}\") {\n\n        val inputDF = spark\n          .range(0, 2000)\n          .map(i => (i, i.toString, i.toFloat))\n          .repartition(10)\n          .toDF(\"a\", \"b\", \"c\")\n        val parquetDir = new File(dir, \"parquet\").getCanonicalPath\n        inputDF.write\n          .option(PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME, \"key1: a, b; key2: c\")\n          .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n          .parquet(parquetDir)\n\n        verifyParquetEncrypted(parquetDir)\n\n        val parquetDF = spark.read.parquet(parquetDir)\n        assert(parquetDF.inputFiles.nonEmpty)\n        val readDataset = parquetDF.select(\"a\", \"b\", \"c\")\n\n        if (CometConf.COMET_ENABLED.get(conf)) {\n          checkSparkAnswerAndOperator(readDataset)\n        } else {\n          checkAnswer(readDataset, inputDF)\n        }\n      }\n    }\n  }\n\n  test(\"SPARK-42114: Test of uniform parquet encryption\") {\n\n    import testImplicits._\n\n    withTempDir { dir =>\n      withSQLConf(\n        DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n        KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n          \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n        InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n          s\"key1: ${key1}\") {\n\n        val inputDF = spark\n          .range(0, 2000)\n          .map(i => (i, i.toString, i.toFloat))\n          .repartition(10)\n          .toDF(\"a\", \"b\", \"c\")\n        val parquetDir = new File(dir, \"parquet\").getCanonicalPath\n        inputDF.write\n          .option(\"parquet.encryption.uniform.key\", \"key1\")\n          .parquet(parquetDir)\n\n        verifyParquetEncrypted(parquetDir)\n\n        val parquetDF = spark.read.parquet(parquetDir)\n        assert(parquetDF.inputFiles.nonEmpty)\n        val readDataset = parquetDF.select(\"a\", \"b\", \"c\")\n\n        if (CometConf.COMET_ENABLED.get(conf)) {\n          checkSparkAnswerAndOperator(readDataset)\n        } else {\n          checkAnswer(readDataset, inputDF)\n        }\n      }\n    }\n  }\n\n  test(\"Plain text footer mode\") {\n    import testImplicits._\n\n    withTempDir { dir =>\n      withSQLConf(\n        DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n        KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n          \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n        PropertiesDrivenCryptoFactory.PLAINTEXT_FOOTER_PROPERTY_NAME -> \"true\", // default is false\n        InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n          s\"footerKey: ${footerKey}, key1: 
${key1}, key2: ${key2}\") {\n\n        val inputDF = spark\n          .range(0, 1000)\n          .map(i => (i, i.toString, i.toFloat))\n          .repartition(5)\n          .toDF(\"a\", \"b\", \"c\")\n        val parquetDir = new File(dir, \"parquet\").getCanonicalPath\n        inputDF.write\n          .option(PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME, \"key1: a, b; key2: c\")\n          .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n          .parquet(parquetDir)\n\n        verifyParquetPlaintextFooter(parquetDir)\n\n        val parquetDF = spark.read.parquet(parquetDir)\n        assert(parquetDF.inputFiles.nonEmpty)\n        val readDataset = parquetDF.select(\"a\", \"b\", \"c\")\n\n        if (CometConf.COMET_ENABLED.get(conf)) {\n          checkSparkAnswerAndOperator(readDataset)\n        } else {\n          checkAnswer(readDataset, inputDF)\n        }\n      }\n    }\n  }\n\n  test(\"Change encryption algorithm\") {\n    import testImplicits._\n\n    withTempDir { dir =>\n      withSQLConf(\n        DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n        KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n          \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n        // default is AES_GCM_V1\n        PropertiesDrivenCryptoFactory.ENCRYPTION_ALGORITHM_PROPERTY_NAME -> \"AES_GCM_CTR_V1\",\n        InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n          s\"footerKey: ${footerKey}, key1: ${key1}, key2: ${key2}\") {\n\n        val inputDF = spark\n          .range(0, 1000)\n          .map(i => (i, i.toString, i.toFloat))\n          .repartition(5)\n          .toDF(\"a\", \"b\", \"c\")\n        val parquetDir = new File(dir, \"parquet\").getCanonicalPath\n        inputDF.write\n          .option(PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME, \"key1: a, b; key2: c\")\n          .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n          .parquet(parquetDir)\n\n        verifyParquetEncrypted(parquetDir)\n\n        val parquetDF = spark.read.parquet(parquetDir)\n        assert(parquetDF.inputFiles.nonEmpty)\n        val readDataset = parquetDF.select(\"a\", \"b\", \"c\")\n\n        // native_datafusion and native_iceberg_compat fall back due to Arrow-rs\n        // https://github.com/apache/arrow-rs/blob/da9829728e2a9dffb8d4f47ffe7b103793851724/parquet/src/file/metadata/parser.rs#L494\n        checkAnswer(readDataset, inputDF)\n      }\n    }\n  }\n\n  test(\"Test double wrapping disabled\") {\n    import testImplicits._\n\n    withTempDir { dir =>\n      withSQLConf(\n        DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n        KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n          \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n        KeyToolkit.DOUBLE_WRAPPING_PROPERTY_NAME -> \"false\", // default is true\n        InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n          s\"footerKey: ${footerKey}, key1: ${key1}, key2: ${key2}\") {\n\n        val inputDF = spark\n          .range(0, 1000)\n          .map(i => (i, i.toString, i.toFloat))\n          .repartition(5)\n          .toDF(\"a\", \"b\", \"c\")\n        val parquetDir = new File(dir, \"parquet\").getCanonicalPath\n        inputDF.write\n          .option(PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME, \"key1: a, b; key2: c\")\n          .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n          .parquet(parquetDir)\n\n       
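 // With double wrapping disabled, data keys are wrapped directly with master keys instead\n        // of intermediate KEKs; the file format itself is unchanged, so the encrypted-footer\n        // check below still applies.\n       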
 verifyParquetEncrypted(parquetDir)\n\n        val parquetDF = spark.read.parquet(parquetDir)\n        assert(parquetDF.inputFiles.nonEmpty)\n        val readDataset = parquetDF.select(\"a\", \"b\", \"c\")\n\n        if (CometConf.COMET_ENABLED.get(conf)) {\n          checkSparkAnswerAndOperator(readDataset)\n        } else {\n          checkAnswer(readDataset, inputDF)\n        }\n      }\n    }\n  }\n\n  test(\"Join between files with different encryption keys\") {\n    import testImplicits._\n\n    withTempDir { dir =>\n      withSQLConf(\n        DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n        KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n          \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n        InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n          s\"footerKey: ${footerKey}, key1: ${key1}, key2: ${key2}\") {\n\n        // Write first file\n        val inputDF1 = spark\n          .range(0, 100)\n          .map(i => (i, s\"file1_${i}\", i.toFloat))\n          .toDF(\"id\", \"name\", \"value\")\n        val parquetDir1 = new File(dir, \"parquet1\").getCanonicalPath\n        inputDF1.write\n          .option(\n            PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME,\n            \"key1: id, name, value\")\n          .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n          .parquet(parquetDir1)\n\n        // Write second file using different column key\n        val inputDF2 = spark\n          .range(0, 100)\n          .map(i => (i, s\"file2_${i}\", (i * 2).toFloat))\n          .toDF(\"id\", \"description\", \"score\")\n        val parquetDir2 = new File(dir, \"parquet2\").getCanonicalPath\n        inputDF2.write\n          .option(\n            PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME,\n            \"key2: id, description, score\")\n          .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n          .parquet(parquetDir2)\n\n        verifyParquetEncrypted(parquetDir1)\n        verifyParquetEncrypted(parquetDir2)\n\n        // Now perform a join between the two files with different encryption keys\n        // This tests that hadoopConf properties propagate correctly to each scan\n        val parquetDF1 = spark.read.parquet(parquetDir1).alias(\"f1\")\n        val parquetDF2 = spark.read.parquet(parquetDir2).alias(\"f2\")\n\n        val joinedDF = parquetDF1\n          .join(parquetDF2, parquetDF1(\"id\") === parquetDF2(\"id\"), \"inner\")\n          .select(\n            parquetDF1(\"id\"),\n            parquetDF1(\"name\"),\n            parquetDF2(\"description\"),\n            parquetDF2(\"score\"))\n\n        if (CometConf.COMET_ENABLED.get(conf)) {\n          checkSparkAnswerAndOperator(joinedDF)\n        } else {\n          checkSparkAnswer(joinedDF)\n        }\n      }\n    }\n  }\n\n  // Union ends up with two scans in the same plan, so this ensures that Comet can distinguish\n  // between the hadoopConfs for each relation\n  test(\"Union between files with different encryption keys\") {\n    import testImplicits._\n\n    withTempDir { dir =>\n      withSQLConf(\n        DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n        KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n          \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n        InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n          s\"footerKey: ${footerKey}, key1: ${key1}, key2: ${key2}\",\n        
CometConf.COMET_EXEC_SHUFFLE_ENABLED.key -> \"true\") {\n\n        // Write first file with key1\n        val inputDF1 = spark\n          .range(0, 100)\n          .map(i => (i, s\"file1_${i}\", i.toFloat))\n          .toDF(\"id\", \"name\", \"value\")\n        val parquetDir1 = new File(dir, \"parquet1\").getCanonicalPath\n        inputDF1.write\n          .option(\n            PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME,\n            \"key1: id, name, value\")\n          .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n          .parquet(parquetDir1)\n\n        // Write second file with key2 - same schema, different encryption key\n        val inputDF2 = spark\n          .range(100, 200)\n          .map(i => (i, s\"file2_${i}\", i.toFloat))\n          .toDF(\"id\", \"name\", \"value\")\n        val parquetDir2 = new File(dir, \"parquet2\").getCanonicalPath\n        inputDF2.write\n          .option(\n            PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME,\n            \"key2: id, name, value\")\n          .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n          .parquet(parquetDir2)\n\n        verifyParquetEncrypted(parquetDir1)\n        verifyParquetEncrypted(parquetDir2)\n\n        val parquetDF1 = spark.read.parquet(parquetDir1)\n        val parquetDF2 = spark.read.parquet(parquetDir2)\n\n        val unionDF = parquetDF1.union(parquetDF2)\n        // Since the union has its own executeColumnar, problems would not surface if it were the\n        // last operator. Adding a Comet aggregate after the union exercises\n        // foreachUntilCometInput() in operator.scala: without it, we would error on multiple\n        // native scan execs even though they are no longer in the same native plan.\n        val aggDf = unionDF.agg(functions.sum(\"id\"))\n\n        if (CometConf.COMET_ENABLED.get(conf)) {\n          checkSparkAnswerAndOperator(aggDf)\n        } else {\n          checkSparkAnswer(aggDf)\n        }\n      }\n    }\n  }\n\n  test(\"Test different key lengths\") {\n    import testImplicits._\n\n    withTempDir { dir =>\n      withSQLConf(\n        DecryptionPropertiesFactory.CRYPTO_FACTORY_CLASS_PROPERTY_NAME -> cryptoFactoryClass,\n        KeyToolkit.KMS_CLIENT_CLASS_PROPERTY_NAME ->\n          \"org.apache.parquet.crypto.keytools.mocks.InMemoryKMS\",\n        KeyToolkit.DATA_KEY_LENGTH_PROPERTY_NAME -> \"256\",\n        KeyToolkit.KEK_LENGTH_PROPERTY_NAME -> \"256\",\n        InMemoryKMS.KEY_LIST_PROPERTY_NAME ->\n          s\"footerKey: ${footerKey}, key1: ${key1}, key2: ${key2}\") {\n\n        val inputDF = spark\n          .range(0, 1000)\n          .map(i => (i, i.toString, i.toFloat))\n          .repartition(5)\n          .toDF(\"a\", \"b\", \"c\")\n        val parquetDir = new File(dir, \"parquet\").getCanonicalPath\n        inputDF.write\n          .option(PropertiesDrivenCryptoFactory.COLUMN_KEYS_PROPERTY_NAME, \"key1: a, b; key2: c\")\n          .option(PropertiesDrivenCryptoFactory.FOOTER_KEY_PROPERTY_NAME, \"footerKey\")\n          .parquet(parquetDir)\n\n        verifyParquetEncrypted(parquetDir)\n\n        val parquetDF = spark.read.parquet(parquetDir)\n        assert(parquetDF.inputFiles.nonEmpty)\n        val readDataset = parquetDF.select(\"a\", \"b\", \"c\")\n\n        // native_datafusion and native_iceberg_compat fall back due to Arrow-rs not\n        // supporting other key lengths\n        checkAnswer(readDataset, inputDF)\n      }\n    }\n  }\n\n  protected override 
def sparkConf: SparkConf = {\n    val conf = super.sparkConf\n    conf.set(\"spark.hadoop.fs.file.impl\", classOf[DebugFilesystem].getName)\n    conf\n  }\n\n  protected override def createSparkSession: SparkSessionType = {\n    createSparkSessionWithExtensions(sparkConf)\n  }\n\n  override protected def test(testName: String, testTags: Tag*)(testFun: => Any)(implicit\n      pos: Position): Unit = {\n\n    Seq(\"true\", \"false\").foreach { cometEnabled =>\n      if (cometEnabled == \"true\") {\n        Seq(SCAN_NATIVE_DATAFUSION, SCAN_NATIVE_ICEBERG_COMPAT).foreach { scanImpl =>\n          super.test(testName + s\" Comet($cometEnabled)\" + s\" Scan($scanImpl)\", testTags: _*) {\n            withSQLConf(\n              CometConf.COMET_ENABLED.key -> cometEnabled,\n              CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n              SQLConf.ANSI_ENABLED.key -> \"false\",\n              CometConf.COMET_NATIVE_SCAN_IMPL.key -> scanImpl) {\n              testFun\n            }\n          }\n        }\n      } else {\n        super.test(testName + s\" Comet($cometEnabled)\", testTags: _*) {\n          withSQLConf(\n            CometConf.COMET_ENABLED.key -> cometEnabled,\n            CometConf.COMET_EXEC_ENABLED.key -> \"true\",\n            SQLConf.ANSI_ENABLED.key -> \"false\") {\n            testFun\n          }\n        }\n      }\n    }\n  }\n\n  protected override def beforeAll(): Unit = {\n    if (_spark == null) _spark = createSparkSession\n    super.beforeAll()\n  }\n\n  private var _spark: SparkSessionType = _\n\n  protected implicit override def spark: SparkSessionType = _spark\n\n  protected implicit override def sqlContext: SQLContext = _spark.sqlContext\n\n  /**\n   * Verify that the directory contains an encrypted parquet in encrypted footer mode by checking\n   * that every parquet part file in the directory has the magic string PARE, as defined in the\n   * spec:\n   * https://github.com/apache/parquet-format/blob/master/Encryption.md#54-encrypted-footer-mode\n   */\n  private def verifyParquetEncrypted(parquetDir: String): Unit = {\n    val parquetPartitionFiles = getListOfParquetFiles(new File(parquetDir))\n    assert(parquetPartitionFiles.size >= 1)\n    parquetPartitionFiles.foreach { parquetFile =>\n      val magicString = \"PARE\"\n      val magicStringLength = magicString.length()\n      val byteArray = new Array[Byte](magicStringLength)\n      val randomAccessFile = new RandomAccessFile(parquetFile, \"r\")\n      try {\n        // Check first 4 bytes\n        randomAccessFile.read(byteArray, 0, magicStringLength)\n        val firstMagicString = new String(byteArray, StandardCharsets.UTF_8)\n        assert(magicString == firstMagicString)\n\n        // Check last 4 bytes\n        randomAccessFile.seek(randomAccessFile.length() - magicStringLength)\n        randomAccessFile.read(byteArray, 0, magicStringLength)\n        val lastMagicString = new String(byteArray, StandardCharsets.UTF_8)\n        assert(magicString == lastMagicString)\n      } finally {\n        randomAccessFile.close()\n      }\n    }\n  }\n\n  /**\n   * Verify that the directory contains an encrypted parquet in plaintext footer mode by checking\n   * that every parquet part file in the directory has the magic string PAR1, as defined in the\n   * spec:\n   * https://github.com/apache/parquet-format/blob/master/Encryption.md#55-plaintext-footer-mode\n   */\n  private def verifyParquetPlaintextFooter(parquetDir: String): Unit = {\n    val 
parquetPartitionFiles = getListOfParquetFiles(new File(parquetDir))\n    assert(parquetPartitionFiles.size >= 1)\n    parquetPartitionFiles.foreach { parquetFile =>\n      val magicString = \"PAR1\"\n      val magicStringLength = magicString.length()\n      val byteArray = new Array[Byte](magicStringLength)\n      val randomAccessFile = new RandomAccessFile(parquetFile, \"r\")\n      try {\n        // Check first 4 bytes\n        randomAccessFile.read(byteArray, 0, magicStringLength)\n        val firstMagicString = new String(byteArray, StandardCharsets.UTF_8)\n        assert(magicString == firstMagicString)\n\n        // Check last 4 bytes\n        randomAccessFile.seek(randomAccessFile.length() - magicStringLength)\n        randomAccessFile.read(byteArray, 0, magicStringLength)\n        val lastMagicString = new String(byteArray, StandardCharsets.UTF_8)\n        assert(magicString == lastMagicString)\n      } finally {\n        randomAccessFile.close()\n      }\n    }\n  }\n\n  private def getListOfParquetFiles(dir: File): List[File] = {\n    dir.listFiles.filter(_.isFile).toList.filter { file =>\n      file.getName.endsWith(\"parquet\")\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/spark-3.4/org/apache/comet/shims/ShimCometTPCHQuerySuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.{SparkSession, SQLQueryTestHelper}\n\ntrait ShimCometTPCHQuerySuite extends SQLQueryTestHelper {\n  protected def getNormalizedQueryExecutionResult(\n      session: SparkSession,\n      sql: String): (String, Seq[String]) = {\n    getNormalizedResult(session, sql)\n  }\n}\n"
  },
  {
    "path": "spark/src/test/spark-3.4/org/apache/spark/sql/ShimCometTestBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.sql.catalyst.expressions.{Expression, MakeDecimal}\nimport org.apache.spark.sql.catalyst.plans.logical.LogicalPlan\n\ntrait ShimCometTestBase {\n  type SparkSessionType = SparkSession\n\n  def createSparkSessionWithExtensions(conf: SparkConf): SparkSessionType = {\n    SparkSession\n      .builder()\n      .config(conf)\n      .master(\"local[1]\")\n      .withExtensions(new org.apache.comet.CometSparkSessionExtensions)\n      .getOrCreate()\n  }\n\n  def datasetOfRows(spark: SparkSession, plan: LogicalPlan): DataFrame = {\n    Dataset.ofRows(spark, plan)\n  }\n\n  def getColumnFromExpression(expr: Expression): Column = {\n    new Column(expr)\n  }\n\n  def extractLogicalPlan(df: DataFrame): LogicalPlan = {\n    df.logicalPlan\n  }\n\n  def createMakeDecimalColumn(child: Expression, precision: Int, scale: Int): Column = {\n    new Column(MakeDecimal(child, precision, scale))\n  }\n}\n"
  },
  {
    "path": "spark/src/test/spark-3.5/org/apache/comet/shims/ShimCometTPCHQuerySuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.SQLQueryTestHelper\n\ntrait ShimCometTPCHQuerySuite extends SQLQueryTestHelper {\n  // no need to define getNormalizedQueryExecutionResult in this version of the shim because it exists\n  // in SQLQueryTestHelper in Spark 3.5\n}\n"
  },
  {
    "path": "spark/src/test/spark-3.5/org/apache/spark/sql/CometToPrettyStringSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport org.apache.spark.sql.catalyst.TableIdentifier\nimport org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute\nimport org.apache.spark.sql.catalyst.expressions.{Alias, ToPrettyString}\nimport org.apache.spark.sql.catalyst.plans.logical.Project\nimport org.apache.spark.sql.types.DataTypes\n\nimport org.apache.comet.CometFuzzTestBase\nimport org.apache.comet.expressions.{CometCast, CometEvalMode}\nimport org.apache.comet.serde.Compatible\n\nclass CometToPrettyStringSuite extends CometFuzzTestBase {\n\n  test(\"ToPrettyString\") {\n    val df = spark.read.parquet(filename)\n    df.createOrReplaceTempView(\"t1\")\n    val table = spark.sessionState.catalog.lookupRelation(TableIdentifier(\"t1\"))\n\n    for (field <- df.schema.fields) {\n      val col = field.name\n      val prettyExpr = Alias(ToPrettyString(UnresolvedAttribute(col)), s\"pretty_$col\")()\n      val plan = Project(Seq(prettyExpr), table)\n      val analyzed = spark.sessionState.analyzer.execute(plan)\n      val result: DataFrame = Dataset.ofRows(spark, analyzed)\n      val supportLevel = CometCast.isSupported(\n        field.dataType,\n        DataTypes.StringType,\n        Some(spark.sessionState.conf.sessionLocalTimeZone),\n        CometEvalMode.TRY)\n      supportLevel match {\n        case _: Compatible => checkSparkAnswerAndOperator(result)\n        case _ => checkSparkAnswer(result)\n      }\n    }\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/spark-3.5/org/apache/spark/sql/ShimCometTestBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.sql.catalyst.expressions.{Expression, MakeDecimal}\nimport org.apache.spark.sql.catalyst.plans.logical.LogicalPlan\n\ntrait ShimCometTestBase {\n  type SparkSessionType = SparkSession\n\n  def createSparkSessionWithExtensions(conf: SparkConf): SparkSessionType = {\n    SparkSession\n      .builder()\n      .config(conf)\n      .master(\"local[1]\")\n      .withExtensions(new org.apache.comet.CometSparkSessionExtensions)\n      .getOrCreate()\n  }\n\n  def datasetOfRows(spark: SparkSession, plan: LogicalPlan): DataFrame = {\n    Dataset.ofRows(spark, plan)\n  }\n\n  def getColumnFromExpression(expr: Expression): Column = {\n    new Column(expr)\n  }\n\n  def extractLogicalPlan(df: DataFrame): LogicalPlan = {\n    df.logicalPlan\n  }\n\n  def createMakeDecimalColumn(child: Expression, precision: Int, scale: Int): Column = {\n    new Column(MakeDecimal(child, precision, scale))\n  }\n\n}\n"
  },
  {
    "path": "spark/src/test/spark-3.x/org/apache/comet/iceberg/RESTCatalogHelper.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.iceberg\n\nimport java.io.File\nimport java.nio.file.Files\n\n/** Helper trait for setting up REST catalog with Jetty 9.4 (javax.servlet) for Spark 3.x */\ntrait RESTCatalogHelper {\n\n  /** Helper to set up REST catalog with embedded Jetty server (Spark 3.x / Jetty 9.4) */\n  def withRESTCatalog(f: (String, org.eclipse.jetty.server.Server, File) => Unit): Unit =\n    withRESTCatalog()(f)\n\n  /**\n   * Helper to set up REST catalog with optional credential vending.\n   *\n   * @param vendedCredentials\n   *   Storage credentials to inject into loadTable responses, simulating REST catalog credential\n   *   vending. When non-empty, these are added to every LoadTableResponse.config().\n   * @param warehouseLocation\n   *   Override the warehouse location (e.g., for S3). Defaults to a local temp directory.\n   */\n  def withRESTCatalog(\n      vendedCredentials: Map[String, String] = Map.empty,\n      warehouseLocation: Option[String] = None)(\n      f: (String, org.eclipse.jetty.server.Server, File) => Unit): Unit = {\n    import org.apache.iceberg.inmemory.InMemoryCatalog\n    import org.apache.iceberg.CatalogProperties\n    import org.apache.iceberg.rest.{RESTCatalogAdapter, RESTCatalogServlet}\n    import org.eclipse.jetty.server.Server\n    import org.eclipse.jetty.servlet.{ServletContextHandler, ServletHolder}\n    import org.eclipse.jetty.server.handler.gzip.GzipHandler\n\n    val warehouseDir = Files.createTempDirectory(\"comet-rest-catalog-test\").toFile\n    val effectiveWarehouse = warehouseLocation.getOrElse(warehouseDir.getAbsolutePath)\n\n    val backendCatalog = new InMemoryCatalog()\n    backendCatalog.initialize(\n      \"in-memory\",\n      java.util.Map.of(CatalogProperties.WAREHOUSE_LOCATION, effectiveWarehouse))\n\n    val adapter = new RESTCatalogAdapter(backendCatalog)\n    if (vendedCredentials.nonEmpty) {\n      import scala.jdk.CollectionConverters._\n      adapter.setVendedCredentials(vendedCredentials.asJava)\n    }\n    val servlet = new RESTCatalogServlet(adapter)\n\n    val servletContext = new ServletContextHandler(ServletContextHandler.NO_SESSIONS)\n    servletContext.setContextPath(\"/\")\n    val servletHolder = new ServletHolder(servlet.asInstanceOf[javax.servlet.Servlet])\n    servletHolder.setInitParameter(\"javax.ws.rs.Application\", \"ServiceListPublic\")\n    servletContext.addServlet(servletHolder, \"/*\")\n    servletContext.setVirtualHosts(null)\n    servletContext.setGzipHandler(new GzipHandler())\n\n    val httpServer = new Server(0) // random port\n    httpServer.setHandler(servletContext)\n\n    try {\n      httpServer.start()\n      val restUri = 
httpServer.getURI.toString.stripSuffix(\"/\")\n      f(restUri, httpServer, warehouseDir)\n    } finally {\n      try {\n        httpServer.stop()\n        httpServer.join()\n      } catch {\n        case _: Exception => // ignore cleanup errors\n      }\n      try {\n        backendCatalog.close()\n      } catch {\n        case _: Exception => // ignore cleanup errors\n      }\n      def deleteRecursively(file: File): Unit = {\n        if (file.isDirectory) {\n          file.listFiles().foreach(deleteRecursively)\n        }\n        file.delete()\n      }\n      deleteRecursively(warehouseDir)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/spark-3.x/org/apache/iceberg/rest/RESTCatalogServlet.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.iceberg.rest;\n\nimport java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.io.Reader;\nimport java.io.UncheckedIOException;\nimport java.util.Collections;\nimport java.util.Map;\nimport java.util.Optional;\nimport java.util.function.Consumer;\nimport java.util.function.Function;\nimport java.util.stream.Collectors;\nimport javax.servlet.http.HttpServlet;\nimport javax.servlet.http.HttpServletRequest;\nimport javax.servlet.http.HttpServletResponse;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.iceberg.exceptions.RESTException;\nimport org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;\nimport org.apache.iceberg.relocated.com.google.common.io.CharStreams;\nimport org.apache.iceberg.rest.RESTCatalogAdapter.HTTPMethod;\nimport org.apache.iceberg.rest.RESTCatalogAdapter.Route;\nimport org.apache.iceberg.rest.responses.ErrorResponse;\nimport org.apache.iceberg.util.Pair;\n\nimport static java.lang.String.format;\n\n/**\n * The RESTCatalogServlet provides a servlet implementation used in combination with a\n * RESTCatalogAdaptor to proxy the REST Spec to any Catalog implementation.\n * Modified version of Iceberg's org/apache/iceberg/rest/RESTCatalogServlet.java\n */\npublic class RESTCatalogServlet extends HttpServlet {\n  private static final Logger LOG = LoggerFactory.getLogger(RESTCatalogServlet.class);\n\n  private final RESTCatalogAdapter restCatalogAdapter;\n  private final Map<String, String> responseHeaders =\n      ImmutableMap.of(\"Content-Type\", \"application/json\");\n\n  public RESTCatalogServlet(RESTCatalogAdapter restCatalogAdapter) {\n    this.restCatalogAdapter = restCatalogAdapter;\n  }\n\n  @Override\n  protected void doGet(HttpServletRequest request, HttpServletResponse response)\n      throws IOException {\n    execute(ServletRequestContext.from(request), response);\n  }\n\n  @Override\n  protected void doHead(HttpServletRequest request, HttpServletResponse response)\n      throws IOException {\n    execute(ServletRequestContext.from(request), response);\n  }\n\n  @Override\n  protected void doPost(HttpServletRequest request, HttpServletResponse response)\n      throws IOException {\n    execute(ServletRequestContext.from(request), response);\n  }\n\n  @Override\n  protected void doDelete(HttpServletRequest request, HttpServletResponse response)\n      throws IOException {\n    execute(ServletRequestContext.from(request), response);\n  }\n\n  protected void execute(ServletRequestContext context, HttpServletResponse response)\n      throws IOException {\n    response.setStatus(HttpServletResponse.SC_OK);\n    
responseHeaders.forEach(response::setHeader);\n\n    if (context.error().isPresent()) {\n      response.setStatus(HttpServletResponse.SC_BAD_REQUEST);\n      RESTObjectMapper.mapper().writeValue(response.getWriter(), context.error().get());\n      return;\n    }\n\n    try {\n      Object responseBody =\n          restCatalogAdapter.execute(\n              context.method(),\n              context.path(),\n              context.queryParams(),\n              context.body(),\n              context.route().responseClass(),\n              context.headers(),\n              handle(response));\n\n      if (responseBody != null) {\n        RESTObjectMapper.mapper().writeValue(response.getWriter(), responseBody);\n      }\n    } catch (RESTException e) {\n      LOG.error(\"Error processing REST request\", e);\n      response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);\n    } catch (Exception e) {\n      LOG.error(\"Unexpected exception when processing REST request\", e);\n      response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);\n    }\n  }\n\n  protected Consumer<ErrorResponse> handle(HttpServletResponse response) {\n    return (errorResponse) -> {\n      response.setStatus(errorResponse.code());\n      try {\n        RESTObjectMapper.mapper().writeValue(response.getWriter(), errorResponse);\n      } catch (IOException e) {\n        throw new UncheckedIOException(e);\n      }\n    };\n  }\n\n  public static class ServletRequestContext {\n    private HTTPMethod method;\n    private Route route;\n    private String path;\n    private Map<String, String> headers;\n    private Map<String, String> queryParams;\n    private Object body;\n\n    private ErrorResponse errorResponse;\n\n    private ServletRequestContext(ErrorResponse errorResponse) {\n      this.errorResponse = errorResponse;\n    }\n\n    private ServletRequestContext(\n        HTTPMethod method,\n        Route route,\n        String path,\n        Map<String, String> headers,\n        Map<String, String> queryParams,\n        Object body) {\n      this.method = method;\n      this.route = route;\n      this.path = path;\n      this.headers = headers;\n      this.queryParams = queryParams;\n      this.body = body;\n    }\n\n    static ServletRequestContext from(HttpServletRequest request) throws IOException {\n      HTTPMethod method = HTTPMethod.valueOf(request.getMethod());\n      String path = request.getRequestURI().substring(1);\n      Pair<Route, Map<String, String>> routeContext = Route.from(method, path);\n\n      if (routeContext == null) {\n        return new ServletRequestContext(\n            ErrorResponse.builder()\n                .responseCode(400)\n                .withType(\"BadRequestException\")\n                .withMessage(format(\"No route for request: %s %s\", method, path))\n                .build());\n      }\n\n      Route route = routeContext.first();\n      Object requestBody = null;\n      if (route.requestClass() != null) {\n        requestBody =\n            RESTObjectMapper.mapper().readValue(request.getReader(), route.requestClass());\n      } else if (route == Route.TOKENS) {\n        try (Reader reader = new InputStreamReader(request.getInputStream())) {\n          requestBody = RESTUtil.decodeFormData(CharStreams.toString(reader));\n        }\n      }\n\n      Map<String, String> queryParams =\n          request.getParameterMap().entrySet().stream()\n              .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue()[0]));\n      Map<String, String> headers =\n          
Collections.list(request.getHeaderNames()).stream()\n              .collect(Collectors.toMap(Function.identity(), request::getHeader));\n\n      return new ServletRequestContext(method, route, path, headers, queryParams, requestBody);\n    }\n\n    public HTTPMethod method() {\n      return method;\n    }\n\n    public Route route() {\n      return route;\n    }\n\n    public String path() {\n      return path;\n    }\n\n    public Map<String, String> headers() {\n      return headers;\n    }\n\n    public Map<String, String> queryParams() {\n      return queryParams;\n    }\n\n    public Object body() {\n      return body;\n    }\n\n    public Optional<ErrorResponse> error() {\n      return Optional.ofNullable(errorResponse);\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/spark-3.x/org/apache/spark/sql/comet/shims/ShimCometTPCDSQuerySuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\ntrait ShimCometTPCDSQuerySuite {\n  // This is private in `TPCDSBase`.\n  val excludedTpcdsQueries: Set[String] = Set()\n}\n"
  },
  {
    "path": "spark/src/test/spark-4.0/org/apache/comet/exec/CometShuffle4_0Suite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.exec\n\nimport java.util.Collections\n\nimport org.apache.spark.sql.DataFrame\nimport org.apache.spark.sql.connector.catalog.{Column, Identifier, InMemoryCatalog, InMemoryTableCatalog}\nimport org.apache.spark.sql.connector.expressions.Expressions.identity\nimport org.apache.spark.sql.connector.expressions.Transform\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.types.{FloatType, LongType, StringType, TimestampType}\n\nclass CometShuffle4_0Suite extends CometColumnarShuffleSuite {\n  override protected val asyncShuffleEnable: Boolean = false\n\n  protected val adaptiveExecutionEnabled: Boolean = true\n\n  override def beforeAll(): Unit = {\n    super.beforeAll()\n    spark.conf.set(\"spark.sql.catalog.testcat\", classOf[InMemoryCatalog].getName)\n  }\n\n  override def afterAll(): Unit = {\n    spark.sessionState.conf.unsetConf(\"spark.sql.catalog.testcat\")\n    super.afterAll()\n  }\n\n  private val emptyProps: java.util.Map[String, String] = {\n    Collections.emptyMap[String, String]\n  }\n  private val items: String = \"items\"\n  private val itemsColumns: Array[Column] = Array(\n    Column.create(\"id\", LongType),\n    Column.create(\"name\", StringType),\n    Column.create(\"price\", FloatType),\n    Column.create(\"arrive_time\", TimestampType))\n\n  private val purchases: String = \"purchases\"\n  private val purchasesColumns: Array[Column] = Array(\n    Column.create(\"item_id\", LongType),\n    Column.create(\"price\", FloatType),\n    Column.create(\"time\", TimestampType))\n\n  protected def catalog: InMemoryCatalog = {\n    val catalog = spark.sessionState.catalogManager.catalog(\"testcat\")\n    catalog.asInstanceOf[InMemoryCatalog]\n  }\n\n  private def createTable(\n      table: String,\n      columns: Array[Column],\n      partitions: Array[Transform],\n      catalog: InMemoryTableCatalog = catalog): Unit = {\n    catalog.createTable(Identifier.of(Array(\"ns\"), table), columns, partitions, emptyProps)\n  }\n\n  private def selectWithMergeJoinHint(t1: String, t2: String): String = {\n    s\"SELECT /*+ MERGE($t1, $t2) */ \"\n  }\n\n  private def createJoinTestDF(\n      keys: Seq[(String, String)],\n      extraColumns: Seq[String] = Nil,\n      joinType: String = \"\"): DataFrame = {\n    val extraColList = if (extraColumns.isEmpty) \"\" else extraColumns.mkString(\", \", \", \", \"\")\n    sql(s\"\"\"\n           |${selectWithMergeJoinHint(\"i\", \"p\")}\n           |id, name, i.price as purchase_price, p.price as sale_price $extraColList\n           |FROM testcat.ns.$items i $joinType JOIN testcat.ns.$purchases p\n           |ON ${keys.map(k => s\"i.${k._1} = p.${k._2}\").mkString(\" AND 
\")}\n           |ORDER BY id, purchase_price, sale_price $extraColList\n           |\"\"\".stripMargin)\n  }\n\n  test(\"Fallback to Spark for unsupported partitioning\") {\n    val items_partitions = Array(identity(\"id\"))\n    createTable(items, itemsColumns, items_partitions)\n\n    sql(\n      s\"INSERT INTO testcat.ns.$items VALUES \" +\n        \"(1, 'aa', 40.0, cast('2020-01-01' as timestamp)), \" +\n        \"(3, 'bb', 10.0, cast('2020-01-01' as timestamp)), \" +\n        \"(4, 'cc', 15.5, cast('2020-02-01' as timestamp))\")\n\n    createTable(purchases, purchasesColumns, Array.empty)\n    sql(\n      s\"INSERT INTO testcat.ns.$purchases VALUES \" +\n        \"(1, 42.0, cast('2020-01-01' as timestamp)), \" +\n        \"(3, 19.5, cast('2020-02-01' as timestamp)), \" +\n        \"(5, 26.0, cast('2023-01-01' as timestamp)), \" +\n        \"(6, 50.0, cast('2023-02-01' as timestamp))\")\n\n    Seq(true, false).foreach { shuffle =>\n      withSQLConf(\n        SQLConf.V2_BUCKETING_ENABLED.key -> \"true\",\n        \"spark.sql.sources.v2.bucketing.shuffle.enabled\" -> shuffle.toString,\n        SQLConf.V2_BUCKETING_PUSH_PART_VALUES_ENABLED.key -> \"true\",\n        SQLConf.V2_BUCKETING_PARTIALLY_CLUSTERED_DISTRIBUTION_ENABLED.key -> \"true\") {\n        val df = createJoinTestDF(Seq(\"id\" -> \"item_id\"))\n        checkSparkAnswer(df)\n      }\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/spark-4.0/org/apache/comet/iceberg/RESTCatalogHelper.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.iceberg\n\nimport java.io.File\nimport java.nio.file.Files\n\n/** Helper trait for setting up REST catalog with Jetty 11 (jakarta.servlet) for Spark 4.0 */\ntrait RESTCatalogHelper {\n\n  /** Helper to set up REST catalog with embedded Jetty server (Spark 4.0 / Jetty 11) */\n  def withRESTCatalog(f: (String, org.eclipse.jetty.server.Server, File) => Unit): Unit =\n    withRESTCatalog()(f)\n\n  /**\n   * Helper to set up REST catalog with optional credential vending.\n   *\n   * @param vendedCredentials\n   *   Storage credentials to inject into loadTable responses, simulating REST catalog credential\n   *   vending. When non-empty, these are added to every LoadTableResponse.config().\n   * @param warehouseLocation\n   *   Override the warehouse location (e.g., for S3). Defaults to a local temp directory.\n   */\n  def withRESTCatalog(\n      vendedCredentials: Map[String, String] = Map.empty,\n      warehouseLocation: Option[String] = None)(\n      f: (String, org.eclipse.jetty.server.Server, File) => Unit): Unit = {\n    import org.apache.iceberg.inmemory.InMemoryCatalog\n    import org.apache.iceberg.CatalogProperties\n    import org.apache.iceberg.rest.{RESTCatalogAdapter, RESTCatalogServlet}\n    import org.eclipse.jetty.server.Server\n    import org.eclipse.jetty.servlet.{ServletContextHandler, ServletHolder}\n    import org.eclipse.jetty.server.handler.gzip.GzipHandler\n\n    val warehouseDir = Files.createTempDirectory(\"comet-rest-catalog-test\").toFile\n    val effectiveWarehouse = warehouseLocation.getOrElse(warehouseDir.getAbsolutePath)\n\n    val backendCatalog = new InMemoryCatalog()\n    backendCatalog.initialize(\n      \"in-memory\",\n      java.util.Map.of(CatalogProperties.WAREHOUSE_LOCATION, effectiveWarehouse))\n\n    val adapter = new RESTCatalogAdapter(backendCatalog)\n    if (vendedCredentials.nonEmpty) {\n      import scala.jdk.CollectionConverters._\n      adapter.setVendedCredentials(vendedCredentials.asJava)\n    }\n    val servlet = new RESTCatalogServlet(adapter)\n\n    val servletContext = new ServletContextHandler(ServletContextHandler.NO_SESSIONS)\n    servletContext.setContextPath(\"/\")\n    val servletHolder = new ServletHolder(servlet.asInstanceOf[jakarta.servlet.Servlet])\n    servletHolder.setInitParameter(\"jakarta.ws.rs.Application\", \"ServiceListPublic\")\n    servletContext.addServlet(servletHolder, \"/*\")\n    servletContext.setVirtualHosts(null)\n    servletContext.insertHandler(new GzipHandler())\n\n    val httpServer = new Server(0) // random port\n    httpServer.setHandler(servletContext)\n\n    try {\n      httpServer.start()\n      val restUri = 
httpServer.getURI.toString.stripSuffix(\"/\")\n      f(restUri, httpServer, warehouseDir)\n    } finally {\n      try {\n        httpServer.stop()\n        httpServer.join()\n      } catch {\n        case _: Exception => // ignore cleanup errors\n      }\n      try {\n        backendCatalog.close()\n      } catch {\n        case _: Exception => // ignore cleanup errors\n      }\n      def deleteRecursively(file: File): Unit = {\n        if (file.isDirectory) {\n          file.listFiles().foreach(deleteRecursively)\n        }\n        file.delete()\n      }\n      deleteRecursively(warehouseDir)\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/spark-4.0/org/apache/comet/shims/ShimCometTPCHQuerySuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.comet.shims\n\nimport org.apache.spark.sql.SQLQueryTestHelper\n\ntrait ShimCometTPCHQuerySuite extends SQLQueryTestHelper {}\n"
  },
  {
    "path": "spark/src/test/spark-4.0/org/apache/iceberg/rest/RESTCatalogServlet.java",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.iceberg.rest;\n\nimport java.io.IOException;\nimport java.io.InputStreamReader;\nimport java.io.Reader;\nimport java.io.UncheckedIOException;\nimport java.util.Collections;\nimport java.util.Map;\nimport java.util.Optional;\nimport java.util.function.Consumer;\nimport java.util.function.Function;\nimport java.util.stream.Collectors;\nimport jakarta.servlet.http.HttpServlet;\nimport jakarta.servlet.http.HttpServletRequest;\nimport jakarta.servlet.http.HttpServletResponse;\n\nimport org.slf4j.Logger;\nimport org.slf4j.LoggerFactory;\n\nimport org.apache.iceberg.exceptions.RESTException;\nimport org.apache.iceberg.relocated.com.google.common.collect.ImmutableMap;\nimport org.apache.iceberg.relocated.com.google.common.io.CharStreams;\nimport org.apache.iceberg.rest.RESTCatalogAdapter.HTTPMethod;\nimport org.apache.iceberg.rest.RESTCatalogAdapter.Route;\nimport org.apache.iceberg.rest.responses.ErrorResponse;\nimport org.apache.iceberg.util.Pair;\n\nimport static java.lang.String.format;\n\n/**\n * The RESTCatalogServlet provides a servlet implementation used in combination with a\n * RESTCatalogAdaptor to proxy the REST Spec to any Catalog implementation.\n * Modified version of Iceberg's org/apache/iceberg/rest/RESTCatalogServlet.java\n */\npublic class RESTCatalogServlet extends HttpServlet {\n  private static final Logger LOG = LoggerFactory.getLogger(RESTCatalogServlet.class);\n\n  private final RESTCatalogAdapter restCatalogAdapter;\n  private final Map<String, String> responseHeaders =\n      ImmutableMap.of(\"Content-Type\", \"application/json\");\n\n  public RESTCatalogServlet(RESTCatalogAdapter restCatalogAdapter) {\n    this.restCatalogAdapter = restCatalogAdapter;\n  }\n\n  @Override\n  protected void doGet(HttpServletRequest request, HttpServletResponse response)\n      throws IOException {\n    execute(ServletRequestContext.from(request), response);\n  }\n\n  @Override\n  protected void doHead(HttpServletRequest request, HttpServletResponse response)\n      throws IOException {\n    execute(ServletRequestContext.from(request), response);\n  }\n\n  @Override\n  protected void doPost(HttpServletRequest request, HttpServletResponse response)\n      throws IOException {\n    execute(ServletRequestContext.from(request), response);\n  }\n\n  @Override\n  protected void doDelete(HttpServletRequest request, HttpServletResponse response)\n      throws IOException {\n    execute(ServletRequestContext.from(request), response);\n  }\n\n  protected void execute(ServletRequestContext context, HttpServletResponse response)\n      throws IOException {\n    response.setStatus(HttpServletResponse.SC_OK);\n    
responseHeaders.forEach(response::setHeader);\n\n    if (context.error().isPresent()) {\n      response.setStatus(HttpServletResponse.SC_BAD_REQUEST);\n      RESTObjectMapper.mapper().writeValue(response.getWriter(), context.error().get());\n      return;\n    }\n\n    try {\n      Object responseBody =\n          restCatalogAdapter.execute(\n              context.method(),\n              context.path(),\n              context.queryParams(),\n              context.body(),\n              context.route().responseClass(),\n              context.headers(),\n              handle(response));\n\n      if (responseBody != null) {\n        RESTObjectMapper.mapper().writeValue(response.getWriter(), responseBody);\n      }\n    } catch (RESTException e) {\n      LOG.error(\"Error processing REST request\", e);\n      response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);\n    } catch (Exception e) {\n      LOG.error(\"Unexpected exception when processing REST request\", e);\n      response.setStatus(HttpServletResponse.SC_INTERNAL_SERVER_ERROR);\n    }\n  }\n\n  protected Consumer<ErrorResponse> handle(HttpServletResponse response) {\n    return (errorResponse) -> {\n      response.setStatus(errorResponse.code());\n      try {\n        RESTObjectMapper.mapper().writeValue(response.getWriter(), errorResponse);\n      } catch (IOException e) {\n        throw new UncheckedIOException(e);\n      }\n    };\n  }\n\n  public static class ServletRequestContext {\n    private HTTPMethod method;\n    private Route route;\n    private String path;\n    private Map<String, String> headers;\n    private Map<String, String> queryParams;\n    private Object body;\n\n    private ErrorResponse errorResponse;\n\n    private ServletRequestContext(ErrorResponse errorResponse) {\n      this.errorResponse = errorResponse;\n    }\n\n    private ServletRequestContext(\n        HTTPMethod method,\n        Route route,\n        String path,\n        Map<String, String> headers,\n        Map<String, String> queryParams,\n        Object body) {\n      this.method = method;\n      this.route = route;\n      this.path = path;\n      this.headers = headers;\n      this.queryParams = queryParams;\n      this.body = body;\n    }\n\n    static ServletRequestContext from(HttpServletRequest request) throws IOException {\n      HTTPMethod method = HTTPMethod.valueOf(request.getMethod());\n      String path = request.getRequestURI().substring(1);\n      Pair<Route, Map<String, String>> routeContext = Route.from(method, path);\n\n      if (routeContext == null) {\n        return new ServletRequestContext(\n            ErrorResponse.builder()\n                .responseCode(400)\n                .withType(\"BadRequestException\")\n                .withMessage(format(\"No route for request: %s %s\", method, path))\n                .build());\n      }\n\n      Route route = routeContext.first();\n      Object requestBody = null;\n      if (route.requestClass() != null) {\n        requestBody =\n            RESTObjectMapper.mapper().readValue(request.getReader(), route.requestClass());\n      } else if (route == Route.TOKENS) {\n        try (Reader reader = new InputStreamReader(request.getInputStream())) {\n          requestBody = RESTUtil.decodeFormData(CharStreams.toString(reader));\n        }\n      }\n\n      Map<String, String> queryParams =\n          request.getParameterMap().entrySet().stream()\n              .collect(Collectors.toMap(Map.Entry::getKey, e -> e.getValue()[0]));\n      Map<String, String> headers =\n          
Collections.list(request.getHeaderNames()).stream()\n              .collect(Collectors.toMap(Function.identity(), request::getHeader));\n\n      return new ServletRequestContext(method, route, path, headers, queryParams, requestBody);\n    }\n\n    public HTTPMethod method() {\n      return method;\n    }\n\n    public Route route() {\n      return route;\n    }\n\n    public String path() {\n      return path;\n    }\n\n    public Map<String, String> headers() {\n      return headers;\n    }\n\n    public Map<String, String> queryParams() {\n      return queryParams;\n    }\n\n    public Object body() {\n      return body;\n    }\n\n    public Optional<ErrorResponse> error() {\n      return Optional.ofNullable(errorResponse);\n    }\n  }\n}\n"
  },
  {
    "path": "spark/src/test/spark-4.0/org/apache/spark/comet/shims/ShimTestUtils.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.comet.shims\n\nimport java.io.File\n\nobject ShimTestUtils {\n  def listDirectory(path: File): Array[String] =\n    org.apache.spark.TestUtils.listDirectory(path)\n}\n"
  },
  {
    "path": "spark/src/test/spark-4.0/org/apache/spark/sql/CometToPrettyStringSuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport org.apache.spark.sql.catalyst.TableIdentifier\nimport org.apache.spark.sql.catalyst.analysis.UnresolvedAttribute\nimport org.apache.spark.sql.catalyst.expressions.{Alias, ToPrettyString}\nimport org.apache.spark.sql.catalyst.plans.logical.Project\nimport org.apache.spark.sql.classic.Dataset\nimport org.apache.spark.sql.internal.SQLConf\nimport org.apache.spark.sql.internal.SQLConf.BinaryOutputStyle\nimport org.apache.spark.sql.types.DataTypes\n\nimport org.apache.comet.CometFuzzTestBase\nimport org.apache.comet.expressions.{CometCast, CometEvalMode}\nimport org.apache.comet.serde.Compatible\n\nclass CometToPrettyStringSuite extends CometFuzzTestBase {\n\n  test(\"ToPrettyString\") {\n    val style = List(\n      BinaryOutputStyle.UTF8,\n      BinaryOutputStyle.BASIC,\n      BinaryOutputStyle.BASE64,\n      BinaryOutputStyle.HEX,\n      BinaryOutputStyle.HEX_DISCRETE)\n    style.foreach(s =>\n      withSQLConf(SQLConf.BINARY_OUTPUT_STYLE.key -> s.toString) {\n        val df = spark.read.parquet(filename)\n        df.createOrReplaceTempView(\"t1\")\n        val table = spark.sessionState.catalog.lookupRelation(TableIdentifier(\"t1\"))\n\n        for (field <- df.schema.fields) {\n          val col = field.name\n          val prettyExpr = Alias(ToPrettyString(UnresolvedAttribute(col)), s\"pretty_$col\")()\n          val plan = Project(Seq(prettyExpr), table)\n          val analyzed = spark.sessionState.analyzer.execute(plan)\n          val result: DataFrame = Dataset.ofRows(spark, analyzed)\n          val supportLevel = CometCast.isSupported(\n            field.dataType,\n            DataTypes.StringType,\n            Some(spark.sessionState.conf.sessionLocalTimeZone),\n            CometEvalMode.TRY)\n          supportLevel match {\n            case _: Compatible => checkSparkAnswerAndOperator(result)\n            case _ => checkSparkAnswer(result)\n          }\n        }\n      })\n  }\n}\n"
  },
  {
    "path": "spark/src/test/spark-4.0/org/apache/spark/sql/ShimCometTestBase.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql\n\nimport org.apache.spark.SparkConf\nimport org.apache.spark.sql.catalyst.expressions.{Expression, MakeDecimal}\nimport org.apache.spark.sql.catalyst.plans.logical.LogicalPlan\nimport org.apache.spark.sql.classic.{Dataset, ExpressionColumnNode, SparkSession}\n\ntrait ShimCometTestBase {\n  type SparkSessionType = SparkSession\n\n  def createSparkSessionWithExtensions(conf: SparkConf): SparkSessionType = {\n    SparkSession\n      .builder()\n      .config(conf)\n      .master(\"local[1]\")\n      .withExtensions(new org.apache.comet.CometSparkSessionExtensions)\n      .getOrCreate()\n  }\n\n  def datasetOfRows(spark: SparkSession, plan: LogicalPlan): DataFrame = {\n    Dataset.ofRows(spark, plan)\n  }\n\n  def getColumnFromExpression(expr: Expression): Column = {\n    new Column(ExpressionColumnNode.apply(expr))\n  }\n\n  def extractLogicalPlan(df: DataFrame): LogicalPlan = {\n    df.queryExecution.analyzed\n  }\n\n  def createMakeDecimalColumn(child: Expression, precision: Int, scale: Int): Column = {\n    new Column(ExpressionColumnNode.apply(MakeDecimal(child, precision, scale, true)))\n  }\n}\n"
  },
  {
    "path": "spark/src/test/spark-4.0/org/apache/spark/sql/comet/shims/ShimCometTPCDSQuerySuite.scala",
    "content": "/*\n * Licensed to the Apache Software Foundation (ASF) under one\n * or more contributor license agreements.  See the NOTICE file\n * distributed with this work for additional information\n * regarding copyright ownership.  The ASF licenses this file\n * to you under the Apache License, Version 2.0 (the\n * \"License\"); you may not use this file except in compliance\n * with the License.  You may obtain a copy of the License at\n *\n *   http://www.apache.org/licenses/LICENSE-2.0\n *\n * Unless required by applicable law or agreed to in writing,\n * software distributed under the License is distributed on an\n * \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\n * KIND, either express or implied.  See the License for the\n * specific language governing permissions and limitations\n * under the License.\n */\n\npackage org.apache.spark.sql.comet.shims\n\ntrait ShimCometTPCDSQuerySuite {}\n"
  },
  {
    "path": "spark-integration/pom.xml",
    "content": "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\n\n<!--\nLicensed to the Apache Software Foundation (ASF) under one\nor more contributor license agreements.  See the NOTICE file\ndistributed with this work for additional information\nregarding copyright ownership.  The ASF licenses this file\nto you under the Apache License, Version 2.0 (the\n\"License\"); you may not use this file except in compliance\nwith the License.  You may obtain a copy of the License at\n\n  http://www.apache.org/licenses/LICENSE-2.0\n\nUnless required by applicable law or agreed to in writing,\nsoftware distributed under the License is distributed on an\n\"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY\nKIND, either express or implied.  See the License for the\nspecific language governing permissions and limitations\nunder the License.\n-->\n\n\n<project xmlns=\"http://maven.apache.org/POM/4.0.0\" xmlns:xsi=\"http://www.w3.org/2001/XMLSchema-instance\"\n         xsi:schemaLocation=\"http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd\">\n    <modelVersion>4.0.0</modelVersion>\n    <parent>\n        <groupId>org.apache.datafusion</groupId>\n        <artifactId>comet-parent-spark${spark.version.short}_${scala.binary.version}</artifactId>\n        <version>0.15.0-SNAPSHOT</version>\n        <relativePath>../pom.xml</relativePath>\n    </parent>\n\n    <artifactId>comet-spark-integration-spark${spark.version.short}_${scala.binary.version}</artifactId>\n    <name>comet-spark-integration</name>\n    <packaging>pom</packaging>\n\n  <properties>\n    <!-- Reverse default (skip installation), and then enable only for child modules -->\n    <maven.deploy.skip>false</maven.deploy.skip>\n  </properties>\n\n    <dependencies>\n        <dependency>\n            <groupId>org.apache.datafusion</groupId>\n            <artifactId>comet-spark-spark${spark.version.short}_${scala.binary.version}</artifactId>\n            <version>${project.version}</version>\n            <exclusions>\n              <!-- This is shaded into the jar -->\n              <exclusion>\n                <groupId>org.apache.datafusion</groupId>\n                <artifactId>comet-common-spark${spark.version.short}_${scala.binary.version}</artifactId>\n              </exclusion>\n            </exclusions>\n        </dependency>\n    </dependencies>\n\n    <build>\n        <plugins>\n            <plugin>\n                <groupId>org.apache.maven.plugins</groupId>\n                <artifactId>maven-dependency-plugin</artifactId>\n                <executions>\n                    <!-- create a maven pom property that has all of our dependencies.\n                       below in the integration-test phase we'll pass this list\n                       of paths to our jar checker script.\n                    -->\n                    <execution>\n                        <id>put-client-artifacts-in-a-property</id>\n                        <phase>pre-integration-test</phase>\n                        <goals>\n                            <goal>build-classpath</goal>\n                        </goals>\n                        <configuration>\n                            <excludeTransitive>true</excludeTransitive>\n                            <pathSeparator>;</pathSeparator>\n                            <outputProperty>comet-artifacts</outputProperty>\n                        </configuration>\n                    </execution>\n                </executions>\n            </plugin>\n            <plugin>\n                
<groupId>org.codehaus.mojo</groupId>\n                <artifactId>exec-maven-plugin</artifactId>\n                <executions>\n                    <execution>\n                        <id>check-jar-contents</id>\n                        <phase>integration-test</phase>\n                        <goals>\n                            <goal>exec</goal>\n                        </goals>\n                        <configuration>\n                            <executable>bash</executable>\n                            <workingDirectory>${project.build.testOutputDirectory}</workingDirectory>\n                            <requiresOnline>false</requiresOnline>\n                            <arguments>\n                                <argument>${project.basedir}/../dev/ensure-jars-have-correct-contents.sh</argument>\n                                <argument>${comet-artifacts}</argument>\n                            </arguments>\n                        </configuration>\n                    </execution>\n                </executions>\n            </plugin>\n            <plugin>\n                <groupId>org.apache.maven.plugins</groupId>\n                <artifactId>maven-install-plugin</artifactId>\n                <configuration>\n                    <skip>true</skip>\n                </configuration>\n            </plugin>\n        </plugins>\n    </build>\n\n</project>\n"
  }
]